Welcome, stranger! Take a glimpse into my brain and the things that excite me.

👨🏻‍💻 I’m an AI Engineer currently at Nutanix, where I work on features in the Nutanix Enterprise AI platform. I also designed and deployed an internal agentic code review system end to end, covering model integration, Kubernetes deployment, observability pipelines, and the creation of a curated evaluation dataset to benchmark model performance. The system has reviewed over 25,000 pull requests across the organization so far.
🔬 I have a strong passion for learning and continuously absorbing new advancements in AI, Machine Learning, and Software Engineering.
📚 Recently, I’ve been spending my weekends reading about reinforcement learning techniques, particularly reinforcement learning with verifiable rewards, and experimenting with applying these methods to code review agents via better dataset curation and objective design.
Currently reading:
- Reinforcement Learning with Verifiable Rewards Implicitly Incentivizes Correct Reasoning in Base LLMs
- CodeJudge: Evaluating Code Generation with Large Language Models
- From Verifiable Dot to Reward Chain: Harnessing Verifiable Reference-based Rewards for Reinforcement Learning of Open-ended Generation
🏃 I am also interested in running. My full marathon personal best is 3:49. Check me out on Strava.
Some cherry-picked stuff I am proud of
Publications
- Kristen Pereira, Neelabh Sinha, Rajat Ghosh, Debojyoti Dutta.
CR-Bench: Evaluating the Real-World Utility of AI Code Review Agents.
Accepted to the ICLR 2026 Workshop on Agents in the Wild: Safety, Security, and Beyond.
(Paper on one of the public datasets I created for evaluating code-review agents @ Nutanix; links to the dataset and paper will be added here soon.)
🤖 Open Source Contributions
(Still early in my open source journey, but every contribution is a step toward my goal of becoming a more active contributor.)
I have experience contributing to Hugging Face Transformers, the go-to open-source library for state-of-the-art machine learning models (since you have wandered onto my profile, you probably already knew that). I contributed to the vision models in the library.
Google adk-python: Fixed an issue for Windows users where sub-agents weren’t spinning up for them.
📜 Reimplementing and Reproducing Papers
I love research at the intersection of vision and language, so I have dabbled in reproducing results from both domains that interested me.
Check out my implementation of Neural Style Transfer from scratch (this was way before DALL-E entered the scene; I love going back to basics and understanding stuff from first principles). Check out my Colab notebook, where you can record a video from your webcam and apply the style of your choice in real time! Here are some of the results:

Check out my implementation of Cyclic Precision Training. I integrated it into Meta’s Fairseq library, making it easier to fine-tune and train using this technique. I tested the implementation by doing both post-training quantization and quantization-aware fine-tuning on a RoBERTa model with mixed precision. Check out the code here.
My reproduction of Token Compression Retrieval Augmented Large Language Model for Inference Cost Reduction using LlamaIndex. Link.
If you love tearing things apart just to see how they work and enjoy the nitty-gritty of ML algorithms, you might get a kick out of my implementations of popular scikit-learn models from scratch using NumPy. Check it out here.
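To give a taste of what "from scratch" means here, this is a minimal illustrative sketch (not code from the repo) of ordinary least squares written in the scikit-learn `fit`/`predict` style, using nothing but NumPy:

```python
import numpy as np

class LinearRegressionScratch:
    """Ordinary least squares, scikit-learn style, in plain NumPy."""

    def fit(self, X, y):
        # Append a bias column so the intercept is learned jointly,
        # then solve the least-squares problem with np.linalg.lstsq.
        Xb = np.c_[np.ones(len(X)), X]
        self.coef_, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        return self

    def predict(self, X):
        Xb = np.c_[np.ones(len(X)), X]
        return Xb @ self.coef_

# Usage: recover y = 2x + 1 from three points.
model = LinearRegressionScratch().fit(
    np.array([[0.0], [1.0], [2.0]]), np.array([1.0, 3.0, 5.0])
)
print(model.predict(np.array([[3.0]])))
```

Mirroring the `fit`/`predict` interface keeps these toy models drop-in comparable with their scikit-learn counterparts.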
📚 Teaching and Community Contributions
Teaching Assistant for Computer Vision at Georgia Tech for the past four semesters, mentoring and guiding students in the field.
I enjoy sharing knowledge and insights through my Medium blog, where I have published several articles on AI and ML, with more content in the works. Stay tuned!
