Building AI systems that go beyond the notebook — live multi-agent pipelines, full-stack ML applications, and LLM infrastructure that actually ships to production.
I am a second-year B.Tech student in Data Science & AI at IIT Bhilai (CGPA 9.48), but I think of myself less as a student and more as a builder. The classroom is where theory lives — the real learning happens when I am staring at a broken deployment at 2am.
My work lives at the intersection of LLM engineering, production ML, and systems architecture. I do not stop at the notebook. I containerize, deploy, monitor, and ship. Cascade AI is live on Firebase. OptiQuant runs on AWS EC2. The gap between prototype and production is where I am most at home.
When not building, I organise AI workshops for 200+ students at IIT Bhilai, run cold outreach for placement drives, and think about why model explainability tools can quietly create false confidence in a drifting system.
"Every system tells a story. These are mine."
A production-deployed backend where an LLM planner decomposes user tasks into a coordinated pipeline of specialised agents — web search, scraper, auditor, router, and formatter — executed in sequence via Firebase Cloud Functions.
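To make the shape of that pipeline concrete, here is a minimal Python sketch of the planner-to-agent dispatch pattern. The agent names mirror the list above; everything else (the registry, `run_pipeline`, the stub behaviours) is illustrative, not the actual Cascade AI code, which runs inside Firebase Cloud Functions.

```python
from typing import Callable

# Stub agents keyed by name; each takes the user task plus shared context
# and returns an updated context. Real agents would call live services.
AGENTS: dict[str, Callable[[str, dict], dict]] = {
    "web_search": lambda task, ctx: {**ctx, "results": f"search:{task}"},
    "scraper":    lambda task, ctx: {**ctx, "pages": f"scraped:{ctx['results']}"},
    "auditor":    lambda task, ctx: {**ctx, "verified": True},
    "router":     lambda task, ctx: {**ctx, "route": "report"},
    "formatter":  lambda task, ctx: {**ctx, "output": f"report for: {task}"},
}

def run_pipeline(task: str, plan: list[str]) -> dict:
    """Execute the planner's step list in order, threading shared context."""
    context: dict = {}
    for step in plan:
        context = AGENTS[step](task, context)
    return context

# In production the step list comes from the LLM planner; hard-coded here.
plan = ["web_search", "scraper", "auditor", "router", "formatter"]
print(run_pipeline("compare GPU cloud pricing", plan))
```

The point of the pattern is that the planner only emits a list of step names; the orchestration layer stays dumb and auditable, which is what lets each agent be swapped or retried independently.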
An end-to-end ML application from feature engineering and ensemble model training to a live Streamlit interface for generating and explaining stock alpha signals. Walk-forward backtesting prevents data leakage. SHAP explainability makes every prediction inspectable in the UI.
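A hedged sketch of the explainability half: wiring per-prediction SHAP attributions into a Streamlit view. The toy model and feature names here are stand-ins for OptiQuant's actual ensemble and feature set.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import shap
import streamlit as st
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-ins for the real feature matrix and trained ensemble.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 3)),
                 columns=["momentum_20d", "vol_10d", "rsi_14"])
y = 0.5 * X["momentum_20d"] - 0.3 * X["vol_10d"] + rng.normal(0, 0.1, 500)
model = GradientBoostingRegressor().fit(X, y)

# SHAP auto-selects the fast tree-path algorithm for tree ensembles.
explainer = shap.Explainer(model, X)

row = st.slider("Prediction to explain", 0, len(X) - 1, 0)
st.write(f"Predicted alpha signal: {model.predict(X.iloc[[row]])[0]:.4f}")

# Waterfall plot of the chosen row's feature attributions.
shap.plots.waterfall(explainer(X.iloc[[row]])[0], show=False)
st.pyplot(plt.gcf())
plt.clf()
```

Saved as `app.py`, this runs with `streamlit run app.py`; the slider is what makes every individual prediction inspectable rather than just a global feature-importance chart.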
"Thinking out loud so the next person doesn't have to start from zero."
Model explainability tools are supposed to build trust. But when your feature distribution shifts and the model hasn't been retrained, SHAP attributions can confidently mislead you into thinking everything is fine.
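One way to catch that failure mode is to gate explanations behind a per-feature drift test. A minimal sketch, with the helper name and threshold being my assumptions rather than any library's API:

```python
import pandas as pd
from scipy.stats import ks_2samp

def drifted_features(train: pd.DataFrame, live: pd.DataFrame,
                     p_threshold: float = 0.01) -> list[str]:
    """Two-sample KS test per feature; a low p-value flags a distribution
    that has shifted relative to what the model was trained on."""
    return [col for col in train.columns
            if ks_2samp(train[col], live[col]).pvalue < p_threshold]

# If this list is non-empty, SHAP is attributing over a regime the model
# never saw: retrain, or at least distrust the explanation, first.
```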
Everyone talks about "deploying to production" like it's a single step. It isn't. Here's what building Cascade AI taught me about the real gap between a working notebook and a system that earns the word "production."
Every quant strategy looks great on historical data. The dirty secret is that most backtests are optimised on the same data they're evaluated on. Walk-forward validation is the closest thing to an honest test — here's how I implemented it in OptiQuant.
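The core loop is small. This is a minimal sketch of the idea rather than the OptiQuant implementation: fit only on the past, score only on the window that immediately follows, then roll forward.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

def walk_forward(X: np.ndarray, y: np.ndarray,
                 train_size: int = 252, test_size: int = 21) -> list[float]:
    """Expanding-window walk-forward: roughly one trading year to train,
    one month to test, then advance by the test window."""
    scores, start = [], train_size
    while start + test_size <= len(X):
        model = GradientBoostingRegressor().fit(X[:start], y[:start])
        preds = model.predict(X[start:start + test_size])
        scores.append(r2_score(y[start:start + test_size], preds))
        start += test_size
    return scores

# Every score comes from data the model never touched during fitting,
# which is the honesty the line above is pointing at.
```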
Actively seeking AI engineering internships — LLM product teams, ML infrastructure, or applied ML at early-stage startups. If you are building something that matters, let's talk.