Who is Amit Divekar (amitdevx) in 2026? Full-Stack Engineer Building AI-First Systems

Amit Divekar


Location: Nashik, India
Handle: @amitdevx
Bio: Full-Stack Developer building AI-first systems with Next.js, TypeScript, DevOps, Cloud, and LLM Integration. I build, break, and fix things for fun.

You've probably come across @amitdevx on GitHub, visited amitdevx.tech, or read something I wrote about AI tools and cloud architecture. This is my 2026 update. A lot has changed since last year, mainly because the entire industry shifted how we think about building systems.

Quick Profile: 2026 Edition

I'm still a full-stack developer from Nashik focused on systems that scale. But "full-stack" means something different now than it did in 2024.

Two years ago, full-stack meant: frontend, backend, DevOps. That's incomplete in 2026. Now you also need to understand AI infrastructure, LLM costs, agentic systems, and how to integrate them without breaking everything.

The engineers who get this transition - who understand classical infrastructure AND can build with AI properly - are the ones solving actual problems right now.

What Actually Changed This Year

Looking back at 2024-2025, AI integration was an afterthought. You built your system, then figured out where AI could add features.

2026 is different. Now you design systems with AI as a foundational layer. Every architectural decision has to account for LLM costs, failure modes when APIs are down, token economics, and vector database design.

I shifted my thinking around this almost a year ago, and it changed everything about how I approach problems. The old patterns don't work anymore. Systems that were fast before get expensive when you add LLM calls. Code that was reliable breaks when the AI API is unreliable.

The Real 2026 Skill Set

If you ask developers what full-stack means in 2026, most will say frontend, backend, DevOps. Those skills still matter, but there's a new layer that nobody talks about enough: understanding how to build systems where AI is a first-class citizen, not an add-on.

Here's what I actually spend time on now:

AI-Aware Architecture: How do you design a system so LLM calls are efficient, cached properly, and don't tank your costs? How do you build fallback chains so the system works when the AI API fails? These are architectural problems, not just API integration problems.
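To make the caching-plus-fallback idea concrete, here's a minimal sketch in plain Python. Everything in it is hypothetical: `call_primary`-style provider functions stand in for real API clients, and the cache is an in-memory dict rather than anything production-grade.

```python
import hashlib

# Hypothetical sketch: cache LLM responses and fall back across providers.
# The provider callables stand in for real API clients.

_cache: dict[str, str] = {}

def _key(prompt: str) -> str:
    # Hash the prompt so the cache key stays small and uniform.
    return hashlib.sha256(prompt.encode()).hexdigest()

def ask(prompt: str, providers) -> str:
    """Return a cached answer if we have one; otherwise try providers in order."""
    key = _key(prompt)
    if key in _cache:
        return _cache[key]          # cache hit: no API call, no cost
    last_error = None
    for call in providers:          # fallback chain: primary first, then backups
        try:
            answer = call(prompt)
            _cache[key] = answer
            return answer
        except Exception as err:    # provider down or rate-limited: try the next
            last_error = err
    raise RuntimeError("all providers failed") from last_error
```

The point of the shape, not the code itself: repeated questions never touch the API, and a dead primary provider degrades to a backup instead of an outage.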

Production AI Patterns: LangGraph for complex workflows, proper prompt versioning, cost monitoring and optimization, tracking what the AI system actually does in production versus what you expected.
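Prompt versioning sounds abstract until you see how little it takes. The sketch below is a made-up registry (names, fields, and the word-count "token" proxy are all illustrative assumptions) that attributes usage to a specific prompt version, so you can compare cost and behavior across versions in production.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of prompt versioning with per-version usage tracking.
# Word count stands in for a real tokenizer.

@dataclass
class PromptRegistry:
    prompts: dict = field(default_factory=dict)   # (name, version) -> template
    usage: dict = field(default_factory=dict)     # (name, version) -> (calls, words)

    def register(self, name: str, version: str, template: str) -> None:
        self.prompts[(name, version)] = template

    def render(self, name: str, version: str, **vars) -> str:
        # Record usage so cost can be attributed to a specific prompt version.
        text = self.prompts[(name, version)].format(**vars)
        calls, words = self.usage.get((name, version), (0, 0))
        self.usage[(name, version)] = (calls + 1, words + len(text.split()))
        return text
```

Register a "v2" alongside "v1" and you can ship both, split traffic, and see which version is actually cheaper and better, instead of editing prompts in place and flying blind.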

Infrastructure for AI: How to run inference efficiently, how vector databases actually perform at scale, managing GPU resources, designing systems that work with both local models and cloud APIs.

It's not enough to know Python and call an API anymore. You need to understand the whole picture.

Connect With Me

I'm on all the usual platforms if you want to chat about engineering, AI systems, or just building things.

GitHub: https://github.com/amitdevx - Where the actual code lives
LinkedIn: https://www.linkedin.com/in/divekar-amit/ - Professional updates and discussions
X (Twitter): https://x.com/amitdevx_ - Real-time thoughts about engineering and AI
Medium: https://medium.com/@amitdivekar - Longer technical pieces
Instagram: https://instagram.com/amitdevx - Behind-the-scenes development and learning
Website: https://amitdevx.tech - Portfolio, projects, and technical blog
Kaggle: https://www.kaggle.com/divekaramit - Data science experiments
Email: Amitdivekar289@gmail.com - For anything important

Full-Stack Engineering in 2026

The stack actually expanded. What used to be a pretty clear three-layer system now has multiple dimensions:

Frontend side: React and Next.js are standard. TypeScript too. The difference now is building UIs that work well with AI - streaming responses, showing reasoning traces, handling uncertainty. That's a different design problem than the traditional DOM stuff.

Backend side: Python is still the standard because of AI/ML infrastructure. But now you're managing LLM endpoints, vector database queries, prompt versioning, and designing APIs that work with streaming responses. FastAPI is better than Flask for this kind of thing.
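Streaming is mostly a change of shape: the endpoint yields chunks instead of returning one string. Here's a minimal generator-based sketch; `fake_llm_stream` is an invented stand-in for a real streaming API client, and the `data:` framing mimics server-sent events (FastAPI can wrap a generator like this in a streaming response).

```python
# Sketch of backend token streaming: yield chunks as they arrive instead of
# returning one big string. fake_llm_stream is a made-up stand-in for a real
# streaming LLM client.

def fake_llm_stream(prompt: str):
    # Pretend the model emits the answer word by word.
    for word in ("The", "schema", "has", "two", "tables."):
        yield word + " "

def stream_answer(prompt: str):
    # Server-sent-events style framing: one "data:" line per chunk.
    for chunk in fake_llm_stream(prompt):
        yield f"data: {chunk}\n\n"
```

The frontend can then render tokens as they land, which is what makes an LLM answer feel fast even when total latency is unchanged.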

The AI Layer: This is new territory. Managing LLM costs and behavior, designing RAG systems, understanding embeddings, building with agentic frameworks like LangGraph. It's not just calling an API anymore.

DevOps side: Cloud platforms are standard (AWS, GCP), but now you're also managing GPU resources, cost tracking for AI operations, and infrastructure that supports the new layers.

The combination means I can take something from concept through design and all the way to production with all these pieces accounted for, rather than hand it off to different people.

What I Actually Build

I focus on systems that need to work reliably at scale. The principles are simple: systems need to stay reliable when things break, stay fast while remaining cost-aware, and handle AI properly.

Projects I've Built

SchemaSense AI is an example of what this looks like now. It auto-generates database documentation using AI. The old way would be to call an LLM API and show the result. The 2026 way is: cache similar queries so you don't hit the API repeatedly, have fallback models if the primary one fails, track costs and latency, show the user what's happening in real-time with streaming, and make sure the whole system works even if the AI API goes down temporarily.

Tech stack: Next.js 15, FastAPI, Python 3.12, TypeScript, PostgreSQL, DeepSeek-V3, OpenRouter, SQLite, LangGraph
Links: GitHub: https://github.com/amitdevx/schemasense | Live Demo: https://schemasense.amitdevx.tech

Professor Profiler is a multi-agent system that analyzes exam papers. In 2024, that would have been "glue multiple API calls together." In 2026, it means coordinating multiple AI agents properly using LangGraph, understanding when they fail and what to do about it, showing the reasoning process to users, and making sure the costs stay reasonable.

Tech: Python 3.12, LangGraph, Multi-agent AI, Hub-and-Spoke architecture, DevOps, Git
Links: GitHub: https://github.com/amitdevx/Professor_Profiler | Docs: https://deepwiki.com/amitdevx/Professor_Profiler
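The hub-and-spoke idea reduces to something small: a central hub routes subtasks to specialist agents and collects results. This sketch is not Professor Profiler's actual code (the real agents are LLM-backed and wired with LangGraph); the plain-function agents here are hypothetical stand-ins to show the failure-isolation shape.

```python
# Hypothetical hub-and-spoke sketch: a central hub routes subtasks to
# specialist agents and aggregates results. Plain functions stand in for
# LLM-backed agents.

def hub(task: dict, agents: dict) -> dict:
    """Route each subtask to its agent; record failures instead of crashing."""
    results = {}
    for name, payload in task.items():
        agent = agents.get(name)
        if agent is None:
            results[name] = {"error": "no such agent"}
            continue
        try:
            results[name] = {"ok": agent(payload)}
        except Exception as err:
            # One agent failing shouldn't sink the whole run.
            results[name] = {"error": str(err)}
    return results
```

The design choice that matters is in the except branch: agent failures become data in the result, so the caller can decide what's recoverable instead of the whole pipeline dying.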

Eatinformed analyzes food images for nutrition info. The multi-modal AI aspect is interesting, but what actually matters in production is: does it work when someone uploads a bad image? What happens when the vision API is slow? How do we fall back to structured data when the AI is uncertain?

Tech: TypeScript, Next.js 15, React 19, Tailwind CSS, Google Gemini, Genkit, LangGraph
Links: GitHub: https://github.com/amitdevx/Eatinformed | Demo: https://eatinformed.amitdevx.tech/
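The "fall back to structured data when the AI is uncertain" pattern can be sketched in a few lines. Everything here is invented for illustration (the confidence tuple, the threshold, the tiny lookup table); it shows the decision shape, not Eatinformed's actual implementation.

```python
# Hypothetical sketch: trust the vision model only above a confidence
# threshold, otherwise fall back to deterministic structured data.

NUTRITION_DB = {"apple": {"kcal": 95}}  # made-up structured fallback data

def analyze(guess: tuple, threshold: float = 0.8) -> dict:
    """guess is a (label, confidence) pair from a hypothetical vision model."""
    label, confidence = guess
    if confidence >= threshold:
        return {"source": "model", "label": label}
    # Low confidence: use a deterministic lookup if we can,
    # otherwise tell the user instead of guessing.
    if label in NUTRITION_DB:
        return {"source": "database", "label": label, **NUTRITION_DB[label]}
    return {"source": "none", "error": "low confidence, please retake the photo"}
```

The honest third branch is the important one: surfacing "we don't know" beats confidently serving a wrong answer.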

The other projects (2FA protector, file uploader, anime platform) are simpler but still practice the core patterns of building systems that work.

All the code is on GitHub if you want to see how this actually looks.

Experience and Learning

I'm still studying at Savitribai Phule Pune University (2024-2027, CGPA 8.36). Got some experience through virtual internships with EA and Accenture in 2023-2024, which taught me a lot about software development practices and what real systems need.

But honestly, the best learning comes from building real things and having them fail in production. That's where you actually learn what matters. The coursework helps with fundamentals, but the real education is from systems you built that broke and you had to fix.

Technical Skills

The classical stack is still there: React, Next.js, TypeScript, Python, FastAPI, PostgreSQL, MongoDB, Docker, Kubernetes, AWS, GCP. I'm comfortable across the whole thing.

But the 2026 skills that actually matter are the AI-aware ones: LLM integration and cost optimization, agentic systems with LangGraph, vector databases and RAG, prompt engineering and management, observability for AI behavior.

Database stuff is important. DevOps is important. Infrastructure as Code is important. But now all of it has to account for AI workloads, which behave differently from traditional workloads.

Why This Matters

The problems have changed. It's not just about different layers understanding each other anymore.

Say you build a chat interface with an LLM. Seems simple. But if you didn't architect for it properly, your LLM bill becomes unmanageable because every similar question hits the API fresh instead of being cached. The system is slow because you're not streaming responses properly. It fails silently when the API is down because you had no fallback. These are architectural failures, not implementation failures.
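To make the caching point concrete, here's a back-of-the-envelope cost model. Every number in it is invented for illustration (price per 1K tokens, traffic, tokens per request, hit rate); the arithmetic is the only thing being claimed.

```python
# Back-of-the-envelope LLM cost model. All numbers are made up.

def monthly_cost(requests: int, tokens_per_request: int,
                 price_per_1k_tokens: float, cache_hit_rate: float) -> float:
    # Only cache misses hit the API and incur token cost.
    misses = requests * (1 - cache_hit_rate)
    return misses * tokens_per_request * price_per_1k_tokens / 1000

# 1M requests/month, 800 tokens each, $0.002 per 1K tokens (illustrative):
uncached = monthly_cost(1_000_000, 800, 0.002, cache_hit_rate=0.0)
cached = monthly_cost(1_000_000, 800, 0.002, cache_hit_rate=0.6)
# With this linear model, a 60% cache hit rate cuts the bill by 60%.
```

The model is deliberately naive (cost scales linearly with misses), but it's enough to show why "every similar question hits the API fresh" is an architecture bug, not a billing quirk.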

Or you build an agentic system but it acts unpredictably in production because the prompts drift, you have no versioning, no observability. You're flying blind.

These are exactly the kinds of problems I think about now when designing systems, which is why I'm trying to document what actually works and what doesn't.

What I Write About

I focus on things that have actually worked in production systems. Not tutorials copied from documentation, not toy projects. Real patterns from real systems.

Current articles cover open-source AI frameworks (LangGraph, CrewAI, AutoGen), how SchemaSense works, database design, Python patterns for backends, and API design.

There's also the earlier stuff on data analysis, pandas, Reddit APIs, and other foundations that are still relevant but were written before the AI shift.

See the featured posts on amitdevx.tech for the latest.

Certifications and Learning

I've got the usual certifications from Microsoft, Google, EA, Accenture covering security, cloud, SDLC, DevOps. Nothing fancy. The real learning comes from the projects.

2026 Focus

Right now I'm concentrating on:

Production AI integration - making LLMs reliable, cost-effective, actually useful in real systems. Not demos, actual production code that handles failure modes and cost optimization.

Agentic systems - understanding how to coordinate multiple AI agents and make it work reliably. LangGraph is the right tool for this, which is why I spend a lot of time with it.

Full-stack architecture for AI - how do all the pieces fit together when AI is part of the system? Frontend, backend, DevOps, costs, all of it.

Open-source tooling for AI infrastructure - there are patterns emerging that could be open-sourced so other people don't have to figure them out from scratch.

Technical writing - documenting what works so it's not just in my head. The AI landscape moves fast. Things that are novel now become standard practices quickly.

Why You Might Care

If you're building systems with AI in 2026, most of the content out there is either "here's how to call an LLM API" or "here's a toy project." Neither of those is that helpful when you're trying to build something that actually works in production.

I try to bridge that gap. Build real things, document what worked and what didn't, explain the trade-offs.

The full-stack perspective matters too. You can't optimize for cost if you only understand the backend. You can't design the right fallback chains if you don't understand infrastructure. You can't build good UIs for AI systems if you don't understand the latency and uncertainty trade-offs.

How to Reach Out

For technical discussions and code stuff, GitHub is best.

For professional conversations, LinkedIn.

For quick thoughts and real-time discussion, X/Twitter.

Medium if you want to read longer pieces.

Direct email if it's something specific.

I'm pretty responsive on GitHub and LinkedIn. The others get checked less frequently but I do respond.

What's on amitdevx.tech

This is where I share technical knowledge about building production systems in 2026.

The blog covers AI architecture, full-stack development, cloud infrastructure, database design, and open-source tools. Real examples, real code, real patterns from actual systems.

GitHub has the code. LinkedIn has the professional updates. Medium has deeper analysis. The website ties it together.

The amitdevx Brand

What does @amitdevx represent? Full-stack thinking that includes AI. Production focus over hype. Cost awareness. Building in public. Honest about what works and what doesn't. Continuous learning because the field moves fast.

A perspective from Nashik on solving global problems.

Beyond Building Systems

When I'm not working on this stuff, I like solving puzzles and logic games, playing chess, doing digital art and design, and exploring new tools and frameworks.


Let's Build Something

If you're interested in:

Building production AI systems, understanding how to make them reliable and cost-effective, full-stack architecture for the AI era, contributing to open-source solutions, or just talking about where engineering is going in 2026.

Reach out. I'm accessible on all the platforms above.

Quick Links:

GitHub: https://github.com/amitdevx
LinkedIn: https://www.linkedin.com/in/divekar-amit/
X: https://x.com/amitdevx_
Medium: https://medium.com/@amitdivekar
Instagram: https://instagram.com/amitdevx
Website: https://amitdevx.tech
Kaggle: https://www.kaggle.com/divekaramit
Email: Amitdivekar289@gmail.com


Next Steps:

Read the recent posts about AI integration and LangGraph if you're building agentic systems. Check out the project case studies if you want to see how this looks in practice. Connect on LinkedIn if you want to discuss this stuff. Collaborate if you're working on similar problems.

Thanks for reading this update on where I am and what I'm focused on in 2026.

Amit Divekar
Full-Stack Engineer | AI Systems Architect | DevOps Engineer | Open-Source Contributor
Nashik, India
2026