
Mid-level AI Engineer (India)
Required Skills
Python
AWS
Azure
GenAI
LangChain
Agentic AI
CI/CD
Job Description
Job Title: Mid-level AI Engineer
Job Type: Full-time
Location: Hybrid (Bangalore, Pune, NCR)
Experience: 2–4 years of relevant experience
Job Summary:
Join our team as a Mid-level AI Engineer and help drive the next generation of GenAI-powered applications. You will architect robust solutions, collaborating closely with fellow engineers, product managers, and designers to turn cutting-edge research into real-world impact. If you thrive in a fast-paced, innovation-driven environment and are passionate about clear communication, this is the opportunity for you.
Responsibilities:
- Build GenAI applications such as question-answering systems, content generation tools, and extraction pipelines.
- Design, implement, and optimize RAG pipelines end-to-end: ingestion, chunking, embedding, prompt templating, evaluation, and deployment.
- Develop robust APIs using Python frameworks (FastAPI preferred).
- Work with multiple LLMs (OpenAI, Mistral, Anthropic, etc.) and evaluate model performance for different use cases.
- Use LangChain, LangGraph, or LangSmith to orchestrate and monitor GenAI workflows.
- Deploy applications on AWS or Azure and implement CI/CD for continuous improvement.
- Collaborate with data engineers and product managers to translate technical designs into scalable solutions.
- Participate in system design discussions, contributing ideas for performance and maintainability.
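To make the RAG responsibilities above concrete, here is a minimal sketch of the same steps (ingestion, chunking, embedding, retrieval, prompt templating). A toy bag-of-words embedding stands in for a real embedding model, the LLM call itself is omitted, and all function names are illustrative rather than taken from any particular framework:

```python
# Toy end-to-end RAG flow: ingest -> chunk -> embed -> retrieve -> template.
# The bag-of-words "embedding" is a deliberate simplification; in practice
# an embedding model (OpenAI, Mistral, etc.) would produce dense vectors.
import math
from collections import Counter

def chunk(text: str) -> list[str]:
    """Chunking step: split ingested text into sentence-level chunks."""
    return [s.strip() for s in text.split(".") if s.strip()]

def embed(text: str) -> Counter:
    """Toy embedding: a term-frequency (bag-of-words) vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Retrieval step: rank chunks by similarity to the query, keep top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Prompt-templating step: inline the retrieved context."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

docs = "Invoices are stored in S3. Refunds are processed within 5 days. Support runs 24/7."
top = retrieve("how long do refunds take", chunk(docs), k=1)
prompt = build_prompt("How long do refunds take?", top)
```

In a production pipeline each step would be swapped for a real component (a vector store for retrieval, an embedding API for `embed`, an evaluation harness around the whole loop), but the data flow stays the same.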
Must-have Skills:
- Python: Strong command of the language; able to write clean, modular, production-grade code.
- API Development: Proven experience building and maintaining APIs (FastAPI/Flask).
- GenAI Stack: LangChain, LangGraph, LangSmith; multiple LLM APIs (OpenAI, Mistral, etc.).
- RAG Lifecycle: Data ingestion, prompt templating (Jinja), evaluation/re-ranking, deployment.
- Cloud/DevOps: Hands-on with AWS or Azure (deployments, monitoring).
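The prompt-templating skill called out above (Jinja in the listing) amounts to filling retrieved context and a user question into a reusable template. A small sketch of the same idea, using the stdlib `string.Template` as a stand-in for Jinja so it runs without extra dependencies (the template text and field names are illustrative):

```python
# Prompt templating for the RAG lifecycle. The listing names Jinja; this
# uses string.Template as a dependency-free stand-in for illustration.
from string import Template

QA_PROMPT = Template(
    "You are a support assistant.\n"
    "Context:\n$context\n\n"
    "Question: $question\n"
    "Answer concisely, citing only the context."
)

def render_prompt(context: str, question: str) -> str:
    # substitute() raises KeyError on a missing placeholder,
    # surfacing template bugs early rather than emitting a broken prompt.
    return QA_PROMPT.substitute(context=context, question=question)

prompt = render_prompt("Refunds take 5 days.", "How long do refunds take?")
```

With Jinja the structure is the same, but templates gain loops and conditionals for variable numbers of retrieved chunks.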
Good-to-have Skills:
- Understanding of distributed systems and scaling strategies.
- Experience with containerization (Docker) and orchestration (Kubernetes).
- Familiarity with Guardrails AI or Responsible AI frameworks.
- Basic UI integration knowledge (React, optional).
About micro1
micro1 is a data engine that helps AI labs train foundational models and enterprises build AI agents. We provide frontier evaluations and reinforcement learning environments used to improve LLM capabilities, as well as contextual evaluations used to monitor and improve AI agents in enterprise settings. Our data engine includes an AI recruiter agent that sources and vets domain experts, a data platform that enables rapid production of high-quality training data, and a pipeline performance system that ensures both quality and velocity.
Our goal is to have 1 billion people doing meaningful work by contributing their expertise to the development of frontier AI models. We’ve raised $40M+ in funding, and our AI recruiter has powered more than 1 million AI-led interviews as our global network of experts expands to form the human intelligence layer for AGI.