Role overview
We are seeking a Generative AI Engineer to build, optimize, and scale production-ready AI applications. You will design complex multi-agent systems, implement advanced RAG pipelines, and manage the deployment of both frontier and local LLMs. The ideal candidate blends deep machine learning expertise with modern software engineering practices.
**Technical Stack:**
**LLMs:** Gemini, OpenAI, Claude, Llama, and local model deployment.
What you'll work on
Develop and orchestrate sophisticated AI workflows using LangGraph and multi-agent architectures.
Build and maintain advanced RAG systems using LlamaIndex and vector databases for high-accuracy retrieval.
Integrate and swap diverse LLMs (commercial and open-source) based on performance and cost requirements.
Design and deploy high-performance, scalable backend services using FastAPI and Async Python.
Fine-tune large language models (LLMs) using PyTorch/TensorFlow to improve domain-specific performance.
Optimize GenAI workflows for latency, cost, and reliability using advanced prompt engineering and monitoring tools.
Containerize and deploy AI services via Docker to production environments.
Compensation and benefits
Maximum compensation: USD 276,000.
Compensation is based on the candidate's actual experience and qualifications. The above is a reasonable, good-faith estimate for this role.
Medical, vision, and dental benefits, a 401(k) retirement plan, variable pay/incentives, paid time off, and paid holidays are available to full-time employees.
This position is not available to independent contractors.
Applications received more than 120 days after the date of this posting will not be considered.