Role overview
Nagish makes communication accessible for people who are Deaf or hard of hearing.
Our team is passionate about making the world more accessible with our state-of-the-art technology, built for both consumers and enterprises.
We are backed by some of the best investors out there: Comcast, Techstars, Vertex, Precursor, Contour, Cardumen, and more.
What you'll do
- Use and extend models for pose estimation, gesture segmentation, sign language animation, and transcription
- Integrate and champion a video generation pipeline that looks and feels like a human interpreter
- Evaluate, optimize, and deploy computer vision models to production
- Collaborate with our data engineer to build scalable training and inference pipelines
- Automate and accelerate CV tasks for annotation and content generation
What we're looking for
- PhD in Computer Science, AI, or a related field, or equivalent industry experience
- 3+ years working with PyTorch and computer vision models
- Proven ability to take ML models from research to production
- Solid understanding of machine learning, deep learning, optimization, and language models
- Experience working with motion or sign language data is a strong advantage
- Publication record in top-tier computer vision or ML venues preferred
- Strong Python skills and experience integrating models into cloud environments