You will bridge the gap between our SDK and the underlying infrastructure, take an active role in both, and design the ergonomics of our platform from a developer's point of view.
Sound like you?
Get in touch →
We are seeking a Technical Evangelist to drive adoption of our platform. Your goal is to deepen our engagement with the developers who build on the platform by creating relevant content and representing us at events.
You’ll help developers understand how they can use our services for a wide variety of natural language use cases in both enterprise and consumer settings.
Ideally, you're someone who enjoys writing code but has an unmet need to be on stage, guiding the broader industry toward a technology you fervently believe in.
Reach out →
We are looking for an NLP Engineer to help us create a framework and computing infrastructure that deliver conversational intelligence to developers at the speed of thought.
You'll be responsible for building the ML pipelines that train our models, curating datasets, and developing localization modules.
To succeed in this role, you should possess outstanding skills in natural language processing, deep learning methods, and text representation techniques.
Principal Technical Evangelist
We are a light-hearted, fast-paced, and fun engineering team that appreciates diversity of opinion, candor, and a willingness to take risks, but does not tolerate drama, politics, or hubris.
We are extremely particular about code hygiene and following best practices. In many ways, we are a mise-en-place coding shop and strictly follow our methodology.
We work on genuinely hard problems, from designing DSLs to building performant ML models for affective speech generation, and we demand rigor from all team members. Here be dragons.
SOME OF OUR ACTIVE PROJECTS
• Continually collecting data and creating tests for natural language training to gain insight into the Mauna platform's usefulness and accuracy.
• Creating high-level frameworks for representing natural language interfaces, i.e. tools like AIML and SSML, but built for composability and modeled on familiar mental models.
• Optimizing the client SDK to maximize throughput and minimize latency for the end consumer, using aggressive caching, edge computing, and parallelization.