What is Nomadic?
Nomadic is a full-stack toolkit that helps you take LLM systems from demo to production with confidence. Teams use Nomadic to eliminate the guesswork of keeping performance reliable as new customer data arrives.
Nomadic supports you throughout the ML development lifecycle, offering prompt tuning, systematic optimization, and evaluation in one centralized experimentation platform.
Nomadic is purpose-built on the belief that, especially as the industry matures, reliable post-production performance comes from the following areas:
- Experimentation: We consistently heard that teams need the ability to run small, repeatable, yet reliable experiment loops to guarantee model performance across different stages of the AI development process. Nomadic offers you the flexibility to experiment under budget constraints, with centralized experiment management that simplifies your team’s testing workflows.
- Custom Evaluation: Set objective definitions of success and systematically experiment with different configurations. It is increasingly critical to score applications robustly, particularly without needing ground-truth labels. With Nomadic, you can stay ahead of major regressions and understand when your system is going wrong before users churn, setting yourself apart from the majority of developers still relying on intuition (“vibe checks”) or eyeball heuristics. A minimal sketch of a reference-free evaluator appears after this list.
- Systematic Optimization:
- Parameter Optimization: Nomadic started from an emerging customer need for automatic hyperparameter optimization (HPO). More than 20 ML teams told us that setting model parameters across ML development stages (such as temperature at inference time, or learning rate and epochs during training) was intuition-based or full of guesswork. Nomadic’s HPO tuner library enables teams to identify the best parameter configurations systematically. We ship state-of-the-art search techniques developed by Microsoft Research and the latest HPO libraries off the shelf, so you can easily search, set, and test your parameters as priorities evolve, whether that means tighter cost control or maximum performance. (A minimal parameter-sweep sketch appears after this list.)
- Prompts: When building LLM applications, it’s crucial to recognize that some prompts perform far better than others, and MLEs often try hundreds of prompts before finding a good one. With auto-prompt tuning, Nomadic optimizes prompts for you (given a high-quality dataset of inputs and expected outputs), so that your nuanced requirements no longer live only in the head of the MLE. (A prompt-comparison sketch appears after this list.)
- Observability: Paired with experimentation, ML teams need to statistically interpret the impact of different parameter settings on model performance in order to confidently deploy. Nomadic provides detailed visualizations, score distributions, and statistical summaries to help you ensure your models are robust and reliable.
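The difference between “vibe checks” and an objective definition of success is easiest to see with a concrete metric. Below is a minimal sketch of a reference-free evaluator in plain Python: it scores an answer against a rubric of required facts, banned phrases, and a length budget instead of a ground-truth label. The `Rubric` fields, thresholds, and weights are illustrative assumptions, not Nomadic’s evaluation API.

```python
# Minimal sketch of a custom, reference-free evaluator: score an LLM answer
# against an objective rubric (required facts present, no banned phrases,
# reasonable length) rather than a ground-truth label.
# The rubric fields and weights below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Rubric:
    required_terms: list[str] = field(default_factory=list)  # facts the answer must mention
    banned_terms: list[str] = field(default_factory=list)    # phrases that indicate failure
    max_words: int = 200                                      # length budget

def score(answer: str, rubric: Rubric) -> float:
    """Return a 0-1 score; no ground-truth label is needed."""
    text = answer.lower()
    hits = sum(term.lower() in text for term in rubric.required_terms)
    coverage = hits / max(len(rubric.required_terms), 1)
    penalty = 0.5 if any(term.lower() in text for term in rubric.banned_terms) else 0.0
    length_ok = 1.0 if len(answer.split()) <= rubric.max_words else 0.8
    return max(0.0, coverage * length_ok - penalty)

if __name__ == "__main__":
    rubric = Rubric(required_terms=["refund", "14 days"], banned_terms=["as an ai"])
    print(score("You can request a refund within 14 days of purchase.", rubric))  # 1.0
```

A metric like this can then be tracked across experiment runs, so regressions show up as score drops rather than anecdotes.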
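To make the idea of a systematic parameter search concrete, here is a minimal grid-search sketch in plain Python. `run_and_score` is a hypothetical stand-in for “run your LLM pipeline on an evaluation set with these settings and return a metric”; the exhaustive loop illustrates the approach only, not Nomadic’s tuner library, which also provides more advanced search techniques.

```python
# Minimal sketch of a systematic parameter sweep: exhaustively score a small
# search space of generation settings and keep the best configuration.
# `run_and_score` is a hypothetical placeholder objective, not a Nomadic API.
import itertools
import random

SEARCH_SPACE = {
    "temperature": [0.0, 0.3, 0.7, 1.0],
    "max_tokens": [256, 512],
}

def run_and_score(params: dict) -> float:
    # Placeholder objective: in practice, run your pipeline on an evaluation
    # dataset with these params and return an aggregate score.
    random.seed(str(sorted(params.items())))
    return random.random()

def grid_search(space: dict) -> tuple[dict, float]:
    keys = list(space)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(space[k] for k in keys)):
        params = dict(zip(keys, values))
        s = run_and_score(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

if __name__ == "__main__":
    params, best = grid_search(SEARCH_SPACE)
    print(f"best params: {params}  score: {best:.3f}")
```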
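The same loop applies to prompts: given a small dataset of inputs and expected outputs, candidate templates can be scored and ranked automatically. The sketch below uses exact-match scoring and a canned `call_llm` stub so it runs offline; both are illustrative assumptions rather than Nomadic’s prompt-tuning API.

```python
# Minimal sketch of auto-prompt comparison: score candidate prompt templates
# against a small dataset of inputs and expected outputs, then pick the best.
# `call_llm` is a hypothetical stand-in for your model client.

DATASET = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

PROMPT_CANDIDATES = [
    "Answer concisely: {input}",
    "You are a precise assistant. Reply with only the answer.\nQuestion: {input}",
]

def call_llm(prompt: str) -> str:
    # Placeholder: return a canned answer so the sketch runs offline.
    return "4" if "2 + 2" in prompt else "Paris"

def exact_match(pred: str, expected: str) -> float:
    return float(pred.strip().lower() == expected.strip().lower())

def evaluate_prompt(template: str) -> float:
    scores = [
        exact_match(call_llm(template.format(input=row["input"])), row["expected"])
        for row in DATASET
    ]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    best = max(PROMPT_CANDIDATES, key=evaluate_prompt)
    print("best template:", best)
```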
Nomadic is built by a team of ML practitioners like you, responsible for systems at Lyft, Snowflake, and Google.