Experimentation
Running experiments is a systematic way to measure the performance of your system across prompt, model, or hyperparameter configurations.
Nomadic’s experimentation capabilities simplify your testing process with integrated project setup, API key handling, and experiment configuration tools.
Submit an Experiment
There are two ways to create an Experiment:
- Through the Python SDK, by invoking nomadic.experiment(...) as noted in the Example Usage, and
- Through the Nomadic Workspace, our managed service GUI.
Nomadic SDK Example
See the Experiment class reference for basic Experiment usage.
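Below is a minimal sketch of defining and running an Experiment from the SDK. The import path, the constructor arguments (name, params), and the run() method are illustrative assumptions; consult the Experiment class reference for the exact signature.

```python
from nomadic.experiment import Experiment

# Minimal sketch: define a hyperparameter search space and run it.
# Constructor arguments and run() are assumptions for illustration;
# see the Experiment class reference for the exact signature.
experiment = Experiment(
    name="prompt-temperature-sweep",
    params={"temperature": [0.1, 0.5, 0.9], "max_tokens": [256, 512]},
)

results = experiment.run()  # executes one run per parameter combination
```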
Repeatable Experimentation
Re-configure and trigger Experiment runs either through the SDK or Nomadic Workspace.
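For example, an existing experiment can be re-configured and re-run from the SDK. The sketch below assumes the search space is exposed as a mutable params attribute and that run() can simply be invoked again:

```python
# Sketch: widen the temperature search space and trigger a fresh run.
# The params attribute and run() call are illustrative assumptions.
experiment.params["temperature"] = [0.2, 0.4, 0.6, 0.8]
rerun_results = experiment.run()
```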
Centralized Projects
We give you an interactive, centralized dashboard regardless of whether you’re using Nomadic from the workspace or your local machine.
Interpretable Results
Get fine-grained visualizations of experiment results across different model settings, compare individual training runs, and easily look back on optimal settings as runtime data evolves.
Standard visualizations
Our Experiments dashboard includes key out-of-the-box visualizations such as:
- Detailed heatmaps on the impact of different parameter settings on model performance
- Top-scoring parameter combinations in tables
- Statistical summaries to ensure your models are robust and reliable
- Live data lookback that updates in real time with your most recent production data
In the SDK, see Observability for how to use the experiment.visualize_results() function to get a comprehensive visual analysis of experiment results, focusing on the score distribution and its relationship with various hyperparameters.
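For instance, after a run completes (as in the earlier sketch), the built-in analysis can be rendered directly:

```python
# Render the built-in visual analysis of the completed experiment:
# score distribution plus score vs. each tuned hyperparameter.
experiment.visualize_results()
```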
Custom visualizations
You can also build custom visualizations on top of the raw experiment results, as shown in the example below.
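As an example, the sketch below plots score against a single hyperparameter with matplotlib. It assumes each run record exposes its parameter values and score (run_results, r.params, r.score are assumed attribute names); adapt them to the actual structure your experiment results return.

```python
import matplotlib.pyplot as plt

# Sketch of a custom plot built from raw experiment results.
# run_results, r.params, and r.score are assumed attribute names;
# adapt them to the actual structure of your results object.
runs = results.run_results
temperatures = [r.params["temperature"] for r in runs]
scores = [r.score for r in runs]

plt.scatter(temperatures, scores)
plt.xlabel("temperature")
plt.ylabel("score")
plt.title("Score vs. temperature across runs")
plt.show()
```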
Object Registry
Nomadic captures not just metrics, but your models, datasets, and configurations so that your project is reproducible.