This document provides an overview of the model classes available in the SDK: Model, OpenAIModel, SagemakerModel, TogetherAIModel, and VLLMModel. These classes are designed to facilitate the integration and usage of various LLMs in your applications. Each concrete class inherits from the base Model class, which provides a consistent interface for setting up and running models with specific API keys and hyperparameters.
The Model class is an abstract class and cannot be instantiated directly. Subclasses like OpenAIModel and SagemakerModel provide specific implementations for their respective platforms.
The api_keys parameter is essential and must be provided for the model to function correctly.
Each subclass is responsible for handling the specific setup and execution of its respective platform’s model.
- `get_required_api_keys() -> Set[str]`: Returns the set of expected API keys for the model.
- `get_hyperparameters() -> Dict`: Returns the hyperparameters that can be tuned for the model.
- `run(**kwargs) -> RunResult`: Abstract method that must be implemented by subclasses to execute the model with the provided parameters and return a `RunResult` object.
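To make the shared interface concrete, the minimal sketch below constructs one of the subclasses and calls the base-class methods. The SagemakerModel import path and api_keys names are taken from the example later in this document; the keyword argument passed to run() (here, prompt) is an assumption for illustration, not a documented signature.

```python
# Minimal sketch of the shared Model interface (assumptions noted inline).
from nomadic.model import SagemakerModel  # import path taken from the Example Usage below

# api_keys is required; replace the placeholders with real credentials.
model = SagemakerModel(
    api_keys={
        "AWS_ACCESS_KEY_ID": "<...>",
        "AWS_SECRET_ACCESS_KEY": "<...>",
        "AWS_DEFAULT_REGION": "<...>",
        "ENDPOINT_NAME": "<...>",
    }
)

# These accessors come from the base Model class and work the same on any subclass.
print(model.get_required_api_keys())  # set of API key names the model expects
print(model.get_hyperparameters())    # hyperparameters that can be tuned

# run(**kwargs) executes the model and returns a RunResult.
# The exact keyword arguments (e.g. `prompt`) are an assumption here.
result = model.run(prompt="Summarize the meeting transcript below: ...")
```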
The OpenAIModel class is a subclass of Model specifically designed to work with OpenAI’s GPT models. It handles the setup and execution of OpenAI models, allowing you to easily integrate them into your applications.
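A minimal sketch of constructing an OpenAIModel under assumed parameters is shown below. The "OPENAI_API_KEY" key name, the model argument, and the run() keyword argument are assumptions based on the pattern of the other subclasses and are not confirmed by this document.

```python
# Hypothetical sketch: the key name, `model` argument, and run() kwargs are assumptions.
from nomadic.model import OpenAIModel

openai_model = OpenAIModel(
    api_keys={"OPENAI_API_KEY": "<...>"},  # assumed key name
    model="gpt-4o",                        # assumed parameter for selecting the GPT model
)

result = openai_model.run(prompt="Write a one-sentence summary of ...")
```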
The SagemakerModel class is a subclass of Model designed to work with AWS SageMaker endpoints. It configures the SageMaker model using the provided AWS credentials and runs experiments using specified parameters.
Class Variables

| Parameter | Type | Default | Description | Required |
| --- | --- | --- | --- | --- |
| `required_api_keys` | `ClassVar[Set[str]]` |  | A set of API keys expected for the SageMaker model. | Yes |
| `hyperparameters` | `ClassVar[Dict]` | A default hyperparameter search space with temperature, max_tokens, and top_p. | A dictionary of hyperparameters specific to the SageMaker model. | No |
Example Usage
```python
# Assumed import paths for Experiment and BatchEvalRunner; adjust to your project.
from nomadic.experiment import Experiment
from llama_index.core.evaluation import BatchEvalRunner

from nomadic.model import SagemakerModel
from nomadic.tuner import tune

experiment = Experiment(
    name="Sample_Nomadic_Experiment",
    model=SagemakerModel(
        api_keys={
            "AWS_ACCESS_KEY_ID": "<...>",
            "AWS_SECRET_ACCESS_KEY": "<...>",
            "AWS_DEFAULT_REGION": "<...>",
            "ENDPOINT_NAME": "<...>",
        }
    ),
    evaluator=BatchEvalRunner(...),
    hp_space={
        "temperature": tune.choice([0.1, 0.3, 0.5, 0.7, 0.9]),
        "top_k": tune.randint(3, 5),
    },
    current_hp_values={
        "temperature": 0.5,
        "top_k": 5,
        "top_p": 8,
    },
    evaluation_dataset=[
        {
            "Context": "You are a helpful assistant writing a transcript for ...",
            "Instruction": "Absolutely do not hallucinate. Capture all relevant ...",
            "Answer": "The generated summary is shown below: ...",
        }
    ],
)
results = experiment.run()
```
Overview
The TogetherAIModel class is a subclass of Model that is implemented to work with models hosted on Together.AI. It handles the setup and execution of Together.AI models, allowing you to easily integrate them with your applications.
Class Variables
| Parameter | Type | Default | Description | Required |
| --- | --- | --- | --- | --- |
| `model` | `Optional[str]` | N/A | Name of the model to use, from the Together.AI Chat Models list. | No |
| `required_api_keys` | `ClassVar[Set[str]]` | `{"TOGETHER_API_KEY"}` | The set of expected API keys for Together.AI. | No |
| `hyperparameters` | `ClassVar[Dict]` | A default hyperparameter search space with temperature, max_tokens, and top_p. | A dictionary defining the hyperparameters that can be tuned for the Together.AI model. | No |
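Building on the class variables above, a usage sketch for TogetherAIModel might look like the following. The import path is assumed by analogy with SagemakerModel, the model name is an illustrative placeholder from the Together.AI Chat Models list, and the run() keyword argument is an assumption.

```python
# Sketch based on the class variables above; import path and run() kwargs are assumptions.
from nomadic.model import TogetherAIModel

together_model = TogetherAIModel(
    api_keys={"TOGETHER_API_KEY": "<...>"},  # the one required key per the table above
    model="meta-llama/Llama-3-8b-chat-hf",   # illustrative entry from the Chat Models list
)

result = together_model.run(prompt="Summarize the meeting transcript below: ...")
```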
The VLLMModel class is a subclass of Model that is designed to work with models served through vLLM. It handles their setup and execution, allowing you to easily integrate vLLM-served models into your applications.
Class Variables
| Parameter | Type | Default | Description | Required |
| --- | --- | --- | --- | --- |
| `api_url` | `Optional[str]` | N/A | URL of the model being served through vLLM. | No |
| `required_api_keys` | `ClassVar[Set[str]]` | `{}` | The set of expected API keys for the model being served through vLLM. | No |
| `hyperparameters` | `ClassVar[Dict]` | A default hyperparameter search space with temperature, max_tokens, and top_p. | A dictionary defining the hyperparameters that can be tuned for the model being served on vLLM. | No |
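Building on the class variables above, a usage sketch for VLLMModel might look like the following. The import path is assumed by analogy with the other subclasses, and both the api_url value and the run() keyword argument are placeholders for illustration.

```python
# Sketch based on the class variables above; URL, import path, and run() kwargs are assumptions.
from nomadic.model import VLLMModel

vllm_model = VLLMModel(
    api_url="http://localhost:8000/generate",  # wherever your vLLM server is listening
    api_keys={},                               # no API keys are required per the table above
)

result = vllm_model.run(prompt="Summarize the meeting transcript below: ...")
```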