Quick Start Notebook

For programmatic access to NomadicML, you can use our Python SDK. The quick-start steps below are also available as a demo notebook that you can open in Colab.

1. Install the SDK

pip install nomadicml

2. Initialize the Client

from nomadicml import NomadicML
import os

# Initialize with your API key
client = NomadicML(
    api_key=os.environ.get("NOMADICML_API_KEY")
)
To get your API key, log in to the web platform, go to Profile > API Key, and generate a new key. We recommend storing your API key in an environment variable for security.
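If you follow the environment-variable approach, a small fail-fast helper can make a missing key easier to diagnose. This is our own sketch, not part of the SDK:

```python
import os

def require_api_key(var_name: str = "NOMADICML_API_KEY") -> str:
    """Read the API key from the environment, failing fast with a clear message."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Missing {var_name}; export it before creating the client.")
    return key
```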

3. Upload and Analyze Videos

The standard workflow involves uploading your videos first, then running analysis on them. Uploads accept local paths or remote URLs that end with a common video extension (.mp4, .mov, .avi, .webm):
from nomadicml.video import AnalysisType, CustomCategory
response = client.upload('https://storage.googleapis.com/videolm-bc319.firebasestorage.app/example-videos/Mayhem-on-Road-Compilation.mp4')

# Extract video ID
video_id = response["video_id"]

# Then analyze it
analysis = client.analyze(
    video_id,
    analysis_type=AnalysisType.RAPID_REVIEW,
    custom_event="Find outlier events",
    custom_category=CustomCategory.DRIVING
)
print(analysis)
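The extension rule described above can be checked client-side before uploading, to catch bad paths early. This helper is our own sketch, not part of the SDK:

```python
# Quick pre-flight check of the extension rule described above
# (hypothetical helper, not part of the NomadicML SDK).
VIDEO_EXTS = (".mp4", ".mov", ".avi", ".webm")

def looks_like_video(path_or_url: str) -> bool:
    """Return True if the path/URL ends with a supported video extension."""
    return path_or_url.lower().endswith(VIDEO_EXTS)
```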
You can also pass a list of paths/URLs to upload, and a list of IDs to analyze, for batch operations.
paths = [
    'https://storage.googleapis.com/videolm-bc319.firebasestorage.app/example-videos/Driving-a-bus-in-Switzerland-on-Snowy-Roads.mp4',
    'https://storage.googleapis.com/videolm-bc319.firebasestorage.app/example-videos/LIDAR-RBG-Waymo-YouTube-Public-Sample.mp4',
    'https://storage.googleapis.com/videolm-bc319.firebasestorage.app/example-videos/Mayhem-on-Road-Compilation.mp4',
    'https://storage.googleapis.com/videolm-bc319.firebasestorage.app/example-videos/Oakland-to-SF-on-Bridge.mp4',
    'https://storage.googleapis.com/videolm-bc319.firebasestorage.app/example-videos/Zoox_San%20Francisco-Bike-To-Wherever-Day.mp4'
]
response = client.upload(paths)
video_ids = [v['video_id'] for v in response]

analyses = client.analyze(
    video_ids,
    analysis_type=AnalysisType.RAPID_REVIEW,
    custom_event="Find outlier events",
    custom_category=CustomCategory.DRIVING
)
print(analyses)

4. Analysis Types

NomadicML supports different analysis types, each optimized for different use cases. All analysis types can be run on a single video or a batch of videos.

Rapid Review

Extracts custom events based on your specific requirements and categories, making it well suited for fast custom-event detection.
analysis = client.analyze(
    video_id,
    analysis_type=AnalysisType.RAPID_REVIEW,
    custom_event="green crosswalk",
    custom_category="environment"
)
print(analysis)

Edge Case Detection

Identifies unusual or edge-case scenarios based on a specific category.
analysis = client.analyze(
    video_id,
    analysis_type=AnalysisType.EDGE_CASE,
    edge_case_category="infrastructure-monitoring"
)
print(analysis)

Long Video: Needle-in-Haystack

Searches for specific events or objects within long videos using semantic search. It returns results quickly, with more concise analysis output.
analysis = client.analyze(
    video_id,
    analysis_type=AnalysisType.SEARCH,
    search_query="Pedestrians"
)
print(analysis)

5. Project-Based File Management & Composite Workflows

For larger projects, you can organize videos into folders and run batch analysis on entire folders at once. This is especially useful for processing datasets or running systematic reviews.

Example: Deleting videos

client.delete_video(video_id)

# OR
for v in video_ids:
    client.delete_video(v)

Example: Search + Analysis Workflow

This example shows how to find all videos in a folder containing “red trucks”, then run a more detailed analysis on just those videos to find car accidents.
from nomadicml.video import AnalysisType

# Define a folder for the project
folder_name = "traffic-incidents"

# Upload videos to the folder
responses = client.upload(['/path/to/incident1.mp4', '/path/to/incident2.mp4'], folder=folder_name)

# Step 1: Search for videos containing red trucks within the folder
print("🔍 Searching for videos with red trucks...")
search_results = client.search_videos(
    query="red trucks",
    folder=folder_name
)

# Step 2: Extract unique video IDs from search results
video_ids_with_red_trucks = list(set([
    match['videoId'] for match in search_results['matches']
]))

print(f"📹 Found red trucks in {len(video_ids_with_red_trucks)} videos")

# Step 3: Run detailed accident analysis on only those specific videos
if video_ids_with_red_trucks:
    print("\n🚨 Analyzing videos for car accidents...")
    accident_analyses = client.analyze(
        video_ids_with_red_trucks,
        analysis_type=AnalysisType.RAPID_REVIEW,
        custom_event="car accidents or collisions",
        custom_category="driving"
    )

# Step 4: Delete videos
for response in responses:
    client.delete_video(response['video_id'])
Videos are automatically pre-indexed when uploaded, making search_videos() operations fast even across large video collections. The search uses semantic understanding, so queries like “red vehicles” will also match “crimson trucks” or “burgundy cars”.
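The deduplication step in the workflow above uses `set`, which does not preserve the order in which videos were first matched. If first-seen order matters for your report, `dict.fromkeys` is a drop-in alternative (the sample matches below are hypothetical):

```python
# Order-preserving deduplication of video IDs (plain Python, no SDK needed).
# The match dicts below are hypothetical stand-ins for search_videos() output.
matches = [{"videoId": "a"}, {"videoId": "b"}, {"videoId": "a"}]

video_ids = list(dict.fromkeys(m["videoId"] for m in matches))
# Keeps first-seen order: ["a", "b"]
```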

6. Re-analyzing Videos

You don’t need to re-upload videos to run new analyses. You can efficiently query already uploaded videos using either their specific video_ids or by organizing them into folders.

Re-analyzing Specific Videos by ID

This is the most direct way to re-run analysis on a few specific videos. After you upload a video, the API returns a video_id. Store this ID to reference the video in future calls.
# Assume you have previously uploaded two videos and saved their IDs
video_id_1 = "5be1bd918d0f44adb8346fae231523a2"
video_id_2 = "81fbee264993496593db308ad4ccda02"

# Now, you can run a new analysis on just these two videos
pedestrian_analysis = client.analyze(
    [video_id_1, video_id_2],
    analysis_type=AnalysisType.SEARCH,
    search_query="pedestrians close to vehicle"
)
print(f"Found {len(pedestrian_analysis)} videos with pedestrian interactions.")

Using Folders for Batch Re-analysis

For larger-scale projects, organizing videos into folders is the best practice. This allows you to run analysis on an entire dataset with a single command.
from nomadicml.video import AnalysisType, CustomCategory

folder_name = "2024_urban_driving_set"

# Step 1: Upload and organize your videos into a folder (only needs to be done once)
client.upload(
    ['/path/to/city_drive_1.mp4', '/path/to/city_drive_2.mp4'],
    folder=folder_name
)

# Step 2: Run an initial analysis to find all road signs and their MUTCD codes
print(f"\nRunning initial analysis for 'road signs & MUTCD codes' in folder '{folder_name}'...")
signs_analysis = client.analyze(
    folder=folder_name,
    analysis_type=AnalysisType.RAPID_REVIEW,
    custom_event="Find all road signs and note their corresponding MUTCD codes",
    custom_category=CustomCategory.ENVIRONMENT
)
print(f"Found {len(signs_analysis)} videos with road signs.")

# Step 3: Later, run a different analysis on the same set of videos
print(f"\nRunning second analysis for 'potholes' in folder '{folder_name}'...")
pothole_analysis = client.analyze(
    folder=folder_name,
    analysis_type=AnalysisType.RAPID_REVIEW,
    custom_event="potholes or major road cracks",
    custom_category=CustomCategory.DRIVING
)
print(f"Found {len(pothole_analysis)} videos with potholes.")

7. Working with Thumbnails

NomadicML can generate annotated thumbnails with bounding boxes for detected events. This is useful for visual inspection and validation of detected events.

Creating New Analysis with Thumbnails

When running a rapid review analysis, set is_thumbnail=True to generate thumbnails automatically:
from nomadicml.video import AnalysisType, CustomCategory

# Run rapid review with thumbnail generation
result = client.analyze(
    "1b9dac2525f34696a7ca03b0bdf775c2",
    analysis_type=AnalysisType.RAPID_REVIEW,
    custom_event="green crosswalk",
    custom_category=CustomCategory.DRIVING,
    is_thumbnail=True  # This enables thumbnail generation
)

Retrieving Thumbnails from Existing Analyses

If you have an existing analysis without thumbnails, or need to retrieve thumbnails later, use the get_visuals() and get_visual() methods:
# Get all thumbnail URLs for an analysis
video_id = "1b9dac2525f34696a7ca03b0bdf775c2"
analysis_id = "auc1QR27QdjluPH0qDoE"

# Get all thumbnails
thumbnails = client.get_visuals(video_id, analysis_id)

# Get a specific thumbnail by event index
first_thumbnail = client.get_visual(video_id, analysis_id, 0)

Thumbnail URLs

Generated thumbnails are stored in Google Cloud Storage and include:
  • The original video frame at the event timestamp
  • Bounding box annotations highlighting the detected object
  • Object labels for easy identification
The URLs are publicly accessible and can be embedded in reports, dashboards, or shared with stakeholders.
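If you want to mirror thumbnails locally (for example, to bundle them with a report), deriving a filename from the URL's path component is usually enough. This helper is an assumption on our part, not an SDK feature:

```python
from urllib.parse import urlparse, unquote
from pathlib import Path

def thumbnail_filename(url: str) -> str:
    """Derive a local filename from a thumbnail URL's path component."""
    return unquote(Path(urlparse(url).path).name)
```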

<Note>
  If thumbnails don't exist for an analysis, `get_visuals()` will automatically generate them. This is useful for older analyses that were created without the `is_thumbnail` flag.
</Note>

8. Storing Results in a Document Database

All SDK methods return serializable Python dictionaries, which can be easily processed and stored in any document database.
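Because the results are plain dicts, a quick `json` round-trip is a cheap sanity check before handing them to any store (the sample event shape below is hypothetical, using field names from the examples in this section):

```python
import json

# Hypothetical event dict using field names from the storage examples
event = {
    "type": "near-miss",
    "time": 12.5,
    "severity": "high",
    "description": "Vehicle brakes hard near crosswalk",
}

# A successful dumps/loads round-trip confirms the dict is store-ready
assert json.loads(json.dumps(event)) == event
```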

Example: Storing in MongoDB

from pymongo import MongoClient

# Assume 'analysis_results' is the list of dicts from a client.analyze() call
results_to_store = []
for analysis in analysis_results:
    # ... (processing logic from previous examples) ...
    results_to_store.append(processed_event)

# Connect to MongoDB and insert the documents
try:
    db_client = MongoClient('mongodb://localhost:27017/')
    db = db_client['nomadicml_results']
    collection = db['driving_events']
    if results_to_store:
      collection.insert_many(results_to_store)
      print("Successfully saved results to MongoDB.")
except Exception as e:
    print(f"An error occurred with MongoDB: {e}")

Example: Storing in Supabase

Supabase provides a Postgres database with a Python client that’s simple to use.
from supabase import create_client, Client
import os

# Assume 'analysis_results' is the list of dicts from a client.analyze() call
results_to_store = []
for analysis in analysis_results:
    # ... (processing logic from previous examples yields `event` and `video_id`) ...
    # Ensure your dict keys match your Supabase table columns
    processed_event_for_supabase = {
        'source_video_id': video_id,
        'event_type': event.get('type'),
        'timestamp_sec': event.get('time'),
        'description': event.get('description'),
        'severity': event.get('severity'),
        'dmv_rule': event.get('dmvRule'),
        'raw_ai_analysis': event.get('aiAnalysis')
    }
    results_to_store.append(processed_event_for_supabase)

# Initialize Supabase client
try:
    url: str = os.environ.get("SUPABASE_URL")
    key: str = os.environ.get("SUPABASE_KEY")
    supabase: Client = create_client(url, key)

    # Insert data into your 'events' table
    if results_to_store:
        data, count = supabase.table('events').insert(results_to_store).execute()
        print(f"Successfully saved {len(data[1])} results to Supabase.")
except Exception as e:
    print(f"An error occurred with Supabase: {e}")

Next Steps