This page describes all SDK methods for video operations.
For in-depth code walkthroughs, see SDK Usage Examples.
upload()
Upload local files or URLs with optional per-video metadata.
# Single local file
result = client.upload("video.mp4")
# Single video with metadata
result = client.upload(("video.mp4", "video.json"))
# Multiple local files
batch = client.upload(["a.mp4", "b.mp4"])
# Multiple videos with mixed metadata
batch = client.upload([
    ("video1.mp4", "video1.json"),     # Video with metadata
    "video2.mp4",                       # Video without metadata
    ("video3.mp4", "video3.json")      # Another video with metadata
])
# Public GCS URL (any accessible HTTPS URL)
remote = client.upload("https://storage.googleapis.com/my-bucket/videos/demo.mp4")
# Cloud Buckets. Must provide full gs:// URIs.
batch = client.upload([
    "gs://drive-monitor/uploads/trip-042/front.mp4",
    "gs://drive-monitor/uploads/trip-042/rear.mp4",
])
# With folder organization
result = client.upload("video.mp4", folder="my_folder")
# With metadata JSON file and folder
result = client.upload(("dashcam.mp4", "dashcam.json"), folder="fleet_videos")
# Organization scope
result = client.upload("launch.mp4", folder="robotics_org", scope="org")
Metadata sidecars must share the same base filename as their video (for example, launch.mp4 uploads with launch.json). Requests with mismatched names are rejected.
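Because sidecars must share the video's base filename, pairing can be done mechanically before upload. A minimal sketch (the `pair_with_sidecars` helper is hypothetical, not part of the SDK):

```python
from pathlib import Path

def pair_with_sidecars(video_paths, sidecar_dir="."):
    """Pair each video with a same-named .json sidecar when one exists.

    Returns entries in the mixed form accepted by client.upload():
    a (video, metadata) tuple when a sidecar is found, else the bare path.
    """
    items = []
    for video in video_paths:
        sidecar = Path(sidecar_dir) / (Path(video).stem + ".json")
        items.append((video, str(sidecar)) if sidecar.exists() else video)
    return items
```

The resulting list can be passed straight to `client.upload()`, since it mixes tuples and bare paths exactly as the batch examples above do.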
If you don’t provide a folder, we upload to your default personal folder. If you provide a folder that does not exist, it will be created automatically. Folders live in either your personal scope (scope="user") or your organization (scope="org"); if you omit scope we default to your personal space. Organization folders are visible to all other members of your organization, and folder names must be unique within each scope. For videos with overlay metadata, see the Metadata Ingestion Spec for the complete schema documentation.
| Parameter | Type | Description |
|---|---|---|
| videos | str \| Path \| tuple \| Sequence | Single video, (video, metadata) tuple, or list of mixed videos/tuples |
| Parameter | Type | Default | Description |
|---|---|---|---|
| folder | str | None | Folder name for organizing uploads (unique within each scope) |
| metadata_file | str \| Path | None | Overlay metadata JSON file (must share the video’s base filename) per spec (ignored when using tuples) |
| scope | 'user' \| 'org' | 'user' | Scope hint for folder resolution. Use 'org' for shared org folders and 'user' for personal uploads. |
| upload_timeout | int | 1200 | Timeout in seconds for upload completion |
| wait_for_uploaded | bool | True | Wait until upload is complete |
| integration_id | str | None | Saved cloud integration identifier to use for imports. When omitted, the SDK attempts to match the bucket against saved integrations. |
Returns: video_id and status.
Cloud imports accept .mp4 objects referenced by full gs://bucket/object.mp4 URIs. Wildcard patterns are not supported—list each object explicitly. When no integration_id is provided, the SDK tries each saved integration whose bucket matches the URI until one succeeds; specify the id when multiple integrations share the bucket.
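Because wildcard patterns are rejected, it can help to validate URIs client-side before calling upload(). A minimal sketch of the documented constraints (`validate_gcs_uris` is a hypothetical helper, not an SDK method):

```python
def validate_gcs_uris(uris):
    """Reject URIs the cloud import path would not accept: anything that
    is not a full gs:// URI pointing at a single .mp4 object."""
    bad = [u for u in uris
           if not u.startswith("gs://") or "*" in u or not u.endswith(".mp4")]
    if bad:
        raise ValueError(f"unsupported cloud import URIs: {bad}")
    return uris
```

Running this before `client.upload()` surfaces a bad list immediately rather than partway through a batch import.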
analyze()
Run analysis on one or more uploaded videos with different analysis types.
ASK
Detect custom events in videos using natural language descriptions. Perfect for finding specific scenarios like “green crosswalk” or “yellow taxi”. Works for all video lengths, including long videos, with fast results.
# Single-video prompt
client.analyze(
    "abc123",
    analysis_type=AnalysisType.ASK,
    custom_event="vehicles parked on sidewalk"
)
# With thumbnails (creates annotated bounding boxes)
client.analyze(
    "abc123",
    analysis_type=AnalysisType.ASK,
    custom_event="delivery vans double parked",
    is_thumbnail=True
)
# With overlay extraction for telemetry data
from nomadicml import OverlayMode
client.analyze(
    "abc123",
    analysis_type=AnalysisType.ASK,
    custom_event="speeding events",
    overlay_mode=OverlayMode.CUSTOM  # Extracts custom fields from uploaded metadata
)
# Batch Ask: analyze multiple IDs at once
client.analyze(
    ["abc123", "def456"],
    analysis_type=AnalysisType.ASK,
    custom_event="jaywalking near intersections"
)
# Batch Ask: analyze every video in a folder
client.analyze(
    folder="fleet_uploads",
    analysis_type=AnalysisType.ASK,
    custom_event="jaywalking near intersections"
)
| Parameter | Type | Description |
|---|---|---|
| id(s) or folder | str \| Sequence[str] | Video ID(s) or folder name (use one, not both) |
| analysis_type | AnalysisType | Must be AnalysisType.ASK |
| custom_event | str | Event description to detect (e.g., “green crosswalk”) |
| Parameter | Type | Default | Description |
|---|---|---|---|
| custom_category | CustomCategory \| str | "driving" | Optional context to steer the answer |
| model_id | str | "Nomadic-VL-XLarge" | AI model to use |
| timeout | int | 2400 | Analysis timeout in seconds |
| wait_for_completion | bool | True | Wait for analysis to complete |
| is_thumbnail | bool | False | Generate annotated bounding box thumbnails |
| return_subset | bool | False | Return subset of results |
| use_enhanced_motion_analysis | bool | False | Generates enhanced motion captions for events |
| confidence | str | "low" | Confidence level for event prediction; either 'low' or 'high' |
| overlay_mode | OverlayMode \| str | None | Select overlay extraction mode: OverlayMode.TIMESTAMPS, OverlayMode.GPS, or OverlayMode.CUSTOM (one at a time) |
nomadicml.video.CustomCategory
- CustomCategory.DRIVING
- CustomCategory.ROBOTICS
- CustomCategory.AERIAL
- CustomCategory.SECURITY
- CustomCategory.ENVIRONMENT
nomadicml.video.OverlayMode
- OverlayMode.TIMESTAMPS
- OverlayMode.GPS
- OverlayMode.CUSTOM
About Overlay Modes: Overlay modes allow you to extract telemetry data from on-screen overlays in your videos. Choose the mode based on your overlay type:
- TIMESTAMPS and GPS: Use these modes when your video has unstructured overlays visible on screen (like dashcam timestamps or GPS coordinates). Our models will automatically detect and extract these values from the visual overlay text.
- CUSTOM: Use this mode when you’ve uploaded structured metadata JSON (per the spec) describing your custom overlay fields. This mode extracts the specific fields you’ve defined, like speed, altitude, or other telemetry values.
Remember: Metadata must be provided at upload time. The overlay_mode parameter only controls which extraction method to use during analysis.
Returns: video_id, analysis_id, mode, status, summary, and events.
- If is_thumbnail=True, each event includes an annotated_thumbnail_url.
- If overlay_mode is specified and the video was uploaded with metadata, each event includes an overlay field with extracted telemetry data as {field_name: {"start": value, "end": value}} pairs.
- Overlay values are only surfaced through the overlay field; the SDK no longer returns duplicate frame_*_start or frame_*_end keys at the root level.
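The per-event overlay shape lends itself to a small accessor. A sketch assuming the documented {field: {"start": value, "end": value}} layout (`overlay_delta` is a hypothetical helper, not part of the SDK):

```python
def overlay_delta(event, field):
    """Return (start, end, change) for one overlay field on an event,
    or None when the field is absent (e.g. video uploaded without metadata)."""
    overlay = event.get("overlay", {})
    if field not in overlay:
        return None
    start, end = overlay[field]["start"], overlay[field]["end"]
    return start, end, end - start
```

This makes it easy, for example, to flag events where a "speed" field changed sharply between the start and end of the detection window.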
AGENT
Specialized motion and behavior detection powered by curated agent pipelines:
# General agent (edge-case detection)
client.analyze(
    "abc123",
    analysis_type=AnalysisType.GENERAL_AGENT,
)
# Lane change specialist
client.analyze(
    "abc123",
    analysis_type=AnalysisType.LANE_CHANGE,
)
# Batch agent run
client.analyze(
    ["abc123", "def456"],
    analysis_type=AnalysisType.LANE_CHANGE
)
# Batch agent run using a folder
client.analyze(
    folder="san_francisco_midday",
    analysis_type=AnalysisType.LANE_CHANGE
)
- AnalysisType.GENERAL_AGENT: zero-shot edge-case hunting (General Edge Case)
- AnalysisType.LANE_CHANGE: lane-change manoeuvre detection
- AnalysisType.TURN: left/right turn behaviour
- AnalysisType.RELATIVE_MOTION: relative motion between vehicles
- AnalysisType.DRIVING_VIOLATIONS: speeding, stop, red-light, and related violations
Required Parameters:
| Parameter | Type | Description |
|---|---|---|
| ids or folder | str \| Sequence[str] | Video ID(s) or folder name (use one, not both) |
| analysis_type | AnalysisType | One of AnalysisType.GENERAL_AGENT, AnalysisType.LANE_CHANGE, AnalysisType.TURN, AnalysisType.RELATIVE_MOTION, AnalysisType.DRIVING_VIOLATIONS |
| Parameter | Type | Default | Description |
|---|---|---|---|
| model_id | str | "Nomadic-VL-XLarge" | AI model to use |
| timeout | int | 2400 | Analysis timeout in seconds |
| wait_for_completion | bool | True | Wait for analysis to complete |
| concept_ids | List[str] | None | Concept IDs for specialized detection |
| return_subset | bool | False | Return subset of results |
Returns: video_id, analysis_id, mode, status, and events.
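Agent results can be summarized client-side, for example by tallying detected events per label. A sketch assuming each event carries a label field, as shown in the batch examples later on this page (`event_label_counts` is a hypothetical helper):

```python
from collections import Counter

def event_label_counts(result):
    """Tally detected events by label for one analyze() result dict."""
    return Counter(e["label"] for e in result.get("events", []))
```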
generate_structured_odd()
Produce an ASAM OpenODD–compliant CSV describing the vehicle’s operating domain.
from nomadicml import NomadicML, DEFAULT_STRUCTURED_ODD_COLUMNS
client = NomadicML(api_key="your_api_key")
# Use the default schema or customise it before calling the export.
columns = [
    {
        "name": "timestamp",
        "prompt": "Log the timestamp in ISO 8601 format (placeholder date 2024-01-01).",
        "type": "YYYY-MM-DDTHH:MM:SSZ",
    },
    {
        "name": "scenery.road.type",
        "prompt": "The type of road the vehicle is on.",
        "type": "categorical",
        "literals": ["motorway", "rural", "urban_street", "parking_lot", "unpaved", "unknown"],
    },
]
odd = client.generate_structured_odd(
    video_id="VIDEO_ID",
    columns=columns or DEFAULT_STRUCTURED_ODD_COLUMNS,
)
print(odd["csv"])
print(odd.get("share_url"))
| Parameter | Type | Description |
|---|---|---|
| video_id | str | ID of the analysed video whose operating domain you want to export |
| Parameter | Type | Default | Description |
|---|---|---|---|
| columns | Sequence[StructuredOddColumn] | DEFAULT_STRUCTURED_ODD_COLUMNS | Column definitions matching the UI schema (name, prompt, type, optional literals) |
| timeout | int | client default | Request timeout override in seconds |
- csv: The generated CSV text.
- columns: The resolved column schema (after validation).
- reasoning_trace_path: Final Firestore path used for reasoning logs.
- share_id / share_url: Optional sharing metadata if the backend stored the export.
- processing_time: Time spent generating the export.
- raw: Full backend response payload for additional introspection.
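The returned CSV text can be parsed with the standard library for downstream processing. A sketch (`odd_rows` is a hypothetical helper; the column names come from the schema you pass in):

```python
import csv
import io

def odd_rows(odd_csv_text):
    """Parse the CSV text returned by generate_structured_odd() into dicts
    keyed by column name (e.g. 'timestamp', 'scenery.road.type')."""
    return list(csv.DictReader(io.StringIO(odd_csv_text)))
```

For example, `odd_rows(odd["csv"])` yields one dict per exported row, ready for filtering or loading into a dataframe.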
cloud_integrations()
Use this helper to manage reusable GCS/S3 credentials. (See Cloud Storage Uploads for instructions on creating service-account keys.)
Cloud integrations helper
# List every integration visible to your user/org
client.cloud_integrations.list()
# Filter by provider
client.cloud_integrations.list(type="gcs")
# Add a new GCS integration using a service account JSON file
client.cloud_integrations.add(
    type="gcs",
    name="Fleet bucket",
    bucket="drive-monitor",
    prefix="uploads/",
    credentials="service-account.json",  # path or dict/bytes
)
# Add a new S3 integration 
client.cloud_integrations.add(
    type="s3",
    name="AWS archive",
    bucket="drive-archive",
    prefix="raw/",
    region="us-east-1",
    credentials={
        "accessKeyId": "...",
        "secretAccessKey": "...",
        "sessionToken": "...",  # optional
    },
)
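The bucket-matching fallback described under upload() can be mirrored client-side, e.g. to predict which integration a gs:// import will use. A sketch that assumes `list()` returns dicts with id, bucket, and prefix keys (an assumption about the response shape; `match_integration` is a hypothetical helper):

```python
def match_integration(uri, integrations):
    """Pick the first saved integration whose bucket (and prefix, if any)
    matches a gs:// URI, mirroring the fallback the SDK is described as
    using. Returns the integration's id, or None when nothing matches."""
    path = uri.removeprefix("gs://")
    for integ in integrations:
        bucket_prefix = integ["bucket"] + "/" + integ.get("prefix", "")
        if path.startswith(bucket_prefix):
            return integ["id"]
    return None
```

When this returns more than one plausible candidate across calls, pass an explicit integration_id to upload() instead of relying on the fallback.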
my_videos()
Retrieve your uploaded videos, optionally filtered by folder.
# Get all videos
videos = client.my_videos()
# Get videos in specific folder
videos = client.my_videos(folder_id="my_folder")
| Parameter | Type | Default | Description |
|---|---|---|---|
| folder_id | str \| None | None | Filter videos by folder name |
create_or_get_folder()
Create a folder if it does not exist and get its metadata. Helpful when you need a folder identifier before uploading.
driving = client.create_or_get_folder("driving", scope="org")
print(driving["id"], driving["org_id"])
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | — | Folder name to create or fetch |
| scope | 'user' \| 'org' | 'user' | Scope to search/create in |
| org_id | str \| None | None | Optional explicit org ID (rarely needed; defaults to the caller’s org) |
Returns: id, name, org_id, and scope.
delete_video()
Remove a video by ID.
client.delete_video("video_id")
| Parameter | Type | Description |
|---|---|---|
| video_id | str | ID of the video to delete (required) |
search()
Run semantic search across all analysed events inside a folder. You can use open-ended natural language queries.
results = client.search(
    query="red pickup truck overtaking",
    folder_name="my_fleet_uploads",
    scope="org",                 # optional, defaults to "user"
)
print(results["summary"])
for thought in results["thoughts"]:
    print("•", thought)
| Parameter | Type | Description |
|---|---|---|
| query | str | Natural-language search query |
| folder_name | str | Human-friendly folder name to search within |
| Parameter | Type | Default | Description |
|---|---|---|---|
| scope | 'user' \| 'org' \| 'sample' | 'user' | Scope hint for folder resolution. Use 'org' for organization folders and 'sample' for demo/sample folders. |
- summary: string overview of the findings
- thoughts: list of reasoning steps (chain-of-thought) shown in the UI
- matches: list of {video_id, analysis_id, event_index, similarity, reason}
- session_id: identifier for the associated search session (useful for re-fetching or sharing)
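Matches can be ranked by their similarity score before display. A sketch over the documented matches shape (`top_matches` is a hypothetical helper, not part of the SDK):

```python
def top_matches(results, k=3):
    """Return the k highest-similarity matches from a search() result."""
    return sorted(results["matches"],
                  key=lambda m: m["similarity"],
                  reverse=True)[:k]
```

Each returned match still carries its video_id and analysis_id, so the top hits can be fed directly into get_visual() or get_visuals().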
get_visuals()
Retrieve thumbnail URLs for all events in an analysis. If thumbnails don’t exist, they will be generated automatically.
# Get all thumbnail URLs from an analysis
visuals = client.get_visuals("video_id", "analysis_id")
# Returns list of thumbnail URLs
# ['https://storage.googleapis.com/.../event_0_thumb.jpg',
#  'https://storage.googleapis.com/.../event_1_thumb.jpg']
| Parameter | Type | Description |
|---|---|---|
| video_id | str | ID of the video (required) |
| analysis_id | str | ID of the analysis containing events (required) |
get_visual()
Retrieve a single thumbnail URL for a specific event in an analysis.
# Get thumbnail for the first event (index 0)
thumbnail_url = client.get_visual("video_id", "analysis_id", 0)
# Get thumbnail for the third event (index 2)
thumbnail_url = client.get_visual("video_id", "analysis_id", 2)
| Parameter | Type | Description |
|---|---|---|
| video_id | str | ID of the video (required) |
| analysis_id | str | ID of the analysis containing events (required) |
| event_idx | int | 0-based index of the event (required) |
get_batch_analysis()
Retrieve analysis results for a completed batch. Optionally filter events by approval status (approved, rejected, pending, or invalid).
# Get all results from a batch
batch_results = client.get_batch_analysis("batch_id")
# Filter for only approved events
approved_only = client.get_batch_analysis(
    "batch_id",
    filter="approved"
)
# Filter for multiple statuses
pending_and_rejected = client.get_batch_analysis(
    "batch_id",
    filter=["pending", "rejected"]
)
# Access batch metadata
print(batch_results["batch_metadata"]["batch_type"])
print(batch_results["batch_metadata"]["batch_viewer_url"])
# Iterate through video results
for result in batch_results["results"]:
    print(f"Video: {result['video_id']}")
    print(f"Events: {len(result['events'])}")
    for event in result["events"]:
        print(f"  - {event['label']} at {event['start_time']}-{event['end_time']}")
| Parameter | Type | Description |
|---|---|---|
| batch_id | str | ID of the batch to retrieve (required) |
| Parameter | Type | Default | Description |
|---|---|---|---|
| filter | str \| List[str] | None | Filter events by approval status. Valid values: 'approved', 'rejected', 'pending', 'invalid'. If not specified, returns all events. |
- batch_metadata: Contains batch information
  - batch_id: The batch identifier
  - batch_viewer_url: URL to view batch results in the web UI
  - batch_type: Type of batch ("ask" or "agent")
  - analysis_type: The analysis type used
  - review_status: Whether the batch analysis has been fully reviewed by someone
  - review_status_updated_at: Time the batch analysis review status was last updated; N/A if not reviewed yet
  - metadata: Dictionary of custom metadata key-value pairs (empty dict if no metadata exists)
  - Configuration details (for Ask batches: prompt, category, etc.)
- results: List of per-video analysis dictionaries
  - video_id: ID of the video
  - analysis_id: ID of the analysis
  - mode: Analysis mode used
  - status: Analysis status
  - events: List of detected events (filtered by approval status if specified)
  - Additional fields depending on analysis type
Raises:
- NomadicMLError: If batch is not completed or other API errors occur
- ValidationError: If filter contains invalid values
The batch must be completed before you can retrieve its results. If you need to check batch status first, use the batch viewer URL.
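The nested results structure flattens naturally into one row per event, which is handy for export or spreadsheet review. A sketch using only the documented fields (`flatten_batch` is a hypothetical helper, not part of the SDK):

```python
def flatten_batch(batch_results):
    """Flatten get_batch_analysis() output into one dict per event."""
    rows = []
    for result in batch_results["results"]:
        for event in result["events"]:
            rows.append({
                "video_id": result["video_id"],
                "label": event["label"],
                "start_time": event["start_time"],
                "end_time": event["end_time"],
            })
    return rows
```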
add_batch_metadata()
Add or update custom metadata key-value pairs on a batch.
# Add metadata to a batch
client.add_batch_metadata(
    "batch_id",
    {
        "experiment_id": "exp-001",
        "version": 2,
        "model": "Nomadic-VL-XLarge",
        "notes": "Test run with new parameters"
    }
)
# Update existing metadata (new keys will be merged, existing keys overwritten)
client.add_batch_metadata(
    "batch_id",
    {
        "version": 3,
        "status": "completed"
    }
)
# Retrieve metadata later
batch_results = client.get_batch_analysis("batch_id")
metadata = batch_results["batch_metadata"]["metadata"]
print(f"Experiment: {metadata.get('experiment_id')}")
print(f"Version: {metadata.get('version')}")
| Parameter | Type | Description |
|---|---|---|
| batch_id | str | ID of the batch to update (required) |
| metadata | Dict[str, Union[str, int]] | Dictionary with string keys and string/int values (non-nested) |
- success: Boolean indicating if the operation succeeded
- batch_id: The batch identifier
- metadata: Complete metadata dictionary after merge
Raises:
- ValidationError: If metadata format is invalid (e.g., nested objects, non-string keys, invalid value types)
- NomadicMLError: If batch is not found or you don’t have permission to modify it
Only the batch owner can add or update metadata. New metadata keys will be merged with existing metadata, with new values overwriting any existing keys with the same name. Metadata values must be strings or integers only; nested objects, arrays, booleans, and null values are not supported.
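Because invalid metadata raises ValidationError server-side, a pre-flight check can fail fast locally. A sketch mirroring the documented constraints (`check_batch_metadata` is a hypothetical helper, not part of the SDK):

```python
def check_batch_metadata(metadata):
    """Pre-flight check mirroring the documented constraints:
    string keys, flat string/int values only (no bools, lists, or dicts)."""
    for key, value in metadata.items():
        if not isinstance(key, str):
            raise ValueError(f"metadata keys must be strings, got {key!r}")
        # bool is a subclass of int, so reject it explicitly
        if isinstance(value, bool) or not isinstance(value, (str, int)):
            raise ValueError(f"unsupported value for {key!r}: {value!r}")
    return metadata
```

Running this before `client.add_batch_metadata()` catches shape errors without a round trip to the API.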