This page describes all SDK methods for video operations. For in-depth code walkthroughs, see SDK Usage Examples.

upload()

Upload local files or URLs with optional per-video metadata.
Upload examples
# Single local file
result = client.upload("video.mp4")

# Single video with metadata
result = client.upload(("video.mp4", "video.json"))

# Multiple local files
batch = client.upload(["a.mp4", "b.mp4"])

# Multiple videos with mixed metadata
batch = client.upload([
    ("video1.mp4", "video1.json"),     # Video with metadata
    "video2.mp4",                       # Video without metadata
    ("video3.mp4", "video3.json")      # Another video with metadata
])

# Public GCS URL (any accessible HTTPS URL)
remote = client.upload("https://storage.googleapis.com/my-bucket/videos/demo.mp4")

# Cloud bucket imports (full gs:// URIs required)
batch = client.upload([
    "gs://drive-monitor/uploads/trip-042/front.mp4",
    "gs://drive-monitor/uploads/trip-042/rear.mp4",
])


# With folder organization
result = client.upload("video.mp4", folder="my_folder")

# With metadata JSON file and folder
result = client.upload(("dashcam.mp4", "dashcam.json"), folder="fleet_videos")

# Organization scope
result = client.upload("launch.mp4", folder="robotics_org", scope="org")
Metadata sidecars must share the same base filename as their video (for example, launch.mp4 uploads with launch.json). Requests with mismatched names are rejected.
If you don’t provide a folder, we upload to your default personal folder. If you provide a folder that does not exist, it is created automatically. Folders live in either your personal scope (scope="user") or your organization (scope="org"); if you omit scope, we default to your personal space. Organization folders are visible to all other members of your organization. Folder names must be unique within each scope. For videos with overlay metadata, see the Metadata Ingestion Spec for the complete schema documentation.
Required Parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| videos | str \| Path \| tuple \| Sequence | Single video, (video, metadata) tuple, or list of mixed videos/tuples |
Optional Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| folder | str | None | Folder name for organizing uploads (unique within each scope) |
| metadata_file | str \| Path | None | Overlay metadata JSON file (must share the video’s base filename) per spec (ignored when using tuples) |
| scope | 'user' \| 'org' | 'user' | Scope hint for folder resolution. Use 'org' for shared org folders and 'user' for personal uploads. |
| upload_timeout | int | 1200 | Timeout in seconds for upload completion |
| wait_for_uploaded | bool | True | Wait until upload is complete |
| integration_id | str | None | Saved cloud integration identifier to use for imports. When omitted, the SDK attempts to match the bucket against saved integrations. |
Returns: Dict (single) or List[Dict] (multiple) with video_id and status
Cloud imports accept .mp4 objects referenced by full gs://bucket/object.mp4 URIs. Wildcard patterns are not supported—list each object explicitly. When no integration_id is provided, the SDK tries each saved integration whose bucket matches the URI until one succeeds; specify the id when multiple integrations share the bucket.
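Because upload() returns a single dict for one video and a list of dicts for a batch, it can help to normalize both shapes before downstream processing. A minimal sketch, assuming only the documented video_id and status keys (the "uploaded" status value shown here is an assumption):

```python
from typing import Dict, List, Union

def collect_video_ids(result: Union[Dict, List[Dict]]) -> List[str]:
    """Normalize upload() output (Dict or List[Dict]) into a list of video IDs."""
    records = result if isinstance(result, list) else [result]
    return [r["video_id"] for r in records]

def all_uploaded(result: Union[Dict, List[Dict]], ok_status: str = "uploaded") -> bool:
    """True when every record reports the given status (status string is an assumption)."""
    records = result if isinstance(result, list) else [result]
    return all(r.get("status") == ok_status for r in records)
```

The returned IDs can then be passed straight to analyze() as a batch.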

analyze()

Run analysis on one or more uploaded videos with different analysis types.

ASK

Detect custom events in videos using natural language descriptions. Perfect for finding specific scenarios like “green crosswalk” or “yellow taxi”. Works for all video lengths, including long videos, with fast results.
Ask examples
# Single-video prompt
client.analyze(
    "abc123",
    analysis_type=AnalysisType.ASK,
    custom_event="vehicles parked on sidewalk"
)

# With thumbnails (creates annotated bounding boxes)
client.analyze(
    "abc123",
    analysis_type=AnalysisType.ASK,
    custom_event="delivery vans double parked",
    is_thumbnail=True
)

# With overlay extraction for telemetry data
from nomadicml import OverlayMode

client.analyze(
    "abc123",
    analysis_type=AnalysisType.ASK,
    custom_event="speeding events",
    overlay_mode=OverlayMode.CUSTOM  # Extracts custom fields from uploaded metadata
)

# Batch Ask: analyze multiple IDs at once
client.analyze(
    ["abc123", "def456"],
    analysis_type=AnalysisType.ASK,
    custom_event="jaywalking near intersections"
)

# Batch Ask: analyze every video in a folder
client.analyze(
    folder="fleet_uploads",
    analysis_type=AnalysisType.ASK,
    custom_event="jaywalking near intersections"
)
Required Parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| id(s) or folder | str \| Sequence[str] | Video ID(s) or folder name (use one, not both) |
| analysis_type | AnalysisType | Must be AnalysisType.ASK |
| custom_event | str | Event description to detect (e.g., “green crosswalk”) |
Optional Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| custom_category | CustomCategory \| str | "driving" | Optional context to steer the answer |
| model_id | str | "Nomadic-VL-XLarge" | AI model to use |
| timeout | int | 2400 | Analysis timeout in seconds |
| wait_for_completion | bool | True | Wait for analysis to complete |
| is_thumbnail | bool | False | Generate annotated bounding box thumbnails |
| return_subset | bool | False | Return subset of results |
| use_enhanced_motion_analysis | bool | False | Generates enhanced motion captions for events |
| confidence | str | "low" | Confidence level for event prediction; set to "low" or "high" |
| overlay_mode | OverlayMode \| str | None | Overlay extraction mode: OverlayMode.TIMESTAMPS, OverlayMode.GPS, or OverlayMode.CUSTOM (one at a time) |
nomadicml.video.CustomCategory
  • CustomCategory.DRIVING
  • CustomCategory.ROBOTICS
  • CustomCategory.AERIAL
  • CustomCategory.SECURITY
  • CustomCategory.ENVIRONMENT
nomadicml.video.OverlayMode
  • OverlayMode.TIMESTAMPS
  • OverlayMode.GPS
  • OverlayMode.CUSTOM
About Overlay Modes: Overlay modes allow you to extract telemetry data from on-screen overlays in your videos. Choose the mode based on your overlay type:
  • TIMESTAMPS and GPS: Use these modes when your video has unstructured overlays visible on screen (like dashcam timestamps or GPS coordinates). Our models will automatically detect and extract these values from the visual overlay text.
  • CUSTOM: Use this mode when you’ve uploaded structured metadata JSON (per the spec) describing your custom overlay fields. This mode extracts the specific fields you’ve defined like speed, altitude, or other telemetry values.
Remember: Metadata must be provided at upload time. The overlay_mode parameter only controls which extraction method to use during analysis.
Returns: Dict with video_id, analysis_id, mode, status, summary, and events.
  • If is_thumbnail=True, each event includes an annotated_thumbnail_url.
  • If overlay_mode is specified and the video was uploaded with metadata, each event includes an overlay field with extracted telemetry data as {field_name: {"start": value, "end": value}} pairs.
  • Overlay values are only surfaced through the overlay field; the SDK no longer returns duplicate frame_*_start or frame_*_end keys at the root level.
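Since overlay values are returned only as {field_name: {"start": value, "end": value}} pairs, a small helper can flatten them back into per-field columns for logging or export. A sketch over the documented shape (the speed field in the test is illustrative):

```python
def flatten_overlay(event: dict) -> dict:
    """Flatten an event's overlay field into {field_start: value, field_end: value} pairs."""
    flat = {}
    for field, bounds in event.get("overlay", {}).items():
        flat[f"{field}_start"] = bounds.get("start")
        flat[f"{field}_end"] = bounds.get("end")
    return flat
```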

AGENT

Specialized motion and behavior detection powered by curated agent pipelines:
Agent examples
# General agent (edge-case detection)
client.analyze(
    "abc123",
    analysis_type=AnalysisType.GENERAL_AGENT,
)

# Lane change specialist
client.analyze(
    "abc123",
    analysis_type=AnalysisType.LANE_CHANGE,
)

# Batch agent run
client.analyze(
    ["abc123", "def456"],
    analysis_type=AnalysisType.LANE_CHANGE
)

# Batch agent run using a folder
client.analyze(
    folder="san_francisco_midday",
    analysis_type=AnalysisType.LANE_CHANGE
)
  • AnalysisType.GENERAL_AGENT: zero-shot edge-case hunting (General Edge Case)
  • AnalysisType.LANE_CHANGE: lane-change manoeuvre detection
  • AnalysisType.TURN: left/right turn behaviour
  • AnalysisType.RELATIVE_MOTION: relative motion between vehicles
  • AnalysisType.DRIVING_VIOLATIONS: speeding, stop, red-light, and related violations
Required Parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| ids or folder | str \| Sequence[str] | Video ID(s) or folder name (use one, not both) |
| analysis_type | AnalysisType | One of AnalysisType.GENERAL_AGENT, AnalysisType.LANE_CHANGE, AnalysisType.TURN, AnalysisType.RELATIVE_MOTION, AnalysisType.DRIVING_VIOLATIONS |
Optional Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| model_id | str | "Nomadic-VL-XLarge" | AI model to use |
| timeout | int | 2400 | Analysis timeout in seconds |
| wait_for_completion | bool | True | Wait for analysis to complete |
| concept_ids | List[str] | None | Concept IDs for specialized detection |
| return_subset | bool | False | Return subset of results |
Returns: Dict with video_id, analysis_id, mode, status, and events.
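For batch agent runs, the per-video result dicts can be reduced to a quick status and event-count summary before deeper inspection. A sketch assuming the batch form returns a list of the documented per-video dicts (video_id, status, events):

```python
def summarize_agent_results(results: list) -> dict:
    """Map each video_id to its status and number of detected events."""
    return {
        r["video_id"]: {"status": r.get("status"), "event_count": len(r.get("events", []))}
        for r in results
    }
```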

generate_structured_odd()

Produce an ASAM OpenODD–compliant CSV describing the vehicle’s operating domain.
Structured ODD export
from nomadicml import NomadicML, DEFAULT_STRUCTURED_ODD_COLUMNS

client = NomadicML(api_key="your_api_key")

# Use the default schema or customise it before calling the export.
columns = [
    {
        "name": "timestamp",
        "prompt": "Log the timestamp in ISO 8601 format (placeholder date 2024-01-01).",
        "type": "YYYY-MM-DDTHH:MM:SSZ",
    },
    {
        "name": "scenery.road.type",
        "prompt": "The type of road the vehicle is on.",
        "type": "categorical",
        "literals": ["motorway", "rural", "urban_street", "parking_lot", "unpaved", "unknown"],
    },
]

odd = client.generate_structured_odd(
    video_id="VIDEO_ID",
    columns=columns or DEFAULT_STRUCTURED_ODD_COLUMNS,
)

print(odd["csv"])
print(odd.get("share_url"))
Required Parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| video_id | str | ID of the analysed video whose operating domain you want to export |
Optional Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| columns | Sequence[StructuredOddColumn] | DEFAULT_STRUCTURED_ODD_COLUMNS | Column definitions matching the UI schema (name, prompt, type, optional literals) |
| timeout | int | client default | Request timeout override in seconds |
Returns: Dict containing:
  • csv: The generated CSV text.
  • columns: The resolved column schema (after validation).
  • reasoning_trace_path: Final Firestore path used for reasoning logs.
  • share_id / share_url: Optional sharing metadata if the backend stored the export.
  • processing_time: Time spent generating the export.
  • raw: Full backend response payload for additional introspection.
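Because odd["csv"] is plain CSV text, the standard-library csv module can load it into rows keyed by your column names. A minimal sketch using the example schema above:

```python
import csv
import io

def parse_odd_csv(csv_text: str) -> list:
    """Parse generated ODD CSV text into a list of dicts keyed by column name."""
    return list(csv.DictReader(io.StringIO(csv_text)))
```

This keeps the export in memory; swap io.StringIO for a file handle when writing the CSV to disk first.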

cloud_integrations()

Use this helper to manage reusable GCS/S3 credentials. (See Cloud Storage Uploads for instructions on creating service-account keys.)
Cloud integrations helper
# List every integration visible to your user/org
client.cloud_integrations.list()

# Filter by provider
client.cloud_integrations.list(type="gcs")

# Add a new GCS integration using a service account JSON file
client.cloud_integrations.add(
    type="gcs",
    name="Fleet bucket",
    bucket="drive-monitor",
    prefix="uploads/",
    credentials="service-account.json",  # path or dict/bytes
)

# Add a new S3 integration 
client.cloud_integrations.add(
    type="s3",
    name="AWS archive",
    bucket="drive-archive",
    prefix="raw/",
    region="us-east-1",
    credentials={
        "accessKeyId": "...",
        "secretAccessKey": "...",
        "sessionToken": "...",  # optional
    },
)
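When several saved integrations share a bucket, upload() needs an explicit integration_id. You can preselect one by matching a gs:// URI against the integration list yourself; a sketch, assuming each dict from cloud_integrations.list() exposes id, bucket, and prefix keys (these field names are assumptions):

```python
from typing import Optional
from urllib.parse import urlparse

def pick_integration_id(integrations: list, gs_uri: str) -> Optional[str]:
    """Return the id of the first integration whose bucket and prefix match the URI."""
    parsed = urlparse(gs_uri)  # gs://bucket/key -> netloc=bucket, path=/key
    bucket, key = parsed.netloc, parsed.path.lstrip("/")
    for integ in integrations:
        if integ.get("bucket") == bucket and key.startswith(integ.get("prefix", "")):
            return integ.get("id")
    return None
```

The returned id can be passed as integration_id to upload() to skip the SDK's trial-and-error matching.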

my_videos()

Retrieve your uploaded videos, optionally filtered by folder.
# Get all videos
videos = client.my_videos()

# Get videos in specific folder
videos = client.my_videos(folder_id="my_folder")
Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| folder_id | str \| None | None | Filter videos by folder name |
Returns: List[Dict] with video information (video_id, filename, duration, size, etc.)
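The returned list can be filtered client-side before further processing, for example to keep only longer clips. A sketch assuming duration is reported in seconds (the exact unit may differ):

```python
def videos_longer_than(videos: list, min_seconds: float) -> list:
    """Filter my_videos() output to videos at or above a minimum duration."""
    return [v for v in videos if v.get("duration", 0) >= min_seconds]
```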

create_or_get_folder()

Create a folder if it does not exist and get its metadata. Helpful when you need a folder identifier before uploading.
driving = client.create_or_get_folder("driving", scope="org")
print(driving["id"], driving["org_id"])
Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| name | str | (required) | Folder name to create or fetch |
| scope | 'user' \| 'org' | 'user' | Scope to search/create in |
| org_id | str \| None | None | Optional explicit org ID (rarely needed; defaults to the caller’s org) |
Returns: Dict with folder id, name, org_id, and scope

delete_video()

Remove a video by ID.
client.delete_video("video_id")
Parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| video_id | str | ID of the video to delete (required) |
Returns: Dict with deletion status

search()

Run semantic search across all analysed events inside a folder. You can use open-ended natural language queries.
results = client.search(
    query="red pickup truck overtaking",
    folder_name="my_fleet_uploads",
    scope="org",                 # optional, defaults to "user"
)

print(results["summary"])
for thought in results["thoughts"]:
    print("•", thought)
Required Parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| query | str | Natural-language search query |
| folder_name | str | Human-friendly folder name to search within |
Optional Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| scope | 'user' \| 'org' \| 'sample' | 'user' | Scope hint for folder resolution. Use 'org' for organization folders and 'sample' for demo/sample folders. |
Returns: Dict with:
  • summary: string overview of the findings
  • thoughts: list of reasoning steps (chain-of-thought) shown in the UI
  • matches: list of {video_id, analysis_id, event_index, similarity, reason}
  • session_id: identifier for the associated search session (useful for re-fetching or sharing)
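For review workflows it is often useful to regroup the matches list per video, most similar first. A sketch over the documented match shape:

```python
from collections import defaultdict

def matches_by_video(matches: list) -> dict:
    """Group search matches by video_id, ordered by descending similarity."""
    grouped = defaultdict(list)
    for m in matches:
        grouped[m["video_id"]].append(m)
    for vid in grouped:
        grouped[vid].sort(key=lambda m: m["similarity"], reverse=True)
    return dict(grouped)
```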

get_visuals()

Retrieve thumbnail URLs for all events in an analysis. If thumbnails don’t exist, they will be generated automatically.
# Get all thumbnail URLs from an analysis
visuals = client.get_visuals("video_id", "analysis_id")

# Returns list of thumbnail URLs
# ['https://storage.googleapis.com/.../event_0_thumb.jpg',
#  'https://storage.googleapis.com/.../event_1_thumb.jpg']
Parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| video_id | str | ID of the video (required) |
| analysis_id | str | ID of the analysis containing events (required) |
Returns: List[str] of thumbnail URLs with bounding box annotations

get_visual()

Retrieve a single thumbnail URL for a specific event in an analysis.
# Get thumbnail for the first event (index 0)
thumbnail_url = client.get_visual("video_id", "analysis_id", 0)

# Get thumbnail for the third event (index 2)
thumbnail_url = client.get_visual("video_id", "analysis_id", 2)
Parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| video_id | str | ID of the video (required) |
| analysis_id | str | ID of the analysis containing events (required) |
| event_idx | int | 0-based index of the event (required) |
Returns: str - Single thumbnail URL
Raises: ValueError if event index is out of range

get_batch_analysis()

Retrieve analysis results for a completed batch. Optionally filter events by approval status (approved, rejected, pending, or invalid).
# Get all results from a batch
batch_results = client.get_batch_analysis("batch_id")

# Filter for only approved events
approved_only = client.get_batch_analysis(
    "batch_id",
    filter="approved"
)

# Filter for multiple statuses
pending_and_rejected = client.get_batch_analysis(
    "batch_id",
    filter=["pending", "rejected"]
)

# Access batch metadata
print(batch_results["batch_metadata"]["batch_type"])
print(batch_results["batch_metadata"]["batch_viewer_url"])

# Iterate through video results
for result in batch_results["results"]:
    print(f"Video: {result['video_id']}")
    print(f"Events: {len(result['events'])}")
    for event in result["events"]:
        print(f"  - {event['label']} at {event['start_time']}-{event['end_time']}")
Required Parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| batch_id | str | ID of the batch to retrieve (required) |
Optional Parameters:
| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| filter | str \| List[str] | None | Filter events by approval status. Valid values: 'approved', 'rejected', 'pending', 'invalid'. If not specified, returns all events. |
Returns: Dict with two keys:
  • batch_metadata: Contains batch information
    • batch_id: The batch identifier
    • batch_viewer_url: URL to view batch results in the web UI
    • batch_type: Type of batch ("ask" or "agent")
    • analysis_type: The analysis type used
    • review_status: Whether the batch analysis has been fully reviewed by someone
    • review_status_updated_at: Time the batch analysis review status was last updated; N/A if not reviewed yet
    • metadata: Dictionary of custom metadata key-value pairs (empty dict if no metadata exists)
    • Configuration details (for Ask batches: prompt, category, etc.)
  • results: List of per-video analysis dictionaries
    • video_id: ID of the video
    • analysis_id: ID of the analysis
    • mode: Analysis mode used
    • status: Analysis status
    • events: List of detected events (filtered by approval status if specified)
    • Additional fields depending on analysis type
Raises:
  • NomadicMLError: If batch is not completed or other API errors occur
  • ValidationError: If filter contains invalid values
The batch must be completed before you can retrieve its results. If you need to check batch status first, use the batch viewer URL.
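The nested batch_metadata/results structure flattens naturally into one row per event, e.g. for CSV export or a dataframe. A sketch using only the documented keys:

```python
def flatten_batch(batch_results: dict) -> list:
    """One {video_id, label, start_time, end_time} row per event across the batch."""
    rows = []
    for result in batch_results.get("results", []):
        for event in result.get("events", []):
            rows.append({
                "video_id": result["video_id"],
                "label": event.get("label"),
                "start_time": event.get("start_time"),
                "end_time": event.get("end_time"),
            })
    return rows
```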

add_batch_metadata()

Add or update custom metadata for a batch analysis. Metadata is stored as key-value pairs and can be used to track experiments, versions, or any custom information about your batch runs.
# Add metadata to a batch
client.add_batch_metadata(
    "batch_id",
    {
        "experiment_id": "exp-001",
        "version": 2,
        "model": "Nomadic-VL-XLarge",
        "notes": "Test run with new parameters"
    }
)

# Update existing metadata (new keys will be merged, existing keys overwritten)
client.add_batch_metadata(
    "batch_id",
    {
        "version": 3,
        "status": "completed"
    }
)

# Retrieve metadata later
batch_results = client.get_batch_analysis("batch_id")
metadata = batch_results["batch_metadata"]["metadata"]
print(f"Experiment: {metadata.get('experiment_id')}")
print(f"Version: {metadata.get('version')}")
Required Parameters:
| Parameter | Type | Description |
| --- | --- | --- |
| batch_id | str | ID of the batch to update (required) |
| metadata | Dict[str, Union[str, int]] | Dictionary with string keys and string/int values (non-nested) |
Returns: Dict with success status and updated metadata:
  • success: Boolean indicating if the operation succeeded
  • batch_id: The batch identifier
  • metadata: Complete metadata dictionary after merge
Raises:
  • ValidationError: If metadata format is invalid (e.g., nested objects, non-string keys, invalid value types)
  • NomadicMLError: If batch is not found or you don’t have permission to modify it
Only the batch owner can add or update metadata. New metadata keys are merged with existing metadata, with new values overwriting existing keys of the same name. Metadata values must be strings or integers only; nested objects, arrays, booleans, and null values are not supported.
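Because invalid metadata raises ValidationError server-side, it can be convenient to pre-check the documented constraints (string keys, flat str/int values) before calling add_batch_metadata. A client-side sketch; note that bool is a subclass of int in Python, so it must be rejected explicitly:

```python
def validate_batch_metadata(metadata: dict) -> list:
    """Return a list of constraint violations; an empty list means the payload looks valid."""
    problems = []
    for key, value in metadata.items():
        if not isinstance(key, str):
            problems.append(f"non-string key: {key!r}")
        # Check bool first: isinstance(True, int) is True in Python
        if isinstance(value, bool) or not isinstance(value, (str, int)):
            problems.append(f"invalid value for {key!r}: {value!r}")
    return problems
```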
For a step-by-step tutorial, head over to SDK Usage Examples.