

Detect custom events in videos using natural language descriptions. Perfect for finding specific scenarios like “green crosswalk” or “yellow taxi”. Works for all video lengths, including long videos, with fast results.
Ask examples
```python
# Single-video prompt
client.analyze(
    "abc123",
    analysis_type=AnalysisType.ASK,
    custom_event="vehicles parked on sidewalk"
)

# With thumbnails (creates annotated bounding boxes)
client.analyze(
    "abc123",
    analysis_type=AnalysisType.ASK,
    custom_event="delivery vans double parked",
    is_thumbnail=True
)

# With overlay extraction for telemetry data
from nomadicml import OverlayMode

client.analyze(
    "abc123",
    analysis_type=AnalysisType.ASK,
    custom_event="speeding events",
    overlay_mode=OverlayMode.CUSTOM  # Extracts custom fields from uploaded metadata
)

# Batch Ask: analyze multiple IDs at once
client.analyze(
    ["abc123", "def456"],
    analysis_type=AnalysisType.ASK,
    custom_event="jaywalking near intersections"
)

# Batch Ask: analyze every video in a folder
client.analyze(
    folder="fleet_uploads",
    analysis_type=AnalysisType.ASK,
    custom_event="jaywalking near intersections"
)
```
Required Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| `id(s)` or `folder` | `str \| Sequence[str]` | Video ID(s) or folder name (use one, not both) |
| `analysis_type` | `AnalysisType` | Must be `AnalysisType.ASK` |
| `custom_event` | `str` | Event description to detect (e.g., "green crosswalk") |
Optional Parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `custom_category` | `CustomCategory \| str` | `"driving"` | Optional context to steer the answer |
| `model_id` | `str` | `"Nomadic-VL-XLarge"` | AI model to use |
| `timeout` | `int` | `2400` | Analysis timeout in seconds |
| `wait_for_completion` | `bool` | `True` | Wait for analysis to complete |
| `is_thumbnail` | `bool` | `False` | Generate annotated bounding-box thumbnails |
| `return_subset` | `bool` | `False` | Return a subset of results |
| `use_enhanced_motion_analysis` | `bool` | `False` | Generate enhanced motion captions for events |
| `confidence` | `str` | `"low"` | Confidence level for event prediction: `"low"` or `"high"` |
| `overlay_mode` | `OverlayMode \| str` | `None` | Overlay extraction mode: `OverlayMode.TIMESTAMPS`, `OverlayMode.GPS`, or `OverlayMode.CUSTOM` (one at a time) |
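To make the call constraints above concrete, here is a minimal sketch of a hypothetical helper (not part of the SDK) that assembles keyword arguments for `client.analyze()` while enforcing two documented rules: pass video ID(s) *or* a folder name, never both, and always supply a `custom_event`:

```python
from typing import Optional, Sequence, Union


def build_ask_kwargs(
    ids: Optional[Union[str, Sequence[str]]] = None,
    folder: Optional[str] = None,
    custom_event: str = "",
    **options,
) -> dict:
    """Assemble keyword arguments for an ASK analysis call.

    Hypothetical helper for illustration: enforces that exactly one of
    ids or folder is provided, and that custom_event is non-empty.
    Video ID(s) are passed positionally to analyze(), so only the
    folder variant adds a key here.
    """
    if (ids is None) == (folder is None):
        raise ValueError("Pass exactly one of ids or folder, not both")
    if not custom_event:
        raise ValueError("custom_event is required for ASK analysis")
    kwargs = {"custom_event": custom_event, **options}
    if folder is not None:
        kwargs["folder"] = folder
    return kwargs
```

With a helper like this, optional parameters such as `confidence="high"` or `timeout=3600` flow through `**options` unchanged.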
nomadicml.video.CustomCategory
  • CustomCategory.DRIVING
  • CustomCategory.ROBOTICS
  • CustomCategory.AERIAL
  • CustomCategory.SECURITY
  • CustomCategory.ENVIRONMENT
nomadicml.video.OverlayMode
  • OverlayMode.TIMESTAMPS
  • OverlayMode.GPS
  • OverlayMode.CUSTOM
About Overlay Modes: Overlay modes allow you to extract telemetry data from on-screen overlays in your videos. Choose the mode based on your overlay type:
  • TIMESTAMPS and GPS: Use these modes when your video has unstructured overlays visible on screen (like dashcam timestamps or GPS coordinates). Our models will automatically detect and extract these values from the visual overlay text.
  • CUSTOM: Use this mode when you’ve uploaded structured metadata JSON (per the spec) describing your custom overlay fields. This mode extracts the specific fields you’ve defined like speed, altitude, or other telemetry values.
Remember: Metadata must be provided at upload time. The overlay_mode parameter only controls which extraction method to use during analysis.
Returns: Dict with video_id, analysis_id, mode, status, summary, and events.
  • If is_thumbnail=True, each event includes an annotated_thumbnail_url.
  • If overlay_mode is specified and the video was uploaded with metadata, each event includes an overlay field with extracted telemetry data as {field_name: {"start": value, "end": value}} pairs.
  • Overlay values are only surfaced through the overlay field; the SDK no longer returns duplicate frame_*_start or frame_*_end keys at the root level.
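As an illustration of consuming that return shape, the snippet below flattens each event's `overlay` field into `(field, start, end)` rows. The `result` dict is a handmade sample matching the fields documented above, not real API output:

```python
# Sample result shaped like the documented return value (illustrative data only).
result = {
    "video_id": "abc123",
    "analysis_id": "an_001",
    "mode": "ask",
    "status": "completed",
    "summary": "1 speeding event detected",
    "events": [
        {
            "description": "speeding event",
            "overlay": {"speed": {"start": 42, "end": 67}},
        },
    ],
}

# Flatten each event's overlay telemetry; events without an overlay
# field are skipped via the .get() default.
rows = [
    (field, vals["start"], vals["end"])
    for event in result["events"]
    for field, vals in event.get("overlay", {}).items()
]
print(rows)  # [('speed', 42, 67)]
```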