Best Practices

This guide provides recommendations and best practices to help you get the most out of NomadicML DriveMonitor.

Video Capture Guidelines

The quality of your analysis begins with the quality of your video input. Follow these guidelines for optimal results:

Camera Placement

  • Front View: Mount the camera at the center of the windshield for the best perspective
  • Height: Position at approximately eye level (4-5 feet above ground)
  • Angle: Slight downward tilt (10-15 degrees) to capture the road ahead
  • Clear View: Ensure the camera has an unobstructed view of the road

Video Quality

  • Resolution: Minimum 720p (1280×720), recommended 1080p (1920×1080)
  • Frame Rate: Minimum 24 fps, recommended 30 fps
  • Bitrate: Minimum 4 Mbps, recommended 8 Mbps
  • Format: MP4 with H.264 encoding works best
  • Lighting: Ensure adequate lighting for clear visibility
  • Weather: Be aware that heavy rain, snow, or fog can affect analysis accuracy
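
If you want to verify a file against these minimums before uploading, a tool such as ffprobe (part of FFmpeg) can report the relevant stream properties. The sketch below is one way to run that check from Python; the thresholds mirror the list above, and the script assumes ffprobe is installed and on your PATH.

# Sketch: check a video against the minimum quality recommendations above (assumes ffprobe is installed)
import json
import subprocess

def check_video_quality(path):
    probe = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=width,height,r_frame_rate,bit_rate",
            "-of", "json", path,
        ],
        capture_output=True, text=True, check=True,
    )
    stream = json.loads(probe.stdout)["streams"][0]

    width, height = stream["width"], stream["height"]
    num, den = stream["r_frame_rate"].split("/")  # e.g. "30000/1001"
    fps = float(num) / float(den)
    bitrate = int(stream.get("bit_rate") or 0)  # Some containers omit bit_rate

    issues = []
    if height < 720:
        issues.append(f"resolution {width}x{height} is below 720p")
    if fps < 24:
        issues.append(f"frame rate {fps:.1f} fps is below 24 fps")
    if 0 < bitrate < 4_000_000:
        issues.append(f"bitrate {bitrate / 1e6:.1f} Mbps is below 4 Mbps")
    return issues

print(check_video_quality("dashcam.mp4") or "Video meets the minimum recommendations")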

Recording Length

  • Optimal Duration: 5-20 minutes per video for best performance
  • Split Longer Drives: For drives over 30 minutes, consider splitting into multiple videos
  • Context: Include enough footage before and after key events so they can be interpreted properly

Data Management

Effective data management ensures you can find and use your analysis results when you need them.

Video Organization

  • Consistent Naming: Use a consistent naming scheme for videos (e.g., YYYY-MM-DD_Driver_Route.mp4)
  • Metadata: Add relevant metadata when uploading (driver, vehicle, route, etc.)
  • Tagging: Use tags to categorize videos by purpose, location, or scenario
  • Archiving: Develop a policy for archiving older videos to manage storage
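
As a concrete illustration of the naming scheme above, the helper below builds filenames in the YYYY-MM-DD_Driver_Route.mp4 pattern; the fields are only an example and can be adapted to whatever metadata you track.

# Sketch: build a filename in the YYYY-MM-DD_Driver_Route.mp4 pattern (fields are illustrative)
from datetime import date

def video_filename(driver, route, recorded_on=None, extension="mp4"):
    recorded_on = recorded_on or date.today()
    # Replace spaces so the name stays filesystem- and URL-friendly
    driver = driver.strip().replace(" ", "-")
    route = route.strip().replace(" ", "-")
    return f"{recorded_on:%Y-%m-%d}_{driver}_{route}.{extension}"

print(video_filename("Jane Doe", "Route 101 North", date(2024, 6, 1)))
# -> 2024-06-01_Jane-Doe_Route-101-North.mp4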

Analysis Retention

Consider how long you need to retain analysis data:

  • Short-term (30 days): Recent training sessions or evaluations
  • Medium-term (90 days): Trend analysis and pattern recognition
  • Long-term (1+ years): Historical comparisons and compliance documentation
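
One lightweight way to act on these windows is to tag each analysis with a retention tier based on its age; the sketch below is a generic helper built on the tiers above and is independent of any particular storage backend.

# Sketch: classify an analysis record into a retention tier by age (windows from the list above)
from datetime import date, timedelta

def retention_tier(created_on, today=None):
    age = (today or date.today()) - created_on
    if age <= timedelta(days=30):
        return "short-term"
    if age <= timedelta(days=90):
        return "medium-term"
    return "long-term"

print(retention_tier(date(2024, 1, 10), today=date(2024, 2, 1)))  # -> short-term (22 days old)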

API Usage Optimization

When working with the NomadicML API, follow these practices for optimal performance and reliability.

Rate Limiting

  • Respect Limits: Stay within the published rate limits (60 requests/minute, 10,000/day)
  • Batch Operations: Combine multiple operations into fewer API calls when possible
  • Implement Backoff: Use exponential backoff when receiving rate limit errors
  • Monitor Usage: Track your API usage to avoid unexpected throttling
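
A minimal sketch of the backoff pattern is shown below: it retries a GET when the API answers with HTTP 429, doubling the wait (with jitter) on each attempt and honoring a Retry-After header if one is returned. The base URL, header, and retry limits are illustrative and mirror the pagination example in the next subsection.

# Sketch: exponential backoff on rate-limit (HTTP 429) responses; retry limits are illustrative
import random
import time

import requests

API_BASE = "https://api.nomadicml.com/api"
HEADERS = {"X-API-Key": "your_api_key"}

def get_with_backoff(url, params=None, max_retries=5):
    delay = 1.0  # Initial wait in seconds; doubles after each rate-limited attempt
    for _ in range(max_retries):
        response = requests.get(url, headers=HEADERS, params=params, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        # Prefer the server's Retry-After hint; otherwise back off exponentially with jitter
        wait = float(response.headers.get("Retry-After", delay + random.uniform(0, 1)))
        time.sleep(wait)
        delay *= 2
    raise RuntimeError(f"Still rate limited after {max_retries} attempts: {url}")

events_page = get_with_backoff(f"{API_BASE}/events", params={"limit": 100}).json()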

Efficient Requests

  • Pagination: Use pagination parameters for large result sets
  • Filtering: Apply server-side filters to reduce data transfer
  • Field Selection: Request only the fields you need
  • Compression: Use gzip compression for larger responses

# Example of efficient API usage
import requests

API_BASE = "https://api.nomadicml.com/api"
API_KEY = "your_api_key"

headers = {
    "X-API-Key": API_KEY,
    "Accept-Encoding": "gzip"  # Request compression
}

# Use pagination and filters
params = {
    "limit": 100,  # Page size
    "offset": 0,   # Starting point
    "event_type": "Traffic Violation",  # Filter by type
    "fields": "video_id,time,type,severity,description"  # Select only needed fields
}

all_events = []
while True:
    response = requests.get(
        f"{API_BASE}/events",
        headers=headers,
        params=params,
        timeout=30  # Avoid hanging on a stalled connection
    )
    response.raise_for_status()  # Surface HTTP errors (e.g., 429) instead of mis-parsing them

    data = response.json()
    events = data["events"]
    
    if not events:
        break
        
    all_events.extend(events)
    params["offset"] += params["limit"]  # Move to next page

Caching

Implement client-side caching for frequently accessed data:

  • TTL-based Caching: Cache responses with appropriate time-to-live values
  • Conditional Requests: Use etags or last-modified headers for validation
  • Local Storage: Store reference data locally (e.g., event types, DMV rules)
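
A minimal sketch of TTL-based caching is shown below; it keeps parsed GET responses in an in-memory dictionary keyed by URL and query parameters. The five-minute TTL is illustrative, and a shared store such as Redis plays the same role in a multi-process deployment.

# Sketch: in-memory TTL cache for GET responses (the 300-second TTL is illustrative)
import time

import requests

_cache = {}  # (url, params) -> (expires_at, parsed JSON body)

def cached_get(url, headers=None, params=None, ttl=300):
    key = (url, tuple(sorted((params or {}).items())))
    entry = _cache.get(key)
    if entry and entry[0] > time.time():
        return entry[1]  # Still fresh; skip the network round trip

    response = requests.get(url, headers=headers, params=params, timeout=30)
    response.raise_for_status()
    body = response.json()
    _cache[key] = (time.time() + ttl, body)
    return body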

SDK Best Practices

When using the NomadicML Python SDK, follow these recommendations:

Environment Setup

  • Virtual Environments: Use virtual environments to manage dependencies
  • Version Pinning: Pin the SDK version in your requirements.txt file
  • Configuration Management: Use environment variables or secure configuration files for API keys

# Example environment setup
python -m venv nomadic-env
source nomadic-env/bin/activate  # Or nomadic-env\Scripts\activate on Windows
pip install nomadicml==0.1.0
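
For the configuration-management point above, one common pattern is to keep the key out of source control entirely and read it from the environment at startup; the variable name below is only an example.

# Sketch: read the API key from an environment variable (the variable name is an example)
import os

from nomadicml import NomadicML

api_key = os.environ.get("NOMADICML_API_KEY")
if not api_key:
    raise RuntimeError("Set NOMADICML_API_KEY before running this script")

client = NomadicML(api_key=api_key)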

Error Handling

Implement robust error handling:

from nomadicml import NomadicML
from nomadicml.exceptions import (
    AuthenticationError,
    VideoUploadError,
    AnalysisError,
    NomadicMLError
)

try:
    client = NomadicML(api_key="your_api_key")
    result = client.video.upload_and_analyze("path/to/video.mp4")
except AuthenticationError:
    # Handle authentication issues
    print("Authentication failed - check your API key")
except VideoUploadError as e:
    # Handle upload-specific errors
    print(f"Upload failed: {e}")
except AnalysisError as e:
    # Handle analysis-specific errors
    print(f"Analysis failed: {e}")
except NomadicMLError as e:
    # Handle all other SDK errors
    print(f"An error occurred: {e}")
except Exception as e:
    # Handle unexpected errors
    print(f"Unexpected error: {e}")

Async Operations

For better performance with multiple operations:

import asyncio
import os
from nomadicml import NomadicML

async def process_folder(folder_path):
    client = NomadicML(api_key="your_api_key")
    tasks = []
    
    # Create upload tasks for all videos in folder
    for filename in os.listdir(folder_path):
        if filename.endswith((".mp4", ".mov", ".avi")):
            file_path = os.path.join(folder_path, filename)
            task = asyncio.create_task(
                upload_and_process(client, file_path)
            )
            tasks.append(task)
    
    # Wait for all uploads to complete
    results = await asyncio.gather(*tasks, return_exceptions=True)
    return results

async def upload_and_process(client, file_path):
    # The SDK calls are used synchronously elsewhere in this guide, so offload them to
    # worker threads; awaiting asyncio.to_thread lets the uploads run concurrently
    # instead of blocking the event loop and executing one after another.
    upload_result = await asyncio.to_thread(
        client.video.upload_video,
        source="file",
        file_path=file_path
    )

    video_id = upload_result["video_id"]
    print(f"Uploaded {file_path} with ID {video_id}")

    # Start analysis without blocking the event loop
    await asyncio.to_thread(client.video.analyze_video, video_id)

    # Return the video ID for later reference
    return {
        "file_path": file_path,
        "video_id": video_id
    }

# Run the async function
results = asyncio.run(process_folder("/path/to/videos"))

Analysis Interpretation

Getting the most from your analysis requires proper interpretation of the results.

Context Matters

  • Environmental Factors: Consider weather, traffic, and road conditions
  • Vehicle Limitations: Account for vehicle capabilities and characteristics
  • Driving Purpose: Interpret events in the context of the driving purpose (training, testing, etc.)

Severity Assessment

When evaluating event severity:

  • Low Severity: Opportunities for improvement, not immediate safety concerns
  • Medium Severity: Notable issues that should be addressed
  • High Severity: Critical safety concerns requiring immediate attention

Trend Analysis

Look beyond individual events to identify patterns:

  • Frequency Analysis: Track event frequency over time
  • Location Patterns: Identify problematic locations or scenarios
  • Driver Comparison: Compare performance across different drivers
  • Before/After: Measure the impact of training or interventions
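
As a starting point for frequency analysis, the sketch below buckets events by month and type. It assumes each event carries time and type fields (as requested in the pagination example earlier) and that timestamps are ISO formatted; adjust the parsing to match your actual data.

# Sketch: count events per month and type (assumes ISO timestamps in a "time" field)
from collections import Counter
from datetime import datetime

def monthly_event_counts(events):
    counts = Counter()
    for event in events:
        month = datetime.fromisoformat(event["time"]).strftime("%Y-%m")
        counts[(month, event["type"])] += 1
    return counts

sample_events = [
    {"time": "2024-05-03T14:21:00", "type": "Traffic Violation"},
    {"time": "2024-05-18T09:02:00", "type": "Near Collision"},
    {"time": "2024-06-02T16:45:00", "type": "Traffic Violation"},
]
for (month, event_type), count in sorted(monthly_event_counts(sample_events).items()):
    print(month, event_type, count)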

Performance Optimization

For systems processing large volumes of videos, consider these optimization strategies:

Batch Processing

  • Process videos in batches during off-peak hours
  • Use background workers for upload and analysis tasks
  • Implement queuing systems for large workloads
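
A minimal sketch of the queue-and-worker idea using only the standard library is shown below; the handle function is a placeholder for your real upload and analysis step (for example, the process_video function under Parallel Processing).

# Sketch: background workers pulling upload jobs from a queue (standard library only)
import queue
import threading

def handle(path):
    # Placeholder for the real work, e.g. an upload-and-analyze call
    print(f"processing {path}")

def worker(jobs):
    while True:
        path = jobs.get()
        try:
            if path is None:  # Sentinel value: this worker is done
                break
            handle(path)
        finally:
            jobs.task_done()

jobs = queue.Queue()
threads = [threading.Thread(target=worker, args=(jobs,)) for _ in range(4)]
for t in threads:
    t.start()

for path in ["video1.mp4", "video2.mp4", "video3.mp4"]:
    jobs.put(path)

jobs.join()          # Wait until every queued job has been processed
for _ in threads:
    jobs.put(None)   # One sentinel per worker so each one exits
for t in threads:
    t.join()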

Resource Management

  • Compress videos before upload to reduce bandwidth
  • Clean up temporary files after processing
  • Implement TTL (time-to-live) policies for stored videos
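
For the compression point above, one option is to re-encode with FFmpeg before uploading; the sketch below assumes ffmpeg is installed, and the CRF and preset values are illustrative trade-offs between file size and quality.

# Sketch: re-encode a video with FFmpeg before upload (assumes ffmpeg is installed; settings are illustrative)
import subprocess

def compress_for_upload(src, dst):
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-c:v", "libx264",   # H.264, the encoding recommended earlier in this guide
            "-crf", "23",        # Lower CRF = higher quality and larger files
            "-preset", "medium",
            "-c:a", "aac",
            dst,
        ],
        check=True,
    )

compress_for_upload("raw_drive.mov", "compressed_drive.mp4")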

Parallel Processing

For enterprise use cases:

from concurrent.futures import ThreadPoolExecutor
from nomadicml import NomadicML

def process_video(file_path):
    client = NomadicML(api_key="your_api_key")
    try:
        result = client.video.upload_and_analyze(
            file_path,
            wait_for_completion=False  # Don't wait for analysis to complete
        )
        return {
            "file_path": file_path,
            "video_id": result["video_id"],
            "status": "uploaded"
        }
    except Exception as e:
        return {
            "file_path": file_path,
            "error": str(e),
            "status": "failed"
        }

# Process multiple videos in parallel
video_paths = ["video1.mp4", "video2.mp4", "video3.mp4", "video4.mp4"]

with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(process_video, video_paths))
    
print(f"Processed {len(results)} videos")

Security Best Practices

Protect your data and access with these security measures:

API Key Management

  • Rotation: Rotate API keys regularly (every 90 days recommended)
  • Scope Limitation: Use the minimum required permissions
  • Secure Storage: Store API keys in secure credential stores, not in code
  • Monitoring: Monitor API key usage for unusual patterns

Data Security

  • Encryption: Ensure data is encrypted in transit and at rest
  • Access Control: Implement proper access controls for videos and analysis data
  • Data Minimization: Only store the data you need
  • Retention Policy: Implement data retention and deletion policies

Audit Trail

Maintain an audit trail of system activities:

  • Log all video uploads and deletions
  • Track who accessed analysis results
  • Record API key creation and revocation
  • Monitor for suspicious activity
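
A minimal sketch of the first two points, using Python's standard logging module, is shown below; in practice these records would typically be shipped to a central logging or SIEM system, and the field names are illustrative.

# Sketch: audit-log uploads and result access with the logging module (field names are illustrative)
import logging

logging.basicConfig(
    filename="audit.log",
    level=logging.INFO,
    format="%(asctime)s %(name)s %(message)s",
)
audit = logging.getLogger("drivemonitor.audit")

def log_upload(user, video_id, filename):
    audit.info("video_uploaded user=%s video_id=%s file=%s", user, video_id, filename)

def log_results_access(user, video_id):
    audit.info("analysis_viewed user=%s video_id=%s", user, video_id)

log_upload("jane@example.com", "abc123", "2024-06-01_morning_route.mp4")
log_results_access("ops@example.com", "abc123")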

Next Steps

Now that you understand the best practices, explore these advanced topics: