See what
ThirdEye sees.
Three models. Trained on Indian roads.
324,000 labeled frames. Running in real time.
Every pixel,
classified.
Our SegFormer model — fine-tuned on the IDD dataset — labels every pixel of every frame across 28 road-specific classes: roads, autorickshaws, cattle, cyclists, tunnels, and more. Trained exclusively on Indian road conditions.
Drag the divider to reveal the segmentation layer.
28 classes · IDD-trained · 30 fps · 324K labeled frames in dataset
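The per-pixel labeling step above reduces to one operation: the model emits a score for every class at every pixel, and each pixel takes its highest-scoring class. A minimal sketch, assuming a (classes × height × width) score map; the class names here are an illustrative subset, not the exact IDD label set.

```python
import numpy as np

# Illustrative subset of the 28 road-specific classes (names assumed)
CLASS_NAMES = ["road", "autorickshaw", "cattle", "cyclist", "tunnel"]

def segment(logits: np.ndarray) -> np.ndarray:
    """Turn a (C, H, W) per-class score map into an (H, W) label map:
    each pixel gets the index of its highest-scoring class."""
    return logits.argmax(axis=0)

# Toy 2x2 frame scored against 5 classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(len(CLASS_NAMES), 2, 2))
labels = segment(logits)
print(labels.shape)  # (2, 2) -- one class index per pixel
```

In the real pipeline the score map comes from the fine-tuned SegFormer decoder, upsampled to frame resolution before the argmax; the segmentation overlay is just this label map mapped through a per-class color palette.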
What’s on the road.
Our YOLO model — trained on 117,000 Indian road images — detects and classifies every road user in a frame. From cars and motorcycles to autorickshaws and cattle. Filter by object class to isolate what matters to you.
15 classes · 117K training images · Indian road–specific · YOLOv11m architecture
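The class-filter control described above is a simple predicate over the detector's output. A minimal sketch, assuming each detection carries a class name, a confidence, and a bounding box; the detections and helper name are illustrative, not the actual ThirdEye API.

```python
# Each detection: (class_name, confidence, (x1, y1, x2, y2)) -- values illustrative
detections = [
    ("car",          0.91, (10, 20, 120, 90)),
    ("autorickshaw", 0.84, (130, 40, 210, 110)),
    ("cattle",       0.77, (300, 60, 380, 140)),
    ("motorcycle",   0.88, (40, 100, 90, 160)),
]

def filter_by_class(dets, wanted):
    """Keep only detections whose class is in `wanted` -- the same idea
    behind isolating one object class in the viewer."""
    return [d for d in dets if d[0] in wanted]

only_livestock = filter_by_class(detections, {"cattle"})
print([d[0] for d in only_livestock])  # ['cattle']
```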
Context at a glance.
Using CLIP zero-shot classification, ThirdEye identifies driving conditions from a single frame — no fine-tuning required. Weather, road type, and time of day, all inferred automatically across every clip in the dataset.
01 — Weather
02 — Scene
03 — Time of Day
56 unique conditions captured · 4 weather states · 6 scene types · 4 times of day
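Zero-shot classification of this kind works by comparing one image embedding against a set of text-prompt embeddings and picking the closest. A minimal sketch of that scoring step, assuming L2-normalized cosine similarity; the prompts and toy embeddings are illustrative, and a real run would use a CLIP image/text encoder for each of the three attribute sets (weather, scene, time of day).

```python
import numpy as np

# Candidate prompts for one attribute set; wording is illustrative
WEATHER_PROMPTS = [
    "a photo taken in clear weather",
    "a photo taken in rain",
    "a photo taken in fog",
    "a photo taken in low light",
]

def zero_shot_pick(image_emb: np.ndarray, text_embs: np.ndarray) -> int:
    """CLIP-style zero-shot choice: L2-normalize the image and prompt
    embeddings, take cosine similarities, return the best prompt index."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int((txt @ img).argmax())

# Toy embeddings standing in for encoder outputs
rng = np.random.default_rng(1)
text_embs = rng.normal(size=(len(WEATHER_PROMPTS), 8))
image_emb = text_embs[1] + 0.1 * rng.normal(size=8)  # near the "rain" prompt
print(WEATHER_PROMPTS[zero_shot_pick(image_emb, text_embs)])
```

Because the text side is fixed per attribute, the prompt embeddings are computed once and every frame costs only one image encoding plus a handful of dot products — which is what makes labeling every clip in the dataset cheap with no fine-tuning.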
134 hours. 3,300 km.
All Indian roads.
Every frame above was captured, processed, and labeled entirely by ThirdEye’s pipeline — no external datasets, no synthetic data.
Get in touch →