ROADWork Dataset: Learning to Recognize, Observe, Analyze and Drive Through Work Zones

By Anurag Ghosh and 9 other authors


Abstract: Perceiving and autonomously navigating through work zones is a challenging and underexplored problem. Open datasets for this long-tailed scenario are scarce. We propose the ROADWork dataset to learn to recognize, observe, analyze, and drive through work zones. State-of-the-art foundation models fail when applied to work zones. Fine-tuning models on our dataset significantly improves perception and navigation in work zones. With the ROADWork dataset, we discover new work zone images with higher precision (+32.5%) at a much higher rate (12.8$\times$) around the world. Open-vocabulary methods fail too, whereas fine-tuned detectors improve performance (+32.2 AP). Vision-Language Models (VLMs) struggle to describe work zones, but fine-tuning substantially improves performance (+36.7 SPICE).

Beyond fine-tuning, we show the value of simple techniques. Video label propagation provides additional gains (+2.6 AP) for instance segmentation. For reading work zone signs, composing a detector and text spotter via crop-scaling improves performance (+14.2% 1-NED). Composing work zone detections to provide context further reduces hallucinations (+3.9 SPICE) in VLMs. We predict navigational goals and compute drivable paths from work zone videos. Incorporating road work semantics ensures 53.6% of goals have angular error (AE) < 0.5 (+9.9%) and 75.3% of pathways have AE < 0.5 (+8.1%).
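To make the crop-scaling composition concrete, here is a minimal sketch of the general idea: detect sign boxes, enlarge each crop, then hand the crops to a text spotter. The callables `detect_signs` and `spot_text`, and the `scale`/`pad` values, are placeholders for illustration, not APIs or settings from the paper.

```python
# Sketch of composing a detector with a text spotter via crop-scaling.
# `detect_signs` and `spot_text` stand in for any detector/OCR pair.
from PIL import Image

def read_signs(image_path, detect_signs, spot_text, scale=4, pad=8):
    """Detect sign boxes, crop with padding, upscale, then spot text."""
    image = Image.open(image_path).convert("RGB")
    results = []
    for (x0, y0, x1, y1) in detect_signs(image):
        # Pad the box slightly so characters at the border survive the crop.
        box = (max(x0 - pad, 0), max(y0 - pad, 0),
               min(x1 + pad, image.width), min(y1 + pad, image.height))
        crop = image.crop(box)
        # Upscaling small, distant signs gives the spotter larger glyphs.
        crop = crop.resize((crop.width * scale, crop.height * scale),
                           Image.BICUBIC)
        results.append((box, spot_text(crop)))
    return results
```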
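The goal and pathway metrics count predictions whose angular error falls below 0.5 (radians, by our assumption; the abstract does not state the unit). A minimal sketch of that evaluation, treating predictions and ground truth as direction vectors:

```python
import numpy as np

def angular_error(pred, gt):
    """Angular error (radians) between predicted and ground-truth directions."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def fraction_below(errors, threshold=0.5):
    """Fraction of predictions with angular error below the threshold."""
    return float((np.asarray(errors, dtype=float) < threshold).mean())
```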

Submission history

From: Anurag Ghosh
[v1] Tue, 11 Jun 2024 19:06:41 UTC (47,871 KB)
[v2] Tue, 22 Jul 2025 23:55:25 UTC (43,985 KB)
