

The brief was audacious: deliver person, vehicle, obstacle, speed, lane, plate, and traffic sign recognition at 30+ fps, and ship it inside a device no bigger than a pack of cigarettes. Our embedded ADAS stack now lives on fleet dashboards to keep drivers safer without cloud dependencies.
What we built
- Multi-head detector. A shared backbone feeds task-specific heads, so we detect VRUs, vehicles, and freestanding obstacles and run OCR for plates and signs without redundant compute (see the first sketch after this list).
- Signal fusion. Camera cues blend with IMU and wheel-speed data to stabilize lane estimates, while a lightweight tracker keeps object IDs persistent even during occlusions (see the tracker sketch after this list).
- Productization. Thermal envelopes, EMC compliance, and rugged housings were part of the job; we tuned every kernel and voltage rail to survive summer expressways.
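To make the multi-head idea concrete, here is a minimal sketch in PyTorch-style Python. The backbone depth, head names, and channel counts are illustrative assumptions, not our production network; the point is that the expensive feature extractor runs once per frame and each task head is a cheap add-on.

```python
import torch
import torch.nn as nn

class MultiHeadPerception(nn.Module):
    """One shared backbone, several cheap task heads (illustrative only)."""

    def __init__(self, feat_ch: int = 128):
        super().__init__()
        # Shared feature extractor: runs once per frame.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Task-specific heads reuse the same feature map.
        self.det_head = nn.Conv2d(feat_ch, 6 * 5, 1)   # box/class outputs for VRUs, vehicles, obstacles (assumed layout)
        self.lane_head = nn.Conv2d(feat_ch, 4, 1)      # lane segmentation logits
        self.ocr_head = nn.Conv2d(feat_ch, 64, 1)      # features handed to plate/sign OCR

    def forward(self, frame: torch.Tensor) -> dict:
        feats = self.backbone(frame)                   # computed once, shared by all heads
        return {
            "detections": self.det_head(feats),
            "lanes": self.lane_head(feats),
            "ocr_feats": self.ocr_head(feats),
        }

model = MultiHeadPerception()
out = model(torch.randn(1, 3, 384, 640))               # one dashcam-sized frame
```

And a toy version of the ID-persistence logic: a greedy IoU matcher that keeps a track alive for a few missed frames, so an occluded vehicle gets its old ID back when it reappears. The thresholds and the `max_missed` grace period are assumptions for illustration, not the shipped tracker, and the real system also folds in IMU and wheel-speed cues that this sketch omits.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

class SimpleTracker:
    def __init__(self, iou_thresh=0.3, max_missed=10):
        self.tracks = {}               # id -> {"box": ..., "missed": ...}
        self.next_id = 0
        self.iou_thresh = iou_thresh
        self.max_missed = max_missed   # frames a track survives without a detection

    def update(self, detections):
        unmatched = list(detections)
        for tid, tr in self.tracks.items():
            best = max(unmatched, key=lambda d: iou(tr["box"], d), default=None)
            if best is not None and iou(tr["box"], best) >= self.iou_thresh:
                tr["box"], tr["missed"] = best, 0
                unmatched.remove(best)
            else:
                tr["missed"] += 1      # likely occluded: keep the ID for a while
        self.tracks = {tid: tr for tid, tr in self.tracks.items()
                       if tr["missed"] <= self.max_missed}
        for box in unmatched:          # genuinely new objects get fresh IDs
            self.tracks[self.next_id] = {"box": box, "missed": 0}
            self.next_id += 1
        return {tid: tr["box"] for tid, tr in self.tracks.items()}
```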
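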
Why it was hard
A jam-packed feature list strains both compute and memory. Running OCR for plates and traffic signs alongside dense perception meant choreographing cycles down to the millisecond. Packaging everything into a tiny box added thermal puzzles—no fans allowed, so we shaped the enclosure as a heat sink.
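One way to picture that choreography is a per-frame budget: dense perception runs every cycle, while the heavier OCR work only fires on a cadence and only if headroom remains. This is a hedged illustration of the scheduling idea, not the firmware's actual scheduler; the 30 fps budget, the `every_n` cadence, and the stub `detector`/`ocr` callables are assumptions.

```python
import time

FRAME_BUDGET_S = 1 / 30          # ~33 ms per frame at 30 fps (assumed target)

def process_frame(frame, frame_idx, detector, ocr, every_n=5):
    start = time.monotonic()

    results = detector(frame)    # dense perception runs on every frame

    # OCR is heavier, so run it only on a cadence *and* if budget remains.
    elapsed = time.monotonic() - start
    if frame_idx % every_n == 0 and elapsed < 0.6 * FRAME_BUDGET_S:
        results["plates_and_signs"] = ocr(frame, results)

    # Whatever time is left goes to tracking, logging, CAN output, etc.
    return results

# Tiny usage with stub models standing in for the real networks:
frames = [object()] * 3
outputs = [process_frame(f, i,
                         detector=lambda fr: {"detections": []},
                         ocr=lambda fr, res: ["ABC-123"])
           for i, f in enumerate(frames)]
```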
Where it lands
- Advanced driver assistance. Fleet operators get lane departure, forward collision, and blind spot warnings in markets where high-end cars remain rare.
- Autonomous groundwork. The same perception stack feeds higher-level planners when customers want to experiment with driverless logistics.
What’s next
We are adding driver monitoring (DMS) via the same SoC, plus over-the-air calibration so installers can mount cameras with less laser-level drama.