Over a few intense days in San Francisco, our colleagues attended three mobility conferences in the US. Discussions ranged from sensor fusion to the growing role of human feedback in AI training. Here, we share a few reflections – not as ready-made answers, but as a glimpse into the key questions shaping the development of ADAS and autonomous driving today.
Multimodal sensor systems – and why they matter
A common topic during the conferences was how different stakeholders approach the development of autonomous vehicles – especially autonomous trucks, where safety requirements are particularly high. In this segment, sophisticated and redundant sensor setups are often used. Cameras, radar, lidar, and other sensors work together in a multimodal system to create a more complete and robust view of the surroundings. If one sensor fails, others can take over – crucial in complex traffic environments where reliability is essential.
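To make the idea of graceful degradation concrete, here is a minimal sketch in Python of how detections from several modalities might be filtered when one sensor drops out. The class, field, and function names are our own illustrative assumptions, not any particular vendor's software stack:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object reported by a single sensor modality (illustrative schema)."""
    modality: str      # "camera", "radar" or "lidar"
    label: str         # e.g. "pedestrian", "vehicle"
    distance_m: float  # estimated range to the object
    confidence: float  # 0.0 .. 1.0

def fuse(detections: list[Detection], healthy: set[str]) -> list[Detection]:
    """Keep only detections from modalities currently reporting as healthy.

    If one sensor fails, the remaining modalities still cover the scene,
    so the vehicle can keep operating (possibly in a degraded mode)
    instead of stopping for every sensor fault.
    """
    usable = [d for d in detections if d.modality in healthy]
    if len(healthy) < 2:
        # Too little redundancy left: a real system would request a
        # minimal-risk manoeuvre or hand control back at this point.
        print("Warning: running on a single modality")
    return usable

# Example: the lidar has dropped out, camera and radar still see the pedestrian.
frame = [
    Detection("camera", "pedestrian", 18.0, 0.93),
    Detection("radar", "pedestrian", 17.6, 0.81),
    Detection("lidar", "pedestrian", 17.8, 0.88),
]
print(fuse(frame, healthy={"camera", "radar"}))
```

The code is not the point; the principle is that coverage of the scene never depends on a single sensor.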
Beyond safety, this redundancy also supports another critical factor: the high availability expected of logistics services. Downtime caused by technical failures or accidents – which may trigger lengthy investigations – can have major consequences for clients relying on continuous service delivery. A robust sensor setup not only reduces the risk of incidents but also increases service availability, making the service more dependable and commercially viable.
Quantity vs. quality – what makes data valuable?
It’s widely accepted that AI requires vast amounts of training data. But a growing question is: what matters more – quantity or quality? Many companies already hold enormous datasets – but size alone isn’t enough. What matters is whether the data is relevant, accurate, and usable.
One example raised at the conferences involved how different sensors can interpret the same scene in completely different ways. A lidar sensor might register an object as a trash can, while a daylight RGB camera image reveals that it's actually a child. For the AI to understand what it's seeing, someone needs to label the data correctly – a process called annotation, where each object in the dataset is categorized and named. If an object is labelled incorrectly, the model will be trained incorrectly – and in traffic, such misunderstandings can have serious consequences.
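As a rough illustration of what annotation involves, the sketch below shows what a per-object label record could look like and how a single wrong class label flows straight into training. The schema and field names are illustrative assumptions, not the format of any specific annotation tool:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """One labelled object in a training frame (illustrative schema)."""
    frame_id: str
    sensor: str          # which modality the label was drawn on
    category: str        # the class the model will be trained to predict
    bbox: tuple[float, float, float, float]  # x, y, width, height in pixels

# The lidar return alone looked like street furniture, but the daylight
# RGB image shows a child. The human annotator's correction is what the
# model actually learns from.
raw_guess = Annotation("frame_0042", "lidar", "trash_can", (512.0, 300.0, 40.0, 90.0))
corrected = Annotation("frame_0042", "camera", "child", (512.0, 300.0, 40.0, 90.0))

# Training on raw_guess instead of corrected teaches the model to treat
# a vulnerable road user as a static object: exactly the failure mode
# discussed at the conferences.
print(corrected)
```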
Even as AI systems become more autonomous, human input still plays an important role in improving how models learn from complex data.
User experience determines whether the technology is used
One aspect often overlooked in technology development is how drivers actually experience and interact with driver assistance systems. This has been highlighted in studies and was a clear point during the conferences: even the most advanced ADAS function will be turned off if drivers perceive it as annoying or hard to understand.
Many users don’t know when or how the systems should be used, or what will happen once they are activated. A typical example is lane-keeping assist, which works well on highways but may beep or tug at the wheel on rural roads without a centreline. This can frustrate drivers – leading them to disable the system entirely, even on stretches where it could genuinely improve safety.
The message is clear: for ADAS to reach its full potential, user experience must take centre stage. It’s not just about technical performance – but about clear communication, intuitive design, and driver trust.
With BlincVision and our use of event cameras (also known as neuromorphic cameras), latency in the ADAS system is significantly reduced, resulting in smoother and more responsive behaviour. In situations like stop-and-go traffic, this means the vehicle can better follow the natural rhythm of traffic, allowing for shorter safety distances and a more comfortable driving experience. Faster sensors let ADAS adapt more naturally to the flow of traffic, reducing frustration and encouraging drivers to keep the system active – and they make autonomous rides smoother for passengers as well.
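A quick back-of-the-envelope calculation shows why latency matters for following distance. The latency figures below are purely illustrative assumptions, not measured values for BlincVision or any other system:

```python
def reaction_distance_m(speed_kmh: float, latency_s: float) -> float:
    """Distance travelled while the ADAS pipeline is still reacting."""
    return (speed_kmh / 3.6) * latency_s

speed = 50.0  # km/h, a typical urban stop-and-go speed

# Illustrative latency figures only.
for latency_ms in (100, 10):
    d = reaction_distance_m(speed, latency_ms / 1000)
    print(f"{latency_ms:>4} ms pipeline latency -> {d:.2f} m travelled before reacting")

# At 50 km/h, cutting end-to-end latency from 100 ms to 10 ms removes
# roughly 1.25 m of "blind" travel, which is part of why lower latency
# can translate into shorter, more comfortable following distances.
```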
What we take with us
The three conferences painted a clear picture of how rapidly the fields of ADAS and autonomous driving are evolving – and how many questions remain unanswered. Topics like safety, data quality, availability and user experience are high on the agenda, and it’s clear that technology must evolve in sync with both regulation and human needs.
The growing focus on multimodal sensor systems reflects a broader shift – towards solutions that support not just safety, but also availability and user experience. This aligns well with the value BlincVision brings.