Sort of; accidents are the absolute core of the product. They are rare, but they are the focus of the design.
By edge cases I mean scenarios like the lights going out in an underground garage, low visibility due to coloured smoke or dust, or optical illusions and occlusions where a human would just remember what was there.
Lidar can help, but not really enough to be worth it.
Urban operating domain combined with legacy approaches.
If I were designing a robotaxi 10 years ago I would use lidar; designing consumer vehicles for near-future L3, it's no longer the best use of resources. I'd rather spend the money on more compute and cameras.
Our current issues are scene understanding and navigation, followed by parking. We get so little value from lidar in the driving cases that we don't even use it for active navigation on cars that have it, only for training and parking.
Yeah, not compared with spending that money on compute directly. $200 buys a fair amount of extra processing power, and that's assuming a single lidar is even enough; with the solid-state units currently available we'd need several.
Things like when to change lanes, whether I need to yield for that ambulance, or what that pedestrian is going to do are not really improved by point clouds.
I still want the massive point clouds for validation and ground truth, but not for driving.