Hacker News | new | past | comments | ask | show | jobs | submit | login

But it doesn't appear to be helping. Here's an example accident where depth data from Lidar would have helped:

"Tesla later said that during the crash, Autopilot’s camera could not distinguish between the white truck and the bright sky."

https://www.nytimes.com/2021/12/06/technology/tesla-autopilo...



The crash you referenced occurred in 2016, when the cars still used radar; I don't believe they were yet using raw photon counts, nor did the NN have the voxel-based memory it has now.


> any voxel-based memory

Haha, any WHAT?

Seriously though do you have any more info on that, it sounds intriguing. Where and how do voxels come into play in a 2D NN?


It is pretty cool: https://youtu.be/ODSJsviD_SU?t=4355

They transitioned from 2D to 3D a couple of years ago. It was a major transition, but it does seem like a critical step: we live in a 3D world, not a 2D one.
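To make the idea concrete, here's a minimal sketch of what "voxel-based memory" could mean: per-pixel depth estimates from cameras get lifted into 3D points and binned into a persistent occupancy grid, instead of living only in per-frame 2D feature maps. This is a hypothetical illustration, not Tesla's actual pipeline; the voxel size, grid shape, and function names are all made up.

```python
import numpy as np

VOXEL_SIZE = 0.5          # metres per voxel edge (illustrative choice)
GRID_SHAPE = (40, 40, 8)  # grid extent in voxels along x, y, z

def points_to_occupancy(points, voxel_size=VOXEL_SIZE, grid_shape=GRID_SHAPE):
    """Bin 3D points (shape (N, 3), metres) into a boolean occupancy grid."""
    grid = np.zeros(grid_shape, dtype=bool)
    idx = np.floor(points / voxel_size).astype(int)
    # Drop points that fall outside the grid bounds.
    in_bounds = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    idx = idx[in_bounds]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

# Two estimated surface points, e.g. from per-pixel depth predictions.
pts = np.array([[1.2, 3.4, 0.9],
                [1.3, 3.5, 1.0]])
occ = points_to_occupancy(pts)
print(occ.sum())  # → 2 (the points land in adjacent but distinct voxels)
```

A real system would accumulate such grids over time (giving the network a memory of what was seen in earlier frames) and feed learned features, not just booleans, into each cell.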


Humanity doesn’t know how to solve this yet, so it’s hard to say whether it is helping or not.


We already have the solution. LiDAR.


If LiDAR were a solution, we would have driverless LiDAR-based vehicles. No one has solved driverless yet, though.


If you could use LiDAR well enough then it would solve the problem. Of course, if you could use vision well enough it would solve the problem too.


The big limit of LiDAR is cost, more than anything. There have been dozens of public driving trials where, at a functionality level, the answer has been positive (apart from traffic lights, the bastards), but nobody wants to buy a solution with a six-figure BOM before integration.


LiDAR also has problems in rain, fog, snow, etc. FLIR would actually be better there.



