
You are confused, my friend.

Reading sensor data is not the same as feeding that data to a neural network and asking it to form a worldview from possibly conflicting sensor data streams (e.g. lidar vs. vision vs. ultrasonic).

You are somewhat correct that reading sensor data is quite trivial. For many sensors, some work needs to be done to denoise or clean up the input data. That's not where the story ends, however.



In order to display gravity-aligned acceleration, sensor fusion between the gyro and accelerometer has to occur. This is typically done with a Kalman filter and runs on 1960s levels of hardware. If you look at something like a drone autopilot, e.g. Ardupilot, the sensor fusion is so cheap that they even extended the Kalman filter to also estimate things like sensor bias offsets and Earth's magnetic field vector.
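
For a concrete feel of how cheap this is, here is a minimal sketch of one fusion step for a single tilt axis. It uses a complementary filter rather than the extended Kalman filter Ardupilot actually runs, and the blend constant, sample rate, and samples are made up for illustration:

    import math

    # Minimal complementary-filter sketch of gyro+accel fusion for one
    # tilt axis. This is NOT Ardupilot's EKF; ALPHA and DT are assumed
    # illustrative values.
    ALPHA = 0.98   # weight on the integrated gyro estimate
    DT = 0.01      # assumed 100 Hz sample rate

    def accel_pitch(ax, az):
        # Pitch implied by the measured gravity vector, in radians.
        return math.atan2(ax, az)

    def fuse(pitch, gyro_rate, ax, az):
        # Integrate the gyro for short-term accuracy, then pull the
        # estimate toward the accelerometer to cancel gyro drift.
        predicted = pitch + gyro_rate * DT
        return ALPHA * predicted + (1.0 - ALPHA) * accel_pitch(ax, az)

    pitch = 0.0
    for gyro_rate, ax, az in [(0.02, 0.05, 0.99)] * 5:   # fake samples
        pitch = fuse(pitch, gyro_rate, ax, az)

Each update is a handful of multiplies and adds per sample, which is why it fits comfortably on a microcontroller.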

Sensor fusion is computationally cheap. It's just a lot of R&D to do it in a way that leads to a net gain in precision and robustness.


We're talking about radar and ultrasonic sensors here, not accelerometers. We're also talking about feeding them to a deep neural network. Not the same thing. Sensor fusion is not being done with a Kalman filter in this case.


Radar and ultrasound both give drastically less data than a simple 720p webcam. After postprocessing, their output bandwidth is more similar to that of a 9-axis IMU than a camera's.
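
Rough arithmetic makes the gap concrete (the radar and IMU rates below are assumed typical figures, not spec-sheet numbers):

    # Back-of-the-envelope bandwidth comparison. The radar and IMU
    # figures are assumed typical values, not measurements.
    camera = 1280 * 720 * 3 * 30   # raw 720p RGB at 30 fps
    radar = 200 * 4 * 4 * 20       # ~200 detections x 4 float32 x 20 Hz
    imu = 9 * 4 * 1000             # 9 float32 axes at 1 kHz

    print(camera / 1e6)   # ~82.9 MB/s
    print(radar / 1e3)    # ~64 kB/s
    print(imu / 1e3)      # 36 kB/s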


Yes, and each time you process the incoming camera data in the neural network, you pay extra computation for the fusion with another data source, regardless of its bitrate. Have you worked with deep learning much?
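
To illustrate where the extra calculations come from, here is a toy late-fusion head in PyTorch: the second modality needs its own projection, and the joint layers operate on the concatenated features, none of which exists in a camera-only network. All names and sizes are hypothetical, made up for the example:

    import torch
    import torch.nn as nn

    class LateFusion(nn.Module):
        # Toy late-fusion head: every added modality means an extra
        # projection plus joint layers on top of the camera backbone.
        # All sizes are illustrative, not from any real stack.
        def __init__(self, cam_dim=512, radar_dim=64, hidden=256, classes=10):
            super().__init__()
            self.cam_proj = nn.Linear(cam_dim, hidden)
            self.radar_proj = nn.Linear(radar_dim, hidden)   # extra work
            self.joint = nn.Sequential(                      # extra work
                nn.Linear(2 * hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, classes),
            )

        def forward(self, cam_feat, radar_feat):
            fused = torch.cat([self.cam_proj(cam_feat),
                               self.radar_proj(radar_feat)], dim=-1)
            return self.joint(fused)

    head = LateFusion()
    out = head(torch.randn(1, 512), torch.randn(1, 64))  # -> (1, 10)

Fusing earlier in the network typically costs more still, since the modalities have to be aligned in space and time before the backbone can consume them.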



