
Stereo cameras only give two viewpoints, so it takes a lot of processing power to re-create the 3D scene. A light field gives you much more information, making reconstruction much simpler.
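
To make that concrete, here's a toy sketch (mine, nothing from Lytro) of why reconstruction gets simpler: in a 4D light field, depth shows up as the slope of lines in an epipolar-plane image, which falls out of local gradients, whereas two-view stereo has to search over candidate disparities. The lf[u, v, s, t] layout below is just an assumed example:

    import numpy as np

    # Assumed layout: lf[u, v, s, t], with (u, v) indexing viewpoints and
    # (s, t) indexing image pixels. A scene point traces a line across an
    # epipolar-plane image (one viewpoint axis vs one pixel axis), and the
    # line's slope encodes depth, so local gradients give a rough depth
    # estimate with no correspondence search.
    def epi_slope(lf):
        v0, t0 = lf.shape[1] // 2, lf.shape[3] // 2
        epi = lf[:, v0, :, t0]            # (views along u) x (pixels along s)
        I_u = np.gradient(epi, axis=0)    # intensity change across viewpoints
        I_s = np.gradient(epi, axis=1)    # intensity change across pixels
        # Along a line of constant intensity, ds/du = -I_u / I_s; that slope
        # is disparity per viewpoint step, i.e. inversely related to depth.
        return -I_u / (I_s + 1e-8)

    # Two-view stereo instead has to try every disparity and keep the best
    # match per pixel (naive version; real stereo aggregates over patches).
    def stereo_disparity(left, right, max_disp=16):
        costs = np.stack([np.abs(left - np.roll(right, d, axis=1))
                          for d in range(max_disp)])
        return np.argmin(costs, axis=0)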

You can also see past occlusions, something not possible with stereo.



How can you see past occlusions?


A stereo camera array can see past occlusions in the sense that some things are occluded to one camera but not to another.

With a larger and more distributed camera array, cameras could be pointed everywhere so that every point in the field has a certain level of coverage.
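
For what it's worth, this is also the idea behind synthetic-aperture imaging with a camera array "seeing through" a foreground occluder: shift each view so the target depth lines up and average, and the occluder smears out while the subject behind it stays sharp. A rough numpy sketch (the data layout is just an assumption):

    import numpy as np

    # images: list of HxW views from a planar camera array; offsets: each
    # camera's (dx, dy) position relative to the array center. Shifting each
    # view by the disparity of a chosen focal depth and averaging keeps the
    # focal plane sharp while anything in front of it blurs away, since most
    # cameras still see past it.
    def synthetic_aperture(images, offsets, disparity_per_unit_offset):
        acc = np.zeros_like(images[0], dtype=float)
        for img, (dx, dy) in zip(images, offsets):
            shift_x = int(round(dx * disparity_per_unit_offset))
            shift_y = int(round(dy * disparity_per_unit_offset))
            acc += np.roll(img, (shift_y, shift_x), axis=(0, 1))
        return acc / len(images)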


The GP said you can see past occlusions with a Lytro and not with stereo, and that sounds exactly the other way around to me (as it apparently does to you).


You can do it with both, actually. Light field data allows you to simulate different viewing angles of the same scene. Depending on the sensor quality you can use this to "see around" objects which occlude the scene from a specific angle (like if you view the image from the center of the field of view). Some limitations, of course, similar to stereoscopic imagery.

Using stereoscopic imagery as an example, you can synthesize a 2D image with the center being between the two sensors. The further apart you place the sensors, up to some limit, the more you can "rotate" the scene. But you lose a lot of data when the viewing angles become too extreme. A light field image can be used for the same effect, but because it's one sensor your ability to rotate the scene will be constrained (as if you had two sensors very close together).

Apparently the examples I saw years ago on Lytro's site aren't available anymore. It was a neat effect with their initial camera, I imagine higher quality cameras could do much more.

A neat thing with light field sensors versus stereoscopic is that you have far more freedom in how you move around the scene. With two sensors you can only move "left and right" or "up and down". With light field data and one sensor you can move in any direction.
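
If it helps to see it in code, "moving in any direction" with one plenoptic sensor is just picking (or interpolating) a different sub-aperture image out of the 4D data. A toy sketch, assuming the same lf[u, v, s, t] layout and in-range coordinates (0 <= u < lf.shape[0]-1, likewise for v):

    import numpy as np

    # Returns the 2D image seen from fractional viewpoint (u, v) by
    # bilinearly blending the four nearest recorded sub-aperture views.
    # The reachable viewpoints are limited to the sensor's own footprint,
    # which is why the baseline is small, but they aren't restricted to a
    # single left/right or up/down axis the way two fixed sensors are.
    def sub_aperture_view(lf, u, v):
        u0, v0 = int(np.floor(u)), int(np.floor(v))
        fu, fv = u - u0, v - v0
        return ((1 - fu) * (1 - fv) * lf[u0,     v0]     +
                fu       * (1 - fv) * lf[u0 + 1, v0]     +
                (1 - fu) * fv       * lf[u0,     v0 + 1] +
                fu       * fv       * lf[u0 + 1, v0 + 1])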


Ah, I see what you mean. You're constrained to viewpoints on the sensor, though, no? So usually a small square.


A few Super Bowls ago, Intel demoed a supercomputer which could take in live video streams from hundreds of cameras, run a batch job for about 30 seconds, and then synthesize video from arbitrary viewpoints above the crowd.

I think it worked by making a point cloud that fits the camera observations.

Obstruction is not a big issue in that use case, but if there was obstruction, the system could choose to ignore pixels that were obstructed when constructing the image.
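
My guess at the shape of that pipeline, in toy form (not Intel's actual system): fit a point cloud to the camera observations, then reproject it into an arbitrary virtual camera with a z-buffer so nearer points win, and any pixels that were obstructed in the source views can simply be left out:

    import numpy as np

    # points: Nx3 world coordinates, colors: Nx3 per-point colors,
    # K: 3x3 intrinsics, (R, t): virtual camera pose (world -> camera).
    def render_point_cloud(points, colors, K, R, t, height, width):
        cam = (R @ points.T).T + t              # world -> camera coordinates
        keep = cam[:, 2] > 0                    # drop points behind the camera
        cam, colors = cam[keep], colors[keep]
        proj = (K @ cam.T).T
        px = (proj[:, :2] / proj[:, 2:3]).astype(int)
        img = np.zeros((height, width, 3))
        zbuf = np.full((height, width), np.inf)
        for (x, y), z, c in zip(px, cam[:, 2], colors):
            # z-buffer: keep the nearest point that lands on each pixel
            if 0 <= x < width and 0 <= y < height and z < zbuf[y, x]:
                zbuf[y, x] = z
                img[y, x] = c
        return img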


Right. And with the initial cameras, which were neat but not breathtaking in their capabilities, you really only got a small ability to move around the scene. A larger sensor or a pair of these would offer greater capabilities for this application. I believe using an array of sensors was how their video camera worked, but I'm not 100% sure.

(I spent a lot of time reading about light field stuff back when Lytro was first announced; I was also tangentially connected to a synthetic aperture radar project, so it piqued a lot of my curiosity at the time. I haven't kept up with the state of the art since then, though.)



