This is a shape-from-shading approach, which will only work if you're in the dark, with the phone as the sole illumination source. If white dots are shown at different places on the screen, and assuming the phone and subject don't move during the process, surface normals can be computed from the resulting images; once you have the normals, the shape can be approximated. Normals can be found by calculating the direction of maximum reflectance at each pixel across the series of images taken under different illuminations.
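The normal-recovery step described above is essentially classic photometric stereo. Here's a minimal sketch, assuming a Lambertian surface and known light directions (all names and the least-squares formulation are illustrative, not the phone app's actual method):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals from images lit from known directions.

    images:     (k, h, w) array of grayscale intensities
    light_dirs: (k, 3) array of unit light-direction vectors
    Assumes Lambertian shading: intensity = light_dir . (albedo * normal).
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w)
    # Solve light_dirs @ G = I for G = albedo * normal, per pixel
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)               # unit normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

With at least three non-coplanar light directions the system is solvable per pixel; once you have the normal field, the surface can be integrated up from it.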
What are good resources for looking into converting photos into 3D object models? I'm really impressed by Shapeways, but I'd like to start from existing real-world objects rather than building them in a 3D tool.
I've seen the laser scanners, but they're relatively expensive. Is there anything like a mount that takes two iPhones, plus software that can combine those two photos into at least a projection?
There are a lot of approaches. Yes, there is software that takes two photos and lets you reconstruct 3D. If you have a man-made object, you probably don't want a generic point-cloud-building approach (like, say, http://www.photosculpt.net/ ) but rather something like Photomodeler, which lets you select vertices and build up surfaces.
There is also software that builds models from silhouettes; these methods usually require you to print out targets on which you place the model, sort of like a manual turntable approach. I can't remember the name of the software that does this at the moment, but it's out there.
Or you could work from video (from just a single camera) that you move around the object. That can be effective, though I'm not sure of the best software for doing it. The technique is called structure from motion, and a quick Google search shows some source code at this project site: http://phototour.cs.washington.edu/
I've also seen some do-it-yourself structured-light software (i.e. bring your own projector and camera) that seems to work OK.
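The core decoding step in a typical Gray-code structured-light setup can be sketched like this, assuming you've captured one camera image per projected bit pattern (function names and the thresholding scheme are illustrative):

```python
import numpy as np

def decode_gray_code(bit_images, threshold=0.5):
    """Decode a stack of Gray-code pattern images into projector column indices.

    bit_images: (n_bits, h, w) array of intensities in [0, 1], most
                significant bit first. Each pixel's bit sequence encodes
                which projector column illuminated it.
    """
    bits = (bit_images > threshold).astype(np.uint32)  # binarize each pattern
    # Gray code to binary: b[0] = g[0]; b[i] = b[i-1] XOR g[i]
    binary = np.empty_like(bits)
    binary[0] = bits[0]
    for i in range(1, len(bits)):
        binary[i] = binary[i - 1] ^ bits[i]
    # Pack the per-pixel bit stacks into integer column indices
    n_bits = len(bits)
    weights = 1 << np.arange(n_bits - 1, -1, -1, dtype=np.uint32)
    return np.tensordot(weights, binary, axes=1)
```

Once each camera pixel knows which projector column lit it, depth follows by triangulating the camera ray against the projector plane.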
It rather depends on the size and type of objects you want to scan: surface properties can matter, as can how long you can keep the object still.
If you're working with real-world objects, one approach is to place the object of interest on a turntable, allowing you to capture multiple perspectives with a single camera/sensor. Philo Hurbain has made some delightfully clever LEGO NXT 3D scanners this way (delightful especially because they're used, in turn, to digitize the shape of complex LEGO parts): one using a needle probe, and another using a laser.
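As an illustration of the turntable geometry (not Philo's actual code): each probe reading, taken at a given turntable angle, radial distance from the rotation axis, and probe height, maps to one Cartesian point on the surface.

```python
import math

def turntable_point(angle_deg, radius, height):
    """Convert one turntable probe reading to a 3-D point.

    angle_deg: turntable rotation when the probe touched the surface
    radius:    distance from the rotation axis to the contact point
    height:    probe height along the rotation axis
    """
    theta = math.radians(angle_deg)
    return (radius * math.cos(theta), radius * math.sin(theta), height)
```

Sweep the turntable through a full revolution at each height step and you accumulate a cylindrical-grid point cloud of the whole object.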
Probably the simplest way of obtaining 3D data suitable for turning into an object model is to use a Kinect.
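The Kinect gives you a depth image, and back-projecting it through the standard pinhole camera model yields a point cloud. A minimal sketch, where fx, fy, cx, cy stand in for your sensor's calibrated intrinsics (the values you'd pass are assumptions, not Kinect defaults):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (h, w), in metres, into an (h*w, 3) cloud.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                       # pinhole back-projection
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

From there, tools like MeshLab can turn the cloud into a printable mesh.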
For large objects, like buildings, you would need either expensive laser scanners or something which solves the multi-view stereo problem. There are systems capable of doing this, such as Photosynth, but in general it's quite involved with no easy solutions.
For medium-range models, such as automotive applications, you could have a pair of cameras aligned in parallel and connected to a PC/laptop, then use a utility like v4l2stereo.
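With a calibrated parallel stereo pair, depth falls out of the disparity directly via Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the per-pixel disparity. A sketch of that conversion (the parameter values you'd plug in are assumptions from your own calibration):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a disparity map (in pixels) to metric depth: Z = f * B / d.

    Pixels with zero or negative disparity (no stereo match) come back
    as infinity.
    """
    d = np.asarray(disparity, dtype=float)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth
```

Note that depth resolution degrades quadratically with distance, which is why this works best at medium range.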
If you're not already familiar with OpenCV, it's a rather good (and well documented!) open source image processing library. It's written in C++ but has bindings for a bunch of other languages too.
See this Google video for a similar technique: http://www.youtube.com/watch?v=rxNg-tXPPWc