One of my current projects is to find a cheap and accurate way to 3D scan faces for the creation of custom coins and memorabilia; mostly, I want my face on a 3D-printable coin which can then be cast more cheaply in metal. I had the opportunity to borrow a Microsoft Kinect, which has two cameras and a structured-light infrared laser projector. One camera captures the infrared laser grid as projected into the room and constructs a depth map of the entire view in real time. The other camera captures visible light, i.e. normal images and video. I used the Kinect to capture images and depth maps and reconstructed the scene in 3D using Blender. To dump the data, I used libfreenect's 'record' program, part of the OpenKinect project.
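The Kinect reports depth as raw 11-bit values (0 to 2047, with 2047 meaning "no reading"), so before a frame can serve as a displacement texture it needs to be mapped down to an 8-bit grayscale image. A minimal sketch of that conversion with numpy (the example frame values are illustrative, not real captures):

```python
import numpy as np

def depth_to_grayscale(depth_raw):
    """Map raw 11-bit Kinect depth values (0-2047) to an 8-bit
    grayscale image suitable for use as a displacement texture.
    2047 marks 'no reading' and is mapped to black."""
    depth = depth_raw.astype(np.float32)
    invalid = depth_raw >= 2047
    # Normalize valid readings to 0-255 (near = bright, far = dark).
    gray = 255.0 * (1.0 - depth / 2047.0)
    gray[invalid] = 0
    return gray.astype(np.uint8)

# Example: a fake 2x2 depth frame (values are illustrative only).
frame = np.array([[0, 1023], [2000, 2047]], dtype=np.uint16)
print(depth_to_grayscale(frame))
```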
Here's a camera-panning animation of the result, created in Blender using a displacement modifier on a heavily subdivided plane:
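The displacement modifier effectively offsets every vertex of the subdivided plane along its normal by the value sampled from the depth texture. A toy numpy version of that operation (the grid dimensions and strength value are placeholders, not Blender's internals):

```python
import numpy as np

def displace_plane(depth_map, strength=0.1):
    """Turn a grayscale depth map into a height field: one vertex
    per pixel, displaced along +Z by the normalized pixel value."""
    h, w = depth_map.shape
    # Vertex grid in the XY plane, like Blender's subdivided plane.
    xs, ys = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, h))
    zs = strength * (depth_map.astype(np.float32) / 255.0)
    return np.dstack([xs, ys, zs])  # (h, w, 3) array of vertex positions

verts = displace_plane(np.full((480, 640), 128, dtype=np.uint8))
print(verts.shape)  # (480, 640, 3)
```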
This is the unedited depth map that I took from the ‘record’ program output:
I had to scale and move the corresponding image texture to fit the geometry properly. This is partly due to the slight distance between the two cameras. Here is the slightly altered texture image captured by the Kinect:
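Because the RGB camera sits a short distance from the depth camera, the color image is offset and slightly scaled relative to the depth map. The manual fix amounts to a scale-and-shift of the texture coordinates; a sketch of that adjustment (the scale and offset numbers here are hypothetical placeholders, not calibrated values):

```python
import numpy as np

def align_uvs(uvs, scale=(1.05, 1.05), offset=(-0.02, 0.01)):
    """Apply a uniform scale and shift to (N, 2) UV coordinates so
    the RGB texture lines up with the depth-derived geometry."""
    uvs = np.asarray(uvs, dtype=np.float32)
    return uvs * np.array(scale, dtype=np.float32) + np.array(offset, dtype=np.float32)

print(align_uvs([[0.0, 0.0], [1.0, 1.0]]))
```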
This is the depth data as determined by Blender's ambient-occlusion rendering:
I will soon compare these results to those from the free version of DAVID-laserscanner. I'm currently waiting on the arrival of a very cheap laser-line module ($2.50, to be exact) that will be used in conjunction with a high-definition camera as input to the DAVID laser-scanning software. Stay tuned.
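Laser-line scanning of this kind recovers depth by triangulation: the camera sees where the laser line falls on the object, and each lit pixel's viewing ray is intersected with the known laser plane. A toy version of that intersection, with the camera at the origin and all geometry hypothetical:

```python
import numpy as np

def intersect_ray_plane(ray_dir, plane_point, plane_normal):
    """Intersect a camera ray (from the origin) with the laser plane.
    Returns the 3D point where the laser strikes the surface."""
    ray_dir = np.asarray(ray_dir, dtype=np.float64)
    n = np.asarray(plane_normal, dtype=np.float64)
    # Solve origin + t * ray_dir lying on the plane for t.
    t = np.dot(np.asarray(plane_point, dtype=np.float64), n) / np.dot(ray_dir, n)
    return t * ray_dir

# Laser plane 1 unit in front of the camera, facing back at it:
point = intersect_ray_plane([0.1, 0.0, 1.0], [0, 0, 1.0], [0, 0, 1.0])
print(point)  # the ray hits the plane at z = 1
```

Repeating this for every lit pixel along the laser line, frame after frame as the line sweeps the face, builds up the full point cloud.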
UPDATE: I've attached the .blend file for exploring in Blender. Textures are embedded. Blender 2.56 Beta or later is recommended.