In this video (also after the break), MIT demonstrates just how close we are to Minority Report user interfaces.
MIT shows that the Kinect camera, paired with the recently released libfreenect driver, is capable of remarkably fine-grained hand tracking. Not only is each of the user's hands recognized, but individual fingers and their orientation can be identified (look at the lines extending from the fingertips in the screenshot above). That level of detail makes complex hand gestures possible, such as pinching or pointing -- imagine being able to point at an object in a virtual line-up, grab it, and then mold it by pinching, twisting and pulling it apart with your hands.
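Once fingertips are being tracked, the gesture logic itself can be surprisingly simple. As an illustrative sketch (not MIT's actual code -- the function name and the pixel threshold here are assumptions), a pinch could be detected just by thresholding the distance between the thumb and index fingertip coordinates:

```python
import math

def is_pinch(thumb, index, threshold=30.0):
    """Hypothetical pinch detector: returns True when two tracked
    fingertip positions (x, y) in pixels are closer than `threshold`."""
    dx = thumb[0] - index[0]
    dy = thumb[1] - index[1]
    return math.hypot(dx, dy) < threshold

# Fingertips nearly touching -> pinch
print(is_pinch((100, 100), (110, 105)))  # True
# Fingertips far apart -> no pinch
print(is_pinch((100, 100), (200, 200)))  # False
```

A real implementation would of course run this per frame against the depth camera's tracked points, and likely smooth the positions over time to avoid jitter.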
It's amazing how many awesome proofs of concept the OpenKinect project has produced in just a few weeks. It also makes you wonder what Microsoft and its first- and second-party developers have in store for us next year. There's no way Microsoft will let some beardies at MIT steal its thunder, so it must have something cool up its sleeve.
Incidentally, MIT has open-sourced the code for the demonstration above! You'll need a Kinect and a Linux box, but it actually looks quite easy to set up. I may have to try it... and record it for Download Squad, of course.