At the end of 2010 I went to Paris, France, for an internship at a local company that produces multitouch hardware. I picked the position myself because the work sounded interesting. What I implemented is markerless object recognition and improved tracking for the computer vision framework we are developing (movid.org).
The video below shows the results of the prototype. The program recognizes objects based on their shape and size and needs no additional fiducial markers. It also tracks object rotation, reporting an angle between 0 and 360 degrees, and this works even for square objects (only a perfectly circular object has no recoverable orientation).
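To give a rough idea of how marker-free recognition of this kind can work, here is a minimal, self-contained sketch. It is not the actual Movid code: the function names (`blob_orientation`, `classify_blob`), the descriptors (area and perimeter), and the moment-based orientation estimate are all my own simplifications for illustration. A real tracker would use richer shape features and resolve the 180-degree ambiguity of the principal axis to get the full 0–360 range.

```python
import math

def blob_orientation(points):
    """Estimate the principal-axis orientation of a blob, given as a list
    of (x, y) pixel coordinates, via second-order central moments.
    Returns an angle in degrees in [0, 180); a full 0-360 estimate needs
    an extra step to break the principal axis' 180-degree symmetry."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    mu20 = sum((x - cx) ** 2 for x, _ in points)
    mu02 = sum((y - cy) ** 2 for _, y in points)
    mu11 = sum((x - cx) * (y - cy) for x, y in points)
    theta = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
    return math.degrees(theta) % 180.0

def classify_blob(area, perimeter, templates, tol=0.15):
    """Match a detected blob against known object templates by comparing
    area and perimeter. `templates` maps name -> (area, perimeter).
    Returns the best-matching name, or None if nothing is within `tol`
    relative error. This stands in for the richer shape matching a real
    recognizer would do."""
    best, best_err = None, tol
    for name, (a, p) in templates.items():
        err = max(abs(area - a) / a, abs(perimeter - p) / p)
        if err < best_err:
            best, best_err = name, err
    return best

if __name__ == "__main__":
    # Synthetic blob: an elongated rectangle of points rotated by 30 degrees.
    angle = math.radians(30)
    rect = [(x, y) for x in range(-10, 11) for y in (-1, 0, 1)]
    rotated = [(x * math.cos(angle) - y * math.sin(angle),
                x * math.sin(angle) + y * math.cos(angle)) for x, y in rect]
    print(round(blob_orientation(rotated), 1))  # close to 30.0

    templates = {"disc": (100.0, 35.4), "square": (100.0, 40.0)}
    print(classify_blob(101.0, 39.8, templates))
```

The orientation estimate falls out of the blob's covariance: for any elongated shape the eigenvector of the largest eigenvalue gives the major axis, which is why circular objects (equal eigenvalues) carry no orientation information.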
The demo runs on an LLP (laser light plane) setup: only lasers, with no diffuse illumination or similar techniques added.
The program you actually see on screen is just a visualization of the recognition and tracking results, written in PyMT.
The video quality suffers from the fact that we had only 10 minutes to capture it before the table was shipped off to an exhibition. The calibration was quick and dirty, which is why I had to click the registration button (at the bottom) with the mouse instead of touching it.
What you see is a work-in-progress prototype. The code can be found on the master branch of the GitHub repository.
Markerless Object Recognition & Tracking (Movid) from Christopher Denter on Vimeo.