openFrameworks, Computer Vision and Doctor Who

Something I’ve been keeping an eye on for a while is openFrameworks (http://www.openframeworks.cc/), an open source C++ toolkit for creative coding.

The gallery is full of interesting installations, such as Audience, which has dozens of robot mirrors spookily following you around the room; Body Paint, which looks like a really fun and physical way to paint a virtual mess; and Secret Powers, which looks suspiciously similar to the scene in the Doctor Who episode The Empty Child where “everyone lives! Just this once!”. I wonder if they used OF for that…? Come to think of it, there’s also the gallery installation where you “mustn’t blink – don’t even blink!”.

The thing that piqued my interest recently was this incredible face-tracking demo from Kyle McDonald – it looks like OF is really maturing, with a host of amazing extensions. It was built on top of an extremely robust face-tracking library by Jason Saragih, which has also been used for some other amazing prototypes, including my favourite by Daito Manabe, where he’s actually using the contours of the face as an arena for fluid physics simulations. Just stunning!
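If you want to poke at this yourself, Kyle’s addon is ofxFaceTracker. From memory, the heart of its example app looks something like the sketch below; the exact method names are assumptions on my part, so check them against the addon’s bundled examples:

```cpp
// A minimal ofxFaceTracker sketch (assumes the ofxCv and ofxFaceTracker
// addons are installed; method names from memory, so verify against
// the addon's own examples).
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;

    void setup() {
        cam.initGrabber(640, 480);
        tracker.setup();                      // loads the face model
    }
    void update() {
        cam.update();
        if (cam.isFrameNew()) {
            tracker.update(ofxCv::toCv(cam)); // fit the model to this frame
        }
    }
    void draw() {
        cam.draw(0, 0);
        if (tracker.getFound()) {
            tracker.draw();                   // wireframe mesh over the face
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```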

There’s a wrapper for OpenCV – a computer vision library which helps with all the low-level tasks of finding edges and objects, filtering, transforms and all sorts of things which bring memories of extremely late nights at university flooding back to me. I mean, we’re talking dawn chorus in the computer labs just to get a Gaussian filter working, but these guys have got instant shape detection in video at 50fps! Where have I been for the last 15 years?
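For a sense of how far the tooling has come, here’s roughly what that all-nighter reduces to in plain OpenCV these days. It’s only a sketch: the camera index, blur kernel and Canny thresholds are arbitrary values I’ve picked for illustration:

```cpp
// Gaussian blur + contour (shape) detection on live video with OpenCV.
// Kernel size and thresholds are arbitrary illustrative values.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cam(0);                  // default webcam
    cv::Mat frame, grey, blurred, edges;
    while (cam.read(frame)) {
        cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(grey, blurred, cv::Size(5, 5), 1.5); // the all-nighter, in one call
        cv::Canny(blurred, edges, 50, 150);   // edge map
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        cv::drawContours(frame, contours, -1, cv::Scalar(0, 255, 0), 2);
        cv::imshow("shapes", frame);
        if (cv::waitKey(1) == 27) break;      // Esc to quit
    }
    return 0;
}
```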

There are also their friends at the Point Cloud Library, who have an incredible array of techniques for mapping objects in 3D space – most of which I barely even begin to understand. But want anyway.
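Even the entry-level stuff is impressive. As a rough sketch (the filename and 1cm leaf size are placeholders of mine), downsampling a raw scan with PCL’s voxel-grid filter only takes a few lines:

```cpp
// Downsample a point cloud with PCL's VoxelGrid filter.
// "scene.pcd" and the 1cm leaf size are placeholder values.
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/filters/voxel_grid.h>

int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPCDFile<pcl::PointXYZ>("scene.pcd", *cloud);

    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(cloud);
    grid.setLeafSize(0.01f, 0.01f, 0.01f);    // 1cm voxels

    pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
    grid.filter(*filtered);                   // one averaged point per occupied voxel
    return 0;
}
```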

This led me on to find a new API and standard in “natural interaction” – OpenNI, who are trying to bridge the gap between hardware and software to facilitate the development of games, apps and systems that use cameras and depth sensors to let people interact purely naturally – without a UI. Is this the end of the UI? Asus seem to be the only hardware vendor with a compatible device at the moment, but it’s not available in Australia yet…
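From what I can tell, reading a depth frame through OpenNI’s C++ wrapper looks roughly like the sketch below. I’m going from the 1.x-era API here, so treat the exact calls as assumptions:

```cpp
// Grab one depth frame via OpenNI (1.x-era C++ wrapper; exact calls
// are from memory and may differ between OpenNI versions).
#include <XnCppWrapper.h>
#include <cstdio>

int main() {
    xn::Context context;
    context.Init();                           // find an attached sensor

    xn::DepthGenerator depth;
    depth.Create(context);
    context.StartGeneratingAll();
    context.WaitOneUpdateAll(depth);          // block until a frame arrives

    xn::DepthMetaData md;
    depth.GetMetaData(md);
    // Depth at the centre pixel, in millimetres.
    printf("centre depth: %dmm\n", (int)md(md.XRes() / 2, md.YRes() / 2));

    context.Shutdown();
    return 0;
}
```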

I have an idea for a collaborative, evolutionary 3D-modelling installation where people can add to and chop up a virtual sculpture, leaving it behind for the next person to mutate, destroy, deface or enhance purely through a webcam and a projector. I’ve also got another idea for a yellow-snow message-writing augmented-reality game, but I don’t think I could get it past the iPhone App Store regulations.

I feel like I’ve just opened a Pandora’s chocolate-box of old AI dreams from my 20s, and it’s truly re-ignited my passion for creative coding. What else could have kept me up till 2am solving obscure gcc linker problems in Xcode?
