A long time ago, Olly, Luke and I had an idea to build a system for an exhibition where every guest would receive a unique number. When you put on an exhibition you usually send out an invite to selected guests from a long mailing list of people you’ve ‘acquired’. The idea was to populate a database of visitors with details about them, assign each a unique ID, and print large address labels that fold around the edge of the invite, so that one side carries the address for posting and the other a QR Code encoding their custom ID.
The purpose of this QR Code is that the visitor brings it to the exhibition and flashes it at a camera next to each project to tag/like/favorite/bookmark the interesting work for later. This would let us know who came and who liked what, and make projections about flow through the space, for some interesting data project.
The data would be updated live to a database, so the user could potentially have:
- an app on their phone visualising the data as it comes in
- a display next to each project visualising ‘likes’
- a central projection showing live data
- a touch screen interface to browse their likes
- an email after the event with a summary of their likes
- a custom page for each visitor that populates with the work they liked, and suggests projects they’d missed based on other visitors’ likes.
The data collected would be very useful for the gallery and exhibitors, but it would also add a new level of depth to the experience for the visitor, hopefully bringing them to the website afterwards, where Flash would enable them to scan their barcode with a webcam, or type the code in manually.
Although I don’t have an exhibition for this, recent advances in cheap computing like the Raspberry Pi, with its GPIO (suitable for driving a display) and a camera module on the way, mean a project such as ours could finally exist: a Raspberry Pi per project reading the QR Codes presented to it and submitting them to a central server.
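The per-station side of that architecture is simple enough to sketch. Here is a minimal Python mock-up of what each Pi might send; the `/api/scan` endpoint, the field names, and the `exhibition.local` hostname are all hypothetical, invented for illustration:

```python
import json
import time
import urllib.request

# Hypothetical central server endpoint -- not a real service.
SERVER = "http://exhibition.local/api/scan"

def make_scan_event(visitor_id, station_id, timestamp=None):
    """Build the JSON payload for one QR 'like' event at a station."""
    return {
        "visitor": visitor_id,
        "station": station_id,
        "ts": timestamp if timestamp is not None else int(time.time()),
    }

def submit(event, url=SERVER):
    """POST the event to the central server as JSON."""
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.status
```

Each Pi would call `submit(make_scan_event(decoded_id, my_station))` whenever its camera decodes a badge, and the server would aggregate the events into the visitor database.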
I started tinkering with the ZXing library for reading barcodes, and have developed a little project that can track a QR code on screen like any other fiducial marker. I can extrapolate scale and rotation from the image as well as read the code, and because QR codes have 3 position markers and 1 alignment marker, I may even be able to recover a full 3D pose for augmented reality from these codes. That is very complicated, however, and for the purposes of a multitouch interface I really only need rotation and scale.
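The rotation-and-scale part falls out of simple geometry on the marker centres. As a sketch (in Python rather than the actual Processing/ZXing code, and assuming you already have the top-left, top-right and bottom-left finder-pattern centres in pixel coordinates):

```python
import math

def pose_from_finder_patterns(top_left, top_right, bottom_left):
    """Estimate in-plane rotation and scale of a QR code from the
    centres of its three position (finder) markers.

    Rotation is the angle of the top edge (top-left -> top-right),
    zero when the code is upright; scale is that edge's pixel length.
    """
    dx = top_right[0] - top_left[0]
    dy = top_right[1] - top_left[1]
    rotation = math.atan2(dy, dx)   # radians
    scale = math.hypot(dx, dy)      # top edge length in pixels
    return rotation, scale
```

For example, an upright code with finder centres at (0, 0), (100, 0) and (0, 100) gives a rotation of 0 and a scale of 100 pixels; rotating the code rotates the top edge and the angle with it.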
With this in mind, to the side you can see a drawing I did to try to understand the relationship between the 4 markers that help with alignment and positioning. I wrote a program in Processing to read the code, detect the points, and draw over the video image…
Then, using this, I envisage an interface that works like a dial: as you rotate the QR Code relative to its original orientation it scrolls through the menu, and you tap the item to navigate into it.
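The mapping from rotation to menu position can be sketched in a few lines. This is a hypothetical illustration, not the actual interface code; the 30-degrees-per-item step size is an arbitrary assumption:

```python
import math

def menu_index(initial_angle, current_angle, n_items, degrees_per_item=30):
    """Map the code's rotation relative to its initial orientation
    to a menu position, wrapping around the list of items.

    Angles are in radians (as returned by atan2); every
    `degrees_per_item` degrees of twist advances one menu item.
    """
    delta = math.degrees(current_angle - initial_angle)
    steps = int(delta // degrees_per_item)
    return steps % n_items
```

Twisting the badge 65 degrees clockwise from where it was first seen would land on the third of five items; twisting it back past the start wraps around to the end of the menu.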