Presenting Augmented Reality at MEIC5
We presented our three-month research and prototyping project on augmented reality for mobile at the Mobile Experience and Innovation Center event on Thursday, November 26, to a very receptive, mobile-aware crowd.
In short, our R&P project, partially supported by MEIC, aims to develop a toolkit for making augmented reality applications on mobile platforms. Half of the project was an industry survey of the state of AR (who’s doing what with AR on mobile phones); the other half was dedicated to coding and development. We worked with a very bright intern from Ryerson University, Michael Lawrie, on the development end.
Even though AR has been gathering a steady stream of attention from online advertisers and other media over the past two years, once we looked into it we found that development for both web-based and mobile phone applications is still quite difficult and resources are limited. The open-source library ARToolKit and its Flash port, FLARToolKit, have helped to push development on the web end, but projects in the mobile space are, relatively speaking, rather sparse.
In particular, we were interested in making an AR application that overlays virtual objects onto public spaces (at specific locations) – this would require both image recognition and location-based services (LBS). Developing an AR app with LBS is already possible with the iPhone SDK and other platforms (and several such apps have launched in the past months); doing image recognition on the mobile phone, however, is not. We spent the past month developing some tools that would let us do image recognition on the iPhone, and while we are well aware of the examples and demos that various companies have put on YouTube, we have yet to see a compelling working application.
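For the location-based half of such an app, the placement math itself is straightforward: compute the compass bearing from the user’s GPS fix to the point of interest, subtract the device’s heading, and map the remaining angle to a horizontal screen position. Here is a minimal sketch in Python; the coordinates, field of view, and screen width are hypothetical stand-ins, not values from our toolkit:

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return math.degrees(math.atan2(y, x)) % 360.0

    def screen_x(poi_bearing, device_heading, fov_deg=60.0, screen_w=320):
        """Horizontal pixel at which to draw the POI label, or None when
        the POI is outside the camera's field of view."""
        delta = (poi_bearing - device_heading + 180.0) % 360.0 - 180.0
        if abs(delta) > fov_deg / 2:
            return None
        return int(screen_w * (0.5 + delta / fov_deg))

    # Hypothetical user position and heading; the POI is roughly the CN Tower
    b = bearing_deg(43.6452, -79.3806, 43.6426, -79.3871)
    print(screen_x(b, device_heading=230.0))

The image-recognition half is the hard part, and it is what our prototyping below focused on.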
At MEIC5, we presented our overview of the state of AR on mobile platforms, its highlights and pitfalls, and showed the progress of our own development. Unfortunately there was no video recording. We’ve made our slides available, but the video clips embedded in the slides are not supported by SlideShare.
Our SlideShare deck:
Demo videos of the split-screen iPhone app doing marker detection (a rough sketch of this kind of detection follows the list):
Our prototype test of AR on the iPhone, using NYARToolKit to detect a marker
Our prototype test of AR on the iPhone, using OpenCV to detect a marker
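The OpenCV route boils down to finding dark, square outlines in a thresholded camera frame. The sketch below shows that idea in desktop Python/OpenCV rather than the Objective-C/C++ we used on the phone, and the area cutoff and threshold parameters are arbitrary illustration values:

    import cv2

    def find_marker_quads(frame):
        """Return candidate fiducial markers: convex, four-sided dark outlines."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Adaptive thresholding copes with uneven lighting better than a fixed cutoff
        binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY_INV, 21, 7)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        quads = []
        for c in contours:
            if cv2.contourArea(c) < 500:    # ignore specks
                continue
            approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
            if len(approx) == 4 and cv2.isContourConvex(approx):
                quads.append(approx.reshape(4, 2))
        return quads

A full tracker such as NYARToolKit then samples the pattern inside each quad to identify which marker it is and estimates camera pose from the four corners; this sketch stops at corner detection.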
Thanks, MEIC!
AR for learning science
Besides the digital binocular station (last post), MindSpace Solutions also has other marker-based AR demos that are kind of interesting (imho, they still lack compelling visuals).
From AugmentedPlanet blog:
…A couple of my favourites are: Explore a Human Heart, which has some really nice effects. It uses five markers, four of which represent a piece of the heart while the fifth provides information on what you see. As you combine two or more cards, the augmented reality images interact with each other.
The other demo I really like is the Solar Explorer. It’s similar to the heart demo, but this time each marker represents a planet; when you add two planets together they scale to reflect their comparative size.
AR binocular stations
Here’s a good example to show that AR only looks good when done well with eye-popping graphics (not a surprise to anyone, really).
Whereas the AR for YanMingYuan – a tourist attraction in China – is built on the nice idea of bringing the ruins back to life, it looks kind of fake, bordering on the cheesy…
On the other hand, the AR digital binocular station by NZ-based MindSpace Solutions looks a bit more appealing (though why not design the binoculars to look more up to date if they have to be reconstructed anyway?), and the integration of video content is always nice:
Note that in both cases, since the device is stationary, GPS or heavy video-feed analysis might not even be needed to assemble this AR experience. As long as the computer knows where the camera is pointing (and we can calculate this using a rotary encoder), overlaying video and special effects onto the camera feed should be relatively simple.
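To make that concrete, here is a minimal sketch of the encoder-to-overlay math; the tick resolution, field of view, and frame width are hypothetical values for a device like this:

    TICKS_PER_REV = 1024      # hypothetical encoder resolution
    CAMERA_FOV_DEG = 40.0     # hypothetical horizontal field of view
    FRAME_W = 1280            # frame width in pixels

    def pan_angle_deg(tick_count):
        """Convert raw encoder ticks into the station's pan angle."""
        return (tick_count % TICKS_PER_REV) * 360.0 / TICKS_PER_REV

    def overlay_x(feature_bearing_deg, tick_count):
        """Horizontal pixel at which a feature with a known bearing should
        be drawn, or None when it is outside the current view."""
        pan = pan_angle_deg(tick_count)
        delta = (feature_bearing_deg - pan + 180.0) % 360.0 - 180.0
        if abs(delta) > CAMERA_FOV_DEG / 2:
            return None
        return int(FRAME_W * (0.5 + delta / CAMERA_FOV_DEG))

    # A feature due east (bearing 90) while the station sits at tick 240
    print(overlay_x(90.0, 240))

No computer vision is required at all: the overlay position follows directly from where the binoculars are pointed.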
AR Restaurant Browsers Review
It’s been a little more than a month (or is it two?) since mobile AR apps started to appear on the market, and they’re clearly not generating the buzz that a lot of AR developers and future forecasters were drumming up back in early August. Not a big surprise, as these apps are really quite limited in functionality. Our biggest peeve: why overlay geo-data on a camera view when it’s not all that accurate and is easier to use in a vector map view anyway? Having said that, we’ll acknowledge that it has to start somewhere.
Augmented Planet, an extensive AR blog, has just posted a review of AR browsers. The author used several browsers to locate his local Indian restaurants, and the results are really not that great. Read the details here: http://www.augmentedplanet.com/2009/11/augmented-reality-browsers-head-to-head-part-1/
Here’s a brief summary from the post:
Layar: “Out of all the applications tested Layar local search performed the best with the least amount of data errors, of the 8 restaurants shown I only spotted 2 that were misplaced in residential areas.”
Robotvision: “…The app, however, lists JPF Drum Tuition and Olga Piano Tuition as local Indian restaurants, as well as plotting restaurants in residential streets. The augmented reality view has a number of problems: for starters, there is no compass to tell you where things are, making it really hard to find anything. The other problem is that if places are close together, it’s impossible to select anything but the front option.”
Yelp: one of the two benchmark restaurants was misplaced on the map and the other vanished from it entirely. “One last comment about the AR view is that it’s so unresponsive; items in the view sit around regardless of where you point the camera, then gradually slide away. Yelp is not without its mapping errors: further afield from my home location I noticed misplaced pubs, Chinese restaurants and even a few restaurants placed on the motorway.”
Urbanspoon:
“UrbanSpoon, surprisingly for an application dedicated to eating and finding restaurants, performed the worst with the most mapping errors. Brashwamy’s is completely missing despite my having submitted it via the application’s add-a-restaurant function several weeks back. Sharod is shown, but in completely the wrong location.
…
Putting the application into AR mode reveals a useful feature where the restaurants are represented by bubbles, with the bubble colour reflecting customer feedback. Clicking a bubble, you can get the phone number, vote, or add a menu with your camera. It’s just a shame that the data is so inaccurate that you’ll never be able to find the restaurant to see if it lives up to its hype.”
Granted, we’re still in the early days of AR and the technology still needs a little time to mature, but if erratic data and bad user experiences like these become the norm for AR apps, we’ll be reduced to using AR merely as an advertising gimmick, and the promised ‘terminator vision’ could very well fizzle away. Anyone remember VRML?
Markerless AR on the iPhone
This demo video from Total Immersion, the only company with a markerless AR tracking solution, is pretty exciting.
However, we suspect this won’t be available just yet, because the SDK does not allow analyzing the live video feed – although it is only a matter of time.
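As an aside on what “markerless” means in practice: instead of looking for a printed fiducial, the tracker locks onto natural features of the scene itself. Total Immersion hasn’t published how their tracker works, so the sketch below is just one classic approach (corner features followed frame to frame with Lucas-Kanade optical flow), written in desktop Python/OpenCV, not anything phone-ready:

    import cv2

    cap = cv2.VideoCapture(0)            # hypothetical camera source
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 8)

    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if pts is None or len(pts) == 0:
            # Lost every feature: re-detect from scratch
            pts, prev_gray = cv2.goodFeaturesToTrack(gray, 200, 0.01, 8), gray
            continue
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        pts = new_pts[status.ravel() == 1].reshape(-1, 1, 2)
        for x, y in pts.reshape(-1, 2):
            cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
        cv2.imshow("markerless tracking", frame)
        if cv2.waitKey(1) == 27:         # Esc quits
            break
        prev_gray = gray

A real solution also has to recover the full six-degree-of-freedom camera pose from those features, which is the genuinely hard part.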
And now, you can eat that marker
Having done so much AR reading recently (most of it a repetition of things we’ve seen before), it is very refreshing to see this project:
AR Cookies:
From Mike Clare via Fubiz
AR in the Louvre
A rather cumbersome device for museum visitors (and probably too expensive to implement for mass consumption), but it shows that some people are thinking about the possibilities:
via ARnews
AR for mobile in 3D
Last week Layar announced their new 3D feature, coming in November. The feature will add 3D rendering to Layar’s API, so that you can have something like this:
in a mobile application developed using Layar.
Interesting because: while we’ve seen a lot of desktop AR apps using 3D renderings (from Total Immersion to all the projects using ARToolKit, FLARToolKit, etc.), AR apps on mobile have so far been limited to drawing flat sprites/billboards on top of live video. Also note that this is *not* using a marker but location data to place the 3D objects. It would be interesting to learn how they determine camera translation and smooth out the jitter (a sketch of one smoothing approach follows).
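On the jitter point: raw compass and GPS readings bounce around, so some low-pass filtering almost certainly sits between the sensors and the renderer. We don’t know what Layar actually does; here is a minimal sketch of one common approach, an exponential filter that averages headings on the unit circle so readings near 0°/360° don’t wrap badly:

    import math

    class HeadingFilter:
        """Exponential low-pass for a compass heading; averaging on the unit
        circle keeps 359 and 1 degrees from 'averaging' to 180."""
        def __init__(self, alpha=0.15):
            self.alpha = alpha        # smaller = smoother but laggier
            self.x = self.y = None

        def update(self, heading_deg):
            rad = math.radians(heading_deg)
            cx, cy = math.cos(rad), math.sin(rad)
            if self.x is None:
                self.x, self.y = cx, cy
            else:
                self.x += self.alpha * (cx - self.x)
                self.y += self.alpha * (cy - self.y)
            return math.degrees(math.atan2(self.y, self.x)) % 360.0

    f = HeadingFilter()
    for raw in [10, 12, 9, 355, 11, 8]:   # noisy readings around north
        print(round(f.update(raw), 1))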
However, 3D in mobile AR still has a long way to go. Layar 3D: “With the limited processing power of phones we try to keep 3D objects below 1000 polygons. The platform can process objects up to 5000 polygons but starts to slow down.” That means we probably won’t get much smooth-looking content on current smartphones, but alas, it is a step forward.
Also, this looks like something out of Fringe:
Source: http://layar.com/3d/
Feasibility summary – video analysis on the iPhone
We spent a good number of hours last week looking into analyzing the video feed on the iPhone in order to do marker tracking for AR. Common knowledge among developers is that this is currently not supported by Apple, but we wanted to find out what exactly would need to happen and whether there is a way around it using a combination of available techniques.
Well, in short, the answer’s a big, disappointing no. At least not just yet with the SDK: there is no quick way to get at the video stream in order to analyze its frames. While image capturing provides enough data to do image analysis on a single frame, drawing on top of the image and then re-analyzing the next frame is impossible, because any elements drawn onto the video get captured as well. In addition, the frame rate would be too low to make the application feel real-time (1 fps at best).
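A toy simulation of that feedback problem (made-up pixel values, not our actual iPhone code): once our own overlay graphics become part of the captured frame, a naive dark-region detector starts “detecting” the overlay along with the real marker.

    import numpy as np

    # One "screen capture": a dark marker on a light scene
    scene = np.full((120, 160), 200, dtype=np.uint8)
    scene[40:80, 60:100] = 30               # the real marker

    def detect(frame, thresh=64):
        """Naive detector: bounding box of all dark pixels."""
        ys, xs = np.where(frame < thresh)
        return (xs.min(), ys.min(), xs.max(), ys.max())

    print(detect(scene))                    # finds the real marker: (60, 40, 99, 79)

    # Now draw dark overlay graphics on the video and "capture" the screen again
    captured = scene.copy()
    captured[5:25, 5:45] = 0                # our own graphics, baked into the frame
    print(detect(captured))                 # bounding box now swallows the overlay too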
Our assumption is that the marker-based tracking iPhone apps available to developers out there (as seen in the ARToolKit 4.4 demo video) must be using non-SDK code, a.k.a. the jailbroken method.
AR using only location-based data, however, is available in the SDK and ready to roll. The iPhone 3.1 SDK made it possible to draw sprites on top of video, and way-finding applications are already available in the App Store (see the now-available apps in the previous post).
Also, a techy read: Why 3d markerless tracking is difficult for mobile augmented reality
First official iPhone AR apps
AR apps using location data on the iPhone (no video analysis) now seem to be available on the App Store (as of 21/09/09):
New York Nearest Subway from acrossair – NYC’s subway map
London Nearest Tube, also from acrossair – London’s Tube map
Discover Anywhere Transit – Transit information for several cities in North America.
While these apps are certainly cutting edge and put new technology to practical use, the actual usefulness of the augmented maps remains to be seen.