One of the most misleading buzzwords, or perhaps one of the most disappointing technologies, is augmented reality. What we today call “AR” is not, in fact, an augmentation of reality — it’s a poor facsimile of a powerful idea.
Reality is something we share, not just a subjective experience. Reality is the world we move in together, not what we might be imagining in our mind’s private eye — and reality certainly isn’t what’s seen through the front-facing camera of a single solitary phone.
Today, AR is imagination for the unimaginative. It’s the opposite of magic. A modern AR app places an object on your phone screen, and only yours. It places something in your subjective experience, and only yours, just like your imagination does — except you don’t even need to have an imagination.
AR is not a shared experience today, so it cannot be called an augmentation of reality; it is merely a fancy photo filter on your phone. It actually manages to be less magical than a funny moustache in a Zoom call.
If you place a cool tiger on your living room floor, you’d struggle to take a photo posing next to it, or to show your friend how big the tiger is compared to you. Go ahead, give it a try!
You can’t place the tiger in your shared living room, tell your friend to whip out her device and see what you are seeing. The apps that try to do this have settled for a rough estimate of the actual placement, differing “only” by tens of centimeters if you’re lucky. The fidelity of positioning is so low that it cannot meaningfully be described as shared reality.
The reason for this discrepancy is that mobile devices don’t actually share any meaningful coordinate system. Sure, they both have GPS coordinates, but those are nowhere near accurate enough for AR. Instead, each device has its own ephemeral world space — a private coordinate system as opaque as the thoughts in your mind.
For AR to be shared, two devices have to create a one-off coordinate abstraction and synchronize, making sure that the two devices not only agree on where the AR object is but also where the devices are relative to each other. Only when two devices have an accurate sense of where they both are in this ephemeral space can they begin to have a shared reality.
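The core of that synchronization step can be illustrated with a little linear algebra. The sketch below (in 2D for brevity, with hypothetical variable names — not Auki’s actual implementation) shows the key idea: if both devices can express the pose of one common anchor in their own private frames, the transform between the two frames falls out, and an object placed by one device gets a well-defined position in the other’s world space.

```python
import numpy as np

def pose(rotation_deg, tx, ty):
    """Build a 3x3 homogeneous 2D pose matrix (rotation + translation)."""
    r = np.radians(rotation_deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1.0]])

# Each device observes the same physical anchor, but expresses its
# pose in its own private world frame (illustrative numbers):
anchor_in_a = pose(0, 2.0, 1.0)     # anchor as seen in device A's frame
anchor_in_b = pose(90, -1.0, 3.0)   # the same anchor in device B's frame

# The transform that maps coordinates in B's frame into A's frame:
b_to_a = anchor_in_a @ np.linalg.inv(anchor_in_b)

# An AR object placed by device B at (0.5, 0.5) in its own frame...
obj_in_b = np.array([0.5, 0.5, 1.0])
# ...now has a definite position in A's frame, so A can render it too:
obj_in_a = b_to_a @ obj_in_b
```

The whole scheme stands or falls on how accurately each device measures the anchor’s pose: any error in `anchor_in_a` or `anchor_in_b` propagates directly into every shared object position, which is why anchor fidelity matters so much in what follows.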
In the Niantic example above, the users have been told to stand in a straight line and observe an object in front of them. Each device creates a point cloud of the object it is looking at, and if the reported shapes match closely enough, the devices can approximate their positions relative to each other. At Auki we call this method ad hoc calibration anchors, the “ad hoc” part signifying that the calibration anchor is created on the fly and is not known beforehand.
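To make the “shapes match closely enough” step concrete, here is a deliberately naive sketch of point-cloud comparison. It is not Auki’s algorithm — production systems use robust registration methods such as ICP — but it shows the basic test: strip away each device’s private offset, then check whether every point in one scan has a close counterpart in the other.

```python
import numpy as np

def clouds_match(cloud_a, cloud_b, tolerance=0.05):
    """Crude shape comparison: center both clouds to remove each
    device's private translation, then check that each point in A
    has a nearby counterpart in B. (Naive: ignores rotation and
    outliers; real systems use robust registration like ICP.)"""
    a = cloud_a - cloud_a.mean(axis=0)
    b = cloud_b - cloud_b.mean(axis=0)
    # Distance from every point in a to its nearest neighbor in b.
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    nearest = dists.min(axis=1)
    return float(nearest.mean()) < tolerance

# Two scans of the same object: identical shape, but each device
# reports it at a different offset in its own private frame,
# with a little sensor noise on the second scan.
rng = np.random.default_rng(0)
shape = rng.normal(size=(50, 3))
scan_a = shape + np.array([1.0, 0.0, 2.0])
scan_b = shape + np.array([-2.0, 1.0, 0.5]) + rng.normal(scale=0.01, size=(50, 3))
```

With scans of the same object the check passes despite the differing offsets; against an unrelated cloud the mean nearest-neighbor distance blows past the tolerance and calibration is refused. The tolerance value is the knob that trades calibration success rate against positioning error.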
When precision is needed, like in our tabletop gaming measurement app Rightful Ruler, the calibration anchors are trusted objects that allow for millimeter precision in the positioning. In this video demo you can see how the AR selection markers placed by one device match what the second device sees, with a much higher degree of fidelity.
For augmented reality to become what we all dream it could be, we must solve the problem of shared positioning. We must create a universal 3D positioning system with powerful consensus algorithms and a mix of ad hoc and trusted anchors for calibration.
For augmented reality to be more than a buzzword, more than a misnomer, Auki is building a universal positioning protocol. Follow us for more information.