Can Pokémon GO Augment the Planet?
When Pokémon GO launched, millions of gamers were offered a chance to play in a visually enhanced version of Planet Earth. While the game is still heavily map-based, experiential immersion has fallen by the wayside. But that may change soon.
Niantic is starting to play a new game with Pokémon GO. By collecting visual data from users, they will build a gigantic virtual copy of the surface of the earth, bit by bit. This is new territory for augmented reality (AR) technology, but how big of a step?
Niantic’s new tech is strong enough to shape the augmented metaverse, but they are playing a soft hand. If Niantic ever releases its long-promised AR platform, others may step in to push the boundaries of planetary AR.
The Challenges of AR Worldbuilding
The underlying tech innovation came to Niantic through its acquisition of 6D.ai. The startup’s market penetration was so small that Niantic simply shut down its service 30 days after the purchase. What 6D.ai did have, however, was a promising technology called AR Cloud.
Augmented reality apps need a model of the physical world to accurately perform their fundamental magic trick: a seamless stitching-together of the artificial and the real. Without this knowledge, virtual characters awkwardly walk through walls, mailboxes, and the like. The recent generation of AR SDKs (Apple’s ARKit, Google’s ARCore, etc.) scan the world with smartphone cameras to build point clouds. Because sparse point clouds can be extracted from an ordinary camera feed in real time and matched across sessions, this method has emerged as the gold standard for environmental recognition.
Pointcloud-based world maps have proven crucial for the current generation of leading-edge AR apps. This new wave of technology allows you to place virtual objects in one part of your house, walk around, and come back to find that virtual object still in place. As you discover the environment, entertainment experiences develop: characters begin to roam your house, shoot at you, etc. A grand variety of entertainment and lifestyle applications could be built upon such data, and many in the community believe that this foundation is needed for AR to reliably deliver value.
The current mainstream SDKs (ARCore, ARKit) allow the loading of just one map, which constrains the size of the AR play area. Google’s Tango device, which prominently featured the ability to learn large buildings and share those maps between users, was shelved due to lack of adoption (and Google’s shift of focus to smartphone AR).
Persistent Augmented Reality
Niantic’s AR Cloud is a game-changer in terms of world recognition. Instead of focusing on isolated locations, their approach can model the entire planet. Thanks to the distributed nature of the data structure, the only constraint on world size is storage.
Like the pages of a traditional atlas, the AR Cloud breaks the Earth down into a collection of spheres. If we consider the walls of a room, each wall segment lies in at least one sphere.
The spheres overlap, and these overlapping regions are used to align neighboring spheres and improve data integrity. It doesn’t matter who uploads which data; the model intelligently incorporates scans from many different users.
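The core property of such a tiling can be sketched in a few lines. This is a toy model, not Niantic’s actual scheme: the grid spacing and radius are made-up numbers, and the real system presumably tiles a globe rather than a flat grid. What matters is only that the radius exceeds half the spacing, so a point near a tile boundary falls inside two or more spheres — and a scan of that point can then be used to keep the overlapping tiles consistent.

```python
import math

# Illustrative parameters (assumed, not published by Niantic).
GRID_SPACING = 10.0   # meters between sphere centers
SPHERE_RADIUS = 7.0   # > GRID_SPACING / 2, so neighboring spheres overlap

def _index_range(coord):
    # Grid indices whose centers could be within SPHERE_RADIUS of coord.
    lo = math.floor((coord - SPHERE_RADIUS) / GRID_SPACING)
    hi = math.floor((coord + SPHERE_RADIUS) / GRID_SPACING)
    return range(lo, hi + 1)

def spheres_containing(point):
    """Return the grid ID of every sphere that contains `point`."""
    x, y, z = point
    ids = []
    for i in _index_range(x):
        for j in _index_range(y):
            for k in _index_range(z):
                center = (i * GRID_SPACING, j * GRID_SPACING, k * GRID_SPACING)
                if math.dist(point, center) <= SPHERE_RADIUS:
                    ids.append((i, j, k))
    return ids
```

A point at a sphere’s center belongs to that sphere alone, while a point midway between two centers belongs to both — exactly the overlap region used for alignment.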
GPS and WiFi information are combined with an initial pointcloud scan to determine which sphere your phone is in. Once registered, each app will passively upload new data to refine the planetary model and document changes to the environment. While none of the individual algorithms are groundbreaking, the combination of techniques fine-tuned to work well in harmony on a mobile device is impactful.
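That localization flow can be sketched as follows. All the names here are hypothetical (Niantic has not published an API), and the set-overlap matcher is a toy stand-in for real pointcloud registration — the point is only the shape of the pipeline: coarse fix first, then scan matching against a handful of candidate spheres.

```python
from dataclasses import dataclass

@dataclass
class Sphere:
    sphere_id: int
    center: tuple        # (lat, lon) of the sphere's anchor (illustrative)
    reference_scan: set  # feature descriptors stored in the cloud

    def match_score(self, scan):
        # Toy matcher: fraction of the phone's features found in this sphere.
        return len(self.reference_scan & scan) / max(len(scan), 1)

def register(coarse_fix, initial_scan, spheres, radius=0.001):
    """Coarse GPS/WiFi fix narrows the candidates; the phone's initial
    pointcloud scan picks the best match among them."""
    def near(s):
        return (abs(s.center[0] - coarse_fix[0]) < radius and
                abs(s.center[1] - coarse_fix[1]) < radius)
    candidates = [s for s in spheres if near(s)] or spheres
    return max(candidates, key=lambda s: s.match_score(initial_scan))
```

The coarse fix is what keeps this cheap: the phone never matches its scan against the whole planet, only against the few spheres near its reported position.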
The net effect is a total reframing of the AR experience. For decades, AR apps have had to look out and discover the world. Whether detecting markers, or building an environmental mesh, scanning the world takes time, awkwardly burdening the user experience. With this innovation, apps can instead look to Niantic’s AR Cloud to provide a world representation. Apps merely place themselves in that world, after which a rich, location-aware AR experience can launch immediately. Currently this process (pointcloud scanning, data upload, registration) takes just a few seconds. Soon that time will drop to near zero.
We see here a spectacular event — a world-mapping technology capable of scanning the entire planet being deployed to players of Pokémon GO: the only AR audience numerous enough to actually map the Earth.
You’re going to have to take my word that this tech is impactful, because its debut is modest: the new features are being applied to Pokémon GO’s little-used AR+ mode, which, like a Snapchat filter, places an animated Pokémon on top of your phone’s camera feed.
The world data can be used to create occlusion maps, a fancy term for geometry that’s used to hide (occlude) things. If the occlusion map says there’s a building between you and your favorite Pokémon, you won’t see it on your screen.
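The mechanism amounts to a line-of-sight test against the occlusion geometry. A minimal sketch, assuming the occlusion map is reduced to a set of axis-aligned boxes (a real renderer would instead do this per pixel with a depth buffer):

```python
def segment_hits_box(p0, p1, box_min, box_max):
    """Slab test: does the segment from p0 to p1 pass through the box?"""
    t_min, t_max = 0.0, 1.0
    for axis in range(3):
        d = p1[axis] - p0[axis]
        if abs(d) < 1e-12:
            # Segment runs parallel to this slab; must start inside it.
            if p0[axis] < box_min[axis] or p0[axis] > box_max[axis]:
                return False
        else:
            t0 = (box_min[axis] - p0[axis]) / d
            t1 = (box_max[axis] - p0[axis]) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_min = max(t_min, t0)
            t_max = min(t_max, t1)
            if t_min > t_max:
                return False
    return True

def pokemon_visible(camera, pokemon, occluders):
    """Draw the creature only if no occlusion-map box blocks the view."""
    return not any(segment_hits_box(camera, pokemon, lo, hi)
                   for lo, hi in occluders)
```

If a building from the occlusion map sits on the line between the camera and the Pokémon, the test fails and the Pokémon is hidden; move the box aside and it reappears.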
Occlusion has been a long-standing goal for Niantic. An earlier tech demo shows a neural net (ostensibly an image-based approach with no explicit world map) performing the same trick.
The new tech will also remove the awkward movement speed limitation of AR+. Players currently have to walk very slowly to maintain tracking (here we see a poor soul hilariously forced to turn off AR+ just to walk around a river).
These technical improvements may be exactly what AR+ needs to take off, but that is unlikely. The AR+ features themselves do little to help players, and catching Pokémon is actually slower with the new tech: instead of merely getting close to the location, players also have to physically interact with the Pokémon.
There are good reasons to be conservative with the rollout of this tech. It depends on mass data collection (which the community may reject), and performance at scale has yet to be tested. That said, it’s hard not to be a little disappointed by Niantic’s low ambitions. Or has it simply been difficult to find a truly meaningful gameplay role for this tech?
Much could be done with a planetary world model. Gamers could place and share content, advertisers could vie for prime real estate, game designers could go in and create personalized augmentations of the planet (directly and via procedural tools). None of these steps are being taken. Instead the tech is being used for the rather prosaic purpose of making the world a fair bit more realistic.
Fortunately, Niantic seems to have aspirations to become a platform player. Perhaps soon this API, powered by rich planetary map data, will be available for all developers. With this we can begin to see the shape of an augmented metaverse: a single, shared representation of the Earth in which all content developers can engage a shared human audience. Or perhaps instead we will see a fragmented set of augmented universes.
If Niantic does not open up planetary AR technology to the community, another company (Google, Facebook, Apple) will. It simply has to be explored.
One Step Forward
While this is a big move in terms of tech, the ramifications on the product side are meager at best. It is a foundational move; the burden is now on the market to find use cases that take the new tech to the next level. Will those come in the form of Pokémon GO extensions, or another game entirely? I’d bet on the latter. Pokémon GO’s gameplay is pretty settled; a new game is more likely to push the new capabilities.
Of course, hit AR games do not appear very often. The AR space is a one-ring circus, with Pokémon GO making all the money. When new experiences crack the market open, my bet is that they’ll be using planetary AR tech to create augmented worlds that are not only rich, but also instantly accessible. While Niantic has a good chance to be this trailblazer, opening up its platform to the wider community greatly increases the odds that such a hit will be built.