When the 87 autonomous drones that assembled the International Mars Surface Station arrived on Mars, they were initially lost.
There was no belt of Martian positioning satellites girdling the planet to provide even the most rudimentary geopositioning, and no helpful Martians had been found to give reliable directions.
To build a map of Mars from scratch, the drones had been instructed to form a posemesh upon arrival. The anxiously awakening drones had sent out their network pairing requests, communicating directly peer to peer in the absence of anything as terrestrial and mundane as the internet. Over these peer-to-peer connections, the drones performed a carefully coordinated algorithmic dance, allowing them to synchronize their clocks down to mere fractions of a millisecond, and so the posemesh was poised for positioning.
Their meticulously synchronized clocks enabled them to start timing messages between each other, giving them distances to their neighbors accurate to within centimeters, or even millimeters.
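The principle behind this timing-based ranging can be sketched in a few lines. This is a minimal, illustrative model (the function name and numbers are assumptions, not anything from the mission): with synchronized clocks, the one-way flight time of a message multiplied by the speed of light gives the distance. Because a residual clock error of just one nanosecond translates to roughly 30 cm of range error, real ranging systems typically also use two-way exchanges, which cancel out any remaining clock offset.

```python
# A minimal sketch of one-way time-of-flight ranging, assuming the
# drones' clocks are already synchronized. Names are illustrative.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second, in vacuum

def distance_from_timestamps(t_sent: float, t_received: float) -> float:
    """Distance in metres implied by a one-way message flight time,
    where both timestamps (in seconds) come from synchronized clocks."""
    return SPEED_OF_LIGHT * (t_received - t_sent)

# A message that took one microsecond to arrive travelled ~300 metres.
print(distance_from_timestamps(0.0, 1e-6))
```

This also makes the economics of the posemesh's clock synchronization concrete: millimeter ranging demands timing agreement orders of magnitude tighter than what the radio messages alone provide, which is why the synchronization dance precedes the ranging.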
A battery of consensus algorithms and spatial heuristics locked them into the first posemesh on Mars — a network of drones and devices reasoning about space collaboratively over the mesh they had just established around the landing site.
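The smallest building block of such spatial consensus can be illustrated with a toy example (this is a sketch of the general multilateration idea, not the mission's actual algorithms): given the three mutual distances between three drones, geometry alone fixes their relative positions in a shared 2D frame, up to rotation and reflection. A real posemesh would solve this jointly in 3D, by least squares, over many noisy measurements.

```python
# Illustrative: embed three drones A, B, C in a shared 2D coordinate
# frame using only their three pairwise distances.
import math

def embed_triangle(d_ab: float, d_ac: float, d_bc: float):
    """Return 2D coordinates for A, B, C given their mutual distances."""
    a = (0.0, 0.0)          # A anchors the origin of the shared frame
    b = (d_ab, 0.0)         # B fixes the direction of the x-axis
    # Law of cosines gives C's x-coordinate; Pythagoras gives its height.
    x = (d_ab**2 + d_ac**2 - d_bc**2) / (2 * d_ab)
    y = math.sqrt(max(d_ac**2 - x**2, 0.0))
    return a, b, (x, y)

# A 3-4-5 right triangle: C lands at (0, 4), directly "above" A.
a, b, c = embed_triangle(3.0, 4.0, 5.0)
```

Each additional device adds more distance constraints than unknowns, so the frame becomes increasingly over-determined, and disagreement between measurements is exactly what the consensus layer arbitrates.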
Armed with precise knowledge of their positions relative to one another, they started the much more exciting work of collaboratively mapping the surface of Mars. Various optical instruments, from expensive LIDAR arrays to simple RGB cameras, started reconstructing the surface of Mars into a fine-grained polygon mesh, high-res textures, and neural radiance fields (NeRFs) streamed home to eager scientists on Earth.
This rendering twin was much too bandwidth-intensive and feature-rich to be particularly helpful for navigation, but it was all the more useful back on Earth, where enthusiastic researchers and world-crafters turned the data into new insight and entertainment experiences to feed the human mind’s voracious appetites.
In a move that seems painfully obvious in hindsight, the mission to Mars had been partially funded by Epic Games, who purchased the rights to the rendering twin in an auction and immediately announced their intent to bring Mars into the upcoming Fortnite 2.
Millions of titillated terrestrials tuned in and walked virtually over the surface of Mars within days of the drones’ landing. Not to be outdone, the Saudis had created the world’s largest indoor swimming pool in Neom’s The Line and allowed the nouveau riche to submerge themselves in a VR simulation of Mars in a weighted space suit.
For navigating and working on Mars, however, a more judiciously sized representation of the world had to be constructed.
The digital domain that the drones inhabited in the posemesh was expressed semantically rather than topographically. Although bandwidth may one day be abundant on Mars, there was still no reason to use many megabytes of data per second when a few kilobytes would do. And so, the second digital twin created by the drones for navigating Mars was a lower resolution collection of simple and compressed feature points, semantically segmented reference objects, and wireframes.
These allowed the robots to move efficiently in the increasingly complex 3D-printed indoor environments of the architecture they were erecting.
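To make the kilobytes-versus-megabytes contrast concrete, here is an illustrative sketch of how compact a single semantic landmark can be (the field layout and sizes are assumptions for illustration, not a description of the actual posemesh wire format): a short label, a 3D position, and a compressed binary feature descriptor.

```python
# Illustrative: serialize one semantic landmark as a 2-byte label id,
# three 4-byte floats for position, and a 32-byte binary feature
# descriptor (roughly the size of an ORB descriptor).
import struct

def pack_landmark(label_id: int, x: float, y: float, z: float,
                  descriptor: bytes) -> bytes:
    """Pack a landmark into a compact little-endian binary record."""
    return struct.pack("<H3f", label_id, x, y, z) + descriptor

packet = pack_landmark(7, 1.5, -2.0, 0.25, bytes(32))
# 2 + 12 + 32 = 46 bytes per landmark; tens of thousands of landmarks
# fit in a few megabytes, where textured mesh data for the same area
# would run to orders of magnitude more.
```

The point is not the exact encoding but the ratio: navigation needs recognizable, well-positioned reference points, not photorealism, so the semantic twin can be thousands of times smaller than the rendering twin.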
The posemesh of Mars allowed the 87 initial landing drones to print and assemble the International Mars Surface Station collaboratively.
Establishing a shared map of their new world was as instrumental to their success as the gear they carried with them from Earth. Without that shared map, they would have had no idea where to place any of their cargo.
Through the posemesh, the participating devices could experience Mars through the combined sensory experience of their peers, like some fledgling tentacled alien probing its surroundings with 87 prehensile digits.
It wasn’t the first posemesh ever established, of course, and far from the biggest.
A globe-spanning terrestrial posemesh had replaced GPS as the default positioning protocol for electronic devices and automata a decade before the lander drones first synced their clocks. On Earth, the posemesh contained tens of billions of participating devices and over a trillion dollars of value locked in its spatial consensus and reputation protocols.
The posemesh regulated most traffic on Earth, monitoring the movements and reputations of billions of vehicles and drones as they delivered resources, passengers, and french fries to their destinations in the dense and hulking megacities of the 21st century.
Coffee cups were delivered directly to office windows by automated drones at negligible cost, and by participating in the posemesh, these drones avoided the headache of midair collisions or GPS blackouts between and inside buildings.
The gig economy had vanished as quickly as it had once appeared three decades earlier when five-minute-delivery guarantees made it literally impossible for even the most destitute and exploited human workers from the global south to pedal fast enough to compete.
When cities like Beijing handed their inner-city traffic over entirely to the posemesh, commutes were cut in half, and millions of hours of human productivity, amounting to thousands of years, were unlocked daily.
Smoother traffic also resulted in less pollution as fewer cars and trucks jerked back and forth in gridlocked traffic, reducing emissions by hundreds of thousands of tons of CO2 every single day.
But most important of all, perhaps, was the role that the posemesh played in the everyday lives of the humans living on Earth.
By the start of the 21st century’s third decade, augmented reality had begun its journey to replace digital displays as humanity’s primary computing interface.
The first primitive augmented reality devices relied on centralized visual positioning systems, with face-worn cameras assembling proprietary and ad-supported maps of the world.
The violation of having one’s field of view constantly monitored by impossibly powerful tech giants would have reached deep into the human subconscious, and into our very relationships, had an alternative not been found.
The inevitability of augmented reality hung as a promise and a threat over Earth during the decade of metaverse mania. It was clear that augmented reality would allow humanity to experience each other and the world in amazing new ways — but also that the inexorably hungry machine of surveillance capitalism would monitor every moment, paint every pavement, and share every whispered conversation.
Tens of billions of dollars poured into spatial computing and the metaverse faster than into any other human endeavor, and petabytes of digital representations of streets, buildings, and faces clogged the congested arteries of the metaverse.
But at ConjureCon 2024, a company called Auki Labs unveiled the unimagined — a pair of AR glasses that shipped with no camera at all.
The sleek frames of the glasses connected to a privacy-preserving posemesh, the very first of its kind, keeping the wearer safe from the prying eyes of the data-guzzling incumbents of the internet age. Hardly anyone had predicted it at the time, but the posemesh was the linchpin that allowed humanity to take the plunge into the cyberdelic dimensions of human creativity once predicted by Terence McKenna.
In the end, this allowed the 87 drones to assemble a home for nine SpaceX employees on a distant world.
When humanity and its emissaries first set foot on a world yet more distant than Mars, their first communication will undoubtedly be some variant of the question: Where am I?