AR has the potential to reimagine experience design from the ground up, but first we need the catalyst: a proper spatial computing browser.
Apple’s Vision Pro is a step in the right direction, but it remains a traditional VR headset that uses (very sophisticated) smoke and mirrors to mimic AR, immersing the user in a 1:1 real-time recreation of their surroundings. Apple can then throw whatever it wants at this recreation, but what you are seeing is still a facsimile, and it still operates within a closed, app-centric ‘push’ model.
Spatial computing first needs to do what seems impossible and throw away the computer. All around us, IoT sensors are being embedded in everything in our cities and neighbourhoods. All a proper AR device needs to do is parse this data into something useful: in the same way a web browser pulls information from a server, an AR device will pull in this ambient data and make it useful. We need only look to Amazon’s Sidewalk network and Apple’s Find My for how this could work.
Thus AR devices can be built on an open platform like the one the web was founded upon, and everything around us becomes the interface. Road signs might offer directions and updates tailored to each driver. An advert might let you know someone is trying to call, visible to nobody else. Your neighbourhood plants will let you know when they need watering.
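To make the ‘pull’ idea concrete, here is a minimal sketch of how such an AR browser might turn ambient sensor broadcasts into personal overlays. Everything in it is hypothetical: the `SensorReading` shape, the sensor kinds, and the sample feed are illustrative assumptions, not a real IoT or AR API.

```python
from dataclasses import dataclass

# Hypothetical sketch: an AR 'browser' pulling readings from ambient
# sensors the way a web browser pulls documents from servers.
# All names and data here are illustrative assumptions.

@dataclass
class SensorReading:
    sensor_id: str
    kind: str      # e.g. "soil_moisture", "road_sign"
    value: float

def overlay_messages(readings):
    """Parse raw ambient readings into human-facing AR annotations."""
    messages = []
    for r in readings:
        if r.kind == "soil_moisture" and r.value < 0.2:
            messages.append(f"{r.sensor_id}: this plant needs watering")
        elif r.kind == "road_sign":
            messages.append(f"{r.sensor_id}: speed limit {int(r.value)}")
    return messages

# Simulated neighbourhood broadcast, standing in for a live sensor feed.
feed = [
    SensorReading("planter-12", "soil_moisture", 0.12),
    SensorReading("sign-a40", "road_sign", 30),
]
print(overlay_messages(feed))
```

The point of the sketch is the direction of flow: the environment publishes, and the device decides what to render for its wearer, rather than an app pushing content down.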
In this way, everything around us becomes a server, a computer, and we wear the device that lets us access that network. And if you want to unplug, it’s as easy as taking off your glasses.