Immersive Technologies: Explaining AR, VR, XR, MR, and Spatial Computing

March 25, 2024

It’s hard to keep up with new terminology in the ever-evolving landscape of technology. When it comes to immersive technologies, this is particularly true, so much so that we’ve included yet another new term, “Immersive Technology”, just to explain them.

“Immersive Technology” is a commonly used umbrella term for experiences that merge the physical world with digital or simulated reality. That description makes it a good overarching classification for the “(insert letter here) Reality” group of technologies.

In this guide, we’ll summarize the following terms, their differences, and how they are commonly used:

  • Augmented Reality (AR)
  • Virtual Reality (VR)
  • Extended Reality (XR)
  • Mixed Reality (MR)
  • Spatial Computing

Let’s dive in with the one that is closest to our hearts here at Ocavu: Augmented Reality.

1. Augmented Reality (AR): Superimposing Digital Elements on the Physical World

Augmented Reality enhances (or augments) the real world by overlaying digital information onto physical surroundings. Digital elements such as 3D models, images, or text can be placed into the user’s field of view through devices like smartphones, smart glasses, or AR/VR/MR headsets. 

AR has been growing in popularity since the launch of the Pokémon GO mobile game in 2016, which integrates virtual creatures into the real-world environment. What makes AR stand out from its more gaming-focused sibling, VR, is its application to extending everyday tasks. One very common use case is retail and marketing, where consumers can visualize products in their space before purchasing them.

The technology works by sensing and tracking the environment, giving the illusion that a placed digital object is anchored in the real world. Continually measuring the proximity and orientation of your device relative to the real-world environment is computationally heavy, which is why wearable hardware today typically falls into two categories: lightweight smart glasses or heavier AR/VR/MR headsets.
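
To make this concrete, here is a minimal sketch of the surface detection that anchors a digital object, using the standard WebXR hit-test API (assuming a WebXR-capable browser and the `@types/webxr` type definitions; this is an illustration, not Ocavu’s production code):

```typescript
// Minimal WebXR hit-test sketch. Each frame, the device reports where a
// ray from the viewer intersects detected real-world surfaces; anchoring
// a model to that pose is what keeps it "stuck" in place as the user moves.
async function startArSession(): Promise<void> {
  const session = await navigator.xr!.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });
  const refSpace = await session.requestReferenceSpace("local");
  const viewerSpace = await session.requestReferenceSpace("viewer");
  const hitTestSource = await session.requestHitTestSource!({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const hits = frame.getHitTestResults(hitTestSource);
    if (hits.length > 0) {
      // The pose holds the position/orientation of the detected surface
      // point; a renderer would place (and keep) the 3D model there.
      const pose = hits[0].getPose(refSpace);
      console.debug("anchor pose", pose?.transform.position);
    }
    frame.session.requestAnimationFrame(onFrame);
  });
}
```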

Most smart glasses today simply superimpose graphics without continuously analyzing their position relative to the real world. Examples include a heads-up display (HUD) or movie playback, where the graphic stays fixed in place on the screen regardless of where the user looks.

AR Configurator Experience by Ocavu
A web-based Augmented Reality (webAR) experience by Ocavu launched in the Chrome browser on an iPhone 13

Leading AR/VR/MR headsets, on the other hand, are jam-packed with sensors and the latest computing advancements to provide real-time analysis and interaction between the real and the digital. Hardware technology is still catching up to provide the best of both worlds when it comes to portability.

The early confusion between these two technologies is part of what led to the popularization of the term Mixed Reality. More details on that later.

As headsets and glasses continue to develop, the most common and convenient way to experience AR today is through a smartphone. Because AR is not completely immersive like VR, you can simply point a smartphone or tablet at a location in the real world and place an object digitally, experiencing it through your device. With AR, there is no concern about the real world appearing in your peripheral vision, as your experience augments that real world anyway.

At Ocavu, we specialize in web-based AR, meaning that consumers can launch an AR experience from the browser on their smartphone, without a dedicated iOS or Android app. What makes web-based AR valuable is its accessibility: while only a small percentage of the population owns a headset, and uses it only for short periods, most people have a smartphone permanently on hand. And where downloading an iOS or Android app causes friction, particularly at the point of purchase, web-based AR provides an immersive experience that anyone can access within seconds, launched straight from the mobile browser.
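
As a rough illustration of that no-install flow, the sketch below feature-detects AR support from a plain web page using the standard WebXR API. `showFallbackViewer` is a hypothetical stand-in, and real webAR products (especially on iOS, where WebXR support is limited) often rely on other mechanisms such as AR Quick Look:

```typescript
declare function showFallbackViewer(): void; // hypothetical non-AR 3D preview

// Wire a plain "View in your space" button: if the browser supports
// immersive AR, launch it directly; otherwise fall back to a 3D viewer.
async function wireLaunchButton(button: HTMLButtonElement): Promise<void> {
  const arSupported =
    "xr" in navigator &&
    (await navigator.xr!.isSessionSupported("immersive-ar"));

  button.addEventListener("click", () => {
    if (arSupported) {
      startArSession(); // e.g. the hit-test sketch shown earlier
    } else {
      showFallbackViewer();
    }
  });
}
```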

2. Virtual Reality (VR): Immersive Simulations in a Virtual World

Virtual Reality immerses users entirely in a computer-generated environment. Because users need to be fully immersed in this digital world, VR typically involves the use of a headset that completely covers the user's field of vision. This technology is often associated with gaming or the metaverse, where users can escape into virtual worlds and interact with elements as if they were physically present.

Because VR is completely immersive, users of commercially available headsets are typically limited to a fixed location in the real world for their own safety. This inability to maintain perspective on, or control of, their real-world surroundings is part of why, as amazing as VR is, it also has limitations for everyday use cases.

If you’ve seen a blooper video of someone running full speed into a wall wearing a headset, it’s likely because they were confused by a VR experience. They were probably running away from a zombie or something. This is a testament to how realistic and immersive VR experiences can be. Most VR devices have safety features that prevent users from wandering too far, while peripherals like hand controllers, or even hand gestures, control movement in the virtual world.
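
As a sketch of the underlying idea (commercial headsets implement this natively, e.g. Meta’s Guardian system), WebXR exposes the user-defined playspace as a bounded-floor reference space that an app could poll each frame:

```typescript
// Warn a user approaching the edge of their playspace. boundsGeometry is
// the polygon the user traced as "safe"; for brevity this checks distance
// to the polygon's vertices, where a real app would test its edges and
// fade in a warning grid.
async function watchBoundary(session: XRSession): Promise<void> {
  const space = (await session.requestReferenceSpace(
    "bounded-floor",
  )) as XRBoundedReferenceSpace;

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const viewerPose = frame.getViewerPose(space);
    if (viewerPose) {
      const { x, z } = viewerPose.transform.position;
      const nearEdge = space.boundsGeometry.some(
        (p) => Math.hypot(p.x - x, p.z - z) < 0.5, // within ~0.5 m
      );
      if (nearEdge) console.warn("Approaching playspace boundary");
    }
    frame.session.requestAnimationFrame(onFrame);
  });
}
```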

Jump Immersive VR Experience
3D render of the Jump wingsuit

Beyond gaming, VR has made significant strides in education, training, and therapy. VR simulations enable trainees to practice complex tasks in a risk-free environment, medical students can perform virtual surgeries, and individuals with phobias can undergo exposure therapy in a controlled setting.

Although VR as a technical term relates only to the visual aspect, many VR applications also leverage additional immersive enhancements such as motion, sound, and smell. This is a good example of how strict technical definitions have been loosened for commercial purposes: for consumers, it is much easier to round off marketing definitions to the nearest, most dominant interaction.

One such VR-centric example is Jump, where users enjoy an immersive base-jumping simulation. Users wear a VR helmet and wingsuit while being suspended by a harness in a controlled, indoor environment. Heavy-duty fans blow wind at them for a deeper immersive experience as they glide through a scenic rock gorge in Virtual Reality. 

3. Extended Reality (XR): Encompassing AR, VR, and Everything In-Between

Commercially, Extended Reality is the least known term on this list, but, like AR and VR, it originated over 50 years ago. Extended Reality serves as an umbrella term that encompasses Augmented Reality, Virtual Reality, and Mixed Reality. So how does it differ from “Immersive Technology”? XR is the subset of immersive technology that focuses on the visual aspect.

Unlike XR, Immersive Technology includes non-vision technologies like spatial audio and haptic or tactile feedback that contribute to a sense of presence and engagement. So VR by definition is a subset of XR, which is a subset of Immersive Technology, even though immersive experiences with elements outside of XR are often marketed as VR (due to VR being their primary interaction). 

4. Mixed Reality (MR): Supercharged Augmented Reality

Mixed Reality is quite nuanced and has multiple, slightly differing definitions. Cutting through the many vague or misunderstood definitions published online, two distinct definitions hold weight from both a technical and a commercial perspective.

The first and more technical definition is that while AR overlays digital elements onto the physical world, MR takes it a step further by allowing these digital objects to interact with the environment and vice versa. MR aims to seamlessly blend the digital and physical worlds, creating an environment where virtual and real-world elements coexist and interact in real-time.

That description sounds a lot like Augmented Reality, and this is where the nuance lies when compared to AR. Many AR platforms have already evolved far beyond simply placing an object into the real-world environment. The 3D objects placed in AR can be interactive, and these AR objects can also stick to or bounce off of real-world physical structures, meaning that they interact with the environment too.

The key distinction between Mixed Reality and Augmented Reality lies in the way MR maintains an ongoing real-time connection between the environment and the digital overlay, beyond just proximity and surface detection. While AR is typically limited to tracking the position of physical surfaces or objects, MR actively interprets their state, characteristics, and context, providing a continuous feedback loop that can drive the logic of the digitally overlaid experience and/or the physical objects.

Although the term was invented earlier, “Mixed Reality” was popularized by Microsoft with the HoloLens, perhaps as a way to describe a technology that went beyond what was widely misconstrued as AR at the time. Back then, Google Glass and similar devices that presented information (heads-up displays) but didn’t track the world were sometimes incorrectly labeled as AR by the media. You can understand why Microsoft felt differentiation was needed, even though retaining the AR classification could have been appropriate.

Microsoft HoloLens in an Industrial Application
Microsoft HoloLens used in an industrial environment. Image courtesy of Microsoft.

An example of MR is Microsoft’s HoloLens being used in an industrial environment. MR doesn’t just anchor digital information onto predefined points as AR does; it continually analyzes the real-time state of components, identifying areas that may require attention or maintenance. In this case, a virtual on/off button could not only read the current state of a physical machine but also be used to toggle that machine on or off. This ability to process and respond to ongoing changes in the real world, beyond static anchor points, sets MR apart from AR.
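
A conceptual sketch of that two-way loop follows. Every type, endpoint, and function here is hypothetical; the point is only the shape of the loop, where the overlay reads live machine state and the virtual button writes state back:

```typescript
declare function highlightMachine(id: string, color: string): void; // hypothetical renderer hook

interface MachineState {
  running: boolean;
  temperatureC: number;
}

// Read the machine's live state from a (hypothetical) backend.
async function readMachineState(machineId: string): Promise<MachineState> {
  const res = await fetch(`/api/machines/${machineId}/state`);
  return res.json();
}

// The virtual on/off button: read physical state, then write it back.
async function onVirtualButtonPressed(machineId: string): Promise<void> {
  const state = await readMachineState(machineId);
  await fetch(`/api/machines/${machineId}/power`, {
    method: "POST",
    body: JSON.stringify({ running: !state.running }),
  });
}

// The overlay's logic is driven by ongoing physical state, not a static
// anchor: e.g. flag a machine that is running hot.
async function updateOverlay(machineId: string): Promise<void> {
  const state = await readMachineState(machineId);
  if (state.temperatureC > 90) highlightMachine(machineId, "red");
}
```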

The second definition of Mixed Reality is far simpler: it relates to a device that supports a “mix” of both AR and VR. A Mixed Reality headset can seamlessly dial the pass-through view or camera feed (transparency) to switch between AR and VR modes. This ability to switch has led some publications to describe Mixed Reality as “encompassing both AR and VR and everything in between”. From a technical perspective, it is more accurately described as an enhancement of AR, since it requires interaction with the physical space, even when anchoring a fully immersive world on top of that space.

5. Spatial Computing: Beyond Immersive Technologies

Spatial Computing refers to a broader concept that extends beyond AR, VR, or MR, and their umbrella term, XR. Similar to MR, Spatial Computing involves the use of computer algorithms and technologies to interpret and respond to the physical space around us. The main difference, however, is that Spatial Computing does not necessarily present the user with a virtual, augmented, or mixed reality visual experience.

So is Spatial Computing the same as “Immersive Technologies”? No. It’s a broader term. Again.

The key difference is that Spatial Computing isn’t necessarily immersive, nor is it necessarily human-facing. It takes real-world inputs and does something with that data; that something could, for example, be automated by AI.

Beyond the cameras and sensors typically utilized in immersive technologies, Spatial Computing can take inputs from a wide range of devices to build a holistic picture of the physical world. Example inputs include IoT motion sensors, audio sensors, temperature sensors, and other data sources like GPS and maps.
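
A toy sketch of that idea: heterogeneous readings are fused into a single spatial model that any consumer, human or AI, can query. All types here are hypothetical, and note that there is no headset or visual experience in sight:

```typescript
// Normalized readings from very different physical sources.
type SpatialReading =
  | { kind: "motion"; sensorId: string; triggered: boolean }
  | { kind: "temperature"; sensorId: string; celsius: number }
  | { kind: "gps"; lat: number; lon: number };

class SpatialModel {
  private readings: SpatialReading[] = [];

  ingest(reading: SpatialReading): void {
    this.readings.push(reading);
  }

  // Any consumer (a dashboard, an AR overlay, or an automated AI agent)
  // can query the fused picture of the physical space.
  isRoomOccupied(): boolean {
    return this.readings.some((r) => r.kind === "motion" && r.triggered);
  }
}
```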

Apple Vision Pro
Apple Vision Pro. Image courtesy of Apple.

With the launch of the Apple Vision Pro, Apple is marketing the device as the world’s first “Spatial Computer”. Because Spatial Computing is an umbrella term that includes immersive technologies, it is debatable how first-to-market should be classified.

Even from a commercial perspective, earlier devices such as the 2018 Magic Leap One were publicly acknowledged as spatial computing devices, as pointed out in this excellent Forbes article by Cathy Hackl, which dives deeper into Spatial Computing. You could even argue that smartphones use spatial computing through AR experiences.

What makes the Vision Pro a “Spatial Computer” if the HoloLens and Meta Quest Headsets are “Mixed Reality”?

This might just be the real question that led you to read this article in the first place. The answer is highly subjective, partially because of the blurred lines between the technical and commercial definitions of each term and how they are influenced and accepted over time. There are two answers: an obvious commercial one and a debatable technical one.

The obvious commercial one is that there is a huge advantage to being the first to market on a specific technology. In this case, Spatial Computing isn’t new, but that doesn’t matter because it was never successfully marketed to mainstream consumers. By calling the Vision Pro the first Spatial Computer, Apple can establish a first-to-market flagship status for this new era in technology, just like Microsoft did with the HoloLens.

What also suggests that market differentiation is the primary motivation is that Apple has told all third-party app developers to refer to their apps as “Spatial Computing Apps”, even when an app’s functionality falls smack bang in the middle of the scope of AR or VR. That decision doesn’t exactly pay homage to the 50-plus-year history of the technology and builds further confusion, at least in 2024.

Microsoft HoloLens, Meta Quest Pro, and Apple Vision Pro.

From a capabilities perspective, there is a solid argument that the Vision Pro has features beyond the scope of Mixed Reality and must therefore be considered differently. Remember, Mixed Reality relates to visual experiences only, which makes the term just one subset of, or input to, Spatial Computing. The debate is not whether the Vision Pro is a Spatial Computer, but rather which spatial-only features make Mixed Reality an inappropriate classification. This again goes back to the question of whether experiences should be named for their primary interaction or more broadly for the 1% that falls outside of that technical scope (as audio and haptic feedback have for VR experiences over the years).

The groundbreaking enhancements in eye and hand tracking are pointed to as the Apple Vision Pro’s key Spatial Computing differentiators. You could argue, though, that these are both Mixed Reality features, at least in their primary use cases. Both eye and hand inputs are used to control the immersed user’s experience through gaze tracking or gesture control, and there is nothing in Mixed Reality’s definition that says real-world inputs cannot be organic/human. Secondly, both features are available in Mixed Reality headsets like the Microsoft HoloLens and Meta Quest Pro (regardless of how advanced or intuitive each device may be).
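
For what it’s worth, hand input already has a standard web API within the MR toolbox: the sketch below reads a fingertip pose via the WebXR Hand Input module, assuming a device and browser that support it:

```typescript
// Read the index fingertip pose for each tracked hand; an app could use
// these joint poses for pinch or point gesture recognition.
function readHands(frame: XRFrame, refSpace: XRReferenceSpace): void {
  for (const source of frame.session.inputSources) {
    if (!source.hand) continue; // controller or gaze input, not a hand
    const tip = source.hand.get("index-finger-tip");
    if (tip) {
      const pose = frame.getJointPose?.(tip, refSpace);
      console.debug("fingertip", pose?.transform.position);
    }
  }
}
```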

Looking at it fairly, there doesn’t seem to be a clear new categorization that separates these devices. Yes, they have slightly different capabilities, but they are all Spatial Computers primarily focused on Mixed Reality (or, technically, immersive) applications that can be classified as enhanced computing built upon AR or VR experiences.

Final Thoughts

As you can see, many of these terms are used interchangeably in commercial settings. It will be interesting to see which terms win out as big tech jockeys for market share. One suggestion: although these devices can accurately be referred to as Mixed Reality or Spatial Computing headsets, we should continue the tradition of referring to the experiences themselves by their primary interactions, which in many cases will be AR or VR.

As these computers advance human interaction, remember: they wouldn’t be strapped to users’ faces if the entire experience wasn’t built on immersion.