Unlocking Auto-Adaptive Mixed Reality 🔓
Creating Mixed Reality experiences that adapt to the user's physical space is challenging. Today we will hear from Moritz Loos, co-founder of 2Sync, about how they help developers tackle this problem, the opportunities and use cases beyond mixed reality, and a free app where you can experience “co-located adaptive mixed reality” first-hand.

Interview with Moritz Loos

How did you get started with developing MR experiences?

Moritz Loos: When I started working on MR experiences, we had a single-stage setup and needed to pre-measure everything. We then put it into Unity and manually fitted it to the real space. As you can imagine, this method was very limited, as it worked only for one specific room. With our technology at 2Sync, we can build an application once and it can be played in any room globally, automatically adapting the virtual content to the physical location.

What are the different levels of mixed reality applications?

Moritz Loos: Sure, there are three main levels. The first is content floating in the air with no physical context, like puzzle games where you can see mixed reality content in front of you while still seeing the real world. The second involves overlapping real-world objects with virtual content, such as Piano Vision, which maps virtual piano keys onto a real piano. The third is a more advanced integration, where the virtual and real worlds interact deeply, like in our game House Defender, where real-world objects are digitally mapped and the user can move around freely in a safe, familiar space.

What is “adaptive mixed reality” and why is it important?

Moritz Loos: Adaptive mixed reality is a term we've created. It is about replacing real-world objects with digital ones, rather than just overlaying them. For example, our technology can create a full VR experience that matches the layout and objects of your real room. This approach allows users to walk around naturally and interact with the digital content as if it were part of their physical space. It’s essential because it enhances immersion, making the experiences more intuitive and safer.

How do you ensure digital content fits physical spaces in adaptive mixed reality?

Moritz Loos: We start with a digital representation of the real room, which can be scanned using our technology or scene understanding from Meta. Developers then describe their virtual objects in a parametric way, setting constraints like height or proximity to other objects. Our system matches these virtual objects with the real-world ones, solving an optimization problem to find the best placement. This process ensures the digital content fits the physical space perfectly.
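The matching step described above can be sketched as a small constraint-satisfaction routine. The following is a minimal illustration, not 2Sync's actual SDK: the class names, fields, and the greedy scoring strategy are all assumptions made for the example (a production system would solve a global optimization rather than assign greedily).

```python
from dataclasses import dataclass

@dataclass
class RealObject:
    """A scanned real-world object (dimensions in metres)."""
    name: str
    height: float
    width: float
    depth: float

@dataclass
class VirtualObject:
    """A virtual object described parametrically via constraints."""
    name: str
    min_height: float
    max_height: float
    min_width: float

def fit_score(v: VirtualObject, r: RealObject):
    """Return a fit score (lower is better), or None if constraints fail."""
    if not (v.min_height <= r.height <= v.max_height):
        return None
    if r.width < v.min_width:
        return None
    # Prefer real objects whose size is closest to the constraint minimums.
    return (r.height - v.min_height) + (r.width - v.min_width)

def match_objects(virtual, real):
    """Greedily assign each virtual object to a distinct real one."""
    assignments = {}
    available = list(real)
    for v in virtual:
        scored = [(fit_score(v, r), r) for r in available]
        scored = [(s, r) for s, r in scored if s is not None]
        if scored:
            _, best = min(scored, key=lambda sr: sr[0])
            assignments[v.name] = best.name
            available.remove(best)
    return assignments
```

For instance, a virtual "altar" constrained to table height would be matched to a real desk, while a taller virtual "tower" would land on a real shelf.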

What inspired you to create an SDK rather than just games?

Moritz Loos: I experienced firsthand the difficulties of manually adapting VR experiences to specific rooms. This inspired me to create an automated solution that other developers could use. While creating games is simpler, developing an SDK is more challenging as it requires intuitive interfaces and convincing others of its value. Our goal is to enable developers to create adaptive mixed reality experiences easily, showcasing the potential through collaborations with well-known partners.

What are some key design guidelines for creating immersive experiences?

Moritz Loos: It's crucial to teach users that the space is safe to walk around. They need to trust that all real-world objects are accurately mapped and replaced. The virtual objects should match the characteristics of the real objects they replace, ensuring, for instance, that a solid-looking virtual table isn’t placed on a real plant. This helps avoid confusion and maintains immersion. Ensuring no real objects are missed in the room is also critical.

How can adaptive mixed reality be used beyond gaming?

Moritz Loos: While gaming is a common use case, adaptive mixed reality can benefit other areas like training and social experiences. For example, training scenarios for firefighters can be enhanced by simulating fires in real-world rooms, allowing them to practice and optimize their movements in a safe environment. The ability to move freely and interact with real-world objects while being in a virtual environment is invaluable for such training applications.

How does your SDK handle platform differences?

Moritz Loos: Our SDK is designed to be platform-independent, but we also support platform-specific functionality. For instance, Meta’s scene understanding works well on their devices but not on Pico or other headsets. We provide fallback solutions like room corner-based alignment to ensure our technology works across different platforms. This way, developers can create applications that are compatible with various hardware, benefiting from each platform's unique capabilities.
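The fallback approach described here is essentially a provider-selection pattern: try the platform's native scene understanding first, and fall back to corner-based alignment when it is unavailable. A minimal sketch, with hypothetical class names and return values invented for the example (this is not the 2Sync SDK's real API):

```python
class SceneProvider:
    """Interface: a way of obtaining a digital representation of the room."""
    def available(self) -> bool:
        raise NotImplementedError
    def get_room(self) -> dict:
        raise NotImplementedError

class MetaSceneUnderstanding(SceneProvider):
    """Native scene understanding, only usable on supported headsets."""
    def __init__(self, supported: bool):
        self.supported = supported
    def available(self) -> bool:
        return self.supported
    def get_room(self) -> dict:
        return {"source": "meta_scene_api", "objects": ["desk", "couch"]}

class CornerAlignment(SceneProvider):
    """Fallback: the user marks room corners with a tracked controller."""
    def available(self) -> bool:
        return True  # works on any headset with tracked controllers
    def get_room(self) -> dict:
        return {"source": "corner_alignment", "objects": []}

def resolve_room(providers: list) -> dict:
    """Use the first provider that works on the current hardware."""
    for provider in providers:
        if provider.available():
            return provider.get_room()
    raise RuntimeError("no scene provider available on this platform")
```

On a Meta headset the native path is taken; on other hardware the same application code transparently falls through to the corner-based fallback.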

What challenges do users face with room scanning?

Moritz Loos: Scanning a room accurately can be a challenge, especially for end users. Manual scanning with a controller is tedious and can be inaccurate. We’re excited about advancements like Meta’s semantic labeling, which will automate this process and make it more user-friendly. This technology will soon allow users to scan their rooms effortlessly, providing a detailed digital representation that developers can use to create more immersive and adaptive experiences.

Can you tell us about your recent hackathon experience?

Moritz Loos: At the Meta Presence hackathon, we gamified the mundane task of vacuuming. We attached a controller to a vacuum cleaner and created a mixed reality experience where users could see digital grass growing on the floor and cut it with their vacuum. This made the task fun and engaging, and our SDK makes the experience adapt to any environment. Our project won in the productivity category, showing how mixed reality can turn everyday chores into enjoyable activities.

That’s it for today,

See you next week