Built in Unity, the platform was, I believe, the first to integrate a Google Tango AR device, an HTC Vive VR system, and a PC into a single real-time mixed reality sandbox where user avatars could interact and explore a set of experiences.
Avatars that looked like old TVs with wings and robotic grabbers let each user see what the others were looking at, moving, or drawing. Users could launch the different experiences by placing color- and icon-coded blocks onto a pedestal. An annotation tool, which was essentially a 3D drawing brush, let users communicate visually with each other, and VoIP let them speak to each other.
The platform was demoed at the 2016 AT&T SHAPE Conference, where it was open to the public. Users could try the demo in VR, in AR, or with a keyboard and screen, and see how the different experiences worked in each. A spectator camera placed in the environment projected its view onto a large screen so attendees could watch how the users were interacting. Annotations persisted between users, so newcomers could see what others had drawn, and we reset them now and again to keep the environment from getting too cluttered.
We included a wide mix of experiences — this was a platform to show the potential of these technologies, so we wanted to go wider than our usual entertainment focus. Some of the experiences: