NVISION is a mixed reality museum experience that breaks the glass between visitors and animals to advance the mission of the Carnegie Museum of Natural History. Through augmented environments and interactive elements in Microsoft HoloLens, NVISION connects visitors with animals both physically and emotionally, fostering learning about and care for animal conservation. Visitors receive AR postcards to take away, and can share their experience on social media through the museum's dedicated app.
Our client, the Carnegie Museum of Natural History (CMNH), has a plethora of dioramas and three-dimensional spaces, but from the visitors' point of view the exhibitions are rather static and distant. We aimed to enhance the visitor experience through our design. As local residents of Pittsburgh, we made multiple field trips to CMNH and analyzed its mission and values through the official website. We developed a model that captures the value CMNH promotes: raising awareness of conservation through local community building and education.
To further understand the pain points of the current visitor journey, we interviewed 16 randomly selected museum visitors at CMNH and aggregated their quotes. Using the framework of "what we heard, what we learned, and what this means for design", we synthesized our findings into design implications. We also looked into how other museums use technology to create innovative exhibitions or improve existing experiences. These technologies include augmented reality, virtual reality, mixed reality, and digital projections.
Our interview results revealed a large dissonance between the museum's mission and the visitors' experience. Some of the quotes we heard were:
Meanwhile, some positive quotes related to the physical proximity between the exhibited animals and the visitors:
Based on the insights from our research synthesis, we framed our problem statement as "How can we build an immersive and connective experience in CMNH to raise awareness among the community about conservation through technology?" and developed four design implications for further ideation:
Also drawing on the museum's mission, we developed three design principles to set the tone for our final goal: education, connection, and expansion.
Our earlier secondary research on technologies and our field trips to CMNH informed the system we designed for the new museum experience. Below is a system diagram summarizing the interaction system in front of a diorama. We chose mixed reality because it provides an immersive experience and adapts to CMNH's existing 3D spaces, and our device is Microsoft HoloLens, a pair of AR goggles that visitors wear throughout the museum experience.
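The interaction loop in front of a diorama can be sketched as a small state machine. The state names and transitions below are our reading of the system diagram, not code from the actual HoloLens build:

```python
# Hypothetical sketch of the diorama interaction states. The names are
# illustrative: "outlines_glowing" marks interactive parts of the diorama,
# and the two tap targets branch into the globe and habitat experiences.
TRANSITIONS = {
    ("idle", "gaze_at_diorama"): "outlines_glowing",
    ("outlines_glowing", "tap_animal"): "globe_view",
    ("outlines_glowing", "tap_environment"): "immersive_habitat",
    ("globe_view", "swipe"): "globe_view",        # spin the globe in place
    ("globe_view", "dismiss"): "outlines_glowing",
    ("immersive_habitat", "exit"): "outlines_glowing",
}

def step(state: str, event: str) -> str:
    """Advance the interaction state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Keeping unknown events as no-ops means stray gestures never put the visitor in an undefined state.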
With the design implications, design principles, and interaction system established, we created the user journey through storyboards and a journey map. We brainstormed and sketched numerous options and scenarios during the storyboarding phase. With desk-crit sessions and feedback from professors and guest lecturers, we narrowed them down to the storyline below.
The four rows of storyboards demonstrate the four main stages of a visitor's journey in our new experience for CMNH; the small text below the bold captions contains the audio-guide scripts.
After the journey is finished, more people on social media and in the local community will notice the special MR experience at CMNH and be prompted to learn, visit, and share in turn, forming a positive cycle that promotes both CMNH and the concept of animal conservation.
With reference to standards for minimum text size at different viewing distances, and drawing on our experience using HoloLens during exploratory research, we developed visual guidelines for the AR interface and the mobile app interface to ensure consistent branding across all platforms. With branding in mind, we also came up with the name NVISION for our experience, because it invites people to envision the animals' habitats, envision a sustainable lifestyle, and envision a brighter future for nature.
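The distance-based sizing rule behind such guidelines can be sketched with basic trigonometry: for a fixed visual angle, minimum legible text height grows linearly with viewing distance. The 0.5-degree threshold below is an illustrative assumption, not a value from our actual guidelines:

```python
import math

def min_text_height(distance_m: float, visual_angle_deg: float = 0.5) -> float:
    """Minimum legible text height (in meters) for a given viewing distance.

    Applies the visual-angle rule h = 2 * d * tan(theta / 2). The 0.5-degree
    default is an illustrative comfort threshold, not an official spec value.
    """
    theta = math.radians(visual_angle_deg)
    return 2 * distance_m * math.tan(theta / 2)

# Text on a diorama label viewed from 2 m away under this threshold
# needs to be roughly 1.7 cm tall.
print(round(min_text_height(2.0) * 100, 1))
```

The same function covers both near-field AR labels and far dioramas by varying only the distance argument.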
Interactions in AR goggles such as HoloLens are mainly gestural, so it was important to establish a simple, intuitive gesture system for NVISION. For gesture exploration, we not only tried HoloLens ourselves, but also did secondary research on gesture interactions across different versions of HoloLens. By reviewing our journey map as well as participants' responses in interviews, we settled on four main gestures for NVISION: close tap, far tap, swipe, and wrist tap, because people are unwilling to learn and memorize complex gestures or watch lengthy tutorials.
We illustrated the four gestures so they could be inserted into the visitor journey as simple onboarding tutorials and hints.
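A four-gesture vocabulary maps cleanly onto a dispatch table. The sketch below is hypothetical: the action names are our illustration of how the gestures pair with NVISION's interactions, not code from the implementation:

```python
# Hypothetical gesture-to-action mapping for NVISION's four gestures.
from typing import Callable

def open_menu() -> str:                # wrist tap brings up the main menu
    return "menu opened"

def select_near(target: str) -> str:   # close tap on a reachable object
    return f"selected {target}"

def select_far(target: str) -> str:    # far tap via the dashed-line pointer
    return f"selected distant {target}"

def rotate(target: str) -> str:        # swipe, e.g. spinning the globe
    return f"rotated {target}"

GESTURES: dict[str, Callable[..., str]] = {
    "close_tap": select_near,
    "far_tap": select_far,
    "swipe": rotate,
    "wrist_tap": open_menu,
}

def handle(gesture: str, *args: str) -> str:
    """Dispatch a recognized gesture to its action."""
    return GESTURES[gesture](*args)
```

Because each gesture has exactly one meaning everywhere in the experience, a one-line hint per gesture is enough onboarding.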
The first main feature of NVISION is the guide. Through a simple tutorial, visitors learn to use the default HoloLens gestures to activate the menu and access navigation, where they can choose the recommended route or customize their destination. In our earlier research, we found that people are often discouraged by extensive tutorials and have trouble remembering them all at once. Therefore, we placed a brief gesture onboarding before each new interaction the visitor is about to experience.
The second main feature is interactive learning. For each diorama, glowing outlines on specific parts indicate that they are interactive, or clickable. As explained in the gesture section, we used a dashed line as an extension of the fingertip, so visitors can tap on objects that are far away or isolated behind the glass. A brief onboarding tutorial hints at the gestures before the very first interaction.
This feature incorporates two main interactions: the globe interaction and the immersive habitat interaction. After tapping on an animal in the diorama, a globe showing where the species lives appears, along with a brief audio introduction. While listening, the visitor can swipe the globe to explore the species' range and see how far it is from their own home.
The second part is the immersive habitat. An AR low-poly version of the animal's habitat appears after the visitor taps on part of the environment in the diorama. The visitor can "live" in and explore the animal's environment with realistic ambient sound. They can interact with elements in the habitat and learn more about the species; for example, they can catch a fish in the river just as a bear does, and hear about the bear's diet and how human activities are harming it.
From our research, it is clear that to make visitors genuinely care about conservation, we need to form an emotional connection between visitors and the animals in the dioramas, so that they become empathetic and willing to take action for animal conservation. Our approach to forming this connection focuses mainly on endangered animals: the visitor enters an immersive environment of the animal's destroyed habitat and sees the traces of human activity firsthand. The visitor is then asked to make a pledge, learning in the process what actions they can take to help restore the animal's habitat. After choosing an action, the visitor sees the habitat being restored, and the AR animal inside the diorama breaks the glass and comes out to thank them. The visitor can interact with the AR animal and discover hidden gems; for example, the gorilla will mimic the visitor's actions. A badge is awarded after the interaction, which can be exchanged for a take-home postcard at the reception.
To expand the impact of NVISION into a wider community and advocate animal conservation, we used postcards as interactive souvenirs to spark sharing on social media. The postcards visitors receive in exchange for their badges are AR-recognizable. Once a visitor downloads the CMNH app by scanning the QR code on the postcard, they can scan the animal on the postcard in the app and see it come back to life. When multiple animals are scanned at the same time, the visitor can even see the animals interact with each other. They can use the CMNH app to record fun videos and share them on social platforms with tags for NVISION and animal conservation. This way, more people see NVISION and are motivated to come experience it themselves, forming a positive feedback loop that raises awareness and expands the impact of animal conservation.
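The postcard-scanning flow boils down to resolving recognized markers to AR content, with an extra lookup when two postcards appear in frame together. The sketch below is illustrative only: the postcard IDs, model filenames, and interaction table are all hypothetical, not assets from the CMNH app:

```python
# Illustrative sketch of the postcard-scanning flow in the CMNH app.
# All identifiers here are hypothetical placeholders.
ANIMAL_MODELS = {
    "postcard_gorilla": "gorilla.glb",
    "postcard_bear": "bear.glb",
}

# Optional pairwise animation when two postcards are in frame together.
PAIR_INTERACTIONS = {
    frozenset({"postcard_gorilla", "postcard_bear"}): "play_together",
}

def on_scan(detected_ids: list[str]) -> dict:
    """Resolve detected postcard markers to the AR content to render."""
    models = [ANIMAL_MODELS[i] for i in detected_ids if i in ANIMAL_MODELS]
    interaction = PAIR_INTERACTIONS.get(frozenset(detected_ids))
    return {"models": models, "interaction": interaction}
```

Keeping the pairwise animations in a separate table means new animal combinations can be added without touching the single-postcard path.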
For the live demo session, we provided two prototypes for participants to experience. The first, built in Adobe Aero, was an interactive prototype running on an iPad: participants could see the AR habitat on the screen, tap to trigger interactions, and hear the audio guide. The second was a VR prototype using VR goggles with an inserted iPhone. It was not interactive, but participants could experience an immersive habitat and see the AR gorilla. Below are the screen recordings of the AR prototype, a screenshot of the VR prototype, documentation of the demo session, and the promotional poster for NVISION.
Since my undergraduate degree in psychology, I have been curious about and interested in the interaction between humans and different kinds of digital devices, especially multimodal, futuristic interfaces like AR and VR. In this project, we explored this field in a deep and holistic way, and I am very proud of what we came up with. During the public presentation and demo session, we received a lot of positive feedback; people said they would be excited to experience our design in the museum if it were implemented. It was a pity that we lacked advanced knowledge of AR development tools like Unity, but we each tried in our own way to realize our concept as faithfully as possible. We explored Unity for the first time to animate the gorilla model; we used Autodesk Maya, a notoriously demanding piece of software, for animation Unity could not handle; we tried numerous visualizations for the highlighted and selected states of objects in the diorama; and more. Whenever we had a specific, ambitious, and exciting idea, we all strove to realize it as best we could. I learned a great deal in this project, including AR interface design, micro-interactions, AR experience design, and a range of software tools.
If we had more time and budget, we could have explored Unity and Leap Motion more deeply to create a holistic HoloLens demo that is both interactive and immersive. We could also incorporate stages in the restoration of the habitat. For now, it is a one-time experience: after the visitor makes a pledge, the habitat is completely restored. If we want visitors to keep coming back, we could make the restoration slower and subtler, so that each time the visitor returns, the habitat has improved a little more. A surprise reward could be given once the habitat is fully restored, and this progress could be shown in the app as well. We would also like visitors to be able to capture snippets of their interactions with the AR habitat and animals, so a camera function might be incorporated in the future. There are indeed several intervention points we were not able to address, but I am glad that we completed all of the goals that were in scope.