Nov 2022

Duration - 8 weeks

Team - Angel Liu, Jason Wang, Nina Gao

Tools - Figma, Adobe After Effects

Skills - Interviewing, Research Synthesis, Journey Mapping, Ideation, User Interface Design, Prototyping, Filming, Video Production, Presentation

NVISION is a mixed reality museum experience that breaks the glass between visitors and animals to enhance and advance the mission of the Carnegie Museum of Natural History. Through augmented environments and interactive elements in the Microsoft HoloLens, NVISION connects visitors with animals both physically and emotionally, empowering learning and care for animal conservation. AR postcards are provided for visitors to take away and share their experience on social media through the museum's designated app.


Our client, the Carnegie Museum of Natural History (CMNH), has a plethora of dioramas and three-dimensional spaces, but from the visitors' point of view the exhibitions are rather static and distant. We aimed to enhance the visitor experience through our design. As local residents of Pittsburgh, we made multiple field trips to CMNH and analyzed its mission and values through the official website. We came up with a model that demonstrates the value CMNH is promoting – raising awareness of conservation through local community building and education.

To further understand the pain points of the current visitor journey, we interviewed 16 random museum visitors at CMNH and aggregated their quotes. Through the framework of "what we heard, what we learned, and what this means for design," we synthesized our findings into design implications. We also looked into how other museums use technology to create innovative exhibitions or improve existing experiences. These technologies include augmented reality, virtual reality, mixed reality, and digital projections.

Primary and secondary research synthesis

Design Implications

Based on our interview results, there is a large dissonance between the museum's mission and the visitors' experience. Some of the quotes we gathered were:

  • "Museum information is outdated and text-heavy."
  • "I feel distant from the dioramas due to the glass. I am not immersed."
  • "I cannot tell what is related to animal or plant conservation."

Meanwhile, some positive quotes related to the physical proximity between the exhibited animals' habitats and the visitors' own homes:

  • "Local animals make me feel more connected."
  • "This section reminds me of where I used to live. I feel relatable."

Based on the insights from our research synthesis, we framed our problem statement as – "How can we build an immersive and connective experience in CMNH to raise awareness among the community about conservation through technology?" – and developed four design implications for further ideation:

  • Form an emotional connection between animals and visitors through hyper-localized curation and immersive experiences.
  • Provide an accessible and interactive information system for visitors to learn from the museum.
  • Extend the connection to endangered animals worldwide and motivate care for animals.
  • Promote our exhibition and the importance of animal conservation to a wider community.

Design Principles

Also based on the museum's mission, we developed three design principles to set the tone for our final goal: education, connection, and expansion.


System Diagram

Our earlier secondary research on technologies and our field trips to CMNH enabled us to decide on the system for the new museum experience. Below is a system diagram summarizing the interaction system in front of a diorama. The technology we chose is mixed reality, which provides an immersive experience and adapts to the existing 3D spaces in CMNH, and the device is the Microsoft HoloLens, a pair of AR goggles that visitors wear throughout the museum experience.

System diagram

Visitor Journey

With the design implications, design principles, and interaction system established, we created the user journey through storyboards and a journey map. We brainstormed and sketched numerous options and scenarios during the storyboarding phase. With desk crit sessions and feedback from professors and guest lecturers, we finally narrowed down to the storyline below.

The four rows of storyboards demonstrate the four main stages of a visitor's journey in our new experience for CMNH, and the small text below the bold captions is the audio guide script.

  1. The visitor enters the museum and gets a HoloLens for the designated mixed reality experience. They go through a simple onboarding tutorial and can either choose the recommended route or customize their own destination. AR arrows appear along the chosen route to guide the visitor to their desired location.
  2. Along the recommended route, the visitor is guided to the local animal section first, to establish emotional connections between the visitor and the animals through physical proximity. Each diorama includes two main interactions – i) an AR globe interaction illustrating the animal's location and other basic information through the audio guide, and ii) an immersive habitat interaction that immerses the visitor in the animal's habitat to carry out one of the animal's daily activities, for example, hunting. The audio guide also explains the relevant activity and the factors that potentially endanger the animal.
  3. After forming emotional connections with the local animals, the visitor is led to the worldwide animal sections to see more endangered animals. For endangered animals only, a low-poly version of the animal idles in the diorama beside the static animal model. The visitor is prompted to tap on the low-poly animal and is then immersed in the animal's devastated habitat. By clicking on devastated areas like tree stumps, the visitor triggers a pledge to choose an action that helps reconstruct the habitat – for example, using fewer paper products. After pledging, the habitat is restored to a healthy state, and the low-poly animal comes out of the glass to interact with the visitor and thank them. The visitor receives a token from the low-poly animal.
  4. After the museum experience, the visitor can exchange the tokens for corresponding postcards featuring the animals' pictures. At home, by scanning the QR code on the postcard, the visitor can download the CMNH app. A banner in the app leads them to scan the animal postcard, where the animal comes alive as a 3D low-poly model, and multiple animals can interact with each other when scanned together. The visitor can record a video of their animal(s) and share it on social media with animal conservation tags.

After the journey is finished, more people on social media and in the local community will notice the special MR experience at CMNH and be prompted to learn, visit, and share again, forming a positive cycle that promotes both CMNH and the concept of animal conservation.

Journey map

Visual Guidelines

Referencing standards for minimum text sizes at different viewing distances, and drawing on our experience using the HoloLens during exploratory research, we developed visual guidelines for our AR interface and mobile app interface to ensure consistent branding across all platforms. Considering the branding, we also came up with the name NVISION for our experience, because it enables people to envision the animals' habitats, to envision a sustainable lifestyle, and to envision a brighter future for our nature.


Interactions in AR goggles such as the HoloLens are mainly gestural, so it is important to establish a simple and intuitive gesture system for NVISION. For gesture exploration, we not only tried the HoloLens ourselves but also did secondary research on gesture interactions across different versions of the HoloLens. Because people are unwilling to learn and memorize complex gestures or watch lengthy tutorials, and informed by our journey map and participants' interview responses, we settled on four main gestures for NVISION – close tap, far tap, swipe, and wrist tap.

  • Close tap is used when the interface or object is right in front of the visitor, who can simply tap the button or object with their index finger.
  • Far tap is used when the interactive component is far from the visitor or there is an obstacle in between (e.g., glass). We referenced the middle gif below, where a dotted line connects the user's index finger to the interactive component, and a small circle on the component indicates exactly where the visitor is pointing. However, we found the pinching gesture very unintuitive on the HoloLens, so we adopted the same gesture as the close tap for the tapping action here.
  • Left-right swiping turns the globe, and downward swiping closes an interaction. Some gestures in specific dioramas are also variations of swiping in different directions. Swiping uses the full hand, which makes it easy to recognize and memorize.
  • Wrist tap is a designated gesture to bring up the menu. We created this gesture based on a common mobile phone interaction – bringing up the hand holding the phone and tapping on the screen.

The four gestures were made into illustrations so they could be inserted into the visitor journey as simple onboarding tutorials and hints.

Close tap, far tap, swipe, and wrist tap

Key Features

Feature 01: Guide

The first main feature of NVISION is the guide. Through a simple tutorial, visitors learn to use the HoloLens default gestures to activate the menu and access navigation, where they can choose the recommended route or customize their destination. In our earlier research, we found that people are usually discouraged by extensive tutorials and have trouble remembering them all at once. Therefore, we implemented a simple gesture onboarding before each new interaction the visitor is about to experience.

Feature 02: Interactive Learning

The second main feature is the empowerment of interactive learning. In each diorama, glowing outlines indicate which specific parts are interactive, or clickable. As explained in the gesture section, we used a dashed line as an extension of the fingertip, so visitors can tap on objects that are far away or isolated behind the glass. A brief onboarding tutorial hints at the gestures prior to the very first interaction.

This feature incorporates two main interactions: the globe interaction and the immersive habitat interaction. After the visitor taps on the animals in the diorama, a globe showing the species' living location appears along with a brief audio introduction. While listening, the visitor can swipe the globe to explore where the species lives and see how far it is from their homeland.

The second part is the immersive habitat. An AR low-poly version of the animal's habitat appears after the visitor taps on a part of the environment in the diorama. The visitor can "live" in and explore the animal's environment with realistic ambient sound. They can interact with elements in the habitat and learn more about the species. For example, they can catch a fish in the river just like a bear does, and hear about the bear's diet and how human activities are harming it.

Feature 03: Emotional Connection

From our research, it is clear that in order to make visitors truly care about conservation, we need to form an emotional connection between visitors and the animals in the dioramas, so they become empathetic and willing to take action for animal conservation. Our approach to forming this connection focuses mainly on endangered animals: the visitor enters an immersive environment of the animal's destroyed habitat and actually sees the traces of human activity. The visitor is then asked to make a pledge, through which they learn what actions they can take to restore the animal's habitat. After choosing an action, the visitor sees the habitat being restored, and the AR animal inside the diorama breaks the glass and comes out to thank them. The visitor can interact with the AR animal and find some hidden gems – for example, the gorilla will copy the visitor's actions. A badge is given to the visitor after the interaction, which can be exchanged for a take-home postcard at the reception.

Feature 04: Expansion of Social Impact

To expand the impact of NVISION to a wider community and advocate for animal conservation, we used postcards as interactive souvenirs to trigger sharing on social media. The postcards visitors receive in exchange for their badges are AR-recognizable. Once visitors have downloaded the CMNH app by scanning the QR code on the postcard, they can scan the animal on the postcard in the app and see it come back to life. When multiple animals are scanned at the same time, the visitor can even see them interact with each other. They can use the CMNH app to record fun videos and share them on social platforms with tags relating to NVISION and animal conservation. This way, more people see NVISION and are willing to come and experience it, forming a positive feedback loop that raises awareness and expands the impact of animal conservation.

Demo Session

For the live demo session, we provided two prototypes for participants to experience. The first was built in Adobe Aero: an interactive prototype residing on an iPad. Participants could see the AR habitat on the screen and tap to trigger the interactions and hear the audio guide. The second was a VR prototype using VR goggles with an iPhone inserted. This prototype was not interactive, but participants could experience an immersive habitat and see the AR gorilla. Below are screen recordings of the AR prototype, a screenshot of the VR prototype, documentation of the demo session, and the promotional poster for NVISION.

VR prototype

Demo session documentation and NVISION promotional poster


Since my undergraduate degree was in psychology, I have long been curious about and interested in the interaction between humans and different kinds of digital devices, especially multimodal, futuristic interfaces like AR and VR. In this project, we were able to explore this field in a deep and holistic way, and I am very proud of what we came up with. During the public presentation and demo session, we received a lot of positive feedback from people saying they would be excited to experience our design in the museum if it were implemented. It was a pity that we did not have advanced knowledge of AR development tools like Unity, but we all tried in our own ways to realize our concept as faithfully as possible. We explored Unity for the first time to animate the gorilla model; we used one of the most difficult software packages in the world – Autodesk Maya – for the animation that Unity could not handle; we tried numerous visualizations for the highlighted and selected states of objects in the diorama; and so on. When we had a specific, ambitious, but fantastic idea, we all strove to realize it in the best way we could. I learned so much in this project, including AR interface design, micro-interactions, AR experience design, and various software and tools.

If we had had more time and money, we could have explored Unity and Leap Motion more deeply to create a holistic HoloLens demo that is both interactive and immersive. We could also incorporate different stages for the restoration of the habitat. For now, it is a one-time experience – after the visitor makes a pledge, the habitat is completely restored. If we want visitors to keep coming back, we could make the restoration slower and subtler, so that each time the visitor returns, the habitat has changed into a better state. A surprise reward could be given once the habitat is completely restored, and this progress could be shown in the app as well. We also want visitors to be able to capture snippets of their interactions with the AR habitat and animals, so a camera function might be incorporated in the future. There are indeed multiple intervention points we were not able to address, but I am glad that we completed all of our in-scope goals.

More Projects

Wanna know more? Looking forward to hearing from you!
Contact me→