Tell Me, Inge...
Erzähl mir, Inge...
An interactive conversation with a Holocaust survivor, Inge Auerbacher
Tell Me, Inge/Erzähl mir, Inge is a conversation with Inge Auerbacher, who survived the Holocaust as a child in the Terezin concentration camp. This project tells her story: she chats about her early childhood, her time in Terezin, and her life after the war. We respected her experience, and she guided us on how she wanted her story told. I note this explicitly because it’s important that people tell their own stories; in situations like these, a product or design team is a collaborator, but not the expert.
- Teaching Holocaust history
In 2022, Meta approached StoryFile to collaborate on an immersive experience using Conversational AI Video technology for a VR/XR experience about a Holocaust survivor. The story was always intended for a wide range of audiences, and therefore needed to be easy to understand and to use. For most users, this would be a new type of experience, since multimodal video and voice technologies are still new to the public, especially in XR.
- Designing conversations with VR/XR in mind
My particular role on this project was Content Lead: I managed the content of the experience, specifically its conversational aspects. The conversational video technology in this experience uses StoryFile’s proprietary software, Conversa. Once the video content of the experience was filmed, I was in charge of inputting that content into Conversa and designing how to guide users through the experience.
The experience contains a limited, specific set of content, and we wanted to encourage users to follow a storyline without being too explicit about it, while still creating an overall successful experience. The creative team decided to structure the experience into stages that lined up with stages of Auerbacher’s life, supported by different objects and events that would show up onscreen. The objects represent things and events we deemed important at an earlier stage of development; each would change when triggered by a specific phrase, revealing a part of Auerbacher’s story.
But what would be those trigger phrases? The most notable thing about VR is, well, the VR – you can move around in a 3D space and things will move with you. So, how would that impact how the user triggers each event? Even if we had all the intentions in the world to make this as linear as possible, it doesn’t mean users will follow the intended steps. So how could we leave strongly-flavored breadcrumbs to guide users?
The design team decided on using prompts on-screen along with designated objects to draw users in. I brainstormed and fine-tuned the hints with the design team, as we wanted the hints to:
a) intrigue the user enough to ask prompts
b) be easy to say aloud
c) tie in with both the visual cue and what Ms. Auerbacher would say
d) match in tone across multiple story chapters
e) depending on the content, allude to well-known moments of the Holocaust (such as the Night of Broken Glass, the Star of David)
However, we still wanted users to be curious and chart their own path within the options we provided. Whether users wanted to ask one question or all of them in each chapter, we wanted to support that choice and still have them leave with a positive experience. We let users trigger moments at will, with certain mini-storylines unlocking only if a specific previous question had been asked (its object/prompt would then be replaced by another in the same spot).
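The unlock behavior described above can be sketched as a small state model: each on-screen slot holds a prompt, and triggering certain prompts swaps a follow-up into the same spot. This is an illustrative sketch with hypothetical names, not Conversa’s actual implementation.

```python
# Hypothetical sketch of prompt slots that unlock follow-up mini-storylines.
# Names and structure are illustrative, not StoryFile's internal model.

class PromptSlot:
    def __init__(self, prompt, follow_up=None):
        self.prompt = prompt          # phrase shown on screen
        self.follow_up = follow_up    # optional unlocked mini-storyline prompt

class Chapter:
    """One stage of the experience; slots may be triggered in any order."""
    def __init__(self, slots):
        self.slots = slots

    def trigger(self, slot_index):
        """Return the prompt that was answered and, if a follow-up exists,
        replace the prompt in the same on-screen spot with the unlocked one."""
        slot = self.slots[slot_index]
        if slot.follow_up is not None:
            self.slots[slot_index] = PromptSlot(slot.follow_up)
        return slot.prompt

# Example chapter: the second prompt only appears after the first is asked.
chapter = Chapter([
    PromptSlot("a friend in a dark place",
               follow_up="tell me more about your friend"),  # hypothetical
    PromptSlot("the yellow star"),
])

chapter.trigger(0)  # user asks the first prompt; its slot now shows the follow-up
```

The key design point mirrored here is that unlocked content occupies the same visual spot as the prompt that unlocked it, so the scene never grows more crowded as users progress.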
Additionally, I created the initial training data for the conversational flows, thinking through all the different ways people could ask the same question. This differed from other projects, since many visual cues and prompts pointed to the same response: a single response could have up to three kinds of trigger, namely the object in the scene, the prompt on screen, and the question itself. I pre-emptively created training data that accounted for the different ways people would phrase these. For example, with the prompt 'a friend in a dark place' (see the image at the start of the page), users could ask about a few things: the bunk bed, the dolls, the phrase 'a friend in a dark place,' or simply ask "did you have any friends?"
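The three-trigger idea above can be sketched as training data where several surface forms (object name, on-screen prompt, natural questions) map to one response intent. The matcher below is a deliberately naive stand-in; a production system like Conversa would use an NLU model trained over this kind of data, and the intent name is hypothetical.

```python
# Illustrative training data: multiple surface forms map to one intent.
# The intent key and matching logic are assumptions, not Conversa's format.

TRAINING_DATA = {
    "friend_in_terezin": [
        "a friend in a dark place",   # the on-screen prompt
        "the bunk bed",               # an object visible in the scene
        "the dolls",                  # another object in the scene
        "did you have any friends",   # a natural-language question
        "tell me about your friend",
    ],
}

def match_intent(utterance, data=TRAINING_DATA):
    """Very naive matcher: exact match after lowercasing and stripping
    trailing punctuation. A real system would use a trained classifier."""
    text = utterance.lower().strip(" ?!.")
    for intent, variants in data.items():
        if text in variants:
            return intent
    return None

match_intent("Did you have any friends?")   # matches via the question variant
match_intent("the bunk bed")                # matches via the object variant
```

Collecting all variants under one intent keeps the response logic simple: no matter which trigger the user reaches for, the same video clip plays.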
Once the experience was published, I was in charge of reviewing user interactions and flagging any discrepancies. This is also where I look for ASR errors and add phonetic variants as needed. For example, in the German version, interactions would vary between 'Freundin' (f., friend) and 'Freund' (m., friend). I make adjustments in the proprietary product to account for these errors and reduce or eliminate them altogether in future interactions.
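One simple way to picture the variant handling is a normalization pass that maps known ASR outputs to a canonical token before intent matching. The 'Freund'/'Freundin' pair is from the project; the table-based mechanism below is an illustrative assumption, not the proprietary tooling.

```python
# Sketch of collapsing ASR variants to canonical tokens before matching.
# The variant table and normalize step are illustrative assumptions.

ASR_VARIANTS = {
    "freund": "freundin",   # collapse masculine/feminine forms to one key
    "froind": "freundin",   # hypothetical misrecognition of the same word
}

def normalize_token(token):
    return ASR_VARIANTS.get(token, token)

def normalize_utterance(utterance):
    """Lowercase the utterance and map known ASR variants to canonical
    tokens, so downstream intent matching sees one consistent form."""
    return " ".join(normalize_token(t) for t in utterance.lower().split())

normalize_utterance("hattest du einen Freund")  # both forms reach one key
```

Adding a row to the variant table is much cheaper than retraining a model, which is why this kind of post-launch patching is a natural fit for monitoring work.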
- A collaborative story told in English & German
Some content tweaks were made at the end (such as removing extra content, or rewording prompts and questions) to fine-tune the user’s experience. Now that it’s released to the public, I’m also in charge of monitoring user interactions, as mentioned previously. We've noted that with the on-screen prompts, users tend to repeat the exact or nearly exact phrasing (as expected, since similar behavior is seen with prompts in text chatbots) and don't deviate much in wording, which keeps users on track with the intended storyline.
Looking at user interactions has also given us additional data on how people interact with and ask questions about things in the experience, which we’re noting for future tweaks to this project and potential future projects in the VR/XR space.
links & press.
The experience is live and can be accessed at inge.storyfile.com.
This project was presented at SIGGRAPH 2023. More information regarding that conference presentation can be found here.
Here's a press release on Meta's website about the experience.