From Concept to the Experience

Category: Mixed Reality Experience
Client: Staatstheater Augsburg
Duration: 3 months

After developing the initial concept last semester, we began the production phase this semester. Building on the feedback we received, we focused on refining and expanding the project across four main areas: the Fridaverse bundle, the user interface, the interactive painting, and the 3D reimaginings of Frida’s artworks. These elements represent the core of our work this semester, where we took what we had and brought it closer to a complete, immersive experience.

The Bundle

We came up with the idea for the bundle because we wanted the cube to be more than just a folded piece of paper with images. To enhance its value and usability, we created a set that includes the cube itself, an informational flyer explaining how to interact with it, and a stand so it can be displayed nicely at home as a souvenir.

The altered paintings on the cube were designed to spark curiosity. By partially hiding the full beauty of the artworks in the printed version, users are encouraged to explore the AR experience in order to reveal the images in their original, vibrant form. This approach makes the AR interaction feel more rewarding.

The bundle and the visual alterations serve to elevate the overall user experience, transforming a simple paper object into an engaging and meaningful part of the extended reality journey.

3D Paintings

Based on the feedback we received last semester, we revisited the 3D paintings with a fresh perspective. One recurring suggestion was to incorporate animation to make the scenes more engaging and visually dynamic. This semester, we responded by introducing subtle animated elements alongside the layered 3D visuals we had already developed. The goal was to enhance the experience without overpowering it. We wanted to add movement that brings the paintings to life while still respecting their original tone and intention.

From the beginning, we were conscious of the fine line between enhancement and distraction. Together with our client, we agreed that the core of the experience should remain rooted in the paintings themselves. Too many or overly expressive animations could pull attention away from the original artwork and even risk distorting Frida Kahlo’s artistic voice. That is why we chose a restrained approach, allowing the animations to support the atmosphere while giving users space to reflect on and interpret the pieces in their own way.

In addition to the animation work, we redesigned the framing of the paintings. Last semester, they were presented with a standard wooden frame. This time, we replaced it with a retablo-inspired design, referencing Mexican folk art and religious traditions. This change adds cultural depth and better aligns the visual identity of the experience with Frida’s heritage.

To further strengthen the emotional and thematic connection to the Frida ballet, for which this project serves as a souvenir, we selected background music from the ballet’s original score. The music helps to bridge the live performance and the digital experience, making the AR component feel like a natural continuation of the event.

The Interactive Painting

Planning and Initial Concept

We started by asking, "What should be accomplished, and how should the user interact?"

→ The goal was a web application accessible via a browser (e.g., www.fridaverse.de).

→ Users would receive a brief tutorial and then scan the sides of a physical cube to experience animations and interactive paintings.

The initial concept included a simple Python backend that hosted an HTML template rendering a basic AR scene using the AR.js library; a minimal sketch of such a page follows the milestone list below. The first milestones were:

  • tracking a 3D object,

  • displaying text and images on a marker,

  • ensuring performance was sufficient for mobile devices.
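
As a rough illustration of this first prototype, an AR.js scene can be declared with A-Frame markup along the following lines. The library versions, the "hiro" demo marker, and the image file name are placeholders for illustration, not the project's actual assets.

    <!DOCTYPE html>
    <html>
      <head>
        <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
        <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>
      </head>
      <body>
        <a-scene embedded arjs="sourceType: webcam;">
          <!-- "hiro" is AR.js's built-in demo marker; the cube faces would need custom markers -->
          <a-marker preset="hiro">
            <a-text value="Fridaverse" align="center" position="0 0.6 0"></a-text>
            <a-image src="painting.jpg" rotation="-90 0 0"></a-image>
          </a-marker>
          <a-entity camera></a-entity>
        </a-scene>
      </body>
    </html>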

It quickly became apparent that AR.js was not suitable for more complex image-based markers (e.g., pictures with detailed features converted into trackable formats). It also performed poorly in low-light conditions or with lower-end cameras. Additionally, the file serving (templates, images, videos, animations, etc.) was rudimentary and not suitable for modern development or production environments.

Switching Technologies

Both problems were addressed with new solutions:

  • AR tracking was replaced with MindAR, a JavaScript-based library optimized for image and face tracking and compatible with modern browsers; a minimal scene setup is sketched after this list.

  • The backend was migrated to Django, a full-stack framework written in Python, offering faster development, better scalability, and built-in security features.
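
On the front end, MindAR's A-Frame integration lets an image target be declared directly in markup, roughly as sketched below. The script versions, the targets.mind file, and the asset names are assumptions for illustration, not the project's actual files.

    <script src="https://aframe.io/releases/1.4.0/aframe.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/mind-ar@1.2.2/dist/mindar-image-aframe.prod.js"></script>

    <a-scene mindar-image="imageTargetSrc: ./targets.mind;"
             vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
      <a-assets>
        <!-- the unaltered, vibrant version of the painting revealed in AR -->
        <img id="original-painting" src="painting-original.jpg">
      </a-assets>
      <a-camera position="0 0 0" look-controls="enabled: false"></a-camera>
      <!-- targetIndex 0 refers to the first image compiled into targets.mind -->
      <a-entity mindar-image-target="targetIndex: 0">
        <a-plane src="#original-painting" width="1" height="1.4" position="0 0 0"></a-plane>
      </a-entity>
    </a-scene>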

Development and Challenges

With the updated tech stack, full development began. We migrated to GitHub for version control and project management (issues, documentation, etc.).

A scalable mindset was maintained:

  • A management interface for administrators was implemented (based on Django’s admin, but heavily modified).

  • Web application routing and API endpoints were added to collect and process usage data (e.g., the number of interactive paintings created over time); a client-side sketch of such a call follows this list.
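
From the browser, reporting a usage event to one of these endpoints could look roughly like this. The endpoint path, payload, and event name are hypothetical; only the CSRF cookie and header names follow Django's defaults.

    // Read Django's CSRF cookie (default name "csrftoken"); simplified helper.
    function getCookie(name) {
      const match = document.cookie.match(new RegExp("(^|; )" + name + "=([^;]*)"));
      return match ? decodeURIComponent(match[2]) : null;
    }

    // Hypothetical endpoint and payload, for illustration only.
    async function reportPaintingCreated() {
      await fetch("/api/usage/", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "X-CSRFToken": getCookie("csrftoken"),  // Django expects this header on POST requests
        },
        body: JSON.stringify({ event: "interactive_painting_created", createdAt: Date.now() }),
      });
    }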

In this new environment, a new prototype was developed. A significant amount of time was invested in optimizing the interaction between components, especially the loading order of assets and the initialization of the renderer, which were critical to proper functionality.
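
One way to make that ordering explicit, assuming MindAR is configured with autoStart: false, is to start tracking only after A-Frame signals that its renderer is ready. This mirrors the pattern from MindAR's documentation rather than our exact code.

    const sceneEl = document.querySelector("a-scene");

    // With mindar-image="...; autoStart: false", tracking is started manually
    // once the renderer is up, so assets and components initialize first.
    sceneEl.addEventListener("renderstart", () => {
      const arSystem = sceneEl.systems["mindar-image-system"];
      arSystem.start();
    });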

One major challenge was the lack of robust error handling, a common issue in web development. For rendering and handling 3D objects, we used A-Frame, a well-established WebXR framework. After several weeks of development, the first functional prototypes were tested with users, and their feedback was incorporated into ongoing development.

In addition to desktop testing for in-depth debugging, a smartphone testing workflow was established. To run the application on physical devices, it was essential to use HTTPS (instead of HTTP, which is easier to use in local development but insecure and blocked by most mobile browsers for media and camera access). As a workaround, self-signed SSL certificates were created to enable testing on actual hardware.
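
A quick way to see why HTTPS is unavoidable here: browsers only expose camera access in secure contexts. A small check like the following (a generic web-platform snippet, not project code) makes this visible during debugging.

    // getUserMedia is only available in secure contexts (https:// or http://localhost),
    // so navigator.mediaDevices is undefined when the page is served over plain HTTP.
    if (!window.isSecureContext || !navigator.mediaDevices) {
      console.warn("Camera access unavailable - serve the app over HTTPS (e.g., with a self-signed certificate).");
    }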

Tests showed mostly successful implementation across various devices, though a few failed to load the application correctly; these issues were related to Apple Safari's known media autoplay restrictions or platform-level security limitations. For the animation layers, custom media handlers were developed to control media playback, and a parallax effect was implemented to enhance the visual experience.
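
A simplified version of both ideas might look like the following. The element ids, the offset factor, and the component name are assumptions; the real handlers cover more edge cases.

    // Media handler: play the animation layer only while its cube face is tracked.
    // Starting playback inside a tracking event also helps with mobile autoplay policies.
    const face  = document.querySelector("#animation-face");   // entity with mindar-image-target
    const video = document.querySelector("#animation-video");  // <video> asset used by an <a-video> layer
    face.addEventListener("targetFound", () => video.play());
    face.addEventListener("targetLost", () => video.pause());

    // Simplified parallax: nudge the layer this component is attached to against the camera's yaw.
    AFRAME.registerComponent("simple-parallax", {
      tick: function () {
        const camera = this.el.sceneEl.camera;           // active THREE.js camera
        this.el.object3D.position.x = -camera.rotation.y * 0.05;
      },
    });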

The User Interface

In our initial version, we implemented both core features: a 3D animation and an interactive painting activity. These were presented through a UI-led, step-by-step instruction system designed to guide users through the experience intuitively. Early user testing yielded positive feedback, indicating that the concept was engaging and accessible. However, through continued iteration and deeper reflection on the preferences and needs of our target audience, many of whom favored minimal steps and intuitive interaction, we recognized the opportunity to streamline the experience further.

This insight led to the development of a second version, where the interface was simplified significantly. Instead of relying on step-by-step guidance, the experience was tied directly to physical interaction with the cube. Scanning one side, featuring the artwork, automatically triggers the 3D animation, while scanning the opposite side, designed for the painting activity, launches the interactive element. This approach removed unnecessary steps and made the experience more fluid and user-friendly, especially for audiences less familiar with digital interfaces.
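
In MindAR terms, this mapping can be expressed by compiling both cube faces into one target file and reacting to whichever face is found. The ids and the two handler functions below are placeholders for the actual animation and painting logic.

    <!-- targetIndex follows the order in which the two faces were compiled into targets.mind -->
    <a-entity id="artwork-face"  mindar-image-target="targetIndex: 0"></a-entity>
    <a-entity id="painting-face" mindar-image-target="targetIndex: 1"></a-entity>

    <script>
      const artworkFace  = document.querySelector("#artwork-face");
      const paintingFace = document.querySelector("#painting-face");

      // Placeholder handlers: start the 3D animation or open the painting activity.
      artworkFace.addEventListener("targetFound", () => startAnimation());
      paintingFace.addEventListener("targetFound", () => openPaintingActivity());
    </script>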

The User Tests

Three iterations of UI designs were tested to determine which worked best. Visually elaborate designs were appreciated for their appearance but lacked usability and failed to follow accessibility guidelines. Simpler designs, on the other hand, were more usable and less confusing for users.

We also tested the instructions and found that users generally dislike large blocks of text explaining the steps; they prefer short texts or visual instructions. On the other hand, too many micro-interaction-based instructions were mistaken for features, so we had to find a middle ground with less text and fewer interactions.

We ran A/B tests to determine whether there should be a single point of entry for both features or two separate options; the outcome was that separate options differentiate the two features more clearly.

The usability of the interactive painting feature was also tested; users were happy to use it as long as it responded reliably to touch and showed no screen sensitivity issues.

The deployed web app was also tested on local servers to verify that the production environment works. This surfaced some scaling issues and problems with the animations, which were reported and resolved later.