EXPERIENTIAL DESIGN / ASMT 01

Naura / 0356798 / Interactive Spatial Design
MMD60204 / Experiential Design
Assignment 01: Trending Experience


__________________________________________________________

Week 1 : What is Experiential Design?

Well, it focuses on crafting experiences that deeply engage users through multiple senses and emotions. Under this broad concept lies Experience Design, which includes specialized areas such as User Experience (UX) Design - the practice of shaping how users interact with products, systems, or services.

For our upcoming project, we will apply Experiential Design principles through mobile Augmented Reality (AR). This approach allows users to interact with digital elements overlaid onto the physical world, creating an immersive and memorable experience.

Mr. Razif also shared several key concepts and tips:

  • Minimum Viable Product (MVP)
    • A basic version of a product (such as a wireframe or prototype) designed to test essential functionality.
    • Helps quickly validate ideas without the need for full-scale development.
  • Vertical Slice
    • A fully functional, small portion of a larger project.
    • Instead of creating rough drafts of the entire system, the focus is on completing a single feature in detail.
    • Example: In a shopping app, this could be animating the action of adding an item to the cart, including interactions and feedback.
  • Exploring Interaction with Visuals
    • Simple interactions (such as animations, color changes, and visual feedback) can significantly enhance the user experience.
    • The goal is to create a strong, memorable impact through focused and minimal interactions.
  • Facing Difficulties and Moving Forward
    • Once an idea is developed, it is crucial to identify potential challenges early.
    • Addressing these challenges ensures steady progress and increases the likelihood of completing the project on time — an important lesson, especially considering past project difficulties.

We were informed that the upcoming project may now be completed in pairs, which I am truly grateful for. Reflecting on my experience two semesters ago - when I struggled with the solo development of my game project - this comes as a welcome relief.


My thoughts:

Seeing how AR blends digital with the physical world made me excited — it feels like designing something that lives around the user, not just on a screen.

Learning about concepts like MVP and Vertical Slice was also useful. I tend to overthink or try to do everything at once, so hearing that it's okay (and actually better) to focus on just one well-polished feature really stuck with me. The shopping cart animation example was a nice way to visualize what “small but complete” means in a project.

I also appreciated how Mr. Razif emphasized addressing difficulties early. It made me reflect on my past struggles in solo projects — how things piled up just because I didn’t plan for issues from the start. That’s why I’m genuinely relieved that this project can be done in pairs. I’ll be working with Natania, and I already feel more confident knowing we can rely on each other. As the saying goes, two heads are better than one — and I’m looking forward to what we can accomplish together.

__________________________________________________________

Week 2 : Designing Experiences

In this week’s Experiential Design class, we explored the differences between major experience design concepts:

  • CX (Customer Experience): Focuses on the overall impression a person has of a brand, shaped by interactions across all touchpoints (purchase, service, etc.).
  • UX (User Experience): Focuses on the design of 2D interfaces like websites or apps, ensuring usability, efficiency, and user satisfaction.
  • XD (Experience Design): Centers on immersive, 3D experiences (e.g., AR, VR, spatial environments) that engage multiple senses.
  • BX (Brand Experience): Creates a consistent emotional and sensory relationship across all brand interactions, shaping how people feel and connect with the company.
  • IA (Information Architecture): Organizes and structures information within an interface to ensure smooth, intuitive navigation.

We also revisited key concepts around user personas and user mapping tools, which help designers understand user needs and design better experiences. The four main tools covered were:

  • Empathy Map: Understands what users say, think, feel, and do.
  • Customer Journey Map: Tracks the user’s experience across various touchpoints over time.
  • Experience Map: Looks at the broader context of the user’s interactions, even beyond the company’s direct influence.
  • Service Blueprint: Maps out the entire service process, including front-stage (visible) and back-stage (behind-the-scenes) activities.
Fig 1.0 | NNGroup's UX Mapping Cheat Sheet

For the class exercise, we were tasked with creating a journey map based on an experience that we had all shared. Naturally, we chose our own school as the case study, since it’s a setting familiar to everyone. As a group, we mapped out various parts of the school experience, including the journey to class, searching for food or toilets, and finding spots to relax after class.

We also identified thoughtful gain points that enhance the experience, such as the face recognition system that speeds up entry or the lakeside area, which provides a calming space to unwind.

Fig 1.1 | My Group's User Mapping for Taylor's

One of the most important takeaways from this exercise was realizing how customizable journey maps can be. There is no fixed format for how a journey map should look, and it can be adapted depending on what the team wants to communicate. Because of this, one piece of feedback from Mr. Razif was that we should use emojis or visuals to represent pain points, gain points, and solutions, which would make the map more appealing - helping to communicate insights more effectively.

At the end of class, we were assigned homework to complete a future journey map. This task builds on the journey mapping activity we did earlier, where we analyzed our experience on campus. Now, we are expected to envision and map out an improved or ideal future journey, addressing current pain points and integrating possible solutions or enhancements.

Fig 1.2 | Our Future Journey Mapping for Taylor's

My thoughts:

I’ve always heard of UX and CX, but breaking it down into CX, UX, XD, BX, and IA made it clearer how each one plays a role in shaping how people interact with not just products, but brands and environments too. What stood out to me most was the difference between UX and XD — I used to think they were the same, but now I get that XD takes it a step further by involving multi-sensory, immersive environments like AR.

Creating the user journey map for our school was actually more eye-opening than I expected. It made me think about everyday frustrations (like finding toilets 😅) and unexpected positives (like how peaceful the lakeside is). It was cool to realize that good experience design doesn't always mean adding more — sometimes it’s just about highlighting or improving what’s already there. 

__________________________________________________________

Week 3 : Introduction to UNITY + Vuforia

This week's lecture introduced us to Extended Reality (XR), which includes Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR). XR is an umbrella term covering all of these technologies, and they differ in how immersive they are and how much of a sense of presence they create.

  • Immersion: How deeply you are drawn into digital content.
  • Sense of presence: The feeling of “being there” inside a virtual or digital environment.

  1. Augmented Reality (AR)
    • Combines the real world with virtual objects.
    • Extends visual perception by overlaying digital elements onto the physical world.
    • Device examples: Mobile phones, AR glasses.
    • Example use: Viewing extra product info through your phone’s camera.
    • Types of AR:
      • Marker-based AR: Needs a visual marker (like a QR code) to trigger digital content.
      • Markerless AR: Uses GPS, sensors, or object recognition to place digital elements without a marker.
    • Extra notes:
      • Mobile AR is mainly used to extend information and visuals, making it highly relevant to our project.
      • Projection mapping can also be seen as an AR experience, often used in public spaces (e.g., museums, events) to create shared visual interactions.
  2. Mixed Reality (MR)
    • Enhances interaction by allowing digital objects to recognize and respond to the real-world environment.
    • Digital objects interact with real-world physics and surroundings.
    • Example: A digital ball bouncing off a real table.
  3. Virtual Reality (VR)
    • A fully computer-generated world creating a sense of presence.
    • Extends experience by immersing the user in a fully virtual environment.
    • Example: Standing on top of the Twin Towers in VR — something not possible in real life.

For the first class exercise, we were tasked with designing an augmented reality (AR) experience for a specific location, including identifying a problem statement and proposing a relevant solution. Our group selected a hair salon as the focus, an idea initially suggested by Crystal, and below is the result; we also had to present it to the whole class.
Fig 2.0 | AR for Hair Salon

During our second class session, we were given a hands-on exercise to explore the capabilities of Augmented Reality (AR) on our mobile phones. The objective was to determine whether we could successfully implement and view 3D AR models on our personal devices - and in a way, it also gave us a glimpse into the kind of interactive and delightful experiences we can create for our own projects.

Fig 2.1 | Trying out Cat 3D and Taylor's Patung AR

Then, we had the opportunity to get started on setting up an image target in Vuforia with Unity3D. Here’s a step-by-step breakdown of the workflow I followed:
  1. Plan & Generate License:
    • Started by creating a project plan and generating a basic license key from the Vuforia Developer Portal.
  2. Target Manager Setup:
    • Added a new database under the Target Manager and selected it to add a new image target.
  3. Rating the Image:
    • Uploaded an image and received a 4/5 star rating, indicating that it was highly scannable and well-suited for AR recognition. (It’s generally better to use clean, high-contrast 2D images rather than real-life photos to ensure more accurate tracking and stability in AR applications.)
Fig 2.2 | My Image Target(s) AHAHAA
  4. Database Download & Import:
    • Downloaded the image target database and imported it into Unity to be used in the AR project.
  5. Unity & Vuforia Configuration:
    • In Unity, right-clicked to add Vuforia Engine > AR Camera.
    • Opened the Vuforia Engine Configuration and pasted the license key to enable AR functionality.
  6. Adding the Image Target:
    • Added an Image Target via the Vuforia Engine menu.
    • Set the image type, selected the imported database, and assigned the correct image target.
Fig 2.3 | Cube AR on my image tracker

Fig 2.4 | RAT AR on my image tracker
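
For my own notes: behind the scenes, Vuforia drives this show/hide behaviour through tracking events on the Image Target. Below is a minimal, hypothetical sketch (not my actual project code) of hooking into them - it assumes the DefaultObserverEventHandler script that recent Vuforia versions attach to each Image Target (older versions name it DefaultTrackableEventHandler), and simply logs when the target is found or lost:

    using UnityEngine;

    // A minimal sketch: extends the handler Vuforia attaches to an
    // Image Target so we can react when tracking starts or stops.
    public class TargetLogger : DefaultObserverEventHandler
    {
        protected override void OnTrackingFound()
        {
            base.OnTrackingFound(); // keep the default show/hide behaviour
            Debug.Log("Image target found - AR content visible");
        }

        protected override void OnTrackingLost()
        {
            base.OnTrackingLost();
            Debug.Log("Image target lost - AR content hidden");
        }
    }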

Sir also gave us a video on mytimes about affordances and how they apply to AR, made by the Interaction Design Foundation. This video helped me see affordances in a much deeper way, especially in the context of AR. I’ve learned about affordances before—how design elements give users clues about what actions are possible—but seeing how that translates into AR made me rethink how we guide users when there’s no physical interface.

In AR, affordances become less obvious. There’s no “button” to press or door handle to pull—so the challenge becomes: How do we make interactions feel natural and intuitive in a space where things don’t physically exist? The video showed how designers need to create visual, auditory, or behavioral cues that suggest interactivity—like a glowing object hinting it can be tapped or a floating item that moves closer when approached.
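
To make this concrete for myself, here is a tiny hypothetical Unity sketch of the "glow/pulse" kind of cue the video describes - gently pulsing an object's scale so it reads as tappable (the class name and values are my own assumptions, not from the video):

    using UnityEngine;

    // Gently pulses an object's scale as a visual affordance cue,
    // hinting that the object can be tapped.
    public class TapHintPulse : MonoBehaviour
    {
        [SerializeField] private float amount = 0.05f; // how big the pulse is
        [SerializeField] private float speed = 2f;     // how fast it pulses

        private Vector3 baseScale;

        private void Awake()
        {
            baseScale = transform.localScale;
        }

        private void Update()
        {
            float pulse = 1f + amount * Mathf.Sin(Time.time * speed);
            transform.localScale = baseScale * pulse;
        }
    }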

Fig 2.5 | A Screenshot from the Video Given

It also reminded me that perceived affordance is everything in AR. Just because something can be interacted with doesn’t mean users will know how. So it’s not just about making things functional—it’s about making them feel inviting and obvious without needing instructions.

This video helped me understand why some AR experiences feel frustrating or confusing, and how to avoid those pitfalls through better affordance design. For example, in Pokémon Go's AR+ mode, there are times when a Pokémon doesn’t appear, and there's no clear feedback on whether the phone has lost tracking or if something else went wrong. It leaves users confused, unsure whether to restart or keep waiting. That kind of breakdown in communication shows just how important affordances and feedback really are. It’s something I definitely want to apply when designing my own AR interactions—to make sure users always feel guided and in control, even in unexpected moments.

My thoughts:

This week was packed, but it helped tie everything together. I finally understand what XR means — it’s not just a fancy abbreviation, but an umbrella that covers AR, VR, and MR. I used to lump all of them together, but now I realize they differ based on immersion and presence. It was cool to see how AR adds layers to the real world, MR blends them, and VR fully replaces it.

The Patung app was really impressive — even if you move your camera away, the Barbie doll stays in place and reappears exactly where you placed it when you point the camera back. I thought that was so cool! Plus, when you enlarge the doll, you can even see the intricate details of the fabric — it’s actually kind of mind-blowing.

The hands-on exercise with Vuforia and Unity3D was definitely one of the highlights for me. It felt like a big step forward — going from theory to actually making an image tracker work with 3D models. Seeing my cube and rat appear on the tracker felt silly but exciting, and it made me more confident about being able to build a full AR prototype in the future.

But the biggest takeaway came from the affordances video. I’ve learned about affordances before, but never in an AR context. It hit me how tricky it is when there are no “real” buttons or handles to guide people. In AR, you really have to design the hint — through glows, sound, motion — to suggest what users can do. That’s something I want to be mindful of. If users feel confused, the experience fails no matter how cool it looks.

__________________________________________________________

Week 4 : Introduction to AR Interactions

In this session, we explored interactivity in depth, focusing on how to control user interface (UI) elements to show and hide components dynamically—an essential skill in building responsive, user-friendly AR experiences.

  1. Platform Setup:
    • Navigated to File > Build Settings.
    • Changed the target platform to iOS.
    • Clicked Switch Platform to apply the changes.
  2. UI Setup in Unity:
    • In the Hierarchy, we added UI > Canvas as the base for interface elements.
    • We learned about the three different Render Modes for the Canvas:
      • Screen Space - Overlay: UI stays fixed on screen, no camera needed
      • Screen Space - Camera: UI follows a camera’s view, allowing perspective and depth
      • World Space: UI exists as a 3D object, ideal for immersive environments like AR/VR
  3. Scaling the UI:
    • Adjusted the Canvas Scaler by setting UI Scale Mode to Scale With Screen Size, making sure the UI looks consistent across different devices and screen resolutions.
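
(Side note to self: the same Canvas Scaler setup can also be applied from a script. A minimal sketch, assuming the script sits on the Canvas object - the reference resolution is just an assumed portrait value:)

    using UnityEngine;
    using UnityEngine.UI;

    // Applies the Canvas Scaler settings we chose in the Inspector:
    // scale the UI with screen size against a reference resolution.
    [RequireComponent(typeof(CanvasScaler))]
    public class CanvasSetup : MonoBehaviour
    {
        private void Awake()
        {
            var scaler = GetComponent<CanvasScaler>();
            scaler.uiScaleMode = CanvasScaler.ScaleMode.ScaleWithScreenSize;
            scaler.referenceResolution = new Vector2(1080, 1920); // assumed portrait target
        }
    }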

In the first exercise, I implemented a basic UI interaction where a button toggles the visibility of a 3D object. Building on last week's work, I added a sphere to the scene to demonstrate the interaction. Below is the outcome:
Fig 3.0 | Buttons toggle visibility of my objects.
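
The script behind this kind of toggle is tiny. Here is a minimal sketch of the idea (my own field and method names, with Toggle() wired to the Button's OnClick event in the Inspector):

    using UnityEngine;

    // Toggles a target object (e.g. the sphere) on and off.
    // Hook Toggle() up to a UI Button's OnClick event in the Inspector.
    public class ToggleVisibility : MonoBehaviour
    {
        [SerializeField] private GameObject target;

        public void Toggle()
        {
            target.SetActive(!target.activeSelf);
        }
    }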

Next, I added an animation to the rat, making it appear as if it leaps and spins slightly—an attempt to recreate the 'UIIA cat' meme but with a rat. I also animated the ball to react as if the rat is kicking it away before it returns. These elements were combined with the same visibility toggle from the earlier exercise.

Fig 3.1 | Rat and ball animations triggered by visibility toggle.

We then combined the buttons to function as a single toggle. When the image is detected, the animation plays automatically, and only the 'Stop' button is visible. If the user presses 'Stop', the animation halts, the 'Stop' button disappears, and the 'Start' button appears instead. This setup creates a smooth and intuitive control flow for the user.

Fig 3.2 | Start/Stop logic (1 button) controlling both objects.
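
Conceptually, the start/stop flow boils down to swapping the two buttons and pausing/resuming the Animator. A rough sketch of how I understand it (hypothetical names; pausing via Animator.speed is one common approach):

    using UnityEngine;

    // One controller for the Start/Stop flow: only one button is
    // visible at a time, and the Animator is paused or resumed.
    public class AnimationToggle : MonoBehaviour
    {
        [SerializeField] private Animator ratAnimator;
        [SerializeField] private GameObject startButton;
        [SerializeField] private GameObject stopButton;

        public void OnStopPressed()
        {
            ratAnimator.speed = 0f;      // pause the animation
            stopButton.SetActive(false);
            startButton.SetActive(true);
        }

        public void OnStartPressed()
        {
            ratAnimator.speed = 1f;      // resume the animation
            startButton.SetActive(false);
            stopButton.SetActive(true);
        }
    }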

After that, Sir Razif told us to set up two buttons, each controlling a different object independently. This helped us practice managing multiple UI interactions and object behaviors within the same scene.

Fig 3.3 | Two buttons controlling two different objects.

My thoughts:

Learning how to control UI elements in Unity gave me a whole new appreciation for how important interface design is, especially in AR where traditional UI elements don’t always behave the same way.

Additionally, combining the start/stop logic into one toggle button made me appreciate how clean interaction design can really improve usability. Instead of giving users too many buttons, we created a flow that felt natural: detect image → animation starts → user can stop and restart it easily. It felt good to make something that wasn’t just functional, but also intuitive.

__________________________________________________________

ASMT 01:

Fig 4.0 | My Initial AR Proposal Ideas

During my consultation with Mr. Razif, I presented three initial ideas for my experiential design project. However, he pointed out that my third idea had already been explored by a previous student. This was an oversight on my part, as I had not completed watching the full reference playlist he provided—I stopped at video 14 while the relevant project was actually in video 15 (LOL). 

Additionally, Mr. Razif advised that the magnetic-based interaction I proposed might be too technically demanding, especially since the student who previously attempted a similar concept came from a computer science background - a.k.a. it'd be too hard for me.

That said, since the third idea needed to be replaced, I also developed a new one focusing on medical and biology education. The updated concept explores kidney physiology through AR and is more suited to my strengths while also addressing a real learning need. Below is the improved proposal, now expanded with additional features.

Fig 4.1 | My 3 Proposal Ideas


Reflective Writing and Final Decision

After exploring all three of my initial concepts—ColorTone AR, Postcard Portal, and Magnetic Lab AR—I’ve decided to move forward with Postcard Portal as my final experiential design project.

Out of the three, this idea feels the most balanced in terms of feasibility, storytelling potential, and technical accessibility. Mr. Razif also recommended it after pointing out that the magnetic-based concept might be too complex for me to handle alone, especially since a similar version was previously done by a student with a computer science background. Since I’m more grounded in design than programming, this direction just made more sense.

I still wanted to push myself creatively, though. So instead of continuing with the Magnetic Lab, I came up with a completely new idea: ARNephro, an AR tool to help students understand kidney function better. This came from seeing how much my med school friends struggle with nephron diagrams and hormone regulation. I added layered learning stages, animations, and even quiz modes tailored to different levels—from IGCSE up to medical students. While I won’t be pursuing this as my main build, it’s an idea I’m genuinely proud of and would love to develop further in the future.

As for ColorTone AR, I initially liked it because of its connection to makeup, fashion, and self-expression. But after thinking more practically (+ the consultation), I realized it would also be quite technical to pull off. To make the avatar feature work, I’d need to detect the user’s face and map it accurately onto a mesh for try-ons—which goes beyond my current skillset and tools. The idea is fun, but the implementation would likely take too much time for one semester (even if I share the workload with one other person).

In contrast, Postcard Portal allows for an interactive experience while still being achievable. Plus, I love the idea that travelers can send meaningful experiences to loved ones without spending a lot of money (because I am, in fact, broke).

Choosing one direction wasn’t easy (okay, maybe it actually was), but I think this version gives me the right mix of challenge, clarity, and creative freedom. Let’s hope it turns out as cool as it sounds in my head.
