EXPERIENTIAL DESIGN / FINAL ASMT

Naura / 0356798 / Interactive Spatial Design
MMD60204 / Experiential Design
Final Assignment: Deployed App


__________________________________________________________

For the final assignment, we basically delivered a polished and completed version of our AR app, which can be interacted with on a mobile phone (in our case, an Android). It's a build-up from our second assignment: I continued developing the remaining pages that had yet to be completed, and added the necessary sound effects to enhance the user experience.

Tutorials page:
I wasn’t really satisfied with how the prototype UI looked in our previous assignment, as it felt too blocky and didn’t reflect the theme we were going for. So I redesigned the "cards" to feel more personal, like genuine recommendations from us to the user.


Fig 1.0 | Previous Design (Prototype)

Fig 1.1 | New + Final Design

I finally figured out how to implement the filter feature! Basically, I created an empty GameObject and attached a TutorialManagerScript to it. Then, I linked that script to the filter buttons so it could compare their tags. I honestly overthought the whole thing—I was worried that disabling items would leave awkward empty gaps. But of course, the ScrollRect handles it just fine (especially since I added a Vertical Layout Group and Content Size Fitter, so everything resizes automatically based on the content). 
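
Here’s a rough sketch of what that filter logic boils down to (the field and method names are simplified for illustration - they’re not copied one-to-one from our actual TutorialManagerScript):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of the tag-based filter described above.
public class TutorialManagerScript : MonoBehaviour
{
    // All tutorial cards sitting under the ScrollRect's Content object.
    [SerializeField] private List<GameObject> tutorialCards;

    // Hooked up to each filter button's OnClick, passing that button's tag string.
    public void FilterByTag(string filterTag)
    {
        foreach (GameObject card in tutorialCards)
        {
            // "All" shows everything; otherwise only cards whose tag matches stay visible.
            bool show = filterTag == "All" || card.CompareTag(filterTag);
            card.SetActive(show);
        }
        // No manual repositioning needed: the Vertical Layout Group + Content Size Fitter
        // on the Content object collapse the gaps left by the disabled cards.
    }
}
```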

The most time-consuming part was definitely searching for and watching all the tutorial videos, but honestly, I enjoyed the process. I wanted it to feel like a curated digital library, even though I only managed to collect 21 videos in the end. Still, I think that’s enough for an MVP. I liked tagging each video based on how I personally perceived its content. And since the filter system was already in place, I designed the “All Items” view to look a bit messy and randomized (kind of like a For You Page) so that when users apply a filter, the shift feels more noticeable and intentional.
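
On a side note, if you ever wanted that randomized order to happen at runtime instead of arranging it by hand, a quick hypothetical helper like this could shuffle the cards’ sibling order so the Vertical Layout Group lays them out in a new sequence (just a sketch, not necessarily what’s in our build):

```csharp
using UnityEngine;

// Hypothetical helper for randomizing the "All Items" order at runtime.
public class CardShuffler : MonoBehaviour
{
    [SerializeField] private Transform content; // the ScrollRect's Content object

    public void ShuffleCards()
    {
        // Fisher-Yates style: pick a random unplaced card and fix it at the end.
        for (int i = content.childCount - 1; i > 0; i--)
        {
            int j = Random.Range(0, i + 1);
            content.GetChild(j).SetSiblingIndex(i); // reorder within the layout group
        }
    }
}
```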

Catalogue page:
One piece of feedback I got from my mom (who absolutely hates anything that feels complicated) was that the catalogue had too many buttons just to view item details. So, I simplified the layout to match the style of the tutorials page. This not only kept the design consistent and user-friendly, but also let me reuse the filter tagging system, which made things a lot easier on my end.

One of the bigger design decisions we made was to remove the search feature. Since we don’t have a backend, we weren’t sure how to implement it properly. I figured it wasn’t necessary for now anyway, given that we only have a small, fixed set of furniture items. If we ever scale the app, that’s something we can always add later. The removal of search, however, messed with the icon alignment in the navigation bar, especially because we wanted to keep the AR button centered, as it’s the main focus of our module. So, to keep the layout clean and balanced, we also decided to remove the favourites feature, which originally let users "like" items and save them to a separate page.

Elements page / Details page:

At first, I had no idea how to approach this page. Creating 24 separate scenes for each furniture item felt incredibly inefficient—not just in terms of time (designing layouts, swapping icons, arranging text, linking buttons, etc.) but also in performance. I’d heard that having that many scenes could slow down Unity’s loading and compilation times, so I started exploring other options.

My first idea was to keep everything in one scene by creating 24 panels (one for each item) and toggling their visibility. Since the item list is fixed for this assignment, it seemed doable. But when I asked ChatGPT for advice, it pointed out that this approach could still bloat the scene and make it harder to manage.

Luckily, ChatGPT suggested using a ScriptableObject to store all the furniture data and then load that data into a single reusable scene. I ended up going with this method - and it worked out great. It allowed me to build one universal UI template that updates dynamically depending on which item the user selects. It’s lighter, easier to maintain, and more scalable, especially if I ever want to localize or update any content later on (which I actually did for one of the items, so I was really glad I chose this route).
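
For anyone curious what that looks like, here’s a minimal sketch of the kind of ScriptableObject I mean (the field names are placeholders rather than our exact ones). Each of the 24 furniture items becomes one small data asset instead of its own scene:

```csharp
using UnityEngine;

// One asset per furniture item, created via Assets > Create > Furniture Data.
[CreateAssetMenu(fileName = "NewFurnitureData", menuName = "Furniture Data")]
public class FurnitureData : ScriptableObject
{
    public string itemName;
    [TextArea] public string description;  // kept short so it fits the fixed UI box
    public Sprite thumbnail;
    public GameObject modelPrefab;          // the optimized 3D model for this item
}
```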

That said, because the text fields are fixed and styled based on the UI boxes I drew, I kept the descriptions short and simple. I didn’t want inconsistent layouts from overly long or varied sentences, and the fixed size meant I couldn’t control how the text wraps. So I asked ChatGPT to help write the item descriptions—clear, concise, and still able to capture the feel of each furniture piece.
Fig 2.0 | Scriptable Data Objects

This is what my base detail scene looks like. To display different UI content for each furniture item, I created a script with serialized fields where I can easily drag and assign the UI components from the Hierarchy. (Just a heads-up: make sure you’ve added using TMPro; at the top of the script—otherwise you won’t be able to drag in any TextMeshPro elements. Also, don’t forget to mark all fields as [SerializeField]!)

Fig 2.1 | Base Detail Layout

Fig 2.2 | Where to put the Data
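
And here’s a simplified version of the idea behind that detail script (class and field names are illustrative, not our exact code): every UI element is a [SerializeField] dragged in from the Hierarchy, and one FurnitureData asset decides what the shared layout displays. How the selected asset actually reaches this scene (a static reference, a small persistent manager, etc.) is left out of the sketch.

```csharp
using TMPro;              // required before TextMeshPro fields will compile and show up
using UnityEngine;
using UnityEngine.UI;

// Shared detail layout that gets repopulated for whichever item was selected.
public class FurnitureDetailUI : MonoBehaviour
{
    [SerializeField] private TMP_Text itemNameText;
    [SerializeField] private TMP_Text descriptionText;
    [SerializeField] private Image thumbnailImage;

    public void ShowItem(FurnitureData data)
    {
        itemNameText.text = data.itemName;
        descriptionText.text = data.description;
        thumbnailImage.sprite = data.thumbnail;
    }
}
```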


Models:
Since I was in charge of the 3D models, I initially forgot to optimize them (some had up to 40k faces), which I knew could slow things down. So I had to go back and apply the Decimate modifier on each one to reduce the poly count. I tried to find a balance - lowering the faces just enough without ruining the shape or making it hard to apply textures.

On top of that, I didn’t know how to properly export the models into Unity with their textures. Turns out, all you have to do is check Copy and Embed Textures when exporting as an FBX. That way, when you bring the model into Unity and click on the material, the Extract Textures button won’t be greyed out. (And if the texture still doesn’t show up properly, just click Extract Textures and it usually fixes itself!)

Compilation:
I’m really thankful that Unity Version Control exists - it made things so much easier. Unlike GitHub LFS, which I heard some other groups struggled with, Unity’s built-in version control was super convenient. I loved how user-friendly it was, especially how it automatically resolved any conflicting changes between me and Natania. It saved us so much time and stress! Below are some examples of our correspondence while using it.

Fig 3.0 | Unity Version Control Changesets

Towards the end, I somehow ended up with a missing XR package dependency, which messed with our ability to export the project to mobile. Natania ran into the same issue too - probably because I accidentally checked in some of my local files through Unity Version Control. On top of that, we couldn’t get image tracking to work on Android. We suspect it’s because we used both AR Foundation and Vuforia, which might’ve caused a conflict in Unity (you can check out Natania’s blog for more on that).

Sir also suggested that we try testing everything in a clean Unity file - and honestly, that was our best option at that point. But there was no way I was going to manually copy every file and redo all the layouts from scratch again. So I searched YouTube for a way to copy just the things we needed - without dragging in all the corrupted or unnecessary junk - and thankfully, I found a video that showed exactly how to do it. Lifesaver. (Here’s the link!)

And it worked in the end!! YAYAYAYAY

Submission:

Drive link: here
Natania's blog: here


Reflection:
Honestly, I’m so glad this was a pair project because there’s no way I would’ve survived without Natania. We divided the work based on our strengths: she tackled the AR side (because she’s amazing at problem-solving and figuring out the mechanics), while I focused more on the design. Of course, there were plenty of frustrating moments - especially when the AR wasn’t working properly and we both spiraled into stress (BAHAHA). But we pushed through. We locked in for two solid days and got things done.

Looking back, one of the biggest things I learned was how to approach technical problems more strategically instead of immediately panicking. I also realized that sometimes I spend too much time wondering whether something will work or make sense, instead of just doing it. This project taught me that if I hit a roadblock, I can just try, iterate, and figure it out along the way, especially now that there’s ChatGPT to help me troubleshoot on the spot.

One major roadblock was near the end, when neither of us could export the app to our Android phones. That really set us back, and we ended up having to start from scratch with a clean Unity file. It was stressful, but also a learning moment: keep things clean and modular, and don’t be afraid to restart if it helps solve the bigger issue.

If I were to do things differently, I’d definitely stop overthinking and just go for it. And I’d make sure to prototype the navigation flow in detail right from the start, instead of tweaking it halfway through. It would’ve saved us a lot of layout rework and confusion.

In the end, though, I’m really proud of what we built. We had to let go of a few features we originally planned, but honestly, that helped us focus the app and refine its message. It’s been a challenging but really rewarding experience.
