UX Research · Mobile Design · Healthcare · Figma · Gemini AI Audit · Human Factors

UHealth App: Human-Centered Healthcare Experience Design

A human-factors-driven mobile solution helping UofT international students navigate insurance and urgent care — reducing ER overcrowding through intelligent system design.

Role

UX Researcher, UI Designer & Engineer

Timeline

Sept – Dec 2025

Type

Coursework + AI Research

Team

Aayah S, Emma G, Zeina S, Angel R

01 — Overview

The ER-Default problem

International students at the University of Toronto face a healthcare navigation crisis. Despite having insurance through UHIP (the University Health Insurance Plan), many default to the Emergency Room for minor issues because they lack clear knowledge of the system.

The result is a vicious cycle: overcrowded ERs, delayed care for critical patients, stressed students, and spiralling costs — all driven by an information gap that a well-designed system could close.

User Journey Map

Figure 1 — User Journey Map: tracking the "Onset to Outcome" experience of navigating UofT healthcare.

02 — The Problem

The "Uncertainty → Panic → ER" pipeline

Through research, I identified a dangerous cognitive pattern: when a student feels ill at 10 PM, the cognitive load of checking insurance coverage is simply too high. They default to the ER because it's the only "guaranteed" care — leading to overcrowding.

"Students would rather wait for an Uber while in severe pain than risk an uninsured ambulance fee, yet they default to the ER for a simple fever because it's 'guaranteed' care."

I visualized the density of these barriers through a detailed mind map, identifying key clusters around Wait Times, UHIP Coverage Clarity, and After-Hours Decision Support.

Brainstorming Mind Map

Figure 2 — Mind Map: mapping the fragmentation of information students face when seeking care.

03 — User Research

What the data revealed

We conducted surveys and interviews with international students across diverse backgrounds, focusing on their mental models around healthcare navigation. The findings were stark.

66%

Report difficulty booking at Health & Wellness Centre

50%

Experience ER wait times of 4+ hours

83%

Uncertain about what UHIP insurance covers

67%

Went to ER when a lower-acuity option was appropriate

Survey Results: Booking difficulty

Figure 3a — Booking difficulty at Health & Wellness.

Survey Results: ER Wait Times

Figure 3b — ER wait time distribution.

04 — Ideation

From 24 concepts to 1 solution

I adopted a "Quantity over Quality" approach (Crazy 8s) to break through initial biases, generating 24 concepts that ranged from Health Drones (low feasibility) to the eventual winning solution. Every idea was evaluated fairly before any were ruled out.

24 Brainstormed Ideas Board

Figure 4 — The Ideas Board: 24 distinct concepts, from floating drones to AI chatbots.

I evaluated each concept against six weighted criteria: User Value, Feasibility, Equity, UHIP Alignment, After-Hours Support, and Testability. The selection matrix (below) made the decision clear.

Concept Selection Matrix

Figure 5 — Concept Selection Matrix: the winning ideas emerged from systematic evaluation.
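The six-criterion evaluation works as a standard weighted decision matrix. The sketch below shows the mechanics; the weights, the 1–5 scores, and the second concept's name are illustrative placeholders, not the project's actual values.

```python
# Weighted decision matrix for concept selection.
# Criteria follow the case study; weights and 1-5 scores are
# illustrative assumptions, not the real evaluation data.

CRITERIA_WEIGHTS = {
    "user_value": 0.25,
    "feasibility": 0.20,
    "equity": 0.15,
    "uhip_alignment": 0.15,
    "after_hours_support": 0.15,
    "testability": 0.10,
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (1-5) into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

concepts = {
    "Health Drone": {
        "user_value": 4, "feasibility": 1, "equity": 2,
        "uhip_alignment": 2, "after_hours_support": 3, "testability": 1,
    },
    "Triage Chatbot + UHIP Guide": {  # hypothetical label for the winning combo
        "user_value": 5, "feasibility": 4, "equity": 4,
        "uhip_alignment": 5, "after_hours_support": 5, "testability": 4,
    },
}

# Rank concepts by total weighted score, best first.
ranked = sorted(concepts.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):.2f}")
```

Weighting the criteria (rather than summing raw scores) is what lets "User Value" and "Feasibility" dominate the ranking while still penalizing concepts that fail on equity or after-hours support.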

Two concepts rose to the top and were combined into one unified solution.

05 — Iterative Design

Prototyping, testing, and refining

We built from rough wireframes to a fully interactive high-fidelity Figma prototype, with user testing at each stage to catch friction early.

Low-Fidelity Wireframes

Figure 6 — Low-Fi Wireframes: mapping core user paths before committing to high-fidelity.

Testing our low-fi wireframes revealed that while the core navigation was intuitive (100% task completion rate), specific pages like "Account Setup" were over-engineered. We used these insights to simplify the high-fidelity design.

100%

Task completion in low-fi testing

4.6/5

Average satisfaction score across tasks

Low-Fi → High-Fi: Key Iterations

Three friction points drove major UI changes between rounds:

Home Screen Iteration
1. Reducing Cognitive Load — "Quick Actions" cluttered the home screen. Fix: removed redundant buttons to prioritize the primary Search bar.

Resources Icon Iteration
2. Intuitive Iconography — the generic grid icon confused users. Fix: swapped it for a "Book" icon matching the student mental model.

Account Setup Iteration
3. Streamlined Onboarding — the "Preferences" bar was overlapping other elements. Fix: removed it entirely to reduce time-to-value.

Polishing the High-Fidelity UI

Even in high-fidelity, usability testing revealed subtle friction. A micro-iteration sprint resolved clarity issues across three key areas:

1. Map Filters — removed redundant "More Filters" options to simplify the sorting bar and reduce cognitive load during stressful moments.

2. Quick Actions — replaced the generic "Notes" shortcut with high-value Emergency contacts, directly addressing the most critical user need.

3. Chatbot Visibility — switched from an abstract icon to a clear "Chat" text label to increase feature discovery for first-time users.

High-Fidelity Iterations

Figure 7 — High-fidelity iterations: current state vs. future state for each UI refinement.

High-Fidelity Prototype Demo

UHealth — High-Fidelity Prototype Walkthrough (video). Watch on YouTube →

06 — Usability Testing

Reality check: what users actually did

Prior to the AI redesign, we tested the prototype with real users (n=6) to validate our assumptions. While the critical "ER Bot" flow had a 100% completion rate, Task 2.2 (Use Filters) revealed a friction point: a failed attempt and the lowest satisfaction score (4/5) among the session's core tasks.

This "failure" was a big learning moment. It highlighted that even a simple UI fails if the interaction design doesn't account for the physical constraints of a stressed, possibly unwell user.

Usability Testing Data Table

Figure 8 — Quantitative usability testing results: task completion rates and satisfaction scores.

07 — AI-Assisted Audit (Post-Course)

The Gemini 3.0 stress test

After completing the course, I used Gemini 3.0's Reasoning Mode to perform a "Stress Test" on my design — acting as a Human Factors Engineer and asking the AI to audit my prototype against Nielsen's Heuristics with a focus on "High-Panic" user states.

✨ AI Experiment

Gemini 3.0 vs. Nielsen's Heuristics

Visibility of System Status
Student testing: users completed the ER Bot flow but felt "uncertain" about its length.
Gemini 3.0 insight (Critical Gap): in a "Panic" state, users need a visible progress bar; the lack of a "Steps Remaining" indicator increases abandonment risk.

Match Between System and the Real World
Student testing: confusion over terms like "Direct Billing" and "Requisition."
Gemini 3.0 insight (Jargon Alert): non-native speakers read "Billing" as "Cost"; the AI suggested replacing the text with "No-Cost" icons.

Error Prevention
Student testing: login rated 1/5 satisfaction — "annoying" due to keyboard handling.
Gemini 3.0 insight (Physical Constraint): login requires too much dexterity in an emergency; the AI proposed a "Guest Access" mode to bypass auth entirely.
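The audit itself is reproducible as a short script. The sketch below only builds the prompt; the exact wording I sent, the `build_audit_prompt` helper, and the commented-out model identifier are illustrative assumptions, not the verbatim prompt from the experiment.

```python
# Sketch of a heuristic-audit prompt for an LLM reasoning mode.
# The role framing, heuristics, and "high-panic" user state follow the
# case study; the prompt wording itself is an assumption.

HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "Error prevention",
]

def build_audit_prompt(product: str, user_state: str) -> str:
    """Assemble a role-played Nielsen-heuristics audit request."""
    bullets = "\n".join(f"- {h}" for h in HEURISTICS)
    return (
        f"Act as a Human Factors Engineer auditing the {product} prototype.\n"
        f"Assume the user is in a {user_state} state.\n"
        "Evaluate the design against these Nielsen heuristics:\n"
        f"{bullets}\n"
        "For each heuristic, name the gap and propose one concrete UI change."
    )

prompt = build_audit_prompt("UHealth", "high-panic")
# With Google's generative AI SDK, the prompt could then be sent via e.g.:
#   model = genai.GenerativeModel("gemini-<version>")  # placeholder model id
#   response = model.generate_content(prompt)
print(prompt)
```

Pinning the user state ("high-panic") in the prompt is what pushed the model past generic heuristic checklists and toward the dexterity and progress-visibility findings above.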

Proposed Version 2.0 Changes

08 — Results

What we achieved

100%
ER Bot task completion rate
4.6/5
Average satisfaction across tasks
↓60%
Projected ER routing reduction in simulation
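The course simulation behind the projected reduction isn't reproduced here, but its underlying logic — low-acuity students default to the ER under uncertainty, and a triage tool reroutes a fraction of them to clinics — can be illustrated with a toy Monte Carlo. Every parameter below is an assumed placeholder, not a measured value.

```python
# Toy Monte Carlo of ER routing, before and after a triage tool.
# All probabilities are illustrative assumptions, not the course
# simulation's actual parameters or results.
import random

random.seed(42)  # fixed seed for reproducibility

def simulate_er_visits(n_students: int, p_low_acuity: float,
                       p_er_default: float, p_rerouted: float) -> int:
    """Count ER visits among students who have a health concern.

    p_er_default: chance a low-acuity student defaults to the ER unaided.
    p_rerouted:   chance the triage tool redirects such a student to a clinic.
    """
    er_visits = 0
    for _ in range(n_students):
        if random.random() >= p_low_acuity:
            er_visits += 1  # high-acuity cases belong in the ER
        elif random.random() < p_er_default and random.random() >= p_rerouted:
            er_visits += 1  # low-acuity ER default the tool failed to redirect
    return er_visits

baseline = simulate_er_visits(10_000, 0.8, 0.67, 0.0)    # no triage tool
with_tool = simulate_er_visits(10_000, 0.8, 0.67, 0.75)  # tool reroutes 75%
print(f"ER visits: {baseline} -> {with_tool}")
```

Even in this crude model, redirecting most of the avoidable defaults roughly halves total ER load — which is the mechanism the projected figure depends on.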

09 — Final Thoughts

What I learned

This project taught me that healthcare is fundamentally an information-systems problem. ER overcrowding isn't a UI issue — it's a systems failure driven by information asymmetry. Good design is one lever, but it must be grounded in a deep understanding of the system it operates within.

By combining Human Factors principles (visibility, error prevention, cognitive load) with AI-assisted auditing, we can build systems that don't just look good — but actually work when users need them most, in their most vulnerable moments.

If I were to continue, I'd focus on community trust vectors: peer reviews of care facilities, anonymized "what worked for me" stories from other international students. Social proof is often more persuasive than any official communication — and it builds the kind of confidence that stops people from defaulting to the ER.

Methodology Note: The core UHealth project was completed manually for MIE344 to ensure academic integrity. Credit to the MIE344 teaching team and my teammates (Aayah Shalaby, Emma Gillott, Zeina Shaltout) for their support during high-fidelity prototyping. The AI-Assisted Audit (Section 7) was a personal post-course experiment exploring modern UX workflows.

Explore the final design.

Interact with the full high-fidelity Figma prototype — every screen, every flow.

Open Figma Prototype