1. Overview
International students at the University of Toronto often struggle to navigate the healthcare system. Despite having insurance (UHIP), many default to the Emergency Room for minor issues because they lack clear knowledge of how the system works.
User Journey Map: Tracking the "Onset to Outcome" experience of navigating UofT healthcare.
2. The Problem: The "ER-Default" Heuristic
Through my research, I identified a dangerous trend: the Uncertainty-Panic-ER Pipeline. When a student feels ill at 10:00 PM, the cognitive load of verifying insurance coverage is too high, so they default to the ER because it is the only care that feels "guaranteed," which contributes to overcrowding.
Mapping the Fragmentation
I visualized the density of these barriers through a detailed Mind Map, identifying key clusters: Wait Times, UHIP Coverage Clarity, and After-Hours Decision Support.
3. User Research Findings
- Booking at Health & Wellness: 66% report difficulty
- ER Wait Times: 50% report 4+ hour waits
Key Research Insight:
"Students would rather wait for an Uber while in severe pain than risk an uninsured ambulance fee, yet they default to the ER for simple fever because it's 'guaranteed' care."
4. Ideation: From 24 Concepts to 1 Solution
I adopted a "Quantity over Quality" approach (Crazy 8s) to break through initial biases, generating concepts ranging from Health Drones (Low Feasibility) to the winning solution.
The "Ideas Board": 24 distinct concepts ranging from floating drones to chatbots.
The Selection Matrix
I evaluated concepts against six weighted criteria: User Value, Feasibility, Equity, UHIP Alignment, After-Hours Support, and Testability.
- One-Stop Care Navigator App (Selected)
- Campus "Now-Open" Map (Selected)
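The weighted evaluation described above can be sketched in code. This is a minimal illustration only: the weights and per-concept scores below are hypothetical, not the actual values from the project's selection matrix.

```python
# Hypothetical weighted selection matrix. Weights and scores are
# illustrative placeholders, not the project's real numbers.
CRITERIA_WEIGHTS = {
    "User Value": 0.25,
    "Feasibility": 0.20,
    "Equity": 0.15,
    "UHIP Alignment": 0.15,
    "After-Hours Support": 0.15,
    "Testability": 0.10,
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (1-5 scale) into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

concepts = {
    "One-Stop Care Navigator App": {
        "User Value": 5, "Feasibility": 4, "Equity": 4,
        "UHIP Alignment": 5, "After-Hours Support": 5, "Testability": 4,
    },
    "Health Drones": {
        "User Value": 3, "Feasibility": 1, "Equity": 2,
        "UHIP Alignment": 2, "After-Hours Support": 4, "Testability": 1,
    },
}

# Rank concepts from highest to lowest weighted score.
ranked = sorted(concepts, key=lambda c: weighted_score(concepts[c]), reverse=True)
print(ranked[0])
```

With these illustrative numbers, the Care Navigator App outscores the low-feasibility drone concept, mirroring the outcome of the real matrix.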
5. Iterative Changes & Prototype Demo
Initial Blueprints (Low-Fi)
Low-Fi Wireframes: Mapping core user paths including Login, Home, and the ER-Bot.
Low-Fi Testing Outcomes
From Feedback to Flow
Testing our low-fidelity wireframes revealed that while the core navigation was intuitive (100% completion rate), specific pages like "Account Setup" were over-engineered. We used these insights to simplify the high-fidelity design.
Low-Fi to High-Fi Key Iterations: Before vs. After
1. Reducing Cognitive Load
Problem: "Quick Actions" cluttered the home screen.
Fix: Removed redundant buttons to prioritize the primary Search bar.
2. Intuitive Iconography
Problem: Users confused the generic grid icon.
Fix: Swapped for a "Book" icon to match the student mental model.
3. Streamlining Onboarding
Problem: "Preferences" bar was overlapping and confusing.
Fix: Removed the feature entirely to reduce time-to-value.
Polishing the High-Fidelity UI
Even after moving to High-Fidelity, usability testing revealed subtle friction points. We conducted a "Micro-Iteration" sprint to resolve clarity issues regarding navigation visibility and button logic.
Map Filters
Removed redundant "More Filters" options to simplify the sorting bar.
Quick Actions
Replaced generic "Notes" shortcut with high-value Emergency contacts.
Chatbot Visibility
Switched from an abstract icon to a clear "Chat" text label to improve discoverability.
Visualizing the "Current State" vs. "Future State" logic for UI refinements.
High-Fidelity Prototype Demo
6. Usability Reality Check
Design is never finished. Prior to the AI redesign, we tested the prototype with real users (n=6) to validate our assumptions.
While the critical "ER Bot" flow had a 100% completion rate, Task 2.2 (Use Filters) revealed a friction point, resulting in a failed attempt and a reduced satisfaction score (4/5) for the session.
Reflection
"This 'failure' was a big learning moment. It highlighted that even a simple UI fails if the interaction design doesn't account for the physical constraints of a stressed user."
7. The Gemini 3.0 AI Audit
Post-course, I used Gemini 3.0’s Reasoning Mode to perform a "Stress Test" on my design. I acted as a Human Factors Engineer, asking the AI to audit my prototype against Nielsen’s Heuristics with a specific focus on "High-Panic" user states.
| Heuristic Principle | Student Testing Finding | Gemini 3.0 AI Insight |
|---|---|---|
| Visibility of System Status | Users completed the ER Bot flow but felt "uncertain" about length. | Critical Gap: In a "Panic" state, users need a visible progress bar. The lack of "Steps Remaining" increases abandonment risk. |
| Match System w/ Real World | Confusion over terms like "Direct Billing" and "Requisition". | Jargon Alert: Non-native speakers view "Billing" as "Cost." AI suggested replacing text with "No-Cost" icons. |
| Error Prevention | Satisfaction Score 1/5: Login was "annoying" due to keyboard handling. | Physical Constraint: Login requires too much manual dexterity for an emergency. AI proposed a "Guest Access" mode to bypass auth entirely. |
Proposed "Version 2.0" Redesign
Based on the AI audit, I have proposed three architectural changes:
1. Emergency Guest Mode: A "Zero-UI" entry point that allows students to access the ER map immediately without logging in.
2. Predictive Wait-Times: Using historical data to show "Busy Trends," helping students avoid clinics about to hit capacity.
3. Jargon Translator: An AI overlay that translates complex insurance terms (e.g., "Deductible") into plain student language.
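The "Predictive Wait-Times" concept can be illustrated with a short sketch: classify each hour as quiet, moderate, or near capacity based on historical averages. The visit counts, capacity figure, and thresholds below are invented for illustration; a real implementation would use actual clinic data.

```python
# Illustrative sketch of the "Busy Trends" idea. All numbers here are
# hypothetical; a production version would draw on real historical data.
from statistics import mean

# Hypothetical historical visit counts per hour of day: {hour: [counts]}
HISTORY = {
    9:  [12, 15, 11, 14],
    12: [30, 28, 33, 31],
    16: [22, 20, 25, 21],
}

def busy_trend(hour: int, capacity: int = 30) -> str:
    """Label an hour Quiet / Moderate / Near Capacity from historical load."""
    counts = HISTORY.get(hour)
    if not counts:
        return "No data"
    load = mean(counts) / capacity  # average visits as a fraction of capacity
    if load < 0.5:
        return "Quiet"
    if load < 0.9:
        return "Moderate"
    return "Near Capacity"

print(busy_trend(12))  # midday rush
```

Surfacing a label like "Near Capacity" rather than a raw number keeps the cognitive load low for a stressed user, in line with the panic-state findings from the AI audit.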
8. Final Thoughts
This project taught me that healthcare is an Information System problem. By combining Human Factors principles (visibility, error prevention) with AI-assisted auditing, we can build systems that don't just "look" good, but actually work when users need them most.
Explore the Final Solution!
Click below to interact with the high-fidelity Figma prototype.
Open Figma Prototype