Unni Web
Redesigning the Information Architecture to Help Users Explore and Search
Unni is Korea's largest medical aesthetics platform, helping millions of users understand procedures, compare clinics and doctors, and make decisions grounded in real patient reviews. Even though most web visitors arrived from search engines and were new to medical aesthetics, the web platform remained a limited experience designed for users who already knew what to look for.
I led the redesign of the web information architecture (IA) to support the full spectrum of user intent, from early learning and exploration to verification and consultation booking. This work established a scalable foundation for future feature development to better serve these journeys.
Product Designer at Unni
Background
Because the mobile app requires a download and sign-up, app users typically arrive with clear goals. Web users, by contrast, often land on Unni through search results or shared links: without context, without an account, and with only a vague sense of what they want. Especially as the platform expands globally, Unni Web is often a user's first exposure to aesthetic care.
However, the web information architecture had grown organically over time and was deprioritized as the mobile app grew. Navigation accumulated legacy categories and inconsistent patterns, making it difficult to add new web features coherently.
As we looked to support the full spectrum of user intents across the aesthetic care journey, we first needed to redesign the IA. Without this foundation, any additional feature would only increase complexity without addressing core user needs.
Discovery & Research
As the lead designer, I grounded the IA work in user needs and pain points. I evaluated the current web experience using mixed methods:
1. Conducted user interviews and usability testing with low-knowledge web users (n=8) to understand how they discovered Unni, what they were trying to find, and where they got stuck
2. Mapped and audited the existing web information architecture to pinpoint where navigation and structure didn't match user needs and expectations
3. Compared mobile and web user journeys for the same user goals to identify parity gaps and missing capabilities in the web experience
Understanding user behaviour
Together with the UX research team, I ran 8 remote interview & usability sessions with low-knowledge users who started on web. Each session paired brief context questions with guided exploration tasks.
"I found Unni through search"
Users often described landing on Unni from search results or social media, coming to browse what was possible before committing to a consultation.
"I don't know where to start"
Users felt overwhelmed by jargon and the sheer volume of choices, and looked for clear starting points and guidance on the first screen.
"I expected this to guide me more"
When faced with long lists of procedures and clinics, users looked for rankings, filters, and curated collections to narrow options; without them, they quickly felt lost.
"I need help understanding my options given my concern"
Users framed needs as concerns and constraints ("puffy eyes," "acne scars," "low downtime"). They expected curated lists, such as "popular options for this concern," that translated these concerns into possible procedures, examples, and next steps.
"I want to see real results from people like me"
Reviews and before/after photos were the strongest trust signals, along with cues like doctor expertise, downtime, and price range. In tasks like “find reviews for this doctor,” users often struggled to locate this information.
Auditing the existing web information architecture
Because there was no prior IA documentation, I reconstructed the current structure by mapping navigation levels and tracing journeys from the homepage to decision-critical screens (doctor profiles, reviews). In the diagram below, each level represents a click.
This surfaced several core issues:
Saturated navigation
Legacy categories crowded the navigation bar, leaving little room to support emerging user needs.
Narrow search entities
The legacy “overview” search endpoint primarily supported procedures and clinics.
Doctors, reviews, and photos were not directly searchable on web.
Buried decision-critical content
Doctor profiles and reviews were several clicks deep and difficult to reach from common entry points, even though they were essential for building trust and making decisions.
Comparing web and mobile user journeys
I mapped the same end-to-end user journey on mobile and web to see where the experiences diverged, then documented which tasks were supported, missing, or harder to complete on web.
Mobile
1. Enter: users arrive post-download and signed in
2. Discover: users browse curated collections and rankings based on concerns to explore procedures
3. Search: users search fluidly across procedures, clinics, doctors, and reviews in one place
4. Compare and save: users compare options using reviews, before/after photos, and doctor expertise, and save procedures and clinics to their wishlist to revisit across sessions
5. Decide and manage: users book a consultation and manage bookings in one place

Web
1. Enter (logged out): users arrive from search results or social platforms
2. Discover (list-based): users sift through procedure and clinic lists with limited filtering and sorting
3. Search (partial): users see search results limited to procedures and clinics, separate from list browsing; doctors and reviews do not appear in results
4. Compare (difficult): users reach doctors and reviews only after several clicks from the initial search, and cannot save options to return to later
5. Decide (app handoff): users can book a consultation, but are pushed to download the app to manage bookings and other high-intent actions

This comparison highlighted the parity gaps: mobile supports the full journey from discovery to booking, while web provides limited support for exploration, cross-entity search, and saving options to revisit later.
Research Insights
Based on the discovery work, I synthesized a set of cross-cutting insights about who the web experience serves and what they need from it. These insights shaped the information architecture problems I chose to focus on.
Synthesizing user behaviour: identifying pain points
01. Web visitors are primarily low-to-medium knowledge users
Most web visitors arrived from search engines or social platforms, often before committing to an app download or account. They came unsure what was possible or appropriate for them.
⇒ An information architecture centered on procedure and clinic lists assumes too much prior knowledge and pushes users into a more transactional mindset than they’re ready for.
02. People start with concerns and outcomes, not procedure names
In interviews and usability sessions, users described needs as concerns (e.g., “acne scars,” “jawline”) and outcomes/constraints (e.g., “low downtime,” “sharper profile”).
⇒ Users wanted curated, concern-based collections and guided starting points, both of which were missing on web.
03. Reviews and doctor expertise are central to trust
Users relied most on real reviews and photos, especially when they reflected “people like me” and included specifics about staff, communication, and recovery. Doctor specialty and expertise were another strong decision signal.
⇒ On web, these trust-building surfaces were hard to reach and thus often undiscovered.
04. Web underserves discovery, search, and decision tasks
Across the same journey, mobile supports curated discovery, cross-entity search (procedures, clinics, doctors, reviews), and saving options that help users narrow choices over time.
⇒ On web, these capabilities were limited or missing: learning and concern-based entry points were thin, search was fragmented and incomplete (especially for doctors and reviews), and there was no reliable way to save and revisit options across sessions.
Knowledge and intent framework
A consistent pattern emerged across sessions and audits: web users’ intent tended to track their knowledge. Lower-knowledge users focused on learning and exploring, while higher-knowledge users focused on verifying details and deciding.
To align the team on who web needed to serve and what support they needed, I distilled the findings into a knowledge and intent framework:
Knowledge level
1. Low: has a vague concern, but doesn’t know what procedures exist
2. Medium: has a clear concern and recognizes procedures, but isn’t sure which option is best for them
3. High: knows what they want and is choosing a specific procedure, clinic, and/or doctor
Intents along the user journey
1. Learn: understand basic concepts, risks, and what’s possible
2. Explore: browse options, examples, and reviews
3. Verify: search to check specifics and compare shortlists
4. Decide: choose a procedure/clinic/doctor and book a consultation

What it revealed: Web users clustered in the low-to-medium knowledge range and spent the most time in the learn, explore, and verify stages, where web support was thinnest.
Problem Framing
This framework translated qualitative insights into three concrete priorities for entry points, IA structure, and feature support:
1. Web does not support early exploration for low-knowledge users
Early-stage visitors arrived with general concerns and questions, not procedure names. They wanted to learn about procedures, see examples from people like them, and understand their options before committing.
The current IA lacked dedicated educational content and clear concern-based entry points, leaving no beginner-friendly place to start.
2. Search is fragmented and misaligned with how users search
Users expected a unified way to search across procedures, clinics, doctors, and reviews. Instead, search was split across separate entry points and result types, making it difficult to navigate and compare options.
Doctors and reviews also weren’t discoverable through search results, even though they were core decision signals. Reaching this information required deep, non-obvious navigation.
3. Web lacks key behaviors that support booking a consultation
As users move from exploration to decision, they need to save options and revisit them before booking. Mobile supported these behaviors; web did not, and often pushed high-intent actions to the app.
This gap made web feel like a one-off browsing touchpoint rather than a meaningful part of the decision journey, limiting its ability to drive conversion.
Why this matters
Choosing a cosmetic procedure is deeply personal and often emotionally sensitive. Users are trying to navigate uncertainty and fear of side effects while figuring out which treatments are safe and suited to their goals. When pathways for learning, exploring, verifying, and comparing are unclear, they face higher cognitive load at a time when they most need clarity and support.
These structural issues also affect the quality of their decisions. If users cannot easily find doctors, reviews, or curated recommendations, they explore less deeply, drop off more often from search, and are less likely to follow through to booking. Web remains a one-time touchpoint instead of a meaningful part of a cross-platform decision journey.
For a platform that relies on strong discovery, trust building, and social proof to help people make informed, confident decisions about aesthetic healthcare, improving the underlying IA is essential. This led to the core design question for the redesign:
How might we design an information architecture that supports users across knowledge levels and helps them explore and make confident decisions about medical aesthetics?
Ideation & Exploration
I approached each problem statement as a design prompt. For each one, I explored multiple IA options, evaluated tradeoffs with product and engineering, then converged on a direction that would be feasible to ship and extensible over time.
1. Web does not support early exploration for low-knowledge users
Design direction: Exploring options for learn and explore
I began exploration by benchmarking products with strong search and discovery patterns to see how they support early-stage exploration.
Direction A) Make home an exploratory surface.
Surface rankings, campaigns, and educational content directly on Home.
Pros: minimal IA change, low implementation cost, easy for novices to find content on arrival.
Cons: risks overloading a well-performing page, is hard to scale, and may overwhelm experienced users.
Direction B) Add a dedicated "Explore" space
Introduce a new IA node for exploration in the main navigation.
Inside Explore, group:
  • rankings (by concern, procedure, theme)
  • campaigns or curated collections
  • future AI-driven or editorial curation
Pros: gives learn/explore a clear home, scales cleanly, and clarifies entry points.
Cons: requires navigation changes and some retraining for existing users.
Direction C) Fold Explore into Search
Use the initial search landing page as the exploratory surface.
Pros: preserves a search-first IA and familiar pattern.
Cons: exploratory users still may not start with Search, and it does not create a dedicated space that feels tailored to them. Combining concern-based exploration and search in one surface also risks a crowded, confusing first experience.

Chosen direction
I combined Direction B with a dedicated space for content:
  • Introduce an Explore node that houses rankings and curated campaigns.
  • Create a Learn node that consolidates educational articles and guides.
Together, these create:
  • a clear home for users who are still learning or browsing
  • a scalable IA structure for future learn/explore features
  • a way to keep Home lighter while still supporting discovery
2. Search is fragmented and misaligned with how users search
Design direction: Exploring options for unified search
Direction A) Separate search pages per entity
One search page each for procedures, clinics, doctors, and reviews.
Pros:
  • each page is dedicated to a single entity
  • lighter engineering scope, since we would mainly add doctor and review search
Cons:
  • forces users to choose a search type up front, which conflicts with concern-based mental models (“I have this issue,” not “I want to search doctors vs. reviews”)
  • keeps search fragmented across entities and content, and still relies on the legacy “search overview” API that does not return doctors or reviews
Direction B) Multi-entity, tabbed search hub
A single search experience with one search bar and tabs for procedures, clinics, doctors, reviews, procedure information, and potentially community or content.
Pros:
  • mirrors how users move between entities, allowing them to maintain one mental model
  • makes doctors and reviews clearly searchable
  • aligns web with the existing multi-entity search pattern on mobile
  • removes dependency on legacy "search overview" API
Cons:
  • higher design and implementation complexity
  • requires ordering logic across tabs

Chosen direction
The team converged on Direction B:
Design a unified search hub with a shared query bar and tabs for procedures, clinics, doctors, reviews, and procedure information.
Fold existing standalone lists (procedure list, clinic list) into the search hub rather than keeping them as separate nav destinations.
This directly addresses:
  • depth and fragmentation: users no longer need to guess which entry point to use
  • discoverability of doctors and reviews: they become visible, first-class tabs
  • both high-intent behavior (direct names) and exploratory behavior (concerns, questions)
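To make the target shape concrete, below is a minimal sketch of how a multi-entity search request and response could be modeled. It is illustrative only: the type and field names are hypothetical and do not reflect the team's actual API.

```typescript
// Hypothetical sketch of a unified, multi-entity search model.
// All names and fields are illustrative, not Unni's production API.

type SearchEntity = "procedure" | "clinic" | "doctor" | "review" | "info";

interface UnifiedSearchRequest {
  query: string;               // concern ("tired eyes") or an exact clinic/doctor name
  tab: SearchEntity | "all";   // active results tab; "all" shows top hits per entity
  filters?: {
    priceRange?: [number, number];
    location?: string;
    minRating?: number;
  };
  sort?: "relevance" | "rating" | "price";
}

interface UnifiedSearchResponse {
  // One result group per entity, so every tab is populated from a single query
  // instead of separate procedure- and clinic-only endpoints.
  results: Record<SearchEntity, { total: number; items: unknown[] }>;
}
```

The design choice this illustrates is that a single query populates every tab, so pivoting between entities never requires retyping the query or switching endpoints.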
3. Web lacks key behaviors that support booking a consultation
Design direction: Exploring options for supporting booking-related behaviors
Direction A) Add Wishlist entry point inside Profile page
Add a "save to wishlist" action on procedure and clinic detail pages, and an access point under the profile page.
Pros: small, contained change; users can see everything they have saved in one place.
Cons: while saving is an essential segue to booking, it would lack a global access point.
Direction B) Dedicated "Wishlist" space in IA
Introduce a clear "Wishlist" entry in the IA, and add save actions on procedures and clinics.
Pros: gives saving a clear mental model, supports cross-session behavior, ties web more directly to booking intent, and mirrors a pattern already proven on mobile as a key driver of conversion.
Cons: potentially uses limited navigation real estate and requires coordination with future app parity
Direction C) Rely on app for all saved and booking flows
Do not change web; treat it as browse-only, hand off serious intent to app.
Pros: no new complexity on web.
Cons: web remains a transitional layer and cannot contribute meaningfully to conversion metrics on an operational level

Chosen direction
We adopted a scoped version of Direction B for V0:
  • Start with save to wishlist for clinics only by adding a “Wishlist” destination.
  • Expose Wishlist in global navigation in the header so it is easy to find, without overhauling the entire IA.
  • Shorten the path from search and detail pages so users can easily move from discovery → save → revisit → booking in one flow.
This captures the value of a dedicated Wishlist with a focused scope. It supports low-commitment saving, helps users return and follow through toward booking, and creates a foundation for future management features as web and app move toward parity.
Proposed Solution
Structuring IA around user intent
The explorations above converged into a single IA that reorganizes web around three stages users move through as they go from browsing to booking:
1. Learn & Explore
For early-stage visitors who want to orient and understand options.
Explore
  • Rankings (by concern, procedure, theme)
  • Campaigns / curated collections
Learn
  • Educational content hub (articles, guides, FAQs)
  • Future AI or editorial recommendations
2. Search & Verify
For users who want to compare and verify options with one coherent mental model.
Search
  • Single entry point into a unified search experience
  • Supports search across procedures, clinics, doctors, reviews, and community posts
  • Filters and sorting specific to each tab (price, location, rating, etc.)
3. Decide & Continue
For saving, returning, and following through toward booking.
Wishlist
  • Save and revisit procedures and clinics
My account
  • Manage consultation bookings
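As a purely illustrative sketch (hypothetical identifiers, not the shipped implementation), the proposed structure can be expressed as a navigation config that new surfaces plug into without reshuffling the tree:

```typescript
// Hypothetical navigation config reflecting the proposed IA; all names are illustrative.
interface NavNode {
  id: string;
  label: string;
  children?: NavNode[];
}

const webNav: NavNode[] = [
  {
    id: "learn-explore",
    label: "Learn & Explore",
    children: [
      { id: "explore", label: "Explore" }, // rankings, campaigns, curated collections
      { id: "learn", label: "Learn" },     // articles, guides, FAQs
    ],
  },
  { id: "search", label: "Search" },       // unified multi-entity search hub
  {
    id: "decide-continue",
    label: "Decide & Continue",
    children: [
      { id: "wishlist", label: "Wishlist" },  // save and revisit clinics and procedures
      { id: "account", label: "My account" }, // manage consultation bookings
    ],
  },
];
```

Because each stage is just a node with children, future surfaces (for example, new curation features) can be added under an existing bucket without reworking the top-level navigation.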
Design criteria
To compare IA options and evaluate tradeoffs, I used three criteria:
  • Works across knowledge levels: supports first-time visitors and high-intent returners
  • Reduces friction to trust and action: fewer steps to doctors, reviews, and consultation booking
  • Scales with new features: creates clear homes for rankings, content, and future curation without reshuffling navigation
Global Structure
Distilled view
Proposed Solution
New search experience
With the unified search IA in place, the next step was to design a search experience that felt cohesive across entities and worked for both concern-based and precise queries.
Design goals
I used three goals to guide the search UX:
  • Support both concern-based queries (“tired eyes”) and exact queries (clinic or doctor names).
  • Make doctors, reviews, and procedure info first-class targets, not buried behind events and clinics.
  • Reduce friction between typing, scanning results, and pivoting between entities.
Search Modal

How this solves fragmented search
  • Users no longer have to guess which search entry to use.
  • Autosuggested keywords and recent searches make it easier to refine or resume queries.
  • Copy explicitly supports both concern-based and exact queries.
Unified results layout with entity tabs
Next, I explored how to structure multi-entity results so users could move fluidly between events, clinics, doctors, reviews, and info.
The current search experience offers little control or clarity: users can't filter results, and it's unclear what content is being searched or returned.
Search Results
Search Results - Header
Key behaviors
The final search experience brings these decisions together:
  • Entry: Search is accessible from both the header icon and a central search bar on Home, leading to a full-page search surface with a clear prompt such as “Search by concern, procedure, clinic, or doctor.”
  • Results: Users see an “All” tab by default, then can pivot to Events, Clinics, Doctors, Reviews, or Info without retyping their query.
  • Sticky controls: The search bar and entity tabs remain visible as users scroll, making it easy to refine, filter, and compare.

How this solves fragmented search
  • Reduces fragmentation: all entities live under one mental model.
  • Makes doctors and reviews clearly discoverable and searchable.
  • Supports both exploratory queries (concerns, questions) and precise queries (names).
  • Users can move from "I typed my concern" → "Let me see doctors for this" → "Now show me reviews for this" in one flow.
Before: static search modal, clinic and procedure search as separate entities, and unsupported doctor and review search
After: search modal with keyword autosuggestions and a unified search experience across procedure, clinic, doctor, review, and community
Expected Outcomes & Validation Plan
Success criteria
Because the redesign is still in engineering implementation, I defined expected, measurable success criteria. Across the three problem areas, I expect the new IA and flows to produce:
Expected user outcomes
  • Faster orientation and entry into the right space: users understand where to start if they want to learn, explore, or search.
  • Shorter paths to high-signal content: fewer steps from entry → doctors → reviews → decision-critical information.
  • Clearer options for novices: low-knowledge users can find concern-based content and curated lists instead of bouncing.
  • Higher success on "find X" tasks: more users can complete tasks like "find a doctor for this concern" or "find reviews about X" without help.
Expected product success
  • Reduced abandonment from poor or confusing search flows: fewer zero-result or dead-end experiences.
  • Increased depth of exploration: more users moving beyond one or two page types per session.
  • Traffic into new surfaces: meaningful engagement with Explore, rankings, campaigns, and the Content Hub.
  • Faster shipping of new features: a clear, extensible IA so that future features (e.g., more review tools, AI curation) can plug into existing buckets without a full redesign.
Pre-launch validation plan
Because this is an ongoing project, the focus is on expected impact and a clear plan to validate it.
Tree testing / IA validation
Can users locate doctors, reviews, and learn/explore surfaces using only the new structure?
Task-based usability testing on prototypes
Representative tasks such as:
  • "You are curious about a concern. Where would you start?"
  • "Find a doctor and reviews for [concern]."
  • “Save a procedure you might want to book later.”
Measure task success, time on task, and points of confusion.
Post-launch metrics to track
Collaborating with data and engineering, I am making sure analytics events are in place before launch so we can reliably track:
Search behavior
  • Search click-through rate
  • Zero-result rate
  • Search → detail page conversion
Exploration depth
  • Number of distinct page types per session (e.g. search + doctor + reviews vs search only)
  • Engagement with Explore and Learn surfaces
Journey continuation and booking support
  • Usage of Wishlist on web
  • Return visits starting from wishlist
  • Conversion from saved → booking flows (where measurable)
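To keep these metrics honest, the events behind them need to be defined before launch. Below is a minimal, hypothetical sketch of how they could be typed; event and property names are illustrative and not the team's actual tracking plan.

```typescript
// Hypothetical analytics event sketch; names and payloads are illustrative only.
type WebAnalyticsEvent =
  | { name: "search_submitted"; query: string; tab: "all" | "procedures" | "clinics" | "doctors" | "reviews" | "info" }
  | { name: "search_result_clicked"; resultType: string; position: number } // feeds search click-through rate
  | { name: "search_zero_results"; query: string }                          // feeds zero-result rate
  | { name: "page_type_viewed"; pageType: "search" | "doctor" | "review" | "explore" | "learn" } // exploration depth
  | { name: "wishlist_item_saved"; entityType: "clinic" | "procedure"; entityId: string }
  | { name: "booking_started"; source: "search" | "wishlist" | "detail_page" }; // saved → booking conversion

// Example: a zero-result search, counted toward the zero-result rate.
const zeroResult: WebAnalyticsEvent = { name: "search_zero_results", query: "tired eyes" };
```

Each event maps onto one of the metrics above, which keeps the tracking plan auditable against the success criteria.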
What I learned
This project reminded me that information architecture is not just about rearranging pages. It is about choosing which user intents a product truly serves and which ones it quietly leaves behind. On web, the structure had grown around high-intent, search-first users, while a large group of exploratory and low-knowledge visitors were effectively unsupported. Reframing the IA around learning, exploring, searching, and continuing forced me to think less in terms of "screens" and more in terms of behaviors, decision moments, and the kinds of journeys we want to make possible.
Leading this information architecture redesign helped me strengthen my strategic problem framing, advocate for a user-first perspective in a feature-heavy environment, and create a shared mental model for cross-functional teams. It also pushed me to think of IA as a living strategy, not a static site map.
The structure has to reflect three things at the same time:
  • who our users are and what they need
  • where, why, and how they interact with our product
  • what content and tools they engage with
This work also marks a pivotal moment for the Gangnam Unni web experience. For years, web sat in the background while the app led most product thinking. As web traffic and global interest grew, it became clear that we needed a more intentional IA to support that potential. Looking forward, I hope to continue validating and iterating on this structure as we ship Explore, unified search, and saved items, and as we introduce new features. The goal is to keep the IA flexible enough to grow with the product, while staying grounded in the real behaviors and needs of the people who rely on it.