Unni Web
Redesigning the Information Architecture to Help Users Better Explore and Search
Unni is Korea's largest aesthetic healthcare platform, helping millions of users understand cosmetic procedures, compare clinics and doctors, explore real patient reviews, and make informed treatment decisions. While the mobile app has evolved into a robust experience, the web platform lagged behind. It primarily served users who already knew what they wanted, and offered limited support for low-knowledge or exploratory visitors despite the fact that most web users arrive via search and are navigating aesthetic healthcare for the first time.
I led the redesign of the web information architecture (IA) to create a scalable structure that supports the full spectrum of user intents, from early-stage learning and exploration to verifying details, making decisions, and managing appointments. The goal was to build a flexible foundation for adding critical features required to better serve these needs.
Background
Because the mobile app requires download and sign up, users typically enter with clear goals in mind. Web, by contrast, provides a logged-out surface where most traffic comes from search results, social posts, and shared links. It plays a critical role as a first-touch channel for users discovering Unni and, for many, aesthetic care itself. As the platform expands globally, more international users are looking to learn about Korean aesthetics and want to explore different procedures and care.
However, the web IA had grown organically over time. Features were added without a shared structure for how information should be organized, how pages should connect, or where new features should live. As a result, navigation became crowded with legacy categories, leaving no room to introduce educational content or add features to reach web-mobile parity. Web search also remained a long-standing pain point, offering limited support for discovering hospitals, doctors, reviews, or community content.
As a first step, we needed a scalable IA that could reliably support the full spectrum of user intents throughout the aesthetic healthcare journey. Without this foundation, any additional feature would only increase complexity and fail to address core user needs.
Discovery & Research
To understand how Unni web was performing and where the IA was breaking down, I looked at the experience from three angles: the existing structure, parity with the mobile app, and user behavior.
Methods at a glance
  • Mapped the existing web IA and key entry points.
  • Compared core journeys and feature offerings on mobile and web.
  • Reviewed qualitative research and usability sessions across web and app.
  • Analyzed exploratory versus goal-driven behaviors along the care journey.
Auditing the existing web information architecture
Because there was no existing IA documentation, I first mapped key pages and entry points: how users moved between home, search, lists, and detail pages, and how many steps it took to reach them.
This surfaced several structural issues:
Saturated navigation: The navigation bar contained legacy items and had no clear space for new features or learn/explore surfaces.
Narrow search entities: The legacy “overview” search relied on a separate API that only returned procedures and hospitals, leaving doctors, reviews, and photos out of first-class results.
Buried decision-critical screens: Doctor profiles, doctor consultation flows, and reviews existed but were many clicks deep. This made them hard to reach from common entry points.
Comparing web and mobile journeys and identifying feature gaps
Looking across the same end-to-end journey on mobile and web surfaced clear gaps in how web supports users over time.
Gaps across the web journey
  • Learn & Explore: Mobile offers richer surfaces for ongoing users to discover doctors, reviews, and campaigns. Web is relatively strong at showing procedures and clinics, but weak at supporting early-stage learning and exploration, especially for low-knowledge visitors arriving from search.
  • Search & Verify: On mobile, users can move more fluidly between procedures, clinics, doctors, and reviews. On web, search is narrower and fragmented across separate entry points, which makes it harder to verify options across entities in one place.
  • Decide & Continue: Features that help users continue and manage their journey, such as saving items and revisiting shortlists, are supported on mobile but missing or hard to find on web. As a result, web behaves like a thin, one-time browsing surface rather than a place to learn, explore, decide, and return.
Understanding users and mental models
To ground the IA work in real member behavior, I drew on existing qualitative research, including user interviews and usability sessions across both web and app. I focused on interviews that observed:
  • how people started their search,
  • how low-knowledge users described their experience, and
  • what they looked for when moving toward a decision.
How web users start and where they search
  • Web visitors often "stumble in" from external sources. Many first research on Google, Instagram, and other social platforms. Gangnam Unni web is often a second or later touch, not the starting point.
  • Entry is low commitment and exploratory. Users arrive logged out from search results, social posts, or shared links. Intent is often fuzzy: they are curious, not yet ready to book.
  • They start from concerns, not procedure names. Mental model: "I have this concern" rather than "I want X procedure." They describe "puffy eyes," "tired face," "small change," or "low downtime" and expect the product to translate their concern into possible options.
  • Expectation at first click: clarity, not complexity. When they tap a concern or topic, they expect a clear list of relevant procedures or options, with labels that make sense without prior knowledge. When this match happens, confidence increases. When it does not, they feel lost.
  • Search is often a fallback, not a confident tool. Some users default to search when they feel overwhelmed on the first screen. If search does not match their language or intent, they abandon or try again elsewhere.
How low-knowledge users described their experience
"I don't know where to start"
Low-knowledge users described feeling overwhelmed by jargon and choices. They depended heavily on simple, plain-language cues.
"I want to understand my options given my concern"
They frame their situation as: "puffy eyes," "tired face," "small change," "low downtime." They struggle when the interface jumps straight to technical procedure names.
"I want to learn about what procedures are out there"
They are comfortable with curated lists such as "popular options for this concern." Unfamiliar procedures are ignored unless clearly explained and contextualized.
"I want to see real results from people like me"
People relied on reviews and before/after photos to learn what procedures exist and which best address their concerns.
How low-knowledge users actually navigate
Common exploration loop
concern or keyword search → procedure or hospital list → open a detail page → jump into review details or photos → back to the list to compare
They repeat this loop to slowly build understanding.
External content + app/web are interleaved.
Some users bounce between external search and social, Gangnam Unni web, and the app (if they already have it). They patch together understanding across channels, not in one straight line.
What they notice vs. what they ignore
  • Clear, above-the-fold modules work when labeled well. New or low-knowledge users often notice above-the-fold widgets like TOP9. They liked that TOP9 helped them quickly see "popular options for my concern." Simple tags and labels made it feel easy to try.
  • Ambiguous UI is skipped. Pills or modules with unclear labels were often ignored. Users avoided elements when they did not know what would happen after a tap.

Synthesizing user behavior
How web users arrive and start: Many web visitors stumble in from search results, social posts, or external links. They arrive logged out, with low commitment and an exploratory mindset. They start from concerns or goals rather than specific procedure names. When they tap a concern or topic, they expect a clear list of relevant options. If that mapping fails, confidence drops.
How low-knowledge users navigate: Low-knowledge users describe feeling overwhelmed by jargon and options. They rely on plain language and simple labels to decide what to click. Their typical loop is: concern or keyword → list → detail page → reviews/photos → back to list. They often shortlist only procedures they already know or have seen peers try. Unfamiliar options are ignored without guidance.
What users rely on to build trust: Reviews and before/after photos are the strongest decision signals. Users look for downtime, price range, location, and review themes like staff friendliness and wait times. Doctor profiles and real patient outcomes are essential, but are currently buried behind procedure or clinic flows.
How search is used and where it breaks: High-intent users jump straight into search when they know a procedure, clinic, or doctor name. Low- and medium-knowledge users use search as a fallback when the interface feels confusing. Search is optimized for exact keywords and procedure/clinic lists, not concerns, questions, or people. Doctors, reviews, and content are not first-class search targets, making them hard to discover.
Research Insights
Based on the discovery work, I synthesized the findings into a behavioral model that combined procedure knowledge level with user intent. This model helped connect individual observations back to structural IA needs.
Knowledge and intent framework
Rather than creating personas, I mapped users along two dimensions.
Knowledge level
  1. Low knowledge: know their concern, do not know procedures.
  2. Medium knowledge: have heard of procedures, but are unsure what fits them.
  3. High knowledge: know what they want and are choosing where or with whom.
Intents along the user journey
  1. Learn: understand basic concepts, risks, and what is possible.
  2. Explore: see options, examples, and rough price ranges.
  3. Verify: check details, reviews, and compare shortlists.
  4. Decide: choose a clinic, doctor, or procedure.
  5. Contribute and manage: write reviews, track history, and manage future plans.
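This knowledge-by-intent framework can be sketched as a small data model. This is purely illustrative: the type names (`KnowledgeLevel`, `Intent`, `intentsFor`) are hypothetical and not part of any Unni codebase; the mapping simply encodes the journey patterns described below (low/medium-knowledge visitors oscillating between learn, explore, and verify, while high-knowledge users jump toward verify and decide).

```typescript
// Hypothetical model of the two-dimensional framework; names are illustrative.
type KnowledgeLevel = "low" | "medium" | "high";
type Intent = "learn" | "explore" | "verify" | "decide" | "manage";

// Low/medium-knowledge users oscillate between learn, explore, and verify;
// high-knowledge users move quickly between verify and decide.
const primaryIntents: Record<KnowledgeLevel, Intent[]> = {
  low: ["learn", "explore", "verify"],
  medium: ["learn", "explore", "verify"],
  high: ["verify", "decide"],
};

function intentsFor(level: KnowledgeLevel): Intent[] {
  return primaryIntents[level];
}
```

Encoding the framework this way makes the IA gap concrete: the structures serving `learn` and `explore` are exactly the ones the legacy web navigation lacked.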
Web User Journey
Mapping observed behavior onto this framework revealed a clear pattern:
  • Most web visitors are in low or medium knowledge states and oscillate between learn, explore, and verify. They arrive from search or social with a concern in mind and need help understanding what options exist and whether they are safe.
  • High-knowledge users tend to arrive with a specific clinic, doctor, or procedure in mind and move quickly between search, compare, and decide.
  • The existing web IA is optimized for high-intent, search-first behaviors, while early-stage visitors and their learn/explore needs have no dedicated structures.
  • Planned and existing features such as rankings, content hubs, global guides, wishlists, and future AI-driven curation do not have a clear "home" in the current IA.
Problem Framing
With these insights, we grouped the findings into three critical problems to address in this redesign:
1. Web does not adequately support learn and explore intent
Early-stage and global visitors arrive on web with concerns and questions, not procedure names. They want to understand what is possible, see examples from people like them, and get oriented before committing.
The current IA offers scattered educational content and inconsistent concern-based entry points, with no cohesive, beginner-friendly place to start.
2. Search is fragmented, limited, and misaligned with how users search
Users expect to move through one unified flow to search across procedures, clinics, reviews, and doctors. Instead, search is split across separate entry points, making it hard for users to navigate search results.
Also, there is currently no direct way to search for doctors or reviews, even though they are core decision signals. Reaching this information requires deep, non-obvious navigation.
3. Web is missing key behaviors that support booking an appointment
As users move from exploring to deciding, they need to save options, revisit them, and move smoothly toward booking. While mobile offers features to save procedures and clinics, web does not, instead redirecting these intents to the app.
This gap makes web feel like a one-off browsing touchpoint rather than a meaningful part of the decision journey. It also contributes to a functional and experiential gap between web and app, and limits how much the web experience can contribute to conversion.
Why this matters
Choosing a cosmetic procedure is deeply personal and often emotionally sensitive. Users are trying to navigate uncertainty and fear of side effects while figuring out which treatments are safe and suited to their goals. When pathways for learning, exploring, verifying, and comparing are unclear, they face higher cognitive load at a time when they most need clarity and support.
These structural issues also affect the quality of their decisions. If users cannot easily find doctors, reviews, or curated recommendations, they explore less deeply, drop off more often from search, and are less likely to follow through to booking. Web remains a one-time touchpoint instead of a meaningful part of a cross-platform decision journey.
For a platform that relies on strong discovery, trust building, and social proof to help people make informed, confident decisions about aesthetic healthcare, improving the underlying IA is essential. This led to the core design question for the redesign:
How might we design a human-centered, scalable information architecture that supports users across different knowledge levels, and helps them learn, explore, verify options, and make confident decisions about aesthetic healthcare on the web?
Ideation & Exploration
I approached each problem statement as a design prompt. For each one, I explored multiple IA options, evaluated tradeoffs with product and engineering, then converged on a direction that would be feasible to ship and extensible over time.
1. Web does not adequately support learn and explore intent
Design direction: Exploring options for learn and explore
I began exploration by benchmarking products with strong search and discovery patterns to see how they support early-stage exploration.
Direction A – Make home an exploratory surface.
Surface rankings, campaigns, and educational content directly on Home.
Pros: minimal IA change, low implementation cost, easy for novices to find content on arrival.
Cons: risks overloading a well-performing page, hard to scale, and may overwhelm experienced users.
Direction B – Add a dedicated "Explore" space
Introduce a new IA node for exploration in the main navigation.
Inside Explore, group:
  • rankings (by concern, procedure, theme)
  • campaigns or curated collections
  • future AI-driven or editorial curation
Pros: gives learn/explore a clear home, scales cleanly, and clarifies entry points.
Cons: requires navigation changes and some retraining for existing users.
Direction C – Fold Explore into Search
Use the initial search landing page as the exploratory surface.
Pros: preserves a search-first IA and familiar pattern.
Cons: exploratory users still may not start with Search, and it does not create a dedicated space that feels tailored to them. Combining concern-based exploration and search in one surface also risks a crowded, confusing first experience.

Chosen direction
I combined Direction B with a dedicated space for content:
  • Introduce an Explore node that houses rankings and curated campaigns.
  • Create a Learn node that consolidates educational articles and guides.
Together, these create:
  • a clear home for users who are still learning or browsing
  • a scalable IA structure for new learn/explore features later
  • a way to keep Home lighter while still supporting discovery
2. Search is fragmented, limited, and misaligned with how users search
Design direction: Exploring options for unified search
Direction A – Separate search pages per entity
One search page each for procedures, clinics, doctors, and reviews.
Pros:
  • pages have a dedicated entity
  • lighter engineering scope since we mainly add doctor and review searches
Cons:
  • forces users to choose a search type up front, which conflicts with concern-based mental models (“I have this issue,” not “I want to search doctors vs reviews”)
  • keeps search fragmented across entities and content, and still relies on the legacy “search overview” API that does not return doctors or reviews
Direction B – Multi-entity, tabbed search hub
A single search experience with one search bar and tabs for procedures, clinics, doctors, reviews, procedure information, and potentially community or content.
Pros:
  • mirrors how users move between entities, letting members maintain one mental model
  • makes doctors and reviews clearly searchable
  • aligns web with the existing multi-entity search pattern on mobile
  • removes dependency on legacy "search overview" API
Cons:
  • higher design and implementation complexity
  • requires ordering logic across tabs

Chosen direction
I converged on Direction B:
Design a unified search hub with a shared query bar and tabs for procedures, clinics, doctors, reviews, and procedure information.
Fold existing standalone lists (procedure list, clinic list) into the search hub rather than keeping them as separate nav destinations.
This directly addresses:
  • depth and fragmentation: users no longer need to guess which entry point to use
  • discoverability of doctors and reviews: they become visible, first-class tabs
  • both high-intent behavior (direct names) and exploratory behavior (concerns, questions)
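One way to picture the unified hub is as a single response shape shared across every tab, assuming one backend endpoint replaces the legacy "search overview" API. All names here (`SearchTab`, `SearchResponse`, `resultsForTab`) are hypothetical, used only to show how one query can serve every entity without retyping:

```typescript
// Sketch of a multi-entity search response; all type and function names
// are illustrative assumptions, not an actual Unni API.
type SearchTab = "all" | "procedures" | "clinics" | "doctors" | "reviews" | "info";
type Entity = Exclude<SearchTab, "all">;

interface SearchResult {
  entity: Entity;
  id: string;
  title: string;
}

interface SearchResponse {
  query: string; // one shared query across every tab
  results: Record<Entity, SearchResult[]>;
}

// Pivoting between tabs reuses the same response; the user never retypes.
function resultsForTab(res: SearchResponse, tab: SearchTab): SearchResult[] {
  if (tab === "all") return Object.values(res.results).flat();
  return res.results[tab];
}
```

The design point is that doctors and reviews become first-class keys in the response rather than data reachable only through deep navigation.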
3. Web is missing key behaviors that support booking an appointment
Design direction: Exploring options for supporting booking-related behaviors
Direction A – Add Wishlist entry point inside Profile page
Add a "save to favourites" action on procedure and clinic detail pages, and an access point under the profile page.
Pros: small, contained change. Users can see what they have saved in one place
Cons: while the feature remains an essential segue to booking, it does not have a global access point
Direction B – Dedicated "Wishlist" space in IA
Introduce a clear "Wishlist" entry in the IA, and add save actions on procedures and clinic.
Pros: gives saving a clear mental model, supports cross-session behavior, ties web more directly to booking intent, and mirrors a pattern already proven on mobile as a key driver of conversion.
Cons: uses limited navigation real estate and requires coordination with future app parity
Direction C – Rely on app for all saved and booking flows
Do not change web; treat it as browse-only, hand off serious intent to app.
Pros: no new complexity on web.
Cons: web remains a transitional layer and cannot contribute meaningfully to conversion metrics on an operational level

Chosen direction
We adopted a scoped version of Direction B:
  • Start with favourites for procedures only, using a single “Wishlist” destination.
  • Expose Wishlist in global navigation (header or bottom nav) so it is easy to find, without overhauling the entire IA.
  • Shorten the path from search and lists to detail pages so users can realistically move from discovery → save → revisit → booking in one flow.
This approach delivers the core benefits of a dedicated Wishlist while keeping implementation focused. It lets web support the “I am not ready yet, but I want to remember this” behavior, positions web as a meaningful part of the decision journey instead of a one-time browse, and creates a structural home for future management features as web and app move toward parity.
Proposed Solution
Proposed IA: structuring around user intent
I reframed the IA around three pillars that map directly to the problem statements:
  1. Learn & Explore
  2. Search & Verify
  3. Decide & Continue
Evaluation principles for the new IA
When comparing IA options, I used these principles as a checklist:
  1. Supports multiple exploration types (procedures, clinics, doctors, reviews, content)
  2. Reduces depth to key entities, especially doctors and reviews
  3. Works across knowledge levels, from first-time visitors to high-intent returners
  4. Scales with new features (ranking, content hub, AI curation) without reshuffling navigation
  5. Separates concerns clearly (learn/explore vs search vs manage)
  6. Provides clear entry points for both exploratory and goal-driven users
  7. Avoids redundancy between header and bottom navigation
These principles informed both the proposed IA and how I evaluated tradeoffs with the team.
Global Structure
Distilled view

Misc. notes
  • Search supports all 5 types of search: procedures, clinics, doctors, reviews, community
  • Learn becomes a home for educational content
  • Explore includes rankings and curated collections/guides
  • Navigation becomes simple, scalable, and intuitive
IA mapped to each user intent
Learn & Explore
Explore (bottom nav)
  • Rankings (by concern, procedure, theme)
  • Campaigns / curated collections
  • Future AI or editorial recommendations
Learn (bottom nav)
  • Content Hub for articles, guides, FAQs
  • Grouped by concern, treatment area, and level (beginner / deeper dive)
Search & Verify
Search (header)
  • Single entry point into the unified search experience
Comprehensive search structure
  • Shared search bar
  • Tabs: Procedures, Clinics, Doctors, Reviews, Community
  • Filters and sorting tuned for each tab (price, downtime, location, rating, etc.)
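Tuning filters and sorting per tab can be captured as a simple configuration. This is an illustrative sketch only; the tab-to-filter assignments below are assumptions, since the case study names price, downtime, location, and rating as examples without specifying which tab gets which:

```typescript
// Hypothetical per-tab filter configuration; the assignments are
// illustrative assumptions, not the shipped design.
type FilterKey = "price" | "downtime" | "location" | "rating";

const tabFilters: Record<string, FilterKey[]> = {
  procedures: ["price", "downtime"],
  clinics: ["location", "rating", "price"],
  doctors: ["location", "rating"],
  reviews: ["rating", "downtime"],
};

// Each tab renders only the controls relevant to its entity.
function filtersFor(tab: string): FilterKey[] {
  return tabFilters[tab] ?? [];
}
```

Keeping this as configuration rather than per-page logic is what lets new tabs (e.g., community) plug into the hub without restructuring search.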
Decide & Continue
My page / Saved
  • Saved procedures (wishlist)
In-context CTAs
  • Clear "Save" on cards and detail pages
  • Reservation / consultation CTAs on relevant entities
Proposed Solution
New search experience
With the unified search IA in place, the next step was to design a search experience that felt cohesive across entities and worked for both concern-based and precise queries.
Design goals
I used three goals to guide the search UX:
  • Support both concern-based queries (“tired eyes”) and exact queries (clinic or doctor names).
  • Make doctors, reviews, and procedure info first-class targets, not buried behind events and clinics.
  • Reduce friction between typing, scanning results, and pivoting between entities.
Entry into search
To minimize friction for existing members, I kept the current search entry points: the header search icon on every page and the central search bar on Home. As procedure and clinic lists are removed from navigation, search becomes the central entry for finding them.
From there, I focused first on the search entry modal that opens when users tap either of these surfaces.
Wireframes

How this solves fragmented search
  • Users no longer have to guess which search entry to use.
  • Autosuggested keywords and recent searches make it easier to refine or resume queries.
  • Copy explicitly supports both concern-based and exact queries.
Unified results layout with entity tabs
Next, I explored how to structure multi-entity results so users could move fluidly between events, clinics, doctors, reviews, and info.
Wireframes
Key behaviors
The final search experience brings these decisions together:
  • Entry: Search is accessible from both the header icon and a central search bar on Home, leading to a full-page search surface with a clear prompt such as “Search by concern, procedure, clinic, or doctor.”
  • Results: Users see an “All” tab by default, then can pivot to Events, Clinics, Doctors, Reviews, or Info without retyping their query.
  • Sticky controls: The search bar and entity tabs remain visible as users scroll, making it easy to refine, filter, and compare.

How this solves fragmented search
  • Reduces fragmentation: all entities live under one mental model.
  • Makes doctors and reviews clearly discoverable and searchable.
  • Supports both exploratory queries (concerns, questions) and precise queries (names).
  • Users can move from "I typed my concern" → "Let me see doctors for this" → "Now show me reviews for this" in one flow.
Proposed Designs
Before: static search modal, clinic and procedure search as separate entities, unsupported doctor and review search
After: search modal with keyword autosuggestions, unified search experience across procedure, clinic, doctor, review, and community
Expected Outcomes & Validation Plan
Success criteria
Because this is early-stage, I defined expected (not final) success criteria. Across the three problem areas, I expect the new IA and flows to produce:
Expected user outcomes
  • Faster orientation and entry into the right space – Users understand where to start if they want to learn, explore, or search.
  • Shorter paths to high-signal content – Fewer steps from entry → doctors → reviews → decision-critical information.
  • Clearer options for novices – Low-knowledge users can find concern-based content and curated lists instead of bouncing.
  • Higher success on "find X" tasks – More users can successfully complete tasks like "find a doctor for this concern" or "find reviews about X" without help.
Expected product success
  • Reduce abandonment from poor or confusing search flows – Fewer zero-result or dead-end experiences.
  • Increase depth of exploration – More users moving beyond one or two page types per session.
  • Drive traffic into new surfaces – Meaningful engagement with Explore, rankings, campaigns, and the Content Hub.
  • Support faster shipping of new features – A clear, extensible IA so that future features (e.g., more review tools, AI curation) can plug into existing buckets without a full redesign.
Validation plan
Because this is an early-stage IA project, the focus is on expected impact and a clear plan to validate it.
Pre-launch validation
  1. Tree testing / IA validation: Can users locate doctors, reviews, and learn/explore surfaces using only the new structure?
  2. Task-based usability testing on prototypes, with representative tasks such as:
  • "You are curious about a concern. Where would you start?"
  • "Find a doctor and reviews for [concern]."
  • “Save a procedure you might want to book later.”
Measure task success, time on task, and points of confusion.
Post-launch metrics to track
I plan to work with data and engineering to ensure analytics events are in place before launch so we can reliably track:
Search behavior
  • Search click-through rate
  • Zero-result rate
  • Search → detail page conversion
Exploration depth
  • Number of distinct page types per session (e.g. search + doctor + reviews vs search only)
  • Engagement with Explore and Learn surfaces
Journey continuation and booking support
  • Usage of Wishlist on web
  • Return visits starting from wishlist
  • Conversion from saved → booking flows (where measurable)
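The search-behavior metrics above can be derived from raw analytics events once instrumentation is in place. The event shape and function names below (`SearchEvent`, `zeroResultRate`, `clickThroughRate`) are assumptions for illustration, not an existing analytics schema:

```typescript
// Minimal sketch of computing proposed search metrics from analytics
// events; the event shape is a hypothetical assumption.
interface SearchEvent {
  query: string;
  resultCount: number;     // results returned for this search
  clickedResult: boolean;  // did the user click through to a detail page?
}

// Share of searches that returned nothing (dead ends to reduce).
function zeroResultRate(events: SearchEvent[]): number {
  if (events.length === 0) return 0;
  return events.filter((e) => e.resultCount === 0).length / events.length;
}

// Share of searches that led to a detail page (search → detail conversion).
function clickThroughRate(events: SearchEvent[]): number {
  if (events.length === 0) return 0;
  return events.filter((e) => e.clickedResult).length / events.length;
}
```

Defining the metrics this concretely before launch is what makes the "directional improvement" framing testable rather than anecdotal.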
Success early on is defined less by hitting specific numbers and more by confirming directional improvements: fewer dead ends and irrelevant landings, higher discovery of doctors, reviews, and curated learn/explore surfaces, and a more scalable IA that can absorb upcoming features without breaking.
Reflection
This project reminded me that information architecture is not just about rearranging pages. It is about choosing which user intents a product truly serves and which ones it quietly leaves behind. On web, the structure had grown around high-intent, search-first users, while a large group of exploratory and low-knowledge visitors were effectively unsupported. Reframing the IA around learning, exploring, searching, and continuing forced me to think less in terms of "screens" and more in terms of behaviors, decision moments, and the kinds of journeys we want to make possible.
Leading this IA redesign helped me strengthen my strategic problem framing, advocate for a user-first perspective in a feature-heavy environment, and create a shared mental model for cross-functional teams. It also pushed me to think of IA as a living strategy, not a static site map.
The structure has to reflect three things at the same time:
  • who our users are and what they need
  • where, why, and how they interact with our product
  • what content and tools they engage with
This work also marks a pivotal moment for the Gangnam Unni web experience. For years, web sat in the background while the app led most product thinking. As web traffic and global interest grew, it became clear that we needed a more intentional IA to support that potential. Looking forward, I hope to continue validating and iterating on this structure as we ship Explore, unified search, and saved items, and as we introduce new features. The goal is to keep the IA flexible enough to grow with the product, while staying grounded in the real behaviors and needs of the people who rely on it.