Cvent

Event Agenda Builder using AI

An AI-powered session recommendation system designed to help attendees at large-scale events cut through choice overload and build a personalized, conflict-aware agenda without decision paralysis.

0 → 1 Feature · AI · Mobile · UX Research
Agenda Builder project cover image

3

Productive workshops facilitated

2

Rounds of usability testing before dev handoff

100%

User testing participants found the design intuitive by round 2

My role

End-to-end ownership, from design strategy and research to dev handoff

Tools

Figma · FigJam · Figma Make · Mixpanel · Sigma

Duration

8 months

Team

Senior Product Designer (me) · UX Research · Content Design · Localization · 2 PMs · Engineering

Background & the Problem

Cvent's Attendee Hub is an all-in-one digital event platform that powers virtual, hybrid, and in-person experiences. It centralizes session browsing and scheduling, networking and engagement, content delivery, and gamification. Event planners control when the platform becomes available to attendees, typically opening it after attendees register for an event and some time before the event day.

At large, multi-day events like conferences, attendees routinely faced a flat, unfiltered list of 50+ sessions with no personalization. The friction was twofold:

Sheer volume

A flat list of 50+ sessions forced attendees to manually scan every option for each day, creating choice overload with no clear path to prioritization.

Lack of relevance

With no indication of why a session might be worth attending, attendees had to read every description individually, which was time-consuming, frustrating, and error-prone.

The downstream effect was predictable: attendees missed valuable sessions, picked ones that weren't a good fit, and left events feeling underwhelmed.

How might we help attendees build their event agenda so they get the most value from sessions, while avoiding cognitive overload and decision paralysis?

The business case was clear: Better session engagement leads to higher attendee satisfaction, stronger ROI for planners, and more compelling events for returning exhibitors and speakers.

The Process

The project moved through five distinct phases. Each one built directly on the last - the audit informed the workshops, the workshops shaped ideation, ideation constrained the design, and two rounds of testing refined it.

Research & Audit

Existing research · data analysis · internal audit for existing patterns

Problem Discovery

Define problem · constraints · users · assumptions · cross-functional workshop

Ideation & Scoping

JTBD journey map · ideation workshops · OST · scope decisions

Design & Iteration

Wireframes → hi-fi · multiple iterations · feasibility checks · stakeholder reviews

Usability Testing ×2

Round 1 → received feedback · improved designs · Round 2 → validated · dev handoff prep

Phase 01

Research & Audit

The project began with a clear OKR: simplify agenda building for event attendees. Before running any workshops or drawing any wireframes, I invested time in building a deep, evidence-based understanding of the problem space. This meant reviewing all existing research insights related to agenda building, analyzing user behavior data in Mixpanel and Sigma, and conducting a thorough internal audit of the current agenda-building experience across all products.

This phase was less visible but foundational. Without it, the workshops that followed would have been speculative. With it, I could walk into the room with validated pain points, concrete data, and a clear picture of what was already known, and what wasn't.

Phase 02

Problem Discovery Workshop

Armed with that research, I planned and facilitated a problem discovery workshop with product, research, and design partners. The goal wasn't to brainstorm solutions; it was to collectively agree on the problem. Together, we defined the problem statement, surfaced the constraints we'd be designing within, identified who the users were and what we already knew about them, listed the open questions we'd need to answer, and made our assumptions explicit so we could track and test them.

The output was alignment - a shared, documented problem statement that the whole cross-functional team could point back to throughout the project. This step made every subsequent decision faster and less contentious, because we had already agreed on what we were solving for.

Problem Discovery workshop in FigJam
Problem Discovery workshop in FigJam

Phase 03

Ideation, Journey Mapping & Scoping

With a clear problem statement in hand, I moved into ideation. Using existing Jobs-to-be-Done research data, I created a detailed attendee journey map that deliberately extended well beyond the boundaries of Attendee Hub. The full journey I mapped included the pre-event registration experience, which lives in a separate platform where attendees could sign up for initial sessions, followed by the post-registration in-event experience inside Attendee Hub.

Design Ideation workshop in FigJam
Design Ideation workshop in FigJam

I facilitated a design ideation workshop with the design and research teams, using the journey map as a shared reference. Grounding participants in the attendee's full lifecycle, not just their in-app experience, led to a much richer set of ideas. The session generated concepts that spanned Attendee Hub and beyond, including suggestions for improving touchpoints in the registration flow. After the workshop, I wireframed the strongest concepts and presented them to design leadership and stakeholders, receiving positive and directional feedback.

Wireframes of the top ideas gathered
Wireframes of some of the top ideas gathered

But this created a new challenge - we had too many good ideas and no clear scope. To resolve this, I organized and facilitated an Opportunity Solution Tree (OST) workshop with my product partners and Design Manager. We stepped back to the OKR, mapped out the various opportunities around it, listed the solutions each opportunity could yield, and surfaced the assumptions behind each.

Opportunity Solution Tree workshop in FigJam
Opportunity Solution Tree workshop in FigJam

This exercise narrowed the project to its final scope - an AI-powered recommendation flow that personalizes session suggestions based on attendee interests, goals, and behavioral data. The OST workshop was arguably the most strategically important step of the project. It happened before a single high-fidelity screen was drawn, and it set the direction for everything that followed.
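To make the scoped direction concrete: a recommendation flow of this shape can be thought of as ranking sessions by how well their topic tags match an attendee's stated interests and goals. The sketch below is purely illustrative, with made-up field names and arbitrary weights; it is not Cvent's production engine, which also factors in behavioral data.

```python
def score_session(session_tags: set[str], interests: set[str], goals: set[str]) -> float:
    """Score a session by tag overlap with the attendee's stated
    interests and goals. Weights are illustrative assumptions,
    not tuned production values.
    """
    if not session_tags:
        return 0.0
    interest_match = len(session_tags & interests) / len(session_tags)
    goal_match = len(session_tags & goals) / len(session_tags)
    # Interests weighted slightly above goals (an arbitrary choice here).
    return 0.6 * interest_match + 0.4 * goal_match


def recommend(sessions: dict[str, set[str]], interests: set[str],
              goals: set[str], top_n: int = 3) -> list[str]:
    """Return session titles ranked by score, highest first."""
    ranked = sorted(sessions,
                    key=lambda title: score_session(sessions[title], interests, goals),
                    reverse=True)
    return ranked[:top_n]
```

For example, an attendee interested in "ai" with a "marketing" goal would see a session tagged with both ranked above one tagged with only "ai".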

Attendee's agenda building journey map
Attendee's agenda building journey map - clarified current state and provided design direction

Phase 04

Design & Iteration

With clear direction, I began designing the solution. During the initial flow, we hypothesized that:

  1. users would easily find the AI recommendation entry point, and
  2. they would intuitively recognize session conflicts when reviewing recommendations.

The designs went through multiple rounds of iteration: reviewing with engineering to check feasibility, presenting to stakeholders for feedback, and refining until we felt confident enough to usability test the designs.

Design iterations
Several rounds of design iterations that included reviews by stakeholders

Phase 05

Usability Testing & Refinement

Working with our user researcher, we ran the first round of usability testing. It disproved both hypotheses.

Round 1: Test usability

Finding 1

Missed entry point

The AI sparkle icon on the floating action button was too small and went unrecognized. Users missed the feature entirely on first viewing.

Fix: The FAB label now expands with readable text when the user lands on the schedule page, then collapses after a short delay, making the feature unmissable on first visit without permanently cluttering the UI.

Finding 2

Unclear content

The language in the goals selection step was unclear, which reduced user confidence and completion rates.

Fix: The copy was rewritten to be more direct and outcome-oriented, with clearer examples of what selecting each goal would mean for their recommendations.

Finding 3

Missed conflicts

Users expected an explicit warning when a recommended session conflicted with one already on their schedule. Without one, the majority didn't notice the conflict.

Fix: A high-visibility "Conflict" indicator was added to recommendation cards, plus a calendar view for reviewing and resolving overlaps before committing to a session switch.

Unscripted insight from testing: When reviewing a recommended session, users wanted to know if any of their connections were also attending it. This wasn't in the brief, but it was clearly meaningful - social proof directly informed their decision. I added a hyperlinked connections count to the recommendation card - a small addition that brought a genuine layer of social context into the experience.

Round 2: Validate enhancements

After incorporating all changes, we ran a second round of usability testing. All participants described the updated flow as clear, intuitive, and easy to use. Conflict identification improved from 40% in round 1 to 100% in round 2.

The Solution

The final design is a three-step workflow where each step builds on the last, taking the attendee from an empty or partially built schedule to a personalized, conflict-aware agenda.

  1. Viewing the expanded "Get Recommendations" CTA: The CTA lives on both the session list and schedule page. On first load, the FAB label expands with text before collapsing, keeping the feature discoverable without permanently cluttering the UI.
  2. Goals & Interests Selection: A guided two-step flow captures attendee interests and session goals. Combined with inferred behavioral data (when consent is provided), this powers the recommendation engine.
  3. Recommendation List with "Why" & Conflict Indicators: Each recommendation card surfaces the reasoning behind it and flags any scheduling conflicts. A calendar view lets attendees review the overlap in context before deciding whether to switch sessions.
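Under the hood, the conflict flag in step 3 comes down to interval-overlap detection between a recommended session and the sessions already on the attendee's schedule. A minimal sketch, using a hypothetical session model rather than Cvent's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Session:
    # Illustrative fields only; not Cvent's actual data model.
    title: str
    start: datetime
    end: datetime


def find_conflicts(candidate: Session, schedule: list[Session]) -> list[Session]:
    """Return scheduled sessions that overlap the candidate.

    Two sessions overlap when each starts before the other ends;
    back-to-back sessions do not count as a conflict.
    """
    return [s for s in schedule
            if candidate.start < s.end and s.start < candidate.end]
```

Any non-empty result would drive the "Conflict" indicator on the recommendation card and populate the calendar overlap view.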

Social Proof - Connections Attending
Additionally, following the insights from usability testing, a hyperlinked connections count was added to the recommendation card, adding a layer of social context to the decision.

User flow showing session conflicts
Happy path: User flow showing session conflicts

Outcome & Reflections

Measurable usability improvement

Conflict identification went from 40% in round 1 to 100% in the final round of testing. All participants described the flow as clear, intuitive, and easy to use.

What I learned

The most valuable design work on this project happened before any screen was designed. Running the problem discovery workshop and then the OST scoping session meant the team was aligned, the scope was defensible, and the solution we built was solving the right problem.