Project at Cvent

AI Event Agenda Builder

A Session Recommender that replaces the overwhelming event session list with a guided experience.

Agenda builder project Cover Image

Overview

Role: Senior Product Designer
Duration: 8 months
Team: 1 Senior Product Designer & owner (me), 1 User Researcher, 1 Content Designer, 1 Localisation Specialist, Product Managers, Front-end and Back-end Engineers
Tools: Figma, FigJam (workshops), MixPanel and Sigma (data analysis)

Background

Cvent's Attendee Hub is an all-in-one digital event platform that powers virtual, hybrid, and in-person experiences by centralising content delivery, networking, and engagement tools into a single unified interface.

From our early discovery and analysis, we confirmed that attendees at large, complex, multi-day events such as conferences were deeply overwhelmed when it came to building their agenda. The problem was two-fold:

  1. Sheer Volume: For large, multi-day events, attendees faced a flat, unstructured list of 50+ sessions. They had to manually scan the entire list for each day, resulting in choice overload.
  2. Lack of Relevance: The platform recommended a few sessions but gave no clear indication of why a session might be right for the attendee. This forced attendees to click into every single session description, a time-consuming and frustrating process.

The business impact was clear: this friction led attendees to miss valuable sessions or to choose sessions that weren't a good fit, resulting in low engagement and dissatisfaction with the event.

Problem Statement

How might we help attendees build their event agenda so they get the most value from the sessions, while avoiding cognitive overload and decision paralysis?

My Role

As the Senior Product Designer on this project, I led the design strategy and execution, guiding the team from an ambiguous concept to a defined, validated solution.

My key responsibilities included:

  • Leading Discovery & Definition: I synthesised existing research, analysed product data (MixPanel, Sigma), and conducted extensive internal and external audits to build a foundational understanding of the problem space.
  • Facilitating Strategic Alignment: I planned and facilitated a problem-discovery workshop with product, research, and design partners to collaboratively define the problem statement, success metrics, and constraints.
  • Defining the Scope: I planned and facilitated a follow-up workshop with my PM and Design Manager, using the Opportunity Solution Tree methodology to map out all possible solutions and strategically align on a phased scope.
  • Driving Ideation: I created in-person and hybrid attendee journey maps (based on existing JTBD research) and facilitated a design ideation workshop for the design and research teams, resulting in a rich collection of ideas.
  • Executing End-to-End Design: I created wireframes for multiple concepts, presented them to design leadership for feedback, and drove the iterative process from low-fidelity to a final, high-fidelity solution.
  • Validating & Iterating: I worked with my UR partner to test the designs, synthesised feedback, and iterated to improve usability and user confidence.

The Process

Defining the Problem & Scope

My initial research validated the problem. My next step was to build cross-functional alignment. I led two key workshops:

  1. Problem Discovery Workshop: I presented my research, which guided the cross-functional team to a single, unified problem statement.
    Img: Screenshot of the Problem Discovery workshop in FigJam
  2. Opportunity Solution Tree Workshop: This workshop was pivotal for our product strategy. We mapped out every possible opportunity, which allowed us to deliberately scope down from a massive "AI Wizard" concept to a more focused, high-impact “Session Recommender” for our first release.
    Img: Screenshot of the Opportunity Solution Tree workshop in FigJam

Design Ideation

I facilitated a design workshop and created attendee journey maps to give participants a holistic view of an attendee's agenda-building journey, rather than focusing only on their experience within the app.

Img: Screenshot of the Design Ideation workshop in FigJam

This generated dozens of ideas, which I synthesised into a Venn diagram (AI vs. non-AI, Wizard vs. Recommender).

Img: Venn Diagram with categorised ideas

Using the ideas gathered through the workshop, I created wireframes for multiple concepts and presented them to design leadership for feedback.

Img: Wireframes of some of the top ideas gathered

We initially explored a complex "Agenda Builder Wizard." However, based on leadership feedback and my strategic analysis, I made a strong case to pivot to the Session Recommender concept. This was a critical decision that de-risked the project, reduced engineering complexity, and allowed us to deliver value faster.

To get a clear understanding of the various touchpoints an attendee passes through when building their agenda, I mapped out the agenda-building journey, capturing the state of the agenda at every stage. I then reviewed this map with the PMs, cross-checked my analysis and assumptions with them, made corrections, and finalised the direction for the designs.

Img: Attendee's agenda building journey that clarified current state and provided design direction

Design & Iteration

I designed the initial flow for the Recommender, built on the hypothesis that users would easily find the CTA and understand session conflicts.

  • Usability Testing (V1): Testing revealed key gaps between our hypothesis and user behaviour.
    1. Confusing Entry Point: The "sparkle" icon on the FAB was too small and not understood, causing users to miss the feature.
    2. Vague Content: The copy for "session goals" was unclear, reducing user confidence.
    3. Missed Conflicts: Users did not identify session conflicts and expected a clear warning.

  • Iteration (V2): I translated this direct feedback into high-impact design changes:
    • Clarified the CTA: Redesigned the FAB experience for unambiguous clarity.
    • Refined the Content: Worked with our Content Designer to make the "goals" copy compelling.
    • Added Conflict Indicators: Designed a high-visibility "Conflict" warning.
  • Enhancing with User Insights: Testing also uncovered a new opportunity: when their connections were attending the same session, users wanted to see who they were. I incorporated this insight by hyperlinking the "number of connections" text, adding a powerful layer of social proof.

The Solution

The final design is a multi-step, intelligent workflow that guides users to the right content.

The 'Get Recommendations' CTA: The FAB pattern was altered so that its label text appears expanded when the user lands on the page, then collapses after a short delay.

The 'Goals & Interests' Selection: Empowers users by asking them to self-identify their goals. This data, combined with inferred behavioural data (used only with the attendee's consent), fuels the recommendation engine.
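At its simplest, combining stated goals with behavioural signals can be thought of as weighted tag matching. The sketch below is purely illustrative; the tag names, weights, and scoring function are my assumptions, not Cvent's actual recommendation engine:

```python
# Hypothetical sketch of goal-weighted session scoring. Field names and the
# 0.5 behaviour weight are illustrative assumptions, not Cvent's implementation.
def score_session(session_tags, stated_goals, behaviour_tags=(), behaviour_weight=0.5):
    """Score a session by overlap with the attendee's stated goals,
    lightly boosted by consented behavioural signals."""
    goal_hits = len(set(session_tags) & set(stated_goals))
    behaviour_hits = len(set(session_tags) & set(behaviour_tags))
    return goal_hits + behaviour_weight * behaviour_hits

sessions = {
    "Scaling Design Systems": ["design", "systems"],
    "Intro to Event Marketing": ["marketing"],
}
goals = ["design", "networking"]

# Rank sessions from best to worst match against the stated goals
ranked = sorted(sessions, key=lambda name: score_session(sessions[name], goals), reverse=True)
print(ranked)  # ['Scaling Design Systems', 'Intro to Event Marketing']
```

Weighting stated goals above inferred behaviour keeps the "why" explainable, which is what makes the recommendation card's reasoning text possible.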

The 'Recommended List' with 'Why' & 'Conflict': The new recommendation card clearly states why a session is recommended (e.g., 'Based on your goals') and includes a high-visibility 'Conflict' indicator when one exists.
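Under the hood, a conflict indicator reduces to a time-interval overlap check between a recommended session and everything already on the agenda. A minimal sketch, using hypothetical 'title'/'start'/'end' fields rather than Cvent's actual data model:

```python
from datetime import datetime

def overlaps(a_start, a_end, b_start, b_end):
    # Two sessions conflict when each one starts before the other ends
    return a_start < b_end and b_start < a_end

def find_conflicts(recommended, agenda):
    """Return (recommended_title, agenda_title) pairs whose times overlap.
    Sessions are dicts with hypothetical 'title', 'start', 'end' keys."""
    return [
        (r["title"], s["title"])
        for r in recommended
        for s in agenda
        if overlaps(r["start"], r["end"], s["start"], s["end"])
    ]

keynote = {"title": "Opening Keynote",
           "start": datetime(2026, 3, 1, 9, 0), "end": datetime(2026, 3, 1, 10, 0)}
workshop = {"title": "AI Workshop",
            "start": datetime(2026, 3, 1, 9, 30), "end": datetime(2026, 3, 1, 11, 0)}

print(find_conflicts([workshop], [keynote]))  # [('AI Workshop', 'Opening Keynote')]
```

Any non-empty result drives the 'Conflict' badge on the recommendation card, so the warning appears before the attendee commits to the session.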

Outcome & Reflections

While the project is scheduled for release in Q1 2026, its impact was validated through a rigorous design and testing process.

  • Impact 1: Delivered a clear, data-driven product strategy. My work in the discovery and scoping workshops moved the team from a vague, oversized "AI Wizard" idea to a focused, achievable, and high-impact "Session Recommender" MVP.
  • Impact 2: Solved the core user problem. In our final round of testing, 100% of users successfully identified session conflicts (up from 40% in V1), and all users described the new flow as "clear," "intuitive," and "easy-to-use."
  • What I Learned: This project solidified my belief that a designer's most important job is often framing the problem. By leading workshops using methodologies like the Opportunity Solution Tree, I was able to build team alignment and ensure that the solution we designed was actually solving the right problem.