
Unsolicited Design Case Study

Designing the PitchSense AI MVP — How I would turn an AI Vision into a Habit-Forming Product in 30 Days

Role: Product Designer
Company: HSV Digital (if you hire me)
Read Time: 5 minutes
Timeline: May 4, 2026 - May 5, 2026
Platform: B2B Web Application
Focus Areas: MVP Scoping, Behavioral Design, Prototyping, Validation

Context:

I built this unsolicited design exploration to show how I think, scope, and deliver — using nothing but the public PitchSense AI vision.

My Real Challenge (Why This Case Study Exists)

I’m not inside HSV Digital yet. I haven’t sat in your sprint planning or interviewed your users. So the traditional challenge — “we have this data, now design” — doesn’t apply to me.

My challenge is to get noticed by demonstrating that I can take a one-sentence product promise and turn it into a focused, testable MVP design that solves real human pain. This is me proving that on day one, you won't have to teach me how to think — I already think like a product designer who owns the problem.

Everything that follows is my outsider’s take on PitchSense AI, built using only public information, competitive analysis, and behavioral design principles.

How I Learned the Space Without Internal Data

I immersed myself in three things simultaneously:


[01] Sales coaching research (Gartner, Gong, Sales Hacker): to understand why practice fails.

[02] Competitor UX analysis: I signed up for every AI role-play tool I could find and documented where each one lost users.

[03] Behavioral psychology: deliberate practice (Ericsson), habit formation (BJ Fogg), and psychological safety (Amy Edmondson).

The synthesis screamed one thing:
Salespeople don’t practice because role-play is a stage for judgment, not a sandbox for growth. They need a safe, private space to fail, get instant, non-judgmental feedback, and build muscle memory in 5-minute daily doses.

That became my design north star, and I’m confident your internal research would align — if it doesn’t, I’ve built in validation loops to pivot fast.


Defining Success for the MVP (What “Worth Building” Means)

Before designing anything, I pinned down what success looks like for both the user and the business. This is how I’d align a team from day one.

  • Weekly Return Rate, target 35%: users returning for sim sessions within 7 days.

  • Sessions Per User / Week, target 3-4: consistent practice rhythm (daily-ish cadence).

  • Guided Focus Adoption, target 50%: users targeting the recommended skill in their next session.

Why These Three?

Weekly return: If 35% of users come back in week 2 for another simulation session, you have product-market fit for habit formation. Sim-based learning tools typically see 5-15%.

Sessions per week: Daily sim sessions are cognitively expensive (live conversation is stressful). 3-4 per week signals habit formation without burnout. This is realistic for sustained practice.

Guided focus adoption: If users accept and act on a single recommended skill focus, they're not just passively consuming—they're actively directing their practice. This is intrinsic motivation.
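To make these targets measurable from day one, here is a minimal sketch of how the weekly return rate could be computed from raw session events. The `PracticeSession` shape and the function below are illustrative assumptions, not an existing PitchSense schema.

```typescript
// Hypothetical session record: all we need is who practiced and when.
interface PracticeSession {
  userId: string;
  startedAt: Date;
}

// One reasonable reading of the 35% target: share of users who start
// another session within 7 days of their first one.
function weeklyReturnRate(sessions: PracticeSession[]): number {
  const firstSession = new Map<string, number>();
  for (const s of sessions) {
    const t = s.startedAt.getTime();
    const prev = firstSession.get(s.userId);
    if (prev === undefined || t < prev) firstSession.set(s.userId, t);
  }

  const DAY = 24 * 60 * 60 * 1000;
  const returned = new Set<string>();
  for (const s of sessions) {
    const delta = s.startedAt.getTime() - firstSession.get(s.userId)!;
    if (delta > 0 && delta <= 7 * DAY) returned.add(s.userId);
  }

  return firstSession.size === 0 ? 0 : returned.size / firstSession.size;
}
```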

User Success (in a pilot):

  • Reps complete 3 voluntary practice sessions in week one.

  • After each session, they can articulate one specific thing they’ll change on their next real call.

  • Self-reported “call anxiety” drops by at least 1 point on a 5-point scale.

Business Signal (to justify continued investment):

  • ≥50% week-1 retention among pilot users.

  • At least one manager reports they’ve reduced manual role-play time because AI practice is enough.

  • Early correlation between practice frequency and pipeline movement (even anecdotal).

North-Star Metric for MVP:
Sessions where the user opens a specific coaching insight (replays a moment, reads a tip, clicks “I understand”). This measures deliberate learning, not vanity logins.
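As a sketch of how that north-star metric could be instrumented, here is an illustrative event shape. Every name below is an assumption for the sake of example, not a real PitchSense API.

```typescript
// Hypothetical analytics event: fired only when the user engages with a
// specific coaching insight, never on login or session start.
type InsightAction = "replayed_moment" | "read_tip" | "clicked_understood";

interface InsightEngagedEvent {
  name: "insight_engaged";
  userId: string;
  sessionId: string;
  insightId: string;      // which strength/growth insight was opened
  action: InsightAction;  // what the user actually did with it
  occurredAt: Date;
}

// North-star count: distinct sessions with at least one engagement event.
function deliberateLearningSessions(events: InsightEngagedEvent[]): number {
  return new Set(events.map((e) => e.sessionId)).size;
}
```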

Exploring Three Directions (And Killing Two for Now)

I forced myself to generate three fundamentally different product shapes before committing.

  • Live Call Co-Pilot: real-time AI whispers tips during live calls. Killed because it creates dependency instead of building muscle memory, and the tech risk is too high for an MVP.

  • Auto-Analyzer: post-call scoring and automated drill assignments. Killed because it solves measurement, not muscle memory; reps still fear the live moment.

  • AI Practice Simulator (CHOSEN): on-demand, private conversations with adaptive AI personas, micro-nudges, and one clear post-session insight. It directly replaces high-anxiety role-play with a safe, habit-forming daily practice loop.

The simulator wins because it attacks the root cause: the fear of being watched while learning.

MVP Features I’d Fight For (And What I’d Cut)

I obsessed over doing less so the core loop could shine.

  • Text-first simulation, voice optional: removes the terror of speaking. Reps can practice from anywhere; voice becomes unlockable when they feel safe.

  • 3 vivid AI personas: “Budget Skeptic,” “Too Busy DM,” “Curious Champion.” Enough variety to feel real, not overwhelming.

  • “Spark” nudges during chat: non-scoring, gentle prompts (“Try asking about their current tool.”). No judgment mid-session, only encouragement. (Persona and spark mechanics are sketched after this list.)

  • One strength + one growth area per session: a single-screen insight card pinpointing the exact conversational moment of win and loss. No 20-metric dashboard.

  • Replay that exact moment: users see their original response and an alternative suggestion side by side. Turns feedback into skill change.

  • Daily 5-min practice nudge (email/in-app): based on yesterday’s weak spot. Designed to build a streak, not pester.

  • Anonymous manager summary (opt-in only): keeps individual practice private while giving managers an aggregate progress signal, so the safety promise holds.
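To make the persona and spark mechanics concrete, here is an illustrative sketch. The persona data and the trigger rule are my assumptions, not a spec.

```typescript
// Hypothetical persona definitions: enough traits to drive an adaptive
// AI prompt, plus a difficulty label for the picker screen.
interface Persona {
  id: string;
  name: string;
  traits: string[];
  difficulty: "easy" | "medium" | "hard";
}

const personas: Persona[] = [
  { id: "budget-skeptic", name: "Budget Skeptic", traits: ["price-anchored", "pushes for discounts"], difficulty: "medium" },
  { id: "too-busy-dm", name: "Too Busy DM", traits: ["short replies", "time-pressured"], difficulty: "hard" },
  { id: "curious-champion", name: "Curious Champion", traits: ["engaged", "asks follow-ups"], difficulty: "easy" },
];

// Spark nudges never score the rep; they only suggest a next move.
// Example rule (an assumption): if the rep has sent 3 turns without
// asking a question, surface one gentle prompt, then stay quiet.
function maybeSpark(repTurns: string[], sparkShown: boolean): string | null {
  if (sparkShown) return null;
  const lastThree = repTurns.slice(-3);
  const askedQuestion = lastThree.some((t) => t.includes("?"));
  if (lastThree.length === 3 && !askedQuestion) {
    return "Try asking about their current tool.";
  }
  return null;
}
```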

Deliberately excluded in MVP: Leaderboards, CRM integration, complex gamification, voice-only mode, team battles. All great ideas — but they would distract from proving the single habit loop.

User Flow I’d Prototype First (And Why It Feels Safe)

I designed this journey to feel like a pressure-free ritual, not a test.

Trigger → Notification: “You have a pitch tomorrow — 5-min discovery practice ready.”

Entry → Minimal dashboard: streak count, single scenario recommendation, one big “Start Practice” button.

Action → Chat interface. AI persona initiates. User types responses. Occasional spark tip slides in softly. No distracting metrics. Session wraps naturally.

Outcome → Insight card: “You handled the pricing pushback well. Try an open question next time — here’s an example.” One button to replay that moment.

Loop → Next day’s nudge targets the identified gap. The cycle feels like self-improvement, not homework.
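A minimal sketch of how the loop could close, with tomorrow’s nudge chosen from today’s growth area. The insight shape and scenario mapping below are hypothetical.

```typescript
// Hypothetical session insight: exactly one strength, one growth area.
interface SessionInsight {
  userId: string;
  strength: string; // e.g. "Handled the pricing pushback well"
  growthArea: "discovery_questions" | "pricing_pushback" | "next_steps";
}

// Map each growth area to the scenario that drills it.
const scenarioFor: Record<SessionInsight["growthArea"], string> = {
  discovery_questions: "5-min discovery practice",
  pricing_pushback: "5-min objection-handling practice",
  next_steps: "5-min closing practice",
};

// Tomorrow's nudge targets yesterday's identified gap.
function nextNudge(insight: SessionInsight): string {
  const scenario = scenarioFor[insight.growthArea];
  return `Yesterday's gap: ${insight.growthArea.replace(/_/g, " ")}. Your ${scenario} is ready.`;
}
```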

Key Screens I Designed:

  1. Dashboard — Streak, daily focus card, oversized CTA.

  2. Persona Picker — 3 distinct cards with personality traits and difficulty.

  3. Simulation Chat — Minimal UI, auto-hiding feedback panel with spark tips.

  4. Insight Card — Strength, growth area, and “Relive this moment” button.

  5. Moment Replay — Original vs. suggested phrasing comparison.

  6. Practice Nudge Email — Plain text, no brand noise, just a friendly reminder.
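For screens 4 and 5, here is an illustrative data shape showing what the replay comparison needs. All field names are assumptions.

```typescript
// Hypothetical payload behind the Insight Card and Moment Replay screens:
// one strength and one growth area, each anchored to an exact moment.
interface MomentReplay {
  turnIndex: number;  // where in the transcript the moment happened
  userSaid: string;   // the rep's original response
  suggested: string;  // one alternative phrasing, shown side by side
  rationale: string;  // a single-sentence "why this works better"
}

interface InsightCard {
  strength: { label: string; moment: MomentReplay };
  growthArea: { label: string; moment: MomentReplay };
}
```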


All screens use a calm navy/teal palette with no red — a deliberate emotional design choice to communicate safety.
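One way to encode that rule so it can’t quietly regress is a small set of semantic color tokens. The hex values below are placeholders I chose for illustration, not final brand colors.

```typescript
// Hypothetical semantic tokens: growth feedback reads as warm amber,
// never red, so "needs work" never looks like "you failed".
const palette = {
  surface: "#0E2233",  // calm navy background
  action: "#1F6F6B",   // teal primary CTA
  strength: "#2E9E8F", // positive feedback
  growth: "#D9A441",   // growth areas, deliberately not red
} as const;
```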

How I’d Validate (And Kill Assumptions Without Mercy)

Every design carries hypotheses. I treat them as falsifiable and bake validation into the MVP.


I’d use click analytics, embedded micro-surveys, and direct 1:1 calls with 5 pilot users to chase truth, not confirmation.
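To keep those hypotheses falsifiable rather than rhetorical, I’d write each one down with an explicit kill rule. A sketch, with thresholds borrowed from the success metrics above; the shape and wording are my assumptions.

```typescript
// Each hypothesis carries a metric, a pass threshold, and a kill rule.
interface Hypothesis {
  claim: string;
  metric: string;
  passIf: (observed: number) => boolean;
  killAction: string;
}

const mvpHypotheses: Hypothesis[] = [
  {
    claim: "Private practice removes the fear barrier",
    metric: "weekly return rate",
    passIf: (v) => v >= 0.35,
    killAction: "Revisit onboarding and the safety framing",
  },
  {
    claim: "One insight per session drives deliberate learning",
    metric: "share of sessions with an insight opened",
    passIf: (v) => v >= 0.5,
    killAction: "Test insight placement and copy before adding features",
  },
];
```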

Summary: What I’d Bring to HSV Digital

I didn’t design a product. I designed a behavior change loop wrapped in a feeling of safety. That’s what I believe PitchSense AI’s MVP needs to be — not a feature-packed dashboard, but a daily micro-habit that makes salespeople confident without ever feeling watched.

If you bring me in, this is how I’ll operate from day one: I’ll obsess over the user’s unspoken fear, cut scope aggressively, validate with data, and ship products that earn daily engagement — not just login counts.
