Led the 0→1 design of Groove Plays, redefining how sales strategy translates into real-time execution through state-aware, trust-preserving reinforcement.

Sales organizations invest heavily in strategy—new plays announced at kickoffs, updated guidance shared in decks, and revised processes rolled out quarter after quarter. Yet once those strategies reach the field, leadership has little visibility into whether they are actually being followed. Execution breaks down not because reps lack effort, but because existing tools struggle to translate intent into consistent, real-world behavior.
Most sales tooling at the time focused on automation or retrospective reporting. “Next best action” systems pushed notifications without knowing whether a rep had already completed the work, creating redundant noise, eroding trust, and encouraging reps to ignore guidance altogether. The gap wasn’t awareness or effort—it was state.
For example, a sales leader might roll out a new strategy requiring a follow-up email to be sent within 24 hours of a demo, tied to a specific opportunity. Existing tools could remind a rep to “send a follow-up,” but had no reliable way of knowing whether that email had already been sent—often prompting reps to do work they had already completed.
Groove Plays approached this differently. Before reinforcing the strategy, the system first checked whether the action had actually occurred—was a follow-up email sent, was it associated with the correct opportunity, and did it happen within the intended window? Only when the system detected a genuine gap did it intervene. When reps were already executing correctly, Plays stayed silent.
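The state check in this example can be sketched as follows. This is an illustrative reconstruction, not Groove's actual implementation; the `EmailEvent` record and the `needs_follow_up` helper are hypothetical names, loosely modeling what CRM activity data might provide.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical event record, loosely modeling what Salesforce email
# activity data might provide for a given opportunity.
@dataclass
class EmailEvent:
    opportunity_id: str
    sent_at: datetime

def needs_follow_up(demo_ended_at, opportunity_id, email_events,
                    window=timedelta(hours=24)):
    """Return True only if no follow-up email tied to this opportunity
    was sent within the intended window after the demo.

    The system intervenes only on a genuine gap; if the rep already
    executed correctly, this returns False and Plays stays silent.
    """
    deadline = demo_ended_at + window
    for event in email_events:
        if (event.opportunity_id == opportunity_id
                and demo_ended_at <= event.sent_at <= deadline):
            return False  # Work already done: no reinforcement needed
    return True  # Genuine gap: surface an Action

# A follow-up sent 3 hours after the demo means no intervention.
demo_time = datetime(2021, 4, 1, 15, 0)
events = [EmailEvent("opp-123", datetime(2021, 4, 1, 18, 0))]
print(needs_follow_up(demo_time, "opp-123", events))  # False: stay silent
print(needs_follow_up(demo_time, "opp-999", events))  # True: genuine gap
```

The key design property is the asymmetry: the default return path is silence, and reinforcement requires positive evidence of a gap.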
Groove Plays was conceived to close this strategy-to-behavior gap. Rather than treating strategy as static instructions or one-off tasks, Plays framed strategy as an executable system—one that reinforced guidance only when it was genuinely needed and stayed invisible when it wasn’t.
I led the end-to-end design of Groove Plays as a 0→1 platform concept, partnering closely with a Senior Product Manager to shape strategy, sequencing, and executive alignment. I owned problem framing, conceptual modeling, interaction design, and prototyping, with Engineering later brought in to evaluate technical feasibility and data integration constraints.
A pivotal design decision was to prioritize execution surfaces over configuration. While there was early pressure to focus on a highly simplified builder experience, I pushed to first redesign Groove Actions—the core unit of work through which reps would actually experience Plays. Without a trusted, legible execution surface, no amount of strategic sophistication would matter.
When Groove Plays was conceived, sales teams were operating in an increasingly fragmented tooling landscape. Outreach automation had matured, forecasting platforms were gaining traction, and sales engagement tools promised efficiency at scale. Yet despite this progress, one critical problem remained unsolved: ensuring that strategy actually translated into consistent execution in the middle of the funnel.
Sales leaders could articulate strong strategies—how to follow up after demos, how to multi-thread accounts, how to sequence stakeholder engagement—but once those strategies were announced, they largely disappeared into day-to-day rep workflows. Existing systems could track activity or issue reminders, but they could not reliably answer a more fundamental question: Did the rep actually do the thing, in the right context, at the right time?
This gap created real operational risk. Leaders had no dependable way to reinforce strategy without resorting to manual oversight or blunt reporting. Sales reps, meanwhile, were inundated with generic “next best action” prompts that often duplicated work they had already completed—training them to ignore guidance altogether. The result was a growing disconnect between intent, execution, and trust.
The timing made the problem harder. This work predated modern generative AI and large language models; intelligent behavior detection relied on brittle enterprise data sources, primarily Salesforce, with limited event semantics and inconsistent instrumentation. Determining whether an action had occurred—such as a follow-up email tied to a specific meeting, opportunity, and account—was non-trivial and often ambiguous. Any solution would need to work within those constraints while still delivering meaningful value.
The stakes were high. Solving this problem meant inventing a new execution model rather than shipping another feature: one that balanced rep autonomy with leadership accountability, avoided micromanagement, and earned trust through restraint. Getting it wrong risked creating yet another noisy system reps would ignore. Getting it right had the potential to change how sales organizations operationalized strategy—and how confidently they could close deals.
Groove Plays sat squarely at that inflection point.
Groove Plays originated as a founder-initiated concept, driven by a recognition that sales teams lacked a reliable way to ensure strategy translated into consistent execution in the middle of the funnel. I was brought in by the VP of Product alongside a Senior Product Manager to explore whether this idea could become a viable product capability.
I led the work as Head of Product Design and owned the design process end-to-end.
The work unfolded over several months through tight, iterative cycles. Early efforts focused on reframing the problem—from activity automation to state-aware execution—using lightweight concept models, exploratory prototypes, and collaborative working sessions rather than heavy documentation. Research with sales reps helped ground assumptions around trust, micromanagement, and how execution actually showed up in daily workflows.
As the concept evolved, the focus shifted toward defining the underlying system: how Plays would detect state, when it should intervene, and—just as importantly—when it should remain invisible. As the execution model became clearer, reviews with the VP of Product, CEO, CTO, and Engineering leaders served as pressure tests—challenging assumptions, validating feasibility, and refining trade-offs and scope.
A consistent principle guided how we worked throughout the project: validate the execution model before scaling the surface area. That principle shaped both design decisions—such as prioritizing execution surfaces over builder experiences—and how deeper engineering investment was sequenced.
The initial mandate was intentionally open-ended. Leadership wanted to explore whether Groove could create a meaningful solution for what was often described as the “middle-of-the-funnel problem”—the point where deals stalled not because of poor outreach, but because execution against strategy became inconsistent and difficult to reinforce.
At a high level, the ask was open rather than prescriptive: there was no predefined feature shape or delivery expectation. The mandate was to explore the space and determine whether a new product capability was warranted at all.
Early exploration made it clear that solving this problem required a reframing of what “execution support” meant. The real need was not more automation, more notifications, or more dashboards—but a system that could understand execution state and act with restraint.
For Groove Plays to succeed, it needed to detect execution state reliably, intervene only when a genuine gap existed, and stay invisible when reps were already executing correctly.
This framing directly informed sequencing decisions, design priorities, and where investment would (and would not) go in early phases.
Given the exploratory nature of the work, success was not defined by launch metrics or revenue impact. Instead, we aligned on criteria that reflected whether the concept itself was viable and differentiated.
Only if these criteria were met would it make sense to scale into richer authoring tools, analytics, or intelligence layers.
To protect focus and avoid premature complexity, several areas were intentionally out of scope for early phases, including a polished strategy builder aimed at sales leaders and a notification-driven recommendation layer.
This discipline helped ensure the work remained centered on validating the execution model itself—not the surface area around it.
Research and discovery focused on understanding how sales strategy actually translated—or failed to translate—into day-to-day execution. Through conversations with account executives, sales managers, and internal stakeholders, several consistent patterns emerged that reframed the problem space and directly shaped the design direction.
Sales leaders were confident in their strategies, but had little signal on whether those strategies were being followed once reps returned to their daily workflows. Post-hoc reporting showed activity levels, but not whether specific strategic actions occurred at the right moment or in the right context.
Design implication: Groove Plays needed to make execution observable, not just enforceable—focusing on detecting whether actions actually occurred before attempting to reinforce behavior.
Sales reps described existing recommendation systems as noisy and often irrelevant. Because these tools couldn’t reliably detect completed work, they frequently prompted actions reps had already taken—eroding trust and encouraging dismissal of future guidance.
This contradicted a common internal assumption that more reminders would improve execution.
Design implication: Plays had to intervene sparingly. Detection of execution state became a prerequisite for any reinforcement, and silence became a deliberate success condition rather than a failure mode.
Top-performing reps consistently expressed concern about tools that felt like surveillance or rigid enforcement. At the same time, sales managers emphasized the need for consistency across the team—especially during critical deal stages.
Design implication: The system needed to remain invisible to reps who were already executing well, while selectively supporting those who weren’t—preserving autonomy without sacrificing accountability.
Reps rarely adopted new tools unless value surfaced directly inside their existing workflow. Separate dashboards, task lists, or strategy views were ignored unless tightly integrated with the work already happening.
Design implication: Groove Actions—the core unit of work—became the primary execution surface. Plays needed to express strategy through Actions rather than introduce an entirely new experience.
At the time, competitive tools clustered around either top-of-funnel automation or bottom-of-funnel forecasting. The middle of the funnel—where strategy execution mattered most—was underserved and poorly defined.
Analyst conversations reinforced this gap, framing the problem less as missing features and more as missing models for how strategy should operate during active deals.
Design implication: Plays was framed as a platform-level execution capability rather than a feature—prioritizing mechanism clarity over surface area.
Across users and the broader market, the signal was consistent: sales teams didn’t need more guidance—they needed better-timed, context-aware reinforcement that respected how work already happened. Groove Plays was designed to meet that need by focusing on execution state, restraint, and trust as first-order design concerns.
The core strategic move behind Groove Plays was to invert how sales execution systems were typically designed. Rather than starting with automation, recommendations, or analytics, the strategy focused on a more fundamental question:
Can we reliably understand what has actually happened before telling a rep what to do next?
This reframing shifted the product from an action-pushing system to an execution-aware system—one that treated detection and restraint as prerequisites for any form of reinforcement. The goal was not to optimize activity, but to close the gap between strategy and behavior without introducing noise or micromanagement.
To make that viable, the strategy centered on three deliberate choices: treating detection of execution state as a prerequisite for any reinforcement, prioritizing execution surfaces over configuration, and treating silence as a deliberate success condition rather than a failure mode.
The North Star for Groove Plays was a system where sales strategy behaved less like documentation and more like software—defined once, continuously evaluated, and selectively reinforced.
At its simplest, the intended experience followed a repeating loop: define the strategy once, continuously evaluate execution state against it, reinforce through an Action only when a genuine gap was detected, and stay silent when execution was already aligned.
This loop allowed Plays to support consistency without imposing rigidity—and to scale strategy without scaling oversight.
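This loop (evaluate state, intervene only on a genuine gap, stay silent otherwise) can be sketched as follows. The sketch is an assumption-laden illustration, not the actual system: `Play`, `run_play`, and the context fields are hypothetical names.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Play:
    """A strategy defined once, then continuously evaluated."""
    name: str
    detect_gap: Callable[[dict], bool]  # Is there a genuine execution gap?
    action_template: str                # What to reinforce, if so

def run_play(play: Play, context: dict) -> Optional[str]:
    """One pass of the loop: evaluate state, intervene only on a gap."""
    if play.detect_gap(context):
        return f"{play.action_template} ({context['opportunity']})"
    return None  # Execution already aligned: silence is the success state

# Hypothetical Play: a follow-up is expected after every demo.
follow_up = Play(
    name="post-demo follow-up",
    detect_gap=lambda ctx: not ctx["follow_up_sent"],
    action_template="Send follow-up email",
)

# Rep already executed: the system stays silent.
print(run_play(follow_up, {"opportunity": "opp-123", "follow_up_sent": True}))
# Genuine gap: an Action is generated inside the rep's workflow.
print(run_play(follow_up, {"opportunity": "opp-123", "follow_up_sent": False}))
```

Note that the loop scales strategy without scaling oversight: adding a Play adds one definition, not one more stream of notifications.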
Several tenets guided decision-making throughout the project and were used to evaluate trade-offs and scope.
From the outset, the design approach centered on a clear architectural distinction: where strategy is defined versus where strategy is experienced. While it was tempting to focus early on authoring tools and visual builders, the work consistently returned to a simpler question—where would reps actually feel the impact of this system?
The answer was unambiguous: inside Actions, the existing unit of work through which reps managed tasks, emails, calls, and follow-ups. Groove Plays would live or die not by how elegantly strategies were configured, but by how credibly they showed up in the flow of real work.
This led to a two-part experience architecture: a builder, where strategy was defined by admins and RevOps, and Actions, where strategy was experienced by reps.
Rather than advancing both in parallel, the approach intentionally prioritized execution first.
One of the most consequential design decisions was to deprioritize a polished builder experience in favor of stabilizing the execution surface.
There was early pressure—from both a product and demo perspective—to invest in a highly simplified, visually compelling builder that sales leaders or reps could use directly. While attractive on the surface, this direction failed several real-world tests: authoring reliable Plays required fluency in Salesforce data models, the realistic early authors were admins and RevOps rather than leaders or reps, and a compelling builder meant little if reps did not trust the Actions it produced.
The alternative approach—execution-first—was grounded in behavioral realism: reps would only trust Plays if its guidance showed up credibly inside Actions, in the flow of real work.
This decision ultimately led to the redesign of Groove Actions, which became the primary expression of Plays for reps.
Several attractive alternatives—each appealing in isolation—were intentionally deprioritized or rejected because they failed the test of trust, restraint, or sequencing.
At the interaction level, the design focused on a tight execution loop rather than expansive flows. The core questions guiding each interaction were: Has this action already happened? Is intervention genuinely needed here? And if so, how can guidance appear without feeling like surveillance?
Actions generated by Plays were designed to feel indistinguishable from organic work—except for their contextual relevance. Visual hierarchy, labeling, and microcopy emphasized clarity and intent, while avoiding language that suggested surveillance or enforcement.
Just as important were the moments where nothing appeared. Significant design effort went into defining and preserving non-intervention states, ensuring the system did not surface redundant guidance when execution was already aligned with strategy.
Several compromises were knowingly accepted in service of validating the core model; each trade-off favored learning speed, trust, and behavioral signal over completeness.
Several adjacent capabilities were explored alongside the core execution model but intentionally sequenced out of the initial scope. These were not rejected; they were deferred to avoid expanding surface area before the system proved it could detect and reinforce execution reliably.
Strategy builder for sales leaders (deferred)
The long-term vision for Groove Plays included evolving the same admin- and RevOps-focused builder to better support sales managers and leaders over time—not introducing a separate authoring experience.
Rather than broadening the builder’s audience prematurely, these simplification efforts were intentionally deferred until the execution model proved trustworthy. The assumption was that easing authoring only mattered once downstream behavior reliably matched strategic intent.
Notification-driven recommendation layer (deferred)
We also explored extending Groove’s Notification Hub to surface Play-driven actions in a centralized, fast-response surface.
This capability was explicitly marked as a future enhancement, designed to layer on top of a proven, execution-aware foundation rather than lead it.
Why this sequencing mattered
The guiding principle was consistent throughout: prove execution awareness first, then expand capability with confidence.
The goal of this phase was not to exhaustively document every possible state of Groove Plays, but to make the execution model tangible through selected, representative screens. Each artifact was chosen to demonstrate how strategic intent translated into real work—and how design decisions preserved trust, clarity, and restraint.
The primary execution surface for Groove Plays was Actions, the core unit of work used by reps to manage tasks, emails, calls, and follow-ups. Rather than introducing a new interface for Plays, the system expressed strategy directly through Actions, ensuring that guidance appeared where work already happened.
Key qualities demonstrated in the Actions redesign included contextual relevance, clear visual hierarchy and labeling, and microcopy that conveyed intent without suggesting surveillance or enforcement.
These decisions ensured that Plays felt like a natural extension of a rep’s workflow—not an overlay competing for attention.
The following screen shows how a Plays-generated Action appeared to a sales rep inside their existing workflow within the web application.
The same Play-generated Action surfaced inside Google Calendar via the Groove Chrome browser extension, using the identical compose dialog and execution model—allowing reps to act without returning to the core application.
A critical aspect of execution quality was what didn’t appear. Significant design effort went into defining and preserving non-intervention states—moments when the system intentionally stayed silent because execution was already aligned with strategy.
This restraint was treated as a first-class design concern: Design QA focused as much on validating absence as presence, ensuring that silence was meaningful, not accidental.
Early concepts for the strategy builder explored ways to express complex logic—conditions, timing, and scope—without overwhelming users. These concepts intentionally remained at a lower fidelity and were not polished for handoff.
This was a conscious quality decision: by keeping builder concepts lightweight, design effort stayed concentrated on the experience that mattered most for adoption.
While reps experienced Plays exclusively through Actions, those Actions were the result of structured execution logic authored upstream. The Play builder was designed for admins and RevOps—users already fluent in Salesforce data models—to translate sales strategy into detectable, enforceable behavior.
This was intentionally not a general-purpose workflow builder. The goal was precision and reliability, not visual flexibility. Precision and reliability here were prerequisites for restraint downstream.
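To make "detectable, enforceable behavior" concrete, a Play definition might take a shape like the following. This is a hypothetical illustration of the kind of structure the builder produced, not its actual data model; every field name here is an assumption.

```python
from dataclasses import dataclass

@dataclass
class PlayDefinition:
    """Hypothetical shape of a Play authored in the builder.

    Each field maps strategy onto data an admin or RevOps user can
    actually verify against Salesforce records: a triggering event,
    an expected action, a time window, and the scope it applies to.
    """
    name: str
    trigger_event: str     # e.g. a completed demo meeting
    expected_action: str   # e.g. a logged follow-up email
    window_hours: int      # intended execution window
    scope: str             # which records the Play applies to

# The 24-hour follow-up example from earlier in the case study:
post_demo_follow_up = PlayDefinition(
    name="Post-demo follow-up",
    trigger_event="demo_completed",
    expected_action="email_sent",
    window_hours=24,
    scope="open_opportunities",
)
```

The constrained shape is the point: a small set of verifiable fields keeps detection precise, which is what makes restraint possible downstream.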
To maintain design integrity as the work evolved, several safeguards were applied.
Quality was measured less by visual polish and more by whether the system behaved as designed—intervening only when needed and disappearing when it wasn’t.
Groove Plays did not progress to a full public launch, which shaped how outcomes could be measured. Rather than relying on GA metrics or revenue attribution, evidence was triangulated across user response, expert signal, and organizational impact to assess whether the execution model itself was viable and differentiated.
Early validation focused on whether the core premise—state-aware reinforcement without micromanagement—resonated with users.
Early signals reinforced that adoption hinged less on feature breadth and more on when and how the system intervened.
Beyond individual users, Groove Plays generated meaningful signal at the leadership and market level.
Although Groove Plays was not released in full, one critical dependency did ship: the redesigned Groove Actions experience, which became the primary execution surface for reps.
Note: Signals reflect early-stage validation under limited rollout conditions.
While Groove Plays did not reach full market validation, the work successfully reframed the problem as a strategy-to-behavior gap, validated the state-aware execution model with users and leadership, and shipped the redesigned Actions surface as durable product value.
Taken together, these outcomes demonstrate sound judgment in defining, validating, and sequencing a complex 0→1 system under real organizational and technical constraints.
While Groove Plays was ultimately deprioritized after Groove's acquisition by Clari, the underlying execution model informed a provisional patent filing—reflecting recognition of the system's novelty in translating sales strategy into state-aware, enforceable execution logic.
Early discussions framed the challenge as a tooling or automation gap. I repeatedly reframed it as a strategy-to-behavior problem—shifting conversations away from features and toward execution state, trust, and restraint. This reframing changed what we designed, how we sequenced work, and ultimately how leaders evaluated the opportunity.
There was consistent pressure to invest early in a polished strategy builder because it was easier to explain and demo. I pushed back—sometimes repeatedly—arguing that without addressing the experience of Actions, no one would care what Plays could do. That judgment led to prioritizing the execution surface, a decision that both validated the core model and produced durable, shipped value.
As interest in AI-forward approaches emerged, I worked to balance long-term ambition with near-term feasibility. Rather than over-investing in intelligence before the data substrate was ready, I advocated for proving the behavior model first. This helped ground the work in what could realistically be validated at the time, while still pointing toward a future direction.
In hindsight, I would involve Engineering slightly earlier in stress-testing data assumptions, even while keeping design-led exploration intact. Sequencing engineering engagement after conceptual clarity was still the right call, but earlier validation of Salesforce event semantics and edge cases would have reduced downstream uncertainty and accelerated convergence on feasible detection patterns.
I would also instrument lightweight behavioral signals sooner—simple proxies to observe how often non-intervention occurred versus reinforcement. Even without full analytics, this could have strengthened early evidence and sharpened prioritization as organizational momentum shifted post-acquisition.
Groove Plays reinforced a core leadership lesson for me: not all impactful work ships, but all strong judgment leaves a trace. The value of this project lies not in a launch metric, but in the clarity it brought to a hard problem, the discipline of its sequencing, and the execution-first decisions that shaped both the product and the team’s thinking.