Established operating clarity, quality standards, and team systems in a Series D sales engagement tech company.
![An open laptop in an office](https://cdn.prod.website-files.com/68235c9a6fae25c90dd07a14/6924e310dfac4f5d3c8deeaa_1_0KLvpsID4atvNgIYY3rzVA.webp)
When I joined Groove, a Series D company, the product and business were scaling faster than the design organization supporting them. Rather than focusing solely on output or headcount, I approached the role as a systems design problem: defining how design decisions would be made, how quality would be evaluated, and how a small team could operate with the clarity and effectiveness required to scale. The work that followed was less about building a team in isolation and more about designing an operating model that allowed design to grow alongside the product, adapt to increasing complexity, and remain durable as the company evolved.
Groove was a Series D sales engagement tech company operating in a competitive, fast-moving market. The product was expanding in scope, customer needs were becoming more nuanced, and expectations of design’s impact were rising alongside the business. As is typical at this stage, demand for design accelerated faster than the systems needed to support it.
The challenge was not talent or ambition. It was structural. Product Design was expected to move quickly, support multiple initiatives, and raise the overall quality bar, all while operating without a clearly defined model for decision-making, critique, or quality ownership.
This is a familiar inflection point for startups and scale-ups. Without early intervention, design teams in this phase often optimize for short-term delivery while inconsistencies accumulate across the product. Over time, this erosion shows up as slower decision-making, cross-functional friction, and diminished trust in design’s strategic value.
The stakes were clear.
The goal was not to add process or slow the team down. It was to establish clarity early so design could scale with intention rather than correction.
My leadership approach at Groove was grounded in a model I’ve relied on repeatedly when building and scaling product teams: Total Football.
In Total Football, players are not locked into rigid roles. Each player has a primary position, but success depends on collective awareness, fluid collaboration, trust, and the ability to step into adjacent responsibilities when the situation demands it. The system works because roles are clear, trust is high, and decision-making is distributed without becoming chaotic.
That mental model strongly influenced how I structured the Groove design team and how I partnered with product and engineering. The goal was not to create interchangeable generalists, nor to blur accountability. It was to build a team that could adapt as the product and company evolved, without relying on constant top-down direction.
I operate most effectively as a player-coach—establishing direction and systems while staying close enough to the work to model judgment, quality, and decision-making in real time.
Each designer at Groove had a clear ownership area and accountability for their work. That clarity mattered. At the same time, designers were expected to understand the broader product context and contribute beyond the narrow boundaries of a single surface or feature when needed.
This balance allowed the team to stay flexible without fragmenting. Designers could support one another during periods of peak demand, respond quickly to shifting priorities, and maintain continuity across the product experience as scope expanded.
In a Total Football system, decision-making happens close to the action. I applied the same principle here. Designers needed to know which decisions they owned outright and which required alignment. Making this explicit reduced hesitation and prevented work from stalling while waiting for validation.
I stayed involved where it mattered most: setting direction, calibrating quality, and stepping in when decisions had broader system-level implications. Beyond that, I deliberately avoided becoming a bottleneck. The operating model was designed to scale judgment across the team, not centralize it.
Critique played a central role in reinforcing this model. Rather than relying on ad-hoc feedback or isolated reviews, we established a consistent critique cadence that treated design review as a shared responsibility. Critique was not about consensus or approval. It was about sharpening thinking, stress-testing decisions, and ensuring individual contributions strengthened the system as a whole.
As a player-coach, I regularly participated in critique not to centralize decisions, but to model how to evaluate trade-offs and connect craft choices back to user and business context.
Just as importantly, critique became a mechanism for shared learning across the team. By regularly reviewing work outside of their immediate ownership areas, designers absorbed critical context about other product surfaces, user needs, and business constraints they weren’t directly exposed to day to day. Over time, this built stronger product intuition across the team and reduced local optimization, as designers were better equipped to anticipate downstream impacts and align decisions to broader product goals.
Designers were encouraged to challenge one another constructively and to engage across areas of ownership, much like players reading the field together rather than operating in isolation. The result was not only better work, but a team that developed shared judgment faster than any individual could alone.
Quality was not positioned as something enforced at the end of the process. It was treated as a collective outcome of good decisions made throughout the work. This meant designers could intentionally balance speed and rigor based on context, without defaulting to either extreme.
Because expectations were clear, quality discussions became pragmatic rather than subjective. The team developed a shared understanding of what “good” looked like at different stages of work, which reduced rework and built confidence across functions.
The Total Football analogy extended beyond design. I worked closely with product and engineering leaders to establish mutual expectations around collaboration, trade-offs, and ownership. Design was not framed as a service function or a downstream dependency. It was an equal partner in shaping product direction.
Over time, this shifted cross-functional conversations away from feasibility checks and toward solving the right problems for customers and the business. Trust grew not through alignment meetings, but through consistent execution within a shared system.
Throughout this work, I was intentional about where I was hands-on and where I stepped back. I modeled the behaviors I expected from the team, reinforced the operating principles through action, and avoided over-structuring. The objective was to build a team that could adapt, support one another, and scale with the company—without losing clarity, accountability, or quality.
That model only worked because it was reinforced through concrete systems and rituals, not just intent.
Once the leadership model was established, the next challenge was operationalizing it without turning design into a process-heavy organization. The goal was not to introduce ceremony for its own sake, but to create enough structure that a small team could operate with the clarity and consistency of a much larger one.
Every system I put in place was anchored to a specific problem we were encountering as Groove scaled, particularly as we shifted from primarily SMB and mid-market needs toward more enterprise-oriented expectations around quality, reliability, and trust.
This balance—hands-on contribution paired with system design—allowed me to stay close to the work early, then flex my involvement as leverage shifted.
As design’s influence grew, so did the need for more rigorous and repeatable user research. Ad-hoc interviews and informal feedback were no longer sufficient. I introduced a structured research approach that emphasized consistency, traceability, and relevance to real product decisions.
At the same time, the process was designed to flex based on context. Some initiatives required directional insight within days, while others benefited from deeper, longitudinal understanding. Rather than defaulting to a single level of rigor, we calibrated research methods to the nature and urgency of the problem, ensuring speed when needed without abandoning discipline.
Research efforts were framed deliberately, with clear plans and interview guides that aligned methods to specific questions. We used a mix of heuristic analysis, cognitive walkthroughs, and competitive review depending on the scope and timeline of the work. Remote interviews allowed us to reach a broader range of users quickly, while more involved studies were reserved for product areas with longer horizons or higher strategic risk.
To make insights usable across the team, research data was organized in Airtable and synthesized collaboratively in Mural through affinity mapping, user flow analysis, and journey mapping. This wasn’t about producing artifacts for presentation. It was about creating a shared understanding that could reliably inform decisions across product, design, and engineering.
The result was deeper, more credible insight and a noticeable shift in how design decisions were discussed. Conversations moved from opinion-based debate to evidence-informed trade-offs, without slowing the pace of delivery.
As the product surface area expanded, consistency became a growing risk. I began addressing this by building the first formal design system at Groove, starting with a comprehensive audit of the existing product experience.
The initial focus was intentionally narrow. We standardized foundational components such as buttons, form fields, and basic layout patterns. This created immediate gains in cohesion without overwhelming the team or constraining exploration.
As the system matured, we invested in a second iteration that leveraged newer capabilities in Figma. Components were restructured to be more modular and flexible, allowing designers to assemble complex interfaces more efficiently. We also revisited the color system, moving from a linear scale to a logarithmic approach. This enabled better hierarchy, improved accessibility, and more nuanced visual emphasis, particularly important as we evaluated enterprise use cases.
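To make the linear-versus-logarithmic distinction concrete, here is a minimal sketch, illustrative only and not Groove's actual token values: spacing a tint ramp's lightness steps on a logarithmic curve clusters finer steps toward the light end of the ramp, which gives more options for subtle backgrounds and surface states while keeping larger jumps between darker, higher-contrast values.

```python
# Illustrative sketch only: compares evenly spaced vs. logarithmically spaced
# lightness steps for a hypothetical 10-step color ramp. The step count and
# lightness bounds are assumptions for demonstration, not Groove's real scale.
import math

def linear_ramp(steps: int, lo: float = 5.0, hi: float = 95.0) -> list[float]:
    """Lightness values (in %) spaced evenly between lo and hi."""
    return [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]

def log_ramp(steps: int, lo: float = 5.0, hi: float = 95.0) -> list[float]:
    """Lightness values spaced on a logarithmic curve: larger jumps among
    dark values, finer steps among light tints used for backgrounds."""
    return [
        lo + (hi - lo) * (math.log1p(i) / math.log1p(steps - 1))
        for i in range(steps)
    ]

if __name__ == "__main__":
    print([round(v, 1) for v in linear_ramp(10)])  # even 10-point increments
    print([round(v, 1) for v in log_ramp(10)])     # steps shrink toward 95%
```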
Rather than treating the design system as a static asset, it functioned as a living foundation. It reduced friction, supported faster iteration, and ensured that quality scaled alongside product ambition.

Design-to-engineering handoff had historically been informal, often relying on screenshots or direct Figma links embedded in Jira tickets. As complexity increased, this approach began to introduce ambiguity and rework.
Introducing Zeplin created a clearer contract between design and engineering. It provided a single, interpretable source for specifications, measurements, and assets, reducing guesswork and improving fidelity while also minimizing the designer effort required for handoff. The transition required deliberate enablement. I invested time in training, documentation, and alignment to ensure the tool supported collaboration rather than becoming a point of friction.
This shift wasn’t about tooling preference. It was about making expectations explicit and reducing the cognitive load on both teams, particularly as we worked toward higher-quality, more enterprise-ready outcomes.
As delivery speed increased, I introduced a dedicated design QA moment to protect design intent without slowing engineering. This was not a traditional gate, but a focused review of pre-production builds to catch discrepancies early.
Reviews emphasized a small number of critical dimensions: typography and color accuracy, layout and spacing, interaction patterns, and system feedback. A lightweight checklist helped ensure consistency, while regular post-review conversations with engineering reinforced shared ownership rather than blame.
Design QA reduced last-minute surprises, improved fidelity, and strengthened trust between functions. More importantly, it shifted quality from a reactive correction to a shared responsibility.
With hiring constrained by market conditions, I looked for leverage wherever manual coordination was slowing the team down. This led to targeted automation and early experimentation with AI-assisted workflows (pre-ChatGPT).
In Jira, I implemented automations that linked design and engineering work from the outset. Design tickets were automatically created when engineering work required design input, and proactive notifications alerted designers when work was moving toward implementation. These small interventions eliminated common bottlenecks and reduced the need for constant status synchronization.
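The automation itself was built with Jira's native automation rules rather than custom code, but the effect was roughly what this sketch against the Jira Cloud REST API describes. The project keys, link type, and credentials below are hypothetical placeholders, not Groove's actual configuration.

```python
# Minimal sketch (not the actual Jira automation rules): when an engineering
# issue needs design input, create a design ticket and link it back so status
# stays visible to both teams. "ENG"/"DES" keys and auth values are placeholders.
import requests

JIRA_BASE = "https://your-domain.atlassian.net"
AUTH = ("me@example.com", "api-token")  # Jira Cloud uses email + API token

def create_linked_design_ticket(eng_key: str, summary: str) -> str:
    """Create a design ticket in the DES project, link it to the originating
    engineering issue, and return the new issue key."""
    # 1. Create the design issue
    resp = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue",
        json={
            "fields": {
                "project": {"key": "DES"},
                "summary": f"Design: {summary}",
                "issuetype": {"name": "Task"},
            }
        },
        auth=AUTH,
    )
    resp.raise_for_status()
    design_key = resp.json()["key"]

    # 2. Link it back to the engineering issue
    requests.post(
        f"{JIRA_BASE}/rest/api/2/issueLink",
        json={
            "type": {"name": "Relates"},
            "inwardIssue": {"key": eng_key},
            "outwardIssue": {"key": design_key},
        },
        auth=AUTH,
    ).raise_for_status()
    return design_key
```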
In parallel, I began experimenting with generative AI to augment design work. Tools like OpenAI's GPT APIs, Voiceflow, and early Zapier integrations were used pragmatically, not as novelty. They supported brainstorming, early validation, heuristic evaluation, and UI copy generation, particularly as we aligned generated copy with our voice and tone requirements.
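As a rough illustration of what AI-assisted copy drafting looked like in that pre-ChatGPT era, here is a sketch using the completions-style OpenAI Python SDK of the time. The model name, prompt, and tone guide are assumptions for the example, not Groove's actual setup.

```python
# Illustrative sketch of AI-assisted UI copy drafting using the legacy
# (pre-1.0) OpenAI Python SDK and completions endpoint. Prompt wording,
# tone guidance, and model name are assumptions, not Groove's implementation.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

TONE_GUIDE = (
    "Voice and tone: plain, confident, and helpful. "
    "Avoid jargon; keep button labels under three words."
)

def draft_ui_copy(context: str, n_options: int = 3) -> list[str]:
    """Generate candidate UI copy for a given screen context.
    Drafts were always reviewed and edited by a designer before use."""
    response = openai.Completion.create(
        model="text-davinci-002",  # illustrative; any completions model of the era
        prompt=(
            f"{TONE_GUIDE}\n\n"
            f"Screen context: {context}\n"
            f"Write {n_options} short copy options, one per line:"
        ),
        max_tokens=150,
        temperature=0.7,
    )
    text = response["choices"][0]["text"]
    return [line.strip("- ").strip() for line in text.splitlines() if line.strip()]
```

Output like this served as raw material for brainstorming and heuristic checks, never as final copy.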
AI was treated as a collaborator that accelerated thinking and reduced overhead, allowing the team to focus more time on problem-solving and less on repetitive tasks. This approach helped maintain momentum and quality during a period of constrained resources.
Taken together, these systems allowed a small design team to operate with clarity, consistency, and resilience. The work was rarely visible externally, but it fundamentally changed how the team functioned and how design showed up across the organization.
As Groove continued to grow, the design challenges shifted from speed and coverage toward consistency, trust, and readiness for a more enterprise-oriented future. Several decisions I made during this period were critical in setting the team up to scale without losing coherence.
When I joined, design output was moving, but the bar for quality and ownership was uneven. I began by setting clear expectations for craft, decision-making, and accountability. This wasn’t about imposing personal taste. It was about establishing a shared standard that aligned with where the product and customer base were headed, particularly as we began to feel increasing pressure from mid-market and enterprise buyers.
Early on, it became clear that one of the existing product designers was not a strong fit for the role or the direction the company was moving. I approached this the way I always do, with coaching, clear feedback, and concrete opportunities to improve. Over time, it became evident that the gap wasn’t closing.
Choosing to part ways was not about performance in isolation. It was about protecting the integrity of the team and the operating model we were putting in place. In early-stage organizations, misalignment at this level compounds quickly. Making the decision when I did allowed us to move forward with clarity rather than carry structural debt into the next phase of growth.
As I rebuilt and grew the team, I hired designers who were comfortable operating across problem spaces and collaborating fluidly with product and engineering. This aligned directly with the Total Football model. Each designer had clear ownership, but no one was boxed into a single lane. This flexibility was especially important as Groove worked to rebalance away from SMB-centric needs and toward more enterprise-grade expectations around reliability, consistency, and trust.
In the early stages, I was deeply involved in direct design contribution alongside building the team and its operating model. This wasn’t just a means of filling gaps—it was how I prefer to lead. Being hands-on allowed me to establish quality expectations through real work, unblock critical initiatives, and stay close to the product and users as the foundation of the team was taking shape.
As the organization matured, my role evolved, but it never became hands-off. Instead, I shifted toward a player-coach model where I spent more time working strategically across systems, priorities, and partnerships, while remaining able to zoom in tactically wherever it created the most leverage. That might mean jumping into a complex workflow, pressure-testing a key interaction, or partnering directly with a designer on a high-stakes area.
The goal was not to remove myself from the work, but to avoid becoming a dependency. By designing systems that could operate independently, I preserved the flexibility to contribute meaningfully at the right moments—maintaining product intimacy and craft rigor while ensuring the team could scale without bottlenecks.
We grew the team to four designers, then paused hiring as market conditions shifted. Rather than forcing growth, we adjusted expectations and focused on making the existing team more effective. This restraint helped maintain quality and avoided overextending the organization during a period of broader uncertainty.
Just as important as the decisions I made were the decisions I chose not to make.
As design demand increased, it would have been easy to grow the team aggressively. Instead, I prioritized clarity, quality, and team health. This ensured that when we did hire, new designers were joining a stable system rather than adding to organizational noise.
Despite increasing complexity and enterprise pressure, I resisted the temptation to formalize everything. The team needed shared judgment more than rigid workflows. Lightweight systems scaled better and preserved the flexibility required during this stage of growth.
Rather than defaulting to heavyweight research for every initiative, we calibrated methods to context—moving fast with directional insight when timelines were tight, and going deeper where decisions carried higher risk or longer horizons. This avoided performative research while preserving rigor, and ensured evidence informed decisions without becoming a bottleneck.
Design was never framed as an execution arm responding to tickets. From the beginning, I worked to establish design as a partner in shaping product direction, particularly as Groove evaluated how to move upmarket and expand beyond its core tech audience.
While we intentionally worked to reduce SMB focus and support more enterprise needs, we were careful not to break what made the product successful in the first place. Attempts to scale beyond the tech market moved more slowly and with mixed results, but the underlying product experience remained coherent and grounded.
These trade-offs reflected a broader philosophy: scale intentionally, protect quality, and avoid decisions that optimize for optics over outcomes.
The impact of this work showed up less in single launch moments and more in how the team operated over time. As Groove continued to evolve, the design organization demonstrated consistency under changing conditions rather than volatility.
Design decisions became easier to make and easier to trust. Product and engineering partners had a clearer understanding of how design evaluated trade-offs, which reduced rework and shortened alignment cycles. As expectations shifted toward more mid-market and enterprise-oriented needs, the product experience held together rather than fracturing across surfaces.
From a team perspective, designers operated with increasing confidence and autonomy. Critique became a normal part of how work progressed, not an exception. Quality discussions were grounded in shared standards rather than personal preference, which made collaboration more predictable and less dependent on individual personalities.
Over time, the critique model also accelerated how quickly designers developed product intuition beyond their immediate areas of ownership. Regular exposure to work across the product meant that context about users, business constraints, and adjacent workflows spread naturally through the team. This reduced local optimization, improved decision-making at the edges, and made the organization less dependent on any single individual for system-level understanding. Alongside it, the player-coach model helped sustain product intuition and quality even as my direct involvement shifted based on need rather than role definition.
Perhaps most importantly, the design organization proved durable. The operating model did not rely on constant oversight or heroic effort. It continued to function through hiring pauses, shifting priorities, and broader company change without needing to be redefined or rebuilt. What we put in place held.
This case reflects how I approach building and scaling design teams in growth-stage companies.
I lead with range—comfortable contributing directly when it accelerates outcomes, and equally comfortable stepping back once systems and judgment are in place.
I focus first on establishing clarity: how decisions are made, how quality is evaluated, and how design partners with the rest of the organization. From there, I design systems that scale judgment rather than control, allowing teams to adapt as products, customers, and market conditions change.
I’m comfortable making difficult calls when alignment or quality is at risk, staying hands-on when it accelerates learning or momentum, and deliberately transitioning away from that mode as teams mature.
Groove was not a one-off situation. The patterns here—early structure, clear standards, distributed ownership, and intentional restraint—are ones I’ve applied before and continue to apply as organizations move from startup to scale-up and beyond.
With hindsight, one area I would invest in earlier is making parts of the operating model more visible across the company, not just within design. While the system worked well internally, surfacing certain principles and expectations sooner could have accelerated cross-functional alignment as the company began to push further into enterprise use cases.
That refinement wouldn’t change the core approach. It would simply make the underlying logic more legible to partners earlier in the journey.