User-Centric Focus: Why AI Can Hurt Team Performance Without It

In my last post, I introduced the AI Productivity Paradox—how organizations with impressive adoption metrics are seeing performance decline instead of improvement. The DORA research identified seven organizational capabilities that influence AI outcomes. But one finding caught me off guard.

On teams lacking user-centric focus, AI adoption decreases team performance. The impact is actively harmful.

This perplexed me when I first saw the data. The same AI tools that boost performance on user-focused teams harm performance on teams without that focus. Why would user-centric focus make such a difference?

What User-Centric Focus Actually Means

User-centric focus means your team consistently asks “What problem are we solving for users?” before asking “What can we build?” And then: How does this add user value? How does it improve usability?

Your metrics emphasize outcomes—user value delivered, problems solved, experiences improved—over outputs like features shipped, story points completed, lines of code written, or even AI-specific metrics like how many developers are using AI tools, how many AI-driven pull requests are created, or how many code reviews use AI assistance.

User-centric also means you’re reducing the risk that what you’re building won’t be usable or won’t deliver value. This requires showing something to users frequently. It means building prototypes or proofs of concept that get thrown away, not checked into production. AI makes this pace possible: you can build 20 prototypes in 20 days, improving the user value each time.

Your demos start with the user problem being solved, not the technology used. Your roadmap conversations start with user needs and pain points. Feature requests and technical possibilities get evaluated against those needs. Sometimes users struggle to articulate solutions because they don’t know what technology can make possible – that’s where product and tech leaders bridge between the user’s problem and what technology can do. You still create solutions users haven’t explicitly asked for – don’t limit yourself to stated requests. The discipline is in validating frequently: build it, show it, see whether it resonates and adds value.

This isn’t about user-research theater or design-thinking workshops. It’s about whether user value is the actual north star guiding daily decisions.

How AI Without User-Centric Focus Creates Product Debt

AI is phenomenal at generating output. Code, documentation, tests, configurations—faster than any human. If your team isn’t grounded in what users actually need, AI just helps you build the wrong things faster.

I’m seeing this pattern: Teams use AI to rapidly build features that get shipped, add complexity to the product, and require ongoing maintenance. But users don’t adopt them. Some features actually confuse users or create friction in existing workflows.

You’ve created product debt—features that exist in your codebase, consume maintenance effort, increase complexity, but deliver zero user value. Worse, they may actively harm the user experience.

But the deeper problem is what teams do with the iterations AI enables them to create so quickly.

When you’re user-centric, you build iteration n, show it to users (or test it yourself against user needs), and discover it’s not yet good enough in terms of user value. So you throw it away. You build iteration n+1. You keep iterating. AI empowers you to build 20 prototypes in 20 days, each one improving on user value. You’re learning fast, and the throwaway work never becomes production code.

If you’re not user-centric, you don’t have the filter to know that iteration n should be thrown away. It seems fine. It works technically. So you productionize it. You’ve just created product debt by shipping something that didn’t need to exist.

This is the key problem with not being user-centric in an AI-enabled world. Without AI, building features was expensive and slow—you had natural constraints that forced some validation. With AI, those constraints disappear. The ability to build quickly doesn’t come with the judgment about what should be built.

How AI Without User-Centric Focus Accelerates Technical Debt

Here’s the compounding problem: every feature you build creates code. Code that needs to be maintained. Code that increases system complexity. Code that creates dependencies. Code that makes future changes harder.

When you’re building features driven by user value, that code pays for its existence. It solves real problems. Users adopt it. The maintenance burden is justified by the value delivered.

When you’re building features without clear user value, you’re just accumulating technical debt. Code that doesn’t drive valuable features. Code that still requires maintenance. Code that increases complexity. Code that increases instability because now you have more surface area for bugs.

I’m seeing organizations where AI has dramatically increased the rate of code generation. But user satisfaction hasn’t moved. System complexity has exploded. Stability has declined. The team is moving fast, but in circles.

I’m also hearing the term “cognitive debt” show up – code generated by AI that’s hard for humans to understand, especially for developers who didn’t use the AI tools to generate it. Some developers say they can use AI to understand AI-generated code, but that adds work. If cognitive debt is increasing while user value isn’t, you’re setting the organization back.

What the DORA Research Shows

The 2025 DORA report identified seven organizational capabilities that influence AI outcomes. Figure 45 on page 63 shows how these capabilities connect to different outcomes—team performance, code quality, individual effectiveness, product performance, and more.

User-centric focus shows the strongest connection to team performance in the diagram. But in my work with organizations, I’m seeing the impact extend further. Lack of user-centric focus affects organizational performance (as I define it – sustainable value creation), not just team metrics. And it impacts throughput – not in terms of code generated, but in terms of user value delivered. The DORA report doesn’t explicitly call this out, but the pattern is clear in practice.

What makes user-centric focus unique: it’s the capability whose absence correlates negatively with performance when AI is adopted. The research shows that on teams lacking user-centric focus, AI adoption decreases team performance. Not less benefit – active harm.

Why This Happens: AI Suggests Patterns, Not Solutions

AI coding assistants are trained on millions of code examples. They’re brilliant at suggesting implementations based on patterns they’ve seen. “Oh, you’re building a notification system? Here’s a comprehensive notification framework with email, SMS, push notifications, in-app alerts, preference management, and delivery tracking.”

That might be impressive engineering. It might be well-written code. But is it what your users actually need? Maybe they just need a simple email when something specific happens.
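To make that gap concrete, here’s a minimal sketch in Python. Everything in it is hypothetical (the names, the addresses, the localhost mail relay), and the commented-out outline stands in for the kind of comprehensive framework an assistant might offer:

```python
# What the assistant tends to offer: a "comprehensive" framework.
# class NotificationService:
#     """Email, SMS, push, in-app alerts, preferences, delivery tracking..."""

# What the users actually asked for: one email when one thing happens.
import smtplib
from email.message import EmailMessage

def notify_report_ready(recipient: str, report_name: str) -> None:
    """Send a single plain email when a specific event occurs."""
    msg = EmailMessage()
    msg["Subject"] = f"Your report '{report_name}' is ready"
    msg["From"] = "noreply@example.com"  # hypothetical sender
    msg["To"] = recipient
    msg.set_content("Your report has finished. Log in to download it.")
    # Assumes a mail relay on localhost; swap in your real SMTP host.
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
```

A dozen lines that solve the stated problem, versus a framework that solves problems nobody has raised yet.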

Without user-centric focus, teams take the AI suggestion and run with it. “Look at what we built!” The code is there. It works. It gets shipped. But nobody asked whether users needed 80% of those capabilities.

AI doesn’t know your users. It doesn’t know your product strategy. It doesn’t know what problems you’re trying to solve. It just knows patterns it’s seen before.

If your team doesn’t have a strong filter of “what do users actually need,” AI suggestions become feature creep on steroids.

The Cascade Effect on Team Performance

This is why the DORA research shows AI decreasing team performance on teams without user-centric focus.

More code in the codebase means:

  • Longer test suites (slower CI/CD)
  • More complex deployments
  • Harder debugging (more places for bugs to hide)
  • Slower onboarding (new developers have more code to understand)
  • More cognitive load (developers need to understand more of the system)

More features in the product means:

  • More edge cases to handle
  • More customer support burden
  • More documentation to maintain
  • Harder to reason about the product
  • More confusion for users trying to accomplish their goals

The team is generating more output. But the output is making everything harder. Cycle time increases. Quality decreases. Velocity paradoxically slows down because you’re dragging more weight.

Why AI With User Focus Creates Leverage

Now flip this around. What happens when teams with strong user-centric focus get access to AI?

They use AI to validate faster. Instead of building one approach and hoping it works, they can prototype three approaches quickly and test them with users. The fast iteration isn’t used to build more features—it’s used to find the right solution faster.

They use AI to reduce scope. “AI generated this comprehensive implementation. But users only need these two capabilities. Let’s ship that and see if we need the rest.” The AI-saved time goes into understanding the problem better, not just building more. (I’ll sketch this filter in code below.)

They catch misalignments earlier. When you can prototype quickly, you can get user feedback before investing heavily. You discover “this isn’t what users need” when it’s still cheap to change direction.

They use AI to refactor for simplicity. When they know what’s important to users, they can leverage AI to refactor code while retaining (or increasing) user value. The code gets simpler, more maintainable. Technical debt decreases instead of accumulating.

They maintain focus. The team knows what user problem they’re solving. When AI suggests elaborate solutions, they have a clear filter: “Does this serve the user problem we’re solving?” If not, it doesn’t ship.

The result: AI becomes a genuine accelerant for user value creation. The team moves faster and in the right direction. Code quality stays high because they’re only building what’s needed. Technical debt stays manageable because every feature justifies its existence through user value.
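Here’s that “reduce scope” filter as a toy sketch in Python. The capability names are hypothetical; the point is making the comparison between the AI draft and validated user needs explicit:

```python
# Hypothetical scope check: ship only what users validated,
# cut the rest of the AI-generated draft (it's cheap to regenerate later).

AI_DRAFT_CAPABILITIES = {
    "email", "sms", "push", "in_app",
    "preference_management", "delivery_tracking",
}
VALIDATED_USER_NEEDS = {"email", "in_app"}  # e.g. from user interviews

def scope_to_ship(draft: set, validated: set) -> dict:
    """Split a draft's capabilities into 'ship' and 'cut' buckets."""
    return {"ship": draft & validated, "cut": draft - validated}

decision = scope_to_ship(AI_DRAFT_CAPABILITIES, VALIDATED_USER_NEEDS)
print("Ship:", sorted(decision["ship"]))  # ['email', 'in_app']
print("Cut: ", sorted(decision["cut"]))
```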

The Business Impact Difference

Let me make this concrete with two hypothetical scenarios based on patterns I’m observing:

Company A: Low User-Centric Focus + AI Adoption

  • Developers generate 3x more code with AI assistance
  • Features shipped increase 2x
  • User engagement flat or declining
  • Customer support tickets increase (more features = more confusion)
  • System complexity increases dramatically
  • Cycle time for new features starts increasing (despite AI)
  • Team velocity paradoxically slows
  • Product becomes harder to use, not easier
  • NPS declines

Company B: High User-Centric Focus + AI Adoption

  • Developers generate 2x more code initially, but ship less
  • Features shipped carefully curated against user needs
  • User engagement increases
  • Customer support stable or improved (right features, well-designed)
  • System complexity controlled or possibly improved (only building what’s needed)
  • Cycle time stays stable or improves (less technical debt)
  • Team velocity increases sustainably
  • Product becomes more focused, more valuable
  • NPS increases

The difference isn’t the AI tools. The difference is whether user value is the actual filter for what gets built.

What This Means for Leaders

If you’re a business leader, product leader, or technology leader considering AI adoption, this is the most important thing to get right.

Before you invest in AI coding assistants, ask: Do we have strong user-centric focus?

If the answer is no—if your teams optimize for story points, feature counts, or “keeping busy”—AI will accelerate you in the wrong direction. You’ll build more things users don’t need, faster.

If the answer is yes—if your teams consistently validate against user needs, measure outcomes not outputs, and have the discipline to say no to features that don’t serve users—AI becomes a genuine competitive advantage.

How to Build User-Centric Focus

This isn’t something you can install or buy. It’s built through:

Clear metrics: What are you actually measuring? If it’s output (features shipped, story points), you’re optimizing for the wrong thing. Shift to outcome metrics (user value delivered, problems solved, user satisfaction). I’ll sketch what that shift can look like after this list.

Product discipline: Strong product management that maintains focus on user problems. PMs who can say “this isn’t what users need” and make it stick.

Regular user contact: Engineers and designers talking to users, not just receiving requirements through intermediaries. When your team sees users struggle, they build different things.

Outcome-based roadmaps: Your roadmap should describe user problems to solve, not features to build. The team figures out how to solve those problems. Sometimes the answer is less code, not more.

The discipline to say no: In an AI-enabled world, you can build almost anything quickly. The competitive advantage isn’t building more things. It’s building the right things.
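On the “clear metrics” point: here’s a minimal sketch, in Python, of what writing the shift down might look like. Every metric name and value here is hypothetical:

```python
# Output metrics: easy to count, weak proxies for user value.
output_metrics = {
    "features_shipped": 14,
    "story_points_completed": 87,
    "ai_assisted_pull_requests": 52,
}

# Outcome metrics: harder to gather, tied to actual user value.
outcome_metrics = {
    "new_feature_adoption_rate": 0.31,  # share of users who use it
    "task_success_rate": 0.78,          # users completing the target workflow
    "support_tickets_per_feature": 4.2,
    "nps_change_since_release": 3,
}
```

The first dictionary is what AI inflates for free. The second is what actually tells you whether the work mattered.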

The Path Forward

Strong product management matters more in an AI-enabled world, not less. Clear user outcome metrics matter more, not less. Tight feedback loops with users matter more, not less.

AI is also changing the product/engineering dynamic – product and design folks can now build prototypes without waiting for developers. More on this in a future post.

If you get user-centric focus right, AI becomes a genuine accelerant. You move faster toward user value. You validate faster. You learn faster. You adapt faster.

If you don’t get it right, AI just helps you accumulate product debt and technical debt faster.

The DORA research is clear: this isn’t optional. This is the capability that determines whether AI helps or hurts.


I’m curious about your experience: How is AI changing the balance between building fast and building the right thing in your organization? What’s working? What’s challenging? Share your observations in the comments.

Coming next: In my next post, we’ll explore how working in small batches and maintaining a clear AI stance amplify (or undermine) the foundation of user-centric focus.



Research source: 2025 Accelerate State of DevOps Report – DORA AI Capabilities Model

About the author: Jawahar Malhotra is a technology executive turned leadership coach, working with CEOs and C-suite leaders on organizational transformation. With 20+ years building and scaling technology teams, he helps leaders create the conditions for sustainable high performance.

