What does AI success look like on a dashboard?
If you’re like most CTOs I talk with, you’re probably tracking metrics like AI-assisted pull requests, adoption percentages, or lines of code generated. The numbers look impressive: 40% increase in AI-assisted commits, 60% of developers using AI tools, thousands of code suggestions accepted.
But here’s the uncomfortable question: Are you measuring activity or value?
Because in my conversations with technology leaders, I’m consistently seeing a troubling pattern. The AI adoption metrics look great. But cycle time is up. Code review duration has doubled. And software instability incidents are creeping higher.
We’ve successfully measured activity. Now we need to talk about whether we’re creating value.
The Pattern I’m Seeing Everywhere
In my work with technology organizations over the past year, I’m witnessing a predictable pattern emerge around AI adoption. It looks something like this:
The Early Adopters rush in with enthusiasm. They’ve subscribed to GitHub Copilot, ChatGPT, Claude, and maybe a few other AI coding assistants. They’re generating code faster than ever before. They feel productive. They love it.
The Skeptics dig in their heels. “AI code is sloppy.” “It doesn’t understand our architecture.” “We can’t trust it with our proprietary logic.” They stick with their traditional workflows.
The Finance Team sends an email: “We’re seeing requests for AI tool subscriptions at $30-100 per developer seat. This wasn’t in the budget. What’s the ROI?”
The Engineering Leader is caught in the middle, trying to navigate:
- Inconsistent adoption across teams
- Tool sprawl (everyone has their favorite AI assistant)
- Confusion about what’s approved and what’s not
- Pressure to demonstrate return on investment
- Mounting concerns about code quality and security
Sound familiar?
Here’s what makes this challenging: All of these perspectives contain truth. AI tools genuinely can boost productivity. They also can create chaos. The difference isn’t the tool—it’s the organizational capability surrounding it.
The Research That Changes Everything
This year, the DevOps Research and Assessment (DORA) team released something remarkable: the first AI Capabilities Model for software development. Based on 78 in-depth interviews with developers and extensive survey research, they set out to answer a crucial question:
Under what conditions do AI-assisted software developers observe the best outcomes?
What they discovered should make every technology leader pause.
The research found seven organizational capabilities that, when present, amplify the benefits of AI adoption. Teams with these capabilities see AI drive improvements in individual effectiveness, team performance, code quality, and organizational outcomes.
But here’s the paradox that caught my attention: Without these capabilities, AI adoption can actually harm team performance.
Let me repeat that. The DORA research shows that on teams lacking a user-centric focus, AI adoption decreases team performance. It’s not neutral—it’s actively harmful.
This isn’t about the sophistication of the AI model. It’s not about whether you chose the “right” tool. It’s about whether your organization has built the foundational capabilities to harness AI effectively.
Why Traditional Metrics Are Lying to You
Those dashboard metrics we started with (AI-assisted pull requests, lines of code generated, adoption percentages)? They're telling you about the wrong things.
Here’s what I’m seeing organizations measure:
- Number of AI-assisted commits
- Lines of code generated by AI
- Percentage of developers using AI tools
- Count of AI prompts or queries
These are vanity metrics. They tell you about activity, not outcomes.
Lines of code is a particularly misleading one: the size of your commits is the wrong thing to optimize. In fact, DORA's research suggests that working in small batches, not large AI-generated code dumps, is one of the capabilities that amplifies AI's benefits.
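To make the distinction concrete, here's a minimal sketch of activity metrics versus outcome metrics. It assumes a hypothetical export of pull request records with timestamps and an `ai_assisted` flag; the record fields and data source are illustrative, not any real tool's API.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

# Hypothetical PR record; the field names are illustrative, not from a real API.
@dataclass
class PullRequest:
    ai_assisted: bool          # did the author use an AI assistant?
    opened_at: datetime        # when the PR was opened
    first_review_at: datetime  # when the first review landed
    merged_at: datetime        # when the PR was merged
    lines_changed: int         # total lines added + removed

def activity_metrics(prs: list[PullRequest]) -> dict:
    """Vanity metrics: they count AI usage, not value."""
    ai_count = sum(pr.ai_assisted for pr in prs)
    return {
        "ai_assisted_prs": ai_count,
        "ai_adoption_pct": 100 * ai_count / len(prs),
    }

def outcome_metrics(prs: list[PullRequest]) -> dict:
    """Outcome metrics: cycle time, review duration, and batch size, split by AI usage."""
    def summarize(subset: list[PullRequest]) -> dict:
        return {
            "median_cycle_time_hrs": median(
                (pr.merged_at - pr.opened_at).total_seconds() / 3600 for pr in subset
            ),
            "median_review_hrs": median(
                (pr.merged_at - pr.first_review_at).total_seconds() / 3600 for pr in subset
            ),
            "median_batch_size_loc": median(pr.lines_changed for pr in subset),
        }
    return {
        "ai_assisted": summarize([pr for pr in prs if pr.ai_assisted]),
        "not_ai_assisted": summarize([pr for pr in prs if not pr.ai_assisted]),
    }
```

If the AI-assisted slice shows more merged PRs but longer cycle times, doubled review durations, and ballooning batch sizes, you've measured plenty of activity and discovered it isn't creating value.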
When I tell finance teams to “consider AI tools part of the cost of purchasing a computer for your developers,” I’m not asking them to accept the expense on faith. I’m asking them to reframe the question from “Can we afford this?” to “Can we afford not to build the capabilities to use this effectively?”
The computer alone doesn’t create value. Neither does the AI subscription. It’s the organizational capability that creates sustainable competitive advantage.
This Is a Leadership Challenge, Not a Technology Problem
Here’s what the DORA AI Capabilities Model makes crystal clear: succeeding with AI isn’t primarily about choosing the right tools or training your developers on prompt engineering.
It’s about leadership decisions:
- Have you established a clear and communicated AI stance so developers know what’s expected and permitted?
- Are you maintaining healthy data ecosystems that give AI the context it needs to be useful?
- Do you have quality internal platforms that provide guardrails and shared capabilities?
- Are you enforcing the discipline of working in small batches even when AI makes it easy to generate large amounts of code?
- Most critically: Are you maintaining a user-centric focus that keeps teams oriented toward outcomes, not just output?
These are questions of organizational design, culture, and strategy. They require executive-level attention.
The teams thriving with AI aren’t the ones with the most sophisticated tools. They’re the ones whose leaders have architected an environment where AI can amplify human capability rather than amplify chaos.
A Note on Scope (And Why This Matters to You)
The DORA research specifically studied software development teams—where AI adoption is most mature and outcomes are most measurable. The capabilities they identified are deeply rooted in software delivery: version control practices, code quality, deployment throughput.
But if you’re a technology leader, you should pay attention even if you’re not hands-on in code anymore. Here’s why:
First, software development is where your organization’s AI transformation will be most visible and measurable. What happens in engineering is your laboratory for learning.
Second, the leadership principles underlying these capabilities apply far beyond development. The need for clarity around AI use, the importance of data quality, the discipline of working in small batches—these patterns repeat across knowledge work.
Third, as a leader, your job is to create the conditions for success. Understanding what conditions matter gives you leverage.
What’s Coming in This Series
Over the next several posts, I’m going to take you deep into the DORA AI Capabilities Model and show you how to translate research into action.
We’ll explore:
- The Foundation: The three leadership decisions that determine whether AI helps or hurts your teams (Clear AI stance, User-centric focus, Working in small batches)
- The Technical Infrastructure: The platform and data investments that amplify AI’s benefits (Data ecosystems, AI-accessible internal data, Internal platforms, Version control)
- From Adoption to Advantage: How to turn these capabilities into a sustainable competitive advantage
- The Human Side: How to navigate this transformation while honoring, retraining, and elevating your existing workforce
This isn’t about rushing to adopt the latest AI tool. It’s about building organizational muscle that will serve you regardless of which AI technologies emerge next.
The Question Every Leader Should Ask
Here’s how I now open conversations with technology leaders:
“Are you creating the conditions for AI to amplify your team’s capabilities, or are you just adding another tool to an already complex environment?”
The difference between these two paths is measurable. It shows up in your team’s performance, your code quality, your delivery speed, and ultimately your business outcomes.
The good news? Building these capabilities is within your control. It doesn’t require waiting for better AI models or larger budgets. It requires leadership—intentional, strategic, research-backed leadership.
That’s what this series is about.
Your Turn
Before the next post, I’d invite you to consider:
How clear is your AI stance? If I asked five developers on your team whether they’re expected to use AI tools and which ones are permitted, would I get five consistent answers?
What are you actually measuring? Are you counting AI activity or measuring AI impact on outcomes that matter to your business?
Where are you seeing the pattern? Enthusiastic adopters, resistant skeptics, confused leadership—which of these dynamics are playing out in your organization?
Drop a comment or reach out—I’d love to hear what you’re seeing in your organization.
Next in the series: “The Foundation: Three Leadership Decisions That Determine AI Success”
About This Series: This series explores the DORA AI Capabilities Model and translates academic research into practical guidance for technology leaders navigating AI transformation. The goal isn’t just AI adoption—it’s building sustainable organizational capabilities that create competitive advantage.
Research Source: 2025 Accelerate State of DevOps Report, DORA AI Capabilities Model, Google Cloud DevOps Research and Assessment Team. Download the full report here.
