28 Jan
We Thought We Were Ready for AI. We Were Wrong.
Your Culture Isn’t as AI-Ready as You Think: Here’s What Surprised Me
I recently sat down with a group of seasoned CHROs: smart, future-focused leaders who know what it takes to navigate change. We were talking AI. Not the tools. Not the hype. But the human side: culture, capability, clarity, and trust.
Before diving into the conversation, I asked everyone to complete a short, 12-question version of our new SPARK AI.Q™ diagnostic, a tool we developed to help companies assess their readiness to lead in the age of AI. Not just to adopt it, but to actually extract value from it without breaking their teams or their culture in the process.
The results? Eye-opening.
Even in a room full of forward-thinking leaders, the data showed some surprising gaps, especially in areas that have nothing to do with technology and everything to do with leadership, trust, and alignment. Here are six of the most compelling insights that emerged:
1. Leaders are rushing to action before clarifying intent.
Fewer than 30% of respondents strongly agreed that their leadership team had a clear, shared understanding of why they were adopting AI in the first place.
This is a red flag. Without a crisp “why,” efforts tend to fragment. What you see instead is organizations pursuing automation in one corner, chasing innovation in another, and confusing employees everywhere. Clarity of purpose is the bedrock of responsible, effective transformation.
While it’s easy to get distracted and chase the shiny new object (hello, guilty as charged), a clear, articulated strategy is key to ensuring your investment delivers.
Your AI story needs to begin with alignment, not tools.
2. Ethics is still treated like compliance, not culture.
Only 11% strongly agreed their organization has a clearly articulated stance on responsible AI use that’s widely understood across the company.
We’re past the era of “just trust the engineers.” In a world where machines can make decisions that affect customers, employees, and society, companies need an ethical backbone that everyone, from product to people teams, can stand behind.
Responsible AI isn’t a checklist. It’s a leadership and governance priority.
3. Psychological safety isn’t keeping up with the pace of change.
Fewer than a third of participants felt their culture encourages curiosity, experimentation, and learning when it comes to AI.
That matters. A lot. Because AI transformation isn’t just about capability; it’s about courage. When employees don’t feel safe to test, question, or fail, they’ll either resist the change or go quiet. And both are dangerous.
AI fluency starts with trust. Make it safe to ask, try, and learn.
4. Middle managers are stuck in the frozen middle.
As we discussed the results, respondents flagged the “frozen middle” as a blocker to progress. But the sentiment isn’t that managers are resisting AI; rather, they just aren’t equipped to lead through it.
This is the new change fatigue. Frontline teams are curious. Executives are pushing strategy. But without strong, skilled leadership in the middle, the adoption gap widens.
If you don’t activate the middle, you can’t scale the future.
5. Governance is muddy, especially when it comes to decision rights.
Only 19% of CHROs said their organizations had clearly defined who owns what decisions when AI is involved.
It’s bad enough when human teams are confused about who is responsible for a decision. With AI in the mix, the risk is even greater for boards and senior leaders. When humans and machines collaborate, who’s accountable? Who approves the use case? Who audits the output?
If you’re looking for a good governance framework, check out the AI Governance Playbook from the Athena Alliance.
Without clear governance, you’re not innovating, you’re gambling.
6. Most organizations are skipping the cultural groundwork.
Fewer than 25% of participants strongly agreed that their existing practices, norms, and values were being intentionally adapted for an AI-augmented future.
Think about that. AI adoption is being layered on top of outdated operating models, legacy decision-making, and old incentives. That’s a recipe for friction, not transformation.
Please don’t digitize dysfunction. Fix the foundation first.
Why We Built SPARK AI.Q™
These insights aren’t just interesting; they’re a signal. As companies race to integrate AI, they’re missing the human systems that make transformation stick.
That’s why we developed the SPARK AI.Q™ diagnostic and transformation process. It’s built to assess the often-overlooked dimensions of readiness: strategic clarity, structural alignment, leadership mindset, cultural openness, motivation and incentives, and ethical stewardship.
In short, it helps leaders see specifically where their organizations are set up to thrive with AI and where they’re about to stumble. Over the decades, I have repeatedly seen that most transformations fail in the gap between leadership perception and organizational reality.
Try the Mini Diagnostic. Then Take the Next Step
If you’re curious where your organization stands, you can check out the mini diagnostic here.
It’s quick. It’s free. And it gives you a first look at your AI change readiness across six essential dimensions. If you’re interested in exploring the feedback, reach out!

