The Confidence Gap: Why Gen Z Is the Least Optimistic About AI

There’s a disconnect showing up in almost every organization right now.

Executives are optimistic about AI.
Employees are not.

Recent findings from LinkedIn’s AI Confidence Survey highlight something that should give leaders pause. Gen Z, the most digitally fluent generation in the workforce, is the least optimistic about AI’s impact on their careers.

That doesn’t line up with the narrative many leadership teams are telling themselves.

The assumption is that younger employees will adapt quickly. That they are comfortable with technology. That they will lead adoption.

But comfort is not the same as confidence.

And what this data really points to is not a technology gap.

It is a leadership gap.

In leadership conversations, I often hear a version of the same belief.

“We’re investing in tools.”
“We’re encouraging experimentation.”
“We’re giving teams access to AI.”

From the executive perspective, that feels like enablement.

From the employee perspective, it often feels very different.

It feels like expectations are changing without clarity.
It feels like performance standards are shifting without guidance.
It feels like risk is increasing without protection.

So instead of leaning in, many employees start hedging.

They experiment quietly.
They upskill on their own time.
They avoid asking questions that might signal uncertainty.

This is not resistance.

It is self-preservation.

Across the workforce, AI literacy is increasing quickly.

Employees are exploring tools.
They are testing use cases.
They are finding ways to work faster and smarter.

But in many organizations, leadership clarity has not kept pace.

There is no clear articulation of:

How AI will be used in the business.
How roles will evolve over time.
How performance will be evaluated in an AI-enabled environment.

So employees fill in the gaps themselves.

And when people fill in gaps under uncertainty, they tend to assume the worst.

This is where confidence erodes.

AI adoption requires experimentation.

Experimentation requires people to try new approaches, question existing processes, and occasionally get things wrong.

That only works in environments where people feel safe to speak up.

In many culture assessments, a different pattern shows up.

Employees hesitate to challenge ideas.
They avoid raising concerns in group settings.
They read the room before they speak.

What looks like alignment on the surface is often quiet calculation underneath.

One leader described it as “fake courage.”
People nod in meetings. Then they adjust privately.

That dynamic might have been manageable before.

With AI, it becomes a liability.

Because AI introduces new risks, new decisions, and new tradeoffs.

If people do not feel safe raising concerns, those risks stay hidden until they become problems.

Another pattern that consistently shows up is unclear decision ownership.

Who approves the use of AI in a workflow?
Who is accountable for the output?
Who is responsible if something goes wrong?

When those answers are unclear, employees default to caution.

They slow down.
They escalate unnecessarily.
They avoid taking initiative.

Or they move forward quietly without alignment.

Neither outcome builds confidence.

Both increase risk.

It is easy to interpret hesitation around AI as a skills issue.

Maybe employees need more training.
Maybe they need more exposure to tools.

But in most cases, capability is not the constraint.

The constraint is the environment.

Lack of trust.
Fear of speaking up.
Unclear expectations.
Ambiguous decision rights.

These are leadership design issues.

And no amount of tooling will solve them.

AI accelerates decision-making.

It introduces new ways of working.

It challenges existing assumptions about roles and value.

All of that creates tension.

Teams need to be able to debate tradeoffs.
Leaders need to hear dissent early.
Employees need to question how AI is being used.

If your culture defaults to politeness over clarity, those conversations do not happen.

And when they do not happen, problems surface later, at a higher cost.

This is why psychological safety is not a “nice to have” in the age of AI.

It is infrastructure.

AI adoption is not just about capability. It is about confidence.

And confidence is built through clarity, trust, and structure.

Leaders need to make a few things explicit:

What AI is being used for and why.
How roles are expected to evolve.
Where employees are encouraged to experiment.
How decisions are made when AI is involved.

Without that clarity, employees will continue to operate in the gap between what is said and what is felt.

Do your employees understand how AI will impact their role over the next 12 to 24 months?

Where in your organization are people holding back questions or concerns?

Are you interpreting silence as alignment?

If you’re curious where your organization stands, you can check out the mini diagnostic here.

It’s quick. It’s free. And it gives you a first look at your AI change readiness across six essential dimensions. If you’re interested in exploring the feedback, reach out!

Image licensed via Canva Pro (Yan Krukau from Pexels)