Why AI Adoption Stalls, According to Industry Data

Companies in most industries are investing heavily in artificial intelligence, with 88% of companies reporting regular AI use. Yet many leaders report familiar frustrations. AI adoption stalls. Performance gains plateau. Employees experiment with new tools but don’t integrate them deeply into how work actually gets done, leaving executives increasingly concerned about ROI.

Our research suggests this is not a random failure of execution. It is a predictable psychological pattern driven by industry-specific anxiety about what AI means for people’s jobs, identity, and future.

To understand why AI adoption stalls, Fractional Insights and Ferrazzi Greenlight partnered on two surveys conducted with employees across the United States and Europe. One was a cross-national study of more than 2,000 respondents conducted in Fall 2025 in partnership with QuestionPro; the other was a U.S.-only survey of 1,000 respondents conducted in Spring 2025. Respondents in both surveys spanned industries including healthcare, technology, finance, manufacturing, retail, education, hospitality, and more.

In the cross-national study, we asked participants to tell us about their AI adoption behaviors at work and their feelings about AI—including whether AI makes them fear for their job security or worry that they’ll be replaced, reduces their value as an employee, limits human connection with colleagues, or hurts their intellect. We combined these responses into a composite we call AI angst: a measure of perceived threats to job security, professional value, and growth, built from 10 items on a five-point scale.

About eight in ten employees expressed strong concern about at least one AI angst item. For example, 65% of people agreed that they “worry about being replaced by someone who knows how to use AI better than I do,” 61% worry that “AI might make others think I don’t bring unique value,” 60% worry “that using AI to help with my work will make colleagues question my personal competency,” 54% feel AI is impacting the way they connect with others at work, and 44% feel it’s “making them dumber.” One in three employees averaged four or greater on the AI angst composite.

Overall, we found that approximately 86% of people felt AI will make work at least a little better, while 14% felt AI will have a neutral or negative impact on the experience of work.

The Belief-Anxiety Paradox

But here’s the critical insight: believing in AI’s business value doesn’t mean employees feel secure about their own future. About four in ten employees strongly believe in AI’s business value while simultaneously fearing what it means for their own security and relevance.

Both positive core beliefs about AI and AI angst vary predictably by industry. The highest positive core belief scores came from people in finance, technology, and healthcare, whereas people in education, manufacturing, retail, and government tended to have more pessimistic views of AI.

The highest AI angst scores also came from people in technology and financial services, whose scores averaged 48% higher than those of people in manufacturing and education. This pattern replicated in our cross-national study, with those in finance and technology showing the strongest AI angst. Factors like an industry’s history with automation or its dominant sources of value may provide clues as to why.

Technology and financial services sit at the center of the optimism-fear tension. These industries have lived through repeated waves of disruption, restructuring, and skill obsolescence. As a result, AI may be interpreted simultaneously as a growth engine and a career threat.

Employees in these sectors may hold strong beliefs about AI’s value for the business. They appear to understand its power, scalability, and competitive implications better than most. But that same proximity to AI also makes its risks feel more personal.

In our data, this shows up as high belief paired with high perceived personal risk. If adoption stalls, it is less because people doubt AI’s potential than because they are actively managing their personal risk from it.

Healthcare presents a different psychological starting point. Here, AI is more often framed as mission-enhancing, supporting patient care, reducing administrative burden, and improving outcomes rather than replacing professional judgment outright. That alignment with mission and purpose may be why we found lower AI angst scores in healthcare employees.

However, that same optimism can create a different failure trap. When belief outpaces governance, adoption risks shift from resistance to execution strain or misuse. Without clear guardrails around safety, bias, accountability, and workflow integration, enthusiasm alone is not enough to sustain scalable, responsible adoption.

Professional services face perhaps the most identity-disruptive form of AI adoption. In fields like law, consulting, and accounting, value is rooted in expertise, judgment, and differentiation, precisely the domains AI increasingly touches. As a result, AI can often be interpreted less as a tool and more as a challenge to professional legitimacy.

Employees in these sectors showed more skepticism about whether AI can support better work, lowering their belief in its business value. At the same time, they reported elevated concern about what AI means for their own relevance and career trajectory. Adoption risk here is double-sided: skepticism can limit experimentation, while professional threat can fuel self-protective behaviors.

Finally, in education, manufacturing, retail, and government, AI often remains psychologically distant from daily work. In these sectors, employees tend to report neither strong belief nor strong fear about AI. AI appears to feel more abstract, future-oriented, or tangential to performance expectations. The dominant barrier is not resistance, but indifference.

When AI does not yet feel connected to the job, employees have little incentive to invest energy in adoption. In these environments, adoption may stall not because people push back, but because they simply do not feel motivated to engage.

Across industries, the pattern is consistent: AI adoption stalls when the technology collides with how people understand their value, their risk, and their future. Industry context determines which of those forces dominates first, and leaders who ignore it misread both enthusiasm and resistance.

Why Anxiety Can Increase AI Use and Still Stall Results

Most AI strategies focus on the case for the business (e.g., “It will make work faster!”). Far fewer address AI’s personal threat to job security, relevance, and meaning in one’s work. When that risk is left unexamined, adoption may look good, but ROI does not.

One of the most counterintuitive findings in our research is that AI angst often increases usage while simultaneously increasing resistance.

In our cross-national study, we asked respondents to estimate the percentage of their job that they use AI to assist with. We also measured AI angst using 10 items reflecting worry and fear about AI. High AI angst was defined as a score of 4 or greater on the 5-point composite scale, and low angst as an average score of 2 or less. Those with high AI angst reported that, on average, 65% of their job was currently AI-assisted, compared to 42% for those with low AI angst. We also asked a straightforward question: To what extent do you “feel resistant to adopting AI at work?” High AI angst meant over twice the resistance (a resistance score of 4.6 for the high-angst group vs. 2.1 for the low-angst group on a five-point scale). The insight: Fear about job loss or becoming obsolete can drive compliance and usage, but it does not necessarily produce true buy-in and commitment.
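For readers who want to see the scoring logic concretely, the composite-and-cutoff approach described above can be sketched in a few lines of Python. The high (≥4) and low (≤2) cutoffs come from the study; the sample respondents and their AI-assisted percentages are invented for illustration only.

```python
# Illustrative sketch of the AI angst composite scoring described above.
# The 4.0 / 2.0 cutoffs mirror the study; the respondent data are made up.
from statistics import mean

HIGH_ANGST = 4.0  # composite score of 4 or greater on the 5-point scale
LOW_ANGST = 2.0   # composite score of 2 or less


def angst_composite(item_scores):
    """Average the 10 five-point AI angst items into one composite score."""
    if len(item_scores) != 10:
        raise ValueError("expected 10 item scores")
    return mean(item_scores)


def angst_group(composite):
    """Bucket a composite score into high / low / middle."""
    if composite >= HIGH_ANGST:
        return "high"
    if composite <= LOW_ANGST:
        return "low"
    return "middle"


# Hypothetical respondents: (10 item scores, % of job currently AI-assisted)
respondents = [
    ([5, 4, 4, 5, 4, 4, 5, 4, 4, 5], 70),
    ([2, 1, 2, 2, 1, 2, 2, 1, 2, 2], 40),
    ([3, 3, 3, 3, 3, 3, 3, 3, 3, 3], 50),
]

# Group respondents by angst level, then compare average AI-assisted share
by_group = {}
for items, assisted in respondents:
    group = angst_group(angst_composite(items))
    by_group.setdefault(group, []).append(assisted)

for group, shares in by_group.items():
    print(group, mean(shares))
```

With real survey data, the same grouping step would reproduce the comparison in the text: the high-angst group’s mean AI-assisted share versus the low-angst group’s.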

This helps explain why AI rollouts can look successful on the surface—licenses activated, tools used—while failing to deliver durable impact. In these cases, usage reflects self-protection rather than genuine confidence or innovation.

As with any survey research, these findings reflect employee perceptions, but those perceptions strongly predict adoption behavior.

Our past research on workplace angst found that employees experiencing personal threat may perform well in the short term but are far more likely to disengage or leave.

AI-specific anxiety shows the same pattern. Even when AI usage metrics appear strong, the underlying behavior is often performative rather than participatory, which erodes real impact.

The Four Types of Employees

Analyzing the data further, we identified four distinct employee groups based on their beliefs about AI and their anxiety levels. Understanding which profile dominates in your organization is critical to designing the right adoption strategy.

Visionaries (high belief in AI’s value, low perceived personal risk) see AI’s upside and experiment readily across their jobs. About 40% of employees fall into this group.

  • Management imperatives: Deploy Visionaries as peer mentors and early adopters, but push them to pressure-test risks rather than just sell benefits. Keep them focused on rigorous execution rather than hype. Let them lead pilot teams and innovation sprints, but pair them with skeptics who can identify blind spots. Their enthusiasm is an asset; their overconfidence can be a liability.

Disruptors (high belief, high perceived risk) understand AI’s power while worrying deeply about their own relevance, likely leading to fear-driven use. Roughly 30% of employees fit this profile.

  • Management imperatives: Provide radical transparency around AI strategy and its implications for roles. Invest visibly in reskilling programs and make progress public. Create psychologically safe spaces for learning and experimentation where failure doesn’t carry career risk. Co-create transition plans rather than imposing them; giving Disruptors ownership over the change reduces anxiety while leveraging their cognitive understanding.

Endangered employees (low belief, high perceived risk) feel AI threatens professional identity and question its value. About 20% of employees fall into this category.

  • Management imperatives: Lead with empathy and acknowledge fear first, before making any rational arguments. Create low-risk wins that build both emotional confidence and cognitive belief in the technology. Pair Endangered employees with Visionaries for trust-building and modeling. Consistently reinforce the human elements of their roles that won’t be automated, helping them see AI as augmentation rather than replacement.

Complacent employees (low belief, low perceived risk) don’t feel threatened by AI but don’t see its value either. Roughly 10% of employees fall here.

  • Management imperatives: Shock the system with external disruption stories from competitors or adjacent industries. Make relevance deeply personal by showing what’s at stake for their specific roles. Use gamified learning to spark engagement where traditional training fails. Spotlight fast-movers within the organization to create FOMO and drive urgency through peer pressure rather than top-down mandates.
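The four profiles above form a simple two-by-two on belief and perceived risk. A minimal sketch of that segmentation, assuming a midpoint cutoff of 3.0 on the five-point scales (an assumption for illustration, not the study’s actual classification rule):

```python
# Sketch of the belief-by-angst quadrant segmentation described above.
# The 3.0 midpoint cutoff is an illustrative assumption.
def profile(belief: float, angst: float, cutoff: float = 3.0) -> str:
    """Map belief-in-AI and AI angst scores to one of the four profiles."""
    high_belief = belief >= cutoff
    high_angst = angst >= cutoff
    if high_belief and not high_angst:
        return "Visionary"    # high belief, low perceived personal risk
    if high_belief and high_angst:
        return "Disruptor"    # high belief, high perceived personal risk
    if not high_belief and high_angst:
        return "Endangered"   # low belief, high perceived personal risk
    return "Complacent"       # low belief, low perceived personal risk


print(profile(4.5, 1.8))  # a high-belief, low-angst employee: Visionary
```

Applied across a workforce survey, a rule like this yields the distribution the article reports: roughly 40% Visionaries, 30% Disruptors, 20% Endangered, and 10% Complacent.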

While these profiles echo familiar adoption patterns, AI introduces personal threat far earlier and more broadly than most prior technologies.

What Leaders Must Do Differently

When AI adoption fails to deliver results, leaders often respond by intensifying familiar levers: more training, clearer mandates, tighter governance. But these approaches miss the core problem. Our research suggests AI adoption fails to deliver impact not because people are not using it, but because they are often using it while simultaneously fearing what it means for them personally.

Three shifts matter most.

First, recognize industry-shaped risk before deploying AI.

Industry context determines the psychological starting point for adoption long before any tool is introduced. In some sectors, AI arrives more as an opportunity. In others, it arrives more as a threat. Effective AI strategies begin not only with technology roadmaps, but with a clear view of how people interpret AI angst and personal risk.

Second, stop treating usage as a proxy for buy-in.

High usage is often assumed to mean high adoption. In reality, usage can just as easily reflect self-protective compliance rather than genuine engagement. Our data shows that employees experiencing higher AI angst often use AI more broadly but may also feel greater internal resistance that can get in the way of real innovation. Without understanding the emotional context behind usage, leaders risk optimizing for activity rather than impact. Adoption metrics must be paired with signals of angst, psychological safety, and openness to experimentation to distinguish genuine engagement from calculated participation.

Third, design for learning before designing for scale.

Leaders cannot improve or govern AI adoption if they cannot distinguish genuine buy-in from anxiety-driven compliance. When employees feel personal risk, they may use AI in ways that look active but are cautious or strategically constrained to protect their role. Scaling AI before people feel safe to learn simply amplifies superficial adoption rather than durable impact.

Together, these shifts require leaders to stop treating AI adoption as a rollout problem and start treating it as a risk-perception problem, one that is shaped by industry context and expressed through human behavior.

What differentiates organizations that succeed with AI is not better tools, but a more realistic understanding of how people experience AI, especially within their specific industry context.

AI impact ultimately is tied to whether employees can see a credible place for themselves in the future leaders are building. When leaders start with industry realities, acknowledge the real AI-related risk people feel, and restore visibility into how work is actually changing, adoption stops being something they push and can become something employees help shape.

Originally published at HBR