Why Employees Are Hiding Their AI Use and What HR Should Do Instead of Surveillance
- Employees are hiding AI use because of fear, not resistance, and surveillance tends to make that secrecy worse by reducing psychological safety.
- HR and L&D should focus on trust, hands-on AI literacy, and manager behaviors that make it safe to ask questions, share failures, and learn openly.
- Real AI adoption is better measured through collaboration, experimentation, and confidence than through simple usage dashboards.
Microsoft's new Viva Insights Benchmarks tool lets managers track whether teams are using Copilot "sufficiently." The intent is to prove the AI investment is paying off. The likely outcome is the opposite: nearly half of employees already hide their AI use to avoid judgment, and surveillance amplifies the fear that drives that secrecy. AI adoption is a psychological safety problem, not a tracking problem. The companies leading on AI are the ones investing in trust, not metrics.
What Microsoft just launched
Microsoft has rolled out Benchmarks within Viva Insights, a feature that lets managers monitor Copilot adoption at the team and individual level. Specifically, it allows leaders to:
- Track which employees are active Copilot users and measure adoption rates by department
- Compare internal teams and benchmark against the "Top 10%" of peer companies
- Identify who is experimenting with AI versus integrating it into daily workflows
The pitch is straightforward. Organizations have spent significant money on Copilot licenses. Leadership wants to see returns. Benchmarks give them visibility into who's using what.
The question is whether visibility produces adoption. The data says it doesn't.
Why so many employees are already hiding their AI use
Even without surveillance tools, the numbers are striking:
- 49% of workers admit to hiding their AI use at work to avoid judgment (WalkMe / SAP, 2025)
- 45% have pretended to understand an AI tool in a meeting rather than admit confusion (WalkMe / SAP, 2025)
- 50% of young employees are nervous about admitting how much of their work involves AI (HRD / HCAMag, 2025)
- 47% worry AI could replace their jobs
This is not laziness or resistance. It is fear. Fear that admitting confusion looks like incompetence. Fear that admitting heavy AI use looks like cheating. Fear that any visible mistake will become a performance problem.
The behavior underneath the data is what L&D leaders call shadow AI: employees using the tools, but quietly, without sharing what they learn, what fails, or what concerns them.
Why surveillance makes shadow AI worse, not better
Drop a tracking dashboard into an environment where half the workforce is already hiding their AI use, and the response is predictable. Employees don't suddenly become more open. They become more strategic about appearances.
The mechanics are simple:
- Surveillance signals that AI use is being judged
- Judgment reduces psychological safety
- Low psychological safety means people stop asking questions, flagging errors, or sharing what works
- The data the dashboard captures becomes hollow: surface-level activity, not real adoption
Curiosity dies first under surveillance. And curiosity is precisely what AI adoption requires.
What high-trust AI cultures look like instead
In organizations with high psychological safety, employees say things like:
- "I'm not sure how to use this tool, can someone show me?"
- "This AI output seems wrong, can we double-check?"
- "I tried this prompt and it gave nonsense, here's what I learned."
Those sentences are how real adoption happens. They are also exactly what disappears under surveillance.
The companies winning AI adoption are not the ones with the highest dashboard scores. They are the ones where failure is shareable.
What HR and L&D leaders should focus on instead
Forget the tracking dashboards for a quarter. Focus on the conditions that actually drive adoption.
1. Build trust through transparency
Establish explicit norms that voicing concerns about AI is welcomed, not penalised. Make it a leadership behavior, not a poster.
- Manager scripts for normalising "I don't know how to use this yet"
- Public spaces (Slack channels, internal demos) where AI failures are shared and discussed
- Clear guidance on what AI use is acceptable, removing the ambiguity that fuels secrecy
2. Invest in skill development at two levels
- Employees: accessible, practical AI literacy training tied to real workflows. Not theory. Hands-on application of tools to the work people actually do.
- Managers: how to embed psychological safety in their teams, especially around AI. Most managers have never been taught how to lead an AI rollout, and they're improvising.
3. Respond productively to failure
When employees make AI-related mistakes, treat them as learning opportunities. Document what went wrong. Share the lesson. Reward the surfacing, not just the success.
This is the single biggest lever. The first time someone shares an AI mistake and gets thanked instead of corrected, the culture shifts.
4. Replace surveillance with diagnostics
If leadership wants visibility into AI adoption, give them better signals than usage rates:
- How often are AI use cases shared internally?
- How quickly do teams move from experimentation to integration?
- What's the rate of AI-related questions in team meetings vs. private DMs?
- Are managers reporting confidence in leading AI-augmented teams?
These are leading indicators of real adoption. Usage rates are lagging indicators of compliance.
The strategic point
Microsoft's Benchmarks tool is a symptom, not the problem. The underlying issue is that organizations are trying to solve a behavioral challenge with a measurement tool. AI adoption is not a metrics problem. It is a culture problem. Specifically, a psychological safety problem.
The companies that win the AI transition will not be the ones tracking the most. They will be the ones building the conditions where people can experiment, fail, share, and learn.
That is a leadership and L&D problem. Not a dashboard problem.
Lepaya's latest report on workforce resilience covers the skills, manager behaviors, and conditions that Europe's high-growth organizations are prioritising for AI adoption, including data on psychological safety as an AI readiness driver.
Frequently Asked Questions
Why are so many employees hiding their AI use at work?
Recent research shows nearly half of workers hide their AI use to avoid judgment, and 45% have pretended to understand AI tools in meetings rather than admit confusion. The cause is low psychological safety: employees fear that admitting either confusion or heavy AI use will damage their reputation, performance reviews, or job security.
What is shadow AI?
Shadow AI refers to employee use of AI tools outside of formally sanctioned or visible channels. It happens when staff use AI tools to do their work but don't disclose it because they fear judgment, job security risks, or accusations of cheating. Shadow AI undermines learning, creates compliance risks, and prevents organizations from understanding their real AI adoption.
Does tracking AI adoption increase usage?
Not meaningfully. Tracking usage rates often produces surface-level engagement rather than genuine adoption. Employees in surveilled environments tend to perform compliance — clicking the tool to register activity — rather than integrating it into workflows. Real adoption requires psychological safety, which tracking tends to reduce.
How does psychological safety affect AI adoption?
High psychological safety lets employees ask questions, admit confusion, flag concerns about AI outputs, and share failures. These behaviors are how learning happens. In low-trust environments, employees hide their use, hide their mistakes, and avoid raising concerns until problems become crises. AI adoption stalls or backfires.
What should HR leaders do instead of tracking AI usage?
Focus on the conditions that drive real adoption: build manager skills in psychological safety, invest in hands-on AI literacy training tied to real workflows, establish explicit norms around discussing AI failures and questions openly, and measure leading indicators of culture (sharing rates, confidence, manager-reported team behavior) rather than lagging usage metrics.
Is Microsoft Viva Insights Benchmarks bad for AI adoption?
The tool itself is neutral. The way it gets deployed is what matters. Used as a diagnostic to identify teams that need more support, it can be useful. Used as a performance metric or surveillance dashboard, it tends to deepen the secrecy and fear that already block adoption.
How do you build trust around AI in a team?
Make AI failures visible and shareable. Train managers to model "I don't know how to use this yet" rather than projecting expertise. Create dedicated channels or rituals for sharing AI experiments (both successes and failures). Tie AI training to real, daily work. Reward surfacing problems, not just successful outcomes.

"The safest organizations aren't the ones without mistakes, but the ones where mistakes can be shared openly, even those made with AI. Curiosity is just as important as caution; it's what keeps people experimenting and learning."

"In dealing with innovation, failure is a given. The way leaders and organizations respond to the inevitable failure that comes with working with AI, we can either tell people that they have done a bad thing, or we can invite them to experiment, fail safely, share, and learn."

Related articles
Rethinking teams in the age of AI: Insights from Toby Newman on human-AI collaboration
Discover why self-awareness and learning mindsets are becoming more critical than technical skills in AI-integrated workplaces, as L&D consultant Toby Newman reveals the real organizational shifts happening beyond the AI hype.