From theory to practice: Three critical conversations about AI adoption in the workplace

The promise is everywhere: AI will transform how we work, making us more productive, more innovative, more competitive. Yet in organizations across industries, a different reality is unfolding. Employees quietly resist new tools. HR struggles to justify learning investments. And leaders find themselves caught between the urgency of transformation and the very human fears that block it.
Last week, Lepaya and Everday hosted a roundtable event bringing together HR leaders, L&D professionals, and consultants to tackle three pressing questions about AI adoption in organizations. What emerged wasn't a collection of best practices, but honest conversations about the messy reality of integrating AI into workplaces that are still figuring out what "AI literacy" actually means.

Safety in uncertain times: The psychology of AI adoption
The first discussion centered on a topic often overlooked in AI implementation plans: psychological safety. Can employees admit they don't know how to use AI? Can they be honest about their AI-generated work? Will they face judgment for mistakes made while experimenting?
These aren't trivial concerns. In many organizations, there's an unspoken stigma around AI use, with employees unsure whether disclosing AI assistance will be seen as efficiency or laziness.
Several participants described companies where employees avoided AI tools for fear of being seen as less capable if they relied on the technology. Others mentioned instances of “shadow use,” where people turned to AI in secret rather than risk looking incompetent by asking basic questions.
This creates a paradox: companies want AI adoption, but employees fear judgment for either using it or not knowing how.
The generational divide adds another layer of complexity. Younger employees often embrace AI naturally, while senior staff nearing retirement question why they should invest time in learning it. As one participant put it: "I've got five to ten years left in my career, why would I care about this?" This creates pockets of resistance that formal training programs alone can't address.
Creating spaces for vulnerability
To build psychological safety, organizations need intentional cultural work. Several strategies for HR emerged as leaders discussed their approaches:
- Normalizing vulnerability from the top down: Leaders who openly share their AI learning journey, mistakes included, create permission for others to experiment. As one participant emphasized, “The number one thing with leaders and psychological safety is that they expect others to be vulnerable first. But you haven't been vulnerable yourself.”
- Starting with experimentation, not perfection: Companies that frame AI adoption as continuous exploration rather than immediate mastery see higher engagement. "We're kind of always learning, we're trying new things," was described as a more effective approach than treating every AI application as high-stakes.
- Addressing the transparency question directly: Rather than pretending everything is human-generated, organizations need explicit conversations about when and how to disclose AI use, along with informal spaces where people can discuss their AI experiences without pressure.

Beyond prompting: Rethinking what "AI skills" actually means
If the first challenge is creating safe learning environments, the second is determining what people actually need to learn. When the conversation turned to skill mapping, a tension emerged immediately: AI skills feel simultaneously everywhere and nowhere, essential but difficult to define.
"AI is so intangible," one participant noted. "We use AI in every sentence, but what it is, it's really hard to make tangible."
Mindset over mastery
Multiple participants emphasized that successful AI adoption looks less like achieving technical competency and more like building daily habits.
As one leader put it: "It's not a skills thing. The skills will come. It's the mindset thing." The goal isn't teaching everyone to code or become prompt engineering experts; it's creating the habit of reaching for AI tools when appropriate.
For example, one participant recommended a simple habit: replace Google with ChatGPT for everyday searches.
“Instead of using Google, use ChatGPT—simple change. If you just use exactly the same input in Google, but put it into ChatGPT, you'll get the same result, but with much more details.”
Instead of overwhelming employees with prompt engineering courses or technical training, organizations can focus on incremental behavior change. Use AI for one task today. Tomorrow, try another. Build the muscle memory of turning to AI tools naturally.

Critical thinking becomes crucial
However, the mindset shift alone isn't sufficient. As AI makes content creation easier, critical thinking becomes more important, not less.
"We have a lot of young people who just copy-paste whatever AI gives them," one participant observed. "They don't sanitize it, don't check quality. That critical thinking—they don't have the experience yet to know when the output is wrong."
This points to a crucial insight: AI doesn't eliminate the need for human judgment; it amplifies the need for it. Without critical thinking skills, AI becomes a tool for generating plausible-sounding nonsense faster.
Organizations addressing this challenge are focusing on:
- Teaching people to use AI as a sparring partner, not an answer machine
- Requiring iterations rather than accepting first outputs
- Connecting AI outputs to business outcomes, so people understand when "good enough" isn't actually good enough
- Building mentoring relationships where experienced workers help newer employees develop judgment
Unlocking L&D budgets: Speaking the business language
The final roundtable confronted people leaders' eternal challenge: demonstrating ROI. Skills data should theoretically help, but the reality is more complicated.
“I never have a CEO asking for the skills data,” one participant stated bluntly. “They always want the business data.”

Here’s the fundamental problem: learning data lives everywhere and nowhere. Participants described downloading information from multiple systems - LMS platforms, external certifications, workshop registrations - with no unified view.
“There is not one dashboard that brings all the data together, so it's never a full picture,” one L&D leader explained.
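To make the fragmentation concrete, here is a minimal sketch of what a first consolidation step might look like, assuming three hypothetical CSV exports (lms_completions.csv, certifications.csv, workshops.csv), each keyed by an employee email. The file and column names are invented for illustration, not taken from any system mentioned in the discussion.

```python
import pandas as pd

# Hypothetical exports from three disconnected systems; file and
# column names are illustrative, not from any specific vendor.
lms = pd.read_csv("lms_completions.csv")    # email, course, completed_on
certs = pd.read_csv("certifications.csv")   # email, certification, issued_on
workshops = pd.read_csv("workshops.csv")    # email, workshop, attended_on

# Normalize each source into a common "learning event" shape so the
# three exports can be stacked into a single view per employee.
events = pd.concat([
    lms.rename(columns={"course": "activity", "completed_on": "date"})
       .assign(source="LMS"),
    certs.rename(columns={"certification": "activity", "issued_on": "date"})
         .assign(source="certification"),
    workshops.rename(columns={"workshop": "activity", "attended_on": "date"})
             .assign(source="workshop"),
], ignore_index=True)
events["date"] = pd.to_datetime(events["date"])

# One consolidated (if still incomplete) picture: activity counts and
# most recent learning activity per employee, per source.
summary = (events.groupby(["email", "source"])
                 .agg(activities=("activity", "count"),
                      last_activity=("date", "max"))
                 .reset_index())
print(summary.head())
```

Even a basic consolidation like this only shows that learning happened, which is exactly the limitation participants went on to describe.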
And even when data exists, it doesn't tell the complete story.
Certifications show someone completed training at a specific moment, but say nothing about retention, application, or ongoing competence. Self-assessments lack accuracy. And connecting training to business outcomes remains elusive.
Rather than relying solely on traditional L&D measurements, participants are experimenting with more sophisticated approaches:
- Skill gap analysis: Mapping individual competencies against role requirements, using self-assessment combined with peer feedback to create "spider web" visualizations showing gaps (a minimal sketch of such a chart follows this list).
- Learning velocity tracking: Measuring not just skill attainment but how quickly people advance, helping identify who needs additional support and what interventions work best.
- Connection to performance data: Layering skill assessments over performance metrics to identify correlations between capability development and business outcomes.
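As a rough illustration of the first approach, the sketch below renders a “spider web” (radar) chart comparing self-assessed skill levels against role requirements with matplotlib. The skill names and scores are invented for the example; in practice they would come from the combined self- and peer-assessment data described above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented example data: scores on a 1-5 scale for one employee.
skills = ["Prompting", "Critical thinking", "Data literacy",
          "Tool fluency", "Communication"]
role_requirement = [4, 5, 3, 4, 4]
self_assessment = [3, 3, 2, 4, 5]

# A radar chart needs a closed polygon, so repeat the first value.
angles = np.linspace(0, 2 * np.pi, len(skills), endpoint=False).tolist()
angles += angles[:1]
required = role_requirement + role_requirement[:1]
assessed = self_assessment + self_assessment[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, required, label="Role requirement")
ax.fill(angles, required, alpha=0.1)
ax.plot(angles, assessed, label="Self-assessment")
ax.fill(angles, assessed, alpha=0.1)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(skills)
ax.set_ylim(0, 5)
ax.legend(loc="upper right")
plt.show()
```

The gap between the two polygons is the visual cue participants described: wherever self-assessment falls inside the role requirement, there is a development need.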
The path forward
Three clear patterns emerged from these conversations:
First, successful AI adoption requires skills frameworks focused on mindsets and critical thinking rather than specific tool proficiency.
Second, psychological safety isn't a nice-to-have; it's the foundation that determines whether people will experiment, ask questions, and ultimately adopt new ways of working.
Third, L&D and HR must learn to speak the language of business impact, connecting skills development directly to organizational objectives.
Perhaps most importantly, these conversations revealed that most organizations are still in the process of figuring this out. There are no perfect playbooks, no guaranteed approaches. The leaders making progress are those willing to experiment, fail, learn, and try again - exactly the culture they're trying to build for AI adoption itself.
The real competitive advantage may not be in having all the answers, but in creating environments where people feel safe enough to explore the questions together.

We offer a scalable solution for employee training, so you can continuously upskill your people.
Book a call
