Three years ago, I started delivering AI training to enterprise teams with a set of assumptions about what would work. Most of those assumptions turned out to be wrong. This is what 5,000+ trained professionals — across Samsung Research, Deloitte, Synechron, WNS, and 15+ other organisations — actually taught me.
The findings
The biggest barrier to AI adoption is not scepticism — it's anxiety
Before I started delivering training, I expected to spend significant time convincing participants that AI was worth adopting. The opposite was true. By the time most professionals arrived in a training room, they were already convinced AI was important. What they felt wasn't scepticism — it was anxiety. Anxiety about being left behind. Anxiety about saying something wrong in front of their peers. Anxiety about looking incompetent with a tool everyone else seemed to understand.
The first job of good AI training isn't to persuade. It's to create psychological safety. Until participants feel safe to ask basic questions and make mistakes, nothing else sticks.
Executives learn faster than they expect — and slower than they hope
Senior leaders often arrive at AI training sessions with two conflicting beliefs: that they're too busy to spend time on fundamentals, and that they'll be able to absorb everything quickly. The reality is more nuanced. Executives typically grasp strategic AI concepts — use-case identification, risk frameworks, vendor evaluation — faster than mid-level managers, because they're already practised at abstract thinking.
Where they slow down is in developing genuine intuition for what AI can and cannot do. That intuition only comes from doing — from running prompts, seeing outputs fail, understanding why. The executives who leave training with the most clarity are invariably the ones who were willing to look slightly foolish by actually trying things in the session.
Generic training doesn't transfer — ever
This is perhaps the finding I feel most strongly about. When training uses examples from another industry, another company, or an abstract hypothetical scenario, participants engage intellectually but don't change their behaviour at work. The connection between "what I learned" and "what I do on Monday morning" simply doesn't fire.
We shifted every program to use the client's actual workflows, actual tools, and actual business problems as the training context. Engagement went up. Post-training application went up measurably. The content of the program was often similar; the context was entirely different. Context turns learning into behaviour.
The developer-manager gap is bigger than anyone expects
In the early days, we sometimes ran mixed cohorts — developers and business managers in the same room. The logic seemed sound: shared vocabulary, cross-functional understanding. In practice, it created two problems simultaneously. Developers were bored during the strategic foundations. Managers were lost during the technical implementation. Nobody got what they needed.
Role-differentiated training isn't a luxury. It's the baseline for effective delivery. The moment we separated cohorts by role, outcomes improved significantly for both groups.
Building something is worth more than understanding something
Across hundreds of training sessions, the single strongest predictor of post-training AI adoption was whether the participant had built something — a working prototype, a real workflow, a functional tool — during the training itself. Not a toy demo. Not a guided exercise following a step-by-step script. An actual thing they designed, built, and tested themselves.
Participants who built something during training were dramatically more likely to apply AI to their real work within two weeks. Participants who watched excellent demos and followed structured exercises were significantly less likely to do so — regardless of how well they scored on assessments.
The "AI champion" is the single most important variable
In almost every organisation where AI adoption accelerated after training, there was one person — sometimes two — who took it upon themselves to keep the momentum going. They shared what they'd learned with colleagues who hadn't attended. They documented the use cases that were working. They asked the uncomfortable questions in leadership meetings. They were informal advocates, not official AI leads.
Where these champions didn't exist or weren't supported, training outcomes faded within weeks regardless of how good the program was. The champion is the bridge between the training room and the organisation. Identifying and deliberately supporting them is one of the highest-leverage investments an L&D team can make.
India's enterprise AI context is genuinely different
This matters more than the training industry tends to acknowledge. Indian enterprise teams face a specific combination of constraints and opportunities that don't map cleanly onto Western training materials: large, multi-generational workforces with highly variable digital literacy; significant data localisation concerns; cost sensitivity that requires demonstrating ROI faster; and a preference for pragmatic, immediately applicable outcomes over theoretical frameworks.
Training designed for Silicon Valley teams or European corporations often lands awkwardly in Indian enterprise contexts — not because the technology is different, but because the organisational culture, risk appetite, and success criteria are different. Building training specifically for this context isn't niche positioning. It's a genuine response to a real gap.
"Every session teaches me something I didn't know going in. The best thing about training thousands of people is that you stop being able to hide behind theory. The room always tells you what's actually true."
What this means for your organisation
If I were designing an enterprise AI training program from scratch today, based solely on what I've observed in the field, here's what I'd insist on:
- Start with psychological safety, not content. The learning only happens when people feel safe to be beginners.
- Use real business context from day one. No hypotheticals. No generic examples from other industries.
- Separate cohorts by role. Leaders and developers need completely different programs.
- Make building mandatory. Every participant leaves with something they created — not something they watched someone else create.
- Identify AI champions before the training starts. Support them explicitly during and after.
- Measure behaviour change, not assessment scores. The question isn't "did they understand it?" It's "did they do anything differently on Monday?"
These aren't revolutionary ideas. They're just what the data, accumulated across thousands of training interactions, consistently points to. The organisations that get this right tend to accelerate. The ones that don't keep re-running the same training, wondering why nothing changes.