8 Hard-Won Lessons From Transforming a Global, 900-Person Team
What 18 months leading Honeywell's sales training overhaul taught me about competency models, content chaos, and why perfection is overrated.
When I took on the role of Senior Manager of Sales Training at Honeywell Process Solutions (HPS), I inherited what looked like a learning and development professional's dream: a $50B+ company, 900+ sellers across 40+ countries, executive buy-in for a complete training transformation, and a generous budget.
Eighteen months later, we'd delivered over 18,500 learning hours, reduced new hire onboarding time by 30%, and deployed Challenger methodology training to 300+ sales professionals. The metrics looked great in the year-end presentation.
But the real education happened in the failures, false starts, and course corrections along the way. Here are eight lessons I wish I'd known on day one—lessons that apply whether you're enabling 900 sellers or nine.
Lesson 1:
Executive Sponsorship Is Non-Negotiable (And You Have to Earn It Continuously)
The setup: I designed what I thought was an airtight Challenger Sales training program. Comprehensive curriculum, vetted by external consultants, pilot-tested with a small cohort. I announced the global rollout via email, set up registration links, and waited for the enrollments to pour in.
What happened: Crickets. Well, not quite—but attendance in certain regions hovered around 60%, while others hit 95%+.
The diagnosis: When I dug into the data, the pattern was obvious. Regions where Sales Directors personally nominated attendees, attended launch sessions themselves, and referenced Challenger in their team meetings had near-perfect completion rates. Regions where training was positioned as "available if you want it" treated it like a webinar they'd catch "when they had time" (read: never).
What I changed: I restructured the entire governance model:
Monthly regional vice president (RVP) reviews of training KPIs alongside sales performance metrics
Manager pre-work requirements: Leaders had to complete a 2-hour module before their teams could enroll
Executive dashboards showing training completion vs. quota attainment by region (suddenly, training gaps became uncomfortable in leadership meetings)
Nomination-based cohorts: Instead of open enrollment, sales leaders selected participants and explained why they were chosen
The result: Attendance jumped to 92% overall. More importantly, sales leadership stopped viewing training as an "HR thing" and started proactively requesting it for strategic initiatives.
The takeaway: Learning professionals must sell internally before they can enable external selling. Your stakeholders have ten competing priorities today—if you want training to be one of them, you need to make the business case, in their language, every single quarter.
Executive sponsorship isn't a one-time approval meeting. It's an ongoing relationship where you prove value, connect your work to their goals, and make them look good in front of their bosses.
Lesson 2:
Competency Models Without Action Plans Are Just Expensive Wallpaper
The setup: We invested in AuctusIQ, a sales competency assessment platform, and deployed it to 972 sellers and managers. The system measured ten competencies (Generating Opportunities, Navigating the Deal, Negotiating to Close, etc.) and generated beautiful individual reports showing strengths and development areas.
I was thrilled. We had data. We could finally move from "spray and pray" training to targeted development.
What happened: Three weeks after launch, I checked usage analytics. Only 18% of sellers had opened their reports. Of those, maybe half had shared them with their managers. The data was sitting in inboxes, ignored.
The diagnosis: Assessment data without a clear "so what?" is noise. Sellers saw their scores and thought, "Okay, I'm weak in Managing Procurement. Now what?" Managers felt overwhelmed: "I have eight direct reports with different development needs—how do I personalize coaching for all of them while hitting my number?"
What I changed: I built a five-step activation process:
Seller watches a 15-minute playbook intro video (not the full report—a digestible summary)
Seller completes a planning journal with access to Challenger resources linked to their top development areas
Manager attends a 90-minute training on coaching to competencies (with scripts for each competency)
Manager schedules an IDP conversation within 30 days (tracked in our LMS)
Monthly progress reviews against a 3-goal action plan
The result: Instead of reports gathering digital dust, 73% of sellers created development plans within 60 days. Managers reported feeling equipped to coach to specific behaviors rather than giving vague feedback like "be more consultative."
The takeaway: Data without activation is just pretty charts. Competency models, 360 reviews, skill assessments—they're all worthless unless you build workflows that translate insights into behavior change.
Don't ask, "Did people take the assessment?" Ask, "Did the assessment lead to a coaching conversation that changed how someone sold last week?"
Lesson 3:
Onboarding Is a Manager Development Problem in Disguise
The setup: I redesigned our sales onboarding program from scratch. Eight weeks, 70 LMS hours, beautifully sequenced modules covering product knowledge, sales process, systems training, and Challenger methodology. Completion rate: 92%. I was proud.
What happened: Six months later, I analyzed time-to-first-deal by territory. The data was confusing. Some regions had new hires closing deals in 10 weeks; others took 20+ weeks—despite identical onboarding completion rates.
The diagnosis: I shadowed new hires in high-performing and low-performing territories. The difference wasn't the curriculum—it was the manager.
In high-performing regions, managers:
Conducted weekly 1:1s focused on onboarding milestones
Rode along on calls and debriefed using Challenger frameworks
Introduced new hires to internal champions who could accelerate their learning
In low-performing regions, managers:
Pointed new hires to the LMS and said, "Let me know when you're done"
Were too busy closing their own deals to coach
Treated onboarding as HR's responsibility, not theirs
What I changed: I stopped designing onboarding around managers and started designing it through them. I added manager "pulse points" to the onboarding workbook—specific coaching conversations managers should have at weeks 2, 4, 6, and 8, with discussion guides for each:
Week 2: "Let's review your first account research—show me how you'd prioritize these contacts"
Week 4: "I want you to pitch our value prop to me as if I'm a skeptical procurement manager"
Week 6: "Walk me through a deal in your pipeline—where are you in the buyer journey?"
Week 8: "What's one thing you've learned that wasn't in the training? How are you adapting to our culture?"
I also trained managers on what "good" looks like at each stage, so they could calibrate their coaching.
The result: Time-to-first-deal dropped by an additional 15% in previously struggling regions. New hire retention improved because sellers felt supported rather than abandoned. And managers stopped viewing onboarding as "something HR does" and started owning it.
The takeaway: The best curriculum in the world can't compensate for a disengaged manager. If you're designing onboarding, spend as much time enabling managers as you do building content. They're your delivery system—if they don't know how to activate the learning, it won't stick.
Lesson 4:
Content Abundance Without Curation Is a Curse
The setup: When I inherited our sales enablement platform (Klyck.io), it housed 3,281 assets. Presentations, case studies, datasheets, battle cards, webinar recordings—everything a seller could ever need. I felt rich.
What happened: Sellers complained constantly: "I can't find anything." Usage data confirmed it: 80% of views went to 10% of content. The other 90% might as well not have existed.
The diagnosis: More resources ≠ more effectiveness. Sellers don't need a library; they need a path. When you have 3,000+ assets with inconsistent tagging, no search guidance, and no curation, you've created a labyrinth, not a resource.
What I changed: Instead of adding more content (which was my instinct), I partnered with marketing to:
Create vertical-specific "Pages" (curated collections): "Everything you need to sell Life Sciences accounts" with 12 assets vs. 200
Tag content by buyer journey stage: Awareness, Consideration, Decision, Expansion
Establish governance rules: "If you do a webinar, sprint week, or campaign, corresponding resources must be in Klyck within 48 hours—no exceptions"
Train marketers on taxonomy: We created a tagging cheat sheet so content was categorized consistently (a sketch of the idea follows this list)
Spotlight the "most viewed": Weekly emails featuring top-performing content ("Here's what your peers are using to close deals")
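For readers who like governance made executable: here is a minimal sketch of how a taxonomy like ours could be enforced before an asset is published. The verticals, journey stages, and field names below are illustrative assumptions, not Klyck.io's actual schema.

```python
# Minimal sketch: enforcing a content taxonomy at publish time.
# Verticals, stages, and field names are illustrative, not Klyck.io's schema.
from dataclasses import dataclass

JOURNEY_STAGES = {"Awareness", "Consideration", "Decision", "Expansion"}
VERTICALS = {"Life Sciences", "Petrochemical", "Pulp & Paper"}

@dataclass
class Asset:
    title: str
    vertical: str
    stage: str
    asset_type: str  # e.g., "case study", "battle card", "datasheet"

    def tagging_problems(self) -> list[str]:
        """Return tagging errors; an empty list means the asset is publishable."""
        problems = []
        if self.vertical not in VERTICALS:
            problems.append(f"unknown vertical: {self.vertical!r}")
        if self.stage not in JOURNEY_STAGES:
            problems.append(f"unknown journey stage: {self.stage!r}")
        return problems

# The 48-hour governance rule only works if bad tags are caught up front.
asset = Asset("Batch variability case study", "Life Sciences", "Consideration", "case study")
assert asset.tagging_problems() == []
```

The specific tool matters less than the principle: a controlled vocabulary that gets validated automatically, rather than a free-text tag field that drifts into chaos.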
The result: Content grew by only 13% over the next year, but engagement metrics exploded:
+38% unique users
+53% content opens
+350% page views
Sellers reported finding what they needed in minutes instead of asking colleagues or Googling for old presentations.
The takeaway: Enablement is editing as much as creating. Before you commission another asset, ask: "Do we have seven versions of this that no one can find?" Curation, taxonomy, and governance matter more than volume.
If your sellers are drowning in content, you don't have a resource problem—you have a findability problem.
Lesson 5:
Training to "Change Behavior" Requires Changing the System
The setup: After six months of Challenger training, I attended a regional sales meeting to see the methodology in action. I sat in on deal reviews, pipeline discussions, and forecast calls. I expected to hear sellers using Challenger language—Commercial Teaching, Constructive Tension, Tailoring the message to buyer personas.
What happened: Most sellers reverted to old habits. Forecast calls sounded like pre-Challenger forecast calls. Managers asked traditional questions: "What's the close date?" "Have you talked to the decision-maker?" "What's the competition?"
The diagnosis: When I shadow-coached a few sellers, they could articulate Challenger concepts. They'd learned the material. But the system hadn't changed to reinforce it.
Deal review templates still asked for BANT qualification (Budget, Authority, Need, Timeline)—not Challenger concepts like "What commercial insight did you teach the buyer?" Manager coaching guides didn't reference Challenger. Even our CRM opportunity stages didn't align with Challenger's buying journey model.
The training was an intervention. But behavior change requires a system.
What I changed: I worked with sales operations and leadership to align the infrastructure:
Sales Manager Playbook: I created a guide with Challenger-aligned deal review questions ("What's the unconsidered need you've surfaced?" "How have you reframed the buyer's thinking?")
MOS templates: Manager Operating Systems (daily schedules) now blocked time for account planning using Challenger frameworks
Win/loss story templates: Sellers had to describe which Challenger behaviors contributed to wins
Performance reviews: I lobbied for Challenger competencies to be added to the annual review rubric
The result: Six months post-training, 68% of sellers demonstrated Challenger techniques in live deal coaching vs. 22% pre-training. Managers became reinforcement mechanisms, not just training attendees.
The takeaway: Training is an event; behavior change is a system. If your tools, processes, incentives, and manager expectations don't align with what you taught, the training is theater.
Before you design the next workshop, ask: "What would need to change in our daily workflows for this to stick?"
Lesson 6:
Global Programs Require Local Empathy (Not Just Translation)
The setup: Early Challenger cohorts in AMER and EMEA went great—strong attendance, high engagement scores, enthusiastic feedback. When we rolled out APAC cohorts, I expected similar results.
What happened: Attendance was lower (74% vs. 92% in other regions). Post-training surveys showed higher "somewhat satisfied" ratings vs. "very satisfied." In follow-up interviews, sellers were polite but lukewarm.
The diagnosis: The content wasn't culturally calibrated. Some case studies featured aggressive buyer confrontation that felt uncomfortable in markets where relationship preservation matters more than "constructive tension." Role plays assumed direct communication styles that don't work in high-context markets like Japan and Southeast Asia.
One seller in Singapore told me: "The Challenger approach makes sense—but if I push back on a customer like the case study showed, I'll damage the relationship. Can you help me adapt this?"
What I changed: I stopped treating "global rollout" as "same content, different time zone." I partnered with regional sales leaders to:
Adapt case studies: We replaced US-based examples with regional customer scenarios (e.g., petrochemical buyers in Malaysia, pulp-and-paper in Indonesia)
Build "teach-back" sessions: Sellers shared how they'd apply Challenger in their cultural context (this was often more valuable than my instruction)
Respect time zones: I stopped scheduling training at 9am US Central (10pm or later in Singapore) and offered APAC-friendly sessions
Translate materials: We provided pre-work in Mandarin, Japanese, and Bahasa Indonesia for non-English-first speakers
The result: APAC attendance improved to 88%, and post-training satisfaction scores matched AMER/EMEA. More importantly, regional leaders became advocates instead of skeptics—they started requesting additional training because they saw it as theirs, not something imposed by headquarters.
The takeaway: Global scale requires local listening. "One size fits all" training is a myth. Cultural adaptation isn't optional—it's the difference between adoption and compliance.
Before you roll out globally, ask regional leaders: "What would make this feel relevant to your market?" Then actually change the content based on their feedback.
Lesson 7:
Measurement Should Inspire Action, Not Just Inform Spreadsheets
The setup: Like most L&D professionals, I tracked the standard metrics: completion rates, satisfaction scores, learning hours delivered. I presented them in quarterly business reviews. Executives nodded politely, said "nice work," and moved on.
What happened: I wasn't getting traction for Year 2 investment. The CFO asked, "How do we know this training drove results?" I pointed to my completion rates. He said, "That tells me people finished modules. Did it change how they sell? Did it impact revenue?"
I didn't have an answer.
The diagnosis: I was measuring activity, not outcomes. Hours delivered and satisfaction scores are inputs—they don't tell the business what changed as a result of the training.
What I changed: I started connecting training data to business outcomes by partnering with sales operations (a simplified sketch of the first analysis follows this list):
Cohort analysis: "Sellers who completed Challenger training within 90 days of hire had 34% higher Q1 quota attainment than those who didn't"
Regional correlation: "Regions with 80%+ manager training completion had 12% better pipeline quality scores"
Competency-performance mapping: "AuctusIQ scores in 'Navigating the Deal' correlated with 19% higher win rates"
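To make the cohort analysis concrete, here is a simplified sketch of the kind of query sales ops and I ran. The file names, column names, and the 90-day completion flag are assumptions about the exports; your CRM and LMS will differ.

```python
# Simplified sketch of the cohort analysis; all file and column names are assumed.
import pandas as pd

# lms_completions.csv: seller_id, completed_within_90d (True/False)
# q1_attainment.csv:   seller_id, quota_attainment (e.g., 0.87 = 87% of quota)
completions = pd.read_csv("lms_completions.csv")
attainment = pd.read_csv("q1_attainment.csv")

cohorts = completions.merge(attainment, on="seller_id")

# Average Q1 quota attainment for early completers vs. everyone else.
summary = cohorts.groupby("completed_within_90d")["quota_attainment"].agg(["mean", "count"])
print(summary)

lift = summary.loc[True, "mean"] / summary.loc[False, "mean"] - 1
print(f"Early completers attained {lift:.0%} more of quota on average.")
# Correlation, not causation: we paired numbers like these with field interviews.
```

The point is less the code than the join: once training completions and quota attainment share a seller ID, the CFO's question ("Did it change how they sell?") becomes answerable with evidence instead of completion rates.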
I also borrowed finance's language. Instead of "7,111 learning hours," I said, "We invested $X in training—here's the expected ROI based on productivity gains."
The result: Sales leadership stopped seeing training as a cost center and started treating it as a revenue lever. I secured Year 2 budget expansion based on ROI projections, not completion rates.
The takeaway: Speak the language of the business. Hours and satisfaction scores matter to you—but your stakeholders care about revenue, retention, productivity, and margin.
Before your next exec presentation, ask: "If I were a Sales VP looking at this data, would it convince me to invest more?"
Lesson 8:
Perfection Is the Enemy of Momentum (And Sellers Are Better Co-Designers Than You Think)
The setup: I spent six weeks perfecting the new hire onboarding workbook. Beautiful design, comprehensive checklists, every possible scenario covered. I wanted it to be flawless before launch.
What happened: By the time I released it, we'd already onboarded 15 people using the old, broken process. And within two weeks of launch, sellers started pointing out gaps I'd missed despite my "perfection."
The diagnosis: I was designing in a vacuum. I was optimizing for theoretical completeness instead of real-world usability. And I was prioritizing my pride over the sellers' needs.
What I changed: I adopted an agile approach to L&D:
Launch MVPs: Every program went out with "Version 1.0" clearly labeled
Build feedback loops: Pulse surveys after every module, retrospectives after every cohort
Commit to iteration: I promised quarterly updates based on user input
Make sellers co-designers: I recruited a "seller advisory council" that reviewed drafts before launch
The result: Programs improved faster because real user feedback trumped my assumptions. Sellers appreciated the transparency ("We know this isn't perfect—help us make it better") and became invested in their success. And I spent less time polishing presentations and more time solving real problems.
The takeaway: Done is better than perfect. Build feedback into the DNA of your programs. Your sellers will tell you what's broken—if you're willing to listen and act quickly.
The Meta-Lesson:
Learning Leaders Are Change Agents First, Trainers Second
Here's the biggest lesson I learned: My job wasn't to deliver training—it was to change how HPS sold.
That required influence without authority, storytelling with data, and relentless stakeholder management. The programs I designed mattered less than the coalitions I built, the executive sponsors I cultivated, and the systems I aligned.
If I had to quantify my time, the split looked like this:
40%: Designing learning experiences
60%: Building coalitions, navigating politics, proving value, and keeping stakeholders engaged
If you're not comfortable with that split, learning leadership will frustrate you. But if you embrace it—if you see yourself as a change agent who happens to use training as a tool—you'll have far more impact than the person who just builds great courses.
What Would I Do Differently?
If I could do it again, I'd:
Start with one high-visibility pilot instead of trying to boil the ocean (one region, one cohort, measurable results—then scale)
Hire a data analyst earlier (I wasted months manually pulling reports when I should've automated from day one)
Create a seller advisory council on day 1 (not month 6)
Spend more time in the field (I should've shadowed sellers every month, not just during needs analysis)
Say "no" more often (I tried to solve every stakeholder request—focus would've served me better)
Final Thought
Eighteen months, 18,500 learning hours, 900+ sellers, 40+ countries. The metrics look impressive. But the real transformation happened in the small moments:
The manager who stopped asking about close dates and started asking about commercial insights
The new hire who closed her first deal in week 10 instead of month 6
The seller in Japan who adapted Challenger to fit his culture and then taught his peers
The executive who said, "Training isn't an expense—it's how we win"
That's what success looks like in learning and development. Not the dashboard. Not the completion rate. The moment when learning becomes behavior, and behavior becomes results.
____________________
Want to transform your organization's performance through strategic learning? Let's talk about how these lessons can apply to your team—whether you're scaling globally or building your first enablement program.
____________________
About the Author: Elle Tokpah is a learning strategist specializing in corporate learning, sales enablement, leadership development, and organizational effectiveness. She currently serves as Senior Manager of Commercial Enablement and Effectiveness at Thermo Fisher Scientific and will hold a Master of Public Administration from Wake Forest University in Spring 2026. Connect with her on LinkedIn or visit elletokpah.com.