Using Data to Improve Training Programs

Explore top LinkedIn content from expert professionals.

  • View profile for Justin Seeley

    L&D Community Advocate | Sr. Learning Evangelist, Adobe

    11,433 followers

    Your instructional designers are wasting their talent building courses nobody asked for.

    I see it everywhere. Brilliant L&D teams spend months crafting beautiful, interactive modules about "Professional Email Etiquette" or "Workplace Wellness" while the sales team is begging for help with objection handling and the customer success team can't figure out why retention is tanking. We've turned instructional design into an art project instead of a business solution.

    Here's what's happening: Someone in leadership says, "We need training on X," and your team jumps into action. They research learning theories, build personas, create storyboards, and design gorgeous courses. Six months later, completion rates are 12% and nothing has changed.

    Meanwhile, the real problems are hiding in plain sight. People are struggling, metrics are declining, and teams are frustrated. But nobody thought to ask the humans doing the work what they needed to learn.

    Here's where it gets interesting: AI-powered learning platforms finally give us better ways to understand people's needs. Instead of guessing based on annual surveys, these systems can track learning patterns, identify skill gaps through competency mapping, and help you spot where interventions might make a difference.

    The best instructional designers I know spend more time in the business than at their desks. They're on sales calls, watching customer interactions, sitting with support teams, and asking, "What's making your job harder than it should be?" Now, they can use data from their LMS to validate those hunches and see which learning paths actually correlate with better performance.

    Stop designing courses for compliance checklists and start creating solutions for real people with real problems. Let the data help you find those problems faster, but remember that correlation isn't causation. Your job isn't to make training. Your job is to make people better at their jobs. There's a massive difference.

    Want to know if your L&D team is on the right track? Ask them, "What business problem did you solve this month?" If they can't answer immediately, you've got some redirecting to do.

    L&D leaders, what's the most impactful learning solution your team built by talking to the people who needed it?

    #InstructionalDesign #LearningAndDevelopment #BusinessAlignment #LDLeadership
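
A minimal sketch of the validation step the post describes: checking whether completing a learning path correlates with a performance metric exported from the LMS and CRM. File names and column names here are hypothetical placeholders, not any particular vendor's schema.

```python
# Hypothetical sketch: does completing a learning path correlate with a
# performance metric? File and column names are placeholders.
import pandas as pd

lms = pd.read_csv("lms_completions.csv")   # employee_id, learning_path, completed (0/1)
perf = pd.read_csv("performance.csv")      # employee_id, quota_attainment

df = lms.merge(perf, on="employee_id")

# Correlation between completion flag and performance, per learning path.
for path, group in df.groupby("learning_path"):
    r = group["completed"].corr(group["quota_attainment"])
    print(f"{path}: r = {r:.2f} (n = {len(group)})")

# As the post notes: correlation isn't causation. Treat this as a signal of
# where to look, not proof that the training caused the lift.
```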

  • View profile for Roxanne Bras Petraeus

    CEO @ Ethena | Helping Fortune 500 companies build ethical & inclusive teams | Army vet & mom

    21,379 followers

    The DOJ consistently says that compliance programs should be effective, data-driven, and focused on whether employees are actually learning. Yet... the standard training "data" is literally just completion data!

    Imagine if I asked a revenue leader how their sales team was doing and the leader said, "100% of our sales reps came to work today." I'd be furious! How can I assess effectiveness if all I have is an attendance list?

    Compliance leaders I chat with want to move to a data-driven approach, but change management is hard, especially with clunky tech. Plus, it's tricky to know where to start: you often can't go from 0 to 60 in a quarter.

    In case this serves as inspiration, here are a few things Ethena customers are doing to make their compliance programs data-driven and learning-focused:

    1. Employee-driven learning: One customer is asking, at the beginning of their code of conduct training, "Which topic do you want to learn more about?" and then offering a list. Employees get different training based on their selection...and no, "No training pls!" is not an option. The compliance team gets to see what issues are top of mind and then they can focus on those topics throughout the year.

    2. Targeted training: Another customer is asking, "How confident are you raising bribery concerns in your team?" and then analyzing the data by department and country. They've identified the top 10 teams they are focusing their ABAC training and communications on, because prioritization is key.

    You don't need to move from the traditional, completion-focused model to a data-driven program all at once. But take incremental steps to layer on data that surfaces risks and lets you prioritize your efforts. And your vendor should be your thought partner, not the obstacle, in this journey! I've seen Ethena's team work magic in terms of navigating concerns like PII and LMS limitations – it can be done!
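
A rough sketch of the "targeted training" idea above: aggregate a confidence question by department and country, then rank the lowest-confidence groups as candidates for focused ABAC training. The data layout and scale are assumptions, not Ethena's actual implementation.

```python
# Hypothetical sketch: rank teams by self-reported confidence to prioritize
# anti-bribery/anti-corruption (ABAC) training. Toy data, assumed 1-5 scale.
import pandas as pd

responses = pd.DataFrame({
    "department": ["Sales", "Sales", "Finance", "Ops", "Ops", "Finance"],
    "country":    ["US", "BR", "US", "BR", "US", "DE"],
    "confidence": [4, 2, 5, 1, 3, 4],   # "How confident are you raising bribery concerns?"
})

by_team = (responses
           .groupby(["department", "country"])["confidence"]
           .agg(["mean", "count"])
           .sort_values("mean"))

# Lowest-confidence teams first: the short list for training and comms.
print(by_team.head(10))
```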

  • View profile for Mohsen Rafiei, Ph.D.

    UXR Lead | Assistant Professor of Psychological Science

    9,909 followers

    A good survey works like a therapy session. You don't begin by asking for deep truths; you guide the person gently through context, emotion, and interpretation. When done in the right sequence, your questions help people articulate thoughts they didn't even realize they had.

    Most UX surveys fall short not because users hold back, but because the design doesn't help them get there. They capture behavior and preferences but often miss the emotional drivers, unmet expectations, and mental models behind them.

    In cognitive psychology, we understand that thoughts and feelings exist at different levels. Some answers come automatically, while others require reflection and reconstruction. If a survey jumps straight to asking why someone was frustrated, without first helping them recall the situation or how it felt, it skips essential cognitive steps. This often leads to vague or inconsistent data.

    When I design surveys, I use a layered approach grounded in models like Levels of Processing, schema activation, and emotional salience. It starts with simple, context-setting questions like "Which feature did you use most recently?" or "How often do you use this tool in a typical week?" These may seem basic, but they activate memory networks and help situate the participant in the experience. Visual prompts or brief scenarios can support this further.

    Once context is active, I move into emotional or evaluative questions (still gently), asking things like "How confident did you feel?" or "Was anything more difficult than expected?" These help surface emotional traces tied to memory. Using sliders or response ranges allows participants to express subtle variations in emotional intensity, which matters because emotion often turns small usability issues into lasting negative impressions.

    After emotional recall, we move into the interpretive layer, where users start making sense of what happened and why. I ask questions like "What did you expect to happen next?" or "Did the interface behave the way you assumed it would?" to uncover the mental models guiding their decisions. At this stage, responses become more thoughtful and reflective. While we sometimes use AI-powered sentiment analysis to identify patterns in open-ended responses, the real value comes from the survey's structure, not the tool.

    Only after guiding users through context, emotion, and interpretation do we include satisfaction ratings, prioritization tasks, or broader reflections. When asked too early, these tend to produce vague answers. But after a structured cognitive journey, feedback becomes far more specific, grounded, and actionable. Adaptive paths or click-to-highlight elements often help deepen this final stage.

    So, if your survey results feel vague, the issue may lie in the pacing and flow of your questions. A great survey doesn't just ask, it leads. And when done right, it can uncover insights as rich as any interview.

    *I've shared an example structure in the comment section.
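
An illustrative encoding of the layered flow described above (not the author's exact example structure, which was shared in the comments): each section only unlocks after the previous layer, so evaluation questions always come last.

```python
# Illustrative sketch: a layered survey as ordered sections, advancing from
# context to emotion to interpretation before any overall evaluation.
SURVEY_FLOW = [
    {"layer": "context", "questions": [
        "Which feature did you use most recently?",
        "How often do you use this tool in a typical week?",
    ]},
    {"layer": "emotion", "questions": [
        "How confident did you feel?",                 # slider, 0-100
        "Was anything more difficult than expected?",
    ]},
    {"layer": "interpretation", "questions": [
        "What did you expect to happen next?",
        "Did the interface behave the way you assumed it would?",
    ]},
    {"layer": "evaluation", "questions": [
        "Overall, how satisfied are you?",             # deliberately last
    ]},
]

for section in SURVEY_FLOW:
    print(section["layer"].upper())
    for question in section["questions"]:
        print("  -", question)
```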

  • View profile for Jonathan Raynor

    CEO @ Fig Learning | L&D is not a cost, it’s a strategic driver of business success.

    21,019 followers

    AI is only as smart as the data you feed it. Most HR teams already have the data. But it's buried in the wrong formats. At Fig Learning, we help HR leaders unlock it. Here's how to make your data AI-ready.

    Structured vs. Unstructured: What's the difference?
    Structured = ready to use. Labeled, searchable, clean data in tools like LMSs.
    Unstructured = hidden value. Think emails, transcripts, PDFs, and feedback notes.
    Structured data is plug-and-play. Unstructured data needs work - but holds gold.

    Step 1: Audit your data sources
    Where does learning actually live right now? Start by mapping your tools, folders, and files:
    - LMS reports?
    - Post-training surveys?
    - Feedback forms?
    - Meeting notes?
    Inventory what you touch often but never analyze.

    Step 2: Prioritize what to work on
    Not all messy data is worth it. Start with content that's high-volume and high-impact. Focus on:
    - Post-training feedback
    - Coaching and 1:1 notes
    - Workshop or debrief transcripts
    - Policy docs in unreadable formats
    This is where insights are hiding.

    Step 3: Structure the unstructured
    Use lightweight AI tools to make it usable. Try:
    - ChatGPT Enterprise to tag and summarize
    - Otter.ai / TLDV to transcribe and recap
    - Guidde to turn steps into searchable guides
    And tag docs with topic, team, and timestamp.

    Step 4: Train AI on what matters
    Once structured, your data becomes leverage. Use it to power SOPs, checklists, or internal bots. Let AI write based on your real examples. It will save time and multiply your reach.

    Good AI starts with good prep. Don't feed it chaos. Feed it clarity.

    P.S. Want my free L&D strategy guide?
    1. Scroll to the top
    2. Click "Visit my website"
    3. Download your free guide.
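
A minimal sketch of Step 3, assuming plain-text feedback notes sit in a folder. A real pipeline might call an LLM to tag and summarize; here a simple keyword rule stands in so the example stays self-contained. The folder name, keyword map, and helper are all hypothetical.

```python
# Hypothetical sketch: tag unstructured feedback notes with topic, team,
# and timestamp so they become searchable, structured records.
from datetime import datetime, timezone
from pathlib import Path

TOPIC_KEYWORDS = {
    "onboarding": ["onboarding", "new hire"],
    "sales": ["objection", "pipeline", "quota"],
    "compliance": ["policy", "code of conduct"],
}

def tag_document(path: Path, team: str) -> dict:
    text = path.read_text(encoding="utf-8").lower()
    topics = [t for t, kws in TOPIC_KEYWORDS.items() if any(k in text for k in kws)]
    return {
        "file": path.name,
        "team": team,
        "topics": topics or ["uncategorized"],
        "tagged_at": datetime.now(timezone.utc).isoformat(),
    }

records = [tag_document(p, team="L&D") for p in Path("feedback_notes").glob("*.txt")]
print(records)
```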

  • View profile for Nick Lawrence

    Outcomes, Outputs, & Obstacles || Enabling reps to achieve outcomes and produce outputs by removing obstacles @ Databricks

    9,430 followers

    Don't ask your trainees to rank how confident they feel:

    —

    "After the training, I feel confident to perform my job."
    1) Strongly Disagree
    2) Disagree
    3) Neither Agree nor Disagree
    4) Agree
    5) Strongly Agree

    —

    You'll end up with an average of 3.9 (or something like that). But what are you supposed to do with a 3.9? What decisions should you make? What specific actions should be taken? It's impossible to know.

    Instead: Ask questions that reveal insights related to the effectiveness of the training.

    —

    "How confident are you when applying this training to real work situations? (Select all that apply)"
    A) I AM CONFIDENT I can successfully perform because I PERFORMED REAL WORK during the training and received HANDS-ON COACHING
    B) I AM CONFIDENT because the training challenged me WITH AMPLE PRACTICE on WORK-RELATED TASKS
    C) I'M NOT FULLY CONFIDENT because the training DID NOT PROVIDE ENOUGH practice on WORK-RELATED TASKS
    D) I AM NOT CONFIDENT because the training DID NOT challenge me with practice on WORK-RELATED TASKS
    E) I HAVE ZERO CONFIDENCE that I can successfully perform because the training DID NOT REVIEW WORK-RELATED TASKS

    —

    One look at survey results that gauge the effectiveness of training this way will leave you with immediate decisions to make and actions to take.

    #salesenablement #salestraining

    PS - "confidence to apply" is only one important factor to assess. Read Will Thalheimer's "Performance-Focused Learner Surveys" for the other pillars of training effectiveness.
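
A small sketch of why this question format is actionable: tally the diagnostic options and map each one to a follow-up action, so the results point directly at what to change. The option-to-action mapping and responses below are illustrative assumptions.

```python
# Hypothetical sketch: count A-E selections and surface the action each
# option implies for the training design.
from collections import Counter

ACTIONS = {
    "A": "Keep the real-work practice and hands-on coaching.",
    "B": "Keep the practice volume; consider adding coaching.",
    "C": "Add more practice on work-related tasks.",
    "D": "Redesign exercises around actual work-related tasks.",
    "E": "Rebuild the program: it never reviewed work-related tasks.",
}

# Each respondent may select multiple options ("select all that apply").
responses = [["A"], ["C", "D"], ["B"], ["C"], ["E"], ["A", "B"]]

counts = Counter(option for r in responses for option in r)
for option, n in counts.most_common():
    print(f"{option} x{n}: {ACTIONS[option]}")
```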

  • View profile for Xavier Morera

    Helping companies reskill their workforce with AI-assisted video generation | Founder of Lupo.ai and Pluralsight author | EO Member | BNI

    7,539 followers

    𝗠𝗲𝗮𝘀𝘂𝗿𝗶𝗻𝗴 𝘁𝗵𝗲 𝗜𝗺𝗽𝗮𝗰𝘁 𝗼𝗳 𝗬𝗼𝘂𝗿 𝗧𝗿𝗮𝗶𝗻𝗶𝗻𝗴 𝗣𝗿𝗼𝗴𝗿𝗮𝗺 📚

    Creating a training program is just the beginning—measuring its effectiveness is what drives real business value. Whether you're training employees, customers, or partners, tracking key performance indicators (KPIs) ensures your efforts deliver tangible results. Here's how to evaluate and improve your training initiatives:

    1️⃣ Define Clear Training Goals 🎯
    Before measuring, ask:
    ✅ What is the expected outcome? (Increased productivity, higher retention, reduced support tickets?)
    ✅ How does training align with business objectives?
    ✅ Who are you training, and what impact should it have on them?

    2️⃣ Track Key Training Metrics 📈
    ✔️ Employee Performance Improvements: Are employees applying new skills? Has productivity or accuracy increased? Compare pre- and post-training performance reviews.
    ✔️ Customer Satisfaction & Engagement: Are customers using your product more effectively? Measure support ticket volume—a drop indicates better self-sufficiency. Use Net Promoter Score (NPS) and Customer Satisfaction Score (CSAT) to gauge satisfaction.
    ✔️ Training Completion & Engagement Rates: Track how many learners start and finish courses. Identify drop-off points to refine content. Analyze engagement with interactive elements (quizzes, discussions).
    ✔️ Retention & Revenue Impact 💰: Higher engagement often leads to lower churn rates. Measure whether trained customers renew subscriptions or buy additional products. Compare team retention rates before and after implementing training programs.

    3️⃣ Use AI & Analytics for Deeper Insights 🤖
    ✅ AI-driven learning platforms can track learner behavior and recommend improvements.
    ✅ Dashboards with real-time analytics help pinpoint what's working (and what's not).
    ✅ Personalized adaptive training keeps learners engaged based on their progress.

    4️⃣ Continuously Optimize & Iterate 🔄
    Regularly collect feedback through surveys and learner assessments. Conduct A/B testing on different training formats. Update content based on business and industry changes.

    🚀 A data-driven approach to training leads to better learning experiences, higher engagement, and stronger business impact.

    💡 How do you measure your training program's success? Let's discuss!

    #TrainingAnalytics #AI #BusinessGrowth #LupoAI #LearningandDevelopment #Innovation
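
An illustrative sketch of three of the metrics listed above computed from toy data: completion rate, average pre/post score gain, and the change in support ticket volume. All figures are assumptions for demonstration only.

```python
# Hypothetical sketch: basic training KPIs from assumed toy data.
import statistics

learners = [
    {"completed": True,  "score_pre": 62, "score_post": 78},
    {"completed": True,  "score_pre": 70, "score_post": 81},
    {"completed": False, "score_pre": 55, "score_post": 55},
]
tickets_before, tickets_after = 340, 295   # monthly support ticket counts

completion_rate = sum(l["completed"] for l in learners) / len(learners)
score_gain = statistics.mean(l["score_post"] - l["score_pre"] for l in learners)
ticket_change = (tickets_after - tickets_before) / tickets_before

print(f"Completion rate: {completion_rate:.0%}")
print(f"Avg. pre/post score gain: {score_gain:+.1f} points")
print(f"Support ticket change: {ticket_change:+.1%}")
```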

  • View profile for Kurt Kaufmann

    Co-Founder and Chief Executive Officer

    6,107 followers

    How dispensaries train employees is changing. Since forever, retail and education leadership have been left in the dark about how employees perform and what they can do to help. Data visibility makes the difference here. Retailers can leverage their own historical data to understand where to improve employee performance.

    Here's how Seed Talent does it today:

    1. Analyze
    Analyze current qualitative and quantitative data to identify employee gaps in performance where supportive resources are needed. POS data is the key to retailers knowing whether or not employees are effective in their roles or may need additional sales or other supportive training!

    2. Assign & Alert
    Automatically assign and alert employees of supportive training resources to address their performance gaps. Once the employee receives an alert, they have a set amount of time to complete their courses before sequential reminders are sent, helping managers stay on task.

    3. Report
    Set up your KPIs and track progress against training after implementation. You can use your store dashboard to understand the direct business impact of upskilling your team. These reports will ultimately help you enable your team and make better strategic decisions about the role of training at your dispensary.

    Being able to measure the impact of resourcing and training your teams changes the conversation from box-checking courses to actions that measurably drive your business forward!
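
A rough sketch of the analyze-then-assign pattern described above, not Seed Talent's actual implementation: flag employees whose POS metrics trail a benchmark and queue a supportive course with a completion deadline. The metrics, benchmark, and course name are made-up placeholders.

```python
# Hypothetical sketch: use POS stats to flag performance gaps and assign
# supportive training with a due date.
from datetime import date, timedelta

pos_stats = [
    {"employee": "Ana",  "avg_basket": 41.0, "units_per_txn": 2.1},
    {"employee": "Ben",  "avg_basket": 27.5, "units_per_txn": 1.2},
    {"employee": "Cleo", "avg_basket": 35.0, "units_per_txn": 1.8},
]
BASKET_BENCHMARK = 33.0  # assumed store-level target

assignments = []
for row in pos_stats:
    if row["avg_basket"] < BASKET_BENCHMARK:
        assignments.append({
            "employee": row["employee"],
            "course": "Upselling fundamentals",
            "due": date.today() + timedelta(days=14),
        })

for a in assignments:
    print(f"Assign '{a['course']}' to {a['employee']} (due {a['due']})")
```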

  • View profile for Scott Burgess

    CEO at Continu - #1 Enterprise Learning Platform

    7,039 followers

    Did you know that 92% of learning leaders struggle to demonstrate the business impact of their training programs?

    After a decade of working on learning analytics solutions at Continu, I've discovered a concerning pattern: most organizations are investing millions in L&D while measuring almost nothing that matters to executive leadership.

    The problem isn't a lack of data. Most modern LMSs capture thousands of data points from every learning interaction. The real challenge is transforming that data into meaningful business insights. Completion rates and satisfaction scores might look good in quarterly reports, but they fail to answer the fundamental question: "How did this learning program impact our business outcomes?"

    Effective measurement requires establishing a clear line of sight between learning activities and business metrics that matter. Start by defining your desired business outcomes before designing your learning program. Is it reducing customer churn? Increasing sales conversion? Decreasing safety incidents? Then build measurement frameworks that track progress against these specific objectives.

    The most successful organizations we work with have combined traditional learning metrics with business impact metrics. They measure reduced time-to-proficiency in dollar amounts. They quantify the relationship between training completions and error reduction. They correlate leadership development with retention improvements.

    Modern learning platforms with robust analytics capabilities make this possible at scale. With advanced BI integrations and AI-powered analysis, you can now automatically detect correlations between learning activities and performance outcomes that would have taken months to uncover manually.

    What business metric would most powerfully demonstrate your learning program's value to your executive team? And what's stopping you from measuring it today?

    #LearningAnalytics #BusinessImpact #TrainingROI #DataDrivenLearning
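
One of the impact metrics named above, reduced time-to-proficiency expressed in dollars, is simple to sketch. Every figure here is an assumption to show the arithmetic, not data from Continu or any customer.

```python
# Illustrative sketch: translating a shorter ramp-to-proficiency into an
# annual dollar estimate. All inputs are assumed placeholder values.
ramp_days_before = 90          # avg days to proficiency before the program
ramp_days_after = 68           # avg days to proficiency after the program
new_hires_per_year = 40
daily_value_gap = 350.0        # assumed $ lost per day a hire is not yet proficient

days_saved = ramp_days_before - ramp_days_after
annual_impact = days_saved * daily_value_gap * new_hires_per_year
print(f"Estimated annual impact: ${annual_impact:,.0f} "
      f"({days_saved} days saved per hire x {new_hires_per_year} hires)")
```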

  • View profile for Dr. Alaina Szlachta

    Creating bespoke assessment and data solutions for industry leaders • Author • Founder • Measurement Architect •

    6,931 followers

    Demonstrating the value of learning is easier than you think! In a recent workshop with The Institute for Transfer Effectiveness, I demonstrated how!

    One workshop participant was designing safety training to help employees use Microsoft 365 strategically to prevent data breaches. She was struggling to capture the value of the program in a way organizational leaders would understand. I used an alignment framework that incorporates Rob Brinkerhoff's 6 L&D value propositions and mapped out how to connect her learning program with metrics that matter to organizational leaders. Here's what that looked like!

    Aligning learning activities, initiatives, or programs to strategic business outcomes is like looking for the through line between disparate things: learning, human performance, departmental key performance indicators, and organizational metrics. This can feel nearly impossible. The glue that holds these seemingly disparate things together is Brinkerhoff's 6 L&D value propositions.

    In the safety training example, we started by identifying the most relevant value proposition for the program. In this case, it was Regulatory Requirements: a learning program designed to ensure employees are complying with industry-specific rules and regulations.

    Then we connected the L&D value proposition (Regulatory Requirements) with the most relevant outcome for the organization. In this case, it was Net Profit. If employees are complying with industry-specific rules and regulations, this consistent practice will save the organization money in fines, lawsuits, or dealing with the unpleasant consequences of safety challenges (like a data breach).

    Then we did the hard work of unpacking what people will be doing to support the targeted departmental KPIs. If you're struggling to figure out the KPIs, you'll likely find them by asking department leaders what problem they are experiencing on a regular basis that they would like solved. In this case it was too many data breaches and too many outdated files on the server causing misinformation and inconsistent practices. I discovered that what people could be doing differently to support the desired KPIs was adhering to updated protocols on how to manage data and documents within the 365 suite. If people followed the protocols with 100% fidelity, departments would experience a reduction in data breaches.

    Now … we have the behaviors to target in our training program and the data to use to show the value of learning:

    Learning metrics: training attendance and completion rates.
    Capability metrics: percentage of fidelity to data and document protocols before and after training.
    KPI metrics: # of outdated documents on the server (target: 20% or lower), # of data breaches per department (target: 1 or fewer annually).
    Organizational metric: Net Profit.

    How will you use the 6 L&D value propositions and alignment framework to tell your learning value story?

    #learninganddevelopment #trainingstrategy #datastrategy
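
A sketch of how the alignment chain from this example might be captured as a simple record, so every learning metric stays traceable to a KPI and an organizational outcome. The field names are my own shorthand, not part of Brinkerhoff's framework.

```python
# Illustrative sketch: the safety-training alignment chain as a plain record.
alignment = {
    "value_proposition": "Regulatory Requirements",
    "organizational_metric": "Net Profit",
    "departmental_kpis": [
        "Outdated documents on server at 20% or lower",
        "Data breaches per department at 1 or fewer annually",
    ],
    "target_behaviors": [
        "Follow updated data and document management protocols in Microsoft 365",
    ],
    "capability_metric": "Fidelity to protocols, measured before and after training",
    "learning_metrics": ["Training attendance", "Completion rate"],
}

# Walk the chain from learning metrics up to the organizational outcome.
for level in ("learning_metrics", "capability_metric",
              "departmental_kpis", "organizational_metric"):
    print(f"{level}: {alignment[level]}")
```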
