𝗧𝗵𝗲 𝗵𝗶𝗱𝗱𝗲𝗻 𝗰𝗼𝘀𝘁 𝗼𝗳 𝗶𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻 𝗰𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝘆 𝗶𝗻 𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺𝘀

As you scale your AI-enabled systems and integrate multiple AI models (like ChatGPT, Claude, Gemini, etc.) with enterprise tools—CRM, analytics, internal apps—something critical breaks: 𝗶𝗻𝘁𝗲𝗿𝗼𝗽𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝘆. This is where the 𝗠𝗼𝗱𝗲𝗹 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗣𝗿𝗼𝘁𝗼𝗰𝗼𝗹 (𝗠𝗖𝗣) comes in.

𝗪𝗶𝘁𝗵𝗼𝘂𝘁 𝗠𝗖𝗣: Each AI agent needs a separate integration with each tool—resulting in a multiplicative 𝙼 × 𝙽 mess.

𝗪𝗶𝘁𝗵 𝗠𝗖𝗣: A single protocol acts as a unifying layer. Each model and each tool integrates once with MCP—bringing order, efficiency, and scalability. Now it's simply 𝙼 + 𝙽.

This is not just cleaner architecture—it's 𝗔𝗜 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗮𝘁 𝘀𝗰𝗮𝗹𝗲. I've visualized this transition in the image below to make the value of MCP clear for technical and non-technical teams alike.

What do you think—are we heading toward an AI future where 𝗽𝗿𝗼𝘁𝗼𝗰𝗼𝗹-𝗳𝗶𝗿𝘀𝘁 𝗱𝗲𝘀𝗶𝗴𝗻 becomes standard?
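The 𝙼 × 𝙽 versus 𝙼 + 𝙽 claim above is simple arithmetic, and a tiny sketch makes the scaling gap concrete. The function names here are illustrative, not part of the MCP specification:

```python
# Integration-count math behind the MCP argument (illustrative sketch).
def integrations_without_mcp(models: int, tools: int) -> int:
    """Each model needs a bespoke connector to each tool: M x N."""
    return models * tools

def integrations_with_mcp(models: int, tools: int) -> int:
    """Each model and each tool integrates once with the protocol: M + N."""
    return models + tools

# The gap widens fast as either side grows.
for m, n in [(3, 5), (10, 50)]:
    print(f"{m} models x {n} tools: "
          f"{integrations_without_mcp(m, n)} bespoke vs "
          f"{integrations_with_mcp(m, n)} via a shared protocol")
```

At 10 models and 50 tools, that is 500 bespoke integrations to build and maintain versus 60 protocol adapters, which is the whole scalability case in one comparison.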
Challenges of AI Adoption
-
In January, everyone signs up for the gym, but you're not going to run a marathon in two or three months. The same applies to AI adoption.

I've been watching enterprises rush into AI transformations, desperate not to be left behind. Board members demanding AI initiatives, executives asking for strategies, everyone scrambling to deploy the shiniest new capabilities. But here's the uncomfortable truth I've learned from 13+ years deploying AI at scale: Without organizational maturity, AI strategy isn't strategy — it's sophisticated guesswork.

Before I recommend a single AI initiative, I assess five critical dimensions:
1. 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲: Can your systems handle AI workloads? Or are you struggling with basic data connectivity?
2. 𝗗𝗮𝘁𝗮 𝗲𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺: Is your data accessible? Or scattered across 76 different source systems?
3. 𝗧𝗮𝗹𝗲𝗻𝘁 𝗮𝘃𝗮𝗶𝗹𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Do you have the right people with capacity to focus? Or are your best people already spread across 14 other strategic priorities?
4. 𝗥𝗶𝘀𝗸 𝘁𝗼𝗹𝗲𝗿𝗮𝗻𝗰𝗲: Is your culture ready to experiment? Or is it still "measure three times, cut once"?
5. 𝗙𝘂𝗻𝗱𝗶𝗻𝗴 𝗮𝗹𝗶𝗴𝗻𝗺𝗲𝗻𝘁: Are you willing to invest not just in tools, but in the foundational capabilities needed for success?

This maturity assessment directly informs which of five AI strategies you can realistically execute:
- Efficiency-based
- Effectiveness-based
- Productivity-based
- Growth-based
- Expert-based

Here's my approach that's worked across 39+ production deployments: Think big, start small, scale fast. Or more simply: 𝗖𝗿𝗮𝘄𝗹. 𝗪𝗮𝗹𝗸. 𝗥𝘂𝗻.

The companies stuck in POC purgatory? They sprinted before they could stand.

So remember: AI is a muscle that has to be developed. You don't go from couch to marathon in a month, and you don't go from legacy systems to enterprise-wide AI transformation overnight.

What's your organization's AI fitness level? Are you crawling, walking, or ready to run?
-
Last week, a customer said something that stopped me in my tracks: "Our data is what makes us unique. If we share it with an AI model, it may play against us."

This customer recognizes the transformative power of AI. They understand that their data holds the key to unlocking that potential. But they also see risks alongside the opportunities—and those risks can't be ignored.

The truth is, technology is advancing faster than many businesses feel ready to adopt it. Bridging that gap between innovation and trust will be critical for unlocking AI's full potential. So, how do we do that? It comes down to understanding, acknowledging, and addressing the barriers to AI adoption facing SMBs today:

1. Inflated expectations
Companies are promised that AI will revolutionize their business. But when they adopt new AI tools, the reality falls short. Many use cases feel novel, not necessary. And that leads to low repeat usage and high skepticism. For scaling companies with limited resources and big ambitions, AI needs to deliver real value – not just hype.

2. Complex setups
Many AI solutions are too complex, requiring armies of consultants to build and train custom tools. That might be OK if you're a large enterprise. But for everyone else it's a barrier to getting started, let alone driving adoption. SMBs need AI that works out of the box and integrates seamlessly into the flow of work – from the start.

3. Data privacy concerns
Remember the quote I shared earlier? SMBs worry their proprietary data could be exposed and even used against them by competitors. Sharing data with AI tools feels too risky (especially tools that rely on third-party platforms). And that's a barrier to usage. AI adoption starts with trust, and SMBs need absolute confidence that their data is secure – no exceptions.

If 2024 was the year when SMBs saw AI's potential from afar, 2025 will be the year when they unlock that potential for themselves. That starts by tackling barriers to AI adoption with products that provide immediate value, not inflated hype. Products that offer simplicity, not complexity (or consultants!). Products with security that's rigorous, not risky. That's what we're building at HubSpot, and I'm excited to see what scaling companies do with the full potential of AI at their fingertips this year!
-
Big consulting firms rushing to AI... do better.

In the rapidly evolving world of AI, far too many enterprises are trusting the advice of large consulting firms, only to find themselves lagging behind or failing outright. As someone who has worked closely with organizations navigating the AI landscape, I see these pitfalls repeatedly—and they're well documented by recent research. Here is the data:

1. High Failure Rates From Consultant-Led AI Initiatives
A combination of Gartner and Boston Consulting Group (BCG) data demonstrates that over 70% of AI projects underperform or fail. The finger often points to poor-fit recommendations from consulting giants who may not understand the client's unique context, pushing generic strategies that don't translate into real business value.

2. One-Size-Fits-All Solutions Limit True Value
BCG found that 74% of companies using large consulting firms for AI encounter trouble when trying to scale beyond the pilot phase. These struggles are often linked to consulting approaches that rely on industry "best practices" or templated frameworks, rather than deeply integrating into an enterprise's specific workflows and data realities.

3. Lost ROI and Siloed Progress
Research from BCG shows that organizations leaning too heavily on consultant-driven AI roadmaps are less likely to see genuine returns on their investment. Many never move beyond flashy proof-of-concepts to meaningful, organization-wide transformation.

4. Inadequate Focus on Data Integration and Governance
Surveys like Deloitte's State of AI consistently highlight data integration and governance as major stumbling blocks. Despite sizable investments and consulting-led efforts, enterprises frequently face the same roadblocks because critical foundational work gets overshadowed by a rush to achieve headline results.

5. The Minority Enjoy the Major Gains
MIT Sloan School of Management reported that just 10% of heavy AI spenders actually achieve significant business benefits—and most of these are not blindly following external advisors. Instead, their success stems from strong internal expertise and a tailored approach that fits their specific challenges and goals.
-
🤝 How Do We Build Trust Between Humans and Agents?

Everyone is talking about AI agents. Autonomous systems that can decide, act, and deliver value at scale. Analysts estimate they could unlock $450B in economic impact by 2028. And yet… most organizations are still struggling to scale them. Why? Because the challenge isn't technical. It's trust.

📉 Trust in AI has plummeted from 43% to just 27%. The paradox: AI's potential is skyrocketing, while our confidence in it is collapsing.

🔑 So how do we fix it? My research and practice point to clear strategies:
- Transparency → Agents can't be black boxes. Users must understand why a decision was made.
- Human Oversight → Think co-pilot, not unsupervised driver. Strategic oversight keeps AI aligned with values and goals.
- Gradual Adoption → Earn trust step by step: first verify everything, then verify selectively, and only at maturity allow full autonomy—with checkpoints and audits.
- Control → Configurable guardrails, real-time intervention, and human handoffs ensure accountability.
- Monitoring → Dashboards, anomaly detection, and continuous audits keep systems predictable.
- Culture & Skills → Upskilled teams who see agents as partners, not threats, drive adoption.

Done right, this creates what I call Human-Agent Chemistry — the engine of innovation and growth. According to research, the results are measurable:
📈 65% more engagement in high-value tasks
🎨 53% increase in creativity
💡 49% boost in employee satisfaction

👉 The future of agents isn't about full autonomy. It's about calibrated trust — a new model where humans provide judgment, empathy, and context, and agents bring speed, precision, and scale.

The question is: will leaders treat trust as an afterthought, or as the foundation for the next wave of growth? What do you think — are we moving too fast on autonomy, or too slow on trust?

#AI #AIagents #HumanAICollaboration #FutureOfWork #AIethics #ResponsibleAI
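The "Gradual Adoption" ladder above (verify everything → verify selectively → autonomy with checkpoints) can be sketched as a simple review policy. The maturity labels and risk thresholds here are illustrative assumptions, not from any published framework:

```python
# Toy sketch of a gradual-adoption review policy for agent actions.
# Maturity levels and thresholds are assumed for illustration.
def needs_human_review(maturity: str, risk_score: float) -> bool:
    """Decide whether an agent's proposed action goes to a human first.

    risk_score: 0.0 (routine) to 1.0 (high-stakes), produced upstream.
    """
    if maturity == "pilot":
        return True                  # stage 1: verify everything
    if maturity == "scaling":
        return risk_score >= 0.3     # stage 2: verify selectively
    return risk_score >= 0.9         # mature: autonomy, with checkpoints
```

The point of encoding the policy this way is that autonomy becomes a configuration you widen deliberately as trust is earned, rather than a binary switch flipped on day one.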
-
AI field note: Reducing the "mean time to ah-ha" (MTtAh) is critical for driving AI adoption—and unlocking the value.

When it comes to AI adoption, there's a crucial milestone: the "ah-ha moment." It's that instant of realization when someone stops seeing AI as just a smarter search tool and starts recognizing it as a reasoning and integration engine—a fundamentally new way of solving problems, driving innovation, and collaborating with technology.

For me, that moment came when I saw an AI system not just write code but also deploy it, identify errors, and fix them automatically. In that instant, I realized AI wasn't just about automation or insights—it was about partnership. A dynamic, reasoning collaborator capable of understanding, iterating, and executing alongside us.

But these "ah-ha moments" don't happen by accident. Systems like ChatGPT or Claude excel at enabling breakthroughs, but it really requires us to ask the right questions. That creates a chicken-and-egg problem: until users see what's possible, they struggle to imagine what else is possible.

So how do we help people get hands-on with AI, especially in enterprise organizations, without relying on traditional training? Here are some approaches we have tried at PwC:

🤖 AI "Hackathons" or Challenges: Host short, low-stakes events where employees can experiment with AI on real problems. For example, marketing teams could test AI for campaign ideas, while operations teams explore process automation.

⚙️ Sandbox Environments: Provide low-friction, risk-aware access to AI tools within a dedicated environment. Let users explore capabilities like text generation, workflow automation, or analytics without worrying about "messing something up."

🚀 Pre-built Use Cases: Offer ready-to-use templates for specific challenges, such as drafting a client email, summarizing documents, or automating routine reports. Seeing results in action builds confidence and sparks creativity. At PwC we have a community prompt library available to everyone, making it easier to get started.

🧩 Embedded AI Mentors: Assign "AI champions" who can guide teams on applying AI in their work. This informal mentorship encourages experimentation without formal, structured training. We do this at PwC and it's been huge.

⚡️ Integrate AI into Existing Tools: Embed AI into everyday platforms (like email, collaboration tools, or CRM systems) so users can naturally interact with it during routine workflows. Familiarity leads to discovery.

Reducing the mean time to ah-ha—the time it takes someone to have that transformative realization—is critical. While starting with familiar use cases lowers the barrier to entry, the real shift happens when users experience AI's deeper capabilities firsthand.
-
The biggest barrier to AI success isn't technical. It's cultural. And here's why 👉

You can have the flashiest tools. The most advanced features. The best tech stack money can buy. But if your company culture isn't ready to absorb change... If collaboration is blocked by silos… If adoption isn't supported… If fear outweighs experimentation… You'll get no outcome.

The soap is in the dispenser. But no one's getting clean. You bought the tool. You launched the change. But your team still isn't using it. Before blaming "resistance," run this checklist 👇

✅ Change Readiness Checklist for Leaders

1. Have you explained the "why" in their language?
🔲 Did you tie the change to their day-to-day pain?
🔲 Is it solving real problems or just chasing KPIs?

2. Did you communicate early and often?
🔲 Did you announce the change before it launched?
🔲 Have you created a consistent cadence of updates?

3. Are you (and other leaders) modeling the behavior?
🔲 Are leaders actively using the new tool/process?
🔲 Are they sharing wins, lessons, and being visible champions?

4. Have you removed something to make space?
🔲 What are you stopping to make room for the new?
🔲 Are priorities clear, or is this just "one more thing"?

5. Have you built psychological safety into the rollout?
🔲 Are people rewarded for trying, not just succeeding?
🔲 Have you normalized the messiness of change?

6. Is the training actually helpful (and timely)?
🔲 Did you enable people before asking for adoption?
🔲 Is help easy to access, or hidden in a PDF?

7. Are you listening and adjusting in real time?
🔲 Is there a feedback loop employees trust?
🔲 Have you acted on their input?

Flashy tools don't drive change. Leaders do. The soap is in the dispenser. Are you making it usable?

♻️ Repost if you're investing in people, not just tech. Follow Janet Perez for Real Talk on AI + Future of Work
-
AI models like ChatGPT and Claude are powerful, but they aren't perfect. They can sometimes produce inaccurate, biased, or misleading answers due to issues related to data quality, training methods, prompt handling, context management, and system deployment. These problems arise from the complex interaction between model design, user input, and infrastructure. Here are the main factors that explain why incorrect outputs occur:

1. Model Training Limitations
AI relies on the data it is trained on. Gaps, outdated information, or insufficient coverage of niche topics lead to shallow reasoning, overfitting to common patterns, and poor handling of rare scenarios.

2. Bias & Hallucination Issues
Models can reflect social biases or create "hallucinations," which are confident but false details. This leads to made-up facts, skewed statistics, or misleading narratives.

3. External Integration & Tooling Issues
When AI connects to APIs, tools, or data pipelines, miscommunication, outdated integrations, or parsing errors can result in incorrect outputs or failed workflows.

4. Prompt Engineering Mistakes
Ambiguous, vague, or overloaded prompts confuse the model. Without clear, refined instructions, outputs may drift off-task or omit key details.

5. Context Window Constraints
AI has a limited memory span. Long inputs can cause it to forget earlier details, compress context poorly, or misinterpret references, resulting in incomplete responses.

6. Lack of Domain Adaptation
General-purpose models struggle in specialized fields. Without fine-tuning, they provide generic insights, misuse terminology, or overlook expert-level knowledge.

7. Infrastructure & Deployment Challenges
Performance relies on reliable infrastructure. Problems with GPU allocation, latency, scaling, or compliance can lower accuracy and system stability.

Wrong outputs don't mean AI is "broken." They show the challenge of balancing data quality, engineering, context management, and infrastructure. Tackling these issues makes AI systems stronger, more dependable, and ready for business use. #LLM
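Point 5 (context window constraints) has a concrete mechanical cause: when a conversation outgrows the model's window, the oldest turns get dropped or compressed before the model ever sees them. A minimal sketch of the simplest strategy, keep-newest truncation, assuming a naive word-count tokenizer (real systems use proper tokenizers and smarter compression):

```python
# Illustrative sketch (not any specific vendor's API) of why models
# "forget" earlier details: older turns are evicted to fit the window.
def fit_to_context(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    """Keep the most recent messages whose combined token count fits."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk newest to oldest
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break                        # everything older is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = ["first detail: budget is 40k", "second turn",
           "third turn", "latest question"]
# With a 6-token budget, the opening message (and the budget figure in it)
# never reaches the model:
print(fit_to_context(history, max_tokens=6))
# → ['second turn', 'third turn', 'latest question']
```

This is why a fact stated early in a long session can vanish from later answers: it was not "forgotten" by the model so much as excluded from its input.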
-
Yesterday, I led a roundtable at SaaStr on churn in AI adoption. We're at a critical moment: early enterprise AI contracts are up for renewal, and the novelty is wearing off. AI spend is moving from innovation budgets to operational budgets, where enterprises are asking what business outcomes this technology is actually driving. Five strategies I've seen work:

⚙ Embed to eliminate friction. Don't make customers do the heavy lifting. Too many AI products operate in a silo, forcing users to copy-paste data between systems. That's friction. And friction is your enemy. Embed into existing workflows and add value right where your customers already are. Once you've integrated, you can slowly shift the workflow over time, but only after you've won their trust. No one wants to reinvent the wheel on day one.

🏰 Create a data moat. Automation alone isn't a differentiator anymore. Model capabilities are advancing fast, and if all you're selling is marginally better automation, you're in a race to the bottom on price. Automation is best used as a trojan horse that gets you through the door and allows you to develop a differentiated data moat. Customers may come for automation, but they will stay for data.

💲 Track your ROI. Internal champions are under pressure. They need hard numbers around business outcomes to justify the spend—hours saved, revenue generated, customer satisfaction boosted. Don't make them scramble for those numbers. The best teams track customer value relentlessly, embed ROI metrics directly into the product, and serve up those metrics regularly. You need to make it painfully obvious why you're worth the spend—give them the numbers before they ask.

♻ Kickstart network effects. Network effects are the holy grail, but they don't happen by accident. Multi-sided AI products (think meeting transcription, presentations) have a golden opportunity to trigger virality—but only if you make the conversion process effortless. Once a viewer sees your product in action, give them a way to jump in right then and there. You want zero friction between seeing the product and becoming a user. Build for the customer's network as much as for the customer.

💭 Be a thought partner, not just a vendor. Enterprise AI isn't plug-and-play. It's more like plug-and-maybe-play, but only after your customers overcome security, privacy, and change management concerns. The best AI companies don't just sell tech—they sell vision. In an era of constant change, being a thought partner is as important as being a technology provider.
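The "track your ROI" advice above can be made concrete with a toy calculator for the numbers an internal champion needs. The fields and the hours-saved formula are assumptions for illustration, not a standard methodology:

```python
# Hypothetical sketch of "serve up ROI metrics before they ask".
# All field names and figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UsageStats:
    tasks_automated: int          # tasks the product completed this period
    minutes_saved_per_task: float # assumed time saved per task
    hourly_rate: float            # assumed fully-loaded cost of user time

def roi_summary(stats: UsageStats, annual_contract_value: float) -> dict:
    """Turn raw usage into the three numbers a renewal conversation needs."""
    hours_saved = stats.tasks_automated * stats.minutes_saved_per_task / 60
    value = hours_saved * stats.hourly_rate
    return {
        "hours_saved": round(hours_saved, 1),
        "value_created": round(value, 2),
        "roi_multiple": round(value / annual_contract_value, 2),
    }

# e.g. 1,200 automated tasks at ~15 min each, $80/hr, on a $10k contract:
print(roi_summary(UsageStats(1200, 15.0, 80.0), annual_contract_value=10_000))
```

Even a rough model like this, surfaced inside the product, spares the champion from reconstructing the business case in a spreadsheet the week before renewal.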
-
One in three companies is planning to invest at least $25M in AI this year, but only a quarter are seeing ROI so far. Why? I recently sat down with Megan Poinski at Forbes to discuss Boston Consulting Group (BCG)'s AI Radar reporting, our findings, and my POV. Key takeaways below for those in a hurry. ;-)

1. Most of these companies have a data science team, a data engineering team, a center of excellence for automation, and an IT team, yet they're not unlocking the value, for three reasons:
a. For many execs, the technologies that exist today weren't around during their school years 20 years ago. As silly as it sounds, there was no iPhone, and certainly no AI deployed at scale at people's fingertips.
b. It's not in the DNA of a lot of teams to rethink processes around AI technologies, so the muscle has never really been built. This needs to be addressed, and fast.
c. A lot of companies have gotten used to 2-3% continuous annual improvement in efficiency and productivity. Now 20-50% is expected and required to drive big changes.

2. The 10-20-70 approach to AI deployment is crucial. Building new and refining existing algorithms is 10% of the effort; 20% is making sure the right data is in the right place at the right time and that the underlying infrastructure is right. And 70% of the effort goes into rethinking and then changing the workflows.

3. The most successful companies approach AI and tech with a clear focus. Instead of getting stuck on finer details, they zero in on friction points and how to create an edge. They prioritize fewer, higher-impact use cases, treating them as long-term workflow transformations rather than short-term pilots. Concentrating on core business processes is where the most value lies: moving quickly to redesign workflows end-to-end and aligning incentives to drive real change.

4. The biggest barrier to AI adoption isn't incompetence; it's organizational silos and no clear mandate to drive change and own outcomes. Too often, data science teams build AI tools in isolation, without the influence to make an impact. When the tools reach the front lines, they go unused because business incentives haven't changed. Successful companies break this cycle by embedding business leaders, data scientists, and tech teams into cross-functional squads with the authority to rethink workflows and incentives. They create regular forums for collaboration, make progress visible to leadership, and ensure AI adoption is actively managed, not just expected to happen.