Unlocking enterprise AI: opportunities and strategies
Sponsored by Databricks
This Economist Impact report, commissioned by Databricks, combines a global survey of 715 technical executives and 385 data and artificial intelligence (AI) technologists who work across the fields of data engineering, data science and enterprise architecture. It also features insights from interviews with 28 C-suite executives from leading organisations across 11 industries: financial services, healthcare and life sciences, retail and consumer goods, the public sector, manufacturing, transport, energy, technology, professional services, media and entertainment, and telecommunications. Digital native businesses across various industries are featured. Most interviewees are from multinational companies with headquarters spanning ten countries in the Americas, Asia-Pacific and Europe. The report provides a comprehensive assessment of AI implementation from both the executives’ and practitioners’ points of view, revealing catalysts and bottlenecks and identifying strategies for success.
Acknowledgments
We would like to thank the following executives for participating in interviews and sharing their insights:
• Chalee Asavathiratham, former chief digital banking officer, Siam Commercial Bank
• Wassym Bensaid, chief software officer, Rivian
• Sanjay Bhakta, chief product and technology officer, Condé Nast
• Ian Botts, chief technology officer, Fanatics Betting & Gaming
• Bernd Bucher, chief information officer, Novartis
• Roman Bugaev, chief technology officer, Flo Health
• Ting Cai, chief AI and data officer, Rakuten Group
• Juan Jose Casado, chief digital officer, Repsol
• Kushal Chakrabarti, chief data officer, Opendoor
• Helen Choi, chief digital and information officer, CJ CheilJedang
• Carol Clements, chief digital and technology officer, JetBlue
• Ken Finnerty, president of IT and data analytics, UPS
• Jon Francis, chief data and analytics officer, General Motors
• Leonel Garciga, chief information officer, US Army
• Scott Hallworth, chief data and analytics officer, HP
• Andy Hill, chief data officer, Unilever
• Mohit Kapoor, group chief technology officer, Mahindra Group
• Jeff Martin, senior vice president and chief data officer, TD Bank Group
• Melissa Pint, chief digital information officer, Frontier
• Senthil Ramani, global lead, data and AI, Accenture
• Gereurd Roberts, group managing director, Seven Digital, Seven West Media
• Robbert Van Rutten, chief information officer, Shell
• Takaaki Sato, senior executive vice president, chief technology officer, NTT Docomo
• Amit Sharma, chief technology officer, Dream Sports
• Ryan Snyder, senior vice president and chief information officer, Thermo Fisher Scientific
• Sara Vaezy, chief strategy and digital officer, Providence
• Greg Ulrich, chief AI and data officer, Mastercard
• Darrin Vohs, global chief information officer, Molson Coors
Executive summary
Executives and practitioners alike believe in the power of AI, but think investment is falling short. From accelerating drug development to extending credit to the financially excluded, companies are finding a diverse array of applications for AI and GenAI. Our survey found that 85% of organisations are actively using GenAI in at least one business function, reaching 97% of companies with revenue over US$10bn. IT teams are the most avid adopters and the legal function is the most reticent. Internal projects are preferred but, by 2027, 99% of executives expect GenAI adoption across internal and external use cases. Seven in ten see AI as crucial to their long-term strategic goals, and only 18% say it is overhyped. Despite the momentum, only one in five believe that their current investment across technical and non-technical domains is sufficient.

AI’s short-term benefits include productivity and efficiency, but leaders will use AI to unlock value, revenue and business model innovation in the long term. So far, productivity is one of the most commonly reported impacts of AI. In line with this, functions with high automation potential have been heavily testing GenAI, such as IT (91%), marketing (85%), sales and customer service (83%), and operations (80%), while 82% of data scientists report using AI for coding. Interviewees also report a range of benefits, from cost reduction to improved employee experience and talent attraction. However, executives do take a ‘long’ view of the GenAI paradigm shift, noting that strategic considerations like business model innovation, market positioning, and environment, social and governance standards have been the most important elements when evaluating business cases for AI. Revenue growth has been the least effective metric for justifying investment to date, with experts arguing that AI returns will take time to accrue given the need for experimentation, iteration and digital infrastructure overhauls. Over time, however, the ability to realise revenue will set leaders apart, and financial value will need to be more definitively quantified.

Enterprise-wide AI requires an infrastructure refit—most companies do not feel confident in their current architecture. Many enterprises are operating with the technological equivalent of Victorian-era plumbing. Only 22% of organisations say their current architecture can support AI workloads without modifications, and 48% of data engineers spend most of their time resolving data source connections. The problem could worsen as democratisation leads to a proliferation of AI pilots, leading to congestion, complexity, and opacity in data and infrastructure. The prize is worth the work; some of the companies interviewed for this programme reported that they could be truly creative in finding use cases and achieving returns only once they had secured their data foundation. Worries about data security, from silos to fragmented systems, are holding some companies back from more ambitious experimentation.
Chapter one:
AI’s unlock moment
Organisations are racing to harness the power of AI, with generative models elevating capabilities
across industries and functions. From enhancing drug discovery to powering intelligent
assistants, a range of models and techniques are boosting productivity, guiding strategic
decisions, and opening new frontiers of innovation. As companies move from experimentation
to implementation, the future belongs to those who can successfully scale AI solutions.
Two years since the public launch of ChatGPT, every organisation is searching for algorithmic advantage. Companies are harnessing the powers of so-called traditional AI—which can recognise patterns, process vast and varied data sets, and make predictions based on past data—with the ability of GenAI to reason and converse across contexts, creating novel outputs that mimic human ability from natural language prompts.2

The GenAI breakthrough took AI from narrow tasks performed by expert teams to a capability available to nearly everyone. “Gen AI has exploded because of its approachability; the average person can use these tools,” says Darrin Vohs, global chief information officer at Molson Coors, a Canadian beverage company.

Generative models elevate AI capabilities due to their broad applications. Even those with ample AI experience have added a new layer of capability. Thermo Fisher Scientific, a global life sciences company, has been incorporating AI into its instruments for nearly a decade. GenAI builds on this strong AI/ML foundation, according to Ryan Snyder, the company’s chief information officer, enabling deeper interrogation of data and bolder experimentation.

Many companies were already experimenting with AI long before ChatGPT was released, especially larger enterprises in more technically advanced fields. Novartis, a Swiss pharmaceutical giant, had been using machine learning (ML) for drug discovery and predictive analytics to improve clinical trials.3,4 Flo Health, a global consumer-facing women’s health app used by over 380 million people, is based on an algorithm using traditional ML techniques. “Advancements in LLMs have been really exciting for us,” says Andy Hill, chief data officer at Unilever, a consumer goods firm. “But it's still the combination of that and the more traditional supervised learning techniques that together will provide the value for a company like us.”

Executives, and especially practitioners, believe AI has proven value

While AI has been through its share of winters, our survey finds respondents are overwhelmingly optimistic, with 73% saying GenAI is critical to their long-term strategic goals. Only 18% say AI is overhyped, although executives are slightly more cautious (20% lean towards there being too much hype) than data and AI technologists (just 13%). In the years ahead, winners and losers will be defined by who is able to graduate from experiments to implementation at scale.

Amid this cautious optimism, the rapid adoption of GenAI is hard to ignore. An impressive 85% of organisations in our survey are testing and using it today, with adoption rates climbing as company size increases—among those with revenue over US$10bn, 97% are testing and using it. This trend suggests that larger organisations may cement their competitive advantages through AI, while those lagging in investment may miss out on the next major technology shift. “There’s always some truth underneath the hype. When the dot-com boom happened, lots of companies failed, but a lot of real companies with valuable products came out of it as well, and I believe the same is going to happen in the GenAI space,” predicts Mr Vohs at Molson Coors. “GenAI is going to be a commodity embedded in everything, similarly to the internet, and the value will be in the business problems it helps solve.”
[Figure: GenAI use by business function (currently using in production, ie fully deployed; experimenting/piloting; not using now but planning to use in the next three years; not using now and not planning to use in the next three years): IT 62, 29, 7; Marketing 40, 46, 12; HR 24, 42, 30, 4; Finance 21, 25, 34, 20; Legal 11, 20, 31, 37. Source: Economist Impact]

[Figure: GenAI use by industry: Financial services, banking and insurance 45, 36, 18; Telecommunications 40, 36, 24.]
In most countries, about 70% of organisations have moved past the piloting stage for GenAI (see figure 3). However, progress is much more varied when it comes to the scaling up stage.

In India, for instance, most companies (56%) are scaling up GenAI, relying on public and open-source models much more than organisations in other countries. Meanwhile, in Japan just 6% are scaling up, which may reflect the country’s preference for more complex, customised models. Half of Japanese executives said they were training custom models based on proprietary data, a rate far beyond any other country.

Similarly, for the US, which has the highest share of organisations still in the piloting stage, this perceived lag may not actually represent a weakness. Our data suggest that many of these experimenters may in fact be ‘GenAI leaders’—that is, they are also the most confident about the readiness of their GenAI applications. This is also true of firms in Europe to a lesser extent. These findings warrant further investigation, but it may be the case that leading companies in the US are more willing to take risks and explore more innovative use cases rather than rushing to scale up basic ‘tried and tested’ use cases.
[Figure 3: GenAI adoption stage by country, share of organisations: India 56, 38, 6; France 41, 28, 31; UK 37, 43, 21; ASEAN 28, 41, 31; US 27, 32, 41; South Korea 26, 36, 38; Germany 21, 53, 24; Australia 18, 53, 29; Other Europe 11, 57, 32; Japan 6, 60, 34. The first figure in each row is the share scaling up; the last is the share still piloting.]
Senthil Ramani, the global lead for data and AI at Accenture, a global consultancy, says the firm sees a distinction emerging between use cases. ‘Table stakes’ refer to more straightforward use cases, which include optimising customer contact centre operations, marketing functions or coding workflows. ‘Strategic bets’ represent more innovative and experimental use cases.

Sandbox environments are one mechanism for AI experimentation; a space not only to test the technical aspects of an ML model for hallucination, bias and performance, but also its business impact and validity. Scott Hallworth, the chief data and analytics officer at HP, a technology company, is a strong proponent. He embraced the sandbox approach early in the GenAI boom, giving all HP employees access to an environment where they could explore and test solutions that can make life easier for HP staff and customers. HP today supports 75 private sandboxes for a variety of uses, all in the name of testing and exploring the possibilities that AI can bring to the organisation in a safe, secure and robust environment.
Key takeaways
Take a stake-based approach to experimentation. Cognisant of the risks and unknowns, firms are focusing on internal use cases in lower-stakes domains as they ascend the maturation curve, and should favour small, tightly scoped, problem-oriented projects.
Chapter two:
Rewards and returns
As AI spending soars, businesses are grappling with the challenge of balancing enthusiasm
with disciplined investment strategies and realistic expectations for returns. While
internal operational improvements have been the initial focus, companies are now
shifting towards external AI applications that could drive significant revenue growth and
unlock value from previously untapped data sources. However, measuring AI success
requires patience and a nuanced approach as organisations navigate the complex
landscape of long-term value creation versus short-term gains in the AI-driven future.
Rational exuberance?
Earlier in 2024 a research note from Goldman
Sachs cautioned that the US$1trn of expected
spending on GenAI might risk running ahead
of the revenue prospects.11 The bear case
included GenAI being ill suited to solving
the kinds of complex problems that would
justify this spend and cost. The C-suite
executives interviewed for this report regard
the questioning of GenAI’s value proposition
as welcome, and part of a natural cycle; hype
followed by short-term disillusion, succeeded
by proof points of value over time.
11 Goldman Sachs, “Gen AI: too much spend, too little benefit?”, 2024, https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit
Executives are understandably focused on the long-term view and nervous about falling prey to underinvestment. In an August analyst call, Alphabet CEO Sundar Pichai said that the risk of underinvesting in AI was “dramatically greater” than that of overinvesting. Microsoft, Alphabet, Amazon and Meta Platforms are expected to incur more than US$200bn in capital expenditure in 2024, most of which is to build AI infrastructure.12

But investors do expect AI to shift from a ‘tell me’ to a ‘show me’ story, with current disconnects between investments and revenue generation to come under increased scrutiny.13 This may be years away, not quarters, for many enterprise adopters. But vendors and firms with tangible GenAI offerings are already posting revenue gains, as are the ‘pick and shovel’ industries like cloud computing, chips and data centres. The cloud market, for instance, could reach US$2trn on the back of AI workflows, according to one forecast.14 Meanwhile, our survey found that six in ten enterprise architects are increasing their reliance on cloud services for AI, with a shift towards multi-cloud environments for just over a third.

Our survey also shows that deployment plans of companies will turn outwards over the next three years, shifting from internal purposes like operational improvement to external use cases (see figure 5). This may accelerate the revenue opportunity for AI as compared with using it for optimising business as usual.
12 Channel News Asia, “Nvidia fails to impress growth-hungry investors, shares fall”, 2024,
https://www.channelnewsasia.com/business/nvidia-shares-fall-growth-hungry-investors-4573336
13 Investors’ Business Daily, “AI Stocks: Tech Giants, Cloud Titans Face 'Show Me' Moment. Apple Unveils iPhone 16”, 2024,
https://www.investors.com/news/technology/artificial-intelligence-stocks/
14 Goldman Sachs, “Cloud revenues poised to reach $2 trillion by 2030 amid AI rollout”, 2024,
https://www.goldmansachs.com/insights/articles/cloud-revenues-poised-to-reach-2-trillion-by-2030-amid-ai-rollout
[Figure 5: GenAI internal and external use cases, now and in 2027. Percentage of executives.]
Indeed, many respondents (37%) are starting to see more benefits from external rather than internal use cases. This is noteworthy, given that only 24% of organisations allocate the majority of their investment to external use cases (see figure 6). It suggests that some organisations—likely those on the vanguard of expanding and scaling—are beginning to see outsized ROI from their external GenAI applications. As the remaining 88% reach this stage in the coming years, ROI may shift even more strongly towards external use cases.

Mr Ramani at Accenture highlights that productivity is a sensible rationale for initial adoption and easier to measure, but the organisations set to become tomorrow’s AI leaders will be those that transcend productivity into understanding that AI is about the “growth of revenue, new opportunities, and attracting and retaining talent”.
Frontier’s Ms Pint makes a similar argument. “There is no such thing as a technology strategy. There's only a business strategy that technology supports. So whatever you start doing, understand the business strategy very, very well, and then tie your technology strategy to it.”

At Opendoor, executives determined that productivity should not be the central ROI measure, instead focusing on top-line growth and improved consumer experience. While the company’s use of AI yields productivity gains, these are not at the forefront of their approach. “Our priority is better use of human intuition and improved consistency, which improves customer experience,” says Mr Chakrabarti at Opendoor. “It’s better for employees, customers and the business.”

Building a strong data foundation can help companies pursue the higher value goals of revenue and business model innovation. “We got started with AI by building a strong data foundation, then we pivoted to a commercial use case around variable pricing of select add-on products,” recalls Ms Clements at JetBlue. “We first used the model in a very narrow set of markets, and we were immediately struck by the incredible value that it delivered right out of the gate. This use case was the catalyst for us to understand the power of AI and the ways it could transform our business; it got the creative juices flowing across the organisation to think about leveraging the technology to drive even more value.”

Cost control is a valid goal too. For companies with significant operational expenditure, even optimising spending by a few percent could lead to savings of millions, some of our executive interviewees have observed, and these operational savings can be re-invested in innovation.
Key takeaways
Productivity and efficiency gains are early, measurable wins, but true leaders will
seek more ambitious returns. Revenue, talent attraction, and novel products and services
are among the loftier targets. Some firms can also unlock value by using AI to extract and
apply their proprietary data in new ways, including developing more personalised products.
Organisations must strike a fine balance between business returns and the
time to achieve true value. Companies will soon need to demonstrate financial
returns from their AI investments, but they must set realistic timelines.
Experimentation takes time, and some projects will inevitably fail to deliver. Firms
need an initial runway to upgrade their data infrastructure. Regulatory constraints
also limit the speed of deployment in sectors like healthcare. There is also merit
in watchful waiting as the AI vendor and product landscape evolves.
Chapter three:
The infrastructure refit
As organisations harness AI's potential, they face a critical challenge: their data
infrastructure is woefully unprepared, akin to having Victorian-era plumbing in a
modern building. Silos, latency and security are all inhibitors for AI deployment.
From accessing valuable but unstructured data to navigating the complexities of
cloud solutions, organisations need to reimagine their data architecture.
One of the biggest shifts in the AI era is not the power of the models themselves, but the underlying plumbing needed to deliver them at scale. Many of the world’s largest companies are sitting on the technological equivalent of Victorian-era pipes that cannot handle the demands of AI, with legacy systems that are creaking at the seams.

Only 22% of organisations in our survey say their current architecture is fully capable of supporting the unique demands of AI workloads, and just 23% say their current architecture fully integrates AI applications to relevant business data. Even among the largest companies we surveyed, these rates only rise to 28% and 27%, respectively. These deficits hinder progress and likely contribute to quality concerns, with only 37% of executives and even fewer practitioners (29%) believing that GenAI applications at their organisations are production ready.

The demands of AI and advanced analytics workloads were already creating infrastructure headaches before GenAI, largely because of the different types of data generated by companies and the variety of storage mechanisms developed to handle them.

Many companies used cloud-based warehouses to store highly structured data that have clear classifications and labels, managed by expert IT teams and easily queried. However, as much as 90% of data collected by organisations is unstructured and, therefore, unfit for warehousing.19 As ML and AI advanced, unstructured data came to be seen as ever more valuable because of ML systems’ ability to spot patterns in vast troves of this previously neglected data.
Data lakes, capable of storing vast quantities of both structured and unstructured data, emerged as an alternative, allowing organisations to leverage unstructured data, but these led to fragmentation between lakes and warehouses. In effect, this led to silos, patchy support for use cases, and incompatible security models, all of which made it harder for companies to utilise their data effectively. Then, over time, a new model called the data lakehouse emerged, aimed at addressing these problems. This model combines aspects of both lakes and warehouses and is built on open-source software, using open standards.

Real-time streaming of live production data is crucial for AI applications that require immediate querying. Mahindra Group, for instance, incorporates real-time data on solar panel performance into an algorithm that directs maintenance and cleaning crews around its renewable energy installations. Similarly, Dream Sports’ use of AI to decide when to generate lucrative contests around sports events depends on knowing exactly how many users are building a fantasy team for a specific event at any moment.
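The pattern described here, streaming live operational data into an open lakehouse table that analytics and AI workloads can then query, might look roughly like the sketch below. It is illustrative only: it assumes Spark Structured Streaming with a Kafka connector and Delta Lake configured, and the topic name, schema and paths are invented for the example.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("panel-telemetry").getOrCreate()

# Read live telemetry from a message bus (broker and topic are illustrative).
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "solar_panel_telemetry")
       .load())

# Parse the JSON payload into typed columns.
schema = "panel_id STRING, efficiency DOUBLE, ts TIMESTAMP"
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("r"))
          .select("r.*"))

# Maintain a rolling per-panel efficiency figure that a maintenance-routing model can query.
per_panel = (events.withWatermark("ts", "10 minutes")
             .groupBy(F.window("ts", "15 minutes"), "panel_id")
             .agg(F.avg("efficiency").alias("avg_efficiency")))

# Land the aggregates in an open-format lakehouse table shared by BI and ML jobs.
(per_panel.writeStream.format("delta")
 .outputMode("append")
 .option("checkpointLocation", "/tmp/checkpoints/panel_efficiency")
 .start("/lakehouse/telemetry/panel_efficiency")
 .awaitTermination())
```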
Looking ahead, speed is the primary architecture constraint that organisations plan to address, with real-time data processing voted the top capacity gap by 47% of respondents, followed by robust data pipelines and security (see figure 7). Architects’ views were also corroborated by data engineers, who were the most likely to rank streaming data issues as their top challenge in maintaining data quality.

Diseconomies of scale

Finding the right infrastructure can become more difficult as companies get bigger. Diversified companies with individual business units, and conglomerates comprising entirely independent companies, must determine where the data go. For instance, having data centralised in company-wide stores allows insights to be drawn across the group, while opting for individual units to manage the data gives more flexibility to analytics and data science teams to work with it in ways that suit their own needs.
[Figure 7: selected capacity gaps cited by respondents, %: Security 39; Integration with downstream systems 33; Ethical AI/safety considerations/bias detection 33; Efficient model hosting 31; Scalable compute 24.]
At Mahindra Group, core functions like cloud and cybersecurity are managed in common, but individual architecture decisions are left to business units. The firm takes a ‘problem first, solution later’ approach, emphasising collaboration between business and technology teams. Its car company uses a data mesh to store driver-relevant information such as journey history separately from manufacturing data, while allowing the two datasets to be combined if needed. This reduces latency for driver queries, because only relevant information is searched, but allows for supply chain problems to be identified—for example, if several drivers report a common problem, the company can trace the manufacturing history of the relevant component in each car. Mahindra’s finance team, meanwhile, uses a more flexible architecture so that it can consider personal loans, fixed deposits, insurance, mortgages and loans for small and medium-sized enterprises simultaneously in order to assess risk.

Beyond the question of moving data into storage comes the knotty issue of its accuracy and security, along with the appropriate access controls. Incorrect or conflicting data, after all, could lead a GenAI model to offer incorrect information, and weak controls could lead to compliance breaches. As Ms Clements at JetBlue said: “You can have all the AI in the world, but if it's on a shaky data foundation, then it's not going to bring you any value.”

It would be wrong to assume that start-ups or digital native companies are free from infrastructure challenges. Dream Sports, for example, was still querying production databases for its reporting needs several years after its founding and only set up a data store once the company had found product-market fit. However, such companies show a remarkable willingness to reengineer their data architecture from scratch when needed, perhaps because they lack substantial technical debt.

Rivian, an EV manufacturer, has integrated information from across its complex systems, including supply chain, commercial operations and financial departments, into a unified environment, enabling data-driven decision-making at all levels of the organisation. This benefits not just AI and GenAI applications, but also the entire gamut of data-linked business processes from dashboards and analytics to customer profiles and history.

“Ultimately, all of this runs on data. If your data isn’t properly categorised, it's going to be very difficult to benefit from any of this technology, so you have to focus on the fundamentals of good data engineering and data management,” argues Mr Vohs at Molson Coors. When it comes to these fundamentals, enterprise architects identified security, data privacy and data silos as the three biggest challenges faced when managing data for AI applications (see figure 8).
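As a toy illustration of the data-mesh arrangement Mahindra describes, with domain datasets held separately but joinable on demand, the hedged sketch below uses pandas; the tables, columns and thresholds are invented for the example.

```python
import pandas as pd

# Illustrative domain datasets kept by separate business units (a data-mesh style split).
driver_reports = pd.DataFrame({
    "vehicle_id": ["V1", "V2", "V3"],
    "component_id": ["C42", "C42", "C7"],
    "issue": ["brake noise", "brake noise", "infotainment lag"],
})
manufacturing_history = pd.DataFrame({
    "component_id": ["C42", "C7"],
    "factory": ["Plant A", "Plant B"],
    "batch": ["2024-03", "2024-05"],
})

# Day to day, driver queries touch only the driver dataset, keeping them fast.
# When several drivers report the same issue, the two datasets are combined on demand
# to trace the manufacturing history of the affected component.
common_issues = (driver_reports.groupby(["component_id", "issue"])
                 .size()
                 .reset_index(name="reports")
                 .query("reports >= 2"))
traced = common_issues.merge(manufacturing_history, on="component_id", how="left")
print(traced)
```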
One challenge introduced by GenAI, in particular, is how much information to put into the cloud for analysis. Some are reluctant to push more data into the cloud without a clear use case in mind. This reluctance is pronounced in sectors like energy and banking, which have already accrued substantial amounts of technical debt and data silos through legacy systems and mergers and acquisitions, for example. Both Spanish energy company Repsol and Canadian bank TD also emphasise the potential expense of excessive data upload, both directly in cloud-related costs, and indirectly in terms of requiring staff and teams to prepare data and build and maintain pipelines to push data into storage.

While companies in the past tended to ignore unstructured data—despite it making up the vast majority of what companies hold—it has taken on increasing importance as AI tools make it easier to access.20 UPS uses a data mesh to classify data and analytics tools that allow both data scientists and business teams to engage in machine learning operations (MLOps). “It is powerful because it can combine different ML models in a quick time frame. If tomorrow we wanted to infuse a GenAI model into it, we would be able to do that because we've got all the platforms and orchestration in place to do it,” says Mr Finnerty at UPS.
[Figure: how organisations collect feedback on GenAI applications (leaders vs laggards): automated evaluation collection (using AI-assisted judges); user feedback collected through a user interface or in-app feedback form; feedback through customer support; focus groups. Note: for laggards, n=50; for leaders, n=37; insights are only directional and suggestive. Source: Economist Impact]
Key takeaways
Only a fifth of organisations have an architecture fully capable of supporting AI
workloads. This is a major constraint to operationalising AI. Real-time data processing is one
high-value capability sought by companies, which can only be delivered with an infrastructure
refit. Unified infrastructure helps AI deliver on its potential not just as a productivity
tool, but also as an input for better decision-making at the speed businesses need.
AI needs housekeeping. Datasets must be constantly validated and checked for completeness, recency and validity, a task that can be performed by open-source tools and APIs integrated into existing datasets and clouds. Quality assurance should be applied across data pipelines to prevent faulty data from entering the organisation’s systems in the first place. External data sources used by LLMs to refine their answers must be regularly updated to maintain the quality of the information they contain. And the models themselves must be constantly tested for efficacy.
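A quality gate of the kind described above can be simple. The following hedged sketch uses pandas to check completeness, recency and validity before a batch is allowed into downstream systems; the column names and thresholds are illustrative, not prescriptive.

```python
import pandas as pd

REQUIRED_COLUMNS = {"customer_id", "order_value", "updated_at"}
MAX_STALENESS_DAYS = 30  # illustrative threshold


def quality_gate(batch: pd.DataFrame) -> list:
    """Return a list of data-quality failures; an empty list means the batch may proceed."""
    failures = []

    # Completeness: required fields are present and populated.
    missing = REQUIRED_COLUMNS - set(batch.columns)
    if missing:
        failures.append(f"missing columns: {sorted(missing)}")
        return failures
    if batch[sorted(REQUIRED_COLUMNS)].isna().any().any():
        failures.append("null values in required columns")

    # Recency: reject batches whose newest record is older than the staleness threshold.
    newest = pd.to_datetime(batch["updated_at"], utc=True).max()
    if (pd.Timestamp.now(tz="UTC") - newest).days > MAX_STALENESS_DAYS:
        failures.append("data exceeds staleness threshold")

    # Validity: a simple range check on a numeric field.
    if (batch["order_value"] < 0).any():
        failures.append("negative order values")

    return failures


# Illustrative use: quarantine a batch instead of letting faulty data flow downstream.
batch = pd.DataFrame({
    "customer_id": [1, 2],
    "order_value": [120.0, -5.0],
    "updated_at": ["2024-11-01", "2024-11-02"],
})
problems = quality_gate(batch)
if problems:
    print("Batch rejected:", problems)
```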
Practitioner perspectives
In the current economic climate, with mounting pressure to reduce the total cost of ownership, these cross-platform capabilities are becoming more critical. The emphasis on accessibility and efficiency not only reduces costs but also promotes the democratisation of data and AI, enabling a broader range of users to access and leverage these powerful tools and insights across the organisation.

The challenges faced by data and AI architects are numerous, but so are the opportunities. As they seek to develop strategies for implementing ethical AI, managing data ecosystems and democratising data access, architects will face a delicate balancing act of innovation, security and accessibility.

Data engineers: bringing order to the data storm

Data engineering is under immense pressure to meet the evolving demands of modern enterprises, with two dominant challenges—fostering high data quality for end users and ensuring security and governance. These objectives come on top of data engineers’ responsibility for managing most aspects of the data engineering lifecycle.

The need for unification, simplification and democratisation across the data engineering discipline is at a fever pitch. Today, nearly two-thirds of organisations are fully dependent on data engineers for every aspect of data pipeline creation and management, and almost half of data engineers spend most of their time configuring and fixing data source connections. Data engineers face significant burnout and fatigue given the number of data sources and tools required to configure usable data pipelines.

The emergence of NLP as a tool for democratising data pipeline and workflow creation is a promising trend, with more than 97% of enterprise architects predicting that NLP will either supplement or replace traditional methods for pipeline creation and workflow orchestration. This shift means, in the future, that data engineers will increasingly enjoy support from intelligent automation to handle the complexity of real-time data processing and pipeline management, allowing them to focus on more strategic tasks.

At the top of the wish list for most data and AI teams are simplified data source connections, integrated GenAI tools for coding assistance, and unified platforms to reduce the reliance on disparate systems. Architects are placing significant bets on real-time data processing and robust data pipelines, with nearly 75% identifying one of these as a critical goal for their organisations’ data and AI capabilities. As streaming architectures become more prevalent, with 46% of companies migrating a significant number of batch pipelines to real-time streams in the last year alone, the emphasis on latency and real-time insights is intensifying.

In sum, data engineering is at a critical juncture. The field is moving towards greater efficiency through automation and real-time data processing, but it remains anchored by the foundational challenges of security, governance and data quality.
Chapter four:
Find your edge
As AI becomes a baseline for business, companies will need to find their unique
path by integrating foundation models with proprietary data and domain
expertise, balancing off-the-shelf convenience with the power of tailored
solutions, all while safeguarding their most valuable asset: their data.
[Figure: how organisations are using LLMs, % of respondents. Augmenting an LLM's data source with enterprise/proprietary data (eg, RAG): 58; Fine-tuning the parameters of an LLM with enterprise data: 49; Using an LLM as it is (without contextual enterprise data): 45; Pre-training an LLM from scratch using enterprise data: 35; Other (eg, augmenting with synthetic data or other novel tools/functions): 8.]
“Most of the value that you see on AI is buried in the bottom,” explains Accenture’s Mr Ramani. “The top is immediate benefits like productivity, but that’s the starting point. You’ve not gone into your core value chain, that's when you go beneath the iceberg. That is where your proprietary data is.”

Foundation models are the coffee grinder; the beans are the company’s data. Using that data as part of a ‘compound’23 system helps organisations brew AI results that consider their unique domain or situation. Blending multiple models to meet varied use cases is a common trend, our survey shows (see figure 12), as is integrating proprietary data into GenAI models, which most respondents are either doing or exploring.

Respondents from the financial services industry were by far the most likely to recognise significant potential in integrating GenAI models with proprietary data, with 80% agreeing (see figure 11). Across sectors, perspectives on this differ substantially, reflecting diverse views on the value of organisational data, the complexity of processing it, the risks of using it in GenAI models and the cultural or operational readiness to implement such technologies.
[Figure 11: respondents recognising significant potential in integrating GenAI models with proprietary data, leading sectors: Financial services, banking and insurance 80, 17, 3; Telecommunications 72, 22, 6.]
But closed-source models offer some advantages. Superior performance was a leading reason to choose closed-source models at 59%, followed by user-friendly interfaces and documentation, exclusive features, and seamless integration with existing proprietary systems. Some also find that open-source models require more in-house expertise and skills to adapt and develop compared with off-the-shelf closed-source models.28

Industry leaders are developing frameworks to inform their model decisions. Accenture helps organisations through what it calls its switchboard, a proprietary tool set and framework, enabling organisations to decide which models to use based on the business use case, relevance, cost, accuracy and latency. These dimensions help determine whether an organisation needs open source, frontier or other models.

Relatively few companies see merit in developing their own LLM from scratch. “We don’t believe a company of our size can promptly build something truly meaningful in terms of home-grown LLMs,” says Mr Bugaev at Flo Health. Equally, companies will not be able to find a measurable advantage relying solely on off-the-shelf models.

“We believe that partnerships between tech companies and pharma/healthcare are essential,” says Mr Bucher at Novartis. “Tech companies bring strong platforms, data capabilities and AI talent, which are invaluable in creating usable models. Pharma companies possess deep expertise in safety, clinical trials and production, which are critical for developing potential drugs. AI serves as an accelerator, enhancing our combined efforts. By leveraging our distinct strengths, we can achieve greater advancements together.”
[Figure: approaches to GenAI models, % of respondents (currently using in production, ie fully deployed / experimenting or piloting / not using now but planning to use in the next three years / not using now and not planning to use in the next three years):
• Bring your own AI (facilitate and govern employees' use of public models): 18, 33, 36, 14
• Licence or subscribe to a commercial version of a public model: 25, 38, 28, 8
• Serve or deploy an open-source model: 25, 44, 28
• Fine-tune and engineer pre-trained open-source models: 29, 49, 19
• Use AI embedded in other software packages: 37, 45, 16
• Build applications by using APIs to connect to proprietary models: 27, 50, 21
• Train a custom model based on your own data or third-party data: 32, 46, 20]
Companies and organisations have access to the best fuel for AI models: the right and most relevant data, not merely data sourced from the open internet. Dream Sports, for instance, found third-party LLMs ineffective, even if they were using the latest application, because the company’s language is too specific. Models have to be trained on the company’s data.

Higher quality internal data improves the efficiency and performance of models. For instance, Rakuten built a deep learning foundation and embeddings using far less processing power because it trains them based on transaction data from its e-commerce site, according to the company’s chief AI and data officer, Ting Cai. "We can encode more information in fewer dimensions because we have high-quality transaction data that better indicates our users’ interests, and using this data allows us to create more efficient models.”

Shell is also adapting foundational models to work on internal data, for example, customer interactions or past contracts. “I'm not too worried about hallucination in the language models Shell uses because of the controls we have in place before deployment. We restrict these models to work only with our controlled data, which has gone through the necessary rigour, including the right human supervision, before becoming a part of our work processes,” says Robbert Van Rutten, chief information officer of Shell, the energy company. “We are also working on developing our own models by pre-training or fine-tuning LLMs to further minimise this risk.”
Companies can, of course, use third-party models for initial tinkering before deciding where to invest more technical resources. At Repsol, for example, the company fed internal data into existing LLMs offered by companies like OpenAI, Microsoft and Google rather than building a GenAI pipeline from scratch as it experiments with use cases and seeks to explore the technology’s potential. “In the future, the market will move to the use of small private LLMs,” predicts Juan Jose Casado, chief digital officer at the Spanish energy company. “But GenAI is a new technology. We still need to learn how to launch and put initiatives into production in an agile way.”

Home-grown models are also appealing because they put companies in control of the data. Many are worried about data leakage in closed-source LLMs, such as sensitive or restricted material being dropped into external models. “We have strong views on owning and protecting our data, so we’re very cautious about where we let our data go,” says Ms Clements at JetBlue. “We don’t let it leave JetBlue.”
Key takeaways
To unlock value and find their edge, companies will mix and match off-the-shelf AI with
open-source and in-house models. AI will soon be a baseline capability; the organisations
that excel will mix and match models to complement their unique data and know-how.
Only they have access to the best fuel for AI models: the right and most relevant data.
AI promises to unlock proprietary data that was hitherto hidden or dormant. This
will drive higher-performing models through accuracy, efficiency and relevance. Off-the-
shelf AI will be useful for low-cost tinkering and lower-value outputs. For true value creation,
and for greater control and security, companies will combine open- and closed-source
models, while larger firms may see value in developing their own models from scratch.
Chapter five:
Guardrails and governance
Data security and governance had already become board-level priorities over the last decade as cybersecurity attacks—and the regulatory punishments—have grown more severe. A flurry of regulations like the Digital Operational Resilience Act in Europe are also raising the stakes of governance failures. Our survey underscores these priorities, finding that data privacy and security breaches are the top concern for 53% of enterprise architects, while security and governance are the most challenging aspects of data engineering for engineers (cited by 50% as a top-three challenge).

“Numerous issues, such as privacy protection, security measures, IP issues, ethical concerns and the problem of deepfakes need to be addressed,” says Mr Sato of NTT Docomo. “It is essential to maximise the benefits of AI technology by overcoming these challenges in collaboration with relevant parties,” he adds.

Worldwide, legislation is ratcheting up. One 2024 tracker estimated a total of 762 AI legislation documents across 45 states in the US alone.29 This poses a challenge for companies that need to experiment with new systems while the rules governing their use are in constant flux.

29 Regulatory Transparency Project, “A Sensible Approach to State AI Policy”, https://rtp.fedsoc.org/blog/a-sensible-approach-to-state-ai-policy/?utm_content=311457390&utm_medium=social&utm_source=twitter&hss_channel=tw-574405888

According to Ian Botts, chief technology officer of Fanatics Betting & Gaming, effective AI governance at an organisation encompasses three components. “First, it's building the guardrails for what you can and can't do, so that personally identifiable information isn't used inside of a model to specifically target or take advantage of a particular human. Second, it's mechanisms—and building out mechanisms—that enforce those guardrails. And third, it's having dedicated governance, which is almost like a compliance function to ensure that it's being audited and that [companies are] being transparent. I think the transparency point is incredibly important because it drives better decisions.”

Data innovations can help companies tackle governance risks like bias or unreliability. For example, some organisations may lack access to the necessary data to build what they need, in turn limiting the quality of their AI systems. Synthetic data, generated using a purpose-built mathematical model or algorithm, can be brought in to help solve these data science tasks.30 It can improve performance and reliability by providing a larger volume of more diverse data for training, such as including underrepresented data classes or categories, for instance. Closing gender gaps and eliminating racial biases are promising areas where synthetic data can make a difference.31 It can also lower data acquisition costs, eliminate data bottlenecks and preserve data privacy. While it has risks and downsides, with the right guardrails, synthetic data could help improve the quality of AI outputs, but it remains an untapped resource, according to our survey, with only 7.5% of data scientists utilising it in their GenAI projects.

RAG is another useful technique to control data quality and enrich knowledge. “With this technique, Rakuten’s GenAI models retrieve real-time inventory and pricing information that they can use to incorporate the latest information or enterprise knowledge into applications such as search, recommendation systems, and adverts,” says Mr Cai, the company’s chief AI and data officer.

Ian Botts at Fanatics also explains how its AI and GenAI models are being used to detect and enhance governance issues related to responsible gaming. “We want your bet to be additive to your fandom,” he says. “If [our models] identify someone and say with very high accuracy that there's a problem that spend is exceeding their means, or they're showing troubling behaviours or patterns, [we] could start to nudge them into free-to-play games, lower cost things, try to reduce that burn or encourage them to take a timeout. Present them with information. It’s worth the additional investment … to get someone back into a position that’s healthy and sustainable for them. AI can be used to spot and understand problems, and GenAI can be used to understand and influence [in a personalised way].”
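The retrieval pattern Mr Cai describes, fetching current enterprise records and injecting them into a model's prompt, can be sketched as follows. This is a simplified, hypothetical illustration: the SQLite catalogue, table schema and model name are invented, a keyword lookup stands in for a production retrieval layer, and it assumes an OpenAI-compatible client with an API key configured.

```python
import sqlite3
from openai import OpenAI  # any OpenAI-compatible client would do

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def fetch_live_products(query: str, db_path: str = "catalogue.db") -> list:
    """Retrieve current inventory and pricing rows that match the user's query."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name, price, stock FROM products WHERE name LIKE ? LIMIT 5",
            (f"%{query}%",),
        ).fetchall()
    return [f"{name}: {price} (in stock: {stock})" for name, price, stock in rows]


def answer_with_rag(question: str) -> str:
    context = "\n".join(fetch_live_products(question)) or "No matching products."
    # The retrieved records are placed in the prompt so the model grounds its answer
    # in current enterprise data rather than only its training corpus.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Answer using only the product data provided."},
            {"role": "user", "content": f"Product data:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```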
Centres of excellence (COEs) are emerging as a popular approach to achieving systematic oversight. COEs have become key enablers of AI-driven insights and use cases within organisations. Beyond acting as the central control unit of AI, they monitor, implement and share best practices for implementation across departments and business units. They also foster exploration and innovation for building and operationalising AI.32

Accenture’s Mr Ramani relates that his company collaborated with one national bank to build a COE to manage guidelines and protocols on how AI is used and adopted across the bank’s business units. While centres can forge consistency in practices and create guidelines and guardrails, they are not just brakes on AI. They can actually be a kind of ‘forcing function’ for AI delivery across the enterprise, according to Mr Ramani, bringing together departments, from HR and legal to tech teams, to unlock potential. A COE is a collaborative effort bringing together multidisciplinary teams and backgrounds for a common goal. For example, Shell’s Analytics COE brings together multi-faceted teams and supports a community of hundreds of data scientists and thousands of what the company calls ‘AI enthusiasts’.

Technically, COEs develop, ‘productionise’ and monitor AI models with a lens on business impact and returns. They oversee core operations, including for data and ML, and seek architecture and infrastructure optimisations for operational efficiencies. Most importantly, they unify the organisation’s AI vision and ambitions across business units and channels, while ensuring that safety and standards are met.

Measures like COEs are examples of how companies can take the opportunity of the AI moment to revisit all aspects of governance and decision-making. “With GenAI, there is an opportunity to revisit an entire business process and reimagine what it could be,” says Mr Snyder at Thermo Fisher Scientific. “It encourages leaders to challenge past decisions and accept that there may be a newer, better way.”
Calibrating humans and machines

The march of the machines has long stirred fears of widespread job losses as automation threatens to render human workers obsolete. Yet, as AI technologies permeate industries, the reality is more nuanced. Many business leaders emphasise that AI adoption will not lead to layoffs. “AI means more efficiency in our business, which means we can create more stories—not by using AI to generate content but by automating mundane or manual tasks,” says Mr Bhakta at Condé Nast. “So instead of ten a week, we could produce 30 a week, which is actually good for the business. We're not looking to shrink, we're looking to grow. Doing more with the number of people we have is the best outcome.”

As mass AI adoption takes hold, there will be more demand for specialised roles like data scientists, developers and MLOps experts. Companies are already paying premiums for workers with AI-related abilities, and those who acquire AI skills see wages rise by an average of 21%.33,34 One 2030 forecast estimates that 375 million workers may need to switch occupations as their line of work is disrupted by digitalisation, automation and AI.35

Securing talent is one of the greatest challenges for firms deploying AI, our survey found; only one in six firms are confident in their ability to attract the expertise needed to take full advantage of AI technologies. Those who succeed are deploying AI at a much faster rate.

Meanwhile, the proliferation of GenAI is also seen as a democratising force. More than 97% of enterprise architects in our survey predict that non-technical staff will use natural language programmes to interact with complex datasets within the next three years, and 58% say that natural language will be the primary—or only—way they do this. Unilever, for instance, is investing in both people and culture through a programme that has trained almost 20,000 people in the use of GenAI, a vital step for a company operating in a dynamic sector where demand for new skills and competencies is rising, according to Mr Hill.

“A data and AI culture helps all parts of the business understand that we prioritise data-driven decisions, and that’s what will help us gain the insights that will improve performance,” says Mr Roberts at Seven West Media. “[Our] priority is to train and upskill our teams and talent around GenAI, so that it becomes a process and a product that's internalised.”
Firms like Condé Nast and Frontier are prioritising engagement with employees, emphasising the advantages of automation rather than presenting it as a threat. Their approach highlights how AI can ease workloads and enhance job performance rather than eliminate positions.

Some firms are embedding AI tools into office software to ease workloads. “Our North Star is to democratise data. Everyone at the company should be able to leverage data that drives decisions … without being a data engineer or a data scientist,” says Mr Bensaid at Rivian. “Traditional AI involved specialised teams in expert environments,” says Mr Bucher at Novartis. “GenAI is accessible to many users, with self-explanatory tools and understandable use cases.”

Once a new technology has been developed and proven to be successful, the crucial step for a company is to develop self-service tools so employees can benefit, says Mr Casado at Repsol. “Technicians will deal with the technology—the real challenge is making it easy for the people to use [it],” he says.

The Repsol Artificial Intelligence Products (RAIP) project hands employees the keys to AI, letting them tackle complex data tasks solo, without technical knowledge.36 Staff can analyse performance and optimise operations and organisational change with natural language processors. These tools promise autonomy and agility, allowing Repsol to streamline operations while turning its workforce into “citizen data scientists”.

Self-service platforms can empower workers to handle tasks once left to specialists, like processing payroll, scheduling meetings or crunching numbers. The appeal for employers is more efficiency, fewer middlemen and lower costs. However, simply making AI available is not enough. Broader use of AI requires broader data literacy. The biggest data management challenges identified by architects in our survey, for instance, include controlling data access (49%) and protecting sensitive data (42%).
36 Repsol, “The RAIP Project develops Repsol's own AI solutions with company-wide impact and value”,
https://www.repsol.com/en/technology-and-digitalization/digital-transformation/digital-program/raip/index.cshtml
“We’re moving away from data literacy being a specialised skill,” says Mr Garciga at the US Army. All employees must understand data protection, platform use and centralised data storage. The US Army has incorporated these into its leadership training and has recognised that managing sensitive data is crucial to the Army’s AI use cases. One such use case involves reviewing documents to be declassified under the Freedom of Information Act, which allows for important documents related to events like the war in Afghanistan to be released to the public. “AI can improve the accuracy of this process and help us cover more documents, increasing transparency,” says Mr Garciga.

Progress is most advanced in the financial services industry, where AI must fall within strict regulatory guidelines, our survey found. “As organisations begin using AI to inform key decisions, they must still work within agreed upon risk models. For example, any approach that TD takes to leverage AI must be within our risk appetite and regulatory obligations,” says Mr Martin at TD. That holds true for customer-facing interactions too. “In the old world, trust was established by walking in the branch, getting to know the manager and talking about the product,” says Mr Kapoor at Mahindra Group. “Now trust has to be established through cybersecurity and data privacy, and the customer needs to know that data will not be misused for any purpose.”

In sensitive sectors with high-risk profiles and low risk appetites, computer-generated errors will often provoke strong negative perceptions. As noted by Flo Health’s Mr Bugaev, medical specialists can also make mistakes or come to different answers to the same question, much like AI models might. But tolerance for errors and disparities in AI is understandably lower compared with tolerance for the same in medical professionals.

Human judgement will remain critical for monitoring AI’s outputs, and companies must calibrate the right mix of people and machines. Two-thirds of organisations in our survey say they are actively experimenting to find the balance between humans and AI, and 80% say these efforts could be stronger. Mr Sato at NTT Docomo emphasises that “individuals using GenAI must have the ability to independently evaluate its outputs and handle them responsibly.”
Key takeaways
Data innovations enhance AI governance. Techniques like synthetic
data and RAG help organisations address data limitations, reduce bias and
improve AI system quality. These techniques can enhance performance,
preserve privacy and enrich AI models with real-time information.
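One simple way to picture the synthetic-data technique mentioned above is interpolation between real examples of an underrepresented class, loosely in the spirit of SMOTE-style oversampling. The sketch below is illustrative only; the array shapes, class balance and threshold are invented, and a production approach would use a purpose-built generator with proper validation.

```python
import numpy as np

rng = np.random.default_rng(42)


def synthesise_minority_rows(X_minority: np.ndarray, n_new: int) -> np.ndarray:
    """Generate synthetic rows for an underrepresented class by interpolating
    between randomly paired real examples."""
    idx_a = rng.integers(0, len(X_minority), size=n_new)
    idx_b = rng.integers(0, len(X_minority), size=n_new)
    weights = rng.random((n_new, 1))
    # Each synthetic row lies on the segment between two genuine minority rows,
    # so it stays within the observed range of the real data.
    return X_minority[idx_a] + weights * (X_minority[idx_b] - X_minority[idx_a])


# Illustrative use: rebalance a training set where the positive class is rare (~5%).
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.05).astype(int)
X_pos = X[y == 1]
X_new = synthesise_minority_rows(X_pos, n_new=len(X) - 2 * len(X_pos))
X_balanced = np.vstack([X, X_new])
y_balanced = np.concatenate([y, np.ones(len(X_new), dtype=int)])
```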
Conclusion
Internal use cases and pilots are the necessary testing ground. While there is an understandable desire to deploy AI at scale, companies are conscious of risks and unknowns; experts advocate internal use cases initially, through sandboxes and iteration. This learning phase also allows them to decide which mix of models and tools works best for which ends. Pilot selection should be determined by risk thresholds, to solve clear problems and within well-defined scopes.

Fixing the plumbing. To build a robust data and technical foundation, companies need to invest in an infrastructure refit to support high-volume, varied data. This will stress the rigid data storage systems of yesteryear, which cannot manage the diversity of data, real-time usage and wider utilisation in this more democratised era. Larger companies and conglomerates especially must decide the right balance between centralisation and devolution as they look to harness cloud platforms.

AI needs KPIs. To avoid profligate spending and the inevitable trough of disillusionment that could follow, companies need to develop intelligent ROI metrics and a disciplined approach to spending. Productivity, cost control, revenue per worker, user experience and reduced staff burnout are all meaningful metrics. The most successful will use AI to drive new revenue streams and business model innovation.

Use flexible timeframes for achieving returns. KPIs and metrics matter, but companies should not necessarily seek quick returns. Experiments involve wrong turns and tinkering. Companies in heavily regulated sectors like healthcare have to operate within structured systems of regulatory oversight. It takes time to develop the high-quality data sets and governance needed to power effective AI. There is also merit in ‘watchful waiting’ to see how the AI product and service ecosystem evolves.

Use a mix of models to sharpen your edge: in-house data and organisational know-how. As foundation models become commoditised, usage will be a baseline, not a boost. All organisations need to find their edge—their unique recipe of data and know-how. In highly specialised industries, off-the-shelf models will fall short of quality and compliance requirements. Companies can mix and match open- and closed-source AI, and private or public cloud, based on performance, governance, cost and capacity, to find the right blend.

Establish robust governance frameworks. Organisations need clear guidelines, protocols and oversight mechanisms, and institutional innovations like Centres of Excellence (COEs), to manage AI deployment and mitigate potential risks. COEs can also be a forcing function for improving co-ordination and coherence overall. Technical innovations like synthetic data and RAG can also strengthen model performance. It is crucial to recognise the importance of human oversight of AI systems, especially in high-stakes domains.

Calibrate human versus machine. Engaging with employees in the design and deployment of AI systems is critical to secure buy-in and adoption. That means communicating how AI can help workers rather than carry out their work for them. Identifying AI champions can also help build momentum. New skills and competencies can be developed through bite-sized programmes and exercises, with companies bridging the gap between AI specialists and the workforce. AI should also be integrated into leadership programmes. Last, but not least, organisations should encourage a critical outlook to avoid automation bias and keep human judgement at the fore.
While every effort has been taken to verify the accuracy of this information,
Economist Impact cannot accept any responsibility or liability for
reliance by any person on this report or any of the information, opinions
or conclusions set out in this report. The findings and views expressed
in the report do not necessarily reflect the views of the sponsor.
HONG KONG
1301
12 Taikoo Wan Road
Taikoo Shing
Hong Kong
Tel: (852) 2585 3888
Fax: (852) 2802 7638
Email: asia@economist.com