
TIBCO whitepaper

Cloud-native Application
Development:
Your Enterprise at Start-up
Speed

Digital business transformation allows companies to:

•  Focus on the customer (and customer experience)
•  Deliver value to the market faster
•  Keep tabs on costs while doing the above

Key enablers for digital business transformation include:

•  Cloud-native platforms and applications
•  DevOps and continuous integration/continuous delivery (CI/CD) methods
•  An API-centric and event-driven approach to applications
•  Extensible tooling that fits your needs
•  Low-risk technology investments that avoid lock-in

This paper sets out a strategy for the traditional enterprise to
quickly launch digital business capabilities.

Why Cloud Native?


What are some of the benefits of developing cloud-native
applications optimized to run in the cloud? One is the ability
to innovate and quickly release applications and features
that deliver new product ideas. Developing cloud-optimized
applications gives you the ability to try new technologies
quickly and engage with users in different ways. And if the
technology doesn’t fit your needs or deliver on its promise, you
can bring down the application and experiment with something
else, the fail-fast approach.

Another benefit is the ability to quickly scale to match the
resource needs of unpredictable user demands. If you deploy a
new feature and it proves successful, but the infrastructure fails
to scale and support high traffic volume, you risk customers
leaving and going to a competitor, possibly never to return.
Successful organizations are building and deploying
modern cloud-native applications using agile development
principles. Some of these companies originated in the cloud,
unencumbered by traditional monolithic application designs.
Their customer-facing applications were designed to take
advantage of cloud technologies and optimized to run in the
cloud. You want the same playing field.

Your Cloud Strategy


Your business may have invested in on-premises core systems
that are still supporting your business operations, such
as an ERP system with custom interfaces. Many of these
customizations were likely built using monolithic architectural
styles with complex interactions hard-coded into the
application. It makes them difficult to change—but they hold
a wealth of highly valuable customer data, including buying
history and product preferences.
So, if yours is a traditional business, you can’t completely
change your business practices, but with the right tools, you
can combine your core systems and historical data with new
data sources for a unique customer perspective that enables
cross-sell and upsell opportunities. And the depth of customer
knowledge you put to use is something that most new cloud-
native businesses don’t have.
A cloud-native architecture can be implemented through
various approaches, including within:

PUBLIC CLOUD: Cloud environment offered by cloud vendors over
the public internet to any third party.
• Amazon Elastic Container Service, Amazon Elastic Kubernetes
  Service, and AWS Fargate
• Microsoft Azure Kubernetes Service and Azure Container Instances
• Google Kubernetes Engine and Google Cloud Run

MULTI-CLOUD: Cloud environment that combines container services
from different public cloud vendors.
• Amazon EKS Anywhere
• Microsoft Azure Arc
• Google Cloud Anthos

PRIVATE CLOUD: Cloud environment hosted in your data center or
on dedicated infrastructure from a public cloud vendor.
• Platform as a service such as OpenShift or VMware Tanzu
• Amazon EKS Anywhere
• Microsoft Azure Arc
• Google Cloud Anthos

HYBRID CLOUD: Cloud environment that combines both public and
private cloud services.
• Amazon EKS Anywhere
• Microsoft Azure Arc Application Service
• Google Cloud Anthos

Many companies make the mistake of putting off their cloud
journey until they are certain of their platform choices. This
results in expensive loss of time, misplaced effort, and higher
overall cost for cloud-native adoption. The wiser choice is
a solution that is both platform and vendor agnostic and
that supplies the freedom to move applications to any other
platform and secure benefits out-of-the-box: the enterprise
platform as a service.

New Platform, New Methods


Cloud-native platforms also give you a fast path to newer
technologies and methods that support agility and innovation:

Microservices
Adopting a continuous delivery process usually means
adopting a new way of designing software applications. The
concept here is to compose an application from a pool of
smaller code pieces called microservices. These are loosely
coupled, reusable, independently deployable components
that are exposed and communicate with each other using
paradigms such as REST, GraphQL, gRPC, and event-driven or
asynchronous API technologies. The trend is to treat these APIs
as products and manage them that way. Building something
small and purpose-built lets you build faster and make changes
quickly, even while the service is in use. Microservices can also
be easily retired when no longer useful.
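To illustrate just how small and purpose-built such a service can be, here is a minimal sketch of a single-responsibility REST microservice using only the Python standard library. The `/products/<id>` route and the in-memory catalog are hypothetical examples, not part of any specific platform:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory catalog; a real microservice would own its datastore.
CATALOG = {"1": {"id": "1", "name": "widget", "price": 9.99}}

def get_product(product_id):
    """Business logic kept separate from transport so it is easy to test."""
    return CATALOG.get(product_id)

class ProductHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expect paths like /products/1
        parts = self.path.strip("/").split("/")
        product = None
        if len(parts) == 2 and parts[0] == "products":
            product = get_product(parts[1])
        body = json.dumps(product if product else {"error": "not found"}).encode()
        self.send_response(200 if product else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each microservice runs in its own process/container and deploys independently.
    HTTPServer(("", 8080), ProductHandler).serve_forever()
```

Because the service exposes a plain HTTP/JSON contract and holds no shared state, it can be redeployed, scaled, or retired without touching its callers.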

DevOps
A key aspect of continuous delivery is the concept of DevOps,
where development and operations teams work together to
define and allocate the resources needed to build, test, and launch
applications. The DevOps mentality helps automate software
delivery and removes restrictive development and deployment
processes so applications can be released more quickly.

Continuous Delivery
Cloud-native organizations strive for a continuous delivery (CD)
model in which developers quickly evaluate new technologies,
and if successful, adopt the technology, incorporate it in their
design, and deploy it on their cloud platform. Using continuous
delivery, you can reduce development cycles to short sprints
and incorporate small, incremental changes into an application
on a regular and more frequent basis. Some companies deploy
iterations several times a day. This model is far more agile than
new-release development cycles of 12 to 18 months that involve
checking code, testing the whole system, discovering errors,
rewriting—and long delays for bringing products to market.

Scalability
Cloud platforms use virtualization to auto-scale and gracefully
shut down resources on demand. Some use hypervisor
virtualization to provision and share underlying resources.
Others use container virtualization, with lightweight containers
that use compute resources very efficiently. Several containers
can share an operating system (OS) and the hardware it sits
on. These systems don’t pay the performance penalty of a
hypervisor having to manage and allocate OS, memory, and
CPU across virtual machines. And serverless computing, such
as function-as-a-service, virtualizes the entire application
runtime and deployment environment, freeing developers to
only manage application code while the cloud vendor manages
the capacity planning, configuration, operation, and scaling of
compute resources.
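As a concrete picture of how little code a function-as-a-service deployment leaves you to manage, the sketch below follows the AWS Lambda Python handler convention (`event`, `context`); the event shape and message contents are illustrative assumptions:

```python
import json

def handler(event, context=None):
    """AWS Lambda-style entry point: the platform provisions, scales, and
    retires the runtime; the developer supplies only this function."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

if __name__ == "__main__":
    # Local invocation for testing; in production the cloud vendor calls handler().
    print(handler({"name": "cloud"}))
```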
These are all great benefits, but this new architectural model
also brings challenges:

•  You still need to integrate services during development
and at service runtime.
•  Developers need to be even more concerned with the
various data formats among services and with moving
data reliably between them.
•  With a potentially massive array of services and
choreography between them, you need additional tools to
remove complexity to ensure continuous delivery.

New Containerized and Serverless Applications


“Build, test, and deploy cloud-native applications at startup speeds.”

TIBCO Cloud Integration for Cloud-Native Architectures
The TIBCO Cloud Integration iPaaS (integration platform as a
service) is designed for modern cloud-native architectures. It offers
maximum flexibility to consume, run, deploy, and scale across
cloud environments as you choose. It uses TIBCO’s enterprise
integration technologies known for ease-of-use and first-class dev
tooling, but it’s specifically designed for developing integration
applications in cloud-native environments.
With the TIBCO Cloud Integration iPaaS, integration services are
lightweight and optimized to run inside containers—which helps
lower investment risk because services are then truly cloud-
native and vendor-agnostic. Applications built using the TIBCO
Cloud Integration iPaaS can be moved to various cloud platforms
without having to make changes or go back to design-time
compilation. Further, it supplies easy-to-use integration tools for
choreographing microservices in cloud-native applications and
uses plug-ins that promote continuous delivery.
Without a tool like this, it would be much more difficult
for developers to try new designs and incorporate new
technologies. Every time developers compose services, they
would need to write integration logic from scratch and worry
about converting data formats between services, very time-
consuming projects. Companies that have to hire developers
with specialized skills to do this work risk developer turnover
and the inability to maintain and support applications during
critical moments. In addition, software release cycles will
be longer, and new features delayed, which can put your
organization at competitive risk.

Visual Designer
A key component of the TIBCO Cloud Integration iPaaS is a
visual design-time modeler that lets users drag, drop, and
connect assets and activities to implement APIs, develop
loosely coupled event-driven applications, and define
integration logic. The visual designer reduces or removes the
complexity of integrating and building microservices, so you
don’t need a full cast of developers to do this successfully. It’s
like a canvas that lets you arrange APIs and events exposed
by various microservices, SaaS business applications, and your
legacy applications, choreograph how they work together, and
quickly turn ideas into products and revenue.
Applications can be developed quickly, and once developed,
visual flow design makes it easy for other users to understand
the logic and make changes whenever necessary. The services
are extensible, allowing other applications to attach to written
APIs or use existing integration services.
The visual designer lowers the barrier to entry for small businesses
with small crews of less specialized developers and brick-and-
mortar businesses that need to maintain investments in traditional
systems and have fewer resources to support new development
techniques. With this tool, you can build integration applications
easily. Updating software takes just days, not months, enabling
more frequent software feature releases.
The visual designer component of the TIBCO Cloud Integration
iPaaS enables you to:

•  Develop integration applications and APIs quickly with
easy-to-use, rich design capabilities, including smart data
mappings, conditional flows, design validations, and visual
step-back testing.

•  Automatically generate flows, configure input/output data,
and reuse schemas through first-class support for a range of
modern API and data specifications/schemas such as the OpenAPI
Specification, GraphQL, gRPC, JSON, Avro, and Protobuf.
•  Enable multiple team members to collaborate and
implement complex logic efficiently using unique diff
viewer features and integrated version control capabilities
that allow you to visually compare and merge changes
made in integration flows by different developers.
•  Easily create cloud-native patterns with built-in features
such as service discovery, configuration management
using application properties, and network error handling
using retry or timeout.
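The retry pattern mentioned in the last bullet can be sketched in a few lines. This is a generic illustration, not the iPaaS implementation; the attempt count and backoff delays are arbitrary assumptions:

```python
import time

def call_with_retry(fn, attempts=3, base_delay=0.1):
    """Retry a network-style call with exponential backoff.
    Re-raises the last error if every attempt fails."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

if __name__ == "__main__":
    # Example: a flaky dependency that succeeds on the third call.
    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise ConnectionError("transient")
        return "ok"
    print(call_with_retry(flaky))
```

Handling transient network errors at this layer keeps individual services simple: the caller absorbs a brief outage instead of surfacing it to the user.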

Continuous Integration, Continuous Delivery


Cloud-native applications are built for rapid application delivery
enabling developers to deliver new features or changes
quickly. This requires continuous integration and continuous
delivery (CI/CD). The TIBCO Cloud Integration iPaaS allows
you to easily build CI/CD pipelines for the automated delivery
of applications that use out-of-the-box support for unit
testing, mocking, and image building, as well as seamless
integration with your choice of open source tools such as Git,
Maven, SonarQube, Jenkins, Consul, or DevOps services from
providers such as AWS and Microsoft Azure.
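To make the unit-testing and mocking step of such a pipeline concrete, here is a plain-Python sketch using the standard library's `unittest.mock`. The `fetch_status` wrapper and health-check URL are hypothetical names for illustration:

```python
from unittest import mock

def fetch_status(client, url):
    """Thin wrapper around an injected HTTP client; the function under test."""
    response = client.get(url)
    return "up" if response.status_code == 200 else "down"

def test_fetch_status_up():
    # Mock the HTTP client so the test runs in CI without a network.
    client = mock.Mock()
    client.get.return_value = mock.Mock(status_code=200)
    assert fetch_status(client, "https://example.com/health") == "up"
    client.get.assert_called_once_with("https://example.com/health")

if __name__ == "__main__":
    test_fetch_status_up()
    print("tests passed")
```

Tests like this run on every commit in the pipeline, so a broken integration flow is caught before an image is ever built or deployed.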

API Management
The microservices architectural style relies on APIs for
communications between microservices. After creating or
integrating microservices in the TIBCO Cloud Integration
iPaaS, developers can publish the APIs they created within an
included API management platform. When an API is published,
a “stub” is created, making the API visible to others. Product
managers responsible for managing a given API can then layer
on the required security and operational policies and enable
access to the community of developers who will use it.

Native Support for Container and Serverless Technologies
The TIBCO Cloud Integration iPaaS provides out-of-the-box
support for cloud-native platforms such as Cloud Foundry,
and it makes the runtime available as a customizable buildpack
or as Docker images for more Docker-friendly PaaS/CaaS
environments. This capability ensures that you can truly focus
on functional business logic. It also allows you to deploy
applications as serverless functions within AWS Lambda.

The TIBCO Cloud Integration iPaaS also supports key tooling used
in cloud-native environments, such as configuration management
(using Spring Cloud Config or Consul), service discovery, client-
side load-balancing, and circuit breaker patterns. When exposing
and choreographing microservices with other components using
APIs, the TIBCO Cloud Integration iPaaS ensures consistency and
provides confidence that the underlying infrastructure will work
whenever resources are deployed.
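The circuit breaker pattern named above can be sketched briefly. This is a simplified illustration of the general pattern, not TIBCO's implementation; the failure threshold is an arbitrary assumption, and the half-open/recovery state of a full breaker is omitted for brevity:

```python
class CircuitBreaker:
    """Open the circuit after max_failures consecutive errors so callers
    fail fast instead of hammering an unhealthy downstream service."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success resets the failure count
        return result
```

Failing fast protects the rest of a choreographed application: one slow dependency stops consuming threads and timeouts across every service that calls it.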

Advanced Monitoring
The TIBCO Cloud Integration iPaaS provides advanced
monitoring with a single pane of glass for observing
applications regardless of where they are deployed,
giving visibility into execution and performance. It also
provides flexibility to monitor applications using external third-
party or open-source monitoring solutions such as Prometheus.
Users have end-to-end visibility across multiple applications
through native support for distributed tracing.
These practices all contribute to higher availability levels critical for
keeping users happy and customers coming back. Using support
from our key partners, TIBCO Cloud Integration applications can
benefit from rich monitoring features provided by your application
performance management solution of choice.
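To make the Prometheus option concrete, the sketch below renders a counter in Prometheus's plain-text exposition format using only the standard library; the metric name and labels are illustrative. A monitoring server would expose text like this on an HTTP endpoint for the Prometheus scraper to collect:

```python
def render_counter(name, help_text, value, labels=None):
    """Render one counter metric in the Prometheus text exposition format."""
    label_str = ""
    if labels:
        pairs = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + pairs + "}"
    return (
        f"# HELP {name} {help_text}\n"
        f"# TYPE {name} counter\n"
        f"{name}{label_str} {value}\n"
    )

if __name__ == "__main__":
    # Illustrative metric for an integration app's processed messages.
    print(render_counter("app_messages_total", "Messages processed.", 42,
                         {"app": "orders", "env": "prod"}))
```

In practice you would use an official Prometheus client library rather than hand-rolling the format, but the text above is what actually travels over the wire to the scraper.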

Summary
Digital business transformation requires development agility,
web scale, and rapid innovation. Cloud-native architecture
supports these capabilities. It requires that your IT organization
become a cloud service provider that enables the development
of modern business applications using a framework for fast
choreography of microservices.
The TIBCO Cloud Integration iPaaS lowers the barrier to entry
for cloud-native development, allowing you to achieve agility
and continuous delivery of innovations leading to success in a
digital business world.
Learn more about the TIBCO Cloud Integration iPaaS, and try
it for 30 days at no charge at https://www.tibco.com/products/
cloud-integration

Global Headquarters
3307 Hillview Avenue
Palo Alto, CA 94304
+1 650-846-1000 TEL
+1 800-420-8450
+1 650-846-1005 FAX
www.tibco.com

TIBCO Software Inc. unlocks the potential of real-time data for making faster, smarter decisions. Our Connected
Intelligence platform seamlessly connects any application or data source; intelligently unifies data for greater
access, trust, and control; and confidently predicts outcomes in real time and at scale. Learn how solutions to our
customers' most critical business challenges are made possible by TIBCO at www.tibco.com.

©2021, TIBCO Software Inc. All rights reserved. TIBCO, the TIBCO logo, and TIBCO Cloud are trademarks or registered trademarks of TIBCO Software Inc. or its
subsidiaries in the United States and/or other countries. All other product and company names and marks in this document are the property of their respective
owners and mentioned for identification purposes only. 18Mar2021
TIBCO Success Story: Caesars Entertainment

Caesars Entertainment
Delivers a Fully Connected
Guest Experience

Business Challenge
Legacy systems with point-to-point integrations made it
increasingly difficult for the gaming and entertainment leader
to provide personalized customer experiences and innovate
with agility. Caesars needed all the data in its multi-cloud
environment to be shared among all systems equally.

Transformation
With TIBCO Cloud Integration and TIBCO Cloud API
Management software as a service, Caesars now manages its
hybrid environment with API-led integrations for speed and
re-use—and a completely reinvented customer experience.

UX: Personalization where & when needed

FIRST: Casino in Las Vegas to go live with cloud-based hotel management

FASTER: Time-to-market for integrating systems running in the cloud and on-premises

"We are very much focused on the customer experience, and
TIBCO is helping us successfully provide a holistic and
memorable one for them."
—Nanda Reddy, Chief Enterprise Architect

With over 115 million annual guests, 55 properties, and more
than 63,000 team members worldwide, Caesars Entertainment
is one of the most diversified casino-entertainment providers
in the world.
Benefits
Advanced personalization, happy customers
Using the cloud-based API-led integration capabilities of TIBCO
Cloud Integration and TIBCO Cloud API Management software,
Caesars’ customer engagement model empowers team
members to deliver a world-class experience across all digital
and physical channels. With all systems sharing data, Caesars
creates a unified customer profile for advanced offer matchmaking
and personalization. The technology delivers a fully connected
guest experience: real-time offers, faster check-in,
casino experience tracking, personalized sports and casino
leaderboards, e-sports, and sports betting.
“Customer personalization, with all the customer’s experiences
connected, differentiates us. This is the power of integration,
and TIBCO has been a foundational part of making this massive
effort a success,” said Les Ottolenghi, CIO.
“After a customer decides to make the journey to our properties,
right from making the bookings, coming to the property, and
then experiencing the various shows, gaming, and everything,
they leave as a happy customer. There are several platforms
that enable these experiences, and TIBCO plays a key role in
integrating all of this by providing timely, precise, intuitive data
to the customer and employees,” added Reddy.
“The TIBCO Cloud API Management (formerly TIBCO Mashery)
system enables us to deliver tailored customer experiences
very quickly.”

Data for experimentation and improvement
Having Caesars’ systems exposed as APIs allows the business
to be quick, try new things, and change those that don’t work.
A great example of what Caesars has been able to do with its new
architecture is a mobile-phone customer service bot called Ivy.
“When you come to our hotel, it greets you with an SMS. As you
move through and experience the casino, it reminds you of other
things available, and what you could be doing, all on your phone,”
explained Reddy. Guests have found Ivy to be a wonderful help
and often praise the mobile concierge on social media.

What can TIBCO make possible for you? Talk to an expert
today at tibco.com/contact-us/sales

Global Headquarters
3307 Hillview Avenue
Palo Alto, CA 94304
+1 650-846-1000 TEL
+1 800-420-8450
+1 650-846-1005 FAX
www.tibco.com

TIBCO Software Inc. unlocks the potential of real-time data for making faster, smarter decisions. Our Connected
Intelligence platform seamlessly connects any application or data source; intelligently unifies data for greater
access, trust, and control; and confidently predicts outcomes in real time and at scale. Learn how solutions to our
customers' most critical business challenges are made possible by TIBCO at www.tibco.com.

©2019–2022, TIBCO Software Inc. All rights reserved. TIBCO, the TIBCO logo, TIBCO Cloud API Management, and TIBCO Cloud are trademarks or registered
trademarks of TIBCO Software Inc. or its subsidiaries in the United States and/or other countries. All other product and company names and marks in this
document are the property of their respective owners and mentioned for identification purposes only. 21Mar2022
TIBCO Success Story: Hemlock Semiconductor

Hemlock Semiconductor
Increases Global
Competitiveness by
Optimizing Output and
Lowering Costs

On a daily basis, everyone who uses modern methods to stay
connected depends on quality polysilicon. Polycrystalline silicon,
or polysilicon, is the primary raw material for the semiconductor
and solar industries. It is used to create silicon wafers and other
substrates for building a wide range of devices that people use
every day, including smartwatches, cell phones, tablets, computers,
Internet of Things devices, solar cells, and many others.
Polysilicon is produced through a high-level chemical
purification process that involves the distillation of compounds
that decompose into silicon at extreme temperatures.
Traditionally, polysilicon for electronic products requires
impurity levels of less than one part per billion (ppb), though
today slightly less pure polysilicon can be used by the solar
industry to manufacture solar cells.

$300K per month cost savings from optimized resource consumption

Manufacturing raw polysilicon material is fraught with obstacles.
Ensuring consistent near-perfect purity levels, taking steps to
reduce energy consumption, and keeping costs low due to the
commoditized nature of the product mean industry players
must take on a certain amount of risk. These were the main
obstacles faced by Hemlock Semiconductor Operations (HSC).
Hemlock Semiconductor
Headquartered in Michigan, HSC is the largest polysilicon
producer in the United States. The company provides hyper-pure
polysilicon to manufacturers around the world, who, in turn, use
it to produce the silicon substrates used in high-tech electronics
and solar panels.

From Optimizing Production to Maintaining Market Share

In the highly competitive industry of polysilicon manufacturing,
Hemlock's ability to consistently produce the highest quality,
most reliable polysilicon has positioned it as the partner of
choice for many buyers around the world. However, HSC
is continuously looking for ways to innovate to remain
competitive in the global industry.
Because polysilicon is a commodity, there is little flexibility
in pricing. By focusing on reducing manufacturing costs,
HSC was able to increase its profitability. "Commoditization
in the polysilicon industry requires tight control of our cost
structure," said Kevin Britton, program manager at Hemlock
Semiconductor Operations.
Optimizing polysilicon production is far more complex than
most manufacturing processes. Purity levels of 99.999999999
percent are required to meet most customer expectations. The
interaction among process components must be precisely
optimized to maximize yield and plant efficiency. Process
variability can lead to higher impurity levels and lower yields.
With such a small margin of error, HSC must drive continuous
improvement of its manufacturing processes to ensure it is
achieving the best possible outcomes.
"Being able to fully utilize the vast volumes of data that we
continuously generate became imperative," said Keith Carey,
CIO at Hemlock Semiconductor Operations. "We needed to
dive deeper into our internal processes to understand how to
improve quality while controlling our costs, and take advantage
of potential new business models."
To solve the joint operational challenges of cost, quality, and
conservation, HSC needed a strategy that would strengthen
its long-term profitability and competitive positioning. TIBCO
technology was the key to making it a reality.

Controlling Costs, Maximizing Output, and Conserving Energy

HSC's first focus was on lowering its overall cost structure to
ensure long-term price competitiveness. Cost management
required analyzing data from each step of the manufacturing
process to better understand and quantify the impacts of
temperature, pressure, and energy usage in the reactor process.
HSC then was able to re-engineer its manufacturing process by
implementing real-time process monitoring and control in order
to maximize output, efficiency, and quality.
The next challenge was to address the maximization of product
quality. This meant implementing a platform capable of
detecting process anomalies. Hemlock’s previous infrastructure
was a significant obstacle, as much of the company’s data
was stuck in silos and legacy systems that couldn’t keep pace
with modern data requirements. But now, when an anomaly is
detected, HSC can more readily access data to see precisely
which variable may have caused the problem. Once these
cause-and-effect relationships are identified, actions are made
to prevent process defects from happening again.

[Process diagram: Quartz (SiO2 + C) is reduced in an arc furnace
to metallurgical-grade silicon, which is reacted with anhydrous
hydrogen chloride (HCl) to form trichlorosilane (HSiCl3). The
trichlorosilane is purified via a fluidized bed reaction (FBR) to
semiconductor grade, then decomposed with hydrogen in a
chemical vapor deposition (CVD) reactor (HSiCl3 + H2 → Si + 3HCl)
to yield semiconductor-grade silicon, with byproducts captured
in a recovery process.]
The complex manufacturing process used by HSC to produce
semiconductor-grade polysilicon.

While HSC gained a significant handle on its product quality, the
market for polysilicon experienced huge growth in recent years,
expanding into additional markets requiring different levels of
quality based on the nature of the application. For example, the
highest quality polysilicon is still used for semiconductors, while
solar panels can be made with less pure polysilicon. HSC was able
to take advantage of this market segmentation because it had built
a data-centric view of its manufacturing processes, allowing it to
further optimize its yield by selling the right product to the right
market at the right price.
Then came the optimization of energy consumption. HSC's
2020 Sustainability Report reveals how TIBCO helped support
the company’s sustainability goals and profitability. As stated
in the report, “TIBCO Spotfire provides HSC with optimized
rollouts and controls for a sophisticated understanding of
site processes and energy use. HSC initiated a peak power
management program and leveraged TIBCO Spotfire data
science to visualize and optimize our performance. Peak power
management runs more of our assets during off-peak hours,
lowering our demand on the electric grid when consumption
is the highest. This program has not only allowed our electric
utility to better manage its total demand, but it has also saved
HSC approximately $300,000 per month.”
“We’re Michigan’s largest single energy user, and we are
pursuing energy efficiency programs. We found TIBCO’s
Spotfire software to be one of the critical tools to increase
energy efficiency. We’re impressed with the results and the
data science product that helped us get there,” Carey said.

Proactively Managing the Manufacturing Process

The ability to make faster decisions, to detect and prevent
product impurities, and to have the agility to pivot production
yield based on the quality of the polysilicon was another
major focus area. While HSC is an established industry leader,
maintaining overall cost competitiveness on the commoditized
global polysilicon market is crucial to the continued profitability
of its business. Thus, the company couldn't afford to allow slow
decision-making and production bottlenecks to hinder any
aspect of cost optimization.
Today, custom data visualizations allow HSC to understand
the most granular aspects of its manufacturing processes.
The addition of real-time alerting as a complement to that
capability has empowered HSC's workforce to respond to
critical production line situations much faster than previously
possible. Not only did this solution help HSC lower costs and
prevent excess production of waste by alerting personnel
right away, it opened the door to projects that improved the
manufacturing process.
“With TIBCO, we’re connecting data in ways we never could
before, which helps us better manage maintenance and plan
improvements,” said Kevin Britton. “We are able to confirm
where we are doing things most efficiently and track our
performance, which is a key enabler to being able to improve.
This data-driven approach also extends to our continuous
improvement process itself.
We’re now able to better see, in real time, not only what people are
working on, but how our progress tracks to our expectations of
how those projects should go. It also allows us to better visualize
our capacity and do future modeling for timing of upcoming
projects. This is now possible because our tools have advanced to
be able to combine fairly complex data from multiple sources for
visualization.”
By allowing HSC to move from a reactive to a more proactive
management of the manufacturing process, TIBCO helped the
company discover new business opportunities.
“We’re moving from archaic, static data to more intuitive, real-
time data,” Carey said. “We needed to be able to look at our
internal information to understand costs in more detail and
bring in external information so we could take advantage of
potential new business models, such as offering excess material
on the spot market.”

A Technology Foundation to Reinforce Industry Leadership
HSC’s focus on cost began with a smaller-scale implementation
of TIBCO Spotfire analytics. The solution provided HSC with a
real-time visual representation of gaps in the manufacturing
process. When anomalies occur, it can quickly pinpoint
which piece of equipment needs troubleshooting and alert
company personnel. It can also be used to perform what-if
scenario analyses to model the impact of process changes in
near real time.

While intelligent data analytics enabled greater control over
HSC’s complex manufacturing and operational processes, these
elements would not be nearly as robust without universal
connectivity of HSC’s systems, processes, machinery, and more.
That’s where TIBCO integration came into play. HSC began
with several important analytic requirements that ultimately
revealed how the ubiquitous connectivity of data was critical to
a cleaner and more comprehensive view of its business.

“We had six or seven use cases for integration that we identified
and wanted to roll out over time,” Carey said. “The first one was
manufacturing integration: how do we connect the shop floor
to the ERP system and business system? That has generated
lots of data for us. We’re not only analyzing systems into which
we previously had limited visibility, we better understand
inventory levels, and more. It’s a platform that we can grow
into as our business needs evolve. That’s another reason for our
investment in TIBCO.”
HSC realized that strong integration across systems empowered
it with even more information, with which it was able to extend
the impact, usefulness, and breadth of its analytics. As market
dynamics, business models, and customer demands changed,
so too did HSC’s ability to provide more compelling analytic
insights, enabling it to expand with additional use cases, in a
scalable and sustainable way, with greater confidence.
The success of HSC’s initial efforts also led to an expanded use
of the TIBCO Connected Intelligence platform. TIBCO powers
Hemlock’s Center of Excellence, which lays the foundation for
its integration program and SAP S/4HANA system. The Center
of Excellence brings together myriad integration, analytics,
and data management tools that work together to strengthen
HSC’s underlying architecture. It provides advanced analytics
capabilities, including data science and virtualization, streaming
analytics, and other self-service tools.
This integration with SAP S/4HANA means that information
can be replicated in real time and exposed to a wider HSC
user community through a series of calculation views. TIBCO
simplifies the delivery and transportation of information in a
timely manner, which results in large dividends for HSC.
Beyond being the foundational tool beneath HSC’s infrastructure,
TIBCO also lends its expertise and ongoing support to ensure
continued success with the platform. HSC can now handle a variety
of use cases — including manufacturing integration, B2B, and
hosting APIs — that allow its customers to consume its data in a
safe and secure environment.

Maximizing the Value of All Data, Across All Roles
As a manufacturer of one component in a longer supply
chain, HSC has several partners with whom sharing data is a
valuable capability.
“TIBCO has advanced the way we provide detailed business and
operational data to our end users without them having to wait.
Self-service capability is a key component of our IT strategy,”
said Carey.

With TIBCO, HSC can consume and analyze multiple years of
data, compared to previous spreadsheet-based data-wrangling
methods that were limited to just 90 days of data.
“We now have visibility into processes that were very difficult
to see before and can pursue answers to the really complex
questions that enable us to optimally tune each process,”
Britton said.
Because the TIBCO Connected Intelligence platform is live and
dynamic, it allows teams to come together to solve problems and
perform root cause investigations — all enabled by real-time data.
Most importantly, it has helped HSC foster a data-driven culture
with analytics that can be accessed in seconds. This cultural shift
has changed the way the company sells ideas, solves problems,
and brainstorms solutions. TIBCO has become the catalyst for
shifting to a data-driven environment where solutions are made
alongside data, not just supported by data. TIBCO solutions have
become an integral part of the company, routinely being used in
meetings, discussions, and presentations.
“Now, we collaborate as one team and dig into the data in real
time to solve problems,” Carey said. “Enabled by data and the
TIBCO platform, the speed at which we learn and fail fast is
significantly different from the past.”

The Next Phase of the All-in with TIBCO Plan
When the polysilicon market began experiencing disruption,
HSC knew it had to transition to maintain its leadership.
TIBCO’s Connected Intelligence platform helped HSC embrace
the disruption by modernizing its infrastructure and providing
the agility it needed to make faster, smarter decisions. From
the very beginning, partnering with TIBCO helped HSC
understand how to optimize numerous aspects of its business.
HSC has several capabilities in mind for the future, including a
more robust master data management system, a manufacturing
digital twin, the expansion of its data science applications, and
an improved responsiveness to real-time alerts enabled by data
streaming. In the next few years, HSC hopes to make its data
AI-ready and consumable for B2B end users.

From the ability to visualize and optimize production costs to
maximizing product quality and quantity and helping create
a more reliable workspace, HSC is confident in its partnership
with TIBCO and believes that its future lies with TIBCO.
“The speed of our development and the robustness of our
solutions is going to be key to HSC’s future. The TIBCO
Connected Intelligence platform is really going to help us,”
Carey said. “Our decision to partner with TIBCO has been
validated time and again; we picked the right tool and the right
company for our culture. We’re looking forward to what we do
in the future, especially with our integration in the next phase
of our all-in with TIBCO plan.”

What can TIBCO make possible for you? Talk to an expert
today at tibco.com/contact-us/sales

Global Headquarters
TIBCO Software Inc.
3307 Hillview Avenue
Palo Alto, CA 94304
+1 650-846-1000 TEL
+1 800-420-8450
+1 650-846-1005 FAX
www.tibco.com

TIBCO Software Inc. unlocks the potential of real-time data for making faster, smarter decisions. Our Connected
Intelligence platform seamlessly connects any application or data source; intelligently unifies data for greater
access, trust, and control; and confidently predicts outcomes in real time and at scale. Learn how solutions to our
customers’ most critical business challenges are made possible by TIBCO at www.tibco.com.

©2021, TIBCO Software Inc. All rights reserved. TIBCO, the TIBCO logo, and Spotfire are trademarks or registered trademarks of TIBCO Software Inc. or its
subsidiaries in the United States and/or other countries. All other product and company names and marks in this document are the property of their respective
owners and mentioned for identification purposes only.
07Jul2021
API Product Manager’s
Guide to API Analytics
TIBCO guide | 2

Purpose, Audience & Objectives
This guide provides an overview of the current API analytics landscape, introduces a conceptual
framework & maturity model, and recommends best practices for API analytics.

Who should read:
•  API Product Managers
•  API Architects
•  IT Leadership

Objectives of this Guide:
•  Understand why every organization must invest in
API analytics
•  Understand the major types of API analytics
•  Understand the maturity model for API analytics
•  Understand the top 10 best practices

What Will Readers Gain?


•  An overview of API analytics and why it matters
•  An understanding of the key types and stakeholders of API analytics
•  A maturity model to guide the growth of your API analytics program
•  Best practices to consider when implementing an API analytics program

Overview
In today’s IT landscape, application programming interfaces
(APIs) are key components of digital transformation or
modernization initiatives. They have grown in importance as
enablers of modern digital business, and with that growth, the
importance of API analytics in ensuring that businesses are
getting the maximum value out of their API programs has
also grown.
However, when asked to describe API analytics, most people
think of operational dashboards with metrics like queries per
month, trends, or API call status distribution. Most organizations,
even those with years of API management experience, find it
challenging to define API metrics that can be used for strategic
decision-making and planning. If the ultimate objective of your
analytics program is to support business decision-making,
the mismatch between expectations and reality needs to
be addressed.
Moreover, just as the amount of API-related data available
for analysis has multiplied, there has also been a proliferation
of new data management capabilities, architectural patterns,
and technology in areas like cloud-native computing, artificial
intelligence (AI), machine learning (ML), and modern data stacks
that also have to be integrated into API analytics. Especially in an
enterprise setting, architects and IT leadership need to strategize
and deliver API analytics in a multi-cloud, multi-domain, and
multi-API gateway environment.
This guide offers practitioners like API product managers, IT
architects, and IT leadership guidance on modern API analytics
from organizational, practical, and technological perspectives.

Before we start
It’s important to make sure we have a shared understanding of
some basic concepts and key terms.

What is an API?
An API is an interface that provides programmatic access
to business assets, application functionality, and data. Over
the years, APIs have evolved from their roots in software
development. In the modern IT landscape, an API is a key
enabler of digital transformation, used directly or indirectly
by a variety of technical and non-technical stakeholders within
and outside of the enterprise.

What Is API Analytics?
API analytics is the sum total of all the technologies, skills, and
processes required to create actionable insights from data
that support an organization’s overall business objectives and
API strategy.
An API analytics program is needed to:
•  Improve the operational health of an API platform: You
cannot operate your API program reliably and efficiently
without measuring and monitoring it using operational
monitoring alerts, routine health scoreboards, proactive
testing, and more.
•  Improve the adoption of APIs: Drive the adoption of APIs by
both internal and external consumers by monitoring current
adoption and identifying potential actions to improve it.
•  Explore API monetization strategies: Identify the best-fit
monetization strategies for individual API products.
•  Provide feedback on the API product and API platform
roadmap: The best feedback for product managers on
future roadmaps comes from in-depth insights into what
has already been built, through traffic patterns, developer
engagement metrics, and more.
•  Realize ROI from your API platform investments: Realize
more value from your API platform investments by clearly
defining the business objectives and strategy of your
API program.
•  Inform your IT and digital strategy: Create a feedback loop
for your IT and business strategy to guide future investments.

Typical Signs You Need to Invest in API Analytics
•  IT operations: “I didn’t know there was an issue with that
API.” (Improve: API platform alerting and reporting)
•  IT teams: “I didn’t know there was an API for that.”
(Improve: API platform documentation and analytics)
•  IT and business leadership: “Why do we have this API?” or
“Why are we spending effort on this API?” (Improve: API
product operations analytics)
•  Product managers, partner managers, and executive
leadership: “Why are we not getting enough value out of
this?” (Improve: API product strategy analytics)
•  Product managers, partner managers, and executive
leadership: “What can we do to improve adoption and
increase ROI?” (Improve: API product strategy analytics)

Types of API Analytics
Your API analytics strategy should not take a one-size-fits-all
approach. It must incorporate different viewpoints and support
multiple approaches. The specific approach should depend on
the needs of a variety of stakeholders and what they would like
to achieve.

Styles of Analytics
•  Descriptive: Identify trends and relationships using
historical and current data
•  Diagnostic: Determine the root cause of events, trends,
or relationships
•  Predictive: Forecast likely future outcomes using
historical data
•  Prescriptive: Identify the recommended actions to
optimize business decisions

Organizational span: Most organizations can identify the
number of stakeholders for APIs and how far or broad the
organizational span is. For instance, an internal API endpoint
may be used by only internal stakeholders on the same team, in
contrast with an external API product that is used by partners,
customers, and in some cases, customers’ customers.
Decision horizon: Analytics exist to aid action in the form of
decisions. The temporal horizon of decisions and their potential
impact indicates whether they are strategic or operational in
nature. Strategic decisions involve a longer time horizon in
support of certain business objectives; operational decisions
involve the how and the now to achieve longer-term strategic
objectives. For instance, determining the API keys to manually
remove from an API product’s plan is operational, whereas
deciding on the type of API monetization strategy for a
particular B2B2C external API is more strategic.
Based on our experience working with hundreds of
organizations over the past decade, we recommend choosing
one of four areas of focus for your API analytics plan: platform
operations, platform planning, product operations, or
product strategy.

[Diagram: The four focus areas plotted by organizational span
(limited to broad) and decision horizon (operational to strategic):
•  API platform ops (limited span, operational decisions):
diagnostic & descriptive analytics; IT ops, SRE & infrastructure
•  API platform planning (limited span, strategic decisions):
descriptive & predictive analytics; IT leadership, infra &
architecture
•  API product ops (broad span, operational decisions):
descriptive & predictive analytics; IT leadership, product
managers, LoB managers
•  API product planning (broad span, strategic decisions):
interactive data stories, prescriptive analytics; IT ops, SRE &
infrastructure]



API platform operations
Objective: Operate the API platform securely, reliably, and
efficiently.
Style of analytics: Diagnostic, Descriptive, Prescriptive
Stakeholders: IT ops; site reliability engineering
Sample metrics: API uptime; CPU and memory use; peak API
requests per second; API errors per minute

API platform planning
Objective: Strategize the API platform to drive business value
and operational viability.
Style of analytics: Descriptive, Predictive
Stakeholders: IT leadership; infrastructure engineering; SRE;
platform architecture
Sample metrics: API uptime; API latency; API requests per
month; API error distribution

API product operations
Objective: Manage API products for customers and partners
efficiently and reliably.
Style of analytics: Descriptive, Predictive
Stakeholders: Product management; line of business managers
Sample metrics: API requests by status code; API requests per
key; API latency; API throughput

API product strategy
Objective: Strategize API products for customers and partners
efficiently and reliably.
Style of analytics: Descriptive, Prescriptive
Stakeholders: Product management; IT and business leadership
Sample metrics: API consumers; API revenue by channel,
customer, partner; business KPIs like gross merchandise value
(GMV), customer lifetime value (CLTV), etc.
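Several of the operational sample metrics above (peak API requests per second, API errors per minute, API error distribution) can be computed directly from raw gateway access logs. Below is a minimal sketch in Python; the three-field log record (timestamp, status code, latency) is an illustrative assumption, not the log format of any particular API gateway:

```python
from collections import Counter
from datetime import datetime

# Hypothetical gateway access-log records: (timestamp, status_code, latency_ms).
records = [
    (datetime(2022, 7, 1, 12, 0, 1), 200, 42.0),
    (datetime(2022, 7, 1, 12, 0, 1), 200, 55.5),
    (datetime(2022, 7, 1, 12, 0, 2), 500, 103.2),
    (datetime(2022, 7, 1, 12, 0, 2), 200, 38.1),
    (datetime(2022, 7, 1, 12, 0, 3), 404, 12.7),
]

# Peak API requests per second: bucket timestamps by whole second, take the max.
per_second = Counter(ts.replace(microsecond=0) for ts, _, _ in records)
peak_rps = max(per_second.values())  # 2

# API errors per minute: 5xx responses bucketed by minute.
errors_per_min = Counter(
    ts.replace(second=0, microsecond=0) for ts, status, _ in records if status >= 500
)

# API error distribution: request counts by status code.
status_distribution = Counter(status for _, status, _ in records)

# Median latency as a simple latency indicator.
latencies = sorted(lat for _, _, lat in records)
median_latency = latencies[len(latencies) // 2]  # 42.0
```

In practice, the same aggregations would run over logs exported to a data lake or warehouse, feeding the dashboards described above, rather than over an in-memory list.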

API Analytics Maturity Model
We propose an API analytics maturity model that most
organizations can use to move through the various levels. This
is not meant to be followed in exact order; your organization
can jump ahead or skip a level depending on its needs.

1: AD HOC
Description: No API analytics strategy or ownership in place.
Ad-hoc “data pulls” or analyses are done by specialists in
support of specific queries or initiatives.
Ownership: Undefined
API platform ops, platform planning, product ops, and product
strategy analytics: N/A

2: BASIC
Description: There is no API analytics strategy defined; however,
there is a defined process for basic operational analytics, with IT
owning the operations.
Ownership: IT

3: DEFINED
Description: There is a well-defined API analytics strategy in
place, with clear ownership by API product managers supported
by IT, incorporating guidance from line of business (LOB).
Ownership: IT and API product managers with limited LOB
involvement

4: MANAGED
Description: Well-defined API analytics strategy. Insights from
analytics are regularly used for business decision-making and
API product planning.
Ownership: Strong IT and LOB co-ownership

5: OPTIMIZED
Description: The ultimate maturity level, where the API analytics
strategy continually optimizes business performance with
real-time, contextual decision-making across API products, the
platform, and the business as a whole.
Ownership: IT and LOB co-ownership with executive sponsorship
and involvement
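To make the levels concrete, a team could self-assess with a simple rule-based check. The sketch below is illustrative only; the practice flags and their mapping to levels are assumptions made for the example, not criteria defined by this model:

```python
# Map observed analytics practices to an approximate maturity level (1-5).
# The flag names are hypothetical; adapt them to your own assessment criteria.
def maturity_level(practices):
    if practices.get("streaming_analytics") and practices.get("data_science"):
        return 5  # OPTIMIZED: real-time, predictive decision-making
    if practices.get("business_reviews") and practices.get("lob_coownership"):
        return 4  # MANAGED: insights regularly drive business decisions
    if practices.get("documented_strategy") and practices.get("product_dashboards"):
        return 3  # DEFINED: strategy owned by API product managers
    if practices.get("operational_dashboards"):
        return 2  # BASIC: IT-owned operational analytics
    return 1      # AD HOC: no strategy or ownership

print(maturity_level({"operational_dashboards": True}))  # 2
```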

10 API Analytics Best Practices
Here are the top 10 API analytics best practices, with specific
recommendations for each.

1: Document Business OKRs and MBOs
•  Who: IT leadership, Digital CoE
•  Why: Any API strategy exists solely in support of overall
business objectives. Whatever the technique or framework
used (OKRs, MBOs, wikis, or emails), make sure your API
strategy is documented and that it aligns with overall
business objectives.
•  Recommended Reading: Create a Successful API-Based
Ecosystem (Gartner®)1, Measure What Matters2, Why API
Product Management Is Key for API Success3

2: Integrate Analytics with Your API
•  Who: API product managers
•  What: We recommend that no API product be shipped
without analytics in place. Analytics must be an integral part
of all API products. It should be conceived of and delivered
in step with other aspects of your API product, such as
documentation and security, with a similar product lifecycle.
•  While it is true that internal APIs may not have specific
analytical needs and may indeed reuse generic platform-level
analytics capabilities, we recommend product managers
formulate specific analytics requirements for all API products,
whether they are internal or external, and deliver them
alongside the API product.
•  Recommended Reading: A Guide to Product Analytics4

3: Adopt User-centered Analytics Design
•  Who: API product managers
•  Why: The user-centered design approach addresses user
needs in all phases. It is as useful for analytics as it is for
application development. When designing API analytics,
work backwards from the user’s needs, motivations, and
constraints. For instance, understanding whether operations
teams prefer receiving alerts in an alerting app like
PagerDuty or OpsGenie versus email, Slack, or Teams will go
a long way in designing the right analytics approach.
•  Recommended Reading: User Centered System Design5

4: Encourage a Culture of Transparency
•  Who: IT leadership, API product managers
•  Why: Any API strategy exists solely in support of overall
business objectives. Whatever the technique or framework
used (OKRs, MBOs, wikis, or emails), make sure your API
strategy is documented and fully communicated to all
stakeholders.
•  It could be as simple as sharing a spreadsheet or an internal
wiki page; whatever the mechanism, it is imperative to be
consistent.
•  Recommended Reading: 6 Key Lessons for Every New
Manager6

5: Align API Analytics with Domain-based Data Teams
•  Who: API center of excellence, API architects
•  Why: To decentralize or not is a big question for data teams.
For the vast majority of medium to large organizations,
we recommend either a decentralized (also known as
embedded) data team or a domain-based data team
structure. Note that such a structure aligns well with API
products and Domain-Driven Design (DDD) boundaries. The
article below includes a great example implemented at
SnapTravel.
•  Recommended Reading: How should our company structure
our data team?7

6: Assemble a Cloud-native Data Fabric
•  Who: API centers of excellence, API architects
•  Why: Modern analytics has been revolutionized by the
cloud-native data fabric. A data fabric is an end-to-end
data integration and management solution consisting of
architecture, data management and integration software, and
shared data that helps organizations manage their data.
It provides a consistent, unified user experience for any
member of an organization worldwide. In addition, it enables
real-time access to the data needed to implement an API
analytics program.
•  Recommended Reading: Data Fabric as Modern Data
Architecture by TIBCO

7: Composable Analytical Application Design
•  Who: API center of excellence, API architects
•  Why: Just as in application development and architecture,
the best practice in analytics is to use smaller analytical
microapps that can be composed based on user needs.
•  In other words, there is no need to be limited to one
particular type or style of API analytics, or to one source of
data, API management platform, or cloud vendor.

[Diagram: Composable API analytics. Analytics consumers (IT
leadership, line of business, API product managers, SRE/ops, and
automated agents) use interactive visualizations, enterprise
reporting, realtime alerts, conversational bots, data storyboards,
and embedded dashboards, composed over sources such as API
gateways, Excel and flat files, data lakes, data streams, web
services, master and reference data, RDBMSs, data warehouses,
big data, and cloud data.]

8: Embrace Real-time Streaming Analytics
•  Who: API center of excellence, IT operations
•  Why: Modern API analytics, especially in the product and
platform ops zones, requires real-time streaming analytics
that can take multiple forms, such as real-time operational
notifications, streaming visual dashboards, or analytical
operators that perform computations on event streams.
•  Recommended Reading: Transform Operations with Real-
time Intelligence through Hyperconverged Analytics by
TIBCO; Real-time Analytics with Spotfire X and Spotfire Data
Streams by TIBCO; How to Use Real-Time Analytics When
Building an Enterprise Nervous System8

9: Experiment with Data Science
•  Who: API center of excellence, API architects
•  Why: Advances in cloud computing and data science
tooling make it feasible to conduct targeted, goal-driven
experimentation on APIs. This is a key enabler for predictive
API analytics.
•  For example, analyzing API call volume around an airline’s
weather-related events can help predict platform availability
and customer satisfaction issues in the future.
•  Recommended Reading: The 4 Ways IT Is Revolutionizing
Innovation9; Delivering Smart Insights and Decisions with Six
Essential Smart Capabilities of Hyperconverged Analytics by
TIBCO; Break Down the Barriers to Better Data Science by
TIBCO
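One of the analytical operators mentioned in practice 8 (a computation over an event stream) can be sketched as a sliding-window error-rate alert. The window size, threshold, and event shape below are illustrative assumptions, not tied to any TIBCO streaming product:

```python
from collections import deque

class ErrorRateAlert:
    """Fire an alert when the error rate over the last `window` calls crosses a threshold."""

    def __init__(self, window=100, threshold=0.05):
        self.window = window
        self.threshold = threshold
        self.events = deque(maxlen=window)  # rolling record of error flags

    def observe(self, status_code):
        """Feed one API call result; return True if an alert should fire."""
        self.events.append(status_code >= 500)
        if len(self.events) < self.window:
            return False  # not enough data yet
        error_rate = sum(self.events) / len(self.events)
        return error_rate > self.threshold

monitor = ErrorRateAlert(window=10, threshold=0.2)
alerts = [monitor.observe(code) for code in [200] * 9 + [500, 500, 500]]
print(alerts[-1])  # True: 3 errors in the last 10 calls (30% > 20%)
```

In a production setting, the same logic would run inside a stream processor and publish to an alerting channel (email, chat, or a paging tool) rather than return a boolean.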

10: Developer Experience Analytics


•  Who: API product managers
•  Why: Ultimately the success of any API is determined by
its adoption. A good developer experience (DX) for APIs
is crucial.
•  Examples of developer experience metrics for APIs are:
number of developer portal visits, time spent on each
section, number of test calls, time to first hello world, time to
first live API call, sample code exports, SDK downloads, etc.
•  Embracing composable analytical app design means that
these DX metrics can now be combined with operational
and business metrics to build insightful and engaging
analytical apps.
•  Recommended Reading: KPIs for APIs: Why API Calls Are
the New Web Hits10
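A few of the DX metrics above can be derived from a developer-portal event log. Below is a minimal sketch; the per-developer record of signup and first-live-call timestamps is a hypothetical data shape used for illustration:

```python
from datetime import datetime

# Hypothetical portal events: per developer, signup time and first live API call.
events = {
    "dev_a": {"signup": datetime(2022, 7, 1, 9, 0),
              "first_live_call": datetime(2022, 7, 1, 9, 45)},
    "dev_b": {"signup": datetime(2022, 7, 1, 10, 0),
              "first_live_call": datetime(2022, 7, 2, 10, 0)},
    "dev_c": {"signup": datetime(2022, 7, 3, 8, 0),
              "first_live_call": None},  # signed up but never called the API
}

# Time to first live API call (minutes), for developers who got that far.
times = [
    (e["first_live_call"] - e["signup"]).total_seconds() / 60
    for e in events.values()
    if e["first_live_call"] is not None
]
avg_minutes = sum(times) / len(times)  # (45 + 1440) / 2 = 742.5

# Activation rate: share of signups that reached a first live call.
activation_rate = len(times) / len(events)  # 2 of 3 developers
```

Combined with the operational and business metrics discussed earlier, numbers like these make it possible to see, for example, whether a slow time to first call correlates with low adoption.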

The Ultimate API Analytics Checklist

1: AD-HOC (owner: key stakeholders)
•  Access predefined API health dashboards on demand

2: BASIC (owner: IT operations)
•  Create and maintain analytical dashboards for API platform operations
•  Create reporting and alerts for operational monitoring

3: DEFINED
Owner: API product managers
•  Improve API documentation for improved discoverability
•  Identify and document key API stakeholders
•  Create a communication strategy for new API product releases and updates
•  Create and maintain analytics dashboards for API product operations
Owner: IT leadership
•  Define KPIs to enable API platform planning
•  Create and maintain analytics dashboards for API platform planning
•  Support analytics across multi-cloud, multi-domain, and multi-gateway
environments

4: MANAGED (owner: IT and LoB leadership)
•  Define KPIs/OKRs for each API product that map to business objectives
•  Create and maintain analytics dashboards for API product strategy
•  Institute quarterly or annual business reviews

5: OPTIMIZED (owner: IT and LoB leadership with executive support)
•  Implement streaming analytics for better real-time alerting
•  Invest in data science expertise for predictive analytics

Endnotes

1 Santoro, John and Mark O’Neill. Create a Successful API-Based Ecosystem: 3 key steps for a successful
ecosystem—before creating one. Gartner Inc.

2 Doerr, John. Measure What Matters. Penguin Publishing Group, April 24, 2018.

3 Mooter, David. API Product Management Is Key For API Success: Why Your IT-Led API Strategy Is Doomed To
Fail. Forrester, October 1, 2021.

4 A Guide To Product Analytics: Benefits, Metrics & Why It Matters. The Product Manager, March 21, 2022.

5 Norman, Don. User Centered System Design: New Perspectives on Human-computer Interaction. CRC Press,
January 1, 1986.

6 Laraway, Russ. 6 Key Lessons for Every New Manager. The Blog: Radical Reading. Accessed July 7, 2022.

7 Murray, David. How should our company structure our data team? medium.com, October 22, 2020.

8 Schulte, W. Roy et al. How to Use Real-Time Analytics When Building an Enterprise Nervous System, Gartner,
Inc. November 11, 2020.

9 The 4 Ways IT Is Revolutionizing Innovation, an interview with Erik Brynjolfsson. MIT Sloan Management
Review, April 1, 2010.

10 Musser, John. KPIs for APIs: Why API Calls Are the New Web Hits, The Business of APIs Conference, YouTube,
October 15, 2014.

©2022, TIBCO Software Inc. All rights reserved. TIBCO, the TIBCO logo, and Spotfire are trademarks or registered
trademarks of TIBCO Software Inc. or its subsidiaries in the United States and/or other countries. All other product
and company names and marks in this document are the property of their respective owners and mentioned for
identification purposes only.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally
and is used herein with permission. All rights reserved.
TIBCO Technical Case Study: Air France-KLM

Air France-KLM
Development
Performance Soars
with API Management
Innovation

Air France-KLM operations involve a lot more than managing
flight logistics. Behind the airport and flight operations that
passengers see is a huge digital API-driven business that makes
daily operations possible—from customer ticket purchases to
flight maintenance.
Before the COVID pandemic, the Air France-KLM Group’s
goal was to become the world’s most customer-centric airline;
it was investing heavily in passenger service innovation. But
after the onset of the pandemic, the company’s priority shifted
to digitizing commercial cargo and freight operations. It also
needed to increase business agility to adapt to rapidly and
continuously changing travel restrictions.
To accomplish these goals, the company worked with TIBCO to
create an innovative solution to speed API delivery using TIBCO
Cloud API Management software.
The Air France-KLM airline group’s main businesses are
aeronautical maintenance and air transport of passengers
and freight. It is first in intercontinental traffic from Europe and
a major provider of global air transport.

Common Pain Points


In the fast-paced transportation business, it is critical to
correctly deploy to each production endpoint. Air France-
KLM uses 250-plus TIBCO Cloud API Management production
endpoints that are consumed by more than 300 applications
developed internally and by business partners. Failure of these
applications can wreak havoc in daily operations:

•  Aircraft push-back from the gate is delayed if the flight
crew doesn’t receive the passenger list and airway bill for
cargo in the hold.
•  If a maintenance engineer is not properly notified of
service requests, an aircraft may not be returned to
service on time, leading to flight delays.
•  If a logistics partner is not able to book air freight
automatically, the shipments may be delayed.

The centralized API management team is responsible for ensuring
that the correct security policies and governance are enforced
on every API endpoint created by more than 80 product teams.
When a product team requests to publish a new API, multiple
processes are set in motion to ensure compliance with company
policies. Additionally, Air France-KLM requires specific security
architectures and protocols to ensure passenger data is safe
and in compliance with the General Data Protection Regulation
(GDPR). Which security measures are required can vary based
on the type of data being handled. It is paramount that the API
management team verifies correct implementation of security
guidelines to ensure that:

•  All applicable security requirements have been applied.


•  The solution is running on an adequately secure
infrastructure.
•  The TIBCO Cloud API Management configuration is in line
with security guidelines.

Pandemic-related Struggles
Air France-KLM has a vast and complex IT infrastructure,
including multiple on-premises and cloud data centers. To
avoid downtimes and operations disruptions, the company
requires that its 80+ product teams adhere to the CI/CD
lifecycle. Planning and executing a successful API rollout
requires deep knowledge of all eight environments available for
continuous integration and test.
Based on experience, the API management team found
that most product teams requested API deployments with
higher security requirements than were necessary. This
overcomplicated the API deployment process, leading to
greatly increased workloads. Before COVID, it could take the
API management team a full calendar week to complete the
necessary deployment setup. This process could have been
shortened to several hours had a simpler alternative been chosen.
Additionally, a product team could lose track of what the
security measures accomplish, leading to outages regardless
of the testing throughout the development lifecycle. At Air
France-KLM, several acute traffic interruptions occurred due
to changes made to the security constructs. The worst event
deleted security credentials for more than 30 applications.
TIBCO Technical Case Study: Air France-KLM | 3

These credentials could not be restored and had to be re-issued, resulting in an application downtime of approximately three business days.
To minimize the chance for disruptions, the API management
team implemented a specialist peer review process at multiple
points in the API development journey. Prior to the pandemic,
the API management team had 12 specialists covering technical
review including change management, API management
administration, advanced networking, and programming. After
pandemic downsizing, the smaller API management team of
three specialists struggled to execute deployments correctly,
creating a greater risk of outages that could result in lost sales
and delays in airline operations.
As economies reopened and travel resumed, the team was
under pressure to shorten lead times for API publication to
meet project timelines. The circumstances demanded that the
remaining team quadruple efficiency to keep up with demand.

Working Smarter, Not Harder
To address its challenges, Air France-KLM took a critical look at processes and tooling to reduce the processing time for creating or modifying automated deployments. The company identified two prime areas for improvement:

•  Creating a feedback mechanism to identify incorrect interpretations of the Air France-KLM security guidelines. In the previous process, mistakes were identified only during user acceptance testing, after several environments had already been configured.
•  Reducing the steep learning curve when training new employees in deployment processes:
   •  The XML-based XL Deploy descriptors used in the API endpoint configuration (XL Deploy is now known as Digital.ai Deploy) were specific to Air France-KLM, meaning new employees had to start from scratch even if they had prior DevOps experience.
   •  After new employees learned the descriptor format, it then took considerable time to understand how the API landscape translated into the values entered in the deployment descriptor. This process was complicated by the large number of legacy and exceptional constructs that needed to be maintained and secured.

A Shift in Mindset
The API management team embraced the idea that API
configuration was an engineering process. It started viewing
APIs as a composition of various product features, including
API productization, security settings, network routing, and
infrastructure services. All these features required inputs—
some mandatory, some optional—that could be validated
automatically in most cases. This shift in mindset enabled the
reduction of the API management team workload.

First, the team started collaborating with product teams to identify the product features of each API and the corresponding parameters required based on security guidelines. Additionally, the team created a knowledge base that translated the product features into the actual deployment descriptor needed for the CI/CD automation tooling.

[Diagram: Before vs. after. Before, the API management team authored the deployment descriptor for TIBCO Cloud API Management directly. After, the product team declares features, which the knowledge base translates into the deployment descriptor for TIBCO Cloud API Management.]

Product features are the important characteristics that describe how a given API is expected to operate. These features were split into three large categories:

•  API functional and security definitions: Air France-KLM APIs requiring specific authentication and security settings. An example is “an API carrying confidential data intended for machine-to-machine exchanges.”
•  API routing: APIs that use an access path from TIBCO Cloud API Management to back-end services. Because the Air France-KLM network follows the defense-in-depth principle, a connection to the back end is possible via a chain of security devices. An example could be “an access path for public-facing, high-volume services with added guards against site-scraping misuse.”
•  Back-end expectations and infrastructure service dependencies: APIs with dependencies and developer expectations for a back-end service, such as “generates standard Air France-KLM HATEOAS links,” “supports users being authenticated via strong B2E authentication,” or “incompatible with standard B2E authentication.”

To describe features, the API management team moved away from the challenging XML-based deployment descriptors and developed a lightweight, robust domain-specific language (DSL) based on the popular YAML data serialization format. The DSL emphasizes succinctness of expression, reflected in its internal nickname DRY, for “don’t repeat yourself.” Product teams easily understood it after just a short introduction to the enterprise API landscape.

Consider a practical example: the DRY format requires 44 lines and 643 characters, while the equivalent XML descriptor requires 152 lines and nearly 10 kilobytes. In terms of character volume, that is a compression of more than 10 times!

DRY FORMAT (44 lines, 643 characters):

package type: rest
name: Aircon Ground
domain: passenger
version: 1.0.0
path: /aircon-ground

notify admin emails:
- someone@klm.com

responsible emails:
- someone@klm.com

expose:
- asset spec:
    version: V01
    type: rest
    path: ""
  exposure policy:
    http verbs:
    - get
    - post
    - put
    - delete

---
data center: ams
platform: docker
data center segment: kl_whz
environment: ite1

provides from:
  "/aircon-ground/":
    assets:
      "/":
        asset name: Aircon Ground
        asset versions:
        - V01

XML DEPLOYMENT DESCRIPTOR (152 lines, nearly 10 KB; excerpted):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<udm.DeploymentPackage version="...ac95b0c-RS-0.9b2" application="mashery_conf/...">
  <orchestrator>sequential-by-container</orchestrator>
  <deployables>
    <mash.ServiceDeployable name="aircon-...">
      <description>...mash</description>
      <version>...0.9b2</version>
      <endpoints>
        [72 lines of XML code]
      </endpoints>
      [22 lines of XML code]
    </mash.ServiceDeployable>
    <mash.PackageDeployable name="aircon-...">
      [15 lines of XML code]
      <plans>
        [12 lines of XML code]
        <mash.PlanDeployable>
          <planName>Default</planName>
        </mash.PlanDeployable>
      </plans>
    </mash.PackageDeployable>
  </deployables>
</udm.DeploymentPackage>

Compared to XML, the DRY format also requires considerably less typing and proofreading. The main benefit of adopting this approach was codifying the decisions and achieving decision consistency among team members. In the example above, when attempting to deploy this package, the deployer will see the following critical warning:
--+-- 1 cross-check as follows: -----------------------------
expose
\> Response:
| Verify that the security exception is duly granted for this API.
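The knowledge base behind such warnings can be pictured as a small rule engine: each rule inspects the declared product features and either blocks the deployment or emits a cross-check for manual verification. A minimal sketch of that idea; the rule names, feature keys, and messages are hypothetical, not Air France-KLM's actual rule set.

```python
# Minimal sketch of a feature-validation knowledge base.
# Rule names, feature keys, and messages are hypothetical.

def check_confidential_needs_mutual_tls(features):
    # Hypothetical rule: confidential machine-to-machine APIs must declare mTLS.
    if features.get("data_class") == "confidential" and not features.get("mutual_tls"):
        return ("error", "Confidential M2M APIs require mutual TLS.")
    return None

def check_public_exposure(features):
    # Hypothetical rule: public exposure always triggers a manual cross-check.
    if features.get("exposure") == "public":
        return ("cross-check",
                "Verify that the security exception is duly granted for this API.")
    return None

RULES = [check_confidential_needs_mutual_tls, check_public_exposure]

def validate(features):
    """Run every rule; errors block deployment, cross-checks warn the deployer."""
    errors, cross_checks = [], []
    for rule in RULES:
        result = rule(features)
        if result is not None:
            severity, message = result
            (errors if severity == "error" else cross_checks).append(message)
    return errors, cross_checks

errors, warnings = validate({"exposure": "public", "data_class": "internal"})
```

Because every rule is just a function, each newly discovered edge case can be added as one more rule plus a regression test, which is how a rule set grows to hundreds of entries without manual review of each deployment.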

Deployment Time Reduced, Potential Errors Removed
How much time, effort, and financial cost did the Air France-KLM API management team save when it implemented these measures? The team quadrupled throughput with fewer people!

•  The effort to set up a typical API deployment went from several hours to just minutes.
•  The effort to set up more complex cases went from a week to just hours.
•  The team’s remaining three specialists can manage up to 30 API deployment projects at a time.

Most importantly, this gives Air France-KLM the agility to respond more quickly to changing business requirements by releasing new digital services to market faster.
Not only has efficiency increased, but the new process continuously removes chances to make an inadvertent error. The current knowledge base features 500-plus rules supported by 300-plus regression test scenarios. Whenever a team member discovers a new edge case, rules and regression tests are added.
Additionally, the DSL format streamlines employee onboarding by simplifying the process and providing learning material for future team members. The company’s internal documentation offers several solution templates that illustrate which product features to select to build APIs. Product team architects can easily use the DSL with complementary tooling to investigate and visualize the effects of various API publishing configurations before deploying anything. Given that these configurations can now be built in minutes, the architect gains the ability to evaluate and choose the optimal deployment strategy for the product or release.
Additionally, for product teams with high DevOps maturity, Air France-KLM has started offering the possibility to send pull requests that directly propose desired changes. In the ideal case, the API management team only needs to review and approve these requests, reducing its workload even further.
By implementing and streamlining automated decision-making
in its daily work, Air France-KLM has delivered on its strategic
objective: Achieve first-time-right API deployment even in the
midst of post-COVID resource constraints.

Global Headquarters: TIBCO Software Inc., 3307 Hillview Avenue, Palo Alto, CA 94304; +1 650-846-1000 TEL; +1 800-420-8450; +1 650-846-1005 FAX; www.tibco.com

TIBCO Software Inc. unlocks the potential of real-time data for making faster, smarter decisions. Our Connected Intelligence platform seamlessly connects any application or data source; intelligently unifies data for greater access, trust, and control; and confidently predicts outcomes in real time and at scale. Learn how solutions to our customers’ most critical business challenges are made possible by TIBCO at www.tibco.com.

©2022, TIBCO Software Inc. All rights reserved. TIBCO, the TIBCO logo, and TIBCO Cloud are trademarks or registered trademarks of TIBCO Software Inc. or its subsidiaries in the United States and/or other countries. All other product and company names and marks in this document are the property of their respective owners and mentioned for identification purposes only. 25Jan2022
TIBCO solution brief

Choosing between
Kafka, Pulsar, and Other
Messaging Technologies

As the pace of business and change increases, application communication and integration have become significantly more important. A hardened, proven, loosely coupled communication infrastructure is at the foundation of a truly digital enterprise that can quickly react to change.
The most promising approaches to digital communication in
recent years have come from the open-source community where
developers have collaborated to provide solutions to common
challenges in building a digital world. One of these solutions is
Apache Kafka, which was built to provide distributed messaging for
log management and stream processing.
Because meeting the ever-changing needs of your business requires careful consideration, this brief defines several commercial and open source options and sets out their pros, cons, and information about their complexity and cost of ownership.

A Brief History of Messaging

Starting from the time that someone first needed one computer to communicate with another, “messaging” has described digital communications between systems.
Arpanet, the first wide-area packet-switching network, used lower-layer protocols like Ethernet, TCP/IP, and UDP for system-to-system communication; these were the beginnings of messaging.
As Arpanet evolved, and more systems were interconnected to become today’s Internet, communications principles started percolating up from the network layer to the application layer. With these advances came layers of abstraction that simplified how systems connected and communicated, which became the goal of messaging technologies.

Over the years, new communications protocols were invented for different types of communications. Specifications and protocols like Java Message Service (JMS) and Data Distribution Service (DDS) came in the late 90s and early 2000s. Many applications began using HTTP and HTML for more than what they were originally designed for. Everyone was looking for the one protocol that would work for everything—like AMQ and AMQP (covered below)—but the reality is that there will never be a single approach to communicating. One person’s efficiency is another’s demise. Digital communications will always be an amalgam of multiple approaches and paradigms.

Apache Kafka
Open Source Software Solution
To understand Apache® Kafka®, you need to understand where it
came from. Developed by LinkedIn and donated to the Apache
Software Foundation, Kafka was originally designed as a
common framework to handle high-throughput and distributed
workloads for streaming logs and other real-time data feeds.
While the concept of high-throughput messaging isn’t new,
Kafka brings a new approach to solving the challenges of data
distribution and data resiliency that are built on the traditional
concepts of pub/sub messaging. Producer and consumer applications send and receive data using topics (metadata) that allow brokers to do the routing. Kafka is unique in how it manages data persistence and tracks consumption. It distributes topics across brokers, segmenting them into partitions that can be balanced and redistributed by administrators as more capacity and scale are needed.
Unlike other real-time messaging systems that store data based
on durable consumption, Kafka persists data (and consumption
metadata) based on Time To Live (TTL), an approach that
allows applications to consume data from any point in the
persisted data stream (replay those streams on demand) and
use consumer offsets to track which data has been consumed.
TTL provides native support for data replay where other systems
usually require out-of-band techniques to accomplish this.
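The TTL-and-offset model can be sketched in a few lines: the log retains records until they expire (not until they are consumed), each consumer tracks its own offset, and replay is simply seeking that offset backward. This is a conceptual illustration, not the Kafka client API.

```python
import time

class TopicPartition:
    """Toy append-only log: records persist until the TTL expires, not until consumed."""
    def __init__(self, ttl_seconds=3600.0):
        self.ttl = ttl_seconds
        self.records = []  # list of (timestamp, value)

    def append(self, value):
        self.records.append((time.time(), value))
        return len(self.records) - 1  # offset of the new record

    def expire(self, now=None):
        # Drop records older than the TTL (Kafka's retention, greatly simplified).
        now = time.time() if now is None else now
        self.records = [(ts, v) for ts, v in self.records if now - ts < self.ttl]

class Consumer:
    """Each consumer tracks its own offset; replay is just rewinding that offset."""
    def __init__(self, partition):
        self.partition = partition
        self.offset = 0

    def poll(self):
        batch = [v for _, v in self.partition.records[self.offset:]]
        self.offset = len(self.partition.records)
        return batch

    def seek(self, offset):
        self.offset = offset  # consume again from any retained point in the stream

p = TopicPartition()
for v in ("a", "b", "c"):
    p.append(v)
c = Consumer(p)
first = c.poll()      # consumes "a", "b", "c"
c.seek(0)
replayed = c.poll()   # the same records again, with no out-of-band restore
```

Because consumption state lives in the consumer's offset rather than in the broker's deletion of messages, any number of consumers can read the same retained stream independently.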

Key Characteristics of Apache Kafka

REQUIRED SKILLS: Understanding of messaging and underlying operating system functions, like storage and networking communications. Additional understanding of open source software such as Apache ZooKeeper, MirrorMaker, etc.

COMPLEXITY: Relatively easy to use out of the box. Complexity increases when features like security, replication, and global distribution are required.

PROS:
•  Long-term data persistence
•  Distributed data streaming
•  Data replay services
•  Higher throughput

CONS:
•  Multiple systems to manage (brokers, ZooKeeper nodes, MirrorMakers, connectors, etc.)
•  Replication not natively built into Kafka brokers
•  Management and monitoring can be challenging as the infrastructure grows
•  Topic partitioning
•  Secure communications not designed in and difficult to implement
•  Node partition balancing and leader selection
•  Community contributors are largely (90%) from a single organization

TOTAL COST OF OWNERSHIP: Apache Kafka is simple and relatively easy to get up and running initially, especially for small to medium-sized projects. Open source doesn’t mean free, and growing Kafka to enterprise scale requires dedicated support staff to maintain the infrastructure. A number of commercial vendors, including TIBCO, offer Apache Kafka support and maintenance.

PERFORMANCE HIGHLIGHTS:
Volume: HIGH (100,000+ messages/second)
Latency: AVERAGE (average of 10 ms)
Scalability: HIGH (clusters can scale both horizontally and vertically)
Global Distribution: YES (possible with third-party add-ons)

Apache Pulsar
Open Source Software Solution
Developed by Yahoo, Apache® Pulsar®, like many other
messaging solutions, is built on the concept of publisher and
subscriber clients that leverage topics for data access. However,
Pulsar provides a storage system for both real-time and
historical data analysis.

In many ways, Pulsar is similar to Kafka, but the foundation for enterprise scale and deployment differentiates it. Natively built to support all the data distribution paradigms that traditional messaging solutions need to provide, Pulsar also supports the ability to manage stream processing functions directly in the broker infrastructure. This is very appealing to users looking for less complexity when deploying large-scale, global infrastructure. Pulsar’s distribution at enterprise scale provides out-of-the-box support for multi-tenancy and data replication as part of the core infrastructure, allowing for simplification in growing application usage and adoption over time.

Key Characteristics of Apache Pulsar

REQUIRED SKILLS: Basic understanding of messaging and underlying operating system functions, like storage and networking communications. Additional understanding of OSS like Apache ZooKeeper, Apache BookKeeper, etc.

COMPLEXITY: A simplified, encapsulated approach where all functions are centrally accessible, reducing complexity when scaling to enterprise levels.

PROS:
•  Long-term data persistence
•  Multi-tenancy and data replication
•  Flexible security implementations
•  Much higher performance
•  A more centralized approach to integration and streaming functions
•  Very broad community support, with multiple contributors from multiple organizations

CONS:
•  Initial setup can be more daunting
•  While centralized, there are a number of components
•  Not yet as widely deployed as other solutions

TOTAL COST OF OWNERSHIP: Apache Pulsar takes a bit more effort to get up and running, but once deployed, it scales to enterprise levels very well. Open source doesn’t mean free, and running Pulsar at enterprise scale typically requires dedicated support staff to maintain the infrastructure. A number of commercial vendors, including TIBCO, offer Apache Pulsar support and maintenance.

PERFORMANCE HIGHLIGHTS:
Volume: HIGH (100,000+ messages/second)
Latency: AVERAGE (average of 10 ms)
Scalability: VERY HIGH (clusters can scale both horizontally and vertically)
Global Distribution: YES (native support for global distribution and data replication built in)

Eclipse Mosquitto (MQTT)
Open Source Software Solution
Like many other messaging solutions, Eclipse Mosquitto was built and designed with a specific purpose in mind. It is uniquely different in that it was built for the Internet of Things (IoT), specifically to support MQ Telemetry Transport (MQTT). MQTT was developed as an OASIS standard with input from multiple organizations with many years of experience in messaging and data distribution. Organizations like IBM, Microsoft, TIBCO, and many others contributed to the specification, which has become one of the standards for IoT communication. Developed to leverage MQTT exclusively, Eclipse Mosquitto provides a simple broker approach for deploying lightweight messaging suitable for internet-connected devices that usually have low power consumption and intermittent network connectivity. MQTT allows for a pub/sub, topic-based approach to communications for devices like phones, controls, sensors, and microprocessors.
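The topic semantics behind this pub/sub model come from the MQTT specification: levels are separated by `/`, `+` matches exactly one level, and `#` matches all remaining levels. A minimal matcher sketch of those rules (ignoring the `$`-prefixed system-topic special case):

```python
def topic_matches(filter_, topic):
    """Match an MQTT topic name against a subscription filter.

    Implements the core wildcard rules: '+' matches exactly one topic
    level, '#' matches any number of remaining levels (including zero)
    and must be the last level of the filter.
    """
    f_levels = filter_.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":
            return True  # matches all remaining levels, including none
        if i >= len(t_levels):
            return False  # filter is longer than the topic
        if f != "+" and f != t_levels[i]:
            return False  # literal level mismatch
    return len(f_levels) == len(t_levels)
```

For example, a sensor gateway subscribed to `sensors/+/temperature` receives `sensors/room1/temperature` but not `sensors/room1/humidity`, and `sport/#` matches both `sport` and every topic beneath it.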

Key Characteristics of Eclipse Mosquitto (MQTT)

REQUIRED SKILLS: Knowledge of the MQTT protocol and specification.

COMPLEXITY: Very easy to set up and deploy. The MQTT protocol can be a little complex depending on the usage requirements.

PROS:
•  Simple setup for messaging to devices in seconds
•  Purpose-built for IoT
•  The protocol defines message structure, making it easy to integrate with other systems

CONS:
•  Limitations for large-scale enterprise adoption
•  Infrastructure data persistence can be a challenge
•  Designed as a gateway communications protocol that should be integrated into larger backend systems

TOTAL COST OF OWNERSHIP: Eclipse Mosquitto provides a simple way to provide purpose-built communications to IoT devices. It is easy to deploy and maintain, but like any open-source solution, cost increases from supporting and maintaining the infrastructure as it scales. IoT applications have the potential to grow rapidly, and supporting this rapid growth requires more investment in application development and the supporting infrastructure.

PERFORMANCE HIGHLIGHTS:
Volume: HIGH (100,000+ messages/second)
Latency: VARIABLE (depends heavily on deployment architecture and network devices)
Scalability: HIGH (designed for large-scale device communication; cluster scalability can require additional resources)
Global Distribution: NO (built for device interconnectivity; clusters are not designed to scale for global communication but to provide global aggregation of data)

Java Message Service (JMS)
Open Source Software & Commercial Solutions
Developed by a large consortium of enterprise software companies and software developers with the goal of providing a vendor-neutral approach to pub/sub messaging, Java Message Service (JMS) was designed to simplify communications and application development for Java programming. The early goal was to provide a common framework and interface for sending and receiving data in a Java-centric world. In the late 1990s, JMS became the de facto standard for application communication for the Java programming language; however, the end goal of vendor-neutral messaging was never fully achieved because the specification only defined the application programming interface (API) to leverage a JMS system.
The JMS specification, while by definition only applying to Java, quickly grew beyond Java because its features and functions were needed for all types of enterprise communication. It didn’t define the wire protocol or many of the implementation details that the JMS infrastructure needed; therefore, it became a standardized way for applications to interact with a JMS-compliant messaging system. Each implementation was unique and provided additional functionality that was not defined. This meant that switching from one JMS implementation to another was not as simple as originally imagined.
Defining the JMS specification, however, ushered in a new era where common patterns were expected to be available in enterprise-class messaging systems. Flexibility in delivery types, from broadcasting data to large numbers of consumers to more pointed delivery for applications needing queuing semantics, became a common feature of most messaging systems. In addition, the ability to define how data persistence and distribution occur, and the agreements for when a message is considered processed, became common functions of messaging after the advent of JMS.
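The two delivery styles JMS standardized can be sketched in a few lines: a topic fans each message out to every subscriber, while a queue hands each message to exactly one consumer (round-robin here, as many brokers do). This is a conceptual illustration of the two domains, not the JMS API.

```python
from collections import deque
from itertools import cycle

class Topic:
    """Publish/subscribe: every subscriber receives a copy of every message."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        inbox = deque()
        self.subscribers.append(inbox)
        return inbox

    def publish(self, msg):
        for inbox in self.subscribers:
            inbox.append(msg)

class Queue:
    """Point-to-point: each message is delivered to exactly one consumer."""
    def __init__(self):
        self.consumers = []
        self._rr = None

    def add_consumer(self):
        inbox = deque()
        self.consumers.append(inbox)
        self._rr = cycle(self.consumers)  # round-robin over current consumers
        return inbox

    def send(self, msg):
        next(self._rr).append(msg)

t = Topic()
a, b = t.subscribe(), t.subscribe()
t.publish("price-update")             # both a and b receive a copy

q = Queue()
w1, w2 = q.add_consumer(), q.add_consumer()
q.send("order-1"); q.send("order-2")  # each order goes to exactly one worker
```

The broadcast style suits data distribution (market prices, status updates); the queue style suits work distribution, where processing each message twice would be an error.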

Key Characteristics of Java Message Service (JMS)

REQUIRED SKILLS: JMS specification knowledge is a plus.

COMPLEXITY: JMS systems are fairly simple to use and deploy. Since the system is built on a defined specification, the operational behavior is fairly well defined for most scenarios, but understanding all the pieces of the specification can be daunting.

PROS:
•  Well defined due to the JMS specification
•  A very broad set of delivery modes, semantics, and features
•  Purpose-built for large-scale Java communications

CONS:
•  Has grown to support many heavyweight operations
•  Most implementations provide unique extensions that are highly valuable but not interchangeable
•  Specification requirements tend to make the protocols for data exchange chatty and heavyweight

TOTAL COST OF OWNERSHIP: JMS has been around for a long time, and there are both commercial and open-source solutions available. Knowledge of the JMS specification typically lowers application development costs, as the interface is well defined and well known. Scaling JMS infrastructure can be challenging and at times requires large numbers of servers, and most enterprise operations require JMS infrastructure to be set up for disaster recovery or high availability, which adds significant complexity and cost.

PERFORMANCE HIGHLIGHTS:
Volume: MEDIUM (10,000+ messages/second)
Latency: VARIABLE (depends heavily on deployment architecture; latency of the persistence engine can vary from 10s to 100s of milliseconds)
Scalability: VARIABLE (designed for large-scale deployment but typically requires larger-scale server infrastructure to support highly scalable environments)
Global Distribution: YES (designed for large-scale deployment but typically requires larger-scale server infrastructure and complex routing to support global architectures)

AMQ / AMQP
Open Source Software & Commercial Solutions
Like JMS, Advanced Message Queuing (AMQ) and Advanced
Message Queuing Protocol (AMQP) are specifications designed
to provide a common framework for data exchange between
applications. Unlike JMS, AMQ and AMQP define both the
application programming interface (API) and the underlying
network communications layer. By defining the underlying protocol,
AMQP does something JMS cannot: provide a true vendor-neutral
approach to message distribution.
A common messaging paradigm both at the API layer and the
network protocol layer gives developers and organizations a
neutral way to implement a messaging infrastructure without
having to invest in multiple messaging systems. It’s equivalent
to the world agreeing that we are all going to speak the
same language. So, no need for error-prone translations from
one language to another and no more expensive time and
investment in learning how other languages communicate. But
what language is best for universal communications? How do
we incorporate all the efficiencies and subtlety of each unique
language’s communication into a common universal language?

Will everybody be willing to sacrifice the language they are comfortable with for one that will take a large amount of effort to learn, adapt, and master?
This is the challenge of AMQ and AMQP. In trying to be everything for everybody, many sacrifices in efficiency and functionality have to be made. For some applications, these sacrifices for universal communication definitely make sense; for others, the sacrifice is just too great.

Key Characteristics of AMQ/AMQP

REQUIRED SKILLS: AMQ and AMQP specification knowledge needed.

COMPLEXITY: AMQ and AMQP can get highly complex quite fast. A universal approach to data exchange at both the API and protocol level means that the options for data distribution grow exponentially.

PROS:
•  Defined to provide a universal, vendor-neutral approach to message exchange
•  Allows organizations the flexibility to deploy infrastructure that is easily exchanged
•  Specification means operational behavior is very well defined

CONS:
•  Specifications are very heavyweight, due to all the unique universal requirements
•  Doesn’t allow many specializations for individual application requirements
•  Plays to the lowest common denominator, which isn’t necessarily the best choice for all applications
•  Value proposition comes from universal adoption
•  If AMQ is not used as the API, multi-language compatibility is lost
•  Incompatibility between specification versions

TOTAL COST OF OWNERSHIP: AMQ and AMQP could be a great approach to providing universal communications; however, to provide this, there has to be an organizational standard that all communications are required to use. In some organizations this is possible; in many others, it is not, due to unique requirements. If universal adoption is possible, the TCO is low. If not, the TCO grows significantly as AMQ/AMQP becomes an integration protocol that is fairly heavyweight and needs support and maintenance to provide a common point of message integration.

PERFORMANCE HIGHLIGHTS:
Volume: MEDIUM/LOW (average 1,000+ to 10,000+ messages/second)
Latency: AVERAGE (average of 10 ms)
Scalability: VARIABLE (similar to JMS, designed for large-scale deployment but typically requires larger-scale server infrastructure to support highly scalable environments)
Global Distribution: YES (similar to JMS, typically requires larger-scale server infrastructure and complex routing to support global architectures)

High Volume / Low Latency
Open Source Software & Commercial Solutions
One of the benefits of standardization is that everything becomes a level playing field. And for some companies, leveling the playing field would eliminate their competitive advantage. Organizations in industries like financial services increase profitability and productivity using applications that make decisions faster, process events faster, or distribute more market data faster than others.
This is where the specialization of high volume / low latency messaging has its appeal. Thirty years ago, market data distribution for electronic trading was limited in scope, and the technology at the time was limited in scale. The high-performance network infrastructure of the day was lucky to be capable of 10 megabits per second, and low latency was described in seconds. Today, network performance has increased by three orders of magnitude, and 10 gigabit networks and real-time or near real-time application responsiveness are commonplace. Data distribution latency has to be less than 50 microseconds, and some applications require nanosecond response.
Today, many of the features and functions of high volume / low latency messaging have been incorporated into more traditional enterprise messaging offerings. For example, open-source solutions like Apache Kafka and Apache Pulsar describe themselves as high volume / low latency. Purpose-built commercial solutions like IBM LLM, TIBCO FTL software, and Informatica/29West LBM all went to war in the early 2000s in a low-latency race to zero, with many of them now providing broad enterprise functionality built on their high volume / low latency heritage.
The biggest key to leveraging messaging for high volume / low latency functionality is defining what high volume and low latency mean to your enterprise. One organization’s low latency is another’s high latency. So knowing what can be done with a given solution, and how far that solution can scale to meet the demands of extreme data volumes, is key.
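Because “low latency” means different things to different organizations, it helps to measure in percentiles rather than averages; the tail (p99) usually drives the requirement. A sketch of that measurement approach, using an in-process no-op in place of a real publish/consume path:

```python
import time

def nearest_rank_percentile(samples, p):
    """Nearest-rank percentile of a list of samples, with p in [0, 1]."""
    ordered = sorted(samples)
    rank = max(1, int(round(p * len(ordered))))
    return ordered[rank - 1]

def measure(send, n=1000):
    """Measure per-message latency in nanoseconds for a send callable."""
    latencies = []
    for i in range(n):
        start = time.perf_counter_ns()
        send(i)
        latencies.append(time.perf_counter_ns() - start)
    return latencies

# An in-process no-op stands in for the real messaging path; swap in an
# actual publish-and-await-ack call to measure a real system.
lat = measure(lambda msg: None)
p50 = nearest_rank_percentile(lat, 0.50)
p99 = nearest_rank_percentile(lat, 0.99)
```

Comparing p50 and p99 (rather than a single average) shows whether a solution is merely fast on average or consistently fast, which is what extreme-volume applications actually need.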

Key Characteristics of High Volume / Low Latency

REQUIRED Typically for extremely low latency and high volume, extreme tuning
SKILLS of the underlying operating systems, networks, and applications is
required. Knowledge in threading, kernel tuning, and network tuning is
a plus.

COMPLEXITY To achieve highest performance, the solutions can get fairly complex
in what can be optimized and what tradeoffs need to be made. Most
solutions do not require using the more complex layers unless application
requirements demand that level of performance. Typically solutions built
for high volume/low latency are fairly easy to use in basic operations and
can be tuned to meet high demands with increased complexity.
TIBCO solution brief | 10

Key Characteristics of High Volume / Low Latency

PROS • Built for some of the most demanding requirements and


performance

• Handles workloads for all or most all application types

• Designed to scale as infrastructure grows

• Typically can use a peer-peer communication paradigm, removing


network hops

CONS • Defining the needed level of performance can be challenging

• Some solutions lack enterprise features needed for enterprise-


wide deployment

• Complexity can increase quickly as demands for


performance increase

• Requires well-architected publishing and subscribing applications


to keep up with and leverage the advantages

TOTAL COST OF These high volume / low latency solutions typically perform very well
OWNERSHIP for the task at hand. As demands on performance increase, complexity
typically does as well, meaning more investment needs to be made in
deploying, optimizing, and maintaining the infrastructure. Leveraging
low latency / high volume messaging as the nervous system for
enterprise communication provides a high ceiling with regards to
growth, but choosing the right solution that can meet all requirements
can take time and effort.

PERFORMANCE HIGHLIGHTS:
Volume: Extremely HIGH. Average 1,000,000+ messages/second.
Latency: Extremely LOW. Average 50 microseconds.
Scalability: HIGH. Designed to scale both infrastructure and client applications to process and distribute large volumes of data with extremely low latency.
Global Distribution: YES. Many solutions provide global distribution but require careful deployment architecture to maintain performance; typically, architectural guidelines are provided for deploying these solutions in a globally accessible way, with trade-offs.
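As a rough illustration of what these two averages mean together, Little's law relates throughput and latency to the number of messages concurrently in flight. The figures below are the averages quoted above, not measurements of any particular product:

```javascript
// Back-of-the-envelope sizing using Little's law:
//   messages in flight = arrival rate x time in system
const ratePerSecond = 1_000_000; // average throughput quoted above
const latencyMicros = 50;        // average latency quoted above, in microseconds

// Convert microseconds to seconds inside the product to stay in integers.
const messagesInFlight = ratePerSecond * latencyMicros / 1_000_000;

console.log(messagesInFlight); // 50
```

In other words, at these averages only about 50 messages are in transit at any instant, which is why well-architected publishers and subscribers (see CONS above) matter: a single slow consumer can quickly dominate the latency budget.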

Websockets / Mobile Messaging
Open Source Software & Commercial Solutions
One of the first areas traditional enterprise messaging needed to extend into was support for web and mobile communications. Arguably, this requirement was a precursor to cloud messaging and IoT because mobile devices required a lightweight approach to data delivery and variable infrastructure. With the advent of WebSockets and HTML5, a new approach to data delivery for web and mobile devices became available that allowed a natural extension of enterprise messaging features to web and mobile.

WebSockets offered a simplified approach to bi-directional communications for web and mobile applications, but as with standard sockets, a publish/subscribe abstraction layer became very appealing for its much simpler approach to communicating with and scaling web and mobile applications.

The biggest value WebSockets and mobile messaging bring is messaging-based communication native to the devices that need to use it. For web-based applications, messaging is extended as native WebSockets in JavaScript or node.js models. For mobile devices, WebSockets and mobile messaging extend to the native interfaces for those devices: Android Java for Android devices, and iOS C and Swift for Apple devices. Communications natively supported by the consumption medium also provide native support for push notifications, and enterprise functionality allows organizations to integrate and extend existing architectures to mobile communication and integration.
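The publish/subscribe abstraction described above can be sketched as a small topic dispatcher. In a real deployment each subscriber callback would wrap a WebSocket send (for example via the `ws` package in node.js, or the browser's native `WebSocket`), but the routing logic is the same. All names here are illustrative, not any vendor's API:

```javascript
// Minimal topic-based publish/subscribe dispatcher. In a real system each
// subscriber callback would forward the message over a WebSocket connection;
// this in-process sketch shows only the routing layer.
class TopicBroker {
  constructor() {
    this.subscribers = new Map(); // topic name -> Set of callbacks
  }

  subscribe(topic, callback) {
    if (!this.subscribers.has(topic)) this.subscribers.set(topic, new Set());
    this.subscribers.get(topic).add(callback);
    // Return an unsubscribe handle so mobile clients can detach cleanly
    // when a device goes offline.
    return () => this.subscribers.get(topic).delete(callback);
  }

  publish(topic, message) {
    const subs = this.subscribers.get(topic);
    if (!subs) return 0;
    for (const cb of subs) cb(message);
    return subs.size; // number of subscribers notified
  }
}

// Usage: one publisher, two subscribers on the same topic.
const broker = new TopicBroker();
const received = [];
broker.subscribe('orders', (m) => received.push(`a:${m}`));
broker.subscribe('orders', (m) => received.push(`b:${m}`));
broker.publish('orders', 'order-123');
console.log(received); // ['a:order-123', 'b:order-123']
```

The appeal over raw sockets is visible even in this sketch: publishers never track individual connections, which is what makes scaling to many web and mobile clients tractable.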

Key Characteristics of Websockets / Mobile Messaging

REQUIRED SKILLS: WebSockets / mobile messaging solutions tend to be fairly easy to stand up and use. The interfaces tend to be natively defined for web applications and mobile devices, meaning that the normal skill sets for developing applications with these types of interfaces apply.

COMPLEXITY: The overall complexity of WebSockets / mobile messaging tends to be low. These solutions are designed to be very easy to set up, deploy, and service. Typically, complexity comes in handling the large number of connections these types of applications require.

PROS:
• Lightweight and easy to use
• Typically provides native device application development, bringing messaging to the device seamlessly
• Simplifies communications between front-end and back-end systems by leveraging native bi-directional communication to web and mobile devices

CONS:
• Typically not as robust in the enterprise messaging feature set that may be needed for some types of applications
• Communications protocols can be heavier weight due to connection management and network connectivity requirements
• Failure operations need to be considered in highly mobile environments and when devices are not reachable long term

TOTAL COST OF OWNERSHIP: WebSockets / mobile messaging solutions are typically designed to provide simplified communication for web and mobile devices and to integrate with large-scale enterprise solutions. This means that if the only requirement is to provide messaging to these types of applications, the cost of ownership is relatively low. However, like most systems, these solutions need to tie into an enterprise backbone, which requires additional components.

PERFORMANCE HIGHLIGHTS:
Volume: MEDIUM to HIGH. Depends on the deployment model and data structures used.
Latency: HIGH. Latency can be very high depending on the networks and systems used.
Scalability: HIGH. Typically provides high connection scalability, but can require many nodes/servers to do it.
Global Distribution: MEDIUM. Most solutions are designed to provide internet-based communications, but the deployment of these solutions can limit global reach and availability.

Cloud Messaging
Commercial Solutions, Some Built on Open Source
Cloud messaging offerings are the newest of these approaches. With the rapid growth of cloud services, many organizations are looking to leverage the cloud not only for hosting application infrastructure, but for communications infrastructure as well. Because of this, many cloud providers offer simple communications protocols for application development and integration. In addition, many traditional messaging approaches are now available either as hosted services or as containers deployable into cloud environments.

The challenge is that there are many options, depending on the cloud service and the application requirements. Other questions are the level of integration needed with existing on-premises systems and whether multi-cloud support is needed. Moving a communications nervous system from on-premises to cloud or multi-cloud can expose many components that may or may not be suited for the cloud.
Where cloud messaging solutions tend to really stand out is
in new application development. As cloud-native services are
being built and deployed, a cloud-native communications
infrastructure purpose-built for these applications makes
communication simple. Cloud messaging initially provides a
very fast and easy approach to enabling communication for
cloud-based applications.
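Part of what makes cloud messaging simple for new applications is that most services accept a small JSON envelope: a payload plus a set of attributes used for routing and tracing. A generic sketch of building and decoding such an envelope follows; the field names are illustrative, and each provider defines its own schema:

```javascript
// Build a generic cloud-messaging envelope: payload plus routing/tracing
// attributes. Field names are illustrative, not any specific provider's API.
function buildEnvelope(topic, payload, attributes = {}) {
  return {
    topic,
    // Payloads are commonly base64-encoded so binary data survives JSON transport.
    data: Buffer.from(JSON.stringify(payload)).toString('base64'),
    attributes: { contentType: 'application/json', ...attributes },
  };
}

// Reverse the encoding on the consuming side.
function decodeEnvelope(envelope) {
  return JSON.parse(Buffer.from(envelope.data, 'base64').toString('utf8'));
}

// Usage: round-trip an order event with a routing attribute.
const env = buildEnvelope('orders', { id: 123, status: 'new' }, { region: 'us-east' });
console.log(decodeEnvelope(env)); // { id: 123, status: 'new' }
```

Keeping application code against a neutral envelope like this, rather than a provider SDK, is one way to preserve the multi-cloud portability discussed in the CONS below.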

Key Characteristics of Cloud Messaging

REQUIRED SKILLS: Managed services require very little skill to get up and running. Deploying cloud solutions in non-managed environments typically requires basic knowledge of cloud deployment options and containerization models like Kubernetes.

COMPLEXITY: Cloud messaging offerings tend to be fairly easy to use. When more complex operations are needed, cloud deployment of enterprise solutions can be used.

PROS:
• Built for cloud-native communications
• Easy to use, deploy, and maintain
• Managed services are readily available
• Managed services require no additional infrastructure

CONS:
• Many solutions do not provide the wide breadth of features that traditional enterprise messaging solutions provide, like data recovery and persistence
• Multi-cloud deployment requires a neutral approach that allows for deployment into any vendor's cloud
• On-premises integration can be challenging, depending on which cloud messaging offering you choose, especially if an on-premises messaging solution is already deployed
• The latency of transmission can become challenging depending on the location of service deployment

TOTAL COST OF OWNERSHIP: Cloud messaging is a very low-cost way to provide native messaging support without the overhead of deploying infrastructure to support communications channels. Leveraging cloud messaging is typically a great, low-cost approach for new application development or when you need to extend the reach of applications to internet-based services.

PERFORMANCE HIGHLIGHTS:
Volume: HIGHLY VARIABLE. Depends heavily on the deployment architecture and location of services.
Latency: HIGHLY VARIABLE. Depends heavily on the deployment architecture and location of services.
Scalability: HIGH. Designed to be highly scalable on demand.
Global Distribution: HIGH. By design, cloud messaging is globally accessible and typically globally distributed.

Conclusion
Today more than at any other time, enterprises face a difficult
challenge when selecting a messaging communication offering.
While a single solution has a low TCO, no one solution can meet
all the demands for all applications. Messaging has to be more
holistic to fit specific and varied application requirements—
including for high performance/low latency event processing,
streaming data for streaming analytics, microservices for native
integration among disparate applications, IoT applications, and
much more.
The TIBCO Messaging platform fully supports almost all the
approaches described in this whitepaper. It takes on the
burden of natively integrating all communications solutions
and allows application developers to select the approach that
makes the most sense for the requirements without sacrificing
performance or needing additional coding. And with its native
information exchange, interchange, and data transformation,
TIBCO Messaging gives you the flexibility to build a fully-
integrated communications nervous system to unlock the data
throughout your enterprise.
With nearly 30 years’ experience deploying and maintaining
some of the most complex communications infrastructures
in the world—and now with full enterprise support for open
source and the other messaging technologies in our portfolio—
TIBCO can help you deploy and maintain virtually any solution you choose.

Global Headquarters
3307 Hillview Avenue
Palo Alto, CA 94304
+1 650-846-1000 TEL
+1 800-420-8450
+1 650-846-1005 FAX
www.tibco.com

TIBCO Software Inc. unlocks the potential of real-time data for making faster, smarter decisions. Our Connected Intelligence platform seamlessly connects any application or data source; intelligently unifies data for greater access, trust, and control; and confidently predicts outcomes in real time and at scale. Learn how solutions to our customers' most critical business challenges are made possible by TIBCO at www.tibco.com.

©2020, TIBCO Software Inc. All rights reserved. TIBCO, the TIBCO logo, and FTL are trademarks or registered trademarks of TIBCO Software Inc. or its subsidiaries in the United States and/or other countries. Apache, Kafka, and Pulsar are trademarks of The Apache Software Foundation in the United States and/or other countries. All other product and company names and marks in this document are the property of their respective owners and mentioned for identification purposes only.

20May2020
