Anypoint Platform Architecture: Application Network Design
Course prerequisites
The target audience of this course is architects, especially Enterprise Architects and Solution
Architects, who are new to Anypoint Platform, API-led connectivity and the application network,
but experienced in other integration approaches (e.g., SOA) and integration
technologies/platforms.
Prior to attending this course, students are required to get an overview of Anypoint Platform
and its constituent components. This can be achieved by various means.
Course goals
The overarching goal of this course is to enable students to architect and design application
networks using API-led connectivity and Anypoint Platform.
Course objectives
These high-level goals are broken down into the following objectives:
• Select the deployment options for Anypoint Platform that most effectively support an
organization’s priorities. [A/E]
• Break down the functional requirements of strategic organizational initiatives first into
products and then into business-aligned, versioned APIs with effective granularity and API
data model, following the principles of API-led connectivity. [C]
• Direct the creation and publication of API-related assets using RAML and Anypoint Platform
components and features (such as Anypoint API Manager, API designer, API Notebooks, API
Portals, API Consoles, Anypoint Exchange) with the goal of maximizing not only their value
for a given project but also their wider consumption (incl. by API consumers and operations
teams) to further the growth of the application network. [A/E]
• Identify non-functional requirements that are best addressed on the level of API invocations
and architect for their effective documentation, implementation, enforcement, analysis,
monitoring and alerting using HTTP, RAML and components and features of Anypoint
Platform (such as CloudHub Load Balancers and CloudHub Dedicated Load Balancers,
Anypoint API Manager, API policies, API proxies and Anypoint Analytics). [C]
• Identify non-functional requirements that are best addressed on the level of API
implementations and/or API clients and architect for their effective implementation,
deployment, monitoring and alerting using components and features of Anypoint Platform
(such as CloudHub, the Mule runtime, Anypoint MQ and Anypoint Runtime Manager). [C]
• Select API policies and their enforcement in API proxies or within API implementations, and
define monitoring and alerting approaches in accordance with the principles of API-led
connectivity. [A/E]
• Architect for specific requirements by augmenting API-led connectivity with elements from
Event-Driven Architecture using Anypoint MQ. [A/E]
• Advise organizations on their general approach to DevOps, CI/CD and testing of API-led
connectivity solutions within an application network, making effective use of the automation
capabilities of Anypoint Platform. [U/A]
[R] Remembering
[U/A] Understanding and applying
[A/E] Analyzing and evaluating
[C] Creating
Course outline
• Welcome To Anypoint Platform Architecture: Application Network Design
• Module 1
• Module 2
• Module 3
• Module 4
• Module 5
• Module 6
• Module 7
• Module 8
• Module 9
• Module 10
• Wrapping-Up Anypoint Platform Architecture: Application Network Design
This course is primarily driven by a single case study, Acme Insurance, and two imminent
strategically important change initiatives that need to be addressed by Acme Insurance. These
change initiatives provide the background and motivation for most discussions in this course.
As various aspects of the case study are addressed, the discussion naturally elaborates on the
central topic of the course, i.e., how to architect and design application networks using API-led
connectivity and Anypoint Platform.
However, the course cannot jump into architecting without any prior knowledge about
Anypoint Platform, what terms like "API-led connectivity" and "application network" actually
mean, and how MuleSoft and MuleSoft customers typically approach integration challenges like
those faced by Acme Insurance. Therefore, Module 1 and Module 2 provide this context-setting
and introduction. Acme Insurance itself is already briefly introduced in Introducing Acme
Insurance and becomes the focus of the discussion from Module 3 onwards.
As the architectural and design discussions in this course unfold, it is inevitable that opinions
are expressed, solutions presented and decisions made that are somewhat ambiguous, without
a clear-cut distinction between correct and incorrect: such is the nature of architecture and design.
A good example of this is the discussion on Bounded Context Data Model versus Enterprise
Data Model in Section 6.3. Students are of course encouraged to challenge the
decisions made, and to decide differently in similar real-world situations. The crucial point is
that the thought processes behind these architectural and design decisions are elaborated on
in the course, which creates awareness of the topic and increases understanding of the
tradeoffs involved in making a decision.
Exercises, typically in the form of group discussions, are an important element of this course.
But these exercises never take the form of actually doing something, on the computer, with
Anypoint Platform or any of its components. Instead, they are simply discussions that invite
reflection, with the intention of validating and deepening the understanding of a topic addressed
in the course.
All architecture diagrams use ArchiMate 3 notation. A summary of that notation can be found
in Appendix B.
Course logistics
• Class is from 09:00 to 17:00
• 1 hour lunch break, approx. from 12:30 to 13:30
• 15 minute break each morning and afternoon
◦ Other breaks as desired - just ask!
• Please let us know if you have other business to attend to!
The course manual is somewhere between a bound edition of the slides and a standalone
book: it contains all slide content and enough context around this content to be much easier to
consume than the slides alone would be. On the other hand, the course manual lacks some of
the explanations and elaborations that a full-fledged book would be expected to contain: this
additional depth is provided by the instructor when teaching the course!
MuleSoft offers a certification based on this course: "MuleSoft Certified Architect (MCA) -
Application Network Designer". For the target audience of this course, attending class and
studying the Course Manual should be sufficient for passing the exam.
Acme Insurance has recently been acquired by an international competitor: The New Owners.
As one consequence, Acme Insurance is currently being rebranded as a national subsidiary of
The New Owners. As another important consequence, Acme Insurance’s strategy is
increasingly being defined by The New Owners.
• Acme Insurance operates an IBM-centric Data Center with the Acme Insurance Mainframe
and clusters of AIX machines
• The Policy Admin System runs on the Mainframe and is used by both Motor and Home
Underwriting. However, support for Motor and Home policies was added to the Policy Admin
System in different phases and so uses different data schemata and database tables
• The Motor Claims System is operated in-house on a WebSphere application server cluster
deployed on AIX
• The Home Claims System is a different system, operated by an external hosting provider
and accessed over the web
• Both claims systems are accessed by Acme Insurance’s own Claims Adjudication team from
their workstations
• Simple claims administration is handled by an outsourced call center, also via the Motor
Claims System and Home Claims System
• Open up to external price comparison services ("Aggregators") for motor insurance: This
contributes to the goal of establishing new sales channels, which in turn (positively)
influences the driver of increasing revenue, which is important to all management
stakeholders
• Provide (minimal) self-service capabilities to customers: This contributes to the goal of
increasing the prevalence of customer self-service, which in turn (positively) influences the
driver of reducing cost, which is important to all management stakeholders as well as to
Corporate IT
Not immediately relevant, but clearly on the horizon, are the following far-reaching changes:
• Replace the two bespoke claims handling systems, the Motor Claims System and Home
Claims System, with one COTS product: This contributes to the principle of preferring COTS
solutions, which in turn contributes to Corporate IT’s goal of standardizing IT systems
across all subsidiaries of The New Owners
• Replace the legacy policy rating engine with a corporate standard: This contributes to the
principle of reusing custom software (such as the corporate standard policy rating engine)
where possible, which in turn contributes to Corporate IT’s goal of standardizing IT systems
Figure 3. Acme Insurance's immediate and near-term strategic change initiatives, their
rationale and stakeholders
Objectives
• Know about Outcome-Based Delivery (OBD)
• Understand how this course is aligned to parts of OBD
• Use essential course terminology correctly
• Become familiar with the ArchiMate 3 notation subset used in this course
Figure 4. OBD holistically addresses all aspects of integration capability delivery into an
organization
However:
• Iteration is at the heart of OBD, but this course does not iterate
◦ Every topic is discussed once, in the light of different aspects of the case study, which
would in the real world be addressed in different iterations
• OBD stresses planning, but this course does not simulate planning activities or present
plans
• Discussion of organizational enablement and the C4E is light and mainly introduces the
concept and a few ideas on how to measure the C4E’s impact with KPIs
Figure 5. This course focuses on architectural aspects of technology delivery and introduces
the C4E
Figure 6. The simplified notion of API merges the aspects of application interface, technology
interface, API implementation, application service and business service and is here
represented visually as an application service element with the name of the API. An API in this
simplified sense serves an API client and is invoked (triggered) by that API client
This simplified notion of API is justified because in a very significant number of cases there is
exactly one instance of each of these elements per API.
• Experience APIs are shown invoking Process APIs and Process APIs are shown invoking
System APIs, although, in reality, it is only the API implementation of the Experience API
that depends on the technology interface of the Process API and, at runtime, through that
interface, invokes the API implementation of the Process API; and only the API
implementation of the Process API that depends on the technology interface of the System
API and, at runtime, through that interface, invokes the API implementation of the System
API
• It is possible to say that an “API is deployed to a runtime” when in reality it is only the API
implementation (the application component) that is deployable
In other contexts (not in this course), the terms "service" or "microservice" are used for the
same purpose as the simplified notion of API.
When the simplified notion of API is dominant, the pleonasm "API interface" is sometimes
used to specifically address the interface aspect of the API, in contrast to its implementation
aspect.
For instance:
• If the Auto policy rating API were just exposed over one HTTP-based interface, e.g., a
JSON/REST interface, then the simplified notion of this API would comprise:
◦ Technology interface: Auto policy rating JSON/REST programmatic interface
◦ Application interface: Auto policy rating
◦ Business service: Auto policy rating
◦ The application component (API implementation) implementing the exposed functionality
• However, since the Auto policy rating API (in the strict sense of application interface) is also
realized by a second technology interface, the Auto policy rating SOAP programmatic
interface, it is not clear whether the simplified notion of API comprises both technology
interfaces or not. It is therefore preferred, in complex cases such as this, to use the term
API only in its precise sense, i.e., as a special kind of application interface as defined in
Section 1.2.1.
Summary
• Outcome-based Delivery (OBD) is a holistic framework and methodology for enterprise
integration delivery proposed by MuleSoft, addressing
◦ Business outcomes
◦ Technology delivery in the form of platform delivery and delivery of projects
◦ Organizational enablement through the Center for Enablement (C4E), support for
Anypoint Platform and training
• This course is aligned with the technology delivery and C4E aspects of OBD
• An API is an application interface, typically with a business purpose, exposed to programmatic
API clients over HTTP-based protocols
• Sometimes API also denotes the API implementation, i.e., the underlying application
component that implements the API’s functionality
• API-led connectivity is a style of integration architecture that prioritizes APIs and assigns
them to three tiers
• Application network is the state of an Enterprise Architecture that emerges from the
application of API-led connectivity and fosters governance, discoverability, consumability
and reuse
Objectives
• Understand MuleSoft’s mission
• Understand MuleSoft’s proposal for closing the increasing IT delivery gap
• Learn about Anypoint Platform, its capabilities and high-level components
• Understand options for hosting Anypoint Platform and provisioning Mule runtimes
It used to be that 80% of companies on the Fortune 500 would still be there after a decade.
Today, with these forces, enterprises have no better than a 1-in-2 chance of remaining in the
Fortune 500.
To succeed, companies need to operate at a very different clock speed and embrace change;
change has become a constant. Successful companies are leveraging the aforementioned
forces to be competitive and, in some cases, to dominate their markets.
Business is pushing to move at much faster speeds than IT and technology are able to.
Technology and IT are holding back business rather than enabling it.
• McDonald’s
• Subway
• Marriott
• Amazon
• Even with constant IT delivery capacity, IT can empower "the edge" - i.e., LoB IT and
developers - by creating assets and helping to create assets they require.
• Consumption of those assets and the innovation enabled by those assets can then occur
outside of IT, at the edge, and therefore grow at a considerably faster rate than IT delivery
capacity itself.
• In this way, the ever-increasing demands on IT can be met even though IT delivery
capacity stays approximately constant.
Figure 9. How MuleSoft's proposal for an IT operating model that distinguishes between
enablement and asset production on the one hand, and consumption of those assets and
innovation on the other hand, allows the increasing demands on IT to be met at constant IT
delivery capacity
Central IT needs to move away from trying to deliver all IT projects themselves and start
building reusable assets to enable the business to deliver some of their own projects.
Figure 10. An IT operating model that emphasizes the consumption of assets by LoB IT and
developers as much as the production of these assets
The modern API is a product: it has its own software development lifecycle (SDLC)
consisting of design, test, build, management and versioning, and it comes with thorough
documentation to enable its consumption.
• Modern APIs adhere to standards (typically HTTP and REST) that are developer-friendly,
easily accessible and broadly understood
Figure 11. Visualization of how a modern API, productized with the API consumer in mind,
gives various types of API clients access to backend systems
API-led connectivity provides an approach for connecting and exposing assets through APIs. As
a result, these assets become discoverable through self-service without losing control.
• System APIs: In the example, data from SAP, Salesforce and ecommerce systems is
unlocked by putting APIs in front of them. These form a System API tier, which provides
consistent, managed, and secure access to backend systems.
• Process APIs: Then, one builds on the System APIs by combining and streamlining
customer data from multiple sources into a "Customers" API (breaking down application
silos). These Process APIs take core assets and combine them with some business logic to
create a higher level of value. Importantly, these higher-level objects are now useful assets
that can be further reused, as they are APIs themselves.
• Experience APIs: Finally, an API is built that brings together the order status and history,
delivering the data specifically needed by the Web app. These are Experience APIs that are
designed specifically for consumption by a specific end-user app or device. These APIs allow
app developers to quickly innovate on projects by consuming the underlying assets without
having to know how the data got there. In fact, if anything changes in any of the systems
or processes underneath, it may not require any changes to the app itself.
With API-led connectivity, when tasked with a new mobile app, there are now reusable assets
to build on, eliminating a lot of work. It is now much easier to innovate.
• Central IT produces reusable assets, and in the process unlocks key systems, including
legacy applications, data sources, and SaaS apps. This decentralizes and democratizes access
to company data. These assets are created as part of the project delivery process, not as a
separate exercise.
• LOB IT and Central IT can then reuse these API assets and compose process level
information
• App developers can discover and self-serve on all of these reusable assets, creating the
experience-tier of APIs and ultimately the end-applications
It is critical to connect the three tiers, as they drive the production and consumption model
with reusable assets, which are discovered and self-served by downstream IT and developers.
Figure 13. The APIs in each of the three tiers of API-led connectivity have a specific focus and
are typically owned by different groups
Figure 15. Isolated backend systems before the first project following API-led connectivity
Figure 16. Every project following API-led connectivity not only connects backend systems but
contributes reusable APIs and API-related assets to the application network
These APIs and API-related assets are:
• discoverable
• self-service
• consumable by the broader organization
The application network is recomposable: it is built for change because it "bends but does not
break".
◦ I.e., the deployment and execution of API clients and API implementations with certain
runtime characteristics
• API operations and management
◦ I.e., operations and management of APIs and API policies, API implementations and API
invocations
• API consumer engagement
◦ I.e., the engagement of developers of API clients and the management of the API clients
they develop
As was also mentioned in the context of OBD in Section 1.1.1, these capabilities are to be
deployed in such a way as to contribute to and be in alignment with the organization’s
(strategic) goals, drivers, outcomes, etc.
These technology delivery capabilities are furthermore used in the context of an (IT) operating
model that comprises various functions, such as delivery, architecture, governance and security.
Figure 17. A high-level view of the technology delivery capabilities provided by Anypoint
Platform in the context of various relevant aspects of the Business Architecture of an
organization
Rather than going into a lot of detail now, it is best to just browse through these and revisit
them at the end of the course, matching what was discussed during the course against these
capabilities.
(Figure 18 content: a capability map including API Specification Design; API Implementation Design; Data Transformation; Orchestration, Routing and Flow Control; Connectivity with External Systems; API Testing, Simulation and Mocking; API Policy Configuration; API Client Credentials Management; API Policy Alerting, Analytics and Reporting; API Analytics relevant to Operator/Provider and to Consumer; API Usage and Discoverability Analytics; Reusable Assets Discovery; Actionable API Documentation; API Consumer and Client On-boarding; Runtime Hosting; Runtime High-availability and Scalability; Runtime Analytics and Monitoring; Artifacts Version Control; Automated Build Pipeline; and supporting capabilities such as Software Development Process, Project Management, Infrastructure Operations and the Operating Model.)
Figure 18. A medium-level drill-down into the technology delivery capabilities provided by
Anypoint Platform, and some generic supporting capabilities needed for performing API-led
connectivity
Figure 19. Important derived capabilities related to API clients and API implementations
• API design
• API policy enforcement and alerting
• Monitoring and alerting of API invocations
• Analytics and reporting of API invocations, including reporting on whether SLAs are being met
• Discoverable assets for the consumption of anyone interested in the application network,
such as API consumers and API providers
• Engaging documentation, primarily for the consumption of API consumers
• Self-service API client registration for API consumers
Figure 20. Important derived capabilities related to APIs and API invocations
• Anypoint Design Center: Development tools to design and implement APIs, integrations and
connectors [Ref2]:
◦ API designer
◦ Flow designer
◦ Anypoint Studio
◦ Connector DevKit
◦ APIKit
◦ MUnit
◦ RAML SDKs
• Anypoint Management Center: Single unified web interface for Anypoint Platform
administration:
◦ Anypoint API Manager [Ref3]
◦ Anypoint Runtime Manager [Ref5]
◦ Anypoint Analytics [Ref7]
◦ Anypoint Access management [Ref6]
• Anypoint Exchange: Save and share reusable assets publicly or privately [Ref4]. Preloaded
content includes:
◦ Anypoint Connectors
◦ Anypoint Templates
◦ Examples
◦ WSDLs
◦ RAML APIs
◦ Developer Portals
• Mule runtime and Runtime services: Enterprise-grade security, scalability, reliability and
high availability:
◦ Mule runtime [Ref1]
◦ CloudHub
◦ Anypoint MQ [Ref8]
◦ Anypoint Enterprise Security
◦ Anypoint Fabric
▪ Worker Scaleout
▪ Persistent VM Queues
◦ Anypoint Virtual Private Cloud (VPC)
• Anypoint Connectors:
◦ Connectivity to external systems
◦ Dynamic connectivity to API specifications
◦ Build custom connectors using Connector DevKit
• Hybrid cloud
All functionality exposed in the web UI is also available via Anypoint Platform APIs: these are
JSON REST APIs which are also invoked by the web UI. Anypoint Platform APIs enable
extensive automation of the interaction with Anypoint Platform.
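As a minimal illustration of this kind of automation, the following Python sketch authenticates against the MuleSoft-hosted Anypoint Platform and issues one such JSON REST call. It is only a sketch: the endpoint paths and response fields shown are assumptions that should be checked against the current Anypoint Platform API documentation, and a production script would typically use a dedicated automation identity rather than a personal username and password.

    # Sketch only: endpoint paths and response fields are assumptions to be
    # verified against the current Anypoint Platform API documentation.
    import os
    import requests

    BASE_URL = "https://anypoint.mulesoft.com"  # MuleSoft-hosted Anypoint Platform

    def get_access_token(username: str, password: str) -> str:
        """Log in to Anypoint Platform and return a bearer token (assumed endpoint)."""
        response = requests.post(
            f"{BASE_URL}/accounts/login",
            json={"username": username, "password": password},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["access_token"]

    def get_my_profile(token: str) -> dict:
        """Call an Anypoint Platform API with the bearer token (assumed endpoint)."""
        response = requests.get(
            f"{BASE_URL}/accounts/api/me",
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        token = get_access_token(os.environ["ANYPOINT_USER"], os.environ["ANYPOINT_PASSWORD"])
        print(get_my_profile(token))

The same pattern applies to any other Anypoint Platform API, which is what makes scripted interaction with the platform practical.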
MuleSoft also provides higher-level automation tools that capitalize on the presence of
Anypoint Platform APIs.
Related to this discussion is the observation that Anypoint Exchange is also accessible as a
Maven repository. This means that a Maven POM can be configured to deploy artifacts into
Anypoint Exchange and retrieve artifacts from Anypoint Exchange, just like with any other
Maven repository (such as Nexus).
Summary
• MuleSoft’s mission is "To connect the world’s applications, data and devices to transform
business"
• MuleSoft proposes to close the increasing IT delivery gap through a consumption-oriented
operating model with modern APIs as the core enabler
• API-led connectivity defines tiers for Experience APIs, Process APIs and System APIs with
distinct stakeholders and focus
• An application network emerges from the repeated application of API-led connectivity and
stresses self-service, visibility, governance and security
• Anypoint Platform provides the capabilities for realizing application networks
• Anypoint Platform consists of these high-level components: Anypoint Design Center,
Anypoint Management Center, Anypoint Exchange, Mule runtime, Anypoint Connectors,
Runtime services, Hybrid cloud
• Interaction with Anypoint Platform can be extensively automated
Objectives
• Establish a C4E at Acme Insurance and identify KPIs to measure its success
• Understand options for hosting Anypoint Platform and provisioning Mule runtimes
• In Anypoint Platform, set up the organizational structure and Identity Management for Acme
Insurance
• Compare and contrast Identity Management and Client Management on Anypoint Platform
• LoBs (personal motor and home) have a long history of independence, also in IT
• LoBs have strong IT skills, medium integration skills but no know-how in API-led
connectivity or Anypoint Platform
• Acme IT is small but enthusiastic about application networks and API-led connectivity
• DevOps capabilities are present in LoB IT and Acme IT
• Corporate IT lacks the capacity and desire to involve themselves directly in Acme
Insurance’s Enterprise Architecture. All they care about is that corporate principles are
being followed, as summarized in Figure 3
enable
Enables LoBs to fulfil their integration needs
API-first
Uses API-led connectivity as the main architectural approach
asset-focused
Provides directly valuable assets rather than just documentation contributing to this goal
self-service
Assets are to be self-service consumed (initially) and (co-) created (ultimately) by the LoBs
reuse-driven
Assets are to be reused wherever applicable
federated
Follows a decentralized, federated operating model
Remarks:
• The "enable" principle defines an Outcome-Based Delivery model (OBD) for the C4E
• The principles of "self-service", "reuse-driven" and "federated" promise increased
integration delivery speed
• Overall the C4E aims for a streamlined, lean engagement model, as also reflected in the
"asset-focused" principle
• This discussion omits questions of funding the C4E and whether C4E staff is fully or partially
assigned to the C4E
The New
Owners
Personal Motor Claims Motor Home Home Claims Home LoB IT C4E
Motor LoB IT Underwriting Underwriting
Figure 23. Organizational view into Acme Insurance's target Business Architecture with a C4E
supporting LoBs and their project teams with their integration needs. C4E roles are shown in
blue
1. Compile a list of statements which, if largely true, allow the conclusion that the C4E is
successful
2. Compile a similar list that allows the conclusion that the application network has grown to a
significant size
3. From the above lists, extract a list of metrics that measure success for the C4E and growth
of the application network
Solution
All of the metrics can be extracted automatically, through REST APIs, from Anypoint Platform.
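For example, the Anypoint Exchange asset count shown in Figure 25 could be collected by a short script. The sketch below assumes an Exchange search endpoint and response shape (a JSON array of assets with a type field) that must be verified against the current Anypoint Exchange API documentation, and it assumes a bearer token is already available in an environment variable.

    # Sketch only: the Exchange endpoint, query parameters and response shape
    # are assumptions to be verified against the current Anypoint Exchange API.
    from collections import Counter
    import os
    import requests

    BASE_URL = "https://anypoint.mulesoft.com"

    def count_exchange_assets_by_type(token: str, organization_id: str) -> Counter:
        """Return the number of published Exchange assets per asset type (a C4E KPI)."""
        response = requests.get(
            f"{BASE_URL}/exchange/api/v2/assets",
            params={"organizationId": organization_id, "limit": 100},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        response.raise_for_status()
        return Counter(asset["type"] for asset in response.json())

    if __name__ == "__main__":
        counts = count_exchange_assets_by_type(
            os.environ["ANYPOINT_TOKEN"], os.environ["ANYPOINT_ORG_ID"]
        )
        for asset_type, count in counts.most_common():
            print(f"{asset_type}: {count}")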
Figure 25. Current number of assets published to Anypoint Exchange grouped by type of asset
Figure 26. Overview of Anypoint Platform deployment options and MuleSoft product names (in
bold) for each supported scenario. The Anypoint Platform management plane comprises
Anypoint API Manager and Anypoint Runtime Manager
Options for the deployment of the management plane of Anypoint Platform, i.e., Anypoint API
Manager and Anypoint Runtime Manager:
• MuleSoft-hosted:
◦ Product: Anypoint Platform
• Customer-hosted:
◦ Product: Anypoint Platform Private Cloud Edition
Options for the hosting and provisioning of Mule runtimes:
• MuleSoft-managed:
◦ In the public Amazon Web Services cloud: CloudHub
◦ In an Amazon Web Services VPC: CloudHub with Anypoint VPC
• Customer-managed:
◦ On-premises, manually provisioned
◦ In a customer-managed cloud:
Options for the PaaS functionality of Anypoint Platform, i.e., the execution of Mule applications
by automatically provisioned Mule runtimes:
• MuleSoft-managed:
◦ Product: CloudHub, which is a part of the MuleSoft-hosted Anypoint Platform
• Customer-managed:
◦ Product: Anypoint Platform for Pivotal Cloud Foundry
◦ Anypoint Platform Private Cloud Edition with community-driven PaaS elements for
OpenShift
Figure 30. Customer-hosted Anypoint Platform Private Cloud Edition managing customer-
managed Mule runtimes, such as on-premises Mule runtimes
Figure 31. Customer-hosted Anypoint Platform for Pivotal Cloud Foundry managing customer-
managed PaaS-provisioned Mule runtimes on PCF
Figure 32. Customer-hosted Anypoint Platform for Pivotal Cloud Foundry managing customer-
managed PaaS-provisioned Mule runtimes on PCF, where the Mule runtimes only host API
proxies in front of API implementations that are not Mule applications
• At the highest level, not all Anypoint Platform components are currently available for every
Anypoint Platform deployment scenario.
• Some features of Anypoint Platform components that are available in more than one
deployment scenario differ in their details, typically due to the different technical
characteristics and capabilities available in each case.
Solution
Anypoint Platform deployment scenarios can be evaluated along the following dimensions:
• IT operations effort: favors MuleSoft-hosted Anypoint Platform over all other deployment
scenarios; favors Anypoint Platform for Pivotal Cloud Foundry over Anypoint Platform
Private Cloud Edition
• Latency and throughput when accessing on-premises data sources: favors scenarios where
Mule runtimes can be deployed close to these data sources, i.e., Anypoint Platform Private
Cloud Edition and Anypoint Platform for Pivotal Cloud Foundry
• Control over Mule runtime characteristics like JVM and machine memory, garbage collection
settings, hardware, etc.: favors Hybrid and Anypoint Platform Private Cloud Edition over
Anypoint Platform for Pivotal Cloud Foundry over MuleSoft-hosted Anypoint Platform
Figure 33. Anypoint Access management and the Anypoint Platform entitlements
Figure 34. Anypoint Access management controls access to and allocation of various resources
on Anypoint Platform
Figure 35. Anypoint Access management at the level of the Personal Motor LoB business group
By default, Anypoint Platform acts as an Identity Provider for Identity Management. But
Anypoint Platform also supports configuring one external Identity Provider for each of these
two uses (Identity Management and Client Management), independently of each other.
If an external Identity Provider is configured for Identity Management, then this is currently
used only for interactions with the Anypoint Platform web UI and not for invocations of the
Anypoint Platform APIs. Therefore, before configuring an external Identity Provider for Identity
Management, set up administrative users for invoking the Anypoint Platform APIs. These will
remain valid after the external Identity Provider has been configured.
For Client Management Anypoint Platform supports the following Identity Providers as OAuth
2.0 servers:
• OpenAM
• PingFederate
After a brief evaluation period, Acme Insurance chooses PingFederate as an Identity Provider
on top of AD. They configure their Anypoint Platform organization in the MuleSoft-hosted
Anypoint Platform to access their on-premises PingFederate instance for Identity Management.
Acme Insurance is currently unsure whether they will need OAuth 2.0, but if they do, they plan
to use the same PingFederate instance also for Client Management.
Summary
• A federated C4E is established to facilitate API-led connectivity and the growth of an
application network
◦ Federation plays to the strength of Acme Insurance’s LoB IT
◦ KPIs to measure the C4E’s success are defined and monitored
• Anypoint Platform can be hosted by MuleSoft or customers
• Mule runtimes used with Anypoint Platform can be provisioned manually or through a PaaS
• PaaS-provisioning of Mule runtimes is supported via CloudHub or Anypoint Platform for
Pivotal Cloud Foundry
• Not all Anypoint Platform components are available in all deployment scenarios
• Acme Insurance and its LoBs and users are onboarded onto Anypoint Platform using an
external Identity Provider
• Identity Management and Client Management are clearly distinct functional areas, both
supported by Identity Providers
Objectives
• Map Acme Insurance’s planned strategic initiatives to products and projects
• Identify APIs needed to implement these products
• Assign each API to one of the three tiers of API-led connectivity
• Understand in detail composition and collaboration of APIs
• Reuse APIs wherever possible
• Publish APIs and related assets for reuse
All relevant stakeholders come together to concretize these strategic initiatives into two
minimally viable products and their defining features:
The products' features realize the requirements defined by the strategic initiatives.
Figure 36. Architecturally significant features of the immediately relevant strategic initiatives,
and the products they are assigned to
The two products, the "Aggregator Integration" product and the "Customer Self-Service App"
product, are assigned to two project teams. The project for the "Aggregator Integration" product is
kicked off immediately, while the project for the "Customer Self-Service App" product starts
with some delay.
This project for the "Aggregator Integration" product is the first to use API-led connectivity at
Acme Insurance, and is also the one establishing the foundation of what will become the Acme
Insurance application network.
The project to implement the "Aggregator Integration" product kicks off at the Personal Motor
LoB, and is actively supported by the newly established C4E within Acme IT.
This is the first API-led connectivity project at Acme Insurance, so it must establish an
Enterprise Architecture compatible with an application network. The resulting application
network will at first be minimal, just enough to sustain the "Aggregator Integration" product,
but it will grow subsequently when the "Customer Self-Service App" product is realized.
Within the application network and API-led connectivity frameworks, you first architect for the
functional and later, in Section 5.1, for the non-functional requirements of this feature.
1. The business process is triggered by the receipt of a policy description from the Aggregator
2. First it must be established whether the policy holder for whom the quote is to be created is
an existing customer of Acme Insurance, i.e., whether they already hold a policy at Acme
Insurance
3. Applicable policy options (collision coverage, liability coverage, comprehensive insurance, …
) must be retrieved based on the policy description
4. Policy options must be ranked such that the options most likely to be attractive to the customer
and most lucrative to Acme Insurance appear first
5. One policy quote must be created for each of the top-5 policy options
6. The business process ends with the delivery (return) of the top-5 quotes to the Aggregator
Figure 37. High-level view of the "Create Aggregator Quotes" business process
4.2.3. Looking ahead to the NFRs for the "Create quote for
aggregators" feature
To give a bit more context for the following discussion, it is helpful to briefly inspect the non-
functional requirements (NFRs) that will have to be fulfilled for the "Create quote for
aggregators" feature: see Section 5.1.1.
Solution
(Figure 38 content: the "Aggregator Quote Creation API" in the Experience APIs tier; Process APIs; and the "Motor Policy Holder Search API", "Home Policy Holder Search API", "Policy Options Retrieval API", "Motor Quote Creation New Business API" and "Motor Quote Creation Addon Business API" in the System APIs tier, in front of the Policy Admin System.)
Figure 38. Experience API, Process APIs and System APIs collaborating for the "Create quote
for aggregators" feature and ultimately serving the needs of the Aggregator
You observe that the "Create quote for aggregators" feature can be implemented by one
synchronous invocation of the "Aggregator Quote Creation Experience API" which in turn
triggers synchronous invocations of several APIs in all 3 tiers of the architecture, ultimately
leading to multiple invocations of the Policy Admin System.
This serves the functional requirements of this feature, but will need to be revisited when NFRs
are discussed.
Summary
(Figure 39 content: the "Aggregator Quote Creation" service and API, shown together with the "Policy Options Ranking JSON REST API" and "Create Motor Quote JSON REST API" technology interfaces.)
Figure 39. "Aggregator Quote Creation Experience API", serving the Aggregator
• Its technology interface is a JSON REST API that is invoked by the API implementation of
the "Aggregator Quote Creation Experience API"
• Its API implementation invokes two System APIs, one of them being the "Motor Policy
Holder Search System API"
Figure 40. "Policy Holder Search Process API", initially serving the API implementation of the
"Aggregator Quote Creation Experience API"
• Its technology interface is a JSON REST API that is invoked by the API implementation of
the "Policy Holder Search Process API"
• Its API implementation invokes the Policy Admin System over an unidentified technology
interface (MQ-based, see Section 7.1.3)
Figure 41. "Motor Policy Holder Search System API", exposing existing functionality in the
Policy Admin System, and initially serving the API implementation of the "Policy Holder Search
Process API"
Figure 42. How APIs in all tiers serve the "Create Aggregator Quotes" business process
1. Analyze the APIs identified for the realization of the "Create quote for aggregators" feature
with respect to their dependency on the data format exchanged with the Aggregator
2. Identify new APIs and refine existing APIs to maximize reuse when other Aggregators will
need to be supported in the future
3. Describe as clearly as possible which elements of the currently identified APIs will have to
change to accommodate your proposed changes
Solution
• Only the "Aggregator Quote Creation Experience API" depends on the Aggregator-defined
data format
• In the future there will be one Experience API per Aggregator, similar to "Aggregator Quote
Creation Experience API"
• The common functionality of these Experience APIs should be encapsulated in a new
Process API, e.g. the "One-Step Motor Quote Creation Process API"
◦ Accepts and returns Aggregator-neutral description of policy and quotes
• Changes:
This scenario, i.e., the refactoring of an existing Experience API to adapt to an improved
understanding of an integration scenario, is a concrete realization of the claim that application
networks are recomposable and "bend but don’t break" under change (see Section 2.2.13).
The current Aggregator as an existing client of the "Aggregator Quote Creation Experience
API" does not experience any change as the "Aggregator Quote Creation Experience API" API
implementation is refactored to use the new "One-Step" Process API. At the same time,
technical debt for the existing, misguided implementation of the "Aggregator Quote Creation
Experience API" is paid back immediately by the creation of the new "One-Step" Process API
and the reuse of the orchestration logic hidden in the "Aggregator Quote Creation Experience
API".
As the application network is just being established, Anypoint Exchange currently contains no
APIs that can be reused for this feature. In order to announce the fact that the chosen APIs
will be implemented, you immediately create an API Portal for each API and link it to Acme
Insurance’s Developer Portal and Anypoint Exchange. In order to create these assets, a basic
API specification, preferably in the form of a RAML definition, is required for the API.
The C4E provides guidance and support with these activities. Importantly, the C4E defines
naming conventions for all assets, including for those to be published in Anypoint Exchange.
API documentation for the "Policy Holder Search Process API" is a form of contract for all
elements of the API, i.e., its business service, application service, application interface and
technology interface. The RAML definition of the API is the most important way of expressing
that contract.
API documentation must be discoverable and engaging for it to be effective: two capabilities
that are provided by Anypoint Platform as discussed shortly.
• Details of the JSON/REST interface to the API should be exhaustively specified in the RAML
definition of the API
• The same is true for security constraints like required HTTPS protocol and authentication
mechanisms
◦ Currently unknown information can be added later to the RAML definition, for instance
when NFRs are addressed
• Other NFRs, like throughput goals, are not part of the RAML definition but of the wider API
documentation, specifically the API Portal
Figure 43. Documentation for the "Policy Holder Search Process API", including its RAML
definition, documents the business service realized by the API, its SLA and non-functional
aspects. API documentation must also be discoverable and engaging, as it must be easy to
access by API consumers
The RAML definition is currently more of a stub, but will be amended as the project
progresses.
Using the mocking feature of API designer you confirm that the interaction with the API is
sound from the API client’s perspective.
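A quick way to repeat such a client-side sanity check outside of API designer is to call the mocked endpoint from a small script. The sketch below is hypothetical: the mocking service base URL and the /policy-holders resource with its name query parameter are placeholders for whatever the preliminary RAML definition of the "Policy Holder Search Process API" actually declares.

    # Hypothetical client-side check against the API designer mocking service.
    # The base URL, resource path and query parameter are placeholders for what
    # the preliminary RAML definition actually declares.
    import os
    import requests

    MOCK_BASE_URL = os.environ["MOCK_BASE_URL"]  # URL shown by the mocking service

    def search_policy_holders(name: str) -> list:
        """Invoke the mocked search resource the way a real API client would."""
        response = requests.get(
            f"{MOCK_BASE_URL}/policy-holders",
            params={"name": name},
            headers={"Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()  # fail fast if the mocked contract is not honored
        return response.json()

    if __name__ == "__main__":
        for policy_holder in search_policy_holders("Smith"):
            print(policy_holder)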
Figure 44. Using API designer to sketch and try-out (mock) "Policy Holder Search Process API"
Figure 45. Creating an API Portal with API Notebook and an API Console for the "Policy Holder
Search Process API" makes for engaging API documentation serving API consumers
At this early stage in the project you should already create an API Portal for the "Policy Holder
Search Process API" which should at the very least document what cannot be expressed in the
RAML definition:
• E.g., HTTPS mutual authentication for the "Aggregator Quote Creation Experience API"
(which is a different API)
Figure 46. Creating an API Portal for "Policy Holder Search Process API"
Include an API Console for the "Policy Holder Search Process API" into its API Portal, using the
preliminary RAML definition created earlier.
Figure 47. Including an API Console in the API Portal for "Policy Holder Search Process API"
Include an API Notebook for the "Policy Holder Search Process API" in its API Portal. This API
Notebook makes use of the API’s preliminary RAML definition created earlier.
Figure 48. Creating an API Notebook for "Policy Holder Search Process API"
• The API Portal for "Policy Holder Search Process API" is automatically included in Acme
Insurance’s Developer Portal
• You create an entry in Acme Insurance’s Anypoint Exchange pointing to the API Portal and
RAML definition of the "Policy Holder Search Process API"
Figure 49. Including the API Portal for the "Policy Holder Search Process API" into Acme
Insurance's Developer Portal and Anypoint Exchange makes that API documentation
discoverable by API client developers
Figure 50. Acme Insurance's Developer Portal showing some of the APIs available in Acme
Insurance's application network
Figure 51. The API consumer's view of the API Portal for "Policy Holder Search Process API" is
reachable from the Acme Insurance's Developer Portal
Create an entry in Anypoint Exchange that references the newly created API Portal for the
"Policy Holder Search Process API" and also points to the RAML definition for that API. This
makes it even easier for API client developers to discover and reuse the API.
Strictly speaking, an Anypoint Exchange entry of type "REST API" is for a RAML definition.
Hence an Anypoint Exchange entry of type "REST API" does not represent the managed API
version, or an API endpoint or an API implementation. The version of the Anypoint Exchange
entry is therefore the artifact version of that RAML definition. Every change to the content of
that RAML definition triggers a version increase in the corresponding Anypoint Exchange entry.
This behavior is consistent with the fact that Anypoint Exchange is also a Maven-compatible
artifact repository - storing, in this case, a RAML definition. See Section 6.2 for a discussion of
versioning API-related artifacts.
Figure 52. The Anypoint Exchange entry for "Policy Holder Search Process API", giving access
to various assets of this and previous versions of this API, such as its RAML definition, a Mule
runtime connector/plugin to invoke this API from Mule applications, the API Portal and API
endpoint URLs
"Policy Holder Search Process API" can now be discovered by two main mechanisms:
4.3.12. Repeat for all APIs for the "Create quote for
aggregators" feature
Create rudimentary RAML definitions, API Portals with API Notebooks and API Consoles, and
Anypoint Exchange entries for all APIs needed for the "Create quote for aggregators" feature.
Figure 53. Acme Insurance's Anypoint Exchange showing some of the APIs available in Acme
Insurance's application network
This is the second API-led connectivity project at Acme Insurance, so it can already build on an
Enterprise Architecture compatible with a nascent application network.
The project team realizing the "Customer Self-Service App" product is again located at the
Personal Motor LoB. However, it is assumed that the "Retrieve policy holder summary" feature
will require access to information typically handled by the Home LoB. This product therefore
has a wider business scope than the very focused "Aggregator Integration" product addressed
earlier. The contribution of the C4E, as a cross-LoB, Acme Insurance-wide entity, is therefore
particularly important. The federated nature of the Acme Insurance C4E should come as an
advantage here, because it means that there are C4E-aligned and -assigned roles in both
Personal Motor LoB IT and Home LoB IT.
Within the application network and API-led connectivity frameworks, you first architect for the
functional and then for the non-functional requirements of the two features of the "Customer
Self-Service App" product, in turn.
You analyze the "Retrieve policy holder summary" feature, trying to break it down into APIs in
the three tiers of API-led connectivity, checking against Acme Insurance’s Anypoint Exchange
and Developer Portal as you do so:
• You discover the existing "Policy Holder Search Process API" and decide that it fits the first
step in the "Retrieve policy holder summary" feature, so you reuse it from the new "Policy
Holder Summary Process API"
• You define the new "Policy Search Process API" to support searching for policies across lines
of business (motor and home)
• The "Claims Process API" currently only needs to support searching for claims across LoBs,
but is envisioned to ultimately grow to support other operations on claims
(Figure 54 content: the "Mobile Policy Holder Summary API" in the Experience APIs tier; the "Policy Holder Summary API" and further Process APIs; and the "Motor Policy Holder Search API", "Home Policy Holder Search API", "Motor Policy Search API", "Home Policy Search API", "Motor Claims Search API" and "Home Claims Search API" in the System APIs tier, in front of the Policy Admin System, the Motor Claims System and the Home Claims System.)
Figure 54. Experience API, Process APIs and System APIs collaborating for the "Retrieve policy
holder summary" feature of the "Customer Self-Service App" product
4.4.3. APIs for the "Submit auto claim" feature in all tiers
In addition to a "Motor Claims Submission System API" you also define the "Motor Claims
Submission Process API", to insulate the Experience API from the System API. This is because
• it is very possible that the Process API will have to perform as-yet undiscovered
coordination in order to invoke the System API
• the Process API will likely need to validate the claim submission before passing it on to the
System API
(Figure 55 content: the "Mobile Auto Claim Submission API" in the Experience APIs tier, the "Motor Claims Submission API" in the Process APIs tier and the "Motor Claims Submission API" in the System APIs tier, in front of the Motor Claims System.)
Figure 55. Experience API, Process APIs and System APIs collaborating for the "Submit auto
claim" feature of the "Customer Self-Service App" product
Note that the asynchronicity of the interaction (see Section 5.2.3) is not visible in this
diagram.
• The RAML definitions for the APIs capture the important functional aspects in a preliminary
fashion
• An API Portal has been created for each API, including an API Console and a rudimentary
API Notebook for that API
• An entry in Acme Insurance’s Anypoint Exchange has been created for each API,
referencing the API’s RAML definition and pointing to its API Portal and API endpoints
• The Acme Insurance Developer Portal gives API consumers access to all API Portals
• No NFRs have been addressed
• No API implementations and no API clients have been developed
(Figure 56 content: the Experience APIs; the "Policy Holder Summary API", "Policy Holder Search API", "Policy Options Ranking API", "Motor Quote API", "Policy Search API", "Motor Claims Submission API" and "Claims API" in the Process APIs tier; and the "Motor Policy Holder Search API", "Home Policy Holder Search API", "Policy Options Retrieval API", "Motor Quote Creation New Business API", "Motor Quote Creation Addon Business API", "Motor Policy Search API", "Home Policy Search API", "Motor Claims Submission API", "Motor Claims Search API" and "Home Claims Search API" in the System APIs tier.)
Figure 56. All APIs in the Acme Insurance application network after addressing the functional
requirements of the "Aggregator Integration" product and "Customer Self-Service App"
product
Summary
• Acme Insurance’s immediate strategic initiatives require the creation of an "Aggregator
Integration" product and a "Customer Self-Service App" product
• The functional requirements of these products have been analyzed:
◦ Require 3 Experience APIs, 7 Process APIs and 10 System APIs
◦ Aggregator and Customer Self-Service Mobile App invoke Experience APIs
◦ API implementations of Experience APIs invoke Process APIs
◦ API implementations of Process APIs invoke other Process APIs or System APIs
◦ System APIs access the Policy Admin System, the Motor Claims System and the Home
Claims System, respectively
• 1 Process API and 2 System APIs originally identified for the "Aggregator Integration"
product have been reused in the "Customer Self-Service App" product
• Using API designer, RAML definitions for each API were sketched and simulated
• API Portals with API Console and API Notebook were created and published for each API
Objectives
• Know the NFRs for the "Aggregator Integration" product and "Customer Self-Service App"
product
• Understand how Anypoint API Manager controls API invocations
• Use API policies to enforce non-functional constraints on API invocations
• Understand the difference between enforcement of API policies in an API implementation
and an API proxy
• Register an API client for access to an API version
• Understand how to pass client ID/secret to an API
• Establish guidelines for API policies suitable for System APIs, Process APIs and Experience
APIs
Consequently, the NFRs for the "Create quote for aggregators" feature are dictated primarily
by the Aggregator:
Figure 57. Essential NFRs for the "Create quote for aggregators" feature
5.1.2. Meeting the NFRs for the "Create quote for aggregators"
feature using an Anypoint Platform-based Technology
Architecture
The implementation of the "Create quote for aggregators" feature must meet the NFRs listed
above. At this point you select a Technology Architecture rooted in Anypoint Platform features
that addresses these NFRs:
• XML/HTTP interface:
◦ Not architecturally significant, should be captured in API specification
• Throughput and response time:
◦ Very demanding
◦ Must be broken-down to APIs on all tiers
◦ Must be enforced, monitored and analyzed: Anypoint API Manager, Anypoint Analytics
◦ Anticipate the need for caching
◦ Select highly scalable Mule runtime: CloudHub
◦ Need to carefully manage load on Policy Admin System
• Must not lose quotes
◦ All-synchronous chain of API invocations, hence reliability requirement can be met by an
ACID operation on Policy Admin System
▪ If the Aggregator receives a quote then that quote must have been persisted in the
Policy Admin System
▪ If the Aggregator does not receive a quote due to a failure then a quote may still
have been persisted in the Policy Admin System, but the Aggregator user cannot refer
to that quote and it is therefore "orphaned"
• HTTPS mutual authentication:
◦ Possible with CloudHub Dedicated Load Balancers in an Amazon Web Services VPC
◦ Should add client authentication on top of HTTPS mutual auth: Anypoint API Manager
Figure 58. Essential NFRs for the "Retrieve policy holder summary" feature
This means that Acme Insurance’s PingFederate instance, in addition to serving as an Identity
Provider for Identity Management, also assumes the responsibilities for OAuth 2.0 Client
Management. The C4E in collaboration with Acme IT configures the MuleSoft-hosted Anypoint
Platform accordingly.
• Request over HTTP with claim submission, with asynchronous processing of the submission
• Performance:
◦ Currently ill-defined NFRs
◦ Aim for 10 requests/s
◦ No response time requirement because processing is asynchronous
• Security: HTTPS, OAuth 2.0-authenticated customer
• Reliability: claim submissions must not be lost
• Consistency: Claims submitted from the Customer Self-Service Mobile App through the
"Submit auto claim" feature should be included as soon as possible in the summary
returned by the "Retrieve policy holder summary" feature
Figure 59. Essential NFRs for the "Submit auto claim" feature
• Performance and security requirements: as before for "Retrieve policy holder summary"
feature
• Async processing of claim submission and no claim submission loss:
◦ Select suitable messaging system to trigger asynchronous processing without message
loss:
▪ Anypoint MQ or Mule runtime persistent VM queues as implemented in CloudHub
▪ Anypoint MQ would be a new component in Acme Insurance’s Technology
Architecture
◦ Select a suitable persistence mechanism to store correlation information for asynchronous
processing (see the sketch after this list):
▪ Mule runtime Object Store as implemented in CloudHub
• Consistency: to be addressed through event notifications, see Module 8
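To make the asynchronous pattern concrete, the following stand-in sketch shows the shape of the interaction in plain Python (Flask, an in-memory queue and a dict): in the actual design the API implementation is a Mule application, the queue is Anypoint MQ or a persistent VM queue, and the correlation store is the Mule runtime Object Store. Resource and field names are hypothetical.

    # Stand-in sketch of the asynchronous "Submit auto claim" interaction.
    # In the actual design: a Mule application instead of Flask, Anypoint MQ or a
    # persistent VM queue instead of queue.Queue, Object Store instead of the dict.
    import queue
    import threading
    import uuid

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    claim_queue: "queue.Queue[dict]" = queue.Queue()  # stand-in for Anypoint MQ
    correlation_store: dict = {}                      # stand-in for the Object Store

    @app.post("/claims")
    def submit_claim():
        """Accept the claim submission and return immediately with a tracking ID."""
        submission = request.get_json(force=True)
        tracking_id = str(uuid.uuid4())
        correlation_store[tracking_id] = "RECEIVED"
        claim_queue.put({"trackingId": tracking_id, "claim": submission})
        # 202 Accepted: processing continues asynchronously after the response
        return jsonify({"trackingId": tracking_id}), 202

    @app.get("/claims/<tracking_id>/status")
    def claim_status(tracking_id: str):
        """Let the API client poll the processing status via the correlation store."""
        status = correlation_store.get(tracking_id, "UNKNOWN")
        return jsonify({"trackingId": tracking_id, "status": status})

    def process_claims():
        """Background worker: pass queued submissions on to the Motor Claims System."""
        while True:
            message = claim_queue.get()
            # ... invoke the "Motor Claims Submission System API" here ...
            correlation_store[message["trackingId"]] = "SUBMITTED"

    if __name__ == "__main__":
        threading.Thread(target=process_claims, daemon=True).start()
        app.run(port=8081)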
The consistency requirement cannot be met just through communication with the Motor
Claims System alone, because once a claim submission is passed to the Motor Claims System
it goes through a sequence of transitions that are not visible from outside the Motor Claims
System, i.e., are not accessible through the "Motor Claims Search System API". Only after
considerable time has passed does the newly submitted claim become visible to the "Motor
Claims Search System API", so that it can be returned via the normal interaction with the
Motor Claims System for the "Retrieve policy holder summary" feature. This requirement will
be addressed separately in Module 8.
• REST APIs
◦ With API specification in the form of a RAML definition or OAS definition
◦ Without formal API specification
◦ Hypermedia-enabled REST APIs
• Non-REST APIs
◦ GraphQL APIs
◦ SOAP web services (APIs)
◦ JSON-RPC, gRPC, …
The API policies themselves are not included into any of these Mule applications, just the
capability of enforcing API policies. This is true for both the API policy template (code) and API
policy definition (data). API policies are downloaded at runtime from Anypoint API Manager
into the Mule application that enforces them.
Solution
• With API implementations not deployed to Mule runtimes the use of API proxies is
mandatory.
• With API proxies API policy enforcement and API implementation can be scaled separately,
both horizontally as well as vertically.
• Number of nodes approx. doubles when API proxies are used.
• Deployment architecture and CI/CD is more complex with API proxies.
• API proxies can shield API implementations from DoS or similar attacks, which would be
rejected by the API proxies and therefore wouldn’t even reach the API implementations.
However, because all API invocations to an API implementation go through the API proxies
for that API, the DoS attack still has the potential to disrupt the service offered by that API
simply by swamping the API proxies with requests.
• API proxies can be deployed to a different subnet/VPC than their underlying API
implementations.
◦ Therefore Experience APIs can use API proxies deployed to a DMZ, while Process APIs
and System APIs, which often do not require protection via a DMZ, can use embedded
enforcement of API policies.
Figure 61. Anypoint API Manager displaying some of the APIs in the Acme Insurance
application network
• Security-related API policies
◦ IP whitelist
◦ JSON threat protection
◦ XML threat protection
◦ LDAP security manager (injection thereof)
◦ Simple security manager (injection thereof)
◦ OAuth 2.0 access token enforcement using external provider
◦ OpenAM access token enforcement
◦ PingFederate access token enforcement
• QoS-related API policies
◦ SLA-based
▪ Rate Limiting - SLA-based
▪ Throttling - SLA-based
◦ non-SLA-based
▪ Rate Limiting
▪ Throttling
Anypoint Platform also allows arbitrary custom API policies to be implemented as Mule
applications.
Figure 62. Taxonomy of out-of-the-box API policies, including Cross-Origin Resource Sharing,
HTTP Basic Authentication, LDAP security manager, OAuth 2.0 access token enforcement,
OpenAM access token enforcement, IP blacklist, JSON threat protection, Rate Limiting and Rate
Limiting - SLA-based
• Client ID enforcement
• CORS control
The CORS policy participates in interactions with API clients defined by CORS (Cross-Origin
Resource Sharing):
• Rejects HTTP requests whose Origin request header does not match the configured origin
domains
• Sets Access-Control-* HTTP response headers to match the configured cross-origins, usage
of credentials, etc.
• Responds to CORS pre-flight HTTP OPTIONS requests (containing Access-Control-Request-*
request headers) as per the policy configuration (setting Access-Control-* response
headers)
The CORS policy can be important for Experience APIs invoked from a browser.
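Where the CORS policy is applied to an Experience API, its effect on the technology interface
can be documented in the RAML definition. A minimal, hypothetical sketch of such a trait
follows; the header names come from the CORS specification, while the trait structure and
descriptions are illustrative assumptions.

```raml
#%RAML 1.0 Trait
# Hypothetical trait documenting the effect of the CORS API policy (illustrative sketch)
usage: Apply to methods of Experience APIs that are invoked from browsers
headers:
  Origin:
    description: Origin of the browser-based API client; must match a configured origin domain
    type: string
    required: false
responses:
  200:
    headers:
      Access-Control-Allow-Origin:
        description: The configured origin domain allowed to access this API
        type: string
      Access-Control-Allow-Credentials:
        description: Whether credentials may be sent with cross-origin requests
        type: string
```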
• Authentication
• IP-based access control
• Payload threat protection
Token validation
Figure 63. The interaction between Anypoint Platform, PingFederate as an Identity Provider
configured for Client Management, and a Mule runtime enforcing the PingFederate access
token enforcement API policy
The Security Manager is made available to the HTTP Basic Authentication API policy through its
own "Security Manager Injector" API policy.
• nesting levels
• string length
• number of elements
etc.
• Rate Limiting
• Throttling
Both types of policies enforce a throughput limit defined in number of API invocations per unit
of time:
• Rate Limiting rejects requests when the throughput limit has been reached
• Throttling queues requests beyond the throughput limit
Anypoint Platform provides two different ways to define the throughput limit enforced by these
QoS-related API policies:
• Non-SLA-based, where a throughput limit is defined on the API policy definition associated
with a particular API (version)
◦ Limit is enforced for that API (version) and the sum of all its API clients, ignoring the
identity of the API clients
• SLA-based, where a throughput limit is defined in an SLA tier
◦ API clients must register with the API (version) at a particular SLA tier
◦ Limit is enforced separately for each registered API client
Thus SLA-based API policies require the API client to identify itself with a client ID: see Section
5.3.18. On the other hand, the API clients of APIs without client ID-based API policies can
remain anonymous.
When an API client invokes an API that has any QoS-related API policy defined, then the HTTP
response from the API invocation contains HTTP response headers that inform the API client of
the remaining capacity as per the QoS-related API policy:
• X-RateLimit-Limit: the throughput limit, i.e., the maximum number of API invocations allowed
in the current limit enforcement time window
• X-RateLimit-Remaining: the number of API invocations still available in the current limit
enforcement time window
• X-RateLimit-Reset: remaining time in milliseconds until the end of the current limit
enforcement time window
If an API (version) has SLA tiers defined then every API client that registers for access to that
API (version) is assigned to exactly one SLA tier and is thereby promised the QoS offered by
that SLA tier.
An SLA tier
• defines one or more throughput limits, i.e., limits on the number of API invocations per time
unit
◦ E.g., 100 requests per second and simultaneously 1000 requests per hour
◦ These limits are per API client and API version
• requires either manual approval of API clients requesting usage of that SLA tier or supports
their automatic approval
◦ Typically at least SLA tiers that confer high QoS guarantees require manual approval
To enforce the throughput limits of an SLA tier, SLA-based Rate Limiting or Throttling API
policies need to be configured for that API version. The violation of the QoS defined by an SLA
tier can be monitored and reported with Anypoint Analytics and can also be the source of
alerts.
API clients sending API invocations to an API with enforced SLA tiers must identify themselves
via a client ID/client secret pair sent in the API invocation to the API.
In Anypoint Platform, an API client requesting access or having been granted access to an API
version is called "application" or "client application".
Once the registration request is approved - either automatically or manually - the API
consumer receives a client ID and client secret that must be supplied by the nominated API
client in subsequent API invocations to that API.
Figure 64. An API consumer using the API Portal for "Aggregator Quote Creation Experience
API" version v5 to request access to that API (version) for an API client (application) called
Aggregator
Figure 65. Anypoint API Manager web UI showing the Aggregator as the only API client
(application) registered for access to the "Aggregator Quote Creation Experience API" version
v5. The Aggregator is registered with the "standard" SLA tier
Client ID and secret must be supplied in the API invocation as defined by the API policy.
• as query parameters
◦ by default client_id and client_secret
• as custom request headers
• in the standard Authorization header as defined by HTTP Basic Authentication
◦ where client ID takes the role of username and client secret that of password
• Client ID enforcement
◦ Enforces presence and validity of client ID (and typically also client secret)
• Rate Limiting - SLA-based
◦ Rejects requests when the throughput limit defined in the SLA tier for the API client has
been reached
• Throttling - SLA-based
◦ Queues requests beyond the throughput limit
SLA-based Rate Limiting and Throttling require the SLA tier of the API client making the
current API invocation to be retrieved via the client ID supplied in the API invocation. These
two API policies therefore implicitly also enforce the presence and validity of the client ID and,
as a convenience, also the client secret. They therefore subsume the functionality of the Client
ID enforcement API policy.
(Diagram: the "Aggregator Quote Creation API" Experience API invokes Process APIs, which in
turn invoke the System APIs "Motor Policy Holder Search API", "Home Policy Holder Search
API", "Policy Options Retrieval API", "Motor Quote Creation New Business API" and "Motor
Quote Creation Addon Business API", backed by the Policy Admin System)
Figure 66. All APIs collaborating for the "Aggregator Integration" product
Solution
See Section 5.3.20, Section 5.3.21, Section 5.3.22 and Section 5.3.23.
• Must always be protected by SLA-based API policies that require manual approval
• SLA-based Rate Limiting or Throttling API policies must enforce the QoS of the chosen SLA
tier
• IP whitelisting to the IP address range of the API implementations of Process APIs
(assuming they are in a VPC)
• This enforces strict compliance for this critical class of APIs
Acme Insurance applies these guidelines to all System APIs in their application network.
Figure 67. API policies defined for the "Policy Options Retrieval System API"
Acme Insurance applies these guidelines to all Process APIs in their application network.
Figure 68. API policies defined for the "Policy Holder Search Process API"
For the "Aggregator Quote Creation Experience API" consumed by the Aggregator you define
the following:
Figure 69. API policies defined for the "Aggregator Quote Creation Experience API"
For the "Mobile Policy Holder Summary Experience API" and "Mobile Auto Claim Submission
Experience API" consumed by Acme Insurance’s own Customer Self-Service Mobile App you
define:
• Non-SLA-based Rate Limiting (not Throttling) of 100 requests/s for the "Mobile Policy Holder
Summary Experience API" and 10 requests/s for the "Mobile Auto Claim Submission
Experience API"
• Client ID enforcement
• OAuth 2.0 access token enforcement
• JSON threat protection
Figure 70. API policies defined for the "Mobile Policy Holder Summary Experience API"
From this the following general guidelines for API policies on Experience APIs emerge:
(Diagram: the "Aggregator Quote Creation API" Experience API, the "Policy Holder Search API",
"Policy Options Ranking API" and "Motor Quote API" Process APIs, and the System APIs backed
by the Policy Admin System, with the API policies XML threat protection, Rate Limiting -
SLA-based, Throttling and Throttling - SLA-based applied across the tiers)
Figure 71. API policies applied to APIs in all tiers collaborating for the "Create quote for
aggregators" feature
(Diagram: the Customer Self-Service Mobile App invokes the "Mobile Policy Holder Summary
API" Experience API, which invokes the "Policy Holder Summary API" Process API, which
searches via the Motor/Home Policy Holder Search, Motor/Home Policy Search and Motor/Home
Claims Search System APIs; the API policies JSON threat protection, OAuth 2.0 access token
enforcement, Client ID enforcement, Rate Limiting, Throttling and IP whitelist are applied
across the tiers)
Figure 72. API policies applied to APIs in all tiers collaborating for the "Retrieve policy holder
summary" feature
(Diagram: the Customer Self-Service Mobile App invokes the "Mobile Auto Claim Submission
API" Experience API, which invokes a Process API that in turn invokes the "Motor Claims
Submission API" System API backed by the Motor Claims System; the API policies JSON threat
protection, OAuth 2.0 access token enforcement, Client ID enforcement, Throttling, Throttling -
SLA-based and IP whitelist are applied across the tiers)
Figure 73. API policies applied to APIs in all tiers collaborating for the "Submit auto claim"
feature
These changes to the contract between API client and API implementation must be reflected in
the RAML definition of the API. In other words, applying API policies often requires the RAML
definition to be changed to reflect the applied API policies.
In the case of security-related API policies, RAML has specific support through
securitySchemes, e.g. of type OAuth 2.0 or Basic Authentication. In other cases, RAML
traits are a perfect mechanism for expressing the changes to the API specification introduced
by the application of an API policy.
The C4E should own the definition of reusable RAML fragments for all commonly used API
policies in Acme Insurance. These RAML fragments should be published to Anypoint Exchange
to encourage consumption and reuse.
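As an illustration of such a reusable fragment, a RAML trait for the Client ID enforcement API
policy might look like the following minimal sketch. The query parameter names client_id and
client_secret are the defaults described earlier; the response code and descriptions are
illustrative assumptions.

```raml
#%RAML 1.0 Trait
# Hypothetical reusable trait for the Client ID enforcement API policy
usage: Apply to every method of an API protected by the Client ID enforcement policy
queryParameters:
  client_id:
    description: Client ID issued when the API client (application) was registered for the API
    type: string
    required: true
  client_secret:
    description: Client secret issued alongside the client ID
    type: string
    required: true
responses:
  401:
    description: Client ID or client secret missing or invalid
```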
Summary
• The NFRs for the "Aggregator Integration" product and "Customer Self-Service App"
product are a combination of constraints on throughput, response time, security and
reliability
• Anypoint API Manager and API policies control APIs and invocations of APIs and can impose
NFRs on that level in the areas of:
◦ Compliance
◦ Security
◦ QoS
• API policies can be enforced directly in an API implementation that is a Mule application or
in an API proxy
• Client ID-based API policies require API clients to be registered for access to an API version
◦ Must pass client ID/secret with every API invocation
• The Acme Insurance C4E has defined guidelines for the API policies to apply to System
APIs, Process APIs and Experience APIs
◦ C4E has created reusable RAML fragments for API policies and published them to
Anypoint Exchange
Objectives
• Appreciate the importance of contract-first API design and RAML fragments
• Understand semantic versioning-based API versioning and where to expose what elements
of an API’s version
• Choose between Enterprise Data Model and Bounded Context Data Models
• Consciously design System APIs to abstract from backend systems
• Apply HTTP-based asynchronous execution of API invocations and caching to meet NFRs
• Understand idempotent HTTP methods and HTTP-native support for optimistic concurrency
1. Start by creating the API specification, ideally in the form of a RAML definition
2. Simulate interaction with the API based on the API specification
3. Gather feedback from potential future API consumers
4. Publish documentation and API-related assets, including the RAML definition of the API
5. Only then implement the API
• RAML definitions
◦ First-class support in all relevant components
• OpenAPI (OAS, Swagger) documents
◦ Import/export in API designer
◦ Import in Anypoint Exchange
• WSDL documents
◦ Import in Anypoint Exchange and Anypoint API Manager
With the support of the Acme Insurance C4E you isolate these and other RAML fragments from
the APIs identified so far:
• RAML SecurityScheme definitions for HTTP Basic Authentication and OAuth 2.0 (a sketch
follows after Figure 74)
• A RAML Library containing resourceTypes to support collections of items
• A RAML Library containing resourceTypes and traits to support asynchronous
processing of API invocations with polling
• RAML traits for the API policies recommended at Acme Insurance, amongst them:
◦ Client ID enforcement
◦ SLA-based and non-SLA-based Rate Limiting and Throttling
This makes them discoverable and reusable within the Acme Insurance application network.
Figure 74. Some RAML fragments identified by the Acme Insurance C4E and by MuleSoft and
made available in the public and Acme Insurance-private Anypoint Exchange, respectively
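Of the fragments listed above, the SecurityScheme for HTTP Basic Authentication could look
like the following minimal sketch; the descriptions and the 401 response are illustrative
assumptions.

```raml
#%RAML 1.0 SecurityScheme
# Hypothetical HTTP Basic Authentication security scheme fragment
description: API clients authenticate with HTTP Basic Authentication
type: Basic Authentication
describedBy:
  headers:
    Authorization:
      description: Basic <base64-encoded username:password>
      type: string
  responses:
    401:
      description: Authentication failed or credentials missing
```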
An API versioning approach is visible throughout the application network and should therefore
be standardized by the C4E.
• Major versions introduce backwards-incompatible changes in the structure of the API that
require API clients to adapt
• Minor versions introduce backwards-compatible changes to the API that do not require API
clients to change, unless the API client wants to take advantage of the newly introduced
changes.
• Patch versions introduce small fully backwards-compatible fixes, such as documentation
changes
If semantic versioning is followed then version 1.2.3 of an API is a perfect stand-in for version
1.1.5, and so all API clients that have previously used version 1.1.5 can be upgraded "silently"
to use version 1.2.3 instead. For this reason, often only the major version of an API is made
visible to API clients. This means that only the major version of the API should be visible in the
RAML definition, the API endpoint URL and Anypoint API Manager, while the Anypoint Exchange
entry (asset) for the API should surface the full semantic version of the API, including the
patch version. This is because the Anypoint Exchange entry of type "REST API" represents the
RAML definition itself - see Section 4.3.11.
• Encode the version in the hostname only, e.g., http://acmeins-policyholdersummary-papi-
v1.cloudhub.io/
◦ Allows future (major) versions of the API to be backed by different API implementations
(or API proxies) without having to do URL rewriting
• Encode the version in the hostname and the URL path, e.g., http://acmeins-
policyholdersummary-papi-v1.cloudhub.io/v1
◦ Redundant but allows the URL path on its own to identify the requested API version,
without URL rewriting
◦ Allows the same API implementation to expose endpoints for more than one major
version
• Only expose major API versions as v1, v2, etc. in RAML definition, API endpoint URL and
Anypoint API Manager entries
• In the API endpoint URL expose the major API version only in the URL path
◦ E.g., http://acmeins-policyholdersummary-papi.cloudhub.io/v1
◦ Requires future major versions to either be implemented in same API implementation or
to route API invocations with URL mapping rules
• Publish to Anypoint Exchange using the full semantic version
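A minimal sketch of how these conventions might surface at the top of a RAML definition; the
hostname is the one used in the example above, the title is illustrative, and {version} is
expanded by RAML from the version property.

```raml
#%RAML 1.0
title: Policy Holder Summary Process API    # illustrative title
version: v1                                 # only the major API version is exposed
baseUri: http://acmeins-policyholdersummary-papi.cloudhub.io/{version}
# The full semantic version (e.g., 1.2.3) is surfaced only on the Anypoint Exchange asset
```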
• made documentation and API-related assets discoverable in the Acme Insurance Developer
Portal and Anypoint Exchange
• designed a high-level Application Architecture that identifies API interactions
• sketched a Technology Architecture, using components from Anypoint Platform, that is
likely to support all NFRs
You will now look in more detail at the APIs (this module) and their API implementations (next
module) and address the most important design questions that arise in doing so. You will
restrict this investigation to architecturally significant design topics, i.e., you will ignore design
questions that have no implication for the effectiveness of the resulting application network.
• The JSON representation of the Policy Holder of a Motor Policy returned by the "Motor Policy
Holder Search System API"
• The XML representation of a Quote returned by the "Aggregator Quote Creation Experience
API" to the Aggregator
• The JSON representation of a Motor Quote to be created for a given Policy Holder passed to
the "Motor Quote Process API"
• The JSON representation of any kind of Policy returned by the "Policy Search Process API"
All data types that appear in an API (i.e., the interface) form the API data model of that API.
The API data model should be specified in the RAML definition of the API. Data models are
clearly visible across the application network because they form an important part of the
interface contract for each API.
The API data model is conceptually clearly separate from similar models that may be used
inside the API implementation, such as an object-oriented or functional domain model, and/or
the persistence data model used by the API implementation. Only the API data model is visible
to API clients - all other forms of models are not.
• In an Enterprise Data Model there is exactly one canonical definition of each data type,
which is reused in all APIs that require that data type, within all of Acme Insurance
◦ E.g., one definition of Policy that is used in APIs related to Motor Claims, Home Claims,
Motor Underwriting and Home Underwriting
• In a Bounded Context Data Model several Bounded Contexts are identified within Acme
Insurance by their usage of common terminology and concepts. Each Bounded Context
then has its own, distinct set of data type definitions - the Bounded Context Data Model.
The Bounded Context Data Models of separate Bounded Contexts are formally unrelated,
although they may share some names. All APIs in a Bounded Context reuse the Bounded
Context Data Model of that Bounded Context
◦ E.g., the Motor Claims Bounded Context has a distinct definition of Policy that is
unrelated to the definition of Policy in the Home Underwriting Bounded Context
• In the extreme case, every API defines its own data model. Put differently, every API is in a
separate Bounded Context with its own Bounded Context Data Model.
In general, do not assign APIs in different tiers of API-led connectivity, i.e. Experience APIs,
Process APIs and System APIs, to the same Bounded Context: it is unlikely that they use
concepts and data types identically, i.e. it is unlikely that they share a data model.
A Bounded Context Data Model should be published as RAML fragments (RAML types, possibly
in a RAML Library) in Anypoint Design Center and Anypoint Exchange, so that it can be easily
re-used from all APIs in a Bounded Context. The Acme Insurance C4E owns this activity and
the harvesting of data types from existing APIs.
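A minimal, hypothetical sketch of such a RAML Library for the Motor Claims Bounded Context;
the type names and properties are illustrative and not taken from the case study.

```raml
#%RAML 1.0 Library
# Hypothetical Bounded Context Data Model for the Motor Claims Bounded Context
usage: Reuse these types from all APIs assigned to the Motor Claims Bounded Context
types:
  PolicyHolder:
    type: object
    properties:
      policyHolderId: string
      firstName: string
      lastName: string
  Policy:
    type: object
    properties:
      policyNumber: string
      policyHolder: PolicyHolder
      startDate: date-only
  Claim:
    type: object
    properties:
      claimId: string
      policy: Policy
      status: string
      submittedAt: datetime
```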
Solution
For instance:
• Motor and home policy administration, although both implemented by the same Mainframe-
based Policy Admin System, are distinct Bounded Contexts, not only because they serve
different teams within Acme Insurance but also because they are implemented by different
data schemata and database tables in the Mainframe.
This approach to mapping between Bounded Context Data Models is called anticorruption
layer. There are other variants of mapping between Bounded Context Data Models, depending
on where the transformation occurs, but this falls into the domain of detailed design, is not
visible at the Enterprise Architecture level, and is therefore out of scope for this discussion.
API implementations implemented as Mule applications can draw on the advanced data
mapping capabilities of Anypoint Studio and the Mule runtime for implementing data
transformations of this kind.
Solution
• Partnership: Motor Claims BC and Motor Policy BC, because they are located in the same LoB
and cooperate on the "Customer Self-Service App" product
• Customer/Supplier: Home Claims BC towards the operators of the Home Claims System,
because it is outsourced and externally developed
• Conformist: Motor Quote BC towards the Aggregator, because the Aggregator determines
the interface
General guidance:
• If an Enterprise Data Model is in use then System APIs should translate between the
Enterprise Data Model and the native data model of the backend system
• If not then System APIs should expose data approximately as defined in the backend
system
◦ same semantics and naming as backend system
◦ but for only one Bounded Context - backend systems often are Big Balls of Mud that
cover many Bounded Contexts
◦ lightly sanitized
▪ e.g., using idiomatic JSON data types and naming, correcting misspellings, …
◦ expose all fields needed for the given Bounded Context, but not more
◦ making good use of REST conventions
This approach, in the absence of an Enterprise Data Model, does not provide optimal isolation
from backend systems through the System API tier on its own. On the other hand,
• it is a pragmatic approach
• and further isolation occurs in the Process API tier
HTTP has native support for asynchronous processing of the work triggered by HTTP requests.
This feature is therefore immediately available to REST APIs (but not non-REST APIs like SOAP
APIs).
Figure 75. Asynchronous processing triggered by an API invocation from an API client to an
API implementation, with polling by the API client to retrieve the final result from the API
implementation
1. The API client sends a HTTP request that will trigger asynchronous processing
2. The API implementation accepts the request and validates it
3. If the HTTP request is not valid then, as usual, a HTTP response with a HTTP 4xx client
error response code is returned
4. If HTTP request validation succeeds then the API implementation triggers asynchronous
processing (in the background) and returns a HTTP response with the HTTP 202 Accepted
response status code to the API client, with a Location response header containing the URL
of a resource to poll for progress
5. The API client then regularly sends HTTP GET requests to the polling resource
6. The API implementation returns HTTP 200 OK while the asynchronous processing is still
ongoing
7. Once the asynchronous processing has completed, it instead returns a HTTP response with
the HTTP 303 See Other redirect status code and a Location response header with the URL
of a resource that gives access to the final result of the asynchronous processing
8. The API client then sends a HTTP GET request to that last URL to retrieve the final result of
the now-completed asynchronous processing
The fact that HTTP-based asynchronous execution of API invocations is used should be
documented in the RAML definition of the respective APIs. In fact, the Acme Insurance C4E
has already published a reusable RAML library for this purpose: see Figure 74.
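A minimal, hypothetical sketch of what such a reusable library could contain; the trait and
resource type names are illustrative.

```raml
#%RAML 1.0 Library
# Hypothetical library supporting asynchronous processing of API invocations with polling
traits:
  asyncSubmission:
    responses:
      202:
        description: Request accepted; processing continues asynchronously in the background
        headers:
          Location:
            description: URL of the resource to poll for progress
            type: string
resourceTypes:
  pollingResource:
    get:
      responses:
        200:
          description: Asynchronous processing is still ongoing
        303:
          description: Processing has completed
          headers:
            Location:
              description: URL of the resource that gives access to the final result
              type: string
```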
(Sequence diagram: the initial HTTP PUT/POST/PATCH carries an X-Callback-Location:
callbackURL request header; after validation and asynchronous processing, the API
implementation delivers the result with an HTTP POST to the callbackURL carrying an
X-Correlation-ID: correlationID header, acknowledged with HTTP 200 OK)
Figure 76. Asynchronous processing triggered by an API invocation from an API client to an
API implementation, with a callback from the API implementation to the API client to deliver
the final result. The callback URL is sent as a custom HTTP request header and the correlation
ID is also exchanged in a custom HTTP header
1. The API client sends a HTTP request that will trigger asynchronous processing
• With that HTTP request it sends the URL of a resource of the API client that will receive
callbacks
• The callback URL can be sent as a URL query parameter or custom HTTP request header
2. The API implementation accepts the request and validates it
3. If the HTTP request is not valid then, as usual, a HTTP response with a HTTP 4xx client
error response code is returned
4. If HTTP request validation succeeds then the API implementation triggers asynchronous
processing (in the background) and returns a HTTP response with the HTTP 202 Accepted
response status code to the API client
• The HTTP response must also contain the correlation ID for the request, e.g., in a
custom response header
5. Once asynchronous processing is completed, the API implementation sends a HTTP POST
request to the callback URL containing the final result of the now-completed asynchronous
processing, sending the correlation ID (for instance) as a request header
6. The API client acknowledges receipt of the callback by returning a HTTP 200 OK from the
callback
The fact that HTTP-based asynchronous execution of API invocations is used should be
documented in the RAML definition of the respective APIs. The Acme Insurance C4E should
publish a reusable RAML library for this purpose.
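A minimal, hypothetical sketch of a trait for the callback variant, reusing the custom header
names shown in Figure 76; everything else is illustrative.

```raml
#%RAML 1.0 Trait
# Hypothetical trait for asynchronous processing with a callback to the API client
headers:
  X-Callback-Location:
    description: URL of the API client resource that will receive the result callback
    type: string
    required: true
responses:
  202:
    description: Request accepted; the result will be delivered by HTTP POST to the callback URL
    headers:
      X-Correlation-ID:
        description: Correlation ID that will be echoed in the callback delivering the result
        type: string
```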
• Polling is used for the "Mobile Auto Claim Submission Experience API" because callbacks to
the Customer Self-Service Mobile App are impossible due to obvious networking
restrictions.
• The "Motor Claims Submission Process API", which is invoked by the "Mobile Auto Claim
Submission Experience API", can implement asynchronous processing via a callback, which
is more efficient than polling.
Figure 77. Asynchronous execution of the "Mobile Auto Claim Submission Experience API",
with polling from the Customer Self-Service Mobile App, feeding into asynchronous execution
of the "Motor Claims Submission Process API", with callback to the "Mobile Auto Claim
Submission Experience API"
Options for keeping state in API implementations implemented as Mule applications and
executing in a Mule runtime:
Solution
Safe HTTP methods are ones that do not alter the state of the underlying resource. As a
consequence, the HTTP responses to requests using safe HTTP methods may be cached.
The HTTP standard requires the following HTTP methods on any resource to be safe:
• GET
• HEAD
• OPTIONS
Safety must be honored by REST APIs (but not by non-REST APIs like SOAP APIs): it is the
responsibility of every API implementation to implement GET, HEAD and OPTIONS methods
such that they never alter the state of the underlying resource.
HTTP natively defines rigorous caching semantics using these (and more) HTTP headers. This
feature is therefore immediately available to REST APIs (but not non-REST APIs like SOAP
APIs):
• Cache-Control
• Last-Modified
• ETag
• If-Match, If-None-Match, If-Modified-Since
• Age
Caching requires
• storage management
• the manipulation of HTTP request and response headers in accordance with the HTTP
specification
Figure 78. A cacheable RAML fragment published to the public Anypoint Exchange
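A minimal, hypothetical sketch of what such a cacheable RAML trait might express; the choice
of headers is illustrative.

```raml
#%RAML 1.0 Trait
# Hypothetical "cacheable" trait documenting HTTP caching semantics of responses
responses:
  200:
    headers:
      Cache-Control:
        description: Caching directives for API clients and intermediaries, e.g. max-age=3600
        type: string
      ETag:
        description: Resource version ID for conditional requests (If-None-Match)
        type: string
      Last-Modified:
        description: Timestamp of the last modification, for If-Modified-Since requests
        type: string
```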
Mule applications acting as API clients or API implementations may make use of a caching
scope, which may be used, for instance, in a custom API policy, but Anypoint Platform as such
contains no ready-made facilities for caching. Custom API policies for various caching
scenarios are available in the public Anypoint Exchange.
Figure 79. Custom API policies performing caching, published in the public Anypoint Exchange
Solution
Idempotent HTTP methods are ones where a HTTP request may be re-sent without causing
duplicate processing. That is, idempotent HTTP methods may be retried.
The HTTP standard requires the following HTTP methods on any resource to be idempotent.
• GET
• HEAD
• OPTIONS
• PUT
• DELETE
Of these methods, only PUT and DELETE may change the state of a resource (i.e., are not
safe). On the other hand, POST and PATCH are not idempotent (and not safe): if a HTTP
response is lost the HTTP request can, in general, not be re-sent without causing duplicate
processing.
Idempotency must be honored by REST APIs (but not by non-REST APIs like SOAP APIs): It is
the responsibility of every API implementation to implement PUT and DELETE methods such
that they never perform an action if the PUT or DELETE was already successfully processed.
How does an API implementation decide if a HTTP request was already received and must
therefore be disregarded (if it is a PUT or DELETE)?
• One approach is to treat all requests with identical "content" (HTTP request body and
relevant request headers) as identical. This is similar to the default implementation of the
idempotent-message-filter (which makes that decision based on a hash of the Mule
message body). However, this approach prohibits identical PUT or DELETE requests from
being processed at all, which may be problematic.
• A common refinement of this approach is therefore to require the API client to generate a
unique request ID and add that to the HTTP request. If the API client re-sends a request it
must use the same request ID as in the original request. It follows that the content of
requests of this kind can only be identical if the request ID is also identical, so that identical
requests can be determined by the API implementation simply by comparing request IDs.
The idempotent-message-filter can easily be configured to do that.
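A minimal, hypothetical RAML trait documenting this client-generated request ID; the header
name X-Request-ID is an illustrative assumption, not mandated by the approach described
above.

```raml
#%RAML 1.0 Trait
# Hypothetical trait requiring a client-generated unique request ID for safe retries
headers:
  X-Request-ID:
    description: |
      Unique ID generated by the API client for this logical request. Retries of the
      same request must reuse the same ID so that the API implementation can detect
      and disregard duplicate PUT or DELETE requests.
    type: string
    required: true
```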
HTTP supports optimistic concurrency control of this kind natively with the combination of the
following facilities. This feature is therefore immediately available to REST APIs (but not non-
REST APIs like SOAP APIs):
• the ETag HTTP response header to send a resource version ID in the HTTP response from
the API implementation to the API client
• the If-Match HTTP request header to send the resource version ID on which an update is
based in an HTTP PUT/POST/PATCH request from API client to API implementation
• the HTTP 412 Precondition Failed client error response code to inform the API client that the
resource version ID it sent was stale and hence the requested change was not performed
• the HTTP 428 Precondition Required client error response code to inform the API client that
the resource in question is protected against concurrent modification and hence requires
If-Match HTTP request headers, which were however missing from the HTTP request
Because usage of these headers changes the technology interface of the API, this should be
fully documented in the RAML definition of each API that uses optimistic concurrency. A RAML
fragment in the form of a trait should be used to capture this element of the contract
between API client and API. This RAML fragment is reusable and should be made available as
an Anypoint Design Center project and published to Anypoint Exchange.
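A minimal, hypothetical sketch of such a trait, paraphrasing the HTTP semantics described
above:

```raml
#%RAML 1.0 Trait
# Hypothetical trait documenting HTTP-based optimistic concurrency control
headers:
  If-Match:
    description: Resource version ID (ETag) on which the requested update is based
    type: string
    required: true
responses:
  412:
    description: The supplied resource version ID is stale; the change was not performed
  428:
    description: The resource requires an If-Match header, which was missing from the request
```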
Figure 80. Optimistic concurrency preventing concurrent modification of a REST API resource
1. Identify resources accessed by those APIs that may require protection from concurrent
modification.
2. Discuss what implementing concurrent modification protection in these APIs would mean
for their API clients.
3. Do you consider HTTP-based optimistic concurrency control a standard feature of APIs that
is helpful in many situations?
Solution
The features discussed so far neither require nor benefit from optimistic concurrency
control:
• "Create quote for aggregators" feature is not a modification but a de-novo creation of new
quotes
• "Retrieve policy holder summary" feature is a read-only operation
• "Submit auto claim" feature is not a modification but a de-novo creation of a new claim
submission
A hypothetical feature that might benefit from optimistic concurrency control is the
modification of policy holder details through the Customer Self-Service Mobile App:
• It is theoretically conceivable that updates to the details of the same policy holder occur
concurrently, through separate sessions of the Customer Self-Service Mobile App or through
the Customer Self-Service Mobile App and a backend system.
• But this is unlikely, and even if it happened there is no clear business need or value in
protecting against this kind of situation.
This discussion shows that HTTP-based optimistic concurrency control to achieve protection of
API resources from concurrent modification is a technical approach that is to be used with
caution and only when there is clear business value in doing so. Most often, the nature of the
interactions in an application network calls for embracing concurrency rather than imposing
the illusion that all modifications to resources occur sequentially along one universally
applicable time axis.
In collaboration with the C4E you have selected an enterprise-wide (application network-wide)
approach to
• API versioning
• API data model design (no Enterprise Data Model)
In addition, you have
• captured NFRs
• decided on architecturally relevant design aspects (such as caching, asynchronous
execution and concurrency handling)
• selected and configured appropriate API policies
Along the way, important Technology Architecture-aspects of the Enterprise Architecture have
been decided:
• Anypoint Platform and CloudHub have been chosen and configured to support the APIs and
their API policies
◦ For instance, Amazon Web Services VPCs have been set up and CloudHub dedicated
load-balancers configured for TLS mutual authentication
• API policies are being enforced in the API implementations themselves rather than in API
proxies
• An Identity Provider (PingFederate) has been chosen for OAuth 2.0 Client Management
Last but not least, a decentralized C4E has been established at Acme Insurance IT, which
• enables the LoB IT project teams to implement the strategic products "Aggregator
Integration" product and "Customer Self-Service App" product
• owns the harvesting and publishing of reusable assets into Acme Insurance’s Anypoint
Exchange
• helps set and spread the aforementioned application network-wide standards
All APIs are JSON REST APIs, with one exception: the "Aggregator Quote Creation Experience
API", whose technology interface is defined by the Aggregator.
Almost all of the above information is directly visible in the interface contract of APIs and has
therefore been defined in RAML fragments and RAML definitions.
The API Portal of each API augments the "raw" RAML definition with an API Notebook, an API
Console and essential textual descriptions and context-setting.
All of the above information has been published to Acme Insurance’s Anypoint Exchange and is
visible across the Acme Insurance application network.
Summary
• In designing APIs start with the RAML definitions, using the simulation features of API
designer
• Extract reusable RAML fragments and publish them to Anypoint Exchange
• A semantic versioning-based API versioning strategy was chosen that exposes only major
version numbers in all places except Anypoint Exchange
• API data models were defined by the Bounded Context to which the APIs belong
• System APIs were chosen to abstract only slightly from backend systems
• HTTP-based asynchronous execution of API invocations and caching were employed
wherever needed to meet NFRs
• Most decisions change the interface contract of APIs and were captured in RAML fragments
and RAML definitions and published to Anypoint Exchange
Objectives
• Know about auto-discovery of API implementations implemented as Mule applications
• Appreciate how Anypoint Connectors serve System APIs in particular
• Understand CloudHub
• Apply strategies that help API clients guard against failures in API invocations
• Understand the role of CQRS and the separation of commands and queries in API-led
connectivity
• Know the role of Event Sourcing
• When the API implementation starts up it automatically registers with Anypoint API
Manager as an implementation of a given API version
• and receives all API policies configured for that API version
• The API implementation should be configured to refuse API invocations until all API policies
have been applied
◦ This is called the gatekeeper feature of the Mule runtime
• The Mainframe-based Policy Admin System, which needs to be accessed via WebSphere MQ
• The WebSphere-deployed Motor Claims System, which needs to be integrated by directly
accessing its DB/2 database
• The web-based Home Claims System, which exposes SOAP APIs
Acme Insurance can use Anypoint Connectors, such as the IBM (WebSphere) MQ connector, the
Database connector and the Web Service Consumer connector, to implement its System APIs
against these backend systems.
The existence of Anypoint Connectors is one more reason why Acme Insurance elects to
implement API implementations for the Mule runtime, using Anypoint Studio.
• CloudHub and API implementations deployed to CloudHub execute on Amazon Web Services
infrastructure
• Every API implementation is assigned a DNS name that resolves to the CloudHub Load
Balancer
◦ CloudHub workers deployed to CloudHub’s shared worker cloud are exposed only via the
public CloudHub Load Balancer, which builds on top of the Amazon Web Services Elastic
Load Balancer (ELB)
◦ CloudHub workers deployed to a VPC can also use CloudHub Dedicated Load Balancers,
which currently do not utilize the ELB
• In addition, every API implementation receives two other well-known DNS names:
◦ one maps to the public IP addresses of all CloudHub workers of the API implementation
◦ the other maps to the internal IP addresses of all CloudHub workers of the API
implementation for use within a VPC
◦ CloudHub Dedicated Load Balancers allow the load balancer to be configured with client
certificates in order to perform TLS mutual authentication
◦ DNS entries are maintained by the Amazon Web Services platform service Route 53
• HTTP and HTTPS requests, i.e., API invocations, sent by the API client to the CloudHub Load
Balancer on ports 80 and 443, respectively, are forwarded by the load balancer to one of the
API implementation’s CloudHub workers, where they reach the API implementation Mule
application on ports 8081 and 8082, respectively
◦ HTTP and HTTPS requests bypassing the CloudHub Load Balancer, i.e. being sent to the
public IP addresses of the API implementation’s CloudHub workers, do not undergo this
port change
◦ HTTP and HTTPS requests to the internal IP addresses of CloudHub workers, from within
the VPC, by convention go to ports 8091 and 8092, respectively
• CloudHub workers are Amazon Web Services EC2 instances running Linux, and a Mule
runtime within a JVM
• The maximum number of CloudHub workers for a single API implementation is currently 8
• CloudHub workers also execute CloudHub system services on the OS and Mule runtime
level which are required for the platform capabilities provided by CloudHub, such as
◦ Monitoring of API implementations in terms of CPU/memory/etc. usage, number of
messages and errors, etc., which allows Anypoint Platform to provide alerts based on
these metrics
◦ Auto-restarting failed CloudHub workers (including a failed Mule runtime on an otherwise
functional EC2 instance)
◦ Load-balancing via the CloudHub Load Balancer only to healthy CloudHub workers
◦ Provisioning of new CloudHub workers to increase/reduce capacity
◦ Persistence of Object Stores, message payloads, etc. across the CloudHub workers of an
API implementation
◦ DNS of the CloudHub workers and the CloudHub Load Balancer, as described above
• The API implementation implemented as a Mule application, in collaboration with the Mule
runtime, confers the capability of invoking backend systems and APIs
Table 2. CloudHub worker sizing and illustrative mapping to EC2 instance types.

Worker Name | Worker Memory | Worker Storage   | EC2 Instance Name | EC2 Instance Memory
0.1 vCores  | 500 MB        | 8 GB             | t2.micro          | 1 GB
0.2 vCores  | 1 GB          | 8 GB             | t2.small          | 2 GB
1 vCore     | 1.5 GB        | 8 + 4 GB         | m3.medium         | 3.75 GB
2 vCores    | 3.5 GB        | 8 + 32 GB        | m3.large          | 7.5 GB
4 vCores    | 7.5 GB        | 8 + 40 + 40 GB   | m3.xlarge         | 15 GB
8 vCores    | 15 GB         | 8 + 40+ + 40+ GB | m3.2xlarge        | 30 GB
16 vCores   | 32 GB         | 8 + 40+ + 40+ GB | m4.4xlarge        | 64 GB
Figure 81. A very simple depiction of the general high-level CloudHub Technology Architecture
as it serves API clients invoking APIs exposed by API implementations implemented as Mule
applications running on CloudHub
Figure 82. Anatomy of a CloudHub worker used by the API implementation of the "Policy
Holder Search Process API" and the capabilities bestowed on the API implementation by the
CloudHub worker, also by making use of the Amazon Web Services Platform Services
Solution
The invocation of an API implementation by an API client fails if any of the intervening
components fails at the moment of the API invocation:
Expressed in graph theoretic terms, the min/max/average degree of a graph of APIs is the
min/max/average number of API implementations that depend on any given API. In this
sense, successful application networks are characterized by a high (average) degree.
However, a high degree of dependency between APIs means that a failure in the invocation of
an API affects many other API implementations and the services they offer to the application
network - which in turn affects many other API implementations and their services, and so on.
That is, if unchecked, failures in API invocations propagate transitively through an application
network.
In other words, a successful application network - one with a high degree of dependency
between its APIs - is also an application network in which the failure of any API triggers
downstream failures of many other APIs.
For this reason, making API invocations fault-tolerant is an essential aspect of successful
application networks.
The most important goal of making API invocations fault-tolerant is breaking transitive failure
propagation.
Figure 83. Visualization of an API client that itself exposes an API to upstream API clients and
which employs fault-tolerant API invocations to a downstream API
• Timeout
• Retry
• Circuit Breaker
• Fallback API invocation
• Opportunistic parallel API invocations
• Cached fallback result
• Static fallback result
(Diagram: an API client, which itself exposes an API to an upstream API client, applies
fault-tolerant API invocation strategies in order: 1. the failing API invocation of the primary
downstream API, bounded by short timeouts, 2. a fallback API invocation of a fallback
downstream API, 3. a lookup in a client-side cache, 4. retrieval of static results)
Figure 84. The most important fault-tolerant API invocation strategies and the order in which
they should be employed
When an API invocation takes longer than expected,
• the SLA of the API client is put at risk even if the API invocation ultimately succeeds
• the API invocation has a higher-than-usual probability of eventually failing anyway
For both of these reasons, timeouts for API invocations should be set carefully, and as short as
possible.
Figure 85. Short timeouts of API invocations are the most fundamental protection against
failing APIs
For instance:
• The "Aggregator Quote Creation Experience API" has an SLA (defined by the NFRs for the
"Create quote for aggregators" feature) of a median/max response time of 200 ms/500 ms
• It synchronously invokes 3 Process APIs, in turn, starting with "Policy Holder Search
Process API"
• The task performed by the "Policy Holder Search Process API" is the simplest of all 3
Process APIs and should hence be fastest to accomplish
• Hence the "Aggregator Quote Creation Experience API" should define a timeout of no more
than approx. 100 ms for the invocation of the "Policy Holder Search Process API"
• The SLA for the "Policy Holder Search Process API" must therefore guarantee/promise that
a sufficient percentage (say, 95%) of API invocations will complete within 100 ms. If "Policy
Holder Search Process API"'s SLA does not live up to this, then this API (or better, its
current API implementation) is unsuitable for implementing the "Aggregator Quote Creation
Experience API". For instance, if the percentage of API invocations to the "Policy Holder
Search Process API" that can be expected to complete within the timeout of 100 ms is
significantly below 95% (say, 80%) then, under normal conditions (without failure), the
percentage of these API invocations that will be timed out is unreasonably high (20%) and
will therefore jeopardize the efficient operation of the "Aggregator Quote Creation
Experience API". (Response times are typically log-normally distributed.)
A timed-out API invocation is equivalent to a failed API invocation, hence timing-out can only
be the first in a sequence of fault-tolerance strategies.
API clients implemented as Mule applications and executing on a Mule runtime have various
options for configuring timeouts of API invocations:
It is typically difficult for an API client to decide beyond doubt that a failure which occurred
during an API invocation was of a transient nature. This is why the default approach is often to
retry all failed API invocations.
In general, all connection issues (which materialize in Mule runtime as Java net or IO
exceptions) should be dealt with as transient failures.
In REST APIs, which are assumed to make correct use of HTTP response codes, response
codes in the 4xx range signify client errors and are therefore mostly permanent; the API
invocation that led to such a response code should hence not be retried. HTTP response codes
in the 5xx range, by contrast, should be expected to signify transient failures. A few HTTP 4xx
client error response codes are exceptions to the previous rule and should also be treated as
transient failures:
In addition, HTTP states that only idempotent HTTP methods (see Section 6.4.9) may be
retried without causing potentially unwanted duplicate processing:
• GET
• HEAD
• OPTIONS
• PUT
• DELETE
Retrying an API invocation by necessity increases the overall processing time for the API client.
It should therefore be limited to a few retries, with short delays between retries and short
timeouts for each attempted API invocation (see Figure 86).
Figure 86. Retrying API invocations a few times with short delays between retries and short
timeouts for each API invocation
API clients implemented as Mule applications and executing on a Mule runtime have the HTTP
Request Connector and Until Successful Scope at their disposal for configuring retries of API
invocations. The HTTP Request Connector has (configurable) support for interpreting certain
HTTP response status codes as failures.
A Circuit Breaker
Seen from an API client, the most important contribution of a Circuit Breaker is that, when it is
in the "open" state, it saves the API client the wasted time and effort of invoking a failing API.
Instead, thanks to the Circuit Breaker, these invocations fail immediately and the API client
can quickly move on to apply fallback strategies as discussed in the remainder of this section.
A Circuit Breaker is by definition a stateful component, i.e., it monitors and manages API
invocations over "all" API clients. The scope of "all" may vary:
• In the easiest case, all API invocations from within a single instance of a type of API client
are managed
◦ E.g., all API invocations from the API client executing in a single CloudHub worker are
managed together
◦ This means that the different instances of the API client executing in different CloudHub
workers keep distinct statistics and state for the invocations of the failing API
• Alternatively, all API invocations from all instances of a type of API client are managed
together
◦ E.g., all API invocations from all instances of the API client executing in all CloudHub
workers are managed together
◦ This means that the different instances of the API client executing in different CloudHub
workers all share the same statistics and state for the invocations of the failing API
◦ Requires remote communication between instances of the API client, e.g., via a Mule
runtime Object Store
• In the extreme case, all API invocations from all types of API clients are managed together
◦ E.g., all API invocations from all instances of any API client executing in any CloudHub
worker are managed together
◦ This means that the different instances of any API client executing in any CloudHub
workers all share the same statistics and state for the invocations of the failing API
◦ Typically requires the Circuit Breaker to be an application network-wide shared resource,
e.g., an API in its own right
API clients implemented as Mule applications and executing on a Mule runtime have open-
source implementations of the Circuit Breaker pattern at their disposal for the configuration of
API invocations.
For instance, when the API implementation of the "Policy Holder Search Process API" has
definitely failed invoking the "Motor Policy Holder Search System API", it may instead invoke a
fallback API that is sufficient, if not ideal, for its purposes. (A fallback API by definition is never
ideal for the purposes of the API client at hand - otherwise it would be its primary API.)
A fallback API
• could be an old, deprecated version of the same API that may however still be available;
e.g., an old but sufficiently compatible version of the "Motor Policy Holder Search System
API"
• could be an alternative endpoint of the same API and version; e.g., an endpoint of the
"Motor Policy Holder Search System API" from the DR site
• could be an API that does more than is required, and is therefore not as performant as the
primary API; e.g., the "Motor Policy Search System API" may provide the option to search
the Policy Admin System for Motor Policies by the name of the policy holder, similar to the
input received by the "Motor Policy Holder Search System API", returning entire Motor
Policies, from which the motor policy holders actually needed by the "Policy Holder Search
Process API" (the API client of the "Motor Policy Holder Search System API") can be
extracted
• could be an API that does less than is required, therefore forcing the API client into offering
a degraded service - which is still better than no service at all
Figure 87. After retries on the primary API have been exhausted, at least one fallback API
should be invoked, even if that API is not a full substitute for the primary API
API clients implemented as Mule applications and executing on a Mule runtime have the Until
Successful Scope and exception strategies at their disposal, which together allow for
configuring fallback actions such as the fallback API invocations.
then the invocation of the primary API and the fallback API may be performed in parallel. If
the primary API does not respond in time (i.e., a short time-out should be used) and the
fallback API has delivered a result, then the result from the fallback API invocation is used.
Overall then, little time has been lost because the two API invocations were performed in
parallel rather than serially.
This is an opportunistic, egotistical strategy that puts increased load on the application
network by essentially doubling the number of API invocations for the cases where it is used.
As such, it should be used only in exceptional cases, as indicated above.
Client-side caching is governed by the same rules as general caching in an HTTP setting - it is
just performed within the API client rather than by an API implementation or an intervening
network component.
In particular, only results from safe HTTP methods (see Section 6.4.6) should be cached:
• GET
• HEAD
• OPTIONS
and HTTP caching semantics should be honored (see Section 6.4.7). Although, in practice, it
would typically still be preferable to use an outdated/stale cached HTTP response, thereby
effectively ignoring HTTP caching semantics and control headers, than to not recover from the
failed API invocation.
Figure 88. Using an API invocation result cached in the API client as a fallback for failing API
invocations
For instance,
• if the API implementation of the "Policy Options Ranking Process API" performs client-side
caching of its invocations of the "Policy Options Retrieval System API"
• and API invocations of the "Policy Options Retrieval System API" currently fail
• then the "Policy Options Ranking Process API" API implementation may use the cached
response from a previous, matching request to the "Policy Options Retrieval System API"
Caching increases the memory footprint of the API client, e.g. the API implementation of the
"Policy Options Ranking Process API". It also adds processing overhead for populating the
cache after every successful API invocation. There is furthermore the question of the scope of
caching, similar to the state handling discussed in Section 7.2.6.
API clients implemented as Mule applications and executing on a Mule runtime have the Cache
Scope and Object Store Connector available, which both support client-side caching.
Typically this works best for APIs that return results akin to reference data, e.g.,
• countries
• states
• currencies
• products
Figure 89. Using statically configured API invocation results as fallback for failing API
invocations
For instance, when the API implementation of the "Policy Options Ranking Process API" fails to
invoke the "Policy Options Retrieval System API", it may be able to work with a list of common
policy options loaded from a configuration file. These policy options may be limited, and may
not be ideal for creating the best possible quote for the customer, but it may be better than
not creating a quote at all. Again, the principle is to prefer a degraded service to no service at
all.
API clients implemented as Mule applications and executing on a Mule runtime have many
options available for storing and loading static results. One such option is to use properties,
which, when the Mule application is deployed to CloudHub, can actually be updated via the
Anypoint Runtime Manager web UI.
Characteristics of CQRS:
CQRS is a design choice to be made independently for each API implementation, but it is
architecturally significant because it typically influences the design of the API exposed by that
API implementation.
The option to select persistence mechanisms independently for the read-side and write-side of
an API implementation may manifest itself in choosing entirely different persistence
technologies - e.g., a denormalized NoSQL database for the read-side and a fully normalized
relational database schema for the write-side. But it may equally just mean storing read-side
and write-side data in different table spaces in the same RDBMS, potentially with less
normalization and fewer database constraints on the read-side.
• The "Motor Claims Submission Process API" and "Motor Claims Submission System API"
accept commands in the form of claim submissions, which are executed asynchronously
• The "Claims Process API" provides synchronous queries against claims, including ones that
result from previous claim submissions
• Read-side and write-side persistence in this case are both hidden inside the Motor Claims
System, which may, but typically will not, use different persistence mechanisms for the two
cases
◦ But on the level of the APIs there is clear CQRS-style separation
◦ This is a typical situation in integration solutions where the imperative of reusing existing
systems results in technologically and stylistically more muddled architectures than
would be the case in greenfield application development
Figure 90. CQRS emerging naturally on the API-level through the "Motor Claims Submission
Process API" and "Motor Claims Submission System API" accepting commands and the "Claims
Process API" accepting queries, though persistence is encapsulated in the shared Motor Claims
System
Event Sourcing is very similar in spirit to database transaction logs. The important difference is
that Event Sourcing is an approach at the application layer, chosen by application components
and API implementations, rather than hidden behind the technology service offered by a
RDBMS.
Unlike CQRS, Event Sourcing by itself is invisible in the API specification of the API exposed by
an API implementation. It is therefore an implementation-level design decision.
Summary
• API implementations can target the Mule runtime or other runtimes while still being
managed by Anypoint Platform
• API implementations implemented as Mule applications can be automatically discovered by
Anypoint Platform
• Anypoint Platform has over 120 Anypoint Connectors, which are indispensable for
implementing System APIs
• CloudHub is an Amazon Web Services-based PaaS for the scalable, performant and highly-
available deployment of Mule applications
• API clients, in particular those which are in turn API implementations, must employ
strategies to guard against failures in API invocations
◦ E.g., Retry, Circuit Breaker, Fallbacks
• Some API implementations may benefit from using CQRS as a persistence strategy, which
in turn influences the design of their API
• Separation of commands and queries often arises naturally in API design, even in the
absence of true CQRS
• Event Sourcing is an implementation-level design decision of each API implementation
Objectives
• Know when to make use of elements of Event-Driven Architecture in addition to API-led
connectivity
• Understand events and message destinations
• Impose event exchange patterns in accordance with API-led connectivity
• Get to know Anypoint MQ
• Apply Event-Driven Architecture with Anypoint MQ to address NFRs of the "Customer Self-
Service App" product
One way of architecting this "short-circuit" is through an approach not unlike CQRS and event
sourcing:
1. The "Motor Claims Submission System API", after transmitting a claim submission to the
Motor Claims System, must also publish a "Motor Claim Submitted" event.
2. The "Motor Claims Search System API", in addition to retrieving claims from the Motor
Claims System, must also consume "Motor Claim Submitted" events, store them and
include them in the search results it returns to its API clients, specifically the "Claims
Process API".
This amounts to a non-API communication channel between the "Motor Claims Submission
System API" and the "Motor Claims Search System API" that follows the general architectural
principles of Event-Driven Architecture.
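A minimal sketch of steps 1 and 2, with hypothetical type names; the EventPublisher abstraction stands in for whatever messaging mechanism is actually used.

import java.time.Instant;

// The event captures the historical fact that a claim submission was passed to the
// Motor Claims System; its contract is this data type plus the destination it is published to.
record MotorClaimSubmittedEvent(String claimSubmissionId, Instant submittedAt) {}

// Hypothetical abstraction over the message broker used to publish events.
interface EventPublisher {
    void publish(String destination, Object event);
}

class MotorClaimsSubmissionSystemApiImplementation {
    private final EventPublisher publisher;

    MotorClaimsSubmissionSystemApiImplementation(EventPublisher publisher) {
        this.publisher = publisher;
    }

    // Called only after the claim submission has been transmitted to the Motor Claims System.
    void onClaimTransmitted(String claimSubmissionId) {
        publisher.publish("motor-claim-submitted",
                new MotorClaimSubmittedEvent(claimSubmissionId, Instant.now()));
    }
}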
Figure 91. To make recent claim submissions, which are not yet exposed by the Motor Claims
System, available to the "Motor Claims Search System API", the latter consumes "Motor Claim
Submitted" events published by the "Motor Claims Submission System API"
• Events are published by "Motor Claims Submission System API" and consumed by "Motor
Claims Search System API" (see Figure 91)
• Events are published by "Motor Claims Submission System API" and consumed by "Claims
Process API"
• Events are published by "Motor Claims Submission Process API" and consumed by "Motor
Claims Search System API"
• Events are published by "Motor Claims Submission Process API" and consumed by "Claims
Process API"
1. Discuss the characteristics of each of these four event exchange patterns.
2. Are there general rules akin to the API invocation rules of API-led connectivity that can
be extracted from this example?
Solution
• If "Motor Claim Submitted" captures the historical fact that a claim submission has been
(successfully) passed to the Motor Claims System then the publishing of that event should
occur as close to the Motor Claims System as possible, i.e., in the "Motor Claims Search
System API".
• If the API implementation of a System API publishes an event then it should only be
consumed by the API implementation of a System API or Process API: see Section 8.2.5.
• The publishing and consumption of events in a System API adds responsibilities to that
System API that are unrelated to backend system connectivity (thereby violating the Single
Responsibility Principle). This "muddling" of responsibilities is trivial in the case of event
publishing but significant in the case of event consumption and storage. System APIs should
therefore not consume and store events in addition to performing "normal" backend
connectivity.
• An architecturally clean solution therefore mandates a new System API that consumes and
stores "Motor Claim Submitted" events - the "Submitted Motor Claims Search System API" -
and requires the existing "Claims Process API" to coordinate between the two types of
System APIs that search against motor claims ("Motor Claims Search System API" and
"Submitted Motor Claims Search System API"). This is captured in Figure 92, which hence
improves on and replaces Figure 91.
◦ This is another example of the claim made in Section 2.2.13 that application networks
"bend but do not break": the consistency requirement for the "Customer Self-Service
App" product was added to an existing application network by changig a System API
("Motor Claims Submission System API"), adding a System API ("Submitted Motor
Claims Search System API") and changing the API implementation but not the API
specification of a Process API ("Claims Process API").
Figure 92. Architecturally clean separation of concerns between "Motor Claims Search System
API" for accessing the Motor Claims System and the new "Submitted Motor Claims Search
System API" for consuming "Motor Claim Submitted" events published by the "Motor Claims
Submission System API". The event store used by "Submitted Motor Claims Search System
API" to persist and search "Motor Claim Submitted" events is not shown
Solution
• Broker: Exchanging events requires a message broker, such as Anypoint MQ, as the owner
of destinations, whereas API invocations, at a minimum, only require API client and API
implementation.
• Contract: The contract for an API is its API specification, typically a RAML definition. The
contract for an event is the combination of destination and event (data) type and is not
typically captured formally.
API implementations typically have well-defined static dependencies on other APIs and/or
backend systems (see Figure 38 for an example). While similar relationships may materialize
in Event-Driven Architecture at runtime, there are no static dependencies between the
application components exchanging events. Instead, these application components only
depend on the exchanged event types, the destinations and the message broker hosting those
destinations. Furthermore, event consumers may change dynamically at any time, thereby
dynamically reconfiguring the relationship graph of the application components in an Event-
Driven Architecture, without the event producers becoming aware of that change.
API-led connectivity and in particular application networks are defined by the API-centric
assets published for self-service consumption. The equivalent for Event-Driven Architecture
would revolve around destination and event types.
Enforcing NFRs by applying API policies in Anypoint API Manager on top of existing API
implementations has no equivalent in Event-Driven Architecture on Anypoint Platform.
The communication patterns of API-led connectivity mandate that Experience APIs must only
invoke Process APIs, Process APIs must only invoke System APIs or other Process APIs, and
System APIs must only communicate with backend systems. This constraint brings order and
predictability to the communication patterns in an application network.
When Event-Driven Architecture is applied in the context of API-led connectivity then the
application components exchanging events are predominantly API implementations.
Event-Driven Architecture by itself is agnostic about the three tiers of API-led connectivity and
hence does not restrict event exchanges between API implementations in different tiers. But
breaking the communication patterns of API-led connectivity through arbitrary, unrestricted
event exchanges risks destroying the order and structure created in an application network by
the application of API-led connectivity.
Importantly, API-led connectivity is an API-first approach, which does not rule out the
exchange of events, but views it as an exceptional addition to API invocations as the dominant
form of communication between application components. It is therefore advisable to require
API implementations that exchange events to follow communication patterns in accordance
with API-led connectivity (condensed into a short sketch after the following list):
• Any API implementation that publishes events should define its own destinations (queues,
message exchanges) to send these events to. Often, there will be one destination per event
type published by that API implementation.
• In this way destinations belong logically to the same API-led connectivity tier as the API
implementation publishing events to them.
◦ I.e., a System API publishes events to destinations that logically belong to the System
API tier and can hence be described as "system events"
◦ I.e., a Process API publishes events to destinations that logically belong to the Process
API tier and can hence be described as "process events"
◦ I.e., an Experience API publishes events to destinations that logically belong to the
Experience API tier and can hence be described as "experience events"
• Any API implementation that consumes events must not do so from a destination that
belongs to a higher tier than the consuming API implementation itself. In other words,
events must not flow downwards across the tiers of API-led connectivity:
◦ Events published by Experience APIs to their destinations ("experience events") must
not be consumed from those destinations by Process APIs or System APIs.
◦ Events published by Process APIs to their destinations ("process events") must not be
consumed from those destinations by System APIs.
◦ Put differently: Events may only be consumed within the same tier or in higher tiers
relative to the API implementation that publishes the events.
◦ The logic for this rule is the same as for the communication patterns underlying API-led
connectivity: the rate of change of Experience APIs (which are relatively volatile) is
higher than the rate of change of Process APIs which is higher than the rate of change of
System APIs (which are comparatively stable). And a slow-changing component must
never depend on a fast-changing component.
• In addition, in analogy with API-led connectivity, it should be prohibited that Experience
APIs directly consume events published by System APIs, thereby bypassing Process APIs.
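These constraints can be condensed into a small compatibility check; the enum and method below are purely illustrative and not an Anypoint Platform feature.

// Illustrative formalization of the rules above: events must not flow downwards across
// the tiers, and Experience APIs must not consume system events directly, bypassing
// Process APIs.
enum Tier { SYSTEM, PROCESS, EXPERIENCE }

final class EventFlowRules {
    private EventFlowRules() {}

    static boolean mayConsume(Tier consumerTier, Tier publisherTier) {
        boolean flowsDownwards = consumerTier.ordinal() < publisherTier.ordinal();
        boolean bypassesProcessTier =
                publisherTier == Tier.SYSTEM && consumerTier == Tier.EXPERIENCE;
        return !flowsDownwards && !bypassesProcessTier;
    }
}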
Figure 93. Flow of events and API invocations in API-led connectivity augmented with
elements from Event-Driven Architecture, with API invocations between Process APIs omitted
In Anypoint MQ, messages can be sent to queues or message exchanges and consumed only
from queues. Message exchanges must therefore be (statically) configured to distribute the
messages sent to them to one or more queues in order for those messages to be consumable.
Figure 94. "Motor Claims Submission System API" produces "Motor Claim Submitted" events
by publishing them to an Anypoint MQ message exchange, from where they are consumed by
"Submitted Motor Claims Search System API" and stored in an event store (an external
database or potentially a Mule runtime Object Store), such that when a search request arrives
at "Submitted Motor Claims Search System API" it can respond by searching that event store
for matching "Motor Claim Submitted" events
Summary
• Some NFRs are best realized by following Event-Driven Architecture paradigms in addition
to API-led connectivity
• Events describe historical facts and are exchanged asynchronously between application
components via destinations such as message exchanges
• Event exchange patterns should follow the rules established by API-led connectivity
• Anypoint MQ is a MuleSoft-hosted multi-tenant cloud-native messaging service that can be
used to implement Event-Driven Architecture
• The consistency requirement of the "Customer Self-Service App" product can be realized by
introducing a new System API that consumes events published by the "Motor Claims
Submission System API"
Objectives
• Locate API-related activities on a development lifecycle
• Interpret DevOps using Anypoint Platform tools and features
• Design automated tests from the viewpoint of API-led connectivity and the application
network
• Understand the factors involved in scaling API performance
• Know how to deprecate and delete an API version in Anypoint Platform
• Reflect on single points of failure
The API-centric development lifecycle
• starts with API-centric activities that lead to the creation of an API specification
• then proceeds to building the API implementation
• before transitioning both the API as well as the API implementation into production
Each of these three phases is supported by Anypoint Platform components, as shown in Figure
95.
Figure 95. The API-centric development lifecycle and Anypoint Platform components
supporting it
Acme Insurance, with strong guidance from the C4E, standardizes on the following DevOps
principles:
• All RAML definitions and RAML fragments must live in an artifact repository
◦ Acme Insurance chooses Anypoint Exchange which is a Maven-compatible artifact
repository
• All source code for API implementations must live in a source code repository
◦ Acme Insurance chooses Git and GitHub, with every API implementation having its own
GitHub repo
1. Developers of an API implementation implement on short-lived feature branches off the Git
develop branch of that API implementation’s repo
a. This is the GitFlow branching model
2. Developers implement the Mule application and all types of automated tests (unit,
integration, end-to-end, performance) and make them pass
a. Acme Insurance chooses a combination of Anypoint Studio, JUnit, MUnit, SOAPUI and
JMeter
b. For build automation Acme Insurance chooses Maven and suitable plugins such as the
Mule Maven plugin, the MUnit Maven plugin and those for SOAPUI and JMeter
3. Once done, developers submit GitHub pull requests from their feature branch into the
develop branch
4. A developer from the same team performs a complete code review of the pull request,
confirms that all tests pass, and, if satisfied, merges the pull request into the develop
branch of the API implementation’s GitHub repo
5. This triggers the CI pipeline:
a. Acme Insurance chooses Jenkins, delegating to Maven builds to implement CI/CD
b. The API implementation Mule application is compiled, packaged and all unit and
integration tests are run
c. The Mule application is deployed to an artifact repository
i. Acme Insurance chooses a private Nexus installation
6. When sufficient features have accumulated for the API implementation, the release
manager for that API implementation "cuts a release" by tagging, creating a release branch
and ultimately merging into the master branch of the API implementation’s repo
7. This triggers the CI/CD pipeline in Jenkins:
a. The CI pipeline is executed, leading to deployment of the API implementation into an
artifact repository
b. Automatically, and/or through a manual trigger, the CD pipeline is executed:
i. A well-defined version of the Mule application from the artifact repository is deployed
into a staging environment
ii. End-to-end tests and performance tests are run over HTTP/S against the API exposed
by the API implementation
iii. Upon success the Mule application from the artifact repository is deployed into the
production environment
iv. The "deployment verification sub-set" of the functional end-to-end tests is run to
verify the success of the deployment
A. Failure leads to immediate rollback via the execution of the CD pipeline with the
last good version of the API implementation
Anypoint Platform has no direct support for "canary deployments", i.e., the practice of initially
only directing a small portion of production traffic to the newly deployed API implementation.
The above discussion assumes that a previous version of the API implementation in question
has already been developed, and that, therefore, the Anypoint API Manager and API policy
configuration for the API exposed by the API implementation is already in place. Creation of
this configuration can be automated with the Anypoint Platform APIs.
Figure 96. The GitFlow branching model showing the master and develop branch as well as
release and feature branches. Source: Atlassian
Due to the interconnected nature of the application network, resilience tests become
particularly important in establishing confidence in the fail-safety of the application network -
see Section 9.2.4.
Unsurprisingly, APIs and API specifications take center stage when testing application
networks.
• Integration tests
◦ Do not require deployment into any special environment, such as a staging environment
• Experience APIs such as the "Aggregator Quote Creation Experience API" invoke Process
APIs such as the "Policy Holder Search Process API"
• Process APIs such as the "Policy Holder Summary Process API" invoke other Process APIs
such as the "Policy Holder Search Process API" and/or System APIs such as the "Motor
Policy Holder Search System API"
• System APIs such as the "Motor Policy Holder Search System API" interact with backend
systems such as the Policy Admin System over whatever protocol the backend system
supports (MQ in the case of the Policy Admin System)
Unit testing such complex API implementations can be daunting due to the need for dealing
with all these dependencies of an API implementation.
With MUnit, Anypoint Platform provides a dedicated unit testing tool that supports mocking
these dependencies, so that an API implementation can be unit-tested in isolation from the
APIs and backend systems it depends on.
Resilience testing is an important practice in the move to API-led connectivity and application
networks.
Acme Insurance plans to automate resilience testing with a dedicated tool that deliberately
disrupts parts of the application network. While this resilience testing tool runs, the "normal"
automated end-to-end tests are executed.
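A minimal sketch of such an automated end-to-end test, using JUnit and the JDK HTTP client; the staging URL is a placeholder, and a real test suite would also assert on the response body against the API specification.

// Minimal end-to-end test sketch: invoke the API over HTTP/S in the staging environment
// and assert on the response. Suitable as part of a "deployment verification sub-set".
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PolicyHolderSearchEndToEndTest {

    // Placeholder URL; the real endpoint would come from the staging environment configuration.
    private static final String STAGING_URL =
            "https://staging.example.org/policy-holder-search/v1/policy-holders?name=Smith";

    @Test
    void returnsMatchingPolicyHolders() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(STAGING_URL)).GET().build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());   // deployment verification: API is reachable
    }
}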
Solution
◦ Invoke for policy holder with only home and no motor policies
◦ Invoke at 500, 1000 and 1500 requs/s
◦ Invoke with valid and invalid client token and without client token
• "Motor Claims Submission Process API":
◦ …
◦ Invoke polling endpoint once per original request, 1000 times per second, and not at all
• Vertical scaling, i.e., scaling the performance of each node on which a Mule runtime and a
deployed API implementation or API proxy executes
◦ In CloudHub this is supported through different worker sizes (see Section 7.1.4)
• Horizontal scaling, i.e., scaling the number of nodes on which Mule runtimes and deployed
API implementations or API proxies execute
◦ In CloudHub and Anypoint Platform for Pivotal Cloud Foundry this is supported through
scale-out and load balancing, currently with a limit of 8 workers per API implementation
If separate API proxies are deployed in addition to API implementations then the two types of
nodes can be scaled independently. In most real-world cases the performance-limiting node is
the API implementation and not the API proxy, hence there typically need to be more/larger
instances of the API implementation than the corresponding API proxy.
It is therefore essential that the team responsible for an API understands the projected
performance needs of all of its API’s API clients and is prepared to scale their API’s
performance accordingly.
The performance of any individual API in the Acme Insurance application network is only
important insofar as it enables and supports Acme Insurance's strategic business objectives
(see Figure 3).
It is therefore not efficient if individual APIs provide performance that is unbalanced with
respect to the performance of other APIs in the application network. For instance, stellar
response time of the "Mobile Auto Claim Submission Experience API" used in the "Submit auto
claim" feature is irrelevant if the "Retrieve policy holder summary" feature is so slow that the
user is put off from ever executing the "Submit auto claim" feature.
What this means is that there is a systemic perspective on API performance, i.e., a viewpoint
that calls for an application network-wide analysis of API performance. This systemic viewpoint
must consider
An API client is coupled only to the API itself: it sends requests to the API endpoint and
expects to receive responses in accordance with the contract and expected QoS of the API.
This process requires an API implementation of that API to be available and to live up to the
expected QoS standards. But since the API client is not aware of the API implementation itself
- it is only aware of the API it exposes at a certain endpoint - the API implementation can be
changed, updated and replaced without alerting the API client.
On the other hand, any change to the API, its contract or promised QoS, that is not entirely
backwards-compatible needs to be communicated to all API clients of that API. This is done
through the introduction of a new version of the API - and the subsequent phased ending of
the life of the previous API version.
Figure 97. Option to deprecate or delete a version of the "Aggregator Quote Creation
Experience API" in Anypoint API Manager
Figure 98. Deprecated versions of "Aggregator Quote Creation Experience API" in Anypoint API
Manager
Figure 99. Deprecated versions of "Aggregator Quote Creation Experience API" in Acme
Insurance's Developer Portal
Figure 100. The API Portal of a deprecated version of "Aggregator Quote Creation Experience
API" does not allow the API consumer to request API access
Solution
• Potential points of failure are everywhere: every node and system involved in processing
API invocations is a potential point of failure.
• For instance, failure of the Anypoint API Manager would mean that
◦ Already-applied API policies continue being in force
◦ New Mule runtimes that start up (and are configured with the gatekeeper feature) will
not become functional until they can download API policies from the Anypoint API
Manager
◦ Continued unavailability of the Anypoint API Manager would lead to overflow of the
buffers in the Mule runtime that hold undelivered API analytics events
• Single points of failure are much rarer
◦ At first sight it seems as if there were no single points of failure
◦ But every API implementation that is deployed to only one CloudHub worker constitutes a
single point of failure (although CloudHub is typically configured to auto-restart failed
workers and hence the duration of failure is relatively short)
◦ There is little information about the Home Claims System and it may therefore
potentially contain single points of failure.
◦ Every deployment of an API implementation, in its entirety, constitutes a single point of
failure: If the deployment of an API implementation technically succeeds but deploys a
deficient API implementation, then API invocations to that API will fail. The API then
becomes a failed single point of failure for all API clients of that API (although they
may have a fallback API to invoke). This highlights the importance of reliable and fully
automated DevOps and testing processes, including a "deployment verification test
suite".
Summary
• API definition, implementation and management can be organized along an API
development lifecycle
• DevOps on Anypoint Platform builds on and supports well-known tools like Jenkins and
Maven
• API-centric automated testing augments standard testing approaches with an emphasis on
end-to-end tests and resilience tests
• Scaling API performance must match the API clients' needs and requires the C4E’s
application network-wide perspective
• Anypoint Platform supports gracefully decommissioning API versions using deprecation
• Anypoint Platform has no inherent single points of failure but every deployment of an API
implementation can become one
Objectives
• Understand the origin of data used in monitoring, analysis and alerting on Anypoint
Platform
• Know the metrics collected by Anypoint Platform on the level of API invocations
• Know the grouping of API metrics available in Anypoint Analytics
• Know available options for performing API analytics within and outside of Anypoint Platform
• Define alerts for key metrics of API invocations for all tiers of API-led connectivity
• Understand how metrics and alerts for API implementations augment those for API
invocations
• Recognize operations teams as an important stakeholder in API-related assets and organize
documentation accordingly
Figure 101. Different types of data used for monitoring, analytics and alerting flow from Mule
runtimes to Anypoint Platform components like Anypoint Runtime Manager, Anypoint API
Manager and Anypoint Analytics and/or to external systems. Not all options are available in all
Anypoint Platform deployment scenarios
• Properties of the API invocation itself such as resource path, HTTP method, etc.
Note that only for REST APIs are HTTP response status codes indicative of success, failure and
reason for that failure.
Figure 102. Number of API invocations (requests) over time for a given API and all its API
clients, grouped by HTTP status code class
Figure 103. Mean response time (average latency) of API invocations over time for a given API
and all its API clients and all HTTP status codes
Figure 104. Number of API invocations (requests) over time to the "Mobile Auto Claim
Submission Experience API", grouped by each of its top 5 API clients
Figure 105. Number of API invocations (requests) over time to the "Policy Holder Search
Process API", grouped by each of its top 5 API clients
Figure 106. Overview of API invocations from the Customer Self-Service Mobile App to all APIs
it invokes - which are, by definition, Experience APIs
Figure 107. Number of API invocations from all API clients to all Experience APIs, grouped by
geography
Figure 108. Number of API invocations to all Experience APIs, grouped by API clients
Figure 109. Custom chart showing average (mean) response time per API invocation in
milliseconds for the top 5 slowest APIs, for the last 90 days
Figure 110. Custom chart showing number of policy violations, grouped by API policy and API
client
For each of these metrics Anypoint Platform typically triggers an alert when the metric
crosses a configured threshold for a configured number of consecutive time periods.
Figure 111. Some of the alerts based on API invocations in the Acme Insurance application
network
Applying this to "Aggregator Quote Creation Experience API", referring to the NFRs and API
policies for this API.
• "SLA tier exhausted for "Aggregator Quote Creation Experience API"": for violation of SLA-
based Rate Limiting, severity Info, more than 60 violations for at least 3 consecutive 10-
minute periods
◦ Alerts when approx. 10% of 1-second intervals are above SLA tier-defined rate limit
• "TLS mutual auth circumvented for "Aggregator Quote Creation Experience API"": for
violation of IP whitelist, severity Critical, more than 1 violation for at least 3 consecutive 1-
minute periods
• "XML attack on "Aggregator Quote Creation Experience API"": for violation of XML threat
protection, severity Warning, more than 30000 violations for at least 3 consecutive 10-
minute periods
◦ Alerts when approx. 5% of requests (5% of 1000*60*10 = 30000) are identified as XML
threats
• "Response time QoS guarantee violated by "Aggregator Quote Creation Experience API"":
severity Warning, more than 6000 requests whose response time exceeds 400 ms, for at
least 3 consecutive 10-minute periods
◦ Alerts when approx. 1% of API invocations (1% of 1000*60*10 = 6000) take longer
than 400 ms
◦ Note that exact QoS guarantee cannot be expressed in alert: median = 200 ms,
maximum = 500 ms
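The request-count-based thresholds above (and in the alert definitions for the other APIs below) all follow the same arithmetic, namely expected request volume in the alert window times the tolerated fraction, as the following illustrative helper makes explicit.

// Illustrative helper for the request-count based alert thresholds, e.g.
// violationThreshold(1000, 10, 0.01) = 6000 and violationThreshold(1000, 10, 0.05) = 30000.
final class AlertThresholds {
    private AlertThresholds() {}

    static long violationThreshold(long requestsPerSecond, long windowMinutes, double toleratedFraction) {
        return Math.round(requestsPerSecond * windowMinutes * 60 * toleratedFraction);
    }
}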
Figure 112. Alerts defined for the "Aggregator Quote Creation Experience API"
Applying this to "Policy Options Retrieval System API", referring to the NFRs and API policies
applicable to this API.
• "SLA tier exhausted for "Policy Options Retrieval System API"": for violation of SLA-based
Throttling, severity Info, more than 60 violations for at least 3 consecutive 10-minute
periods
◦ Alerts when approx. 10% of 1-second intervals are above SLA tier-defined rate limit
◦ Also alerts on invalid client ID/secret supplied
• "Client not in Process API subnet for "Policy Options Retrieval System API"": for violation of
IP whitelist, severity Critical, more than 1 violation for at least 3 consecutive 1-minute
periods
Figure 113. Alerts defined for the "Policy Options Retrieval System API"
Applying this to the "Policy Holder Search Process API", referring to the NFRs and API policies
applicable to this API.
• "Throughput QoS guarantee exhausted for "Policy Holder Search Process API"": for violation
of Throttling API policy, severity Info, more than 60 violations for at least 3 consecutive 10-
minute periods
◦ Alerts when approx. 10% of 1-second intervals are above rate limit defined in the API
policy
• "Client not in Experience API or Process API subnet for "Policy Holder Search Process API"":
for violation of IP whitelist, severity Critical, more than 1 violation for at least 3 consecutive
1-minute periods
• "Response time QoS guarantee violated by "Policy Holder Search Process API"": for
violation of non-SLA-based QoS guarantee of 1100 requs/s, severity Warning, more than
6600 requests whose response time exceeds 100 ms for at least 3 consecutive 10-minute
periods
◦ Alerts when approx. 1% of API invocations (1% of 1100*60*10 = 6600) take longer
than 100 ms (twice the target median of 50 ms)
◦ Note that exact QoS guarantee cannot be expressed in alert: median = 50 ms,
maximum = 150 ms
Figure 114. Alerts defined for the "Policy Holder Search Process API"
Figure 115. Alerts defined for all API implementations executing on Mule runtimes, such as
CloudHub workers, complement alerts on the level of API invocations
• Dashboards and alerts in Anypoint Runtime Manager, Anypoint API Manager and Anypoint
Analytics
• Custom-written documentation:
◦ Runbooks, which are written for the on-call operations teams and must succinctly give
guidance on how to address alerts
◦ On-call registers, which identify the current on-call operations teams and are for anyone
who needs to contact them about an issue with "their" API, e.g., the operations team of
an API client of their API
• The Anypoint Exchange entry for a particular version of an API specification links to the
matching
◦ API Portal
◦ Anypoint API Manager administration screens in all environments
◦ Anypoint Exchange entries for all known implementations of that API
• The Anypoint Exchange entry for a particular version of an API implementation (e.g., in the
form of a Mule application)
◦ links to the matching
▪ Anypoint Exchange entry for the API it implements
▪ Anypoint Exchange entries for all APIs it depends on (calls)
▪ Anypoint Runtime Manager dashboard for its deployments in all environments, e.g.,
to CloudHub
▪ GitHub repository
▪ CI/CD build pipelines, i.e., Jenkins jobs
◦ contains:
▪ Runbook, which details resolution guidelines for all Anypoint API Manager and Anypoint
Runtime Manager alerts for this API implementation and the API it implements
▪ On-call register
▪ Developer onboarding documentation for developers joining the API implementation
development team
Figure 116. The Anypoint Exchange entry for a particular version of an API specification links
to the matching API Portal, administration screen in Anypoint API Manager and to the Anypoint
Exchange entries for all known implementations of that API
Figure 117. The API Portal for the API linked-to from its Anypoint Exchange entry
Figure 118. The Anypoint API Manager administration screen for the API linked-to from its
Anypoint Exchange entry, showing summary API analytics
Figure 119. The Anypoint Exchange entry for the API implementation Mule application linked-
to from the Anypoint Exchange entry of the API it exposes, linking to the Anypoint Exchange
entries for that API and all APIs it depends on (calls) plus the Anypoint Runtime Manager
dashboard for its CloudHub deployment. Also contains pages for the operations runbook, the
on-call register and developer onboarding documentation, and locates source code and CI/CD
build pipelines
Figure 120. The Anypoint Runtime Manager dashboard for the CloudHub deployment of the
API implementation linked-to from its Anypoint Exchange entry
Summary
• Data used in monitoring, analysis and alerting flows from Mule runtimes to external
monitoring/analytics systems and/or Anypoint Platform, from where it is available via APIs
for external reporting
• Anypoint Platform collects numerous metrics for API invocations, such as response time,
payload size, client location, etc.
• For analysis in Anypoint Analytics metrics can be grouped by API, API client or any of the
other metrics
• Anypoint Platform supports analyses targeted specifically at API consumers and their API
clients
• In addition to interactive analyses, Anypoint Analytics supports custom charts and reports
• All analytics data can be downloaded in CSV files and/or retrieved through Anypoint
Platform APIs
• Alerts can be defined based on these API invocation metrics: request count and time,
response status code and number of violations of an API policy
• Metrics for API implementations and alerts based on these metrics must be defined in
Anypoint Runtime Manager in addition to API invocation alerts
• Operations teams are an important stakeholder in API-related assets
• Structure and link Anypoint Exchange entries for APIs and API implementations, API
Portals, Anypoint API Manager administration screens and Anypoint Runtime Manager
dashboards to support operations teams
Objectives
• Review technology delivery capabilities
• Review OBD
• Review the course objectives
• Know where to go from here
• Be aware of the MuleSoft Certification program
• Take the class survey
Reviewing OBD
Review Figure 5 and briefly place all topics discussed in the course along one of the
dimensions of OBD.
Class survey
• You should have received an email with a link to the class survey
◦ Your instructor can also provide the direct link
▪ http://training.mulesoft.com/survey/<surveyID>.html
◦ Or you can go to a general page and select your class
▪ http://training.mulesoft.com/survey/survey.html
• Please fill the survey out now!
◦ We want your feedback!
Following the focus of this course, this architecture documentation is mostly on the Enterprise
Architecture level, with elements of Solution Architecture, and is in line with the dimensions of
OBD and the approaches of API-led connectivity and application networks.
The information and views collected here have all been produced for Acme Insurance during
the course: no new information or views are introduced here.
APIs
1. Anypoint Platform deployment option: Figure 27, Figure 28, Figure 29, Figure 30, Figure 31
2. Anypoint Platform organizational setup: Section 3.3
3. Common API invocation dashboards and reports: Figure 110
4. Common application component alerts: Figure 115
5. DevOps guidelines: Section 9.1.2
6. Resilience testing guidelines: Section 9.2.4
7. API-related guidelines:
a. API policy enforcement guidelines: Section 5.3.5
b. API policy guidelines: Section 5.3.20, Section 5.3.21, Section 5.3.22
c. API versioning guidelines: Section 6.2.4
d. Alerting guidelines: Section 10.4.3, Section 10.4.4, Section 10.4.2
APIs
[Notation legend: relationship types (assigned to, serves, accesses, influences, triggers,
transfers to (flow), associated with, and/or junctions) and element types (business, application
and technology services, interfaces, events, data objects, artifacts, products, groupings and
locations) used in the architecture diagrams]
Glossary
API
• Application Programming Interface
• A kind of application interface, i.e., a point of access to an application service
• to programmatic clients, i.e., API clients are typically application components
• using HTTP-based protocols, hence restricts the technology interfaces that may realize
this application interface to be HTTP-based
• typically with a well-defined business purpose, hence the application service to which
this application interface provides access typically realizes a business service
• See Figure 128
• Remarks:
◦ The prototypical API is a REST API using JSON-encoded data
◦ Non-programmatic interfaces, e.g., web UIs, are not APIs
▪ HTTP interfaces using HTML microdata are a corner case, as they are usable for
both human and programmatic clients
◦ Non-HTTP-based programmatic interfaces are not APIs
▪ E.g., Java RMI, CORBA/IIOP, raw TCP/IP interfaces not using HTTP
▪ Note that WebSocket interfaces are not APIs by this definition, and are not
currently supported by Anypoint Platform
◦ HTTP-based programmatic interfaces are APIs even if they don’t use REST or JSON
▪ E.g., REST APIs using XML-encoded data, JSON RPC, gRPC, SOAP/HTTP, XML/HTTP,
serialized Java objects over HTTP POST, …
▪ Note that interfaces using SSE (HTML5 Server-Sent Events) are APIs by this
definition, but are not currently supported by Anypoint Platform
◦ Interfaces using HTTP/2 are APIs. Also note that HTTP/2 adheres to HTTP/1.x
semantics
▪ E.g., gRPC
▪ But HTTP/2-based APIs are not currently supported by Anypoint Platform
• For instance:
◦ Auto policy rating API
▪ Is a programmatic application interface
▪ Is realized by these HTTP-based technology interfaces
▪ Auto policy rating JSON/REST programmatic interface
Figure 128. An API is primarily a programmatic application interface but the concept actually
combines aspects of Business Architecture, Application Architecture and Technology
Architecture
API client
• An application component
• that accesses a service
• by invoking an API of that service - by definition of the term API over HTTP
• See Figure 128
API consumer
• A business role, which is often assigned to an individual
• that develops API clients, i.e., performs the activities necessary for enabling an API
client to invoke APIs
API definition
• Synonym for API specification, with API specification being the preferred term
API implementation
• An application component
• that implements the functionality
• exposed by the service
• which is made accessible via one or more APIs - by definition to API clients
• See Figure 128
API interface
• Synonym for API, with API being the preferred term
• Sometimes used in contexts where the simplified notion of API is the dominant one and
the interface-aspect of API needs to be addressed in contrast to the implementation-
aspect
API-led connectivity
• A style of integration architecture
• prioritizing APIs over other types of programmatic interfaces
• where each API is assigned to one of three tiers: System APIs, Process APIs and
Experience APIs
Figure 129. An example of the collaboration of APIs in the three tiers of API-led connectivity.
Note the use of the simplified notion of API in this diagram, as lower-level APIs are shown
serving higher-level APIs
API policy
• Defines a typically non-functional requirement
• that can be applied to an API (version)
• by injection into the API invocation between an API client and the API endpoint
• without changing the API implementation listening on that API endpoint
• Consists of API policy template (code) and API policy definition (data)
• For instance:
◦ Rate Limiting to 100 requs/s
◦ HTTP Basic Authentication enforcement using a given Identity Provider
API endpoint
• The URL at which a specific API implementation listens for requests
API provider
• A business role
• that develops, publishes and operates API implementations and all related assets
API proxy
• A dedicated node that enforces API policies
• by acting as an HTTP proxy between API client and the API implementation at a specific
API endpoint.
• API proxies need to be accessed explicitly by the API client in place of the "normal" API
implementation, via the same API.
API specification
• A formal, machine-readable definition of the technology interface of an API
• Sufficiently accurate for developing API clients and API implementations for that API
• For instance:
◦ RAML definition
◦ WSDL document
◦ OAS/Swagger specification
Application (app)
• Used for API clients that are registered with Anypoint Platform as clients to at least one
API managed by Anypoint Platform
• In this context synonym for API client, with API client being the preferred term
Application interface
• Point of access to an Application Service
• exposing that Service to clients which may be humans or Application Components
• For instance:
◦ Auto policy rating programmatic interface
▪ Provides access to this Application Service: Auto policy rating
◦ Auto claim notification self-service UI
▪ Provides access to this Application Service: Auto claim notification
◦ Bank reconciliation batch interface
▪ Provides access to this Application Service: Bank reconciliation
Figure 130. Business actor and application component accessing application service via
different application interfaces
Application network
• The state of an Enterprise Architecture
• emerging from the application of API-led connectivity
• that fosters governance, discoverability, consumability and reuse of the involved APIs
and related assets
Application service
• Exposes application functionality
• such as that performed by an Application Component
• through one or more application interfaces
• May (should) completely realize or at least serve a Business Service
• May serve other (more coarse-grained) Application Services
• For instance:
◦ Auto policy rating
▪ Realizes this Business Service: Auto policy rating
▪ Serves this Business Service: Policy administration
◦ Auto claim notification
▪ Serves this Business Service: Claim management
◦ Bank reconciliation
▪ Realizes this Business Service: Bank reconciliation
Business service
• Exposes business functionality
• such as that performed by a Business Actor
• through one or more business interfaces
• Has meaning and value on a business level
• May serve other (more coarse-grained) business services
• For instance:
◦ Policy administration
◦ Auto policy rating
▪ A fine-grained Business Service that serves the coarse-grained Business Service
Policy administration
◦ Claim management
◦ Bank reconciliation
CQRS
• Command Query Responsibility Segregation
• The use of different models for reading data (queries) and writing data (commands)
Event-Driven Architecture
• An architectural style
• defined by the asynchronous exchange of events
• between application components.
• Hence a form of message-driven architecture
• where the exchanged messages are (or describe) events
• and the message exchange pattern is typically publish-subscribe (i.e., potentially many
consumers per event).
Event Sourcing
• An approach to data persistence that keeps persistent state as a series of events rather
than just a snapshot of the current state
• Often combined with CQRS
Interface
• Point of access to Service
• exposing that Service to Service clients
• Only if needed differentiate between business interface, application interface and
technology interface
RAML
• RESTful API Modeling Language
• YAML-based language for the machine- and human-readable definition of APIs that
embody most or all of the principles of REST, which are:
◦ Uniform interface, stateless, cacheable, client-server, layered system, code on
demand (optional)
◦ Adherence to the HTTP specification in its usage of HTTP methods, HTTP response
status codes, HTTP request and response headers, etc.
RAML definition
• An API specification expressed in RAML
• comprising one main RAML document
• and optionally included
◦ RAML fragment documents
◦ XSD and JSON-Schema documents
◦ examples, etc.
REST
• Representational State Transfer
• an architectural style characterized by the adherence to 6 constraints, namely
◦ Uniform interface
◦ Stateless
◦ Cacheable
◦ Client-server
◦ Layered system
◦ Code on demand (optional)
REST API
• An API that follows REST conventions
• and therefore adheres to the HTTP specification in its usage of HTTP methods, HTTP
response status codes, HTTP request and response headers, etc.
Service
• Explicitly defined exposed behavior
• exposes functionality to Service clients
• who access it through one or more interfaces
• Only if needed differentiate between Business Service, Application Service and
Technology Service
• May serve other (more coarse-grained) Services of the same kind
• For instance:
◦ Policy administration (a Business Service)
◦ Auto policy rating (an Application Service)
◦ HTTP request throttling (a Technology Service)
Technology interface
• Point of access to a Technology Service
• May realize an application interface
Figure 135. Application component accessing technology service via technology interface
Technology service
• Exposes technology functionality
• such as that performed by a Node or Device or System Software
• through one or more technology interfaces
• May serve Application Components
• May serve other (more coarse-grained) Technology Services
• For instance:
◦ Automatic restart
▪ Serves Application Components that must immediately resume operation after a
failure
◦ Persistent message exchange
▪ Serves Application Components that require guaranteed message delivery
◦ HTTP request throttling
▪ Serves Application Components that expose services that must be protected from
request overload (DoS attacks)
Web Service
• Synonym for API, with API being the preferred term
Version History
2017-11-01   v1.1           refinements to align with slides; improved Section 7.2; updated
                            Section 9.1.1; shortened Course objectives; updated Section 7.1.4;
                            added Bibliography; glossary now self-contained; added Appendix A
2017-09-29   beta2          added missing queue to Figure 94; tier instead of layer for API-led
                            connectivity; added missing images; numerous little refinements;
                            improved layout
2017-09-06   beta           for internal review
2017-08-27   alpha (v1.0)   basis of first public delivery