Data Mesh Architecture
Data Mesh From an Engineering Perspective
https://www.datamesh-architecture.com 1/27
26/9/22, 14:10 Data Mesh Architecture
Many organizations have invested in a central data lake and a central data team, expecting to drive their business based on data. However, after a few initial quick wins, they notice that
the central data team often becomes a bottleneck. The team cannot handle all the
analytical questions of management and product owners quickly enough. This is a massive
problem because making timely data-driven decisions is crucial to stay competitive. For
example: Is it a good idea to offer free shipping during Black Week? Do customers accept
longer but more reliable shipping times? How does a product page change influence the
checkout and returns rate?
The data team wants to answer all those questions quickly. In practice, however, they struggle because they need to spend too much time fixing broken data pipelines after operational database changes. In the little time that remains, the data team has to discover and understand the necessary domain data. For every question, they need to acquire domain knowledge to give meaningful insights. Getting the required domain expertise is a daunting task.
On the other hand, organizations have also invested in domain-driven design, autonomous domain teams (also known as …
The domain team agrees with others on global policies, such as interoperability, security, and
documentation standards in a federated governance guild, so that domain teams know how to
discover, understand and use data products available in the data mesh. The self-serve
domain-agnostic data platform, provided by the data platform team, enables domain teams to
easily build their own data products and do their own analysis effectively. An enabling team
guides domain teams on how to model analytical data, use the data platform, and build and
maintain interoperable data products.
Let’s zoom in to the core components of a data mesh architecture and their relationships:
Data Product
A data product is usually a published data set that can be accessed by other domains, similar to an API. For example, the history of inventory updates in a Google BigQuery table, or a daily JSON file with purchase orders in an AWS S3 bucket. A data product can take other forms as well: a sales report containing KPIs and charts as a PDF, or even a machine learning model that predicts shipping dates as an ONNX file.
To make the data product discoverable, accessible, and usable, it is described with metadata, including ownership and contact information, data location and access, update frequency, and a specification of the data model.
The domain team is responsible for the operations of the data product during its entire lifecycle. The team needs to continuously monitor and ensure data quality and availability, e.g., keep the data free of duplicates or react to missing entries.
To design data products, we recommend using the Data Product Canvas.
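The metadata described above can be captured in a simple, machine-readable descriptor. A minimal sketch in Python, where the field names are illustrative assumptions rather than any standard:

```python
from dataclasses import dataclass, field

@dataclass
class DataProductDescriptor:
    """Illustrative data product metadata; field names are assumptions."""
    name: str
    owner: str              # responsible domain team
    contact: str            # e.g., a team mailing list or chat channel
    location: str           # where the data lives, e.g., a BigQuery table or S3 URI
    update_frequency: str   # e.g., "daily" or "streaming"
    schema: dict = field(default_factory=dict)  # column name -> type

# Hypothetical descriptor for the inventory example above.
inventory_updates = DataProductDescriptor(
    name="inventory_updates",
    owner="fulfillment",
    contact="#team-fulfillment",
    location="datameshexample-fulfillment.inventory.updates",
    update_frequency="streaming",
    schema={"sku": "STRING", "location": "STRING",
            "available": "INT64", "updated_at": "TIMESTAMP"},
)
```

Such a descriptor is exactly what a data catalog can index to make the data product discoverable.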
Federated Governance
Analytical Data
Diving into the analytical data, we can see the data flows that lead toward the data
products. Operational data is often ingested as some kind of raw and unstructured
data.
In a preprocessing step, raw data is cleaned and structured into events and entities.
Events are small, immutable, and highly domain-oriented, such as OrderPurchased or ShipmentDelivered. Entities represent business objects, such as shipments or articles, whose state changes over time. That's why entities are often represented as a list of snapshots (the history), with the latest snapshot being the current state.
In practice, we often see manually entered or imported data. For example, forecast
data sent via email as CSV files or text descriptions for business codes.
Data from other teams is integrated as external data. When using well-governed data products from other teams, this integration might be implemented in a very lightweight way. When importing data from legacy systems, the external area acts as an anti-corruption layer.
The published data product is derived by aggregating a subset of the events, entities,
manual, and external data.
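The snapshot representation of entities can be sketched in a few lines of Python (the entity and its fields are illustrative):

```python
from datetime import datetime, timezone

# Each snapshot captures the full state of an entity at a point in time.
shipment_history = [
    {"shipment_id": "S-1", "status": "PREPARED",
     "at": datetime(2022, 9, 1, tzinfo=timezone.utc)},
    {"shipment_id": "S-1", "status": "SHIPPED",
     "at": datetime(2022, 9, 2, tzinfo=timezone.utc)},
    {"shipment_id": "S-1", "status": "DELIVERED",
     "at": datetime(2022, 9, 4, tzinfo=timezone.utc)},
]

def current_state(history):
    """The latest snapshot is the entity's current state."""
    return max(history, key=lambda snapshot: snapshot["at"])

latest = current_state(shipment_history)
```

Keeping the full history allows analytical queries over state changes, while the latest snapshot answers "what is the state now?".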
Ingesting
How can domain teams ingest their operational data into the data platform? A software
system designed according to domain-driven design principles contains data as
mutable entities/aggregates and immutable domain events.
Domain events are a great fit to be ingested into the data platform, as they represent relevant business facts. If there is a messaging system in place, domain events can be forwarded to the data platform by attaching an additional message consumer. Data can be collected, processed, and forwarded to the data platform in real time. With this streaming ingestion, data is sent in small batches as it arrives, so it is immediately available for analytics. As domain events are already well defined, there is little to do in terms of cleaning and preprocessing, except deduplication and anonymization of PII. Sometimes, it is also advisable to define and ingest internal analytical events that contain information relevant only for analytical use cases, so that domain events don't have to be modified.
Examples for streaming ingestion: Kafka Connect, Kafka Streams, AWS Lambda
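Such an additional message consumer can be sketched as follows; the event fields and the in-memory queue are stand-ins for what would be a Kafka consumer and the platform's ingestion API in a real setup:

```python
import hashlib

seen_event_ids = set()   # dedup store; in production a keyed state store
analytical_sink = []     # stand-in for the data platform's ingestion API

def anonymize(event):
    """Replace PII with a stable pseudonym so analytics can still join on it."""
    event = dict(event)
    if "customer_email" in event:
        event["customer_email"] = hashlib.sha256(
            event["customer_email"].encode()).hexdigest()[:16]
    return event

def consume(event):
    """Additional message consumer: deduplicate, anonymize, forward."""
    if event["event_id"] in seen_event_ids:
        return  # duplicate delivery, drop it
    seen_event_ids.add(event["event_id"])
    analytical_sink.append(anonymize(event))

consume({"event_id": "1", "type": "OrderPurchased",
         "customer_email": "jane@example.com", "total": 42.0})
consume({"event_id": "1", "type": "OrderPurchased",
         "customer_email": "jane@example.com", "total": 42.0})  # redelivery
```

The operational event flow stays untouched; the consumer only adds a branch toward the analytical side.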
Many business objects are persisted as entities and aggregates in SQL or NoSQL
databases. Their state changes over time, and the latest state is persisted in the
database only. Strong candidates for entities with state are articles, prices, customer
data, or shipment status. For analytical use cases, it is often required to have both the
latest state and the history of states over time. There are several approaches to ingest
entities. One way is to generate and publish an onCreate/onUpdate/onDelete event with the current state every time an entity changes, e.g., by adding an aspect or EntityListeners. Then streaming ingestion can be used to ingest the data as
described above. When it is not feasible to change the operational software, change
data capture (CDC) may be used to listen to database changes directly and stream
them into the data platform.
Examples for CDC streaming: Debezium
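The onCreate/onUpdate approach can be sketched as a thin wrapper around a repository's save operation; the repository and the event list here are hypothetical stand-ins for the real persistence layer and message broker:

```python
change_events = []   # stand-in for a message broker topic

class ArticleRepository:
    """Hypothetical repository; save() publishes the entity's current state."""
    def __init__(self):
        self._db = {}

    def save(self, article):
        event_type = "onUpdate" if article["id"] in self._db else "onCreate"
        self._db[article["id"]] = article
        # Publish the full current state so the platform can rebuild the history.
        change_events.append({"type": event_type, "state": dict(article)})

repo = ArticleRepository()
repo.save({"id": "A-1", "price": 19.99})
repo.save({"id": "A-1", "price": 17.99})  # price change
```

Because every event carries the full current state, the analytical side can store the events as a list of snapshots, as described above.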
Lastly, traditional scheduled ETL or ELT jobs that export data to files and load them into the platform can be set up, with the downsides of not having real-time data, missing all state changes between exports, and some work to consolidate the exported data again. However, they are a viable option for legacy systems, such as mainframes.
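A scheduled export job of this kind boils down to a small script that dumps the current table state to a file for the platform to load. A sketch with an in-memory SQLite database standing in for the legacy system (table and column names are made up):

```python
import csv
import io
import sqlite3

# In-memory stand-in for the operational database of a legacy system.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE shipments (id TEXT, status TEXT, updated_at TEXT)")
db.executemany("INSERT INTO shipments VALUES (?, ?, ?)",
               [("S-1", "SHIPPED", "2022-09-01T10:00:00Z"),
                ("S-2", "DELIVERED", "2022-09-02T12:00:00Z")])

def export_snapshot(connection):
    """Nightly ELT step: dump the current table state as CSV for loading."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["id", "status", "updated_at"])
    writer.writerows(connection.execute(
        "SELECT id, status, updated_at FROM shipments ORDER BY id"))
    return buffer.getvalue()

snapshot = export_snapshot(db)
```

Note the stated downside: the export only sees the state at export time, so intermediate state changes between two runs are lost.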
Clean Data
The following BigQuery query deduplicates raw inventory events and parses their JSON payload into columns (the table name and the partitioning columns were truncated in the source and are reconstructed here):

```sql
-- Step 1: Deduplicate
WITH inventory_deduplicated AS (
  SELECT
    * EXCEPT (row_number)
  FROM (
    SELECT
      *,
      -- partition key and ordering column assumed; truncated in the source
      ROW_NUMBER() OVER (PARTITION BY id ORDER BY time DESC) AS row_number
    FROM `datameshexample-fulfillment.raw.inventory`  -- full table name assumed
  )
  WHERE row_number = 1
),
-- Step 2: Parse JSON to columns
inventory_parsed AS (
  SELECT
    json_value(data, "$.sku")                           AS sku,
    json_value(data, "$.location")                      AS location,
    CAST(json_value(data, "$.available") AS INT64)      AS available,
    CAST(json_value(data, "$.updated_at") AS TIMESTAMP) AS updated_at
  FROM inventory_deduplicated
)
-- Step 3: Actual Query
SELECT sku, location, available, updated_at
FROM inventory_parsed
ORDER BY sku, location, updated_at
```
Clean data is the foundation for effective data analytics. With data mesh, domain teams
are responsible for performing data cleaning. They know their domain and can identify
why and how their domain data needs to be processed.
Data ingested into the data platform usually arrives in its original raw and unstructured format. When using a columnar database, this might be one row per event with a CLOB field for the event payload, often in JSON format. This raw data can then be preprocessed to get clean data:
Structuring: Transform unstructured and semi-structured data into the analytical data model, e.g., by extracting JSON fields into columns.
Analytics
To gain insights, domain teams query, process, and aggregate their analytical data
together with relevant data products from other domains.
SQL is the foundation for most analytical queries. It provides powerful functions to
connect and investigate data. The data platform should perform join operations
efficiently, even for large data sets. Aggregations are used to group data and window
functions help to perform a calculation across multiple rows. Notebooks help to build
and document exploratory findings.
Examples: Jupyter Notebooks, Presto
Humans understand data, trends, and anomalies much more easily when they perceive them visually. There are a number of great data visualization tools that build beautiful charts, key performance indicator overviews, dashboards, and reports. They provide an easy-to-use UI to drill down, filter, and aggregate data.
Examples: Looker, Tableau, Metabase, Redash
For more advanced insights, data science and machine learning methods can be applied. These enable correlation analyses, prediction models, and other advanced use cases.
Data Platform
The self-serve data platform may vary for each organization. Data mesh is a new field, and vendors are starting to add data mesh capabilities to their existing offerings. Looking at the desired capabilities, you can distinguish between analytical capabilities and data product capabilities: analytical capabilities enable the domain team to build an analytical data model and perform analytics for data-driven decisions. The data platform needs functions to ingest, store, query, and visualize data as a self-service. Typical data warehouse and data lake solutions, whether on-premises or from a cloud provider, already exist. The major difference is that each domain team gets its own isolated area.
A more advanced data platform for data mesh also provides additional domain-agnostic
data product capabilities for creating, monitoring, discovering, and accessing data
products. The self-serve data platform should support the domain teams so that they
can quickly build a data product as well as run it in production in their isolated area. The
platform should support the domain team in publishing their data products so that other
teams can discover them. The discovery requires a central entry point for all the decentralized data products. A data catalog can be implemented in different ways: as a wiki, as a Git repository, or with a cloud-based vendor solution such as Select Star, Google Data Catalog, or AWS Glue Data Catalog. The actual usage of data products, however, requires a domain team to access, integrate, and query other domains' data products. The platform should support, monitor, and document this cross-domain access and usage of data products.
An even more advanced data platform supports policy automation: instead of forcing the domain teams to manually ensure that the global policies are not violated, the policies are automatically enforced through the platform. For example, the platform can ensure that all data products have the same metadata structure in the data catalog, or that PII data is automatically removed during ingestion.
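Such a platform-side policy check can be as simple as validating every registered data product against the agreed metadata structure. A minimal sketch; the set of required fields is an assumption, not a standard:

```python
# Globally agreed metadata fields; the concrete set is an assumption.
REQUIRED_METADATA_FIELDS = {"name", "owner", "contact", "location",
                            "update_frequency"}

def violations(metadata):
    """Return the globally required metadata fields that are missing."""
    return sorted(REQUIRED_METADATA_FIELDS - metadata.keys())

compliant = {"name": "orders", "owner": "checkout", "contact": "#team-checkout",
             "location": "checkout.orders", "update_frequency": "daily"}
incomplete = {"name": "clickstream", "owner": "search"}

missing = violations(incomplete)
```

The platform could run this check on every catalog registration and reject data products with violations, so domain teams get immediate feedback.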
Efficiently combining data products from multiple domains, i.e., executing large cross-domain join operations within a few seconds, ensures developer acceptance and happiness. That's why the query engine has a large influence on the architecture of the data platform. A shared platform with a single query language and support for separated areas is a good way to start, as everything is highly integrated. This could be Google BigQuery with tables in multiple projects that are discoverable through Google Data Catalog. In a more decentralized and distributed data mesh, a distributed query engine such as Presto can still perform cross-domain joins without importing data, but such engines come with their own limitations, e.g., limited pushdowns may require transferring all underlying column data.
Enabling Team
The enabling team spreads the idea of data mesh within the organization. In the beginning of data mesh adoption, a lot of explanatory effort will be required, and the enabling team can act as data mesh advocates. They help domain teams on their journey to become full members of the data mesh. The enabling team consists of specialists with extensive knowledge of data analytics, data engineering, and the self-serve data platform.
A member of the enabling team temporarily joins a domain team, typically for a month or so, as an internal consultant to understand the team's needs, establish a learning environment, upskill the team members in data analytics, and guide them in using the self-serve data platform. They don't create data products themselves.
In between their consulting engagements, they share learning materials such as
walking skeletons, examples, best practices, tutorials, or even podcasts.
Mesh
The mesh emerges when teams use other domains' data products. Using data from upstream domains simplifies data references and lookups (such as getting an article's price), while data from downstream domains enables analyzing effects, e.g., for A/B tests (such as changes in the conversion rate). Data from multiple other domains can be aggregated to build comprehensive reports and new data products.
Let's look at a simplified e-commerce example:
Domains can be classified by data characteristics and data product usage. We adopt Zhamak
Dehghani’s classification:
Source-aligned
In this example, an online shop is subdivided into domains along the customer journey, from product search through checkout to payment. In a data mesh, these domains publish their data as data products so that others can access them. The engineers do analytics on their own data to improve their operational systems and validate the business value of new features. They use neighboring domains' data to simplify their queries and to get insights into effects in downstream domains. These domains can be described as source-aligned, as most of their published data products correspond closely to the domain events and entities generated in their operational systems.
Aggregate
For complicated subsystems, it can be efficient for a team to focus solely on delivering a data product that is aggregated from various data products of other domains. A typical example is a 360° customer view that includes relevant data from multiple domains, such as account data, orders, shipments, invoices, returns, account balance, and internal ratings. With respect to different bounded contexts, a comprehensive 360° customer view is hard to build, but it might be useful for many other domains. Another example of a complicated subsystem is building sophisticated ML models that require advanced data science skills. It may be sensible for a data science team to develop and train a recommendation model using data from checkout and the 360° customer view, while another team uses this model and focuses on presenting the calculated recommendations in the online shop or in promotional emails.
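At its core, an aggregate data product like the 360° customer view joins records from several upstream data products on a shared customer identifier. A toy sketch in Python, where the domains and fields are illustrative:

```python
# Stand-ins for data products published by other domains.
accounts  = [{"customer_id": "C-1", "name": "Jane Doe"}]
orders    = [{"customer_id": "C-1", "order_id": "O-7", "total": 42.0},
             {"customer_id": "C-1", "order_id": "O-9", "total": 13.5}]
shipments = [{"customer_id": "C-1", "order_id": "O-7", "status": "DELIVERED"}]

def customer_360(customer_id):
    """Aggregate one customer's data from multiple upstream data products."""
    return {
        "account": next(a for a in accounts if a["customer_id"] == customer_id),
        "orders": [o for o in orders if o["customer_id"] == customer_id],
        "shipments": [s for s in shipments if s["customer_id"] == customer_id],
        "lifetime_value": sum(o["total"] for o in orders
                              if o["customer_id"] == customer_id),
    }

view = customer_360("C-1")
```

In practice this would be a cross-domain SQL join on the platform's query engine; the hard part is agreeing on a shared customer identifier across bounded contexts.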
Consumer-aligned
In a company, there are also business departments that need data from the whole value stream to make sensible decisions; the people working in these departments are business experts, but not engineers or particularly technology-savvy. Management and controlling require detailed reports and KPIs from all domains to identify strengths and deviations. Marketing does funnel and web analysis over all steps in the customer journey in its own optimized tools, such as Google Analytics or Adobe Analytics. In these domains, the data model is optimized for a specific department's needs and can therefore be described as consumer-aligned.
Consumer-aligned reports were often one of the main tasks of central data teams. With data mesh, (new) consumer-aligned domain teams focus on fulfilling the data needs of one specific business domain, allowing them to gain deep domain knowledge and to constantly deliver better analytical results. Business and IT grow closer together, either by building integrated domain teams or by having engineering teams that provide domain data as a service for the business, e.g., to support C-level or controlling. Their data is typically used for their own analytics and reports and does not need to be published and managed as data products for other domains.
Tech Stacks
Data mesh is primarily an organizational approach, and that's why you can't buy a data mesh from a vendor. Technology is still important, however, as it acts as an enabler for data mesh: only useful and easy-to-use solutions will win the domain teams' acceptance. The available offerings of the cloud providers already include a sufficient set of good self-serve data services to let you form a data platform for your data mesh. We want to show which services can be used to get started.
There are a lot of different ways to implement a data mesh architecture. Here is a selection of typical tech stacks that we have seen:
Google Cloud BigQuery
AWS S3 and Athena
Azure Synapse Analytics
dbt and Snowflake
Starburst Enterprise (TBD)
Databricks (TBD)
If you want to share your tech stack here, feel free to reach out to us.
Domain Team’s Journey
Just as the data team has a journey to go on, each of your domain teams has to go on a journey to become a contributing part of your data mesh as well. Each team can start their journey whenever they are ready and at their own pace. The benefits arise already along the journey: teams will quickly gain from first data-driven decisions, starting an avalanche of using more and better data for even deeper insights. The data mesh evolves with each team that shares their data as products, enabling data-driven innovation.
To make this journey successful, the team needs three things: a clear data mesh vision from top management to get everybody moving in the same direction, a supportive environment including an easy-to-use self-serve data platform to get the engineering team on a learning path toward data analytics, and a high-trust environment to walk the journey in their own way and at their own pace.
So let’s start your journey!
Your team is responsible for a domain and builds and operates self-contained systems
including the necessary infrastructure. It was quite an effort to build these systems, and you
were highly focused on delivery excellence. These operational systems now generate domain
data.
Data analytics was just not relevant.
Once in production, you probably have to investigate incidents and need to analyze how many customers are affected. Also, some stakeholders might have questions regarding your data, such as "Which in-stock articles haven’t been sold in the last six months?" or "What were the shipping times during the last Black Week?" To answer all these questions, you send analytical queries to your operational database. Over time, you also do some first explorative analytics to get a deeper understanding of your system’s behavior.
This increases the load on your production database, and you might be tempted to change the production database to better support your analytical queries, e.g., by creating additional indexes. You might offload the additional load to read replicas. But analytical queries are still slow and cumbersome to write.
With the pains of slow and hard-to-write analytical queries in the back of your mind, you try
out the self-serve data platform that’s being promoted by the data platform team. For
example, you now have access to Google BigQuery. On this platform, your team starts to build
an analytical data model from your operational databases. This allows you to analyze data
covering your own systems with maintainable and fast queries, while keeping the schemas of
your operational databases untouched. You learn how to structure, preprocess, clean, analyze, and visualize analytical data. That's a lot to learn, even though most of it is SQL, which you are already familiar with.
As questions regarding your own data can now be answered quickly, you and your product
owner now enter the cycle of making data-driven decisions: define hypotheses and verify with
data.
Analyzing your own domain data is a great start, but combining it with data from other
domains is where the magic begins. It allows you to get a comprehensive view despite the
decentralization of data. Examples are A/B tests of the effect of a UI change on the conversion rate, or building machine learning models for fraud detection that include previous purchasing history and current clickstream behavior. This requires that other teams share
their data in a way that your team can discover, access, and use it. This is when the mesh
begins to form itself.
When a team becomes a consuming member of the data mesh, it starts to gain interest in the
interoperability and governance of the data mesh. Ideally, the team will send a representative
to the data mesh governance body.
If you are the first team, you may have to skip this step for now, move on to level 4, and be the first to provide data for others.
Based on other teams' needs, you share your data with others as products. For example, you
provide the confirmed, rejected, and aborted orders so others can correlate their events to the
conversion rate. Instead of just being a consumer of data products, you become a producer of
data products. You generate value for other teams. But at the same time, it increases your
responsibility and operational duties in the long term.
Data products must comply with the global policies defined by the federated governance
body. You have to know and understand the current global policies. Now, at the latest, you
need to participate in and contribute to the federated governance body.
Data Team’s Journey
Data mesh is primarily an organizational construct and fits right into the principles of team topologies. It shifts the responsibilities for data toward domain teams, which are supported by a data platform team and a data enabling team. Representatives of all teams come together in a federated governance guild to define the common standards.
Today, in many organizations, a central data team is responsible for a wide range of analytical tasks, from data engineering and managing data infrastructure to creating C-level reports. Such a central data team suffers from cognitive overload, spanning domain, technical, and methodical knowledge. Data mesh mitigates this.
Data mesh offers new perspectives for members of the central data team, as their analytical and data engineering skills remain highly necessary. For example, they are a perfect fit to establish the data platform for people that prefer to work on the …
The real mind shift, however, happens when founding new data-centric domains, as shown in the figure above. Let’s look at the typical management reports that large central data teams usually produce based on monolithic data warehouses or data lakes. With data mesh, the data engineers who created those management reports form a new domain team together with a dedicated product owner. As engineers of the new domain team, they can now focus on their new domain and their consumers. This allows them to gain deep domain knowledge over time, resulting in better reports and continuous optimizations. In addition, they switch from using the monolithic data warehouse to using data products from other domains. This switch is a gradual process driven by the demand for data products, accelerating the formation of the data mesh. The product owner negotiates with other domain teams about the required data products and makes sure that the reports and other products the new domain team will build in the future fulfill the needs of the business.
As existing domain teams do more and more data analytics on their journey, another perspective for members of the central data team is to join one of those teams. With their existing knowledge, they can accelerate the domain teams’ journeys toward a data mesh by spreading and teaching their knowledge and skills to the others in the team. It is important that they become full members of the team and do not found a data sub-team within the domain team. In addition to their knowledge and skills, the data engineers may also bring responsibilities and artifacts from the central data team to their domain teams. For example, customer profiling, which was previously done by the central data team, will move into the responsibility of the recommendation domain team.
The data scientists are typically centrally organized as well. That’s why their organizational future is quite similar to that of the central data team. The data products they focus on in the data mesh are machine learning features and models. When they join an existing domain team, such a machine learning model might be fully integrated into a microservice. In this way, data mesh enables such machine-learning-based services, because the required MLOps capabilities can be easily built on top of the data mesh.
FAQ
So, what's really behind the hype?
Data mesh is primarily an organizational change. Responsibility for data is shifted closer to the business value stream. This enables faster data-driven decisions and reduces barriers to data-centric innovation.
Who has actually implemented a data mesh?
There is a comprehensive collection of user journey stories from the Data Mesh Learning
community that covers data mesh examples from many different industries.
Is Data Mesh for my company?
It depends, of course. There are a few prerequisites that should be in place: You should have
modularized your software system following domain-driven design principles or something
similar. You should have a good number (5+) of independent domain teams that have their
systems already running in production. And finally, you should trust your teams to make data-
driven decisions on their own.
How to get started?
Start small and agree on the big picture. Find two domain teams (that are around level 2) that
have a high value use case where one team needs data from the other team. Let one team
build a data product (level 4) and another team use that data product (level 3). You don’t need
a sophisticated data platform yet. You can start by sharing files via AWS S3 or a Git repository, or use a cloud-based database such as Google BigQuery.
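For such a minimal start, a data product can literally be a data file plus a metadata descriptor next to it. A sketch that writes both locally; in practice you would upload the two files to, e.g., an S3 bucket, and all names here are illustrative:

```python
import csv
import json
import tempfile
from pathlib import Path

def publish_data_product(target_dir, name, rows, metadata):
    """Write the data as CSV and its descriptor as JSON side by side."""
    target = Path(target_dir)
    data_file = target / f"{name}.csv"
    with data_file.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
    meta_file = target / f"{name}.meta.json"
    meta_file.write_text(json.dumps({"name": name, **metadata}, indent=2))
    return data_file, meta_file

orders = [{"order_id": "O-1", "status": "confirmed"},
          {"order_id": "O-2", "status": "rejected"}]

with tempfile.TemporaryDirectory() as tmp:
    data_file, meta_file = publish_data_product(
        tmp, "orders", orders,
        {"owner": "checkout", "contact": "#team-checkout",
         "update_frequency": "daily"},
    )
    published = data_file.read_text()
```

The metadata file next to the data is what makes this more than a file dump: the consuming team can see who owns the data and how often it is updated.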
There are some indicators when a data mesh approach might not be suitable for you,
including:
You are too small and don’t have multiple independent engineering teams.
You have low-latency data requirements. Data mesh is a network of data; if you need to optimize for low latency, invest in a more integrated data platform.
You are happy with your monolithic highly integrated system (such as SAP). It might be
more efficient to use their analytical platform.
Is the Data Mesh a generic solution to a distributed data
architecture?
No.
By definition, data mesh does not include data products used for serving real-time needs.
Data mesh focuses on analytical use cases.
What's the difference between data mesh and data fabric?
At first, data fabric looks similar to data mesh because it offers a similar self-serve data
platform. Looking deeper, it turns out that data fabric is a central and domain-agnostic
approach, which is in strong contrast to the domain-centric and decentralized approach of
data mesh. More in this comparison article.
What might a journey be for teams who operate commercial off-the-
shelf (COTS) systems?
Many COTS systems (such as Salesforce, SAP, Shopify, Odoo) provide domain-optimized analytical capabilities, so the journey for domain teams starts directly at level 2.
The challenge is to integrate data products from other domains (level 3, which may be skipped if not needed) and to publish data products for other domains (level 4). The system’s data needs to be exported to the data platform and managed as data products, conforming to the global policies. As data models evolve with system updates, an anti-corruption layer is a must, e.g., as a cleaning step.
How might externally acquired datasets be part of a data mesh?
Typical examples are price databases or medical studies. A team needs to own such a dataset and bring it into the data mesh. If this is not a very technical team, the data platform should offer an easy self-service to upload files and provide metadata. An Excel API or Google Sheets might also be an option here.
How did you draw the diagrams?
We got this question quite a lot, so we are happy to share our tooling:
We use diagrams.net with "Sketch" style and Streamline Icons. We automate the conversion to PNG, SVG, and WebP with a little script.
What are your questions?
If you have any more questions, we encourage you to discuss with us on GitHub or reach out
to us directly. But be warned: Your question might end up in the FAQ. :-)
Learn more
Data Mesh
by Zhamak Dehghani
Zhamak's book about data mesh. The book not only discusses the principles of data
mesh, but also presents an execution strategy.
Authors
Jochen Christ (@jochen_christ) works as tech lead at INNOQ and is a specialist for self-contained systems and data mesh. Jochen is the maintainer of HTTP Feeds, Which JDK, and co-author of Remote Mob Programming.
Dr. Larysa Visengeriyeva (@visenger) received her doctorate in Augmented Data Quality Management at TU Berlin. At INNOQ she is working on the operationalization of Machine Learning (MLOps).
Dr. Simon Harrer (@simonharrer) is a curious person working at INNOQ who likes to share his knowledge. He's a serial co-author of Java by Comparison, Remote Mob Programming, GitOps, and, most recently, Data Mesh.
Contributors
A ton of people helped us curate our content through their great feedback. Special thanks to:
Anja Kammer, Benedikt Stemmhildt, Benjamin Wolf, Eberhard Wolff, Gernot Starke, Jan
Schwake, Julian Schikarski, Jörg Müller, Markus Harrer, Matthias Geiger, Philipp Beyerlein,
Rainer Jaspert, Stefan Tilkov, Tammo van Lessen, and Theo Pack.
And if you have feedback for us as well, feel free to discuss with us on GitHub or reach out to
us directly!