The Optimal RabbitMQ Guide
I want to say a big thank you to everyone who has helped me, from the earliest
draft of this book, up to this edition. A special thanks goes out to my lovely colleagues at
84codes and my tech friends.
Finally: A big thank you, to all CloudAMQP users for your feedback and continued
support. It has been a great motivator to see our customers’ projects succeed!
Email: lovisa@cloudamqp.com
Email: elin@cloudamqp.com
Published: 2020-03-10
ISBN: 978-91-519-3115-9
Please e-mail me any comments that you might have about the book and let me
know what you think should or shouldn’t be included in the next edition. If you have
an application using RabbitMQ or a user story that you would like to share, please
send me an e-mail!
This book is dedicated to all the frequent and future users of
RabbitMQ, and to the Swedish mentality that makes us
queue lovers by heart.
PART ONE
Introduction to RabbitMQ.....................................................10
What is RabbitMQ?........................................................ 16
PART TWO
RabbitMQ Best Practice.................................................... 82
RabbitMQ Protocols.....................................................110
PART THREE
Times are changing; reliability and scalability are more important than ever. Because
of that, companies are rethinking their architecture. Monoliths are evolving
into microservices and servers are moving into the cloud. Message Queues and
RabbitMQ in particular, as one of the most widely deployed open-source message
brokers, have come to play a significant role in the growing world of microservices.
This book is divided into three parts:
The first part is an introduction to message queuing and RabbitMQ, covering the core concepts, how to get started, and basic code examples.
The second part of the book is for more advanced users, who will learn how to take
full advantage of RabbitMQ. This section explores Best Practice recommendations for
High Performance and High Availability and common RabbitMQ errors and mistakes. It
is also a deep dive into some RabbitMQ concepts and features.
The third part gives some real-world user stories from our own experiences with
RabbitMQ as well as from a CloudAMQP customer's point of view.
We hope that this book takes you far on your message queuing journey with RabbitMQ.
PART ONE
INTRODUCTION TO RABBITMQ
A MICROSERVICE ARCHITECTURE
Modules with various functionalities operate together to form a complete software
application via properly articulated operations and interfaces.
Welcome to the wonderful world of message queuing! Many modern cloud
architectures are a collection of loosely coupled microservices that communicate
via a message queue. This microservice architecture is beneficial in that
all the different parts of the system are easier to develop, deploy and maintain.
One of the advantages of a message queue is that it makes your data temporarily
persistent, reducing the chance of errors that may occur when different parts of the
system are offline. If one part of the system is unreachable, the other part can continue
to interact with the queue. This part of the book features basic information on
microservices and message queuing, and the benefits of RabbitMQ.
PART ONE
MICROSERVICES AND RABBITMQ
Managing a complex, unified enterprise application can be a lot more difficult
than it seems. Before making even a small change, hours of testing and analysis
are required to assess the possible impacts your change could have on the overall system.
New developers must spend days getting familiar with the system before they are
considered ready to write a non-breaking line of code. Fortunately, the situation becomes
far less complicated thanks to the use of microservices. This chapter talks about the
benefits of a microservice architecture.
Imagine building a huge, bulky billing application that will involve authentication,
authorization, charging and reporting of transactions. Dividing the application
across multiple services (one for each functionality) will separate the responsibilities
and give developers the freedom to write code for a specific service in any
chosen language; something that a monolithic application can't provide. Additionally,
it will also be easier to maintain written code and make changes to the
system. For example, updating an authentication scheme will only require the
addition of code to the authentication module and testing, without having to worry
about disrupting any other functionalities of the application.
• Fault isolation
• Faster development - different teams can work independently on different
modules at the same time. In addition to the development cycle, the testing phase
is also sped up by the use of microservices, as each microservice can be tested
independently to determine the readiness of the overall system.
• Improved scalability
• Easy to understand
Think of a message broker like a delivery person who takes mail from a sender and
delivers it to the correct destination. In a microservice architecture, there typically are
cross-dependencies that mean no single service can perform without getting help from
other services. This is where it is crucial for systems to have a mechanism in place,
allowing services to keep in touch with each other with no blocked responses. Message
queuing fulfils this purpose by providing a means for services to push messages to
a queue asynchronously and ensure they are delivered to the correct destination. To
implement message queuing, a message broker is required.
PART ONE
WHAT IS RABBITMQ?
Message queuing is an important facet of any microservice architecture; it is a
way of exchanging data between processes, applications and servers. RabbitMQ
is one of the most widely used message brokers in the world, boasting over
35,000 production deployments across startups and big enterprises alike. This
chapter gives a brief understanding of message queuing and defines essential RabbitMQ
concepts. The chapter also explains the steps to go through when setting up connections,
including how to publish as well as consume messages from a queue.
A message can include any information. It could, for example, contain information
about a process or job that should start on another application, possibly even on another
server, or it might be a simple text message.
The message broker stores the messages until a receiving application connects
and consumes a message off the queue. The receiving application then appropriately
processes the message. A message producer adds messages to a queue without having
to wait for them to be processed.
RABBITMQ EXAMPLE
A message broker can act as a middleman for various services. A broker can be used to
reduce loads and delivery times for web application servers. A task that would generally
take a lot of time to process can be delegated to a third party whose only job is to
perform it.
This chapter is a case study of a web application, a PDF generator. The application
allows users to upload information to a website. The website handles the information
and generates a PDF, which is then emailed back to the user. The whole processing in
this example takes several seconds to perform.
When the user has entered user information into the web interface, the web applica-
tion puts a “PDF processing” job into a message, including information about the job
such as name and email. The message is then placed onto a queue defined in RabbitMQ
using an exchange process. The underlying architecture of a message queue is simple
- client applications called producers create messages and deliver them to the broker.
The broker is, essentially, the message queue. Other applications, known as consumers,
connect to the queue and subscribe to messages from the broker, processing those
messages according to their function. The software interacting with the broker can
be a producer, a consumer, or both. Messages placed in the queue are stored until the
consumer retrieves them.
If the PDF processing application crashes, or if many PDF requests are coming at the
same time, messages would continue to pile up in the queue until the consumer starts
again. It would then process all the messages, one by one.
Message queuing allows web servers to respond to requests in their own time instead
of being forced to perform resource-heavy procedures immediately. Message queuing
is also useful for distributing a message to multiple recipients for consumption or when
balancing the load between workers.
The consumer application removes a message from the queue and processes the PDF
at the same time as the producer pushes new task messages to the queue. The consumer
can be on an entirely different server than the publisher, or they can be located on
the same server; it makes no difference. Requests can be created in one programming
language and handled in another programming language, as the two applications only
communicate through the messages they are sending to each other. The two services
have what is known as ‘low coupling’ between the sender and the receiver.
Figure 3 - Application architecture example with RabbitMQ.
Example
1. The user sends a PDF creation request to the web application.
2. The web application (the producer) sends a message to RabbitMQ that includes
data from the request, such as name and email.
3. An exchange accepts the messages from a producer application and routes them
to correct message queues.
4. The PDF processor (the consumer) receives the job message from the queue and
starts the processing of the PDF.
EXCHANGES
Messages are not published directly to a queue; instead, the producer sends a message
to an exchange. The job of an exchange is to accept messages from the producer appli-
cations and route them to the correct message queues. It does this with the help of
bindings and routing keys. A binding is a link between a queue and an exchange.
1. The producer publishes a message to an exchange.
2. The exchange receives the message and is now responsible for the routing
of the message. The exchange looks at different message attributes and keys
depending on the exchange type.
3. In this case, we see two bindings to two different queues from the exchange.
The exchange routes the message to the correct queue, depending on these
attributes.
4. The messages stay in the queue until the consumer processes them.
5. The consumer then removes the message from the queue once handled.
Types of Exchanges
Only direct exchanges are used within our sample code in the upcoming chapters.
More in-depth understanding of the different exchange types, binding keys, routing
keys and how/when they should be used can be found in the chapter: Exchanges, routing
keys and bindings.
• Direct - The direct exchange delivers messages to queues whose binding key exactly
matches the routing key of the message.
• Topic - The topic exchange performs a wildcard match between the routing key
and the routing pattern specified in the binding.
• Fanout - A fanout exchange routes messages to all of the queues that are bound
to it.
• Headers - A header exchange uses the message header attributes for routing
purposes.
RABBITMQ AND SERVER CONCEPTS
Below are some important concepts that are helpful to know before we dig deeper
into RabbitMQ. The default virtual host, the default user, and the default permissions
are used in the examples that follow.
• Connection - TCP connection between the application and the RabbitMQ broker
• Routing Key - The key that the exchange looks at to decide how to route the
message to queues. Think of the routing key as the destination address of a
message.
• Users - It’s possible to connect to RabbitMQ with a given username and password,
with assigned permissions such as rights to read, write and configure. Users can
also be assigned permissions to specific virtual hosts.
• Vhost - A virtual host, or vhost, segregates applications that are using the same
RabbitMQ instance. Different users can have different access privileges to differ-
ent vhosts and queues, and exchanges can be created so that they only exist in
one vhost.
SETTING UP A RABBITMQ INSTANCE
Getting Started with RabbitMQ
RabbitMQ provides an easy to use web user interface (UI) for management and mon-
itoring of the RabbitMQ server. A link to the management interface can be found on the
details page for the CloudAMQP instance.
The management interface allows users to manage, create, delete, and list queues. It
allows for monitoring of queue length, checking message rate, changing and adding user
permissions, and much more. Detailed information about the management interface is
provided in the chapter titled The Management Interface.
Figure 8 - The overview window in the RabbitMQ management interface.
RabbitMQ speaks the AMQP protocol by default, and to be able to communicate with
RabbitMQ, a library that understands the same protocol needs to be used. A RabbitMQ
client library (sometimes called a helper library) abstracts the complexity of the AMQP
protocol into simple methods; in this case, to communicate with RabbitMQ. The methods
are used, for example, when connecting to the RabbitMQ broker with the given parameters
(hostname, port number, etc.), or when declaring a queue or an exchange.
There is a choice of libraries for all major programming languages.
1. First of all, we need to set up/create a connection object. Here, the username,
password, connection URL, port, etc., are specified. A TCP connection will be set
up between the application and RabbitMQ.
2. Create a channel in the TCP connection.
3. Declare a queue. The queue is created if it does not already exist.
4. Set up exchanges and bind a queue to an exchange. Messages are only routed to
a queue if the queue is bound to an exchange.
5. Publish a message to an exchange and consume a message from the queue.
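As a rough illustration of these five steps - not part of the book's sample code - here is a minimal sketch using the Python client Pika (introduced later in this book). The queue, exchange, and routing key names are placeholders chosen for this example.
import os
import pika

# 1. Set up a connection object; username, password, host and vhost come from the URL
url = os.environ.get('CLOUDAMQP_URL', 'amqp://guest:guest@localhost/%2f')
connection = pika.BlockingConnection(pika.URLParameters(url))

# 2. Create a channel in the TCP connection
channel = connection.channel()

# 3. Declare a queue (created if it does not already exist)
channel.queue_declare(queue='example_queue', durable=True)

# 4. Set up an exchange and bind the queue to it
channel.exchange_declare(exchange='example.exchange', exchange_type='direct', durable=True)
channel.queue_bind(queue='example_queue', exchange='example.exchange', routing_key='process')

# 5. Publish a message to the exchange and consume a message from the queue
channel.basic_publish(exchange='example.exchange', routing_key='process', body='Hello!')
method, properties, body = channel.basic_get(queue='example_queue', auto_ack=True)
print(body)
connection.close()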
Sample Code
Sample code for Ruby, Node.js, and Python can be found in upcoming chapters.
Remember, it is possible to use different programming languages in different parts of
the system. The publisher could be written in Node.js while the subscriber is written in
Python, for example.
Just as it is possible to create different databases for different projects within a single
PostgreSQL server, vhosts make it possible to separate applications on one single
broker. Users, exchanges, queues, etc. can be isolated to one specific vhost, or separate
environments can be kept apart - e.g., production on one vhost and staging on another
vhost within the same broker - instead of setting up multiple brokers. The downside of
using a single instance is that there is no resource isolation between vhosts. Shared plans
on CloudAMQP are located on isolated vhosts.
PART ONE
EXCHANGES, ROUTING KEYS AND BINDINGS
What are exchanges, bindings, and routing keys? In what way are exchanges and
queues associated with each other? When should they be used and how? This
chapter explains the different types of exchanges in RabbitMQ and scenarios
of when to use them.
As mentioned in the previous chapter, messages are not published directly to a queue.
Instead, the producer sends messages to an exchange. Exchanges are message routing
agents, living in a virtual host (vhost) within RabbitMQ. Exchanges accept messages
from the producer application and route them to message queues with the help of
header attributes, bindings, and routing keys.
In RabbitMQ, four different types of exchanges route the message differently using
different parameters and bindings setups. Clients can create their own unique exchanges
or use the predefined default exchanges.
DIRECT EXCHANGE
A direct exchange delivers messages to queues based on a routing key. The routing
key is a message attribute added to the message by the producer. Think of the routing
key as an "address" that the exchange uses to decide how to route the message. A
message goes to the queue(s) whose binding key exactly matches the
routing key of the message. The direct exchange type is useful to distinguish messages
published to the same exchange using a simple string identifier.
Figure 9 - A message is directed to the queue where the binding key is an exact match
of the message’s routing key.
Scenario 1
• Exchange: pdf_events
• Queue A: create_pdf_queue
Scenario 2
• Exchange: pdf_events
• Queue B: pdf_log_queue
A message with the routing key pdf_log is sent to the exchange pdf_events (Figure
9). The message is routed to pdf_log_queue because the routing key (pdf_log)
matches the binding key (pdf_log).
Note: If the message routing key does not match any binding key, the message is
discarded.
The default exchange AMQP brokers must provide for the direct exchange type is "amq.direct".
Default exchange
The default exchange is a pre-declared direct exchange with no name, usually referenced
by the empty string (""). Every queue that is created is automatically bound to it with a
routing key that is the same as the queue name.
TOPIC EXCHANGE
Topic exchanges route messages to a queue based on a wildcard match between the
routing key and something called the routing pattern, which is specified by the queue
binding. Messages can be routed to one or many queues depending on this wildcard
match.
The routing key must be a list of words, delimited by a period (.). Examples can be
agreements.us or agreements.eu.stockholm, which in this case identifies agreements
that are set up for a company with offices in different locations. The routing patterns
may contain an asterisk ("*") to match a word in a specific position of the routing key
(e.g., a routing pattern of agreements.*.*.b.* only matches routing keys where the first
word is agreements and the fourth word is "b"). A pound symbol ("#") indicates a
match on zero or more words (e.g., a routing pattern of agreements.eu.berlin.# matches
any routing key beginning with agreements.eu.berlin).
Figure 10 - Messages are routed to one or many queues based on a match between a
message routing key and the routing patterns.
The consumers indicate which topics they are interested in (like subscribing to a feed
of an individual tag). The consumer creates a queue and sets up a binding with a given
routing pattern to the exchange. All messages with a routing key that match the routing
pattern are routed to the queue and stay there until the consumer handles the message.
The default exchange AMQP brokers must provide for the topic exchange type is "amq.topic".
Scenario 1
• Exchange: agreements
• Queue A: berlin_agreements
• Routing pattern: agreements.eu.berlin.#
Scenario 2
• Exchange: agreements
• Queue B: all_agreements
• Routing pattern: agreements.#
Scenario 3
• Exchange: agreements
• Queue C: store_agreements
A message with the routing key agreements.eu.berlin is sent to the exchange agreements.
The message is routed to the queue berlin_agreements because the routing pattern
agreements.eu.berlin.# matches any routing key beginning with agreements.eu.berlin.
The message is also routed to the queue all_agreements since the routing key
(agreements.eu.berlin) also matches the routing pattern agreements.#.
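A hedged Pika sketch of Scenarios 1 and 2 follows; the bindings mirror the routing patterns above, but the code is illustrative and not from the book's sample code.
import pika

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost/%2f'))
channel = connection.channel()

channel.exchange_declare(exchange='agreements', exchange_type='topic')
channel.queue_declare(queue='berlin_agreements')
channel.queue_declare(queue='all_agreements')

# Queue A only receives Berlin agreements; Queue B receives every agreement
channel.queue_bind(queue='berlin_agreements', exchange='agreements', routing_key='agreements.eu.berlin.#')
channel.queue_bind(queue='all_agreements', exchange='agreements', routing_key='agreements.#')

# Routed to both queues: the key matches agreements.eu.berlin.# and agreements.#
channel.basic_publish(exchange='agreements', routing_key='agreements.eu.berlin', body='New agreement')
connection.close()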
FANOUT EXCHANGE
Fanout exchanges copy and route a received message to all queues that are bound to
it regardless of routing keys or pattern matching as with direct and topic exchanges.
Keys provided will be ignored.
Fanout exchanges can be useful when the same message needs to be sent to one or
more queues with consumers who may process the same message in different ways, like
in distributed systems designed to broadcast various state and configuration updates.
The default exchange AMQP brokers must provide for the fanout exchange type is "amq.fanout".
Scenario 1
• Exchange: sport_news
• Binding: Binding between the exchange (sport_news) and Queue A (Mobile client
queue A)
A message is sent to the exchange sport_news (Figure 11). The message is routed to
all queues (Queue A, Queue B, Queue C) because all queues are bound to the exchange.
Provided routing keys are ignored.
Figure 11 - Fanout Exchange: The received message is routed to all queues that are bound to
the exchange.
HEADERS EXCHANGE
A unique argument named “x-match” can be added in the binding between exchange
and the message queue. This argument decides if all headers must match or just one.
Either any common header between the message and the binding counts as a match, or
all the headers referenced in the binding need to be present in the message for it to
match. The "x-match" property can have two different values: "any" or "all", where
"all" is the default value. A value of "all" means all header pairs (key, value) must
match, and a value of "any" means at least one of the header pairs must match. Headers
can be constructed using a wider range of data types - integer or hash values, for
example - instead of just strings. The headers exchange type (used with the
binding argument "any") is useful for directing messages which may contain a subset
of known (unordered) criteria.
The default exchange AMQP brokers must provide for the headers exchange type is "amq.headers".
Figure 12 - Headers exchange routes messages to queues that are bound using arguments
(key and value) containing headers and optional values.
• Exchange: Binding to Queue A with arguments (key = value): format = pdf, type =
report, x-match = all
• Exchange: Binding to Queue B with arguments (key = value): format = pdf, type =
log, x-match = any
• Exchange: Binding to Queue C with arguments (key = value): format = zip, type =
report, x-match = all
Scenario 1
Message 1 is published to the exchange with the header arguments (key = value):
“format = pdf”, “type = report” and with the binding argument “x-match = all”
Message 1 is delivered to Queue A because all of the binding's headers (format and
type) match the message headers.
Scenario 2
Message 2 is published to the exchange with the header argument (key = value):
"format = pdf". It is delivered to Queue B, whose binding argument is "x-match = any",
because at least one header (format = pdf) matches.
Scenario 3
Message 3 is published to the exchange with the header arguments of (key = value):
“format = zip”, “type = log” and with the binding argument “x-match = all”
Message 3 is not delivered to any queue - the queue is configured to match all of the
headers (format and type)
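As an illustrative Pika sketch of the Queue A binding and Message 1 above: the exchange name is a placeholder, the binding arguments mirror the scenario, and the snippet is not taken from the book's sample code.
import pika

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost/%2f'))
channel = connection.channel()

channel.exchange_declare(exchange='headers.exchange', exchange_type='headers')
channel.queue_declare(queue='queue_a')

# Queue A only receives messages whose headers contain BOTH format=pdf and type=report
channel.queue_bind(queue='queue_a', exchange='headers.exchange',
                   arguments={'x-match': 'all', 'format': 'pdf', 'type': 'report'})

# Message 1 carries matching headers, so it is routed to Queue A
channel.basic_publish(exchange='headers.exchange', routing_key='',
                      body='Monthly report',
                      properties=pika.BasicProperties(headers={'format': 'pdf', 'type': 'report'}))
connection.close()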
DEAD LETTER EXCHANGE
RabbitMQ provides an AMQP extension known as the dead letter exchange. The dead
letter exchange captures messages that are not deliverable and routes them out of the
queue. A message is considered "dead" when it has reached the end of its time-to-live,
when the queue exceeds the max length (messages or bytes) configured for the queue,
or when the message has been rejected by the queue or nacked by the consumer
and is not marked for re-queueing.
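A dead letter exchange is configured per queue, either through a policy or through optional queue arguments. Below is a hedged Pika sketch; the names and the TTL value are examples, not from the book.
import pika

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost/%2f'))
channel = connection.channel()

# Dead-lettered messages are republished to this exchange and collected in dead_letters
channel.exchange_declare(exchange='dlx', exchange_type='fanout')
channel.queue_declare(queue='dead_letters')
channel.queue_bind(queue='dead_letters', exchange='dlx')

# Messages that expire (60 s TTL), are rejected without re-queueing, or overflow the
# queue are routed to the dead letter exchange instead of being dropped
channel.queue_declare(queue='work_queue', arguments={
    'x-dead-letter-exchange': 'dlx',
    'x-message-ttl': 60000
})
connection.close()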
PART ONE
The tutorial follows the scenario from the previous chapter, where a web application
allows users to upload user information to a website. The application handles the data,
generates a PDF, and emails it back to the user. A RabbitMQ instance is required in order
to practice the coding examples.
# example_publisher.rb
require "bunny"
require "json"
# Returns a connection instance
conn = Bunny.new ENV['CLOUDAMQP_URL']
# The connection is established when start is called
conn.start
# create a channel in the TCP connection
ch = conn.create_channel
# Declare a queue with a given name, examplequeue. In this example a durable shared queue is used.
q = ch.queue("examplequeue", :durable => true)
# Bind a queue to an exchange
x = ch.direct("example.exchange", :durable => true)
q.bind(x, :routing_key => "process")
# Publish a message
information_message = "{\"email\": \"example@mail.com\", \"name\": \"name\", \"size\": \"size\"}"
x.publish(information_message,
:timestamp => Time.now.to_i,
:routing_key => "process"
)
sleep 1.0
conn.close
# example_consumer.rb
require "bunny"
require "json"
# Returns a connection instance
conn = Bunny.new ENV['CLOUDAMQP_URL']
# The connection is established when start is called
conn.start
# Create a channel in the TCP connection
ch = conn.create_channel
# Declare a queue with a given name, examplequeue. In this example a durable shared queue is used.
q = ch.queue("examplequeue", :durable => true)
# Method for the PDF processing
def pdf_processing(json_information_message)
puts "Handling pdf processing for"
puts json_information_message['email']
sleep 5.0
puts "pdf processing done"
end
# Set up the consumer to subscribe from the queue
q.subscribe(:block => true) do |delivery_info, properties, payload|
json_information_message = JSON.parse(payload)
pdf_processing(json_information_message)
end
PUBLISHER
# example_publisher.rb
require "bunny"
require "json"
Set up a connection
conn = Bunny.new ENV['CLOUDAMQP_URL']
conn.start
The connection is established when start is called.
Create a channel
ch = conn.create_channel
Declare a queue
q = ch.queue("examplequeue", :durable => true)
ch.queue is used to declare a queue with a particular name, in this case, the queue
is called examplequeue. The queue in this example is marked as durable, meaning that
RabbitMQ will not lose the queue during a RabbitMQ restart.
Bind the queue to an exchange
x = ch.direct("example.exchange", :durable => true)
q.bind(x, :routing_key => "process")
A direct exchange is used; it delivers messages to queues based on a message routing
key. The routing key "process" is used for this binding. The exchange is first created
and then bound to the queue.
Publish a message
# Publish a message
information_message = "{\"email\": \"e@m.com\", \"name\": \"n\", \"size\": \"s\"}"
x.publish(information_message,
:timestamp => Time.now.to_i,
:routing_key => "process"
)
information_message contains all the information that is sent to the exchange. A direct
exchange uses the message routing key for routing, so the producer needs to specify a
routing key on the message for it not to be dropped.
sleep 1.0
conn.close
CONSUMER
The method pdf_processing is a method stub that sleeps for 5 seconds to simulate the
pdf processing.
Set up the consumer
subscribe consumes messages and processes them. It will be called every time a
message arrives. The subscribe method will not block the calling thread by default.
More information about Ruby and CloudAMQP can be found on the CloudAMQP web
page: https://www.cloudamqp.com/docs/ruby.html
PART ONE
When running the full code given, a connection is established between the RabbitMQ
instance and your application. Queues and exchanges are declared and created if they
do not already exist and finally, a message is published. The publish method queues
messages internally if the connection is down and resends them later. The consumer
subscribes to the queue. Messages are handled one by one and sent to the PDF
processing method.
A new message is published every second. A default exchange, identified by the empty
string ("") is used. The default exchange means that messages are routed to the queue
with the name specified by routing_key, if it exists. (The default exchange is a direct
exchange with no name.)
// (Start of the example - the amqplib callback API is assumed as the client library)
var amqp = require("amqplib/callback_api");
var amqpConn = null;
function start() {
  amqp.connect(process.env.CLOUDAMQP_URL + "?heartbeat=60", function(err, conn) {
    if (err) {
      console.error("[AMQP]", err.message);
      return setTimeout(start, 1000);
    }
    conn.on("error", function(err) {
      if (err.message !== "Connection closing") {
        console.error("[AMQP] conn error", err.message);
      }
    });
conn.on("close", function() {
console.error("[AMQP] reconnecting");
return setTimeout(start, 1000);
});
console.log("[AMQP] connected");
amqpConn = conn;
whenConnected();
});
}
function whenConnected() {
startPublisher();
startWorker();
}
var pubChannel = null;
var offlinePubQueue = [];
function startPublisher() {
amqpConn.createConfirmChannel(function(err, ch) {
if (closeOnErr(err)) return;
ch.on("error", function(err) {
console.error("[AMQP] channel error", err.message);
});
ch.on("close", function() {
console.log("[AMQP] channel closed");
});
pubChannel = ch;
while (true) {
var m = offlinePubQueue.shift();
if (!m) break;
publish(m[0], m[1], m[2]);
}
});
}
// method to publish a message; will queue messages internally if the connection is down and resend later
function publish(exchange, routingKey, content) {
try {
pubChannel.publish(exchange, routingKey, content, { persistent: true },
function(err, ok) {
if (err) {
console.error("[AMQP] publish", err);
offlinePubQueue.push([exchange, routingKey, content]);
pubChannel.connection.close();
}
});
} catch (e) {
console.error("[AMQP] publish", e.message);
offlinePubQueue.push([exchange, routingKey, content]);
}
}
// A worker that acks messages only if they were processed successfully
function startWorker() {
amqpConn.createChannel(function(err, ch) {
if (closeOnErr(err)) return;
ch.on("error", function(err) {
console.error("[AMQP] channel error", err.message);
});
ch.on("close", function() {
console.log("[AMQP] channel closed");
});
ch.prefetch(10);
ch.assertQueue("jobs", { durable: true }, function(err, _ok) {
if (closeOnErr(err)) return;
ch.consume("jobs", processMsg, { noAck: false });
console.log("Worker is started");
});
function processMsg(msg) {
work(msg, function(ok) {
try {
if (ok)
ch.ack(msg);
else
ch.reject(msg, true);
} catch (e) {
closeOnErr(e);
}
});
}
});
}
function work(msg, cb) {
console.log("Got msg ", msg.content.toString());
cb(true);
}
function closeOnErr(err) {
if (!err) return false;
console.error("[AMQP] error", err);
amqpConn.close();
return true;
}
setInterval(function() {
publish("", "jobs", new Buffer.from("work work work"));
}, 1000);
start();
PUBLISHER
Set up a connection
var amqp = require("amqplib/callback_api");
var amqpConn = null;
function start() {
  amqp.connect(process.env.CLOUDAMQP_URL + "?heartbeat=60", function(err, conn) {
if (err) {
console.error("[AMQP]", err.message);
return setTimeout(start, 1000);
}
conn.on("error", function(err) {
if (err.message !== "Connection closing") {
console.error("[AMQP] conn error", err.message);
}
});
conn.on("close", function() {
console.error("[AMQP] reconnecting");
return setTimeout(start, 1000);
});
console.log("[AMQP] connected");
amqpConn = conn;
whenConnected();
});
}
function whenConnected() {
startPublisher();
startWorker();
}
The function whenConnected calls two functions, one function that starts the publisher
and one that starts the worker (the consumer).
function startPublisher() {
  amqpConn.createConfirmChannel(function(err, ch) {
if (closeOnErr(err)) return;
ch.on("error", function(err) {
console.error("[AMQP] channel error", err.message);
});
ch.on("close", function() {
console.log("[AMQP] channel closed");
});
pubChannel = ch;
while (true) {
var m = offlinePubQueue.shift();
if (!m) break;
publish(m[0], m[1], m[2]);
}
});
}
offlinePubQueue is an internal queue for messages that could not be sent when the
application was offline. The application will keep an eye on this queue and try to resend
any messages added to it.
Publish
function publish(exchange, routingKey, content) {
  try {
    pubChannel.publish(exchange, routingKey, content, { persistent: true },
      function(err, ok) {
        if (err) {
          console.error("[AMQP] publish", err);
          offlinePubQueue.push([exchange, routingKey, content]);
          pubChannel.connection.close();
        }
});
} catch (e) {
console.error("[AMQP] publish", e.message);
offlinePubQueue.push([exchange, routingKey, content]);
}
}
The publish function publishes a message to an exchange with a given routing key. If
an error occurs, the message will be added to the internal queue, offlinePubQueue.
CONSUMER
amqpConn.createChannel creates a channel on the connection. ch.assertQueue creates
a queue if it does not already exist. ch.consume sets up a consumer with a callback to
be invoked with each message it receives. The callback in this example is processMsg,
which processes the message by calling the work function and waiting for it to finish.
The work function includes the handling of the message information and the creation
of the PDF.
function closeOnErr(err) {
if (!err) return false;
console.error("[AMQP] error", err);
amqpConn.close();
return true;
}
setInterval(function() {
publish("", "jobs", new Buffer.from("work work work"));
}, 1000);
start();
A new message is published every second. A default exchange, identified by the empty
string (“”) is used. The default exchange means that messages are routed to the queue
with the name specified by routing_key, if it exists. (The default exchange is a direct
exchange with no name.) More information about Node.js and CloudAMQP can be found
on the CloudAMQP web page.
PART ONE
When running the full code given below, a connection is established between the
RabbitMQ instance and your application. Queues and exchanges will be declared and
created if they do not already exist and finally, a message is published to the message
queue. The consumer subscribes to the queue, and messages are handled one by one
and sent to the PDF processing method.
A default exchange, identified by the empty string (“”) is used. By using the default
exchange, messages are routed to the queue with the name matching the routing_key,
if it exists.
# example_publisher.py
import pika, os, logging
logging.basicConfig()
# Read the connection URL from the environment (falls back to a local broker)
url = os.environ.get('CLOUDAMQP_URL', 'amqp://guest:guest@localhost/%2f')
params = pika.URLParameters(url)
params.socket_timeout = 5
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue='pdfprocess')
# send a message
channel.basic_publish(exchange='', routing_key='pdfprocess', body='User information')
print ("[x] Message sent to consumer")
connection.close()
# example_consumer.py
import pika, os, time
def pdf_process_function(msg):
    print(" PDF processing")
    print(" [x] Received " + str(msg))
PUBLISHER
url = os.environ.get('CLOUDAMQP_URL', 'amqp://guest:guest@localhost/%2f')
params = pika.URLParameters(url)
params.socket_timeout = 5
Load the client library and set up some configuration parameters. Pika's
DEFAULT_SOCKET_TIMEOUT is 0.25 s; we recommend raising this to about 5 s to avoid
connection timeouts, params.socket_timeout = 5. Other connection parameter options
for Pika can be found here: http://pika.readthedocs.org/en/latest/modules/parameters.html
Set up a connection
# Connect to CloudAMQP
connection = pika.BlockingConnection(params)
Start a channel
channel = connection.channel()
Declare a queue
channel.queue_declare(queue='pdfprocess')
Publish a message
channel.basic_publish(exchange='', routing_key='pdfprocess', body='User information')
Close the connection
connection.close()
The connection will be closed after the message has been published.
CONSUMER
def pdf_process_function(msg):
    print(" PDF processing")
    print(" [x] Received " + str(msg))
    time.sleep(5)  # delays for 5 seconds
    print(" PDF processing finished")
    return
The callback function is called once per message published to the queue. That function
will in turn call a worker function that simulates the PDF-processing.
Consume
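A minimal consume sketch, assuming the pdfprocess queue from the publisher and Pika's blocking consumer API:
# Callback invoked by Pika for each delivered message
def callback(ch, method, properties, body):
    pdf_process_function(body)

# Subscribe to the pdfprocess queue and start the blocking consume loop
channel.basic_consume(queue='pdfprocess', on_message_callback=callback, auto_ack=True)
channel.start_consuming()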
start_consuming starts to consume messages from the queue.
More information about Python and CloudAMQP can be found on the CloudAMQP
web page. We also recommend checking the CloudAMQP web page for recommendations
if you are using the Celery task queue.
PART ONE
THE MANAGEMENT INTERFACE
Queues, connections, channels, exchanges, users, and user permissions are
handled - created, deleted and listed - from a web browser through the RabbitMQ
management interface plugin. Additionally, message rates can be monitored
and sending or receiving messages can be performed manually via the interface. The
interface plugin consists of a single static HTML page that makes background queries to
the HTTP API for RabbitMQ. The management interface is useful when debugging
applications or when an overview of the whole system is required.
All tabs in the management menu are explained in this chapter, which includes
screenshots of overview, connections, and channels along with exchanges, queues, and
admin including users and permissions. A simple example will also demonstrate how
to set up a queue with an exchange and add a binding between them.
Concepts
OVERVIEW
The overview in Figure 13 shows two graphs; one graph for queued messages and
one with the message rate. The time interval shown in the graph can be changed by
pressing the text last minute above the graph. Information about all different statuses
for messages can be found by pressing “?”.
Queued messages
This graph shows the total number of queued messages for all queues. Ready shows
the number of messages that are available to be delivered. Unacked are the number of
messages for which the server is waiting for acknowledgment.
Message rate
Message rate is a graph with the rate of how fast the messages are being handled.
Publish shows the rate at which messages are entering the server and Confirm shows
the rate at which the server is confirming messages.
Global Count
This represents the total number of connections, channels, exchanges, queues and
consumers for ALL virtual hosts to which the current user has access.
Nodes
Nodes show information about the different nodes in the RabbitMQ cluster or infor-
mation about one single node if only a single node is used. There is also information
about server memory, the number of Erlang processes per node, and other node-specific
information here. Info shows further information about the node and enabled plugins.
Import/export definitions
Connections
The connections tab (Figure 16) shows the connections established to the RabbitMQ
server. vhost shows in which vhost the connection operates and username shows the
user associated with the connection. Channels displays the number of channels using
the connection. SSL/TLS indicate whether the connection is secured with SSL or not.
Figure 17 - Connection information for a specific connection.
More information about the attributes associated with a connection can be found in the
manual page for rabbitmqctl, the command line tool for managing a RabbitMQ broker.
Channels
The channel tab (Figure 18) shows information about all the current channels. The
vhost shows in which vhost the channel operates and the username shows the user
associated with the channel. Mode shows the channel guarantee mode, which can be either confirm or transactional.
When a channel is in confirm mode, both the broker and the client count messages. The
broker then confirms messages as it handles them. Confirm mode is activated once the
confirm.select method is used on a channel.
Figure 18 - The Channels tab
Clicking on one of the channels provides a detailed overview of that specific channel
(Figure 19). The message rate and the number of logical consumers retrieving messages
from the channel are also displayed.
Figure 19 - Detailed information about a specific channel.
More information about the attributes associated with a channel can be found in the
manual page for rabbitmqctl, which is the command line tool for managing a RabbitMQ
broker.
EXCHANGES
All exchanges can be listed from the exchange tab (Figure 20). Virtual host shows
the vhost for the exchange, type is the exchange type such as direct, topic, headers and
fanout. Features shows the parameters for the exchange (e.g., D stands for durable, and
AD for auto-delete). Features and types can be specified when the exchange is created.
In this list, there are some amq.* exchanges and the default (unnamed) exchange,
which are created by default.
Figure 20 - The exchanges tab in the RabbitMQ management interface.
Clicking on the exchange name displays a detailed page about the exchange (Figure
21). Adding bindings to the exchange and viewing existing bindings are also
actions that can be taken in this area, as well as publishing a message to the exchange
or deleting the exchange.
QUEUES
The queue tab shows the queues for all or one selected vhost (Figure 22). Queues
may also be created from this area. Queues have different parameters and arguments
depending on how they were created. The features column shows the parameters that
belong to the queue, such as:
• Durable queue - A durable queue ensures that RabbitMQ never loses the queue.
• Message TTL - The time a message published to a queue can live before being
discarded.
• Max length - How many ready messages a queue can hold before it starts to drop
them.
• Max length bytes - The total size of ready messages a queue can hold before it
starts to drop them.
Clicking on any chosen queue from the list of queues will show all information about
it, as shown in Figure 23.
The first two graphs include the same information as the overview, however, they
show the number and rates for this specific queue.
Figure 23 - Specific information about a single queue.
CONSUMERS
The consumers field shows the consumers and channels that are connected to the
queue.
Bindings
All active bindings to the queue are shown under bindings. New bindings to queues
can be created here, and a queue can also be unbound from an exchange (Figure 25).
Figure 25 - The bindings interface.
Publish message
Publishing a message can be performed manually to the queue from this area. The
message will be published to the default exchange with the queue name as its routing
key, ensuring that the message will be sent to the proper queue. It is also possible to
publish a message to an exchange from the exchange view.
Get message
Manually inspecting the message in the queue can be done in this area. Get message
gets the message available for review, while marking it as re-queue will cause RabbitMQ
to place it back in the queue in the same order.
A queue can be deleted by pressing the delete button or the queue can be emptied
with use of the purge function.
ADMIN
The Admin view (Figure 29) is where users are added and permissions for them
are changed. This area is also used to set up vhosts (Figure 30), policies, federation,
and shovels. Information about shovels can be found here: https://www.rabbitmq.com/
shovel.html while information about federation will be given in part two of this book.
Figure 30 - Virtual Hosts can be added from the Admin tab.
The example in Figure 31, shows how to create an example-queue and an exchange
called example.exchange (Figure 32).
The exchange and the queue are connected by a binding called pdfprocess (Figure 33).
Messages can be published (Figure 34) to the exchange with the routing key pdfprocess,
and will end up in the queue (Figure 35).
Many things are viewed and handled via the management interface, and studying it
will give an overview of the system and the relationship between functions.
Figure 33 - Click on the exchange or on the queue, go to “Add binding from this exchange” or
“Add binding to this queue”.
Figure 34 - Publish a message to the exchange with the routing key “pdfprocess”.
GET STARTED FOR FREE WITH CLOUDAMQP
Perfectly configured and optimized RabbitMQ clusters, ready in 2 minutes.
Custom Alarms • Free Plan Available • Easy Monitoring • 24/7 support • 99.95% SLA
www.cloudamqp.com
PART TWO
ADVANCED MESSAGE QUEUING
BEST PRACTICE
Various configurations affect your RabbitMQ cluster in different ways. Learn how to
optimize performance through the setup.
Some applications require high throughput while other applications are publishing
batch jobs that can be delayed for a while. Tradeoffs must be accepted between
performance and guaranteed message delivery. The goal when designing the
system should be to maximize combinations of performance and availability that make
sense for the specific application. Bad architectural design decisions and client-side
bugs can damage the broker or affect throughput.
Part 2 of this book talks about the dos and don'ts mixed with best practice for
two different usage categories: high availability and high performance (high
throughput). This part also mentions features in RabbitMQ that CloudAMQP gives extra
attention to, such as how to migrate a cluster with queue federation.
PART TWO
RABBITMQ BEST PRACTICE
These are the RabbitMQ Best Practice recommendations based on the experience
CloudAMQP has gained while working with RabbitMQ. This includes information
on how to configure a RabbitMQ cluster for optimal performance and how to
configure it to get the most stable cluster. Queue size, common mistakes, lazy queues,
pre-fetch values, connections and channels, HiPE, and number of nodes in a cluster are
some of the things discussed.
QUEUES
If suitable for the use case, keep queues as short as possible. A message published to an empty
queue will go straight out to the consumer as soon as the queue receives it (a persistent
message in a durable queue will go to disk). The recommendation is to keep fewer than
10,000 messages in one queue.
Many messages in a queue can put a heavy load on RAM usage. In order to free
up RAM, RabbitMQ starts flushing (page out) messages to disk. The page out process
usually takes time and blocks the queue from processing messages when there are
many messages to page out. A large number of messages might have a negative impact
on the broker since the paging process deteriorates queuing speed.
It is time-consuming to restart a cluster with many messages because the index must
be rebuilt. It takes time to sync messages between nodes in the cluster after a restart.
Enable lazy queues to get predictable performance
Queues can become long for various reasons; consumers might crash, or be offline
due to maintenance, or they might simply be working slower than usual. Lazy queues
are able to support long queues (millions of messages). Lazy queues, added in RabbitMQ
3.6, write messages to disk immediately, thus spreading the work out over time instead
of taking the risk of a performance hit somewhere down the line. It provides more
predictable, smooth performance without any sudden drops, but at a cost: messages
are only loaded into memory when needed, thereby minimizing the RAM usage but
increasing the throughput time.
Lazy queues are recommended when many messages are sent at once (e.g., when
processing batch jobs) or if there is a risk that consumers will not consistently keep up
with the speed of the publishers. Disable lazy queues if high performance is required, if
queues are always short, or if a max-length policy exists.
Another recommendation for applications that often get hit by spikes of messages,
and where throughput is more important than anything else, is to set a max-length
on the queue. This keeps the queue short by discarding messages from the head of the
queue so that it never becomes larger than the max-length setting.
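Lazy mode and max-length can both be set through a policy or, as sketched below with Pika, as optional queue arguments at declaration time. The queue name and limit are example values; a policy is often preferable since it can be changed without redeclaring the queue.
import pika

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost/%2f'))
channel = connection.channel()

# A lazy queue that also drops messages from the head once 100,000 messages are queued
channel.queue_declare(queue='batch_jobs', durable=True, arguments={
    'x-queue-mode': 'lazy',
    'x-max-length': 100000
})
connection.close()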
Number of queues
Queues are single-threaded in RabbitMQ, with one queue able to handle up to about
50,000 messages. Better throughput is achieved on a multi-core system if multiple
queues and multiple consumers are used. Optimal throughput is achieved with as many
queues as cores on the underlying node(s).
The RabbitMQ management interface collects and calculates metrics for every queue
in the cluster. This might slow down the server if there are thousands upon thousands
of active queues and consumers. The CPU and RAM usage may also be affected nega-
tively with the use of too many queues.
Split queues over different cores
Queue performance is limited to one CPU core or hardware thread because a queue is
single threaded. Better performance is the result if queues are split over different cores
and into different nodes when using a RabbitMQ cluster. Routing messages to multiple
queues results in a much higher overall performance.
RabbitMQ queues are bound to the node where they are first declared. All messages
routed to a specific queue will end up on the node where that queue resides. It is
possible to split queues evenly between nodes manually, but the downside is having to
remember where each queue is located.
Two plugins that help if there are multiple nodes or a single node cluster with
multiple cores are the consistent hash exchange plugin and RabbitMQ Sharding.
The consistent hash exchange plugin allows for use of an exchange to load-bal-
ance messages between queues. Messages sent to the exchange are distributed consis-
tently and equally across many queues based on the routing key of the message. The
plugin creates a hash of the routing key and spreads the messages out between queues
that have a binding to that exchange. Doing this routing manually could quickly become
problematic, since it would require adding information about the number of queues and
their bindings into the publisher.
The consistent hash exchange plugin is used to get maximum use of many cores in
the cluster. Note that it is important to consume from all queues. Read more about the
consistent hash exchange plugin here: https://github.com/rabbitmq/rabbitmq-consistent-hash-exchange.
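As a sketch of how the plugin is typically used (assuming the rabbitmq_consistent_hash_exchange plugin is enabled on the broker; the exchange and queue names are examples), the exchange type is x-consistent-hash and the binding key is a numeric weight rather than a routing pattern:
import pika

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost/%2f'))
channel = connection.channel()

channel.exchange_declare(exchange='jobs.hash', exchange_type='x-consistent-hash')

# The binding key is a weight: queues bound with "1" get roughly equal shares of the messages
for name in ('jobs_1', 'jobs_2', 'jobs_3'):
    channel.queue_declare(queue=name)
    channel.queue_bind(queue=name, exchange='jobs.hash', routing_key='1')

# Messages with the same routing key always hash to the same queue
channel.basic_publish(exchange='jobs.hash', routing_key='customer-4711', body='resize job')
connection.close()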
RabbitMQ sharding
The RabbitMQ sharding plugin does the partitioning of queues automatically. Once an
exchange is defined as sharded, the supporting queues are automatically created on every
cluster node and messages are sharded across them. The plugin presents one queue to the
consumer, but it could be many queues running behind it in the background.
Don’t set names on temporary queues
Giving a queue a name is important when sharing the queue between producers and
consumers, but it is not important when using temporary queues. Instead, allow the
server to choose a random queue name, or modify the RabbitMQ policies.
A queue name starting with amq. is reserved for internal use by the broker.
Client connections can fail and potentially leave unused queues behind. Leaving too
many queues behind might affect the performance of the system. There are three ways
to have queues deleted automatically.
The first option is to set a TTL (time-to-live) policy on the queue. A TTL policy of
28 days will delete queues that have not had messages consumed from them in the last
28 days.
Another option is an auto-delete queue, which is deleted when its last consumer has
canceled or when the channel/connection is closed (or when it has lost the TCP connec-
tion with the server).
Finally, an exclusive queue is one that is only used (consumed from, purged, deleted,
etc.) by its declaring connection. Exclusive queues are deleted when their declaring
connection is closed or gone (due to underlying TCP connection loss or other circum-
stances).
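A hedged Pika sketch of a temporary queue: passing an empty queue name asks the broker to generate one, and the exclusive flag ties the queue's lifetime to the declaring connection.
import pika

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost/%2f'))
channel = connection.channel()

# Let the server pick a random name; the queue is exclusive to this connection
# and is deleted automatically when the connection closes
result = channel.queue_declare(queue='', exclusive=True)
temporary_queue = result.method.queue
print("Server-generated queue name:", temporary_queue)
connection.close()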
Queues can have zero or more priority levels. Keep in mind that each priority level
uses an internal queue on the Erlang VM, which means that it takes up resources. In
most use cases, it is sufficient to have no more than five priority levels, which keeps
resource use manageable.
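A priority queue is created by setting the x-max-priority argument when the queue is declared; a short illustrative Pika sketch (the queue name and values are examples):
import pika

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost/%2f'))
channel = connection.channel()

# A queue with five priority levels (1-5)
channel.queue_declare(queue='tasks', arguments={'x-max-priority': 5})

# Messages published with a higher priority are delivered before lower-priority ones
channel.basic_publish(exchange='', routing_key='tasks', body='urgent task',
                      properties=pika.BasicProperties(priority=5))
connection.close()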
How to handle the payload size of messages sent to RabbitMQ is a common question
by developers. Avoid sending very large files in messages, but also keep in mind that the
rate of messages/second can be a larger bottleneck than the message size itself. Sending
multiple small messages might be a bad alternative. A better idea could be to bundle
the small messages into one larger message and let the consumer split it up. However,
if you bundle multiple messages, remember that this might affect the processing time. If
one of the bundled messages fails, will all of them need to be re-processed? Bandwidth
and architecture will dictate the best way to set up messages with regard to payload size.
CONNECTIONS AND CHANNELS
Each connection uses about 100 KB of RAM (and even more, if TLS is used). Thousands
of connections can be a heavy burden on a RabbitMQ server. In a worst case scenario,
the server can crash due to running out of memory. Try to keep connection/channel
count low.
Don’t share channels between threads as most clients don’t make channels thread-
safe (it would have a serious negative effect on the performance).
Don’t open and close connections or channels repeatedly, as doing so will create a
higher latency because more TCP packages have to be sent and received.
The handshake process for an AMQP connection is actually quite involved and requires
at least seven TCP packets (more if TLS is used). The AMQP protocol has a mechanism
called channels that “multiplexes” a single TCP connection. It is recommended that
each process only create one TCP connection with multiple channels in that connection
for different threads. Connections should be long-lived so that channels can be opened
and closed more frequently, if required. Even channels should be long-lived if possible.
Do not open a channel every time a message is published. Best practice includes the
re-use of connections and multiplexing a connection between threads with channels,
when possible.
• AMQP connection: 7 TCP packages
• AMQP channel: 2 TCP packages
• AMQP publish: 1 TCP package (more for larger messages)
• AMQP close channel: 2 TCP packages
• AMQP close connection: 2 TCP packages
Don’t open and close connections or channels repeatedly - doing that means a higher
latency, as more TCP packages have to be sent and received.
Use separate connections for the publisher and the consumer; otherwise, messages
may not be consumed as expected. This is especially true if the connection is in flow
control, which will constrict the message flow even more.
Another thing to keep in mind is that RabbitMQ may cause back pressure on the
TCP connection when the publisher is sending too many messages to the server. When
consuming on the same TCP connection, the server might not receive the message
acknowledgments from the client, affecting the performance of message consumption
and the overall server speed.
Another effect of having a large number of connections and channels is that the per-
formance of the RabbitMQ management interface will slow down. Metrics have to be
collected, analyzed and displayed for every connection and channel, which consumes
server resources.
Messages in transit might get lost in an event of a connection failure and may need
to be retransmitted. Acknowledgments let the server and clients know when to retrans-
mit messages. The client can either ack(knowledge) the message when it receives it, or
when the client has completely processed the message. Acknowledgment has a
performance impact, so for the fastest possible throughput, manual acks should be disabled.
Publish confirm is the same; the server acks when it has received a message from a
publisher. Publish confirm also has a performance impact, but keep in mind that it’s
required if the publisher needs messages to be processed at least once.
UNACKNOWLEDGED MESSAGES
All unacknowledged messages must reside in RAM on the servers. Too many unac-
knowledged messages will eventually use all system memory. An efficient way to limit
unacknowledged messages is to limit how many messages the clients pre-fetch. Read
more about this in the pre-fetch section.
Prepare for broker restarts, broker hardware failure, or broker crashes. To ensure
that messages and broker definitions survive restarts, ensure that they are on disk.
Messages, exchanges, and queues that are not durable and persistent are lost during a
broker restart.
Queues should be declared as “durable” and messages should be sent with delivery
mode “persistent”.
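In Pika this translates into a durable queue declaration plus delivery mode 2 on the published message; a minimal sketch (the queue name is an example):
import pika

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost/%2f'))
channel = connection.channel()

# The queue definition survives a broker restart...
channel.queue_declare(queue='jobs', durable=True)

# ...and so does the message, because it is published as persistent (delivery mode 2)
channel.basic_publish(exchange='', routing_key='jobs', body='important job',
                      properties=pika.BasicProperties(delivery_mode=2))
connection.close()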
Persistent messages are heavier, as they have to be written to disk. Lazy queues have
the same effect on performance, even for transient messages. For high performance,
use transient messages, and use temporary or non-durable queues.
Connecting to RabbitMQ over AMQPS is the AMQP protocol wrapped in TLS. TLS has
a performance impact since all traffic must be encrypted and decrypted. For maximum
performance, use VPC peering instead; the traffic is then private and isolated without
involving the AMQP client/server.
CloudAMQP configures the RabbitMQ servers so that they accept and prioritize fast
but secure encryption ciphers.
PRE-FETCH
The pre-fetch value is used to specify how many unacknowledged messages can be sent
to a consumer at the same time. It is used to get as much out of the consumers as possible.
From RabbitMQ.com - “The goal is to keep the consumers saturated with work, but
to minimize the client’s buffer size so that more messages stay in RabbitMQ’s queue
and are thus available for new consumers or just to be sent out to consumers as they
become free.”
The RabbitMQ default pre-fetch setting gives clients an unlimited buffer, meaning
that RabbitMQ by default sends as many messages as it can to any consumer that looks
ready to accept them. Messages that are sent are cached by the RabbitMQ client library
in the consumer until processed. Pre-fetch limits how many messages the client can
receive before acknowledging a message. All pre-fetched messages are removed from
the queue and invisible to other consumers.
A pre-fetch count that is too small may hurt performance, since RabbitMQ is typically
waiting to get permission to send more messages. Figure 36 illustrates long idling time.
In the example, we have a QoS pre-fetch setting of one (1). This means that RabbitMQ
will not send out the next message until after the round trip completes (deliver, process,
acknowledge). The round-trip time in this picture is 125 ms in total, with a processing
time of only 5 ms.
A large pre-fetch count, on the other hand, could deliver many messages to one
single consumer, and keep other consumers in an idling state, as illustrated in Figure
37.
Figure 37 - A large prefetch count could deliver lots of messages to one single
consumer.
If there is a single consumer, or only a few consumers processing messages quickly,
prefetching many messages at once is good practice, as it keeps the clients as busy as
possible. If the processing time is about the same for every message and network behavior
remains the same, simply take the total round-trip time and divide it by the processing
time on the client for each message to get an estimated pre-fetch value.
With many consumers and a short processing time, a lower pre-fetch value is
recommended. A value that is too low will keep the consumers idling, since they wait
too long for messages to arrive. A value that is too high may keep one consumer busy
while other consumers are held in an idling state.
If the system has many consumers, and/or a long processing time, the best practice
is to set the pre-fetch count to one (1) to evenly distribute the messages among all
resources.
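The pre-fetch value is set per channel (or per consumer) with basic.qos; below is a hedged Pika sketch of a consumer with a pre-fetch of one and manual acknowledgments (the queue name is an example):
import pika

connection = pika.BlockingConnection(pika.URLParameters('amqp://guest:guest@localhost/%2f'))
channel = connection.channel()
channel.queue_declare(queue='jobs', durable=True)

# With many consumers and/or long processing times, a pre-fetch of 1 spreads messages evenly
channel.basic_qos(prefetch_count=1)

def callback(ch, method, properties, body):
    # ... process the message here ...
    ch.basic_ack(delivery_tag=method.delivery_tag)  # manual ack, so the pre-fetch limit applies

channel.basic_consume(queue='jobs', on_message_callback=callback)
channel.start_consuming()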
Please note that if a client is set up to auto-ack messages, the pre-fetch value has
no effect.
A typical mistake is to have an unlimited pre-fetch, where one client receives all
messages and runs out of memory and crashes, and then all messages are re-delivered
again.
HIPE
Enabling HiPE increases server throughput at the cost of increased startup time.
Enabling HiPE means that RabbitMQ is compiled at startup. The throughput increases
from 20 to 80 percent according to benchmark tests. The drawback of HiPE is the
startup time increases by about 1-3 minutes. HiPE is still marked as experimental in
RabbitMQ’s documentation.
NUMBER OF NODES IN A CLUSTER
Creating a CloudAMQP instance with one node results in one single high performance
node, as the messages are not mirrored between multiple nodes.
Creating a CloudAMQP instance with two nodes results in half the performance
compared to the same plan size for a single node. The nodes are located in different
availability zones and queues are automatically mirrored between availability zones.
Two nodes result in high availability since one node might crash or be marked as
impaired, but the other node will still be up and running, ready to receive messages.
With three nodes created in a CloudAMQP instance, the result is one quarter of the
performance compared to the same plan size for a single node. The nodes are located
in different availability zones and queues are automatically mirrored between avail-
ability zones. In the case of three nodes, the pause-minority partition handling strategy is also applied: during a network partition (net-split), the nodes on the minority side pause themselves, which protects data from becoming inconsistent and reduces duplicate deliveries compared with letting every node keep responding.
A common mistake on CloudAMQP clusters is that users create a new vhost but forget
to enable an HA-policy for it. Messages will therefore not be synced between nodes.
Direct exchanges are the fastest exchange type to use. Many bindings mean that RabbitMQ has to spend more time calculating where to send each message.
PLUGINS
Some plugins might consume a lot of CPU or use a large amount of RAM, which
makes them less than ideal on a production server. Disable plugins that are not being
used via the control panel in CloudAMQP.
The RabbitMQ management interface collects and stores stats for all queues, con-
nections, channels, and more, which might affect the broker in a negative way if, for
example, there are too many queues. Avoid setting the RabbitMQ management statis-
tics rate to ‘detailed’ as it could affect performance.
Stay up-to-date with the latest stable versions of RabbitMQ and Erlang. CloudAMQP
extensively tests new major versions before release; therefore, the most recommended
version is the default in the dropdown menu for a new cluster.
Use the latest recommended version of client libraries and check the documentation
or inquire directly with any questions regarding which library to use.
DEAD LETTERING
TTL
Declaring a queue with the x-message-ttl property means that messages will be
discarded from the queue if they haven’t been consumed within the time specified.
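A hedged sketch of how a queue might be declared with both a per-queue message TTL and a dead letter exchange, using the Python client pika (the exchange name, queue names, and connection URL are made up for illustration):

import pika

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost/%2f"))  # placeholder URL
channel = connection.channel()

# Expired (or rejected) messages are re-published to the dead letter exchange
channel.exchange_declare(exchange="dlx", exchange_type="fanout", durable=True)
channel.queue_declare(queue="dead_letters", durable=True)
channel.queue_bind(queue="dead_letters", exchange="dlx", routing_key="")

channel.queue_declare(queue="work", durable=True, arguments={
    "x-message-ttl": 60000,           # discard messages after 60 seconds
    "x-dead-letter-exchange": "dlx",  # send expired messages here instead of dropping them
})
connection.close()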
P A R T T W O
This section is a summary of recommended configurations for high performance, to maximize the message-passing throughput in RabbitMQ.
For optimal performance, keep queues as short as possible at all times. Longer queues impose more processing overhead; ideally, queues should stay close to zero (0).
A recommended feature for applications that often receive message spikes is to set a max-length on the queue. This keeps the queue short by discarding messages from the head of the queue so that it never grows larger than the max-length setting.
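As an illustration of this setting (the queue name and limit are invented, and max-length can also be applied through a policy instead of a queue argument), a capped queue could be declared with pika like this:

import pika

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost/%2f"))  # placeholder URL
channel = connection.channel()

channel.queue_declare(queue="spiky_events", durable=True, arguments={
    "x-max-length": 10000,  # discard messages from the head beyond 10 000
})
connection.close()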
Lazy queues mean that messages are automatically stored to disk, which slows down
the throughput. Note that CloudAMQP has lazy queues enabled by default.
Queues are single-threaded in RabbitMQ, and one queue can handle up to about 50,000 messages per second. On a multi-core system, better throughput is achieved by using multiple queues and consumers.
The RabbitMQ management interface collects and calculates metrics for every queue
in the cluster, which may slow down the server if thousands upon thousands of active
queues and consumers are on the system.
Split queues over different cores
Better performance is achieved by splitting queues into different cores, and into
different nodes if possible, as queue performance is limited to one CPU core.
Two plugins are recommended to help systems cope with multiple nodes, or with a single-node cluster that has multiple cores: the consistent hash exchange plugin and the RabbitMQ sharding plugin.
Acknowledgements and publish confirms both have an impact on performance. For the
fastest possible throughput, manual acks should be disabled, which will speed up the
broker by allowing it to “fire and forget” the message.
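A small sketch of the trade-off, assuming the Python client pika (the queue name is a placeholder): consuming with auto_ack enabled gives the highest throughput, at the price of losing messages if the consumer crashes before processing them.

import pika

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost/%2f"))  # placeholder URL
channel = connection.channel()
channel.queue_declare(queue="jobs")

def handle(ch, method, properties, body):
    print("processing", body)

# auto_ack=True lets the broker "fire and forget"; no ack round trip is needed
channel.basic_consume(queue="jobs", on_message_callback=handle, auto_ack=True)
channel.start_consuming()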
One node gives the highest throughput when compared to an HA cluster setup.
Messages and queues are not mirrored to other nodes.
HiPE increases server throughput, but keep in mind this will be at the cost of increased
startup time. Enabling HiPE causes RabbitMQ to be compiled at startup, which affects
the throughput by an increase of 20 to 80 percent, according to benchmark tests.
Some plugins might be great, but they also consume a lot of CPU or may use a high
amount of RAM. Because of this, they are not recommended for a production server.
Disable unused plugins.
P A R T T W O
For optimal performance, keep queues short whenever possible. Longer queues
impose more processing overhead, so keep queues around zero (0) for optimal perfor-
mance.
Lazy queues write messages to disk immediately, which spreads the work out over
time instead of risking a performance hit somewhere down the line. The result of a lazy
queue is a more predictable, smooth performance curve without sudden drops.
Enhance the availability of data by using replicas, so that clients can find messages even during system failures. Two (2) nodes are optimal for high availability, and CloudAMQP locates each node in a cluster in a different availability zone (AWS). Additionally, queues are automatically mirrored and replicated (HA) between availability zones. Message queues are by default located on one single node, but they are visible and reachable from all nodes.
When a node fails, the auto-failover mechanism distributes tasks to other nodes in
the cluster. RabbitMQ instances include a load balancer, which makes broker distribution transparent to the message publishers. The maximum failover time in CloudAMQP is
60s (the endpoint health is measured every 30s, and the DNS TTL is set to 30s).
Protect the setup against a region-wide outage by setting up two clusters in different regions and using federation between them. Federation is a method by which a software system can benefit from having multiple RabbitMQ brokers distributed across different machines.
In order to avoid losing messages in the broker, prepare for broker restarts, broker
hardware failure, and broker crashes by ensuring that they are on disk. Messages,
exchanges, and queues that are not durable and persistent are lost during a broker
restart.
Make sure the queue is declared “durable” and messages are sent with delivery mode
“persistent”.
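A brief sketch with the Python client pika (the queue name and message body are placeholders) showing a durable queue and a message published with the persistent delivery mode:

import pika

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost/%2f"))  # placeholder URL
channel = connection.channel()

channel.queue_declare(queue="task_queue", durable=True)  # queue survives broker restarts
channel.basic_publish(
    exchange="",
    routing_key="task_queue",
    body="important task",
    properties=pika.BasicProperties(delivery_mode=2),  # 2 = persistent message
)
connection.close()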
HiPE increases server throughput, but keep in mind this will be at the cost of increased
startup time. Enabling HiPE causes RabbitMQ to be compiled at startup, which affects
the startup time with an increase of one to three minutes. HiPE might affect uptime
during a server restart, which affects availability.
Setting the rate mode of RabbitMQ management statistics to ‘detailed’ has a serious
impact on performance. This setting is not recommended in production.
Each priority level uses an internal queue on the Erlang VM, which consumes
resources. In most uses it is sufficient to have no more than five (5) priority levels.
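As a hedged example (the queue name and values are invented), a priority queue with five levels can be declared and used with pika like this:

import pika

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost/%2f"))  # placeholder URL
channel = connection.channel()

# No more than five priority levels, per the recommendation above
channel.queue_declare(queue="prioritized", durable=True,
                      arguments={"x-max-priority": 5})
channel.basic_publish(exchange="", routing_key="prioritized", body="urgent",
                      properties=pika.BasicProperties(priority=5))
connection.close()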
P A R T T W O
QUEUE FEDERATION
RabbitMQ supports federated queues, which have several uses: when collecting messages from multiple clusters to a central cluster, when distributing the load of one queue to multiple other clusters, and/or when migrating to another cluster without stopping all producers/consumers.
See Figure 38 for an illustration of the concept of upstream and downstream servers.
The upstream server is where messages are initially published, while the downstream
server is where the messages are forwarded. Think of the downstream cluster as sub-
scribing to messages from the upstream cluster.
In this example, there is already one cluster set up in Amazon US-East-1 (named:
dupdjffe, as seen in Figure 39). This cluster is going to be migrated to a cluster in
EU-West-1 (in the pictures named aidajcdt). In this case, the server in US-East-1 will
be defined as the upstream server of EU-West-1.
Some queues in the cluster in US-East-1 are going to be migrated via federation, the
metric-queues (metric.cpu and metric.memory).
Figure 39 - Migration of metric.cpu and metric.memory.
Navigate to Admin -> Policies and press Add/update to create the policy
(Figure 40). A policy can apply to an upstream set or a single upstream of
exchanges and/or queues. In this example, it is applied to all upstream queues
with federation-upstream-set set to all.
Figure 40 - Set up the federation to the upstream.
3. Start by setting up the new cluster, in this case the cluster in EU-West-1.
Leave the expiry time and TTL blank, which means that messages will stay forever.
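The same federation setup can also be scripted against the management HTTP API instead of the web interface. The following is only a sketch under assumptions: the hostnames, credentials, vhost, and upstream/policy names are placeholders, and it uses the standard /api/parameters/federation-upstream and /api/policies resources of the management plugin.

import requests

downstream_api = "https://aidajcdt.example.com/api"  # placeholder downstream cluster
auth = ("user", "password")                          # placeholder credentials
upstream_uri = "amqps://user:password@dupdjffe.example.com/vhost"  # placeholder upstream

# Define the upstream on the downstream cluster (federation plugin must be enabled);
# expiry time and TTL are left unset, so messages stay forever
requests.put(f"{downstream_api}/parameters/federation-upstream/%2f/us-east-1",
             auth=auth, json={"value": {"uri": upstream_uri}})

# Apply a policy that federates the metric queues from the upstream set "all"
requests.put(f"{downstream_api}/policies/%2f/federate-metrics",
             auth=auth,
             json={"pattern": "^metric\\.", "apply-to": "queues",
                   "definition": {"federation-upstream-set": "all"}})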
P A R T T W O
RABBITMQ PROTOCOLS
RabbitMQ is an open-source, multi-protocol messaging broker. This means that RabbitMQ supports several messaging protocols, including open and standardized ones such as AMQP, HTTP, STOMP, MQTT, and WebSockets/Web-STOMP.
Message queuing protocols have features in common, so choosing the right one
comes down to the use case or scenario. In the simplest case, a message queue is using
an asynchronous protocol in which the sender and the receiver do not operate on the
message at the same time.
The protocol defines the communication between the client and the server and has no
impact on the message itself. One protocol can be used when publishing while another
can be used to consume. The MQTT protocol, with its minimal design, is perfect for embedded systems, mobile phones, and other memory- and bandwidth-sensitive applications. While AMQP would work for the same task, MQTT is a more appropriate choice of protocol for this specific type of scenario.
When creating a CloudAMQP instance, all the common protocols are available by
default (exception: web-stomp/WebSockets is only enabled on dedicated plans).
AMQP
RabbitMQ was originally developed to support AMQP which is the “core” protocol
supported by the RabbitMQ broker. AMQP stands for Advanced Message Queuing
Protocol and it is an open standard application layer protocol. RabbitMQ implements
version 0.9.1 of the specification today, with legacy support for version 0.8 and 0.9.
AMQP was designed for efficient support of a wide variety of messaging applications
and communication patterns. AMQP is a more advanced protocol than MQTT, is more
reliable, and has better security support. AMQP also has features such as flexible routing,
durable and persistent queues, clustering, federation, and high availability queues. The
downside is that it is a more verbose protocol, depending on the solution implementation.
As with other message queuing protocols, the defining features of AMQP are message
orientation and queuing. Routing is another feature, which is the process by which
an exchange decides which queues to place messages on. Messages in RabbitMQ are
routed from the exchange to the queue depending on exchange types and keys. Reliabil-
ity and security are other important features of AMQP. RabbitMQ can be configured to
ensure that messages are always delivered. Read more in the Reliability Guide at www.
rabbitmq.com/reliability.html.
For more information about AMQP, check out the AMQP Working Group’s overview
page.
CloudAMQP AMQP assigned port number is 5672 or 5671 for AMQPS (TLS/SSL
encrypted AMQP).
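As a short sketch of AMQP 0.9.1 routing (the exchange, queue, and routing key names are invented for illustration, and the connection URL is a placeholder), a publisher can declare a direct exchange, bind a queue to it, and publish with a routing key using the Python client pika:

import pika

connection = pika.BlockingConnection(
    pika.URLParameters("amqps://user:password@host/vhost"))  # placeholder URL, port 5671
channel = connection.channel()

# Declare a direct exchange and a durable queue, then bind them with a routing key
channel.exchange_declare(exchange="logs", exchange_type="direct", durable=True)
channel.queue_declare(queue="error_logs", durable=True)
channel.queue_bind(queue="error_logs", exchange="logs", routing_key="error")

# Messages published with the routing key "error" are routed to the "error_logs" queue
channel.basic_publish(exchange="logs", routing_key="error", body="disk full")
connection.close()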
MQTT
The benefits of MQTT make a real difference for IoT and other extremely low-power devices. MQTT is very code-footprint efficient, with a strong focus on using minimal bandwidth. Because of its simplicity, it requires less effort to implement MQTT on a client than AMQP.
However, MQTT lacks authorization and error notifications from the server to clients,
which are significant limitations in some scenarios.
CloudAMQP MQTT assigned port number is 1883 (8883 for TLS wrapped MQTT). Use
the same default username and password as for AMQP.
STOMP
STOMP, Simple (or Streaming) Text Oriented Message Protocol, is a simple text-
based protocol used for transmitting data across applications. It is a less complex
protocol than AMQP, with more similarities to HTTP. STOMP clients can communicate
with almost every available STOMP message broker, which provides easy and wide-
spread messaging interoperability among many languages, platforms, and brokers. It is
for example possible to connect to a STOMP broker using a telnet client.
STOMP does not deal with queues and topics; instead, it uses a SEND semantic
with a destination string. RabbitMQ maps the message to topics, queues or exchanges
(other brokers might map onto something else understood internally). Consumers then
SUBSCRIBE to those destinations.
CloudAMQP STOMP assigned port number is 61613 (61614 for TLS wrapped STOMP).
HTTP
HTTP stands for Hypertext Transfer Protocol, an application-level protocol for dis-
tributed, collaborative, hypermedia information systems. HTTP is not a messaging
protocol. However, RabbitMQ can transmit messages over HTTP.
The RabbitMQ-management plugin provides an HTTP-based API for management
and monitoring of the RabbitMQ server.
Example of how to publish a message to the default exchange with the routing key
“my_key”:
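A sketch of such a request, here made with Python's requests library against the management HTTP API (the hostname and credentials are placeholders; %2f is the URL-encoded default vhost):

import requests

api = "https://example.rmq.cloudamqp.com/api"  # placeholder management API URL
auth = ("user", "password")                    # placeholder credentials

response = requests.post(f"{api}/exchanges/%2f/amq.default/publish", auth=auth, json={
    "properties": {},
    "routing_key": "my_key",
    "payload": "hello from the HTTP API",
    "payload_encoding": "string",
})
print(response.json())  # e.g. {"routed": true} if a queue was bound to the key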
Example of how to get one message from the queue “your_queue”. (This is not an
HTTP GET as it will alter the state of the queue.)
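A corresponding sketch (same placeholder host and credentials as above); note that the request is a POST, since fetching a message changes the state of the queue:

import requests

api = "https://example.rmq.cloudamqp.com/api"  # placeholder management API URL
auth = ("user", "password")

response = requests.post(f"{api}/queues/%2f/your_queue/get", auth=auth, json={
    "count": 1,
    "ackmode": "ack_requeue_false",  # remove the fetched message from the queue
    "encoding": "auto",
})
print(response.json())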
When jobs arrive in the queue faster than they are processed, and the queue starts growing in length, it is a good idea to spin up more workers. By polling the queue length via the HTTP API, workers can be spun up or taken down depending on the length.
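A minimal sketch of such a check (placeholder host, credentials, queue name, and threshold), reading the "messages" counter that the queue resource exposes:

import requests

api = "https://example.rmq.cloudamqp.com/api"  # placeholder management API URL
auth = ("user", "password")

queue = requests.get(f"{api}/queues/%2f/your_queue", auth=auth).json()
backlog = queue["messages"]  # ready + unacknowledged messages
if backlog > 1000:           # threshold chosen arbitrarily for the example
    print(f"Backlog of {backlog} messages - consider spinning up more workers")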
WEB-STOMP
To use Web-STOMP, first create at least one user with limited permissions, or a new vhost that can safely be exposed publicly, because the username/password must be included in your JavaScript, and an unrestricted user would be able to subscribe and publish to any queue or exchange.
Next, include sockjs.min.js and stomp.min.js in your HTML, for example from CDNJS, and create a client that connects with the limited user (the hostname, vhost, and credentials in the snippet below are placeholders):
<script src="//cdnjs.cloudflare.com/ajax/libs/stomp.js/2.3.3/stomp.min.js"></script>
// Create a STOMP client over SockJS and connect to the broker.
// The URL, credentials, and vhost below are placeholders; use the
// limited user and vhost created above.
var ws = new SockJS('https://your-host.cloudamqp.com/stomp');
var client = Stomp.over(ws);
client.debug = onDebug;
client.connect('web-user', 'password', onConnect, onError, 'your-vhost');
function onConnect() {
var id = client.subscribe("/exchange/web/chat", function(d) {
var node = document.createTextNode(d.body + '\n');
document.getElementById('chat').appendChild(node);
});
}
function sendMsg() {
var msg = document.getElementById('msg').value;
client.send('/exchange/web/chat', { "content-type": "text/plain" }, msg);
}
function onError(e) {
console.log("STOMP ERROR", e);
}
function onDebug(m) {
console.log("STOMP DEBUG", m);
}
P A R T T H R E E
RABBITMQ USER
STORIES
USER STORY
User stories from customers describing their experience using RabbitMQ in produc-
tion.
A small story about how RabbitMQ is used in the real world can always be of great value. The following are some interesting user stories from various sources that describe how RabbitMQ is used in production.
P A R T T H R E E
A MONOLITHIC SYSTEM
INTO MICROSERVICES
A growing digital parking service from Sweden is currently breaking down their monolithic application and working toward a microservices architecture. (Sept. 2017)
Founded in 2010, Parkster has become one of the fastest growing digital parking
services in Sweden. Their vision is to make it quick and easy to pay parking fees with
your smartphone, via your Parkster app, with SMS, or with voice. They want to see a
world where there is no need to guesstimate the required parking time or stand
in line waiting by a busy parking meter. It should be easy to pay for parking - for
everyone, everywhere. Moreover, Parkster doesn’t want the customer to pay more when
using tools of the future - that’s why there are no extra fees when you are using Park-
ster’s app for parking.
Like many other companies, Netflix among them, Parkster started out with a mono-
lithic architecture. They wanted to prove their business model before they went further.
A monolithic application means that the whole application is built as a single unit. All
code for a system is in a single codebase that is compiled together and produces a single
system.
Having one codebase seemed like the easiest and fastest solution at the time, and
solved their core business problems, which included connecting devices with people,
parking zones, billing, and payments. A few years later, they decided to break up the
monolith into multiple small codebases, which they did through multiple microservices
communicating via message queues.
Parkster tried out their parking service for the first time in Lund, Sweden. After that,
they rapidly expanded into more cities and introduced new features. The core model
grew and components became tightly coupled.
In the monolith system, deploying the codebase meant deploying everything at once.
One big codebase made it difficult to fix bugs and to add new features. A deep knowledge
was also required before attempting a single small code change, as no one wants to
add new code that could disrupt operation in some unforeseen way. One day they had
enough - the application had to be decoupled.
Application decoupling
Parkster is already enjoying the change: “It’s very nice to focus on a specific limited
part of the system instead of having to think about the entire system every time you do
something new or make changes. As we grow, I think we will benefit even more from
this change,” said Anders Davoust, a developer at Parkster.
Breaking down the codebase has also given the software developers freedom to use
whatever technologies made sense for a particular service. Different parts of the appli-
cation can evolve independently, be written in different languages, and/or be maintained by separate developer teams. For example, one part of the system uses MongoDB and
another part uses MySQL. Most code is written in Java, but parts of the system are
written in Clojure. Parkster is using the open-source system Kubernetes as a container
orchestration platform.
Another thing the staff at Parkster likes about their system today is that it can still be
operational even if part of the backend processing is delayed or broken. The entire system
will not break just because one small part is not operating. Breaking up the system into
autonomous components has meant that Parkster inherently becomes more resilient.
The message broker used between all microservices in Parkster is RabbitMQ. “It was
a simple choice - we had used RabbitMQ in other projects before we built Parkster
and we had a good experience with RabbitMQ.” The reason they went for CloudAMQP,
was because they felt that CloudAMQP had more knowledge about the management of
RabbitMQ. They simply wanted to put their focus on the product instead of spending
days configuring and handling server setups. CloudAMQP has been at the forefront
when it comes to RabbitMQ server configurations and optimization since 2012.
I asked what they like about CloudAMQP, and I received a quick answer: “I love the
support that CloudAMQP gives us, always quick feedback and good help”.
Now Parkster’s goal is to get rid of the old monolithic structure entirely and focus on
a new era where the whole system is built upon microservices.
P A R T T H R E E
HOW RABBITMQ
TRANSFORMED FARMBOT
FarmBot is an open-source robotic hardware kit designed for gardeners, researchers, and educators to interact with agricultural projects in a more efficient way.
FarmBot uses physical sensors and actuators that require a bridge between the
physical garden and the software layer. In 2017, user demand for real-time response
to requests, shorter development time for new features, and a significant improvement
in system up-time led Farmbot to RabbitMQ - the world’s most popular open-source
message broker.
FarmBot’s Genesis version can plan, plant, and manage over 30 different crops such
as potatoes, peas, squash, and more. Able to manage an area of 2.9 meters × 1.4 meters
and a maximum plant height of 0.5 meters, FarmBot Genesis uses manual controls that
users move and operate to perform a variety of tasks.
Thanks to the game-like interface that drags and drops plants into a map, FarmBot
users immediately grasp the concept of controlling the entire growing season from
start to finish. FarmBot tools and peripherals perform tasks such as watering plants,
scaring birds away, taking photos of veggies, turning the lights on for a nighttime
harvest, or simply impressing friends and neighbors!
On the other side of the coin, FarmBot identifies and eliminates weeds through image
analysis via an onboard camera. To practice good conservation efforts, watering is done
automatically after taking into account the degree of soil moisture and the weather
forecast. There is a web app version of FarmBot to allow users to control garden tasks
via smartphone or tablet.
ARCHITECTURE OVERVIEW
2. Web server handling requests between the field and the user
3. Web app (on mobile/iPad/browser etc.) that lets the user interact with the
system
NOW FOR A DEEP-DIVE
The firmware, named Farmduino, is the software responsible for interaction with the
real world. It runs on an Arduino microcontroller, is written in C++, and executes sensor
and actuator commands that allow users to drive the motors, the electro-irrigation
valve, and read the probe measurements.
An example of this is when FarmBot turns on the water valve or probes for soil
moisture. In this case, it is the device firmware that tells the peripherals directly when
and how to operate. Other parts of the system may request access to peripherals by
sending commands to the firmware.
The FarmBot API provides an HTTP-based REST API, which handles a number of responsibilities: it prevents data loss between reflashes by storing data in a centralized database, allows users to edit information when the device is offline, and validates data and controls access to it via authentication and authorization mechanisms. It also sends email notifications (such as password resets and critical errors) to
end-users. All other messaging is handled by a message broker - RabbitMQ, a distinctly
decoupled sub-system of the Web API. The REST API does not control FarmBot - device
control is handled by the Message Broker, CeleryScript and FarmBot JS.
RabbitMQ communicates with the devices in the field over AMQP and acts as a message queue for the backend services. The MQTT protocol is used for real-time events to the frontend user interface.
The frontend user interface allows users to control and configure FarmBot from any
desktop computer, laptop, tablet, or smartphone. Many users end up running their
own web server on-premise since it's all open source. FarmBot also provides a publicly
accessible server at https://my.farm.bot/ and a staging server at http://staging.farm.bot
for end users that are not familiar with Ruby on Rails application development.
WHY RABBITMQ?
In 2017, FarmBot users got the option to vote for new features to focus on. The vote came down in favor of auto-sync and better real-time support. This forced the FarmBot community to re-evaluate the architecture, and it also pushed them to try something new. FarmBot made the decision to transition their architecture to include RabbitMQ. In a unanimous decision by the FarmBot community, RabbitMQ was the obvious choice, in
part because it has:
• A good ecosystem, mind share, and ubiquity overall - e.g. it is not an obscure
choice
RabbitMQ is now an important component of the FarmBot web API, where it handles
various tasks including:
Because RabbitMQ is a real-time message broker, there is no need to check for new
messages. Messages, such as a user clicking the "move" button on the user interface,
are sent back and forth between client, device, and server without initiating requests.
THE REST IS SUCCESS
FarmBot's Carlino continues, "We really don't have time to manage our own services on AWS and
would much rather pay a premium to know that someone else is taking care of the
problem if there is one.”
Choosing the right technology and the right vendor often focuses on the best selection
to optimize developer time. To address this, CloudAMQP provides support around the
clock including built-in alarms that can be set up to detect issues with the server before
it affects the business.
The same goes for the decision to host the production servers on Heroku. Even though
there would be cost savings in hosting production on a traditional AWS/Kubernetes setup,
it's simply too much of a risk for a small company to save only a couple hundred dollars
a month. FarmBot’s Carlino added, “Every hour I spend thinking about infra is an hour
that I am not spending on feature development.”
User demand for real-time response to requests, shorter development time for new
features, and a significant improvement in system up-time all led FarmBot to choose
RabbitMQ and CloudAMQP.
As the leading hosting provider of RabbitMQ, CloudAMQP was able to help FarmBot quickly rework their old architecture into a message queue based system, transform-
ing the agri-tech app into a farming powerhouse.
As FarmBot leads the way in the innovation of how food is grown, CloudAMQP is
leading the RabbitMQ revolution through user success stories like this one. Is it time to create your own successful message queue architecture? Get in touch with
our friendly team of professionals today to determine the right RabbitMQ plan for your
future growth.
P A R T T H R E E
MICROSERVICE
ARCHITECTURE BUILT
UPON RABBITMQ
We at CloudAMQP rely on RabbitMQ in our everyday life; in fact, a huge number
of our events in the production environment pass through RabbitMQ. This
chapter gives a simple overview of the automated process behind CloudAMQP,
the polyglot workplace where microservices written in different languages communi-
cate through RabbitMQ.
CloudAMQP never had a traditional monolithic setup. It is built from scratch on
small, independent, manageable services that communicate with each other - micro-
services. These microservices are all highly decoupled and focused on their specific
task. This chapter gives an overview and a deeper insight into the automated process
behind CloudAMQP, describing some of our microservices and the use of RabbitMQ as a
message broker communicating between them.
BACKGROUND OF CLOUDAMQP
A few years ago, Carl Hörberg, the CEO of CloudAMQP, saw the need for a hosted
RabbitMQ solution. At the time, he was working at a consultancy company where he
was using RabbitMQ in combination with Heroku and AppHarbor. He was looking for a
hosted RabbitMQ solution himself, but he could not find any. Shortly after, he started
CloudAMQP, which entered the market in 2012.
Creating a CloudAMQP instance provides the option to choose a plan and how many
nodes to have. The cluster will behave a bit differently depending on the cluster setup.
The option to create your instance in a dedicated VPC and select RabbitMQ version is
also available.
A dedicated RabbitMQ instance can be created via the CloudAMQP control panel, or
by adding the CloudAMQP add-on from any of our integrated platforms, like Heroku,
IBM Cloud Catalogue, AWS marketplace, or Azure marketplace, just to mention a few.
Figure 43 - The automated process behind CloudAMQP
When a client creates a new dedicated instance, an HTTP request is sent from the
reseller to a service called CloudAMQP-API (1). The HTTP request includes all infor-
mation specified by the client: plan, server name, data center, region, number of nodes
etc., as shown in Figure 43 above. CloudAMQP-API handles the request, saves infor-
mation into a database (2), and finally, sends an account.create message to one of our
RabbitMQ-clusters (3).
Figure 44 - The microservice CloudAMQP-put-metrics
CloudAMQP has many other services communicating with those described above and
with the new server(s) created for the client.
CloudAMQP-server-metrics
CloudAMQP-alarm
Different services subscribe to server metrics data. One of these services is called
CloudAMQP-alarm. CloudAMQP-alarm checks the server metrics from RabbitMQ
against the alarm thresholds for the given server, and notifies the owner of the server
if needed. Users are able to enable/disable alarms such as CPU or memory, for example,
as they see fit.
CloudAMQP-put-metrics
This section described a small part of the CloudAMQP service, which includes around
100 microservices overall, all communicating via CloudAMQP and RabbitMQ.
P A R T T H R E E
EVENT-BASED
COMMUNICATION
Softonic, a software and app discovery portal, is accessed by over 100 million users
per month and delivers more than 2 million downloads per day, with a constant
flow of events and commands between services. As the world’s largest software
and app discovery destination, Softonic is also one of the world’s most highly-traf-
ficked websites.
Even without realizing it, you have probably landed on their website when down-
loading software or an application. Over 100 million users per month rely on Softonic as
an app guide that assists with the discovery of the best applications for any device that
also serves up reviews, news, articles, and free downloads.
CloudAMQP provides hosted RabbitMQ clusters in the biggest data centers around
the world, and Softonic is one of our valued customers. The CloudAMQP team met up
with Riccardo Piccoli, a developer at Softonic, at the RabbitMQ Summit 2018 in London
where he kindly shared Softonic’s customer story with us.
This chapter is broken down into two parts; the first part is an overview of the system, which shows a simple RabbitMQ use case of an event-based architecture. The second
part is a deep-dive into the internal architecture in Softonic and the plugins used by the
company along with examples of events they are sending.
Users are able to upload files to Softonic. The system first scans the uploaded file
for viruses and collects basic information. After the information is collected, the file is
ready for distribution to other users.
The new binary data is first held within a dedicated service, and a notification about
the upload is sent to an event bus. Other services collect this information which is even-
tually added to the Softonic website. In this case, the user gets notified immediately
after the upload has succeeded and a scanning event is simply placed on an event-bus
for other services to handle.
The message queue, in this section called an event-bus, allows web servers to
respond to requests quickly instead of being forced to perform a resource-heavy process
on the spot, which could cause user wait-time issues. Virus scanning is an example of a
resource-heavy process. The virus scanning application takes a message from the event
bus, such as a “ScanFile” command, and starts processing. At the same time, other
users are able to upload new files to Softonic and processing tasks are able to join the
queue. The event “FileScanned” is added back to the event bus, once the resource-con-
suming application has handled the event.
Figure 45 - Scan file request
An architecture like this creates two simple services and low coupling between the
sender and the receiver. Users can still upload files, even if the scanning application is
busy or is under maintenance.
1. Different events or commands are published to the event bus, e.g., a “ScanFile”
command.
3. The resource-consuming application retrieves the event and starts to process it,
while some data is stored to the database and more events can be published back
to another event queue (more about this in "Internal Structure of RabbitMQ").
When a microservice receives an event, it can update its own status, which leads to
more events being published, which is the case here.
It’s time for a deep-dive into the internal architecture of RabbitMQ and into the
Softonic application. Softonic is using the Consistent Hash Exchange Plugin and
RabbitMQ Sharding.
Image description (external usage): Softonic services are built upon Node.js and PHP and communicate with the RabbitMQ event bus, from which information from the services is transferred via a PHP application to a MySQL event store.
Image description (internal usage): The first application retrieves data from the MySQL event store and pushes it through consistent hash exchanges into two internal RabbitMQ event buses using sharded queues. From there, the information reaches the orchestration layer and an Elasticsearch cluster, where it becomes visible to users.
The consistent hash exchange plugin and RabbitMQ sharding
The consistent hash exchange plugin has the task of load balancing messages between
queues. Messages sent to the exchange are consistently and equally distributed across
many queues based on the routing key of the message. The plugin creates a hash of the
routing key and distributes the messages between queues that have a binding to that
exchange.
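A hedged sketch of how such an exchange might be declared and used with the Python client pika (the exchange and queue names are invented, and the rabbitmq_consistent_hash_exchange plugin must be enabled). With this exchange type, the binding key acts as a weight rather than a pattern:

import pika

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost/%2f"))  # placeholder URL
channel = connection.channel()

channel.exchange_declare(exchange="events", exchange_type="x-consistent-hash",
                         durable=True)

# Bind several queues with equal weight "1"
for name in ("events.q1", "events.q2", "events.q3"):
    channel.queue_declare(queue=name, durable=True)
    channel.queue_bind(queue=name, exchange="events", routing_key="1")

# Messages with the same routing key always hash to the same queue,
# preserving per-key order while spreading load across the queues
channel.basic_publish(exchange="events", routing_key="program-42", body="event 1")
connection.close()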
The RabbitMQ Sharding plugin partitions queues automatically. Once you define an
exchange as sharded, the supporting queues are automatically created on every cluster
node and messages are sharded across them. RabbitMQ sharding presents one queue to the consumer, but there could be many queues running in the background.
Below is an example of events and commands sent via RabbitMQ by Softonic using
the consistent hash exchange plugin. Events 1 and 2 end up in the same queue (with
the order preserved) while event 3 may or may not end up in the same queue. Data is
sharded, and processed with consistent hashing F(id_program) in order to preserve
order by the program.
Softonic did run RabbitMQ in-house before moving to the cloud. The biggest reason
for choosing CloudAMQP as a provider was the simplicity of installing RabbitMQ without
the hassle of maintaining a RabbitMQ cluster.
CloudAMQP offers many different plans designed for different uses, including a free
plan with Little Lemur.
The CloudAMQP team is grateful for the information from Softonic, an impressive
example of microservice architecture. A special thanks to Riccardo for your time at the RabbitMQ Summit 2018; we hope to see you again at the next event. We wish Softonic the
best of luck with their continued success!
“The biggest reason for choosing CloudAMQP as a provider was for simplicity of
installing RabbitMQ without the hassle of maintaining a RabbitMQ cluster.”