Load Balancing and 
Scaling with NGINX 
Introduced by Andrew Alexeev 
Presented by Owen Garrett 
Nginx, Inc.
About this webinar 
When one server just isn’t enough, how can you scale out? In this 
webinar, you'll learn how to build out the capacity of your website. You'll 
see a variety of scalability approaches and some of the advanced 
capabilities of NGINX Plus.
INTRODUCING NGINX…
What is NGINX? 
• Proxy – caching, load balancing… HTTP traffic 
• Web Server – serve content from disk 
• Application Server – FastCGI, uWSGI, Passenger… 
• Application Acceleration 
• SSL and SPDY termination 
• Performance Monitoring 
• High Availability 
• Advanced Features: Bandwidth Management, Content-based Routing, Request Manipulation, Response Rewriting, Authentication, Video Delivery, Mail Proxy, GeoLocation
NGINX Accelerates 
• 143,000,000 websites 
• 22% of the top 1 million websites 
• 37% of the top 1,000 websites
NGINX and NGINX Plus 
• NGINX F/OSS (nginx.org) 
– 3rd-party modules: large community of >100 modules 
• NGINX Plus 
– Advanced load balancing features 
– Ease of management 
– Commercial support
WHY LOAD-BALANCE?
Load-balancing Web Servers 
Why? 
• Improved Application Availability 
• Management 
• Increased Capacity 
• Advanced techniques, e.g. A|B testing 
How? 
• DNS Round Robin 
• Hardware L4 load balancer 
• Software Reverse Proxy LB 
• Cloud solution
Three load-balancing case studies 
1. Basic load balancing with NGINX 
2. When you need more control 
3. Advanced techniques
1. Basic Load Balancing 
• Simple scalability 
– All servers run the same applications/services 
– The load balancer distributes requests to extract optimal performance from the pool
Basic load balancing 

server { 
    listen 80; 
    location / { 
        proxy_pass http://backend; 
    } 
} 

upstream backend { 
    zone backend 64k; 
    server webserver1:80; 
    server webserver2:80; 
    server webserver3:80; 
    server webserver4:80; 
}
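
A quick way to apply edits to this configuration; these are the standard nginx commands, shown here as a usage sketch: 

nginx -t           # check the configuration for syntax errors 
nginx -s reload    # apply the new configuration with a graceful reload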
Basic load balancing 
• Use logging to debug: “$upstream_addr” 
log_format combined2 '$remote_addr - $remote_user [$time_local] ' 
'"$request" $status $body_bytes_sent ' 
'"$upstream_addr"'; 
192.168.56.1 - - [09/Mar/2014:23:08:56 +0000] "GET / HTTP/1.1" 200 30 "127.0.1.1:80" 
192.168.56.1 - - [09/Mar/2014:23:08:56 +0000] "GET /favicon.ico HTTP/1.1" 200 30 "127.0.1.2:80" 
192.168.56.1 - - [09/Mar/2014:23:08:57 +0000] "GET / HTTP/1.1" 200 30 "127.0.1.3:80" 
192.168.56.1 - - [09/Mar/2014:23:08:57 +0000] "GET /favicon.ico HTTP/1.1" 200 30 "127.0.1.4:80" 
192.168.56.1 - - [09/Mar/2014:23:08:57 +0000] "GET / HTTP/1.1" 200 30 "127.0.1.1:80" 
192.168.56.1 - - [09/Mar/2014:23:08:57 +0000] "GET /favicon.ico HTTP/1.1" 200 30 "127.0.1.2:80" 
192.168.56.1 - - [09/Mar/2014:23:08:58 +0000] "GET / HTTP/1.1" 200 30 "127.0.1.3:80" 
192.168.56.1 - - [09/Mar/2014:23:08:58 +0000] "GET /favicon.ico HTTP/1.1" 200 30 "127.0.1.4:80"
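
The format above only takes effect once an access_log directive references it; a minimal sketch, with an illustrative log path: 

access_log /var/log/nginx/access.log combined2;   # write requests using the combined2 format defined above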
Basic Load Balancing 
• Round-robin is the default 
– Suitable for consistent pages 
• Least Connections 
– Suitable for varying pages 
• IP Hash 
– Fixed mapping, basic session persistence
upstream backend { 
    # round-robin (default) 
    server webserver1:80; 
    server webserver2:80; 
} 

upstream backend { 
    least_conn;      # least connections 
    server webserver1:80; 
    server webserver2:80; 
} 

upstream backend { 
    ip_hash;         # IP hash 
    server webserver1:80; 
    server webserver2:80; 
}
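
Not shown on the slide, but any of these methods can be tuned per server; a brief sketch reusing the same hypothetical webserver hosts, with parameters available in open-source NGINX: 

upstream backend { 
    least_conn; 
    server webserver1:80 weight=3;                       # receives roughly three times the share of traffic 
    server webserver2:80 max_fails=3 fail_timeout=30s;   # taken out of rotation after repeated failures 
}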
Managing the Upstream Group 
• Direct config editing: 
– edit the upstream.conf file: 

upstream backend { 
    server webserver1:80; 
    server webserver2:80; 
    server webserver3:80; 
    server webserver4:80; 
} 

– then run nginx -s reload 
• On-the-fly reconfiguration [NGINX Plus only]: 

$ curl 'http://localhost/upstream_conf?upstream=backend&id=3&down=1'
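
A minimal sketch of how the endpoint used by that curl command is typically exposed; this is NGINX Plus only, the upstream group needs a shared-memory zone (the zone directive shown earlier), and the location name here is an assumption matching the URL above: 

upstream backend { 
    zone backend 64k;             # shared-memory zone, required for on-the-fly changes 
    server webserver1:80; 
    server webserver2:80; 
} 

server { 
    listen 80; 
    location /upstream_conf { 
        upstream_conf;            # NGINX Plus handler for on-the-fly reconfiguration 
        allow 127.0.0.1;          # keep the endpoint restricted 
        deny all; 
    } 
}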
2. When you need more control… 
• In many scenarios, you want more control over where traffic is routed: 
– Primary and secondary servers (aka master/slave) 
– Transaction state is accumulated on one server
‘Master’ and ‘Slave’ servers 
• WordPress admin traffic (e.g. image uploads) is sent to the ‘Master’ server 
• Image uploads are copied from the master to the ‘Slave’
‘Master’ and ‘Slave’ servers 
• WordPress admin traffic (e.g. image uploads) 

server { 
    listen 80; 
    location ~ ^/(wp-admin|wp-login) { 
        proxy_pass http://wpadmin; 
    } 
} 

upstream wpadmin { 
    server server1:80;          # ‘Master’ 
    server server2:80 backup;   # ‘Slave’, used only if the master is unavailable 
}
Session Persistence [NGINX Plus only] 
• For when transaction state is accumulated on one server 
– Shopping carts 
– Advanced interactions 
– Non-RESTful applications 
• NGINX Plus offers two methods: 
– sticky cookie 
– sticky route 
“Session persistence also helps performance”
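
A minimal sketch of the sticky cookie method; the cookie name and expiry are illustrative: 

upstream backend { 
    server webserver1:80; 
    server webserver2:80; 
    sticky cookie srv_id expires=1h path=/;   # NGINX Plus adds the cookie and routes returning clients by it 
}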
Advanced Techniques 
• You can control load-balancing programmatically 
– A|B Testing 
– Migration between applications
A|B Testing 
• Partition traffic: 95% to the ‘backends’ upstream group, 5% to the test server running the new application instance
A|B Testing 

split_clients "${remote_addr}AAA" $servers { 
    95% backends; 
    5%  192.168.56.1:80; 
} 

server { 
    listen 80; 
    location / { 
        proxy_pass http://$servers; 
    } 
}
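
split_clients hashes the supplied key (here the client address plus a fixed salt string) and maps ranges of the hash space onto the listed percentages, so a given client consistently falls into the same bucket and sees the same variant for the duration of the test.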
Application Migration 
• Create a new generation of the application: the ‘backendsB’ upstream group alongside the existing ‘backendsA’ group 
• Migrate users from old to new 
• Preserve sessions, no interruptions
Application Migration 

map $cookie_group $group { 
    ~(?P<value>.+)$ $value; 
    default backendsB;   # the default upstream group 
} 

server { 
    listen 80; 
    location / { 
        add_header Set-Cookie "group=$group; path=/"; 
        proxy_pass http://$group; 
    } 
}
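
On a first visit there is no group cookie, so the map falls through to the default group and the Set-Cookie header pins the client there; returning clients keep whichever group their cookie names, which is how existing sessions stay on the old generation while new sessions start on the new one.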
Three load-balancing case studies 
1. Basic load balancing with NGINX 
2. When you need more control 
3. Advanced techniques
Closing thoughts 
• 37% of the busiest websites use NGINX 
• Check out the load-balancing articles on 
nginx.com/blog 
• Future webinars: nginx.com/webinars 
Try NGINX F/OSS (nginx.org) or NGINX Plus (nginx.com)

Editor's Notes

  • #3 Hook: scaling a WordPress site… but the principles can be applied to any web-based service.
  • #9 As we go through this presentation, we’ll highlight some of the new features that are specific to NGINX Plus.
  • #11 Hardware load balancer: L4 (or may be software), partial TCP stack, DSR, connection mirroring, failover, very high performance (packets per second, SYN cookies, …); example: F5 fasthttp (software); warnings, e.g. out-of-order packets. Most are moving to a software reverse-proxy approach: reverse proxy, full TCP stack.
  • #15 Which discipline should I use? Round robin is simple, but sometimes has odd side effects. Least connections is very effective at smoothing out loads. IP hash gives a simple session-persistence effect but may not distribute load effectively. Demo: round robin with the config above will appear to fail because we will get responses from server1 and server3 only; this is because the client makes a ‘silent’ request for favicon.ico too.
  • #18 On-the-fly reconfiguration… needs a ‘zone’ in the upstream group.
  • #20 In WordPress, it’s common to synchronize filesystems in one direction.
  • #22 Example: image-editing application, shopping cart.