Data Center Network Architecture
Presented by: Ankita Mahajan
DCN Introduction
Design Goals
FAT-Tree DCN
Recursive DCN Design
MDCs
Virtualized DCN
Data Center Network
Data Center Networks are large clusters of servers interconnected by network
switches.
These servers host applications that provide different concurrent services. Ex)
• Web services like DNS, web servers, mail servers, gaming servers, chat servers.
• Compute services like recommendation systems, indexing, and scientific computing.
DCN Usage Scenarios:
• Compute Intensive: Heavily loaded servers, but low inter-server comm. Ex) HPC
• Data Intensive: Huge intra-DCN data transfer, but low load at servers. Ex) Video
and File Streaming
• Balanced: Communication links and computing servers are proportionally
loaded. Ex) Geographic Information System
Conventional DCN Architecture
[Figure: Conventional three-tier DCN. Racks of servers connect to Top-of-Rack (ToR) edge switches (commodity switches); ToR switches connect to aggregation switches (10 GigE); aggregation switches connect to core switches (10 GigE), which link the Ethernet fabric to the Internet.]
DCN Design Goals
• Availability and Fault tolerance: Multiple
paths and replicated servers. Graceful
Degradation.
• Scalability: Incrementally increase DCN
size as and when needed.
• Low Cost: Lower power and cooling costs.
• Throughput: The number of requests completed by the data center per unit of time (compute + transmission + aggregation time).
• Economies of scale: Exploit the cost benefits of the data center's huge size.
• Scalable interconnect bandwidth: Host to
host communication at full bisection
bandwidth.
• Load balancing: Avoid hot-spots, to fully
utilize the multiple paths.
Challenges:
• Reduced Utilization
• Scale-out vs Scale-up: per-port cost, cabling
and packaging complexity, scalable cooling.
• Placement, airflow, and rack density
• TCP incast, need for large-buffer switches
• Resource fragmentation: VLANs
• Manual configuration
• Oversubscription: 1:1 vs 1:240 (see the sketch below)
• Flooding and routing network overhead
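To make the oversubscription figures above concrete, the following minimal sketch (an addition of mine, not from the slides) computes the ratio of host-facing bandwidth to uplink bandwidth at one switch tier; all port counts and link speeds below are purely illustrative assumptions.

```python
def oversubscription(host_ports, host_gbps, uplink_ports, uplink_gbps):
    """Ratio of host-facing bandwidth to uplink bandwidth at one switch tier."""
    return (host_ports * host_gbps) / (uplink_ports * uplink_gbps)

# Non-blocking design: 20 hosts at 1 GigE behind 2 x 10 GigE uplinks -> 1:1
print(oversubscription(20, 1, 2, 10))    # 1.0

# Heavily oversubscribed design (illustrative numbers only): 48 hosts at
# 1 GigE sharing 0.2 Gb/s of usable upstream capacity -> 240:1
print(oversubscription(48, 1, 1, 0.2))   # 240.0
```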
Fat-Tree Based DC Architecture
Fig: Commodity fat-tree with k = 4; 1:1 oversubscription ratio
k-ary fat tree: a three-layer topology (edge, aggregation, and core)
• each pod consists of (k/2)² servers & 2 layers of k/2 k-port switches
• each edge switch connects to k/2 servers & k/2 aggr. switches
• each aggr. switch connects to k/2 edge & k/2 core switches
• (k/2)² core switches: each connects to k pods
• i.e., (k/2)² core switches, k² pod switches, and k·(k/2)² = k³/4 servers (sizes computed in the sketch below)
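The element counts above can be checked with a small sketch (my addition, assuming identical k-port switches throughout) that computes the size of a k-ary fat tree; it also reproduces the 27,648-host figure quoted for 48-port switches on the next slide.

```python
def fat_tree_sizes(k):
    """Element counts for a k-ary fat tree built from identical k-port switches."""
    assert k % 2 == 0, "k must be even"
    per_pod_switches = k             # k/2 edge + k/2 aggregation switches
    servers_per_pod = (k // 2) ** 2
    return {
        "pods": k,
        "pod_switches": k * per_pod_switches,   # k^2
        "core_switches": (k // 2) ** 2,         # (k/2)^2
        "servers": k * servers_per_pod,         # k^3 / 4
    }

print(fat_tree_sizes(4))   # 4 pods, 16 pod switches, 4 core switches, 16 servers
print(fat_tree_sizes(48))  # 27,648 servers with 48-port switches
```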
Fat-Tree Based DC Architecture
Fig: Commodity fat-tree with k = 4; 1:1 oversubscription ratio
Advantages:
• Full bisection BW: 1:1 oversubscription ratio
• Low cost: commodity switches
Disadvantages:
• Scalability: Network size depends on the ports per switch; 48-port switches => maximum 27,648 hosts.
• Agility and performance isolation: Not supported
Recursive DCN Architecture
• A level-0 subnet is the basic building block. It contains inter-connected servers.
• Each level-k subnet is built from multiple level-(k-1) subnets (growth illustrated in the sketch below).
• Ex) DCell, BCube, 4-4 1-4, etc.
• Advantages:
• Highly Scalable commodity n/w
• Low CapEx and OpEx.
• Disadvantage:
• Cabling and packaging
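To show how fast such recursive designs grow, here is a rough sketch (my addition, using the level-by-level growth rules reported for DCell and BCube; the n and k values below are only examples) that counts servers per level.

```python
def dcell_servers(n, k):
    """Servers in a level-k DCell built from n-server level-0 cells
    (DCell recurrence: t_k = t_{k-1} * (t_{k-1} + 1))."""
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

def bcube_servers(n, k):
    """Servers in a level-k BCube built with n-port switches: n^(k+1)."""
    return n ** (k + 1)

print(dcell_servers(4, 2))   # 4 -> 20 -> 420 servers by level 2
print(bcube_servers(8, 3))   # 4,096 servers
```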
Modular Data Centers (MDC)
High-density, shipping-container-based DCN.
Should be robust and provide graceful performance degradation.
Advantages:
• Fast deployment
• Lower costs
• Increased efficiency
• Easy scale-out
Virtualized DCN
Added Issues:
• Agility: Allocate any server to any service dynamically for performance isolation.
• VM-migration across DCNs: No manual configuration.
• Availability and Fault tolerance: Configuration of server IP addresses
Solution: Separation of Location and Identity addresses. Ex) VL2, 4-4 1-4, etc
Fig: Directory data structure. Packets are tunneled through the physical network using a location-IP outer header.
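As a rough illustration of the location/identity split and tunneling described above, the sketch below (my own, with made-up addresses and function names; not the actual VL2 or 4-4 1-4 implementation) keeps a directory from identity IPs to location IPs and encapsulates packets toward the current location.

```python
# identity (application) address -> location address of the hosting rack/server
directory = {}

def register(identity_ip, location_ip):
    """Called when a VM boots or migrates; only the directory entry changes."""
    directory[identity_ip] = location_ip

def encapsulate(packet, dst_identity_ip):
    """Wrap the packet in an outer header addressed to the location IP."""
    location_ip = directory[dst_identity_ip]   # directory lookup
    return {"outer_dst": location_ip, "inner": packet}

register("10.0.5.7", "192.168.3.21")           # VM identity -> current rack
pkt = encapsulate({"dst": "10.0.5.7", "data": b"hi"}, "10.0.5.7")
print(pkt["outer_dst"])                        # 192.168.3.21
register("10.0.5.7", "192.168.9.4")            # VM migrates; identity unchanged
```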
Typical Inter Server Communication in DC
Example: 4-4 1-4 DCN
Fig: 4-4 1-4 Data Center
• 4-4 1-4 is a location-based forwarding architecture for DCNs that exploits the IP address hierarchy.
• Uses statically assigned, location-based IP addresses for all network nodes.
• Packets are forwarded by masking destination IP address bits (see the sketch below).
• No routing or forwarding table is maintained at switches.
• No convergence overhead of routing protocols.
No. of physical machines in the figure = 65,536
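The forwarding-by-masking idea can be sketched as follows (a hypothetical illustration of location-based forwarding, not the exact 4-4 1-4 field layout): a switch extracts the address field for its own level and uses it directly as the output-port index, so no forwarding table or routing protocol is needed.

```python
import ipaddress

def port_for(dst_ip, level_shift, field_bits=8):
    """Output port encoded in dst_ip for a switch at the given level."""
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (dst >> level_shift) & ((1 << field_bits) - 1)

# Assume, purely for illustration, addresses of the form x.pod.rack.host:
print(port_for("10.3.7.12", level_shift=8))  # rack field -> port 7 at a pod switch
print(port_for("10.3.7.12", level_shift=0))  # host field -> port 12 at a ToR switch
```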
References
• A. Kumar, S. V. Rao, and D. Goswami, “4-4, 1-4: Architecture for Data Center Network Based
on IP Address Hierarchy for Efficient Routing," in Parallel and Distributed Computing (ISPDC),
2012 11th International Symposium on, 2012, pp. 235-242.
• M. Al-Fares, A. Loukissas, and A. Vahdat, “A scalable, commodity data center network architecture,” in Proceedings of the ACM SIGCOMM 2008 conference on Data communication, ser. SIGCOMM '08. New York, NY, USA: ACM, 2008, pp. 63-74. [Online]. Available: http://doi.acm.org/10.1145/1402958.1402967
• C. Guo, G. Lu, D. Li, H. Wu, X. Zhang, Y. Shi, C. Tian, Y. Zhang, and S. Lu, “BCube: a high performance, server-centric network architecture for modular data centers,” in Proceedings of ACM SIGCOMM 2009.
• T. Benson, A. Anand, A. Akella, and M. Zhang, “Understanding data center traffic characteristics,” SIGCOMM Comput. Commun. Rev., vol. 40, no. 1, pp. 92-99, Jan. 2010. [Online]. Available: http://doi.acm.org/10.1145/1672308.1672325
• A. Greenberg, J. Hamilton, D. A. Maltz, and P. Patel. “The cost of a cloud: research problems
in data center networks.” SIGCOMM Comput. Commun. Rev.,39(1):68–73, 2009.
