
SBA DATA CENTER DEPLOYMENT GUIDE

NetApp Storage Deployment Guide

SMART BUSINESS ARCHITECTURE

August 2012 Series


Preface

Who Should Read This Guide

This Cisco® Smart Business Architecture (SBA) guide is for people who fill a variety of roles:
• Systems engineers who need standard procedures for implementing solutions
• Project managers who create statements of work for Cisco SBA implementations
• Sales partners who sell new technology or who create implementation documentation
• Trainers who need material for classroom instruction or on-the-job training

In general, you can also use Cisco SBA guides to improve consistency among engineers and deployments, as well as to improve scoping and costing of deployment jobs.

Release Series

Cisco strives to update and enhance SBA guides on a regular basis. As we develop a series of SBA guides, we test them together, as a complete system. To ensure the mutual compatibility of designs in Cisco SBA guides, you should use guides that belong to the same series.

The Release Notes for a series provide a summary of additions and changes made in the series.

All Cisco SBA guides include the series name on the cover and at the bottom left of each page. We name the series for the month and year that we release them, as follows:

month year Series

For example, the series of guides that we released in August 2012 is the “August 2012 Series”.

You can find the most recent series of SBA guides at the following sites:
Customer access: http://www.cisco.com/go/sba
Partner access: http://www.cisco.com/go/sbachannel

How to Read Commands

Many Cisco SBA guides provide specific details about how to configure Cisco network devices that run Cisco IOS, Cisco NX-OS, or other operating systems that you configure at a command-line interface (CLI). This section describes the conventions used to specify commands that you must enter.

Commands to enter at a CLI appear as follows:
configure terminal

Commands that specify a value for a variable appear as follows:
ntp server 10.10.48.17

Commands with variables that you must define appear as follows:
class-map [highest class name]

Commands shown in an interactive example, such as a script or when the command prompt is included, appear as follows:
Router# enable

Long commands that line wrap are underlined. Enter them as one command:
wrr-queue random-detect max-threshold 1 100 100 100 100 100 100 100 100

Noteworthy parts of system output or device configuration files appear highlighted, as follows:
interface Vlan64
ip address 10.5.204.5 255.255.255.0

Comments and Questions

If you would like to comment on a guide or ask questions, please use the SBA feedback form.

If you would like to be notified when new comments are posted, an RSS feed is available from the SBA customer and partner pages.

August 2012 Series Preface


Table of Contents

What’s In This SBA Guide ............................................................ 1
  Cisco SBA Data Center ............................................................ 1
  Route to Success ................................................................. 1
  About This Guide ................................................................. 1
Introduction ........................................................................ 2
  Related Reading .................................................................. 2
  Business Overview ................................................................ 2
  Technology Overview .............................................................. 3
Deploying a NetApp Storage Solution ................................................. 6
  Completing Storage Array Setup ................................................... 6
  Provisioning Storage ............................................................ 10
  Deploying FCoE or iSCSI Storage ................................................ 15
  Adding a LUN .................................................................... 20
  Increasing Efficiency and Flexibility with Advanced Features .................. 26
    Thin Provisioning ............................................................. 26
    NetApp Deduplication and Compression ......................................... 26
    NetApp Snapshot ............................................................... 26
    NetApp FlexClone .............................................................. 26
  Backup, Disaster Recovery, and High Availability .............................. 27
    NetApp Advanced Solutions .................................................... 27
  Monitoring and Management ...................................................... 28
Appendix A: Changes ................................................................ 30



What’s In This SBA Guide

Cisco SBA Data Center

Cisco SBA helps you design and quickly deploy a full-service business network. A Cisco SBA deployment is prescriptive, out-of-the-box, scalable, and flexible.

Cisco SBA incorporates LAN, WAN, wireless, security, data center, application optimization, and unified communication technologies—tested together as a complete system. This component-level approach simplifies system integration of multiple technologies, allowing you to select solutions that solve your organization’s problems—without worrying about the technical complexity.

Cisco SBA Data Center is a comprehensive design that scales from a server room to a data center for networks with up to 10,000 connected users. This design incorporates compute resources, security, application resiliency, and virtualization.

Route to Success

To ensure your success when implementing the designs in this guide, you should first read any guides that this guide depends upon—shown to the left of this guide on the route below. As you read this guide, specific prerequisites are cited where they are applicable.

Prerequisite Guides: Data Center Design Overview; Data Center Deployment Guide. You Are Here: NetApp Storage Deployment Guide.

About This Guide

This deployment guide contains one or more deployment chapters, which each include the following sections:
• Business Overview—Describes the business use case for the design. Business decision makers may find this section especially useful.
• Technology Overview—Describes the technical design for the business use case, including an introduction to the Cisco products that make up the design. Technical decision makers can use this section to understand how the design works.
• Deployment Details—Provides step-by-step instructions for deploying and configuring the design. Systems engineers can use this section to get the design up and running quickly and reliably.

You can find the most recent series of Cisco SBA guides at the following sites:
Customer access: http://www.cisco.com/go/sba
Partner access: http://www.cisco.com/go/sbachannel



Introduction

The requirements of smaller organizations increasingly reflect those of larger enterprises, although generally on a smaller scale. However, with limited IT skill sets, many organizations must rely on partners and vendors that can deliver effective solutions in a simplified manner. Cisco SBA solutions address these needs.

NetApp storage solutions offer the performance and functionality to unlock the value of your business, regardless of size. The NetApp family of storage solutions offers class-leading storage efficiency and performance that scale with your organization. Whether you deploy a NetApp fabric attached storage (FAS) system or choose to extend the life of your existing Fibre Channel (FC) storage with a NetApp V-Series system, NetApp storage solutions offer a common storage platform that is designed to maximize the efficiency of your data storage. Capable of data-in-place upgrades to more powerful FAS or V-Series systems, running the same Data ONTAP operating system and using the same management tools and feature sets, NetApp systems grow with your organization.

This guide focuses on deploying a NetApp storage system into networks built upon the Smart Business Architecture. Components of the entire storage solution are present in all three layers of Cisco SBA. The Cisco Ethernet or FC-switching infrastructure forms the network foundation for access to storage, along with providing additional network services, such as security. The NetApp storage systems themselves are end-nodes to the network, but provide additional security services and multiprotocol storage access as a user service, using either block-based access methods, such as iSCSI, FC, and Fibre Channel over Ethernet (FCoE), or network-attached storage (NAS) protocols.

Related Reading

The Cisco SBA—Data Center Design Overview provides an overview of the data-center architecture. This guide discusses how Cisco SBA data center architecture is built in layers—the foundation of Ethernet and storage networks and computing resources; the data-center services of security, application resilience, and virtual switching; and the user services layer that contains applications and user services.

The Cisco SBA—Data Center Deployment Guide focuses on the processes and procedures necessary to deploy your data-center foundation Ethernet and storage transport. The data-center foundation is designed to support the flexibility and scalability of the Cisco Unified Computing System and provides details for the integration of functionality between the server and the network for Cisco and non-Cisco servers. The foundation design includes data-center services, such as security with firewall and intrusion prevention, and application resiliency with advanced server load-balancing techniques. This guide also discusses the considerations and options for data-center power and cooling. The supplemental Cisco SBA—Data Center Configuration Files Guide provides snapshots of the actual platform configurations used in the design.

The Cisco SBA—Data Center Unified Computing System Deployment Guide provides the processes and procedures necessary to deploy a Cisco Unified Computing System using both the Cisco B-Series blade server system and Cisco C-Series rack-mount servers to a point where they are ready to deploy an operating system or hypervisor software.

The supplemental Cisco SBA—Data Center Virtualization with UCS, Nexus 1000V and VMware Deployment Guide provides a concise yet detailed process of deploying VMware in your data center and on the Cisco UCS B-Series and C-Series servers. Additionally, the guide details the deployment of the Cisco Nexus 1000V virtual switch to enhance the management and control of the network in a VMware environment.

Business Overview

As the value and amount of electronic data increases, deploying the right storage strategy becomes even more critical. A centralized storage strategy offers the best solution to store, protect, and retain your valuable data. Today’s solutions address technology trends around managed scalability, server virtualization, and networking with advanced features and simplified management for organizations with limited IT staff.

Trying to keep up with data growth can be a challenge, especially when using direct-attached storage (DAS). Selecting the right amount of storage for a given application is not an exact science. You must estimate how much data you need to store over time, and then choose a server with enough drive bays to accommodate the expected data growth. When the drive bays have all been filled, you can directly attach an external disk shelf or Just a Bunch of Disks (JBOD) via a Small Computer System Interface (SCSI) or Serial-Attached SCSI (SAS), but that is an expensive solution if you expect to use only a portion of the capacity of the shelf. A DAS shelf typically cannot be used by more than one server at a time.



One of the most compelling technology trends today is server virtualization. Server virtualization offers the ability to increase the utilization of excess CPU-processing capacity available with today’s high-performing multicore CPUs. Hosting many virtual servers on a physical server CPU allows better utilization of this untapped processing capacity. After a server is virtualized from the physical hardware, it can be moved from one physical server to another to support load balancing, hardware servicing or failover, and site-to-site mobility. These advanced virtualization features are only available in shared networked storage environments.

Server virtualization is also driving the need for more robust data networks. The ratio of virtual servers to physical servers is typically 10:1 or greater, resulting in much higher I/O loads per physical server. Ethernet is emerging as an increasingly robust and capable storage network that offers flexibility, simplicity, and performance for organizations of all sizes. The ability to share a common network technology for data traffic, voice, video, and storage offers tremendous value over alternative dedicated solutions. The skill sets required for Ethernet management are common in the industry, and the penetration of this technology into every home and business drives economies-of-scale to keep costs low.

The growth of data is not unique to larger enterprises. Businesses of all sizes face the challenges of acquiring, storing, and retaining large amounts of data. In response to these challenges, solutions that were once considered overly complicated and expensive for smaller businesses have been made simpler to procure, deploy, and manage. Advanced functionality once reserved for costly solutions is now common in lower-priced solutions, making it easier for organizations to address data growth and retention.

Technology Overview

NetApp leads the industry in storage efficiency and innovation. With features such as thin provisioning, deduplication, high-performing and space-efficient NetApp Snapshot copies, and advanced disaster recovery capabilities, NetApp offers solutions to help you achieve your business goals. With a shared vision of a virtual dynamic data-center and a partnership that extends back to 2003, NetApp and Cisco are developing technologies that deliver the value and performance to meet your IT requirements. Cisco partners with industry leaders to provide reference architectures and proven configurations to enable organizations of all sizes to meet the IT and business needs of today and tomorrow.

Figure 1 - Storage connectivity over Ethernet

NetApp storage solutions are built on a single platform that scales from small deployments of a few terabytes to large deployments beyond a petabyte, all with a common set of features and management tools. Each NetApp FAS system is capable of running multiple block-based and file-based protocols at the same time, including Network File System (NFS), Common Internet File System (CIFS), FC, FCoE, and Internet Small Computer System Interface (iSCSI). NetApp unified storage simplifies data management with the ability to scale your storage environment as your business grows, without the need for staff retraining or forklift equipment upgrades.

NetApp V-Series storage systems can extend the life of your existing FC storage investments by extending many of the same advanced features as the FAS storage systems to the management of legacy installed-base storage systems from third-party manufacturers. Figure 1 illustrates IP/Ethernet-based and FCoE-based connections to a NetApp storage system.

Data centers run multiple parallel networks to accommodate both data and storage traffic. To support these different networks in the data center, administrators deploy separate network infrastructures, including different types of host adapters, connectors and cables, and fabric switches. The use of separate infrastructures increases both capital and operational costs for IT executives. The deployment of a parallel storage network, for example, adds to the overall capital expense in the data center, while the incremental hardware components require additional power and cooling, management, and rack space that negatively affect the operational expense.

Consolidating SAN and LAN in the data center into a unified, integrated infrastructure is referred to as network convergence. A converged network reduces both the overall capital expenditure required for network deployment, and the operational expenditure for maintaining the infrastructure.

NetApp Unified Connect delivers data access using NFS, CIFS, iSCSI, and FCoE protocols concurrently over a shared network port using the NetApp unified target adapter. As a leader in Ethernet storage, first as a NAS pioneer and next as an early proponent of iSCSI, NetApp now leads with FCoE and Unified Connect.

The new, industry-leading NetApp FAS3200 series cost-effectively meets the storage needs of business applications in both virtual and traditional environments. The NetApp FAS3200 family scales to nearly 3 petabytes of versatile storage, which adapts readily to growing storage demands with even better value and performance than before.

Storage Networking with Cisco SBA

A network infrastructure that uses FC or Ethernet should have no single point of failure. A highly available solution includes:
• Two or more FC or Ethernet network switches.
• Two or more host bus adapters (HBAs) or network interface cards (NICs) per server.
• Two or more target FC ports or Ethernet NICs per storage controller.

When using Fibre Channel, two fabrics are required to have a truly redundant architecture.

Cisco SBA is designed to address the common requirements of organizations with 250 to 10,000 connected users. Each organization is unique, however, and so are its requirements, so Cisco SBA was built so that additional capabilities could be added without redesigning the network. The Cisco SBA data center foundation provides resilient Ethernet and storage transport capable of carrying iSCSI, FC, FCoE, and NAS protocols.

Figure 2 - Cisco SBA data center

The Cisco SBA data center foundation is designed to support the flexibility and scalability of the Cisco Unified Computing System and provides details for the integration of functionality between the server, storage, and the network for Cisco and non-Cisco servers.



NetApp has improved upon its leadership position in unified storage as the first storage vendor to support FCoE. FCoE combines two leading technologies—the FC protocol and an enhanced 10-Gigabit Ethernet physical transport—to provide you with more options for SAN connectivity and networking. FCoE allows you to use the same tools and techniques that you use today to manage your FC network and storage. The FCoE network infrastructure offers connectivity to either native NetApp FCoE systems and/or NetApp FC storage systems. This allows you to migrate to a unified Ethernet infrastructure while preserving investments you made in FC storage. FCoE is a logical progression of NetApp’s unified storage approach to offering concurrent support for FC, iSCSI, and NAS data access in its enterprise systems. It provides an evolutionary path for FC SAN customers to migrate to Ethernet over time.

These solutions:
• Support SAN (FC, FCoE, and iSCSI) or NAS.
• Scale non-disruptively from a few terabytes to over 3 petabytes.
• Are easily installed, configured, managed, and maintained.
• Dynamically expand and contract storage volumes, as needed.
• Offer features that provide:
  ◦ Rapid backup and recovery with zero-penalty Snapshot copies.
  ◦ Simple, cost-effective replication for disaster recovery.
  ◦ Instant, space-efficient data clones for provisioning and testing.
  ◦ Data deduplication to reduce capacity requirements.

NetApp unified storage solutions enable powerful thin provisioning, simplified data management, and scalable and consistent I/O performance for all protocols across network-attached storage (NAS, NFS, CIFS) and storage networks (SANs, FC, FCoE, and iSCSI) in a single pool of storage.

NetApp storage solutions offer powerful data management and data protection capabilities that enable you to lower costs while meeting capacity, utilization, and performance requirements.

Data Protection

Any consolidation effort increases risk to the organization in the event that the consolidation platform fails. As physical servers are converted to virtual machines (VMs) and multiple VMs are consolidated onto a single physical platform, the effect of a failure to the single platform can be catastrophic. Fortunately, hypervisors provide multiple technologies that enhance the availability of a virtual data center. These technologies include physical server clustering, application load balancing, and the ability to non-disruptively move running VMs and data sets between physical servers.

To ensure storage availability, many levels of redundancy are available for deployments, including purchasing physical servers with multiple storage interconnects or HBAs, deploying redundant storage networking and network paths, and leveraging storage arrays with redundant controllers. A deployed storage design that meets all of these criteria can be considered to have eliminated all single points of failure.

The reality is that data protection requirements in a virtual infrastructure are greater than those in a traditional physical server infrastructure. Data protection is a paramount feature of shared storage devices. NetApp RAID-DP is an advanced Redundant Array of Independent Disks (RAID) technology that is provided as the default RAID level on all NetApp FAS systems. RAID-DP protects against the simultaneous loss of two drives in a single RAID group. It is very economical to deploy; the overhead with default RAID groups is a mere 12.5 percent (two parity disks in a default 16-disk RAID group). This level of resiliency and storage efficiency makes data residing on RAID-DP safer than data stored on RAID 5 and more cost-effective than RAID 10.

FCoE Target and Unified Connect Requirements

Starting with NetApp Data ONTAP 7.3.2, FCoE is available through unified target adapters (UTAs) in the Data ONTAP 7.3 family. A UTA is more commonly known as a converged network adapter (CNA). In Data ONTAP 8.0.1 and later, FCoE and all other Ethernet protocols normally available from NetApp storage (CIFS, NFS, iSCSI, and so on) are supported concurrently. The FC protocol license is required for FCoE functionality. For other protocols, the relevant license is required. When a UTA is installed in a NetApp storage controller running Data ONTAP 8.0.1 and later releases, a dual-port 10 Gbps NIC and a 10 Gbps FCoE target adapter are presented for each UTA installed in the controller. The UTA is supported on all current platforms that have a PCI-express (PCI-E) expansion slot.

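As a sketch only (these commands are not taken from this guide, and Data ONTAP 7-Mode syntax is assumed; verify against your release documentation), enabling and checking the FC license that FCoE requires on a UTA-equipped controller might look like the following from the controller CLI:

```
license add <fc-license-code>   # the FC protocol license enables FCoE on the UTA
fcp start                       # start the FC/FCoE target service
fcp config                      # the UTA's FCoE target ports appear alongside FC target ports
```

The license code shown is a placeholder; each protocol you plan to serve (CIFS, NFS, iSCSI, and so on) needs its own license in the same way.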


Deploying a NetApp Storage Solution

By using the NetApp OnCommand System Manager, you can easily configure a NetApp storage system for end-to-end FCoE or iSCSI deployment. System Manager Version 2.0 is supported on all NetApp FAS2000, FAS3000, and FAS6000 systems, and on the corresponding V-Series systems.

The NetApp OnCommand System Manager is a feature-rich, yet easy-to-use, storage management tool for basic configuration and management of NetApp storage systems. System Manager is ideal for initial setup and for configuration of one system at a time. As your environment grows, NetApp offers a suite of storage management tools.

NetApp OnCommand System Manager is the simple yet powerful management tool for NetApp storage. It is easy to use for small to medium-sized businesses, and efficient for large enterprises and service providers.

Process

Completing Storage Array Setup
1. Complete the initial system setup
2. Install OnCommand System Manager
3. Get started with array management

For system requirements to run NetApp OnCommand System Manager 2.0, see the NetApp System Manager 2.0 Quick Start Guide here:
https://communities.netapp.com/docs/DOC-10695

To configure a system that supports Ethernet-based access to storage, it is recommended that you dedicate the e0M interface for system management traffic.

The Cisco SBA data-center design illustrates the storage system dual-attached to the Ethernet switching fabric using a port-channel connection. On the NetApp controller, you will create a virtual interface (VIF), which includes both 10-Gb Ethernet links from the controller to the data-center core Cisco Nexus 5500UP switches. Configuration for the data-center core Cisco Nexus 5500UP switches is covered in Procedure 1, in the Deploying FCoE or iSCSI Storage section of this guide.

Figure 3 - VIF configuration

With a VIF configuration, the dual interfaces provide system resiliency, and when they are configured as a port-channel using link aggregation control protocol (LACP), both links can actively carry storage traffic. In this design, you enable Ethernet storage interfaces for FCoE or iSCSI as a VIF. Configuration of the FCoE for the unified target adapter in Data ONTAP is the same as it is for a traditional FC target adapter.
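The data-center core side of this LACP port-channel is configured in the Deploying FCoE or iSCSI Storage section. As a hedged sketch only (the interface and port-channel numbers here are illustrative, not taken from this guide), the Cisco Nexus 5500UP configuration facing the controller VIF might look like:

```
interface port-channel 62
  description Link to NetApp controller VIF
  switchport mode trunk
!
interface Ethernet1/3
  description Link to NetApp e1a
  channel-group 62 mode active
```

The keyword active enables LACP negotiation on the member link, matching the lacp interface-group type selected on the controller.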
To follow the procedures in this guide, your storage system must be running one of the following versions of Data ONTAP. This guide reflects the use of Data ONTAP 8.0.2 operating in 7-Mode.
• Data ONTAP 7.x (starting from 7.2.3)
• Data ONTAP 8.x 7-Mode

Procedure 1 Complete the initial system setup

The setup includes the basic configuration such as host name, IP address, subnet, credentials, management interface, and system default IP gateway.



You can assign a static IP address to the storage controller manually, or, if you have Dynamic Host Configuration Protocol (DHCP) running on the data-center subnet where your storage controller is installed, then you can have DHCP provide the IP address. This procedure uses the static IP-address assignment method using the system setup script.

To configure the NetApp storage system, you need:
• An IP address for system management.
• An IP address for iSCSI traffic, in a different subnet than the system management IP address.
• An IP subnet default gateway address for the iSCSI subnet.

In this procedure, you assign the IP default gateway for the controller to the iSCSI subnet to ensure that iSCSI traffic uses the 10 Gigabit Ethernet interfaces, and not the lower-speed e0M management interface.

Reader Tip

For more information about management network interface configuration, see the NetApp library page here: https://library.netapp.com/ecmdocs/ECMM1277792/html/nag/frameset.html

Step 1: Connect a console to the serial monitor port on the NetApp storage controller and power on your storage system components according to the instructions in the Installation and Setup Instructions for your hardware platform.

Step 2: Press Ctrl+C. This skips the DHCP IP address search, and you follow the setup script.

Step 3: Assign the storage controller’s host name.
A valid hostname consists of alphanumeric characters [a-zA-Z0-9] and dash [-].
Please enter the new hostname []: NetAppx

Next, you create the VIF that maps your 10-Gb Ethernet ports for EtherChannel to the data center core for iSCSI and FCoE transport.

Step 4: Enter y, enter 1 for a single interface group, and then enter a name for the interface group. This creates an interface group.
Do you want to configure interface groups? [n]: y
Number of interface groups to configure? [0] 1
Name of interface group #1 []: SBA-VIF-1

Step 5: Enter l for an LACP interface group, enter i for IP-based load balancing, and then assign two 10-Gb Ethernet links on your controller to the VIF.
Is SBA-VIF-1 a single [s], multi [m] or a lacp [l] interface group? [m] l
Is SBA-VIF-1 to use IP-based [i], MAC-based [m], Round-robin based [r] or Port based [p] load balancing? [i] i
Number of links for SBA-VIF-1? [0] 2
Name of link #1 for SBA-VIF-1 []: e1a
Name of link #2 for SBA-VIF-1 []: e1b

Step 6: Enter the IP address and subnet mask for iSCSI transport on the VIF, and then press Enter twice to skip assigning IP addresses to the remaining physical interfaces.
Please enter the IP address for Network Interface SBA-VIF-1 []: 10.4.62.10
Please enter the netmask for Network Interface SBA-VIF-1 [255.0.0.0]: 255.255.255.0
Should interface group SBA-VIF-1 take over a partner interface group during failover? (You would answer yes here if you are running redundant controllers, and then enter the second controller’s interface group name. These names would be identical on each controller) [n]: n
Please enter the IP address for Network Interface e0a []:
Should interface e0a take over a partner IP address during failover? [n]: n
Please enter the IP address for Network Interface e0b []:
Should interface e0b take over a partner IP address during failover? [n]: n

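The interactive answers in Steps 4 through 6 map to a small number of CLI commands. As a sketch only (assuming Data ONTAP 8.x 7-Mode syntax; on Data ONTAP 7.x the equivalent command family is vif rather than ifgrp), the same LACP interface group could be created directly from the controller CLI:

```
ifgrp create lacp SBA-VIF-1 -b ip e1a e1b        # LACP group, IP-based load balancing
ifconfig SBA-VIF-1 10.4.62.10 netmask 255.255.255.0
```

Changes made interactively by the setup script persist across reboots; CLI commands such as these would also need to be added to /etc/rc to persist, so treat this only as an illustration of what the script configures.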


Step 7: Enter the IP address and subnet for system management on interface e0M. This must be on a different IP subnet than your iSCSI traffic or you may experience performance issues.
e0M is a Data ONTAP dedicated management port.
NOTE: Dedicated management ports cannot be used for data protocols (NFS, CIFS, iSCSI, NDMP or Snap*), and if they are configured they should be on an isolated management LAN. The default route will use dedicated mgmt ports only as the last resort, since data protocol traffic will be blocked by default.
Please enter the IP address for Network Interface e0M []: 10.4.63.123
Please enter the netmask for Network Interface e0M [255.0.0.0]: 255.255.255.0
Should interface e0m take over a partner IP address during failover? [n]: n

Step 8: Enter none for flow control on the e0M management interface, and then enter n to continue the setup script.
Please enter flow control for e0M {none, receive, send, full} [full]: none
Would you like to continue setup through the web interface? [n]: n

Step 9: Enter the IP address of the default gateway for the controller using the default gateway for the iSCSI IP subnet. By using the iSCSI subnet IP default gateway, you ensure that iSCSI traffic always uses the high-speed 10-Gb Ethernet interfaces configured as a VIF.
Please enter the name or IP address of the default gateway: 10.4.62.1

Step 10: Press Enter to skip assigning an administration host.
The administration host is given root access to the filer’s /etc files for system administration. To allow /etc root access to all NFS clients enter RETURN below.
Please enter the name or IP address of the administration host:

Step 11: Enter the time zone and the physical location for this device.
Please enter timezone [GMT]: PST8PDT
Where is the filer located? []: Room 23

Step 12: Enter y to enable domain name server (DNS) IP-address resolution, enter your DNS domain name, and then enter the IP address of your DNS host. If you have more than one DNS host, enter the IP addresses now.
Do you want to run DNS resolver? [n]: y
Please enter DNS domain name []: cisco.local
You may enter up to 3 nameservers
Please enter the IP address for first nameserver []: 10.4.48.10
Do you want another nameserver? [n]: n

Step 13: Enter n to bypass enabling the network information service (NIS), and then press Enter.
Do you want to run NIS client? [n]: n
This system will send event messages and weekly reports to NetApp Technical Support. To disable this feature, enter "options autosupport.support.enable off" within 24 hours. Enabling AutoSupport can significantly speed problem determination and resolution should a problem occur on your system. For further information on AutoSupport, please see: http://now.netapp.com/autosupport/
Press the return key to continue.

Step 14: Enter n to bypass the remote LAN module setup, and then enter n to bypass the shelf alternate control path management setup.
Would you like to configure the SP LAN interface [y]: n

Step 15: Enter a password for the administrative (root) access to your storage controller.
The initial aggregate currently contains 3 disks; you may add more disks to it later using the "aggr add" command.
Setting the administrative (root) password for NetAppx ...
New password: XXXX
Retype new password: XXXX
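Step 7 warns that e0M must be on a different IP subnet than iSCSI traffic. With the guide's example addressing (10.4.63.123 for e0M, the 10.4.62.0/24 subnet for iSCSI), the separation can be sanity-checked with Python's standard ipaddress module; the helper is illustrative, not a NetApp tool:

```python
import ipaddress

def on_same_subnet(ip_a: str, ip_b: str, prefix: int = 24) -> bool:
    """Return True if two host addresses fall in the same IPv4 subnet."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
    return net_a == net_b

mgmt_ip = "10.4.63.123"   # e0M management address from Step 7
iscsi_gw = "10.4.62.1"    # iSCSI default gateway from Step 9

# The guide's addressing keeps management and iSCSI separated:
print(on_same_subnet(mgmt_ip, iscsi_gw))  # False - different /24 subnets
```

A True result here would indicate the overlap the guide warns against.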



Step 16: Log in to the NetApp controller by entering the password that you just configured.
Password: XXXX
NetAppx>

Step 17: When the setup is complete, enter reboot. This transfers the information that you have entered to the storage system.
NetAppx>reboot

Tech Tip

If you do not reboot the system, the information you entered may not be saved.

Procedure 2 Install OnCommand System Manager

NetApp OnCommand System Manager is the simple yet powerful management tool for NetApp storage. This guide uses the OnCommand System Manager to complete the storage system setup.

Step 1: Install NetApp OnCommand System Manager 2.0 from the software CD, and then launch System Manager.

Step 2: Click Add. The Add a System window appears.

Step 3: In the Add a System window, enter the IP Address assigned to the management port e0M in Step 7, and then click the arrow next to More.

Step 4: In the User Name box, enter root, in the Password box, enter the password that you configured in Step 15 of Procedure 1, and then click Add.

The home page is updated with the new controller with the IP address that you statically assigned via CLI.


Step 5: On the Home page, select the new controller IP address, and then on the top bar, click Login. The system logs you in using the username and password that you just entered.
After you log in, the new tab displays the host name and IP address of the controller.

Procedure 3 Get started with array management

Step 1: To view system details, click the controller name in the left pane. The dashboard view displays Storage Capacity, Notifications/Reminders, and Properties, including name, IP address, model, system ID, Data ONTAP version, system uptime, and number of volumes, aggregates, and disks.

Step 2: To manage these items, click the green arrows to the right of the items.
System Manager also provides basic performance graphs for CPU utilization, total I/O, combined operations of all protocols, and latency for all protocols.

Step 3: System Manager also offers Notifications and Reminders roll-ups.
The Notifications area shows the current event list by scanning the syslog. The Reminders area shows reminders or a to-do list. To go directly to the source of the reminder, click the green arrow next to it.
Your NetApp storage system is configured and ready to use. You can now perform storage management (manage disks, aggregates, volumes, qtrees, logical unit numbers (LUNs)), protocol management (CIFS, NFS, iSCSI, FC, FCoE), and system configuration (network, licenses, SNMP, users, groups).

Process

Provisioning Storage
1. Add SAN protocol licenses
2. Create an aggregate
3. Configure flexible volumes

Storage provisioning on NetApp storage is easy, involving only a few steps to provision a LUN or file share. Aggregates form the foundation storage layer from which flexible volumes and then LUNs are stored. The layers of storage virtualization offer a number of advantages to manage and optimize the storage, protection, and retention of your data.



Procedure 1 Add SAN protocol licenses

Before you can view or configure FC/FCoE or iSCSI storage, you must add the licenses.

Step 1: In System Manager, navigate to Configuration > System Tools > Licenses, and then in the right pane, click Add.

Step 2: On the Add License screen, enter the license keys in the New license key box, and then click Add. If using iSCSI and FCoE, you may have multiple license keys.

Step 3: After adding licenses in System Manager, ensure that under Storage, LUNs appear, and under Configuration > Protocols, the necessary protocols appear.

Step 4: Navigate to Configuration > Protocols > FC/FCoE, and then in the right pane, click Start. If the FCoE service on the storage system is already running, the Start button is grayed-out. Alternatively, select and enable iSCSI if configuring iSCSI storage.



Procedure 2 Create an aggregate

An aggregate is NetApp’s virtualization layer, which abstracts physical disks on a storage device from logical data sets that are referred to as flexible volumes. Aggregates are the means by which the total I/O operations per second (IOPS) available from all of the individual physical disks are pooled as a resource. Aggregates are well-suited to meet users’ differing security, backup, performance, and data-sharing needs, as well as the most unpredictable and mixed workloads.
NetApp recommends that, whenever possible, you use a separate, small aggregate with RAID-DP to host the root volume. This stores the files required for the GUI management tools for the NetApp storage system. Place the remaining storage into a small number of large aggregates. This provides optimal performance because of the ability of a large number of physical spindles to service I/O requests. On smaller arrays, it may not be practical to have more than a single aggregate, due to the restricted number of disk drives on the system. In these cases, it is acceptable to have only a single aggregate.

Step 1: In the left pane, select Storage, and in the right pane, click Storage Configuration Wizard, and then click Next.

Step 2: On the Storage Configuration Wizard page, select Create a new aggregate (Recommended), and then click Next.
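As a rough illustration of the spindle-pooling point above: RAID-DP dedicates two parity disks per RAID group, so a minimal root aggregate leaves few data spindles, while one large aggregate pools many spindles to service I/O. A back-of-the-envelope sketch (the 16-disk RAID group size and disk counts are assumed examples, not NetApp sizing guidance):

```python
import math

RAIDDP_PARITY_PER_GROUP = 2  # RAID-DP: one parity disk plus one double-parity disk

def data_disks(total_disks: int, raid_group_size: int = 16) -> int:
    """Disks left to serve data I/O after RAID-DP parity, ignoring spares."""
    groups = math.ceil(total_disks / raid_group_size)
    return total_disks - groups * RAIDDP_PARITY_PER_GROUP

# A minimal 3-disk RAID-DP root aggregate vs. one large data aggregate:
print(data_disks(3))   # 1 data spindle in the root aggregate
print(data_disks(21))  # 17 data spindles pooled to service I/O
```

The larger the aggregate, the smaller the fraction of spindles lost to parity and the more spindles available to each flexible volume.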



Step 3: If your configuration requires that you create default CIFS shares and NFS exports, select the check boxes, and then click Next.

Step 4: On the Configuration Summary page, verify the settings, and then click Next.

Step 5: Click Finish. A window appears with a message that the creation of an aggregate takes place in the background, allowing you to begin storing data immediately.

Step 6: Click OK. The right pane of the Storage screen is refreshed with Create Aggregate, Create Volume, and Create Qtree links.



Reader Tip

For instructions about how to manually configure an aggregate, see the Creating an Aggregate section in the Data ONTAP Storage Management Guide here:
http://hd.kvsconsulting.us/netappdoc/801docs/pdfs/ontap/smg.pdf

Procedure 3 Configure flexible volumes

FlexVol volumes are thin storage containers that can contain LUNs and/or file shares that are accessed by servers over FC, FCoE, iSCSI, NFS, or CIFS. A FlexVol volume is a virtual volume that you can manage and move independently from physical storage. It can be created and resized larger or smaller as your application needs change.

Step 1: Navigate to Storage > Volumes, and then click Create.

Step 2: On the General tab, configure the following properties, and then click Create.
• Name—Enter a name for the volume. NetApp recommends that you use a combination of the hostname, physical disk, and replication policy. (Example: HostB_Disk1_4hmirror)
• Aggregate—Specify an aggregate for the LUN. Keep the value aggr1 that gets populated.
• Storage Type—SAN
• Total Size—Enter the capacity size of the LUN.
• Snapshot reserve—Select the percentage of space in the LUN that you want to dedicate for Snapshot data. The default setting is 5%, which means that 5% of the LUN will not be available for other data storage. You can choose a higher value or as little as 0%, depending on your requirements. NetApp recommends configuring all volumes with 0% and disabling the default Snapshot schedule.
• Thin Provisioned Space reclamation—Select this option if you want to create a thinly provisioned volume that consumes space on the disk only as data is written.

Step 3: Click Create.

Tech Tip

If you are using this volume for Cisco Unified Communications services, such as Cisco Unified Communications Manager or Cisco Unity Connection, Cisco recommends that you choose thick provisioning. For a description of thin provisioning, see the Increasing Efficiency and Flexibility with Advanced Features section of this guide.
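The Snapshot reserve bullet above is simple arithmetic: the reserve percentage is carved out of the volume and is unavailable for other data. A quick sketch of the calculation (the 500 GB volume size is a hypothetical example):

```python
def usable_space_gb(volume_gb: float, snap_reserve_pct: float) -> float:
    """Space left for data after the Snapshot reserve is set aside."""
    return volume_gb * (100 - snap_reserve_pct) / 100.0

print(usable_space_gb(500, 5))  # 475.0 with the default 5% reserve
print(usable_space_gb(500, 0))  # 500.0 with the 0% NetApp recommends here
```

This is why the guide suggests 0% reserve with the default Snapshot schedule disabled when the space is needed for data.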



Process

Deploying FCoE or iSCSI Storage

1. Configure the data-center core


2. Add an initiator group

The data-center core deployed in the Cisco SBA—Data Center Deployment


Guide provides an Ethernet transport that supports IP traffic, as well as
FCoE. The Cisco SBA data-center core can also support native FC. This
guide uses the Ethernet transport to connect the NetApp storage array to
the network for iSCSI or FCoE operation.

Procedure 1 Configure the data-center core

Before you configure the storage controller for connectivity to the servers, you must prepare the data-center core network. Details for configuring Ethernet and FCoE in the Cisco SBA data-center core are provided in the Cisco SBA—Data Center Deployment Guide. This procedure shows you how to prepare for FCoE and iSCSI storage, and assumes that the Cisco Nexus data-center core switches have been configured for virtual port channel (vPC) operations according to the Cisco SBA—Data Center Deployment Guide.
Follow the steps in this procedure to configure both FCoE and iSCSI traffic. If you are configuring for iSCSI traffic only, perform Step 7 through Step 10, and then skip to Procedure 2.

Step 1: Log in to the first Cisco Nexus 5500UP data center core switch.

Step 2: Ensure that you have licenses available for the FCoE ports to be
used. An FCoE license is required to keep FCoE enabled past the 90-day
grace period.
dc5548ax# install license bootflash://license.lic
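Step 3, next, verifies with show feature that fcoe, lacp, and vpc are enabled. When auditing several switches, that check can be scripted against captured CLI output; a minimal sketch in Python (the sample output is abbreviated from this guide, and the parser is illustrative, not a Cisco tool):

```python
SHOW_FEATURE_OUTPUT = """\
Feature Name          Instance  State
fcoe                  1         enabled
lacp                  1         enabled
vpc                   1         enabled
"""

def enabled_features(show_feature_text: str) -> set:
    """Return the names of features reported as 'enabled'."""
    features = set()
    for line in show_feature_text.splitlines():
        parts = line.split()
        # Data rows end in a state column; the header row does not say 'enabled'
        if len(parts) >= 3 and parts[-1] == "enabled":
            features.add(parts[0])
    return features

required = {"fcoe", "lacp", "vpc"}
missing = required - enabled_features(SHOW_FEATURE_OUTPUT)
print(missing or "all required features enabled")
```

Any feature name left in `missing` would need the corresponding `feature` command from Step 4.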



Step 3: Verify that FCoE, LACP, and vPC are enabled.
dc5548ax# show feature
Feature Name          Instance  State
fcoe                  1         enabled
lacp                  1         enabled
vpc                   1         enabled

Step 4: If a feature in Step 3 is not enabled, enable it.
Example
feature fcoe

Step 5: If you have not already configured your Cisco Nexus 5500UP switches for QoS by following the Configuring the Data Center Core procedure in the Cisco SBA—Data Center Deployment Guide, you must enable quality of service (QoS) for FCoE operation on the Cisco Nexus 5500UP.
Four lines of QoS statements map the baseline system QoS policies for FCoE. Without these commands, the virtual FC interface will not function when activated. If you followed the Cisco SBA—Data Center Deployment Guide to deploy your network, you should have already executed a more comprehensive QoS policy, which includes FCoE traffic classification, so you can skip this step. If you use the commands below for the baseline FCoE QoS operation, you will overwrite your existing QoS policy.
system qos
service-policy type qos input fcoe-default-in-policy
service-policy type queuing input fcoe-default-in-policy
service-policy type queuing output fcoe-default-out-policy
service-policy type network-qos fcoe-default-nq-policy
end

Tech Tip

All FC and FCoE control and data traffic is automatically classified into the FCoE system class, which provides a no-drop service. On the Cisco Nexus 5010 and Cisco Nexus 5020, this class is created automatically when the system starts up. The class is named class-fcoe in the CLI.

Step 6: Create a VLAN that carries FCoE traffic to the storage controller, and map it to the VSAN for the respective SAN fabrics.

Table 1 - FC VSAN to FCoE VLAN mapping
Data center core switch   VSAN   FCoE VLAN
Nexus 5548UP-1            4      304
Nexus 5548UP-2            5      305

• On the first Cisco Nexus 5500UP, map VLAN 304 to VSAN 4. VLAN 304 carries all VSAN 4 traffic over the trunk.
vsan database
vsan 4
vlan 304
fcoe vsan 4
exit
• On the second Cisco Nexus 5500UP, map VLAN 305 to VSAN 5.
vsan database
vsan 5
vlan 305
fcoe vsan 5
exit

Step 7: Create a VLAN on each Cisco Nexus 5500UP switch to carry iSCSI traffic.
vlan 162
name iSCSI

Step 8: Create a Layer 3 switch virtual interface (SVI) on each Cisco Nexus 5500UP switch to provide a default route for the iSCSI IP subnet.
• Configure the first Cisco Nexus 5500UP switch.
interface Vlan162
no shutdown
no ip redirects
ip address 10.4.62.2/24
hsrp 162
priority 110
ip 10.4.62.1
• Configure the second Cisco Nexus 5500UP switch.



interface Vlan162
no shutdown
no ip redirects
ip address 10.4.62.3/24
hsrp 162
ip 10.4.62.1

Step 9: Configure a vPC port channel to connect to the storage controller to transport both iSCSI and FCoE traffic. If you are deploying iSCSI only, you can omit VLAN 304 or 305 on the trunk.
• Configure the first Cisco Nexus 5500UP switch.
interface port-channel 27
switchport mode trunk
switchport trunk native vlan 162
switchport trunk allowed vlan 162,304
spanning-tree port type edge trunk
vpc 27
• Configure the second Cisco Nexus 5500UP switch.
interface port-channel 27
switchport mode trunk
switchport trunk native vlan 162
switchport trunk allowed vlan 162,305
spanning-tree port type edge trunk
vpc 27

Step 10: Map the physical port connected to the storage array on each Cisco Nexus 5500UP switch to the port channel. If you are deploying iSCSI only, you can omit VLAN 304 or 305 on the trunk.
• Configure the first Cisco Nexus 5500UP switch.
interface ethernet1/27
channel-group 27 mode active
switchport mode trunk
switchport trunk native vlan 162
switchport trunk allowed vlan 162,304
• Configure the second Cisco Nexus 5500UP switch.
interface ethernet1/27
channel-group 27 mode active
switchport mode trunk
switchport trunk native vlan 162
switchport trunk allowed vlan 162,305

If you are deploying iSCSI only, you can proceed to Procedure 2.

Step 11: Create a virtual fibre-channel (vFC) interface. The vFC transports FCoE traffic to the storage controller.
• Configure the first Cisco Nexus 5500UP switch.
interface vfc27
bind interface port-channel 27
switchport trunk allowed vsan 4
no shutdown
vsan database
vsan 4 interface vfc 27
• Configure the second Cisco Nexus 5500UP switch.
interface vfc27
bind interface port-channel 27
switchport trunk allowed vsan 5
no shutdown
vsan database
vsan 5 interface vfc 27

Tech Tip

A vFC port can be bound to a specific Ethernet port; this is done when a vPC has not been deployed in the design. A vFC port can be bound to a specific port world-wide name (PWWN) when an FIP snooping bridge is attached. In the case of a vPC, you are binding to the port channel as a requirement for vPC and LACP to work with the converged network adapter. You do not have to assign the same number to the vFC as the physical port number, but doing so improves manageability.
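The fabric-A/fabric-B symmetry in Table 1 (VSAN 4/VLAN 304 on the first switch, VSAN 5/VLAN 305 on the second) makes the Step 6 mapping easy to template when many fabrics are involved. A sketch that renders the vsan database stanza from the table rows (illustrative automation, not a required step in this guide):

```python
# (switch, vsan, fcoe_vlan) rows taken from Table 1 of this guide
FABRICS = [
    ("Nexus 5548UP-1", 4, 304),
    ("Nexus 5548UP-2", 5, 305),
]

def fcoe_vlan_config(vsan: int, vlan: int) -> str:
    """Render the NX-OS commands that map an FCoE VLAN to its VSAN."""
    return "\n".join([
        "vsan database",
        f"vsan {vsan}",
        f"vlan {vlan}",
        f"fcoe vsan {vsan}",
        "exit",
    ])

for switch, vsan, vlan in FABRICS:
    print(f"! {switch}")
    print(fcoe_vlan_config(vsan, vlan))
```

Keeping the two fabrics deliberately asymmetric in VSAN and VLAN numbers, as the table does, preserves SAN A/B isolation.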



Step 12: Obtain the FC PWWN for the storage controller (target) and the server (initiator) to configure zoning. To obtain the storage controller PWWNs, enter the following command on the storage controller CLI.
NetAppx> fcp portname show
Portname                 Adapter
--------                 -------
50:0a:09:81:8d:60:dc:42  1a
50:0a:09:82:8d:60:dc:42  1b
You can also use the show flogi database and show fcns database commands on the Cisco Nexus 5500UP switches to display active PWWNs on the FC fabric. The FC initiators and targets should be enabled and connected to the FC network to display the active nodes.

Step 13: Create the FC zoneset and zone for FCoE operation.
• Configure the first Cisco Nexus 5500UP switch.
zone name p30-ucsc200m2-2-vhba3_netapp-e1a vsan 4
member pwwn 50:0a:09:81:8d:60:dc:42
member pwwn 20:00:cc:ef:48:ce:c1:f9
exit
zoneset name FCOE_4 vsan 4
member p30-ucsc200m2-2-vhba3_netapp-e1a
exit
zoneset activate name FCOE_4 vsan 4
• Configure the second Cisco Nexus 5500UP switch.
zone name p30-ucsc200m2-2-vhba4_netapp-e1b vsan 5
member pwwn 50:0a:09:82:8d:60:dc:42
member pwwn 20:00:cc:ef:48:ce:c1:fa
exit
zoneset name FCOE_5 vsan 5
member p30-ucsc200m2-2-vhba4_netapp-e1b
exit
zoneset activate name FCOE_5 vsan 5

Step 14: Display active zoneset members. The asterisk (*) at the beginning of the fcid output line indicates an active member. Note that you will have to complete the NetApp FCoE configuration section before the active member is logged in to the FC fabric and becomes active.
• Configure the first Cisco Nexus 5500UP switch.
dc5548a# show zoneset active vsan 4
zoneset name FCOE_4 vsan 4
zone name p30-ucsc200m2-2-vhba3_netapp-e1a vsan 4
* fcid 0x13000e [pwwn 50:0a:09:81:8d:60:dc:42]
* fcid 0x130006 [pwwn 20:00:cc:ef:48:ce:c1:f9]
• Configure the second Cisco Nexus 5500UP switch.
dc5548b# show zoneset active vsan 5
zoneset name FCOE_5 vsan 5
zone name p30-ucsc200m2-2-vhba4_netapp-e1b vsan 5
* fcid 0x93000e [pwwn 50:0a:09:82:8d:60:dc:42]
* fcid 0x930006 [pwwn 20:00:cc:ef:48:ce:c1:fa]

Procedure 2 Add an initiator group

Initiator groups, which are commonly referred to as igroups, access control lists, or LUN masking lists, are a way to group LUNs according to the parameters of the environments or servers they service. You create an initiator group by specifying a collection of PWWNs of initiators in an FC network. You use the PWWNs of the host’s HBAs. Initiator groups can be created during LUN creation or independently, but when you first create a LUN, it is helpful if the appropriate initiator group already exists so that it can be associated to the LUN directly within the LUN creation wizard. Initiators can be programmed for FCoE or iSCSI operation. An iSCSI or FCoE initiator group may be associated to a LUN for iSCSI or FCoE access, and holds one or more initiator IDs.
Before you can view LUNs or initiator groups on the NetApp filer, you must have added licenses for FC/FCoE and iSCSI by following Procedure 1, in the Provisioning Storage section of this guide.

Step 1: Navigate to Storage > LUNs, and in the right pane, click the Initiator Groups tab, and then click Create.
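The show zoneset active output in Step 14 above can also be checked programmatically when validating several fabrics; the asterisk marks members that are logged in. A parsing sketch (the captured text is the first switch's output from this guide; the parser itself is illustrative, not a Cisco tool):

```python
import re

ZONESET_OUTPUT = """\
zoneset name FCOE_4 vsan 4
  zone name p30-ucsc200m2-2-vhba3_netapp-e1a vsan 4
  * fcid 0x13000e [pwwn 50:0a:09:81:8d:60:dc:42]
  * fcid 0x130006 [pwwn 20:00:cc:ef:48:ce:c1:f9]
"""

def active_pwwns(show_zoneset_text: str) -> list:
    """Return PWWNs of members marked active ('*') in show zoneset output."""
    pattern = re.compile(r"^\s*\*\s+fcid\s+\S+\s+\[pwwn\s+([0-9a-f:]+)\]",
                         re.MULTILINE)
    return pattern.findall(show_zoneset_text)

print(active_pwwns(ZONESET_OUTPUT))
```

If a zone member never shows an asterisk, the corresponding initiator or target has not completed fabric login, as the note in Step 14 explains.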



Step 2: Click the General tab, and then make the following changes:
• Name—Enter a name for the initiator group
• Operating System—Choose the appropriate operating system for the host that will be accessing this storage LUN
• Type—Select the type of protocol to be used
• Enable ALUA—Leave this check box cleared

Step 3: Click the Initiators tab, and then click Add.
• If you are creating an FCoE initiator group, the name of the initiator must match the PWWN identified in the Cisco Nexus 5500UP zone configuration of the host machine that is accessing storage. You configured this in Step 13 of Procedure 1.
• If you are creating an iSCSI initiator group, add the initiator’s world-wide name (WWN) or iSCSI qualified node (IQN) name for the iSCSI initiator group. The iSCSI initiator must be enabled, active, and pointed to the NetApp filer (target).
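A malformed initiator name is a common reason an igroup never matches its host. The two identifier formats used above, PWWN for FCoE and IQN for iSCSI, can be sanity-checked with simple patterns (these regular expressions follow general WWPN and RFC 3720 iqn conventions; they are an illustrative approximation, not NetApp's validator):

```python
import re

# 8 colon-separated hex bytes, e.g. 50:0a:09:81:8d:60:dc:42
PWWN_RE = re.compile(r"^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$")
# iqn.yyyy-mm.reversed.domain[:optional-string], per RFC 3720 conventions
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[^:\s]+(:.+)?$")

def initiator_kind(name: str) -> str:
    """Classify an initiator identifier as 'pwwn', 'iqn', or 'unknown'."""
    if PWWN_RE.match(name):
        return "pwwn"
    if IQN_RE.match(name):
        return "iqn"
    return "unknown"

print(initiator_kind("20:00:cc:ef:48:ce:c1:f9"))               # pwwn
print(initiator_kind("iqn.1992-08.com.netapp:sn.151741374"))   # iqn
```

Both example identifiers are the ones used elsewhere in this guide.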



Step 4: Click Create. After the group is created, the group name in the upper half of the right pane is highlighted, and initiators that were added to the group are shown in the bottom half of the screen.

Tech Tip

To determine the target WWN from the controller using a console CLI, use the fcp or iscsi nodename command.
NetAppx> iscsi nodename
iSCSI target nodename: iqn.1992-08.com.netapp:sn.151741374
NetAppx> fcp portname show
Portname                 Adapter
--------                 -------
50:0a:09:81:8d:60:dc:42  1a
50:0a:09:82:8d:60:dc:42  1b

Process

Adding a LUN
1. Create a LUN

You can create the following four types of storage objects on the same NetApp storage system:
• Aggregate
• Volume
• LUN
• Qtree
Creating a LUN automatically creates a default volume as part of the process. To configure a volume with properties other than the defaults, see Procedure 3.



Procedure 1 Create a LUN

LUNs are logical units of storage provisioned from a NetApp storage system directly to servers. Hosts can access the LUNs as physical disks using FC, FCoE, or iSCSI protocols. The following steps illustrate how to configure an FCoE LUN.

Step 1: In the left pane, select Storage, and then in the right pane, click Create LUN.

Step 2: On the Create LUN Wizard Welcome page, click Next.

Step 3: On the General Properties page, enter the following configuration details, and then click Next.
• Name—LUN name
• Description (optional)—Description for the LUN
• Type—Operating system type
• Size—LUN size
• Thin Provisioned—Select this check box unless your deployment meets the exception discussed in the following Tech Tip.

Tech Tip

If you are using this volume for Cisco Unified Communications services, such as Cisco Unified Communications Manager or Cisco Unity Connection, Cisco recommends that you choose thick provisioning. For a description of thin provisioning, see the Increasing Efficiency and Flexibility with Advanced Features section of this guide.



Each operating system (OS) maps data to LUNs slightly differently. The OS-type parameter determines the on-disk layout of the LUN. It is important to specify the correct OS-type to make sure that the LUN is properly aligned with the file system on it. This is because optimal performance with the storage system requires that I/O is aligned to a 4096-byte boundary. If an I/O is unaligned, it can:
• Cause an increase in per-operation latency.
• Require the storage system to read-from or write-to more blocks than necessary to perform logical I/O.
This issue is not unique to NetApp storage. Any storage vendor or host platform can exhibit this problem. After the LUN is created, you cannot modify the LUN host OS type.

Tech Tip

If you have selected the incorrect OS type and then created the LUN, the LUN at the storage array will be misaligned. To correct this problem, you must create a new LUN, and then select the correct OS type.

Step 4: Select the Select an existing volume or qtree for this LUN option, and then click Browse.
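The 4096-byte boundary requirement above reduces to modular arithmetic: a partition is aligned when its starting byte offset is an even multiple of 4 KiB. A small sketch using two classic starting offsets (sector 63, the legacy MBR default, versus a 1 MiB offset):

```python
SECTOR = 512          # bytes per logical sector
ALIGNMENT = 4096      # the 4 KiB boundary discussed above

def is_aligned(start_sector: int) -> bool:
    """True if a partition starting at this sector sits on a 4 KiB boundary."""
    return (start_sector * SECTOR) % ALIGNMENT == 0

print(is_aligned(63))    # False - legacy MBR default is misaligned
print(is_aligned(2048))  # True - 1 MiB offset lands on the boundary
```

Choosing the correct OS type lets the storage system account for the offset the host's partitioning scheme imposes.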



Step 5: In the Select a volume or a qtree window, drill down to the location of the volume that you configured in Step 2 of Procedure 3, click OK, and then click Next.

Step 6: By default this shows the initiator group (host) that was created earlier. Click Create to open the Create Initiator Group window. Under Map, check the box, and then, if you are creating an operating system boot LUN, enter 0 in the LUN ID box. If you are creating any other type of LUN, enter an unused LUN ID or allow Data ONTAP to assign the LUN ID automatically.

Step 7: Click Next.



Step 8: On the LUN Summary page, verify the settings, and then click Next.

Step 9: On the Completing the Create LUN wizard page, click Finish.
After the LUN has been created, it is accessible to the FCoE host that was identified in the initiator mapping.



Step 10: To view the LUN that you created, navigate to Storage > LUNs,
and in the right pane, click the LUN Management tab. The LUN is listed with
its Status as Online in the top half of the screen. You can view the LUN’s
Initiator Groups and Initiators by clicking the respective tabs in the bottom
half of the screen.

This concludes the NetApp configuration for FCoE or iSCSI.


You now have an end-to-end storage configuration in place. You can activate
any of the available Ethernet protocols (NFS, CIFS, iSCSI) and you can
access them from the VIF IP addresses. In addition, you can map to any
iSCSI LUNs created on the controller(s) that you have zoned to.



Increasing Efficiency and Flexibility with Advanced Features

Thin Provisioning
Traditional storage provisioning and preallocation of storage on disk are methods that storage administrators understand well. It is a common practice for server administrators to overprovision storage to avoid running out of storage, and thereby avoid the associated application downtime when they expand the provisioned storage to new levels.
Although no system can run at 100% storage utilization, there are storage virtualization methods that allow administrators to address and oversubscribe storage in the same manner as server resources (such as CPU, memory, and networking). This form of storage virtualization is referred to as thin provisioning.
While traditional provisioning preallocates storage, thin provisioning provides storage on-demand. The value of thin-provisioned storage is that storage is treated as a shared resource pool and is consumed only as each individual application requires it. This sharing increases the total utilization rate of storage by eliminating the unused but provisioned areas of storage that are associated with traditional storage. The drawback to thin provisioning and oversubscribing storage is that, without the addition of physical storage, if every application requires its maximum possible storage at the same time, there will not be enough storage to satisfy the requests.
NetApp FlexVol uses thin provisioning to allow LUNs that are presented as physical disks to be provisioned to their total capacity, yet consume only as much physical storage as is required to store data. LUNs connected as pass-through disks can also be thin provisioned. Thin provisioning applies equally to file shares.

NetApp Deduplication and Compression
With NetApp deduplication, server deployments can eliminate the duplicate data in their environment, enabling greater storage utilization. Deduplication can be seamlessly introduced into the server environment without having to make any changes to server administration, practices, or tasks. Deduplication runs on the NetApp storage system at scheduled intervals and does not consume any CPU cycles on the server.
Deduplication can be extremely helpful for virtual server scenarios such as fixed-size virtual hard drives, frequent creation and deletion of virtual disk files on the SAN LUNs, and data in the child VM.
Deduplication is enabled on the NetApp volume, and the amount of data deduplication realized is based on the commonality of the data stored in a deduplication-enabled volume.
NetApp data compression is a new feature that compresses data as it is written to NetApp FAS and V-Series storage systems. Like deduplication, NetApp data compression works in both SAN and NAS environments, and is application and storage tier agnostic.

NetApp Snapshot
A NetApp Snapshot copy is a locally retained, frozen, space-efficient, read-only view of a volume or an aggregate. Its improved stability, scalability, recoverability, and performance make it more efficient than other storage snapshot technologies.
Snapshot copies facilitate frequent low-impact, user-recoverable online backup of files, directory hierarchies, LUNs, and application data. They offer a secure and simple method of restoring data so that users can directly access the Snapshot copies and recover from accidental file deletion, data corruption, or modification. The SnapManager suite of products, which is available for various enterprise applications, uses the features of Snapshot copies and delivers an enterprise-class data protection solution.

NetApp FlexClone
NetApp FlexClone technology creates true cloned volumes, which are instantly replicated data sets, files, LUNs, and volumes that use no additional storage space at the time of creation. A FlexClone volume is a writable point-in-time copy generated from the Snapshot copy of a FlexVol volume. It has all the features of a FlexVol volume, including growing, shrinking, and being the base for a Snapshot copy or even another FlexClone volume.
FlexClone volumes deployed in a virtualized environment offer significant savings in dollars, space, and energy. Additionally, the performance of a FlexClone volume or file is identical to the performance of any other FlexVol volume or individual file.
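Efficiency features like the deduplication described above are usually quoted as a percentage of blocks removed, from which the effective logical capacity of a pool can be estimated. A rough, hypothetical sketch (the 10 TB pool and 30% savings figure are assumed examples, not NetApp guarantees):

```python
def effective_logical_capacity(physical_gb: float, savings_pct: float) -> float:
    """Logical data that fits once deduplication removes duplicate blocks."""
    return physical_gb / (1 - savings_pct / 100.0)

# Hypothetical: 10 TB of physical storage with 30% deduplication savings
print(round(effective_logical_capacity(10000, 30)))  # about 14286 GB of logical data
```

Actual savings depend on the commonality of the stored data, as noted above, so figures like this should be treated as planning estimates only.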

Backup, Disaster Recovery, and High Availability

Backup and recovery are the most critical components of the data protection plan. A backup is crucial to protect and recover business information if data is changed unexpectedly, a system is compromised, or a site is lost.
NetApp backup and recovery solutions equip users to increase the reliability of data protection while minimizing management overhead and cost. These solutions fit into any strategy, enabling users to meet their service-level requirements.

Backup and Recovery Concepts
Data protection plans for a virtualized environment become more critical as consolidation brings all of the crucial data into one place, so that any failure results in a massive impact on business applications.
Backup tasks running in the server virtualized infrastructure are often resource-intensive (CPU, memory, disk I/O, and network) and can result in bottlenecks that adversely affect the performance of the other business-critical applications that share the environment. Backup schedules must be closely coordinated with the applications that are running on the available resources.

Disaster Recovery
Business operations depend heavily on information systems and the related IT infrastructure. A minor application outage can significantly affect operations, and the effect of data loss is even more critical. There are various metrics that are commonly used in designing a business continuity plan. Two of the most frequently used metrics are recovery point objective (RPO) and recovery time objective (RTO). RPO, measured in minutes and hours, describes how far the recovered data are out of sync with the production data at the time of disaster. RTO, measured in minutes, describes how fast the operations can be restored.
Several approaches have been developed to increase data availability and business continuity in case of disaster occurring at the hardware or software level, and even site failures. Backup methods primarily provide a way to recover from data loss from an archived medium—a high-level data protection method. Redundant hardware setups can provide second-level protection to mitigate
NetApp offers the SnapMirror solution, which empowers IT infrastructures with a fast, flexible data replication mechanism over Ethernet and FC networks. It is a key component to consider when designing and deploying enterprise data protection plans. SnapMirror is an efficient data replication solution that takes advantage of underlying NetApp technologies, such as Snapshot, FlexClone, and deduplication. Disaster recovery is its primary objective, and SnapMirror can also assist in other critical application areas, such as disaster recovery testing, application testing, load sharing, remote tape archiving, and remote data access.

Business Continuance Concepts
Disaster can occur in any IT infrastructure, and a data protection plan is even more critical for environments that are consolidated by using server virtualization. This is true because consolidation adds complexity by sharing reduced physical hardware resources for the applications and the business-critical data that are running.
The infrastructure must be designed to pay special attention to challenges that can crop up in a virtualized environment. These challenges include:
• Less time (or possibly no time) available to schedule downtime windows to perform cold-backup on virtual machines.
• Performing hot backup of virtual machines can result in inconsistent backup copies, which are of no use during recovery.
• Various operating system instances in the infrastructure, which can make it difficult to identify a consistent state for backup.
• Replicating data over LAN or WAN can consume twice as much of the available resources.
• Increased total cost of ownership (TCO) and unused infrastructure when planning for identical resources at the disaster recovery site.

NetApp Advanced Solutions
NetApp offers solutions that complement the server virtualization solutions and help to mitigate these challenges. Solutions such as NetApp Snapshot, FlexClone, compression, and deduplication enable an architect to design a complete data protection solution and to efficiently use available resources.

NetApp SnapMirror
NetApp SnapMirror software is a simple, flexible, cost-effective disaster-
damage caused by hardware failures. Data mirroring is another mechanism recovery and data-distribution solution that is deployed for more of the
to increase data availability and minimize downtime. enterprise application infrastructure. Data is replicated across LAN or WAN,

August 2012 Series Increasing Efficiency and Flexibility with Advanced Features 27
offering high availability and faster disaster recovery for business-critical applications. Continuous data mirroring and mirror updates across multiple NetApp storage systems keep the mirrored data available for multiple purposes. Businesses in different geographical locations can take advantage of SnapMirror and make local copies of mirrored data available to all locations, enhancing efficiency and productivity.

NetApp SnapVault

NetApp SnapVault leverages disk-based backup and block-level incrementals for reliable, low-overhead backup and recovery of NetApp storage, and is suitable for any environment.

With SnapVault, data protection occurs at the block level, copying only the data blocks that have changed since the last backup rather than entire files. This enables backups to run more frequently and to use less capacity, because no redundant data is moved or stored.

For distributed organizations, this not only makes disk-based backup cost-effective, it also offers the option of backing up directly from remote facilities to a core data center, centralizing management and minimizing investment needs at the edge.

Monitoring and Management

Storage monitoring and management are critical to the success of the server environment. NetApp offers tools to monitor the health of storage systems, provide alerts, generate reports, and manage storage growth.

The OnCommand Management Software Portfolio

NetApp OnCommand management software is a family of products that improve storage and service efficiency by letting IT administrators control, automate, and analyze shared storage infrastructure.

OnCommand System Manager

System Manager is a simple yet powerful management tool for NetApp storage: easy to use for small to medium-sized businesses, and efficient for large enterprises and service providers.

With System Manager, you don't need to be a storage expert to manage NetApp storage systems. System Manager enables wizard-driven setup of aggregates, volumes, LUNs, qtrees, shares, and exports. It manages both NAS (CIFS, NFS) and SAN (iSCSI, FC) in a single tool with a common look and feel, and it allows easy configuration of storage efficiency features such as thin provisioning, data compression, and deduplication.

System Manager supports integration with VMware ESX for virtual storage management, and despite its ease of use, it can graphically manage advanced storage features such as SnapMirror, SyncMirror, SnapLock, vFiler units, and Vservers. System Manager is also the means to manage the newest NetApp innovation: Data ONTAP 8.x operating in Cluster-Mode.

NetApp OnCommand Unified Manager

NetApp OnCommand Unified Manager monitors, manages, and generates reports on all of the NetApp storage systems in an organization. When you are using NetApp thin provisioning, NetApp recommends deploying Unified Manager and setting up email and pager notifications to the appropriate administrators. With thin-provisioned storage, it is very important to monitor the free space available in the aggregates. Proper notification of the available free space means that additional storage can be made available before the aggregate becomes completely full.

Reader Tip

For information about setting up notifications in Unified Manager, see the Configuring Alarms and Managing Aggregate Capacity section in the Unified Manager Administration Guide on NOW.

Unified Manager's policy-based management, global monitoring, and reporting allow you to automate your data protection operations.

Managing data protection can be complicated and time consuming. Most tools fail to give you a comprehensive and easy-to-understand view of your data protection environment, and they make it difficult to provision and use storage resources efficiently.

Unified Manager simplifies common data protection tasks and automates management across Snapshot, SnapMirror, SnapManager, SnapVault, and Open Systems SnapVault operations. It automates storage provisioning and provides global policy-based management, monitoring, and alerting.

Unified Manager makes it easy to define, apply, and update data protection policies across the enterprise. It minimizes effort, cuts administrative overhead, and helps you meet best practices and service-level agreements globally.
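The aggregate free-space monitoring recommended above for thin-provisioned storage can be sketched as a simple threshold check. This is an illustration only: the aggregate names, capacities, thresholds, and alert severities below are hypothetical, and a real deployment would use Unified Manager's built-in alarms rather than a script.

```python
# Illustrative free-space check for thin-provisioned aggregates.
# All aggregate data and threshold values are hypothetical examples.

WARN_PCT = 80      # notify the storage team
CRITICAL_PCT = 90  # page the on-call administrator

aggregates = {
    "aggr1": {"size_gb": 1000, "used_gb": 450},
    "aggr2": {"size_gb": 2000, "used_gb": 1900},
}

def check_aggregates(aggrs):
    """Return (name, used_pct, severity) for each aggregate over a threshold,
    so more storage can be added before the aggregate fills completely."""
    alerts = []
    for name, a in aggrs.items():
        used_pct = 100 * a["used_gb"] / a["size_gb"]
        if used_pct >= CRITICAL_PCT:
            alerts.append((name, used_pct, "critical"))
        elif used_pct >= WARN_PCT:
            alerts.append((name, used_pct, "warning"))
    return alerts

for name, pct, severity in check_aggregates(aggregates):
    print(f"{severity}: {name} is {pct:.0f}% full")
```

The point of the sketch is the escalation pattern itself: with thin provisioning, the warning threshold must fire early enough for an administrator to add physical capacity before writes fail.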
A simple dashboard shows comprehensive data protection information at a glance, including unprotected data, alerts, and utilization.

Unified Manager automation combines with thin provisioning, deduplication, NetApp Snapshot, and block-incremental technology to shrink the storage footprint and increase management efficiency.

NetApp Unified Manager can speed the creation of new NetApp storage resources and help improve capacity management of existing storage resources. Storage administrators can use Unified Manager's policy-based automation to create repeatable, automated provisioning processes that improve the availability of data and keep provisioned storage in compliance with policies. These processes are faster than manual provisioning, are easier to maintain than scripts, and help to minimize the risk of data loss due to misconfigured storage.

Unified Manager applies user-defined policies to consistently select the appropriate resources for each provisioning activity. This frees administrators from the headache of searching for available space to provision and allows more time for strategic issues. A centralized management console allows administrators to monitor the status of their provisioned storage resources.

Unified Manager can help improve your business agility and capacity utilization, shrink provisioning time, and improve administrator productivity. By leveraging Unified Manager's thin provisioning and deduplication capabilities, you can get a high level of storage efficiency from your NetApp storage investment and store more data more efficiently.

NetApp SnapManager

NetApp SnapManager management tools integrate with leading business applications to automate and simplify the complex, manual, and time-consuming processes associated with the backup, restoration, recovery, and cloning of those applications, including Oracle, Microsoft Exchange, SQL Server, SharePoint, SAP, and server virtualization.

With NetApp SnapManager, you can:
• Leverage the NetApp technology stack to create near-instant and space-efficient Snapshot copies and clones of your applications.
• Integrate with native application technologies and achieve complete automation of data management.
• Use policies to simplify, standardize, and automate data protection.
• Increase backup frequency, without affecting performance, for better data protection.
• Recover and restore a failed database to full production in minutes, regardless of size.
• Create complete data clones in seconds on primary storage or directly in your development and test environment.
• Use clones to engage in parallel QA, development, testing, and other processes, and deploy applications faster than ever before.

For more information on NetApp products, services, and solutions, NetApp sales representatives and reseller partners are ready to answer your questions and provide the pricing and configuration information you need to make your purchasing decision.

In the United States, you can reach NetApp directly at 1-877-263-8277 or select from the list of reseller partners located at:
http://www.netapp.com/us/how-to-buy/

If you are calling from outside the United States, please select your country from the list on the right side of the page at the link above.
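The policy-driven resource selection that Unified Manager performs during provisioning, as described above, can be sketched in a few lines. This is not Unified Manager's actual logic; the policy fields (required size, deduplication) and the aggregate inventory are hypothetical, chosen only to show how a user-defined policy consistently picks a suitable resource instead of an administrator hunting for free space.

```python
from dataclasses import dataclass

@dataclass
class Aggregate:
    name: str
    free_gb: int
    dedup_enabled: bool

# Hypothetical inventory, as a monitoring tool might report it.
aggregates = [
    Aggregate("aggr1", free_gb=120, dedup_enabled=False),
    Aggregate("aggr2", free_gb=800, dedup_enabled=True),
    Aggregate("aggr3", free_gb=300, dedup_enabled=True),
]

def provision(size_gb, require_dedup):
    """Apply the policy: filter out non-compliant aggregates,
    then pick the compliant candidate with the most free space."""
    candidates = [a for a in aggregates
                  if a.free_gb >= size_gb
                  and (a.dedup_enabled or not require_dedup)]
    if not candidates:
        raise RuntimeError("no aggregate satisfies the provisioning policy")
    return max(candidates, key=lambda a: a.free_gb)

choice = provision(size_gb=200, require_dedup=True)
print(choice.name)  # aggr2: the compliant aggregate with the most free space
```

Because the selection rule is encoded once and applied everywhere, every provisioning request lands on a policy-compliant resource, which is the repeatability the text above attributes to policy-based automation.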
Appendix A: Changes

This appendix summarizes the changes to this guide since the previous
Cisco SBA series.
• We updated the NetApp storage system to a NetApp FAS3240 running Data ONTAP 8.0.2 operating in 7-Mode. The initial system setup reflects the changes required for the newer platform. Most of the GUI-based configuration in this guide remains the same, allowing an easy migration from the older NetApp FAS3140 to the NetApp FAS3240 storage system.
• We configured and documented FCoE and iSCSI access to the NetApp FAS3240 using a virtual interface (VIF) on the NetApp FAS3240. The VIF allows transport of iSCSI and FCoE over two 10-Gigabit Ethernet links from the controller to the data-center core Cisco Nexus 5500UP switches, configured as an EtherChannel for load balancing and resilience.
• We made minor changes to improve the readability of this guide.



Feedback

Click here to provide feedback to Cisco SBA.

SMART BUSINESS ARCHITECTURE

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, “DESIGNS”) IN THIS MANUAL ARE PRESENTED “AS IS,” WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITH-
OUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE
FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS
HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL
OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

Any Internet Protocol (IP) addresses used in this document are not intended to be actual addresses. Any examples, command display output, and figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses in illustrative content
is unintentional and coincidental.

© 2012 Cisco Systems, Inc. All rights reserved.

Americas Headquarters Asia Pacific Headquarters Europe Headquarters


Cisco Systems, Inc. Cisco Systems (USA) Pte. Ltd. Cisco Systems International BV Amsterdam,
San Jose, CA Singapore The Netherlands

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices.

Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their
respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

B-0000930-1 7/12
