
Nutanix Cluster Deployment

Build Guide
Copyright 2020 Nutanix, Inc.

Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110

All rights reserved. This product is protected by U.S. and international copyright and
intellectual property laws. Nutanix is a trademark of Nutanix, Inc. in the United States
and/or other jurisdictions. All other marks and names mentioned herein may be
trademarks of their respective companies.



Table of Contents
Deployment Overview
    Audience
    Purpose
    Assumptions
Software Requirements
Infrastructure Requirements
Hardware Setup
    Node Setup
    Default Passwords
How to Set BMC IPs
    Set the BMC IP from AHV/ESXi using ipmitool
    Set Nutanix IPMI IP from the BIOS
    Set Dell iDRAC 9 from BIOS
    Set Dell iDRAC 9 from server LCD panel
    Set HPE iLO 5 from BIOS
Version Control



Deployment Overview
Audience

This document is intended for Nutanix Services and partners. The content may be shared
with the customer for reference only.

Purpose

This document is a guide for performing remote Nutanix cluster deployments.

 This guide is written with the assumption that the network is pre-configured to allow
communication between the hypervisor/CVM and IPMI networks.
 This is a general guide that is primarily focused on Nutanix, Dell, and HPE hardware.
Mileage on Cisco and Lenovo hardware may vary.
 A flat switch outside of the customer’s network can only be used if the hardware
platform is Nutanix and does not require SFP+ to Ethernet adapters.
 There are multiple ways to image Nutanix nodes. This imaging process is called
Foundation.

Assumptions

 The nodes are racked.
 The nodes are cabled for power and running.
o Nutanix nodes take around 10 minutes to become available after the first boot.
o Dell nodes can take 20 to 60 minutes or more to become available after the
first boot.
 The nodes are cabled for network.
 The network ports have been configured and turned on.
 The customer will set the IP address for the BMC on every node.
o A keyboard and monitor are required for this.
 If the project is happening in the United States and the nodes are Nutanix, they will
come with AHV pre-installed.
o Outside of the United States, Nutanix nodes and some OEM nodes will ship
with Discovery OS. Discovery OS is a lightweight OS for node discovery
during Foundation.
 Dell can customize which hypervisor the nodes come with.
 HPE nodes have been shipping with AHV from the factory, but lately the CVM has not
been discoverable. As a result, HPE nodes have been requiring a bare-metal
Foundation.



Software Requirements
During the Technical Call, the consultant should have a general idea of the primary
Foundation method to be used. The consultant will provide the customer with direct links to
the appropriate binaries. If the CVM-based Foundation is to be used, it is best to have an
alternative method available.

The customer should use the provided MD5 checksums to verify the integrity of the
binaries after download; this avoids issues and delays later in the deployment.
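For example, a download can be verified on Linux or macOS as follows (the filename
here is a placeholder, not an actual binary name):

    # Linux: compute the MD5 hash of the downloaded binary
    md5sum nutanix_installer_package.tar.gz
    # macOS equivalent
    md5 nutanix_installer_package.tar.gz
    # Compare the output against the checksum published on the Nutanix Portal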

 Compatibility Matrix: The consultant should verify compliance with the software
compatibility matrix. It is best not to stray from the matrix; hypervisors can be
updated after the fact.
o Reference: https://portal.nutanix.com/page/documents/compatibility-matrix

 AOS: Provide the appropriate version to the customer with the direct link and the
MD5 checksum.
o Reference: https://portal.nutanix.com/#/page/releases/nosDetails
o If a previous release of AOS is not available on the Portal, reference the
following KB: https://portal.nutanix.com/kb/2430

 Hypervisor: Provide the appropriate version to the customer with the direct link and
the MD5 checksum.
o AHV versions can be downloaded at the following link:
https://portal.nutanix.com/#/page/static/hypervisorDetails
o Nutanix consultants cannot provide VMware software, as doing so violates
legal agreements. The consultant can provide the links, but the customer must
provide the ISO. Again, it is highly recommended not to deviate from the
compatibility matrix.
 Nutanix nodes use a vanilla version of VMware ESXi.
 Dell and HPE require custom ISOs from the manufacturers.
 For other OEMs, reference OEM guides.

 Foundation: Provide the appropriate version to the customer with the direct link and
the MD5 checksum.
o Reference: https://portal.nutanix.com/#/page/Foundation
o It is best to use the latest version of Foundation unless otherwise directed.
o If the Standalone Foundation VM is to be used, make sure to pick the build for
the hypervisor it will run on (AHV, ESXi, Fusion, VirtualBox).
o If the Portable Foundation is to be used, provide a direct link to the macOS or
Windows version.



Infrastructure Requirements
Out of Band Management (BMC) notes:

BMC is used throughout this document as a general term. Note that each vendor uses its
own name for the BMC, listed below. The ipmitool commands work for Nutanix, Dell, and
HPE hardware; this has not been tested recently on Cisco or Lenovo hardware.

 Nutanix – IPMI
 Dell – iDRAC
 HPE – iLO
 Cisco – CIMC
 Lenovo – IMM

1. Power:
a. Nutanix recommends using (2) 208V power sources from different circuits.
i. Follow datacenter best practices.
ii. Nutanix nodes come with (4) 208V PDU cables. The (2) black cables are for
10A connections and the (2) grey cables are for 15A connections.

2. Network Cabling:
a. Nutanix recommends a minimum of (2) 10G interfaces per node, connected to (2)
different top-of-rack switches.
i. Nutanix can run on 1G ports, but this is not recommended for more than 8
nodes and depends on VM I/O requirements.
b. Nutanix recommends (1) 1G interface for BMC per node.
c. If fiber cables are to be used:
i. Please contact the VAR to verify correct brand compatibility.
ii. Generally, switches (Cisco, Mellanox, Arista, Aruba, etc.) have their own
coding for transceivers, and the network cards (Intel, Mellanox,
Broadcom, etc.) have their own coding as well.
d. If Twinax cables are to be used:
i. Twinax cables must be 5m or less and passive.
ii. The network best practice is to use OEM cables. For example: Cisco
switches should use Cisco Twinax cables.



3. Network Switch Configuration:
a. The hypervisor, CVM, and cluster IP are required to be on the same subnet and
VLAN. Prism Central can be on a different subnet and VLAN.
b. Small subnets should be avoided because they constrain future growth. A /24
network is common practice.
c. Make sure IPv6 is enabled on the network and IPv6 multicast is supported (a
quick sanity check follows the references below).
i. Refer to the following KB to verify IPv6 connectivity from a VM:
1. Reference: https://portal.nutanix.com/page/documents/kbs/details/?targetId=kA0600000008SIbCAM
ii. Refer to the following KB to verify IPv6 multicast support by switch
manufacturer:
1. Reference: https://portal.nutanix.com/page/documents/kbs/details/?targetId=kA032000000TTkvCAG
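For example, a Linux VM on the hypervisor/CVM VLAN can ping the IPv6 all-nodes
link-local multicast address (the interface name eth0 is a placeholder for the VM's
actual interface):

    # Replies from multiple hosts indicate IPv6 link-local multicast is working
    ping6 -c 3 -I eth0 ff02::1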
d. Nutanix recommends configuring the switch ports for the 10G (or 1G) interfaces
as trunk ports with the hypervisor/CVM VLAN as the native VLAN.
i. This makes imaging and cluster expansion much easier.
e. Nutanix recommends configuring the BMC ports as access ports on the switch.
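A minimal sketch of recommendations d and e in Cisco NX-OS syntax (interface names
and VLAN IDs are placeholders; adapt to the local switch platform):

    ! Hypervisor/CVM uplink: trunk port with the hypervisor/CVM VLAN as native
    interface Ethernet1/1
      switchport mode trunk
      switchport trunk native vlan 10
    ! BMC connection: plain access port on the BMC VLAN
    interface Ethernet1/10
      switchport mode access
      switchport access vlan 20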
f. The Foundation machine must be able to talk to the hypervisor/CVM network and
the BMC network during Foundation.
i. If there are any firewalls between the VLANs, please see the “Firewall
DNS Req for Prism” section below.
ii. If a Foundation scenario involving a laptop is required, it is ideal to plug
the laptop directly into the top-of-rack switch to avoid traversing the local
network infrastructure. This could, however, require an SFP+ to Ethernet
adapter. Given the short notice of some projects, this may not be viable.
1. Running Foundation across a VPN, WAN, or Wi-Fi connection is
not supported.
2. The entire local network path between the Foundation machine
and the nodes must be 1G or greater.
3. The laptop could be plugged into a different switch if it has the
appropriate access to connect to the hypervisor/CVM and BMC
networks.



g. An important point to share with the network team: the Nutanix best practice
recommendation is to have two standalone trunk ports coming from the switches
down to each Nutanix node.
i. LACP is supported; however, standalone trunk ports are recommended.
ii. It is perfectly fine for two Cisco Nexus switches to have a vPC between
them, as long as the ports from the switches going to the nodes are not in a
vPC. The “vpc orphan-port suspend” option is added to each of the
hypervisor/CVM interfaces to tell the Cisco Nexus switches that these links
are not part of a vPC.
iii. Nutanix’s Cisco Nexus Recommended Practices:
1. Please reference “SAMPLE 2.”
2. https://portal.nutanix.com/#/page/kbs/details?targetId=kA032000000TT4kCAG
h. During network failover testing, spanning tree can delay convergence after a NIC
loss. It is recommended to add the “spanning-tree port type edge trunk” option to
each of the interface configs (a combined interface sketch follows the references
below).
i. References:
1. Cisco Spanning Tree Reference
2. Cisco Nexus 9000 Series NX-OS Layer 2 Switching Configuration
Guide, Release 7.x
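Putting recommendations g and h together, a hedged NX-OS sketch for one node-facing
interface on a vPC pair (the interface name and VLAN ID are placeholders):

    interface Ethernet1/1
      description Nutanix node 1 - hypervisor/CVM uplink
      switchport mode trunk
      switchport trunk native vlan 10
      ! Suspend this orphan port if the vPC peer-link fails, forcing the node
      ! to fail over to its uplink on the other switch
      vpc orphan-port suspend
      ! Edge trunk skips spanning-tree listening/learning for fast convergence
      spanning-tree port type edge trunk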
i. Caveats:
i. Foundation does not support imaging nodes in an environment using
LACP without failback enabled.
ii. Foundation does not support configuring nodes’ virtual switch to use
LACP. This must be configured manually post-image.
iii. Foundation does not support configuring adapters to use jumbo frames
during imaging. This must be configured manually post-image.
iv. Jumbo frames are not recommended except for very specific iSCSI
Volumes use cases.
v. If a switch is configured for 9000 MTU (jumbo frames), this is acceptable.
Foundation will not send 9000-MTU communications; everything will still be
sent at 1500 MTU.
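A simple way to confirm what actually passes end-to-end is a don't-fragment ping from
a Linux host (the target IP is a placeholder; 1472 bytes of payload plus 28 bytes of
ICMP/IP headers equals 1500 MTU):

    # Succeeds only if the path carries full 1500-byte frames unfragmented
    ping -M do -s 1472 -c 3 10.10.10.1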



Hardware Setup
Node Setup

1. Rack and stack nodes in the desired rack.

2. Wire in the 10G (or 1G) hypervisor/CVM network connections.
a. Please reference the “Network Cabling” section under “Infrastructure
Requirements.”
b. Please adhere to the company’s existing best practice for wiring/labeling.
c. Nutanix recommends splitting these connections between two separate switches.
d. Please see the Nutanix Portal website for hardware-specific ports.

3. Wire in the 1G Ethernet connections for the BMC.
a. Please see the Nutanix Portal website for hardware-specific ports.

4. Wiring diagrams for Nutanix nodes are as follows:
a. System specifications for multinode G6 platforms:
i. https://portal.nutanix.com/#/page/docs/details?targetId=System-Specs-G6-Multinode:System-Specs-G6-Multinode
b. System specifications for multinode G7 platforms:
i. https://portal.nutanix.com/#/page/docs/details?targetId=System-Specs-G7-Multinode:System-Specs-G7-Multinode
c. System specifications for single-node G6 platforms:
i. https://portal.nutanix.com/#/page/docs/details?targetId=System-Specs-G6-Single-Node:System-Specs-G6-Single-Node
d. System specifications for single-node G7 platforms:
i. https://portal.nutanix.com/#/page/docs/details?targetId=System-Specs-G7-Single-Node:System-Specs-G7-Single-Node

5. Power on the nodes.


a. Nutanix nodes take around 10 minutes to become available after the first boot.
b. Dell nodes can take 20 to 60 minutes or more to become available after the
first boot.

6. After the previous steps have been completed, update the Nutanix-supplied
questionnaire to include the rack elevation and the serial numbers of every block.
a. The block serial number can be found on a black pullout tag at the front of the
server.



Default Passwords

After the previous steps have been completed, document the passwords of any hardware
that has unique passwords. This does not need to be provided to Nutanix as long as the
customer has ready access to the passwords.

 Nutanix Supermicro IPMI:
o BMC version 7.08 and above
 Username: ADMIN
 Password: the node serial number
 Reference: https://portal.nutanix.com/kb/1091
 To find the node serial number, use the following command from
the hypervisor:
o ipmitool fru print
 The node serial number can also be found on a sticker on the
back of the node.
o BMC version 7.07 and before
 Username: ADMIN
 Password: ADMIN
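To pull the node serial number from the hypervisor shell in one step, the FRU output
can be filtered (a sketch; the exact field name varies slightly by platform, so grep
broadly):

    # Print the FRU inventory and keep only serial-number fields
    ipmitool fru print | grep -i serial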

 Dell iDRAC:
o Username: root
o Password: calvin

 HPE iLO:
o Username: Administrator
o Password: unique
 Every HPE iLO has a different password.
 The password can be found on a black pullout tag at the front of the
server.

 AHV / ESXi hypervisor:
o Username: root
o Password: nutanix/4u

 CVM:
o Username: nutanix
o Password: nutanix/4u

 Prism Element:
o Username: admin
o Password: Nutanix/4u

 Prism Central:
o Username: admin
o Password: Nutanix/4u



How to Set BMC IPs
The customer must set the BMC IP addresses for all nodes. This can be performed in
several ways; a keyboard and monitor are required. The easiest method is to use
ipmitool, which should be installed from the factory if the nodes are Nutanix, Dell, or
HPE and are running AHV or ESXi. Mileage may vary on other OEM nodes.

Set the BMC IP from AHV/ESXi using ipmitool

Note: this method has been tested to work on Nutanix, Dell, and HPE nodes. Reference:
https://portal.nutanix.com/page/documents/details/?targetId=Hardware-Admin-Ref-AOS-v5_15:ipc-remote-console-ip-address-reconfigure-cli-t.html

1. Connect a keyboard and monitor to the node.

2. Log in to the hypervisor (AHV or ESXi) using the default root credentials.

3. From the command prompt, issue the following IPMI commands:
a. ipmitool lan set 1 ipsrc static
b. ipmitool lan set 1 ipaddr ipmiIP
c. ipmitool lan set 1 netmask subnetMask
d. ipmitool lan set 1 defgw ipaddr gatewayIP
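For example, a worked version with hypothetical addresses for a BMC at 10.10.10.21 on
a /24 network:

    ipmitool lan set 1 ipsrc static
    ipmitool lan set 1 ipaddr 10.10.10.21
    ipmitool lan set 1 netmask 255.255.255.0
    ipmitool lan set 1 defgw ipaddr 10.10.10.1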

4. If the hardware is HPE, reboot the iLO using the following command:
a. ipmitool mc reset cold

5. The configuration can be verified using the following command:
a. ipmitool lan print 1

6. The IPMI should now be reachable from the network.

Set Nutanix IPMI IP from the BIOS

Reference: https://portal.nutanix.com/page/documents/details/?targetId=Hardware-Admin-Ref-AOS-v5_15:ipc-remote-console-ip-address-reconfigure-nx3050-t.html

1. Connect a keyboard and monitor to a node.

2. Restart the node and press Delete to enter the BIOS setup utility.
a. There is a limited amount of time to enter BIOS before the host completes the
restart process.

3. Press the right arrow key to select the IPMI tab.

4. Press the down arrow key until BMC network configuration is highlighted and then press
Enter.



5. Press the down arrow key until Update IPMI LAN Configuration is highlighted and press
Enter to select Yes.

6. Select Configuration Address source and press Enter.

7. Select Static and press Enter.

8. Assign the Static IP address, Subnet mask, and Gateway IP address.

9. Review the BIOS settings and press F4 to save the configuration changes and exit the
BIOS setup utility.

10. The node restarts.

Set Dell iDRAC 9 from BIOS

Source: https://www.dell.com/support/article/en-us/sln306877/dell-poweredge-how-to-configure-the-idrac9-and-the-lifecycle-controller-network-ip?lang=en

1. Connect a keyboard and monitor to a node.

2. Turn on the managed system.

3. Press <F2> during Power-on Self-test (POST).

4. In the System Setup Main Menu page, click iDRAC Settings. The iDRAC Settings page
is displayed.

5. Click Network. The Network page is displayed.

6. Specify the network settings. Under Enable NIC, select Enabled.
a. Shared LOM (1, 2, 3, or 4) shares one of the NICs on the motherboard.
b. Dedicated NIC uses the dedicated network interface.

7. Set the IPv4 or IPv6 network settings, depending on the local configuration.

8. Click Back, click Finish, and then click Yes. The network information is saved, and the
system reboots.

Set Dell iDRAC 9 from server LCD panel

The reference link below walks an engineer through the process of setting the iDRAC IP
address from the LCD panel on the front of the server.

Reference: https://youtu.be/XVg-jcLmdek



Set HPE iLO 5 from BIOS

Source: https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&docId=emr_na-a00029920en_us#N10012

1. Connect a keyboard and monitor to the server.

2. Restart or power on the server.

3. Press F9 in the server POST screen. The UEFI system utilities start.

4. Click System Configuration.

5. Click iLO 5 Configuration utility.

6. Disable DHCP:
a. Click Network options.
b. Select OFF in the DHCP Enable menu. The IP Address, Subnet Mask, and
Gateway IP Address boxes become editable. (When DHCP Enable is set to
ON, these values cannot be edited.)

7. Enter values in the IP Address, Subnet Mask, and Gateway IP Address boxes.

8. To save the changes and exit, press F12. The iLO 5 Configuration Utility prompts for
confirmation before saving the pending configuration changes.

9. To save and exit, click Yes - Save Changes.
a. The iLO 5 Configuration Utility notifies the user that iLO must be reset for the
changes to take effect.

10. Click OK.
a. iLO resets, and the iLO session is automatically ended. The user can reconnect in
approximately 30 seconds.

11. Resume the normal boot process.
a. Start the iLO remote console. The iLO 5 Configuration Utility is still open from the
previous session.
b. Press ESC several times to navigate to the System Configuration page.
c. To exit the system utilities and resume the normal boot process, click Exit and
resume system boot.



Version Control

Version   Date Modified   Author                           Comments
1.0       -               Mike Beadle                      Draft Release
1.1       03/12/2019      Nathan Schweitzer                Added Network Switch and Deployment areas
2.0       03/06/2020      Jr Consultant Team               Streamlined the document; updated using new version of Foundation
2.1       04/08/2020      Jesse James, Tim Hammond         Various grammar corrections; various technical clarifications; added CVM-based Foundation process
2.2       04/14/2020      Tim Hammond                      Added Standalone Foundation guide; added post-Foundation workflow; re-arranged the flow of the document
2.3       04/24/2020      Kyle Van der Vort, Tim Hammond   Removed marketing content; added additional BMC instructions; added more required download content; improved workflows

