Lenovo System x3850 X6 and x3950 X6 Planning and Implementation Guide
David Watts
Rani Doughty
Ilya Solovyev
October 2018
SG24-8208-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.
This edition applies to Lenovo System x3850 X6 and x3950 X6, machine type 6241, with Intel Xeon Processor
E7-4800/8800 v2, v3 and v4 processors.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
October 2018 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
February 2017 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
October 2016 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
August 2016 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
June 2016 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
September 2015. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
September 2014. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
June 2014 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Target workloads. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Business analytics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.3 Virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.4 Enterprise applications: ERP and CRM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Key features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Positioning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Storage versus in-memory data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Flash storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6 Energy efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.7 Services offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.8 About this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
5.4.1 NVMe drive placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5.4.2 NVMe PCIe SSD adapter placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.4.3 Using NVMe drives with Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.4.4 Using NVMe drives with Microsoft Windows Server 2012 R2. . . . . . . . . . . . . . . 192
5.4.5 Using NVMe drives with VMware ESXi server . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.4.6 Ongoing NVMe drive management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
5.5 PCIe adapter placement advice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
5.6 Hot-swap procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.6.1 Hot-swapping a power supply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.6.2 Hot-swapping an I/O Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.7 Partitioning the x3950 X6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.7.1 Partitioning an x3950 X6 via the IMM2 web interface . . . . . . . . . . . . . . . . . . . . . 206
5.8 Updating firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.8.1 Firmware tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.8.2 Updating firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.9 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.9.1 Integrated Management Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.9.2 LCD system information panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.9.3 System event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.9.4 POST event log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.9.5 Installation and Service Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Notices
Lenovo may not offer the products, services, or features discussed in this document in all countries. Consult
your local Lenovo representative for information on the products and services currently available in your area.
Any reference to a Lenovo product, program, or service is not intended to state or imply that only that Lenovo
product, program, or service may be used. Any functionally equivalent product, program, or service that does
not infringe any Lenovo intellectual property right may be used instead. However, it is the user's responsibility
to evaluate and verify the operation of any other product, program, or service.
Lenovo may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
LENOVO PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some
jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this
statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. Lenovo may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
The products described in this document are not intended for use in implantation or other life support
applications where malfunction may result in injury or death to persons. The information contained in this
document does not affect or change Lenovo product specifications or warranties. Nothing in this document
shall operate as an express or implied license or indemnity under the intellectual property rights of Lenovo or
third parties. All information contained in this document was obtained in specific environments and is
presented as an illustration. The result obtained in other operating environments may vary.
Lenovo may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this Lenovo product, and use of those Web sites is at your own risk.
Any performance data contained herein was determined in a controlled environment. Therefore, the result
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Advanced Settings Utility™, BladeCenter®, Bootable Media Creator™, Dynamic System Analysis™, eX5™, eXFlash™, Flex System™, Intelligent Cluster™, Lenovo®, Lenovo XClarity™, MAX5™, Lenovo(logo)®, ServeRAID™, ServerGuide™, Storage Configuration Manager™, System x®, ThinkServer®, ToolsCenter™, TopSeller™, TruDDR4™, UltraNav®, UpdateXpress System Packs™, vNIC™, X5™
Intel, Intel Core, Intel Xeon Phi, Xeon, and the Intel logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
BitLocker, Excel, Hyper-V, Internet Explorer, Microsoft, Windows, Windows Server, and the Windows logo are
trademarks of Microsoft Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
This section describes the changes made in this update and in previous updates. These
updates might also include minor corrections and editorial changes that are not identified.
October 2018
Additions:
Added the capacities of the blank USB keys, Table 3-25 on page 120
Updates:
Models with E7 v3 processors are now withdrawn from marketing; however, Compute Books
are still available for field upgrades.
Updated the list of supported operating systems
Indicated options that are now withdrawn from marketing
Corrections:
Corrected which bays Compute Books should be installed in for a 6-processor
configuration; see 3.7.2, “Compute Book population order” on page 78
February 2017
Added new Compute Book option, 00YG935, based on the Intel Xeon Processor E7-8894
v4; see Table 3-6 on page 81
Added S3520 and PM863a solid-state drives
Added 32GB LRDIMM memory option, 00KH391, for certain E7 v3 processors; see Table 3-7
on page 85
Compute Books with E7 v2 processors are now all withdrawn from marketing
Removed withdrawn options:
– Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter, 00D1996
– 300GB 15K 12Gbps SAS 2.5" G3HS 512e HDD, 00NA221
– 500GB 7.2K 6Gbps NL SAS 2.5" G3HS HDD, 00AJ121
– 600GB 15K 12Gbps SAS 2.5" G3HS 512e HDD, 00NA231
– 500GB 7.2K 6Gbps NL SATA 2.5" G3HS HDD, 00AJ136
– 146GB 15K 6Gbps SAS 2.5" G3HS HDD, 00AJ111
– 1.2TB 10K 12Gbps SAS 2.5" G3HS 512e HDD, 00NA261
– USB Memory Key for VMware ESXi 5.1 Update 2, 00ML233
– USB Memory Key for VMware ESXi 5.1 U1, 41Y8382
– NVIDIA Quadro M6000, 00KH377
– NVIDIA Grid K1 PCIe x16 for System x3850/x3950 X6, 00FP671
– NVIDIA Grid K2 Actively Cooled PCIe x16 for System x3850/x3950 X6, 00FP674
– NVIDIA Tesla M60 GPU, PCIe (active), 00YL377
August 2016
Added the following options:
– Network adapters: Intel X710-DA2, Intel X550-T2, and Mellanox ConnectX-4
– FC host bus adapters: QLogic 16Gb FC and Emulex 16Gb FC
– NVIDIA GPUs: Tesla M60 and Quadro M5000
– USB memory keys preloaded with VMware ESXi 5.5 U3B and 6.0 U2
Grammar and style corrections
June 2016
New information:
New models based on the Intel Xeon E7 v4 processors
New SAP HANA models
New compute book options (E7-4800 v4 and E7-8800 v4 processors)
New TruDDR4™ memory options
New adapter and drive options
Integrated Management Module is now IMM 2.1
New PDU options
Added console keyboard options
New instructions on how to perform hot-remove and hot-add operations for NVMe drives
Lenovo® XClarity Energy Manager
Updated XClarity Integrator part number options
Updated information:
eXFlash™ DIMMs now withdrawn from marketing
io3 Enterprise Value Flash Storage Adapters now withdrawn from marketing
Additional information about the included rack rail kit
New IMM and XClarity screenshots
New links to Lenovo support pages
September 2015
New information:
Models of the x3850 X6 and x3950 X6, machine type 6241
Compute Books that are based on the Intel Xeon Processor E7-4800/8800 v3 family
Compute Books with DDR3 and TruDDR4 memory
PCIe NVMe solid-state drive technology and drive options
Support for more PCIe SSD adapters
Support for more network adapters
Support for more HDDs and SSDs
Information about upgrading from E7 v2-based Compute Books to E7 v3-based Compute Books
Use of Lenovo XClarity™ Administrator
September 2014
Changed information:
All processors support eXFlash DIMMs
The RAID 1 feature of eXFlash DIMMs is currently not supported
June 2014
New information:
NVIDIA GPUs support a maximum of 1 TB of system memory, page 115
Information about the cable management kit shipped with the server, page 60
Added Intel I350 Ethernet adapters
Changed information:
Corrected the depth dimensions, page 60 and page 130
Certain processors do not support eXFlash DIMMs
The eXFlash DIMM driver does not support RAID
VMware vSphere 5.1 supports a maximum of 160 concurrent threads
The increasing demand from enterprises for cloud computing and business analytics
workloads drives innovation to find new ways to build information systems. Clients are
looking for cost-optimized, fit-for-purpose IT solutions that manage large amounts of data,
easily scale performance, and provide reliable real-time access to actionable information.
This book describes the four-socket Lenovo System x3850 X6 and eight-socket Lenovo
System x3950 X6. These servers provide the computing power to drive mission-critical
scalable databases, business analytics, virtualization, enterprise applications, and cloud
applications.
This Lenovo Press book covers product information as well as planning and implementation
information. In the first few chapters, we provide detailed technical information about the X6
servers. This information is most useful in designing, configuring, and planning to order a
server solution. In the later chapters of the book, we provide detailed configuration and setup
information to get your server operational.
This book is for clients, Lenovo Business Partners, and Lenovo employees who want to
understand the features and capabilities of the X6 portfolio of servers and want to learn how
to install and configure the servers for use in production.
Comments welcome
Your comments are important to us!
We want our documents to be as helpful as possible. Send us your comments about this book
in one of the following ways:
Use the online feedback form found at the following web page:
http://lenovopress.com/sg248208
Send your comments in an email to:
comments@lenovopress.com
Chapter 1. Introduction
The X6 family of scalable rack servers consists of the following servers:
Lenovo System x3850 X6, a four-socket 4U rack-mount server
Lenovo System x3950 X6, an eight-socket 8U rack-mount server
These servers are the sixth generation of servers that are built upon Lenovo Enterprise
X-Architecture. Enterprise X-Architecture is the culmination of generations of Lenovo
technology and innovation that is derived from our experience in high-end enterprise servers.
The X6 servers deliver innovation with enhanced scalability, reliability, availability, and
serviceability (RAS) features to enable breakthrough performance that is ideal for
mission-critical scalable databases, business analytics, virtualization, enterprise applications,
and cloud applications.
The X6 generation servers pack numerous fault-tolerant and high-availability features into a
high-density, rack-optimized, chassis-like package where all serviceable components are
front and rear accessible. This design significantly reduces the space that is needed to
support massive network computing operations and simplifies servicing.
These servers are designed for users who need the highest level of scalable performance,
the maximum memory capacity, and the richest set of RAS features for maximum productivity.
They target mission-critical, scalable workloads, including large databases and ERP/CRM
systems, to support online transaction processing, business analytics, virtualization, and
enterprise applications.
1.1 Target workloads

This section describes how X6 technology helps to address the challenges that clients face
in these mission-critical enterprise environments.
1.1.1 Databases
Leadership performance, scalability, and large memory support mean that X6 systems can
be highly utilized, which yields the best return for the following database applications:
SAP Business Suite on X6
Microsoft SQL Data Warehouse on X6
SAP HANA on X6
IBM DB2 BLU on X6
X6 is well suited for online transaction processing (OLTP) workloads. OLTP workloads are
characterized by small, interactive transactions that generally require subsecond response
times. For most OLTP systems, the processor, memory, and I/O subsystem in a server are
well balanced and are not considered performance bottlenecks.
The major source of performance issues in OLTP environments is often the storage I/O.
The speed of traditional hard disk drive (HDD)-based storage systems does not match the
processing capabilities of the server. As a result, a powerful processor often sits idle,
waiting for storage I/O requests to complete, which negatively affects user and business
productivity. This wait is not an issue with X6.
The OLTP workload optimization goal for X6 systems is to address storage I/O bottlenecks
through technologies, such as a large capacity memory subsystem to enable in-memory
data, and high-performance/low-latency storage subsystem that uses flash storage
technologies. For more information, see 1.4, “Storage versus in-memory data” on page 7 and
1.5, “Flash storage” on page 8.
1.1.2 Business analytics

For OLAP workloads, transactional delays can significantly increase business and financial
risks. Decision making is often stalled or delayed because of a lack of accurate, real-time
operational data for analytics, which can mean missed opportunities.
In general, clients might experience the following challenges with OLAP environments:
Slow query execution and response times, which delay business decision making.
Dramatic growth in data, which requires deeper analysis.
Lenovo X6 systems can help to make businesses more agile and analytics-driven by
providing up-to-the-minute analytics that are based on real-time data. As with OLTP
workloads, in-memory databases or flash storage are used for workload optimization (see
1.4, “Storage versus in-memory data” on page 7 and 1.5, “Flash storage” on page 8).
1.1.3 Virtualization
Virtualization commonly increases effectiveness in the use of resources and reduces capital
expenses, software licensing fees, and operational and management costs.
The first wave of server consolidation focused on lightly loaded servers that easily tapped into
a hypervisor’s ability to share processor and memory resources across applications.
However, hypervisors struggle to manage and share the heavy I/O loads that are typical of
performance-intensive workloads. As a result, performance-intensive databases that are
performance-intensive workloads. As a result, performance-intensive databases that are
used for core enterprise workloads, such as customer relationship management (CRM),
enterprise resource planning (ERP), and supply chain management (SCM), are left to run on
physical, non-virtual servers.
The next wave of server virtualization with X6 expands the virtualization footprint to the
workhorse applications of enterprise IT, namely those performance-intensive databases.
1.1.4 Enterprise applications: ERP and CRM
Enterprise applications, such as ERP or CRM, represent a mixed workload in which
transaction processing and a certain level of real-time reporting are present. In a 2-tier
implementation, the database server and application modules are on the same server. The key
performance metric is response time, as with OLTP and OLAP workloads.
X6 offerings provide low latency, extreme performance, and efficient transaction management
to accommodate mixed workload requirements. X6 in-memory and flash storage offerings
can help to deliver the following benefits for enterprise applications:
Dramatically boost the performance of applications and lower cost per IOPS ratio without
redesigning the application architecture.
Increase user productivity with better response times, which improves business efficiency.
Increase data availability by using advanced system-level high availability and reliability
technologies, which reduces the number of solution components and shortens batch
processing and backup times.
Increase storage performance and capacity while decreasing power, cooling, and space
requirements.
These built-in technologies drive the outstanding system availability and uninterrupted
application performance that is needed to host mission-critical applications.
1.3 Positioning
The Lenovo System x3850 X6 and x3950 X6 servers are the next generation of
X-Architecture following the highly successful eX5 server. X6 servers include various new
features when compared to the previous generation of eX5, including support for more
memory and I/O, and model-dependent support for v2, v3, and v4 Intel Xeon Processors in a
modular design.
When compared to the 4-socket x3750 M4 server, the X6 servers fill the demand for
enterprise workloads that require 4-socket and 8-socket performance, high availability, and
advanced RAS features.
Table 1-1 shows a high-level comparison between the 4-socket x3750 M4, the eX5-based
x3850 and x3950 X5™, and the X6-based x3850 and x3950 X6.
Table 1-1 Maximum configurations for the X6 systems
Maximum configurations   x3750 M4     x3850/x3950 X5   x3850/x3950 X6
Processors (1-node)      4            4                4
Cores (1-node)           32           40               96
USB ports (1-node)       4 USB 2.0    8 USB 2.0        6 USB 2.0, 2 USB 3.0
1.4 Storage versus in-memory data

Main memory is connected directly to the processors through a high-speed bus, whereas
hard disks are connected through a chain of buses (QPI, PCIe, and SAN) and controllers (I/O
hub, RAID controller or SAN adapter, and storage controller).

Compared to keeping data on disk, keeping the data in main memory can dramatically
improve database performance because of the improved access time. However, there is one
potential drawback: reliable database transactions must satisfy the ACID properties
(atomicity, consistency, isolation, and durability), and in particular a transaction that was
committed must stay committed.
Although the first three requirements are not affected by the in-memory concept, durability is
a requirement that cannot be met by storing data in main memory alone because main
memory is volatile storage. That is, it loses its content when no electrical power is present. To
make data persistent, it must be on non-volatile storage. Therefore, some sort of permanent
storage is still needed, such as hard disk drives (HDDs) or solid-state drives (SSDs) to form a
hybrid solution that uses in-memory and disk technology together.
The advantage of a hybrid solution is flexibility: it balances performance, cost, persistence,
and form factor in the following ways:
Performance: Use in-memory technology to enhance performance of sorting, storing, and
retrieving specified data rather than going to disk.
Persistence and form factor: Memory cannot approach the density of a small HDD.
Cost: Less costly HDDs can be substituted for more memory.
1.5 Flash storage
Lenovo flash storage offerings for X6 servers combine extreme IOPS performance and low
response time for transactional database workloads. The flash technologies that are used in
the X6 servers include PCIe NVMe drives, Flash Storage Adapters, and SAS/SATA SSDs.
1.7 Services offerings

In addition to these offerings for System x, the Professional Services team has the following
offerings specifically for X6:
Virtualization Enablement
Database Enablement
Enterprise Application Enablement
Migration Study
Virtualization Health Check
The Data Center Services team offers in-depth data center power and cooling assessments,
including the following areas:
Planning for high-density systems and cloud for the data center
Data center baseline cooling assessment
Data center power and cooling resiliency assessment
Retail and campus data closet power and cooling planning
The services offerings are designed to be flexible and customizable to meet your needs. They
can provide preconfigured services, custom services, expert skills transfer, off-the-shelf
training, and online or classroom courses for X6.
For more information, contact the appropriate team that is listed in Table 1-2.
Table 1-2 Lab Services and Data Center Services contact details
Contact Geography
x86svcAP@lenovo.com Asia Pacific (GCG, ANZ, ASEAN, Japan, Korea, and ISA)
power@lenovo.com Worldwide
Then we describe the current memory options and features of the storage subsystem,
including innovative memory-channel storage technology and PCIe NVMe solid-state drives
(SSDs). We also describe other advanced technology in the servers, including X6 scaling and
partitioning capabilities.
The x3950 X6 resembles two x3850 X6 servers, with one server placed on top of the other
server. However, unlike the earlier x3950 eX5 servers (which connected two x3850 servers
via external cables), x3950 X6 uses a single chassis with a single-midplane design, which
eliminates the need for external connectors and cables.
The X6 systems offer a new “bookshelf” design concept that is based on a fixed chassis
mounted in a standard rack cabinet. There is no need to pull the chassis out of the rack to
access components because all components can be accessed from the front or rear, just as
books are pulled from a bookshelf.
Figure 2-3 shows the x3850 X6 server with one of the four Compute Books partially removed.
Figure 2-5 shows the rear view of the x3850 X6 server in which the Primary I/O Book, other
I/O Books, and power supplies are highlighted.
Figure 2-5 x3850 X6 rear view
Figure 2-6 x3850 X6 midplane (front side showing Compute Book connections)
Each Compute Book contains one Intel Xeon processor, 24 DIMM slots, and two dual-motor
fans. The 24 DIMM slots are split between both sides of the Compute Book’s processor board,
with 12 slots on each side.
Figure 2-7 shows the Compute Book with the clear side cover removed. The front of the
Compute Book includes two hot-swap fans.
The x3850 X6 server supports up to four Compute Books; the x3950 X6 server supports up to
eight. For more information about the Compute Books, see 3.7, “Compute Books” on
page 75.
The x3850 X6 server includes one Storage Book (maximum one); the x3950 X6 includes two
Storage Books (maximum two). For more information about the Storage Book, see 3.11.1,
“Storage Book” on page 94.
The Primary I/O Book also contains core logic, such as the Integrated Management Module II
(IMM2) and Unified Extensible Firmware Interface (UEFI), fan modules, and peripheral ports.
The Primary I/O Book installs in the rear of the server.
The x3850 X6 includes one Primary I/O Book (maximum one); the x3950 X6 includes two
Primary I/O Books (maximum two). For more information about the Primary I/O Book, see
3.13, “Primary I/O Book” on page 102.
The Full-length I/O Book accepts GPU adapters and coprocessors, including double-wide
adapters that require up to 300 W of power. The Full-length I/O Book includes two auxiliary
power connectors, 150 W and 75 W, and power cables.
The x3850 X6 server supports up to two extra I/O Books of any type; the x3950 X6 server
supports up to four extra I/O Books.
For more information about the extra I/O Books, see 3.14.1, “Half-length I/O Book” on
page 106 and 3.14.2, “Full-length I/O Book” on page 106.
2.2.1 x3850 X6
Figure 2-12 shows the system architecture of the x3850 X6 server.
Figure 2-13 QPI links between processors
In the Compute Book, each processor has four Scalable Memory Interconnect Generation 2
(SMI2) channels (two memory controllers per processor, each with two SMI channels) that
are connected to four scalable memory buffers. Each memory buffer has six DIMM slots (two
channels with three DIMMs per channel) for a total of 24 DIMMs (eight channels with three
DIMMs per channel) per processor. Compute Books are connected to each other via QPI
links.
The Primary I/O Book has three PCIe 3.0 slots, a Mezzanine LOM slot, an I/O Controller Hub,
IMM2, and peripheral ports (such as USB, video, serial) on the board. Extra I/O Books
(Full-length and Half-length) have three PCIe 3.0 slots each and support hot-swap PCIe
adapters.
Extra I/O Books: For illustration purposes, Half-length and Full-length I/O Books are
shown in Figure 2-12 on page 20, where the Half-length I/O Book supplies slots 1, 2, and
3, and the Full-length I/O Book supplies slots 4, 5, and 6. Their order can be reversed, or
two of either type can be used.
The Primary I/O Book is connected to the Compute Books 1 (CPU 1) and 2 (CPU 2) directly
via PCIe links from those processors: PCIe slots 9 and (for dedicated mezzanine NIC) 10 are
connected to CPU 1, and PCIe slots 7 and 8 are connected to CPU 2. Also, CPU 1 and CPU
2 are connected to the Intel I/O Hub via DMI switched links for redundancy purposes.
The Storage Book also is connected to Compute Books 1 and 2; however, PCIe slots 11 and
12 (both used for dedicated HBA/RAID cards) are connected to different processors (CPU 2
and CPU 1, respectively). In addition, certain peripheral ports are routed from the Intel I/O
Hub and IMM2 to the Storage Book.
Extra I/O Books are connected to Compute Books 3 and 4 and use PCIe links from CPU 3
and CPU 4. If you need to install more I/O Books, you should first install the corresponding
Compute Book in the appropriate slot.
The 8-socket configuration is formed by using the native QPI scalability of the Intel Xeon
processor E7 family.
Figure 2-16 shows how the processors are connected via QPI links.
Figure 2-16 QPI connectivity: x3950 X6 with eight processors installed
Figure 2-18 shows the x3950 X6 with only six processors installed. The connectivity is shown
on the right of the figure where each processor is either connected directly to another
processor or one hop away.
In addition, the 8-socket server can form two independent systems that contain four sockets
in each node, as if two independent 4U x3850 X6 servers are housed in one 8U chassis. This
partitioning feature is enabled via the IMM2 interface. When partitioning is enabled, each
partition can deploy its own operating system and applications. Each partition uses its own
resources and can no longer access the other partition’s resources.
2.3 Processors
The current models of the X6 systems use the Intel Xeon E7 processor family. The new Intel
Xeon E7 v4 processors feature the new Intel microarchitecture (formerly codenamed
“Broadwell-EX”) that provides higher core count, larger cache sizes, and DDR4 memory
support. Intel Xeon E7 v2, v3 and v4 families support up to 24 DIMMs per processor and
provide fast low-latency I/O with integrated PCIe 3.0 controllers.
The X6 systems support the latest generation of the Intel Xeon processor E7-4800 v4 and
E7-8800 v4 product family, which offers the following key features:
Up to 24 cores and 48 threads (by using the Hyper-Threading feature) per processor
Up to 60 MB of shared last-level cache
Up to 3.2 GHz core frequencies
Up to 9.6 GTps bandwidth of QPI links
DDR4 memory interface support, which brings greater performance and power efficiency
Integrated memory controller with four SMI2 Gen2 channels that support up to 24 DDR4
DIMMs
Memory channel (SMI2) speeds up to 1866 MHz in RAS (lockstep) mode and up to
3200 MHz in performance mode.
Integrated PCIe 3.0 controller with 32 lanes per processor
Intel Virtualization Technology (VT-x and VT-d)
Intel Turbo Boost Technology 2.0
Improved performance for integer and floating point operations
Virtualization improvements with regard to posted interrupts, page modification logging,
and VM enter/exit latency reduction
New Intel Transactional Synchronization eXtensions (TSX)
Intel Advanced Vector Extensions 2 (AVX2.0) with new optimized turbo behavior
Intel AES-NI instructions for accelerating encryption
Advanced QPI and memory reliability, availability, and serviceability (RAS) features
Machine Check Architecture recovery (non-running and running paths)
Enhanced Machine Check Architecture Gen2 (eMCA2)
Machine Check Architecture I/O
Resource director technology: Cache monitoring technology, cache allocation technology,
memory bandwidth monitoring
Security technologies: OS Guard, Secure Key, Intel TXT, Crypto performance
(ADOX/ADCX), Malicious Software (SMAP), Key generation (RDSEED)
Table 2-1 compares the Intel Xeon E7-4800/8800 processors that are supported in X6
systems.
Processor family        Intel Xeon E7-4800/8800 v2   Intel Xeon E7-4800/8800 v3         Intel Xeon E7-4800/8800 v4
QPI                     QPI 1.1 at 8.0 GT/s max      QPI 1.1 at 9.6 GT/s max            QPI 1.1 at 9.6 GT/s max
DIMM sockets            24 DDR3 DIMMs per CPU        24 DDR3 or 24 DDR4 DIMMs per CPU   24 DDR4 DIMMs per CPU
Maximum memory speeds   2667 MHz SMI2                3200 MHz SMI2                      3200 MHz SMI2
PCIe technology         PCIe 3.0 (8 GTps)            PCIe 3.0 (8 GTps)                  PCIe 3.0 (8 GTps)
The Intel Xeon processor E7-4800 v3 and E7-8800 v3 product family offers the following key
features:
Up to 18 cores and 36 threads (by using the Hyper-Threading feature) per processor
Up to 45 MB of shared last-level cache
Up to 3.2 GHz core frequencies
Up to 9.6 GTps bandwidth of QPI links
Integrated memory controller with four SMI2 channels that support up to 24 DDR3/DDR4
DIMMs
Up to 1600 MHz DDR3 or 1866 MHz DDR4 memory speeds
DDR4 memory channel (SMI2) speeds up to 1866 MHz in RAS (lockstep) mode and up to
3200 MHz in performance mode.
Integrated PCIe 3.0 controller with 32 lanes per processor
Intel Virtualization Technology (VT-x and VT-d)
Intel Turbo Boost Technology 2.0
Intel Advanced Vector Extensions 2 (AVX2)
Intel AES-NI instructions for accelerating encryption
Advanced QPI and memory RAS features
Machine Check Architecture recovery (non-running and running paths)
Enhanced Machine Check Architecture Gen2 (eMCA2)
Machine Check Architecture I/O
Security technologies: OS Guard, Secure Key, Intel TXT
The Intel Xeon processor E7-4800 v2 and E7-8800 v2 product family offers the following key
features:
Up to 15 cores and 30 threads (by using the Hyper-Threading feature) per processor
Up to 37.5 MB of L3 cache
Up to 3.4 GHz core frequencies
Up to 8 GTps bandwidth of QPI links
Integrated memory controller with four SMI2 channels that support up to 24 DDR3 DIMMs
Up to 1600 MHz DDR3 memory speeds
Integrated PCIe 3.0 controller with 32 lanes per processor
Intel Virtualization Technology (VT-x and VT-d)
Intel Turbo Boost Technology 2.0
Intel Advanced Vector Extensions (AVX)
Intel AES-NI instructions for accelerating encryption
Advanced QPI and memory RAS features
Machine Check Architecture recovery (non-running and running paths)
Enhanced Machine Check Architecture Gen1 (eMCA1)
Machine Check Architecture I/O
Security technologies: OS Guard, Secure Key, Intel TXT
Much TSX-aware software gains significant performance boosts when running on Intel Xeon
E7 v4 processors. For example, the SAP HANA SPS 09 in-memory database showed twice as
many transactions per minute with Intel TSX enabled versus disabled on E7 v3 processors,
and three times more transactions per minute compared to Intel Xeon E7 v2 processors.
For more information about Intel TSX, see the Solution Brief, Ask for More from Your Data,
which is available here:
http://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/sap-hana-real-time-analytics-solution-brief.pdf
Intel Virtualization Technology for x86 (Intel VT-x) allows the software hypervisors to better
manage memory and processing resources for virtual machines (VMs) and their guest
operating systems.
Intel Virtualization Technology for Directed I/O (Intel VT-d) helps improve I/O performance and
security for VMs by enabling hardware-assisted direct assignment and isolation of I/O
devices.
For more information about Intel Virtualization Technology, see this website:
http://www.intel.com/technology/virtualization
Hyper-Threading Technology
Intel Hyper-Threading Technology enables a single physical processor to run two separate
code streams (threads) concurrently. To the operating system, a processor core with
Hyper-Threading is seen as two logical processors. Each processor has its own architectural
state; that is, its own data, segment, and control registers, and its own advanced
programmable interrupt controller (APIC).
Each logical processor can be individually halted, interrupted, or directed to run a specified
thread independently from the other logical processor on the chip. The logical processors
share the running resources of the processor core, which include the running engine, caches,
system interface, and firmware.
vSphere 5.1 and 8-socket systems: VMware vSphere 5.1 has a fixed upper limit of 160
concurrent threads. Therefore, if you use an 8-socket system with more than 10 cores per
processor, you should disable Hyper-Threading.
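The arithmetic behind this limit is simple. The following minimal Python sketch (an illustration only; the socket and core counts are example values, not output from any Lenovo tool) computes the logical processor count that a hypervisor sees with and without Hyper-Threading:

# Minimal sketch: logical processors that a hypervisor sees.
# The 160-thread ceiling of VMware vSphere 5.1 comes from the note above.
VSPHERE_51_LIMIT = 160

def logical_processors(sockets, cores_per_socket, hyper_threading=True):
    threads_per_core = 2 if hyper_threading else 1
    return sockets * cores_per_socket * threads_per_core

# Example: x3950 X6 with eight 15-core processors
for ht in (True, False):
    total = logical_processors(sockets=8, cores_per_socket=15, hyper_threading=ht)
    state = "exceeds" if total > VSPHERE_51_LIMIT else "is within"
    print(f"Hyper-Threading {'on' if ht else 'off'}: {total} logical processors ({state} the vSphere 5.1 limit)")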
Turbo Boost Technology 2.0

Turbo Boost Technology is available on a per-processor basis for the X6 systems. For
ACPI-aware operating systems and hypervisors, such as Microsoft Windows 2008/2012,
RHEL 5/6, SLES 11, VMware ESXi 4.1, and later, no changes are required to use it. Turbo
Boost Technology can be used with any number of enabled and active cores, which results in
increased performance of multithreaded and single-threaded workloads.
Turbo Boost Technology dynamically saves power on unused processor cores and increases
the clock speed of the cores in use. In addition, it can temporarily increase the speed of all
cores by intelligently managing power and thermal headroom. For example, a 2.5 GHz
15-core processor can temporarily run all 15 active cores at 2.9 GHz. With only two cores
active, the same processor can run those active cores at 3.0 GHz. When the other cores are
needed again, they are turned back on dynamically and the processor frequency is adjusted.
When temperature, power, or current exceeds factory-configured limits and the processor is
running above the base operating frequency, the processor automatically steps the core
frequency back down to reduce temperature, power, and current. The processor then
monitors these variables, and reevaluates whether the current frequency is sustainable or if it
must reduce the core frequency further. At any time, all active cores run at the same
frequency.
For more information about Turbo Boost Technology, see this website:
http://www.intel.com/technology/turboboost/
QuickPath Interconnect
The Intel Xeon E7 processors implemented in X6 servers include two integrated memory
controllers in each processor. Processor-to-processor communication is carried over
shared-clock or coherent QPI links. Each processor has three QPI links to connect to other
processors.
Figure 2-19 shows the QPI configurations. On the left is how the four sockets of the x3850 X6
are connected. On the right is how all eight sockets of the x3950 X6 are connected.
Figure 2-19 QPI links between processors
Each processor has some memory that is connected directly to that processor. To access
memory that is connected to another processor, each processor uses QPI links through the
other processor. This design creates a non-uniform memory access (NUMA) system.
Similarly, I/O can be local to a processor or remote through another processor.
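A quick way to see this NUMA layout from the operating system is the following minimal Python sketch (it assumes a Linux host, where the kernel exposes the topology under /sys; it is not specific to Lenovo tooling):

import glob, os

# Minimal sketch (Linux only): list NUMA nodes and the CPUs that are local to each.
# On an x3850 X6, each installed Compute Book normally appears as one NUMA node.
for node_path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node_path, "cpulist")) as f:
        cpus = f.read().strip()
    print(f"{os.path.basename(node_path)}: local CPUs {cpus}")

Memory or I/O that a workload touches outside its local node pays the extra QPI hop, which is why NUMA-aware placement matters on these systems.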
For more information about Data Direct I/O, see this website:
http://www.intel.com/content/www/us/en/io/direct-data-i-o.html
RAS features
The Intel Xeon processor E7 family of processors has the following RAS features on their
interconnect links (SMI and QPI):
Cyclic redundancy checking (CRC) on the QPI links
The data on the QPI link is checked for errors.
QPI packet retry
If a data packet on the QPI link has errors or cannot be read, the receiving processor can
request that the sending processor try sending the packet again.
QPI clock failover
If there is a clock failure on a coherent QPI link, the processor on the other end of the link
can become the clock. This action is not required on the QPI links from processors to I/O
hubs because these links are asynchronous.
QPI self-healing
If persistent errors are detected on a QPI link, the link width can be reduced dynamically to
allow the system to run in a degraded mode until a repair can be performed. The QPI link can
reduce its width to half or quarter width and can slow down its speed.
Scalable memory interconnect (SMI) packet retry
If a memory packet has errors or cannot be read, the processor can request that the
packet be resent from the memory buffer.
Implementation of the MCA recovery requires hardware support, firmware support (such as
found in the UEFI), and operating system support. Microsoft, SUSE, Red Hat, VMware, and
other operating system vendors include or plan to include support for the Intel MCA recovery
feature on the Intel Xeon processors in their latest operating system versions.
Security improvements
The Intel Xeon E7-4800/8800 processor families feature the following important security
improvements that help to protect systems from different types of security threats:
Intel OS Guard: Evolution of Intel Execute Disable Bit technology, which helps to protect
against escalation of privilege attacks by preventing code execution from user space
memory pages while in kernel mode. It helps to protect against certain types of malware
attacks.
#VE2 (Beacon Pass 2 Technology): #VE uses ISA-level CPU assists to allow the
memory-monitoring performance of anti-malware software to scale on virtualized and
non-virtualized servers, which makes deep malware detection possible on server
platforms.
Intel Trusted Execution Technology (Intel TXT), Intel VT-x, and Intel VT-d: New
hardware-based techniques, with which you can isolate VMs and start VMs in a trusted
environment only. In addition, malware-infected VMs cannot affect other VMs on the
same host.
Intel Secure Key: Provides hardware random number generation without storing any data
in system memory. It keeps generated random numbers out of sight of malware, which
enhances encryption protection.
For more information, see Crimeware Protection: 3rd Generation Intel Core vPro Processors,
which is available at this website:
http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/3rd-gen-core-vpro-security-paper.pdf
x3850 X6 Essential: x3850 X6 Essential models with compute books containing only 12
DIMM sockets are not covered in this document.
The system board of the Compute Book has two sides, on which all components are installed.
Figure 2-21 and Figure 2-22 show the left and right sides of the Compute Book, respectively.
On the left side are the processor and 12 DIMM slots. On the right side are the other
12 DIMM slots, for a total of 24 DIMM slots per Compute Book.
Note: All Compute Books in a server must be the same type; that is, they must all have the
same processor and either all DDR3 or all DDR4 memory (processor-dependent) at the same
frequency/speed.
2.4 Memory
The System x3850 X6 and x3950 X6 servers support three generations of Intel Xeon E7
processors:
E7 v4 processors support DDR4 memory only
E7 v3 processors can use either DDR3 or DDR4 memory
E7 v2 processors support DDR3 memory only
DDR4 is a new memory standard that is supported by the Intel Xeon E7 v3 and v4 processor
families. DDR4 memory modules can run at greater speeds than DDR3 DIMMs, operate at
lower voltage, and are more energy-efficient than DDR3 modules.
X6 Compute Books with E7 v3 or v4 processors and DDR4 memory interface support Lenovo
TruDDR4 memory modules, which are tested and tuned to maximize performance and
reliability. Lenovo TruDDR4 DIMMs can operate at greater speeds and have higher
performance than DIMMs that only meet industry standards.
DDR3 and TruDDR4 memory types have ECC protection and support Chipkill and Redundant
Bit Steering technologies.
Each processor has two integrated memory controllers, and each memory controller has two
Scalable Memory Interconnect generation 2 (SMI2) links that are connected to two scalable
memory buffers. Each memory buffer has two memory channels, and each channel supports
three DIMMs, for a total of 24 DIMMs per processor.
The x3850 X6 supports up to 96 DIMMs when all processors are installed (24 DIMMs per
processor), and the x3950 X6 supports up to 192 DIMMs. The processor and the
corresponding memory DIMM slots are on the Compute Book (for more information, see
2.3.5, “Compute Books” on page 31).
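The DIMM counts follow directly from this topology. The following minimal Python sketch works through the arithmetic; the 64 GB DIMM capacity is an assumed example for illustration, so substitute the module size you plan to order:

# Minimal sketch: DIMM slots per processor and maximum memory per server.
CONTROLLERS_PER_CPU = 2
BUFFERS_PER_CONTROLLER = 2      # one memory buffer per SMI2 link
CHANNELS_PER_BUFFER = 2
DIMMS_PER_CHANNEL = 3

dimms_per_cpu = (CONTROLLERS_PER_CPU * BUFFERS_PER_CONTROLLER *
                 CHANNELS_PER_BUFFER * DIMMS_PER_CHANNEL)        # 24

dimm_gb = 64                     # assumed module capacity for this example
for server, cpus in (("x3850 X6", 4), ("x3950 X6", 8)):
    dimms = dimms_per_cpu * cpus
    print(f"{server}: {dimms} DIMM slots, up to {dimms * dimm_gb // 1024} TB "
          f"with {dimm_gb} GB DIMMs")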
Mirroring and sparing are also supported in both modes, as described in 2.4.2, “Memory
mirroring and rank sparing” on page 38.
Figure 2-24 shows the two modes. In RAS mode, both channels of one memory buffer are in
lockstep with each other.
Figure 2-24 Memory modes: Performance mode (left) and RAS mode (right)
The following tables show the maximum speed and bandwidth for the SMI2 and memory
channels in both modes for DDR3 and TruDDR4 memory modules, as well as their operating
voltages.
1RX4, 2Gb, 1600 MHz / 4Gb 1333 MHz 1333 MHz 1333 MHz
1RX4, 4Gb, 1600 MHz / 8Gb 1333 MHz 1333 MHz 1333 MHz
2RX4, 4Gb, 1600 MHz / 16Gb 1333 MHz 1333 MHz 1333 MHz
4RX4, 4Gb, 1600 MHz / 32Gb 1333 MHz 1333 MHz 1333 MHz
8Rx4, 4Gb, 1333 MHz / 64Gb 1333 MHz 1333 MHz 1333 MHz
1RX4, 2Gb, 1600 MHz / 4Gb 1067 MHz 1067 MHz 1067 MHz
1RX4, 4Gb, 1600 MHz / 8Gb 1067 MHz 1067 MHz 1067 MHz
2RX4, 4Gb, 1600 MHz / 16Gb 1067 MHz 1067 MHz 1067 MHz
4RX4, 4Gb, 1600 MHz / 32Gb 1067 MHz 1067 MHz 1067 MHz
8Rx4, 4Gb, 1333 MHz / 64Gb 1067 MHz 1067 MHz 1067 MHz
1RX4, 2Gb, 1600 MHz / 4Gb 1067 MHz 1067 MHz 1067 MHz
1RX4, 4Gb, 1600 MHz / 8Gb 1067 MHz 1067 MHz 1067 MHz
2RX4, 4Gb, 1600 MHz / 16Gb 1067 MHz 1067 MHz 1067 MHz
4RX4, 4Gb, 1600 MHz / 32Gb 1067 MHz 1067 MHz 1067 MHz
8Rx4, 4Gb, 1333 MHz / 64Gb 1067 MHz 1067 MHz 1067 MHz
1RX4, 4Gb, 1600 MHz / 8Gb 1600 MHz 1600 MHz 1333 MHz
2RX4, 4Gb, 1600 MHz / 16Gb 1600 MHz 1600 MHz 1333 MHz
4RX4, 4Gb, 1600 MHz / 32Gb 1600 MHz 1600 MHz 1333 MHz
8Rx4, 4Gb, 1333 MHz / 64Gb 1333 MHz 1333 MHz 1333 MHz
1RX4, 2Gb, 1600 MHz / 4Gb 1333 MHz 1333 MHz 1067 MHz
1RX4, 4Gb, 1600 MHz / 8Gb 1333 MHz 1333 MHz 1067 MHz
2RX4, 4Gb, 1600 MHz / 16Gb 1333 MHz 1333 MHz 1067 MHz
4RX4, 4Gb, 1600 MHz / 32Gb 1333 MHz 1333 MHz 1333 MHz
8Rx4, 4Gb, 1333 MHz / 64Gb 1067 MHz 1067 MHz 1067 MHz
1RX4, 2Gb, 1600 MHz / 4Gb 1067 MHz 1067 MHz 1067 MHz
1RX4, 4Gb, 1600 MHz / 8Gb 1067 MHz 1067 MHz 1067 MHz
2RX4, 4Gb, 1600 MHz / 16Gb 1067 MHz 1067 MHz 1067 MHz
4RX4, 4Gb, 1600 MHz / 32Gb 1067 MHz 1067 MHz 1067 MHz
8Rx4, 4Gb, 1333 MHz / 64Gb 1067 MHz 1067 MHz 1067 MHz
1RX4, 4Gb, 2133 MHz / 8Gb 1600 MHz 1600 MHz 1600 MHz
2RX4, 4Gb, 2133 MHz / 16Gb 1600 MHz 1600 MHz 1600 MHz
2RX4, 8Gb, 2133 MHz / 32Gb 1600 MHz 1600 MHz 1600 MHz
4RX4, 16Gb, 2133 MHz / 64Gb 1600 MHz 1600 MHz 1600 MHz
1RX4, 4Gb, 2133 MHz / 8Gb 1333 MHz 1333 MHz 1333 MHz
2RX4, 4Gb, 2133 MHz / 16Gb 1333 MHz 1333 MHz 1333 MHz
2RX4, 8Gb, 2133 MHz / 32Gb 1333 MHz 1333 MHz 1333 MHz
4RX4, 16Gb, 2133 MHz / 64Gb 1333 MHz 1333 MHz 1333 MHz
1RX4, 4Gb, 2133 MHz / 8Gb 1333 MHz 1333 MHz 1333 MHz
2RX4, 4Gb, 2133 MHz / 16Gb 1333 MHz 1333 MHz 1333 MHz
2RX4, 8Gb, 2133 MHz / 32Gb 1333 MHz 1333 MHz 1333 MHz
4RX4, 16Gb, 2133 MHz / 64Gb 1333 MHz 1333 MHz 1333 MHz
1RX4, 4Gb, 2133 MHz / 8Gb 1867 MHz 1867 MHz 1600 MHz
2RX4, 4Gb, 2133 MHz / 16Gb 1867 MHz 1867 MHz 1600 MHz
2RX4, 8Gb, 2133 MHz / 32Gb 1867 MHz 1867 MHz 1600 MHz
4RX4, 16Gb, 2133 MHz / 64Gb 1867 MHz 1867 MHz 1600 MHz
1RX4, 4Gb, 2133 MHz / 8Gb 1600 MHz 1600 MHz 1333 MHz
2RX4, 4Gb, 2133 MHz / 16Gb 1600 MHz 1600 MHz 1333 MHz
2RX4, 8Gb, 2133 MHz / 32Gb 1600 MHz 1600 MHz 1333 MHz
4RX4, 16Gb, 2133 MHz / 64Gb 1600 MHz 1600 MHz 1333 MHz
1RX4, 4Gb, 2133 MHz / 8Gb 1333 MHz 1333 MHz 1333 MHz
2RX4, 4Gb, 2133 MHz / 16Gb 1333 MHz 1333 MHz 1333 MHz
2RX4, 8Gb, 2133 MHz / 32Gb 1333 MHz 1333 MHz 1333 MHz
4RX4, 16Gb, 2133 MHz / 64Gb 1333 MHz 1333 MHz 1333 MHz
Memory mirroring
To improve memory reliability and availability, the memory controller can mirror memory data
across two memory channels. To enable the mirroring feature, you must have both memory
channels of a processor populated with the same DIMM type and amount of memory.
Memory mirroring provides the user with a redundant copy of all code and data addressable
in the configured memory map. Two copies of the data are kept, similar to the way RAID-1
writes to disk. Reads are interleaved between memory channels. The system automatically
uses the most reliable memory channel as determined by error logging and monitoring.
If errors occur, only the alternative memory channel is used until bad memory is replaced.
Because a redundant copy is kept, mirroring results in only half the installed memory being
available to the operating system. Memory mirroring does not support asymmetrical memory
configurations and requires that each channel be populated in identical fashion. For example,
you must install two identical 4 GB 2133 MHz DIMMs equally and symmetrically across the
two memory channels to achieve 4 GB of mirrored memory.
Memory mirroring is a hardware feature that operates independent of the operating system.
There is a slight memory performance trade-off when memory mirroring is enabled.
The memory mirroring feature can be used with performance or RAS modes:
When Performance mode is used, memory mirroring duplicates data between memory
channels of the two memory buffers connected to one memory controller.
In RAS (Lockstep) mode, memory mirroring duplicates data between memory buffers that
are connected to the same memory controller.
Figure 2-25 Memory mirroring when used with Performance mode (left) and RAS mode (right)
Rank sparing

Memory rank sparing provides a degree of redundancy in the memory subsystem, but not to
the extent of mirroring. In contrast to mirroring, sparing leaves more memory available to the
operating system. In sparing mode, the trigger for failover is a preset threshold of correctable
errors. When this threshold is reached, the content is copied to the spare rank. The failed rank
is then taken offline, and the spare counterpart is activated for use.
In rank sparing mode, one rank per memory channel is configured as a spare. The spare rank
must have identical or larger memory capacity than all the other ranks (sparing source ranks)
on the same channel.
For example, if dual-rank DIMMs are installed and are all of the same capacity, there are six
ranks total for each memory channel (three DIMMs per channel). This configuration means
that one of the six ranks is reserved and five of the six ranks can be used for the operating
system.
The rank sparing feature can be used in addition to performance or RAS modes. Consider the
following points:
When Performance mode is used, rank sparing duplicates data between memory modules
of the same channel of one memory buffer. If there is an imminent failure (as indicated by
a red X in Figure 2-26 on page 41), that rank is taken offline and the data is copied to the
spare rank.
When RAS (Lockstep) mode is used, rank sparing duplicates data between memory
channels of one memory buffer. If there is an imminent failure (as indicated by a red X in
Figure 2-26 on page 41), that rank is taken offline and the data is copied to the spare rank.
In addition, the partner rank on the other channel that is connected to the same memory
buffer also is copied over.
Figure 2-26 Rank sparing: Performance mode (left) and RAS mode (right)
Note that the spare rank(s) must have memory capacity identical to, or larger than, all the
other ranks (sparing source ranks). The total memory available in the system is reduced by
the amount of memory allocated for the spare ranks.
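To make the capacity trade-offs concrete, the following minimal Python sketch computes the memory that remains available under mirroring and under rank sparing; the population of 24 dual-rank 16 GB DIMMs per processor is only an assumed example:

# Minimal sketch: usable memory per processor under mirroring and rank sparing.
DIMMS_PER_CPU = 24
DIMM_GB = 16                    # assumed module capacity
RANKS_PER_DIMM = 2              # dual-rank DIMMs
DIMMS_PER_CHANNEL = 3

installed_gb = DIMMS_PER_CPU * DIMM_GB

# Mirroring keeps a full redundant copy, so half of the installed memory is usable.
mirrored_gb = installed_gb / 2

# Rank sparing reserves one rank per channel (here, 1 of 6 ranks per channel).
ranks_per_channel = DIMMS_PER_CHANNEL * RANKS_PER_DIMM
spared_gb = installed_gb * (ranks_per_channel - 1) / ranks_per_channel

print(f"Installed: {installed_gb} GB")
print(f"Usable with mirroring: {mirrored_gb:.0f} GB")
print(f"Usable with rank sparing: {spared_gb:.0f} GB")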
Chipkill on its own can provide 99.94% memory availability to the applications without
sacrificing performance and with standard ECC DIMMs.
X6 servers support the Intel implementation of Chipkill plus redundant bit steering, which Intel
refers to as DDDC.
Redundant bit steering uses the ECC coding scheme that provides Chipkill coverage for x4
DRAMs. This coding scheme leaves the equivalent of one x4 DRAM spare in every pair of
DIMMs. If a chip failure on the DIMM is detected, the memory controller can copy data from
the failed chip through the spare x4.
Redundant bit steering operates automatically without issuing a Predictive Failure Analysis
(PFA) or light path diagnostics alert to the administrator, although an event is logged to the
service processor log. After the second DRAM chip failure on the DIMM in RAS (Lockstep)
mode, more single bit errors result in PFA and light path diagnostics alerts.
The algorithm uses short- and long-term thresholds per memory rank with leaky bucket and
automatic sorting of memory pages with the highest correctable error counts. First, it uses
hardware recovery features, followed by software recovery features, to optimize recovery
results for newer and older operating systems and hypervisors.
When recovery features are exhausted, the firmware issues a Predictive Failure Alert.
Memory that failed completely is held offline during starts until it is repaired. Failed DIMMs are
indicated by light path diagnostics LEDs that are physically at the socket location.
PCIe 3.0 uses a 128b/130b encoding scheme, which is more efficient than the 8b/10b
encoding that is used in the PCIe 2.0 protocol. This approach reduces overhead to less than
2%, compared with 20% for PCIe 2.0, and allows the per-lane bandwidth to be doubled at the
8 GTps signaling rate.
Each processor contains an Integrated I/O (IIO) module that provides 32 lanes of PCIe 3.0.
These 32 lanes can be split into any combination of x4, x8, and x16.
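The following minimal Python sketch reworks these figures into per-lane and per-slot numbers (theoretical maxima before protocol overhead), which shows why the 128b/130b change matters:

# Minimal sketch: theoretical bandwidth of PCIe 2.0 vs. PCIe 3.0 lanes.
def lane_gbps(gigatransfers, payload_bits, line_bits):
    # Usable GB/s per lane after line encoding (8 bits per byte).
    return gigatransfers * payload_bits / line_bits / 8

gen2 = lane_gbps(5.0, 8, 10)       # PCIe 2.0: 5 GT/s, 8b/10b encoding
gen3 = lane_gbps(8.0, 128, 130)    # PCIe 3.0: 8 GT/s, 128b/130b encoding

print(f"PCIe 2.0: {gen2:.3f} GB/s per lane, {gen2 * 16:.1f} GB/s for an x16 slot")
print(f"PCIe 3.0: {gen3:.3f} GB/s per lane, {gen3 * 16:.1f} GB/s for an x16 slot")
print(f"Encoding overhead: {1 - 8/10:.0%} (PCIe 2.0) versus {1 - 128/130:.1%} (PCIe 3.0)")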
Table 2-6 shows a comparison of the PCIe capabilities of the eX5 and X6 families.
For more information about PCIe 3.0, see the PCI Express Base 3.0 specification by
PCI-SIG, which is available at this website:
http://www.pcisig.com/specifications/pciexpress/base3/
For more information about the implementation of the I/O subsystem, see Chapter 3, “Product
information” on page 53
In addition to the drive bays, a Storage Book contains two PCIe 3.0 x8 slots for internal RAID
controllers or host bus adapters (HBAs).
The X6 servers bring a new generation of SAS protocol: 12 Gb SAS. It doubles the data
transfer rate of 6 Gb SAS solutions, to fully unlock the potential of the PCIe 3.0 interface and
to maximize performance for storage I/O-intensive applications, including databases,
business analytics, and virtualization and cloud environments.
In addition to or as an alternative to the use of SAS or SATA drives, the X6 servers support
the use of PCIe NVMe SSDs. These drives connect directly to the PCIe bus of the processors
and provide the ultimate in storage bandwidth and latency while still in a drive form factor.
SSDs are optimized for a heavy mix of random read and write operations, such as transaction
processing, data mining, business intelligence, and decision support, and other random
I/O-intensive applications. Built on enterprise-grade MLC NAND flash memory, the SSD
drives used in the X6 systems deliver up to 30,000 IOPS per single drive. Combined into an
SSD unit, these drives can deliver up to 240,000 IOPS and up to 2 GBps of sustained read
throughput per SSD unit. In addition to its superior performance, SSD offers superior uptime
with three times the reliability of mechanical disk drives because SSDs have no moving parts
to fail.
NVMe drives are available in 2.5-inch drive form-factor compatible with the X6 Storage Book,
but require a special NVMe backplane and PCIe extender adapters that are installed in
Storage Book PCIe 3.0 x8 slots.
Each NVMe PCIe extender supports one or two NVMe drives. You can install up to two NVMe
PCI extenders in each Storage Book, with which up to four NVMe PCIe drives can be used in
one Storage Book. As a result, the drive maximums per server are possible:
x3850 X6: four NVMe drives per server (one Storage Book)
x3950 X6: eight NVMe drives per server (two Storage Books)
If you use four NVMe drives in one Storage Book (and therefore two NVMe PCIe extenders),
no more PCIe slots are available for RAID adapters in that Storage Book. In this case, only
NVMe drives can be installed in the Storage Book, and the other four 2.5-inch drive bays must
be left empty.
For more information about the available combinations of drives in the Storage Book, see
3.11.1, “Storage Book” on page 94.
Note: Each pair of NVMe PCIe drives requires one NVMe PCIe extender, which is installed
in the Storage Book. Each NVMe PCIe SSD uses four PCIe 3.0 lanes; therefore, a pair of
NVMe PCIe drives completely uses the bandwidth of one available PCIe 3.0 x8 slot in the
Storage Book, as the sketch that follows illustrates.
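The lane and extender arithmetic in the note can be summarized in a short sketch. The following Python example assumes only the limits that are stated in this section (four lanes per drive, two drives per extender, two extender slots per Storage Book):

    # Extender adapters and PCIe 3.0 lanes that a given number of NVMe drives
    # consumes in one Storage Book (limits as described in this section).

    LANES_PER_NVME_DRIVE = 4       # each NVMe SSD uses four PCIe 3.0 lanes
    DRIVES_PER_EXTENDER = 2        # one extender (one x8 slot) serves two drives
    EXTENDER_SLOTS_PER_BOOK = 2    # two PCIe 3.0 x8 slots per Storage Book

    def extenders_needed(nvme_drives):
        if nvme_drives > DRIVES_PER_EXTENDER * EXTENDER_SLOTS_PER_BOOK:
            raise ValueError("More than four NVMe drives per Storage Book is not supported")
        extenders = -(-nvme_drives // DRIVES_PER_EXTENDER)   # ceiling division
        lanes = nvme_drives * LANES_PER_NVME_DRIVE
        return extenders, lanes

    for drives in (1, 2, 3, 4):
        extenders, lanes = extenders_needed(drives)
        print(f"{drives} NVMe drive(s): {extenders} extender(s), {lanes} PCIe 3.0 lanes")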
The Flash Storage Adapters combine high IOPS performance with low latency. For example,
with 4 KB block random reads, the 3.2 TB io3 Enterprise Mainstream Flash Adapter can
deliver 345,000 IOPS, and the 2.0 TB P3700 NVMe Enterprise Performance Flash Adapter
can deliver 400,000 IOPS, compared with 420 IOPS for a 15 K RPM 146 GB disk drive.
Reliability features include the use of enterprise-grade MLC (eMLC), advanced wear-leveling,
ECC protection, and Adaptive Flashback redundancy for RAID-like chip protection with
self-healing capabilities, which provides unparalleled reliability and efficiency.
Advanced bad-block management algorithms enable taking blocks out of service when their
failure rate becomes unacceptable. These reliability features provide a predictable lifetime
and up to 25 years of data retention.
Figure 2-30 shows the 6.4TB io3 Enterprise Mainstream Flash Storage Adapter:
Figure 2-31 on page 48 shows the P3700 NVMe Enterprise Performance Flash Adapter:
For more information about SSD Adapters that are supported in the X6 servers, see the
Lenovo Press Product Guides in the PCIe SSD Adapters category:
http://lenovopress.com/servers/options/ssdadapter
By using this feature, the user can display server activities from power-on to full operation
remotely, with remote user interaction, at virtually any time.
The following table represents the main differences between the IMM2 and the IMM2.1.
For more information, see Integrated Management Module II User’s Guide, which is available
at this URL:
https://support.lenovo.com/docs/UM103336
2.9 Scalability
The X6 servers have a flexible modular design with which you can increase the server’s
compute power and I/O capabilities by adding Compute Books and I/O Books. You can build
an initial x3850 X6 configuration with as few as one Compute Book, one Primary I/O Book,
and one Storage Book, and then expand the server as your requirements grow.
Table 2-8 shows a comparison of minimum and maximum configurations of X6 servers, from
an entry level x3850 X6 configuration to a fully populated x3950 X6.
PCIe slots: 6 (entry-level x3850 X6) to 24 (fully populated x3950 X6)
The x3850 X6 and x3950 X6 servers use native QPI scaling capabilities to achieve 4-socket
and 8-socket configurations. Unlike eX5 systems, there are no external connectors and
cables for X6 systems; all interconnects are integrated in the midplane.
For more information about the upgrading process, see 3.26, “Upgrading to an 8-socket X6
server” on page 125.
X6 servers continue to lead the way as the shift toward mission-critical scalable databases,
business analytics, virtualization, enterprise applications, and cloud applications accelerates.
3.2 Specifications
Table 3-1 lists the standard specifications.
Processor x3850 X6: One, two, or four Intel Xeon E7-4800 v4 or E7-8800 v4 processors, each in a
Compute Book. Processor options range from 4 cores (3.2 GHz) to 24 cores (up to 2.7 GHz).
Three QPI links up to 9.6 GTps each. Compute Books have TruDDR4 memory that operates
at up to 1866 MHz. Up to 60 MB L3 cache. Intel C602J chipset. Models with E7 v3 processors
are now withdrawn from marketing; however, Compute Books are still available as field upgrades.
x3950 X6: Four, six, or eight Intel Xeon E7-8800 v4 processors, each in a Compute Book.
Processor options range from 4 cores (3.2 GHz) to 24 cores (up to 2.7 GHz). Three QPI links up
to 9.6 GTps each. Compute Books have TruDDR4 memory that operates at up to 1866 MHz.
Up to 60 MB L3 cache. Intel C602J chipset. Models with E7 v3 processors are now withdrawn
from marketing; however, Compute Books are still available as field upgrades.
Memory protection ECC, Chipkill, RBS, memory mirroring, and memory rank sparing.
Disk drive bays x3850 X6: Up to eight 2.5-inch hot-swap SAS/SATA bays, or up to 16 1.8-inch SSD bays.
x3950 X6: Up to 16 2.5-inch hot-swap SAS/SATA bays, or up to 32 1.8-inch SSD bays.
RAID support 12 Gb SAS/SATA RAID 0, 1, or 10 with ServeRAID™ M5210; optional upgrades to RAID 5 or
50 are available (zero-cache; 1 GB non-backed cache; 1 GB or 2 GB flash-backed cache).
Upgrades to RAID 6 or 60 available for M5210 with 1 GB or 2 GB upgrades.
Optical and tape bays No internal bays; use an external USB drive. For more information, see this website:
http://support.lenovo.com/en/documents/pd011281
Network interfaces Mezzanine LOM (ML2) slot for dual-port 10 GbE cards with SFP+ or RJ-45 connectors or
quad-port GbE cards with RJ-45 connectors. x3950 X6 has two ML2 slots. Dedicated 1 GbE
port for systems management.
PCI Expansion slots x3850 X6: Up to 11 PCIe slots plus one dedicated Mezzanine LOM slot (12 total). The following
slots are available:
Two PCIe 3.0 x8 slots for internal RAID controllers (Storage Book)
Two PCIe 3.0 x16 slots (x16-wired), half-length, full-height (Primary I/O Book)
One PCIe 3.0 x16 (x8-wired), half-length, full-height (Primary I/O Book)
One ML2 slot for network adapter (PCIe 3.0 x8) (Primary I/O Book)
Two optional I/O Books, each with three slots, all full height (use of these I/O Books
requires four processors). Optional books are hot-swap capable.
x3950 X6: Up to 22 PCIe slots plus two dedicated Mezzanine LOM slots (24 total). The
following slots are available:
Four PCIe 3.0 x8 slots for internal RAID controllers (Storage Book)
Four PCIe 3.0 x16 slots (x16-wired), half-length, full height (Primary I/O Book)
Two PCIe 3.0 x16 (x8-wired), half-length, full-height (Primary I/O Book)
Two ML2 slots for network adapter (PCIe 3.0 x8) (Primary I/O Book)
Four optional I/O Books, each with three slots, all full height (use of these I/O Books
requires four processors). Optional books are hot-swap capable.
x3950 X6:
Front: Four USB 3.0, two USB 2.0, and two DB-15 video ports.
Rear: Eight USB 2.0, two DB-15 video, two DB-9 serial, and two 1 GbE RJ-45 IMM2
systems management.
Internal: Two USB 2.0 ports for embedded hypervisor.
Note: The second video, IMM2, and internal USB hypervisor ports are used only when the
server is partitioned into two four-socket servers.
Power supplies 900 W AC or 1400 W AC (all 80 PLUS Platinum certified); -48 V 750 W DC power
supplies are available via configure-to-order (CTO).
Note: In a four or eight power supply configuration, mixing 900 W and 1400 W power supplies
is allowed; however, they must be balanced and form pairs or a set of four of the same type.
Video Matrox G200eR2 with 16 MB memory that is integrated into the IMM2. Maximum resolution is
1600 x 1200 at 75 Hz with 16 M colors.
Security features x3850 X6: Power-on password, admin password, two TPMs
x3950 X6: Power-on password, admin password, four TPMs
Systems management UEFI, IMM2 (Version 2.1) with remote presence feature, Predictive Failure Analysis,
Light Path Diagnostics, Automatic Server Restart, Lenovo XClarity Administrator, Lenovo XClarity
Energy Manager, and ServerGuide™.
Supported operating systems Microsoft Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise
Server, and VMware vSphere ESXi. See Chapter 6, “Operating system installation” on page 225 for
specific versions.
Limited warranty Three-year customer-replaceable unit (CRU) and onsite limited warranty with 9x5 next
business day (NBD).
Service and support Optional service upgrades are available through Lenovo Services offerings: 4-hour or 2-hour
response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical
support for Lenovo hardware and some Lenovo/OEM applications.
Weight x3850 X6: Minimum configuration: 35.9 kg (79.2 lb), typical: 46.4 kg (102.3 lb), maximum:
54.7 kg (120 lb).
x3950 X6: Minimum configuration: 84.5 kg (186.3 lb), typical: 88.2 kg (194.5 lb),
maximum: 110.0 kg (242.6 lb).
The x3850 X6 and x3950 X6 servers are shipped with the following items:
Rail kit
Cable management bracket kit (2 brackets for x3850 X6, 4 brackets for the x3950 X6)
2.8 m (9.18 ft) C13-C14 power cord (one for each power supply)
Statement of Limited Warranty
Important Notices
Rack Installation Instructions
Documentation CD that contains the Installation and User's Guide
Figure 3-1 shows the components of the cable management bracket kit that ship with the
x3850 X6. A second set (that is, two left and two right) is also shipped with the x3950 X6.
Machine type 3837: Machine type 3837 (withdrawn from marketing) is not covered in this
book. For information about these systems, see the following Lenovo Press product
guides:
System x3850 X6 (3837):
http://lenovopress.com/tips1084
System x3950 X6 (3837):
http://lenovopress.com/tips1132
x3850 X6 TopSeller™ models with E7 v4 processors (TruDDR4 memory) - North America only
x3850 X6 models with E7 v3 processors and TruDDR4 memory (all withdrawn from marketing)
x3850 X6 models with E7 v3 processors and DDR3 memory (all withdrawn from marketing)
x3850 X6 models with E7 v2 processors and DDR3 memory (all withdrawn from marketing)
x3950 X6 models with E7 v3 processors and TruDDR4 memory (all withdrawn from marketing)
x3950 X6 models with E7 v3 processors and DDR3 memory (withdrawn from marketing)
x3950 X6 models with E7 v2 processors and DDR3 memory (withdrawn from marketing)
For more information about standard features of the server, see Table 3-1 on page 57.
Table 3-4 SAP HANA Workload Optimized Solution models for x3850 X6
6241-EKU: 2x Intel Xeon E7-8880 v4 22C 2.2GHz 55M 150W; memory 16x 16GB (1866 MHz); RAID 1x M5210 + upgrades^a; drive bays and drives 8x 2.5-inch, 6x 1.2 TB SAS HDD, 2x 400 GB S3710 SSD; Ethernet 1x 4x1GbE ML2, 2x Mellanox 10Gb^a; I/O slots 6 std, 12 max; power supplies 4x 1400W HS / 4
SAP HANA models with E7 v3 processors and TruDDR4 memory (withdrawn from marketing)
SAP HANA models with E7 v3 processors and DDR3 memory (withdrawn from marketing)
SAP HANA models with E7 v2 processors and DDR3 memory (withdrawn from marketing)
Note: The operating system software is not included with the SAP HANA models.
Operating system selection must be a separate line item included in the order: SLES for SAP
with standard or priority support. The SAP HANA Software is included, but the license is
sold separately by SAP or an SAP business partner. VMware Enterprise Plus license sold
separately. IBM Spectrum Scale (GPFS) is available from Lenovo separately.
Table 3-5 SAP HANA Workload Optimized Solution models for x3950 X6
6241-8Hx: 4x Intel Xeon E7-8880 v4 22C 2.2GHz 55M 150W; memory 1024 GB (32x 32GB DDR4 RDIMM); RAID 2x M5210 + upgrades^a; drive bays and drives 16x 2.5-inch HS, 12x 1.2 TB SAS HDD, 4x 400 GB S3710 SSD; Ethernet 2x 4x1GbE ML2, 2x Mellanox 10Gb^a; I/O slots 12 std, 24 max; power supplies 8x 1400W HS / 8
6241-8Jx: 8x Intel Xeon E7-8880 v4 22C 2.2GHz 55M 150W; memory 2048 GB (64x 32GB DDR4 RDIMM); RAID 2x M5210 + upgrades^a; drive bays and drives 16x 2.5-inch HS, 12x 1.2 TB SAS HDD, 4x 400 GB S3710 SSD; Ethernet 2x 4x1GbE ML2, 2x Mellanox 10Gb^a; I/O slots 18 std, 24 max; power supplies 8x 1400W HS / 8
x3950 X6 HANA models with E7 v4 processors (TruDDR4 memory) - TopSeller - North America only
x3950 X6 HANA models with E7 v3 processors and TruDDR4 memory (withdrawn from marketing)
6241-HIx: 4x Intel Xeon E7-8880 v3 18C 2.3GHz 45MB 150W; memory 1024 GB (32x 32 GB DDR4 RDIMM); RAID 2x M5210 + upgrades^a; drive bays and drives 16x 2.5-inch HS, 12x 1.2 TB SAS HDD, 4x 400 GB S3700 SSD; Ethernet 2x 4x1GbE ML2, 2x Mellanox 10Gb^a; I/O slots 12 std, 24 max; power supplies 8x 1400 W HS / 8
6241-HJx: 8x Intel Xeon E7-8880 v3 18C 2.3GHz 45MB 150W; memory 2048 GB (64x 32 GB DDR4 RDIMM); RAID 2x M5210 + upgrades^a; drive bays and drives 16x 2.5-inch HS, 12x 1.2 TB SAS HDD, 4x 400 GB S3700 SSD; Ethernet 2x 4x1GbE ML2, 2x Mellanox 10Gb^a; I/O slots 18 std, 24 max; power supplies 8x 1400 W HS / 8
x3950 X6 HANA models with E7 v3 processors and DDR3 memory (withdrawn from marketing)
6241-HFx: 4x Intel Xeon E7-8880 v2 15C 2.5GHz 37.5MB 130W; memory 1024 GB (32x 32 GB DDR3 LRDIMM); RAID 2x M5210 + upgrades^a; drive bays and drives 16x 2.5-inch HS, 12x 1.2 TB SAS HDD, 4x 400 GB S3700 SSD; Ethernet 2x 4x1GbE ML2, 2x Mellanox 10Gb^a; I/O slots 12 std, 24 max; power supplies 8x 1400 W HS / 8
6241-HGx: 8x Intel Xeon E7-8880 v2 15C 2.5GHz 37.5MB 130W; memory 2048 GB (64x 32 GB DDR3 LRDIMM); RAID 2x M5210 + upgrades^a; drive bays and drives 16x 2.5-inch HS, 12x 1.2 TB SAS HDD, 4x 400 GB S3700 SSD; Ethernet 2x 4x1GbE ML2, 2x Mellanox 10Gb^a; I/O slots 18 std, 24 max; power supplies 8x 1400 W HS / 8
x3950 X6 HANA models with E7 v2 processors and DDR3 memory (withdrawn from marketing)
6241-HCx^b: 4x Intel Xeon E7-8880 v2 15C 2.5GHz 37.5MB 130W; memory 1024 GB (32x 32 GB DDR3 LRDIMM); RAID 2x M5210 + upgrades^a; drive bays and drives 16x 2.5-inch HS, 12x 1.2 TB SAS HDD, 4x 400 GB S3700 SSD; Ethernet 2x 4x1GbE ML2, 2x Mellanox 10Gb^a; I/O slots 12 std, 24 max; power supplies 8x 1400 W HS / 8
6241-HDx^b: 8x Intel Xeon E7-8880 v2 15C 2.5GHz 37.5MB 130W; memory 2048 GB (64x 32 GB DDR3 LRDIMM); RAID 2x M5210 + upgrades^a; drive bays and drives 16x 2.5-inch HS, 12x 1.2 TB SAS HDD, 4x 400 GB S3700 SSD; Ethernet 2x 4x1GbE ML2, 2x Mellanox 10Gb^a; I/O slots 18 std, 24 max; power supplies 8x 1400 W HS / 8
a. See the list of specific components in the next section.
b. Withdrawn from marketing
As shown in Figure 3-2, each x3850 X6 server can have up to four Compute Books (two
Compute Books minimum), and one Storage Book, which can have up to 8x 2.5-inch HDDs
or SSDs or up to 16x 1.8-inch SSDs. The x3850 X6 server also has three USB 3.0 ports
accessible from the front, one video port, and the control panel with LCD screen.
As shown in Figure 3-3, the x3950 X6 is the equivalent of two x3850 X6 servers. It can have
up to eight Compute Books (four Compute Books minimum), and two Storage Books, each of
which can have up to 8x 2.5-inch drives or up to 16x 1.8-inch SSDs.
For more information about the Compute Book, see 3.7, “Compute Books” on page 75. For
more information about the Storage Book, see 3.11, “Storage subsystem” on page 94.
Figure 3-4 Rear view of the x3850 X6
For more information about the I/O Books, see 3.12, “I/O subsystem” on page 100.
At the bottom of the server at the rear are bays for up to four power supply modules. For more
information, see 3.24, “Power subsystem” on page 122.
Figure 3-6 Ports, controls, and LEDs on the front operator panel
The LCD system information display panel contains the following buttons:
Scroll up button: Press this button to scroll up or scroll to the left in the main menu to
locate and select the system information that you want displayed.
Select button: Press this button to make your selection from the menu options.
Scroll down button: Press this button to scroll down or scroll to the right in the main menu
to locate and select the system information that you want displayed.
Figure 3-8 System properties that are shown on the LCD system information panel
Figure 3-9 shows an example of the information that you might see on the display panel.
Figure 3-9 LCD system information panel example
In the example, the panel shows the system name, the system status, and the UEFI/POST
code. A check mark indicates that the system is booting from the alternative UEFI bank.
You can use the Scroll up and Scroll down buttons to navigate inside the menu. You can use
the Select button to choose an appropriate submenu.
For the Errors submenu set, if only one error occurs, the LCD display panel displays that
error. If more than one error occurs, the LCD display panel displays the number of errors that
occurred.
The LCD system information display panel displays the following types of information about
the server:
IMM system error log
System VPD information:
– Machine type and serial number
The x3950 X6 system has the following number of ports when not partitioned (partitioning
allows you to split the x3950 X6 into two separate virtual 4-socket systems):
Front: Four USB 3.0, two USB 2.0, and one DB-15 video ports.
Rear: Eight USB 2.0, one DB-15 video, two DB-9 serial, and one 1 GbE RJ-45 IMM2
systems management.
Internal: One USB 2.0 port for embedded hypervisor.
The x3950 X6 system has the following number of ports when partitioned:
Front: Four USB 3.0, two USB 2.0, and two DB-15 video ports.
Rear: Eight USB 2.0, two DB-15 video, two DB-9 serial, and two 1 GbE RJ-45 IMM2
systems management.
Internal: Two USB 2.0 ports for embedded hypervisor.
The processor options are preinstalled in a Compute Book. For more information, see 3.8,
“Processor options” on page 80.
E7 v2 and v3 processors:
Models with E7 v3 processors are now withdrawn from marketing; however, Compute
Books are still available for field upgrades.
Compute Books and server models with E7 v2 processors are now withdrawn from
marketing.
As shown in Figure 3-11, the Compute Book has a cover with a transparent window, with
which you can check light path diagnostic LEDs and view installed DIMMs without removing
the cover.
Figure 3-12 Compute Book with the side covers removed
Figure 3-13 shows the Compute Book from the front where two hot-swap fans are installed.
Figure 3-13 Front view of the Compute Book, front and rear views of fans
As shown in Figure 3-13, the Compute Book has two buttons that are hidden behind the
upper fan module: a light path diagnostic button and a slider to unlock the release handle.
Use the light path diagnostic button to determine a failed DIMM module on the Compute
Book. The appropriate DIMM error LED on the Compute Book board should be lit. Use the
slider to unlock the release handle so that you can remove the Compute Book from the
chassis.
Figure 3-14 DIMM error LEDs placed on the left side of the system board
Compute Books are numbered 1 to 4 in the x3850 X6 and 1 to 8 in the x3950 X6.
If you plan to use your existing DDR3 memory, complete the following steps to upgrade the
server:
1. Purchase up to four new Compute Books.
2. Check and upgrade all firmware to at least the minimum levels that are needed to support
E7 v3 processors, if necessary.
3. Power off the server and remove the old Compute Books.
4. Transfer all DDR3 memory DIMMs to the new E7 v3 Compute Books.
5. Reinstall the new E7 v3 Compute Books.
If you plan to upgrade to TruDDR4 memory, order the appropriate memory DIMMs to match
your workload requirements.
E7 v2 and v3 processors:
Models with E7 v3 processors are now withdrawn from marketing; however, Compute
Books are still available for field upgrades.
Compute Books and server models with E7 v2 processors are now withdrawn from
marketing.
The Intel Xeon E7-4800 v3 series processors are available with up to 18 cores and 45 MB of
last-level cache and can form a 4-socket configuration. The Intel Xeon E7-8800 v3 series
processors are also available with up to 18 cores and 45 MB of last-level cache, but can be
used in 4-socket and 8-socket configurations. Using E7-8800 processors enables Compute
Books to be swapped between x3850 X6 and x3950 X6 servers.
The Intel Xeon E7-4800 v4 series processors are available with up to 16 cores and 40 MB of
last-level cache and can form a 4-socket configuration. The Intel Xeon E7-8800 v4 series
processors are also available with up to 24 cores and 60 MB of last-level cache, but can be
used in 4-socket and 8-socket configurations.
Using E7-8800 processors enables Compute Books to be swapped between x3850 X6 and
x3950 X6 servers.
Table 3-6 also lists the processor options that are grouped in the following manner:
E7-4800 v3 processors in Compute Books with TruDDR4 support
E7-8800 v3 processors in Compute Books with TruDDR4 support
E7-4800 v3 processors in Compute Books with DDR3 support
E7-8800 v3 processors in Compute Books with DDR3 support
The processor options are shipped preinstalled in a Compute Book. All Compute Books in a
server must have identical processors.
Intel Xeon E7-4800 v4 processors (not supported in the x3950 X6) with support for TruDDR4 memory
Intel Xeon E7-8800 v4 processors with support for TruDDR4 memory (supported in x3850/x3950 X6)
Intel Xeon E7-4800 v3 processors (not supported in the x3950 X6) with support for DDR3 memory
Intel Xeon E7-8800 v3 processors (supported in x3850/x3950 X6) with support for DDR3 memory
Intel Xeon E7-4800 v3 processors (not supported in the x3950 X6) with support for TruDDR4 memory
Intel Xeon E7-8800 v3 processors (supported in x3850/x3950 X6) with support for TruDDR4 memory
Lenovo TruDDR4 Memory uses the highest-quality components that are sourced from Tier 1
DRAM suppliers and only memory that meets the strict requirements of Lenovo is selected. It
is compatibility tested and tuned for optimal System x performance and throughput.
TruDDR4 Memory has a unique signature that is programmed into the DIMM that enables
System x servers to verify whether the memory that is installed is qualified or supported by
Lenovo. Because TruDDR4 Memory is authenticated, certain extended memory performance
features can be enabled to extend performance over industry standards.
Lenovo DDR3 memory is compatibility tested and tuned for optimal System x performance
and throughput. Lenovo memory specifications are integrated into the light path diagnostics
for immediate system performance feedback and optimum system uptime. From a service
and support standpoint, Lenovo memory automatically assumes the system warranty, and
Lenovo provides service and support worldwide.
As described in 2.4, “Memory” on page 33, the x3850 X6 and x3950 X6 support TruDDR4
memory operating at speeds up to 1866 MHz and DDR3 memory at speeds up to 1600 MHz
(model dependent).
Tip: TruDDR4 2133 MHz and 2400 MHz DIMMs operate with Intel Xeon E7 processors,
but only at up to 1866 MHz speeds.
The x3850 X6 supports up to 96 DIMMs when all processors are installed, 24 DIMMs per
processor. The x3950 X6 supports up to 192 DIMMs when all processors are installed, 24
DIMMs per processor. Each processor has four Scalable Memory Interface generation 2
(SMI2) links, each of which connects to a memory buffer that provides two DDR channels,
and the server implements three DIMMs per channel (4 x 2 x 3 = 24 DIMMs per processor).
The processor and the corresponding memory DIMM slots are on the Compute Book.
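The DIMM counts follow directly from this topology. The following Python sketch illustrates the arithmetic; the 64 GB LRDIMM capacity figures are an illustration only, not a statement about specific models:

    # Maximum DIMM counts for the X6 servers, based on the memory topology
    # described above, plus an illustrative capacity with 64 GB LRDIMMs.

    SMI2_LINKS_PER_CPU = 4          # each link drives one memory buffer
    DDR_CHANNELS_PER_BUFFER = 2
    DIMMS_PER_CHANNEL = 3

    dimms_per_cpu = SMI2_LINKS_PER_CPU * DDR_CHANNELS_PER_BUFFER * DIMMS_PER_CHANNEL  # 24

    for server, cpus in (("x3850 X6", 4), ("x3950 X6", 8)):
        dimm_slots = dimms_per_cpu * cpus
        capacity_tb = dimm_slots * 64 / 1024   # every slot filled with a 64 GB LRDIMM
        print(f"{server}: {dimm_slots} DIMM slots, up to {capacity_tb:.0f} TB with 64 GB LRDIMMs")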
The Intel Xeon E7 processors support two memory modes: Performance mode and RAS (or
Lockstep) mode. For more information, see 2.4.1, “Operational modes” on page 34.
Table 3-7 lists the memory options that are available for x3850 X6 and x3950 X6 servers.
TruDDR4 RDIMMs and LRDIMMs - 2400 MHz (for use with E7 v4 processors)
46W0821 ATC8 8GB TruDDR4 Memory (1Rx4, 1.2V) PC4-19200 CL17 2400MHz LP RDIMM; maximum 96 / 192 (24 per CPU); standard models: 14x, 1RC
46W0829 ATCA 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-19200 CL17 2400MHz LP RDIMM; maximum 96 / 192 (24 per CPU); standard models: all other v4 models
46W0833 ATCB 32GB TruDDR4 Memory (2Rx4, 1.2V) PC4-19200 CL17 2400MHz LP RDIMM; maximum 96 / 192 (24 per CPU); standard models: 8Cx, 8Fx, 8Hx, 8Jx
46W0841 ATGG 64GB TruDDR4 Memory (4Rx4, 1.2V) PC4-19200 2400MHz LP LRDIMM; maximum 96 / 192 (24 per CPU); standard models: -
TruDDR4 RDIMMs and LRDIMMs - 2133 MHz (for use with E7 v3 processors, also supported with E7 v4; 00KH391
only supported on certain E7 v3 processors)
46W0788 A5B5 8GB TruDDR4 Memory (1Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM; maximum 96 / 192 (24 per CPU); standard models: D5x
46W0796 A5B7 16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM; maximum 96 / 192 (24 per CPU); standard models: F2x, F4x, G2x, G4x
95Y4808 A5UJ 32GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM; maximum 96 / 192 (24 per CPU); standard models: -
00KH391^a AUF3 32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM; maximum 96 / 192 (24 per CPU); standard models: -
95Y4812 A5UK 64GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM; maximum 96 / 192 (24 per CPU); standard models: -
DDR3 RDIMMs
00D5024^b A3QE 4GB (1x4GB, 1Rx4, 1.35V) PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM; maximum 96 / 192 (24 per CPU); standard models: -
00D5036 A3QH 8GB (1x8GB, 1Rx4, 1.35V) PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM; maximum 96 / 192 (24 per CPU); standard models: A4x, B1x, B3x, C1x, C4x, D4x
46W0672 A3QM 16GB (1x16GB, 2Rx4, 1.35V) PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM; maximum 96 / 192 (24 per CPU); standard models: F1x, F3x, G1x, G3x
DDR3 LRDIMMs
46W0676 A3SR 32GB (1x32GB, 4Rx4, 1.35V) PC3L-12800 CL11 ECC DDR3 1600MHz LP LRDIMM; maximum 96 / 192 (24 per CPU); standard models: -
a. 00KH391 is supported only in Compute Books with E7-8880 v3 or E7-8890 v3 processors; no other memory DIMM
is supported if 00KH391 is installed
b. 00D5024 is supported only in Compute Books with E7 v2 processors
Table 3-10 (for DDR3 memory) and Table 3-9 (for TruDDR4 memory) show the
characteristics of the supported DIMMs and the memory speeds. The cells that are
highlighted in gray indicate that the X6 servers support higher memory frequencies or larger
memory capacity (or both) than the Intel processor specification defines.
Memory speed: In Performance mode, memory channels operate independently, and the
SMI2 link operates at twice the DIMM speed. In RAS mode, two channels operate
synchronously, and the SMI2 link operates at the DIMM speed.
Table 3-8 Maximum memory speeds: 2133 MHz TruDDR4 memory - x3850 / x3950 X6
Specification TruDDR4 RDIMMs TruDDR4 LRDIMMs
Part number 46W0788 (8 GB) 46W0796 (16 GB) 95Y4812 (64 GB)
95Y4808 (32 GB)
Largest DIMM 8 GB 16 GB 64 GB
Maximum operating speed: Performance mode (2:1 mode; SMI2 link operates at twice the speed shown)
Maximum operating speed: RAS mode (1:1 mode; SMI2 link operates at the speed shown)
Table 3-9 Maximum memory speeds: 2400 MHz TruDDR4 memory - x3850 / x3950 X6
Specification TruDDR4 RDIMMs TruDDR4 LRDIMMs
Part number 46W0821 (8 GB) 46W0829 (16 GB) 46W0841 (64 GB)
46W0833 (32 GB)
Largest DIMM 8 GB 16 GB 64 GB
Maximum operating speed: Performance mode (2:1 mode; SMI2 link operates at twice the speed shown)
Maximum operating speed: RAS mode (1:1 mode; SMI2 link operates at the speed shown)
Rated speed 1600 MHz 1600 MHz 1600 MHz 1333 MHz
Operating voltage 1.35 V 1.5 V 1.35 V 1.5 V 1.35 V 1.5 V 1.35 V 1.5 V
Max. DIMM 8 GB 8 GB 16 GB 16 GB 32 GB 32 GB 64 GB 64 GB
capacity
Maximum operating speed - Performance mode (2:1 mode - SMI2 link operates at twice the DDR3 speed shown)
1 DIMM / channel 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz
2 DIMMs / channel 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz
3 DIMMs / channel 1066 MHz 1333 MHz 1066 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz
Maximum operating speed - RAS mode (1:1 mode - SMI2 link operates at the DDR3 speed shown)
1 DIMM / channel 1333 MHz 1600 MHz 1333 MHz 1600 MHz 1333 MHz 1600 MHz 1333 MHz 1333 MHz
2 DIMMs / channel 1333 MHz 1600 MHz 1333 MHz 1600 MHz 1333 MHz 1600 MHz 1333 MHz 1333 MHz
3 DIMMs / channel 1066 MHz 1333 MHz 1066 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz 1333 MHz
a. Maximum memory capacity for the x3850 X6 server / Maximum memory capacity for the x3950 X6 server.
Chipkill and Redundant Bit Steering are supported in RAS mode. Chipkill is supported in
Performance mode.
If memory mirroring is used, DIMMs must be installed in pairs for Performance mode
(minimum of one pair per processor) and in quads for RAS mode. DIMMs in the pair or
quad must be identical in type and size.
If memory rank sparing is used, a minimum of two single-rank or dual-rank DIMMs must be
installed per populated channel (the DIMMs do not need to be identical). In rank sparing
mode, one rank of a DIMM in each populated channel is reserved as spare memory. The size
of a rank varies depending on the DIMMs that are installed.
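As a rough illustration of the capacity cost of rank sparing, the following Python sketch assumes a channel that is populated with three dual-rank 32 GB RDIMMs (an assumed example configuration, not a specific model):

    # Illustrative capacity calculation for memory rank sparing on one channel.
    # Assumed population: three dual-rank 32 GB RDIMMs in the channel.

    dimm_size_gb = 32
    ranks_per_dimm = 2
    dimms_per_channel = 3

    rank_size_gb = dimm_size_gb // ranks_per_dimm     # 16 GB per rank
    installed_gb = dimm_size_gb * dimms_per_channel   # 96 GB installed in the channel
    spare_gb = rank_size_gb                           # one rank per channel is reserved
    usable_gb = installed_gb - spare_gb               # 80 GB remain available

    print(f"Installed: {installed_gb} GB, reserved for sparing: {spare_gb} GB, usable: {usable_gb} GB")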
The main operation modes can be combined with extra modes; therefore, the following
operation modes are available:
Performance (also referred to as Independent mode)
RAS (also referred to as Lockstep mode)
Performance + Mirroring
Performance + Rank Sparing
RAS + Mirroring
RAS + Rank Sparing
For more information about memory operation modes, see 2.4.1, “Operational modes” on
page 34 and 2.4.2, “Memory mirroring and rank sparing” on page 38.
Depending on the selected operating mode, use the appropriate DIMM installation order.
Table 3-11 lists the DIMM placement order for each operation mode.
The DIMM installation order for Rank Sparing mode follows the Performance mode or RAS
(Lockstep) mode installation order based on the operation mode selected.
As shown in Table 3-11, DIMMs must be installed in pairs (minimum of one pair per CPU) in
Mirroring mode. Both DIMMs in a pair must be identical in type and size.
Chipkill is supported in RAS mode only (Performance mode is not supported), and DIMMs
must be installed in a pair.
For Rank Sparing, the spare rank must have identical or larger memory capacity than all the
other ranks (sparing source ranks) on the same channel. A minimum of two DIMMs per
channel is required if only single rank DIMMs are used. The DIMM installation order must be
modified to accommodate single rank only configurations.
Rank Sparing can be supported by 1 DIMM per channel if multi-rank DIMMs are used.
Configurations that consist of multiple CPU sockets must have matching memory
configurations on all CPUs because memory sparing is globally applied across all CPU
sockets when enabled.
If you use eXFlash DIMMs, see 3.10, “Memory-channel storage” on page 92 for more
information about the population order of eXFlash DIMMs.
Data transfers between processors and eXFlash DIMMs run directly without any extra
controllers, such as PCIe controllers and SAS/SATA controllers. This brings storage
electrically closer to the processor subsystem, which significantly reduces latency and
improves performance. Figure 3-17 shows the eXFlash DIMM.
eXFlash DIMM modules are available in DDR3 form-factor only and can be installed in the
same DIMM slots on the Compute Book as regular DDR3 DIMMs. Compute Books with
TruDDR4 memory (v3 or v4 processors) do not support eXFlash DIMMs.
Figure 3-18 on page 93 shows one eXFlash DIMM installed with RDIMMs in the Compute
Book.
The following rules apply when a server configuration is built with eXFlash DIMMs:
Only DDR3 Compute Books are supported. Compute Books with TruDDR4 memory (v3 or
v4 processors) do not support eXFlash DIMMs.
200 GB and 400 GB eXFlash DIMMs cannot be mixed.
Performance memory mode must be selected; RAS (Lockstep) memory mode is not
supported.
Only RDIMMs are supported by eXFlash DIMMs; LRDIMMs are not supported.
The following maximum quantities of eXFlash DIMMs in an x3850 X6 are supported:
– 1 processor: 8 eXFlash DIMMs
– 2 processors: 16 eXFlash DIMMs
– 4 processors: 32 eXFlash DIMMs
The following maximum quantities of eXFlash DIMMs in an x3950 X6 are supported:
– 4 processors: 16 eXFlash DIMMs
– 6 processors: 24 eXFlash DIMMs
– 8 processors: 32 eXFlash DIMMs
The Storage Book is accessible from the front of the server and contains the following
components:
Up to eight 2.5-inch drives or 16 1.8-inch SSDs or up to four NVMe drives
Up to two PCIe 3.0 x8 RAID-adapters
Operator panel with LCD system information panel
Two USB 3.0 ports
One USB 2.0 port
One SVGA port
Figure 3-20 shows all available combinations of drives in the Storage Book.
The legend in Figure 3-20 distinguishes 2.5-inch SAS/SATA HDDs or SSDs, 2.5-inch PCIe
NVMe SSDs, 1.8-inch SAS SSDs, and blank bays. The backplane order cannot be reversed.
Each Storage Book can house two backplanes. All standard models ship with one backplane
with four 2.5-inch hot-swap drive bays.
Figure 3-21 shows the location of SAS backplanes in the Storage Book and the RAID
controllers that connect to the backplanes.
The following drive combinations, backplanes, and controller quantities are supported in each Storage Book:
16x 1.8-inch SSD: two 8x 1.8-inch HS 12Gb SAS HDD Backplanes, 44X4106 (two controllers)
4x 2.5-inch NVMe: one 4x 2.5-inch NVMe PCIe Gen3 SSD Backplane, 44X4108 (one or two controllers^a)
2x 2.5-inch NVMe + 4x 2.5-inch HDD/SSD: one 4x 2.5-inch NVMe PCIe Gen3 SSD Backplane, 44X4108, plus one 4x 2.5-inch HS 12Gb SAS HDD Backplane, 44X4104 (two controllers)
2x 2.5-inch NVMe + 8x 1.8-inch SSD: one 4x 2.5-inch NVMe PCIe Gen3 SSD Backplane, 44X4108, plus one 8x 1.8-inch HS 12Gb SAS HDD Backplane, 44X4106 (two controllers)
a. The number of controllers that is required depends on the number of drives installed: one or two drives require one controller; three or four drives require two controllers.
Figure 3-22 shows the Storage Book’s internal components and one of the RAID adapters.
Table 3-15 lists the RAID controllers, HBAs, and other hardware and feature upgrades that
are used for internal disk storage. The adapters are installed in slots in the Storage Book.
For more information, see the list of Lenovo Press Product Guides in the RAID adapters
category at this website:
http://lenovopress.com/systemx/raid
One extender adapter is required for every two NVMe drives that are installed in the Storage
Book. Because there are only two PCIe slots in the Storage Book, only two extender adapters
can be installed; therefore, only four NVMe drives can be installed in a Storage Book.
For the x3950 X6 with two Storage Books, a total of four NVMe PCIe SSD Extender Adapters
can be installed, plus a total of eight NVMe drives.
Note: The drive IDs that are assigned by IMM2 match the IDs that are indicated on the
server front bezel.
The operating system and UEFI report the HDDs that are attached to the 4x2.5-inch NVMe
PCIe Gen3 SSD backplane as PCI devices.
Withdrawn: All supported Flash Storage Adapters are now withdrawn from marketing.
00YA812^a AT7L Intel P3700 1.6TB NVMe Enterprise Performance Flash Adapter; maximum 9 / 18
00YA815^a AT7M Intel P3700 2.0TB NVMe Enterprise Performance Flash Adapter; maximum 9 / 18
For more information about the Flash Storage Adapters, see 2.6.4, “Flash Storage Adapters”
on page 47.
The I/O Books provide many of the server ports and most of the PCIe adapter slots. The
following types of I/O Books are available:
Primary I/O Book, which is a core component of the server and consists of four PCIe slots
and system logic, such as IMM2 and UEFI:
– The x3850 X6 has one Primary I/O Book standard
– The x3950 X6 has two Primary I/O Books standard
Optional I/O Books: Half-length I/O Books or Full-length I/O Books, which provide three
slots each:
– The x3850 X6 supports two optional I/O Books
– The x3950 X6 supports four optional I/O Books
The I/O Books are accessible from the rear of the server. To access the slots, release the
locking handle and pull each book out from the rear of the server.
Figure 3-24 shows the rear view of the x3850 X6 server where you can see the Primary I/O
Book and Optional I/O Books.
Figure 3-24 I/O Books accessible from the rear of the server
The Storage Book also contains two PCIe slots, as described in 3.11.1, “Storage Book” on
page 94.
The PCIe lanes that are used in the I/O Books and Storage Book are connected to installed
processors in the following configurations:
The slots in the Primary I/O Book and the Storage Book connect to processor 1 or
processor 2.
The slots in the optional I/O Book in bay 1 connect to processor 4.
The slots in the optional I/O Book in bay 2 connect to processor 3.
As shown in Figure 3-25, the use of all slots in the Primary I/O Book requires two processors
(two Compute Books) installed by using the following configuration:
Processor 1 drives a PCIe x16 slot and the ML2 slot
Processor 2 drives the other PCIe x16 slot and the PCIe x8 slot
Remember, the x3850 X6 has one Primary I/O Book, while the x3950 X6 has two Primary I/O
Books.
The Primary I/O Book also contains the following core logic:
IMM2
TPMs
Two hot-swap fan modules
Internal USB port for an embedded hypervisor
Dedicated 1 Gigabit Ethernet port for IMM2 connectivity
Four USB 2.0 ports
VGA port
Serial port
Figure 3-26 shows the Primary I/O Book location and its components.
Rather than a fixed network controller integrated on the system board, the x3850 X6 and
x3950 X6 offer a dedicated ML2 (mezzanine LOM 2) adapter slot where you can select (for
example) a 4-port Gigabit controller or a 2-port 10Gb controller from various networking
vendors, as listed in Table 3-19 on page 110. This slot also supports out-of-band connectivity
to the IMM2 management controller.
The Primary I/O Book has three PCIe 3.0 slots for optional PCIe adapters. All three slots
have a PCIe x16 physical form factor, but only PCIe slots 7 and 9 have 16 PCIe 3.0 lanes;
PCIe slot 8 operates as PCIe 3.0 x8. All PCIe slots in the Primary I/O Book support only
half-length, full-height PCIe adapters. Maximum power consumption for each PCIe slot is
75 W. To use full-length adapters, add one or two Full-length I/O Books.
Figure 3-27 Primary I/O Book removed showing the internal components
As shown in Figure 3-27, the Primary I/O Book has an internal USB port for the embedded
hypervisor.
The Primary I/O Book also has a large plastic air baffle inside (the baffle is raised on a hinge
in Figure 3-27), which routes hot air from the Storage Book through the two fans in the
Primary I/O Book.
The dedicated IMM2 systems management port is a dedicated Gigabit Ethernet port that
connects to the IMM2. This port is useful if you have an isolated management network. If you
conduct management activities over your production network, you might want instead to use
Port 1 of the ML2 adapter, which can be shared between the operating system and IMM2.
Although the x3950 X6 has two Primary I/O Books, the second video, IMM2, and internal
USB hypervisor ports are used only when the x3950 X6 is partitioned into two virtual 4-socket
servers.
As with the Primary I/O Book, the optional I/O Books are also installed from the rear side of
the server. Figure 3-28 on page 105 shows the locations of the optional I/O Books in the
x3850 X6 server.
As shown in Figure 3-25 on page 102, the optional I/O Books require the following processors
(Compute Books) to be installed:
The I/O Book in bay 1 requires processor 4 be installed
The I/O Book in bay 2 requires processor 3 be installed
The part numbers for the optional I/O Books are listed in Table 3-17.
The table also lists the maximum number of books that is supported. The x3850 X6 supports
up to two optional I/O Books, and they both can be Half-length I/O Books or both Full-length
I/O Books or one of each. Similarly, the x3950 X6 supports up to four Optional I/O Books in
any combination.
Table 3-17 Optional I/O Book part numbers
Part number / Feature code / Description / Maximum supported (x3850 X6) / Maximum supported (x3950 X6)
All slots support half-length full-height adapters (full-length adapters are not supported) and
the maximum power consumption for each PCIe slot is 75 W. Figure 3-30 shows a top-down
view of the Half-length I/O Book.
The Half-length I/O Book supports hot-swap PCIe adapters. For more information, see 3.15,
“Hot-swap adapter support” on page 108.
The Full-length I/O Book also includes two auxiliary power connectors. With the use of these
connectors and the supplied power cords, the I/O Book supports one double-wide adapter up
to 300 W. The following auxiliary power connectors are available:
One 2x4 power connector; supplies up to 150 W more power to the adapter
One 2x3 power connector; supplies up to 75 W more power to the adapter
The cables that connect to these auxiliary connectors are shipped with the Full-length I/O
Book.
The combined power consumption of all adapters that are installed in the Full-length I/O Book
cannot exceed 300 W.
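The 300 W budget can be checked with a quick sum of the power sources that are described above. The following Python sketch is an illustration only; the adapter wattage is a hypothetical value:

    # Check the power budget for adapters in a Full-length I/O Book.
    # Power sources as described above; the adapter wattage is hypothetical.

    SLOT_POWER_W = 75        # each PCIe slot supplies up to 75 W
    AUX_2X4_POWER_W = 150    # 2x4 auxiliary power connector
    AUX_2X3_POWER_W = 75     # 2x3 auxiliary power connector
    BOOK_LIMIT_W = 300       # combined limit for all adapters in the book

    def adapter_supported(adapter_w, slots_used=1, use_2x4=False, use_2x3=False):
        available = slots_used * SLOT_POWER_W
        if use_2x4:
            available += AUX_2X4_POWER_W
        if use_2x3:
            available += AUX_2X3_POWER_W
        return adapter_w <= min(available, BOOK_LIMIT_W), available

    # Example: one 300 W double-wide adapter in the first x16 slot
    ok, available = adapter_supported(300, slots_used=1, use_2x4=True, use_2x3=True)
    print(f"300 W adapter: {available} W available, supported: {ok}")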
Note: The 2x3 connector is intended to be used only when one adapter is installed in the
first x16 slot (the uppermost slot that is shown in Figure 3-31), which requires 225 W or
300 W of power. The location of the 2x3 connector prevents an adapter from being
installed in the other x16 slot.
Figure 3-32 shows the Full-length I/O Book with Intel Xeon Phi coprocessor installed.
Figure 3-32 Full-length I/O Book with Intel Xeon Phi card installed
The Half-length I/O Book installs flush with the Primary I/O Book at the rear of the server.
When installed, the Full-length I/O Book adds a 99 mm (4 in) mechanical extension to the
base length dimension of the chassis.
Figure 3-33 shows a Full-length I/O Book and a Half-length I/O Book installed in the server.
Figure 3-33 I/O Books installed in the x3850 X6
The Full-length I/O Book supports hot-swap PCIe adapters. For more information, see 3.15,
“Hot-swap adapter support” on page 108.
A Half-length I/O Book cannot be hot-swapped with a Full-length I/O Book and a
Full-length I/O Book cannot be hot-swapped with a Half-length I/O Book. A restart is
required.
Only certain adapters support hot-swap. Table 3-18 on page 109 lists the adapters that
supported hot-swap at the time of this writing.
00D8540 A4M9 Emulex Dual Port 10GbE SFP+ VFA III-R for System x^a
49Y7960 A2EC Intel X520 Dual Port 10GbE SFP+ Adapter for System x
49Y7970 A2ED Intel X540-T2 Dual Port 10GBaseT Adapter for System x
49Y4230 5767 Intel Ethernet Dual Port Server Adapter I340-T2 for System x
49Y4240 5768 Intel Ethernet Quad Port Server Adapter I340-T4 for System x
As described in 3.3, “Standard models of X6 servers” on page 60, Models B3x, F3x, and F4x
include the Broadcom NetXtreme II ML2 Dual Port 10GbE SFP+ adapter as standard. All
other standard models include the Intel I350-T4 ML2 Quad Port GbE Adapter (I350-AM4
based).
The Broadcom NetXtreme II ML2 Dual Port 10GbE SFP+ Adapter has the following
specifications:
Dual-port 10 Gb Ethernet connectivity
Broadcom BCM57810S ASIC
SFP+ ports that support fiber optic and direct-attach copper (DAC) cables
For more information about this adapter, see Broadcom NetXtreme 10 GbE SFP+ Network
Adapter Family for System x, TIPS1027, which is available at this website:
http://lenovopress.com/tips1027
The Intel I350-T4 ML2 Quad Port GbE Adapter has the following specifications:
Quad-port 1 Gb Ethernet connectivity
Intel I350-AM4 ASIC
RJ45 ports for copper cables
The supported ML2 adapters are listed in Table 3-19 on page 110.
25 Gb Ethernet
10 Gb Ethernet
00AG560 AT7U Emulex VFA5.2 ML2 Dual Port 10GbE SFP+ Adapter; maximum 1 / 2
94Y5200 AS74 Intel X710 ML2 4x10GbE SFP+ Adapter for System x; maximum 1 / 2
Gigabit Ethernet
InfiniBand
The server also supports various other Ethernet and InfiniBand network adapters, as listed in
Table 3-20. The maximum quantity that is listed is for configurations with all processors and
I/O books installed.
100 Gb Ethernet
40 Gb Ethernet
00D9550 A3PN Mellanox ConnectX-3 40GbE/ FDR IB VPI Adapter for System x No 9 / 18
25 Gb Ethernet
Part number / Feature code / Description / Hot-swap capable / Maximum supported^a
10 Gb Ethernet
00AG580 AT7T Emulex VFA5.2 2x10 GbE SFP+ Adapter and FCoE/iSCSI SW No 9 / 18
00JY820 A5UT Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x No 9 / 18
00JY830c A5UU Emulex VFA5 2x10 GbE SFP+ Adapter and FCoE/iSCSI SW for System x No 9 / 18
00JY824 A5UV Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD) (upgrade for 00AG570 or 00JY820) License 9 / 18
00D8540c A4M9 Emulex Dual Port 10GbE SFP+ VFA III-R for System x Yes 9 / 18
49Y7960 A2EC Intel X520 Dual Port 10GbE SFP+ Adapter for System x Yes 9 / 18
49Y7970 A2ED Intel X540-T2 Dual Port 10GBaseT Adapter for System x Yes 9 / 18
90Y4600c A3MR QLogic 8200 Dual Port 10GbE SFP+ VFA for System x No 9 / 18
00Y5624c A3MT QLogic 8200 VFA FCoE/iSCSI license for System x (FoD) (upgrade for 90Y4600) License 9 / 18
Gigabit Ethernet
90Y9370 A2V4 Broadcom NetXtreme I Dual Port GbE Adapter for System x No 9 / 18
90Y9352 A2V3 Broadcom NetXtreme I Quad Port GbE Adapter for System x No 9 / 18
49Y4230c 5767 Intel Ethernet Dual Port Server Adapter I340-T2 for System x Yes 9 / 18
49Y4240c 5768 Intel Ethernet Quad Port Server Adapter I340-T4 for System x Yes 9 / 18
00AG500 A56K Intel I350-F1 1xGbE Fibre Adapter for System x Yes 9 / 18
00AG510 A56L Intel I350-T2 2xGbE BaseT Adapter for System x Yes 9 / 18
00AG520 A56M Intel I350-T4 4xGbE BaseT Adapter for System x Yes 9 / 18
InfiniBand
00D9550 A3PN Mellanox ConnectX-3 40GbE/ FDR IB VPI Adapter for System x No 9 / 18
00KH924 ASWQ Mellanox ConnectX-4 EDR IB VPI Single-port x16 PCIe 3.0 HCA No 4 / 8
00WE027 AU0B Intel OPA 100 Series Single-port PCIe 3.0 x16 HFA No 3/6
00WE023 AU0A Intel OPA 100 Series Single-port PCIe 3.0 x8 HFA No 2/4
a. Quantities for x3850 X6 / x3950 X6
b. Not supported in servers with E7 v2 compute books
c. Withdrawn from marketing
d. Only supported in servers with E7 v4 compute books
For more information, see the list of Lenovo Press Product Guides in the Networking
adapters category that is available at this website:
http://lenovopress.com/systemx/networkadapters
Table 3-21 lists the supported RAID controller and HBA for external storage connectivity.
Table 3-21 SAS HBAs, RAID controllers and options for external disk storage expansion
Part number / Feature code / Description / Maximum supported (x3850 X6) / Maximum supported (x3950 X6)
SAS HBAs
RAID adapters
Table 3-22 compares the specifications of the external SAS HBAs and RAID adapters.
Adapter type SAS HBA SAS HBA SAS HBA RAID adapter RAID adapter
Form factor Low profile Low profile Low profile Low profile Low profile
Controller chip LSI SAS2308 LSI SAS3008 LSI SAS3008 LSI SAS2208 LSI SAS3108
Host interface 6 Gbps SAS 12 Gbps SAS 12 Gbps SAS 6 Gbps SAS 12 Gbps SAS
Number of external ports 8 8 8 8 8
Drive interface SAS, SATA SAS, SATA SAS, SATA SAS, SATA SAS, SATA
Drive type HDD, SSD HDD, SSD HDD, SSD HDD, SED, SSD HDD, SED, SSD
Cache upgrade required: The ServeRAID M5120 SAS/SATA Controller ships standard
without a cache. One of the available cache upgrades (part number 81Y4487 or 81Y4559)
is required for M5120 adapter operation, and it must be purchased with the controller.
For more information about the adapters, see these Lenovo Press Product Guides:
Fibre Channel: 16 Gb
Fibre Channel: 8 Gb
The GPUs and coprocessors have server memory minimums and maximums, as indicated in
Table 3-24.
Not available via CTO: These adapters are not available via CTO and cannot be shipped
installed in the server. The adapters cannot be shipped installed because they are installed
in the Full-length I/O Book, which extends beyond the rear of the chassis (see Figure 3-33
on page 108). These adapters must be shipped separately from the server.
When partitioning is enabled, the 8-socket server is seen by the operating systems as two
independent 4-socket servers, as shown in Figure 3-34.
For details about how to implement partitioning, see 5.7, “Partitioning the x3950 X6” on
page 205.
3.21.1 Integrated Management Module II
Each X6 server has an IMM2 (version 2.1) service processor onboard. The IMM2 provides
the following standard major features:
IPMI v2.0 compliance
Remote configuration of IMM2 and UEFI settings without the need to power on the server
Remote access to system fan, voltage, and temperature values
Remote IMM and UEFI update
UEFI update when the server is powered off
Remote console by way of a serial over LAN
Remote access to the system event log
Predictive failure analysis and integrated alerting features (for example, by using Simple
Network Management Protocol, SNMP)
Remote presence, including remote control of the server by using a Java or ActiveX client
Operating system failure window (blue screen) capture and display through the web
interface
Virtual media that allow the attachment of a diskette drive, CD/DVD drive, USB flash drive,
or disk image to a server
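Because the IMM2 is IPMI 2.0 compliant, standard IPMI tools can read the sensors and the event log out of band. The following Python sketch calls the open source ipmitool utility; the host name and credentials are placeholders, and the sketch illustrates the standard interface rather than a Lenovo-specific procedure:

    # Read IMM2 sensors and the system event log over the LAN by using standard
    # IPMI 2.0. Requires the ipmitool utility; the host and credentials are placeholders.
    import subprocess

    IMM_HOST = "imm-hostname-or-ip"   # placeholder
    IMM_USER = "USERID"               # placeholder
    IMM_PASSWORD = "PASSW0RD"         # placeholder

    def ipmi(*args):
        cmd = ["ipmitool", "-I", "lanplus", "-H", IMM_HOST,
               "-U", IMM_USER, "-P", IMM_PASSWORD, *args]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    print(ipmi("sdr", "list"))   # fan, voltage, and temperature readings
    print(ipmi("sel", "list"))   # system event log entries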
For more information about the IMM2, see 2.8, “Integrated Management Module” on page 49.
For more information about configuring the IMM2, see 5.1, “Configuring the IMM2 settings” on
page 156.
The use of all of the features of UEFI requires a UEFI-aware operating system and
adapters. UEFI is fully backward compatible with legacy BIOS.
For more information about UEFI, see 2.7, “Unified Extensible Firmware Interface” on
page 49 and the Lenovo white paper, Introducing UEFI-Compliant Firmware on System x and
BladeCenter Servers, which is available at this address:
https://support.lenovo.com/docs/UM103225
For more information about the UEFI menu setup, see 5.2, “Unified Extensible Firmware
Interface settings” on page 165 and 5.3, “UEFI common settings” on page 169.
Full disk encryption applications, such as the BitLocker Drive Encryption feature of Microsoft
Windows Server 2008, can use this technology. The operating system uses it to protect the
keys that encrypt the computer’s operating system volume and provide integrity
authentication for a trusted boot pathway (BIOS, boot sector, and others). Several vendor
full-disk encryption products also support the TPM chip.
For more information about this technology, see the Trusted Computing Group (TCG) TPM
Main Specification at this website:
http://www.trustedcomputinggroup.org/resources/tpm_main_specification
If an error occurs, check the LEDs on the front operator panel first. If the check log LED is lit,
check the IMM event log or the system-event log. If the system-error LED is lit, find the
component with a lit LED. Figure 3-35 shows the location of LEDs on the front operator panel.
For example, if a Compute Book error occurs, the Error LED on the appropriate Compute
Book should light up. Figure 3-36 shows the LED locations on the Compute Book.
Figure 3-36 Compute Book LEDs location
If the Compute Book has a memory error, appropriate LEDs on the Compute Book’s system
board can help to determine the failed DIMM. To use this feature, you must disconnect the
server from power, remove the Compute Book from the server, press the Light path button on
the front panel of the Compute Book, and find the lit LED on the system board. The DIMM
LED that is lit indicates the failed memory module. Figure 3-37 shows the location of the
DIMM LEDs on one side of the Compute Book.
In addition to the light path diagnostic LEDs, you can use the LCD system information display
on the front operator panel, which displays errors from the IMM event log.
For more information about the LCD system information display and the front operator panel,
see 3.6.2, “LCD system information panel” on page 72.
Figure 3-38 Internal USB port location for embedded hypervisor (view from above)
00WH140 ATRM Blank USB Memory Key 4G SLC for VMware ESXi Downloads (4GB capacity) 1/2
41Y8298 A2G0 Blank USB Memory Key for VMware ESXi Downloads (2GB capacity) 1/2
00ML235 ASN7 USB Memory Key for VMware ESXi 5.5 Update 2 1/2
00WH150b ATZG USB Memory Key for VMware ESXi 5.5 Update 3B 1/2
00WH138 ATRL USB Memory Key 4G for VMware ESXi 6.0 Update 1A 1/2
00WH151b ATZH USB Memory Key for VMware ESXi 6.0 Update 2 1/2
CTO only AVNW USB Memory Key for VMware ESXi 6.5 1/2
a. Two hypervisor keys are supported by only the x3950 X6 and only if the x3950 X6 is configured
to be partitioned, where the two halves of the server operate as two independent four-socket
servers. CTO orders can include only one hypervisor key.
b. Not supported in servers with E7 v2 compute books
For more information about the use of the embedded hypervisor, see 6.4, “Use of embedded
VMware ESXi” on page 239.
Hot-swap components have orange handles or touch points. Orange tabs are found on fan
modules, power supplies, extra I/O Books, and disk drives. The orange designates that the
items are hot-swap, and can be removed and replaced while the chassis is powered.
Touch points that are blue cannot be hot-swapped; the server must be powered off before
removing these devices. Blue touch points can be found on components, such as Compute
Books, Primary I/O Book, and Storage Book.
Compute Book: no (not hot-swap)
Storage Book: no (not hot-swap)
For more information about the hot-swap capabilities of the Optional I/O Books, see 3.15,
“Hot-swap adapter support” on page 108.
44X4150 A54D 1400W HE Redundant Power Supply for altitudes above 5000 meters; 2, 4 (x3850 X6) / 4, 8 (x3950 X6); All HANA models
Each installed AC power supply ships standard with one 2.8 m C13 - C14 power cord.
Use the Power Configurator that is available at the following website to determine the power
that your server needs:
http://support.lenovo.com/documents/LNVO-PWRCONF
For more information about power supplies and power planning, see 4.5, “Power guidelines”
on page 136.
For additional information about powering and cooling the X6 system, see the power and
cooling reference guides at:
http://support.lenovo.com/documents/LNVO-POWINF
For assistance with power and cooling for the X6 server, email power@lenovo.com.
Figure 3-39 shows the 1400 W AC power supply rear view and highlights the LEDs. There is
a handle for removal and insertion of the power supply.
The power supply has a pull handle, a removal latch, and three status LEDs: AC, DC, and Fault.
Total output power is only 95% additive because of loss from current sharing (for example,
two 1400 W supplies provide 2660 W instead of 2800 W).
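The same 95% derating can be applied to any combination of supplies that share current. A minimal Python sketch of the calculation that is used in the example above:

    # Combined output power of power supplies that share current,
    # derated 5% as described above.

    def combined_output_w(supplies_w, derating=0.95):
        return sum(supplies_w) * derating

    print(combined_output_w([1400, 1400]))                # 2660 W, matches the example
    print(combined_output_w([1400, 1400, 1400, 1400]))    # 5320 W for four 1400 W supplies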
The power supply supports EPA and 80 PLUS Platinum certifications. The 80 PLUS standard
has several ratings, such as Bronze, Silver, Gold, and Platinum. To meet the 80 PLUS Platinum
standard, the power supply must have a power factor (PF) of 0.95 or greater at 50% rated load
and efficiency equal to or greater than the following values (a worked example follows this list):
90% at 20% of rated load
94% at 50% of rated load
91% at 100% of rated load
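These efficiency points can be converted into the expected input (wall) power for planning purposes. The following Python sketch applies the Platinum figures that are listed above to a 1400 W power supply:

    # Input power drawn from the wall at the 80 PLUS Platinum load points
    # listed above, for one 1400 W power supply.

    RATED_OUTPUT_W = 1400
    PLATINUM_EFFICIENCY = {0.20: 0.90, 0.50: 0.94, 1.00: 0.91}

    for load_fraction, efficiency in PLATINUM_EFFICIENCY.items():
        output_w = RATED_OUTPUT_W * load_fraction
        input_w = output_w / efficiency
        print(f"{int(load_fraction * 100)}% load: {output_w:.0f} W out, "
              f"about {input_w:.0f} W in, {input_w - output_w:.0f} W lost as heat")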
The four power supply bays are divided into two power domains to support N+N power supply
and power feed redundancy, where N = 1 or 2 (depending upon system configuration and
load). Power supplies that are installed in bays 1 and 3 belong to Group A; power supplies
that are installed in bays 2 and 4 belong to Group B.
The x3850 X6 server supports the following modes of redundancy based on the power supply
configuration, system load, and the Power Policy configuration controlled by the IMM:
Non-redundant
Fully system redundant
Redundant with reduced performance (throttling)
The IMM2 must be used to set and change the Power Policy and System Power
configurations. The power configurations and policies can be changed via the web interface,
CIM, and ASU interfaces. These settings cannot be changed by UEFI. The default
configuration setting for AC and DC models is Non-Redundant with Throttling enabled.
Fan packs 1 - 8 are in front of the Compute Books that are numbered left to right in
Figure 3-40, looking at the front of the server. Fan packs 9 and 10 are in the rear of the server,
numbered left to right in Figure 3-40.
Fan speed for all fan zones is controlled by several parameters, such as the inlet ambient
temperature and the CPU, DIMM, and PCIe card temperatures.
Figure 3-41 shows one of the Compute Book fans and Primary I/O Book fan (for Storage
Book cooling).
When the need arises to upgrade the server to six or eight processors, purchase more
Compute Books with the same processor model and more power supplies (as determined by
using the Power Configurator), I/O Books, adapters, and drives as needed.
As part of the RPQ, a service engineer comes onsite with the new mechanical chassis and
performs the field upgrade by transferring all components to the new chassis. This method
also requires that the x3850 X6 Compute Books use the same E7-8800 processors as ordered for
the RPQ; however, in this scenario, the server maintains the original serial number.
Use of E7-4800 processors: Intel Xeon E7-4800 v2, v3, and v4 processors cannot be
used in an x3950 X6. If your x3850 X6 has Compute Books with E7-4800 processors,
these components must be replaced with Compute Books with E7-8800 v2, v3, or v4
processors if you plan to upgrade to an x3950 X6. In this instance, memory may also need
to be replaced. Refer to 3.9, “Memory” on page 84 for additional information.
For this method, submit an RPQ for assessment and pricing. The x3850 X6 configuration is
evaluated and recommendations are made based on the workload requirements.
The major parts of the 4U to 8U upgrade are the 8U chassis, Storage Book, and Primary I/O
Book. All of the components in the package are installed in the top portion of the chassis. The
4U system’s components are transferred to the bottom section of the chassis.
Although this upgrade requires a new 8U chassis replacing the 4U chassis, most of the
internal components can be moved from the x3850 X6 to the x3950 X6.
The following x3850 X6 components can be migrated to the x3950 X6 as part of the RPQ
upgrade:
Compute Books if they use Intel Xeon E7-8800 processors
All memory DIMMs
Storage Book
All internal drives
Primary I/O Book (and associated fans)
Half-length I/O Books
Full-length I/O Books
All adapters
All power supplies
The RPQ upgrade might also require the following new parts:
More Compute Books (a minimum of four Compute Books required in the x3950 X6)
More power supplies (a minimum of four are required in the x3950 X6)
More I/O Books, network adapters, drives as needed
Consider the following key points regarding this upgrade:
Intel Xeon E7-4800 processors cannot be used in an x3950 X6. If your x3850 X6 has
Compute Books with E7-4800 v2, v3, or v4 processors, these components must be
replaced with Compute Books with E7-8800 v2, v3, or v4 processors if you plan to
upgrade to an x3950 X6. The memory in the Compute Books can be reused in the x3950
X6 if you are using the same series processor.
All processors that are used in the x3950 X6 must be identical; for example, all E7-8850
v4 processors. A minimum of four processors are required.
The upgrade results in the following parts no longer being used (“parts on the floor”):
– Existing 4U chassis and 4-socket midplane
– Compute Books that are based on E7-4800 processors
To minimize the number of parts that cannot be used in the upgraded system, the original
x3850 X6 should be configured with Compute Books that include E7-8800 processors.
Because many standard models of the x3850 X6 (see Table 3-2 on page 60) contain
E7-4800 processors, you might need to use CTO (by using a configurator, such as
x-config) or order the server by using Special Bid to create a server configuration with
E7-8800 processors.
The RPQ upgrade process also involves transferring the x3850 X6 serial number to the
x3950 X6 chassis. This transfer makes the upgrade simpler from an asset or depreciation
management perspective. This transfer also means that the old 4U chassis is retired
because it does not have a valid serial number.
Ideally, all power supplies are the 1400 W variant or a combination of 1400 W and 900 W units.
Regardless of the selection, the power supply coexistence rules that are described in
3.24, “Power subsystem” on page 122 must be followed.
Depending on your workload and configuration, you might need to provision for more PDU
outlets, cables, and power capacity for your x3950 X6 server. Use the Lenovo Power
Configurator to determine your total power draw to assist you in provisioning adequate
power. The Power Configurator is available at this website:
http://support.lenovo.com/documents/LNVO-PWRCONF
Another 4U of rack space is required when the x3850 X6 is upgraded to the x3950 X6 for
a total of 8U of rack space.
To upgrade the x3850 X6 server to a x3950 X6 server, you must allow for downtime. The
server must be powered off and have some of its components removed for reinstallation
into the new x3950 X6 server.
In this chapter, we describe infrastructure planning and considerations. This chapter includes
the following topics:
4.1, “Physical and electrical specifications” on page 130
4.2, “Rack selection and rack options” on page 131
4.3, “Floor clearance” on page 134
4.4, “Using Rear Door Heat eXchanger” on page 135
4.5, “Power guidelines” on page 136
4.6, “Cooling considerations” on page 145
4.7, “Uninterruptible power supply units” on page 146
4.8, “PDU and line cord selection” on page 147
Electrical: Models with 750 W DC power supplies:
– -40 to -75 VDC
– Input kilovolt-amperes (kVA) (approximately):
• Minimum configuration: 0.16 kVA
• Maximum configuration: 1.7 kVA
BTU output:
– Minimum configuration: 546 Btu/hr (160 watts)
– Maximum configuration: 10,912 Btu/hr (3,200 watts)
Noise level:
– 6.6 bels (operating)
– 6.4 bels (idle)
The server supports the rack cabinets that are listed in Table 4-1.
Full-length I/O Book: As indicated in Table 4-1, some racks are not deep enough to
support the servers with Full-length I/O Books installed.
For more information, see the list of Lenovo Press Product Guides in the Rack cabinets and
options category at this website:
http://lenovopress.com/systemx/rack
The server supports the rack console switches and monitor kits that are listed in Table 4-2.
Console keyboards
Part number  Feature code  Description
46W6712 A50G Keyboard w/ Int. Pointing Device USB - US Eng 103P RoHS v2
46W6713 A50H Keyboard w/ Int. Pointing Device USB - Arabic 253 RoHS v2
46W6714 A50J Keyboard w/ Int. Pointing Device USB - Belg/UK 120 RoHS v2
46W6715 A50K Keyboard w/ Int. Pointing Device USB - Chinese/US 467 RoHS v2
46W6716 A50L Keyboard w/ Int. Pointing Device USB - Czech 489 RoHS v2
46W6717 A50M Keyboard w/ Int. Pointing Device USB - Danish 159 RoHS v2
46W6718 A50N Keyboard w/ Int. Pointing Device USB - Dutch 143 RoHS v2
46W6719 A50P Keyboard w/ Int. Pointing Device USB - French 189 RoHS v2
46W6720 A50Q Keyboard w/ Int. Pointing Device USB - Fr/Canada 445 RoHS v2
46W6721 A50R Keyboard w/ Int. Pointing Device USB - German 129 RoHS v2
46W6722 A50S Keyboard w/ Int. Pointing Device USB - Greek 219 RoHS v2
46W6723 A50T Keyboard w/ Int. Pointing Device USB - Hebrew 212 RoHS v2
46W6724 A50U Keyboard w/ Int. Pointing Device USB - Hungarian 208 RoHS v2
46W6725 A50V Keyboard w/ Int. Pointing Device USB - Italian 141 RoHS v2
46W6726 A50W Keyboard w/ Int. Pointing Device USB - Japanese 194 RoHS v2
46W6727 A50X Keyboard w/ Int. Pointing Device USB - Korean 413 RoHS v2
46W6728 A50Y Keyboard w/ Int. Pointing Device USB - LA Span 171 RoHS v2
46W6729 A50Z Keyboard w/ Int. Pointing Device USB - Norwegian 155 RoHS v2
46W6730 A510 Keyboard w/ Int. Pointing Device USB - Polish 214 RoHS v2
46W6731 A511 Keyboard w/ Int. Pointing Device USB - Portugese 163 RoHS v2
46W6732 A512 Keyboard w/ Int. Pointing Device USB - Russian 441 RoHS v2
46W6733 A513 Keyboard w/ Int. Pointing Device USB - Slovak 245 RoHS v2
46W6734 A514 Keyboard w/ Int. Pointing Device USB - Spanish 172 RoHS v2
46W6735 A515 Keyboard w/ Int. Pointing Device USB - Swed/Finn 153 RoHS v2
46W6736 A516 Keyboard w/ Int. Pointing Device USB - Swiss F/G 150 RoHS v2
46W6737 A517 Keyboard w/ Int. Pointing Device USB - Thai 191 RoHS v2
46W6738 A518 Keyboard w/ Int. Pointing Device USB - Turkish 179 RoHS v2
46W6739 A519 Keyboard w/ Int. Pointing Device USB - UK Eng 166 RoHS v2
46W6740 A51A Keyboard w/ Int. Pointing Device USB - US Euro 103P RoHS v2
46W6741 A51B Keyboard w/ Int. Pointing Device USB - Slovenian 234 RoHS v2
Console switches
Console cables
For more information, see the list of Lenovo Press Product Guides in the Rack cabinets and
options category at this website:
http://lenovopress.com/systemx/rack
When the server is mounted in a rack, it is on non-sliding rails and fixed to the rack. There is
no need for extra floor clearance to pull the server out of the rack for maintenance or
upgrades because all serviceable components and all components that can be upgraded are
accessed from the front or rear of the server.
Having components that are accessed from the front or rear without sliding the server is
beneficial for the following reasons:
The modular design makes the system easier to service because you need to pull only the
affected subsystem without having to pull the entire server out from the rack.
Because there is no requirement to slide the server in and out of the rack, there are no
cable management arms with which to be concerned.
The system is easier to install because you can remove all pluggable parts to lower the weight
when you are lifting it into the rack.
The system is easier to upgrade from 2S to 4S by adding Compute Books.
Adding I/O is easier because you can hot-swap or hot-add an Optional I/O Book to add
adapters, or add another I/O Book to the server.
Adding memory is easier: remove the appropriate Compute Book, add the memory, and then
reinsert the Compute Book.
After the server is installed in the rack, the only floor clearance you need is for pulling out or
installing pluggable components, such as the Compute Book, Storage Book, I/O Books, or
power supplies.
The Rear Door Heat eXchanger performance chart plots the percentage of heat removal against the
water flow rate (4 - 14 gpm) for water temperatures from 12°C to 24°C, assuming a rack power of
30,000 W, an air inlet temperature of 27°C, and an airflow of 2,500 cfm.
The ordering information for the rear door heat exchanger is listed in Table 4-3.
Table 4-3 Part number for the Rear Door Heat eXchanger for the 42U 1100 m rack
Part number Description
175642X Rear Door Heat eXchanger for 42U 1100 mm Enterprise V2 Dynamic Racks
For more information, see the Rear Door Heat eXchanger V2 Type 1756 Installation and
Maintenance Guide, which is available at this website:
https://support.lenovo.com/docs/UM103398
4.5.1 Considerations
When you are planning your power source for an x3850 X6 or an x3950 X6 system rack
installation, consider the following variables:
Input voltage range: 100 - 120 VAC or 200 - 240 VAC
Power Distribution Unit (PDU) input: Single-phase or three-phase
Power redundancies: AC source feed (power feed) N+N, power supply N+1, or power
supply N (no redundancy)
PDU control: Switched and monitored, monitored, or non-monitored
Hardware: Quantity of components and component power draw
The following examples provide guidance about selecting PDUs, power input line cords, and
PDU to server power jumper cord connections.
The following approaches can be used to provision power:
Provision to the label rating of the power supplies so that any configuration can be
supported; this approach covers all hot-swap components that can be added later.
Provision to the maximum, calculated, or observed power that the systems can draw.
Note: The official power planning tool is the Power Configurator. You can determine the
total power draw of your server configuration with this tool. This tool will validate N+N
redundancy based on your particular configuration. For more information, see this website:
http://support.lenovo.com/documents/LNVO-PWRCONF
For assistance with selecting appropriate PDUs to configure, see the PDU planning guides
that are available at this website:
http://support.lenovo.com/documents/LNVO-POWINF
For assistance with rack, power, thermal and mechanical, and quoting appropriate PDU and
UPSs for this system, email the Lenovo power team at power@lenovo.com.
The power supply bays are numbered from left to right when viewed from the rear of the
chassis, as shown in Figure 4-3. Power supplies that are installed in bays 1 and 3 belong to
Group A (red); power supplies that are installed in bays 2 and 4 belong to Group B (blue).
Figure 4-4 shows the power supply bay numbering for the x3950 X6.
The x3850 X6 and x3950 X6 support the following modes of redundancy based on the power
supply configuration, system load, and the Power Policy configuration that is controlled by the
Integrated Management Module 2 (IMM2):
Non-redundant
Fully system redundant
Redundant with reduced performance (throttling)
The default configuration setting for AC and DC models is Non-Redundant with Throttling
enabled.
You can set and change the Power Policy and System Power Configurations by using the
IMM2 web interface. The power configurations and policies can also be changed via the
Advanced Settings Utility (ASU) and Common Information Model (CIM) interfaces. These settings
cannot be changed through the Unified Extensible Firmware Interface (UEFI) Setup utility.
For more information about how to connect to the IMM2, see 7.2, “Integrated Management
Module II” on page 244. For information about the power policy in the IMM2 and changing the
settings, see 4.5.5, “Power policy” on page 141.
Depending on your load and power requirements, the x3850 X6 server supports the following
power supply installations:
One 900 W
One 1400 W
Two 900 W
Two 1400 W
Two 900 W and two 1400 W
Four 900 W
Four 1400 W
Four 750 W DC
There are two different 1400 W power supply units available: a standard 1400 W power
supply unit (PSU) and a high-altitude 1400 W PSU. The high-altitude PSU is used when the
server operates at an altitude above 5000 meters. Mixing the high-altitude PSUs with the
standard 1400 W or 900 W PSUs is not advised because this configuration nullifies the
high-altitude capabilities.
Table 4-4 lists the different power supply installation configurations, policies, and level of
redundancy that is achieved, depending on the type and number of power supplies that are
installed.
Figure 4-5 shows a correctly balanced and redundant system with four 900 W power supplies
installed that are connected to two separate feeds.
If a mix of 900 W and 1400 W power supplies is used, ensure that power input feeds 1 and 2
have a mix of wattage. This configuration ensures source redundancy and is the only way
that mixed power supplies are supported.
The following power feed configuration is supported for mixed power supplies:
Power supply bay 1: 1400 W, Power feed 1
Power supply bay 2: 900 W, Power feed 2
Power supply bay 3: 900 W, Power feed 1
Power supply bay 4: 1400 W, Power feed 2
The power supplies are installed in the following order for the x3950 X6:
1. Power supply bays 3, 2, 7 and 6
2. Power supply bays 1, 4, 5 and 8
The following rules apply when power supplies are installed in the x3850 X6:
For one power supply configuration, install the power supply in bay 3. This configuration
does not support any form of power supply redundancy. A power supply filler must be
installed in bays 1, 2, and 4.
For two power supply configurations, install the power supplies in bays 2 and 3, with each
power supply on separate input power feeds. Ensure that both power supplies are of the
same type in terms of wattage and both must be AC or DC.
A configuration of three power supplies is not supported.
The following rules apply when power supplies are installed in the x3950 X6:
For four power supply configurations, install the power supplies in bays 2, 6, 3, and 7 with
each power supply on separate input power feeds. Power supplies must be all AC or all
DC. Pairs of power supplies must be the same wattage.
Only configurations of four or eight power supplies are supported. Any other combination
(1, 2, 3, 5, 6, and 7) is not supported.
The default configuration setting for AC and DC models is Non-Redundant with Throttling
enabled. The power configurations and policies can be changed via the IMM2, CIM, and ASU
interfaces.
This section describes changing the Power Policy by using the IMM2 interface.
3. Select the policy that you want to implement on your system and click OK to implement
that policy.
Warning: Select only a Power Policy that meets the hardware configuration of your
system. For example, you cannot select 2+2 redundancy with only two power supplies
installed. This selection can result in your system not starting.
4.5.6 More power settings in the IMM2
In this section, we describe the power capping and power allocation information you can set
and view in the Server Power Management page via the IMM2 to help you manage your
systems power usage.
Power capping
From the Policies tab, you can also set power capping. To set the overall power limit on your
server, click Change under the Power Limiting/Capping Policy section. Figure 4-8 shows the
power capping window in the IMM2.
Without capping enabled, the maximum power limit is determined by the active power
redundancy policy that is enabled.
Warning: With power capping enabled, the component is not permitted to power on in a
situation where powering on a component exceeds the limit.
The Power Supply Utilization graph that is shown in Figure 4-9 on page 144 shows the
theoretical maximum amount of power that all components together might use and the
remaining available capacity.
The Current DC Power Consumption graph displays the theoretical power consumption of the
individual components in your system, including memory, CPU, and Others.
The Power History tab also shows a representation of your server’s power consumption over
a selected period, as seen in Figure 4-10.
You can download the Power Configurator and information about UPS runtimes for your
power load from this web page:
http://support.lenovo.com/documents/LNVO-PWRCONF
The x3850 X6 and x3950 X6 support the attachment to UPS units that are listed in Table 4-5.
For more information, see the list of Lenovo Press Product Guides in the UPS category:
https://lenovopress.com/servers/options/ups
This section describes the power supply unit to PDU line cord options that are available to
connect your X6 power supplies to your PDU source.
39M5375 6201 1.5m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
39M5377 6311 2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
39M5392 6204 2.8m, 10A/100-250V, C13 to IEC 320-C20 Rack Power Cable
39M5378 6263 4.3m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
47C2487a A3SS 1.2m, 10A/100-250V, 2 Short C13s to Short C14 Rack Power Cable
47C2488a A3ST 2.5m, 10A/100-250V, 2 Long C13s to Short C14 Rack Power Cable
47C2489a A3SU 2.8m, 10A/100-250V, 2 Short C13s to Long C14 Rack Power Cable
47C2490a A3SV 4.1m, 10A/100-250V, 2 Long C13s to Long C14 Rack Power Cable
47C2491a A3SW 1.2m, 16A/100-250V, 2 Short C13s to Short C20 Rack Power Cable
47C2492a A3SX 2.5m, 16A/100-250V, 2 Long C13s to Short C20 Rack Power Cable
47C2493a A3SY 2.8m, 16A/100-250V, 2 Short C13s to Long C20 Rack Power Cable
47C2494a A3SZ 4.1m, 16A/100-250V, 2 Long C13s to Long C20 Rack Power Cable
a. This cable is a Y cable that connects to two identical power supply units (PSUs). Y cables can be used with 900 W
PSUs only. All other power supplies are not compatible with Y cables.
Because of the location of the power supplies at the back of the server, power cables with
right angle plugs do not physically fit into the power supply connector; therefore, right-angle
plug connectors are not supported.
46M4002 1U 9 C19/3 C13 AEM 40K9614 1ph 200 - 240 V 30 A (24 A) NEMA L6 30P 9 / C19
DPI PDU 3 / C13
40K9615 1ph 200 -240 V 60 A (48 A) IEC 309 2P+G
46M4003 1U 9 C19/3 C13 AEM Attached 3ph 208 V 60 A (27.7 IEC 309 3P+G 9 / C19
60A 3-Phase PDU A/ph) 3 / C13
46M4004 1U 12 C13 AEM DPI 40K9614 1ph 200 - 240 V 30 A (24 A) NEMA L6 30P 12 / C13
PDU
40K9615 1ph 200 - 240 V 60 A (48 A) IEC 309 2P+G
46M4005 1U 12 C13 AEM 60A Attached 3ph 208 V 60 A (27.7 IEC 309 3P+G 12 / C13
3-Phase PDU A/ph)
Part number  Description  Line cord part number  Phase  Volts (V)  Line cord rating (derated)  Line cord plug  Number / type of outlets
46M4167 1U 9 C19/3 C13 Attached 3ph 208 V 30 A (13.85 NEMA L21-30P 9 / C19
Switched and A/ph) 3 / C13
Monitored 30A
3-Phase PDU
46M4116 0U 24 C13 Switched Attached 1ph 200 - 240 V 30 A (24 A) NEMA L6 30P 24 / C13
and Monitored 30A
PDU
46M4137 0U 12 C19/12 C13 Attached 3ph Y 380 - 415 V 32 A (32A/ph) IEC 309 3P+N+G 12 / C19
Switched and 12 / C13
Monitored 32A
3-Phase PDU
46M4002 1U 9 C19/3 C13 AEM 40K9612 1ph 220 -240 V 32 A IEC 309 P+N+G 9 / C19
DPI PDU 3 / C13
40K9613 1ph 220 - 240 V 63 A IEC 309 P+N+G
46M4004 1U 12 C13 AEM DPI 40K9612 1ph 220 - 240 V 32 A IEC 309 P+N+G 12 / C13
PDU
40K9613 1ph 220 - 240 V 63 A IEC 309 P+N+G
00YJ782 0U 18 C13/6 C19 Attached 3ph Y 200-240V, 32 A (32 A/ph) IEC60309 532P6 6 / C19
Switched and 340-415V 18 / C13
Monitored 32A 3
Phase PDU
71762NX Ultra Density 40K9614 1ph 200 - 240 V 30 A (24 A) NEMA L6-30P 9 / C19
Enterprise PDU C19 3 / C13
PDU 40K9615 1ph 200 - 240 V 60 A (48 A) IEC 309 2P+G
71763MU Ultra Density Attached 3ph 208 V 60 A (27.7 IEC 309 2P+G 9 / C19
Enterprise PDU C19 A/ph) 3 / C13
3-Phase 60A PDU+
Monitored
71763NU Ultra Density Attached 3ph 208 V 60 A (27.7 IEC 309 2P+G 9 / C19
Enterprise PDU C19 A/ph) 3 / C13
3-Phase 60A PDU
Basic
39M2816 DPI C13 Enterprise 40K9614 1ph 200 - 240 V 30 A (24 A) NEMA L6-30P 12 / C13
PDU+ without line
cord Monitored 40K9615 1ph 200 - 240 V 60 A (48 A) IEC 309 2P+G
39Y8941 DPI Single Phase C13 40K9614 1ph 200 - 240 V 30 A (24 A) NEMA L6-30P 12 / C13
Enterprise PDU
without line cord 40K9615 1ph 200 - 240 V 60 A (48 A) IEC 309 2P+G
39Y8948 DPI Single Phase C19 40K9614 1ph 200 -240 V 30 A (24 A) NEMA L6-30P 6 / C19
Enterprise PDU
without line cord 40K9615 1ph 200 - 240 V 60 A (48 A) IEC 309 2P+G
39Y8923 DPI 60A Three-Phase Attached 3ph 208 V 60 A (27.7 IEC 309 3P+G 6 / C19
C19 Enterprise PDU A/ph)
with IEC309 3P+G
(208 V) fixed line cord
71762NX Ultra Density 40K9612 1ph 220 - 240 V 32 A IEC 309 P+N+G 9 / C19
Enterprise PDU C19 3 / C13
PDU (WW) 40K9613 1ph 220 - 240 V 63 A IEC 309 P+N+G
71762MX Ultra Density 40K9612 1ph 220 - 240 V 32 A IEC 309 P+N+G 9 / C19
Enterprise PDU C19 3 / C13
PDU+ (WW) 40K9613 1ph 220 - 240 V 63 A IEC 309 P+N+G
Part number  Description  Line cord part number  Phase  Volts (V)  Line cord rating (derated)  Line cord plug  Number / type of outlets
39M2816 DPI C13 Enterprise 40K9612 1ph 220 - 240 V 32 A IEC 309 P+N+G 12 / C13
PDU without line cord
Monitored 40K9613 1ph 220 - 240 V 63 A IEC 309 P+N+G
39Y8941 DPI Single Phase C13 40K9612 1ph 220 - 240 V 32 A IEC 309 P+N+G 12 / C13
Enterprise PDU
without line cord 40K9613 1ph 220 - 240 V 63 A IEC 309 P+N+G
39Y8948 DPI Single Phase C19 40K9612 1ph 220 - 240 V 32 A IEC 309 P+N+G 6 / C19
Enterprise PDU
without line cord 40K9613 1ph 220 - 240 V 63 A IEC 309 P+N+G
39Y8939 30 amp/240V Included 1ph 200 - 240 V 30 A (24 A) NEMA L6-30P 3 / C19
Front-end PDU
39Y8940 60 amp Front-end Included 1ph 200 - 240 V 60 A (48 A) IEC 309 2P+G 3 / C19
PDU
39Y8934 DPI 32 amp Front-end Included 1ph 220 - 240 V 32 A IEC 309 P+N+G 3 / C19
PDU
39Y8935 DPI 63 amp Front-end Included 1ph 220 - 240 V 63 A IEC 309 P+N+G 3 / C19
PDU
00YE443 DPI Universal Rack 39M5389 1ph 200 - 240 V 20 A 2.5m IEC 320 C19 to 7 / C13
PDU C20
00YE443 DPI Universal Rack 40K9772 1ph 200-240V 16A 4.3m NEMA L6-20 7 / C13
(con’t) PDU (continued) (con’t) 20A 16A / 200-240V (con’t)
Single Phase (HV)
Part number  Description  Line cord part number  Phase  Volts (V)  Line cord rating (derated)  Line cord plug  Number / type of outlets
00YE443 DPI Universal Rack 41Y9233 1ph 200V 15A 4.3m 15A/200V, C19 7 / C13
(con’t) PDU (con’t) (con’t) to JIS C-8303 line (con’t)
cord
00YJ776 0U 36 C13/6 C19 30A Attached 1ph 200-240V 24A L6-30P 6 / C19
1 Phase PDU 36 / C13
00YJ779 0U 21 C13/12 C19 Attached 3ph 200-240V 48A IEC 603-309 460P9 12 / C19
60A 3 Phase PDU 21/ C13
46M4143 0U 12 C19/12 C13 Attached 3ph Y 380 - 415 V 32 A (32 A/ph) IEC 309 3P+N+G 12 / C19
32A 3-Phase PDU 12 / C13
00YJ777 0U 36 C13/6 C19 32A Attached 1ph 200-240V 32A A IEC60309 332P6 6 / C19
1 Phase PDU 36 / C13
00YJ778 0U 21 C13/12 C19 Attached 3ph Y 200-240V, 32 A (32 A/ph) IEC60309 532P6 12 / C19
32A 3 Phase PDU 350-450V 21/ C13
For more information about PDUs and cables, see the North America, Japan, and
International PDU planning guides that are available at this website:
https://support.lenovo.com/documents/LNVO-POWINF
Part of the information that is stored in the IMM2 can be accessed with F1-Setup by selecting
System Settings → Integrated Management Module. Figure 5-1 shows the first panel of
the IMM2 configuration panel.
Tip: If you have a server but you do not know the logon credentials, you can reset the
credentials by going to the panel that is shown in Figure 5-1. From F1-Setup, you can
restore the IMM2 configuration to the factory defaults by selecting Reset IMM to Defaults.
The next section describes how to connect to the IMM2, set up your network, and gain a
remote presence.
For more information about how to access the IMM2, see 7.2, “Integrated Management
Module II” on page 244.
In 5.1.4, “IMM2 dedicated versus shared ML2 Ethernet port” on page 159, the reasons for the
use of only the dedicated Ethernet port for IMM2 access or sharing the access to an ML2
Ethernet port are described. Also described are the steps to take to enable shared access.
You can also change the IMM default IP address in the Network Configuration panel, as
shown in Figure 5-2.
5.1.4 IMM2 dedicated versus shared ML2 Ethernet port
When configured as Dedicated, you are connecting to the network via the system
management port. As shown in Figure 5-3, the system management port is on the rear of the
server to the right side of the USB ports.
The use of this port allows for easier separation of public and management network traffic.
Separating the traffic is done when you connect your public network port to switch ports that
belong to a public access virtual LAN (VLAN). The management port is connected to a switch
port defined by a separate management VLAN.
When configured as Shared, you are sharing network traffic on an ML2 Ethernet adapter,
which is the adapter slot closest to the rear fans, as shown in Figure 5-4. The default port that
is used on the ML2 adapter is port 1.
Figure 5-4 Location of the ML2 adapter slot that can share IMM2 access
Although this configuration eliminates a physical switch port and a patch cable, the media
access control (MAC) address of the shared Ethernet port and the MAC address of the IMM2
are both presented through this single network port. This means that there are at least two
separate IP addresses on the same physical port, which prevents you from configuring the
ML2 adapter’s Ethernet port in a network team that uses 802.3ad load balancing (if the
particular ML2 adapter that you use supports this function).
Although the IMM2 uses a dedicated 32-bit RISC processor, there are limits to the amount of
network traffic that the IMM2 can be exposed to before complex functions, such as booting
from a remote DVD or USB storage, become unreliable because of timing issues. Although the
operating system has all of the necessary drivers in place to deal with these timing issues,
the Unified Extensible Firmware Interface (UEFI) is not as tolerant. For this reason, and to
maintain secured access, the IMM2 should be kept on a separate management network.
You assign each IMM2 in the x3950 X6 server its own IP address. The primary IMM2 is on the
lower half of the x3950 X6, as seen in Figure 5-5, and is the interface that is used to connect
to the server.
XClarity and the x3950 X6: To properly manage the x3950 X6, both IMM2 interfaces must
be connected with valid IP addresses.
Most communication errors are because of network switch configuration options, such as
blocked ports or VLAN mismatches. The following procedure shows you how to determine
this type of problem by connecting directly to the IMM2 port with a notebook and Ethernet
patch cable, pinging, and then starting a web session.
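For example, assuming the IMM2 is still at its factory default static address of 192.168.70.125
(an assumption; substitute the address that is configured on your system) and the notebook is on
the same subnet, the check looks like this:
ping 192.168.70.125
Then open https://192.168.70.125 in a web browser to start the web session.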
If you can ping the IMM, you have a good direct network link. If the web session fails,
complete the following steps:
1. Try another web browser (see supported browsers 5.1.1, “IMM2 virtual presence” on
page 156).
2. Directly access the IMM configuration panel and reset the IMM2 in F1-Setup by selecting
System Settings → Integrated Management Module → Reset IMM. You must wait
approximately 5 minutes for the IMM2 to complete enough of its restart to allow you to
ping it. This IMM2 reset has no effect on the operating system that is running on the
server.
3. Try clearing the web browser cache.
4. Load the factory default settings back on the IMM2 through F1-Setup by selecting System
Settings → Integrated Management Module → Reset IMM2 to Defaults. The IMM2
must be reset again after the defaults are loaded.
5. Contact Lenovo support.
A green tick at the top menu bar indicates that all is working from a strict hardware
perspective. The IMM2 can check on the status of server components, the ServeRAID
controllers, and PCIe interfaces to most PCIe adapters.
The IMM2 does not check the functional status of most PCIe adapters regarding their
hardware connections to external devices. You must refer to the system event log from within
the operating system or the switch logs of the network and fiber switches to which the server
is connected to resolve connectivity issues.
When a hardware error is detected in the server, a red X replaces the green tick. The
Hardware Health summary at the bottom of the page provides information about any
hardware errors that are unresolved in the server. This information also is represented by a
red X.
Virtual light path diagnostics
If you are physically in front of the server, it is easy to track hardware problems by noticing the
first tier of light path diagnostics by using the LED panel at the front of the server, as shown in
Figure 5-7.
Scrolling through the LED panel by using the up and down arrows that are shown in
Figure 5-7 shows the hardware subsystem that is experiencing an error.
Most servers are not physically near the people who manage them. To help you see the event
from a remote location, you can use the IMM2 to review all tiers of the light path diagnostics
via the System Status page under the Hardware Health menu, as seen in Figure 5-6 on
page 162. Clicking a hardware component shows the hardware component that is causing
the error. A Hardware Status example of Local Storage is shown in Figure 5-8.
Remote control
Certain problems require that you enter the operating system or F1-Setup to detect or fix the
problems. For remotely managed servers, you can use the Remote Control feature of the
IMM, which is accessed via the top menu by selecting Server Management → Remote
Control.
Figure 5-10 shows the available options for starting a remote control session.
Figure 5-10 Integrated Management Module Remote Control session start panel
The IMM2 Remote Control provides the following features:
The remote control provides you with the same capability that you have with a keyboard,
mouse, and video panel that is directly connected to the server.
You can encrypt the session when it is used over public networks.
You can use local storage or ISO files as mounted storage resources on the remote server
that you are using. These storage resources can be unmounted, changed, and remounted
throughout the session, as needed.
When combined with the Power/Restart functions of the IMM, you can power down, restart, or
power on the server while maintaining the same remote control session. For more information
about the remote control feature on the X6 servers, see 7.3, “Remote control” on page 248.
UEFI is the replacement for BIOS. BIOS was available for many years but was not designed
to handle the amount of hardware that can be added to a server today. New System x models
implement UEFI to use its advanced features. For more information about UEFI, see this
website:
http://www.uefi.org/home/
The UEFI page is accessed by pressing F1 during the system initializing process, as shown
in Figure 5-11.
Choose System Settings to access the system settings options that are described here, as
shown in Figure 5-13.
You can use the Advanced Settings Utility (ASU) tool to change the UEFI settings values.
ASU makes more settings available than the settings that are accessed by using the
F1-Setup panel. For more information about ASU, see 7.8, “Advanced Settings Utility” on
page 289.
Table 5-1 lists the most commonly used UEFI settings and their default values.
Processor settings: Hyper-Threading is Enabled; AES-NI is Enabled.
Memory settings: Mirroring is Disabled; Sparing is Disabled.
Power and Advanced RAS settings are also listed in Table 5-1.
In most operating conditions, the default settings provide the best performance possible
without wasting energy during off-peak usage. However, for certain workloads, it might be
appropriate to change these settings to meet specific power to performance requirements.
The UEFI provides several predefined setups for commonly wanted operation conditions.
These predefined values are referred to as operating modes. Access the menu in UEFI by
selecting System Settings → Operating Modes → Choose Operating Mode. You can see
the five operating modes from which to choose, as shown in Figure 5-14. When a mode is
chosen, the affected settings change to the appropriate predetermined values.
The following Operating Modes are available (the default mode is Efficiency - Favor
Performance):
Minimal Power
Select this mode to minimize the absolute power consumption of the system during
operation. Server performance in this mode might be reduced, depending on the
application that is running.
Efficiency - Favor Power
Select this mode to configure the server to draw the minimum amount of power and
generate the least noise. Server performance might be degraded, depending on the
application that you are running. This mode provides the best features for reducing power
and increasing performance in applications where the maximum bus speeds are not
critical.
Efficiency - Favor Performance
Select this mode to maintain the optimal balance between performance and power
consumption. The server generally produces the best performance per watt while it is in
this mode. No bus speeds are derated in this mode. This is the default setting.
Maximum Performance
Select this mode for the maximum performance for most server applications. The power
consumption in this mode is often higher than in the Efficiency - Favor Power or Efficiency
- Favor Performance mode.
Power saving and performance are also highly dependent on the hardware and software
that is running on the system.
Custom Mode
Select this mode only if you understand the function of the low-level IMM2 settings. This
mode is the only choice with which you can change the low-level IMM2 settings that affect
the performance and power consumption of the server.
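If you prefer to change the operating mode from a script rather than from F1-Setup, the ASU
utility can set the same value. The following is a minimal sketch; the setting name
OperatingModes.ChooseOperatingMode is an assumption, so confirm the exact name on your system
with an asu64 show all listing first:
./asu64 show OperatingModes.ChooseOperatingMode
./asu64 set OperatingModes.ChooseOperatingMode "Maximum Performance"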
Workload Configuration
The default selection for workload configuration is Balanced. You can change the
workload configuration to I/O Sensitive, which is intended for use with expansion cards
that require high I/O bandwidth; it keeps enough frequency available for the workload
while the CPU cores are idle. Choose I/O Sensitive mode for I/O throughput, bandwidth, or
latency-sensitive workloads.
10Gb Mezz Card Standby Power (Default: Enabled)
When this option is enabled, system fans keep running when the server is off. If disabled,
the system saves more power when off, but loses network connection on a 10Gb Mezz
card, which affects Wake on LAN and shared System Management Ethernet port
functions.
It is recommended to disable this option if you need to have maximum performance.
Figure 5-16 shows the UEFI Processor system settings window with the default values.
Cores in CPU Package (Default: All)
This option sets the number of processor cores to be activated within each CPU package.
You might want to change your CPU cores to lower your power consumption or to meet
software licensing requirements.
QPI Link Frequency (Default: Max Performance)
This option sets the operating frequency of the processor’s QPI link:
– Minimal Power provides less performance for better power savings.
– Power Efficiency provides best performance per watt ratio.
– Max performance provides the best system performance.
Energy Efficient Turbo (Default: Enabled)
When Energy Efficient Turbo is enabled, the CPU’s optimal turbo frequency is tuned
dynamically based on the CPU usage. The power and performance bias setting also
influences Energy Efficient Turbo.
Uncore Frequency scaling (Default: Enabled)
When enabled, the CPU uncore dynamically changes speed based on the workload. All
miscellaneous logic inside the CPU package is considered the uncore.
CPU Frequency Limits (Default: Full turbo uplift)
The maximum Turbo Boost frequency can be restricted with turbo limiting to a frequency
that is between the maximum turbo frequency and the rated frequency for the CPU
installed. This can be useful for synchronizing CPU tasks. Note that the maximum turbo
frequency for N+1 cores cannot be higher than for N cores. If the turbo limits are being
controlled through application software, leave this option at the default value.
Available options:
– Full turbo uplift
– Restrict maximum frequency.
AES-NI (Default: Enabled)
This option enables the support of Advanced Encryption Instructions to improve the
encryption performance.
Memory mirror mode: Memory mirror mode cannot be used with Memory spare
mode.
In this section, we provide general settings for the x3850 X6 that can be a good starting point
for further tuning. Table 5-2 lists the UEFI settings that are recommended for specific
workload requirements.
Workload Configuration (a Power setting in Table 5-2): Balanced, Balanced, I/O Sensitive, I/O Sensitive, I/O Sensitive (one value per workload column)
For low latency, HPC workloads, and other applications where you need maximum performance,
make sure that the active power policy has throttling disabled and N+N power redundancy
enabled. You should also disable power capping to avoid performance degradation. For more
information about power policies, see 4.5.5, “Power policy” on page 141.
To avoid installation problems with VMware ESXi 5.1 and later, we recommend that you disable
PCI 64-Bit Resource Allocation and set the MMConfigBase parameter to 3 GB. You can set these
UEFI parameters by using the ASU utility, as shown in Figure 5-18.
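As a rough sketch of the asu64 commands that correspond to Figure 5-18 (the setting names used
here are assumptions and can vary by firmware level; list the available settings first to confirm
the exact names on your system):
./asu64 show all | grep -i -e mmconfig -e 64-bit                    # find the exact setting names
./asu64 set DevicesandIOPorts.PCI64BitResourceAllocation Disable   # setting name is an assumption
./asu64 set DevicesandIOPorts.MMConfigBase "3GB"                    # name and value format are assumptions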
You can also use UEFI setting presets like Maximum performance or Minimum Power, which
we cover in 5.2.1, “Operating modes” on page 168.
The LSI MegaRAID Configuration Utility has several options for managing your internal disks.
Figure 5-20 shows the options that are available in the LSI MegaRAID Configuration Utility
panel.
Drive Management
Displays the basic drive properties and performs operations, such as assign or unassign a
hot spare drive, locate drives, place drive offline or online, and rebuild a drive. You can
also view other properties by using the Advanced link.
Hardware Components
Displays the battery status and the status of temperature sensors, fans, and power
supplies. You can also view more properties and perform other operations by using the
Advanced link. Some options appear only if the controller supports them.
The UEFI supports the configuration of your RAID array with a supported controller. Complete
the following steps to create a RAID-1 array that is configured in the UEFI:
1. Enter UEFI setup by pressing F1 when prompted at start time. Access the LSI
Configuration Utility from the menu by selecting System Settings → Storage → LSI
MegaRAID <ServeRAID M5210> Configuration Utility, as shown in Figure 5-21.
2. Figure 5-22 shows the Main Menu for the Configuration Utility. Enter the Configuration
Manager by selecting Configuration Management.
4. From the Controller Management main menu panel, select Advanced, as seen in
Figure 5-24.
5. From the Advanced Controller Management menu, select Manage MegaRAID Advanced
Software Options, as shown in Figure 5-25.
6. Ensure that the MegaRAID Advanced Software Option is activated. Exit this menu by
pressing the Esc key and return to the Main Menu, as shown in Figure 5-22 on page 177.
7. From this menu, select Drive Management. Ensure that your disks are in an
Unconfigured Good state, as shown in Figure 5-26.
8. Select each drive individually. From here, you can find a drive and initialize a drive from
the menu, as shown in Figure 5-27.
9. Return to the Main Menu, as shown in Figure 5-22 on page 177. From here, select the
Configuration Management panel (see Figure 5-28).
11. Select the wanted RAID level for your drives. For this example, we selected RAID 1, as
shown in Figure 5-30.
12. Select Save Configuration and confirm that you want to create your RAID array, as shown
in Figure 5-31.
13.Select Confirm to mark an X in the box and then select Yes. After you confirm the array
that you created, you see a Success message, as shown in Figure 5-32.
14.Select OK, and press Esc to return to the main menu. Your array is now active.
There are more utilities for configuring and managing your disks in the x3850 X6 and x3950
X6 server. For more information, see 6.2, “ServerGuide” on page 233 and 7.9, “MegaRAID
Storage Manager” on page 292.
The IMM2 displays the RAID configuration in the Local Storage pane, as shown in
Figure 5-33.
For PXE, disable PXE-boot on ports that will not be used for booting
Enable PXE-boot only on the appropriate network port that is used for network booting.
This avoids wasted time while all available network ports try to reach DHCP and TFTP servers.
Go to System Settings → Network → Network Boot Configuration, choose the unused
network ports, and disable PXE-boot, as shown in Figure 5-36:
Figure 5-36 Disable PXE-boot
The more memory you have, the longer the system takes in the “System initializing
memory” state.
You cannot reduce the amount of time that is needed for memory initialization, but you can
temporarily enable Quick boot mode. In this mode, the server initializes only a minimum
amount of memory (only one or two DIMMs, depending on the number of Compute Books
installed), which allows you to reboot the system relatively quickly. Quick boot mode can
be useful for maintenance or OS deployment, when you need to install an OS, boot a rescue
image, or boot a BOMC image for firmware updates.
Important: You should not use Quick boot mode for production systems.
To enable the Quick boot mode, you can use the Front operator panel, located on the
Storage book. Go to the Actions section and choose Quick boot mode.
You can also use the ASU utility to enable or disable Quick boot mode. Use the UEFI option
IMM.DeploymentBoot with the following parameters:
– Disable to disable the Quick boot mode
– NextBoot to enable it during the next system reboot
– NextAC to enable this mode on the next power cycle.
Figure 5-37 shows the ASU usage examples on Linux:
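For example, a minimal sketch of such asu64 invocations (run locally from the ASU installation
directory; add the usual connection options if you run ASU against a remote IMM2):
./asu64 set IMM.DeploymentBoot NextBoot   # enable Quick boot mode for the next reboot only
./asu64 set IMM.DeploymentBoot NextAC     # enable Quick boot mode on the next power cycle
./asu64 set IMM.DeploymentBoot Disable    # return to full memory initialization
./asu64 show IMM.DeploymentBoot           # verify the current value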
For more information about the technology, see 2.6.3, “NVMe SSD technology” on page 45.
For more information about the components, see 3.11, “Storage subsystem” on page 94.
In this section, we describe the planning and use of these drives. This section includes the
following topics:
5.4.1, “NVMe drive placement”
5.4.2, “NVMe PCIe SSD adapter placement” on page 188
5.4.3, “Using NVMe drives with Linux” on page 188
5.4.4, “Using NVMe drives with Microsoft Windows Server 2012 R2” on page 192
5.4.5, “Using NVMe drives with VMware ESXi server” on page 196
You can install a SAS/SATA backplane in the lower backplane in the Storage Book.
One backplane supports up to four NVMe drives.
One Extender Adapter is required for every two NVMe drives installed (as listed in
Table 3-15 on page 98).
One NVMe cable is required between the adapter and backplane per drive installed.
For 3 or 4 NVMe drives that are installed in the server, a second Extender Adapter must
be installed. The Extender Adapters are installed in the following slots of the Storage
Books:
– For the x3850 X6, the slots are PCIe slots 11 and 12.
– For the x3950 X6, the slots are PCIe slots 11, 12, 43, and 44.
If four NVMe drives are installed in a Storage Book, two Extender Adapters are required to
connect those drives. Because these two adapters occupy both PCIe slots in the Storage
Book, no other controllers can be installed; therefore, the other four drive bays in the
Storage Book must remain empty.
Table 5-3 shows the connectivity and slot installation ordering of the PCIe Extender Adapter,
backplane, and solid-state drives (SSDs).
NVMe PCIe Extender Adapter 1 (slot 11):
– Adapter port 0 to backplane port 3: Drive 1, bay 7, PCIe slot 19
– Adapter port 1 to backplane port 2: Drive 2, bay 6, PCIe slot 18
NVMe PCIe Extender Adapter 2 (slot 12):
– Adapter port 0 to backplane port 1: Drive 3, bay 5, PCIe slot 17
– Adapter port 1 to backplane port 0: Drive 4, bay 4, PCIe slot 16
Figure 5-38 2.5-inch NVMe SSD drives location in the Storage Book
Figure 5-40 shows the ports of the NVMe SSD backplane (drive side of the backplane, as seen
from the front of the server). The ports are numbered 0 through 3.
Figure 5-40 NVMe backplane port numbering (drive side of the backplane)
The operating system and UEFI report the NVMe drives that are attached to the 4x 2.5-inch NVMe
PCIe backplane as PCI devices that are connected to PCIe slots 16 - 19. You can check the
connected NVMe SSD drives from the IMM2 web interface on the Server Management → Adapters
page, as shown in Figure 5-41 on page 187.
Know your PCIe slot numbers: It is important to know the PCIe slot numbers that are used by
the NVMe drives. During software RAID maintenance and NVMe SSD drive replacement, these
PCIe slot numbers allow you to distinguish the appropriate drive in a set of similar
NVMe drives.
For more information about NVMe drives, see 2.6.3, “NVMe SSD technology” on page 45.
For the best performance when using NVMe PCIe SSD adapters, follow the PCIe slot placement
order that is shown in Table 5-4 on page 188. For the x3850 X6, the adapter installation order
is slots 8, 7, 9, 4, 1, 5, 2, 6, 3.
For more information about PCIe adapters placement rules, see 5.5, “PCIe adapter
placement advice” on page 198.
Other Linux distributions might have NVMe support, depending on the kernel version.
The RHEL and SLES distributions have NVMe kernel modules; therefore, no other drivers are
required to use NVMe drives. NVMe drives are represented in the OS as block devices with
device names, such as /dev/nvmeXn1, where X is a number that is associated with each
NVMe drive that is installed in the server. For example, for two NVMe drives installed, the
device names are nvme0n1 and nvme1n1 files in /dev directory, as shown in Figure 5-42.
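As an illustrative sketch (two installed drives assumed; output abridged), the devices can be
listed as follows:
~ # ls /dev/nvme*
/dev/nvme0  /dev/nvme0n1  /dev/nvme1  /dev/nvme1n1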
Figure 5-43 shows other Linux commands that can show the NVMe drives.
As shown in Figure 5-43, lspci and lsblk commands show that two NVMe controllers are
connected to PCIe bus and two block devices nvme0n1 and nvme1n1 are available in the
operating system. You can also run simple performance tests by using the hdparm utility, as
shown in Figure 5-44.
/dev/nvme0n1:
Timing O_DIRECT cached reads: 4594 MB in 2.00 seconds = 2298.38 MB/sec
Timing O_DIRECT disk reads: 8314 MB in 3.00 seconds = 2770.65 MB/sec
Figure 5-44 Performance test by using hdparm utility
As shown in Figure 5-44, direct read speed from one NVMe drive is 2.7 GBps and cached
read speed is almost 2.3 GBps.
You can work with NVMe drives as with other block devices, such as SATA or SAS drives.
You can use fdisk or parted utilities to manage disk partitions, create any supported file
systems by using standard Linux commands, and mount these file systems.
Figure 5-45 shows partition creation on an NVMe drive with the parted utility.
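A minimal sketch of such a parted session follows (a GPT label and a single partition that spans
the drive; the device name matches the earlier example and the partition layout is only an
illustration):
~ # parted /dev/nvme0n1
(parted) mklabel gpt
(parted) mkpart primary ext4 0% 100%
(parted) print
(parted) quit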
When you create a partition on an NVMe drive, a new block device name appears in the
/dev/ directory. For example, for /dev/nvme0n1 drive, /dev/nvme0n1p1 is created.
After that, you can create an ext4 file system on that partition, as shown in Figure 5-46.
Then, you can mount the new ext4 file system to the file system tree, as shown in
Figure 5-47.
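A sketch of the corresponding commands (the mount point /mnt/nvme0 is an arbitrary example):
~ # mkfs.ext4 /dev/nvme0n1p1
~ # mkdir -p /mnt/nvme0
~ # mount /dev/nvme0n1p1 /mnt/nvme0
~ # df -h /mnt/nvme0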
You can manage other NVMe drives in the same way by using other Linux features, such as
Logical Volume Manager (LVM) and software RAID, if needed.
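For example, a sketch of creating a software RAID-1 array across two NVMe drives with mdadm (the
array device /dev/md0 is an arbitrary choice, and the mdadm package must be installed):
~ # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
~ # mkfs.ext4 /dev/md0
~ # mdadm --detail /dev/md0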
Microsoft Windows Server 2012 R2 has native NVMe driver support, and no other drivers are
required to start using NVMe drives. Other Windows versions might require drivers.
Complete the following steps to check that NVMe drives are recognized by Windows:
1. Open Device Manager and expand the Disk drives section. All installed NVMe drives
should be present, as shown in Figure 5-48.
2. Open the Disk Management tool; you should see all installed NVMe drives. For example,
both installed NVMe drives are presented as Disk 1 and Disk 2, as shown in Figure 5-49
on page 192.
3. Both NVMe drives must be online and initialized. To initialize the drives, right-click the
appropriate disk (Disk 1 or Disk 2 as shown in Figure 5-49) and select Initialize Disk. The
Initialize Disk window opens, as shown in Figure 5-50.
4. After the disks are initialized, you can create volumes. Right-click the NVMe drive and
select the required volume to create, as shown in Figure 5-51 on page 193.
5. For example, choose New Simple Volume. The volume creation wizard opens. Click
Next and specify a volume size, as shown in Figure 5-52.
6. You also must assign a drive letter or path for a new volume, as shown in Figure 5-53 on
page 194.
7. You must format the new volume and specify the file system parameters, such as block
size and volume label, as shown in Figure 5-54.
Figure 5-54 Format partition
9. After the New Simple Volume wizard completes, you can see the new volume NVMe drive
1 (F:) by using the Disk Management tool, as shown in Figure 5-56.
You now have NVMe drives that are available for storage. You can create software RAID
arrays of different types by using two or more drives.
The ESXi 5.5 driver for NVMe drives can be downloaded from the following VMware website:
https://my.vmware.com/web/vmware/details?productId=353&downloadGroup=DT-ESXI55-VMWARE-NVME-10E030-1VMW
Complete the following steps to install the NVMe driver on ESXi 5.5 Update 2:
1. Download VMware ESXi 5.5 NVMe driver from this website:
https://my.vmware.com/web/vmware/details?productId=353&downloadGroup=DT-ESXI55-VMWARE-NVME-10E030-1VMW
2. Enable SSH on the ESXi server.
3. Copy the VMware ESXi 5.5 NVMe driver to the ESXi server by using any SSH file-copy client,
such as scp or WinSCP, with a command that is similar to the command that is shown in
Figure 5-57.
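A sketch of such a copy command, issued from a Linux workstation (the host name esxi-host is a
placeholder for your ESXi server):
$ scp VMW-ESX-5.5.0-nvme-1.0e.0.30-2284103.zip root@esxi-host:/tmp/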
4. Log in to the ESXi server by using an SSH client and extract the .zip file, as shown in
Figure 5-58.
~ # cd /tmp
/tmp # unzip VMW-ESX-5.5.0-nvme-1.0e.0.30-2284103.zip
Archive: VMW-ESX-5.5.0-nvme-1.0e.0.30-2284103.zip
inflating: VMW-ESX-5.5.0-nvme-1.0e.0.30-offline_bundle-2284103.zip
inflating: nvme-1.0e.0.30-1vmw.550.0.0.1391871.x86_64.vib
inflating: doc/README.txt
inflating: source/driver_source_nvme_1.0e.0.30-1vmw.550.0.0.1391871.tgz
inflating:
doc/open_source_licenses_nvme_1.0e.0.30-1vmw.550.0.0.1391871.txt
inflating: doc/release_note_nvme_1.0e.0.30-1vmw.550.0.0.1391871.txt
Figure 5-58 Extracting drivers from the archive
5. Install the extracted NVMe driver on the ESXi server, as shown in Figure 5-59.
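A sketch of the installation command (the VIB file name matches the file that was extracted in
Figure 5-58; a host reboot is typically required afterward):
/tmp # esxcli software vib install -v /tmp/nvme-1.0e.0.30-1vmw.550.0.0.1391871.x86_64.vib
/tmp # reboot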
In the Details panel, you should see more information about the selected NVMe drive, as
shown in Figure 5-61.
The NVMe drives are now available for use. You can use NVMe drives as VMFS datastores
or as Virtual Flash to improve I/O performance for all virtual machines or pass-through NVMe
drives to the dedicated virtual machines.
This section describes considerations to remember for determining how to use your PCIe
slots, depending on the type of I/O Books and adapters that you installed.
PCIe adapter slots
Table 5-5 shows the associated CPU with each PCIe adapter slot in the system. You cannot
install an adapter in an I/O Book that does not have an associated Compute Book.
CPU 0 (Compute Book 1): slots 9 and 10a (Primary I/O Book), and Storage Book slot 12
CPU 1 (Compute Book 2): slots 7 and 8 (Primary I/O Book), and Storage Book slot 11
Figure 5-62 shows the PCIe slot numbering of the x3850 X6: slots 1 - 12, numbered left to right
across bays 1 and 2.
Figure 5-63 shows the PCIe slot numbering of the x3950 X6: slots 1 - 12 in the lower node
(bays 1 and 2) and slots 33 - 44 in the upper node (bays 3 and 4).
Figure 5-64 shows processor-to-slot PCIe connectivity for the x3850 X6.
5.6 Hot-swap procedures
The x3850 X6 and x3950 X6 hardware supports the ability to hot-swap certain components of
the server. The term hot-swap refers to adding or removing certain hardware components
while the server is running.
The following resources can be hot-swapped in the x3850 X6 and x3950 X6 server:
All 2.5-inch and 1.8-inch drives
All power supplies
All system fans
Optional I/O Books
Mixing power supplies: The 900 W and 1400 W power supplies can be mixed in pairs;
however, the 750 W DC power supply cannot be used with any AC supplies. For more
information about power supply pairs, see 4.5.2, “Power supply redundancy” on page 137.
You can use the IMM2 to set and change the power supply Power Policy and System Power
Configurations. You can set and change the policies and configurations by using the IMM2
web interface, CIM, or the ASU. You cannot set or change the Power Policy or System Power
Configurations by using the UEFI Setup utility. The default configuration setting for AC and
DC power supply models is non-redundant with throttling enabled.
For information about how to access the IMM, see 7.2, “Integrated Management Module II”
on page 244. From the IMM2 web interface, select Server Management → Power
Management. The Power Management panel is shown in Figure 5-65.
Primary I/O Book: Hot-swapping of the Primary I/O Book is not supported.
If the Primary I/O Book or any non-hot swappable component must be added or removed,
remove the AC power and wait for the LCD display and all LEDs to turn off.
The proper removal procedure involves alerting the operating system that the adapters are being removed before the book is physically removed. If the adapters are not properly brought offline before the I/O Book is removed, a Live Error Recovery (LER) results because a PCIe link goes offline unexpectedly (UEFI refers to this issue as “Surprise Link Down”).
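How the adapters are brought offline depends on the operating system. As an illustration only (not a documented procedure for any specific OS), on a Linux system with the pciehp hot-plug driver loaded, a PCIe slot can be powered off through sysfs before the book is removed; the slot number 9 that is used here is hypothetical:
# list the hot-pluggable PCIe slots that the operating system knows about
ls /sys/bus/pci/slots/
# power off slot 9 so that its adapter is offline before the I/O Book is removed
echo 0 > /sys/bus/pci/slots/9/power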
PCIe Live Error Recovery occurs when errors are detected by the PCIe root port. The LER feature brings down the PCIe link that is associated with the affected root port within one cycle and then automatically recovers the link. PCIe LER also protects against the transfer of associated corrupted data during this process.
Warning: The ability to hot-swap an I/O Book requires operating system support. If the OS
does not support PCIe hot plug, removing or adding the I/O Book can cause an
unrecoverable system error.
Figure 5-66 Location of the Attention button and LEDs on an Optional I/O Book
3. Wait for all three power indicators to turn off (see Figure 5-66). It is now safe to remove the
I/O Book.
4. Open the I/O Book cam handle.
5. Slide the I/O Book out and away from the server.
Figure 5-67 shows adding and removing the Half-length I/O Book.
Figure 5-67 Adding and removing a Half-length I/O Book
Figure 5-68 shows adding and removing the Full-length I/O Book.
The Full-length I/O Book can be used in the I/O Book bays that are associated with CPU 2 or
CPU 3. You can install up to two Full-length I/O Books.
Note: Because of the extended length of the full-length PCIe adapters, the Full-length I/O
Book adds a 4-inch mechanical extension to the base length dimension of the server.
Table 5-6 lists the meaning of the I/O Book Attention LED. For more information about the
location of this LED, see Figure 5-66 on page 203.
Flashing: The slot is powering on or off. Do not remove the I/O Book when in the flashing state.
Table 5-7 lists the meaning of the I/O Book Power LED. For more information about the location of this LED, see Figure 5-66 on page 203.
Flashing: Power transition; a hot-plug operation is in progress and insertion or removal of the adapter is not permitted.
The following minimum number of components is required to support partitioning in the x3950
X6:
Four Compute Books (two in each node)
Two standard I/O Books (one in each node)
Two Storage Books (one in each node)
Two boot devices (one in each node), such as a local drive, hypervisor key, or external
boot device
Four power supplies (two power supplies in each node)
The x3950 X6 server is made up of an upper node and a lower node, which correspond to the
two physical halves of the x3950 X6. The nodes are joined internally via QPI links in the
midplane to form an 8-socket server.
Unassigned node
The servers are not a part of the assigned group and must be added. The processors cannot be accessed and the node does not function in an unassigned state. After it is added, the node can be assigned to partition mode or to stand-alone mode.
An example of two nodes in an unassigned state is shown in Figure 5-69.
By using the IMM2 web interface scalable complex window, you can perform the following
functions on assigned nodes:
Power Actions: Power on or off the nodes immediately, shut down the OS and power off, or restart the nodes, as shown in Figure 5-70.
Complete the following steps to remove the partition that forms the x3950 X6 server and create two stand-alone servers:
1. Log on to the IMM2 web interface. For information about how to set up and access the
IMM2, see 5.1.2, “IMM2 network access” on page 157.
2. Access the Scalable Complex window by selecting Server Management → Scalable
Complex from the top menu, as seen in Figure 5-72.
Figure 5-72 Accessing the Scalable Complex window via the IMM2 web interface
Figure 5-73 shows the scalable complex window pane with one x3950 X6 server, which
contains two CPUs and 32 GB of RAM in each partition.
3. Before separating the x3950 X6 into two stand-alone servers, ensure that you turn off the
server. Failing to turn off the server results in an error message, as shown in Figure 5-74.
4. Check the option to the left of the server that you intend to power off. When it is selected, you can turn off the server via the Power Actions menu, as shown in Figure 5-75.
Figure 5-76 Partition options in the scalable complex window via the IMM2
6. Click Activate Stand-alone Mode in the confirmation message, as shown in Figure 5-77.
7. A progress window opens, as shown in Figure 5-78. You can track the progress of the
partitioning by refreshing the page.
The servers are now listed as stand-alone and behave as two individual servers, as shown in
Figure 5-79.
You can restore partition mode on the servers by highlighting the server with the check box
and selecting Partition Actions → Restore Partition Mode, as shown in Figure 5-80. By
restoring the server to partition mode, you are restoring it to a full x3950 X6 server and it no
longer functions as two separate servers.
Firmware updates are provided by Lenovo and can be downloaded from the support site; the updates include proven firmware from other manufacturers that can be applied to Lenovo systems. In this section, we describe the methods of performing firmware updates by using UXSPI, BoMC, and the IMM2.
You can also perform firmware updates by using Lenovo XClarity Administrator, IBM Systems
Director, or Upward Integration Module (UIM) with a hypervisor.
UIMs provide hardware visibility to the hypervisor for superior system and VM management
with which you can perform the following tasks:
Concurrent Firmware Updates: All system software can be concurrently updated in a
virtualized environment with a single command.
Reliability, availability, serviceability (RAS) with UIM: By using RAS, you can manage and
set policies around all PFA in the system and evacuate, migrate, or manage VMs before
an outage affects them.
For more information about the Upward Integration Module, see 7.5, “Lenovo XClarity
integrators” on page 258.
For the x3850 X6 and x3950 X6 servers, in the UEFI menu, select System Settings →
Integrated Management Module → Commands on USB Interface Preference and enable
Commands on USB interface, as shown in Figure 5-81.
UpdateXpress System Pack Installer
UpdateXpress is a tool that allows the System x firmware and drivers to be updated via the
OS.
By using the UpdateXpress System Pack Installer (UXSPI), you can update the firmware and
device drivers of the system under an operating system. You also can deploy UpdateXpress
System Packs™ (UXSPs) and the latest individual updates.
UXSPI uses the standard HTTP (port 80) and HTTPS (port 443) to get the updates from IBM.
Your firewall must allow these ports. UXSPI is supported on Windows, Linux, and VMware
operating systems. UXSPI is supported on 32-bit and 64-bit operating systems.
For more information about supported operating systems, see the UpdateXpress System Pack Installer User’s Guide, which is available at this website:
https://support.lenovo.com/documents/LNVO-XPRESS
4. Accept the default Check the System x web site, as shown in Figure 5-84. Click Next.
5. Select Latest available individual updates, as shown in Figure 5-85. Click Next.
6. Enter the settings for an HTTP proxy server (if necessary) or leave the option cleared, as shown in Figure 5-86. Click Next.
7. Select the directory in which you want to store the downloaded files, as shown in
Figure 5-87. Click Next.
8. A message appears that shows that the UXSPI acquired the updates for the machine, as
shown in Figure 5-88. Click Next.
9. A message appears that shows that the download completed, as shown in Figure 5-89.
Click Next.
10.A component overview shows the components that need updating. By default, UXSPI
selects the components to update. Accept these settings and click Next.
11. When the update is finished, a message appears that confirms the updates. Click Next.
BoMC is supported on Windows, Linux, and VMware operating systems. BoMC supports
32-bit and 64-bit operating systems. For more information about supported operating
systems, see ToolsCenter BOMC Installation and User's Guide, which is available at the
BOMC web page:
https://support.lenovo.com/documents/LNVO-BOMC
Windows: lnvgy_utl_bomc_v.r.m_windows_i386.exe
4. Select Updates, as shown in Figure 5-92. Click Next.
5. Select Latest available individual updates and click Next, as shown in Figure 5-93.
6. Enter the settings for an HTTP proxy server (if necessary) or select Do not use proxy, as
shown in Figure 5-94. Click Next.
7. Select one or more machine types that are on the bootable media and click Next.
9. By default, BoMC creates an ISO file, as shown in Figure 5-96. You can choose another
medium. Click Next.
10.Select Do not use unattended mode, as shown in Figure 5-97. Click Next.
11. Review the selections and confirm that they are correct. You can click Save to save this
configuration information to a file. Click Next.
BoMC acquires the files. In the progress bar, you can see the progress of the updates, as
shown in Figure 5-98.
Figure 5-98 Downloading the files
Note: Updating server firmware via the IMM2 is intended for recovery purposes. The preferred method of updating firmware is to use UXSPI, BoMC, or XClarity Administrator as described in this section.
Complete the following steps to update the IMM2, UEFI, and DSA via the IMM2 page:
1. From the IMM2 web interface, select Server Management → Server Firmware, as
shown in Figure 5-99.
3. Ensure that you downloaded the appropriate firmware update from the Lenovo support
page.
4. After you select your file, you can perform the firmware flash.
5.9 Troubleshooting
This section describes the tools that are available to assist with problem resolution for the X6
servers in any specific configuration. It also provides considerations for extended outages.
Use the following tools when you are troubleshooting problems on the X6 servers in any configuration.
From this page, you can check the power status of the server and state of the OS.
You also can view the System Information, such as name, machine type, serial number, and
state of the machine.
The Hardware Health of your system is also shown on this page, which monitors fans, power, disk, processors, memory, and system.
From the IMM, you also can access the hardware logs. From the main menu at the top of the
panel, click Events → Event log to access a full log history of all events.
The LCD system information display panel has a Select button, a Scroll up button, and a Scroll down button.
The information that is displayed on the LCD panel is shown in Figure 5-103. The panel shows the system name (Lenovo x3850 X6), the system status, and the UEFI\POST code; a check mark above the 2 indicates that the system is booting from the alternative UEFI bank.
Figure 5-103 LCD System Information panel
For more information about the LCD panel and error checking, see the Installation and
Service Guide.
You can view the system event log through the UEFI by pressing F1 at system start and
selecting System Event Logs → System Event Log.
This document describes the diagnostic tests that you can perform and troubleshooting
procedures and explains error messages and error codes.
If you completed the diagnostic procedure and the problem remains and you verified that all
code is at the latest level and all hardware and software configurations are valid, contact
Lenovo or an approved warranty service provider for assistance.
At the time of writing, the x3850 X6 and x3950 X6 servers with v4 processors are supported
with the following operating systems:
Microsoft Windows Server 2012
Microsoft Windows Server 2012 R2
Microsoft Windows Server 2016
Microsoft Windows Server, version 1709
Red Hat Enterprise Linux 6.10 x64
Red Hat Enterprise Linux 6.7 x64
Red Hat Enterprise Linux 6.8 x64
Red Hat Enterprise Linux 7.2
Red Hat Enterprise Linux 7.3
Red Hat Enterprise Linux 7.4
Red Hat Enterprise Linux 7.5
SUSE Linux Enterprise Server 11 Xen x64 SP4
SUSE Linux Enterprise Server 11 x64 SP4
SUSE Linux Enterprise Server 12 SP1
SUSE Linux Enterprise Server 12 SP2
SUSE Linux Enterprise Server 12 SP3
SUSE Linux Enterprise Server 12 Xen SP1
SUSE Linux Enterprise Server 12 Xen SP2
SUSE Linux Enterprise Server 12 Xen SP3
SUSE Linux Enterprise Server 15
SUSE Linux Enterprise Server 15 Xen
VMware ESXi 6.0 U2
For specific OS support, see the Lenovo Operating System Interoperability Guide:
https://lenovopress.com/osig#term=6241&support=all
vSphere 5.1 and 8-socket systems: VMware vSphere 5.1 has a fixed upper limit of 160
concurrent threads. Therefore, if you use an 8-socket system with more than 10 cores per
processor, you should disable Hyper-Threading.
Failing to disable Hyper-Threading in the Unified Extensible Firmware Interface (UEFI) with
12-core or 15-core processors and vSphere 5.1 affects performance.
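For example, an 8-socket x3950 X6 with 15-core processors presents 8 x 15 x 2 = 240 logical processors with Hyper-Threading enabled, which exceeds the 160-thread limit; with Hyper-Threading disabled, the same system presents 8 x 15 = 120 logical processors, which is within the limit.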
For more information, see the Integrated Management Module II User's Guide, which is
available at this address:
https://support.lenovo.com/docs/UM103336
2. Click Remote Control on the main page of IMM2, as shown in Figure 6-1.
3. If you want to allow other users remote control access during your session, click Start
Remote Control in Multi-user Mode. Otherwise, click Start Remote Control in Single
User Mode.
The Java application window should open, as shown in Figure 6-2 on page 227.
4. To mount an image to the remote server as virtual media, you should ensure Activate is
selected under the Virtual Media menu, as shown in Figure 6-3.
6. Click Add Image if you want to map an IMG or ISO image file, as shown in Figure 6-5.
7. After adding an image, select the drive that you want to map and click Mount Selected, as
shown in Figure 6-6.
Closing the session: Closing the Virtual Media Session window when a remote disk is
mapped to the machine causes the machine to lose access to the remote disk.
6.1.2 Local USB port
You can use the local USB port to attach a USB flash drive that contains the OS installation
files. There are several methods available to create a bootable flash drive. For more
information about the use of a USB key as an installation medium, see these websites:
Installing Red Hat Linux from a USB flash drive:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Installation_Guide/index.html#Making_Minimal_Boot_Media
How to create a bootable USB drive to install SLES:
http://www.novell.com/support/kb/doc.php?id=3499891
Installing Windows from a USB flash drive:
http://technet.microsoft.com/en-us/library/dn293258.aspx
Formatting a USB flash drive to start the ESXi Installation or Upgrade:
https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc_50%2FGUID-33C3E7D5-20D0-4F84-B2E3-5CD33D32EAA8.html
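As a minimal example only (following the general approach in the links above), on a Linux workstation you can write a Red Hat Enterprise Linux or SLES installation ISO to a USB flash drive with dd. The device name /dev/sdX is a placeholder for your USB device, and the command overwrites everything on that device:
dd if=/path/to/installer.iso of=/dev/sdX bs=4M
sync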
You can also use the ServerGuide Scripting Toolkit to create a bootable USB flash drive, as
described in 6.1.3, “Preboot eXecution Environment” on page 229.
For example, you can use xCAT software to deploy a broad set of operating systems by
network. For more information, see this website:
http://sourceforge.net/apps/mediawiki/xcat
For information about Lenovo XClarity Administrator (including how to log on and start
discovering), see 7.4, “Lenovo XClarity Administrator” on page 249.
Complete the following steps to mount and deploy an OS image from XClarity to a managed
server:
1. Click Provisioning → Deploy OS Images under the Deploy Operating Systems section,
as shown in Figure 6-7.
2. On the Deploy Operating Systems: Deploy OS Images page, select the server and OS
image to deploy and click Deploy Images, as shown in Figure 6-8 on page 230.
3. If you must change the OS image, select Change Selected → Image to deploy, as
shown in Figure 6-9.
4. Choose the required OS image from list of available images, as shown in Figure 6-10.
5. You are prompted to set an administrator password that is used when the OS is installed. Figure 6-11 on page 231 shows an example of deploying VMware ESXi 5.5.
6. When the administrator password is set, you must confirm the OS installation by clicking
Deploy, as shown in Figure 6-12.
The OS deployment starts and an information message opens, as shown in Figure 6-13.
You can monitor the OS deployment progress from the Jobs page, as shown in
Figure 6-14.
You also can monitor the OS installation progress by using Lenovo XClarity Administrator
remote control feature, as described in 7.4.3, “Remote control” on page 253.
6.2 ServerGuide
ServerGuide is an installation assistant for Windows installations that simplifies the process
of installing and configuring Lenovo x86 servers. The wizard guides you through the setup,
configuration, and operating system installation processes.
ServerGuide can accelerate and simplify the installation of X6 servers in the following ways:
Assists with installing Windows-based operating systems and provides updated device drivers that are based on the detected hardware.
Reduces rebooting requirements during hardware configuration and Windows operating
system installation, which allows you to get your X6 server up and running sooner.
Provides a consistent server installation by using best practices for installing and
configuring an X6 server.
Provides access to more firmware and device drivers that might not be applied at
installation time, such as adapters that are added to the system later.
ServerGuide deploys the OS image to the first device in the start order sequence. Best
practices dictate that you have one device that is available for the ServerGuide installation
process. If you start from SAN, ensure that you have only one path to the device because
ServerGuide has no multipath support. For more information, see 6.5, “Booting from SAN” on
page 240.
After the ServerGuide installation procedure, you can attach external storage or activate
more paths to the disk. For more information about how to attach external storage or
multipath drivers, see the respective User Guide.
Complete the following steps to install Windows Server 2008 Foundation with ServerGuide
(the method to install Linux is similar):
1. Download the latest version of ServerGuide from this website:
https://support.lenovo.com/documents/LNVO-GUIDE
9. Select the operating system that you want to install and click Next, as shown in
Figure 6-16.
10.Enter the current date and time, as shown in Figure 6-17 and click Next.
11. Create a RAID configuration. Select a RAID configuration and click Next, as shown in
Figure 6-18.
13.You must now create and format a partition. Choose your selection and click Next to start
the process, as shown in Figure 6-20.
14.When the process completes, click Next. You can select postinstallation options, as shown
in Figure 6-21.
15.Review the configuration, as shown in Figure 6-22. Click Next.
16.ServerGuide copies the necessary files to the disk in preparation for the operating system
installation, as shown in Figure 6-23.
19.The Windows setup installation procedure starts. Follow the Microsoft installation
procedure to complete the installation of your OS.
By using the ServerGuide Scripting Toolkit, you can tailor and build custom hardware
deployment solutions. It provides hardware configuration utilities and OS installation
examples for System x and BladeCenter x86-based hardware.
By using the ServerGuide Scripting Toolkit, you can create a bootable CD, DVD, or USB key that supports the following tasks and components:
Network and mass storage devices
Policy-based RAID configuration
Configuration of system settings that uses Advanced Settings Utility (ASU)
Configuration of Fibre Channel host bus adapters (HBAs)
Local self-contained DVD deployment scenarios
Local CD/DVD and network share-based deployment scenarios
Remote Supervisor Adapter (RSA) II, IMM, and BladeCenter Management Module
(MM)/Advanced Management Module (AMM) remote disk scenarios
UpdateXpress System Packs installation that is integrated with scripted network operating
system (NOS) deployment
IBM Director Agent installation that is integrated with scripted NOS deployment
The ServerGuide Scripting Toolkit, Windows Edition supports the following versions of IBM
Systems Director Agent:
Common Agent 6.1 or later
Core Services 5.20.31 or later
Director Agent 5.1 or later
The Windows version of the ServerGuide Scripting Toolkit enables automated operating
system support for the following Windows operating systems:
Microsoft Windows Server 2012
Microsoft Windows Server 2012 R2
Windows Server 2008, Standard, Enterprise, Datacenter, and Web Editions
Windows Server 2008 x64, Standard, Enterprise, Datacenter, and Web Editions
Windows Server 2008, Standard, Enterprise, and Datacenter Editions without Hyper-V
Windows Server 2008 x64, Standard, Enterprise, and Datacenter without Hyper-V
Windows Server 2008 R2 x64, Standard, Enterprise, Datacenter, and Web Editions
The Linux version of the ServerGuide Scripting Toolkit enables automated operating system
support for the following operating systems:
SUSE Linux Enterprise Server 10 SP2 and later
SUSE Linux Enterprise Server 11
Red Hat Enterprise Linux 6 U1 and later
Red Hat Enterprise Linux 5 U2 and later
To download the Scripting Toolkit or the ServerGuide Scripting Toolkit User’s Reference, see
this web page:
https://support.lenovo.com/us/en/documents/LNVO-TOOLKIT
6.4 Use of embedded VMware ESXi
The x3850 X6 and x3950 X6 servers support a USB flash drive option that is preinstalled with
VMware ESXi. VMware ESXi is fully contained on the flash drive and does not require any disk space. The USB Memory Key for VMware Hypervisor plugs into the internal USB port that is on the system board of the Primary I/O Book.
For more information about supported options, see 3.22, “Integrated virtualization” on
page 120.
VMware ESXi supports starting from the Unified Extensible Firmware Interface (UEFI). To
ensure that you can start ESXi successfully, you must change the start order. The first start
entry must be Embedded Hypervisor. Complete the following steps:
1. Press F1 for the UEFI Setup.
2. Select Boot Manager → Add Boot Option.
3. Select Generic Boot Option, as shown in Figure 6-25.
4. Select Embedded Hypervisor, as shown in Figure 6-26. If the option is not listed, it is already in the boot list. When you finish, press Esc to go back one panel.
If you do not have internal drives, disable the onboard SAS RAID Controller by selecting
System Settings → Devices and IO ports → Enable/Disable Onboard Devices and
disabling the SAS Controller or Planar SAS.
Set the HBA as the first device in the Option ROM Execution Order by selecting System
Settings → Devices and IO Ports → Set Option ROM Execution Order.
For older operating systems that do not support UEFI, set Legacy Only as the first boot
device.
Remove all devices from the boot order that might not host an OS. The optimal minimum
configuration is CD/DVD and Hard Disk 0. For older operating systems only, set Legacy
Only as the first boot device.
Enable the BIOS from your HBA.
Verify that your HBA can see a LUN from your storage.
For Microsoft Windows installations, ensure that the LUN is accessible through only one
path (Zoning or LUN masking).
After installation, if you have more than one path to the LUN, remember to install the multipath driver before you enable more than one path.
You can also check the documentation for the operating system that is used for Boot from
SAN support and requirements and storage vendors. For more information about SAN boot,
see the following resources:
Red Hat Enterprise Linux 7 Installation Guide:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Installation_Guide/sect-storage-devices-x86.html
Red Hat Enterprise Linux 6 Installation Guide:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/Storage_Devices-x86.html
Windows Boot from Fibre Channel SAN – Overview and Detailed Technical Instructions
for the System Administrator:
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=2815
vSphere Storage document from VMware:
http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-storage-guide.pdf
IBM Redbooks publication, SAN Boot Implementation and Best Practices Guide for IBM
System Storage, SG24-7958:
http://www.redbooks.ibm.com/abstracts/sg247958.html?Open
For IBM System Storage compatibility information, see the IBM System Storage
Interoperability Center at this website:
http://www.ibm.com/systems/support/storage/config/ssic
The tools are listed in Table 7-1. In this section, we describe the use of several of these tools.
Firmware deployment tools are also described in 5.8, “Updating firmware” on page 211.
Securely manages servers remotely and independently of the operating system state.
Helps remotely configure and deploy a server from bare metal.
Auto-discovers the scalable components, ports, and topology.
Provides one IMM2 firmware for a new generation of servers.
Helps system administrators easily manage large groups of diverse systems.
Requires no special drivers.
Works with Lenovo XClarity to provide secure alerts and status, which helps reduce
unplanned outages.
Uses standards-based alerting, which enables upward integration into various enterprise
management systems.
In the following section, we describe the out-of-band and in-band initial configuration.
By default, the ML2 slot is not shared for use with the IMM2. You must enable this feature in
the Unified Extensible Firmware Interface (UEFI) of the server.
Complete the following steps to enable the IMM2 to use the ML2 adapter:
1. Ensure that you have an ML2 Ethernet adapter installed.
2. Start the server and press F1 when prompted.
3. Select System Settings from the System Configuration and Boot Management menu.
4. Select Integrated Management Module from the System Settings menu.
5. Select Network Configuration from the IMM2 menu. By using this menu, you can configure the network settings for the IMM2. You also can configure the IMM2 to share the use of the ML2 adapter or to use out-of-band management from this menu, as shown in Figure 7-2.
6. Set the Network Interface Port setting to Shared to allow the IMM2 to use the ML2
adapter.
7. For DHCP Control, choose Static IP.
8. For IP Address, enter the relevant IP address.
9. For Subnet Mask, enter the required subnet mask.
10.For Default Gateway, enter the required default gateway address.
11. When you complete the IP address configuration, press Esc three times to return to the
System Configuration and Boot Management menu.
12.For Exit Setup, press the Y key when prompted to save and exit the Setup utility. The
server restarts with the new settings.
13.Plug a network cable into the dedicated system management port or the ML2 adapter if
you set the IMM2 to share its use according to the instructions. Ensure that you can ping
the IP address of the IMM2 on the connected network port.
After the IMM2 is available in the network, you can log in to the IMM2 web interface by
entering its IP address in a supported web browser, as shown in Figure 7-3.
Enter the default user name USERID. This user name is case-sensitive. Enter the default
password PASSW0RD, in which 0 is the number zero.
For more information about the configuration settings of the IMM2, see User’s Guide for
Integrated Management Module II, which is available at this website:
https://support.lenovo.com/docs/UM103336
There is no configuration that is required within the IMM2 web interface for the IMM2 to be managed in-band. However, you must ensure that the prerequisite drivers are installed so that the operating system can recognize the IMM2. All supported versions of Microsoft Windows Server 2008, 2012, and 2012 R2, VMware ESX, and Linux now include the prerequisite drivers for the X6 systems.
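As a quick in-band check from a Linux host OS, you can verify that the IMM2 answers on its LAN over USB interface. The address 169.254.95.118 is the commonly used default IMM2 LAN over USB address; treat it as an assumption and confirm it for your configuration:
ping -c 3 169.254.95.118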
For more information about the supported operating systems, see the Lenovo Operating
System Interoperability Guide, located here:
http://lenovopress.com/osig
You can perform the following common tasks with the remote control function:
Control the power of the systems.
Mount remote media, which includes CD/DVD-ROMs, supported ISO and firmware
images, and USB devices.
Create your own customized keyboard key sequences by using the soft key programmer.
Customize your viewing experience.
3. To protect sensitive disk and KVM data during your session, click Encrypt disk and KVM
data during transmission before starting Remote Control. For complete security, use
Remote Control with SSL. You can configure SSL by selecting IMM2 Management →
Security from the top menu.
4. If you want exclusive remote access during your session, click Start Remote Control in
Single User Mode. If you want to allow other users remote console (KVM) access during
your session, click Start Remote Control in Multi-user Mode, as seen in Figure 7-5.
For more information about the various controls that are available to control the server,
see User’s Guide for Integrated Management Module II, which is available here:
https://support.lenovo.com/docs/UM103336
By remote controlling your server, you also can remotely mount an ISO image. For more
information about how to remote mount an ISO image to your server, see 6.1, “Installing an
OS without a local optical drive” on page 226.
XClarity Administrator provides agent-free hardware management for System x rack servers, including the x3850 X6 and x3950 X6, and for Flex System compute nodes and components, including the Chassis Management Module and Flex System I/O modules.
XClarity Administrator is a virtual appliance that is quickly imported into a Microsoft Hyper-V or VMware virtualized environment, which gives easy deployment and portability. The tool offers out-of-band agentless management to reduce complexity, which means that no software agents need to be installed on the managed servers.
The administration dashboard is based on HTML 5. Such a foundation allows fast location of
resources so tasks can be run quickly.
In this section, we describe key functions and tasks of XClarity Administrator that are relevant
to the X6 servers. This section includes the following topics:
7.4.1, “X6 considerations” on page 250
7.4.2, “Discovering the IMM2 of an x3850 X6” on page 250
7.4.3, “Remote control” on page 253
7.4.4, “Hardware monitoring” on page 254
7.4.5, “Firmware updates” on page 256
7.4.6, “Operating system deployment” on page 257
7.4.1 X6 considerations
Consider the following points regarding the use of Lenovo XClarity Administrator with the
x3850 X6 and x3950 X6:
Only machine type 6241 of the x3850 X6 and x3950 X6 is supported. Machine type 3837
does not support XClarity Administrator.
XClarity Administrator Fix Pack 1 is required to perform firmware updates of the x3850 X6
and x3950 X6.
For the x3850 X6, the IMM2 must be connected to the network. For the x3950 X6, both
IMM2 ports must be connected to the network.
For the x3950 X6, you must set up automatic or custom partitioning by using the IMM
before deploying a server pattern to the partition.
Complete the following steps to discover and add the IMM2 to the XClarity Administrator
console:
1. Log in to the XClarity Administrator web interface by browsing to the following website,
where servername is the name or IP address of the virtual machine where XClarity
Administrator is running:
https://servername
For example:
https://xclarity-demo.lenovo.com
https://172.16.32.220
After you log in, the Dashboard page is displayed, as shown in Figure 7-6.
2. Select Hardware → Discover and Manage New Devices, as shown in Figure 7-7.
4. You also must specify the IMM2 credentials to gain access to the IMM2 and click Manage,
as shown in Figure 7-9.
When the discovery process finishes and XClarity Administrator discovers the IMM2, you
see a confirmation message, as shown in Figure 7-11.
5. Click Hardware → All Servers to see the discovered IMM2 that is now listed, as shown in
Figure 7-12.
From this point, you can perform management tasks against the X6 server.
Complete the following steps to open a remote control session to the server:
1. Select Hardware → All Servers, select the server from the list, and click All Actions →
Launch Remote Control, as shown in Figure 7-13 on page 254.
2. The remote control Java applet appears, which provides virtual keyboard, video, and
mouse to the server, as seen in Figure 7-14.
With the remote control feature, you can control the power to the systems, mount remote
media (including CD/DVD-ROMs, ISO-images, and USB devices), create your own
customized keyboard key sequences by using the soft key programmer, and open new
remote sessions to other servers that are managed by XClarity Administrator.
To open the alerts page of the managed server, select Hardware → All Servers, select the
server from the list, and click Alerts in the Status and Health section, as shown in
Figure 7-15.
You can also open the system event log of the server, check light path states, or power
consumption. For example, to open Power Consumption History graph, browse to Power and
Thermal of the Status and Health section, as shown in Figure 7-16.
Complete the following steps to update the firmware of one of servers that is managed by
Lenovo XClarity Administrator:
1. Click Provisioning → Apply / Activate under the Firmware Updates section, as shown in
Figure 7-17.
2. In the Firmware Updates: Apply / Activate page, select the required hardware components
to update, and click the Perform Updates icon, as shown in Figure 7-18.
3. An Update Summary window opens, in which you can set Update Rules and Activation Rules. As shown in Figure 7-19 on page 257, the Update Rule is set to “Stop all updates on error” and the Activation Rule is set to “Immediate activation”, which means that an immediate server restart is needed for new firmware activation.
Figure 7-19 Firmware update parameters
4. To start flashing, click Perform Update. When the Immediate activation option is chosen,
you must confirm the operation by clicking OK, as shown in Figure 7-20.
The firmware update process starts. You can check the firmware update status on the Jobs
page. Once the firmware has been successfully updated, the server will restart automatically
as requested.
For more information about how to mount and deploy an operating system image from
XClarity Administrator, see 6.1, “Installing an OS without a local optical drive” on page 226.
Lenovo XClarity Administrator also integrates with managers, such as VMware vSphere and
Microsoft System Center. This capability is described next.
Note that when call home is configured and enabled in Lenovo XClarity Administrator, call
home is disabled on all managed chassis and servers to avoid duplicate problem records
being created.
XClarity Administrator requires access to certain ports and Internet addresses for the call
home function to work. Table 7-2 and Table 7-3 list the required ports and IP address.
Table 7-2 Ports that must be open for the call home feature in Lenovo XClarity Administrator
Port 80, inbound/outbound. Affected devices: the support website (address 129.42.0.0/18). Purpose: used for HTTP and DDP file downloads for call home.
Port 443, inbound/outbound. Affected devices: client computers that access Lenovo XClarity Administrator. Purpose: used by HTTPS for web access and REST communications; the outbound direction is used for call home.
Table 7-3 Internet address that is used for the call home feature: esupport.ibm.com (129.42.0.0/18)
Complete the following steps to enable the call home feature in Lenovo XClarity
Administrator:
1. From the Lenovo XClarity Administrator menu bar, click Administration → Service and
Support.
2. Click the Call Home Configuration tab.
3. Fill in the required fields (marked with *) in the Configure call home section. Click Apply.
4. The Enable Call Home check box becomes available.
5. Click Enable Call Home.
For additional information on enabling call home and the features and functions, refer to the
Lenovo XClarity Administrator Planning and Implementation Guide:
http://lenovopress.com/sg248296
Each integrator integrates hardware predictive failure analysis (PFA) and microcode
management and diagnostics into standard hypervisors, which provides the following
capabilities:
Manage resources from virtualization console
Perform nondisruptive server updates
Perform nondisruptive server starts
Evacuate workloads on predicted hardware failure
The Lenovo XClarity Integrator plug-ins are available for the following virtualization platforms:
Lenovo XClarity Integrator for VMware vCenter (requires a license)
https://support.lenovo.com/documents/LNVO-VMWARE
Lenovo XClarity Integrator for VMware vRealize Orchestrator (free download)
https://support.lenovo.com/documents/LNVO-VMRO
Lenovo XClarity Integrator for VMware vRealize Log Insight (free download)
https://solutionexchange.vmware.com/store/products/lenovo-xclarity-administrator-content-pack-for-vmware-vrealize-log-insight
Lenovo XClarity Integrator for Microsoft System Center (requires a license)
https://support.lenovo.com/documents/LNVO-MANAGE
Lenovo XClarity Integrator for Zenoss (requires a license)
https://support.lenovo.com/documents/LVNO-ZENOSS
Note: The Lenovo XClarity Integrator requires a Lenovo customized ESXi version. The
Lenovo customized version can be downloaded from this website:
https://my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere/6_0#custom_iso
If you have a generic (non-customized) ESXi installation, download and install the Lenovo Customization for ESXi offline bundle.
This bundle enables all management functions. Without the customized version or offline
bundle installed, Lenovo XClarity Integrator for VMware vCenter provides limited
management functionality.
If you purchased Lenovo XClarity Administrator licenses and want to add integration with
VMware vCenter or Microsoft System Center, you can order the software license as listed in
Table 7-4 or Table 7-5 on page 260, depending on your location.
Note: Lenovo XClarity Integrators for VMware vCenter and Microsoft System Center are
included in the Lenovo XClarity Pro offerings.
Table 7-4 Lenovo XClarity Integrator part numbers per managed server (NA, AP, Canada, and Japan)
Lenovo XClarity Integrator per managed server, for Microsoft System Center or VMware vCenter, for United States, Asia Pacific, Canada, and Japan / Part number
Lenovo XClarity Integrator for MSSC, Per Managed Srv w/1Yr S&S 00MT275
Lenovo XClarity Integrator for MSSC, Per Managed Srv w/3Yr S&S 00MT276
Lenovo XClarity Integrator for MSSC, Per Managed Srv w/5Yr S&S 00MT277
Lenovo XClarity Integrator for MSSC, w/IMMV2ADv, Per Managed Srv w/1Yr S&S 00MT278
Lenovo XClarity Integrator for MSSC, w/IMMV2ADv, Per Managed Srv w/3Yr S&S 00MT279
Lenovo XClarity Integrator for MSSC, w/IMMV2ADv, Per Managed Srv w/5Yr S&S 00MT280
Lenovo XClarity Integrator for VMware vCenter, Per Managed Srv w/1Yr S&S 00MT281
Lenovo XClarity Integrator for VMware vCenter, Per Managed Srv w/3Yr S&S 00MT282
Lenovo XClarity Integrator for VMware vCenter, Per Managed Srv w/5Yr S&S 00MT283
Lenovo XClarity Integrator f/VMw vCtr w/IMMv2Adv, Per Managed Svr w/1Yr S&S 00MT284
Lenovo XClarity Integrator f/VMw vCtr w/IMMv2Adv, Per Managed Svr w/3Yr S&S 00MT285
Lenovo XClarity Integrator f/VMw vCtr w/IMMv2Adv, Per Managed Svr w/5Yr S&S 00MT286
Table 7-5 Lenovo XClarity Integrator part numbers per managed server (EMEA and Latin America)
Lenovo XClarity Integrator per managed server, for Microsoft System Center or VMware vCenter, for Europe, Middle East, Africa, and Latin America / Part number
Lenovo XClarity Integrator for MSSC, Per Managed Srv w/1Yr S&S 00MT287
Lenovo XClarity Integrator for MSSC, Per Managed Srv w/3Yr S&S 00MT288
Lenovo XClarity Integrator for MSSC, Per Managed Srv w/5Yr S&S 00MT289
Lenovo XClarity Integrator for MSSC, w/IMMV2ADv, Per Managed Srv w/1Yr S&S 00MT290
Lenovo XClarity Integrator for MSSC, w/IMMV2ADv, Per Managed Srv w/3Yr S&S 00MT291
Lenovo XClarity Integrator for MSSC, w/IMMV2ADv, Per Managed Srv w/5Yr S&S 00MT292
Lenovo XClarity Integrator for VMware vCenter, Per Managed Srv w/1Yr S&S 00MT293
Lenovo XClarity Integrator for VMware vCenter, Per Managed Srv w/3Yr S&S 00MT294
Lenovo XClarity Integrator for VMware vCenter, Per Managed Srv w/5Yr S&S 00MT295
Lenovo XClarity Integrator f/VMw vCtr w/IMMv2Adv, Per Managed Svr w/1Yr S&S 00MT296
Lenovo XClarity Integrator f/VMw vCtr w/IMMv2Adv, Per Managed Svr w/3Yr S&S 00MT297
Lenovo XClarity Integrator f/VMw vCtr w/IMMv2Adv, Per Managed Svr w/5Yr S&S 00MT298
Table 7-6 Zenoss part numbers for United States, Asia Pacific, Canada and Japan
Description / Part number
Zenoss Resource Manager Virtual Server (includes 1 year of service & support)
1 required for each virtual machine
Zenoss Resource Manager Physical Server (includes 1 year of service & support)
1 required for each network switch, storage array, and physical server without virtualization
Zenoss Service Dynamics Virtual Server (includes 1 year of service & support)
1 required for each virtual machine
Zenoss Service Dynamics Physical Server (includes 1 year of service & support)
1 required for each network switch, storage array, and physical server without virtualization
Table 7-7 Zenoss part numbers for Europe, Middle East, Africa & Latin America
Description / Part number
Zenoss Resource Manager Virtual Server (includes 1 year of service & support)
1 required for each virtual machine
Zenoss Resource Manager Physical Server (includes 1 year of service & support)
1 required for each network switch, storage array, and physical server without virtualization
Zenoss Service Dynamics Virtual Server (includes 1 year of service & support)
1 required for each virtual machine
Zenoss Service Dynamics Physical Server (includes 1 year of service & support)
1 required for each network switch, storage array, and physical server without virtualization
Figure 7-21 Start a firmware update with upward integration module (UIM)
2. Any virtual machines (VMs) that are running on the first server are moved from the first
server to another server, as shown in Figure 7-22.
2. The server is emptied and the workload resumes on a different server, as shown in
Figure 7-25.
For more information and a 90-day free trial of the Integrators, see the following resources:
Lenovo XClarity Integrator for VMware vCenter, v4.0.2:
https://support.lenovo.com/documents/LNVO-VMWARE
Lenovo XClarity Integrator Offerings for Microsoft System Center Management Solutions:
https://support.lenovo.com/documents/LNVO-MANAGE
7.6 Lenovo XClarity Energy Manager
Lenovo XClarity Energy Manager is a standalone piece of software that models the data center physical hierarchy and monitors power and temperature at the server level and at the group level. By analyzing the monitored power and temperature data, Energy Manager helps data center administrators improve business continuity and energy efficiency.
The following section discusses licensing, system requirements, and how to download,
install, and set up Energy Manager.
Note: A license file is bound to a Root Key and vice versa. This means that a license file can only be imported to an Energy Manager instance with the same Root Key shown in the About dialog. The Root Key is generated based on the OS information. If an OS is reconfigured (for example, a network configuration change) or if the OS is reinstalled, the Root Key might change accordingly. This implies that you might need to request a new license file based on the new Root Key.
To access the Energy Manager interface, the following web browsers are supported:
Mozilla Firefox 25
Google Chrome 31
Microsoft Internet Explorer 9 and above
Note: Energy Manager communicates with the managed nodes through multiple protocols
including IPMI, SSH, SNMP, WS-MAN, HTTPS, and DCOM (WMI). These protocols should
be allowed by the network firewall and the operating system firewall between Energy
Manager and the managed nodes.
Once downloaded, for Windows, run the installer and follow the wizard for the installation. For Linux, unzip the package and then launch the executable for the installation. The software is installed on the server from which you launch the installation package.
The following are the instructions for installing Energy Manager on Windows:
1. Run the EXE file you downloaded from the above web page.
2. At the standard InstallShield welcome screen, click Next.
3. Accept the license agreement and click Next.
4. Enter a User Name and Organization for where the software is being installed, as shown
in Figure 7-27. Click Next.
5. Specify the directory where the software is to be installed and click Next.
7. Specify the port to be used for the web service that is used to access the Energy Manager interface, as shown in Figure 7-29. By default, TLS (Transport Layer Security) is enabled. When TLS is enabled, Energy Manager communicates via port 8643 by default. If TLS is disabled, the communication from the browser is not secure; the port that is used by default when TLS is disabled is 8688. If this port is already in use, you can set a different port. Click Next to continue the installation.
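If the Windows firewall on the Energy Manager server blocks the chosen web port, open it before continuing. The following sketch assumes the default TLS port 8643 and uses the built-in netsh command:
netsh advfirewall firewall add rule name="XClarity Energy Manager web" dir=in action=allow protocol=TCP localport=8643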
8. Set the sampling frequency and granularity of data as shown in Figure 7-30.
The frequency setting refers to the interval at which data will be collected. Data can be set
to be collected every 30, 60, 180, 300, 360 or 600 seconds via the pulldown menu. In this
example frequency is set to 60, which means collection of data will occur every minute.
The granularity refers to the interval at which the reporting graphs will be updated with
new data. Granularity can be set to either 30, 60, 180, 300, 360, 600, 1800 or 3600
seconds. In this example, granularity is set to 180, so the tool's graphs will be updated every 3 minutes with the new data.
Click Next.
9. In Figure 7-31, enter a username and password. These are the credentials that you will need to access the web interface of Energy Manager. From the web interface, you can create and manage the data center hierarchy. Refer to 7.6.5, “Setting up Lenovo XClarity Energy Manager” on page 274 for information on the data center hierarchy.
10.Energy Manager has an embedded database server. As seen in Figure 7-33, you can set
the database attributes including username, an open/unused port, password and
installation path. Once set, click Next.
11. If changes to any settings are needed, click the Back button to make the changes now. Otherwise, to begin the installation, click Install as shown in Figure 7-34.
12.When the installation is complete, the installation wizard will display a successful
installation message. Click the Finish button to exit the wizard. You can now launch the
web interface of Energy Manager, as described in the next section.
Once logged in to Energy Manager, the dashboard is displayed, as shown in Figure 7-36. The Dashboard
provides the overall health status of the data center. It shows the current power and cooling
status, the historical power and temperature trend, the hot rooms and hotspots, the power
and space capacity information, and the critical events. These are displayed in individual
information boxes which are called gadgets. In Figure 7-36, the gadgets do not currently
display any data because we have not yet discovered any devices.
You can customize the Dashboard by adding and deleting gadgets that are of interest to you.
To add and delete gadgets, click the Select Gadgets button on the top right hand corner of
the Dashboard. Check or uncheck the checkbox next to each Gadget description to delete or
add that Gadget to the Dashboard, as shown in Figure 7-37 on page 274.
Energy Manager provides several ways to set up the data center hierarchy. The hierarchy is
as follows:
Data Centers: Where you can add rooms
Rooms: Where you can add rows
Rows: Where you can add racks
Racks: Where you can add devices (chassis, server, PDU, UPS, etc)
Devices: That are monitored by Energy Manager for power and temperature trends
To set up a hierarchy, click the Datacenter Management button from the left hand menu and
create entries for a Data Center, Room, Row and Rack as seen in Figure 7-38.
When adding a rack to the hierarchy, enter the total power available (in Watts) within that rack
as seen in Figure 7-39. To determine the total power available in your rack, refer to the PDU
Technical Reference Guides for information on the PDUs power capacities in your rack. The
PDU Technical Reference Guides are located at the following web page:
https://support.lenovo.com/documents/LNVO-POWINF
Check the box for PDU Power as Rack Power if you want to use the power reading of the
PDU(s) in your rack as the IT equipment power of the rack. Click OK to add the rack.
2. Import devices (or an entire hierarchy) from an xls: To import devices or the hierarchy from
an Excel file, start from the Import button in Devices and Racks as seen in Figure 7-41.
For the Excel file requirements, refer to the Lenovo XClarity Energy Manager User Guide
located at this URL:
http://support.lenovo.com/us/en/downloads/ds101160
Once you have devices discovered in Energy Manager, you can add them to the hierarchy
previously created.
Adding devices to the hierarchy
To add your discovered devices to a rack within the hierarchy, select the Datacenter, room,
row, and rack that you previously created in the Datacenter Management page. Click the +
icon to add the discovered device(s) to that rack, as seen in Figure 7-42.
The Datacenter Management page gives you complete control over the hierarchy allowing
you to add, edit, move and delete data centers, rooms, rows, racks and devices as needed.
The hierarchy is interactive and will update information displayed in the GUI based on the
selection made in the hierarchy. For instance, when a room is selected in the hierarchy,
information for the selected room is displayed in the Summary tab, as seen in Figure 7-44.
You can view stats as well as how many racks are in the room, and how many devices exist.
The rack itself in the Summary tab is interactive and will update information based on the
device selected in the rack. Information such as IP address, serial and model numbers and
power draw is displayed. If enabled, you can also turn the device on and off, as seen in
Figure 7-46.
From the tabs, temperature/power trending, policies and thresholds can also be viewed for
each individual server.
To create a group: On the Groups page, click the + button under the Group List to add a
group. Specify the name and an optional description in the popup dialog, and then click OK.
You will see your group added to the Group List.
You can also select a rack for all devices in the rack to be added to a group.
SNMP traps
SNMP traps can be used to assign a recipient to receive triggered events, which makes it easier to manage the events in third-party event management systems. Energy Manager events are defined in the Management Information Base (MIB) file, which is installed at “<installation path>\conf\DCMConsole-MIB-V1.mib”.
To add a trap receiver, go to the Settings page and click Add Receiver, fill in the Destination IP Address or Hostname, Port, and Community String fields, then click OK.
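To verify that traps arrive, you can run a test listener on the receiving host. The following sketch assumes a Linux receiver with the net-snmp snmptrapd daemon, that the DCMConsole-MIB-V1.mib file was copied into a local MIB directory so trap OIDs are translated, and that snmptrapd access control (for example, an authCommunity entry in snmptrapd.conf) permits the community string that is configured in Energy Manager:
# run a trap listener in the foreground, logging to stdout, with the copied MIB directory added to the search path
snmptrapd -f -Lo -M +/usr/local/share/snmp/mibs -m ALL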
Email alerting
Energy Manager allows you to subscribe to alerts. This is done in the Settings page under the “Email Subscription” tab. To subscribe to alerts and events: Go to the Settings page → click Add Subscriber → fill in the email server configuration → check “Subscribe threshold-based events only” if you want threshold-based events only → click OK.
Setting policies
You can use policies to limit the amount of power that an entity consumes. A policy can be applied in two ways: to a group entity or to an individual device.
To set a policy: Click the Datacenter Management page. Select a device from the hierarchy
by selecting data center → room → row → rack → device. Click the Policies tab. Click the
Add button.
In the popup dialog, specify the policy name and select the policy type from the drop-down
list. There are two types of policies available:
Custom Power Limit: If this is selected, Energy Manager will generate an alert when the
actual power consumption is higher than the threshold you configured.
Minimum Power: If this is selected, Energy Manager throttles the device power to the
minimum (so you do not need to specify a threshold).
Set the schedule for the policy as either a permanent policy, for a specific time or as
recurrent. Figure 7-48 displays the policy page.
Setting thresholds
When a threshold is set, it monitors a device or group for that condition. When a condition is met or exceeded, an event is triggered and listed in the Summary tab. There are two types of thresholds that can be set:
Power thresholds: Collected data is compared with the device or group power
consumption (in units of Watts)
Temperature thresholds: Collected data is compared with the device or group temperature (in units of degrees Celsius)
To set a threshold: Click the Datacenter Management page. Select a device from the
hierarchy by selecting data center → room → row → rack → device. Click the Thresholds
tab. Click the Edit option to set the threshold. Figure 7-49 displays the Thresholds page.
Cooling Analysis
The cooling analysis page provides real-time monitoring data of the inlet temperatures of each device. The results are published in a bar graph where the X-axis represents temperature values and the Y-axis represents the percentage of servers at the corresponding temperature, as seen in Figure 7-51.
Energy Manager will identify the servers that are causing hotspots in the data center and suggest an action to eliminate the hotspot, as seen in Figure 7-52. In this example, there are 6 servers with inlet temperatures that are higher than 27 degrees Celsius.
Low-utilization servers
Energy Manager tries to identify low-utilization servers based on out-of-band power data.
Using a set of heuristics, it estimates server utilization from the power history and the raw
data that is collected, and then identifies low-utilization servers from those utilization
statistics.
The X-axis shows the power values and the Y-axis shows the server model. The descriptions
next to the bars represent the power ranges measured for those server models.
In this example, the top bar reads 163-469. This means that, for all servers of this particular
model, the lowest power observed was 163 Watts and the highest power observed was
469 Watts.
Click a bar to show detailed power characteristics for that server type, as seen in
Figure 7-55, which shows a breakdown of how peak power and idle power are distributed.
Workload placement
The workload placement page evaluates how likely a server is to be able to accommodate a
new workload, based on the server's current resource utilization and availability and the
resources that the new workload needs.
For more information about the UIM for Zenoss, see this web page:
http://www.zenoss.com/solution/lenovo
7.8 Advanced Settings Utility
By using the Advanced Settings Utility (ASU), you can modify your server firmware settings
from a command line. It supports multiple operating systems, such as Linux, Solaris, and
Windows, including Windows Preinstallation Environment (PE). UEFI and IMM2 firmware
settings can be modified on the X6 platform.
ASU supports scripting environments through its batch-processing mode. For more
information about downloading the ASU and the Advanced Settings Utility User’s Guide, see
the following website:
https://support.lenovo.com/documents/LNVO-ASU
In IMM2-based servers, you configure all firmware settings through the IMM2. The ASU can
connect to the IMM2 locally (in-band) through the keyboard console style (KCS) interface or
through the LAN over USB interface. The ASU can also connect remotely over the LAN
(out-of-band).
When the ASU runs any command on an IMM2-based server, it attempts to connect and
automatically configure the LAN over USB interface if it detects that this interface is not
configured. The ASU also provides a level of automatic and default settings. You can specify
that the automatic configuration process is skipped if you manually configured the IMM2
LAN over USB interface. We advise that you let the ASU configure the LAN over USB
interface.
Complete the following steps to download, install, and connect to the IMM2 by using a
Windows operating system:
1. Create a directory that is named ASU.
2. Download the ASU Tool for your operating system (32-bit or 64-bit) from the following web
page and save it in the ASU directory:
https://support.lenovo.com/documents/LNVO-ASU
3. Unpack the utility:
– For Windows, double-click the filename.exe, where filename is the name for the
Advanced Settings Utility file for Windows that you downloaded.
– For Linux, from a shell command prompt, enter one of the following commands and
press Enter:
If the .tgz file for ASU was downloaded, use the tar -zxvf filename.tgz command,
where filename is the name of the Advanced Settings Utility file for Linux that you
downloaded. The files are extracted to the same directory.
If the .rpm file for ASU was downloaded, use the rpm -Uvh filename.rpm command,
where filename is the name of the Advanced Settings Utility file for Linux that you
downloaded. The files are extracted to the /opt/IBM/toolscenter/asu directory.
4. Run a command, such as the asu show command, in-band or out-of-band, by using the
commands that are listed in Table . This command confirms that the connection and utility
work.
For in-band, run the asu show --kcs command.
For out-of-band, complete the following steps:
a. Ping the IMM2 to ensure that you have a network connection to the IMM2. The default
IP address is 192.168.70.125.
b. Run the following command:
asu show --host target_IMM_external_IP_address --user target_IMM_User_ID
--password target_IMM_password
Note: If the ASU is connecting remotely to the IMM2 over the LAN, there is no requirement
for the remote operating system of the targeted IMM2 to be online. The ASU can connect
to the IMM2 remotely when the server is connected to power or is using standby power.
Show all settings
At the command line, enter the asu show command. The command output is shown in
Example 7-1.
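Example 7-1 is not reproduced here. The following lines are a minimal sketch of typical ASU
usage; the setting name (uEFI.OperatingMode), the value, and the credentials are illustrative
assumptions, so confirm them with asu show on your own system:
asu show --kcs
asu showvalues uEFI.OperatingMode --host 192.168.70.125 --user USERID --password PASSW0RD
asu set uEFI.OperatingMode "Maximum Performance" --host 192.168.70.125 --user USERID --password PASSW0RD
The first command lists all settings in-band through the KCS interface; the second lists the
values that a setting accepts (out-of-band over the LAN); the third changes the setting.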
With MSM, you can configure, monitor, and maintain storage configurations on ServeRAID-M
controllers. The MegaRAID Storage Manager GUI makes it easy to create and manage
storage configurations.
You can use MSM to manage local or remote RAID controllers and configure MSM for remote
alert notifications. A command-line interface also is available.
To download the latest MegaRAID Storage Manager software and the Installation and User’s
Guide, see this website:
https://support.lenovo.com/documents/MIGR-5073015
Unconfigured Good
This drive functions normally but is not configured.
Hot Spare
This drive is powered up and ready for use as a spare if an online drive fails. This drive
can be Dedicated or Global Hot Spare.
Failed
A failed drive was originally configured as Online or Hot Spare, but the firmware detected
an unrecoverable error.
Rebuild
Data is written to this drive to restore full redundancy for a virtual drive.
Unconfigured Bad
The firmware detects an unrecoverable error on this drive.
Missing
This drive was online but was removed from its location.
Offline
This drive is part of a virtual drive but has invalid data as far as the RAID configuration is
concerned.
Note: StorCLI is the successor to MegaCLI; however, MegaCLI commands can also be run
through the Storage Command Line (StorCLI) tool. A single binary supports both the
StorCLI commands and their MegaCLI equivalents.
Complete the following steps to create a virtual drive by using the CLI:
1. Run the following command to locate the Enclosure Device ID and the Slot Number of
both hard disk drives:
MegaCli -PDList -aAll
2. Run the following command to create a RAID-1 virtual drive:
MegaCli -CfgLDAdd -R1[252:1,252:2] -a0
Example 7-5 shows the resulting output.
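For comparison, the following line is a hedged sketch of the equivalent StorCLI command,
assuming controller 0 and the same enclosure:slot pairs as in the MegaCli example above:
storcli /c0 add vd type=raid1 drives=252:1-2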
The following table maps common MegaCLI commands to their StorCLI equivalents:
MegaCLI                                              StorCLI                                Function
MegaCli -AdpAllinfo -aALL                            storcli /cx show all                   Display controller properties for all installed adapters
MegaCLI -adpCount                                    storcli show ctrlcount                 Display the number of connected controllers
MegaCLI -CfgFreeSpaceinfo -aN|-a0,1,2|-aALL          storcli /cx show freespace             Display available free space that is on the controller
MegaCLI -GetPreservedCacheList -aALL                 storcli /cx show all                   Display preserved cache status
MegaCLI -AdpGetTime -aN                              storcli /cx show time                  Display the controller time
MegaCli -AdpBIOS -Dsply -aALL                        storcli /cx show bios                  Schedule a consistency check
MegaCLI -AdpCcSched -Info                            storcli /cx show cc/ConsistencyCheck   Display consistency check and parameters in progress, if any
MegaCli -AdpBbuCmd -GetBbuStatus -aN|-a0,1,2|-aALL   storcli /cx/bbu show status            Display battery information, firmware status, and the gas gauge status
As shown in Figure 7-58, using the verbose (-v) mode of the lspci command, you can get
the PCIe slot number and the serial number of the drive, which we use later during the
drive replacement procedure.
2. Check that every NVMe drive has an associated block device. Use the ls command to
locate the NVMe block devices in the /dev directory, as shown in Figure 7-59:
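The figures are not reproduced here; the following commands are a minimal sketch that
provides the same information (exact output varies by system):
lspci -vv | grep -i -A 20 "Non-Volatile memory controller"
ls -l /dev/nvme*
The first command lists the NVMe controllers together with details such as the physical
slot; the second lists the NVMe character devices (nvmeX) and block devices (nvmeXn1).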
Using the mdadm utility, you can initialize a new array, /dev/md0. In this example, we create
a RAID-5 array consisting of four drives (/dev/nvme0n1, /dev/nvme1n1, /dev/nvme2n1, and
/dev/nvme3n1), as shown in Figure 7-61:
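Figure 7-61 is not reproduced here; the following line is a sketch of the typical command,
assuming the device names above:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1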
To check the status of the array, run the following commands, as shown in Figure 7-62 and
Figure 7-63 on page 298:
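A sketch of the usual status commands (the figures show their full output):
cat /proc/mdstat
mdadm --detail /dev/md0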
(In this example, the mdadm --detail output shows a left-symmetric layout with a 512 KB
chunk size.)
As you can see in the previous example (Figure 7-63), the total array size is 4480.56 GB,
all drives are active and in the sync state, and the array has no failed drives.
4. You can also use the mdadm command to generate a configuration file, as shown in Figure 7-64:
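Figure 7-64 is not reproduced here; a common way to generate the configuration file (the
file location is distribution-specific; /etc/mdadm.conf is typical for RHEL) is:
mdadm --detail --scan >> /etc/mdadm.conf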
When you complete this procedure, you will have a working software RAID array. You can use
it as a regular block device; that is, you can create partitions and file systems on it and
mount it in the file system tree.
7.10.2 NVMe drive hot-replacement in Linux
In this section, we cover the hot-replacement procedure for failed NVMe drives in RHEL 7.2.
Hot-replacement means that we perform a graceful hot-swap procedure on the running
system with no interruption in service or downtime.
Note: Not every Linux distribution supports NVMe drive hot-replacement; it depends on
the Linux kernel version. Here is the list of distributions and Linux kernels that were
validated at the time of writing:
RHEL 7.0 and higher, kernel 3.10.0-123.el7.x86_64 and higher
RHEL 6.6 and higher, kernel 2.6.32-500.el6.x86_64 and higher
SLES 12, kernel 3.12.28-2 rc 3 and higher
According to the Intel white paper Hot-Plug Capability of NVMe SSDs in Server Platforms, you
must set the following kernel parameter: pci=pcie_bus_perf. To do that, add this parameter
as a kernel boot argument in the bootloader configuration file (grub.cfg or elilo.conf).
In this section, we simulate the outage of one of the NVMe drives to demonstrate the
hot-replacement concept. Follow the procedure below to perform a graceful NVMe drive
hot-replacement:
1. Make sure that the required Linux kernel is running and that the pci kernel parameter has
the required value.
2. Run the following commands to check the running kernel version and its boot parameters,
as shown in Figure 7-65:
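Figure 7-65 is not reproduced here; a minimal sketch of the checks:
uname -r
cat /proc/cmdline
The first command prints the running kernel version; the second prints the kernel boot
parameters, which should include pci=pcie_bus_perf.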
As you can see in Figure 7-67, the nvme1n1 drive is in the failed state and the array now has
only three active drives.
4. Determine the PCIe address and PCIe slot number used by the failed drive
Run two commands to locate the failed NVMe drive in the server. First, find the PCIe
address of the nvme1n1 drive. To do that, run the command shown in Figure 7-68:
As you can see, the failed nvme1n1 drive has PCIe address 0000:49:00.0. To determine
the PCIe slot number of the failed drive, use the lspci command, as shown in Figure 7-58
on page 296, or run the following command, as shown in Figure 7-69:
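Figures 7-68 and 7-69 are not reproduced here; the following commands are a hedged
sketch that returns the same information (the sysfs path layout can differ slightly between
kernel versions):
ls -l /sys/block/nvme1n1
lspci -vv -s 0000:49:00.0 | grep -i "physical slot"
The symlink target of /sys/block/nvme1n1 contains the PCIe address of the drive's
controller; the lspci command then reports the physical slot for that address.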
As you can see, both commands show the same result – the nvme1n1 drive is located in
PCIe slot 19, the upper drive bay in the Storage book (see Figure 5-38 on page 186).
5. Power off the failed NVMe drive
To gracefully power off the failed NVMe drive located in PCIe slot 19, run the following
command, as shown in Figure 7-70:
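Figure 7-70 is not reproduced here; assuming the kernel exposes the slot through the PCIe
hotplug driver, the sysfs interface for powering off slot 19 looks like this (sketch):
echo 0 > /sys/bus/pci/slots/19/power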
Verify that the drive is shut down and is no longer represented in the OS by using the lspci
and lsblk commands, as shown in Figure 7-71:
As you can see, lspci shows that only three NVMe drives are now available; lsblk also
shows three drives (nvme0n1, nvme2n1, and nvme3n1), which are combined in the RAID-5 array.
6. Replace the failed NVMe drive
As shown previously, the failed nvme1n1 drive is located in PCIe slot 19, in the Storage
Book bay 7. Now it is safe to remove the NVMe drive from the Storage Book. Then insert
the new drive (same model and capacity) into the Storage Book.
If you insert the new NVMe drive in the same Storage Book bay where the failed drive was
located, the PCIe slot number remains the same; in this example it is PCIe slot 19.
2. Ensure that the new drive has been successfully started and recognized by the OS by
using the lspci and lsblk commands, as shown in Figure 7-73:
As you can see, lspci shows that the new NVMe drive in PCIe slot 19 has been
recognized by the Linux kernel. The lsblk command also shows that the appropriate
block device, nvme1n1, has been created. However, notice that nvme1n1 is not associated
with an array yet.
3. Recover the software RAID array
The existing RAID-5 array is now in a degraded state: only 3 of the 4 drives are available and
active. You must add the new drive to the array to recover it. To do that, run the following command:
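The command figure is not reproduced here; the following line is a sketch of the typical
command, assuming the array is /dev/md0 and the replacement device is nvme1n1:
mdadm /dev/md0 --add /dev/nvme1n1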
You can check the RAID status by using the commands shown in Figure 7-75.
As you can see in Figure 7-75, the nvme1n1 drive has been added successfully and the array
has started the recovery process. Recovery can take some time and can affect array
performance. When it is done, you will have a redundant and fully operational RAID array.
NVMe drive and software RAID initialization in Windows Server 2012 R2 is simple and
straightforward, so we do not cover that procedure in this section. For more information about
NVMe drive initialization in Windows, refer to 5.4.4, “Using NVMe drives with Microsoft
Windows Server 2012 R2” on page 192.
To demonstrate the hot-replacement procedure, assume there are four NVMe drives installed
in the server and combined into one array by using software RAID-5. The Windows Disk
Management tool shows disk X: (volume label “NVMe”), a dynamic RAID-5 volume with a
capacity of 4470.87 GB. The status of the volume is Healthy, and all related NVMe drives are
online, as shown in Figure 7-76:
To perform the hot-replacement and RAID recovery operations, follow the procedure
described below:
1. Put the failed drive in Offline mode
To do that, right-click Disk 1 and choose Offline from the pop-up menu, as shown in
Figure 7-78.
There are several steps to find out the physical location of a failed NVMe drive in the
Storage book.
a. Select the failed disk (Disk 1) in Disk Manager and open the Properties window, as
shown in Figure 7-79:
b. In the Properties window, on the General tab you can see the PCIe slot number of the
associated NVMe drive, as shown in Figure 7-80:
As you can see on the General tab, the drive location is 19. That means the NVMe drive is
located in PCIe slot 19 (bay 7 in the Storage Book). For more information about NVMe
drive locations in the Storage Book, refer to 5.4.1, “NVMe drive placement” on page 184.
2. Power off the failed NVMe drive
Next, shut down the device from the OS. Open the Devices and Printers window. All NVMe
drives installed in the server should appear, as shown in Figure 7-81:
Right-click on one of the NVMe drives and select Properties from the pop-up menu, as
shown in Figure 7-82:
Check the Hardware tab of the Properties window and locate the NVMe drive in Location
19, as shown in Figure 7-83:
As you can see, only three NVMe drives are visible to the OS. When the scanning process
is finished, you will see a new NVMe drive in the list of available devices.
6. Repair the volume
When disk initialization is complete, you can start the array recovery procedure. To do that,
right-click the NVMe volume (disk X:) and select the Repair Volume option from the
pop-up menu:
Choose the new drive (Disk 1) to replace the failed drive in the array, as shown in
Figure 7-90:
When the resyncing process is complete, the array will become redundant and fully
operational again.
SoL gives you remote access to your X6 server's UEFI and power-on self-test (POST)
messages. By using SoL, you can log in to the machine remotely. It also gives you access
to special operating system functions during startup.
In the x3850 X6, the serial port is shared with the IMM2. The IMM2 can take control of the
shared serial port to perform text console redirection and to redirect serial traffic by using
SoL.
7.11.3, “Starting an SoL connection” on page 317
General COM port settings for the terminal program:
Data Bits: 8
Parity: None
Stop Bits: 1
Tip: Terminal Emulation can be set to VT100 or ANSI; however, when Linux operating
systems are configured, ensure that the OS settings match the terminal emulation that is
selected in the hardware.
Boot Entries
------------
Boot entry ID: 1
OS Friendly Name: Windows Server 2003, Enterprise
Path: multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
OS Load Options: /noexecute=optout /fastdetect
C:\>
3. Examine the output. If there is more than one boot entry, determine the default entry.
4. Enable EMS by using the bootcfg /ems on /port com2 /baud 115200 /id 1 command.
As shown in Example 7-7, the default boot entry has the ID 1.
Example 7-7 Output of the bootcfg /ems on /port com2 /baud 115200 /id 1 command
C:\>bootcfg /ems on /port com2 /baud 115200 /id 1
SUCCESS: Changed the redirection port in boot loader section.
SUCCESS: Changed the redirection baudrate in boot loader section.
SUCCESS: Changed the OS entry switches for line "1" in the BOOT.INI file.
5. Run bootcfg again to verify that the EMS is activated, as shown in Example 7-8.
Boot Entries
------------
Boot entry ID: 1
OS Friendly Name: Windows Server 2003, Enterprise
Path: multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
OS Load Options: /noexecute=optout /fastdetect /redirect
C:\>
Windows Server 2012
The Microsoft EMS is enabled on servers. Use the following command syntax for more
functions on Windows Server 2012:
bootcfg /ems {ON | OFF | EDIT} [/s <Computer> [/u <Domain>\<User> /p <Password>]]
[/port {COM1 | COM2 | COM3 | COM4 | BIOSSET}] [/baud {9600 | 19200 | 38400 | 57600
| 115200}] [/id <OSEntryLineNum>]
IMM setting
Complete the following steps to change the CLI mode for the COM port for use with EMS:
1. Log in to the web interface of the IMM2.
2. Browse to IMM Management → IMM Properties, as seen in Figure 7-93.
4. Change the CLI mode to CLI with EMS compatible keystroke sequences.
5. Click Apply to save the changes.
For more information about Microsoft EMS and the SAC, see the following documents:
Boot Parameters to Enable EMS Redirection:
http://msdn.microsoft.com/en-us/library/ff542282.aspx
Special Administration Console (SAC) and SAC commands:
http://msdn.microsoft.com/en-us/library/cc785873
RHEL 6: If you installed RHEL 6 in UEFI mode, you must edit the
/boot/efi/EFI/redhat/grub.conf file instead of the /boot/grub/menu.lst file.
Menu.lst or grub.conf
Add the console=ttyS1,115200n8 parameter (highlighted in bold in Example 7-9) to the
kernel line of the file, as shown in Example 7-9.
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,1)
# kernel /vmlinuz-version ro root=/dev/mapper/VolGroup-lv_root
# initrd /initrd-[generic-]version.img
#boot=/dev/sda1
device (hd0) HD(1,800,64000,699900f5-c584-4061-a99f-d84c796d5c72)
default=0
timeout=5
splashimage=(hd0,1)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux (2.6.32-71.el6.x86_64)
root (hd0,1)
kernel /vmlinuz-2.6.32-71.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_LVM_LV=VolGroup/lv_root rd_LVM_LV=VolGroup/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=de crashkernel=auto console=ttyS1,115200n8 rhgb quiet
initrd /initramfs-2.6.32-71.el6.x86_64.img
[root@localhost redhat]#
/etc/inittab
Add the parameter that is highlighted in bold at the end of the /etc/inittab file, as shown in
Example 7-10.
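Example 7-10 is not reproduced here. The following line is an illustrative sketch of a typical
serial console entry for a SysV-style /etc/inittab, not necessarily the exact entry from the
example:
S1:2345:respawn:/sbin/agetty ttyS1 115200 vt100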
Abbreviations and acronyms
AC  alternating current
ACID  atomicity, consistency, isolation, and durability
ACPI  advanced control and power interface
AES  Advanced Encryption Standard
AES-NI  Advanced Encryption Standard New Instructions
AMM  Advanced Management Module
ANSI  American National Standards Institute
APIC  Advanced Programmable Interrupt Controller
ASU  Advanced Settings Utility
BIOS  basic input output system
BM  bridge module
BMC  Baseboard Management Controller
BTU  British Thermal Unit
CD  compact disk
CIM  Common Information Model
CLI  command-line interface
CMOS  complementary metal oxide semiconductor
CNA  Converged Network Adapter
COM  Component Object Model
CPU  central processing unit
CRC  cyclic redundancy check
CRM  Customer Relationship Management
CRU  customer replaceable units
CTO  configure-to-order
DC  domain controller
DCS  Data Center Services
DCU  data cache unit
DDR  Double Data Rate
DHCP  Dynamic Host Configuration Protocol
DIMM  dual inline memory module
DNS  Domain Name System
DSA  Dynamic System Analysis
DVD  Digital Video Disc
DW  data warehousing
ECC  error checking and correcting
EIA  Electronic Industries Alliance
EMEA  Europe, Middle East, Africa
EMS  Emergency Messaging Service
ERP  enterprise resource planning
ESA  Electronic Service Agent
ETS  Enhanced Technical Support
FC  Fibre Channel
FDR  fourteen data rate
FSM  Flex System Manager
GB  gigabyte
GPU  Graphics Processing Unit
GT  Gigatransfers
GUI  graphical user interface
HBA  host bus adapter
HD  high definition
HDD  hard disk drive
HPC  high performance computing
HS  hot-swap
HTTP  Hypertext Transfer Protocol
HV  high voltage
I/O  input/output
IB/E  InfiniBand/Ethernet
IBM  International Business Machines
ID  identifier
IEC  International Electrotechnical Commission
IMM  integrated management module
IOPS  I/O operations per second
IP  Internet Protocol
IPMI  Intelligent Platform Management Interface
ISD  IBM Systems Director
ISO  International Organization for Standards
IT  information technology
ITSO  International Technical Support Organization
JBOD  just a bunch of disks
KB  kilobyte
KCS  keyboard console style
UXSPI  UpdateXpress System Packs Installer
VAC  Volts AC
VD  virtual drive
VFA  Virtual Fabric Adapter
VGA  video graphics array
VLAN  virtual LAN
VM  virtual machine
VPD  vital product data
VPI  Virtual Protocol Interconnect
VT  Virtualization Technology
WW  world wide
The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this book.
Product Guides
The following Product Guides are available:
Lenovo System x3850 X6:
http://lenovopress.com/tips1250
Lenovo System x3950 X6:
http://lenovopress.com/tips1251
Online resources
For more information, see the following resources:
Lenovo System x3850 X6 and x3950 X6 product pages:
http://shop.lenovo.com/us/en/systems/servers/mission-critical/x3850-x6/
http://shop.lenovo.com/us/en/systems/servers/mission-critical/x3950-x6/
Lenovo Information Center
– Installation and Service Guide
– Rack Installation Instructions
http://publib.boulder.ibm.com/infocenter/systemx/documentation/topic/com.lenovo
.sysx.6241.doc/product_page.html
ServerProven hardware compatibility page for the x3850 X6 and x3950 X6
E7 v2: http://www.lenovo.com/us/en/serverproven/xseries/6241.shtml
E7 v3: http://www.lenovo.com/us/en/serverproven/xseries/6241E7xxxxV3.shtml
E7 v4: http://www.lenovo.com/us/en/serverproven/xseries/6241E7xxxxV4.shtml
Power Guides:
https://support.lenovo.com/documents/LNVO-POWINF
Power Configurator:
https://support.lenovo.com/documents/LNVO-PWRCONF
Configuration and Option Guide:
https://support.lenovo.com/documents/SCOD-3ZVQ5W
xREF - System x Reference:
http://lenovopress.com/xref
Lenovo Support Portal:
– x3850 X6:
http://support.lenovo.com/us/en/products/Servers/Lenovo-x86-servers/Lenovo-S
ystem-x3850-X6/6241
– x3950 X6:
http://support.lenovo.com/us/en/products/Servers/Lenovo-x86-servers/Lenovo-S
ystem-x3950-X6/6241
IBM System Storage Interoperation Center:
http://www.ibm.com/systems/support/storage/ssic