Windows Server 2022 Performance Tuning Guidelines
When you run a server system in your organization, you might have business needs not met using default
server settings. For example, you might need the lowest possible energy consumption, or the lowest possible
latency, or the maximum possible throughput on your server. This guide provides a set of guidelines that you
can use to tune the server settings in Windows Server 2022 and obtain incremental performance or energy
efficiency gains, especially when the nature of the workload varies little over time.
It is important that your tuning changes consider the hardware, the workload, the power budgets, and the
performance goals of your server. This guide describes each setting and its potential effect to help you make an
informed decision about its relevance to your system, workload, performance, and energy usage goals.
WARNING
Registry settings and tuning parameters changed significantly between versions of Windows Server. Be sure to use the
latest tuning guidelines to avoid unexpected results.
In this guide
This guide organizes performance and tuning guidance for Windows Server 2022 across the following tuning
categories:
Hardware performance considerations
Active Directory Servers
Cache and memory management
Web Servers
Server Hardware Performance Considerations
The following section lists important items that you should consider when you choose server hardware.
Following these guidelines can help remove performance bottlenecks that might impede the server's
performance.
Processor Recommendations
Choose 64-bit processors for servers. 64-bit processors have significantly more address space, and are required
for Windows Server 2022. No 32-bit editions of the operating system will be provided, but 32-bit applications
will run on the 64-bit Windows Server 2022 operating system.
To increase the computing resources in a server, you can use a processor with higher-frequency cores, or you
can increase the number of processor cores. If CPU is the limiting resource in the system, a core with 2x
frequency typically provides a greater performance improvement than two cores with 1x frequency.
Multiple cores are not expected to provide a perfect linear scaling, and the scaling factor can be even less if
hyper-threading is enabled because hyper-threading relies on sharing resources of the same physical core.
IMPORTANT
Match and scale the memory and I/O subsystem with the CPU performance, and vice versa.
Do not compare CPU frequencies across manufacturers and generations of processors because the comparison
can be a misleading indicator of speed.
For Hyper-V, make sure that the processor supports SLAT (Second Level Address Translation). It is implemented
as Extended Page Tables (EPT) by Intel and Nested Page Tables (NPT) by AMD. You can verify this feature is
present by using SystemInfo.exe on your server.
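For example, a quick check from PowerShell (the filtering is a convenience; the relevant line appears under "Hyper-V Requirements" in the full systeminfo output):

```
# Look for "Second Level Address Translation: Yes" in the systeminfo output.
# Note: if a hypervisor is already running, systeminfo instead reports
# "A hypervisor has been detected" and omits the per-feature lines.
systeminfo | Select-String "Second Level Address Translation", "hypervisor"
```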
Cache Recommendations
Choose large L2 or L3 processor caches. On newer architectures, such as Haswell or Skylake, there is a unified
Last Level Cache (LLC) or an L4. The larger caches generally provide better performance, and they often play a
bigger role than raw CPU frequency.
Memory Recommendations
Increase the RAM to match your memory needs. When your computer runs low on memory and it needs more
immediately, Windows uses hard disk space to supplement system RAM through a procedure called paging. Too
much paging degrades the overall system performance. You can optimize paging by using the following
guidelines for page file placement:
Isolate the page file on its own storage device, or at least make sure it doesn't share the same storage
devices as other frequently accessed files. For example, place the page file and operating system files on
separate physical disk drives.
Place the page file on a drive that is fault-tolerant. If a non-fault-tolerant disk fails, a system crash is likely
to occur. If you place the page file on a fault-tolerant drive, remember that fault-tolerant systems are often
slower to write data because they write data to multiple locations.
Use multiple disks or a disk array if you need additional disk bandwidth for paging. Do not place multiple
page files on different partitions of the same physical disk drive.
Disk Recommendations
Choose disks with higher rotational speeds to reduce random request service times (~2 ms on average when
you compare 7,200- and 15,000-RPM drives) and to increase sequential request bandwidth. However, there are
cost, power, and other considerations associated with disks that have high rotational speeds.
2.5-inch enterprise-class disks can service a significantly larger number of random requests per second
compared to equivalent 3.5-inch drives.
Store frequently accessed data, especially sequentially accessed data, near the beginning of a disk because this
roughly corresponds to the outermost (fastest) tracks.
Consolidating small drives into fewer high-capacity drives can reduce overall storage performance. Fewer
spindles mean reduced request service concurrency and, therefore, potentially lower throughput and longer
response times (depending on the workload intensity).
SSDs and other high-speed flash disks are useful for read-mostly workloads with high I/O rates or latency-
sensitive I/O. Boot disks are good candidates for SSDs or high-speed flash disks because they can improve boot
times significantly.
NVMe SSDs offer superior performance with greater command queue depths, more efficient interrupt
processing, and greater efficiency for 4 KB commands. This particularly benefits scenarios that require heavy
simultaneous I/O.
See Also
Server Hardware Power Considerations
Overview about power and performance tuning for the Windows Server
Processor Power Management (PPM) tuning for the Windows Server balanced power plan
Server Hardware Power Considerations
Energy efficiency is increasingly important in enterprise and data center environments. High performance and
low energy usage are often conflicting goals, but by carefully selecting server components, you can achieve the
correct balance between them. The following sections list guidelines for power characteristics and capabilities
of server hardware components.
Processor Recommendations
Frequency, operating voltage, cache size, and process technology affect the energy consumption of processors.
Processors have a thermal design point (TDP) rating that gives a basic indication of energy consumption relative
to other models.
In general, opt for the lowest TDP processor that will meet your performance goals. Also, newer generations of
processors are generally more energy efficient, and they may expose more power states for the Windows power
management algorithms, which enables better power management at all levels of performance. Or they may
use some of the new "cooperative" power management techniques that Microsoft has developed in partnership
with hardware manufacturers.
For more info on cooperative power management techniques, see the section named Collaborative Processor
Performance Control in the Advanced Configuration and Power Interface Specification.
Memory Recommendations
Memory accounts for an increasing fraction of the total system power. Many factors affect the energy
consumption of a memory DIMM, such as memory technology, error correction code (ECC), bus frequency,
capacity, density, and number of ranks. Therefore, it is best to compare expected power ratings before
purchasing large quantities of memory.
Low-power memory is now available, but you must consider the performance and cost trade-offs. If your server
will be paging, you should also factor in the energy cost of the paging disks.
Disks Recommendations
Higher RPM means increased energy consumption. SSD drives are more power efficient than rotational drives.
Also, 2.5-inch drives generally require less power than 3.5-inch drives.
Fan Recommendations
Fans, like power supplies, are an area where you can reduce energy consumption without affecting system
performance. Variable-speed fans can reduce RPM as the system load decreases, eliminating otherwise
unnecessary energy consumption.
Processor terminology
The processor terminology used throughout this topic reflects the following hierarchy of components, listed
from largest to smallest granularity:
Processor socket
NUMA node
Core
Logical processor
Additional References
Server Hardware Performance Considerations
Overview about power and performance tuning for the Windows Server
Processor Power Management (PPM) tuning for the Windows Server balanced power plan
Power and performance tuning
Energy efficiency is increasingly important in enterprise and data center environments, and it adds another set
of tradeoffs to the mix of configuration options. When managing servers, it’s important to ensure that they are
running as efficiently as possible while meeting the performance needs of their workloads. Windows Server is
optimized for excellent energy efficiency with minimum performance impact across a wide range of customer
workloads. Processor Power Management (PPM) Tuning for the Windows Server Balanced Power Plan describes
the workloads used for tuning the default parameters in multiple Windows Server versions, and provides
suggestions for customized tunings.
This section expands on energy-efficiency tradeoffs to help you make informed decisions if you need to adjust
the default power settings on your server. However, the majority of server hardware and workloads should not
require administrator power tuning when running Windows Server.
You can use an energy efficiency metric, such as useful work done per watt, to set practical goals that respect
the tradeoff between power and performance. In contrast, a goal of 10 percent energy savings across the data
center fails to capture the corresponding effects on performance, and vice versa.
Similarly, if you tune your server to increase performance by 5 percent, and that results in 10 percent higher
energy consumption, the total result might or might not be acceptable for your business goals. The energy
efficiency metric allows for more informed decision making than power or performance metrics alone.
You can use load lines to evaluate and compare the performance and energy consumption of configurations at
all load points. In this particular example, it is easy to see what the best configuration is. However, there can
easily be scenarios where one configuration works best for heavy workloads and one works best for light
workloads.
You need to thoroughly understand your workload requirements to choose an optimal configuration. Don't
assume that when you find a good configuration, it will always remain optimal. You should measure system
utilization and energy consumption on a regular basis and after changes in workloads, workload levels, or
server hardware.
IMPORTANT
To ensure an accurate analysis, make sure that all local apps are closed before you run PowerCfg.exe.
Shortened timer tick rates, drivers that lack power management support, and excessive CPU utilization are a few
of the behavioral issues that are detected by the powercfg /energy command. This tool provides a simple way
to identify and fix power management issues, potentially resulting in significant cost savings in a large
datacenter.
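For example, the following invocation observes the system for 60 seconds and writes an HTML report (the output path is an arbitrary choice):

```
# Trace the system for 60 seconds and write an HTML diagnostics report.
powercfg /energy /output C:\Temp\energy-report.html /duration 60
```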
For more info about PowerCfg.exe, see Powercfg command-line options.
| Plan | Description | Common applicable scenarios | Implementation highlights |
| --- | --- | --- | --- |
| High Performance | Increases performance at the cost of high energy consumption. Power and thermal limitations, operating expenses, and reliability considerations apply. | Low latency apps and app code that is sensitive to processor performance changes | Processors are always locked at the highest performance state (including "turbo" frequencies). All cores are unparked. Thermal output may be significant. |
| Power Saver | Limits performance to save energy and reduce operating cost. Not recommended without thorough testing to make sure performance is adequate. | Deployments with limited power budgets and thermal constraints | Caps processor frequency at a percentage of maximum (if supported), and enables other energy-saving features. |
These power plans exist in Windows for alternating current (AC) and direct current (DC) powered systems, but
we will assume that servers are always using an AC power source.
For more info on power plans and power policy configurations, see Powercfg command-line options.
NOTE
Some server manufacturers provide their own power management options through BIOS settings. If the
operating system does not have control over power management, changing the power plans in Windows will not
affect system power and performance.
If your server requires lower energy consumption, you might want to cap the processor performance state at a
percentage of maximum. For example, you can restrict the processor to 75 percent of its maximum frequency by
using the following commands:
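A minimal sketch of those commands, using the PROCTHROTTLEMAX (maximum processor state) setting alias:

```
# Cap the maximum processor performance state at 75% on the current plan (AC power).
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMAX 75
# Apply the change; no reboot is required.
powercfg -setactive scheme_current
```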
NOTE
Capping processor performance at a percentage of maximum requires processor support. Check the processor
documentation to determine whether such support exists, or view the Performance Monitor counter % of maximum
frequency in the Processor group to see if any frequency caps were applied.
For example, if your server workload is not latency sensitive and you want to relax the responsiveness override
in favor of power savings, you can increase the Processor responsiveness override enable threshold and
Processor responsiveness override enable time, and decrease the Processor responsiveness override disable
threshold and Processor responsiveness override disable time. The system will then enter the responsiveness
override state less readily. The default value of Processor responsiveness override performance floor is 100, so
that the system runs at maximum frequency during a responsiveness override period. You can also decrease the
processor performance floor and reduce the Processor responsiveness override energy performance preference
ceiling to let HWP adjust the frequency. The following are sample commands to set these parameters on the
currently active power plan.
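Because the exact setting aliases or GUIDs for the responsiveness override parameters vary by build, the sketch below shows only the pattern; the <setting-GUID> and <value> placeholders are not literal values and must be taken from the query output:

```
# List all processor power settings (including hidden ones) to find the GUIDs
# for the "Processor responsiveness override ..." parameters on this system.
powercfg /qh scheme_current sub_processor

# Pattern for setting one of those parameters on the currently active plan.
# <setting-GUID> and <value> are placeholders, not literal values.
powercfg -setacvalueindex scheme_current sub_processor <setting-GUID> <value>
powercfg -setactive scheme_current
```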
NOTE
The EPB register is only supported in Intel Westmere and later processors.
For Intel Nehalem and AMD processors, Turbo is disabled by default on P-state-based platforms. However, if a
system supports Collaborative Processor Performance Control (CPPC), which is a new alternative mode of
performance communication between the operating system and the hardware (defined in ACPI 5.0), Turbo may
be engaged if the Windows operating system dynamically requests the hardware to deliver the highest possible
performance levels.
To enable or disable the Turbo Boost feature, the Processor Performance Boost Mode parameter must be
configured by the administrator or by the default parameter settings for the chosen power plan. Processor
Performance Boost Mode has five allowable values, as shown in Table 5.
For P-state-based control, the choices are Disabled, Enabled (Turbo is available to the hardware whenever
nominal performance is requested), and Efficient (Turbo is available only if the EPB register is implemented).
For CPPC-based control, the choices are Disabled, Efficient Enabled (Windows specifies the exact amount of
Turbo to provide), and Aggressive (Windows asks for "maximum performance" to enable Turbo).
In Windows Server 2016, the default value for Boost Mode is 3.
The following commands enable Processor Performance Boost Mode on the current power plan (specify the
policy by using a GUID alias):
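A sketch using the PERFBOOSTMODE setting alias; the value 1 corresponds to Enabled, and other boost modes are selected by substituting their values:

```
# Set Processor Performance Boost Mode to Enabled (1) on the current plan.
powercfg -setacvalueindex scheme_current sub_processor PERFBOOSTMODE 1
# Apply the new setting; no reboot is required.
powercfg -setactive scheme_current
```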
IMPORTANT
You must run the powercfg -setactive command to enable the new settings. You do not need to reboot the server.
To set this value for power plans other than the currently selected plan, you can use aliases such as
SCHEME_MAX (Power Saver), SCHEME_MIN (High Performance), and SCHEME_BALANCED (Balanced) in place
of SCHEME_CURRENT. Replace SCHEME_CURRENT in the powercfg -setactive commands previously shown with
the desired alias to enable that power plan.
For example, to adjust the Boost Mode in the Power Saver plan and make Power Saver the current plan, run the
following commands:
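A sketch under the same PERFBOOSTMODE assumption as above, using the SCHEME_MAX (Power Saver) alias:

```
# Adjust Boost Mode in the Power Saver plan.
powercfg -setacvalueindex scheme_max sub_processor PERFBOOSTMODE 1
# Make Power Saver the active plan.
powercfg -setactive scheme_max
```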
To reduce the number of schedulable cores to 50 percent of the maximum count, set the Processor
Performance Core Parking Maximum Cores parameter to 50 as follows:
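A sketch using the CPMAXCORES (core parking maximum cores) setting alias:

```
# Limit unparked (schedulable) cores to 50% of the maximum core count.
powercfg -setacvalueindex scheme_current sub_processor CPMAXCORES 50
powercfg -setactive scheme_current
```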
Additional References
Server Hardware Performance Considerations
Server Hardware Power Considerations
Processor Power Management (PPM) tuning for the Windows Server balanced power plan
Processor Power Management (PPM) Tuning for the
Windows Server Balanced Power Plan
Starting with Windows Server 2008, Windows Server provides three power plans: Balanced, High
Performance, and Power Saver. The Balanced power plan is the default choice that aims to give the best
energy efficiency for a set of typical server workloads. This topic describes the workloads that have been used to
determine the default settings for the Balanced scheme for the past several releases of Windows.
If you run a server system that has dramatically different workload characteristics or performance and power
requirements than these workloads, you might want to consider tuning the default power settings (that is, create
a custom power plan). One source of useful tuning information is the Server Hardware Power Considerations.
Alternately, you may decide that the High Performance power plan is the right choice for your environment,
recognizing that you will likely take a significant energy hit in exchange for some level of increased
responsiveness.
IMPORTANT
You should use the power plans that are included with Windows Server unless you have a specific need to create
a custom one and clearly understand that your results will vary depending on the characteristics of your
workload.
Hardware configurations
For each release of Windows, the most current production servers are used in the power plan analysis and
optimization process. In some cases, the tests were performed on pre-production systems whose release
schedule matched that of the next Windows release.
Given that most servers are sold with 1 to 4 processor sockets, and since scale-up servers are less likely to have
energy efficiency as a primary concern, the power plan optimization tests are primarily run on 2-socket and 4-
socket systems. The amount of RAM, disk, and network resources for each test are chosen to allow each system
to run all the way up to its full capacity, while taking into account the cost restrictions that would normally be in
place for real-world server environments, such as keeping the configurations reasonable.
IMPORTANT
Even though the system can run at its peak load, we typically optimize for lower load levels, since servers that consistently
run at their peak load levels would be well-advised to use the High Performance power plan unless energy efficiency is
a high priority.
Metrics
All of the tested benchmarks use throughput as the performance metric. Response time is considered an SLA
requirement for these workloads (except for SAP, where it is a primary metric). For example, a benchmark run is
considered "valid" if the mean or maximum response time is less than a certain value.
Therefore, the PPM tuning analysis also uses throughput as its performance metric. At the highest load level
(100% CPU utilization), our goal is that the throughput should not decrease more than a few percent due to
power management optimizations. But the primary consideration is to maximize the power efficiency (as
defined below) at medium and low load levels.
Running the CPU cores at lower frequencies reduces energy consumption. However, lower frequencies typically
decrease throughput and increase response time. For the Balanced power plan, there is an intentional tradeoff
of responsiveness and power efficiency. The SAP workload tests, as well as the response time SLAs on the other
workloads, make sure that the response time increase doesn't exceed a certain threshold (for example, 5%) for
these specific workloads.
NOTE
If the workload is very sensitive to response time, the system should either switch to the High Performance power plan
or modify the Balanced power plan to increase frequency much more aggressively under load.
[Table: default PPM parameter values, Windows Server 2012 R2 and before versus Windows Server 2016 and after]
For Intel pre-Broadwell systems or any systems that don’t have HWP support (for example, AMD servers),
Windows is still in full control and determines processor frequency based on the PPM parameters. The default
PPM parameters in Windows Server 2012 R2 favor power savings so heavily that workload performance can be
significantly affected, especially for bursty workloads. Four PPM parameters were changed in Windows Server
2016 RS2 to let the frequency increase faster around medium load levels.
The CPU utilization-based power management algorithms might hurt the latency of IO or network-intensive
workloads. A logical processor could be idle while waiting for IO completion or network packets, which makes
the overall CPU utilization low. To resolve this issue, Windows Server 2019 automatically detects the IO
responsiveness period and raises the frequency floor to a higher level. This behavior can be tuned with the
responsiveness override parameters, whether the system uses HWP or not.
IMPORTANT
Before starting any experiments, you should first understand your workloads, which will help you make the right PPM
parameter choices and reduce the tuning effort.
See Also
Server Hardware Performance Considerations
Server Hardware Power Considerations
Overview about power and performance tuning for the Windows Server
Performance tuning Active Directory Servers
IMPORTANT
Proper configuration and sizing of Active Directory has a significant potential impact on overall system and workload
performance. Readers are highly encouraged to start by reading Capacity planning for Active Directory Domain Services.
Additional References
Capacity planning for AD DS
Hardware considerations
Memory usage considerations
LDAP considerations
Proper placement of domain controllers and site considerations
Troubleshooting AD DS performance
Capacity planning for Active Directory Domain
Services
This topic was originally written by Ken Brumfield, a Program Manager at Microsoft. It provides recommendations
for capacity planning for Active Directory Domain Services (AD DS).
NOTE
Adding Active Directory-aware applications might have a noticeable impact on the DC load, whether the
load is coming from the application servers or clients.
| Component | Estimate |
| --- | --- |
| Network | 1 GB |
Planning
For a long time, the community's recommendation for sizing AD DS has been to “put in as much RAM as the
database size.” For the most part, that recommendation is all that most environments needed to be concerned
about. But the ecosystem consuming AD DS has gotten much bigger, as have the AD DS environments
themselves, since its introduction in 1999. Although the increase in compute power and the switch from x86
architectures to x64 architectures have made the subtler aspects of sizing for performance irrelevant to a larger
set of customers running AD DS on physical hardware, the growth of virtualization has reintroduced the tuning
concerns to a larger audience than before.
The following guidance is thus about how to determine and plan for the demands of Active Directory as a
service regardless of whether it is deployed in a physical, a virtual/physical mix, or a purely virtualized scenario.
As such, we will break down the evaluation to each of the four main components: storage, memory, network,
and processor. In short, in order to maximize performance on AD DS, the goal is to get as close to processor
bound as possible.
RAM
Simply, the more that can be cached in RAM, the less it is necessary to go to disk. To maximize the scalability of
the server the minimum amount of RAM should be the sum of the current database size, the total SYSVOL size,
the operating system recommended amount, and the vendor recommendations for the agents (antivirus,
monitoring, backup, and so on). An additional amount should be added to accommodate growth over the
lifetime of the server. This will be environmentally subjective based on estimates of database growth based on
environmental changes.
For environments where maximizing the amount of RAM is not cost effective (such as satellite locations) or
not feasible (DIT is too large), reference the Storage section to ensure that storage is properly designed.
A corollary that comes up in the general context in sizing memory is sizing of the page file. In the same context
as everything else memory related, the goal is to minimize going to the much slower disk. Thus the question
should go from, “how should the page file be sized?” to “how much RAM is needed to minimize paging?” The
answer to the latter question is outlined in the rest of this section. This leaves most of the discussion for sizing
the page file to the realm of general operating system recommendations and the need to configure the system
for memory dumps, which are unrelated to AD DS performance.
Evaluating
The amount of RAM that a domain controller (DC) needs is actually a complex exercise for these reasons:
High potential for error when trying to use an existing system to gauge how much RAM is needed, because
LSASS trims memory under memory pressure conditions, artificially deflating the need.
The subjective fact that an individual DC only needs to cache what is “interesting” to its clients. This means
that the data that needs to be cached on a DC in a site with only an Exchange server will be very different
than the data that needs to be cached on a DC that only authenticates users.
The labor to evaluate RAM for each DC on a case-by-case basis is prohibitive and changes as the
environment changes.
The criteria behind the recommendation will help to make informed decisions:
The more that can be cached in RAM, the less it is necessary to go to disk.
Storage is by far the slowest component of a computer. Access to data on spindle-based and SSD storage
media is on the order of 1,000,000x slower than access to data in RAM.
Thus, in order to maximize the scalability of the server, the minimum amount of RAM is the sum of the current
database size, the total SYSVOL size, the operating system recommended amount, and the vendor
recommendations for the agents (antivirus, monitoring, backup, and so on). Add additional amounts to
accommodate growth over the lifetime of the server. This will be environmentally subjective based on estimates
of database growth. However, for satellite locations with a small set of end users, these requirements can be
relaxed as these sites will not need to cache as much to service most of the requests.
For environments where maximizing the amount of RAM is not cost effective (such as satellite locations) or
not feasible (DIT is too large), reference the Storage section to ensure that storage is properly sized.
NOTE
A corollary while sizing memory is sizing of the page file. Because the goal is to minimize going to the much slower disk,
the question goes from “how should the page file be sized?” to “how much RAM is needed to minimize paging?” The
answer to the latter question is outlined in the rest of this section. This leaves most of the discussion for sizing the page
file to the realm of general operating system recommendations and the need to configure the system for memory dumps,
which are unrelated to AD DS performance.
| Component | Estimated memory |
| --- | --- |
| Antivirus | 100 MB |
| Total | 12 GB |
| Recommended | 16 GB |
Over time, the assumption can be made that more data will be added to the database and the server will
probably be in production for 3 to 5 years. Based on an estimate of growth of 33%, 16 GB would be a
reasonable amount of RAM to put in a physical server. In a virtual machine, given the ease with which settings
can be modified and RAM can be added to the VM, starting at 12 GB with the plan to monitor and upgrade in
the future is reasonable.
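As a worked illustration of that sum (a sketch; every component value below is an assumption, not a recommendation):

```
# Hypothetical inputs for the minimum-RAM sum, in GB.
$databaseGB = 8.5   # current NTDS.dit size
$sysvolGB   = 0.5   # total SYSVOL size
$osGB       = 2.0   # operating system recommended amount
$agentsGB   = 1.0   # antivirus, monitoring, backup agents, and so on

$minimumGB     = $databaseGB + $sysvolGB + $osGB + $agentsGB   # 12 GB
$recommendedGB = [math]::Ceiling($minimumGB * 1.33)            # ~33% growth -> 16 GB
"{0} GB minimum, {1} GB recommended" -f $minimumGB, $recommendedGB
```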
Network
Evaluating
This section is less about evaluating the demands regarding replication traffic, which is focused on traffic
traversing the WAN and is thoroughly covered in Active Directory Replication Traffic, than it is about evaluating
total bandwidth and network capacity needed, inclusive of client queries, Group Policy applications, and so on.
For existing environments, this can be collected by using the performance counters “Network Interface(*)\Bytes
Received/sec” and “Network Interface(*)\Bytes Sent/sec.” Sample the Network Interface counters at 15-, 30-, or
60-minute intervals. Anything less will generally be too volatile for good measurements; anything greater will
smooth out daily peaks excessively.
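One way to collect this is with Get-Counter; a sketch at a 30-minute sample interval over a 24-hour day (the output path is an assumption):

```
# Sample outbound network traffic every 30 minutes for 24 hours (48 samples).
Get-Counter -Counter '\Network Interface(*)\Bytes Sent/sec' `
    -SampleInterval 1800 -MaxSamples 48 |
    Export-Counter -Path 'C:\perf\network.blg' -FileFormat blg -Force
```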
NOTE
Generally, the majority of network traffic on a DC is outbound as the DC responds to client queries. This is the reason for
the focus on outbound traffic, though it is recommended to evaluate each environment for inbound traffic also. The same
approaches can be used to address and review inbound network traffic requirements. For more information, see
Knowledge Base article 929851: The default dynamic port range for TCP/IP has changed in Windows Vista and in
Windows Server 2008.
Bandwidth needs
Planning for network scalability covers two distinct categories: the amount of traffic and the CPU load from the
network traffic. Each of these scenarios is straightforward compared to some of the other topics in this article.
In evaluating how much traffic must be supported, there are two unique categories of capacity planning for AD
DS in terms of network traffic. The first is replication traffic that traverses between domain controllers and is
covered thoroughly in the reference Active Directory Replication Traffic and is still relevant to current versions of
AD DS. The second is the intrasite client-to-server traffic. One of the simpler scenarios to plan for, intrasite traffic
predominantly receives small requests from clients relative to the large amounts of data sent back to the clients.
A 100 Mb network adapter will generally be adequate in environments up to 5,000 users per server, in a site. Using a 1 Gb network
adapter and Receive Side Scaling (RSS) support is recommended for anything above 5,000 users. To validate
this scenario, particularly in the case of server consolidation scenarios, look at Network Interface(*)\Bytes/sec
across all the DCs in a site, add them together, and divide by the target number of domain controllers to ensure
that there is adequate capacity. The easiest way to do this is to use the “Stacked Area” view in Windows
Reliability and Performance Monitor (formerly known as Perfmon), making sure all of the counters are scaled
the same.
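A minimal sketch of that aggregation, assuming hypothetical DC names and a consolidation target of two domain controllers:

```
# Sum outbound bytes/sec across all DCs in the site (names are examples).
$dcs = 'DC1','DC2','DC3','DC4','DC5'
$set = Get-Counter -ComputerName $dcs -Counter '\Network Interface(*)\Bytes Sent/sec'
$totalBytesSec = ($set.CounterSamples | Measure-Object -Property CookedValue -Sum).Sum

# Divide by the target number of DCs to check per-server capacity.
$targetDcCount = 2
$perDcMBps = $totalBytesSec / $targetDcCount / 1MB
"{0:N1} MB/s per DC after consolidation" -f $perDcMBps
```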
Consider the following example (also known as, a really, really complex way to validate that the general rule is
applicable to a specific environment). The following assumptions are made:
The goal is to reduce the footprint to as few servers as possible. Ideally, one server will carry the load and an
additional server is deployed for redundancy (N + 1 scenario).
In this scenario, the current network adapter supports only 100 Mb and is in a switched environment. The
maximum target network bandwidth utilization is 60% in an N scenario (loss of a DC).
Each server has about 10,000 clients connected to it.
Knowledge gained from the data in the chart (Network Interface(*)\Bytes Sent/sec):
1. The business day starts ramping up around 5:30 AM and winds down at 7:00 PM.
2. The peak busiest period is from 8:00 AM to 8:15 AM, with greater than 25 MB sent/sec on the busiest DC.
NOTE
All performance data is historical. So the peak data point at 8:15 indicates the load from 8:00 to 8:15.
3. There are spikes before 4:00 AM, with more than 20 MB sent/sec on the busiest DC, which could indicate
either load from different time zones or background infrastructure activity, such as backups. Since the peak at
8:00 AM exceeds this activity, it is not relevant.
4. There are five Domain Controllers in the site.
5. The max load is about 5.5 MB/s per DC, which represents 44% of the 100 Mb connection. Using this data, it
can be estimated that the total bandwidth needed between 8:00 AM and 8:15 AM is 28 MB/s.
NOTE
Be careful with the fact that Network Interface sent/receive counters are in bytes while network bandwidth is
measured in bits: 100 Mb ÷ 8 = 12.5 MB/s; 1 Gb ÷ 8 = 128 MB/s.
Conclusions:
1. This current environment does meet the N+1 level of fault tolerance at 60% target utilization. Taking one
system offline will shift the bandwidth per server from about 5.5 MB/s (44%) to about 7 MB/s (56%).
2. Based on the previously stated goal of consolidating to one server, this exceeds both the maximum target
utilization and, theoretically, the possible utilization of a 100 Mb connection.
3. With a 1 Gb connection, this will represent 22% of the total capacity.
4. Under normal operating conditions in the N + 1 scenario, client load will be relatively evenly distributed at
about 14 MB/s per server or 11% of total capacity.
5. To ensure that capacity is adequate during unavailability of a DC, the normal operating targets per server
would be about 30% network utilization or 38 MB/s per server. Failover targets would be 60% network
utilization or 72 MB/s per server.
In short, the final deployment of systems must have a 1 Gb network adapter and be connected to a network
infrastructure that will support said load. A further note is that given the amount of network traffic generated,
the CPU load from network communications can have a significant impact and limit the maximum scalability of
AD DS. This same process can be used to estimate the amount of inbound communication to the DC. But given
the predominance of outbound traffic relative to inbound traffic, it is an academic exercise for most
environments. Ensuring hardware support for RSS is important in environments with greater than 5,000 users
per server. For scenarios with high network traffic, balancing of interrupt load can be a bottleneck. This can be
detected by Processor(*)\% Interrupt Time being unevenly distributed across CPUs. RSS-enabled NICs can
mitigate this limitation and increase scalability.
NOTE
A similar approach can be used to estimate the additional capacity necessary when consolidating data centers, or retiring
a domain controller in a satellite location. Simply collect the outbound and inbound traffic to clients and that will be the
amount of traffic that will now be present on the WAN links.
In some cases, you might experience more traffic than expected because traffic is slower, such as when certificate checking
fails to meet aggressive time-outs on the WAN. For this reason, WAN sizing and utilization should be an iterative, ongoing
process.
| Domain controller | Peak bandwidth |
| --- | --- |
| DC 1 | 6.5 MB/s |
| DC 2 | 6.25 MB/s |
| DC 3 | 6.25 MB/s |
| DC 4 | 5.75 MB/s |
| DC 5 | 4.75 MB/s |
| Total | 28.5 MB/s |
As always, over time the assumption can be made that client load will increase and this growth should be
planned for as best as possible. The recommended amount to plan for would allow for an estimated growth in
network traffic of 50%.
Storage
Planning storage constitutes two components:
Capacity, or storage size
Performance
A great amount of time and documentation is spent on planning capacity, leaving performance often completely
overlooked. With current hardware costs, most environments are not large enough that either of these is
actually a concern, and the recommendation to “put in as much RAM as the database size” usually covers the
rest, though it may be overkill for satellite locations in larger environments.
Sizing
Evaluating for storage
Compared to 13 years ago when Active Directory was introduced, a time when 4 GB and 9 GB drives were the
most common drive sizes, sizing for Active Directory is not even a consideration for all but the largest
environments. With the smallest available hard drive sizes in the 180 GB range, the entire operating system,
SYSVOL, and NTDS.dit can easily fit on one drive. As such, it is recommended to deprecate heavy investment in
this area.
The only recommendation for consideration is to ensure that 110% of the NTDS.dit size is available in order to
enable defrag. Additionally, accommodations for growth over the life of the hardware should be made.
The first and most important consideration is evaluating how large the NTDS.dit and SYSVOL will be. These
measurements will lead into sizing both fixed disk and RAM allocation. Due to the (relatively) low cost of these
components, the math does not need to be rigorous and precise. Content about how to evaluate this for both
existing and new environments can be found in the Data Storage series of articles. Specifically, refer to the
following articles:
For existing environments – The section titled “To activate logging of disk space that is freed by
defragmentation” in the article Storage Limits.
For new environments – The article titled Growth Estimates for Active Directory Users and
Organizational Units.
NOTE
The articles are based on data size estimates made at the time of the release of Active Directory in Windows 2000.
Use object sizes that reflect the actual size of objects in your environment.
When reviewing existing environments with multiple domains, there may be variations in database sizes. Where
this is true, use the smallest global catalog (GC) and non-GC sizes.
The database size can vary between operating system versions. DCs that run earlier operating systems, such as
Windows Server 2003, have a smaller database size than DCs that run later operating systems, such as
Windows Server 2008 R2, especially when features such as Active Directory Recycle Bin or Credential Roaming
are enabled.
NOTE
For new environments, notice that the estimates in Growth Estimates for Active Directory Users and Organizational
Units indicate that 100,000 users (in the same domain) consume about 450 MB of space. Please note that the
attributes populated can have a huge impact on the total amount. Attributes will be populated on many objects by
both third-party and Microsoft products, including Microsoft Exchange Server and Lync. An evaluation based on the
portfolio of the products in the environment is preferred, but the exercise of detailing out the math and testing for
precise estimates for all but the largest environments may not actually be worth significant time and effort.
Ensure that 110% of the NTDS.dit size is available as free space in order to enable offline defrag, and plan for growth
over a three to five year hardware lifespan. Given how cheap storage is, estimating storage at 300% the size of the DIT
as storage allocation is safe to accommodate growth and the potential need for offline defrag.
| Example data point | Value |
| --- | --- |
| NTDS.dit size | 35 GB |
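Applying the 300% guidance to that example data point:

```
$ditSizeGB    = 35
$allocationGB = $ditSizeGB * 3   # 300% of the DIT -> 105 GB allocated
```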
NOTE
This storage needed is in addition to the storage needed for SYSVOL, operating system, page file, temporary files, local
cached data (such as installer files), and applications.
Storage performance
Evaluating performance of storage
As the slowest component within any computer, storage can have the biggest adverse impact on client
experience. For those environments large enough for which the RAM sizing recommendations are not feasible,
the consequences of overlooking planning storage for performance can be devastating. Also, the complexities
and varieties of storage technology further increase the risk of failure, as the relevance of the long-standing best
practice of putting the operating system, logs, and database on separate physical disks is limited in its useful
scenarios. This is because that best practice is based on the assumption that a “disk” is a dedicated spindle,
which allowed I/O to be isolated. The assumptions that made this true are no longer relevant with the
introduction of:
RAID
New storage types and virtualized and shared storage scenarios
Shared spindles on a Storage Area Network (SAN)
VHD file on a SAN or network-attached storage
Solid State Drives
Tiered storage architectures (that is, an SSD storage tier caching larger spindle-based storage)
Specifically, shared storage (RAID, SAN, NAS, JBOD (i.e. Storage Spaces), VHD) all have the ability to be
oversubscribed/overloaded by other work loads that are placed on the back end storage. They also add in the
challenge that SAN/network/driver issues (everything between the physical disk and the AD application) can
cause throttling and/or delays. For clarification, these are not "bad" configurations; they are more complex
configurations that require every component along the way to be working properly, thus requiring additional
attention to ensure that performance is acceptable. See Appendix C, subsection "Introducing SANs," and
Appendix D later in this document for more detailed explanations. Also, whereas Solid State Drives do not have
the limitation of spinning disks (Hard Drives) regarding only allowing one IO at a time to be processed, they do
still have IO limitations, and overloading/oversubscribing of SSDs is possible. In short, the end goal of all
storage performance efforts, regardless of underlying storage architecture and design, is to ensure that the
needed amount of Input/output Operations Per Second (IOPS) is available and that those IOPS happen within an
acceptable time frame (as specified elsewhere in this document). For those scenarios with locally attached
storage, reference Appendix C for the basics in how to design traditional local storage scenarios. These
principles are generally applicable to more complex storage tiers and will also help in dialog with the vendors
supporting backend storage solutions.
Given the wide breadth of storage options available, it is recommended to engage the expertise of hardware
support teams or vendors to ensure that the specific solution meets the needs of AD DS. The following
numbers are the information that would be provided to the storage specialists.
For environments where the database is too large to be held in RAM, use the performance counters to
determine how much I/O needs to be supported:
LogicalDisk(*)\Avg Disk sec/Read (for example, if NTDS.dit is stored on the D: drive, the full path would be
LogicalDisk(D:)\Avg Disk sec/Read)
LogicalDisk(*)\Avg Disk sec/Write
LogicalDisk(*)\Avg Disk sec/Transfer
LogicalDisk(*)\Reads/sec
LogicalDisk(*)\Writes/sec
LogicalDisk(*)\Transfers/sec
These should be sampled at 15-, 30-, or 60-minute intervals to benchmark the demands of the current environment.
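A sketch of one such collection run, assuming the database is on the D: drive (note that the live counter name includes a period: "Avg. Disk sec/Read"):

```
# Benchmark NTDS.dit I/O at 15-minute intervals for 24 hours (96 samples).
$counters = '\LogicalDisk(D:)\Avg. Disk sec/Read',
            '\LogicalDisk(D:)\Avg. Disk sec/Write',
            '\LogicalDisk(D:)\Reads/sec',
            '\LogicalDisk(D:)\Writes/sec',
            '\LogicalDisk(D:)\Transfers/sec'
Get-Counter -Counter $counters -SampleInterval 900 -MaxSamples 96 |
    Export-Counter -Path 'C:\perf\ntds-disk.blg' -FileFormat blg -Force
```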
Evaluating the results
NOTE
The focus is on reads from the database because these are usually the most demanding component; the same
logic can be applied to writes to the log file by substituting LogicalDisk(<NTDS Log>)\Avg Disk sec/Write and
LogicalDisk(<NTDS Log>)\Writes/sec:
LogicalDisk(<NTDS>)\Avg Disk sec/Read indicates whether or not the current storage is adequately sized. If the results
are roughly equal to the Disk Access Time for the disk type, LogicalDisk(<NTDS>)\Reads/sec is a valid measure. Check
the manufacturer specifications for the storage on the back end, but good ranges for LogicalDisk(<NTDS>)\Avg Disk
sec/Read would roughly be:
7,200 RPM – 9 to 12.5 milliseconds (ms)
10,000 RPM – 6 to 10 ms
15,000 RPM – 4 to 6 ms
SSD – 1 to 3 ms
NOTE
Recommendations exist stating that storage performance is degraded at 15ms to 20ms (depending on
source). The difference between the above values and the other guidance is that the above values are the
normal operating range. The other recommendations are troubleshooting guidance to identify when the client
experience significantly degrades and becomes noticeable. Reference Appendix C for a deeper explanation.
Considerations:
Note that if the server is configured with a sub-optimal amount of RAM, these values will be inaccurate for
planning purposes. They will be erroneously on the high side and can still be used as a worst case scenario.
Adding/optimizing RAM specifically will drive a decrease in the amount of read I/O
(LogicalDisk(<NTDS>)\Reads/sec). This means the storage solution may not have to be as robust as initially
calculated. Unfortunately, anything more specific than this general statement is environmentally dependent
on client load and general guidance cannot be provided. The best option is to adjust storage sizing after
optimizing RAM.
Virtualization considerations for performance
Similar to all of the preceding virtualization discussions, the key here is to ensure that the underlying shared
infrastructure can support the DC load plus the other resources using the underlying shared media and all
pathways to it. This is true whether a physical domain controller is sharing the same underlying media on a
SAN, NAS, or iSCSI infrastructure as other servers or applications, whether it is a guest using pass through
access to a SAN, NAS, or iSCSI infrastructure that shares the underlying media, or if the guest is using a VHD file
that resides on shared media locally or a SAN, NAS, or iSCSI infrastructure. The planning exercise is all about
making sure that the underlying media can support the total load of all consumers.
Also, from a guest perspective, as there are additional code paths that must be traversed, there is a performance
impact to having to go through a host to access any storage. Not surprisingly, storage performance testing
indicates that virtualization has an impact on throughput that is subject to the processor utilization of the
host system (see Appendix A: CPU Sizing Criteria), which is obviously influenced by the resources of the host
demanded by the guest. This contributes to the virtualization considerations regarding processing needs in a
virtualized scenario (see Virtualization considerations for processing).
Making this more complex is that there are a variety of different storage options that are available that all have
different performance impacts. As a safe estimate when migrating from physical to virtual, use a multiplier of
1.10 to adjust for different storage options for virtualized guests on Hyper-V, such as pass-through storage, SCSI
Adapter, or IDE. The adjustments that need to be made when transferring between the different storage
scenarios are irrelevant as to whether the storage is local, SAN, NAS, or iSCSI.
Calculation summary example
Determining the amount of I/O needed for a healthy system under normal operating conditions:
LogicalDisk(<NTDS Database Drive>)\Transfers/sec during the peak 15-minute period
To determine the amount of I/O needed for storage where the capacity of the underlying storage is exceeded:
Needed IOPS = (LogicalDisk(<NTDS Database Drive>)\Avg Disk sec/Read ÷ <Target Avg Disk
sec/Read>) × LogicalDisk(<NTDS Database Drive>)\Read/sec
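For example, with assumed measurements of 20 ms observed read latency, a 10 ms target, and 400 reads/sec:

```
$avgDiskSecRead = 0.020   # observed LogicalDisk(<NTDS>)\Avg Disk sec/Read (20 ms)
$targetSecRead  = 0.010   # target average read service time (10 ms)
$readsPerSec    = 400     # observed LogicalDisk(<NTDS>)\Reads/sec

$neededIops = ($avgDiskSecRead / $targetSecRead) * $readsPerSec   # = 800 IOPS
```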
| Data point to collect | Value |
| --- | --- |
| Database size | 2 GB |

| Calculation step | Formula | Result |
| --- | --- | --- |
| Calculate IOPS necessary to fully warm the cache | 262,144 pages ÷ 600 seconds | 437 IOPS needed |
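The cache-warming row can be reproduced as follows; the 8 KB database page size and the 10-minute warm-up window are assumptions consistent with the figures shown:

```
$dbBytes     = 2GB
$pageSize    = 8KB                                    # ESE database page size
$pages       = $dbBytes / $pageSize                   # 262,144 pages
$warmSeconds = 600                                    # warm the cache in 10 minutes
$iopsNeeded  = [math]::Round($pages / $warmSeconds)   # ~437 IOPS
```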
Processing
Evaluating Active Directory processor usage
For most environments, after storage, RAM, and networking are properly tuned as described in the Planning
section, managing the amount of processing capacity will be the component that deserves the most attention.
There are two challenges in evaluating CPU capacity needed:
Whether or not the applications in the environment are well behaved in a shared services
infrastructure; this is discussed in the section titled “Tracking Expensive and Inefficient Searches” in the
article Creating More Efficient Microsoft Active Directory-Enabled Applications, and includes migrating away
from down-level SAM calls to LDAP calls.
In larger environments, the reason this is important is that poorly coded applications can drive volatility
in CPU load, “steal” an inordinate amount of CPU time from other applications, artificially drive up
capacity needs, and unevenly distribute load against the DCs.
As AD DS is a distributed environment with a large variety of potential clients, estimating the expense of a
“single client” is environmentally subjective due to usage patterns and the type or quantity of applications
leveraging AD DS. In short, much like the networking section, for broad applicability, this is better
approached from the perspective of evaluating the total capacity needed in the environment.
For existing environments, as storage sizing was discussed previously, the assumption is made that storage is
now properly sized and thus the data regarding processor load is valid. To reiterate, it is critical to ensure that
the bottleneck in the system is not the performance of the storage. When a bottleneck exists and the processor
is waiting, there are idle states that will go away once the bottleneck is removed. As processor wait states are
removed, by definition, CPU utilization increases as it no longer has to wait on the data. Thus, collect
performance counters “Logical Disk(<NTDS Database Drive>)\Avg Disk sec/Read” and “Process(lsass)\%
Processor Time”. The data in “Process(lsass)\% Processor Time” will be artificially low if “Logical Disk(<NTDS
Database Drive>)\Avg Disk sec/Read” exceeds 10 to 15 ms, which is a general threshold that Microsoft support
uses for troubleshooting storage-related performance issues. As before, it is recommended that sample
intervals be either 15, 30, or 60 minutes. Anything less will generally be too volatile for good measurements;
anything greater will smooth out daily peaks excessively.
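A sketch that collects both counters together (the drive letter and sampling duration are assumptions):

```
# Correlate lsass CPU time with NTDS database read latency at 15-minute intervals.
$counters = '\Process(lsass)\% Processor Time',
            '\LogicalDisk(D:)\Avg. Disk sec/Read'
Get-Counter -Counter $counters -SampleInterval 900 -MaxSamples 96 |
    Export-Counter -Path 'C:\perf\lsass-cpu.blg' -FileFormat blg -Force
```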
Introduction
When planning capacity for domain controllers, processing power requires the most attention and
understanding. When sizing systems to ensure maximum performance, there is always a component that is the
bottleneck, and in a properly sized domain controller this will be the processor.
Similar to the networking section where the demand of the environment is reviewed on a site-by-site basis, the
same must be done for the compute capacity demanded. Unlike the networking section, where the available
networking technologies far exceed the normal demand, pay more attention to sizing CPU capacity. In any
environment of even moderate size, anything over a few thousand concurrent users can put significant load on
the CPU.
Unfortunately, due to the huge variability of client applications that leverage AD, a general estimate of users per
CPU is woefully inapplicable to all environments. Specifically, the compute demands are subject to user behavior
and application profile. Therefore, each environment needs to be individually sized.
Target site behavior profile
As mentioned previously, when planning capacity for an entire site, the goal is to target a design with an N + 1
capacity design, such that failure of one system during the peak period will allow for continuation of service at a
reasonable level of quality. That means that in an “N” scenario, load across all the boxes should be less than
100% (better yet, less than 80%) during the peak periods.
Additionally, if the applications and clients in the site are using best practices for locating domain controllers
(that is, using the DsGetDcName function), the clients should be relatively evenly distributed with minor
transient spikes due to any number of factors.
In the next example, the following assumptions are made:
Each of the five DCs in the site has four CPUs.
Total target CPU usage during business hours is 40% under normal operating conditions (“N + 1”) and 60%
otherwise (“N”). During non-business hours, the target CPU usage is 80% because backup software and
other maintenance are expected to consume all available resources.
Analyzing the data in the chart (Processor Information(_Total)\% Processor Utility) for each of the DCs:
For the most part, the load is relatively evenly distributed which is what would be expected when clients
use DC locator and have well written searches.
There are a number of five-minute spikes of 10%, with some as large as 20%. Generally, unless they
cause the capacity plan target to be exceeded, investigating these is not worthwhile.
The peak period for all systems is between about 8:00 AM and 9:15 AM. With the smooth transition from
about 5:00 AM through about 5:00 PM, this is generally indicative of the business cycle. The more
randomized spikes of CPU usage on a box-by-box scenario between 5:00 PM and 4:00 AM would be
outside of the capacity planning concerns.
NOTE
On a well-managed system, said spikes might be backup software running, full system antivirus scans,
hardware or software inventory, software or patch deployment, and so on. Because they fall outside the peak user
business cycle, the targets are not exceeded.
As each system is at about 40% and all systems have the same number of CPUs, should one fail or be
taken offline, the remaining systems would run at an estimated 53% (System D's 40% load is evenly split
and added to the remaining systems' existing 40% load). For a number of reasons, this linear
assumption is NOT perfectly accurate, but it provides enough accuracy to gauge; a short sketch of this
arithmetic follows this list.
Alternate scenario – Two domain controllers running at 40%: One domain controller fails, estimated
CPU on the remaining one would be an estimated 80%. This far exceeds the thresholds outlined above
for capacity plan and also starts to severely limit the amount of head room for the 10% to 20% seen in
the load profile above, which means that the spikes would drive the DC to 90% to 100% during the “N”
scenario and definitely degrade responsiveness.
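The failover arithmetic in both bullets above reduces to a one-line calculation. A minimal sketch (the even, linear redistribution is the stated simplifying assumption):

```python
def load_after_failure(dc_count: int, per_dc_load_pct: float) -> float:
    """Estimate per-DC CPU load after one DC goes offline, assuming the
    site's total load redistributes evenly across the survivors."""
    return dc_count * per_dc_load_pct / (dc_count - 1)

print(load_after_failure(4, 40))  # four DCs at 40% -> survivors at ~53.3%
print(load_after_failure(2, 40))  # two DCs at 40%  -> survivor at 80%
```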
Calculating CPU demands
The “Process\% Processor Time” performance object counter sums the total amount of time that all of the
threads of an application spend on the CPU and divides by the total amount of system time that has passed. The
effect of this is that a multi-threaded application on a multi-CPU system can exceed 100% CPU time, and would
be interpreted VERY differently than “Processor Information\% Processor Utility”. In practice the
“Process(lsass)\% Processor Time” can be viewed as the count of CPUs running at 100% that are necessary to
support the process's demands. A value of 200% means that 2 CPUs, each at 100%, are needed to support the
full AD DS load. Although a CPU running at 100% capacity is the most cost efficient from the perspective of
money spent on CPUs and power and energy consumption, for a number of reasons detailed in Appendix A,
better responsiveness on a multi-threaded system occurs when the system is not running at 100%.
To accommodate transient spikes in client load, it is recommended to target a peak period CPU of between 40%
and 60% of system capacity. Working with the example above, that would mean that between 3.33 (60% target)
and 5 (40% target) CPUs would be needed for the AD DS (lsass process) load. Additional capacity should be
added according to the demands of the base operating system and other required agents (such as antivirus,
backup, monitoring, and so on). Although the impact of agents needs to be evaluated on a per environment
basis, an estimate of between 5% and 10% of a single CPU can be made. In the current example, this would
suggest that between 3.43 (60% target) and 5.1 (40% target) CPUs are necessary during peak periods.
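A minimal sketch of the sizing arithmetic just described, using the example's numbers (the 10% agent allowance is the per-environment estimate mentioned above):

```python
def cpus_needed(lsass_pct: float, target_pct: float, agent_cpus: float) -> float:
    """CPUs required to hold the measured Process(lsass)\\% Processor Time
    at the target utilization, plus a flat allowance for OS/agent overhead."""
    return lsass_pct / target_pct + agent_cpus

print(cpus_needed(200, 60, 0.1))  # ~3.43 CPUs at a 60% peak target
print(cpus_needed(200, 40, 0.1))  # ~5.1 CPUs at a 40% peak target
```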
The easiest way to do this is to use the “Stacked Area” view in Windows Reliability and Performance Monitor
(perfmon), making sure all of the counters are scaled the same.
Assumptions:
Goal is to reduce footprint to as few servers as possible. Ideally, one server would carry the load and an
additional server added for redundancy (N + 1 scenario).
Knowledge gained from the data in the chart (Process(lsass)\% Processor Time):
The business day starts ramping up around 7:00 AM and decreases at 5:00 PM.
The peak busiest period is from 9:30 AM to 11:00 AM.
NOTE
All performance data is historical. The peak data point at 9:15 indicates the load from 9:00 to 9:15.
There are spikes before 7:00 AM which could indicate either load from different time zones or background
infrastructure activity, such as backups. Because the peak at 9:30 AM exceeds this activity, it is not relevant.
There are three domain controllers in the site.
At maximum load, lsass consumes about 485% of one CPU, or 4.85 CPUs running at 100%. As per the math
earlier, this means the site needs about 12.25 CPUs for AD DS. Add in the above suggestions of 5% to 10% for
background processes and that means replacing the server today would need approximately 12.30 to 12.35
CPUs to support the same load. An environmental estimate for growth now needs to be factored in.
When to tune LDAP weights
There are several scenarios where tuning LdapSrvWeight should be considered. Within the context of capacity
planning, this would be done when the application or user loads are not evenly balanced, or the underlying
systems are not evenly balanced in terms of capability. Reasons to do so beyond capacity planning are outside
of the scope of this article.
There are two common reasons to tune LDAP Weights:
The PDC emulator is an example that affects every environment for which user or application load behavior
is not evenly distributed. As certain tools and actions target the PDC emulator, such as the Group Policy
management tools, second attempts in the case of authentication failures, trust establishment, and so on,
CPU resources on the PDC emulator may be more heavily demanded than elsewhere in the site.
It is only useful to tune this if there is a noticeable difference in CPU utilization. Reducing the
load on the PDC emulator and increasing the load on other domain controllers allows a more even
distribution of load.
In this case, set LdapSrvWeight between 50 and 75 for the PDC emulator (see the registry sketch after this list).
Servers with differing counts of CPUs (and speeds) in a site. For example, say there are two eight-core
servers and one four-core server. The last server has half the processors of the other two servers. This means
that a well distributed client load will increase the average CPU load on the four-core box to roughly twice
that of the eight-core boxes.
For example, the two eight-core boxes would be running at 40% and the four-core box would be
running at 80%.
Also, consider the impact of loss of one eight-core box in this scenario, specifically the fact that the
four-core box would now be overloaded.
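LdapSrvWeight is a registry value under the Netlogon service parameters. A minimal sketch of applying the PDC emulator adjustment suggested above, using Python's standard winreg module (run elevated on the DC being tuned; the value 60 is only an example within the 50 to 75 range):

```python
import winreg

# LdapSrvWeight (default 100) weights the SRV records that the DC Locator
# uses to distribute clients across domain controllers in a site.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Netlogon\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # Example value only; pick a weight appropriate to the environment.
    winreg.SetValueEx(key, "LdapSrvWeight", 0, winreg.REG_DWORD, 60)
```

The new weight is picked up when Netlogon next re-registers the DC's SRV records; restarting the Netlogon service forces a re-registration.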
Example 1 - PDC
System | Utilization with defaults | New LdapSrvWeight | Estimated new utilization
The catch here is that if the PDC emulator role is transferred or seized, particularly to another domain controller
in the site, there will be a dramatic increase in load on the new PDC emulator.
Using the example from the section Target site behavior profile, an assumption was made that all three domain
controllers in the site had four CPUs. What should happen, under normal conditions, if one of the domain
controllers had eight CPUs? There would be two domain controllers at 40% utilization and one at 20%
utilization. While this is not bad, there is an opportunity to balance the load a little bit better. Leverage LDAP
weights to accomplish this. An example scenario would be:
Example 2 - Differing CPU counts
System | Processor Information\% Processor Utility (_Total) with defaults | New LdapSrvWeight | Estimated new utilization
Be very careful with these scenarios, though. As can be seen above, the math looks neat on paper, but
throughout this article, planning for an “N + 1” scenario is of paramount importance: the impact of one DC
going offline must be calculated for every scenario. In the immediately preceding scenario, where the load
distribution is even, the distribution remains fine during an “N” scenario because the ratios stay consistent
and a 60% load can still be ensured across all servers. Looking at the PDC emulator tuning scenario, and in
general any scenario where user or application load is unbalanced, the effect is very different:
System | Tuned utilization | New LdapSrvWeight | Estimated new utilization
DC 1 | 120%
DC 2 | 147%
DC 3 | 218%
This point bears repeating: remember to plan for growth. Assuming 50% growth over the
next three years, this environment will need 18.375 CPUs (12.25 × 1.5) at the three-year mark. An alternate plan
would be to review after the first year and add in additional capacity as needed.
Cross-trust client authentication load for NTLM
Evaluating cross-trust client authentication load
Many environments may have one or more domains connected by a trust. An authentication request for an
identity in another domain that does not use Kerberos authentication needs to traverse a trust using the domain
controller's secure channel to another domain controller either in the destination domain or the next domain in
the path to the destination domain. The number of concurrent calls using the secure channel that a domain
controller can make to a domain controller in a trusted domain is controlled by a setting known as
MaxConcurrentAPI . For domain controllers, ensuring that the secure channel can handle the amount of load is
accomplished by one of two approaches: tuning MaxConcurrentAPI or, within a forest, creating shortcut trusts.
To gauge the volume of traffic across an individual trust, refer to How to do performance tuning for NTLM
authentication by using the MaxConcurrentApi setting.
As with all the other scenarios, this data must be collected during the peak busy periods of the day for it to
be useful.
NOTE
Intraforest and interforest scenarios may cause the authentication to traverse multiple trusts and each stage would need
to be tuned.
Planning
There are a number of applications that use NTLM authentication by default, or use it in a certain configuration
scenario. Application servers grow in capacity and service an increasing number of active clients. There is also a
trend for clients to keep sessions open only for a limited time and instead reconnect on a regular basis (such as email
pull sync). Another common example for high NTLM load is web proxy servers that require authentication for
Internet access.
These applications can cause a significant load for NTLM authentication, which can put significant stress on the
DCs, especially when users and resources are in different domains.
There are multiple approaches to managing cross-trust load, which in practice are used in conjunction rather
than in an exclusive either/or scenario. The possible options are:
Reduce cross-trust client authentication by locating the services that a user consumes in the same domain
that the user is resident in.
Increase the number of secure channels available. This is relevant to intraforest and cross-forest traffic;
the additional trusts are known as shortcut trusts.
Tune the default settings for MaxConcurrentAPI .
For tuning MaxConcurrentAPI on an existing server, the equation is:

New_MaxConcurrentApi_setting ≥ (semaphore_acquires + semaphore_time-outs) × average_semaphore_hold_time ÷ time_collection_length
For more information, see KB article 2688798: How to do performance tuning for NTLM authentication by using
the MaxConcurrentApi setting.
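A sketch of that calculation using the Netlogon counter names from the KB article (the sample numbers below are invented for illustration):

```python
def new_max_concurrent_api(acquires: float, timeouts: float,
                           avg_hold_time_s: float, collection_s: float) -> float:
    """KB 2688798 sizing equation: the setting should be at least
    (acquires + timeouts) * average hold time / collection length."""
    return (acquires + timeouts) * avg_hold_time_s / collection_s

# Illustrative only: 3,000 Semaphore Acquires and 20 Semaphore Timeouts over
# a 15-minute (900 s) window, with a 0.5 s Average Semaphore Hold Time.
print(new_max_concurrent_api(3000, 20, 0.5, 900))  # ~1.68 -> round up to 2
```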
Virtualization considerations
None, this is an operating system tuning setting.
Calculation summary example
Data type | Value
Semaphore Timeouts | 0
For this system for this time period, the default values are acceptable.
Category | Performance counter | Interval/Sampling | Target | Warning
NOTE
The analogy about the rush hour scenario is extended in the next section: Response Time/How the System
Busyness Impacts Performance.
As a result, specifics about more or faster processors become highly dependent on application behavior, which in
the case of AD DS is very environmentally specific and even varies from server to server within an environment.
This is why the references earlier in the article do not invest heavily in being overly precise, and a margin of
safety is included in the calculations. When making budget-driven purchasing decisions, it is recommended that
optimizing usage of the processors at 40% (or the desired number for the environment) occurs first, before
considering buying faster processors. The increased synchronization across more processors reduces the true
benefit of more processors from the linear progression (2× the number of processors provides less than 2×
available additional compute power).
NOTE
Amdahl's Law and Gustafson's Law are the relevant concepts here.
It is observed that after 50% CPU load, on average there is always a wait of one other item in the queue, with a
noticeably rapid increase after about 70% CPU utilization.
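That observation follows from basic queueing behavior. As an illustrative sketch only (assuming an M/M/1 queue, which is far simpler than a real CPU scheduler):

```python
# M/M/1 approximation: the average number of items in the system (one being
# served plus those waiting) is rho / (1 - rho) at utilization rho.
for rho in (0.3, 0.5, 0.7, 0.9):
    in_system = rho / (1 - rho)
    print(f"{rho:.0%} utilization -> ~{in_system:.2f} items in the system")
# 50% -> 1.00 (one other item on average); 70% -> 2.33; 90% -> 9.00
```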
Returning to the driving analogy used earlier in this section:
The busy times of “mid-afternoon” would, hypothetically, fall somewhere into the 40% to 70% range. There is
enough traffic such that one's ability to pick any lane is not majorly restricted, and the chance of another
driver being in the way, while high, does not require the level of effort to “find” a safe gap between other cars
on the road.
One will notice that as traffic approaches rush hour, the road system approaches 100% capacity. Changing
lanes can become very challenging because cars are so close together that increased caution must be
exercised to do so.
This is why long-term capacity averages conservatively estimated at 40% allow head room for
abnormal spikes in load, whether those spikes are transitory (such as poorly coded queries that run for a few
minutes) or abnormal bursts in general load (the morning of the first day after a long weekend).
The earlier statement that the % Processor Time calculation is the same as the Utilization Law is a bit of a
simplification for the ease of the general reader. For those who are more mathematically rigorous:
Translating the PERF_100NSEC_TIMER_INV counter definitions:

B = the number of 100-ns intervals the “Idle” thread spends on the logical processor (the change in the
“X” variable in the PERF_100NSEC_TIMER_INV calculation)
T = the total number of 100-ns intervals in a given time range (the change in the “Y” variable in the
PERF_100NSEC_TIMER_INV calculation)
U_k = the utilization percentage of the logical processor by the “Idle” thread, or % Idle Time

Working out the math:

U_k = B ÷ T
% Processor Time = 1 – U_k
% Processor Time = 1 – (B ÷ T)
% Processor Time = 1 – ((X1 – X0) ÷ (Y1 – Y0))
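A sketch of the same calculation applied to two raw counter samples (the sample values are invented):

```python
def processor_time_pct(x0: int, x1: int, y0: int, y1: int) -> float:
    """PERF_100NSEC_TIMER_INV: X counts the idle thread's 100-ns intervals,
    Y counts total elapsed 100-ns intervals; busy time is the remainder."""
    idle_fraction = (x1 - x0) / (y1 - y0)  # U_k, i.e. % Idle Time
    return (1 - idle_fraction) * 100

# Two samples taken 1 second (10,000,000 * 100 ns) apart, during which the
# idle thread accounted for 6,000,000 of those intervals:
print(processor_time_pct(0, 6_000_000, 0, 10_000_000))  # 40.0
```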
Applying the concepts to capacity planning
The preceding math may make determinations about the number of logical processors needed in a system
seem overwhelmingly complex. This is why the approach to sizing the systems is focused on determining
maximum target utilization based on current load and calculating the number of logical processors required to
get there. Additionally, while logical processor speeds will have a significant impact on performance, cache
efficiencies, memory coherence requirements, thread scheduling and synchronization, and imperfectly balanced
client load will all have significant impacts on performance that will vary on a server-by-server basis. With the
relatively cheap cost of compute power, attempting to analyze and determine the perfect number of CPUs
needed becomes more of an academic exercise than a source of business value.
Forty percent is not a hard-and-fast requirement; it is a reasonable start. Various consumers of Active Directory
require various levels of responsiveness. There may be scenarios where environments can run at 80% or 90%
utilization as a sustained average, as the increased wait times for access to the processor will not noticeably
impact client performance. It is important to re-iterate that there are many areas in the system that are much
slower than the logical processor in the system, including access to RAM, access to disk, and transmitting the
response over the network. All of these items need to be tuned in conjunction. Examples:
Adding more processors to a system running 90% that is disk-bound is probably not going to significantly
improve performance. Deeper analysis of the system will probably identify that there are a lot of threads that
are not even getting on the processor because they are waiting on I/O to complete.
Resolving the disk-bound issues potentially means that threads that were previously spending a lot of time in
a waiting state will no longer be in a waiting state for I/O and there will be more competition for CPU time,
meaning that the 90% utilization in the previous example will go to 100% (because it cannot go higher).
Both components need to be tuned in conjunction.
NOTE
Processor Information(*)\% Processor Utility can exceed 100% with systems that have a "Turbo" mode. This is
where the CPU exceeds the rated processor speed for short periods. Refer to the CPU manufacturer's documentation
and the description of the counter for greater insight.
Discussing whole system utilization considerations also brings into the conversation domain controllers as
virtualized guests. Response time/How the system busyness impacts performance applies to both the host and
the guest in a virtualized scenario. This is why in a host with only one guest, a domain controller (and generally
any system) has near the same performance it does on physical hardware. Adding additional guests to the hosts
increases the utilization of the underlying host, thereby increasing the wait times to get access to the processors
as explained previously. In short, logical processor utilization needs to be managed at both the host and at the
guest levels.
Extending the previous analogies, leaving the highway as the physical hardware, the guest VM will be
analogized with a bus (an express bus that goes straight to the destination the rider wants). Imagine the
following four scenarios:
It is off hours, a rider gets on a bus that is nearly empty, and the bus gets on a road that is also nearly empty.
As there is no traffic to contend with, the rider has a nice easy ride and gets there just as fast as if the rider
had driven instead. The rider's travel times are still constrained by the speed limit.
It is off hours so the bus is nearly empty but most of the lanes on the road are closed, so the highway is still
congested. The rider is on an almost-empty bus on a congested road. While the rider does not have a lot of
competition in the bus for where to sit, the total trip time is still dictated by the rest of the traffic outside.
It is rush hour so the highway and the bus are congested. Not only does the trip take longer, but getting on
and off the bus is a nightmare because people are shoulder to shoulder and the highway is not much better.
Adding more buses (logical processors to the guest) does not mean they can fit on the road any more easily,
or that the trip will be shortened.
The final scenario, though it may be stretching the analogy a little, is where the bus is full, but the road is not
congested. While the rider will still have trouble getting on and off the bus, the trip will be efficient after the
bus is on the road. This is the only scenario where adding more buses (logical processors to the guest) will
improve guest performance.
From there it is relatively easy to extrapolate that there are a number of scenarios in between the 0%-utilized
and the 100%-utilized state of the road and the 0%- and 100%-utilized state of the bus that have varying
degrees of impact.
Applying the principles above, a 40% CPU target for the host as well as the guest is a reasonable start, for the
same reason as above: the amount of queuing.
NOTE
An option would be to turn off power management on the processors (setting the power plan to High Performance )
while data is collected. That would give a more accurate representation of the CPU consumption on the target server.
To adjust estimates for different processors, it used to be safe, excluding other system bottlenecks outlined
above, to assume that doubling processor speeds doubled the amount of processing that could be performed.
Today, the internal architectures of processors differ enough that a safer way to gauge the effect of using
processors other than those the data was taken from is to leverage the SPECint_rate2006 benchmark from the
Standard Performance Evaluation Corporation.
1. Find the SPECint_rate2006 scores for the processors that are in use and that are planned to be used.
a. On the website of the Standard Performance Evaluation Corporation, select Results , highlight
CPU2006 , and select Search all SPECint_rate2006 results .
b. Under Simple Request , enter the search criteria for the target processor, for example Processor
Matches E5-2630 (target) and Processor Matches E5-2650 (baseline) .
c. Find the server and processor configuration to be used (or something close, if an exact match is not
available) and note the value in the Result and # Cores columns.
2. To determine the modifier use the following equation:
((Target platform per-core score value) × (MHz per-core of baseline platform)) ÷ ((Baseline per-core
score value) × (MHz per-core of target platform))
3. Multiply the estimated number of processors by the modifier. In the above case to go from the E5-2650
processor to the E5-2630 processor multiply the calculated 11.25 CPUs × 0.92 = 10.35 processors
needed.
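A sketch of steps 2 and 3 (the per-core scores below are placeholders; substitute the Result ÷ # Cores values found in the SPEC results search):

```python
def cpu_count_modifier(target_score: float, baseline_mhz: float,
                       baseline_score: float, target_mhz: float) -> float:
    """Step 2: ((target per-core score) * (baseline MHz))
              / ((baseline per-core score) * (target MHz))."""
    return (target_score * baseline_mhz) / (baseline_score * target_mhz)

# Placeholder per-core scores/clocks chosen so the modifier works out to the
# 0.92 used in step 3; substitute real values from the SPEC results search.
modifier = cpu_count_modifier(target_score=36.8, baseline_mhz=2000,
                              baseline_score=40.0, target_mhz=2000)
print(11.25 * modifier)  # 11.25 CPUs * 0.92 = ~10.35 processors needed
```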
In this figure the two spindles are mirrored and split into logical areas for data storage (Data 1 and Data 2).
These logical areas are viewed by the operating system as separate physical disks.
Although this can be highly confusing, the following terminology is used throughout this appendix to identify
the different entities:
Spindle – the device that is physically installed in the server.
Array – a collection of spindles aggregated by the controller.
Array partition – a partitioning of the aggregated array.
LUN – an array, used when referring to SANs.
Disk – what the operating system observes to be a single physical disk.
Partition – a logical partitioning of what the operating system perceives as a physical disk.
Operating system architecture considerations
The operating system creates a First In/First Out (FIFO) I/O queue for each disk that is observed; this disk may
represent a spindle, an array, or an array partition. From the operating system perspective, with regard to
handling I/O, the more active queues the better. A FIFO queue is serialized, meaning that all I/Os issued to the
storage subsystem must be processed in the order in which they arrived. By correlating each disk observed by the
operating system with a spindle/array, the operating system now maintains an I/O queue for each unique set of
disks, thereby eliminating contention for scarce I/O resources across disks and isolating I/O demand to a single
disk. As an exception, Windows Server 2008 introduces the concept of I/O prioritization, and applications
designed to use the “Low” priority fall out of this normal order and take a back seat. Applications not specifically
coded to leverage the “Low” priority default to “Normal.”
Introducing simple storage subsystems
Starting with a simple example (a single hard drive inside a computer) a component-by-component analysis will
be given. Breaking this down into the major storage subsystem components, the system consists of:
1 – 10,000 RPM Ultra Fast SCSI HD (Ultra Fast SCSI has a 20 MB/s transfer rate)
1 – SCSI Bus (the cable)
1 – Ultra Fast SCSI Adapter
1 – 32-bit 33 MHz PCI bus
Once the components are identified, an idea of how much data can transit the system, or how much I/O can be
handled, can be calculated. Note that the amount of I/O and quantity of data that can transit the system is
correlated, but not the same. This correlation depends on whether the disk I/O is random or sequential and the
block size. (All data is written to the disk as a block, but different applications use different block sizes.) On a
component-by-component basis:
The hard drive – The average 10,000-RPM hard drive has a 7-millisecond (ms) seek time and a 3 ms
access time. Seek time is the average amount of time it takes the read/write head to move to a location
on the platter. Access time is the average amount of time it takes to read or write the data to disk, once
the head is in the correct location. Thus, the average time for reading a unique block of data in a 10,000-
RPM HD constitutes a seek and an access, for a total of approximately 10 ms (or .010 seconds) per block
of data.
When every disk access requires movement of the head to a new location on the disk, the read/write
behavior is referred to as “random.” Thus, when all I/O is random, a 10,000-RPM HD can handle
approximately 100 I/O per second (IOPS) (the formula is 1000 ms per second divided by 10 ms per I/O
or 1000/10=100 IOPS).
Alternatively, when all I/O occurs from adjacent sectors on the HD, this is referred to as sequential I/O.
Sequential I/O has no seek time because when the first I/O is complete, the read/write head is at the start
of where the next block of data is stored on the HD. Thus a 10,000-RPM HD is capable of handling
approximately 333 I/O per second (1000 ms per second divided by 3 ms per I/O).
NOTE
This example does not reflect the disk cache, where the data of one cylinder is typically kept. In this case, the 10
ms are needed on the first I/O and the disk reads the whole cylinder. All other sequential I/O is satisfied from the
cache. As a result, in-disk caches might improve sequential I/O performance.
So far, the transfer rate of the hard drive has been irrelevant. Whether the hard drive is 20 MB/s Ultra
Wide or an Ultra3 160 MB/s, the actual amount of IOPS that can be handled by the 10,000-RPM HD is
~100 random or ~300 sequential I/O. As block sizes change based on the application writing to the drive,
the amount of data that is pulled per I/O is different. For example, if the block size is 8 KB, 100 I/O
operations will read from or write to the hard drive a total of 800 KB. However, if the block size is 32 KB,
100 I/O will read/write 3,200 KB (3.2 MB) to the hard drive. As long as the SCSI transfer rate is in excess
of the total amount of data transferred, getting a “faster” transfer rate drive will gain nothing. See the
following tables for comparison.
SCSI backplane (bus) – Understanding how the “SCSI backplane (bus)”, or in this scenario the ribbon
cable, impacts throughput of the storage subsystem depends on knowledge of the block size. Essentially
the question would be, how much I/O can the bus handle if the I/O is in 8 KB blocks? In this scenario, the
SCSI bus is 20 MB/s, or 20480 KB/s. 20480 KB/s divided by 8 KB blocks yields a maximum of
approximately 2500 IOPS supported by the SCSI bus.
NOTE
The figures in the following table represent an example. Most attached storage devices currently use PCI Express,
which provides much higher throughput.
As can be determined from this chart, in the scenario presented, no matter what the use, the bus will
never be a bottleneck, as the spindle maximum is 100 I/O, well below any of the above thresholds.
NOTE
This assumes that the SCSI bus is 100% efficient.
SCSI adapter – For determining the amount of I/O that this can handle, the manufacturer's
specifications need to be checked. Directing I/O requests to the appropriate device requires processing of
some sort, thus the amount of I/O that can be handled is dependent on the SCSI adapter (or array
controller) processor.
In this example, the assumption that 1,000 I/O can be handled will be made.
PCI bus – This is an often overlooked component. In this example, this will not be the bottleneck;
however, as systems scale up, it can become a bottleneck. For reference, a 32-bit PCI bus operating at
33 MHz can in theory transfer 133 MB/s of data. Following is the equation:

(32 bits ÷ 8 bits per byte × 33.33 MHz) = 133 MB/s
Note that this is the theoretical limit; in reality only about 50% of the maximum is actually reached, although
in certain burst scenarios, 75% efficiency can be obtained for short periods.
A 66 MHz 64-bit PCI bus can support a theoretical maximum of (64 bits ÷ 8 bits per byte × 66 MHz) = 528
MB/sec. Additionally, any other device (such as the network adapter, second SCSI controller, and so on)
will reduce the bandwidth available as the bandwidth is shared and the devices will contend for the
limited resources.
After analysis of the components of this storage subsystem, the spindle is the limiting factor in the amount of
I/O that can be requested, and consequently the amount of data that can transit the system. Specifically, in an AD
DS scenario, this is 100 random I/O per second in 8 KB increments, for a total of 800 KB per second when
accessing the Jet database. Alternatively, the maximum throughput for a spindle that is exclusively allocated to
log files would suffer the following limitations: 300 sequential I/O per second in 8 KB increments, for a total of
2400 KB (2.4 MB) per second.
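The component arithmetic for this simple subsystem condenses to a few lines (the figures are this example's, not general limits):

```python
# 10,000-RPM spindle: ~7 ms seek + ~3 ms access per random I/O.
seek_ms, access_ms = 7, 3
random_iops = 1000 / (seek_ms + access_ms)  # 100 IOPS when every I/O seeks
sequential_iops = 1000 / access_ms          # ~333 IOPS (rounded to ~300 above)

block_kb = 8                                # AD DS Jet database I/O size
bus_iops = 20 * 1024 / block_kb             # ~2,560 IOPS on the 20 MB/s bus
print(f"random:     {random_iops:.0f} IOPS = {random_iops * block_kb:.0f} KB/s")
print(f"sequential: {sequential_iops:.0f} IOPS = {sequential_iops * block_kb:.0f} KB/s")
print(f"bus ceiling: {bus_iops:.0f} IOPS -> the spindle is the bottleneck")
```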
Now, having analyzed a simple configuration, the following table demonstrates where the bottleneck will occur
as components in the storage subsystem are changed or added.
Analysis | Bottleneck (Disk / Bus / Adapter / PCI bus) | Notes
Introducing RAID
The nature of a storage subsystem does not change dramatically when an array controller is introduced; it just
replaces the SCSI adapter in the calculations. What does change is the cost of reading and writing data to the
disk when using the various array levels (such as RAID 0, RAID 1, or RAID 5).
In RAID 0, the data is striped across all the disks in the RAID set. This means that during a read or a write
operation, a portion of the data is pulled from or pushed to each disk, increasing the amount of data that can
transit the system during the same time period. Thus, in one second, on each spindle (again assuming 10,000-
RPM drives), 100 I/O operations can be performed. The total amount of I/O that can be supported is N spindles
times 100 I/O per second per spindle (yields 100*N I/O per second).
In RAID 1, the data is mirrored (duplicated) across a pair of spindles for redundancy. Thus, when a read I/O
operation is performed, data can be read from both of the spindles in the set. This effectively makes the I/O
capacity from both disks available during a read operation. The caveat is that write operations gain no
performance advantage in a RAID 1. This is because the same data needs to be written to both drives for the
sake of redundancy. Though it does not take any longer, as the write of data occurs concurrently on both
spindles, because both spindles are occupied duplicating the data, a write I/O operation in essence prevents two
read operations from occurring. Thus, every write I/O costs two read I/O. A formula can be created from that
information to determine the total number of I/O operations that are occurring:

Read I/O + 2 × (Write I/O) = Total consumed spindle I/O
When the ratio of reads to writes and the number of spindles are known, the following equation can be derived
from the above equation to identify the maximum I/O that can be supported by the array:
Maximum IOPS per spindle × 2 spindles × [(%Reads + %Writes) ÷ (%Reads + 2 × %Writes)] = Total IOPS
RAID 1+0 behaves exactly the same as RAID 1 regarding the expense of reading and writing. However, the I/O
is now striped across each mirrored set. If
Maximum IOPS per spindle × 2 spindles × [(%Reads + %Writes) ÷ (%Reads + 2 × %Writes)] = Total I/O
in a RAID 1 set, when a multiplicity (N) of RAID 1 sets are striped, the Total I/O that can be processed becomes N
× I/O per RAID 1 set:
N × {Maximum IOPS per spindle × 2 spindles × [(%Reads + %Writes) ÷ (%Reads + 2 × %Writes)] } = Total
IOPS
In RAID 5, sometimes referred to as N + 1 RAID, the data is striped across N spindles and parity information is
written to the “+ 1” spindle. However, RAID 5 is much more expensive when performing a write I/O than RAID 1
or 1 + 0. RAID 5 performs the following process every time a write I/O is submitted to the array:
1. Read the old data
2. Read the old parity
3. Write the new data
4. Write the new parity
As every write I/O request that is submitted to the array controller by the operating system requires four I/O
operations to complete, write requests submitted take four times as long to complete as a single read I/O. To
derive a formula to translate I/O requests from the operating system perspective to that experienced by the
spindles:

Read I/O + 4 × (Write I/O) = Total consumed spindle I/O
As with RAID 1, when the ratio of reads to writes and the number of spindles are known, the following
equation can be derived from the above equation to identify the maximum I/O that can be supported by the
array (Note that total number of spindles does not include the “drive” lost to parity):
IOPS per spindle × (Spindles – 1) × [(%Reads + %Writes) ÷ (%Reads + 4 × %Writes)] = Total IOPS
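The three formulas share one shape: raw spindle IOPS scaled down by the write penalty of the RAID level. A sketch (RAID 1 and 1+0 writes cost 2 spindle I/Os, RAID 5 writes cost 4):

```python
def array_iops(spindle_iops: float, data_spindles: int,
               read_frac: float, write_penalty: int) -> float:
    """Effective IOPS the OS sees: raw spindle IOPS scaled by the read/write
    mix, where each logical write consumes 'write_penalty' spindle I/Os."""
    write_frac = 1 - read_frac
    raw = spindle_iops * data_spindles
    return raw * (read_frac + write_frac) / (read_frac + write_penalty * write_frac)

# 10,000-RPM spindles (~100 IOPS each), 70% read / 30% write workload:
print(array_iops(100, 2, 0.7, 2))      # RAID 1 pair:              ~154 IOPS
print(array_iops(100, 8, 0.7, 2))      # RAID 1+0, 8 spindles:     ~615 IOPS
print(array_iops(100, 8 - 1, 0.7, 4))  # RAID 5, 8 spindles (N+1): ~368 IOPS
```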
Introducing SANs
Expanding the complexity of the storage subsystem, when a SAN is introduced into the environment, the basic
principles outlined do not change; however, I/O behavior for all of the systems connected to the SAN needs to be
taken into account. As one of the major advantages in using a SAN is an additional amount of redundancy over
internally or externally attached storage, capacity planning now needs to take into account fault tolerance needs.
Also, more components are introduced that need to be evaluated. Breaking a SAN down into the component
parts:
SCSI or Fibre Channel hard drive
Storage unit channel backplane
Storage units
Storage controller module
SAN switch(es)
HBA(s)
The PCI bus
When designing any system for redundancy, additional components are included to accommodate the potential
of failure. It is very important, when capacity planning, to exclude the redundant component from available
resources. For example, if the SAN has two controller modules, the I/O capacity of one controller module is all
that should be used for total I/O throughput available to the system. This is due to the fact that if one controller
fails, the entire I/O load demanded by all connected systems will need to be processed by the remaining
controller. As all capacity planning is done for peak usage periods, redundant components should not be
factored into the available resources and planned peak utilization should not exceed 80% saturation of the
system (in order to accommodate bursts or anomalous system behavior). Similarly, the redundant SAN switch,
storage unit, and spindles should not be factored into the I/O calculations.
When analyzing the behavior of the SCSI or Fibre Channel hard drive, the method of analyzing the behavior as
outlined previously does not change. Although there are certain advantages and disadvantages to each protocol,
the limiting factor on a per disk basis is the mechanical limitation of the hard drive.
Analyzing the channel on the storage unit is exactly the same as calculating the resources available on the SCSI
bus, or bandwidth (such as 20 MB/s) divided by block size (such as 8 KB). Where this deviates from the simple
previous example is in the aggregation of multiple channels. For example, if there are 6 channels, each
supporting 20 MB/s maximum transfer rate, the total amount of I/O and data transfer that is available is 100
MB/s (this is correct, it is not 120 MB/s). Again, fault tolerance is a major player in this calculation: in the event of
the loss of an entire channel, the system is left with only 5 functioning channels. Thus, to ensure continuing to
meet performance expectations in the event of failure, total throughput for all of the storage channels should
not exceed 100 MB/s (this assumes load and fault tolerance is evenly distributed across all channels). Turning
this into an I/O profile is dependent on the behavior of the application. In the case of Active Directory Jet I/O, this
would correlate to approximately 12,500 I/O per second (100 MB/s ÷ 8 KB per I/O).
Next, obtaining the manufacturer's specifications for the controller modules is required in order to gain an
understanding of the throughput each module can support. In this example, the SAN has two controller
modules that support 7,500 I/O each. The total throughput of the system may be 15,000 IOPS if redundancy is
not desired. In calculating maximum throughput in the case of failure, the limitation is the throughput of one
controller, or 7,500 IOPS. This threshold is well below the 12,500 IOPS (assuming 8 KB block size) maximum that
can be supported by all of the storage channels, and thus, is currently the bottleneck in the analysis. Still, for
planning purposes, the desired maximum I/O to be planned for would be 6,000 I/O (80% of the 7,500 IOPS that one controller supports).
When the data exits the controller module, it transits a Fibre Channel connection rated at 1 Gb/s (that is, 1 gigabit
per second). To correlate this with the other metrics, 1 Gb/s turns into 128 MB/s (1 Gb/s ÷ 8 bits per byte). As this is
in excess of the total bandwidth across all channels in the storage unit (100 MB/s), this will not bottleneck the
system. Additionally, as this is only one of the two channels (the additional 1 GB/s Fibre Channel connection
being for redundancy), if one connection fails, the remaining connection still has enough capacity to handle all
the data transfer demanded.
En route to the server, the data will most likely transit a SAN switch. As the SAN switch has to process the
incoming I/O request and forward it out the appropriate port, the switch will have a limit to the amount of I/O
that can be handled; however, the manufacturer's specifications will be required to determine what that limit is. For
example, if there are two switches and each switch can handle 10,000 IOPS, the total throughput will be 20,000
IOPS. Again, fault tolerance being a concern, if one switch fails, the total throughput of the system will be 10,000
IOPS. As it is desired not to exceed 80% utilization in normal operation, using no more than 8000 I/O should be
the target.
Finally, the HBA installed in the server would also have a limit to the amount of I/O that it can handle. Usually, a
second HBA is installed for redundancy, but just like with the SAN switch, when calculating maximum I/O that
can be handled, the total throughput of N – 1 HBAs is what the maximum scalability of the system is.
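The whole walk-through reduces to taking the minimum surviving capacity across components, then applying the 80% planning ceiling stated earlier. A sketch with this example's figures:

```python
# Non-redundant (surviving) IOPS capacity of each component in the example.
surviving = {
    "storage channels (5 of 6, 8 KB I/O)": 5 * 2500,  # 12,500 IOPS
    "controller module (1 of 2)": 7500,
    "SAN switch (1 of 2)": 10000,
}
bottleneck = min(surviving, key=surviving.get)
print(f"bottleneck: {bottleneck} at {surviving[bottleneck]:,} IOPS")
print(f"80% planning ceiling: {surviving[bottleneck] * 0.8:,.0f} IOPS")
```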
Caching considerations
Caches are one of the components that can significantly impact the overall performance at any point in the
storage system. Detailed analysis about caching algorithms is beyond the scope of this article; however, some
basic statements about caching on disk subsystems are worth illuminating:
Caching does improve sustained sequential write I/O, as it can buffer many smaller write operations into
larger I/O blocks and de-stage them to storage in fewer, larger blocks. This reduces total random I/O
and total sequential I/O, thus providing more resource availability for other I/O.
Caching does not improve sustained write I/O throughput of the storage subsystem. It only allows for the
writes to be buffered until the spindles are available to commit the data. When all the available I/O of the
spindles in the storage subsystem is saturated for long periods, the cache will eventually fill up. In order
to empty the cache, enough time between bursts, or extra spindles, need to be allotted in order to provide
enough I/O to allow the cache to flush.
Larger caches only allow for more data to be buffered. This means longer periods of saturation can be
accommodated.
In a normally operating storage subsystem, the operating system will experience improved write
performance as the data only needs to be written to cache. Once the underlying media is saturated with
I/O, the cache will fill and write performance will return to disk speed.
When caching read I/O, the scenario where the cache is most advantageous is when the data is stored
sequentially on the disk, and the cache can read-ahead (it makes the assumption that the next sector
contains the data that will be requested next).
When read I/O is random, caching at the drive controller is unlikely to provide any enhancement to the
amount of data that can be read from the disk. Any enhancement is non-existent if the operating system
or application-based cache size is greater than the hardware-based cache size.
In the case of Active Directory, the cache is only limited by the amount of RAM.
SSD considerations
SSDs are a completely different animal than spindle-based hard disks. Yet the two key criteria remain: “How
many IOPS can it handle?” and “What is the latency for those IOPS?” In comparison to spindle-based hard disks,
SSDs can handle higher volumes of I/O and can have lower latencies. In general and as of this writing, while
SSDs are still expensive in a cost-per-Gigabyte comparison, they are very cheap in terms of cost-per-I/O and
deserve significant consideration in terms of storage performance.
Considerations:
Both IOPS and latencies are highly dependent on the manufacturer's design, and in some cases have been
observed to perform more poorly than spindle-based technologies. In short, it is more important to review
and validate the manufacturer's specs drive by drive than to assume any generalities.
IOPS figures can differ greatly depending on whether the I/O is read or write. AD DS, being predominantly
read-based in general, will be less affected than some other application scenarios.
“Write endurance” – this is the concept that SSD cells will eventually wear out. Various manufacturers deal
with this challenge in different fashions. At least for the database drive, the predominantly read I/O profile
allows for downplaying the significance of this concern as the data is not highly volatile.
Summary
One way to think about storage is picturing household plumbing. Imagine the IOPS of the media that the data is
stored on is the household main drain. When this is clogged (such as roots in the pipe) or limited (it is collapsed
or too small), all the sinks in the household back up when too much water is being used (too many guests). This
is perfectly analogous to a shared environment where one or more systems are leveraging shared storage on a
SAN/NAS/iSCSI device with the same underlying media. Different approaches can be taken to resolve the different
scenarios:
A collapsed or undersized drain requires a full scale replacement and fix. This would be similar to adding in
new hardware or redistributing the systems using the shared storage throughout the infrastructure.
A “clogged” pipe usually means identification of one or more offending problems and removal of those
problems. In a storage scenario this could be storage or system level backups, synchronized antivirus scans
across all servers, and synchronized defragmentation software running during peak periods.
In any plumbing design, multiple drains feed into the main drain. If anything stops up one of those drains or a
junction point, only the things behind that junction point back up. In a storage scenario, this could be an
overloaded switch (SAN/NAS/iSCSI scenario), driver compatibility issues (wrong driver/HBA
Firmware/storport.sys combination), or backup/antivirus/defragmentation. To determine if the storage “pipe” is
big enough, IOPS and I/O size needs to be measured. At each joint add them together to ensure adequate “pipe
diameter.”
NOTE
It is normal for short periods to observe the latencies climb when components aggressively read or write to disk, such as
when the system is being backed up or when AD DS is running garbage collection. Additional head room on top of the
calculations should be provided to accommodate these periodic events. The goal is to provide enough throughput to
accommodate these scenarios without impacting normal function.
As can be seen, there is a physical limit based on the storage design to how quickly the cache can possibly
warm. What will warm the cache are incoming client requests up to the rate that the underlying storage can
provide. Running scripts to “pre-warm” the cache during peak hours will provide competition to load driven by
real client requests. That can adversely affect delivering data that clients need first because, by design, it will
generate competition for scarce disk resources as artificial attempts to warm the cache will load data that is not
relevant to the clients contacting the DC.
Proper placement of domain controllers and site
considerations
Proper site definition is critical to performance. Clients falling out of site can experience poor performance for
authentications and queries. Furthermore, with the introduction of IPv6 on clients, the request can come from
either the IPv4 or the IPv6 address and Active Directory needs to have sites properly defined for IPv6. The
operating system prefers IPv6 to IPv4 when both are configured.
Starting in Windows Server 2008, the domain controller attempts to use name resolution to do a reverse lookup
in order to determine the site the client should be in. This can cause exhaustion of the ATQ Thread Pool and
cause the domain controller to become unresponsive. The appropriate resolution to this is to properly define the
site topology for IPv6. As a workaround, one can optimize the name resolution infrastructure to respond quickly
to domain controller requests. For more info see Windows Server 2008 or Windows Server 2008 R2 Domain
Controller delayed response to LDAP or Kerberos requests.
An additional area of consideration is locating Read/Write DCs for scenarios where RODCs are in use. Certain
operations require access to a writable Domain Controller or target a writable Domain Controller when a Read-
Only Domain Controller would suffice. Optimizing these scenarios would take two paths:
Avoid contacting writable Domain Controllers when a Read-Only Domain Controller would suffice. This requires an
application code change.
Where a writable Domain Controller is genuinely necessary, place read-write Domain Controllers at central
locations to minimize latency.
For further information reference:
Application Compatibility with RODCs
Active Directory Service Interface (ADSI) and the Read Only Domain Controller (RODC) – Avoiding
performance issues
Consider placing domain controllers from trusted and trusting domains in the same physical
location.
For all trust scenarios, credentials are routed according to the domain specified in the authentication requests.
This is also true for queries to the LookupAccountName and LsaLookupNames APIs (as well as others; these are just
the most commonly used). When the domain parameter for these APIs is passed a NULL value, the
domain controller will attempt to find the account name specified in every trusted domain available.
Disable checking all available trusts when a NULL domain is specified (see the registry sketch after this
list). How to restrict the lookup of isolated names in external trusted domains by using the
LsaLookupRestrictIsolatedNameLevel registry entry
Disable passing authentication requests with NULL domain specified across all available trusts. The
Lsass.exe process may stop responding if you have many external trusts on an Active Directory domain
controller
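A minimal sketch of the first workaround, assuming the registry location described in the referenced article (a value of 1 restricts isolated-name lookups from being sent to external trusts; run elevated, and validate against the article before deploying):

```python
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Lsa"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 1 = do not search external trusted domains for isolated (NULL-domain) names.
    winreg.SetValueEx(key, "LsaLookupRestrictIsolatedNameLevel", 0,
                      winreg.REG_DWORD, 1)
```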
Additional References
Performance tuning Active Directory Servers
Hardware considerations
LDAP considerations
Troubleshooting ADDS performance
Capacity Planning for Active Directory Domain Services
Hardware considerations in ADDS performance
tuning
IMPORTANT
The following is a summary of the key recommendations and considerations to optimize server hardware for Active
Directory workloads covered in greater depth in the Capacity Planning for Active Directory Domain Services article.
Readers are highly encouraged to review Capacity Planning for Active Directory Domain Services for a greater technical
understanding and implications of these recommendations.
Additional References
Performance tuning Active Directory Servers
LDAP considerations
Proper placement of domain controllers and site considerations
Troubleshooting ADDS performance
Capacity Planning for Active Directory Domain Services
Memory usage considerations for AD DS
performance tuning
This article describes some basics of the Local Security Authority Subsystem Service (LSASS, also known as the
Lsass.exe process), best practices for the configuration of LSASS, and expectations for memory usage. This
article should be used as a guide in the analysis of LSASS performance and memory use on domain controllers
(DCs). The information in this article may be useful if you have questions about how to tune and configure
servers and DCs to optimize this engine.
LSASS is responsible for management of local security authority (LSA) domain authentication and Active
Directory management. LSASS handles authentication for both the client and the server, and it also governs the
Active Directory engine. LSASS is responsible for the following components:
Local Security Authority
NetLogon service
Security Accounts Manager (SAM) service
LSA Server service
Secure Sockets Layer (SSL)
Kerberos v5 authentication protocol
NTLM authentication protocol
Other authentication packages that load into LSA
The Active Directory database services (NTDSAI.dll) work with the Extensible Storage Engine (ESE, ESENT.dll).
Here is a visual diagram of LSASS memory usage on a DC:
The amount of memory that LSASS uses on a DC increases in accordance with Active Directory usage. When
data is queried, it is cached in memory. As a result, it is normal to see LSASS using an amount of memory that is
larger than the size of the Active Directory database file (NTDS.dit).
As illustrated in the diagram, LSASS memory usage can be divided into several parts, including the ESE
database buffer cache, the ESE version store, and others. The rest of this article provides insight into each of
these parts.
ESE database buffer cache
The largest variable memory usage within LSASS is the ESE database buffer cache. The size of the cache can
range from less than 1 MB to the size of the entire database. Because a larger cache improves performance, the
database engine for Active Directory (ESENT) attempts to keep the cache as large as possible. While the size of
the cache varies with memory pressure in the computer, the maximum size of the ESE database buffer cache is
only limited by physical RAM installed in the computer. As long as there is no other memory pressure, the cache
can grow to the size of the Active Directory NTDS.dit database file. The more of the database that can be cached,
the better the performance of the DC will be.
NOTE
Because of the way that the database caching algorithm works, on a 64-bit system on which the database size is smaller
than the available RAM, the database cache can grow larger than the database size by 30 to 40 percent.
NOTE
High values here can also be indicators of delays in "proxying" requests to other domains and CRL checks.
NTDS\Estimated Queue Delay – This should ideally be near 0 for optimal performance as this
means that requests spend no time waiting to be serviced.
These scenarios can be detected using one or more of the following approaches:
Determining Query Timing with the Statistics Control
Tracking Expensive and Inefficient Searches
Active Directory Diagnostics Data Collector Set in Performance Monitor (Son of SPA: AD Data Collector
Sets in Win2008 and beyond)
Searches using any filter besides "(objectClass=*)" that use the Ancestors Index.
Other index considerations
Ensure that creating the index is the right solution to the problem after tuning the query has been
exhausted as an option. Sizing hardware properly is very important. Indices should be added only when
the right fix is to index the attribute, not as an attempt to mask hardware problems.
Indices increase the size of the database by a minimum of the total size of the attribute being indexed. An
estimate of database growth can therefore be evaluated by taking the average size of the data in the
attribute and multiplying by the number of objects that will have the attribute populated. Generally this is
about a 1% increase in database size. For more info, see How the Data Store Works.
If search behavior is predominantly done at the organization unit level, consider indexing for
containerized searches.
Tuple indices are larger than normal indices, but it is much harder to estimate the size. Use normal indices
size estimates as the floor for growth, with a maximum of 20%. For more info, see How the Data Store
Works.
Tuple Indices are needed to support medial search strings and final search strings. Tuple indices are not
needed for initial search strings.
Initial Search String – (samAccountName=MYPC*)
Medial Search String - (samAccountName=*MYPC*)
Final Search String – (samAccountName=*MYPC$)
Creating an index will generate disk I/O while the index is being built. This is done on a background
thread with lower priority and incoming requests will be prioritized over the index build. If capacity
planning for the environment has been done correctly, this should be transparent. However, write-heavy
scenarios or an environment where the load on the domain controller storage is unknown could degrade
client experience and should be done off-hours.
The effect on replication traffic is minimal, since building indices occurs locally.
For more info, see the following:
Creating More Efficient Microsoft Active Directory-Enabled Applications
Searching in Active Directory Domain Services
Indexed Attributes
Additional References
Performance tuning Active Directory Servers
Hardware considerations
Proper placement of domain controllers and site considerations
Troubleshooting ADDS performance
Capacity Planning for Active Directory Domain Services
Troubleshooting Active Directory Domain Services
performance
For additional information on ADDS performance troubleshooting, see Monitoring Your Branch Office
Environment.
Additional References
Performance tuning Active Directory Servers
Hardware considerations
LDAP considerations
Proper placement of domain controllers and site considerations
Capacity Planning for Active Directory Domain Services
Performance tuning for file servers
You should select the proper hardware to satisfy the expected file server load, considering average load, peak
load, capacity, growth plans, and response times. Hardware bottlenecks limit the effectiveness of software
tuning.
ConnectionCountPerNetworkInterface
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\ConnectionCountPerNetworkInterface
Applies to Windows 10, Windows 8.1, Windows 8, Windows Server 2022, Windows Server 2016,
Windows Server 2012 R2, and Windows Server 2012
The default is 1, and we strongly recommend using the default. The valid range is 1-16. The maximum
number of connections per interface to be established with a server for non-RSS interfaces.
ConnectionCountPerRssNetworkInterface
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\ConnectionCountPerRssNetworkInterface
Applies to Windows 10, Windows 8.1, Windows 8, Windows Server 2022, Windows Server 2016,
Windows Server 2012 R2, and Windows Server 2012
The default is 4, and we strongly recommend using the default. The valid range is 1-16. The maximum
number of connections per interface to be established with a server for RSS interfaces.
ConnectionCountPerRdmaNetworkInterface
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\ConnectionCountPerRdmaNetworkInterface
Applies to Windows 10, Windows 8.1, Windows 8, Windows Server 2022, Windows Server 2016,
Windows Server 2012 R2, and Windows Server 2012
The default is 2, and we strongly recommend using the default. The valid range is 1-16. The maximum
number of connections per interface to be established with a server for RDMA interfaces.
MaximumConnectionCountPerServer
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\MaximumConnectionCountPerServer
Applies to Windows 10, Windows 8.1, Windows 8, Windows Server 2022, Windows Server 2016,
Windows Server 2012 R2, and Windows Server 2012
The default is 32, with a valid range from 1-64. The maximum number of connections to be established
with a single server running Windows Server 2012 across all interfaces.
DormantDirectoryTimeout
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\DormantDirectoryTimeout
Applies to Windows 10, Windows 8.1, Windows 8, Windows Server 2022, Windows Server 2016,
Windows Server 2012 R2, and Windows Server 2012
The default is 600 seconds. This is the maximum time that the server holds directory handles open with directory leases.
FileInfoCacheLifetime
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\FileInfoCacheLifetime
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 10 seconds. The file information cache timeout period.
DirectoryCacheLifetime
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\DirectoryCacheLifetime
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 10 seconds. This is the directory cache timeout.
NOTE
This parameter controls caching of directory metadata in the absence of directory leases.
NOTE
A known issue in Windows 10, version 1803, affects the ability of Windows 10 to cache large directories. After you
upgrade a computer to Windows 10, version 1803, you access a network share that contains thousands of files
and folders, and you open a document that is located on that share. During both of these operations, you
experience significant delays.
To resolve this issue, install Windows 10, version 1809 or a later version.
To work around this issue, set DirectoryCacheLifetime to 0.
This issue affects the following editions of Windows 10:
Windows 10 Enterprise, version 1803
Windows 10 Pro for Workstations, version 1803
Windows 10 Pro Education, version 1803
Windows 10 Professional, version 1803
Windows 10 Education, version 1803
Windows 10 Home, version 1803
DirectoryCacheEntrySizeMax
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\DirectoryCacheEntrySizeMax
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 64 KB. This is the maximum size of directory cache entries.
FileNotFoundCacheLifetime
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\FileNotFoundCacheLifetime
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 5 seconds. The file not found cache timeout period.
CacheFileTimeout
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\CacheFileTimeout
Applies to Windows 8.1, Windows 8, Windows Server 2012, Windows Server 2012 R2, and Windows 7
The default is 10 seconds. This setting controls the length of time (in seconds) that the redirector will hold
on to cached data for a file after the last handle to the file is closed by an application.
DisableBandwidthThrottling
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\DisableBandwidthThrottling
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 0. By default, the SMB redirector throttles throughput across high-latency network
connections, in some cases to avoid network-related timeouts. Setting this registry value to 1 disables
this throttling, enabling higher file transfer throughput over high-latency network connections.
DisableLargeMtu
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\DisableLargeMtu
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 0 for Windows 8 only. In Windows 8, the SMB redirector transfers payloads as large as
1 MB per request, which can improve file transfer speed. Setting this registry value to 1 limits the request
size to 64 KB. You should evaluate the impact of this setting before applying it.
RequireSecuritySignature
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\RequireSecuritySignature
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 0, disabling SMB Signing. Changing this value to 1 enables SMB signing for all SMB
communication, preventing SMB communication with computers where SMB signing is disabled. SMB
signing can increase CPU cost and network round trips, but helps block man-in-the-middle attacks. If
SMB signing is not required, ensure that this registry value is 0 on all clients and servers.
For more info, see The Basics of SMB Signing.
FileInfoCacheEntriesMax
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\FileInfoCacheEntriesMax
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 64, with a valid range of 1 to 65536. This value is used to determine the amount of file
metadata that can be cached by the client. Increasing the value can reduce network traffic and increase
performance when a large number of files are accessed.
DirectoryCacheEntriesMax
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\DirectoryCacheEntriesMax
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 16, with a valid range of 1 to 4096. This value is used to determine the amount of directory
information that can be cached by the client. Increasing the value can reduce network traffic and increase
performance when large directories are accessed.
FileNotFoundCacheEntriesMax
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\FileNotFoundCacheEntriesMax
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 128, with a valid range of 1 to 65536. This value is used to determine the amount of file
name information that can be cached by the client. Increasing the value can reduce network traffic and
increase performance when a large number of file names are accessed.
MaxCmds
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\MaxCmds
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 15. This parameter limits the number of outstanding requests on a session. Increasing the
value can use more memory, but it can improve performance by enabling a deeper request
pipeline. Increasing the value in conjunction with MaxMpxCt can also eliminate errors that are
encountered due to large numbers of outstanding long-term file requests, such as
FindFirstChangeNotification calls. This parameter does not affect connections with SMB 2.0 servers.
DormantFileLimit
HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\DormantFileLimit
Applies to Windows 10, Windows 8.1, Windows 8, Windows 7, Windows Vista, Windows Server 2022,
Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2, and
Windows Server 2008
The default is 1023. This parameter specifies the maximum number of files that should be left open on a
shared resource after the application has closed the file.
Client tuning example
The general tuning parameters for client computers can optimize a computer for accessing remote file shares,
particularly over some high-latency networks (such as branch offices, cross-datacenter communication, home
offices, and mobile broadband). The settings are not optimal or appropriate on all computers. You should
evaluate the impact of individual settings before applying them.
PARAMETER                      VALUE    DEFAULT
DisableBandwidthThrottling     1        0
FileInfoCacheEntriesMax        32768    64
DirectoryCacheEntriesMax       4096     16
MaxCmds                        32768    15
Starting in Windows 8, you can configure many of these SMB settings by using the Set-SmbClientConfiguration
and Set-SmbServerConfiguration Windows PowerShell cmdlets. Registry-only settings can be configured by
using Windows PowerShell as well.
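For example, the client tuning values in the table above could be applied with a short sketch like the following; EnableBandwidthThrottling is the cmdlet's inverse of the DisableBandwidthThrottling registry value, and MaxCmds is assumed here to be registry-only.

# Cmdlet-exposed settings (values from the client tuning example above)
Set-SmbClientConfiguration -EnableBandwidthThrottling $false -FileInfoCacheEntriesMax 32768 -DirectoryCacheEntriesMax 4096 -Force
# MaxCmds is assumed to have no cmdlet parameter, so write it directly to the registry
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\LanmanWorkstation\Parameters' -Name MaxCmds -Type DWord -Value 32768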
Smb2CreditsMin and Smb2CreditsMax
HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters\Smb2CreditsMin
HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters\Smb2CreditsMax
The defaults are 512 and 8192, respectively. These parameters allow the server to throttle client
operation concurrency dynamically within the specified boundaries. Some clients might achieve
increased throughput with higher concurrency limits, for example, copying files over high-bandwidth,
high-latency links.
TIP
Prior to Windows 10 and Windows Server 2016, the number of credits granted to the client varied dynamically
between Smb2CreditsMin and Smb2CreditsMax based on an algorithm that attempted to determine the optimal
number of credits to grant based on network latency and credit usage. In Windows 10 and Windows Server 2016,
the SMB server was changed to unconditionally grant credits upon request up to the configured maximum
number of credits. As part of this change, the credit throttling mechanism, which reduces the size of each
connection's credit window when the server is under memory pressure, was removed. The kernel's low memory
event that triggered throttling is only signaled when the server is so low on memory (< a few MB) as to be
useless. Since the server no longer shrinks credit windows, the Smb2CreditsMin setting is no longer necessary and
is now ignored.
You can monitor SMB Client Shares\Credit Stalls/Sec to see if there are any issues with credits.
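To sample that counter from PowerShell instead of Performance Monitor, a sketch (the interval and sample count are arbitrary):

Get-Counter -Counter '\SMB Client Shares(*)\Credit Stalls/Sec' -SampleInterval 5 -MaxSamples 12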
AdditionalCriticalWorkerThreads
HKLM\System\CurrentControlSet\Control\Session Manager\Executive\AdditionalCriticalWorkerThreads
The default is 0, which means that no additional critical kernel worker threads are added. This value
affects the number of threads that the file system cache uses for read-ahead and write-behind requests.
Raising this value can allow for more queued I/O in the storage subsystem, and it can improve I/O
performance, particularly on systems with many logical processors and powerful storage hardware.
TIP
The value may need to be increased if the amount of cache manager dirty data (performance counter Cache\Dirty
Pages) is growing to consume a large portion (over ~25%) of memory or if the system is doing lots of
synchronous read I/Os.
MaxThreadsPerQueue
HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters\MaxThreadsPerQueue
The default is 20. Increasing this value raises the number of threads that the file server can use to service
concurrent requests. When a large number of active connections need to be serviced, and hardware
resources, such as storage bandwidth, are sufficient, increasing the value can improve server scalability,
performance, and response times.
TIP
An indication that the value may need to be increased is if the SMB2 work queues are growing very large
(performance counter ‘Server Work Queues\Queue Length\SMB2 NonBlocking *' is consistently above ~100).
NOTE
In Windows 10, Windows Server 2016, and Windows Server 2022, MaxThreadsPerQueue is unavailable. The
number of threads for a thread pool will be "20 * the number of processors in a NUMA node".
AsynchronousCredits
HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters\AsynchronousCredits
The default is 512. This parameter limits the number of concurrent asynchronous SMB commands that
are allowed on a single connection. Some cases (such as when there is a front-end server with a back-end
IIS server) require a large amount of concurrency (for file change notification requests, in particular). The
value of this entry can be increased to support these cases.
RemoteFileDirtyPageThreshold
The default is 5 GB. This value determines the maximum number of dirty pages in the cache (on a per-file basis)
for a remote write before an inline flush will be performed. We do not recommend changing this value unless
the system experiences consistent slowdowns during heavy remote writes. This slowdown behavior would
typically be seen where the client has faster storage IO performance than the remote server. The setting change
is applied to the server. Client and server refer to the distributed system architecture, not to particular operating
systems; for example, a Windows Server copying data to another Windows Server over SMB would still involve
an SMB client and an SMB server. See Troubleshoot Cache and Memory Manager Performance Issues for more
information.
SMB server tuning example
The following settings can optimize a computer for file server performance in many cases. The settings are not
optimal or appropriate on all computers. You should evaluate the impact of individual settings before applying
them.
PARAMETER                           VALUE    DEFAULT
AdditionalCriticalWorkerThreads     64       0
MaxThreadsPerQueue                  64       20
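Assuming no cmdlet exposes these two values, a sketch that writes the example values directly to the registry:

Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Session Manager\Executive' -Name AdditionalCriticalWorkerThreads -Type DWord -Value 64
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\LanmanServer\Parameters' -Name MaxThreadsPerQueue -Type DWord -Value 64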
OptimalReads
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\OptimalReads
The default is 0. This parameter determines whether files are opened for FILE_RANDOM_ACCESS or for
FILE_SEQUENTIAL_ONLY, depending on the workload I/O characteristics. Set this value to 1 to force files
to be opened for FILE_RANDOM_ACCESS. FILE_RANDOM_ACCESS prevents the file system and cache
manager from prefetching.
NOTE
This setting must be carefully evaluated because it can affect system file cache growth.
RdWrHandleLifeTime
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\RdWrHandleLifeTime
The default is 5. This parameter controls the lifetime of an NFS cache entry in the file handle cache. The
parameter refers to cache entries that have an associated open NTFS file handle. Actual lifetime is
approximately equal to RdWrHandleLifeTime multiplied by RdWrThreadSleepTime. The minimum is 1
and the maximum is 60.
RdWrNfsHandleLifeTime
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\RdWrNfsHandleLifeTime
The default is 5. This parameter controls the lifetime of an NFS cache entry in the file handle cache. The
parameter refers to cache entries that do not have an associated open NTFS file handle. Services for NFS
uses these cache entries to store file attributes for a file without keeping an open handle with the file
system. Actual lifetime is approximately equal to RdWrNfsHandleLifeTime multiplied by
RdWrThreadSleepTime. The minimum is 1 and the maximum is 60.
RdWrNfsReadHandlesLifeTime
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\RdWrNfsReadHandlesLifeTime
The default is 5. This parameter controls the lifetime of an NFS read cache entry in the file handle cache.
Actual lifetime is approximately equal to RdWrNfsReadHandlesLifeTime multiplied by
RdWrThreadSleepTime. The minimum is 1 and the maximum is 60.
RdWrThreadSleepTime
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\RdWrThreadSleepTime
The default is 5. This parameter controls the wait interval before running the cleanup thread on the file
handle cache. The value is in ticks, and it is non-deterministic. A tick is equivalent to approximately 100
nanoseconds. The minimum is 1 and the maximum is 60.
FileHandleCacheSizeinMB
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\FileHandleCacheSizeinMB
The default is 4. This parameter specifies the maximum memory to be consumed by file handle cache
entries. The minimum is 1 and the maximum is 1*1024*1024*1024 (1073741824).
LockFileHandleCacheInMemory
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\LockFileHandleCacheInMemory
The default is 0. This parameter specifies whether the physical pages that are allocated for the cache size
specified by FileHandleCacheSizeInMB are locked in memory. Setting this value to 1 enables this activity.
Pages are locked in memory (not paged to disk), which improves the performance of resolving file
handles, but reduces the memory that is available to applications.
MaxIcbNfsReadHandlesCacheSize
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\MaxIcbNfsReadHandlesCacheSize
The default is 64. This parameter specifies the maximum number of handles per volume for the read data
cache. Read cache entries are created only on systems that have more than 1 GB of memory. The
minimum is 0 and the maximum is 0xFFFFFFFF.
HandleSigningEnabled
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\HandleSigningEnabled
The default is 1. This parameter controls whether handles that are given out by NFS File Server are
signed cryptographically. Setting it to 0 disables handle signing.
RdWrNfsDeferredWritesFlushDelay
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\RdWrNfsDeferredWritesFlushDelay
The default is 60. This parameter is a soft timeout that controls the duration of NFS V3 UNSTABLE Write
data caching. The minimum is 1, and the maximum is 600. Actual lifetime is approximately equal to
RdWrNfsDeferredWritesFlushDelay multiplied by RdWrThreadSleepTime.
CacheAddFromCreateAndMkDir
HKLM\System\CurrentControlSet\Services\NfsServer\Parameters\CacheAddFromCreateAndMkDir
The default is 1 (enabled). This parameter controls whether handles that are opened during NFS V2 and
V3 CREATE and MKDIR RPC procedure handlers are retained in the file handle cache. Set this value to 0 to
disable adding entries to the cache in CREATE and MKDIR code paths.
AdditionalDelayedWorkerThreads
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Executive\AdditionalDelayedWorkerThreads
Increases the number of delayed worker threads that are created for the specified work queue. Delayed
worker threads process work items that are not considered time-critical and that can have their memory
stack paged out while waiting for work items. An insufficient number of threads reduces the rate at which
work items are serviced; a value that is too high consumes system resources unnecessarily.
NtfsDisable8dot3NameCreation
HKLM\System\CurrentControlSet\Control\FileSystem\NtfsDisable8dot3NameCreation
The default in Windows Server 2012, Windows Server 2012 R2, and later versions of Windows Server is
2. In releases prior to Windows Server 2012, the default is 0. This parameter determines whether NTFS
generates a short name in the 8dot3 (MSDOS) naming convention for long file names and for file names
that contain characters from the extended character set. If the value of this entry is 0, files can have two
names: the name that the user specifies and the short name that NTFS generates. If the user-specified
name follows the 8dot3 naming convention, NTFS does not generate a short name. A value of 2 means
that this parameter can be configured per volume.
NOTE
The system volume has 8dot3 enabled by default. All other volumes in Windows Server 2012 and Windows
Server 2012 R2 have 8dot3 disabled by default. Changing this value does not change the contents of a file, but it
avoids the short-name attribute creation for the file, which also changes how NTFS displays and manages the file.
For most file servers, the recommended setting is 1 (disabled).
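If you prefer a command line to direct registry edits, fsutil exposes the same switch; a sketch (the drive letter is illustrative):

fsutil.exe 8dot3name query C:
fsutil.exe behavior set disable8dot3 1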
NtfsDisableLastAccessUpdate
HKLM\System\CurrentControlSet\Control\FileSystem\NtfsDisableLastAccessUpdate
The default is 1. This system-global switch reduces disk I/O load and latencies by disabling the updating
of the date and time stamp for the last file or directory access.
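The last-access switch is likewise exposed through fsutil; a sketch that queries the current value and then disables the updates:

fsutil.exe behavior query disablelastaccess
fsutil.exe behavior set disablelastaccess 1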
MaxConcurrentConnectionsPerIp
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Rpcxdr\Parameters\MaxConcurrentConnectionsPerIp
The default value of the MaxConcurrentConnectionsPerIp parameter is 16. You can increase this value up
to a maximum of 8192 to increase the number of connections per IP address.
Performance Tuning Hyper-V Servers
Hyper-V is the virtualization server role in Windows Server. Virtualization servers can host multiple virtual
machines that are isolated from each other but share the underlying hardware resources by virtualizing the
processors, memory, and I/O devices. By consolidating servers onto a single machine, virtualization can improve
resource usage and energy efficiency and reduce the operational and maintenance costs of servers. In addition,
virtual machines and the management APIs offer more flexibility for managing resources, balancing load, and
provisioning systems.
Additional References
Hyper-V terminology
Hyper-V architecture
Hyper-V server configuration
Hyper-V processor performance
Hyper-V memory performance
Hyper-V storage I/O performance
Hyper-V network I/O performance
Detecting bottlenecks in a virtualized environment
Linux Virtual Machines
Hyper-V Terminology
This section summarizes key terminology specific to virtual machine technology that is used throughout this
performance tuning topic:
TERM DEFINITION
child partition Any virtual machine that is created by the root partition.
hypervisor A layer of software that sits above the hardware and below
one or more operating systems. Its primary job is to provide
isolated execution environments called partitions. Each
partition has its own set of virtualized hardware resources
(central processing unit or CPU, memory, and devices). The
hypervisor controls and arbitrates access to the underlying
hardware.
root partition The partition that is created first and owns all the
resources that the hypervisor does not, including most
devices and system memory. The root partition hosts the
virtualization stack and creates and manages the child
partitions.
virtualization service client (VSC) A software module that a guest loads to consume a resource
or service. For I/O devices, the virtualization service client
can be a device driver that the operating system kernel
loads.
virtualization service provider (VSP) A provider exposed by the virtualization stack in the root
partition that provides resources or services such as I/O to a
child partition.
Additional References
Hyper-V architecture
Hyper-V server configuration
Hyper-V processor performance
Hyper-V memory performance
Hyper-V storage I/O performance
Hyper-V network I/O performance
Detecting bottlenecks in a virtualized environment
Linux Virtual Machines
Hyper-V Architecture
Hyper-V features a Type 1 hypervisor-based architecture. The hypervisor virtualizes processors and memory
and provides mechanisms for the virtualization stack in the root partition to manage child partitions (virtual
machines) and expose services such as I/O devices to the virtual machines.
The root partition owns and has direct access to the physical I/O devices. The virtualization stack in the root
partition provides a memory manager for virtual machines, management APIs, and virtualized I/O devices. It
also implements emulated devices such as the integrated device electronics (IDE) disk controller and PS/2 input
device port, and it supports Hyper-V-specific synthetic devices for increased performance and reduced
overhead.
The Hyper-V-specific I/O architecture consists of virtualization service providers (VSPs) in the root partition and
virtualization service clients (VSCs) in the child partition. Each service is exposed as a device over VMBus, which
acts as an I/O bus and enables high-performance communication between virtual machines that use
mechanisms such as shared memory. The guest operating system's Plug and Play manager enumerates these
devices, including VMBus, and loads the appropriate device drivers (virtual service clients). Services other than
I/O are also exposed through this architecture.
Starting with Windows Server 2008, the operating system features enlightenments to optimize its behavior
when it is running in virtual machines. The benefits include reducing the cost of memory virtualization,
improving multicore scalability, and decreasing the background CPU usage of the guest operating system.
The following sections suggest best practices that yield increased performance on servers running the Hyper-V role.
Additional References
Hyper-V terminology
Hyper-V server configuration
Hyper-V processor performance
Hyper-V memory performance
Hyper-V storage I/O performance
Hyper-V network I/O performance
Detecting bottlenecks in a virtualized environment
Linux Virtual Machines
Hyper-V Configuration
Hardware selection
The hardware considerations for servers running Hyper-V generally resemble those of non-virtualized servers,
but servers running Hyper-V can exhibit increased CPU usage, consume more memory, and need larger I/O
bandwidth because of server consolidation.
Processors
Hyper-V in Windows Server 2016 presents the logical processors as one or more virtual processors to
each active virtual machine. Hyper-V now requires processors that support Second Level Address
Translation (SLAT) technologies such as Extended Page Tables (EPT) or Nested Page Tables (NPT).
Cache
Hyper-V can benefit from larger processor caches, especially for loads that have a large working set in
memory and in virtual machine configurations in which the ratio of virtual processors to logical
processors is high.
Memory
The physical server requires sufficient memory for both the root and child partitions. The root
partition requires memory to efficiently perform I/Os on behalf of the virtual machines and operations
such as a virtual machine snapshot. Hyper-V ensures that sufficient memory is available to the root
partition, and allows remaining memory to be assigned to child partitions. Child partitions should be
sized based on the needs of the expected load for each virtual machine.
Storage
The storage hardware should have sufficient I/O bandwidth and capacity to meet the current and future
needs of the virtual machines that the physical server hosts. Consider these requirements when you
select storage controllers and disks and choose the RAID configuration. Placing virtual machines with
highly disk-intensive workloads on different physical disks will likely improve overall performance. For
example, if four virtual machines share a single disk and actively use it, each virtual machine can yield
only 25 percent of the bandwidth of that disk.
CPU statistics
Hyper-V publishes performance counters to help characterize the behavior of the virtualization server and
report the resource usage. The standard set of tools for viewing performance counters in Windows includes
Performance Monitor and Logman.exe, which can display and log the Hyper-V performance counters. The
names of the relevant counter objects are prefixed with Hyper-V.
You should always measure the CPU usage of the physical system by using the Hyper-V Hypervisor Logical
Processor performance counters. The CPU utilization counters that Task Manager and Performance Monitor
report in the root and child partitions do not reflect the actual physical CPU usage. Use the following
performance counters to monitor performance:
Hyper-V Hypervisor Logical Processor (*)\% Total Run Time The total non-idle time of the logical
processors
Hyper-V Hypervisor Logical Processor (*)\% Guest Run Time The time spent running cycles
within a guest or within the host
Hyper-V Hypervisor Logical Processor (*)\% Hypervisor Run Time The time spent running within
the hypervisor
Hyper-V Hypervisor Root Virtual Processor (*)\* Measures the CPU usage of the root partition
Hyper-V Hypervisor Virtual Processor (*)\* Measures the CPU usage of guest partitions
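For example, a sketch that samples the first of these counters from PowerShell rather than Performance Monitor:

Get-Counter -Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time' -SampleInterval 5 -MaxSamples 6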
Additional References
Hyper-V terminology
Hyper-V architecture
Hyper-V processor performance
Hyper-V memory performance
Hyper-V storage I/O performance
Hyper-V network I/O performance
Detecting bottlenecks in a virtualized environment
Linux Virtual Machines
Hyper-V Processor Performance
Virtual processors
Hyper-V in Windows Server 2016 supports a maximum of 240 virtual processors per virtual machine. Virtual
machines that have loads that are not CPU intensive should be configured to use one virtual processor. This is
because of the additional overhead that is associated with multiple virtual processors, such as additional
synchronization costs in the guest operating system.
Increase the number of virtual processors if the virtual machine requires more than one CPU of processing
under peak load.
Background activity
Minimizing the background activity in idle virtual machines releases CPU cycles that can be used elsewhere by
other virtual machines. Windows guests typically use less than one percent of one CPU when they are idle. The
following are several best practices for minimizing the background CPU usage of a virtual machine:
Install the latest version of the Virtual Machine Integration Services.
Remove the emulated network adapter through the virtual machine settings dialog box (use the
Microsoft Hyper-V-specific adapter).
Remove unused devices such as the CD-ROM and COM port, or disconnect their media.
Keep the Windows guest operating system on the sign-in screen when it is not being used and disable
the screen saver.
Review the scheduled tasks and services that are enabled by default.
Review the ETW trace providers that are on by default by running logman.exe query -ets
Improve server applications to reduce periodic activity (such as timers).
Close Server Manager on both the host and guest operating systems.
Don't leave Hyper-V Manager running since it constantly refreshes the virtual machine's thumbnail.
The following are additional best practices for configuring a client version of Windows in a virtual machine to
reduce the overall CPU usage:
Disable background services such as SuperFetch and Windows Search.
Disable scheduled tasks such as Scheduled Defrag.
Virtual NUMA
To enable virtualizing large scale-up workloads, Hyper-V in Windows Server 2016 expanded virtual machine
scale limits. A single virtual machine can be assigned up to 240 virtual processors and 12 TB of memory. When
creating such large virtual machines, memory from multiple NUMA nodes on the host system will likely be
utilized. In such virtual machine configuration, if virtual processors and memory are not allocated from the
same NUMA node, workloads may have bad performance due to the inability to take advantage of NUMA
optimizations.
In Windows Server 2016, Hyper-V presents a virtual NUMA topology to virtual machines. By default, this virtual
NUMA topology is optimized to match the NUMA topology of the underlying host computer. Exposing a virtual
NUMA topology into a virtual machine allows the guest operating system and any NUMA-aware applications
running within it to take advantage of the NUMA performance optimizations, just as they would when running
on a physical computer.
There is no distinction between a virtual and a physical NUMA node from the workload's perspective. Inside a
virtual machine, when a workload allocates local memory for data and accesses that data in the same NUMA
node, fast local memory access results on the underlying physical system. Performance penalties due to remote
memory access are successfully avoided. Only NUMA-aware applications can benefit from vNUMA.
Microsoft SQL Server is an example of a NUMA-aware application. For more info, see Understanding Non-
uniform Memory Access.
Virtual NUMA and Dynamic Memory features cannot be used at the same time. A virtual machine that has
Dynamic Memory enabled effectively has only one virtual NUMA node, and no NUMA topology is presented to
the virtual machine regardless of the virtual NUMA settings.
For more info on Virtual NUMA, see Hyper-V Virtual NUMA Overview.
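To inspect the host NUMA topology and a virtual machine's virtual NUMA limits, a sketch using the built-in Hyper-V module (the VM name is illustrative):

Get-VMHostNumaNode
Get-VMProcessor -VMName 'VM01' | Select-Object MaximumCountPerNumaNode, MaximumCountPerNumaSocket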
Additional References
Hyper-V terminology
Hyper-V architecture
Hyper-V server configuration
Hyper-V memory performance
Hyper-V storage I/O performance
Hyper-V network I/O performance
Detecting bottlenecks in a virtualized environment
Linux Virtual Machines
Hyper-V Memory Performance
The hypervisor virtualizes the guest physical memory to isolate virtual machines from each other and to
provide a contiguous, zero-based memory space for each guest operating system, just as on non-virtualized
systems.
Use the following performance counters and thresholds to assess memory health on the host:
Memory – Standby Cache Reserve Bytes The sum of Standby Cache Reserve Bytes and Free and Zero Page
List Bytes should be 200 MB or more on systems with 1 GB of visible RAM, and 300 MB or more on systems
with 2 GB or more.
Memory – Free & Zero Page List Bytes Same threshold as above: the sum of the two counters should be
200 MB or more on systems with 1 GB of visible RAM, and 300 MB or more on systems with 2 GB or more.
Memory – Pages Input/Sec The average over a 1-hour period should be less than 10.
Additional References
Hyper-V terminology
Hyper-V architecture
Hyper-V server configuration
Hyper-V processor performance
Hyper-V storage I/O performance
Hyper-V network I/O performance
Detecting bottlenecks in a virtualized environment
Linux Virtual Machines
Hyper-V Storage I/O Performance
This section describes the different options and considerations for tuning storage I/O performance in a virtual
machine. The storage I/O path extends from the guest storage stack, through the host virtualization layer, to the
host storage stack, and then to the physical disk. Following are explanations about how optimizations are
possible at each of these stages.
Virtual controllers
Hyper-V offers three types of virtual controllers: IDE, SCSI, and virtual host bus adapters (HBAs).
IDE
IDE controllers expose IDE disks to the virtual machine. The IDE controller is emulated, and it is the only
controller that is available for guest VMs running older versions of Windows without the Virtual Machine
Integration Services. Disk I/O that is performed by using the IDE filter driver that is provided with the Virtual
Machine Integration Services is significantly better than the disk I/O performance that is provided with the
emulated IDE controller. We recommend that IDE disks be used only for the operating system disks because they
have performance limitations due to the maximum I/O size that can be issued to these devices.
Virtual disks
Disks can be exposed to the virtual machines through the virtual controllers. These disks could be virtual hard
disks that are file abstractions of a disk or a pass-through disk on the host.
VHD format
The VHD format was the only virtual hard disk format that was supported by Hyper-V in past releases.
Introduced in Windows Server 2012, the VHD format has been modified to allow better alignment, which results
in significantly better performance on new large sector disks.
Any new VHD that is created on Windows Server 2012 or newer has the optimal 4 KB alignment. This aligned
format is completely compatible with previous Windows Server operating systems. However, the alignment
property will be broken for new allocations from parsers that are not 4 KB alignment-aware (such as a VHD
parser from a previous version of Windows Server or a non-Microsoft parser).
Any VHD that is moved from a previous release does not automatically get converted to this new improved
VHD format.
To convert to the new VHD format, run the following Windows PowerShell command:
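# Sketch: the source and destination paths are assumed to match the Get-VHD examples that follow.
Convert-VHD -Path E:\vms\testvhd\test.vhd -DestinationPath E:\vms\testvhd\test-converted.vhd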
You can check the alignment property for all the VHDs on the system; a VHD that reports an alignment of 0
should be converted to the optimal 4 KB alignment. The conversion creates a new VHD with the data from the
original VHD by using the Create-from-Source option.
To check for alignment by using Windows PowerShell, examine the Alignment line, as shown below:
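Get-VHD -Path E:\vms\testvhd\test.vhd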
Path : E:\vms\testvhd\test.vhd
VhdFormat : VHD
VhdType : Dynamic
FileSize : 69245440
Size : 10737418240
MinimumSize : 10735321088
LogicalSectorSize : 512
PhysicalSectorSize : 512
BlockSize : 2097152
ParentPath :
FragmentationPercentage : 10
Alignment : 0
Attached : False
DiskNumber :
IsDeleted : False
Number :
To verify alignment by using Windows PowerShell, examine the Alignment line, as shown below:
Get-VHD -Path E:\vms\testvhd\test-converted.vhd
Path : E:\vms\testvhd\test-converted.vhd
VhdFormat : VHD
VhdType : Dynamic
FileSize : 69369856
Size : 10737418240
MinimumSize : 10735321088
LogicalSectorSize : 512
PhysicalSectorSize : 512
BlockSize : 2097152
ParentPath :
FragmentationPercentage : 0
Alignment : 1
Attached : False
DiskNumber :
IsDeleted : False
Number :
VHDX format
VHDX is a new virtual hard disk format introduced in Windows Server 2012, which allows you to create resilient
high-performance virtual disks up to 64 terabytes. Benefits of this format include:
Support for virtual hard disk storage capacity of up to 64 terabytes.
Protection against data corruption during power failures by logging updates to the VHDX metadata
structures.
Ability to store custom metadata about a file, which a user might want to record, such as operating
system version or patches applied.
The VHDX format also provides the following performance benefits:
Improved alignment of the virtual hard disk format to work well on large sector disks.
Larger block sizes for dynamic and differential disks, which allows these disks to attune to the needs of
the workload.
4 KB logical sector virtual disk that allows for increased performance when used by applications and
workloads that are designed for 4 KB sectors.
Efficiency in representing data, which results in smaller file size and allows the underlying physical
storage device to reclaim unused space. (Trim requires pass-through or SCSI disks and trim-compatible
hardware.)
When you upgrade to Windows Server 2016, we recommend that you convert all VHD files to the VHDX format
due to these benefits. The only scenario where it would make sense to keep the files in the VHD format is when
a virtual machine has the potential to be moved to a previous release of Hyper-V that does not support the
VHDX format.
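Convert-VHD infers the destination format from the file name extension, so a VHD-to-VHDX conversion is a one-line sketch (paths illustrative):

Convert-VHD -Path E:\vms\testvhd\test-converted.vhd -DestinationPath E:\vms\testvhd\test.vhdx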
On 512e disks (disks that expose 4 KB physical sectors through 512-byte logical sectors), each 4 KB write
command that is issued by the current parser to update the payload data results in two reads for two blocks on
the disk, which are then updated and subsequently written back to the two disk blocks; this is the read-modify-
write (RMW) effect. Hyper-V in Windows Server 2016 mitigates some of the performance effects on 512e disks
on the VHD stack by preparing the previously mentioned structures for alignment to 4 KB boundaries in the
VHD format. This avoids the RMW effect when accessing the data within the virtual hard disk file and when
updating the virtual hard disk metadata structures.
As mentioned earlier, VHDs that are copied from previous versions of Windows Server will not automatically be
aligned to 4 KB. You can manually convert them to optimally align by using the Copy from Source disk option
that is available in the VHD interfaces.
By default, VHDs are exposed with a physical sector size of 512 bytes. This is done to ensure that physical sector
size dependent applications are not impacted when the application and VHDs are moved from a previous
version of Windows Server.
By default, disks with the VHDX format are created with a 4 KB physical sector size to optimize their
performance profile on regular disks and on large sector disks. To make full use of 4 KB sectors, we recommend
using the VHDX format.
Pass-through disks
A disk in a virtual machine can be mapped directly to a physical disk or logical unit number (LUN), instead of
to a VHD file. The benefit is that this configuration bypasses the NTFS file system in the root partition, which
reduces the CPU usage of storage I/O. The risk is that physical disks or LUNs can be more difficult to move
between machines than VHD files.
Pass-through disks should be avoided due to the limitations introduced with virtual machine migration
scenarios.
Additional References
Hyper-V terminology
Hyper-V architecture
Hyper-V server configuration
Hyper-V processor performance
Hyper-V memory performance
Hyper-V network I/O performance
Detecting bottlenecks in a virtualized environment
Linux Virtual Machines
Hyper-V Network I/O Performance
Windows Server 2016 contains several improvements and new functionality to optimize network performance under
Hyper-V. Documentation on how to optimize network performance will be included in a future version of this
article.
Live Migration
Live Migration lets you transparently move running virtual machines from one node of a failover cluster to
another node in the same cluster without a dropped network connection or perceived downtime.
NOTE
Failover Clustering requires shared storage for the cluster nodes.
The process of moving a running virtual machine can be divided into two major phases. The first phase copies
the memory of the virtual machine from the current host to the new host. The second phase transfers the virtual
machine state from the current host to the new host. The duration of both phases is largely determined by the
speed at which data can be transferred from the current host to the new host.
Providing a dedicated network for live migration traffic helps minimize the time that is required to complete a
live migration, and it ensures consistent migration times.
Additionally, increasing the number of send and receive buffers on each network adapter that is involved in the
migration can improve migration performance.
Windows Server 2012 R2 introduced options to speed up Live Migration by compressing memory before
transferring it over the network, or by using Remote Direct Memory Access (RDMA), if your hardware supports it.
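The performance option is a host-wide setting; a sketch that selects compression (the SMB option uses RDMA-capable transports where available):

Set-VMHost -VirtualMachineMigrationPerformanceOption Compression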
Additional References
Hyper-V terminology
Hyper-V architecture
Hyper-V server configuration
Hyper-V processor performance
Hyper-V memory performance
Hyper-V storage I/O performance
Detecting bottlenecks in a virtualized environment
Linux Virtual Machines
Detecting bottlenecks in a virtualized environment
This section gives you some hints on what to monitor by using Performance Monitor, and how to identify
where the problem might be when either the host or some of the virtual machines do not perform as
expected.
Processor bottlenecks
Here are some common scenarios that could cause processor bottlenecks:
One or more logical processors are loaded
One or more virtual processors are loaded
You can use the following performance counters from the host:
Logical Processor Utilization - \Hyper-V Hypervisor Logical Processor(*)\% Total Run Time
Virtual Processor Utilization - \Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time
Root Virtual Processor Utilization - \Hyper-V Hypervisor Root Virtual Processor(*)\% Total Run Time
If the Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time counter is over 90%, the host is
overloaded. You should add more processing power or move some virtual machines to a different host.
If the Hyper-V Hypervisor Virtual Processor(VM Name:VP x)\% Total Run Time counter is over 90% for
all virtual processors, you should do the following:
Verify that the host is not overloaded
Find out if the workload can leverage more virtual processors
Assign more virtual processors to the virtual machine
If the Hyper-V Hypervisor Virtual Processor(VM Name:VP x)\% Total Run Time counter is over 90% for
some, but not all, of the virtual processors, you should do the following:
If your workload is network receive-intensive, you should consider using vRSS.
If the virtual machines are not running Windows Server 2012 R2, you should add more network
adapters.
If your workload is storage-intensive, you should enable virtual NUMA and add more virtual disks.
If the Hyper-V Hypervisor Root Virtual Processor(Root VP x)\% Total Run Time counter is over 90% for
some, but not all, virtual processors, and the Processor(x)\% Interrupt Time and Processor(x)\% DPC
Time counters approximately add up to the value of the Root Virtual Processor(Root VP x)\% Total
Run Time counter, you should ensure that VMQ is enabled on the network adapters.
Memory bottlenecks
Here are some common scenarios that could cause memory bottlenecks:
The host is not responsive.
Virtual machines cannot be started.
Virtual machines run out of memory.
You can use the following performance counters from the host:
Memory\Available Mbytes
Hyper-V Dynamic Memory Balancer (*)\Available Memory
You can use the following performance counters from the virtual machine:
Memory\Available Mbytes
If the Memory\Available Mbytes and Hyper-V Dynamic Memory Balancer (*)\Available Memory
counters are low on the host, you should stop non-essential services and migrate one or more virtual machines
to another host.
If the Memory\Available Mbytes counter is low in the virtual machine, you should assign more memory to
the virtual machine. If you are using Dynamic Memory, you should increase the maximum memory setting.
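A sketch that watches both host-side counters at once:

Get-Counter -Counter '\Memory\Available MBytes','\Hyper-V Dynamic Memory Balancer(*)\Available Memory' -SampleInterval 5 -MaxSamples 12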
Network bottlenecks
Here are some common scenarios that could cause network bottlenecks:
The host is network bound.
The virtual machine is network bound.
You can use the following performance counters from the host:
Network Interface(network adapter name)\Bytes/sec
You can use the following performance counters from the virtual machine:
Hyper-V Virtual Network Adapter (virtual machine name<GUID>)\Bytes/sec
If the Physical NIC Bytes/sec counter is greater than or equal to 90% of capacity, you should add additional
network adapters, migrate virtual machines to another host, and configure Network QoS.
If the Hyper-V Virtual Network Adapter Bytes/sec counter is greater than or equal to 250 MBps, you
should add additional teamed network adapters in the virtual machine, enable vRSS, and use SR-IOV.
If your workloads can't meet their network latency, enable SR-IOV to present physical network adapter
resources to the virtual machine.
Storage bottlenecks
Here are some common scenarios that could cause storage bottlenecks:
The host and virtual machine operations are slow or time out.
The virtual machine is sluggish.
You can use the following performance counters from the host:
Physical Disk(disk letter)\Avg. disk sec/Read
Physical Disk(disk letter)\Avg. disk sec/Write
Physical Disk(disk letter)\Avg. disk read queue length
Physical Disk(disk letter)\Avg. disk write queue length
If latencies are consistently greater than 50 ms, you should do the following:
Spread virtual machines across additional storage
Consider purchasing faster storage
Consider Tiered Storage Spaces, which was introduced in Windows Server 2012 R2
Consider using Storage QoS, which was introduced in Windows Server 2012 R2
Use VHDX
Additional References
Hyper-V terminology
Hyper-V architecture
Hyper-V server configuration
Hyper-V processor performance
Hyper-V memory performance
Hyper-V storage I/O performance
Hyper-V network I/O performance
Linux Virtual Machines
Linux Virtual Machine Considerations
Linux and BSD virtual machines have additional considerations compared to Windows virtual machines in
Hyper-V.
The first consideration is whether Integration Services are present or if the VM is running merely on emulated
hardware with no enlightenment. A table of Linux and BSD releases that have built-in or downloadable
Integration Services is available in Supported Linux and FreeBSD virtual machines for Hyper-V on Windows.
These pages include grids of the Hyper-V features available to Linux distribution releases, and notes on those
features where applicable.
Even when the guest is running Integration Services, it can be configured with legacy hardware which does not
exhibit the best performance. For example, configure and use a virtual ethernet adapter for the guest instead of
using a legacy network adapter. With Windows Server 2016, advanced networking features such as SR-IOV are
available as well.
In the guest, additional TCP tuning can be performed by increasing limits. For the best performance, spread the
workload over multiple CPUs and keep the request pipeline deep; virtualized workloads have higher latency
than "bare metal" ones, so more outstanding work is needed to sustain throughput.
Some example tuning parameters that have been useful in network benchmarks include:
net.core.netdev_max_backlog = 30000
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_wmem = 4096 12582912 33554432
net.ipv4.tcp_rmem = 4096 12582912 33554432
net.ipv4.tcp_max_syn_backlog = 80960
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 10240 65535
net.ipv4.tcp_abort_on_overflow = 1
A useful tool for network microbenchmarks is ntttcp, which is available on both Linux and Windows. The Linux
version is open source and available from ntttcp-for-linux on github.com. The Windows version can be found in
the download center. When tuning workloads it is best to use as many streams as necessary to get the best
throughput. Using ntttcp to model traffic, the -P parameter sets the number of parallel connections used.
Linux Storage Performance
Some best practices, like the following, are listed on Best Practices for Running Linux on Hyper-V. The Linux
kernel has different I/O schedulers to reorder requests with different algorithms. NOOP is a first-in first-out
queue that passes the scheduling decision to the hypervisor. It is recommended to use NOOP as the
scheduler when running Linux virtual machines on Hyper-V. To change the scheduler for a specific device, in the
boot loader's configuration (/etc/grub.conf, for example), add elevator=noop to the kernel parameters, and then
restart.
Similar to networking, Linux guest performance with storage benefits the most from multiple queues with
enough depth to keep the host busy. Microbenchmarking storage performance is probably best with the fio
benchmark tool with the libaio engine.
Additional References
Hyper-V terminology
Hyper-V architecture
Hyper-V server configuration
Hyper-V processor performance
Hyper-V memory performance
Hyper-V storage I/O performance
Hyper-V network I/O performance
Detecting bottlenecks in a virtualized environment
Performance tuning Windows Server Containers
Introduction
Starting with Windows Server 2022, two types of containers are available: Windows Server Containers and
Hyper-V Containers. Each container type supports either the Server Core or Nano Server SKU of Windows
Server 2022.
These configurations have different performance implications, which we detail below to help you understand
which is right for your scenarios. In addition, we detail performance impacting configurations, and describe the
tradeoffs with each of those options.
Windows Server Container and Hyper-V Containers
Windows Server Container and Hyper-V containers offer many of the same portability and consistency benefits
but differ in terms of their isolation guarantees and performance characteristics.
Windows Server Containers provide application isolation through process and namespace isolation
technology. A Windows Server container shares a kernel with the container host and all containers running on
the host.
Hyper-V Containers expand on the isolation provided by Windows Server Containers by running each
container in a highly optimized virtual machine. In this configuration, the kernel of the container host is not
shared with the Hyper-V Containers.
The extra isolation provided by Hyper-V containers is achieved in large part by a hypervisor layer of isolation
between the container and the container host. This affects container density as, unlike Windows Server
Containers, less sharing of system files and binaries can occur, resulting in an overall larger storage and
memory footprint. In addition, there is the expected further overhead in some network, storage IO, and CPU
paths.
Nano Server and Server Core
Windows Server Containers and Hyper-V Containers offer support for Server Core and for a new installation
option available in Windows Server 2022: Nano Server.
Nano Server is a remotely administered server operating system optimized for private clouds and datacenters.
It is similar to Windows Server in Server Core mode, but significantly smaller, has no local logon capability, and
only supports 64-bit applications, tools, and agents. It takes up far less disk space, sets up significantly faster,
and requires far fewer updates and restarts than Windows Server. When it does restart, it restarts much faster.
Storage
Mounted Data Volumes
Containers offer the ability to use the container host system drive for the container scratch space. However, the
container scratch space has a life span equal to that of the container. That is, when the container is stopped, the
scratch space and all associated data goes away.
However, there are many scenarios in which having data persist independent of container lifetime is desired. In
these cases, we support mounting data volumes from the container host into the container. For Windows Server
Containers, there is negligible IO path overhead associated with mounted data volumes (near native
performance). However, when mounting data volumes into Hyper-V containers, there is some IO performance
degradation in that path. In addition, this impact is exaggerated when running Hyper-V containers inside of
virtual machines.
Scratch Space
Both Windows Server Containers and Hyper-V containers provide a 20 GB dynamic VHD for the container
scratch space by default. For both container types, the container OS takes up a portion of that space, and this is
true for every container started. Thus it is important to remember that every container started has some
storage impact, and depending on the workload can write up to 20 GB to the backing storage media. Server
storage configurations should be designed with this in mind.
Networking
Windows Server Containers and Hyper-V containers offer a variety of networking modes to best suit the needs
of differing networking configurations. Each of these options presents its own performance characteristics.
Windows Network Address Translation (WinNAT )
Each container will receive an IP address from an internal, private IP prefix (for example 172.16.0.0/12). Port
forwarding / mapping from the container host to container endpoints is supported. Docker creates a NAT
network by default when dockerd first runs.
Of these three modes, the NAT configuration is the most expensive network IO path, but has the least amount of
configuration needed.
Windows Server containers use a Host vNIC to attach to the virtual switch. Hyper-V Containers use a Synthetic
VM NIC (not exposed to the Utility VM) to attach to the virtual switch. When containers are communicating with
the external network, packets are routed through WinNAT with address translations applied, which incurs some
overhead.
Transparent
Each container endpoint is directly connected to the physical network. IP addresses from the physical network
can be assigned statically or dynamically using an external DHCP server.
Transparent mode is the least expensive in terms of the network IO path, and external packets are directly
passed through to the container virtual NIC giving direct access to the external network.
L2 Bridge
Each container endpoint will be in the same IP subnet as the container host. The IP addresses must be assigned
statically from the same prefix as the container host. All container endpoints on the host will have the same MAC
address due to Layer-2 address translation.
L2 Bridge Mode is more performant than WinNAT mode as it provides direct access to the external network, but
less performant than Transparent mode as it also introduces MAC address translation.
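For reference, creating the transparent and L2 bridge networks with the Docker CLI might look like the following sketch (the network names, subnet, and gateway are illustrative):

docker network create -d transparent TransparentNet
docker network create -d l2bridge --subnet=192.168.1.0/24 --gateway=192.168.1.1 BridgeNet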
Performance Tuning Remote Desktop Session Hosts
This topic discusses how to select Remote Desktop Session Host (RD Session Host) hardware, tune the host, and
tune applications.
In this topic:
Selecting the proper hardware for performance
Tuning applications for Remote Desktop Session Host
Remote Desktop Session Host tuning parameters
If DLLs are relocated, it is impossible to share their code across sessions, which significantly
increases the footprint of a session. This is one of the most common memory-related performance
issues on an RD Session Host server.
For common language runtime (CLR) applications, use Native Image Generator (Ngen.exe) to increase
page sharing and reduce CPU overhead.
When possible, apply similar techniques to other similar execution engines.
Remote Desktop Virtualization Host (RD Virtualization Host) is a role service that supports Virtual Desktop
Infrastructure (VDI) scenarios and lets multiple users run Windows-based applications in virtual machines
hosted on a server running Windows Server and Hyper-V.
Windows Server supports two types of virtual desktops: personal virtual desktops and pooled virtual desktops.
General considerations
Storage
Storage is the most likely performance bottleneck, and it is important to size your storage to properly handle
the I/O load that is generated by virtual machine state changes. If a pilot or simulation is not feasible, a good
guideline is to provision one disk spindle for four active virtual machines. Use disk configurations that have
good write performance (such as RAID 1+0).
When appropriate, use Data Deduplication and caching to reduce the disk read load and to enable your storage
solution to speed up performance by caching a significant portion of the image.
Data Deduplication and VDI
Introduced in Windows Server 2012 R2, Data Deduplication supports optimization of open files. In order to use
virtual machines running on a deduplicated volume, the virtual machine files need to be stored on a separate
host from the Hyper-V host. If Hyper-V and deduplication are running on the same machine, the two features
will contend for system resources and negatively impact overall performance.
The volume must also be configured to use the "Virtual Desktop Infrastructure (VDI)" deduplication optimization
type. You can configure this by using Server Manager (File and Storage Services -> Volumes -> Dedup
Settings) or by using the following Windows PowerShell command:
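In PowerShell, the VDI optimization type corresponds to the HyperV usage type of the Enable-DedupVolume cmdlet; a command along these lines should work (the volume letter is illustrative):
Enable-DedupVolume -Volume "E:" -UsageType HyperV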
NOTE
Data Deduplication optimization of open files is supported only for VDI scenarios with Hyper-V using remote storage over
SMB 3.0.
Memory
Server memory usage is driven by three main factors:
Operating system overhead
Hyper-V service overhead per virtual machine
Memory allocated to each virtual machine
For a typical knowledge worker workload, guest virtual machines running x86 Windows 8 or Windows 8.1
should be given ~512 MB of memory as the baseline. However, Dynamic Memory will likely increase the guest
virtual machine's memory to about 800 MB, depending on the workload. For x64, we see about 800 MB starting,
increasing to 1024 MB.
Therefore, it is important to provide enough server memory to satisfy the memory that is required by the
expected number of guest virtual machines, plus allow a sufficient amount of memory for the server.
CPU
When you plan server capacity for an RD Virtualization Host server, the number of virtual machines per physical
core will depend on the nature of the workload. As a starting point, it is reasonable to plan 12 virtual machines
per physical core, and then run the appropriate scenarios to validate performance and density. Higher density
may be achievable depending on the specifics of the workload.
We recommend enabling hyper-threading, but be sure to calculate the oversubscription ratio based on the
number of physical cores and not the number of logical processors. This ensures the expected level of
performance on a per CPU basis.
Performance optimizations
Dynamic Memory
Dynamic Memory enables more efficient utilization of the memory resources of the server running Hyper-V
by balancing how memory is distributed between running virtual machines. Memory can be dynamically
reallocated between virtual machines in response to their changing workloads.
Dynamic Memory enables you to increase virtual machine density with the resources you already have without
sacrificing performance or scalability. The result is more efficient use of expensive server hardware resources,
which can translate into easier management and lower costs.
On guest operating systems running Windows 8 and above with virtual processors that span multiple logical
processors, consider the tradeoff between running with Dynamic Memory to help minimize memory usage and
disabling Dynamic Memory to improve the performance of an application that is computer-topology aware.
Such an application can leverage the topology information to make scheduling and memory allocation
decisions.
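As a sketch, Dynamic Memory can be configured per virtual machine with Set-VMMemory. The VM name and sizes below are illustrative, following the knowledge-worker baselines discussed earlier:
Set-VMMemory -VMName "PooledDesktop01" -DynamicMemoryEnabled $true `
    -StartupBytes 512MB -MinimumBytes 512MB -MaximumBytes 1GB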
Tiered Storage
RD Virtualization Host supports tiered storage for virtual desktop pools. The physical computer that is shared by
all pooled virtual desktops within a collection can use a small-size, high-performance storage solution, such as a
mirrored solid-state drive (SSD). The pooled virtual desktops can be placed on less expensive, traditional storage
such as RAID 1+0.
The physical computer should be placed on an SSD because most of the read I/Os from pooled virtual
desktops go to the management operating system. Therefore, the storage that is used by the physical computer
must sustain much higher read I/Os per second.
This deployment configuration assures cost effective performance where performance is needed. The SSD
provides higher performance on a smaller size disk (~20 GB per collection, depending on the configuration).
Traditional storage for pooled virtual desktops (RAID 1+0) uses about 3 GB per virtual machine.
CSV cache
Failover Clustering in Windows Server 2012 and above provides caching on Cluster Shared Volumes (CSV). This
is extremely beneficial for pooled virtual desktop collections where the majority of the read I/Os come from the
management operating system. The CSV cache provides higher performance by several orders of magnitude
because it caches blocks that are read more than once and delivers them from system memory, which reduces
the I/O. For more info on CSV cache, see How to Enable CSV Cache.
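On Windows Server 2012 R2 and later, the CSV cache size is a cluster common property; a minimal sketch (the 512 MB size is illustrative) is:
(Get-Cluster).BlockCacheSize = 512    # size in MB of system memory reserved for the CSV cache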
Pooled virtual desktops
By default, pooled virtual desktops are rolled back to the pristine state after a user signs out, so any changes
made to the Windows operating system since the last user sign-in are abandoned.
Although it's possible to disable the rollback, it is still a temporary condition because typically a pooled virtual
desktop collection is re-created due to various updates to the virtual desktop template.
It makes sense to turn off Windows features and services that depend on persistent state. Additionally, it makes
sense to turn off services that are primarily for non-enterprise scenarios.
Each specific service should be evaluated appropriately prior to any broad deployment. The following are some
initial things to consider:
SERVICE                  WHY?
Offline files            Virtual desktops are always online and connected from a networking point of view.
Background defrag        File-system changes are discarded after a user signs off (due to a rollback to the
                         pristine state or re-creation of the virtual desktop template, which results in
                         re-creating all pooled virtual desktops).
Bug check memory dump    No such concept for pooled virtual desktops. A bug-checked pooled virtual desktop
                         will start from the pristine state.
NOTE
This list is not meant to be a complete list, because any changes will affect the intended goals and scenarios. For more
info, see Hot off the presses, get it now, the Windows 8 VDI optimization script, courtesy of PFE!.
NOTE
SuperFetch in Windows 8 is enabled by default. It is VDI-aware and should not be disabled. SuperFetch can further reduce
memory consumption through memory page sharing, which is beneficial for VDI. For pooled virtual desktops running
Windows 7, SuperFetch should be disabled, but for personal virtual desktops running Windows 7, it should be left on.
Performance Tuning Remote Desktop Gateways
NOTE
In Windows 8+ and Windows Server 2012 R2+, Remote Desktop Gateway (RD Gateway) supports TCP, UDP, and the
legacy RPC transports. Most of the following data is regarding the legacy RPC transport. If the legacy RPC transport is
not being used, this section is not applicable.
This topic describes the performance-related parameters that help improve the performance of a customer
deployment and the tunings that rely on the customer's network usage patterns.
At its core, RD Gateway performs many packet forwarding operations between Remote Desktop Connection
instances and the RD Session Host server instances within the customer's network.
NOTE
The following parameters apply to RPC transport only.
Internet Information Services (IIS) and RD Gateway export the following registry parameters to help improve
system performance in the RD Gateway.
Thread tunings
Maxiothreads
This app-specific thread pool specifies the number of threads that RD Gateway creates to handle
incoming requests. If this registry setting is present, it takes effect. The number of threads equals the
number of logical processors. If the number of logical processors is less than 5, the default is 5 threads.
MaxPoolThreads
HKLM\System\CurrentControlSet\Services\InetInfo\Parameters\MaxPoolThreads (REG_DWORD)
This parameter specifies the number of IIS pool threads to create per logical processor. The IIS pool
threads watch the network for requests and process all incoming requests. The MaxPoolThreads count
does not include threads that RD Gateway consumes. The default value is 4.
Remote procedure call tunings for RD Gateway
The following parameters can help tune the remote procedure calls (RPC) that are received by Remote Desktop
Connection and RD Gateway computers. Changing the windows helps throttle how much data is flowing
through each connection and can improve performance for RPC over HTTP v2 scenarios.
ServerReceiveWindow
HKLM\Software\Microsoft\Rpc\ServerReceiveWindow (REG_DWORD)
The default value is 64 KB. This value specifies the window that the server uses for data that is received
from the RPC proxy. The minimum value is set to 8 KB, and the maximum value is set at 1 GB. If a value is
not present, the default value is used. When changes are made to this value, IIS must be restarted for the
change to take effect.
ClientReceiveWindow
HKLM\Software\Microsoft\Rpc\ClientReceiveWindow (REG_DWORD)
The default value is 64 KB. This value specifies the window that the client uses for data that is received
from the RPC proxy. The minimum value is 8 KB, and the maximum value is 1 GB. If a value is not present,
the default value is used.
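A minimal sketch for setting both windows from PowerShell follows; the 128 KB value is illustrative, and as noted above IIS must be restarted for the server-side change to take effect:
Set-ItemProperty -Path "HKLM:\Software\Microsoft\Rpc" -Name ServerReceiveWindow -Value 131072 -Type DWord
Set-ItemProperty -Path "HKLM:\Software\Microsoft\Rpc" -Name ClientReceiveWindow -Value 131072 -Type DWord
iisreset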
NOTE
If applicable, add the \IPv6\* and \TCPv6\* objects.
Performance Tuning Web Servers
This topic describes performance tuning methods and recommendations for Windows Server 2022 web
servers.
WARNING
Some applications, such as incremental backup utilities, rely on this update information, and they do not function correctly
without it.
Additional References
IIS 10.0 performance tuning
HTTP 1.1/2 tuning
Tuning IIS 10.0
Internet Information Services (IIS) 10.0 is included with Windows Server 2022. It uses a process model similar to
that of IIS 8.5 and IIS 7.0. A kernel-mode web driver (http.sys) receives and routes HTTP requests, and satisfies
requests from its response cache. Worker processes register for URL subspaces, and http.sys routes the request
to the appropriate process (or set of processes for application pools).
HTTP.sys is responsible for connection management and request handling. The request can be served from the
HTTP.sys cache or passed to a worker process for further handling. Multiple worker processes can be configured,
which provides isolation at a reduced cost. For more info on how request handling works, see the following
figure:
HTTP.sys includes a response cache. When a request matches an entry in the response cache, HTTP.sys sends the
cache response directly from kernel mode. Some web application platforms, such as ASP.NET, provide
mechanisms to enable any dynamic content to be cached in the kernel-mode cache. The static file handler in
IIS 10.0 automatically caches frequently requested files in http.sys.
Because a web server has kernel-mode and user-mode components, both components must be tuned for
optimal performance. Therefore, tuning IIS 10.0 for a specific workload includes configuring the following:
HTTP.sys and the associated kernel-mode cache
Worker processes and user-mode IIS, including the application pool configuration
Certain tuning parameters that affect performance
The following sections discuss how to configure the kernel-mode and user-mode aspects of IIS 10.0.
Kernel-mode settings
Performance-related HTTP.sys settings fall into two broad categories: cache management and connection and
request management. All registry settings are stored under the following registry entry:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Http\Parameters
Note If the HTTP service is already running, you must restart it for the changes to take effect.
MaxConnections
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Http\Parameters\MaxConnections
IdleConnectionsHighMark
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Http\Parameters\IdleConnectionsHighMark
IdleConnectionsLowMark
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Http\Parameters\IdleConnectionsLowMark
IdleListTrimmerPeriod
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Http\Parameters\IdleListTrimmerPeriod
RequestBufferLookasideDepth
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Http\Parameters\RequestBufferLookasideDepth
InternalRequestLookasideDepth
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Http\Parameters\InternalRequestLookasideDepth
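As an example of changing one of these values and restarting the HTTP service, the following sketch uses an illustrative MaxConnections value:
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Services\Http\Parameters" -Name MaxConnections -Value 1000000 -Type DWord
net stop http /y    # also stops dependent services such as W3SVC
net start http      # restart dependent services afterward, for example: net start w3svc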
User-mode settings
The settings in this section affect the IIS 10.0 worker process behavior. Most of these settings can be found in
the following XML configuration file:
%SystemRoot%\system32\inetsrv\config\applicationHost.config
Use Appcmd.exe, the IIS 10.0 Management Console, or the WebAdministration or IISAdministration PowerShell
cmdlets to change them. Most settings are automatically detected, and they do not require a restart of the
IIS 10.0 worker processes or web application server. For more info about the applicationHost.config file, see
Introduction to ApplicationHost.config.
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\InetInfo\Parameters\ThreadPoolUseIdealCpu
With this feature enabled, IIS thread manager makes its best effort to evenly distribute IIS thread pool threads
across all CPUs in all NUMA nodes based on their current loads. In general, it is recommended to keep this
default setting unchanged for NUMA hardware.
Note The ideal CPU setting is different from the worker process NUMA node assignment settings
(numaNodeAssignment and numaNodeAffinityMode) introduced in CPU Settings for an Application Pool. The
ideal CPU setting affects how IIS distributes its thread pool threads, while the worker process NUMA node
assignment settings determine on which NUMA node a worker process starts.
staticCompressionEnableCpuUsage and staticCompressionDisableCpuUsage (together with their dynamic
counterparts, dynamicCompressionEnableCpuUsage and dynamicCompressionDisableCpuUsage) enable or disable
compression if the current percentage CPU usage goes above or below the specified limits. The default values
are 50, 100, 50, and 90, respectively.
system.webServer/urlCompression
NOTE
For servers running IIS 10.0 that have low average CPU usage, consider enabling compression for dynamic content,
especially if responses are large. This should first be done in a test environment to assess the effect on the CPU usage
from the baseline.
<files> element    Specifies the file names that are configured as default documents.    The default list is
Default.htm, Default.asp, Index.htm, Index.html, Iisstart.htm, and Default.aspx.
system.applicationHost/log/centralBinaryLogFile
system.webServer/asp/limits
system.webServer/asp/comPlus
<system.web>
<applicationPool maxConcurrentRequestsPerCPU="5000"/>
</system.web>
<system.web>
<applicationPool percentCpuLimit="90" percentCpuLimitMinActiveRequestPerCpu="100"/>
</system.web>
percentCpuLimit Default value: 90. Asynchronous requests have some scalability issues when a huge load
(beyond the hardware's capabilities) is placed on such a scenario. The problem is due to the nature of allocation
in asynchronous scenarios: allocations happen when the asynchronous operation starts, and they are consumed
when it completes. By that time, it's very possible the objects have been moved to generation 1 or 2 by the GC.
When this happens, increasing the load shows an increase in requests per second (rps) up to a point. Once past
that point, the time spent in GC starts to become a problem and the rps starts to dip, having a negative scaling
effect. To fix the problem, when the CPU usage exceeds the percentCpuLimit setting, requests are sent to the
ASP.NET native queue.
percentCpuLimitMinActiveRequestPerCpu Default value: 100. CPU throttling (the percentCpuLimit setting) is
not based on the number of requests but on how expensive they are. As a result, just a few CPU-intensive
requests could cause a backup in the native queue with no way to empty it aside from incoming requests. To
solve this problem, percentCpuLimitMinActiveRequestPerCpu can be used to ensure a minimum number of
requests are served before throttling kicks in.
NOTE
If the site runs unstable code, such as code with a memory leak, setting the site to terminate on idle can be a
quick-and-dirty alternative to fixing the code bug. This isn't something we would encourage, but in a crunch, it
may be better to use this feature as a clean-up mechanism while a more permanent solution is in the works.
Another factor to consider is that if the site does use a lot of memory, then the suspension process itself takes a
toll, because the computer has to write the data used by the worker process to disk. If the worker process is
using a large chunk of memory, then suspending it might be more expensive than the cost of having to wait for
it to start back up.
To make the best of the worker process suspension feature, you need to review your sites in each application
pool, and decide which should be suspended, which should be terminated, and which should be active
indefinitely. For each action and each site, you need to figure out the ideal time-out period.
Ideally, the sites that you configure for suspension or termination are those that have visitors every day, but
not enough to warrant keeping them active all the time. These are usually sites with around 20 unique visitors a day
or less. You can analyze the traffic patterns using the site's log files and calculate the average daily traffic.
Keep in mind that once a specific user connects to the site, they will typically stay on it for at least a while,
making additional requests, and so just counting daily requests may not accurately reflect the real traffic
patterns. To get a more accurate reading, you can also use a tool, such as Microsoft Excel, to calculate the
average time between requests. For example:
NUMBER    REQUEST URL                                             REQUEST TIME    DELTA
1         /SourceSilverLight/Geosource.web/grosource.html         10:01
5         /SourceSilverLight/GeosourcewebService/Service.asmx     10:23           0:11
6         /SourceSilverLight/Geosource.web/GeoSearchServer….      11:50           1:27
The hard part, though, is figuring out which settings make sense to apply. In our case, the site gets a bunch of
requests from users, and the table above shows that a total of 4 unique sessions occurred in a period of 4 hours.
With the default settings for worker process suspension of the application pool, the site would be terminated
after the default timeout of 20 minutes, which means each of these users would experience the site spin-up
cycle. This makes it an ideal candidate for worker process suspension, because for most of the time, the site is
idle, and so suspending it would conserve resources, and allow the users to reach the site almost instantly.
A final and very important note: disk performance is crucial for this feature. Because the suspension and
wake-up processes involve writing and reading large amounts of data to the hard drive, we strongly recommend
using a fast disk for this. Solid State Drives (SSDs) are ideal and highly recommended for
this, and you should make sure that the Windows page file is stored on it (if the operating system itself is not
installed on the SSD, configure the operating system to move the page file to it).
Whether you use an SSD or not, we also recommend fixing the size of the page file to accommodate writing the
page-out data to it without file-resizing. Page-file resizing might happen when the operating system needs to
store data in the page file, because by default, Windows is configured to automatically adjust its size based on
need. By setting the size to a fixed one, you can prevent resizing and improve performance a lot.
To configure a pre-fixed page file size, you need to calculate its ideal size, which depends on how many sites you
will be suspending, and how much memory they consume. If the average is 200 MB for an active worker
process and you have 500 sites on the servers that will be suspending, then the page file should be at least (200
* 500) MB over the base size of the page file (so base + 100 GB in our example).
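A sketch of pinning the page file size with CIM follows; the page file path and sizes (102,400 MB, roughly the 100 GB from the example) are illustrative, and a restart is required for the change to take effect:
# Disable automatic page file management, then fix the size
$cs = Get-CimInstance -ClassName Win32_ComputerSystem
Set-CimInstance -InputObject $cs -Property @{ AutomaticManagedPagefile = $false }
$pf = Get-CimInstance -ClassName Win32_PageFileSetting -Filter "Name='C:\\pagefile.sys'"
Set-CimInstance -InputObject $pf -Property @{ InitialSize = 102400; MaximumSize = 102400 }   # sizes in MB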
NOTE
When sites are suspended, they will consume approximately 6 MB each, so in our case, memory usage if all sites are
suspended would be around 3 GB. In reality, though, you're probably never going to have them all suspended at the
same time.
NOTE
Larger keys provide more security, but they also use more CPU time.
All components might not need to be encrypted. However, mixing plain HTTP and HTTPS might result in a pop-up
warning that not all content on the page is secure.
Additional References
Web Server performance tuning
HTTP 1.1/2 tuning
Performance Tuning HTTP 1.1/2
HTTP/2 is meant to improve performance on the client side (e.g., page load time on a browser). On the server, it
may represent a slight increase in CPU cost. While the server no longer requires a separate TCP connection for
every request, some of that state will now be kept in the HTTP layer. Furthermore, HTTP/2 has header
compression, which represents additional CPU load.
Some situations require an HTTP/1.1 fallback (resetting the HTTP/2 connection and instead establishing a new
connection to use HTTP/1.1). In particular, TLS renegotiation and HTTP authentication (other than Basic and
Digest) require HTTP/1.1 fallback. Even though this adds overhead, these operations already imply some delay
and so are not particularly performance-sensitive.
Additional References
Web Server performance tuning
IIS 10.0 performance tuning
Performance Tuning Cache and Memory Manager
By default, Windows caches file data that is read from disks and written to disks. This implies that read
operations read file data from an area in system memory, known as the system file cache, rather than from the
physical disk. Correspondingly, write operations write file data to the system file cache rather than to the disk,
and this type of cache is referred to as a write-back cache. Caching is managed per file object. Caching occurs
under the direction of the Cache Manager, which operates continuously while Windows is running.
File data in the system file cache is written to the disk at intervals determined by the operating system. Flushed
pages stay either in the system cache working set (when FILE_FLAG_RANDOM_ACCESS is set and the file handle
wasn't closed) or on the standby list, where they become part of available memory.
The policy of delaying the writing of the data to the file and holding it in the cache until the cache is flushed is
called lazy writing, and it is triggered by the Cache Manager at a determinate time interval. The time at which a
block of file data is flushed is partially based on the amount of time it has been stored in the cache and the
amount of time since the data was last accessed in a read operation. This ensures that file data that is frequently
read will stay accessible in the system file cache for the maximum amount of time.
This file data caching process is illustrated in the following figure:
As depicted by the solid arrows in the preceding figure, a 256 KB region of data is read into a 256 KB cache slot
in system address space when it is first requested by the Cache Manager during a file read operation. A user-
mode process then copies the data in this slot to its own address space. When the process has completed its
data access, it writes the altered data back to the same slot in the system cache, as shown by the dotted arrow
between the process address space and the system cache. When the Cache Manager has determined that the
data will no longer be needed for a certain amount of time, it writes the altered data back to the file on the disk,
as shown by the dotted arrow between the system cache and the disk.
In this section:
Cache and Memory Manager Potential Performance Issues
Cache and Memory Manager Improvements in Windows Server 2016
Troubleshoot Cache and Memory Manager
Cache and Memory Manager Potential Performance Issues
Before Windows Server 2012, two primary issues caused the system file cache to grow until available
memory was almost depleted under certain workloads. If this situation leaves the system sluggish, you can
determine whether the server is facing one of these issues.
Counters to monitor
Memory\Long-Term Average Standby Cache Lifetime (s) < 1800 seconds
Memory\Available Mbytes is low
Memory\System Cache Resident Bytes
If Memory\Available Mbytes is low and at the same time Memory\System Cache Resident Bytes is consuming a
significant part of the physical memory, you can use RAMMap to find out what the cache is being used for.
The problem used to be mitigated by the DynCache tool. In Windows Server 2012+, the architecture has been
redesigned and this problem should no longer exist.
This topic describes Cache Manager and Memory Manager improvements in Windows Server 2012 and 2016.
Network Subsystem Performance Tuning
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Azure Stack HCI, versions
21H2 and 20H2
You can use this topic for an overview of the network subsystem and for links to other topics in this guide.
NOTE
In addition to this topic, the following sections of this guide provide performance tuning recommendations for network
devices and the network stack.
Choosing a Network Adapter
Configure the Order of Network Interfaces
Performance Tuning Network Adapters
Network-Related Performance Counters
Performance Tools for Network Workloads
Performance tuning the network subsystem, particularly for network intensive workloads, can involve each layer
of the network architecture, which is also called the network stack. These layers are broadly divided into the
following sections.
1. Network interface. This is the lowest layer in the network stack, and contains the network driver that
communicates directly with the network adapter.
2. Network Driver Interface Specification (NDIS). NDIS exposes interfaces for the driver below it and
for the layers above it, such as the Protocol Stack.
3. Protocol Stack. The protocol stack implements protocols such as TCP/IP and UDP/IP. These layers
expose the transport layer interface for layers above them.
4. System Drivers. These are typically clients that use a transport data extension (TDX) or Winsock Kernel
(WSK) interface to expose interfaces to user-mode applications. The WSK interface was introduced in
Windows Server 2008 and Windows Vista, and it is exposed by AFD.sys. The interface improves
performance by eliminating the switching between user mode and kernel mode.
5. User-Mode Applications. These are typically Microsoft solutions or custom applications.
The table below provides a vertical illustration of the layers of the network stack, including examples of items
that run in each layer.
Choosing a Network Adapter
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Azure Stack HCI, versions
21H2 and 20H2
You can use this topic to learn some of the features of network adapters that might affect your purchasing
choices.
Network-intensive applications require high-performance network adapters. This section explores some
considerations for choosing network adapters, as well as how to configure different network adapter settings to
achieve the best network performance.
TIP
You can configure network adapter settings by using Windows PowerShell. For more information, see Network Adapter
Cmdlets in Windows PowerShell.
Offload Capabilities
Offloading tasks from the central processing unit (CPU) to the network adapter can reduce CPU usage on the
server, which improves the overall system performance.
The network stack in Microsoft products can offload one or more tasks to a network adapter if you select a
network adapter that has the appropriate offload capabilities. The following table provides a brief overview of
different offload capabilities that are available in Windows Server 2016.
OFFLOAD TYPE                          DESCRIPTION
Checksum calculation for TCP The network stack can offload the calculation and validation
of Transmission Control Protocol (TCP) checksums on send
and receive code paths. It can also offload the calculation
and validation of IPv4 and IPv6 checksums on send and
receive code paths.
Checksum calculation for UDP The network stack can offload the calculation and validation
of User Datagram Protocol (UDP) checksums on send and
receive code paths.
Checksum calculation for IPv4 The network stack can offload the calculation and validation
of IPv4 checksums on send and receive code paths.
Checksum calculation for IPv6 The network stack can offload the calculation and validation
of IPv6 checksums on send and receive code paths.
Segmentation of large TCP packets The TCP/IP transport layer supports Large Send Offload v2
(LSOv2). With LSOv2, the TCP/IP transport layer can offload
the segmentation of large TCP packets to the network
adapter.
Receive Side Scaling (RSS) RSS is a network driver technology that enables the efficient
distribution of network receive processing across multiple
CPUs in multiprocessor systems. More detail about RSS is
provided later in this topic.
Receive Segment Coalescing (RSC) RSC is the ability to group packets together to minimize the
header processing that is necessary for the host to perform.
A maximum of 64 KB of received payload can be coalesced
into a single larger packet for processing. More detail about
RSC is provided later in this topic.
NOTE
For a detailed command reference for each cmdlet, including syntax and parameters, you can click the following links. In
addition, you can pass the cmdlet name to Get-Help at the Windows PowerShell prompt for details on each command.
Disable-NetAdapterRss. This command disables RSS on the network adapter that you specify.
Enable-NetAdapterRss. This command enables RSS on the network adapter that you specify.
Get-NetAdapterRss. This command retrieves RSS properties of the network adapter that you specify.
Set-NetAdapterRss. This command sets the RSS properties on the network adapter that you specify.
RSS profiles
You can use the –Profile parameter of the Set-NetAdapterRss cmdlet to specify which logical processors are
assigned to which network adapter. Available values for this parameter are:
Closest. Logical processor numbers that are near the network adapter's base RSS processor are
preferred. With this profile, the operating system might rebalance logical processors dynamically based
on load.
ClosestStatic. Logical processor numbers near the network adapter's base RSS processor are preferred.
With this profile, the operating system does not rebalance logical processors dynamically based on load.
NUMA. Logical processor numbers are generally selected on different NUMA nodes to distribute the
load. With this profile, the operating system might rebalance logical processors dynamically based on
load.
NUMAStatic. This is the default profile. Logical processor numbers are generally selected on different
NUMA nodes to distribute the load. With this profile, the operating system will not rebalance logical
processors dynamically based on load.
Conservative. RSS uses as few processors as possible to sustain the load. This option helps reduce the
number of interrupts.
Depending on the scenario and the workload characteristics, you can also use other parameters of the Set-
NetAdapterRss Windows PowerShell cmdlet to specify the following:
On a per-network adapter basis, how many logical processors can be used for RSS.
The starting offset for the range of logical processors.
The node from which the network adapter allocates memory.
Following are the additional Set-NetAdapterRss parameters that you can use to configure RSS:
NOTE
In the example syntax for each parameter below, the network adapter name Ethernet is used as an example value for the
–Name parameter of the Set-NetAdapterRss command. When you run the cmdlet, ensure that the network adapter
name that you use is appropriate for your environment.
MaxProcessors: Sets the maximum number of RSS processors to be used. This ensures that
application traffic is bound to a maximum number of processors on a given interface. Example syntax:
Set-NetAdapterRss –Name "Ethernet" –MaxProcessors <value>
BaseProcessorGroup: Sets the base processor group of a NUMA node. This impacts the processor
array that is used by RSS. Example syntax:
Set-NetAdapterRss –Name "Ethernet" –BaseProcessorGroup <value>
MaxProcessorGroup: Sets the max processor group of a NUMA node. This impacts the processor
array that is used by RSS. Setting this would restrict a maximum processor group so that load balancing
is aligned within a k-group. Example syntax:
Set-NetAdapterRss –Name "Ethernet" –MaxProcessorGroup <value>
BaseProcessorNumber: Sets the base processor number of a NUMA node. This impacts the
processor array that is used by RSS. This allows partitioning processors across network adapters. This is
the first logical processor in the range of RSS processors that is assigned to each adapter. Example syntax:
Set-NetAdapterRss –Name "Ethernet" –BaseProcessorNumber <Byte Value>
NumaNode: The NUMA node that each network adapter can allocate memory from. This can be within
a k-group or from different k-groups. Example syntax:
Set-NetAdapterRss –Name "Ethernet" –NumaNodeID <value>
NumberOfReceiveQueues: If your logical processors seem to be underutilized for receive traffic (for
example, as viewed in Task Manager), you can try increasing the number of RSS queues from the default
of 2 to the maximum that is supported by your network adapter. Your network adapter may have options
to change the number of RSS queues as part of the driver. Example syntax:
Set-NetAdapterRss –Name "Ethernet" –NumberOfReceiveQueues <value>
For more information, click the following link to download Scalable Networking: Eliminating the Receive
Processing Bottleneck—Introducing RSS in Word format.
Understanding RSS Performance
Tuning RSS requires understanding the configuration and the load-balancing logic. To verify that the RSS
settings have taken effect, you can review the output when you run the Get-NetAdapterRss Windows
PowerShell cmdlet. Following is example output of this cmdlet.
PS C:\Users\Administrator> get-netadapterrss
Name : testnic 2
InterfaceDescription : Broadcom BCM5708C NetXtreme II GigE (NDIS VBD Client) #66
Enabled : True
NumberOfReceiveQueues : 2
Profile : NUMAStatic
BaseProcessor: [Group:Number] : 0:0
MaxProcessor: [Group:Number] : 0:15
MaxProcessors : 8
IndirectionTable: [Group:Number]:
0:0 0:4 0:0 0:4 0:0 0:4 0:0 0:4
…
(# indirection table entries are a power of 2 and based on # of processors)
…
0:0 0:4 0:0 0:4 0:0 0:4 0:0 0:4
In addition to echoing parameters that were set, the key aspect of the output is the indirection table output. The
indirection table displays the hash table buckets that are used to distribute incoming traffic. In this example, the
n:c notation designates the Numa K-Group:CPU index pair that is used to direct incoming traffic. We see exactly
2 unique entries (0:0 and 0:4), which represent k-group 0/cpu0 and k-group 0/cpu 4, respectively.
There is only one k-group for this system (k-group 0) and n (where n <= 128) indirection table entries. Because
the number of receive queues is set to 2, only 2 processors (0:0, 0:4) are chosen, even though maximum
processors is set to 8. In effect, the indirection table hashes incoming traffic to use only 2 CPUs out of the 8
that are available.
To fully utilize the CPUs, the number of RSS Receive Queues must be equal to or greater than Max Processors. In
the previous example, the Receive Queue should be set to 8 or greater.
NIC Teaming and RSS
RSS can be enabled on a network adapter that is teamed with another network interface card using NIC
Teaming. In this scenario, only the underlying physical network adapter can be configured to use RSS. A user
cannot set RSS cmdlets on the teamed network adapter.
Receive Segment Coalescing (RSC )
Receive Segment Coalescing (RSC) helps performance by reducing the number of IP headers that are processed
for a given amount of received data. It should be used to help scale the performance of received data by
grouping (or coalescing) the smaller packets into larger units.
This approach can affect latency, with the benefits mostly seen in throughput gains. RSC is recommended to
increase throughput for receive-heavy workloads. Consider deploying network adapters that support RSC.
On these network adapters, ensure that RSC is on (this is the default setting), unless you have specific workloads
(for example, low-latency, low-throughput networking) that show benefit from RSC being off.
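You can check and toggle RSC per adapter with the NetAdapter cmdlets; the adapter name is illustrative:
Get-NetAdapterRsc -Name "Ethernet"        # current IPv4/IPv6 RSC state
Disable-NetAdapterRsc -Name "Ethernet"    # for latency-sensitive workloads
Enable-NetAdapterRsc -Name "Ethernet"     # default, throughput-oriented setting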
Understanding RSC Diagnostics
You can diagnose RSC by using the Windows PowerShell cmdlets Get-NetAdapterRsc and Get-
NetAdapterStatistics .
Following is example output when you run the Get-NetAdapterRsc cmdlet.
PS C:\Users\Administrator> Get-NetAdapterRsc
The Get cmdlet shows whether RSC is enabled in the interface and whether TCP enables RSC to be in an
operational state. The failure reason provides details about the failure to enable RSC on that interface.
In the previous scenario, IPv4 RSC is supported and operational in the interface. To understand diagnostic
failures, one can see the coalesced bytes or exceptions caused. This provides an indication of the coalescing
issues.
Following is example output when you run the Get-NetAdapterStatistics cmdlet.
CoalescedBytes : 0
CoalescedPackets : 0
CoalescingEvents : 0
CoalescingExceptions : 0
Configure the Order of Network Interfaces
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Azure Stack HCI, versions
21H2 and 20H2
In Windows Server 2016 and Windows 10, you can use the interface metric to configure the order of network
interfaces.
This is different than in previous versions of Windows and Windows Server, which allowed you to configure the
binding order of network adapters by using either the user interface or the commands
INetCfgComponentBindings::MoveBefore and INetCfgComponentBindings::MoveAfter. These two
methods for ordering network interfaces are not available in Windows Server 2016 and Windows 10.
Instead, you can use the new method for setting the enumerated order of network adapters by configuring the
interface metric of each adapter. You can configure the interface metric by using the Set-NetIPInterface Windows
PowerShell command.
When network traffic routes are chosen and you have configured the InterfaceMetric parameter of the Set-
NetIPInterface command, the overall metric that is used to determine the interface preference is the sum of
the route metric and the interface metric. Typically, the interface metric gives preference to a particular interface,
such as using wired if both wired and wireless are available.
The following Windows PowerShell command example shows use of this parameter.
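A representative invocation looks like this; the interface index and metric value are illustrative, and a lower metric is preferred:
Set-NetIPInterface -InterfaceIndex 12 -InterfaceMetric 2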
The order in which adapters appear in a list is determined by the IPv4 or IPv6 interface metric. For more
information, see GetAdaptersAddresses function.
For links to all topics in this guide, see Network Subsystem Performance Tuning.
Performance Tuning Network Adapters
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Azure Stack HCI, versions
21H2 and 20H2
Use the information in this topic to tune the performance of network adapters for computers that are running
Windows Server 2016 and later versions. If your network adapters provide tuning options, you can use these
options to optimize network throughput and resource usage.
The correct tuning settings for your network adapters depend on the following variables:
The network adapter and its feature set
The type of workload that the server performs
The server hardware and software resources
Your performance goals for the server
The following sections describe some of your performance tuning options.
IMPORTANT
Do not use the offload features IPsec Task Offload or TCP Chimney Offload. These technologies are deprecated in
Windows Server 2016, and might adversely affect server and networking performance. In addition, these technologies
might not be supported by Microsoft in the future.
For example, consider a network adapter that has limited hardware resources. In that case, enabling
segmentation offload features might reduce the maximum sustainable throughput of the adapter. However, if
the reduced throughput is acceptable, you should go ahead and enable the segmentation offload features.
NOTE
Some network adapters require you to enable offload features independently for the send and receive paths.
NOTE
If a network adapter does not expose manual resource configuration, either it dynamically configures the resources, or the
resources are set to a fixed value that cannot be changed.
NOTE
This setting does not work properly if the system BIOS has been set to disable operating system control of power
management.
Enable static offloads. For example, enable the UDP Checksums, TCP Checksums, and Large Send Offload
(LSO) settings.
If the traffic is multi-streamed, such as when receiving high-volume multicast traffic, enable RSS.
Disable the Interrupt Moderation setting for network card drivers that require the lowest possible
latency. Remember, this configuration can use more CPU time and it represents a tradeoff.
Handle network adapter interrupts and DPCs on a core that shares CPU cache with the core
that is being used by the program (user thread) that is handling the packet. CPU affinity tuning can be
used to direct a process to certain logical processors in conjunction with RSS configuration to accomplish
this. Using the same core for the interrupt, DPC, and user mode thread exhibits worse performance as
load increases because the ISR, DPC, and thread contend for the use of the core.
NOTE
The operating system cannot control SMIs because the logical processor is running in a special maintenance mode, which
prevents operating system intervention.
Total achievable throughput in bytes = TCP receive window size in bytes * (1 / connection latency in
seconds)
For example, for a connection that has a latency of 10 ms, the total achievable throughput is only 51 Mbps. This
value is reasonable for a large corporate network infrastructure. However, by using autotuning to adjust the
receive window, the connection can achieve the full line rate of a 1-Gbps connection.
Some applications define the size of the TCP receive window. If the application does not define the receive
window size, the link speed determines the size as follows:
Less than 1 megabit per second (Mbps): 8 kilobytes (KB)
1 Mbps to 100 Mbps: 17 KB
100 Mbps to 10 gigabits per second (Gbps): 64 KB
10 Gbps or faster: 128 KB
For example, on a computer that has a 1-Gbps network adapter installed, the window size should be 64 KB.
This feature also makes full use of other features to improve network performance. These features include the
rest of the TCP options that are defined in RFC 1323. By using these features, Windows-based computers can
negotiate TCP receive window sizes that are smaller but are scaled at a defined value, depending on the
configuration. This behavior makes the sizes easier for networking devices to handle.
NOTE
You may experience an issue in which the network device is not compliant with the TCP window scale option, as
defined in RFC 1323, and therefore doesn't support the scale factor. In such cases, see KB 934430, Network
connectivity fails when you try to use Windows Vista behind a firewall device, or contact the Support team for your
network device vendor.
NOTE
Unlike in versions of Windows that pre-date Windows 10 or Windows Server 2019, you can no longer use the registry to
configure the TCP receive window size. For more information about the deprecated settings, see Deprecated TCP
parameters.
NOTE
For detailed information about the available autotuning levels, see Autotuning levels.
To modify the setting, run the following command at the command prompt:
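The command takes the following netsh form, where <Value> is one of the autotuning levels described later in this topic:
netsh interface tcp set global autotuninglevel=<Value>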
NOTE
In the preceding command, <Value> represents the new value for the auto tuning level.
For more information about this command, see Netsh commands for Interface Transmission Control Protocol.
To use PowerShell to review or modify the autotuning level
To review the current settings, open a PowerShell window and run the following cmdlet.
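A cmdlet along these lines produces output like the sample that follows:
Get-NetTCPSetting | Format-Table SettingName, AutoTuningLevelLocal -AutoSize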
SettingName AutoTuningLevelLocal
----------- --------------------
Automatic
InternetCustom Normal
DatacenterCustom Normal
Compat Normal
Datacenter Normal
Internet Normal
To modify the setting, run the following cmdlet at the PowerShell command prompt.
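For example, to change the autotuning level on one of the modifiable templates (the built-in Internet and Datacenter templates are read-only; the level shown is illustrative):
Set-NetTCPSetting -SettingName InternetCustom -AutoTuningLevelLocal Restricted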
For more information about these cmdlets, see the following articles:
Get-NetTCPSetting
Set-NetTCPSetting
Autotuning levels
You can set receive window autotuning to any of five levels. The default level is Normal . The following table
describes the levels.
LEVEL                SCALE FACTOR                 DESCRIPTION
Normal (default)     0x8 (scale factor of 8)      Set the TCP receive window to grow to accommodate almost all scenarios.
Disabled             No scale factor available    Set the TCP receive window at its default value.
Restricted           0x4 (scale factor of 4)      Set the TCP receive window to grow beyond its default value, but limit such growth in some scenarios.
Highly Restricted    0x2 (scale factor of 2)      Set the TCP receive window to grow beyond its default value, but do so very conservatively.
Experimental         0xE (scale factor of 14)     Set the TCP receive window to grow to accommodate extreme scenarios.
If you use an application to capture network packets, the application should report data that resembles the
following for different window autotuning level settings.
Autotuning level: Normal (default state)
Frame: Number = 492, Captured Frame Length = 66, MediaType = ETHERNET
+ Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[D8-FE-E3-65-F3-FD],SourceAddress:[C8-5B-
76-7D-FA-7F]
+ Ipv4: Src = 192.169.0.5, Dest = 192.169.0.4, Next Protocol = TCP, Packet ID = 2667, Total IP Length
= 52
- Tcp: [Bad CheckSum]Flags=......S., SrcPort=60975, DstPort=Microsoft-DS(445), PayloadLen=0,
Seq=4075590425, Ack=0, Win=64240 ( Negotiating scale factor 0x8 ) = 64240
SrcPort: 60975
DstPort: Microsoft-DS(445)
SequenceNumber: 4075590425 (0xF2EC9319)
AcknowledgementNumber: 0 (0x0)
+ DataOffset: 128 (0x80)
+ Flags: ......S. ---------------------------------------------------------> SYN Flag set
Window: 64240 ( Negotiating scale factor 0x8 ) = 64240 ---------> TCP Receive Window set as 64K as
per NIC Link bitrate. Note it shows the 0x8 Scale Factor.
Checksum: 0x8182, Bad
UrgentPointer: 0 (0x0)
- TCPOptions:
+ MaxSegmentSize: 1
+ NoOption:
+ WindowsScaleFactor: ShiftCount: 8 -----------------------------> Scale factor, defined by
AutoTuningLevel
+ NoOption:
+ NoOption:
+ SACKPermitted:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters
NOTE
A poorly-written WFP filter can significantly decrease a server's networking performance. For more information, see
Porting Packet-Processing Drivers and Apps to WFP in the Windows Dev Center.
For links to all topics in this guide, see Network Subsystem Performance Tuning.
Network-Related Performance Counters
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Azure Stack HCI, versions
21H2 and 20H2
This topic lists the counters that are relevant to managing network performance, and contains the following
sections.
Resource Utilization
Potential Network Problems
Receive Side Coalescing (RSC) performance
Resource Utilization
The following performance counters are relevant to network resource utilization.
IPv4, IPv6
Datagrams Received/sec
Datagrams Sent/sec
TCPv4, TCPv6
Segments Received/sec
Segments Sent/sec
Segments Retransmitted/sec
Network Interface(*), Network Adapter(*)
Bytes Received/sec
Bytes Sent/sec
Packets Received/sec
Packets Sent/sec
Output Queue Length
This counter is the length of the output packet queue (in packets). If this is longer than 2, delays
occur. You should find the bottleneck and eliminate it if you can. Because NDIS queues the
requests, this length should always be 0.
Processor Information
% Processor Time
Interrupts/sec
DPCs Queued/sec
This counter is an average rate at which DPCs were added to the logical processor's DPC queue.
Each logical processor has its own DPC queue. This counter measures the rate at which DPCs are
added to the queue, not the number of DPCs in the queue. It displays the difference between the
values that were observed in the last two samples, divided by the duration of the sample interval.
NIC Teaming
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Azure Stack HCI, versions
21H2 and 20H2
In this topic, we give you an overview of Network Interface Card (NIC) Teaming in Windows Server. NIC Teaming
allows you to group between one and 32 physical Ethernet network adapters into one or more software-based
virtual network adapters. These virtual network adapters provide fast performance and fault tolerance in the
event of a network adapter failure.
IMPORTANT
You must install NIC Team member network adapters in the same physical host computer.
TIP
A NIC team that contains only one network adapter cannot provide load balancing and failover. However, with one
network adapter, you can use NIC Teaming for separation of network traffic when you are also using virtual Local Area
Networks (VLANs).
When you configure network adapters into a NIC team, they connect into the NIC teaming solution common
core, which then presents one or more virtual adapters (also called team NICs [tNICs] or team interfaces) to the
operating system.
Since Windows Server 2016 supports up to 32 team interfaces per team, there are a variety of algorithms that
distribute outbound traffic (load) between the NICs. The following illustration depicts a NIC Team with multiple
tNICs.
Also, you can connect your teamed NICs to the same switch or different switches. If you connect NICs to
different switches, both switches must be on the same subnet.
Availability
NIC Teaming is available in all versions of Windows Server 2016. You can use a variety of tools to manage NIC
Teaming from computers running a client operating system, such as:
Windows PowerShell cmdlets
Remote Desktop
Remote Server Administration Tools
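For example, a team can be created from PowerShell with New-NetLbfoTeam; the team and member names below are illustrative, and the modes shown are common choices rather than recommendations:
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic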
IMPORTANT
Do not place Hyper-V virtual NICs exposed in the host partition (vNICs) in a team. Teaming of vNICs inside of the
host partition is not supported in any configuration. Attempts to team vNICs might cause a complete loss of
communication if network failures occur.
Compatibility
NIC teaming is compatible with all networking technologies in Windows Server 2016 with the following
exceptions.
Single-root I/O virtualization (SR-IOV). For SR-IOV, data is delivered directly to the NIC without
passing it through the networking stack (in the host operating system, in the case of virtualization).
Therefore, it is not possible for the NIC team to inspect or redirect the data to another path in the team.
Native host Quality of Service (QoS). When you set QoS policies on a native or host system, and
those policies invoke minimum bandwidth limitations, the overall throughput for a NIC team is less than
it would be without the bandwidth policies in place.
TCP Chimney. TCP Chimney is not supported with NIC teaming because TCP Chimney offloads the
entire networking stack directly to the NIC.
802.1X Authentication. You should not use 802.1X Authentication with NIC Teaming because some
switches do not permit the configuration of both 802.1X Authentication and NIC Teaming on the same
port.
To learn about using NIC Teaming within virtual machines (VMs) that run on a Hyper-V host, see Create a new
NIC Team on a host computer or VM.
Live Migration
NIC Teaming in VMs does not affect Live Migration. The same rules exist for Live Migration whether or not
configuring NIC Teaming in the VM.
Each VM can have a virtual function (VF) from one or both SR-IOV NICs and, in the event of a NIC disconnect,
failover from the primary VF to the backup adapter (VF). Alternately, the VM may have a VF from one NIC and a
non-VF vmNIC connected to another virtual switch. If the NIC associated with the VF gets disconnected, the
traffic can failover to the other switch without loss of connectivity.
Because failover between NICs in a VM might result in traffic sent with the MAC address of the other vmNIC,
each Hyper-V Virtual Switch port associated with a VM using NIC Teaming must be set to allow teaming.
Related topics
NIC Teaming MAC address use and management: When you configure a NIC Team with switch
independent mode and either address hash or dynamic load distribution, the team uses the media access
control (MAC) address of the primary NIC Team member on outbound traffic. The primary NIC Team
member is a network adapter selected by the operating system from the initial set of team members.
NIC Teaming settings: In this topic, we give you an overview of the NIC Team properties such as teaming
and load balancing modes. We also give you details about the Standby adapter setting and the Primary
team interface property. If you have at least two network adapters in a NIC Team, you do not need to
designate a Standby adapter for fault tolerance.
Create a new NIC Team on a host computer or VM: In this topic, you create a new NIC Team on a host
computer or in a Hyper-V virtual machine (VM) running Windows Server 2016.
Troubleshooting NIC Teaming: In this topic, we discuss ways to troubleshoot NIC Teaming, such as
hardware, physical switch securities, and disabling or enabling network adapters using Windows
PowerShell.
NIC Teaming MAC address use and management
1/5/2022 • 4 minutes to read
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Azure Stack HCI, versions
21H2 and 20H2
When you configure a NIC Team with switch independent mode and either address hash or dynamic load
distribution, the team uses the media access control (MAC) address of the primary NIC Team member on
outbound traffic. The primary NIC Team member is a network adapter that the operating system selects from
the initial set of team members. It is the first team member to bind to the team after you create it or after the
host computer is restarted. Because the operating system might select a different primary team member at
each boot, NIC disable/enable action, or other reconfiguration activity, the MAC address of the team might vary.
In most situations this doesn't cause problems, but there are a few cases where issues might arise.
If the primary team member is removed from the team and then placed into operation, there may be a MAC
address conflict. To resolve this conflict, disable and then enable the team interface. The process of disabling and
enabling the team interface causes the interface to select a new MAC address from the remaining team
members, and eliminates the MAC address conflict.
You can set the MAC address of the NIC team to a specific MAC address by setting it in the primary team
interface, just as you can do when configuring the MAC address of any physical NIC.
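For example, a minimal Windows PowerShell sketch of both operations, assuming a team interface named "Team1" and an example locally administered MAC address:
# Disable and re-enable the team interface to clear a MAC address conflict
Restart-NetAdapter -Name "Team1"
# Assign a specific MAC address to the primary team interface
Set-NetAdapter -Name "Team1" -MacAddress "02-11-22-33-44-55"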
Related topics
NIC Teaming: In this topic, we give you an overview of Network Interface Card (NIC) Teaming in Windows
Server 2016. NIC Teaming allows you to group between one and 32 physical Ethernet network adapters
into one or more software-based virtual network adapters. These virtual network adapters provide fast
performance and fault tolerance in the event of a network adapter failure.
NIC Teaming settings: In this topic, we give you an overview of the NIC Team properties such as teaming
and load balancing modes. We also give you details about the Standby adapter setting and the Primary
team interface property. If you have at least two network adapters in a NIC Team, you do not need to
designate a Standby adapter for fault tolerance.
Create a new NIC Team on a host computer or VM: In this topic, you create a new NIC Team on a host
computer or in a Hyper-V virtual machine (VM) running Windows Server 2016.
Troubleshooting NIC Teaming: In this topic, we discuss ways to troubleshoot NIC Teaming, such as
hardware issues, physical switch security features, and disabling or enabling network adapters by using
Windows PowerShell.
Create a new NIC Team on a host computer or VM
1/5/2022 • 8 minutes to read
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Azure Stack HCI, versions
21H2 and 20H2
In this topic, you create a new NIC Team on a host computer or in a Hyper-V virtual machine (VM) running
Windows Server 2016.
TIP
You can also enable NIC Teaming with a Windows PowerShell command:
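For example, the following sketch creates a switch-independent team from two adapters; the team and adapter names are placeholders:
# Create a NIC Team from two physical adapters
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic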
3. In Adapters and Interfaces, select the one or more network adapters that you want to add to a NIC
Team.
4. Click TASKS, and then click Add to New Team.
The New team dialog box opens and displays network adapters and team members.
5. In Team name, type a name for the new NIC Team, and then click Additional properties.
6. In Additional properties, select values for:
Teaming mode. The options for Teaming mode are Switch Independent and Switch
Dependent. The Switch Dependent mode includes Static Teaming and Link Aggregation
Control Protocol (LACP).
Switch Independent. With Switch Independent mode, the switch or switches to which the
NIC Team members are connected are unaware of the presence of the NIC team and do not
determine how to distribute network traffic to NIC Team members; instead, the NIC Team
distributes inbound network traffic across the NIC Team members.
Switch Dependent. With Switch Dependent modes, the switch to which the NIC Team
members are connected determines how to distribute the inbound network traffic among
the NIC Team members. The switch has complete independence to determine how to
distribute the network traffic across the NIC Team members.
Link Aggregation Control Protocol (LACP): Unlike Static Teaming, LACP Teaming mode
dynamically identifies links that are connected between the host and the switch. This dynamic
connection enables the automatic creation of a team and, in theory but rarely in practice, the
expansion and reduction of a team simply by the transmission or receipt of LACP packets from
the peer entity. All server-class switches support LACP, and all require the network operator to
administratively enable LACP on the switch port. When you configure a Teaming mode of LACP,
NIC Teaming always operates in LACP's Active mode. By default, NIC Teaming uses a short timer
(3 seconds), but you can configure a long timer (90 seconds) with Set-NetLbfoTeam; see the
sketch after these steps.
Load balancing mode. The options for Load Balancing distribution mode are Address Hash,
Hyper-V Port, and Dynamic.
Address Hash. With Address Hash, the team creates a hash based on address components
of the packet and assigns packets with that hash value to one of the available adapters.
Usually, this mechanism alone is sufficient to create a reasonable balance across the
available adapters.
Hyper-V Port. With Hyper-V Port, NIC Teams configured on Hyper-V hosts give VMs
independent MAC addresses. The VM's MAC address, or the VM port connected to the
Hyper-V switch, can be used to divide network traffic between NIC Team members. You
cannot configure NIC Teams that you create within VMs with the Hyper-V Port load
balancing mode. Instead, use the Address Hash mode.
Dynamic. With Dynamic, outbound loads are distributed based on a hash of the TCP ports
and IP addresses. Dynamic mode also rebalances loads in real time so that a given
outbound flow may move back and forth between team members. Inbound loads, on the
other hand, get distributed the same way as Hyper-V Port. In a nutshell, Dynamic mode
utilizes the best aspects of both Address Hash and Hyper-V Port and is the highest
performing load balancing mode.
Standby adapter. The options for Standby Adapter are None (all adapters Active) or your
selection of a specific network adapter in the NIC Team that acts as a Standby adapter.
TIP
If you are configuring a NIC Team in a virtual machine (VM), you must select a Teaming mode of Switch
Independent and a Load balancing mode of Address Hash.
7. If you want to configure the primary team interface name or assign a VLAN number to the NIC Team,
click the link to the right of Primary team interface.
The New team interface dialog box opens.
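For example, minimal sketches for two of the settings above; the team names, adapter names, and VM name are placeholders, and TransportPorts is assumed here as the algorithm behind the default Address Hash behavior:
# Switch an existing team to the long LACP timer (90 seconds)
Set-NetLbfoTeam -Name "Team1" -LacpTimer Slow
# Inside a VM: create a team with the modes supported in a guest
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
# On the Hyper-V host: allow teaming on the VM's virtual switch ports
Set-VMNetworkAdapter -VMName "VM1" -AllowTeaming On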
Related topics
NIC Teaming: In this topic, we give you an overview of Network Interface Card (NIC) Teaming in Windows
Server 2016. NIC Teaming allows you to group between one and 32 physical Ethernet network adapters
into one or more software-based virtual network adapters. These virtual network adapters provide fast
performance and fault tolerance in the event of a network adapter failure.
NIC Teaming MAC address use and management: When you configure a NIC Team with switch
independent mode and either address hash or dynamic load distribution, the team uses the media access
control (MAC) address of the primary NIC Team member on outbound traffic. The primary NIC Team
member is a network adapter selected by the operating system from the initial set of team members.
NIC Teaming settings: In this topic, we give you an overview of the NIC Team properties such as teaming
and load balancing modes. We also give you details about the Standby adapter setting and the Primary
team interface property. If you have at least two network adapters in a NIC Team, you do not need to
designate a Standby adapter for fault tolerance.
Troubleshooting NIC Teaming: In this topic, we discuss ways to troubleshoot NIC Teaming, such as
hardware issues, physical switch security features, and disabling or enabling network adapters by using
Windows PowerShell.
Troubleshooting NIC Teaming
1/5/2022 • 3 minutes to read
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Azure Stack HCI, versions
21H2 and 20H2
In this topic, we discuss ways to troubleshoot NIC Teaming, such as hardware issues and physical switch security features.
When hardware implementations of standard protocols don't conform to specifications, NIC Teaming
performance might be affected. Also, depending on the configuration, NIC Teaming may send packets from the
same IP address with multiple MAC addresses that can trip security features on the physical switch.
If you need to disable and then re-enable the team's network adapters while troubleshooting, you can use Windows PowerShell:
Disable-NetAdapter *
Enable-NetAdapter *
Related topics
NIC Teaming: In this topic, we give you an overview of Network Interface Card (NIC) Teaming in Windows
Server 2016. NIC Teaming allows you to group between one and 32 physical Ethernet network adapters
into one or more software-based virtual network adapters. These virtual network adapters provide fast
performance and fault tolerance if a network adapter fails.
NIC Teaming MAC address use and management: When you configure a NIC Team with switch
independent mode and either address hash or dynamic load distribution, the team uses the media access
control (MAC) address of the primary NIC Team member on outbound traffic. The primary NIC Team
member is a network adapter that the operating system selects from the initial set of team members.
NIC Teaming settings: In this topic, we give you an overview of the NIC Team properties such as teaming
and load balancing modes. We also give you details about the Standby adapter setting and the Primary
team interface property. If you have at least two network adapters in a NIC Team, you do not need to
designate a Standby adapter for fault tolerance.
Performance Tuning Software Defined Networks
1/5/2022 • 3 minutes to read
Software Defined Networking (SDN) in Windows Server 2016 is made up of a combination of a Network
Controller, Hyper-V hosts, Software Load Balancer Gateways, and HNV Gateways. For tuning of each of these
components, refer to the following sections:
Network Controller
The Network Controller is a Windows Server role that must be enabled on virtual machines running on hosts
that are configured to use SDN and are managed by the Network Controller.
Three Network Controller enabled VMs are sufficient for high availability and maximum performance. Each VM
must be sized according to the guidelines provided in the SDN infrastructure virtual machine role requirements
section of the Plan a Software Defined Network Infrastructure topic.
SDN Quality of Service (QoS)
To ensure virtual machine traffic is prioritized effectively and fairly, it is recommended that you configure SDN
QoS on the workload virtual machines. For more information on configuring SDN QoS, refer to the Configure
QoS for a Tenant VM Network Adapter topic.
To determine which encapsulation protocol is in use on your virtual networks, query the Network Controller:
(Get-NetworkControllerVirtualNetworkConfiguration -connectionuri $uri).properties.networkvirtualizationprotocol
For best performance, if VXLAN is returned then you must make sure your physical network adapters support
VXLAN task offload. If NVGRE is returned, then your physical network adapters must support NVGRE task
offload.
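As a quick check, the following cmdlet lists the encapsulated packet task offload capabilities that each physical adapter reports:
# Shows VXLAN/NVGRE task offload support per adapter
Get-NetAdapterEncapsulatedPacketTaskOffload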
MTU
Encapsulation results in extra bytes being added to each packet. In order to avoid fragmentation of these
packets, the physical network must be configured to use jumbo frames. An MTU value of 9234 is the
recommended size for either VXLAN or NVGRE and must be configured on the physical switch for the physical
interfaces of the host ports (L2) and the router interfaces (L3) of the VLANs over which encapsulated packets
will be sent. This includes the Transit, HNV Provider and Management networks.
MTU on the Hyper-V host is configured through the network adapter, and the Network Controller Host Agent
running on the Hyper-V host will adjust for the encapsulation overhead automatically if supported by the
network adapter driver.
Once traffic egresses from the virtual network via a Gateway, the encapsulation is removed and the original
MTU as sent from the VM is used.
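For example, a sketch that inspects and sets the jumbo packet size on a host adapter; the adapter name is a placeholder, and the accepted values (such as 9014) vary by driver:
# Inspect the current jumbo packet setting on all adapters
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*JumboPacket"
# Enable jumbo frames on a specific physical adapter
Set-NetAdapterAdvancedProperty -Name "NIC1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014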
Single Root I/O Virtualization (SR-IOV)
SDN is implemented on the Hyper-V host using a forwarding switch extension in the virtual switch. For this
switch extension to process packets, SR-IOV must not be used on virtual network interfaces that are configured
for use with the network controller as it causes VM traffic to bypass the virtual switch.
SR-IOV can still be enabled on the virtual switch if desired and can be used by VM network adapters that are not
controlled by the network controller. These SR-IOV VMs can coexist on the same virtual switch as VMs that are
controlled by the network controller and that do not use SR-IOV.
If you are using 40 Gbit network adapters, it is recommended that you enable SR-IOV on the virtual switch for
the Software Load Balancing (SLB) Gateways to achieve maximum throughput. This is covered in more detail in
the Software Load Balancer Gateways section.
HNV Gateways
You can find information on tuning HNV Gateways for use with SDN in the HNV Gateways section.
This topic provides hardware specifications and configuration recommendations for servers that are running
Hyper-V and hosting Windows Server Gateway virtual machines, in addition to configuration parameters for
Windows Server Gateway virtual machines (VMs). To achieve the best performance from Windows Server
Gateway VMs, follow these guidelines. The following sections contain hardware and configuration
requirements for deploying Windows Server Gateway.
1. Hyper-V hardware recommendations
2. Hyper-V host configuration
3. Windows Server gateway VM configuration
Network Interface Cards (NICs): Two 10 GbE NICs. The gateway performance depends on the line rate. If the
line rate is less than 10 Gbps, the gateway tunnel throughput numbers go down by the same factor.
Ensure that the number of virtual processors that are assigned to a Windows Server Gateway VM does not
exceed the number of processors on the NUMA node. For example, if a NUMA node has 8 cores, the number of
virtual processors should be less than or equal to 8. For best performance, it should be 8. To find out the number
of NUMA nodes and the number of cores per NUMA node, run the following Windows PowerShell script on
each Hyper-V host:
$nodes = [object[]] $(Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_NumaNode)
$cores = ($nodes | Measure-Object NumberOfProcessorCores -Sum).Sum
$lps = ($nodes | Measure-Object NumberOfLogicalProcessors -Sum).Sum
# Report the host's NUMA topology
"NUMA nodes: $($nodes.Count); cores: $cores; logical processors: $lps"
IMPORTANT
Allocating virtual processors across NUMA nodes might have a negative performance impact on Windows Server
Gateway. Running multiple VMs, each of which has virtual processors from one NUMA node, likely provides better
aggregate performance than a single VM to which all virtual processors are assigned.
When each NUMA node has eight cores, we recommend one gateway VM with eight virtual processors and at
least 8 GB of RAM per Hyper-V host. In this case, one NUMA node is dedicated to the host machine.
NOTE
To run the following Windows PowerShell commands, you must be a member of the Administrators group.
Switch Embedded Teaming: When you create a vSwitch with multiple network adapters, Switch Embedded
Teaming is automatically enabled for those adapters:
New-VMSwitch -Name TeamedvSwitch -NetAdapterName "NIC 1","NIC 2"
Traditional teaming through LBFO is not supported with SDN in Windows Server 2016. Switch Embedded
Teaming allows you to use the same set of NICs for your virtual traffic and RDMA traffic, which was not
supported with LBFO-based NIC Teaming.
Interrupt Moderation on physical NICs: Use default settings. To check the configuration, you can use the
following Windows PowerShell command:
Get-NetAdapterAdvancedProperty
Receive Buffers size on physical NICs: You can verify whether the physical NICs support the configuration of
this parameter by running the command Get-NetAdapterAdvancedProperty. If they do not support this
parameter, the output from the command does not include the property "Receive Buffers." If NICs do support
this parameter, you can use the following Windows PowerShell command to set the Receive Buffers size:
Set-NetAdapterAdvancedProperty "NIC1" -DisplayName "Receive Buffers" -DisplayValue 3000
Send Buffers size on physical NICs: You can verify whether the physical NICs support the configuration of this
parameter by running the command Get-NetAdapterAdvancedProperty. If the NICs do not support this
parameter, the output from the command does not include the property "Send Buffers." If NICs do support this
parameter, you can use the following Windows PowerShell command to set the Send Buffers size:
Set-NetAdapterAdvancedProperty "NIC1" -DisplayName "Transmit Buffers" -DisplayValue 3000
Receive Side Scaling (RSS) on physical NICs: You can verify whether your physical NICs have RSS enabled by
running the Windows PowerShell command Get-NetAdapterRss. You can use the following Windows
PowerShell commands to enable and configure RSS on your network adapters (the MaxProcessors value below
is an example; choose one appropriate for your host):
Enable-NetAdapterRss "NIC1","NIC2"
Set-NetAdapterRss "NIC1","NIC2" -NumberOfReceiveQueues 16 -MaxProcessors 8
NOTE: If VMMQ or VMQ is enabled, RSS does not have to be enabled on the physical network adapters. You
can enable it on the host virtual network adapters.
Virtual Machine Queue (VMQ) on the NIC Team: You can enable VMQ on your SET team by using the following
Windows PowerShell command:
Enable-NetAdapterVmq
NOTE: This should be enabled only if the hardware does not support VMMQ. If supported, VMMQ should be
enabled for better performance.
NOTE
VMQ and vRSS come into play only when the load on the VM is high and the CPU is being utilized to the maximum;
only then will at least one processor core max out. VMQ and vRSS are then beneficial because they spread the
processing load across multiple cores. This is not applicable for IPsec traffic, because IPsec traffic is confined to a single core.
Memory: 8 GB
Number of virtual network adapters: 3 NICs with the following specific uses: one for Management, used by the
management operating system; one External, providing access to external networks; and one Internal,
providing access to internal networks only.
Receive Side Scaling (RSS): You can keep the default RSS settings for the Management NIC. The following
example configuration is for a VM that has 8 virtual processors. For the External and Internal NICs, you can
enable RSS with BaseProcessorNumber set to 0 and MaxProcessorNumber set to 8 by using the following
Windows PowerShell command:
Set-NetAdapterRss "Internal","External" -BaseProcessorNumber 0 -MaxProcessorNumber 8
Send side buffer: You can keep the default Send Side Buffer settings for the Management NIC. For both the
Internal and External NICs, you can configure the Send Side Buffer with 32 MB of RAM by using the following
Windows PowerShell command:
Set-NetAdapterAdvancedProperty "Internal","External" -DisplayName "Send Buffer Size" -DisplayValue "32MB"
Receive Side buffer: You can keep the default Receive Side Buffer settings for the Management NIC. For both
the Internal and External NICs, you can configure the Receive Side Buffer with 16 MB of RAM by using the
following Windows PowerShell command:
Set-NetAdapterAdvancedProperty "Internal","External" -DisplayName "Receive Buffer Size" -DisplayValue "16MB"
Forward Optimization: You can keep the default Forward Optimization settings for the Management NIC. For
both the Internal and External NICs, you can enable Forward Optimization by using the following Windows
PowerShell command:
Set-NetAdapterAdvancedProperty "Internal","External" -DisplayName "Forward Optimization" -DisplayValue "1"
SLB Gateway Performance Tuning in Software
Defined Networks
1/5/2022 • 2 minutes to read
Software load balancing is provided by a combination of a load balancer manager in the Network Controller
VMs, the Hyper-V Virtual Switch, and a set of Load Balancer Multiplexer (Mux) VMs.
No additional performance tuning is required to configure the Network Controller or the Hyper-V host for load
balancing beyond what is described in the Software Defined Networking section, unless you will be using SR-
IOV for the Muxes as described below.
SR-IOV must first be enabled on the virtual switch when you create it. Then, it must be enabled on the virtual
network adapter(s) of the SLB Mux VM that process the data traffic. In this example, SR-IOV is being enabled on
all adapters:
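A minimal sketch under these assumptions (the switch, adapter, and VM names are placeholders; any nonzero IovWeight enables SR-IOV on a virtual network adapter):
# Create the virtual switch with SR-IOV enabled (IOV cannot be turned on for an existing switch)
New-VMSwitch -Name "SLBSwitch" -NetAdapterName "NIC1" -EnableIov $true
# Enable SR-IOV on all virtual network adapters of the Mux VM
Get-VMNetworkAdapter -VMName "SLBMUX1" | Set-VMNetworkAdapter -IovWeight 50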
Storage Spaces Direct overview
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016
Storage Spaces Direct uses industry-standard servers with local-attached drives to create highly available,
highly scalable software-defined storage at a fraction of the cost of traditional SAN or NAS arrays. Its converged
or hyper-converged architecture radically simplifies procurement and deployment, while features such as
caching, storage tiers, and erasure coding, together with the latest hardware innovations such as RDMA
networking and NVMe drives, deliver unrivaled efficiency and performance.
Storage Spaces Direct is included in Windows Server 2019 Datacenter, Windows Server 2016 Datacenter, and
Windows Server Insider Preview Builds. It also provides the software-defined storage layer for Azure Stack HCI.
For other applications of Storage Spaces, such as shared SAS clusters and stand-alone servers, see Storage
Spaces overview. If you're looking for info about using Storage Spaces on a Windows 10 PC, see Storage Spaces
in Windows 10.
Understand
Overview (you are here)
Understand the cache
Fault tolerance and storage efficiency
Drive symmetry considerations
Understand and monitor storage resync
Understanding cluster and pool quorum
Cluster sets
Plan
Hardware requirements
Using the CSV in-memory read cache
Choose drives
Plan volumes
Using guest VM clusters
Disaster recovery
Deploy
Deploy Storage Spaces Direct
Create volumes
Nested resiliency
Configure quorum
Upgrade a Storage Spaces Direct cluster to Windows Server 2019
Understand and deploy persistent memory
Manage
Manage with Windows Admin Center
Add servers or drives
Taking a server offline for maintenance
Remove servers
Extend volumes
Delete volumes
Update drive firmware
Performance history
Delimit the allocation of volumes
Use Azure Monitor on a hyper-converged cluster
Deployment options
Storage Spaces Direct was designed for two distinct deployment options: converged and hyper-converged.
NOTE
Azure Stack HCI 20H2 supports only hyper-converged deployment, while Windows Server 2019 supports both
converged and hyper-converged deployment.
Converged
Storage and compute in separate clusters. The converged deployment option, also known as
'disaggregated', layers a Scale-out File Server (SoFS) atop Storage Spaces Direct to provide network-attached
storage over SMB3 file shares. This allows for scaling compute/workload independently from the storage cluster,
essential for larger-scale deployments such as Hyper-V IaaS (Infrastructure as a Service) for service providers
and enterprises.
Hyper-Converged
One cluster for compute and storage. The hyper-converged deployment option runs Hyper-V virtual
machines or SQL Server databases directly on the servers providing the storage, storing their files on the local
volumes. This eliminates the need to configure file server access and permissions, and reduces hardware costs
for small-to-medium business or remote office/branch office deployments. See Deploy Storage Spaces Direct.
How it works
Storage Spaces Direct is the evolution of Storage Spaces, first introduced in Windows Server 2012. It leverages
many of the features you know today in Windows Server, such as Failover Clustering, the Cluster Shared Volume
(CSV) file system, Server Message Block (SMB) 3, and of course Storage Spaces. It also introduces new
technology, most notably the Software Storage Bus.
Here's an overview of the Storage Spaces Direct stack:
Networking Hardware. Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over
Ethernet to communicate between servers. We strongly recommend 10+ GbE with remote-direct memory
access (RDMA), either iWARP or RoCE.
Storage Hardware. From 2 to 16 servers with local-attached SATA, SAS, or NVMe drives. Each server must
have at least 2 solid-state drives, and at least 4 additional drives. The SATA and SAS devices should be behind a
host-bus adapter (HBA) and SAS expander. We strongly recommend the meticulously engineered and
extensively validated platforms from our partners (coming soon).
Failover Clustering. The built-in clustering feature of Windows Server is used to connect the servers.
Software Storage Bus. The Software Storage Bus is new in Storage Spaces Direct. It spans the cluster and
establishes a software-defined storage fabric whereby all the servers can see all of each other's local drives. You
can think of it as replacing costly and restrictive Fibre Channel or Shared SAS cabling.
Storage Bus Layer Cache. The Software Storage Bus dynamically binds the fastest drives present (e.g. SSD) to
slower drives (e.g. HDDs) to provide server-side read/write caching that accelerates IO and boosts throughput.
Storage Pool. The collection of drives that form the basis of Storage Spaces is called the storage pool. It is
automatically created, and all eligible drives are automatically discovered and added to it. We strongly
recommend you use one pool per cluster, with the default settings. Read our Deep Dive into the Storage Pool to
learn more.
Storage Spaces. Storage Spaces provides fault tolerance to virtual "disks" using mirroring, erasure coding, or
both. You can think of it as distributed, software-defined RAID using the drives in the pool. In Storage Spaces
Direct, these virtual disks typically have resiliency to two simultaneous drive or server failures (e.g. 3-way
mirroring, with each data copy in a different server) though chassis and rack fault tolerance is also available.
Resilient File System (ReFS). ReFS is the premier filesystem purpose-built for virtualization. It includes
dramatic accelerations for .vhdx file operations such as creation, expansion, and checkpoint merging, and built-in
checksums to detect and correct bit errors. It also introduces real-time tiers that rotate data between so-called
"hot" and "cold" storage tiers in real-time based on usage.
Cluster Shared Volumes. The CSV file system unifies all the ReFS volumes into a single namespace accessible
through any server, so that to each server, every volume looks and acts like it's mounted locally.
Scale-Out File Server. This final layer is necessary in converged deployments only. It provides remote file
access using the SMB3 access protocol to clients, such as another cluster running Hyper-V, over the network,
effectively turning Storage Spaces Direct into network-attached storage (NAS).
Customer stories
There are over 10,000 clusters worldwide running Storage Spaces Direct. Organizations of all sizes, from small
businesses deploying just two nodes, to large enterprises and governments deploying hundreds of nodes,
depend on Storage Spaces Direct for their critical applications and infrastructure.
Visit Microsoft.com/HCI to read their stories.
Management tools
You can use a variety of tools to manage and monitor Storage Spaces Direct, such as Windows Admin Center and Windows PowerShell.
Get started
Try Storage Spaces Direct in Microsoft Azure, or download a 180-day-licensed evaluation copy of Windows
Server from Windows Server Evaluations.
Additional References
Fault tolerance and storage efficiency
Storage Replica
Storage at Microsoft blog
Storage Spaces Direct throughput with iWARP (TechNet blog)
What's New in Failover Clustering in Windows Server
Storage Quality of Service
Windows IT Pro Support
Advanced Data Deduplication settings
1/5/2022 • 12 minutes to read
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016, Azure Stack HCI, version
20H2
This document describes how to modify advanced Data Deduplication settings. For recommended workloads,
the default settings should be sufficient. The main reason to modify these settings is to improve Data
Deduplication's performance with other kinds of workloads.
The most common reason for changing when Data Deduplication jobs run is to ensure that jobs run during off
hours. The following step-by-step example shows how to modify the Data Deduplication schedule for a sunny
day scenario: a hyper-converged Hyper-V host that is idle on weekends and after 7:00 PM on weeknights. To
change the schedule, run the following PowerShell cmdlets in an Administrator context.
1. Disable the scheduled hourly Optimization jobs.
2. Remove the currently scheduled Garbage Collection and Integrity Scrubbing jobs.
3. Create a nightly Optimization job that runs at 7:00 PM with high priority and all the CPUs and memory
available on the system.
4. Create a weekly Garbage Collection job that runs on Saturday starting at 7:00 AM with high priority and
all the CPUs and memory available on the system.
5. Create a weekly Integrity Scrubbing job that runs on Sunday starting at 7 AM with high priority and all
the CPUs and memory available on the system.
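A sketch of those five steps in Windows PowerShell; the new schedule names, dates, and parameter values are illustrative assumptions:
# 1. Disable the scheduled hourly Optimization jobs
Set-DedupSchedule -Name BackgroundOptimization -Enabled $false
Set-DedupSchedule -Name PriorityOptimization -Enabled $false
# 2. Remove the currently scheduled Garbage Collection and Integrity Scrubbing jobs
Get-DedupSchedule -Type GarbageCollection | Remove-DedupSchedule
Get-DedupSchedule -Type Scrubbing | Remove-DedupSchedule
# 3. Nightly Optimization at 7:00 PM with high priority and all CPUs and memory
New-DedupSchedule -Name "NightlyOptimization" -Type Optimization -Priority High -Cores 100 -Memory 100 -Days Monday,Tuesday,Wednesday,Thursday,Friday -Start (Get-Date "2021-08-09 19:00")
# 4. Weekly Garbage Collection on Saturday at 7:00 AM
New-DedupSchedule -Name "WeeklyGarbageCollection" -Type GarbageCollection -Priority High -Cores 100 -Memory 100 -Days Saturday -Start (Get-Date "2021-08-14 07:00")
# 5. Weekly Integrity Scrubbing on Sunday at 7:00 AM
New-DedupSchedule -Name "WeeklyIntegrityScrubbing" -Type Scrubbing -Priority High -Cores 100 -Memory 100 -Days Sunday -Start (Get-Date "2021-08-15 07:00")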
The parameters available when you schedule a Data Deduplication job (for example, with New-DedupSchedule) are described below, with their accepted values and the reasons you might want to set each one.
Type: The type of the job that should be scheduled. Accepted values: Optimization, GarbageCollection, Scrubbing. This value is required because it is the type of job that you want to schedule. This value cannot be changed after the task has been scheduled.
Priority: The system priority of the scheduled job. Accepted values: High, Medium, Low. This value helps the system determine how to allocate CPU time. High will use more CPU time, low will use less.
Days: The days that the job is scheduled. Accepted values: an array of integers 0-6 representing the days of the week (0 = Sunday, 1 = Monday, 2 = Tuesday, 3 = Wednesday, 4 = Thursday, 5 = Friday, 6 = Saturday). Scheduled tasks have to run on at least one day.
Cores: The percentage of cores on the system that a job should use. Accepted values: integers 0-100 (a percentage). Use this to control what level of impact a job will have on the compute resources of the system.
DurationHours: The maximum number of hours a job should be allowed to run. Accepted values: positive integers. Use this to prevent a job from running into a workload's non-idle hours.
Enabled: Whether the job will run. Accepted values: true/false. Use this to disable a job without removing it.
Full: For scheduling a full Garbage Collection job. Accepted values: switch (true/false). By default, every fourth job is a full Garbage Collection job. With this switch, you can schedule full Garbage Collection to run more frequently.
InputOutputThrottle: Specifies the amount of input/output throttling applied to the job. Accepted values: integers 0-100 (a percentage). Throttling ensures that jobs don't interfere with other I/O-intensive processes.
Memory: The percentage of memory on the system that a job should use. Accepted values: integers 0-100 (a percentage). Use this to control what level of impact the job will have on the memory resources of the system.
Name: The name of the scheduled job. Accepted values: string. A job must have a uniquely identifiable name.
ReadOnly: Indicates that the scrubbing job processes and reports on corruptions that it finds, but does not run any repair actions. Accepted values: switch (true/false). Use this if you want to manually restore files that sit on bad sections of the disk.
Start: Specifies the time a job should start. Accepted values: System.DateTime. The date part of the System.DateTime provided to Start is irrelevant (as long as it's in the past), but the time part specifies when the job should start.
StopWhenSystemBusy: Specifies whether Data Deduplication should stop if the system is busy. Accepted values: switch (true/false). This switch gives you the ability to control the behavior of Data Deduplication; it is especially important when you want to run Data Deduplication while your workload is not idle.
The main reasons to modify the volume settings from the selected usage type are to improve read performance
for specific files (such as multimedia or other file types that are already compressed) or to fine-tune Data
Deduplication for better optimization for your specific workload. The following example shows how to modify
the Data Deduplication volume settings for a workload that most closely resembles a general purpose file server
workload, but uses large files that change frequently.
1. See the current volume settings for Cluster Shared Volume 1.
2. Enable OptimizePartialFiles on Cluster Shared Volume 1 so that the MinimumFileAge policy applies to
sections of the file rather than the whole file. This ensures that the majority of the file gets optimized even
though sections of the file change regularly.
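A sketch of those two steps, assuming the cluster volume path shown:
# 1. See the current Data Deduplication settings for Cluster Shared Volume 1
Get-DedupVolume -Volume "C:\ClusterStorage\Volume1" | Select-Object *
# 2. Apply the MinimumFileAge policy to sections of files rather than whole files
Set-DedupVolume -Volume "C:\ClusterStorage\Volume1" -OptimizePartialFiles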
The Data Deduplication volume settings are described below, with their accepted values and the reasons you might want to modify each one.
ChunkRedundancyThreshold: The number of times that a chunk is referenced before a chunk is duplicated into the hotspot section of the Chunk Store. The value of the hotspot section is that so-called "hot" chunks that are referenced frequently have multiple access paths to improve access time. Accepted values: positive integers. The main reason to modify this number is to increase the savings rate for volumes with high duplication. In general, the default value (100) is the recommended setting, and you shouldn't need to modify this.
ExcludeFileType: File types that are excluded from optimization. Accepted values: array of file extensions. Some file types, particularly multimedia or files that are already compressed, do not benefit very much from being optimized. This setting allows you to configure which types are excluded.
ExcludeFolder: Specifies folder paths that should not be considered for optimization. Accepted values: array of folder paths. If you want to improve performance or keep content in particular paths from being optimized, you can exclude certain paths on the volume from consideration for optimization.
InputOutputScale: Specifies the level of IO parallelization (IO queues) for Data Deduplication to use on a volume during a post-processing job. Accepted values: positive integers ranging 1-36. The main reason to modify this value is to decrease the impact on the performance of a high IO workload by restricting the number of IO queues that Data Deduplication is allowed to use on a volume. Note that modifying this setting from the default may cause Data Deduplication's post-processing jobs to run slowly.
MinimumFileAgeDays: Number of days after the file is created before the file is considered to be in-policy for optimization. Accepted values: positive integers (inclusive of zero). The Default and Hyper-V usage types set this value to 3 to maximize performance on hot or recently created files. You may want to modify this if you want Data Deduplication to be more aggressive or if you do not care about the extra latency associated with deduplication.
MinimumFileSize: Minimum file size that a file must have to be considered in-policy for optimization. Accepted values: positive integers (bytes) greater than 32 KB. The main reason to change this value is to exclude small files that may have limited optimization value, to conserve compute time.
NoCompressionFileType: File types whose chunks should not be compressed before going into the Chunk Store. Accepted values: array of file extensions. Some types of files, particularly multimedia files and already compressed file types, may not compress well. This setting allows compression to be turned off for those files, saving CPU resources.
OptimizeInUseFiles: When enabled, files that have active handles against them will be considered as in-policy for optimization. Accepted values: true/false. Enable this setting if your workload keeps files open for extended periods of time. If this setting is not enabled, a file would never get optimized if the workload has an open handle to it, even if it's only occasionally appending data at the end.
WlmMemoryOverPercentThreshold: This setting allows jobs to use more memory than Data Deduplication judges to actually be available. For example, a setting of 300 would mean that the job would have to use three times the assigned memory to get canceled. Accepted values: positive integers (a value of 300 means 300% or 3 times). Set this if you have another task that will stop if Data Deduplication takes more memory.
DeepGCInterval: This setting configures the interval at which regular Garbage Collection jobs become full Garbage Collection jobs. A setting of n would mean that every nth job was a full Garbage Collection job. Note that full Garbage Collection is always disabled (regardless of the registry value) for volumes with the Backup usage type; Start-DedupJob -Type GarbageCollection -Full may be used if full Garbage Collection is desired on a Backup volume. Accepted values: integers (-1 indicates disabled). See this frequently asked question.
Use the links in this topic to learn more about the concepts that were discussed in this tuning guide.