Dell EMC Unity: FAST Technology Overview
Abstract
This white paper is an introduction to the Dell EMC® FAST™ technology for the
Dell EMC Unity™ family of storage systems. It describes the background
concepts, major components, and implementation steps for Dell EMC FAST
technology, which includes FAST VP and FAST Cache. Guidelines and other
useful information, such as benefits, are included.
February 2021
H15086.3
Revisions
Date: February 2021
Description: Template, figures, and formatting updates. Added Unity XT supported FAST Cache configurations.
Acknowledgments
Author: Ryan Poulin
The information in this publication is provided “as is.” Dell Inc. makes no representations or warranties of any kind with respect to the information in this
publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any software described in this publication requires an applicable software license.
Copyright © 2016-2021 Dell Inc. or its subsidiaries. All Rights Reserved. Dell Technologies, Dell, EMC, Dell EMC and other trademarks are trademarks
of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.
Table of contents
Revisions
Acknowledgments
Table of contents
Executive summary
Audience
Terminology
1 FAST VP
1.1 Introduction
1.2 FAST VP licensing
1.3 Using FAST VP
1.3.1 Pools
1.3.2 FAST VP tiers
1.3.3 Pool tier RAID configurations
1.3.4 Tier considerations
1.4 FAST VP tiering policies
1.4.1 Tiering policy options
1.4.2 Comparing the tiering policies
1.5 The FAST VP algorithm
1.5.1 Statistics Collection
1.5.2 Analysis
1.5.3 Relocation
1.6 Managing FAST VP
1.6.1 UnityVSA
1.6.2 System level FAST VP management
1.6.3 Pool level FAST VP management
1.6.4 Storage resource level FAST VP management
1.6.5 Expanding a Pool
2 FAST Cache
2.1 Introduction
2.2 FAST Cache Licensing
2.3 FAST Cache Components
2.3.1 Policy Engine
2.3.2 Memory Map
2.4 Theory of Operation
Executive summary
Dell EMC FAST™ software allows the Dell EMC Unity™ product family to leverage high-performance Flash
drives. FAST software consists of Fully Automated Storage Tiering for Virtual Pools (FAST VP) and FAST
Cache. These two features work in tandem to use the storage within the system as efficiently as possible.
Each of these software features ensures that the most active data is serviced from Flash.
When FAST VP is enabled, the FAST VP software will measure and record performance statistics on each
slice within a Pool. Later, FAST VP analyzes this data and makes decisions to move data across multiple tiers
in a Pool to maximize Pool performance and efficiently use the space within the Pool. Slices that are highly
accessed are automatically moved to the higher tiers within a Pool, while slices with less activity move to
lower tiers within a Pool. Data already residing on Flash within a Pool will not promote to FAST Cache,
allowing more data within the system to take advantage of Flash.
FAST Cache is a high capacity secondary cache which logically sits below System Cache, and above the
drives in the system. FAST Cache complements the System Cache by using Flash drives to service highly
active data being requested from the drives. Frequently accessed data located on spinning drives in the Pool
is copied into FAST Cache for higher performance and lower response time. As FAST Cache copies data
onto Flash in 64 KB chunks, FAST Cache uses Flash efficiently. It is not uncommon for an entire active
dataset, occupying only a small capacity, to reside in FAST Cache. With large maximum capacities on the higher end
Unity models, FAST Cache can be configured to handle most of the I/O on a system.
Audience
This white paper is intended for Dell EMC customers, partners, and employees who are considering the use
of the FAST VP and FAST Cache features in the Dell EMC Unity family of storage systems. It assumes
familiarity with Dell EMC Unity and Dell EMC’s management software.
Terminology
Chunk: A piece of data located within a particular 64 KB region.
DRAM memory: DRAM-based memory used by the storage system to store data.
FAST Cache clean page: A 64 KB FAST Cache page which is in use and contains data that is an exact copy
of the corresponding page within the Pool for the storage resource.
FAST Cache copy: The process of copying a FAST Cache dirty page to the corresponding storage resource.
FAST Cache dirty page: A 64 KB FAST Cache page which is in use and contains a newer copy of the data
than the corresponding page within the storage resource. These pages will be synchronized when the FAST
Cache page is cleaned.
FAST Cache flush: The process of freeing a FAST Cache page for a new promotion by first copying the
contents of a FAST Cache page to its storage resource, then freeing the page.
FAST Cache hit: The instance when the data being requested or updated is contained within FAST Cache.
FAST Cache miss: The instance when the data being requested or updated is not contained within FAST
Cache.
FAST Cache page: A single allocation unit (page) located within FAST Cache used to store data. This page is
64 KB in size.
FAST Cache promotion: The process of copying data from a storage resource into FAST Cache.
Flash drive (Solid-State Drive - SSD): A Flash-based storage device used to store data.
Hard Disk Drive (HDD): A storage device based on spinning platters used to store data.
Locality of reference: Used when describing a data access pattern where adjacent blocks of a dataset are
accessed frequently.
Logical Block Address (LBA): The addressing scheme used when accessing a particular block of data
within a storage resource.
Memory map: A FAST Cache component which tracks the current contents and locations of data within
FAST Cache. A copy of the memory map is stored within system memory and on the drives.
Pool: A set of drives that provide specific storage characteristics for the resources that use them, such as
LUNs, VMware Datastores, and File Systems.
Rebalance: A FAST VP process automatically started when unused drives are used to increase the capacity
of a tier of storage within a Pool. The process relocates data to correct the imbalance of data placement across
the tier.
Slice: A 256 MB unit of capacity which is allocated to storage resources to provide space to store data.
Slice Relocation: A physical movement of a 256 MB slice of data within a tier or across tiers within a Pool.
System Cache (DRAM Cache): Dell EMC Unity software component which leverages DRAM memory to
improve host read and write performance.
System Cache hit: The instance when a host I/O can be serviced with the contents of System Cache.
System Cache miss: The instance when a host I/O cannot be serviced with the contents of System Cache.
Temperature: The weighted average of a 256 MB Slice’s activity level over time.
Tier: A label used to describe the various categories of media used within a Pool. In a physical system, the
tier directly relates to the drive types used within the Pool. The available tiers are the Extreme Performance
Tier (Flash drives), the Performance Tier (SAS drives), and the Capacity Tier (NL-SAS drives). For UnityVSA,
the storage tier of a Virtual Drive must be entered manually and should be chosen to match the underlying
characteristics of the Virtual Drive.
1 FAST VP
1.1 Introduction
When reviewing the access patterns for data within a system, most access patterns show a basic trend.
Typically, the data is most heavily accessed near the time it was created, and the activity level decreases as
the data ages. This trending is also referred to as the life cycle of the data. Dell EMC Unity Fully Automated
Storage Tiering for Virtual Pools (FAST VP) monitors the data access patterns within Pools on the system,
and dynamically matches the performance requirements of the data with drives that provide that level of
performance. FAST VP separates drives into three categories, called tiers.
FAST VP helps to reduce the Total Cost of Ownership (TCO) by maintaining performance while efficiently
using the configuration of a Pool. Instead of creating a Pool with one type of drive, mixing Flash, SAS, and
NL-SAS drives can help reduce the cost of a configuration by reducing drive counts and leveraging larger
capacity drives. Data requiring the highest level of performance is tiered to Flash, while data with less activity
resides on SAS or NL-SAS drives.
Dell EMC Unity has a unified approach to creating storage resources on the system. Block LUNs, File
Systems, VMware Datastores, and VMware Virtual Volumes can all exist within a single Pool and can all
benefit from using FAST VP. In system configurations with minimal amounts of Flash, FAST VP will efficiently
use the Flash drives for active data, regardless of the resource type. For efficiency, FAST VP also leverages
low cost spinning drives for less active data. The access patterns for all data within a Pool are compared
against each other, and the most active data is placed on the highest performing drives while adhering to the
storage resource’s tiering policy. Tiering policies are explained later in this document.
1.3.1 Pools
A Pool is a set of drives on which storage resources are created. Pools can be created on all Dell EMC Unity
systems, including a Unity All Flash system, Unity Hybrid system, or the UnityVSA. In the Unity All Flash and
Hybrid systems, Pools consist of physical drives found within the system. In UnityVSA, Pools are
created on Virtual Drives, which have been provided from the VMware ESXi host the UnityVSA is deployed
on. Pools can contain a few drives or contain all drives within a system. FAST VP is the software feature
which helps to efficiently use the resources found within a Pool. A Pool can either contain a single drive type
or contain a mix of drive types.
A Pool which only contains a single drive type is called a single tier Pool, also known as a Homogenous Pool.
A single tier Pool can contain either all Flash drives, all SAS drives, or all NL-SAS drives. Single tier Pools
provide predictable performance, as all drives within the Pool are of the same type. All data contained within a
single tier Pool has the same performance potential, regardless of the data’s age. Single tier Pools are best
used when data access is uniform across large address ranges.
A Pool which contains a mixture of drive types is called a multi-tiered Pool, also known as a Heterogeneous
Pool. A multi-tiered Pool can contain any combination of Flash, SAS, and NL-SAS drives. For example, one
Pool may contain all three drive types, another only Flash and NL-SAS, and a third SAS
and NL-SAS. In a multi-tiered Pool, data from storage resources are spread across the tiers by FAST VP in
256 MB slices. FAST VP monitors the usage of each tier and relocates data slices within the Pool based on
data access levels and capacity.
FAST VP differentiates each of these tiers by drive type, and not rotational speed. Dell EMC suggests not
mixing drives with different rotational speeds within a tier of a Pool. For example, do not mix 10K RPM and
15K RPM SAS drives within the same Pool; it is suggested to allocate these drives to different Pools. It is
acceptable, however, to mix 10K RPM SAS drives in a Pool with 7.2K RPM NL-SAS drives, as they are different
drive types and will exist in different tiers.
FAST VP leverages all tiers within a Pool, as each tier provides unique advantages regarding performance
and cost.
When deciding on which RAID Configuration to use, consider the performance, capacity, and protection levels
each configuration provides. RAID 1/0 is suggested for applications with large amounts of random writes, as
there is no parity write penalty in this RAID type. RAID 5 is preferred when cost and performance are a
concern. RAID 6 provides the maximum level of protection against drive faults of all the supported RAID
types. When considering a RAID configuration which includes many drives (12+1, 12+2, 14+2), consider the
tradeoffs that come with larger drive counts, such as a larger fault domain and potentially longer rebuild times.
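To make the capacity and write-penalty tradeoffs concrete, the short sketch below works through the generic RAID arithmetic for the widths mentioned above. These are standard RAID figures used for illustration only, not Dell EMC sizing guidance, and the drive counts are just the examples from this section.

# Illustrative only: usable-capacity fraction and classic random-write penalty
# for the RAID widths discussed above. Generic RAID arithmetic, not Dell EMC
# sizing guidance.

RAID_CONFIGS = {
    "RAID 1/0 (4+4)": {"data": 4, "protection": 4, "write_penalty": 2},
    "RAID 5 (8+1)":   {"data": 8, "protection": 1, "write_penalty": 4},
    "RAID 5 (12+1)":  {"data": 12, "protection": 1, "write_penalty": 4},
    "RAID 6 (12+2)":  {"data": 12, "protection": 2, "write_penalty": 6},
    "RAID 6 (14+2)":  {"data": 14, "protection": 2, "write_penalty": 6},
}

def usable_fraction(data, protection):
    """Fraction of raw drive capacity left for user data."""
    return data / (data + protection)

for name, cfg in RAID_CONFIGS.items():
    frac = usable_fraction(cfg["data"], cfg["protection"])
    print(f"{name:15s} usable ~{frac:5.1%}  random-write penalty x{cfg['write_penalty']}")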
When creating a Pool, it is suggested to add Flash drives to the configuration. Even a small amount of Flash
capacity added to a Pool can be leveraged by FAST VP to increase the overall performance of the system.
For the best return on investment, use Flash drives to store hot data on storage resources requiring fast
response times and high IOPs. FAST VP will optimize the entire Pool’s resources and automatically relocate
less active data to other tiers as needed.
The Extreme Performance Tier can be created using SAS Flash 2, SAS Flash 3 or SAS Flash 4 Flash drives.
While it is possible to mix different size SAS Flash drives within the same Pool, it is not recommended. In the
Unity OE version 4.1 and later, SAS Flash 3 drives can be used in hybrid (mixed drive type) pools. SAS Flash
4 drives within a Unity system are only supported in an All Flash Pool.
Comparison of the Extreme Performance Tier, Performance Tier, and Capacity Tier

Extreme Performance (Flash)
• Response time: < 10 milliseconds
• Strengths: High IOPs/GB and low latency; extremely fast access for reads; handles multiple sequential workloads better than SAS or NL-SAS
• Observations: Writes perform slower than reads

Performance (SAS)
• Response time: 10 – 50 milliseconds
• Strengths: High bandwidth with contending workloads; sequential reads leverage prefetching of data; sequential writes leverage system optimizations favoring drives; read/write mixes provide predictable performance

Capacity (NL-SAS)
• Response time: ≤ 100 milliseconds
• Strengths: Leverages System Cache for sequential and large block I/O; large I/O is serviced efficiently
• Observations: Low IOPs/GB
The following are the available Tiering Policies for storage resources created on Pools in Dell EMC Unity.
All Tiering Policies can be set at time of resource creation or changed later.
If the highest tier in the Pool does not have any space, space from the next tier with available space will be
taken. Slices are prioritized in the following manner:
• Existing slices on the highest tier of a Pool have priority over new slices being consumed for storage
resources. New slice allocations do not immediately force slices out of the highest tier, regardless of
the tiering policy set on the resource. This priority is revisited during the next FAST VP relocation
window.
• When multiple resources have the Highest Available Tier policy assigned, and there is not enough
space within the top tier to store all data slices, they compete for top tier placement based on each
slice’s temperature. The temperature is based on the activity level of the slice. Slices for resources
with Highest Available Tier assigned always take top tier priority over other tiering policies.
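A minimal sketch of the priority rules just described is shown below. It is an assumption-level model, not Dell EMC's implementation: slices whose resource uses the Highest Available Tier policy win top-tier placement first, and remaining space is filled in temperature order. The names and capacity value are hypothetical.

# Sketch only: ordering slices for top-tier placement by policy, then temperature.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    policy: str         # "highest", "auto", "lowest", or "none"
    temperature: float  # weighted activity level of the slice

def top_tier_order(slices):
    """Order slices for the highest tier: policy takes precedence, then temperature."""
    return sorted(slices, key=lambda s: (s.policy != "highest", -s.temperature))

slices = [
    Slice("lun1_slice7", "auto", temperature=950.0),
    Slice("fs2_slice3", "highest", temperature=120.0),
    Slice("lun1_slice2", "auto", temperature=400.0),
]

top_tier_capacity_slices = 2   # pretend the Flash tier only holds two slices
placed = top_tier_order(slices)[:top_tier_capacity_slices]
print([s.name for s in placed])   # fs2_slice3 placed first despite its lower temperature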
1.4.1.2 Auto-Tier
A storage resource tends to contain regions which have higher activity levels than others. To efficiently use
the tiers within a Pool, FAST VP will move the “hot” slices to the higher tiers, while placing less active data on
lower tiers of the Pool. The Auto-Tier policy automatically places slices of data for the storage resource on the
various tiers of a Pool based on the data’s usage level. Although a slice for a storage resource with Auto-Tier
assigned may be more active than a resource with Highest Available Tier assigned, the resource with Highest
Available Tier takes precedence. When allocating new slices to a storage resource, slices are taken from all
tiers based on the usage of each tier. If a large portion of a Pool’s free space resides in the Capacity Tier,
slices will be allocated from the Capacity Tier.
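The idea that new Auto-Tier slice allocations are drawn from tiers in proportion to where the free capacity lives can be illustrated with the sketch below. This is a hedged simplification; the real allocator in Unity is more involved, and the tier names and capacities here are made up for the example.

# Hedged illustration: new slices land in tiers in proportion to free capacity.
import random

free_gb = {"extreme_performance": 200, "performance": 800, "capacity": 3000}

def pick_tier_for_new_slice(free_by_tier):
    """Choose a tier with probability proportional to its free capacity."""
    tiers = list(free_by_tier)
    weights = [free_by_tier[t] for t in tiers]
    return random.choices(tiers, weights=weights, k=1)[0]

random.seed(0)
counts = {t: 0 for t in free_gb}
for _ in range(1000):
    counts[pick_tier_for_new_slice(free_gb)] += 1
print(counts)   # most new slices land in the Capacity tier, which holds most of the free space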
FAST VP also attempts to free space within each tier to allow for new slice allocations or slice promotions.
Leaving 10% free capacity in each tier allows FAST VP to be more efficient when tiering slices to higher tiers.
If needed, during a relocation window, the least recently used slices within the tier will be tiered to lower tiers to
reach the 10% free capacity target.
The activity of a slice is determined by tracking the amount of I/O that is sent to each slice, which includes
both reads and writes. FAST VP keeps these statistics and “weighs” the I/O based on the time of arrival.
Recent activity on a slice receives a higher weight, and the weight deteriorates over time. The slice statistics
are collected continuously on the system for all storage resources.
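The weighting behavior described above can be sketched as an exponentially decaying sum of per-interval I/O counts: recent activity counts fully, and older activity fades. The decay factor below is an illustrative assumption, not a documented Unity parameter.

# Sketch of slice "temperature" as a decaying weight over interval I/O counts.
def update_temperature(previous_temp, io_count_this_interval, decay=0.5):
    """Blend the latest interval's I/O with the decayed history."""
    return decay * previous_temp + io_count_this_interval

temp = 0.0
for ios in [500, 450, 20, 10, 5]:   # a slice that cools off after two busy intervals
    temp = update_temperature(temp, ios)
    print(round(temp, 1))           # the temperature rises, then decays as activity drops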
1.5.2 Analysis
Once an hour, FAST VP analyzes the data collected and ranks each slice, based on each slice’s temperature.
This list is ordered from “Hottest” to “Coldest”, and lists are created for each Pool within the system. Based on
this list, a relocation candidate list is compiled with information regarding which slices should be moved up,
moved down, or moved within a tier in a Pool. This candidate list also considers each storage resource’s
tiering policy, to ensure each policy is being followed. The next time a relocation is started, either by the
schedule or manually, the latest candidate list is used. You can influence the candidate list by changing the
tiering policies on storage resources, as the tiering policy takes precedence over the activity levels of slices.
1.5.3 Relocation
When a relocation window starts, either by the schedule or manually, FAST VP begins promoting or demoting
slices according to the candidate list created in the analysis phase. The hottest slices are moved to the higher
tiers, and colder slices are moved down to the lower tiers. During the relocation window, priority is given to
slices moving to the higher tiers, as they will benefit most from the relocation. For slices relocating to a lower
tier, relocations only occur when slices being promoted need the space they occupy. By leveraging the space
available, FAST VP ensures that top tier drives are used.
Storage resources with a tiering policy of Lowest Available Tier may also relocate during a FAST VP
relocation window. If slices for these resources do not already reside on the lowest tier, and space becomes
available on the lowest tier, relocations may occur. Only the “Coldest” slices for storage resources with the
Lowest Available Tier policy will be stored on the lowest tier if not enough capacity exists for all slices.
Another factor for relocations is tier capacity. FAST VP will also review the capacity of a tier to make
relocation decisions. If a tier has less than 10% of free space, “Cold” slices will be tiered down to free enough
space to reach the 10% mark. Leaving free capacity in each tier allows storage resources to allocate slices
efficiently based on their tiering policies. The free capacity is also used when slices are relocated into higher
tiers when the relocation window starts. Using free space within a tier is more efficient on the system than
relocating slices from a tier before relocating slices into a tier.
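A simplified sketch of the 10% free-space behavior described above follows: when a tier drops below 10% free, the coldest slices are demoted until the target is met. The slice size matches the 256 MB slices described earlier; the tier capacity and temperatures are placeholder values.

# Sketch only: demote the coldest slices until the tier reaches ~10% free.
SLICE_GB = 0.25          # 256 MB slices
FREE_TARGET = 0.10       # keep roughly 10% of each tier free

def slices_to_demote(tier_capacity_gb, used_gb, slice_temps):
    """Return the coldest slices to demote so the tier reaches the free-space target."""
    excess_gb = used_gb - tier_capacity_gb * (1 - FREE_TARGET)
    if excess_gb <= 0:
        return []
    count = int(-(-excess_gb // SLICE_GB))           # ceiling division
    return sorted(slice_temps, key=slice_temps.get)[:count]

temps = {"s1": 900, "s2": 15, "s3": 3, "s4": 250}
print(slices_to_demote(tier_capacity_gb=1.0, used_gb=1.0, slice_temps=temps))
# -> ['s3'] : demoting one cold 256 MB slice restores the 10% free target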
Figure 1 shows an illustration of how FAST VP can improve the performance of the Pool by relocating slices.
On the left is a Storage Pool before FAST VP relocations have occurred. Notice that slices across each of the
tiers have different levels of activity. After analyzing the activity on these slices, FAST VP will determine the
best placement for the data within the Pool. The right side of Figure 1 shows the Pool after relocations have
occurred. Notice that activity levels have been corrected, and slices have been placed on the appropriate
tiers.
1.6.1 UnityVSA
As UnityVSA is a virtual Dell EMC Unity system, no physical drives exist. Pools on a UnityVSA system are
created using Virtual Drives, which have been provisioned to the system from VMware. As there is a layer of
abstraction between the UnityVSA system and the storage providing capacity for the Virtual Drives, FAST VP
cannot automatically differentiate and assign the proper Storage Tier for each Virtual Drive. You must
manually assign the Storage Tier to each Virtual Drive before they can be used within a Pool. Correctly
matching the Storage Tier to the type of technology the Virtual Drive is created on is a crucial step, as FAST
VP will use this information when tiering slices within the Pool. Typical Dell EMC tier classification denotes
Flash drives to be “Extreme Performance”, SAS drives to be “Performance”, and NL-SAS drives to be
“Capacity”. Dell Technologies recommends adhering to this schema to ensure FAST VP relocates data to the
appropriate Virtual Drives.
Figure 2 below shows the Tier Assignment step, which only exists in the Create Pool Wizard within the
UnityVSA. In this step, you must specify the Storage Tier for each Virtual Drive that will be used in the Pool.
To do so, click the pencil icon located in the Storage Tier column, and select the appropriate Storage Tier
label. Once the Storage Tier is specified for a Virtual Drive, and the Virtual Drive is added to a Pool, the
Storage Tier cannot be changed. If many Virtual Drives exist in the UnityVSA, but only a subset of them will
be used at this time, you will not need to assign a Storage Tier to Virtual Drives that will be left unused.
To differentiate between the Virtual Drives on the system, match the SCSI ID of the Virtual Drive in Unisphere
to the SCSI ID of the Hard Drives in VMware. For more information about Dell EMC UnityVSA, please see the
Dell EMC Unity: UnityVSA white paper on Dell EMC Online Support.
On this page, you can customize the FAST VP settings. Near the top of the page is the Data Relocation
Status. In Figure 3 above, the status is Active, which means FAST VP is active on the system. You can
pause all automatic and manual data relocations on the system by clicking the Pause button at any time.
While the status is Paused, the button will say Resume. Clicking Resume restarts all paused relocations on the
system.
Below the Data relocation status is the Data relocation rate. By clicking the pencil icon, you can change the
relocation rate to either High, Medium, or Low. High uses the most system resources to relocate data, while
Low uses the least. The default relocation rate is Medium.
In the middle of the FAST VP Settings page, shown in Figure 3, Schedule data relocation displays whether
or not the system is scheduled for relocations to occur. If this shows “No”, all data relocations must be
manually started by the user. The Relocation Window is shown next. This displays which days of the week
relocations are scheduled for, and the Start and End times for the relocation window. By default, relocations
are scheduled daily, between 5 PM local time on the system and 1 AM of the next day. Clicking the Modify
data relocation schedule link allows you to customize the relocations schedule further.
Lastly on the FAST VP Setting page is the Amount of scheduled data to relocate and the Estimated
scheduled relocation time. This gives you an idea how much data needs to move on the system based on
the FAST VP algorithm, and the amount of time it will take based on the relocation rate. In this example, a
large amount of data needs to move up, move down, and move within a Pool’s tier.
Shown in Figure 4 is the window that appears after selecting the Modify data relocation schedule link
shown in Figure 3. In this window, you can check or clear the Schedule data relocations checkbox to enable
or disable FAST VP from running on a schedule. Below this are checkboxes for each day of the week. To
have FAST VP relocations run on a particular day, ensure that day’s corresponding checkbox is checked. By
default, FAST VP is scheduled to run on every day of the week. You have the ability to customize which days
relocations run on and which days to avoid.
Also shown are the Start time and End time for FAST VP relocations. To customize the start or end time,
change the values within the boxes and click OK. FAST VP will attempt to complete all relocations within this
time period, and if this is not possible, any incomplete relocations are stopped. The next time
FAST VP runs, it will use a new relocation priority list.
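The same schedule settings can also be driven programmatically; the conclusion of this paper notes that FAST VP can be managed through Unisphere CLI and REST API. The sketch below shows the general shape of such a call only. The endpoint path, instance id, and attribute names are placeholders, not the documented Unity REST schema; consult the Unity REST API reference for the real resource and field names.

# Hedged sketch: modifying the FAST VP relocation window over REST.
# All endpoint and field names below are illustrative placeholders.
import requests

UNITY = "https://unity.example.local"        # hypothetical system address
SESSION = requests.Session()
SESSION.auth = ("admin", "password")         # placeholder credentials
SESSION.headers.update({"X-EMC-REST-CLIENT": "true", "Content-Type": "application/json"})
SESSION.verify = False                       # lab-only; use proper certificates in production

def set_relocation_window(days, start_hour, end_hour):
    """POST a modified relocation window (resource path and fields are illustrative)."""
    body = {"scheduleDays": days, "scheduleStartHour": start_hour, "scheduleEndHour": end_hour}
    resp = SESSION.post(f"{UNITY}/api/instances/fastVP/0/action/modify", json=body, timeout=30)
    resp.raise_for_status()

# Example: weekday-only relocations from 22:00 to 06:00
# set_relocation_window(["Mon", "Tue", "Wed", "Thu", "Fri"], 22, 6)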
Also shown in this tab is Relocation information specific to this Pool. In Figure 5, you can see that the
relocation Status is Active. This means relocations are running on this Pool.
The estimated Time to relocate the data on the Pool, displayed in hours and minutes, is also displayed.
From this page, you can also see the Last start time and Last end time for relocations within this Pool. You
will also notice a button below this information that says either Start Relocation or Stop Relocation. If
relocations are occurring on the Pool, you can stop them at any time by clicking the Stop Relocation button.
If relocations are not running, the button says Start Relocation, and you can select it to start a
relocation on the Pool.
In the bottom of the FAST VP tab is the Pool’s Tier information. As shown in Figure 5, each Tier configured
within this Pool is displayed, along with how many drives and which RAID type is used for the Pool. Also, in
the chart is information regarding how much data per tier needs to Move Up, Move Down, and Rebalance
within the tier. Each of the totals is displayed in GB. In this example, a large amount of data is being
relocated across tiers. Lastly, the chart displays the Total Size and Free Size, both in TB, for each tier within
the Pool. From this chart, you can see the configuration, the scheduled relocations, and the total and free
capacity for the Pool.
When available, the Start Relocation button brings you to the Start Data Relocation window, shown in
Figure 6. From this window you can choose the Data relocation rate, either High, Medium, or Low, and
choose an End time for relocations. By default, the end time is 2 hours from the local time within the system.
After changing settings within the window, click OK to start relocations manually on the Pool.
You can also change FAST VP settings on a storage resource after it has been created. Figure 8 shows an
example of the FAST VP tab within the LUN Properties window. From here, you can change the FAST VP
Tiering policy for the resource at any time by selecting another one from the drop-down list. Also shown are
the Tiers within the Pool, and the resource's Data Distribution across those tiers. In this example, 26% of the
LUN’s data resides on the Extreme Performance Tier, 50% resides on the Performance Tier, and 24%
resides on the Capacity Tier.
Figure 10 below shows the Pool Properties Window after expanding a Pool. The Pool was expanded by
adding drives to the Extreme Performance Tier. Once a tier is expanded, a rebalance is started to spread
the existing data within the tier across all drives within the tier. In the Rebalance column, notice a double
arrow is displayed with the amount of data to rebalance.
2 FAST Cache
2.1 Introduction
The Dell EMC Unity FAST software includes FAST Cache and FAST VP. FAST Cache uses flash drives as
an additional cache layer within the system to temporarily store highly accessed data. For data not already
located on Flash, the system copies the highly accessed 64 KB chunks of data from their current locations on
spinning drives to FAST Cache. Repeated access to this data will benefit by taking advantage of the high
IOPs and low response times Flash drives provide. As FAST Cache is a global resource on the system, all
data can benefit from this caching layer and the overall performance of the system can increase. When a
piece of data located on a spinning drive is marked for promotion into FAST Cache and there are currently no
free FAST Cache pages, FAST Cache will free a page by removing the Least Recently Used (LRU) chunk of
data. If the data being removed from FAST Cache is dirty, meaning the data has not been synchronized with
the location on the Pool, the data is first copied back to its location on the drives before being removed from
FAST Cache.
A FAST Cache promotion occurs when the Policy Engine determines the performance for a chunk of data
would benefit by residing in FAST Cache. While the Policy Engine is monitoring the I/O to FAST Cache
enabled Pools, data access patterns are reviewed. When a chunk of data is accessed three times within a
certain period of time, the eligibility of the block is checked, and the block is marked for promotion into FAST
Cache. If there are free FAST Cache blocks available, the data is copied into FAST Cache. If FAST Cache is
full, the access pattern of the data being considered for promotion is compared to the access pattern of data
in FAST Cache. If the access pattern of the data considered for promotion exceeds that of a chunk of data in
FAST Cache, the least accessed data in FAST Cache is flushed out of FAST Cache and the new promotion
replaces it. The Memory Map is then updated to include all changes to the contents of FAST Cache. The next
time the promoted block is accessed, assuming System Cache could not complete the I/O, the FAST Cache
Memory Map will be checked and the I/O will be serviced from FAST Cache. While promoted into FAST
Cache, the chunk of data has the potential for higher overall throughput and lower Response Time. When a
large portion of a dataset resides in FAST Cache, applications can also benefit with the increased
performance FAST Cache can provide.
There are multiple circumstances in which the access pattern of an application would be expected to cause a
FAST Cache promotion, but no promotion occurs. In some cases, the efficiencies of System Cache are better
suited to handle the I/O, while at other times the configuration or location of the data prevents the promotion.
Some of these circumstances are listed below; a brief sketch of the promotion and eligibility logic follows the list.
• FAST Cache promotions are avoided when I/O sizes exceed the RAID configuration's stripe length. For
example, in an 8+1 RAID 5 configuration, I/Os above 512 KB (8 x 64 KB) in size will not cause
promotions.
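The following sketch models the promotion behavior described in this section under stated assumptions: a 64 KB chunk becomes a promotion candidate after three accesses, I/O larger than the stripe width is not counted toward promotion, and a full cache evicts its least recently used chunk. It is an illustrative model, not Unity's Policy Engine code, and the chunk identifiers are hypothetical.

# Minimal Policy Engine sketch: 3-hit promotion, stripe-length check, LRU eviction.
from collections import OrderedDict

CHUNK_KB = 64
STRIPE_WIDTH_KB = 8 * CHUNK_KB      # e.g., 8+1 RAID 5 -> 512 KB
PROMOTE_AFTER_HITS = 3

class FastCacheSketch:
    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.pages = OrderedDict()   # chunk id -> data, ordered by recency of use
        self.hit_counts = {}

    def access(self, chunk_id, io_size_kb):
        if chunk_id in self.pages:                    # FAST Cache hit
            self.pages.move_to_end(chunk_id)
            return "hit"
        if io_size_kb > STRIPE_WIDTH_KB:              # large I/O does not count toward promotion
            return "miss (too large to count toward promotion)"
        self.hit_counts[chunk_id] = self.hit_counts.get(chunk_id, 0) + 1
        if self.hit_counts[chunk_id] < PROMOTE_AFTER_HITS:
            return "miss"
        if len(self.pages) >= self.capacity:          # full: evict the least recently used chunk
            self.pages.popitem(last=False)
        self.pages[chunk_id] = "promoted"
        return "promoted"

cache = FastCacheSketch(capacity_chunks=2)
for _ in range(3):
    print(cache.access("lun1:0x1000", io_size_kb=8))  # miss, miss, promoted
print(cache.access("lun1:0x1000", io_size_kb=8))      # hit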
If System Cache cannot complete the I/O, a System Cache miss occurs. If FAST Cache is enabled, the
Memory Map is reviewed to see if the contents of FAST Cache can complete the I/O. If the data resides in
FAST Cache, the I/O is redirected to the location within FAST Cache where the data resides, the data is copied into
System Cache, and the read request is completed.
When the data being requested is not currently located in FAST Cache, the data must be requested from the
drive the data resides on. The data is then copied from the drive to System Cache, and the read operation
is completed to the requestor of the information. If the data has been accessed frequently, the Policy Engine
will cause a promotion to occur and the data will be copied into FAST Cache. Subsequent requests for this
information will either come from System Cache or FAST Cache.
Read operation.
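A short read-path sketch of the flow described above follows. It is an illustrative model, not the storage processor's actual code path: System Cache is checked first, then the FAST Cache memory map, and only then the Pool drives, with the result staged into System Cache. The LBAs and data values are hypothetical.

# Read-path sketch: System Cache -> FAST Cache memory map -> Pool drives.
def read_block(lba, system_cache, fast_cache_map, drives):
    if lba in system_cache:                       # System Cache hit
        return system_cache[lba]
    if lba in fast_cache_map:                     # FAST Cache hit via the memory map
        data = fast_cache_map[lba]
    else:                                         # miss everywhere: read from the Pool drives
        data = drives[lba]
    system_cache[lba] = data                      # stage the data into System Cache
    return data

drives = {100: "cold data", 200: "warm data"}
system_cache, fast_cache_map = {}, {200: "warm data"}
print(read_block(200, system_cache, fast_cache_map, drives))  # served via FAST Cache
print(read_block(100, system_cache, fast_cache_map, drives))  # served from the drives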
If Write Cache on the system becomes disabled, writes to the system will need to be saved on the Pool
before the write operation can be acknowledged. During this operation, the data is temporarily held in System
Cache, while the data is saved to the Pool. If FAST Cache is enabled, the Memory Map is reviewed to see if a
copy of the data resides in FAST Cache. If so, the data in FAST Cache is updated and the write operation is
acknowledged. If the data does not reside in FAST Cache, a write to the Pool occurs and the write is
acknowledged. This write to the Pool may cause a FAST Cache promotion to occur.
Write operation.
In the instance when System Cache is proactively cleaning cache pages or flushing cache pages, outlined in
Figure 13 below, updates to FAST Cache may be seen. During this operation, the FAST Cache Memory Map
is reviewed to determine if the data being overwritten resides in FAST Cache. If the data is located within
FAST Cache, the data being cleaned from System Cache is synchronized with the contents of FAST Cache.
As FAST Cache now contains data which is newer than what resides on the Pool, the data is considered dirty.
This is also known as a FAST Cache dirty page. This data will be synchronized with the data on the Pool
when a FAST Cache page cleaning operation occurs. If FAST Cache does not contain a copy of the data
being updated, the data is written directly to the Pool drives. This operation may cause a FAST Cache
Promotion to occur.
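The dirty-page behavior just described can be sketched as below, again as an assumption-level model rather than Unity's implementation: when System Cache destages a page whose copy already sits in FAST Cache, the FAST Cache copy is updated and marked dirty, and a later cleaning pass synchronizes it back to the Pool. The LBAs and values are hypothetical.

# Sketch of destage and dirty-page cleaning between System Cache, FAST Cache, and the Pool.
def destage_from_system_cache(lba, data, fast_cache, dirty, drives):
    if lba in fast_cache:
        fast_cache[lba] = data     # FAST Cache now holds newer data than the Pool
        dirty.add(lba)             # this is a "FAST Cache dirty page"
    else:
        drives[lba] = data         # no FAST Cache copy: write directly to the Pool drives

def clean_dirty_pages(fast_cache, dirty, drives):
    """Synchronize dirty FAST Cache pages back to their locations on the Pool."""
    for lba in sorted(dirty):
        drives[lba] = fast_cache[lba]
    dirty.clear()

drives, fast_cache, dirty = {300: "v1"}, {300: "v1"}, set()
destage_from_system_cache(300, "v2", fast_cache, dirty, drives)
print(drives[300], fast_cache[300], dirty)   # v1 v2 {300}: the Pool is stale until cleaning
clean_dirty_pages(fast_cache, dirty, drives)
print(drives[300], dirty)                    # v2 set(): back in sync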
For FAST Cache promotions to occur efficiently, FAST Cache free or clean pages need to exist. If no free or
clean pages exist, a page cleaning operation needs to happen before the page can be freed for the next
promotion to occur. When a promotion is scheduled, pages are used in the following order:
When a FAST Cache expansion occurs, a background operation is started to add the new drives into FAST
Cache. This operation first configures a pair of drives into a RAID 1 mirrored set. The capacity from this set is
then added to FAST Cache and is available for future promotions. These operations are repeated for all
remaining drives being added to FAST Cache. During these operations, all FAST Cache reads, writes, and
promotions occur without being impacted by the expansion. The amount of time the expand operation takes
to complete depends on the size of drives used in FAST Cache and the number of drives being added to the
configuration.
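The capacity math of an expansion can be illustrated as below: drives are paired into RAID 1 mirrored sets, and each pair contributes one drive's worth of capacity as it becomes ready. This is a sketch of the flow described above; the drive sizes are placeholder values.

# Illustrative expansion sketch: each RAID 1 pair adds one drive's capacity.
def expand_fast_cache(current_capacity_gb, new_drives_gb):
    """Yield the running FAST Cache capacity as each mirrored pair is added."""
    assert len(new_drives_gb) % 2 == 0, "drives are added in pairs"
    capacity = current_capacity_gb
    for i in range(0, len(new_drives_gb), 2):
        pair = new_drives_gb[i:i + 2]
        capacity += min(pair)          # a RAID 1 pair contributes one drive's capacity
        yield capacity

for running_total in expand_fast_cache(400, [400, 400, 400, 400]):
    print(f"FAST Cache capacity now ~{running_total} GB")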
When a FAST Cache shrink occurs, a background operation is started to remove drives from the current
FAST Cache configuration. Removing drives from FAST Cache reduces the size of FAST Cache by the
number of drives selected. After starting a shrink operation, new promotions are blocked to each pair of drives
selected by the system to be removed from FAST Cache. Next, each FAST Cache dirty page within the drives
to be removed is cleaned to ensure that data is synchronized with the locations on the Pool. After all dirty
pages are cleaned within a set of drives, the capacity of the set is removed from the FAST Cache
configuration. Data which existed on FAST Cache drives that were removed may be promoted to FAST
Cache again through the normal promotion mechanism.
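The ordering of the shrink steps described above is sketched below as a hedged model, not Unity's implementation: promotions to the selected pairs are blocked, their dirty pages are cleaned back to the Pool, and only then is the capacity removed. The pair identifiers, LBA, and data values are hypothetical.

# Shrink sketch: block promotions, clean dirty pages, then remove the pairs.
def shrink_fast_cache(config_pairs, pairs_to_remove, dirty_pages, drives):
    for pair in pairs_to_remove:
        pair["promotions_blocked"] = True            # step 1: stop new promotions to these pairs
    for pair in pairs_to_remove:
        for lba, data in dirty_pages.pop(pair["id"], {}).items():
            drives[lba] = data                       # step 2: clean dirty pages back to the Pool
    remaining = [p for p in config_pairs if p not in pairs_to_remove]
    return remaining                                 # step 3: the pairs' capacity is removed

pairs = [{"id": "pair0"}, {"id": "pair1"}, {"id": "pair2"}]
dirty = {"pair2": {512: "newest copy"}}
drives = {512: "stale copy"}
print([p["id"] for p in shrink_fast_cache(pairs, [pairs[2]], dirty, drives)])
print(drives[512])   # the dirty page was synchronized before the pair was removed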
Settings window. Storage Configuration – FAST Cache. FAST Cache not configured.
The Drives step of the Create FAST Cache wizard is now shown. Figure 15 shows an example of what is
seen when a single drive size and type is present in the system that is supported for use in FAST Cache.
FAST Cache can only be created using drives of the same size. If multiple supported drive sizes are within
the system, a radio icon is displayed before each drive size in the list. To select a certain drive size to use for
FAST Cache, select the radio icon in front of the wanted drive size. Next, click the drop-down box and select
the number of drives you will use for FAST Cache.
At this time, FAST Cache can be enabled on all existing Pools on the system. FAST Cache is a global resource,
which can be used by all Pools within the system. The Enable FAST Cache for existing pools checkbox is
checked by default and can be cleared before proceeding. To change the FAST Cache setting for a Pool,
view the General tab within the Pool Properties window. After selecting the desired drive size and the number
of drives to use in FAST Cache, click the Next button.
The Summary step for the Create FAST Cache Wizard is now displayed, and an example of this screen can
be seen in Figure 16. This screen displays the choices made in the previous step and allows you to confirm
the proper selections were made. If the incorrect drive size was selected, or you want to change the number
of drives selected, you can select the Back button to correct the information. After reviewing this screen and
confirming the correct information is displayed, select Finish to create FAST Cache with these settings.
The Results step for the Create FAST Cache Wizard is now shown. The overall status of the FAST Cache
creation is shown, along with each job and its status. Figure 17 shows an example of this window when all
processes are complete. For each pair of drives, a RAID 1 RAID Group is created, and the capacity of the
group is added to FAST Cache. Once capacity is added to FAST Cache, FAST Cache is enabled and
available for data promotions. A process is also started to enable FAST Cache on all Pools on the system if
that option was selected. While FAST Cache is being created, you can click Close to close out of this
window. The process is a Unisphere job and will continue to run in the background.
To view which drives were used to configure FAST Cache, select FAST Cache Drives, which is found under
Storage Configuration in the Settings window. As shown in Figure 20, the FAST Cache drives are
displayed, shown with their locations and size. In this example, drives from the Drive Processor Enclosure
were selected to configure FAST Cache. This window makes it easy to locate the physical location of each of
the FAST Cache drives.
Figure 21 shows an example of the Expand FAST Cache wizard. When expanding FAST Cache, you may
only select free drives of the same size and type as what is currently in FAST Cache. In this example, only
400 GB SAS Flash 2 drives are available to be selected, as FAST Cache is currently created with those
drives. From the drop-down list, you can select pairs of drives to expand the capacity of FAST Cache up to
the system maximum. In this example, only two free drives were found. Click OK to start the expansion
process.
After clicking OK, an Expand FAST Cache job is created to add the drives to the FAST Cache configuration.
This process occurs in the background and does not impact I/O or promotions to FAST Cache. Figure 22
shows an example of the Job Properties window for the Expand FAST Cache job. Shown is the overall status
of the operation, and the individual steps taken for the process. In this example, only two drives were added
to the configuration.
Figure 23 shows an example of the Shrink FAST Cache wizard, with and without the drop-down box selected.
In this window, you can see that FAST Cache currently contains six drives. During a shrink operation, you can
remove all but two of the drives currently configured in FAST Cache. In this example, two drives will be
removed from FAST Cache.
Figure 24 shows the warning message received when shrinking drives out of the FAST Cache configuration.
The warning message outlines that all FAST Cache data must be flushed from the drives being removed from
FAST Cache. This operation takes time to complete and can vary based on the I/O workload being seen by
FAST Cache and the Pool drives. Performance of the system may also be impacted due to data no longer
residing in FAST Cache. Hot data will need to promote again once flushed out of FAST Cache.
While the FAST Cache Shrink operation is in progress, no changes to FAST Cache can be made. Figure 25
shows the FAST Cache page within system settings during the time a shrink operation was occurring. Notice
that no options within the FAST Cache are available while the operation is running.
At any time, the status of the FAST Cache Shrink job can be viewed by going to the Jobs page found under
the Events heading in the left pane of Unisphere. Figure 26 shows an example of a completed FAST Cache
Shrink job.
Figure 27 shows the message received after selecting Delete. This message states that all data must be
flushed from FAST Cache which can be a time-consuming operation. Performance of the system may also be
impacted during the delete operation as the contents of FAST Cache will need to be copied to the Pool drives
on the system. Because the data flushed from FAST Cache may no longer reside on Flash, this data may see
increased Response Times.
Figure 28 shows the FAST Cache page within the Settings window while the FAST Cache Delete operation is
running. No changes can be made to FAST Cache while a Delete is running. More information about the
progress of the operation can be seen on the Jobs page in Unisphere. Once the operation completes, FAST
Cache can be created again.
Settings window. Storage Configuration – FAST Cache. FAST Cache deletion in progress.
You can review the status during a Delete FAST Cache operation by reviewing the Jobs page. Figure 29
shows an example of the processes seen during a delete job. In this example the Delete operation was just
started, and the first operation was still running.
When a failed drive is encountered, as with Pool drives, FAST Cache drives can permanently spare to free
drives within the system. The system’s Hot Spare Policy however does not apply to drives within FAST
Cache. This means that if a system only contains two drives of a supported size and type, FAST Cache could
still be created. However, it is always recommended to have spare drives within the system. All drives not
used in FAST Cache are subject to the Unity Hot Spare Policy.
FAST Cache is a global resource on the system which leverages Flash to provide high throughput and low
response times. The most heavily accessed chunks of data not already residing on Flash are promoted into
FAST Cache, which boosts performance for active workloads. FAST Cache absorbs I/O bursts to the system,
which helps reduce I/O workloads on the Pool.
FAST VP helps to optimize TCO by relocating data across storage Pools to meet the demands of workloads
over time. As data ages, and the activity level for data reduces, data is relocated to provide capacity for active
workloads.
When FAST Cache and FAST VP are used together, they deliver improved TCO for the system and high
performance and efficiency. As FAST Cache is a global resource, highly active data from all Pools can use
FAST Cache. Bursts of data sent to Pool drives can be handled by FAST Cache, while FAST VP optimizes
drive utilization and efficiency within the Pool. With highly active data on FAST Cache, FAST VP can prioritize
the placement of slices for data accessed from the Pool. FAST Cache is a cost-efficient way to add Flash to a
configuration.
FAST VP works on a schedule to optimize the storage within a Pool. Even if a burst of I/O is seen, no slice
movements occur until the relocation window. If the activity is frequent enough to cause a promotion, the data
will be promoted to FAST Cache. FAST VP only monitors I/O which reaches the drives within the Pool. I/O
handled by FAST Cache does not affect the analysis done by FAST VP. However, I/O activity due to FAST
Cache page cleaning or flushing is monitored and weighed like normal I/O. If the activity of the slice is hot
enough, the slice may relocate to a higher tier.
3.1 Interoperability
FAST Cache and FAST VP are designed to work with other features of the system, as well as each other.
The following features are compatible for use with FAST Cache and FAST VP.
FAST Cache and System Cache both have specific functions which complement each other. System Cache
handles many workloads, such as high-frequency access patterns, which it is best suited to service.
System Cache also consolidates I/O requests where possible, which can be seen in prefetching of sequential
data, or coalescing of sequential writes. This helps to reduce the amount of I/O sent to the Pools. Each
feature helps to improve the overall performance of the system. The table below shows the differences
between System Cache and FAST Cache.
• Position: System Cache is closest to the CPU, with the lowest latency; FAST Cache sits between System Cache and the Pool drives.
• Best suited for: System Cache – sequential I/O, I/O smaller than 64 KB, zero fill requests, and high-frequency access patterns; FAST Cache – random I/O, I/O larger than 64 KB, and data with a high locality.
• Response time: System Cache – nanosecond to microsecond response time; FAST Cache – microsecond to millisecond response time.
• Operation: System Cache – single memory region which services read and write requests; FAST Cache – single region which services read and write requests.
• Capacity: System Cache – limited in size, capacity based on model; FAST Cache – capacities scale to higher levels than System Cache, with maximums based on model.
• Granularity: System Cache – fixed 8 KB page size; FAST Cache – fixed 64 KB page size.
• Availability: System Cache – memory modules are customer replaceable; FAST Cache – failing drives can proactively hot spare to free drives within the system, and the faulted component is customer replaceable.
4 Conclusion
The FAST software optimizes Dell EMC Unity systems, which reduces the Total Cost of Ownership and
increases the overall performance of the system by efficiently using the resources within the system. The
FAST software optimizes the use of Flash drives, by leveraging them to service highly active data within the
system. Utilizing Flash with FAST Cache avoids dedicating these drives to Pools and allows them to be a
global resource on the system. FAST Cache and FAST VP serve different purposes and help to achieve
storage efficiency in different ways.
While FAST Cache handles bursts of activity and unpredictable workloads, FAST VP tiers data over time, and
continuously matches resource needs to tiers within the Pool. These features complement each other, as they
work at different granularities and operate at different frequencies. Implementing both FAST
Cache and FAST VP on your storage system can improve performance and reduce storage costs.
FAST Cache and FAST VP can be configured once and left to run on the storage, with no further manual
intervention needed. FAST Cache and FAST VP will efficiently manage the placement of data and leverage
the drives in the system. Each of these features can be managed easily through Unisphere, Unisphere CLI,
and REST API.
Storage technical documents and videos provide expertise that helps to ensure customer success on Dell
EMC storage platforms.