Proof of Concept
TRANSPARENT CLOUD TIERING WITH IBM SPECTRUM SCALE
ATS Innovation Center, Malvern, PA
Joshua Kwedar – The ATS Group
October–November 2017
01
INTRODUCTION
With the release of IBM Spectrum Scale 4.2.1, IBM now offers a hybrid cloud solution that leverages object cloud storage as an additional tier within its ILM (Information Lifecycle Management) engine.
As an IBM Business Partner, we've stood up several IBM Spectrum Scale environments that leverage the ILM engine for data placement and migration using on-premises hardware, including near-line SAS, flash, and Spectrum Archive (LTFS). Our customers have expressed interest in an additional off-premises tier for cold data, as their rate of data ingest and available rack space do not allow them enough time or space to react to immediate capacity needs. To enhance our offering, we conducted a proof of concept of IBM Spectrum Scale TCT (Transparent Cloud Tiering) in our Innovation Center.
02
Our PoC environment for IBM Spectrum Scale TCT:
• Power 8 S822
• PowerVM
• IBM V7000
The goals of the PoC were the following:
• Create a 4-node IBM Spectrum Scale cluster consisting of three NSD servers and one protocol node to be used for TCT
• Identify and execute the steps needed to define an object cloud tier within IBM Spectrum Scale
• Manually push file(s) to S3; manually recall file(s) from S3
• Create an ILM policy to automatically tier data based on access time and file size when lowDiskSpace or noDiskSpace callback events occur
An IBM Spectrum Scale 4.2.3 cluster using Transparent Cloud Tiering was configured using the following high-level steps:
• Install IBM Spectrum Scale packages via GUI install for 3 NSD servers and 1 protocol node (NSD servers on Power 8, protocol server on x86)
• Create the cluster via the GUI
• Define four NSDs, 50GB each
• Create filesystem /s3gpfs, 200GB total
• Create a Cloudtier nodeclass and add the protocol server to the class
• Install the Transparent Cloud Tiering RPM on the protocol server
• Enable the cloud gateway, start TCT services, and assign filesystem /s3gpfs to the TCT configuration
• Configure the cloud gateway to authenticate with the Amazon S3 store
• Use an ILM policy and callback to migrate data when lowDiskSpace and noDiskSpace events are triggered
Software used:
• Red Hat Enterprise Linux 7.3
• IBM Spectrum Scale 4.2.3 Advanced Edition
• Amazon S3 object storage
mmlscluster output
mmlsnsd output
After the cluster was built, gvics3gpfsprot1 was added to a node class named Cloudtier, and the TCT server RPM was installed.
Cloud tiering must be enabled on the server performing the TCT function. After running mmchnode --cloud-gateway-enable against the Cloudtier nodeclass, the server shows up in the list of cloud-enabled nodes.
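The nodeclass and enablement steps above can be sketched as follows; the node name is the protocol node from this PoC, so substitute your own:

```shell
# Create a user-defined node class containing the protocol node that
# will run the TCT (cloud gateway) service.
mmcrnodeclass Cloudtier -N gvics3gpfsprot1

# Enable Transparent Cloud Tiering on every node in the class.
mmchnode --cloud-gateway-enable -N Cloudtier

# Verify: the node class and its members should be listed.
mmlsnodeclass Cloudtier
```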
Start the cloud tiering service by running 'mmcloudgateway service start -N Cloudtier'. To verify the service is running, execute the following:
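The start and verification steps look like this (a sketch of the commands, run on a node with Spectrum Scale administration access):

```shell
# Start the TCT service on all nodes in the Cloudtier node class.
mmcloudgateway service start -N Cloudtier

# Confirm the service is running on each node in the class.
mmcloudgateway service status -N Cloudtier
```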
Define the filesystem to the TCT nodeclass and verify using the following commands:
At this point we are ready to define a filesystem for Transparent Cloud Tiering. Only one filesystem can be associated with the TCT node class (Cloudtier). Several of our customers, especially those with SaaS offerings backed by IBM Spectrum Scale, provision multiple GPFS filesystems in a "one per customer" model. Given the single-filesystem limitation, such users may need to get creative in how they design for cloud tiering. One option would be to define a single GPFS filesystem solely used to copy data for archive (gpfsarchive, for example). The filesystem would need to be backed by NSDs, and data previously segregated via multiple namespaces would then be mixed together under the same namespace. Another option would be to create multiple nodeclasses that perform TCT functions. A minimum of two nodes running TCT services would be recommended for HA purposes for each filesystem tiering to the cloud. Depending on the number of filesystems you're looking to tier to the cloud, the total number of additional TCT servers could grow quite large. If you are using SMB in any combination with other protocols, you can configure only up to 16 protocol nodes; this is a hard limit, and SMB cannot be enabled if there are more protocol nodes. If only NFS and Object are enabled, you can have 32 nodes configured as protocol nodes. There does not appear to be a documented limit on the number of nodes solely running TCT services.
Before defining the S3 account to the configuration, IBM offers a pre-test function to confirm IOPS for puts/gets, estimate throughput, and verify the credentials provided.
Testing noted in later sections of this document proves this estimate to be accurate in terms of throughput.
Define the AWS account to be used for TCT and verify:
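A sketch of the account definition; the account name, access key, and password file are placeholders, and the flag names are illustrative of the 4.2.x syntax, so check the mmcloudgateway man page for your release:

```shell
# Register the Amazon S3 account with the TCT node class.
# Values below are placeholders, not the credentials used in this PoC.
mmcloudgateway account create \
    --cloud-nodeclass Cloudtier \
    --account-name s3tier \
    --cloud-type S3 \
    --username AKIAEXAMPLEKEY \
    --pwd-file /root/s3.pwd

# Verify the account definition.
mmcloudgateway account list
```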
There is no need to manually create any S3 buckets. Within the AWS console, the following S3 buckets appear, created automatically:
This is a good opportunity to compare your expected WAN throughput against the throughput numbers provided below.
Now that the AWS S3 account and buckets have been created, we can proceed with migrating files to the cloud. In the following screenshots, a 1GB file named 0.txt is migrated to S3; its state changes from Resident (exists locally) to Non-Resident (exists only in S3). The same file is then recalled, and its state changes to Co-Resident (exists locally AND in S3). Included are throughput stats to verify the estimated values provided during the authentication pre-test.
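The manual migrate and recall cycle can be sketched as below; the file path assumes the filesystem is mounted at /s3gpfs:

```shell
# Migrate the file to the cloud tier: state goes from Resident to
# Non-Resident (a stub remains locally, the data lives in S3).
mmcloudgateway files migrate /s3gpfs/0.txt

# Show the file's TCT state and cloud metadata.
mmcloudgateway files list /s3gpfs/0.txt

# Recall the file: state becomes Co-Resident (local AND in S3).
mmcloudgateway files recall /s3gpfs/0.txt
```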
Once the recall is completed:
When selecting multiple files to migrate to the cloud in a filesystem consisting of several directories and subdirectories, it is important to note that wildcards cannot be provided. If a user were to migrate all files under a directory (for example, /s3gpfs/test), including subdirectories, the following usage would be required:
find /s3gpfs/test -type f -exec mmcloudgateway files migrate {} +
In order to change a file from a Co-Resident state to a Resident state, a policy included under /opt/ibm/MCStore/samples, named coResidentToResident.template, can be applied using the mmapplypolicy command. The policy calls a policy helper function that invokes an mcstore binary not intended for users to execute directly. It removes the extended attributes of the file and puts it in a Resident state.
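Applying the sample template can be sketched as follows; copying it first lets you adjust the rules before use (the working-copy path is arbitrary):

```shell
# Work from a copy of the shipped sample policy.
cp /opt/ibm/MCStore/samples/coResidentToResident.template /tmp/toResident.pol

# Apply it against the TCT filesystem to flip Co-Resident files
# back to Resident.
mmapplypolicy /s3gpfs -P /tmp/toResident.pol
```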
The following message is posted during the mmapplypolicy run, indicating that an 'mmcloudgateway files reconcile' must be executed in order for the changed attribute of the file to be updated in the local cloud directory database:
In the next example, a default policy and callback are defined to perform the following functions:
• Exclude the contents of the internal TCT configuration from being eligible for migration to the cloud (in our example, /s3gpfs/.mcstore and /s3gpfs/.mcstore.bak)
• Define attributes for access age of a file, size in MB, and a weight expression based on age and size of a file
• Define an external pool for the cloud tier
• Define a migration rule to migrate eligible files to the cloud tier when the filesystem is 95% full, until local filesystem utilization is at 90%
• Define rules to automatically recall files during user-executed read and write operations
• Configure a callback to execute an mmapplypolicy and migrate eligible files to the cloud tier when a lowDiskSpace (95%, as defined in the policy) or noDiskSpace (100%) event is logged
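A condensed sketch of what such a policy can look like in the GPFS policy language; the helper-script path, weight expression, and size cutoff are illustrative assumptions, not the exact policy used in this PoC:

```
/* Exclude TCT-internal data from migration */
RULE 'excludeMcstore' EXCLUDE WHERE PATH_NAME LIKE '%/.mcstore/%'

/* External pool backed by the TCT helper (script path is an assumption) */
RULE EXTERNAL POOL 'cloudtier' EXEC '/opt/ibm/MCStore/bin/mcstore' OPTS '-F'

/* At 95% full, migrate the oldest/largest files until utilization is 90%,
   weighting candidates by age multiplied by allocated size */
RULE 'tierToCloud' MIGRATE FROM POOL 'system'
  THRESHOLD(95,90)
  WEIGHT(KB_ALLOCATED * (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)))
  TO POOL 'cloudtier'
  WHERE KB_ALLOCATED > 1024
```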
The following policy was configured using the mmchpolicy command against filesystem /s3gpfs:
The following callback was configured using the command:
mmaddcallback thresholdMigration --event lowDiskSpace,noDiskSpace --command /usr/lpp/mmfs/bin/mmapplypolicy --parms "%fsName -N Cloudtier -g /s3gpfs/test --single-instance"
To test the newly created callback, generate test files until a lowDiskSpace event is triggered.
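One way to generate the test files is a small fill helper like the following; the target path and file sizes are illustrative, not the exact values used in this PoC:

```shell
# Write dummy files of a given size into a target directory, to push
# filesystem utilization over the lowDiskSpace threshold.
fill_dir() {
    target="$1"    # directory to fill, e.g. a path under /s3gpfs
    count="$2"     # number of files to write
    mb_each="$3"   # size of each file in MB
    i=1
    while [ "$i" -le "$count" ]; do
        # Each file is mb_each MB of zeros.
        dd if=/dev/zero of="$target/fill_$i" bs=1M count="$mb_each" 2>/dev/null
        i=$((i + 1))
    done
}

# Example (hypothetical path): fill_dir /s3gpfs/test 50 100
```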
When the filesystem reaches the defined lowDiskSpace threshold, the following entries are logged in /var/adm/ras/mmfs.log.latest on the filesystem manager node, and mmapplypolicy is executed as defined in the thresholdMigration callback:
For those leveraging other IBM Spectrum Scale features (AFM, LTFS, HSM, FPO, etc.), it is important to note the following restrictions and limitations as they relate to Transparent Cloud Tiering:
• Spectrum Archive (LTFS) and Transparent Cloud Tiering cannot be configured for the same filesystem. Both solutions are intended to serve the same purpose (archive), so be sure to plan for one up front.
• TCT cannot be used to tier snapshots.
• TCT is not supported in a multi-cluster setup.
• There is a one-filesystem-per-TCT-nodeclass limitation.
• TCT cannot be run on AFM gateway nodes or configured for use on AFM or AFM DR filesets.
• There is no mixed-cluster support if using Windows or System z nodes; TCT can co-exist in a Linux cluster comprised of x86 and Power nodes.
• TCT is not a replacement for a backup or DR solution.
• An average file size of 1MB is recommended when tiering to the cloud. Smaller file sizes are supported, but performance may suffer due to I/O overhead.
About the ATS Group
Since our founding in 2001, the ATS Group has consulted on thousands of system implementations, upgrades, backups, and recoveries. We also support customers by providing managed services, performance analysis, and capacity planning. With over 60 industry-certified professionals, we support SMBs, Fortune 500 companies, and government agencies. As experts in IBM, VMware, Oracle, and other top vendors, we are experienced in virtualization, storage area networks (SANs), high availability, performance tuning, SDS, enterprise backup, and other evolving technologies that operate mission-critical systems on premises, in the cloud, or in a hybrid environment.
THEATSGROUP.COM | COMPANY CONTACT
01
INTRODUCTION
With the release of IBM Spectrum Scale 421 IBM is now offering a hybrid cloud solution to leverage object cloud storage as an additional tier within their ILM (Information Lifecycle Management) engine
As an IBM Business Partner wersquove stood up several IBM Spectrum Scale environments that
leverage the ILM engine for data placement and migration using on prem hardware including
near-line SAS Flash and Spectrum Archive (LTFS) Our customers have expressed an interest
in an additional off prem tier for cold data as their rate of data ingest and available rack space
does not allow them enough time or space to react to immediate capacity needs In an effort
to enhance our offering wersquove conducted a proof of concept of IBM Spectrum Scale TCT
(Transparent Cloud Tiering) in our Innovation Center
02
Our PoC environment for IBM Spectrum Scale TCT
bull Power 8 S822
bull PowerVM
bull IBM V7000
The goal of the PoC was the following
bull Create a 4-node IBM Spectrum Scale cluster consisting of three NSD servers and one Protocol node to be used for TCT
bull Identify and execute the steps needed to define an object cloud tier within IBM Spectrum Scale
bull Manually push files(s) to S3 Manually recall file(s) from S3
bull Create an ILM policy to automatically tier data based on access time and file size when lowDiskSpace or noDiskSpace callback events occur
A IBM Spectrum Scale 423 cluster using Transparent Cloud Tiering was configured using the following high level steps
bull Install IBM Spectrum Scale packages via GUI install for 3 NSD servers and 1 protocol node (NSD servers on Power 8 Protocol server on x86)
bull Create cluster via GUI
bull Define (4) NSDs 50GB each
bull Create filesystem s3gpfs 200GB total
bull Create a Cloudtier nodeclass and add the protocol server to the class
bull Install the Transparent Cloud Tiering rpm on the protocol server
bull Enable cloud gateway start TCT services assign filesystem s3gpfs to TCT config
bull Configure cloud gateway to authenticate with Amazon S3 store
bull Use ILM policy and callback to migrate data when lowDiskSpace and noDiskSpace events are triggered
bull Red Hat Enterprise Linux 73
bull IBM Spectrum Scale 423 Advanced Edition
bull Amazon S3 object storage
mmlscluster output
mmlnsd output
After the cluster was built gvics3gpfsprot1 was added to a node class named Cloudtier and the TCT server RPM was installed
Cloud tiering must be enabled on the server performing the TCT function After running an mmchnode --cloud-gateway-enable against the Cloudtier nodeclass our server will show up in the list of cloud enabled nodes
03
04
Start the cloud tiering service by running lsquommcloudgateway service start ndashN Cloudtierrsquo To verify the service is running execute the following
Define the filesystem to the TCT nodeclass and verify using the following commands
At this point we are ready to define a filesystem for Transparent Cloud Tiering Only one
filesystem can be associated with the TCT node class (Cloudtier) Several of our customers
especially those with SaaS offerings backed by IBM Spectrum Scale provision multiple GPFS
filesystems in a ldquoone per customerrdquo model Given the single filesystem limitation such users
may need to get creative in ways that they design for cloud tiering One option would be to de-
fine a single GPFS filesystem solely used to copy data for archive (gpfsarchive for example)
The filesystem would need to be backed by NSDs and data previously segregated via multiple
namespaces would then be mixed together under the same namespace Another option would
be to create multiple nodeclasses that perform TCT functions A minimum of two nodes run-
ning TCT services would be recommended for HA purposes for each filesystem tiering to the
cloud Depending on the number of filesystems yoursquore looking to tier to the cloud the total of
additional TCT servers could grow quite large If you are using SMB in any combination of other
protocols you can configure only up to 16 protocol nodes This is a hard limit and SMB cannot
be enabled if there are more protocol nodes If only NFS and Object are enabled you can have
32 nodes configured as protocol nodes There does not appear to be a documented limited
number of nodes solely running TCT services
Before defining the S3 account to the configuration IBM offers a pre-test function to confirm
IOPs for putsgets estimated throughput and verify the credentials provided
05
Testing noted in future sections of this document prove this estimate to be accurate in terms of throughput
Define the AWS account to be used for TCT and verify
There is no need to manually create any S3 buckets Within the AWS console the following newly created S3 buckets are automatically created
This is a good opportunity to compare your expected WAN throughput against the throughput
numbers provided below
Now that the AWSS3 account and buckets have been created we can proceed with migrating
files to the cloud In the following screenshots a 1G file named 0txt is migrated to S3 itrsquos current
state changes from Resident (exists locally) to Non-Resident (exists only in S3) the same file
is recalled and itrsquos state is changed to Co-Resident (exists locally AND in S3) Included are
throughput stats to verify estimated values provided during the authentication pre-test
06
Once recall is completed
When selecting multiple files to migrate to the cloud in a filesystem consisting of several
directories and subdirectories it is important to note that wildcards cannot be provided If a user
were to migrate all files under a directory (for example s3gpfstest) including subdirectories the
following usage would be required
find s3gpfstest -type f ndashexec mmcloudgateway files migrate +
In order to change a file from a Co-Resident state to a Resident state a policy included under
optibmMCStoresamples named coResidentToResidenttemplate can be applied using the
mmapplypolicy command The policy calls policy helper function that invokes an mcstore binary
not intended for users to execute It removes the extended attributes of the file and puts it in a
Resident state
07
The following message is posted during the mmapplypolicy that indicates an lsquommcloudgateway files reconcilersquo must be executed in order for the changed attribute of the file to be updated in the local cloud directory database
In the next example a default policy and callback are defined to perform the following functions
bull Exclude the contents of the internal TCT configuration from being eligible for migration to
the cloud (in our example s3gpfsmcstore and s3gpfsmcstorebak)
bull Define attributes for access age of a file size in MB and a weight expression based on age
and size of a file
bull Define an external pool for the cloud tier
bull Define a migration rule to migrate eligible files to the cloud tier when the filesystem is
95 full until local filesystem utilization is at 90
bull Define rules to automatically recall files during user executed read and write operations
bull A callback is configured to execute an mmapplypolicy and migrate eligible files to the cloud
tier when a lowDiskSpace (95 as defined in the policy) or noDiskSpace (100) event is logged
08
The following policy was configured using the mmchpolicy command against filesystem s3gpfs
09
The following callback configured using the command
mmaddcallback thresholdMigration --event lowDiskSpacenoDiskSpace --command usrlppmmfsbinmmapplypolicy --parms ldquofsname -N Cloudtier -g s3gpfstest --single-instancerdquo
To test the newly created callback generate test files until a lowDiskSpace event is triggered
10
When the filesystem reaches the defined lowDiskSpace threshold the following entries are logged in varadmrasmmfsloglatest of the filesystem manager node and the mmapplypolicy is executed as defined in the thresholdMigration callback we defined
For those leveraging other IBM Spectrum Scale features (AFM LTFS HSM FPO etc) it is important
to note the following restrictions and limitations as it relates to Transparent Cloud Tiering
bull Spectrum Archive (LTFS) and Transparent Cloud Tiering cannot be configured for the same
filesystem Both solutions are intended to serve the same purpose (archive) be sure to plan
for one up front
bull TCT cannot be used to tier snapshots
bull TCT is not supported in a multi-cluster setup
bull There is a one filesystem per TCT nodeclass limitation
bull TCT cannot be run on AFM gateway nodes or configured for use on AFM or AFM DR filesets
bull There is no mixed cluster support if using Windowssystem Z nodes TCT can co-exist
in a Linux cluster comprised of x86 and Power nodes
bull TCT is not a replacement for a backup or DR solution
bull An average file size of 1MB is recommended when tiering to the cloud Smaller file sizes
are supported but performance may suffer as a result due to IO overhead
About the ATS Group
Since our founding in 2001 the ATS Group has consulted on thousands of system
implementations upgrades backups and recoveries We also support customers by providing
managed services performance analysis and capacity planning With over 60 industry-
certified professionals we support SMBs Fortune 500 companies and government agencies
As experts in IBM VMware Oracle and other top vendors we are experienced in virtualization
storage area networks (SANs) high availability performance tuning SDS enterprise backup
and other evolving technologies that operate mission-critical systems on premise in the
cloud or in a hybrid environment
T H E AT S G R O U P C O M C O M PA N Y C O N TA C T
02
Our PoC environment for IBM Spectrum Scale TCT
bull Power 8 S822
bull PowerVM
bull IBM V7000
The goal of the PoC was the following
bull Create a 4-node IBM Spectrum Scale cluster consisting of three NSD servers and one Protocol node to be used for TCT
bull Identify and execute the steps needed to define an object cloud tier within IBM Spectrum Scale
bull Manually push files(s) to S3 Manually recall file(s) from S3
bull Create an ILM policy to automatically tier data based on access time and file size when lowDiskSpace or noDiskSpace callback events occur
A IBM Spectrum Scale 423 cluster using Transparent Cloud Tiering was configured using the following high level steps
bull Install IBM Spectrum Scale packages via GUI install for 3 NSD servers and 1 protocol node (NSD servers on Power 8 Protocol server on x86)
bull Create cluster via GUI
bull Define (4) NSDs 50GB each
bull Create filesystem s3gpfs 200GB total
bull Create a Cloudtier nodeclass and add the protocol server to the class
bull Install the Transparent Cloud Tiering rpm on the protocol server
bull Enable cloud gateway start TCT services assign filesystem s3gpfs to TCT config
bull Configure cloud gateway to authenticate with Amazon S3 store
bull Use ILM policy and callback to migrate data when lowDiskSpace and noDiskSpace events are triggered
bull Red Hat Enterprise Linux 73
bull IBM Spectrum Scale 423 Advanced Edition
bull Amazon S3 object storage
mmlscluster output
mmlnsd output
After the cluster was built gvics3gpfsprot1 was added to a node class named Cloudtier and the TCT server RPM was installed
Cloud tiering must be enabled on the server performing the TCT function After running an mmchnode --cloud-gateway-enable against the Cloudtier nodeclass our server will show up in the list of cloud enabled nodes
03
04
Start the cloud tiering service by running lsquommcloudgateway service start ndashN Cloudtierrsquo To verify the service is running execute the following
Define the filesystem to the TCT nodeclass and verify using the following commands
At this point we are ready to define a filesystem for Transparent Cloud Tiering Only one
filesystem can be associated with the TCT node class (Cloudtier) Several of our customers
especially those with SaaS offerings backed by IBM Spectrum Scale provision multiple GPFS
filesystems in a ldquoone per customerrdquo model Given the single filesystem limitation such users
may need to get creative in ways that they design for cloud tiering One option would be to de-
fine a single GPFS filesystem solely used to copy data for archive (gpfsarchive for example)
The filesystem would need to be backed by NSDs and data previously segregated via multiple
namespaces would then be mixed together under the same namespace Another option would
be to create multiple nodeclasses that perform TCT functions A minimum of two nodes run-
ning TCT services would be recommended for HA purposes for each filesystem tiering to the
cloud Depending on the number of filesystems yoursquore looking to tier to the cloud the total of
additional TCT servers could grow quite large If you are using SMB in any combination of other
protocols you can configure only up to 16 protocol nodes This is a hard limit and SMB cannot
be enabled if there are more protocol nodes If only NFS and Object are enabled you can have
32 nodes configured as protocol nodes There does not appear to be a documented limited
number of nodes solely running TCT services
Before defining the S3 account to the configuration IBM offers a pre-test function to confirm
IOPs for putsgets estimated throughput and verify the credentials provided
05
Testing noted in future sections of this document prove this estimate to be accurate in terms of throughput
Define the AWS account to be used for TCT and verify
There is no need to manually create any S3 buckets Within the AWS console the following newly created S3 buckets are automatically created
This is a good opportunity to compare your expected WAN throughput against the throughput
numbers provided below
Now that the AWSS3 account and buckets have been created we can proceed with migrating
files to the cloud In the following screenshots a 1G file named 0txt is migrated to S3 itrsquos current
state changes from Resident (exists locally) to Non-Resident (exists only in S3) the same file
is recalled and itrsquos state is changed to Co-Resident (exists locally AND in S3) Included are
throughput stats to verify estimated values provided during the authentication pre-test
06
Once recall is completed
When selecting multiple files to migrate to the cloud in a filesystem consisting of several
directories and subdirectories it is important to note that wildcards cannot be provided If a user
were to migrate all files under a directory (for example s3gpfstest) including subdirectories the
following usage would be required
find s3gpfstest -type f ndashexec mmcloudgateway files migrate +
In order to change a file from a Co-Resident state to a Resident state a policy included under
optibmMCStoresamples named coResidentToResidenttemplate can be applied using the
mmapplypolicy command The policy calls policy helper function that invokes an mcstore binary
not intended for users to execute It removes the extended attributes of the file and puts it in a
Resident state
07
The following message is posted during the mmapplypolicy that indicates an lsquommcloudgateway files reconcilersquo must be executed in order for the changed attribute of the file to be updated in the local cloud directory database
In the next example a default policy and callback are defined to perform the following functions
bull Exclude the contents of the internal TCT configuration from being eligible for migration to
the cloud (in our example s3gpfsmcstore and s3gpfsmcstorebak)
bull Define attributes for access age of a file size in MB and a weight expression based on age
and size of a file
bull Define an external pool for the cloud tier
bull Define a migration rule to migrate eligible files to the cloud tier when the filesystem is
95 full until local filesystem utilization is at 90
bull Define rules to automatically recall files during user executed read and write operations
bull A callback is configured to execute an mmapplypolicy and migrate eligible files to the cloud
tier when a lowDiskSpace (95 as defined in the policy) or noDiskSpace (100) event is logged
08
The following policy was configured using the mmchpolicy command against filesystem s3gpfs
09
The following callback configured using the command
mmaddcallback thresholdMigration --event lowDiskSpacenoDiskSpace --command usrlppmmfsbinmmapplypolicy --parms ldquofsname -N Cloudtier -g s3gpfstest --single-instancerdquo
To test the newly created callback generate test files until a lowDiskSpace event is triggered
10
When the filesystem reaches the defined lowDiskSpace threshold the following entries are logged in varadmrasmmfsloglatest of the filesystem manager node and the mmapplypolicy is executed as defined in the thresholdMigration callback we defined
For those leveraging other IBM Spectrum Scale features (AFM LTFS HSM FPO etc) it is important
to note the following restrictions and limitations as it relates to Transparent Cloud Tiering
bull Spectrum Archive (LTFS) and Transparent Cloud Tiering cannot be configured for the same
filesystem Both solutions are intended to serve the same purpose (archive) be sure to plan
for one up front
bull TCT cannot be used to tier snapshots
bull TCT is not supported in a multi-cluster setup
bull There is a one filesystem per TCT nodeclass limitation
bull TCT cannot be run on AFM gateway nodes or configured for use on AFM or AFM DR filesets
bull There is no mixed cluster support if using Windowssystem Z nodes TCT can co-exist
in a Linux cluster comprised of x86 and Power nodes
bull TCT is not a replacement for a backup or DR solution
bull An average file size of 1MB is recommended when tiering to the cloud Smaller file sizes
are supported but performance may suffer as a result due to IO overhead
About the ATS Group
Since our founding in 2001 the ATS Group has consulted on thousands of system
implementations upgrades backups and recoveries We also support customers by providing
managed services performance analysis and capacity planning With over 60 industry-
certified professionals we support SMBs Fortune 500 companies and government agencies
As experts in IBM VMware Oracle and other top vendors we are experienced in virtualization
storage area networks (SANs) high availability performance tuning SDS enterprise backup
and other evolving technologies that operate mission-critical systems on premise in the
cloud or in a hybrid environment
T H E AT S G R O U P C O M C O M PA N Y C O N TA C T
mmlscluster output
mmlnsd output
After the cluster was built gvics3gpfsprot1 was added to a node class named Cloudtier and the TCT server RPM was installed
Cloud tiering must be enabled on the server performing the TCT function After running an mmchnode --cloud-gateway-enable against the Cloudtier nodeclass our server will show up in the list of cloud enabled nodes
03
04
Start the cloud tiering service by running lsquommcloudgateway service start ndashN Cloudtierrsquo To verify the service is running execute the following
Define the filesystem to the TCT nodeclass and verify using the following commands
At this point we are ready to define a filesystem for Transparent Cloud Tiering Only one
filesystem can be associated with the TCT node class (Cloudtier) Several of our customers
especially those with SaaS offerings backed by IBM Spectrum Scale provision multiple GPFS
filesystems in a ldquoone per customerrdquo model Given the single filesystem limitation such users
may need to get creative in ways that they design for cloud tiering One option would be to de-
fine a single GPFS filesystem solely used to copy data for archive (gpfsarchive for example)
The filesystem would need to be backed by NSDs and data previously segregated via multiple
namespaces would then be mixed together under the same namespace Another option would
be to create multiple nodeclasses that perform TCT functions A minimum of two nodes run-
ning TCT services would be recommended for HA purposes for each filesystem tiering to the
cloud Depending on the number of filesystems yoursquore looking to tier to the cloud the total of
additional TCT servers could grow quite large If you are using SMB in any combination of other
protocols you can configure only up to 16 protocol nodes This is a hard limit and SMB cannot
be enabled if there are more protocol nodes If only NFS and Object are enabled you can have
32 nodes configured as protocol nodes There does not appear to be a documented limited
number of nodes solely running TCT services
Before defining the S3 account to the configuration IBM offers a pre-test function to confirm
IOPs for putsgets estimated throughput and verify the credentials provided
05
Testing noted in future sections of this document prove this estimate to be accurate in terms of throughput
Define the AWS account to be used for TCT and verify
There is no need to manually create any S3 buckets Within the AWS console the following newly created S3 buckets are automatically created
This is a good opportunity to compare your expected WAN throughput against the throughput
numbers provided below
Now that the AWSS3 account and buckets have been created we can proceed with migrating
files to the cloud In the following screenshots a 1G file named 0txt is migrated to S3 itrsquos current
state changes from Resident (exists locally) to Non-Resident (exists only in S3) the same file
is recalled and itrsquos state is changed to Co-Resident (exists locally AND in S3) Included are
throughput stats to verify estimated values provided during the authentication pre-test
06
Once recall is completed
When selecting multiple files to migrate to the cloud in a filesystem consisting of several
directories and subdirectories it is important to note that wildcards cannot be provided If a user
were to migrate all files under a directory (for example s3gpfstest) including subdirectories the
following usage would be required
find s3gpfstest -type f ndashexec mmcloudgateway files migrate +
In order to change a file from a Co-Resident state to a Resident state, a policy included under /opt/ibm/MCStore/samples named coResidentToResident.template can be applied using the mmapplypolicy command. The policy calls a policy helper function that invokes an mcstore binary not intended for users to execute directly. It removes the extended attributes of the file and puts it in a Resident state.
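Applying the sample template looks roughly like the following. The shipped samples are generally meant to be copied and adjusted before use; the working path below is illustrative, and the -N restriction to the TCT node class is an assumption for this sketch:

```shell
# Copy the sample, adjust as needed, then apply it to filesystem s3gpfs,
# restricting execution to the TCT node class.
cp /opt/ibm/MCStore/samples/coResidentToResident.template /tmp/cores.pol
mmapplypolicy s3gpfs -P /tmp/cores.pol -N Cloudtier
```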
The following message is posted during the mmapplypolicy run, indicating that an 'mmcloudgateway files reconcile' must be executed in order for the changed attribute of the file to be updated in the local cloud directory database:
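The reconcile itself is run against the filesystem being tiered; the exact argument spelling varies by release, so treat the invocation below as a sketch and check the mmcloudgateway man page for your version:

```shell
# Reconcile the local cloud directory database for filesystem s3gpfs.
mmcloudgateway files reconcile /s3gpfs
```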
In the next example, a default policy and callback are defined to perform the following functions:
• Exclude the contents of the internal TCT configuration from being eligible for migration to the cloud (in our example, /s3gpfs/.mcstore and /s3gpfs/.mcstore.bak)
• Define attributes for access age of a file, size in MB, and a weight expression based on age and size of a file
• Define an external pool for the cloud tier
• Define a migration rule to migrate eligible files to the cloud tier when the filesystem is 95% full, until local filesystem utilization is at 90%
• Define rules to automatically recall files during user-executed read and write operations
• Configure a callback to execute an mmapplypolicy and migrate eligible files to the cloud tier when a lowDiskSpace (95%, as defined in the policy) or noDiskSpace (100%) event is logged
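A condensed sketch of such a policy in the GPFS policy language is shown below. The pool and rule names, the weight expression, and the age/size cutoffs are illustrative for our example; the EXEC string for the external pool follows the sample policies shipped with TCT and may differ by release:

```
/* age/size attributes and a simple age-times-size weighting (illustrative) */
define(access_age, (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)))
define(mb_allocated, (INTEGER(KB_ALLOCATED / 1024)))
define(weight_expression, (access_age * mb_allocated))

/* the cloud tier as an external pool, serviced by the TCT gateway */
RULE EXTERNAL POOL 'cloudpool'
     EXEC '/usr/lpp/mmfs/bin/mmcloudgateway files' OPTS '-F'

/* migrate at 95% full, down to 90%, excluding the TCT internal directories */
RULE 'MigrateToCloud' MIGRATE FROM POOL 'system'
     THRESHOLD(95,90) WEIGHT(weight_expression)
     TO POOL 'cloudpool'
     WHERE NOT (PATH_NAME LIKE '/s3gpfs/.mcstore%')
```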
The following policy was configured using the mmchpolicy command against filesystem s3gpfs:
The following callback was configured using the command:
mmaddcallback thresholdMigration --event lowDiskSpace,noDiskSpace --command /usr/lpp/mmfs/bin/mmapplypolicy --parms "%fsName -N Cloudtier -g /s3gpfs/test --single-instance"
To test the newly created callback, generate test files until a lowDiskSpace event is triggered.
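One way to drive the filesystem over the threshold is a simple fill loop like the one below; the target directory is our example path, and the loop is a sketch rather than a production tool (it deliberately writes until df reports 95% usage):

```shell
# Write 1 GiB files into /s3gpfs/test until utilization reaches 95%,
# at which point GPFS logs a lowDiskSpace event and fires the callback.
TARGET=/s3gpfs/test
i=0
while [ "$(df -P "$TARGET" | awk 'NR==2 {gsub(/%/,""); print $5}')" -lt 95 ]; do
  dd if=/dev/zero of="$TARGET/fill.$((i+=1))" bs=1M count=1024 status=none
done
```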
When the filesystem reaches the defined lowDiskSpace threshold, the following entries are logged in /var/adm/ras/mmfs.log.latest on the filesystem manager node, and the mmapplypolicy is executed as specified in the thresholdMigration callback:
For those leveraging other IBM Spectrum Scale features (AFM, LTFS, HSM, FPO, etc.), it is important to note the following restrictions and limitations as they relate to Transparent Cloud Tiering:
• Spectrum Archive (LTFS) and Transparent Cloud Tiering cannot be configured for the same filesystem. Both solutions are intended to serve the same purpose (archive), so be sure to plan for one up front.
• TCT cannot be used to tier snapshots.
• TCT is not supported in a multi-cluster setup.
• There is a one-filesystem-per-TCT-nodeclass limitation.
• TCT cannot be run on AFM gateway nodes or configured for use on AFM or AFM DR filesets.
• There is no mixed-cluster support if using Windows/System z nodes; TCT can co-exist in a Linux cluster comprised of x86 and Power nodes.
• TCT is not a replacement for a backup or DR solution.
• An average file size of 1 MB is recommended when tiering to the cloud. Smaller file sizes are supported, but performance may suffer as a result due to I/O overhead.
About the ATS Group
Since our founding in 2001, the ATS Group has consulted on thousands of system implementations, upgrades, backups, and recoveries. We also support customers by providing managed services, performance analysis, and capacity planning. With over 60 industry-certified professionals, we support SMBs, Fortune 500 companies, and government agencies. As experts in IBM, VMware, Oracle, and other top vendors, we are experienced in virtualization, storage area networks (SANs), high availability, performance tuning, SDS, enterprise backup, and other evolving technologies that operate mission-critical systems on premise, in the cloud, or in a hybrid environment.
THEATSGROUP.COM