31 October 2014

7.3 Configuring backup and archive copy groups for client data protection (Part 2)

How to define Backup and Archive Copy Groups

After defining a management class, you must define the backup and archive copy groups, which contain the parameters that control the generation and expiration of backup and archive data.

Defining BACKUP Copy Group

Use the following command syntax to define a backup copy group:

[Image: backup copy group syntax]

DOMAIN_NAME (Required): Specifies the name of the policy domain that you are defining the copy group for.

POLICY_SET_NAME (Required): Specifies the name of the policy set that you are defining the copy group for.

CLASS_NAME (Required): Specifies the name of the management class that you are defining the copy group for.

STANDARD (Optional): Specifies the name of the copy group. The name must be STANDARD, which is also the default value.

TYPE=Backup (Optional): Specifies that you want to define a backup copy group. The default value is BACKUP.

VEREXISTS (Optional): Specifies the maximum number of backup versions to retain for files that are currently on the client file system. The default value is 2. The other option is NOLIMIT.

VERDELETED (Optional): Specifies the maximum number of backup versions to retain for files that are deleted from the client file system after they were backed up with Tivoli Storage Manager. The default value is 1. The other option is NOLIMIT.

RETEXTRA (Optional): Specifies the number of days to retain a backup version after that version becomes inactive. A version of a file becomes inactive when the client stores a more recent backup version, or when the client deletes the file from the workstation and then runs a full incremental backup. The server deletes inactive versions that are based on retention time even if the number of inactive versions does not exceed the number allowed by the VEREXISTS or VERDELETED parameters. The default value is 30 days. The other option is NOLIMIT.

RETONLY (Optional): Specifies the number of days to retain the last backup version of a file that is deleted from the client file system. The default value is 60. The other option is NOLIMIT.

DESTINATION (Required): Specifies the name of a storage pool that is defined. BACKUPPOOL is the default for the backup copy group, and ARCHIVEPOOL is the default for the archive copy group.

FREQUENCY=freqvalue (Optional): Specifies the minimum interval, in days, between successive backups.

MODE=mode (Optional): Specifies whether a file is backed up based on whether it has changed since the last backup. The MODE value applies only to incremental backup; it is ignored during selective backup. The default value is MODIFIED (back up only if the file has changed). The other option is ABSOLUTE, which backs up the file regardless of whether it has changed.

SERIALIZATION=serialvalue (Optional): Specifies handling of files or directories if they are modified during backup processing and also actions for Tivoli Storage Manager to take if a modification occurs. The default value is SHRSTATIC.

The SHRSTATIC parameter specifies that Tivoli Storage Manager backs up a file or directory only if it is not modified during the backup or archive operation. Tivoli Storage Manager attempts to perform a backup or archive operation as many as four times, depending on the value specified for the CHANGINGRETRIES client option. If the file or directory is modified during each backup or archive attempt, Tivoli Storage Manager does not process it. Other options are STATIC, SHRDYNAMIC, and DYNAMIC.
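Putting these parameters together, a complete define copygroup command for a backup copy group might look like the following sketch (the domain, policy set, class, and storage pool names are illustrative):

  define copygroup LINUX lab LINUXMC standard type=backup destination=BACKUPPOOL verexists=2 verdeleted=1 retextra=30 retonly=60 mode=modified serialization=shrstatic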

Defining Archive Copy Group

Use the following command syntax to define an archive copy group:

[Image: archive copy group syntax]

An archive copy group uses different retention parameters, as shown above; the remaining parameters are the same as for a backup copy group.

TYPE=Archive (Required): Specifies that you want to define an archive copy group. Because the default is BACKUP, you must specify TYPE=ARCHIVE explicitly.

RETVER (Optional): Specifies the number of days to keep an archive copy. The default value is 365. The other option is NOLIMIT.

RETINIT (Optional): Specifies the trigger to initiate the retention time that the RETVER attribute specifies. The default value is CREATION. The other option is EVENT.

RETMIN (Optional): Specifies the minimum number of days to keep an archive copy after it is archived. The default value is 365.
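For example, an archive copy group that keeps archive copies for one year could be defined as follows (the domain, policy set, class, and storage pool names are illustrative):

  define copygroup LINUX lab LINUXMC standard type=archive destination=ARCHIVEPOOL retver=365 serialization=shrstatic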

Assigning Default Management Class

Each policy set contains a default management class and can contain any number of additional management classes. Use policy sets to implement different policies, based on user and business requirements.

You must assign a default management class for a policy set before you can activate that policy set. Use the assign defmgmtclass command to specify an existing management class as the default management class for a particular policy set.  Ensure that the default management class contains both an archive copy group and a backup copy group.

ASsign DEFMGmtclass domainname setname classname
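For example (the domain, policy set, and class names are illustrative):

  assign defmgmtclass LINUX lab LINUXMC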

Validating and activating Policy Sets

The validate policyset command examines the management class and copy group definitions in a specified policy set and reports conditions that need consideration if the policy set is activated. After you change a policy set and validate it, you must activate the policy set to make it the ACTIVE policy set. The validate policyset command fails if any of the following conditions exist:
  • A default management class is not defined for the policy set.
  • A copy group within the policy set specifies a copy storage pool as a destination.
  • A management class specifies a copy pool as the destination for space-managed files.
When a policy set is activated, its contents are copied to a policy set that has the reserved name ACTIVE. After activation, the original policy set and the ACTIVE policy set have no further relationship: you can still modify the original policy set, but the copied definitions in the ACTIVE policy set can be changed only by activating another policy set.

Use the validate policyset command to verify that a policy set is complete and valid before you activate it:

VALidate POlicyset domainname policysetname

Then, use the activate policyset command to specify a policy set as the active policy set for a policy domain:

ACTivate POlicyset domainname policysetname
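For example (the domain and policy set names are illustrative):

  validate policyset LINUX lab
  activate policyset LINUX lab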


7.2 Configuring Policy Domain for client data protection (Part 1)

Defining Policy Domain for client backup and archives

Before you back up any client, the client must be registered to the default policy domain (STANDARD), or you can define your own policy domain settings according to your business requirements. You first define a policy domain, a policy set, and a management class, and then the backup and archive copy groups. This post describes the purpose of these objects and how to define them. Configure them according to the policy domain hierarchy shown in the image below.
[Image: policy domain structure]

Policy Domain

A policy domain provides a logical way of managing backup and archive policies for a group of nodes with common needs. It is a collection of one or more nodes and one or more policies. Each domain is an object stored in the Tivoli Storage Manager database, with a name of 1 to 30 characters. There is no limit to the number of policy domains that you can define on a Tivoli Storage Manager server.

A client node is associated with only one policy domain on a specific Tivoli Storage Manager server. However, a client or node might be registered (defined) to more than one server. Each domain can have one or more clients or nodes associated with it. The clients or nodes can run on the same or different platforms. Some installations can require only a single policy domain.

A policy domain also contains a backup grace retention period and an archive grace retention period. These grace periods act as a safety net to ensure that data that is backed up or archived in a storage pool is not inadvertently deleted. Use the following define domain command to define a policy domain for clients:

  define domain LINUX description="Policy domain for LINUX clients"

Each policy domain contains a backup grace retention period and an archive grace retention period. The grace retention period protects backup versions and archive copies from immediate deletion when the default management class has no backup or archive copy group. The client node uses the grace retention period only if there is no other defined retention period. The policy domain grace retention period is specified in the define domain command.
  • Backup retention grace period defaults to 30 (BACKRETention=30).
  • Archive retention grace period defaults to 365 (ARCHRETention=365).
The policy domain grace retention period is used when the default MGMTCLASS has no copy group for backup or archive. One of the following situations must also apply:
  • MGMTCLASS for backup no longer contains backup copy group.
  • MGMTCLASS for an archive no longer contains archive copy group.
  • MGMTCLASS no longer exists.
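The grace retention periods can be set explicitly when the domain is defined; for example (the domain name and values are illustrative):

  define domain LINUX description="Policy domain for LINUX clients" backretention=60 archretention=730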

Policy Set

A policy set is a group of rules in a policy domain. These rules specify how data is automatically managed for client nodes in the policy domain. The ACTIVE policy set is the set that contains the policy rules currently in use by all client nodes assigned to the policy domain.

define policyset domainname policyname
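For example, to create a policy set named lab in the LINUX domain (names are illustrative):

  define policyset LINUX lab description="Lab policy set"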

Management classes

A management class associates backup and archive groups with files and specifies if and how client node files are migrated to storage pools. Users can bind, which means associate, their files with a management class by using the include-exclude list. 

 define mgmtclass LINUX lab LINUXMC
  • A management class (MC) represents a business requirements policy or service level agreement.
  • A management class is associated with a backup copy group and archive copy group.
  • The default management class does not require a backup copy group or an archive copy group, but such a group might be useful.
  • Clients can explicitly select a management class.
  • The server database stores management class information.
  • A management class can contain a backup copy group, an archive copy group, both copy groups, or no copy groups.

The management class that is specified in the policy domain defines the backup and archive criteria for client nodes in the policy domain. If you do not specify the Management Class, the default Management Class is used.

Client users have the option of creating an include-exclude list to identify the files that are eligible for backup services and specifying how Tivoli Storage Manager manages backed up or archived files. The INCLUDE and EXCLUDE options are specified in the client option file.
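For example, a client include-exclude list might bind home directories to a management class while excluding temporary files (the paths and the LINUXMC class name are illustrative):

  exclude /tmp/.../*
  include /home/.../* LINUXMC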

Copy Groups

A copy group contains the specific storage management attributes that describe how the server manages backed up or archived files. Copy groups contain the parameters that control the generation and expiration of backup and archive data. There are two types of copy groups: backup and archive. Each management class contains up to two copy groups; if it has two, one is for backups and one is for archives. All copy groups are named STANDARD. The attributes in any copy group define the following information:
  • The storage pool destination where the backed up or archived data is stored
  • The minimum interval, in days, between backup and archive operations
  • Whether the file is backed up regardless of whether it has been modified since the last backup
  • Whether the file can be in use when a user attempts to back up or archive the file
  • The maximum number of different backup versions that are retained for files that are no longer on the file system of the client
  • The retention period, in days, for all but the most recent backup version, and for the last remaining backup version that is no longer on the file system of the client
  • The number of days that an archive copy is retained

The set of backup parameters defines the following attributes:
  • Frequency
  • Mode (modified or absolute)
  • Destination
  • Copy serialization
  • Number of versions
  • Number of versions when the file is deleted
  • Retention days for all but the last version
  • Retention days for the last version when the file is deleted

The set of archive parameters defines the following attributes:
  • Frequency (always CMD)
  • Mode (always ABSOLUTE)
  • Destination
  • Copy serialization
  • Retention days for archive copies

30 October 2014

7.1 TSM Policy Management Overview

TSM policy management addresses the following business (SLA) requirements:
  • Data to back up: Specify the items to store as back up objects.
  • Data to archive: Specify the items to store as archive objects.
  • Location to store the data: Specify the storage pools to use.
  • Number of versions to retain: Specify the number of inactive versions, in addition to the active version, to keep when the data still exists on the client node.
  • The retention period: Specify the duration to store the data associated with the policy.

Policy Management Hierarchy Introduction

Policies, which the administrator creates and stores in the database on the server, can be updated and the updates applied retroactively to already managed data. You might have one policy or many depending on your business needs. Policies include the following elements.
[Image: TSM policy hierarchy]

Policy Domain: A set of rules that is applied to a group of nodes managed by the same set of policy constraints, as defined by the policy sets. A node can be defined to only one policy domain per server. A node can be defined to more than one Tivoli Storage Manager server.

Policy Set: A collection of management class (MC) definitions. A policy domain can contain a number of policy sets. However, only one policy set in a domain is active at a time.

Management Class: A collection of management attributes that describe backup and archive characteristics. There are two sets of MC attributes, one for backup and one for archive. A set of attributes is a copy group. There is a backup copy group and an archive copy group. For Tivoli Space Manager clients only, there are parameters that affect space management.

Default Policy during installation

Tivoli Storage Manager provides a predefined policy domain, policy set, management class, backup copy group, and archive copy group, so you can begin using Tivoli Storage Manager immediately after installation. Each of these objects is stored on the server and named STANDARD.

Backup Retention Grace Period: Specifies the number of days to retain a backup version when the server cannot rebind the file to an appropriate management class. The default is 30 days (BACKRETention=30).

Archive Retention Grace Period: Specifies the number of days to retain an archive copy when the server cannot rebind the file to an appropriate management class. The default is 365 days (ARCHRETention=365).

The following values come with Tivoli Storage Manager in the STANDARD domain:

Type = Backup 
DESTination = Backuppool 
VERExists = 2 
VERDeleted = 1
RETExtra = 30
RETOnly = 60 
SERialization = SHRSTatic

Type = Archive 
DESTination = Archivepool 
FREQuency = Cmd 
RETver = 365
MODE = ABSolute 
SERialization = SHRSTatic

Copying and updating a domain
You can update a domain with new values, or you can copy an existing domain to create a new domain with the same values if required.

To copy a domain into a new domain, use the copy domain command. To update a domain with new parameter values, use the update domain command. After updating any copy group values, you must validate and activate the policy set to make the changes active. For example:

copy domain olddomain newdomain

update domain domainname backretention=60

The EXPORT POLICY command
The export policy command moves policy information from one or more policy domains. Policy information includes policy domain and policy set definitions, management class definitions, backup copy group and archive copy group definitions, schedule definitions for each policy domain, and client node associations.

For example, use the following export policy command to export definitions for policy domains, policy sets, management classes, backup and archive copy groups, and schedules to another server:

   export policy toserver=server2 replacedefs=yes

29 October 2014

6.3 Using Virtual Tape Library (VTL) in TSM Environment

Using Virtual tape library in TSM Environment

A Virtual Tape Library (VTL) is a backup solution that combines traditional tape backup methodology with low-cost disk technology to create an optimized backup and recovery solution. It is an intelligent disk-based library that emulates traditional tape devices and tape formats. Acting like a tape library but with the performance of modern disk drives, it stores data on disk just as a tape library would store it on tape, only faster. Virtual tape backup solutions can be used as a secondary backup stage on the way to tape, or as a standalone tape library solution. A VTL generally consists of a virtual tape appliance or server, plus software that emulates traditional tape devices and formats.

With a virtual tape library (VTL), data can either remain in the virtual tape library as long as there is enough space, or it can migrate to tape for off-site storage, or both. It uses a unique blend of several storage tiers. In addition, it has a combination of high-performance SAN-attached disk and high-performance servers that run Linux, emulating a tape storage device.

VTLs maintain volume space allocation after Tivoli Storage Manager deletes a volume and returns it to a scratch state: the VTL keeps the full size of the volume allocated. This allocation might be large, depending on the devices being emulated. As multiple volumes revert to scratch, the VTL keeps their allocations and can run out of storage space.

The only way for the VTL to realize that a volume is deleted and its space can be reallocated is to write to the beginning of the newly returned scratch volume. The VTL then sees the volume as available. Tivoli Storage Manager can relabel volumes that are returned to scratch if the RELABELSCRATCH parameter is specified.

A virtual tape library (VTL) provides high-performance backup and restore by using disk arrays and virtualization software. A VTL can provide the following benefits:
  • Helps to validate data recoverability.
  • Reduces the resources that managing tape media requires.
  • Improves backup and recovery and simplifies disaster recovery operations.
  • Lowers operational costs and energy usage.
  • Manages more data with less infrastructure.

Defining a virtual tape library (VTL)

Defining a VTL library includes the following parameters:

[Image: Virtual Tape Library (VTL) in TSM]


LIBType=VTL (Required): Specifies that the library has a SCSI-controlled media changer device that is represented by a virtual tape library.

SHAREd: Specifies whether this library is shared with other Tivoli Storage Manager servers in a storage area network (SAN).

RESETDrives: Specifies whether the server preempts a drive reservation with persistent reserve when the server restarts or when a library client or storage agent re-connection is established. If persistent reserve is not supported, the server performs a target reset.
  • Yes: Specifies that drive preemption through persistent reserve and target reset are performed. YES is the default for a library that is defined with SHARED=YES.
  • No: Specifies that drive preemption through persistent reserve and target reset are not performed. NO is the default for a library that is defined with SHARED=NO.
AUTOLabel: Specifies whether the server attempts to automatically label tape volumes. This parameter is optional. The default is NO. To use this option, you need to check in the tapes with CHECKLABEL=BARCODE on the CHECKIN LIBVOLUME command.

RELABELSCRatch: Specifies whether the server relabels volumes that are deleted and returned to scratch. When this parameter is set to YES, a LABEL LIBVOLUME operation is started and the existing volume label is overwritten.

SERial: Specifies the serial number for the defined library. This parameter is optional. The default is AUTODETECT. If SERIAL=AUTODETECT, when you define the path to the library, use the serial number that the library detects as the serial number. If SERIAL=serial_number, the number that you enter is compared to the number that Tivoli Storage Manager detects.
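Combining these parameters, a shared VTL that relabels returned scratch volumes could be defined as follows (the server name, library name, and device special file are illustrative):

  define library myvtl libtype=vtl shared=yes relabelscratch=yes
  define path server1 myvtl srctype=server desttype=library device=/dev/lb0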

6.2 How to manage and monitor TSM tape volumes

Types of Tivoli Storage Manager (TSM) Volumes

Tivoli Storage Manager classifies its volumes into the following two categories:

PRIVATE: A private volume is a labeled volume that is in use or owned by an application. This volume might contain valid data. You must define each private volume, and must mount it by name. Private volumes do not revert to scratch when they become empty.

SCRATCH: A scratch volume is a labeled volume that is empty or contains no valid data. Use this volume to comply with any request to mount a scratch volume. When data is written to a scratch volume, its status changes to private.

You can change the status of volumes by issuing the update libvolume command. With the command, you can assign a private status to a scratch volume or assign a scratch status to a private volume. The private volumes must be administrator-defined volumes with either no data or invalid data. They cannot be partially written volumes that contain active data. Volume statistics are discarded when volume statuses are modified.
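For example, to switch a volume between private and scratch status (the library and volume names are illustrative):

  update libvolume tapelib JK0007L4 status=private
  update libvolume tapelib JK0007L4 status=scratch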

Difference between Scratch and Private Volumes

Scratch volumes:
  • Contain a label
  • Are empty or contain no valid data
  • Change status to PRIVATE when data is written to them
  • Can be used to satisfy any request for mounting a scratch volume

Private volumes:
  • Contain a label.
  • Might contain valid data.
  • Are used or owned by an application.
  • Are used only to satisfy a request to mount the specified volume.

Life cycle of TSM Tape Volumes

You must label tapes first, and then add them to the inventory of tapes available to Tivoli Storage Manager. Check tapes in to Tivoli Storage Manager as either scratch or private. Tapes that are part of the scratch pool are eligible to be selected for use. After a tape is selected, data remains on the tape until it expires or is moved. You can then reclaim the tape and return it to the scratch pool.

[Image: life cycle of TSM tape volumes]

You can use several methods to check in a tape; you can check it in online or offline. You can label and check in one or all of the tapes in one step with the label libvolume command, or use the autolabel=yes feature when you define a 349X, ACSLS, SCSI, external, or manual library. See the following example to learn how to label new tapes and check them in to the library.

Labeling new tapes

When you use tape volumes for the first time, you must label them so that the TSM server can recognize them. Use the following command to label and check in those tapes at the same time. Check the TSM administrator's reference for the full syntax of these commands.

label libvolume library_name checkin=scratch search=bulk|yes labelsource=barcode
    

Search: Specifies that the server searches the library for usable volumes to label. Possible values are as follows:
  • Bulk: Specifies that the server searches the library entry and exit ports for usable volumes to label.
  • Yes: Specifies that the server labels each volume, unless the volume is already labeled or its barcode cannot be read.

Labelsource: Specifies how or whether the server reads sequential media labels of volumes. Possible values are as follows:
  • Barcode: The server attempts to read the barcode label. If the attempt fails, the server does not label the volume, and it shows a message.
  • Prompt: Prompts for volume names as necessary.
  • Vollist (SCSI only): Reads volume names from a file or a specified list.

Checking in Volumes

After the volumes are labeled, you make them available to Tivoli Storage Manager devices by checking the volumes into the library volume inventory with the checkin libvolume command. Checking media into an automated library adds the volumes to the library inventory, for example when a tape is again ready for reuse or has been returned to the library. Check in volumes by using a command line with the following commands:

checkin libvolume tapelib JK0007L4 search=no status=scratch

checkin libvolume tapelib search=yes status=scratch checklabel=barcode

Checking out Volumes

You can remove volumes from automated libraries by issuing the checkout libvolume command. For automated libraries with multiple entry and exit ports, you can issue the checkout libvolume command and include the REMOVE=BULK parameter. Tivoli Storage Manager ejects the volume to the next available entry and exit port.

After it is checked out, the volume moves to the entry or exit port if the device has one. If it does not, the operator is prompted to remove the volume from the drive that is within the device. Tivoli Storage Manager mounts each volume and verifies the internal label before checking it out. Check out volumes by using a command line with the following commands:

checkout libvolume library_name volume_name

checkout libvolume tapelib JK0007L4 checklabel=yes remove=bulk

Auditing a Tape Library

You can issue the audit library command to audit the volume inventories of automated libraries. Auditing the volume inventory ensures that the information that the Tivoli Storage Manager server maintains is consistent with the physical media in the library. The audit is useful when the inventory is moved physically. Tivoli Storage Manager deletes missing volumes and updates the locations of volumes that moved since the last audit. Tivoli Storage Manager cannot add new volumes during an audit. Remember that you should run this command when no other activity is running on the library.

          audit library tapelib checklabel=barcode


28 October 2014

6.1 TSM Physical Storage Devices Introduction

Types of Physical Storage Devices used in TSM Environment

The combination of the library, drives, and their device class represents the physical device environment. A physical library is a collection of one or more drives that share similar media mounting requirements. Each drive mechanism within a device that uses removable media is represented by a drive object.

For devices with multiple drives, including automated libraries, each drive is defined separately and must be associated with a library. A drive is a hardware device that is capable of performing operations on a specific type of sequential media. Drive definitions include information such as the element address (for drives in SCSI libraries), how often the drive is cleaned (for tape drives), and whether the drive is online. The following terms are used in a TSM environment:

Library: A device that organizes and holds one or more media, tape, or disk volumes and an optional robotic mechanism. Use the define library command and the define path command.

Drive: A hardware device capable of performing operations on a specific type of sequential media. Use the define drive command.

Device class: A category that represents a device type that defines the media that the library uses. Use the define devclass command.

Each library and its drives require device drivers for access. IBM devices use IBM device drivers; non-IBM devices use the TSM device driver in the TSM environment. The following are some of the library and device types that can be used in a TSM infrastructure. The installation and configuration depend on the type of library you use.

Types of Physical Libraries

TSM supports different types of libraries that can be used with a TSM server. LTO tape technology is widely used for backup and archive; the advantages of tape are its low cost and the portability of moving cartridges off-site for disaster recovery purposes.

Tape Library example, TS3500
Tivoli Storage Manager servers frequently store extremely large amounts of data. Tape libraries load required tapes into drives automatically, which might be expandable, depending on model. Some libraries hold hundreds of tape drives and thousands of tapes.

Virtual Tape Library (VTL) example, TS7650
VTLs are disk devices that can simulate tape by using a software or firmware interface. VTLs might have built-in functions for compression and data deduplication. They are easily configurable when you perform LAN-free backups to disk.

Disk example, DS8000
Disks can be large or small, and might be attached directly to the Tivoli Storage Manager server or connected by using a SAN. Tivoli Storage Manager can store data in random-access disk pools, or in sequential volumes that the Tivoli Storage Manager software simulates on disk. Using a disk device is fast and reliable, but costs more than other devices.

You can also use different or multiple types of drives in a single library, but you must follow some rules when configuring it. The element number indicates the physical location of a drive within an automated library. IBM Tivoli Storage Manager needs the element number to connect the physical location of the drive to the drive's SCSI address. When you define a drive, element numbers are required only for SCSI libraries. The element number is detected automatically, depending on the capabilities of the library.

How to find devices on AIX

# lsdev -Cc disk
hdisk0 Available 40-60-00-4,0 16 Bit SCSI Disk Drive

# lsdev -Cc tape
rmt0 Available 1P-08-00-0,0 4.0 GB 4mm Tape Drive
rmt1 Available 1P-08-00-2,0 4.0 GB 4mm Tape Drive
smc0 Available 1P-08-00-3,0 IBM 7336 Tape Medium Changer

You can also enter smitty devices from an AIX command line to list tape drives and tape libraries.

How to find devices on Windows & Linux
You can use the tsmdlst command to find devices on Microsoft Windows. Open a system command prompt, and change to the directory where the Tivoli Storage Manager server is installed.

For default installations, use the following path:
c:\Program Files\Tivoli\TSM\server

To generate the information, enter the following command:
      tsmdlst > devlist.txt
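The heading above also mentions Linux; there, attached SCSI tape drives and medium changers can be listed with standard operating system tools, for example (assuming the lsscsi utility is installed):

  lsscsi -g
  cat /proc/scsi/scsi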

How to install and Configure Tape Library

Before you install and configure a tape library for the TSM server, first install suitable device drivers and double-check that the drives are visible by using operating system commands. After the device is installed and recognized by the operating system, configure the devices on the TSM server. Run the following commands to define the library, the drives, and their paths.

DEFine LIBRary library_name LIBType=library_type

DEFine PATH source_name library_name SRCType=SERVer DESTType=LIBRary DEVice=device_name

DEFine Drive library_name drive_name

DEFine PATH source_name drive_name SRCType=SERVer DESTType=DRive LIBRary=library_name DEVice=device_name

Similarly, you have to configure paths for all the available drives.
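As a concrete sketch, configuring a SCSI library with one drive might look like this (the server, library, drive, and device names are illustrative):

  define library tapelib libtype=scsi
  define path server1 tapelib srctype=server desttype=library device=/dev/smc0
  define drive tapelib drive01
  define path server1 drive01 srctype=server desttype=drive library=tapelib device=/dev/rmt0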



27 October 2014

5.8 TSM Node Replication Introduction and Features

TSM Node Replication Overview

The purpose of node replication is to maintain the same level of files on the Tivoli Storage Manager source replication server and the Tivoli Storage Manager target replication servers. 

As part of replication processing, client node data that is deleted from the source replication server is also deleted from the target replication server. When client node data is replicated, only the data that is not on the target replication server is copied.

If a disaster occurs and the Tivoli Storage Manager source replication server is temporarily unavailable, client nodes can recover their data from the Tivoli Storage Manager target replication server. If the source replication server is unrecoverable, you can convert client nodes to perform store operations on the target replication server. You can replicate the following types of client node data:
  • Active and inactive backup data together, or only active backup data
  • Archive data
  • Data that is migrated to a source replication server by Tivoli Storage Manager for Space Management clients
You can use only Tivoli Storage Manager v6.3 or later servers for node replication. However, you can replicate data for client nodes that are v6.3 or earlier. You can also replicate data that is stored on a Tivoli Storage Manager v6.2 or earlier server before you upgraded it to v6.3.

Node replication Features

  • Provides the ability to incrementally replicate a node’s data to a remote target server for disaster recovery purposes.
  • Replicates only directories and files that do not exist on the target server, providing true incremental replication.
  • Deletes data on target server that is deleted on the source server.
  • Replicates data according to replication rules.
  • Replication occurs between a source and target server.
  • A source server can have only one replication target server.
  • A target server can be the replication target for more than one source server.
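As a sketch, setting up replication on the source server typically involves commands like the following. The server name TARGETSRV, node name NODE1, network addresses, and password are hypothetical:

DEFine SERver targetsrv SERVERPAssword=secret HLAddress=target.example.com LLAddress=1500

SET REPLSERVer targetsrv

UPDate Node node1 REPLState=ENabled

REPLicate Node node1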
Watch the following two videos to learn more about TSM Node Replication.








25 October 2014

5.7 How to check errors in tape volumes and move data to other tape volumes

Moving Data within Storage Pool Volumes and Storage Pools

1) You can use the MOVE DATA command to move files from one volume to another volume in the same or a different storage pool by using one of the following commands.

move data volume_name
move data volume_name STGpool=pool_name

2) You can also use the MOVE NODEDATA command to move file spaces for a node from one storage pool to another by using the following command.

move nodedata node_name fromstgpool=source_STGpool tostgpool=target_STGpool

3) Before you delete a storage pool, be sure that you move all the data that you need to retain to another storage pool by using the move data command. 

4) You can use the move nodedata command to move data in a sequential access storage pool for one or more nodes. The command is useful to consolidate data for a specific node within a storage pool. This consolidation helps reduce the number of volume mounts required during a restore operation.

5) You can also use the move nodedata command to move data for a node to a different storage pool. This action is helpful to prepare for client restore processing by first moving data to a random access storage pool. The move nodedata process performs the following tasks:
  • Creates a list of nodes and file spaces to move, according to the criteria that the user specifies in the move nodedata command.
  • Starts a queue thread to determine a list of volumes to process.
The volumes added to this list meet the following criteria:
  • Have an access mode of READWRITE or READONLY.
  • Have data for the node and file spaces that the user specifies.
6) The volume list that is created also becomes the list of volumes to exclude. After the queue thread finishes, a process thread starts. The MAXPROCESS parameter determines the number of process threads that start. Each process thread performs the following tasks:
  • Selects a volume from the volume list.
  • Processes each bit file in the volume to determine whether it must be moved.
  • Uses the move data functions to batch up the file and to move the bit file.
7) The RECONSTRUCT option specifies whether to reconstruct file aggregates during data movement. Reconstruction removes empty space that has accumulated during deletion of logical files from an aggregate.
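For example, the following hypothetical commands first consolidate data for a node within the same sequential pool while reconstructing aggregates, and then move that node's data to a random access pool to prepare for a restore. The node and storage pool names are assumptions:

move nodedata node1 fromstgpool=tapepool reconstruct=yes

move nodedata node1 fromstgpool=tapepool tostgpool=diskpool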

How to check for errors in Storage Pool Tape Volumes

Use the AUDIT VOLUME command to check for inconsistencies between database information and a storage pool volume. While an audit process is active, clients cannot restore data from the specified volume or store new data on that volume.

If the server detects a file with errors, handling of the file will depend on the type of storage pool to which the volume belongs, whether the FIX option is specified on this command, and whether the file is also stored on a volume assigned to other pools.

If the AUDIT VOLUME command does not detect an error in a file that was previously marked as damaged, Tivoli Storage Manager resets the state of the file so that it can be used. This provides a means for resetting the state of damaged files if it is determined that the errors were caused by a correctable hardware problem such as a dirty tape head.

The Tivoli Storage Manager server will not delete archive files that are on deletion hold. If archive retention protection is enabled, the Tivoli Storage Manager server will not delete archive files whose retention period has not expired.

You can use the following command syntax to audit the volume for checking errors.

audit volume volume_name fix=no/yes

Fix Parameter description
Specifies how the server resolves inconsistencies between the database inventory and the specified storage pool volume. This parameter is optional. The default is NO. The actions the server performs depend on whether the volume is assigned to a primary or a copy storage pool.

For Primary Storage Pool Volumes:

Fix=No
Tivoli Storage Manager reports, but does not delete, database records that refer to files with inconsistencies. Tivoli Storage Manager marks the file as damaged in the database. If a backup copy is stored in a copy storage pool, you can restore the file using the RESTORE VOLUME or RESTORE STGPOOL command.

If the file is a cached copy, you must delete references to the file on this volume by issuing the AUDIT VOLUME command and specifying FIX=YES. If the physical file is not a cached copy, and a duplicate is stored in a copy storage pool, it can be restored by using the RESTORE VOLUME or RESTORE STGPOOL command.

Fix=Yes
The server fixes any inconsistencies as they are detected. If the physical file is a cached copy, the server deletes the database records that refer to the cached file; the primary file is stored on another volume. If the physical file is not a cached copy, and the file is also stored in one or more copy storage pools, the error is reported and the physical file is marked as damaged in the database. You can restore the physical file by using the RESTORE VOLUME or RESTORE STGPOOL command.

If the physical file is not a cached copy and is not stored in a copy storage pool, each logical file for which inconsistencies are detected is deleted from the database.
If archive retention protection is enabled by using the SET ARCHIVERETENTIONPROTECTION command, a cached copy of data can be deleted if needed. Data in primary and copy storage pools can only be marked damaged and never deleted.

Do not use the AUDIT VOLUME command with FIX=YES if a restore process (RESTORE STGPOOL or RESTORE VOLUME) is running. The AUDIT VOLUME command could cause the restore to be incomplete.
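As a sketch of this workflow, you might first audit a volume in report-only mode, inspect the files that were marked as damaged, and then restore them from a copy storage pool. The volume and storage pool names below are hypothetical:

audit volume vol001 fix=no

query content vol001 damaged=yes

restore stgpool tapepool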

For Copy Storage Pool Volumes:

Fix=No
The server reports the error and marks the physical file copy as damaged in the database.

Fix=Yes
The server deletes any references to the physical file and any database records that point to a physical file that does not exist.

Watch the following video to learn how to audit a volume and move data to other volumes.