26 October 2013

TSM L3 Interview Questions by IBM India

Latest Tivoli Storage Manager interview questions asked by IBM India for L3-level TSM administrators. Most of the questions are scenario and troubleshooting based, and there can be more than one correct answer. Prepare your answers carefully, and consult a TSM expert or your manager before answering a difficult question. We hope this post helps with your interview. Keep checking this blog for the latest interview questions.

IBM TSM L3 Level Interview Questions

  • What is the difference between 5.5 & 6.x TSM Versions ?
  • Explain about Recovery log structure in TSM 6.x versions ?
  • What is the default Dsmschedlog size ?
  • What is a client option set (cloptset)? How do you define it and what is it used for ?
  • Which TDPs do you have in your environment ? Explain their configuration.
  • What are the different TSM Volume status and what is the meaning of STATUS=PENDING ?
  • Where will you see the reusedelay parameter ?
  • What is the Expiration process and what is its impact ?
  • Tell me about the DRM process ?
  • In how many ways can you delete a TSM DB backup ?
  • How will you schedule a full SQL DB backup ?
  • Difference between q occupancy and q audit occupancy commands ?
  • What are the new features added to Expire Inventory process ?
  • If Recovery log is 98% when Client backup is going on, what will you do ?
  • A client is badly configured and you have to back up 3 TB of data. What do you do ?
  • Suppose the required retention period for one client is 30 days and for another client is 365 days. What are the advantages and disadvantages of sending their backups to the same storage pool ?
  • During a TSM DB restore on an AIX machine, the restore completes 80% and cannot finish. What might be the causes ?
  • Differences between TSM Backupset and image backup ?
  • What is the TSM Randomization ?
  • What is TSM resource utilization ?
  • What are the different types of TSM scheduling modes ?
  • What are the different TSM Server & TSM Client Tuning parameters ?
  • There are 100 tapes eligible for reclamation but no scratch tapes are available. How will you do reclamation ?
  • What does the error code ANR9999D represent ?
  • We have one client machine with 4 nodes, each with a different dsm.opt file containing only a servername and nodename. If each node is assigned to one drive and only one mountpoint is configured, what are the advantages and disadvantages of this configuration ?
  • Tell me how to configure TSM Storage Agent ?
  • What is Offsite Reclamation ?
  • Long-term retention is required but archive should not be used; how do you configure this ?
  • Explain about Version Exist, Version Delete, Retention Extra and Retention Only parameters ?
  • Suppose if I give you new tape library and TSM Server, how do you configure ?

25 October 2013

How to improve Tivoli Storage Manager TSM client restore performance

TSM offers many features and techniques to improve client node restore performance. Following these recommended best practices will help you reduce restoration time. Different TSM setups may require different techniques based on their storage and network configuration. In this post I will discuss some of the IBM-recommended techniques to improve your restore performance. Select the techniques appropriate for your TSM environment.

Most TSM administrators choose daily incremental backups and weekly/monthly full backups to reduce backup time and tape resources. As days go by, the data will no longer be closely packed together on the relevant tape volumes, since active data becomes mixed in with inactive data, expired data, and data from other nodes/filespaces that are not relevant to a particular restore operation. Eventually even a simple single-directory restore becomes problematic and time consuming because the data is spread unevenly across the tapes.

Initially performance will be good, however over time as the data is spread out over a tape or multiple tapes performance becomes degraded. The degradation is caused by the increase in tape locate commands that are used to get to the valid data and perhaps additional tape mounts. Tape drives perform well when they are allowed to stream, reading or writing sequential data, but when forced to skip around on the tape the locate commands begin to dominate.

Tivoli Storage Manager Restore Process

In TSM, there are three basic types of Restore scenarios

Single file Restore
TSM is optimized very well for this scenario, since TSM keeps a record of the exact location of each file.

Multiple file (directory) Restore
Initially this scenario will be fast, but over time it becomes problematic since the required directory (or set of files) may be spread throughout a tape or set of tapes.

Total file system Restore 
Total file system restoration is mostly done during disaster recovery or when migrating to new hardware. This scenario can be optimized if image backup/restore is utilized.

Based upon your environment and requirements you should plan your backup strategy to include image backup and some occasional selective/full backups along with daily incremental backups.

How to improve TSM Client Restore Performance

There are several methods that can help avoid the problem of slow restores. Many of the techniques and strategies below can and should be used together, whereas others are mutually exclusive.

1. Understand when to use No Query Restore and Classic Restore

TSM uses two different methods to determine what needs to be restored, based on the restore specification used: no-query restore (NQR) and classic restore.

NQR is invoked when a simple wildcard that matches an entire directory is used, and none of the options "inactive", "latest", "pick", "fromdate", and "todate" are specified, for example (illustrative path):

Ex: dsmc restore "/home/*" -subdir=yes

Classic restore is invoked when any of those options are used, or when a restricted wildcard that matches only a subset of the files in a directory is given, such as:

Ex: dsmc restore "/home/file*"
The difference matters because the two methods have different performance characteristics. When restoring an entire filesystem, NQR is superior. However, when restoring a single directory, classic restore can be faster in some situations. In those cases classic restore can be invoked by the trick of using a slightly different wildcard such as "?*" (illustrative path):

Ex: dsmc restore "/home/mydir/?*"
Classic restore can also be forced by using the testflag DISABLENQR (for example, dsmc restore "/home/*" -testflag=disablenqr). If small restores are taking too much time, classic restore may provide better performance. The difference becomes more pronounced for large filesystems with millions of files.

2. Use Multiple Sessions Restore

Multi-session restore allows TSM to restore data from multiple tape volumes simultaneously if the desired data resides on multiple tapes, increasing performance.

If collocation by node or filespace is being used, a client's data may reside on only one tape, eliminating the possibility of using multi-session restore.
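As a sketch (node name, wildcard path, and values are illustrative), multi-session restore is typically governed by the MAXNUMMP setting for the node on the server and the RESOURCEUTILIZATION option on the client:

```
/* Server side: allow the node to use up to 4 mount points (tape drives) */
update node node1 maxnummp=4

/* Client side: run the restore requesting higher resource utilization */
dsmc restore "/home/*" -subdir=yes -resourceutilization=5
```

If MAXNUMMP is left at its default, the client may be limited to a single mount point regardless of the client-side setting.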

3. Configure appropriate Collocation Mode

TSM collocation, when enabled, reserves a volume or set of volumes for a particular TSM client, filespace, or group of clients. This allows filespace data to be packed more closely together and avoids excessive tape mounts. With collocation disabled, dozens of tapes could be needed to restore a single filespace or even a single directory. Collocation can be enabled for nodes, groups of nodes, or filespaces.

Using collocation results in increased tape cartridge use, and multi-session restore may not be used effectively since a node's data may reside on a single tape volume. However, collocation by group can eliminate these drawbacks if groups are chosen wisely, such that the total quantity of data stored by a particular group fills several tape volumes.
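As a hedged sketch (group, node, and pool names are illustrative), collocation by group is configured by defining a collocation group, adding nodes to it, and enabling group collocation on the storage pool:

```
/* Create a collocation group and add its member nodes */
define collocgroup prodgroup
define collocmember prodgroup node1,node2,node3

/* Enable group collocation on the tape storage pool */
update stgpool tapepool collocate=group
```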

4. Perform Weekly/Monthly Full Selective Backups of Filesystems

Similar to image backup, a full selective backup of a filesystem has the effect of putting the filesystem data in one place, which provides faster restores. This is the single best way to optimize restore of a subset of a filesystem (directory) when using tape devices.

Periodic selective backups have the drawback of sending all the data when only a fraction may have changed and can interfere with expiration/retention policies.

To minimize the impact of backing up all the data during a particular backup the selective backups for different filespaces/nodes can be staggered throughout the week.

To avoid interfering with expiration/retention policies the selective backups can be performed under a different node name. When a restore of a directory or whole filesystem is needed, the last selective can be restored under the alternate node name, and then any changed files since that selective can be restored from the incremental backups, similar to what is done for image backup, except a single directory restore is possible. For restore of an entire filesystem, image backup/restore is superior.
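A minimal sketch of scheduling a weekly full selective backup (domain, schedule, node names and the path are illustrative):

```
/* Define a weekly selective-backup client schedule in the STANDARD domain */
define schedule standard weekly_home_full action=selective objects="/home/*" options="-subdir=yes" dayofweek=saturday

/* Associate the node with the schedule */
define association standard weekly_home_full node1
```

To stagger the load, different nodes or filespaces can be associated with similar schedules on different days of the week.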

5. Perform Periodic Image Backups

Image backup can be used to take a snapshot at the disk device level, which provides high backup and restore speeds since individual files need not be processed. Image backup is most often integrated with the incremental forever strategy. Incremental backups are performed often and image backups performed less often, depending on how much the data is changing. If most of the data changes between image backups, the benefit of image restore diminishes or disappears.

Upon a full filesystem failure the image can be restored at great speed and then the files that changed between the time that image was taken and the last incremental are then restored. This has the advantage of allowing the drive to stream during the image restore and has the added advantage that the files restored since the image are likely to be closely packed together (since they were backed up relatively recently.)
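The combination described above can be sketched with the backup-archive client (the filesystem path is illustrative):

```
/* Periodic image backup of the filesystem, plus daily incrementals */
dsmc backup image /home
dsmc incremental /home

/* At restore time: restore the image, then apply changes made since the
   image was taken (deleting files that were removed after the image) */
dsmc restore image /home -incremental -deletefiles
```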

6. Running regular Tape Reclamation

Be sure that tape reclamation runs periodically (storage pool RECLAIM threshold < 100%). By freeing up unused areas of tape, reclamation makes data more compact on the tape volumes, providing better restore performance.
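As a sketch (pool name and thresholds are illustrative), reclamation can be left to run automatically via the pool's threshold, or started manually for a limited window:

```
/* Automatic: reclaim volumes once 40% or more of their space is reclaimable */
update stgpool tapepool reclaim=60

/* Manual: run reclamation now, for at most 120 minutes */
reclaim stgpool tapepool threshold=60 duration=120
```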

7. Run/Start Multiple Restore Commands

When restoring multiple filespaces start one restore command for each filespace to allow them to happen simultaneously. This has the greatest effect if the node’s data is collocated by filespace.
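For example (illustrative filespace paths, UNIX shell backgrounding), two restore sessions can be started in parallel, one per filespace:

```
/* Restore two filespaces simultaneously in separate client sessions */
dsmc restore "/fs1/*" -subdir=yes &
dsmc restore "/fs2/*" -subdir=yes &
```

Each session can then mount and read from a different tape volume, which works best when the data is collocated by filespace.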

8. Use Virtual Mountpoints

When filesystems become very large, virtual mount points can be used to make a single filesystem appear as many filesystems to TSM. This is only possible if the directory structure is somewhat static and balanced. Through collocation you can then ensure that different parts of the filesystem go to different tapes, allowing a more optimized restore.
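On UNIX clients this is done with the VIRTUALMOUNTPOINT option in the client system options file; a minimal sketch (paths are illustrative):

```
* dsm.sys: present subtrees of /bigfs to TSM as separate filespaces
VIRTUALMOUNTPOINT /bigfs/users1
VIRTUALMOUNTPOINT /bigfs/users2
```

Each virtual mount point then appears as its own filespace, which can be collocated and restored independently.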

9. In Disaster Recovery Situations Use “Move Data” or “Move Nodedata” Commands To Stage Data To Disk

In a DR situation, time will be needed to physically get the client system back online. Client data can be restored to a disk storage pool from primary tape pool volumes so that when the client system becomes available, much or all of the data can be restored from disk. This can be done by using the MOVE DATA or MOVE NODEDATA commands.
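A hedged sketch of both commands (node, pool, and volume names are illustrative):

```
/* Stage all of one node's data from tape to disk before the client is ready */
move nodedata node1 fromstgpool=tapepool tostgpool=diskpool

/* Or move the contents of a single tape volume to the disk pool */
move data vol0001 stgpool=diskpool
```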

10. Prioritize Importance of Nodes

Important nodes can be put in a separate storage pool that uses tape collocation, utilizes disk as main storage, or uses other features that you may not be willing to use for all clients.

11. Utilize Active Data Pools

Active data pools allow your most recent data to reside on disk while older, inactive data may be stored on tape. Most restores are of the active data and if that data is in an active data pool on disk restores may be much faster. 
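As a sketch (pool, device-class, and domain names are illustrative), an active-data pool is defined with POOLTYPE=ACTIVEDATA, assigned as the domain's active-data destination, and can be seeded from an existing primary pool:

```
/* Define an active-data pool on sequential-access disk */
define stgpool activepool filedevc pooltype=activedata maxscratch=100

/* Make it the active-data destination for the STANDARD policy domain */
update domain standard activedestination=activepool

/* Copy the active versions already stored in the primary tape pool */
copy activedata tapepool activepool
```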

20 October 2013

How to configure TSM storage devices on different types of network topologies (LAN, SAN and NAS)

Tivoli Storage Manager provides various methods for configuring storage devices. TSM Storage devices can be configured on a Local Area Network (LAN), on a Storage Area Network (SAN) for LAN-free data movement  and on Network-Attached Storage (NAS). 

TSM Storage Devices on Local Area Networks (LAN)

In the conventional local area network (LAN) configuration, one or more tape or optical libraries are associated with a single Tivoli Storage Manager server. In a LAN configuration, client data, electronic mail, terminal connection, application program, and device control information must all be handled by the same network. Device control information and client backup and restore data flow across the LAN. Some Tape Libraries cannot be partitioned or shared in a LAN environment.

TSM Storage Devices on Storage Area Networks (SAN)

A SAN is a dedicated storage network that can improve system performance. On a SAN you can consolidate storage and relieve the distance, scalability, and bandwidth limitations of LANs and wide area networks (WANs). Using Tivoli Storage Manager in a SAN allows the following functions.
  • Sharing storage devices among multiple Tivoli Storage Manager servers. 
  • Allowing Tivoli Storage Manager clients, through a storage agent on the client machine, to move data directly to storage devices (LAN-free data movement).

In a SAN you can share tape drives, optical drives, and libraries that are supported by the Tivoli Storage Manager server, including most SCSI devices.

When two or more Tivoli Storage Manager servers share a library, one of the TSM servers, referred to as the library manager, controls device operations. These operations include mount, dismount, volume ownership, and library inventory. The other Tivoli Storage Manager servers, which act as library clients, use server-to-server communications to contact the library manager and request device service. Data moves over the SAN between each server and the storage device.

TSM LAN-Free Data Movement Procedure

Tivoli Storage Manager allows a client, through a storage agent, to directly back up and restore data to a tape library on a SAN. LAN-free data movement requires the installation of storage agent software on the client machine. The TSM server maintains the database and recovery log, and acts as the library manager to control device operations. The storage agent on the client handles the data transfer to the device on the SAN. This implementation frees up bandwidth on the LAN that would otherwise be used for client data movement.

The following outlines a typical backup scenario for a client that uses LAN-free data movement:
  • The client begins a backup operation. The client and the server exchange policy information over the LAN to determine the destination of the backed up data. For a client using LAN-free data movement, the destination is a storage pool that uses a device on the SAN.
  • Because the destination is on the SAN, the client contacts the storage agent, which will handle the data transfer. The storage agent sends a request for a volume mount to the server.
  • The server contacts the storage device and, in the case of a tape library, mounts the appropriate media.
  • The server notifies the client of the location of the mounted media.
  • The client, through the storage agent, writes the backup data directly to the device over the SAN.
  • The storage agent sends file attribute information to the server, and the server stores the information in its database.

If a failure occurs on the SAN path, failover occurs. The client uses its LAN connection to the Tivoli Storage Manager server and moves the client data over the LAN.
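A minimal configuration sketch (server names, addresses, device paths, and option values are all illustrative, not a definitive setup):

```
/* On the TSM server: define the storage agent as a server and give it a
   path to the shared tape drive (device name as seen on the client host) */
define server stagent1 serverpassword=secret hladdress=10.0.0.5 lladdress=1500
define path stagent1 drive01 srctype=server desttype=drive library=tsmlib device=/dev/rmt1

* In the client's dsm.sys stanza: enable LAN-free data movement
ENABLELANFREE        yes
LANFREECOMMMETHOD    tcpip
```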

TSM Storage Devices on Network-Attached Storage (NAS)

Network-attached storage (NAS) file servers are dedicated storage machines whose operating systems are optimized for file-serving functions. NAS file servers typically do not run software acquired from another vendor. Instead, they interact with programs like Tivoli Storage Manager through industry-standard network protocols, such as network data management protocol (NDMP).

Tivoli Storage Manager provides two basic types of configurations that use NDMP for backing up and managing NAS file servers. 
  • In one type of configuration, Tivoli Storage Manager uses NDMP to back up a NAS file server to a library device directly attached to the NAS file server. The NAS file server, which can be distant from the Tivoli Storage Manager server, transfers backup data directly to a drive in a SCSI-attached tape library. Data is stored in special, NDMP-formatted storage pools, which can be backed up to storage media that can be moved offsite for protection in case of an on-site disaster.
  • In the other type of NDMP-based configuration, Tivoli Storage Manager uses NDMP to back up a NAS file server to a Tivoli Storage Manager storage-pool hierarchy. With this type of configuration you can store NAS data directly to disk (either random access or sequential access) and then migrate the data to tape. Data can also be backed up to storage media that can then be moved offsite. The advantage of this type of configuration is that it gives all the backend-data management features associated with a conventional Tivoli Storage Manager storage-pool hierarchy, including migration and reclamation.

In both types of configurations, Tivoli Storage Manager tracks file-system image backups and has the capability to perform NDMP file-level restores.

TSM NDMP Backup Procedure

In backup images produced by network data management protocol (NDMP) operations for a NAS file server, Tivoli Storage Manager creates NAS file-system-level or directory-level image backups. The image backups are different from traditional Tivoli Storage Manager backups because the NAS file server transfers the data to the drives in the library or directly to the Tivoli Storage Manager server. NAS file system image backups can be either full or differential image backups. The first backup of a file system on a NAS file server is always a full image backup. By default, subsequent backups are differential image backups containing only data that has changed in the file system since the last full image backup. If a full image backup does not already exist, a full image backup is performed.

If you restore a differential image, Tivoli Storage Manager automatically restores the full backup image first, followed by the differential image.

NDMP file-level Restoration

Tivoli Storage Manager provides a way to restore data from backup images produced by NDMP operations. To assist users in restoring selected files, you can create a table of contents (TOC) of file-level information for each backup image. Using the Web backup-archive client, users can then browse the TOC and select the files that they want to restore. If you do not create a TOC, users must be able to specify the name of the backup image that contains the file to be restored and the fully qualified name of the file.

You can create a TOC using one of the following commands:
  • BACKUP NODE server command. 
  • BACKUP NAS client command, with include.fs.nas specified in the client options file or specified in the client options set. 
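For example (node and filespace names are illustrative), the server command variant with a TOC looks like this:

```
/* Full NDMP image backup of a NAS filespace, building a table of contents */
backup node nasnode /vol/vol1 mode=full toc=yes

/* Subsequent differential image backups, each with its own TOC */
backup node nasnode /vol/vol1 mode=differential toc=yes
```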

Directory-level Backup and Restore

If you have a large NAS file system, initiating a backup on a directory level reduces backup and restore times, and provides more flexibility in configuring your NAS backups. By defining virtual file spaces, a file system backup can be partitioned among several NDMP backup operations and multiple tape drives. You can also use different backup schedules to back up sub-trees of a file system.

The virtual file space name cannot be identical to any file system on the NAS node. If a file system is created on the NAS device with the same name as a virtual file system, a name conflict will occur on the Tivoli Storage Manager server when the new file space is backed up. 
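A sketch of directory-level backup via a virtual file space (node, filespace, and path names are illustrative):

```
/* Map the directory /db1 of file system /vol/vol1 to a virtual file space */
define virtualfsmapping nasnode /vfs/db1 /vol/vol1 /db1

/* Back up just that sub-tree as its own NDMP image */
backup node nasnode /vfs/db1 mode=full
```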

What are Tivoli Storage Manager (TSM) storage pool and storage-pool volumes ?

In Tivoli Storage Manager terms, tapes are referred to as storage pool volumes (PRIVATE) if they are defined to a specific storage pool, or as scratch volumes (SCRATCH) if they are not yet defined to any storage pool. In other words, a logical group of tape volumes of the same type is known as a storage pool, and the volumes in the storage pool are referred to as storage pool volumes.

TSM Storage Pool

A storage pool is a collection of volumes that are associated with one device class and one media type. For example, a storage pool that is associated with a device class for LTO tape volumes contains only LTO tape volumes. You can control the characteristics of storage pools, such as whether scratch volumes are used. Tivoli Storage Manager supplies default disk storage pools.

For DISK device classes, you must define volumes. For other device classes, such as tape and FILE, you can allow the server to dynamically acquire scratch volumes and define those volumes as needed
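For example (pool name, file path, and size are illustrative), a random-access DISK volume is defined and formatted explicitly:

```
/* Preallocate and format a 2 GB random-access volume in the disk pool */
define volume diskpool /tsm/diskvol1.dsm formatsize=2048
```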

One or more device classes are associated with one library, which can contain multiple drives. When you define a storage pool, you associate the pool with a device class. Volumes are associated with pools.

Ex :   define stgpool diskpool DISK maxsize=500m himig=90 lomig=60 pooltype=primary

          define stgpool copypool LTOTAPE pooltype=copy maxscr=100

          define stgpool activedatapool FILE pooltype=activedata maxscr=100

TSM Storage Pool Volumes

A volume is the basic unit of storage for Tivoli Storage Manager storage pools. Tivoli Storage Manager volumes are classified according to status - private, scratch, and scratch write-once, read-many (WORM). Scratch WORM status applies to 349X libraries only when the volumes are IBM 3592 WORM volumes.
  • A Private volume is a labeled volume that is in use or owned by an application, and may contain valid data. You must define each private volume. Alternatively, for storage pools associated with sequential access disk (FILE) device classes, you can use space triggers to create private, preassigned volumes when predetermined space-utilization thresholds have been exceeded. Private FILE volumes are allocated as a whole. The result is less risk of severe fragmentation than with space dynamically acquired for scratch FILE volumes. A request to mount a private volume must include the name of that volume. Defined private volumes do not return to scratch when they become empty.
  • A Scratch volume is a labeled volume that is empty or contains no valid data and that can be used to satisfy any request to mount a scratch volume. When data is written to a scratch volume, its status is changed to private, and it is defined as part of the storage pool for which the mount request was made. When valid data is moved from the volume and the volume is reclaimed, the volume returns to scratch status and can be reused by any storage pool associated with the library.
  • A WORM scratch volume is similar to a conventional scratch volume. However, WORM volumes cannot be reclaimed by Tivoli Storage Manager reclamation processing. WORM volumes can be returned to scratch status only if they have empty space in which data can be written. Empty space is space that does not contain valid, expired or deleted data. (Deleted and expired data on WORM volumes cannot be overwritten.) If a WORM volume does not have any empty space in which data can be written (for example, if the volume is entirely full of deleted or expired data), the volume remains private.

For each storage pool, you must decide whether to use scratch volumes. If you do not use scratch volumes, you must define private volumes, or you can use space-triggers if the volume is assigned to a storage pool with a FILE device type. Tivoli Storage Manager keeps an inventory of volumes in each automated library it manages and tracks whether the volumes are in scratch or private status. When a volume mount is requested, Tivoli Storage Manager selects a scratch volume only if scratch volumes are allowed in the storage pool. The server can choose any scratch volume that has been checked into the library.

You do not need to allocate volumes to different storage pools associated with the same automated library. Each storage pool associated with the library can dynamically acquire volumes from the library's inventory of scratch volumes. Even if only one storage pool is associated with a library, you do not need to explicitly define all the volumes for the storage pool. The server automatically adds volumes to and deletes volumes from the storage pool.

A disadvantage of using scratch volumes is that volume usage information, which you can use to determine when the media has reached its end of life, is deleted when a private volume is returned to the scratch volume pool.

TSM Library Volume Inventory

A library's volume inventory includes only those volumes that have been checked into that library. This inventory is not necessarily identical to the list of volumes in the storage pools associated with the library. 
  1. A volume can be checked into the library but not be in a storage pool (a scratch volume, a database backup volume, or a backup set volume).
  2. A volume can be defined to a storage pool associated with the library (a private volume), but not checked into the library.
Ex :     Query Library TSMLIB
            Query Volume

Different types of tape drives and device class objects supported in Tivoli Storage Manager (TSM)

Library objects, drive objects, and device-class objects taken together represent physical storage entities. Tapes used for storage must match the drive object's requirements in order to write backups to tape.

Tape Drives

A drive object represents a drive mechanism within a library that uses removable media. For devices with multiple drives, including automated libraries, you must define each drive separately and associate it with a library. Drive definitions can include such information as the element address for drives in SCSI or virtual tape libraries (VTLs), how often a tape drive is cleaned, and whether the drive is online. Tivoli Storage Manager drives include tape and optical drives that can stand alone or that can be part of an automated library. Supported removable media drives also include removable file devices such as rewritable CDs.

Ex:     define drive TSMLIB drive01

           define path server01 drive01 srctype=server desttype=drive library=TSMLIB device=/dev/rmt0

Device class

Each device that is defined to Tivoli Storage Manager is associated with one device class, which specifies the device type and media management information, such as recording format, estimated capacity, and labeling prefixes. A device type identifies a device as a member of a group of devices that share similar media characteristics. Device types include a variety of removable media types as well as FILE, CENTERA, and SERVER. A device class for a tape or optical drive must also specify a library.

Disk Devices
Using Tivoli Storage Manager, you can define random-access disk (DISK device type) volumes using a single command. You can also use space triggers to automatically create preassigned private volumes when predetermined space-utilization thresholds are exceeded.

Removable Media
Tivoli Storage Manager provides a set of specified removable-media device types, such as 8MM for 8 mm tape devices, or REMOVABLEFILE for Jaz or DVD-RAM drives. The GENERICTAPE device type is provided to support certain devices that are not supported by the Tivoli Storage Manager server.

Ex:    define devclass LTOTAPE devtype=lto library=TSMLIB format=ultrium4c mountlimit=12 mountretention=5

          define devclass NASTAPE devtype=nas library=naslib mountretention=0 mountlimit=drives estcapacity=200G

Files on disk as sequential volumes (FILE)
The FILE device type lets you create sequential volumes by creating files on disk storage. To the server, these files have the characteristics of a tape volume. FILE volumes can also be useful when transferring data for purposes such as electronic vaulting or for taking advantage of relatively inexpensive disk storage devices. FILE volumes are a convenient way to use sequential-access disk storage for the following reasons:
  • You do not need to explicitly define scratch volumes. The server can automatically acquire and define scratch FILE volumes as needed.
  • You can create and format FILE volumes using a single command. The advantage of private FILE volumes is that they can reduce disk fragmentation and maintenance overhead.
  • Using a single device class definition that specifies two or more directories, you can create large, FILE-type storage pools. Volumes are created in the directories you specify in the device class definition. For optimal performance, volumes should be associated with file systems.
  • When predetermined space-utilization thresholds have been exceeded, space trigger functionality can automatically allocate space for private volumes in FILE-type storage pools.

The Tivoli Storage Manager server allows concurrent read-access and write-access to a volume in a storage pool associated with the FILE device type. Concurrent access improves restore performance by allowing two or more clients to access the same volume at the same time. Multiple client sessions (archive, retrieve, backup, and restore) or server processes (for example, storage pool backup) can read the volume concurrently. In addition, one client session or one server process can write to the volume while it is being read. Unless sharing with storage agents is specified, the FILE device type does not require you to define library or drive objects. The only required object is a device class.

Ex:   define devclass FILEDEVC devtype=file directory=/opt/FILE

Files on sequential volumes (CENTERA)
The CENTERA device type defines the EMC Centera storage device. It can be used like any standard storage device from which files can be backed up and archived as needed. The Centera storage device can also be configured with the Tivoli Storage Manager server to form a specialized storage system that protects you from inadvertent deletion of mission-critical data such as e-mails, trade settlements, legal documents, and so on. The CENTERA device class creates logical sequential volumes for use with Centera storage pools. These volumes share many of the same characteristics as FILE type volumes. With the CENTERA device type, you are not required to define library or drive objects. CENTERA volumes are created as needed and end in the suffix "CNT."

Ex:    define devclass CENTERADEVC devtype=CENTERA HLADDRESS=,

Sequential volumes on another Tivoli Storage Manager server (SERVER)
The SERVER device type lets you create volumes for one Tivoli Storage Manager server that exist as archived files in the storage hierarchy of another server. These virtual volumes have the characteristics of sequential-access volumes such as tape. No library or drive definition is required. You can use virtual volumes for the following
  • Device-sharing between servers. One server is attached to a large tape library device. Other servers can use that library device indirectly through a SERVER device class.
  • Data-sharing between servers. By using a SERVER device class to export and import data, physical media remains at the original location instead of having to be transported.
  • Immediate offsite storage. Storage pools and databases can be backed up without physically moving media to other locations.
  • Offsite storage of the disaster recovery manager (DRM) recovery plan file.
  • Electronic vaulting
Ex:   define devclass Serverdevc devtype=server servername=tsmserver1 maxcapacity=500G

19 October 2013

What are the different types of tape libraries supported in Tivoli Storage Manager environment

A tape library is one of the storage objects used in Tivoli Storage Manager for storing data and moving it from one tape to another. A physical library is a collection of one or more drives that share similar media-mounting requirements. That is, the drive can be mounted by an operator or by an automated mounting mechanism. Any tape library must first be defined to the TSM server with the appropriate commands before it can be used.

Types of Tape Libraries used in Tivoli Storage Manager

Shared Libraries

Shared libraries are logical libraries that are represented physically by SCSI, 349X, or ACSLS libraries. The physical library is controlled by the Tivoli Storage Manager server configured as a library manager. Tivoli Storage Manager servers using the SHARED library type are library clients to the library manager server. If you want to share a single library among two or more TSM servers, use this library type.

Ex: define library sharedtsm libtype=shared primarylibmanager=libmgr1

Automated Cartridge System Library Software Libraries

An automated cartridge system library software (ACSLS) library is a type of external library that is controlled by Oracle StorageTek ACSLS media-management software. The server can act as a client application to the ACSLS software to use the drives. The StorageTek software performs the following functions
  • Mounts volumes, both private and scratch
  • Dismounts volumes
  • Returns library volumes to scratch status

The ACSLS software selects an appropriate drive for media-access operations. You do not define the drives, check in media, or label the volumes in an external library.

Ex: define library acslib libtype=acsls acsid=1 shared=yes

Manual Libraries

In manual libraries, operators mount the volumes in response to mount-request messages issued by the server. The server sends these messages to the server console and to administrative clients that were started by using the special MOUNTMODE or CONSOLEMODE parameter.

Ex: define library manualmount libtype=manual

SCSI libraries

A SCSI library is controlled through a SCSI interface, attached either directly to the server's host using SCSI cabling or by a storage area network. A robot or other mechanism automatically handles volume mounts and dismounts. The drives in a SCSI library can be of different types. A SCSI library can contain drives of mixed technologies, for example LTO Ultrium and DLT drives. Some examples of this library type are
  • The Oracle StorageTek L700 library
  • The IBM 3590 tape device, with its Automatic Cartridge Facility (ACF)

Ex: define library scsilib libtype=scsi
      define path server1 scsilib srctype=server desttype=library device=/dev/lb0

Virtual Tape Libraries

A virtual tape library (VTL) is a hardware component that can emulate a tape library while using a disk as the underlying storage hardware. Using a VTL, you can create variable numbers of drives and volumes because they are only logical entities within the VTL. The ability to create more drives and volumes increases the capability for parallelism, giving you more simultaneous mounts and tape I/O. VTLs use SCSI and Fibre Channel interfaces to interact with applications. Because VTLs emulate tape drives, libraries, and volumes, an application such as Tivoli Storage Manager cannot distinguish a VTL from real tape hardware unless the library is identified as a VTL.

Ex:  define library vtllib libtype=vtl
       define path server1 vtllib srctype=server desttype=library device=/dev/lb0

349X Libraries

A 349X library is a collection of drives in an IBM 3494. Volume mounts and demounts are handled automatically by the library. A 349X library has one or more library management control points (LMCP) that the server uses to mount and dismount volumes in a drive. Each LMCP provides an independent interface to the robot mechanism in the library. The drives in a 3494 library must be of one type only (either IBM 3490, 3590, or 3592).

Ex:  define library my3494 libtype=349x scratchcategory=550 privatecategory=600 wormscratchcategory=400

External Libraries

An external library is a collection of drives managed by an external media-management system that is not part of Tivoli Storage Manager. The server provides an interface that allows external media management systems to operate with the server. The external media-management system performs the following functions
  • Volume mounts (specific and scratch)
  • Volume dismounts
  • Freeing of library volumes (return to scratch)

The external media manager selects the appropriate drive for media-access operations. You do not define the drives, check in media, or label the volumes in an external library. An external library allows flexibility in grouping drives into libraries and storage pools. The library can have one drive, a collection of drives, or even a part of an automated library.

Ex:  define library extlib libtype=external
define path server1 extlib srctype=server desttype=library externalmanager="/usr/lpp/dtelm/bin/elm"

Zosmedia libraries

A zosmedia library represents a tape or disk storage resource that is attached with a Fibre Channel connection (FICON) and is managed by Tivoli Storage Manager for z/OS Media.
A zosmedia library does not require drive definitions. Paths are defined for the Tivoli Storage Manager server and any storage agents that need access to the zosmedia library resource.

Ex:  define library zebra libtype=zosmedia
       define path sahara zebra srctype=server desttype=library zosmediaserver=oasis

FILE Libraries

A FILE library is a pseudo-library for sequential file volumes. It is created automatically when you issue the DEFINE DEVCLASS command with the DEVTYPE=FILE and SHARED=YES parameters. FILE libraries are necessary only when sharing sequential file volumes between the server and one or more storage agents. The use of FILE libraries requires library sharing. Shared FILE libraries are supported for use in LAN-free backup configurations only. You cannot use a shared FILE library in an environment in which a library manager is used to manage library clients.

Ex:  define library file1 libtype=file shared=yes

How to manage or monitor Tivoli Storage Manager storage pools health and utilization

In Tivoli Storage Manager (TSM), backed-up and archived files are stored in groups of volumes called storage pools. Because each storage pool is assigned to a device class, you can logically group your storage devices to meet your storage-management needs.
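Day-to-day pool health checks can be run from the dsmadmc administrative command line. A minimal monitoring sketch (the pool name tapepool is a hypothetical example):

```
* Show utilization, migration thresholds, and next pool for every storage pool
query stgpool format=detailed

* List volumes in a pool that are no longer read-write
query volume stgpool=tapepool access=readonly,unavailable,destroyed

* Show how much space each client node consumes in each storage pool
query occupancy
```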

TSM Storage Pool Management Techniques


Collocation

The server can keep each client's files on a minimal number of volumes within a storage pool. Because client files are consolidated, restoring collocated files requires fewer media mounts. However, backing up files from different clients requires more mounts.
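Collocation is controlled per storage pool through the COLLOCATE parameter of DEFINE STGPOOL or UPDATE STGPOOL. A hedged sketch (the pool name is hypothetical):

```
* Keep each node's data together on as few volumes as possible
update stgpool tapepool collocate=node

* Alternatives: collocate=group (by collocation group),
* collocate=filespace (by filespace), or collocate=no
```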


Reclamation

Files on sequential access volumes may expire, move, or be deleted. The reclamation process consolidates the active, unexpired data on many volumes onto fewer volumes. The original volumes can then be reused for new data, making more efficient use of media.
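Reclamation can run automatically against a pool threshold or be started on demand. A sketch assuming a hypothetical pool name:

```
* Reclaim volumes automatically once reclaimable space exceeds 60%
update stgpool tapepool reclaim=60

* Or run reclamation on demand against a lower threshold, for a limited time
reclaim stgpool tapepool threshold=50 duration=60
```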

Backing Up Storage pool

Client backup, archive, and space-managed data in primary storage pools can be backed up to copy storage pools for disaster recovery purposes. As client data is written to the primary storage pools, it can also be simultaneously written to copy storage pools.
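Storage pool backup is incremental by default, so it is typically scheduled daily. A sketch, assuming hypothetical primary and copy pool names (diskpool, copypool) that already exist:

```
* Incrementally copy the primary pool to the copy pool, two parallel processes
backup stgpool diskpool copypool maxprocess=2

* The same command is usually placed in an administrative schedule
* or server maintenance script for daily execution
```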

Copying Active Data

The active versions of client backup data can be copied to active-data pools. Active-data pools provide a number of benefits. For example, if the device type associated with an active-data pool is sequential-access disk (FILE), you can eliminate the need for disk staging pools. Restoring client data is faster because FILE volumes are not physically mounted, and the server does not need to position past inactive files that do not need to be restored. An active-data pool that uses removable media, such as tape or optical, lets you reduce the number of volumes for onsite and offsite storage. If you vault data electronically to a remote location, a SERVER-type active-data pool lets you save bandwidth by copying and restoring only active data. As backup client data is written to primary storage pools, the active versions can be simultaneously written to active-data pools.
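Active-data pools are defined with POOLTYPE=ACTIVEDATA and populated with COPY ACTIVEDATA. A sketch, assuming hypothetical pool and device class names:

```
* Define an active-data pool on an existing FILE device class
define stgpool activepool filedevc pooltype=activedata maxscratch=50

* Copy the active backup versions from a primary pool into it
copy activedata diskpool activepool

* Note: for new data to flow in automatically, nodes must belong to a
* domain whose ACTIVEDESTINATION includes the active-data pool
```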


Caching

When the server migrates files from disk storage pools, duplicate copies of the files can remain in cache (disk storage) for faster retrieval. Cached files are deleted only when space is needed. However, client backup operations that use the disk storage pool may have poorer performance.
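Caching is a single attribute of random-access (DISK) storage pools; a sketch with a hypothetical pool name:

```
* Keep cached copies of migrated files on the disk pool for faster restores
update stgpool diskpool cache=yes
```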

Storage Pool Hierarchy

You can establish a hierarchy of storage pools. The hierarchy can be based on the speed or the cost of the devices associated with the pools. Tivoli Storage Manager migrates client files through this hierarchy to ensure the most efficient use of a server's storage devices.
You manage storage volumes by defining, updating, and deleting volumes, and by monitoring the use of server storage. You can also move files within and across storage pools to optimize the use of server storage.
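A two-level hierarchy is built by chaining pools with NEXTSTGPOOL and setting migration thresholds. A sketch, assuming hypothetical pool and device class names (the tape pool must exist before it is referenced):

```
* Tape pool on an existing tape device class
define stgpool tapepool ltoclass maxscratch=100

* Disk pool that migrates to tape at 80% full and stops at 20%
* (disk pool volumes still have to be defined separately)
define stgpool diskpool disk highmig=80 lowmig=20 nextstgpool=tapepool

* Drain the disk pool manually if needed
migrate stgpool diskpool lowmig=0
```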

17 October 2013

What is Tivoli Storage Manager (TSM) Administration Center ?

You can monitor or administer Tivoli Storage Manager (TSM) servers in both command-line and GUI modes. The Administration Center is a Web-based graphical user interface for centrally configuring and managing IBM Tivoli Storage Manager servers V5.3 and later, including V6.1, V6.2, and V6.3.

To administer a Tivoli Storage Manager server using the Administration Center, the Administration Center must be the same or later version than the servers that you want to administer. For example, you can use a V6.3 Administration Center to administer a V6.2 server, but you cannot use a V6.2 Administration Center to administer a V6.3 server.

The Administration Center is a task-oriented interface that replaced the previous administrative Web interface. The Administration Center provides wizards to help guide you through common configuration tasks. Using properties notebooks, you can modify settings and perform advanced management tasks.

To be a good TSM administrator, you should be able to perform administrative tasks in both command-line and GUI modes.

TSM Administration Center Features 

  • You need to log in only once to access multiple Tivoli Storage Manager servers from a single interface.
  • You can easily monitor the health of your storage environment. Regular status updates are provided for
      • Scheduled events
      • The server database and recovery log in server V5.3 and later
      • The database manager in V6.1, V6.2, and V6.3 servers
      • Storage devices, including information about off-line drives and paths, and mounted volumes
  • You can filter and sort storage objects, such as client nodes and library volumes.
  • You can use wizards to more easily perform complex tasks, such as
      • Creating schedules to perform client node and administrative operations
      • Creating a server maintenance script to perform database and storage pool backup, migration, expiration, and reclamation
      • Configuring storage devices. A comprehensive wizard helps you create a library, add drives, check in media volumes, and create storage pools
      • Configuring V6.1, V6.2, and V6.3 servers on local or remote UNIX systems

16 October 2013

Hardware and device management commands in AIX operating system

Device Management commands can be used to manage the different hardware devices that are available in AIX. Some of the devices that you can manage include Logical Volume Manager, file systems, tape library, tape drives, and printers.

Most Common AIX Device Management Commands

  • List all devices on a system

               lsdev -C

  • Install devices for attached peripherals  

               cfgmgr -v

  • List all disk devices on a system

              lsdev -Cc disk

  • List all customized (existing) device classes (-P for complete list)

              lsdev -C -r class

  • Remove hdisk5

              rmdev -dl hdisk5

  • Get device address of hdisk1

             getconf DISK_DEVNAME /dev/hdisk1 ⇐or⇒ bootinfo -o hdisk1

  • Get the size (in MB) of hdisk1

              getconf DISK_SIZE /dev/hdisk1 ⇐or⇒ bootinfo -s hdisk1

  • List all disks belonging to scsi0

             lsdev -Cc disk -p scsi0

  • Find the slot of a PCI Ethernet adapter

               lsslot -c pci -l ent0

  • Find the (virtual) location of an Ethernet adapter

              lscfg -l ent1

  • Find the location codes of all devices in the system

              lscfg

  • List all MPIO paths for hdisk0

                lspath -l hdisk0

  • Find the WWN of the fcs0 HBA adapter

                 lscfg -vl fcs0 | grep Network

  • Temporarily change console output to /console.out

                 swcons /console.out → (Use swcons to change back.) 

  • Get statistics and extended information on fcs0

                  fcstat fcs0

  • Change port type of HBA (This may vary by HBA vendor)

               rmdev -d -l fcnet0
               rmdev -d -l fscsi0
               chdev -l fcs0 -a link_type=pt2pt
               cfgmgr

  • Mirroring rootvg to hdisk1

                 extendvg rootvg hdisk1
                 mirrorvg rootvg
                 bosboot -ad hdisk0
                 bosboot -ad hdisk1
                 bootlist -m normal hdisk0 hdisk1

  • Mount a CD/DVD ROM to /mnt

                  mount -rv cdrfs /dev/cd0 /mnt → (for a CD) 
                  mount -v udfs -o ro /dev/cd0 /mnt → (for a DVD)

                  Note the two different types of read-only flags. Either is Ok.

  • Create a VG, LV, and FS, mirror, and create mirrored LV

                  mkvg -s 256 -y datavg hdisk1 (PP size is 1/4 Gig) 
                  mklv -t jfs2log -y dataloglv datavg 1
                  logform /dev/dataloglv
                  mklv -t jfs2 -y data01lv datavg 8 → (2 Gig LV) 
                  crfs -v jfs2 -d data01lv -m /data01 -A yes 
                  extendvg datavg hdisk2
                  mklvcopy dataloglv 2 → (Note use of mirrorvg in next example) 
                  mklvcopy data01lv 2
                  syncvg -v datavg
                  lsvg -l datavg will now list 2 PPs for every LP
                  mklv -c 2 -t jfs2 -y data02lv datavg 8 → (2 Gig LV) 
                  crfs -v jfs2 -d data02lv -m /data02 -A yes
                  mount -a

  • Move a VG from hdisk1 to hdisk2

                  extendvg datavg hdisk2
                  mirrorvg datavg hdisk2

                 Wait for mirrors to synchronize

                unmirrorvg datavg hdisk1 
                reducevg datavg hdisk1

  • Find the free space on PV hdisk1

                 lspv hdisk1 → (Look for “FREE PPs”)