IBM Tivoli Software



IBM Tivoli Storage Manager for UNIX: Backup-Archive Clients Installation and User's Guide


Appendix B. Configuring the backup-archive client in an HACMP takeover environment

High Availability Cluster Multi Processing (HACMP) allows scheduled Tivoli Storage Manager client operations to continue processing during a failover situation.

For example, suppose a scheduled incremental backup of a clustered volume is running on machine-a. A failure causes the client acceptor daemon (CAD) to fail over to machine-b, and machine-b then reconnects to the server. If the reconnection occurs within the start window for that event, the scheduled command is restarted. The restarted incremental backup reexamines the files that were sent to the server before the failover, and then catches up to the point at which the backup was interrupted.

If a failover occurs during a user-initiated client session, the Tivoli Storage Manager CAD starts on the node that handles the takeover, so that it can process scheduled events and provide Web client access. You can install Tivoli Storage Manager locally on each node of an HACMP environment. You can also install and configure the Tivoli Storage Manager Scheduler Service for each cluster node to manage all local disks and each cluster group that contains physical disk resources.

Note:
If you use the httpport option to specify a fixed port for Web client access, you always know which port to point to. Without it, the CAD finds the first available port, starting with 1581, and is likely to change ports when a failover or fallback occurs. Choosing a fixed port value, such as 1585, saves you the inconvenience of trying to determine which port the CAD might have changed to.
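For example, a fixed Web client port can be assigned in the client options with a single line (the value 1585 is simply the illustration from the note above):

```
* Fix the Web client port so it survives failover and fallback
httpport 1585
```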

The clusternode option determines if you want the Tivoli Storage Manager client to back up cluster resources and participate in cluster failover for high availability. See Clusternode for more information.

The following software is required:

The HACMP Cluster Information Daemon must also be running.


Installing the backup-archive client

Install the Tivoli Storage Manager Backup-Archive client software on a local disk on each node in the cluster that you want to participate in an HACMP takeover. The following client configuration files must be stored locally:

The following client configuration files must be stored externally in a shared disk subsystem so they can be defined as a cluster resource and be available to the takeover node during a failover. Each resource group must have the following configuration:


Configuring the backup-archive client to process local nodes

You can edit your dsm.opt file on each local node to process local disk drives using the following options:

clusternode
Do not specify this option when processing local drives. See Clusternode for more information.

nodename
If no value is specified, Tivoli Storage Manager uses the local machine name. See Nodename for more information.

domain
If no value is specified, Tivoli Storage Manager processes all local drives that are not owned by the cluster. See Domain for more information.
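Putting these options together, a dsm.opt file for processing only local drives might look like the following sketch; the server stanza name and file systems are hypothetical:

```
* dsm.opt for local, non-clustered volumes (names are hypothetical)
servername local_server
* clusternode is deliberately omitted for local processing
* nodename is omitted so the local machine name is used
domain /home /usr /var
```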

You can also configure the Tivoli Storage Manager Backup-Archive Scheduler Service to back up the local cluster nodes.


Configuring Tivoli Storage Manager backup-archive client to process cluster disk resources

Ensure that Tivoli Storage Manager manages each cluster group that contains physical disk resources as a unique node. This ensures that Tivoli Storage Manager correctly manages all disk resources, regardless of which cluster node owns the resource at the time of backup.

Step 1: Register the client to a server

A Tivoli Storage Manager client in an HACMP cluster must be registered to a Tivoli Storage Manager server with an assigned node name. Consider the following conditions when registering your node name:
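As a sketch, the registration itself is performed from the Tivoli Storage Manager administrative command line, with one node name for each cluster group that contains physical disk resources in addition to the local node names; the names and password below are hypothetical:

```
register node machine-a secretpw
register node cluster_grp1 secretpw
```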

Step 2: Configure the client system options file

Each node in the HACMP cluster that runs the Tivoli Storage Manager client must have the following settings defined in each respective dsm.sys file:

The server stanzas defined to back up non-clustered volumes must have the following special characteristics:

The server stanzas defined to back up clustered volumes must have the following special characteristics:

Other options can be set as desired.
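A minimal dsm.sys sketch with one stanza for non-clustered volumes and one for a clustered resource group might look like the following; all names are hypothetical, and the exact options each stanza requires depend on the characteristics listed above:

```
SErvername local_server
   COMMMethod       TCPip
   TCPServeraddress tsm.example.com
   PASSWORDAccess   generate

SErvername cluster_server
   COMMMethod       TCPip
   TCPServeraddress tsm.example.com
   PASSWORDAccess   generate
   CLUSTERnode      yes
   NODename         cluster_grp1
```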

Step 3: Configure the client user options file

The client user options file (dsm.opt) for the Tivoli Storage Manager client that will manage your clustered file spaces must reside on the shared volumes in the cluster resource group. Define the DSM_CONFIG environment variable to point to this dsm.opt file. Make sure the dsm.opt file contains the following settings:
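For example, with the resource group's shared file system mounted at a hypothetical /shared_vg, you might set DSM_CONFIG to /shared_vg/tsm/dsm.opt, with contents along these lines (the server stanza name and file systems are hypothetical):

```
* dsm.opt stored on the shared volume for the clustered node
servername cluster_server
domain     /shared_fs1 /shared_fs2
```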


Defining the client as an HACMP application

The Tivoli Storage Manager client must be defined as an application to HACMP to participate in failover processing. See HACMP for AIX 4.4.1 Installation Guide, SC23-4278, for detailed instructions on how to perform this procedure. Following is a summary of this procedure:

  1. Start HACMP for AIX system management with the following command:
    smit hacmp
    
  2. Select Cluster Configuration, Cluster Resources, Define Application Servers, and Add an Application Server.
  3. Enter the following field values:

    Server Name
    Enter an ASCII text string that identifies the server. You use this name to refer to the application server when you define it as a resource during node configuration. The server name can include alphabetic and numeric characters and underscores. Use no more than 31 characters.

    Start Script
    Enter the full path name of the script that starts the server. This script is called by the cluster event scripts and must reside on a local disk. This script must be in the same location on each cluster node that might start the server. The start script is used in the following cases:
    1. When HACMP is started and resource groups are activated.
    2. When a failover occurs and the resource group is started on another node.
    3. When a fallback occurs (a failed node re-enters the cluster) and the resource group is transferred back to the node that re-enters the cluster.

    A sample start script (StartClusterTsmClient.sh.smp) is provided in the /usr/tivoli/tsm/client/ba/bin directory.

    Stop Script
    Enter the full path name of the script that stops the server. This script is called by the cluster event scripts and must reside on a local disk. This script must be in the same location on each cluster node that might stop the server. The stop script is used in the following cases:
    1. When HACMP is stopped.
    2. When a failover occurs because of a component failure in a resource group; the other members are stopped so that the entire group can be restarted on the target node in the failover.
    3. When a fallback occurs and the resource group is stopped on the node currently hosting it, to allow transfer back to the node that re-enters the cluster.

    A sample stop script (StopClusterTsmClient.sh.smp) is provided in the /usr/tivoli/tsm/client/ba/bin directory.

  4. Press Enter to add your information to the HACMP for AIX configuration.
  5. Press F10 after the command completes to exit smit and return to the command line. Press F3 to perform other configuration tasks.
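The sample scripts shipped in /usr/tivoli/tsm/client/ba/bin are the authoritative versions. As a rough sketch under hypothetical paths, a start script sets DSM_CONFIG to the options file on the shared volume and starts the client acceptor daemon, and a stop script ends it:

```shell
#!/bin/ksh
# Hypothetical start script sketch (modeled on StartClusterTsmClient.sh.smp)
export DSM_CONFIG=/shared_vg/tsm/dsm.opt   # options file on the shared volume
/usr/tivoli/tsm/client/ba/bin/dsmcad &     # start the client acceptor daemon
exit 0
```

```shell
#!/bin/ksh
# Hypothetical stop script sketch (modeled on StopClusterTsmClient.sh.smp)
pid=$(ps -ef | awk '/dsmcad/ && !/awk/ { print $2 }')   # find the running dsmcad
[ -n "$pid" ] && kill $pid                              # end it before takeover
exit 0
```

Both scripts must reside at the same local path on every node that can host the resource group.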

The Storage Manager client must be in a resource group with a cascading or rotating takeover relationship. The client does not support a concurrent access resource group. See HACMP for AIX 4.4.1 Planning Guide, SC23-4277, for additional information regarding HACMP topology and strategy.


Creating an HACMP resource group to add a client

You must first create an HACMP resource group so you can add the client to it. The following is a summary of this procedure:

  1. Start HACMP for AIX system management with the following command:
    smit hacmp
    
  2. Select Cluster Configuration, Cluster Resources, Define Resource Groups, and Add a Resource Group. The Add a Resource Group window is displayed.
  3. On the Add a Resource Group window, enter the following field values:

    Resource Group Name
    Enter an ASCII text string that identifies the resource group. The resource group name can include alphabetic and numeric characters and underscores. Use no more than 31 characters.

    Node Relationship
    Select Cascading.

    Participating Node Names/Default Node Priority
    Select the node names that are participating in the resource group. Add the nodes in order of priority. The node owner of the resource group should be the first node listed.
  4. Click OK.
  5. Press F10 to exit smit and return to the command line. Press F3 to perform other configuration tasks.



Adding the client to an HACMP resource group

The Tivoli Storage Manager client must be defined to a cluster resource group. See HACMP for AIX 4.4.1 Installation Guide, SC23-4278, for detailed instructions on how to perform this procedure. Following is a summary of how to define resources as part of a resource group:

  1. Start HACMP for AIX system management with the following command:
    smit hacmp
    
  2. Select Cluster Configuration, Cluster Resources, and Change/Show Resources/Attributes for a Resource Group. Press Enter.
  3. Select the desired resource group.
  4. Press Enter. The Configure a Resource Group screen appears.
  5. Enter values that define all the resources you want to add to this resource group.
  6. Synchronize cluster resources after entering field values in Step 5. Do this by selecting Cluster Configuration, Cluster Resources, and Synchronize Cluster Resources.
  7. Press F10 to exit smit and return to the command line. Press F3 to perform other configuration tasks.

The Tivoli Storage Manager client must be added to the resource group that contains the file systems to be backed up. These file systems must also be the same file systems specified by the domain option in the dsm.opt file defined for this client instance.

Both JFS and NFS file systems can be defined as cluster resources. NFS supports only two-node clusters in a cascading takeover relationship.

