High Availability Cluster Multi-Processing (HACMP) allows scheduled Tivoli
Storage Manager client operations to continue processing during a failover
situation.
For example, a scheduled incremental backup of a clustered volume is
running on machine-a. A failure causes the client acceptor
daemon (CAD) to fail over to machine-b, and machine-b
then reconnects to the server. If the reconnection occurs within the
start window for that event, the scheduled command is restarted. The
scheduled incremental backup reexamines the files that were sent to the
server before the failover, then "catches up" to the point where it
stopped when the failover occurred.
If a failover occurs during a user-initiated client session, the Tivoli
Storage Manager CAD starts on the node that is handling the takeover.
This allows it to process scheduled events and provide Web client
access. You can install Tivoli Storage Manager locally on each node of
an HACMP environment. You can also install and configure the Tivoli
Storage Manager Scheduler Service for each cluster node to manage all local
disks and each cluster group containing physical disk resources.
- Note:
- Use the httpport option to specify a fixed port for Web
client access. Otherwise, the CAD selects the first available port, starting
with 1581, and will likely change ports when a failover or fallback
occurs. Choosing a fixed port value, such as 1585, saves you the
inconvenience of determining which port the CAD has moved to.
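For example, you might fix the port in the server stanza of the
dsm.sys file (the stanza name shown here is illustrative):

   servername tsmsrv_cluster1
      httpport 1585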
The clusternode option determines whether the Tivoli Storage
Manager client backs up cluster resources and participates in cluster
failover for high availability. See Clusternode for more information.
The following software is required:
- HACMP for AIX Version 4.4 (or later) or HACMP/ES for AIX Version
4.4 (or later)
- AIX 5L (5.1 and 5.2)
The HACMP Cluster Information Daemon must also be running.
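Because the Cluster Information Daemon runs under the AIX System Resource
Controller, you can check whether it is active, and start it if necessary,
with the following commands. The subsystem is typically named clinfo (or
clinfoES for HACMP/ES); confirm the name on your system:

   lssrc -s clinfo
   startsrc -s clinfo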
Install the Tivoli Storage Manager Backup-Archive client software on a
local disk on each node in the cluster you want to participate in an HACMP
takeover. The following client configuration files must be stored
locally:
- The client executables and related files should reside in the same
location on each node in the cluster.
- The API executable and configuration files should reside in the default
API installation directory (/usr/tivoli/tsm/client/api/bin).
- The system options file (dsm.sys) should reside in the
default client installation directory
(/usr/tivoli/tsm/client/ba/bin).
The following client configuration files must be stored externally in a
shared disk subsystem so they can be defined as a cluster resource and be
available to the takeover node during a failover. Each resource group
must have the following configuration:
- The client option file (dsm.opt), include-exclude file,
and password file must be placed in a directory on the shared disk.
- The client error log file must be placed on the shared disk volumes to
maintain a single continuous error log file.
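For example, if a resource group's shared file system is mounted at
/cluster1 (an illustrative mount point), the shared files might be arranged
as follows:

   /cluster1/tsm/dsm.opt         client user options file
   /cluster1/tsm/inclexcl.lst    include-exclude file
   /cluster1/tsm/TSM.PWD         password file (set passworddir to /cluster1/tsm)
   /cluster1/tsm/dsmerror.log    client error log
   /cluster1/tsm/dsmsched.log    schedule log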
You can edit your dsm.opt file on each local node to
process local disk drives using the following options:
- clusternode
- Do not specify this option when processing local drives. See Clusternode for more information.
- nodename
- If no value is specified, Tivoli Storage Manager uses the local machine
name. See Nodename for more information.
- domain
- If no value is specified, Tivoli Storage Manager processes all local
drives that are not owned by the cluster. See Domain for more information.
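A minimal dsm.opt for a local (non-clustered) client instance
might therefore contain only a servername option; the stanza name
is illustrative:

   servername tsmsrv_local

Because clusternode, nodename, and domain are
omitted, the client uses the local machine name and processes all local
drives that the cluster does not own.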
You can also configure the Tivoli Storage Manager Backup-Archive Scheduler
Service to back up the local cluster nodes.
Ensure that Tivoli Storage Manager manages each cluster group that contains
physical disk resources as a unique node. This ensures that Tivoli
Storage Manager correctly manages all disk resources, regardless of which
cluster node owns the resource at the time of backup.
A Tivoli Storage Manager client in an HACMP cluster must be registered to a
Tivoli Storage Manager server with an assigned node name. Consider the
following conditions when registering your node name:
- If you back up local volumes that are not defined as cluster resources,
you must use separate node names (and separate client instances) for the
non-clustered and clustered volumes.
- The node name used to back up clustered volumes defaults to the cluster
name, not the host name. We recommend that you choose a node name
related to the cluster resource group to be managed by that node.
- If multiple resource groups are defined in the HACMP environment to
failover independently, then separate node names must be defined per resource
group.
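For example, an administrator might register one node per host for
non-clustered volumes and one node per resource group on the Tivoli Storage
Manager server. The node names and passwords shown here are illustrative:

   register node node_a    secret1
   register node node_b    secret2
   register node cluster1  secret3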
Each node in the HACMP cluster that runs the Tivoli Storage Manager client
must have the following settings defined in each respective
dsm.sys file:
- Separate server stanzas to back up non-clustered volumes
- Separate server stanzas for each cluster resource group to be backed up
The server stanzas defined to back up non-clustered volumes must have the
following special characteristics:
- The value of the tcpclientaddress option must be the
service IP address. This is the IP address used for primary
traffic to and from the node.
- If the client will back up and restore non-clustered volumes without being
connected to the HACMP cluster, the value of the tcpclientaddress
option must be the boot IP address. This is the IP address
used to start the machine (node) before it rejoins the HACMP cluster.
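The following is a sketch of such a stanza in dsm.sys; the
stanza name, server address, and service IP address are illustrative:

   servername tsmsrv_local
      commmethod        tcpip
      tcpport           1500
      tcpserveraddress  tsmserver.example.com
      tcpclientaddress  9.1.1.10
      passwordaccess    generate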
The server stanzas defined to back up clustered volumes must have the
following special characteristics:
- clusternode yes
- The nodename value must be related to the resource
group. If nodename is not specified, the cluster name is
used.
- The tcpclientaddress option must refer to the service IP
address of the HACMP node.
- The passworddir option must point to a directory on the shared
volumes that are part of the cluster resource group.
- The errorlogname and schedlogname options must point
to files on the shared volumes that are part of the cluster resource
group.
- All inclexcl statements must point to files on the shared
volumes that are part of the cluster resource group.
- Set the managedservices statement to indicate that the
scheduler (or Web client) should be managed by the client acceptor
daemon.
Other options can be set as desired.
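The following is a sketch of a stanza for one cluster resource group,
following the characteristics above; all names, addresses, and paths are
illustrative:

   servername tsmsrv_cluster1
      commmethod        tcpip
      tcpport           1500
      tcpserveraddress  tsmserver.example.com
      tcpclientaddress  9.1.1.20
      clusternode       yes
      nodename          cluster1
      passwordaccess    generate
      passworddir       /cluster1/tsm
      errorlogname      /cluster1/tsm/dsmerror.log
      schedlogname      /cluster1/tsm/dsmsched.log
      inclexcl          /cluster1/tsm/inclexcl.lst
      managedservices   schedule webclient
      httpport          1585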
The client user options file (dsm.opt) for the Tivoli
Storage Manager client that will manage your clustered file spaces must reside
on the shared volumes in the cluster resource group. Define the
DSM_CONFIG environment variable to point to this dsm.opt
file. Make sure the dsm.opt file contains the
following settings:
- The value of the servername option must be the server stanza in
the dsm.sys file that defines the parameters for backing up
clustered volumes. The dsm.sys file can reside on
shared space.
- If the dsm.sys file resides on a local disk, each node
on the cluster must have a matching stanza.
- Define clustered filespaces to be backed up with the domain
option.
- Other options can be set as desired.
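Continuing the illustrative example, the dsm.opt file on the
shared volume might contain the following, with the DSM_CONFIG environment
variable exported before the client or CAD is started:

   servername tsmsrv_cluster1
   domain     /cluster1fs1 /cluster1fs2

   export DSM_CONFIG=/cluster1/tsm/dsm.opt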
The Tivoli Storage Manager client must be defined as an application to
HACMP to participate in failover processing. See HACMP for AIX
4.4.1 Installation Guide, SC23-4278, for detailed
instructions on how to perform this procedure. Following is a summary
of this procedure:
- Start HACMP for AIX system management with the following command:
smit hacmp
- Select Cluster Configuration, Cluster Resources,
Define Application Servers, and Add an Application
Server.
- Enter the following field values:
- Server Name
- Enter an ASCII text string that identifies the server. You use this
name to refer to the application server when you define it as a resource
during node configuration. The server name can include alphabetic and
numeric characters and underscores. Use no more than 31
characters.
- Start Script
- Enter the full path name of the script that starts the server. This
script is called by the cluster event scripts and must reside on a local
disk. This script must be in the same location on each cluster node
that might start the server. The start script is used in the following
cases:
- when HACMP is started and resource groups are activated
- when a failover occurs and the resource group is started on another node
- when fallback occurs (a failed node re-enters the cluster) and the
resource group is transferred back to the node re-entering the cluster.
A sample start script (StartClusterTsmClient.sh.smp) is
provided in the /usr/tivoli/tsm/client/ba/bin directory.
- Stop Script
- Enter the full path name of the script that stops the server. This
script is called by the cluster event scripts and must reside on a local
disk. This script must be in the same location on each cluster node
that might stop the server. The stop script is used in the following
cases:
- when HACMP is stopped
- when a failover occurs because a component in a resource group fails; the
other members of the group are stopped so that the entire group can be
restarted on the takeover node
- when a fallback occurs and the resource group is stopped on the node
currently hosting it to allow transfer back to the node re-entering the
cluster.
A sample stop script (StopClusterTsmClient.sh.smp) is
provided in the /usr/tivoli/tsm/client/ba/bin directory. A
minimal sketch of both scripts follows this procedure.
- Press Enter to add this information to the HACMP for AIX configuration.
- Press F10 after the command completes to exit smit and return
to the command line. Press F3 to perform other configuration
tasks.
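The following is a minimal sketch of what such start and stop scripts might
contain, assuming the illustrative paths used earlier in this section. The
supplied samples handle process tracking more robustly and are the
recommended starting point.

   #!/bin/ksh
   # Illustrative start script: start the client acceptor daemon (CAD)
   # with the options file on this resource group's shared disk.
   export DSM_CONFIG=/cluster1/tsm/dsm.opt
   /usr/tivoli/tsm/client/ba/bin/dsmcad
   exit 0

   #!/bin/ksh
   # Illustrative stop script: end the dsmcad started for this resource
   # group. This simplification ends every dsmcad on the node; a production
   # script, like the supplied sample, should track the specific process
   # that it started.
   for pid in $(ps -ef | grep dsmcad | grep -v grep | awk '{print $2}'); do
      kill "$pid"
   done
   exit 0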
The Storage Manager client must be in a resource group with a
cascading or rotating takeover relationship. The
client does not support a concurrent access resource group. See
HACMP for AIX 4.4.1 Planning Guide, SC23-4277, for
additional information regarding HACMP topology and
strategy.
You must first create an HACMP resource group so you can add the client to
it. The following is a summary of this procedure:
- Start HACMP for AIX system management with the following command:
smit hacmp
- Select Cluster Configuration, Cluster Resources,
Define Resource Groups, and Add a Resource Group.
The Add a Resource Group window is displayed.
- On the Add a Resource Group window, enter the following field
values:
- Resource Group Name
- Enter an ASCII text string that identifies the resource group. The
resource group name can include alphabetic and numeric characters and
underscores. Use no more than 31 characters.
- Node Relationship
- Select Cascading.
- Participating Node Names/Default Node Priority
- Select the node names that are participating in the resource group.
Add the nodes in order of priority. The node owner of the resource
group should be the first node listed.
- Click OK.
- Press F10 to exit smit and return to the command line.
Press F3 to perform other configuration tasks.
The Tivoli Storage Manager client must be defined to a cluster resource
group. See HACMP for AIX 4.4.1 Installation
Guide, SC23-4278, for detailed instructions on how to perform this
procedure. Following is a summary of how to define resources as part of
a resource group:
- Start HACMP for AIX system management with the following command:
smit hacmp
- Select Cluster Configuration, Cluster Resources, and
Change/Show Resources/Attributes for a Resource Group. Press
Enter.
- Select the desired resource group.
- Press Enter. The Configure a Resource Group screen
appears.
- Enter values that define all the resources you want to add to this
resource group.
- After you enter the field values in the previous step, synchronize the
cluster resources by selecting Cluster Configuration, Cluster
Resources, and Synchronize Cluster Resources.
- Press F10 to exit smit and return to the command line.
Press F3 to perform other configuration tasks.
The Tivoli Storage Manager client must be added to the resource group that
contains the file systems to be backed up. These file systems must also
be the same file systems specified by the domain option in the
dsm.opt file defined for this client instance.
Both JFS and NFS file systems can be defined as cluster resources.
NFS is supported only in two-node clusters with a cascading takeover
relationship.