IBM TSM (Spectrum Protect) on Veritas Cluster Server

By vermaden

Until today I have mostly shared articles about free and open systems. Now it's time to share some so-called enterprise experience 🙂 Not so long ago I set up an IBM TSM instance as a highly available service on Symantec Veritas Cluster Server.

ibm-tsm-logo.png

The IBM TSM (Tivoli Storage Manager) has been rebranded by IBM into IBM Spectrum Protect, and in a similar period of time Symantec moved Veritas Cluster Server into InfoScale Availability while spinning Veritas off as a separate/dedicated company for this purpose.

The instructions I want to share today should apply just as well to the latest versions of Veritas Cluster Server and its later InfoScale Availability incarnations. The introduction of the IBM Spectrum Protect 8.1 family was mostly a rebranding/cleanup of the various Spectrum Protect/TSM modules and additions, so that they all carry a common 8.1 label. These instructions were made for IBM TSM (Spectrum Protect) version 7.1.6, so they should still be very similar for current versions.

This highly available IBM TSM instance is part of a larger Backup Consolidation project which uses two physical servers to serve both this IBM TSM service and a Dell/EMC NetWorker backup server. When everything is OK, one of the nodes is dedicated to IBM TSM and the other one is used by Dell/EMC NetWorker, so all physical resources are well saturated and we do not 'waste' a whole node that sits idle 99% of the time waiting for the first node to crash. Of course, if the first node misbehaves or has a hardware failure, then both IBM TSM and Dell/EMC NetWorker run nicely on a single node. It is also very convenient for various maintenance tasks to be able to switch all services to the other node and work in peace on the first one, but I do not have to tell you that. The third and last service, shared between these two, is the Oracle RMAN Catalog, which holds metadata about the Oracle databases – also for backup/restore purposes.

I will not write instructions here for installing the operating system (we use amd64 RHEL 6.x) or for setting up the Veritas Cluster Server, as I installed it earlier and it is quite simple to set up. These instructions focus on creating the IBM TSM highly available service and on using/allocating resources from the IBM Storwize V5030 storage array, where 400 GB SSD disks are dedicated to the IBM TSM DB2 database instance and 1.8 TB 10K SAS disks are dedicated to DRAID groups that will serve space for the IBM TSM storage pools, implemented as the latest IBM TSM container pools with deduplication and compression enabled. The head of the IBM Storwize V5030 storage array is shown below.

ibm-tsm-v5030-photo.jpg

Each node is an IBM System x3650 M4 server with two dual-port 8Gb FC cards and one dual-port 10GE card … along with built-in 1GE ports for the Veritas Cluster Server heartbeats. Each has 192 GB RAM and two 6-core CPUs @ 3.5 GHz, which translates to 12 physical cores or 24 HTT threads per node. The three internal SSD drives are used for the system only, in a RAID1 + SPARE configuration. All clustered resources come from the IBM Storwize V5030 FC/SAN storage array. The operating system installed on these nodes is amd64 RHEL 6.x and the Veritas Cluster Server is at version 6.2.x. The IBM System x3650 M4 server is shown below.

ibm-tsm-x3650-m4.jpg

All of the settings/tuning/decisions were made based on the IBM TSM documentation and the great IBM Spectrum Protect Blueprints resources from the valuable IBM developerWorks wiki.

Storage Array Setup

First we need to create the MDISKs. We used DRAID with double parity protection + spare for each MDISK, with 17 SAS 1.8 TB 10K disks each. That gives 14 disks for data, 2 for parity, and 1 spare, all of which provide I/O thanks to the DRAID setup. We have three such MDISKs of ~21.7 TB each, for a total of 65.1 TB for the IBM TSM containers. Of course all these 3 'pool' MDISKs are in one storage group. The LUNs for the IBM TSM DB2 database come from 5 SSD 400 GB disks set up as a DRAID array with 1 parity and 1 spare disk. This gives 3 disks for data, 1 for parity, and 1 for spare space, which yields about 1.1 TB for the IBM TSM DB2 database.
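For reference, on Storwize/Spectrum Virtualize firmware levels that support DRAID, such arrays are created with the mkdistributedarray CLI command. The sketch below is only an illustration; the drive class IDs and pool names are assumptions, not values taken from this setup:

# 17 x 1.8 TB 10K SAS -> DRAID6: 14 data + 2 parity, 1 distributed spare (rebuild area)
IBM_Storwize:V5030:superuser> mkdistributedarray -level raid6 -driveclass 1 -drivecount 17 -stripewidth 16 -rebuildareas 1 POOL_TSM
# 5 x 400 GB SSD -> DRAID5: 3 data + 1 parity, 1 distributed spare (rebuild area)
IBM_Storwize:V5030:superuser> mkdistributedarray -level raid5 -driveclass 0 -drivecount 5 -stripewidth 4 -rebuildareas 1 POOL_TSM_DB2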

Here are LUNs created from these MDISKs.

ibm-tsm-v5030.png

I needed to remove some names of course 🙂

LUNs Initialization

Veritas Cluster Server needs to have the storage prepared as disk groups, which are similar in concept to (but more powerful than) LVM volume groups. Below are the instructions to first detect and then initialize these LUNs from the IBM Storwize V5030 storage array.
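For readers coming from LVM, here is a rough, conceptual-only mapping of the VxVM commands used below:

# Rough VxVM <-> LVM correspondence (conceptual only):
#   vxdisksetup -i <disk>            ~  pvcreate <disk>
#   vxdg init <dg> <name>=<disk>...  ~  vgcreate <vg> <pv>...
#   vxassist -g <dg> make <vol> ...  ~  lvcreate -n <lv> -L <size> <vg>
#   /dev/vx/dsk/<dg>/<vol>           ~  /dev/<vg>/<lv>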

[root@300 ~]# haconf -makerw
[root@300 ~]# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:LVM - - online invalid
storwizev70000_00000a auto:cdsdisk - (dg_fencing) online
storwizev70000_00000b auto:cdsdisk stgFC_00B NSR_dg_nsr online
storwizev70000_00000c auto:cdsdisk stgFC_00C NSR_dg_nsr online
storwizev70000_00000d auto:cdsdisk stgFC_00D NSR_dg_nsr online
storwizev70000_00000e auto:cdsdisk stgFC_00E NSR_dg_nsr online
storwizev70000_00000f auto:cdsdisk - (RMAN_dg) online
storwizev70000_00001a auto:none - - online invalid
storwizev70000_00001b auto:none - - online invalid
storwizev70000_00001c auto:none - - online invalid
storwizev70000_00001d auto:none - - online invalid
storwizev70000_00001e auto:none - - online invalid
storwizev70000_00001f auto:none - - online invalid
storwizev70000_000008 auto:cdsdisk - (dg_fencing) online
storwizev70000_000009 auto:cdsdisk - (dg_fencing) online
storwizev70000_000010 auto:cdsdisk - (RMAN_dg) online
storwizev70000_000011 auto:cdsdisk - (RMAN_dg) online
storwizev70000_000012 auto:none - - online invalid
storwizev70000_000013 auto:none - - online invalid
storwizev70000_000014 auto:none - - online invalid
storwizev70000_000015 auto:none - - online invalid
storwizev70000_000016 auto:none - - online invalid
storwizev70000_000017 auto:none - - online invalid
storwizev70000_000018 auto:none - - online invalid
storwizev70000_000019 auto:none - - online invalid
storwizev70000_000020 auto:none - - online invalid
[root@300 ~]# vxdisksetup -i storwizev70000_00001a
[root@300 ~]# vxdisksetup -i storwizev70000_00001b
[root@300 ~]# vxdisksetup -i storwizev70000_00001c
[root@300 ~]# vxdisksetup -i storwizev70000_00001d
[root@300 ~]# vxdisksetup -i storwizev70000_00001e
[root@300 ~]# vxdisksetup -i storwizev70000_00001f
[root@300 ~]# vxdisksetup -i storwizev70000_000012
[root@300 ~]# vxdisksetup -i storwizev70000_000013
[root@300 ~]# vxdisksetup -i storwizev70000_000014
[root@300 ~]# vxdisksetup -i storwizev70000_000015
[root@300 ~]# vxdisksetup -i storwizev70000_000016
[root@300 ~]# vxdisksetup -i storwizev70000_000017
[root@300 ~]# vxdisksetup -i storwizev70000_000018
[root@300 ~]# vxdisksetup -i storwizev70000_000019
[root@300 ~]# vxdisksetup -i storwizev70000_000020
[root@300 ~]# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:LVM - - online invalid
storwizev70000_00000a auto:cdsdisk - (dg_fencing) online
storwizev70000_00000b auto:cdsdisk stgFC_00B NSR_dg_nsr online
storwizev70000_00000c auto:cdsdisk stgFC_00C NSR_dg_nsr online
storwizev70000_00000d auto:cdsdisk stgFC_00D NSR_dg_nsr online
storwizev70000_00000e auto:cdsdisk stgFC_00E NSR_dg_nsr online
storwizev70000_00000f auto:cdsdisk - (RMAN_dg) online
storwizev70000_00001a auto:cdsdisk - - online
storwizev70000_00001b auto:cdsdisk - - online
storwizev70000_00001c auto:cdsdisk - - online
storwizev70000_00001d auto:cdsdisk - - online
storwizev70000_00001e auto:cdsdisk - - online
storwizev70000_00001f auto:cdsdisk - - online
storwizev70000_000008 auto:cdsdisk - (dg_fencing) online
storwizev70000_000009 auto:cdsdisk - (dg_fencing) online
storwizev70000_000010 auto:cdsdisk - (RMAN_dg) online
storwizev70000_000011 auto:cdsdisk - (RMAN_dg) online
storwizev70000_000012 auto:cdsdisk - - online
storwizev70000_000013 auto:cdsdisk - - online
storwizev70000_000014 auto:cdsdisk - - online
storwizev70000_000015 auto:cdsdisk - - online
storwizev70000_000016 auto:cdsdisk - - online
storwizev70000_000017 auto:cdsdisk - - online
storwizev70000_000018 auto:cdsdisk - - online
storwizev70000_000019 auto:cdsdisk - - online
storwizev70000_000020 auto:cdsdisk - - online
[root@300 ~]# vxdg init TSM0_dg \
                stgFC_020=storwizev70000_000020 \
                stgFC_012=storwizev70000_000012 \
                stgFC_016=storwizev70000_000016 \
                stgFC_013=storwizev70000_000013 \
                stgFC_014=storwizev70000_000014 \
                stgFC_015=storwizev70000_000015 \
                stgFC_017=storwizev70000_000017 \
                stgFC_018=storwizev70000_000018 \
                stgFC_019=storwizev70000_000019 \
                stgFC_01A=storwizev70000_00001a \
                stgFC_01B=storwizev70000_00001b \
                stgFC_01C=storwizev70000_00001c \
                stgFC_01D=storwizev70000_00001d \
                stgFC_01E=storwizev70000_00001e \
                stgFC_01F=storwizev70000_00001f
[root@300 ~]# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:LVM - - online invalid
storwizev70000_00000a auto:cdsdisk - (dg_fencing) online
storwizev70000_00000b auto:cdsdisk stgFC_00B NSR_dg_nsr online
storwizev70000_00000c auto:cdsdisk stgFC_00C NSR_dg_nsr online
storwizev70000_00000d auto:cdsdisk stgFC_00D NSR_dg_nsr online
storwizev70000_00000e auto:cdsdisk stgFC_00E NSR_dg_nsr online
storwizev70000_00000f auto:cdsdisk - (RMAN_dg) online
storwizev70000_00001a auto:cdsdisk stgFC_01A TSM0_dg online
storwizev70000_00001b auto:cdsdisk stgFC_01B TSM0_dg online
storwizev70000_00001c auto:cdsdisk stgFC_01C TSM0_dg online
storwizev70000_00001d auto:cdsdisk stgFC_01D TSM0_dg online
storwizev70000_00001e auto:cdsdisk stgFC_01E TSM0_dg online
storwizev70000_00001f auto:cdsdisk stgFC_01F TSM0_dg online
storwizev70000_000008 auto:cdsdisk - (dg_fencing) online
storwizev70000_000009 auto:cdsdisk - (dg_fencing) online
storwizev70000_000010 auto:cdsdisk - (RMAN_dg) online
storwizev70000_000011 auto:cdsdisk - (RMAN_dg) online
storwizev70000_000012 auto:cdsdisk stgFC_012 TSM0_dg online
storwizev70000_000013 auto:cdsdisk stgFC_013 TSM0_dg online
storwizev70000_000014 auto:cdsdisk stgFC_014 TSM0_dg online
storwizev70000_000015 auto:cdsdisk stgFC_015 TSM0_dg online
storwizev70000_000016 auto:cdsdisk stgFC_016 TSM0_dg online
storwizev70000_000017 auto:cdsdisk stgFC_017 TSM0_dg online
storwizev70000_000018 auto:cdsdisk stgFC_018 TSM0_dg online
storwizev70000_000019 auto:cdsdisk stgFC_019 TSM0_dg online
storwizev70000_000020 auto:cdsdisk stgFC_020 TSM0_dg online
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_instance maxsize=32G stgFC_020
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_active_log maxsize=128G stgFC_012
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_archive_log maxsize=384G stgFC_016
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_01 maxsize=300G stgFC_013
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_02 maxsize=300G stgFC_014
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_03 maxsize=300G stgFC_015
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_01 maxsize=900G stgFC_017
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_02 maxsize=900G stgFC_018
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_db_backup_03 maxsize=900G stgFC_019
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_01 maxsize=6700G stgFC_01A
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_02 maxsize=6700G stgFC_01B
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_03 maxsize=6700G stgFC_01C
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_04 maxsize=6700G stgFC_01D
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_05 maxsize=6700G stgFC_01E
[root@300 ~]# vxassist -g TSM0_dg make TSM0_vol_pool0_06 maxsize=6700G stgFC_01F
[root@300 ~]# vxprint -u h | grep ^sd | column -t
sd stgFC_00B-01 NSR_vol_index-02 ENABLED 399.95g 0.00 - - -
sd stgFC_00C-01 NSR_vol_media-02 ENABLED 9.96g 0.00 - - -
sd stgFC_00D-01 NSR_vol_nsr-02 ENABLED 79.96g 0.00 - - -
sd stgFC_00E-01 NSR_vol_res-02 ENABLED 9.96g 0.00 - - -
sd stgFC_012-01 TSM0_vol_active_log-01 ENABLED 127.96g 0.00 - - -
sd stgFC_016-01 TSM0_vol_archive_log-01 ENABLED 383.95g 0.00 - - -
sd stgFC_017-01 TSM0_vol_db_backup_01-01 ENABLED 899.93g 0.00 - - -
sd stgFC_018-01 TSM0_vol_db_backup_02-01 ENABLED 899.93g 0.00 - - -
sd stgFC_019-01 TSM0_vol_db_backup_03-01 ENABLED 899.93g 0.00 - - -
sd stgFC_013-01 TSM0_vol_db_01-01 ENABLED 299.95g 0.00 - - -
sd stgFC_014-01 TSM0_vol_db_02-01 ENABLED 299.95g 0.00 - - -
sd stgFC_015-01 TSM0_vol_db_03-01 ENABLED 299.95g 0.00 - - -
sd stgFC_020-01 TSM0_vol_instance-01 ENABLED 31.96g 0.00 - - -
sd stgFC_01A-01 TSM0_vol_pool0_01-01 ENABLED 6.54t 0.00 - - -
sd stgFC_01B-01 TSM0_vol_pool0_02-01 ENABLED 6.54t 0.00 - - -
sd stgFC_01C-01 TSM0_vol_pool0_03-01 ENABLED 6.54t 0.00 - - -
sd stgFC_01D-01 TSM0_vol_pool0_04-01 ENABLED 6.54t 0.00 - - -
sd stgFC_01E-01 TSM0_vol_pool0_05-01 ENABLED 6.54t 0.00 - - -
sd stgFC_01F-01 TSM0_vol_pool0_06-01 ENABLED 6.54t 0.00 - - -
[root@300 ~]# vxprint -u h -g TSM0_dg | column -t
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg TSM0_dg TSM0_dg - - - - - -
dm stgFC_01A storwizev70000_00001a - 6.54t - - - -
dm stgFC_01B storwizev70000_00001b - 6.54t - - - -
dm stgFC_01C storwizev70000_00001c - 6.54t - - - -
dm stgFC_01D storwizev70000_00001d - 6.54t - - - -
dm stgFC_01E storwizev70000_00001e - 6.54t - - - -
dm stgFC_01F storwizev70000_00001f - 6.54t - - - -
dm stgFC_012 storwizev70000_000012 - 127.96g - - - -
dm stgFC_013 storwizev70000_000013 - 299.95g - - - -
dm stgFC_014 storwizev70000_000014 - 299.95g - - - -
dm stgFC_015 storwizev70000_000015 - 299.95g - - - -
dm stgFC_016 storwizev70000_000016 - 383.95g - - - -
dm stgFC_017 storwizev70000_000017 - 899.93g - - - -
dm stgFC_018 storwizev70000_000018 - 899.93g - - - -
dm stgFC_019 storwizev70000_000019 - 899.93g - - - -
dm stgFC_020 storwizev70000_000020 - 31.96g - - - -
v TSM0_vol_active_log fsgen ENABLED 127.96g - ACTIVE - -
pl TSM0_vol_active_log-01 TSM0_vol_active_log ENABLED 127.96g - ACTIVE - -
sd stgFC_012-01 TSM0_vol_active_log-01 ENABLED 127.96g 0.00 - - -
v TSM0_vol_archive_log fsgen ENABLED 383.95g - ACTIVE - -
pl TSM0_vol_archive_log-01 TSM0_vol_archive_log ENABLED 383.95g - ACTIVE - -
sd stgFC_016-01 TSM0_vol_archive_log-01 ENABLED 383.95g 0.00 - - -
v TSM0_vol_db_backup_01 fsgen ENABLED 899.93g - ACTIVE - -
pl TSM0_vol_db_backup_01-01 TSM0_vol_db_backup_01 ENABLED 899.93g - ACTIVE - -
sd stgFC_017-01 TSM0_vol_db_backup_01-01 ENABLED 899.93g 0.00 - - -
v TSM0_vol_db_backup_02 fsgen ENABLED 899.93g - ACTIVE - -
pl TSM0_vol_db_backup_02-01 TSM0_vol_db_backup_02 ENABLED 899.93g - ACTIVE - -
sd stgFC_018-01 TSM0_vol_db_backup_02-01 ENABLED 899.93g 0.00 - - -
v TSM0_vol_db_backup_03 fsgen ENABLED 899.93g - ACTIVE - -
pl TSM0_vol_db_backup_03-01 TSM0_vol_db_backup_03 ENABLED 899.93g - ACTIVE - -
sd stgFC_019-01 TSM0_vol_db_backup_03-01 ENABLED 899.93g 0.00 - - -
v TSM0_vol_db_01 fsgen ENABLED 299.95g - ACTIVE - -
pl TSM0_vol_db_01-01 TSM0_vol_db_01 ENABLED 299.95g - ACTIVE - -
sd stgFC_013-01 TSM0_vol_db_01-01 ENABLED 299.95g 0.00 - - -
v TSM0_vol_db_02 fsgen ENABLED 299.95g - ACTIVE - -
pl TSM0_vol_db_02-01 TSM0_vol_db_02 ENABLED 299.95g - ACTIVE - -
sd stgFC_014-01 TSM0_vol_db_02-01 ENABLED 299.95g 0.00 - - -
v TSM0_vol_db_03 fsgen ENABLED 299.95g - ACTIVE - -
pl TSM0_vol_db_03-01 TSM0_vol_db_03 ENABLED 299.95g - ACTIVE - -
sd stgFC_015-01 TSM0_vol_db_03-01 ENABLED 299.95g 0.00 - - -
v TSM0_vol_instance fsgen ENABLED 31.96g - ACTIVE - -
pl TSM0_vol_instance-01 TSM0_vol_instance ENABLED 31.96g - ACTIVE - -
sd stgFC_020-01 TSM0_vol_instance-01 ENABLED 31.96g 0.00 - - -
v TSM0_vol_pool0_01 fsgen ENABLED 6.54t - ACTIVE - -
pl TSM0_vol_pool0_01-01 TSM0_vol_pool0_01 ENABLED 6.54t - ACTIVE - -
sd stgFC_01A-01 TSM0_vol_pool0_01-01 ENABLED 6.54t 0.00 - - -
v TSM0_vol_pool0_02 fsgen ENABLED 6.54t - ACTIVE - -
pl TSM0_vol_pool0_02-01 TSM0_vol_pool0_02 ENABLED 6.54t - ACTIVE - -
sd stgFC_01B-01 TSM0_vol_pool0_02-01 ENABLED 6.54t 0.00 - - -
v TSM0_vol_pool0_03 fsgen ENABLED 6.54t - ACTIVE - -
pl TSM0_vol_pool0_03-01 TSM0_vol_pool0_03 ENABLED 6.54t - ACTIVE - -
sd stgFC_01C-01 TSM0_vol_pool0_03-01 ENABLED 6.54t 0.00 - - -
v TSM0_vol_pool0_04 fsgen ENABLED 6.54t - ACTIVE - -
pl TSM0_vol_pool0_04-01 TSM0_vol_pool0_04 ENABLED 6.54t - ACTIVE - -
sd stgFC_01D-01 TSM0_vol_pool0_04-01 ENABLED 6.54t 0.00 - - -
v TSM0_vol_pool0_05 fsgen ENABLED 6.54t - ACTIVE - -
pl TSM0_vol_pool0_05-01 TSM0_vol_pool0_05 ENABLED 6.54t - ACTIVE - -
sd stgFC_01E-01 TSM0_vol_pool0_05-01 ENABLED 6.54t 0.00 - - -
v TSM0_vol_pool0_06 fsgen ENABLED 6.54t - ACTIVE - -
pl TSM0_vol_pool0_06-01 TSM0_vol_pool0_06 ENABLED 6.54t - ACTIVE - -
sd stgFC_01F-01 TSM0_vol_pool0_06-01 ENABLED 6.54t 0.00 - - -
[root@300 ~]# vxinfo -p -g TSM0_dg | column -t
vol TSM0_vol_instance fsgen Started
plex TSM0_vol_instance-01 ACTIVE
vol TSM0_vol_active_log fsgen Started
plex TSM0_vol_active_log-01 ACTIVE
vol TSM0_vol_archive_log fsgen Started
plex TSM0_vol_archive_log-01 ACTIVE
vol TSM0_vol_db_01 fsgen Started
plex TSM0_vol_db_01-01 ACTIVE
vol TSM0_vol_db_02 fsgen Started
plex TSM0_vol_db_02-01 ACTIVE
vol TSM0_vol_db_03 fsgen Started
plex TSM0_vol_db_03-01 ACTIVE
vol TSM0_vol_db_backup_01 fsgen Started
plex TSM0_vol_db_backup_01-01 ACTIVE
vol TSM0_vol_db_backup_02 fsgen Started
plex TSM0_vol_db_backup_02-01 ACTIVE
vol TSM0_vol_db_backup_03 fsgen Started
plex TSM0_vol_db_backup_03-01 ACTIVE
vol TSM0_vol_pool0_01 fsgen Started
plex TSM0_vol_pool0_01-01 ACTIVE
vol TSM0_vol_pool0_02 fsgen Started
plex TSM0_vol_pool0_02-01 ACTIVE
vol TSM0_vol_pool0_03 fsgen Started
plex TSM0_vol_pool0_03-01 ACTIVE
vol TSM0_vol_pool0_04 fsgen Started
plex TSM0_vol_pool0_04-01 ACTIVE
vol TSM0_vol_pool0_05 fsgen Started
plex TSM0_vol_pool0_05-01 ACTIVE
vol TSM0_vol_pool0_06 fsgen Started
plex TSM0_vol_pool0_06-01 ACTIVE
[root@300 ~]# find /dev/vx/dsk -name TSM0_\*
/dev/vx/dsk/TSM0_dg
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_06
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_05
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_04
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_03
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_02
/dev/vx/dsk/TSM0_dg/TSM0_vol_db_01
/dev/vx/dsk/TSM0_dg/TSM0_vol_archive_log
/dev/vx/dsk/TSM0_dg/TSM0_vol_active_log
/dev/vx/dsk/TSM0_dg/TSM0_vol_instance
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_06 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_05 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_04 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_03 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_02 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_pool0_01 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_03 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_02 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_backup_01 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_03 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_02 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_db_01 &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_archive_log &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_active_log &
[root@300 ~]# mkfs -t vxfs -o bsize=8192,largefiles /dev/vx/rdsk/TSM0_dg/TSM0_vol_instance &

[root@300 ~]# haconf -dump -makero
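One note on the mkfs commands above: they were sent to the background with &, so if you script this step, wait for all of them to finish before deporting the disk group. A minimal sketch, run in the same shell that started the jobs:

[root@300 ~]# wait && echo 'all mkfs jobs finished'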

Veritas Cluster Server Group

Now that we have the LUNs initialized and gathered into a disk group, we may create the cluster service group.

[root@300 ~]# haconf -makerw
[root@300 ~]# hagrp -add TSM0_site
VCS NOTICE V-16-1-10136 Group added; populating SystemList and setting the Parallel attribute recommended before adding resources
[root@300 ~]# hagrp -modify TSM0_site SystemList 300 0 301 1
[root@300 ~]# hagrp -modify TSM0_site AutoStartList 300 301
[root@300 ~]# hagrp -modify TSM0_site Parallel 0
[root@300 ~]# hares -add TSM0_nic_bond0 NIC TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_nic_bond0 Critical 1
[root@300 ~]# hares -modify TSM0_nic_bond0 PingOptimize 1
[root@300 ~]# hares -modify TSM0_nic_bond0 Device bond0
[root@300 ~]# hares -modify TSM0_nic_bond0 Enabled 1
[root@300 ~]# hares -probe TSM0_nic_bond0 -sys 301
[root@300 ~]# hares -add TSM0_ip_bond0 IP TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_ip_bond0 Critical 1
[root@300 ~]# hares -modify TSM0_ip_bond0 Device bond0
[root@300 ~]# hares -modify TSM0_ip_bond0 Address 10.20.30.44
[root@300 ~]# hares -modify TSM0_ip_bond0 NetMask 255.255.255.0
[root@300 ~]# hares -modify TSM0_ip_bond0 Enabled 1
[root@300 ~]# hares -link TSM0_ip_bond0 TSM0_nic_bond0
[root@300 ~]# hares -add TSM0_dg DiskGroup TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_dg Critical 1
[root@300 ~]# hares -modify TSM0_dg DiskGroup TSM0_dg
[root@300 ~]# hares -modify TSM0_dg Enabled 1
[root@300 ~]# hares -probe TSM0_dg -sys 301
[root@300 ~]# mkdir /tsm0
[root@301 ~]# mkdir /tsm0

I did not want to type all these commands over and over again, so I generated them as shown below.

[LOCAL] % cat > LIST << __EOF
stgFC_020 32 /tsm0 TSM0_vol_instance TSM0_mnt_instance
stgFC_012 128 /tsm0/active_log TSM0_vol_active_log TSM0_mnt_active_log
stgFC_016 384 /tsm0/archive_log TSM0_vol_archive_log TSM0_mnt_archive_log
stgFC_013 300 /tsm0/db/db_01 TSM0_vol_db_01 TSM0_mnt_db_01
stgFC_014 300 /tsm0/db/db_02 TSM0_vol_db_02 TSM0_mnt_db_02
stgFC_015 300 /tsm0/db/db_03 TSM0_vol_db_03 TSM0_mnt_db_03
stgFC_017 900 /tsm0/db_backup/db_backup_01 TSM0_vol_db_backup_01 TSM0_mnt_db_backup_01
stgFC_018 900 /tsm0/db_backup/db_backup_02 TSM0_vol_db_backup_02 TSM0_mnt_db_backup_02
stgFC_019 900 /tsm0/db_backup/db_backup_03 TSM0_vol_db_backup_03 TSM0_mnt_db_backup_03
stgFC_01A 6700 /tsm0/pool0/pool0_01 TSM0_vol_pool0_01 TSM0_mnt_pool0_01
stgFC_01B 6700 /tsm0/pool0/pool0_02 TSM0_vol_pool0_02 TSM0_mnt_pool0_02
stgFC_01C 6700 /tsm0/pool0/pool0_03 TSM0_vol_pool0_03 TSM0_mnt_pool0_03
stgFC_01D 6700 /tsm0/pool0/pool0_04 TSM0_vol_pool0_04 TSM0_mnt_pool0_04
stgFC_01E 6700 /tsm0/pool0/pool0_05 TSM0_vol_pool0_05 TSM0_mnt_pool0_05
stgFC_01F 6700 /tsm0/pool0/pool0_06 TSM0_vol_pool0_06 TSM0_mnt_pool0_06
__EOF
[LOCAL]# cat LIST \
           | while read STG SIZE MNTPOINT VOL MNTNAME
             do
               echo sleep 0.2; echo hares -add ${MNTNAME} Mount TSM0_site
               echo sleep 0.2; echo hares -modify ${MNTNAME} Critical 1
               echo sleep 0.2; echo hares -modify ${MNTNAME} SnapUmount 0
               echo sleep 0.2; echo hares -modify ${MNTNAME} MountPoint ${MNTPOINT}
               echo sleep 0.2; echo hares -modify ${MNTNAME} BlockDevice /dev/vx/dsk/TSM0_dg/${VOL}
               echo sleep 0.2; echo hares -modify ${MNTNAME} FSType vxfs
               echo sleep 0.2; echo hares -modify ${MNTNAME} MountOpt largefiles
               echo sleep 0.2; echo hares -modify ${MNTNAME} FsckOpt %-y
               echo sleep 0.2; echo hares -modify ${MNTNAME} Enabled 1
               echo sleep 0.2; echo hares -probe ${MNTNAME} -sys 301
               echo sleep 0.2; echo hares -link ${MNTNAME} TSM0_dg
               echo
             done
[root@300 ~]# hares -add TSM0_mnt_instance Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_instance Critical 1
[root@300 ~]# hares -modify TSM0_mnt_instance SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_instance MountPoint /tsm0
[root@300 ~]# hares -modify TSM0_mnt_instance BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_instance
[root@300 ~]# hares -modify TSM0_mnt_instance FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_instance MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_instance FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_instance Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_instance -sys 301
[root@300 ~]# hares -link TSM0_mnt_instance TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_active_log Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_active_log Critical 1
[root@300 ~]# hares -modify TSM0_mnt_active_log SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_active_log MountPoint /tsm0/active_log
[root@300 ~]# hares -modify TSM0_mnt_active_log BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_active_log
[root@300 ~]# hares -modify TSM0_mnt_active_log FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_active_log MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_active_log FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_active_log Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_active_log -sys 301
[root@300 ~]# hares -link TSM0_mnt_active_log TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_archive_log Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_archive_log Critical 1
[root@300 ~]# hares -modify TSM0_mnt_archive_log SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_archive_log MountPoint /tsm0/archive_log
[root@300 ~]# hares -modify TSM0_mnt_archive_log BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_archive_log
[root@300 ~]# hares -modify TSM0_mnt_archive_log FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_archive_log MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_archive_log FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_archive_log Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_archive_log -sys 301
[root@300 ~]# hares -link TSM0_mnt_archive_log TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_db_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_01 MountPoint /tsm0/db/db_01
[root@300 ~]# hares -modify TSM0_mnt_db_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_01
[root@300 ~]# hares -modify TSM0_mnt_db_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_01 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_db_01 -sys 301
[root@300 ~]# hares -link TSM0_mnt_db_01 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_db_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_02 MountPoint /tsm0/db/db_02
[root@300 ~]# hares -modify TSM0_mnt_db_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_02
[root@300 ~]# hares -modify TSM0_mnt_db_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_02 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_db_02 -sys 301
[root@300 ~]# hares -link TSM0_mnt_db_02 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_db_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_03 MountPoint /tsm0/db/db_03
[root@300 ~]# hares -modify TSM0_mnt_db_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_03
[root@300 ~]# hares -modify TSM0_mnt_db_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_03 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_db_03 -sys 301
[root@300 ~]# hares -link TSM0_mnt_db_03 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_db_backup_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 MountPoint /tsm0/db_backup/db_backup_01
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_01
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_01 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_db_backup_01 -sys 301
[root@300 ~]# hares -link TSM0_mnt_db_backup_01 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_db_backup_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 MountPoint /tsm0/db_backup/db_backup_02
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_02
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_02 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_db_backup_02 -sys 301
[root@300 ~]# hares -link TSM0_mnt_db_backup_02 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_db_backup_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 MountPoint /tsm0/db_backup/db_backup_03
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_db_backup_03
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_db_backup_03 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_db_backup_03 -sys 301
[root@300 ~]# hares -link TSM0_mnt_db_backup_03 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_pool0_01 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 MountPoint /tsm0/pool0/pool0_01
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_01
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_01 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_pool0_01 -sys 301
[root@300 ~]# hares -link TSM0_mnt_pool0_01 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_pool0_02 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 MountPoint /tsm0/pool0/pool0_02
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_02
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_02 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_pool0_02 -sys 301
[root@300 ~]# hares -link TSM0_mnt_pool0_02 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_pool0_03 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 MountPoint /tsm0/pool0/pool0_03
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_03
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_03 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_pool0_03 -sys 301
[root@300 ~]# hares -link TSM0_mnt_pool0_03 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_pool0_04 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 MountPoint /tsm0/pool0/pool0_04
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_04
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_04 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_pool0_04 -sys 301
[root@300 ~]# hares -link TSM0_mnt_pool0_04 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_pool0_05 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 MountPoint /tsm0/pool0/pool0_05
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_05
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_05 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_pool0_05 -sys 301
[root@300 ~]# hares -link TSM0_mnt_pool0_05 TSM0_dg
[root@300 ~]# hares -add TSM0_mnt_pool0_06 Mount TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 Critical 1
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 SnapUmount 0
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 MountPoint /tsm0/pool0/pool0_06
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 BlockDevice /dev/vx/dsk/TSM0_dg/TSM0_vol_pool0_06
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 FSType vxfs
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 MountOpt largefiles
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 FsckOpt %-y
[root@300 ~]# hares -modify TSM0_mnt_pool0_06 Enabled 1
[root@300 ~]# hares -probe TSM0_mnt_pool0_06 -sys 301
[root@300 ~]# hares -link TSM0_mnt_pool0_06 TSM0_dg
[root@300 ~]# hares -state | grep TSM0 | grep _mnt_ | \
                while read I; do hares -display $I 2>&1 | grep -v ArgListValues | grep 'largefiles'; done | column -t
TSM0_mnt_active_log MountOpt localclus largefiles
TSM0_mnt_active_log MountOpt localclus largefiles
TSM0_mnt_archive_log MountOpt localclus largefiles
TSM0_mnt_archive_log MountOpt localclus largefiles
TSM0_mnt_db_01 MountOpt localclus largefiles
TSM0_mnt_db_01 MountOpt localclus largefiles
TSM0_mnt_db_02 MountOpt localclus largefiles
TSM0_mnt_db_02 MountOpt localclus largefiles
TSM0_mnt_db_03 MountOpt localclus largefiles
TSM0_mnt_db_03 MountOpt localclus largefiles
TSM0_mnt_db_backup_01 MountOpt localclus largefiles
TSM0_mnt_db_backup_01 MountOpt localclus largefiles
TSM0_mnt_db_backup_02 MountOpt localclus largefiles
TSM0_mnt_db_backup_02 MountOpt localclus largefiles
TSM0_mnt_db_backup_03 MountOpt localclus largefiles
TSM0_mnt_db_backup_03 MountOpt localclus largefiles
TSM0_mnt_instance MountOpt localclus largefiles
TSM0_mnt_instance MountOpt localclus largefiles
TSM0_mnt_pool0_01 MountOpt localclus largefiles
TSM0_mnt_pool0_01 MountOpt localclus largefiles
TSM0_mnt_pool0_02 MountOpt localclus largefiles
TSM0_mnt_pool0_02 MountOpt localclus largefiles
TSM0_mnt_pool0_03 MountOpt localclus largefiles
TSM0_mnt_pool0_03 MountOpt localclus largefiles
TSM0_mnt_pool0_04 MountOpt localclus largefiles
TSM0_mnt_pool0_04 MountOpt localclus largefiles
TSM0_mnt_pool0_05 MountOpt localclus largefiles
TSM0_mnt_pool0_05 MountOpt localclus largefiles
TSM0_mnt_pool0_06 MountOpt localclus largefiles
TSM0_mnt_pool0_06 MountOpt localclus largefiles
[root@300 ~]# hares -add TSM0_server Application TSM0_site
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
[root@300 ~]# hares -modify TSM0_server StartProgram "/etc/init.d/tsm0 start"
[root@300 ~]# hares -modify TSM0_server StopProgram "/etc/init.d/tsm0 stop"
[root@300 ~]# hares -modify TSM0_server MonitorProgram "/etc/init.d/tsm0 status"
[root@300 ~]# hares -modify TSM0_server Enabled 1
[root@300 ~]# hares -probe TSM0_server -sys 301
[root@300 ~]# hares -link TSM0_server TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_server TSM0_mnt_active_log
[root@300 ~]# hares -link TSM0_server TSM0_mnt_archive_log
[root@300 ~]# hares -link TSM0_server TSM0_mnt_db_01
[root@300 ~]# hares -link TSM0_server TSM0_mnt_db_02
[root@300 ~]# hares -link TSM0_server TSM0_mnt_db_03
[root@300 ~]# hares -link TSM0_server TSM0_mnt_db_backup_01
[root@300 ~]# hares -link TSM0_server TSM0_mnt_db_backup_02
[root@300 ~]# hares -link TSM0_server TSM0_mnt_db_backup_03
[root@300 ~]# hares -link TSM0_server TSM0_mnt_pool0_01
[root@300 ~]# hares -link TSM0_server TSM0_mnt_pool0_02
[root@300 ~]# hares -link TSM0_server TSM0_mnt_pool0_03
[root@300 ~]# hares -link TSM0_server TSM0_mnt_pool0_04
[root@300 ~]# hares -link TSM0_server TSM0_mnt_pool0_05
[root@300 ~]# hares -link TSM0_server TSM0_mnt_pool0_06
[root@300 ~]# hares -link TSM0_server TSM0_ip_bond0
[root@300 ~]# hares -link TSM0_mnt_active_log TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_archive_log TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_db_01 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_db_02 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_db_03 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_db_backup_01 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_db_backup_02 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_db_backup_03 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_pool0_01 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_pool0_02 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_pool0_03 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_pool0_04 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_pool0_05 TSM0_mnt_instance
[root@300 ~]# hares -link TSM0_mnt_pool0_06 TSM0_mnt_instance
[root@300 ~]# vxdg import TSM0_dg
[root@300 ~]# mount -t vxfs /dev/vx/dsk/TSM0_dg/TSM0_vol_instance /tsm0
[root@300 ~]# mkdir -p /tsm0/active_log
[root@300 ~]# mkdir -p /tsm0/archive_log
[root@300 ~]# mkdir -p /tsm0/db/db_01
[root@300 ~]# mkdir -p /tsm0/db/db_02
[root@300 ~]# mkdir -p /tsm0/db/db_03
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_01
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_02
[root@300 ~]# mkdir -p /tsm0/db_backup/db_backup_03
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_01
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_02
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_03
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_04
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_05
[root@300 ~]# mkdir -p /tsm0/pool0/pool0_06
[root@300 ~]# find /tsm0
/tsm0
/tsm0/lost+found
/tsm0/active_log
/tsm0/archive_log
/tsm0/db
/tsm0/db/db_01
/tsm0/db/db_02
/tsm0/db/db_03
/tsm0/db_backup
/tsm0/db_backup/db_backup_01
/tsm0/db_backup/db_backup_02
/tsm0/db_backup/db_backup_03
/tsm0/pool0
/tsm0/pool0/pool0_01
/tsm0/pool0/pool0_02
/tsm0/pool0/pool0_03
/tsm0/pool0/pool0_04
/tsm0/pool0/pool0_05
/tsm0/pool0/pool0_06
[root@300 ~]# umount /tsm0
[root@300 ~]# vxdg deport TSM0_dg
[root@300 ~]# haconf -dump -makero
[root@300 ~]# grep TSM0_server /etc/VRTSvcs/conf/config/main.cf
        Application TSM0_server (
        TSM0_server requires TSM0_ip_bond0
        TSM0_server requires TSM0_mnt_active_log
        TSM0_server requires TSM0_mnt_archive_log
        TSM0_server requires TSM0_mnt_db_01
        TSM0_server requires TSM0_mnt_db_02
        TSM0_server requires TSM0_mnt_db_03
        TSM0_server requires TSM0_mnt_db_backup_01
        TSM0_server requires TSM0_mnt_db_backup_02
        TSM0_server requires TSM0_mnt_db_backup_03
        TSM0_server requires TSM0_mnt_instance
        TSM0_server requires TSM0_mnt_pool0_01
        TSM0_server requires TSM0_mnt_pool0_02
        TSM0_server requires TSM0_mnt_pool0_03
        TSM0_server requires TSM0_mnt_pool0_04
        TSM0_server requires TSM0_mnt_pool0_05
        TSM0_server requires TSM0_mnt_pool0_06
        // Application TSM0_server
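You can also let VCS print the dependency tree itself instead of reading main.cf, for example:

[root@300 ~]# hares -dep TSM0_server
[root@300 ~]# hagrp -resources TSM0_site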

Local Per Node Resources

Both nodes also need identical local filesystems for /tmp, /opt/tivoli, and /home, carved out of the local vg_local volume group.

[root@300 ~]# lvcreate -n lv_tmp -L 4G vg_local
[root@300 ~]# lvcreate -n lv_opt_tivoli -L 16G vg_local
[root@300 ~]# lvcreate -n lv_home -L 4G vg_local
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_tmp
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_opt_tivoli
[root@300 ~]# mkfs.ext3 /dev/vg_local/lv_home
[root@301 ~]# lvcreate -n lv_tmp -L 4G vg_local
[root@301 ~]# lvcreate -n lv_opt_tivoli -L 16G vg_local
[root@301 ~]# lvcreate -n lv_home -L 4G vg_local
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_tmp
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_opt_tivoli
[root@301 ~]# mkfs.ext3 /dev/vg_local/lv_home
[root@300 ~]# cat /etc/fstab
/dev/mapper/vg_local-lv_root / ext3 rw,noatime,nodiratime 1 1
UUID=28d0988a-e6d7-48d8-b0e5-0f70f8eb681e /boot ext3 defaults 1 2
UUID=D401-661A /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/vg_local/lv_swap swap swap defaults 0 0
/dev/vg_local/lv_tmp /tmp ext3 rw,noatime,nodiratime 2 2
/dev/vg_local/lv_opt_tivoli /opt/tivoli ext3 rw,noatime,nodiratime 2 2
/dev/vg_local/lv_home /home ext3 rw,noatime,nodiratime 2 2
# VIRT
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0

Install the IBM TSM server dependencies on both nodes.

[root@ANY ~]# yum install numactl
[root@ANY ~]# yum install /usr/lib/libgtk-x11-2.0.so.0
[root@ANY ~]# yum install /usr/lib64/libgtk-x11-2.0.so.0
[root@ANY ~]# yum install xorg-x11-xauth xterm fontconfig libICE \
                libX11-common libXau libXmu libSM libX11 libXt

System /etc/sysctl.conf parameters for both nodes.

[root@300 ~]# cat /etc/sysctl.conf
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
# Controls the default maxmimum size of a mesage queue
kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 206158430208
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
# For SF HA
kernel.hung_task_panic=0
# NetWorker
# connection backlog (hash tables) to the maximum value allowed
net.ipv4.tcp_max_syn_backlog = 8192
net.core.netdev_max_backlog = 8192
# increase the memory size available for TCP buffers
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 524288 16777216
net.ipv4.tcp_wmem = 8192 524288 16777216
# recommended keepalive values
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 20
net.ipv4.tcp_keepalive_time = 600
# recommended timeout after improper close
net.ipv4.tcp_fin_timeout = 60
sunrpc.tcp_slot_table_entries = 64
# for RDBMS 11.2.0.4 rman cat
fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744
# support EMC 2016.04.20
net.core.somaxconn = 1024
# 256 * RAM in GB
kernel.shmmni = 65536
# TSM/NSR
kernel.sem = 250 256000 32 65536
# RAM in GB * 1024
kernel.msgmni = 262144
# TSM
kernel.randomize_va_space = 0
vm.swappiness = 0
vm.overcommit_memory = 0
[root@301 ~]# cat /etc/sysctl.conf
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
# Controls the default maxmimum size of a mesage queue
kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 206158430208
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
# For SF HA
kernel.hung_task_panic=0
# NetWorker
# connection backlog (hash tables) to the maximum value allowed
net.ipv4.tcp_max_syn_backlog = 8192
net.core.netdev_max_backlog = 8192
# increase the memory size available for TCP buffers
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 8192 524288 16777216
net.ipv4.tcp_wmem = 8192 524288 16777216
# recommended keepalive values
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 20
net.ipv4.tcp_keepalive_time = 600
# recommended timeout after improper close
net.ipv4.tcp_fin_timeout = 60
sunrpc.tcp_slot_table_entries = 64
# for RDBMS 11.2.0.4 rman cat
fs.suid_dumpable = 1
fs.aio-max-nr = 1048576
fs.file-max = 6815744
# support EMC 2016.04.20
net.core.somaxconn = 1024
# 256 * RAM in GB
kernel.shmmni = 65536
# TSM/NSR
kernel.sem = 250 256000 32 65536
# RAM in GB * 1024
kernel.msgmni = 262144
# TSM
kernel.randomize_va_space = 0
vm.swappiness = 0
vm.overcommit_memory = 0
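These parameters take effect at boot; after editing the file they can be applied immediately on each node with:

[root@300 ~]# sysctl -p
[root@301 ~]# sysctl -p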

Install IBM TSM Server

Connect to each node with SSH forwarding enabled and install the IBM TSM server.

[root@300 ~]# chmod +x 7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@300 ~]# ./7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@300 ~]# ./install.sh

… and the second node.

[root@301 ~]# chmod +x 7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@301 ~]# ./7.1.6.000-TIV-TSMSRV-Linuxx86_64.bin
[root@301 ~]# ./install.sh

Options chosen during installation.

INSTALL | DESELECT 'Languages' and DESELECT 'Operations Center'
INSTALL | /opt/tivoli/IBM/IBMIMShared
INSTALL | /opt/tivoli/IBM/InstallationManager/eclipse
INSTALL | /opt/tivoli/tsm

Screenshots from the installation process.

ibm-tsm-install-01

ibm-tsm-install-02

ibm-tsm-install-03

ibm-tsm-install-04

ibm-tsm-install-05

ibm-tsm-install-06

Install IBM TSM Client

[root@300 ~]# yum localinstall gskcrypt64-8.0.50.66.linux.x86_64.rpm \
                gskssl64-8.0.50.66.linux.x86_64.rpm \
                TIVsm-API64.x86_64.rpm \
                TIVsm-BA.x86_64.rpm
[root@301 ~]# yum localinstall gskcrypt64-8.0.50.66.linux.x86_64.rpm \
                gskssl64-8.0.50.66.linux.x86_64.rpm \
                TIVsm-API64.x86_64.rpm \
                TIVsm-BA.x86_64.rpm
[root@300 ~]# useradd -u 1500 -m tsm0
[root@301 ~]# useradd -u 1500 -m tsm0
[root@300 ~]# passwd tsm0
Changing password for user tsm0.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@301 ~]# passwd tsm0
Changing password for user tsm0.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@300 ~]# tail -1 /etc/passwd
tsm0:x:1500:1500::/home/tsm0:/bin/bash
[root@301 ~]# tail -1 /etc/passwd
tsm0:x:1500:1500::/home/tsm0:/bin/bash
[root@300 ~]# tail -1 /etc/group
tsm0:x:1500:
[root@301 ~]# tail -1 /etc/group
tsm0:x:1500:
[root@300 ~]# cat /etc/security/limits.conf
# ORACLE
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 4096
oracle hard nofile 65536
oracle soft stack 10240
# TSM
tsm0 soft nofile 32768
tsm0 hard nofile 32768
[root@301 ~]# cat /etc/security/limits.conf
# ORACLE
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 4096
oracle hard nofile 65536
oracle soft stack 10240
# TSM
tsm0 soft nofile 32768
tsm0 hard nofile 32768
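The new limits apply only to fresh login sessions, so it is worth verifying them from a new session of the tsm0 user; both nodes should print 32768:

[root@300 ~]# su - tsm0 -c 'ulimit -n'
[root@301 ~]# su - tsm0 -c 'ulimit -n'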
[root@300 ~]# :> /var/run/dsmserv_tsm0.pid
[root@301 ~]# :> /var/run/dsmserv_tsm0.pid
[root@300 ~]# chown tsm0:tsm0 /var/run/dsmserv_tsm0.pid
[root@301 ~]# chown tsm0:tsm0 /var/run/dsmserv_tsm0.pid
[root@300 ~]# hares -state | grep TSM
TSM0_dg State 300 OFFLINE
TSM0_dg State 301 OFFLINE
TSM0_ip_bond0 State 300 OFFLINE
TSM0_ip_bond0 State 301 OFFLINE
TSM0_mnt_active_log State 300 OFFLINE
TSM0_mnt_active_log State 301 OFFLINE
TSM0_mnt_archive_log State 300 OFFLINE
TSM0_mnt_archive_log State 301 OFFLINE
TSM0_mnt_db_01 State 300 OFFLINE
TSM0_mnt_db_01 State 301 OFFLINE
TSM0_mnt_db_02 State 300 OFFLINE
TSM0_mnt_db_02 State 301 OFFLINE
TSM0_mnt_db_03 State 300 OFFLINE
TSM0_mnt_db_03 State 301 OFFLINE
TSM0_mnt_db_backup_01 State 300 OFFLINE
TSM0_mnt_db_backup_01 State 301 OFFLINE
TSM0_mnt_db_backup_02 State 300 OFFLINE
TSM0_mnt_db_backup_02 State 301 OFFLINE
TSM0_mnt_db_backup_03 State 300 OFFLINE
TSM0_mnt_db_backup_03 State 301 OFFLINE
TSM0_mnt_instance State 300 OFFLINE
TSM0_mnt_instance State 301 OFFLINE
TSM0_mnt_pool0_01 State 300 OFFLINE
TSM0_mnt_pool0_01 State 301 OFFLINE
TSM0_mnt_pool0_02 State 300 OFFLINE
TSM0_mnt_pool0_02 State 301 OFFLINE
TSM0_mnt_pool0_03 State 300 OFFLINE
TSM0_mnt_pool0_03 State 301 OFFLINE
TSM0_mnt_pool0_04 State 300 OFFLINE
TSM0_mnt_pool0_04 State 301 OFFLINE
TSM0_mnt_pool0_05 State 300 OFFLINE
TSM0_mnt_pool0_05 State 301 OFFLINE
TSM0_mnt_pool0_06 State 300 OFFLINE
TSM0_mnt_pool0_06 State 301 OFFLINE
TSM0_nic_bond0 State 300 ONLINE
TSM0_nic_bond0 State 301 ONLINE
TSM0_server State 300 OFFLINE
TSM0_server State 301 OFFLINE
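Note that at this point only the NIC resources are ONLINE. The storage and IP resources are brought up one by one below, because onlining the whole service group would also try to start the not-yet-configured TSM server. Once the TSM0_server resource works, the whole group can be brought up with a single command, for example:

[root@300 ~]# hagrp -online TSM0_site -sys 301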
[root@300 ~]# hares -online TSM0_mnt_instance -sys $( hostname -s )
[root@300 ~]# hares -online TSM0_ip_bond0 -sys $( hostname -s )
[root@300 ~]# hares -state | grep TSM0 | grep 301 | grep mnt | grep -v instance | awk '{print $1}' \
                | while read I; do hares -online ${I} -sys $( hostname -s ); done
[root@300 ~]# hares -state | grep 301 | grep TSM0
TSM0_dg State 301 ONLINE
TSM0_ip_bond0 State 301 ONLINE
TSM0_mnt_active_log State 301 ONLINE
TSM0_mnt_archive_log State 301 ONLINE
TSM0_mnt_db_01 State 301 ONLINE
TSM0_mnt_db_02 State 301 ONLINE
TSM0_mnt_db_03 State 301 ONLINE
TSM0_mnt_db_backup_01 State 301 ONLINE
TSM0_mnt_db_backup_02 State 301 ONLINE
TSM0_mnt_db_backup_03 State 301 ONLINE
TSM0_mnt_instance State 301 ONLINE
TSM0_mnt_pool0_01 State 301 ONLINE
TSM0_mnt_pool0_02 State 301 ONLINE
TSM0_mnt_pool0_03 State 301 ONLINE
TSM0_mnt_pool0_04 State 301 ONLINE
TSM0_mnt_pool0_05 State 301 ONLINE
TSM0_mnt_pool0_06 State 301 ONLINE
TSM0_nic_bond0 State 301 ONLINE
TSM0_server State 301 OFFLINE
[root@300 ~]# find /tsm0 | grep -v 'lost+found'
/tsm0
/tsm0/active_log
/tsm0/archive_log
/tsm0/db
/tsm0/db/db_01
/tsm0/db/db_02
/tsm0/db/db_03
/tsm0/db_backup
/tsm0/db_backup/db_backup_01
/tsm0/db_backup/db_backup_02
/tsm0/db_backup/db_backup_03
/tsm0/pool0
/tsm0/pool0/pool0_01
/tsm0/pool0/pool0_02
/tsm0/pool0/pool0_03
/tsm0/pool0/pool0_04
/tsm0/pool0/pool0_05
/tsm0/pool0/pool0_06
[root@300 ~]# chown -R tsm0:tsm0 /tsm0

Connect to one of the nodes with SSH Forwarding enabled.

[root@300 ~]# cd /opt/tivoli/tsm/server/bin
[root@300 /opt/tivoli/tsm/server/bin]# ./dsmicfgx
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...

Options chosen during configuration.

INSTALL | Instance user ID:
INSTALL |   tsm0
INSTALL |
INSTALL | Instance directory:
INSTALL |   /tsm0
INSTALL |
INSTALL | Database directories:
INSTALL |   /tsm0/db/db_01
INSTALL |   /tsm0/db/db_02
INSTALL |   /tsm0/db/db_03
INSTALL |
INSTALL | Active log directory:
INSTALL |   /tsm0/active_log
INSTALL |
INSTALL | Primary archive log directory:
INSTALL |   /tsm0/archive_log
INSTALL |
INSTALL | Instance autostart setting:
INSTALL |   Start automatically using the instance user ID
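For the record, the same layout could also be formatted without the GUI wizard, using the dsmserv format utility as the instance user. This is only a hedged sketch: it assumes the DB2 instance for tsm0 already exists, and activelogsize below is simply the 128 GB active log volume expressed in MB.

[tsm0@300 /tsm0]$ /opt/tivoli/tsm/server/bin/dsmserv format \
                    dbdir=/tsm0/db/db_01,/tsm0/db/db_02,/tsm0/db/db_03 \
                    activelogsize=131072 \
                    activelogdirectory=/tsm0/active_log \
                    archlogdirectory=/tsm0/archive_log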

Screenshots from the configuration process.

ibm-tsm-configure-01

ibm-tsm-configure-02

ibm-tsm-configure-03

ibm-tsm-configure-04

ibm-tsm-configure-05

ibm-tsm-configure-06

ibm-tsm-configure-07

ibm-tsm-configure-08

ibm-tsm-configure-09

ibm-tsm-configure-10

ibm-tsm-configure-12

Log from the IBM TSM DB2 instance creation.

Creating the database manager instance...
The database manager instance was created successfully.
Formatting the server database...
ANR7800I DSMSERV generated at 16:39:04 on Jun 8 2016.
IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000
Licensed Materials - Property of IBM
(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.
ANR7801I Subsystem process ID is 5208.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default
language message catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0152I Database manager successfully started.
ANR2976I Offline DB backup for database TSMDB1 started.
ANR2974I Offline DB backup for database TSMDB1 completed successfully.
ANR0992I Server's database formatting complete.
ANR0369I Stopping the database manager because of a server shutdown.
Format completed with return code 0
Beginning initial configuration...
ANR7800I DSMSERV generated at 16:39:04 on Jun 8 2016.
IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000
Licensed Materials - Property of IBM
(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.
ANR7801I Subsystem process ID is 8741.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default
language message catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR1628I The database manager is using port 51500 for server connections.
ANR1636W The server machine GUID changed: old value (), new value (f0.8a.27.61-
.e5.43.b6.11.92.b5.00.0a.f7.49.31.18).
ANR2100I Activity log process has started.
ANR3733W The master encryption key cannot be generated because the server
password is not set.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR4726I The NAS-NDMP support module has been loaded.
ANR1794W TSM SAN discovery is disabled by options.
ANR2200I Storage pool BACKUPPOOL defined (device class DISK).
ANR2200I Storage pool ARCHIVEPOOL defined (device class DISK).
ANR2200I Storage pool SPACEMGPOOL defined (device class DISK).
ANR2560I Schedule manager started.
ANR0993I Server initialization complete.
ANR0916I TIVOLI STORAGE MANAGER distributed by Tivoli is now ready for use.
ANR2094I Server name set to TSM0.
ANR4865W The server name has been changed. Windows clients that use "passworda-
ccess generate" may be unable to authenticate with the server.
ANR2068I Administrator ADMIN registered.
ANR2076I System privilege granted to administrator ADMIN.
ANR1912I Stopping the activity log because of a server shutdown.
ANR0369I Stopping the database manager because of a server shutdown.
Configuration is complete.
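At this point the DB2 instance and the TSMDB1 database exist under the tsm0 user. A quick sanity check, assuming the tsm0 login shell sources the DB2 instance profile (which dsmicfgx sets up), could be:

[root@300 ~]# su - tsm0 -c 'db2 list database directory'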

Modify IBM TSM Server Startup Script

I modified the startup script created during installation so that it works properly with Veritas Cluster Server; the modifications hard-code the instance name, user, and directory (instance=tsm0, instance_user=tsm0, instance_dir=/tsm0) instead of deriving them from the script name.

[root@300 ~]# cat /etc/init.d/tsm0
#!/bin/bash
#
# dsmserv Start/Stop IBM Tivoli Storage Manager
#
# chkconfig: - 90 10
# description: Starts/Stops an IBM Tivoli Storage Manager Server instance
# processname: dsmserv
# pidfile: /var/run/dsmserv_instancename.pid
#***********************************************************************
# Distributed Storage Manager (ADSM) *
# Server Component *
# *
# IBM Confidential *
# (IBM Confidential-Restricted when combined with the Aggregated OCO *
# Source Modules for this Program) *
# *
# OCO Source Materials *
# *
# 5765-303 (C) Copyright IBM Corporation 1990, 2009 *
#*********************************************************************** #
# This init script is designed to start a single Tivoli Storage Manager
# server instance on a system where multiple instances might be running.
# It assumes that the name of the script is also the name of the instance
# to be started (or, if the script name starts with Snn or Knn, where 'n'
# is a digit, that the name of the instance is the script name with the
# three letter prefix removed).
#
# To use the script to start multiple instances, install multiple copies
# of the script in /etc/rc.d/init.d, naming each copy after the instance
# it will start.
#
# The script makes a number of simplifying assumptions about the way
# the instance is set up.
# - The Tivoli Storage Manager Server instance runs as a non-root user whose
# name is the instance name
# - The server's instance directory (the directory in which it keeps all of
# its important state information) is in a subdirectory of the home
# directory called tsminst1.
# If any of these assumptions are not valid, then the script will require
# some modifications to work. To start with, look at the
# instance, instance_user, and instance_dir variables set below... # First of all, check for syntax
if [[ $# != 1 ]]
then echo $"Usage: $0 {start|stop|status|restart}" exit 1
fi prog="dsmserv"
instance=tsm0
serverBinDir="/opt/tivoli/tsm/server/bin" if [[ ! -e $serverBinDir/$prog ]]
then echo "IBM Tivoli Storage Manager Server not found on this system ($serverBinDir/$prog)" exit -1
fi # see if $0 starts with Snn or Knn, where 'n' is a digit. If it does, then
# strip off the prefix and use the remainder as the instance name.
if [[ ${instance:0:1} == S ]]
then instance=${instance#S[0123456789][0123456789]}
elif [[ ${instance:0:1} == K ]]
then instance=${instance#K[0123456789][0123456789]}
fi instance_home=`${serverBinDir}/dsmfngr $instance 2>/dev/null`
if [[ -z "$instance_home" ]]
then instance_home="/home/${instance}"
fi
instance_user=tsm0
instance_dir=/tsm0
pidfile="/var/run/${prog}_${instance}.pid" PATH=/sbin:/bin:/usr/bin:/usr/sbin:$serverBinDir #
# Do some basic error checking before starting the server
#
# Is the server installed?
if [[ ! -e $serverBinDir/$prog ]]
then echo "IBM Tivoli Storage Manager Server not found on this system" exit 0
fi # Does the instance directory exist?
if [[ ! -d $instance_dir ]]
then echo "Instance directory ${instance_dir} does not exist" exit -1
fi
rc=0 SLEEP_INTERVAL=5
MAX_SLEEP_TIME=10 function check_pid_file()
{ test -f $pidfile
} function check_process()
{ ps -p `cat $pidfile` > /dev/null
} function check_running()
{ check_pid_file && check_process
} start() { # set the standard value for the user limits ulimit -c unlimited ulimit -d unlimited ulimit -f unlimited ulimit -n 65536 ulimit -t unlimited ulimit -u 16384 echo -n "Starting $prog instance $instance ... " #if we're already running, say so status 0 if [[ $g_status == "running" ]] then echo "$prog instance $instance already running..." exit 0 else $serverBinDir/rc.dsmserv -u $instance_user -i $instance_dir -q >/dev/null 2>&1 & # give enough time to server to start sleep 5 # if the lock file got created, we did ok if [[ -f $instance_dir/dsmserv.v6lock ]] then gawk --source '{print $4}' $instance_dir/dsmserv.v6lock>$pidfile [ $? = 0 ] && echo "Succeeded" || echo "Failed" rc=$? echo [ $rc -eq 0 ] && touch /var/lock/subsys/${instance} return $rc else echo "Failed" return 1 fi fi
} stop() { echo "Stopping $prog instance $instance ..." if [[ -e $pidfile ]] then # make sure someone else didn't kill us already progpid=`cat $pidfile` running=`ps -ef | grep $prog | grep -w $progpid | grep -v grep` if [[ -n $running ]] then #echo "executing cmd kill `cat $pidfile`" kill `cat $pidfile` total_slept=0 while check_running; do \ echo "$prog instance $instance still running, will check after $SLEEP_INTERVAL seconds" sleep $SLEEP_INTERVAL total_slept=`expr $total_slept + 1` if [ "$total_slept" -gt "$MAX_SLEEP_TIME" ]; then \ break fi done if check_running then echo "Unable to stop $prog instance $instance" exit 1 else echo "$prog instance $instance stopped Successfully" fi fi # remove the pid file so that we don't try to kill same pid again rm $pidfile if [[ $? != 0 ]] then echo "Process $prog instance $instance stopped, but unable to remove $pidfile" echo "Be sure to remove $pidfile." exit 1 fi else echo "$prog instance $instance is not running." fi rc=$? echo [ $rc -eq 0 ] && rm -f /var/lock/subsys/${instance} return $rc
} status() { # check usage if [[ $# != 1 ]] then echo "$0: Invalid call to status routine. Expected argument: " echo "where display_to_screen is 0 or 1 and indicates whether output will be sent to screen."
 exit 100 # exit 1 fi #see if file $pidfile exists # if it does, see if process is running # if it doesn't, it's not running - or at least was not started by dsmserv.rc if [[ -e $pidfile ]] then progpid=`cat $pidfile` running=`ps -ef | grep $prog | grep -w $progpid | grep -v grep` if [[ -n $running ]] then g_status="running" else g_status="stopped" # remove the pidfile if stopped. if [[ -e $pidfile ]] then rm $pidfile if [[ $? != 0 ]] then echo "$prog instance $instance stopped, but unable to remove $pidfile" echo "Be sure to remove $pidfile." fi fi fi else g_status="stopped" fi if [[ $1 == 1 ]] then echo "Status of $prog instance $instance: $g_status" fi  if [ "${1}" = "1" ] then case ${g_status} in (stopped) EXIT=100 ;; (running) EXIT=110 ;; esac exit ${EXIT} fi
} restart() { stop start
} case "$1" in start) start ;; stop) stop ;; status) status 1 ;; restart|reload) restart ;; *) echo $"Usage: $0 {start|stop|status|restart}" exit 1
esac exit $? 

… and the diff(1) between the original and the modified script.

[root@300 ~]# diff -u /etc/init.d/tsm0 /root/tsm0
--- /etc/init.d/tsm0    2016-07-13 13:20:43.000000000 +0200
+++ /root/tsm0          2016-07-13 13:27:41.000000000 +0200
@@ -207,7 +207,8 @@
 then
   echo "$0: Invalid call to status routine. Expected argument: <display_to_screen>"
   echo "where display_to_screen is 0 or 1 and indicates whether output will be sent to screen."
-  exit 1
+  exit 100
+  # exit 1
 fi
 # see if file $pidfile exists
 # if it does, see if process is running
@@ -239,6 +240,15 @@
 then
   echo "Status of $prog instance $instance: $g_status"
 fi
+
+  if [ "${1}" = "1" ]
+  then
+    case ${g_status} in
+      (stopped) EXIT=100 ;;
+      (running) EXIT=110 ;;
+    esac
+    exit ${EXIT}
+  fi
 }

 restart() {
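
The reason for these changes: VCS probes the resource through the Application agent's MonitorProgram, which interprets exit code 110 as 'resource online' and 100 as 'resource offline', while the stock exit 1 would be treated as a failed monitor. A minimal sketch of how such a script plugs into the TSM0_server resource (attribute names per the VCS Application agent; treat it as an illustration, not the exact commands used here):

# sketch - wire the init script into the VCS Application resource
hares -modify TSM0_server User           root
hares -modify TSM0_server StartProgram   "/etc/init.d/tsm0 start"
hares -modify TSM0_server StopProgram    "/etc/init.d/tsm0 stop"
hares -modify TSM0_server MonitorProgram "/etc/init.d/tsm0 status"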

Copy tsm0 Profile to the Other Node

[root@300 ~]# pwd
/home
[root@300 /home]# tar -czf - tsm0 | ssh 301 'tar -C /home -xzf -'
[root@300 ~]# cat /home/tsm0/sqllib/db2nodes.cfg
0 TSM0.domain.com 0
[root@301 ~]# cat /home/tsm0/sqllib/db2nodes.cfg
0 TSM0.domain.com 0
[root@300 ~]# hares -online TSM0_ip_bond0 -sys 300
[root@300 ~]# hares -online TSM0_mnt_active_log -sys 300
[root@300 ~]# hares -online TSM0_mnt_archive_log -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_01 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_02 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_03 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_01 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_02 -sys 300
[root@300 ~]# hares -online TSM0_mnt_db_backup_03 -sys 300
[root@300 ~]# hares -online TSM0_mnt_instance -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_01 -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_02 -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_03 -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_04 -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_05 -sys 300
[root@300 ~]# hares -online TSM0_mnt_pool0_06 -sys 300
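
Onlining each resource by hand works, but the same can be scripted; a minimal sketch, assuming hares -list Group=TSM0_site prints resource/system pairs and deliberately skipping the TSM0_server application resource, which stays offline for now:

# online every TSM0_site resource on node 300 except the application itself
for res in $(hares -list Group=TSM0_site | awk '{print $1}' | sort -u | grep -v TSM0_server)
do
  hares -online ${res} -sys 300
done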
[root@300 ~]# hares -state | grep TSM0 | grep 300
TSM0_dg State 300 ONLINE
TSM0_ip_bond0 State 300 ONLINE
TSM0_mnt_active_log State 300 ONLINE
TSM0_mnt_archive_log State 300 ONLINE
TSM0_mnt_db_01 State 300 ONLINE
TSM0_mnt_db_02 State 300 ONLINE
TSM0_mnt_db_03 State 300 ONLINE
TSM0_mnt_db_backup_01 State 300 ONLINE
TSM0_mnt_db_backup_02 State 300 ONLINE
TSM0_mnt_db_backup_03 State 300 ONLINE
TSM0_mnt_instance State 300 ONLINE
TSM0_mnt_pool0_01 State 300 ONLINE
TSM0_mnt_pool0_02 State 300 ONLINE
TSM0_mnt_pool0_03 State 300 ONLINE
TSM0_mnt_pool0_04 State 300 ONLINE
TSM0_mnt_pool0_05 State 300 ONLINE
TSM0_mnt_pool0_06 State 300 ONLINE
TSM0_nic_bond0 State 300 ONLINE
TSM0_server State 300 OFFLINE 
[root@300 ~]# cat >> /etc/services << __EOF
DB2_tsm0 60000/tcp
DB2_tsm0_1 60001/tcp
DB2_tsm0_2 60002/tcp
DB2_tsm0_3 60003/tcp
DB2_tsm0_4 60004/tcp
DB2_tsm0_END 60005/tcp
__EOF
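
These DB2_tsm0 service entries presumably mirror what the DB2 instance creation added on the first node; both nodes must carry identical entries or the instance will not resolve its ports after a failover. A quick agreement check (output omitted):

[root@300 ~]# grep DB2_tsm0 /etc/services
[root@301 ~]# grep DB2_tsm0 /etc/services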
[root@300 ~]# hagrp -freeze TSM0_site
[root@300 ~]# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  300                  RUNNING              0
A  301                  RUNNING              0

-- GROUP STATE
-- Group           System     Probed     AutoDisabled    State

B  NSR_site        300        Y          N               OFFLINE
B  NSR_site        301        Y          N               ONLINE
B  RMAN_site       300        Y          N               OFFLINE
B  RMAN_site       301        Y          N               ONLINE
B  TSM0_site       300        Y          N               PARTIAL
B  TSM0_site       301        Y          N               OFFLINE
B  VCS_site        300        Y          N               OFFLINE
B  VCS_site        301        Y          N               ONLINE

-- GROUPS FROZEN
-- Group

C  TSM0_site

-- RESOURCES DISABLED
-- Group           Type           Resource

H  TSM0_site       Application    TSM0_server
H  TSM0_site       DiskGroup      TSM0_dg
H  TSM0_site       IP             TSM0_ip_bond0
H  TSM0_site       Mount          TSM0_mnt_active_log
H  TSM0_site       Mount          TSM0_mnt_archive_log
H  TSM0_site       Mount          TSM0_mnt_db_01
H  TSM0_site       Mount          TSM0_mnt_db_02
H  TSM0_site       Mount          TSM0_mnt_db_03
H  TSM0_site       Mount          TSM0_mnt_db_backup_01
H  TSM0_site       Mount          TSM0_mnt_db_backup_02
H  TSM0_site       Mount          TSM0_mnt_db_backup_03
H  TSM0_site       Mount          TSM0_mnt_instance
H  TSM0_site       Mount          TSM0_mnt_pool0_01
H  TSM0_site       Mount          TSM0_mnt_pool0_02
H  TSM0_site       Mount          TSM0_mnt_pool0_03
H  TSM0_site       Mount          TSM0_mnt_pool0_04
H  TSM0_site       Mount          TSM0_mnt_pool0_05
H  TSM0_site       Mount          TSM0_mnt_pool0_06
H  TSM0_site       NIC            TSM0_nic_bond0
[root@300 ~]# su - tsm0 -c '/opt/tivoli/tsm/server/bin/dsmserv -i /tsm0'
ANR7800I DSMSERV generated at 16:39:04 on Jun 8 2016.
IBM Tivoli Storage Manager for Linux/x86_64
Version 7, Release 1, Level 6.000
Licensed Materials - Property of IBM
(C) Copyright IBM Corporation 1990, 2016.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.
ANR7801I Subsystem process ID is 9834.
ANR0900I Processing options file /tsm0/dsmserv.opt.
ANR0010W Unable to open message catalog for language en_US.UTF-8. The default language message
catalog will be used.
ANR7814I Using instance directory /tsm0.
ANR4726I The ICC support module has been loaded.
ANR0990I Server restart-recovery in progress.
ANR0152I Database manager successfully started.
ANR1628I The database manager is using port 51500 for server connections.
ANR1635I The server machine GUID, 54.80.e8.50.e4.48.e6.11.8e.6d.00.0a.f7.49.2b.08, has
initialized.
ANR2100I Activity log process has started.
ANR3733W The master encryption key cannot be generated because the server password is not set.
ANR3339I Default Label in key data base is TSM Server SelfSigned Key.
ANR4726I The NAS-NDMP support module has been loaded.
ANR1794W TSM SAN discovery is disabled by options.
ANR2803I License manager started.
ANR8200I TCP/IP Version 4 driver ready for connection with clients on port 1500.
ANR9639W Unable to load Shared License File dsmreg.sl.
ANR9652I An EVALUATION LICENSE for IBM System Storage Archive Manager will expire on
08/13/2016.
ANR9652I An EVALUATION LICENSE for Tivoli Storage Manager Basic Edition will expire on
08/13/2016.
ANR9652I An EVALUATION LICENSE for Tivoli Storage Manager Extended Edition will expire on
08/13/2016.
ANR2828I Server is licensed to support IBM System Storage Archive Manager.
ANR2828I Server is licensed to support Tivoli Storage Manager Basic Edition.
ANR2828I Server is licensed to support Tivoli Storage Manager Extended Edition.
ANR2560I Schedule manager started.
ANR0984I Process 1 for EXPIRE INVENTORY (Automatic) started in the BACKGROUND at 01:58:03 PM.
ANR0811I Inventory client file expiration started as process 1.
ANR0167I Inventory file expiration process 1 processed for 0 minutes.
ANR0812I Inventory file expiration process 1 completed: processed 0 nodes, examined 0 objects,
deleting 0 backup objects, 0 archive objects, 0 DB backup volumes, and 0 recovery plan files. 0
objects were retried and 0 errors were encountered.
ANR0985I Process 1 for EXPIRE INVENTORY (Automatic) running in the BACKGROUND completed with
completion state SUCCESS at 01:58:03 PM.
ANR0993I Server initialization complete.
ANR0916I TIVOLI STORAGE MANAGER distributed by Tivoli is now ready for use.
TSM:TSM0>q admin
ANR2017I Administrator SERVER_CONSOLE issued command: QUERY ADMIN

Administrator      Days Since     Days Since     Locked?    Privilege Classes
Name               Last Access    Password Set
--------------     ------------   ------------   -------    -----------------------
ADMIN              <1             <1             No         System
ADMIN_CENTER       <1             (?)            Yes
SERVER_CONSOLE                                   No         System

TSM:TSM0>halt
ANR2017I Administrator SERVER_CONSOLE issued command: HALT
ANR1912I Stopping the activity log because of a server shutdown.
ANR0369I Stopping the database manager because of a server shutdown.
ANR0991I Server shutdown complete. 
[root@300 ~]# hagrp -unfreeze TSM0_site 
[root@300 ~]# hares -state | grep TSM0 | grep 300
TSM0_dg State 300 ONLINE
TSM0_ip_bond0 State 300 ONLINE
TSM0_mnt_active_log State 300 ONLINE
TSM0_mnt_archive_log State 300 ONLINE
TSM0_mnt_db_01 State 300 ONLINE
TSM0_mnt_db_02 State 300 ONLINE
TSM0_mnt_db_03 State 300 ONLINE
TSM0_mnt_db_backup_01 State 300 ONLINE
TSM0_mnt_db_backup_02 State 300 ONLINE
TSM0_mnt_db_backup_03 State 300 ONLINE
TSM0_mnt_instance State 300 ONLINE
TSM0_mnt_pool0_01 State 300 ONLINE
TSM0_mnt_pool0_02 State 300 ONLINE
TSM0_mnt_pool0_03 State 300 ONLINE
TSM0_mnt_pool0_04 State 300 ONLINE
TSM0_mnt_pool0_05 State 300 ONLINE
TSM0_mnt_pool0_06 State 300 ONLINE
TSM0_nic_bond0 State 300 ONLINE
TSM0_server State 300 OFFLINE 
[root@301 ~]# hares -online TSM0_server -sys 300 

The errors below may show up during the first IBM TSM server start and can safely be ignored.

IGNORE | ERRORS TO IGNORE DURING FIRST IBM TSM SERVER START
IGNORE |
IGNORE | DBI1306N The instance profile is not defined.
IGNORE |
IGNORE | Explanation:
IGNORE |
IGNORE | The instance is not defined in the target machine registry.
IGNORE |
IGNORE | User response:
IGNORE |
IGNORE | Specify an existing instance name or create the required instance. 

Install IBM TSM Server Licenses

Screenshots from that process are shown below.

ibm-tsm-install-license-01

ibm-tsm-install-license-02

ibm-tsm-install-license-03

ibm-tsm-install-license-04

Let's now register the licenses for the IBM TSM.

tsm: TSM0_SITE>register license file=/opt/tivoli/tsm/server/bin/tsmee.lic
ANR2852I Current license information:
ANR2853I New license information:
ANR2828I Server is licensed to support Tivoli Storage Manager Basic Edition.
ANR2828I Server is licensed to support Tivoli Storage Manager Extended Edition.

IBM TSM Client Configuration on the IBM TSM Server Nodes

[root@300 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.opt << __EOF
SERVERNAME TSM0
__EOF

[root@301 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.opt << __EOF
SERVERNAME TSM0
__EOF

[root@300 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.sys << __EOF
SERVERNAME TSM0
COMMMethod TCPip
TCPPort 1500
TCPSERVERADDRESS localhost
SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
SCHEDLOGRETENTION 7 D
ERRORLOGRETENTION 7 D
__EOF

[root@301 ~]# cat > /opt/tivoli/tsm/client/ba/bin/dsm.sys << __EOF
SERVERNAME TSM0
COMMMethod TCPip
TCPPort 1500
TCPSERVERADDRESS localhost
SCHEDLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmsched.log
ERRORLOGNAME /opt/tivoli/tsm/client/ba/bin/dsmerror.log
SCHEDLOGRETENTION 7 D
ERRORLOGRETENTION 7 D
__EOF
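
With dsm.opt and dsm.sys in place, the administrative client should be able to reach the instance, though only on the node currently running it, since TCPSERVERADDRESS points at localhost. A quick check using the ADMIN account registered earlier (password placeholder, output omitted):

[root@300 ~]# dsmadmc -id=admin -password=<password> "query status"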
Next, the IBM lin_tape tape driver gets built and installed on both nodes (hence the [root@ALL] prompt), matched against the running kernel.

[root@ALL]# uname -r
2.6.32-504.el6.x86_64

[root@ALL]# uname -r | sed 's|.x86_64||g'
2.6.32-504.el6

[root@ALL]# yum --showduplicates list kernel-devel | grep 2.6.32-504.el6
kernel-devel.x86_64    2.6.32-504.el6    rhel-6-server-rpms

[root@ALL]# yum install rpm-build kernel-devel-2.6.32-504.el6

[root@ALL]# rpm -Uvh /root/rpmbuild/RPMS/x86_64/lin_tape-3.0.10-1.x86_64.rpm
Preparing...                ########################################### [100%]
   1:lin_tape               ########################################### [100%]
Starting lin_tape...
lin_tape loaded

[root@ALL]# rpm -Uvh lin_taped-3.0.10-rhel6.x86_64.rpm
Preparing...                ########################################### [100%]
   1:lin_taped              ########################################### [100%]
Starting lin_tape...
lin_taped loaded

[root@ALL]# /etc/init.d/lin_tape start
Starting lin_tape... lin_taped already running. Abort!

[root@ALL]# /etc/init.d/lin_tape restart
Shutting down lin_tape... lin_taped unloaded
Starting lin_tape...
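
The lin_tape binary package installed above from /root/rpmbuild is the product of rebuilding the IBM-supplied source RPM, which is why rpm-build and the matching kernel-devel were pulled in first; presumably a rebuild along these lines preceded it (version taken from the packages above):

[root@ALL]# rpmbuild --rebuild lin_tape-3.0.10-1.src.rpm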

This is a rather unusual configuration: the IBM TS3310 library with 4 LTO4 drives is logically partitioned into two logical libraries, with 2 drives dedicated to Dell/EMC Networker and 2 drives dedicated to the IBM TSM server. The library is shown below.

ibm-tsm-ts3310.jpg

The changers and tape drives for each backup system.

Networker | (L) 000001317577_LLA changer0
TSM  | (L) 000001317577_LLB changer1_persistent_TSM0
Networker | (1) 7310132058 tape0
Networker | (2) 7310295146 tape1
TSM  | (3) 7310214751 tape2_persistent_TSM0
TSM  | (4) 7310214904 tape3_persistent_TSM0
[root@300 ~]# find /dev/IBM*
/dev/IBMchanger0
/dev/IBMchanger1
/dev/IBMSpecial
/dev/IBMtape
/dev/IBMtape0
/dev/IBMtape0n
/dev/IBMtape1
/dev/IBMtape1n
/dev/IBMtape2
/dev/IBMtape2n
/dev/IBMtape3
/dev/IBMtape3n 

We will use UDEV for persistent configuration.

[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape0) | grep -i serial
    ATTR{serial_num}=="7310132058"

[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape1) | grep -i serial
    ATTR{serial_num}=="7310295146"

[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape2) | grep -i serial
    ATTR{serial_num}=="7310214751"

[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMtape3) | grep -i serial
    ATTR{serial_num}=="7310214904"

[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMchanger0) | grep -i serial
    ATTR{serial_num}=="000001317577_LLA"

[root@300 ~]# udevadm info -a -p $(udevadm info -q path -n /dev/IBMchanger1) | grep -i serial
    ATTR{serial_num}=="000001317577_LLB"
[root@300 ~]# cat /proc/scsi/IBM*
lin_tape version: 3.0.10
lin_tape major number: 239
Attached Changer Devices:
Number model SN HBA SCSI FO Path
0 3576-MTL 000001317577_LLA qla2xxx 2:0:1:1 NA
1 3576-MTL 000001317577_LLB qla2xxx 4:0:1:1 NA
lin_tape version: 3.0.10
lin_tape major number: 239
Attached Tape Devices:
Number model SN HBA SCSI FO Path
0 ULT3580-TD4 7310132058 qla2xxx 2:0:0:0 NA
1 ULT3580-TD4 7310295146 qla2xxx 2:0:1:0 NA
2 ULT3580-TD4 7310214751 qla2xxx 4:0:0:0 NA
3 ULT3580-TD4 7310214904 qla2xxx 4:0:1:0 NA 
[root@300 ~]# cat /etc/udev/rules.d/98-lin_tape.rules
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310132058", MODE="0660", SYMLINK="IBMtape0"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310295146", MODE="0660", SYMLINK="IBMtape1"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310214751", MODE="0660", SYMLINK="IBMtape2_persistent_TSM0"
KERNEL=="IBMtape*", SYSFS{serial_num}=="7310214904", MODE="0660", SYMLINK="IBMtape3_persistent_TSM0"
KERNEL=="IBMchanger*", ATTR{serial_num}=="000001317577_LLB", MODE="0660", SYMLINK="IBMchanger1_persistent_TSM0" 
[root@301 ~]# /etc/init.d/lin_tape stop
Shutting down lin_tape... lin_taped unloaded

[root@301 ~]# rmmod lin_tape

[root@301 ~]# /etc/init.d/lin_tape start
Starting lin_tape...

New persistent devices.

[root@301 ~]# find /dev/IBM*
/dev/IBMchanger0
/dev/IBMchanger1
/dev/IBMchanger1_persistent_TSM0
/dev/IBMSpecial
/dev/IBMtape
/dev/IBMtape0
/dev/IBMtape0n
/dev/IBMtape1
/dev/IBMtape1n
/dev/IBMtape2
/dev/IBMtape2n
/dev/IBMtape2_persistent_TSM0
/dev/IBMtape3
/dev/IBMtape3n
/dev/IBMtape3_persistent_TSM0 

Let's update the paths to the tape drives now.

tsm: TSM0_SITE>query path f=d

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: TS3310
              Destination Type: LIBRARY
                       Library:
                     Node Name:
                        Device: /dev/IBMchanger0
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): ADMIN
         Last Update Date/Time: 09/16/2014 13:36:14

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: DRIVE0
              Destination Type: DRIVE
                       Library: TS3310
                     Node Name:
                        Device: /dev/IBMtape0
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): SERVER_CONSOLE
         Last Update Date/Time: 07/14/2016 14:02:02

                   Source Name: TSM0_SITE
                   Source Type: SERVER
              Destination Name: DRIVE1
              Destination Type: DRIVE
                       Library: TS3310
                     Node Name:
                        Device: /dev/IBMtape1
              External Manager:
              ZOS Media Server:
                  Comm. Method:
                           LUN:
                     Initiator: 0
                     Directory:
                       On-Line: Yes
Last Update by (administrator): SERVER_CONSOLE
         Last Update Date/Time: 07/14/2016 13:59:48

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary online=no
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary device=/dev/IBMchanger1_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE TS3310 SRCType=SERVER DESTType=LIBRary online=yes
ANR1722I A path from TSM0_SITE to TS3310 has been updated.

tsm: TSM0_SITE>update drive TS3310 DRIVE1 SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310 DRIVE1 online=no
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310 DRIVE1 SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310 DRIVE1 online=yes
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310 DRIVE1 SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update drive TS3310 DRIVE1 online=yes
ANR8467I Drive DRIVE1 in library TS3310 updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE0 SRCType=SERVER autodetect=yes DESTType=DRIVE library=ts3310 device=/dev/IBMtape2_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 DRIVE0 has been updated.

tsm: TSM0_SITE>update drive TS3310 DRIVE0 SERial=AUTODetect element=AUTODetect
ANR8467I Drive DRIVE0 in library TS3310 updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE1 SRCType=SERVER autodetect=yes DESTType=DRIVE library=ts3310 device=/dev/IBMtape3_persistent_TSM0
ANR1722I A path from TSM0_SITE to TS3310 DRIVE1 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE1 SRCType=SERVER DESTType=DRIVE library=ts3310 online=yes
ANR1722I A path from TSM0_SITE to TS3310 DRIVE1 has been updated.

tsm: TSM0_SITE>update path TSM0_SITE DRIVE0 SRCType=SERVER DESTType=DRIVE library=ts3310 online=yes
ANR1722I A path from TSM0_SITE to TS3310 DRIVE0 has been updated.

Let's verify that our library works properly.

tsm: TSM0_SITE>audit library TS3310 checklabel=barcode
ANS8003I Process number 2 started.

tsm: TSM0_SITE>query proc

 Process     Process Description     Process Status
  Number
--------     --------------------    -------------------------------------------------
       2     AUDIT LIBRARY           ANR8459I Auditing volume inventory for
                                     library TS3310.

tsm: TSM0_SITE>query act
(...)
08/04/2016 14:30:41  ANR2017I Administrator ADMIN issued command: AUDIT LIBRARY
                      TS3310 checklabel=barcode (SESSION: 8)
08/04/2016 14:30:41  ANR0984I Process 2 for AUDIT LIBRARY started in the
                      BACKGROUND at 02:30:41 PM. (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:41  ANR8457I AUDIT LIBRARY: Operation for library TS3310
                      started as process 2. (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:46  ANR8358E Audit operation is required for library TS3310.
                      (SESSION: 8, PROCESS: 2)
08/04/2016 14:30:51  ANR8439I SCSI library TS3310 is ready for operations.
                      (SESSION: 8, PROCESS: 2)
(...)
08/04/2016 14:31:26  ANR0985I Process 2 for AUDIT LIBRARY running in the
                      BACKGROUND completed with completion state SUCCESS at
                      02:31:26 PM. (SESSION: 8, PROCESS: 2)
(...)
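
With the audit finishing in SUCCESS state, the usual next step (not shown in this transcript) would be checking the barcoded scratch volumes into the library, for example:

tsm: TSM0_SITE>checkin libvolume TS3310 search=yes checklabel=barcode status=scratch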

IBM TSM Storage Pool Configuration

IBM TSM container storage pool creation.

tsm: TSM0_SITE>define stgpool POOL0_stgFC stgtype=directory
ANR2249I Storage pool POOL0_stgFC is defined.

tsm: TSM0_SITE>define stgpooldirectory POOL0_stgFC /tsm0/pool0/pool0_01,/tsm0/pool0/pool0_02,/tsm0/pool0/pool0_03,/tsm0/pool0/pool0_04,/tsm0/pool0/pool0_05,/tsm0/pool0/pool0_06
ANR3254I Storage pool directory /tsm0/pool0/pool0_01 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_02 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_03 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_04 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_05 was defined in storage pool POOL0_stgFC.
ANR3254I Storage pool directory /tsm0/pool0/pool0_06 was defined in storage pool POOL0_stgFC.

tsm: TSM0_SITE>q stgpooldirectory

Storage Pool Name     Directory                                       Access
-----------------     ---------------------------------------------   ------------
POOL0_stgFC           /tsm0/pool0/pool0_01                            Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_02                            Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_03                            Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_04                            Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_05                            Read/Write
POOL0_stgFC           /tsm0/pool0/pool0_06                            Read/Write
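
To keep an eye on how the directory-container pool fills up and how well deduplication and compression perform over time, the detailed pool query is enough (output omitted here):

tsm: TSM0_SITE>query stgpool POOL0_stgFC f=d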

IBM TSM Backup Policies Configuration

Below is an example policy.

tsm: TSM0_SITE>def dom FS backret=30 archret=30
ANR1500I Policy domain FS defined.

tsm: TSM0_SITE>def pol FS FS
ANR1510I Policy set FS defined in policy domain FS.

tsm: TSM0_SITE>def mg FS FS FS_1DAY
ANR1520I Management class FS_1DAY defined in policy domain FS, set FS.

tsm: TSM0_SITE>def co FS FS FS_1DAY STANDARD type=backup destination=POOL0_STGFC verexists=32 verdeleted=1 retextra=31 retonly=14
ANR1530I Backup copy group STANDARD defined in policy domain FS, set FS, management class FS_1DAY.

tsm: TSM0_SITE>def mg FS FS FS_1MONTH
ANR1520I Management class FS_1MONTH defined in policy domain FS, set FS.

tsm: TSM0_SITE>def co FS FS FS_1MONTH STANDARD type=backup destination=POOL0_STGFC verexists=4 verdeleted=1 retextra=91 retonly=14
ANR1530I Backup copy group STANDARD defined in policy domain FS, set FS, management class FS_1MONTH.

tsm: TSM0_SITE>as defmg FS FS FS_1DAY
ANR1538I Default management class set to FS_1DAY for policy domain FS, set FS.

tsm: TSM0_SITE>act pol FS FS
ANR1554W DEFAULT Management class FS_1DAY in policy set FS FS does not have an
ARCHIVE copygroup: files will not be archived by default if this set is
activated.

Do you wish to proceed? (Yes (Y)/No (N)) y

ANR1554W DEFAULT Management class FS_1DAY in policy set FS FS does not have an
ARCHIVE copygroup: files will not be archived by default if this set is
activated.
ANR1514I Policy set FS activated in policy domain FS.
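
To actually use the FS domain, a client node has to be registered into it, and anything that should follow the monthly class gets bound with an include rule on the client side; a minimal sketch with a hypothetical node name, password, and path:

tsm: TSM0_SITE>register node HOST1 SomePassword domain=FS

...and on the client, for example in dsm.sys:

INCLUDE /data/.../* FS_1MONTH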

I hope that the amount of instructions did not discourage you from one of the best enterprise backup systems – the IBM TSM (now IBM Spectrum Protect) – and one of the best high availability clusters – the Veritas Cluster Server 🙂

EOF
