Monday, September 25, 2023

Migrating HACMP to PowerHA 6.1 in AIX?


Migrating HACMP version 5.4 or lower to PowerHA 6.1 using Snapshot

GROUND WORK:

#lslpp -l | grep -i cluster.es    --> check the currently installed cluster version
#./clRGinfo                       --> check that the resource groups (RGs) are online
#lssrc -ls clstrmgrES             --> check that the cluster manager is stable (ST_STABLE) on both nodes


TAKE A SNAPSHOT OF THE CLUSTER CONFIG (taking the snapshot from one node is enough, as the configuration is the same on both nodes)

#smitty hacmp ->
    Extended Configuration
        Snapshot Configuration
            Add a Cluster Snapshot
                Create a Snapshot of the Cluster Configuration  (snapshot name and description are mandatory)
Output: a .info and a .odm file are created under the /usr/es/sbin/cluster/snapshots directory (these are required for restoration, so keep copies in a safe location such as your home directory)
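The .info and .odm files are plain files, so a simple copy is enough to protect them; a minimal sketch, assuming the snapshot was named snapshot_5.4 (the name used in the conversion step later in this post):

#mkdir -p /home/root/ha_snapshots
#cp -p /usr/es/sbin/cluster/snapshots/snapshot_5.4.odm /home/root/ha_snapshots/     --> the configuration itself lives in the .odm file
#cp -p /usr/es/sbin/cluster/snapshots/snapshot_5.4.info /home/root/ha_snapshots/    --> the .info file is the human-readable report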


STOP THE CLUSTER SERVICES ON BOTH THE NODES.

#lssrc -ls clstrmgrES                    --> check that the cluster manager is stable (ST_STABLE) on both nodes
#smitty hacmp
     System Management (C-SPOC)
         Manage HACMP Services
             Stop Cluster Services  (select the node(s) and choose to bring the resource groups offline)

#lssrc -ls clstrmgrES                    --> the cluster manager daemon should now be in ST_INIT state


REMOVE THE HACMP 5.4 VERSION WHICH IS CURRENTLY INSTALLED

#smitty remove
Software name:   (use F7 and select all the cluster filesets)
Preview only: No; Remove dependent software: No; Detailed output: Yes
Press Enter to accept the warning and the cluster filesets will be removed in a few minutes.
You will get an OK prompt.

#lslpp -l | grep -i cluster.es  (make sure the cluster filesets have been removed)
Note: you have to remove the cluster filesets on all the nodes.
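If you prefer the command line to smitty for the removal, installp can do the same job; a sketch only (preview first and double-check the fileset names on your system):

#installp -up "cluster.*"     --> preview (-p) what would be removed
#installp -ug "cluster.*"     --> remove the cluster filesets together with dependent software (-g)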



INSTALL THE NEW POWERHA 6.1 VERSION

#cd <package_located directory>
#smitty install
Install Software
INPUT device / directory for software  (enter . since we are already in the package directory)
Software to install  (use F7 to select the PowerHA filesets)
Preview only: No; Commit software updates: No; Detailed output: Yes; Accept new license agreements: Yes
Press Enter to accept the warning and the new PowerHA filesets will be installed in a few minutes.
You will get an OK prompt.

#lslpp -l | grep -i cluster.es  (make sure the new PowerHA filesets have been installed)
Note: you have to do the same PowerHA fileset installation on all the nodes.
#lssrc -ls clstrmgrES  (the current state is now "Not Configured")
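The same installation can also be driven from the command line; again only a sketch, assuming you are still in the directory that holds the PowerHA 6.1 filesets:

#installp -apgXY -d . "cluster.*"     --> preview (-p) the install first
#installp -agXY -d . "cluster.*"      --> apply (-a) without committing, pulling requisites (-g), extending filesystems (-X) and accepting licenses (-Y)
#lslpp -l | grep -i cluster.es        --> confirm the filesets now show the 6.1 level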



 CONVERT THE SNAPSHOT WHICH WAS TAKEN ON THE CLUSTER 5.4 VERSION

We need to convert the snapshot to be compatible with PowerHA 6.1.
Remember that we took the snapshot of the 5.4 cluster under the /usr/es/sbin/cluster/snapshots directory  ( .info and .odm files ).

#/usr/es/sbin/cluster/conversion/clconvert_snapshot -v 5.4 -s snapshot_5.4
-v  <version of HACMP in which the snapshot was taken>
-s  <name of the snapshot>
Note: the .odm file will be updated.


THE SNAPSHOT NEEDS TO BE APPLIED.

# smitty hacmp
Extended Configuration
Snapshot Configuration
Restore the Cluster Configuration from a Snapshot
Select the snapshot which was converted in the previous step (i.e. snapshot_5.4)
Cluster Snapshot Name: snapshot_5.4; Un/Configure Cluster Resources: Yes
Press Enter to accept the warning and the snapshot will be restored in a few minutes.
You will get an OK prompt.

After the snapshot is restored on the primary node, we can see that the cluster configuration has been restored with #cltopinfo.



THE CLUSTER CONFIGURATION NEEDS TO BE SYNCED.

Remember that the sync has to be performed on the node where you restored the snapshot.
#smitty hacmp
Extended Configuration
Extended Verification and Synchronization
Verify, Synchronize or Both: Both; Automatically correct errors found during verification: Interactively; Logging: Verbose
Press Enter
Now the cluster information is visible on the secondary node as well.
Use cltopinfo to get/confirm the details.
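A quick check on the secondary node (re-using the commands from the groundwork section) confirms that the sync really reached it:

#cltopinfo                --> the topology should now match what was restored on the primary node
#lssrc -ls clstrmgrES     --> the cluster services are still stopped at this point; they are started in the next step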


START THE CLUSTER

#smitty hacmp
System Management (C-SPOC)
Manage HACMP Services
Start Cluster Services
Start now: now; Start Cluster Services on these nodes: <node names>; Manage Resource Groups: Automatically; Startup Cluster Information Daemon: true
Press Enter

#clRGinfo                 --> check that the resource groups are online
#lssrc -ls clstrmgrES     --> check that the cluster manager is stable (ST_STABLE)
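If you would rather wait for the cluster to settle from a script than re-run the command by hand, a small loop like the one below works (a sketch only; it simply polls for the ST_STABLE string shown above):

# Poll every 10 seconds until the cluster manager reports ST_STABLE, then show the resource groups.
while ! lssrc -ls clstrmgrES | grep -q ST_STABLE
do
    sleep 10
done
clRGinfo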


How to remove a hard disk (pdisk) in AIX?


The steps below explain how to remove and replace a hard disk (pdisk) in AIX.

$ oem_setup_env
smitty sasdam
=> Delete a SAS Disk Array
=> sissas0
=> hdisk4
Confirm that you want to delete the array; you should get a message like:
hdisk4 deleted
pdisk4 Defined
Esc+0 or F10 (this may change according to your keyboard) to exit smitty

# diag
=> enter
=> Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.)
=> Hot Plug Task
=> SCSI and SCSI RAID Hot Plug Manager
=> Replace/Remove a Device Attached to an SCSI Hot Swap Enclosure Device
=> Select the slot of pdisk4
=> enter

You should see a message like "Running rmdev on pdisk4" followed by:
The LED should be in the Remove state for the selected device.
You may now remove or replace the device.
Use 'Enter' to indicate you are finished.

Now it's time to remove the failed disk and insert the new one, then return to the previous session and hit Enter as the message instructs.
You will be back at the previous menu, but this time instead of pdisk4 you will see [populated].
Esc+3 or F3 (this may change according to your keyboard) to return to the previous menu

=> Configure Added/Replaced Devices
You'll get a "processing data" message and then it will return to the menu.
Esc+0 or F10 to exit diag

Now we have to recreate the array
# smitty sasdam
=> Create a SAS Disk Array
=> sissas0
=> 0   (RAID level)
=> 256 KB stripe size (recommended)
=> pdisk4
Confirm that you want to create the new array
Esc+0 or F10 (this may change according to your keyboard) to exit smitty.

The disk is now successfully replaced. You can go ahead with your further steps based on your setup.
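Before handing the disk back, a quick sanity check (not part of the original procedure) confirms that both the pdisk and the rebuilt array hdisk are Available again:

#lsdev -Cc pdisk | grep pdisk4     --> the physical disk should be back in Available state
#lsdev -Cc disk | grep hdisk4      --> the array hdisk should be recreated and Available
#lspv                              --> hdisk4 will typically show no PVID/VG until you add it back to your setup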



Tuesday, December 22, 2020

Very useful commands related to networking in AIX


How to identify the default gateway in aix?
#netstat -nr|egrep -iw 'default|ug'
#lsconf|grep -i gateway

How to find out the MAC address of the network card in aix?
# netstat -ai

How to find out the netmask and other important details of the network card?
#lsdev -Cc adapter
#lsattr -El en0        (note: the interface is en0, not the adapter ent0) (this will help when you need to restore the network configuration after a card failure)


The above commands may already be familiar to you, but they will give you the exact information you are looking for when you are on a sev-1 call or doing service recovery.
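If you want to capture all of this in one go before a maintenance window, the small sketch below dumps the interface attributes and the routing table to a file (nothing is hard-coded; the interface names come from lsdev):

# Save the current network configuration for later reference.
OUT=/tmp/net_config.`hostname`.txt
( lsdev -Cc if
  for IF in `lsdev -Cc if | awk '{print $1}'`
  do
      echo "--- $IF ---"
      lsattr -El $IF
  done
  netstat -rn ) > $OUT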


Friday, November 20, 2020

RAID concept - How to deal with RAID concepts from an interview perspective?


RAID (Redundant Array of Independent Disks)

 

RAID 0 -  RAID 1 -  RAID 5 -  RAID 10

 

RAID 0 (Striping)

==============

Not fault tolerant

Data is striped across multiple disks

Data is lost if any one of the disks gets corrupted or destroyed.

 

RAID 1 (Mirroring and Duplexing)

============================

Fault tolerant

Data is copied on more than one disk

Each disk has the same data (data is safe)

 

RAID 5  (Striping with parity)

======================

Requires 3 or more disks

Commonly used; it can store large amounts of data, and reads are fast (writes pay a small parity penalty)

Data is 'striped' across multiple disks along with parity (Parity is used to rebuild the data in the event of disk failure.)

 

RAID 10 (RAID 1+0)

==================

Combines RAID 1 and RAID 0

Needs a minimum of 4 disks

Disks are mirrored in pairs using RAID 1, and the mirrored pairs are then striped using RAID 0

Benefits from the fault tolerance of RAID 1 and the speed of RAID 0

The disadvantage is that only 50% of the raw capacity is usable for data storage

 

(BTW, Fault tolerance refers to the ability of a system (computer, network, cloud cluster, etc.) to continue operating without interruption when one or more of its components fail.)
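A quick way to keep the capacity side straight in an interview is the usable-space arithmetic behind each level; a small illustrative shell sketch (example numbers only):

# Usable capacity for N disks of SIZE GB at each RAID level.
N=4          # number of disks (example value)
SIZE=300     # size of each disk in GB (example value)
echo "RAID 0  usable: $(( N * SIZE )) GB        (no redundancy)"
echo "RAID 1  usable: $(( (N / 2) * SIZE )) GB  (every disk is mirrored)"
echo "RAID 5  usable: $(( (N - 1) * SIZE )) GB  (one disk's worth of parity)"
echo "RAID 10 usable: $(( (N * SIZE) / 2 )) GB  (mirrored pairs, then striped)"

For 4 x 300 GB disks that gives 1200, 600, 900 and 600 GB respectively.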

 


Tuesday, November 17, 2020

How to list all the directories and sub-directories of a particular filesystem or directory in AIX?


Sometimes you may need to list all the directories and sub-directories of a particular FS (or) directory in AIX. The command below lists all the directories, sub-directories and files inside the given directory/FS.

 

# ls -aeltFR
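If you only need the directory names rather than every file, find gives a shorter view; a sketch, with /data standing in for whichever filesystem or directory you are interested in:

# find /data -type d                        --> just the directories and sub-directories
# find /data -type d -exec ls -ld {} \;     --> the same list with owner, permissions and dates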



Thursday, August 27, 2020

How to restore a directory from a mksysb backup in AIX?



Using the commands below, we can restore a directory from the mksysb backup whenever required.

For example, to restore the directory /var/spool/mail from the mksysb, follow the steps below.

Check whether the target directory exists in the mksysb:

testlpar:# restore -T -q -l -f /backup/testlpar.mksysb |grep "/var/spool/mail"

It lists detailed output and, at the end, displays the directory we need to recover.

drwxrwxr-x  2 bin  mail  512  27 July 10:00 ./var/spool/mail


Now that we have confirmed the directory is available in the mksysb, we can restore it into the /tmp/restore directory as follows.


Restore the particular directory (/var/spool/mail) using below
============================================

testlpar:# cd /tmp/restore

testlpar:# restore -xdvqf /backup/testlpar.mksysb ./var/spool/mail

Please note the (.) before the directory name

new volume on /backup/testlpar.mksysb :
Cluster size is 51200 bytes (100 blocks)
The volume number is 1.
The backup date is: Wed 26 July 00:05:15 2020
Files are backed up by name
The user is root.

x            0 ./var/spool/mail
x        12586 ./var/spool/mail/root
x          485 ./var/spool/mail/wasadmin

The total size is 1684952 bytes.
The number of restored files is 2.
testlpar:# 
testlpar:# pwd
/tmp/restore/var/spool/mail
testlpar:# ls -ltr
total 2333
-rw-rw----   1 1001 mail   12586 May 17 2019 root
-rw-rw----   1 1001 mail     485 May 17 2019 wasadmin
testlpar:# 


Note: If you want to restore a single file from mksysb backup, please see the below link
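(For reference, a single file can be pulled out with exactly the same restore syntax by pointing at the file instead of the directory; a quick sketch using the same example backup:)

testlpar:# cd /tmp/restore
testlpar:# restore -xvqf /backup/testlpar.mksysb ./var/spool/mail/root     (note the leading dot again)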


Saturday, May 9, 2020

Facing an issue while restoring /image.data from a mksysb image in AIX?

testnim# alt_disk_mksysb -m testserver.mksysb -d hdisk3
Restoring /image.data from mksysb image.
checking disk sizes
0505-111 alt_disk_install: There is not enough disk space on target
disks specified.
Total disk space required is 67072 megabytes and target
disk space is only 34175 megabytes.
testnim#

(or)

testnim# alt_disk_mksysb -m testserver.mksysb -d hdisk3
Restoring /image.data from mksysb image.
checking disk sizes
creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5
0516-404 allocp: This system cannot fulfill the allocation request.
          There are not enough free partitions or not enough physical volumes to keep strictness and satisfy allocation requests.
          The command should be retried with different allocation characteristics.
0516-822 mklv: unable to create logical volume.
0505-115 alt_disk_install: mklv failed to create logical volume hd5
cleaning up.
testnim#

 Solution:

We will get the above error only if the rootvg was mirrored prior to taking the mksysb.

If your rootvg was mirrored, you can break the mirror and retry the installation. You need to edit image.data so that it can be used to restore to a single disk.

We need to manually break the mirror in the mksysb by supplying a custom image.data file.

#mkdir /export/idata
#cd /export/idata
#restore -xqvf /<path>/<mksysb_file_name> ./image.data
#vi /export/idata/image.data
 For each lv_data stanza:
   - change COPIES=2 to COPIES=1
   - note the value of LPs=XX
   - change PP=YY to match the LPs=XX value
(see the sed sketch below for a scripted way to make the COPIES change)
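If there are many lv_data stanzas, the COPIES change can be scripted; a rough sketch only (check the edited file by eye afterwards, and remember that the PP= values still have to be matched to the LPs= values by hand):

#cp -p image.data image.data.orig                               --> keep the original for safety
#sed 's/COPIES= *2/COPIES= 1/' image.data.orig > image.data     --> drop every stanza from 2 copies to 1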


Define a customized image_data resource:

#smitty nim_mkres
>>Resource Type: image_data
>>Resource Name: [nomirror_idata]
>>Server of Resource: [master]
>>Location of Resource: [/export/idata/image.data]

With the customized image_data resource in place, go back and use the following migration command.

#nimadm -s <spot> -l <lpp_source> -i <image_data> -j <volume group for cache> -Y -T <old_mksysb_resource> -O <new_mksysb_resource_file_pathname> -N <new_mksysb_resource>


How to set up log rotation in AIX?


To rotate a log file, append a 'rotate size <desired size> files <desired number of files>' clause to the corresponding entry in your /etc/syslog.conf, similar to the following:

*.info;mark.none      /var/adm/syslog rotate size 32m files 8 compress

Once that is done, run "refresh -s syslogd" so that syslogd re-reads the configuration.
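Putting the whole change together (a sketch; /var/adm/syslog is just the example destination used above, and if the *.info line already exists you should edit it in place rather than append a duplicate):

#echo "*.info;mark.none      /var/adm/syslog rotate size 32m files 8 compress" >> /etc/syslog.conf
#touch /var/adm/syslog       --> AIX syslogd expects the destination file to already exist
#refresh -s syslogd          --> make syslogd re-read /etc/syslog.conf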

Output similar to below.

ls -l /var/adm/syslog*
-rw-rw-r--   1 adm adm   19718076 Apr 08 09:34 /var/adm/syslog
-rw-rw-r--   1 adm adm    2728076 Apr 07 09:34 /var/adm/syslog.0.Z
-rw-rw-r--   1 adm adm    2608076 Apr 07 09:34 /var/adm/syslog.1.Z
-rw-rw-r--   1 adm adm    2688076 Apr 07 09:34 /var/adm/syslog.2.Z
-rw-rw-r--   1 adm adm    2528076 Apr 06 09:34 /var/adm/syslog.3.Z
-rw-rw-r--   1 adm adm    2378076 Apr 06 09:34 /var/adm/syslog.4.Z
-rw-rw-r--   1 adm adm    2898076 Apr 06 09:34 /var/adm/syslog.5.Z
-rw-rw-r--   1 adm adm    2678076 Apr 06 09:34 /var/adm/syslog.6.Z
-rw-rw-r--   1 adm adm    2888076 Apr 06 09:34 /var/adm/syslog.7.Z



Wednesday, May 6, 2020

What is NPIV in AIX?


NPIV  (N_Port ID Virtualization)

Virtual Fibre Channel adapters support the use of N_Port ID Virtualization (NPIV).

With NPIV, the VIO server's role is fundamentally different: a VIOS serving NPIV acts as a pass-through, providing a Fibre Channel Protocol (FCP) connection from the client to the SAN.

The virtual Fibre Channel adapter capability allows client partitions to access SAN (Storage Area Network) devices using NPIV. Each client virtual FC adapter is identified by its own unique WWPNs.


Requirements:
============

Not all Fibre Channel adapters and SAN switches support NPIV.

NPIV capable switches present the virtual WWPNs to other SAN switches and devices as if they represented physical FC adapter endpoints.

VIOS 2.1.0.10 and AIX 6.1 TL2 or AIX 5.3 TL9 (or later)
POWER6 processor-based servers
HMC V7.3.4

Commands
=========

To list the available NPIV capable ports -> #lsnports
To map a virtual FC server adapter to a port of a physical HBA -> # vfcmap -vadapter vfchostX -fcp fcs0
To unmap a virtual FC server adapter -> # vfcmap -vadapter vfchostX -fcp
To identify the WWPN -> # lscfg -vl fcsX
To identify the WWPNs using the HMC -> # lshwres -m frame_name -r virtualio --rsubtype fc --level lpar
To view the Fibre Channel mappings -> # lsmap -all -npiv
To see the virtual FC adapter details -> # lsdev -vpd | grep vfchost
To see specific info about vfchostX -> # lsmap -npiv -vadapter vfchostX
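As a quick usage example, a typical mapping sequence on the VIO server looks like this (vfchost2 and fcs0 are placeholder names; check your own with lsnports and lsmap first):

# lsnports                               --> confirm the physical port is NPIV capable (aports > 0)
# vfcmap -vadapter vfchost2 -fcp fcs0    --> map the virtual FC server adapter to the physical port
# lsmap -npiv -vadapter vfchost2         --> verify the mapping and note the client WWPNs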


Friday, April 24, 2020

How to enable a failed path in AIX?



The script below (a simple loop) enables the failed disk paths in AIX.
You do not need to worry about the disk names/numbers or the fscsi/vscsi details: simply run the script and it will find the failed paths, work out their fscsi/vscsi parents and enable them.


# lspath prints "status device parent" for every path; re-enable each one marked Failed.
lspath | grep -i failed | while read STATUS DISK PARENT
do
    chpath -l $DISK -p $PARENT -s enable
done

(or)

for s in `lspath|grep -i failed|awk '{print $2'}`; do chpath -l $s -p `lspath -l $s|grep -i failed|awk '{print $3'}` -s Enabled; done