
HACMP Short Notes

HACMP
HACMP : High Availability Cluster Multi-Processing
High Availability : Elimination of both planned and unplanned system and application
downtime. This is achieved through elimination of H/W and S/W single points of failure.
Cluster Topology : The nodes, networks, storage, clients, persistent node IP labels/devices
Cluster resources: Components that HACMP can move from one node to another, e.g. service IP
labels, file systems and applications
RSCT Version: 2.4.2
SDD Version: 1.3.1.3
HA Configuration :

Define the cluster and nodes

Define the networks and disks

Define the topology

Verify and synchronize

Define the resources and resource groups

Verify and synchronize

After Installation
changes : /etc/inittab, /etc/rc.net, /etc/services, /etc/snmpd.conf, /etc/snmpd.peers, /etc/syslog.conf,
/etc/trcfmt, /var/spool/cron/crontabs/root, /etc/hosts ; the hacmp group will be added
Software Components:
Application server
HACMP Layer
RSCT Layer
AIX Layer
LVM Layer
TCP/IP Layer
HACMP Services :
Cluster communication daemon(clcomdES)
Cluster Manager (clstrmgrES)
Cluster information daemon(clinfoES)
Cluster lock manager (cllockd)
Cluster SMUX peer daemon (clsmuxpd)
HACMP Daemons: clstrmgr, clinfo, clsmuxpd, cllockd.
HA supports up to 32 nodes

HA supports up to 48 networks
HA supports up to 64 resource groups per cluster
HA supports up to 128 cluster resources
IP Label : The label that is associated with a particular IP address as defined by the DNS
(/etc/hosts)
Base IP label : The default IP address that is set on the interface by AIX at startup.
Service IP label: The label under which a service is provided; it may be bound to a single node or
multiple nodes. These are the addresses that HACMP keeps highly available.
IP alias: An IP alias is an IP address that is added to an interface, rather than replacing its base
IP address.
RSCT Monitors the state of the network interfaces and devices.
IPAT via replacement : The service IP label will replace the boot IP address on the interface.
IPAT via aliasing: The service IP label will be added as an alias on the interface.
Persistent IP address: this can be assigned to a network for a particular node.
In HACMP the NFS export : /usr/es/sbin/cluster/etc/exports
Shared LVM:

Shared volume group is a volume group that resides entirely on the external disks shared
by cluster nodes

Shared LVM can be made available in non-concurrent access mode, concurrent access
mode, or enhanced concurrent access mode.

Non-concurrent access mode: This environment typically uses journaled file systems to
manage data.
Create a non-concurrent shared volume group: smitty mkvg -> give the VG name, No for
activate automatically at system restart, Yes for activate VG after it is created, give the VG
major number
Create a non-concurrent shared file system: smitty crjfs -> rename the FS, No to mount
automatically at system restart, then test the newly created FS by mounting and unmounting it.
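For reference, a minimal command-line sketch of the same steps (the names app1vg, /app1, the major number 90 and hdisk2 are assumptions for illustration):
mkvg -y app1vg -V 90 -n -s 64 hdisk2      (-n: do not activate automatically at system restart)
varyonvg app1vg
crfs -v jfs2 -g app1vg -a size=1G -m /app1 -A no      (-A no: do not mount automatically at restart)
mount /app1 ; umount /app1      (test the new file system)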
Importing a volume group to a fallover node:
Varyoff the volume group
Run discover process
Import a volume group
Concurrent Access Mode: It is not supported for file systems; instead you must use raw LVs and
physical disks.
Creating a concurrent access volume group:
Verify the disk status using lsdev -Cc disk
smitty cl_convg -> Create a concurrent volume group -> enter

Import the volume group using importvg -C -y vg_name physical_volume_name

varyonvg vgname
Create LVs on the concurrent VG: smitty cl_conlv.
Enhanced concurrent mode VGs: This can be used for both concurrent and non concurrent
access. This VG is varied on all nodes in the cluster, The access for modifying the data is only
granted to the node that has the resource group active.
Active or passive mode:
Active varyon: all high level operations permitted.
Passive varyon: Read only permissions on the VG.
Create an enhanced concurrent mode VG: mkvg -n -s 32 -C -y myvg hdisk11 hdisk12
Resource group behaviour:
Cascading: Fallover using dynamic node priority. Online on first available node
Rotating : Failover to next priority node in the list. Never fallback. Online using distribution
policy.
Concurrent : Online on all available nodes . never fallback
RG dependencies: clrgdependency -t
/etc/hosts : used for name resolution. All cluster node IP interfaces must be added to this
file.
/etc/inittab : hacmp:2:once:/usr/es/sbin/cluster/etc/rc.init >/dev/console 2>&1 will start
clcomdES and clstrmgrES.
/etc/rc.net file is called by cfgmgr. To configure and start TCP/IP during the boot process.
C-SPOC uses clcomdES to execute commands on remote nodes.
C-SPOC commands located in /usr/es/sbin/cluster/cspoc
You should not stop a node with the forced option on more than one node at a time, nor when the
RG is in concurrent mode.
Cluster commands are in /usr/es/sbin/cluster
User Administration : cl_usergroup
Create a concurrent VG --> smitty cl_convg
To find the resource group information: clRGinfo -p
HACMP Planning:
Maximum no.of nodes in a cluster is 32
In an HACMP Cluster, the heartbeat messages are exchanged via IP networks and Point-to-Point
networks
IP Label represents the name associated with a specific IP address
Service IP label/address: The service IP address is an IP address used for client access.
2 types of service IP addresses:

Shared Service IP address: It can be active only on one node at a time.


Node-bound service IP address: An IP address that can be configured on only one node
Method of providing high availability service IP addresses:
IP address takeover via IP aliases
IPAT via IP replacement
IP alias is an IP address that is configured on a communication interface in addition to the base IP
address. IP alias is an AIX function that is supported by HACMP. AIX supports multiple IP
aliases on each communication interface. Each IP alias can be on a different subnet.
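For example (a hedged sketch; en0 and the addresses shown are assumptions): with a base address of 192.168.10.11 on en0, a service alias in a different subnet can be added with
ifconfig en0 10.10.10.11 netmask 255.255.255.0 alias
The alias coexists with the base address, so both answer on en0 until the alias is removed.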
Network Interface:
Service Interface: This interface is used to provide access to the application running on that
node. The service IP address is monitored by HACMP via RSCT heartbeat.
Boot Interface: This is a communication interface. With IPAT via aliasing, during failover the
service IP label is aliased onto the boot interface
Persistent node IP label: Its useful for administrative purpose.
When an application is started or moved to another node together with its associated resource
group, the service IP address can be configured in two ways.

Replacing the base IP address of a communication interface. The service IP label and
boot IP label must be on same subnet.

Configuring one communication interface with an additional IP address on top of the


existing one. This method is IP aliasing. All IP addresses/labels must be on different
subnets.

Default method is IP aliasing.


HACMP Security: Implemented directly by clcomdES, Uses HACMP ODM classes and the
/usr/es/sbin/cluster/rhosts file to determine partners.
Resource Group Takeover relationship:
Resource Group: Its a logical entity containing the resources to be made highly available by
HACMP.
Resources: Filesystems, NFS, Raw logical volumes, Raw physical disks, Service IP
addresses/Labels, Application servers, startup/stop scripts.
To be made highly available by HACMP, each resource should be included in a resource group.
Resource group takeover relationship:
1. Cascading
2. Rotating
3. Concurrent
4. Custom

Cascading:

Cascading resource group is activated on its home node by default.

Resource group can be activated on low priority node if the highest priority node
is not available at cluster startup.

If node failure resource group falls over to the available node with the next
priority.

Upon node reintegration into the cluster, a cascading resource group falls back to
its home node by default.

Attributes:
1. Inactive takeover(IT): Initial acquisition of a resource group in case
the home node is not available.
2. Fallover priority can be configured in default node priority list.
3. Cascading without fallback (CWOF) is an attribute that modifies the fallback
behavior. If the CWOF flag is set to true, the resource group will not fall
back when a node joins; when the flag is false the resource
group falls back to the higher priority node.

Rotating:

At cluster startup first available node in the node priority list will activate the
resource group.

If the resource group is on a takeover node, it will never fall back to a higher
priority node if one becomes available.

Rotating resource groups require the use of IP address takeover. The nodes in the
resource chain must all share the same network connection to the resource group.

Concurrent:

A concurrent RG can be active on multiple nodes at the same time.

Users have to explicitly specify the desired startup, fallover and fallback
procedures.

Only IPAT via aliasing service IP addresses are supported.

Custom:

Startup Options:

Online on home node only

Online on first available node

Online on all available nodes

Online using distribution policy -> The resource group will only be brought online if the
node has no other resource group online. You can check this with lssrc -ls clstrmgrES

Fallover Options:

Fallover to next priority node in list

Fallover using dynamic node priority -> The fallover node can be selected on the basis of
either its available CPU, its available memory or the lowest disk usage. HACMP uses

RSCT to gather this information, then the resource group will fall over to the node that
best meets the chosen criterion.

Bring offline -> The resource group will be brought offline in the event an error occurs.
This option is designed for resource groups that are online on all available nodes.

Fallback Options:

Fallback to higher priority node in the list

Never fallback

Basic Steps to implement an HACMP cluster:

Planning

Install and connect the hardware

Configure shared storage

Installing and configuring application software

Install HACMP software and reboot each node

Define the cluster topology

Synchronize the cluster topology

Configure cluster resources

Configure cluster resource group and shared storage

Synchronize the cluster

Test the cluster

HACMP installation and configuration:


HACMP release notes : /usr/es/lpp/cluster/doc
Smitty install_all fast path for installation
Cluster.es and cluster.cspoc images must be installed on all servers
Start the cluster communication daemon: startsrc -s clcomdES
Upgrading the cluster options: node by node migration and snapshot conversion
Steps for migration:

Stop cluster services on all nodes

Upgrade the HACMP software on each node

Start cluster services on one node at a time

Convert from supported version of HAS to hacmp

Current s/w should be committed

Save snapshot

Remove the old version

Install HA 5.1 and verify

Check previous version of cluster: lslpp -h cluster


To save your HACMP configuration, create a snapshot in HACMP
Remove old version of HACMP: smitty install_remove ( select software name cluster*)
lppchk -v and lppchk -c cluster* -- both commands run clean if the installation is OK.
After you have installed HA on the cluster nodes you need to convert and apply the snapshot;
converting the snapshot must be performed before rebooting the cluster nodes.
clconvert_snapshot -C -v version -s snapshot converts an HA snapshot from the old version to the new version.
After installation, rebooting is required to activate the new cluster
manager.
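Usage sketch based on the flags listed above (the names are placeholders): clconvert_snapshot -C -v <old_version> -s <snapshot_name>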
Verification and synchronization : smitty hacmp -> extended configuration -> extended verification
and synchronization -> verify changes only
Perform Node-by-Node Migration:

Save the current configuration in snapshot.

Stop cluster services on one node using graceful with takeover

Verify the cluster services

Install hacmp latest version.

Check the installed software using lppchk

Reboot the node.

Restart the HACMP software ( smitty hacmp -> System Management -> Manage cluster
services -> start cluster services )

Repeat above steps on all nodes

Logs are written to /tmp/hacmp.out, /tmp/cm.log and /tmp/clstrmgr.debug

The config_too_long message appears when the cluster manager detects that an event has
been processing for more than the specified time. To change the time interval: smitty
hacmp -> extended configuration -> extended event configuration -> change/show time until
warning

Cluster snapshots are saved in the /usr/es/sbin/cluster/snapshots.


The synchronization process will fail when migration is incomplete. To back out from the change
you must restore the active ODM (smitty hacmp -> Problem determination tools -> Restore
HACMP configuration database from active configuration).
Upgrading to a new HACMP version involves converting the ODM from the previous release to the
current release. That is done by /usr/es/sbin/cluster/conversion/cl_convert -F -v 5.1
The log file for the conversion is /tmp/clconvert.log.
Clean-up process if the installation was interrupted: smitty install -> software maintenance and
utilities -> clean up after an interrupted installation
Network Configuration:

Physical networks: TCP/IP based, such as Ethernet and Token Ring; device based, such as RS-232 and target
mode SSA (tmssa)
Configuring cluster Topology:
Standard and Extended configuration
smitty hacmp -> Initialization and standard configuration
IP aliasing is used as the default mechanism for service IP label/address assignment to a network
interface.

Configure nodes : smitty hacmp -> Initialization and standard configuration -> configure
nodes to an hacmp cluster (give the cluster name and node names)

Configure resources: Use configure resources to make highly available ( configure IP
address/label, application server, volume groups, logical volumes, file systems )

Configure resource groups: Use configure HACMP resource groups. You can choose
cascading, rotating, custom or concurrent

Assign resources to each resource group: configure HACMP resource groups ->
Change/show resources for a resource group.

Verify and synchronize the cluster configuration

Display the cluster configuration

Steps for cluster configuration using extended path:

Run discovery: Running discovery retrieves current AIX configuration information from
all cluster nodes.

Configuring an HA cluster: smitty hacmp -> extended configuration -> extended topology
configuration -> configure an HACMP cluster -> Add/change/show an HA cluster

Defining a node: smitty hacmp -> extended configuration -> extended topology
configuration -> configure HACMP nodes -> Add a node to the HACMP cluster

Defining sites: This is optional.

Defining networks: Run discovery before network configuration.


1. IP based networks: smitty hacmp -> extended configuration -> extended topology
configuration -> configure HACMP networks -> Add a network to the HACMP
cluster -> select the type of network (enter network name, type, netmask, enable IP
address takeover via IP aliases (default is true), IP address offset for heartbeating over IP
aliases).

Defining communication interfaces: smitty hacmp -> extended configuration -> extended
topology configuration -> configure HACMP communication interfaces/devices -> select
communication interfaces -> add node name, network name, network interface,
IP label/address, network type

Defining communication devices: smitty hacmp -> extended configuration -> extended
topology configuration -> configure HACMP communication interfaces/devices -> select
communication devices

To see boot IP labels on a node use netstat -in

Defining persistent IP labels: A persistent label always stays on the same node, does not require installing
an additional physical interface, and is not part of any resource group. smitty
hacmp -> extended topology configuration -> configure persistent node IP
labels/addresses -> add a persistent node IP label (enter node name, network name, node IP
label/address)

Resource Group Configuration

smitty hacmp -> initialization and standard configuration -> Configure HACMP resource
groups -> Add a standard resource group -> Select cascading/rotating/concurrent/custom
(enter resource group name, participating node names)

Assigning resources to the RG: smitty hacmp -> initialization and standard configuration
-> Configure HACMP resource groups -> change/show resources for a standard resource
group (add service IP label/address, VG, FS, application servers).

Resource group and application management:

Bring a resource group offline: smitty cl_admin -> select hacmp resource group and
application management -> Bring a resource group offline.

Bring a resource group online: smitty hacmp -> select hacmp resource group and
application management -> Bring a resource group online.

Move a resource group: smitty hacmp -> select hacmp resource group and application
management -> Move a resource group to another node

C-SPOC: Under smitty cl_admin

Manage HACMP services

HACMP Communication interface management

HACMP resource group and application manipulation

HACMP log viewing and management

HACMP file collection management

HACMP security and users management

HACMP LVM

HACMP concurrent LVM

HACMP physical volume management

Post Implementation and administration:


C-Spoc commands are located in the /usr/es/sbin/cluster/cspoc directory.
HACMP for AIX ODM object classes are stored in /etc/es/objrepos.
User group administration in hacmp is smitty cl_usergroup
Problem Determination:
To verify the cluster configuration use smitty clverify.dialog
Log file to store output: /var/hacmp/clverify/clverify.log

HACMP Log Files:


/usr/es/adm/cluster.log: Generated by HACMP scripts and daemons.
/tmp/hacmp.out: This log file contains line by line record of every command executed by
scripts.
/usr/es/sbin/cluster/history/cluster.mmddyyyy: System creates cluster history file everyday.
/tmp/clstrmgr.debug: These messages are generated by clstrmgrES activity.
/tmp/cspoc.log: generated by hacmp c-spoc commands
/tmp/dms_loads.out: stores log messages every time hacmp triggers the deadman switch
/var/hacmp/clverify/clverify.log: cluster verification log.
/var/ha/log/grpsvcs, /var/ha/log/topsvcs, /var/ha/log/grpglsm: daemon logs.
Snapshots: The primary information saved in a cluster snapshot is the data stored in the HACMP
ODM classes(HACMPcluster, HACMPnode, HACMPnetwork, HACMPdaemons).
The cluster snapshot utility stores the data it saves in two separate files:
ODM data file(.odm), Cluster state information file(.info)
To create a cluster snapshot: smitty hacmp -> hacmp extended configuration -> hacmp snapshot
configuration -> add a cluster snapshot
Cluster Verification and testing:
High and Low water mark values are 33 and 24
The default value for syncd is 60.
Before starting the cluster clcomd daemon is added to the /etc/inittab and started by init.
Verify the status of the cluster services: lssrc -g cluster ( the cluster manager daemon (clstrmgrES),
cluster SMUX peer daemon (clsmuxpd) and cluster topology services daemon (topsvcs) should be
running ).
Status of different cluster subsystems: lssrc -g topsvcs and lssrc -g emsvcs.
In the /tmp/hacmp.out file look for the node_up and node_up_complete events.
To check the HACMP cluster status: /usr/sbin/cluster/clstat. To use this command you should
have started the clinfo daemon.
To change the snmp version : /usr/sbin/snmpv3_ssw -1.
Stop the cluster services by using smitty clstop : graceful, takeover, forced. In the log file
/tmp/hacmp.out search for node_down and node_down_complete.
Graceful: Node will be released, but will not be acquired by other nodes.
Graceful with takeover: Node will be released and acquired by other nodes.
Forced: Cluster services will be stopped but resource group will not be released.
Resource group states: online, offline, acquiring, releasing, error, temporary error, or unknown.
Find the resource group status: /usr/es/sbin/cluster/utilities/clfindres or clRGinfo.
Options: -t : display the settling time; -p : display priority override locations

To review cluster topology: /usr/es/sbin/cluster/utilities/cltopinfo.


Different type of NFS mounts: hard and soft
Hard mount is default choice.
NFS export file: /usr/es/sbin/cluster/etc/exports.
If the adapter is configured with a service IP address : verify in /tmp/hacmp.out that the event
swap_adapter has occurred and that the service IP address has been moved, using the command netstat -in .
You can implement an RS232 heartbeat network between any 2 nodes.
To test a serial connection: lsdev -Cc tty; baud rate is set to 38400, parity to none, bits per
character to 8
Test whether RSCT is functioning or not : lssrc -ls topsvcs
RSCT verification: lssrc -ls topsvcs. To check RSCT group services: lssrc -ls grpsvcs
Monitor heartbeat over all the defined networks: cllsif.log from /var/ha/run/topsvcs.clustername.
Prerequisites:
PowerHA Version 5.5, AIX 5300-09, RSCT level 2.4.10
BOS components: bos.rte.*, bos.adt.*, bos.net.tcp.*,
bos.clvm.enh ( when using the enhanced concurrent resource manager access )
The cluster.es.nfs fileset that comes with the PowerHA installation medium installs NFSv4 support. From the AIX
BOS, bos.net.nfs.server 5.3.7.0 and bos.net.nfs.client 5.3.7.0 are required.
Check that all the nodes have the same version of RSCT using lslpp -l rsct*
Installing PowerHA: release notes: /usr/es/sbin/cluster/release_notes
Enter smitty install_all -> select input device -> press F4 for a software listing -> enter
Steps for increasing the size of a shared LUN:

Stop the cluster on all nodes

Run cfgmgr

varyonvg vgname

lsattr -El hdisk#

chvg -g vgname

lsvg vgname

varyoffvg vgname

On subsequent cluster nodes that share the VG: run cfgmgr, lsattr -El hdisk#, importvg
-L vgname hdisk#

Synchronize
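A hedged sketch of the sequence above (the VG name app1vg and hdisk4 are assumptions):
(stop cluster services on all nodes)
cfgmgr
varyonvg app1vg
lsattr -El hdisk4      (confirm the new LUN size is visible)
chvg -g app1vg         (examine the disks and grow the VG)
lsvg app1vg            (check the new total PPs)
varyoffvg app1vg
Then on each other node: cfgmgr ; lsattr -El hdisk4 ; importvg -L app1vg hdisk4, and finally synchronize the cluster.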

PowerHA creates a backup copy of the modified files during synchronization on all nodes. These
backups are stored in /var/hacmp/filebackup directory.
The file collection logs are stored in /var/hacmp/log/clutils.log file.

User and group Administration:


Adding a user: smitty cl_usergroup -> select users in an HACMP cluster -> Add a user to the cluster
(list users, change/show characteristics of a user in the cluster, remove a user from the cluster).
Adding a group: smitty cl_usergroup -> select groups in an HACMP cluster -> Add a group to the
cluster (list groups, change/show characteristics of a group in the cluster, remove a group from
the cluster).
The command used to change a password on all cluster nodes: /usr/es/sbin/cluster/utilities/clpasswd
smitty cl_usergroup -> Users in an HACMP cluster

Add a user to the cluster

List users in the cluster

Change/show characteristics of a user in the cluster

Remove a user from the cluster

smitty cl_usergroup -> Groups in an HACMP cluster

Add a group to the cluster

List groups to the cluster

Change a group in the cluster

Remove a group

smitty cl_usergroup -> Passwords in an HACMP cluster


Importing VG automatically: smitty hacmp -> Extended configuration -> HACMP extended
resource configuration -> Change/show resources and attributes for a resource
group -> Automatically import volume groups = true
C-SPOC LVM: smitty cl_admin -> HACMP Logical Volume Management

Shared Volume groups

Shared Logical volumes

Shared File systems

Synchronize shared LVM mirrors (Synchronize by VG/Synchronize by LV)

Synchronize a shared VG definition

C-SPOC concurrent LVM: smitty cl_admin -> HACMP concurrent LVM

Concurrent volume groups

Concurrent Logical volumes

Synchronize concurrent LVM mirrors

C-SPOC Physical volume management: smitty cl_admin -> HACMP physical volume
management

Add a disk to the cluster

Remove a disk from the cluster

Cluster disk replacement

Cluster datapath device management

Cluster Verification: smitty hacmp -> Extended verification -> Extended verification and
synchronization. Verification log files are stored in /var/hacmp/clverify.
/var/hacmp/clverify/clverify.log - verification log
/var/hacmp/clverify/pass/nodename - if verification succeeds
/var/hacmp/clverify/fail/nodename - if verification fails
Automatic cluster verification: Each time you start cluster services and every 24 hours.
Configure automatic cluster verification: smitty hacmp -> problem determination tools -> hacmp
verification -> Automatic cluster configuration monitoring.
Cluster status monitoring: /usr/es/sbin/cluster/clstat -a and -o.
/usr/es/sbin/cluster/utilities/cldump -> provides a snapshot of the key cluster status components.
clshowsrv: displays the status
Disk Heartbeat:

It is a non-IP heartbeat

It uses a dedicated disk/LUN

It is a point-to-point network

If more than 2 nodes exist in your cluster, you will need a minimum of n non-IP heartbeat networks.

Disk heartbeating typically requires 4 seeks/second; that is, each of the two nodes will
write to the disk and read from the disk once per second. The filemon tool monitors the seeks.

Configuring disk heartbeat:

Vpaths are configured as member disks of an enhanced concurrent volume group: smitty
lvm -> select volume groups -> Add a volume group -> give the VG name, PV names, VG major
number, and set Create VG concurrent capable to enhanced concurrent.

Import the new VG on all nodes using smitty importvg or importvg -V 53 -y c23vg
vpath5

Create the diskhb network -> smitty hacmp -> extended configuration -> extended topology
configuration -> configure hacmp networks -> Add a network to the HACMP cluster -> choose
diskhb

Add 2 communication devices -> smitty hacmp -> extended configuration -> extended
topology configuration -> Configure HACMP communication interfaces/devices -> Add
communication interfaces/devices -> Add pre-defined communication interfaces and
devices -> communication devices -> choose the diskhb

Create one communication device for other node also

Testing disk heartbeat connectivity: /usr/sbin/rsct/dhb_read is used to test the validity of a
diskhb connection.

dhb_read -p vpath0 -r receives data over the diskhb network

dhb_read -p vpath3 -t transmits data over the diskhb network.
Monitoring disk heartbeat: Monitor the activity of the disk heartbeats via lssrc -ls topsvcs.
Monitor the Missed HBs field.
Configure HACMP application monitoring: smitty cm_cfg_appmon -> Add a process application
monitor -> give the process names and application startup/stop scripts
Application availability analysis tool: smitty hacmp -> system management -> Resource group and
application management -> application availability analysis
Commands:
List the cluster topology : /usr/es/sbin/cluster/utilities/cllsif
/usr/es/sbin/cluster/clstat
Start cluster : smitty clstart. Monitor with /tmp/hacmp.out and check for node_up_complete.
Stop the cluster : smitty cl_stop. Monitor with /tmp/hacmp.out and check for
node_down_complete.
Determine the state of the cluster: /usr/es/sbin/cluster/utilities/clcheck_server
Display the status of HACMP subsystems: clshowsrv -v/-a
Display the topology information: cltopinfo -c/-n/-w/-i
Monitor the heartbeat activity: lssrc -ls topsvcs [ check for dropped, errors ]
Display resource group attributes: clRGinfo -v, -p, -t, -c, -a OR clfindres

AIX Short Notes


AIX
LVM:
VG: One or more PVs can make up a VG.
Within each volume group one or more logical volumes can be defined.
VGDA(Volume group descriptor area) is an area on the disk that contains information pertinent
to the vg that the PV belongs to. It also includes information about the properties and status of all
physical and logical volumes that are part of the vg.
VGSA(Volume group status area) is used to describe the state of all PPs from all physical
volumes within a volume group. VGSA indicates if a physical partition contains accurate or stale
information.
LVCB(Logical volume control block) contains important information about the logical volume,
such as the no. of logical partitions or disk allocation policy.
VG type    Max PVs   Max LVs   Max PPs/VG   Max PP size
Normal     32        256       32512        1G
Big        128       512       130048       1G
Scalable   1024      4096      2097152      128G
PVIDs stored in ODM.
Creating a PVID : chdev -l hdisk3 -a pv=yes
Clear the PVID : chdev -l hdisk3 -a pv=clear
Display the allocation of PPs to LVs : lspv -p hdisk0
Display the layout of a PV: lspv -M hdisk0
Disabling partition allocation for a physical volume : chpv -an hdisk2 : Allocatable=no
Enabling partition allocation for a physical volume : chpv -ay hdisk2 : Allocatable=yes
Change the disk to unavailable : chpv -vr hdisk2 : PV state = removed
Change the disk to available : chpv -va hdisk2 : PV state = active
Clear the boot record : chpv -c hdisk1
To define hdisk3 as a hot spare : chpv -hy hdisk3
To remove hdisk3 as a hot spare : chpv -hn hdisk3
Migrating two disks : migratepv hdisk1 hdisk2
Migrate only PPs that belong to a particular LV : migratepv -l testlv hdisk1 hdisk5

Move data from one partition located on a physical disk to another physical partition on a
different disk: migratelp testlv/1/2 hdisk5/123
Logical track group (LTG) size is the maximum allowed transfer size for an I/O disk operation.
lquerypv -M hdisk0
VOLUME GROUPS
For each VG, two device driver files are created under /dev.
Creating a VG : mkvg -y vg1 -s 64 -V 99 hdisk4
Creating a big VG : mkvg -B -y vg1 -s 128 -f -n -V 101 hdisk2
Creating a scalable VG: mkvg -S -y vg1 -s 128 -f hdisk3 hdisk4 hdisk5
Adding disks that require more than 1016 PPs/PV: chvg -t 2 VG1
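Worked example of the 1016 factor (standard AIX behavior): a normal VG allows 1016 PPs per PV and up to 32 PVs; chvg -t 2 VG1 doubles the per-PV limit to 2 x 1016 = 2032 PPs but divides the maximum number of PVs by the same factor, 32 / 2 = 16.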
Information about a VG read from the VGDA located on a disk: lsvg -n VG1
Change the auto-varyon flag for a VG (on) : chvg -ay newvg
Change the auto-varyon flag for a VG (off): chvg -an newvg
Quorum ensures data integrity in the event of disk failure. A quorum is a state in which 51
percent or more of the PVs in a VG accessible. When quorum is lost, the VG varies itself off.
Turn off the quorum : chvg -Qn testvg
Turn on the quorum : chvg -Qy testvg
To change the maximum no. of PPs per PV : chvg -t 16 testvg
To change a normal VG to a scalable VG : 1. varyoffvg ttt 2. chvg -G ttt 3. varyonvg ttt
Change the LTG size : chvg -L 128 testvg. VGs are created with a variable logical track group
size.
Hot spare: In the spare physical volume all PPs should be free. PPs located on a failing disk will be
copied from their mirror copies to one or more disks from the hot spare pool.
Designate hdisk4 as a hot spare: chpv -hy hdisk4
Migrate data from a failing disk to a spare disk: chvg -hy vgname
Change the synchronization policy : chvg -sy testvg; the synchronization policy controls automatic
synchronization of stale partitions within the VG.
Change the maximum no. of PPs within a VG: chvg -P 2048 testvg
Change the maximum no. of LVs/VG : chvg -v 4096 testvg
How to remove the VG lock : chvg -u
Extending a volume group : extendvg testvg hdisk3; if the PV already has a PVID use extendvg -f
testvg hdisk3
Reducing a disk from the VG : reducevg testvg hdisk3
Synchronize the ODM information : synclvodm testvg
To move the data from one system to another use the exportvg command. The exportvg
command only removes the VG definition from the ODM and does not delete any data from the
physical disk : exportvg testvg

importvg : Recreates the reference to the VG data and makes that data available. This
command reads the VGDA of one of the PVs that are part of the VG. It uses redefinevg to find all
other disks that belong to the VG. It will add the corresponding entries into the ODM database and
update /etc/filesystems with the new values: importvg -y testvg hdisk7

Server A: lsvg -l app1vg

Server A: umount /app1

Server A: varyoffvg app1vg

Server B: lspv | grep app1vg

Server B: exportvg app1vg

Server B: importvg -y app1vg -n -V 90 vpath0

chvg -a n app1vg

varyoffvg app1vg

Varying on a volume group : varyonvg testvg


Varying off a volume group : varyoffvg testvg
Reorganizing a volume group : This command is used to reorganize physical partitions within a
VG. The PPs will be rearranged on the disks according to the intra-physical and inter-physical
volume allocation policies: reorgvg testvg
Synchronize the VG : syncvg -v testvg ; syncvg -p hdisk4 hdisk5
Mirroring a volume group : lsvg -p rootvg; extendvg rootvg hdisk1; mirrorvg rootvg; bosboot
-ad /dev/hdisk1; bootlist -m normal hdisk0 hdisk1
Splitting a volume group : splitvg -y newvg -c 1 testvg
Rejoin the two copies : joinvg testvg
Logical Volumes:
Create an LV : mklv -y lv3 -t jfs2 -a im testvg 10 hdisk5
Remove an LV : umount /fs1; rmlv lv1
Delete all data belonging to logical volume lv1 on physical volume hdisk7: rmlv -p hdisk7 lv1
Display the no. of logical partitions and their corresponding physical partitions: lslv -m lv1
Display information about logical volume testlv read from the VGDA located on hdisk6: lslv -n
hdisk6 testlv
Display the LVCB : getlvcb -AT lv1
Increasing the size of an LV : extendlv -a ie -e x lv1 3 hdisk5 hdisk6
Copying an LV : cplv -v dumpvg -y lv8 lv1
Creating copies of an LV : mklvcopy -k lv1 3 hdisk7 &
Splitting an LV : umount /fs1; splitlvcopy -y copylv testlv 2
Removing a copy of an LV : rmlvcopy testlv 2 hdisk6
Changing the maximum no. of logical partitions to 1000: chlv -x 1000 lv1

Installation :
New and complete overwrite installation : For a new machine, to overwrite an existing installation, or to reassign
your hard disks
Migration: upgrade AIX versions from 5.2 to 5.3. This method preserves most file systems,
including root volume group.
Preservation installation : Use this if you want to preserve the user data; see /etc/preserve.list. This
installation overwrites the /usr, /tmp, /var and / file systems by default. The /etc/filesystems file is listed in
/etc/preserve.list by default.
TCB:

To check whether the TCB is installed or not: /usr/bin/tcbck.

By installing a system with the TCB option, you enable the trusted path, trusted shell,
trusted processes and system integrity checking.

Every device is part of the TCB and every file in the /dev directory is monitored by the TCB.

Critical information about many of these files is stored in the /etc/security/sysck.cfg file.

You can enable TCB only at installation time.

Installation steps : Through the HMC, activate the partition with the boot mode overridden to SMS.
Without an HMC, after POST when you hear the 2 beeps press 1.
Insert the AIX 5L CD1 -> select boot options (No. 5) -> select install/boot device (option 1) -> select
CD/DVD -> select SCSI -> select normal boot -> exit from SMS -> system boots from
media -> choose language -> change/show installation settings -> New and complete overwrite -> select
hard disk -> install options -> enter to confirm -> after installation the system reboots automatically
Erase a hard disk using diag command
Alternate Disk Installation:

Cloning the current running rootvg to an alternate disk

Installing a mksysb image on another disk.

Alt_disk_copy: Creates copies of rootvg on an alternate set of disks.


Alt_disk_mksysb: Installs an existing mksysb on an alternate set of disks.
Alt_rootvg_op: Performs wake, sleep and customize operations.
Alternate mksysb installation: smitty alt_mksysb
Alternate rootvg cloning: smitty alt_clone.
Cloning AIX :

Provides an online backup, e.g. in case of a disk crash.

When applying new maintenance levels, a copy of the rootvg is made to an alternate disk,
then updates are applied to that copy

To view the BOS installation logs : cd /var/adm/ras; cat devinst.log, or alog -o -f bosinstlog, or
smit alog_show
Installation Packages:

Fileset : A fileset is smallest installable unit. Ex: bos.net.uucp


Package : A group of installable filesets Ex: bos.net
Licenced program products : A complete s/w product Ex :BOS
Bundle : A bundle is a list of software that contain filesets, packages and LPPs. Install the
software bundle using smitty update_all.
PTF: Program temporary fix. It's an updated fileset or a new fileset that fixes a previous system
problem. PTFs are installed through installp.
APAR: Authorized program analysis report. APARs are applied to the system through instfix.
Fileset revision level identification : version:release:modification:fixlevel
To list the filesets that are below level 4.1.2.0, type: oslevel -l 4.1.2.0
To list the filesets at levels later than the current maintenance level, type:
oslevel -g
To list all known recommended maintenance levels on the system,
type: oslevel -rq
oslevel -s for the SP level
Current maintenance level: oslevel -r

Installing S/W: Applied and committed


Applied: In the applied state the previous version is stored in /usr/lpp/packagename.
Committed : The previous version is removed first and then the installation is done.
To install filesets within the bos.net software package from the /usr/sys/inst.images directory in the
applied state: installp -avx -d /usr/sys/inst.images bos.net
Install s/w in the committed state: installp -acpX -d /usr/sys/inst.images bos.net
A record of the installp output is stored in /var/adm/sw/installp.summary
Commit all updates: installp -cgX all
List all installable s/w : installp -L -d /dev/cd0
Cleaning up after a failed installation : installp -C
Removing installed software: installp -ugp
Software Installation: smitty install_latest
Commiting applied updates: smitty install_commit
Rejecting applied updates: smitty install_reject
Removing installed software: smitty install_remove
To find what maintenance level your filesets are currently on : lslpp -l
To list the individual files that are installed with a particular fileset : lslpp -f bos.net
To list the installation and update history of filesets : lslpp -h
To list fixes that are on a CD-ROM in /dev/cd0 : instfix -T -d /dev/cd0
To determine whether an APAR is installed or not :

instfix -ik IY737478

To list what maintenance levels are installed : instfix -i | grep ML

To install an APAR : instfix -k IY75645 -d /dev/cd0


Installing individual fix by APAR: smitty update_by_fix
To install new fixes available from IBM : smitty update_all
Verifying the integrity of the OS : lppchk -v
Creating installation images on disk: smitty bffcreate
Verify whether the software installed on your system is in a consistent state: lppchk
To install RPM packages use geninstall: geninstall -d Media all
Uninstall software: geninstall -u -f file
List installable software on a device: geninstall -L -d media
AIX Boot Process:
1. When the server is Powered on Power on self test(POST) is run and checks the hardware
2. On successful completion of POST, the boot logical volume is located by reading the

bootlist
3. The AIX boot logical volume contains the AIX kernel, rc.boot, a reduced ODM and boot commands.

The AIX kernel is loaded into RAM.


4. Kernel takes control and creates a RAM file system.
5. Kernel starts /etc/init from the RAM file system
6. init runs the rc.boot 1 ( rc.boot phase one) which configures the base devices.
7. rc.boot1 calls restbase command which copies the ODM files from Boot Logical Volume

to RAM file system


8. rc.boot1 calls the cfgmgr -f command to configure the base devices
9. rc.boot1 calls the bootinfo -b command to determine the last boot device
10. Then init starts rc.boot2 which activates rootvg
11. rc.boot2 calls the ipl_varyon command to activate rootvg
12. rc.boot2 runs fsck -f /dev/hd4 and mounts the partition on / of the RAM file system
13. rc.boot2 runs fsck -f /dev/hd2 and mounts the /usr file system
14. rc.boot2 runs fsck -f /dev/hd9var, mounts the /var file system and runs the copycore

command to copy the core dump if available from /dev/hd6 to /var/adm/ras/vmcore.0 file.
And unmounts /var file system
15. rc.boot2 runs swapon /dev/hd6 and activates paging space
16. rc.boot2 runs migratedev and copies the device files from RAM file system to /file

system
17. rc.boot2 runs cp /../etc/objrepos/Cu* /etc/objrepos and copies the ODM files from RAM

file system to / filesystem


18. rc.boot2 runs mount /dev/hd9var and mounts /var filesystem
19. rc.boot2 copies the boot log messages to alog

20. rc.boot2 removes the RAM file system


21. Kernel starts /etc/init process from / file system
22. The /etc/init points /etc/inittab file and rc.boot3 is started. Rc.boot3 configures rest of the

devices
23. rc.boot3 runs fsck -f /dev/hd3 and mounts the /tmp file system
24. rc.boot3 runs syncvg rootvg &
25. rc.boot3 runs cfgmgr -p2 or cfgmgr -p3 to configure the rest of the devices. cfgmgr -p2 is

used when the physical key on MCA architecture is in normal mode and cfgmgr -p3 is
used when the physical key on MCA architecture is in service mode.
26. rc.boot3 runs the cfgcon command to configure the console
27. rc.boot3 runs the savebase command to copy the ODM files from /dev/hd4 to /dev/hd5
28. rc.boot3 starts syncd 60 & errordaemon
29. rc.boot3 turn off LEDs
30. rc.boot3 removes /etc/nologin file
31. rc.boot3 checks CuDv for chgstatus=3 and displays the missing devices on the

console
32. The next line of /etc/inittab is executed

/etc/inittab file format: identifier:runlevel:action:command


mkitab - adds records to the /etc/inittab file
lsitab - lists records in the /etc/inittab file
chitab - changes records in the /etc/inittab file
rmitab - removes records from the /etc/inittab file
To display the boot list: bootlist -m normal -o
To change the boot list: bootlist -m normal cd0 hdisk0
Troubleshooting on boot process:
Accessing a system that will not boot: Press F5 on a PCI-based system to boot from the
tape/CD-ROM -> insert volume 1 of the installation media -> select maintenance mode for system
recovery -> Access a root volume group -> select the volume group
Damaged boot image: Access a system that will not boot -> check the / and /tmp file system
sizes -> determine the boot disk using lslv -m hd5 -> recreate the boot image using bosboot -a -d
/dev/hdiskn -> check for CHECKSTOP errors in the error log; if such errors are found it is probably failing
hardware -> shut down and restart the system
Corrupted file system, corrupted jfs log: Access a system that will not boot -> run fsck on all
file systems -> format the jfslog using /usr/sbin/logform /dev/hd8 -> recreate the boot image using
bosboot -a -d /dev/hdiskn
Super block corrupted: If fsck indicates that block 8 is corrupted, the super block for the file
system is corrupted and needs to be repaired ( dd count=1 bs=4k skip=31 seek=1 if=/dev/hdn
of=/dev/hdn ) -> rebuild the jfslog using /usr/sbin/logform /dev/hd8 -> mount the root and usr file systems

by (mount /dev/hd4 /mnt, mount /usr) -> copy the system configuration to a backup directory (cp
/mnt/etc/objrepos/* /mnt/etc/objrepos/backup) -> copy the configuration from the RAM fs (cp
/etc/objrepos/Cu* /mnt/etc/objrepos) -> unmount all file systems -> save the clean ODM to the BLV
using savebase -d /dev/hdiskn -> reboot
Corrupted /etc/inittab file: check for an empty or missing inittab file. Check for problems with
/etc/environment, /bin/sh, /bin/bsh, /etc/fsck, /etc/profile -> reboot
Run level - a selected group of processes. 2 is multi-user and the default run level. S, s, M, m are for
maintenance mode
Identifying the current run level: cat /etc/.init.state
Displaying a history of previous run levels: /usr/lib/acct/fwtmp < /var/adm/wtmp | grep run-level
Changing the system run level: telinit M
Run level scripts allow users to start and stop selected applications while changing the run level.
Scripts beginning with K are stop scripts and S for start scripts.
Go to maintenance mode by using shutdown -m
rc.boot file: The /sbin/rc.boot file is a shell script that is called by init. The rc.boot file configures
devices, boots from disk, varies on the root volume group, enables file systems and calls the
BOS installation programs.
/etc/rc file: It performs normal startup initialization. It varies on all VGs, activates all paging
spaces (swapon -a), configures all dump devices (sysdumpdev -q), performs file system
checks (fsck -fp) and mounts all file systems.
/etc/rc.net: It contains network configuration information.
/etc/rc.tcpip: it starts all network related daemons (inetd, gated, routed, timed, rwhod)
Backups:
MKSYSB : Creates a bootable image of all mounted file systems on the rootvg. This command is
used to restore a system to its original state.
Tape Format : BOS boot image(kernel device drivers), BOS install image(tapeblksz, image.data,
bosinst.data), dummy table of contents, rootvg backup
Exclude file systems using mksysb -ie /dev/rmt0
cat /etc/exclude.rootvg
List the contents of an mksysb image: smitty lsmksysb
Restore a mksysb image : smitty restmksysb
The savevg command finds and backs up all files belonging to the specified volume group. Ex:
savevg -ivf /dev/rmt0 uservg.
Restvg command restores the user volume group
Backup command backs up all files and file systems. Restore command extracts files from
archives created with the backup command.
Verify the contents of backup media: tcopy /dev/rmt0
Daily Management :

/etc/security/environ : Contains the environment attributes for a user.


/etc/security/lastlog : It's an ASCII file that contains last-login attributes (time of last unsuccessful
login, unsuccessful login count, time of last login)
/etc/security/limits : It specify the process resource limits for each user
/etc/security/user :
/usr/lib/security/mkuser.default : It contains the default attributes for a new user.
The /etc/utmp file contains a record of users logged into the system. Command : who -a
The /var/adm/wtmp file contains connect-time accounting records
/etc/security/failedlogin contains a record of unsuccessful login attempts.
/etc/environment contains variables specifying the basic environment for all processes.
The /etc/profile file is the first file that the OS uses at login time.
To enable user smith to access this system remotely : chuser rlogin=true smith
Remove the user: rmuser smith
Remove the user along with the authentication information: rmuser -p smith
Display the current run level : who -r
How to display the active processes : who -p
Changing the current shell : chsh
Change the prompt : export PS1="Ready."
To list all the 64-bit processes : ps -M
To change the priority of a process : nice and renice
SUID - set user id - This attribute sets the effective and saved user ids of the process to the
owner id of the file on execution
SGID - set group id - This attribute sets the effective and saved group ids of the process to the
group id of the file on execution
The CRON daemon runs shell commands at specified dates and times.
The AT command is used to submit commands that are to be run only once.
System Planning:
RAID: Redundant array of independent disks.
RAID 0: Striping. Data is split into blocks of equal size and stored on different disks.
RAID 1: Mirroring. Duplicate copies are kept on separate physical disks.
RAID 5: Striping with Parity. Data is split into blocks of equal size. Additional data block
containing parity information.
RAID 10: It is a combination of mirroring and striping.
AIX 5.3 requires at least 2.2 GB of physical space.
Configuration:

ODM: ODM is a repository in which the OS keeps information about your system, such as
devices, software, TCP/IP configuration.
Basic Components of ODM: object classes, objects, descriptors
ODM directories: /usr/lib/objrepos, /usr/share/lib/objrepos, /etc/objrepos
Steps for NFS implementation:
NFS daemons should be running on both server and client
The file systems that need to be remotely available have to be exported (smitty
mknfsexp, exportfs -a , showmount -e myserver)
The exported file systems need to be mounted on the remote systems
NFS services: /usr/sbin/rpc.mountd, /usr/sbin/nfsd, /usr/sbin/biod, rpc.statd, rpc.lockd
Changing an exported file system: smitty chnfsexp. TCP/IP daemons: inetd, gated, routed, named
Configuration:
ODM: ODM(Object data manager) is a repository in which the OS keeps information regarding
your system, such as devices, software or TCP/IP information.
ODM information is stored in /usr/lib/objrepos, /usr/share/lib/objrepos, /etc/objrepos.
ODM commands: odmadd, odmchange, odmcreate, odmshow, odmdelete, odmdrop, odmget,
To start SMIT in graphical mode use smit -m
Creating an alias: alias rm=/usr/sbin/linux/rm
export PATH=/usr/linux/bin:$PATH; print $PATH
Network File System:
Daemons: Server side (/usr/sbin/rpc.mountd, /usr/sbin/nfsd, portmap, rpc.statd, rpc.lockd); client
side ( /usr/sbin/biod )
Start the NFS daemons using mknfs -N. To start all NFS daemons use startsrc -g nfs.
Exporting nfs directories:

Verify whether NFS is running or not using lssrc -g nfs

smitty mknfsexp

Specify the path name and set the mode (rw, ro). This updates the /etc/exports file.

/usr/sbin/exportfs -a sends all the information in /etc/exports to the kernel.

Verify that all file systems are exported using showmount -e myserver

Export an NFS directory temporarily using exportfs -i /dirname

Un-export an NFS directory using smitty rmnfsexp
Establishing NFS mounts using smitty mknfsmnt
Changing an exported file system using smitty chnfsexp
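A minimal end-to-end sketch (the server name nfssrv, directory /export/data and mount point /mnt/data are assumptions):
On the server: mknfsexp -d /export/data -t rw ; exportfs -a ; showmount -e nfssrv
On the client: mount nfssrv:/export/data /mnt/data, or use smitty mknfsmnt to add a persistent entry to /etc/filesystems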
Network configuration:
Stopping TCP IP daemons using /etc/tcp.clean script.

/etc/services file contains information about the known services


Add network routes using smitty mkroute or route add -net 192.168.1 -netmask 255.255.255.0
The traceroute command shows the route taken
Changing the IP address: smitty mktcpip
Identifying network interfaces : lsdev -Cc if
Activating a network interface: ifconfig interface address netmask up
Deactivating a network interface: ifconfig tr0 down
Deleting an address: ifconfig tr0 delete
Detaching a network interface: ifconfig tr0 detach
Creating an IP alias: ifconfig interface address netmask alias
To determine the MTU size of a network interface use lsattr -El interface.
Paging Space: A page is a unit of virtual memory that holds 4 KB of data.
Increasing paging space: chps -s 3 hd6 ( adds 3 LPs )
Reducing paging space: chps -d 1 hd6
Moving a paging space within the VG: migratepv -l hd6 hdisk0 hdisk1
Removing a paging space: swapoff /dev/paging03; rmps paging03
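A hedged sketch of adding and removing a secondary paging space (the name paging01 and hdisk1 are assumptions; mkps assigns the pagingNN name itself):
mkps -s 4 -n -a rootvg hdisk1      (4 LPs, activate now with -n and at each restart with -a)
lsps -a                            (list all paging spaces and their usage)
swapoff /dev/paging01 ; rmps paging01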
Device configuration:
lscfg - detail about devices, ex: lscfg -vpl rmt0
To show more about a particular processor: lsattr -El proc0
To discover how much memory is installed: lsattr -El sys0 | grep realmem
To show processor details: lscfg | grep proc or lsdev -Cc processor
To show available processors: bindprocessor -q
To turn on SMT: smtctl -m on -w boot
To turn off SMT : smtctl -m off -w now
Modify an existing device configuration using chdev. The device can be in the
defined, stopped or available state.
To change the maxuproc value: chdev -l sys0 -a maxuproc=100
Remove a device configuration: rmdev -Rdl rmt0
The bootinfo -y command returns 32-bit or 64-bit.
Commands to enable the 64-bit kernel: ln -sf /usr/lib/boot/unix_64 /unix; ln -sf /usr/lib/boot/unix_64
/usr/lib/boot/unix; bosboot -ad /dev/ipldevice; shutdown -r; ls -al /unix
File Systems:
Types: Journaled, Enhanced journaled, CDROM, NFS
FS Structure: Super block, allocation groups, inodes, blocks, fragments, and device logs

Super block: It contains control information about the file system, such as the overall file system size in 512-
byte blocks, FS name, FS log device, version number, number of inodes, list of free inodes, list of free data
blocks, date and time of creation, and FS state.
This data is stored in the first block of the FS, with a backup copy in block 31.
Allocation group: It consists of inodes and their corresponding data blocks.
Inodes: An inode contains control information about the file, such as type, size, owner, the date and time
when the file was created, modified and last accessed, and pointers to the data blocks that store the
actual data. For JFS the maximum number of inodes and files is determined by the number of bytes per
inode (NBPI). For JFS2 there is no NBPI; inodes are allocated dynamically.
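Worked example of the NBPI relationship (assuming the common JFS default NBPI of 4096): a 4 GB JFS provides roughly 4 GB / 4096 bytes per inode = about 1,048,576 inodes, so the file count is fixed at creation time; JFS2 avoids this limit by allocating inodes as files are created.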
Data Blocks: actual data. The default value is 4KB.
Device logs: The JFS log stores transactional information. This data can be used to roll back
incomplete operations if the machine crashes. rootvg uses LV hd8 as a common log.
FS differences:
Function          JFS      JFS2
Max FS size       1TB      4PB
Max file size     64GB     4PB
No. of inodes     Fixed    Dynamic
Inode size        128B     512B
Fragment size     512      512
Block size        4KB      4KB
Creating an FS: crfs -v jfs2 -g testvg -a size=10M -m /fs1
Display mounted FS: mount
Display characteristics of FS: lsfs
Initialize log device: logform /dev/loglv01
Display information about inodes: istat /etc/passwd
Monitoring and Performance Tuning:
The quotaon command enables disk quotas for one or more file systems
The quotaoff command disables disk quotas for one or more file systems
Enable user quotas on /home: chfs -a quota=userquota,groupquota /home
To check the consistency of the quota files use quotacheck
The edquota command creates each user's or group's soft and hard limits for allowable disk space
and maximum number of files
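A short usage sketch (the user smith and the /home file system are assumptions):
chfs -a quota=userquota,groupquota /home
edquota -u smith      (set the soft/hard block and file limits in the editor session)
quotacheck /home
quotaon /home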
Error logging is automatically started by the rc.boot script
Errstop command stops the error logging
The daemon for errlog is errdemon
The path to your system's error log file: /usr/lib/errdemon -l

Change the maximum size of the error log: errdemon -s 2000000

Display all the errors which have a specific error id: errpt -j 8527F6F4
Display all the errors logged in a specific time window: errpt -s 1122164405 -e 1123100405
To delete all the entries: errclear 0
Delete all the entries classified as software errors: errclear -d S 0
VMSTAT: It reports kernel threads, virtual memory, disks, traps and cpu activity.
To display 5 summaries at 1 second intervals use vmstat 1 5
kthr (kernel thread state): r - average number of runnable kernel threads; b - average number of
kernel threads placed in the VMM wait queue
memory (usage of virtual and real memory): avm - active virtual pages, the total number of pages
allocated in page space (a high value is not an indicator of poor performance); fre - size of the
free list (a large portion of real memory is utilized as a cache for file system data).
page (information about page faults and page activity): re - pager input/output list; pi - pages
paged in from paging space; po - pages paged out to paging space; fr - pages freed; sr - pages
scanned by the page-replacement algorithm; cy - clock cycles used by the page-replacement algorithm
faults (trap and interrupt rate averages per second): in - device interrupts; sy - system calls;
cs - kernel thread context switches
cpu (breakdown of percentage usage of CPU time): us - user time; sy - system time; id - cpu idle
time; wa - waiting for request; pc - number of physical processors consumed; ec - the percentage of
entitled capacity consumed.
disks (provides the number of transfers per second)
SAR: sar 2 5 (%usr, %sys, %wio, %idle, physc)
To report activity for the first 2 processors for each second for the next 5 times: sar -u -P 0,1 1 5
Topas:
Tuning Parameters:
/etc/tunables directory centralizes the tunable files.
Nextboot: this file is automatically applied at boot time.
Lastboot: It contains tunable parameters with their values after the last boot.
Lastboot.log: It contains logging of the creation of the lastboot file.

AIX Commands
To display if the kernel is 32-bit enabled or 64-bit enabled: bootinfo -K
How do I know if I am running a uniprocessor kernel or a multiprocessor kernel: ls -l /unix
The /dev/hdiskxx directory is where the boot logical volume /dev/hd5 is located : lslv -m hd5

How would I know if my machine is capable of running AIX 5L Version 5.3: AIX 5L Version 5.3
runs on all currently supported CHRP (Common Hardware Reference Platform)-based POWER
hardware.
How would I know if my machine is CHRP-based: Run the prtconf command. If it's a CHRP
machine, the string chrp appears on the Model Architecture line.
To display if the hardware is 32-bit or 64-bit, type: bootinfo -y
How much real memory does my machine have: bootinfo -r, or lsattr -El sys0 -a realmem
To display the number of processors on your system: lscfg | grep proc
Detailed configuration of my system: lscfg ; -p (platform-specific device information), -v (VPD)
Displays the chip type of the system, for example PowerPC: uname -p
Displays the release number of the operating system: uname -r
Displays the system name, for example AIX: uname -s
Displays the name of the node: uname -n
Displays the system name, nodename, version, machine ID: uname -a
Displays the system model name, for example IBM 9114-275: uname -M
Displays the operating system version: uname -v
Displays the machine ID number of the hardware running the system: uname -m
Displays the system ID number: uname -u
What version, release, and maintenance level of AIX is running on my system: oslevel -r
To determine which fileset updates are missing from 5300-04, for example, run the following
command:
oslevel -rl 5300-04
What SP (Service Pack) is installed on my system? oslevel -s
Information about installed filesets on my system: lslpp -l
To show that bos.acct contains /usr/bin/vmstat: lslpp -w /usr/bin/vmstat or which_fileset vmstat
To show which filesets need to be installed or corrected: lppchk -v
How do I get a dump of the header of the loader section and the symbol entries in symbolic
representation: dump -Htv
To find out whether a hard drive is bootable: ipl_varyon -i
How do I replace a disk?
1. #extendvg VolumeGroupName hdisk_new
2. #migratepv hdisk_bad hdisk_new
3. #reducevg -d VolumeGroupName hdisk_bad
How can I clone (make a copy of ) the rootvg: alt_disk_copy -d hdisk1
How do I identify the network interfaces on my server: lsdev -Cc if
To get information about one specific network interface: ifconfig tr0

HMC & LPAR Short Notes


HMC AND LPAR
HMC device is required to perform LPAR , DLPAR and CUOD configuration and management.
A single HMC can manage 48 i5 systems and 254 LPARs
In a partition there is a maximum no. of 64 virtual processors.
A mix of dedicated and shared processors within the same partition is not supported.
Sharing a pool of virtualized processors is known as Micro Partitioning technology
The maximum no.of physical processors on p5 is 64.
In Micro partition technology the minimum capacity is 1/10 processing units.
Virtual Ethernet enables inter partition communication without a dedicated physical network
adapter.
The virtual IO server owns the real resources that are shared with other clients.
Shared Ethernet adapter is a new service that acts as a layer 2 network switch to route network
traffic from a virtual Ethernet to a real network adapter.
On p5 595 Max no.of processors- 64, Max Memory Size 2TB, Dedicated processor
partitions-64, Shared processor partitions- 254.
HMC model for p5 595 is 7310-C04 or 7310-CR3
HMC Functions: LPAR, DLPAR, Capacity on demand without reboot, Inventory and microcode
management, Remote power control.
254 partitions supported by one HMC.
A Partition Profile is used to allocate resources such as processing units, memory and IO cards to
a partition. Several partition profiles may be created for the same partition.
System profile is a collection of partition profiles. A partition profile cannot be added to a system
profile if the partition resources are already committed to another partition profile.
To change from one system profile to another, all the running partitions must be shutdown.
To find the current firmware level: lscfg -vp | grep -p "Platform Firmware:"

Simultaneous multi threading : The instructions from the OS are loaded simultaneously into the
processor and executed.
DLPAR : DLPAR allows us to add, move or remove processors, memory and IO resources to,
from or between active partitions manually without having to restart or shutdown the partition.
Unused processing units available in shared processor pool.
Dedicated processors are whole processors that are assigned to a single partition. The minimum
no. of dedicated processors you must assign is one processor.
When a partition with dedicated processors is powered down, their processors will be available
to the shared processor pool. This capability is enabled by Allow idle processors to be shared.
Idle processors from active partitions with dedicated processors can be used by any uncapped
partition that requires additional processing units to complete its jobs.
Shared processor minimum processing unit is 0.1
Capped : The processor usage never exceeds the assigned processing capacity.
Uncapped : Processing capacity may be exceeded when the shared processor pool has spare
processing units.
Weight is a number in the range 0-255. For example, if there are 3 spare processors available in the shared processor pool, partition A has an uncapped weight of 80 and partition B has a weight of 160, then LPAR A will receive 1 processing unit and B will receive 2 processing units (see the worked example below).
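A minimal worked version of that arithmetic, assuming A and B are the only uncapped partitions competing for the 3 spare processing units (spare capacity is divided in proportion to the weights):
A receives 3 x 80 / (80 + 160) = 1.0 processing unit
B receives 3 x 160 / (80 + 160) = 2.0 processing units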
Minimum Memory is the minimum amount of memory which needed by the logical partition to
start.
Desired memory is the requested amount of memory for the partition; the partition will receive an amount of memory between minimum and desired. Desired memory is the amount of memory the LPAR should have when it is powered on. If the managed system does not have the full desired amount but only a lesser amount, the uncommitted memory resources that are available will be assigned to the LPAR when it is activated.
You cannot increase the memory beyond the maximum value.
Installed memory is the total no. of memory units that are installed on the managed system
Creating a new LPAR:
Server and Partition -> Server Management -> right-click Partitions -> Create Logical Partition
Give the partition ID (numeric between 1 and 254) and name (max 31 characters)
Give the partition type (AIX or Linux, i5/OS, VIO)
Select workload management group -> No
Give the profile name
Specify the min, desired and max memory
Select dedicated/shared processors
If you select dedicated, give the min, desired and max processors
If you select shared, give the min, desired and max processing units and click Advanced
Click the radio button (capped/uncapped) and give the virtual processors (min, desired, max)

If you select uncapped, give the weight as well.


Allocate physical IO resources: select the IO and click Add as Required / Add as Desired.
IO resources can be configured as required or desired. A required resource is needed for the partition to start when the profile is activated. Desired resources are assigned to the partition only if they are available when the profile is activated.
Then select the console and location code.
To create another profile: right-click the partition -> Create Profile -> give the profile ID.
Change the default profile: right-click the partition -> Change Default Profile -> choose the profile.
Restart options :
DUMP : Initiate a main storage or system memory dump on the logical partition and restart the
logical partition when complete.
Immediate : as quickly as possible , without notifying the logical partition.
DUMP Retry : Retry a main storage or system memory dump on the logical partition and restart
the logical partition when complete.
Shutdown options :
Delayed : Shutdown the logical partition by starting the delayed power off sequence.
Immediate : as quickly as possible , without notifying the logical partition.
DLPAR:
DLPAR can be performed against the following types :
Physical Adapters
Processors
Memory
VIO Adapters
Right-click the partition -> Dynamic Logical Partitioning -> Physical Adapter Resources -> Add/Move/Remove
Licensed Internal Code Updates: To install licensed internal code fixes on your managed systems
for a current release click change licensed internal code for the current release
To upgrade licensed internal code fixes on your managed systems for a current release click
upgrade licensed internal code for the current release
HMC Security: Servers and Clients communicate over the secure sockets layer(SSL). Which
provides server authentication, data encryption and data integration.
HMC serial number: lshmc -v
To format the DVD-RAM media
The following steps show how to format the DVD-RAM disk:
1. Place a DVD-RAM disk in to the HMC DVD drive.
2. In the HMC Navigation area,under your managed system, click Licensed Internal Code
Maintenance.

3. Then click HMC Code Update.


4. In the right-hand window, click Format Removable Media.
5. Then select the Format DVD radio button.
6. Select Backup/restore.
7. Then click the Format button.
The DVD-RAM disk should be formatted in a few seconds, after which you will receive a
Format DVD has been successfully completed - ACT0001F message.

Back up to formatted DVD media


Use the following steps to back up the CCD to the formatted DVD media:
1. In the HMC Navigation area, click Licensed Internal Code Maintenance.
2. Then click the HMC Code Update.
3. In the right-hand window, click Back up Critical Console Data.
4. Select the Back up to DVD on local system radio button and click the Next button.
5. Enter some valid text in the description window and click OK.

Installation & Upgradation


Install a Client Using NIM:

Configure NIM master

Defined basic NIM resources

Defined NIM client you want to install

Smit nim_bosinst

Select a target for the BOS installation operation

Select rte installation for the installation type

Select the lpp_source resource for the BOS installation

Select the SPOT resource for the BOS installation

Select a bosinst_data resource that is capable of performing a non-prompted BOS installation

Select resolv_conf resource

Select accept new license agreement option and select yes

Press enter

Check the status using lsnim -l <client_name> (a command-line equivalent of this install is sketched below)
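A minimal command-line equivalent of the smit nim_bosinst flow; the resource and client names (lpp5300, spot5300, bosinst_noprompt, resolv_res, lpar01) are hypothetical examples:
nim -o allocate -a lpp_source=lpp5300 -a spot=spot5300 -a bosinst_data=bosinst_noprompt -a resolv_conf=resolv_res lpar01   # allocate the install resources
nim -o bos_inst -a source=rte -a accept_licenses=yes lpar01   # start a non-prompted rte install
lsnim -l lpar01                                               # watch progress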

Clone a rootvg using alternate disk installation:
Check the status of the physical disks (lspv); hdisk1 will be used as the alternate disk.
Check for the fileset bos.alt_disk_install.rte using lslpp. If it is not installed, install it using geninstall -d /dev/cd0 bos.alt_disk_install.rte
smit alt_clone -> select hdisk1

VIO updates:
Applying updates from a local hard disk: log in to the VIO as padmin -> create a directory -> transfer the update files using ftp (or mount remote_machine_name:directory on /mnt) -> commit previous updates (updateios -commit) -> apply the update (updateios -accept -install -dev <directory>) -> verify the update using ioslevel -> reboot the VIO (shutdown -restart). A consolidated command sketch follows.
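A minimal sketch of the same sequence on the VIOS command line (the staging directory name is an assumption):
mkdir /home/padmin/updates               # staging directory for the fix pack files
updateios -commit                        # commit any previously applied updates
updateios -accept -install -dev /home/padmin/updates
ioslevel                                 # confirm the new level
shutdown -restart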

Migration AIX Version 5.3:


Prechecks:

System Requirements: Physical Memory:128MB, Paging Space: 512MB, Disk


Space:2.2GB

Take a snap of the system using snap -a
Document your hardware using lsdev -CHc memory, bootinfo -r, lsdev -CHc disk, lsdev -CHc adapter, df -k
Document your software: oslevel, lslpp -La, lslicense, lsattr -El sys0
Take a system backup (mksysb) and back up the data VGs (a capture sketch follows)
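A minimal pre-migration capture sketch; the /tmp/premig output directory is a hypothetical name:
mkdir -p /tmp/premig
snap -ac                                 # full system snap
oslevel -r       > /tmp/premig/oslevel
lslpp -La        > /tmp/premig/filesets
lsdev -CHc disk  > /tmp/premig/disks
lsattr -El sys0  > /tmp/premig/sys0
df -k            > /tmp/premig/df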

Migration Process:

Change the boot list using bootlist -m normal cd0

Insert the aix 5l base CD into the CD-ROM

Boot the system from CD. You will get the installation menu.

The default method of installation is migration

Select the hdisk, select advanced options for 32-bit or 64-bit

Migration using alternate disk install:

The OS is copied to another disk on your NIM client

The copied rootvg is then migrated to 5L.

Process: smit nim (or nimadm_migrate) -> Perform NIM Software Installation and Maintenance Tasks -> Alternate Disk Installation -> NIM Alternate Disk Migration -> Perform NIM Alternate Disk Migration (select client, disk name, lpp_source and SPOT name)
Migration AIX Version 6.1:

Take an mksysb backup for rootvg on bootable media

Make a copy: /etc/inetd.conf, /etc/inittab, /etc/motd, /usr/dt/config/Xservers

Check and remove restricted tunables in /etc/tunables/nextboot

Ensure the root user's primary authentication method is SYSTEM: lsuser -a auth1 root; chuser auth1=SYSTEM root
Users who are logged in must be logged off

Check error log: errpt

Verify the processor capability (32/64-bit): prtconf -c

Insert the 6.1 DVD and mount it: mount -v cdrfs -o ro /dev/cd0 /mnt

Copy file /mnt/usr/lpp/bos/pre_migration to /tmp and run /tmp/pre_migration. The output


will be stored in /home/pre_migration.yymmddhhmmss

shutdown -F

Manually turn on system and boot from DVD

Select current console as the system console

Select English language

Select change/show installation settings and make sure the installation type is migration

Run /usr/lpp/bos/post_migration

Migrating HACMP cluster to 5.3:

Enough disk space is required on /(1.2 MB) and /usr(120MB)

Minimum RAM 128MB required

Pre migration Steps:

Take a snapshot

Take system backup

Run lppchk -v, check the ML, check errpt, df -k, lsps -s

Rolling Migration:

From the working cluster we saved a snapshot

Took MKSYSB

Create alt_disk_install

Stop HACMP with takeover. Check that the resource groups have moved to another node; confirm with clfindres.

Install latest AIX fixes

Update and verify RSCT levels

Remove and replace SDD (stopsrc -s sddsrv, rmdev -dl dpo -R, uninstall SDD with smitty remove, install the latest SDD)

Run smitty update_all

Reboot the node

Repeat the steps to other servers

Check the cluster state with lssrc -ls clstrmgrES

Snapshot Migration:

Stop HACMP on all nodes

Run smitty remove and deinstall cluster.*

Migrate the AIX/RSCT

Install the HACMP packages on all nodes

Reboot all nodes

Convert the snapshot: /usr/es/sbin/cluster/conversion/clconvert_snapshot -v 5.2 -s snapshot.odm

Apply the snapshot: smitty hacmp -> Extended Configuration -> Snapshot Configuration -> Apply a Cluster Snapshot -> select the snapshot and press Enter

Start cluster services one node at a time.

Post Migration steps:

Verify and synchronize the cluster configuration

Do failover test

NIM Short Notes


NIM
Master: It refers the machine where you setup and maintain your NIM environment
Client: It can be target for NIM Master-operations. Such as installation, updation.
NIM Classes: Machines, Network, Resources, Groups
Group: Collection of machines or resources
Resources : lpp_source, SPOT, mksysb, bosinst_data, script, image_data, installp_bundle
lsnim - lists the contents of the NIM database on the master
lsnim -c machines - shows the machine names
lsnim -l <object> - shows detailed information about an object
/etc/bootptab: This file is used by the bootpd daemon. With no operations this file empty. This
file gets updated automatically by the NIM master when a NIM operation is executed that
requires the client machine to boot from a NIM SPOT.
/etc/exports: Any sort of installation, boot, mksysb savevg etc operation requires the use of
NFS. This file will be updated with which locations are NFS exported from the master to the
client and the permissions associated with those exports.
/etc/hosts: It gives the relationship between a system's hostname and an IP address. If your IP address does not match up to the correct hostname, your installation fails.
/etc/niminfo: This file should always exist on the NIM master. This file is built when you first
initialize the NIM environment. This is required to run nim commands and perform nim
operations. If the /etc/niminfo file accidentally deleted you can rebuild the file
/tftpboot: The main purpose of this directory is to hold the boot images that are created by nim
when a boot or installation is initiated. This directory also holds informational files about the
clients that are having a boot or installation operation performed.
SPOT: Shared product object tree.Its a directory of code(installed filesets). That is used during
client booting procedure. This content equals to /usr file system(Binaries, executables and
libraries, header files and shell scripts).
boot images can exist in the /tftpboot directory. kernels will be stored in /tftpboot directory.
lsnim -t spot -> list the different available spots
To find the oslevel -r level of a SPOT or mksysb, check the lsnim -l output for the resource. If the SPOT and mksysb are not at the same level, installation will only work if the SPOT is at a higher level than the mksysb.
lpp_source: Similar to AIX install CD's. It contains AIX licensed program products(LPPs) in
Backup File Format.
An lpp_source with attribute simages=yes can be used to create a SPOT and to install the AIX operating system.
An lpp_source with attribute simages=no cannot be used to install the base AIX operating system (a quick check is shown below).
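A quick way to check the attribute (the lpp_source name lpp5300 is an example):
lsnim -l lpp5300 | grep simages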
lpp_source types: lsnim -t lpp_source
Mksysb:this resource is a file containing the image of the root volume group of machine.
It is used to restore a machine.
Defining a mksysb resource: nim -o define -t mksysb -a source=<source> -a server=master -a location=<location> <resource_name>
lsnim -t mksysb
bosinst_data: bosinst_data resource is a flat ascii file, like bosinst.data used for restoring system
backup images from tape or CD/DVD. this resource is for Push/pull installation of multiple
machines at the same time.
script: contains all the commands to perform customization. such as file system resizing,
additional user creation ..
To start a nim environment
1. Select a machine to be the master
2. Install AIX for the master
3. install NIM file sets : bos.sysmgt.nim.master, bos.sysmgt.nim.spot
4. Configure the selected machine as a NIM master using smitty nimconfig - mention the network name and interface: nimconfig -a netname=net_10_1_1 -a pif_name=en0 -a netboot_kernel=mp -a cable_type=tp -a client_reg=no
5. When machine added to NIM environment, the /etc/niminfo file is created.
6. To rebuild the NIM master /etc/niminfo file, use the nimconfig r command
7. To rebuild and recover a NIM client's /etc/niminfo file, use the niminit command. Ex: niminit -a master=<master_hostname> -a name=<client_name>
8. Create file systems for nim The lpp_source and SPOT resources are directories and the

related filesystems must be created.


9. Define basic resources(lpp_source, SPOT) smitty nim_mkres
10. Define the client (smitty nim_mkmac)
11. Start the client installation(smitty nim_task_inst)
12. Verify the /etc/bootptab
13. Verify that the boot files were created in /tftpboot

NIM Daemons: nimesis, nimd, bootpd, tftpd


NIM Master uses the bootpd and tftpd
The bootpd daemon will also use the /etc/bootptab file when a NIM client is configured to be
booted from the NIM master.
The tftpd daemon uses the /etc/tftpaccess.ctl file to determine which directory hierarchies it is allowed to share.
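A minimal /etc/tftpaccess.ctl restricting tftpd to the NIM boot directory (a sketch of the usual NIM setup):
# /etc/tftpaccess.ctl
allow:/tftpboot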

/var/adm/ras directory contains the nim master log files.


The /var/adm/ras/nimlog file contains information about failed NIM operations.
Use the alog command to view the NIM log: alog -f /var/adm/ras/nimlog -o
Estimation of minimum disk requirements: lpp_source 6 GB, SPOT 2 GB, mksysb images 40 GB
File System Hierarchy:
/tftpboot: This is used for NIM master boot images(Kernels) and info files for the NIM clients.
/export/lpp_source: Used for storing versions of AIX base level filesets in specific directories
/export/spot: Used for storing non-/usr SPOTs
/export/images: This is used for storing system backup images. Images can be created by NIM
mksysb.
/export/mksysb: This directory for the mksysb image files to install on clients-approx 1.5GB per
image.
/export/res: for bosinst_data, image_data and scripts.
/export/53 contains lppsource_53TL6 and spot_53TL6
NIM server size depends upon how many versions of AIX filesets, TLs, PTFs, Service Packs
Filesets for NIM Master.: bos.net.tcp.server, bos.net.nfs.server, bos.sysmgt.nim.master,
bos.sysmgt.nim.spot.
Master config: smitty nim -> Configure the NIM Environment -> Advanced Configuration -> Initialize the NIM Master Only (give details like network name and interface)
Making the lpp_source:
Copy the software from CD or DVD into the /export/53 file system: smitty bffcreate (give input device, software packages to copy, directory for storing the software packages)
Define it as a NIM resource: smitty nim -> Configure the NIM Environment -> Advanced Configuration -> Create Basic Installation Resources -> Create a New lpp_source (give resource server, lpp_source name, lpp_source directory)
Making the SPOT: smitty nim -> Configure the NIM Environment -> Advanced Configuration -> Create Basic Installation Resources -> Create a New SPOT (give resource server, input device, SPOT name and SPOT directory).
NIM Configuration:
Define a client machine: smitty nim -> Perform NIM Administration Tasks -> Manage Machines -> Define a Machine (NIM machine name, machine type (standalone), hardware platform type (chrp), kernel to use for network boot (mp), cable type (tp))
Display NIM network objects: lsnim -l -c networks
The Basic NIM installation resources:
1) one nim lpp_source and one SPOT
2) for mksysb installation mksysb resource and SPOT

Define an lpp_source: nim -o define -t lpp_source -a server=master -a location=/export/lpp_source/lpp5300 -a source=/dev/cd0 lpp5300
Creating a NIM lpp_source from a directory: nim -o define -t lpp_source -a server=master -a location=/export/lpp_source/lpp5300 lpp5300
Removing a NIM lpp_source: nim -o remove lpp5300
Check a NIM lpp_source: nim -Fo check lpp5304
Creating a NIM SPOT: nim -o define -t spot -a server=master -a location=/export/spot -a source=lpp5300 -a installp_flags=-aQg spot5300
Listing filesets in a SPOT: nim -o lslpp -a filesets=all -a lslpp_flags=-La spot6100-01
nim -o lslpp spot6100-01
Listing fixes in a SPOT: nim -o fix_query spot6100-01
TL of a SPOT: lsnim -l spot6100-01 | grep oslevel
Listing client filesets: nim -o lslpp -a filesets=all -a lslpp_flags=-La client
Removing the NIM SPOT: nim -o remove spot5300
Checking the SPOT: nim -o check spot5300
Resetting the NIM SPOT: nim -Fo check spot5300
Create a NIM client: nim -o define -t standalone -a if1="net_10_1_1 lpar55 0 ent0" LPAR55
Define NIM machines using smit nim_mkmac
Removing a NIM client definition: nim -o remove LPAR55
Installing NIM Clients:
Base Operating System Installation
System Clone installation
Automated customization after a generic BOS install.
BOS install through NIM:

nim -o allocate -a spot=spot5304 -a lpp_source=lpp5304 LPAR55
Initiate the install: nim -o bos_inst -a source=spot5304 -a installp_flags=agX -a accept_licenses=yes LPAR55
If the installation is unsuccessful, you need to reallocate the resources
Reset and deallocate NIM resources: nim -Fo reset LPAR55; nim -Fo deallocate -a subclass=all LPAR55
View the progress of installation: nim -o showlog -a log_type=boot LPAR55

Using SMIT to install a standalone client: smitty nim_bosinst -> select a target for the operation -> select the installation type -> select the lpp_source -> select the SPOT
After the initial program load: SMS Menu -> Setup Remote IPL -> Interpartition Logical LAN -> Select IP Parameters (client IP, server IP, gateway, subnet mask) -> Ping Test -> Execute Ping Test -> Select Boot Options -> Select Install/Boot Device (Network) -> select normal boot mode

Steps to migrate the NIM master to AIX 5L V5.3


1. Unmount all NFS mounts
2. Document the AIX and NIM master configuration (snap -ac, lsnim)
3. Perform a NIM database backup: smitty nim_backup_db -> Backup the NIM Database
4. Perform a mksysb of the NIM master
5. Insert the AIX 5L V5.3 CD volume 1 into the CD drive

Creating mksysb from NIM client:


nim -o define -t mksysb -a server=master -a source=lpar5 -a mk_image=yes -a location=/export/images/mksysb.lpar5 mksysb_lpar5
Backup the VIO server:
backupios -file /home/padmin/viobackup/VIO.mksysb -mksysb
Restoring the VIO server:
Define the mksysb resource: smitty nim_mkres (select mksysb) -> define the SPOT resource: smitty nim_mkres (select spot) -> perform the BOS installation
NIM Commands
nimconfig -a pif_name=en0 -a netname=net1 To initialise the NIM master with network name net1
nimconfig -r To rebuild the /etc/niminfo file which contains the variables for NIM
nim -o define -t lpp_source -a source=/dev/cd0 -a server=master -a location=/export/lpp_source/lpp_source1 lpp_source1 To define the lpp_source1 image in the /export/lpp_source/lpp_source1 directory from source cd0
nim -o define -t mksysb -a server=master -a location=/resources/mksysb.image mksysb1 To define mksysb resource mksysb1, from source /resources/mksysb.image on the master
nim -o remove inst_resource To remove the resource
nim -o showres lpp_source6100 To list the contents of the lpp_source
nim -o showres -a instfix_flags=T lppsource_61_01
nim -o check lpp_source1 To check the status of lpp_source lpp_source1
nim -o allocate -a spot=spot1 -a lpp_source=lpp_source1 node1 To allocate the resources spot1
and lpp_source1 to the client node1
nim -o bos_inst node1 To initialise NIM for the BOS installation on node1 with the allocated
resources
nim -o dkls_init dcmds To initialize the machine dcmds for diskless operation
nim -o dtls_init dcmds To initialize the machine dcmds for dataless operation
nim -o cust dcmds To initialize the machine dcmds for customize operation
nim -o diag dcmds To initialize the machine dcmds for diag operation
nim -o maint dcmds To initialize the machine dcmds for maintenance operation

nim -o define -t standalone -a platform=rspc -a if1="net1 dcmds xxxxx" -a cable_type1=bnc dcmds To define the machine dcmds as standalone with platform rspc, network net1, cable type bnc and MAC address xxxxx
nim -o unconfig master To unconfigure the NIM master
nim -o allocate -a spot=spot1 dcmds To allocate the resource spot1 to the machine dcmds
nim -o deallocate -a spot=spot1 dcmds To deallocate the resource spot1 from the machine dcmds
nim -o remove dcmds To remove the machine dcmds after removing all resources associated with it
nim -o reboot dcmds To reboot the client dcmds
nim -o define -t lpp_source -a location=/software/lpp1 -a server=master -a source=/dev/cd0 lpp1 To define lpp_source lpp1 on the master in the /software/lpp1 directory from source device /dev/cd0
lsnim To list the nim resources
lsnim -l dcmds To list the detailed info about the object dcmds
lsnim -O dcmds To list the operation dcmds object can support
lsnim -c resources dcmds To list the resources allocated to the machine dcmds
nimclient The client version of the nim command (the user can obtain the same results as nim on the server)
NIM Master Configuration:
Nim Installation:
File sets required for NIM installation:

bos.sysmgt.nim.master

bos.sysmgt.nim.client

bos.sysmgt.nim.spot

Put volume 1 of your media in the drive and run installp -acgXd /dev/cd0 bos.sysmgt.nim OR use smit install_all
Initial Setup: smit nim_config_env
Initializing the NIM master: nimconfig -a pif_name=en0 -a master_port=1058 -a netname=master_net -a cable_type=bnc
Or smitty nimconfig.
lsnim -l master - you will see information about the NIM master
lsnim -l | more - the boot resource created a /tftpboot directory to hold all of your boot images.
All NIM clients that are on the same subnet as this master will be assigned to master_net
network.
Set up the first lpp_source resource: create a file system called /export/nim/lpp_source.
nim -o define -t lpp_source -a location=/export/nim/lpp_source/53_05 -a server=master -a comments="5300-05 lpp_source" -a multi_volume=yes -a source=/dev/cd0 -a packages=all 5305_lpp
Or

smit nim_mkres -> select lpp_source


If you wish to add other volumes you can use
A) bffcreate the volumes into the lpp_source
B) Use NIM to add the volumes: smitty nim_res_op -> select the lpp_source -> select update -> give the target lpp_source and the source
lsnim -l 5305_lpp
Rstate: if this is not set to "ready for use" then you cannot use this resource. Running a check on the lpp_source will allow you to clear this up: nim -o check <lpp_source>
Set up the first SPOT resource: create a file system called /export/nim/spot.
nim -o define -t spot -a server=master -a source=5305_lpp -a location=/export/nim/spot -a auto_expand=yes -a comments="5300-05 spot" 5305_spot
OR
smitty nim_mkres -> select SPOT.
lsnim -l 5305_spot
Unconfiguring the NIM master: nim -o unconfig master.
Installing software on a client: smitty nim (or smit nim_inst_latest) -> Perform NIM Software Installation and Maintenance Tasks -> Install and Update Software -> Install Software -> select the client and the lpp_source.
Updating client software to the latest level: smitty nim (or nim_update_all) -> Perform NIM Software Installation and Maintenance Tasks -> Install and Update Software -> Update Installed Software to Latest Level -> select the client, then select the lpp_source.
Alternate disk install for new TLs: smitty nim (or nim_alt_clone) -> Perform NIM Software Installation and Maintenance Tasks -> Alternate Disk Installation -> Clone the rootvg to an Alternate Disk (select target machine and disk)
Alternate disk install for a new release: smit nim (or nimadm_migrate) -> Perform NIM Software Installation and Maintenance Tasks -> Alternate Disk Installation -> NIM Alternate Disk Migration -> Perform NIM Alternate Disk Migration (select client, disk name, lpp_source and SPOT name)
Performing installs from the client: smit nim (or nim_client_inst) -> install and update software
RTE installation:

Requires lpp_source and spot

Default is to install the BOS.autoi bundle

Define the client on the NIM master

Prepare the NIM master to supply RTE install resources to the client

Initiate the installation from the client

Defining the client: smit nim_mkmac (give hostname) -> enter -> give machine type: standalone, hardware platform type: chrp, communication protocol needed by client: nimsh, cable type: N/A.
Client on a new network: smit nim_mkmac -> give the hostname and press enter. Type of network attached to the network install interface: ent (Ethernet network) -> enter. Give the NIM network (network2), subnet mask (255.255.255.0) and the default gateway used by the machine and by the master.
Setting up the master to install: smit nim_bosinst -> select the target machine -> select the installation type (rte) -> select the lpp_source -> select the SPOT -> install the base OS on standalone clients
Checking the NIM master: lsnim -l client; tail /etc/bootptab (the bf field in /etc/bootptab specifies the boot file that will be transferred to the client using TFTP after the client contacts the master using BOOTP); ls -l /tftpboot (the boot file is actually a symbolic link); showmount -e (shows exported file systems)
Typical Install Sequence:

Client initiates a BOOTP request to the NIM server.

NIM server responds with information about the boot file (bootptab file)

Client initiates TFTP of the boot file from the NIM server.

Client runs boot file

Client NFS mounts SPOT and lpp_source

Install Operating system

Accessing SMS: Open the HMC -> select the LPAR -> Activate -> select the profile (default) -> click Open a Terminal Window -> Advanced -> select boot mode SMS -> OK -> Select Remote IPL -> select the adapter -> select Internet Protocol Version (IPv4) -> select Network Service (BOOTP) -> set the IP parameters (client IP, server IP, gateway, subnet mask) -> set the boot list (Select Install/Boot Device -> select Network -> select Network Service (BOOTP) -> select normal boot mode) -> exit SMS (Are you sure you want to exit SMS? Yes).
Monitoring progress on the master: lsnim -l client (info: prompting_for_data_at_console)
Installation: main BOS installation menu (select Install Now with Default Settings)
To view the bosinst log: nim -o showlog -a log_type=bosinst client
Listing valid operations for an object type: lsnim -Pot master
Listing valid operations for an object: lsnim -O client
Rebuilding the /etc/niminfo file: nimconfig -r
niminit -a name=client -a master=master
Backing up the NIM database: smitty nim_backup_db (default value is
/etc/objrepos/nimdb.backup)
Restore the previously created backup: smitty nim_restore_db
NIM Log files:
/var/adm/ras/nimlog
/var/adm/ras/nim.installp
/var/adm/ras/nimsh.log
/var/adm/ras/bosinstlog

/var/adm/ras/nim.setup
High availability ( Alternate NIM master)
/etc/niminfo: Lists the active NIM master, and a list of valid alternate masters.
Configure alternate NIM master: smit niminit_altmstr
Synchronizing NIM database: smit nim_altmstr ( select synchronize alternate masters NIM DB)

NPIV Short Notes


N_Port-ID virtualization is a technology that allows multiple logical partitions to access
independent physical storage through the same physical fibre channel adapter. Each partition
enabled by a WWPN.
To enable NPIV on the managed system you must create a VIO server at version 2.1 or later and assign at least one 8 Gb dual-port FC adapter to the VIO.
Then create virtual server and client FC adapter in each partition profile through HMC.
VIO server partition provides connection between virtual fibre channel server adapter and
physical FC adapter.

It requires power 6, NPIV enabled SAN switch, 8GB fibre channel adapter, A CISCO MDS
9000 SAN, HMC v7.3.4, VIO 2.1 with fix pack 20.1, sdd 1.7.2.0
To enable NPIV, assign the physical NPIV capable FC adapter to a VIO server and connect
virtual FC adapters on the client partition.
It requires

A running standalone source host with rootvg on a SAN LUN

NPIV capable FC adapter

Destination client partition that is currently running with rootvg on virtual SCSI disk.

On SAN switch:

Update firmware to a minimum level of Fabric OS(FOS)5.3.0. check with version

Enable NPIV capability on each port using portCfgNPIVPort 16, 1

On the HMC:

System Management -> Server -> <server name> -> Tasks -> Dynamic Logical Partitioning -> Virtual Adapters -> Actions -> Create -> FC Adapter -> enter the virtual slot number for the virtual FC server adapter, select the client partition and enter the client adapter ID.
Tasks -> Configuration -> Save Current Configuration
Create the virtual fibre channel client adapter in the client partition profile: Actions -> Create -> FC Adapter -> enter the virtual slot number, then select the VIO server partition and the server adapter ID.

On VIO server

Run the cfgdev command to detect the virtual fibre channel adapter
Get the list of all available virtual FC server adapters: lsdev -dev vfchost*
Get the physical FC server adapters: lsdev -dev fcs*
lsnports: checks the NPIV readiness of the FC adapter and of the SAN switch.
Use the vfcmap command to map the virtual adapter to the physical adapter: vfcmap -vadapter vfchost0 -fcp fcs1
List the mappings: lsmap -npiv -vadapter vfchost0
Do the zoning:
Check the WWPN of the client virtual FC adapter on the HMC (partition Tasks -> Properties)
zoneshow
Add a WWPN using zoneadd vios1, c0--------------
Save the new zoning configuration using cfgsave

Activate client partition

Install aix

Replacing a fibre channel adapter:

The adapter must be unconfigured or removed from the OS.

Mappings must be unconfigured

FC adapters and their child devices must be unconfigured or deleted.

lsdev -dev fcs4 -child; lsdev -dev fcs5 -child; rmdev -dev vfchost0 -ucfg; rmdev -dev fcs4 -recursive -ucfg

Adapter can remove using : diagmenu

AIX Troubleshooting
Troubleshooting AIX and HACMP
Core dump:

Find core dump files: /usr/samples/findcore/corepath, getvfsname

Debug and analyze the core: snapcore -d /tmp/coredir core.16928.24200405

Boot Process:

To check the boot process: alog -t boot -o

Failure to locate a boot image: the boot image on the disk may be corrupted. Access rootvg from bootable media (select Start Maintenance Mode for System Recovery -> Access a Root Volume Group -> 0 to continue) -> run the bosboot command

Corrupted FS / corrupted JFS log device / failing fsck / bad disk: boot from CD-ROM or mksysb tape -> select Start Maintenance Mode for System Recovery -> access rootvg -> format the default JFS log using /usr/sbin/logform /dev/hd8 -> run fsck -y /dev/hd1, hd2, hd3, hd4, hd9var (if fsck finds any errors, repair the FS using fsck -p /dev/hd#) -> lslv -m hd5 (to find the boot disk) -> recreate the boot image using bosboot -ad /dev/hdisk#, bootlist -m normal hdisk# -> shutdown -Fr
Remove much of the system configuration and save it to a backup directory: mount /dev/hd4 /mnt; mount /dev/hd2 /usr; mkdir /mnt/etc/objrepos/bak; cp /mnt/etc/objrepos/Cu* /mnt/etc/objrepos/bak; umount all; exit
Save the clean ODM database: savebase -d /dev/hdisk#
Check file system sizes using: df /dev/hd3; df /dev/hd4
Check whether the /etc/inittab file is missing
Check permissions: ls -al / .profile /etc/environment /etc/profile
Check ls -al /bin /bin/bsh /bin/sh /lib /u /unix
Check whether /etc/fsck and /sbin/rc.boot are missing: ls -l /etc/fsck /sbin/rc.boot
No login prompt: ps -ax | grep console -> check whether the getty process is running or not; lscons
System dump:

Estimating dump size: sysdumpdev -e
To view the current dump device: sysdumpdev -l (/dev/hd6)
To specify the primary dump device: sysdumpdev -P -p /dev/hd7
To specify the secondary dump device: sysdumpdev -P -s /dev/hd7
Create a dump device: estimate the size with sysdumpdev -e; mklv -y hd7 -t sysdump rootvg 7
Check the dump resources used by the system dump: /usr/lib/ras/dumpcheck -p
Change the size of a dump device: chps -s 1 hd6
Always allow system dump: sysdumpdev -K
Get the last dump information: sysdumpdev -L

TCP/IP troubleshooting:

Traceroute shows each gateway that the packet traverses on its way to the target host. traceroute uses the UDP protocol, while ping uses ICMP. If you receive an answer from the local gateway but nothing beyond it, the problem is with the remote host or network. If you receive nothing at all, the problem is in the local network. A minimal command sketch follows.
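A minimal connectivity-check sketch (the hostname remotehost is an example):
ping -c 3 remotehost        # ICMP reachability
traceroute remotehost       # shows each gateway hop (UDP probes)
netstat -rn                 # verify the local routing table and default gateway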

NFS troubleshooting:

Verify that the network connections

Verify inetd, portmap and biod daemons are running in the client

Verify valid mount point exists

Verify that the server is up and running using rpcinfo -p server
Verify that the mountd, portmap and nfsd daemons are running on the NFS server using rpcinfo -u server mount, rpcinfo -u server portmap and rpcinfo -u server nfs
Check the /etc/exports file using showmount -e server
Identifying the cause of slow access times for NFS: stopsrc -s biod; startsrc -s biod
Use the nfsstat -s and nfsstat -c commands to determine whether the client or server is retransmitting large blocks.

NFS error messages: mountd will not start, server not responding: port mapper failure
RPC timed out, mount: access denied, mount: you are not allowed

LVM Troubleshooting:

VG lost:
1. NON rootvg

exportvg data_vg
remove the bad disk from the ODM using rmdev -l hdisk# -d
create new disks and reboot
if you have a savevg backup: restvg -f /dev/rmt0 hdisk#
if you don't have a savevg backup, recreate the VG, LVs and FSs
restore the FS data using restore -rqvf /dev/rmt0

2. Rootvg
shut down the system and replace the bad disks
boot in maintenance mode
restore from a mksysb image (power off the machine -> turn on the power -> place the bootable media -> press 5 / F5 -> when the installation screen appears, select Start Maintenance Mode for System Recovery -> select Install from a System Backup)
import each VG into the new ODM.
Boot Problem Management:
LED 553: Access the rootvg. Issue df -k. Check if /tmp, /usr or / are full.
LED 553: Access the rootvg. Check /etc/inittab (empty, missing or corrupt?). Check /etc/environment.
LED 551, 555, 557: Access the rootvg. Re-create the BLV: # bosboot -ad /dev/hdiskx
LED 551, 552, 554, 555, 556, 557: Access rootvg before mounting the rootvg filesystems. Re-create the JFS log: # logform /dev/hd8, and run fsck afterwards.
LED 552, 554, 556: Run fsck against all rootvg filesystems. If fsck indicates errors (not an AIX V4 filesystem), repair the superblock (each filesystem has two superblocks, one in logical block 1 and a copy in logical block 31, so copy block 31 to block 1): # dd count=1 bs=4k skip=31 seek=1 if=/dev/hd4 of=/dev/hd4
LED 551: Access rootvg and unlock the rootvg: chvg -u rootvg
LED 523-534: ODM files are missing or inaccessible. Restore the missing files from a system backup.
LED 518: Mount of /usr or /var failed? Check /etc/filesystems. Check the network (remote mount), the filesystems (fsck) and the hardware.

VIO Short Notes


VIO Short Note
PowerVM: It allows to increase the utilization of servers. Power VM includes Logical
partitioning, Micro Partitioning, Systems Virtualization, VIO, hypervisor and so on.
Simultaneous Multi Threading : SMT is an IBM microprocessor technology that allows 2
separate H/W instruction streams to run concurrently on the same physical processor.
Virtual Ethernet : VLAN allows secure connection between logical partitions without the need
for a physical IO adapter or cabling. The ability to securely share Ethernet bandwidth across
multiple partitions increases H/W utilization.
Virtual SCSI: VSCSI provides secure communication between the partitions and VIO
server.The combination of VSCSI and VIO capabilities allows you to share storage adapter

bandwidth and to subdivide single large disks into smaller segments. The adapters and disks can
shared across multiple partitions, increase utilization.
VIO server : Physical resources allows you to share the group of partitions.The VIO server can
use both virtualized storage and network adapters, making use of VSCSI and virtual Ethernet.
Redundant VIO server: AIX or linux partitions can be a client of one or more VIO servers at
the same time. A good strategy to improve availability for sets of client partitions is to connect
them to 2 VIO servers. The reason for redundancy is ability to upgrade latest technologies
without affecting production workloads.
Micro-Partitioning: sharing processing capacity among one or more logical partitions. The benefit of Micro-Partitioning is that it allows significantly increased overall utilization of processor resources. A micro partition must have at least 0.1 processing units. The maximum number of partitions on any System p server is 254.
Uncapped Mode : The processing capacity can exceed the entitled capacity when resources are
available in the shared processor pool and the micro partition is eligible to run.
Capped Mode : The processing capacity can never exceed the entitled capacity.
Virtual Processors :A virtual processor is a representation of a physical processor that is
presented to the operating system running in a micro partition.
If a micro partition is having 1.60 processing units , and 2 virtual processors. Each virtual
processor will have 0.80 processing units.
Dedicated processors : Dedicated processors are whole processors that are assigned to dedicated
LPARs . The minimum processor allocation for an LPAR is one.
IVM(Integrated virtualization manager): IVM is a h/w management solution that performs a
subset of the HMC features for a single server, avoiding the need of a dedicated HMC server.
Live partition Mobility: Allows you to move running AIX or Linux partitions from one
physical Power6 server to another without disturb.
VIO
Version for VIO 1.5
For VIO command line interface is IOSCLI
The environment for VIO is oem_setup_env
The command for configuration through smit is cfgassist
Initial login to the VIO server is padmin
Help for vio commands ex: help errlog
Hardware requirements for creating VIO :
1. Power 5 or 6
2. HMC
3. At least one storage adapter
4. If you want to share Physical disk then one big Physical disk
5. Ethernet adapter

6. At least 512 MB memory

Latest version for vio is 2.1 fixpack 23


Copying the Virtual IO Server DVD media to a NIM server:
mount /cdrom
cd /cdrom
cp /cdrom/bosinst.data /nim/resources
Execute the smitty installios command
Using smitty installios you can install the VIO software.
The topas -cecdisp flag shows the detailed disk statistics
The viostat -extdisk flag shows detailed disk statistics.
wklmgr and wkldagent are for handling the workload manager. They can be used to record performance data, which can be viewed with wkldout.
The chtcpip command is for changing TCP/IP parameters
The viosecure command is for handling the security settings
mksp: creates a storage pool
chsp: adds or removes physical volumes from a storage pool
lssp: lists information about storage pools
mkbdsp: attaches storage from a storage pool to a virtual SCSI adapter
rmbdsp: removes storage from a virtual SCSI adapter and returns it to the storage pool
The default storage pool is rootvg (a usage sketch follows)
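A minimal storage pool usage sketch, assuming a hypothetical pool clientsp on hdisk3 and a client mapping on vhost0:
mksp clientsp hdisk3                                          # create the pool
lssp                                                          # list pools
mkbdsp -sp clientsp 20G -bd lpar1_rootvg -vadapter vhost0     # carve 20 GB and map it to vhost0
rmbdsp -sp clientsp -bd lpar1_rootvg                          # remove the backing device, returning the space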
Creation of VIO server using HMC version 7 :
Select the managed system -> Configuration -> Create Logical Partition -> VIO server
Enter the partition name and ID.
Check the mover service box if the VIO server partition to be created will be supporting partition
mobility.
Give a partition profile name ex:default
Processors : You can assign entire processors to your partition for dedicated use, or you can
assign partial processors units from the shared processor pool. Select shared.
Specify the minimum, desired and maximum processing units.
Specify minimum, desired and maximum virtual processors. And select the uncapped weight is
191
The system will try to allocate the desired values
The partition will not start if the managed system cannot provide the minimum amount of
processing units.
You cannot dynamically increase the amount of processing units to more than the maximum,
Assign the memory also min, desired and max.

The ratio between minimum and maximum amount of memory cannot be more than 1/64
IO selects the physical IO adapters for the partition. Required means the partition will not be able
to start unless these are available in this partition. Desired means that the partition can start also
without these adapters. A required adapter can not be moved in a dynamic LPAR operation.
VIO server partition requires a fiber channel adapter to attach SAN disks for the client partitions.
It also requires an Ethernet adapter for shared Ethernet adapter bridging to external networks.
VIO requires minimum of 30GB of disk space.
Create Virtual Ethernet and SCSI adapters: increase the maximum no of virtual adapters to 100
The maximum no of adapters must not set more than 1024.
In actions -> select create -> Ethernet adapter give Adapter ID and VLAN id.
Select Access External Network Check Box to use this adapter as a gateway between internal
and external network.
And also create SCSI adapter also.
VIO server S/W installation :
1. Place the CD/DVD in P5 Box
2. Activate the VIO server by clicking the activate. Select the default partition
3. Then check the Open terminal window or console section and click the advanced. And

OK.
4. Under the boot mode drop down list select SMS.

After installation is complete login with padmin and press a(for s/w maintenance agreement
terms)
License accept for accepting the license.
Creating a shared Ethernet adapter
1. lsdev -virtual (check the virtual Ethernet adapter)
2. lsdev -type adapter (check the physical Ethernet adapter)
3. Use the lsmap -all -net command to check the slot numbers of the virtual Ethernet adapter.
4. mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
5. lsmap -all -net
6. Use cfgassist or the mktcpip command to configure TCP/IP, or
7. mktcpip -hostname vio_server1 -inetaddr 9.3.5.196 -interface ent3 -netmask 255.255.244.0 -gateway 9.3.4.1


Defining virtual disks
Virtual disks can either be whole physical disks, logical volumes or files. The physical disks can
be local or SAN disks.
Create the virtual disks
1. Log in as padmin and run the cfgdev command to rebuild the list of visible devices.
2. lsdev -virtual (make sure the virtual SCSI server adapters are available, e.g. vhost0)
3. lsmap -all to check the slot numbers and vhost adapter numbers.
4. mkvg -f -vg rootvg_clients hdisk2 - creates the rootvg_clients VG
5. mklv -lv dbsrv_rvg rootvg_clients 10G

Creating virtual device mappings:
1. lsdev -vpd | grep vhost
2. mkvdev -vdev dbsrv_rvg -vadapter vhost2 -dev dbsrv_rvg
3. lsdev -virtual
4. lsmap -all

The fget_config -Av command is provided on the IBM DS4000 series for a listing of LUN names.
Virtual SCSI Optical devices:
A dvd or cd device can be virtualized and assigned to client partitions. Only one VIO client can
access the device at a time.
Steps :
1. let the DVD drive assign to VIO server
2. Create a server SCSI adapter using the HMC.
3. Run the cfgdev command to get the new vhost adapter. Check using lsdev -virtual
4. Create the virtual device for the DVD drive (mkvdev -vdev cd0 -vadapter vhost3 -dev vcd)
5. Create a client scsi adapter in each lpar using the HMC.
6. Run the cfgmgr

Moving the drive:
1. Find the vscsi adapter using lscfg | grep Cn (n is the slot number)
2. rmdev -Rl vscsiN
3. Run cfgmgr in the target LPAR
Use the dsh command to find which LPAR is currently holding the drive.
Unconfiguring the DVD drive:
1. rmdev -dev vcd -ucfg
2. lsdev -slots
3. rmdev -dev pci5 -recursive -ucfg
4. cfgdev
5. lsdev -virtual

Mirroring the VIO rootvg:
1. chvg -factor 6 rootvg (rootvg can include up to 5 PVs with 6096 PPs)
2. extendvg -f rootvg hdisk2
3. lspv
4. mirrorios -f hdisk2
5. lsvg -lv rootvg
6. bootlist -mode normal -ls

Creating Partitions :
1. Create new partition using HMC with AIX/linux
2. give partition ID and Partition name
3. Give proper memory settings(min/max/desired)
4. Skip the physical IO
5. give proper processing units (min/desired/max)
6. Create virtual ethernet adapter ( give adapter ID and VLAN id)
7. Create virtual SCSI adapter
8. In optional settings

Enable connection monitoring


Automatically start with managed system
Enable redundant error path reporting
9. bootmodes select normal

Advanced Virtualization:
Providing continuous availability of VIO servers : use multiple VIO servers for providing highly
available virtual scsi and shared Ethernet services.
IVM supports a single VIO server.
Virtual scsi redundancy can be achieved by using MPIO and LVM mirroring at client partition
and VIO server level.
Continuous availability for VIO

Shared Ethernet adapter failover

Network interface backup in the client

MPIO in the client with SAN

LVM Mirroring

Virtual Scsi Redundancy:


Virtual scsi redundancy can be achieved using MPIO and LVM mirroring.
Client is using MPIO to access a SAN disk, and LVM mirroring to access 2 scsi disks.
MPIO: MPIO for highly available virtual scsi configuration. The disks on the storage are
assigned to both virtual IO servers. The MPIO for virtual scsi devices only supports failover
mode.
Configuring MPIO:

Create 2 virtual IO server partitions

Install both VIO servers

Change fc_err_recov to fast_fail and dyntrk (lets AIX tolerate cabling changes) to yes:
(chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm)

Reboot the VIO servers

Create the client partitions. Add virtual Ethernet adapters

Use the fget_config command (fget_config -Av) to get the LUN-to-hdisk mappings.
Use the lsdev -dev hdiskN -vpd command to retrieve the information.
The reserve_policy for each disk must be set to no_reserve (chdev -dev hdisk2 -attr reserve_policy=no_reserve).
Map the hdisks to vhost adapters (mkvdev -vdev hdisk2 -vadapter vhost0 -dev app_server).

Install the client partitions.

Configure the client partitions

Testing MPIO

Configure the client partitions:

Check the MPIO configuration (lspv, lsdev -Cc disk)
Run lspath
Enable the health check mode (chdev -l hdisk0 -a hcheck_interval=50 -P)
Enable the vscsi client adapter path timeout (chdev -l vscsi0 -a vscsi_path_to=30 -P)
Change the priority of a path (chpath -l hdisk0 -p vscsi0 -a priority=2)

Testing MPIO:

lspath
Shut down VIO2
lspath
Start VIO2
lspath

LVM Mirroring: This is for setting up highly available virtual scsi configuration. The client
partitions are configured with 2 virtual scsi adapters. Each of these virtual scsi adapters is
connected to a different VIO server and provides one disk to the client partition.
Configuring LVM Mirroring:

Create 2 virtual IO partitions, select one Ethernet adapter and one storage adapter

Install both VIO servers

Configure the virtual scsi adapters on both servers

Create client partitions. Each client partition needs to be configured with 2 virtual scsi
adapters.

Add one or two virtual Ethernet adapters

Create the volume group and logical volumes on VIO1 and VIO2

A logical volume from the rootvg_clients VG should be mapped to each of the 4 vhost devices (mkvdev -vdev nimsrv_rvg -vadapter vhost0 -dev vnimsrv_rvg)
lsmap -all

When you bring up the client partitions you should have hdisk0 and hdisk1. Mirror the
rootvg.

lspv
lsdev -Cc disk
extendvg rootvg hdisk1
mirrorvg -m rootvg hdisk1

Test LVM mirroring

Testing LVM mirroring:

lsvg -l rootvg
Shut down VIO2
lspv hdisk1 (check the PV state and stale partitions)
Reactivate VIO2 and run varyonvg rootvg
lspv hdisk1
lsvg -l rootvg

Shared Ethernet adapter: It can be used to connect a physical network to a virtual Ethernet
network. Several client partitions to share one physical adapter.
Shared Ethernet Redundancy: This is for temporary failure of communication with external
networks. Approaches to achieve continuous availability:

Shared Ethernet adapter failover

Network interface backup

Shared Ethernet adapter failover: It offers Ethernet redundancy. In a SEA failover configuration
2 VIO servers have the bridging functionality of the SEA. They use a control channel to
determine which of them is supplying the Ethernet service to the client. The client partition gets
one virtual Ethernet adapter bridged by 2 VIO servers.
Requirements for configuring SEA failover:

One SEA on one VIOs acts as the primary adapter and the second SEA on the second
VIOs acts as a backup adapter.

Each SEA must have at least one virtual Ethernet adapter with the access external
network flag(trunk flag) checked. This enables the SEA to provide bridging functionality
between the 2 VIO servers.

This adapter on both the SEAs has the same pvid

Priority value defines which of the 2 SEAs will be the primary and which is the
secondary. An adapter with priority 1 will have the highest priority.

Procedure for configuring SEA failover:

Configure a virtual Ethernet adapter via DLPAR. (ent2)


Select the VIOS -> click the Tasks button -> choose DLPAR -> Virtual Adapters
Click Actions -> Create -> Ethernet Adapter

Enter Slot number for the virtual Ethernet adapter into adapter ID

Enter the Port virtual Lan ID(PVID). The PVID allows the virtual Ethernet
adapter to communicate with other virtual Ethernet adapters that have the same
PVID.

Select IEEE 802.1

Check the box access external network

Give the virtual adapter a low trunk priority

Click OK.

Create another virtual adapter to be used as a control channel on VIOS1.( give another
VLAN ID, do not check the box access external network (ent3)

Create the SEA on VIO1 with the failover attribute (mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3). Ex: ent4

Create a VLAN Ethernet adapter on the SEA to communicate with the external VLAN-tagged network (mkvdev -vlan ent4 -tagid 222). Ex: ent5

Assign an IP address to SEA VLAN adapter on VIOS1. using mktcpip

Same steps to VIO2 also. ( give the higher trunk priority:2)

Client LPAR Procedure:



Create client LPAR same as above.

Network interface backup : NIB can be used to provide redundant access to external networks
when 2 VIO servers used.
Configuring NIB:

Create 2 VIO server partitions

Install both VIO servers

Configure each VIO server with one virtual Ethernet adapter. Each VIO server needs to
be a different VLAN.

Define SEA with the correct VLAN ID

Add virtual Scsi adapters

Create client partitions

Define the EtherChannel using smitty etherchannel (a command-line sketch follows)
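A minimal sketch of creating the NIB EtherChannel from the AIX client command line instead of SMIT (adapter names and the ping address are assumptions):
mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0 -a backup_adapter=ent1 -a netaddr=9.3.5.1
# netaddr is the address pinged to detect failure of the primary path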

Configuring multiple shared processor pools:
Configuration -> Shared Processor Pool Management -> select the pool name
VIOs Security:
Enable basic firewall settings: viosecure -firewall on
View all open ports in the firewall configuration: viosecure -firewall view
To view current security settings: viosecure -view -nonint
Change system security settings to default: viosecure -level default
List all failed logins: lsfailedlogin
Dump the global command log: lsgcl
Backup:
Create a mksysb file of the system on an NFS mount: backupios -file /mnt/vios.mksysb -mksysb
Create a backup of all structures of VGs and/or storage pools: savevgstruct vdiskvg (data will be stored under /home/ios/vgbackups)
List all backups made with savevgstruct: restorevgstruct -ls
Back up the system to an NFS-mounted file system: backupios -file /mnt
Performance Monitoring:
Retrieve statistics for ent0: entstat -all ent0
Reset the statistics for ent0: entstat -reset ent0
View disk statistics: viostat 2
Show a summary for the system: viostat -sys 2
Show disk stats by adapter: viostat -adapter 2
Turn on disk performance counters: chdev -dev sys0 -attr iostat=true
topas -cecdisp
Link aggregation on the VIO server:
Link aggregation means you can give one IP address to two network cards and connect them to two different switches for redundancy. Only one network card is active at a time.
Devices -> Communication -> EtherChannel / IEEE 802.3ad Link Aggregation -> Add an EtherChannel / Link Aggregation
Select ent0 and mode 8023ad
Select a backup adapter for redundancy, e.g. ent1
A virtual adapter named ent2 will be created automatically.
Then assign the IP address: smitty tcpip -> Minimum Configuration and Startup -> select ent2 -> put the IP address (a VIOS command-line sketch follows)
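A minimal VIOS command-line equivalent of the SMIT steps above (adapter names and addresses are assumptions):
mkvdev -lnagg ent0 -attr mode=8023ad backup_adapter=ent1     # creates e.g. ent2
mktcpip -hostname vios1 -inetaddr 9.3.5.197 -interface ent2 -netmask 255.255.255.0 -gateway 9.3.5.1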
VLANs:

WPAR Short notes


WPAR & 6.1
Its a software based virtualization solution for creating and managing multiple individual AIX
OS environments within a single AIX based LPAR.
Live partition Mobility: Its a PowerVM feature. Its ability to migrate a running LPAR between
systems
WPARs reduce the number of managed LPARs
Inside the WPAR, the application having the following benefits:
Private execution environments
Dedicated network addresses and filesystems.
Interprocess communication that is restricted to processes executing only in the same WPAR
System WPAR: It is an instance of AIX. It contains dedicated writable filesystems and system service daemons, and it can share the global environment's /usr and /opt filesystems in read-only mode.
Application WPAR: It is a WPAR that hosts only a single application or process. It shares the file systems of the global environment and does not run any system service daemons.
It is not possible to log in remotely to an application partition.
Global Environment: It owns all physical and virtual resources of the LPAR. It allocates the
resources to the WPAR. Most performance and tuning activities are performed from this
environment. Sys admin must be logged in to the global environment to create,activate and
manage WPARs.
Processes: A process running inside a WPAR can only see other processes in the same WPAR. Processes running in other WPARs and in the global environment are invisible to it. Processes can only access resources that are explicitly available inside the WPAR.
Users: Application WPARs inherit their user profiles from the global environment, so they have
same privileges that the global environment does. System WPARs maintain independent set of
users.
Resources: Resources created or owned by the global environment can only be used by the global environment unless they are explicitly shared with a WPAR. Resources created or owned by a WPAR are visible only to that WPAR and to the global environment. To provide isolation of filesystems between system WPARs, a separate directory tree under the /wpars directory is created for each WPAR. Inside this directory each WPAR maintains its own home, tmp and var directories. A system WPAR will also mount the global environment's /opt and /usr filesystems read-only. Application WPARs do not create their own filesystems, so they are usually allowed access to the filesystems owned by the global environment.
Each system WPAR assigned its own network address. Communication between WPARs
running under the same instance via the loopback interface.

When to use workload partitions:

Improve application availability

Simplify OS and APP management

Manage application resource utilization

Upper limit of the no. of WPARs that can be executed within LPAR is 8192.
WPAR administration:

To use main WPAR menu: smit wpar

To use application WPAR menu: smit manage_appwpar

To use system WPAR menu: smit mange_syswpar

Create a system WPAR: mkwpar -n wpar001
mkwpar -n wpar001 -N address=9.3.5.182
First the OS creates and mounts the WPAR's file systems. Next it populates them with the necessary system files. Finally it synchronizes the root part of the installed software. When the creation of the new WPAR is complete, it is left in the Defined state.
Starting a WPAR:
lswpar (Defined state)
Name      State   Type   Hostname   Directory
wpar001   D       S      wpar001    /wpars/wpar001
startwpar wpar001 (mounts the file systems and adds the IP address)
lswpar (Active state)
Name      State   Type   Hostname   Directory
wpar001   A       S      wpar001    /wpars/wpar001
You can log in to the WPAR using clogin from the global environment, or via telnet; clogin does not depend on a TCP/IP connection.
To determine whether you are inside a WPAR or in the global environment, execute the uname -W command. It returns 0 if you are in the global environment, and a value other than 0 if you are inside a WPAR (a quick check is sketched below).
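A quick check sketch (wpar001 is the example WPAR used above; clogin accepts a command to run inside the WPAR):
uname -W                    # returns 0 in the global environment
clogin wpar001 uname -W     # returns the non-zero WPAR ID when run inside the WPAR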
Stopping a WPAR: shutdown -F (stopping the WPAR from inside the WPAR)
stopwpar wpar001 (stopping the WPAR from the global environment)
-F (stopping a WPAR from the global environment, forcing the WPAR shutdown)
-N (shut down immediately)
Rebooting a WPAR: shutdown -Fr (rebooting the WPAR from inside the WPAR)
rebootwpar wpar001 (reboot from the global environment)
Changing a WPAR:
You can change a WPAR's name only when the WPAR is in the Defined state:
chwpar -n wpar001.
Broken state: the state a WPAR gets when it ends up undefined after a failure.
Investigation:
Check the logs (/var/adm/ras, /var/adm/WPARs)
Check the processes: ps -@ (shows processes by WPAR)
Removing a WPAR: verify the WPAR is in the Defined state, take a backup, then rmwpar wpar001.
WPAR states:
Defined D WPAR is created but not started
Active A Its a normal state.
Broken B When failure occurs
Transitional T WPAR is in the process of changing from one state to another.
Paused P This state is reached when a WPAR has had a successful checkpoint or restore data.
Mobile partitions can be created with the -c flag
Creating an application WPAR: execute the application within the WPAR using wparexec. Ex:
wparexec /hga/myapp
The wparexec command starts myapp immediately after creation. This type of WPAR only exists
while the application is running. When the application ends, the WPAR also ends and all of its
resources are freed.
If the application WPAR has a dependency on a filesystem that is not mounted, it will mount the
file system automatically.
lswpar (Transitional state)
Name  State  Type  Hostname  Directory
myapp  T  A  myapp  /
lswpar (Active state)
Name  State  Type  Hostname  Directory
myapp  A  A  myapp  /
lswpar (it disappears when the application ends)
File systems:
Types of file systems: namefs, jfs, jfs2, NFS.
By default the system creates /, /tmp, /home and /var as jfs2, and /opt, /proc and /usr as namefs.
Creating a filesystem for a running WPAR: crfs -v jfs2 -m /wpars/wpar001/newfs -u wpr00 -a
logname=INLINE -a size=1G
Changing a file system: chfs -a size=512M /wpars/wpar001/newfs
Backing up the global environment: stop all WPARs, then run the mksysb, mkdvd or mkcd
command with the -N flag.
IBM Workload Partition Manager for AIX is a tool for monitoring and managing WPARs.
AIX 6.1:
Workload Partition Manager (an extra software package that needs to be installed)
Live application mobility (move a partition from one system to another)
Automatically move a partition if necessary
AIX 6 requisites: POWER4, POWER5 or POWER6 hardware


WPAR: a lightweight miniature AIX running inside AIX. It is software partitioning, not hypervisor partitioning.
WPARs share the global system resources with the single copy of AIX: they share the AIX OS kernel and
they share processors, memory and I/O adapters from the global resources.
Each WPAR shares /usr and /opt with the global AIX read-only.
Private filesystems: /, /tmp, /var, /home.
Each WPAR has its own network IP address and hostname
and a separate administrative and security domain.
2 types of wpar
System
Application
Live application mobility: moving a running WPAR to another machine or LPAR.
Install a new machine (move WPARs over in a very fast way)
Multi-system workload balancing (load balancing of CPUs, memory and I/O)
Use mobility when upgrading a machine (AIX or firmware) or for repair
System WPAR: it is a copy of AIX.
Create it and it goes to the Defined state; run it to activate it; you can stop it and, if it is no longer
required, remove it.
It is a complete virtualized OS environment (runs multiple services and applications)
Runs services like inetd, cron, syslog
Has its own root user, users and groups
Does not share any file systems with other WPARs or the global system.

Application WPAR:
Isolates an individual application
Lightweight, one process, which can start further processes
Created and started in seconds
Starts when created; automatically removed when the application stops
Shares global file systems
Good for HPC (high performance computing), i.e. long-running applications

WPAR manager:
Install the WPAR agent and it will talk to all the WPARs on a machine
The WPAR manager can see the WPARs running on the machine
It communicates with the WPARs through a web browser (a web server is running)
It is a graphical interface
Create, remove and move WPARs
Start and stop them
Monitoring and reporting
Manual relocation
Automated relocation

Workload application mobility: relocate
On the WPAR manager, select the WPAR -> click relocate -> select the target AIX
chkptwpar -k: freezes the WPAR, saves the WPAR processes and state to a statefile on NFS, and
kills the WPAR processes once they are no longer needed
restartwpar: this command takes the statefile, rebuilds the WPAR processes and state, and starts
the WPAR.

Reasons for using WPARs:
Reduces system admin time: many applications on one instance reduces install and
update of AIX, monitoring, backup, recovery etc.
Application encapsulation: treat applications as isolated units (create/remove, start/stop,
checkpoint/resume)
Rapid environment creation for a new application
Reduces costs: only one copy of AIX plus shared access to AIX disks
Simple to move an application to a different machine: application mobility, performance
balancing

Starting and stopping a WPAR:
Access the WPAR manager console. It is a secured link on port 14443 (hostname:14443/ibm/console); log on.
Managed systems (an entire physical server or an LPAR) and workload partitions are
under the Resource Views tab.
A WPAR in the Active state is running; Defined means it is not running; a green tick means
mobility is enabled; Transitional means an operation is in progress.
Select WPARs in the Defined state -> Actions -> Start -> OK
Select WPARs in the Active state -> Actions -> Stop -> select normal stop/hard stop/force WPAR to
stop -> OK
Monitor the action using Monitoring -> Task Activity
(OR) run /usr/sbin/stopwpar -h sec_wpar on the global system.

Application mobility (moving a WPAR between machines):
Check whether the WPAR is mobility-enabled; if not, you cannot move it.
Select the WPAR -> Actions -> Relocate -> click Browse -> OK
Monitor the activity using Task Activity from the Monitoring tab.

Creating a WPAR (quick way):
New -> give the WPAR name -> give the hostname -> give the managed system -> select
system/application -> if it is an application WPAR give the application name and select/deselect
enable mobility; if it is a system WPAR select/deselect use private /usr and /opt and enable mobility.
Give the NFS server and remote directory if you selected enable mobility.
(OR) /usr/sbin/mkwpar -c -h wparname -n wp13 -M dev=/nfs/wp13root directory=/
host=managed_system mountopts=rw vfs=nfs -R active=yes -S
It is created in the Defined state, so Actions -> Start.
Creating a WPAR (detailed way):
Guided Activities -> Create Workload Partition -> Next -> select the partition
type (system/application) -> give the partition name -> Next -> deploy this WPAR to an existing
managed system -> give the managed system name -> give the password -> click on "start workload
partition when system starts" and "start the WPAR immediately upon deployment" ->
Next -> enable relocation -> give the network details -> give the NFS server name and remote directory.

Mobility between POWER4, POWER5 and POWER6 machines:
Compatibility check: select the WPAR -> click on Actions -> Compatibility (it shows managed
systems that meet the basic requirements for relocating the selected WPAR).
A WPAR cannot be moved live between different machine types, e.g. POWER4 to POWER5. First
stop the WPAR and remove it with the "preserve local file systems on server" option. The WPAR
is then in the undeployed state. Then click the WPAR -> Actions -> Deploy -> enter the target system,
click on "start the WPAR immediately upon deployment" and "preserve file systems" -> OK.

WPAR properties:
Change properties: select the WPAR -> Actions -> View/Modify WPAR
Change the processors using Resource Control
Access and control via the command line:
lswpar - gives WPAR details (name, state, type, hostname, directory)
startwpar mywpar
stopwpar -hN mywpar
lswpar -L mywpar
mkwpar -n first
mkwpar -n <name> -h <hostname> -N netmask=<netmask> address=<address>
-c for checkpoint (mobility)
-M directory=/ vfs=nfs host=9.9.9.9 dev=/nfs/wp13 (and similarly for /opt)
startwpar wp13
clogin wp13

Application mobility:
Source AIX: /opt/mcr/bin/chkptwpar wp13 -d /wpars/wp13/tmp/state -o
/wpars/wp13/tmp/checkpoint.log -k
rmwpar -p wp13
Target AIX: /opt/mcr/bin/restartwpar wp13 -d /wpars/wp13/tmp/state -o
/wpars/wp13/tmp/restart.log
Running an application WPAR:
wparexec -n temp -h hostname /usr/bin/sleep 30
Process: starting WPAR, mounting, loading, stopping
Comparing a WPAR and global AIX:
In the WPAR: df -n (/, /home, /tmp, /var are NFS mounts; /opt and /usr are read-only)
host wp13 shows the WPAR hostname and IP address
All IP addresses of WPARs must be placed as IP aliases on the global AIX
No physical volumes are visible in WPARs
No paging space is visible in WPARs
All processes that reside in WPARs are also visible running on the global AIX
ps -ef -@ | pg (an extra column shows the WPAR name)
ps -ef -@ wp13 | pg
topas -@ on the global AIX
topas inside a WPAR gives some results for the WPAR and some for the global AIX: yellow values
are global AIX and white are the WPAR.

HACMP Complete Configuration


Check the filesets:
labserver101:/usr/es/sbin/cluster > lslpp -l | grep rsct
labserver101:/usr/es/sbin/cluster > lslpp -l | grep cluster
labserver101 /etc/hosts :
16.138.184.177 labserver101

16.138.184.171 labserver101_svc
10.1.1.1 labserver101_boot
11.1.1.1 labserver101_stdby
16.138.184.181 labserver201
10.1.1.2 labserver201_boot
11.1.1.2 labserver201_stdby
labserver201 /etc/hosts :
16.138.184.177 labserver101
16.138.184.171 labserver101_svc
10.1.1.1 labserver101_boot
11.1.1.1 labserver101_stdby
16.138.184.181 labserver201
10.1.1.2 labserver201_boot
11.1.1.2 labserver201_stdby
From labserver101 -> smitty hacmp -> Extended Configuration -> Define the Cluster name
RG Name : saprg
Participating nodes : labserver101 labserver201

Enhanced concurrent VG creation from labserver101:
labserver101:/usr/es/sbin/cluster > lspv | grep hdisk1
hdisk1 0005e8e07d6b9991 None
labserver101:/usr/es/sbin/cluster > lspv | grep 0005e8e07d6b9991
hdisk1 0005e8e07d6b9991 None
labserver201:/usr/es/sbin/cluster > lspv
hdisk0 0005e8d2c756518f rootvg active
hdisk1 0005e8e07d6b9991 None
labserver201:/usr/es/sbin/cluster > lspv | grep 0005e8e07d6b9991
hdisk1 0005e8e07d6b9991 None

labserver101:/usr/es/sbin/cluster > lsvg
rootvg
testvg
labserver101:/usr/es/sbin/cluster > lsvg -o
rootvg
labserver101:/usr/es/sbin/cluster > ls -l /dev/hdisk1
brw------- 1 root system 17, 1 Jul 01 18:55 /dev/hdisk1
labserver201:/dev > ls -l /dev/hdisk1
brw------- 1 root system 17, 1 Jun 29 14:45 /dev/hdisk1
Create the file system /oracle in testvg
lv01 -> /oracle
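A command-line sketch of how testvg and /oracle could be created on labserver101 (the C-SPOC path described earlier, smitty cl_convg and smitty cl_conlv, is the cluster-aware route; the major number 100 and PP count are assumptions):

mkvg -n -C -y testvg -V 100 hdisk1      # -C = enhanced concurrent capable, -n = no auto varyon
varyonvg testvg
mklv -y lv01 -t jfs2 testvg 16          # 16 PPs; size as required
crfs -v jfs2 -d lv01 -m /oracle -A no   # do not auto-mount; HACMP mounts it with the resource group

On labserver201 the VG would then be imported with importvg -V 100 -y testvg hdisk1 and varied off before synchronization.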

labserver101:/usr/es/sbin/cluster/utilities > ./cltopinfo


Cluster Name: hpcluster
Cluster Connection Authentication Mode: Standard
Cluster Message Authentication Mode: None
Cluster Message Encryption: None
Use Persistent Labels for Communication: No
There are 2 node(s) and 1 network(s) defined
NODE labserver101:
Network net_ether_01
labserver101_svc 16.138.184.171
labserver101_boot 10.1.1.1
labserver101_stdby 11.1.1.1
NODE labserver201:
Network net_ether_01
labserver101_svc 16.138.184.171
labserver201_boot 10.1.1.2

labserver201_stdby 11.1.1.2
Resource Group saprg
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Fallback To Higher Priority Node In The List
Participating Nodes labserver101 labserver201
Service IP Label labserver101_svc

labserver101:/usr/es/sbin/cluster > lssrc -ls clstrmgrES


Current state: ST_STABLE
sccsid = "@(#)36 1.135.1.82 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe,
52haes_r541, 0747B_hacmp541 11/13/07 10:40:55"
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 9
local node vrmf is 5410
cluster fix level is "0"
The following timer(s) are currently active:
Current DNP values
DNP Values for NodeId - 0 NodeName - labserver101
PgSpFree = 0 PvPctBusy = 0 PctTotalTimeIdle = 0.000000

DNP Values for NodeId - 0 NodeName - labserver201


PgSpFree = 0 PvPctBusy = 0 PctTotalTimeIdle = 0.000000

labserver201:/usr/es/sbin/cluster > lssrc -ls clstrmgrES


Current state: ST_INIT
sccsid = "@(#)36 1.135.1.82 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe,
52haes_r541, 0747B_hacmp541 11/13/07 10:40:55"
labserver201:/usr/es/sbin/cluster >

AIX Troubleshooting
1. How to force a failover of an EtherChannel ?
# /usr/lib/methods/ethchan_config -f Etherchannel_Device
2. How to add a backup adapter to an existing etherchannel device ?
# /usr/lib/methods/ethchan_config -a -b Etherchannel_Device Ethernet_Adapter
3. How to change the address to ping attribute of an EtherChannel ?
# /usr/lib/methods/ethchan_config -c Etherchannel_Device netaddr New_Ping_IP_Addr
4. How to list the available major numbers in a system ?
# lvlstmajor
5. How to list the major number of a volume group ?
# lvgenmajor rootvg
6. Consider a situation where you have a VG in a PV. But you have not imported that.
Now you need to find a list of attributes of that volume group before importing/varyon it.
Answer the below questions :
a. How to list the maximum number of logical volumes allowed in the VG ?
# lqueryvg -p PVname -N
b. How to show the PP size ?
# lqueryvg -p PVname -s
c. How to show the number of free PPs in the VG ?
# lqueryvg -p PVname -F
d. How to show the current number of LVs in the VG ?
# lqueryvg -p PVname -n
e. How to list the current number of PVs in the VG ?
# lqueryvg -p PVname -c
f. How to list the total number of VGDAs for the VG ?
# lqueryvg -p PVname -D
g. How to list each LVID, LV name, state for each logical volume ?
# lqueryvg -p PVname -l
h. How to list each PVID, number of VGDAs and state for each PV in the VG ?
# lqueryvg -p PVname -P
i. How to list all the attributes with tags for the vG ?
# lqueryvg -p PVname -At

j. How to list the VGID from that physical volume ?


# lqueryvg -p PVname -v
7. How do you move a physical partition (actually it just moves the data between PPs) ?
# lmigratepp -g VGID -p old_PVID -n old_PPNum -P new_PVID -N new_PPNum
8. How to retrieve the VG name for a particular LV from ODM ?
# getlvodm -b LVID
9. How to retrieve all configured PVs from ODM ?
# getlvodm -C
10. How to retrieve the major number for a VGID from ODM ?
# getlvodm -d VGID
11. How to retrieve the logical volume allocation characteristics for a LVID from ODM ?
# getlvodm -c LVID
12. How to retrieve the free configured PVs from ODM ?
# getlvodm -F
13. How to retrieve the strip size for a LVID from ODM ?
# getlvodm -F LVID
14. How to retrieve the PV name for a PVID from ODM ?
# getlvodm -g PVID
15. How to retrieve all VG names from the ODM ?
# getlvodm -h
16. How to retrieve the VGID for a PVID from ODM ?
# getlvodm -j PVID
17. How to retrieve the LVs and LVIDs for a VG name or VGID from ODM ?
# getlvodm -L VGDescriptor
18. How to retrieve the LVID/LV Name for a LV Name or LVID from ODM ?
# getlvodm -l LVDescriptor
19. How to retrieve the mount point for a LVID from ODM ?
# getlvodm -m LVID
20. How to retrieve the stripe width for a LVID from ODM ?
# getlvodm -N LVID

21. How to retrieve the PVID/PV name for a PV name or PVID from ODM ?
# getlvodm -p PVDescriptor
22. How to retrieve the PV names, PVIDs and VGs of all configured PVs from ODM ?
# getlvodm -P
23. How to retrieve the relocatable flag for a LVID from ODM ?
# getlvodm -r LVID
24. How to retrieve the VG state for a VG from ODM ?
# getlvodm -s VGDescriptor
25. How to retrieve the timestamp for a VG from ODM ?
# getlvodm -T VGDescriptor
26. How to retrieve the VG name for a VGID from ODM ?
# getlvodm -t VGID
27. How to retrieve the auto-on value for a VG name or VGID from ODM ?
# getlvodm -v VGDescriptor
28. How to retrieve the VGID for a VG name ?
# getlvodm -v VGDescriptor
29. How to retrieve the PV names and PVIDs for a VG from ODM ?
# getlvodm -w VGDescriptor
30. How to retrieve the LV type for a LVID from ODM ?
# getlvodm -y LVID
31. How to retrieve the concurrent capable flag for a VG from ODM ?
# getlvodm -X VGDescriptor
32. How to retrieve the auto-on concurrent flag for a VG from ODM ?
# getlvodm -x VGDescriptor
33. How to display the contents of LVCB ?
# getlvcb -A LVName
34. How to list the number of copies of a LV from LVCB ?
# getlvcb -c LVName
35. How to list the file system name of a LV from LVCB ?
# getlvcb -f LVName
36. How to list the label of a LV from LVCB ?

# getlvcb -L LVName
37. How to display the type of the file system from LVCB ?
# getlvcb -t LVName
38. How to display the upper limit from LVCB ?
# getlvcb -u LVName
39. How to list the current defrag state of a file system ?
# defrag -q Filesystem
40. How to list the current and future (if defragmented) state of a file system ?
# defrag -r Filesystem
41. How to defragment a file system ?
# defrag Filesystem
42. How to run fsck on 2 filesystems simultaneously on different drives ?
# dfsck FileSystem1 FileSystem2
43. How to list the superblock, i-node map and disk map information for a file system ?
# dumpfs Filesystem
44. Where is the magic file located ?
/etc/magic
45. How do you remove a file system's entry from /etc/filesystems ?
# imfs -x -l LVName
46. How do you list inode, last update/modify/access timestamp of a file ?
# istat FileName
47. How do you update the i-node table and write buffered files to the hard disk ?
# sync
48. How do you list the filesystems in a volume group ?
# lsvgfs VGName
49. How do you redefine the set of PVs of a VG in the ODM ?
# redefinevg -d PVName VGName
50. How do you replace a PV in a VG ?
# replacepv SourcePV DestinationPV

Shared Ethernet Adapter Redundancy

Shared Ethernet adapter: It can be used to connect a physical network to a virtual Ethernet
network. It allows several client partitions to share one physical adapter.
Shared Ethernet redundancy: This protects against temporary failure of communication with
external networks. Approaches to achieve continuous availability:
Shared Ethernet adapter failover
Network interface backup
Shared Ethernet adapter failover: It offers Ethernet redundancy. In a SEA failover configuration
2 VIO servers have the bridging functionality of the SEA. They use a control channel to
determine which of them is supplying the Ethernet service to the client. The client partition gets
one virtual Ethernet adapter bridged by the 2 VIO servers.
Requirements for configuring SEA failover:
One SEA on one VIOS acts as the primary adapter and the second SEA on the second
VIOS acts as the backup adapter.
Each SEA must have at least one virtual Ethernet adapter with the "access external
network" flag (trunk flag) checked. This enables the SEA to provide bridging functionality
between the 2 VIO servers.
This adapter on both SEAs has the same PVID.
The priority value defines which of the 2 SEAs will be the primary and which the
secondary. An adapter with priority 1 has the highest priority.

Procedure for configuring SEA failover:
Configure a virtual Ethernet adapter via DLPAR (ent2):
o Select the VIOS -> click the task button -> choose DLPAR -> Virtual Adapters
o Click Actions -> Create -> Ethernet Adapter
o Enter the slot number for the virtual Ethernet adapter into Adapter ID
o Enter the Port Virtual LAN ID (PVID). The PVID allows the virtual Ethernet
adapter to communicate with other virtual Ethernet adapters that have the same
PVID.
o Select IEEE 802.1Q compatible if VLAN tagging is required
o Check the box "access external network"
o Give the virtual adapter a low trunk priority (1)
o Click OK.
Create another virtual adapter to be used as a control channel on VIOS1 (give it another
VLAN ID, and do not check the box "access external network") (ent3).
Create the SEA on VIOS1 with the failover attribute: mkvdev -sea ent0 -vadapter ent2 -default
ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3. Ex: ent4
Create a VLAN Ethernet adapter on the SEA to communicate with the external VLAN-tagged
network: mkvdev -vlan ent4 -tagid 222. Ex: ent5
Assign an IP address to the SEA VLAN adapter on VIOS1 using mktcpip.
Repeat the same steps on VIOS2 (give the higher trunk priority: 2).
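To check which side is currently serving traffic once both SEAs are up, the following can be run from the padmin shell on each VIOS (ent4 is the SEA created above; the exact statistics field names may vary slightly by VIOS level):

lsdev -dev ent4 -attr                    # confirm ha_mode=auto and ctl_chan=ent3
entstat -all ent4 | grep -i priority     # trunk priority of this SEA
entstat -all ent4 | grep -i state        # PRIMARY on the active side, BACKUP on the other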

NIM installation procedure

TABLE OF CONTENTS
1  Introduction
2  Contacts
3  High Level Overview
4  Helpful NIM Commands
5  NIM Install and Configuration
   5.1  Introduction to Network Installation Management
   5.2  Create the NIM Master
        5.2.1  Build NIM Filesystem
        5.2.2  Install NIM Master
        5.2.3  Build the LPP_SOURCE
        5.2.4  Update the SPOT
   5.3  Configure NIM Master
        5.3.1  Configure Master Network Resource
        5.3.2  Configure LPP Resources
        5.3.3  Configure SPOT Resources
        5.3.4  Create mksysb NIM Resources
   5.4  Install NIM Clients
        5.4.1  Adding the Client to the NIM Master
               5.4.1.1  Add the Client Server to the NIM Master
               5.4.1.2  Preparing the NIM Client for a bos_install
        5.4.2  Configure the Client Servers
               5.4.2.1  Booting into SMS Mode
               5.4.2.2  Configuring Remote IPL
               5.4.2.3  Adapter Configuration and Test
               5.4.2.4  Booting the Server from the NIM Master
6  Server Backups and Restores
   6.1  Structure of the NIM Master
   6.2  Preparation for NIM Backups and Restores
   6.3  Backup a NIM Client
   6.4  Restore a NIM Client
   6.5  Reconfiguration on the client after mksysb backup is installed

1 Introduction
The intent of this document is to provide detailed steps for AIX install, backup, restore and migration
using the Network Installation Management tool (NIM) for LPARs in the NHSS environment. It includes
guidance in setting up a NIM environment.
Assumptions used in this document:
There is already a backup strategy in place and mksysbs are being saved on the NIM
Master in the /export/nim/mksysb NFS directory.
The P51A server has been designated as the NIM Master.

3. High Level overview:
The following is a fundamental list of activities to be performed in order to create the NIM
environment and perform a NIM install on the client partition.
Planning the NIM configuration
o Plan the NIM Master network config (what network is being used)
o Plan the NIM Master and Client NIM names
o Plan the NIM Master directory structure
o Plan the NIM Client network config
o Plan the NIM Master resource names (lppsource, spot)
o Plan the NIM Master mksysb resource
Implement the NIM Master
o Install the bos.sysmgt filesets
o Create the required filesystems
o Use smitty nim to configure the NIM environment
o Use smitty nim_mkres to build the lpp_source, spot, bosinst and mksysb resources
Create the NIM clients
o smitty nim_mkmac
Allocate the NIM resources to the client in preparation for the mksysb restore
o smitty nim_bosinst
Boot the client into SMS and select boot from Ethernet (add the IPs as required).
The client boots and performs the mksysb restore.
o Once finished, the NIM resources are deallocated.
o Additional AIX configuration such as etherchannel may be required.

4 Helpful NIM commands:
smitty nim
smitty nim_mkmac
smitty nim_mknet
smitty nim_bosinst
lsnim
lsnim -l
lsnim -a spot (who has the spot allocated)
nim -Fo reset
nim -o deallocate -a subclass=all
nim -o check
nim -o check spot52
nim -o lslpp
nim -o showres
nim -o fix_query
/usr/lib/instl/lppmgr -d <lppsource directory> -u -b -r
-d = lppsource directory
-u = remove duplicate updates
-b = remove duplicate base levels
-k = remove languages
-x = remove superseded filesets
-r = remove files
-m = move files
-l = list files
-V = verbose
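For example, to clean duplicate and superseded filesets out of the AIX 5.2 lpp_source used later in this document (directory path from section 5.2; running with -l first only lists what would be removed):

/usr/lib/instl/lppmgr -d /export/nim/lpp_source/lppsource_52 -u -b -x -l   # list candidates only
/usr/lib/instl/lppmgr -d /export/nim/lpp_source/lppsource_52 -u -b -x -r   # actually remove them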

5. NIM Install and Configuration


There are several ways to install AIX. Due to the physical
configuration of a managed system, use of the Network Installation Management (NIM)
environment to install AIX is recommended.

5.1. Introduction to Network Installation Management


This section provides an introduction to the NIM environment and the operations you can
perform to manage the installation of the AIX Base Operating System (BOS) and optional
software on one or more machines. NIM gives you the ability to install and maintain not only the
AIX operating system, but also any additional software and fixes that may be applied over time.
NIM also allows you to customize the configuration of machines both during and after
installation. NIM eliminates the need for access to physical media, such as tapes and CDROMs, because the media is a NIM resource on the NIM master server. System backups can
be created with NIM, and stored on any server in the NIM environment, including the NIM
master. Use NIM to restore a system backup to the same server or to another server. Before
you begin configuring the NIM environment, you should already have the following:
NFS and TCP/IP installed

TCP/IP configured correctly

Name resolution configured

For any installation procedure, you need a software source to install from, such as the AIX 5.2
product CDs (in the NHSS environment). The AIX 5.2 product CDs contain the boot images used to
boot the system from the CD-ROM, the installation images, and the installation commands used to
install the installation images.
In the NIM environment, the software source is separated into two NIM resources, the
LPP_Source and the SPOT. The LPP_Source is a directory on your NIM server. When the
LPP_Source is created, installation images are copied from the product CDs to the LPP_Source
directory. The product CDs also contain boot images that enable the system to boot from the
CD-ROM and installation commands that are used to install the installation images. The
equivalent NIM resource is called the SPOT (Shared Product Object Tree). The SPOT is a
directory that contains the installation commands that are used to install the installation images
from the LPP_Source onto a system. The SPOT is also used to build the necessary boot
images to boot a client system. Separate boot images exist for each type of adapter (Ethernet,
token ring, and so on). When an LPP_Source resource is created, installation images are copied
from the product CDs to the LPP_Source directory, and the SPOT resource contains the
installation commands used to install those images from the LPP_Source onto a system.
When using the nim_master_setup script to install a NIM master on a system, it creates an
LPP_Source and SPOT resource for you and defines the resources in the NIM environment.
The nim_master_setup script will also copy the AIX update images from your update CD to the
LPP_Source and then install the update images into the SPOT resource. In addition to the
LPP_Source and SPOT resources, several NIM resources can help customize the BOS
installation process and basic network configuration. The following table shows all the NIM
resources that are created with the nim_master_setup script:
Table 5. NIM resources created by the nim_master_setup script
NIM Resource   Name Given        Description
spot*          spot_52           Commands used during installation. The network boot images are built from the SPOT.
lpp_source     lppsource_52      Directory containing installation images.
mksysb         hostname.mksysb   System backup image.
bosinst_data   bosinst_ow        Answers questions asked during the BOS installation, which allows for a non-prompted new and complete overwrite installation.
resolv_conf    resolv_res        Provides the domain name and name server information.
res_group      basic_res_grp     Used by the nim_clients_setup script to allocate the bosinst_data, mksysb, lpp_source, spot, and resolv_conf to install the client partitions.
* Required resource for installation


Besides the lpp_source and spot resources which represent the BOS, a system backup image called
a mksysb is usually created. A mksysb is a generic system backup which includes the BOS plus customer
software. As a standard AIX BOS image, a mksysb can be used to clone a customized AIX to servers
that have no AIX installed. Also, as a dedicated server backup on the NIM master, in the event a client
server has to be recovered, such as after a catastrophic hardware or software problem, the mksysb
can be pushed down to the server to restore its operation. Scripts are available that create the
resources necessary to perform a mksysb installation.
Each LPAR on the p5 server will then be defined in the NIM environment as a standalone machine,
also referred to as a NIM client. Use smitty nim_mkmac to add the servers as NIM clients. A
nim_clients_setup script is also available to define NIM clients and initiate an AIX installation on
them.

5.2. Create the NIM master


This section describes how to install the NIM master from scratch. It starts by building a separate
volume group called nimvg if an additional hard disk is available. Then it runs the nim_master_setup
script to install NIM from the CDs. In case NIM has been installed through smitty, the NIM master can
also be configured through smitty nim, which is described in the next section.

5.2.1 Build NIM Filesystem

It is optional but recommended to build the NIM file systems in a separate nimvg volume group if an
additional hard disk is available. If hard disks are limited, the NIM file systems can be established on
rootvg.

5.2.2 Install NIM Master


To install the NIM Master, do the following:
o Ensure that the AIX 5.3 CD 1 of 8 is available to install AIX.
o The NIM server is on AIX 5.3 ML05 and all LPARs are on AIX 5.2 ML09.
o Issue the following command to configure the NIM environment:
smitty nim
Use the "nim easy create" option:
select the network name, lppsource name and spot name;
select the filesystem size and the volume group you want to create them in (in our case
nimvg).
Note: The estimated run time is 40 minutes.

After the above command has completed, verify that the following directories are
created:
/export/nim
/export/nim/spot/spot_52
/export/nim/lpp_source/lppsource_52
/tftpboot
o Manually create the mksysb and backup directory using the following command:
mkdir /export/nim/mksysb
(or create a dedicated logical volume and filesystem with mklv and crfs, then mount it)
5.2.3 Build the LPP_SOURCE
Note: the lppsource and SPOT were created during the NIM easy install
process. However, we can manually create the lppsource as below.
Copy the filesets from the AIX 5.2 CDs to /media/AIX; this is used as a media server.
o Insert the AIX 5.2 CD 1 of 8 into the CD-ROM drive.
o Enter the following command at the command line:
smitty bffcreate
o The Copy Software to Hard Disk for Future Installation screen appears. Do the following:
o Enter /dev/cd0 in the INPUT device / directory for software field, then press the Enter
key.
o In the DIRECTORY for storing software package field, enter /media/AIX/AIX_52.
o Update the TOC:
inutoc (run against the /media/AIX/AIX_52 directory)
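As an alternative to the smitty panels, the nim_master_setup script described in section 5.1 can define the master and build the lpp_source and SPOT in one pass. A sketch, assuming the first product CD is in cd0 and that the nimvg volume group and /export/nim file system from section 5.2.1 are used (verify the attribute names against your AIX level):

nim_master_setup -a device=/dev/cd0 \
                 -a volume_group=nimvg \
                 -a file_system=/export/nim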

5.3 Configure NIM master


This section describes how to configure the NIM master when the NIM fileset (and lpp source)
has been installed onto the server. There are two lpp sources AIX 5.3 and AIX 5.2 installed
on the NIM master.

5.3.1 Configure Master Network Resource


To define the master network resource, do the following:
o smitty nim_mknet
o The Network Type screen appears. Select ent, then press the Enter key.

The Define a Network screen appears. Enter the appropriate information as
follows.

Define a Network
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Network Name [master]

Network Type ent


Ethernet Type Standard
Network IP Address []
Subnetmask []
Other Network Type
Comments [master network]
F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do

5.3.2 Configure LPP Resources


To create a lpp source, do the following:
o smitty nim_mkres
o The Resource Type screen appears. Select the lpp_source type, and then press
the Enter key.
o The Define a Resource screen appears. Do the following:
o In the Resource Name field, enter lppsource_52.
o In the Server of Resource field, press the F4 key, then select master from the
list.
o In the Location of Resource field, enter /media/AIX/AIX52.
o In the Comment field, enter your comment such as This is the latest AIX 5.2 OS
filesets
o Press the Enter key to add the resource.

Define a Resource
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Resource Name [lppsource_52]
* Resource Type lpp_source
* Server of Resource [master]
* Location of Resource [/export/software+]
Source of Install Images []
Names of Option Packages []
Comments [This is the lates+]

F1=Help F2=Refresh F3=Cancel F4=List


Esc+5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
o Once the command has completed, press the F10 key to exit to the command prompt.
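The same lpp_source can also be defined from the command line instead of smitty; a sketch, assuming the installation images were copied to /media/AIX/AIX_52 as in section 5.2.3 (the nim define syntax also appears in the NIM command list later in this document):

nim -o define -t lpp_source -a server=master \
    -a location=/media/AIX/AIX_52 lppsource_52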

5.3.3 Configure SPOT Resources


To create a spot resource, do the following:
o smitty nim_mkres
o The Resource Type screen appears. Select the spot type, then press the Enter key.
o The Define a Resource screen appears. Do the following:
o In the Resource Name field, enter spot52.
o In the Server of Resource field, press the F4 key, then select master from the list.
o In the Source of Install Images, press the F4 key, then select lppsource_52 from the list.
o In the Location of Resource field, enter /export/nim/spot.
o In the Comment field, enter your comment such as This is the spot for AIX 5.2 ML7
o Press the Enter key to add the resource.

Define a Resource
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Resource Name [spot52]
* Resource Type spot
* Server of Resource [master]
* Source of Install Images []
* Location of Resource [/export/NIM/spot]
Expand file systems if space needed? yes
Comments [This is the spot for+]
installp Flags
COMMIT software updates? no
SAVE replaced files? yes
AUTOMATICALLY install requisite software? yes
OVERWRITE same or newer versions? no

VERIFY install and check file sizes? no


F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
o It may take 10 -15 minutes to create the spot resource from the lpp_source.
Once the command has completed without error, press the F10 key to exit to
the command prompt.
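A command-line equivalent for the SPOT definition (a sketch using the resource names from the panel above; the SPOT is built from the lpp_source, which can take the same 10-15 minutes):

nim -o define -t spot -a server=master -a source=lppsource_52 \
    -a location=/export/nim/spot spot52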

5.3.4 Create mksysb NIM Resources


You must create at least one mksysb resource as your system backup image. The actual
mksysb image can be created from each LPAR weekly as a backup AIX 5.x image for each
LPAR, saved in the /export/nim/mksysb/hostname directory. It can also be copied to a CD as a
separate backup, especially for the NIM master. To create a mksysb resource, do the following:
o Assume the mksysb is a file in /export/nim/mksysb/hostname/hostname.mksysb
o Define the new mksysb resource in NIM by entering the following command from the
command line:
o smitty nim_mkres
o The Resource Type screen appears. Select the mksysb resource type, then press the
Enter key.
o The Define a Resource screen appears. Do the following:
o In the Resource Name field, enter hostname.mksysb.
o In the Server of Resource field, press the F4 key, then select master from the list.
o In the Location of Resource field, enter /export/nim/mksysb/hostname.
o In the Comment field, enter your comment as required
o Press the Enter key to add the resource.

Define a Resource
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
* Resource Name [52mksysb]
* Resource Type mksysb
* Server of Resource [master]

* Location of Resource [/export/nim/mksysb+]


Comments []
Source for Replication []
-OR-
System Backup Image Creation Options:
CREATE system backup image? no
NIM CLIENT to backup []
PREVIEW only? no
IGNORE space requirements? no
[MORE...9]
F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
o Once the command has completed, press the F10 key to exit to the command prompt.
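A command-line equivalent for the mksysb resource (a sketch; note that the location must point at the mksysb file itself, e.g. the /export/nim/mksysb/hostname/hostname.mksysb path assumed at the start of this section):

nim -o define -t mksysb -a server=master \
    -a location=/export/nim/mksysb/hostname/hostname.mksysb 52mksysb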

5.4 Install NIM Clients


Now you can start to configure and install AIX to each LPAR (NIM Client). The following
sections describe the procedure of installation on a NIM client and you can repeat it until all
servers are installed. There are two stages to configuring and installing a new client server from
the NIM master. The first step involves adding the client on the NIM master. The second step
involves configuring the client server for a NIM action, and then initiating a NIM boot over the
network. This will automatically install the new server using the client template definition in the
NIM master.

5.4.1 Adding the Client to the NIM Master


5.4.1.1 Add the Client Server to NIM master
The next step adds the client server to the NIM server. Run the following command:
smitty nim_mkmac
Enter the host name of the NIM client and press Enter, then select ent as the attached network. Fill in
the fields of the next screen using the following values and press Enter when complete:

Define a Machine
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* NIM Machine Name [catom-mddbpca01]
* Machine Type [standalone]
* Hardware Platform Type [chrp]

Kernel to use for Network Boot [mp]


Primary Network Install Interface
* Cable Type tp
* Network Speed Setting [auto]
* Network Duplex Setting [auto]
* NIM Network master_net
* Host Name catos-nimpa00
Network Adapter Hardware Address [0]
Network Adapter Logical Device Name []
IPL ROM Emulation Device []
CPU Id []
Machine Group []
Comments [] F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
To verify the new client was added correctly, run the following command:
lsnim -l catom-mddbpca01
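The client can also be defined without smitty; a sketch using the values shown in the panel above (the if1 attribute takes the NIM network name, the client's host name as resolvable in /etc/hosts, and the MAC address, where 0 lets NIM discover it; the host name used here is an assumption for this environment):

nim -o define -t standalone -a platform=chrp -a netboot_kernel=mp \
    -a if1="master_net catom-mddbpca01 0" -a cable_type1=tp catom-mddbpca01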
5.4.1.2 Preparing the NIM Client for a bos_install
At this point, the newly created NIM client must have a bosinst_data resource allocated to it in
order to make it serviceable.
To allocate the bosinst.data, run the following command:
smitty nim_bosinst
Select the previously defined client name, and press Enter.
Choose mksysb for the installation type, and press Enter.
Select the desired mksysb resource (52mksysb for instance), and press Enter.

Select the MKSYSB to use for the installation


Move cursor to desired item and press Enter.
52mksysb resources mksysb
F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset F6=Command F7=Edit F8=Image F9=Shell
Select the previously defined SPOT resource, and press Enter.

Select the SPOT to use for the installation


Move cursor to desired item and press Enter.
spot52 resources spot
F1=Help F2=Refresh F3=Cancel F4=List
Esc+5=Reset F6=Command F7=Edit F8=Image F9=Shell

Complete the next screen by filling in the following fields, and pressing Enter
twice to confirm when complete.

Install the Base Operating System on Standalone Clients


[TOP] [Entry Fields]
* Installation Target catos-mddbbca02
* Installation TYPE mksysb
* SPOT spot52
* LPP_SOURCE 52_lppres
MKSYSB 52mksysb
BOSINST_DATA to use during installation [52bid_64bit_2dr]
IMAGE_DATA to use during installation []
RESOLV_CONF to use for network configuration [resolv_conf]
Customization SCRIPT to run after installation []
Remain NIM client after install? [yes]
PRESERVE NIM definitions for resources on target [yes]
FORCE PUSH the installation [no]
Initiate reboot and reboot now? [no]
-OR-
Set bootlist for installation at the [no]
next reboot?
Additional BUNDLES to install []
-OR-
Additional FILESETS to install []
(bundles will be ignored)
installp Flags
COMMIT software updates? [yes]
SAVE replaced files? [no]
AUTOMATICALLY install requisite software? [yes]
EXTEND filesystems if space needed? [yes]
OVERWRITE same or newer versions? [no]
VERIFY install and check file sizes? [no]
Group controls (only valid for group targets):
Number of concurrent operations []
Time limit (hours) []
Schedule a Job [no]
YEAR []
MONTH []
DAY (1-31) []
HOUR (0-23) []
MINUTES (0-59) []
[BOTTOM]

F1=Help F2=Refresh F3=Cancel F4=List


Press the F10 key to exit this screen and return to the command line.
To verify the completed NIM client profile, run the following command:
lsnim -l catos-mddbbca02
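The same allocation can be performed from the command line; a sketch using the resource names from the screen above (no_client_boot=yes matches "Initiate reboot and reboot now? [no]", so the client is booted manually from SMS as described in section 5.4.2; verify attribute names against your NIM level):

nim -o bos_inst -a source=mksysb -a mksysb=52mksysb -a spot=spot52 \
    -a lpp_source=52_lppres -a bosinst_data=52bid_64bit_2dr \
    -a resolv_conf=resolv_conf -a no_client_boot=yes catos-mddbbca02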

5.4.2 Configure the Client Servers


Now that the NIM master has been updated with the client information, the client server needs
to be configured for network boot through the HMC console.
5.4.2.1 Booting into SMS Mode.
This procedure assumes that the target server is currently down

o Open the HMC console (use WebSM).
o In the Navigation Area (left pane), select the partition and right-click it.
1. Click Activate. The Activate Partition menu opens with a selection of partition profiles.
2. Select Open a terminal window to open a virtual terminal (vterm) window.
3. Click (Advanced...) to open the advanced options menu.
4. For the boot mode, select SMS.
5. Click OK to close the advanced options menu.
6. Click OK.
After a few moments, the terminal window will open, and the system will boot to the SMS
prompt.

5.4.2.2 Configuring Remote IPL


In the SMS menu, choose Setup Remote IPL (Initial Program Load) from the main menu.
o Choose Port 1 and press Enter, then select 1. IP Parameters and press Enter.
o Fill in each field using the IP addresses of both the NIM server and the NIM client from the Server
Install Worksheets.
Note: the Gateway IP Address field is always set to the IP address of the NIM Master.
5.4.2.3 Adapter Configuration and Test
o Back to the previous screen and choose option 2. Adapter Configuration.

o Select network speed auto and Duplex auto


o Press ESC twice and select option 3. Ping Test.
o Select 1. Execute Ping Test on next screen to attempt the ping operation. The NIM client
will now attempt to ping the NIM server.

After a 60 second wait, a success message should appear. If not, review and change the
network adapter configuration until the ping is successful.

5.4.2.4 Booting the Server from the NIM Master


After the successful ping, press any key and then select M to return to the SMS main menu
screen. From this screen, select 1. Select Install/Boot Devices and press Enter

From next screen, choose 7. List all Devices, and press Enter.
o After the buses are scanned, a list of boot devices will be presented. Select 1. Ethernet
and press Enter.
o Choose 2. Normal Mode Boot. and press Enter
o Finally, choose 1. Yes to exit the SMS menu and initiate the boot process.
After the STARTING SOFTWARE PLEASE WAIT message, the new client will start booting
and install the new image automatically. Once the Welcome to AIX message appears, the
client has successfully started the boot process. Approximately 30 minutes later, the new client
will be installed and ready for login.
Now you can repeat this procedure to install each remaining NIM client.

6 Server Backups and Restores


It is highly recommended to make complete system backups after implementing AIX on all
p570 LPARs. This section is intended to highlight a few key points about backups and restores
on the AIX platform.

6.1 Structure of the NIM Master


Usually OS Backups (mksysb) will occur through regularly scheduled jobs on each of the
servers. These servers will nfs mount the /export/nim/mksysb directory from the NIM Master,
over the network, and perform a mksysb to a file in the NFS directory. Once the mksysb

completes, the nfs mount will be unmounted, leaving the completed mksysb file on the nim
master.
In the event a server has to be recovered, such as a catastrophic hardware or software
problem, this mksysb can be pushed down to the server to restore the server operation.
The /export/NIM directory holds the required NIM master resources such as the mksysb and
SPOT and LPPSOURCE.

6.2 Preparation for NIM Backups and Restores


Some key components required for Backups and Restores are listed below.

The /etc/hosts file on all servers must contain all server hostnames and IP Addresses. It
must be accurate.
The network must be fully functional.
The /export/nim/mksysb directory is nfs exported as Read/Write from the NIM master to
all Clients. As part of this, the /etc/hosts file must be accurate.
The client must have an mksysb mount point directory created.
You will need to create scripts for this; a sketch is shown at the end of section 6.3 below.

6.3 Backup a NIM Client


The backup process begins on the NIM Client (for instance catos-mddbbca01).
o Step 1 the client nfs mounts the /export/nim/mksysb directory from the NIM master.
o Step 2 the client issues the mksysb command and places the resulting file in the nfs
mounted mksysb directory. This mksysb file contains all info in regards to the Operating
System as well as customer data.
o Step 3 the client unmounts the nfs directory. The mksysb backup now resides on the
NIM master.
o Step 4 Backup software such as TSM can then be used to manage the backup of the
mksysbs now residing on the NIM Master. TSM is also used to backup all application
data. This is beyond the scope of this document.
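A minimal ksh sketch of such a client-side backup job (the NIM master name, mount point and per-host directory are assumptions for this environment; the cron or TSM scheduling wraps around it):

#!/bin/ksh
# Back up rootvg of this client to the NIM master over NFS
NIMMASTER=catos-nimpa00                      # assumed NIM master hostname
MNT=/mksysb_mnt                              # local mount point (see section 6.2)
HOST=$(hostname)

mkdir -p $MNT
mount $NIMMASTER:/export/nim/mksysb $MNT     # step 1: NFS mount from the master
mksysb -i -e $MNT/$HOST/$HOST.mksysb         # step 2: -i rebuilds image.data, -e honours /etc/exclude.rootvg
umount $MNT                                  # step 3: leave the image on the master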

6.4 Restore a NIM Client


The restore process involves the use of NIM to restore a previously saved mksysb image of a
server in the event of a catastrophic failure.
The restore process begins on the NIM master but then shifts to the NIM Client Server at step 3
(perform a netboot).

o Step 1 The desired mksysb image mksysb.hostname to restore is selected in the


/export/nim/mksysb/hostname directory.
o Step 2 Remove and recreate the mksysb resource on the NIM master. Use smitty
nim_mkres
o Step 3 the NIM master is configured to allocate the desired mksysb image to the desired
client and set the client to netboot. Use smitty nim_bosinst.
o Step 4 the client is netbooted. During this process, it follows the instructions setup on the
NIM master and performs a non-prompted OS install.
o Step 5 at the end of the netboot, the NIM resources are automatically deallocated on the
NIM Master and the server is rebooted to the login screen.

6.5 Reconfiguration on the client after the mksysb backup is installed
To restore the original environment in which the mksysb was built, do the following steps:
o Step 1 - Check the hostname.
o Step 2 - Check all Ethernet adapters and labels.
o Step 3 - Check all etherchannels.
o Step 4 - Check the HACMP filesets and start HACMP.
o Step 5 - Check that all SAN disks are attached and configured.
o Step 6 - Start GPFS.
o Step 7 - Mount filesystems.
o Step 8 - Run smitty aio and select change/show Asynchronous I/O;
set "STATE to be configured at system restart" to available. This is necessary
for the Oracle application.
o Step 9 - Application people to check Oracle RAC and that it is working.
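A command-line equivalent of the AIO change in step 8 (applies to the AIX 5.2/5.3 clients described here; on AIX 6.1 and later AIO is configured automatically):

chdev -l aio0 -a autoconfig=available   # make aio0 available at every restart
mkdev -l aio0                           # bring it available now without rebooting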

How to restore non bootable mksysb image

Never use the restbyname command on the NIM server.
If you want to restore on the client, use the commands below:
tctl -f /dev/rmt0.1 rewind
tctl -f /dev/rmt0.1 fsf 3
restbyname -xqf /dev/rmt0.1

Removing duplicate default route


lsattr -El inet0 -> sees bad route
chdev -l inet0 -a delroute=
lsattr -El inet0 -> bad routes removed

AIX Patch Upgradation



Take TSM backup for all File systems on AIX servers

Take mksysb backups .

Take all Configuration backup .

Take snapshot of HACMP configuration: smitty cm_cfg_snap.

Upload all downloaded file sets from IBM web site .

Remove the second disk from the mirror: # unmirrorvg rootvg hdisk1

Check if there are any LVs left on the second disk: # lspv -l hdisk1

If exists migrate the remaining LVs to the first disk: # migratepv hdisk1 hdisk0

Remove the second disk from the root VG, to do this need to clear the boot record of the
second PV: # chpv -c hdisk1

Check if the bootlist is first set of the disk with the current rootvg, and on second place of
the disk with the alt_disk_install: # bootlist -m normal hdisk0 hdisk1

Remove the second PV from rootvg # reducevg rootvg hdisk1

Run alternate disk installation: # nohup alt_disk_install -B -C hdisk1 &

After successful completion of the above operation, test the alternate disk installation (using
the wakeup key): # alt_disk_install -W hdisk1
List the content of the PV: # lspv -l hdisk1
Once all is fine, put the alternate disk back to sleep: #
alt_disk_install -S

Run the preview using smitty update_all and check the required /usr file system space
and increase it.

Add new boot image to the first PV to have fresh boot record: # bosboot -ad
/dev/hdisk0

Stop the cluster services using smitty cl_stop with option both[ Note: check the cluster
log in /tmp/hacmp.out and the servers login with the boot IP.]

Before starting the installation, commit all the software installed on the server to make
roll-back easy in case of any problems during installation. # smit commit or # installp -c
all

Update all the software from TL and Service pack # smit update_all

Set APPLY only (do not commit) and keep a copy of the replaced files.

Reboot both the servers

Start the cluster using smitty cl_start. Check the logs in /tmp/hacmp.out
Roll Back Plan

If any problem occurs, change the boot disk back to the alternate disk: bootlist -m normal hdisk1 hdisk0 (verify with bootlist -m normal -o)

Reboot the server.

Start the cluster services

Migrating AIX version from 5.3 to 6.1


Migrating AIX version from 5.3 to 6.1:
Take an mksysb backup of rootvg on bootable media
Make a copy of: /etc/inetd.conf, /etc/inittab, /etc/motd, /usr/dt/config/Xservers
Check and remove restricted tunables in /etc/nextboot
Ensure the root user's primary authentication method is SYSTEM: lsuser -a auth1 root; chuser
auth1=SYSTEM root
Users who have logged in must log off
Check the error log: errpt
Verify the processor capacity (32/64-bit): prtconf -c
Insert the 6.1 DVD and mount it: mount -v cdrfs -o ro /dev/cd0 /mnt
Copy the file /mnt/usr/lpp/bos/pre_migration to /tmp and run /tmp/pre_migration. The output
will be stored in /home/pre_migration.yymmddhhmmss
shutdown -F
Manually turn on the system and boot from the DVD
Select the current console as the system console
Select the English language
Select change/show installation settings and make sure the installation type is migration
After the migration completes, run /usr/lpp/bos/post_migration

Tivoli Storage Manager Overview


TSM Version: 5.3.2
IBM Tivoli storage manager stores copies of data offline.
It protects hundreds of computers running a variety of OS
Components:
Administrative interface
TSM Server
Scheduler
Backup-Archive Client
TSM Database
TSM Recovery log
Storage Pools
Policy-Based Management
Tape Library
Administrative interface: TSM Administration center, which operates on the Integrated
Solutions Console (ISC), provides a task oriented GUI for storage administrators. Tasks such as
creating server maintenance scripts, scheduling, adding storage devices, setting policy domains,
user management, viewing the health monitor.

TSM Server: The role of the TSM server is to store the backup or archive data from the backup-archive
clients that it supports to storage media. It also has a database of information to keep
track of the data it manages, including policy management objects.
Scheduler: Administrator defined schedules allow for the automation of Tivoli storage manager
server and backup-archive client operations.
Backup-Archive Client: The TSM backup-archive client is a service which sends data to, and
retrieves data from TSM server. The TSM backup-archive client must be installed on every
machine that needs to transfer data to server managed storage called storage pools.
TSM Database: TSM Saves information in the TSM database about each file, raw LV, database
that it backs up, archives. This information includes the File name, size, management class. Data
is stored in a storage pool.
TSM Recovery Log: The recovery log keeps track of all changes made to the database, If a
system outage were to occur, a record of the changes would be available for recovery.
Storage Pools: Storage pools are collections of like media that provide storage for backed up,
archived and migrated files.
Policy-Based Management: Business policy is used to centrally manage backup-archive client
data. Policies are created by the administrator and stored in the database on the server.
Tape Library: TSM supports a variety of library types, including manual libraries, SCSI
libraries, 349X and 358X libraries.
Backup-Restore functionality:
TSM can perform backups of both files and raw lvs. When backing up files TSM server
database keeps a list of all files and their attributes (time, date, size, access control lists)
Backup: Creates a copy of file to protect against the operational loss or destruction of that file.
Customers control backups by defining the backup frequency and number of versions.
Restore: Places backup copies of files into a customer designated system. By default the most
recent version of each archive file requested is replaced.
4 levels of backups:
Byte level ( Small amounts of data)
Block Level (bigger amount of data)
File level ( normal files)
Image level ( includes file system and files)
TSM uses Progressive Backup Methodology also known as Incremental Backups.
Long term storage capabilities through Archive-Retrieve Function:
Archiving is useful when you want to store data that is infrequently accessed but must still be
kept available. TSM has the capability of archiving for 30 years.
Archive: Creates a copy of file or set of files. This feature enables customers to keep unlimited
archive copies of a file.
Retrieve: Allows users to copy an archive file from the storage pool to the work station.

Administration center on the integrated solutions console: GUI for managing IBM TSM
administrative function is called the Administration Center.
Automation Capabilities: It includes a central scheduling component that allows the automatic
processing of administrative commands and backup-archive client operations during a specific
time period when the schedule is activated.
Scheduling is split into 2 categories:
Administrative scheduling
Backup-archive client scheduling.
Data storage and Data Management:
Types of storage media on which TSM stores data: Storage media can be disks, optical and tape
assigned to a storage pool.
Storage pools contain backup files, archived files and space managed files. These storage pools
are chained in order to create a storage hierarchy. The disk pool is usually first in the chain and
followed by tape.
Policy Based Approach: Backup-archive client data managed by business policy. Policies
created by the administrator and stored in the database on the server.
Policy Domain: A group of nodes managed by the same set of policy constraints as defined by
the policy sets. A node may only to be defined to one policy domain per server.
Policy Set: A collection of management class definitions. A policy domain may contain number
of policy sets.
Management Class: A collection of management attributes called copy groups. 2 sets of MC
attributes: backup and archive
Copy group: Management attributes describing backup and archive characteristics. There is a
backup copy group and an archive copy group.
TSM Licensing:
3 License types: tsmbasic.lic, tsmee.lic and dataret.lic

Link Aggregation ( Network Bonding ) in AIX


Link aggregation means you can give one IP address to two network cards
and connect them to two different switches for redundancy. In this setup only one
network card is active at a time, and when it fails the other network
card becomes active and lets you continue working.
It is easiest to configure this through SMIT:
#smit

then goto
Devices > Communication > EtherChannel / IEEE 802.3ad Link Aggregation
> Add An EtherChannel / Link Aggregation
here select the network card that you want to use, ie active.
Eg: select ent0
IMP : then select Mode as 8023ad
then select backup adapter for redundancy.(press F4 to show N/W adapters.)
Eg: ent1
press enter.
now ent0 and ent1 got bonded.
then automatically a virtual adapter will be created named ent2.
then put IP address and all to this virtual adapter.
#smit
Communications Applications and Services > TCP/IP > Minimum
Configuration & Startup
here select ent2 ( new bonded virtual adapter )
put IP Address and all,
give start now option.
You have now completed the link aggregation. Check whether it works
by removing the cable from the second network card and checking ping, then reconnect it and
remove the first cable. In my experience 2-3 dropped pings are normal.
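The same EtherChannel/backup-adapter device can be created from the command line; a sketch of the equivalent mkdev call (flags for the ibm_ech pseudo device as I recall them; the SMIT path above is the safer route, and the resulting virtual entX is then given its IP with smit mktcpip as described):

mkdev -c adapter -s pseudo -t ibm_ech \
    -a adapter_names=ent0 -a backup_adapter=ent1 -a mode=8023ad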

configure NTP Server on AIX


1.Verify that you have a suitable NTP server.
#lssrc -ls xntpd
Note : sys peer should show a valid server or 127.127.1.0
If the server is "insane", you should need to correct it by adding a server line into
/etc/ntp.conf and restarting xntpd.
Following these steps
#vi /etc/ntp.conf
Add the server line (the local clock reference, matching the sys peer value shown above):
server 127.127.1.0
Double check that "broadcastclient" is commented.
#stopsrc -s xntpd
#startsrc -s xntpd

Note : If the server runs databases, use the -x flag to prevent the clock from
changing in a negative direction. Enter the following:
#startsrc -s xntpd -a "-x"
2. Enter
#lssrc -ls xntpd
to verify that the server is synched. This process can take up to 12 minutes.
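A minimal /etc/ntp.conf for the server case above (a sketch; the local-clock line matches the 127.127.1.0 sys peer expected by lssrc):

server 127.127.1.0        # local clock (LOCAL driver) as the reference
driftfile /etc/ntp.drift
tracefile /etc/ntp.trace
#broadcastclient          # must stay commented out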

configure NTP Client on AIX


1. Verify that you have a server suitable for synchronization, Enter:
#ntpdate -d ip.address.of.server
The offset must be less than 1000 seconds for xntpd to synch. If the offset is
greater than 1000 seconds, change the time manually on the client and run ntpdate
-d again.
If you get the message ," no server suitable for synchronization found", verify
xntpd is running on the server (see above )and that no firewalls are blocking port
123.
2. Specify your xntpd server in /etc/ntp.conf. Enter:
#vi /etc/ntp.conf
Comment out the "broadcastclient" line and add:
server ip.address.of.server prefer
Leave the driftfile and tracefile at their defaults.

3. Start the xntpd daemon:
#startsrc -s xntpd
(use the -x flag if it is appropriate in your environment.)
4. Uncomment xntpd from /etc/rc.tcpip so it will start on reboot.
#vi /etc/rc.tcpip
Uncomment the following line:
start /usr/sbin/xntpd "$src_running"
If using the -x flag, add "-x" to the end of the line. You must include the quotes
around "-x".
5. Verify that the client is synched:
#lssrc -ls xntpd
Note: sys peer should display the IP address or name of your xntpd server. This
process may take up to 12 minutes.

Network Related commands in AIX

host 193.9.200.1 Resolves ip to host name (from /etc/hosts file)


host ibm
hostname ibm

Resolve ibm to ip address (from /etc/hosts file)


To change the host name to ibm

entstat en0

To the status of ethernet device en0

entstat -d en0

To list the detailed status of device en0

no -a
no -d thewall

To list all net configurable attributes and their values


To change thewall parameter to its default value

no -o ipforwarding=1

To make the machine as router in tcpip networks

traceroute ibm

To trace the route to ibm

ping ibm

To tcp ping to the machine ibm

ifconfig -a

To show the status of all network interfaces

ifconfig en0

To show the status of en0

ifconfig en0 up

Turns on network card en0

ifconfig en0 down

Turns off network card en0

ifconfig en0 detach

Removes en0 card from the network interface list

ifconfig en0 inet 194.35.52.1 netmask 255.255.255.0 up


en0 and starts immediately

configure

mktcpip -h ibm -a 10.0.2.40 -m 255.255.255.0 -i en0 assign hostname


as ibm, IP as 10.0.2.40 subnetmask 255.255.255.0 to en0 interface
ifconfig en0 alias 195.60.60.1 Create alias ip address for en0
route add 0 192.100.13.7    To make 192.100.13.7 the default gateway for the entire network

route add 192.100.12.0 192.100.13.7    To make 192.100.13.7 the gateway for the 192.100.12.0 network

route -f    To clear the gateway table

chdev -l inet0 -a hostname=ibm    To change the host name to ibm permanently

netstat -a

To show the state of all sockets

netstat -c

To show the network buffers cache

netstat -D

To show the net drops of packets

netstat -i

To display interface statistics

netstat -r

To show the routing table

netstat -rn    To show the routing table (IPs are shown instead of host names)

netstat -s

To show the statistics of the protocols

netstat -s -p <>

To show the statistics of respective protocols

********

NIM related Commands in AIX

lsnim

To list the nim resources

lsnim -l dcmds

To list the detailed info about the object dcmds

lsnim -O dcmds

To list the operation dcmds object can support

lsnim -c resources dcmds    To list the resources allocated to the machine dcmds

nimclient    The client version of the nim command (the user can obtain the same results as nim on the server)

nimconfig -a pif_name=en0 -a netname=net1    To initialise the NIM master with network name net1

nimconfig -r    To rebuild the /etc/niminfo file, which contains the variables for NIM

nim -o define -t lpp_source -a source=/dev/cd0 -a server=master -a location=/export/lpp_source/lpp_source1 lpp_source1    To define the lpp_source1 image in the /export/lpp_source/lpp_source1 directory from source cd0
nim -o define -t mksysb -a server=master -a location=/resources/mksysb.image mksysb1    To define the mksysb resource mksysb1 from source /resources/mksysb.image on the master


nim -o remove inst_resource

To remove the resource inst_resource

nim -o check lpp_source1 To check the status of lpp_source lpp_source1


nim -o allocate -a spot=spot1 -a lpp_source=lpp_source1 node1    To allocate the resources spot1 and lpp_source1 to the client node1

nim -o bos_inst node1 To initialise NIM for the BOS installation on node1
with the allocated resources
nim -o dkls_init dcmds To initialize the machine dcmds as diskless
operation
nim -o dtls_init dcmds To initialize the machine dcmds for dataless
operation
nim -o cust dcmds    To initialize the machine dcmds for a customize operation

nim -o diag dcmds

To initialize the machine dcmds for diag operation

nim -o maint dcmds    To initialize the machine dcmds for a maintenance operation

nim -o define -t standalone -a platform=rspc -a if1="net1 dcmds xxxxx" -a cable_type1=bnc dcmds    To define the machine dcmds as standalone with platform rspc, network net1, cable type bnc and mac address xxxxx
nim -o unconfig master

To unconfigure nim master

nim -o allocate -a spot=spot1 dcmds    To allocate the resource spot1 to the machine dcmds

nim -o deallocate -a spot=spot1 dcmds    To deallocate the resource spot1 from the machine dcmds

nim -o remove dcmds    To remove the machine dcmds after removing all resources associated with it

nim -o reboot dcmds    To reboot the client dcmds

nim -o define -t lpp_source -a location=/software/lpp1 -a server=master -a source=/dev/cd0 lpp1    To define the lpp_source lpp1 on the master in the /software/lpp1 directory from source device /dev/cd0
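
A typical BOS install run strings the commands above together (a sketch; node1, spot1 and lpp_source1 are the example names used above):
nim -o allocate -a spot=spot1 -a lpp_source=lpp_source1 node1
nim -o bos_inst node1
lsnim -l node1        watch the Cstate field until the installation completes
nim -o deallocate -a spot=spot1 -a lpp_source=lpp_source1 node1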

********************



Useful HACMP Commands

clstart -m -s -b -i -l    To start cluster daemons (-m clstrmgr, -s clsmuxpd, -b broadcast message, -i clinfo, -l cllockd)
clstop -f -N    To force shutdown of the cluster immediately without releasing resources

clstop -g -N

To do graceful shutdown immediately with no takeover

clstop -gr -N

To do graceful shutdown immediately with takeover

cldare -t

To sync the cluster topology

cldare -t -f

To do the mock sync of topology

cldare -r

To sync the cluster resources

cldare -r -f

To do the mock sync of resources

clverify    Cluster verification utility

cllscf    To list cluster topology information

cllsclstr

To list the name and security level of the cluster

cllsnode

To list the info about the cluster nodes

cllsnode -i node1

To list info about node1

cllsdisk -g shrg    To list the PVID of the shared hard disk for resource group shrg

cllsnw    To list all cluster networks

cllsnw -n ether1    To list the details of network ether1

cllsif    To list the details by network adapter

cllsif -n node1_service    To list the details of the network adapter node1_service

cllsvg    To list the shared VGs which can be accessed by all nodes

cllsvg -g sh1    To list the shared VGs in resource group sh1

cllslv    To list the shared LVs

cllslv -g sh1    To list the shared LVs in the resource group sh1

cllsdisk -g sh1    To list the PVID of disks in the resource group sh1

cllsfs    To list the shared file systems

cllsfs -g sh1    To list the shared file systems in the resource group sh1

cllsnim    Show info about all network modules

cllsnim -n ether

Show info about ether network module

cllsparam -n node1    To list the runtime parameters for the node node1

cllsserv    To list all the application servers

claddclstr -i 3 -n dcm    To add a cluster definition with name dcm and id 3

claddnode    To add an adapter

claddnim    To add a network interface module

claddgrp -g sh1 -r cascading -n n1 n2    To create resource group sh1 with nodes n1,n2 in cascade

claddserv -s ser1 -b /usr/start -e /usr/stop    Creates an application server ser1 with start script /usr/start and stop script /usr/stop

clchclstr -i 2 -n dcmds    To change the cluster definition name to dcmds and id to 2

clchclstr -s enhanced

To change the cluster security to enhanced

clchnode

To change the adapter parameters

clchgrp

To change the resource group name or node relationship

clchparam

To change the run time parameters (like verbose logging)

clchserv    To change the name of the app. server or change the start/end scripts

clrmclstr

To remove the cluster definition

clrmgrp -g sh1    To delete the resource group sh1 and related resources

clrmnim ether    To remove the network interface module ether

clrmnode -n node1

To remove the node node1

clrmnode -a node1_svc    To remove the adapter named node1_svc

clrmres -g sh1    To remove all resources from resource group sh1

clrmserv app1    To remove the application server app1

clrmserv ALL    To remove all application servers

clgetactivenodes -n node1
To list the nodes with active cluster
manager processes from cluster manager on node node1
clgetaddr node1    Returns a pingable address from node node1

clgetgrp -g sh1    To list the info about resource group sh1

clgetgrp -g sh1 -f nodes    To list the participating nodes in the resource group sh1

clgetif
To list interface name/interface device name/netmask
associated with a specified ip label / ip address of a specific node
clgetip sh1
To get the ip label associated to the resource group
clgetnet 193.9.200.2 255.255.255.0
To list the network for ip
193.9.200.2, netmask 255.255.255.0
clgetvg -l nodelv    To list the VG of LV nodelv

cllistlogs    To list the logs

clnodename -a node5

To add node5 to the cluster

clnodename -o node5 -n node3    To change the cluster node name node5 to node3

clshowres    Lists resources defined for all resource groups

clfindres

To find the resource group within a cluster

xclconfig

X utility for cluster configuration

xhacmpm

X utility for hacmp management

xclstat

X utility for cluster status
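
For example, creating a simple application resource group from the commands above (a sketch; sh1, ser1 and the script paths are the example names used in this list):
claddgrp -g sh1 -r cascading -n n1 n2              resource group sh1 with nodes n1,n2
claddserv -s ser1 -b /usr/start -e /usr/stop       application server with start/stop scripts
cldare -r                                          sync the cluster resources
clfindres                                          confirm where the group is online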

HACMP LOGS
=========
/tmp/hacmp.out    Detailed event script output logged in a 7 day rolling history

/var/adm/cluster.log    High level view of cluster events (no clean up on this file)

/usr/sbin/cluster/history/cluster.mmdd    Day by day view of cluster events (one file per day of month)

/tmp/cm.log    Clstrmgr messages

/tmp/cspoc.log    Output from C-SPOC commands (similar to smit.log)

/tmp/emuhacmp.out    Output from event emulation
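
When an event fails, /tmp/hacmp.out is usually the first log to read (a sketch; the EVENT START / EVENT COMPLETED markers are what the event scripts normally write there):
tail -f /tmp/hacmp.out
grep -E "EVENT START|EVENT COMPLETED|ERROR" /tmp/hacmp.out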

rdist -b -f /etc/disfile1    To distribute the files in disfile1 to all nodes listed in disfile1, in binary mode

Sample entry for disfile1:


HOSTS = ( root@node1 root@node3 )
FILES = ( /etc/passwd /etc/security/passwd)
${FILES} -> ${HOSTS}
/usr/es/sbin/cluster/utilities/*    (location of all HACMP commands)

***************


Useful TIVOLI Commands

q sess

To see currently working sessions

can sess ( sess no) To cancel a session


q eve domainname schedulename    To see the schedule time, nodename, schedule name and progress of the scheduled backup (q eve * * to see all scheduled backups)
q proc

To see currently running background processes

cancel process (process no)    To cancel a process

q node    To view assigned nodes in a Tivoli server

q filespace    To check backup space utilized for each file system of a server, filesystem type...
q actlog To view activity log
q act begind=mm/dd/yyyy begint=hh:mm:ss endd=mm/dd/yyyy
endt=hh:mm:ss
To view activity log for a specified period by begining
date and end date
q act begind=today-1 begint=00:00    To view the activity log from yesterday 12:00 am

q act search=transferred begint=00:00    To see all completed backup sizes of servers from 12:00 am

q act search=nodename    To view the activity log for a node

q drive f=d To get detailed report of drive, it's status ...


q library f=d    To get detailed information about the tape library

q libvol    To see volumes in the tape library and their status

q vol    To see volumes and their storage pool name, capacity and status
q req

To see requests

reply (request no) To continue request


q sched

To show schedule name domain, action , start date & time

q sched f=d    To show a detailed report about schedules including the schedule backup path

UPDATE LIBVOLUME TS3200 L41117L4 STATUS=SCRATCH    To change a library volume (L41117L4) status to scratch from private

update node nodename "admin" domain=domainname forcepwreset=Yes    To change the tivoli login password of a node
define vol storagepoolname L41116L4    Add new tape L41116L4 to a storagepool

delete volume L41116L4 discarddata=yes    To delete the data inside a tape media L41116L4

To create a new scheduled backup (incremental) for a node


========================================
UPDATE SCHEDULE domainname nodename desc="daily incremental backup
for a server" action=Incremental options=-subdir=yes objects=""/fs1/*"
"/fs2/*" "/home/user1/*"" priority=5 startdate=mm/dd/yyyy
starttime=hh:mm duration=1 duru=Hou period=1 perunits=Day
dayofweek=Any expiration=never
To start tivoli services in an AIX client
========================
#cd /usr/tivoli/tsm/client/ba/bin
nohup dsmc sched &    To start the scheduled backup service

nohup dsmcad &    To start the BA client (web mode backup, i.e. http://ip.address.of.server:1581)

#cd /usr/tivoli/tsm/StorageAgent/bin
nohup ./dsmsta &    To start the storage agent (LANFREE backup = backup directly from SAN to tape library)
#ps -ef | grep dsm    To view currently running Tivoli services on the server

To update drive status of a server to online
==============================
update path sourceservername TAPE0 srct=server destt=drive libr=ts3200 online=yes

To set serial number of a drive in tape library
==============================
UPDATE DRIVE TS3200 TAPE0 ELEMENT=autodetect ONLINE=Yes WWN="2002000E11107B6C" SERIAL=autodetect
To detect label of a tape in tape library
==========================
label libvol t3200 checkin=scratch labelsource=barcode search=bulk
waittime=0
To remove tape media from tape library
===========================
checkout libvol ts3200 VOLLIST=A00015L3,A00019L3 CHECKLABEL=YES
REMOVE=YES
To add and label new tapes into library
===========================
LABEL libvol TS3200 search=yes labelsource=barcode overwrite=NO
checkin=SCRATCH WAITTIME=0
To take full backup of Tivoli database manually to a file in local disk
=============================================
backup db dev=FILECLASS type=full
to assign a tape drive to a server
=======================
UPDATE PATH servername TAPE0 SRCTYPE=SERVER DESTTYPE=DRIVE
LIBRARY=TS3200 DEVICE=\\.\Tape0 ONLINE=YES AUTODETECT=no

Important
=======

If you are not able to take a backup through any of the Tivoli backup methods (BA
client, dsmc, schedule), check whether any NFS-mounted filesystems exist on the
server and whether the NFS server is available. If that server is not powered on,
either unmount the NFS filesystem or make that server available.
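
A quick way to run the check described above (a sketch; the server name and mount point are placeholders):
mount | grep nfs                 list NFS mounts and the servers they come from
ping nfs_server_hostname         confirm the NFS server is reachable
umount -f /stale/mountpoint      force-unmount a stale mount if its server is down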

****************



SRC (System Resource Controller) commands in AIX

lssrc -a

To list the status of all subsystems

lssrc -h node1 -a    To list the status of all subsystems on the foreign host node1

lssrc -s inetd

To list the status of the subsystem inetd

lssrc -g tcpip

To get the status of the subsystem group tcpip

startsrc -s inetd    To start the subsystem inetd

startsrc -g tcpip    To start the subsystem group tcpip

stopsrc -s inetd
To stop the subsystem inetd (if process is under
srcmstr. ie PPID of process=PID of srcmstr)
stopsrc -g tcpip    To stop the subsystem group tcpip

refresh -s nfsd    To refresh the nfsd subsystem

refresh -g tcpip    To refresh the tcpip subsystem group

lssrc -p [PID of process] To get status of the subsystem by process ID

kill [PID of process] To kill a process that not started by srcmstr


mkssys

To add a subsystem

rmssys -s kerberos

To remove the subsystem kerberos

chssys -s kerb -s kad

To rename the subsystem kerb to kad
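
For example, a daemon of your own can be placed under SRC control like this (a sketch; mydaemon and its path are placeholders, not from the original notes; -S means it is controlled by signals, -n/-f set the normal and force stop signals):
mkssys -s mydaemon -p /usr/local/bin/mydaemon -u 0 -S -n 15 -f 9
startsrc -s mydaemon
lssrc -s mydaemon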

*************


User related commands in AIX

id

To list all system identifications for current user

id -gn    To list the default group for the current user

id -Gn    To list all system groups for the current user

lsuser root

To list the attribute of user root

lsuser ALL

To list the attributes of all users

lsuser -a ALL

To list all usernames

lsuser -a HOME ALL

To list the home directory of all users

lsuser -a auth1 auth2 ALL    To list the authentication methods for all users

lsuser -a expires ALL    To list the expiry date of all users

lsuser -a account_locked ALL

To check account lock status of all users

chuser login=true praveen    To enable the user praveen

chuser rlogin=true username    Enable the user to login remotely to the server

mkuser praveen    Creates user praveen with default values from /usr/lib/security/mkuser.default
passwd

To change password of current user

passwd username To change password of a user


pwdadm username To change password of a user by security (admin)
member
mkuser su=false username    Create a user without su facility

rmuser username    To remove a user

rmuser -p praveen    To remove user praveen and all his attributes

who    List users with tty numbers and ip addresses

who /var/adm/wtmp    Lists the history of login, logout, system startup and shutdown

last root All previous login and logoff time of root user
who -r

To list the current run level ( cat /etc/.init.state )

whoami show current user


who am i    To list the exact user (same as who -m). This shows the difference between
su and su -: after su, it shows the previous username because the environment is not
set for the new user, while after su - it shows the new username because the
environment has been set.
mkgroup dcm    To create the group dcm

chgroup users=user1,user2,user3 dcm    To add users user1, user2 and user3 to the dcm group

rmgroup dcm    To delete the group dcm

chauthent    To change the authentication methods

chuser unsuccessful_login_count=0 username    To reset the unsuccessful login count

pwdck -t ALL / Username To verify user passwords ( Report error and ask
to fix it)
usrck -t ALL / Username To verify user definition.
grpck -t ALL / groupname To verify a group
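
A common combination of these commands is unlocking an account after too many failed logins (a sketch; praveen is the example user above):
lsuser -a account_locked unsuccessful_login_count praveen
chuser account_locked=false praveen
chuser unsuccessful_login_count=0 praveen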

*************


Print commands in AIX


switch.prt -d    To display the current print subsystem (AIX, System V, BSD)

switch.prt -s AIX    To change the current print subsystem to AIX

switch.prt -s systemV    To change the current print subsystem to System V

qprt -P qname filename To submit print jobs (AIX Printing)


qprt -P:lp1 filename    To submit a print job when multiple queues / printers are there

qprt -N    No. of print copies required

qchk -q

To display the default q

qchk -P lp0

To display the status of the printer lp0

qchk -# 123

To display the status of job number 123

qchk -A    To display the status of all queues (AIX Printing)

qcan -x 123    To cancel the print job 123 (AIX Printing)

qcan -X -P lp0

To cancel all jobs submitted to lp0 (AIX Printing)

qchk -L to show job priorities


qpri -#570 -a 25    To change the priority of job 570 to 25

qhld -# 569    To hold the job 569

qhld -r -#569

To remove holding job 569 (release)

qmov -m pq1 -#11

To move the job 11 to queue pq1

enable psq

To enable queue psq

disable psq

To disable queue psq

lp -d qname filename To submit printjob (System V printing)


lp -n

No. of print copy required

lpstat    To display the status of all queues (System V printing)

lpstat -p lp0    To display the status of print queue lp0

lpstat -u root

To display the jobs submitted by user root

cancel -#111

To cancel job 111 (System V printing)

lpq -P lp0

To display the status of queue lp0 ( BSD Printing )

************


Backup commands in AIX

mksysb -i -X /dev/rmt0    Creates image.data and a system backup (-X expands /tmp if required)

mksysb -m /dev/rmt0    Creates the image.data file with a map file and a system backup

mksysb -e /dev/rmt0    Creates a system backup but excludes the files listed in /etc/exclude.rootvg

mkszfile    Creates the /image.data file

mkcd -d /dev/cd1    Creates a bootable system backup (mksysb) to the CD-R device /dev/cd1

mkcd -d /dev/cd1 -v datavg    Creates a backup of VG datavg (savevg) to the CD-R device /dev/cd1

savevg -i -f /dev/rmt0 datavg    Creates the datavg.data image file and backs up VG datavg (path = /tmp/vgdata/datavg/datavg.data)

savevg -ef /dev/rmt0 datavg    Creates a datavg backup but excludes the files listed in /etc/exclude.datavg

find / -print | backup -ivf /dev/rmt0    Backup the entire system to rmt0

backup -0vf /dev/rmt0 /home    Backup the /home directory to rmt0 with backup level 0

restore -Tvf /dev/rmt0    List the archive in rmt0

restore -xvf /dev/rmt0 /home    Restore /home from the archive on device rmt0

restore -xvf /export/mksysb ./etc/sshd.conf    Restores a particular file from the /export/mksysb image

restore -Pa -vf /dev/rmt0 ./etc/passwd    Restore only a file's attributes from tape

find ./home -print | cpio -ocvumB > /dev/rmt0    Archives the ./home directory

cpio -icvdumB < /dev/rmt0    Restores the cpio archive from rmt0

cpio -ivt < /dev/rmt0    List the contents of the cpio archive on rmt0

cpio -icvd < /dev/rmt0 /home    Restores the /home directory from rmt0
tar -cvf /dev/rmt0 /home    Archives /home to the rmt0 device

tar -tvf /dev/rmt0    List the archives in rmt0

tar -xvf /dev/rmt0 /home    Extract /home from rmt0

dd if=file1 of=file2 conv=ebcdic    Convert and copy an ascii file to the ebcdic file file2

dd count=1 bs=4k skip=31 seek=1 if=/dev/hd4 of=/dev/hd4    Copy the 31st block and paste it to the 1st block in hd4 (i.e. fix a corrupted superblock by restoring its backup copy to the original location)

dd if=/dev/rmt0 ibs=512 obs=1024 of=/dev/rmt1    To copy blocks from rmt0 with a 512-byte block size to rmt1 with a 1024-byte block size
tctl -f /dev/rmt0 rewind

To rewind the tape

tctl -f /dev/rmt0 offline

To eject the tape

tctl -f /dev/rmt0 status

To show the status of tape

tcopy /dev/rmt0.0 /dev/rmt1.0    To copy the contents of the 1st tape (tape0) to the 2nd tape (tape1). Useful when the sizes of the tapes are different.

tcopy /dev/rmt0    To show the no. of files and the block size in the tape

chdev -l rmt0 -a block_size=512    To change the block size of the tape to 512
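
For example, to keep scratch data out of a mksysb (a sketch; the patterns follow the egrep format that /etc/exclude.rootvg expects):
echo "^./tmp/" >> /etc/exclude.rootvg
echo "^./var/tmp/" >> /etc/exclude.rootvg
mksysb -i -e /dev/rmt0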

******************


Installation specific commands in AIX

lslpp -l

To see the details of installed file sets

lslpp -ha bos.net.*    To list the installation history of all filesets in the bos.net packages

lslpp -f bos.rte

To list the files in the bos.rte package

lslpp -w /etc/hosts    To list the fileset which contains the /etc/hosts file (parent fileset)

lslpp -p bos.net.nfs.server    To list the prerequisites for the bos.net.nfs.server fileset

lslpp -d    To show the dependencies of a fileset

installp -L -d /dev/rmt0.1    To list the installable products on the device rmt0

installp -aX -d /dev/rmt0.1 bos.net    To install all filesets within bos.net, expanding filesystems if required

installp -u bos.net    To remove bos.net

installp -r

To reject the applied software

installp -c -f

To commit the applied fileset

installp -C

To cleanup an incomplete installation

lppchk -c
To check the fileset items and verifies that the
checksum and filesize are consistent with SWVPD
lppchk -v
verify that all filesets have required requisites and
are completely installed
instfix -k IX9999 -d /dev/rmt0.1 To install the file set associated with fix
IX9999 from rmt0
instfix -ik IX9999

To verify fix IX9999 installed

************

Console Emulation commands in AIX

lscons

To list the current console

lscons -b

To list the console at next boot

chcons /dev/tty3 To change the console to tty3


chcons -a login=enable /dev/tty3 Redirect console to tty3 and provide
login prompt
swcons /dev/tty3

To change system console to tty3 temporarily

************

Device Related Commands in AIX

cfgmgr    To configure devices and install device software in the system

cfgmgr -l vscsi0    To configure the components connected to the vscsi0 interface

lscfg    To display configuration, diagnostics and vital product definition info

lscfg -l mem0

Display info about device mem0

lscfg -l ent*

Display info about all Ethernet cards

lscfg -v

Display vpd

lscfg -v -l hdisk0    Display the vpd of hdisk0

mkdev -l rmt0    To change device rmt0 from defined state to available state

lsdev -P    To list all supported devices

lsdev -P -c disk    To list all supported disks

lsdev -P -r class    To display the supported classes

lsdev -P -r subclass    To display all supported subclasses


lsdev -C    To list all configured devices

lsdev -C -l mem0    To display the properties of mem0

chdev -l sys0 -a maxuproc=100    To change the default maximum number of processes per user to 100

chdev -l rmt0 -a block_size=512    To change the tape block size to 512

chdev -l rmt0 -a ret=no    To avoid tape retension

rmdev -l rmt0

To remove the device rmt0

rmdev -dl rmt0

To remove the device totally from database

rmdev -l rmt0 -S    To change the state of the device to stopped

lsparent -C -k rs232    To display possible parent devices which accept rs232 devices

lsparent -C -l hdisk0    To display parent devices which accept the child device hdisk0

lsattr -Dl rmt0

To see the default values of the device rmt0

lsattr -El rmt0

To see the current values of the device rmt0

lsattr -Rl tty0 -a login    To see all possible values of the login attribute of tty0

lsconn -p scsi0    To list all possible connections scsi0 can accept

lvlstmajor    To list the available major numbers

mknod /dev/null c 2 2    Create the null device with major (2) and minor (2) numbers (c - character device)

****************

System Dump commands in AIX

sysdumpdev -l

To list the current dump destination

sysdumpdev -L

List the details of the previous dump

sysdumpstart -p

Starts dump in the primary dump device

sysdumpstart -s

Starts dump in the secondary dump device

sysdumpdev -p /dev/lv00    To make lv00 the primary dump device temporarily

sysdumpdev -P -p /dev/lv00    To make lv00 the primary dump device permanently

sysdumpdev -s /dev/rmt0    To make rmt0 the secondary dump device

sysdumpdev -z    To determine whether a new system dump has occurred
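
Before assigning a logical volume as the dump device, it is worth checking the estimated dump size (a sketch; lv00 is the example LV above):
sysdumpdev -e                    estimate the dump size for the running system
sysdumpdev -P -p /dev/lv00       then assign a dump device large enough to hold it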

**************

Paging Space Commands in AIX


lsps -a

To list out all paging spaces

lsps hd6

To display the details of the paging space hd6

chps -a y paging00

To turn on the paging space paging00 on next reboot

chps -a n paging00

To turn off the paging space paging00 on next reboot

chps -s4 paging00    To increase the size of the paging space paging00 by 4 LPs

mkps -a -n -s4 newvg    To create a paging space on VG newvg of 4 LP size (-s4), activate it immediately (-n) and activate it at every restart (-a)
rmps paging00    To remove the paging space paging00

swapon -a    To activate all entries in the /etc/swapspaces file

swapon /dev/paging00    To make swap space paging00 available

swapoff /dev/paging00    To disable swap space paging00
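
For example, to retire an active paging space (a sketch; paging00 as above; on levels where swapoff is not available, disable it with chps -a n and reboot instead):
swapoff /dev/paging00
rmps paging00
lsps -a                          confirm it is gone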

***********


BLV commands in AIX

Bootlist, Bosboot and Bootinfo commands in AIX


bootlist -m normal -o

To see the boot sequence in normal mode

bootlist -m service -o

To see the boot sequence in service mode

bootlist -m normal hdisk0 hdisk1    To change the boot sequence to hdisk0, hdisk1 in normal mode

bootlist -m service cd0 rmt0 hdisk0    To change the boot sequence to cd0, rmt0, hdisk0 in service mode

bosboot -a    To create the boot image on the default boot logical volume of the hard disk from which the system was booted

bosboot -ad /dev/hdisk1    To create the boot image on the specified PV hdisk1

mkboot -cd /dev/hdisk1    To clear the boot image from hdisk1 (same as chpv -c hdisk1)

bootinfo -b

To show the last boot device

bootinfo -e

Check the machine can boot from tape

bootinfo -k
To check the type of boot
(1=secure,2=service,3=normal)
bootinfo -K

To check whether the 32 or 64 bit kernel is active

bootinfo -m

To display machine model code

bootinfo -o hdisk0    To display the device location or disk device name of hdisk0

bootinfo -p

To see the machine platform

bootinfo -r

To see the size of memory

bootinfo -s hdisk0 To see the size of hdisk0


bootinfo -P 0 -s hdisk0    To see the physical partition (PP) size of hdisk0

bootinfo -t    Specifies the type of boot

bootinfo -T    To see the machine type

bootinfo -y

To see whether the hardware in use is 32 bit or 64 bit
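
A common sequence after replacing or mirroring a rootvg disk (a sketch; hdisk0 and hdisk1 as in the examples above):
bosboot -ad /dev/hdisk1              rebuild the boot image on the new disk
bootlist -m normal hdisk0 hdisk1     make both disks bootable
bootlist -m normal -o                verify the boot sequence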

*****************

LV commands in AIX

lslv -l lv00    Display info about LV lv00 by PV

lslv -p hdisk1    Display the LV allocation map for hdisk1

chlv -t copy lv00    To change lv00 to copy type

chlv -p r lv00    To change lv00 to read-only mode

rmlv testlv

To remove testlv

rmlv -f testlv

To remove testlv forcefully (without confirmation)

mklv -s n -c 3 testlv hdisk1    To make LV testlv with three copies on hdisk1
extendlv testlv 5    To extend the LV testlv by 5 LPs

mklvcopy -s n testlv 2 hdisk1    To mirror LV testlv on the same PV with 2 copies

mklvcopy testlv 3 hdisk1 hdisk2    To mirror LV testlv on hdisk1 and hdisk2 with 3 copies

rmlvcopy testlv 2 hdisk1    To remove one copy of LV testlv from hdisk1

mklv -t jfslog -y log00 newvg 2    To create a jfslog named log00 on VG newvg with 2 LPs

mklv -t boot -y hd5 rootvg 1    To create a logical volume of type boot with 1 LP (the BLV)

logform /dev/log00    To format the jfslog volume log00

**************

PV commands in AIX

lspv

show all PV's in the system and their status

lspv hdisk0

display status and characteristics of the PV hdisk0

lspv -p hdisk0

display PP usage of hdisk0 and their location

lspv -l hdisk0    To list all logical volumes on PV hdisk0 and their distribution

chpv -v r hdisk1    Make the PV unavailable (used while removing a PV without varyoffvg)

chpv -v a hdisk1    Make the PV available (open the PV)

chpv -c hdisk0    Clear the master boot record on PV hdisk0 (CAREFULLY..!!)

migratepv hdisk1 hdisk2 To move PPs from hdisk1 to hdisk2

migratepv -l newlv hdisk1 hdisk2    To migrate LV newlv from hdisk1 to hdisk2
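
Putting these together, a typical disk replacement looks like this (a sketch; newvg, hdisk1 and hdisk2 are the example names used here):
extendvg newvg hdisk2          add the new disk to the VG
migratepv hdisk1 hdisk2        move all PPs off the failing disk
reducevg newvg hdisk1          drop the old disk from the VG
rmdev -dl hdisk1               remove its device definition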
**************

FS commands in AIX

lsfs

Lists all filesystems in the /etc/filesystems entry

lsfs -a

To list all filesystems (default)

lsfs -q    List all filesystems with detailed info (shows the size of the FS and of the LV under it, so we can check whether size of LV = size of FS)
lsfs -l

Specify the output in list format

lsfs -c

Specify the output in column format

lsfs -v jfs

Lists all jfs filesystems

chfs -a size=24576 /test    Change the size of FS /test to 24576 blocks x 512 bytes (12 MB)

chfs -a size=+24576 /test

Add 24576(blocks)x512 byte to FS /test

chfs -a size=+64M /test

Add 64 MB to /test

chfs -a size=10G /test

fix size of the FS /test to 10 GB

chfs -m /test /new

Change the mount point from /test to /new

chfs -A /test    To auto mount the filesystem /test

chfs -d account /test    Remove the account attribute of /test (from the /etc/filesystems file)

chfs -a splitcopy=/backup -a copy=2 /oracle    Mounts the 2nd copy of the mirrored filesystem /oracle on /backup in read-only mode for backup purposes
crfs -v jfs2 -g newvg -a size=100M -m /test    Creates FS /test of type jfs2 in VG newvg of size 100 MB with a default LV
crfs -v jfs -d /dev/lv00 -m /test Create FS /test of type jfs on device
/dev/lv00
rmfs /test

Deletes FS /test and associated LV

rmfs -r /test

Deletes FS /test its mount point and associated LV

defragfs /test

To defragment the file system /test

defragfs -q /test Display the current defrag status of the file system
fsck -y /dev/lv00    To check the filesystem associated with /dev/lv00, assuming the response "yes" to all prompts

fsck -p /dev/lv00    To restore the superblock from a backup superblock
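
For example, growing a filesystem and checking the result (a sketch; /test as above; df -g reports in GB units on AIX):
chfs -a size=+1G /test         add 1 GB to the filesystem (its LV grows with it)
df -g /test                    confirm the new size and free space
lsfs -q /test                  confirm the FS size matches the underlying LV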

*************


VG commands in AIX

lsvg

Display all VGs

lsvg -o

Display all active VGs

lsvg rootvg

Display info about rootvg

lsvg -l rootvg

Display info about all LVs in rootvg

lsvg -o |lsvg -il

Display info about all LVs in all VGs

lsvg -p rootvg

Display info about all PVs in rootvg

mkvg -s 8 hdisk1    Create a VG with a default name (vgxx) on hdisk1 with partition size 8MB

mkvg -s 8 -y newvg hdisk1    Create a VG with name newvg on hdisk1 with partition size 8MB

mkvg -s 4 -t 2 -y newvg hdisk1    Create newvg on hdisk1 with PP size 4 and no. of partitions 2 * 1016

chvg -a y newvg

To automatically activate newvg at startup

chvg -a n newvg

To deactivate the automatic activation at startup

chvg -t 2 newvg
To change maximum no. of PP / PV to 2032 on vg
newvg i.e 2*1016 ( so max no. of hdisk in newvg is 16 as in this case in
normalvg)
chvg -Qn newvg

To disable quorum on VG newvg

reorgvg newvg

Reorganises PP allocation of VG newvg

extendvg newvg hdisk3 hdisk4    Add PVs hdisk3 and hdisk4 to VG newvg

exportvg newvg    Exports the VG newvg

importvg -V 44 -y newvg hdisk2    Import hdisk2 with the name newvg and assign major number 44

reducevg newvg hdisk3    Remove PV hdisk3 from VG newvg

varyoffvg newvg    To deactivate VG newvg (after unmounting the filesystems in newvg)

varyonvg newvg

To activate VG newvg

syncvg -v newvg

To sync the mirrored LV in the VG newvg

mirrorvg -S -m newvg hdisk2    To mirror the LVs of newvg onto hdisk2 (-m for an exact mirror, -S to mirror in the background)

unmirrorvg newvg hdisk2    To remove the mirrored PV hdisk2 from newvg
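
For example, moving a volume group from one node to another combines several of these commands (a sketch; newvg and hdisk2 as above; unmount its filesystems first):
varyoffvg newvg                  on the releasing node
exportvg newvg
importvg -V 44 -y newvg hdisk2   on the receiving node (keeping the same major number)
varyonvg newvg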

***********



Important AIX commands

lsitab init : to display default runlevel of an AIX system


cat /etc/.init.state : to get current runlevel of an AIX system
lslpp : display installed software on AIX
/usr/lib/errdemon : to start error logging ( errpt ) in AIX
procwait -v : wait for a process and display status
procsig : list all signal action defined by processes
chuser fsize=-1 : to allow a user to create files over 2GB (-1 = no limit)
lppchk -l : to verify that the symbolic links associated with a fileset are
correct as indicated in the SWVPD.
shutdown -F : fast shutdown AIX system
shutdown -Fr : fast reboot AIX system
shutdown -Fm : reboot an AIX system to maintenance mode
telinit 2 : bring an AIX system back to normal (multiuser) mode
telinit q : re-read the /etc/inittab file
oslevel -s : display the currently installed service pack level in AIX
find /apps -print | backup -i -v -f /dev/rmt0 : copy the contents of /apps
to tape in AIX

who : display currently logged users


ln -sf /usr/lib/boot/unix_64 /unix
ln -sf /usr/lib/boot/unix_64 /usr/lib/boot/unix
bosboot -ad /dev/ipldevice
shutdown -Fr : reconfigure an AIX system from a 32-bit kernel to a 64-bit
kernel.
oslevel -rl 5300-06 : show which filesets still need updating to reach technology
level 5300-06 after an AIX upgrade
chnfs -n 100 -B : increase the number of system threads allowed to
access NFS mounts to 100
./myscript > /tmp/myoutput 2>&1 : run myscript and redirect stdout and
stderr to /tmp/myoutput
installp -C : to cleanup improperly applied filesets ( interrupted)
lppchk -c : to verify all the filesets installed completely
lppchk -v : verify that the operating system(AIX) is in a consistent state
after installing AIX technology level ( to verify all filesets are installed
completely)
refresh -s inetd : to restart TCP/IP
chvg -Qn testvg : disable quorum in testvg
smit update_all : to apply system to latest technologic level from CD.
migratepv -l /dev/testlv hdisk1 hdisk2 : move (migrate) testlv from
hdisk1 to hdisk2
migratepv hdisk1 hdisk2 : move all data from hdisk1 to hdisk2
lsfs -q /testfs : to check the size of /testfs and of the LV under it, i.e. how
much the FS could still grow before it matches the size of its LV.
last

To list all the records in the /var/adm/wtmp file (all successful logins)

last shutdown

To show the time of previous shutdown of server.

last reboot

To show the time of previous reboot of server.

uptime

To show how long the system has been up ( w -u )

oslevel

To list the operating system level

lslicense

To see the number of license

chlicense -u30

To change the fixed user license to 30

chlicense -f on

To enable floating user license

lsmcode

To check system firmware version (sys0)

invscout
To get firmware of all peripheral devices. It will generate the
output in /var/adm/invscout/invs.mrp containing the firmwares of all
peripheral devices.

****************



Advanced AIX commands


VG commands in AIX
PV commands in AIX
FS commands in AIX
LV commands in AIX
BLV commands in AIX
Paging Space Commands in AIX
System Dump commands in AIX
Device related commands in AIX
Console emulation commands in AIX
Installation specific commands in AIX
Network Related commands in AIX
Backup commands in AIX
Print commands in AIX
User related commands in AIX
SRC (System Resource Controller) commands in AIX
TIVOLI related Commands
HACMP related Commands
NIM Commands in AIX
Useful AIX commands
