HACMP
HACMP : High Availability Cluster Multi-Processing
High Availability : Elimination of both planned and unplanned system and application
downtime. This is achieved through elimination of H/W and S/W single points of failure.
Cluster Topology : the nodes, networks, storage, clients, and persistent node IP labels/devices.
Cluster resources : components that HACMP can move from one node to another, e.g. service IP labels, file systems, and applications.
RSCT Version: 2.4.2
SDD Version: 1.3.1.3
HA Configuration :
Files changed after installation: /etc/inittab, /etc/rc.net, /etc/services, /etc/snmpd.conf, /etc/snmpd.peers, /etc/syslog.conf, /etc/trcfmt, /var/spool/cron/crontabs/root, /etc/hosts; the hacmp group is also added.
Software Components:
Application server
HACMP Layer
RSCT Layer
AIX Layer
LVM Layer
TCP/IP Layer
HACMP Services :
Cluster communication daemon (clcomdES)
Cluster Manager (clstrmgrES)
Cluster information daemon (clinfoES)
Cluster lock manager (cllockd)
Cluster SMUX peer daemon (clsmuxpd)
HACMP daemons: clstrmgr, clinfo, clsmuxpd, cllockd.
HA supports up to 32 nodes
HA supports up to 48 networks
HA supports up to 64 resource groups per cluster
HA supports up to 128 cluster resources
IP label : the label that is associated with a particular IP address, as defined by DNS or /etc/hosts.
Base IP label : the default IP address that is set on the interface by AIX at startup.
Service IP label : a label over which a service is provided; it may be bound to a single node or to multiple nodes. These are the addresses that HACMP keeps highly available.
IP alias : an IP address that is added to an interface rather than replacing its base IP address.
RSCT Monitors the state of the network interfaces and devices.
IPAT via replacement : The service IP label will replace the boot IP address on the interface.
IPAT via aliasing: The service IP label will be added as an alias on the interface.
Persistent IP address: a node-bound IP label that can be assigned to a network for a particular node.
In HACMP the NFS export : /usr/es/sbin/cluster/etc/exports
Shared LVM:
Shared volume group is a volume group that resides entirely on the external disks shared
by cluster nodes
Shared LVM can be made available in non-concurrent access mode, concurrent access mode, or enhanced concurrent access mode.
Non-concurrent access mode: this environment typically uses journaled file systems to manage data.
Create a non-concurrent shared volume group: smitty mkvg -> give the VG name, No for 'activate automatically after system restart', Yes for 'activate VG after it is created', and give the VG major number.
Create a non-concurrent shared file system: smitty crjfs -> rename the FS names, No to 'mount automatically at system restart', then test the newly created FS by mounting and unmounting it.
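A minimal command-line equivalent of the two SMIT steps above (the VG, disk, file system names, and major number are examples, not from the source):
mkvg -y sharedvg -V 60 -n hdisk2                  # -n: do not vary on automatically at restart
crfs -v jfs2 -g sharedvg -a size=2G -m /sharedfs  # create a JFS2 file system in the shared VG
mount /sharedfs; umount /sharedfs                 # test the new FS by mounting and unmounting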
Importing a volume group to a fallover node (sketched below):
Vary off the volume group
Run the discovery process
Import the volume group
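A hedged sketch of those three steps (the VG, disk, and major number are assumed for illustration):
varyoffvg app1vg                 # on the node that currently owns the VG
importvg -y app1vg -V 60 hdisk2  # on the fallover node; keep the same major number on all nodes
chvg -an app1vg                  # do not auto-varyon at boot; HACMP controls activation
varyoffvg app1vg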
Concurrent access mode: it is not supported for file systems; instead you must use raw LVs and physical disks.
Creating a concurrent access volume group:
Verify the disk status using lsdev -Cc disk
smitty cl_convg -> Create a Concurrent Volume Group -> Enter
With IPAT via replacement, the service IP label replaces the base (boot) IP address on the interface, so the service IP label and boot IP label must be on the same subnet.
Cascading:
A resource group can be activated on a lower-priority node if the highest-priority node
is not available at cluster startup.
On node failure, the resource group falls over to the available node with the next
priority.
Upon node reintegration into the cluster, a cascading resource group falls back to
its home node by default.
Attributes:
1. Inactive takeover (IT): initial acquisition of a resource group in case the home node is not available.
2. Fallover priority can be configured in the default node priority list.
3. Cascading without fallback (CWOF) is an attribute that modifies the fallback behavior: if the CWOF flag is set to true, the resource group will not fall back to any joining node; when the flag is false, the resource group falls back to the higher-priority node.
Rotating:
At cluster startup, the first available node in the node priority list activates the resource group.
If the resource group is on a takeover node, it will never fall back to a higher-priority node when one becomes available.
Rotating resource groups require the use of IP address takeover; the nodes in the resource chain must all share the same network connection to the resource group.
Concurrent:
The resource group is brought online on all available nodes at the same time.
Custom:
Users have to explicitly specify the desired startup, fallover, and fallback policies.
Startup Options:
Online using distribution policy -> the resource group will only be brought online if the node has no other resource group online. You can verify this with lssrc -ls clstrmgrES.
Fallover Options:
Fallover using dynamic node priority -> the fallover node can be selected on the basis of its available CPU, its available memory, or the lowest disk usage. HACMP uses RSCT to gather this information, and the resource group falls over to the node that best meets the chosen criterion.
Bring offline -> the resource group will be brought offline if an error occurs. This option is designed for resource groups that are online on all available nodes.
Fallback Options:
Never fallback
Planning
Save snapshot
A config_too_long message appears when the cluster manager detects that an event has been processing for longer than the specified time. To change the time interval: smitty hacmp -> Extended Configuration -> Extended Event Configuration -> Change/Show Time Until Warning.
Physical networks: TCP/IP-based, such as Ethernet and token ring; device-based, such as RS-232 and target mode SSA (tmssa).
Configuring cluster topology:
Standard and extended configuration
smitty hacmp -> Initialization and Standard Configuration
IP aliasing is used as the default mechanism for service IP label/address assignment to a network interface.
Configure resource groups: use Configure HACMP Resource Groups; you can choose cascading, rotating, custom, or concurrent.
Run discovery: running discovery retrieves current AIX configuration information from all cluster nodes.
Defining persistent IP labels: a persistent label always stays on the same node, does not require installing an additional physical interface, and is not part of any resource group. smitty hacmp -> Extended Topology Configuration -> Configure Persistent Node IP Labels/Addresses -> Add a Persistent Node IP Label (enter node name, network name, node IP label/address).
Bring a resource group offline: smitty cl_admin -> select HACMP Resource Group and Application Management -> Bring a Resource Group Offline.
Bring a resource group online: smitty hacmp -> select HACMP Resource Group and Application Management -> Bring a Resource Group Online.
Move a resource group: smitty hacmp -> select HACMP Resource Group and Application Management -> Move a Resource Group to Another Node.
HACMP LVM
Run cfgmgr
varyonvg vgname
lsattr -El hdisk#
chvg -g vgname
lsvg vgname
varyoffvg vgname
On subsequent cluster nodes that share the VG, run: cfgmgr, lsattr -El hdisk#, importvg -L vgname hdisk#
Synchronize
PowerHA creates a backup copy of the modified files during synchronization on all nodes. These
backups are stored in /var/hacmp/filebackup directory.
The file collection logs are stored in /var/hacmp/log/clutils.log file.
Remove a group
If more than 2 nodes exist in your cluster, you will need a minimum of n non-IP heartbeat networks.
Disk heartbeating typically requires 4 seeks/second; that is, each of the two nodes writes to the disk and reads from the disk once per second. The filemon tool monitors the seeks.
Vpaths are configured as member disks of an enhanced concurrent volume group: smitty lvm -> select Volume Groups -> Add a Volume Group -> give the VG name, PV names, and VG major number, and set 'Create VG concurrent capable' to enhanced concurrent.
Import the new VG on all nodes using smitty importvg, or: importvg -V 53 -y c23vg vpath5
Move data from one partition located on a physical disk to another physical partition on a
different disk: migratelp testlv/1/2 hdisk5/123
Logical track group (LTG) size is the maximum allowed transfer size for a disk I/O operation; display it with lquerypv -M hdisk0.
VOLUME GROUPS
For each VG, two device driver files are created under /dev.
Creating a VG: mkvg -y vg1 -s 64 -V 99 hdisk4
Creating a big VG: mkvg -B -y vg1 -s 128 -f -n -V 101 hdisk2
Creating a scalable VG: mkvg -S -y vg1 -s 128 -f hdisk3 hdisk4 hdisk5
Adding disks that require more than 1016 PPs/PV: chvg -t 2 VG1
Information about a VG read from the VGDA located on a disk: lsvg -n VG1
Change the auto-varyon flag for a VG: chvg -ay newvg
Change the auto-varyoff flag for a VG: chvg -an newvg
Quorum ensures data integrity in the event of disk failure. A quorum is a state in which 51 percent or more of the PVs in a VG are accessible. When quorum is lost, the VG varies itself off.
Turn off quorum: chvg -Qn testvg
Turn on quorum: chvg -Qy testvg
To change the maximum number of PPs per PV: chvg -t 16 testvg
To change a normal VG to a scalable VG: varyoffvg ttt; chvg -G ttt; varyonvg ttt
Change the LTG size: chvg -L 128 testvg (VGs are created with a variable logical track group size.)
Hot spare: all PPs on the spare physical volume should be free. A PP located on a failing disk will be copied from its mirror copy to one or more disks from the hot spare pool.
Designate hdisk4 as a hot spare: chpv -hy hdisk4
Migrate data from a failing disk to a spare disk: chvg -hy vgname
Change synchronization policy: chvg -sy testvg; the synchronization policy controls automatic synchronization of stale partitions within the VG.
Change the maximum number of PPs within a VG: chvg -P 2048 testvg
Change the maximum number of LVs per VG: chvg -v 4096 testvg
How to remove the VG lock: chvg -u vgname
Extending a volume group: extendvg testvg hdisk3; if the disk already contains a PVID, use extendvg -f testvg hdisk3
Removing a disk from a VG: reducevg testvg hdisk3
Synchronize the ODM information: synclvodm testvg
To move the data from one system to another, use the exportvg command. The exportvg command only removes the VG definition from the ODM and does not delete any data from the physical disks: exportvg testvg
importvg: recreates the reference to the VG data and makes that data available. This command reads the VGDA of one of the PVs that are part of the VG, uses redefinevg to find all other disks that belong to the VG, adds the corresponding entries into the ODM database, and updates /etc/filesystems with the new values: importvg -y testvg hdisk7
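Putting exportvg and importvg together, a typical sequence for moving a user VG between hosts (disk, VG, and mount point names are illustrative):
umount /testfs              # unmount all file systems in the VG first
varyoffvg testvg
exportvg testvg             # removes only the ODM definition; data stays on disk
# after attaching/zoning the disks to the target system:
cfgmgr
importvg -y testvg hdisk7   # rebuilds ODM entries and updates /etc/filesystems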
chvg -an app1vg
varyoffvg app1vg
Installation :
New and complete overwrite installation: for a new machine, or to overwrite the existing installation and reassign your hard disks.
Migration: upgrade AIX versions, e.g. from 5.2 to 5.3. This method preserves most file systems, including the root volume group.
Preservation installation: if you want to preserve the user data, use /etc/preserve.list. This installation overwrites the /usr, /tmp, /var, and / file systems by default. The /etc/filesystems file is listed in /etc/preserve.list by default.
TCB:
By installing a system with the TCB option, you enable the trusted path, trusted shell,
trusted processes and system integrity checking.
Every device is part of the TCB, and every file in the /dev directory is monitored by the TCB.
Installation steps : through the HMC, activate and override the boot mode to SMS.
Without an HMC: after POST, when you hear the 2 beeps, press 1.
Insert AIX 5L CD 1 -> Select Boot Options (No. 5) -> Select Install/Boot Device (option 1) -> select CD/DVD -> select SCSI -> select normal boot -> exit from SMS -> system boots from media -> choose language -> Change/Show Installation Settings -> New and Complete Overwrite -> select hard disk -> install options -> Enter to confirm -> after installation the system reboots automatically.
Erase a hard disk using diag command
Alternate Disk Installation:
When applying new maintenance levels, a copy of the rootvg is made to an alternate disk,
then updates are applied to that copy
To view the BOS installation logs: cd /var/adm/ras; cat devinst.log, or alog -o -f bosinstlog, or smit alog_show
Installation Packages:
instfix -ik IY737478
bootlist
3. The AIX boot logical volume contains the AIX kernel, rc.boot, a reduced ODM, and boot commands.
rc.boot2 runs the command that copies the core dump, if available, from /dev/hd6 to the /var/adm/ras/vmcore.0 file, and unmounts the /var file system
15. rc.boot2 runs swapon /dev/hd6 and activates paging space
16. rc.boot2 runs migratedev and copies the device files from the RAM file system to the / file system
17. rc.boot2 runs cp /../etc/objrepos/Cu* /etc/objrepos and copies the ODM files from RAM
devices
23. rc.boot3 runs fsck -f /dev/hd3 and mounts the /tmp file system
24. rc.boot3 runs syncvg rootvg &
25. rc.boot3 runs cfgmgr -p2 or cfgmgr -p3 to configure the rest of the devices. cfgmgr -p2 is used when the physical key on MCA architecture is in normal mode, and cfgmgr -p3 when it is in service mode.
26. rc.boot3 runs cfgcon command to configure the console
27. rc.boot3 runs savebase command to copy the ODM files from /dev/hd4 to /dev/hd5
28. rc.boot3 starts syncd 60 and errdemon
29. rc.boot3 turns off the LEDs
30. rc.boot3 removes the /etc/nologin file
31. rc.boot3 checks CuDv for chgstatus=3 and displays the missing devices on the console
32. The next line of /etc/inittab is executed
by (mount /dev/hd4 /mnt, mount /usr) -> copy the system configuration to a backup directory (cp /mnt/etc/objrepos* /mnt/etc/objrepos/backup) -> copy the configuration from the RAM file system (cp /etc/objrepos/Cu* /mnt/etc/objrepos) -> unmount all file systems -> save the clean ODM to the BLV using savebase -d /dev/hdisk# -> reboot
Corrupted /etc/inittab file: check for an empty or missing inittab file; check for problems with /etc/environment, /bin/sh, /bin/bsh, /etc/fsck, /etc/profile -> reboot
Run level: a selected group of processes. 2 is multi-user and the default run level; S, s, M, m are for maintenance mode.
Identifying the current run level: cat /etc/.init.state
Displaying the history of previous run levels: /usr/lib/acct/fwtmp < /var/adm/wtmp | grep run-level
Changing system run levels: telinit M
Run-level scripts allow users to start and stop selected applications while changing the run level. Scripts beginning with K are stop scripts and those beginning with S are start scripts.
Go to maintenance mode by using shutdown -m
rc.boot file: the /sbin/rc.boot file is a shell script that is called by init. It configures devices, boots from disk, varies on the root volume group, enables file systems, and calls the BOS installation programs.
/etc/rc file: performs normal startup initialization. It varies on all VGs, activates all paging spaces (swapon -a), configures all dump devices (sysdumpdev -q), performs file system checks (fsck -fp), and mounts all required file systems.
/etc/rc.net: It contains network configuration information.
/etc/rc.tcpip: starts all network-related daemons (inetd, gated, routed, timed, rwhod)
Backups:
mksysb: creates a bootable image of all mounted file systems on the rootvg. This command is used to restore a system to its original state.
Tape format: BOS boot image (kernel, device drivers), BOS install image (tapeblksz, image.data, bosinst.data), dummy table of contents, rootvg backup
Exclude file systems using mksysb -ie /dev/rmt0; the exclusion list is read from /etc/exclude.rootvg (cat /etc/exclude.rootvg)
List content of MKSYSB image smitty lsmksysb
Restore a mksysb image : smitty restmksysb
The savevg command finds and backs up all files belonging to the specified volume group, e.g.: savevg -ivf /dev/rmt0 uservg
The restvg command restores the user volume group.
The backup command backs up all files and file systems; the restore command extracts files from archives created with the backup command.
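For example (the device and paths are illustrative), a full backup of /home by i-node and a selective restore:
backup -0 -u -f /dev/rmt0 /home     # level-0 (full) backup, recorded in /etc/dumpdates
restore -T -f /dev/rmt0             # list the contents of the archive
restore -x -v -f /dev/rmt0 ./smith  # extract a single directory from the archive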
Verify the contents of a backup tape: tcopy /dev/rmt0
Daily Management :
ODM: ODM is a repository in which the OS keeps information about your system, such as
devices, software, TCP/IP configuration.
Basic Components of ODM: object classes, objects, descriptors
ODM directories: /usr/lib/objrepos, /usr/share/lib/objrepos, /etc/objrepos
Following steps for NFS implementation:
NFS daemons should be running on both server and client
The file systems that need to be remotely available have to be exported (smitty mknfsexp, exportfs -a, showmount -e myserver)
The exported file systems need to be mounted on the remote systems
NFS services: /usr/sbin/rpc.mountd, /usr/sbin/nfsd, /usr/sbin/biod, rpc.statd, rpc.lockd
Changing an exported file system: smitty chnfsexp
TCP/IP daemons: inetd, gated, routed, named
Configuration:
ODM: ODM(Object data manager) is a repository in which the OS keeps information regarding
your system, such as devices, software or TCP/IP information.
ODM information is stored in /usr/lib/objrepos, /usr/share/lib/objrepos, /etc/objrepos.
ODM commands: odmadd, odmchange, odmcreate, odmshow, odmdelete, odmdrop, odmget,
To start smit in graphical mode: smit -m
Creating an alias: alias rm=/usr/sbin/linux/rm
export PATH=/usr/linux/bin:$PATH; print $PATH
Network File System:
Daemons: Server side(/usr/sbin/rpc.mountd, /usr/sbin/nfsd, portmap, rpc.statd, rpc.lockd) Client
side ( /usr/sbin/biod)
Start the NFS daemons using mknfs -N. To start all NFS daemons: startsrc -g nfs.
Exporting nfs directories:
Smitty mknfsexp
Superblock: contains control information about the file system, such as the overall file system size in 512-byte blocks, FS name, FS log device, version number, number of inodes, list of free inodes, list of free data blocks, date and time of creation, and FS state. This data is stored in the first block of the FS, with a copy in block 31.
Allocation group: consists of inodes and their corresponding data blocks.
Inodes: contain control information about a file, such as type, size, owner, and the dates and times when the file was created, modified, and last accessed; an inode contains pointers to the data blocks that store the actual data. For JFS, the maximum number of inodes and files is determined by the number of bytes per inode (NBPI). For JFS 16MB inode. For JFS2 there is no NBPI.
Data Blocks: actual data. The default value is 4KB.
Device logs: the JFS log stores transactional information; this data can be used to roll back incomplete operations if the machine crashes. rootvg uses the LV hd8 as a common log.
FS differences:
Function         JFS       JFS2
Max FS size      1 TB      4 PB
Max file size    64 GB     4 PB
No. of inodes    fixed     dynamic
Inode size       128 B     512 B
Fragment size    512       512
Block size       4 KB      4 KB
Creating an FS: crfs -v jfs2 -g testvg -a size=10M -m /fs1
Display mounted FS: mount
Display characteristics of FS: lsfs
Initialize log device: logform /dev/loglv01
Display information about inodes: istat /etc/passwd
Monitoring and Performance Tuning:
The quotaon command enables disk quotas for one or more file systems.
The quotaoff command disables disk quotas for one or more file systems.
Enable user quotas on /home: chfs -a quota=userquota,groupquota /home
To check the consistency of the quota files, use quotacheck.
Use the edquota command to set each user's or group's soft and hard limits for allowable disk space and maximum number of files.
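A plausible end-to-end sequence using the quota commands above (the file system and user names are examples):
chfs -a quota=userquota,groupquota /home   # enable quota support on /home
quotacheck /home                           # build and verify the quota files
quotaon /home                              # turn quotas on
edquota smith                              # edit soft/hard limits for user smith
repquota /home                             # report current usage against the limits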
Error logging is automatically started by the rc.boot script.
The errstop command stops error logging.
The daemon for the error log is errdemon.
The path to your system's error log file: /usr/lib/errdemon -l
AIX Commands
To display whether the kernel is 32-bit or 64-bit enabled: bootinfo -K
How do I know if I am running a uniprocessor kernel or a multiprocessor kernel: ls -l /unix
To find the /dev/hdiskxx where the boot logical volume /dev/hd5 is located: lslv -m hd5
How would I know if my machine is capable of running AIX 5L Version 5.3: AIX 5L Version 5.3
runs on all currently supported CHRP (Common Hardware Reference Platform)-based POWER
hardware.
How would I know if my machine is CHRP-based: Run the prtconf command. If it's a CHRP
machine, the string chrp appears on the Model Architecture line.
To display whether the hardware is 32-bit or 64-bit, type: bootinfo -y
How much real memory does my machine have: bootinfo -r, or lsattr -El sys0 -a realmem
To display the number of processors on your system: lscfg | grep proc
Detailed configuration of my system: lscfg; -p gives platform-specific device information, -v gives the VPD
Displays the chip type of the system (for example, PowerPC): uname -p
Displays the release number of the operating system: uname -r
Displays the system name (for example, AIX): uname -s
Displays the name of the node: uname -n
Displays the system name, nodename, version, machine ID: uname -a
Displays the system model name (for example, IBM 9114-275): uname -M
Displays the operating system version: uname -v
Displays the machine ID number of the hardware running the system: uname -m
Displays the system ID number: uname -u
What version, release, and maintenance level of AIX is running on my system: oslevel -r
To determine which fileset updates are missing from 5300-04, for example, run the following
command:
oslevel -rl 5300-04
What SP (Service Pack) is installed on my system? oslevel -s
Information about installed filesets on my system: lslpp -l
To show that bos.acct contains /usr/bin/vmstat: lslpp -w /usr/bin/vmstat, or which_fileset vmstat
To show which filesets need to be installed or corrected: lppchk -v
How do I get a dump of the header of the loader section and the symbol entries in symbolic representation: dump -Htv
To find out whether a hard drive is bootable: ipl_varyon -i
How do I replace a disk?
1. #extendvg VolumeGroupName hdisk_new
2. #migratepv hdisk_bad hdisk_new
3. #reducevg -d VolumeGroupName hdisk_bad
How can I clone (make a copy of ) the rootvg: alt_disk_copy -d hdisk1
How do I identify the network interfaces on my server: lsdev -Cc if
To get information about one specific network interface: ifconfig tr0
Simultaneous multi-threading: instruction streams from the OS are loaded into the processor and executed simultaneously.
DLPAR: DLPAR allows us to add, move, or remove processors, memory, and I/O resources to, from, or between active partitions manually, without having to restart or shut down the partition.
Unused processing units are available in the shared processor pool.
Dedicated processors are whole processors that are assigned to a single partition. The minimum
no. of dedicated processors you must assign is one processor.
When a partition with dedicated processors is powered down, their processors will be available
to the shared processor pool. This capability is enabled by Allow idle processors to be shared.
Idle processors from active partitions with dedicated processors can be used by any uncapped
partition that requires additional processing units to complete its jobs.
Shared processor minimum processing unit is 0.1
Capped : The processor usage never exceeds the assigned processing capacity.
Uncapped : Processing capacity may be exceeded when the shared processor pool has spare
processing units.
Weight is a number in the range 0-255. If there are 3 processors available in the shared processor pool, and partition A has an uncapped weight of 80 while B has 160, then LPAR A will receive 1 processing unit and B will receive 2 processing units.
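Worked out, those numbers follow from proportional sharing of the spare capacity: A receives 3 x 80/(80+160) = 1.0 processing units and B receives 3 x 160/(80+160) = 2.0 processing units.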
Minimum Memory is the minimum amount of memory which needed by the logical partition to
start.
Desired memory is the requested amount of memory for the partition; the partition will receive an amount of memory between the minimum and the desired value. Desired memory is the amount of memory the LPAR should have when it is powered on. If the managed system does not have the desired amount of memory but only a lesser amount, those uncommitted memory resources will be assigned to the LPAR when the LPAR is activated.
You cannot increase the memory beyond the maximum value.
Installed memory is the total no. of memory units that are installed on the managed system
Creating a new LPAR :
Server and Partition -> Server Management -> right-click Partitions -> Create Logical Partition
Give the partition ID (numeric between 1 and 254) and name (max 31 characters)
Give the partition type (AIX or Linux, i5/OS, VIO)
Select workload management group: No
Give the profile name
Specify the min, desired, and max memory
Select dedicated/shared processors
If you select dedicated, give min, desired, and max processors
If you select shared, give min, desired, and max processing units and click Advanced
Click the radio button (capped/uncapped) and give the virtual processors (min, desired, max)
smit nim_bosinst
Press Enter
Check the fileset bos.alt_disk_install.rte using lslpp; if it is not installed, install it using geninstall -d /dev/cd0 bos.alt_disk_install.rte
VIO updates:
Document your hardware using lsdev -CHc memory, bootinfo -r, lsdev -CHc disk, lsdev -CHc adapter, df -k
Migration Process:
Boot the system from CD; you will get the installation menu.
Ensure root's primary authentication method is system: lsuser -a auth1 root; chuser auth1=system root
Insert the 6.1 DVD and mount it: mount -v cdrfs -o ro /dev/cd0 /mnt
shutdown -F
Select Change/Show Installation Settings and make sure the installation type is migration
Run /usr/lpp/bos/post_migration
Take a snapshot
Rolling Migration:
Take a mksysb
Create an alt_disk_install
Stop HACMP with takeover; check that the resource group has moved to the other node and confirm with clfindres.
Remove and replace SDD (stopsrc -s sddsrv, rmdev -dl dpo -R, uninstall SDD with smitty remove, then install the latest SDD)
Snapshot Migration:
Do failover test
An lpp_source with the attribute simages=no cannot be used to install the base AIX operating system.
lpp_source types: lsnim -t lpp_source
mksysb: this resource is a file containing the image of the root volume group of a machine; it is used to restore a machine.
Defining a mksysb resource: nim -o define -t mksysb -a source= -a server=master -a location= <resource name>
lsnim -t mksysb
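For instance, with the image stored under /export/nim/mksysb (the path and resource names here are hypothetical):
nim -o define -t mksysb -a server=master -a location=/export/nim/mksysb/lpar1.mksysb lpar1_mksysb
lsnim -t mksysb    # confirm the new resource is defined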
bosinst_data: the bosinst_data resource is a flat ASCII file, like the bosinst.data used for restoring system backup images from tape or CD/DVD. This resource enables push/pull installation of multiple machines at the same time.
script: contains all the commands to perform customization, such as file system resizing or additional user creation.
To start a nim environment
1. Select a machine to be the master
2. Install AIX for the master
3. Install the NIM filesets: bos.sysmgt.nim.master, bos.sysmgt.nim.spot
4. Configure the selected machine as a NIM master using smitty nimconfig
8. Create file systems for NIM; the lpp_source and SPOT resources are directories.
Copy software from CD or DVD into the /export/53 file system: smitty bffcreate (give the input device, the software package to copy, and the directory for storing the software package)
Making the SPOT: smitty nim -> Configure the NIM Environment -> Advanced Configuration -> Create Basic Installation Resources -> Create a New SPOT (give the resource server, input device, SPOT name, and SPOT directory).
NIM Configuration:
Define a client machine: smitty nim -> Perform NIM Administration Tasks -> Manage Machines -> Define a Machine (NIM machine name, machine type (standalone), hardware platform type (chrp), kernel to use for network boot (mp), cable type (tp))
Display NIM network objects: lsnim -l -c networks
The Basic NIM installation resources:
1) one nim lpp_source and one SPOT
2) for mksysb installation mksysb resource and SPOT
Reset and deallocate NIM resources: nim -Fo reset LPAR55; nim -Fo deallocate -a subclass=all LPAR55
Using SMIT to install a standalone client: smitty nim_bosinst -> select a target for the operation -> select the installation type -> select the lpp_source -> select the SPOT
After initial program load: SMS Menu -> Setup Remote IPL -> Interpartition Logical LAN -> select IP parameters (client IP, server IP, gateway, subnet mask) -> Ping Test -> Execute Ping Test -> Select Boot Options -> Select Install/Boot Device (Network) -> select normal boot mode
bos.sysmgt.nim.master
bos.sysmgt.nim.client
bos.sysmgt.nim.spot
Put in volume 1 of your media: installp -acgXd /dev/cd0 bos.sysmgt.nim, or use smit install_all
Initial Setup: smit nim_config_env
Initializing the NIM master: nimconfig -a pif_name=en0 -a master_port=1058 -a netname=master_net -a cable_type=bnc
Or: smitty nimconfig
lsnim -l master: you will see information about the NIM master
lsnim -l | more: the boot resource created a /tftpboot directory to hold all of your boot images.
All NIM clients that are on the same subnet as this master will be assigned to master_net
network.
Set up the first lpp_source resource: create a file system called /export/nim/lpp_source.
nim -o define -t lpp_source -a location=/export/nim/lpp_source/53_05 -a server=master -a comments="5300-05 lpp_source" -a multi_volume=yes -a source=/dev/cd0 -a packages=all 5305_lpp
Or
Prepare the NIM master to supply RTE install resources to the client
Defining the client: smit nim_mkmac (give the hostname) -> Enter -> give machine type (standalone), hardware platform type (chrp), communication protocol needed by client (nimsh), cable type (N/A).
Client on a new network: smit nim_mkmac -> give the hostname and press Enter. Type of network attached to the network install interface: ent (Ethernet network) -> Enter. Give the NIM network (network2), subnet mask (255.255.255.0), and the default gateway used by the machine and the master.
Setting up the master to install: smit nim_bosinst -> select the target machine -> select the installation type (rte) -> select the lpp_source -> select the SPOT -> install the base OS on the standalone client
Checking the NIM master: lsnim -l client; tail /etc/bootptab (the bf field in /etc/bootptab specifies the boot file that will be transferred to the client using TFTP after the client contacts the master using BOOTP); ls -l /tftpboot (the boot images are actually symbolic links); showmount -e (shows exported file systems)
Typical Install Sequence:
NIM server responds with information about the boot file (bootptab file)
Accessing SMS: open the HMC -> select the LPAR -> Activate -> select profile (default) -> click Open a Terminal Window -> Advanced -> select boot mode SMS -> OK -> Select Remote IPL -> select the adapter -> select internet protocol version (IPv4) -> select network service (BOOTP) -> set the IP parameters (client IP, server IP, gateway, subnet mask) -> set the boot list (Select Install/Boot Device -> select Network -> select network service (BOOTP) -> select the normal boot mode) -> exit SMS ('Are you sure you want to exit SMS?' -> Yes).
Monitoring progress on the master: lsnim -l client (info = prompting_for_data_at_console)
Installation: Main BOS installation menu ( select install now with default settings)
To view the bosinst log: nim -o showlog -a log_type=bosinst client
Listing valid operations for an object type: lsnim -POt master
Listing valid operations for an object: lsnim -O client
Rebuilding the /etc/niminfo file: nimconfig -r
niminit -a name=client -a master=master
Backing up the NIM database: smitty nim_backup_db (default value is
/etc/objrepos/nimdb.backup)
Restore the previously created backup: smitty nim_restore_db
NIM Log files:
/var/adm/ras/nimlog
/var/adm/ras/nim.installp
/var/adm/ras/nimsh.log
/var/adm/ras/bosinstlog
/var/adm/ras/nim.setup
High availability ( Alternate NIM master)
/etc/niminfo: Lists the active NIM master, and a list of valid alternate masters.
Configure alternate NIM master: smit niminit_altmstr
Synchronizing NIM database: smit nim_altmstr ( select synchronize alternate masters NIM DB)
NPIV requires POWER6, an NPIV-enabled SAN switch, an 8 Gb fibre channel adapter, a Cisco MDS 9000 SAN, HMC V7.3.4, VIOS 2.1 with fix pack 20.1, and SDD 1.7.2.0
To enable NPIV, assign the physical NPIV capable FC adapter to a VIO server and connect
virtual FC adapters on the client partition.
It requires
Destination client partition that is currently running with rootvg on virtual SCSI disk.
On SAN switch:
On the HMC:
Create a virtual fibre channel client adapter in the client partition profile:
Actions -> Create -> FC Adapter -> enter the virtual slot number, then select the VIO server partition and the server adapter ID.
On VIO server
Get the list of all available virtual FC server adapters: lsdev -dev vfchost*
lsnports: checks the NPIV readiness of the FC adapter and the SAN switch.
Use the vfcmap command to map the virtual adapter to the physical adapter: vfcmap -vadapter vfchost0 -fcp fcs1
Do the zoning.
zoneshow
Install AIX
lsdev -dev fcs4 -child; lsdev -dev fcs5 -child; rmdev -dev vfchost0 -ucfg; rmdev -dev fcs4 -recursive -ucfg
AIX Troubleshooting
Troubleshooting AIX and HACMP
Core dump:
Boot Process:
Failure to locate a boot image: the boot image on the disk may be corrupted. Access rootvg from bootable media (select Start Maintenance Mode for System Recovery -> Access a Root VG -> 0 to continue) -> run the bosboot command.
Corrupted FS / corrupted JFS log device / failing fsck / bad disk: boot from CD-ROM or mksysb tape -> select Start Maintenance Mode for System Recovery -> access rootvg -> format the default JFS log using /usr/sbin/logform /dev/hd8 -> run fsck -y /dev/hd1, hd2, hd3, hd4, hd9var (if fsck finds any errors, repair the FS using fsck -p /dev/hd#) -> lslv -m hd5 (to find the boot disk) -> recreate the boot image using bosboot -ad /dev/hdisk#, bootlist -m normal hdisk# -> shutdown -Fr
Remove much of the system configuration and save it to a backup directory: mount /dev/hd4 /mnt; mount /dev/hd2 /usr; mkdir /mnt/etc/objrepos/bak; cp /mnt/etc/objrepos/Cu* /mnt/etc/objrepos/bak; unmount all file systems; exit
Save the clean ODM database: savebase -d /dev/hdisk#
Check the file system sizes using: df /dev/hd3; df /dev/hd4
Check whether the /etc/inittab file is missing
Check all permissions: ls -al / .profile /etc/environment /etc/profile
Check: ls -al /bin /bin/bsh /bin/sh /lib /u /unix
Check with ls -l whether /etc/fsck and /sbin/rc.boot are missing
No login prompt: ps ax | grep console -> check whether a getty process is running; lscons
System dump:
Create a dump device: estimate the size with sysdumpdev -e; mklv -y hd7 -t sysdump rootvg 7
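After creating the LV, you would point the dump device at it and verify, e.g.:
sysdumpdev -P -p /dev/hd7   # -p sets the primary dump device, -P makes the change permanent
sysdumpdev -l               # list the current dump device settings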
TCP/IP troubleshooting:
traceroute shows each gateway that the packet traverses on its way to the target host. traceroute uses the UDP protocol, while ping uses ICMP. If you receive an answer from the local gateway, the problem is with the remote host; if you receive nothing, the problem is in the local network.
NFS troubleshooting:
Verify the inetd, portmap, and biod daemons are running on the client
Verify the mountd, portmap, and nfsd daemons are running on the NFS server, using rpcinfo -u server mount, portmap, nfs
Identifying the cause of slow access times for NFS: stopsrc -s biod; startsrc -s biod
NFS error messages: mountd will not start, server not responding: port mapper failure
RPC timed out, mount: access denied, mount: you are not allowed
LVM Troubleshooting:
VG lost:
1. NON rootvg
exportvg data_vg
2. Rootvg
User action by LED code:
553: Run fsck against all rootvg filesystems. If fsck indicates errors (not an AIX V4 filesystem), repair the superblock; each filesystem has two superblocks, one in logical block 1 and a copy in logical block 31, so copy block 31 to block 1:
# dd count=1 bs=4k skip=31 seek=1 if=/dev/hd4 of=/dev/hd4
551, 523, 534: ODM files are missing or inaccessible. Restore the missing files from a system backup.
518:
bandwidth and to subdivide single large disks into smaller segments. The adapters and disks can be shared across multiple partitions, increasing utilization.
VIO server : allows you to share physical resources among a group of partitions. The VIO server can use both virtualized storage and network adapters, making use of VSCSI and virtual Ethernet.
Redundant VIO servers: AIX or Linux partitions can be clients of one or more VIO servers at the same time. A good strategy to improve availability for sets of client partitions is to connect them to 2 VIO servers. The reason for redundancy is the ability to upgrade to the latest technologies without affecting production workloads.
Micro-Partitioning: sharing the processing capacity of physical processors among one or more logical partitions. The benefit of Micro-Partitioning is that it allows significantly increased overall utilization of processor resources. A micro-partition must have at least 0.1 processing units. The maximum number of partitions on any System p server is 254.
Uncapped Mode : The processing capacity can exceed the entitled capacity when resources are
available in the shared processor pool and the micro partition is eligible to run.
Capped Mode : The processing capacity can never exceed the entitled capacity.
Virtual Processors :A virtual processor is a representation of a physical processor that is
presented to the operating system running in a micro partition.
If a micro-partition has 1.60 processing units and 2 virtual processors, each virtual processor will have 0.80 processing units.
Dedicated processors : Dedicated processors are whole processors that are assigned to dedicated
LPARs . The minimum processor allocation for an LPAR is one.
IVM (Integrated Virtualization Manager): IVM is a hardware management solution that performs a subset of the HMC features for a single server, avoiding the need for a dedicated HMC server.
Live Partition Mobility: allows you to move running AIX or Linux partitions from one physical POWER6 server to another without disruption.
VIO
The VIO version is 1.5
The VIO command line interface is IOSCLI
The command to access the OEM setup environment on the VIO server is oem_setup_env
The command for configuration through SMIT is cfgassist
The initial login to the VIO server is padmin
Help for VIO commands, e.g.: help errlog
Hardware requirements for creating VIO :
1. Power 5 or 6
2. HMC
3. At least one storage adapter
4. If you want to share Physical disk then one big Physical disk
5. Ethernet adapter
The ratio between minimum and maximum amount of memory cannot be more than 1/64
I/O: select the physical I/O adapters for the partition. Required means the partition will not be able to start unless these adapters are available to it; desired means the partition can start without them. A required adapter cannot be moved in a dynamic LPAR operation.
VIO server partition requires a fiber channel adapter to attach SAN disks for the client partitions.
It also requires an Ethernet adapter for shared Ethernet adapter bridging to external networks.
VIO requires minimum of 30GB of disk space.
Create virtual Ethernet and SCSI adapters: increase the maximum number of virtual adapters to 100 (the maximum must not be set to more than 1024).
In Actions -> select Create -> Ethernet Adapter, give the adapter ID and VLAN ID.
Select the Access External Network check box to use this adapter as a gateway between the internal and external networks.
Also create the SCSI adapter in the same way.
VIO server S/W installation :
1. Place the CD/DVD in the p5 box
2. Activate the VIO server by clicking Activate; select the default partition
3. Check the Open Terminal Window or Console Session box and click Advanced, then OK
4. Under the boot mode drop-down list, select SMS
After installation is complete, log in with padmin and press 'a' (for the software maintenance agreement terms); run license -accept to accept the license.
Creating a shared Ethernet adapter
1. lsdev -virtual (check the virtual Ethernet adapter)
2. lsdev -type adapter (check the physical Ethernet adapter)
3. Use the lsmap -all -net command to check the slot numbers of the virtual Ethernet adapter
4. mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
5. lsmap -all -net
6. Use cfgassist or the mktcpip command to configure TCP/IP, or:
7. mktcpip -hostname vio_server1 -inetaddr 9.3.5.196 -interface ent3 -netmask
2. lsdev -virtual (make sure virtual SCSI server adapters are available, e.g. vhost0)
3. lsmap -all to check the slot numbers and vhost adapter numbers
4. mkvg -f -vg rootvg_clients hdisk2 (creating the rootvg_clients VG)
5. mklv -lv dbsrv_rvg rootvg_clients 10G
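The mapping step that logically follows, mirroring the mkvdev pattern used later for nimsrv_rvg (the target device name vdbsrv_rvg is an assumption):
mkvdev -vdev dbsrv_rvg -vadapter vhost0 -dev vdbsrv_rvg
lsmap -all    # confirm the LV is now exported as a virtual target device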
The fget_config -Av command, provided on the IBM DS4000 series, gives a listing of LUN names.
Virtual SCSI Optical devices:
A dvd or cd device can be virtualized and assigned to client partitions. Only one VIO client can
access the device at a time.
Steps :
1. Assign the DVD drive to the VIO server
2. Create a server SCSI adapter using the HMC.
3. Run the cfgdev command to get the new vhost adapter; check using lsdev -virtual
4. Create the virtual device for the DVD drive: mkvdev -vdev cd0 -vadapter vhost3 -dev vcd
5. Create a client scsi adapter in each lpar using the HMC.
6. Run the cfgmgr
Using the dsh command, find which LPAR is currently holding the drive.
Unconfiguring the DVD drive :
1. rmdev -dev vcd -ucfg
2. lsdev -slots
3. rmdev -dev pci5 -recursive -ucfg
4. cfgdev
5. lsdev -virtual
3. lspv
4. mirrorios -f hdisk2
5. lsvg -lv rootvg
6. bootlist -mode normal -ls
Creating Partitions :
1. Create new partition using HMC with AIX/linux
2. give partition ID and Partition name
3. Give proper memory settings(min/max/desired)
4. Skip the physical IO
5. give proper processing units (min/desired/max)
6. Create virtual ethernet adapter ( give adapter ID and VLAN id)
7. Create virtual SCSI adapter
8. In optional settings
Advanced Virtualization:
Providing continuous availability of VIO servers : use multiple VIO servers for providing highly
available virtual scsi and shared Ethernet services.
IVM supports a single VIO server.
Virtual scsi redundancy can be achieved by using MPIO and LVM mirroring at client partition
and VIO server level.
Continuous availability for VIO
LVM Mirroring
Use the fget_config command (fget_config -Av) to get the LUN-to-hdisk mappings.
Use the lsdev -dev hdisk# -vpd command to retrieve the information.
The reserve_policy for each disk must be set to no_reserve (chdev -dev hdisk2 -attr reserve_policy=no_reserve).
Map the hdisks to vhost adapters (mkvdev -vdev hdisk2 -vadapter vhost0 -dev app_server).
Testing MPIO:
Run lspath
Enable the VSCSI client adapter path timeout: chdev -l vscsi0 -a vscsi_path_to=30 -P
lspath
Shut down VIO2
lspath
LVM Mirroring: This is for setting up highly available virtual scsi configuration. The client
partitions are configured with 2 virtual scsi adapters. Each of these virtual scsi adapters is
connected to a different VIO server and provides one disk to the client partition.
Configuring LVM Mirroring:
Create 2 virtual IO partitions, select one Ethernet adapter and one storage adapter
Create client partitions. Each client partition needs to be configured with 2 virtual scsi
adapters.
Create the volume group and logical volumes on VIO1 and VIO2
A logical volume from the rootvg_clients VG should be mapped to each of the 4 vhost devices: mkvdev -vdev nimsrv_rvg -vadapter vhost0 -dev vnimsrv_rvg
lsmap -all
When you bring up the client partitions you should have hdisk0 and hdisk1; mirror the rootvg (a command sketch follows below).
lspv
lsdev -Cc disk
lsvg -l rootvg
Shut down VIO2
lspv hdisk1
lsvg -l rootvg
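A minimal sketch of the rootvg mirroring step on the client, assuming hdisk1 is the disk served by the second VIO server:
extendvg rootvg hdisk1
mirrorvg rootvg hdisk1            # create a second copy of every rootvg LV
bosboot -ad /dev/hdisk0           # rebuild the boot image on both disks
bosboot -ad /dev/hdisk1
bootlist -m normal hdisk0 hdisk1  # allow booting from either disk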
Shared Ethernet adapter: it can be used to connect a physical network to a virtual Ethernet network, allowing several client partitions to share one physical adapter.
Shared Ethernet redundancy: this protects against temporary failure of communication with external networks. Approaches to achieve continuous availability:
Shared Ethernet adapter failover: It offers Ethernet redundancy. In a SEA failover configuration
2 VIO servers have the bridging functionality of the SEA. They use a control channel to
determine which of them is supplying the Ethernet service to the client. The client partition gets
one virtual Ethernet adapter bridged by 2 VIO servers.
Requirements for configuring SEA failover:
One SEA on one VIOS acts as the primary adapter and the second SEA on the second VIOS acts as a backup adapter.
Each SEA must have at least one virtual Ethernet adapter with the 'access external network' flag (trunk flag) checked. This enables the SEA to provide bridging functionality between the 2 VIO servers.
The priority value defines which of the 2 SEAs will be the primary and which the secondary; an adapter with priority 1 has the highest priority.
Enter the slot number for the virtual Ethernet adapter into the adapter ID.
Enter the port virtual LAN ID (PVID). The PVID allows the virtual Ethernet adapter to communicate with other virtual Ethernet adapters that have the same PVID.
Click OK.
Create another virtual adapter to be used as a control channel on VIOS1 (give another VLAN ID and do not check the 'access external network' box) (ent3)
Create the SEA on VIO1 with the failover attribute: mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 -attr ha_mode=auto ctl_chan=ent3 (e.g. ent4)
Create a VLAN Ethernet adapter on the SEA to communicate with the external VLAN-tagged network: mkvdev -vlan ent4 -tagid 222 (e.g. ent5)
Network interface backup : NIB can be used to provide redundant access to external networks when 2 VIO servers are used.
Configuring NIB:
Configure each VIO server with one virtual Ethernet adapter; each VIO server needs to be on a different VLAN.
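On the AIX client, NIB is an EtherChannel with one primary and one backup adapter; a hedged example (the adapter names and ping address are assumptions):
mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0 -a backup_adapter=ent1 -a netaddr=9.3.5.41
# netaddr is the address pinged for failure detection; then configure TCP/IP on the new entX interface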
The upper limit on the number of WPARs that can be executed within an LPAR is 8192.
WPAR administration:
chwpar -n wpar001
Broken state: the state a WPAR enters after an undefined failure.
Investigation:
Check the logs (/var/adm/ras, /var/adm/WPARs)
Check the processes: ps -@ (shows processes by WPAR)
Removing a WPAR: verify the WPAR is in the Defined state, take a backup, then rmwpar wpar001.
WPAR states:
Defined (D): the WPAR is created but not started
Active (A): the normal running state
Broken (B): a failure has occurred
Transitional (T): the WPAR is in the process of changing from one state to another
Paused (P): reached when a WPAR has had a successful checkpoint or has restored data
Mobile (checkpointable) partitions can be created with the -c flag.
Creating an application WPAR: execute the application within the WPAR using wparexec, e.g.: wparexec /hga/myapp
The wparexec command starts myapp immediately after creation. This type of WPAR only exists while the application is running; when the application ends, the WPAR also ends and all of its resources are freed.
If the application WPAR has a dependency on a file system that is not mounted, it will mount the file system automatically.
lswpar (transitional state):
Name    State   Type   Hostname   Directory
myapp   T       A      myapp      /
lswpar (active state):
Name    State   Type   Hostname   Directory
myapp   A       A      myapp      /
lswpar (the WPAR disappears when the application exits)
File Systems:
Types of file systems: namefs, jfs, jfs2, NFS.
By default the system creates /, /tmp, /home, and /var as jfs2, and /opt, /proc, and /usr as namefs.
Creating a file system for a running WPAR: crfs -v jfs2 -m /wpars/wpar001/newfs -u wpr00 -a logname=INLINE -a size=1G
Changing a file system: chfs -a size=512M /wpars/wpar001/newfs
Backing up the global environment: stop all WPARs, then run the mksysb, mkdvd, or mkcd command with the -N flag.
IBM workload partition manager for AIX is a tool for monitoring and managing WPARs.
AIX 6.1
Workload partition manager (an extra software package that needs to be installed)
Live application mobility (move a partition from one system to another)
Multi-system workload balancing (load balancing of CPUs, memory, and I/O)
Create it and it goes to the Defined state; run it to activate it; we can stop it, and if it is not required, remove it.
Does not share any file systems with other WPARs or the global system.
Application wpar:
Wpar manager:
Install the WPAR agent and it will talk to all the WPARs on a machine
Manual relocation
Automated relocation
chkptwpar -k: freezes the WPAR, saves the WPAR processes' state to a statefile on NFS, and kills the WPAR processes once they are no longer needed.
restartwpar: takes the statefile, rebuilds the WPAR processes and state, and starts the WPAR.
Reduces system admin time: many applications on one instance; reduces install and update of AIX, monitoring, backup, recovery, etc.
Reduces costs: only one copy of AIX plus shared access to AIX disks.
Access the WPAR standard console over a secured link: https://hostname:14443/ibm/console (logon).
Managed systems (an entire physical server or an LPAR) and workload partitions are under the Resource Views tab.
WPAR Active state means it is running; Defined means it is not running; a green tick means mobility is enabled; Transitional means an operation is in progress.
A WPAR cannot move between different machine types, e.g. POWER4 to POWER5. First stop the WPAR and remove it with the 'preserve local file systems on server' option; the WPAR is then in the Undeployed state. Then click the WPAR -> Actions -> Deploy -> enter the target system -> click 'start the WPAR immediately upon deployment', 'preserve file systems' -> OK.
WPAR properties:
startwpar mywpar
stopwpar -hN mywpar
lswpar -L mywpar
mkwpar -n first
-c for checkpoint
startwpar wp13
clogin wp13
Application Mobility:
Source AIX: /opt/mcr/bin/chkptwpar wp13 -d /wpars/wp13/tmp/state -o /wpars/wp13/tmp/checkpoint.log -k
rmwpar -p wp13
Target AIX: /opt/mcr/bin/restartwpar wp13 -d /wpars/wp13/tmp/state -o /wpars/wp13/tmp/restart.log
Running an application WPAR:
wparexec -n temp -h hostname /usr/bin/sleep 30
Process: starting WPAR, mounting, loading, stopping
Comparing WPAR & global AIX:
WPAR: df -n (/, /home, /tmp, /var are NFS mounts; /opt, /usr are read-only)
ps -ef -@ wp13 | pg
topas in a WPAR shows some results for the WPAR and some for global AIX: yellow values are global AIX and white are the WPAR.
labserver101 /etc/hosts :
16.138.184.171 labserver101_svc
10.1.1.1 labserver101_boot
11.1.1.1 labserver101_stdby
16.138.184.181 labserver201
10.1.1.2 labserver201_boot
11.1.1.2 labserver201_stdby
labserver201 /etc/hosts :
16.138.184.177 labserver101
16.138.184.171 labserver101_svc
10.1.1.1 labserver101_boot
11.1.1.1 labserver101_stdby
16.138.184.181 labserver201
10.1.1.2 labserver201_boot
11.1.1.2 labserver201_stdby
From labserver101 -> smitty hacmp -> Extended Configuration -> define the cluster name
RG Name : saprg
Participating nodes : labserver101 labserver201
labserver201_stdby 11.1.1.2
Resource Group saprg
Startup Policy Online On Home Node Only
Fallover Policy Fallover To Next Priority Node In The List
Fallback Policy Fallback To Higher Priority Node In The List
Participating Nodes labserver101 labserver201
Service IP Label labserver101_svc
AIX Troubleshooting
1. How to force a failover of an EtherChannel ?
# /usr/lib/methods/ethchan_config -f Etherchannel_Device
2. How to add a backup adapter to an existing etherchannel device ?
# /usr/lib/methods/ethchan_config -a -b Etherchannel_Device Ethernet_Adapter
3. How to change the address to ping attribute of an EtherChannel ?
# /usr/lib/methods/ethchan_config -c Etherchannel_Device netaddr New_Ping_IP_Addr
4. How to list the available major numbers in a system ?
# lvlstmajor
5. How to list the major number of a volume group ?
# lvgenmajor rootvg
6. Consider a situation where you have a VG on a PV but you have not imported it yet, and you need to find the attributes of that volume group before importing/varying it on. Answer the questions below:
a. How to list the maximum number of logical volumes allowed in the VG ?
# lqueryvg -p PVname -N
b. How to show the PP size ?
# lqueryvg -p PVname -s
c. How to show the number of free PPs in the VG ?
# lqueryvg -p PVname -F
d. How to show the current number of LVs in the VG ?
# lqueryvg -p PVname -n
e. How to list the current number of PVs in the VG ?
# lqueryvg -p PVname -c
f. How to list the total number of VGDAs for the VG ?
# lqueryvg -p PVname -D
g. How to list each LVID, LV name, state for each logical volume ?
# lqueryvg -p PVname -l
h. How to list each PVID, number of VGDAs and state for each PV in the VG ?
# lqueryvg -p PVname -P
i. How to list all the attributes with tags for the VG ?
# lqueryvg -p PVname -At
21. How to retrieve the PVID/PV name for a PV name or PVID from the ODM ?
# getlvodm -p PVDescriptor
22. How to retrieve the PV names, PVIDs and VGs of all configured PVs from the ODM ?
# getlvodm -P
23. How to retrieve the relocatable flag for an LVID from the ODM ?
# getlvodm -r LVID
24. How to retrieve the VG state for a VG from the ODM ?
# getlvodm -s VGDescriptor
25. How to retrieve the timestamp for a VG from the ODM ?
# getlvodm -T VGDescriptor
26. How to retrieve the VG name for a VGID from the ODM ?
# getlvodm -t VGID
27. How to retrieve the auto-on value for a VG name or VGID from the ODM ?
# getlvodm -v VGDescriptor
28. How to retrieve the VGID for a VG name ?
# getlvodm -v VGDescriptor
29. How to retrieve the PV names and PVIDs for a VG from the ODM ?
# getlvodm -w VGDescriptor
30. How to retrieve the LV type for an LVID from the ODM ?
# getlvodm -y LVID
31. How to retrieve the concurrent-capable flag for a VG from the ODM ?
# getlvodm -X VGDescriptor
32. How to retrieve the auto-on concurrent flag for a VG from the ODM ?
# getlvodm -x VGDescriptor
33. How to display the contents of LVCB ?
# getlvcb -A LVName
34. How to list the number of copies of a LV from LVCB ?
# getlvcb -c LVName
35. How to list the file system name of a LV from LVCB ?
# getlvcb -f LVName
36. How to list the label of a LV from LVCB ?
# getlvcb -L LVName
37. How to display the type of the file system from LVCB ?
# getlvcb -t LVName
38. How to display the upper limit from LVCB ?
# getlvcb -u LVName
39. How to list the current defrag state of a file system ?
# defrag -q Filesystem
40. How to list the current and future (if defragmented) state of a file system ?
# defrag -r Filesystem
41. How to defragment a file system ?
# defrag Filesystem
42. How to run fsck on 2 filesystems simultaneously on different drives ?
# dfsck FileSystem1 FileSystem2
43. How to list the superblock, i-name map, disk map information for a file system ?
# dumpfs Filesystem
44. Where is the magic file located ?
/etc/magic
45. How do you remove a file system data from /etc/filesystems ?
# imfs -x -l LVName
46. How do you list inode, last update/modify/access timestamp of a file ?
# istat FileName
47. How do you update the i-node table and write buffered files to the hard disk ?
# sync
48. How do you list the filesystems in a volume group ?
# lsvgfs VGName
49. How do you redefine the set of PVs of a VG in the ODM ?
# redefinevg -d PVName VGName
50. How do you replace a PV in a VG ?
# replacepv SourcePV DestinationPV
1 Introduction
The intent of this document is to provide detailed steps for AIX install, backup, restore, and migration using the Network Installation Management (NIM) tool for LPARs in the NHSS environment. It includes guidance on setting up a NIM environment.
Assumptions used in this document:
There is already a backup strategy in place and mksysbs are being saved on the NIM
Master in the /export/nim/mksysb NFS directory.
The P51A server has been designated as the NIM Master.
o Plan the NIM Master network config (what network is being used)
o Plan the NIM Master and Client NIM names
o Plan the NIM Master directory structure
o Plan the NIM Client network config.
o Plan the NIM Master Resource names (lppsource, spot)
o Plan the NIM Master Mksysb resource.
Implement
o smitty nim_mkmac
Allocate
o smitty nim_bosinst
Boot
o Boot the client into SMS and select boot from Ethernet (add the IPs as required).
Client
r = remove files
m = move files
l = list files
V = verbose
For any installation procedure, you need a software source to install from, such as the AIX 5.2 product CDs (in the NHSS environment). The AIX 5.2 product CDs contain boot images used to boot the system from the CD-ROM, installation images, and the installation commands used to install the installation images.
In the NIM environment, the software source is separated into two NIM resources, the
LPP_Source and the SPOT. The LPP_Source is a directory on your NIM server. When the
LPP_Source is created, installation images are copied from the product CDs to the LPP_Source
directory. The product CDs also contains boot images that enable the system to boot from the
CD-ROM and installation commands that are used to install the installation images. This
equivalent NIM resource is called the SPOT (Shared Product Object Tree). The SPOT is a
directory that contains the installation commands that are used to install the installation images
from the LPP_Source onto a system. The SPOT is also used to build the necessary boot
images to boot a client system. Separate boot images exist for each type of adapter (Ethernet,
token ring, and so on). The illustration above shows that when an LPP_Source resource is
created, installation images are copied from the product CDs to the LPP_Source directory and a
SPOT resource contains the installation commands used to install the installation images from
the LPP_Source resource onto a system.
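For reference, these two resources can also be defined from the command line; a minimal
sketch, assuming the resource names and paths used later in this document and the product
CD in /dev/cd0:
nim -o define -t lpp_source -a server=master -a location=/export/nim/lpp_source/lppsource_52 -a source=/dev/cd0 lppsource_52
nim -o define -t spot -a server=master -a location=/export/nim/spot -a source=lppsource_52 spot_52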
When you use the nim_master_setup script to install a NIM master on a system, it creates an
LPP_Source and a SPOT resource for you and defines them in the NIM environment. The
nim_master_setup script also copies the AIX update images from your update CD to the
LPP_Source and then installs the update images into the SPOT resource. In addition to the
LPP_Source and SPOT resources, several NIM resources can help customize the BOS
installation process and basic network configuration. The following table shows all the NIM
resources that are created by the nim_master_setup script:
Table 5. NIM resources created by nim_master_setup script
NIM Resource      Name Given
spot*             Spot_52
lpp_source        lppsource_52
mksysb            52mksysb
bosinst_data      bosinst_ow
resolv_conf       resolv_res
res_group         basic_res_grp
A nim_clients_setup script is also available to define NIM clients and initiate a
BOS installation on the NIM clients.
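For reference, the master setup script is typically invoked as below (a sketch, assuming the
product media is in /dev/cd0), followed by nim_clients_setup once the client definitions are
ready:
nim_master_setup -a device=/dev/cd0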
NIM Server is on AIX 5.3 ML05 and all LPARs are on AIX 5.2 ML09
After the above command has completed, verify that the following directories were
created:
/export/nim
/export/nim/spot/spot_52
/export/nim/lpp_source/lppsource_52
/tftpboot
o Manually create the mksysb backup directory and its filesystem using commands like
the following (the LV name and size are illustrative):
mkdir /export/nim/mksysb
mklv -y mksysblv -t jfs2 rootvg 64
crfs -v jfs2 -d mksysblv -m /export/nim/mksysb -A yes
mount /export/nim/mksysb
Build the LPP SOURCE
Note: the lppsource and SPOT were created during the NIM easy install
process. However, we can manually create the lppsource as below.
Copy the filesets from the AIX 5.2 CDs to /media/AIX; this is used as a media server.
o Insert AIX 5.2 CD 1 of 8 into the CD-ROM drive.
o Enter the following command at the command line:
smitty bffcreate
o The Copy Software to Hard Disk for Future Installation screen appears. Do the following:
o Enter /dev/cd0 in the INPUT device / directory for software field, then press the Enter
key.
o In the DIRECTORY for storing software package field, enter /media/AIX/AIX_52.
o Update the TOC:
inutoc /media/AIX/AIX_52
The Define a Network screen appears. Enter the appropriate information as
follows.
Define a Network
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Network Name [master]
Define a Resource
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Resource Name [lppsource_52]
* Resource Type lpp_source
* Server of Resource [master]
* Location of Resource [/export/software+]
Source of Install Images []
Names of Option Packages []
Comments [This is the lates+]
Define a Resource
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* Resource Name [spot52]
* Resource Type spot
* Server of Resource [master]
* Source of Install Images []
* Location of Resource [/export/NIM/spot]
Expand file systems if space needed? yes
Comments [This is the spot for+]
installp Flags
COMMIT software updates? no
SAVE replaced files? yes
AUTOMATICALLY install requisite software? yes
OVERWRITE same or newer versions? no
Define a Resource
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[TOP] [Entry Fields]
* Resource Name [52mksysb]
* Resource Type mksysb
* Server of Resource [master]
Define a Machine
Type or select values in entry fields.
Press Enter AFTER making all desired changes.
[Entry Fields]
* NIM Machine Name [catom-mddbpca01]
* Machine Type [standalone]
* Hardware Platform Type [chrp]
Complete the next screen by filling in the following fields, and pressing Enter
twice to confirm when complete.
After a few moments, the terminal screen will open, and the system will boot to the SMS
prompt.
After a 60 second wait, a success message should appear. If not, review and change the
network adapter configuration until the ping is successful.
From the next screen, choose 7. List all Devices, and press Enter.
o After the buses are scanned, a list of boot devices will be presented. Select 1. Ethernet
and press Enter.
o Choose 2. Normal Mode Boot, and press Enter.
o Finally, choose 1. Yes to exit the SMS menu and initiate the boot process.
After the STARTING SOFTWARE PLEASE WAIT message, the new client will start booting
and install the new image automatically. Once the Welcome to AIX message appears, the
client has successfully started the boot process. Approximately 30 minutes later, the new client
will be installed and ready for login.
Now you can follow the procedure in 5.4.2 to install the NIM client.
When the backup completes, the NFS mount will be unmounted, leaving the completed
mksysb file on the NIM master.
In the event a server has to be recovered, such as after a catastrophic hardware or software
problem, this mksysb can be pushed down to the server to restore server operation.
The /export/NIM directory holds the required NIM master resources, such as the mksysb,
the SPOT, and the LPPSOURCE.
The /etc/hosts file on all servers must contain all server hostnames and IP addresses. It
must be accurate.
The network must be fully functional.
The /export/nim/mksysb directory is NFS-exported Read/Write from the NIM master to
all clients. As part of this, the /etc/hosts file must be accurate.
The client must have an mksysb mount point directory created, as sketched below.
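A minimal sketch of the client-side backup steps (the mount point and file name are
illustrative, and nimmaster stands for the NIM master's hostname):
mkdir -p /mnt/mksysb
mount nimmaster:/export/nim/mksysb /mnt/mksysb
mksysb -i /mnt/mksysb/$(hostname).mksysb
umount /mnt/mksysb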
Remove the second disk from the mirror: # unmirrorvg rootvg hdisk1
Check if there are any LVs left on the second disk: # lspv -l hdisk1
If any exist, migrate the remaining LVs to the first disk: # migratepv hdisk1 hdisk0
Remove the second disk from the root VG; to do this, clear the boot record of the
second PV: # chpv -c hdisk1
Set the boot list so that the disk with the current rootvg is first, and the disk with
the alt_disk_install is second: # bootlist -m normal hdisk0 hdisk1
After the above completes successfully, test the alternate disk installation (using the
wakeup key): # alt_disk_install -W hdisk1
Once all is fine, put the alternate disk back to sleep: # alt_disk_install -S
Run a preview using smitty update_all, check the required /usr file system space,
and increase it if needed.
Add a new boot image to the first PV to have a fresh boot record: # bosboot -ad
/dev/hdisk0
Stop the cluster services using smitty cl_stop with option both. [Note: check the cluster
log in /tmp/hacmp.out, and log in to the servers with the boot IP.]
Before starting the installation, commit all the software installed on the server, to make
roll-back easy in case of any problems during the installation: # smit commit or # installp -c
all
Update all the software from the TL and Service Pack: # smit update_all
Set apply only (do not commit), and keep the copies of replaced files.
Start the cluster using smitty cl_start. Check the logs in /tmp/hacmp.out.
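The same apply-only update can also be run without smitty; a minimal sketch, assuming the
TL/SP filesets are on /dev/cd0 (adjust -d to your actual update source):
installp -agqX -d /dev/cd0 all
(-a applies without committing, -g pulls in requisites, -q runs quietly, -X expands
filesystems as needed; replaced files are saved, so the update can later be rejected or
committed.)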
Roll Back Plan
Ensure the root user's primary authentication method is the system method: # lsuser -a auth1
root; chuser auth1=SYSTEM root
Insert the 6.1 DVD and mount it: # mount -v cdrfs -o ro /dev/cd0 /mnt
Shut down: # shutdown -F
Select change/show installation settings and make sure the installation type is migration.
Run /usr/lpp/bos/post_migration
TSM Server: The role of the TSM server is to store the backup or archive data from the
backup-archive clients that it supports to storage media. It also has a database of information
to keep track of the data it manages, including policy management objects.
Scheduler: Administrator-defined schedules allow for the automation of Tivoli Storage Manager
server and backup-archive client operations.
Backup-Archive Client: The TSM backup-archive client is a service which sends data to, and
retrieves data from, the TSM server. The TSM backup-archive client must be installed on every
machine that needs to transfer data to server-managed storage, called storage pools.
TSM Database: TSM saves information in the TSM database about each file, raw LV, or database
that it backs up or archives. This information includes the file name, size, and management
class. The data itself is stored in a storage pool.
TSM Recovery Log: The recovery log keeps track of all changes made to the database. If a
system outage were to occur, a record of the changes would be available for recovery.
Storage Pools: Storage pools are collections of like media that provide storage for backed up,
archived and migrated files.
Policy-Based Management: Business policy is used to centrally manage backup-archive client
data. Policies are created by the administrator and stored in the database on the server.
Tape Library: TSM supports a variety of library types, including manual libraries, SCSI
libraries, 349X and 358X libraries.
Backup-Restore functionality:
TSM can perform backups of both files and raw LVs. When backing up files, the TSM server
database keeps a list of all files and their attributes (time, date, size, access control lists).
Backup: Creates a copy of a file to protect against the operational loss or destruction of that
file. Customers control backups by defining the backup frequency and number of versions.
Restore: Places backup copies of files onto a customer-designated system. By default, the most
recent backup version of each requested file is restored.
4 levels of backups:
Byte level (small amounts of data)
Block level (bigger amounts of data)
File level (normal files)
Image level (includes the file system and files)
TSM uses Progressive Backup Methodology also known as Incremental Backups.
Long term storage capabilities through Archive-Retrieve Function:
Archiving is useful when you want to store data that is infrequently accessed but must still be
kept available. TSM has the capability of archiving for 30 years.
Archive: Creates a copy of a file or set of files. This feature enables customers to keep
unlimited archive copies of a file.
Retrieve: Allows users to copy an archive file from the storage pool to the workstation.
Administration Center on the Integrated Solutions Console: the GUI for managing IBM TSM
administrative functions is called the Administration Center.
Automation Capabilities: It includes a central scheduling component that allows the automatic
processing of administrative commands and backup-archive client operations during a specific
time period when the schedule is activated.
Scheduling is split into 2 categories:
Administrative scheduling
Backup-archive client scheduling.
Data storage and Data Management:
Types of storage media on which TSM stores data: storage media can be disk, optical, and
tape, assigned to a storage pool.
Storage pools contain backup files, archived files, and space-managed files. These storage pools
are chained in order to create a storage hierarchy. The disk pool is usually first in the chain,
followed by tape.
Policy-Based Approach: Backup-archive client data is managed by business policy. Policies are
created by the administrator and stored in the database on the server.
Policy Domain: A group of nodes managed by the same set of policy constraints as defined by
the policy sets. A node may only be defined to one policy domain per server.
Policy Set: A collection of management class definitions. A policy domain may contain a number
of policy sets.
Management Class: A collection of management attributes called copy groups. There are 2 sets
of MC attributes: backup and archive.
Copy group: Management attributes describing backup and archive characteristics. There is a
backup copy group and an archive copy group.
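As an illustration of how these policy objects fit together, they might be defined from the
TSM administrative command line roughly as follows (a sketch; all names are illustrative):
define domain proddom
define policyset proddom prodset
define mgmtclass proddom prodset prodmc
define copygroup proddom prodset prodmc type=backup destination=backuppool verexists=3
define copygroup proddom prodset prodmc type=archive destination=archivepool retver=365
assign defmgmtclass proddom prodset prodmc
activate policyset proddom prodset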
TSM Licensing:
3 License types: tsmbasic.lic, tsmee.lic and dataret.lic
Run smitty, then go to
Devices > Communication > EtherChannel / IEEE 802.3ad Link Aggregation
> Add An EtherChannel / Link Aggregation
Here select the network card that you want to use as the active adapter.
Eg: select ent0
IMP: then select Mode as 8023ad.
Then select the backup adapter for redundancy (press F4 to show the network adapters).
Eg: ent1
Press Enter.
Now ent0 and ent1 are bonded, and a virtual adapter named ent2 is created
automatically. Then put the IP address and related settings on this virtual adapter:
#smit
Communications Applications and Services > TCP/IP > Minimum
Configuration & Startup
Here select ent2 (the new bonded virtual adapter),
put in the IP address and related settings,
and give the start now option.
Link aggregation is now complete. Check whether it works by removing the 2nd
cable from the network card and checking ping, then put the 2nd cable back and
remove the 1st cable. 2-3 ping drops normally occur in my experience.
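For reference, the same EtherChannel can also be created directly with mkdev; a minimal
sketch, assuming ent0 as the active and ent1 as the backup adapter:
mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0 -a backup_adapter=ent1 -a mode=8023ad
This creates the same entX pseudo-adapter that smitty would; the IP is then configured on
its enX interface as above.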
server 127.127.1.0
Double check that "broadcastclient" is commented.
#stopsrc -s xntpd
#startsrc -s xntpd
Note : If the server runs databases, use the -x flag to prevent the clock from
changing in a negative direction. Enter the following:
#startsrc -s xntpd -a "-x"
Enter
#lssrc -ls xntpd
to verify that the server is synched. This process can take up to 12 minutes.
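On a client, /etc/ntp.conf would instead point at this server; a minimal sketch (the server
name ntpserver1 is illustrative):
server ntpserver1
driftfile /etc/ntp.drift
Then restart xntpd with stopsrc/startsrc as above and check lssrc -ls xntpd for a sys peer.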
entstat en0               To display Ethernet statistics of en0
entstat -d en0            To display detailed Ethernet statistics of en0
no -a                     To list all network options and their values
no -d thewall             To reset thewall to its default value
no -o ipforwarding=1      To enable IP forwarding
traceroute ibm            To trace the route taken to the host ibm
ping ibm                  To test connectivity to the host ibm
ifconfig -a               To show all network interfaces
ifconfig en0              To show the status of en0
ifconfig en0 up           To bring up (configure) en0
netstat -a                To show the state of all sockets
netstat -c
netstat -D                To show the number of packets dropped in each layer
netstat -i                To show the state of all configured interfaces
netstat -r                To show the routing table
netstat -rn               To show the routing table with numeric addresses instead of names
netstat -s                To show statistics of the protocols
netstat -s -p <protocol>  To show statistics of a specific protocol
********
lsnim                     To list the defined NIM objects
lsnim -l dcmds            To list detailed info about the NIM object dcmds
lsnim -O dcmds            To list the operations the NIM object dcmds supports
nimclient                 The client version of the nim command (the client can obtain the
                          same results as nim on the server)
nimconfig -a pif_name=en0 -a netname=net1
                          To initialize the NIM master with primary interface en0 and
                          network name net1
nimconfig -r              To rebuild the /etc/niminfo file
nim -o define -t mksysb -a server=master -a location=<path> <name>
                          To define a mksysb resource
nim -o bos_inst node1     To initialise NIM for the BOS installation on node1
                          with the allocated resources
nim -o dkls_init dcmds    To initialize the machine dcmds for diskless operation
nim -o dtls_init dcmds    To initialize the machine dcmds for dataless operation
nim -o cust dcmds         To initiate a software customization operation on dcmds
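Putting these together, a typical BOS installation from the master might look like the
following sketch (resource and machine names as defined earlier; adjust to your environment):
nim -o allocate -a spot=spot_52 -a lpp_source=lppsource_52 node1
nim -o bos_inst -a source=rte -a accept_licenses=yes node1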
********************
clstart -m -s -b -i -l      To start cluster daemons (-m clstrmgr, -s clsmuxpd,
                            -b broadcast message, -i clinfo, -l cllockd)
clstop -f -N                To stop cluster services now, forced, without releasing resources
clstop -g -N                To stop cluster services now, gracefully, releasing resources
clstop -gr -N               To stop cluster services now, gracefully with takeover
                            (resources move to the next node)
cldare -t                   To synchronize the cluster topology
cldare -t -f                To force-synchronize the cluster topology
cldare -r                   To synchronize the cluster resources
cldare -r -f                To force-synchronize the cluster resources
clverify                    To verify the cluster configuration
cllscf                      To list the cluster topology information
cllsclstr                   To list the cluster name and ID
cllsnode                    To list info about all cluster nodes
cllsnode -i node1           To list info about node node1
cllsdisk -g shrg            To list the PVIDs of the disks in resource group shrg
cllsnw                      To list all cluster networks
cllsnw -n ether1            To list info about the network ether1
cllsif                      To list info about all network interfaces
cllsif -n node1_service     To list info about the adapter label node1_service
cllsvg                      To list all shared volume groups
cllsvg -g sh1               To list the shared VGs in resource group sh1
cllslv                      To list all shared logical volumes
cllslv -g sh1               To list the shared LVs in resource group sh1
cllsdisk -g sh1             To list the PVIDs of the disks in resource group sh1
cllsfs                      To list all shared file systems
cllsfs -g sh1               To list the shared file systems in resource group sh1
cllsnim                     To list all network interface modules
cllsnim -n ether            To list the network interface module ether
cllsparam -n node1          To list the runtime parameters of node node1
cllsserv                    To list all application servers
claddnim                    To add a network interface module
claddclstr -i 3 -n dcm      To add a cluster definition named dcm with id 3
claddnode                   To add an adapter
clchclstr -s enhanced       To change the cluster security mode to enhanced
clchnode                    To change node attributes
clchgrp                     To change resource group attributes
clchparam                   To change runtime parameters
clchserv                    To change an application server's name or its start/end scripts
clrmclstr                   To remove the cluster definition
clrmgrp -g sh1              To remove resource group sh1 and its resources
clrmnim ether               To remove the network interface module ether
clrmnode -n node1           To remove the node node1
clrmnode -a node1_svc       To remove the adapter node1_svc
clrmres -g sh1              To remove the resources of resource group sh1
clrmserv app1               To remove the application server app1
clrmserv ALL                To remove all application servers
clgetactivenodes -n node1   To list the nodes with active cluster manager
                            processes, from the cluster manager on node node1
clgetaddr node1             To get a pingable address for node node1
clgetgrp -g sh1             To list info about resource group sh1
clgetif                     To list the interface name / interface device name / netmask
                            associated with a specified IP label / IP address of a specific node
clgetip sh1                 To get the IP label associated to the resource group sh1
clgetnet 193.9.200.2 255.255.255.0
                            To list the network for IP 193.9.200.2, netmask 255.255.255.0
clgetvg -l nodelv           To get the volume group that contains the LV nodelv
cllistlogs                  To list the HACMP logs
clnodename -a node5         To add the node node5 to the cluster
clfindres                   To find the location and status of resource groups
xclconfig                   X utility for cluster configuration
xhacmpm                     X utility for HACMP management
xclstat                     X utility for cluster status
HACMP LOGS
=========
/tmp/hacmp.out                          Detailed event script output (event history)
/var/adm/cluster.log                    Cluster events and messages are logged in this file
/usr/sbin/cluster/history/cluster.mmdd  Cluster history of events (one file / day of month)
/tmp/cm.log                             Clstrmgr messages
/tmp/cspoc.log                          Output of C-SPOC commands
/tmp/emuhacmp.out                       Output of the HACMP event emulator
rdist -b -f /etc/disfile1               To distribute the files listed in /etc/disfile1
                                        in binary mode
***************
q sess                  To view client sessions
cancel process <n>      To cancel a process
q filespace             To check the backup space utilized for each file system of a server,
                        the filesystem type...
q actlog To view activity log
q act begind=mm/dd/yyyy begint=hh:mm:ss endd=mm/dd/yyyy endt=hh:mm:ss
                        To view the activity log for a specified period, by beginning
                        and end date
q act begind=today-1 begint=00:00
                        To view the activity log since yesterday 12:00 am
q vol                   To see volumes and their storage pool name, capacity, status
q req                   To see requests
q sched f=d             To see schedule details
q path                  To see the backup paths
q stgpool               To see the storage pools
delete volume L41116L4 discarddata=yes
                        To discard the data on the media L41116L4
#cd /usr/tivoli/tsm/StorageAgent/bin
nohup ./dsmsta &        To start the storage agent (LANFREE BACKUP = backup
                        directly from the SAN to the tape library)
#ps -ef | grep dsm
Important
=======
If you are not able to take a backup through any of the Tivoli backup methods (BA
client, dsmc, schedule), check whether any NFS-mounted filesystems exist on the
server and whether the NFS server is available. If that server is not powered
on, either unmount the NFS filesystem or make that server available...
****************
lssrc -a                To list the status of all subsystems
lssrc -h node1 -a       To list the status of all subsystems on the foreign host node1
lssrc -s inetd          To list the status of the subsystem inetd
lssrc -g tcpip          To list the status of the subsystem group tcpip
startsrc -s inetd       To start the subsystem inetd
startsrc -g tcpip       To start the subsystem group tcpip
stopsrc -s inetd        To stop the subsystem inetd (if the process is under
                        srcmstr, i.e. PPID of process = PID of srcmstr)
stopsrc -g tcpip        To stop the subsystem group tcpip
refresh -s nfsd         To refresh the subsystem nfsd
refresh -g tcpip        To refresh the subsystem group tcpip
mkssys                  To add a subsystem
rmssys -s kerberos      To remove the subsystem kerberos
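A minimal sketch of adding a subsystem with mkssys (the subsystem name and program path
are illustrative):
mkssys -s mydaemon -p /usr/local/bin/mydaemon -u 0 -S -n 15 -f 9
startsrc -s mydaemon
(-u 0 runs it as root, -S marks it signal-controlled, and -n/-f give the normal and forced
stop signals.)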
*************
id                      To display the current user's UID and GIDs
id -gn                  To display the current user's primary group name
id -Gn                  To display all group names of the current user
lsuser root             To display the attributes of user root
lsuser ALL              To display the attributes of all users
lsuser -a ALL           To display selected attributes (-a) of all users
rmuser praveen          To remove user praveen
rmuser -p praveen       To remove user praveen and all his attributes
who /var/adm/wtmp       To list logins, logoffs, and shutdowns recorded in /var/adm/wtmp
last root               All previous login and logoff times of the root user
who -r                  To display the current run level
pwdck -t ALL / username To verify user passwords (report errors and ask to fix them)
usrck -t ALL / username To verify user definitions
grpck -t ALL / groupname To verify a group
*************
switch.prt -s AIX
switch.prt -d systemV
qchk -q
qchk -P lp0
qchk -# 123
qchk -A
qcan -x 123
qcan -X -P lp0
qhld -r -#569
enable psq
disable psq
lpstat
lpstat -p lp0
lpstat -u root
cancel -#111
lpq -P lp0
************
mksysb -i -X /dev/rmt0    To create a system backup on rmt0 (-i generates /image.data;
                          -X expands /tmp if required)
mksysb -m /dev/rmt0       To generate map files and create the system backup
mksysb -e /dev/rmt0       To create a system backup excluding the files listed
                          in /etc/exclude.rootvg
mkszfile                  To generate the /image.data file
mkcd -d /dev/cd1          To create a system backup (mksysb) on the CD device /dev/cd1
cpio -icvd < /dev/rmt0 /home    Restores the /home directory from rmt0
tar -cvf /dev/rmt0 /home  To back up the /home directory to rmt0
tar -tvf /dev/rmt0        To list the contents of the tar archive on rmt0
******************
lslpp -l                  To list all installed filesets
lslpp -f bos.rte          To list the files in the fileset bos.rte
lslpp -w /etc/hosts       To list the fileset that owns /etc/hosts (parent fileset)
installp -u bos.net       To remove bos.net
installp -r               To reject applied software updates
installp -c -f            To commit the filesets listed in a file
installp -C               To clean up after an interrupted installation
lppchk -c                 To check the fileset items and verify that the
                          checksum and filesize are consistent with SWVPD
lppchk -v                 To verify that all filesets have required requisites and
                          are completely installed
instfix -k IX9999 -d /dev/rmt0.1    To install the fileset associated with fix
                          IX9999 from rmt0
instfix -ik IX9999        To check whether fix IX9999 is installed
************
lscons                    To display the current console device
lscons -b                 To display the console device effective at the next boot
************
cfgmgr                    To configure all devices on the system
cfgmgr -l vscsi0          To configure the devices attached to the vscsi0 interface
lscfg                     To display the configuration and definition info of devices
lscfg -l mem0             To display info about the device mem0
lscfg -l ent*             To display info about all Ethernet adapters
lscfg -v                  To display VPD (vital product data)
lscfg -v -l hdisk0        To display the VPD of hdisk0
mkdev -l rmt0             To change the device rmt0 from defined to available state
lsdev -P                  To list all predefined devices
lsdev -P -c disk          To list all predefined devices of class disk
lsdev -P -r class         To list all predefined device classes
rmdev -l rmt0             To change the device rmt0 from available to defined state
rmdev -l rmt0 -S          To make the device rmt0 stopped
lsparent -C -k rs232      To list the parent devices that accept rs232 devices
lsparent -C -l hdisk0     To list the parent devices of the child device hdisk0
lsconn -p scsi0           To list the connection locations the parent device scsi0
                          can accept
lvlstmajor                To list the available major numbers
mknod /dev/null c 2 2     To create the null device with major and minor (2) nos.
                          (c - char device)
****************
sysdumpdev -l             To list the current dump devices
sysdumpdev -L             To display info about the most recent system dump
sysdumpstart -p           To start a dump to the primary dump device
sysdumpstart -s           To start a dump to the secondary dump device
sysdumpdev -p /dev/lv00   To set lv00 as the primary dump device temporarily
**************
lsps hd6                  To display the details of the paging space hd6
chps -a y paging00        To activate paging00 at each system restart
chps -a n paging00        To deactivate paging00 at system restart
***********
bootlist -m service -o    To display the service-mode boot list
bosboot -a                To create a boot image on the default boot logical volume of the
                          hard disk from which the system was booted
bosboot -ad /dev/hdisk1   To create a boot image on hdisk1
bootinfo -b               To display the last boot device
bootinfo -e
bootinfo -k               To check the key position at boot (1=secure, 2=service, 3=normal)
bootinfo -K               To display the kernel bitness (32 or 64)
bootinfo -m
bootinfo -o hdisk0
bootinfo -p               To display the hardware platform type
bootinfo -r               To display the real memory size in KB
bootinfo -y               To display whether the hardware is 32- or 64-bit capable
*****************
LV commands in AIX
lslv -l lv00              To list the PVs that contain the LV lv00 and their distribution
lslv -p hdisk1            To list the PP allocation map of hdisk1
chlv -t copy lv00         To change the type of the LV lv00 to copy
chlv -p r lv00            To set the LV lv00 read-only
rmlv testlv               To remove testlv
rmlv -f testlv            To remove testlv without confirmation
mklvcopy testlv 3 hdisk1 hdisk2    To mirror the LV testlv on hdisk1 and
                          hdisk2 with 3 copies
rmlvcopy testlv 2 hdisk1  To remove copies of testlv from hdisk1, leaving 2 copies
**************
PV commands in AIX
lspv                      To list all physical volumes
lspv hdisk0               To display the details of hdisk0
lspv -p hdisk0            To display the PP usage of hdisk0
lspv -l hdisk0            To list the LVs on hdisk0 and their distribution
chpv -v r hdisk1          To make hdisk1 unavailable (removes it from use
                          without varyoffvg)
chpv -v a hdisk1          To make hdisk1 available again
chpv -c hdisk0            To clear the boot record of hdisk0 (CAREFULLY..!!)
FS commands in AIX
lsfs                      To list all filesystems
lsfs -a                   To list all filesystems (the default)
lsfs -q                   To list all filesystems with detailed info (shows the size of the FS
                          and of the LV in it, so we can check whether size of LV = size of FS)
lsfs -l                   To list all filesystems in list format
lsfs -c                   To list all filesystems in colon format
lsfs -v jfs               To list all jfs filesystems
chfs -a size=+131072 /test    To add 64 MB (131072 512-byte blocks) to /test
chfs -A /test             To set /test to mount automatically at system restart
chfs -d account /test     To remove the account attribute of /test (from the
                          /etc/filesystems file)
rmfs -r /test             To remove the filesystem /test along with its LV and mount point
defragfs /test            To defragment the filesystem /test
defragfs -q /test         Display the current defrag status of the file system
fsck -y /dev/lv00         To check the filesystem associated to /dev/lv00,
                          assuming a "yes" response to all questions
fsck -p /dev/lv00         To restore the primary superblock of /dev/lv00 from a backup copy
*************
VG commands in AIX
lsvg                      To list all volume groups
lsvg -o                   To list all active (varied-on) volume groups
lsvg rootvg               To display the details of rootvg
lsvg -l rootvg            To list all LVs in rootvg
lsvg -p rootvg            To list all PVs in rootvg
mkvg -s 8 hdisk1          To create a VG on hdisk1 with PP size 8 MB
chvg -a y newvg           To vary on newvg automatically at system restart
chvg -a n newvg           To not vary on newvg automatically at system restart
chvg -t 2 newvg           To change the maximum no. of PPs per PV to 2032 on the vg
                          newvg, i.e. 2*1016 (so the max no. of hdisks in newvg is 16,
                          as in this case for a normal VG)
chvg -Qn newvg            To disable quorum checking on newvg
reorgvg newvg             To reorganize the PP allocation of newvg
varyoffvg newvg           To deactivate newvg (after unmounting the filesystems in newvg)
varyonvg newvg            To activate VG newvg
syncvg -v newvg           To synchronize the mirrored LV copies in newvg
***********
last                      To list all the records in the /var/adm/wtmp file
                          (all successful logins)
last shutdown             To list previous shutdowns
last reboot               To list previous reboots
uptime                    To show how long the system has been up
oslevel                   To show the AIX version and level
lslicense                 To show the number of fixed licenses
chlicense -u30            To set the number of fixed user licenses to 30
chlicense -f on           To enable floating user licenses
lsmcode                   To display the firmware/microcode levels
invscout                  To get the firmware of all peripheral devices. It generates the
                          output in /var/adm/invscout/invs.mrp, containing the firmware
                          levels of all peripheral devices.
****************