
Solaris L3 Level Questions

Posted on June 4, 2008 by krishsubramanian


These are some of the L3-level UNIX interview questions.
1. How do you replace a failed boot disk under SVM metadevices in Solaris? Give a
step-by-step explanation.
2. How do you remove the metadevice only for the root slice, while the remaining
slices continue to run under SVM?
3. What would you do if you wanted to replace a slice using the metareplace
option?
4. What is the significance of the 51% state database replica rule in SVM?
5. What are the common errors you find in Solaris Volume Manager?
6. You have a boot disk under SVM; the machine fails to boot and remains at
the ok prompt. What could be the possible reason?
7. metastat -p shows that a metavolume needs replacement. The metavolume is a
one-way mirror only. You find that the disk and metavolumes are actually fine
and I/O is happening to the filesystems. How will you clear the replacement
message that metastat reports?
8. How to create a shared disk group in VxVM?
9. What is the difference between the private and public regions in Veritas
Volume Manager?
10. What would you do if the private region of a particular disk group is
full? What are the design considerations for the size of the private region in
a VxVM disk group?
11. How do you replace a corrupt private region in VxVM 3.5 and later
versions?
12. How would you convert a volume from gen to fsgen? Why would you do
that?
13. How can you unencapsulate a boot disk in VxVM?
14. How do you identify multiple paths for a disk?
15. What is the difference between VxDMP and EMC PowerPath?
16. The output of vxdisk -o alldgs list shows some disk groups in braces. What
does that signify?
17. What are the various layouts that are available in VxVM?
18. What is a layered volume? How do you create one using vxmake?
19. How do you quickly mirror a volume if the volume is empty?
20. How do you grow a volume?
21. What is the difference between failing and failed disks?
22. How do you replace a failed disk in Veritas?
23. A plex is in a disabled state. How will you recover it? What are the steps
to follow?
24. What is the difference between the detached and dissociated states of
plexes?
25. What is the boot process of VxVM?
26. What is the difference between SVM and VxVM? What would you recommend to
your clients? Why?
27. What are the various clusters you have worked on?
28. Which cluster is better, VCS or Sun Cluster?
29. Compare and contrast VCS and Sun Cluster.
30. How will you start the VCS service? What are the configuration files in
VCS?
31. How would you switch a service group?
32. How would you freeze a service group?
33. What is a split-brain scenario?
34. What is the difference between link-based and probe-based IPMP?

Growing file systems in Red Hat Linux

1) Take the needed outputs.
# /usr/bin/lsscsi > lsscsi.b4
# /usr/sbin/lvmdiskscan > lvmdiskscan.b4
# /sbin/powermt display dev=all > powermt.b4
# /sbin/powermt display dev=all | grep emcpower > emcpower.b4
# /var/tmp/emcgrab/tools/bin/inq.LinuxAMD64 -nodots -showvol > inq.b4

2) HBA Information.
# /usr/sbin/hbanyware/hbacmd listhbas | grep "Port WWN" | tee hba_wwpn.txt
3) Identify the new LUNs.
# /usr/sbin/hbanyware/hbacmd reset [HBA_WWPN1]
***Verify that the paths through this HBA are online after the HBA
reset before resetting the next HBA***
#powermt display paths
#powermt display dev=all
# /usr/sbin/hbanyware/hbacmd reset [HBA_WWPN2]
***Verify that the paths are online after the HBA reset***
#powermt display paths
#powermt display dev=all
4) Get the new LUNs
# /usr/bin/lsscsi > lsscsi.after
# /sbin/powermt config
# /sbin/powermt save
# /sbin/powermt display dev=all > powermt.after
# /sbin/powermt display dev=all | grep emcpower > emcpower.after
# /usr/sbin/lvmdiskscan > lvmdiskscan.after
# /usr/bin/diff powermt.after powermt.b4
5) Once you have the new LUNs/emcpower devices, the next task is to add
them to the volume group and extend the filesystem.
Take the needed outputs before performing the change.
# vgdisplay -v /dev/<vgname> > vgdisplay.b4
# vgs > vgs.b4
# pvs > pvs.b4
# lvs > lvs.b4
# lvdisplay -v /dev/<vgname>/<lvname> > lvdisplay.b4
6) Create the PV and add it to the volume group
# pvcreate /dev/emcpowerxx ( use the -f option in case it throws an error )
# pvs > pvs.after
# vgextend <vgname> /dev/emcpowerxx
# vgdisplay -v /dev/<vgname>
7) Extend the logical volume and the filesystem
# lvextend -L +<size> /dev/<vgname>/<lvname> /dev/emcpowerxx ( e.g. -L +20G;
the new extents are allocated from emcpowerxx )
# lvdisplay -v /dev/<vgname>/<lvname>
# resize2fs /dev/<vgname>/<lvname>
# df -h
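If the whole of the new LUN should go to this logical volume, the percentage
syntax saves working out the size by hand. A hedged variant of step 7, using
the same placeholder device names as above:
# lvextend -l +100%FREE /dev/<vgname>/<lvname> ( uses all remaining free extents in the VG )
# resize2fs /dev/<vgname>/<lvname>
# df -h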
Oracle RAC and node evictions, RCA
Oracle node evictions under Clusterware were quite common in the past. They
seem to have been reduced a lot in 11gR2. However, we still get issues related
to evictions from time to time, and often nothing concrete comes out as an
RCA. I have seen these issues with both Oracle 10g and 11g Clusterware, on
Solaris as well as Linux.
Most common reasons
* Network disconnects between nodes, causing CRS to reboot the node. In my
experience, whenever IPMP has timeouts, mostly probe-based, Oracle CRS
evicts the node.
* Disk/path (I/O) timeouts; very rare but possible. These are storage/SAN
related.
* High resource contention causing the server to hang, which is in turn
picked up by CRS and leads to node eviction.
The challenge has always been to tune the kernel settings for Oracle RAC
properly, ensure the OS and Oracle CRS configuration settings agree with each
other, configure kdump/dumpadm properly, and configure CRS to dump core
instead of only rebooting.
For the RCA itself, collect the Oracle RAC logs, OS Watcher data (if enabled)
and the crash dump, and analyse them to arrive at a root cause.
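A few starting checks I typically run during an eviction RCA are sketched
below. This is only a hedged sketch: the $GRID_HOME path is an assumption for
illustration, and the CSS timeout defaults differ between releases.
# dumpadm                                   ( verify the Solaris crash dump configuration )
# $GRID_HOME/bin/crsctl get css misscount   ( network heartbeat timeout, in seconds )
# $GRID_HOME/bin/crsctl get css disktimeout ( voting disk I/O timeout, in seconds )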
How to find the global zone name from the local zone
# pkgparam -v SUNWcsr | grep "From:" | grep -v fern | head -1 | cut -d: -f5 | awk '{ print $1 }'
Use this from Non global zone to get the Global zone name.

Creating a new NFS resource group in Sun Cluster 3.2

Today, let us see how to create a new resource group in Sun Cluster 3.2.
This setup uses SVM.
Let's go straight to the procedure.
1. Make sure the new LUNs are visible and available to be
configured.
# echo | format > format.b4
# scdidadm -L > scdidadm.b4
# cfgadm -c configure <controller(s)>
# devfsadm
# scdidadm -r
# scgdevs ( on one node )
# scdidadm -L > scdidadm.after
# diff scdidadm.b4 scdidadm.after
Note down the new DID devices. These will be used to create the file systems.
2. Create the new metaset
# metaset -s sap-set -a -h phys-host1 phys-host2
3. Add disks to metaset
# metaset -s sap-set -a /dev/did/rdsk/d34
# metaset -s sap-set -a /dev/did/rdsk/d35
# metaset -s sap-set -a /dev/did/rdsk/d36
# metaset -s sap-set -a /dev/did/rdsk/d37
4. Take ownership of metaset on phys-host1
# cldg switch -n phys-host1 sap-set
5. Create new volumes for sap-set
# metainit -s sap-set d134 -p /dev/did/dsk/d34s0 1g
# metainit -s sap-set d135 -p /dev/did/dsk/d34s0 all
# metainit -s sap-set d136 -p /dev/did/dsk/d35s0 all
# metainit -s sap-set d137 -p /dev/did/dsk/d36s0 all
# metainit -s sap-set d138 -p /dev/did/dsk/d37s0 all
6. Create new filesystems
# umask 022
# newfs /dev/md/sap-set/rdsk/d134
# newfs /dev/md/sap-set/rdsk/d135
# newfs /dev/md/sap-set/rdsk/d136
# newfs /dev/md/sap-set/rdsk/d137
# newfs /dev/md/sap-set/rdsk/d138
7. Create the new mount points. Create them on both nodes.
#mkdir -p /sap; chown sap:sap /sap
#mkdir -p /sapdata/sap11 ; chown sap:sap /sapdata/sap11
#mkdir -p /sapdata/sap12 ; chown sap:sap /sapdata/sap12
#mkdir -p /sapdata/sap13 ; chown sap:sap /sapdata/sap13
#mkdir -p /sapdata/sap14 ; chown sap:sap /sapdata/sap14
8. Edit the /etc/vfstab file and add the new file systems. Set the mount
at boot option to no.
9. Create the Resource group SAP-RG.
# clrg create -n phys-host1,phys-host2 SAP-RG
10. Create logical hostname resource.
# clrslh create -g SAP-RG saplh-rs
11. Create the HAstoragePlus Resource
# clrs create -t HAStoragePlus -g SAP-RG -p AffinityOn=true -p FilesystemMountPoints=/sap,/sapdata/sap11,/sapdata/sap12,/sapdata/sap13,/sapdata/sap14 sap-data-res
12. Bring the Resource Group Online
# clrg online -M -n phys-host1 SAP-RG
13. Test the failover of the Resource Group
# clrg switch -n phys-host2 SAP-RG
14. Failover Back
# clrg switch -n phys-host1 SAP-RG
15. Create the SUNW.nfs Config Directory on the /sap file system.
# mkdir -p /sap/nfs/SUNW.nfs
16. Create the dfstab file to share the file systems
# vi /sap/nfs/SUNW.nfs/dfstab-sap-nfs-res
share -F nfs -o rw /sapdata/sap11
share -F nfs -o rw /sapdata/sap12
share -F nfs -o rw /sapdata/sap13
share -F nfs -o rw /sapdata/sap14
17. Offline the SAP-RG resource group.
# clrg offline SAP-RG
18. Modify the Pathprefix property of the resource group so that NFS knows
the path to the cluster dfstab file.
# clrg set -p Pathprefix=/sap/nfs SAP-RG

19. Bring the Resource Group online


# clrg online -n phys-host1 SAP-RG
20. Create the NFS resource in the SAP-RG resource group.
# clrs create -g SAP-RG -t SUNW.nfs -p Resource_dependencies=sap-data-res sap-nfs-res
21. The resource should now be created and enabled as part of SAP-RG.
# clrs status
22. Check whether the server is exporting the filesystems.
# dfshares
Solaris Boot Archive: Concepts and Explanation
There are two types of boot archive:
A) the primary boot archive
B) the failsafe boot archive.
The primary Solaris boot archive is a subset of the root file system (/).
The archive contains the kernel modules and configuration files. The files in
the boot archive are read by the kernel before / is mounted. The boot archive
is located in the /platform directory.
Whenever we update any configuration or .conf files, the boot archive must be
brought up to date. If the boot archive is not in sync with the changes, we
may have booting problems, which I will deal with in another post.
The failsafe boot archive is created at the time of installing the OS. In
fact, both boot archives are created at OS installation time. A failsafe boot
archive is self-sufficient, can boot on its own and requires no maintenance.
We use failsafe mode to boot whenever we have issues booting the OS from the
primary boot archive.
To manually update the primary boot-archive we use the bootadm command.
# bootadm update-archive
You may also have seen a message like "updating boot archive" when you reboot
or shut down the server.
The boot archive check is controlled by an SMF service. To enable or disable
it, use:
# svcadm enable/disable system/boot-archive
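To see what the archive contains, to check whether it is stale, and to fall
back to the failsafe archive when the primary one is damaged, something along
these lines can be used (a hedged sketch; the boot command shown is for the
SPARC OBP, x86 systems use the failsafe GRUB entry instead):
# bootadm list-archive      ( list the files included in the boot archive )
# bootadm update-archive -n ( check whether the archive is out of date without updating it )
ok boot -F failsafe         ( from the OBP, boot the failsafe archive )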
This entry was posted in Solaris on November 14, 2011 by krishsubramanian.
Format command hanging in Solaris 10
I have come across issues where format does not show the partitions of the
disks it lists. If a disk is chosen, format hangs forever. A quick workaround
is to set an environment variable:
# export NOINUSE_CHECK=1
Once this is set, format works fine.
Firmware upgrade on T2000
Today we will see how to upgrade firmware on a T2000 box.
Existing firmware
sc> showhost
Sun-Fire-T2000 System Firmware 6.4.6 2007/06/24 18:43
Host flash versions:
Hypervisor 1.4.1 2007/04/02 16:37
OBP 4.26.1 2007/04/02 16:26
POST 4.26.0 2007/03/26 16:45
sc>
Steps
1. Copy the patch for the firmware to the server, for example 139434-08, and
unzip it. This will upgrade the firmware to 6.7.11.
2. From the console, check the current version of firmware
sc> showhost
3. Download the image to the Controller
# ./sysfwdownload Sun_System_Firmware-6_7_11-Sun_Fire_T2000.bin
This should take less than 15 minutes to complete. The command prompt
(host) will return upon successful completion. The end result is that the
image ([image].pkg) is now resident on the SP.
4. Shutdown the server
# init 0
5. From the service processor CLI, power off the server
sc> poweroff
This will take about a minute

6. Make sure that your virtual keyswitch setting is not in the LOCKED
position
sc> showkeyswitch
If the virtual key switch is in LOCKED position you can change that with
the following command:
sc> setkeyswitch -y normal
7. Flash update the downloaded Sun System Firmware image
sc> flashupdate -s 127.0.0.1
127.0.0.1 is the default address for the local host. When the download
process is finished, the system controller displays the message:
Update complete. Reset device to use new software.
8. Type the resetsc command to reset the system controller
sc> resetsc
9. After the reboot, check the version of the firmware
sc> showhost
10. Power on the server
sc> poweron
11. Check the console and boot the server
sc> console
{ok} boot
cfgadm: Device being used by VxVM
root@abc>/> cfgadm -c unconfigure c1::dsk/c1t0d0
cfgadm: Component system is busy, try again: failed to offline:
/devices/ssm@0,0/pci@1c,700000/pci@1/SUNW,isptwo@4/sd@0,0
     Resource              Information
------------------  -------------------------
/dev/dsk/c1t0d0s2   Device being used by VxVM
The cfgadm unconfigure command fails here.
The way to resolve this is to disable the disk's path from DMP control.
Since there is only one path to this disk, the -f (force) option needs to be
used:
root@abc>/> vxdmpadm -f disable path=c1t0d0s2
root@abc>/> vxdmpadm getsubpaths ctlr=c1
NAME        STATE[A]     PATH-TYPE[M]  DMPNODENAME  ENCLR-TYPE  ENCLR-NAME  ATTRS
==================================================================================
c1t0d0s2    DISABLED                   c1t0d0s2     Disk        Disk
c1t6d0s2    ENABLED(A)                 c1t6d0s2     Disk        Disk
You can see the path now disabled from DMP.


root@abc>/> cfgadm -c unconfigure c1::dsk/c1t0d0
cfgadm goes through fine.
root@abc>/>
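Once the maintenance on the disk is complete and it has been configured back
with cfgadm, the path can be re-enabled under DMP. A hedged sketch using the
same device names as above:
root@abc>/> cfgadm -c configure c1::dsk/c1t0d0
root@abc>/> vxdmpadm enable path=c1t0d0s2
root@abc>/> vxdmpadm getsubpaths ctlr=c1 ( the path should now show as ENABLED )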
Multi terabyte support for UFS file systems
Supports multi terabyte:
# /usr/sbin/fstyp -v /dev/md/rdsk/d3 | head -5
ufs
magic   decade  format  dynamic  time    Thu Jul 17 11:15:36 2008
No multi terabyte support:
# /usr/sbin/fstyp -v /dev/md/rdsk/d0 | head -5
ufs
magic   11954   format  dynamic  time    Thu Jul 17 12:43:29 MDT 2008
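To create a UFS file system with multi-terabyte support in the first place,
newfs can be told to lay it out for growth beyond a terabyte. A hedged
one-liner, with a placeholder metadevice name:
# newfs -T /dev/md/rdsk/d10 ( sets MTB UFS parameters, allowing growth past 1 TB )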
Creating an ALOM Compatibility Shell on a T5220
By default the T5220 comes up with an ILOM shell. The command set is a bit
different from the ALOM we are used to, but no worries: ILOM gives you the
option to create an ALOM compatibility shell and use it to administer the
server as usual.
1. Login to the SP
XXXXXXXXXXXXXXXXXX login: username
Password:
Waiting for daemons to initialize
Daemons ready
Integrated Lights Out Manager
Version 3.0.x.x
Copyright 2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.

2. Create a user named admin, and set the admin account roles to aucro and
the CLI mode to alom
-> create /SP/users/admin role=aucro cli_mode=alom
Creating user
Enter new password: ********
Enter new password again: ********
Created /SP/users/admin
3. Log out of the root account after you have finished creating the admin
account
-> exit
4. Log in to the ALOM CLI shell (indicated by the sc> prompt) from the ILOM
login prompt.
XXXXXXXXXXXXXXXXX login: admin
Password:
Waiting for daemons to initialize
Daemons ready
Integrated Lights Out Manager
Version 3.0.x.x
Copyright 2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
sc>
Remember that the ALOM CMT compatibility shell is an ILOM interface
Solaris Patching Error Codes
Here is the list of Solaris Patching Error codes
The complete list:
# 0 No error
# 1 Usage error
# 2 Attempt to apply a patch that's already been applied
# 3 Effective UID is not root
# 4 Attempt to save original files failed
# 5 pkgadd failed
# 6 Patch is obsoleted
# 7 Invalid package directory
# 8 Attempting to patch a package that is not installed
# 9 Cannot access /usr/sbin/pkgadd (client problem)
# 10 Package validation errors
# 11 Error adding patch to root template
# 12 Patch script terminated due to signal
# 13 Symbolic link included in patch
# 14 NOT USED
# 15 The prepatch script had a return code other than 0.
# 16 The postpatch script had a return code other than 0.
# 17 Mismatch of the -d option between a previous patch install and the current one
# 18 Not enough space in the file systems that are targets of the patch
# 19 $SOFTINFO/INST_RELEASE file not found
# 20 A direct instance patch was required but not found
# 21 The required patches have not been installed on the manager
# 22 A progressive instance patch was required but not found
# 23 A restricted patch is already applied to the package
# 24 An incompatible patch is applied
# 25 A required patch is not applied
# 26 The user-specified backout data can't be found
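These are the exit codes returned by patchadd and its patch scripts, so a
quick check of $? right after applying a patch tells you which entry of the
list you hit. A minimal hedged example, using a made-up patch location:
# patchadd /var/tmp/139434-08
# echo $?   ( compare the value against the list above )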
Extending Filesystems in Sun Cluster using VxVM
I did an activity to add LUNs to existing database file systems running Sun
Cluster 3.1 on Solaris 9 with VxVM. The steps I followed are:
a) Get the LUN information from storage and update sd.conf accordingly on
both servers
b) Make the LUNs visible on the server. Run the command on both nodes
#update_drv -f sd
c) Verify that the LUNs are present: # format
d) Configure emcpower devices. Run on both the servers.
# /etc/powermt config
# /etc/powermt save
e) Create the Sun Cluster DID devices. Run on both nodes if required.

# devfsadm
# scgdevs
f) Verify the Sun Cluster DID devices: # scdidadm -L
g) Add the disks to the Veritas disk group: # vxdiskadm
h) Grow the file system (see the sketch below)
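For step h, a hedged sketch of growing a VxVM volume and its mounted VxFS
file system in one step with vxresize; the disk group name (appdg), volume
name (appvol), size and mount point are illustrative assumptions:
# /etc/vx/bin/vxresize -g appdg appvol +50g ( grows the volume and the mounted VxFS file system together )
# df -h /appdata                            ( verify the new size )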
Updating Boot device order
You can update the boot device order using the eeprom command.
# eeprom boot-device=vx-rootdisk vx-rootmirror
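To confirm the change took effect, the same eeprom parameter can be queried
from the running OS, or checked from the OBP:
# eeprom boot-device    ( prints: boot-device=vx-rootdisk vx-rootmirror )
ok printenv boot-device ( the equivalent check from the OBP ok prompt )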
Found duplicate PV, and how to solve it
Today let's see more about LVM. The most common issue is the "Found duplicate
PV" warning. How do you solve it?
Well, the message is directly related to the filter you use in
/etc/lvm/lvm.conf.
Whatever devices are included in the filter will have their LVM metadata
checked.
For example, a filter like this
filter = [ "a/.*/" ]
will have all the devices scanned for LVM metadata.
* When using device-mapper-multipath or other multipath software such as
EMC PowerPath or Hitachi Dynamic Link Manager (HDLM), each path to a
particular logical unit number (LUN) is registered as a different SCSI
device, such as /dev/sdb, /dev/sdc, and so on. The multipath software will
then create a new device that maps to those individual paths, such as
/dev/mapper/mpath1 or /dev/mapper/mpatha for device-mapper-multipath,
/dev/emcpowera for EMC PowerPath, or /dev/sddlmab for Hitachi HDLM. Since
each LUN has multiple device nodes in /dev that point to the same
underlying data, they all contain the same LVM metadata and thus LVM
commands will find the same metadata multiple times and report them as
duplicates.
* This is only a warning message and does not indicate a failure in LVM
operation. Rather, it alerts the administrator that only one of the devices
is used as a PV and the rest are being ignored.
To avoid this situation, a filter with only the needed devices should be
included.
For example, to allow the internal disks in HP arrays and any EMC PowerPath
devices, the filter would look like
filter = [ "a|/dev/cciss/.*|", "a|/dev/emcpower.*|", "r|.*|" ]
After applying the filter:
# rm -f /etc/lvm/cache/.cache
# pvscan
# vgscan
# lvscan
Do a # vgs -vv to cross-check as well.
There are scenarios where a previously working filter may have failed and you
see duplicate entries that should not be visible.
This scenario calls for an investigation of:
* the device names and the filter patterns for the devices in lvm.conf
* any changes made to lvm.conf without the initrd being rebuilt to reflect
the changes.
Rebuilding the initrd in RHEL
Whenever there is an update to kernel modules or config files that are needed
at boot time, the system admin needs to rebuild the initial ramdisk. The sole
purpose of the initramfs is to help get the root filesystem mounted so that
the transition to the real rootfs can happen.
RHEL prior to version 6 used mkinitrd for the rebuild. With RHEL 6 this has
changed to dracut. The dracut tool is built to keep only minimal hardcoded
information in the initramfs. Dracut's initramfs depends on udev to create
symbolic links to device nodes; when the root filesystem's device node
appears, it mounts it and switches root to it.
Let's see how to build the initial ramdisk using both options.
mkinitrd
1. Take a backup copy of the initrd before proceeding to rebuild.
# cp -p /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak

2. Build the initrd


# mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)
After a successful build, add an entry to grub.conf so that you have the
option to boot from the backed-up ramdisk (step 1) in case of any booting
issues after the rebuild.
Dracut
1. Similar to the mkinitrd method, copy the initrd before proceeding to
rebuild.
# cp -p /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
2. Build the initramfs
# dracut -f
Make the needed changes to grub.conf so that you have a fallback entry if the
rebuilt initramfs fails to boot; a sample stanza is sketched below.
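The grub.conf entry mentioned above could look something like the following.
This is only a hedged sketch: the kernel version, root device and file names
are assumptions and must match your own /boot contents.
title Red Hat Enterprise Linux (backup initramfs)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root
        initrd /initramfs-2.6.32-431.el6.x86_64.img.bak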
Configuring a yum repository
1. Create the config file under /etc/yum.repos.d
# vim /etc/yum.repos.d/yum.repo
[yum]
name=RHEL REPO
baseurl=file:///REPO/Packages
enabled=1
gpgcheck=0
2. Save the file
3. Copy the contents of the DVD to the directory /REPO/Packages
4. Install the createrepo binary and build the repository metadata.
# rpm -ivh deltarpm*
# rpm -ivh python-deltarpm*
# rpm -ivh createrepo-0.9.8-4.el6.noarch.rpm
# createrepo --database /REPO/Packages
# yum clean all
# yum makecache
# yum list all
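To verify that the repository works, it can be listed and used for an
install; the package name here is only an example:
# yum repolist                                ( the "yum" repo should show up with a package count )
# yum --disablerepo="*" --enablerepo="yum" install <package>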
Persistent Routes in Solaris
To add a static route in Solaris, you can use the route command. This
dynamically updates the IP routing table. The traditional way to make routes
persistent across reboots is to add a startup script containing the route
commands.
Traditional method
# route add -host 10.10.10.3 10.10.10.3
This will add the host route to the routing table.
To make it persistent across reboots, create a startup script:
# vi /etc/rc2.d/S99route_add
( add the line: route add -host 10.10.10.3 10.10.10.3 )
# chmod +x /etc/rc2.d/S99route_add
Command line option
Solaris offers a command line option to make the routes persistent across
reboots. We will see how that happens
# route -p add -host 10.10.10.3 10.10.10.3 -static
add host 10.10.10.3: gateway 10.10.10.3
add persistent host 10.10.10.3: gateway 10.10.10.3
You can check the routes being made persistent using the route -p show
command
# route -p show
persistent : route add -host 10.10.10.3 10.10.10.3 -static
The routes are written to /etc/inet/static_routes
# cat /etc/inet/static_routes
# File generated by route(1M) - do not edit
-host 10.10.10.3 10.10.10.3 -static
Deletion of persistent routes is also straightforward; use route -p delete:
# route -p delete -host 10.10.10.3 10.10.10.3 -static
How to tell if a zone is a global or non-global zone in Solaris

For scripting or some other purpose you may need to be sure whether the zone
you are in is a global or a non-global zone. There is a straightforward
solution for this: the pkgcond command.
Just check the usage of this command.
root@solaris:~# uname -a
SunOS solaris 5.11 11.1 i86pc i386 i86pc
root@solaris:~# pkgcond
no condition to check specified; usage is:
pkgcond [-nv] <condition> [ <option> ... ]
command options:
-n negate results of condition test
-v verbose output of condition testing
As it states, it works on a condition, and I assume it was developed for
scripting purposes. Let us see some examples.
root@solaris:~# pkgcond is_global_zone
root@solaris:~# echo $?
0
root@solaris:~#
The exit status (echo $?) is 0, indicating the condition is true. The zone
from which I executed this command was a global zone. Let us try the same for
the non-global zone condition.
root@solaris:~# pkgcond is_nonglobal_zone
root@solaris:~# echo $?
1
root@solaris:~#
It returns 1 (false), so this is not a non-global zone. That's it.
Wait, let me make this complete. The pkgcond command can be used to check a
lot of other conditions. Here you go; all 0s are true and all 1s are false.
Just enjoy.
root@solaris:~# pkgcond is_what
can_add_driver=1
can_remove_driver=1
can_update_driver=1
is_alternative_root=1
is_boot_environment=1
is_diskless_client=1
is_global_zone=0
is_mounted_miniroot=1
is_netinstall_image=1
is_nonglobal_zone=1
is_path_writable=0
is_running_system=0
root@solaris:~#
PS: As you can see, I have used a Solaris 11 box; in Solaris 10 you can also
check whether the zone is whole-root or sparse-root.
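Since pkgcond only sets an exit status, the natural use is inside a script. A
minimal hedged sketch:
#!/bin/sh
# Run a section of a script only when executing in the global zone.
if pkgcond is_global_zone ; then
    echo "global zone: proceeding with global-only tasks"
else
    echo "non-global zone: skipping"
fi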
GAB and LLT Basics
Today we will see something on Veritas Cluster. Strangely, I never felt like
blogging about VCS, and this is my first post on it. We will see some more
interesting and in-depth VCS material going forward.
In a cluster, the configuration is shared between the nodes. There is a need
for communication between nodes to share the configuration as well as the
changes that happen on each node. LLT and GAB are used by VCS precisely for
this reason.
LLT
* LLT (Low Latency Transport) operates at layer 2 of the network stack.
* Provides fast, reliable cluster communication between kernels.
* Is responsible for heartbeats between nodes.
* Important files: /etc/llttab and /etc/llthosts (sample contents below).
GAB
* Is loaded as a kernel module on each cluster node.
* Maintains cluster membership by receiving input on the status of the
heartbeats from LLT. When a system no longer receives heartbeats from a peer,
it marks the peer as DOWN and excludes it from the cluster.
* Provides guaranteed delivery of point-to-point and broadcast messages to
all nodes within the cluster.
* Important file: /etc/gabtab (sample contents below).
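For reference, a hedged sketch of what these files typically contain on a
two-node Solaris cluster; the node names, cluster ID and ce interfaces are
illustrative assumptions:
# cat /etc/llttab
set-node node1
set-cluster 100
link ce0 /dev/ce:0 - ether - -
link ce1 /dev/ce:1 - ether - -
# cat /etc/llthosts
0 node1
1 node2
# cat /etc/gabtab
/sbin/gabconfig -c -n2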
This entry was posted in Cluster and tagged Veritas Cluster on July 26,
2013 by krishsubramanian.

Learning Python
If you are interested in learning Python online, there are some very useful
resources. Some of the ones I found interesting are:
http://thenewboston.org/list.php?cat=36
https://www.udacity.com/course/cs101
http://www.codecademy.com/tracks/python
https://www.khanacademy.org/science/computer-science
http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/600sc-introduction-to-computer-science-and-programming-spring-2011/
Please let the community know via the comments if you know of any additional
websites/videos for learning Python.
Cheers
This entry was posted in programming and tagged Python on July 20, 2013 by
krishsubramanian.
LDOM Basics, Part One
It has been some time since I wrote something; I hope to blog more
frequently. Today let us look at Logical Domains (LDOMs).
Logical Domains, recently renamed Oracle VM Server for SPARC, is a server
virtualization technology from Oracle Corporation. There are three important
components to LDOMs:
A) the hypervisor
B) the Solaris OE
C) resources such as CPU, memory, network and disks.
The physical hardware is separated into logical domains, each having its own
Solaris OE, CPU, memory, OBP, console and I/O components. Each domain is
independent of the others and can be separately rebooted, patched and
upgraded.
The LDOM manager is the piece of software used to create and manage logical
domains; it is also used to map logical domains to physical resources.
Domains Explained
Logical domains are classified into:
a) Control domain: the LDOM manager runs in this domain, which enables you to
create and manage LDOMs. You can have only one control domain per physical
server. This is the first domain created when you install the LDOM software.
The control domain is named primary.
b) Service domain: provides virtual device services to other domains, such as
a virtual switch, virtual console and virtual disk server. Any domain can be
configured as a service domain.
c) I/O domain: has direct access to physical I/O devices, such as a network
card on a PCIe controller. An I/O domain can own a PCIe slot or an on-board
PCIe device. An I/O domain can share physical I/O devices with other domains
in the form of virtual devices when it is also used as a service domain.
d) Root domain: owns a PCIe root complex and PCIe fabric, and provides all
fabric-related services such as fabric error handling.
e) Guest domain: a non-I/O domain that consumes the virtual device services
provided by service domains. It has no physical I/O devices, only virtual
devices.
LDOM Daemons
ldmd => the Logical Domains Manager daemon
vntsd => the virtual network terminal server daemon
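A hedged sketch of the typical ldm commands used to set up the virtual
services on the control domain and to create a guest domain. The service
names (primary-vds0, primary-vsw0, primary-vcc0), the domain name ldg1, the
e1000g0 interface, the backend disk and the CPU/memory sizing are all
illustrative assumptions:
# ldm add-vds primary-vds0 primary                      ( virtual disk service )
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary ( virtual console concentrator )
# ldm add-vsw net-dev=e1000g0 primary-vsw0 primary      ( virtual switch )
# ldm set-vcpu 8 primary
# ldm set-memory 8G primary
# ldm add-domain ldg1
# ldm add-vcpu 8 ldg1
# ldm add-memory 4G ldg1
# ldm add-vnet vnet0 primary-vsw0 ldg1
# ldm add-vdsdev /dev/dsk/c1t1d0s2 vol1@primary-vds0
# ldm add-vdisk vdisk0 vol1@primary-vds0 ldg1
# ldm bind ldg1
# ldm start ldg1
# ldm list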
