
db2 connect to sales
db2 update db cfg using sortheap 1000
db2 get db cfg
instfix -vik IY76255

backup vg to tape
$savevg -f /dev/rmt0 appsvg

backup vg to another folder
$mount nimtsm:/backup /mnt
$savevg -f /mnt/siviewt/savevg_appsvg_170707 appsvg

Delete files in batch
$find . -name "spc.log.*" -exec rm {} \;
$mount SSA5TSMP2:/backup /mnt

Scheduler client connection
1. run cmd line
2. \program files\ibm siview\schclient -option

SiView Compilation
1. Make clean
   $HOME/superpos/src/csppt/source/src/posppt/make clean
2. If a small tx is changed, bohhelper needs to be made
   $HOME/superpos/src/csppt/source/src/posppt/pptmgr/factory/make -f makefile bos_clean
   $HOME/superpos/src/csppt/source/src/posppt/pptmgr/factory/make -f makefile mkbos
3. Make all
   $HOME/superpos/src/csppt/source/src/posppt/make all
4. Make lib
   $HOME/superpos/src/csppt/source/src/posppt/mkso
5. Make clean mmserver
   $HOME/superpos/src/csclass/make -f mkcsmain.mak clean
   $HOME/superpos/src/csclass/make -f mkcsmain.mak all
6. copy module and bind into runtime
   $HOME/superpos/src/csppt/source/src/posppt/copymod.ksh siviewd

To resync the current setting on LV
$synclvodm -v -P mms1vg

Change schd client setting
$schedulerclient.exe -option

DSL access
1. WEP Key: 8b791e4934f28e6b25e5c8d5f6

DB2 Instance
# db2icrt -p edatinst -s ESE -w 64 -u edatinst edatinst
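The batch-delete pattern above can be exercised safely in a scratch directory first (a sketch; the file names are illustrative):

```shell
# Build a scratch directory with some rotated logs plus one unrelated file
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch spc.log.1 spc.log.2 spc.log.3 keep.txt

# Quoting the pattern stops the shell expanding the glob before find sees it;
# -exec rm {} \; removes each matching file in turn
find . -name "spc.log.*" -exec rm {} \;

ls    # only keep.txt remains
```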

DB2 Fixpack installation
#installFixPak -y

To display whether a filesystem is mirrored
#lsvg -l edat1vg
note: if PVs is 2, there are two physical volumes

db2 export data command
1. db2move MMDB EXPORT -tn *
2. db2move MMDB IMPORT LOAD

check HADR status
#login as db2 instance owner
# db2 get snapshot for all on dbname

DB2 Export
#db2 export to awards.ixf of ixf messages msgs.txt select * from staff where dept = 20
db2 export to xxep_events_data.ixf of ixf select * from xxep.xxep_events_data where event_id <= 16706
db2 "export to xxep_siview_events_data.ixf of ixf select * from oraerp.xxep_siview_events_data where event_id=32684"
db2 "import from xxep_siview_events_data.ixf of ixf allow write access commitcount 1 insert into oraerp.xxep_siview_events_data"
db2 "export to fbusergrp_m.ixf of ixf select smallint(0), d_thesystemKey, usergrp_id, usergrp_type, desc, timestamp(current timestamp), 'replica' from siview.fbusergrp_m"

DB2 Import
# db2 import from myfile.ixf of ixf messages msg.txt insert into staff
#db2 "import from fbusergrp_m.ixf of ixf insert into siview.usergrp"

DB2 query SQL message
#db2 ? SQLnnnn

Export every SIVIEWT table by generating one export statement per table:
db2 -x "select 'export to '||rtrim(name)||'.ixf of ixf select * from '||rtrim(creator)||'.'||ltrim(name)||';' from sysibm.systables where creator = 'SIVIEWT' and type = 'T'" > export_mmdb.sql
db2 -tvf export_mmdb.sql -z export_mmdb.log

I plan to run this script to import the data:
db2 -x "select 'import from '||rtrim(name)||'.ixf of ixf replace_create into schema1.'||ltrim(name)||' in '||rtrim(tbspace)||' index in '||rtrim(INDEX_TBSPACE)||';' from sysibm.systables where creator = 'SIVIEW' and type = 'T'" > import_mmdb.sql
db2 -tvf import_mmdb.sql -z import_mmdb.log

DB2 rollforward
#db2 rollforward database spcdb to xxxxx and stop

Perform this on the DB2 client side to test the HADR features:
#UPDATE ALTERNATE SERVER FOR DATABASE database-alias USING HOSTNAME hostname PORT port-number
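The db2 -x trick above just builds one script line per table; the same shaping can be sketched without a database by feeding a plain table list through awk (table names here are illustrative):

```shell
# Emit one export statement per table name, mimicking what the
# db2 -x select against sysibm.systables produces
printf '%s\n' FBUSERGRP_M FHCDATAHS |
awk '{ printf "export to %s.ixf of ixf select * from SIVIEWT.%s;\n", $1, $1 }' \
  > export_mmdb.sql

cat export_mmdb.sql
```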

check db configuration
#db2 get db cfg for <database_name> | grep HADR

db2 backup/restore
#cd <temp_backup_directory>
db2 backup db <database_name>
#db2 restore db <database_name> from <temp_restore_directory> replace history file

remove ^M character from a file
#tr -d "\015\032" < oldfile > newfile

add more spaces for Rep
1. create rawdevices on standby
2. assign to standby instance owner
3. create rawdevices on primary
4. assign to primary instance owner

create sample database
#db2sampl

check cluster status
#lssrc -g cluster
#clRGinfo
#clstat
#smitty clstop
#smitty clstart
#smitty hacmp

reboot the machine
#reboot -q

To start and stop the HADR process, use the db2 instance owner userid.
To start the primary HADR database process, execute: db2 start hadr on database SMDB as primary
To start the standby HADR database process, execute: db2 start hadr on database SMDB as standby
To stop the primary HADR database process, execute: db2 stop hadr on database database-alias
To stop the standby HADR database process, execute: db2 stop hadr on database database-alias

Deactivate database
#db2 deactivate database database-alias

Stop rollforward
#db2 rollforward database database-alias to end of logs and stop

check Listen Port
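The tr recipe above can be checked on a throwaway file (octal 015 is \r / ^M, 032 is ^Z):

```shell
# Make a file with DOS line endings, then strip CR and Ctrl-Z bytes
printf 'line1\r\nline2\r\n' > oldfile
tr -d "\015\032" < oldfile > newfile

# newfile now has plain Unix line endings
od -c newfile
```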

#netstat -Aan | grep *.23

DB2 get snapshot for HADR
#db2 get snapshot for all on hadr | grep "HADR Status"
#hdrsnap

check hacmp failover log
#tail -f /tmp/hacmp.out

Kill all processes
#ps -ef | grep siview | grep -v grep | grep -v kill | awk '{print $2}' | xargs kill -9
or
#ps -u shemond -o pid | grep -v PID | xargs -i kill -9 {}

check the files with most space taken
#du -sm * | sort -n | pg

Generate IOR
To generate the IOR, use the genIOR utility in mm/bin.
syntax:  genIOR host port server marker interface orb
example: genIOR siviewt 16000 88888888888888888888888888888888 thePPTServiceManagerObjectFactory CS_PPTServiceManagerObjectFactory EBROKER
For the EBROKER version, the server is the UUID if you point to the server directly. If you point to the dispatcher, any number with 32 characters is fine. The port refers to the dispatcher port, or the server port (direct connection).

Upgrade from prod version to customized version
$HOME/mm/lib
-rwxr-xr-x 1 siviewd db2iadm1 2010519 Sep 26 09:53 libcs_pptbossp_fw.so
-rwxr-xr-x 1 siviewd db2iadm1 7811434 Sep 26 09:53 libcs_pptmgri.so
-rwxr-xr-x 1 siviewd db2iadm1  833017 Sep 26 09:53 libcs_pptsvrlib.so
-rwxr-xr-x 1 siviewd db2iadm1 2247753 Sep 26 09:53 libcs_srvlibs.so
-rwxr-xr-x 1 siviewd db2iadm1 1280630 Sep 26 09:53 libcs_svcclt.so
-rwxr-xr-x 1 siviewd db2iadm1 6388018 Sep 26 09:53 libcs_txmethods.so
$HOME/mm/bnd
-rw-r--r-- 1 siviewd db2iadm1  2089 Sep 25 19:17 cs00001.bnd
-rw-r--r-- 1 siviewd db2iadm1 10821 Sep 25 19:35 cs00002.bnd
-rw-r--r-- 1 siviewd db2iadm1 13729 Sep 25 19:36 cs00003.bnd
-rw-r--r-- 1 siviewd db2iadm1  9050 Sep 25 19:37 cs00004.bnd
-rw-r--r-- 1 siviewd db2iadm1 16067 Sep 25 19:38 cs00005.bnd
-rw-r--r-- 1 siviewd db2iadm1  7057 Sep 25 19:39 cs00006.bnd
-rw-r--r-- 1 siviewd db2iadm1 15363 Sep 25 19:39 cs00007.bnd
-rw-r--r-- 1 siviewd db2iadm1  3736 Sep 25 19:40 cs00008.bnd
-rw-r--r-- 1 siviewd db2iadm1  3724 Sep 25 19:41 cs00009.bnd
-rw-r--r-- 1 siviewd db2iadm1  1556 Sep 25 19:42 cs00010.bnd
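The du | sort disk-usage check above can be tried on files of known sizes (head replaces the interactive pg pager so the sketch runs non-interactively):

```shell
tmpdir=$(mktemp -d)
cd "$tmpdir"
# Two files of clearly different sizes
dd if=/dev/zero of=big.dat bs=1024 count=2048 2>/dev/null
dd if=/dev/zero of=small.dat bs=1024 count=4 2>/dev/null

# Biggest entry first; head -1 shows the top space consumer
du -sk * | sort -n -r | head -1
```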

$HOME/mm/msg
-rw-r--r-- 1 siviewd db2iadm1    6328 Sep 26 00:25 cs_pptmsg.cat
$HOME/mm/bin
-rwxr-xr-x 1 siviewd db2iadm1 1078873 Sep 25 11:21 mmserver

Then call
bindmm.ksh lbmmdb

Change $HOME/mm/bin/startmm.ksh
#Add SD100027
export CS_MM_CATALOG_FILE=$HOME/mm/msg/cs_pptmsg.cat
#D100027 Add start
export CS_DERIVEDDATA_FORMULA_ENABLED=1
#D100027 add end

Change $HOME/mm/etc/mmimconfig.ini
[F25]
ManagedObjectEntries=CS_PPTManager
;DCR4000022 Remove Inheritance of IM object from PPTServiceManager
;[PPTServiceManagerBO]
;ManagedObjectIDLName=PPTServiceManager
;ManagedObjectImplName=PPTServiceManager_i
[CS_PPTManager]
ManagedObjectIDLName=CS_PPTManager
ManagedObjectImplName=CS_PPTManager_i
[BRSVariableInfoIMizedMgrBO]
ManagedObjectIDLName=BRSVariableInfoIMizedMgr
ManagedObjectImplName=BRSVariableInfoIMizedMgr_i
;
; RAS Support section (service) / ManagedObjectEntries=CS_PPTManager

Change $HOME/mm/etc/mmserver.ini
PPTServiceManager thePPTServiceManagerObjectFactory:80200000111111111111111111111111:28200:EBROKER SSA5SMSD1
PPTManager P25/#CS_PPTManager#F25#PPTManager:80200000111111111111111111111111:28200:EBROKER SSA5SMSD1
PPTServiceManagerObjectFactory PPTServiceManagerObjectFactory:80200000111111111111111111111111:28200:EBROKER SSA5SMSD1
PPTManagerObjectFactory CS_PPTManagerObjectFactory:80200000111111111111111111111111:28200:EBROKER SSA5SMSD1

1. copy the existing mm folder to become mm1 in the modules folder; common to common1.
2. Overwrite the mm1 content with the files in SSA5SMSD1:root:/package/upgrade_1A_release_1/mm.
3. Depending on how many mmserver instances are running, copy /package/upgrade_1A_release_1/mm/bin/mmserver to mmserver01, mmserver02, ...
4. create the softlink from /home/siview/mm to mm1. [ln -s /home/siview/mm1 mm]
5. create the new tables. CLP is included in mm1/mig/clp [Please confirm with Jian Der the appropriate table space].
6. Import the predefined data. Predefined data is available at mm1/mig/pre. [Upgrade step attached in excel sheet]
7. Bind the database: bindmm.ksh, bindcs.ksh.
8. Please make sure the new parameter is added in startmm.ksh:
export IFLIB_PATH=$HOME/ifadapter/lib

# LIBPATH
export LIBPATH=.:$LIBPATH:$IFLIB_PATH
#Interface Adapter fs
export IFSERVER_NAME=SiViewIFDispatcher
export IFSERVER_URL=corbaloc:iiop:1.2@localhost:30000/SiViewIFDispatcher
export CS_MM_CATALOG_FILE=$HOME/mm/msg/cs_pptmsg.cat
export SP_EBROKERTCS_SAMBAPATH=/home/smb/smartTCS  <-- Please confirm this share folder has been created
export SP_CARRIER_ADMINISTRATOR=MM  <-- Please make sure it is set in SM too.
export SP_CARRIER_CAPACITY=25
export SP_CARRIER_NOMINALSIZE=12
export SP_FPC_ADAPTATION_FLAG=1
#D100027 Add start
export CS_DERIVEDDATA_FORMULA_ENABLED=1

Steps to upgrade Interface Adapter:
1. create a new folder ifadapter1 in the modules folder.
2. overwrite the ifadapter1 content with SSA5SMSD1:root:/package/upgrade_1A_release_1/ifadapter
3. create the softlink at /home/siview to ifadapter1. [ln -s /home/siview/ifadapter1 ifadapter]
Additional upgrades on IF Adapter will be provided this week once Naveen has completed his changes on IF.

check database integrity
#db2dart

check volume group mount
#fsck /dev/lv_moudules
#lsvg -l rep1vg
#mount /home/siview/modules

Query TSM backup history
#db2adutl query db MMDB

Refresh sendmail
#refresh -s sendmail
frederic.oh@soitec.fr

The following are good database backup images for mmdb and smdb:
> su - mminst
SSA5DBP1:mminst:/db2instance/db2home/mminst/dbbackup> db2 backup db mmdb to /db2instance/db2home/mminst/dbbackup
Backup successful. The timestamp for this backup image is : 20071023155652
SSA5DBP1:mminst:/db2instance/db2home/mminst/dbbackup> ls -al
total 1844456
drwxr-xr-x  2 mminst mmiadm       256 Oct 23 16:06 .
drwxrwxr-x 14 mminst mmiadm      4096 Oct 23 12:34 ..
-rw-r-----  1 mminst mmiadm 944353280 Oct 23 15:57 MMDB.0.mminst.NODE0000.CATN0000.20071023155652.001
SSA5DBP1:mminst:/db2instance/db2home/mminst/dbbackup>

db2 backup db mmdb use tsm
Backup successful. The timestamp for this backup image is : 20071023160135

> su - sminst
SSA5DBP1:sminst:/db2instance/db2home/sminst/dbbackup> db2 backup db smdb use tsm
Backup successful. The timestamp for this backup image is : 20071023160747
db2 backup db smdb to /db2instance/db2home/sminst/dbbackup
Backup successful. The timestamp for this backup image is : 20071023161121
SSA5DBP1:sminst:/db2instance/db2home/sminst/dbbackup> ls -al
total 1572136
drwxr-xr-x 2 sminst smiadm       256 Oct 23 16:11 .
drwxrwxr-x 8 sminst smiadm      4096 Oct 22 21:31 ..
-rw-r--r-- 1 sminst smiadm      1072 Oct 23 16:09 DART.INF
-rw-r----- 1 sminst smiadm 804921344 Oct 23 16:11 SMDB.0.sminst.NODE0000.CATN0000.20071023161121.001

I have not done it for spcdb. If the database is ready for backup, could you do a backup of it?
Steps:
1. su - spcinst
2. Ensure that there are no connections.
   > db2 list applications
   If there are, issue: db2 force application all
3. Do an offline backup to local disk:
   db2 backup db spcdb to /db2instance/db2home/spcinst/dbbackup
   Take note of the timestamp for the backup image.
4. Do an offline backup to TSM:
   db2 backup db spcdb use tsm
   Take note of the timestamp for the backup image.

Check top resources used by application
#topas

Check paging space
#lsps
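A DB2 backup image name encodes its timestamp, so the value needed for "restore ... taken at" can be cut straight out of the file name (a sketch using the smdb image above):

```shell
# Field layout: DBNAME.TYPE.INSTANCE.NODEnnnn.CATNnnnn.TIMESTAMP.SEQ
image=SMDB.0.sminst.NODE0000.CATN0000.20071023161121.001

# The timestamp is the 6th dot-separated field
ts=$(echo "$image" | cut -d. -f6)
echo "$ts"   # 20071023161121
```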

Change paging space size
a. check PP size using lsvg rootvg (max available ps: #bootinfo -r)
b. smitty chps

Remove paging space
#smitty swapoff
#smitty rmps

Restart sendmail
# stopsrc -s sendmail

# startsrc -s sendmail -a"-bd -q30m"
If you only need to refresh the configuration you can also run:
#kill -1 pid
# refresh -s sendmail

Refresh clinfoES
#stopsrc -s clinfoES
#startsrc -s clinfoES
#lssrc -s clinfoES
#clstat
If the above does not refresh clinfoES, then
#stopsrc -s clinfoES
#stopsrc -s snmpd
#startsrc -s snmpd
#startsrc -s clinfoES
wait about 10 seconds, then
#clstat

Print system configuration info
#prtconf | more

check file sizing and sorting
#du -sk * | sort -n -r | pg

DB2 restoring
a. create filesystem
b. create instance account and instance
c. set proper rights and owner for the filesystem
#db2 restore db schdb from /home/tstinst/backup taken at 20071101141915 to /home/schdb replace history file
#db2 restore db spcdb from /home/tstinst/backup taken at 20071101143035 to /home/spcdb replace history file
#db2 restore db smdb from /home/tstinst/backup taken at 20071101141617 to /home/smdb replace history file
#db2 restore db mmdb from /home/tstinst/backup taken at 20071101141043 to /home/mmdb replace history file
do roll forwarding (copy the latest log files over).

shutdown corba service:
#txtsaoclt SSA5SMS:27500 xmserver ShutDown 2
txtsaoclt ssa5mmsp1:27201 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp1:27202 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp1:27203 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp1:27204 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp1:27205 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp1:27206 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp1:27207 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp1:27208 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp1:27209 mmserver01 ShutDown 2

txtsaoclt ssa5mmsp1:27210 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp2:27201 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp2:27202 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp2:27203 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp2:27204 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp2:27205 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp2:27206 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp2:27207 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp2:27208 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp2:27209 mmserver01 ShutDown 2
txtsaoclt ssa5mmsp2:27210 mmserver01 ShutDown 2
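The twenty near-identical txtsaoclt calls above differ only in host and port, so a loop can generate them; the sketch echoes the commands instead of executing the site-specific txtsaoclt tool:

```shell
for host in ssa5mmsp1 ssa5mmsp2; do
  port=27201
  while [ "$port" -le 27210 ]; do
    # echo kept in for a dry run; drop it to actually shut down
    echo txtsaoclt "$host:$port" mmserver01 ShutDown 2
    port=$((port + 1))
  done
done
```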

1. deactivate DB on P2
su - mminst -c 'db2 deactivate database MMDB'
su - sminst -c 'db2 deactivate database SMDB'
su - spcinst -c 'db2 deactivate database SPCDB'
su - schinst -c 'db2 deactivate database SCHDB'
su - repinst -c 'db2 deactivate database MMRDB'
su - pmsinst -c 'db2 deactivate database EMNTDB'
su - edainst -c 'db2 deactivate database EDATDB'

stop hadr on P2
su - schinst -c 'db2stop'
su - spcinst -c 'db2stop'
su - edainst -c 'db2stop'
su - sminst -c 'db2stop'
su - mminst -c 'db2stop'
su - pmsinst -c 'db2stop'
su - repinst -c 'db2stop'

stop hadr on P1
su - schinst -c 'db2stop'
su - spcinst -c 'db2stop'
su - edainst -c 'db2stop'
su - sminst -c 'db2stop'
su - mminst -c 'db2stop'
su - pmsinst -c 'db2stop'
su - repinst -c 'db2stop'

shutdown P2, P1
start P2, P1

start instance on P2
su - schinst -c 'db2start'
su - spcinst -c 'db2start'
su - edainst -c 'db2start'
su - sminst -c 'db2start'
su - mminst -c 'db2start'
su - pmsinst -c 'db2start'
su - repinst -c 'db2start'

start hadr on P2
su - schinst -c 'db2 start hadr on database schdb as standby'
su - spcinst -c 'db2 start hadr on database spcdb as standby'
su - edainst -c 'db2 start hadr on database edatdb as standby'
su - sminst -c 'db2 start hadr on database smdb as standby'
su - mminst -c 'db2 start hadr on database mmdb as standby'
su - pmsinst -c 'db2 start hadr on database emntdb as standby'
su - repinst -c 'db2 start hadr on database mmrdb as standby'

start instance on P1
su - schinst -c 'db2start'
su - spcinst -c 'db2start'
su - edainst -c 'db2start'
su - sminst -c 'db2start'
su - mminst -c 'db2start'
su - pmsinst -c 'db2start'
su - repinst -c 'db2start'

start hadr on P1
su - schinst -c 'db2 start hadr on database schdb as primary'
su - spcinst -c 'db2 start hadr on database spcdb as primary'
su - edainst -c 'db2 start hadr on database edatdb as primary'
su - sminst -c 'db2 start hadr on database smdb as primary'
su - mminst -c 'db2 start hadr on database mmdb as primary'
su - pmsinst -c 'db2 start hadr on database emntdb as primary'
su - repinst -c 'db2 start hadr on database mmrdb as primary'

Update tablespaces
#1. For smdb, run: db2 "ALTER TABLESPACE TSASNUOW RESIZE (ALL 4900)"
#2. For emntdb, run: db2 "ALTER TABLESPACE TSASNAA RESIZE (ALL 25000)"

#Update DB config parameter
1. Run the following on both servers using the instance userid (pmsinst):
   db2 update db cfg for emntdb using locklist 500 maxlocks 60 locktimeout 60
2. Stop HADR processes for emntdb on both database servers.
   2.1. On the standby server, as pmsinst, execute:
        db2 deactivate db emntdb
        db2 stop hadr on db emntdb
        db2stop
        db2start
   2.2. On the primary server, as pmsinst, execute:
        db2 stop hadr on db emntdb
        db2stop
        db2start
3. Start HADR processes on both database servers.
   3.1. On the standby server, as pmsinst, execute:
        db2 start hadr on db emntdb as standby
   3.2. On the primary server, as pmsinst, execute:
        db2 start hadr on db emntdb as primary
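The per-instance blocks above repeat one pattern over seven instance/database pairs; a loop can print (or, without the echo, run) them. A sketch, echoing instead of invoking su:

```shell
# instance:database pairs as used in these notes
pairs="schinst:schdb spcinst:spcdb edainst:edatdb sminst:smdb mminst:mmdb pmsinst:emntdb repinst:mmrdb"
role=standby    # or primary, on P1

for p in $pairs; do
  inst=${p%%:*}   # text before the colon: the instance owner
  db=${p##*:}     # text after the colon: the database
  echo "su - $inst -c 'db2 start hadr on database $db as $role'"
done
```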

list tablespace containers
#db2 list tablespace containers for tbsid

analyze replication
#asnqanalyze -db proddb1 proddb2 -la detailed

Move index tablespace to a different one
db2 "EXPORT TO FHCDATAHS.ixf OF IXF SELECT * FROM SIVIEW.FHCDATAHS";
#SYNC 10;
DROP TABLE SIVIEW.FHCDATAHS;
#SYNC 20;
CREATE TABLE SIVIEW.FHCDATAHS (
  MEAS_LOT_ID       VARCHAR(64) NOT NULL,
  MEAS_LOT_TYPE     VARCHAR(64) NOT NULL,
  MEAS_PRODSPEC_ID  VARCHAR(64) NOT NULL,
  MEAS_PRODGRP_ID   VARCHAR(64) NOT NULL,
  MEAS_TECH_ID      VARCHAR(64) NOT NULL,
  MONITOR_GRP_ID    VARCHAR(64) NOT NULL,
  MEAS_MAINPD_ID    VARCHAR(64) NOT NULL,
  MEAS_OPE_NO       VARCHAR(64) NOT NULL,
  MEAS_PD_ID        VARCHAR(64) NOT NULL,
  MEAS_PD_TYPE      VARCHAR(20) NOT NULL,
  MEAS_PASS_COUNT   INTEGER     NOT NULL,
  MEAS_PD_NAME      VARCHAR(64) NOT NULL,
  MEAS_AREA_ID      VARCHAR(64) NOT NULL,
  MEAS_EQP_ID       VARCHAR(64) NOT NULL,
  MEAS_EQP_NAME     VARCHAR(64) NOT NULL,
  MEAS_LC_RECIPE_ID VARCHAR(64) NOT NULL,
  MEAS_RECIPE_ID    VARCHAR(64) NOT NULL,
  MEAS_PH_RECIPE_ID VARCHAR(64) NOT NULL,
  CLAIM_TIME        TIMESTAMP   NOT NULL,
  CLAIM_SHOP_DATE   DOUBLE      NOT NULL,
  CLAIM_USER_ID     VARCHAR(64) NOT NULL,
  MEAS_DCDEF_ID     VARCHAR(64) NOT NULL,
  MEAS_DC_TYPE      VARCHAR(12) NOT NULL,
  MEAS_DCSPEC_ID    VARCHAR(64) NOT NULL,
  MEAS_WFRHS_TIME   TIMESTAMP   NOT NULL,
  STORE_TIME        TIMESTAMP   NOT NULL,
  EVENT_CREATE_TIME TIMESTAMP   NOT NULL
) DATA CAPTURE NONE IN MM_TBSPACE1 INDEX IN MM_IDXSPACE2;
#SYNC 30;
ALTER TABLE SIVIEW.FHCDATAHS LOCKSIZE ROW APPEND OFF NOT VOLATILE;
#SYNC 40;
db2 "IMPORT FROM FHCDATAHS.ixf OF IXF MODIFIED BY COMPOUND=5 NOCHECKLENGTHS COMMITCOUNT 1000 INSERT INTO SIVIEW.FHCDATAHS"

#SYNC 50;
CREATE UNIQUE INDEX SIVIEW.IXFHCDATAHS ON SIVIEW.FHCDATAHS (
  MEAS_LOT_ID ASC,
  MEAS_MAINPD_ID ASC,
  MEAS_OPE_NO ASC,
  MEAS_PASS_COUNT ASC,
  CLAIM_TIME ASC,
  MEAS_DCDEF_ID ASC,
  STORE_TIME ASC
) PCTFREE 10 DISALLOW REVERSE SCANS;
#SYNC 60;
RUNSTATS ON TABLE SIVIEW.FHCDATAHS AND INDEXES ALL SHRLEVEL REFERENCE;
#SYNC 70;

Table space resizing
#ALTER TABLESPACE TSCDFBPRIVGRP_M RESIZE (File '/home/smdb/sminst/NODE0000/SQL00001/CDFBPRIVGRP_M' 2048) ON DBPARTITIONNUM (0);

DB2 Catalog
#db2 catalog tcpip node mmdb remote <IP-ADR> server <DB-PORT>
#db2 catalog db mmdb at node mmdb as <alias> authentication SERVER

SQL Replication - Add new Replication Tables
1. Stop the Capture program (extract the script from /HAapps/hacmp/stopDBServer.sh)
2. Stop the Apply program
3. From Replication Center, register new capture tables
   - For the CD table, use the same tablespace as the original table
   - For the CD table, use 'siview' as the schema (note: different usages may have different schemas)
4. Add Apply Member
   - Set the target table name
   - Change the tablespace (same as MMDB)
   - For the running script, add index tablespace information for table creation.
   - Change the Target Load Option to 'ASNLOAD using the Export and Import Utilities'
5. Start the Capture program
6. Start the Apply program

Stop all running Queue Managers
# endmqm

Stop listeners associated with a Queue Manager
# endmqlsr -m QMgrName

Create a Queue Manager
# crtmqm -q venus.queue.manager
# strmqm
# runmqsc
# define qlocal (orange.queue)
# cd /usr/mqm/samp/bin
# ./amqsput ORANGE.QUEUE
# ./amqsget ORANGE.QUEUE

Delete a Queue Manager
#dltmqm QMgrName

export MYTEMPQM=TESTQM
export MYPORT=1600
export MQCHLLIB=/var/mqm/
crtmqm $MYTEMPQM
strmqm $MYTEMPQM
runmqlsr -m $MYTEMPQM
runmqsc $MYTEMPQM << EOF
DEFINE CHANNEL(NTLM)
DEFINE CHANNEL(NTLM)
ALTER CHANNEL(NTLM)
DEFINE QLOCAL(TESTQ)
EOF
amqsputc TESTQ $MYTEMPQM << EOF
hello world
EOF
endmqm -i $MYTEMPQM

Initializing high availability disaster recovery (HADR)
Use the following procedure to set up and initialize the primary and standby databases for DB2 High Availability Disaster Recovery (HADR).
HADR can be initialized through the command line processor (CLP), the Set Up High Availability Disaster Recovery (HADR) wizard in the Control Center, or by calling the db2HADRStart API.
To use the CLP to initialize HADR on your system for the first time:
Determine the host name, host IP address, and the service name or port number for each of the HADR databases. If a host has multiple network interfaces, ensure that the HADR host name or IP address maps to the intended one. You need to allocate separate HADR ports in /etc/services for each protected database. These cannot be the same as the ports allocated to the instance. The host name can only map to one IP address.
Note: The instance names for the primary and standby databases do not have to be the same.
Create the standby database by restoring a backup image or by initializing a split mirror, based on the existing database that is to be the primary.
In the following example, the BACKUP DATABASE and RESTORE DATABASE commands are used to initialize database SOCKS as a standby database. In this case, an NFS mounted file system is accessible at both sites.
Issue the following command at the primary database:
backup db socks to /nfs1/backups/db2/socks
Issue the following command at the standby database:
restore db socks from /nfs1/backups/db2/socks replace history file
The following example illustrates how to use the db2inidb utility to initialize the standby database using a split mirror

of the primary database. This procedure is an alternative to the backup and restore procedure illustrated above.
Issue the following command at the standby database:
db2inidb socks as standby
Note: The database names for the primary and standby databases must be the same.
It is recommended that you do not issue the ROLLFORWARD DATABASE command on the standby database after the restore operation or split mirror initialization. The results of using a rollforward operation might differ slightly from replaying the logs using HADR on the standby database. If the databases are not identical, issuing the START HADR command with the AS STANDBY option will fail.
When using the RESTORE DATABASE command, it is recommended that the REPLACE HISTORY FILE option is used.
When creating the standby database using the RESTORE DATABASE command, you must ensure that the standby remains in rollforward mode. This means that you cannot issue the ROLLFORWARD DATABASE command with either the COMPLETE option or the STOP option. An error will be returned if the START HADR command with the AS STANDBY option is attempted on the database after rollforward is stopped.
The following RESTORE DATABASE command options should be avoided when setting up the standby database: TABLESPACE, INTO, REDIRECT, and WITHOUT ROLLING FORWARD.
When setting up the standby database using the db2inidb utility, do not use the SNAPSHOT or MIRROR options. You can specify the RELOCATE USING option to change one or more of the following configuration attributes: instance name, log path, and database path. However, you must not change the database name or the table space container paths.
Set the HADR configuration parameters on the primary and standby databases.
Note: It is very important that you set the following configuration parameters after the standby database has been created:
HADR_LOCAL_HOST
HADR_LOCAL_SVC
HADR_REMOTE_HOST
HADR_REMOTE_SVC
HADR_REMOTE_INST
The following sample configuration is for the primary and standby databases.
On the primary:
HADR_LOCAL_HOST   host1.ibm.com
HADR_LOCAL_SVC    hadr_service
HADR_REMOTE_HOST  host2.ibm.com
HADR_REMOTE_SVC   hadr_service
HADR_REMOTE_INST  dbinst2
HADR_TIMEOUT      120
HADR_SYNCMODE     NEARSYNC
HADR_PEER_WINDOW  120
On the standby:
HADR_LOCAL_HOST   host2.ibm.com
HADR_LOCAL_SVC    hadr_service
HADR_REMOTE_HOST  host1.ibm.com
HADR_REMOTE_SVC   hadr_service
HADR_REMOTE_INST  dbinst1
HADR_TIMEOUT      120
HADR_SYNCMODE     NEARSYNC
HADR_PEER_WINDOW  120
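All eight parameters can go into a single UPDATE DB CFG call; this prints the command for the sample primary configuration above (echoed rather than run, since no db2 instance is assumed in the sketch):

```shell
db=SOCKS
# Backslash-newline inside double quotes is plain line continuation,
# so this echoes one long db2 command line
echo "db2 update db cfg for $db using \
HADR_LOCAL_HOST host1.ibm.com \
HADR_LOCAL_SVC hadr_service \
HADR_REMOTE_HOST host2.ibm.com \
HADR_REMOTE_SVC hadr_service \
HADR_REMOTE_INST dbinst2 \
HADR_TIMEOUT 120 \
HADR_SYNCMODE NEARSYNC \
HADR_PEER_WINDOW 120"
```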

If they are set prior to creating the standby database, the settings on the standby database will reflect what is set on the primary database.
Connect to the standby instance and start HADR on the standby database, as in the following example:
START HADR ON DB SOCKS AS STANDBY
Note: Usually, the standby database is started first. If you start the primary database first, this startup procedure will fail if the standby database is not started within the time period specified by the HADR_TIMEOUT database configuration parameter.
Connect to the primary instance and start HADR on the primary database, as in the following example:
START HADR ON DB SOCKS AS PRIMARY
HADR is now started on the primary and standby databases.
To open the Set Up High Availability Disaster Recovery (HADR) Databases wizard:
From the Control Center, expand the object tree until you find the database for which you want to configure HADR. Right-click the database and click High Availability Disaster Recovery > Set Up in the pop-up menu. The Set Up High Availability Disaster Recovery Databases wizard opens.
Additional information is provided through the contextual help facility within the Control Center.
Note: You can start HADR within the Set Up High Availability Disaster Recovery Databases wizard, or you can just use the wizard to initialize HADR, then start it at another time.
To open the Start HADR window:
From the Control Center, expand the object tree until you find the database for which you want to manage HADR. Right-click the database and click High Availability Disaster Recovery > Manage in the pop-up menu. The Manage High Availability Disaster Recovery window opens. Click Start HADR. The Start HADR window opens.

DB2 Check Backup Integrity
#db2ckbkp

Setting up Q Replication
1. Creating Source and Target Q Managers
# crtmqm SRC_QM
# crtmqm TGT_QM
# strmqm SRC_QM
# strmqm TGT_QM
2. Creating Source Queue
# runmqsc SRC_QM
# DEFINE QLOCAL ('ADMINQ') DEFPSIST(YES)
# DEFINE QLOCAL ('RESTARTQ') DEFPSIST(YES)
# DEFINE QLOCAL('TGT_QM') USAGE(XMITQ) DEFPSIST(YES)
# DEFINE QREMOTE('SENDQ') RNAME('RECVQ') RQMNAME('TGT_QM') XMITQ('TGT_QM') DEFPSIST(YES)
3. Creating Target Queue
# runmqsc TGT_QM
# DEFINE QLOCAL('RECVQ') DEFPSIST(YES)
# DEFINE QLOCAL('SRC_QM') USAGE(XMITQ) DEFPSIST(YES)

# DEFINE QREMOTE('ADMINQ') RNAME('ADMINQ') RQMNAME('SRC_QM') XMITQ('SRC_QM') DEFPSIST(YES)
# DEFINE QMODEL('IBMQREP.SPILL.MODELQ') DEFSOPT(SHARED) MSGDLVSQ(FIFO) DEFTYPE(PERMDYN)
4. Creating MQ Channels
# runmqsc SRC_QM
# DEFINE CHL ('SRC_QM.TO.TGT_QM') CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('IP_address(port)') XMITQ('TGT_QM') DISCINT(0)
Important: Replace IP_address with the IP address of your computer and replace port with 1450 or the alternate port that you determined earlier.
# DEFINE CHL ('TGT_QM.TO.SRC_QM') CHLTYPE(RCVR) TRPTYPE(TCP)
# runmqsc TGT_QM
# DEFINE CHL ('TGT_QM.TO.SRC_QM') CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('IP_address(port)') XMITQ('SRC_QM') DISCINT(0)
Important: Replace IP_address with the IP address or host name of your computer and replace port with 1451 or the alternate port that you determined earlier.
# DEFINE CHL ('SRC_QM.TO.TGT_QM') CHLTYPE(RCVR) TRPTYPE(TCP)
5. Starting Up MQ
# strmqm SRC_QM
# strmqm TGT_QM
# runmqlsr -t tcp -m SRC_QM -p 1451
# runmqsc SRC_QM
# start channel (SRC_QM.TO.TGT_QM)
# end
# runmqlsr -t tcp -m TGT_QM -p 1450
# runmqsc TGT_QM
# start channel (TGT_QM.TO.SRC_QM)
# end
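Since runmqsc reads MQSC commands from stdin, the queue definitions above can be scripted; this sketch writes them to a file instead of piping into runmqsc, so it runs even without WebSphere MQ installed:

```shell
# Collect the source-side definitions in one MQSC script
cat > src_qm.mqsc <<'EOF'
DEFINE QLOCAL('ADMINQ') DEFPSIST(YES)
DEFINE QLOCAL('RESTARTQ') DEFPSIST(YES)
DEFINE QLOCAL('TGT_QM') USAGE(XMITQ) DEFPSIST(YES)
DEFINE QREMOTE('SENDQ') RNAME('RECVQ') RQMNAME('TGT_QM') XMITQ('TGT_QM') DEFPSIST(YES)
EOF

wc -l < src_qm.mqsc
# with MQ installed, feed it to the queue manager: runmqsc SRC_QM < src_qm.mqsc
```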

Setting up Q Replication
1. Creating Source and Target control tables
Create the Q Capture control tables.
Launch the Replication Center. You can use the db2rc command on all platforms, or from the Windows Start menu, click Programs > IBM DB2 > General Administration Tools > Replication Center.
On the Replication Center Launchpad, click the Q replication icon. Click the Create Q Capture Control Tables button.
On the Start page of the Create Q Capture Control Tables wizard, accept the default choice of Typical, and click Next.
On the Server page, click the button next to the Q Capture server field, and select the SAMPLE database. Type a valid user ID and password for connecting to the database, accept the default Q Capture schema of ASN, and click Next.
Figure 1. Entering server information in the Create Q Capture Control Tables wizard
The term server in the Replication Center refers to a DB2 database on Linux, UNIX, and Windows, and to a DB2 subsystem on z/OS. Schema refers to an instance of the Q Capture or Q Apply program, including its set of control tables. You can have multiple schemas on a server.
On the Enter Queues page, specify the queue manager, administration queue, and restart queue for the source system. Click the button next to the Queue manager field to select the SRC_QM queue manager. Click the buttons next to the Administration queue and Restart queue fields to select the ADMINQ and RESTARTQ queues that you previously created.
Figure 2. Specifying names for the queue manager and queues
Optional: Click the Validate WebSphere MQ objects link to test if your WebSphere MQ objects can communicate properly.

Click Next. Click Finish to generate the SQL script for creating the control tables.
On the Run Now or Save SQL window, click OK. You should see a confirmation message that the scripts ran successfully.
Create the Q Apply control tables.
If the Launchpad is no longer on your screen, from the Replication Center menu, click Replication Center > Launchpad.
Click Create Q Apply Control Tables.
On the Start page of the Create Q Apply Control Tables wizard, accept the default choice of Typical, and click Next.
On the Server page, click the button next to the Q Apply server field, and select the TARGETDB database. Type a valid user ID and password for connecting to the database, accept the default Q Apply schema of ASN, and click Next.
Figure 3. Entering server information in the Create Q Apply Control Tables wizard
On the Enter Queues page, click the button next to the Queue manager field and select the TGT_QM queue manager.
Optional: Click the Validate queue manager link to test the settings of your queue manager.
Figure 4. Selecting and validating the queue manager on the Queue Manager page
Click Next. Click Finish to generate the SQL script for creating the control tables.
On the Run Now or Save SQL window, click OK. You should see a confirmation message that the scripts ran successfully.
2. Enabling the source database for replication
To turn on archive logging at the SAMPLE database:
Close the Replication Center Launchpad, and in the Replication Center object tree use the icons to expand the folders in the following order: Q replication > Definitions > Q Capture Servers > SAMPLE
Right-click the SAMPLE icon and choose Enable database for replication.
In the Turn on Archive Logging window, click OK. The Replication Center does the following tasks:
Forces all applications off the SAMPLE database
Turns on archive logging (which puts the database in a backup pending state)
Backs up the database to the default DB2 installation directory
Deletes the backup.
If you want to save the backup, select Use Configure Logging wizard to launch the DB2 wizard that lets you specify backup options.
Note: The backup process can take several minutes.
Next, you will create the replication queue map.
3. Creating Replication Queue Map
To create a replication queue map:
In the Replication Center object tree, use the icons to open folders in the following order: Q replication > Definitions > Q Capture Servers > SAMPLE > Q Capture Schemas > ASN.
Figure 1. Replication Center object tree opened to display Replication Queue Maps folder
Right-click the Replication Queue Maps folder, and click Create. The Create Replication Queue Map window opens.
On the General page, click the button next to the Q Apply server field and select the TARGETDB server. The Replication Queue Map Name field is filled in with the value SAMPLE_ASN_TO_TARGETDB_ASN, which shows the suggested naming convention.

. Enter or select your queues. Click the button next to the Send queue field and select the SENDQ queue that y ou previously created. Click the button next to the Receive queue field and select the RECVQ queue tha t you previously created. Click the button next to the Administration queue field and select the ADMINQ q ueue that you previously created. Figure 2. Entry fields on the Create Replication Queue Map notebook Optional: Click the Validate queues link to validate the queues and send test me ssages between the queues. Accept the other defaults on the Options page, and click OK. On the Run Now or Save SQL window, click OK. 4. To create a Q subscription: On the Replication Center Launchpad, click Create a Q Subscription. If the Repli cation Center Launchpad is not visible, click Replication Center > Launchpad. On the Create Q Subscriptions wizard, click Next to open the Replication page. A ccept the default replication type of Unidirectional, and click Next. On the Servers page, verify that the SAMPLE and TARGETDB databases appear in the Server fields and that the SAMPLE_ASN_TO_TARGETDB_ASN replication queue map app ears in the Replication queue map field. If any of these values are missing, cli ck the buttons next to the appropriate field to select the object. The Select Source Table window will open in front of the Source Tables page, cli ck Retrieve. Select the EMPLOYEE table from the list and click OK. On the Source Tables page, click Next. On the Target page, accept the default setting of Use a table and New table, cha nge the Table name field to EMPLOYEE, and click Next. The Replication Center aut omatically creates the target table for you and handles all of the mappings betw een columns, data types, and attributes. On the Rows and Columns page, accept the defaults and click Next. On the Unexpected Conditions page of the wizard, accept the default values and c lick Next. On the Loading Target Tables page, accept the defaults and click Next. Figure 1. 
The Review Q Subscriptions page showing a valid Q subscription Click Next to get to the Summary page, review your choices, and click Finish. Figure 2. Summary page On the Run Now or Save SQL window, click OK. You defined the mapping between the EMPLOYEE table at the SAMPLE database and th e newly created EMPLOYEE table at the TARGET database. This mapping was automati cally named EMPLOYEE0001 and is the name of the Q subscription. You also defined the queue pathway that will be used to replicate data between the two tables. Y ou are almost ready to replicate data. DB2 Memory Track #db2mtrk -i -d DB2 monitoring parameters: Buffer pool size Dynamic cache size Heap sizes Locklist and maxlocks sizes Lock mode and isolation issues Disorganized table spaces

Outdated runstats
Long running SQL
Log and table space utilization
#### Start PMS replication process
echo "starting PMS replication process\n"
#su - pmsinst -c 'cd /db2instance/db2home/pmsinst/adm; startCaptureSMDB.sh > startCaptureSMDB.log 2> startCaptureSMDB.err'
#su - pmsinst -c 'cd /db2instance/db2home/pmsinst/adm; startApplySMDB.sh > startApplySMDB.log 2> startApplySMDB.err'
#### Start local reporting replication process
echo "starting local reporting replication process\n"
#su - sminst -c 'cd /db2instance/db2home/sminst/capture; start_capture.sh > start_capture.log 2> start_capture.err'
#su - spcinst -c 'cd /db2instance/db2home/spcinst/capture; start_capture.sh > start_capture.log 2> start_capture.err'
#su - mminst -c 'cd /db2instance/db2home/mminst/capture; start_capture.sh > start_capture.log 2> start_capture.err'
#su - repinst -c 'cd /db2instance/db2home/repinst/apply; start_apply.sh > start_apply.log 2> start_apply.err'
#su - repinst -c 'cd /db2instance/db2home/repinst/apply_smdb; start_apply.sh > start_apply.log 2> start_apply.err'
#su - repinst -c 'cd /db2instance/db2home/repinst/apply_spc; start_apply.sh > start_apply.log 2> start_apply.err'
#### Stop PMS replication process
echo "stopping PMS replication process\n"
su - pmsinst -c '/db2instance/db2home/pmsinst/adm/stopApplySMDB.sh > /db2instance/db2home/pmsinst/adm/stopApplySMDB.log 2> /db2instance/db2home/pmsinst/adm/stopApplySMDB.err'
su - pmsinst -c '/db2instance/db2home/pmsinst/adm/stopCaptureSMDB.sh > /db2instance/db2home/pmsinst/adm/stopCaptureSMDB.log 2> /db2instance/db2home/pmsinst/adm/stopCaptureSMDB.err'
#### Stop local reporting replication process
echo "stopping local reporting replication process\n"
su - repinst -c '/db2instance/db2home/repinst/apply/stop_apply.sh > /db2instance/db2home/repinst/apply/stop_apply.log 2> /db2instance/db2home/repinst/apply/stop_apply.err'
su - repinst -c '/db2instance/db2home/repinst/apply_smdb/stop_apply.sh > /db2instance/db2home/repinst/apply_smdb/stop_apply.log 2> /db2instance/db2home/repinst/apply_smdb/stop_apply.err'
su - repinst -c '/db2instance/db2home/repinst/apply_spc/stop_apply.sh > /db2instance/db2home/repinst/apply_spc/stop_apply.log 2> /db2instance/db2home/repinst/apply_spc/stop_apply.err'
su - sminst -c '/db2instance/db2home/sminst/capture/stop_capture.sh > /db2instance/db2home/sminst/capture/stop_capture.log 2> /db2instance/db2home/sminst/capture/stop_capture.err'
su - spcinst -c '/db2instance/db2home/spcinst/capture/stop_capture.sh > /db2instance/db2home/spcinst/capture/stop_capture.log 2> /db2instance/db2home/spcinst/capture/stop_capture.err'
su - mminst -c '/db2instance/db2home/mminst/capture/stop_capture.sh > /db2instance/db2home/mminst/capture/stop_capture.log 2> /db2instance/db2home/mminst/capture/stop_capture.err'
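The repeated `su - <instance> -c '<script> > <log> 2> <err>'` lines above can be collapsed into a data-driven loop. This is a hypothetical refactor, not the site's actual script; `RUN=echo` keeps it a dry run that only prints the commands, so it can be exercised anywhere. Drop the `echo` to execute for real.

```shell
#!/bin/sh
# Dry-run refactor of the stop sequence: one loop over instance/dir/script triples.
RUN=echo                         # set to empty string to actually execute
BASE=/db2instance/db2home        # instance home root, as in the script above

CMDS=""
for spec in \
    "sminst:capture:stop_capture.sh" \
    "spcinst:capture:stop_capture.sh" \
    "mminst:capture:stop_capture.sh" \
    "repinst:apply:stop_apply.sh"
do
    inst=${spec%%:*}; rest=${spec#*:}        # split "<inst>:<sub>:<script>"
    sub=${rest%%:*};  script=${rest#*:}
    dir=$BASE/$inst/$sub
    log=$dir/${script%.sh}
    # same shape as the original lines: run as the instance owner, log next to the script
    cmd="su - $inst -c '$dir/$script > $log.log 2> $log.err'"
    CMDS="$CMDS$cmd
"
    $RUN "$cmd"
done
```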

Requirement
Enable password-less SSH access, so that the SiView Interface Adapter can call the Klarity SendEvent executable/script.
Steps to set up password-less SSH access
1. On the local host, generate the private/public keys
ssh-keygen -t rsa
ssh-keygen -t dsa
ssh-keygen -t rsa1
2. Copy the public keys to the remote host
scp ~/.ssh/*.pub user@remote_host:$HOME/.ssh/
3. On the remote host, append the keys
cat id_rsa.pub >> ~/.ssh/authorized_keys
cat identity.pub >> ~/.ssh/authorized_keys
cat id_dsa.pub >> ~/.ssh/authorized_keys
4. Set the rights of ~/.ssh to 700 and ~/.ssh/* to 600
cd ~/.ssh
chmod 700 .
chmod 600 *
Where it did not work for the UDB user
1. sshd_config is configured with StrictModes yes - this means the home folder rights must be at most 755.
2. But the home directory for udb (/kla-tencor/udb) for some reason is configured as 777:
drwxrwxrwx 18 udb udb 1024 Feb 1 12:05 udb
Solution
1. Create another folder called /home/udb/.ssh
2. Copy the authorized_keys into /home/udb/.ssh
3. Give ownership of /home/udb to user udb
4. Set the rights: /home/udb 755, /home/udb/.ssh 700, /home/udb/.ssh/* 600
5. Configure /etc/opt/ssh/sshd_config:
AuthorizedKeysFile /home/udb/.ssh/authorized_keys
6. Restart the SSH service
Problem determination steps
1. See the logs in /var/adm/syslog/syslog, e.g.
Feb 1 12:07:52 sscm1pa sshd[3275]: Authentication refused: bad ownership or modes for directory /home/udb
#iostat -a 2 5
# db2look -d mmdb -a -e -l -x -f -o db2look_mmdb_20080212.sql
# db2look -d smdb -a -e -l -x -f -o db2look_smdb_20080212.sql
# db2look -d mmrdb -a -e -l -x -f -o db2look_mmrdb_20080212.sql
# db2look -d spcdb -a -e -l -x -f -o db2look_spcdb_20080212.sql
# db2look -d schdb -a -e -l -x -f -o db2look_schdb_20080212.sql
# db2look -d emntdb -a -e -l -x -f -o db2look_emntdb_20080212.sql
# db2look -d edatdb -a -e -l -x -f -o db2look_edatdb_20080212.sql
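The permission fix in step 4 above can be exercised safely against a scratch directory instead of a real ~/.ssh; a minimal sketch (assumes GNU `stat` for displaying the modes):

```shell
#!/bin/sh
# Stand-in for ~/.ssh: a temp directory, so running this touches nothing real.
SSH_DIR=$(mktemp -d)/dot-ssh
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys" "$SSH_DIR/id_rsa.pub"

chmod 700 "$SSH_DIR"     # sshd StrictModes rejects group/world-writable dirs
chmod 600 "$SSH_DIR"/*   # keys and authorized_keys must be private

# Show the resulting octal modes (GNU coreutils stat assumed)
stat -c '%a %n' "$SSH_DIR" "$SSH_DIR"/*
```

The same two `chmod` lines on the real `~/.ssh` are what clears the "bad ownership or modes" refusal shown in the syslog example.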

asnqacmd apply_server=FDB_PROD apply_schema=QAPROD startq="SYSB.QPRODTATAQ"
These scripts generate the Q subscriptions. Call by:
db2 -tvf <scriptname>

db2 connect to mmdbdev user <userid> using <password>
db2 -tvf full_refresh.ddl
This section contains some troubleshooting hints and tips.
Runtime problems are most often caused by unavailable MQ objects. If problems occur, first of all check whether all MQ objects are available and running. The MQ objects are:
Queue manager(s) (local and remote setup)
Channels (remote setup only)
Listeners (remote setup only)
If all objects are up and running but it still does not replicate, check the MQ messages. Use tools like the WebSphere MQ Explorer (Windows only) or MQSC commands (like DISPLAY QUEUE) to display the number of messages in a queue.
If Capture cannot put messages into the send queue, and if "Stop Capture" was defined as the error action in the queue map, Capture stops. A reason could be that the queue already contains the maximum number of messages allowed (CURDEPTH = MAXDEPTH). In this case, increase the maximum number of messages allowed, or investigate why Apply does not fetch the messages from the queue.
Apply detects missing messages automatically and only continues to process and replicate changed data when the missing messages reappear. Mainly, two reasons cause Apply to detect missing messages and wait:
Messages went to the DEAD LETTER queue
Messages were cleared from a queue
How to re-activate an inactive data queue?
asnqacmd apply_server=server_name apply_schema=schema startq=receive_queue_name
For example:
asnqacmd apply_server=FDB_DEV apply_schema=QFDBDEV1 startq="SYSB.QMRDEVTATAQ"
#rmdev -dl proc2
# cfgmgr -v
# lsdev -Cc processor
temp root passwd
1. hprrolim/63223ASewg
2. $ nsu
3. passwd: 63223asEWQ
db2 "UPDATE QAPROD.IBMQREP_RECVQUEUES SET memory_limit = 64, description = '' WHERE repqmapname = 'MMRDB_QCPROD_TO_FDB_PROD_QAPROD'";
SSA5DBP2:root:/>vmo -a | grep free
maxfree = 6528
minfree = 5760
SSA5DBP2:root:/>vmo -a | grep perm
maxperm = 5507607
maxperm% = 80
minperm = 1376899
minperm% = 20
strict_maxperm = 0
SSA5DBP2:root:/>vmo -p -o maxperm%=90
Setting maxperm% to 90 in nextboot file
Setting maxperm% to 90
SSA5DBP2:root:/>vmo -p -o minperm%=3
Setting minperm% to 3 in nextboot file
Setting minperm% to 3
SSA5DBP2:root:/>vmo -a | grep lru
lru_file_repage = 1
lru_poll_interval = 10
lrubucket = 131072
SSA5DBP2:root:/>vmo -p -o lru_file_repage=0
Setting lru_file_repage to 0 in nextboot file
Setting lru_file_repage to 0
SSA5DBP2:root:/>vmo -a | grep maxcl
maxclient% = 80
strict_maxclient = 1
SSA5DBP2:root:/>vmo -p -o maxclient%=90
Setting maxclient% to 90 in nextboot file
Setting maxclient% to 90
SSA5DBP2:root:/>vmo -a | grep page_st
page_steal_method = 0
SSA5DBP2:root:/>vmo -r -o page_steal_method=1
Setting page_steal_method to 1 in nextboot file
Warning: some changes will take effect only after a bosboot and a reboot

IBMQREP_APPLYENQ     QAPROD   T   2008-01-23-20.11.34.903142
IBMQREP_APPLYMON     QAPROD   T   2008-01-23-20.11.34.686821
IBMQREP_APPLYPARMS   QAPROD   T   2008-01-23-20.11.34.175648
IBMQREP_APPLYTRACE   QAPROD   T   2008-01-23-20.11.34.476099
IBMQREP_DONEMSG      QAPROD   N   2008-01-23-20.12.32.389532
IBMQREP_EXCEPTIONS   QAPROD   N   2008-01-23-20.12.27.460750
IBMQREP_RECVQUEUES   QAPROD   N   2008-01-23-20.11.48.240450
IBMQREP_SAVERI       QAPROD   N   2008-01-23-20.12.38.449128
IBMQREP_SPILLEDROW   QAPROD   N   2008-01-23-20.12.43.297157
IBMQREP_SPILLQS      QAPROD   N   2008-01-23-20.12.21.015970
IBMQREP_TARGETS      QAPROD   N   2008-01-23-20.12.07.149171
IBMQREP_TRG_COLS     QAPROD   N   2008-01-23-20.12.15.817854

db2 "runstats on table QAPROD.IBMQREP_DONEMSG WITH DISTRIBUTION"
Run bosboot now? y
bosboot: Boot image is 25202 512 byte blocks.
Warning: changes will take effect only at next reboot
CONNECT TO FDB_PROD USER "db2inst2";
CALL SYSPROC.NNSTAT('FBSIVBID','SIVIEW','FRCODE',NULL,NULL,0,NULL,?);
CONNECT RESET;
vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
Let strict_maxperm and strict_maxclient default
vmo -p -o minfree=960 (960 = minimum; multiply by number of mempools)
vmo -p -o maxfree=1088 (1088 = minimum; multiply by number of mempools)
vmo -p -o lru_file_repage=0
vmo -p -o lru_poll_interval=10
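The vmo recommendations above can be applied in one loop. A dry-run sketch: `VMO="echo vmo"` prints each command instead of executing it, so it runs off-AIX; on a real LPAR, set `VMO=vmo` (as root) and the same loop persists the tunables to the nextboot file via `-p`.

```shell
#!/bin/sh
# Dry-run application of the recommended VMM tunables listed above.
VMO="echo vmo"     # replace with VMO=vmo on a real AIX host

VMO_OUT=$(
  for opt in minperm%=3 maxperm%=90 maxclient%=90 \
             minfree=960 maxfree=1088 lru_file_repage=0 lru_poll_interval=10
  do
      $VMO -p -o "$opt"    # -p = apply now and persist across reboot
  done
)
echo "$VMO_OUT"
```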

amqsputc SYSB.QMRDEVERPTATAQ QMRERPDEV

set MQSERVER=QMRDEVERP.SYSACHNL/TCP/'10.40.24.204(1434)'
asnqacmd apply_server=FDB_ERP apply_schema=QAERP stopq=SYSB.QMRDEVERPTATAQ
asnqacmd apply_server=FDB_ERP apply_schema=QAERP startq=SYSB.QMRDEVERPTATAQ
asnqanalyze -db FDB_ERP -la detailed

SSA5DBP1:spcinst:/db2instance/db2home/spcinst>db2 update db cfg using LOGPRIMARY 5

DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully. SQL1363W One or more of the parameters submitted for immediate modification were not changed dynamically. For these configuration parameters, all applications must disconnect from this database before the changes become effective. Create a mount_klarityresults.bat batch file with the following command net use z: \\10.40.35.212\klarityresults /user:siview siview Then put this batch file in the Start > Programs > Startup. #ipclean Why is Capture failing with ASN0005E/SQL2656N messages? Answer The messages in the Capture log file may look similar to these messages: === 2007-07-23-07.54.33.714025 <logrd8::readTheLog> ASN8041D "Capture" : "ASN" : "WorkerThread" : db2LogRead API is sending us backwards in the DB2 Log: First LSN is "0000:0000:004d:e01b:ffb3" while Next Start LSN is "0000:0000:0000:0000:0000" 2007-07-23-07.54.33.714074 <logrd::readTheLog> ASN0005E CAPTURE "ASN" : "WorkerThread". The Capture program encountered an error when reading the DB2 log. The log sequence number is "0000:0000:004D: E01B:FFB3", the SQLCODE is "-2656", and the reason code is "". 2007-07-23-07.54.33.714074 <logrd::readTheLog> ASN8999D "Capture" : "ASN" : "WorkerThread" : 2007-07-23-07.54.33.714145 <CWorkerMain> ASN0589I "Capture" : "ASN" : "WorkerThread" The program received an unexpected return code "910" from routine "logtxrdr::getTrans". 2007-07-23-07.54.33.714201 <CWorkerMain> ASN0123I CAPTURE "ASN" : "WorkerThread". At program termination, the highest log sequence number of a successfully captured log record is "469F:ED0C:0000:000D:0000" and the lowest log sequence number of a record still to be committed is "0000:0000:004D:E01B:FFB3". === These errors indicate that the log file Capture needs is unavailable. The instructions below should resolve the problem: === 1. 
SELECT MIN_INFLIGHTSEQ from ASN.IBMSNAP_RESTART
Result may look something like this:
MIN_INFLIGHTSEQ
-----------------------
x'00000000019CACC03524'
x'000000000001FCDC3F62'
2. Run db2flsn to determine which log file Capture needs. For example:
db2flsn 019CACC03524
db2flsn 0001FCDC3F62
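The query above returns values in DB2's `x'...'` hex-literal notation, while `db2flsn` takes the bare hex string. A small helper sketch (the function name is my own) strips the wrapper:

```shell
#!/bin/sh
# Turn an x'...' SYNCHPOINT/LSN literal into the bare hex db2flsn expects.
lsn_hex() {
    # strip the leading x' and the trailing '
    echo "$1" | sed -e "s/^x'//" -e "s/'\$//"
}

lsn_hex "x'00000000019CACC03524'"    # -> 00000000019CACC03524
```

Typical use: `db2flsn "$(lsn_hex "x'00000000019CACC03524'")"`.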

db2flsn 005DD6CF8828
Note: Either copy SQLOGCTL.LFH (log header control file) into the current directory or cd into the directory where it resides before running db2flsn. The LSN must be 12 bytes.
3. Copy this log file and subsequent files back into the active log directory.
4. Restart Capture.
===
The section titled "Retaining log data (Linux, UNIX, Windows, z/OS)" in Chapter 14 of the "SQL Replication Guide and Reference V8.2" is helpful to understand how long to keep DB2 logs. This manual can be found at
#asnanalyze -db mmrdb -la detailed
# asnqacmd apply_server=fdb_erp apply_schema=QAERP startq="SYSB.QMRDEVERPTATAQ"
route add 0.0.0.0 mask 0.0.0.0 192.168.1.1
route add 10.40.0.0 mask 255.255.0.0 10.40.110.1
route add 80.65.0.0 mask 255.255.0.0 10.40.110.1
asnqmfmt SYSA.QMRDEVRESTART QMRDEV
asnqcap capture_server=mmdb capture_schema=QCDEV LSN=0000:0000:0005:71aa:b75c MAXCMTSEQ=48a2:fd4c:0000:0005:0000
asnqcap capture_server=mmrdb capture_schema=QCERP LSN=0000:0000:001C:0383:C862 MAXCMTSEQ=48A9:55E7:0000:0003:0000
asntrc -help
asntrc on -db mmdb -app -qualifier MIXQUAL -b 5M
"asncap capture_server=sourcedb capture_schema=ASN debug=y logstdout > cap.trc"

db2 uncatalog db SPCDB
db2 catalog tcpip node MMRDBSRV remote SSA5DB server 58010
# db2 catalog db MMRDB as MMRDB at node MMRDBSRV authentication server
db2 catalog db SPCDB as SPCDB on path '/home/tstinst/SPCDB' authentication server
db2 catalog db MMDB as MMDB on path '/home/tstinst/MMDB' authentication server
Checking HADR log shipping (from a chat with Chaitali Nandan, IBM):
1. Check the active log paths on primary and standby to verify whether logs are being shipped correctly.
2. db2pd -alldbs -hadr
3. db2 get snapshot for all on <dbname>
This will give you:
Primary log position(file, page, LSN) = S0000001.LOG, 0, 0000000000BB8000
Standby log position(file, page, LSN) = S0000000.LOG, 1, 00000000007D123E
In option 2 above, the values to observe are:
PrimaryFile   PrimaryPg  PrimaryLSN           StandByFile   StandByPg  StandByLSN
S0000001.LOG  0          0x000000BB80000000   S0000000.LOG  1          0x0000007D123E0000
check IPC key
#ipcs | grep xxxx
remove IPC key
#ipcrm -q ID
#db2 bind @udbubind.lst datetime ISO blocking all grant public
(need to cd to /home/db2inst1/sqllib/bnd first)
#list wmb components running
mqsilist
# create broker db
mqsicreatedb BRKDB1 -i userid -a password -e dbtype
#create configuration manager
mqsicreateconfigmgr CONFIG_MANAGER -i userid -a password -q queuename
#start configuration manager
mqsistart CONFIG_MANAGER
#start broker
mqsistart BROKER1
#change configuration manager
mqsichangeconfigmgr -a <ServicePassword> -i <ServiceUserID> -j <MaxJVMHeapSize>
To verify all files that comprise the X11.fnt package, enter:
#lppchk -c X11.fnt
To verify the symbolic links of all software products whose names begin with X11, enter:
#lppchk -l 'X11*'
To verify that all filesets have all required requisites and are completely installed, enter:
#lppchk -v
Determine which filesets are missing using the command
#oslevel -rl 5100-05
An alternative to oslevel -r: you can use instfix to determine the missing filesets as shown in the following example:
#instfix -ciqv | grep 5200-02_AIX_ML | grep :-:
#lsvg -p rootvg (lists physical partition allocation for each section)
#lsvg rootvg (lists rootvg information)

#lsvg -l rootvg
#bootlist -m normal -o (this will list the normal boot list)
#unmirrorvg rootvg hdiskx
If there are remaining LVs on the alternate disk, migrate logical volumes using:
# migratepv -l lv_name source_disk target_disk
#shutdown -Fr
#lppchk -v
#oslevel -r
#instfix -I | grep ML or TL
#errpt | more
#bootlist -m normal -o (the disk with rootvg must be the first on the list)
List VG details: lsvg <vgname>
List LVs within VG: lsvg -l <vgname>
List PV within VG: lsvg -p <vgname>
Lists physical group names: lspv
List PV details: lspv hdiskN
List LVs within PV: lspv -l hdiskN
List PP within PV: lspv -p hdiskN
Lists LV names: lsvg -l <vgname>
List LV details: lslv <lvname>
List LVs within PV: lslv -l <lvname>
List PP within PV: lslv -p <pvname>
Show mirrored LV: lslv -m <lvname>
Device Commands
#lsdev The "lsdev -Cc disk" command can be used to display what disks are recognized by the system. This will display disks whether they have been initialized as PVs or not.
#rmdev Remove a device from the system configuration. This is done to remove the device configuration when a faulty disk is replaced, or if a disk is moved to another system.
#cfgmgr Re-scan the system for devices that were not available at system start up. This can be done to recognize hot-pluggable drives that were added after the system was up, or can be used to recognize devices such as external tape or disk drives that were not powered on when the system was booted up.
Physical Volume Commands
#pvcreate Initialize a disk as a Physical Volume. This must be done prior to adding a disk to a volume group.
#lspv Display the Physical Volumes on a system. When given a PV name as a parameter, it gives detailed information about that PV.
Volume Group Commands
#lsvg List information about volume groups. When given without parameters, it lists all volume groups on the system. When given a volume group name, it gives detailed information about that VG, including information about what logical volumes are in that volume group.
#chvg Change volume group characteristics.
#mkvg Create a volume group.
#extendvg Add PVs to a volume group.
#reducevg Remove unused PVs from a volume group. If logical volumes currently reside on the PV in question, the "migratepv" command can be used to move the LV onto another disk in the VG, provided there is enough space to do so.
#exportvg Remove knowledge of a volume group from the kernel. This can be used on removable disks such as optical disks or Iomega Jaz drives prior to removing the drive from a system, or can be used when moving a disk or disks from one system to another. The data in the VG is left intact.
#importvg Re-initializes kernel knowledge of a volume group that had been previously exported. The data on the VG is left intact, and becomes accessible once it has been imported. The existing logical volume names are used unless they conflict with logical volumes currently in use on the system.
#mirrorvg Creates mirror volumes for all volumes in a volume group.
#unmirrorvg Removes a mirror from each volume in a volume group.
Logical Volume Commands
#lslv Lists information about logical volumes. A logical volume name must be specified. In order to list information about all logical volumes, you would need to use "lsvg" to list all volume groups, and then use "lsvg -l <vgname>" on each volume group. You could then use "lslv <lvname>" to list the detailed information about each logical volume found.
#mklv Create a logical volume.
#rmlv Delete a logical volume.
#extendlv Increase the size of a logical volume.
#chlv Change the characteristics of a logical volume.
#mklvcopy Mirror a logical volume. This allows for RAID-1 in software.
#rmlvcopy Removes a mirror copy of a logical volume.
File System Commands
#crfs Create a file system. This command can be used to create a file system on an existing logical volume, or it can be used as an all-in-one command to create both a logical volume and file system at once.
#chfs Change file system characteristics. This can be used to increase the size of a file system.
#rmfs Remove a filesystem, its associated logical volume, and its entry in /etc/filesystems.
Paging Space Commands
#lsps Lists information about paging space volumes.
To generate a system backup and create an /image.data file (generated by the mkszfile command) to a tape device named /dev/rmt0, enter:
#mksysb -i /dev/rmt0
Create a boot image on the newly added disk
#bosboot -ad /dev/hdiskx
Starting the Capture program from a known point in the DB2 log
Stop the Capture and Apply programs.
Set the Capture RETENTION_LIMIT and LAG_LIMIT values to their maximum, as shown in the following SQL statement:
UPDATE ASN.IBMSNAP_CAPPARMS SET RETENTION_LIMIT=99999,LAG_LIMIT=99999;
If the SYNCHPOINT values in the IBMSNAP_UOW, CD, IBMSNAP_REGISTER, and IBMSNAP_PRUNCNTL tables are higher than the LSN value from which you want to start Capture, use SQL to set the value to the point from which you want to start recapturing transactions. In the following example, 00000006F5638E600000 is the log sequence number and 2009-05-05-09.55.43.316970 is the timestamp from which you send the Capture program back to start reading the log.
UPDATE ASN.IBMSNAP_REGISTER SET SYNCHPOINT = x'00000006F5638E600000', SYNCHTIME=TIMESTAMP('2009-05-05-09.55.43.316970');
UPDATE ASN.IBMSNAP_REGISTER SET CD_OLD_SYNCHPOINT=x'00000006F5638E600000', CD_NEW_SYNCHPOINT=x'00000006F5638E600000', CCD_OLD_SYNCHPOINT=x'00000006F5638E600000' WHERE GLOBAL_RECORD='N';
UPDATE ASN.IBMSNAP_SUBS_SET SET
LASTRUN=TIMESTAMP('2009-05-05-09.55.43.316970'),
LASTSUCCESS=TIMESTAMP('2009-05-05-09.55.43.316970'),
SYNCHPOINT=x'00000006F5638E600000',
SYNCHTIME=TIMESTAMP('2009-05-05-09.55.43.316970')
WHERE WHOS_ON_FIRST='S' AND SET_NAME='BACK1';
UPDATE ASN.IBMSNAP_PRUNCNTL SET SYNCHPOINT=x'00000006F5638E600000', SYNCHTIME=TIMESTAMP('2009-05-05-09.55.43.316970');
UPDATE ASN.IBMSNAP_PRUNE_SET SET SYNCHPOINT=x'00000006F5638E600000', SYNCHTIME=TIMESTAMP('2009-05-05-09.55.43.316970');
DELETE FROM ASN.IBMSNAP_UOW;
INSERT INTO ASN.IBMSNAP_RESTART (MAX_COMMITSEQ, MIN_INFLIGHTSEQ, MAX_COMMIT_TIME, CURR_COMMIT_TIME, CAPTURE_FIRST_SEQ) VALUES (x'00000006F5638E600000', x'00000006F5638E600000', '2009-05-05-09.55.43.316970', '2009-05-05-09.55.43.316970', x'00000006F5638E600000');
db2diag.log file
#db2instance\db2home\repinst\sqllib\db2dump
STEP 1: connect to SSA5DB, log in as mminst
db2 connect to MMDB user siview using siview;
db2 export to FHWCPHS.ixf of ixf "SELECT * FROM FHWCPHS WHERE CLAIM_TIME > '2010-04-04-09.00.00.000000' and CLAIM_TIME < '2010-04-05-15.00.00.000000'"
db2 export to FHWLTHS.ixf of ixf "SELECT * FROM FHWLTHS WHERE CLAIM_TIME > '2010-04-04-09.00.00.000000' and CLAIM_TIME < '2010-04-05-15.00.00.000000'"
db2 terminate;
STEP 2: connect to SSA5DB, log in as repinst
db2 connect to MMRDB user siview using siview;
db2 "DELETE FROM FHWCPHS WHERE CLAIM_TIME > '2010-04-04-09.00.00.000000' and CLAIM_TIME < '2010-04-05-15.00.00.000000'"
db2 "DELETE FROM FHWLTHS WHERE CLAIM_TIME > '2010-04-04-09.00.00.000000' and CLAIM_TIME < '2010-04-05-15.00.00.000000'"
db2 import from FHWCPHS.ixf of ixf commitcount 20000 "INSERT INTO FHWCPHS"
db2 import from FHWLTHS.ixf of ixf commitcount 20000 "INSERT INTO FHWLTHS"
db2 terminate;
db2 restore db mmrdb use tsm taken at 20100405204018 to /home/mmrdb without prompting
db2 rollforward db mmrdb to end of logs and complete
db2 "SELECT SUBSTR(TBS.TBSPACE,1,24) AS TBSPACE, TBS.TBSPACEID, SUBSTR(TB.TABNAME,1,24) AS TABNAME, SUBSTR(TB.TABSCHEMA,1,24) AS TABSCHEMA FROM SYSCAT.TABLESPACES TBS, SYSCAT.TABLES TB WHERE TBS.TBSPACEID=TB.TBSPACEID AND TBS.TBSPACE IN ('USERSPACE1','QASNTBSP')" > tables_tablespaces.out
Please run the following:
db2pd -db MMRDB -tablespaces -alldbp > db2pd-tablespaces.out
db2adutl query db MMRDB VERIFY HEADERONLY TAKEN AT 20100405204018
db2adutl query db MMRDB VERIFY TABLESPACESONLY TAKEN AT 20100405204018
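The STEP 1/STEP 2 export-delete-import sequence above can be generalized to any list of tables and claim-time window. A hypothetical generator sketch: it only prints the `db2` command lines (so it needs no database); pipe its output to a file and run it with `sh` after review.

```shell
#!/bin/sh
# Generate export/delete/import commands for a claim-time window, one table at a time.
FROM='2010-04-04-09.00.00.000000'    # window start (CLAIM_TIME >)
TO='2010-04-05-15.00.00.000000'      # window end   (CLAIM_TIME <)

gen_copy_cmds() {
    for t in "$@"; do
        echo "db2 export to $t.ixf of ixf \"SELECT * FROM $t WHERE CLAIM_TIME > '$FROM' and CLAIM_TIME < '$TO'\""
        echo "db2 \"DELETE FROM $t WHERE CLAIM_TIME > '$FROM' and CLAIM_TIME < '$TO'\""
        echo "db2 import from $t.ixf of ixf commitcount 20000 \"INSERT INTO $t\""
    done
}

gen_copy_cmds FHWCPHS FHWLTHS    # three commands per table
```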

db2adutl VERIFY HEADERONLY TAKEN AT 20100405204018 DATABASE MMRDB WITHOUT PROMPTING > verify1.logs
db2adutl QUERY FULL DATABASE MMRDB > query1.logs
db2adutl QUERY TABLESPACE DATABASE MMRDB > query2.logs
db2adutl VERIFY TABLESPACESONLY TAKEN AT 20100405204018 DATABASE MMRDB WITHOUT PROMPTING > verify2.logs
#rollforward to a point in time before the crash
db2 rollforward db mmrdb to 2010-04-05-22.17.50 and complete 2>> restore_again.logs 1>> restore_again.logs
date >> restore_again.logs
# db2support . -d mmrdb -c -g -s
# db2look -d mmrdb -a -e -l -x -f -o db2look_mmrdb.sql
Exporting a virtual disk from VMware ESX Server 1:
vmkfstools -e /vmfolder/exportvmdiskfilename.vmdk /vmfs/vmdiskfile.vmdk
Importing a virtual disk inside VMware ESX Server 2:
vmkfstools -i /vmfolder/vmdiskfile.vmdk /vmfs/vmserver1.vmdk
Once the VMDK file is copied over to ESX Server 2, you can register the virtual machine using:
vmware-cmd -s register <config_file_path>
determine tablespace from table name
#db2 "SELECT TABLES.TABSCHEMA, TABLES.TABNAME, TBSPACES.TBSPACE FROM SYSCAT.TABLES AS TABLES, SYSCAT.TABLESPACES AS TBSPACES WHERE TABLES.TBSPACEID = TBSPACES.TBSPACEID AND TABNAME = 'SALES'"
#db2mtrk -d -v
Part 1: db2mtrk -i -d -v and db2pd -dbptnmem output are two different reports.
db2pd -dbptnmem: high-level, all areas, for monitoring instance memory usage; includes allowance for configuration, not just actual system memory usage.
db2mtrk (memory tracker command): provides a complete report of memory status for instances, databases, agents, and applications. This command outputs the following memory pool allocation information:
Current size
Maximum size (hard limit)
Largest size (high water mark)
Type (identifier indicating function for which memory will be used)
Agent who allocated pool (only if the pool is private)
Application

Part 2 :

set db2diag level to 4
db2diag -A to start a new db2diag.log
db2 backup db db2dbmon
revert to diaglevel 4
send the latest db2diag.log

- run db2dart /DB of db2dbmon, and send the dart report
Part 3 :
db2trc on -f X.dmp
db2 backup db db2dbmon and get the error
db2trc off
db2trc flw X.dmp X.flw
db2trc fmt X.dmp X.fmt
send X.flw and X.fmt
show and sort directory size
$ du -k | sort -nr | more
restart send queue
$asnqacmd apply_server=FDB_ERP apply_schema=QAERP startq=SYSB.QMRDEVERPTATAQ
log file in ERP
$cd $APPLCSF/$LOG
To access a USB flash drive you can modify your /etc/fstab file like this:
/dev/sda1 /mnt/usb_flash vfat noauto,users,rw,umask=0 0 0
Then as root do this: [ mount -a ]
This is what my /etc/fstab looks like on my system, which is 2.4.20-6 (Red Hat 9):
LABEL=/            /              ext3        defaults                1 1
LABEL=/boot        /boot          ext3        defaults                1 2
none               /dev/pts       devpts      gid=5,mode=620          0 0
LABEL=/home        /home          ext3        defaults                1 2
LABEL=/opt         /opt           ext3        defaults                1 2
none               /proc          proc        defaults                0 0
none               /dev/shm       tmpfs       defaults                0 0
LABEL=/tmp         /tmp           ext3        defaults                1 2
LABEL=/usr         /usr           ext3        defaults                1 2
LABEL=/usr/local   /usr/local     ext3        defaults                1 2
LABEL=/var         /var           ext3        defaults                1 2
/dev/hda8          swap           swap        defaults                0 0
/dev/cdrom         /mnt/cdrom     udf,iso9660 noauto,owner,kudzu,ro   0 0
/dev/fd0           /mnt/floppy    auto        noauto,owner,kudzu      0 0
/dev/sda           /mnt/cam       vfat        noauto,user             0 0
/dev/sda1          /mnt/usb_flash vfat        noauto,users,rw,umask=0 0 0
install from .tgz file
# tar -zxvf ibm-java-sdk-6.0-5.0-linux-<arch>.tgz
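The `du -k | sort -nr` one-liner above (largest directories first) can be demonstrated against a scratch tree so the result is predictable:

```shell
#!/bin/sh
# Build a small tree with one big and one small directory, then sort by size.
D=$(mktemp -d)
mkdir -p "$D/big" "$D/small"
dd if=/dev/zero of="$D/big/f" bs=1024 count=64 2>/dev/null    # ~64 KB file
dd if=/dev/zero of="$D/small/f" bs=1024 count=4 2>/dev/null   # ~4 KB file

# Largest entries first; "." (the grand total) tops the list, then ./big, then ./small.
(cd "$D" && du -k | sort -nr)
```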

install from rpm file
# rpm -ivh TIVsm-API-x.x.x.rpm
start vnc server
#vncserver :1
remote connect to vncserver
#vncviewer 10.0.7.64:5901
Deactivate all the Q subscriptions that feed the receive queue.
# db2 "INSERT INTO QCERP.IBMQREP_SIGNAL (signal_type, signal_subtype, signal_input_in) VALUES ('CMD', 'CAPSTOP', 'XXEP_EVENTS_DATA_QREP')"
Stop QCapture
Delete all messages from all the send queues and the receive queue that were involved in the invalid setup.
#clear qlocal (SYSB.QMRDEVERPTATAQ)
Start QCapture
Issue the startq command so that the Q Apply program starts reading from the receive queue again
#asnqacmd apply_server=FDB_ERP apply_schema=QAERP startq=SYSB.QMRDEVERPTATAQ
#asnqacmd apply_server=FDB_ERP apply_schema=QAERP stopq=SYSB.QMRDEVERPTATAQ
Activate all the Q subscriptions
# db2 "INSERT INTO QCERP.IBMQREP_SIGNAL (signal_type, signal_subtype, signal_input_in) VALUES ('CMD', 'CAPSTART', 'XXEP_EVENTS_DATA_QREP')"
Check the capture log; if you see the following message then clear the target table:
ASN7017I "Q Capture" : "QCERP" : "WorkerThread" : The target table "ORAERP.XXEP_SIVIEW_EVENTS_DATA" is ready to be loaded from source table "XXEP.XXEP_EVENTS_DATA" for publication or Q subscription "XXEP_EVENTS_DATA_QREP".
Note for insert loaddone
# db2 "INSERT INTO QCERP.IBMQREP_SIGNAL (signal_type, signal_subtype, signal_input_in) VALUES ('CMD', 'LOADDONE', 'XXEP_EVENTS_DATA_QREP')"
Note: May consider cold start if not working, or stop the QManager.
db2trc on -f trace.dmp
db2icrt -d -s ese -u eposint eposinst
db2trc off
db2trc fmt trace.dmp trace.fmt
db2trc flw trace.dmp trace.flw
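The Q subscription reset recipe described above (CAPSTOP, drain queues, CAPSTART, startq) can be collected into one dry-run sequence. This is a hypothetical wrapper, not the site's script; `RUN=echo` prints each command so no DB2 or MQ is needed to try it.

```shell
#!/bin/sh
# Dry-run sequencing of the Q subscription reactivation steps.
RUN=echo                            # clear to execute for real
SCHEMA=QCERP                        # Q Capture schema, as above
QSUB=XXEP_EVENTS_DATA_QREP          # Q subscription name
RQ=SYSB.QMRDEVERPTATAQ              # receive queue

signal() {  # insert a CAPSTOP / CAPSTART / LOADDONE signal for the Q subscription
    $RUN db2 "INSERT INTO $SCHEMA.IBMQREP_SIGNAL (signal_type, signal_subtype, signal_input_in) VALUES ('CMD', '$1', '$QSUB')"
}

SEQ=$(
    signal CAPSTOP                                                  # 1. deactivate
    $RUN asnqacmd apply_server=FDB_ERP apply_schema=QAERP stopq=$RQ
    $RUN "stop Q Capture, clear qlocal($RQ) on the queue manager, restart Q Capture"
    signal CAPSTART                                                 # 2. reactivate
    $RUN asnqacmd apply_server=FDB_ERP apply_schema=QAERP startq=$RQ
)
echo "$SEQ"
```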

truss -aelf -o db2icrt.out /opt/IBM/db2/V9.5/instance/db2icrt -d -s ese -u eposint eposinst
truss -aelf -o db2ilist.out /opt/IBM/db2/V9.5/instance/db2ilist

#dismq -x
# db2 force application all; db2 backup database eposdb to .
# db2 force application all; db2 restore database eposdb
Find out whether table statistics were recently updated:
#select stats_time from syscat.tables where tabname in ('A','B')
Find cardinality of indexes:
#select firstkeycard, first2keycard from syscat.indexes where indname='FX1'
find unused index
$db2pd -db db-name -tcbstats index
From the result, look for the "TCB Index Stats:" header and the "Scans" column; if the count in Scans is 0, that means the index is not used.
db2 snapshot to get locks
$db2 get snapshot for locks for application agentid appwtfrlocksid
$db2 get snapshot for locks on database_name
default timestamp format
yyyy-MM-dd-HH.mm.ss.ffffff
su - ldapdb2
db2 connect to ldapdb2
db2 reorgchk update statistics on table all
db2 terminate
show value of table log index build
db2cat -d <db name> -n <table name> -s <schema name> -o <output file>
db2pd -dbptnmem
db2pd -db TEST -logs
Instance level:
UPDATE DATABASE MANAGER CONFIGURATION USING dft_mon_bufpool ON dft_mon_lock ON dft_mon_sort ON dft_mon_stmt ON dft_mon_table ON dft_mon_uow ON IMMEDIATE
Turning switches on for a session:
#UPDATE MONITOR SWITCHES USING bufferpool ON
db2 get snapshot
# db2 GET SNAPSHOT FOR ALL BUFFERPOOLS
# db2 GET SNAPSHOT FOR BUFFERPOOLS ON database_name
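The unused-index check above (`db2pd -db <db> -tcbstats index`, Scans column = 0) can be scripted with awk. A sketch: the sample input below is an illustrative, simplified layout — real `db2pd` output has more columns and varies by version, so the column position (`$3` here) must be adjusted to match the actual "TCB Index Stats:" header.

```shell
#!/bin/sh
# Report index names whose Scans column is 0 (assumed columns: id, name, scans).
find_unused() {
    awk 'NR > 1 && $3 == 0 { print $2 }'    # skip header; $2 = index, $3 = Scans
}

# Illustrative sample standing in for db2pd -db <db> -tcbstats index output
find_unused <<'EOF'
TbspaceID IndexName Scans
3 FX1 0
3 FX2 152
4 PK_FHWCPHS 0
EOF
```

On a live system: `db2pd -db MMDB -tcbstats index | find_unused` (after adjusting the column number).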

