
Bits and Pieces

Installing Oracle 11gR2 on 64-bit OEL or Red Hat 5.x requires these RPMs (a quick check script follows the list):

libaio-devel-0.3.106 (i386)
libaio-devel-0.3.106 (x86_64)
sysstat-7.0.2
unixODBC-2.2.11 (i386)
unixODBC-2.2.11 (x86_64)
unixODBC-devel-2.2.11 (i386)
unixODBC-devel-2.2.11 (x86_64)
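A quick way to verify and install the missing packages is with rpm and yum. This is a minimal sketch; the package list above is the requirement, while the exact name.arch spellings and the assumption that a yum repository carrying these RPMs is configured are mine.

#!/bin/bash
# Check which of the required packages are missing, then install them with yum.
PKGS="libaio-devel.i386 libaio-devel.x86_64 sysstat unixODBC.i386 unixODBC.x86_64 unixODBC-devel.i386 unixODBC-devel.x86_64"
MISSING=""
for p in $PKGS
do
    rpm -q $p > /dev/null 2>&1 || MISSING="$MISSING $p"
done
if [ -n "$MISSING" ]; then
    echo "Installing:$MISSING"
    yum -y install $MISSING
else
    echo "All required RPMs are already installed."
fi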

Installing Oracle 10gR2 on 64-bit Linux requires these RPMs; otherwise you will get this error when running the installer: /libawt.so: libXp.so.6: cannot open shared object file: No such file or directory

libXp-1.0.0-8.1.el5.x86_64.rpm
libXp-devel-1.0.0-8.1.el5.i386.rpm

Resolving the APP-FND-01931 error
If you get an error like the following and you are on Windows 7 with IE9:
Error: APP-FND-01931: Your session is no longer valid or your logon information could not be reestablished from your session
In Internet Explorer, go to Security > Custom Level > Scripting > Enable XSS filter and set it to Disable.

ggsci: libclntsh.so.11.1: cannot restore segment prot after reloc: Permission denied
After installing GoldenGate 11g on Linux x86-64, we were getting this error whenever we issued the ggsci command.

We found that this was because, when the machine was built, the System Administrator had configured SELinux to enforcing. To fix this we can do the following: as a temporary fix, run /usr/sbin/setenforce 0 as root. Or, to make the change permanent, edit the /etc/selinux/config file and change SELINUX=enforcing to SELINUX=disabled. We may need to reboot the machine for this change to take effect.
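A minimal sketch of both fixes is below; run it as root. The sed edit of /etc/selinux/config is only one way of making the change permanent (you can just as well edit the file by hand).

#!/bin/bash
# Show the current SELinux mode
/usr/sbin/getenforce
# Temporary fix: switch to permissive mode until the next reboot
/usr/sbin/setenforce 0
# Permanent fix: change enforcing to disabled in the SELinux config file (backup first)
cp /etc/selinux/config /etc/selinux/config.bak
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config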

WARNING OGG-01194 EXTRACT task LOAD2 abended: There is no trail to reposition to, when doing a direct load task. The cause was that the GoldenGate user did not have the privileges needed to insert rows into the target database. It was solved by granting the INSERT ANY TABLE privilege: grant insert any table to ggs_owner;

Error: The requested URL was not found, or cannot be served at this time
When running custom reports in an 11.5.10.2 environment:
Check that the DISPLAY variable is set.
Check that REPORTS60_PATH has the location of the custom reports' rdf files.

Issues after upgrading an 11.5.10.2 Apps database to 11g Release 2, encountered while running adbldxml.pl and adconfig on the database tier

JRE_TOP not found at its desired location /u01/app/oracle/product/11.2.0/dbhome_1/jre/1.1.8


Needed to point jtop to the 10g ORACLE_HOME jre location, as the 11g ORACLE_HOME does not have the jre directory: perl adbldxml.pl tier=db appsuser=apps_applfnd jtop=/u01/app/oracle/product/10.2.0/db_1/jre/1.4.2

ERROR: Unable to set CLASSPATH. JDBC driver jars and zips are missing in the /u01/app/oracle/product/11.2.0/dbhome_1/jdbc/lib directory.
Change the ADJVAPRG variable defined in the environment file on the database server to point to the jdk directory rather than the standalone jre directory, which is not present in the 11g ORACLE_HOME: ADJVAPRG=/u01/app/oracle/product/11.2.0/dbhome_1/jdk/jre/bin/java

Perl 5.8 and 5.10 version conflicts: Perl lib version (5.10.0) doesn't match executable version (v5.8.8)
11g comes with Perl 5.10, while 10g and 11.5.10 use Perl 5.8. Autoconfig and adbldxml.pl exited with errors related to Perl versions. By default they were using the perl in /usr/bin, which was Perl 5.8. Had to create a symbolic link pointing to the perl residing in the Oracle 11g home: perl -> /u01/app/oracle/product/11.2.0/dbhome_1/perl/bin/perl
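A minimal sketch of the symlink change. The directory the link is created in is an assumption (any directory that appears in PATH ahead of /usr/bin will do); the original post only gives the link target.

#!/bin/bash
# Create a "perl" symlink pointing at the 11g ORACLE_HOME perl, so that
# adbldxml.pl and autoconfig pick up Perl 5.10 instead of /usr/bin/perl (5.8).
LINK_DIR=$HOME/bin                      # assumed location, must come early in PATH
mkdir -p $LINK_DIR
ln -sf /u01/app/oracle/product/11.2.0/dbhome_1/perl/bin/perl $LINK_DIR/perl
export PATH=$LINK_DIR:$PATH
which perl
perl -v | head -2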

Issue when running autoconfig because the APPS user was not named APPS; in our case it is APPS_APPLFND:
ORA-44416: Invalid ACL: Unresolved principal APPS
ORA-06512: at SYS.DBMS_NETWORK_ACL_ADMIN, line 252
Had to edit the template files under $ORACLE_HOME/appsutil and change all occurrences of APPS to APPS_APPLFND in the file txkcreateACL.sql:
./template/txkcreateACL.sql
./install/CLMTS11G_kens-orasql-001/txkcreateACL.sql
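A sketch of how the occurrences can be located and replaced, assuming GNU grep/sed and that a whole-word replacement of APPS is what you want; review each file before and after the edit.

#!/bin/bash
# Find every txkcreateACL.sql under appsutil that still references APPS,
# keep a backup of each, and replace the word APPS with APPS_APPLFND.
cd $ORACLE_HOME/appsutil
grep -rlw APPS --include=txkcreateACL.sql . | while read f
do
    cp $f $f.orig
    sed -i 's/\bAPPS\b/APPS_APPLFND/g' $f
done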

While running autoconfig, errors were encountered for afdbprf.sh and adcrobj.sh:

SP2-1503: Unable to initialize Oracle call interface
SP2-0152: ORACLE may not be functioning properly
This is because the Database Time Zone Upgrade was not performed when the database was upgraded from 10g to 11.2.0.1. A 10.2.0.4 database has DST version 4, while an 11.2.0.1 database has version 11. If we use DBUA to upgrade to 11.2.0.2 or 11.2.0.3, the timezone version is automatically upgraded and we do not have to do it manually as in the case of an upgrade to 11.2.0.1. Follow the post on upgrading the database time zone version.
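Before upgrading, you can confirm which timezone file version the database is currently using. A minimal check run as SYSDBA; the views used exist in 10g and 11g, while the DST_* properties in database_properties appear from 11g onward.

#!/bin/bash
# Report the timezone file version currently in use and the DST upgrade properties.
sqlplus -s / as sysdba <<EOF
set lines 120 pages 100
select * from v\$timezone_file;
select property_name, property_value
from   database_properties
where  property_name like 'DST%';
EOF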

[oracle@usha /]$ echo ORACLE_HOME=`cat /etc/oratab |egrep ':N|:Y'|cut -f2 -d':'`
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
[oracle@usha /]$ echo DBA=`echo $ORACLE_HOME|sed -e 's:/product/.*::g'`/admin
DBA=/u01/app/oracle/admin
[oracle@usha /]$

Now let's write a shell script that loops through admin and all its subdirectories and deletes all trace and audit files older than 14 days (Oracle 10g version):

#!/bin/bash
for ORACLE_SID in `cat /etc/oratab |egrep ':N|:Y'|grep -v \*|cut -f1 -d':'`
do
   # Look up this SID's ORACLE_HOME from /etc/oratab (note: no space after '=')
   ORACLE_HOME=`grep "^${ORACLE_SID}:" /etc/oratab |cut -f2 -d':'`
   DBA=`echo $ORACLE_HOME|sed -e 's:/product/.*::g'`/admin

   # Delete all .trc files from the bdump and udump directories older than 14 days,
   # and .aud files from the audit destination
   find $DBA/$ORACLE_SID/bdump -name \*.trc -mtime +14 -exec rm {} \;
   find $DBA/$ORACLE_SID/udump -name \*.trc -mtime +14 -exec rm {} \;
   find $ORACLE_HOME/rdbms/audit -name \*.aud -mtime +14 -exec rm {} \;
done
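To run the cleanup unattended, the script can be scheduled from cron. The path and schedule below are assumptions for illustration, not part of the original post.

#!/bin/bash
# Schedule the cleanup script (assumed saved as /home/oracle/scripts/purge_traces.sh,
# chmod 755) to run nightly at 01:00 as the oracle user, logging its output.
( crontab -l 2>/dev/null; echo '0 1 * * * /home/oracle/scripts/purge_traces.sh >> /home/oracle/scripts/purge_traces.log 2>&1' ) | crontab -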

Now let's write the equivalent script for Oracle 11g. In Oracle 11g, Oracle introduced ADR, and all diagnostic files are created under the ADR Base directory. Please visit the ADR page to learn about ADR.

SQL> select name, value from v$diag_info;

NAME                       VALUE
-------------------------- ---------------------------------------------
Diag Enabled               TRUE
ADR Base                   /u01/app/oracle
ADR Home                   /u01/app/oracle/diag/rdbms/testdb/DB11G
Diag Trace                 /u01/app/oracle/diag/rdbms/testdb/DB11G/trace
Diag Alert                 /u01/app/oracle/diag/rdbms/testdb/DB11G/alert
Diag Incident              /u01/app/oracle/diag/rdbms/testdb/DB11G/incident
Diag Cdump                 /u01/app/oracle/diag/rdbms/testdb/DB11G/cdump
Health Monitor             /u01/app/oracle/diag/rdbms/testdb/DB11G/hm
Default Trace File         /u01/app/oracle/diag/rdbms/testdb/DB11G/trace/DB11G_ora_22158.trc

#!/bin/bash
for ORACLE_SID in `cat /etc/oratab |egrep ':N|:Y'|grep -v \*|cut -f1 -d':'`
do
   # Look up this SID's ORACLE_HOME from /etc/oratab (note: no space after '=')
   ORACLE_HOME=`grep "^${ORACLE_SID}:" /etc/oratab |cut -f2 -d':'`
   ADR_HOME=`echo $ORACLE_HOME|sed -e 's:/product/.*::g'`/diag/rdbms/testdb/DB11G

   # Delete all diagnostic files older than 14 days from the ADR home
   find $ADR_HOME/trace -name \*.trc -mtime +14 -exec rm {} \;
   find $ADR_HOME/trace -name \*.trm -mtime +14 -exec rm {} \;
   find $ADR_HOME/alert -name \*.xml -mtime +14 -exec rm {} \;
   find $ADR_HOME/incident -name \*.inc -mtime +14 -exec rm {} \;
   find $ADR_HOME/cdump -name \*.dmp -mtime +14 -exec rm {} \;
done
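As an alternative to find, 11g's adrci utility can purge old files from the same ADR home; a sketch is below (20160 minutes equals 14 days; treat the exact purge options as something to verify against your adrci version).

#!/bin/bash
# Purge trace and core dump files older than 14 days (20160 minutes) via adrci.
# Assumes ORACLE_HOME and PATH are already set for the 11g database.
adrci exec="set homepath diag/rdbms/testdb/DB11G; purge -age 20160 -type trace; purge -age 20160 -type cdump"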

Guest User End Dated


It's been a long time since I blogged; I was busy with a migration. Last week I came across a "Blank Login Page" (EBS R12) issue in one of the development instances; later I found that the functional team had end dated the GUEST user, thinking that it is not used by anyone. The GUEST user has a lot of importance in EBS: E-Business Suite uses the guest account to represent a user session that is not yet authenticated, so when the account gets locked you will see a blank page when you try to log in to the instance. I had to fix this issue from the back end as I could not log in to the instance. I used this query to check when the guest user was end dated:

>> select USER_NAME, END_DATE from fnd_user where USER_NAME like '%GUEST%';

Take a backup of the FND_USER table and run the below statement to remove the end_date:

>> update fnd_user set end_date=null where user_name='GUEST';
>> commit;

I tried to log in again; it didn't work. Then I validated the guest user password with the below statement; this should return "Y" if the guest user password is correct. In my case it returned "N", so I had to reset the GUEST user password as well.

>> select fnd_web_sec.validate_login('GUEST','ORACLE') from dual;

Below is the procedure to reset the GUEST user password as per Metalink:

1. Reset the GUEST password:

SQL> exec fnd_vault.del('FND');
SQL> commit;
SQL> select FND_WEB_SEC.CHANGE_GUEST_PASSWORD('ORACLE','') from dual;
SQL> commit;

Note: In the above SQL, FND_WEB_SEC.CHANGE_GUEST_PASSWORD needs two inputs: one is the new guest user's password, which is "ORACLE", and the other is the applsys user's password, which you need to provide as applicable.

2. Test whether the profile was updated or not:

DECLARE
  stat boolean;
BEGIN
  dbms_output.disable;
  dbms_output.enable(100000);
  stat := FND_PROFILE.SAVE('GUEST_USER_PWD','GUEST/ORACLE','SITE');
  IF stat THEN
    dbms_output.put_line( 'Stat = TRUE - profile updated' );
  ELSE
    dbms_output.put_line( 'Stat = FALSE - profile NOT updated' );
  END IF;
END;
/

3. Change the GUEST user password. FNDCPASS should not be used; it will result in the error "FNDCPASS was not able to decrypt password for user 'GUEST' during applsys". Use the following API:
>> java oracle.apps.fnd.security.AdminAppServer APPS/"APPS Password" UPDATE GUEST_USER_PWD=GUEST/"Guest User Password" DB_HOST="Host_Name" DB_PORT="PortNumber" DB_NAME="SID"

Note: Provide all the parameters as applicable for Guest User password as used in step :1 , and DB related details for DB_HOST, DB_PORT and DB_NAME

4. Check the log of FNDCPASS for any error for the GUEST user; if there is no error for the GUEST user, then do the next step.

5. Compile the JSPs:
>> cd $FND_TOP/patch/115/bin
>> ojspCompile.pl --compile --flush
6. Set the GUEST_USER_PWD password in the $FND_SECURE/.dbc file to GUEST/ORACLE.

7. Correct the End_Date of the GUEST user and its responsibilities by direct launch of Forms, as explained in Note 552301.1, How To Prevent Users From Accessing Forms Directly In Oracle Applications R12.
8. Correct the roles by running "Workflow Directory Services User/Role Validation" with parameters: Fix Dangling User/Roles=Yes, Add Missing User/Role Assignments=Yes.
9. Bounce the middle tier services.
10. Retest the issue.

Note: I just shared how I fixed the guest user end_date issue; this may not be the solution in all cases. Regards, Satya. http://dbaschool.blogspot.com/

Friday, 26 February 2010

CONCSUB and STARTMGR


In my previous post I discussed how to use CONCSUB; in this post I will discuss the below two topics:
How a script can be run with the CONCSUB utility without showing the apps password.
Usage of the STARTMGR utility.

How a script can be run with the CONCSUB utility without showing the apps password: if you are using the CONCSUB utility in a script, you can hide the apps password from other users. Below is the procedure (see the sketch after the steps).

1. Create a script that runs CONCSUB apps/<>.
2. Set its permissions to 711, owned by applmgr; no one else will be able to read this script, which contains the apps password.
3. Create another script that simply runs the first script, and make this one owned by applmgr with permissions 6755.
4. Now any user can run the second script, which will run CONCSUB, without showing the password.
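A minimal sketch of the two scripts, using file names, paths and an example request of my own choosing; adapt the CONCSUB arguments to the request you actually want to submit and set the ownership/permissions as in the steps above.

#!/bin/bash
# File 1: /u01/app/applmgr/scripts/concsub_inner.sh  (owner applmgr, chmod 711)
# Holds the apps password; mode 711 stops other users from reading it.
CONCSUB apps/apps_password SYSADMIN "System Administrator" SYSADMIN WAIT=N \
    CONCURRENT FND FNDSCURS PROGRAM_NAME='"Active Users"'

And the wrapper that other users actually call:

#!/bin/bash
# File 2: /u01/app/applmgr/scripts/concsub_wrapper.sh  (owner applmgr, chmod 6755)
# Simply invokes the inner script, so the password never appears to the caller.
/u01/app/applmgr/scripts/concsub_inner.sh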

Usage of the STARTMGR utility. Startmgr is used to start the Internal Concurrent Manager, which in turn starts all other concurrent managers. The STARTMGR command can take up to 10 optional parameters. The startmgr executable is located in $FND_TOP/bin. Example: startmgr sysmgr="apps/xxxx" mgrname="std" printer="lpr_finance" mailto="admin" restart="N" logfile="stdmgrlog" queuesize="15" pmon="10" sleep="60" Note: If no manager name is specified, then the Internal Concurrent Manager is started. Metalink Reference: 147449.1 - How to Restart the Concurrent Manager in Unix. Regards, Satya. http://dbaschool.blogspot.com

Saturday, 20 February 2010

Submit a Concurrent Request Using CONCSUB


Introduction: CONCSUB is a utility to submit a concurrent request from the operating system level to run a concurrent program, without having to log on to Oracle Applications.

Syntax:
CONCSUB <APPS username>/<APPS password> \
  <responsibility application short name> \
  <responsibility name> \
  <username> \
  [WAIT=N|Y|<n seconds>] \
  CONCURRENT \
  <program application short name> \
  <program name> \
  [PROGRAM_NAME=<description>] \
  [REPEAT_TIME=<resubmission time>] \
  [REPEAT_INTERVAL=<number>] \
  [REPEAT_INTERVAL_UNIT=<resubmission unit>] \
  [REPEAT_INTERVAL_TYPE=<resubmission type>] \
  [REPEAT_END=<resubmission end date and time>] \
  [START=<date>] \
  [IMPLICIT=<type of concurrent request>] \
  [<parameter 1> ... <parameter n>]

You will find the CONCSUB executable under $FND_TOP/bin/.

Purpose: The usage of CONCSUB can be categorized into the following:
Submitting Concurrent Requests
Controlling Concurrent Managers

Submitting Concurrent Requests: CONCSUB is used to submit both standard and custom concurrent requests from the operating system level. The following is an example to submit the "Active Users" request.

CONCSUB APPS/APPS SYSADMIN System Administrator SYSADMIN WAIT=N CONCURRENT FND FNDSCURS PROGRAM_NAME='"Active Users"'

Controlling Concurrent Managers: CONCSUB is also used to shut down the concurrent managers; however, to start the concurrent manager you have to use the startmgr utility, which I will discuss in a different post.

CONCSUB apps/apps_password SYSADMIN System Administrator SYSADMIN WAIT=N CONCURRENT FND SHUTDOWN

Sometimes you may also need to abort the concurrent managers; in such situations you can specify ABORT:
CONCSUB apps/apps SYSADMIN System Administrator SYSADMIN WAIT=N CONCURRENT FND ABORT
Metalink Reference: 457519.1 - How to Submit a Concurrent Request Using CONCSUB Syntax
Q 1) What is the ORA-01555 error?
Ans: This is one of the favourite interview questions and can be asked by many companies. Tom Kyte has explained it beautifully; check the link below:
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:275215756923

Q 2) How will you kill a process completely?
Ans: We use the command:
SQL> alter system kill session 'sid,serial#' immediate;
The process might still exist at the OS level, in which case we use kill -9 pid. Through SQL we can get the OS pid from the SPID column by joining the views v$process and v$session.

Q 3) What are the new features of RMAN in Oracle 10g?
Ans: The top 10 new features of RMAN in Oracle 10g are:

1) Incrementally Updated Backups: Using this feature, all changes between the SCN of the original image copy and the SCN of the incremental backup are applied to the image copy, winding it forward to make the equivalent of a new database image copy without the overhead of such a backup. The following example shows how this can be used:

RUN {
  RECOVER COPY OF DATABASE WITH TAG 'incr_backup' UNTIL TIME 'SYSDATE - 7';
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_backup' DATABASE;
}

The RECOVER COPY... line will not do anything until the script has been running for more than 7 days. The BACKUP INCREMENTAL line will perform a complete backup (level 0) the first day it is run, with all subsequent backups being level 1 incremental backups. After 7 days, the RECOVER COPY... line will start to take effect, merging all incremental backups older than 7 days into the level 0 backup, effectively moving the level 0 backup forward. The effect of this is that you will permanently have a 7 day recovery window with a 7 day old level 0 backup and 6 level 1 incremental backups. Notice that the tag must be used to identify which incremental backups apply to which image copies.

2) Fast Incremental Backups: There are performance issues associated with incremental backups, as the whole of each datafile must be scanned to identify changed blocks. In Oracle 10g it is possible to track changed blocks using a change tracking file. Enabling change tracking does produce a small overhead, but it greatly improves the performance of incremental backups. The current change tracking status can be displayed using the following query:

SELECT status FROM v$block_change_tracking;

Change tracking is enabled using the ALTER DATABASE command:

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

Disabled using:

ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

3) BACKUP for Backupsets and Image Copies: In Oracle 10g the BACKUP command has been extended to allow it to initiate backups of image copies in addition to backupsets. As a result, the COPY command has been deprecated in favour of this new syntax.

BACKUP AS COPY DATABASE;
BACKUP AS COPY TABLESPACE users;
BACKUP AS COPY DATAFILE 1;

4) Cataloging Backup Pieces: It is now possible to manually catalog a backup piece using the CATALOG commands in RMAN. This allows backup files to be moved to alternate locations or manually archived to tape and brought back for restore operations. In Oracle 9i this functionality was only available for controlfile copies, archivelog copies and datafile copies. In addition, there are some shortcuts to allow multiple files to be cataloged using a single command. The following examples give the general idea:

# Catalog specific backup piece.
CATALOG BACKUPPIECE '/backup/MYSID/01dmsbj4_1_1.bcp';
# Catalog all files and the contents of directories which
# begin with the pattern "/backup/MYSID/arch".
CATALOG START WITH '/backup/MYSID/arch';
# Catalog all files in the current recovery area.
CATALOG RECOVERY AREA NOPROMPT;
# Catalog all files in the current recovery area.
# This is an exact synonym of the previous command.
CATALOG DB_RECOVERY_FILE_DEST NOPROMPT;

5) Automatic Instance Creation for RMAN TSPITR: If a tablespace point-in-time recovery (TSPITR) is initiated with no reference to an auxiliary instance, RMAN now automatically creates one. The auxiliary instance configuration is based on that of the target database. As a result, any channels required for the restore operations must be present in the target database so they are configured correctly in the auxiliary instance. The location of the datafiles for the auxiliary instance is specified using the AUXILIARY DESTINATION clause shown below.

RECOVER TABLESPACE users UNTIL LOGSEQ 2400 THREAD 1 AUXILIARY DESTINATION '/u01/oradata/auxdest';

The tablespace is taken offline, restored from a backup, recovered to the specified point-in-time in the auxiliary instance and re-imported into the target database. The tablespace in the target database should then be backed up and brought back online.

BACKUP TABLESPACE users;
SQL "ALTER TABLESPACE users ONLINE";

On successful completion the auxiliary instance will be cleaned up automatically. In the event of errors the auxiliary instance is left intact to aid troubleshooting.

6) Cross-Platform Tablespace Conversion: The CONVERT TABLESPACE command allows tablespaces to be transported between platforms with different

byte orders. The mechanism for transporting a tablespace is unchanged; this command merely converts the tablespace to allow the transport to work. The platforms of the source and destination servers can be identified using the V$TRANSPORTABLE_PLATFORM view. The platform of the local server is not listed, as no conversion is necessary for a matching platform.

SQL> SELECT platform_name FROM v$transportable_platform;

PLATFORM_NAME
------------------------------------
Solaris[tm] OE (32-bit)
...
...
Microsoft Windows 64-bit for AMD

15 rows selected.

The tablespace conversion can take place on either the source or the destination server. The following examples show how the command is used in each case:

# Conversion on a Solaris source host to a Linux destination file.
CONVERT TABLESPACE my_tablespace TO PLATFORM 'Linux IA (32-bit)' FORMAT='/tmp/transport_linux/%U';

# Conversion on a Linux destination host from a Solaris source file.
CONVERT DATAFILE='/tmp/transport_solaris/my_ts_file01.dbf', '/tmp/transport_solaris/my_ts_file02.dbf'
  FROM PLATFORM 'Solaris[tm] OE (32-bit)'
  DB_FILE_NAME_CONVERT '/tmp/transport_solaris','/u01/oradata/MYDB';

In the first example the converted files are placed in the directory specified by the FORMAT clause. In the second example the specified datafiles are converted to the local server's platform and placed in the directory specified by the DB_FILE_NAME_CONVERT clause.

7) Enhanced Stored Script Commands: Scripts can now be defined as global, allowing them to be accessed by all databases within the recovery catalog. The syntax for global script manipulation is the same as that for regular scripts, with the addition of the GLOBAL clause prior to the word SCRIPT. Examples of its usage are shown below:

CREATE GLOBAL SCRIPT full_backup {
  BACKUP DATABASE PLUS ARCHIVELOG;
  DELETE FORCE NOPROMPT OBSOLETE;
}

CREATE GLOBAL SCRIPT full_backup FROM FILE 'full_backup.txt';

RUN { EXECUTE GLOBAL SCRIPT full_backup; }

PRINT GLOBAL SCRIPT full_backup;

LIST GLOBAL SCRIPT NAMES;
LIST ALL SCRIPT NAMES;      # Global and local scripts.

REPLACE GLOBAL SCRIPT full_backup {
  BACKUP DATABASE PLUS ARCHIVELOG;
  DELETE FORCE NOPROMPT OBSOLETE;
}

REPLACE GLOBAL SCRIPT full_backup FROM FILE 'full_backup.txt';

DELETE GLOBAL SCRIPT 'full_backup';

8) Backupset Compression: The AS COMPRESSED BACKUPSET option of the BACKUP command allows RMAN to perform binary compression of backupsets. The resulting backupsets do not need to be uncompressed during recovery. It is most useful in the following circumstances:
You are performing disk-based backups with limited disk space.
You are performing backups across a network where network bandwidth is limiting.
You are performing backups to tape, CD or DVD where hardware compression is not available.
The following examples assume that some persistent parameters are configured in a similar manner to those listed below:

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backups/MYSID/%d_DB_%u_%s_%p';

The AS COMPRESSED BACKUPSET option can be used explicitly in the backup command:

# Whole database and archivelogs.
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
# Datafiles 1 and 5 only.
BACKUP AS COMPRESSED BACKUPSET DATAFILE 1,5;

Alternatively the option can be defined using the CONFIGURE command:

# Configure compression.
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
# Whole database and archivelogs.
BACKUP DATABASE PLUS ARCHIVELOG;

Compression requires additional CPU cycles, which may affect the performance of the database. For this reason it should not be used for tape backups where hardware compression is available.

9) Restore Preview: The PREVIEW option of the RESTORE command allows you to identify the backups required to complete a specific restore operation. The output generated by the command is in the same format as the LIST command. In addition, the PREVIEW SUMMARY command can be used to produce a summary report with the same format as the LIST SUMMARY command. The following examples show how these commands are used:

# Preview

RESTORE DATABASE PREVIEW;
RESTORE TABLESPACE users PREVIEW;
# Preview Summary
RESTORE DATABASE PREVIEW SUMMARY;
RESTORE TABLESPACE users PREVIEW SUMMARY;

10) Managing Backup Duration and Throttling: The DURATION clause of the BACKUP command restricts the total time available for a backup to complete. At the end of the time window the backup is interrupted, with any incomplete backupsets discarded. All complete backupsets are kept and used for future restore operations. The following examples show how it is used:

BACKUP DURATION 2:00 TABLESPACE users;
BACKUP DURATION 5:00 DATABASE PLUS ARCHIVELOGS;

Q4) What are the advantages of Data Pump over export/import? Why is Data Pump fast compared to export/import?
Ans: The top 10 differences between exp/imp (export/import) and expdp/impdp (Data Pump export and import) are:
1) Data Pump Export and Import operate on a group of files called a dump file set rather than on a single sequential dump file.
2) Data Pump Export and Import access files on the server rather than on the client. This results in improved performance. It also means that directory objects are required when you specify file locations.
3) The Data Pump Export and Import modes operate symmetrically, whereas original export and import did not always exhibit this behavior. For example, suppose you perform an export with FULL=Y, followed by an import using SCHEMAS=HR. This will produce the same results as if you performed an export with SCHEMAS=HR, followed by an import with FULL=Y.
4) Data Pump Export and Import use parallel execution rather than a single stream of execution, for improved performance. This means that the order of data within dump file sets and the information in the log files is more variable.
5) Data Pump Export and Import represent metadata in the dump file set as XML documents rather than as DDL commands. This provides improved flexibility for transforming the metadata at import time.
6) Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
7) At import time there is no option to perform interim commits during the restoration of a partition. This was provided by the COMMIT parameter in original Import.
8) There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.

9) Sequential media, such as tapes and pipes, are not supported.
10) The Data Pump method for moving data between different database versions is different from the method used by original Export/Import. With original Export, you had to run an older version of Export (exp) to produce a dump file that was compatible with an older database version. With Data Pump, you can use the current Export (expdp) version and simply use the VERSION parameter to specify the target database version.
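As an illustration of point 10, here is a hedged example of exporting a schema so that the dump can be imported into an older release; the schema, directory object and file names are placeholders, not values from the original post.

#!/bin/bash
# Export the HR schema in a format readable by a 10.2 database.
# DATA_PUMP_DIR must be an existing directory object the user can write to.
expdp system/manager schemas=HR version=10.2 \
     directory=DATA_PUMP_DIR dumpfile=hr_v102.dmp logfile=hr_v102_exp.log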

Q5) What is a lock? What are row and table level locks?
Ans: Locks are a mechanism that prevents destructive interaction between transactions accessing the same resource. The most common scenario is that a developer comes and tells you "my session got locked, can you please release the lock"; as a DBA you just need to check the actual session holding the lock with the help of the query below.

Step 1: To verify the locked object, here is the query:

SELECT o.owner, o.object_name, o.object_type, o.last_ddl_time, o.status,
       l.session_id, l.oracle_username, l.locked_mode
FROM   dba_objects o, gv$locked_object l
WHERE  o.object_id = l.object_id;

Step 2: Find the serial# for the session holding the lock and kill it:

SQL> select SERIAL# from v$session where SID=667;

   SERIAL#
----------
     21091

SQL> alter system kill session '667,21091';

System altered.
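If the session survives the kill and has to be removed at the OS level (as mentioned in Q2 above), the OS process id can be looked up by joining v$session and v$process. A minimal sketch, reusing SID 667 from the example:

#!/bin/bash
# Look up the OS process id (SPID) of session 667, then terminate it at the OS level.
sqlplus -s / as sysdba <<EOF
select p.spid
from   v\$process p, v\$session s
where  s.paddr = p.addr
and    s.sid   = 667;
EOF
# Once the SPID is known, e.g. 12345 (dedicated server process only):
# kill -9 12345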

Strictly speaking, all locks acquired by statements within a transaction are held for the duration of the transaction. Oracle releases all locks acquired by statements within a transaction when an explicit or implicit commit or rollback is executed.

Q6) Which databases are you handling? What is the maximum size?
Ans: I'm handling Production, Standby, Data Warehouse, Performance, Test, Development, Demo and SAP databases. The maximum size of my databases (the data warehouse database) is 750 GB; usually a data warehouse database (OLAP, online analytical processing) will be larger than transactional databases (OLTP, online transaction processing).

1) How to find the E-Business Suite login URL?
Ans:
SQL> conn apps
Enter password:
Connected.
SQL> select home_url from icx_parameters;

HOME_URL
--------------------------------------------------------------------------------
http://testnode1.comp.com:8000/OA_HTML/AppsLogin

2) How to find the release of Apps installed, or the version installed on our machine?
Ans:
SQL> conn apps
Enter password:
Connected.
SQL> select release_name from fnd_product_groups;

RELEASE_NAME
--------------------------------------------------
12.1.1

3) What is the Yellow Bar Warning in Apps?
Ans: Oracle Applications Release 11.5.1 (11i) requires that its code run in a trusted mode and uses JInitiator to run Java applets on a desktop client. If an applet is trusted, Java will extend the privileges of the applet. The Yellow Warning Bar is a warning that your applet is not running in a trusted mode. To indicate that an applet is trusted, it must be digitally signed using a digital certificate, so Oracle Applications requires that all Java archive files be digitally signed.

4) How to check the custom tops installed?
Ans:
SQL> Select BASEPATH, PRODUCT_CODE, APPLICATION_SHORT_NAME
     From fnd_application
     Where application_short_name like '%CUST_TOP_name%';

5) How to check whether multi-org is enabled in Oracle Applications?
Ans:
SQL> select multi_org_flag from fnd_product_groups;

M
-
Y

Note: For enabling multi-org, check MY ORACLE SUPPORT notes 396351.1 and 220601.1.

6) How to compile invalid objects in Oracle Applications?
Ans: Check the below link for all possible ways to compile the invalid objects in Oracle Applications. Usually the 'adadmin' utility provides the option to do this task.
http://onlineappsdba.blogspot.com/2008/05/how-to-compile-invalid-objects-in-apps.html

7) Can we install the Apps tier and Database tier on different operating systems while installing Oracle EBS 11i/R12?
Ans: Yes, it is possible. We can do this by following the below MY ORACLE SUPPORT notes:
Oracle Apps 11i --> Using Oracle EBS with a Split Configuration Database Tier on 11gR2 [ID 946413.1]
Oracle Apps R12 --> Oracle EBS R12 with Database Tier Only Platform on Oracle Database 11.2.0 [ID 456347.1]

8) How to find the node details in Oracle Applications?
Ans: The FND_NODES table in the 'apps' schema helps in finding node details after installation, cloning and migration of applications.
SQL> SELECT NODE_NAME||' '||STATUS||' '||NODE_ID||' '||HOST FROM FND_NODES;

9) How to see the products installed and their versions in Oracle Applications?
Ans:
SQL> SELECT APPLICATION_ID||' '||ORACLE_ID||' '||PRODUCT_VERSION||' '||STATUS||' '||PATCH_LEVEL FROM FND_PRODUCT_INSTALLATIONS;

The output looks like below:
172 172 12.0.0 I R12.CCT.B.1
191 191 12.0.0 I R12.BIS.B.1
602 602 12.0.0 I R12.XLA.B.1
805 805 12.0.0 I R12.BEN.B.1
8302 800 12.0.0 I R12.PQH.B.1
8303 800 12.0.0 I R12.PQP.B.1
809 809 12.0.0 I 11i.HXC.C

662 662 12.0.0 I R12.RLM.B.1
663 663 12.0.0 I R12.VEA.B.1
298 298 12.0.0 N R12.POM.B.1
185 185 12.0.0 I R12.XTR.B.1

10) How to see the concurrent requests and jobs in Oracle Applications?
Ans: FND_CONCURRENT_REQUESTS can be used to see the concurrent request and job details. These details are useful in troubleshooting concurrent manager related issues.
SQL> SELECT REQUEST_ID||' '||REQUEST_DATE||' '||REQUESTED_BY||' '||PHASE_CODE||' '||STATUS_CODE FROM FND_CONCURRENT_REQUESTS;

The output will be as given below:

REQUEST_ID REQUEST_DATE REQUESTED_BY PHASE_CODE STATUS_CODE
-----------------------------------------------------------
6088454 24-NOV-11 1318 P I
6088455 24-NOV-11 1318 P Q
6088403 24-NOV-11 0 C C
6088410 24-NOV-11 0 C C

Where the PHASE_CODE column can have the values:
C Completed
I Inactive
P Pending
R Running

and the STATUS_CODE column can have the values:
A Waiting
B Resuming
C Normal
D Cancelled
E Error
F Scheduled
G Warning
H On Hold
I Normal
M No Manager
Q Standby
R Normal
S Suspended
T Terminating
U Disabled
W Paused
X Terminated
Z Waiting
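When troubleshooting the concurrent managers, a quick summary of how many requests sit in each phase/status combination is often more useful than the raw listing. A minimal sketch; the 7-day filter is my own choice for illustration.

#!/bin/bash
# Count concurrent requests by phase and status for the last 7 days.
sqlplus -s apps <<EOF
select phase_code, status_code, count(*)
from   fnd_concurrent_requests
where  request_date > sysdate - 7
group  by phase_code, status_code
order  by phase_code, status_code;
EOF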

11) What is the significance of the FND_INSTALL_PROCESSES and AD_DEFERRED_JOBS tables?
Ans: The FND_INSTALL_PROCESSES and AD_DEFERRED_JOBS tables are created and dropped during 'adadmin' and 'adpatch' sessions. Both AD utilities (adpatch/adadmin) access the same tables to store worker details, so both FND_INSTALL_PROCESSES and AD_DEFERRED_JOBS need to be dropped from a failed adpatch session so that the next adadmin/adpatch session can run successfully.
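A sketch of the manual cleanup after a failed session, assuming the tables ended up in the APPLSYS schema; verify the owner with DBA_TABLES first, make sure no AD session is running, and follow the relevant support note for your release before dropping anything.

#!/bin/bash
# Check who owns the leftover AD worker tables, then drop them (assumed owner: APPLSYS).
sqlplus -s apps <<EOF
select owner, table_name
from   dba_tables
where  table_name in ('FND_INSTALL_PROCESSES','AD_DEFERRED_JOBS');

drop table APPLSYS.FND_INSTALL_PROCESSES;
drop table APPLSYS.AD_DEFERRED_JOBS;
EOF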

Hi,
Tuning the database is essential not only for better execution of SQL statements but also for the applications running on that database. The AWR (Automatic Workload Repository) report gives us a clear picture when assessing and tuning the database as well as SQL statements. With Enterprise Manager we can view it easily, but Enterprise Manager needs a license (additional cost), so some companies do not want to use Oracle Enterprise Manager. Oracle gives us a flexible option in which we can generate the AWR report in HTML or plain text format. It is better to generate it in HTML format so it can be viewed clearly in any web browser. Since performance tuning is a deep ocean, I will be updating this thread based on the problems I face and the methods which work for tuning purposes.

Collecting the AWR report from the SQL prompt:
Log in to the database as the 'sys' user (SYSDBA) and make sure the database is up and running and the Oracle environment for the particular database is set. We can gather the AWR report using 'awrrpt.sql'.
Note: If we want the AWR report in an Oracle RAC environment, we have to use the 'awrgrpt.sql' script, as there you have to gather the report for the multiple instances running on the various nodes.

SQL> select name from v$database;

NAME
---------
TESTDB

SQL> select status from v$instance;

STATUS
------------
OPEN

SQL> @?/rdbms/admin/awrrpt.sql

Current Instance
~~~~~~~~~~~~~~~~
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 3628069655 TESTDB              1 TESTDB

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: html

Type Specified: html

Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id     Inst Num DB Name      Instance     Host
------------ -------- ------------ ------------ ------------
* 3628069655        1 TESTDB       TESTDB       TESTNODE1.comp.com

Using 3628069655 for database Id Using 1 for instance number

Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.

Enter value for num_days: 1

Listing the last day's Completed Snapshots

                                                         Snap
Instance     DB Name        Snap Id    Snap Started    Level
------------ ------------ --------- ------------------ -----
TESTDB       TESTDB            5590 24 Nov 2011 00:30      1
                               5591 24 Nov 2011 01:30      1
                               5592 24 Nov 2011 02:30      1
                               5593 24 Nov 2011 03:30      1
                               5594 24 Nov 2011 04:30      1
                               5595 24 Nov 2011 05:30      1
                               5596 24 Nov 2011 06:30      1
                               5597 24 Nov 2011 07:30      1
                               5598 24 Nov 2011 08:30      1
                               5599 24 Nov 2011 09:30      1
                               5600 24 Nov 2011 10:30      1
                               5601 24 Nov 2011 11:30      1
                               5602 24 Nov 2011 12:30      1
                               5603 24 Nov 2011 13:30      1
                               5604 24 Nov 2011 14:30      1
                               5605 24 Nov 2011 15:30      1

Specify the Begin and End Snapshot Ids

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 5604
Begin Snapshot Id specified: 5604

Enter value for end_snap: 5605
End   Snapshot Id specified: 5605

Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_5604_5605.html. To use this name,
press <return> to continue, otherwise enter an alternative.
Enter value for report_name: awrrpt_NOV24_2011_2_30_3_30_PM.html

SQL> exit

We will see the HTML format of the AWR report in the current operating system path:

[oracle@TESTNODE1 ~]$ ls -altr awr*
-rw-r--r-- 1 oracle dba 458371 Nov 24 14:02 awrrpt_1_5590_5603.html
-rw-r--r-- 1 oracle dba 390564 Nov 24 16:31 awrrpt_NOV24_2011_2_30_3_30_PM.html

We can copy this HTML file to our machine using a copying tool (WinSCP or FTP) and review it using a web browser (Mozilla or a supported IE version).

Analyzing the AWR report and suggesting possible recommendations:
Once we obtain the AWR report, our main motive is to analyze it and come up with possible recommendations, which will depend on the size of our production database. These recommendations should first be implemented in a test environment and, after successful results, adopted in production.

1) Redo logs:
We need to make sure our redo logs are large enough. Check the number of log switches: one every twenty minutes is ideal; more than this is too high, and you should make the logs larger to reduce the number of switches. We can find the log switches in the Instance Activity Stats part of the AWR report. Example:

Instance Activity Stats - Thread Activity
* Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic                Total  per Hour
log switches (derived)       2      2.00

We can see that in this system there are 2 log switches per hour, which is good. So this tells us the redo logs are large enough.
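Outside of AWR, the switch rate can also be checked directly from v$log_history; a minimal sketch that counts switches per hour over the last day:

#!/bin/bash
# Count redo log switches per hour over the last 24 hours.
sqlplus -s / as sysdba <<EOF
select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) switches
from   v\$log_history
where  first_time > sysdate - 1
group  by to_char(first_time,'YYYY-MM-DD HH24')
order  by 1;
EOF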

2) Parsing:
Check the hard parsing amount. It should be zero. If it is not, this indicates that our SGA is probably too small; increase the size of the SGA and test again. Hard parsing is caused by the use of literals in SQL (as opposed to bind variables). If the queries in question are our own, we should change them to use bind variables. We can find this information on the first page:

Load Profile          Per Second  Per Transaction  Per Exec  Per Call
~~~~~~~~~~~~         -----------  ---------------  --------  --------
...
Parses:                     33.9              7.2
Hard parses:                 0.5              0.1
...

We can see that in this system the hard parse rate is almost zero, which is good.

Now, coming to the SGA, we can focus on the considerations below.

3) Buffer hit and library hit percentages:
Check the buffer hit and library hit percentages. We want them to be 100%; if not, we should increase the size of the SGA. This is also on the first page:

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  99.82       Redo NoWait %: 100.00
               Buffer Hit %:  99.52    In-memory Sort %: 100.00
              Library Hit %:  98.63        Soft Parse %:  98.60
         Execute to Parse %:  50.96         Latch Hit %:  98.16
Parse CPU to Parse Elapsd %:  66.67     % Non-Parse CPU:  97.75

In this case they are also good.

4) Top 5 Timed Foreground Events:
Check the average wait times. Anything over 5 ms indicates a problem. If we see DB CPU in the Top 5, this indicates that the SGA is too small. We may also be missing indexes; check the optimizer statistics. Here are the Top 5 from my environment:

Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                      Avg wait   % DB
Event                          Waits     Time(s)        (ms)     time   Wait Class
------------------------------ --------- -----------  -------   ------  ----------
DB CPU                                            15             59.9
log file sync                      1,592           8        5    32.3    Commit
sort segment request                   1           1     1001     4.0    Configurat
db file sequential read              216           1        4     3.6    User I/O
db file scattered read                64           0        6     1.5    User I/O

We can see here that the major issue is DB CPU, which generally indicates the SGA is too small. However, in this case it is high because this report was run on a VM with the database and BPM sharing the CPU and disk.

db file sequential/scattered read: these indicate time spent doing single-block (index) reads and multiblock (full table scan) reads, respectively. If these are high (over 5 ms), we should consider redistributing our data files to reduce disk I/O contention, or move them to faster disks.

5) Enqueue high watermark:
This indicates high-water mark enqueue contention, which occurs when there are multiple users inserting into LOB segments at once while the database is trying to reclaim unused space. We should consider enabling SecureFiles to improve LOB performance (SECURE_FILES=ALWAYS). We cannot see this in my example report, because it was not a problem in my environment, so it did not make it into the Top 5. If it did, you would see an event called: enq: HW - contention

Other things to be aware of: we should also check our database configuration.

6) MEMORY_TARGET:

Do not use this setting. We should have our DBA tune the memory manually instead; this will result in a better tuned database. We can start with 60% of physical memory allocated to the SGA and 20% to the PGA.
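A sketch of setting the memory parameters manually along those lines, assuming a server with 16 GB of RAM (roughly 9.6 GB SGA and 3.2 GB PGA); the figures are only an illustration of the 60%/20% rule above, and the instance must be restarted for the spfile changes to take effect.

#!/bin/bash
# Manually size SGA/PGA instead of using MEMORY_TARGET (example for a 16 GB host).
sqlplus -s / as sysdba <<EOF
alter system set memory_target=0 scope=spfile;
alter system set sga_max_size=9600M scope=spfile;
alter system set sga_target=9600M scope=spfile;
alter system set pga_aggregate_target=3200M scope=spfile;
-- Restart the instance for sga_max_size (and the disabled memory_target) to take effect.
EOF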

7) AUDIT_TRAIL: We do not usually touch this setting for tuning, but auditing at the database level can be an overhead for the database.

ORA-24247: network access denied by access control list (ACL)

Issue: The ORA-24247: network access denied by access control list (ACL) error occurred after the database was upgraded to 11gR2 from 9.2.0.8 in an EBS environment.

Error: ORA-24247: network access denied by access control list (ACL)

Impact: Unable to send mail through the database.

Reason: I had ignored this pre-upgrade tool report warning:
WARNING: --> Database contains schemas with objects dependent on DBMS_LDAP package.
.... Refer to the 11g Upgrade Guide for instructions to configure Network ACLs.
.... USER APPS has dependent objects.

Solution:

1. Please check whether the below files exist: /appsutil/install/<$CONTEXT_NAME>/txkcreateACL.sh

/appsutil/install/<$CONTEXT_NAME>/txkcreateACL.sql

2. If the above files exist, then run Autoconfig on the DB tier and check whether the issue is resolved.

3. If the issue is not resolved, then follow the steps below. Create an ACL if one does not exist by referring to the command below. You can use the queries mentioned below to check the available ACLs and the related privileges.

SQL> select * from DBA_NETWORK_ACLS;
SQL> select * from DBA_NETWORK_ACL_PRIVILEGES;

Assign the specific Users or Roles to the ACL list.

BEGIN

-- Only uncomment the following line if ACL "network_services.xml" has already been created --DBMS_NETWORK_ACL_ADMIN.DROP_ACL('network_services.xml');

DBMS_NETWORK_ACL_ADMIN.CREATE_ACL( acl => 'network_services.xml', description => 'FTP ACL', principal => 'APPS', is_grant => true, privilege => 'connect');

DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE( acl => 'network_services.xml', principal => 'APPS', is_grant => true, privilege => 'resolve');

DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL( acl => 'network_services.xml', host => '*');

COMMIT;

END;

Assign the ACL to the required Hosts including the Mail Server

connect apps/apps;

DECLARE
  conn utl_smtp.connection;
BEGIN
  conn := utl_smtp.open_connection('mail1.indiandba.com', 25);
END;
/

Check the configuration:

select utl_inaddr.get_host_address('mail1.indiandba.com') from dual;
