Kumaravelu S
DBA
From: Raghu, Nadupalle (Cognizant)
Sent: Wednesday, April 18, 2007 12:28 PM
Subject:
In this Document
Symptoms
Changes
Cause
Solution
References
Applies to:
Oracle Server - Enterprise Edition - Version:
This problem can occur on any platform.
Symptoms
The following messages are reported in alert.log after 10g Release 2 is installed.
Changes
Cause
These are warning messages that should not cause the program responsible for them
to fail. They appear as a result of the new event messaging mechanism and memory
manager in 10g Release 2.
They mean that the process is simply spending a lot of time finding free memory
extents during an allocation, because the memory may be heavily fragmented. Memory
fragmentation is impossible to eliminate completely; however, continued messages about large
allocations indicate there are tuning opportunities in the application.
The messages do not imply that an ORA-4031 is about to happen.
Solution
In 10g there is a new undocumented parameter that sets the KGL heap size warning
threshold. This parameter was not present in 10gR1. Warnings are written if a heap size
exceeds this threshold.
If you want to set the threshold to 8 MB (8192 * 1024 = 8388608 bytes) and are using an spfile:
_kgl_large_heap_warning_threshold=8388608
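If the instance uses an spfile, the same threshold can be set with ALTER SYSTEM; a sketch (underscore parameters are undocumented, so change them only under Oracle Support's guidance):

```sql
-- Raise the KGL heap-size warning threshold to 8 MB (8192 * 1024 bytes).
-- Underscore parameters are undocumented; set them only when advised by Oracle Support.
ALTER SYSTEM SET "_kgl_large_heap_warning_threshold" = 8388608 SCOPE = SPFILE;
-- The new threshold takes effect after the instance is restarted.
```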
NOTE: The default threshold in 10.2.0.1 is 2 MB, so these messages can show up
frequently in some application environments.
In 10.2.0.2, the threshold was increased to 50 MB after regression tests, so that is
a reasonable and recommended value. If you continue to see these warning messages
in the alert log after applying 10.2.0.2 or higher, an SR may be in order to investigate whether
you are encountering a bug in the shared pool.
DATABASE LINK
create database link CDOI3 connect to cdo identified by cdo using 'CDOI3.cts.com';
select * from cdo.t1@CDOI3;
10.237.5.154
User Name:oc4jadmin
Password : pass1234
https://metalink.oracle.com/metalink/plsql/f?p=110:19:4410067257338331514::NO:::
Also the following error occurs when attempting to access the web application.
Fri, 18 Feb 2005 12:06:21 GMT ORA-04020: deadlock detected while trying to
lock object SYS.DBMS_STANDARD DAD
name: devltimetrk PROCEDURE : time_sheet.display URL :
http://144.10.126.144:1643/pls/devlTimeTrk/time_sheet.display
That fixed my original problem. I put my pfile back the way it was and now I am
getting this -
ORACLE instance started.
I tried recompiling everything with utlrp.sql but received the "trigger is invalid"
error, and I tried adding "_system_trig_enabled" to my pfile as
"*._system_trig_enabled=TRUE", but it did not help.
Thank you,
Sara
A possible workaround is to set the following parameter in the listener.ora and restart the listener:
DIRECT_HANDOFF_TTC_LISTENER=OFF
Should you be working with Multi-Threaded Server (MTS) connections, you might need to increase the
value of large_pool_size.
su - informat
cd /informatica/repositoryserver
./pmrepserver
http://www.oracle.com/technology/books/10g_books.html
smc&
A)INIT.ORA PARAMETER
spool off
$ ksh
$ set -o vi
instance_name=DWDEV
db_name=DWDEV
background_dump_dest=/oradata2/oracle9i/admin/DWDEV/bdump
user_dump_dest=/oradata2/oracle9i/admin/DWDEV/udump
core_dump_dest=/oradata2/oracle9i/admin/DWDEV/cdump
control_files=("/oradata2/oracle9i/admin/DWDEV/control01.ctl","/oradata2/oracle9i/admin/DWDEV/control02.ctl")
compatible=9.2.0.0.0
remote_login_passwordfile=EXCLUSIVE
undo_management=AUTO
undo_tablespace=undo1
B) STARTUP NOMOUNT;
C)
psrinfo
psrinfo –v
CREATE CONSTRAINT
create table test1 (
a number,
b number,
c number
);
alter table test1 modify DAY_OF_WEEK varchar2(1) not null enable novalidate;
The following shows the steps to drop a database in a Unix environment. In order to delete a
database, a few things need to be taken care of. First, all the database-related files,
e.g. *.dbf, *.ctl, *.rdo, *.arc, need to be deleted. Then, the entries in listener.ora and
tnsnames.ora need to be removed. Third, all database links pointing to the database need to be
removed, since they will be invalid anyway.
Depending on how you log in to the oracle account in Unix, you should have the environment set
for the user oracle. To confirm that the environment variables are set, run env | grep ORACLE
and check that ORACLE_SID=SOME_SID and ORACLE_HOME=SOME_PATH. If you do not
already have ORACLE_SID and ORACLE_HOME set, do it now.
Also make sure that ORACLE_SID and ORACLE_HOME are set correctly, or you may end up
deleting another database. Next, you will have to query all the database-related
files from the dictionary in order to identify which files to delete. Do the following:
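The note stops before listing the queries; a sketch of the usual dictionary queries (spool the output before shutting the instance down):

```sql
-- Datafiles (*.dbf)
SELECT name FROM v$datafile;
-- Online redo log members
SELECT member FROM v$logfile;
-- Control files (*.ctl)
SELECT name FROM v$controlfile;
-- Archive log destination, if archiving is enabled
SELECT value FROM v$parameter WHERE name LIKE 'log_archive_dest%';
```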
STATSPACK INSTALLATION
Statspack Installation
Step 1: create tablespace tablespace_name datafile '/filename.dbf' size 500M;
Step 2: cd /opt/oracle/rdbms/admin
Step 3: run the script at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql
Step 4:
IMP UTILITY
connected to ORACLE
The errors occur on Oracle databases installed on Windows machines too. Actually, the
problem can occur on any platform. It usually happens when trying to
import into a new database.
The problem occurs because the imp utility errors out when trying to execute some
commands.
After executing the above SQL scripts, retry the import. The error should disappear.
UNDOTBS
Partition-level Import can only be specified in table mode. It lets you selectively load
data from specified partitions or subpartitions in an export file. Keep the following
guidelines in mind when using partition-level import.
• Import always stores the rows according to the partitioning scheme of the target
table.
• Partition-level Import inserts only the row data from the specified source
partitions or subpartitions.
• If the target table is partitioned, partition-level Import rejects any rows that fall
above the highest partition of the target table.
• Partition-level Import cannot import a nonpartitioned exported table. However, a
partitioned table can be imported from a nonpartitioned exported table using
table-level Import.
• Partition-level Import is legal only if the source table (that is, the table called
tablename at export time) was partitioned and exists in the Export file.
• If the partition or subpartition name is not a valid partition in the export file,
Import generates a warning.
• The partition or subpartition name in the parameter refers to only the partition or
subpartition in the Export file, which may not contain all of the data of the table
on the export source system.
• If ROWS=y (default), and the table does not exist in the Import target system, the
table is created and all rows from the source partition or subpartition are inserted
into the partition or subpartition of the target table.
• If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all
rows for the specified partition or subpartition in the table are inserted into the
table. The rows are stored according to the existing partitioning scheme of the
target table.
• If ROWS=n, Import does not insert data into the target table and continues to
process other objects associated with the specified table and partition or
subpartition in the file.
• If the target table is nonpartitioned, the partitions and subpartitions are imported
into the entire table. Import requires IGNORE=y to import one or more partitions or
subpartitions from the Export file into a nonpartitioned table on the import target
system.
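As a hedged illustration of the table-mode, partition-level syntax described above (the user, table, partition, and dump file names are invented):

```shell
# Import only partition p1 of table sales from the dump file.
# IGNORE=y lets the rows be inserted into a pre-existing (possibly nonpartitioned) table.
imp scott/tiger FILE=exp.dmp TABLES=(sales:p1) IGNORE=y ROWS=y
```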
USER CREATION IN OS
The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE
for the setting:
PROPERTY_NAME PROPERTY_VALUE
—————————— ——————————
DICT.BASE 2
DEFAULT_TEMP_TABLESPACE TEMP
DBTIMEZONE +01:00
NLS_NCHAR_CHARACTERSET AL16UTF16
GLOBAL_DB_NAME ARON.GENERALI.CH
EXPORT_VIEWS_VERSION 8
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET WE8ISO8859P1
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_RDBMS_VERSION 9.2.0.6.0
If the default temporary tablespace is wrong, then alter it with the following command:
will return the following result; check that every user's TEMPORARY_TABLESPACE is set
correctly:
If a wrong temporary tablespace is found, alter it to the correct tablespace name (for
example, for the SYS user) with the following SQL:
Alternatively, recreate or add a datafile to your temporary tablespace and change the
default temporary tablespace for your database:
SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;
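The corresponding ALTER statements might look like this (the user and tablespace names are illustrative):

```sql
-- Make the new tablespace the database-wide default temporary tablespace.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
-- Repoint an individual user whose setting was wrong.
ALTER USER scott TEMPORARY TABLESPACE temp;
```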
Re: What are these files GOOD for? [message #126248, Sat, 02 July 2005 00:09, a reply to message #126216]
Achchan
Messages: 86, Member
Registered: June 2005
Hi,
Files that have a .DJF extension contain the predefined redo logs and datafiles for the seed
templates in DBCA. If you delete them, you won't be able to use those DB creation
templates in the future.
db_domain
GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;
SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;
CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
INSERT INTO SYS.logoff_tbl VALUES(sys_context('userenv','session_user'),
SYSDATE);
END;
BACKUP PATH
NO.OF CPU
isainfo –v
Db_2k_cache_size=10m
http://hostname:port/em
utl_file_dir
Sqlnet.Inbound_connect_Timeout
Stripe on “compress:path02”
/opt/oracle10g/xdk/admin/initxml.sql
/opt/oracle10g//xdk/admin/xmlja.sql
/opt/oracle10g/rdbms/admin/catjava.sql
/opt/oracle10g/rdbms/admin/catexf.sql
Once the database has been restarted, resolve any invalid objects by running
@?/rdbms/admin/utlrp.sql (? expands to the ORACLE_HOME, here /opt/oracle10g)
/rdbms/admin/catnoexf.sql
/rdbms/admin/rmaqjms.sql
/rdbms/admin/rmcdc.sql
/xdk/admin/rmxml.sql
/javavm/install/rmjvm.sql
10.237.101.37—Backup Report
SYBASE –Database
1. su - syb
2. dscp
3. open
4. listall
6. sp_who
7. go
9. /Sybase/syb125/ASE-12-5/install
11. sp_helpdb
12. sp_configure
backup: sp_helpdb test_saatchi
cd $SYBASE
more interfaces
Sybadmin-pW
MAX_ENABLED_ROLES = 70
svrmgrl
connect internal
startup
shutdown abort
The following command gathers statistics, including the number of rows in each table:
exec dbms_stats.gather_database_stats();
mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3
10.237.101.37:/unixbkp /backup
oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracleHome2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin
Change the listener and database services Log On user to domain user
who is a member of the groups domain admin and ORA_DBA group.
The default setting is Local System Account.
- Run regedit
- Drill down to
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS / Command prompt
- Run 'lsnrctl start <listener_name>' without the single quotes, replacing <listener_name> with the name.
- An OS error of 1060 will be seen (normal) as the service is missing.
- The listener should start correctly, or the next logical error may display.
By the way, can you explain the background of the problem? Did you do any
upgrade? Might you be using two ORACLE_HOMEs?
‘
/var/opt/oracle/--Install.loc
spcreate.sql
spreport.sql
for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
ORACLE_SID=$i
export ORACLE_SID
sqlplus "/ as sysdba" << !
select sum(bytes)/1024/1024 from dba_data_files;
exit
!
done
/opt/infoall/info
fuser -c /oradata2
umount /oradata2
mount /oradata2
purge recyclebin
purge dba_recyclebin
10.237.209.11
Recover database;
Alter database open;
10.237.204.69
\\10.237.5.164\Softwares
My problem:
When I don't use tnsnames and want to use ipc protocol then I get the following
error.
SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied
If not...
1. Login as oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute the following
chmod 6751 oracle
5. Check the file permissions on oracle using the following
ls -l oracle
Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux
and $ORACLE_HOME\Bin\emca.bat for Windows).
---------------------------------------------------------
You have specified the following settings
M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
...........
...........
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em
<<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004
http://akira:5500/em
--------------------------------------------------------------------------------
number of devices
(2 rows affected)
1> sp_configure 'number of devices'
2> go
Parameter Name Default Memory Used Config Value
Run Value Unit Type
------------------------------ ----------- ----------- ------------
----------- -------------------- ----------
number of devices 10 #36 60
60 number dynamic
(1 row affected)
(return status = 0)
1> sp_configure 'number of devices',70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
Parameter Name Default Memory Used Config Value
Run Value Unit Type
------------------------------ ----------- ----------- ------------
----------- -------------------- ----------
number of devices 10 #44 70
70 number dynamic
(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the
option is dynamic.
Changing the value of 'number of devices' to '70' increases the amount of memory
ASE uses by 12 K.
(return status = 0)
disk init
name='gem_hist_data7',
physname='/data/syb125/gem_hist/gem_hist_data7.dat',
size='1600M'
go
This query is used to find the object name and the session holding the lock:
select c.owner,c.object_name,c.object_type,b.sid,b.serial#,b.status,b.osuser,b.machine
from v$locked_object a ,v$session b,dba_objects c
where b.sid = a.session_id and a.object_id = c.object_id;
NAME
SYNOPSIS
crontab [file]
crontab -r
crontab -l
DESCRIPTION
minute (0-59),
hour (0-23),
day of the month (1-31),
Donald K. Burleson
Oracle Tips
Until Oracle9i, there was no way to identify indexes that were
not being used by SQL queries. This tip describes the Oracle9i method
that allows the DBA to locate and delete unused indexes.
The approach is quite simple. Oracle9i has a tool that allows you to
monitor index usage with an alter index command. You can then
query and find those indexes that are unused and drop them from the
database.
Here is a script that will turn on monitoring of usage for all indexes in
a system:
set pages 999;
set heading off;
spool run_monitor.sql
select
'alter index '||owner||'.'||index_name||' monitoring usage;'
from
dba_indexes
where
owner not in ('SYS','SYSTEM','PERFSTAT')
;
spool off;
@run_monitor
select
index_name,
table_name,
used
from
v$object_usage;
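Once the unused indexes are identified, monitoring can be switched off and the index dropped; a sketch with an invented index name:

```sql
-- Stop collecting usage data for an index that v$object_usage showed as unused,
-- then drop it (verify with the application owner first).
ALTER INDEX scott.emp_name_idx NOMONITORING USAGE;
DROP INDEX scott.emp_name_idx;
```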
If you like Oracle tuning, you might enjoy my latest book "Oracle
Tuning: The Definitive Reference" by Rampant TechPress. (I don't think
it is right to charge a fortune for books!) and you can buy it right now
at this link:
http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm
sysoper privileges
In some cases, you may wish to change the existing database character set. For instance,
you may find that the number of languages that need to be supported in your database
have increased. In most cases, you will need to do a full export/import to properly
convert all data to the new character set. However, if and only if, the new character set is
a strict superset of the current character set, it is possible to use the ALTER DATABASE
CHARACTER SET to expedite the change in the database character set.
The target character set is a strict superset if and only if each and every codepoint in the
source character set is available in the target character set, with the same corresponding
codepoint value. For instance the following migration scenarios can take advantage of the
ALTER DATABASE CHARACTER SET command since US7ASCII is a strict subset of
WE8ISO8859P1, AL24UTFFSS, and UTF8:
WARNING: Attempting to change the database character set to a character set that is not
a strict superset can result in data loss and data corruption. To ensure data integrity,
whenever migrating to a new character set that is not a strict superset, you must use
export/import. It is essential to do a full backup of the database before using the ALTER
DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be
rolled back. The syntax is:
To change the database character set, perform the following steps. Not all of them are
absolutely necessary, but they are highly recommended:
To change the national character set, replace the ALTER DATABASE CHARACTER
SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can
issue both commands together if desired
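A hedged sketch of the overall sequence (the target character set here is illustrative; follow the full documented procedure, and take a full backup first since the command cannot be rolled back):

```sql
-- Run with no other sessions connected.
SHUTDOWN IMMEDIATE;
STARTUP RESTRICT;
-- Only valid when the target is a strict superset of the current character set.
ALTER DATABASE CHARACTER SET UTF8;
ALTER DATABASE NATIONAL CHARACTER SET UTF8;
SHUTDOWN IMMEDIATE;
STARTUP;
```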
bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool1/dbbackups
bash-3.00# zfs set quota=10G datapool1/dbbackups
/var/spool/cron/crontabs
1. touch user
2. check cron.deny file also
DATAFILE SIZE +
CONTROL FILE SIZE +
REDO LOG FILE SIZE
Regards
Taj
http://dbataj.blogspot.com
Jun 1 (13 hours ago)
babu is correct...
but analyze the indexes also...
if you want to know the actual used space, use dba_extents instead of dba_segments
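The sizing rule above (datafile size + control file size + redo log file size) can be sketched in SQL; the v$controlfile sizing columns assume 10g or later:

```sql
-- Total database size in bytes: datafiles + control files + online redo logs.
SELECT (SELECT SUM(bytes) FROM dba_data_files)
     + (SELECT SUM(block_size * file_size_blks) FROM v$controlfile)
     + (SELECT SUM(bytes) FROM v$log) AS total_bytes
FROM dual;
```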
Files typically have a default size of 100M and are named using the following formats, where %u
is a unique 8-digit code, %g is the logfile group number, and %t is the tablespace name:
DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1
The above parameters mean two members will be created for the logfile group in the specified
locations when the ALTER DATABASE ADD LOGFILE; statement is issued. Oracle will name
the file and increment the group number if they are not specified.
The ALTER DATABASE DROP LOGFILE GROUP 3; statement will remove the group and its
members from the database and delete the files at the operating system level.
The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a
specific size file use:
CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;
If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the
OMF feature this cleanup can be performed by issuing the statement:
DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;
....
-- or
A default temporary tablespace cannot be taken offline until a new default temporary tablespace
is brought online.
Oracle has done it again. Venture with me down what seems like a small option but in fact has
major implications for what we, as DBAs, no longer have to manage.
The world of database performance and tuning is changing very fast. Every time I look at new
features, it convinces me more and more that databases are becoming auto-tunable and
self-healing. We could argue for quite a while whether DBAs will become obsolete in the
future, but I think our current niche is the acceptance of new technology and our ability to
empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into
the future of where it is going with tuning, not only of the database but of applications as well. The
little gem that Oracle has snuck in is its new automatic segment space management option.
What Is It
If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us
to use locally managed tablespaces. Because all the information to manage segments and
blocks is kept in bitmaps in locally managed tablespaces, pressure on the data dictionary is
relieved. Not only does this not generate redo, contention is reduced. Along with the push to
locally managed tablespaces is the push to use automatic segment space management. This
option takes total control of the parameters FREELISTS, FREELIST GROUPS, and PCTUSED. That
means that Oracle will track and manage the used and free space in data blocks using bitmaps
for all objects defined in the tablespace for which it has been defined.
How It Used to Be
In the olden days, everything was dictionary-managed tablespaces. How objects were being
used within tablespaces made setting FREELIST, FREELIST GROUPS, and PCTUSED an ordeal.
Typically, you would sit down and look at the type of DML that was going to be executed, the
number of users executing the DML, the size of rows in tables, and how the data would grow
over time. You would then come up with an idea of how to set FREELIST, PCTUSED, and
PCTFREE in order to get the best usage of space when weighed against performance of DML. If
you didn't know what you were doing or even if you did, you constantly had to monitor
contention and space to verify and plan your next attempt. Let's spend a bit of time getting
accustomed to these parameters.
FREELIST
This is a list of blocks kept in the segment header that may be used for new rows being
inserted into a table. When an insert is being done, Oracle gets the next block on the freelist
and uses it for the insert. When multiple inserts are requested from multiple processes, there
is the potential for a high level of contention since the multiple processes will be getting the
same block from the freelist, until it is full, and inserting into it. Depending on how much
contention you can live with, you need to determine how many freelists you need so that the
multiple processes can access their own freelist.
PCTUSED
This is a storage parameter stating that when the used percentage of a block falls
below PCTUSED, that block should be placed back on the freelist and made available for inserts. The
issue with choosing a value for PCTUSED was that you had to balance the need for performance (a
low PCTUSED to keep blocks off the freelist) against a high PCTUSED to keep space usage under
control.
FREELIST GROUPS
Basically used for multiple instances accessing an object. This setting can also be used to move
the freelists to blocks other than the segment header and thus give some relief from segment
header contention.
Why Is Auto Segment Space Management Good
I have come up with a short list of reasons why you might want to switch to auto segment
space management. I truly think you can find something that you will like.
* No worries
* No wasted time searching for problems that don't exist.
* No planning needed for storage parameters
* Out of the box performance for created objects
* No need to monitor levels of insert/update/delete rates
* Improvement in space utilization
* Better performance than most can tune or plan for with concurrent access to objects
* Avoidance of data fragmentation
* Minimal data dictionary access
* Better indicator of the state of a data block
* Furthermore, the method that Oracle uses to keep track of the availability of free space in a
block is much more granular than the binary nature of the old on-the-freelist or off-the-freelist
scenario.
The AUTO keyword tells Oracle to use bitmaps for managing space for segments.
Check What You Have Defined
select tablespace_name,
contents,
extent_management,
allocation_type,
segment_space_management
from dba_tablespaces;
Realize that you can't change the method of segment space management by an ALTER
statement. You must create a new permanent, locally managed tablespace and state auto
segment space management and then migrate the objects.
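A sketch of such a tablespace (the file name and size are illustrative):

```sql
-- Locally managed tablespace with automatic segment space management.
-- Existing objects must then be migrated into it (e.g. ALTER TABLE ... MOVE).
CREATE TABLESPACE assm_data
  DATAFILE '/u01/oradata/assm_data01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;
```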
Optional Procedures
The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information
about how space is being used within blocks under the segment high water mark.
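A sketch of calling it for one segment (the owner and segment names are invented):

```sql
-- DBMS_SPACE.SPACE_USAGE reports block counts by fullness band for an ASSM segment.
DECLARE
  l_unf   NUMBER; l_unf_b  NUMBER;  -- unformatted blocks/bytes
  l_fs1   NUMBER; l_fs1_b  NUMBER;  -- 0-25% free
  l_fs2   NUMBER; l_fs2_b  NUMBER;  -- 25-50% free
  l_fs3   NUMBER; l_fs3_b  NUMBER;  -- 50-75% free
  l_fs4   NUMBER; l_fs4_b  NUMBER;  -- 75-100% free
  l_full  NUMBER; l_full_b NUMBER;  -- full blocks/bytes
BEGIN
  DBMS_SPACE.SPACE_USAGE('SCOTT', 'EMP', 'TABLE',
                         l_unf, l_unf_b,
                         l_fs1, l_fs1_b, l_fs2, l_fs2_b,
                         l_fs3, l_fs3_b, l_fs4, l_fs4_b,
                         l_full, l_full_b);
  DBMS_OUTPUT.PUT_LINE('Full blocks below HWM: ' || l_full);
END;
/
```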
Let Oracle Take Over
Maybe it's my old age or years of doing the mundane tasks of a DBA that makes me want to embrace this
feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has
gotten a ton better at making sure new features work and are geared at truly making database
performance better. Here is just one instance where I think we can embrace Oracle's attempt
to take over a mundane task that has been prone to error in the wrong hands. After all, it
isn't rocket science when you get down to it and will probably be gone in the next release
anyway.
SELECT DBTIMEZONE FROM dual; is used to determine the time zone of a database.
Auditing
The auditing mechanism for Oracle is extremely flexible so I'll only discuss performing full auditing
on a single user:
• Server Setup
• Audit Options
• View Audit Trail
• Maintenance
• Security
Server Setup
To allow auditing on the server you must:
Audit Options
Assuming that the "fireid" user is to be audited:
CONNECT sys/password AS SYSDBA
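The audit options themselves are missing from the note; a plausible set for full auditing of one user (a sketch, not the note's original list):

```sql
-- One audit record per statement for everything "fireid" does.
AUDIT ALL BY fireid BY ACCESS;
AUDIT SELECT TABLE, INSERT TABLE, UPDATE TABLE, DELETE TABLE BY fireid BY ACCESS;
AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;
```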
These options audit all DDL & DML issued by "fireid", along with some system events.
• DBA_AUDIT_EXISTS
• DBA_AUDIT_OBJECT
• DBA_AUDIT_SESSION
• DBA_AUDIT_STATEMENT
• DBA_AUDIT_TRAIL
• DBA_OBJ_AUDIT_OPTS
• DBA_PRIV_AUDIT_OPTS
• DBA_STMT_AUDIT_OPTS
The audit trail contains a lot of data, but the following columns are most likely to be of interest:
Maintenance
The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table
from growing to an unacceptable size.
Security
Only DBAs should have maintenance access to the audit trail. If SELECT access is required by
any application, it can be granted to the relevant users, or alternatively a specific user may be
created for this.
Auditing modifications of the data in the audit trail itself can be achieved as follows:
AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;
EXEC DBMS_UTILITY.compile_schema('ATT');
EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE') ;
Reads a raw SQL Trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12), and
generates a comprehensive HTML report with performance related details: time summary, call summary
(parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait
events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.
The output HTML report includes all the details found in TKPROF, plus additional information normally
requested and used for transaction performance analysis. The generated report is more readable and
extensive than the text format used by prior versions of this tool and by the current TKPROF.
Product Name, Product Version: can be used for Oracle Apps 11i or higher, or for any other application
running on top of an Oracle database.
Instructions
Execution Environment:
the schema owning the transaction that generated the raw SQL Trace.
Access Privileges:
sqlplus <usr>/<pwd>
As the instructions are not the clearest, the following is what I did to
install TraceAnalyzer so that it would be owned by the SYSTEM schema:
1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL
directory
3. Created a directory under $ORACLE_HOME
named TraceAnalyzer
4. Moved the .sql files from the INSTALL to
the TraceAnalyzer directory
5. Logged onto Oracle as SYS
conn / as sysdba
SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;
c:> cd oracle\ora92\TraceAnalyzer
Run TraceAnalyzer
Start SQL*Plus
c:\oracle\ora92\TraceAnalyzer> sqlplus
system/<pwd>@<service_name>
Exit SQL*Plus
/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo –v
/etc/fstab
The workaround to prevent lgwr from spinning is to set the following hidden
parameter in your parameter file:
_lgwr_async_io=false
This parameter turns off async I/O for lgwr but leaves it intact for the rest of the
database server.
NAME
---------------------------------
CPU used by this session
"CPU used by this session" statistic is given in 1/100ths of a second. Eg: a value
of 22 mean 0.22 seconds in 8i.
The following query can give a good idea of what the session is doing and how much
CPU it has consumed:
For the values of command please look at the definition of V$session in the
reference manual.
To find out what sql the problem session(s) are executing, run the following
query:
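Both queries are missing from the note; sketches of the usual forms (column choices are illustrative):

```sql
-- CPU consumed per session, in seconds ("CPU used by this session" is in 1/100ths).
SELECT s.sid, s.serial#, s.username, s.command, t.value / 100 AS cpu_seconds
FROM   v$session s, v$sesstat t, v$statname n
WHERE  t.statistic# = n.statistic#
AND    n.name = 'CPU used by this session'
AND    t.sid = s.sid
ORDER  BY t.value DESC;

-- SQL currently being executed by a given session (bind :sid first).
SELECT q.sql_text
FROM   v$session s, v$sqltext q
WHERE  s.sql_address = q.address
AND    s.sid = :sid
ORDER  BY q.piece;
```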
FILE_NAME
---------------------------------------------
AUT
---
Alter database datafile /oradata1/CDOi1/data/ULOG_TS.dbf
YES
Alter database datafile /oracle/CDOi1/data/users02.dbf
YES
PURPOSE
Identify intermittent HTTP500 errors caused by possible Microsoft
Internet
Explorer bug. The information in this article applies to releases
of:
Oracle Containers for J2EE (OC4J)
Oracle Application Server 10g (9.0.4.x)
Oracle9iAS Release 2 (9.0.3.x)
Oracle9iAS Release 2 (9.0.2.x)
Scope
This note may apply if you have recently applied Microsoft Internet
Explorer
browser patches.
Symptoms
You are seeing the following possible sequences of MOD_OC4J errors in the
Oracle HTTP Server error_log file:
Unix: $ORACLE_HOME/Apache/Apache/logs/error_log
Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log
(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013,
MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207
(b) MOD_OC4J_0015, MOD_OC4J_0078,
MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013,
MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207
(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013,
MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035
(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058
The above list is not definitive and other sequences may be possible.
The following is one example sequence as seen in a log file:
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
MOD_OC4J_0119: Failed to get an oc4j process for destination: home.
MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
MOD_OC4J_0207: In internal process table, failed to find an available oc4j process for destination: home
Changes
The problem may be introduced by applying the following Microsoft patches:
o Microsoft 832894 security update
(MS04-004: Cumulative security update for Internet Explorer)
or
o Microsoft 821814 hotfix
It may be seen only with certain browsers such as Internet Explorer 5.x and 6.x.
The client machines will have a wininet.dll with a version number of
6.0.2800.1405. To identify this:
> Use Windows Explorer to locate the file at %WINNT%\system32\wininet.dll
> Right click on the file
> Select "Properties"
> Click on the "Version" tab.
(see http://support.microsoft.com/default.aspx?scid=kb;enus;831167
for further details)
Cause
This Windows bug causes a change in behavior when HTTP POST requests are
resubmitted, which can occur when the HTTP server terminates the browser
clients' open connections that exceeded their allowed HTTP 1.1 "KeepAlive"
idle time. In these cases the requests are resubmitted by the browser without
the needed HTTP headers.
Fix
It is possible to address this issue by applying Microsoft patches to
the client systems where the browser is running.
As a more viable workaround, it should be possible to disable the KeepAlive
timeout by restarting the HTTP Server component after making the following
configuration changes to httpd.conf:
Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf
Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf
1. Locate the KeepAlive directive in httpd.conf
KeepAlive On
2. Replace the KeepAlive directive in httpd.conf with
# vvv Oracle Note 269980.1 vvvvvvv
# KeepAlive On
KeepAlive Off
# ^^^ Oracle Note 269980.1 ^^^^^^^
3. If you are making this change manually, please run the following command to propagate these changes into the central configuration repository.
Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d
Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d
This step is not needed if the changes are made via Enterprise Manager.
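If it helps, the edit in step 2 can be scripted. A minimal sketch with GNU sed against a scratch copy of httpd.conf — the file contents below are an example, not your real configuration, and the real file lives under the paths shown above:

```shell
# Hedged sketch: apply the step-2 edit with GNU sed against a scratch
# copy of httpd.conf (substitute your real $ORACLE_HOME path).
conf=$(mktemp)
printf 'Timeout 300\nKeepAlive On\nMaxKeepAliveRequests 100\n' > "$conf"

# Comment out "KeepAlive On" and insert "KeepAlive Off", bracketed with
# the markers suggested in Oracle Note 269980.1.
sed -i 's/^KeepAlive On$/# vvv Oracle Note 269980.1 vvvvvvv\n# KeepAlive On\nKeepAlive Off\n# ^^^ Oracle Note 269980.1 ^^^^^^^/' "$conf"

grep KeepAlive "$conf"
```

Remember that on a real install you would still run the dcmctl step afterwards, or make the change through Enterprise Manager instead.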
References
http://support.microsoft.com/default.aspx?scid=kb;enus;831167
Checked for relevancy 2/8/2007
I am having a problem exporting an Oracle database. The error I got is, "exporting operators, exporting referential integrity constraints, exporting triggers." Please tell me how I can solve this. QUESTION POSED ON: 23 SEP 2004
QUESTION ANSWERED BY: Brian Peasland
First, verify that this package exists with the following query:
SELECT status, object_id, object_type, owner, object_name
FROM dba_objects;
SELECT object_name, object_type, status
FROM user_objects WHERE object_type LIKE 'JAVA%';
offline
NORMAL - performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.
TEMPORARY - performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.
IMMEDIATE - does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.
The OUTLN user is responsible for maintaining the stability of the plans for your queries via stored outlines.
The DBSNMP user is the one responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.
Prtconf
su - db2inst1
bash
cd sqllib
$ db2stop
4. Start an instance
As the instance owner on the host running DB2, issue the following command:
$ db2start
Dataflow Error
From documentation:
/*
OPEN_CURSORS specifies the maximum number of open cursors
(handles to private SQL areas) a session can have at once. You can use
this parameter to prevent a session from opening an excessive number
of cursors.
It is important to set the value of OPEN_CURSORS high enough to
prevent your application from running out of open cursors. The number
will vary from one application to another. Assuming that a session does
not open the number of cursors specified by OPEN_CURSORS, there is
no added overhead to setting this value higher than actually needed.
*/
Werner
Billy Verreynne replied (Aug 26, 2007):
> how to resolve this if the no. of open cursors exceeds the value given in init.ora
I.e. application code defining ref cursors, using ref cursors.. but never
closing ref cursors.
The following SQL identifies SQL cursors with multiple cursor handles
for that SQL by the same session. It is unusual for an application to
have more than 2 or so cursor handles opened for the very same
SQL. Typically one will see a "cursor leaking" application with 100's of
open cursor handles for the very same SQL.
select
c.sid,
c.address,
c.hash_value,
COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by
c.sid,
c.address,
c.hash_value
having
COUNT(*) > 2
order by
4 DESC
Once the application has been identified using V$SESSION, you can
use V$SQLTEXT to identify the actual SQL statement of which the
app creates so many handles.. and then trace and fix the problem in
the application.
Nagaraj
For performance tuning, if you have a statspack report generated, then you can have a look at the timed events.
This is what I could find out from otn and through google.
Apparently sqlnet.ora (also known as the Profile) is a configuration file that contains the parameters specifying preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.
A little about Net8: Net8 establishes network sessions and transfers data between a client
machine and a server or between two servers. It is located on each machine in the network
and once a network session is established, Net8 acts as a data courier for the client and the
server.
3) Oracle Names Server Configuration File (NAMES.ORA) : The Oracle Names server
configuration file (NAMES.ORA) contains the parameters that specify the location, domain
information, and optional configuration parameters for each Oracle Names server. NAMES.ORA
is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on
Windows NT.
ln -s /export/space/common/archive /archive
ln /export/home/fred/stuff /var/tmp/thing
Note that hard links to directories are not permitted on most filesystems, so for directories you use a symbolic link instead. To link /var/www/html to /var/www/webroot, use:
ln -s /var/www/html /var/www/webroot
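A quick demonstration of the difference between the two link types, using throwaway paths in a scratch directory:

```shell
# Scratch demo of hard vs symbolic links; all paths are throwaway examples.
d=$(mktemp -d)
echo "hello" > "$d/stuff"

ln "$d/stuff" "$d/hardlink"      # hard link: a second name for the same inode
ln -s "$d/stuff" "$d/symlink"    # symbolic link: a pointer to the path

[ "$d/stuff" -ef "$d/hardlink" ] && echo "hard link shares the inode"
[ -L "$d/symlink" ] && echo "symlink is its own file type"

# A hard link survives deletion of the original name; a symlink dangles.
rm "$d/stuff"
cat "$d/hardlink"                # still prints: hello
cat "$d/symlink" 2>/dev/null || echo "symlink is now dangling"
```

This is why the ln -s form is used for the /archive example above: symbolic links can also cross filesystems, which hard links cannot.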
If you want to move all the objects to another tablespace, just do the following:
>spool <urpath>\objects_move.log
> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace xyz;' from dba_segments where tablespace_name='RAKESH';
>spool off
>@<urpath>\objects_move.log
(Note: ALTER ... MOVE applies to tables; for index segments the generated statement should use REBUILD TABLESPACE instead of MOVE TABLESPACE.)
Put the following line in init.ora. It will enable trace for all sessions and the background processes:
sql_trace = TRUE
to disable trace:
sql_trace = FALSE
To trace only your own session, start trace with:
ALTER SESSION SET sql_trace = TRUE;
- or -
EXECUTE dbms_support.start_trace;
and stop trace with:
ALTER SESSION SET sql_trace = FALSE;
- or -
EXECUTE dbms_support.stop_trace;
The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges
to other users. By default, the user SYS is the only user that has these privileges. Creating a
password file via orapwd enables remote users to connect with administrative privileges
through SQL*Net.
The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows
the DBA to perform general database maintenance without viewing user data. The SYSDBA
privilege is the same as connect internal was in prior versions. It provides the ability to do
everything, unrestricted.
If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges
will result in the following error:
The following steps can be performed to grant other users these privileges:
1. Create the password file. This is done by executing the following command:
orapwd file=<filename> password=<password> entries=<max_users>
The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database.
The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.
2. Grant the privilege, for example:
SQL> GRANT sysdba TO scott;
Grant succeeded.
Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and
authenticated to a local or remote database by using the SQL*Plus connect command. They
must connect using their username and password, and with the AS SYSDBA or AS SYSOPER
clause:
The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other
database users. The SYS password should never be shared and should be highly classified.
With Oracle 9i a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, hash_area_size and other related memory management parameters that all Oracle DBAs are familiar with. Those parameters may, however, still be used. See the following for an interesting discussion on this topic:
The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up.
• pga_aggregate_target,
• workarea_size_policy
Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. when a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.
Also note that Automatic PGA management can only be used for dedicated server sessions.
For some good reading on Automatic PGA management, please see:
The documentation contains some good guidelines for initial settings, and how to monitor
and tune them as needed.
If your 9i database is currently using manual PGA management, there are views available
to help you make a reasonable estimate for the setting.
If your database also has statspack statistics, then there is also historical information
available to help you determine the setting.
An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat, and by querying the v$pga_target_advice view.
v$pgastat:
select *
from v$pgastat
order by lower(name)
(output truncated; sample row:)
aggregate PGA auto target      8,294,400 bytes
16 rows selected.
The statistic "maximum PGA allocated" will display the maximum amount of PGA
memory allocated during the life of the instance.
The statistic "maximum PGA used for auto workareas" and "maximum PGA used for
manual workareas" will display the maximum amount of PGA memory used for each
type of workarea during the life of the instance.
v$pga_target_advice:
select *
from v$pga_target_advice
order by pga_target_for_estimate
/
(column headers omitted: the view reports the estimated target size, estimated extra bytes read/written, estimated cache hit percentage, and estimated overallocation count)
12 rows selected.
There are other views that are also useful for PGA memory management.
v$process:
This displays the maximum PGA usage per process:
select
max(pga_used_mem) max_pga_used_mem
, max(pga_alloc_mem) max_pga_alloc_mem
, max(pga_max_mem) max_pga_max_mem
from v$process
/
This displays the sum of all current PGA usage per process:
select
sum(pga_used_mem) sum_pga_used_mem
, sum(pga_alloc_mem) sum_pga_alloc_mem
, sum(pga_max_mem) sum_pga_max_mem
from v$process
/
Be sure to read the documentation referenced earlier, it contains an excellent explanation
of Automatic PGA Memory Management.
These are the steps to get the user who issued a "drop table" command in a database:
3. truncate table aud$; - - - > to remove any audit trail data residing in the table.
4. sql>audit table; - - - > this starts auditing events pertaining to tables.
5. select action_name, username, userhost, to_char(timestamp,'dd-mon-yyyy:hh24:mi:ss') from dba_audit_trail where action_name like '%DROP TABLE%'; - - - - > this query gives you the username along with the userhost from where the 'username' is connected.
system
temp 1000MB
iq_system_main 2000MB
iq_system_main2 1000MB
iq_system_main3 5000MB
iq_system_msg
http://10.237.99.28:9090/applications.do
Can someone explain to me the difference between differential incremental and cumulative incremental backups, please?
Differential backups are quicker than full backups because so much less data is being backed
up. But the amount of data being backed up grows with each differential backup until the next
full back up. Differential backups are more flexible than full backups, but still unwieldy to do
more than about once a day, especially as the next full backup approaches.
Incremental backups also back up only the changed data, but they only back up the data that
has changed since the LAST BACKUP — be it a full or incremental backup. They are sometimes
called "differential incremental backups," while differential backups are sometimes called
"cumulative incremental backups."
Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.
2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most
recent level 0 incremental backup. Cumulative incremental backups reduce the work needed
for a restore by ensuring that you only need one incremental backup from any particular level.
Cumulative backups require more space and time than differential backups, however, because
they duplicate the work done by previous backups at the same level.
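The "changed since the last backup" idea behind differential incremental backups can be sketched outside RMAN with GNU tar's snapshot-based incremental mode. This is only an analogy, with throwaway file names, not an RMAN procedure:

```shell
# Analogy for differential-incremental backups using GNU tar's
# --listed-incremental snapshot file (not RMAN; example paths only).
work=$(mktemp -d); cd "$work"
mkdir src
echo "monday data" > src/a.txt

# "Level 0" (full) backup: records everything and initializes the snapshot.
tar -cf full.tar --listed-incremental=state.snar src

# New data arrives before the next backup...
echo "tuesday data" > src/b.txt

# The next backup against the same snapshot picks up only the changes.
tar -cf incr.tar --listed-incremental=state.snar src

tar -tf incr.tar   # lists src/ and the new b.txt, but not the unchanged a.txt
```

Restoring would replay the full backup and then each incremental in order, which is exactly the restore-time cost that cumulative incremental backups trade disk space to reduce.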
If you would like to read the entire document (it's a short one), you can find it at this site:
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm
Suraj
RE: Incremental RMAN Backups
I tried to explain things in a very simple way. I am not able to find anything I am missing; if I have, please let me know.
> "No space left on device" sounds quite clear to me.
> Maybe the disk where you want to create the database is full. Another
> point could be insufficient swap space, but I would expect another error
> message for that.
Note that the error message is linked to semget. You seem to have run
out of semaphores. You configure the max number of semaphores in
/etc/system:
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
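The /etc/system lines above are Solaris-specific. On Linux the equivalent limits live in kernel.sem; a quick way to inspect the current values (field order is SEMMSL, SEMMNS, SEMOPM, SEMMNI):

```shell
# On Linux, the four fields are SEMMSL (max semaphores per set),
# SEMMNS (system-wide max semaphores), SEMOPM (max ops per semop call),
# and SEMMNI (max number of semaphore sets).
cat /proc/sys/kernel/sem
```

The same values can be read or set with sysctl kernel.sem; on Solaris, changes to /etc/system take effect after a reboot.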