
Managing Large Data

Agenda

  Partitioning Overview
  Indexing
  Managing statistics
  Compression
  Purging
  Backing up

Partitioning Facts

Divide and Conquer

Many types
  Range
  List
  Hash
  Interval (11g)
  Reference (11g)
  Composite

Partitioning

Fifteen Years of Development

Each release added core functionality, performance, and manageability
features:

Oracle8             Range partitioning; global range indexes; static
                    partition pruning; basic maintenance operations:
                    add, drop, exchange
Oracle8i            Hash and composite range-hash partitioning;
                    partition-wise joins; dynamic pruning; merge
                    operation
Oracle9i            List partitioning; global index maintenance
Oracle9i R2         Composite range-list partitioning; fast partition
                    split
Oracle10g           Global hash indexes; local index maintenance
Oracle10g R2        1M partitions per table; multi-dimensional pruning;
                    fast drop table
Oracle Database 11g More composite choices; REF partitioning; virtual
                    column partitioning; interval partitioning;
                    Partition Advisor

Partitioning Facts

Is not mostly about performance
  Especially with OLTP
  With OLTP you must be careful not to impede performance!
Is mostly about administration
Is an extra-cost option to Enterprise Edition

Partitioning is not Fast=True

Its benefits fall into three areas:
  Availability
  Administration
  Performance

Partitioning Facts

Increases availability of data
  Each partition is independent
  Some users may never even notice some data was unavailable, thanks to
  partition elimination
  Downtime is reduced, and time to recover is reduced as well (smaller
  sets of data to recover)

part1.sql

ops$tkyte%ORA11GR2> CREATE TABLE emp
  2  ( empno int,
  3    ename varchar2(20)
  4  )
  5  PARTITION BY HASH (empno)
  6  ( partition part_1 tablespace p1,
  7    partition part_2 tablespace p2
  8  )
  9  /
Table created.

ops$tkyte%ORA11GR2> insert into emp select empno, ename from scott.emp
  2  /
14 rows created.

part1.sql

ops$tkyte%ORA11GR2> select part1, part2
  2    from (
  3  select empno || ', ' || ename part1, row_number() over (order by empno) rn1
  4    from emp partition(part_1)
  5         ) A FULL OUTER JOIN (
  6  select empno || ', ' || ename part2, row_number() over (order by empno) rn2
  7    from emp partition(part_2)
  8         ) B on ( a.rn1 = b.rn2 )
  9  /

PART1                PART2
-------------------- --------------------
7369, SMITH          7521, WARD
7499, ALLEN          7566, JONES
7654, MARTIN         7788, SCOTT
7698, BLAKE          7844, TURNER
7782, CLARK          7900, JAMES
7839, KING           7902, FORD
7876, ADAMS
7934, MILLER

8 rows selected.

part1.sql

ops$tkyte%ORA11GR2> alter tablespace p1 offline;
Tablespace altered.

ops$tkyte%ORA11GR2> select * from emp;
select * from emp
*
ERROR at line 1:
ORA-00376: file 3 cannot be read at this time
ORA-01110: data file 3:
'/home/ora11gr2/app/ora11gr2/oradata/ora11gr2/ORA11GR2/datafile/o1_mf_p1_6rprfr
mo_.dbf'

part1.sql

ops$tkyte%ORA11GR2> variable n number
ops$tkyte%ORA11GR2> exec :n := 7844;
PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select * from emp where empno = :n;

     EMPNO ENAME
---------- --------------------
      7844 TURNER

The query succeeds even though tablespace p1 is offline: empno 7844
hashes to part_2, so partition elimination never touches the offline
partition.

Partitioning Facts

Reduced administrative burden

Performing operations on small objects is
  o Easier
  o Faster (each individual operation is; total time might increase)
  o Less resource intensive

Partitioning

SQL> create table big_table1
  2  ( ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME,
  3    OBJECT_ID, DATA_OBJECT_ID,
  4    OBJECT_TYPE, CREATED, LAST_DDL_TIME,
  5    TIMESTAMP, STATUS, TEMPORARY,
  6    GENERATED, SECONDARY )
  7  tablespace big1
  8  as
  9  select ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 10    OBJECT_ID, DATA_OBJECT_ID,
 11    OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 12    TIMESTAMP, STATUS, TEMPORARY,
 13    GENERATED, SECONDARY
 14    from big_table.big_table;
Table created. (10,000,000 rows)

Partitioning

SQL> create table big_table2
  2  ( ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME,
  3    OBJECT_ID, DATA_OBJECT_ID,
  4    OBJECT_TYPE, CREATED, LAST_DDL_TIME,
  5    TIMESTAMP, STATUS, TEMPORARY,
  6    GENERATED, SECONDARY )
  7  partition by hash(id)
  8  (partition part_1 tablespace big2,
  9   partition part_2 tablespace big2,
 10   partition part_3 tablespace big2,
 11   partition part_4 tablespace big2,
 12   partition part_5 tablespace big2,
 13   partition part_6 tablespace big2,
 14   partition part_7 tablespace big2,
 15   partition part_8 tablespace big2
 16  )
 17  as
 18  select ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 19    OBJECT_ID, DATA_OBJECT_ID,
 20    OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 21    TIMESTAMP, STATUS, TEMPORARY,
 22    GENERATED, SECONDARY
 23    from big_table.big_table;
Table created.

Partitioning

SQL> select b.tablespace_name,
  2         mbytes_alloc,
  3         mbytes_free
  4    from ( select round(sum(bytes)/1024/1024) mbytes_free,
  5                  tablespace_name
  6             from dba_free_space
  7            group by tablespace_name ) a,
  8         ( select round(sum(bytes)/1024/1024) mbytes_alloc,
  9                  tablespace_name
 10             from dba_data_files
 11            group by tablespace_name ) b
 12   where a.tablespace_name (+) = b.tablespace_name
 13     and b.tablespace_name in ('BIG1','BIG2')
 14  /

TABLESPACE MBYTES_ALLOC MBYTES_FREE
---------- ------------ -----------
BIG1               1496         344
BIG2               1496         344

Partitioning

We would need a lot of free space (resource) to move this table: a move
requires two copies at once.

SQL> alter table big_table1 move;
alter table big_table1 move
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 1024 in tablespace BIG1

Partitioning

We cannot move this table as a whole either, but...

SQL> alter table big_table2 move;
alter table big_table2 move
*
ERROR at line 1:
ORA-14511: cannot perform operation on a partitioned object

Partitioning

SQL> alter table big_table2 move partition part_1;
Table altered.
SQL> alter table big_table2 move partition part_2;
Table altered.
SQL> alter table big_table2 move partition part_3;
Table altered.
SQL> alter table big_table2 move partition part_4;
Table altered.
SQL> alter table big_table2 move partition part_5;
Table altered.
SQL> alter table big_table2 move partition part_6;
Table altered.
SQL> alter table big_table2 move partition part_7;
Table altered.
SQL> alter table big_table2 move partition part_8;
Table altered.

We move each small partition one by one

Partitioning

Of course, we would likely automate this process

SQL> begin
  2    for x in ( select partition_name
  3                 from user_tab_partitions
  4                where table_name = 'BIG_TABLE2' )
  5    loop
  6      execute immediate
  7        'alter table big_table2 move partition ' ||
  8        x.partition_name;
  9    end loop;
 10  end;
 11  /
PL/SQL procedure successfully completed.

Partitioning

Took less free space
If something failed, we only lost 1/8th of the work (8 partitions)
You would need less UNDO space at any single point in time
You can spread the work out over many days
  8 hours to rebuild the entire table, versus
  2 hours to rebuild a partition; take a week to rebuild the table a
  partition at a time

Partitioning: Enhanced Statement Performance

Read query performance

Partition elimination is important
  Mostly a warehouse/reporting event
  In OLTP, partitioning rarely improves read query performance
  You must be careful not to negatively impact it (more on that in
  indexing)
  Occasionally, it can increase read performance due to clustering
    List partition by region: the application queries by region, so all
    data on a given block is for that region (see the sketch below)
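
A minimal sketch of that clustering idea (the table, columns, and region
values are invented for illustration):

  CREATE TABLE orders
  ( order_id   int,
    region     varchar2(10),
    order_data varchar2(100)
  )
  PARTITION BY LIST (region)
  ( PARTITION p_east  VALUES ('EAST'),
    PARTITION p_west  VALUES ('WEST'),
    PARTITION p_other VALUES (DEFAULT)
  );

  -- A regional query reads only p_east, and every block it touches
  -- holds nothing but EAST rows, so fewer blocks answer the predicate
  SELECT * FROM orders WHERE region = 'EAST';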

Partitioning: Enhanced Statement Performance

Write query performance

Reduced contention
  Instead of 1 index with 1 hot block, you have N indexes with 1 hot
  block each (see the sketch below)
  Instead of one set of freelists (be they ASSM or MSSM), you have N
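
As a sketch of splitting one hot index into N, a hash-partitioned global
index (available in 10g and later) on the invented ORDERS table from the
previous sketch:

  -- A sequence-filled key makes the rightmost index block hot; hash
  -- partitioning the index spreads inserts over 8 separate index
  -- structures, each with its own (much cooler) right-hand side
  CREATE INDEX orders_id_idx ON orders (order_id)
    GLOBAL PARTITION BY HASH (order_id) PARTITIONS 8;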

Partitioning - Schemes

Range & Interval (interval is sketched below)
Hash
List
Reference
Virtual Column
Composite
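
Interval partitioning (new in 11g) extends range partitioning so the
database creates new partitions automatically as data arrives; a minimal
sketch with invented names:

  CREATE TABLE audit_trail
  ( ts  date,
    msg varchar2(100)
  )
  PARTITION BY RANGE (ts)
  INTERVAL (numtoyminterval(1,'MONTH'))
  ( PARTITION p0 VALUES LESS THAN (to_date('01-jan-2010','dd-mon-yyyy'))
  );
  -- Inserting a row dated after 01-jan-2010 silently creates the
  -- monthly partition that covers it; no ADD PARTITION maintenance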

Partitioning

Composite Partitioning (release in which each combination appeared)

                     Subpartition by
Partition by     Range      List       Hash
Range            11gR1      9i         8i
List             11gR1      11gR1      11gR1
Hash             11gR2      11gR2      11gR2
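
As an example of reading the matrix, composite range-list (the Range row,
List column above), with invented names:

  CREATE TABLE sales
  ( sale_date date,
    region    varchar2(10),
    amount    number
  )
  PARTITION BY RANGE (sale_date)
  SUBPARTITION BY LIST (region)
  ( PARTITION p_2010 VALUES LESS THAN (to_date('01-jan-2011','dd-mon-yyyy'))
    ( SUBPARTITION p_2010_east VALUES ('EAST'),
      SUBPARTITION p_2010_west VALUES ('WEST')
    ),
    PARTITION p_2011 VALUES LESS THAN (to_date('01-jan-2012','dd-mon-yyyy'))
    ( SUBPARTITION p_2011_east VALUES ('EAST'),
      SUBPARTITION p_2011_west VALUES ('WEST')
    )
  );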

Indexing

Local and Global Indexes

LOCAL INDEX
  Equipartition the index with the table: for every table partition,
  there will be an index partition that indexes just that table
  partition. All of the entries in a given index partition point to a
  single table partition, and all of the rows in a single table
  partition are represented in a single index partition.

GLOBAL INDEX
  Partition the index by range or hash: here the index is partitioned by
  range, or optionally in Oracle 10g and above by hash, and a single
  index partition may point to any (and all) table partitions.

Which One to Use?

Local indexes are the first choice, if they make sense
  The partition key almost certainly must be referenced in the predicate
    Otherwise you will scan ALL index partitions
  Most prevalent in warehouse systems
    Less so in OLTP, to a degree

Which One to Use?

Global indexes are the second choice
  They affect the speed and resources used by partition operations; they
  can be maintained, however, so the indexes never have to become
  unusable
  Necessary for uniqueness when the indexed attributes are not part of
  the partition key
  Necessary for runtime query performance when the table partition key
  is not part of the where clause

Local Indexes

Two types are defined
  Local prefixed: the partition key is on the leading edge of the index
  Local nonprefixed: the partition key is NOT on the leading edge

Both can use partition elimination
Both can support uniqueness
There is nothing inherently better about prefixed versus nonprefixed

Local Indexes

ops$tkyte%ORA11GR2> CREATE TABLE partitioned_table
  2  ( a    int,
  3    b    int,
  4    data char(20)
  5  )
  6  PARTITION BY RANGE (a)
  7  (
  8  PARTITION part_1 VALUES LESS THAN(2) tablespace p1,
  9  PARTITION part_2 VALUES LESS THAN(3) tablespace p2
 10  )
 11  /
Table created.

Local Indexes

ops$tkyte%ORA11GR2> create index local_prefixed on partitioned_table (a,b) local;
Index created.

ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select * from partitioned_table where a=1 and b=2;

Execution Plan
----------------------------------------------------------
Plan hash value: 1622054381

--------------------------------------------------------------------------------
| Id  | Operation                          | Name              | Pstart| Pstop |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                   |       |       |
|   1 |  PARTITION RANGE SINGLE            |                   |     1 |     1 |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID| PARTITIONED_TABLE |     1 |     1 |
|*  3 |    INDEX RANGE SCAN                | LOCAL_PREFIXED    |     1 |     1 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("A"=1 AND "B"=2)

Note
-----
   - dynamic sampling used for this statement (level=2)

Local Indexes

ops$tkyte%ORA11GR2> drop index local_prefixed;
Index dropped.

ops$tkyte%ORA11GR2> create index local_nonprefixed on partitioned_table (b) local;
Index created.

ops$tkyte%ORA11GR2> select * from partitioned_table where a=1 and b=2;

Execution Plan
----------------------------------------------------------
Plan hash value: 904532382

--------------------------------------------------------------------------------
| Id  | Operation                          | Name              | Pstart| Pstop |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                   |       |       |
|   1 |  PARTITION RANGE SINGLE            |                   |     1 |     1 |
|*  2 |   TABLE ACCESS BY LOCAL INDEX ROWID| PARTITIONED_TABLE |     1 |     1 |
|*  3 |    INDEX RANGE SCAN                | LOCAL_NONPREFIXED |     1 |     1 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("A"=1)
   3 - access("B"=2)

Local Indexes - Uniqueness

Local indexes can be used to enforce UNIQUE/PRIMARY KEY constraints
  But the partition key must be included in the constraint itself
  We enforce uniqueness within an index partition, never across
  partitions
  Thus you cannot range partition a table by a date field and have a
  primary key index on ID that is local
  You have to use global indexes for that (see the sketch below)
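
A sketch of the restriction (the table is invented; the error is the
documented ORA-14039):

  CREATE TABLE audit_t ( id int, dt date )
  PARTITION BY RANGE (dt)
  ( PARTITION p1 VALUES LESS THAN (to_date('01-jan-2011','dd-mon-yyyy')),
    PARTITION p2 VALUES LESS THAN (MAXVALUE)
  );

  -- Fails: a unique LOCAL index must include the partition key DT
  CREATE UNIQUE INDEX audit_t_pk ON audit_t(id) LOCAL;
  -- ORA-14039: partitioning columns must form a subset of key columns
  --            of a UNIQUE index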

Global Indexes

Partitioned using a scheme different from the table's
  The table might have 10 range partitions by date
  The index might have 5 range partitions by region

There are only global prefixed indexes; there is no such thing as a
nonprefixed global index
  The partition key for the global index is on the leading edge of the
  index every time (a sketch follows)
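
A minimal sketch of a global range-partitioned index on the
PARTITIONED_TABLE from earlier (the partition bounds are invented; a
global range-partitioned index must end with a MAXVALUE partition):

  CREATE INDEX global_range_idx ON partitioned_table (b)
    GLOBAL PARTITION BY RANGE (b)
    ( PARTITION idx_p1  VALUES LESS THAN (1000),
      PARTITION idx_max VALUES LESS THAN (MAXVALUE)
    );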

Index only what you need

New in 11gR2: you can index only part of a table
  Maybe just the most current data needs an index
    Older data would be full scanned
  Query plans can be generated that take this into consideration

Index only what you need

ops$tkyte%ORA11GR2> CREATE TABLE t
  2  (
  3    dt date,
  4    x  int,
  5    y  varchar2(30)
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION part1 VALUES LESS THAN (to_date('01-jan-2010','dd-mon-yyyy')) ,
 10    PARTITION part2 VALUES LESS THAN (to_date('01-jan-2011','dd-mon-yyyy')) ,
 11    PARTITION junk VALUES LESS THAN (MAXVALUE)
 12  )
 13  /
Table created.

ops$tkyte%ORA11GR2> insert into t
  2  select to_date('01-jun-2010','dd-mon-yyyy'), rownum, object_name
  3  from all_objects;
71923 rows created.

ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats(user,'T');

Index only what you need

ops$tkyte%ORA11GR2> create index t_idx on t(x) local unusable;
Index created.

ops$tkyte%ORA11GR2> alter index t_idx rebuild partition part2;
Index altered.

Index only what you need

ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select * from t where x = 42;

------------------------------------------------------------
| Id  | Operation                            | Pstart| Pstop  |
------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |        |
|   1 |  VIEW                                |       |        |
|   2 |   UNION-ALL                          |       |        |
|   3 |    PARTITION RANGE SINGLE            |     2 |      2 |
|   4 |     TABLE ACCESS BY LOCAL INDEX ROWID|     2 |      2 |
|*  5 |      INDEX RANGE SCAN                |     2 |      2 |
|   6 |    PARTITION RANGE OR                |KEY(OR)|KEY(OR) |
|*  7 |     TABLE ACCESS FULL                |KEY(OR)|KEY(OR) |
------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access("X"=42)
   7 - filter("X"=42 AND ("T"."DT"<TO_DATE(' 2010-01-01 00:00:00',
       'syyyy-mm-dd hh24:mi:ss') OR "T"."DT">=TO_DATE(' 2011-01-01
       00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "T"."DT" IS NULL))

Statistics

Partitioning statistics come in two flavors
  Local (partition-level)
  Global (table-level)

Gathering Statistics

Strategy for new databases
  Create tables
  Optionally run (or explain) queries on empty tables
    Primes / seeds the optimizer
  Enable incremental statistics
    For large partitioned tables
  Load data
  Gather statistics
    Use the defaults
  Create indexes (if required!)

Gathering Statistics

Incremental Statistics
  One of the biggest problems with large tables is keeping the schema
  statistics up to date and accurate
  This is particularly challenging in a Data Warehouse, where tables
  continue to grow and so the statistics gathering time and resources
  grow proportionately
  To address this problem, 11.1 introduced the concept of incremental
  statistics for partitioned objects
  This means that statistics are gathered only for recently modified
  partitions (see the sketch below)
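
A minimal sketch of enabling this through DBMS_STATS table preferences
(SALES is a placeholder table name):

  -- Mark the table for incremental, synopsis-based statistics
  exec dbms_stats.set_table_prefs(user, 'SALES', 'INCREMENTAL', 'TRUE');

  -- Subsequent default gathers rescan only changed partitions; global
  -- statistics are derived from the stored synopses
  exec dbms_stats.gather_table_stats(user, 'SALES');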

Gathering Statistics

The Concept of Synopses
  It is not possible to simply add partition statistics together to
  create an up-to-date set of global statistics
  This is because the Number of Distinct Values (NDV) for a partition
  may include values common to multiple partitions
  To resolve this problem, compressed representations of the distinct
  values of each column are created in a structure in the SYSAUX
  tablespace known as a synopsis

Gathering Statistics

Synopsis Example

Object            Column Values    NDV
Partition #1      1,1,3,4,5        4
Partition #2      1,2,3,4,5        5
NDV by addition                    9   WRONG
NDV by synopsis                    5   CORRECT

Compression

Direct Path Table Compression

Introduced in Oracle9i Release 2
  Supports compression during bulk load operations (direct load, CTAS,
  ALTER ... MOVE, INSERT /*+ APPEND */)
  Data modified using conventional DML is not compressed

Optimized compression algorithm for relational data

Improved performance for queries accessing large amounts of data
  Fewer IOs
  Buffer cache efficiency

Direct Path Table Compression

Data is compressed at the database block level
  Each block contains its own compression metadata, which improves IO
  efficiency
  The local symbol table dynamically adapts to data changes

Compression can be specified at either the table or partition level (see
the sketch below)
Completely transparent to applications
Noticeable impact on write performance
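
For instance, basic compression is typically applied at bulk-load time or
by rebuilding a segment (table names are invented):

  -- Compressed at load time via a direct path CTAS
  CREATE TABLE sales_history COMPRESS
  AS SELECT * FROM sales WHERE sale_date < add_months(sysdate, -12);

  -- Or compress an existing segment by rebuilding it
  ALTER TABLE sales_history MOVE COMPRESS;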

OLTP Table Compression

Oracle Database 11g extends compression to OLTP data
  Support for conventional DML operations (INSERT, UPDATE, DELETE)

New algorithm significantly reduces write overhead
  Batched compression ensures no impact for most OLTP transactions

No impact on reads
  Reads may actually see improved performance due to fewer IOs and
  enhanced memory efficiency

OLTP Table Compression

[Diagram: the block fill cycle - inserts land in the block uncompressed;
when block usage reaches PCTFREE, compression is triggered (a brief burst
of overhead) and the block is compacted; inserts are again uncompressed
until block usage reaches PCTFREE and triggers compression once more.]

Adaptable, continuous compression
  Compression is automatically triggered when block usage reaches
  PCTFREE
  Compression eliminates holes created due to deletions and maximizes
  contiguous free space in the block

OLTP Table Compression

Employee Table

ID  FIRST_NAME  LAST_NAME
1   John        Doe
2   Jane        Doe
3   John        Smith
4   Jane        Doe

[Diagram: the initially uncompressed block - block header, then the rows
stored in full as 1JohnDoe 2JaneDoe 3JohnSmith 4JaneDoe, then free
space.]

INSERT INTO EMPLOYEE VALUES (5, 'Jack', 'Smith');
COMMIT;

OLTP Table Compression

Employee Table

ID  FIRST_NAME  LAST_NAME
1   John        Doe
2   Jane        Doe
3   John        Smith
4   Jane        Doe
5   Jack        Smith

[Diagram: the compressed block - the block header is followed by a local
symbol table (John= | Doe= | Jane= | Smith=); each row now stores short
references into that symbol table instead of the full values, leaving
more free space in the block.]

OLTP Table Compression

[Diagram: side-by-side comparison - the uncompressed block stores every
value in full (1JohnDoe 2JaneDoe 3JohnSmith 4JaneDoe 5JackSmith); the
compressed block stores the local symbol table John=|Doe=|Jane=|Smith=
plus compact row references, leaving far more free space.]

More data per block

Using OLTP Table Compression

Requires database compatibility level 11.1 or greater

New syntax extends the COMPRESS keyword
  COMPRESS [FOR {ALL | DIRECT_LOAD} OPERATIONS]
    DIRECT_LOAD (default): refers to bulk load operations, as in 10g and
    prior releases
    ALL: OLTP + direct loads

Enable compression for new tables
  CREATE TABLE t1 COMPRESS FOR ALL OPERATIONS

Enable only direct load compression on an existing table
  ALTER TABLE t2 COMPRESS
    Only new rows are compressed; existing rows stay uncompressed

Applying compression with Partitioning

Challenge:
  Want to minimize storage
  Do not want to use the Advanced Compression option, for whatever
  reason
  In OLTP, so no direct path options
  Backup friendly

Applying compression with Partitioning

A current online, read-write tablespace that gets backed up like every
other normal tablespace in our system. The audit trail information in
this tablespace is not compressed, and it is constantly inserted into.

A read-only tablespace containing the year-to-date audit trail partitions
in a compressed format. At the beginning of each month, we make this
tablespace read-write, move and compress last month's audit information
into this tablespace, make it read-only again, and back it up once that
month (sketched below).

A series of tablespaces for last year, the year before, and so on. These
are all read-only and might even be on slow, cheap media. In the event of
a media failure, we just need to restore from backup. We would
occasionally pick a year at random from our backup sets to ensure they
are still restorable (tapes go bad sometimes).
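
The monthly roll described above might look like this sketch (the
tablespace, table, and partition names are invented; any local index
partitions would need a rebuild afterward):

  -- Beginning of month: open this year's compressed tablespace
  ALTER TABLESPACE audit_2012 READ WRITE;

  -- Move and compress last month's partition into it
  ALTER TABLE audit_trail MOVE PARTITION audit_may_2012
    TABLESPACE audit_2012 COMPRESS;

  -- Close it again, then back it up once for the month
  ALTER TABLESPACE audit_2012 READ ONLY;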

Purging

Best facilitated by partitioning
  Uses DDL: no undo, no redo - unless you have global indexes, which
  will need to be maintained or rebuilt

If you cannot use partitioning
  Use DDL: CREATE TABLE AS SELECT <rows to keep> instead of DELETE (see
  the sketch below)
  DELETE is the single most resource-intensive statement out there
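
A sketch of that keep-the-rows approach for a non-partitioned table
(names and retention window are invented; indexes, grants, and
constraints must be recreated on the new table):

  -- Copy only the rows to keep; CTAS avoids DELETE's undo/redo cost
  CREATE TABLE audit_keep NOLOGGING
  AS SELECT * FROM audit_trail
     WHERE ts >= add_months(sysdate, -36);

  DROP TABLE audit_trail;
  RENAME audit_keep TO audit_trail;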

Sliding Windows of Data

Challenge:
  Keep N years/months/whatever of data online
  Have the data be constantly available
  Purge old data
  Add new data
  Support an efficient indexing scheme (keeping availability in mind)
  Support efficient storage (use indexes on current data, mostly)

Sliding Windows of Data

We'll walk through how to do it:

  Detach the old data: the oldest partition is either dropped or
  exchanged with an empty table to permit archiving of the old data.

  Load and index the new data: the new data is loaded into a work table
  and indexed and validated.

  Attach the new data: once the new data is loaded and processed, the
  table it is in is exchanged with an empty partition in the partitioned
  table, turning this newly loaded data in a table into a partition of
  the larger partitioned table.

Sliding Window

ops$tkyte@ORA11GR2> CREATE TABLE partitioned
  2  ( timestamp date,
  3    id        int
  4  )
  5  PARTITION BY RANGE (timestamp)
  6  (
  7  PARTITION fy_2004 VALUES LESS THAN
  8  ( to_date('01-jan-2005','dd-mon-yyyy') ) ,
  9  PARTITION fy_2005 VALUES LESS THAN
 10  ( to_date('01-jan-2006','dd-mon-yyyy') )
 11  )
 12  /
Table created.

ops$tkyte@ORA11GR2> insert into partitioned partition(fy_2004)
  2  select to_date('31-dec-2004','dd-mon-yyyy')-mod(rownum,360), object_id
  3  from all_objects
  4  /
72090 rows created.

ops$tkyte@ORA11GR2> insert into partitioned partition(fy_2005)
  2  select to_date('31-dec-2005','dd-mon-yyyy')-mod(rownum,360), object_id
  3  from all_objects
  4  /
72090 rows created.

Sliding Window

ops$tkyte@ORA11GR2> create index partitioned_idx_local
  2  on partitioned(id)
  3  LOCAL
  4  /
Index created.

ops$tkyte@ORA11GR2> create index partitioned_idx_global
  2  on partitioned(timestamp)
  3  GLOBAL
  4  /
Index created.

Sliding Window

ops$tkyte@ORA11GR2> create table fy_2004 ( timestamp date, id int );
Table created.

ops$tkyte@ORA11GR2> create index fy_2004_idx on fy_2004(id)
  2  /
Index created.

The empty table to archive to

Sliding Window

ops$tkyte@ORA11GR2> create table fy_2006 ( timestamp date, id int );
Table created.

ops$tkyte@ORA11GR2> insert into fy_2006
  2  select to_date('31-dec-2006','dd-mon-yyyy')-mod(rownum,360), object_id
  3  from all_objects
  4  /
72097 rows created.

ops$tkyte@ORA11GR2> create index fy_2006_idx on fy_2006(id) nologging
  2  /
Index created.

The data to be loaded

Sliding Window

ops$tkyte@ORA11GR2> alter table partitioned
  2  exchange partition fy_2004
  3  with table fy_2004
  4  including indexes
  5  without validation
  6  /
Table altered.

ops$tkyte@ORA11GR2> alter table partitioned
  2  drop partition fy_2004
  3  /
Table altered.

That is our purge or archive operation
  No data was touched

Sliding Window

ops$tkyte@ORA11GR2> alter table partitioned
  2  add partition fy_2006
  3  values less than ( to_date('01-jan-2007','dd-mon-yyyy') )
  4  /
Table altered.

ops$tkyte@ORA11GR2> alter table partitioned
  2  exchange partition fy_2006
  3  with table fy_2006
  4  including indexes
  5  without validation
  6  /
Table altered.

That was our load
  No data was touched

Sliding Window

ops$tkyte@ORA11GR2> select index_name, status from user_indexes;

INDEX_NAME                     STATUS
------------------------------ --------
FY_2006_IDX                    VALID
FY_2004_IDX                    VALID
PARTITIONED_IDX_GLOBAL         UNUSABLE
PARTITIONED_IDX_LOCAL          N/A

However, we have a problem
  Global indexes go unusable

Sliding Window

ops$tkyte@ORA11GR2> alter table partitioned
  2  exchange partition fy_2004
  3  with table fy_2004
  4  including indexes
  5  without validation
  6  UPDATE GLOBAL INDEXES
  7  /
Table altered.

ops$tkyte@ORA11GR2> alter table partitioned
  2  drop partition fy_2004
  3  UPDATE GLOBAL INDEXES
  4  /
Table altered.

Online operation; generates redo and undo
  But 100% availability

Sliding Window

ops$tkyte@ORA11GR2> alter table partitioned
  2  add partition fy_2006
  3  values less than ( to_date('01-jan-2007','dd-mon-yyyy') )
  4  /
Table altered.

ops$tkyte@ORA11GR2> alter table partitioned
  2  exchange partition fy_2006
  3  with table fy_2006
  4  including indexes
  5  without validation
  6  UPDATE GLOBAL INDEXES
  7  /
Table altered.

Same here

Sliding Window

ops$tkyte@ORA11GR2> select index_name, status from user_indexes;

INDEX_NAME                     STATUS
------------------------------ --------
FY_2006_IDX                    VALID
FY_2004_IDX                    VALID
PARTITIONED_IDX_GLOBAL         VALID
PARTITIONED_IDX_LOCAL          N/A

6 rows selected.

Data was never unavailable
  The operation did take longer
  But so what?

Backing Up

The fastest way to do something is to not do it

Backing Up

Use read-only tablespaces
  Incorporate sliding tablespaces with your sliding windows of data
  Back up once, never again (see the sketch below)
  Put your local indexes in with your table partitions, or just don't
  back up indexes - it is often as fast or faster to recreate them in
  the event of media failure
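
In backup terms, a hedged sketch (the tablespace name is invented, and
RMAN configuration varies by site):

  SQL> alter tablespace audit_2011 read only;

  RMAN> BACKUP TABLESPACE audit_2011;
  # back it up once

  RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
  # subsequent whole-database backups skip read-only files that already
  # have a valid backup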

Backing Up

Don't back up indexes
  Even in a read/write environment
  They might represent 50-60% of your database volume
  As easy to recreate in parallel/nologging as it would be to restore
    Easier, perhaps

Backing Up

Use true incrementals
  Available with block change tracking in EE since 10g
  Demands a disk-based backup
  We catch the backup up by applying only changed blocks to it (see the
  sketch below)
  Now the time to back up a 100TB OLTP system is the same as a 100GB
  system (assuming the same transaction rates)
  Time to back up is a function of how much data is modified, not of
  database size
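
A minimal sketch of the RMAN incrementally updated backup pattern this
describes (the tag and tracking-file path are arbitrary):

  SQL> alter database enable block change tracking
    2  using file '/u01/oradata/bct.chg';

  # Run daily: roll yesterday's level 1 into the image copy, then take
  # a new level 1 containing only the blocks changed since
  RMAN> RECOVER COPY OF DATABASE WITH TAG 'incr_upd';
  RMAN> BACKUP INCREMENTAL LEVEL 1
          FOR RECOVER OF COPY WITH TAG 'incr_upd'
          DATABASE;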

Backing Up

Use compression wherever available
  Index key compression
  Direct path basic compression
  OLTP compression
  Hybrid Columnar Compression on Exadata/ZFS/Pillar
  SecureFiles compression


Q&A
