
Possible page formats: consecutive slots and slot directory.

The consecutive-slots organization is used for fixed-length record formats;
it handles deletion using bitmaps or linked lists.
The slot-directory organization maintains a directory of slots for each page, with a
_record offset, record length_ pair per slot.
Heap file: a linked list of pages is simple, but finding a page with enough free
space takes longer; the directory-of-pages approach finds such pages faster.
A clustered index offers much better range-query performance, but the
same equality-search performance as an unclustered index.
A clustered index is more expensive to maintain, so an index should be clustered
only if range queries on its search key are important.
If range queries will be performed often, use a B+ tree as the index for the
relation, since hash indexes cannot answer range queries.
For read-only data that will not be modified often, go with a sorted file;
for data we intend to modify often, go with a tree-based index.
The B+ tree: a balanced tree in which the internal nodes direct the search
and the leaf nodes contain the data entries.
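The search path described above can be sketched as follows. This is an illustrative toy structure under my own assumptions (node layout, names like `Node` and `search`), not the textbook's code:

```python
# Minimal B+-tree search sketch: internal nodes hold separator keys and
# children; leaves hold (key -> record) data entries.

class Node:
    def __init__(self, keys, children=None, entries=None):
        self.keys = keys            # sorted separator keys
        self.children = children    # child nodes (internal node) or None
        self.entries = entries      # {key: record} map (leaf node) or None

def search(node, key):
    """Descend from the root: internal nodes direct the search, a leaf answers it."""
    while node.children is not None:                 # still at an internal node
        i = 0
        while i < len(node.keys) and key >= node.keys[i]:
            i += 1                                   # pick the child subtree
        node = node.children[i]
    return node.entries.get(key)                     # leaf: fetch the data entry

# Tiny two-leaf tree for illustration
leaf1 = Node(keys=[], entries={5: "rec5", 10: "rec10"})
leaf2 = Node(keys=[], entries={17: "rec17", 20: "rec20"})
root = Node(keys=[17], children=[leaf1, leaf2])
```

A lookup such as `search(root, 17)` visits one internal node and one leaf, which is why the cost grows with the height of the tree rather than the number of records.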
Log: an ordered list of REDO/UNDO actions. A log record contains:
<XID, pageID, offset, length, old data, new data>
The Write-Ahead Logging protocol:
1. Must force the log record for an update before the corresponding data page gets to disk.
2. Must write all log records for an Xact before commit.
#1 guarantees Atomicity. #2 guarantees Durability.
Each log record has a unique Log Sequence Number (LSN).
Each data page contains a pageLSN: the LSN of the most recent log record for an update to that page.
The system keeps track of flushedLSN: the max LSN flushed so far.
WAL: before a page is written, pageLSN <= flushedLSN.
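The WAL rule can be sketched in a few lines. This is a hedged toy model (the `Log` class and `flush_page` are invented names, not a real DBMS API): before a dirty page reaches disk, the log is forced up to that page's pageLSN so the invariant pageLSN <= flushedLSN holds.

```python
# Toy model of the WAL rule: force the log tail before flushing a data page.

class Log:
    def __init__(self):
        self.records = []        # in-memory log tail
        self.flushed_lsn = -1    # max LSN durably on disk so far

    def append(self, rec):
        self.records.append(rec)
        return len(self.records) - 1      # this record's LSN

    def flush_to(self, lsn):
        # Pretend the log tail up to `lsn` is written to stable storage.
        self.flushed_lsn = max(self.flushed_lsn, lsn)

def flush_page(page, log):
    """Write a dirty page, forcing the log first if WAL would be violated."""
    if page["pageLSN"] > log.flushed_lsn:
        log.flush_to(page["pageLSN"])     # force log records up to pageLSN
    assert page["pageLSN"] <= log.flushed_lsn
    page["on_disk"] = True                # now the page itself may go out

wal_log = Log()
lsn = wal_log.append(("T1", "update page P1"))
page = {"pageLSN": lsn, "on_disk": False}
flush_page(page, wal_log)                 # forces the log before the page
```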
The DBMS creates checkpoints in order to minimize the time taken to recover in the event of a system crash.
Recovery works in 3 phases:
Analysis: forward from the checkpoint. Redo: forward from the oldest recLSN.
Undo: backward from the end of the log to the first LSN of the oldest Xact alive at the crash.
Upon Undo, write CLRs. Redo repeats history.
Get the lastLSN of the Xact from the Xact table; follow the chain of log records backward via the prevLSN field.
Before starting UNDO, write an Abort log record, for recovering from a crash during UNDO.
To perform UNDO, must have a lock on the data.
Before restoring the old value of a page, write a CLR (Compensation Log Record).
A CLR has one extra field, undonextLSN, which points to the next LSN to undo
(i.e., the prevLSN of the record we are currently undoing).
CLRs are never undone. At the end of UNDO, write an end log record.
Commit: write a commit record to the log.
All log records up to the Xact's lastLSN are flushed;
this guarantees that flushedLSN >= lastLSN.
Commit() returns.
Write an end record to the log.
If you read a value once in your transaction and read it again later in the same transaction:
if you do not get the same value, the read is unrepeatable;
if you get the same value, it is repeatable.

LRU + repeated sequential scans problem: sequential flooding.

Clock policy:
do for each page in cycle {
    if (pincount == 0 && ref bit is on)
        turn off ref bit;
    else if (pincount == 0 && ref bit is off)
        choose this page for replacement;
} until a page is chosen;
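The loop above translates almost directly into code. A minimal sketch, assuming frames are dicts with a pin count and a referenced bit, and that at least one frame is eventually evictable:

```python
# Clock (second-chance) replacement over a list of frames.
# Each frame: {"pin": pin count, "ref": referenced bit}.

def clock_choose(frames, hand):
    """Return (victim frame index, new clock-hand position)."""
    while True:
        f = frames[hand]
        if f["pin"] == 0 and f["ref"]:
            f["ref"] = False                          # give a second chance
        elif f["pin"] == 0 and not f["ref"]:
            return hand, (hand + 1) % len(frames)     # choose this page
        hand = (hand + 1) % len(frames)               # advance the clock hand

frames = [{"pin": 1, "ref": True},    # pinned: never a victim
          {"pin": 0, "ref": True},    # referenced: ref bit turned off, skipped
          {"pin": 0, "ref": False}]   # unpinned, unreferenced: the victim
victim, hand = clock_choose(frames, 0)
```

Note how the pinned frame is skipped entirely and the referenced frame merely loses its ref bit; only on a later pass would it become a victim.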
Seek time (moving arms to position the disk head on a track): max 20 ms.
Rotational delay (waiting for blocks to rotate under the head): max 10 ms.
Transfer time (moving data to/from disk): approx. 1 ms per 4 KB.
Avg rotational delay is half of the rotation time. Block size cannot exceed track size.
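A back-of-envelope access time for one 4 KB block follows from the figures above: seek + half a rotation + transfer. The 10 ms seek in the example run is an assumed average, not a value from these notes:

```python
# Access time for one block = seek + average rotational delay + transfer,
# where the average rotational delay is half the full rotation time.

def access_time_ms(seek_ms, full_rotation_ms, transfer_ms_per_block):
    return seek_ms + full_rotation_ms / 2 + transfer_ms_per_block

# Assumed 10 ms seek, 10 ms full rotation, 1 ms transfer for a 4 KB block:
t = access_time_ms(10, 10, 1)   # 10 + 5 + 1 = 16 ms
```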
File: a collection of pages that supports insert/modify/read/scan.
Heap files: the simplest organization, containing records in no particular order.
A variable-length record format with a field-offset directory offers direct access
to the ith field and support for null values.
Slotted page format supports variable length records
Catalog relations store info about relations, views, and indexes.
Index: speeds up selections on its search-key fields; contains a collection of data entries on the key.
Clustered: the order of data records is close to the order of data entries.
Insert/delete cost of a B+ tree = log_Fanout(no. of leaf pages).
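The formula above can be checked with a quick computation; the fanout and leaf-page count below are invented example numbers:

```python
import math

# Levels traversed (and hence page I/Os per insert/delete, times D)
# is roughly log_fanout(number of leaf pages).
def btree_levels(fanout, leaf_pages):
    return math.ceil(math.log(leaf_pages, fanout))

# 1,000,000 leaf pages with fanout 100 -> 3 levels to traverse
levels = btree_levels(100, 1_000_000)
```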
Static hashing: the number of primary pages is fixed and never deallocated; overflow pages are used.
Cost of operations (a) Scan, (b) Equality, (c) Range, (d) Insert, (e) Delete
("Search" below means the cost of the corresponding equality search):

(1) Heap file:
    Scan: BD; Equality: 0.5BD; Range: BD; Insert: 2D; Delete: Search + D.
(2) Sorted file:
    Scan: BD; Equality: D log2 B; Range: D log2 B + # matches;
    Insert: Search + BD; Delete: Search + BD.
(3) Clustered tree index:
    Scan: 1.5BD; Equality: D logF 1.5B + C log2 R;
    Range: D logF 1.5B + C log2 R + # matches;
    Insert: Search + D; Delete: Search + D.
(4) Unclustered tree index:
    Scan: BD(R + 0.15); Equality: D(1 + logF 0.15B) + C log2 6.7R;
    Range: D(logF 0.15B) + C log2 6.7R + # matches;
    Insert: D(3 + logF 0.15B) + C log2 6.7R; Delete: Search + 2D.
(5) Unclustered hash index:
    Scan: BD(R + 0.125); Equality: 2D; Range: BD; Insert: 4D; Delete: Search + 2D.
B: no. of data blocks per file.
R: no. of records per block. D: avg time to read/write a page (1 disk I/O).
C: cost of processing 1 record.
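Two cells of the table can be sanity-checked directly, heap vs sorted-file equality search; the numeric inputs are made up for illustration:

```python
import math

# Heap equality search: scan half the file on average -> 0.5 * B * D.
def heap_equality(B, D):
    return 0.5 * B * D

# Sorted-file equality search: binary search over the pages -> D * log2(B).
def sorted_equality(B, D):
    return D * math.log2(B)

heap_cost = heap_equality(100, 5)       # 0.5 * 100 * 5 = 250 time units
sorted_cost = sorted_equality(128, 5)   # 5 * log2(128) = 35 time units
```

Even at a modest 100-odd pages, binary search on a sorted file beats the expected half-file scan of a heap by an order of magnitude, which is the point the table is making.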
Dirty read: T1 reads a value changed by T2 before T2 commits.
Phantom problem: a transaction reads two different results for the same query on two
different calls, even though it didn't change the value itself.
3 characteristics of a SQL transaction: access mode, diagnostics size, isolation level.
4 isolation levels: READ UNCOMMITTED (dirty read, unrepeatable read, phantom);
READ COMMITTED (unrepeatable read, phantom);
REPEATABLE READ (phantom); SERIALIZABLE (none).
Recoverable schedule: a transaction commits only after all transactions whose changes it reads have committed.
Guaranteed by Strict 2PL, not by plain 2PL.
Two schedules S and S' are view equivalent if: 1) each Ti reads the same initial value of Q in S and S';
2) if Tj writes Q and Ti later reads Q in S, the same must happen in S';
3) the transaction that performs the final write(Q) in S must do so in S' too.
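The three conditions above can be sketched as a small checker. The schedule representation as (transaction, operation, item) triples and all names here are my own assumptions, purely for illustration:

```python
# View-equivalence check: two schedules are view equivalent iff they have the
# same reads-from relation (None = read of the initial value) and the same
# final writer per item.

def view_info(schedule):
    last_writer = {}     # item -> txn of the most recent write so far
    reads_from = set()   # (reader txn, item, writer txn or None for initial)
    for txn, op, item in schedule:
        if op == "R":
            reads_from.add((txn, item, last_writer.get(item)))
        elif op == "W":
            last_writer[item] = txn
    final_writes = set(last_writer.items())   # {(item, final writer)}
    return reads_from, final_writes

def view_equivalent(s1, s2):
    return view_info(s1) == view_info(s2)

# T1 then T2 on item q, vs the reversed order:
s1 = [("T1", "R", "q"), ("T1", "W", "q"), ("T2", "R", "q"), ("T2", "W", "q")]
s2 = [("T2", "R", "q"), ("T2", "W", "q"), ("T1", "R", "q"), ("T1", "W", "q")]
```

In s1, T2 reads q from T1 and performs the final write; in s2 the roles flip, so the two serial orders are not view equivalent, matching conditions 1-3.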