Covers the three isolation anomalies directly described in the 1992 ANSI SQL specification, plus one (P0, Dirty Write) that it only implies.
P0 - Dirty Write
*W1[x]...W2[x]...((C1 or A1) and (C2 or A2) in any order)*
A Dirty Write occurs when one transaction overwrites a value that has previously been written by another still uncommitted transaction.
T1 | T2 |
---|---|
BEGIN | |
W(x) | |
 | BEGIN |
 | W(x) |
COMMIT | |
 | COMMIT |
In the above scenario, T1 writes to a row ‘x’, but before T1 commits, T2 overwrites the same row with a potentially different value.
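This interleaving can be sketched with a plain Python dict standing in for the database — no locks, no isolation, purely illustrative. The before-image rollback is an assumed (simplified) abort mechanism; it shows why dirty writes break recovery: when T1 aborts, restoring its before-image silently destroys T2’s committed write.

```python
# Toy "database": a plain dict with no locking or isolation.
db = {"x": 0}

before_image_t1 = db["x"]  # T1 records the old value for a possible rollback
db["x"] = 1                # W1(x): T1 writes, still uncommitted

db["x"] = 2                # W2(x): T2 dirty-writes over T1's uncommitted value
                           # C2: T2 commits here

db["x"] = before_image_t1  # A1: T1 aborts and restores its before-image,
                           # wiping out T2's *committed* write of 2

assert db["x"] == 0        # T2's committed value is lost
```

Even if both transactions had committed instead, the final value would depend only on which write happened to land last.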
P1 - Dirty Read
*W1[x]...R2[x]...(A1 and C2 in either order)*
Transaction T1 modifies a row. Transaction T2 then reads that row before T1 performs a COMMIT. If T1 then aborts, T2 will have read a value that was never committed.
T1 | T2 |
---|---|
BEGIN | |
W(x) | |
 | BEGIN |
 | R(x) |
ROLLBACK | |
 | COMMIT |
In the above scenario, the value read by T2 might be wrong, because T1 rolls back its write soon after: T2 has acted on data that was never committed.
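The same dict-as-database sketch, again with a before-image standing in for rollback, shows T2 observing a value that subsequently ceases to exist:

```python
# Toy "database": a plain dict with no locking or isolation.
db = {"x": 0}

before_image_t1 = db["x"]   # T1 records the old value for rollback
db["x"] = 1                 # W1(x): T1 writes, still uncommitted

seen_by_t2 = db["x"]        # R2(x): T2 reads T1's uncommitted value (dirty read)
                            # C2: T2 commits, having used that value

db["x"] = before_image_t1   # A1: T1 rolls back

assert seen_by_t2 == 1      # T2 read a value that was never committed
assert db["x"] == 0         # ...and that no longer exists in the database
```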
P2 - Fuzzy/Non-Repeatable Read
*R1[x]...W2[x]...C2...R1[x]...C1*
Transaction T1 reads x and then T2 modifies or deletes x and commits. If T1 then attempts to reread x, it receives a modified value or discovers that the data item has been deleted.
T1 | T2 |
---|---|
BEGIN | |
R(x) | |
 | BEGIN |
 | W(x) |
 | COMMIT |
R(x) | |
COMMIT | |
In the above scenario, the result of the second read by transaction T1 might not be the same as the result of the first read, because transaction T2 has written to the row ‘x’ in between those reads.
This is why it is called a ‘non-repeatable’ read: the same read operation, issued twice within one transaction, is not guaranteed to return the same value.
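In the dict sketch the anomaly is simply two reads of the same key straddling another writer’s committed update:

```python
# Toy "database": a plain dict with no locking or isolation.
db = {"x": 0}

first_read = db["x"]    # R1(x): T1 reads x

db["x"] = 1             # W2(x): T2 modifies x
                        # C2: T2 commits

second_read = db["x"]   # R1(x): T1 rereads x in the *same* transaction

assert first_read == 0
assert second_read == 1  # same read, same transaction, different answer
```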
P3 - Phantom Reads
*R1[P]...W2[y in P]...C2....R1[P]...C1*
Transaction T1 reads a set of rows that satisfy some search query. Transaction T2 then creates (or deletes) rows that satisfy T1’s aforementioned search query and commits. If T1 then repeats its read with the same query, it gets a set of rows different from the first read.
T1 | T2 |
---|---|
BEGIN | |
R(P) | |
 | BEGIN |
 | W(y in P) |
 | COMMIT |
R(P) | |
COMMIT | |
In the above scenario, transaction T1 evaluates a filter predicate ‘P’ twice. In between, transaction T2 writes a row (or several rows) ‘y’ that satisfies P — by inserting new matching rows, deleting matching rows, or modifying rows so that they start or stop matching — and commits. T1’s second read therefore returns a different set of rows than its first. The rows that appear (or disappear) between the two reads are called phantoms.
So essentially, it is the same problem as a fuzzy/non-repeatable read, but instead of the value of a single row changing, it is the set of rows matched by the filter predicate that changes.
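A sketch of the phantom case, using a dict of rows and an (assumed, illustrative) predicate query helper — here the second read of the same predicate returns a row that did not exist the first time:

```python
# Toy "database": row id -> row, no locking or isolation.
db = {1: {"city": "Oslo"}, 2: {"city": "Paris"}}

def query(predicate):
    """Return the ids of all rows satisfying the predicate (the 'P' read)."""
    return {row_id for row_id, row in db.items() if predicate(row)}

p = lambda row: row["city"] == "Oslo"

first_read = query(p)        # R1(P): T1 reads the rows matching P

db[3] = {"city": "Oslo"}     # W2(y in P): T2 inserts a new row matching P
                             # C2: T2 commits

second_read = query(p)       # R1(P): T1 repeats the same predicate read

assert first_read == {1}
assert second_read == {1, 3}  # row 3 is a phantom
```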