"I am somewhat preoccupied telling the Laws of Physics to shut up and sit down." -Vaarsuvius, Order of the Stick Webcomic
I sat in on a session at the PASS Summit in Seattle last week, delivered by Randy Knight, about isolation levels in SQL Server.
Knowing that Oracle handles things differently from SQL Server made me want to look at the subject a little more closely.
As a DBA who spends time in both Oracle and SQL Server, it's important to understand isolation levels in these two RDBMSs and how they work, because their default behavior is very different.
The ANSI SQL standard defines four isolation levels for a relational database management system, ordered below from least isolated to most isolated.
- Read Uncommitted
- Read Committed
- Repeatable Read
- Serializable
Going into the details of what each of these means is outside the scope of what I want to write about today. There's a good write-up on Wikipedia and on any of a dozen or more blogs; just search "Isolation Levels" with your favorite search engine.
It should be noted that SQL Server implements all four of the ANSI standard isolation levels plus one of its own, SNAPSHOT isolation, which itself comes in two flavors: statement level (read committed snapshot) and transaction level (SNAPSHOT). As with the basics, going into the details of snapshot isolation is outside the scope of what I want to talk about here. To learn more about SNAPSHOT isolation, refer to Books Online.
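For reference, all five can be chosen per session in T-SQL. The statements below are just a quick sketch; the SNAPSHOT line assumes the database already has snapshot isolation enabled, which I come back to later.

-- Choosing an isolation level for the current session in T-SQL
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- the default
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;         -- requires ALLOW_SNAPSHOT_ISOLATION ON for the database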
Oracle, on the other hand, implements neither Read Uncommitted nor Repeatable Read, and adds its own extra option called Read Only isolation. You can read more about this option in the Oracle documentation.
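As a rough sketch, the Oracle equivalents look like this (Read Only is requested through SET TRANSACTION rather than as an ANSI isolation level):

-- Choosing Oracle's supported modes for the current transaction
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- the default
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET TRANSACTION READ ONLY;                        -- Oracle's extra Read Only option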
Because I work daily in both Oracle and SQL Server, what I really wanted to find out was how SQL Server's approach to the READ COMMITTED isolation level differs from Oracle's, and what the practical effects of that difference are.
In basic terms, SQL Server takes a "pessimistic" view of data concurrency by default. While a transaction is in flight, SQL Server blocks readers on any row that transaction has changed until the change is committed.
Oracle, on the other hand, takes an "optimistic" view of concurrency by default. While a transaction is in flight, Oracle shows readers the row as it existed before the uncommitted change (reconstructed from undo), so writers do not block readers.
I found an article written by Tom Kyte, Oracle's well-known technology expert, in which he talks about the data inaccuracies that are possible when READ COMMITTED is implemented the way Microsoft does it in SQL Server. I want to summarize it here, since I used it as the basis for my own testing.
The basic setup is like this:
A bank account database has a table of balances containing two columns, account and balance. The table has many hundreds of thousands of rows.
A query is started to sum the balances. The database begins reading pages and accumulating balances. After 200,000 rows have been read, another transaction performs a transfer: an account that has already been read is decremented by $400, and an account that has not yet been read (row 300,000) is incremented by $400.
How does each database system handle this?
The timeline would go like this (I am going to represent the summing of balances by selecting one row at a time):
- Transaction 1: select balance where account = 1;
- Transaction 1: select balance where account = 2;
... etc ...
- Transaction 2: Decrement the value of account 1 by $400;
- Transaction 2: Increment the value of account 300000 by $400;
... etc ...
- Transaction 1: select balance where account = 300000;
... etc ...
- Transaction 2: Commit
Let's start by looking at how Oracle handles it.
I preload a table with three rows. Account 1 has a balance of $500, Account 2 has a balance of $200 and Account 300000 has a balance of $100.
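The queries later in the post imply a table called accounts with columns account and balance; the setup script below is my own sketch, and the column data types are an assumption.

-- Hypothetical setup (NUMBER would become INT or MONEY on the SQL Server side)
create table accounts (account number, balance number);
insert into accounts values (1, 500);
insert into accounts values (2, 200);
insert into accounts values (300000, 100);
commit;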
Oracle's SQL*Plus, unlike SQL Server Management Studio, uses implicit transactions. There is no BEGIN TRANSACTION statement: a transaction starts with your first statement and lasts until you either quit SQL*Plus (an implicit commit) or explicitly issue a commit or rollback.
In Session 1, we execute the following:
SQL> select balance from accounts where account = 1;

   BALANCE
----------
       500

SQL> select balance from accounts where account = 2;

   BALANCE
----------
       200

SQL>
We do not commit the transaction.
In Session 2, we update the values of our accounts:
SQL> update accounts set balance = 100 where account = 1;

1 row updated.

SQL> update accounts set balance = 500 where account = 300000;

1 row updated.

SQL>
Once again we leave the transaction open and do not commit.
We switch back to session 1:
SQL> select balance from accounts where account = 300000;

   BALANCE
----------
       100

SQL>
The query returns immediately, showing the value of account 300000 as it was before Session 2's uncommitted change. Rather than making the reader wait, Oracle reconstructs the prior version of the row from undo, giving us a consistent view of the balances as of the time each query started.
Returning to Session 2 now, we can commit the transaction over there just to tidy up loose ends.
SQL> Commit;

Commit complete.

SQL>
Having seen how Oracle does it (the optimistic view), let's have a look at SQL Server.
In SQL Server, statements auto-commit by default, so we need to explicitly tell it we want a transaction with BEGIN TRANSACTION.
begin transaction
select * from accounts where account = 1;
select * from accounts where account = 2;

account          balance
---------------- ----------------
1                500

(1 row(s) affected)

account          balance
---------------- ----------------
2                200

(1 row(s) affected)
So far so good.
Over to Session 2, we perform our balance transfer between accounts:
begin transaction
update accounts set balance = 100 where account = 1;
update accounts set balance = 500 where account = 300000;

(1 row(s) affected)

(1 row(s) affected)
Returning to Session 1, we try to query Account 300000:
select * from accounts where account = 300000;
SSMS sits here and spins. It is waiting for the transaction from Session 2 to complete and commit the change to the database. A writer is blocking a reader now.
We swap back to Session 2 and complete the transaction:
commit transaction
Immediately the query in Session 1 returns a value:
account          balance
---------------- ----------------
300000           500

(1 row(s) affected)
That's not right... or is it? I'd have to say no, it's not right - not from a pure data accuracy standpoint.
Under SQL Server's READ COMMITTED isolation level, as implemented, that is the correct value: it was read after it had been committed to the database. But the net result is that the $400 has been counted twice - once in the balance of account 1, which was read before the transfer, and again in the balance of account 300000, read after it. If we were summing these three rows we would get 500 + 200 + 500 = $1,200 when the true total is $800 - our query has, in fact, returned incorrect data!
If I wanted an accurate picture of my account balances under this scenario, it appears that SNAPSHOT isolation is what's needed - but even then it can be a little tricky.
I ran through the scenario above again using SNAPSHOT isolation, using these scripts in the same order of command execution as presented above:
Session 1
set transaction isolation level snapshot
begin transaction
select * from accounts
select * from accounts where account = 300000
commit transaction
Session 2
set transaction isolation level snapshot
begin transaction
update accounts set balance = 100 where account = 1;
update accounts set balance = 500 where account = 300000
commit transaction
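One prerequisite the scripts above don't show: SET TRANSACTION ISOLATION LEVEL SNAPSHOT fails unless snapshot isolation has first been enabled at the database level. The database name below is a placeholder for your own.

ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;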
The value in the last query of Session 1 showed 100 before I committed Session 2, and it returned immediately, just as Oracle did. However, even after I had committed Session 2 and ended that session completely, the query on the balance of account 300000 in Session 1 still showed $100. Only after I ran the COMMIT TRANSACTION in Session 1 did it return the $500 value that was actually sitting in the database.
This actually makes sense: we had issued a BEGIN TRANSACTION at the start of Session 1, so every query inside that transaction was showing us a picture of the data as it existed when the transaction first read it.
If we run through the example one more time, this time not putting the queries in Session 1 inside a transaction, then as soon as Session 2 commits, the balance of account 300000 shows the updated value immediately.
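Roughly, the Session 1 side of that last run is just the bare statements with no BEGIN/COMMIT. Under SNAPSHOT isolation each standalone statement gets its own snapshot, which is why it sees the committed change straight away:

set transaction isolation level snapshot
select * from accounts where account = 300000;
-- shows 100 while Session 2's update is uncommitted, 500 as soon as Session 2 commits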
Microsoft provides an excellent example of the different behavior under snapshot isolation and read committed snapshot isolation on their website (that uses vacation days instead of a bank account as the example).
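If you want that Oracle-like, statement-level behavior to apply to ordinary READ COMMITTED queries without touching application code, the database-level READ_COMMITTED_SNAPSHOT option is the usual route. Again, the database name is a placeholder, and the ALTER generally needs the database to be free of other active connections.

ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;
-- existing READ COMMITTED queries now read the last committed row version instead of blocking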
So, there's a definite "Gotcha!" sitting in there. If this level of accuracy is important to you and your application, make sure you completely understand how isolation levels work in the database system you're using.
I should reiterate that the demonstration scenario used in this post comes from Tom Kyte's article linked above, replicated here to compare and contrast the default behavior of Oracle and SQL Server and how it can be modified with other options. While he mentioned "other vendors" in his article, I wanted to see for myself how this scenario played out in SQL Server specifically (as I already knew for certain how it would behave in Oracle).