Tag Archives: SQL

The Transaction Log, Delayed Durability, and considerations for its use with In-Memory OLTP – Part I


If Warhol did transaction Logging…

In this mini-series of posts, we will discuss how the mechanics of SQL Server’s transaction logging work to provide transactional durability. We will look at how Delayed Durability changes the logging landscape and then we will specifically see how In-Memory OLTP logging builds upon the on-disk logging mechanism. Finally, we will pose the question “should we use Delayed Durability with In-Memory or not” and discuss this scenario in detail. But in order to understand how delayed durability works, it is first important for us to understand how the transaction log works -so we shall start there…

Caching, caching, everywhere, nor any drop to drink!

SQL Server is a highly efficient transaction processing platform, and nearly every operation it performs is first carried out in memory. By making changes in memory first, the need to touch physical resources (such as physical disk IOPs) is reduced (in some cases substantially), and reducing the need to touch physical resources means those physical boundaries (and their limitations) have less impact on overall system performance. Cool right?!

A little bit about the Buffer Cache

All data access (insertions, deletions, and updates) is first made in memory, and if you are a Database Administrator you will (or should!) already be familiar with the Buffer Cache and how it works. When data pages are required to fulfil data access requests (and assuming they are not already present in the Buffer Cache), the Storage Engine first reads those pages into the Buffer Cache before performing any requested operations on them in memory. If the pages in memory are modified, those dirty pages will ultimately need to be persisted to disk (i.e. to their respective physical data file/s) through automatic CHECKPOINTing, and they will contain changes performed by either committed or uncommitted transactions.

In the latter case of uncommitted transactions, the presence of those dirty pages in the data file/s is why UNDO (found in the transaction log) is performed upon recovery, to roll back those changes. Conversely, any dirty Buffer Cache pages that are the result of committed transaction changes but have not yet been hardened to disk (through CHECKPOINT) would require the REDO portion of the transaction log to roll those changes forward on recovery. I will refer back to this when we move on to talk about In-Memory OLTP.
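If you want to see this for yourself, the Buffer Cache can be inspected through the sys.dm_os_buffer_descriptors DMV. The following is a quick diagnostic sketch (it requires VIEW SERVER STATE permission and can take a while on servers with a lot of memory):

```sql
-- Count clean vs dirty pages currently in the Buffer Cache, per database.
-- is_modified = 1 marks a dirty page not yet hardened by CHECKPOINT.
SELECT
    DB_NAME(database_id) AS database_name,
    SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END) AS dirty_pages,
    SUM(CASE WHEN is_modified = 0 THEN 1 ELSE 0 END) AS clean_pages
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY dirty_pages DESC;
```

Run it before and after issuing a manual CHECKPOINT in a busy database and you should see the dirty page count for that database drop.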

It’s all about the log, about the log, no trouble!

But it is not the hardening of data to data files that we should really be focusing on here (since it has little relevance to delayed durability); we are more concerned with transactional durability in SQL Server.

There is a common misunderstanding among SQL Server DBAs that when transactions make changes, those changes are written immediately to the log file. SQL Server’s durability guarantees make this misunderstanding easy to forgive, but requiring a physical IO operation upon every single transactional data modification would clearly create a huge potential bottleneck on (and dependency upon) transaction log disk performance. The SQL Server architects recognised this challenge, so in a similar way to how data page modifications are first cached, so are the transactional changes. In the case of the transaction log, the “cached” area is provided by in-memory constructs known as log buffers, each of which can store up to 60 kilobytes of transaction log records.

Now that you know that log buffers are there to delay (or buffer) transaction logging and so maximize the efficiency of a single physical IOP to disk, we must consider when these structures absolutely must be flushed to physical disk and stored within the transaction log in order to still provide the Durability property of ACID that SQL Server adheres to. There are two main situations in which a log buffer is flushed:

  1. On transactional COMMIT. When a client application receives control back after issuing a successful COMMIT statement, SQL Server *must* provide durability guarantees about the changes that have occurred within the transaction. Therefore all transactional operations must be written and persisted to the transaction log, ultimately requiring that the log buffer containing those changes be flushed on COMMIT to provide this guarantee.
  2. On log buffer full. Strictly speaking, as long as the first rule is adhered to, there is no logical requirement for SQL Server to flush the log buffer to disk when it becomes full. However, if you consider that it is paramount for SQL Server never to run out of available log buffers, then it is pretty obvious that the best way to avoid that situation is to hold onto each log buffer only as long as it is needed. A full log buffer serves no further purpose in the buffering story and, furthermore, given that it will contain at most 60 kilobytes of changes, it aligns well with a single physical write to the transaction log. When the log buffer is full it makes sense to flush it to disk.
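Both flush conditions can be observed through the Databases performance object, which is exposed in T-SQL via sys.dm_os_performance_counters. A monitoring sketch (note these counters are cumulative, so sample twice and diff the values to get a per-second rate):

```sql
-- Log flush activity per database: how often log buffers are flushed,
-- and how many bytes each flush hardens to the transaction log.
SELECT instance_name AS database_name,
       counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name IN ('Log Flushes/sec', 'Log Bytes Flushed/sec')
ORDER BY instance_name, counter_name;
```

Dividing bytes flushed by the number of flushes gives you an average flush size; a figure well below 60KB on a COMMIT-heavy workload is a hint that small transactions are forcing frequent, inefficient flushes.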

Now that we have a better understanding of how the transaction log provides transactional durability to SQL Server, and how it delays physical writes to the transaction log disk to improve logging performance, we can look at how Delayed Durability changes the logging landscape, why we might use it, and what we risk by doing so, in the second post of this series.

Problem removing files from TempDB

I recently ran into an interesting problem while attempting to remove approximately half of the TempDB data files configured in a testing environment. As you might expect, there are various SQL Server tasks that a DBA performs only infrequently, and this is a good example of one of them. Most environments are misconfigured with too few (or wrongly sized) TempDB data files, usually resulting in the classic allocation page contention problems (explained in the excellent SQLSkills article “The Accidental DBA (Day 27 of 30): Troubleshooting: Tempdb Contention“), but this particular environment had been temporarily over-provisioned with disks and TempDB data files (for testing purposes).

Bonus fact: SQL Server 2016 attempts to automatically address the TempDB data file allocation page contention problem by defaulting to 8 TempDB data files (or fewer if the number of cores is smaller). This can be overridden on install through the GUI, or by using the /SQLTEMPDBFILECOUNT switch if performing a command line installation. Further optimisations have been implemented, such as the adoption of uniform extent allocations by default and auto-growing all files simultaneously -both behaviours would formerly have required turning on trace flags 1117 and 1118. Several other improvements have been made to the performance of the TempDB database, which you can read about in this (currently preview) documentation.


So the plan was to remove each TempDB file one by one, restart the instance (if required), and decommission the spare disks so that they could be returned and reallocated elsewhere in the environment. In order to remove each data file we would:

  1. Empty each file in turn (using DBCC SHRINKFILE).
  2. Remove each file once empty.

Now there was a time (prior to SQL Server 2005) when we were told to tread very, very carefully when shrinking TempDB files, since doing so could result in corruption. This perception remains with some of the longer-serving SQL Server professionals (I question myself frequently), but if we take a look at the KB307487 article “How to shrink the tempdb database in SQL Server” we can see that it is now safe to do so -although (as the Knowledge Base article states) certain scenarios can cause the shrink operation to fail.

So I ran the following code for each TempDB data file (emptying it first, then removing it):

USE [tempdb]
GO
DBCC SHRINKFILE (N'tempdb_15', EMPTYFILE)
GO
ALTER DATABASE [tempdb] REMOVE FILE [tempdb_15]
GO

The code succeeded for the first few files, right up to TempDB data file 12, where I hit the following error:

DBCC SHRINKFILE: Page 14:56 could not be moved
because it is a work table page.
Msg 2555, Level 16, State 1, Line 1
Cannot move all contents of file "tempdb_12" to
other places to complete the emptyfile operation.
DBCC execution completed. If DBCC printed error
messages, contact your system administrator.
Msg 5042, Level 16, State 1, Line 1
The file 'tempdb_12' cannot be removed
because it is not empty.

As odd as this error was (especially since the SQL instance was not currently “in use”), I decided to bounce it, but after restarting the instance I was again greeted with the same error message! After some very quick Google-Fu I came across an old MSDN Forum question “DBCC SHRINKFILE: Page 4:11283400 could not be moved because it is a work table page.” I ruled out some of the responses but came across this one by Mike Rose:

“I realize this is an old thread but I have found that in most cases work tables are related to Query Plans.

Try issuing the following commands and the shrinking the tempdb:

DBCC FREEPROCCACHE
DBCC FREESYSTEMCACHE ('ALL')

there will be some performance hit for this as SQL will have to recreate its Query Plans, but it should allow you to shrink you TEMPDB.

Hope this helps,

M Rose”

Now my initial thoughts were that this response was clearly nonsense, since I had just restarted the SQL Service for the instance (thereby flushing the SQL caches), but hey, what the hell, why not give it a whirl….

…and success! It worked.

Apart from the flushing of the caches, the outcome was even stranger to me for another reason. TempDB is “recreated” upon startup*1, and I would have expected that fact alone to have fixed the problem, but from the behaviour I had experienced, something had persisted across the instance restart.
*1 you may want to read this interesting post by Jonathan Kehayias “Does tempdb Get Recreated From model at Startup?”

Also of interest to me was whether running FREEPROCCACHE and FREESYSTEMCACHE together was overkill, so when the opportunity arose I first attempted to clear only the system cache specific to TempDB through:


…and then tried clearing temporary tables and table variables through:

DBCC FREESYSTEMCACHE ('Temporary Tables & Table Variables')

…and then by using both together. Sadly these did not seem to work.

On my final opportunity I tried clearing the procedure cache (DBCC FREEPROCCACHE), and after running this a few times it appeared to solve the problem: I was able to remove the TempDB file without error.
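Putting it all together, the sequence that eventually worked for me can be sketched as follows (the file name is obviously specific to my environment, and remember that flushing the plan cache on a busy instance has a real cost):

```sql
USE [tempdb];
GO
-- Flush cached plans; work table pages appear to be tied to them
DBCC FREEPROCCACHE;
GO
-- Empty the target data file, then remove it
DBCC SHRINKFILE (N'tempdb_12', EMPTYFILE);
GO
ALTER DATABASE [tempdb] REMOVE FILE [tempdb_12];
GO
```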


Clearing system caches on a Production SQL Server instance is never a good thing and should be avoided, but at least the next time I run into this problem I have what appears to be a workaround, and the requirement to remove TempDB data files should be a rare occurrence for most of us!

I confess that I am left with a few unanswered questions, and what I experienced does make me doubt the results somewhat (and my sanity); there were further tests that I would like to perform at some point. So if you run into this problem yourself, I encourage you to try this (disclaimer: your fault if you destroy your SQL Server 😉 ) and I would be delighted if you leave feedback on what works for you (and what doesn’t)!

Incrementing SQL Sequences

Sequences were first introduced in SQL Server 2012 and are a way to improve performance in situations that would traditionally have been implemented using the column-based IDENTITY property. One of the biggest downfalls of the IDENTITY property is that new values are generated on insert, meaning that they are also transactionally dependent. Sequences, on the other hand, are not, and allow for caching and many other benefits, which you can compare and contrast in this really good SQL Server Pro magazine article by Itzik Ben-Gan titled SQL Server 2012 T-SQL at a Glance – Sequences.
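As a quick refresher, a sequence is a standalone object whose next value you request explicitly, which is what allows it to be cached and shared across tables. A minimal sketch (the object name is illustrative):

```sql
-- Create a cached sequence and fetch values from it.
CREATE SEQUENCE dbo.OrderID AS INT
    START WITH 1
    INCREMENT BY 1
    CACHE 50;   -- pre-allocates a block of values for performance

SELECT NEXT VALUE FOR dbo.OrderID;  -- 1
SELECT NEXT VALUE FOR dbo.OrderID;  -- 2
```

Note that NEXT VALUE FOR consumes a value whether or not the surrounding statement commits, which is precisely the decoupling from transactions that makes sequences faster than IDENTITY.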

Recently our performance environment was undergoing load testing using pre-loaded synthetic data which (upon execution) started to result in failures from identity conflicts. It was fairly obvious that our current sequence seed values were much lower than the loaded data identities. Not much of a problem, you might think, since we can easily reseed a sequence via the Sequence Properties dialog (below): simply select the Restart sequence checkbox, type your new seed number into the entry box next to it, and click OK.

sequence properties

The only problem with this approach was that our database was configured (rightly or wrongly) with approximately 250 sequences! Since we could not be sure which sequences would ultimately cause us problems, we decided to increment each one by 10,000.

Not being someone who likes performing monotonous tasks, and recognising that this task would probably need to be performed again in the future, I decided to solve the problem programmatically.

How to use it

Disclaimer: Before using the following script, please make sure you understand what you are doing and where you are doing it. The responsibility for its use and misuse is all yours!

The script below is fairly basic and generates a script to update every single sequence within your database (make sure you change context to the correct one) with a default increment of 10000 (feel free to alter as necessary). If you only want to update a range of sequences then obviously you should add a WHERE clause to this script and filter on whatever criteria floats your boat.

DECLARE @increment_sequence INT = 10000
SELECT
   'ALTER SEQUENCE [' + SCHEMA_NAME(seq.schema_id) 
   + '].[' + seq.name + ']'
   + ' RESTART WITH '+ CAST(CAST(seq.start_value AS INT) 
   + @increment_sequence as VARCHAR(max)) + ';'
FROM
   sys.sequences AS seq
   LEFT OUTER JOIN sys.database_principals AS sseq
   ON sseq.principal_id = ISNULL(seq.principal_id, 
   (OBJECTPROPERTY(seq.object_id, 'OwnerId')))
ORDER BY
   SCHEMA_NAME(seq.schema_id) ASC,
   seq.[Name] ASC

It creates a script in your query results as below. Simply copy and paste this into a new query window and execute.

ALTER SEQUENCE [DB_Sequence].[TransferID] RESTART WITH 10000;
ALTER SEQUENCE [DB_Sequence].[AccountID] RESTART WITH 68719;
ALTER SEQUENCE [DB_Sequence].[CustomerID] RESTART WITH 1010006;

If you do need to update many sequences in your database, I hope you find this script useful and that it saves you as much time as it has me!

Restoring CDC enabled databases

Performing database and transaction log backups (and restores) is the meat and veg of a DBA’s responsibilities, and if you are a seasoned professional I am sure you will have performed those operations ad infinitum. Complacency is perhaps the biggest mistake you can make as a “Guardian of Data”, so you should always be prepared for the unexpected…

Several years ago, in a Disaster Recovery scenario, I was asked to perform the migration of a large database used for a fairly complex ETL process that was undocumented and in which the DBA team had no insight or involvement.

Due to the complete lack of a Disaster Recovery plan for the system and process (how many times have we seen this, guys!) I was forced to follow best practices and common sense, and to draw on my experience, in order to bring the service back to a “best known good” as quickly as possible. Since we had access to all existing backups taken so far, and had the ability to take a tail-log backup, I was able to recover across to DR with no data loss (at least as far as I was concerned). I handed the environment back to the Business Intelligence team for their validation, operational testing and re-acceptance.

Of course it passed acceptance and resumed operation in production.

Of course, what was not known at the time I performed this work was that the database in question was enabled for Change Data Capture (CDC). Perhaps it should have been a logical assumption for me to make, but I suspect that even if I had been aware of its presence, the significance of the situation might have escaped me.

I am not really sure what acceptance testing was performed (if indeed any was!), because after a few hours of activity (just enough time for a scheduled ETL process to execute and fail) it became clear that something was very wrong with the database.

Apparently the Change Data Capture tables were missing!

So now I knew about the existence of CDC in the database, and (after digging around the MSDN documentation for a few minutes) our mistake was obvious – we had not used the KEEP_CDC option upon restore, meaning that the CDC tables were no longer present. Unfortunately acceptance testing hadn’t detected this problem ahead of time and our recovered database was now live. After some discussion, the Business Intelligence team decided that we should re-create CDC on this warm database and temporarily restore a copy of the old one (with KEEP_CDC!) to pull out those missed changes.

So what’s the problem with restoring CDC enabled databases?

The main problem with CDC enabled databases is that the KEEP_CDC option is incompatible with the NORECOVERY option.

For instance the following command:

RESTORE DATABASE [CDCEnabledDB] 
	FROM DISK = 'CDCEnabledDB.bak' 
	WITH NORECOVERY, KEEP_CDC

Will result in:

Msg 3031, Level 16, State 1, Line 1
Option 'norecovery' conflicts with option(s) 'keep_cdc'.
Remove the conflicting option and reissue the statement.

This automatically poses the question of how it is possible to restore a backup chain with CDC. On a database restore, the NORECOVERY clause is required in order to apply differential backups and transaction logs, since it prevents SQL Server from performing database recovery.

If this option is required, but KEEP_CDC is incompatible with it, surely this means point-in-time restores are not possible whenever the CDC tables must be retained?


The investigation

When I first ran across this problem (and after reading Connect Item 587277 -which has since been updated by me with the correct resolution) I was incensed that Microsoft could be so stupid as to prevent something so necessary under certain situations. I saw it as a huge gaping hole in database recovery for any database requiring CDC. As I (and others on the Connect Item) saw it, if you cannot restore a log chain with KEEP_CDC then this will cause problems for:

  • Database Mirroring
  • Availability Groups
  • Log Shipping
  • Replication
  • Recovery to Standby

Quite infuriatingly the Connect Item was closed with:

“we looked into this but there is no plan as of now to change the current design and extend the support of this in the foreseeable future. Again, thanks for taking the time to share your feedback, this is really important to us.”

And that was that…for a time.

So what has changed?

Several months ago I was investigating the use of CDC enabled tables for a client project when I stumbled across the following statement in the MSDN documentation Replication, Change Tracking, Change Data Capture, and AlwaysOn Availability Groups :

“SQL Server Replication, change data capture (CDC), and change tracking (CT) are supported on AlwaysOn Availability Groups.”

Remembering the problems I had faced years earlier, I read that statement with interest and decided to spend some time playing with the recovery of Change Data Capture, when it dawned on me that I, the other posters to the Connect Item, and even the Microsoft respondent had completely misunderstood the use of the KEEP_CDC clause.

Our misunderstanding was (unsurprisingly) that the KEEP_CDC clause was necessary to protect Change Data Capture tables from being lost at every step of a restore sequence (which is wrong). Instead, the clause is required to protect CDC tables upon RECOVERY. The Change Data Capture schema and data are transparently restored behind the scenes during NORECOVERY restores, in anticipation that you will use the KEEP_CDC clause on RECOVERY -if you do not, then CDC is dropped (again transparently).

Therefore recovery is the ONLY time in a restore sequence that you need to specify KEEP_CDC, meaning that the incompatibility between this option and NORECOVERY is irrelevant.

For instance the following will work:

RESTORE DATABASE [CDCEnabledDB] 
	FROM DISK = 'CDCEnabledDB.bak' 
	WITH NORECOVERY

RESTORE LOG [CDCEnabledDB] 
	FROM DISK = 'CDCEnabledDB_1.trn' 
	WITH NORECOVERY

RESTORE LOG [CDCEnabledDB] 
	FROM DISK = 'CDCEnabledDB_2.trn' 
	WITH RECOVERY, KEEP_CDC

In summary

However experienced you believe yourself to be with SQL Server, there is always something that can come and bite you. Not even Microsoft Support and the technology owners always fully understand how every single scenario or configuration is going to play out, so it is important that you Do Your Own Research.

While the use of CDC does introduce certain considerations that I will talk about another time, you can rest assured that as long as you understand the use of the KEEP_CDC clause (and make sure you use it when necessary) you will not go far wrong.

VIEW, Pugh, Barney McGrew…

Trumpton…Cuthbert, Dibble, Grub.

It never ceases to amaze me that there is always that little bit of SQL Server functionality that you didn’t know about. Some of it good, some of it bad, some of it just meh…

The other day I was stumbling around a new environment and decided to use the GUI to script out an existing set of views, when I saw something that had me scratching my head. Underneath the actual view definition was the following (some names removed to protect the innocent)…

-- note view definition removed --
EXEC sys.sp_addextendedproperty @name=N'MS_DiagramPane1',
@value=N'[0E232FF0-B466-11cf-A24F-00AA00A3EFFF, 1.00]
Begin DesignProperties =
   Begin PaneConfigurations =
      Begin PaneConfiguration = 0
         NumPanes = 4
         Configuration = "(H (1[40] 4[20] 2[20] 3) )"
      Begin PaneConfiguration = 1
         NumPanes = 3
         Configuration = "(H (1 [50] 4 [25] 3))"
      Begin PaneConfiguration = 2
         NumPanes = 3
         Configuration = "(H (1 [50] 2 [25] 3))"
      Begin PaneConfiguration = 3
         NumPanes = 3
         Configuration = "(H (4 [30] 2 [40] 3))"
      Begin PaneConfiguration = 4
         NumPanes = 2
         Configuration = "(H (1 [56] 3))"
      Begin PaneConfiguration = 5
         NumPanes = 2
         Configuration = "(H (2 [66] 3))"
      Begin PaneConfiguration = 6
         NumPanes = 2
         Configuration = "(H (4 [50] 3))"
      Begin PaneConfiguration = 7
         NumPanes = 1
         Configuration = "(V (3))"
      Begin PaneConfiguration = 8
         NumPanes = 3
         Configuration = "(H (1[56] 4[18] 2) )"
      Begin PaneConfiguration = 9
         NumPanes = 2
         Configuration = "(H (1 [75] 4))"
      Begin PaneConfiguration = 10
         NumPanes = 2
         Configuration = "(H (1[66] 2) )"
      Begin PaneConfiguration = 11
         NumPanes = 2
         Configuration = "(H (4 [60] 2))"
      Begin PaneConfiguration = 12
         NumPanes = 1
         Configuration = "(H (1) )"
      Begin PaneConfiguration = 13
         NumPanes = 1
         Configuration = "(V (4))"
      Begin PaneConfiguration = 14
         NumPanes = 1
         Configuration = "(V (2))"
      ActivePaneConfig = 0
   Begin DiagramPane =
      Begin Origin =
         Top = 0
         Left = 0
      Begin Tables =
         Begin Table = "Customer"
            Begin Extent =
               Top = 6
               Left = 38
               Bottom = 114
               Right = 227
            DisplayFlags = 280
            TopColumn = 0
         Begin Table = "Order"
            Begin Extent =
               Top = 6
               Left = 265
               Bottom = 114
               Right = 492
            DisplayFlags = 280
            TopColumn = 0
   Begin SQLPane =
   Begin DataPane =
      Begin ParameterDefaults = ""
   Begin CriteriaPane =
      Begin ColumnWidths = 12
         Column = 1440
         Alias = 900
         Table = 1170
         Output = 720
         Append = 1400
         NewValue = 1170
         SortType = 1350
         SortOrder = 1410
         GroupBy = 1350
         Filter = 1350
         Or = 1350
         Or = 1350
         Or = 1350
' , @level0type=N'SCHEMA',@level0name=N'dbo',

EXEC sys.sp_addextendedproperty @name=N'MS_DiagramPaneCount',
@value=1 , @level0type=N'SCHEMA',@level0name=N'dbo',

So after a few tests, it turns out that when you create SQL Server views using the SQL Server Management Studio View Designer, SSMS automatically generates and adds these extended properties to the view behind the scenes to aid the Designer for re-editing purposes. Please bear this in mind when generating scripts for deployment into your staging environments; whilst there don’t appear to be any performance drawbacks to this extra metadata, it is messy and (imho) not best practice to redeploy it into production.
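If views carrying this designer metadata have already made it into an environment, the extended properties can be removed again with sys.sp_dropextendedproperty. A sketch against a hypothetical dbo.MyView (substitute your own schema and view names):

```sql
-- Strip the View Designer metadata from a view
EXEC sys.sp_dropextendedproperty
    @name = N'MS_DiagramPane1',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'VIEW',   @level1name = N'MyView';

EXEC sys.sp_dropextendedproperty
    @name = N'MS_DiagramPaneCount',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'VIEW',   @level1name = N'MyView';
```

You can find which views are affected by querying sys.extended_properties for names beginning with 'MS_DiagramPane'.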

ROLLBACK damn you!

A good question was asked on #SQLHELP the other day regarding the use of XACT_ABORT ON – “If you don’t have a BEGIN TRAN COMMIT TRAN in a stored procedure, will SET XACT_ABORT ON do anything?”.

My immediate reply to this was “Yes it makes the transaction atomic. See my blog post Baby baby baby, where did our love/ data go?”

In response to this, the follow-up question was raised “wouldn’t executing the stored procedure by itself be atomic?”.

So I said “not quite. An implicit or explicit transaction is not atomic by default in SQL Server depending upon errors” then “and a proc doesn’t equate to a transaction. Check out my READPAST & Furious presentation at #SQLPASS or #SQLBITS for demos”.

You really have to love Twitter sometimes: the 140-character limit can introduce a certain level of ambiguity into answers, sometimes causing confusion or vagueness in replies, and I felt it was probably worth elaborating further in a post where the only restriction is that of the editor (moi) getting tired of typing!

Fellow concurrency junkie Roji Thomas (Twitter|Blog) later sent me a tweet which read “When there is no explicit transactions, are you suggesting that XACT_ABORT has an effect on single statement implicit transactions?”. The answer is of course no, but I knew exactly what he was getting at: he was, in effect, quite rightly suggesting that there was an element of ambiguity in my reply.

So at this stage it might be a good idea for me to describe what XACT_ABORT does. If you want the official blurb on this directive you can look in Books Online, but I’ll boil it down in a nutshell. Put simply, setting XACT_ABORT ON has the effect of ensuring that any open transaction will be treated as purely atomic (either all statements complete successfully or the transaction will completely roll back). By default XACT_ABORT is OFF, and more importantly there exist certain errors that will not cause a transaction to fail (even though the single statement that hit them might have). Examples of these are constraint failures and lock timeouts, which have been discussed in detail elsewhere in this blog.

Setting XACT_ABORT ON is one way, then, to ensure that your transactions become all-or-nothing operations (yes, just like you thought they were already!), and there are certain considerations to bear in mind when using it. Again, rather than regurgitate them here, check out my other blog posts on this subject and my READPAST & Furious presentation, last recorded at the SQLPASS Summit 2012, which describe its usage fully.

So going back to Roji’s comment, what he was getting at was that under the default behaviour of SQL Server, the following sets of statements are essentially identical in their possible behaviour and outcomes:

--transaction 1
INSERT INTO dbo.mytable VALUES(1)

--transaction 2
BEGIN TRAN
INSERT INTO dbo.mytable VALUES(1)
COMMIT TRAN

--transaction 3
SET XACT_ABORT ON
INSERT INTO dbo.mytable VALUES(1)

--transaction 4
SET XACT_ABORT ON
BEGIN TRAN
INSERT INTO dbo.mytable VALUES(1)
COMMIT TRAN

There are two important points of note. The first is that one of the reasons these sets of statements are pretty much identical in behaviour is that we only have a single DML operation occurring in each. If we do not explicitly define the transaction, then the single DML operation will itself, by default, imply a begin and a commit (or rollback) as a single unit of work (or transaction). The second point of note is that because we are running our connection using default SQL settings, IMPLICIT_TRANSACTIONS is off (more on this in a second). So the point that Roji was raising was that under these conditions the setting of XACT_ABORT is irrelevant, and I fully agree with this assertion.

This story becomes slightly more complicated when we start talking about implicit transactions. If we SET IMPLICIT_TRANSACTIONS ON, then the behaviour of transaction 1 changes slightly. Again, you can refer to the SQL DBA’s favourite resource (Books Online) for more information on this command, but essentially when implicit transactions are set to ON, SQL Server implies open-ended transactions. In other words, once a transaction has been implied, you will have to commit it explicitly (Oracle style). So in the case of transaction 1, there is the possibility that your T-SQL batch may contain further DML operations (which would all be automatically enlisted into this open transaction). In this scenario, setting XACT_ABORT ON would make a distinct difference compared with the same batch under SET IMPLICIT_TRANSACTIONS OFF (the default).

For instance:

--transaction 5
SET IMPLICIT_TRANSACTIONS ON
INSERT INTO dbo.mytable VALUES(1)
INSERT INTO dbo.mytable VALUES(5)
COMMIT TRAN

--transaction 6
SET IMPLICIT_TRANSACTIONS ON
SET XACT_ABORT ON
INSERT INTO dbo.mytable VALUES(1)
INSERT INTO dbo.mytable VALUES(5)
COMMIT TRAN

In the example above, transaction 5 is not quite the same as transaction 6. Transaction 5 could encounter a constraint violation on one of its statements yet still commit the other, whereas transaction 6 would automatically roll back in that situation.

In summary, you should always be careful when changing the default behaviours of SQL Server, and more importantly even when you haven’t, make sure you understand that the default behaviour matches that which was expected and required.

The setting of XACT_ABORT is irrelevant for single DML statements in isolation, since they are ALWAYS atomic regardless; the complications (and considerations) arise when a transaction spans multiple DML statements and could result in behaviour different from what you were expecting. Setting IMPLICIT_TRANSACTIONS to ON will cause a transaction to last until it is explicitly committed, and therefore the scope of your transaction might not be obvious simply from looking at the code.
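If you are ever unsure whether your session is sitting inside an open implied transaction, @@TRANCOUNT provides a quick sanity check. A sketch assuming the dbo.mytable from the earlier examples exists:

```sql
SET IMPLICIT_TRANSACTIONS ON;

SELECT @@TRANCOUNT;                 -- 0: nothing has been implied yet
INSERT INTO dbo.mytable VALUES(1);  -- first DML implies BEGIN TRAN
SELECT @@TRANCOUNT;                 -- 1: an open-ended transaction exists
COMMIT TRAN;                        -- must be committed explicitly
SELECT @@TRANCOUNT;                 -- 0: back to autocommit behaviour
```

Note that SELECTing @@TRANCOUNT itself does not start an implicit transaction, since it touches no table; it is the INSERT that opens the unit of work.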

Good luck!

Why don’t you go and LOCK OFF!

No this is not a Hekaton post 🙂 -perhaps I should have saved this title for another rainy day.

SQL Server concurrency is a particularly difficult subject to master and understanding the result of your actions can be confounded by some of SQL Server’s special behaviors and quirks that I so often like to talk and write about.

As some of my followers stalkers out there may remember, one of my favorite demonstrations is from my READPAST & Furious: Transactions, Locking, and Isolation session -last performed at the SQLPASS 2012 Summit and SQLSaturday #229 Dublin- where in a little piece of code I discuss whether a transaction is really atomic (or not). For anyone reading this paragraph who has immediately answered “ALWAYS” -then my friend you are in serious trouble with your code-base. And I am not even talking about the use of lower levels of isolation to perform dirty reads within your transaction here; isolation does not affect a transaction’s atomicity.

No, what I am talking about is how SQL Server (under certain conditions) can fail a statement in a transaction but still successfully commit it. If this is not the behavior that was intended for your application then congratulations! You now have an inconsistent dataset. You can read a quite exhaustive post on this subject at the following link: “Baby baby baby, where did our love/ data go?“

Yesterday I was talking with a colleague (and fellow speaker) about a particularly bad concurrency issue at our current place of work, and he told me that one quick and dirty proposal to limit the duration of problematic locks had been to set LOCK_TIMEOUT for each session. Straight away my alarm bells started ringing, and I replied that this is probably a very bad idea. I could talk for hours on why it is a bad idea from a concurrency perspective -it is a bit like asking someone to attempt to cross a busy road and return to the kerb (sidewalk for my American buddies!) each time a car is coming. However, I won’t bore you with that detail; I will simply say that, once you have read my post (referenced above), one such behavior (that allows the failure of a statement but does not cause a rollback) is using the LOCK_TIMEOUT session setting. Oh yes, I know a lot of you out there are using it, and yes, you had better start praying 🙂

In the following code snippet we demonstrate this behavior by updating one record of a table in an open-ended transaction (which takes an Exclusive lock on that row). In a second connection we then run another transaction, in a session using a LOCK_TIMEOUT, that first inserts a record into the table and then attempts to delete our locked record (triggering the lock timeout).

Let’s first create a table with a single record and exclusively lock that record under an open-ended transaction:-

--connection 1
CREATE TABLE t1 (c1 INT)
INSERT INTO t1 VALUES (1)

BEGIN TRAN
	--update value in open ended
	--transaction to take exclusive
	--lock where c1=1
	UPDATE t1 SET c1 = 3 WHERE c1=1

Next we will run a transaction in another connection to insert one record into this table and attempt to remove the blocked record:-

--connection 2
SET LOCK_TIMEOUT 5000
BEGIN TRAN
	INSERT INTO t1 VALUES (2)
	DELETE FROM t1 WHERE c1 = 1
COMMIT TRAN
The following error occurs…

(1 row(s) affected)
Msg 1222, Level 16, State 45, Line 5
Lock request time out period exceeded.
The statement has been terminated.

(1 row(s) affected)

Now you can quite clearly see that the LOCK_TIMEOUT worked, but let us now take a look at the committed contents of the table (skipping over the locked record using the READPAST hint):-

--connection 2
--after lock timeout view committed values
SELECT * FROM t1 WITH (READPAST)
(1 row(s) affected)

Yes, transaction fans, you will see that the row we inserted under an “atomic” transaction committed successfully whilst the delete failed. I am not going to labour this point any more than to say: please be careful to understand any setting you enable for concurrency optimization and troubleshooting. If you are not careful, you may live to regret it!