7 days of PASS day 1

Having had the luxury of booking a slightly later flight than normal (9.45am to Seattle), I took the liberty of an overnight stay closer to the airport. Unfortunately for me, I had decided upon a curry for tea the night before, and now at 6am I am feeling a little rough and cannot manage my traditional pre-flight bacon sandwich. Not good, but at least my cup of tea is slightly more appealing. With a 30-minute drive to the airport ahead of me, I start getting ready and head off.

So far this year, I have probably done an excessive amount of community events, and while this has not been good for my bank balance, it has been very good for my British Airways points. I am now on the Executive Club Silver level, which means entrance to the club lounge without paying for Business Class travel -and even though I was offered an awesome deal to upgrade to Business for this flight, I decided to slum it in Cattle+ Class. So I sit in the lounge for an hour, pray to the Lord of Bacon, and wait for my flight to board.

Upon taking my seat on the plane, I am surprisingly joined by Richard Douglas (b¦t), and as I settle down to do some more Summit session prep, Richard takes the well-earned opportunity to catch up on some films. Unfortunately for me, my earphone adapter did not seem to work, which meant that when I finally got around to watching some onboard entertainment it was in Mute-Vision. One documentary I did watch with fascination was a Discovery Channel program about an attempt to capture Bigfoot. I love all that “stuff”, and I spent the next hour watching their exploits in the Pacific North-West and the constant looping repeats of the Patterson–Gimlin film. Towards the end of the film they actually managed to capture a Bigfoot, and it was at this point I realized that the whole thing had been a Blair Witch-style hoax and a complete waste of an hour of my life!

Upon landing and arriving in the city of Seattle, I dropped my bags off and headed with Richard to my favorite Starbucks (Starbucks Reserve Roastery & Tasting Room), but my GPS was playing tricks on us, and we got slightly more exercise climbing hills than is advisable for someone of my fitness. After consuming a spiffingly good cup of coffee, we headed to the Taphouse for a bite to eat and to watch the Washington Huskies versus the Oregon State Beavers on TV, in which the Huskies completely dominated the match and ended up winning 41 to 17.

Finally, to end the day, I headed back to my room to continue working on my Thursday Summit session for a few hours. I’ve got a lot of work still to do on it, but I’m not panicking… yet!



Configuring Red Hat Enterprise Linux 7.2 for YUM without a Red Hat Subscription

It has been a very long time since I have installed a Red Hat Enterprise Linux distribution, having tended to prefer Ubuntu-based distributions (such as Linux Mint), or CentOS if I really wanted a Red Hat derivative (although for Oracle Database installations on Linux, I would always tend to use Oracle Enterprise Linux for simplicity). With the development and impending arrival of SQL Server on Linux, I thought it was about time that I returned to the playground with a vanilla copy of Red Hat EL so that I could test it with SQL Server, Docker Linux Containers, Spark and lots of other sexy things that have been avoiding my immediate attention for a little too long.

After a basic installation, I decided that it was time to start using YUM to add some of my favorite packages to this build, when I ran across this quite annoying error:

[retracement@localhost]$ sudo yum install nano
Loaded plugins: product-id, search-disabled-repos, 
This system is not registered to Red Hat Subscription Management.
You can use subscription-manager to register.
There are no enabled repos.
Run "yum repolist all" to see the repos you have.
You can enable repos with yum-config-manager --enable <repo>

OK, so this is obviously not going to fly, and I am certainly not going to pay for a Red Hat Subscription, so I decided to break out Bingoogle and came across this rather useful post from Aziz Saiful called HowTo Install redhat package with YUM command without RHN - and I recommend you also give it a read (although some of its details are ever-so-slightly out of date with this release of RHEL 7.2). The post discusses how to set up the installation DVD as an alternative package source, and for Windows people, this is the equivalent of the -Source parameter that we would use in PowerShell with the Add-WindowsFeature cmdlet to add new features from local media.
To cut a long story short, I decided to work my way through this article and provide an updated post (and if nothing else, I will not need to Bingoogle this again!).

Our first step is to ensure that we have the Red Hat Enterprise Linux 7.2 DVD to hand (i.e. the one we installed Linux from).
The next step is to mount the DVD to a mount point. For simplicity's sake, I chose /cdrom off the root.

[retracement@localhost]$ sudo mkdir /cdrom
[retracement@localhost]$ sudo mount /dev/cdrom /cdrom
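
At this point it is worth a quick sanity check that the mount has actually worked - listing the mount point should show the contents of the install media, which for the RHEL 7.2 Server DVD should include (amongst other things) a Packages directory and the repodata folder that YUM relies upon:

[retracement@localhost]$ ls /cdrom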

OK, so now that we have a mounted cdrom, we can create a YUM repo configuration file within the path /etc/yum.repos.d to point to this location. Unfortunately, you will need to use vi to do this (I hate vi!), but if you need any tips on vi, please use this Vi Cheat Sheet. Once in vi, create the file dvd.repo (or call it anything else you want – but ensure you keep the .repo extension, otherwise the file will not be recognized by YUM).

name=RHEL 7.2 dvd repo
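
The name= line above is just the descriptive part of the file; a complete dvd.repo also needs a section header and a baseurl pointing at our /cdrom mount point. A minimal sketch of the whole file follows (the [dvd] section name is simply a label of my choosing, and you can set gpgcheck=0 if you would rather skip signature checking, or if the release key lives elsewhere on your build):

[dvd]
name=RHEL 7.2 dvd repo
baseurl=file:///cdrom/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release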

Once you have created this file, and assuming you have performed every step correctly, you can take a look at YUM’s repo list.

[retracement@localhost]$ sudo yum repolist

And while you will still receive the same error regarding the system not being registered to Red Hat Subscription Management, you should also see your new repo listed underneath.
To check it all works, let’s install nano!

[retracement@localhost]$ sudo yum install nano


Perfect! Like everything in Linux, it is easy when you know how. On a closing note, it is unclear to me at this moment whether this will entirely resolve my installation problems, since I will still obviously need access to an online repo or other sources in order to install third-party packages not included on the installation media, but once I have figured that out, I will update this post.


The Transaction Log, Delayed Durability, and considerations for its use with In-Memory OLTP – Part I


If Warhol did transaction Logging…

In this mini-series of posts, we will discuss how the mechanics of SQL Server’s transaction logging work to provide transactional durability. We will look at how Delayed Durability changes the logging landscape, and then we will specifically see how In-Memory OLTP logging builds upon the on-disk logging mechanism. Finally, we will pose the question “should we use Delayed Durability with In-Memory or not” and discuss this scenario in detail. But in order to understand how Delayed Durability works, it is first important for us to understand how the transaction log works - so we shall start there…

Caching, caching everywhere, nor any drop to drink!

SQL Server is a highly efficient transaction processing platform, and nearly every operation it performs is first performed within memory. When operations are performed within memory, the need to touch physical resources (such as physical disk IOPS) is reduced, and reducing the need to touch physical resources means those physical boundaries (and their limitations) have less impact on the overall system performance. Cool right?!

A little bit about the Buffer Cache

All data access such as insertions, deletions or updates is first made in-memory, and if you are a Database Administrator, you will (or should!) already be familiar with the Buffer Cache. When data pages are needed to fulfill updates, deletions or inserts (and assuming they are not already present in the Buffer Cache), the Storage Engine first reads those data pages into the Buffer Cache before performing any requested operations on those pages in memory. Those dirty pages will ultimately be persisted to disk (i.e. to the physical data file(s)) through automatic checkpointing, and they may contain changes performed by either committed or uncommitted transactions. In the latter case of uncommitted transactions, the presence of those dirty pages in the data file(s) is why UNDO is a necessary operation upon recovery - those changes must be rolled back using the transaction log records. Any dirty pages that are the result of committed transaction changes but have not yet been hardened to the physical data file(s) through a CHECKPOINT operation would require the REDO portion of the transaction log (assuming that the SQL Server failed) to roll those changes forward. I will refer back to this again when we move on to talk about In-Memory OLTP.
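
As an aside, if you want to see this for yourself, a rough count of the dirty (modified) pages currently sitting in the Buffer Cache can be obtained with a query along the following lines - a quick sketch only, using the sys.dm_os_buffer_descriptors DMV; on a busy database you should see the count drop sharply after issuing a manual CHECKPOINT:

-- Approximate number of dirty pages in the Buffer Cache, per database
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) AS dirty_pages,
       COUNT(*) * 8 / 1024 AS dirty_mb    -- data pages are 8KB each
FROM sys.dm_os_buffer_descriptors
WHERE is_modified = 1
GROUP BY database_id
ORDER BY dirty_pages DESC;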

It’s all about the log, about the log, no trouble!

But it is not the hardening of data to the data files that we are focusing on here; we are more concerned with transactional durability in SQL Server. There is a common misunderstanding among SQL Server DBAs that when transactions are making changes, the transactional changes are written immediately to the log file. The durability claims of SQL Server make this misunderstanding easy to understand, but requiring a physical IO operation to occur upon every single transactional data modification would clearly create a huge potential bottleneck (and dependency) upon the transaction log disk performance. The SQL Server architects recognised this challenge, so in a similar way to how data page modifications are first cached, so are the transactional changes. In the case of the transaction log, the “cached” area is provided by in-memory constructs known as log buffers, each of which can store up to 60 kilobytes of transaction log records.

Now that you know log buffers are there to delay transaction logging and maximize the efficiency of a single physical IO to disk, we must consider when these structures absolutely must be flushed to physical disk and stored within the transaction log in order to still provide the Durability property of ACID that SQL Server adheres to. There are two main situations (and a simple way of observing these flushes in practice is sketched just after the list):

  1. On transactional COMMIT. When a client application receives control back after issuing a successful COMMIT statement, SQL Server must provide durability guarantees about the changes that have occurred within the transaction. All transactional operations must have been written and persisted to the transaction log; therefore the log buffer containing those changes must be flushed on COMMIT to provide this guarantee.
  2. On log buffer full. Strictly speaking, as long as the first rule is adhered to, there is no logical requirement that SQL Server should flush the log buffer to disk when it becomes full. However, if you consider that it is paramount for SQL Server never to run out of available log buffers, then it is obvious that the best way to avoid this situation is to hold onto each log buffer only as long as it is needed. A full log buffer serves no further purpose in the buffering story; furthermore, given that it will contain a maximum of 60 kilobytes of changes, it aligns well with the physical write to the transaction log. When the log buffer is full, it makes sense to flush it to disk.
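
As promised above, here is a simple way of seeing how much time your system spends waiting on these log buffer flushes. The WRITELOG wait type accumulates whenever a session waits for a log flush to complete, so a quick sketch against the wait stats DMV (cumulative since the last restart, and cleared by certain maintenance operations) looks like this:

-- Cumulative time spent waiting on transaction log flushes
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'WRITELOG';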

Now that we have a better understanding of how the transaction log provides transactional durability to SQL Server, and how it delays physical writes to the transaction log disk to improve logging performance, we can look at how Delayed Durability changes the logging landscape, why we might use it, and what we are risking by doing so in the second post in this series.
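
As a small taster for Part II, it is worth knowing that Delayed Durability is exposed at both the database level and the individual commit level. The syntax looks something like the following (MyDatabase is obviously just a placeholder name; what these options actually do to your durability guarantees is exactly what the next post will explore):

-- Database-level setting: DISABLED (the default), ALLOWED or FORCED
ALTER DATABASE MyDatabase SET DELAYED_DURABILITY = ALLOWED;

-- Per-transaction opt-in (honoured when the database setting is ALLOWED)
BEGIN TRANSACTION;
    -- ...data modifications...
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);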
