Category Archives: Linux

Introduction to Azure Cloud Shell

In my last couple of posts, I have described the remote management of Azure through the command line from what was essentially a fat (or thick) client. This gives you an awful lot of scripting and automation control over the platform by using either the Azure CLI or PowerShell through the PowerShell Az module. This is perfect for most cases, but what if you are using an unsupported operating system, or you only have access to a browser (perhaps via a thin client)?

Thankfully a solution to this problem already exists and the good folks at Microsoft have made it easy for you to have full scripting ability over your Azure subscription through your web browser of choice! Enter Azure Cloud Shell…

Accessing Azure Cloud Shell

There are two ways to access Azure Cloud Shell, the first being directly through the Azure Portal itself. Once authenticated, look to the top right of the Portal and you should see a grouping of icons, and in particular, one that looks very much like a DOS prompt (have no fear, DOS is nowhere to be seen).

The second method to access Azure Cloud Shell is by jumping directly to it via its dedicated URL, which will require you to authenticate to your subscription before launching. There is an ever so slight difference between the two methods. Accessing the Shell via the Azure Portal will not require you to specify your Azure directory context (assuming you have several), since your Portal session will have already defaulted to one, whereas with the direct URL method that obviously doesn't happen.


Select your Azure directory context

For both methods of access, you will need to select the command line environment to use for your Cloud Shell session (your choice is Bash or PowerShell) and the one you choose will partially depend upon your environment of preference.


I will explain the difference later, however, for now, I am going to select the Bash environment.

Configuring Azure Cloud Shell storage

When using Azure Cloud Shell for the first time, you will need to assign (or create) a small piece of Azure storage for it to use. Unfortunately, this will incur a very small monthly cost on your subscription.

ACS storage

The storage is used to persist state across your Cloud Shell sessions. To get a little more visibility into what is going on, I am going to click Show advanced settings:


It is slightly disappointing that, at the time of writing, there are only 7 available Cloud Shell storage regions, which means that your Shell storage might not be able to live in the same region as your other resources (depending upon where they are).


Would it really matter that your Cloud Shell blob might live in a different region? I think it is very unlikely that you will consume much egress data into your Shell region, since the purpose of Cloud Shell is management, not data staging, but I suppose it is something to bear in mind when you are scripting.

In my specific case (as you will see above) I decided to use an existing resource group named RG_CoreInfrastructure, and within it create a new storage account named sqlcloudcloudshell [sic] in the North Europe region, containing a new file share named cloudShellfs.

I don't really like this dialog box since it is not very intuitive and allows you to submit incorrectly named or already existing resources, both leading to creation failure; I would rather these were caught and reported at input time. For the record, the general rules for Cloud Shell are that the storage account name must consist of lowercase letters and numbers only, must begin with a letter or number, must be unique across Azure, and must be between 3 and 24 characters long (phew!). The file share name can only contain lowercase letters, numbers, and hyphens, must begin with a letter or number, and cannot contain two consecutive hyphens. It is a little frustrating, but you will get there in the end with a bit of trial and error!
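As a rough illustration, the storage account naming rules above can be checked locally before you touch the dialog. This is just a sketch (the sample names are made up, and Azure-wide uniqueness obviously cannot be verified offline):

```shell
# Sketch: pre-validate a storage account name against the documented
# rules: lowercase letters and numbers only, 3 to 24 characters long.
# Azure-wide uniqueness cannot be checked offline.
validate_storage_account() {
  name=$1
  case "$name" in
    *[!a-z0-9]*) return 1 ;;  # only lowercase letters and numbers
  esac
  [ "${#name}" -ge 3 ] && [ "${#name}" -le 24 ]
}

validate_storage_account "sqlcloudcloudshell" && echo "looks valid"
validate_storage_account "My-Account" || echo "rejected"
```

A similar function could be written for the file share rules (lowercase letters, numbers, single hyphens, starting with a letter or number).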

Whilst it is possible to pre-stage all of these things upfront and select an existing storage account (assuming it was in one of the 7 Cloud Shell regions), I was particularly interested in what Azure was going to provision, being mindful about not choosing storage that was more expensive than it needed to be. As it turns out, Azure created my storage as follows:

Performance/Access tier: Standard/Hot
Replication: Locally-redundant storage (LRS)
Account kind: StorageV2 (general purpose v2)

Other things of note: creation tagged this storage resource with the name-value pair ms-resource-usage/azure-cloud-shell and set a file share quota of 6 GiB.

Running Azure Cloud Shell

Once setup has completed, the Azure Cloud Shell will launch as follows:


If you look at the menu bar at the top of your Azure Cloud Shell window, you will note that there is a dropdown currently set to Bash, which was the type of shell session chosen earlier. If we change this to PowerShell, it will reconnect to the Cloud Shell container, this time using PowerShell as your entry point of choice.


Within a few seconds, you will now have entered into a PowerShell session.


Bash or PowerShell?

If you can remember when we first launched Azure Cloud Shell we had to select Bash or PowerShell as our environment of choice. The question is, which should you choose?

The real answer to this question is that it doesn’t really matter, and is simply down to your preference – especially since you can easily switch between the two. However, I would probably argue (especially for Linux fanboys like myself) that Bash (and therefore Azure CLI via Cloud Shell) is probably easier and more intuitive, and you could always directly enter a PowerShell Core for Linux session using the pwsh command in Bash (see also my previous post) if you wanted.

Whichever way you enter a PowerShell Cloud Shell session, the Az module cmdlets are directly available to you and you do not require any further configuration of your environment. I have specifically found that PowerShell seems to favour more heavily scripted Azure deployments, since you can very easily use its object-oriented potential -such as assigning a resource object to a variable and accessing its properties, methods, and collections programmatically through the object.

Basically, have a play and see what works for you.

Pre-Installed Software

The Azure Cloud Shell is also thankfully deployed with many pre-installed tools and runtimes. This is possibly a subject for another day, but just know that the following are available for your use, all within your browser session(!!!):

Development Runtimes available include PowerShell, .NET Core, Python, Java, Node.js, and Go.

Editing tools installed include code (Visual Studio Code), vim, nano, and emacs.

Other tools you can use are git, maven, make, npm, and many more.

I fully expect these lists to keep growing over time.


I hope you have found this post useful, and if you haven't already done so, please start testing Azure Cloud Shell now! It is the perfect place to quickly configure Azure using the Azure CLI or the Azure PowerShell module (through scripting or simple commands) through your browser, so that you don't need to install those runtimes on your local machine.


AzureRM, Azure CLI and the PowerShell Az Module

There is now a variety of Microsoft-provided command line tools available to connect to (and manage) Azure resources, and the situation can be quite confusing to newcomers or to those who have not kept up to date with new developments. This post is designed to rectify that.

It is probably also worth me mentioning that I am focusing on the usage and deployment of these tools with Linux in mind, however, the following explanation is also applicable to Windows and macOS environments.

PowerShell Core (on Linux)

If you are running a Linux machine, to use AzureRM or the new PowerShell Az module you will first need to install PowerShell Core for Linux. Installation is fairly easy to perform and you can follow this post, using the relevant section for your particular distribution and release. In my specific case, I am running Linux Mint 18.1 Serena, so to find the Ubuntu version it is based on, I first run the following:

more /etc/os-release

This returns:

NAME="Linux Mint"
VERSION="18.1 (Serena)"
PRETTY_NAME="Linux Mint 18.1"
UBUNTU_CODENAME=xenial

As you can see, the UBUNTU_CODENAME is xenial. So if I visit the Ubuntu list of releases page, I can see that xenial equates to version 16.04.x LTS. This means that (for me at least) I can follow the PowerShell on Linux installation section titled Ubuntu 16.04 (instructions provided below simply as an example):
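The lookup above can also be scripted. Here is a small sketch that pulls the field out of an os-release style file; a sample file is used so the snippet is self-contained, but on a real Mint machine you would point it at /etc/os-release instead:

```shell
# Sketch: extract the Ubuntu base codename from an os-release file.
# Sample file used for illustration; real path is /etc/os-release.
cat > /tmp/os-release.sample <<'EOF'
NAME="Linux Mint"
VERSION="18.1 (Serena)"
PRETTY_NAME="Linux Mint 18.1"
UBUNTU_CODENAME=xenial
EOF

codename=$(grep '^UBUNTU_CODENAME=' /tmp/os-release.sample | cut -d= -f2)
echo "Ubuntu base codename: $codename"
```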

# Download the Microsoft repository GPG keys
wget -q
# Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb
# Update the list of products
sudo apt-get update
# Install PowerShell
sudo apt-get install -y powershell

To enter a PowerShell session on Linux, you simply type the following within a bash prompt (on Windows you instead use the powershell executable):

pwsh

AzureRM module (aka Azure PowerShell)

Before you read further, you should understand that AzureRM is deprecated, and if you are currently using it to manage your Azure resources, then you should seriously consider migrating to one of the other options described in the next sections (and removing AzureRM from your system).

I first came across the AzureRM PowerShell module many years ago when I wanted to manage Azure resources from my Windows laptop through PowerShell. At the time, this was the only way of doing so from the command line, and the functionality of AzureRM was provided (in Windows at least) by a downloadable installer that made the module available to PowerShell. You can check out the latest version of, and instructions for, AzureRM by visiting this link, but as mentioned earlier, you should avoid attempting this and instead use the Azure CLI or PowerShell Az module as described in the next sections. The last time I tried, attempts to install AzureRM via PowerShell Core on Linux resulted in failure, with warning messages pointing to the PowerShell Az module, so you are forced onto the newer options anyway.

Upon import into your PowerShell environment, the AzureRM module provided up to 134 cmdlets (in version 2) so that you could manage all of your Azure subscription resources via PowerShell.

Azure CLI

The Azure CLI was intended as the de facto cross-platform command-line tool for managing Azure resources. Version 1 was originally conceived and written using Node.js, and offered the ability to manage Azure resources from Linux and macOS, as well as from Windows (prior to PowerShell Core being available on macOS and Linux). Version 2 of the Azure CLI was rewritten in Python for better platform compatibility, and as such there is not always direct one-to-one compatibility between commands across the two versions. Obviously, you should use version 2 where possible.

Head over to the Microsoft docs article titled Install the Azure CLI for instructions on installing the CLI on your operating system of choice, or you might also be interested in how to do so on my personal Linux distribution of choice (Linux Mint) in my post titled Installing Azure CLI on Linux Mint. Installing the Azure CLI enables the az command syntax from within your Windows command prompt, your PowerShell prompt, or bash (if you are using Linux).

Once installed, to use the Azure CLI you will prefix any instruction with the az command. Thankfully there is strong contextual help so you can simply run az for a full list of available subcommands, or provide a specific subcommand for help on that.

To execute an az command, you will first need to login to your Microsoft account (that you use to access your Azure subscription/s) as follows:

az login

This will return the following message:

Note, we have launched a browser for you to login.
For old experience with device code,
use "az login --use-device-code"

A browser window will be automatically launched for Azure and will require you to log in with your credentials. Alternatively (as you can see in the message), you can use the old authentication method, which is especially useful if your machine does not have a browser. In this case, you would run the following command:

az login --use-device-code

Then log into your account from a browser entering the device code provided.

Either way, after you have done this, you can issue az commands against your Azure resources, for example:

az vm list

This would list out all the current Azure IaaS VMs in your subscription.

Another useful tip with az is to use the find subcommand to search command documentation for a specific phrase. For example, if I want to search for Cosmos DB related commands, I would use the following:

az find --search-query cosmos

Returns the following list of commands:

`az cosmosdb database show`
    Shows an Azure Cosmos DB database

`az cosmosdb database create`
    Creates an Azure Cosmos DB database

`az cosmosdb database delete`
    Deletes an Azure Cosmos DB database

`az cosmosdb collection create`
    Creates an Azure Cosmos DB collection

`az cosmosdb collection delete`
    Deletes an Azure Cosmos DB collection

`az cosmosdb collection update`
    Updates an Azure Cosmos DB collection

`az cosmosdb database`
    Manage Azure Cosmos DB databases.

`az cosmosdb collection`
    Manage Azure Cosmos DB collections.

`az cosmosdb delete`
    Deletes an Azure Cosmos DB database account.

`az cosmosdb update`
    Update an Azure Cosmos DB database account.

Az PowerShell module

You might already have wondered: if PowerShell had a way to manage Azure resources (through the AzureRM module) and we now have the Azure CLI (providing the cross-platform az functionality), what synergy is there between those two environments?

The answer is that there isn't any. This is one reason why AzureRM is deprecated.

With the arrival of PowerShell Core on Linux and macOS, it became possible to import the AzureRM module into those environments too, and yet, as we have already found out, the Azure CLI was the newer mechanism to manage Azure resources. The problem is that both used (however subtly) different commands to do so. In December 2018, Microsoft addressed this situation by introducing the new PowerShell Az module to replace AzureRM, which gives a level of synergy between managing Azure resources through the Azure CLI and managing them through PowerShell. This means that if you understand one command line environment, your scripts will be relatively easily transferable to the other.

If you are looking to migrate from AzureRM to the new Azure Az module then you should check out this excellent post by Martin Brandl. It is also worth you checking out this announcement from Microsoft titled Introducing the new Azure PowerShell Az module.

To install the PowerShell Az module you can follow the Microsoft article titled Install the Azure PowerShell module. As you will read, you must install this module from an elevated prompt, otherwise the installation will fail. On PowerShell Core for Linux, this means that from bash you would first launch an elevated PowerShell session as follows:

sudo pwsh

Then you can run the following PowerShell Install-Module command:

Install-Module -Name Az -AllowClobber

Once installed and imported, you are able to utilize the new PowerShell Az module, but you must first log in to your Azure account using the following PowerShell command:

Connect-AzAccount
This will return a message similar to the one below:

WARNING: To sign in, use a web browser to open
the page and
enter the code D8ZYJCM2V to authenticate.

Once you have performed this action, your Az module in PowerShell will be ready to use. For instance, we can now list IaaS VMs in PowerShell as follows:

Get-AzVM
As you may note, there is a correlation between this command and the Azure CLI command (az vm list) that we ran earlier. However, it is important to realize that the functionality and behavior of the PowerShell Az module and the Azure CLI are not identical. For instance, in this specific case, the Azure CLI returns a verbose result set in JSON format by default, whereas the PowerShell Az module returns the results in a shortened tabular format.


Hopefully, it is clear by now that the AzureRM module is redundant across all environments, and if you need the ability to manage resources in Azure through PowerShell then the Az module is what you should be using. However, given the ease of multi-platform deployment and use of the Azure CLI, it is probably something you should not ignore, and you might arguably prefer it over PowerShell Az (or at the very least run both alongside each other). For example, at the time of writing, exporting an Azure resource template to PowerShell results in code that uses AzureRM rather than PowerShell Az, whereas exporting to CLI uses (of course) the Azure CLI itself.

There is also an argument that the Azure CLI is far better suited to Azure automated deployments over ARM templates due to its brevity, and this is discussed in detail by Pascal Naber in his excellent post titled Stop using ARM templates! Use the Azure CLI instead.

Whether you eventually decide to favor Azure CLI over PowerShell Az (or use both), I sincerely hope this post has helped clear up any confusion you may have between all the available command line options to manage Azure resources -I know it had confused me!

Installing Azure CLI on Linux Mint

The Azure CLI (or Azure Command Line Interface) provides an easy way to create and manage your Azure resources on macOS, Linux, and Windows. If you (like me) are using Linux and wish to use and control Microsoft Azure easily through the command line, then it is probably something you should have.

I wanted to write a very quick post in order to explain the very simple steps you must follow to get the Azure CLI working for you if you are using an Ubuntu derivative distribution such as Linux Mint. Microsoft's basic installation guide, Install Azure CLI with apt, has one specific problem for those distros, so let's take a look at the Ubuntu section of that guide:

Install Azure CLI

If you run steps 1 to 3 you will not observe a problem at the time of execution, but you will hit an error on running the first line of step 4. We see the following:

Ign:13 serena/main Translation-en
Reading package lists... Done
W: The repository ' serena Release' does not have a Release file.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: Failed to fetch  404  Not Found
E: Some index files failed to download. They have been ignored, or old ones used instead.

On the highlighted line we can quite clearly see the 404 Not Found error, and if we take a look inside the /etc/apt/sources.list.d/azure-cli.list file (created in step 2), you will see the repository path that was generated, which is, of course, the problem.

Linux Mint uses its own release codenames, and so the default script (provided by Microsoft) picks up the Mint codename rather than the (required) Ubuntu release name for the Microsoft software repository; see the $(lsb_release -cs) piece of code in their script. So before you start with the Microsoft code, you will need to find the Mint release name and replace it with the correct Ubuntu package base.

Find out your Linux Mint short codename by running the following:

lsb_release -cs

In my case, I am running Linux Mint Serena. Next, I need to find out the short codename of the Ubuntu base build that my edition of Mint is derived from. To do this, visit the Linux Mint Releases page.

From this page, I can see that Serena uses the Xenial package base (as below):

All we need to do is add the right repository path for the right package base. There are two ways to do this. First, you can simply edit /etc/apt/sources.list.d/azure-cli.list and replace (in my case) serena with xenial, as we have done below.
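That manual edit can also be sketched as a one-liner. The repository URL is elided here (as it is in the listing above), and we operate on a temporary copy for illustration; on a real system the target is /etc/apt/sources.list.d/azure-cli.list and sed would need sudo:

```shell
# Sketch: swap the Mint codename for its Ubuntu base in a copy of the
# generated repo file. Real target: /etc/apt/sources.list.d/azure-cli.list
printf 'deb [arch=amd64] <azure-cli-repo-url> serena main\n' \
    > /tmp/azure-cli.list.sample
sed -i 's/serena/xenial/' /tmp/azure-cli.list.sample
cat /tmp/azure-cli.list.sample
```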

Alternatively, you can edit (and hard-code) the variable substitution within script 2 and rerun it (this programmatically does the same thing we performed manually above):

echo "deb [arch=amd64] $AZ_REPO main" | \
    sudo tee /etc/apt/sources.list.d/azure-cli.list

And that's it; once the right package base has been corrected, you can rerun the step 4 script, which should no longer error. The Azure CLI is now ready for you to manage and deploy your Azure resources from your lovely Linux Ubuntu derivative distribution! Check out Common Azure CLI commands for managing Azure resources for some guidance on how to use it.

I’ll give it a try and attempt to list all my Azure resource groups in tabular format:

az group list --output table

Which gives me:

Name                                      Location     Status
----------------------------------------  -----------  ---------
cloud-shell-storage-westeurope            westeurope   Succeeded
future_decoded_demo                       eastus2      Succeeded
Gothenburg                                northeurope  Succeeded
mysql                                     northeurope  Succeeded
sql2014sp1                                northeurope  Succeeded
sqlonlinux                                uksouth      Succeeded
stretchgroup-gothenburg-northeurope       northeurope  Succeeded
stretchgroup-hhserverf-sql01-northeurope  northeurope  Succeeded
stretchgroup-techdaysvm-northeurope       northeurope  Succeeded
techdays                                  northeurope  Succeeded
Testing                                   eastus       Succeeded

As you can see, Azure CLI is very cool and you should start using it now, so don’t let minor configuration difficulties stop you!

Installing Docker on Linux Mint

Ok, so first things first. This is not a ground-shaking post of revelation, and ultimately all the information you need can be found directly from Docker, but like all good posts, this is intended to address any confusion or ambiguity you may find when installing Docker on Linux Mint and to join all the dots for you.

A web search will almost certainly point you to lots of similar posts, most (if not all) of which start by instructing you to add unofficial or unrecognized sources, keys, etc. Therefore, my intention with this post is not to replace the official documentation, but to make the process as simple as possible, whilst still pointing to all the official documentation so that you can be confident you are not breaking security or other such things!

You can head over to the following Docker page Get Docker CE for Ubuntu for the initial setup and updates, but for simplicity, you can follow along below.

First, run the script below in order to update your local files from the configured repositories, install the required packages, and add the official Docker GPG key.

# Ensure your repositories are up to date
sudo apt-get update

# Install required packages
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

# Add Docker's official GPG key
curl -fsSL | sudo apt-key add -

# Check the GPG fingerprint was successfully added
# (you should see output from this command)
sudo apt-key fingerprint 0EBFCD88

Once the package repository has been added (covered in the next step), you can install Docker Community Edition from apt as follows:

sudo apt-get update
sudo apt-get install docker-ce

Once this has been done, next up is perhaps the most important step (in terms of potential problems), and that is adding the correct repository for your version of Linux Mint. The issue you face is that Linux Mint uses its own release codenames, and so the default script (provided by Docker) picks up the Mint codename rather than the (required) Ubuntu release; it's the $(lsb_release -cs) piece of code in their script. Instead, you will need to find out your Mint release name and replace it with the correct Ubuntu package base.

Find out your Linux Mint short codename by running the following:

lsb_release -cs

In my case I find that I am running Linux Mint Serena. Next, you need to find out the short codename of the Ubuntu base build that your edition of Mint is derived from. To do this, visit the Linux Mint Releases page.

From this page, I can see that Serena uses the Xenial package base (as below):

So now all we need to do is add the right repository for the right package base (note that I have added xenial to the script below). In your case, you may be using a newer or older edition of Mint, so simply replace the word "xenial" in the script with the correct package base for the version of Mint you are using.

# Add the Docker repository for the Xenial build
sudo add-apt-repository \
   "deb [arch=amd64] \
   xenial \
   stable"

Once this is completed, you then need to perform the Docker post-installation tasks, which you can find here. These tasks exist to save you from having to run every Docker command via the privileged sudo command. For instance, without going any further, you *could* already run the following command to list all currently downloaded Docker images (there should be none).

# List docker images (using privileged mode)
sudo docker image ls

But we can avoid having to keep specifying sudo by running the following:

# Create a new docker group
sudo groupadd docker

# Add your user to the docker group.
# This script assumes that your current user
# is the one you want to be a docker admin
sudo usermod -aG docker $USER
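As a quick sketch, you can check whether the group change is active in your current session (it will only report the group after you have logged out and back in, as described below):

```shell
# Sketch: report whether the current session already has the docker
# group active (it will not until you log out and back in).
if id -nG | grep -qw docker; then
  echo "docker group active"
else
  echo "log out and back in to pick up the docker group"
fi
```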

It is now important that you log out of your session and back in again, in order to pick up the new security context; otherwise you may be greeted with the following text when attempting to run a docker command without sudo:

retracement@mylinuxhost ~ $ docker images
Got permission denied while trying to connect to the
Docker daemon socket at unix:///var/run/docker.sock:
Get http://%2Fvar%2Frun%2Fdocker.sock/v1.35/images/json:
dial unix /var/run/docker.sock: connect: permission denied

So if you have followed the instructions correctly, you should be able to list docker images (or any other docker command) without requiring sudo as follows:

# List docker images (non-privileged mode)
docker image ls

And that’s it. Docker is now ready for you to run containers on your shiny Linux Mint desktop.

Setting up Samba on Linux Mint (the easy way)

The Server Message Block (SMB) protocol is a network file sharing protocol introduced by Microsoft, and it can be incredibly useful when moving files across multiplatform machines (particularly if your primary machine is a Windows desktop). Samba is a file and print sharing suite of utilities for Linux which provides integration with other machines using the SMB transport.

What this quick guide covers

If you only want to provide basic folder sharing capabilities from your Linux distribution of choice, configuration and setup of Samba is (in my humble opinion) overcomplicated at best and a little bit messy at worst.

This quick guide is specifically targeted at the Linux Mint distribution (although it will be applicable to many others); it only describes how to share your Linux filesystem folders and does not go into any detail regarding advanced Samba functionality.

Even though Linux Mint attempts to make folder sharing more user-friendly, I have never had any success using the GUI based procedure, and have even struggled with the following method described in this article. Furthermore, I prefer to understand what is being configured behind the scenes, so I shall keep to the point and keep it simple.

The following procedure was tested on the latest release of Linux Mint at the time of writing (18.1 “Serena”) but I have also used this successfully against 17.1 “Rebecca”.

Configure the share

The first thing you need to do is configure your share in the samba configuration file.
Edit /etc/samba/smb.conf and scroll to the Share Definitions section inserting the following section (replacing the relevant names as required).

[Dropbox]
path = /media/mylinuxuser/Dropbox
valid users = mylinuxuser
read only = No
#create mask = 0777
#directory mask = 0777

The name in square braces is your desired share name, the path is obviously the real path to the folder you are sharing and the create mask and directory mask parameters define what permissions are assigned to files and directories created through the share. In the section above, the masks are commented out and Samba defaults should be sufficient, but you can override and provide less restrictive permissions if necessary (from a security perspective, please understand what you are doing first!). Ensure you provide at least one valid user to access the share.
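If you do need to override the defaults, the commented mask lines would be uncommented with your chosen octal values, for example (0775 is shown purely as an illustration; pick values appropriate to your own security requirements):

```
create mask = 0775
directory mask = 0775
```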

To check the share setup, run:

testparm
Install Samba
If Samba is not installed (you can check this with sudo service --status-all | grep smbd or sudo service --status-all | grep samba), install it:

sudo apt-get install samba
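As an alternative sketch, you can check for the smbd binary directly, which works regardless of whether the service is registered with the service command:

```shell
# Sketch: check whether the smbd binary is on the PATH.
if command -v smbd >/dev/null 2>&1; then
  echo "samba appears to be installed"
else
  echo "samba not found - install it with apt-get"
fi
```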

Add SMB password
Ensure you add an SMB password for every valid user that you wish to access the share:

sudo smbpasswd -a mylinuxuser

Restart SMBD daemon
Finally, for the new share to be visible to your remote device you will need to restart samba (you will also need to do this every time you add a new share or reconfigure an existing one):

sudo service smbd restart

That is all there is to it. Once you have followed these steps, your share will be available to your remote SMB client from your Linux Mint desktop.

Configuring Red Hat Enterprise Linux 7.2 for YUM without a Red Hat Subscription

It has been a very long time since I have installed a Red Hat Enterprise Linux distribution, having tended to prefer Ubuntu-based distributions (such as Linux Mint), or CentOS if I really wanted a Red Hat derivative (although for Oracle Database installations on Linux, I would always tend to use Oracle Enterprise Linux for simplicity). With the development and impending arrival of SQL Server on Linux, I thought it was about time that I returned to the playground with a vanilla copy of Red Hat EL so that I could test it with SQL Server, Docker Linux containers, Spark, and lots of other sexy things that have been avoiding my immediate attention for a little too long.

After a basic installation, I decided that it was time to start using YUM to add some of my favorite packages to this build, when I hit this quite annoying error:

[retracement@localhost]$ sudo yum install nano
Loaded plugins: product-id, search-disabled-repos, 
This system is not registered to Red Hat Subscription Management.
You can use subscription-manager to register.
There are no enabled repos.
Run "yum repolist all" to see the repos you have.
You can enable repos with yum-config-manager --enable 

Ok, so this is obviously not going to fly, and I am certainly not going to pay for a Red Hat subscription, so I decided to break out Bingoogle and came across this rather useful post from Aziz Saiful called HowTo Install redhat package with YUM command without RHN -and I recommend you also give it a read (although some of its details are ever-so-slightly out of date for this release of RHEL 7.2). The post discusses how to set up an alternative source pointing to the installation DVD; for Windows people, this is the equivalent of the -Source parameter that we would use in PowerShell with the Add-WindowsFeature cmdlet to add new features from local media.
To cut a long story short, I decided to work my way through that article and provide an updated post (and if nothing else, I will not need to Bingoogle this again!).

Our first step is to ensure that we have the Red Hat Enterprise Linux 7.2 DVD available (i.e. the one we installed Linux from).
The next step is to mount the DVD to a mount point. For simplicity's sake, I chose /cdrom off the root.

[retracement@localhost]$ sudo mkdir /cdrom
[retracement@localhost]$ sudo mount /dev/cdrom /cdrom

Ok, so now we have a mounted cdrom, we can create a YUM repo configuration file within the path /etc/yum.repos.d to point to this location. Unfortunately, you will need to use vi to do this (I hate vi!), but if you need any tips on vi, please use this Vi Cheat Sheet. Once in vi, create the file dvd.repo (or call it anything else you want, but ensure you keep the .repo extension, otherwise the file will not be recognized by YUM) with the following contents:

[dvd]
name=RHEL 7.2 dvd repo
baseurl=file:///cdrom
enabled=1
gpgcheck=1
gpgkey=file:///cdrom/RPM-GPG-KEY-redhat-release
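Incidentally, if you would rather stay out of vi altogether, the same file can be staged from the shell with a here-document and then copied into place. This is only a sketch; it writes to a scratch file first so that nothing needs root until the final copy, and it assumes the /cdrom mount point used above:

```shell
# Stage the repo definition in a scratch file, then install it with:
#   sudo cp "$repo" /etc/yum.repos.d/dvd.repo
repo=$(mktemp)
cat > "$repo" <<'EOF'
[dvd]
name=RHEL 7.2 dvd repo
baseurl=file:///cdrom
enabled=1
gpgcheck=1
gpgkey=file:///cdrom/RPM-GPG-KEY-redhat-release
EOF
```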

Once you have created this file, and provided you have performed every step correctly, you can take a look at YUM's repo list.

[retracement@localhost]$ sudo yum repolist

And while you still receive the same error regarding the System not being registered to Red Hat Subscription Management, you should also see your new repo listed underneath.
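If the new repo does not appear, the usual culprit is a typo in the .repo file, such as a missing [section] header or enabled=1 line. One quick way to check a file without re-running yum is to parse it with awk; repo_enabled below is a hypothetical helper name for illustration, not a yum command:

```shell
# Succeed only if the named section in a .repo file sets enabled=1.
repo_enabled() {
    # $1 = path to .repo file, $2 = section name (e.g. dvd)
    awk -v section="[$2]" '
        $0 == section              { in_section = 1; next }
        /^\[/                      { in_section = 0 }
        in_section && /^enabled=1/ { found = 1 }
        END                        { exit !found }
    ' "$1"
}

# e.g. repo_enabled /etc/yum.repos.d/dvd.repo dvd && echo "dvd repo is enabled"
```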
To check it all works, let’s install nano!

[retracement@localhost]$ sudo yum install nano


Perfect! Like everything in Linux, it is easy when you know how. On a closing note, it is unclear to me at this moment whether this will entirely resolve my installation problems, since I will obviously still need access to an online repo or sources in order to install third-party packages not included on the installation media, but once I have figured that out, I will update this post.

Oracle Unbreakable Linux installation fails on Hyper-V Generation 2 Virtual Machine

Have you been attempting (and failing) to install Oracle Unbreakable Linux as a Virtual Machine under Hyper-V and cannot figure out what is wrong?

If you are receiving the following message:

Boot Failed. EFI SCSI Device.
Boot Failed. EFI SCSI Device.
Boot Failed. EFI SCSI Device. Failed Secure Boot Verification.
PXE Network Boot using IPv4
PXE-E18: Server response timeout.
Boot Failed. EFI Network.
No Operating System was Loaded.
Press a key to retry the boot sequence...

Then I bet you are using a Hyper-V generation 2 VM?
When I first saw the error message, it was quite clear that something was not working properly in the boot sequence, and my first thought was that either my install media was corrupt or that there was an incompatibility between the Oracle Linux boot media and Hyper-V generation 2 virtual machines (the error message gives a big clue on this).

Update 11th October 2016

Before continuing with this article, you should know that there is a slightly better way to address this problem, since you are also very likely to run into the same (or similar) problem on CentOS, Ubuntu, or anything else that is not Microsoft Windows in Hyper-V. For instance, I have noticed that with an attempted CentOS installation in a version 8 generation 2 Hyper-V VM (Windows 10 Anniversary Update Edition) there is not even a visual clue that a Secure Boot failure has occurred, at least not on the initial boot (as before).

All we see is:

However, if you wait long enough you will eventually arrive at the following screen:


So rather than disable Secure Boot as this blog post instructs, I recommend changing Secure Boot to use the Microsoft UEFI Certificate Authority template rather than the Microsoft Windows template. Make this change through the Virtual Machine settings page (the Security node in the hardware pane), and if it is not already obvious, your VM must be stopped in order to change this setting. (From PowerShell, the Set-VMFirmware cmdlet's -SecureBootTemplate parameter makes the same change.) Once you have made the change, your problems should be resolved and your Linux distribution should boot normally.


If you do not see this option available to you, then feel free to proceed with the alternative route as described below.

Immediately, I tried installing to a generation 1 VM and it ran through smoothly without incident (proving the media was fine), so I returned to my generation 2 VM to resolve the issue. Looking back at the error message, the Failed Secure Boot Verification warning stands out like a sore thumb, and Hyper-V aficionados will recognise that Secure Boot was actually introduced with generation 2 VMs. Thankfully, it is very easy to disable Secure Boot through the Firmware option within the Virtual Machine properties pages in System Center (or via Hyper-V Manager/ Settings/ Firmware). Alternatively, we can also do this through PowerShell as follows:

Set-VMFirmware -VMName "VMname" -EnableSecureBoot Off

The next time the machine boots, the installer should automatically launch (and if it isn't clear already, you must leave Secure Boot disabled post-install).


For more information on this subject, see Oracle Linux virtual machines on Hyper-V, and visit What's New in Hyper-V for Windows Server 2012 R2 for an in-depth discussion of Hyper-V's new features in Windows Server 2012 R2.