Tag Archives: Development

Removing and maintaining Azure resource group deployments based upon deployment count

Whenever you create or update an Azure resource, a new deployment is created under the resource's configured resource group. This deployment history is retained ad infinitum until you eventually hit the hard limit of 800 deployments per resource group. You may think this figure is more than enough to accommodate all the possible resource changes that could ever be made in a resource group, but if you are running CI/CD pipelines to push out your Infrastructure as Code (IaC), or create lots of resources per resource group, then it is very likely you will exhaust this figure very quickly.

Every time a release pipeline runs, regardless of whether you are changing resources or not, all configured and enabled deployments in the pipeline will result in a new deployment record. You can view all historic deployments in the Azure Portal for each resource group by selecting its Deployments item under the Settings pane (see below).


In the example above you will note that only 4 deployments have so far been created in this resource group. When the hard limit is eventually hit, all subsequent deployments to that specific resource group will fail.
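
If you would rather check how close a resource group is to the limit from the command line instead of the Portal, a quick sketch like the one below will do it (hedged: the resource group name is just an example, and it assumes the PowerShell Az module and an authenticated session):

# Count the deployment history entries for a resource group (example name)
$resourceGroupName = "rg-acme-prod"
(Get-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName).Count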

Microsoft’s solution

Microsoft provide a solution to this in the MS doc titled Resolve error when deployment count exceeds 800, which allows you to programmatically remove deployments (through the Azure CLI or PowerShell Az) based upon deployment date; this is made possible by the Timestamp property. I have also seen many blog posts that simply regurgitate this Microsoft code, offering really just one solution – to maintain deployments based on date.
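
For reference, that date-based approach boils down to something along these lines (a sketch only, assuming the PowerShell Az module; the 60-day cut-off and resource group name are example values you would adjust):

# Sketch of the date-based clean-up: remove deployments older than 60 days (example threshold)
$resourceGroupName = "rg-acme-prod"
$cutoff = (Get-Date).AddDays(-60)
Get-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName |
    Where-Object { $_.Timestamp -lt $cutoff } |
    ForEach-Object { Remove-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -Name $_.DeploymentName }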

This is all well and good if your deployments span many weeks or months and the counts are predictable based on date and time, but what happens if you have highly active or highly unpredictable deployments, or a high number of resources per resource group?

Deployment count solution

Perhaps a far better solution would be to maintain a set deployment count that will allow each release to succeed every time. If you are only deploying a single resource, then clearly you would only have to ensure a spare deployment slot is available. If you are deploying resources through a CI/CD pipeline, then you simply need to ensure that at least that many deployment slots are available in your resource group.

Running from Azure CLI or PowerShell Az command-line

If you are manually running the maintenance code, either from a remote command-line session or directly within the Portal command line itself, you will first have to set the context of the subscription you wish to maintain. We can do this easily in PowerShell by running the following code (ensuring that you change the subscription name to the one you wish to target):

$subscriptionName =  "ACMEPRODSUB"
$subscription = Get-AzSubscription -SubscriptionName $subscriptionName
Set-AzContext -Subscription $subscription | Out-Null

Once you have set your subscription you can then use the subsequent code (detailed in the Running from within an Azure DevOps Pipeline section).

Running from within an Azure DevOps Pipeline

I have generally found that running a maintenance step at the start of any infrastructure Release Pipeline is a good point of execution. It keeps the time taken to cycle through and delete any excess deployments to a minimum, and also ensures there are enough free deployment slots to prevent release failure. For our pipelines, maintaining a deployment count of 700 is a good compromise – it leaves 100 spare slots for each run and plenty of past deployment history.

In the release pipeline, we can create an Azure PowerShell task within a release stage.

For convenience we can use our code as an inline script -though you may ultimately decide to parameterize the $retainCount variable and publish the script from a repo instead.

Use your common sense when setting the number of deployments you wish to retain.

$retainCount = 700

Since our Azure PowerShell task has explicit settings for the subscription that we wish to execute the script against, we do not need to worry about changing subscription context. All we are concerned about is the functionality of the code itself.

In the code below we first loop through all resource groups in the current subscription context. For each resource group we return all of its deployments, and any deployment that falls above the threshold set by $retainCount (assuming there are any) will be deleted.

foreach ($resourceGroup in Get-AzResourceGroup){
    $resourceGroupName = $resourceGroup.ResourceGroupName
    $deployments = Get-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName

    Write-Host "Resource group" $resourceGroupName "has" $deployments.Count "deployments..."

    $iterationCount = 1
    foreach ($deployment in $deployments) { # deployments are returned sorted by age descending (newest first)
        if ($iterationCount -gt $retainCount){
            # anything beyond the retained count (i.e. the oldest deployments) is removed
            Remove-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -Name $deployment.DeploymentName | Out-Null
            Write-Host "   Deleted deployment on" $deployment.Timestamp -ForegroundColor Magenta
        }

        $iterationCount++
    }
    Write-Host "   Resource group deployment maintenance complete." -ForegroundColor Green
}

This results in the following output:

If you are using an Azure DevOps release task to execute this code you will not see coloured text in the task output.

Conclusion

If you are manually maintaining your resource group deployments or wish to automate the process through Azure DevOps, the timestamp-based solution provided by Microsoft may not fit your requirements, given the frequency of your deployments or other considerations. Because the deployments are returned in time-sorted descending order, we can easily delete deployments based upon deployment count – always leaving enough space for future deployments rather than removing them based upon date alone. Ensuring that this maintenance task runs prior to any automated infrastructure releases can improve the success rate of your release pipelines in highly active environments.

How to restore a deleted Azure DevOps repository

If you are using Azure DevOps, you might be comforted that your Git repo is “in the Cloud” and automatically has availability and disaster recovery guarantees. However, you (or someone else) still have the ability to accidentally (or maliciously) delete repos from Azure DevOps Repos. Surprisingly, at the time of writing, there is no GUI-based option to restore your repo. This might initially instill a sense of panic as you frantically search for the latest local clone to replace your remote – but there is a better way.

When you delete an Azure DevOps repository, it is initially soft-deleted to the “recycle bin”. After a period of time (oddly, I have failed to find an official Microsoft reference stating exactly what this period is, but I believe it is 28 days) it is automatically purged and hard-deleted. Although there is no GUI support to restore your soft-deleted repositories, that ability is exposed through the Azure DevOps REST API, but frustratingly the Microsoft Azure DevOps Services REST API Reference does not provide a worked example on the Repositories – Restore Repository From Recycle Bin API call page.

To make your life easier, I will provide the solution below!


Getting started with Azure DevOps REST API and PAT token

Within my blog so far I have provided several worked examples of making a REST API call to Azure DevOps. If you are new to this, I suggest you first check out my post titled Querying Azure DevOps REST API with PowerShell.

Once you have assigned your $header variable from an encoded PAT token (as documented in the aforementioned article) you are ready to roll!
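
As a quick reminder, the $header assignment from that post looks roughly like this (the PAT below is a placeholder, not a real token):

# Build a Basic authentication header from an (example) Azure DevOps PAT
$personalToken = "<your-PAT-here>"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)"))
$header = @{authorization = "Basic $token"}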

Set your repository’s Organisation and project

Each project will contain its own set of Azure repositories. Ensure you provide the correct values for your organization and project, and ensure that any names containing spaces have those spaces replaced with %20 (so that a valid url can be formed).

$organization = "retracement"
$project = "ACME%20Corp"

REST API call to list repositories in the recycleBin

From the Microsoft Azure DevOps Services REST API Reference we can call the Repositories – List REST API call to return a list of all deleted repositories in our recycleBin for our organization’s project.

We will first build up our $url using the variables set earlier.

$url = "https://dev.azure.com/$organization/$project/_apis/git/recycleBin/repositories?api-version=5.1-preview.1"

Now that all the variables are set, we can make our REST API call and iterate over all deleted repositories:

$deletedRepos = Invoke-RestMethod -Uri $url -Method Get -ContentType "application/json" -Headers $header

Write-Host "Deleted repositories"
Write-Host "--------------------"

# iterate over each deleted repository and report its details
$deletedRepos.value | ForEach-Object {
    $repoId = $_.id
    $repoName = $_.name
    $deletedBy = $_.deletedBy.displayName
    $deletedDate = $_.deletedDate
    Write-Host "repoId:" $repoId $repoName "deleted on" $deletedDate "deleted by" $deletedBy
}

The following output is returned:

Deleted repositories
--------------------
repoId: 3b1bbfe0-470d-4724-bc69-6ec29ff88cb5 SuperImportantRepo deleted on 2020-04-08T13:44:21.807Z deleted by Mark Broadbent
repoId: 4c3abef0-520a-2461-ac70-1ad30ef11ab7 NotImportantRepo deleted on 2020-04-12T10:00:01.201Z deleted by Mark Broadbent

We have now identified that someone (me!) has deleted a super important repository by accident. Using the repoId we can use this to restore it from the recycleBin.


Recover soft deleted repository

First we need to set a variable $repoId to the repoId of the deleted repository (SuperImportantRepo) that we identified earlier. This will be used in our next REST API call.

$repoId = "3b1bbfe0-470d-4724-bc69-6ec29ff88cb5"

Now we can return to the Repositories – Restore Repository From Recycle Bin REST API call page and use it to build out our new url.
As you will see, the url contains our $repoId, and we will also create a $body variable set to a JSON key-value pair with the deleted property set to false. This JSON body is passed into our REST API call using the Patch method.

$url = "https://dev.azure.com/$organization/$project/_apis/git/recycleBin/repositories/" + $repoId +"?api-version=5.1-preview.1"
$body = ConvertTo-Json @{"deleted" = "false"}
Invoke-RestMethod -Uri $url -Method Patch -Body ($body) -ContentType "application/json" -Headers $header

The output of our final REST API call results in:

id            : 3b1bbfe0-470d-4724-bc69-6ec29ff88cb5
name          : SuperImportantRepo
url           : https://dev.azure.com/retracement/e6fa212f-3520-4c30-8c28-d6bd88926ff2/_apis/git/repositories/3b1bbfe0-470d-4724-bc69-6ec29ff88cb5
project       : @{id=e6fa212f-3520-4c30-8c28-d6bd88926ff2; name=ACME%20Corp; description=Super Important Repository for mission critical systems; url=https://dev.azure.com/retracement/_apis/projects/e6fa212f-3520-4c30-8c28-d6bd88926ff2; state=wellFormed; revision=626; 
                visibility=private; lastUpdateTime=2019-11-20T15:49:09.773Z}
defaultBranch : refs/heads/master
size          : 731
remoteUrl     : https://retracement@dev.azure.com/retracement/ACME%20Corp/_git/SuperImportantRepo
sshUrl        : git@ssh.dev.azure.com:v3/retracement/ACME%20Corp/SuperImportantRepo
webUrl        : https://dev.azure.com/retracement/ACME%20Corp/_git/SuperImportantRepo

As we can see from the above output – success!


Summary

As I have shown, deleting a repository by accident in Azure DevOps does not have to be a disaster recovery situation, since the recycleBin and the Azure DevOps REST API make it relatively simple to view and restore repositories (when you know how!). However, it is worth pointing out that for Git repositories no similar safety net exists if you delete a branch (unlike with Tfs repos in Azure DevOps). So the moral of the story is to ensure you periodically back up all your remote repositories AND set branch policies to protect them against accidental deletion.
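
As a starting point for that kind of backup, here is a rough sketch (not a hardened script) that lists the project's repositories through the REST API and mirrors each one locally. It assumes the $header, $organization, and $project variables from earlier, that git is on your path, and that you can authenticate to the remotes (for example with a PAT):

# List all repositories in the project and take a mirror clone of each (sketch only)
$url = "https://dev.azure.com/$organization/$project/_apis/git/repositories?api-version=5.1"
$repos = Invoke-RestMethod -Uri $url -Method Get -ContentType "application/json" -Headers $header
$repos.value | ForEach-Object {
    # a mirror clone captures all branches and tags for the repository
    git clone --mirror $_.remoteUrl "$($_.name).git"
}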

Hope you enjoyed the post!

Cannot delete old build definitions in Azure DevOps

I have been experiencing a problem for quite a while now in my current environment, in that some of our old builds cannot be deleted. When you attempt to do so it results in the following error:

One or more builds associated with the requested pipeline(s) are retained by a release. The pipeline(s) and builds will not be deleted.

Many of our pipelines have undergone a lot of change over time, to the degree that it is not even obvious anymore why (or indeed where) these builds are being prevented from being dropped. The only thing that is clear is that until they can be deleted, the old build definitions will remain.

I have tried to set the Stop retaining the build setting for all builds associated with a build definition to no avail. The setting just does not seem to want to take in most cases.

I have also tried playing around with build retention policies and even tried tidying up the release pipelines (and releases) themselves. Unfortunately for me, those darn build pipelines do not want to delete.

Today I decided to put some of my recent PowerShell and Azure DevOps REST API experience (see previous posts in this blog) to the test and attempt to get to the bottom of the problem. As it turns out, there is a build property called retainedByRelease that is exposed through the REST API, and it is the reason why a build cannot be removed – resulting in our irritating error.

Using the same technique that I wrote about in Querying Azure DevOps REST API with PowerShell, I first decided to try and report on this property. Please refer back to that post for more explanation on utilizing the REST API, but I realized I would need to make two REST API calls. The first would be to query one or more build definitions, and the second would be to query all builds for each build definition. More specifically, with this last call I would report on the retainedByRelease property.


Querying the build definition builds

In the first piece of code we create our authorization token.

$personalToken = "tiksj25oumfavuzr4316vhpxw2mywzbapxj7sw3x2xet3dml1ygy"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)"))
$header = @{authorization = "Basic $token"}

Next we set our organization and project variables.

$organization = "retracement"
$project = "ACME%20Corp"

Our first REST API call queries all build definitions within the project.

#all build definitions
$url = "https://dev.azure.com/$organization/$project/_apis/build/definitions?api-version=6.0-preview.7"
$builddefinitions = Invoke-RestMethod -Uri $url -Method Get -ContentType "application/json" -Headers $header
$builddefinitions.value | Sort-Object id | ForEach-Object {
    Write-Host $_.id $_.name $_.queueStatus

    #all builds for a definition
    $url = "https://dev.azure.com/$organization/$project/_apis/build/builds?definitions=" + $_.id + "&api-version=6.0-preview.5"
    $builds = Invoke-RestMethod -Uri $url -Method Get -ContentType "application/json" -Headers $header

    $builds.value | Sort-Object id | ForEach-Object {
        #report on retain status
        Write-Host " BuildId" $_.id "- retainedByRelease:" $_.retainedByRelease
    }
    Write-Host
}

For brevity I provide only a subset of the results:

339 SQL Dacpac Build enabled
BuildId 43045 - retainedByRelease: False
BuildId 43051 - retainedByRelease: False
BuildId 43053 - retainedByRelease: True
BuildId 43307 - retainedByRelease: True
BuildId 43325 - retainedByRelease: True

366 Databricks Notebooks Build enabled
BuildId 45338 - retainedByRelease: False
BuildId 45340 - retainedByRelease: False
BuildId 45346 - retainedByRelease: True
BuildId 46032 - retainedByRelease: True

375 ARM Templates Build enabled
BuildId 46452 - retainedByRelease: False
BuildId 46454 - retainedByRelease: True

As you can see, from the three active build definitions listed, each one has at least one build that is marked for retention by release.


Setting the build retainedByRelease property

Now that we have a procedure in place to query the retainedByRelease property, it is just as easy to set it. If you are trying to remove a specific build definition (or its builds), you can implement a filter in the $builddefinitions iterator. So:

$builddefinitions.value | Sort-Object id | ForEach-Object {

Would now become:

$builddefinitions.value | Where-Object {$_.name -eq "ARM Templates Build"} | Sort-Object id | ForEach-Object {

In the above example we are filtering on a single build definition, but feel free to use the filter of your choosing.

The final thing we need to do is make a REST API call to update each build returned by this filtered build definition. We can do this by pointing $url at the specific build and then adding the following line inside our build iterator (both are shown in the full example below):

Invoke-RestMethod -Uri $url -Method Patch -Body (ConvertTo-Json @{"retainedByRelease"='false'}) -ContentType "application/json" -Headers $header

You will note the use of -Method Patch within this call rather than -Method Get, together with a JSON body. The Patch method allows us to partially update a resource (in this case a single field) with the JSON body provided.


Putting it all together

So if we wanted to update the builds of one specific Build Definition called ARM Templates Build we would run the following code:

$personalToken = "tiksj25oumfavuzr4316vhpxw2mywzbapxj7sw3x2xet3dml1ygy"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($personalToken)"))
$header = @{authorization = "Basic $token"}

$organization = "retracement"
$project = "ACME%20Corp"

#all build definitions
$url = "https://dev.azure.com/$organization/$project/_apis/build/definitions?api-version=6.0-preview.7"
$builddefinitions = Invoke-RestMethod -Uri $url -Method Get -ContentType "application/json" -Headers $header

$builddefinitions.value | Where-Object {$_.name -eq "ARM Templates Build"} | Sort-Object id | ForEach-Object {
    Write-Host $_.id $_.name $_.queueStatus

    #all builds for a definition
    $url = "https://dev.azure.com/$organization/$project/_apis/build/builds?definitions=" + $_.id + "&api-version=6.0-preview.5"
    $builds = Invoke-RestMethod -Uri $url -Method Get -ContentType "application/json" -Headers $header

    $builds.value | Sort-Object id | ForEach-Object {
        #report on retain status
        Write-Host " BuildId" $_.id "- retainedByRelease:" $_.retainedByRelease

        #api call for a build
        $url = "https://dev.azure.com/$organization/$project/_apis/build/builds/" + $_.id + "?api-version=6.0-preview.5"

        #set retainedByRelease property to false
        Invoke-RestMethod -Uri $url -Method Patch -Body (ConvertTo-Json @{"retainedByRelease"='false'}) -ContentType "application/json" -Headers $header
    }
    Write-Host
}

Now that none of the builds for the ARM Templates Build build definition are retained by a release, you should be able to remove this build definition without further error (you do not need to first remove its builds).


Summary

There are certain issues that you might experience in Azure DevOps which seem almost impossible to resolve through the GUI, but yet again the Azure DevOps API can come to our rescue. In this specific example we have easily queried aspects of DevOps through PowerShell, and this time even changed information through it to resolve our problem.

I hope you find this post useful for this rather frustrating problem!

Using Azure CLI to query Azure DevOps

In previous posts, I have touched upon the use of Azure Cloud Shell for generic querying of Azure resources, and I thought it would be useful to quickly document its use for something a little more specific, such as querying or manipulating Azure DevOps through the command line.

For my example, I will focus on something as mundane and straightforward as querying the Azure DevOps repository metadata (so that I can look at and compare branch settings against each other), but I hope you get the idea that this is just scratching the surface, and that the Azure CLI is a powerful tool to add to your arsenal of scripting languages.

The whole end-to-end process required to query Azure DevOps is itself relatively straightforward – especially when you know exactly what you are doing (isn't everything!) – but before we get there, you will first need to have access to the Azure CLI. You have two ways of using it, the first being to install it locally – instructions to do this can be found via an earlier post titled “AzureRM, Azure CLI and the PowerShell Az Module”. Alternatively, you may also use the Azure CLI through Azure Cloud Shell (i.e. directly from Azure) as detailed in another of my posts titled “Introduction to Azure Cloud Shell”.

Configure az devops pre-requisites

Once you are up and running with the Azure CLI and have access to its az command, there are a few pre-requisites needed before you can query Azure DevOps directly. These are detailed as follows:
1. You must ensure that you are running Azure CLI version 2.0.49 or higher. You can check this by simply running the following command:
az --version
2. Your Azure CLI must have the azure-devops extension added to it. To check if this is already available run the following command to list your extensions:
az extension list
If the extension is not listed you can add it as follows:
az extension add --name azure-devops
For further information on this extension, you can view the Microsoft documentation titled “Use extensions with Azure CLI”.
3. Your az session must be signed in to your Azure tenant, and to do this use the az login command and provide the relevant credentials:
az login
4. Finally, to avoid having to provide a project context every time you run an az devops command you should set a default project context as follows (obviously use your own organization and project):
az devops configure --defaults organization=https://retracement.visualstudio.com/ project="ACME Corp"

You are now ready to go!


Querying DevOps through Azure CLI

In order to find out all the commands now made available to you with your new extension, you can execute the following command:
az devops -h

By doing so, you will note that the extension provides devops subgroup commands such as teams -for example to list your current devops teams:
az devops team list

As the help context shows, the extension also provides “related groups” (such as repos) to manage other facets of Azure DevOps. In our specific example, we want to query all available repos for our Azure DevOps project. We can do this as follows:
az repos list
Notice that your results come back in JSON format by default. We can override this and return results in tabular format by using the output parameter:
az repos list --output table
The Azure CLI also provides a query option so that you can provide a JMESPath query string to filter your results. For instance, in the most basic scenario we can return the first element from our results (using zero-based index notation):
az repos list --query [0]

That is clearly not so useful, so instead I want to return specific properties from all repos. In this case, I want each repo's name, Azure repo url path, and the default branch that is set:
az repos list --query [].[name,webUrl,defaultBranch]

In our final example we will return the results in a tabular format and alias our property names (for our column headings):
az repos list --query "[].{Name:name, Url:webUrl, DefaultBranch:defaultBranch}" --output table


Summary

Being able to programmatically query Azure DevOps through the Azure CLI is incredibly useful and powerful, and could help you keep your environment standardized (for example, ensuring branch policies across repos are identical) or even provide a method by which you can easily track change. Obviously we are not just restricted to Azure DevOps repos; we can look at all facets of the environment. For example, to list all current builds in a project we can issue the following command:
az pipelines build list -o table

As a final point of note, I confess to finding JMESPath far less intuitive or simple for querying and filtering results than other languages (especially given the semi-structured nature of the data you are filtering), but with a little bit of trial and error, you can eventually get there!
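
If JMESPath is not clicking for you, one alternative (a sketch, and admittedly a personal preference) is to let the CLI hand back its raw JSON and do the filtering in PowerShell instead:

# Parse the CLI's JSON output and shape it with familiar PowerShell cmdlets
az repos list | Out-String | ConvertFrom-Json |
    Select-Object name, webUrl, defaultBranch |
    Format-Table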

I hope you find my post useful and please feel free to provide feedback in the comments.

Further References

https://docs.microsoft.com/en-us/cli/azure/query-azure-cli?view=azure-cli-latest
http://jmespath.org/examples.html

Introduction to Azure Cloud Shell

In my last couple of posts, I have described the remote management of Azure through the command line from what was essentially a fat (or thick) client. This gives you an awful lot of scripting and automation control over the platform by using either the Azure CLI or PowerShell through the PowerShell Az Module. This is perfect for most cases, but what if you are using an unsupported Operating System or you only have access to the browser (perhaps via a thin client)?

Thankfully a solution to this problem already exists and the good folks at Microsoft have made it easy for you to have full scripting ability over your Azure subscription through your web browser of choice! Enter Azure Cloud Shell…

Accessing Azure Cloud Shell

There are two ways to access Azure Cloud Shell, the first being directly through the Azure Portal itself. Once authenticated, look to the top right of the Portal and you should see a grouping of icons and in particular, one that looks very much like a DOS prompt (have no fear, DOS is nowhere to be seen).

The second method to access Azure Cloud Shell is by jumping directly to it via shell.azure.com which will require you to authenticate to your subscription before launching. There is an ever so slight difference between each method. Accessing the Shell via the Azure Portal will not require you to specify your Azure directory context (assuming you have several) since your Portal will have already defaulted to one, whereas with the direct URL method that obviously doesn’t happen.


Select your Azure directory context through shell.azure.com

For both methods of access, you will need to select the command line environment to use for your Cloud Shell session (your choice is Bash or PowerShell) and the one you choose will partially depend upon your environment of preference.


I will explain the difference later, however, for now, I am going to select the Bash environment.

Configuring Azure Cloud Shell storage

When using Azure Cloud Shell for the first time you will need to assign (or create) a small piece of Azure storage that it will use.  Unfortunately, this will incur a very small monthly cost on your subscription.


The storage is used to persist state across your Cloud Shell sessions.  To get a little more visibility about what is going on I am going to click Show advanced settings:


It is slightly disappointing that at the time of writing there are only 7 available Cloud Shell storage regions -which means that your Shell storage might not be able to live in the same region as your other resources (depending upon where they are).


Would it really matter that your Cloud Shell blob might live in a different region? I think it is probably very unlikely that you will consume egress data into your Shell region, since management (not data staging) is the purpose of Cloud Shell, but I suppose you might want to bear it in mind when you are scripting.

In my specific case (as you will see above) I decided to use an existing resource group named RG_CoreInfrastructure, within it create a new storage account named sqlcloudcloudshell [sic] in the North Europe region, and within this create a new cloudShellfs file share.

I don’t really like this dialog box since it is not very intuitive and allows you to submit incorrectly named or existing resources – both leading to creation failure. I’d rather these were caught and reported on at input time. For the record, the general rules for our Cloud Shell storage are that the storage account name needs to consist of lowercase letters and numbers, must begin with a letter or number, be unique across Azure, and be between 3 and 24 characters long (phew!). The file share can only contain lowercase letters, numbers, and hyphens, must begin with a letter or number, and cannot contain two consecutive hyphens. It is a little frustrating, but you will get there in the end with a bit of trial and error!

Whilst it is possible that you could pre-stage all of these things upfront and select an existing storage account (assuming it was in one of the 7 Cloud Shell regions), I was particularly interested in what Azure was going to provision, being mindful about not choosing storage that was more expensive than it needed to be. As it turns out, Azure created my storage as follows:

Performance/Access tier: Standard/Hot
Replication: Locally-redundant storage (LRS)
Account kind: StorageV2 (general purpose v2)

Other things of note: creation tagged this storage resource with the name-value pair ms-resource-usage/azure-cloud-shell and set a file share quota of 6 GiB.
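
If you would rather pre-stage this storage yourself (so you control the names, region, and SKU up front), a rough PowerShell Az sketch along the lines of what Azure created for me might look like the following. The names are examples only, and the share-related cmdlets assume the Az.Storage module is available:

# Pre-stage a Cloud Shell storage account and file share (example names, North Europe)
New-AzResourceGroup -Name "RG_CoreInfrastructure" -Location "northeurope" -Force
$storage = New-AzStorageAccount -ResourceGroupName "RG_CoreInfrastructure" -Name "sqlcloudcloudshell" `
    -Location "northeurope" -SkuName Standard_LRS -Kind StorageV2 -AccessTier Hot
New-AzStorageShare -Name "cloudshellfs" -Context $storage.Context
Set-AzStorageShareQuota -ShareName "cloudshellfs" -Quota 6 -Context $storage.Context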

Running Azure Cloud Shell

Once setup has completed, the Azure Cloud Shell will launch as follows:


If you look at the menubar at the top of your Azure Cloud Shell window you will note that there is a dropdown currently set to Bash -which was the type of shell session chosen earlier. If we change this to PowerShell it will reconnect back into the Cloud Shell Container, this time using PowerShell as your entry point of choice.


Within a few seconds, you will now have entered into a PowerShell session.


Bash or PowerShell?

If you can remember when we first launched Azure Cloud Shell we had to select Bash or PowerShell as our environment of choice. The question is, which should you choose?

The real answer to this question is that it doesn’t really matter, and is simply down to your preference – especially since you can easily switch between the two. However, I would probably argue (especially for Linux fanboys like myself) that Bash (and therefore Azure CLI via Cloud Shell) is probably easier and more intuitive, and you could always directly enter a PowerShell Core for Linux session using the pwsh command in Bash (see also my previous post) if you wanted.

Whichever way you enter a PowerShell Cloud Shell session, the Az module cmdlets are then directly available to you and you do not require any further configuration of your environment. I have specifically found that PowerShell does seem to favour more scripted, hardcore Azure deployments, since you can very easily use its OOP potential – such as assigning a resource object to a variable and accessing its properties/methods/collections programmatically through the object.
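
As a quick illustration of that object-oriented style (the resource group and VM names below are just examples):

# Grab a VM as an object and drill into its properties programmatically (example names)
$vm = Get-AzVM -ResourceGroupName "RG_CoreInfrastructure" -Name "vm-acme-01"
$vm.HardwareProfile.VmSize                  # e.g. Standard_D2s_v3
$vm.StorageProfile.OsDisk.OsType            # e.g. Linux
$vm.NetworkProfile.NetworkInterfaces.Count  # how many NICs are attached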

Basically, have a play and see what works for you.

Pre-Installed Software

The Azure Cloud Shell thankfully is also deployed with many pre-installed tools and runtimes. This is possibly a subject for another day, but just know that the following are available for your use all within your browser session(!!!):

Development Runtimes available include PowerShell, .NET Core, Python, Java, Node.js, and Go.

Editing tools installed include code (Visual Studio Code), vim, nano, and emacs.

Other tools you can use are git, maven, make, npm, and many more.

I fully expect these lists to consistently get bigger over time.

Summary

I hope you have found this post very useful and if you haven’t already done so please start testing Azure Cloud Shell now! It is the perfect place to quickly configure Azure using Azure CLI or the Azure PowerShell module (through scripting or simple commands) through your browser so that you don’t need to install those runtimes on your local machine.

Enjoy!

AzureRM, Azure CLI and the PowerShell Az Module

There is now a variety of Microsoft-provided command-line tools available to connect to (and manage) Azure resources, and the situation is quite confusing to newcomers or those individuals who have not kept up to date with new developments. This post is designed to rectify this situation.

It is probably also worth me mentioning that I am focusing on the usage and deployment of these tools with Linux in mind, however, the following explanation is also applicable to Windows and macOS environments.


PowerShell Core (on Linux)

If you are running a Linux machine, to use AzureRM or the new PowerShell Az module you will first need to install PowerShell Core for Linux. Installation is fairly easy to perform and you can follow this post using the relevant section for your particular distribution and release. In my specific case, I am running Linux Mint 18.1 Serena; however, to get my Ubuntu version I first run the following:

more /etc/os-release

This returns:

NAME=\"Linux Mint"
VERSION="18.1 (Serena)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 18.1"
VERSION_ID="18.1"
HOME_URL="http://www.linuxmint.com/"
SUPPORT_URL="http://forums.linuxmint.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/linuxmint/"
VERSION_CODENAME=serena
UBUNTU_CODENAME=xenial

As you can see, the UBUNTU_CODENAME is xenial. So if I visit the Ubuntu list of releases page, I can see that xenial equates to version 16.04.x LTS. This means that (for me at least) I can follow the PowerShell on Linux installation section titled Ubuntu 16.04 (instructions provided below simply as an example):

# Download the Microsoft repository GPG keys
wget -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb
# Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb
# Update the list of products
sudo apt-get update
# Install PowerShell
sudo apt-get install -y powershell

To enter a PowerShell session in Linux, you simply type the following within a bash prompt (in Windows you instead use the powershell executable):

pwsh


AzureRM module (aka Azure PowerShell)

Before you read further, you should understand that AzureRM is deprecated, and if you are currently using it to manage Azure resources, then you should seriously consider migrating to one of the other options described in the next sections (and removing AzureRM from your system).

I first came across the AzureRM PowerShell Module many years ago when I wanted to manage Azure Resources from my Windows laptop through PowerShell. At the time, this was the only way of doing so from the command line, and the functionality of AzureRM was provided (in Windows at least) by a downloadable installer that made the module available to PowerShell. You can check out the latest version and instructions for AzureRM by visiting this link, but as mentioned earlier, you should avoid attempting this and instead use the Azure CLI or PowerShell Az module as described in the next sections. The last time I tried, attempts to install AzureRM via PowerShell Core on Linux resulted in failure, with warning messages pointing to the PowerShell Az module, so you are forced to go to the newer options anyway.

Upon import into your PowerShell environment, the AzureRM module provided up to 134 cmdlets (in version 2) so that you could manage all your Azure subscription resources via PowerShell.
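
If you do still have AzureRM installed somewhere and want it gone before moving on, a minimal clean-up using PowerShellGet (assuming the module was originally installed with Install-Module, and run from an elevated prompt) would be:

# Remove the deprecated AzureRM rollup module; any AzureRM.* component modules may need removing separately
Uninstall-Module -Name AzureRM -AllVersions -Force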


Azure CLI

The Azure CLI was intended as the de facto cross-platform command-line tool for managing Azure resources. Version 1 was originally conceived and written using node.js, and offered the ability to manage Azure resources from Linux and macOS, as well as from Windows (prior to PowerShell Core being available on macOS and Linux). Version 2 of the Azure CLI was re-written in Python for better platform compatibility, and as such there is now not always direct one-to-one compatibility between commands across those versions. Obviously, you should use version 2 where possible.

Head over to Microsoft docs article titled Install the Azure CLI for instructions on installing the CLI for your operating system of choice, or you might also be interested in how to do so on my personal Linux distribution of choice (Linux Mint) in my post titled Installing Azure CLI on Linux Mint. Installing the Azure CLI will enable the ability to use the az command syntax from within your Windows command prompt, or via your PowerShell prompt, or through bash (if you are specifically using Linux).

Once installed, to use the Azure CLI you will prefix any instruction with the az command. Thankfully there is strong contextual help so you can simply run az for a full list of available subcommands, or provide a specific subcommand for help on that.

To execute an az command, you will first need to login to your Microsoft account (that you use to access your Azure subscription/s) as follows:

az login

This will return the following message:

Note, we have launched a browser for you to login.
For old experience with device code,
use "az login --use-device-code"

A browser window will be automatically launched for Azure and require you to log in with your credentials. Alternatively (as you can see in the message), you can use the old authentication method (which is especially useful if your machine does not have a browser). In this case, you would run the following command:

az login --use-device-code

Then log into your account from a browser entering the device code provided.

Either way, after you have done this, we can issue our az commands against our Azure resources, for example:

az vm list

Would list out all current Azure IaaS VMs in your subscription.
Another useful tip with az is to use the find subcommand to search through command documentation for a specific phrase. For example if I want to search for Cosmos DB related commands I would use the following:

az find --search-query cosmos

Returns the following list of commands:

`az cosmosdb database show`
    Shows an Azure Cosmos DB database

`az cosmosdb database create`
    Creates an Azure Cosmos DB database

`az cosmosdb database delete`
    Deletes an Azure Cosmos DB database

`az cosmosdb collection create`
    Creates an Azure Cosmos DB collection

`az cosmosdb collection delete`
    Deletes an Azure Cosmos DB collection

`az cosmosdb collection update`
    Updates an Azure Cosmos DB collection

`az cosmosdb database`
    Manage Azure Cosmos DB databases.

`az cosmosdb collection`
    Manage Azure Cosmos DB collections.

`az cosmosdb delete`
    Deletes an Azure Cosmos DB database account.

`az cosmosdb update`
    Update an Azure Cosmos DB database account.

Az PowerShell module

You might already have wondered: if PowerShell already had a way to manage Azure resources (through the AzureRM module) and we now have the Azure CLI (providing the cross-platform az functionality), how could there be any synergy between those two environments?

The answer is that there isn't. This is one reason why AzureRM is deprecated.

With the arrival of PowerShell Core on Linux and macOS, it became possible to import the AzureRM module also into those environments, and yet as we have already found out, the Azure CLI was the newer mechanism to manage Azure resources. The problem is that both used different commands to do so (however subtle). In December 2018, Microsoft addressed this situation by introducing the new PowerShell Az module to replace AzureRM which gave a level of synergy between managing Azure resources through the Azure CLI and managing resources through PowerShell. This really means that if you understand one command line environment, your scripts will be relatively easily transferable into the other.

If you are looking to migrate from AzureRM to the new Azure Az module then you should check out this excellent post by Martin Brandl. It is also worth you checking out this announcement from Microsoft titled Introducing the new Azure PowerShell Az module.

To install the PowerShell Az module you can follow the Microsoft article titled Install the Azure PowerShell module. As you will read, you must install this module in an elevated prompt, otherwise, the installation will fail. On PowerShell Core for Linux this means that from bash you would first elevate your PowerShell session as follows:

sudo pwsh

Then you can run the following PowerShell Install-Module command:

Install-Module -Name Az -AllowClobber

Once installed and imported, you are now able to utilize this new PowerShell Az module, but you must first log in to your Azure account using the following PowerShell command:

Connect-AzAccount

This will return a message similar to the one below:

WARNING: To sign in, use a web browser to open
the page https://microsoft.com/devicelogin and
enter the code D8ZYJCM2V to authenticate.

Once you have performed this action, your Az module in PowerShell will be ready to use. For instance, we can now list IaaS VMs in PowerShell as follows:

Get-AzVM

As you may note, there is a correlation between this command and the Azure CLI command (az vm list) that we ran earlier. However, it is important to realize that the functionality and behavior of the PowerShell Az module and the Azure CLI are not identical. For instance, in this specific case, the Azure CLI returns a verbose result set in JSON format by default, whereas the PowerShell Az module returns the results in a shortened tabular format.


Summary

Hopefully, it is clear by now that the AzureRM module is redundant across all environments and if you need the ability to manage resources in Azure through PowerShell then the Az module is what you should be using. However, given the ease of multi-platform deployment and use of the Azure CLI, it is probably something you should not ignore and arguably might prefer over PowerShell Az (or at the very least run both alongside each other). For example, at the time of writing, exporting an Azure resource template to PowerShell results in code that uses AzureRM rather than PowerShell Az, whereas exporting to CLI uses (of course) the Azure CLI itself.

There is also an argument that the Azure CLI is far better suited to Azure automated deployments over ARM templates due to its brevity, and this is discussed in detail by Pascal Naber in his excellent post titled Stop using ARM templates! Use the Azure CLI instead.

Whether you eventually decide to favor Azure CLI over PowerShell Az (or use both), I sincerely hope this post has helped clear up any confusion you may have between all the available command line options to manage Azure resources -I know it had confused me!

Custom Job scheduling, remote queries, and avoiding false negatives using PowerShell and SQL Agent

Say what you want about the SQL Server Agent, it is the lifeblood of SQL Server for scheduling maintenance routines, data loading, and other such ongoing operations. It has been around since the dawn of time and has grown into a rich (but simple) job execution scheduler, yet at the same time, still lacks certain capabilities that mildly annoy me.

Scheduling exclusions

One such annoyance is the inability to create intelligent exclusions for job schedules. In other words, the “Don’t run this job if” scenarios that you might commonly run into at retailers, in the education or health sectors, at financial institutions, and in other environments that may have time-sensitive or complex execution logic.

Now I know what you are thinking. You are thinking that you have created plenty of job schedules in the past and managed to add exceptions to the schedule, and with this, you are partially correct.

In the SQL Agent Job Schedule dialog above, we have a weekly schedule that is set to run every Monday to Friday, running only twice a day, starting at 00:00:00. By very definition, this schedule will not run on Saturday or Sunday (and therefore we have an exception), but we are constrained within the bounds of the dialog interface and available fields and logic.

Complex scheduling logic requirements

Consider an environment that consumes lots of ETL feeds from third parties on a (working) daily basis. The feeds, however, are not created during holidays, meaning that the dependent job/s would fail if they ran on those days. There are three obvious ways to work around this situation:

  1. Allow failures to occur (rolling back changes on failure)
  2. Embed feed existence logic *1 (and/or exception logic) within the code/ package (to prevent the package execution from failing)
  3. Prevent job execution on-condition within a job step

Option 1 (quite clearly) is not a great solution and would involve false negative reporting (in systems such as SCOM since the error would be just that) and require human interaction to review. Jobs would fail, but the failure would only be a result of missing feeds and nothing more. Recovery in this instance would (hopefully!) only require allowing the job to run on schedule once the feeds are delivered the following day.

Option 2 would address the problem of failure in this instance but does not implement schedule exception logic. In other words, the job (and its package) would execute successfully, but this in itself creates false success reporting (the package did not perform its usual workload). Implementing schedule exception logic inside the code/package is also far from ideal since its function is easily lost in the logic of the actual job step and maintaining and building upon this logic could become unwieldy over time.

Option 3 is perhaps the best choice and provides isolation between the job schedule logic and the job function itself (because requirements can change) and improved visibility (feedback) of why the job did not run – but we still need to understand how to implement this for situations that are outside the capability of the Job Schedule dialog.

*1 Whilst it is fair to argue that existence logic is not the solution to our problem, it should be obvious to the reader that creating packages that gracefully handle failure is a given. In reality, all package dependencies should be tested for, and reported on error -and I tend to favor the “bubbling up approach” to failure.

Remote queries for exclusion lookups

In this specific scenario, we need to implement our custom logic to perform our holiday test and raise an error if the current day is one (we will address the handling of this error shortly). In our environment, we maintain a centralized calendar (in a table of course!) of all important dates and public holidays up to five years in advance, and this makes for an excellent reference point for our test query. We want to store this table on a different (but highly available) SQL instance so that it can be used by the Agent jobs of every SQL instance, but its location introduces yet another challenge to our problem – how do we query this remote data from our Agent job?
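
The real schema of that calendar table is not shown in this post, but purely for illustration a minimal, hypothetical shape could be created remotely in the same PowerShell style used later on (the database, table, and column names simply mirror those referenced below; your real table will carry more columns):

# Hypothetical minimal shape for the central calendar table (illustration only)
$serverinstance = "server1"
$createTable = @"
CREATE TABLE dbo.FullCalendar
(
    DateKey   DATE NOT NULL PRIMARY KEY,
    IsHoliday BIT  NOT NULL DEFAULT (0)
);
"@
Invoke-Sqlcmd -Query $createTable -ServerInstance $serverinstance -Database "DBAAdmin"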

If you automatically jumped to the conclusion that we should use a linked server, then I can tell you are a bit of a masochist. One of the (several) problems with this approach is that we would have to create a linked server to our calendar data instance within every instance that needs to reference it. In an enterprise environment, this could be a problem to deploy and maintain.

SQL Server 2008 R2 introduced PowerShell Agent job steps, and this gives us a very easy mechanism to programmatically run our remote query without a linked server.

In the following piece of code, we are simply querying our remote server hosting the calendar table ([DBAAdmin].[dbo].[FullCalendar]) and searching for today’s date.

$serverinstance = "server1"
$sqlCommandText = @"
DECLARE @testdate DATE = GETDATE()
SELECT IsHoliday
FROM [DBAAdmin].[dbo].[FullCalendar]
WHERE DateKey=@testdate
"@

if ((Invoke-Sqlcmd $sqlCommandText -ServerInstance $serverinstance).IsHoliday) {
throw "Current date is a Holiday"
}

If today’s date is classed in the table as a holiday (see the evaluation against the returned dataset’s IsHoliday property), then an exception will be thrown and the step will be failed.

On step failure success

The only problem with failing the job when today *is* a holiday is that we are essentially raising a false negative, which is not much better than just allowing a job to fail because a feed wasn’t delivered for that day. In the ideal world, we need the job to quit, but terminate in a way that is uniquely different from a “success” (i.e. today is not a holiday and the job ran) or a “failure” (i.e. the job has failed for some other reason that needs investigation).

The trick here is to change the job step On failure action to “Quit the job reporting success” via the Job Step/ Advanced Properties as follows:

So your job steps are like this:

By doing so, your skip logic will result in the job step failure showing the job execution as an alert (rather than a failure). Only the step itself will show as a failure – which, I think you will agree, is pretty cool! This means that not only do you have a visual indicator that the job was canceled by design, but you should also be able to programmatically exclude these alerts in SCOM.
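
If you prefer to script this setting rather than click through the dialog, the same "Quit the job reporting success" behaviour can be applied through msdb. The sketch below (the job name is an example, and it assumes the holiday-check logic is step 1) uses the documented sp_update_jobstep procedure, run here via PowerShell to stay consistent with the rest of the post:

# Set the job step's On failure action to "Quit the job reporting success" (@on_fail_action = 1)
$serverinstance = "server1"
$sql = @"
EXEC msdb.dbo.sp_update_jobstep
    @job_name       = N'Daily ETL Load',  -- example job name
    @step_id        = 1,                  -- the holiday-check step
    @on_fail_action = 1;                  -- 1 = quit the job reporting success
"@
Invoke-Sqlcmd -Query $sql -ServerInstance $serverinstance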

Summary

In this post, we have done several things in order to deliver a richer scheduling solution for our agent jobs. These are as follows:

  1. We have prevented our scheduling custom logic job step from causing false negatives by setting its “On Failure: Quit the job reporting success” action, allowing the job to report only an alert and failing only the job step.
  2. We have created a centralized table to store custom metadata to allow all SQL instances to query it for scheduling logic scale-out (where you might need to!).
  3. We have utilized PowerShell to query our remote metadata in order to avoid the need to create (and maintain) instance-specific linked servers. This is clearly a good thing for the deployment of standardized scripts.

I hope you can see from this very simple example that not only can we now implement very rich custom scheduling solutions for our jobs, but we can also use some of these techniques to avoid the unnecessary use of linked servers and other things. Enjoy!