Developing Azure Modular ARM Templates
Cloud architectures are nearly ubiquitous. Managers are letting go of their FUD and embracing a secure model that can extend their reach globally. IT folks, who don't lose any sleep over the fact that their company's finance data is on the same physical wire as their public data (because the data is separated by VLANs), are realizing that VNets on Azure work on the same principle. Developers are embracing a cross-platform utopia where Python and .NET can live together as citizens in a harmonious cloud solution where everyone realizes that stacks don't actually exist. OK... maybe I'm dreaming about that last one, but the cloud is widely used.
With Azure 2.0 (aka Azure ARM), we finally have a declarative model for managing our resources (databases, storage accounts, network cards, VMs, load balancers, etc.): we throw nouns at Azure and let it verb them into existence.
JSON templates give us a beautiful, 100% GUI-free environment that restores the sanity stolen from us by years of dreadful button clicking. Yet, there's gotta be a better way of dealing with our ARM templates than scrolling up and down all the time. Well, there is... what follows is my proposal for a modular ARM template architecture.
Below is a link to a template that defines all kinds of awesome:
Take this magical spell and throw it at Azure and you'll get a full infrastructure of many Elasticsearch nodes, all talking to each other, each with its own endpoint, and a traffic manager to unify the endpoints to make sure everyone in the US gets a fast search connection. There are also multiple VNets, a mesh VPN, the administrative VM, and all that stuff.
Yet, this isn't even remotely how I work with my templates. This is:
Synopsis
Before moving on, note that there are a lot of related concepts going on here. It's important that I give you a quick synopsis of what follows:
- Modularly splitting ARM templates into manageable, mergeable, reusable JSON files
- Deploying ARM templates in phases
- A proposal for symlinking to reusable architectures
- Recording production deployments
- Managing deployment arguments
- Automating support files
Let's dive in...
Modular Resources
Notice that the above screenshot does not show a monolith. Instead, I manage individual resources, not the entire template at once. This lets me find, add, remove, enable, disable, and merge things quickly.
Note that each folder path represents "resource provider/resource type/resource.json". The root is where you would put the optional variables.json, parameters.json, and outputs.json sections. In this example, I have a PS1 file there just because it supports this particular template.
My deployment PowerShell script combines the appropriate JSON files together to create the final azuredeploy-generated.json file.
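The merge itself is conceptually simple. Here's a minimal Python sketch (my actual script is in PowerShell; the function name and folder layout here are hypothetical) that walks a resource folder, skips disabled (underscore-prefixed) files, and assembles the final template object:

```python
import json
from pathlib import Path

def merge_template(resources_dir):
    """Merge individual resource JSON files into a single ARM template.

    Files whose names start with an underscore are treated as disabled
    and excluded from the merged output.
    """
    resources = []
    for path in sorted(Path(resources_dir).rglob("*.json")):
        if path.name.startswith("_"):  # underscore = commented-out resource
            continue
        with open(path) as f:
            resources.append(json.load(f))
    return {
        "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": resources,
    }
```

The result can then be dumped with `json.dumps(..., indent=2)` to produce the azuredeploy-generated.json equivalent.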
I originally started with Grunt to handle the merging. grunt-contrib-concat + grunt-json-format worked for a while, but my Gruntfile.js became rather long, and the entire process was wildly unreliable anyway. Besides, it was just one extra moving part that I didn't need. I was already deploying with PowerShell, so I might as well just do that...
You can get my PowerShell Azure modular JSON magical script at the end of this article.
There's a lot to discuss here, but let's review some core benefits...
Core Benefits
Aside from the obvious benefit of modularity to help you sleep at night, there are at least two other core benefits:
First is the ability to add and remove resources via files, but a much greater benefit is the ability to enable or disable resources. In my merge script, I exclude any file that starts with an underscore. This acts as a simple way to comment out a resource.
Second is the ability to version and merge individual resources in Git (I'm assuming you're living in 2016 or beyond and are using Git, not that old subversive version control thing or Terrible Foundation Server). The ability to diff and merge individual resources, not entire JSON monoliths, is great.
Phased Deployment
When something is refactored, fringe benefits often appear naturally. In this case, modular JSON resources allow for programmatically enabling and disabling resources. More specifically, I'd like to mention a concept I integrate into my deployment model: phased deployment.
When deploying a series of VMs and VNets, it's important to make sure your dependencies are set up correctly. That's fairly simple: just make sure dependsOn is set up right in each resource. Azure will take that information into account to decide what to deploy in parallel.
That's epic, but I don't really want to wait around forever if part of my dependency tree is a network gateway. Those things take forever to deploy. Not only that, but I have some phases that are simply done in PowerShell.
Go back and look at the screenshot we started with. Notice that some of the resources start with 1., 2., etc. Starting a JSON resource filename with "#." states the phase in which that resource will deploy. In my deployment script, I state what phase I'm currently deploying. I might specify that I only want to deploy phase 1; this deploys everything up to and including phase 1. If I like what I see, I'll deploy phase 2.
In my example, phase 2 is my network gateway phase. After I've aged a bit, I'll come back to run some PowerShell to create a VPN mesh (not something I'd try to declare in JSON). Then, I'll deploy phase 3 to set up my VMs.
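To sketch the phase-selection idea (in Python rather than my PowerShell, with hypothetical helper names): parse the numeric prefix from each filename and filter. I'm assuming here that a resource deploys when its phase number is at or below the phase being deployed, and that unprefixed resources belong to phase 1:

```python
import re
from pathlib import Path

def resource_phase(filename):
    """Extract the deployment phase from an 'N.' filename prefix.

    Assumption: resources without a numeric prefix default to phase 1.
    """
    m = re.match(r"^(\d+)\.", filename)
    return int(m.group(1)) if m else 1

def resources_for_phase(resources_dir, phase):
    """Select enabled resource files belonging to the given phase or earlier."""
    return [
        p for p in sorted(Path(resources_dir).rglob("*.json"))
        if not p.name.startswith("_")      # skip disabled resources
        and resource_phase(p.name) <= phase
    ]
```

So deploying phase 1 merges only the phase-1 resources; deploying phase 3 merges everything.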
Crazy SymLink Idea
This section acts more as an extended sidebar than part of the main idea.
Most benefits of this modular approach are obvious. What might not be obvious is the following:
You can symlink to files for reuse. For any local Hyper-V Windows VM I spin up, I usually have a Linux VM to go along with it. For my day-to-day stuff, I have a Linux VM that I use for general development and never turn off. I keep all my templates/Git repos on it.
On any *nix-based system, you can create symbolic links to expose the same file under multiple file names (similar to how myriad Git filenames can point to the same blob via a common SHA-1 hash).
Don't drift off simply because you think it's some crazy fringe idea.
For this discussion, this can mean the following:
./storage/storageAccounts/storage-copyIndex.json
./network/publicIPAddresses/pip-copyIndex.json
./network/networkInterfaces/nic-copyIndex.json
./network/networkSecurityGroups/nsg-copyIndex.json
./network/virtualNetworks/vnet-copyIndex.json
These resources could be some epic, pristine awesomeness that you want to reuse somewhere. Now use the following Bash script:
#!/bin/bash
if [ -z "$1" ]; then
echo "usage: link_common.sh type"
exit 1
fi
TYPE=$1
mkdir -p `pwd`/$TYPE/template/resources/storage/storageAccounts
mkdir -p `pwd`/$TYPE/template/resources/network/{publicIPAddresses,networkInterfaces,networkSecurityGroups,virtualNetworks}
ln -sf `pwd`/_common/storage/storageAccounts/storage-copyIndex.json `pwd`/$TYPE/template/resources/storage/storageAccounts/storage-copyIndex.json
ln -sf `pwd`/_common/network/publicIPAddresses/pip-copyIndex.json `pwd`/$TYPE/template/resources/network/publicIPAddresses/pip-copyIndex.json
ln -sf `pwd`/_common/network/networkInterfaces/nic-copyIndex.json `pwd`/$TYPE/template/resources/network/networkInterfaces/nic-copyIndex.json
ln -sf `pwd`/_common/network/networkSecurityGroups/nsg-copyIndex.json `pwd`/$TYPE/template/resources/network/networkSecurityGroups/nsg-copyIndex.json
ln -sf `pwd`/_common/network/virtualNetworks/vnet-copyIndex.json `pwd`/$TYPE/template/resources/network/virtualNetworks/vnet-copyIndex.json
Run this:
chmod +x ./link_common.sh
./link_common.sh myimpressivearchitecture
This won't create duplicate files, but it will create files that point to the same content. Change one => change all.
Doing this, you might want to make the source-of-truth files read-only. There are a few ways to do this, but the simplest is to give root ownership of the common stuff, then give yourself file-read and directory-list rights.
sudo chown -R root:$USER _common
sudo chmod -R 755 _common
LINUX NOTE: directory-list rights are set with the directory execute bit
If you need to edit something, you'll have to do it as root (e.g. sudo). This will protect you from doing stupid stuff.
Linux symlinks look like normal files and folders to Windows. There's nothing to worry about there.
This symlinking concept will help you link to already established architectures. You can add/remove symlinks as you need to add/remove resources. This is an established practice in the Linux world. It's very common to create a folder for ./sites-available and ./sites-enabled. You never delete from ./sites-enabled, you simply create links to enable or disable.
Hmm, OK, yes, that is a crazy fringe idea. I don't even do it. Just something you can try on Linux, or on Windows with some sysinternals tools.
Deployment
When you're watching an introductory video or following a hello world example of ARM templates, throwing variables at a template is great, but I'd never do this in production.
In production, you're going to archive each script that is thrown at the server. You might even have a Git repo for each and every server. You're going to stamp everything with timestamps and archive everything you did together. Because this is how you work anyway, it's best to keep that as an axiom and let everything else mold to it.
To jump to the punchline: after I deploy a template twice (perhaps once with gateways disabled, and once with them enabled, to verify in phases), here's what my ./deploy folder looks like:
./09232016-072446.1/arguments-generated.json
./09232016-072446.1/azuredeploy-generated.json
./09232016-072446.1/success.txt
./09242016-051529.2/arguments-generated.json
./09242016-051529.2/azuredeploy-generated.json
./09242016-051529.2/success.txt
Each deployment archives the generated files with a timestamp. Not a whole lot to talk about there.
Let's back up a little bit and talk about dealing with arguments and that arguments-generated.json listed above.
If I'm doing phased deployment, the phase will be suffixed to the deploy folder name (e.g. 09242016-051529.1).
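The folder-naming convention is easy to sketch. Assuming the timestamps above are MMddyyyy-HHmmss with the phase as a suffix (the helper name is hypothetical; my real script builds this in PowerShell):

```python
from datetime import datetime

def deploy_folder_name(phase, now=None):
    """Build a deploy archive folder name: MMddyyyy-HHmmss.phase.

    `now` can be injected for testing; it defaults to the current time.
    """
    now = now or datetime.now()
    return now.strftime("%m%d%Y-%H%M%S") + "." + str(phase)
```

For example, a phase-2 deployment on September 24, 2016 at 05:15:29 yields "09242016-051529.2", matching the folder listing above.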
Deployment Arguments
Instead of setting up parameters in the traditional ARM manner, I opt to generate an arguments file. So, my model is to generate not only the azuredeploy.json but also the azuredeploy-parameters.json. Once these are generated, they can be stamped with a timestamp, then archived with the status.
Sure, zip them and throw them on a blob store if you want. Meh. I find it a bit overkill and old school. If anything, I'll throw my templates at my Elasticsearch cluster so I can view the archives that way.
While my azuredeploy-generated.json is generated from myriad JSON files, my arguments-generated.json is generated from my ./template/arguments.json file.
Here's my ./template/arguments.json file:
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "admin-username": {
            "value": "{{admin-username}}"
        },
        "script-base": {
            "value": "{{blobpath}}/"
        },
        "ssh-public-key": {
            "value": "{{ssh-public-key}}"
        }
    }
}
My deployment script will add in the variables to generate the final arguments file.
$arguments = @{
    "blobpath" = $blobPath
    "admin-username" = "dbetz"
    "ssh-public-key" = (cat $sshPublicKeyPath -raw)
}
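The substitution itself is just mustache-style token replacement. Here's a minimal Python sketch of the idea (the function name is hypothetical; my real script does this in PowerShell):

```python
import re

def render_arguments(template_text, arguments):
    """Replace {{name}} placeholders in an arguments template with real values.

    Raises KeyError on any placeholder without a matching argument, so a
    missing value fails the deployment early instead of reaching Azure.
    """
    def substitute(match):
        key = match.group(1)
        if key not in arguments:
            raise KeyError("missing argument: " + key)
        return str(arguments[key])
    return re.sub(r"\{\{([\w-]+)\}\}", substitute, template_text)
```

Failing fast on a missing placeholder is deliberate: a half-rendered arguments file with a literal "{{ssh-public-key}}" in it is much harder to debug after Azure rejects it.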
Aside from the benefits of automating the public key creation for Linux, there's that blobpath argument. That's important. In fact, dynamic arguments like this might not even make sense until you see my support file model.
Support Files
If you are going to upload assets/scripts/whatever to your server during deployment, you need to get them to a place where they are accessible. One way to do this is to commit to Git every 12 seconds. Another way is to simply use blob storage.
Here's the idea:
You have the following folder structure:
./template
./support
You saw ./template in VS Code above; in this example, ./support looks like this:
support/install.sh
support/create_data_generation_setup.sh
support/generate/hamlet.py
These are files that I need to get onto the server. Use Git if you want, but Azure can handle this directly:
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName $deploymentrg -Name $deploymentaccount)[0].value
$ctx = New-AzureStorageContext -StorageAccountName $deploymentaccount -StorageAccountKey $key
$blobPath = Join-Path $templatename $ts
$supportPath = (Join-Path $projectFolder "support")
(ls -File -Recurse $supportPath).foreach({
    $relativePath = $_.fullname.substring($supportPath.length + 1)
    $blob = Join-Path $blobPath $relativePath
    Write-Host "Uploading $blob"
    Set-AzureStorageBlobContent -File $_.fullname -Container 'support' -Blob $blob -BlobType Block -Context $ctx -Force > $null
})
This PowerShell code takes the files in my ./support folder and replicates the structure to blob storage.
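The key step is deriving each blob name from the file's path relative to the support folder. A Python sketch of that mapping (hypothetical function name; the real work happens in the PowerShell above):

```python
from pathlib import Path

def blob_names(support_dir, blob_prefix):
    """Map each local file under the support folder to a blob name that
    preserves the relative folder structure under the given prefix."""
    support = Path(support_dir)
    return {
        str(p): blob_prefix + "/" + p.relative_to(support).as_posix()
        for p in support.rglob("*") if p.is_file()
    }
```

With a prefix of "template-name/timestamp", a local support/generate/hamlet.py becomes template-name/timestamp/generate/hamlet.py in the container, which is exactly the layout shown in the blob listing below.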
You ask: "what blob storage?"
Response: I keep a resource group named deploy01 around with a storage account named file (suffixed with 8 random letters to make it unique). I reuse this account for all my Azure deployments. You might duplicate this per client. Upon deployment, blobs are loaded with a fully qualified path that includes the template I'm using and my deployment timestamp.
The result is that by the time the ARM template is thrown at Azure, the following URL has been generated and the files are in place to be used:
https://files0908bf7n.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-052804
For each deployment, I'm going to have a different set of files in blob storage.
In this case, the following blobs were uploaded:
elasticsearch-secure-nodes/09232016-072446/generate/hamlet.py
elasticsearch-secure-nodes/09232016-072446/install.sh
elasticsearch-secure-nodes/09232016-072446/create_data_generation_setup.sh
SECURITY NOTE: For anything sensitive, disable public access, create a SAS token policy, and use that policy to generate a SAS token URL. Give this a few hours to live so your entire template can successfully complete. Remember, gateways take a while to create. Once again: this is why I do phased deployments.
When the arguments-generated.json is used, the script-base parameter is populated like this:
"script-base": {
"value": "https://files0c0a8f6c.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-072446"
},
You can then use this parameter to do things like this in your VM extensions:
"fileUris": [
"[concat(parameters('script-base'), '/install.sh')]"
],
"commandToExecute": "[concat('sh install.sh ', length(variables('locations')), ' ''', parameters('script-base'), ''' ', variables('names')[copyindex()])]"
Notice that https://files0908bf7n.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-072446/install.sh is the script to be called, but https://files0908bf7n.blob.core.windows.net/support/elasticsearch-secure-nodes/09232016-072446 is also sent in as a parameter. This tells the script itself where to pull the other files. Actually, in this case, that endpoint is passed a few levels deep.
In my script, when I'm doing phased deployment, I can set uploadSupportFilesAtPhase to whatever phase I want to upload support files in. I generally don't do this at phase 1 because, for me, that phase is everything up to the VM or gateway. The support files are for the VMs, so there's no need to play around with them while doing idempotent updates to phase 1.
Visual Studio Code
I use a lot of different editors. Yeah, sure, there's Visual Studio, whatever. For me, it's .NET only. It's far too bulky for most anything else. For ARM templates, it's absolutely terrible. I feel like I'm playing with VB6 with its GUI-driven resource seeking.
While I use EditPlus or Notepad2 (Scintilla) for most everything, this specific scenario calls for Visual Studio Code (built on Electron). It allows you to open a folder directly without the need for pointless SLN files and lets you view the entire hierarchy at once. It also lets you quickly CTRL-C/CTRL-V a JSON file to create a new one (File->New can die). F2 also works for rename. Not much else you need in life.
Splitting a Monolith
Going from an existing monolithic template is simple. Just write a quick tool to open the JSON and dump it into various files. Below is a subpar script I wrote in PowerShell to make this happen:
$templateBase = '\\10.1.40.1\dbetz\azure\armtemplates'
$template = 'python-uwsgi-nginx'
$templateFile = Join-Path $templateBase "$template\azuredeploy.json"
$json = cat $templateFile -raw
$partFolder = 'E:\Drive\Code\Azure\Templates\_parts'
$counters = @{ "type" = 0 }
((ConvertFrom-Json $json).resources).foreach({
    $index = $_.type.indexof('/')
    $resourceProvider = $_.type.substring(0, $index).split('.')[1].tolower()
    $resourceType = $_.type.substring($index + 1, $_.type.length - $index - 1)
    $folder = Join-Path $partFolder $resourceProvider
    if(!(Test-Path $folder)) {
        mkdir $folder > $null
    }
    $netResourceType = $resourceType
    while($resourceType.contains('/')) {
        $index = $resourceType.indexof('/')
        $parentResourceType = $resourceType.substring(0, $index)
        $resourceType = $resourceType.substring($index + 1, $resourceType.length - $index - 1)
        $netResourceType = $resourceType
        $folder = Join-Path $folder $parentResourceType
        if(!(Test-Path $folder)) {
            mkdir $folder > $null
        }
    }
    $folder = Join-Path $folder $netResourceType
    if(!(Test-Path $folder)) {
        mkdir $folder > $null
    }
    $counters[$_.type] = $counters[$_.type] + 1
    $file = $folder + "\" + $netResourceType + $counters[$_.type] + '.json'
    Write-Host "saving to $file"
    (ConvertTo-Json -Depth 100 $_ -Verbose).Replace('\u0027', '''') | sc $file
})
Here's a Python tool I wrote that does the same thing, but the JSON formatting is much better: armtemplatesplit.py
This is compatible with Python 3 and legacy Python (2.7+).
Deploy script
Here's my current armdeploy.ps1 deploy script: