Categories
Azure

Creating your first Azure Resource Manager (ARM) template

If you’re manually creating infrastructure for your next app in Azure, you should consider using an Azure Resource Manager (ARM) template.

An ARM template is essentially a JSON file that describes the infrastructure you want to create in the Azure cloud. It can be run as many times as you like to spin up identically-configured environments.

Create the template

The first step when using an ARM template is to create the template file. If you'd rather start with an example template, Microsoft has an entire GitHub repo with templates that you can clone.

The base template – normally called azuredeploy.json – is made up of the following structure:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": []
}

The elements can be described as follows:

  • $schema: Refers to the properties that can be used within the template. If you were to load the URL in your web browser, it would return the elements and a description of each that you can use within your template. When uploading a template, these get validated to ensure you haven’t made any mistakes.
  • contentVersion: This is an internal version number for you to use to reference the current template. It should be in the format X.X.X.X, with X being any integer. You should only change this value in your template when a significant change is made.
  • resources: This is an array that will contain a description of all the Azure resources that your app or service needs. For now it’s empty.

Add resources to the template

As mentioned above, within the resources section of the template you need to describe the Azure services that you wish to provision. Each Azure service has a set of properties that can be set, some of which are mandatory. By default, a resource requires:

  • type: A string made up of the namespace of the resource provider and the name of the resource type that you wish to provision. For example, Microsoft.DocumentDB/databaseAccounts means you want to create a database account from the Microsoft.DocumentDB namespace
  • apiVersion: Similar to the template $schema version, each Azure resource type also publishes versions of its schema. This mandatory property allows you to specify which version of the resource type's schema you'd like to use
  • name: The human-readable string name that you’d like the resource to be called

While not mandatory, a location element is normally provided as well to specify the location where you want the resource to reside (eg. Australia East).

Luckily, Microsoft publishes a full list of properties for each resource type. But if you're still not sure, for most resources you can create the resource manually in the Azure Portal, go to its "Export Template" tab, and Microsoft will generate a template for you.
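If you prefer scripting this step, the Azure CLI can also export a template for an existing resource group – a quick sketch, assuming a placeholder group name of myResourceGroup:

# Export an ARM template describing everything currently in the resource group
az group export --name myResourceGroup > exported-template.json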

For this tutorial, let’s create a simple functions app. Add the following to your resources section in your azuredeploy.json file:

   "resources": [
        {
            "type": "Microsoft.Web/sites",
            "apiVersion": "2018-02-01",
            "name": "[parameters('siteName')]",
            "kind": "functionapp,linux",
            "location": "[parameters('location')]",
            "dependsOn": [
                "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
            ],
            "properties": {
                "name": "[parameters('siteName')]",
                "siteConfig": {
                     "appSettings": [
                        {
                            "name": "FUNCTIONS_WORKER_RUNTIME",
                            "value": "python"
                        },
                        {
                            "name": "FUNCTIONS_EXTENSION_VERSION",
                            "value": "~2"
                        }
                    ]
                },
                "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
                "clientAffinityEnabled": false
            }
        },
        {
            "type": "Microsoft.Web/serverfarms",
            "apiVersion": "2018-02-01",
            "name": "[variables('hostingPlanName')]",
            "location": "[parameters('location')]",
            "kind": "linux",
            "properties": {
                "reserved": true
            },
            "sku": {
                "tier": "Dynamic",
                "name": "Y1"
            }
        }
    ]

The above creates a Linux App Service hosting plan using the consumption function tier (Y1). Note that the reserved flag set to true is what actually marks the plan as a Linux plan.

The template also creates a function app (Microsoft.Web/sites) that dependsOn the hosting plan.

If you look closely, you might notice that some elements refer to variables and parameters. Let’s dive deeper into what they are.

What are parameters?

Parameters allow you to specify a value each time you deploy a template. For example, if you had a template and wanted to create a production and a staging environment with it, you could create an environment parameter that would allow you to specify staging or production in resource names without modifying the template file each time.

If you didn’t use a parameter, you’d need to change the hard-coded string value in your azuredeploy.json file each time you wanted to change to a new environment.

Similarly if you wanted to be able to deploy your template to a different Azure location quickly, you could specify a location parameter. Then you could deploy to any Azure region by simply providing a new location parameter value – with no change to the template file required.

Within the template file, parameters sit in a top-level parameters element as follows:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "parameters": {
    "hostingPlanName": {
      "type": "string",
      "defaultValue": "",
      "metadata": {
        "description": "The name to give your hosting plan"
      }
    }
  }
}

Parameters support a number of elements, but the most common include:

  • type: The type of value provided (eg. string)
  • defaultValue: A default value to use if one isn’t provided when the template is deployed
  • metadata.description: A description of what the parameter represents

Parameters can be set when deploying using the Azure CLI or PowerShell. Here's an example of how you would provide an environment parameter using the Azure CLI:

az group deployment create \
  --name mytemplatename \
  --resource-group yourResourceGroup \
  --template-file $templateFile \
  --parameters environment=staging

Referencing the parameter is done using the following syntax – you can see a full example in the template we defined earlier with resources:

"location": "[parameters('location')]"

What are variables?

While parameters allow you to specify values when deploying a template, variables allow you to reuse values internally within your template file without duplication. For instance, if you had a value used by three different resources (such as a location, or the hostingPlanName in the example above) that you didn't want to expose to those running your template, you could use a variable.

Like parameters, variables are also top-level elements. They’re simpler to specify, as you don’t need to provide descriptions, types and default values. They look like this:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "variables": {
     "hostingPlanName": "yourHostingPlanName"
  }
}

You can then reference the variable within your resource definitions using the following syntax as seen in the resources section in the template we created earlier:

"name": "[variables('hostingPlanName')]"

Summary

In this post, you’ve learnt about how a template is structured, and what each element means. Stay tuned for our next post on deploying your template.

Categories
Azure

Using Cloudflare with Azure Blob Storage

If you're storing files in Azure Storage, you'll likely find that bandwidth soon becomes one of the more expensive items on your Azure bill. By using a content delivery network (CDN), you can improve performance for those accessing your files from around the world, while also reducing the bandwidth costs incurred on Azure.

Microsoft offers its own Azure CDN solution (with Microsoft, Akamai and Verizon tiers available). However, for the purposes of this tutorial we'll be using Cloudflare – a third-party vendor who offers a free tier that should meet most people's needs. Cloudflare can cache files based on the caching headers you apply when you upload files to blob storage, or you can apply default caching values. As a side note, Cloudflare also offers other services such as DDoS protection and edge workers.

Note that you'll need to have your own domain name for this to work – you need to associate a custom domain name with the blob storage account. To use Cloudflare services, you also need to point your domain to Cloudflare's name servers.

Set up Cloudflare
If you have a Cloudflare account already, you can skip this section. Otherwise, create an account at Cloudflare. The free account will be fine for this tutorial.

Once your account has been created, click "Add a site" and follow the instructions to verify and add your domain to Cloudflare. If you want to use a subdomain (eg. cdn.example.com) for the Blob Storage custom domain, the site you add here should be the root domain (eg. example.com).

This will also require changing your domain’s nameservers, so you’ll require admin permissions with your DNS registrar.

Getting started
If you haven’t already done so, create an Azure Storage resource by clicking “Create a resource” in the Azure Portal. Search for “storage account”, and choose the Microsoft service by clicking “Create”. Fill in the settings as required for your storage account.

Once your account has been provisioned, head to the "Storage accounts" tab and choose the account you want to set up.

Create a custom domain
The first step in setting up Cloudflare with Blob Storage is to map it to a custom domain name that you own (such as example.com). In the Azure Portal for your storage account, go to “Custom domain” under the “Blob service” tab.

You should see some Microsoft instructions about configuring a CNAME record with your DNS provider, which should now be Cloudflare. Follow these steps first, by logging into your Cloudflare account and adding the CNAME record to validate your ownership of the domain with Microsoft. Note that this will be the URL that people will see and use when accessing your content, so CNAMEs such as cdn.example.com or static.example.com are commonly used.

In Cloudflare, make sure the proxy status for your domain or subdomain is set to DNS only. If proxied is enabled, verification with Microsoft will fail.

Once you've added the required record to your domain name, enter the URL in the text field below in the Azure Portal. Click "Save", and if the DNS changes have propagated (this can take up to 24-48 hours depending on your provider and configuration) then the custom domain should be added successfully. Otherwise, try again later once the changes have propagated.
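If you'd like to check whether the record has propagated before clicking "Save", you can query it from Terminal – a quick sketch, using cdn.example.com as a placeholder subdomain:

# Prints the endpoint the CNAME resolves to once the record has propagated
dig cdn.example.com CNAME +short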

Enable the Cloudflare proxy
Once you’ve verified your domain, you can enable caching on Cloudflare. Go back to your domain or subdomain in the DNS tab, and click the cloud icon until it’s orange and “Proxied” shows.

Then go to “Caching” and set the “Caching level” to standard.

Finally, by default Cloudflare will only cache certain types of files. If you're serving predominantly static HTML or CSS files this might be OK, but if you're serving other content types such as JSON you'll need to add a page rule so that those files are cached.

Go to the “Page Rules” tab, and choose “Create Page Rule”. Enter the domain/subdomain you’re using for blob storage (eg cdn.example.com) and then click “Add a setting”. Choose “Cache Level” and select “Cache Everything”. This will now cache everything served on this domain, using either the Cloudflare defaults or the cache-control headers applied to the individual files.

Upload files to Blob Storage with cache-control headers set
Now, upload some files to your blob storage account in the Azure Portal. Depending on how you upload the files, you'll need to set the cache-control headers yourself. The .NET SDK has built-in support for this, or you can set the headers using PowerShell.
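As a rough sketch of a third option, here's how you might set the cache-control header at upload time using the Azure CLI – the account, container and file names below are placeholders:

# Upload a file and instruct Cloudflare (and browsers) to cache it for a day
az storage blob upload \
  --account-name mystorageaccount \
  --container-name assets \
  --name styles.css \
  --file ./styles.css \
  --content-cache-control "public, max-age=86400"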

Conclusion
By enabling Cloudflare for your domain, you’re now saving money each month on bandwidth costs (Cloudflare will only request resources once the cached copy expires) and protecting yourself from a variety of security attacks.

Categories
Azure, Azure Functions

Using Azure Functions with Rider

Previously we’ve written about how to get started with Azure Functions using JetBrains Rider – but that’s now an outdated article, as the Azure plugin now offers seamless integration with the Microsoft Functions tooling. Gone are the days of manually updating project configuration files.

What you’ll need

In order to follow along, you'll need to make sure you:

  • Have a recent version of JetBrains Rider installed
  • Have the .NET Core SDK installed

Getting started

First you’ll want to install the Azure Toolkit for Rider. Open Rider then navigate to Preferences (JetBrains Rider > Preferences on a Mac) and Plugins.

In the Plugins screen, go to the Marketplace tab and search for "Azure Toolkit for Rider". Click install.

This will now allow you to manage and deploy a whole bunch of Azure services from within Rider, such as App Services and SQL databases.

Let's create a new Functions app. Now that you've installed the Azure Toolkit, a new template for Azure Functions has been added to your new project window. From within Rider, click File > New and choose Azure Functions from underneath the .NET Core section.

Enter a solution name, project name and choose a directory and language, and then click Create. If you’re using this as a test project, you can probably leave these values as is.

Running the Functions app

And that’s almost all there is to using Functions apps in Rider now! The toolkit will automatically detect that it’s a function app, and provide the necessary scaffolding to allow you to run and debug the app.

Gone are the days of hacking the project settings and configuration to make it work – it’s a seamless experience now.

To verify that the config is correct, click the run configuration in the top right-hand side of Rider, and edit the configurations. You should see a default Azure Functions host configuration, complete with all the settings required to run a functions app. You can see in this example that the function will run on port 7071, and pause on error.

Even better, if you don't have the required Functions command line tools to run the app (which Rider relies on behind the scenes), you'll be prompted to install them – which Rider will do automatically on your behalf. You'll also be prompted if your tools are out of date.

And that’s how easy it is now to get started using Azure Functions in Rider!

Categories
Azure, Azure Functions

Which Azure Functions runtime am I using?

Microsoft currently support two versions of the Azure Functions runtime – version 1 and version 2. This post will look at the main changes between the two versions, and show you how you can check which runtime you’re using.

What are the key differences between versions?

Version 1 of the runtime was introduced back in 2016, when functions were first announced. At launch it supported JavaScript, C#, Python and PHP.

In late September 2018, Microsoft made the Functions 2 runtime generally available, and with it brought a number of significant development, deployment and performance improvements – including the ability to use the runtime anywhere, on a Mac or Linux machine too! It's worth noting that as of March 2019, Python support for Functions 2 is still in preview.

There were significant under-the-hood changes made to improve performance – .NET Core 2.1 support was added, alongside a move to .NET Core powering the functions host process instead of the full .NET Framework.

Big changes were made to the way that bindings work – as of 2.0 they became extensions instead of being bundled into the runtime itself (aside from HTTP and timer support, which are deemed core to the experience). This means that some bindings haven't yet made it over to the new 2.0 runtime – your mileage may vary, but the Microsoft bindings docs have a clear comparison between the two.

There are also a bunch of new integrations with Functions 2 – for instance, Application Insights is supported with minimal configuration required, while you can easily use the Deployment Center to deploy code from other sources such as GitHub repos to your functions app.

How can I tell what runtime I’m using?

If you’re running a functions app on a Mac or Linux machine – there’s a fair chance you’re using the 2.0 runtime, as that’s what the functions command line tools support. You can verify this by opening the host.json file within the root directory of your app, which should look something like this:

{
    "version": "2.0",
    "extensions": {
        "http": {
            "routePrefix": ""
        }
    }
}

The version field directly references the version of the functions runtime you’re using. If it says 2.0, you’re using version 2 – if it’s 1 or missing completely, you’re on the first version of the runtime.

Similarly, you can also view the runtime for your app through the Azure Portal – open your app in the portal, then navigate to "Function app settings", where you'll see the "Runtime version" setting.
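If you prefer the command line, you can also read the FUNCTIONS_EXTENSION_VERSION app setting – which controls the runtime version – using the Azure CLI. A sketch, with placeholder app and resource group names:

# Prints ~2 for the version 2 runtime, or ~1 for the original runtime
az functionapp config appsettings list \
  --name myFunctionApp \
  --resource-group myResourceGroup \
  --query "[?name=='FUNCTIONS_EXTENSION_VERSION'].value" \
  --output tsv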

Categories
Azure

What’s an Azure Service Principal and Managed Identity?

In this post, we’ll take a brief look at the difference between an Azure service principal and a managed identity (formerly referred to as a Managed Service Identity or MSI).

What is a service principal or managed service identity?

Let's get the basics out of the way first. In short, a service principal can be defined as:

An application whose tokens can be used to authenticate and grant access to specific Azure resources from a user-app, service or automation tool, when an organisation is using Azure Active Directory.

In essence, service principals help us avoid having to create fake users in Active Directory in order to manage authentication when we need to access Azure resources.
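To make that concrete, here's a minimal sketch of creating a service principal with the Azure CLI – the application name is a placeholder:

# Creates an Azure AD application and a matching service principal,
# and returns credentials (appId, password, tenant) for your automation tool
az ad sp create-for-rbac --name myAutomationApp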

Stepping back a bit, it's important to remember that service principals are defined on a per-tenant basis. This is different to the application from which principals are created – the application object sits across every tenant.

Managed identities are often spoken about alongside service principals, and that's because they're now the preferred approach to managing identities for apps and automation access. In effect, a managed identity is a layer on top of a service principal, removing the need for you to manually create and manage service principals directly.

There are two types of managed identities:

  • System-assigned: These identities are tied directly to a resource, and abide by that resource's lifecycle. For instance, if the resource is deleted then the identity will be removed too (see the sketch below this list)
  • User-assigned: These identities are created independently of a resource, and as such can be shared between different resources. Removing them is a manual process that you perform whenever you see fit
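As a sketch of the system-assigned flavour, this is roughly how you'd enable a managed identity on an existing web app with the Azure CLI (the app and resource group names are placeholders):

# Creates a system-assigned identity tied to the web app's lifecycle –
# delete the app and the identity disappears with it
az webapp identity assign \
  --name mySite \
  --resource-group myResourceGroup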

One of the problems with managed identities is that, for now, only a limited subset of Azure services support using them as an authentication mechanism. If a service you use doesn't support managed identities, you'll need to continue creating and managing your service principals manually.

So what’s the difference?

Put simply, the difference between a managed identity and a service principal is that a managed identity manages the creation and automatic renewal of a service principal on your behalf.

Update 31/1/20: If you’re using Azure Web Apps, check out our new post on using managed identities with deployment slots

Categories
Azure DevOps

Accessing Key Vault secrets in an Azure DevOps pipeline task

In this post, we'll take a look at one option for accessing Azure Key Vault secrets from within an Azure DevOps release pipeline.

Want to secure your Azure DevOps application secrets in Key Vault? Find out how in our short e-book guide on Amazon

Setting up your Azure Key Vault

Before you can add the secret to your pipeline, you first need to make sure that there's a key vault set up in Azure, and that you've given either your pipeline's managed service identity or your own account GET access to the secrets within the vault. Note that this is set under your key vault's "Settings" → "Access Policies" section.

If you haven’t already, add your secrets into the “Secrets” section, and take note of the names used for the secrets – you’ll need these a bit later on.
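Both of these steps can also be done from the Azure CLI. A rough sketch, with the vault name, secret name and object ID below all being placeholders:

# Store a secret in the vault
az keyvault secret set \
  --vault-name myPipelineVault \
  --name api-secret \
  --value "s3cret-value"

# Grant GET access on secrets to the identity your pipeline runs under
az keyvault set-policy \
  --name myPipelineVault \
  --object-id <object-id-of-your-pipeline-identity> \
  --secret-permissions get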

Adding the Azure Key Vault pipeline task

Now that you've got your secrets stored and accessible from key vault, it's time to configure the DevOps pipeline. Open the visual pipeline editor for your pipeline by clicking "Edit", and choose the stage in which you need access to the secrets.

Then, click the “+” button next to “Run on agent” (or whatever the first step of your pipeline may be) and search for “Azure Key Vault”. Note that you’ll want to add the “Download Key Vault Secrets” task that appears first – this is the official task from Microsoft.

Then, click on the new “Azure Key Vault” task you just added to your pipeline, and set a display name. Choose the Azure subscription in which you created your key vault, and then select from the “Key vault” dropdown list the name of the key vault you stored your secrets in.

If you can't see it listed, it's possible your managed service identity doesn't have the correct permissions, so be sure to check it's been added to your key vault with GET permission.

Now drag the key vault task up your pipeline task list (if applicable) so that it runs before any other task that requires a secret stored within your key vault.

And that’s it!

Accessing a Key Vault secret from other tasks

Now other tasks can access the secrets by using task variables. The key vault task will make all your secrets available using the $(<your secret name here>) syntax, such as $(api-secret).
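For instance, a command-line script task placed after the key vault task could consume the secret like this – a sketch assuming a secret named api-secret and a hypothetical API endpoint:

# $(api-secret) is Azure DevOps macro syntax – the value is substituted
# by the pipeline before the script runs, not expanded by the shell
curl --header "x-api-key: $(api-secret)" https://api.example.com/status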

Summary

This is just one of the various ways that you can access key vault secrets from a DevOps pipeline task. Stay tuned for more posts where we explore other ways of accessing secrets in tasks.

If you want a more in-depth guide and comparison between alternative approaches to storing secrets in DevOps, you can also get our book on Amazon today.

Categories
Azure, Azure Functions

How to add Application Insights to an Azure function

Today we’re going to look at how easy it is to add Azure Application Insights (part of Azure Monitor) to an Azure Function.

What is Application Insights?

In short, Application Insights is a performance and monitoring tool that forms part of the Azure Monitor suite. It allows you to access detailed telemetry on requests made to your function, while observing real-time performance and failures. It works across App Service apps, Azure Functions and more – practically anywhere you can install the available App Insights SDKs.

You can find the full pricing information on the Azure pricing pages, but you effectively get charged per GB of ingested data (5GB included free per month).

How can I enable this for my Azure Function App?

For Azure Functions, it’s super simple to enable App Insights. You don’t even need to add the SDKs – Microsoft handles that for you.

First, you’ll need to create an Application Insights instance. Open the Azure Portal, click “Create a resource” and search for “Application Insights”.

Enter a friendly and unique name for your instance, and depending on the language you write your functions in, choose General or Node.js for the “Application Type”.

Choose a subscription and resource group (ideally matching those of your Azure Function app) and a location. Note that App Insights is only supported in a limited number of regions worldwide, so you might need to choose a different region to that of your functions app.

Once that's been created, open the App Insights resource. On the "Overview" page, you'll see an "Instrumentation Key" in the top section. Copy this – it's essentially the identifier that will allow your function app to report data back to this App Insights resource.

Now navigate to your function app in the Azure Portal. Select the app name in the sidebar, and choose “Platform Features” on the right-hand side of the screen.

Next, open “Application settings”. Scroll down until you see the “Application settings” header. At the bottom of this section, you’ll see a “Add new setting” button – click this to add a new row.

Now in the “App Setting Name” column, type “APPINSIGHTS_INSTRUMENTATIONKEY”. In the “Value” column, paste the “Instrumentation Key” that we copied in the earlier step from your App Insights resource. Scroll back to the top of the page, and click “Save”.
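If you'd rather script this step, the same app setting can be added with the Azure CLI – a sketch, with the app name, resource group and key as placeholders:

# Point the function app at your App Insights resource
az functionapp config appsettings set \
  --name myFunctionApp \
  --resource-group myResourceGroup \
  --settings "APPINSIGHTS_INSTRUMENTATIONKEY=<your-instrumentation-key>"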

Summary

And that’s it! That’s how easy it is to enable App Insights for Azure Functions. If you want to see metrics coming through straight away, navigate to your App Insights resource and click “Live Metrics”. Make an API request to your function (assuming it’s HTTP based – otherwise trigger it however you can) and you should see the request come through instantly.

Categories
Azure, Azure Functions

Create an Azure Function App using a Mac

In this post, we’ll take a look at how you can quickly create an Azure Function when using a Mac. In particular, we’ll be using the Azure Functions command line tools to create our functions app.

Getting started

Before you can create a function, you'll need to install the Azure Functions command line tools. Before that, you need to make sure that you have Brew installed. If you're not familiar with it, Brew (Homebrew) is a package manager for macOS – similar to NPM for Node.js projects.

Once you’ve installed Brew, make sure you have the latest .NET Core SDK installed. If you’re not sure, open Terminal and run:

dotnet --version

If you do get a version back, make sure it’s at least 2.0.0 or above. If you get an error, you’ll need to install it using the following Brew commands:

brew tap caskroom/cask
brew cask install dotnet

Run the same dotnet --version command again to ensure the install was successful.

Now we need to install the Azure Function Core Tools:

brew tap azure/functions
brew install azure-functions-core-tools

Once the install has finished, you're ready to create your first function!

Creating the function app

Now that we’ve got all the dependencies installed, it’s time to create our function app. Open Terminal, and navigate to a directory where you’d like your app to be created (in our case, we’re going to create it from the root Documents folder).

cd ~/Documents

Now let's get started creating the app! First, let's initialise the project:

func init MyFirstFunctionApp --source-control true --worker-runtime dotnet

This will create a folder called MyFirstFunctionApp inside your Documents folder. It will be initialised as a git repository (the source-control parameter) and use the dotnet worker runtime. If you'd prefer a Node.js function or a Python function, you can replace dotnet with either node or python respectively.

You’ll now have your project – but at this point, it’ll be empty with no functions. To add your first function, we need to run the following command to generate a function called MyFirstFunction:

func new --name MyFirstFunction

It'll then ask you what type of template you'd like to use. A template is effectively a trigger – what will cause your function to start running? We'll choose #2 (HttpTrigger) for now, as this will enable you to easily run your function via curl, Postman or by visiting a page in your web browser. You can also provide this in the previous command as --template, rather than selecting from the list.
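So the non-interactive equivalent of the choice above would look something like this:

# Skip the interactive menu by naming the template up front
func new --name MyFirstFunction --template "HttpTrigger"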

You’ll now have your first function and function app created! You can run it by entering:

func start

Once it's started, you should see a message with the function URL.

The sample function expects a name parameter to be provided, so open up your web browser and go to http://localhost:7071/api/MyFirstFunction?name=Bob. You should get a response like:

Hello, Bob
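If you'd rather stay in Terminal, you can hit the same endpoint with curl from another window while func start is running:

# Quote the URL so the shell doesn't treat ? as a special character
curl "http://localhost:7071/api/MyFirstFunction?name=Bob"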

Summary

So that’s how easy it is to get started with an Azure Function app on a Mac. Microsoft has done a lot of work around the tooling to make it as straightforward as possible.

Categories
Azure

What’s the difference between an Azure Service Bus queue and topic?

Starting out with an Azure Service Bus? It can be confusing trying to work out whether you should use a queue or a topic. In this post, we’ll try to break down the difference between the two and which one you should use when.

What is a service bus queue or topic?

When configuring a service bus, you have two options for configuring how messages are processed – a queue, or a topic.

Let's start with a queue. A queue has a one-to-one relationship between each message and its consumer, and is a way to ensure reliable first-in-first-out (FIFO) delivery to one processor from many sources. For example, you might have one WebJob that processes requests placed on the queue from many different sources. In most cases, the processor (or receiver) receives messages in the same order that they were placed on the queue. The key to a queue is that each message is only ever processed by a single consumer.

In contrast, a topic follows the publish/subscribe pattern and can have many consumers, each subscribing to receive notifications when a message is sent to the topic. In effect, this means you can have a one-to-many relationship between messages and consumers, although this depends on how you configure your filter rules (you can opt to have each message delivered to only one subscriber if you wish).
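To make the distinction concrete, here's a rough sketch of creating both with the Azure CLI – the resource group, namespace and entity names are all placeholders:

# A queue: each message goes to exactly one consumer
az servicebus queue create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --name orders-queue

# A topic with two subscriptions: each message can be delivered to both
az servicebus topic create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --name orders-topic

az servicebus topic subscription create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --topic-name orders-topic \
  --name billing-subscription

az servicebus topic subscription create \
  --resource-group myResourceGroup \
  --namespace-name myNamespace \
  --topic-name orders-topic \
  --name audit-subscription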

What’s the difference?

In effect, the difference between a queue and topic can be described as follows:

  • Each message on a queue is delivered to a single consumer, whereas a topic can deliver each message to multiple subscribers.
  • Topic subscriptions enable powerful filtering capabilities, letting you define parameters that messages must meet in order to be copied into a subscription's virtual queue. This can be handy if you need to handle different types of messages, or messages with variable data structures, in the same topic.
  • Topics can be more scalable than queues, as each message can fan out to more than one consumer rather than being limited to a single receiver.
  • Both queues and topic subscriptions support PeekLock and ReceiveAndDelete modes, so you can ensure a message is processed before being dismissed if required.

When should I use a topic or queue?

This is a trickier question to answer, and ultimately depends on how much you're willing to spend on your service bus. If you're using the basic tier, you only have one option – queues. Topics are only supported in the standard and premium tiers.

By moving to the standard or premium tier, you incur an hourly base charge on top of the per-million-operations fee that also applies in the basic tier.

Pricing aside, it also depends on the type of data that you're ingesting into your service bus. If it's time-sensitive and high volume, topics would be the ideal approach, as you can more easily scale your downstream consumers to handle the larger volume of messages.

If, on the other hand, you receive a relatively stable or low volume of messages which aren't necessarily time-critical (ie. they may sit on a queue for some time until the processor reaches them if load is higher than anticipated), then you can probably get away with using a queue.

Categories
Azure DevOps

How to store Azure DevOps secrets in Azure Key Vault

Often when creating an Azure DevOps continuous integration/deployment pipeline there’s a need to store and use app secrets, such as client keys. While you can store secrets within Azure DevOps variable groups, an alternative approach is to use Azure Key Vault instead.

Want to secure your Azure DevOps application secrets in Key Vault? Find out how in our short e-book guide on Amazon

By using Azure Key Vault you get the same enhanced data protection that your other cloud apps enjoy, including activation and expiration dates, and the DevOps integration allows for the centralised management of keys used across apps or pipelines. Keep in mind that if you decide to use key vault, you'll be charged according to the Azure Key Vault pricing for storing your secrets.

Setting up Key Vault access in Azure DevOps

Getting started is easy. Open Azure DevOps, and navigate to the project you wish to integrate with. Open the Pipelines section, and then go to Library.

If you already have secrets and values stored in an existing library, the easiest way to integrate with key vault is to create a separate variable group. If you don't, you'll get a message warning you that enabling key vault in your existing group will blow away all the existing variables saved within the group. This is because you can't use key vault variables side by side with DevOps variables within one group.

Open the new variable group, and you should see a toggle to link secrets from an Azure key vault as variables. Turn that on, and you’ll see the option to set the Azure subscription to be used, and a field to specify a key vault name.

You'll need to ensure that you've previously set up a connection to your Azure subscription within Azure DevOps, and added an Azure Resource Manager service connection (using an Azure service principal) to the resource group where your key vault is located. If you haven't, the management links next to each field will help you set up these connections.

Once connected, pointing DevOps at your key vault is as easy as choosing the correct subscription from the first drop-down list and then selecting your key vault by name in the second. If your service principal doesn't have the get and list secret management permissions, you'll be prompted to authorise it automatically, or you can do so manually in the Azure Portal.
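If you'd rather grant those permissions yourself, here's a rough sketch of the manual step with the Azure CLI – the vault name and app ID are placeholders:

# Give the service principal the get and list secret permissions
# that the DevOps integration needs
az keyvault set-policy \
  --name myVault \
  --spn <app-id-of-your-service-principal> \
  --secret-permissions get list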

If successfully connected, you’ll be able to then see a list of secrets from your key vault by clicking the add button. Choose the secrets you want to make available to your pipeline and click OK.

Add your new variable group to your pipeline, and that’s all there is to adding key vault secrets to an Azure DevOps pipeline.