Categories
Microsoft Graph

What is the Microsoft Graph?

Ever heard someone mention the Microsoft Graph and not known what it is? In this article, we’ll dive deeper into what the Graph is and what it can give you access to.

What is the Microsoft Graph?

In a nutshell, the Microsoft Graph is designed to be a one-stop shop (i.e. a single endpoint) for interacting with the Microsoft suite of products. For now it’s limited to a subset of Microsoft’s product range, but Microsoft has grand ambitions to continue growing this over time.

Currently supported products include Delve, Excel, Microsoft Bookings, Microsoft Teams, OneDrive, OneNote, Outlook/Exchange, Planner and SharePoint, as well as many enterprise mobility services.

The key difference between the Microsoft Graph and Microsoft’s previous service-specific APIs is that the Graph is designed around user scenarios and is independent of the service that customers may interact with.

For example, where previously you may have directly called an Outlook API to access a user’s calendar, using the Graph you simply interact with the calendar data directly without caring about the underlying service.
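
For illustration, here’s a minimal sketch of such a call for the signed-in user’s calendar events, assuming you’ve already obtained an OAuth access token with the appropriate calendar permission (the token placeholder is obviously hypothetical):

# Reads the signed-in user's calendar events from the single Graph endpoint
curl -H "Authorization: Bearer <access-token>" \
  "https://graph.microsoft.com/v1.0/me/events"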

One side-effect of the Graph is that instead of each Microsoft product having its own platform-specific SDK, you can use one SDK to access them all. You can find a full list of the SDKs here, but platforms include iOS, Android, .NET, PHP, Ruby and Python.

Does the API use GraphQL?

No – while the “Microsoft Graph” name may confuse some, the API itself is a normal REST API and doesn’t use GraphQL at this point in time.

Can anyone use the API or do you need to be a partner?

Anyone can sign up to use the Graph API for free. Many of the customer scenarios around email, contacts and calendars are available for use in production apps today.

Keep in mind that some APIs are still in beta (such as the ones backed by Microsoft Bookings) and as such shouldn’t be used in production apps yet.

Categories
Azure

Creating your first Azure Resource Manager (ARM) template

If you’re manually creating infrastructure for your next app in Azure, you should consider using an Azure Resource Manager (ARM) template.

An ARM template is essentially a JSON file that describes the infrastructure you want to create in the Azure cloud. It can be run as many times as you like to spin up identically-configured environments.

Create the template

The first step when using an ARM template is to create the template file. If you’d rather start from an example, Microsoft has an entire GitHub repo of templates that you can clone.

The base template – normally called azuredeploy.json – is made up of the following structure:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": []
}

The elements can be described as follows:

  • $schema: Refers to the properties that can be used within the template. If you were to load the URL in your web browser, it would return the elements and a description of each that you can use within your template. When uploading a template, these get validated to ensure you haven’t made any mistakes.
  • contentVersion: This is an internal version number for you to use to reference the current template. It should be in the format X.X.X.X, with X being any integer. You should only change this value in your template when a significant change is made.
  • resources: This is an array that will contain a description of all the Azure resources that your app or service needs. For now it’s empty.

Add resources to the template

As mentioned above, within the resources section of the template you need to describe the Azure services that you wish to provision. Each Azure service has a set of properties that can be set, with some mandatory. By default, a resource requires:

  • type: The type is a string that combines the namespace of the resource provider and the name of the resource type you wish to provision. For example, Microsoft.DocumentDB/databaseAccounts means you want to create a databaseAccounts resource from the Microsoft.DocumentDB namespace
  • apiVersion: Similar to the template $schema version, each Azure resource type also publishes versions of their schema. This property allows you to specify which schema or version of the resource type you’d like to use and is mandatory
  • name: The human-readable string name that you’d like the resource to be called

While not mandatory, a location element is normally provided as well to specify the location where you want the resource to reside (eg. Australia East).

Luckily, Microsoft publishes a full list of properties for each resource type. If you’re still not sure, for most resources you can create the resource manually in the Azure Portal, go to its “Export template” tab, and Microsoft will generate a template for you.
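
If you prefer the command line, a rough equivalent is exporting an existing resource group as a template with the Azure CLI (the resource group name below is illustrative):

# Exports the current state of a resource group as an ARM template
az group export --name myResourceGroup > exported-template.json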

For this tutorial, let’s create a simple function app. Add the following to the resources section of your azuredeploy.json file:

   "resources": [
        {
            "type": "Microsoft.Web/sites",
            "apiVersion": "2018-02-01",
            "name": "[parameters('siteName')]",
            "kind": "functionapp,linux",
            "location": "[parameters('location')]",
            "dependsOn": [
                "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
            ],
            "properties": {
                "name": "[parameters('siteName')]",
                "siteConfig": {
                     "appSettings": [
                        {
                            "name": "FUNCTIONS_WORKER_RUNTIME",
                            "value": "python"
                        },
                        {
                            "name": "FUNCTIONS_EXTENSION_VERSION",
                            "value": "~2"
                        }
                    ]
                },
                "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
                "clientAffinityEnabled": false
            }
        },
        {
            "type": "Microsoft.Web/serverfarms",
            "apiVersion": "2018-02-01",
            "name": "[variables('hostingPlanName')]",
            "location": "[parameters('location')]",
            "kind": "linux",
            "properties":{
                "reserved": false
            },
            "sku": {
                "Tier": "Dynamic",
                "Name": "Y1"
            }
        }
    ]

The above creates a Linux App Service hosting plan on the consumption tier for functions (Y1). Note that for a Linux plan the reserved property needs to be set to true.

The template also creates a function app (Microsoft.Web/sites) with a dependsOn reference to the hosting plan, so the plan is provisioned first.

If you look closely, you might notice that some elements refer to variables and parameters. Let’s dive deeper into what they are.

What are parameters?

Parameters allow you to specify a value each time you deploy a template. For example, if you had a template and wanted to create a production and a staging environment with it, you could create an environment parameter that lets you specify staging or production in resource names without modifying the template file each time.

If you didn’t use a parameter, you’d need to change the hard-coded string value in your azuredeploy.json file each time you wanted to change to a new environment.

Similarly if you wanted to be able to deploy your template to a different Azure location quickly, you could specify a location parameter. Then you could deploy to any Azure region by simply providing a new location parameter value – with no change to the template file required.

Within the template file, parameters sit in a top-level parameters element as follows:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "parameters": {
    "hostingPlanName": {
      "type": "string",
      "defaultValue": "",
      "metadata": {
        "description": "The name to give your hosting plan"
      }
    }
  }
}

Parameters support a number of elements, but the most common include:

  • type: The type of value provided (eg. String)
  • defaultValue: A default value to use if one isn’t provided when the template is deployed
  • metadata.description: A description of what the parameter represents

Parameters can be set when deploying using the Azure CLI or PowerShell. Here’s an example of how you would provide a location parameter using the Azure CLI:

az group deployment create \
  --name mytemplatename \
  --resource-group yourResourceGroup \
  --template-file $templateFile \
  --parameters location=australiaeast
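
As an alternative to inline values, parameters are often kept in a separate file – commonly named azuredeploy.parameters.json – and passed with --parameters @azuredeploy.parameters.json. A minimal sketch for the hostingPlanName parameter defined above:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "hostingPlanName": {
      "value": "yourHostingPlanName"
    }
  }
}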

Referencing the parameter is done using the following syntax – you can see a full example in the template we defined earlier with resources:

"location": "[parameters('location')]"

What are variables?

While parameters allow you to specify values when deploying a template, variables allow you to reuse values internally within your template file without duplication. For instance, if you had a value used in three different resources (such as a location, or the hostingPlanName in the example above) that you didn’t want to expose to those running your template, you could use a variable.

Like parameters, variables are also top-level elements. They’re simpler to specify, as you don’t need to provide descriptions, types and default values. They look like this:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [],
  "variables": {
    "hostingPlanName": "yourHostingPlanName"
  }
}

You can then reference the variable within your resource definitions using the following syntax as seen in the resources section in the template we created earlier:

"name": "[variables('hostingPlanName')]"

Summary

In this post, you’ve learnt about how a template is structured, and what each element means. Stay tuned for our next post on deploying your template.

Categories
Azure DevOps

Azure DevOps system variables now read-only

Azure DevOps provides a number of predefined system variables out of the box on hosted build agents.

While these have officially always been read-only, informally you’ve previously been able to change the value of a predefined system variable by overwriting it in a pipeline task.

However, as of the January 28 2020 DevOps release this is no longer the case. The DevOps team have improved the security of system variables, and as such now actively prohibit system variables from being written to.

The variables include items such as the DefaultWorkingDirectory, JobName and TeamProject. These are all variables that you can otherwise use and refer to in your DevOps pipeline tasks.

This means you’ll now need to create your own task or pipeline variables where possible. If you can’t modify the task or pipeline variables, speak to someone with admin rights who can.
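
For example, rather than overwriting a system variable, you can set your own variable from a script task using the task.setvariable logging command – a minimal sketch (the variable name here is illustrative):

# Sets a custom pipeline variable that later tasks in the same job can read as $(myWorkingDirectory)
echo "##vso[task.setvariable variable=myWorkingDirectory]$(System.DefaultWorkingDirectory)/app"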

Categories
Azure

Managed identities and Azure App Service staging slots

If you’re using an Azure App Service on a tier that offers staging slots (standard and above) then you might want to consider what happens when you swap a slot.

If you’ve configured a slot, then at a minimum you’ll want to swap deployments between a production and a pre-production environment. Microsoft covers in depth what happens when you commence a swap, but what they don’t cover is what happens to any managed identities that you have set up for the app service.

In short, managed identities are tied to the slot in which you first create or assign them, and do not change when you initiate a swap between two or more slots.

You might recall that an App Service can have a system-assigned identity, a user-assigned identity, or both. These are configured to allow your App Service to access other Azure resources without the need to share secrets and passwords.

When swapping slots, both system and user-assigned identities remain tied to the slot. They don’t swap – so if you need to change these, you’ll need to intervene separately.

Once you’ve generated or assigned an identity, don’t forget to then add it to any Azure resources your app needs access to.
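
As a rough sketch of what that looks like with the Azure CLI (the app, slot and role names here are illustrative):

# Assign a system-assigned identity to the staging slot specifically
az webapp identity assign --name myApp --resource-group myResourceGroup --slot staging

# Grant that identity access to another resource, eg. a storage account
az role assignment create --assignee <principal-id-from-previous-command> \
  --role "Storage Blob Data Reader" \
  --scope <storage-account-resource-id>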

Also keep in mind the lifecycle of a managed identity: user-assigned identities won’t be removed when you delete a slot, whereas system-assigned identities are deleted as soon as the slot is. Depending on your situation, you may prefer one of these approaches.

Categories
Azure

Azure Availability Zones vs Availability Sets

A common question for newcomers to Microsoft’s Azure platform is what’s the difference between an availability zone and an availability set?

An availability zone is a unique physical location within one Azure region, and provides high-availability for your application and infrastructure. Each zone is independent and has physically isolated power, cooling, and networking from other zones within the same region.

Different Azure services have varying levels of support for availability zones. Some will automatically replicate between zones, while others will require you to choose the zone to connect to.

Availability zones aren’t available within all Azure regions – in fact, currently there are only 10 regions across the United States, Asia and Europe that support availability zones:

  • Central US
  • East US
  • East US 2
  • West US 2
  • North Europe
  • UK South
  • West Europe
  • France Central
  • Japan East
  • Southeast Asia

On the other hand, an availability set is a concept that only applies to virtual machines. A set isolates your virtual machine (VM) resources from one another when deployed, meaning your VMs run across multiple physical machines and racks. This helps to avoid a single hardware failure bringing down your entire application, and also means planned maintenance from Microsoft updates won’t take all your VMs offline at once.

Unlike an availability zone, which protects you at the data centre level, an availability set only provides redundancy within a single data centre.
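
The difference shows up when you create a VM with the Azure CLI – you either pin the VM to a zone or place it in an availability set (the names below are illustrative):

# Place a VM in availability zone 1 of a zone-enabled region
az vm create --name myZonalVm --resource-group myResourceGroup --image UbuntuLTS --zone 1

# Or create an availability set and place the VM in it
az vm availability-set create --name myAvSet --resource-group myResourceGroup
az vm create --name myVm --resource-group myResourceGroup --image UbuntuLTS --availability-set myAvSet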

Categories
Azure DevOps

How to retry a failed stage in Azure DevOps

One of the biggest limitations of Azure DevOps has until now been the fact that you can’t retry an individual stage or step in a pipeline if it fails. Instead, you’d have to re-run the entire pipeline and hope that the failed stage passes.

As of September 2019, in most cases you can now retry a failed stage without the need to retry the whole pipeline – but there’s a catch. You’ll need to enable the Multi-stage pipelines preview feature, if you haven’t already. Note that this still doesn’t allow you to retry failed steps within a stage – rather it relies on the new multi-stage support to allow you to retry a collection of steps within a stage.

You can check if this is enabled by clicking your avatar in the top-right corner of the Azure DevOps portal, and then “Preview features”. Select “for this organisation [your organisation name]” from the drop-down menu at the top of the screen, and scroll down until you see “Multi-stage pipelines”. If this is toggled off, you’ll need to turn it on before you can retry failed stages – you will need administrator rights to change it though.

Once you’ve enabled this, navigate back to your release pipelines and you should see the ability to re-run individual stages that fail throughout your pipeline.

Note that as this is still in beta, there are some limitations. Perhaps the biggest is that you can’t retry stages that are explicitly cancelled – only stages that fail because a task errors can be retried today.

Categories
Azure

Using Cloudflare with Azure Blob Storage

If you’re storing files in Azure Storage, you’ll likely find that bandwidth soon becomes one of the more expensive items in your Azure bill. By using a content delivery network (CDN), you can improve performance for those accessing your files from around the world, while also reducing the bandwidth costs incurred on Azure.

Microsoft offers its own Azure CDN service (with Microsoft, Akamai or Verizon as the underlying provider). However, for the purposes of this tutorial we’ll be using Cloudflare – a third-party vendor with a free tier that should meet most people’s needs. Cloudflare can cache files based on the caching headers you apply when you upload files to blob storage, or it can apply default caching values. As a side note, Cloudflare also offers other services such as DDoS protection and edge workers.

Note that you’ll need your own domain name for this to work – you need to associate a custom domain name with the blob storage account. To use Cloudflare’s services, you also need to point your domain to Cloudflare’s name servers.

Setup Cloudflare
If you have a Cloudflare account already, you can skip this section. Otherwise, create an account at Cloudflare. The free account will be fine for this tutorial.

Once your account has been created, click “Add a site” and follow the instructions to verify and add your domain to Cloudflare. If you want to use a subdomain (eg. cdn.example.com) as the Blob Storage custom domain, the site you add to Cloudflare should still be the root domain (eg. example.com).

This will also require changing your domain’s nameservers, so you’ll require admin permissions with your DNS registrar.

Getting started
If you haven’t already done so, create an Azure Storage resource by clicking “Create a resource” in the Azure Portal. Search for “storage account”, and choose the Microsoft service by clicking “Create”. Fill in the settings as required for your storage account.

Once your account has been provisioned, head to the “Storage accounts” tab and choose the account you want to setup.

Create a custom domain
The first step in setting up Cloudflare with Blob Storage is to map your storage account to a custom domain name that you own (such as example.com). In the Azure Portal for your storage account, go to “Custom domain” under the “Blob service” tab.

You should see some Microsoft instructions about configuring a CNAME record with your DNS provider, which should now be Cloudflare. Follow these steps first, by logging into your Cloudflare account and adding the CNAME record to validate your ownership of the domain with Microsoft. Note that this will be the URL that people will see and use when accessing your content, so CNAMEs such as cdn.example.com or static.example.com are commonly used.

In Cloudflare, make sure the proxy status for your domain or subdomain is set to DNS only. If proxied is enabled, verification with Microsoft will fail.

Once you’ve added the required record to your domain name, enter the URL in the text field in the Azure Portal. Click “Save”, and if the DNS changes have propagated (this can take up to 24-48 hours depending on your provider and configuration) then the custom domain should be added successfully. Otherwise, try again later once the changes have propagated.
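
If you prefer, the same custom-domain mapping can be done from the Azure CLI once the CNAME is in place – a rough sketch (the account and domain names are illustrative):

# Maps cdn.example.com to the storage account's blob endpoint
az storage account update --name mystorageaccount --resource-group myResourceGroup \
  --custom-domain cdn.example.com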

Enable the Cloudflare proxy
Once you’ve verified your domain, you can enable caching on Cloudflare. Go back to your domain or subdomain in the DNS tab, and click the cloud icon until it’s orange and “Proxied” shows.

Then go to “Caching” and set the “Caching level” to standard.

Finally, by default Cloudflare will only cache certain file types. If you’re serving predominantly static HTML or CSS files this might be OK, but if you’re serving other content types such as JSON you’ll need to add a page rule so those files are cached.

Go to the “Page Rules” tab, and choose “Create Page Rule”. Enter the domain/subdomain you’re using for blob storage (eg cdn.example.com) and then click “Add a setting”. Choose “Cache Level” and select “Cache Everything”. This will now cache everything served on this domain, using either the Cloudflare defaults or the cache-control headers applied to the individual files.

Upload files to Blob Storage with cache-control headers set
Now, upload some files to your blob storage account in the Azure Portal. Depending on how you upload the files, you’ll need to set the cache-control headers – the .NET SDK has built-in support, or you can follow Microsoft’s instructions for setting them with PowerShell.
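
The Azure CLI also lets you set the header at upload time – a minimal sketch (the container, file and max-age values are illustrative):

# Uploads a file and tells Cloudflare (and browsers) to cache it for 24 hours
az storage blob upload --account-name mystorageaccount --container-name assets \
  --name styles.css --file ./styles.css \
  --content-cache-control "public, max-age=86400"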

Conclusion
By enabling Cloudflare for your domain, you’re now saving money each month on bandwidth costs (Cloudflare only requests resources from Azure once its cached copy expires) and protecting yourself from a variety of security attacks.

Categories
Azure DevOps

What is Azure DevOps used for?

Azure DevOps is a Microsoft product that can be used as part of the software development lifecycle, to manage the delivery, development and release of software products.

DevOps is a paid product (although there is a free tier available) that many software development companies use – particularly those within the Microsoft ecosystem. It used to be called Visual Studio Team Services (VSTS).

DevOps is made up of 5 core areas of functionality:

  • Boards: Boards are used to track your team’s work, and work best for teams using Agile methodologies to deliver work. Cards (or issues) can be created and tracked on boards, and linked to work as it gets delivered and released. Think of this as an equivalent to the likes of Atlassian Jira.
  • Repos: Employ version control for your code by creating a Git repository within DevOps. DevOps supports Git repositories in a similar manner to the likes of Github and Bitbucket.
  • Pipelines: Create continuous delivery pipelines that allow you to build your app and then release it automatically. Run your tests, deploy your code and store your build artefacts using Pipelines – DevOps has a whole host of tasks supported out of the box, or you can easily create your own.
  • Test Plans: Depending on your account level (paid or free) you can use test plans to track the progress of your test activities including both manual and automated tests, and easily analyse reports including passed/failed tests and defects raised.
  • Artifacts: Finally, DevOps will store the artifacts that are generated as part of your pipelines. This allows you to easily roll back to a previous version of your app, or re-deploy to new servers if required. You can also create package feeds, with public sharing now in preview for public projects.
Categories
Azure DevOps

How to generate an Azure DevOps pipeline status badge for README files

Microsoft has made it very easy to generate badge icons that you can use wherever you’d like, showing the status of your build and release pipelines in Azure DevOps.

The icons – which are made available as images – can be generated for both build pipelines and release pipelines on the Azure DevOps website. They change based on the outcome of the latest run, allowing you to embed them in places such as your project’s Readme files.

They aren’t particularly flexible, but you can choose the branch and scope of the success/fail status.

How to generate a badge icon

Log in to the Azure DevOps portal. Navigate to Pipelines, and then choose the release or build pipeline that you would like to generate an icon for.

In the top right-hand corner, select the three dots and then choose “Status badge”.

This brings up a modal window that allows you to customise the badge: choose a branch and a scope for the status (whole pipeline or per job), and a URL is then generated for the image.

Microsoft also provides the markdown text, which you can copy and paste straight into a markdown file such as your project’s Readme.
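
The generated markdown looks roughly like this (the organisation, project and pipeline names are placeholders):

![Build Status](https://dev.azure.com/{organisation}/{project}/_apis/build/status/{pipeline-name}?branchName=master)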

Categories
Azure DevOps

How to link Azure DevOps with Jira

In this post, we’ll look at the official integration between Azure DevOps and Atlassian’s Jira software.

Love it or hate it, Jira is one of the most widely used issue tracking tools, and in July 2019 Microsoft rolled out a new Jira app that can be used to sync information between DevOps and Jira, enabling end-to-end traceability between both platforms from when an issue is reported to when a deployment fixing it is released.

There are a number of limitations. At present, the official app doesn’t support Azure DevOps (Azure Repos) repositories – only GitHub repositories are supported, although Azure Repos support is planned. Build information also isn’t available; the app only shows deployments in the Jira issue.

The integration also doesn’t change much on the DevOps side – there’s no UI integrating with Jira reporting or issues, and you can’t easily view a summary of Jira cards from pull request or commit messages.

That said, to get started, you’ll need to:

  • Be a Jira administrator, or have the ability to request a Jira app be installed within your organisation’s Jira installation
  • Use Jira Cloud – on-premise installations aren’t supported
  • Use GitHub for your source code

It’s also worth noting that for now this app only supports showing deployment traceability in Jira – other information such as build numbers or tasks will not show.

Getting started

First you’ll need to install the Azure Pipelines for Jira application, available for free in the Atlassian marketplace.

Then, once you’ve installed the app you’ll be prompted to add your DevOps organisation. Click “Add organisation”, and you’ll be prompted to enter your credentials. It should then list all your organisations in a table view.

Enabling support in DevOps

Now we need to add the Jira integration to our DevOps release pipeline. Open the DevOps portal, and navigate to your project and release pipeline.

Click “Edit” on the release pipeline you’d like to integrate Jira with, and then click “Options”. Click “Integrations” and you’ll see the option to “Report deployment status to Jira”.

Check the box, and you should see your Jira Software Cloud account listed from the previous step. If you don’t see it, check the Azure Pipelines app in Jira to make sure you added the organisation successfully.

You’ll also see the ability to map your stages to a set list of deployment types. These deployment types are set by Microsoft, and you’ll have to pick from the list and match them as best you can to your stages. At present, there are 5 deployment types:

  • Production
  • Staging
  • Testing
  • Development
  • Unmapped

These deployment types then show on your Jira cards as commits containing your issue keys move through your deployment process.

Referencing Jira issues

The app works by searching your commit history (both commit and pull request messages) for Jira issue keys. If you’re not sure about how to find an issue key see this article on Atlassian’s website.

To enable the linking, once you’ve followed the above steps, simply add the issue key to your commit messages as you write them. We tend to stick to the format “[ISSUE-KEY] Fixed an issue with something” for the short message title, but the exact format is up to you.
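
For example, a commit along these lines (PROJ-123 is a hypothetical issue key) is enough for the app to pick up the link:

# Reference the Jira issue key in the commit message so the app can match it
git commit -m "[PROJ-123] Fixed an issue with the payment form"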

The app then searches your commit history and will automatically update the issue when you create and deploy a release in your pipelines. It’s that simple.