Categories: Azure, Azure Functions

Building Azure Functions in the Rider IDE

Many people now use Rider as their integrated development environment (IDE) for .NET development. However, when it comes to using it for developing Azure Functions, the out-of-the-box experience is (currently) less than ideal.

I say currently, because in the next update to the Azure Toolkit plugin for Rider, support for Azure Functions has been added. This’ll make running and debugging function apps a breeze.

Until then though, here’s how you can run a functions app from Rider.

How to run Azure functions in Rider

First, you’ll need to install the Azure Functions Core Tools CLI (see creating an Azure function for the Mac).

Then, open your functions app in Rider by selecting the project or solution.

Create a Properties folder in the root directory if you don’t have one already, and then inside of that create a new JSON file called launchSettings.json.

Inside the JSON file, place the following object:

{
  "profiles": {
    "functions": {
      "commandName": "Executable",
      "executablePath": "func",
      "commandLineArgs": "start --port 7071 --pause-on-error"
    }
  }
}

You should see Rider now refresh, and add the configuration profile to your configurations with a little rocket icon. Click run, and your function app should start running within Rider.

Note that debugging is a little trickier for now. On Windows, once your function app is running you can go “Run” > “Attach to Process” and choose your function app’s process to attach to. On a Mac, however, this has had mixed results.

Categories: Azure

Azure App Configuration vs App Settings

For a long time now, App Settings have been the way to configure your app’s settings such as environment variables and connection strings for Azure App Services.

In February 2019, Microsoft announced a new service called App Configuration – which, while currently in preview, allows you to easily centralise your app settings across multiple resources.

Want to secure your Azure DevOps application secrets in Key Vault? Find out how in our short e-book guide on Amazon

Why is App Settings a problem?

App Settings are great – if you only need those settings for one resource (e.g. an app service). But if you’re working with, say, a microservice made up of many resources – perhaps an app service and a functions app that share similar configuration – keeping track of which settings were made where can quickly become problematic.

Amongst other things, there’s no easy way to do a point-in-time replay of settings, so you can see what changed or keep the settings the same if you need to rebuild or deploy a specific version of your app.

What does App Configuration offer that App Settings doesn’t?

App Configuration is a fully managed service that offers the ability to store all your app configuration values in one location, with the ability to compare settings at specific times and with specific labels.

Operating in a similar fashion to Key Vault, it’s a separate resource from your app service (although you can configure it immediately by going to the new preview tab in your app service), and there are a number of SDKs available to integrate with it for .NET Core, .NET and Java.

You can think of App Configuration as a counterpart to Key Vault for properties that don’t need to be stored as secrets. Connection strings would likely still go in Key Vault, but properties that aren’t secrets should be stored in App Configuration.

First, you’ll need to set up a connection string, which you can find under the “Settings” → “Access Keys” section of your App Configuration resource. Then, within App Configuration, you set a number of key-value pairs, which you can then access in a similar fashion to the way you get environment variables in App Settings:

// The environment variable should hold the App Configuration connection string
// copied from "Settings" → "Access Keys"
var configBuilder = new ConfigurationBuilder();
configBuilder.AddAzureAppConfiguration(Environment.GetEnvironmentVariable("YourKeyNameFromAppConfiguration"));
var config = configBuilder.Build();
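Once the configuration has been built, you read a value just as you would from any other .NET Core configuration provider. As a minimal sketch, assuming a hypothetical key named “MyApp:Greeting” has been created in your App Configuration resource:

// "MyApp:Greeting" is a hypothetical key set in the App Configuration resource
var greeting = config["MyApp:Greeting"];
Console.WriteLine(greeting);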

You can also import and export settings easily, and can use JSON, YAML or a Properties file. You can also export the settings for a specific time and date.

At present, because the service is in preview, there are no additional charges. Microsoft hasn’t yet stated its intentions for charging for this service, but given the similarities with Key Vault (which is a paid service) it’s easy to see how a similar model could apply should they decide to charge.

Categories: .NET Core

Understanding the Newtonsoft JSON Token

One of the concepts worth understanding when writing a custom converter for the popular Newtonsoft JSON framework is that of the JsonToken, and the role it plays in helping you understand the JSON being read.

The JsonToken documentation for Newtonsoft is relatively sparse – in part because once you understand it, it’s a relatively simple concept.

Effectively, when you’re attempting to deserialise a string representation of a JSON object, such as in the ReadJson method of a custom JsonConverter, the JsonToken represents each element or component (token if you prefer) of the object as it’s deserialised.

So for example, if you have the following JSON and implement the ReadJson method:

{
	"message": "Hello world"
}

When you first check the reader.TokenType value you’ll get JsonToken.StartObject back – this indicates that the { marks the start of an object.

If you were to then call reader.ReadAsync(), and subsequently request the reader.TokenType again, you’ll now get JsonToken.PropertyName in response. This is because the next element within the JSON string is the property name (in other words, the “message” key).

Lastly, if you were to call reader.ReadAsync() a third time, you’d get back JsonToken.String, as the value of the property is a string (“Hello world”). Newtonsoft will try to parse the value into a representation of a type – so, for example, if that value was a number you’d get back Integer or Float, and if it was a Boolean you’d get back Boolean.
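To put those tokens in context, here’s a minimal sketch of a custom converter that walks the JSON above token by token. The Message type and MessageConverter are hypothetical examples (not from the original post), and the synchronous Read() calls are used in place of ReadAsync() for brevity:

using System;
using Newtonsoft.Json;

public class Message
{
    public string Text { get; set; }
}

public class MessageConverter : JsonConverter
{
    public override bool CanConvert(Type objectType) => objectType == typeof(Message);

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        // On entry, reader.TokenType is JsonToken.StartObject ({)
        reader.Read();                        // now JsonToken.PropertyName ("message")
        var propertyName = (string)reader.Value;

        reader.Read();                        // now JsonToken.String ("Hello world")
        var value = (string)reader.Value;

        reader.Read();                        // consumes JsonToken.EndObject (})
        return new Message { Text = value };
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        writer.WriteStartObject();
        writer.WritePropertyName("message");
        writer.WriteValue(((Message)value).Text);
        writer.WriteEndObject();
    }
}

Deserialising with JsonConvert.DeserializeObject<Message>(json, new MessageConverter()) then returns a Message whose Text property is “Hello world”.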

Categories: Azure, Azure Functions

Which Azure Functions runtime am I using?

Microsoft currently support two versions of the Azure Functions runtime – version 1 and version 2. This post will look at the main changes between the two versions, and show you how you can check which runtime you’re using.

What are the key differences between versions?

Version 1 of the runtime was introduced back in 2016, when functions were first announced. At launch it supported JavaScript, C#, Python and PHP.

In late September 2018, Microsoft made the Functions 2 runtime generally available, and with it brought a number of significant development, deployment and performance improvements – including the ability to use the runtime anywhere, on a Mac or Linux machine too! It’s worth noting that as of March 2019, Python support for Functions 2 is still in preview.

There were significant under-the-hood changes made to improve performance – .NET Core 2.1 support was added, alongside a move to .NET Core powering the functions host process instead of the .NET Framework.

Big changes were made to the way that bindings work – as part of 2.0 they became extensions, instead of being bundled into the runtime itself (aside from HTTP and timer support, which are deemed core to the experience). This means that some bindings haven’t yet made it over to the new 2.0 runtime – your mileage may vary, but the Microsoft bindings docs have a clear comparison between the two.

There are also a bunch of new integrations with Functions 2 – for instance, Application Insights is supported with minimal configuration required, while you can easily use Deployment Center to add code from other sources such as GitHub repos into your functions app.

How can I tell what runtime I’m using?

If you’re running a functions app on a Mac or Linux machine – there’s a fair chance you’re using the 2.0 runtime, as that’s what the functions command line tools support. You can verify this by opening the host.json file within the root directory of your app, which should look something like this:

{
    "version": "2.0",
    "extensions": {
        "http": {
            "routePrefix": ""
        }
    }
}

The version field directly references the version of the functions runtime you’re using. If it says 2.0, you’re using version 2 – if it’s 1 or missing completely, you’re on the first version of the runtime.

Similarly, you can also view the runtime for your app through the Azure Portal – open your app in the portal, then navigate to “Function app settings”, where you’ll see the “Runtime version” setting.

Categories: Azure, Azure Functions

Azure Functions App vs Functions

A common question asked by newcomers to the world of Azure Functions is: what’s the difference between an Azure functions app and a function? Are they the same thing?

In short, there is a difference. You can think of a functions app as a workspace that contains one or more functions. It’s essentially a wrapper (or, in .NET terms, a project) holding all your Azure functions in one place and allowing for easy deployment and management of them as one.

On the other hand, a function refers to a single, isolated piece of functionality. Typically in .NET this results in a class that performs one discrete piece of behaviour.
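As a rough illustration (the names below are hypothetical, not taken from any particular app), an individual function inside a .NET functions app might look something like this:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    // A single function: one trigger, one discrete piece of behaviour
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Hello function was triggered.");
        return new OkObjectResult("Hello world");
    }
}

The functions app, by contrast, is the project that contains this class (and any others like it), along with the host.json and app settings they all share.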

There are a number of ramifications for function apps that don’t apply to individual functions. For instance, it’s at the function app level that the pricing plan, runtime version and more are set in Azure. Microsoft refers to this as providing the execution context for your functions.

So keep in mind that when you first create a function resource in the Azure Portal, you’re creating a functions app. Once you navigate inside that resource, you can then start to add individual functions.

Categories: Azure

What’s an Azure Service Principal and Managed Identity?

In this post, we’ll take a brief look at the difference between an Azure service principal and a managed identity (formerly referred to as a Managed Service Identity or MSI).

What is a service principal or managed service identity?

Let’s get the basics out of the way first. In short, a service principal can be defined as:

An application whose tokens can be used to authenticate and grant access to specific Azure resources from a user-app, service or automation tool, when an organisation is using Azure Active Directory.

In essence, service principals help us avoid having to create fake users in Active Directory in order to manage authentication when we need to access Azure resources.

Stepping back a bit, it’s important to remember that service principals are defined on a per-tenant basis. This is different to the application in which principals are created – the application sits across every tenant.

Managed identities are often spoken about when talking about service principals, and that’s because they’re now the preferred approach to managing identities for apps and automation access. In effect, a managed identity is a layer on top of a service principal, removing the need for you to manually create and manage service principals directly.

There are two types of managed identities:

  • System-assigned: These identities are tied directly to a resource, and abide by that resource’s lifecycle. For instance, if the resource is deleted then the identity will be removed with it
  • User-assigned: These identities are created independently of a resource, and as such can be shared between different resources. Removing them is a manual process, done whenever you see fit

One of the problems with managed identities is that, for now, only a limited subset of Azure services support using them as an authentication mechanism. If the service you use doesn’t support managed identities, then you’ll need to continue to manually create your service/security principals.

So what’s the difference?

Put simply, the difference between a managed identity and a service principal is that a managed identity manages the creation and automatic renewal of a service principal on your behalf.
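To make that concrete, here’s a minimal sketch of what consuming a managed identity can look like from application code – in this case reading a Key Vault secret using the Azure.Identity and Azure.Security.KeyVault.Secrets packages. The vault URL and secret name are hypothetical; the point is simply that no client secret or certificate appears anywhere in the code or configuration:

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class Program
{
    static async Task Main()
    {
        // DefaultAzureCredential picks up the managed identity when running in Azure,
        // and falls back to your developer credentials when running locally
        var client = new SecretClient(
            new Uri("https://my-vault.vault.azure.net/"),   // hypothetical vault
            new DefaultAzureCredential());

        KeyVaultSecret secret = await client.GetSecretAsync("my-secret");   // hypothetical secret name
        Console.WriteLine(secret.Value);
    }
}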

Update 31/1/20: If you’re using Azure Web Apps, check out our new post on using managed identities with deployment slots

Categories: Azure DevOps

Using key vault values from variable groups in Azure DevOps pipeline tasks

Earlier this week we had a post about how you can easily access secrets stored within Azure Key Vault in an Azure DevOps pipeline task, using the Key Vault Task. One other way you can achieve this same functionality is by using a variable group, and in this post we’re going to show you how.

Why would you use a variable group instead of the key vault task? If you know you require access to the secrets across multiple stages within a pipeline, using a group allows you to easily manage access without having to include the task in every single stage, by scoping the group to the release or to specific stages.

However, if only one stage requires access to the secrets it might be easier to just include the task in that particular stage and follow our previous post.

Getting started

First, you’ll need to set up your key vault so that your service principal or managed identity has GET access to your vault. Then, follow our previous post on creating a variable group with a key vault to set up DevOps for this integration.

Once you’ve done the above, it’s time to get started. Navigate to your release pipeline in Azure DevOps.

Connecting the variable group

From your release pipeline, click “Edit”. Go to the “Variables” tab at the top of the screen, and choose “Variable groups”.

Select the variable group you created in the above section, scope it to either the entire release or a specific stage of your pipeline (depending on where you need access to the secrets – keeping in mind that the more restricted you can make it, the better from a security perspective) and click “Save”.

Now from within your tasks, if you need to reference the secrets you can use the $(your-variable-name) syntax. For instance, in the Azure Function App Deploy task, if we wanted to specify an app setting under “Application and Configuration Settings” we could use the following syntax:

-NOTIFICATIONHUB_CONNECTIONSTRING $(your-connection-string-secret-identifier)

The benefit of this is that the syntax is identical to that used with the key vault task (they both set the secrets as task variables) so if you need to, you can swap between using a variable group or key vault task with ease.

Summary

In summary, this is an alternative approach to using the key vault task in a pipeline. Depending on your needs, this may be a better approach than the one in our previous post, but your mileage may vary.

Categories: .NET Core

Throw or throw ex in C#?

One common question that comes up when working with exceptions in C# is whether you should re-throw an existing exception by specifying the exception, or just use the throw keyword on its own in the catch block.

In most cases, it’s best to rethrow the existing exception, such as below:

try
{
    var networkResponse = await someNetworkService.GetAllPosts();
}
catch (Exception ex)
{
    Log.Warning("An exception occurred", ex);
    // Now let's rethrow this same exception, preserving its stack trace
    throw;
}

By doing so, you ensure that the original stack trace for the exception is maintained. If you were to instead do something like this:

try
{
    var networkResponse = await someNetworkService.GetAllPosts();
}
catch (Exception ex)
{
    Log.Warning("An exception occurred", ex);
    // Now let's throw ex again – this resets the stack trace
    throw ex;
}

You’ll now see that the exception stack trace only goes as far back as this line of code – not to the raw source of the exception.

In short, that’s why it’s normally better to just call throw again, instead of specifying the exception when re-throwing.

Categories: Azure DevOps

Accessing Key Vault secrets in an Azure DevOps pipeline task

In this post, we’ll take a look at one option for accessing Azure Key Vault secrets from within an Azure DevOps release pipeline.

Want to secure your Azure DevOps application secrets in Key Vault? Find out how in our short e-book guide on Amazon

Setting up your Azure Key Vault

Before you can add the secret to your pipeline, you first need to make sure that there’s a key vault set up in Azure, and that you have given either your pipeline’s managed service identity or account GET access to the secrets within the vault. Note that this is set under your key vault’s “Settings” → “Access Policies” section.

If you haven’t already, add your secrets into the “Secrets” section, and take note of the names used for the secrets – you’ll need these a bit later on.

Adding the Azure Key Vault pipeline task

Now that you’ve got your secrets stored and accessible from key vault, it’s time to configure the DevOps pipeline. Open the visual pipeline editor for your pipeline by clicking “Edit”, and choose the stage in which you need access to the secrets.

Then, click the “+” button next to “Run on agent” (or whatever the first step of your pipeline may be) and search for “Azure Key Vault”. Note that you’ll want to add the “Download Key Vault Secrets” task that appears first – this is the official task from Microsoft.

Then, click on the new “Azure Key Vault” task you just added to your pipeline, and set a display name. Choose the Azure subscription in which you created your key vault, and then select from the “Key vault” dropdown list the name of the key vault you stored your secrets in.

If you can’t see it listed, it’s possible your managed service identity doesn’t have the correct permission, so be sure to check it’s been added to your key vault with GET permission.

Now drag the key vault task up your pipeline task list (if applicable) so that it runs before any other task that requires a secret stored within your key vault.

And that’s it!

Accessing a Key Vault secret from other tasks

Now other tasks can access the secrets, by using a task variable. The key vault task will make all your secrets available using the $(<your secret name here>) syntax, such as $(api-secret).

Summary

This is just one of the various ways that you can access key vault secrets from a DevOps pipeline task. Stay tuned for more posts where we explore other ways of accessing the secrets in tasks.

If you want a more in-depth guide and comparison between alternative approaches to storing secrets in DevOps, you can also get our book on Amazon today.

Categories: Azure, Azure Functions

How to add Application Insights to an Azure function

Today we’re going to look at how easy it is to add Azure Application Insights (part of Azure Monitor) to an Azure Function.

What is Application Insights?

In short, Application Insights is a performance and monitoring tool that forms part of the Azure Monitor suite. It allows you to access detailed telemetry on requests made to your function, while observing real-time performance and failures. It works across App Service apps, Azure Functions and more – practically anywhere you can install the available App Insights SDKs.

You can find the full pricing information here, but you effectively get charged per GB of ingested data (5GB included free per month).

How can I enable this for my Azure Function App?

For Azure Functions, it’s super simple to enable App Insights. You don’t even need to add the SDKs – Microsoft handles that for you.

First, you’ll need to create an Application Insights instance. Open the Azure Portal, click “Create a resource” and search for “Application Insights”.

Enter a friendly and unique name for your instance, and depending on the language you write your functions in, choose General or Node.js for the “Application Type”.

Choose a subscription and resource group (ideally matching those of your Azure Function app) and a location. Note that App Insights is only supported in a limited number of regions worldwide, so you might need to choose a different region to that of your functions app.

Once that’s been created, open the App Insights resource. On the “Overview” page, you’ll see an “Instrumentation Key” in the top section. Copy this – it’s essentially the identifier that will allow your function app to report data back into this App Insights resource.

Now navigate to your function app in the Azure Portal. Select the app name in the sidebar, and choose “Platform Features” on the right-hand side of the screen.

Next, open “Application settings”. Scroll down until you see the “Application settings” header. At the bottom of this section, you’ll see an “Add new setting” button – click this to add a new row.

Now in the “App Setting Name” column, type “APPINSIGHTS_INSTRUMENTATIONKEY”. In the “Value” column, paste the “Instrumentation Key” that we copied in the earlier step from your App Insights resource. Scroll back to the top of the page, and click “Save”.
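Once that setting is saved, telemetry starts flowing without any code changes – requests, failures, and anything you write to the ILogger that Functions injects into your functions all end up in App Insights. As a minimal sketch (the function below is a hypothetical example, not part of the steps above), the log line in this timer-triggered function will appear as trace telemetry:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

public static class HeartbeatFunction
{
    // Runs every five minutes; the log line below shows up as a trace in
    // App Insights once APPINSIGHTS_INSTRUMENTATIONKEY is configured
    [FunctionName("Heartbeat")]
    public static void Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        ILogger log)
    {
        log.LogInformation("Heartbeat ran at {RunTime}", DateTime.UtcNow);
    }
}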

Summary

And that’s it! That’s how easy it is to enable App Insights for Azure Functions. If you want to see metrics coming through straight away, navigate to your App Insights resource and click “Live Metrics”. Make an API request to your function (assuming it’s HTTP based – otherwise trigger it however you can) and you should see the request come through instantly.