Categories
.NET Core

Understanding the Newtonsoft JSON Token

One of the concepts worth understanding when writing a custom converter for the popular Newtonsoft JSON framework is that of the JsonToken, and the role it plays in helping you understand the JSON being read.

The JsonToken documentation for Newtonsoft is relatively sparse – in part because once you understand it, it’s a relatively simple concept.

Effectively, when you’re attempting to deserialise a string representation of a JSON object, such as in the ReadJson method of a custom JsonConverter, the JsonToken represents each element or component (token if you prefer) of the object as it’s deserialised.

So for example, if you have the following JSON and implement the ReadJson method:

{
    "message": "Hello world"
}

When you first check the reader.TokenType value you’ll get JsonToken.StartObject back – this corresponds to the opening {, indicating that you’re reading the start of an object.

If you then call reader.ReadAsync() and check reader.TokenType again, you’ll now get JsonToken.PropertyName in response. This is because the next element within the JSON string is the property name (in other words, the “message” key).

Lastly, if you call reader.ReadAsync() once more, you’ll get back JsonToken.String, as the value of the property is a string (“Hello world”). Newtonsoft will try to parse the value into a representation of a type – so, for example, if the value was a number you’d get back Integer or Float, and if it was a Boolean you’d get back Boolean.
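To make that concrete, here’s a minimal sketch of a custom converter reading the JSON above. The Greeting and GreetingConverter names are made up for this example, and the synchronous Read() calls stand in for ReadAsync():

using System;
using Newtonsoft.Json;

public class Greeting
{
    public string Message { get; set; }
}

public class GreetingConverter : JsonConverter<Greeting>
{
    public override Greeting ReadJson(JsonReader reader, Type objectType, Greeting existingValue,
        bool hasExistingValue, JsonSerializer serializer)
    {
        // At this point reader.TokenType is JsonToken.StartObject (the opening {)
        var result = new Greeting();

        while (reader.Read() && reader.TokenType != JsonToken.EndObject)
        {
            // First read: JsonToken.PropertyName ("message")
            if (reader.TokenType == JsonToken.PropertyName && (string)reader.Value == "message")
            {
                // Next read: JsonToken.String ("Hello world")
                reader.Read();
                result.Message = (string)reader.Value;
            }
        }

        return result;
    }

    public override void WriteJson(JsonWriter writer, Greeting value, JsonSerializer serializer)
        => throw new NotImplementedException();
}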

Categories
Azure Azure Functions

Which Azure Functions runtime am I using?

Microsoft currently support two versions of the Azure Functions runtime – version 1 and version 2. This post will look at the main changes between the two versions, and show you how you can check which runtime you’re using.

What are the key differences between versions?

Version 1 of the runtime was introduced back in 2016, when functions were first announced. At launch it supported JavaScript, C#, Python and PHP.

In late September 2018, Microsoft made the Functions 2 runtime generally available, and with it brought a number of significant development, deployment and performance improvements – including the ability to use the runtime anywhere, whether on a Mac or Linux machine. It’s worth noting that as of March 2019, Python support for Functions 2 is still in preview.

There were significant under-the-hood changes made to improve performance – .NET Core 2.1 support was added, and the functions host process moved from the .NET Framework to .NET Core.

Big changes were made to the way that bindings work – as part of 2.0 they became extensions instead of being bundled into the runtime itself (aside from HTTP and timer support, which are deemed core to the experience). This means that some bindings haven’t yet made it over to the new 2.0 runtime – your mileage may vary, but the Microsoft bindings documentation has a clear comparison between the two.

There are also a bunch of new integrations with Functions 2 – for instance, Application Insights is supported with minimal configuration required, while you can easily use the Deployment Center to add code from other sources, such as GitHub repos, into your functions app.

How can I tell what runtime I’m using?

If you’re running a functions app on a Mac or Linux machine – there’s a fair chance you’re using the 2.0 runtime, as that’s what the functions command line tools support. You can verify this by opening the host.json file within the root directory of your app, which should look something like this:

{
    "version": "2.0",
    "extensions": {
        "http": {
            "routePrefix": ""
        }
    }
}

The version field directly references the version of the functions runtime you’re using. If it says 2.0, you’re using version 2 – if it’s 1 or missing completely, you’re on the first version of the runtime.

Similarly, you can also view the runtime for your app through the Azure Portal – open your app in the portal, then navigate to “Function app settings”, where you’ll find the “Runtime version” setting.

Categories
Azure Azure Functions

Azure Functions App vs Functions

A common question asked by newcomers to the world of Azure Functions is: what’s the difference between an Azure functions app and a function? Are they the same thing?

In short, there is a difference. You can think of the functions app as a workspace that contains one or more functions. It’s essentially a wrapper (or, in .NET terms, a project) holding all your Azure functions in one place and allowing them to be deployed and managed as one.

On the other hand, a function refers to a single, isolated piece of functionality. Typically in .NET this results in a class that performs one discrete piece of behaviour.

There are a number of settings that apply at the function app level rather than to individual functions. For instance, it’s at the function app level that the pricing plan, runtime version and more are set in Azure. Microsoft refers to this as providing the execution context for your functions.

So keep in mind that when you first create a function resource in the Azure Portal, you’re creating a functions app. Once you navigate inside that resource, you can then start to add individual functions.
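As a rough illustration of the relationship (the class, function and route names here are hypothetical), a single functions app project in .NET might hold several functions side by side, all sharing the same settings and deployed together as one app:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class OrderFunctions
{
    // Function 1: one discrete piece of behaviour
    [FunctionName("CreateOrder")]
    public static IActionResult CreateOrder(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "orders")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Creating an order");
        return new OkResult();
    }

    // Function 2: lives in the same functions app, sharing its pricing plan and runtime version
    [FunctionName("GetOrders")]
    public static IActionResult GetOrders(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "orders")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Listing orders");
        return new OkObjectResult(new[] { "order-1", "order-2" });
    }
}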

Categories
Azure

What’s an Azure Service Principal and Managed Identity?

In this post, we’ll take a brief look at the difference between an Azure service principal and a managed identity (formerly referred to as a Managed Service Identity or MSI).

What is a service principal or managed service identity?

Let’s get the basics out of the way first. In short, a service principal can be defined as:

An application whose tokens can be used to authenticate and grant access to specific Azure resources from a user-app, service or automation tool, when an organisation is using Azure Active Directory.

In essence, service principals help us avoid having to create fake users in Active Directory in order to manage authentication when we need to access Azure resources.

Stepping back a bit, it’s important to remember that service principals are defined on a per-tenant basis. This is different to the application from which principals are created – the application sits across every tenant.

Managed identities are often spoken about when talking about service principals, and that’s because they’re now the preferred approach to managing identities for apps and automation access. In effect, a managed identity is a layer on top of a service principal, removing the need for you to manually create and manage service principals directly.

There are two types of managed identities:

  • System-assigned: These identities are tied directly to a resource, and abide by that resource’s lifecycle. For instance, if that resource is deleted then the identity is removed along with it
  • User-assigned: These identities are created independently of a resource, and as such can be shared between different resources. Removing them is a manual process you perform whenever you see fit

One of the problems with managed identities is that, for now, only a limited subset of Azure services support using them as an authentication mechanism. If the service you use doesn’t support managed identities, then you’ll need to continue to create and manage your service principals manually.

So what’s the difference?

Put simply, the difference between a managed identity and a service principal is that a managed identity manages the creation and automatic renewal of a service principal on your behalf.
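As a rough sketch of what that buys you in practice (using the Azure.Identity library; the Key Vault scope below is just an example), code running on a resource with a managed identity enabled can acquire a token for the underlying service principal without any secret stored in code or configuration:

using System;
using Azure.Core;
using Azure.Identity;

class TokenExample
{
    static void Main()
    {
        // On an Azure resource with a managed identity enabled, this uses the identity's
        // underlying service principal - no client secret for you to create, store or rotate.
        var credential = new ManagedIdentityCredential();

        AccessToken token = credential.GetToken(
            new TokenRequestContext(new[] { "https://vault.azure.net/.default" }));

        Console.WriteLine($"Token expires at {token.ExpiresOn}");
    }
}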

Update 31/1/20: If you’re using Azure Web Apps, check out our new post on using managed identities with deployment slots

Categories
Azure DevOps

Using key vault values from variable groups in Azure DevOps pipeline tasks

Earlier this week we had a post about how you can easily access secrets stored within Azure Key Vault in an Azure DevOps pipeline task, using the Key Vault Task. One other way you can achieve this same functionality is by using a variable group, and in this post we’re going to show you how.

Why would you use a variable group instead of the key vault task? If you know you’ll require access to the secrets across multiple stages within a pipeline, using a group allows you to easily manage access without having to include the task in every single stage – you simply scope the group to the release or to specific stages.

However, if only one stage requires access to the secrets it might be easier to just include the task in that particular stage and follow our previous post.

Getting started

First you’ll need to set up your key vault so that your service principal or managed identity has GET access to your vault. Then, follow our previous post on creating a variable group with a key vault to set up DevOps for this integration.

Once you’ve done the above, it’s time to get started. Navigate to your release pipeline in Azure DevOps.

Connecting the variable group

From your release pipeline, click “Edit”. Go to the “Variables” tab at the top of the screen, and choose “Variable groups”.

Select the variable group you created in the above section, scope it to either the entire release or a specific stage of your pipeline (depending on where you need access to the secrets – keeping in mind that the more restricted you can make it, the better from a security perspective) and click “Save”.

Now from within your tasks, if you need to reference the secrets you can use the $(your-variable-name) syntax. For instance, in the Azure Function App Deploy task, if we wanted to specify an app setting under “Application and Configuration Settings” we could use the following syntax:

-NOTIFICATIONHUB_CONNECTIONSTRING $(your-connection-string-secret-identifier)

The benefit of this is that the syntax is identical to that used with the key vault task (they both set the secrets as task variables) so if you need to, you can swap between using a variable group or key vault task with ease.

Summary

In summary, this is an alternative approach to using the key vault task in a pipeline. Depending on your needs, this may be a better fit than the task-based approach covered in our previous post, but your mileage may vary.

Categories
.NET Core

Throw or throw ex in C#?

One common question that comes up when working with exceptions in C# is whether you should re-throw an existing exception by specifying the exception variable, or just use the throw keyword on its own where the error occurs.

In most cases, it’s best to rethrow the existing exception, such as below:

try
{
    var networkResponse = await someNetworkService.GetAllPosts();
}
catch (Exception ex)
{
    Log.Warning("An exception occurred", ex);
    // Now let's rethrow this same exception
    throw;
}

By doing so, you ensure that the original stack trace for the exception is maintained. If you were to instead do something like this:

try
{
    var networkResponse = await someNetworkService.GetAllPosts();
}
catch (Exception ex)
{
    Log.Warning("An exception occurred", ex);
    // Now let's throw ex again
    throw ex;
}

You’ll now see that the exception stack trace only goes as far back as this line of code – not to the raw source of the exception.

In short, that’s why it’s normally better to just use throw on its own, instead of specifying the exception when re-throwing.

Categories
Azure DevOps

Accessing Key Vault secrets in an Azure DevOps pipeline task

In this post, we’ll take a look at one option for accessing Azure Key Vault secrets from within an Azure DevOps release pipeline.

Want to secure your Azure DevOps application secrets in Key Vault? Find out how in  our short e-book guide on Amazon

Setting up your Azure Key Vault

Before you can add the secret to your pipeline, you first need to make sure that there’s a key vault set up in Azure, and that you have given either your pipeline’s managed service identity or your account GET access to the secrets within the vault. Note that you need to ensure this is set under your key vault’s “Settings” → “Access Policies” section.

If you haven’t already, add your secrets into the “Secrets” section, and take note of the names used for the secrets – you’ll need these a bit later on.

Adding the Azure Key Vault pipeline task

Now that you’ve got your secrets stored and accessible from key vault, it’s time to configure the DevOps pipeline. Open the visual pipeline editor for your pipeline by clicking “Edit”, and choose the stage in which you need access to the secrets.

Then, click the “+” button next to “Run on agent” (or whatever the first step of your pipeline may be) and search for “Azure Key Vault”. Note that you’ll want to add the “Download Key Vault Secrets” task that appears first – this is the official task from Microsoft.

Then, click on the new “Azure Key Vault” task you just added to your pipeline, and set a display name. Choose the Azure subscription in which you created your key vault, and then select from the “Key vault” dropdown list the name of the key vault you stored your secrets in.

If you can’t see it listed, it’s possible your managed service identity doesn’t have the correct permission, so be sure to check it’s been added to your key vault with GET permission.

Now drag the key vault task up your pipeline task list (if applicable) so that it runs before any other task that requires a secret stored within your key vault.

And that’s it!

Accessing a Key Vault secret from other tasks

Now other tasks can access the secrets by using a task variable. The key vault task will make all your secrets available using the $(<your secret name here>) syntax, such as $(api-secret).

Summary

This is just one of the various ways that you can access key vault secrets from a DevOps pipeline task. Stay tuned for more posts where we explore other ways of accessing the secrets in tasks.

If you want a more in-depth guide and comparison between alternative approaches to storing secrets in DevOps, you can also get our book on Amazon today.

Categories
Azure Azure Functions

How to add Application Insights to an Azure function

Today we’re going to look at how easy it is to add Azure Application Insights (part of Azure Monitor) to an Azure Function.

What is Application Insights?

In short, Application Insights is a performance and monitoring tool that forms part of the Azure Monitor suite. It allows you to access detailed telemetry on requests made to your function, while observing real-time performance and failures. It works across App Service apps, Azure Functions and more – practically anywhere you can install the available App Insights SDKs.

You can find the full pricing information here, but you effectively get charged per GB of ingested data (5GB included free per month).

How can I enable this for my Azure Function App?

For Azure Functions, it’s super simple to enable App Insights. You don’t even need to add the SDKs – Microsoft handles that for you.

First, you’ll need to create an Application Insights instance. Open the Azure Portal, click “Create a resource” and search for “Application Insights”.

Enter a friendly and unique name for your instance, and depending on the language you write your functions in, choose General or Node.js for the “Application Type”.

Choose a subscription and resource group (ideally matching those of your Azure Function app) and a location. Note that App Insights is only supported in a limited number of regions worldwide, so you might need to choose a different region to that of your functions app.

Once that’s been created, open the App Insights resource. On the “Overview” page, you’ll see an “Instrumentation Key” in the top section. Copy this – it’s essentially the identifier that will allow your function app to report data back into this App Insights resource.

Now navigate to your function app in the Azure Portal. Select the app name in the sidebar, and choose “Platform Features” on the right-hand side of the screen.

Next, open “Application settings”. Scroll down until you see the “Application settings” header. At the bottom of this section, you’ll see an “Add new setting” button – click this to add a new row.

Now in the “App Setting Name” column, type “APPINSIGHTS_INSTRUMENTATIONKEY”. In the “Value” column, paste the “Instrumentation Key” that we copied in the earlier step from your App Insights resource. Scroll back to the top of the page, and click “Save”.
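With the instrumentation key in place, any logging you already do through the ILogger that Functions injects will flow through to App Insights. As a rough sketch (the function name below is made up, and this assumes a precompiled C# function), no extra SDK code is required:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class PingFunction
{
    [FunctionName("Ping")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
        ILogger log)
    {
        // This trace (along with the request and dependency telemetry the functions host
        // collects automatically) ends up in the App Insights resource you linked above.
        log.LogInformation("Ping received from {Host}", req.Host.Value);

        return new OkObjectResult("pong");
    }
}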

Summary

And that’s it! That’s how easy it is to enable App Insights for Azure Functions. If you want to see metrics coming through straight away, navigate to your App Insights resource and click “Live Metrics”. Make an API request to your function (assuming it’s HTTP based – otherwise trigger it however you can) and you should see the request come through instantly.

Categories
Azure Azure Functions

Create an Azure Function App using a Mac

In this post, we’ll take a look at how you can quickly create an Azure Function when using a Mac. In particular, we’ll be using the Azure Functions command line tools to create our functions app.

Getting started

Before you can create a function, you’ll need to install the Azure Functions command line tools. Before you can do that, though, you need to make sure that you have Homebrew installed. If you’re not familiar with it, Homebrew (or Brew) is a package manager for the Mac – similar to npm for Node.js projects.

Once you’ve installed Homebrew, make sure you have the latest .NET Core SDK installed. If you’re not sure, open Terminal and run:

dotnet --version

If you do get a version back, make sure it’s at least 2.0.0. If you get an error, you’ll need to install it using the following Brew commands:

brew tap caskroom/cask
brew cask install dotnet

Run the dotnet --version command again to ensure the install was successful.

Now we need to install the Azure Function Core Tools:

brew tap azure/functions
brew install azure-functions-core-tools

Once the install has finished, you’re ready to create your first function!

Creating the function app

Now that we’ve got all the dependencies installed, it’s time to create our function app. Open Terminal, and navigate to a directory where you’d like your app to be created (in our case, we’re going to create it from the root Documents folder).

cd ~/Documents

Now let’s get started creating the app! First, let’s initialise the project:

func init MyFirstFunctionApp --source-control true --worker-runtime dotnet

This will create a folder called MyFirstFunctionApp inside your Documents folder. It will be initialised as a git repository (the source-control parameter) and use the dotnet runtime. If you’d prefer a Node.js function or a Python function, you can replace dotnet with either node or python respectively.

You’ll now have your project – but at this point, it’ll be empty with no functions. To add your first function, we need to run the following command to generate a function called MyFirstFunction:

 func new --name MyFirstFunction

It’ll then ask you what type of template you’d like to use. A template is effectively a trigger – what will cause your function to start running? We’ll choose #2 (HttpTrigger) for now, as this will enable you to easily run your function via curl, Postman or by visiting a page in your web browser. You can provide this in the previous command as --template, or select it from the list.
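For reference, the function that the dotnet HttpTrigger template generates looks roughly like the below – treat this as a trimmed sketch rather than the exact template output:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class MyFirstFunction
{
    [FunctionName("MyFirstFunction")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a request.");

        // Reads the ?name= query string parameter we'll pass in below
        string name = req.Query["name"];

        return name != null
            ? (ActionResult)new OkObjectResult($"Hello, {name}")
            : new BadRequestObjectResult("Please pass a name on the query string");
    }
}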

You’ll now have your first function and function app created! You can run it by entering:

func start

Once it’s started, you should see a message with the function URL.

The sample function expects a name parameter to be provided, so open up your web browser and go to http://localhost:7071/api/MyFirstFunction?name=Bob. You should get a response like:

Hello, Bob

Summary

So that’s how easy it is to get started with an Azure Function app on a Mac. Microsoft has done a lot of work around the tooling to make it as straightforward as possible.

Categories
Azure

What’s the difference between an Azure Service Bus queue and topic?

Starting out with an Azure Service Bus? It can be confusing trying to work out whether you should use a queue or a topic. In this post, we’ll try to break down the difference between the two and which one you should use when.

What is a service bus queue or topic?

When configuring a service bus, you have two options for configuring how messages are processed – a queue, or a topic.

Let’s start with a queue. A queue has a one-to-one relationship between each message and its consumer, and is a way to ensure reliable first-in-first-out (FIFO) delivery to one processor from many sources. For example, you might have one WebJob that processes requests that get placed in the queue from many different sources. In most cases, the processor (or receiver) receives the messages in the same order that they were placed on the queue. The key to a queue is that each message from the queue is only ever processed by a single consumer.

In contrast, a topic follows the publish/subscribe pattern and can have many consumers, each of whom can subscribe to receive notifications when a message is sent to the topic. In effect, this means you can have a one-to-many relationship between messages and consumers, although this does depend on how you configure your filter rules (you can opt to have each message delivered to only one subscriber if you wish).
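As a sketch of how the two models look in code (using the Azure.Messaging.ServiceBus SDK as an example; the connection string, queue, topic and subscription names below are placeholders), the main difference is simply where you send and receive from:

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class ServiceBusSketch
{
    static async Task Main()
    {
        await using var client = new ServiceBusClient("<connection-string>");

        // Queue: one-to-one - each message is processed by a single consumer.
        ServiceBusSender queueSender = client.CreateSender("orders-queue");
        await queueSender.SendMessageAsync(new ServiceBusMessage("new order"));

        ServiceBusReceiver queueReceiver = client.CreateReceiver("orders-queue");
        ServiceBusReceivedMessage fromQueue = await queueReceiver.ReceiveMessageAsync();
        await queueReceiver.CompleteMessageAsync(fromQueue); // PeekLock: settle once processed

        // Topic: publish once, and every matching subscription gets its own copy.
        ServiceBusSender topicSender = client.CreateSender("orders-topic");
        await topicSender.SendMessageAsync(new ServiceBusMessage("new order"));

        ServiceBusReceiver subscriptionReceiver = client.CreateReceiver("orders-topic", "audit-subscription");
        ServiceBusReceivedMessage fromSubscription = await subscriptionReceiver.ReceiveMessageAsync();
        await subscriptionReceiver.CompleteMessageAsync(fromSubscription);
    }
}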

What’s the difference?

In effect, the difference between a queue and topic can be described as follows:

  • A queue can only be listened to by one consumer, whereas a topic can have multiple subscribers.
  • Topic subscriptions can enable powerful filtering capabilities, such that you can define certain parameters that messages must meet in order to be copied into a subscription’s virtual queue. This can be handy if you need to handle different types of messages, or messages with variable data structures, within the same topic.
  • Topics can be more scalable than queues, as more than one consumer can listen for messages. If you need to scale a queue, you’re still limited to having the one consumer listening, so aside from horizontal scaling you’re out of luck.
  • Both queue and topic subscriptions support PeekLock and ReceiveAndDelete modes, so you can ensure a message is processed before being dismissed if required.

When should I use a topic or queue?

This is a trickier question to answer, and ultimately depends on how much you’re willing to spend on your service bus. If you’re using the basic tier, then you only have the one option – queues. Topics are only supported in the standard and premium tier.

By moving to the standard or premium tier, you incur an hourly base charge on top of the per-million-operations fee that is also charged in the basic tier.

Pricing aside, it also depends on the type of data that you’re ingesting into your service bus. If it’s time-sensitive and high volume, topics would be the ideal approach, as you can more easily scale your downstream consumers to handle the larger volume of messages.

If, on the other hand, you receive a relatively stable or low volume of messages which aren’t necessarily time-critical (i.e. they may sit on a queue for some time until the processor reaches them if the load is higher than anticipated), then you can probably get away with using a queue.