Categories: Azure, Azure Functions

Create an Azure Function App using a Mac

In this post, we’ll take a look at how you can quickly create an Azure Function when using a Mac. In particular, we’ll be using the Azure Functions command line tools to create our function app.

Getting started

Before you can create a function, you’ll need to install the Azure Functions Core Tools. Before you can do that, you need to make sure that you have Homebrew installed. If you’re not familiar with it, Homebrew (brew) is a package manager for macOS, similar to npm for Node.js projects.

Once you’ve installed Homebrew, make sure you have the latest .NET Core SDK installed. If you’re not sure, open Terminal and run:

dotnet --version

If you do get a version back, make sure it’s at least 2.0.0. If you get an error, you’ll need to install the SDK using the following Homebrew commands:

brew tap caskroom/cask
brew cask install dotnet-sdk

Run dotnet --version again to confirm the install was successful.

Now we need to install the Azure Functions Core Tools:

brew tap azure/functions
brew install azure-functions-core-tools

Once the install has finished, you’re ready to create your first function!

Creating the function app

Now that we’ve got all the dependencies installed, it’s time to create our function app. Open Terminal and navigate to a directory where you’d like your app to be created (in our case, the Documents folder).

cd ~/Documents

Now let’s get started creating the app! First, let’s initialise the project:

func init MyFirstFunctionApp --source-control true --worker-runtime dotnet

This will create a folder called MyFirstFunctionApp inside your Documents folder. It will be initialised as a git repository (the --source-control parameter) and use the dotnet worker runtime. If you’d prefer a Node.js or Python function app, you can replace dotnet with node or python respectively.
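
For example, a Node.js function app could be initialised like this (a quick sketch; the app name is just a placeholder):

func init MyFirstNodeApp --source-control true --worker-runtime node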

You’ll now have your project – but at this point, it’ll be empty with no functions. To add your first function, we need to run the following command to generate a function called MyFirstFunction:

func new --name MyFirstFunction

It’ll then ask you what type of template you’d like to use. A template is effectively a trigger: what will cause your function to start running? We’ll choose the HttpTrigger template for now, as this will enable you to easily run your function via curl, Postman or by visiting a page in your web browser. You can also provide the template in the previous command with the --template option, rather than selecting it from the list.
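
If you’d rather skip the interactive prompt entirely, passing the template looks like this (HttpTrigger is the template name accepted by recent versions of the Core Tools):

func new --name MyFirstFunction --template HttpTrigger

For a dotnet project, the generated function will look roughly like the sketch below (the exact template contents vary between Core Tools versions). It reads a name value from the query string and returns a greeting:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

namespace MyFirstFunctionApp
{
    public static class MyFirstFunction
    {
        // Runs whenever an HTTP GET or POST request hits /api/MyFirstFunction.
        [FunctionName("MyFirstFunction")]
        public static IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            // Read the "name" value from the query string.
            string name = req.Query["name"];

            return name != null
                ? (IActionResult)new OkObjectResult($"Hello, {name}")
                : new BadRequestObjectResult("Please pass a name on the query string");
        }
    }
}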

You’ll now have your first function and function app created! You can run it by entering:

func start

Once it’s started, you should see a message in the terminal listing your function’s URL.

The sample function expects a name parameter to be provided, so open up your web browser and go to http://localhost:7071/api/MyFirstFunction?name=Bob. You should get a response like:

Hello, Bob
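
If you’d rather test it from the terminal, you can hit the same endpoint with curl (the URL assumes the default local port of 7071):

curl "http://localhost:7071/api/MyFirstFunction?name=Bob"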

Summary

So that’s how easy it is to get started with an Azure Function app on a Mac. Microsoft has done a lot of work around the tooling to make it as straightforward as possible.

Categories: Azure

What’s the difference between an Azure Service Bus queue and topic?

Starting out with an Azure Service Bus? It can be confusing trying to work out whether you should use a queue or a topic. In this post, we’ll try to break down the difference between the two and which one you should use when.

What is a service bus queue or topic?

When configuring a service bus, you have two options for configuring how messages are processed – a queue, or a topic.

Let’s start with a queue. A queue has a one-to-one relationship between each message and its consumer, and is a way to ensure reliable first-in-first-out (FIFO) delivery to one processor from many sources. For example, you might have one WebJob that processes requests that get placed in the queue from many different sources. In most cases, the processor (or receiver) receives the messages in the same order that they were placed on the queue. The key to a queue is that each message from the queue is only ever processed by a single consumer.

In contrast, a topic follows the publish/subscribe pattern and can have many consumers, each of which subscribes to receive messages sent to the topic. In effect, this means you can have a one-to-many relationship between messages and consumers, although this does depend on how you configure your filter rules (you can opt to have each message delivered to only one subscriber if you wish).
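
To make the distinction concrete, here is a minimal sketch using the Azure.Messaging.ServiceBus .NET SDK. The connection string, queue, topic and subscription names are all placeholders, and it assumes those entities already exist in your namespace:

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class QueueVsTopicSketch
{
    static async Task Main()
    {
        // Placeholder connection string read from an environment variable.
        string connectionString = Environment.GetEnvironmentVariable("SERVICEBUS_CONNECTION");
        await using var client = new ServiceBusClient(connectionString);

        // Queue: each message is processed by exactly one receiver.
        ServiceBusSender queueSender = client.CreateSender("orders-queue");
        await queueSender.SendMessageAsync(new ServiceBusMessage("New order received"));

        ServiceBusReceiver queueReceiver = client.CreateReceiver("orders-queue");
        ServiceBusReceivedMessage queueMessage = await queueReceiver.ReceiveMessageAsync();
        if (queueMessage != null)
        {
            await queueReceiver.CompleteMessageAsync(queueMessage);
        }

        // Topic: every subscription receives its own copy of the message.
        ServiceBusSender topicSender = client.CreateSender("orders-topic");
        await topicSender.SendMessageAsync(new ServiceBusMessage("New order received"));

        ServiceBusReceiver billingReceiver = client.CreateReceiver("orders-topic", "billing-subscription");
        ServiceBusReceivedMessage topicCopy = await billingReceiver.ReceiveMessageAsync();
        if (topicCopy != null)
        {
            await billingReceiver.CompleteMessageAsync(topicCopy);
        }
    }
}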

What’s the difference?

In effect, the difference between a queue and topic can be described as follows:

  • Each message on a queue is only ever processed by a single consumer, whereas a topic can have multiple subscriptions, each of which receives its own copy of a message.
  • Topic subscriptions enable powerful filtering capabilities: you can define rules that messages must match in order to be copied into a subscription’s virtual queue. This can be handy if you need to handle different types of messages, or messages with variable data structures, on the same topic.
  • Topics can be more scalable than queues for fan-out scenarios, as every subscription can receive each message. With a queue, each message only ever goes to one receiver, so scaling is limited to adding more competing consumers on the same queue.
  • Both queues and topic subscriptions support PeekLock and ReceiveAndDelete receive modes, so you can ensure a message has been processed before it is removed if required (see the snippet after this list).
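
As a quick follow-on to the earlier sketch, the receive mode is chosen when the receiver is created. Continuing with the same ServiceBusClient instance (client) and placeholder queue name:

// PeekLock (the default): the message stays locked until you explicitly complete it,
// so it can be redelivered if processing fails.
ServiceBusReceiver peekLockReceiver = client.CreateReceiver("orders-queue",
    new ServiceBusReceiverOptions { ReceiveMode = ServiceBusReceiveMode.PeekLock });

// ReceiveAndDelete: the message is removed as soon as it is received, which is faster
// but risks losing the message if your consumer crashes mid-processing.
ServiceBusReceiver receiveAndDeleteReceiver = client.CreateReceiver("orders-queue",
    new ServiceBusReceiverOptions { ReceiveMode = ServiceBusReceiveMode.ReceiveAndDelete });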

When should I use a topic or queue?

This is a trickier question to answer, and ultimately depends on how much you’re willing to spend on your service bus. If you’re using the basic tier, then you only have one option: queues. Topics are only supported in the standard and premium tiers.
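
If you’re creating the namespace yourself, the tier is chosen at creation time. A quick Azure CLI sketch (the resource group, namespace, topic and subscription names are all placeholders):

az servicebus namespace create --resource-group my-rg --name my-sb-namespace --sku Standard
az servicebus topic create --resource-group my-rg --namespace-name my-sb-namespace --name orders-topic
az servicebus topic subscription create --resource-group my-rg --namespace-name my-sb-namespace --topic-name orders-topic --name billing-subscription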

By moving to the standard or premium tier, you incur an hourly base charge in addition to the per-million-operations fee that also applies to the basic tier.

Pricing aside, it also depends on the type of data that you’re ingesting into your service bus. If it’s time-sensitive and high volume, topics would be the ideal approach, as you can more easily scale your downstream consumers to handle the larger volume of messages.

If, on the other hand, you receive a relatively stable or low volume of messages which aren’t necessarily time-critical (i.e. they may sit on a queue for some time until the processor reaches them if the load is higher than anticipated), then you can probably get away with using a queue.

Categories: Azure DevOps

How to use variable groups in Azure DevOps

In a previous post looking at how to use Azure Key Vault to store secrets for a DevOps pipeline, we touched on variable groups and how they can be used. In this post, we’re going to dive a bit deeper into what a variable group is, how you can create one and how you can link variable groups into your build pipeline.

Want to secure your Azure DevOps application secrets in Key Vault? Find out how in our short e-book guide on Amazon.

What is a variable group?

A variable group is a logical collection of environment variables (or properties) used throughout your build and/or deployment pipelines. They are essentially key-value pairs that can include things like API keys, database connection strings or configuration items such as downstream API endpoint URLs.

Variable groups can store both plain text variables and secrets, which should never be committed into your source code repository. Note that if you want to use DevOps’ Azure Key Vault integration, you’ll need to create a separate variable group as you can’t mix and match Key Vault and DevOps variables in the same group.

How do I create one?

Creating a variable group is simple. Log in to Azure DevOps and navigate to “Pipelines” > “Library”. You’ll see in the top navigation bar the option to “+ Variable Group”. Clicking that will take you to a “New variable group” screen that asks for a number of properties:

  • Variable group name: A friendly name used to refer to your new variable group. Use something that has meaning to you and reflects the types of properties that will be stored within the group (e.g. My App – Production)
  • Description: Provide a short description that describes a bit more about the types of variables that should be placed within this group (eg. This group contains build settings and environment variables for production builds only)
  • Allow access to all pipelines: Enable this toggle to ensure that you can access all the variables from all of your pipelines. If you don’t enable it, you’ll need to authorise pipelines defined in YAML manually in order to let them access your properties.
  • Link secrets from an Azure key vault as variables: Enable this toggle if you want to use Azure Key Vault to store your secrets instead of DevOps. Note that by enabling this, you’ll need to then provide key vault connection details, and enter your secrets via the Azure Portal instead.

If you left the key vault integration disabled, you’ll now be able to click “Add” below the “Variables” section to begin creating new properties. Each property can have a name and a value, and can be marked as clear text or secret by clicking the padlock icon at the end of each row.

If you need to customise security permissions, click “Security” at the top of the screen. This will bring up a modal window where you can add or remove user groups from being able to access this variable group.

Once you’re done, hit “Save” and your variable group will be persisted.

How do I use it?

If you selected “Allow access to all pipelines” when creating the variable group, linking it with your build pipeline through the DevOps website is simple. Navigate to the pipeline you want to link the group with, click “Edit” in the top-left corner and then click “Variables” underneath your pipeline name.

You should see on the left-hand side “Pipeline variables” and “Variable groups”. Navigate to the latter, and click “Link variable group”. This brings up a modal that lists all the variable groups that your role has access to – if you can’t see a group that you know exists, check to make sure you have set the right permissions in the “Security” window for that group.

Choose the group you want to link with the pipeline, and then the scope that the group applies to. If the variables are used throughout the pipeline, you can make the group visible to the entire release; if you know only a few scopes require access, you can instead choose specific scopes.

Click “Link” and the variable group is now accessible from your pipeline!

Alternatively, if you’re using a YAML file to describe your pipelines and builds, you can also add a variable group by adding the following section in your YAML file:

variables:
  - group: your-new-variable-group-name
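
Once the group is linked, any variable it contains can be referenced with the usual $(variableName) macro syntax. A minimal sketch, assuming the group contains a (hypothetical) variable named ApiBaseUrl:

variables:
  - group: your-new-variable-group-name

steps:
  - script: echo "Calling $(ApiBaseUrl)"
    displayName: Show a variable from the linked group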

One thing worth noting is that when you run a pipeline, DevOps will create an immutable snapshot with the values of the variables within your group so that your release remains in the same state. This ensures that it isn’t influenced by future changes or modifications that you might make to the values, and means you can redeploy a release later if needed.

Summary

Variable groups can be powerful tools for logically grouping properties and secrets that you need for build pipelines, and are simple to configure and use.

Categories: Azure DevOps

How to store Azure DevOps secrets in Azure Key Vault

Often when creating an Azure DevOps continuous integration/deployment pipeline there’s a need to store and use app secrets, such as client keys. While you can store secrets within Azure DevOps variable groups, an alternative approach is to use Azure Key Vault instead.

Want to secure your Azure DevOps application secrets in Key Vault? Find out how in our short e-book guide on Amazon.

By using Azure Key Vault you get the same enhanced data protection that your other cloud apps enjoy, including features such as activation and expiration dates for secrets, and the DevOps integration allows for centralised management of keys used across apps or pipelines. Keep in mind that if you decide to use Key Vault, you will be charged according to the Azure Key Vault pricing for storing your secrets.

Setting up Key Vault access in Azure DevOps

Getting started is easy. Open Azure DevOps, and navigate to the project you wish to integrate with. Open the Pipelines section, and then go to Library.

If you already have secrets and values stored in an existing library, the easiest way to integrate with Key Vault is to create a separate variable group. If you instead enable Key Vault on an existing group, you’ll get a message warning you that doing so will remove all the variables currently saved within the group. This is because you can’t use Key Vault variables side by side with DevOps variables within one group.

Open the new variable group, and you should see a toggle to link secrets from an Azure key vault as variables. Turn that on, and you’ll see the option to set the Azure subscription to be used, and a field to specify a key vault name.

You’ll need to ensure that you’ve previously set up a connection to your Azure subscription within Azure DevOps, and added an Azure Resource Manager service connection (using an Azure service principal) to the resource group where your key vault is located. If you haven’t, the management links next to each field will help you set up these connections.

Once connected, pointing DevOps to your key vault is as easy as choosing the correct subscription from the drop-down list and then selecting your key vault by name in the second drop-down. If your service principal doesn’t have the get and list secret management permissions, you’ll be prompted to authorise it automatically, or you can do so manually in the Azure Portal.
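
If you’d rather grant those permissions yourself, you can do so with the Azure CLI. A quick sketch, assuming your vault uses access policies; the vault name and service principal ID are placeholders:

az keyvault set-policy --name my-key-vault --spn <service-principal-app-id> --secret-permissions get list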

Once successfully connected, you’ll then be able to see a list of secrets from your key vault by clicking the add button. Choose the secrets you want to make available to your pipeline and click OK.

Add your new variable group to your pipeline, and that’s all there is to adding key vault secrets to an Azure DevOps pipeline.
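
For pipelines defined in YAML, consuming the Key Vault-backed group looks roughly like the sketch below. The group and secret names are hypothetical, and note that secret values aren’t exposed to scripts automatically; they need to be mapped into the step’s environment explicitly:

variables:
  - group: my-key-vault-variables

steps:
  - script: echo "The client key is available to this step as an environment variable"
    env:
      CLIENT_KEY: $(MyClientKey)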