You’ve probably heard the phrase “serverless” being thrown around over the last few months. In this post, I’ll explain how Functions enable serverless code execution in Azure.
Think about the different ways we have for running code in Azure:
- Virtual machines: We pay for every minute the machine is running, which in practice means paying per month for the machine. There is also a substantial cost of ownership.
- App Service Plan: Web apps, mobile apps, and WebJobs let us execute web/mobile application code, but these are really virtual machines under the covers, with a (realistically) per-month cost.
- Containers: We can put more services on a single machine or cluster of machines, but there is still infrastructure that must be orchestrated and paid for.
- Service Fabric: When you deploy a Service Fabric cluster, you are actually deploying a scale set of virtual machines.
With every one of those options, there is some infrastructure, with the associated infrastructure and/or ownership costs. If I travel back in time to when I was a programmer (and Visual C++ came on 20-plus diskettes), I didn’t want to care about where my code ran. I just wanted my code to run. Until recently, something had to be deployed to run my code: I had to spec that something, pay for that something on a per-month basis, and maintain that something. That’s a lot of work that distracts me from my job: writing code.
Azure introduced Functions, which is similar to AWS Lambda, to give developers something they wanted: a way to run code, on demand, without requiring any infrastructure. That code can be written in C#, F#, or Node.js.
A developer can deploy a piece of code as a Function in Azure. The Function is triggered in some way to spawn an instance of execution. One can view Functions as a successor to WebJobs, which are based on an App Service Plan. Triggers can include, but are not limited to:
- A schedule: Do something every X minutes.
- A Logic App: Perform some action as part of a larger workflow.
- An event from Azure Event Hubs: Subscribe to events from an Event Hub.
- An Azure blob or queue: Do something when a blob or message is written to a specific storage account.
- Something happening in CosmosDB: Automatically process new data.
Once the Function is triggered, a new instance is started in response to the trigger and its optional input. The more triggers that happen at once, the more instances of your Function are executed … so you get scale-out without having to deploy any clusters or move any sliders … it just happens!
So far, I’ve mentioned two of the big reasons to use Functions:
- You don’t need to build any infrastructure.
- Instant scale out based on demand (triggers).
Functions offer pay-per-second computing. In theory, lightweight tasks deployed as Functions should reduce costs. However, if those tasks are constantly running, one has to wonder if it would be more cost-effective to deploy something like Service Fabric, which offers a similar model (microservices) with controlled cost (scale set rules).
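A back-of-the-envelope model makes this trade-off concrete. The rates below are made-up placeholders, not real Azure prices; the point is only the shape of the comparison between per-second billing and a flat monthly infrastructure cost.

```javascript
// Cost of pay-per-second compute for a given workload.
// All rates are illustrative assumptions, not actual pricing.
function functionsMonthlyCost(executionsPerMonth, secondsPerExecution, ratePerSecond) {
    return executionsPerMonth * secondsPerExecution * ratePerSecond;
}

const flatInfraCostPerMonth = 150;  // assumed: an always-on VM or cluster
const ratePerSecond = 0.00002;      // assumed: per-second compute rate

// Sporadic workload: 100,000 short executions a month.
const sporadic = functionsMonthlyCost(100000, 0.5, ratePerSecond);

// Break-even: how many seconds of compute per month before the flat
// infrastructure cost would have been the same.
const breakEvenSeconds = flatInfraCostPerMonth / ratePerSecond; // ≈ 7,500,000 s
```

Plug in real prices and your actual execution profile; the cheaper option depends entirely on how sporadic the workload is.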
A significant advantage is that programmers can become more effective:
- Code can be deployed more quickly.
- Improvements can be introduced more quickly.
- A microservices approach is used, making the architecture cleaner.
- The business gains a clearer understanding of the “signals” into its systems, because those signals must be implemented as triggers.
Part of a Larger System
I guess, in theory, it would be possible to write an entire system using just Functions. However, I think that Functions will have its place in the larger ecosystem alongside web/mobile apps, API gateways, Service Fabric, containers, and virtual machines. That place will be a high-value position that adds value at (hopefully) a low per-second cost.