Will IT Pros Adopt Infrastructure as Code?

In the last year or so, the phrase “infrastructure as code” has started to creep across our industry. At this point, most IT admins have probably never heard the term, so it might come as a surprise when they first hear it from a boss who has just had a meeting with cloud sales reps or read one of those dreaded airline magazines full of the latest IT crazes.
Infrastructure as code is real; you can do it right now in the Microsoft world in Azure, and it is coming on-premises in 2016. I’m still trying to figure out what infrastructure as code will mean to me and my customers, and it concerns me. I’m left wondering: will I roll out infrastructure using code, will my customers try it, and will you?

Software-Defined Everything

Unless you’ve completely ignored what the IT industry has been doing for the last five years, you’ll know that “software-defined” is where the business is going. In Microsoft’s world we have:

  • Machines: Hyper-V has abstracted machines from hardware.
  • Operating systems: In Windows Server 2016, we get abstraction from the machine via operating system virtualization, with Windows Server Containers (application isolation) and Hyper-V Containers (secure application isolation).
  • Networking: Hyper-V Network Virtualization abstracts us from the physical network using NVGRE, and support is coming for VXLAN in the Windows Server 2016 Network Controller.
  • Storage: Microsoft gave us Storage Spaces and Scale-Out File Server in Windows Server 2012 to change how physical storage is deployed. Windows Server 2016 gives us Storage Spaces Direct (S2D) and hyper-convergence, and brings Azure’s blobs and tables to our data centers.

Software-defined everything is Microsoft’s vision for the data center. Don’t fool yourself into thinking that this is a unique vision; VMware has a similar perspective on the future of IT infrastructure. Hardware is being commoditized, and software is bringing it together to create a more affordable, scalable, resilient, flexible, and agile infrastructure that meets the constant demands of the business.
Think about this for a moment: what happens when you ask your network admins to provision a new network range on the WAN? Are there any objections? Is there screaming? How quickly is that request ticket closed to your satisfaction? How much money was spent on hardware to make it happen? In a software-defined world, something like that happens in minutes, not weeks. You get a glimpse of this when you define a cloud service and a virtual network in Microsoft Azure; the Azure fabric is software-defined, the heart of its network controller is coming to Windows Server 2016, and Azure-consistent management is arriving in the form of Microsoft Azure Stack (MAS).
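To make that concrete, here is a minimal sketch of what carving out a new network range can look like when the network is software-defined. It assumes the Azure Resource Manager PowerShell cmdlets (the AzureRM module), and every name and address range below is an example only:

    # Sign in and create a resource group to hold the network
    # (names, region, and address ranges are illustrative)
    Login-AzureRmAccount
    New-AzureRmResourceGroup -Name "Demo-RG" -Location "North Europe"

    # Define a subnet and a virtual network entirely in software:
    # no ticket, no switch reconfiguration, no waiting on hardware
    $subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "Web" -AddressPrefix "10.0.1.0/24"
    New-AzureRmVirtualNetwork -Name "Demo-VNet" -ResourceGroupName "Demo-RG" `
        -Location "North Europe" -AddressPrefix "10.0.0.0/16" -Subnet $subnet

That whole exercise takes minutes, and something similar will be possible on-premises when the network controller arrives in Windows Server 2016.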

Infrastructure as Code

Contemplate how you deploy a service on a traditional Hyper-V or VMware farm. I guess you start by deploying virtual machines; if you’re working at scale, you’ll deploy them from a template. Each is deployed one at a time and attached to a VLAN. You’ll manually install software, patch it, configure it, and hand it over some hours after the request, assuming that you didn’t have a large work queue to get through first. And there might be obstacles before you even start: someone else will have to deploy VLANs and storage capacity for the service, and that can take days instead of hours.
If our machines, storage, and network are all based on software that abstracts the physical infrastructure, then are all of those manual operations still required?

  • Virtual machines: If we spend less time deploying actual virtual machines, don’t we have more time to create templates for each of the various configurations that customers (internal or external) request? That will reduce the time to provision a new machine.
  • Storage: If we have software-defined storage, the storage administrator’s role is to focus on the infrastructure; tenants can consume storage accounts that are abstracted from the underlying service.
  • Network: An orchestrated NVGRE or VXLAN implementation allows a tenant to deploy their own virtual network and enable routing and selective isolation with other networks in the private or public cloud.

In other words, we can separate the services that are used by tenants from the administration and deployment of the infrastructure that enables those services. You can see this in action today by playing with a trial account in Microsoft Azure. Some of this is possible today with Windows Server 2012 R2, System Center, and the Windows Azure Pack (WAP), but things really will accelerate when Windows Server 2016 is released with MAS.

Now think about how you deploy a virtual machine, even in software-defined Microsoft Azure. Your current method of deploying a machine is probably to step through some cumbersome wizard, and if you’re like me, a large percentage of the time you make a mistake and have to delete the resulting machine and start over again. So, obviously, we want to reduce the human element, and scripting makes this possible. Forget for a moment how hard it is to find a complete PowerShell example for deploying a functioning virtual machine in Azure. You’ll write that script and … you’ll quickly notice that the script is very specific to that machine or service, and that there’s still significant human interaction.
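To illustrate the point, here is a rough sketch of that kind of script using the Azure Resource Manager cmdlets. Every name, size, image, and URI is an example, and it assumes a resource group, storage account, and network interface ($nic) already exist, which is exactly the problem: the script is welded to one specific machine.

    # A sketch only: names, sizes, images, and URIs are illustrative,
    # and a resource group, storage account, and NIC ($nic) are assumed to exist
    $cred = Get-Credential -Message "Local admin for the new VM"

    $vm = New-AzureRmVMConfig -VMName "Web01" -VMSize "Standard_A2"
    $vm = Set-AzureRmVMOperatingSystem -VM $vm -Windows -ComputerName "Web01" -Credential $cred
    $vm = Set-AzureRmVMSourceImage -VM $vm -PublisherName "MicrosoftWindowsServer" `
              -Offer "WindowsServer" -Skus "2012-R2-Datacenter" -Version "latest"
    $vm = Set-AzureRmVMOSDisk -VM $vm -Name "Web01-os" -CreateOption FromImage `
              -VhdUri "https://demostore.blob.core.windows.net/vhds/web01-os.vhd"
    $vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id

    # One very specific machine; repeat or loop through all of this for every VM in the service
    New-AzureRmVM -ResourceGroupName "Demo-RG" -Location "North Europe" -VM $vm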
Software is programmable, so why are we deploying software-defined data centers using the same methods that we’ve been using since the early 1990s? That’s the question that Microsoft has started asking us. If you are deploying 100 web servers, do you want to do it by hand, or would you rather use a predefined scale-out template, such as an Azure VM Scale Set?

Azure Resource Manager

Microsoft introduced Azure Resource Manager (ARM) to allow us to programmatically deploy the virtual infrastructure that makes up a service, without the need to learn PowerShell and customize scripts. The concept is that we build up a template using JavaScript Object Notation (JSON, pronounced jay-son), which looks a lot like XML to novices such as myself. Tools such as Visual Studio are used to create a JSON file that defines storage, networking, and virtual machines. This template can be stored in a source-controlled and version-controlled location, such as a Git repository. Tenants and administrators can deploy anything from a single machine to a huge application farm by running one of these templates from that location; this reaches out to Azure or MAS and provisions the described infrastructure into the public or private cloud.
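To give you a flavour of what these templates look like, here is a minimal sketch of one that declares nothing more than a storage account; the parameter name and API version are illustrative rather than definitive:

    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "storageAccountName": { "type": "string" }
      },
      "resources": [
        {
          "type": "Microsoft.Storage/storageAccounts",
          "name": "[parameters('storageAccountName')]",
          "apiVersion": "2015-06-15",
          "location": "[resourceGroup().location]",
          "properties": { "accountType": "Standard_LRS" }
        }
      ]
    }

A real template grows from here to describe virtual networks, NICs, and virtual machines, and the whole thing can be pushed into a resource group with a single PowerShell call such as New-AzureRmResourceGroupDeployment, producing the same result every time it is run.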
This sounds pretty amazing, right? But if you’re like me, there are a few things in that description that caught your attention, and possibly not in a good way.

  • JSON: JSON, who the … heck is JSON? I can’t even do XML, so please don’t ask me to program an infrastructure using JSON. Have a look at the Microsoft documentation and see what you think of JSON.
  • Visual Studio: That’s a developer’s tool. Microsoft might be drinking Gartner’s Kool-Aid again and thinking that we’re all DevOp-ing away, writing C# in-between deploying switches and delaying System Center update rollup deployments.
  • Git: Where I come from, calling someone a git starts a fight. I’ve tried using GitHub and the thing makes SharePoint look easy. And does anyone really think that a corporation is going to deploy infrastructure from an open code repository?

I really do like the idea of deploying infrastructure using code. But right now, the tooling is not there for me. I need something other than a developer tool to author these templates in a user-friendly way. I need documentation that is written for IT pros, not developers. And I need somewhere that is business- and IT-pro-friendly to store my templates. Until then, ARM is nice in theory, but not ready in practice for me.

Am I wrong? Am I crazy? Is the concept of infrastructure as code completely unsuitable for you? Do you agree with me? What do you think? Let us know below.