It is clear from Microsoft’s publicity about Windows Server 2016 that the corporation believes that the default choice for installing a new server is Nano Server. Do you agree with that view? In this opinion post, I discuss the merits of both sides of the argument, share my take on the matter, and ask what you think.
Nano Server is a deep refactoring of Windows Server, which goes several steps beyond what Microsoft did with Server Core in Windows Server 2008. Server Core stripped away the graphical element of the interface, leaving us with a command prompt and a PowerShell prompt; it was, as just about every presenter on the topic has quipped, Windows without windows. It wasn't long until Microsoft advised us that Server Core should be the default installation option.
Four versions of Windows Server later, Microsoft has revisited the concept of server installation options and user interfaces. In an effort to streamline the operating system, Microsoft has gotten deep into the code of Windows Server and removed almost all traces of a user interface. In the first public preview of Nano Server, a local login did little more than tell us that the machine was on the network! Feedback has shaped this experience, and now we can:
- View and change some basic network address settings
- View and enable/disable Windows Firewall rules
- Reset WinRM so that Hyper-V hosts can be managed by System Center Virtual Machine Manager, which continues to rely on that unreliable protocol
Initially, Nano Server was limited to a few roles such as Hyper-V host or Scale-Out File Server (SOFS), but the number of supported roles has grown during the Technical Preview process. Once again, Microsoft is telling us to avoid GUIs when we deploy new servers.
Treat your servers like cattle, not pets; that’s the heavily quoted line from Jeffrey Snover, Technical Fellow and the Lead Architect for the Enterprise Cloud Group, best known as the father of PowerShell. If you paid attention to Microsoft Ignite last year, Snover had several breakout sessions where he repeated the same content and the same lines, effectively telling us that it’s silly to get attached to a single server or to install a user interface on a server. Do I agree? Yes and no.
We should not be logging into servers to do day-to-day administration; every time that we do this, we are crossing over a boundary and creating a risk. Microsoft believes that this justifies ripping the user interface out of the OS.
I absolutely agree that we should do as much administration as possible from our PCs. We should be using management tools such as System Center consoles, PowerShell, and Remote Server Administration Toolkit (RSAT) from our PCs instead of logging into servers directly (Remote Desktop or KVM). Note that Nano Server adds a cloud-based admin toolset hosted in Microsoft Azure. When I last worked as an administrator, this was something that I expected my team to stick to as much as possible. And to be honest, it does speed up work when you can create your own personalized administration experience on your own workstation.
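To give a concrete flavor of that remote-first approach, here is the sort of thing I mean — standard PowerShell remoting run from a workstation, with the server names made up for illustration:

```powershell
# Query a service on a remote server without ever logging into it
Get-Service -Name WinRM -ComputerName HV01

# Run a block of commands on several hosts at once via PowerShell remoting
Invoke-Command -ComputerName HV01, HV02 -ScriptBlock {
    Get-VM | Where-Object State -eq 'Running' | Select-Object Name, CPUUsage
}

# Or open an interactive remote session when you really need one
Enter-PSSession -ComputerName HV01
```

None of this requires Remote Desktop or a console session on the server; the GUI, such as it is, lives on your PC.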
However, there are times when we just need to get onto a server to get things done. The core scenario that Microsoft encourages for a GUI-less installation has always been Hyper-V, a subject that I know a little about. I have seriously tried both Server Core and Nano Server as Hyper-V hosts, and my experience has been that when something goes wrong, I cannot troubleshoot the machine quickly, or at all. I find myself in situations where, if I could just log in locally with a GUI, I could resolve the issue.
Before you comment on adding a GUI to Core, it is widely reported that after a while, the option to re-add a GUI to Server Core breaks.
So what if this only affects one machine every now and then? Ask the many customers of Emulex, who found that every one of their Windows Server 2012 R2 Hyper-V hosts was suffering from “Blue Screens” for a 9-month period until the networking company decided to bail out customers that had purchased blade servers from HPE (then HP), IBM, and Hitachi. I bet troubleshooting Server Core installations was fun – how do you remotely troubleshoot a server over the network if the network card is faulty?
Cattle vs. Pets
In the event that a server becomes faulty, shoot it in the head and build a new one. I love this concept – quite honestly, I do. Once again, as an administrator, this is how I treated PCs. If it took more than 1 hour to solve an issue, rebuild it; there should be nothing of any value on the machine, so PXE-boot that sucker and push a new image down. All of the user’s state and data was on the network, so we lost nothing; the standard image and some custom policies, driven by management tools, restored everything in the time it took to deploy a PC image over a 1 GbE network.
Microsoft is repeating this same message for servers, saying that there’s nothing of value on them. These machines are stateless, so why bother troubleshooting them?
You know what? That’s probably true … if you run something the size of Bing or Azure. 18 months ago, it was rumored that Azure was made up of at least 1 million physical Hyper-V hosts. Can you imagine that real estate? How much value can any one of those machines really have in the grand scale of things? If a host has a software problem, kill it (the virtual machines fail over to one of the other 999 or so hosts in the cluster) and rebuild it – work that is done remotely from one of the handful of remote global operations centers.
But things are a little different when you don’t operate at cloud scales, even in large enterprises. Developers continue to turn out applications where there’s one database machine and one web server, despite evangelists espousing better high availability options. Millions of legacy systems, which are too expensive to re-engineer or replace, are designed with single points of failure. This means that every one of these machines is more important than some cow; putting down these pets would cost someone their job. This means that we need to be able to rescue these machines.
Just Use PowerShell
“Just use PowerShell” is the classic line that angry Snover acolytes shout out in blog comments. I agree; use PowerShell wherever you can. Snover did admins everywhere a huge favor when he gave us PowerShell. Finally, we had a way to automate those tasks that consume so much time. I use PowerShell almost every time I’m doing hands-on work. I have built up a little collection of scripts that I use in my labs and on customer sites all of the time, to speed things up and to get guaranteed and consistent results. And guess what – having a GUI on my servers has never stopped me from using PowerShell!
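To show what I mean by a reusable script that gives consistent results: something like this hypothetical helper (the function name, defaults, and paths are mine for illustration, not from any product) builds lab VMs identically every time, using the standard Hyper-V cmdlets:

```powershell
# An illustrative helper of the kind I keep in my script library
function New-LabVM {
    param(
        [Parameter(Mandatory)][string]$Name,
        [string]$SwitchName = 'External',
        [int64]$MemoryStartupBytes = 2GB,
        [string]$VhdFolder = 'D:\VHDs'
    )
    # Create a Generation 2 VM with a fresh 60 GB VHDX in a standard location
    New-VM -Name $Name -MemoryStartupBytes $MemoryStartupBytes `
        -SwitchName $SwitchName -Generation 2 `
        -NewVHDPath (Join-Path $VhdFolder "$Name.vhdx") -NewVHDSizeBytes 60GB
    # Apply the same baseline settings to every lab VM
    Set-VM -Name $Name -DynamicMemory -AutomaticStartAction Nothing
}

New-LabVM -Name 'LAB-DC01'
```

Every VM comes out the same way, which is exactly the guaranteed, consistent result that clicking through a wizard cannot promise.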
But I also use a GUI. There are times when it’s just quicker to click than to Google and copy/paste/type. My mouse works the same way every time I click it, but PowerShell cmdlets are maddeningly inconsistent from one product group to another.
When doing my research for this article, I found a video where Snover talked about the eras of Windows Server; he said that adding a GUI to a server brought servers to the masses. This is absolutely true; Windows Server burst the concept of the server through the walls of the few and brought those machines out to businesses of every size, changing how business is done all around the world. And yes, the cloud is changing this again, but the cost of server computing in the cloud can be prohibitive enough that the existence of on-premises servers isn’t at risk for the foreseeable future.
As a person who interacts with a lot of IT pros, locally and internationally, I have learned a few truths. The first is that PowerShell adoption is low. The market is heavily skewed: there are the few, and then there is everyone else. Those who attend TechEd/Ignite and regularly turn up at user groups are the ones who use PowerShell, and do so often. They are the few; they read blogs, respond on social media, watch videos, and so forth. Then there is the majority, the ones who come to work, do their thing, and go home. These are the many who rarely or never use PowerShell and live in the GUI … and, importantly, these are the many who drive IT in the vast majority of businesses.
The second truth I have learned is that these folks will not change their work habits. They aren’t passionate about IT; they do a job.
And the third, and hardest truth for some to hear or understand, is that everyone has a ceiling. This ceiling varies per expertise and per person. I can admit that I know my limits in IT – for example, my inability to subnet is laughable! Expecting that every admin is going to become proficient in PowerShell is wishful thinking. It’s not going to happen. And this is where those angry acolytes will declare that these people should be fired or not promoted to senior positions. I could rant quite a bit on this topic, and much of that rant would be in agreement, but the reality is that this is not going to happen. We have to live with what’s there.
So there’s a choice to make: push an ideal that will alienate the majority of customers, or be flexible enough to understand that every business, small or large, is different.
I have heard from a few consultants who have deliberately used this lack of general adoption to deploy Server Core in customer sites, preventing troublesome customers from meddling with stable deployments!
Nano Is Smaller
The claims about Nano Server’s size are absolutely true. I have a Nano machine that is consuming 137 MB (megabytes!) of RAM and just 606 MB (megabytes!!) of disk space as I type this article. Nano Server is crazy-small and efficient. I absolutely want to use it as much as possible.
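For the curious, a Nano Server image is built from the installation media rather than installed from it, which is part of why the footprint is so small. A sketch of the process using the NanoServerImageGenerator module that ships on the Windows Server 2016 media — the paths and computer name here are illustrative, and the parameters were still evolving during the Technical Preview:

```powershell
# Load the image-builder module from the Windows Server 2016 media (D:)
Import-Module 'D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psm1'

# Build a tiny Hyper-V host image: -Compute adds the Hyper-V role,
# -Clustering adds failover clustering
New-NanoServerImage -Edition Standard -DeploymentType Host `
    -MediaPath 'D:\' -BasePath 'C:\NanoBase' `
    -TargetPath 'C:\NanoHost\NANO01.vhdx' -ComputerName 'NANO01' `
    -Compute -Clustering
```

The resulting VHDX contains only the roles you asked for, which is where those tiny RAM and disk numbers come from.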
We’re told that we should use Nano Server everywhere, with an emphasis on physical roles like SOFS and Hyper-V. How useful is a 606 MB installation to me? It would be awesome if non-OEM installations of Hyper-V on USB were supported, or if Microsoft supported Windows/Hyper-V in any way on SD cards (as VMware does for vSphere). Instead, I have to purchase a pair of 300 GB hard disks (RAID 1) and drop my 606 MB VHD file onto that disk, leaving the rest of the space empty – virtual machines go onto data disks. So I have saved nothing in storage.
In reality, I’m going to save 1 or 2 GB in RAM per Hyper-V host or SOFS node. But I probably have 256 GB or more in my Hyper-V nodes so I’m saving around 1% of RAM, at quite a cost (administration).
Nano will require fewer patches. If things stayed like they were with Windows Server 2012 R2, then I’d probably still get at least 1 patch per month on Nano hosts. That’s one reboot per month, just as it would be if I got 10 patches. I don’t care about the quantity of patches; I care about reboots. Actually, if I have a cluster, I don’t care too much about those either, because I’ll use Cluster-Aware Updating to proactively live-migrate my virtual machines around the cluster, so the business suffers zero service downtime.
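That Cluster-Aware Updating run can be kicked off from a management PC with the ClusterAwareUpdating module. A minimal sketch, with the cluster name made up for illustration:

```powershell
# Patch every node in the cluster one at a time, draining roles
# (live-migrating VMs off each node) before its reboot
Invoke-CauRun -ClusterName 'HVC1' `
    -CauPluginName 'Microsoft.WindowsUpdatePlugin' `
    -MaxFailedNodes 1 -RequireAllNodesOnline -Force
```

The workloads never notice the reboots, which is why the patch count per month matters so much less than people think.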
Where smaller does make sense is when we look beyond the physical options. I’ll be honest: I will continue to use the full GUI option for Hyper-V, as all of my customers do, and as 70% of all respondents that I have interacted with do as well. However, Hyper-V hosts are only a tiny percentage of my installations, and they are irrelevant to all you VMware customers too. It’s the virtual machines, dummy!
Yes, virtual machines are the things that consume all that RAM and disk capacity. Trimming the fat on the 10+ virtual machines on a host will save more resources than converting the host to Nano Server, where the savings come at a higher cost (hardware troubleshooting is likely, and that needs a GUI). The cost savings on servicing (patching) can also be realized more readily with virtual machines, because fewer updates will reduce saturation of networking and storage bandwidth. For larger customers, these savings equate to real money. I don’t think it would make much difference for anyone in the small/medium enterprise space in on-premises scenarios, but it could be a real budget saver in public clouds, because you could potentially deploy smaller machines with Nano Server.
What Will I Do?
I’m glad that Jeffrey Snover has softened the language he uses when talking about Nano Server – we don’t all run Azure-sized infrastructures where we can exterminate any server that has a hiccup. When I decide to upgrade the 2-node Hyper-V cluster at work, I will continue to use the full installation. I like being able to troubleshoot issues, and my colleagues don’t have the skills to PowerShell their way out of an issue. What would I use for all of our virtual machines in the future? I think I’ll stick with full installations – we’re a smallish company with around 20 virtual machines. RAM is cheap, so there really isn’t much of a saving to be made in the Nano-versus-desktop argument. As for security, we’re not stupid enough to browse the Net from our servers, and our machines are not exposed to the Internet.
If I were back in a large enterprise, then I would do things slightly differently. My hosts would continue to run with the desktop installation, but I would try to use Nano for as many virtual machines as possible. The savings in RAM would be important then, and I could considerably reduce the bandwidth and IOPS impact of servicing hundreds of machines every month. This wouldn’t be a blanket decision, but I would try to use Nano where possible in the guests.
I think I would also try to use Nano Server in Azure; the cost difference between an A3 and an A4 VM might be enough to get me to rework my administration.
What Do You Think?
I am one of those people that does attend conferences, reads blogs, watches videos, and learns what I can find time for. So I am willing to make changes in how I work. And, because you’ve just read over 2,400 words on a tech site, I guess you are similar to me. What do you think about the Nano versus desktop server debate? What will you or your customers prefer? What will you really do?