If you attended Microsoft TechEd North America 2014, you more than likely know who Mark Russinovich is. In addition to being a Technical Fellow working with the development team on Microsoft Azure, Russinovich is a well-known IT conference speaker. He’s so well-known that his sessions at TechEd were routinely filled to capacity, prompting some conference attendees to take to Twitter to express their opinion on the need for larger conference rooms for Russinovich sessions.
In addition to his work on Azure, Russinovich has a long history in the IT industry. Winternals, the software company he launched in 1996 with co-founder Bryce Cogswell, produced the now-ubiquitous (and still updated) Windows Sysinternals suite of software utilities. He uncovered the infamous Sony DRM rootkit in 2005, and he has written three novels focused on IT security: Zero Day, Trojan Horse, and the just-released Rogue Code.
Microsoft Technical Fellow Mark Russinovich (Source: Mark Russinovich)
I had the opportunity to sit down with Mark for a 30-minute interview at Microsoft TechEd 2014, where we discussed the Microsoft Azure announcements from the show and what the growth of the cloud means for IT professionals, and he offered some tips and advice for system administrators looking to beef up their cloud skills.
Editor’s Note: This interview has been edited for space and clarity.
Jeff James: Let’s start with a discussion about the [Microsoft TechEd 2014] keynote. The keynote had a ton of Azure news: There’s all sorts of updates to Azure, and I know that reflects the way that a cloud service like Azure is developed, as updates come very quickly. When looking at all the Azure announcements, what would you suggest that IT pros pay the most attention to? What are the standout three or four things they should take away from the Azure announcements?
Mark Russinovich: Great question. I think if you look at all the Azure announcements, they were all aimed at hybrid [cloud environments] in some way…such as IT pros that are working in on-prem environments and think they might move to the cloud eventually, or move to the cloud when they have an app or scenario that spans on-prem and the cloud. Or they want to do dev-test and bring stuff back on-prem because they’re just playing with the cloud or using it for some initial “dipping a toe in the water” scenarios.
A key announcement was ExpressRoute and its GA [general availability]. ExpressRoute is the ability to take leased lines through an ISP or into a fiber hotel, then get that wired into an Azure data center with different levels of provisioned bandwidth, so your traffic stays off the open Internet and you get quality of service.
Jeff: Almost like a VPN?
Mark: Yeah, it is. And then there were some minor related announcements yesterday about networking and hybrid networking. Before, our point-to-site VPN solution only allowed one point to connect into an Azure virtual network, and now we support multiple different sites [Multiple Site-to-Site and Inter-VNET (VNET-to-VNET)] having that connection into Azure.
Then we also have virtual network to virtual network bridging, or connecting. That’s a scenario that some of our customers who have moved into the cloud with lift-and-shift server applications wanted for regional disaster recovery and failover.
SQL Server AlwaysOn allows you to fail over SQL Server from one server to another. That’s commonly used within a single data center, so if a server fails, SQL Server remains up from the perspective of delivering service to its clients. But people thinking about risks that extend beyond a single server failing, to a whole region failing or becoming unavailable, want SQL Server AlwaysOn failover from one region to another.
In Azure, that really wasn’t possible, because the only way to communicate between regions was to use public IP addresses, and people didn’t want to put SQL Servers on public IP addresses. This virtual network to virtual network connection lets you deploy SQL Server in a virtual network in one region and SQL Server in another virtual network in a second region, then have gateways connect those two virtual networks so they can talk to each other.
Jeff: When you say regions, is that within a country, or dispersed globally?
Mark: It can actually be dispersed globally. We have a data center strategy that goes into geographic areas, or “geos.” Within each geo we have, at this point, one or more regions, where a geo is a kind of geopolitical regulatory compliance boundary.
In the US, we have five regions today. In Western Europe, we have two regions. In Asia, we have two regions. In Japan, we have two regions. In China we have two…
Jeff: When you’re classifying regions does that also align with a data center (in each)?
Mark: No, not necessarily. There can be multiple data centers in a region. We don’t expose them at this point.
Jeff: One thing I also noticed about the announcements yesterday was that there weren’t a whole lot of on-premises announcements. I’ve heard rumbles from a few people that Barcelona [TechEd Europe 2014 in Barcelona, slated for October 2014] may have some interesting news. I’ve talked to some of the attendees, and some of our readers too, who have said, “There really is nothing here [at TechEd 2014] for on-prem.”
Mark: Let me answer that a little bit by talking about some more of the things you were asking about, which was very interesting in yesterday’s announcements. I think when people are saying there’s nothing on-prem, they’re really saying there’s nothing that is exclusively non-cloud-related.
Besides ExpressRoute and the hybrid network connectivity, we have things that are aimed [partially at on-prem] like Azure Site Recovery. That’s clearly targeted at on-prem. That is saying that you have an on-prem deployment of something and you want to fail it over to Azure. There is a cloud connection there, but it’s one that’s forward-looking.
In terms of people that have got on-prem deployments…the idea is: I’ve got data center infrastructure that I already purchased or leased. I need to make use of that, so that I’m not just throwing money away, so I’ll deploy my applications there. To have a failover site, it’s not prudent at this point to go buy or lease a new bunch of servers, or co-lo, or build something out. Instead, for that rare scenario when I do need to fail over to someplace else, I’ll go to the cloud. That way I’m only paying for the cloud resources when I actually do fail over.
Jeff: So a better way to phrase [the announcements from the TechEd 2014 keynote] would be to say that the days of [IT resources being exclusively] on-prem are pretty much over, with the idea that you can expand your resources into the cloud as needed. It sounds like all of the things announced yesterday go a step further down that path….
Mark: Actually, related to that, you’ll see some things that look like they’re aimed at the cloud, but they’re really aimed at consistency with on-prem scenarios.
For example, Azure Files allows you to write an application that takes advantage of file sharing and file shares to distribute and store data among different servers, and to take that application from on-prem and move it into the cloud a lot more easily, or vice-versa. That’s part of the consistency play: consistency by taking an existing on-prem programming model and putting it up in the cloud. We’re also going the other way, which you haven’t seen much of yet (but you’ll see more of), which is to say: here’s the new way to write a cloud app, and that’ll work on-prem. The first steps there are in Windows Azure Pack (WAP). In WAP you get the same management and deployment experience for websites, as well as Service Bus and virtual machine creation. We want to take that further and further, so that it’s not just management of virtual machines but management of applications that consist of virtual machines and other resources. You probably saw the resource group templates that we announced at Build; those will be going down into WAP, and those templates will expand to include virtual machines and PaaS applications as well.
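Russinovich’s Azure Files point is that code written against the ordinary file-share programming model doesn’t have to change when it moves between on-prem and the cloud. Here is a minimal illustration of that model in Python; nothing in it is Azure-specific, a local temp directory stands in for a mounted share, and the helper names are invented for the example:

```python
import os
import tempfile

def write_report(share_root: str, name: str, data: str) -> str:
    """Write a file under a share root. The same code works whether
    share_root is a local directory on-prem or a mounted SMB share."""
    path = os.path.join(share_root, name)
    with open(path, "w") as f:
        f.write(data)
    return path

def read_report(share_root: str, name: str) -> str:
    """Read a file back from the share root."""
    with open(os.path.join(share_root, name)) as f:
        return f.read()

# A temp directory stands in for the mounted share in this sketch.
share = tempfile.mkdtemp()
write_report(share, "daily.txt", "queue depth: 42")
print(read_report(share, "daily.txt"))  # -> queue depth: 42
```

The same open/read/write calls work whether `share` points at local disk, an on-prem SMB share, or a cloud file share mounted into a VM, which is the consistency play he describes.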
Jeff: Great. I’m not personally as familiar with System Center, but I was told that Azure Site Recovery requires System Center Virtual Machine Manager to enable…
Mark: …Yeah, I’m sure it does require System Center Virtual Machine Manager because that’s what knows about the topology of virtual machine apps across servers.
Jeff: So [System Center Virtual Machine Manager] is a requirement for Azure Site Recovery, for a good reason. Do you have any concrete numbers you can share on the IT pro side, what you’re seeing in terms of [Azure] adoption, and what areas in the market are you seeing the most adoption? Is it with large, midsize, or small [companies]?
Mark: Sure, I think the latest is [around] 8,000 customers a week.
Jeff: When you say customers a week, that’s…
Mark: That’s all sizes.
Jeff: So that’s someone who goes in and creates an Azure account and starts using [Azure services]…
Jeff: That leads to my next question, which is about cloud security…maybe you can talk about the security announcements related to Azure from [the TechEd 2014 keynote]? From your perspective, what do those announcements mean for the people that are still on the fence [about the cloud] and are worried that [a cloud security lapse] will put them on the front page of the New York Times?
Mark: The key ones were the support for anti-malware inside of our PaaS cloud services as well as in our virtual machines.
Jeff: So basically, when you create an Azure VM, you can choose to have that protection enabled by default?
Mark: You can, yes. Or you can enable it later if you like, so you don’t have to do it at the time you create the VM or the cloud service.
Jeff: This all can be configured by System Center?
Mark: It’s actually configured through configuration files that you give to the management APIs or the PowerShell cmdlets. This is really the lower layer of security management. What I imagine, as we build these things out, is that there will be more top-level orchestration of security management across these things.
For example, being able to apply consistent configuration across a bunch of applications from some central place, without having to go talk to the people that are deploying or operating those applications. The IT guy comes and says, “These are the applications that my company is deploying, and here is the anti-malware policy that I want those to have.” You’re going to see us move in that direction, where IT can have a way to manage the parts of application behavior that they want to manage, ideally without interfering with the operators.
Jeff: Sure. It’s more efficient.
Mark: Because otherwise there’s so much friction, if you need to go to the application administrator and say, “Here, take this config and add it to your application config before you deploy it,” and then a week later go, “Oh, here is an update to it…”
Jeff: It becomes a headache.
Mark: It becomes a headache. Then you’re going to see higher-level services, which you’re already kind of seeing in Windows Azure…Microsoft Azure Active Directory Premium. [laughter]
Jeff: [laughter] Don’t feel bad because we’ve gone through the same thing with renaming articles, and we’re redesigning our site soon. So there is a lot of Windows Azure to Microsoft Azure renaming going on…
Mark: I have years of it being beaten into me. I just need like an electric shock every time.
Jeff: I could use the same thing also.
Mark: The Azure Active Directory Premium service, which Brad [Anderson] was showing in the keynote, detects things such as logging in from two different places in the world within hours. That nonsensical login is one of the kinds of anomalies we can see as we scan the data that’s being generated by Azure Active Directory for customer authentication. What you’re going to see are things like your security anti-malware logs: if you provide access to those logs to a Microsoft service or a third-party service, those services will be able to perform analytics on them in conjunction with Azure Active Directory logins, and in conjunction with other data that we get from other sources. This is really a massive opportunity for up-leveling security overall. I heard a great example last week that shows the power of the cloud when it comes to security, and of a company like Microsoft that’s so connected with a whole bunch of different security intelligence. We were told by an external entity that certain companies had been targeted with spear-phishing emails. We were given signatures for the spear-phishing emails, and we went into those companies’ Office 365 accounts and added rules to cause the spear-phishing emails to go into users’ junk folders, so those users would never see them.
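The “logging in from two different places within hours” anomaly Russinovich mentions is often called impossible travel. The sketch below is a hypothetical illustration of the underlying idea, not Azure Active Directory’s actual detection logic: flag a pair of logins whose implied travel speed exceeds what an airliner could manage.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900):
    """Each login is (lat, lon, hours_since_epoch). Flag the pair if the
    implied travel speed exceeds roughly airliner speed (an assumed
    threshold for this sketch)."""
    dist = haversine_km(login_a[0], login_a[1], login_b[0], login_b[1])
    hours = abs(login_a[2] - login_b[2]) or 1e-9  # avoid division by zero
    return dist / hours > max_kmh

# Seattle, then Moscow two hours later: thousands of km apart, anomalous.
print(impossible_travel((47.6, -122.3, 0.0), (55.8, 37.6, 2.0)))  # True
```

A production system would of course combine signals (device, IP reputation, history) rather than rely on distance alone, which is the broader analytics point Russinovich is making.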
Jeff: I love “the external entity.” That’s very discreet. So [these emails go] into the junk folders, and then what?
Mark: That means if these emails had ended up in people’s inboxes, they would have potentially exposed the company to breaches, because [those emails are spear-phishing] lures to install software, get compromised at a watering hole, and then that’s an avenue into a company. With those emails in a junk [mail folder], they’re less likely to be seen by users who would come across them and get owned.
Mark: None of that would have been possible without that connection between Microsoft’s security intelligence and our ability to go and configure these email rules on behalf of customers. In the old way of customers running their own Exchange Servers, we would have had to…
Jeff: …communicate to them individually.
Mark: Individually. That would have proven problematic, and then they would have had to go take some steps themselves, which would have added a lot of friction. Instead, we could just go in, boom-boom, you’re protected.
Jeff: I think there’s a misconception there, too. If people really look at things like uptime and compare internal data centers with the data centers that Azure runs, I’m sure the uptime and efficiency differences don’t make for a very good comparison…
Mark: It’s not, although there’s this weird human-nature thing: if I’m going to operate it myself, then I’m a lot more forgiving, and actually, I might not even know.
Jeff: That’s true.
Mark: Because we ask our customers, “What is your uptime SLA in your data center for your servers? For your applications?” And they don’t have the number. Or they may have a target, but they don’t really know what they’re achieving. They will tolerate their own failures and screw-ups, and it’s like, “Yeah, yeah, we screwed up. We shouldn’t have done that.” But if it’s somebody else that does it? Then it’s…
Jeff: Front page news.
Mark: …front page news and you guys, what are you doing? And I don’t trust you.
Jeff: Well, it’s true. What other advice would you give to administrators who are still stuck in the physical “If I don’t touch it, it doesn’t exist” [mindset]? What are some easy ways to get them to step away from the [physical] server and embrace the cloud? Is it just as easy as signing up for an account and trying it, or are there other ways?
Mark: Well, if it’s just getting familiar with what this whole thing is, you can sign up for an Azure account in a few minutes with a credit card and get the free trial, which I think is a month or so of pretty decent usage. Also, a lot of companies are MSDN subscribers, and there are benefits that come with that that a lot of people aren’t taking advantage of: up to $150 a month of free Azure usage.
Which is a great way to just go kick the tires and see what’s going on. But when it comes to figuring out a strategy for adoption of the cloud or even figuring out if the cloud is something that makes sense for your company, the place that we see companies starting is dev-test. [It’s like this:] “Let’s go and have our people create their VMs in the cloud, test them in the cloud, and then bring them back to on-prem to deploy them in production.”
There’s no data exposed up there. It’s a lot cheaper to do dev-test in the cloud than it is to go buy dev-test servers and create VMs on-premises for them, because you’re just using as many resources as you need for your test cycles. The second you’re done with those VMs, you shut them down. So it’s a great way to start to get your feet wet without a lot of risk and with some good benefits that come along with it.
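The dev-test argument is straightforward pay-for-use arithmetic: with cloud VMs you pay only for the hours your test cycles actually run, while dedicated hardware costs the same whether it’s idle or busy. The rates below are invented for illustration, not Azure’s actual pricing:

```python
def monthly_cloud_cost(vm_hourly_rate, vms, hours_used_per_month):
    """Pay only for the hours the dev-test VMs are actually running."""
    return vm_hourly_rate * vms * hours_used_per_month

def monthly_dedicated_cost(server_monthly_cost, servers):
    """Owned or leased dev-test servers cost the same idle or busy."""
    return server_monthly_cost * servers

# Illustrative assumption: 5 VMs at $0.10/hr, run 40 hours a month of
# test cycles, versus 5 dedicated servers amortized at $200/month each.
cloud = monthly_cloud_cost(0.10, 5, 40)     # $20/month
dedicated = monthly_dedicated_cost(200, 5)  # $1,000/month
print(cloud, dedicated)
```

Even with generous assumptions for the dedicated hardware, a bursty dev-test workload that runs a few dozen hours a month comes out far cheaper on demand, which is exactly the “shut them down the second you’re done” point above.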
Jeff: OK. Any other comments you’d like to make about the cloud in general coming out of TechEd? Any other misconceptions that need to be addressed, or myths that need to be busted?
Mark: Myths that need to be busted…
Jeff: …about the cloud or Azure specifically.
Mark: What myths remain about the cloud? In the four years that I’ve been working on Azure, I’ve seen a step-by-step change in IT pro understanding and perception of the cloud, which has been pretty dramatic given it’s only been four years. Four years ago, when I started on Azure, nobody knew what the cloud was. Now people know what the cloud is, they know what Infrastructure as a Service (IaaS) is, and they’ve got some idea of what Platform as a Service (PaaS) is. Back then, as they were trying to figure out what the cloud was, there was a lot of denial and a lot of skepticism that the cloud was real. Now people generally see it as an inevitability, and that they need to figure out how to get to the cloud or else people are going to leave them behind.
Jeff: Yeah it’s a competitive situation…
Jeff: I guess for my last question, and this is more about Microsoft after Satya Nadella has [taken over as CEO of Microsoft]. I remember TechEd last year, [when Brad Anderson arrived on stage] like James Bond driving an Aston Martin. I thought this year he might come out in a Ferrari, or a Dodge Challenger Hellcat. But [Brad’s TechEd keynote entrance] was much more subdued, much more humble. It seems like Satya approaches things without a lot of fanfare, but there’s a very personal engagement with the audience and with customers. Is that just my impression, or is that the way Satya does things?
Mark: One of the things Satya has said — and I don’t know if he’s been saying this publicly, but I think he has — is that Microsoft needs to adopt a challenger mindset…that this is a very competitive world and [there are] lots of these areas where we’re behind. So we’ve got to recognize that. Even in the cloud we’re behind in certain ways. This is a space that’s moving so fast, and we’ve seen how just taking your eye off something for a few years could be the end of it, the difference between being a player and being a distant, inconsequential part of a market, as we are in some of these areas. We can’t take things for granted, and we can’t take our customers for granted. At Azure we definitely believe this. You can see it in the way Azure started, when it was called Windows Azure. It was like, “Oh, it’s Windows. So that’s our destiny: to get all the Windows people.”
Then the cloud obviously became more than Windows. Customers were telling us, “It’s not just about Windows. We’ve got other tools, operating systems, and runtime environments that are non-Microsoft. So if you give us a Microsoft-only cloud, we won’t be able to use just that, and if we can’t use just that, then maybe we’ll look around for a one-stop shop, a single vendor we can trust to take us someplace and meet our needs.” This is what you’ve seen us embrace on the Azure side, and now you’re seeing it happen on the mobile side. [Now we] embrace Linux, embrace Java, embrace Node, embrace PHP, Python, Android, iOS, Chef, and Puppet, alongside our own technologies as well.
We’d like to thank Mark and the Microsoft Azure PR team (specifically Andrea Carl, Jessica Spindel, Mark Miller, and Joel Sider) for helping schedule our interview at TechEd 2014.