Microsoft is at a crossroads these days. Impressive products like Windows Phone, the Xbox One, and even the newly released Windows 10 Technical Preview are tangible proof that Microsoft is making better products and services than ever before. Yet the pace of innovation and the level of competition facing Microsoft are intense.
Windows Phone devices may be the technical equal of smartphones running iOS and Android, but Windows Phone lags far behind both in market share. The Xbox One suffered from a launch campaign that could serve as a textbook example of how not to launch a consumer technology product, and has been outsold by Sony’s PlayStation 4 game console since launch. Windows 10 looks promising as a very public apology for Microsoft’s “New Coke” moment, as Windows 8 was widely reviled by businesses and consumers alike. The two years Microsoft spent trying to convince people to buy Windows 8 were lost time, and even helped Apple and the oft-maligned Google Chromebooks realize retail market share gains against Windows PCs.
A Success Story: Microsoft Azure
Another competitive market segment is cloud computing, an area where Microsoft is locked in a three-way battle for dominance with Amazon Web Services (AWS) and Google Cloud Platform. AWS is the clear market leader; Google was uncharacteristically late to market and is battling for third place with the likes of IBM SoftLayer and others; but Microsoft Azure is steadily making gains.
Microsoft CEO Satya Nadella deserves credit for helping build Azure into what it is today, but one of the most influential people on the Azure team is Microsoft Azure Chief Technology Officer Mark Russinovich. As outlined in a Wired magazine interview earlier this year, Russinovich isn’t afraid to criticize his employer, and can be refreshingly frank and direct about Microsoft’s failings. In an interview I had with Russinovich at TechEd 2014, he said that Microsoft was now operating in a “very competitive world” and was behind in several areas. Here’s a larger excerpt from that interview:
“So we’ve got to recognize that. Even in the cloud we’re behind in certain ways too. This is a place that’s moving so fast and we’ve seen how just taking your eye off something for a few years could be the end of it. The difference between being a player and being a distant, inconsequential part of a market — as we are in some of these areas — [is that] we can’t take things for granted, or we can’t take our customers for granted. At Azure we definitely believe this.”
A few weeks before Microsoft TechEd Europe 2014 – where Russinovich took the stage to demo new Azure services like the Azure Batch Service, the Docker Client for Windows, and Azure Premium Storage – I had the opportunity to conduct a wide-ranging phone interview with Mark where we discussed the state of IT, the future of Microsoft Azure, and a look back at how far IT has come in the last decade. What follows are portions of our phone interview, edited for space and clarity.
Recent Improvements to Microsoft Azure
Jeff James: We last spoke at Microsoft TechEd 2014 in Houston, and there were several new features for Microsoft Azure announced at the time. Azure has seen some additional improvements and new features since then, so perhaps you could talk a bit about some of the updates to Azure since then?
Mark Russinovich: One thing that’s top of mind is we released our Microsoft Azure D-series of VMs recently. These VMs offer faster CPU, more RAM, and local SSDs, which we haven’t had before. (Editor’s Note: Microsoft also announced the massive Azure G-series VMs — the largest ever offered — just after this interview was conducted.)
That unlocks a bunch of different scenarios. One of the things that D-series VMs work nicely with is SQL Server, which has a feature called buffer pool extension that allows you to place a buffer pool on a local storage device. In this case, it would be the SSD. It’s essentially a second tier of high-speed cache behind RAM for SQL Server.
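(Editor’s Note: For readers unfamiliar with buffer pool extension, the tiering idea Russinovich describes can be sketched in a few lines of Python. This is a toy model of a two-level page cache, not SQL Server’s actual implementation; the class, page counts, and eviction policy are invented purely for illustration.)

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy illustration of the tiering idea: a small, fast RAM tier backed
    by a larger SSD tier. Pages evicted from RAM fall to the SSD tier
    instead of being dropped entirely, so a later read avoids slow disk."""

    def __init__(self, ram_pages, ssd_pages):
        self.ram = OrderedDict()   # hot tier (stands in for the RAM buffer pool)
        self.ssd = OrderedDict()   # warm tier (stands in for the SSD extension)
        self.ram_pages = ram_pages
        self.ssd_pages = ssd_pages

    def read(self, page_id, load_from_disk):
        if page_id in self.ram:            # RAM hit: cheapest path
            self.ram.move_to_end(page_id)
            return self.ram[page_id], "ram"
        if page_id in self.ssd:            # SSD hit: still much cheaper than disk
            data = self.ssd.pop(page_id)
            self._put_ram(page_id, data)
            return data, "ssd"
        data = load_from_disk(page_id)     # miss: go to slow storage
        self._put_ram(page_id, data)
        return data, "disk"

    def _put_ram(self, page_id, data):
        self.ram[page_id] = data
        if len(self.ram) > self.ram_pages:
            old_id, old_data = self.ram.popitem(last=False)
            self.ssd[old_id] = old_data    # demote to SSD rather than evict
            if len(self.ssd) > self.ssd_pages:
                self.ssd.popitem(last=False)

cache = TwoTierCache(ram_pages=2, ssd_pages=4)
load = lambda pid: f"page-{pid}"
cache.read(1, load)                 # cold read from disk
cache.read(2, load)
cache.read(3, load)                 # page 1 demoted to the SSD tier
print(cache.read(1, load)[1])       # prints "ssd": served from the warm tier
```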
At Microsoft TechEd 2014, there was a lot of news about the rebranding of Hyper-V Replica to Azure Site Recovery and the support for DR, not just using Azure as the orchestrator but actually replicating to Azure and back from on premises.
Since then we acquired a company called InMage, which does VM replication. It works on basically any VM, whether it’s an Amazon VM or a VMware VM, to replicate data. We think this is a great tool for our customers that want to migrate from, for example, VMware or Amazon to Azure, because they can do it in real time and minimize the downtime during the migration.
Jeff: That leads to my next question: With all of these cloud services, the way Microsoft is releasing updates to their products and services has radically changed. Years ago it used to be a product like Windows Server 2008 would come out as a discrete SKU, followed by a service patch (or two) down the road. But with Azure and some of these other new cloud services, the updates are ongoing, a constant product iteration. Maybe you could talk a little bit about how Microsoft’s approach to developing and shipping products has changed?
Mark: This is part of the transformation of the way Microsoft produces software, going from a boxed product company to a service company. When I hear about companies that are saying, “Oh, we’ve got the cloud now. We’re going to start working on the cloud.” I won’t name any names, but some of the largest companies in IT that are delivering IT infrastructure are suddenly saying “We’ve got the cloud.”
I know firsthand from being at Microsoft through this transition from a boxed product mentality to a service mentality that you can say it, but it really requires a massive cultural change and engineering systems change to support the constant delivery that we’re doing now.
It takes many years. This is coming from a company that had experience in cloud services or service delivery through things like Hotmail and Xbox Live and Bing for many years.
But even getting to the point where we’re releasing a public platform and products on top of that that are aimed at IT, it was still a transformation, and we’re still [going through that transformation]. We’re still not perfect yet.
There’s lots of room for us to improve, but the boxed product delivery model is one where there’s a big cost for customers to absorb a new release because they’re responsible, ultimately, for deploying and upgrading their servers and testing for compatibility, and it’s therefore disruptive to them.
Their tolerance or willingness to accept new software is weighed against the benefits they’re going to get out of taking that new software. One factor behind this multiyear cycle between big releases is customers not wanting to go through this disruption on a frequent basis just to get incremental improvements. That’s not worth the cost of disrupting your whole infrastructure and doing all [of your own] testing.
What that leads to is the cost of a bug becomes very high because if you deliver a bug as part of that software and then deliver a fix shortly after you’re causing another wave of disruption through the system. Patching is a very expensive process.
These things combine to create this development process which starts with the team looking at what features they’re going to produce in the next release, spending some time on that, typically months. Coming up with a plan. Starting the engineering work. Going through these milestones with internal integration tests in-between milestones.
Next we get to a point where it’s somewhat stable and release that as a preview or a beta to the outside world. Then when they’re close to the final release, with all the functionality in, there’s typically a longer beta where you want to ensure that enough of your customers have played with the product in environments as close to production as possible, to ensure that you’re not releasing those bugs as part of that product.
At the same time, you’re going through lots of testing internally. Then once you get to that level of confidence you [release it out to the world]. That’s a two to three year process that Microsoft and many other boxed product companies have traditionally gone through.
With delivering software as a service, we’re the ones taking the disruption when it comes to the updates, not the customer. We can create a system that can produce the software, test the software, and release it in an agile manner.
One of the best things we get out of software delivered as a service is the ability to test in production where we can do, of course, our own internal testing on our test clusters and with our own first party workloads. Once we’re ready and have confidence in the release, we start to push it out to production but we do it in a very careful way where what we’ll do is push it out in what we call slices.
We’ll do a first slice in production to just a small subset of the total capacity. See how that goes. That’s basically evaluating the software with real customer production workload hitting it.
If that does well then we’ll roll it out to a larger slice and then continue to accelerate until we’ve got full coverage.
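(Editor’s Note: The sliced rollout Russinovich describes can be sketched as a simple loop. This Python model, with invented slice fractions and a stand-in health check, is purely an illustration of the idea, not Azure’s actual deployment system.)

```python
def rollout(total_capacity, slices, healthy):
    """Sketch of a sliced rollout: push a new build to progressively
    larger fractions of capacity, checking health after each slice, and
    halt on the first sign of trouble so the blast radius stays small."""
    deployed = 0
    for fraction in slices:
        deployed = max(deployed, int(total_capacity * fraction))
        if not healthy(deployed):
            return deployed, "halted"    # bug caught early; fix and retry
    return deployed, "complete"

# A healthy build accelerates to full coverage...
print(rollout(10_000, [0.01, 0.05, 0.25, 1.0], lambda n: True))   # (10000, 'complete')
# ...while a bad build is stopped after touching only 1% of capacity.
print(rollout(10_000, [0.01, 0.05, 0.25, 1.0], lambda n: False))  # (100, 'halted')
```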
The cost of a bug fix is actually much lower, too, because our engineering systems are designed to detect a problem quickly and then for our developers to be able to fix the problem and then push out that fix to production very quickly in the order of a day, typically, or even hours in some cases.
The whole model changes. It requires a cultural shift in the fact that now it’s not sustained engineering that is dealing with bug fixes while the product team is off figuring out what to do in the next release. It’s actually the developers of the product that are essentially operating. This is the whole DevOps model that everybody’s talking about and going towards.
The developers own production because they’re the ones operating the software for the company. If there’s a bug, we need to fix it and we need to feel the pain of it directly so we can feel the urgency of fixing it, understand the problem, and roll out a fix for it.
Jeff: So it took a mindset change at Microsoft to move to this new development model, but I’m sure it’s also a learning process for Microsoft customers and IT administrators, too. Do you have any advice for them on how to get the most out of the new way some of these Microsoft services are being developed and deployed?
Mark: Once you start consuming software as a service like this, it takes a lot of the burden off of IT for rolling out software and doing that testing. It puts the burden on the software developer, the software service. And so the IT pro can focus on other activities. A common saying among IT pros is that the list of to-dos is always longer than the capacity to go after them. Taking some of those things off that to-do list gives them the opportunity to go after higher value activities.
The Growth of Microsoft Azure
Jeff: About nine months ago, if I remember correctly, you gave a presentation at Microsoft BUILD 2013 where, as part of your presentation, you showed some stats on Azure growth. Maybe you could talk a little bit about what Azure’s growth has been like in 2014 year to date? What numbers can you talk about?
Mark: We’ve got a slide that we show at our executive briefing center that has a bunch of stats, updated versions of some of the ones that we probably talked about at Build. One example, as far as our hybrid story goes, is that Hyper-V has grown five points of share against VMware.
We’re seeing the cloud help drive Hyper-V on prem as customers want that hybrid consistency. If they’re going to pick a cloud like Azure, it makes sense for them to go with Hyper-V.
As far as Fortune 500 adoption, more than 57 percent of the Fortune 500 are using Azure. Our database as a service is growing at 10 percent month over month and has over a million active databases at this point. Our storage service holds over 30 trillion storage objects now.
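(Editor’s Note: To put the 10 percent month-over-month figure in perspective, compounding it over a full year amounts to roughly tripling. A quick sketch:)

```python
def annualize(monthly_growth: float, months: int = 12) -> float:
    """Compound a month-over-month growth rate into a cumulative multiple."""
    return (1 + monthly_growth) ** months

# 10% month over month works out to roughly 3.1x over a year.
print(round(annualize(0.10), 2))   # 3.14
```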
Talking about Azure Active Directory, back to your question about nostalgia, there’s a nice parallel here between what’s going on with Azure and what went on with Windows Server. Azure Active Directory is now over 300 million users. The number of authentication requests served per week is over 13 billion now.
Our Visual Studio Online, which I think was in preview around the time [of BUILD 2013] and has GA’ed now, has over 1.6 million developers using it. That’s growing at 10 percent month over month, as well.
A Bit About Azure Active Directory
Jeff: I have a quick side question about Active Directory and Azure Active Directory. Say you’re a small business, maybe you’re using Google Apps and you’re born in the cloud and you’re using various cloud apps and you really don’t have an on prem Windows Server infrastructure. Can you get Azure Active Directory and use it without having an on prem existing Active Directory infrastructure?
Mark: Absolutely. What you mentioned is a great scenario because Azure Active Directory serves as an identity point of federation with things like Google Apps. In fact, I can’t remember the number of SaaS apps that have integration with Azure Active Directory federation. Let’s see if I can pull up the number….
Jeff: I was just going to say over the last 5 or 10 years it seems like there’s been a big trend towards startups starting really lean and just running with an infrastructure in the cloud and not having a lot of on premise infrastructure because the cloud gives them agility and a bunch of other advantages. It’s interesting to see the growth that you’ve seen in Azure Active Directory Premium.
Mark: It’s been a huge amount of growth. That number, by the way, is over 1,200 SaaS apps are in our gallery that are pre-integrated with Azure Active Directory. Google Apps is just one example of that. Of course, Office 365 is a huge driver of identity in Azure Active Directory.
Jeff: I’ll let you finish your point there, but my next question was going to lead into how Microsoft is positioning Azure versus Amazon Web Services and Google Cloud Platform…
Mark: Yeah, I’d be happy to talk about that. I mentioned the point about Azure Active Directory having a nice parallel with what happened with Server. If you look at what really drove the success of Server it was messaging with Exchange, but Exchange using Active Directory as its identity and directory service.
The combination of Server, Active Directory, and Exchange is really what propelled Server to where it became a critical part of the back office across the IT landscape. The stat that you’re familiar with is that over 95 percent of IT is using Active Directory as their identity directory.
What we’re starting to see is that same thing happen with Azure. Azure being the equivalent of Server, Office 365 being the equivalent of Exchange, and Azure Active Directory being the equivalent of Active Directory, creating this nice virtuous cycle.
Microsoft Azure vs. Amazon Web Services and Google Cloud Platform
Jeff: We could argue that Microsoft Exchange was the killer app for Windows Server over the last decade or so. It’s a great foundation to build on. How do you position yourself against AWS and Google’s offering?
Mark: This is a question we get a lot. “How do we differentiate from those guys?” “Why should I pick you and not them?” We’ve boiled it down to saying that there are three values that we think that we excel at when it comes to cloud and what matters in the cloud. One is hyper-scale, one is hybrid and the consistency story between on prem and cloud, and the other one is being enterprise grade.
We usually represent these things as three circles that overlap, showing that we excel in all three. We are the only one of the three cloud vendors you mentioned that hits the center of that overlap, where the other two might excel in two of them but not all three.
When it comes to hyper-scale, what we’re talking about there is the size of our cloud and the global reach of our cloud. It’s pretty well acknowledged at this point that the three of us are the largest public clouds and, also, the three of us have larger private cloud infrastructure hosting our first party services, even behind the public cloud.
Google especially: their public cloud is a tiny fraction of their total infrastructure footprint. But we all know how to operate at massive scale, on the order of millions of servers. When it comes to global reach, this means having public cloud presence in regions around the world.
This is a place that we’re ahead of Amazon and Google, actually by a good margin. We’ve got 17 regions now and more coming online. You’re going to hear news about some more in the next couple of months.
[Editor’s Note: After this interview was conducted, Microsoft announced that it had opened two new Azure data centers in Australia, bringing the global total of Azure regions to 19.]
Because data center build-out can typically take a couple of years, there’s a whole bunch in flight and there’s a whole pipeline. You’re going to see a constant stream of new regions coming online over the next few years, and that trend’s probably going to accelerate.
We’re at 17. That’s double what Amazon’s got, and that’s five times the number that Google’s got at this point.
Then, when it comes to hybrid and consistency, this is a case where we’re unique across those three clouds. We’re the only ones that are really focusing on hybrid and focusing on a consistency story.
When it comes to hybrid, that’s really about connecting on prem to the cloud. We do that through networking. This is a place where we’ve got a competitive advantage right now with our ExpressRoute offering because a lot of enterprises can’t connect over the public Internet to a cloud service. There are a number of reasons they can’t or don’t want to.
One of them is the quality of service that they get on the public Internet. Most of them are working with ISPs or network providers to have dedicated lines between their own data centers and connections to the Internet.
We partner with a whole bunch of those providers to provide direct wire access into our backbone. This is what we call our ExpressRoute offering, with dedicated SLAs and bandwidth with high availability, so redundant [inaudible 22:37] lines and with support for hundreds of thousands of routes.
Many enterprises have very complex networks. We have partners like Level 3, Verizon, AT&T, British Telecom, and Orange.
You’re going to see the list of partners continue to grow over the next three years. We’re in full swing on that build-out to cover all the major network providers around the world.
That’s one part of the network connectivity aspect of it. Another part of the hybrid consistency story is having our own services be able to take advantage of the cloud when you’re on prem.
There are a few different ways that customers can take advantage of the cloud without really becoming dependent on the cloud. This is a really important part of cloud adoption.
A few years ago we were in the “What is the cloud?” phase, and then we went to the “Why cloud?” phase, and now we’re in the “How cloud?” phase. What we find when it comes to the “how” is that customers need to start using the cloud and get familiar with it so they can understand the governance, and the SLAs, and what kind of configuration management systems they need to put in place to support it, and what the cost structure looks like.
Some of the very low risk ways they can get started with it are through dev/test, and backup, and DR. All three of those are really low risk ways to take advantage of it, where you’re not putting your mission critical software at risk.
For example, dev/test: say you’ve got developers creating a marketing campaign, with a website and a SQL database. In traditional IT, it can take a few weeks to a couple of months to procure the servers and the virtual machines, get everything configured, and get the networking set up for them to get that going.
When it comes to the public cloud, what they can do is start doing dev/test on that marketing website while IT is getting that infrastructure ready. Within, literally, a few minutes, get those VMs up in the cloud, play with it.
When they’re done for the day, shut it down and they’re not paying the cost any more. They can create multiple dev/test environments like that very quickly, never putting production data at risk, never putting the production sites at risk, but just doing the development and testing up in the cloud and then bringing it back to on prem.
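(Editor’s Note: The economics of shutting down dev/test VMs when you’re done with them are easy to sketch. The hourly rate below is invented for illustration and is not an actual Azure price.)

```python
def vm_cost(hourly_rate, hours_per_day, days):
    """With pay-per-use cloud VMs, you pay only for hours actually run."""
    return hourly_rate * hours_per_day * days

RATE = 0.50  # hypothetical $/hour for a dev/test VM (made-up figure)

always_on = vm_cost(RATE, hours_per_day=24, days=30)   # left running all month
work_hours = vm_cost(RATE, hours_per_day=8, days=22)   # shut down nights/weekends
print(f"always-on: ${always_on:.2f}, work hours only: ${work_hours:.2f}")
# always-on: $360.00, work hours only: $88.00
```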
Another way is to do backup to the cloud. This is where we have things like integration of Windows Server with Azure storage for backup, integration of SQL Server with Azure storage for backup, and integration of Data Protection Manager with Azure for backup.
All of the data’s encrypted when it goes up to the cloud. The keys are all back on prem, so instead of spending tons of money buying storage for backup and managing all that infrastructure, now you’re leveraging, basically, infinite storage in the cloud at costs that nobody can compete with.
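(Editor’s Note: The pattern Russinovich describes — encrypting on premises so the cloud only ever holds ciphertext while the keys stay local — can be sketched as follows. The XOR keystream cipher here is a toy stand-in for a real algorithm like AES, and must not be used for actual security.)

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key + nonce + counter. A toy
    stand-in for a real cipher; illustration only, never for real use."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying the same function twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

key = b"stays-in-the-customer-data-center"   # the key is never uploaded
nonce = b"backup-2014-11-01"
backup = b"quarterly financials"

ciphertext = encrypt(key, nonce, backup)           # only this blob goes to the cloud
assert encrypt(key, nonce, ciphertext) == backup   # decrypts back on prem
print(ciphertext != backup)   # True: the cloud never sees plaintext
```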
Mark: [Another great Azure story] is Microsoft StorSimple, which is basically a bottomless storage appliance on prem. It uses the cloud as a tier for storage but, again, it’s encrypted, and you’re caching the local stuff on a local appliance. It’s not only serving as a tier for cache, basically a cache with bottomless storage, but also serving as a backup for that data as well.
I mentioned Azure Site Recovery. That’s another way you can orchestrate DR through the cloud. If you’re wary of pointing DR at the cloud, you can, of course, just take Azure Site Recovery and DR between your own two on prem locations, or you can DR to the cloud.
Again, it’s not putting your mainstream production workload at any more risk than it would be under already, because you’re serving the primary on prem. If that primary fails, the options are: go over to the cloud, where you’ll get service back, or be out of service if you don’t want to invest in a second on prem site.
Those are some of the great ways that people can get started and get familiar with the cloud.
Jeff: I’m going through all the stuff that you just mentioned. When you really step back and think about it, it’s amazing the level of technology development over the last 5 to 10 years, whether it’s how advanced virtualization has gotten and how much the cloud has come onto the scene. Maybe this is a great segue into a retrospective. You founded the company Sysinternals in 1996, correct?
Jeff: From the mid to late ’90s, from what IT was then to what it is now, you’ve seen the industry go from Windows 95 all the way up to what we’re doing today. Do you have any thoughts on that evolution? Are there things that have surprised you, that you never really expected would happen? There’s been strong growth in some areas, more in the consumer and the general technology space. And years ago everyone thought we’d be flying around in jet packs, and flying cars, and that sort of thing….
Jeff: It’s the same thing in IT. Maybe in the mid ’90s there were certain things people thought would happen in IT but never materialized, but things like the cloud that came up that maybe people weren’t expecting as much?
Mark: Yeah, the cloud…If you go back to what Larry Ellison was talking about, the network computer, you plug it in and IT is just a utility service, that was back in the mid ’90s. That was obviously way ahead of its time, like some of the mobile devices being released in the late ’90s that were striving to be something like an iPhone, but the technology wasn’t there to support it. Then, 10 years later, it was.
The ideas and the technology met to allow the creation of such products, delivering on the promise that the ideas had. I think cloud is similar to that.
Cloud has been something people have been pushing or wishing that we could realize. It took that maturation of IT processes, software, hardware to get to the point to actually deliver.
One of the key aspects of what these clouds are built on, they’re all built on virtualization. If you look, virtualization really started to take off at the very end of the ’90s and into the 2000s.
It’s taken a while for it to mature to the point where it could really run massive enterprise grade clouds on virtualization. Not just physical machine virtualization, but network virtualizations also advanced a lot in the last 5 to 10 years, and that’s a key aspect of the cloud, as well.
Actually, going back to the mid-to-late ’90s, those are the days when people would walk around with floppy disks to recover their Windows NT systems. There are still a few people out there doing that for their really precious systems, but for the most part that is not the way people manage their endpoints anymore, their servers or their client systems.
Anybody knows that if you really want to have a highly managed IT environment, there’s no data on any particular workstation that you care about, and if it starts to act funny you just reimage it. You don’t reimage it by walking around with CDs anymore. You reimage it by using remote reimaging capabilities.
It’s the same thing now with the role of IT, and [IT columnist and author] Mark Minasi (@mminasi) has really captured some of this really well because he talks about this very same point. In fact, in the ’90s IT pros were configuring servers and having to set jumpers on PCI ports to set interrupts so that the system would work at all. They were walking around with floppy disks and imaging servers that way.
They’re not doing that anymore. It’s not like the removal of those tasks has made their job any less busy. They’ve found other things to do. Obviously, they’re more valuable now than doing those things.
- Related: Mark and Mark Discuss the Cloud and Other Matters of Importance [Video – TechEd Europe 2014]
Jeff: Yeah, it’s funny. I remember. Years ago, I used to write for Computer Gaming World magazine in the days of MS DOS gaming. I remember writing a review of a PC game called Strike Commander that you needed the latest PC hardware to run. You also had to literally do this fine surgery on your autoexec.bat and config.sys files to make sure that you loaded the exact amount of memory so the thing would even boot, so the article had a sidebar showing you how to do that. It’s definitely come a long way.
Mark: I remember getting sound cards from CompUSA, Circuit City, whatever it was back then. Bringing them home and then having to play, let me try IRQ3. That’s not working. IRQ4. Now the sound card’s working but now the video card’s not working.
Jeff: Oh, yeah. Those were fun days, that’s for sure. So we’ve talked about some positive trends with the cloud and virtualization, and I know you’ve written quite a bit about security. You’ve also written some fiction novels about [IT security] as well. So now that we’ve discussed some history, maybe we could also discuss some of the negative trends we’ve seen in IT over the last 10 or 15 years?
Mark: [Security] is a negative trend just because IT hasn’t kept up with it. That is how the world has been shifting over the last 10 or 15 years security-wise, and IT has been slow to recognize and adapt to it.
The days of perimeter defense, with IT saying, “I’m going to create the firewall and the DMZ and everything’s cool,” are over. The effectiveness of that approach ended a long time ago, but you still hear people talk about it and see talks at security conferences [reminding people of that]: “The DMZ is dead. The perimeter is dead. The threats are now everywhere.”
The fact that that’s still a compelling title or abstract for a talk today shows just how slow the industry has been to adapt to that reality which has been there for a long time now.
Jeff: Yeah, that’s true. Shifting gears a little bit. There’s a lot of stuff that’s happened that we’ve talked about. There are all these great trends in virtualization and the cloud. There have been some amazing technological advances in IT over the last 10 or 15 years.
When it comes to the system administrator, the line IT professional who is implementing all this stuff and in the trenches making it all work, their role and responsibility has changed dramatically, also. Getting back to what we discussed a little bit earlier, if you had a friend or a family member who has just graduated from college, wants to go into IT and IT management, what two or three things would you tell them? What do you really need to focus on to advance your career and look at IT the way it is today rather than the way it was 10 years ago?
Mark: Obviously, I would tell them that they need to get familiar with the cloud. I think that the people that are going to be most valuable for IT and that companies are going to be looking for are the IT professionals that can help them get from on prem to the cloud or bridge on prem to the cloud.
The key aspect of that is networking. I think that network administrators and network engineers are going to be sought after as these complex connections between on prem and the cloud get built out.
Another part of it is that you’re going to see more of this trend that people call shadow IT, which is the business going around IT and going to the cloud. IT needs to figure out how not to get in the way of going to the cloud, but how to make sure that the business is doing it safely.
When I fundamentally look at the role of IT, it’s shifting from infrastructure provider to governance. IT has always been about governance, but governance has really taken a back seat to infrastructure management and deployment; it came riding along with that.
I think that the focus now will switch to governance, especially as you have sprawl in the cloud: keeping track of data and data classification, and making sure your policies around what data can be on prem versus in the cloud are followed. There are data sovereignty issues, especially with global companies, when you’re handling customer data where customers have their own requirements, or your own business has requirements regulated from the outside about where data can be.
If IT’s not figuring out how to play a helpful role in governance, one that doesn’t interfere with the agility the business is finding in the cloud, then they’re just not going to be relevant.
Jeff: I’ve only got one more question. This is more related to the stuff you’ve been working on on a personal basis. You’ve written three fiction novels to date, and I’ve heard that maybe one of those was optioned to be a movie? Maybe you could just give our readers a quick update on the latest in that area?
Mark: I’ve written three now. Three novels. The first one was “Zero Day.” The second one, “Trojan Horse,” came out in 2012. Then the third one just came out in May, called “Rogue Code.”
It’s actually, I think, the best one. It’s the one I’m most proud of, and I believe it’s extremely timely. In this one, a crime cartel has planted somebody inside the IT systems of the New York Stock Exchange. Over several years they’ve worked their way into a position where they’re able to deploy software into the trading engine.
The crime cartel then uses that position to inject malware that is then skimming trades to make a lot of money. Of course, our protagonist Jeff Aiken, is called in to do a pen test on the exchange and discovers that malware sitting there. The story then unfolds as he is realizing the scope of what he’s found.
The crime cartel, of course, learns that he’s on to them, and the race is on to stop a massive final heist as they decide to pull out.
High frequency trading has been in the news a lot. Very coincidentally, Michael Lewis’ book “Flash Boys” came out just a month before Rogue Code did. That book focuses on high frequency trading and how he believes HFT works; it’s basically what I call digital front running. It is actually enabled, many times, by the exchanges pumping the front running through special order types.
The simplest form of the front running is I make a trade on this exchange here. HFT systems see that. They run to another exchange using high speed algorithms and network connections to make trades on that other exchange in anticipation that my order’s not going to get filled or the price is going to change based on what I just did. Now they can take advantage of that in this second exchange.
Then there’s the order type aspect of it: they can place orders that sit in a queue, and when those orders execute, that’s basically their detection of activity on a particular stock at a particular price. They can then cancel orders that are sitting in the queue, or they can jump the queue using special order types to get ahead of other traders.
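(Editor’s Note: The latency-arbitrage form of front running Russinovich describes can be modeled in a few lines. The prices, share counts, and latencies below are invented; this is a toy sketch of the race, not a trading system.)

```python
def front_run(order, hft_latency_ms, quote_update_ms):
    """Toy model of 'digital front running': an HFT system sees an order
    execute on exchange A and races to exchange B, trading there before
    B's quote reflects what just happened on A."""
    if hft_latency_ms < quote_update_ms:
        # Buy at B's stale price, then sell at the new, higher price.
        profit_per_share = order["new_price"] - order["stale_price"]
        return round(profit_per_share * order["shares"], 2)
    return 0.0   # too slow: the quote moved first, so there is no edge

# Invented numbers: a 2-cent price move on a 10,000-share order.
order = {"shares": 10_000, "stale_price": 20.00, "new_price": 20.02}
print(front_run(order, hft_latency_ms=0.5, quote_update_ms=2.0))   # 200.0
```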
All of that is the background for Rogue Code and how the criminals can sit and hide in that gaming of HFT to go undetected.
Of course, there are some other themes, like insider threats. I also describe in some detail the security system that I would imagine players like the New York Stock Exchange have, with jump boxes and segregated networks, and show how the hackers and Jeff manage to get from one side to the other.