Preface

We are in the year 2016. In fact, we are nearly at the end of it! How amazing to look back and reflect on all of the big changes that have happened in technology over the past 15 years. In some ways, it seems as though Y2K happened only yesterday, with everyone scrambling to make sure their DOS-based and green-screen applications could handle four-digit years. It seems unthinkable to us now that these systems could have been created in a way that was so short-sighted. Did we not think the world would make it to the year 2000? Today, we build technology with a very different perspective and focus. Everything is centralized, redundant, global, and cloud driven. Users expect 100% uptime, from wherever they are, on whatever device happens to be sitting in front of them. The world has truly changed.

And as the world has changed, so has the world of technology infrastructure. This year, we are introduced to Microsoft's Windows Server 2016. Yes, we have officially rolled past the halfway marker of this decade and are quickly on our way to 2020, which has always sounded so futuristic. We are living in and beyond Doc and Marty's future: we are actually testing hoverboards, and even some of the wardrobe predictions given to us through cinema no longer seem so far-fetched.

From the perspective of a user, a consumer of data, backend computing requirements are becoming almost irrelevant. Maintenance windows, scheduled downtime, system upgrades, slowness caused by weak infrastructure: these things have to become invisible to the workforce. We are building our networks in ways that allow knowledge workers and developers to do their jobs without giving a thought to what is supporting those job functions. What do we use to provide that level of reliability and resiliency? Our datacenters haven't disappeared. Using the words "cloud" and "private cloud" so often doesn't make any of this magic. What makes this centralized, "spin up what you need" mentality possible is still physical servers running in physical datacenters.

What drives the processing power of these datacenters for most companies in the world? Windows Server. In fact, I recently attended a Microsoft conference that included many talks and sessions about Azure, Microsoft's cloud computing platform. Azure is enormous, offering all kinds of services and sitting at the leading edge of cloud computing and security technology. I was surprised in these talks to hear Windows Server 2016 referenced time and time again. Why were Azure presenters talking about Server 2016? Because Windows Server 2016, the same Server 2016 that you will be installing into your datacenters, is what underpins all of Azure. It is truly ready to service even the heaviest workloads, in the newest cloud-centric ways. Over the last handful of years, we have all become familiar with Software-Defined Computing, using virtualization technology to turn our server workloads into a software layer. Now we are hearing more and more about expanding on this idea with new technologies such as Software-Defined Networking and Software-Defined Storage, which enhance our ability to virtualize and share resources on a grand scale.

In order to make our workloads more flexible and cloud-ready, Microsoft has taken major steps to shrink the server platforms themselves and to create brand new ways of interfacing with those servers. We are talking about things like Server Core, Nano Server, Containers, Hyper-V Containers, and the Server Management Tools. Windows Server 2016 brings us many new capabilities, and along with those capabilities comes plenty of new terminology and acronyms to learn.

Let's take some time together to explore the inner workings of the newest version of this server operating system, which will drive and support so many of our business infrastructures over the coming years. Windows Server has dominated datacenter rack space for more than two decades; will this newest iteration, Windows Server 2016, continue that trend?