It is gradual, but software is slowly taking over the world. Businesses (and people) now operate 24x7 in a global economy that never sleeps. About a decade ago, IT began moving from waterfall development to agile software development methodologies to support the ever-increasing rate of change for new functions, features, and updates. It is becoming the norm for companies to deploy software multiple times per day. DevOps is the methodology that grew out of agile, and a major enabler of DevOps is containers. This raises the question: are containers a short-lived success, or the new way to package and deploy software? I argue that containers are the new way and are here to stay, until a new software development and deployment methodology supplants DevOps at some future date. Before I tell you why I believe this, let us first define containers.
Here is the definition from Docker's What Is Docker page: "Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, and system libraries – anything you can install on a server."
This guarantees that the container will always run the same regardless of the environment it is running in. Containers also allow you to wrap up multiple pieces of software together if you so choose. The Linux kernel gained Control Groups (cgroups) in version 2.6.24, released in 2008, and cgroups, together with kernel namespaces, became the foundation of Linux Containers (LXC), which in turn was the original foundation for Docker. Google has used cgroups since 2006 as a way to isolate resources running on shared hardware. In 2014, Google acknowledged firing up over 2 billion containers in a week (yes, a week!) and has its own container tooling, an alternative to LXC, called lmctfy (Let Me Contain That For You). Docker simplified container technology by making it easier to use, with a high-level API and good documentation. dotCloud released the results of the Docker project as open source in March 2013, and the unstoppable tide started rising.
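The cgroup machinery mentioned above is visible to every process on a Linux system. As a minimal sketch (Linux-specific, and degrading gracefully elsewhere), each line of /proc/self/cgroup names a control-group hierarchy that the current process belongs to:

```python
from pathlib import Path

# /proc/self/cgroup lists the control-group membership of this process.
# Each line has the form hierarchy-ID:controller-list:cgroup-path
# (a single "0::/..." line on cgroup-v2 systems).
cgroup_file = Path("/proc/self/cgroup")
if cgroup_file.exists():
    for line in cgroup_file.read_text().splitlines():
        print(line)
else:
    print("no /proc/self/cgroup here (not a Linux system)")
```

Container runtimes such as Docker use these same hierarchies to limit and account for the CPU, memory, and I/O of each container.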
All containers running on a single machine share the same operating system kernel, including common files. This is in contrast to hypervisors (virtual machine managers), where each virtual machine runs its own complete operating system and appears to have the host's resources all to itself. A container can be copied while its application keeps running, whereas copying a virtual machine typically requires stopping the application.
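One way to see the shared kernel for yourself (a sketch, assuming Python 3 is available on the host and in a container image) is that every process reports the same kernel release, because there is only one kernel:

```python
import platform

# All containers on a host share the host's single kernel, so running
# this script on the host and inside a container on that same host
# (for example via a hypothetical `docker run --rm python:3-alpine`
# invocation) prints the same kernel release string. A guest OS in a
# virtual machine would report its own, separate kernel instead.
print(platform.release())
```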
Why do I believe that containers are here to stay, an unstoppable tide if you will? I look to see how large companies are reacting. I recall the story of King Canute and the waves. King Canute ruled England and much of Scandinavia. He had his throne set on the seashore and commanded the incoming tide, as part of his kingdom, to halt and not wet his feet and robes. As his legs got wet, he had his throne moved to dry land and said, "Let all men know how empty and worthless is the power of kings, for there is none worthy of the name, but He whom heaven, earth, and sea obey by eternal laws."

How is the world's largest software company, Microsoft, reacting to containers? In June 2014, Microsoft Azure added support for Docker containers on Linux VMs, matching AWS, which had offered Docker support since April 2014 through Elastic Beanstalk. In October 2014, Microsoft and Docker jointly announced that they were bringing the Windows Server ecosystem to the Docker community (Windows Server support for Docker). At DockerCon 2015, Mark Russinovich, CTO of Microsoft Azure, demonstrated the first-ever application built from code running in both a Windows Server Container and a Linux container connected together. This demo was a preview of Windows Server Containers, part of Windows Server 2016.
But wait, there's more! Microsoft did not stop at providing equivalent Docker container support for Windows. It took the logical (virtual?) next step and brought a container-like experience to virtual machines. In April 2015, Microsoft announced virtualized containers and a containerized Windows Server:
Hyper-V Containers, a new container deployment option with enhanced isolation powered by Hyper-V virtualization, and
Nano Server, a minimal footprint installation of Windows Server that is highly optimized for the cloud, and ideal for containers.
VMware, an 800-pound gorilla in the hypervisor space, has also responded, indicating that it, too, now recognizes that the container tide is unstoppable. Not long ago, VMware was highlighting the need for VMware services and VMware's hypervisor in containerized deployments. At VMworld 2015, VMware announced vSphere Integrated Containers and the VMware Photon Platform, offerings similar to Microsoft's announcements.
Containers are an important enabler for DevOps. Developers can quickly copy and spin up new instances of software without first having to set up the environment. Because containers are self-contained, developers do not have to coordinate their use of common language stacks and tooling. With the environment defined as part of the container, the application works the same on any machine, whether in development, test, or production.
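"The environment defined as part of the container" usually means a Dockerfile checked in next to the code. As a hypothetical sketch (the file names, base image, and application are illustrative assumptions, not from the original article), a Dockerfile captures the runtime, dependencies, and code in one reproducible recipe:

```dockerfile
# Illustrative sketch: the image carries the runtime, libraries, and
# code, so it behaves the same in development, test, and production.
FROM python:3-alpine

WORKDIR /app

# Dependencies are baked into the image at build time,
# not installed separately on each target machine.
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY app.py .
CMD ["python", "app.py"]
```

Build the image once (for example, `docker build -t myapp .`) and the resulting artifact runs unchanged on any Docker host, which is exactly the repeatability DevOps pipelines depend on.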
King Canute could not stop the incoming tide and recognized a higher authority. We now see the world's largest software company and a major hypervisor company recognizing the inevitability of containers, adopting them, and adapting to them.
Disclaimer: I work for Hewlett Packard Enterprise. Opinions expressed here are my personal opinions, not those of HPE.
Technology entrepreneur with 30+ years of experience successfully launching new products and services for small-to-large companies. Now applying his knowledge and experience at Hewlett Packard Enterprise as a Chief Technologist. Also active in the Seattle start-up community. Educated in Physics at MIT and holds an MBA from the Wharton School, University of Pennsylvania.