In a previous post, I argued that in the case of Kubernetes, the hype may be ahead of actual adoption. The responses to that article have ranged from basic questions about containers to very firm views that lots of people are already running multi-cluster, multi-cloud containerized infrastructure. Regarding the latter view, from experience I know that smaller, more agile companies tend to underestimate the complexity behind large enterprises’ slower reactions to change (Ian Miell does a superb job of breaking down the slowness of enterprise in this blog post). So, in this article, I’ll try to do three things:
- Tighten up some basic definitions for the non-techie folks who may still be wondering what the fuss is about
- Mention the number one common attribute I’ve seen amongst companies succeeding with containers
- Explain why moving to containers is just the tip of the iceberg of your IT concerns
Is my monolithic app running in a container on VMware cloud-native?
Well, sorry to say, but it depends. Let’s remember that a container is just a software package that can be run on infrastructure, and that there are (at least) two types of containers.
System containers have been around longer, since the days of LXC (2008) or, arguably, the days of Solaris Zones before that. These are, simplistically speaking, small and efficient units that behave like virtual machines. They can support multiple executable applications, and offer isolation as well as other features that system administrators will feel safe with, like easy manageability. This is ideal for traditional apps that you want to containerize without completely revolutionizing your IT practices, and the benefit is simple: increasing application density per server by over 10x vs. a virtual machine.
Application containers have a single application to execute. This is the world of the Docker image format (not the same as Docker Engine, Docker Swarm or Docker Inc.) and OCI, and what most people refer to when they mention the word ‘container’. The benefit here from an IT perspective is that application containers running a microservices app bring the full promise of cloud-native to life: a high delivery pace, near-infinite scalability, and improved resilience. Those dramatic outcomes demand a significant culture and technology shift, as I will discuss in more detail later.
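To make the ‘application container’ idea concrete, here is a minimal, hypothetical Dockerfile; the application file and base image are illustrative stand-ins, not a prescription:

```dockerfile
# One image, one application process: the OCI/Docker model
FROM python:3.11-slim        # a small base image keeps the package lean
WORKDIR /app
COPY app.py .                # 'app.py' is a stand-in for your single service
CMD ["python", "app.py"]     # exactly one process is this container's job
```

Building and running this (`docker build`, then `docker run`) yields a single-purpose unit. Contrast that with a system container, which boots something closer to a full operating-system environment and can host many processes, much like a slim virtual machine.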
Containers are just packages and will, broadly, do as they’re told; microservices is an architectural approach to writing software applications; and cloud-native is a method of delivering those applications. To answer the question above: throwing away existing resources and ignoring business outcomes in pursuit of an ideal is probably bad practice, so if you have a VMware environment that is there to stay, then that is probably cloud-native enough for now (VMware’s acquisition of Heptio is interesting in that sense for the future use case). The bottom line is that getting hung up on the easiest item on that list to grasp (containers) is a common error.
Thor’s hammer does not exist in IT
I recently met with the head of Cloud for a large UK financial services company, who told me that the previous CIO had to leave his post after failing to deliver on a ‘straight to serverless’ strategy, i.e., leap-frogging the cloud and container revolutions to run workloads on the company’s well-used private datacenter with serverless technology. That the CIO had to leave isn’t a major surprise: in any line of work, we need to use the right tools for the right jobs, especially when those jobs are complex. In cloud, that means that in most cases, we will likely be using a combination of bare metal, virtual machines, containers and serverless, on any combination of private server farm, private cloud or public cloud.
Without a doubt, the one thing I have seen as a first step in a successful IT transition journey is this: not trying to over-simplify a dramatic IT (r)evolution. Instead, it means understanding the change holistically, and judging it against business outcomes and objectives. It’s good to strive, but not all companies have the resources to be cloud-native purists, and there are clear benefits even in smaller steps, like allowing more time for analysis of the technology’s impact, or enabling better risk management. (This post from container company Cloud 66 does well to describe the short-term efficiency, and long-term insight, gains of moving a monolithic app to a container.)
Known-unknowns and unknown-unknowns
Ultimately, though, we wouldn’t be so excited about the container revolution if it were just about squeezing in more monolithic applications. A microservices app, running in containers, orchestrated on multiple substrates, and all of it according to cloud-native principles: that is something worth sweating for. An application that scales quickly and reliably with fewer operational resources, that adapts fast to customer and competitive dynamics, and that self-heals, is where a lot of us are aiming.
Again, containers are just one part of this. Consider technological challenges: What about orchestration? And network? And stateful services? And cloud-native-ready pipeline tools?
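To give one concrete face to the orchestration question, here is a minimal, deliberately simplified Kubernetes Deployment manifest; the names and image are illustrative. Declaring a desired state like this is the easy part; networking, storage for stateful services, and cloud-native pipelines all layer on top of it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical service name
spec:
  replicas: 3                # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # any OCI application container image
          ports:
            - containerPort: 80
```

Even this small sketch raises the follow-on questions in the list above: how traffic reaches those three replicas, where their data lives, and how a delivery pipeline rolls out the next image version safely.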
Arguably even more important, consider cultural challenges: What needs to change with our development practices? How do we find and retain the right talent? How do we re-skill older talent and bridge between generations? How will the risk balance change?
An example: open source is already part of your strategy
It is a well-documented fact that the rise of cloud and open-source has been connected, which also brings some interesting tensions, as I explored in my previous article. In containers, this synergy seems stronger than ever. The juggernaut behind Kubernetes and many related open source projects, the Cloud Native Computing Foundation (CNCF), is part of the Linux Foundation. The CNCF charter is clear about the intentions of the foundation: it seeks to foster and sustain an ecosystem of open source, vendor-neutral projects. Consequently, since the CNCF’s inception in 2015, it has become increasingly feasible to manage a complex cloud-native stack with a large mix of these open source projects (some interesting data in the foundation’s annual report). The more you get into container-native methodologies, the more open source you will use.
The other side of this picture is that open source packages make up significant chunks of free and proprietary applications alike: while your whole app may be proprietary, the part your team actually wrote may be a small fraction of it. As the State of Open Source Security Report shows, open source usage is tightly coupled with digital transformation, and as such is becoming increasingly business-critical; however, only 17% of maintainers rank their security expertise as high, which means that many of those packages are an operational risk.
Yet another aspect is community: using more open source makes the organization a stakeholder, and as such it should liaise with the relevant community to see how it can contribute, and also how it can get exposure to roadmap and to security alerts as quickly as possible.
No containing this
So, to summarize the above example: a ‘simple’ decision to join the container wave will inherently and significantly increase the organization’s benefit from, and exposure to, open source software. That software may or may not be supported by large sponsors, will probably be maintained to a meaningful extent by volunteers, and will probably have a vibrant community behind it, all of whom need to be engaged by users who rely on these projects.
In other words, not simple. Containers are a critical part of a digital transformation, but just one part. The whole of this transformation—parts of which will appear in your software delivery systems without you expecting them—can enable great things for your applications, if approached with the right mix of openness, maturity and responsibility.
(Originally posted on Forbes.com)