These Are Not The Containers You’re Looking For

In a previous post, I argued that in the case of Kubernetes, the hype may be ahead of actual adoption. The responses to that article have ranged from basic questions about containers to very firm views that lots of people are already running multi-cluster, multi-cloud containerized infrastructure. Regarding the latter view, I know from experience that smaller, more agile companies tend to underestimate the complexity behind large enterprises’ slower reactions to change (Ian Miell does a superb job of breaking down the slowness of enterprise in this blog post). So, in this article, I’ll try to do three things:

  • Attempt to tighten up some basic definitions for the non-techie folks who may still be wondering what the fuss is about
  • Mention the number one common attribute I’ve seen amongst companies succeeding with containers
  • Explain why moving to containers is just the tip of the iceberg of your IT concerns
"See it from the window"

“See it from the window” (Photo by Robert Alves de Jesus on Unsplash)

Is my monolithic app running in a container on VMware cloud-native?

Well, sorry to say, but it depends. Let’s remember that a container is just a software package that can be run on infrastructure, and that there are (at least) two types of containers.

System containers have been around longer, since the days of LXC (2008) or, arguably, the days of Solaris Zones before that. These are, simplistically speaking, small and efficient units that behave like virtual machines. They can support multiple executable applications, and offer isolation as well as other features that system administrators find reassuring, such as easy manageability. This is ideal for traditional apps that you want to containerize without completely revolutionizing your IT practices, and the benefit is simple: increasing application density per server by over 10x vs. a virtual machine.
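As a minimal, hedged sketch (the container name and image details are illustrative, and a Linux host with the classic LXC tools installed is assumed), creating and entering a system container looks something like this:

    # Create a system container from a downloaded Ubuntu image (names are illustrative)
    sudo lxc-create -t download -n demo-sys -- -d ubuntu -r bionic -a amd64

    # Start it and attach a shell: inside, it behaves much like a small virtual
    # machine, with its own init able to run multiple applications and services
    sudo lxc-start -n demo-sys
    sudo lxc-attach -n demo-sys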

Application containers have a single application to execute. This is the world of the Docker image format (not the same as Docker Engine, Docker Swarm or Docker Inc.) and OCI, and what most people refer to when they mention the word ‘container’. The benefit here, from an IT perspective, is that application containers running a microservices app bring the full promise of cloud-native to life: high delivery pace, almost infinite scalability, and improved resilience. Those dramatic outcomes demand a significant culture and technology shift, as I will discuss in more detail later.
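To make the contrast concrete, here is a hedged sketch of the application-container workflow (the image name and port are hypothetical, and a Dockerfile describing the single application is assumed to exist in the current directory):

    # Build a single-application image from the Dockerfile in this directory
    docker build -t myapp:1.0 .

    # Run it: one container, one process, disposable and reproducible
    docker run --rm -p 8080:8080 myapp:1.0

Each container runs exactly one application process; scaling means running more containers, not growing the one you have.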

Containers are just packages and will, broadly, do as they’re told; microservices is an architectural approach to writing software applications; and cloud-native is a method of delivering those applications. To answer the question above: throwing away existing resources and ignoring business outcomes in pursuit of an ideal is probably bad practice, so if you have a VMware environment that is there to stay, then that is probably cloud-native enough for now (VMware’s acquisition of Heptio is interesting in that sense for future use cases). The bottom line is that getting hung up on the easiest item on that list to grasp (containers) is a common error.

Thor’s hammer does not exist in IT

I recently met with the head of Cloud for a large UK financial services company, who told me that the previous CIO had to leave his post after failing to deliver on a ‘straight to serverless’ strategy, i.e., leap-frogging the cloud and container revolutions in order to operate workloads on the company’s heavily used private datacenter with serverless technology. That the CIO had to leave isn’t a major surprise: in any line of work, we need to use the right tools for the right jobs, especially when those jobs are complex. In cloud, that means that in most cases we will likely be using a combination of bare metal, virtual machines, containers and serverless, on any combination of private server farm, private cloud or public cloud.

Without a doubt, the one thing I have seen as a first step in successful IT transition journeys is this: not trying to over-simplify a dramatic IT (r)evolution, but instead understanding it holistically and judging it vis-à-vis business outcomes and objectives. It’s good to strive, but not all companies have the resources to be cloud-native purists, and there are clear benefits even in smaller steps, like allowing more time for analysis of the technology’s impact, or enabling better risk management. (This post from container company Cloud 66 describes well the short-term efficiency gains, and longer-term insight gains, of moving a monolithic app to a container.)

Known-unknowns and unknown-unknowns

Ultimately, though, we wouldn’t be so excited about the container revolution if it was just about squeezing in more monolithic applications. A microservices app, running in containers, orchestrated on multiple substrates, and all that according to cloud-native principles—that is something worth sweating for. An application that scales quickly and reliably with less operational resource, that adapts fast to customer and competitive dynamics, and that self-heals, is where a lot of us are aiming.

Again, containers are just one part of this. Consider the technological challenges: What about orchestration? And networking? And stateful services? And cloud-native-ready pipeline tools?
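To make the orchestration question concrete, here is a hedged sketch of handing the hypothetical image from earlier to Kubernetes (the deployment name, image and replica count are illustrative):

    # Ask the orchestrator to run the image, then scale it declaratively
    kubectl create deployment myapp --image=myapp:1.0
    kubectl scale deployment myapp --replicas=5

    # Kubernetes now works to keep five replicas running, rescheduling failures
    kubectl get pods

Each of the other questions (networking, state, pipelines) carries a similar learning curve of its own.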

Arguably even more important, consider cultural challenges: What needs to change with our development practices? How do we find and retain the right talent? How do we re-skill older talent and bridge between generations? How will the risk balance change?

An example: open source is already part of your strategy

It is well documented that the rise of cloud and the rise of open source have been connected, which also brings some interesting tensions, as I explored in my previous article. In containers, this synergy seems stronger than ever. The juggernaut behind Kubernetes and many related open source projects, the Cloud Native Computing Foundation (CNCF), is part of the Linux Foundation. The CNCF charter is clear about the intentions of the foundation: it seeks to foster and sustain an ecosystem of open source, vendor-neutral projects. Consequently, since the CNCF’s inception in 2015, it has become increasingly feasible to manage a complex cloud-native stack with a large mix of these open source projects (there is some interesting data in the foundation’s annual report). The more you get into container-native methodologies, the more open source you will use.

The other side of this picture is that open source packages make up significant chunks of free and proprietary applications alike—while your whole app may be proprietary, the bit your team actually wrote may be very small within it. As the State of Open Source Security Report shows, open source usage is tightly coupled with digital transformation, and as such is becoming increasingly business-critical; however, only 17% of maintainers rank their security expertise as high, which means that a lot of those packages carry operational risk.

Yet another aspect is community: using more open source makes the organization a stakeholder, and as such it should liaise with the relevant communities to see how it can contribute, and how it can get exposure to roadmaps and security alerts as quickly as possible.

No containing this

So, to summarize the above example: a ‘simple’ decision to join the container wave will inherently and significantly increase the organization’s benefit from, and exposure to, open source software. This software may or may not be supported by large sponsors, will probably be maintained to a meaningful extent by volunteers, and will probably have a vibrant community behind it—all of whom need to be engaged by users who rely on these projects.

In other words, not simple. Containers are a critical part of a digital transformation, but just one part. The whole of this transformation—parts of which will appear in your software delivery systems without you expecting them—can enable great things for your applications, if approached with the right mix of openness, maturity and responsibility.

(Originally posted on Forbes.com)

Open Source Software At A Crossroads

“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity”

—Charles Dickens

Marc Andreessen famously said that software is eating the world; others have more recently added that, at least in B2B, open source is eating software. Indeed, 2018 was a landmark year for users, enterprises and investors alike—but did it also sow the seeds of a potential slowdown in open source investment, and perhaps even usage?

In the cloud world, where the operational friction of software is reduced or removed, and where economies of scale are extremely effective, public cloud providers present a challenge to many open source projects. Over the past several years, I have had the good fortune of being involved in key deals to monetize some of the largest open source software projects—in a way that keeps them free and usable for end users, and gets cloud providers to support the community. However, given what we’ve seen in 2018 and just last week, the economic model for funding open source may be at risk. It is down to more innovative aggregation models and powerful open source communities to ensure that open source continues to gain ground.

Open sign (Photo by Finn Hackshaw on Unsplash)

Continued adoption, M&A explosion

It’s no secret that open source use is accelerating, and is driving some of the most critical pieces of modern IT. In addition, the Linux Foundation recently reported that in the last five years, membership has gone up by 432 percent.

On top of that, 2018 saw roughly $57 billion of value creation in open source M&A and IPOs. The number jumps by $7.5 billion if you count GitHub’s acquisition by Microsoft, despite the fact that GitHub is not a developer or curator of open source software as such; rather, it accelerates the use of open source (with “pervasive developer empathy”, as I heard Accel’s Adrian Colyer put it). This is a story of absolute sums but also of multiples, as effectively analyzed by Tomasz Tunguz in this blog post.

Over the years we’ve seen different approaches to monetizing open source, and we have examples of them all in the past year’s exits (the following is just my simplistic breakdown):

  • Sell support and services (Heptio, acquired by VMware)
  • Sell hosted/managed open source (MongoDB as-a-service)
  • Have an open source core, sell extra features around it (Elastic, Pivotal IPOs)
  • Make order out of chaos (Red Hat, acquired by IBM; also Pivotal)
  • Aggregate and accelerate other people’s code (GitHub, acquired by Microsoft)

Of the above, as far as I have seen, the first two are probably the most common, and arguably the most vulnerable. If the technology can be wrangled effectively by internal teams or by consultants, there is less of an incentive to buy support from vendors (as Ben Kepes mentioned in his analysis of Cloud Foundry’s position); and if AWS is the best company in the world at selling software services over the web, then it has an immediate advantage over providers who primarily sell through other channels, including commercial sponsors of open source. For this reason, recent developments around open source licensing are particularly important.

MongoDB and Redis pull back, AWS reacts

Last week, AWS announced on its blog the launch of DocumentDB, a MongoDB-compatible database. As some pundits have pointed out, this is clearly a reaction to MongoDB, Inc.’s new and highly restrictive license, the Server Side Public License (SSPL)—a move the publicly traded MongoDB made in order to protect its revenue position.

Earlier last year, Redis Labs learned a hard lesson in community relations management when it took a less dramatic step: while continuing to offer its Redis database under a permissive license, it added the “Commons Clause” to the licensing of its add-on modules, so that service providers would need to pay for their use. While the communication could have been clearer, the action itself is similar in intent to what MongoDB did, and to what many other open source companies have attempted or plan to attempt.

Bill Gates once said that “A platform is when the economic value of everybody that uses it, exceeds the value of the company that creates it. Then it’s a platform.” By that measure, if AWS is the ultimate cloud platform of our times, then AWS should recognize that it is better off because software like Redis exists, and therefore work to increase—rather than limit—Redis Labs’ overall revenue. In this conundrum, community may be the key.

Communities: the un-replicable asset

A big part of open source’s success over proprietary software has been its ability to move in an agile fashion, with strong and fast feedback loops, and devoted developer engagement. As fast as AWS moves, with its famous customer obsession and feature release pace, it is still constrained by money and financial priorities. Many open source contributors don’t work for money (although communities should be supported financially, if we want to keep enjoying the independence of some key projects), and some open source projects are almost too big even for IT giants to replace with a proprietary clone. On top of that, consider that many developers will simply switch off any solution that smells of corporate interest, especially if there is an open source alternative.

This reminds me of a protracted discussion I was part of some years ago with an IT giant that was considering cloning a piece of open source software it intended to distribute. At the end of a call in which a group of execs agreed on the cost of engineering, a developer advocate posed a simple question: we now know how much building this will cost, but how much will we invest in building a community? (The IT giant chose to support and collaborate with the project.)

What does 2019 hold?

While the economic tension between public cloud services and the first two or three open source monetization models might increase in 2019, I estimate that ‘order-makers’ and ‘aggregators’ will continue to go from strength to strength. Specifically, companies that accelerate reliable production use of open source—from GitHub competitor Atlassian to open source security company Snyk—are proving that there is great value to be offered to users by focusing on the security and usability of, and collaboration around, small and large projects alike.

What might change in the immediate future is the pace and size of venture capital investment into open source companies, but this could also be the cyclical product of a very loaded 2018, and not only related to business model frictions.

In either case, a focus on building and sustaining healthy open source communities and differentiating business models seems to be more important than ever.

(Originally posted on Forbes.com)