No Easy Way Forward For Commercial Open Source Software Vendors

In an earlier article, I examined some of the recent dynamics in open source software, specifically around the for-own-profit commercialization of some projects by large cloud providers, and how that is driving smaller companies to seek out restrictive license models, in the process causing themselves considerable friction in their communities.

As befits a piece dealing with free software and a polarized cloud industry, the article struck a chord with several people, some of whom contacted me to agree or disagree with my points. Rather than keep those conversations to myself, I thought a follow-up exploring the inside views of three of these luminaries would be much more engaging.

In this article, I’ll summarize the main points from my conversations with Spencer Kimball, co-founder and CEO of Cockroach Labs; Joseph Jacks, founder and general partner of OSS Capital; and Abby Kearns, Executive Director of the Cloud Foundry Foundation. All have extensive track records in open source, but each has a slightly different take.



The independent vendor perspective: Spencer Kimball

While still a student in 1995, Kimball, along with Peter Mattis, developed the first version of the GNU Image Manipulation Program (GIMP) as a class project. Later, as a Google engineer, he worked on a new version of the Google File System and the Google Servlet Engine. In 2012, Kimball, Mattis, and Brian McGinnis launched the company Viewfinder, later selling it to Square.

Drawing on his experiences at Google, Kimball wanted a technology like BigTable to be made available as open source outside of the company, and co-founded (again, with Mattis, and ex-Googler Ben Darnell) the company Cockroach Labs to provide commercial backing for CockroachDB, an open source project.

According to Kimball, whichever cloud provider is best at brokering the multi-cloud migration will ‘win’ cloud. He adds that CockroachDB was built for that multi-cloud, multi-region, relational future, in which scale and complexity, but also privacy frameworks such as GDPR, become critical business drivers. But as optimistic as he is about the business, Kimball is also concerned about the sustainability of today’s and tomorrow’s venture-backed commercial OSS businesses.

Red Hat, Kimball reflects, clearly ‘figured out’ the model for commercial OSS before the days of cloud, becoming the dominant force in the commercial OSS business. The Red Hat ‘equilibrium’ (Kimball’s term) was based on selling contracts for support and professional services on top of widely-available OSS. With the emergence of cloud, Red Hat capitalized on the complexity of ‘big-software’ systems such as OpenStack and Kubernetes. (Bassam Tabbara of Upbound has commented on how this model might change with the IBM acquisition.)

Kimball states, “with cloud becoming the mainstream way to consume and manage IT, the complexity of some OSS provides a natural advantage to cloud platforms such as AWS or Azure, as they can use economies of scale to build a managed service out of any open source core.” He adds, “they can also offer enterprise support on top, effectively taking the bottom 50% of an emerging vendor’s total addressable market, and also capping its growth in the enterprise high-end.” So what is an emerging vendor to do? “The best protection is community,” says Kimball. Engaged, committed groups of maintainers, contributors and users are impossible to copy or to replicate in a managed service, and can keep even the most aggressive IT giant at bay.

Another protection could be to address a multi-cloud niche, as Cockroach Labs has done, serving customers in the gap between lock-in-focused cloud providers. At the end of the list, Kimball mentions restrictive (“almost ‘byzantine’,” he says) licenses and other defensive models such as ‘free for use, source available’, whole-compilation protection and more—all suboptimal, and none in line with the principles of free software.

In light of these comments from Kimball, it is very interesting—if not entirely surprising—to note CockroachDB’s licensing change, announced last week on the company’s blog: the project is adopting a version of the Business Source License (BSL) that, unlike MariaDB’s version, is not limited by node count, but prohibits other players (read: AWS) from offering a commercial version of CockroachDB as a service without buying a license. This announcement has already caused friction on social media and in the blogosphere (which I would rather not amplify by referencing).

The venture investor perspective: Joseph Jacks

OSS Capital is the world’s first VC firm exclusively focused on investing in and partnering with commercial open-source software startups. An early contributor to Kubernetes, Jacks previously founded Kismatic, which he sold to Apprenda, as well as founding the container mega-tradeshow KubeCon, which he donated to the CNCF at its inception.

OSS Capital’s equity partner and advisory network of commercial OSS founders has collectively captured over $140bn in value across 40 of the largest COSS companies of the previous decade; transferring this knowledge and expertise to the next generation of commercial OSS founders is a core part of OSS Capital’s value proposition. Additionally, OSS Capital organizes a community conference for commercial OSS.

When asked about the strategic outlook for OSS given recent skirmishes, Jacks points out that the pie is getting much, much bigger: since companies outside of what is considered the software industry (from cars to home appliances) are effectively becoming producers of software, that grows the addressable market considerably, and will result in an acceleration of open source well beyond what we’ve seen so far.

Even from within the tech industry, Jacks says, “many OSS projects disrupt and/or bring transformational innovation to major global industries like data processing and storage (Spark, Ceph, Hadoop, Kafka, MongoDB, CockroachDB, Neo4j, Cassandra), operating systems (Linux, FreeRTOS), semiconductors (RISC-V), networking/CDNs (Envoy, Varnish), software engineering (Docker, Go), computing (Kubernetes), search (Elasticsearch), AI (TensorFlow, PyTorch).” Those two major developments, says Jacks (software production spreading beyond the traditional software industry, and OSS disrupting major industries from within it), will reframe the playing field for open source.

Given his expansive view, it is perhaps not surprising that Jacks is a critic of the recent proliferation of restrictive licenses as a defensive measure for emerging OSS companies. “This can dramatically reduce the value-creation potential of OSS projects, which are fundamentally driven by developer adoption,” he says, and adds, “instead, open-core OSS companies should use more permissive licenses like MIT, Apache 2.0, or BSD in order to maximize value creation for all constituents (and that includes cloud providers), while capturing value on the proprietary layers around the open core.” (Jacks calls this layer “the crust”.)

So what are effective strategies for a new OSS company to build, scale and survive in an AWS-dominated world? Jacks says, “one, focus on maximizing value creation and capture for all, building highly standardized disruptive technology; two, build inclusive and constructive communities; three, ship quality software fast; four, embrace transparency and open governance across all constituents.”

The foundation perspective: Abby Kearns

I spoke with Abby Kearns as a follow-up to my interview piece with her from late last year, and the conversation focused on the licensing implications of competitive moves in the commercial OSS market. At an impressive CF Summit, the Cloud Foundry Foundation announced that its open source project Eirini, which enables pluggable use of either Diego/Garden or Kubernetes as orchestrators, passed its validation tests for CF Application Runtime releases. Kearns, who has served as Executive Director of the Foundation since 2016, is no stranger to either the opportunities or the tensions that exist at the intersection of free software and commercial interests.

As expected, Kearns is adamant: “open source as a method of building and delivering free software can only thrive if we continue to put code in the open, and ask for help in improving it.” She recommends that developers and commercial OSS companies assume that someone will copy the software and perhaps even use it in a competitive context—and if one is worried about that, why put code out there in the first place?

In Kearns’s view, actions such as using restrictive licenses can be revealing when it comes to the maintainer’s intent. Similarly, companies that open-source a wholly-formed thing might be missing the point, which is to build together, says Kearns—paraphrasing Richard M. Stallman’s famous manifesto: “free like free speech, not like free beer, not like a free puppy or free (used) mattress”.

Kearns believes that focus on these key tenets will see commercial OSS vendors through: engagement with contributors, transparency towards stakeholders, and outreach towards community. She also points out that growth in users isn’t the only meaningful metric for open source—just as important is growth in meaningful usage or in engagement with a dynamic community that likes to contribute.

Why open source in the first place?

On a positive note, Gabe Monroy of Microsoft recently retweeted a thread showing how engineers from rival vendors can collaborate successfully around open source software, to the benefit of both users and the projects themselves. Per Monroy, this is an “example of why multi-vendor OSS is the future of infrastructure software”. This, and so much more, could not have been achieved were it not for open, collaborative communities and a bias towards permissive licensing.


Containers Put The ‘Dev’ Back In DevSecOps

On the back of a record-breaking KubeCon + CloudNativeCon (with a reported 7,700 attendees last week), it is very clear that Kubernetes’ position as a cloud center of gravity will remain unassailable for the foreseeable future. For one, even outside of this sprawling tradeshow, Kubernetes has taken over sessions at many other conferences, from the Open Infrastructure Summit (previously the OpenStack Summit) to the Cloud Foundry Summit. I personally believe that in the next few years, these two conferences (and others) will either see their contraction accelerate or even merge into a mega CloudNativeCon.

Does that mean digital transformation has reached a steady state? Hardly. As the Cloud Foundry Foundation’s April report showed, 74% of companies polled define digital transformation as a perpetual, not one-time, shift. The survey also shows that organizations are already using a good mix of technologies and clouds in production, which explains the Foundation’s endorsement of Kubernetes and the launch last year of the related project Eirini.



New models, new personas, new tension

In the end, what matters to IT teams is a thoughtful approach to building and deploying cloud-native applications. There are many reasons why containers have become so popular since DotCloud’s pivot into Docker all those years ago, and I have gone through some of them in previous posts. IBM’s Jim Comfort has called it the most dramatic shift in IT in memory, more so than cloud computing. Since that pivot, it has become a convention that whatever is in the container is the developers’ responsibility, while operators own what is outside of it.

Kubernetes and related projects challenge that paradigm even further and represent the cloud-native vision in that they provide developers with access to native APIs, which means they have much more control over how their application is delivered and deployed. So, while software is eating the world, developers have started to eat the software delivery chain.
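To illustrate how much of the delivery path that puts in developers’ hands, consider a minimal (and purely illustrative; every name and value below is invented) Kubernetes Deployment, which declares scaling, resource and health-check behavior right next to the application definition:

```yaml
# Minimal, illustrative Deployment: the developer, not an operator,
# declares replica count, resource limits and health checks.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api          # hypothetical application name
spec:
  replicas: 3                # scaling policy lives with the app definition
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.0.0
          resources:
            limits:
              memory: "256Mi"
              cpu: "250m"
          readinessProbe:    # delivery behavior, declared by the developer
            httpGet:
              path: /healthz
              port: 8080
```

Everything operational in that manifest, from replica count to probes, ships with the application rather than being configured separately by an operations team.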

However, Kubernetes’ evolution into the gold standard of an “un-opinionated PaaS” still means that in most large enterprises it is owned and maintained by operations-minded engineers. Therefore, one of the main efforts driving enterprise adoption has been to build operating models that balance developer empowerment and operator governance—for some, that means DevOps. The rise of DevOps, however, is not the only shift in user personas that we have seen.

Open source and the “shift-left” risk

Security used to be a domain in which the same people both bought and used solutions to prevent and adapt to security risks, but that was in an age of waterfall development and mostly closed-source code. The rise of open source software has been as disruptive to that model as it has been pervasive: whether the application itself is open-sourced or not, significant amounts of open source packages are leveraged in its creation. The cloud-native movement, with its strong bias towards open source, has accelerated this trend even further.

The reason this matters is that security teams are neither staffed nor equipped to control these open source inputs. Whether the security team was planning for it or not, a lot of their risk has “shifted left”, and now, with open APIs, both the breadth and the speed of risk have risen. Snyk’s latest report, “Shifting Container Security Left”, shows that the top 10 official container images on Docker Hub each contain at least 30 vulnerabilities, and that even among the top 10 most popular free Docker-certified images, 50% have known vulnerabilities.
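To make the “open source inputs” point concrete, here is a toy sketch (every package name and advisory below is invented, and real scanners work from live vulnerability databases) of the kind of cross-check a scanner performs against an application’s pinned dependencies:

```python
# Toy illustration of auditing open source inputs: cross-check pinned
# dependencies against a known-vulnerable list. All package names and
# advisory identifiers here are invented for the example.

KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "EXAMPLE-ADVISORY-001",
    ("webframework", "0.9.1"): "EXAMPLE-ADVISORY-002",
}

def audit(dependencies):
    """Return the advisories matching any pinned (name, version) pair."""
    return {
        dep: KNOWN_VULNERABLE[dep]
        for dep in dependencies
        if dep in KNOWN_VULNERABLE
    }

pinned = [("examplelib", "1.2.0"), ("webframework", "1.0.0")]
print(audit(pinned))  # {('examplelib', '1.2.0'): 'EXAMPLE-ADVISORY-001'}
```

The mechanics are trivial; the operational problem is that this check has to run on every commit, across every team, which is exactly why it ends up in developers’ hands rather than the security team’s queue.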

Even worse, the report surfaces an acute ownership problem: 68% of respondents believe developers should be responsible for owning container security, while 80% of developers say they don’t test their images during development, and 50% don’t scan their images for vulnerabilities at all.

Time for joint accountability

As a consequence, when it comes to software-lifecycle tools, security vendors are experiencing a demand shift—a separation of personas at large enterprises. Security teams are still the economic buyers and hold the budget authority; however, with the realization that developers must have the tools to own more of the security of their own apps comes a transfer of budget decisions to the development teams who will actually use these tools. You build it, you run it—but you also secure it.

As I mentioned in an earlier piece, this is not an easy process for a security industry that is geared toward servicing security teams. The challenge many providers are facing is to provide tools designed for both sides of the aisle, that can promote joint accountability: empowering the people “on the left” while giving those “on the right” enough levels of control and observability.

In a sense, to shift-left is to let go. With more code—in a repository or in pre-built containers—coming from lesser-known origins and deployed more rapidly on more distributed systems, the only option is to continue to evolve and provide developers with the right tools and knowledge to own security.


Shift-Left Security is About Empowerment, Not Encroachment

Containers have changed the security conversation in many organizations

There are many reasons why containers have become popular in recent years. For one, they help developers maintain consistency across platforms and through the software delivery chain. As a result, it’s now the norm to consider whatever is in the container to be the developers’ responsibility, while operators own what is outside of it.

Kubernetes has challenged that paradigm even further by providing developers with access to its native API, affecting operational parameters.

You could say that software is eating the world, open source is eating software and developers are eating the software life cycle!

Kubernetes, however, was built by system developers and (arguably) for system developers, and has evolved into the gold standard of an “un-opinionated PaaS”—yet in a large enterprise, it is still owned and maintained by an operator. As a result, one of the main trends in enterprise adoption has been to find operating models that empower devs while providing operators a good level of control over policy creation and enforcement.


User != Buyer

Security is a great example of the need for this new equilibrium. With the explosion of open source software (which is being leveraged in both open and proprietary applications), many security teams understand that their risk has shifted left, and that in a cloud-native world it is moving toward production faster than ever. Snyk’s latest “State of Open Source Security Report” shows a significant increase in vulnerabilities—almost doubling in just two years. More importantly, the report also shows companies are struggling to tackle container security, with a four-fold increase in container vulnerabilities in 2018.

Unsurprisingly, the report shows a need for developers to be enabled to take more ownership of security. The figures reveal that 80% of developers believe they should be responsible for the security of their open source code, yet only 30% rate their own security knowledge as “high.”

As a consequence, the security industry is experiencing a decoupling of the user and buyer personas for software security tools and services at large enterprises. Security officers still hold the budget authority in most cases, but increasingly, they fence off a good chunk of it for tools chosen by developers to be used in their development workflow to help them deal with security issues as early and as often as they commit code.

New Shared Tools for a New Age

The above is easier said than done when almost all of the security industry is geared toward servicing security teams and helping them deal with “those pesky software engineers.” The challenge for providers in this rapidly changing reality is to provide tools designed for developer responsibility but also for joint accountability: the best user experience for the people “on the left” with the right levels of control and observability for the people “on the right.”

What do these tools look like? They move at the speed of code at the source control and IDE stages (rather than snapshotting or cloning repos for static analysis) to allow developers to find and fix issues early; they provide strong complementary gates at the build, CI/CD and registry (push) stages; they place clear controls at the gates to deployment, whether by using Kubernetes or OpenShift admission controllers, or integrating within a `cf push` command (as examples); and perhaps most importantly, they prioritize actionable insights over just plain data.
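What might one of those deployment-gate controls look like in practice? A minimal sketch, assuming a simple severity-threshold policy (the data shapes, names and thresholds below are illustrative, not any vendor’s API):

```python
# Illustrative deployment-gate policy check, the kind of control placed
# "at the gates to deployment". All names and data shapes are hypothetical.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_deployment(findings, max_allowed="medium"):
    """Allow the deploy only if no finding exceeds the allowed severity.

    Returns (allowed, blockers), surfacing only the actionable blockers
    rather than the full list of findings.
    """
    ceiling = SEVERITY_RANK[max_allowed]
    blockers = [f for f in findings if SEVERITY_RANK[f["severity"]] > ceiling]
    return (len(blockers) == 0, blockers)

findings = [
    {"id": "CVE-2019-0001", "severity": "low"},
    {"id": "CVE-2019-0002", "severity": "critical"},
]
allowed, blockers = gate_deployment(findings, max_allowed="high")
print(allowed)            # False: the critical finding blocks the deploy
print(blockers[0]["id"])  # CVE-2019-0002
```

A real admission controller or CI stage would wire a check like this into the cluster or pipeline; the point is that the decision is automated, explainable, and visible to both developers and security teams.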

Shift Left Sometimes Means ‘Let Go’

In an operational sense, shift left doesn’t mean that platform or DevOps teams should own actions more to the left, encroaching on developers. Quite the opposite: It is about developer empowerment. With more bits of open source and proprietary code coming from increasingly disparate origins and sitting in increasingly distributed systems, this becomes at least as big an imperative for security teams. While CSOs and CISOs remain accountable, more and more of them bravely face the cloud-native future by allowing developers to be more responsible for the security posture of their own apps.

(Originally posted on Container Journal)

At RSA Conference, Most Security Vendors Still Not Shifting Left

A week at the sprawling cybersecurity conference known as RSA Conference always sends you home with tired feet and a full brain. With attendance as high as ever, top-quality (and, this time around, more diverse) keynotes, myriad sponsored evening events and—crucially—a week-long San Francisco drizzle that kept attendees engaged indoors, the momentum seemed ever-present.

However, as someone coming to this industry from cloud computing, containers, open source and DevOps, I could not help but notice a wealth of “old world” pitches and a dearth of newer ones. If you are in the market for perimeter or beyond-perimeter solutions, ethical hacking or endpoint security, you can spend far more than the allocated few days to explore all the talks and vendors on offer; but if you are trying to face the cloud-native immediate future by shifting security to the left, and are looking for insights on this shift, then not so much.


A time of painful change

As Snyk CEO Guy Podjarny recently pointed out in his talk at QCon London, DevSecOps is a highly overused term that few stop to define for themselves in depth. In his approach, DevSecOps is actually an umbrella term for three areas of required transformation: technologies, methodologies and models of shared ownership.

In this reality, modern and modernizing IT organizations are facing tremendous disruption on several levels:

  • Cloud computing, microservices and containers represent a technology shift on a scale unseen in recent history, as was discussed with IBM’s Jim Comfort in my last post. This isn’t about training people on a couple of new tools—it is about re-thinking what technology enables, and how technology affects the operational and business structure.
  • From Agile to DevSecOps and beyond, methodologies can help accelerate technology adoption, but they also tend to trigger significant shifts in culture and process. This is one reason why, as I detailed in a previous post, early mass-market users of technologies such as Kubernetes require operating models, not nifty click-to-deploy SaaS tools.
  • As companies attempt to hire new talent to address new challenges, and as technologies from the last decade are being phased out, a skill/generational gap emerges between young cloud-native developers and long-in-the-tooth operators, fluent in Linux system administration and server configuration management. As Jim Comfort implied in that IBM post, re-skilling probably can’t keep up with the pace of technology change.

Security vendors are starting to get it

It’s a classic case of the innovator’s dilemma. While most traditional security vendors realize the need to adapt, they have large and stable revenue streams from legacy products, which makes the transition slower than the pace of change their customers are experiencing. That isn’t to say that the existing transition isn’t meaningful: we are well on the journey that started by securing the server and moved on to securing the VM, the cloud estate, and—more recently—the container. Put that together with the rise of external open source code that developers leverage in practically every new application (including proprietary ones), and with the understanding that an increasing portion of risk comes from inside the perimeter, and you get to the obvious conclusion: the urgent ‘cybersecurity’ battle is for securing the developer’s workflow, in real time.

Another interesting change is a decoupling of the user and buyer identity. The budget might still sit with the CISO, but increasingly, portions of those security budgets are being allocated to developer tools. This disrupts a long-existing equilibrium in the security market, and therefore it is no wonder that many vendors are re-engineering the product-services mix, as they figure out where their customers are going. Expect IT organizations that collaborate across the engineering-security divide to be more resilient in the face of exploit attempts—and expect vendors that adapt quickly and effectively to this future to have a better chance of survival.

New partnering models are key

Cash is usually king, and many of these so-called ‘cybersecurity dinosaurs’ will survive and thrive by leveraging their reserves, but it may be a painful and protracted process. To make it less so, vendors would be wise to rethink go-to-market strategies; re-align strategic partnerships; and prioritize capabilities for risk mitigation over risk adaptation—all in an effort to understand that developers will continue to move fast, and savvy security officers need to enable that to happen within the right guardrails.

Both security teams within IT organizations and incumbent security vendors would be wise to follow the guidance of the members of the PagerDuty security team interviewed on The Secure Developer podcast, who said that their job, in the end, is to make it easy for developers to do the right thing.


IBM And The Third Iteration of Multi-Cloud

Four months after the announcement of the Red Hat acquisition, and some time before it is expected to officially close, I caught up with Dr. Jim Comfort, GM of Multicloud Offerings and Hybrid Cloud at IBM, to talk about the company’s recent multi-cloud-related news, and to get a sense of its updated view of multi-cloud. Jim joined IBM Research in 1988 and since then has held a variety of development and product management roles and executive positions. He has been closely involved with IBM’s cloud strategy, including acquisitions such as SoftLayer and Aspera. [Note: as a Forbes contributor, I do not have any commercial relationship with IBM or its staff.]

Dr. Jim Comfort; IBM

First, what is significant about the new announcements?

At its THINK 2019 conference, IBM announced Watson Anywhere, the latest evolution of its fabled AI platform (originally restricted to the IBM Cloud), which will now run across enterprise data centers and major public clouds including AWS, Azure, and Google Cloud Platform. My reading is that this is a potential sign of things to come with Red Hat: IBM seems to be coming to terms with its relative weakness as an IaaS player, and its tremendous strengths as a platform and managed service provider. Building the multi-cloud muscle with Watson now will serve IBM well when it comes to Red Hat OpenShift later on.

IBM also made a series of announcements with regards to its hybrid cloud offerings, including a new Cloud Integration Platform connecting applications from different vendors and across different substrates into a single development environment, and new end-to-end hybrid cloud services, bringing IBM into the managed-multi-cloud space. Again, to me this seems like infrastructure-building for the day IBM turns on the multi-cloud ‘power button’, using its existing and new technology and formidable channel assets.

How the Red Hat deal might change IBM

Jim agrees we haven’t seen an IT wave move as broadly and as quickly as containers have in decades—not even cloud, which took the best part of a decade to take off. This isn’t just due to Solomon Hykes’s eye-opening original Docker demo in 2013, or to the CNCF’s stellar community- and ecosystem-building efforts—the reasons are mainly technological and strategic.

Amongst their many advantages, containers truly offer the potential to decouple the underlying technology from the application layer. In a key insight, Jim suggested that while multi-cloud used to mean private+public, and then evolved into a term for a ‘pick & choose’ strategy (for example, Google Compute Engine for compute, with Amazon RDS for a database), containers and related orchestration systems are leading us to a more powerful definition. Multi-cloud now is about ‘build once, ship many’, on completely abstracted infrastructure. As I wrote in a previous post, ‘soft-PaaS’ is on the rise for many related reasons, but the added insight here is about a fuller realization of the multi-cloud vision. In this new stage of the market, the challenge shifts to areas such as managing data sources and making multi-cloud operations easier. In this sense, OpenShift as PaaS, together with IBM’s services capabilities, become potentially powerful strategic assets.

How IBM sees its role in this cloud-native world

Coming as he does from the services side, Jim’s view was not that of a hardware model for a software-defined world; he claimed that IBM has always helped companies deal with massive change, and that this is still the main mission. The market, if you believe press releases, is constantly swinging like a Newton’s cradle from public cloud to private cloud and back again. Personally, every time I hear of a large company going ‘all-in’ on either side, I tend to chuckle: as I covered in a previous post, oversimplifying architecture and infrastructure has ended many an IT career. Remember the leading UK news outlet that declared it was going ‘all-in with AWS’ because it couldn’t realize its ambitions on DIY OpenStack? I was in the room when the former CIO refused vendor help in implementing that complex private cloud platform and—surprise—the experiment failed.

Jim suggested that, in his experience, vendors can help customers focus on the required business objectives, using five areas for analysis and planning, as IBM already does:

  • Infrastructure architecture
  • Application architecture
  • DevOps and automation
  • Evolved operational models
  • Evolved security models

Yes, the new tech is shiny, but even more important for customers are things like managing legacy-IT resistance, re-skilling an older workforce, and managing generational gaps (cloud-native devs can be much younger than their Ops counterparts). In a surprising statistic, Jim claimed that 50% of IBMers have been with the company for less than five years, and that the company runs specific millennial programs.

IBM’s open source position gets a boost

A major advantage of the acquisition that has not received enough attention, in my opinion, is that in acquiring Red Hat, IBM has shot into the top three organizations measured by open source software contributions. In a star-studded panel during the THINK conference, IBM seemed to embrace this position gladly. Analyst firm RedMonk’s co-founder Steve O’Grady correctly warned that “the future success of open source is neither guaranteed nor inevitable.” Similar to some points I covered in a previous post, O’Grady outlined profiteering, licensing and business models as systemic challenges that must be addressed.

However, even if open source continues to thrive, it is a predominantly bottom-up IT phenomenon, which may be at odds with IBM’s more traditional CIO-downwards approach. To me this is one of the most interesting areas to watch: as Red Hat gets integrated into the family, will IBM succeed in changing its own culture, taking full advantage of its history (for example, its talent-hothouse lab, the Linux Technology Center), its new and existing technologies (Kubernetes-based offerings, free Linux OSs) and its newfound open source dominance?


These Are Not The Containers You’re Looking For

In a previous post, I argued that in the case of Kubernetes, the hype may be ahead of actual adoption. The responses to that article have ranged from basic questions about containers, to very firm views that lots of people are already running multi-cluster, multi-cloud containerized infrastructure. Regarding the latter view, from experience I know that smaller, more agile companies tend to underestimate the complexity behind large enterprises’ slower reactions to change (Ian Miell does a superb job of breaking down the slowness of enterprise in this blog post). So, in this article, I’ll try to do three things:

  • Attempt to tighten up some basic definitions for the non-techie folks who may still be wondering what the fuss is about
  • Mention the number one common attribute I’ve seen amongst companies succeeding with containers
  • Explain why moving to containers is just the tip of the iceberg of your IT concerns


Is my monolithic app running in a container on VMware cloud-native?

Well, sorry to say, but it depends. Let’s remember that a container is just a software package that can be run on infrastructure, and that there are (at least) two types of containers.

System containers have been around longer, since the days of LXC (2008) or, arguably, the days of Solaris Zones before that. These are, simplistically speaking, small and efficient units that behave like virtual machines. They can support multiple executable applications, and offer isolation as well as other features that system administrators will feel safe with, like easy manageability. This is ideal for traditional apps that you want to containerize without completely revolutionizing your IT practices, and the benefit is simple: increasing application density per server by over 10x vs. a virtual machine.

Application containers have a single application to execute. This is the world of the Docker image format (not the same as Docker Engine, Docker Swarm or Docker Inc.) and OCI, and what most people refer to when they mention the word ‘container’. The benefits here from an IT perspective are that application containers running a microservices app bring the full promise of cloud-native to life: high delivery pace, almost infinite scalability, and improved resilience. Those dramatic outcomes demand a significant culture and technology shift, as I will mention in detail later.

Containers are just packages and will, broadly, do as they’re told; microservices is an architectural approach to writing software applications; and cloud-native is a method of delivering those applications. To answer the question above: throwing away existing resources and ignoring business outcomes in pursuit of an ideal is probably bad practice, so if you have a VMware environment that is there to stay, then that is probably cloud-native enough for now (VMware’s acquisition of Heptio is interesting in that sense for the future use case). The bottom line is that getting hung up on the easiest item on that list to grasp (containers) is a common error.

Thor’s hammer does not exist in IT

I recently met with the head of Cloud for a large UK financial services company, who told me that the previous CIO had to leave his post after failing to deliver on a ‘straight to serverless’ strategy, i.e., leap-frogging the cloud and container revolutions in order to operate workloads on the company’s well-used private datacenter, with serverless technology. That the CIO had to leave isn’t a major surprise: in any line of work, we need to use the right tools for the right jobs, especially when those jobs are complex. In cloud, that means that in most cases, we will likely be using a combination of bare metal, virtual machines, containers and serverless, on any combination of private server farm, private cloud or public cloud.

Without a doubt, the one thing I have seen as a first step in a successful IT transition journey is this: not trying to over-simplify a dramatic IT (r)evolution, but instead understanding it holistically, and judging it vis-à-vis business outcomes and objectives. It’s good to strive, but not all companies have the resources to be cloud-native purists, and there are clear benefits even in smaller steps, like allowing more time for analysis of the technology’s impact, or enabling better risk management. (This post from container company Cloud 66 does well to describe the short-term efficiency, and long-term insight, gains of moving a monolithic app to a container.)

Known-unknowns and unknown-unknowns

Ultimately, though, we wouldn’t be so excited about the container revolution if it was just about squeezing in more monolithic applications. A microservices app, running in containers, orchestrated on multiple substrates, and all that according to cloud-native principles—that is something worth sweating for. An application that scales quickly and reliably with less operational resource, that adapts fast to customer and competitive dynamics, and that self-heals, is where a lot of us are aiming.

Again, containers are just one part of this. Consider technological challenges: What about orchestration? And network? And stateful services? And cloud-native-ready pipeline tools?

Arguably even more important, consider cultural challenges: What needs to change with our development practices? How do we find and retain the right talent? How do we re-skill older talent and bridge between generations? How will the risk balance change?

An example: open source is already part of your strategy

It is a well-documented fact that the rise of cloud and open-source has been connected, which also brings some interesting tensions, as I explored in my previous article. In containers, this synergy seems stronger than ever. The juggernaut behind Kubernetes and many related open source projects, the Cloud Native Computing Foundation (CNCF), is part of the Linux Foundation. The CNCF charter is clear about the intentions of the foundation: it seeks to foster and sustain an ecosystem of open source, vendor-neutral projects. Consequently, since the CNCF’s inception in 2014, it has become increasingly feasible to manage a complex cloud-native stack with a large mix of these open source projects (some interesting data in the foundation’s annual report). The more you get into container-native methodologies, the more open source you will use.

The other side of this picture is that open source packages make up significant chunks of free and proprietary applications alike—while your whole app may be proprietary, the bit your team actually wrote may be very small within it. As the State of Open Source Security Report shows, open source usage is tightly coupled with digital transformation, and as such is becoming increasingly business-critical; however, only 17% of maintainers rank their security expertise as high, which means that a lot of those packages are an operational risk.
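To make the “the bit your team actually wrote may be very small” point concrete, here is a toy composition count in Python; every number is invented for this example, and it is the proportions, not the absolute figures, that matter:

```python
# Toy breakdown of a 'proprietary' application's shipped code base.
# All line counts are invented for this example.
first_party_loc = 40_000            # code your team wrote
open_source_deps_loc = {            # hypothetical dependency sizes
    "web framework": 310_000,
    "ORM": 120_000,
    "crypto library": 95_000,
    "logging/metrics": 60_000,
}

total_loc = first_party_loc + sum(open_source_deps_loc.values())
first_party_share = first_party_loc / total_loc
print(f"First-party share of the shipped code: {first_party_share:.0%}")  # 6%
```

Even a modest set of dependencies dwarfs the first-party code, which is why the security posture of those packages is effectively your security posture.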

Yet another aspect is community: using more open source makes the organization a stakeholder, and as such it should liaise with the relevant community to see how it can contribute, and also how it can get exposure to roadmap and to security alerts as quickly as possible.

No containing this

So to summarize the above example, a ‘simple’ decision about joining the container wave will inherently and significantly increase the organization’s benefit from, and exposure to, open source software. This software may or may not be supported by large sponsors, will probably to a meaningful extent be maintained by volunteers, and will probably have a vibrant community behind it—all of whom need to be engaged by users who rely on these projects.

In other words, not simple. Containers are a critical part of a digital transformation, but just one part. The whole of this transformation—parts of which will appear in your software delivery systems without you expecting them—can enable great things for your applications, if approached with the right mix of openness, maturity and responsibility.

(Originally posted on

Open Source Software At A Crossroads

“It was the best of times,

it was the worst of times,

it was the age of wisdom,

it was the age of foolishness,

it was the epoch of belief,

it was the epoch of incredulity”

—Charles Dickens

Marc Andreessen famously said that software is eating the world; others have more recently added that, at least in B2B, open source is eating software. Indeed, when we look at 2018, it has been a landmark year for users, enterprises and investors alike—but has it also sown the seeds of a potential slowdown in open source investment, and perhaps even usage?

In the cloud world, where the operational friction of software is reduced or removed, and where economies are extremely effective, public cloud providers present a challenge to many open source projects. Over the past several years, I have had the good fortune of being involved in key deals to monetize some of the largest open source software projects—in a way that keeps them free and usable for end users, and gets cloud providers to support the community. However, given what we’ve seen in 2018 and just last week, the economic model for funding open source may be at risk. It is down to more innovative aggregation models and powerful open source communities to ensure that open source continues to gain ground.

Open sign (Photo by Finn Hackshaw on Unsplash)

Continued adoption, M&A explosion

It’s no secret that open source use is accelerating, and is driving some of the most critical pieces of modern IT. In addition, the Linux Foundation recently reported that in the last five years, membership has increased by 432 percent.

On top of that, 2018 has seen roughly $57 billion of value creation in open source M&A and IPOs. The number jumps by $7.5 billion if you count GitHub’s acquisition by Microsoft, despite the fact that GitHub is not a developer or curator of open source software as such; rather, it accelerates use of open source (with “pervasive developer empathy”, as I heard Accel’s Adrian Colyer mention). This is a story of absolute sums but also of multiples, as effectively analyzed by Tomasz Tunguz in this blog post.

Over the years we’ve seen different approaches to monetizing open source, and we have examples of them all in the past year’s exits (the following is just my simplistic breakdown):

  • Sell support and services (Heptio acquired by VMware)
  • Sell hosted/managed open source (MongoDB as-a-service)
  • Have an open source core, sell extra features around it (Elastic, Pivotal IPOs)
  • Make order out of chaos (Red Hat acquired by IBM, also Pivotal)
  • Aggregate and accelerate other people’s code (GitHub acquired by Microsoft)

Of the above, as far as I have seen, the first two are probably most common, and arguably most vulnerable. If the technology can be wrangled effectively by internal teams or by consultants, there is less of an incentive to buy support from vendors (as Ben Kepes mentioned in his analysis of Cloud Foundry’s position); and if AWS is the best company in the world at selling software services over the web, then it would have an immediate advantage over providers who primarily sell through other channels, including commercial sponsors of open source. For this reason, recent developments around open source licensing are particularly important.

MongoDB and Redis pull back, AWS reacts

Last week, AWS announced on its blog the launch of DocumentDB, a MongoDB-compatible database. As some pundits have pointed out, this is clearly a reaction to MongoDB, Inc.’s new and highly-restrictive license called the Server Side Public License (SSPL)—a move which the publicly-traded MongoDB made in order to protect its revenue position.

Earlier last year, Redis Labs learned a hard lesson in community relations management when it took a less dramatic step: while offering its Redis database under a permissive license, it changed the licensing on its add-on modules to the “Commons Clause”, so service providers would need to pay for their use. While communication could have been clearer, the action itself is similar in intent to what MongoDB did, and to what many other open source companies have attempted or plan to attempt to do.

Bill Gates once said that “A platform is when the economic value of everybody that uses it, exceeds the value of the company that creates it. Then it’s a platform.” By that measure, if AWS is the ultimate cloud platform of our times, then AWS will recognize that it is better because software like Redis exists, and therefore work to increase—rather than limit—Redis Labs’ overall revenue. In this conundrum, community may be the key.

Communities: the un-replicable asset

A big part of open source’s success over proprietary software has been its ability to move in an agile fashion, with strong and fast feedback loops, and devoted developer engagement. As fast as AWS moves, with its famous customer obsession and feature release pace, it is still constrained by money and financial priorities. Open source contributors don’t work for money (although communities should be supported financially, if we want to keep enjoying the independence of some key projects), and some open source projects are almost too big even for IT giants to replace with a proprietary clone. On top of that, consider that many developers will simply switch off any solution which smells of corporate interest, especially if there is an open source alternative.

This reminds me of a protracted discussion I was part of some years ago with an IT giant who considered the cloning option for a piece of software, meant for distribution. At the end of a call in which a group of execs agreed on the cost of engineering, a developer advocate posed a simple question: we now know how much building this will cost, but how much will we invest in building a community? (the IT giant chose to support and collaborate with the project).

What does 2019 hold?

While the economic tension between public cloud services and the first two or three open source monetization models might increase in 2019, I estimate that ‘order-makers’ and ‘aggregators’ will continue to go from strength to strength. Specifically, companies that accelerate reliable production use of open source—from GitHub competitor Atlassian to open source security company Snyk—are proving that there is great value to be provided to users from focusing on security and usability of, and collaboration around, small and large projects alike.

What might change in the immediate future is the pace and size of venture capital investment into open source companies, but this could also be the cyclical product of a very loaded 2018, and not only related to business model frictions.

In either case, a focus on building and sustaining healthy open source communities and differentiating business models seems to be more important than ever.

(Originally posted on

Cloud Foundry And The PaaS You’re Already Running

Following my previous Forbes articles about the resurgence of PaaS and the adoption of Kubernetes, I ran into Abby Kearns, executive director of Cloud Foundry Foundation, who was kind enough to read them. We exchanged some ideas about PaaS, Kubernetes, and the recent wave of acquisitions in the Cloud space. [Note: as a Forbes contributor, I do not have any commercial relationship with the Foundation or its staff.]

For those of you who don’t know, Cloud Foundry encompasses multiple open source projects, the primary one being an open source cloud application platform built with container-based architecture and housed within the Cloud Foundry Foundation, which is backed by the likes of Cisco, Dell EMC, Google and others. It has commercial distributions offered by Pivotal, IBM and others (Ben Kepes has a great post on the tension between open-source CF and the distributions, on his blog). Cloud Foundry runs on BOSH, a technology that was originally (and almost presciently, you could say) designed to manage large distributed systems, and as such was container-ready back in 2010.

Abby Kearns, Executive Director of the Cloud Foundry Foundation (photo: Cloud Foundry Foundation)

Cloud Foundry announced support for Kubernetes in its Container Runtime project in late 2017, alongside its original non-Kubernetes-based Application Runtime. In October of this year it doubled down with Eirini, a project that combines Kubernetes as the container scheduler with Application Runtime; and CF Containerization, which packages BOSH releases into containers for deployment into Kubernetes.

Broadly, Abby and I talked through three themes, as detailed below:

Everyone is already running a PaaS

I’ve written in my post about PaaS how the advent of ‘unstructured-PaaS’ offerings such as Kubernetes has contributed to the resurgence of this category, but implied in my article was the assumption that PaaS is still a choice. Abby presented a different view: it’s not so much about the runtime as it is about the ops, and, by and large, the total stack is the same as what is offered in a proper PaaS.

An operations team will have logging, monitoring, autoscaling, security, network management and a host of other capabilities around the deployment of an application; the relevant question is how much of that they’re putting together themselves (whether these bits are homegrown, open source, or commercial software), and conversely how much is being supplied from one coherent solution (again, homegrown, open source, or commercial). Whether you’re giving the keys to Pivotal, Google, or Red Hat; internalizing engineering and operations costs but using CF and BOSH to manage your RBAC, security and other ops tasks; or putting together a mosaic of focused solutions—the end result is operationally the same as a PaaS. In the end, we agreed, digital transformation is about coming to terms with the complexity of your required IT stack, and optimizing the ownership and cost model—all of which are tough architecture and business choices.
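One way to internalize the “you are already running a PaaS” argument is a quick self-audit of who supplies each capability. A minimal sketch; the capability list and sourcing answers are invented placeholders to be replaced by your own inventory:

```python
# Audit: which PaaS-style capabilities do you run, and from how many sources?
# The entries below are placeholders; substitute your own stack.
capabilities = {
    "logging":            "open source",
    "monitoring":         "commercial",
    "autoscaling":        "homegrown",
    "security":           "commercial",
    "network management": "homegrown",
}

distinct_sources = set(capabilities.values())
print(f"{len(capabilities)} capabilities supplied from "
      f"{len(distinct_sources)} kinds of source")
# Many sources stitched together puts you at the 'mosaic' end of the spectrum;
# a single coherent source is, operationally, a packaged PaaS.
```

Either way, the capabilities exist; the audit just makes the ownership and cost model visible.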

Cloud mega-deals: this is not the empire striking back

Redpoint venture capitalist and tech-blogger extraordinaire Tomasz Tunguz recently showed how open source acquisitions took up three of the top five slots for 2018 tech acquisitions (the SAP–Qualtrics deal has since nudged the Mulesoft deal down a bit). Since then, we’ve also had VMware buying Kubernetes services firm Heptio, founded by luminaries Joe Beda and Craig McLuckie. As a Christmas-dinner summary for your less-techie friends and family, you could say that Microsoft bought its way back into the developer limelight, IBM armed itself with (arguably) the most robust commercial Kubernetes-based platform, and VMware went for the most-skilled Kubernetes services company.

Abby commented that in her view, what we are witnessing is large technology companies looking to M&A around open source technology to solve a common problem: how to quickly obtain the innovation and agility that their enterprise customers are demanding, while not being disrupted by new, cloud-native technologies. The open source angle is just a testimony to its huge importance in this cloudy world, and the Cloud Foundry Foundation expects that these recent deals will mark the start, not the end, of the Kubernetes consolidation wave.

Cloud Foundry will follow a different trajectory than OpenStack

If you’ve been around the cloud industry for several years, you might be tempted to think that the above consolidation will cause the likes of Cloud Foundry Foundation and the CNCF to diminish in importance or outright deflate, like OpenStack has over the past few years. As I mentioned in another article, in my opinion this has been (in the end) a good process for OpenStack, which is leaner and meaner (in a nice way) as a result. Not the same for Cloud Foundry Foundation, says Abby. This period may mirror a similar phase in 2014 when big vendors started buying up OpenStack companies as EMC did with Cloudscaling, Cisco with Metacloud, and many, many more—but Cloud Foundry has adapted to the Kubernetes wave in time, and its main partners are more closely aligned around its values and objectives.

Additional reasons, as I see it:

  • Cloud Foundry Foundation has been smarter about avoiding inflation in its ecosystem—both in terms of number of vendors with influence, and of actual dollars being invested through M&A activity.
  • BOSH being adaptable to pretty much any cloud, deployment model and platform is a strategic technical asset.
  • The Foundation’s relentless focus on developer experience (rather than infrastructure) increases its options and avoids playing squarely in the public clouds’ game.

So, should we expect a Cloud Foundry serverless initiative next year? No choice but to wait and see.

(Originally posted on

Is Cloud Good for Humans? The Ethical Angle

I once subscribed to the YouTube channel of a guitar instructor who used a phrase that stuck with me: “practice doesn’t make perfect, practice makes permanent”. Doing something wrong more quickly, efficiently and consistently doesn’t necessarily improve the quality of the action; sometimes, it could even make it worse, as bad practice becomes entrenched habit.

Technology is a facilitating tool, not evil or good in and of itself—it’s about taking whatever the intent (or the error) was into the real world more efficiently and effectively, for better or worse. Consider this effect in areas such as Cloud Computing (and related technologies): whether it’s a bug or an exploitable feature or vulnerability, I argue that even without malice the ethical stakes are uniquely high.


While ethics are as old as civilization itself, in the technology industry, I would argue that personal ethics are still very much an emerging topic. Consider a recent StackOverflow survey in which 80% of developers said they wanted to know what their code is used for—but 40% would not rule out writing unethical code, and 80% did not feel ultimately responsible for their unethical code. (You can explore the psychological drivers for this in this great talk by Andrea Dobson.)

I am not referring to the obvious and very visible meta-discussions about things such as data privacy, anti-trust, the morality of machines or Yuval Noah Harari’s theory of Digital Dictatorships. I’ll even put corporate malice aside for now, so we can explore how Cloud can take our individual bad habits and errors to a global scale very, very quickly.

In the red corner: how can Cloud make things worse?


When your app is live in multiple cloud regions with five-9s availability, anything bad that is baked into it will scale accordingly. The glaringly obvious example of this from the past two years: political radicalisation and deliberate disinformation campaigns that took advantage of social media platforms to influence democratic elections and referendums. For a debrief, please contact the offices of Mssrs. Zuckerberg and Dorsey.


Speaking of Facebook, remember “Move Fast and Break Things”? Speed (or ultimately, agility) is arguably the greatest benefit of using Cloud, as well as being a key tenet of agile methodologies. But that also means that mistakes in code can get into users’ hands within minutes. Indeed, the “Break Things” part of that sentence assumes you can “Move Fast” again to issue a fix, but even if you are as agile as Facebook, some damage is already done, and probably at scale.

If the system moves too fast, perhaps a useful approach for each of us to overcome our personal speed bias is the one developed by Nobel laureates Daniel Kahneman and Amos Tversky, and popularized in Kahneman’s book, “Thinking, Fast and Slow”. By engaging ‘System 2’, the more analytical side of our brains, we can focus on how we make decisions at least as much as we do on what gets decided.


Microservices architecture is a great way to empower smaller teams to release independently, and drive software development in a much more dynamic way—resulting in more resilient apps. Consider, however, that if each small team only owns a small part of the picture, it might not feel responsible for any damage done at the end of the line. Philosopher and political theorist Hannah Arendt famously explored how breaking down evil into ‘digestible’ bits detaches contributors from feelings of blame, but even when we talk about honest mistakes, that principle can easily be applied to software development when we lose sight of our place in the bigger picture.

Perhaps a mitigation here is to always know how your code fits into the big picture of the app, and stay close to your users, even if you sit far to the left of the delivery pipeline.


In the long term, robots may or may not take some or all of our jobs. This might create a motivation for people to ‘stretch’ their morals in a vain effort to keep their jobs. In the short and medium term, the focus in Cloud and DevOps seems to be on abstraction, task automation and the creation of self-service experiences for developers. While this drives the industry forward professionally, a message of “you just focus on your code” implicitly encourages the removal of overall responsibility from each small part of the chain—as we’ve seen above, this can be a bad thing. Again, being conscious that you are delivering part of a whole, and caring about the outcome, can help.

In the blue corner: trends that can save us

Free and open source software

The Free Software Movement and the world of Open Source, which have revolutionized technology and technology-driven experiences, have always prioritized values such as transparency, accountability and responsibility. I have witnessed first hand how leaders of some of the largest and most influential projects and foundations act with these values in mind, knowing that ultimately, it is their community that is the source of whatever power they have. As open source continues to take over large swaths of the technology landscape, these values could help us drive personal ethics on the level of the single developer or operator.

Devops and digital transformation

If one side of the DevOps movement is about automation through tooling, then the other (and arguably the more important) side is about cultural change towards more collaboration, more observability and fewer silos. Instead of developers “throwing code over the wall” to a team they hardly ever talk to (except when something goes wrong), digital transformation (especially with regards to containers) strives to bring people together to a shared operational view of the world, and to ongoing, collaborative communication. If successful, this trend will do well to mitigate the risk of individuals shirking ethical responsibility by taking the maxim “focus on your code” too literally.

A related trend is “shift-left”, which caters to the rising power of developers by putting more responsibility and capabilities at the coding stage. Whether it’s tools to fix open source vulnerabilities such as Snyk’s, or concepts such as Gitops—if we can drive ethics at the developer level in the same manner, this could be good news when code gets deployed in production.

Rising social awareness

Recent political activity in several western countries suggests a new wave of support amongst the younger generations for socially-minded and more inclusive policies. In the UK, for example, a July poll found that a majority of Conservative Party voters support higher taxes for better public services. Cloud computing and open source have similarly impacted corporate culture (Microsoft under Satya Nadella is a good example), and of immediate concern to C-suite execs in certain companies is quasi-unionized activity such as the Google walkout. As the battle for technical talent intensifies, we should expect this grassroots corporate social consciousness to grow. I’d wager that many employers will follow Google’s lead in adapting to their audience’s new expectations—and a key part of that will look at how personal ethics interact with technology at work.

Climate crisis

It is likely that the biggest issue of our lifetimes is climate change, which has the potential to reduce or eradicate many species, including our own. As is the human condition, crisis is the best motivator for action. While some political leaders muddle about, backtrack on policy or pander to an uninformed political base or corporate interests, it sometimes comes down to individuals to drive actions in both the business and political arenas. I spoke to Anne Currie of consulting firm Container Solutions, who has written numerous posts and has delivered numerous talks with regards to ethics; she had a good example of a low-hanging fruit specifically for Cloud users.

“As companies move into Cloud we have a choice,” said Currie when we talked. “Do we use the sustainable options available from Google, Azure or a handful of AWS regions? Or do we use the far dirtier defaults like the US East coast? Thoughtless hosting is no longer ethically acceptable”.
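Currie’s point can be turned into a deployment-time check. A minimal sketch; the region names are AWS-style identifiers, but the gCO2/kWh figures are placeholders (real numbers come from the providers’ own sustainability pages):

```python
# Choose the lowest-carbon region from those allowed by latency/data-residency.
# Carbon-intensity values (gCO2/kWh) below are placeholders, not real data.
region_carbon = {
    "us-east-1":  400,
    "eu-west-1":  250,
    "eu-north-1":  50,
}

def greenest(allowed_regions: list[str]) -> str:
    """Return the allowed region with the lowest assumed carbon intensity."""
    return min(allowed_regions, key=lambda region: region_carbon[region])

print(greenest(["us-east-1", "eu-north-1"]))  # prints eu-north-1
```

A few lines like this in a provisioning script make “thoughtless hosting” an explicit, reviewable decision rather than a default.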

Start with small steps

As a famous saying goes, “history is filled with examples of soldiers who carried out orders that were never given”. We can all do something to make sure Cloud is good for humans. These include:

  • Bring our personal values to work.
  • Take ownership of the whole lifecycle of our work.
  • Express an active interest in, and form an opinion about, the ecosystem in which our code integrates.
  • Use our own initiative and conviction to effect positive change in the team, the company, the community.

(Originally posted on

OpenStack Gets Its Swagger Back

In a previous article I briefly explored the time-gap between marketing hype around a new technology (in this case Kubernetes) and its actual adoption at scale. Attending the OpenStack Summit in Berlin last week was a great opportunity to time-travel forward in that same type of cycle. OpenStack was all the rage just four years ago but then, as a reaction, became the butt of many a cloud joke. I set out to discover where OpenStack is in late 2018, and what newer technologies and foundations can learn from this story.

The “Four Opens” (photo: Udi Nachmany)

Boom to bust to…?

OpenStack famously came out of a joint project set up by NASA and Rackspace in 2010, to offer an open-source Cloud platform—focused on compute, network and storage—and counter the early rise of lock-in-focused AWS and Azure. The ecosystem reached peak hype around late 2014, marked by the number of startups and the size of their rounds (and burn rates), the size of the semi-annual summits, the number of vendors coming into the foundation—and their bragging about the number of open source contributions to the project. (Contributions to open source are of course a very good thing when done with the intent of collaboration and problem-solving, but it’s a quality, not quantity, game; this short thread from Jessie Frazelle is a great summary from a developer’s point of view.)

By 2016, when Ubuntu founder Mark Shuttleworth talked about the collapse of “The Big Tent”, OpenStack was deep in the trough of disillusionment. Soon enough, startup funding ran dry (Oracle and other buyers made some key post-bubble acqui-hires as a result), and major backers like HPE, Cisco and IBM started to cut their losses. This was so painful in the Cloud industry that at times it seems that foundations like the CNCF manage themselves according to a secret “don’t do what OpenStack did” rulebook.

Leaner, but not meaner

What OpenStack has done extremely well from the beginning is engage its community of engineers and operators. This tight-knit and committed group, which has gone through the flames of technical issues and human skepticism, seems to have been the key to the project’s current state—more modest, mature, pragmatic and focused on uptime.

Consider how OpenStack has managed to generate activity around new projects that address challenges presented by technologies such as Containers, ‘unstructured’ PaaS and Serverless. At the summit, foundation speakers boasted the following: the Kata Containers project allows hardware-level security isolation that can work well with Kubernetes and FaaS; Airship is a collection of tools that automate cloud provisioning; StarlingX leverages the Titanium R5 platform to address distributed edge applications; and Zuul is a CI/CD platform fit for multiple environments. All these tools are, in some respect, designed for infra operations users, and all bridge from OpenStack, which is a sign of pragmatic confidence in the ecosystem’s vitality as well as in its place within wider contexts.

The atmosphere at the show itself seemed more down-to-earth than before too, and as friendly as ever. Exhibitors were mostly managed services providers looking to help enterprises succeed with OSS (no magic-cloud-in-a-box pitch was noticed), swag hunting was a niche activity (i.e., fewer, higher-quality conversations), keynotes focused on advanced production case studies, and at the long list of talks about Cloud, Containers and FaaS, lead-generating badge scanners were nowhere to be seen. Kind of what you’d expect from a proper tech community summit.

It’s not all roses, of course. The opening keynote mentioned 75 public clouds running on OpenStack, but in truth none of those come close to threatening the dominance of the big-three (the biggest OpenStack-based public cloud is probably Rackspace, which is increasingly focused on managed services). Initiatives like the Passport Program are good for making the niche work better together, but they probably won’t move AWS’s needle.

No more Stacks

That said, in a world where everyone wants to talk about functions, containers and the edge, the OpenStack project has done a remarkable job of moving forward and adapting. In my opinion, the two most meaningful moments came when keynotes touched upon the project’s plans in China, and the summit’s name change.

While the US and European markets for OpenStack continue to grow slowly and steadily, much faster growth is attributed to the Chinese market, which has both a massive infrastructure need, and a tendency to be wary of the leading Western vendors. It will be interesting to see how sustainable that growth is for OpenStack, especially if the government publicly endorses the project based on its popularity, openness and local case studies such as Tencent. Luckily, OpenStack’s first summit in China, planned for 2019, will allow us an opportunity to explore that question up close.

It was also announced that the summit will change its name, to the Open Infrastructure Summit. While this recognizes the broad relevance and experience of this ecosystem, it will no doubt bring it into closer orbit with events organized by the likes of Linux Foundation and the CNCF. Whether harmony and symbiosis will ensue, or friction and confusion, remains to be seen. What’s very clear from this summit is that OpenStack has become a bit boring, and that is great news for anyone looking to deploy reliable private cloud infrastructure.

(Originally posted on