Open Source’s Rocky Week Provides Ample Food For Thought

Visitors to the Open Core Summit in San Francisco in mid-September could be forgiven for their confusion: on the one hand, discussions at the conference centered on business and community models for open source software (OSS) viability in an increasingly polarized Cloud world.

Nick Schrock at Open Core Summit (photo: Open Core Summit)

On the other hand, tweetstorms by key figures in the community (who were notably absent) focused on the very definition of open source, disagreeing with the association of open source with open core, source available and other limiting models. One of the attendees (disclaimer: a colleague of mine) put it succinctly in a tweet exposing the still-raging debates about the nature and direction of the different models to the "left" of full-on proprietary software.

A Week Of Moral Reminders For Open Source

It was certainly fascinating that a conference dealing with different approaches to protect or limit open source—many of them controversial—was book-ended by two seminal events in the OSS world. The first occurred just prior to the conference, when the founder of the Free Software movement and the GNU project, Richard M. Stallman, resigned from MIT and the Free Software Foundation (FSF) following public pressure over opinions he expressed in an email concerning the Jeffrey Epstein affair—software engineer Sarah Mei gave a detailed breakdown on Twitter of historical issues with misogyny at the FSF.

While the Stallman resignation could be seen as starting to fix a historical issue, the second event more clearly raised new questions for open source’s future.

After it came to light that Cloud company Chef had signed a deal with U.S. Immigration and Customs Enforcement (ICE, an agency widely criticized for its separation of families and night-time raids), developer Seth Vargo pulled a piece of open source technology that helps run Chef in production, citing moral reasons.

Chef added kindling to the fire by forking the deprecated code and replacing the author's name—a move viewed as hostile to community norms. Chef likely needs to fulfill its contractual obligations to a demanding federal customer, but it has since back-tracked on the author renaming and has gone public with its next steps for remediation.

Free Software: Free For Any Purpose?

Much of the Twitter criticism around the Open Core Summit re-emphasized the agreed principle that if you’re writing OSS, you should be comfortable with anyone re-using or modifying such software freely, per the Four Freedoms—even if it means that AWS sells a managed service based on a project, which could limit the growth of the commercial entity supporting it, and with it the project’s viability itself (I covered this in my earlier post).

However, an extension of that question pertains to the use of software for purposes that conflict with the maintainers' values. The Four Freedoms as defined today do not speak to someone using free software to infringe on non-software freedoms—but some are now calling for this to become a formal part of free software licenses. A few licenses already include such clauses, but due to numerous gray areas, this is tricky to navigate: some entities (the CIA) enjoy less scrutiny than others (ICE); judgment on some issues depends on one's background (China-Taiwan, Israel-Palestine, Spain-Catalonia); and so on. Even if almost all involved see the use as evil, how does one prove that a server in a North Korean labor camp runs Kubernetes, for example? How does a project enforce its policy in such a case?

As more developers bring their values to work, this will be a critical development for open source, software in general, and technology. Developer Matthew Garrett positioned this well, claiming that solving this through licenses could be effective but would not be in line with the principles of free software and open source. Likewise, risk engineer Aditya Mukerjee gave a great summary on Twitter of where this could quickly get complicated.

Acquia’s “Makers And Takers”

In this context it was useful to talk to Acquia founder and Drupal Project lead Dries Buytaert, just after the Summit (he had to cancel his attendance there to close an investment round in the company from Vista Equity Partners).

In a long and impassioned blog post, Buytaert used the "makers vs. takers" model to argue that the failure (by all stakeholders) to embrace the collaborative culture of the OSS community is the most real and damaging issue facing it. Operating an open source community and business is hard, claims Buytaert, and ultimately every community is set up differently. Acquia, he says, maintains the Drupal project but contributes only 5% of code commits, which ensures open collaboration—compared with other vendors who might opt for more strategic control of their projects' direction at the expense of collaboration.

An example of this, says Buytaert, is a model by which open source vendors ensure they receive "selective benefits" from their work. Automattic in the WordPress community controls the WordPress.com domain; Mozilla Corporation, the for-profit subsidiary of the Mozilla Foundation, is paid large sums by Google for searches done through the Firefox search bar; MongoDB owns the copyright to its own code and is able to change MongoDB's license in order to fend off competitors.

From Cloud Vs. Community To Government Vs. Community?

Still, Buytaert agrees that there is a degree of threat from large, well-funded companies that "refuse to meaningfully contribute, and take more than they give." However, it's important, first, to understand and respect that some companies can contribute more than others; and second, that the OSS community encourages and incentivizes them to do so. The best way, says Buytaert, is to create a situation where the more you invest in open source, the more you win commercially. If the opposite is true, the model will be hard to sustain.

Buytaert suggests that the big cloud players could give back by rewarding their developers for contributing to open source projects during work hours, or by having their marketing team help promote the open source project—in return they will get more skilled and better connected engineers.

As Pivotal’s Eli Aleyner suggested in his talk at the Summit about working with public clouds, today’s developers are tomorrow’s technology buyers, and any potential short term gains for a cloud provider from not playing nice with an open source project could be dwarfed by the commercial damage resulting from alienating the community.

If the Chef precedent is any indication, this principle now very clearly includes government entities, and so it will be interesting to see how software use evolves as communities become more opinionated about the end use of their software.

(Originally posted on Forbes.com)

VMware And IBM Go Full Circle To Dominate The Cloud-Native Ecosystem

While Clayton Christensen’s “Innovator’s Dilemma” taught us that leaders will struggle to retain their leadership position in dynamic industries, others such as Ron Adner have reminded us that it is rarely the innovator that ends up capturing most of the long-term value. Examples in technology abound, from Xerox Labs and the graphical user interface to Sony and the MP3 player (both examples redefined by Apple later on).

Sometimes, a handful of new players emerge that truly break through into dominance—Amazon Web Services is an obvious example in the IT infrastructure space—but the large majority end up failing (think of how many more public cloud players existed only a few years ago). As analyst firm Monkchips has shown with its “VMware pattern” theory, that is mostly because it is very difficult for a company to mature enough to take its innovative technology from inception to wider enterprise adoption.

VMware CEO Pat Gelsinger and Heptio co-founder and CTO Joe Beda, on the day of the announcement of the former company acquiring the latter (photo: Joe Beda on Twitter)

The polarization of the Kubernetes world

The cloud-native ecosystem emerged with the open-sourcing of Kubernetes in 2014, and since then we have seen an explosion of new companies and new open source projects. At times daunting for its busy graphic representation, the fragmented CNCF landscape for the most part showed us a promise of a community of small, innovative equals. The past two years have been remarkable for the consolidation of power in this ecosystem, arguably the current and future battleground of IT, as I wrote in a previous post.

While consolidation happens often, and the “VMware pattern” holds, it is not often that we see companies who were dismissed as “has-beens” by analysts come back to true dominance in a completely new field. After all, HP’s and Dell’s server businesses have been hit hard by Cloud and have not bounced back; Oracle has been trying to adapt to the brave new world of open source and portable software; and Microsoft has given up its plans for mobile long ago (though its resurgence as a cloud and open source mammoth has been breathtaking to watch).

In the blue corner, if you will, stands IBM: with its acquisition of Red Hat and its strategic contributions to the cloud-native ecosystem, it is newly positioned as a strong leader with a wealth of hugely popular open source projects and strong tools to build, run and manage cloud-native applications (from RHEL through to OpenShift). I wrote about this in an earlier post, interviewing Dr. Jim Comfort.

And in the other corner, its erstwhile rival from the jolly days of virtualization: VMware. With the rise of public cloud and OpenStack, and then Kubernetes, the company weathered clickbait headlines about its business model, employee retention and other concerns—all the while continuing to post satisfactory financial results. With the recent news, long-time VMware execs are definitely laughing now.

The Multi-Cloud, Cloud-Native Company

The brave decision to shut down its own public cloud service—something IBM, with its large Softlayer estate, did not do as decisively—led VMware to embrace its position as a multi-cloud leader, with strategic deals brokered with AWS and other clouds.

Then came the wave of acquisitions: Heptio, a cloud-native services firm founded by the originators of the Kubernetes project; Bitnami, a cloud apps company with deep developer relationships; and lately, Pivotal, OpenShift’s big rival in the world of enterprise PaaS.

At VMworld this week, we witnessed the pièce de résistance with Project Pacific and Tanzu Mission Control, effectively summarized on Twitter by Heptio co-founder and former CEO Craig McLuckie.

Both IBM and VMware are clearly just getting started. The dominance of developers and popularity of open source as a methodology in the cloud-native world will likely ensure some sort of balance of power in the ecosystem, but with the dramatic and rapid resurgence of these two ex-“has-beens,” Kubernetes and related projects are truly maturing into enterprise technologies. For now, their innovator’s dilemma has found its solution.

(Originally posted on Forbes.com)

No Easy Way Forward For Commercial Open Source Software Vendors

In an earlier article, I examined some of the recent dynamics in open source software, specifically around the for-own-profit commercialization of some projects by large cloud providers, and how that is driving smaller companies to seek out restrictive license models, in the process causing themselves considerable friction in their communities.

As befits a piece that deals with themes of free software and a polarized cloud industry, the article seems to have struck a chord with several people, some of whom contacted me to agree or disagree with my points. Rather than keep those exchanges to myself, I thought a follow-up with three of these luminaries, sharing their inside views on the topic, would be much more engaging.

In this article, I’ll summarize the main points from my conversations with Spencer Kimball, co-founder and CEO of Cockroach Labs; Joseph Jacks, founder and general partner of OSS Capital; and Abby Kearns, Executive Director of the Cloud Foundry Foundation. All have extensive track records in open source, but each has a slightly different take.

Caution (photo by Makarios Tang on Unsplash)

The independent vendor perspective: Spencer Kimball

While still a student in 1995, Kimball developed the first version of the GNU Image Manipulation Program (GIMP) as a class project, along with Peter Mattis. Later on, as a Google engineer, he worked on a new version of the Google File System and on the Google Servlet Engine. In 2012, Kimball, Mattis, and Brian McGinnis launched the company Viewfinder, later selling it to Square.

Drawing on his experiences at Google, Kimball wanted a technology like BigTable to be made available as open source outside of the company, and co-founded (again, with Mattis, and ex-Googler Ben Darnell) the company Cockroach Labs to provide commercial backing for CockroachDB, an open source project.

According to Kimball, whichever cloud provider is best at brokering the multi-cloud migration will ‘win’ cloud. He adds that CockroachDB was built for that multi-cloud, multi-region, relational future—where scale and complexity, but also privacy frameworks such as GDPR, become critical business drivers. But as optimistic as he is about the business, Kimball is also concerned about the sustainability of today’s and tomorrow’s venture-backed commercial OSS businesses.

Red Hat, Kimball reflects, clearly ‘figured out’ the model for commercial OSS before the days of cloud, becoming the dominant force in the commercial OSS business. The Red Hat ‘equilibrium’ (Kimball’s term) was based on selling contracts for support and professional services on top of widely-available OSS. With the emergence of cloud, Red Hat capitalized on the complexity of ‘big-software’ systems such as OpenStack and Kubernetes. (Bassam Tabbara of Upbound has commented on how this model might change with the IBM acquisition.)

Kimball states, “with cloud becoming the mainstream way to consume and manage IT, the complexity of some OSS provides a natural advantage to cloud platforms such as AWS or Azure, as they can use economies of scale to build a managed service out of any open source core.” He adds, “they can also offer enterprise support on top, effectively taking the bottom 50% of an emerging vendor’s total addressable market, and also capping its growth in the enterprise high-end.” So what is an emerging vendor to do? “The best protection is community,” says Kimball. Engaged, committed groups of maintainers, contributors and users are impossible to copy or to replicate in a managed service, and can keep even the most aggressive IT giant at bay.

Another protection could be to address a multi-cloud niche, as Cockroach Labs has done, serving customers in the gap between the lock-in-focused cloud providers. At the end of the list, Kimball mentions restrictive (“almost ‘byzantine’,” he says) licenses and other defensive models such as ‘free for use, source available’, whole-compilation protection and more—all suboptimal and not in line with the principles of free software.

In light of these comments from Kimball, it is very interesting—if not entirely surprising—to note CockroachDB’s licensing change, announced last week on the company’s blog: it is adopting a version of the Business Source License (BSL) that, unlike MariaDB’s version, is not limited by node count, but prohibits other players (read: AWS) from offering a commercial version of CockroachDB as a service without buying a license. This announcement has already resulted in friction on social media and in the blogosphere (which I would rather not amplify by referencing).

The venture investor perspective: Joseph Jacks

OSS Capital is the world’s first VC firm exclusively focused on investing in and partnering with commercial open-source software startups. An early contributor to Kubernetes, Jacks previously founded Kismatic, which he sold to Apprenda, as well as founding the container mega-tradeshow KubeCon and donating it to the CNCF as part of its inception.

OSS Capital’s investment strategy is focused exclusively on supporting early-stage commercial OSS startup companies. Its equity partner/advisory network of commercial OSS founders has collectively captured over $140bn in value across 40 of the largest COSS companies of the previous decade; transferring this knowledge and expertise to the next generation of commercial OSS founders is a core part of OSS Capital’s value proposition. Additionally, OSS Capital organizes the commercial OSS community conference, OpenCoreSummit.com.

When asked about the strategic outlook for OSS given recent skirmishes, Jacks points out that the pie is getting much, much bigger: since companies outside of what is considered the software industry (from cars to home appliances) are effectively becoming producers of software, that grows the addressable market considerably, and will result in an acceleration of open source well beyond what we’ve seen so far.

Even from within the tech industry, Jacks says, “many OSS projects disrupt and/or bring transformational innovation to major global industries like data processing and storage (Spark, Ceph, Hadoop, Kafka, MongoDB, CockroachDB, Neo4j, Cassandra), operating systems (Linux, FreeRTOS), semiconductors (RISC-V), networking/CDNs (Envoy, Varnish), software engineering (Docker, Go), computing (Kubernetes), search (ElasticSearch), AI (TensorFlow, PyTorch).” Those two major developments, says Jacks, will reframe the playing field for open source.

Given his expansive view, it is perhaps not surprising that Jacks is a critic of the recent proliferation of restrictive licenses as a defensive measure for emerging OSS companies. “This can dramatically reduce the value-creation potential of OSS projects, which are fundamentally driven by developer adoption,” he says, adding, “instead, open-core OSS companies should use more permissive licenses like MIT, A2.0, or BSD in order to maximize value creation for all constituents (and that includes cloud providers), while capturing value on the proprietary layers around the open core.” (Jacks calls this layer “the crust”.)

So what are effective strategies for a new OSS company to build, scale and survive in an AWS-dominated world? Jacks says, “one, focus on maximizing value creation and capture for all, building highly standardized disruptive technology; two, build inclusive and constructive communities; three, ship quality software fast; four, embrace transparency and open governance across all constituents.”

The foundation perspective: Abby Kearns

I spoke with Abby Kearns as a follow up to my interview piece with her from late last year, and the conversation focused on the licensing implications of competitive moves in the commercial OSS market. At an impressive CF Summit, Cloud Foundry Foundation announced that its open source project Eirini, which enables pluggable use of either Diego/Garden or Kubernetes as orchestrators, passed its validation tests for CF Application Runtime releases. Kearns, who has served as Executive Director of the Foundation since 2016, is no stranger to both the opportunities and the tensions that exist at the intersection of free software and commercial interests.

As expected, Kearns is adamant, saying, “open source as a method of building and delivering free software can only thrive if we continue to put code in the open, and ask for help in improving it.” She recommends that developers and commercial OSS companies assume that someone will copy the software and perhaps even use it in a competitive context—and if one is worried about that, then why put the code out there in the first place?

In Kearns’s view, actions such as using restrictive licenses can be revealing when it comes to the maintainer’s intent. Similarly, companies that open-source a wholly-formed thing might be missing the point, which is to build together, says Kearns—paraphrasing Richard M. Stallman’s famous manifesto: “free like free speech, not like free beer, not like a free puppy or free (used) mattress”.

Kearns believes that focus on these key tenets will see commercial OSS vendors through: engagement with contributors, transparency towards stakeholders, and outreach towards community. She also points out that growth in users isn’t the only meaningful metric for open source—just as important is growth in meaningful usage or in engagement with a dynamic community that likes to contribute.

Why open source in the first place?

To continue on this positive note, Gabe Monroy of Microsoft recently retweeted a thread showing how engineers from rival vendors can collaborate successfully around open source software, to the benefit of both users and the projects themselves. Per Monroy, this is an “example of why multi-vendor OSS is the future of infrastructure software”. This, and so much more, could not have been achieved were it not for open, collaborative communities and a bias towards permissive licensing.

(Originally posted on Forbes.com)

Containers Put The ‘Dev’ Back In DevSecOps

On the back of a record-breaking KubeCon+CloudNativeCon (with a reported 7,700 attendees last week), it is very clear that Kubernetes’ position as a cloud center of gravity remains unassailable for the foreseeable future. For one, even outside of this sprawling tradeshow, it has taken over sessions at many other conferences, from the Open Infrastructure Summit (previously OpenStack Summit) to the Cloud Foundry Summit. I personally believe that in the next few years, these two conferences (and others) will either accelerate in their contraction or even merge into a mega CloudNativeCon.

Does that mean digital transformation has reached a steady state? Hardly. As the Cloud Foundry Foundation’s April report showed, 74% of companies polled define digital transformation as a perpetual, not one-time, shift. The survey also shows that organizations are already using a good mix of technologies and clouds in production, which explains the Foundation’s endorsement of Kubernetes and the launch last year of the related project Eirini.

Tunnel (photo by Alexander Mils on Unsplash)

New models, new personas, new tension

In the end, what matters to IT teams is a thoughtful approach to building and deploying cloud-native applications. There are many reasons why containers have become so popular since DotCloud’s pivot into Docker all those years ago, and I have gone through some of them in previous posts. IBM’s Jim Comfort has called it the most dramatic shift in IT in memory, more so than cloud computing. Since that pivot, it has become a convention that whatever is in the container is the developers’ responsibility, while operators own what is outside of it.

Kubernetes and related projects challenge that paradigm even further and represent the cloud-native vision in that they provide developers with access to native APIs, which means they have much more control over how their application is delivered and deployed. So, while software is eating the world, developers have started to eat the software delivery chain.
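
To make the “access to native APIs” point concrete, here is a minimal sketch (not from the original article) using the official Kubernetes Python client to create a Deployment directly from code. It assumes the `kubernetes` package is installed and a kubeconfig is available; the names, namespace and image are placeholder assumptions.

```python
from kubernetes import client, config

def deploy(name: str = "demo-app", image: str = "registry.example.com/demo:1.0"):
    # Assumes a developer workstation with a kubeconfig; inside a cluster,
    # config.load_incluster_config() would be used instead.
    config.load_kube_config()
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name, labels={"app": name}),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name=name,
                            image=image,
                            ports=[client.V1ContainerPort(container_port=8080)],
                        )
                    ]
                ),
            ),
        ),
    )
    # The developer drives delivery through the same API the platform itself uses.
    apps.create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy()
```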

However, Kubernetes’ evolution into the gold standard of an “un-opinionated PaaS” means that in most large enterprises it is still owned and maintained by operations-minded engineers. Therefore, one of the main efforts driving enterprise adoption has been to build operating models that balance developer empowerment and operator governance—for some, that means DevOps. Yet the rise of DevOps is not the only shift in user personas that we have seen.

Open source and the “shift-left” risk

Security used to be the domain where the same people both bought and used the solutions that prevent and adapt to security risks; but that was in an age of waterfall development and mostly closed-source code. The rise of open source software has been as disruptive to that model as it has been pervasive: whether the application itself is open-sourced or not, significant amounts of open source packages are leveraged in its creation. The cloud-native movement, with its strong bias towards open source, has accelerated this trend even further.

The reason this matters is that security teams are neither staffed nor equipped to control these open source inputs. Whether the security team was planning for it or not, a lot of their risk has “shifted left”, and now, with open APIs, both the breadth and the speed of that risk have risen. Snyk’s latest report, “Shifting Container Security Left”, shows that the top 10 official container images on Docker Hub each contain at least 30 vulnerabilities, and that even among the top 10 most popular free Docker-certified images, 50% have known vulnerabilities.

Even worse, the report surfaces an acute ownership problem: 68% of respondents believe developers should be responsible for owning container security, while 80% of developers say they don’t test their images during development, and 50% don’t scan their images for vulnerabilities at all.

Time for joint accountability

As a consequence, when it comes to software-lifecycle tools, security vendors are experiencing a demand shift—a separation of personas at large enterprises. Security teams are still the economic buyers and hold the budget authority; however, with the realization that developers must have the tools to own more of their apps’ security comes a transfer of budget decisions to the development teams who will actually use these tools. You build it, you run it—but you also secure it.

As I mentioned in an earlier piece, this is not an easy process for a security industry that is geared toward servicing security teams. The challenge many providers are facing is to build tools designed for both sides of the aisle, tools that promote joint accountability: empowering the people “on the left” while giving those “on the right” enough control and observability.
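
To make the “joint accountability” idea concrete, here is a minimal, hypothetical sketch of a CI gate in Python: the security team owns a small policy file (the control), while developers get an immediate pass/fail signal on every build (the empowerment). The report format, file names and thresholds are illustrative assumptions, not any specific vendor’s output.

```python
import json
import sys

# Hypothetical inputs: a vulnerability report produced by whatever scanner the
# team uses, and a policy file owned by the security team.
REPORT_PATH = "scan-report.json"      # e.g. {"vulnerabilities": [{"id": "...", "severity": "high"}]}
POLICY_PATH = "security-policy.json"  # e.g. {"max_allowed": {"critical": 0, "high": 3}}

SEVERITY_LEVELS = ["low", "medium", "high", "critical"]

def load(path):
    with open(path) as f:
        return json.load(f)

def main():
    report = load(REPORT_PATH)
    policy = load(POLICY_PATH)

    # Count findings per severity level.
    counts = {level: 0 for level in SEVERITY_LEVELS}
    for vuln in report.get("vulnerabilities", []):
        sev = vuln.get("severity", "low").lower()
        if sev in counts:
            counts[sev] += 1

    # Compare against the limits the security team has set.
    violations = []
    for level, limit in policy.get("max_allowed", {}).items():
        if counts.get(level, 0) > limit:
            violations.append(f"{counts[level]} {level} issues (limit {limit})")

    if violations:
        print("Security gate failed: " + "; ".join(violations))
        sys.exit(1)  # a non-zero exit code fails the CI job
    print("Security gate passed:", counts)

if __name__ == "__main__":
    main()
```

In practice, a gate like this would sit alongside, not replace, the earlier feedback developers get in their IDE and pull requests.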

In a sense, to shift-left is to let go. With more code—in a repository or in pre-built containers—coming from lesser-known origins and deployed more rapidly on more distributed systems, the only option is to continue to evolve and provide developers with the right tools and knowledge to own security.

(Originally posted on Forbes.com)

Shift-Left Security is About Empowerment, Not Encroachment

Containers have changed the security conversation in many organizations

There are many reasons why containers have become popular in recent years. For one, they help developers maintain consistency across platforms and through the software delivery chain. As a result, it’s now the norm to consider whatever is in the container to be the developers’ responsibility, while operators own what is outside of it.

Kubernetes has challenged that paradigm even further by providing developers with access to its native API, affecting operational parameters.

You could say that software is eating the world, open source is eating software and developers are eating the software life cycle!

Kubernetes, however, was built by system developers and (arguably) for system developers, and has evolved into the gold standard of an “un-opinionated PaaS”—and in a large enterprise, it is still owned and maintained by an operator. As a result, one of the main trends in enterprise adoption has been to find operating models that empower devs while providing operators a good level of control over policy creation and enforcement.


User != Buyer

Security is a great example of the need for this new equilibrium. With the explosion of open source software (which is leveraged in both open and proprietary applications), many security teams understand that their risk has shifted left, and in a cloud-native world it is moving faster than ever toward production. Snyk’s latest “State of Open Source Security Report” shows a significant increase in vulnerabilities—almost doubling in just two years. More importantly, the report also shows companies are struggling to tackle container security, with a four-fold increase in container vulnerabilities in 2018.

Unsurprisingly, the report shows a need for developers to be enabled to take more ownership for security. The figures reveal 80% of developers believe they should be responsible for the security of their open source code, yet only 30% rate their own security knowledge as “high.”

As a consequence, the security industry is experiencing a decoupling of the user and buyer personas for software security tools and services at large enterprises. Security officers still hold the budget authority in most cases, but increasingly, they fence off a good chunk of it for tools chosen by developers to be used in their development workflow to help them deal with security issues as early and as often as they commit code.

New Shared Tools for a New Age

The above is easier said than done, when almost all of the security industry is geared toward servicing security teams and helping them deal with “those pesky software engineers.” The challenge for providers in this rapidly changing reality is to provide tools designed for developer responsibility but also for joint accountability: the best user experience for the people “on the left” with the right levels of control and observability for the people “on the right.”

What do these tools look like? They move at the speed of code at the source control and IDE stages (rather than snapshotting or cloning repos for static analysis) to allow developers to find and fix issues early; they provide strong complementary gates at the build, CI/CD and registry (push) stages; they place clear controls at the gates to deployment, whether by using Kubernetes or OpenShift admission controllers, or integrating within a `cf push` command (as examples); and perhaps most importantly, they prioritize actionable insights over just plain data.
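
As an illustration of the “controls at the gates to deployment” point above, below is a minimal sketch of a Kubernetes validating admission webhook in Python, using only the standard library. It rejects Pods whose images don’t come from an approved registry; the registry prefix, port and certificate paths are placeholder assumptions, and a production webhook would check scan results and policy, not just an image prefix.

```python
import json
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer

APPROVED_REGISTRY = "registry.internal.example.com/"  # placeholder policy

class AdmissionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Kubernetes sends an AdmissionReview object as the request body.
        length = int(self.headers.get("Content-Length", 0))
        review = json.loads(self.rfile.read(length))
        request = review.get("request", {})

        # Collect all container images in the submitted Pod spec.
        pod_spec = request.get("object", {}).get("spec", {})
        containers = pod_spec.get("containers", []) + pod_spec.get("initContainers", [])
        images = [c.get("image", "") for c in containers]

        offending = [img for img in images if not img.startswith(APPROVED_REGISTRY)]
        allowed = not offending

        response = {
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": {"uid": request.get("uid", ""), "allowed": allowed},
        }
        if not allowed:
            response["response"]["status"] = {
                "message": f"images not from approved registry: {', '.join(offending)}"
            }

        body = json.dumps(response).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 8443), AdmissionHandler)
    # Kubernetes requires webhooks to be served over TLS; cert paths are placeholders.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="tls.crt", keyfile="tls.key")
    server.socket = ctx.wrap_socket(server.socket, server_side=True)
    server.serve_forever()
```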

Shift Left Sometimes Means ‘Let Go’

In an operational sense, shift left doesn’t mean that platform or DevOps teams should own actions more to the left, encroaching on developers. Quite the opposite: It is about developer empowerment. With more bits of open source and proprietary code coming from increasingly disparate origins and sitting in increasingly distributed systems, this becomes at least as big an imperative for security teams. While CSOs and CISOs remain accountable, more and more of them bravely face the cloud-native future by allowing developers to be more responsible for the security posture of their own apps.

(Originally posted on Container Journal)

At RSA Conference, Most Security Vendors Still Not Shifting Left

A week at the sprawling cybersecurity conference known as RSA Conference always sends you home with tired feet and a full brain. With attendance as high as ever, top-quality (and more diverse, this time around) keynotes, myriad sponsored evening events and—crucially—a week-long San Francisco drizzle that kept attendees engaged indoors, the momentum seemed ever-present.

However, as someone coming to this industry from cloud computing, containers, open source and DevOps, I could not help but notice a wealth of “old world” pitches and a dearth of newer ones. If you are in the market for perimeter or beyond-perimeter solutions, ethical hacking or endpoint security, you can spend far more than the allocated few days to explore all the talks and vendors on offer; but if you are trying to face the cloud-native immediate future by shifting security to the left, and are looking for insights on this shift, then not so much.

Rusty lock (photo by James Sutton on Unsplash)

A time of painful change

As Snyk CEO Guy Podjarny recently pointed out in his talk at QCon London, DevSecOps is a highly overused term that few stop to define for themselves in depth. In his approach, DevSecOps is actually an umbrella term for three areas of required transformation: technologies, methodologies and models of shared ownership.

In this reality, modern and modernizing IT organizations are facing tremendous disruption on several levels:

  • Cloud computing, microservices and containers represent a technology shift on a scale unseen in recent history, as was discussed with IBM’s Jim Comfort in my last post. This isn’t about training people on a couple of new tools—it is about re-thinking what technology enables, and how technology affects the operational and business structure.
  • From Agile to DevSecOps and beyond, methodologies can help accelerate technology adoption, but they also tend to trigger significant shifts in culture and process. This is one reason why, as I detailed in a previous post, early mass-market users of technologies such as Kubernetes require operating models, not nifty click-to-deploy SaaS tools.
  • As companies attempt to hire new talent to address new challenges, and as technologies from the last decade are being phased out, a skill/generational gap emerges between young cloud-native developers and long-in-the-tooth operators, fluent in Linux system administration and server configuration management. As Jim Comfort implied in that IBM post, re-skilling probably can’t keep up with the pace of technology change.

Security vendors are starting to get it

It’s a classic case of the innovator’s dilemma. While most traditional security vendors realize the need to adapt, they have large and stable revenue streams from legacy products, which makes their transition slower than the pace of change their customers are experiencing. That isn’t to say that the existing transition isn’t meaningful: we are well on the journey that started with securing the server and moved on to securing the VM, the cloud estate and—more recently—the container. Put that together with the rise of external open source code that developers leverage in practically every new application (including proprietary ones), and with the understanding that an increasing portion of risk comes from inside the perimeter, and you reach the obvious conclusion: the urgent ‘cybersecurity’ battle is for securing the developer’s workflow, in real time.

Another interesting change is a decoupling of the user and buyer identity. The budget might still sit with the CISO, but increasingly, portions of those security budgets are being allocated to developer tools. This disrupts a long-existing equilibrium in the security market, and therefore it is no wonder that many vendors are re-engineering the product-services mix, as they figure out where their customers are going. Expect IT organizations that collaborate across the engineering-security divide to be more resilient in the face of exploit attempts—and expect vendors that adapt quickly and effectively to this future to have a better chance of survival.

New partnering models are key

Cash is usually king, and many of these so-called ‘cybersecurity dinosaurs’ will survive and thrive by leveraging their reserves, but it may be a painful and protracted process. To make it less so, vendors would be wise to rethink go-to-market strategies, re-align strategic partnerships, and prioritize capabilities for risk mitigation over risk adaptation—all in recognition that developers will continue to move fast, and that savvy security officers need to enable that within the right guardrails.

Both security teams within IT organizations and incumbent security vendors would be wise to follow the guidance of members of the PagerDuty security team, interviewed on The Secure Developer podcast, who said that their job, in the end, is to make it easy for developers to do the right thing.

(Originally posted on Forbes.com)

IBM And The Third Iteration of Multi-Cloud

Four months after the announcement of the Red Hat acquisition, and some time before it is expected to officially close, I caught up with Dr. Jim Comfort, GM of Multicloud Offerings and Hybrid Cloud at IBM, to talk about the company’s recent multi-cloud news and to get a sense of its updated view of multi-cloud. Jim joined IBM Research in 1988 and has since held a variety of development and product management roles and executive positions. He has been closely involved with IBM’s cloud strategy, including acquisitions such as SoftLayer and Aspera. [Note: as a Forbes contributor, I do not have any commercial relationship with IBM or its staff.]

Dr. Jim Comfort (photo: IBM)

First, what is significant about the new announcements?

At its THINK 2019 conference, IBM announced Watson Anywhere, the latest evolution of its fabled AI platform (originally restricted to the IBM Cloud), which will now run across enterprise data centers and major public clouds including AWS, Azure, and Google Cloud Platform. My reading is that this is a potential sign of things to come with Red Hat: IBM seems to be coming to terms with its relative weakness as an IaaS player, and its tremendous strengths as a platform and managed service provider. Building the multi-cloud muscle with Watson now will serve IBM well when it comes to Red Hat OpenShift later on.

IBM also made a series of announcements regarding its hybrid cloud offerings, including a new Cloud Integration Platform connecting applications from different vendors and across different substrates into a single development environment, and new end-to-end hybrid cloud services, bringing IBM into the managed-multi-cloud space. Again, to me this seems like infrastructure-building for the day IBM turns on the multi-cloud ‘power button’, using its existing and new technology and formidable channel assets.

How the Red Hat deal might change IBM

Jim agrees we haven’t seen an IT wave move as broadly and as quickly as containers have in decades—not even cloud, which took the best part of a decade to take off. This isn’t just due to Solomon Hykes’s eye-opening original Docker demo in 2013, or to the CNCF’s stellar community- and ecosystem-building efforts—the reasons are mainly technological and strategic.

Amongst their many advantages, containers truly offer the potential to decouple the underlying technology from the application layer. In a key insight, Jim suggested that while multi-cloud used to mean private+public, and then evolved into a term for a ‘pick & choose’ strategy (for example, Google Compute Engine for compute, with Amazon RDS for a database), containers and related orchestration systems are leading us to a more powerful definition. Multi-cloud now is about ‘build once, ship many’, on completely abstracted infrastructure. As I wrote in a previous post, ‘soft-PaaS’ is on the rise for many related reasons, but the added insight here is about a fuller realization of the multi-cloud vision. In this new stage of the market, the challenge shifts to areas such as managing data sources and making multi-cloud operations easier. In this sense, OpenShift as a PaaS, together with IBM’s services capabilities, becomes a potentially powerful strategic asset.

How IBM sees its role in this cloud-native world

Jim comes from the services side, and his view on this was not a hardware model for a software-defined world; he claimed that IBM has always helped companies deal with massive change, and that this is still the main mission. The market, if you believe press releases, is constantly shifting like a Newton’s cradle from public cloud to private cloud and back again. Personally, every time I hear of a large company going ‘all-in’ on either side, I tend to chuckle: as I covered in a previous post, oversimplifying architecture and infrastructure has ended many an IT career. Remember that leading UK news outlet that declared it was going ‘all-in with AWS’ because it couldn’t realize its ambitions on DIY OpenStack? I was in the room when the former CIO refused vendor help in implementing this complex private cloud platform and—surprise—the experiment failed.

Jim suggested that, in his experience, vendors can help customers focus on the required business objectives, using five areas for analysis and planning, as IBM already does:

  • Infrastructure architecture
  • Application architecture
  • DevOps and automation
  • Evolved operational models
  • Evolved security models

Yes, the new tech is shiny, but even more important for customers are things like managing legacy-IT resistance, re-skilling an older workforce, and managing generational gaps (cloud-native devs can be much younger than their Ops counterparts). In a surprising statistic, Jim claimed that 50% of IBMers have been with the company for less than five years, and that the company runs specific millennial programs.

IBM’s open source position gets a boost

A major advantage of the acquisition that has not received enough attention, in my opinion, is that in acquiring Red Hat, IBM has shot into the top three organizations measured by open source software contributions. In a star-studded panel during the THINK conference, IBM seemed to embrace this position gladly. Analyst firm RedMonk’s co-founder Steve O’Grady correctly warned that “the future success of open source is neither guaranteed nor inevitable.” Similar to points I covered in a previous post, O’Grady outlined profiteering, licensing and business models as systemic challenges that must be addressed.

However, even if open source continues to thrive, it is a predominantly bottom-up IT phenomenon, which may be at odds with IBM’s more traditional CIO-downwards approach. To me this is one of the most interesting areas to watch: as Red Hat gets integrated into the family, will IBM be successful in changing its own culture, taking full advantage of its history (for example, its talent-hothouse lab, the Linux Technology Center), its new and existing technologies (Kubernetes-based offerings, free Linux OSs) and its newfound open source dominance?

(Originally posted on Forbes.com)

These Are Not The Containers You’re Looking For

In a previous post, I argued that in the case of Kubernetes, the hype may be ahead of actual adoption. The responses to that article have ranged from basic questions about containers, to very firm views that lots of people are already running multi-cluster, multi-cloud containerized infrastructure. Regarding the latter view, from experience I know that smaller, more agile companies tend to underestimate the complexity behind large enterprises’ slower reactions to change (Ian Miell does a superb job of breaking down the slowness of enterprise in this blog post). So, in this article, I’ll try to do three things:

  • Attempt to tighten up some basic definitions for the non-techie folks who may still be wondering what the fuss is about
  • Mention the number one common attribute I’ve seen amongst companies succeeding with containers
  • Explain why moving to containers is just the tip of the iceberg of your IT concerns
"See it from the window"

“See it from the window” PHOTO BY ROBERT ALVES DE JESUS ON UNSPLASH

Is my monolithic app running in a container on VMware cloud-native?

Well, sorry to say, but it depends. Let’s remember that a container is just a software package that can be run on infrastructure, and that there are (at least) two types of containers.

System containers have been around longer, since the days of LXC (2008) or, arguably, the days of Solaris Zones before that. These are, simplistically speaking, small and efficient units that behave like virtual machines. They can support multiple executable applications, and offer isolation as well as other features that system administrators will feel safe with, like easy manageability. This is ideal for traditional apps that you want to containerize without completely revolutionizing your IT practices, and the benefit is simple: increasing application density per server by over 10x vs. a virtual machine.

Application containers have a single application to execute. This is the world of the Docker image format (not the same as Docker Engine, Docker Swarm or Docker Inc.) and OCI, and what most people refer to when they mention the word ‘container’. The benefit from an IT perspective is that application containers running a microservices app bring the full promise of cloud-native to life: high delivery pace, almost infinite scalability, and improved resilience. Those dramatic outcomes demand a significant culture and technology shift, as I will discuss in detail later.

Containers are just packages and will, broadly, do as they’re told; microservices is an architectural approach to writing software applications; and cloud-native is a method of delivering those applications. To answer the question above: throwing away existing resources and ignoring business outcomes in pursuit of an ideal is probably bad practice, so if you have a VMware environment that is there to stay, then that is probably cloud-native enough for now (VMware’s acquisition of Heptio is interesting in that sense for future use cases). The bottom line is that getting hung up on the easiest item on that list to grasp (containers) is a common error.

Thor’s hammer does not exist in IT

I recently met with the head of Cloud for a large UK financial services company, who told me that the previous CIO had to leave his post after failing to deliver on a ‘straight to serverless’ strategy, i.e., leap-frogging the cloud and container revolutions in order to operate workloads on the company’s well-used private datacenter, with serverless technology. That the CIO had to leave isn’t a major surprise: in any line of work, we need to use the right tools for the right jobs, especially when those jobs are complex. In cloud, that means that in most cases, we will likely be using a combination of bare metal, virtual machines, containers and serverless, on any combination of private server farm, private cloud or public cloud.

Without a doubt, the one thing I have seen as a first step in a successful IT transition journey is this: not trying to over-simplify a dramatic IT (r)evolution, but instead understanding it holistically and judging it vis-à-vis business outcomes and objectives. It’s good to strive, but not all companies have the resources to be cloud-native purists, and there are clear benefits even in smaller steps, like allowing more time to analyze the technology’s impact, or enabling better risk management. (This post from container company Cloud 66 does well to describe the short-term efficiency, and long-term insight, gains of moving a monolithic app to a container.)

Known-unknowns and unknown-unknowns

Ultimately, though, we wouldn’t be so excited about the container revolution if it was just about squeezing in more monolithic applications. A microservices app, running in containers, orchestrated on multiple substrates, and all that according to cloud-native principles—that is something worth sweating for. An application that scales quickly and reliably with less operational resource, that adapts fast to customer and competitive dynamics, and that self-heals, is where a lot of us are aiming.

Again, containers are just one part of this. Consider technological challenges: What about orchestration? And network? And stateful services? And cloud-native-ready pipeline tools?

Arguably even more important, consider cultural challenges: What needs to change with our development practices? How do we find and retain the right talent? How do we re-skill older talent and bridge between generations? How will the risk balance change?

An example: open source is already part of your strategy

It is a well-documented fact that the rise of cloud and the rise of open source are connected, which also brings some interesting tensions, as I explored in my previous article. In containers, this synergy seems stronger than ever. The juggernaut behind Kubernetes and many related open source projects, the Cloud Native Computing Foundation (CNCF), is part of the Linux Foundation. The CNCF charter is clear about the intentions of the foundation: it seeks to foster and sustain an ecosystem of open source, vendor-neutral projects. Consequently, since the CNCF’s inception in 2014, it has become increasingly feasible to manage a complex cloud-native stack with a large mix of these open source projects (there is some interesting data in the foundation’s annual report). The more you get into container-native methodologies, the more open source you will use.

The other side of this picture is that open source packages make up significant chunks of free and proprietary applications alike—while your whole app may be proprietary, the bit your team actually wrote may be very small within it. As the State of Open Source Security Report shows, open source usage is tightly coupled with digital transformation, and as such is becoming increasingly business-critical; however, only 17% of maintainers rank their security expertise as high, which means that a lot of those packages are an operational risk.

Yet another aspect is community: using more open source makes the organization a stakeholder, and as such it should liaise with the relevant communities to see how it can contribute, and also how it can get exposure to roadmaps and security alerts as quickly as possible.

No containing this

So to summarize the above example, a ‘simple’ decision about joining the container wave will inherently and significantly increase the organization’s benefit from, and exposure to, open source software. This software may or may not be supported by large sponsors, will probably to a meaningful extent be maintained by volunteers, and will probably have a vibrant community behind it—all of whom need to be engaged by users who rely on these projects.

In other words, not simple. Containers are a critical part of a digital transformation, but just one part. The whole of this transformation—parts of which will appear in your software delivery systems without you expecting them—can enable great things for your applications, if approached with the right mix of openness, maturity and responsibility.

(Originally posted on Forbes.com)

Open Source Software At A Crossroads

“It was the best of times,

it was the worst of times,

it was the age of wisdom,

it was the age of foolishness,

it was the epoch of belief,

it was the epoch of incredulity”

—Charles Dickens

Marc Andreessen famously said that software is eating the world; others have more recently added that, at least in B2B, open source is eating software. Indeed, when we look at 2018, it has been a landmark year for users, enterprises and investors alike—but has it also sown the seeds of a potential slowdown in open source investment, and perhaps even usage?

In the cloud world, where the operational friction of software is reduced or removed, and where economies of scale are extremely effective, public cloud providers present a challenge to many open source projects. Over the past several years, I have had the good fortune of being involved in key deals to monetize some of the largest open source software projects—in a way that keeps them free and usable for end users, and gets cloud providers to support the community. However, given what we’ve seen in 2018 and just last week, the economic model for funding open source may be at risk. It is down to more innovative aggregation models and powerful open source communities to ensure that open source continues to gain ground.

Open sign (Photo by Finn Hackshaw on Unsplash)

Continued adoption, M&A explosion

It’s no secret that open source use is accelerating and is driving some of the most critical pieces of modern IT. In addition, the Linux Foundation recently reported that its membership has gone up by 432 percent over the last five years.

On top of that, 2018 saw roughly $57 billion of value creation in open source M&A and IPOs. The number jumps by $7.5 billion if you count GitHub’s acquisition by Microsoft, despite the fact that GitHub is not a developer or curator of open source software as such; rather, it accelerates the use of open source (with “pervasive developer empathy”, as I heard Accel’s Adrian Colyer mention). This is a story of absolute sums but also of multiples, as effectively analyzed by Tomasz Tunguz in this blog post.

Over the years we’ve seen different approaches to monetizing open source, and we have examples of them all in the past year’s exits (the following is just my simplistic breakdown):

  • Sell support and services (Heptio acquired by VMware)
  • Sell hosted/managed open source (MongoDB as-a-service)
  • Have an open source core, sell extra features around it (Elastic, Pivotal IPOs)
  • Make order out of chaos (Red Hat acquired by IBM, also Pivotal)
  • Aggregate and accelerate other people’s code (GitHub acquired by Microsoft)

Of the above, as far as I have seen, the first two are probably most common, and arguably most vulnerable. If the technology can be wrangled effectively by internal teams or by consultants, there is less of an incentive to buy support from vendors (as Ben Kepes mentioned in his analysis of Cloud Foundry’s position); and if AWS is the best company in the world at selling software services over the web, then it would have an immediate advantage over providers who primarily sell through other channels, including commercial sponsors of open source. For this reason, recent developments around open source licensing are particularly important.

MongoDB and Redis pull back, AWS reacts

Last week, AWS announced on its blog the launch of DocumentDB, a MongoDB-compatible database. As some pundits have pointed out, this is clearly a reaction to MongoDB, Inc.’s new and highly-restrictive license called the Server Side Public License (SSPL)—a move which the publicly-traded MongoDB made in order to protect its revenue position.

Earlier in 2018, Redis Labs learned a hard lesson in community relations management when it took a less dramatic step: while continuing to offer its Redis database under a permissive license, it changed the licensing on its add-on modules to the “Commons Clause”, so that service providers would need to pay for their use. While the communication could have been clearer, the action itself is similar in intent to what MongoDB did, and to what many other open source companies have attempted or plan to attempt.

Bill Gates once said that “A platform is when the economic value of everybody that uses it, exceeds the value of the company that creates it. Then it’s a platform.” By that measure, if AWS is the ultimate cloud platform of our times, then AWS will recognize that it is better because software like Redis exists, and therefore work to increase—rather than limit—Redis Labs’ overall revenue. In this conundrum, community may be the key.

Communities: the un-replicable asset

A big part of open source’s success over proprietary software has been its ability to move in an agile fashion, with strong and fast feedback loops, and devoted developer engagement. As fast as AWS moves, with its famous customer obsession and feature release pace, it is still constrained by money and financial priorities. Open source contributors don’t work for money (although communities should be supported financially, if we want to keep enjoying the independence of some key projects), and some open source projects are almost too big even for IT giants to replace with a proprietary clone. On top of that, consider that many developers will simply switch off any solution which smells of corporate interest, especially if there is an open source alternative.

This reminds me of a protracted discussion I was part of some years ago with an IT giant who considered the cloning option for a piece of software, meant for distribution. At the end of a call in which a group of execs agreed on the cost of engineering, a developer advocate posed a simple question: we now know how much building this will cost, but how much will we invest in building a community? (the IT giant chose to support and collaborate with the project).

What does 2019 hold?

While the economic tension between public cloud services and the first two or three types of open source monetization might increase in 2019, I estimate that ‘order-makers’ and ‘aggregators’ will continue to go from strength to strength. Specifically, companies that accelerate reliable production use of open source—from GitHub competitor Atlassian to open source security company Snyk—are proving that there is great value to be provided to users by focusing on the security and usability of, and collaboration around, small and large projects alike.

What might change in the immediate future is the pace and size of venture capital investment into open source companies, but this could also be the cyclical product of a very loaded 2018, and not only related to business model frictions.

In either case, a focus on building and sustaining healthy open source communities and differentiating business models seems to be more important than ever.

(Originally posted on Forbes.com)

Cloud Foundry And The PaaS You’re Already Running

Following my previous Forbes articles about the resurgence of PaaS and the adoption of Kubernetes, I ran into Abby Kearns, executive director of Cloud Foundry Foundation, who was kind enough to read them. We exchanged some ideas about PaaS, Kubernetes, and the recent wave of acquisitions in the Cloud space. [Note: as a Forbes contributor, I do not have any commercial relationship with the Foundation or its staff.]

For those of you who don’t know, Cloud Foundry encompasses multiple open source projects, the primary one being an open source cloud application platform built with container-based architecture and housed within the Cloud Foundry Foundation, which is backed by the likes of Cisco, Dell EMC, Google and others. It has commercial distributions offered by Pivotal, IBM and others (Ben Kepes has a great post on his blog about the tension between open-source CF and the distributions). Cloud Foundry runs on BOSH, a technology that was originally (and almost presciently, you could say) designed to manage large distributed systems, and as such was container-ready back in 2010.

Abby Kearns, Executive Director of the Cloud Foundry Foundation (photo: Cloud Foundry Foundation)

Cloud Foundry announced support for Kubernetes in its Container Runtime project in late 2017, alongside its original non-Kubernetes-based Application Runtime. In October of this year it doubled down with Eirini, a project that combines Kubernetes as the container scheduler with Application Runtime; and CF Containerization, which packages BOSH releases into containers for deployment into Kubernetes.

Broadly, Abby and I talked through three themes, as detailed below:

Everyone is already running a PaaS

I’ve written in my post about PaaS how the advent of ‘unstructured PaaS’ offerings such as Kubernetes has contributed to the resurgence of this category, but implied in my article was the assumption that PaaS is still a choice. Abby presented a different view: it’s not so much about the runtime as it is about the ops, and, by and large, the total stack is the same as what is offered in a proper PaaS.

An operations team will have logging, monitoring, autoscaling, security, network management and a host of other capabilities around the deployment of an application; the relevant question is how much of that they’re putting together themselves (whether these bits are homegrown, open source, or commercial software), and conversely how much is being supplied by one coherent solution (again, homegrown, open source, or commercial). Whether you’re giving the keys to Pivotal, Google, or Red Hat; internalizing engineering and operations costs but using CF and BOSH to manage your RBAC, security and other ops tasks; or putting together a mosaic of focused solutions—the end result is operationally the same as a PaaS. In the end, we agreed, digital transformation is about coming to terms with the complexity of your required IT stack, and optimizing the ownership and cost model—all of which involves tough architecture and business choices.

Cloud mega-deals: this is not the empire striking back

Redpoint venture capitalist and tech-blogger extraordinaire Tomasz Tunguz recently showed how open source acquisitions took up three of the top five slots among 2018 tech acquisitions (the SAP-Qualtrics deal has since nudged the MuleSoft deal down a bit). Since then, we’ve also had VMware buying Kubernetes services firm Heptio, founded by luminaries Joe Beda and Craig McLuckie. As a Christmas-dinner summary for your less-techie friends and family, you could say that Microsoft bought its way back into the developer limelight, IBM armed itself with (arguably) the most robust commercial Kubernetes-based platform, and VMware went for the most skilled Kubernetes services company.

Abby commented that in her view, what we are witnessing is large technology companies looking to M&A around open source technology to solve a common problem: how to quickly obtain the innovation and agility that their enterprise customers are demanding, while not being disrupted by new, cloud-native technologies. The open source angle is just a testimony to its huge importance in this cloudy world, and the Cloud Foundry Foundation expects that these recent deals will mark the start, not the end, of the Kubernetes consolidation wave.

Cloud Foundry will follow a different trajectory than OpenStack

If you’ve been around the cloud industry for several years, you might be tempted to think that the above consolidation will cause the likes of Cloud Foundry Foundation and the CNCF to diminish in importance or outright deflate, like OpenStack has over the past few years. As I mentioned in another article, in my opinion this has been (in the end) a good process for OpenStack, which is leaner and meaner (in a nice way) as a result. Not the same for Cloud Foundry Foundation, says Abby. This period may mirror a similar phase in 2014 when big vendors started buying up OpenStack companies as EMC did with Cloudscaling, Cisco with Metacloud, and many, many more—but Cloud Foundry has adapted to the Kubernetes wave in time, and its main partners are more closely aligned around its values and objectives.

Additional reasons, as I see it:

  • Cloud Foundry Foundation has been smarter about avoiding inflation in its ecosystem—both in terms of number of vendors with influence, and of actual dollars being invested through M&A activity.
  • BOSH being adaptable to pretty much all clouds-deployment models-platforms is a strategic technical asset.
  • The Foundation’s relentless focus on developer experience (rather than infrastructure) increases its options and avoids playing squarely in the public clouds’ game.

So, should we expect a Cloud Foundry serverless initiative next year? No choice but to wait and see.

(Originally posted on Forbes.com)