Containers Put The ‘Dev’ Back In DevSecOps

On the back of a record-breaking KubeCon + CloudNativeCon (with a reported 7,700 attendees last week), it is very clear that Kubernetes’ position as a cloud center of gravity will remain unassailable for the foreseeable future. For one, even outside of this sprawling tradeshow, it has taken over sessions at many other conferences, from the Open Infrastructure Summit (previously the OpenStack Summit) to the Cloud Foundry Summit. I personally believe that in the next few years, these two conferences (and others) will either see their contraction accelerate or merge outright into a mega CloudNativeCon.

Does that mean digital transformation has reached a steady state? Hardly. As the Cloud Foundry Foundation’s April report showed, 74% of companies polled define digital transformation as a perpetual, not one-time, shift. The survey also shows that organizations are already using a good mix of technologies and clouds in production, which explains the Foundation’s endorsement of Kubernetes and the launch last year of the related project Eirini.

Tunnel (Photo by Alexander Mils on Unsplash)

New models, new personas, new tension

In the end, what matters to IT teams is a thoughtful approach to building and deploying cloud-native applications. There are many reasons why containers have become so popular since DotCloud’s pivot into Docker all those years ago, and I have gone through some of them in previous posts. IBM’s Jim Comfort has called it the most dramatic shift in IT in memory, more so than cloud computing. Since that pivot, it has become a convention that whatever is in the container is the developers’ responsibility, while operators own what is outside of it.

Kubernetes and related projects challenge that paradigm even further, and embody the cloud-native vision, in that they give developers access to native APIs—and with them, much more control over how their applications are delivered and deployed. So, while software is eating the world, developers have started to eat the software delivery chain.
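As a concrete (and deliberately minimal) sketch of what that native-API access can look like, the snippet below uses the official Kubernetes Python client to define and submit a Deployment straight from a developer’s own code. The cluster, namespace, names and image tag are illustrative assumptions, not a prescribed workflow.

```python
# Minimal sketch: a developer describing and deploying their own workload via the
# Kubernetes API, using the official Python client (pip install kubernetes).
# Assumes a reachable cluster and a local kubeconfig; names and image are illustrative.
from kubernetes import client, config

config.load_kube_config()  # pick up the developer's local kubeconfig

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="my-service"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "my-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "my-service"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="my-service",
                        image="registry.example.com/my-service:1.0",  # illustrative tag
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The point is less the specific client than the shift in control: the same declarative object an operator once owned can now be created, versioned and shipped by the team that wrote the application.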

However, even as Kubernetes has evolved into the gold standard of an “un-opinionated PaaS”, in most large enterprises it is still owned and maintained by operations-minded engineers. One of the main efforts driving enterprise adoption has therefore been to build operating models that balance developer empowerment with operator governance—for some, that means DevOps. The rise of DevOps, however, is not the only shift in user personas that we have seen.

Open source and the “shift-left” risk

Security teams used to be the people who not only bought, but also used, solutions to prevent and adapt to security risks—but that was in an age of waterfall development and mostly closed-source code. The rise of open source software has been as disruptive as it has been pervasive: whether the application itself is open-sourced or not, a significant number of open source packages is leveraged in its creation. The cloud-native movement, with its strong bias towards open source, has accelerated this trend even further.

The reason this matters is that security teams are neither staffed nor equipped to control these open source inputs. Whether the security team was planning for it or not, a lot of its risk has “shifted left”, and now, with open APIs, both the breadth and the speed of that risk have risen. Snyk’s latest report, “Shifting Container Security Left”, shows that the top 10 official container images on Docker Hub each contain at least 30 vulnerabilities, and that even among the top 10 most popular free Docker-certified images, 50% have known vulnerabilities.

Even worse, the report surfaces an acute ownership problem: 68% of respondents believe developers should own container security, yet 80% of developers say they don’t test their images during development, and 50% don’t scan their images for vulnerabilities at all.
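For teams looking to close that gap, a lightweight first step is to make image scanning part of the development workflow itself—for example, as a pre-push or CI check. The sketch below shells out to a scanner CLI; the choice of Trivy and its --exit-code behaviour are assumptions, so substitute whatever scanner your team actually uses.

```python
# Hedged sketch of a CI/pre-push gate that fails the build when an image has known
# vulnerabilities. Assumes a scanner CLI is installed (Trivy shown here; its
# --exit-code behaviour is an assumption to verify against your scanner's docs).
import subprocess
import sys

IMAGE = "registry.example.com/my-service:1.0"  # illustrative image tag

# Ask the scanner to return a non-zero exit code if known vulnerabilities are found.
result = subprocess.run(["trivy", "image", "--exit-code", "1", IMAGE])

if result.returncode != 0:
    print(f"Known vulnerabilities found in {IMAGE}; failing the build.")
    sys.exit(result.returncode)

print(f"No known vulnerabilities reported for {IMAGE}.")
```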

Time for joint accountability

As a consequence, when it comes to software-lifecycle tools, security vendors are experiencing a demand shift—a separation of personas at large enterprises. Security teams are still the economic buyers and hold the budget authority; however, with the realization that developers must have the tools to own more of the security of their own apps, comes a transfer of budget decisions over to development teams, who will actually use these tools. You build it, you run it—but you also secure it.

As I mentioned in an earlier piece, this is not an easy process for a security industry that is geared toward servicing security teams. The challenge many providers face is to build tools designed for both sides of the aisle that promote joint accountability: empowering the people “on the left” while giving those “on the right” enough levels of control and observability.

In a sense, to shift-left is to let go. With more code—in a repository or in pre-built containers—coming from lesser-known origins and deployed more rapidly on more distributed systems, the only option is to continue to evolve and provide developers with the right tools and knowledge to own security.

(Originally posted on Forbes.com)

At RSA Conference, Most Security Vendors Still Not Shifting Left

A week at the sprawling cybersecurity conference known as RSA Conference always sends you home with tired feet and a full brain. With attendance as high as ever, top-quality (and more diverse, this time around) keynotes, a myriad of sponsored evening events and—crucially—a week-long San Francisco drizzle that kept attendees engaged indoors, the momentum seemed ever-present.

However, as someone coming to this industry from cloud computing, containers, open source and DevOps, I could not help but notice a wealth of “old world” pitches and a dearth of newer ones. If you are in the market for perimeter or beyond-perimeter solutions, ethical hacking or endpoint security, you can spend far more than the allocated few days to explore all the talks and vendors on offer; but if you are trying to face the immediate cloud-native future by shifting security to the left, and are looking for insights on this shift, then not so much.

Rusty lock (Photo by James Sutton on Unsplash)

A time of painful change

As Snyk CEO Guy Podjarny recently pointed out in his talk at QCon London, DevSecOps is a highly overused term that few stop to define for themselves in depth. In his approach, DevSecOps is actually an umbrella term for three areas of required transformation: technologies, methodologies and models of shared ownership.

In this reality, modern and modernizing IT organizations are facing tremendous disruption on several levels:

  • Cloud computing, microservices and containers represent a technology shift on a scale unseen in recent history, as was discussed with IBM’s Jim Comfort in my last post. This isn’t about training people on a couple of new tools—it is about re-thinking what technology enables, and how technology affects the operational and business structure.
  • From Agile to DevSecOps and beyond, methodologies can help accelerate technology adoption, but they also tend to trigger significant shifts in culture and process. This is one reason why, as I detailed in a previous post, early mass-market users of technologies such as Kubernetes require operating models, not nifty click-to-deploy SaaS tools.
  • As companies attempt to hire new talent to address new challenges, and as technologies from the last decade are being phased out, a skill/generational gap emerges between young cloud-native developers and long-in-the-tooth operators, fluent in Linux system administration and server configuration management. As Jim Comfort implied in that IBM post, re-skilling probably can’t keep up with the pace of technology change.

Security vendors are starting to get it

It’s a classic case of the innovator’s dilemma. While most traditional security vendors realize the need to adapt, they have large and stable revenue streams from legacy products, which makes the transition slower than the pace of change their customers are experiencing. That isn’t to say that the existing transition isn’t meaningful: we are well on the journey that started by securing the server and moved on to securing the VM, the cloud estate, and—more recently—the container. Put that together with the rise of external open source code that developers leverage in practically every new application (including proprietary ones), and with the understanding that an increasing portion of risk comes from inside the perimeter, and you get to the obvious conclusion: the urgent ‘cybersecurity’ battle is for securing the developer’s workflow, in real time.

Another interesting change is a decoupling of the user and buyer identity. The budget might still sit with the CISO, but increasingly, portions of those security budgets are being allocated to developer tools. This disrupts a long-existing equilibrium in the security market, and therefore it is no wonder that many vendors are re-engineering the product-services mix, as they figure out where their customers are going. Expect IT organizations that collaborate across the engineering-security divide to be more resilient in the face of exploit attempts—and expect vendors that adapt quickly and effectively to this future to have a better chance of survival.

New partnering models are key

Cash is usually king, and many of these so-called ‘cybersecurity dinosaurs’ will survive and thrive by leveraging their reserves, but it may be a painful and protracted process. To make it less so, vendors would be wise to rethink go-to-market strategies; re-align strategic partnerships; and prioritize capabilities for risk mitigation over risk adaptation—all while recognizing that developers will continue to move fast, and that savvy security officers need to enable that to happen within the right guardrails.

Both security teams within IT organizations and incumbent security vendors would be wise to follow the guidance of members of the PagerDuty security team, interviewed on The Secure Developer podcast, who said that their job, in the end, is to make it easy for developers to do the right thing.

(Originally posted on Forbes.com)

IBM And The Third Iteration of Multi-Cloud

Four months after the announcement of the Red Hat acquisition, and some time before it is expected to officially close, I caught up with Dr. Jim Comfort, GM of Multicloud Offerings and Hybrid Cloud at IBM, to talk about the company’s recent multi-cloud-related news, and to get a sense of its updated view of multi-cloud. Jim joined IBM Research in 1988 and since then has held a variety of development and product management roles and executive positions. He has been closely involved with IBM’s Cloud Strategy, including acquisitions such as Softlayer, Aspera and others. [Note: as a Forbes contributor, I do not have any commercial relationship with IBM or its staff.]

Dr. Jim Comfort (Photo: IBM)

First, what is significant about the new announcements?

At its THINK 2019 conference, IBM announced Watson Anywhere, the latest evolution of its fabled AI platform (originally restricted to the IBM Cloud), which will now run across enterprise data centers and major public clouds including AWS, Azure, and Google Cloud Platform. My reading is that this is a potential sign of things to come with Red Hat: IBM seems to be coming to terms with its relative weakness as an IaaS player, and its tremendous strengths as a platform and managed service provider. Building the multi-cloud muscle with Watson now will serve IBM well when it comes to Red Hat OpenShift later on.

IBM also made a series of announcements with regards to its hybrid cloud offerings, including a new Cloud Integration Platform connecting applications from different vendors and across different substrates into a single development environment; and new end-to-end hybrid cloud services, bringing IBM into the managed-multi-cloud space. Again, to me this seems like infrastructure-building for the day IBM turns on the multi-cloud ‘power button’, using its existing and new technology and formidable channel assets.

How the Red Hat deal might change IBM

Jim agrees we haven’t seen an IT wave move as broadly and as quickly as containers have in decades—not even cloud, which took the best part of a decade to take off. This isn’t just due to Solomon Hykes’s eye-opening original Docker demo in 2013, or to the CNCF’s stellar community- and ecosystem-building efforts—the reasons are mainly technological and strategic.

Amongst their many advantages, containers truly offer the potential to decouple the underlying technology from the application layer. In a key insight, Jim suggested that while multi-cloud used to mean private+public, and then evolved into a term for a ‘pick & choose’ strategy (for example, Google Compute Engine for compute, with Amazon RDS for a database), containers and related orchestration systems are leading us to a more powerful definition. Multi-cloud now is about ‘build once, ship many’, on completely abstracted infrastructure. As I wrote in a previous post, ‘soft-PaaS’ is on the rise for many related reasons, but the added insight here is about a fuller realization of the multi-cloud vision. In this new stage of the market, the challenge shifts to areas such as managing data sources and making multi-cloud operations easier. In this sense, OpenShift as PaaS, together with IBM’s services capabilities, become potentially powerful strategic assets.

How IBM sees its role in this cloud-native world

Coming from the services side, Jim’s view was not that of a hardware model for a software-defined world; he argued that IBM has always helped companies deal with massive change, and that this is still the main mission. The market, if you believe press releases, is constantly shifting like a Newton’s cradle from public cloud to private cloud and back again. Personally, every time I hear of a large company going ‘all-in’ on either side, I tend to chuckle: as I covered in a previous post, oversimplifying architecture and infrastructure has ended many an IT career. Remember that leading UK news outlet that declared it was going ‘all-in with AWS’ because it couldn’t realise its ambitions on DIY OpenStack? I was in the room when the former CIO refused vendor help in implementing this complex private cloud platform and—surprise—the experiment failed.

Jim suggested that in his experience, vendors can help customers focus on the company’s required business objectives, using five areas for analysis and planning, as IBM already does:

  • Infrastructure architecture
  • Application architecture
  • DevOps and automation
  • Evolved operational models
  • Evolved security models

Yes, the new tech is shiny, but even more important for customers are things like managing legacy-IT resistance, re-skilling an older workforce, and managing generational gaps (cloud-native devs can be much younger than their Ops counterparts). In a surprising statistic, Jim claimed that 50% of IBMers have been with the company for less than five years, and that the company has specific millennial programs.

IBM’s open source position gets a boost

A major advantage of the acquisition that has not received enough attention, in my opinion, is that in acquiring Red Hat, IBM has shot into the top three organizations as measured by open source software contributions. In a star-studded panel during the THINK conference, IBM seems to have embraced this position gladly. Analyst firm RedMonk’s co-founder Steve O’Grady correctly warned that, “the future success of open source is neither guaranteed nor inevitable.” Similar to points I covered in a previous post, O’Grady outlined profiteering, licensing and business models as systemic challenges that must be addressed.

However, even if open source continues to thrive, it is a predominantly bottom-up IT phenomenon, which may be at odds with IBM’s more traditional CIO-downwards approach. To me this is one of the most interesting areas to watch: as Red Hat gets integrated into the family, will IBM be successful in changing its own culture, taking full advantage of its history (for example, its talent-hothouse lab, the Linux Technology Center), its new and existing technologies (Kubernetes-based offerings, free Linux OSs) and its newfound open source dominance?

(Originally posted on Forbes.com)

These Are Not The Containers You’re Looking For

In a previous post, I argued that in the case of Kubernetes, the hype may be ahead of actual adoption. The responses to that article have ranged from basic questions about containers, to very firm views that lots of people are already running multi-cluster, multi-cloud containerized infrastructure. Regarding the latter view, from experience I know that smaller, more agile companies tend to underestimate the complexity behind large enterprises’ slower reactions to change (Ian Miell does a superb job of breaking down the slowness of enterprise in this blog post). So, in this article, I’ll try to do three things:

  • Attempt to tighten up some basic definitions for the non-techie folks who may still be wondering what the fuss is about
  • Mention the number one common attribute I’ve seen amongst companies succeeding with containers
  • Explain why moving to containers is just the tip of the iceberg of your IT concerns
"See it from the window"

“See it from the window” PHOTO BY ROBERT ALVES DE JESUS ON UNSPLASH

Is my monolithic app running in a container on VMware cloud-native?

Well, sorry to say, but it depends. Let’s remember that a container is just a software package that can be run on infrastructure, and that there are (at least) two types of containers.

System containers have been around longer, since the days of LXC (2008) or, arguably, the days of Solaris Zones before that. These are, simplistically speaking, small and efficient units that behave like virtual machines. They can support multiple executable applications, and offer isolation as well as other features that system administrators will feel safe with, like easy manageability. This is ideal for traditional apps that you want to containerize without completely revolutionizing your IT practices, and the benefit is simple: increasing application density per server by over 10x vs. a virtual machine.

Application containers have a single application to execute. This is the world of the Docker image format (not the same as Docker Engine, Docker Swarm or Docker Inc.) and OCI, and what most people refer to when they mention the word ‘container’. The benefits here from an IT perspective are that application containers running a microservices app bring the full promise of cloud-native to life: high delivery pace, almost infinite scalability, and improved resilience. Those dramatic outcomes demand a significant culture and technology shift, as I will mention in detail later.
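To make the distinction tangible, here is a minimal sketch of the application-container model—one image, one process—driven from the Docker SDK for Python. It assumes a local Docker daemon, and the image and command are purely illustrative.

```python
# Minimal sketch of an application container: one image, one process, run to
# completion. Assumes a local Docker daemon and the Docker SDK for Python
# (pip install docker); the image and command are illustrative.
import docker

docker_client = docker.from_env()

# Run a single process inside a container and capture its output.
output = docker_client.containers.run(
    "alpine:3",
    ["echo", "one process per container"],
    remove=True,  # clean up the container once the process exits
)
print(output.decode().strip())
```

A system container, by contrast, would keep running many processes and look much more like a small virtual machine.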

Containers are just packages and will, broadly, do as they’re told; microservices is an architectural approach to writing software applications; and cloud-native is a method of delivering those applications. To answer the question above: throwing away existing resources and ignoring business outcomes in pursuit of an ideal is probably bad practice, so if you have a VMware environment that is there to stay, then that is probably cloud-native enough for now (VMware’s acquisition of Heptio is interesting in that sense for the future use case). The bottom line is that getting hung up on the easiest item on that list to grasp (containers) is a common error.

Thor’s hammer does not exist in IT

I recently met with the head of Cloud for a large UK financial services company, who told me that the previous CIO had to leave his post after failing to deliver on a ‘straight to serverless’ strategy, i.e., leap-frogging the cloud and container revolutions in order to operate workloads on the company’s well-used private datacenter, with serverless technology. That the CIO had to leave isn’t a major surprise: in any line of work, we need to use the right tools for the right jobs, especially when those jobs are complex. In cloud, that means that in most cases, we will likely be using a combination of bare metal, virtual machines, containers and serverless, on any combination of private server farm, private cloud or public cloud.

Without a doubt, the one thing I have seen as a first step in a successful IT transition journey is this: not trying to over-simplify a dramatic IT (r)evolution, but instead understanding it holistically and judging it vis-à-vis business outcomes and objectives. It’s good to strive, but not all companies have the resources to be cloud-native purists, and there are clear benefits even in smaller steps, like allowing more time for analysis of the technology’s impact, or enabling better risk management. (This post from container company Cloud 66 does well to describe the short-term efficiency, and long-term insight, gains of moving a monolithic app to a container.)

Known-unknowns and unknown-unknowns

Ultimately, though, we wouldn’t be so excited about the container revolution if it was just about squeezing in more monolithic applications. A microservices app, running in containers, orchestrated on multiple substrates, and all that according to cloud-native principles—that is something worth sweating for. An application that scales quickly and reliably with less operational resource, that adapts fast to customer and competitive dynamics, and that self-heals, is where a lot of us are aiming.

Again, containers are just one part of this. Consider technological challenges: What about orchestration? And network? And stateful services? And cloud-native-ready pipeline tools?

Arguably even more important, consider cultural challenges: What needs to change with our development practices? How do we find and retain the right talent? How do we re-skill older talent and bridge between generations? How will the risk balance change?

An example: open source is already part of your strategy

It is a well-documented fact that the rise of cloud and that of open source have been connected, which also brings some interesting tensions, as I explored in my previous article. In containers, this synergy seems stronger than ever. The juggernaut behind Kubernetes and many related open source projects, the Cloud Native Computing Foundation (CNCF), is part of the Linux Foundation. The CNCF charter is clear about the intentions of the foundation: it seeks to foster and sustain an ecosystem of open source, vendor-neutral projects. Consequently, since the CNCF’s inception in 2014, it has become increasingly feasible to manage a complex cloud-native stack with a large mix of these open source projects (some interesting data in the foundation’s annual report). The more you get into container-native methodologies, the more open source you will use.

The other side of this picture is that open source packages make up significant chunks of free and proprietary applications alike—while your whole app may be proprietary, the bit your team actually wrote may be very small within it. As the State of Open Source Security Report shows, open source usage is tightly coupled with digital transformation, and as such is becoming increasingly business-critical; however, only 17% of maintainers rank their security expertise as high, which means that a lot of those packages are an operational risk.
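One way to make that point tangible for your own team is simply to count the third-party packages your application environment pulls in. The sketch below does this for a Python environment using only the standard library; the same exercise applies to any ecosystem’s dependency manifest.

```python
# Small sketch: enumerate the open source distributions installed in the current
# Python environment, to see how much of "your" application is other people's code.
# Uses only the standard library (importlib.metadata, Python 3.8+).
from importlib import metadata

packages = sorted(
    (dist.metadata["Name"] or "unknown", dist.version)
    for dist in metadata.distributions()
)

print(f"{len(packages)} third-party distributions in this environment:")
for name, version in packages:
    print(f"  {name}=={version}")
```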

Yet another aspect is community: using more open source makes the organization a stakeholder, and as such it should liaise with the relevant community to see how it can contribute, and also how it can get exposure to the roadmap and to security alerts as quickly as possible.

No containing this

So to summarize the above example, a ‘simple’ decision about joining the container wave will inherently and significantly increase the organization’s benefit from, and exposure to, open source software. This software may or may not be supported by large sponsors, will probably to a meaningful extent be maintained by volunteers, and will probably have a vibrant community behind it—all of whom need to be engaged by users who rely on these projects.

In other words, not simple. Containers are a critical part of a digital transformation, but just one part. The whole of this transformation—parts of which will appear in your software delivery systems without you expecting them—can enable great things for your applications, if approached with the right mix of openness, maturity and responsibility.

(Originally posted on Forbes.com)

Open Source Software At A Crossroads

“It was the best of times,

it was the worst of times,

it was the age of wisdom,

it was the age of foolishness,

it was the epoch of belief,

it was the epoch of incredulity”

—Charles Dickens

Marc Andreessen famously said that software is eating the world; others have more recently added that, at least in B2B, open source is eating software. Indeed, when we look at 2018, it has been a landmark year for users, enterprises and investors alike—but has it also included the seeds for a potential slowing down of open source investment, and perhaps even of usage?

In the cloud world, where the operational friction of software is reduced or removed, and where economies of scale are extremely effective, public cloud providers present a challenge to many open source projects. Over the past several years, I have had the good fortune of being involved in key deals to monetize some of the largest open source software projects—in a way that keeps them free and usable for end users, and gets cloud providers to support the community. However, given what we’ve seen in 2018 and just last week, the economic model for funding open source may be at risk. It is down to more innovative aggregation models and powerful open source communities to ensure that open source continues to gain ground.

Open sign (Photo by Finn Hackshaw on Unsplash)

Continued adoption, M&A explosion

It’s no secret that open source use is accelerating, and is driving some of the most critical pieces of modern IT. In addition, the Linux Foundation recently reported that in the last five years, membership has gone up by 432 percent.

On top of that, 2018 has seen roughly $57 billion of value creation in open source M&A and IPOs. The number jumps by $7.5 billion if you count GitHub’s acquisition by Microsoft, despite the fact that GitHub is not a developer or curator of open source software as such; rather, it accelerates use of open source (with “pervasive developer empathy”, as I heard Accel’s Adrian Colyer mention). This is a story of absolute sums but also of multiples, as effectively analyzed by Tomasz Tunguz in this blog post.

Over the years we’ve seen different approaches to monetizing open source, and we have examples of them all in the past year’s exits (the following is just my simplistic breakdown):

  • Sell support and services (Heptio acquired by VMware)
  • Sell hosted/managed open source (MongoDB as-a-service)
  • Have an open source core, sell extra features around it (Elastic, Pivotal IPOs)
  • Make order out of chaos (Red Hat acquired by IBM; also Pivotal)
  • Aggregate and accelerate other people’s code (GitHub acquired by Microsoft)

Of the above, as far as I have seen, the first two are probably most common, and arguably most vulnerable. If the technology can be wrangled effectively by internal teams or by consultants, there is less of an incentive to buy support from vendors (as Ben Kepes mentioned in his analysis of Cloud Foundry’s position); and if AWS is the best company in the world at selling software services over the web, then it would have an immediate advantage over providers who primarily sell through other channels, including commercial sponsors of open source. For this reason, recent developments around open source licensing are particularly important.

MongoDB and Redis pull back, AWS reacts

Last week, AWS announced on its blog the launch of DocumentDB, a MongoDB-compatible database. As some pundits have pointed out, this is clearly a reaction to MongoDB, Inc.’s new and highly-restrictive license called the Server Side Public License (SSPL)—a move which the publicly-traded MongoDB made in order to protect its revenue position.

Earlier last year, Redis Labs learned a hard lesson in community relations management when it took a less dramatic step: while offering its Redis database under a permissive license, it changed the licensing on its add-on modules to the “Commons Clause”, so service providers would need to pay for their use. While communication could have been clearer, the action itself is similar in intent to what MongoDB did, and to what many other open source companies have attempted or plan to attempt to do.

Bill Gates once said that “A platform is when the economic value of everybody that uses it, exceeds the value of the company that creates it. Then it’s a platform.” By that measure, if AWS is the ultimate cloud platform of our times, then AWS will recognize that it is better because software like Redis exists, and therefore work to increase—rather than limit—Redis Labs’ overall revenue. In this conundrum, community may be the key.

Communities: the un-replicable asset

A big part of open source’s success over proprietary software has been its ability to move in an agile fashion, with strong and fast feedback loops, and devoted developer engagement. As fast as AWS moves, with its famous customer obsession and feature release pace, it is still constrained by money and financial priorities. Open source contributors don’t work for money (although communities should be supported financially, if we want to keep enjoying the independence of some key projects), and some open source projects are almost too big even for IT giants to replace with a proprietary clone. On top of that, consider that many developers will simply switch off any solution which smells of corporate interest, especially if there is an open source alternative.

This reminds me of a protracted discussion I was part of some years ago with an IT giant who considered the cloning option for a piece of software, meant for distribution. At the end of a call in which a group of execs agreed on the cost of engineering, a developer advocate posed a simple question: we now know how much building this will cost, but how much will we invest in building a community? (the IT giant chose to support and collaborate with the project).

What does 2019 hold?

While the economic tension between public cloud services and the first two or three types of open source monetization might increase in 2019, I estimate that ‘order-makers’ and ‘aggregators’ will continue to go from strength to strength. Specifically, companies that accelerate reliable production use of open source—from GitHub competitor Atlassian to open source security company Snyk—are proving that there is great value to be provided to users from focusing on the security and usability of, and collaboration around, small and large projects alike.

What might change in the immediate future is the pace and size of venture capital investment into open source companies, but this could also be the cyclical product of a very loaded 2018, and not only related to business model frictions.

In either case, a focus on building and sustaining healthy open source communities and differentiating business models seems to be more important than ever.

(Originally posted on Forbes.com)

Cloud Foundry And The PaaS You’re Already Running

Following my previous Forbes articles about the resurgence of PaaS and the adoption of Kubernetes, I ran into Abby Kearns, executive director of Cloud Foundry Foundation, who was kind enough to read them. We exchanged some ideas about PaaS, Kubernetes, and the recent wave of acquisitions in the Cloud space. [Note: as a Forbes contributor, I do not have any commercial relationship with the Foundation or its staff.]

For those of you who don’t know, Cloud Foundry encompasses multiple open source projects, the primary one being an open source cloud application platform built with container-based architecture and housed within the Cloud Foundry Foundation, which is backed by the likes of Cisco, Dell EMC, Google and others. It has commercial distributions which are offered by Pivotal, IBM and others (Ben Kepes has a great post on the tension between open-source CF and the distributions, on his blog). Cloud Foundry runs on BOSH, a technology that was originally (and almost presciently, you could say) designed to manage large distributed systems, and as such was container-ready back in 2010.

Abby Kearns, Executive Director of the Cloud Foundry Foundation (Photo: Cloud Foundry Foundation)

Cloud Foundry announced support for Kubernetes in its Container Runtime project in late 2017, alongside its original non-Kubernetes-based Application Runtime. In October of this year it doubled down with Eirini, a project that combines Kubernetes as the container scheduler with Application Runtime; and CF Containerization, which packages BOSH releases into containers for deployment into Kubernetes.

Broadly, Abby and I talked through three themes, as detailed below:

Everyone is already running a PaaS

I’ve written in my post about PaaS how the advent of ‘unstructured-PaaS’ offerings such as Kubernetes has contributed to the resurgence of this category, but implied in my article was the assumption that PaaS is still a choice. Abby presented a different view: it’s not so much about the runtime as it is about the ops, and, by and large, the total stack is the same as what is offered in a proper PaaS.

An operations team will have logging, monitoring, autoscaling, security, network management and a host of other capabilities around the deployment of an application; the relevant question is how much of that they’re putting together themselves (whether these bits are homegrown, open source, or commercial software), and conversely how much is being supplied from one coherent solution (again, homegrown, open source, or commercial). Whether you’re giving the keys to Pivotal, Google, or Red Hat; internalizing engineering and operations costs but using CF and BOSH to manage your RBAC, security and other ops tasks; or putting together a mosaic of focused solutions—the end result is operationally the same as a PaaS. In the end, we agreed, digital transformation is about coming to terms with the complexity of your required IT stack, and optimizing the ownership and cost model—all of which are tough architecture and business choices.

Cloud mega-deals: this is not the empire striking back

Redpoint venture capitalist and tech-blogger extraordinaire Tomasz Tunguz recently showed how open source acquisitions took up three of the top five slots for 2018 tech acquisitions (the SAP-Qualtrics deal has since nudged the MuleSoft deal down a bit). Since then, we’ve also had VMware buying Kubernetes services firm Heptio, founded by luminaries Joe Beda and Craig McLuckie. As a Christmas-dinner summary for your less-techie friends and family, you could say that Microsoft bought its way back into the developer limelight, IBM armed itself with (arguably) the most robust commercial Kubernetes-based platform, and VMware went for the most-skilled Kubernetes services company.

Abby commented that in her view, what we are witnessing is large technology companies looking to M&A around open source technology to solve a common problem: how to quickly obtain the innovation and agility that their enterprise customers are demanding, while not being disrupted by new, cloud-native technologies. The open source angle is just a testimony to its huge importance in this cloudy world, and the Cloud Foundry Foundation expects that these recent deals will mark the start, not the end, of the Kubernetes consolidation wave.

Cloud Foundry will follow a different trajectory than OpenStack

If you’ve been around the cloud industry for several years, you might be tempted to think that the above consolidation will cause the likes of Cloud Foundry Foundation and the CNCF to diminish in importance or outright deflate, like OpenStack has over the past few years. As I mentioned in another article, in my opinion this has been (in the end) a good process for OpenStack, which is leaner and meaner (in a nice way) as a result. Not the same for Cloud Foundry Foundation, says Abby. This period may mirror a similar phase in 2014 when big vendors started buying up OpenStack companies as EMC did with Cloudscaling, Cisco with Metacloud, and many, many more—but Cloud Foundry has adapted to the Kubernetes wave in time, and its main partners are more closely aligned around its values and objectives.

Additional reasons, as I see it:

  • Cloud Foundry Foundation has been smarter about avoiding inflation in its ecosystem—both in terms of number of vendors with influence, and of actual dollars being invested through M&A activity.
  • BOSH being adaptable to pretty much all clouds-deployment models-platforms is a strategic technical asset.
  • The Foundation’s relentless focus on developer experience (rather than infrastructure) increases its options and avoids playing squarely in the public clouds’ game.

So, should we expect a Cloud Foundry serverless initiative next year? No choice but to wait and see.

(Originally posted on Forbes.com)

Is Cloud Good for Humans? The Ethical Angle

I once subscribed to the YouTube channel of a guitar instructor who used a phrase that stuck with me: “practice doesn’t make perfect, practice makes permanent”. Doing something wrong more quickly, efficiently and consistently doesn’t necessarily improve the quality of the action; sometimes, it could even make it worse, as bad practice becomes entrenched habit.

Technology is a facilitating tool, not evil or good in and of itself—it’s about taking whatever the intent (or the error) was into the real world more efficiently and effectively, for better or worse. Consider this effect in areas such as Cloud Computing (and related technologies): whether it’s a bug or an exploitable feature or vulnerability, I argue that even without malice the ethical stakes are uniquely high.

Demonstration (Photo by Jerry Kiesewetter on Unsplash)

While ethics are as old as civilization itself, in the technology industry, I would argue that personal ethics are still very much an emerging topic. Consider a recent Stack Overflow survey in which 80% of developers said they wanted to know what their code is used for—but 40% would not rule out writing unethical code, and 80% did not feel ultimately responsible for their unethical code. (You can explore the psychological drivers for this in this great talk by Andrea Dobson.)

I am not referring to the obvious and very visible meta-discussions about things such as data privacy, anti-trust, the morality of machines or Yuval Noah Harari’s theory of Digital Dictatorships. I’ll even put corporate malice aside for now, so we can explore how Cloud can take our individual bad habits and errors to a global scale very, very quickly.

In the red corner: how can Cloud make things worse?

Scale

When your app is live in multiple cloud regions with five-9s availability, anything bad that is baked into it will scale accordingly. The glaringly obvious example of this from the past two years: political radicalisation and deliberate disinformation campaigns that took advantage of social media platforms to influence democratic elections and referendums. For a debrief, please contact the offices of Messrs. Zuckerberg and Dorsey.

Speed

Speaking of Facebook, remember “Move Fast and Break Things”? Speed (or ultimately, agility) is arguably the greatest benefit of using Cloud, as well as being a key tenet of agile methodologies. But that also means that mistakes in code can get into users’ hands within minutes. Indeed, the “Break Things” part of that sentence assumes you can “Move Fast” again to issue a fix, but even if you are as agile as Facebook, some damage is already done, and probably at scale.

If the system moves too fast, perhaps a useful approach for each of us to overcome our personal speed bias is the one developed by Nobel laureates Daniel Kahneman and Amos Tversky, and popularized in Kahneman’s book, “Thinking, Fast and Slow”. By engaging ‘System 2’, the more analytical side of our brains, we can focus on how we make decisions at least as much as we do on what gets decided.

Distribution

Microservices architecture is a great way to empower smaller teams to release independently, and to drive software development in a much more dynamic way—resulting in more resilient apps. Consider, however, that if each small team only owns a small part of the picture, it might not feel responsible for any damage done at the end of the line. Philosopher and political theorist Hannah Arendt famously explored how breaking down evil into ‘digestible’ bits detaches contributors from feelings of blame, but even when we talk about honest mistakes, that principle can easily be applied to software development when we lose sight of our place in the bigger picture.

Perhaps a mitigation here is to always know how your code fits into the big picture of the app, and to stay close to your users, even if you sit far removed from them, on the left of the delivery chain.

Automation

In the long term, robots may or may not take some or all of our jobs. This might create a motivation for people to ‘stretch’ their morals in a vain effort to keep their jobs. In the short and medium term, the focus in Cloud and DevOps seems to be on abstraction, task automation and the creation of self-service experiences for developers. While this drives the industry forward professionally, a message of “you just focus on your code” implicitly encourages the removal of overall responsibility from each small part of the chain—as we’ve seen above, this can be a bad thing. Again, being conscious that you are delivering part of a whole, and caring about the outcome, can help.

In the blue corner: trends that can save us

Free and open source software

The Free Software Movement and the world of Open Source, which have revolutionized technology and technology-driven experiences, have always prioritized values such as transparency, accountability and responsibility. I have witnessed first hand how leaders of some of the largest and most influential projects and foundations act with these values in mind, knowing that ultimately, it is their community that is the source of whatever power they have. As open source continues to take over large swaths of the technology landscape, these values could help us drive personal ethics on the level of the single developer or operator.

DevOps and digital transformation

If one side of the DevOps movement is about automation through tooling, then the other (and arguably the more important) side is about cultural change towards more collaboration, more observability and fewer silos. Instead of developers “throwing code over the wall” to a team they hardly ever talk to (except when something goes wrong), digital transformation (especially with regards to containers) strives to bring people together to a shared operational view of the world, and to ongoing, collaborative communication. If successful, this trend will do well to mitigate the risk of individuals shirking ethical responsibility by taking the maxim “focus on your code” too literally.

A related trend is “shift-left”, which caters to the rising power of developers by putting more responsibility and capabilities at the coding stage. Whether it’s tools to fix open source vulnerabilities such as Snyk’s, or concepts such as GitOps—if we can drive ethics at the developer level in the same manner, this could be good news when code gets deployed in production.

Rising social awareness

Recent political activity in several western countries suggests a new wave of support amongst the younger generations for socially-minded and more inclusive policies. In the UK, for example, a July poll found that a majority of Conservative Party voters support higher taxes for better public services. Cloud computing and open source have similarly impacted corporate culture (Microsoft under Satya Nadella is a good example), and of immediate concern to C-suite execs in certain companies is quasi-unionized activity such as the Google walkout. As the battle for technical talent intensifies, we should expect this grassroots corporate social consciousness to grow. I’d wager that many employers will follow Google’s lead in adapting to their audience’s new expectations—and a key part of that will look at how personal ethics interact with technology at work.

Climate crisis

It is likely that the biggest issue of our lifetimes is climate change, which has the potential to reduce or eradicate many species, including our own. As is the human condition, crisis is the best motivator for action. While some political leaders muddle about, backtrack on policy or pander to an uninformed political base or corporate interests, it sometimes comes down to individuals to drive actions in both the business and political arenas. I spoke to Anne Currie of consulting firm Container Solutions, who has written numerous posts and delivered numerous talks with regards to ethics; she had a good example of a low-hanging fruit specifically for Cloud users.

“As companies move into Cloud we have a choice,” said Currie when we talked. “Do we use the sustainable options available from Google, Azure or a handful of AWS regions? Or do we use the far dirtier defaults like the US East coast? Thoughtless hosting is no longer ethically acceptable”.

Start with small steps

As a famous saying goes, “history is filled with examples of soldiers who carried out orders that were never given”. We can all do something to make sure Cloud is good for humans. These include:

  • Bring our personal values to work.
  • Take ownership of the whole lifecycle of our work.
  • Express an active interest in, and form an opinion about, the ecosystem in which our code integrates.
  • Use our own initiative and conviction to effect positive change in the team, the company, the community.

(Originally posted on Forbes.com)

OpenStack Gets Its Swagger Back

In a previous article I briefly explored the time-gap between marketing hype around a new technology (in this case Kubernetes) and its actual adoption at scale. Attending the OpenStack Summit in Berlin last week was a great opportunity to time-travel forward in that same type of cycle. OpenStack was all the rage just four years ago but then, as a reaction, became the butt of many a cloud joke. I set out to discover where OpenStack is in late 2018, and what newer technologies and foundations can learn from this story.

The “Four Opens” (Photo: Udi Nachmany)

Boom to bust to…?

OpenStack famously came out of a joint project set up by NASA and Rackspace in 2010, to offer an open-source Cloud platform—focused on compute, network and storage—and counter the early rise of lock-in-focused AWS and Azure. The ecosystem reached peak hype around late 2014, marked by the number of startups and the size of their rounds (and burn rates), the size of the semi-annual summits, the number of vendors coming into the foundation—and their bragging about the number of open source contributions to the project. (Contributions to open source are of course a very good thing when done with the intent of collaboration and problem-solving, but it’s a quality, not quantity, game; this short thread from Jessie Frazelle is a great summary from a developer’s point of view.)

By 2016, when Ubuntu founder Mark Shuttleworth talked about the collapse of “The Big Tent”, OpenStack was deep in the trough of disillusionment. Soon enough, startup funding ran dry (Oracle and other buyers made some key post-bubble acqui-hires as a result), and major backers like HPE, Cisco and IBM started to cut their losses. This was so painful in the Cloud industry that at times it seems that foundations like the CNCF manage themselves according to a secret “don’t do what OpenStack did” rulebook.

Leaner, but not meaner

What OpenStack has done extremely well from the beginning is engage its community of engineers and operators. This tight-knit and committed group, which has gone through the flames of technical issues and human skepticism, seems to have been the key to the project’s current state—more modest, mature, pragmatic and focused on uptime.

Consider how OpenStack has managed to generate activity around new projects that address challenges presented by technologies such as Containers, ‘unstructured’ PaaS and Serverless. At the summit, foundation speakers boasted the following: the Kata Containers project allows hardware-level security isolation that can work well with Kubernetes and FaaS; Airship is a collection of tools that automate cloud provisioning; StarlingX leverages the Titanium R5 platform to address distributed edge applications; and Zuul is a CI/CD platform fit for multiple environments. All these tools are, in some respect, designed for infra operations users, and all bridge from OpenStack, which is a sign of pragmatic confidence in the ecosystem’s vitality as well as in its place within wider contexts.

The atmosphere at the show itself seemed more down-to-earth than before too, and as friendly as ever. Exhibitors were mostly managed services providers looking to help enterprises succeed with OSS (no magic-cloud-in-a-box pitch was noticed), swag hunting was a niche activity (i.e., fewer, higher-quality conversations), keynotes focused on advanced production case studies, and at the long list of talks about Cloud, Containers and FaaS, lead-generating badge scanners were nowhere to be seen. Kind of what you’d expect from a proper tech community summit.

It’s not all roses, of course. The opening keynote mentioned 75 public clouds running on OpenStack, but in truth none of those come close to threatening the dominance of the big-three (the biggest OpenStack-based public cloud is probably Rackspace, which is increasingly focused on managed services). Initiatives like the Passport Program are good for making the niche work better together, but they probably won’t move AWS’s needle.

No more Stacks

That said, in a world where everyone wants to talk about functions, containers and the edge, the OpenStack project has done a remarkable job of moving forward and adapting. In my opinion, the two most meaningful moments came when keynotes touched upon the project’s plans in China, and the summit’s name change.

While the US and European markets for OpenStack continue to grow slowly and steadily, much faster growth is attributed to the Chinese market, which has both a massive infrastructure need, and a tendency to be wary of the leading Western vendors. It will be interesting to see how sustainable that growth is for OpenStack, especially if the government publicly endorses the project based on its popularity, openness and local case studies such as Tencent. Luckily, OpenStack’s first summit in China, planned for 2019, will give us an opportunity to explore that question up close.

It was also announced that the summit will change its name, to the Open Infrastructure Summit. While this recognizes the broad relevance and experience of this ecosystem, it will no doubt bring it into closer orbit with events organized by the likes of Linux Foundation and the CNCF. Whether harmony and symbiosis will ensue, or friction and confusion, remains to be seen. What’s very clear from this summit is that OpenStack has become a bit boring, and that is great news for anyone looking to deploy reliable private cloud infrastructure.

(Originally posted on Forbes.com)

PaaS-y Days Are Here Again

At the turn of the previous decade, you would be considered perfectly sound to place a bet on Platform-as-a-Service (PaaS) as the dominant cloud service model (over IaaS). With offerings such as Google’s App Engine, Microsoft’s original Azure, newcomers Heroku and EngineYard, and OpenShift from Red Hat, to name just a few, the stage seemed set for this model of opinionated, abstracted operations to take immediate hold.

But within a few years, many a PaaS-y startup started to run into trouble when raising money. “PaaS” became a dirty word (one founder I worked closely with told me of a lost weekend he spent redoing his company’s collateral and website to remove all mention of the word “platform”). So what went wrong, and how is it changing now?

The perfect storm of 2008

The Great Recession which started in 2008 created some key conditions for the renewed rise of startups. These have been well documented for years: chiefly, negative (real) cost of capital made seed capital more accessible; and large numbers of young software engineers, many with open source and “post-Web 2.0” experience, were left without jobs, looking for something to do.

Amazon stumbled upon this market to complete the golden triangle of capital-talent-technology. At a time when leading (and emerging) PaaS vendors were banking on abstracting IT for enterprise developers, AWS found that startup developers—many of them with shallow pockets and deep experiences—tended to have their own development environments and just wanted access to someone else’s machines, not someone else’s opinions. Amazon Web Services’ EC2 (Elastic Compute Cloud) service gave them just that, as did upstarts such as DigitalOcean and open projects such as OpenStack.

As many of these developer teams evolved into the first cloud-native companies, there emerged a common need for simplification and abstraction, to support rapid scaleup. However, even that didn’t push the market over to PaaS. Many have criticized the inherent pricing model of many PaaS tools as unfit for scale, but I would like to offer an additional, technical argument: the first cloud-native companies had access to an exploding universe of open source code and great technical talent who, put together, could help in building and maintaining complex IT ‘stacks’—in other words, the first hyper-scale cloud companies chose to own system complexity rather than take someone else’s rigid opinion.

Since these early days of AWS, the history of cloud computing can be summarized as follows: who will come in a distant second?

Kubernetes: Evolution Of An IT Revolution

(Note: this post assumes a basic level of familiarity with technologies such as containers and Kubernetes. For a primer, watch this lovely video.)

When asked about the impact of revolutionary developments in France, Chinese Communist leader Zhou Enlai famously replied, “It is too soon to say.” (Though it is still disputed if he was referring to 1789 or 1968!) Lucky for him, he didn’t have to deal with things such as the IT hype cycle.

The IT industry has never been short on marketing hype, and Kubernetes, the emerging standard for container orchestration, is no exception. For sure, Kubernetes’ technical pedigree is distinguished, having been born out of the technology that powers Google itself. Furthermore, since its birth in 2014 as an open source project hosted by the Cloud Native Computing Foundation (CNCF), the project has seen impressive growth driven by a rich ecosystem—and that trend intensifies, as can be seen in the CNCF’s latest survey.

Bird’s-eye view of ships

As impressive as growth can be, however, it isn’t the same thing as ubiquity (this is where marketing hype is effective in blurring reality). Judging from metrics such as tech conferences covering Kubernetes, the number of partner logos on the CNCF’s landscape, the volume of press releases—and even more tangible metrics such as commits to related projects on GitHub—one might be forgiven for believing that ubiquity has already been achieved.

It might be useful to remember, then, that according to a study cited on Forbes.com earlier this year, 12 years after the launch of AWS, only 31% of workloads are currently running exclusively on public clouds. In light of this data, would it be wise to assume that the cloud-native revolution has covered more ground in less than a third of that time? Likely not. More specifically: What can the rapid growth of Kubernetes tell us about the evolution of its usage model in enterprise? With some personal insight into this space, I can attempt to guess.

An alternative timeline

I have been fortunate enough to have been there (in the eye of the cloud-computing storm) when Kubernetes was released initially and as a stable (version 1.0) project, in June of 2014 and July of 2015, respectively. Having worked with cutting-edge engineering teams in this sector before and since then, I have also seen Kubernetes spread in usage and grow in maturity. General milestones in the project’s history are well-documented, but I see some actual adoption trends moving on a timeline as follows (vastly simplified/generalized for the sake of brevity):

  • 2014: Extreme-early adopters try to understand how Kubernetes works.
  • 2015: With the launch of the CNCF and Google’s managed Kubernetes, GKE, a wider community of software and system engineers now have a reference model for the technology. Red Hat’s OpenShift starts on the container-PaaS path.
  • 2016: Since only a small number of workloads actually run on Google’s public cloud at this time, the focus shifts to finding ways to install Kubernetes and to spin up clusters for test applications, on other clouds or on premises.
  • 2017: More managed Kubernetes services are made available (or at least announced), from Azure and AWS, and erstwhile Kubernetes rivals Docker and Mesosphere announce support for Kubernetes in their tooling. This effectively cements Kubernetes’ leadership as the de-facto standard for container orchestration. Early adopters, with access to top engineering talent, deploy production workloads at scale on the technology. The part of the mainstream that is actually doing something with Kubernetes still focuses on understanding how to install and manage it at a proof-of-concept level.
  • 2018, to date: Several more managed Kubernetes services are launched by DigitalOcean, Oracle and others; coupled with the growth of the top-3 clouds’ managed Kubernetes services (as well as Red Hat OpenShift), the problem of simple installation and management is effectively solved. The conversation shifts ‘left’ to problems in the pipeline—how to get code in a reliable, repeatable way into Kubernetes. It also shifts ‘right’ to lifecycle management—how to operate (monitor, log and secure) clusters, especially within hybrid environments. This means that production usage is still constrained—to run an app in production, one needs to address the whole cycle, rather than just spin up clusters.

In the grand scheme of things, this is still an evolving technology with a small footprint. Even for a technology with the robustness and backing to become pervasive, growth does not equal ubiquity—yet. So what does that mean for users, investors and vendors? And what else is happening that should influence decisions?