One of the oldest and most persistent debates in any field goes back to the famous line by the ancient Greek poet Archilochus, “a fox knows many things, but a hedgehog one important thing”, and to Isaiah Berlin’s equally famous philosophical-literary essay, “The Hedgehog and the Fox”. Technology is only one of many areas in which this paradigm is used (or overused, depending on your viewpoint) to frame how industry standards and operating models evolve.
A good example of this is the growing tension in cloud-native technologies between tools that strive to be “best of breed” and those that position themselves as a “one-stop shop”. Vendors such as CircleCI for CI/CD, Datadog for monitoring, and of course Snyk for developer-first security all aim to be the absolute best at the (relatively) narrow task at hand. On the other side of the fence, vendors such as GitLab look to add functionality that covers a wider surface area.
There will always be a tradeoff on this route to simplification, whether of procurement or of IT management: the Hedgehogs of this world will never match the Foxes on narrow sets of tasks, but they can offer simplicity and a unified interface in return.
The leading public cloud providers, such as AWS and Azure, are good examples of companies that, on the one hand, have the product realism to build rich ecosystems of compatible partner solutions, and on the other hand have economic incentives in place to help those customers who want to reduce the number of suppliers or solutions they work with.
In this context, a new category of solutions is looking to create automation engines to connect best-in-breed solutions, allowing at least a common technical interface to connect discrete but closely-related tasks. Examples of this approach are GitHub Actions and Bitbucket Pipelines, which aim to offer developers a way to automate their software workflows—build, test, and deploy—right from their software repositories.
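As a minimal, hypothetical illustration of the kind of workflow these engines automate (the step names and scripts below are invented for this sketch, not taken from any specific project), a GitHub Actions pipeline might look like this:

```yaml
# .github/workflows/ci.yml -- illustrative build-test-deploy workflow
name: CI
on:
  push:
    branches: [main]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build
      - name: Test
        run: make test
      - name: Deploy
        if: github.ref == 'refs/heads/main'
        run: ./scripts/deploy.sh
```

Each push to main runs build, test, and deploy in sequence: discrete but closely related tasks, sharing one declarative interface next to the code itself.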
Puppet Relay: automating DevOps and cloud operations
A new, more operations-focused entrant into this space, Puppet Relay, was announced late last month. If this is where configuration management veterans Puppet are going, it could indeed be big news. Relay is an event-, task-, or anything-driven automation engine that sits behind the other technologies an enterprise uses. Relay automates processes across the cloud infrastructure, tools, and APIs that developers, DevOps engineers, and SREs (site reliability engineers) typically manage manually. Interoperability and ease of integration seem to be major focus areas for Puppet.
According to former Cloud Foundry Foundation Executive Director (and longtime Pivot before that) Abby Kearns, who recently joined Puppet as CTO, there are tons of boxes in the datacenter that traditional Puppet customers want to automate with configuration management, but with the move to the cloud, the potential surface area for automation is much bigger. Puppet’s ambition is to capture the automation and orchestration of the cloud movement, with multiple use cases such as continuous deployment, automatic fault remediation, cloud operations and more.
Relay already integrates with some of the most important enterprise infrastructure vendors, such as HashiCorp, ServiceNow, Atlassian, and Splunk. Stephen O’Grady, Principal Analyst with RedMonk, was quoted in Puppet’s press release as saying that the rise of DevOps has created a hugely fragmented portfolio of tools, and that organizations are looking for ways to automate and integrate the different touch points. “This is the opportunity that Relay is built for,” concluded O’Grady.
Automation market trends point to a specialized future
Market analysis and data support this trend. In its Market Guide for Infrastructure Automation Tools, Gartner estimated that by 2023, 60% of organizations will use infrastructure automation tools as part of their DevOps toolchains, improving application deployment efficiency by 25%. Similarly, in its 2019-2021 I&O Automation Benchmark Report, Gartner found that 47% of infrastructure and operations executives were actively investing in infrastructure automation tools and planned to continue doing so in 2020.
Ultimately, taste in architecture means using the right tool for the right job, under the financial or organizational constraints that each of us might have. By providing a backbone to automate the cross-vendor workflow, the new breed of automation engines such as Puppet Relay could potentially tilt the scale towards specialized solutions, and change day-to-day development and operations.
Open source software underpins many of the applications we use today, whether critical for our society to function, or just for our ability to share photos of our quarantine-sourdough with strangers. The code itself has clearly changed our software applications, but what deeper, underlying impact on software delivery and organizational culture have we seen through this process?
In this article, I had the privilege of speaking with three industry luminaries that have contributed to building open source projects and communities for many years. I wanted to learn from them about the diffusion of software delivery practices from communities and projects into companies and products.
First, I spoke to Chip Childers, co-founder and newly-appointed Executive Director of the Cloud Foundry Foundation; then, to Dustin Kirkland, Chief Product Officer at Apex Clearing and a longtime contributor to Ubuntu, Kubernetes and OpenStack; and finally to Anton Drukh, early employee and VP Engineering at cloud-native unicorn Snyk.
The foundation: becoming better developers through contribution
The Cloud Foundry Foundation was originally established to hold the intellectual property of the open source Cloud Foundry technology and oversee the governance of the project. Today, the Foundation houses 48 projects under its umbrella, and over the years it has guided numerous enterprises in adopting the technology. The Foundation’s projects release independently: smaller teams ship whenever they are ready, while a coordinated release process tests the entire system for known-good combinations and brings most projects together to form the platform.
From the beginning, says Childers, the Foundation was structured with an open source license, an open contribution model, and an open governance model. Naturally, companies with larger numbers of contributors often obtain more influence in the project roadmap—but the path that this process has paved goes both ways.
The Foundation’s practices have clearly impacted companies whose staff are also contributors, according to Childers. For example, he has seen more and more contributing developers—often paired across different organizations—go back to their companies and drive adoption of methodologies like extreme programming and agile development.
On an even deeper level, says Childers, that impact has created a flywheel effect, making developers great ambassadors into their companies, and then improving the Foundation’s projects. As a first step, developers adopt the same collaboration mindset that permeates the Cloud Foundry community, in their day jobs. Second, as developers building tools for developers, they tend to develop empathy and a keen understanding of user experience, which improves their work in their companies. Third, hands-on experience in using Cloud Foundry technologies and processes in their organizations means that contributors have a wider perspective and often feed back into the Foundation’s projects on what could be improved.
The contributor: delivering proprietary software like open source
Dustin Kirkland, Chief Product Officer at Apex Clearing and an ex-colleague of mine, has spent the last 20+ years in various leadership roles around open source software, and has contributed code to projects including Ubuntu, OpenStack, and Kubernetes. Upon arrival at Apex Clearing, he wondered whether the company could reuse not only the code of the open source projects it had access to, but also some of the underlying processes by which that code was delivered, which he had experienced firsthand. Specifically, he focused his attention on release cycles.
Projects such as Ubuntu, OpenStack, and Kubernetes have predictable, time-based release cycles. Ubuntu has released every April and October since October 2004 (32 timely, major platform releases over more than 16 years!); Kubernetes, introduced in 2014, chose a faster quarterly cadence and has managed four releases per year over the last six years.
A key concept these projects use which Kirkland introduced into Apex Clearing is “Coordinated Cycles”: with time, resources, and scope as variables, a project needs to make two of them constant, and then manage the third. For example, with Ubuntu, time (releasing on time) and resources (size of the contributor community) are fixed, and scope is negotiated. Typically, a Cycle kicks off with a Summit or Conference (such as an Ubuntu Developer Summit) that brings together contributors from around the industry. In addition, a Mid-Cycle Summit is a good way of tracking progress and correcting course as needed.
When Kirkland arrived at Apex in 2019, products and projects were managed asynchronously. The team first examined six-month releases (like Ubuntu’s or OpenStack’s) but deemed them unwieldy, as they would mean managing two-week sprints for 26 weeks straight. Quarterly cycles, as adopted by Kubernetes, were considered too short to see through anything but the smallest individual projects. Finally, the team settled on 16-week cycles: three full cycles per year, giving 48 weeks of development while still allowing for four weeks of holidays.
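The arithmetic behind that choice can be sketched in a few lines (a toy calculation, not anything from Apex’s actual planning tools):

```python
# Toy calculation: how many full cycles of a given length fit in a year,
# and how much slack is left over for holidays.
WEEKS_PER_YEAR = 52

def cycle_plan(cycle_weeks):
    """Return (cycles per year, development weeks, leftover weeks)."""
    cycles = WEEKS_PER_YEAR // cycle_weeks
    dev_weeks = cycles * cycle_weeks
    return cycles, dev_weeks, WEEKS_PER_YEAR - dev_weeks

# 16-week cycles: three per year, 48 weeks of development, 4 weeks spare.
print(cycle_plan(16))   # (3, 48, 4)
# Quarterly (13-week) and six-month (26-week) cycles leave no slack.
print(cycle_plan(13))   # (4, 52, 0)
print(cycle_plan(26))   # (2, 52, 0)
```

Only the 16-week option fills the year with whole cycles and still leaves four uncommitted weeks.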
Today at Apex, each cycle involves three types of summits:
Prioritization Summit: product managers collect input from all stakeholders, so that they can achieve consensus on priorities for each product family.
Planning Summit: once the product requirements are defined, there is alignment between engineering and management on work commitments across the product portfolio, for the upcoming cycle.
Mid-Cycle Summit, renamed Portfolio Review: report on progress and adjust course where necessary, two to three times per cycle.
Announced in April of this year, Apex 20a was the first release that used open source processes and methodologies. This month, Kirkland and his team will be (virtually) holding a Portfolio Review for Apex 20b, reviewing the entire portfolio with all engineering and product leaders.
The start-up: putting up open source norms as foundations
Anton Drukh, a current colleague of mine, joined Snyk as its first hire in 2015, and has successfully grown the engineering function to a stellar team of about 70 people. He has always been fascinated, he says, by how the simplest solutions can solve the most complex problems, with people being the critical element in any solution.
Snyk’s approach from its earliest days was to see developers as the solution to—and not the source of—security vulnerabilities introduced into software. Drukh says that as someone whose formative years as an engineering leader were squarely in the cloud and cloud-native era, he was especially drawn to three aspects of the new company.
First, Snyk chose to focus on securing open source, and Drukh believed that working closely with open source communities would help develop a culture of humility. Today, says Drukh, every external contribution to Snyk’s mostly open source code base is a positive reminder of that.
Second, learning from many open source projects, Snyk aimed to build a distributed and diverse engineering team and company. According to Drukh, building these principles into the hiring process created an immense sense of empowerment amongst employees. Snyk runs in small teams of five to six people, always from different locations (and continents, as Snyk’s engineers are based in the UK, North America, Israel, and beyond), and trusts them to ‘do the right thing’. This, in turn, creates a strong and shared sense of accountability for the team’s and the company’s success.
Third, from its very beginning, the company set out to adopt open source practices in the practicalities of its software development. These measures increase a feature’s effectiveness, but also shorten the time it takes for an idea to travel from a developer’s mind to the hands of the user. Examples abound, such as:
Snyk’s codebase is shared within a team and between teams, which enables ease of onboarding and clarity of ownership.
Each repository needs to have a standardized and automated release flow, which supports high release pace.
Each pull request needs to have a clear guideline for why it is being raised, what trade-offs stand behind the chosen approach, how the tests are reflecting the expectations of the new functionality, and who should review this change. This drives transparency and accountability in the culture.
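To make the second of these practices concrete, a standardized, automated release flow per repository could look something like the tag-triggered workflow below (an illustrative sketch, not Snyk’s actual configuration; the token name and scripts are assumptions):

```yaml
# .github/workflows/release.yml -- illustrative automated release flow
name: Release
on:
  push:
    tags: ['v*']
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          registry-url: https://registry.npmjs.org
      - name: Test
        run: npm ci && npm test
      - name: Publish
        run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

Pushing a version tag runs the tests and publishes the package, so every repository releases the same way, at whatever pace its team chooses.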
Revealingly, for some of Snyk’s users, the impact of the product has sparked curiosity about how it was delivered, and an attempt to learn from the processes of a cloud-native startup. Customers can inspect the company’s delivery process across some of its codebase, and share ideas with Snyk (such as this one, of consolidating release cycles). Without the culture and processes inspired by open source, none of this serendipity would occur.
One of the most significant changes brought about by the Coronavirus outbreak has been the mass move to working from home. From an operational perspective, the impact on technology companies has been hard to overstate both in its depth and its breadth. A now-famous meme identifies COVID-19 as the one factor which has finally realized the promise of digital transformation. While amusing, this presents an important point: this pandemic, like many other crises before it, accelerates many processes that would have otherwise taken months or years.
While some changes will be temporary or partial, others will transform our world. The implications of accelerated change are of concern to anyone responsible for risk management, and Chief Information Security Officers (CISOs) are no exception. So, what is on the minds of CISOs of key technology companies in this challenging time?
Polling the CISOs
In their latest CISO Current report, cybersecurity-focused venture and research firm YL Ventures asked their advisory board—made up of around 80 CISOs of leading companies such as Walmart, Netflix, and Spotify—similar questions. YL Ventures has made use of this advisory board in investment due diligence processes over the years, and in 2019 decided to start publishing its findings for the benefit of the industry. The list below combines feedback from that report with input from other sources, where noted.
1. A good time to reconsider technology choices
Originally, the YL Ventures report was focused on DevSecOps, and found that many current tools—for example, most static application security testing (SAST) and runtime application self-protection (RASP) platforms—were considered cumbersome and difficult to adopt within software engineering teams. Another major (though perhaps unsurprising) finding was that the biggest challenge was in creating a system of processes and incentives, to support transformation. It was noted that technology should be evaluated in how it supports and safely accelerates these non-technological changes: it should bridge the long-cycle, “breaking” culture of security engineering and the short-cycle, “building” culture of software development.
2. Everyone is remote, and some will stay that way
With the acceleration of the pandemic, YL Ventures quickly pivoted and added a second, more pertinent part to the report, dealing with the work-related transformations on the minds of CISOs in light of the pandemic. The most-cited challenge was establishing a fully remote workforce in a tight timeframe, in a way that will be sustainable for a partially remote future of work; a close (and unsurprising) second was severe budget constraints. According to Naama Ben Dov of YL Ventures, both concerns present opportunities for vendors in how they interact with CISOs from this point on.
3. The risk map is being reshuffled
Some things CISOs worry about are getting easier in the new reality, such as controlling location-based risks: with no business travel, an employee residing in London could not have logged into their laptop from Moscow, for example. Similarly, Adrian Ludwig, CISO of software giant Atlassian, was a guest on Snyk’s webinar on working from home last week, and noted that he has seen an uptick in bug hygiene, as engineers have more time to be thorough and at times gravitate to smaller tasks.
Other issues are made more complicated, for example VPNs and DDoS mitigation. Yair Melmed, VP Solutions at DDoS start-up MazeBolt, reported that over 85% of the companies he works with are proactively identifying DDoS mitigation vulnerabilities that would have impacted their business continuity had they been hit by an attack. Because VPN gateways weren’t critical to business continuity before COVID-19, most companies are finding that their DDoS mitigation solutions don’t adequately protect them. This has become a critical issue, with the risk that employees could be cut off and other services impacted if those gateways come under DDoS attack.
Engaging with CISOs in a time of crisis
As many vendors know, and as the YL Ventures report confirms, budgets are being scrutinized for the short term, but in many cases also for the long term—under the assumption that remote workforce concerns will stay for the foreseeable future. In general, says Ben Dov, for now it’s much more ‘defer and delay’ than ‘reprioritise’, and therefore how vendors react to their customers’ plight will be etched into the memory of CISOs and procurement officers alike when budgets recover or get retargeted.
1. Careful with those emails
The YL Ventures report explicitly calls out alarmist sales pitches, as well as sending too many emails. The CISOs polled recommend simply showing goodwill and empathy, with personalized messages that check in on customers’ state of mind. Alongside companies that keep sending alarmist messages, there are, of course, many examples of vendors leaning in with empathy and patience towards their customers and prospects.
Getting involved with communities and community initiatives related to the specific sector is also something that is called out. Snyk’s developer advocacy team, for example, organized an online event called AllTheTalks, with all proceeds from ticket sales going towards the World Health Organization.
2. Walking the path with the buyer
While the budget may have been cut, the business need probably still exists. Many companies, from AWS to Atlassian and beyond, have come out with special offers to support businesses during this transition, as was summarized in this Forbes piece as well as elsewhere. Many, like Snyk, have created specific offers for small businesses and those in the healthcare, hospitality, travel and entertainment industries. MazeBolt has decided to offer free DDoS assessments to cover the most common DDoS vectors, resulting in a detailed vulnerability report.
3. Conscious relationship-building
On the Snyk webinar, Ludwig also spoke of his habit of informally communicating with colleagues in the company canteen (since a formal invite to a meeting with the company CISO is something no developer wants to receive). With everyone working remotely, Ludwig recommends fostering engagement by making a list of the people we need to reach out to, and how often, and then scheduling those catch-up meetings. In some cases, ‘office hours’ sessions could be set up where everyone can drop in and share their perspectives and concerns on the day-to-day.
Crucially, there is no reason not to extend this practice to our customers, catching up about everything and nothing on a regular basis to cement closer relationships. While this could seem artificial, it is an effective way to keep conversations going when the operating model does not allow for chance encounters.
Perhaps that is a take-away that touches on all aspects of our lives under quarantine: being conscious about our priorities, about who we interact with, and how we do so in a way that helps us achieve our goals now and later on.
As software takes over more of IT, developers are taking more ownership of related parts of software delivery, and moving faster. In this shifting reality, with increasing velocity and complexity, more software can mean more risk. However, beyond the linear increase in risk, it is also the very nature of risk that is changing.
In the very recent past, developers would work on monolithic applications, which meant that the scope and rate of change were more manageable. If developers have a vision of the desired state, they can test for it in the place where they can validate changes; for most of the recent two decades, this place has been the Continuous Integration/Continuous Delivery (CI/CD) pipeline.
Microservices increase complexity by shifting change validation from the pipeline into the network: if my application is made up of 100 services, the pipeline won’t build and test them all at the same time. Inherent to the concept and benefit of microservices, some services will be going through the pipeline while most others are in a different state, from being rewritten to running in production. In this situation, a desired state can be a phantom, so we need a new testing paradigm.
From punchline to best practice
It used to be the case that to scare off a nosy manager, developers would use the phrase “we test in production”. This was a concept so absurd it couldn’t possibly be taken literally, and would be sure to end the conversation. Now, with cloud-native applications, testing in production can be seen as the least scary option of them all.
When there is a constant generation of change that still needs validation, production environments become the main place where a never-ending series of ‘desired states’ exist. With the right tools for observability and for quick reaction to issues, risk could be managed here. (Assuming, of course, that I have the right people and processes in place first.)
Companies like Honeycomb herald this approach. When dealing with complex systems, they claim, most problems result from the convergence of numerous simultaneous failures. Honeycomb therefore focuses on observable state, in order to understand exactly what is happening at any given moment.
This is observing state, not validating change. A different approach comes from where we validated change before: the CI/CD pipeline.
Putting change and state together
In an eBook published this week, Rob Zuber, CTO of developer tools company CircleCI, talks about the role of CI/CD together with testing of microservices.
As more and more companies outside of the tech industry come to see software as a competitive differentiator, Zuber sees greater use of 3rd-party services and tools, a proliferation of microservices architectures, and larger data sets.
In practice, Zuber claims, breaking change into smaller increments can look like deploying multiple changes—something that a CI/CD tool would naturally handle well. Being able to evolve your production environment quickly and with confidence is more important than ever, and the key is using a combination of tools: some that validate change, and others that observe state.
CircleCI itself, says Zuber, uses its own technology for CI/CD, and other tools (crash reporting, observability, and centralised logging, for example) to tie together change and state validation. At the center of this stack is the CI/CD tool, which in their view is better placed for the role than source control tools or operational systems.
A full view of the software delivery cycle
Both approaches seem to agree that neither observing a state nor observing the change is enough: good coding practices, source control management, solid deployment practices—they are all crucial parts of the same picture. All of these elements together can lead to better cost and risk decisions, more robust code, and better products. Testing is another area of software delivery that is being utterly transformed by cloud-native technologies and open source, and should be examined with new realities and new risks in mind.
(NOTE: this article includes no medical or related advice, which, in any case, the author is completely unqualified to provide.)
It is a time of significant change, which none of us can predict. First reported from Wuhan, China, on 31 December 2019, Coronavirus disease (COVID-19) is beginning to change personal and work lives around the world. Considered by the World Health Organization to be less lethal but more contagious than its cousin SARS, COVID-19 is making its presence felt in ways that we can see, and in many ways that we can’t, yet.
Technology conferences represent an interesting case study: forward-thinking companies and communities, with vast technology resources, can take this time to reevaluate some of their strategies with regards to face-to-face events. More importantly, this transition can help countries, corporations, and humans everywhere to reduce their carbon footprint, tackling the greater challenge of our times.
Storm Corona Hits Tech
The software industry, amongst others, has felt the immediate effect in the cancellation of many prominent large tech conferences. The first cancellation many noticed was that of the mammoth, vendor-neutral annual telecoms event, Mobile World Congress in Barcelona. The latest ‘victim’ in the vendor-neutral conference space has been KubeCon/CloudNativeCon EU, which has so far been postponed to July.
In the face of COVID-19, large tech companies have been in many cases first movers to cancel company-run conferences: Facebook with F8, Google with Google I/O and Cloud Next, Microsoft with its Ignite Tour, and more. This was part of a wider initiative by large companies in tech as well as other industries to curtail inessential business travel—which in itself would have a severe effect on the attendance of some of these events.
An Introvert’s Dream?
It is the beginning of a very difficult era for anyone in sectors which support events (events management, travel, hospitality, photography, catering and more), while those who support virtual interaction stand to benefit.
The stereotypical aversion of some developers to face-to-face human interaction has been the subject of many research papers, articles, and online memes. In that light, it could be that the virtual model will in some aspects prove to be more effective for engagement and content delivery.
If the aim of some of these events is to educate developers and disseminate information, do they need to be physical events at all? As many conferences pivot towards a virtual model in the coming months, we are about to find out. Consider that a huge majority of open source contributors contribute remotely anyway, so in that space at the very least, the ground is ready.
Challenging Old Assumptions
We might not see some of these conferences return at all: as suggested recently on tech strategy blog Stratechery, Facebook’s success depends more on management of its digital real estate than on its F8 developer conference, especially as it invests significant resources in security and privacy (also, it helps to avoid the shadow of Cambridge Analytica at F8).
A UK-based CISO asked recently on one of my chat groups, “we’re pulling back on travel, beefing up remote working capability and handing out hand cleaning gels. Anyone doing anything non-obvious?” Actually, we could start with the obvious, and look at successful tech events that are already virtual-only. Events such as All Day DevOps, Global DevOps Bootcamp, and HashiTalks point the way for developer-focused events run by communities and vendors alike.
The technology is there. As an employee of HP in 2006, I regularly participated in Halo Room meetings, which felt uncannily like face-to-face (so much so that we would have fun watching newbies rush towards the screen to shake digital counterparts’ hands). Halo was prohibitively priced at the time, and was rooted in hardware pricing models—as was Cisco’s TelePresence. In 2020, backed by cloud infrastructure and software monetization models, I imagine that vendors can take similar solutions to market for a fraction of HP’s then-upfront price, offering an up-market alternative to Zoom.
A True Opportunity To Survive And Thrive
Scientists: you should wash your hands because of Coronavirus.
People: I'm gonna stop flying, hoard masks, work from home & totally rearrange my life.
Also Scientists: the #ClimateCrisis will kill millions – we must use clean power & change how we get to work.
The events crisis triggered by COVID-19 may prove to us all that many of the events we attend (and on rare occasion, enjoy) are not crucial. More importantly, if, all of a sudden, it is possible for corporations and governments to restrict travel to safeguard their public, then that can teach us an important lesson for the much more existential threat that is already here: climate change.
In a report issued in November 2019 by a team of world-class climate scientists, it was stated that “almost 75 percent of the climate pledges are partially or totally insufficient to contribute to reducing GHG emissions by 50 percent by 2030”. As corporations come under increasing pressure from governments, investors, and employees to accelerate their efforts to reduce carbon emissions, reinventing cloud, open source, and software engineering conferences seems like a great place to make a real impact.
Digital transformation has reshaped countless businesses, through the combined forces of software explosion, cloud adoption, and DevOps re-education. This has enabled new entrants to disrupt industries (e.g., Airbnb), and non-software incumbents to position themselves as de-facto software companies (e.g., Bosch).
Yet, as many CISOs have found out during this time, more software equals more risk. As a result, as James Kaplan, McKinsey’s cybersecurity leader, has said, “for many companies cybersecurity is increasingly a critical business issue, not only or even primarily a technology issue.” Let’s look at how each of the major vectors of change has contributed to this dynamic new reality.
Breaking down digital transformation
First, as Marc Andreessen once predicted, software is eating the world; to make things even more dynamic, open source is eating software, and as a result of both, developers are increasingly influential when it comes to technology choices.
Second, cloud infrastructure has introduced huge amounts of new technology: consider as examples containers and serverless replacing servers and VMs, and infrastructure-as-code supplanting traditional datacenter operations and security. Many of these technologies are changing rapidly, and not all of them are mature—but since developers are more influential, CIOs often have no choice but to live with these risks in production.
Finally, the rise of DevOps methodologies has changed how software is developed and delivered, and who owns it across the lifecycle. Examples include continuous integration and delivery, which have kept gates and delays to a minimum and created empowered, self-sufficient teams of developers. These more opinionated and more influential teams can now move more freely than ever before.
Software-defined everything? This shift has clearly redefined the IT stack and its ownership model, as shown in the chart below.
Operations teams, through DevOps, typically aim for manageable ratios of one operations engineer per 15 developers, or in many cases even lower. Where does this leave security teams and CISOs, who in my experience are often expected to deliver a 1:30 or 1:40 ratio? Unfortunately, with their backs to the wall.
Security is left behind
On the one hand, security teams are very much still in the critical path of much of software delivery; however, they are also often separate from development, uninformed, and using outdated tools and processes. As a result, many security professionals are perceived by developers as slowing down the ever-accelerating process of delivery, and by executives as contributing to the release of vulnerable applications. This presents them with an almost impossible conflict of speed vs. security, in which speed typically wins.
To make things even worse, a severe talent shortage perpetuates the state of understaffed security teams. Cybersecurity professional organisation (ISC)² claimed in a recent survey that the global IT security workforce shortage stands at four million, and while half of the sample is looking to change how they deliver security, hiring is slowing them down and putting the business at risk. Over half of those polled said that their organisation is at moderate-to-extreme risk due to staff shortage.
Not just shift left, but top to bottom
In a previous piece, I examined how containers are challenging existing models of security ownership, and asked, “How far to the right should shift-left go?”
Yet if our stack is increasingly software-defined, from the top almost to the bottom, then it should be up to developers to secure it top-down. On a recent episode of The Secure Developer podcast, Justin Somaini, a well-known security industry leader with experience from the likes of Yahoo! and Verisign, stated that he expected a third to half of a security team's headcount to move from today's process management roles into security-minded developer roles.
This brave new world of Cloud Native Application Security means moving people to the left as well as tools, but also extending developer ownership of security as far as software reaches. This does not present a risk to security professionals’ jobs: it is a change in reporting lines, and an enhancement of their skills—a genuine career development opportunity. On the hiring front, this opens up the option to recruit into security-relevant functions not only security talent, but also programming talent. This shift in ownership and role definition will make everyone’s lives easier.
In an earlier piece, I claimed that the increasing use of containers and cloud-native tools for enterprise applications introduces disruption to existing security models, in four main ways:
Breaking an application down into microservices means there is less central control over the content and update frequency of each part;
Packaging into a container happens within the developer workflow;
Providing developers with access to native APIs for tools such as Kubernetes increases the blast radius of any action; and
Using containers inherently introduces much more third-party (open source) code into a company’s proprietary codebase.
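Because packaging happens inside the developer workflow, security checks can run there too. As a purely illustrative sketch (the package names, versions, and advisory data below are all made up), this is the kind of dependency check a developer-workflow scanner performs against an image's contents:

```python
# Hypothetical advisory database: package name -> versions known to be vulnerable.
# Real scanners pull this from curated vulnerability feeds.
KNOWN_VULNERABLE = {
    "openssl": {"1.0.1f"},
    "log4j-core": {"2.14.1"},
}

def scan_dependencies(deps):
    """Return the (name, version) pairs that match a known-vulnerable version."""
    return [(name, ver) for name, ver in deps
            if ver in KNOWN_VULNERABLE.get(name, set())]

# Dependencies as they might be extracted from a container image's layers
image_deps = [("openssl", "1.0.1f"), ("curl", "7.68.0"), ("log4j-core", "2.17.0")]

findings = scan_dependencies(image_deps)
for name, ver in findings:
    print(f"VULNERABLE: {name}=={ver}")
```

The point of the sketch is where it runs, not what it knows: executed at build time, in the same pipeline that packages the container, it surfaces third-party risk before the image ever reaches production.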
Shifting left? Not so soon
According to Gartner, by 2022 more than 75% of global organizations will be running containerized applications in production. On the other hand, Snyk’s container security report earlier this year found that there has been a surge in container vulnerabilities, while 80% of developers polled didn’t test their container images during development.
Other reports, such as those highlighted in a piece by Forbes contributor Tony Bradley, show a concerning level of short-sightedness by security engineering teams: while most of the worry seems to be around misconfigurations, which occur during the build stage, most of the investment focus seems to be around runtime, which in terms of lifecycle is inefficient if not simply too late.
Putting it all together, it’s clear that shifting security left by giving developers more ownership on the security of their own application is a good thing—but how much ownership should or could they be given? And what kinds of tools and processes need to be in place for this transition to be a success?
In 2011, Netscape pioneer and current VC Marc Andreessen famously said that software is eating the world. If everything is becoming software-defined, security is no exception: recall when endpoint and network security used to be delivered by hardware appliances, not as software, as they are today. Application security is no different, especially since the people who know applications best, and who are best placed to keep up with their evolution, are developers.
Easier said than done, when successful developer adoption is such an art. Obviously, empowering developers to secure their applications means meeting them where they are without friction (close integration to the tools developers actually use)—while keeping them out of trouble with security engineering teams and others who are used to the “us & them” mentality of yesteryear.
However, we can break that down even further: to succeed, developer-led security tools should look and feel like developer tools that are a delight to use; embrace community contribution and engagement, even if they are not themselves open source; be easy to self-consume but also to scale in an enterprise; and of course, protect against what matters with the proper security depth and intelligence.
How far to the right should shift-left go?
A large part of the promise of cloud-native technologies for IT organizations is encapsulated in the phrase, “you build it, you run it.” Making security both more efficient and more effective is a huge opportunity that is at every technology executive’s door these days. To truly empower developers to own the security of their own applications, CSOs and CIOs should think about this in a broader sense: “you build it, you run it, and you secure it.”
What this means is giving developers the tools they need to both build and run secure containers, including monitoring Kubernetes workloads for vulnerabilities. In this case, Security or Operations could move to a role of policy creation, enforcement, and audit, while application risk identification and remediation would sit within the developer's workflow. The recent launch of developer-focused container-scanning solutions by Amazon Web Services (NASDAQ: AMZN), Google Cloud (NASDAQ: GOOGL) and others is a start.
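That division of labor can be sketched in code. In the toy example below (the policy rules and the simplified pod spec are invented for illustration, not drawn from any real Kubernetes API object), the security team expresses policy as data, and a check in the developer's build pipeline enforces it:

```python
# Policy is defined once, by the security team, as plain data.
POLICY = {
    "forbid_latest_tag": True,   # images must be pinned to a specific tag
    "require_non_root": True,    # containers must declare runAsNonRoot
}

def check_manifest(manifest):
    """Return policy violations for a simplified, illustrative pod spec."""
    violations = []
    for c in manifest.get("containers", []):
        image = c.get("image", "")
        if POLICY["forbid_latest_tag"] and (image.endswith(":latest") or ":" not in image):
            violations.append(f"{c['name']}: image tag must be pinned ({image or 'missing'})")
        if POLICY["require_non_root"] and not c.get("securityContext", {}).get("runAsNonRoot"):
            violations.append(f"{c['name']}: must set runAsNonRoot")
    return violations

manifest = {
    "containers": [
        {"name": "web", "image": "example/web:latest"},
        {"name": "worker", "image": "example/worker:1.4.2",
         "securityContext": {"runAsNonRoot": True}},
    ],
}

for v in check_manifest(manifest):
    print("POLICY VIOLATION:", v)
```

Security owns the POLICY dictionary and audits the results; the developer sees the violations in their own pipeline and fixes the manifest themselves, which is exactly the ownership split described above.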
Following this brave but existential choice will help enterprises to drastically reduce the risk inherent in growing their container infrastructure, and to efficiently scale security best practice. Wrapping top-of-the-line security with a coat of delightful developer experiences is something possible today, and a direction that is inevitable. As Rachel Stephens of analyst firm Redmonk has written, tools can play a lead role in culture change.
Visitors of Open Core Summit in San Francisco in mid-September could be forgiven for their confusion: on the one hand, discussions at the conference centered around business and community models for open source software (OSS) viability, in an increasingly polarized Cloud world.
On the other hand, tweetstorms by key figures in the community (who were notably absent) focused on the very definition of open source, disagreeing with the association of open source with open core, source available, and other limiting models. One of the attendees (disclaimer: a colleague of mine) put it succinctly in a tweet that exposes the still-raging debates about the nature and direction of different models to the "left" of full-on proprietary software:
A Week Of Moral Reminders For Open Source
It was certainly fascinating that a conference which dealt in different approaches to protect or limit open source—many of them controversial—was book-ended by two seminal events in the OSS world. The first one occurred just prior to the conference, when the founder of the Free Software movement and GNU project, Richard M. Stallman, resigned from MIT and the Free Software Foundation (FSF), following public pressure pertaining to opinions he expressed in an email concerning the Jeffrey Epstein affair—software engineer Sarah Mei gave a detailed breakdown on Twitter of historical issues with misogyny at the FSF.
While the Stallman resignation could be seen as starting to fix a historical issue, the second event more clearly raised new questions for open source’s future.
After it came to light that Cloud company Chef signed a deal with the U.S. Immigration and Customs Enforcement (ICE, which has drawn criticism for its separation of families and night-time raids), developer Seth Vargo pulled a piece of open source technology that helps run Chef in production, citing moral reasons.
Chef added kindling to the fire by forking the deprecated code, and renaming the author—viewed as a hostile tactic towards community norms. Chef likely needs to fulfil its contractual obligations to a demanding federal customer, but has since back-tracked on the author renaming, and has gone public on its next steps for remediation.
Free Software: Free For Any Purpose?
Much of the Twitter criticism around the Open Core Summit re-emphasized the agreed principle that if you’re writing OSS, you should be comfortable with anyone re-using or modifying such software freely, per the Four Freedoms—even if it means that AWS sells a managed service based on a project, which could limit the growth of the commercial entity supporting it, and with it the project’s viability itself (I covered this in my earlier post).
However, an extension of that question could pertain to the uses of software for purposes which might disagree with the maintainers’ values. The Four Freedoms as defined today do not speak to someone using free software to infringe on non-software freedoms—but some are now calling for this to be a formal part of free software licenses. A few licenses already include such clauses, but due to numerous gray areas, this is tricky to navigate—some entities (CIA) enjoy less scrutiny than others (ICE); judgment on some issues can be based on one’s background (China-Taiwan, Israel-Palestine, Spain-Catalonia); and so on. Even if almost all involved see the use as evil, how does one prove that a server in a North Korean labor camp runs Kubernetes, for example? How does a project enforce its policy in such a case?
As more developers bring their values to work, this will be a critical development for open source, software in general, and technology. Developer Matthew Garrett positioned this well, claiming that solving this through licenses could be effective but not in line with principles of free software and open source. Likewise, Risk Engineer Aditya Mukerjee gave a great summary of where this could quickly get complicated:
Acquia’s “Makers And Takers”
In this context it was useful to talk to Acquia founder and Drupal Project lead Dries Buytaert, just after the Summit (he had to cancel his attendance there to close an investment round in the company from Vista Equity Partners).
In a long and impassioned blog post, Buytaert used the "makers vs. takers" model to argue that failure (by all stakeholders) to embrace the collaborative culture of the OSS community is the most real and damaging issue facing it. Operating an open source community and business is hard, claims Buytaert, and ultimately every community is set up differently. Acquia, he says, maintains the Drupal project but contributes only 5% of code commits, which ensures open collaboration, compared with other vendors who might opt for more strategic control of their projects' direction at the expense of collaboration.
An example of this, says Buytaert, is a model by which open source vendors ensure they receive "selective benefits" from their work. Automattic in the WordPress community controls the WordPress.com domain; Mozilla Corporation, the for-profit subsidiary of the Mozilla Foundation, is paid large sums by Google for searches made through the Firefox search bar; MongoDB owns the copyright to its own code and is able to change MongoDB's license in order to fend off competitors.
From Cloud Vs. Community To Government Vs. Community?
Still, Buytaert agrees that there is a degree of threat from large, well-funded companies that “refuse to meaningfully contribute, and take more than they give.” However, first, it’s important to understand and respect that some companies can contribute more than others; and second, it’s important that the OSS community encourages and incentivizes them to do so, and the best way, says Buytaert, is to create a situation where the more you invest in open source, the more you win commercially. If the opposite is true, it will be hard to sustain.
Buytaert suggests that the big cloud players could give back by rewarding their developers for contributing to open source projects during work hours, or by having their marketing team help promote the open source project—in return they will get more skilled and better connected engineers.
As Pivotal’s Eli Aleyner suggested in his talk at the Summit about working with public clouds, today’s developers are tomorrow’s technology buyers, and any potential short term gains for a cloud provider from not playing nice with an open source project could be dwarfed by the commercial damage resulting from alienating the community.
If the Chef precedent is an example, this principle now very clearly includes government entities, and so it will be interesting to see how software use evolves as communities get more opinionated about the end use of their software.
While Clayton Christensen's "Innovator's Dilemma" taught us that leaders will struggle to retain their leadership position in dynamic industries, others such as Ron Adner have reminded us that it is rarely the innovator that ends up capturing most of the long-term value. Examples in technology abound, from Xerox PARC and the graphical user interface to Sony and the MP3 player (both examples redefined by Apple later on).
Sometimes, a handful of new players emerge that truly break through into dominance—Amazon Web Services is an obvious example in the IT infrastructure space—but the large majority end up failing (think of how many more public cloud players existed only a few years ago). As analyst firm RedMonk has shown with its "VMware pattern" theory, that is mostly because it is very difficult to mature as a company, in order to take your innovative technology from inception to wider enterprise adoption.
The polarization of the Kubernetes world
The cloud-native ecosystem emerged with the open-sourcing of Kubernetes in 2014, and since then we have seen an explosion of new companies and new open source projects. At times daunting for its busy graphic representation, the fragmented CNCF landscape for the most part showed us a promise of a community of small, innovative equals. The past two years have been remarkable for the consolidation of power in this ecosystem, arguably the current and future battleground of IT, as I wrote in a previous post.
While consolidation happens often, and the "VMware pattern" holds, it is not often that we see companies once dismissed as "has-beens" by analysts come back to true dominance in a completely new field. After all, HP's and Dell's server businesses have been hit hard by Cloud and have not bounced back; Oracle has been trying to adapt to the brave new world of open source and portable software; and Microsoft gave up its plans for mobile long ago (though its resurgence as a cloud and open source mammoth has been breathtaking to watch).
In the blue corner, if you will, stands IBM: with its acquisition of Red Hat and its strategic contributions to the cloud-native ecosystem, it is newly positioned as a strong leader with a wealth of both hugely popular open source projects and strong tools to build, run, and manage cloud-native applications (from RHEL through to OpenShift). I wrote about this in an earlier post, interviewing Dr. Jim Comfort.
And in the other corner, its erstwhile rival in the jolly days of virtualization, VMware. With the rise of public cloud and OpenStack, and then Kubernetes, the company weathered clickbait headlines about its business model, employee retention, and other concerns—all the while continuing to post satisfactory financial results. With recent news, long-time execs at VMware are definitely laughing now.
The Multi-Cloud, Cloud-Native Company
The brave decision to shut down its own public cloud service—something IBM, with its large Softlayer estate, did not do as decisively—led VMware to embrace its position as a multi-cloud leader, with strategic deals brokered with AWS and other clouds.
Then came the wave of acquisitions: Heptio, a cloud-native services firm founded by the originators of the Kubernetes project; Bitnami, a cloud apps company with deep developer relationships; and lately, Pivotal, OpenShift’s big rival in the world of enterprise PaaS.
At VMworld this week, we’ve witnessed the pièce de résistance with Project Pacific and Tanzu Mission Control, effectively summarized by former Heptio co-founder and CEO Craig McLuckie:
Both IBM and VMware are clearly just getting started. The dominance of developers and popularity of open source as a methodology in the cloud-native world will likely ensure some sort of balance of power in the ecosystem, but with the dramatic and rapid resurgence of these two ex-“has-beens,” Kubernetes and related projects are truly maturing into enterprise technologies. For now, their innovator’s dilemma has found its solution.
In an earlier article, I examined some of the recent dynamics in open source software, specifically around the for-own-profit commercialization of some projects by large cloud providers, and how that is driving smaller companies to seek out restrictive license models, in the process causing themselves considerable friction in their communities.
As befits a piece that deals with themes of free software and a polarized cloud industry, the article seemed to have struck a chord with several people, some of whom have contacted me to agree or disagree with my points. Rather than keep those to myself, I thought a follow up with three of these luminaries, with regards to their inside views on the topic, would be much more engaging.
In this article, I’ll summarize the main points from my conversations with Spencer Kimball, co-founder and CEO of Cockroach Labs; Joseph Jacks, founder and general partner of OSS Capital; and Abby Kearns, Executive Director of the Cloud Foundry Foundation. All have extensive track records in open source, but each has a slightly different take.
The independent vendor perspective: Spencer Kimball
While still a student in 1995, Kimball developed the first version of GNU Image Manipulation Program (GIMP) as a class project, along with Peter Mattis. Later on as a Google engineer, he worked on a new version of the Google File System, and the Google Servlet Engine. In 2012, Kimball, Mattis, and Brian McGinnis launched the company Viewfinder, later selling it to Square.
Drawing on his experiences at Google, Kimball wanted a technology like BigTable to be made available as open source outside of the company, and co-founded (again, with Mattis, and ex-Googler Ben Darnell) the company Cockroach Labs to provide commercial backing for CockroachDB, an open source project.
According to Kimball, whichever cloud provider is the best at brokering the multi-cloud migration will ‘win’ cloud. He adds that CockroachDB was built for that multi-cloud/region and relational future—where scale, complexity but also privacy frameworks such as GDPR become critical business drivers. But as optimistic as he is about the business, Kimball is also concerned about the sustainability of today’s and tomorrow’s venture-backed commercial OSS businesses.
Red Hat, Kimball reflects, clearly ‘figured out’ the model for commercial OSS before the days of cloud, becoming the dominant force in the commercial OSS business. The Red Hat ‘equilibrium’ (Kimball’s term) was based on selling contracts for support and professional services on top of widely-available OSS. With the emergence of cloud, Red Hat capitalized on the complexity of ‘big-software’ systems such as OpenStack and Kubernetes. (Bassam Tabbara of Upbound has commented on how this model might change with the IBM acquisition.)
Kimball states, “with cloud becoming the mainstream way to consume and manage IT, the complexity of some OSS provides a natural advantage to cloud platforms such as AWS or Azure, as they can use economies of scale to build a managed service out of any open source core.” He adds, “they can also offer enterprise support on top, effectively taking the bottom 50% of an emerging vendor’s total addressable market, and also capping its growth in the enterprise high-end.” So what is an emerging vendor to do? “The best protection is community,” says Kimball. Engaged, committed groups of maintainers, contributors and users are impossible to copy or to replicate in a managed service, and can keep even the most aggressive IT giant at bay.
Another protection could be to address a multi-cloud niche, as Cockroach Labs has done, which serves customers at the gap between the lock-in-focused cloud providers. At the end of the list, Kimball mentions restrictive (“almost ‘byzantine’,” he says) licenses and other defensive models such as ‘free for use, source available’, whole-compilation protection and more—all suboptimal and not in line with the principles of free software.
In light of these comments from Kimball, it is very interesting, if not entirely surprising, to note CockroachDB's licensing change, announced last week on the company's blog: they are adopting a version of the Business Source License (BSL) that is not limited by node count (unlike MariaDB's version), but prohibits other players (read: AWS) from offering a commercial version of CockroachDB as a service without buying a license. This announcement has already resulted in friction on social media and the blogosphere (which I would rather not amplify by referencing).
The venture investor perspective: Joseph Jacks
OSS Capital is the world's first VC firm exclusively focused on investing in and partnering with commercial open-source software startups. An early contributor to Kubernetes, Jacks previously founded Kismatic, which he sold to Apprenda, as well as founding container mega-tradeshow KubeCon and donating it to the CNCF as part of its inception.
OSS Capital's investment strategy is focused exclusively on supporting early-stage commercial OSS startup companies. OSS Capital's equity partner/advisory network of commercial OSS founders has collectively captured over $140bn in value across 40 of the largest COSS companies of the previous decade; transferring this knowledge and expertise to the next generation of commercial OSS founders is a core part of OSS Capital's value proposition. Additionally, OSS Capital organizes the commercial OSS community conference, OpenCoreSummit.com.
When asked about the strategic outlook for OSS given recent skirmishes, Jacks points out that the pie is getting much, much bigger: since companies outside of what is considered the software industry (from cars to home appliances) are effectively becoming producers of software, that grows the addressable market considerably, and will result in an acceleration of open source well beyond what we’ve seen so far.
Even from within the tech industry, Jacks says, "many OSS projects disrupt and/or bring transformational innovation to major global industries like data processing and storage (Spark, Ceph, Hadoop, Kafka, MongoDB, CockroachDB, Neo4j, Cassandra), operating systems (Linux, FreeRTOS), semiconductors (RISC-V), networking/CDNs (Envoy, Varnish), software engineering (Docker, Go), computing (Kubernetes), search (ElasticSearch), AI (TensorFlow, PyTorch)." Those two major developments, says Jacks, will reframe the playing field for open source.
Given his expansive view, it is perhaps not surprising that Jacks is a critic of the recent proliferation of restrictive licenses as a defensive measure for emerging OSS companies. "This can dramatically reduce the value-creation potential of OSS projects, which are fundamentally driven by developer adoption," he says, and adds, "instead, open-core OSS companies should use more permissive licenses like MIT, Apache 2.0, or BSD in order to maximize value creation for all constituents (and that includes cloud providers), while capturing value on the proprietary layers around the open core." (Jacks calls this layer "the crust".)
So what are effective strategies for a new OSS company to build, scale and survive in an AWS-dominated world? Jacks says, “one, focus on maximizing value creation and capture for all, building highly standardized disruptive technology; two, build inclusive and constructive communities; three, ship quality software fast; four, embrace transparency and open governance across all constituents.”
The foundation perspective: Abby Kearns
I spoke with Abby Kearns as a follow up to my interview piece with her from late last year, and the conversation focused on the licensing implications of competitive moves in the commercial OSS market. At an impressive CF Summit, Cloud Foundry Foundation announced that its open source project Eirini, which enables pluggable use of either Diego/Garden or Kubernetes as orchestrators, passed its validation tests for CF Application Runtime releases. Kearns, who has served as Executive Director of the Foundation since 2016, is no stranger to both the opportunities and the tensions that exist at the intersection of free software and commercial interests.
As expected, Kearns is adamant, saying, "open source as a method of building and delivering free software can only thrive if we continue to put code in the open, and ask for help in improving it." She recommends that developers and commercial OSS companies assume that someone will copy the software and perhaps even use it in a competitive context—and if one is worried about that, then why put code out there in the first place?
In Kearns’s view, actions such as using restrictive licenses can be revealing when it comes to the maintainer’s intent. Similarly, companies that open-source a wholly-formed thing might be missing the point, which is to build together, says Kearns—paraphrasing Richard M. Stallman’s famous manifesto: “free like free speech, not like free beer, not like a free puppy or free (used) mattress”.
Kearns believes that focus on these key tenets will see commercial OSS vendors through: engagement with contributors, transparency towards stakeholders, and outreach towards community. She also points out that growth in users isn’t the only meaningful metric for open source—just as important is growth in meaningful usage or in engagement with a dynamic community that likes to contribute.
Why open source in the first place?
To continue on this positive note, Gabe Monroy of Microsoft recently retweeted a thread that showed how engineers from rival vendors can collaborate successfully around open source software, to the benefit of both users and the projects themselves. Per Monroy, this is an "example of why multi-vendor OSS is the future of infrastructure software". This, and so much more, could not have been achieved if it were not for open, collaborative communities and a bias towards permissive licensing.