The Coming Cloud War Isn’t About Market Share

At least once a quarter we are bombarded with press releases about how public cloud providers are catching up to, or overtaking, perennial leader Amazon Web Services in terms of market share or quarterly revenue trends. A good example is this recent story which, in my opinion, took the routine to a new level of bluster.

Perhaps even more tiring: as a reliable reaction, these PR stunts trigger blog posts and social media threads that explain how some cloud providers are structuring their business reporting in a way that ensures a favourable comparison (for example, rolling up numbers from cloud-based business applications into cloud numbers).

To paraphrase a famous hockey adage by Wayne Gretzky, the focus on these variables is like skating to where the puck used to be. I will argue that where the puck is soon to be, and will remain for the foreseeable future, is a very different place: in a world that experiences an alarming rise in the extremity of climate events and resulting geopolitical unrest, and in which every summer is the coolest we’ll ever have going forward, the lens of investors, governments and buyers is already shifting.

[Photo: IMPA solar farm in Indiana. American Public Power Association on Unsplash]

In an earlier post, “Is Cloud Good For Humans? The Ethical Angle”, I spoke to Anne Currie, Tech Ethicist at consulting firm Container Solutions and an emerging hard science fiction author. In that post, Currie shared tangible, bottom-up actions that every developer and IT person could take to reduce the carbon footprint of their work on the cloud. I went back to Currie to talk about the top-down view: on a strategic level, how are the big three public clouds approaching this issue as a competitive factor?

A new competitive paradigm

For a cloud buyer looking for a strategic partner, carbon intensity and corporate climate governance are set to become dominant factors, more so than the short-term financial performance metrics we track today. A good example of how institutional investors in other industries are reshaping their investment hypotheses is the Transition Pathway Initiative (TPI) tool, which aims to provide a multi-dimensional analysis of the climate posture of some of the largest polluters in the world. This shift to climate-oriented investment criteria by some of the largest investors in the world is already transforming corporate policy in many sectors, and Cloud is unlikely to be an exception.

Why are investors interested in how their assets act on (or ignore) climate change? For a number of reasons. First, companies that care about their broader ecosystems tend to financially outperform those that don’t. Second, investors are risk-focused, and climate change poses an increasingly complex risk. Companies that don’t manage these risks—plan for them, discuss them at board level, carry out scenario planning, set targets—might be setting themselves up for long-term failure. These risks also have a physical dimension, as in flooded factories (or data centers), a regulatory dimension in expected carbon taxes, and a litigation dimension (as seen recently in the wave of lawsuits filed against corporations and governments for failing to act on climate change).

This is not theoretical: investors with trillions of dollars in assets under management are already pressing companies to commit to ambitious targets, and the TPI (backed by supporters with $21 trillion of assets under management) measures companies on those indicators, informing investor engagement, divestment, shareholder resolutions, and voting.

Three clouds, three strategies

Why is this important? Because the entire world (the sum of its actors) needs to reach net-zero emissions by 2050 in order to have a chance of keeping the temperature increase below 1.5 degrees Celsius above pre-industrial levels. Exceeding that threshold would trigger tipping points in extreme weather events, and send the economy and geopolitics into a spiral that would dwarf anything we are seeing with COVID-19.

Given the high probability and impact of the risk of not meeting Paris Agreement targets, investors and buyers are focusing on the quality of climate and carbon management within a company—both in terms of actual emissions reductions, and the governance around them. Governance includes issues like level of board involvement and oversight, executive remuneration for good ‘carbon performance’, target setting, and of course transparency around these activities.

In a paper published by Currie and Paul Johnston in 2018, and updated last February, the authors map the three clouds against their respective strategies, as follows:

  • Strictly in terms of data center operational emissions, Google is actively carbon-neutral (which means it offsets any emissions by buying carbon credits). In September, CEO Sundar Pichai announced that Google will be eliminating its carbon emissions legacy by buying historical carbon credits, and aiming for 100% clean electricity every hour of every day by 2030.
  • Microsoft, who have also been carbon-neutral for a while, have set a company-wide (not just Azure) goal of being carbon-negative by 2030, stating that “while the world will need to reach net zero, those of us who can afford to move faster and go further should do so”. Even more so than Google, Microsoft is setting a precedent for taking responsibility for historical emissions, which is a critical element in corporate accountability.
  • Amazon’s Climate Pledge is a commitment to run its operations on 100% renewable energy by 2025 and to be carbon-zero by 2040, a decade later than its competitors’ targets (carbon-zero means no offsetting will be involved). Currie believes that, in particular due to their early leadership position, some of AWS’s major regions are relatively higher on emissions, for example US-East (due to the energy mix there). AWS has been public about Oregon, GovCloud (US-West), Frankfurt, Canada-Central and Ireland as its nominated green regions on its Sustainability webpage, and has been making some visible investments in clean energy.

In terms of transparency and governance, Google and Microsoft typically get higher marks from industry analysts and journalists than Amazon, according to Currie. Both companies use renewable energy credits (RECs), which are a sort of token representing a utility’s renewable energy generation, and are easy to measure and track publicly. Amazon has so far been slower in its efforts at transparency, although it has started to publish its strategy for driving out carbon on its public website.

This is a partial analysis, and readers are encouraged to do further research. It is important to note that a narrower focus on AWS itself will enable a more accurate comparison than the current view of physical supply chain-heavy retail giant Amazon vs. software companies Microsoft and Google. In addition, most of the focus is around data center operations, and not on other important factors such as the carbon impact of building them, the companies’ push in other areas of sustainable solutions (from electric cars to meat and dairy substitutes), or even what the company footprint is in relation to its office-based workforce and travel.

New ways to lead

If we’ve learned anything about AWS since its mythical beginnings in 2006, it is that it moves with remarkable agility even as it grows much bigger, and constantly iterates. Climate change mitigation has not been an area of significant transparency, compared to visible innovation in other areas. As a software-driven infrastructure business, and possibly the most agile part of Amazon, AWS has the potential to change this story. The company is famously customer-obsessed, and as the frame of competition in any industry (let alone an energy-consuming juggernaut such as cloud computing) shifts to meet climate crisis challenges, we can hope that AWS will increase competitive pressure in this area, too, for all our benefit.

As cloud buyers, many CIOs already feel the pressure from regulators, investors and customers, and are taking steps that are impacting cloud providers. In addition, DevOps culture and developer empowerment, coupled with the increasing share of late Millennials and Gen-Z in the workforce, are accelerating the pressure for ethical behavior from internal stakeholders. These changes are visible in companies as young as Snyk, which recently migrated a key service from another supplier to an AWS carbon-neutral region, or as large and established as the Financial Times. Speaking to Currie, Rob Godfrey, senior architect at the FT, shared that the organization plans to have moved entirely to the cloud by the end of 2020. He added, “we hope that around 75% of our infrastructure will be in what AWS have been calling their clean regions, but we would like to see that percentage even higher in future.”

As COVID-19 provides us a preview into the complexities we might face with climate change, cloud buyers would be wise to shift their focus from short-term, biased revenue comparisons to areas that could have a lasting impact on the risk or success of their suppliers, and to demand an investment in both an accelerated green transition and in the governance required to track it.

(Originally posted on Forbes.com)

Why Automation Engines Matter To Best-In-Breed DevOps Vendors

One of the oldest and most persistent debates in any field goes back to the famous quote by ancient Greek poet Archilochus, “a fox knows many things, but a hedgehog one important thing”—and to Isaiah Berlin’s equally famous philosophical-literary essay, “The Hedgehog and The Fox”. Technology is only one of numerous areas in which this paradigm is being used (or overused, depending on your viewpoint) to evolve industry standards and operating models.

[Photo: Urban fox. Mel Gardner on Unsplash]

A good example of this is the increasing tension in cloud-native technologies between tools that strive to be “best of breed” and those that position themselves as a “one-stop shop”. Vendors such as CircleCI for CI/CD, Datadog for monitoring, and of course Snyk for developer-first security all aim to be the absolute best at the (relatively) narrow task at hand. On the other side of the fence, vendors such as GitLab look to add functionality to cover a wider surface area.

There will always be a tradeoff on this route to simplifying systems such as procurement or IT management: the one-stop-shop Foxes of this world will never achieve the same results on a narrow set of tasks as the best-of-breed Hedgehogs, but they can offer simplicity and unity of interface as an advantage.

The leading public cloud providers such as AWS and Azure are good examples of companies that, on the one hand, have the product realism to build rich ecosystems of compatible partner solutions, but on the other hand have economic incentives in place to help those customers that want to reduce the number of suppliers or solutions they work with.

In this context, a new category of solutions is looking to create automation engines to connect best-in-breed solutions, allowing at least a common technical interface to connect discrete but closely-related tasks. Examples of this approach are GitHub Actions and Bitbucket Pipelines, which aim to offer developers a way to automate their software workflows—build, test, and deploy—right from their software repositories.
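To make the idea concrete, here is a minimal, hypothetical sketch in Python of what such an engine boils down to: an ordered set of discrete steps behind one common interface, each of which would normally wrap a different best-in-breed tool (the step names and logic are illustrative, not any vendor’s actual API).

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Step:
        name: str
        run: Callable[[], bool]  # returns True on success

    def run_workflow(steps: List[Step]) -> bool:
        """Run steps in order and stop at the first failure, as a CI pipeline would."""
        for step in steps:
            print(f"running: {step.name}")
            if not step.run():
                print(f"failed: {step.name}")
                return False
        return True

    # Each lambda stands in for a call to a real build, test or deploy tool.
    workflow = [
        Step("build", lambda: True),
        Step("test", lambda: True),
        Step("deploy", lambda: True),
    ]

    run_workflow(workflow)

The value is not in any single step, but in the shared interface: once every tool is wrapped as a step, the workflow itself becomes something a repository can own and version.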

Puppet Relay: automating DevOps and cloud operations

A new and more Operations-focused entrant into this space was announced late last month, Puppet Relay. If this is where configuration management veterans Puppet are going, it could indeed be big news. Relay is an Event-, Task-, or Anything-Driven automation engine that sits behind other technologies used by an enterprise. Relay automates processes across any cloud infrastructure, tools and APIs that developers, DevOps engineers, and SREs (site reliability engineers) typically manage manually. Interoperability and ease of integration seem to be major focus areas for Puppet.

According to former Cloud Foundry Foundation Executive Director (and longtime Pivot before that) Abby Kearns, who recently joined Puppet as CTO, there are tons of boxes in the datacenter that traditional Puppet customers want to automate with configuration management, but with the move to the cloud, the potential surface area for automation is much bigger. Puppet’s ambition is to capture the automation and orchestration of the cloud movement, with multiple use cases such as continuous deployment, automatic fault remediation, cloud operations and more.
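As a rough illustration of what event-driven automation means in practice, here is a conceptual Python sketch (my own, not Relay’s actual API or syntax): incoming events from monitoring or cloud tooling are mapped to remediation tasks that an SRE would otherwise run by hand.

    from typing import Callable, Dict

    handlers: Dict[str, Callable[[dict], None]] = {}

    def on(event_type: str):
        """Register a remediation handler for a given event type."""
        def register(fn: Callable[[dict], None]):
            handlers[event_type] = fn
            return fn
        return register

    @on("disk.full")
    def expand_volume(event: dict) -> None:
        # In a real workflow this would call a cloud provider's API.
        print(f"expanding volume on {event['host']}")

    @on("deployment.failed")
    def roll_back(event: dict) -> None:
        # ...and this would call the deployment tool to roll back.
        print(f"rolling back {event['service']}")

    def dispatch(event: dict) -> None:
        handler = handlers.get(event["type"])
        if handler:
            handler(event)

    dispatch({"type": "disk.full", "host": "db-01"})

The interesting part is the glue: the engine owns the mapping from events to actions, while the actions themselves can live in any of the tools already in use.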

Relay already integrates with some of the most important enterprise infrastructure vendors, such as HashiCorp, ServiceNow, Atlassian and Splunk. Stephen O’Grady, Principal Analyst with RedMonk, was quoted in Puppet’s press release as saying that the rise of DevOps has created a hugely fragmented portfolio of tools, and that organizations are looking for ways to automate and integrate the different touch points. “This is the opportunity that Relay is built for,” concluded O’Grady.

Automation market trends point to a specialized future

Market analysis and data support this trend. In its Market Guide for Infrastructure Automation Tools, Gartner estimated that by 2023, 60% of organizations will use infrastructure automation tools as part of their DevOps toolchains, improving application deployment efficiency by 25%. Similarly, in its 2019-2021 I&O Automation Benchmark Report, Gartner reported that 47% of infrastructure and operations executives are actively investing, and plan to continue investing, in infrastructure automation tools in 2020.

Ultimately, taste in architecture means using the right tool for the right job, under the financial or organizational constraints that each of us might have. By providing a backbone to automate the cross-vendor workflow, the new breed of automation engines such as Puppet Relay could potentially tilt the scale towards specialized solutions, and change day-to-day development and operations.

(Originally posted on Forbes.com)

The Biggest Impact Of Open Source On Enterprises Might Not Be The Software Itself

Open source software underpins many of the applications we use today, whether critical for our society to function, or just for our ability to share photos of our quarantine-sourdough with strangers. The code itself has clearly changed our software applications, but what deeper, underlying impact on software delivery and organizational culture have we seen through this process?

In this article, I had the privilege of speaking with three industry luminaries who have contributed to building open source projects and communities for many years. I wanted to learn from them about the diffusion of software delivery practices from communities and projects into companies and products.

[Photo: Chip Childers, Dustin Kirkland, Anton Drukh. Twitter]

First, I spoke to Chip Childers, co-founder and newly-appointed Executive Director of the Cloud Foundry Foundation; then, to Dustin Kirkland, Chief Product Officer at Apex Clearing and a longtime contributor to Ubuntu, Kubernetes and OpenStack; and finally to Anton Drukh, early employee and VP Engineering at cloud-native unicorn Snyk.

The foundation: becoming better developers through contribution

The Cloud Foundry Foundation was originally established to hold the intellectual property of the open source Cloud Foundry technology and oversee the governance of the project. Today, the Foundation houses 48 projects under its umbrella, and over the years has influenced numerous enterprises in their use of the technology. The Foundation’s projects release independently, and most come together to form the platform through a coordinated release process: smaller teams release whenever they are ready, while the entire system is tested for known-good combinations.

From the beginning, says Childers, the Foundation was structured with an open source license, an open contribution model, and an open governance model. Naturally, companies with larger numbers of contributors often obtain more influence in the project roadmap—but the path that this process has paved goes both ways.

The Foundation’s practices have clearly impacted companies whose staff are also contributors, according to Childers. For example, he has seen more and more contributing developers—often paired across different organizations—go back to their companies and drive adoption of methodologies like extreme programming and agile development.

On an even deeper level, says Childers, that impact has created a flywheel effect: developers become great ambassadors into their companies, which in turn improves the Foundation’s projects. First, developers adopt, in their day jobs, the same collaboration mindset that permeates the Cloud Foundry community. Second, as developers building tools for developers, they tend to develop empathy and a keen understanding of user experience, which improves their work in their companies. Third, hands-on experience in using Cloud Foundry technologies and processes in their organizations means that contributors have a wider perspective, and often feed back into the Foundation’s projects on what could be improved.

The contributor: delivering proprietary software like open source

Dustin Kirkland, Chief Product Officer at Apex Clearing and an ex-colleague of mine, spent the last 20+ years in various leadership roles around open source software, and has contributed code to projects including Ubuntu, OpenStack, and Kubernetes. Upon arrival at Apex Clearing, he wondered whether the company could re-use not only the code of the open source projects it had access to, but also some of the underlying processes around how that code was delivered, which he had experienced firsthand. Specifically, he focused his attention on release cycles.

Projects such as Ubuntu, OpenStack, and Kubernetes have predictable, time-based release cycles. Ubuntu has released every April and October since October 2004 (32 timely, major platform releases over more than 15 years!); Kubernetes, introduced in 2014, has chosen a faster pace of quarterly release cycles, and has managed four releases per year over the last six years.

A key concept these projects use, and which Kirkland introduced at Apex Clearing, is “Coordinated Cycles”: with time, resources, and scope as variables, a project needs to hold two of them constant and then manage the third. For example, with Ubuntu, time (releasing on time) and resources (the size of the contributor community) are fixed, and scope is negotiated. Typically, a Cycle kicks off with a Summit or Conference (such as an Ubuntu Developer Summit) that brings together contributors from around the industry. In addition, a Mid-Cycle Summit is a good way of tracking progress and correcting course as needed.

When Kirkland arrived at Apex in 2019, products and projects were managed asynchronously. The team first examined six-month releases (like Ubuntu or OpenStack), but deemed them unwieldy, as they would mean managing two-week sprints for 26 weeks straight. Quarterly cycles, as adopted by Kubernetes, were considered too short to see through anything but the smallest individual projects. Finally, the team settled on 16-week cycles: three full cycles per year, with 48 weeks of development, while still allowing for four weeks of holidays.
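The arithmetic behind the chosen cadence is simple enough to sketch (a back-of-the-envelope Python snippet using the numbers from Kirkland’s account, not anything Apex actually runs):

    WEEKS_PER_YEAR = 52
    HOLIDAY_WEEKS = 4
    CYCLE_WEEKS = 16

    dev_weeks = WEEKS_PER_YEAR - HOLIDAY_WEEKS      # 48 weeks of development
    full_cycles = dev_weeks // CYCLE_WEEKS          # exactly 3 full cycles per year
    sprints_per_cycle = CYCLE_WEEKS // 2            # 8 two-week sprints per cycle,
                                                    # versus 13 straight for a six-month cycle

    print(full_cycles, sprints_per_cycle)           # -> 3 8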

Today at Apex, each cycle involves three types of summits:

  1. Prioritization Summit: product managers collect input from all stakeholders, so that they can achieve consensus on priorities for each product family.
  2. Planning Summit: once the product requirements are defined, there is alignment between engineering and management on work commitments across the product portfolio, for the upcoming cycle.
  3. Mid-Cycle Summit, renamed Portfolio Review: report on progress and adjust course where necessary, about two to three times per cycle.

Announced in April of this year, Apex 20a was the first release that used open source processes and methodologies. This month, Kirkland and his team will be (virtually) holding a Portfolio Review for Apex 20b, reviewing the entire portfolio with all engineering and product leaders.

The start-up: laying down open source norms as foundations

Anton Drukh, a current colleague of mine, joined Snyk as its first hire in 2015, and has successfully grown the engineering function to a stellar team of about 70 people. He has always been fascinated, he says, by how the simplest solutions can solve the most complex problems, with people being the critical element in any solution.

Snyk’s approach from its earliest days was to see developers as the solution to—and not the source of—security vulnerabilities introduced into software. Drukh says that as someone whose formative years as an engineering leader were securely in the cloud and cloud-native era, he was especially drawn to three aspects of the new company.

First, Snyk chose to focus on securing open source, and Drukh believed that working closely with open source communities would help develop a culture of humility. Today, says Drukh, every external contribution to Snyk’s mostly open source code base is a positive reminder of that.

Second, learning from many open source projects, Snyk aimed to build a distributed and diverse engineering team and company. According to Drukh, building these principles into the hiring process created an immense sense of empowerment amongst employees. Snyk runs in small teams of five to six people, always drawn from different locations (and continents, as Snyk’s engineers are based in the UK, North America, Israel, and beyond), and trusts them to ‘do the right thing’. This, in turn, creates a strong and shared sense of accountability for the team’s and the company’s success.

Third, from its very beginning, the company set out to adopt open source practices in the practicalities of its software development. These measures increase a feature’s effectiveness, but also shorten the time it takes for an idea to travel from a developer’s mind to the hands of the user. Examples abound, such as:

  1. Snyk’s codebase is shared within a team and between teams, which enables ease of onboarding and clarity of ownership.
  2. Each repository needs to have a standardized and automated release flow, which supports high release pace.
  3. Each pull request needs to have a clear guideline for why it is being raised, what trade-offs stand behind the chosen approach, how the tests are reflecting the expectations of the new functionality, and who should review this change. This drives transparency and accountability in the culture.

Revealingly, for some of Snyk’s users, the impact of the product has sparked curiosity about how it was delivered, and an attempt to learn from the processes of a cloud-native startup. Customers can inspect the company’s delivery process across some of its codebase, and share ideas with Snyk (such as this one, on consolidating release cycles). Without the culture and processes inspired by open source, none of this serendipity would be possible.

(Originally posted on Forbes.com)

As Remote Working Becomes Normal, What Do CISOs Expect Of Vendors?

[Photo: Working from home. Charles Deluvio on Unsplash]

One of the most significant changes brought about by the Coronavirus outbreak has been the mass move to working from home. From an operational perspective, the impact on technology companies has been hard to overstate both in its depth and its breadth. A now-famous meme identifies COVID-19 as the one factor which has finally realized the promise of digital transformation. While amusing, this presents an important point: this pandemic, like many other crises before it, accelerates many processes that would have otherwise taken months or years.

[Image: Digital transformation quiz (CEO / CTO / COVID-19). Susanne Wolk on Twitter]

While some changes will be temporary or partial, others will transform our world. The implications of accelerated change are of concern to anyone responsible for risk management, and Chief Information Security Officers (CISOs) are no exception. So, what is on the minds of CISOs of key technology companies in this challenging time?

Polling the CISOs

In their latest CISO Current report, cybersecurity-focused venture and research firm YL Ventures asked their advisory board—made up of around 80 CISOs of leading companies such as Walmart, Netflix and Spotify—similar questions. YL Ventures have been making use of this advisory board in investment due diligence processes over the years, and in 2019 decided to start publishing their findings for the benefit of the industry. The list below combines feedback from that report with input from other sources, where noted.

1. A good time to reconsider technology choices

Originally, the YL Ventures report focused on DevSecOps, and found that many current tools—for example, most static application security testing (SAST) and runtime application self-protection (RASP) platforms—were considered cumbersome and difficult to adopt within software engineering teams. Another major (though perhaps unsurprising) finding was that the biggest challenge lies in creating a system of processes and incentives to support transformation. It was noted that technology should be evaluated on how well it supports and safely accelerates these non-technological changes: it should bridge the long-cycle, “breaking” culture of security engineering and the short-cycle, “building” culture of software development.

2. Everyone is remote, and some will stay that way

With the acceleration of the pandemic, YL Ventures quickly pivoted and included a second, more pertinent part in their report, which deals with the work-related transformations on the minds of CISOs in light of the pandemic. The biggest challenge cited was establishing fully remote workforces in a tight timeframe, in a way that will be sustainable for a partially-remote future of work; a close second, unsurprisingly, was severe budget constraints. According to Naama Ben Dov of YL Ventures, both these concerns present opportunities for vendors in how they interact with CISOs from this point on.

3. The risk map is being reshuffled

Some things CISOs worry about are getting easier in the new reality, such as controlling location-based risks: with business travel suspended, an employee residing in London could not have logged into their laptop from Moscow, for example. Similarly, Adrian Ludwig, CISO of software giant Atlassian, was a guest on Snyk’s webinar on working from home last week, and noted that he has seen an uptick in bug hygiene, as engineers have more time to be thorough and at times gravitate to smaller tasks.

Other issues are made more complicated, for example VPNs and DDoS mitigation. Yair Melmed, VP Solutions at DDoS start-up MazeBolt, reported that over 85% of the companies he works with are proactively identifying DDoS mitigation vulnerabilities that would have impacted their business continuity had they been hit by a DDoS attack. Because VPN gateways weren’t critical to business continuity before COVID-19, most companies are finding that their DDoS mitigation solutions don’t adequately protect them. This has become a critical issue: if VPN gateways come under DDoS attack, employees risk being cut off, and other services might be impacted.

Engaging with CISOs in a time of crisis

As many vendors know, and as the YL Ventures report confirms, budgets are being scrutinized for the short term, but in many cases also for the long term—under the assumption that remote workforce concerns will stay for the foreseeable future. In general, says Ben Dov, for now it’s much more ‘defer and delay’ than ‘reprioritise’, and therefore how vendors react to their customers’ plight will be etched into the memory of CISOs and procurement officers alike when budgets recover or get retargeted.

1. Careful with those emails

The YL Ventures report explicitly calls out alarmist sales pitches, as well as sending too many emails. The CISOs polled recommend simply showing goodwill and empathy, with personalized messages checking in on customers’ state of mind. Alongside companies that keep sending alarmist messages, there are, of course, many examples of vendors who are leaning in with empathy and patience towards their customers and prospects.

Getting involved with communities and community initiatives related to the specific sector is also called out. Snyk’s developer advocacy team, for example, organized an online event called AllTheTalks, with all proceeds from ticket sales going to the World Health Organization.

2. Walking the path with the buyer

While the budget may have been cut, the business need probably still exists. Many companies, from AWS to Atlassian and beyond, have come out with special offers to support businesses during this transition, as was summarized in this Forbes piece as well as elsewhere. Many, like Snyk, have created specific offers for small businesses and those in the healthcare, hospitality, travel and entertainment industries. MazeBolt has decided to offer free DDoS assessments to cover the most common DDoS vectors, resulting in a detailed vulnerability report.

3. Conscious relationship-building

On the Snyk webinar, Ludwig also spoke of his habit of informally communicating with colleagues in the company canteen (since a formal invite to a meeting with the company CISO is something no developer wants to receive). With everyone working remotely, Ludwig recommends fostering engagement with colleagues by making a list of those we need to reach out to and how often, and then scheduling those catch-up meetings. In some cases, ‘office hours’ sessions could be set up where everyone can drop in and share their perspectives and concerns on the day-to-day.

Crucially, there is no reason not to extend this practice to our customers, catching up about everything and nothing on a regular basis to cement closer relationships. While this could seem artificial, it is an effective way to keep conversations going when the operating model does not allow for coincidental conversations.

Perhaps that is a take-away that touches on all aspects of our lives under quarantine: being conscious about our priorities, about who we interact with, and how we do so in a way that helps us achieve our goals now and later on.

(Originally posted on Forbes.com)

For Cloud-Native Applications, Testing In Production Goes From Punchline To Best Practice

As software takes over more of IT, developers are taking more ownership of related parts of software delivery, and moving faster. In this shifting reality, with increasing velocity and complexity, more software can mean more risk. However, beyond the linear increase in risk, it is also the very nature of risk that is changing.

In the very recent past, developers worked on monolithic applications, which meant that the scope and rate of change were more manageable. If developers have a vision of the desired state, they can test for it in the place where they validate changes; for most of the past two decades, that place has been the Continuous Integration/Continuous Delivery (CI/CD) pipeline.

[Photo: Sticky notes. Paul Hanaoka on Unsplash]

Microservices increase complexity by shifting change validation from the pipeline into the network: if my application is made up of 100 services, the pipeline won’t build and test them all at the same time. Inherent to the concept and benefit of microservices, some services will be going through the pipeline while most others are in a different state, from being rewritten to running in production. In this situation, a desired state can be a phantom, so we need a new testing paradigm.

From punchline to best practice

It used to be the case that to scare off a nosy manager, developers would use the phrase “we test in production”. This was a concept so absurd it couldn’t possibly be taken literally, and would be sure to end the conversation. Now, with cloud-native applications, testing in production can be seen as the least scary option of them all.

When there is a constant stream of change that still needs validation, production environments become the main place where a never-ending series of ‘desired states’ exists. With the right tools for observability and for quick reaction to issues, risk can be managed here. (Assuming, of course, that I have the right people and processes in place first.)

Companies like Honeycomb herald this approach. They claim that, when dealing with complex systems, most problems result from the convergence of numerous, simultaneous failures. Therefore, Honeycomb focuses on observable state in order to understand exactly what is happening at any given moment.
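To make the distinction tangible, here is an illustrative Python sketch of “observing state” (my own simplification, not Honeycomb’s actual SDK): each request emits one wide, structured event, so that questions about production behaviour can be asked after the fact rather than asserted in advance.

    import json
    import time
    import uuid

    def handle_request(user_id: str, endpoint: str) -> None:
        event = {
            "request_id": str(uuid.uuid4()),
            "endpoint": endpoint,
            "user_id": user_id,
            "service_version": "build-1234",  # whichever build is live right now
        }
        start = time.monotonic()
        try:
            # ... real request handling would happen here ...
            event["status"] = 200
        except Exception as exc:
            event["status"] = 500
            event["error"] = repr(exc)
            raise
        finally:
            event["duration_ms"] = round((time.monotonic() - start) * 1000, 2)
            # In production this would go to an event pipeline, not stdout.
            print(json.dumps(event))

    handle_request("user-42", "/checkout")

Every field added to the event is another dimension along which an unexpected failure can later be sliced, which is exactly what change-centric testing in the pipeline cannot anticipate.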

This is observing state, not validating change. A different approach comes from where we validated change before: the CI/CD pipeline.

[Photo: Rob Zuber, CTO of CircleCI. Rob Zuber/CircleCI]

Putting change and state together

In an eBook published this week, Rob Zuber, CTO of developer tools company CircleCI, talks about the role of CI/CD together with testing of microservices.

As more and more companies outside the tech industry come to see software as a competitive differentiator, Zuber sees greater use of third-party services and tools, a proliferation of microservices architectures, and larger data sets.

In practice, Zuber claims, breaking change into smaller increments can look like deploying multiple changes—something that CI/CD tools naturally handle well. Being able to evolve your production environment quickly and with confidence is more important than ever, but using a combination of tools that validate change and others that observe state is key.

CircleCI itself, says Zuber, uses its own technology for CI/CD, and other tools to tie together change and state validation—crash reporting, observability, and centralised logging are examples. At the center of this stack is the CI/CD tool, which in their view is better placed for the job than source control tools or operational systems.

A full view of the software delivery cycle

Both approaches seem to agree that neither observing a state nor observing the change is enough: good coding practices, source control management, solid deployment practices—they are all crucial parts of the same picture. All of these elements together can lead to better cost and risk decisions, more robust code, and better products. Testing is another area of software delivery that is being utterly transformed by cloud-native technologies and open source, and should be examined with new realities and new risks in mind.

(Originally posted on Forbes.com)

Cloud And Open Source Can Reinvent Tech Conferences In The COVID-19 (And Carbon-Negative) Era

(NOTE: this article includes no medical or related advice, which, in any case, the author is completely unqualified to provide.)

It is a time of significant change, which none of us can predict. First reported from Wuhan, China, on 31 December 2019, Coronavirus disease (COVID-19) is beginning to change personal and work lives around the world. Considered by the World Health Organization to be less lethal but more contagious than its cousin SARS, COVID-19 is making its presence felt in ways that we can see, and in many ways that we can’t, yet.

Technology conferences represent an interesting case study: forward-thinking companies and communities, with vast technology resources, can take this time to reevaluate some of their strategies with regards to face-to-face events. More importantly, this transition can help countries, corporations, and humans everywhere to reduce their carbon footprint, tackling the greater challenge of our times.

[Photo: Empty airport gate area. Eric Prouzet on Unsplash]

Storm Corona Hits Tech

The software industry, amongst others, has seen an immediate effect in the cancellation of many prominent tech conferences. The first cancellation many noticed was that of the mammoth, vendor-neutral annual telecoms event Mobile World Congress in Barcelona. The latest ‘victim’ in the vendor-neutral conference space has been KubeCon/CloudNativeCon EU, which has so far been delayed to July.

In the face of COVID-19, large tech companies have been in many cases first movers to cancel company-run conferences: Facebook with F8, Google with Google I/O and Cloud Next, Microsoft with its Ignite Tour, and more. This was part of a wider initiative by large companies in tech as well as other industries to curtail inessential business travel—which in itself would have a severe effect on the attendance of some of these events.

An Introvert’s Dream?

It is the beginning of a very difficult era for anyone in sectors which support events (events management, travel, hospitality, photography, catering and more), while those who support virtual interaction stand to benefit.

The stereotypical aversion of some developers to face-to-face human interaction has been the subject of many research papers, articles, and online memes. In that light, it could be that the virtual model will in some aspects prove to be more effective for engagement and content delivery.

If the aim of some of these events is to educate developers and disseminate information, do they need to be physical events at all? As many conferences pivot towards a virtual model in the coming months, we are about to find out. Consider that a huge majority of open source contributors already contribute remotely, so in that space at the very least, the ground is ready.

Challenging Old Assumptions

We might not see some of these conferences return at all: as suggested recently on tech strategy blog Stratechery, Facebook’s success depends more on management of its digital real estate than on its F8 developer conference, especially as it invests significant resources in security and privacy (also, it helps to avoid the shadow of Cambridge Analytica at F8).

A UK-based CISO asked recently on one of my chat groups, “we’re pulling back on travel, beefing up remote working capability and handing out hand cleaning gels. Anyone doing anything non-obvious?” Actually, we could start with the obvious, and look at successful tech events that are already virtual-only. Events such as All Day DevOps, Global DevOps Bootcamp, and HashiTalks point the way for developer-focused events run by communities and vendors alike.

The technology is there. As an employee of HP in 2006, I regularly participated in Halo Room meetings, which felt uncannily like face-to-face (so much so that we would have fun watching newbies rush towards the screen to shake digital counterparts’ hands). Halo was prohibitively priced at the time, and was rooted in hardware pricing models—as was Cisco’s TelePresence. In 2020, backed by cloud infrastructure and software monetization models, I imagine that vendors can take similar solutions to market for a fraction of HP’s then-upfront price, offering an up-market alternative to Zoom.

A True Opportunity To Survive And Thrive

The events crisis triggered by COVID-19 may prove to us all that many of the events we attend (and on rare occasion, enjoy) are not crucial. More importantly, if, all of a sudden, it is possible for corporations and governments to restrict travel to safeguard their public, then that can teach us an important lesson for the much more existential threat that is already here: climate change.

In a report issued in November 2019, a team of world-class climate scientists wrote that “almost 75 percent of the climate pledges are partially or totally insufficient to contribute to reducing GHG emissions by 50 percent by 2030”. As corporations come under increasing pressure from governments, investors, and employees to accelerate their efforts to reduce carbon emissions, reinventing cloud, open source, and software engineering conferences seems like a great place to make a real impact.

(Originally posted on Forbes.com)

For CISOs To Scale Security Fast, Shift-Left Is Not Enough

Digital transformation has reshaped countless businesses, through the combined forces of software explosion, cloud adoption, and DevOps re-education. This has enabled new entrants to disrupt industries (e.g., Airbnb), and non-software incumbents to position themselves as de-facto software companies (e.g., Bosch).

Yet, as many CISOs have found out during this time, more software equals more risk. As a result, as James Kaplan, McKinsey’s cybersecurity leader, has said, “for many companies cybersecurity is increasingly a critical business issue, not only or even primarily a technology issue.” Let’s look at how each of the major vectors of change has contributed to this dynamic new reality.

[Photo: Shift key on a keyboard. Daniel Josef on Unsplash]

Breaking down digital transformation

First, as Marc Andreessen once predicted, software is eating the world; to make things even more dynamic, open source is eating software, and as a result of both, developers are increasingly influential when it comes to technology choices.

Second, cloud infrastructure has introduced huge amounts of new technology: consider as examples containers and serverless replacing servers and VMs, and infrastructure-as-code supplanting traditional datacenter operations and security. Many of these technologies are changing rapidly, and not all of them are mature—but since developers are more influential, CIOs often have no choice but to live with these risks in production.

Finally, the rise of DevOps methodologies has changed the process of how software is developed and delivered, and the ownership of it across the lifecycle. Examples of this include continuous integration and delivery, which has brought gates and delays to a minimum, and has created empowered and self-sufficient teams of developers. This means that these more opinionated and more influential teams can now move more freely than ever before.

Software-defined everything? This shift has clearly redefined the IT stack and its ownership model, as shown in the chart below.

[Chart: Software is taking over the stack. Snyk Ltd.]

Operations teams, through DevOps, typically aim for manageable ratios of around one operations engineer per 15 developers, or in many cases even lower. Where does this leave security teams and CISOs, who in my experience are often expected to deliver a 1:30 or 1:40 ratio? Unfortunately, with their backs to the wall.

Security is left behind

On the one hand, security teams are very much still in the critical path of much of software delivery; on the other, they are often separate from development, uninformed, and using outdated tools and processes. As a result, many security professionals are perceived by developers as slowing down the ever-accelerating process of delivery, and by executives as contributing to the release of vulnerable applications. This presents them with an almost impossible conflict of speed vs. security, which speed typically wins.

To make things even worse, a severe talent shortage perpetuates the state of understaffed security teams. Cybersecurity professional organisation (ISC)² claimed in a recent survey that the global IT security workforce shortage stands at four million people, and while half of the sample is looking to change how they deliver security, hiring is slowing them down and putting the business at risk. Over half of those polled said that their organisation is at moderate-to-extreme risk due to staff shortage.

Not just shift left, but top to bottom

In a previous piece, I examined how containers are challenging existing models of security ownership, and asked, “How far to the right should shift-left go?”

Yet if our stack is increasingly software-defined, from the top almost to the bottom, then it should be up to developers to secure it top-down. On a recent episode of The Secure Developer podcast, Justin Somaini, a well-known security industry leader with experience at the likes of Yahoo! and Verisign, stated that he expects a third to a half of a security team’s headcount to move from today’s process management roles into security-minded developer roles.

This brave new world of Cloud Native Application Security means moving people to the left as well as tools, but also extending developer ownership of security as far as software reaches. This does not present a risk to security professionals’ jobs: it is a change in reporting lines, and an enhancement of their skills—a genuine career development opportunity. On the hiring front, this opens up the option to recruit into security-relevant functions not only security talent, but also programming talent. This shift in ownership and role definition will make everyone’s lives easier.

(Originally posted on Forbes.com)

How Containers Are Changing Ownership Of Application Security

In an earlier piece, I claimed that the increasing use of containers and cloud-native tools for enterprise applications introduces disruption to existing security models, in four main ways:

  1. Breaking an application down into microservices means there is less central control over the content and update frequency of each part;
  2. Packaging into a container happens within the developer workflow;
  3. Providing developers with access to native APIs for tools such as Kubernetes increases the blast radius of any action; and
  4. Using containers inherently introduces much more third-party (open source) code into a company’s proprietary codebase.

Shifting left? Not so fast

According to Gartner, by 2022 more than 75% of global organizations will be running containerized applications in production. On the other hand, Snyk’s container security report earlier this year found that there has been a surge in container vulnerabilities, while 80% of developers polled didn’t test their container images during development.

Other reports, such as one highlighted in a piece by Forbes contributor Tony Bradley, show a concerning level of short-sightedness by security engineering teams—while most of the worry seems to be around misconfigurations, which occur during the build stage, most of the investment focus seems to be around runtime, which in terms of lifecycle is inefficient, if not simply too late.

Putting it all together, it’s clear that shifting security left by giving developers more ownership on the security of their own application is a good thing—but how much ownership should or could they be given? And what kinds of tools and processes need to be in place for this transition to be a success?

[Photo: A developer workstation secured by a stormtrooper. Liam Tucker on Unsplash]

Andreessen, revisited

In 2011, Netscape pioneer and current VC Marc Andreessen famously said that software is eating the world. If everything is becoming software-defined, security is no exception: recall when endpoint and network security used to be delivered by hardware appliances, rather than as software, as they are today. Application security is no different, especially since the people who know applications best, and who are best placed to keep up with their evolution, are developers.

This is easier said than done, as successful developer adoption is such an art. Obviously, empowering developers to secure their applications means meeting them where they are, without friction (close integration with the tools developers actually use)—while keeping them out of trouble with security engineering teams and others who are used to the “us & them” mentality of yesteryear.

However, we can break that down even further: to succeed, developer-led security tools should look and feel like developer tools that are a delight to use; embrace community contribution and engagement, even if they are not themselves open source; be easy to self-consume but also to scale in an enterprise; and of course, protect against what matters, with the proper security depth and intelligence.

How far to the right should shift-left go?

A large part of the promise of cloud-native technologies for IT organizations is encapsulated in the phrase, “you build it, you run it.” Making security both more efficient and more effective is a huge opportunity that is at every technology executive’s door these days. To truly empower developers to own the security of their own applications, CSOs and CIOs should think about this in a broader sense: “you build it, you run it, and you secure it.”

What this means is giving developers the tools they need to both build and run secure containers, including monitoring Kubernetes workloads for vulnerabilities. In this case, Security or Operations could move to a role of policy creation, enforcement and audit—while application risk identification and remediation would sit within the developer’s workflow. The recent launch of developer-focused container-scanning solutions by Amazon Web Services (NASDAQ: AMZN), Google Cloud (NASDAQ: GOOGL) and others is a start.
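As a hedged sketch of what that split could look like in practice (the data shape and policy below are hypothetical, not any specific vendor’s API): security defines a policy as code, and the developer’s build fails when a scanned image violates it.

    from typing import Dict, List

    # The policy itself is owned by the security team: maximum findings per severity.
    POLICY: Dict[str, int] = {"critical": 0, "high": 3}

    def policy_violations(findings: List[dict], policy: Dict[str, int]) -> List[str]:
        """Return human-readable violations; an empty list means the image is compliant."""
        violations = []
        for severity, limit in policy.items():
            count = sum(1 for f in findings if f["severity"] == severity)
            if count > limit:
                violations.append(f"{count} {severity} vulnerabilities (limit {limit})")
        return violations

    # In CI, `findings` would be parsed from the image scanner's output.
    findings = [
        {"id": "CVE-2020-0001", "severity": "critical"},
        {"id": "CVE-2020-0002", "severity": "high"},
    ]

    problems = policy_violations(findings, POLICY)
    if problems:
        raise SystemExit("image failed security policy: " + "; ".join(problems))

The important property is that the gate runs inside the developer’s own workflow, while the policy remains something security can create, version and audit centrally.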

Making this brave but existential choice will help enterprises to drastically reduce the risk inherent in growing their container infrastructure, and to efficiently scale security best practice. Wrapping top-of-the-line security in a coat of delightful developer experience is possible today, and the direction is inevitable. As Rachel Stephens of analyst firm RedMonk has written, tools can play a lead role in culture change.

(Originally posted on Forbes.com)

Open Source’s Rocky Week Provides Ample Food For Thought

Visitors to the Open Core Summit in San Francisco in mid-September could be forgiven for their confusion: on the one hand, discussions at the conference centered around business and community models for open source software (OSS) viability in an increasingly polarized Cloud world.

[Photo: Nick Schrock at Open Core Summit. Open Core Summit]

On the other hand, tweetstorms by key figures in the community (who were notably absent) focused on the very definition of open source, disagreeing with the association of open source with open core, source-available and other limiting models. One of the attendees (disclaimer: a colleague of mine) put it succinctly in a tweet exposing the still-raging debates about the nature and direction of the different models to the “left” of full-on proprietary software.

A Week Of Moral Reminders For Open Source

It was certainly fascinating that a conference which dealt with different approaches to protecting or limiting open source—many of them controversial—was book-ended by two seminal events in the OSS world. The first occurred just prior to the conference, when the founder of the Free Software movement and the GNU project, Richard M. Stallman, resigned from MIT and the Free Software Foundation (FSF), following public pressure over opinions he expressed in an email concerning the Jeffrey Epstein affair—software engineer Sarah Mei gave a detailed breakdown on Twitter of historical issues with misogyny at the FSF.

While the Stallman resignation could be seen as starting to fix a historical issue, the second event more clearly raised new questions for open source’s future.

After it came to light that Cloud company Chef had signed a deal with U.S. Immigration and Customs Enforcement (ICE, which has been highlighted for its separation of families and night-time raids), developer Seth Vargo pulled a piece of open source technology that helps run Chef in production, citing moral reasons.

Chef added fuel to the fire by forking the deprecated code and renaming the author—a move viewed as a hostile tactic towards community norms. Chef likely needs to fulfil its contractual obligations to a demanding federal customer, but it has since back-tracked on the author renaming, and has gone public on its next steps for remediation.

Free Software: Free For Any Purpose?

Much of the Twitter criticism around the Open Core Summit re-emphasized the agreed principle that if you’re writing OSS, you should be comfortable with anyone re-using or modifying such software freely, per the Four Freedoms—even if it means that AWS sells a managed service based on a project, which could limit the growth of the commercial entity supporting it, and with it the project’s viability itself (I covered this in my earlier post).

However, an extension of that question could pertain to the uses of software for purposes which might disagree with the maintainers’ values. The Four Freedoms as defined today do not speak to someone using free software to infringe on non-software freedoms—but some are now calling for this to be a formal part of free software licenses. A few licenses already include such clauses, but due to numerous gray areas, this is tricky to navigate—some entities (CIA) enjoy less scrutiny than others (ICE); judgment on some issues can be based on one’s background (China-Taiwan, Israel-Palestine, Spain-Catalonia); and so on. Even if almost all involved see the use as evil, how does one prove that a server in a North Korean labor camp runs Kubernetes, for example? How does a project enforce its policy in such a case?

As more developers bring their values to work, this will be a critical development for open source, and for software and technology in general. Developer Matthew Garrett positioned this well, claiming that solving this through licenses could be effective, but not in line with the principles of free software and open source. Likewise, Risk Engineer Aditya Mukerjee gave a great summary of where this could quickly get complicated.

Acquia’s “Makers And Takers”

In this context it was useful to talk to Acquia founder and Drupal Project lead Dries Buytaert, just after the Summit (he had to cancel his attendance there to close an investment round in the company from Vista Equity Partners).

In a long and impassioned blog post, Buytaert used the “makers vs. takers” model to argue that failure (by all stakeholders) to embrace the collaborative culture of the OSS community is the most real and damaging issue facing it. Operating an open source community and business is hard, claims Buytaert, and ultimately every community is set up differently. Acquia, he says, maintains the Drupal project but contributes only 5% of code commits, which ensures open collaboration—compared with other vendors who might opt for more strategic control of their projects’ direction at the expense of collaboration.

An example of this, says Buytaert, is the model by which open source vendors ensure they receive “selective benefits” from their work. Automattic in the WordPress community controls the WordPress.com domain; Mozilla Corporation, the for-profit subsidiary of the Mozilla Foundation, is paid large sums by Google for searches done through the Firefox search bar; MongoDB owns the copyright to its own code and is able to change MongoDB’s license in order to fend off competitors.

From Cloud Vs. Community To Government Vs. Community?

Still, Buytaert agrees that there is a degree of threat from large, well-funded companies that “refuse to meaningfully contribute, and take more than they give.” However, it is important, first, to understand and respect that some companies can contribute more than others; and second, that the OSS community encourages and incentivizes them to do so. The best way, says Buytaert, is to create a situation where the more you invest in open source, the more you win commercially. If the opposite is true, the model will be hard to sustain.

Buytaert suggests that the big cloud players could give back by rewarding their developers for contributing to open source projects during work hours, or by having their marketing team help promote the open source project—in return they will get more skilled and better connected engineers.

As Pivotal’s Eli Aleyner suggested in his talk at the Summit about working with public clouds, today’s developers are tomorrow’s technology buyers, and any potential short term gains for a cloud provider from not playing nice with an open source project could be dwarfed by the commercial damage resulting from alienating the community.

If the Chef precedent is an example, this principle now very clearly includes government entities, and so it will be interesting to see how software use evolves as communities get more opinionated about the end use of their software.

(Originally posted on Forbes.com)

VMware And IBM Go Full Circle To Dominate The Cloud-Native Ecosystem

While Clayton Christensen’s “Innovator’s Dilemma” taught us that leaders will struggle to retain their leadership position in dynamic industries, others such as Ron Adner have reminded us that it is rarely the innovator that ends up capturing most of the long-term value. Examples in technology abound, from Xerox Labs and the graphical user interface to Sony and the MP3 player (both later redefined by Apple).

Sometimes, a handful of new players emerge that truly break through into dominance—Amazon Web Services is an obvious example in the IT infrastructure space—but the large majority end up failing (think of how many more public cloud players existed only a few years ago). As James Governor of analyst firm RedMonk has shown with his “VMware pattern” theory, that is mostly because it is very difficult to mature as a company enough to take your innovative technology from inception to wider enterprise adoption.

[Photo: VMware CEO Pat Gelsinger and Heptio co-founder and CTO Joe Beda, on the day the acquisition was announced. Joe Beda on Twitter]

The polarization of the Kubernetes world

The cloud-native ecosystem emerged with the open-sourcing of Kubernetes in 2014, and since then we have seen an explosion of new companies and new open source projects. At times daunting for its busy graphic representation, the fragmented CNCF landscape for the most part showed us a promise of a community of small, innovative equals. The past two years have been remarkable for the consolidation of power in this ecosystem, arguably the current and future battleground of IT, as I wrote in a previous post.

While consolidation happens often, and the “VMware pattern” holds, it is not often that we see companies once dismissed as “has-beens” by analysts come back to true dominance in a completely new field. After all, HP’s and Dell’s server businesses have been hit hard by Cloud and have not bounced back; Oracle has been trying to adapt to the brave new world of open source and portable software; and Microsoft gave up its plans for mobile long ago (though its resurgence as a cloud and open source mammoth has been breathtaking to watch).

In the blue corner, if you will, stands IBM: with its acquisition of Red Hat and its strategic contributions to the cloud-native ecosystem, it is newly positioned as a strong leader, with a wealth of hugely popular open source projects and strong tools to build, run and manage cloud-native applications (from RHEL through to OpenShift). I wrote about this in an earlier post, interviewing Dr. Jim Comfort.

And in the other corner stands its erstwhile rival from the jolly days of virtualization, VMware. With the rise of public cloud and OpenStack, and then Kubernetes, the company weathered clickbait headlines about its business model, employee retention and other concerns—all the while continuing to post satisfactory financial results. Given the recent news, VMware’s long-time execs are definitely laughing now.

The Multi-Cloud, Cloud-Native Company

The brave decision to shut down its own public cloud service—something IBM, with its large SoftLayer estate, did not do as decisively—led VMware to embrace its position as a multi-cloud leader, with strategic deals brokered with AWS and other clouds.

Then came the wave of acquisitions: Heptio, a cloud-native services firm founded by the originators of the Kubernetes project; Bitnami, a cloud apps company with deep developer relationships; and lately, Pivotal, OpenShift’s big rival in the world of enterprise PaaS.

At VMworld this week, we witnessed the pièce de résistance with Project Pacific and Tanzu Mission Control, effectively summarized by Heptio co-founder and former CEO Craig McLuckie.

Both IBM and VMware are clearly just getting started. The dominance of developers and popularity of open source as a methodology in the cloud-native world will likely ensure some sort of balance of power in the ecosystem, but with the dramatic and rapid resurgence of these two ex-“has-beens,” Kubernetes and related projects are truly maturing into enterprise technologies. For now, their innovator’s dilemma has found its solution.

(Originally posted on Forbes.com)