Photo by Lyman Hansel Gerona (@lhgerona) on Unsplash
I recently read the famously excellent book “Bullshit Jobs” by the late, great David Graeber, about jobs that don’t add any value to the world we live in (and indeed cause economic and societal harm), but persist for mostly political and sociological, rather than economic, reasons.
(I should add here that the vast majority of jobs, even for those of you who feel like “my job is bullshit”, are not “Bullshit Jobs” per Graeber’s definition, and are actually needed.)
Before your mind’s associative mechanisms take you to the public sector, allow me to inform you that Graeber specifically focuses on—and in my opinion proves, surprisingly—how this is an even bigger phenomenon in the private sector, especially in larger organisations. This is a dangerous book, and well worth reading!
In parallel, like many of you, my email and chat apps are blowing up with numerous threads concerning the recent breakthroughs in generative AI (e.g. ChatGPT or Midjourney), and what it means for our future; implications mentioned range from a worrying decline in the quality of high school book reports, all the way to The Singularity.
Obviously, jobs are a hot item. Will machines replace us at our jobs? It’s a multifaceted and fascinating discussion that will take a long time to unfold in the real world. Some aspects are more certain, such as the implications for the factory and retail sectors, where—absent a social safety net—work elimination could lead directly to poverty. But while we get our heads around these developments and what they mean, I think Graeber’s book provides us with an interesting and actionable thought exercise: identifying a group of jobs that AI could (and should, perhaps) help to transition.
Graeber classifies Bullsh*t Jobs into five categories. Here they are with my crude assessment of risk:
Flunkies: some leaders equate number of reports to power, even if those on their team sit around and do nothing (think royal courtiers in history, or Gareth/Dwight from the UK/US versions of The Office). There is no chance that any AI is silly enough to try and recreate this role, because there is no real task to recreate. LOW RISK.
Goons: roles with an aggressive nature that have an adverse effect, like soldiers (the only reason we need them is that the other side has them too), but also (per Graeber!) bank lobbyists and some corporate lawyers. This is related to basic game theory, and unless we give AI all the control (for an example, watch the hilarious S1E9 of Love Death & Robots on Netflix), it won’t change the underlying premise of this particular game. Going back to the warfare example, as it evolves, many military tasks are slowly being replaced by machines (e.g. drones, cyber-warfare, SIGINT), but not uniformly across countries—and sadly, they are definitely not being eliminated. LOW RISK.
Duct tapers: these people are busy manually fixing or bridging over inefficiencies or disconnects that shouldn’t exist in the first place; if we built systems more thoroughly and consolidated around platforms, we wouldn’t need these jobs. System design and modelling seem like a perfect place for AI to excel. HIGH RISK.
Box tickers: people whose roles exist so that a company can claim it has done something, like an oil & gas company hiring an ESG Manager with no budget or real power. AI is not taking over the government institutions that create compliance standards anytime soon… but it could automate how we meet some of the requirements. MEDIUM RISK.
Taskmasters: these could be people whose job it is to assign tasks (Type 1), or to both create bullshit tasks and then assign them (Type 2). Type 1 could easily be replaced by an orchestration mechanism, in the same way that Uber or Lyft assign rides to drivers; Type 2 is usually a derivative of Type 1, so it would eventually be at risk too—or merge with the Flunkies category. HIGH RISK.
Final tally: 2.5 out of 5 isn’t bad! 🙂
On a serious note, of course, prophecy was given to fools, and the likelihood of any of this coming to pass is greatly dependent on us as employees, managers, consumers, and citizen-voters. But if you’re looking to read something that challenges your perception of the political, economic and professional systems around us—I highly recommend this book.
At least once a quarter we are bombarded with press releases about how public cloud providers are catching up to, or overtaking, perennial leader Amazon Web Services in terms of market share or quarterly revenue trends. A good example is this recent story which, in my opinion, took the routine to a new level of bluster.
Perhaps even more tiring: these PR stunts reliably trigger blog posts and social media threads that explain how some cloud providers structure their business reporting in a way that ensures a favourable comparison (for example, rolling up numbers from cloud-based business applications into cloud numbers).
To paraphrase a famous hockey adage by Wayne Gretzky, the focus on these variables is like skating to where the puck used to be. I will argue that where the puck is soon to be, and will remain for the foreseeable future, is a very different place: in a world that experiences an alarming rise in the extremity of climate events and resulting geopolitical unrest, and in which every summer is the coolest we’ll ever have going forward, the lens of investors, governments and buyers is already shifting.
IMPA solar farm in Indiana PHOTO BY AMERICAN PUBLIC POWER ASSOCIATION ON UNSPLASH
In an earlier post, “Is Cloud Good For Humans? The Ethical Angle”, I spoke to Anne Currie, Tech Ethicist at consulting firm Container Solutions, and emerging hard-science fiction author. In that post, Currie shared tangible, bottom-up actions that every developer and IT person could take to reduce the carbon footprint of their work on the cloud. I went back to Currie to speak about the top-down: on a strategic level, how are the big three public clouds approaching this issue as a competitive factor?
A new competitive paradigm
For a cloud buyer looking for a strategic partner, carbon intensity and corporate climate governance are set to become dominant factors, more so than the short-term financial performance metrics we track today. A good example of how institutional investors in other industries are reforming their hypotheses is the Transition Pathway Initiative (TPI) tool, which aims to provide a multi-dimensional analysis of climate posture for some of the largest polluters in the world. This shift to climate-oriented investment criteria by some of the largest investors in the world is already transforming corporate policy in many sectors, and Cloud is unlikely to be an exception.
Why are investors interested in how their assets act on (or ignore) climate change? For a number of reasons. First, companies that care about their broader ecosystems tend to financially outperform those that don’t. Second, investors are risk-focused, and climate change poses an increasingly complex risk. Companies that don’t manage these risks—plan for them, discuss them at board level, carry out scenario planning, set targets—might be setting themselves up for long-term failure. These risks also have a physical dimension, as with flooded factories (or data centers), regulation risk imposed by expected carbon taxes, and litigation risk (as seen recently in the wave of lawsuits filed against corporations and governments for failing to act on climate change).
This is not theoretical: investors with trillions of dollars in assets under management are already pressing companies to commit to ambitious targets, and the TPI (backed by supporters with $21 trillion of assets under management) measures companies on those indicators, informing investor engagement, divestment, shareholder resolutions, and voting.
Three clouds, three strategies
Why is this important? Because the entire world (the sum of its actors) needs to reach net-zero emissions by 2050 in order to have a chance of keeping the temperature increase below 1.5 degrees Celsius above pre-industrial levels; exceeding that threshold would trigger tipping points in extreme weather events, and send the economy and geopolitics into a spiral that would dwarf anything we are seeing with COVID-19.
Given the high probability and impact of the risk of not meeting Paris Agreement targets, investors and buyers are focusing on the quality of climate and carbon management within a company—both in terms of actual emissions reductions, and the governance around them. Governance includes issues like level of board involvement and oversight, executive remuneration for good ‘carbon performance’, target setting, and of course transparency around these activities.
Strictly in terms of data center operational emissions, Google is already carbon-neutral (which means it offsets any emissions by buying carbon credits). In September, CEO Sundar Pichai announced that Google will be eliminating its carbon emissions legacy by buying historical carbon credits, and aiming for 100% clean electricity every hour of every day by 2030.
Amazon’s Climate Pledge is a commitment to run its operations on 100% renewable energy by 2025 and be carbon-zero by 2040, a decade later than its competitors’ targets (carbon-zero means no offsetting will be involved). Currie believes that, in part due to its early leadership position, some of AWS’s major regions are relatively high on emissions, for example US-East (due to the energy mix there). AWS has been public about Oregon, GovCloud (US-West), Frankfurt, Canada-Central and Ireland as its nominated green regions on its Sustainability webpage, and has been making some visible investments in clean energy.
“We put the statement back on the new page. Also, did you see the updated list of renewable energy projects, and this one that is now online? https://t.co/1uo338LiDS”
— Adrian Cockcroft (@adrianco), September 22, 2020
In terms of transparency and governance, Google and Microsoft typically get higher marks from industry analysts and journalists than Amazon, according to Currie. Both companies use renewable energy credits (RECs), which are a sort of token representing a utility’s renewable energy generation, and are easy to measure and track publicly. Amazon has so far been slower in its transparency efforts, although it has started to publish its strategy to drive carbon out on its public website.
This is a partial analysis, and readers are encouraged to do further research. It is important to note that a narrower focus on AWS itself will enable a more accurate comparison than the current view of physical supply chain-heavy retail giant Amazon vs. software companies Microsoft and Google. In addition, most of the focus is around data center operations, and not on other important factors such as the carbon impact of building them, the companies’ push in other areas of sustainable solutions (from electric cars to meat and dairy substitutes), or even what the company footprint is in relation to its office-based workforce and travel.
New ways to lead
If we’ve learned anything about AWS since its mythical beginnings in 2006, it is that it moves with remarkable agility even as it grows much bigger, and constantly iterates. Climate change mitigation has not been an area of significant transparency, compared to visible innovation in other areas. As a software-driven infrastructure business, and possibly the most agile part of Amazon, AWS has the potential to change this story. The company is famously customer-obsessed, and as the frame of competition in any industry (let alone an energy-consuming juggernaut such as cloud computing) shifts to meet climate crisis challenges, we can hope that AWS will increase competitive pressure in this area, too, for all our benefit.
As cloud buyers, many CIOs already feel the pressure from regulators, investors and customers, and are taking steps that are impacting cloud providers. In addition, DevOps culture and developer empowerment, coupled with the increasing share of late-Millennials and Gen-Z in the workforce, are accelerating the pressure for ethical behavior from internal stakeholders. These changes are visible in companies as young as Snyk, which recently migrated a key service from another supplier to an AWS carbon-neutral region, or as large and established as the Financial Times. Speaking to Currie, Rob Godfrey, senior architect at the FT, shared that the organization plans to have moved entirely to the cloud by the end of 2020. He added, “we hope that around 75% of our infrastructure will be in what AWS have been calling their clean regions, but we would like to see that percentage even higher in future.”
As COVID-19 provides us a preview into the complexities we might face with climate change, cloud buyers would be wise to shift their focus from short-term, biased revenue comparisons to areas that could have a lasting impact on the risk or success of their suppliers, and to demand an investment in both an accelerated green transition and in the governance required to track it.
One of the oldest and most persistent debates in any field goes back to the famous quote by ancient Greek poet Archilochus, “a fox knows many things, but a hedgehog one important thing”—and to Isaiah Berlin’s equally famous philosophical-literary essay, “The Hedgehog and The Fox”. Technology is only one of numerous areas in which this paradigm is being used (or overused, depending on your viewpoint) to evolve industry standards and operating models.
Urban fox MEL GARDNER ON UNSPLASH
A good example of this is the increasing tension in cloud-native technologies between those tools that strive to be a “best of breed”, and those that position themselves as a “one-stop-shop”. Vendors such as CircleCI for CI/CD, Datadog for monitoring, and of course Snyk for developer-first security all aim to be the absolute best at the (relatively) narrow task at hand. On the other side of the fence, vendors such as GitLab look to add functionality to cover a wider surface area.
There will always be a tradeoff on this route to simplifying systems such as procurement or IT management: the Hedgehogs of this world will never achieve the same results on narrow sets of tasks as will Foxes, but they will offer simplicity or unity of interface as an advantage.
The leading public cloud providers such as AWS and Azure are good examples of companies that, on the one hand, have the product realism to build rich ecosystems of compatible partner solutions, but on the other hand have economic incentives in place to help those customers that want to reduce the number of suppliers or solutions they work with.
In this context, a new category of solutions is looking to create automation engines to connect best-in-breed solutions, allowing at least a common technical interface to connect discrete but closely-related tasks. Examples of this approach are GitHub Actions and Bitbucket Pipelines, which aim to offer developers a way to automate their software workflows—build, test, and deploy—right from their software repositories.
Puppet Relay: automating DevOps and cloud operations
A new and more Operations-focused entrant into this space was announced late last month: Puppet Relay. If this is where configuration management veterans Puppet are going, it could indeed be big news. Relay is an Event-, Task-, or Anything-Driven automation engine that sits behind other technologies used by an enterprise. Relay automates processes across any cloud infrastructure, tools and APIs that developers, DevOps engineers, and SREs (site reliability engineers) typically manage manually. Interoperability and ease of integration seem to be major focus areas for Puppet.
According to former Cloud Foundry Foundation Executive Director (and longtime Pivot before that) Abby Kearns, who recently joined Puppet as CTO, there are tons of boxes in the datacenter that traditional Puppet customers want to automate with configuration management, but with the move to the cloud, the potential surface area for automation is much bigger. Puppet’s ambition is to capture the automation and orchestration of the cloud movement, with multiple use cases such as continuous deployment, automatic fault remediation, cloud operations and more.
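To make the pattern concrete, here is a minimal, hypothetical sketch of event-driven remediation in Python. This is my own illustration of the general shape of such workflows, not Relay’s actual syntax or API; the event kinds, handlers and payload fields are invented. An incoming event (say, a monitoring alert delivered as a webhook) is matched to a registered handler, which would then run a remediation step against a cloud API.

```python
# Hypothetical sketch of event-driven remediation (not Puppet Relay's actual API).
# An incoming event is routed to a registered handler, which runs a remediation action.

import json
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Event:
    source: str   # e.g. "monitoring", "ticketing"
    kind: str     # e.g. "disk_full", "untagged_instance"
    payload: dict

# Registry mapping event kinds to remediation handlers.
HANDLERS: Dict[str, Callable[[Event], None]] = {}

def on(kind: str):
    """Register a remediation handler for a given event kind."""
    def register(fn: Callable[[Event], None]):
        HANDLERS[kind] = fn
        return fn
    return register

@on("disk_full")
def expand_volume(event: Event) -> None:
    # A real workflow would call a cloud provider API here;
    # this sketch only logs the intended action.
    print(f"Would expand volume {event.payload['volume_id']} in {event.payload['region']}")

def dispatch(raw: str) -> None:
    """Entry point: parse an incoming event (e.g. a webhook body) and run its handler."""
    data = json.loads(raw)
    event = Event(source=data["source"], kind=data["kind"], payload=data.get("payload", {}))
    handler = HANDLERS.get(event.kind)
    if handler is None:
        print(f"No handler for event kind {event.kind!r}; ignoring")
        return
    handler(event)

if __name__ == "__main__":
    dispatch('{"source": "monitoring", "kind": "disk_full", '
             '"payload": {"volume_id": "vol-123", "region": "eu-west-1"}}')
```

The point of the sketch is the interface, not the handler: once events from different tools arrive in a common shape, use cases like auto-remediation or continuous deployment become routing problems rather than bespoke glue code.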
Relay already integrates with some of the most important enterprise infrastructure vendors, like HashiCorp, ServiceNow, Atlassian, Splunk and others. Stephen O’Grady, Principal Analyst with RedMonk, was quoted in Puppet’s press release as saying that the rise of DevOps has created a hugely fragmented portfolio of tools, and that organizations are looking for ways to automate and integrate the different touch points. “This is the opportunity that Relay is built for,” concluded O’Grady.
Automation market trends point to a specialized future
Market analysis and data support this trend. In its Market Guide for Infrastructure Automation Tools, Gartner estimated that by 2023, 60% of organizations will use infrastructure automation tools as part of their DevOps toolchains, improving application deployment efficiency by 25%. Similarly, in its 2019-2021 I&O Automation Benchmark Report, Gartner reported that 47% of infrastructure and operations executives were actively investing in infrastructure automation tools and planned to continue investing in 2020.
Ultimately, taste in architecture means using the right tool for the right job, under the financial or organizational constraints that each of us might have. By providing a backbone to automate the cross-vendor workflow, the new breed of automation engines such as Puppet Relay could potentially tilt the scale towards specialized solutions, and change day-to-day development and operations.
Open source software underpins many of the applications we use today, whether critical for our society to function, or just for our ability to share photos of our quarantine-sourdough with strangers. The code itself has clearly changed our software applications, but what deeper, underlying impact on software delivery and organizational culture have we seen through this process?
In this article, I had the privilege of speaking with three industry luminaries that have contributed to building open source projects and communities for many years. I wanted to learn from them about the diffusion of software delivery practices from communities and projects into companies and products.
Chip Childers, Dustin Kirkland, Anton Drukh TWITTER
First, I spoke to Chip Childers, co-founder and newly-appointed Executive Director of the Cloud Foundry Foundation; then, to Dustin Kirkland, Chief Product Officer at Apex Clearing and a longtime contributor to Ubuntu, Kubernetes and OpenStack; and finally to Anton Drukh, early employee and VP Engineering at cloud-native unicorn Snyk.
The foundation: becoming better developers through contribution
The Cloud Foundry Foundation was originally established to hold the intellectual property of the open source Cloud Foundry technology and oversee the governance of the project. Today, the Foundation houses 48 projects under its umbrella, and over the years has influenced numerous enterprises in using the technology. The Foundation’s projects release independently, and most come together to form the platform through a coordinated release process: smaller teams release whenever they are ready, while the entire system is tested for known-good combinations.
From the beginning, says Childers, the Foundation was structured with an open source license, an open contribution model, and an open governance model. Naturally, companies with larger numbers of contributors often obtain more influence in the project roadmap—but the path that this process has paved goes both ways.
The Foundation’s practices have clearly impacted companies whose staff are also contributors, according to Childers. For example, he has seen more and more contributing developers—often paired across different organizations—go back to their companies and drive adoption of methodologies like extreme programming and agile development.
On an even deeper level, says Childers, that impact has created a flywheel effect, making developers great ambassadors into their companies, and then improving the Foundation’s projects. As a first step, developers adopt the same collaboration mindset that permeates the Cloud Foundry community, in their day jobs. Second, as developers building tools for developers, they tend to develop empathy and a keen understanding of user experience, which improves their work in their companies. Third, hands-on experience in using Cloud Foundry technologies and processes in their organizations means that contributors have a wider perspective and often feed back into the Foundation’s projects on what could be improved.
The contributor: delivering proprietary software like open source
Dustin Kirkland, Chief Product Officer at Apex Clearing and an ex-colleague of mine, spent the last 20+ years in various leadership roles around open source software, and has contributed code to projects including Ubuntu, OpenStack, and Kubernetes. Upon arrival at Apex Clearing, he wondered whether the company could re-use not only the code of the open source projects it had access to, but also some of the underlying processes around how that code was delivered, which he had experienced firsthand. Specifically, he focused his attention on release cycles.
Projects such as Ubuntu, OpenStack, and Kubernetes have predictable, time-based release cycles. Ubuntu has released every April and October since October 2004 (32 timely, major platform releases over more than 16 years!); Kubernetes, introduced in 2014, has chosen a faster pace of quarterly release cycles, and has managed four releases per year over the last six years.
A key concept these projects use, and which Kirkland introduced into Apex Clearing, is “Coordinated Cycles”: with time, resources, and scope as variables, a project needs to hold two of them constant, and then manage the third. For example, with Ubuntu, time (releasing on time) and resources (the size of the contributor community) are fixed, and scope is negotiated. Typically, a Cycle kicks off with a Summit or Conference (such as an Ubuntu Developer Summit) that brings together contributors from around the industry. In addition, a Mid-Cycle Summit is a good way of tracking progress and correcting course as needed.
When Kirkland arrived at Apex in 2019, products and projects were managed asynchronously. The team examined the option of six-month releases (like Ubuntu or OpenStack), but deemed it unwieldy, as it would mean managing two-week sprints for 26 weeks straight. Quarterly cycles, as adopted by Kubernetes, were considered too short to see through anything but the smallest individual projects. Finally, the team settled on 16-week cycles: three full cycles per year, with 48 weeks of development, while still allowing for four weeks of holidays.
Today at Apex, each cycle involves three types of summits:
Prioritization Summit: product managers collect input from all stakeholders, so that they can achieve consensus on priorities for each product family.
Planning Summit: once the product requirements are defined, there is alignment between engineering and management on work commitments across the product portfolio, for the upcoming cycle.
Mid-Cycle Summit, renamed Portfolio Review: report on progress and adjust course where necessary, about two to three times per cycle.
Announced in April of this year, Apex 20a was the first release that used open source processes and methodologies. This month, Kirkland and his team will be (virtually) holding a Portfolio Review for Apex 20b, reviewing the entire portfolio with all engineering and product leaders.
The start-up: putting up open source norms as foundations
Anton Drukh, a current colleague of mine, joined Snyk as its first hire in 2015, and has successfully grown the engineering function to a stellar team of about 70 people. He has always been fascinated, he says, by how the simplest solutions can solve the most complex problems, with people being the critical element in any solution.
Snyk’s approach from its earliest days was to see developers as the solution to—and not the source of—security vulnerabilities introduced into software. Drukh says that as someone whose formative years as an engineering leader were securely in the cloud and cloud-native era, he was especially drawn to three aspects of the new company.
First, Snyk chose to focus on securing open source, and Drukh believed that working closely with open source communities would help develop a culture of humility. Today, says Drukh, every external contribution to Snyk’s mostly open source code base is a positive reminder of that.
Second, learning from many open source projects, Snyk aimed to build a distributed and diverse engineering team and company. According to Drukh, building these principles into the hiring process created an immense sense of empowerment amongst employees. Snyk runs in small teams of five to six people, always from different locations (and continents, as Snyk’s engineers are based in the UK, North America, Israel, and beyond), and trusts them to ‘do the right thing’. This, in turn, creates a strong and shared sense of accountability for the team’s and the company’s success.
Third, from its very beginning, the company set out to adopt open source practices in the practicalities of its software development. These measures increase a feature’s effectiveness, but also shorten the time it takes for an idea to travel from a developer’s mind to the hands of the user. Examples abound, such as:
Snyk’s codebase is shared within a team and between teams, which enables ease of onboarding and clarity of ownership.
Each repository needs to have a standardized and automated release flow, which supports high release pace.
Each pull request needs to have a clear guideline for why it is being raised, what trade-offs stand behind the chosen approach, how the tests are reflecting the expectations of the new functionality, and who should review this change. This drives transparency and accountability in the culture.
Revealingly, for some of Snyk’s users, the impact of the product has sparked curiosity into how it was delivered, and an attempt to learn from the processes of a cloud-native startup. Customers can inspect the company’s delivery process across some of its codebase, and share ideas with Snyk (such as this one, about consolidating release cycles). Without the culture and processes inspired by open source, none of this serendipity would be possible.
One of the most significant changes brought about by the Coronavirus outbreak has been the mass move to working from home. From an operational perspective, the impact on technology companies has been hard to overstate both in its depth and its breadth. A now-famous meme identifies COVID-19 as the one factor which has finally realized the promise of digital transformation. While amusing, this presents an important point: this pandemic, like many other crises before it, accelerates many processes that would have otherwise taken months or years.
Digital Transformation Quiz SUSANNE WOLK (TWITTER)
While some changes will be temporary or partial, others will transform our world. The implications of accelerated change are of concern to anyone responsible for risk management, and Chief Information Security Officers (CISOs) are no exception. So, what is on the minds of CISOs of key technology companies in this challenging time?
Polling the CISOs
In their latest CISO Current report, cybersecurity-focused venture and research firm YL Ventures asked their advisory board—made up of around 80 CISOs of leading companies such as Wal-Mart, Netflix and Spotify—exactly these questions. YL Ventures have been making use of this advisory board in investment due diligence processes over the years, and in 2019 decided to start publishing their findings for the benefit of the industry. The list below combines feedback from that report with input from other sources, where noted.
1. A good time to reconsider technology choices
Originally, the YL Ventures report was focused on DevSecOps, and found that many current tools—for example, most static application security testing (SAST) and runtime application self-protection (RASP) platforms—were considered cumbersome and difficult to adopt within software engineering teams. Another major (though perhaps unsurprising) finding was that the biggest challenge lies in creating a system of processes and incentives to support transformation. It was noted that technology should be evaluated on how it supports and safely accelerates these non-technological changes: it should bridge the long-cycle, “breaking” culture of security engineering and the short-cycle, “building” culture of software development.
2. Everyone is remote, and some will stay that way
With the acceleration of the pandemic, YL Ventures quickly pivoted and included a second and more pertinent part of their report, which deals with the work-related transformations that are on the minds of CISOs in light of the pandemic. The biggest challenge that was cited was establishing fully remote workforces in a tight timeframe, in a way that will be sustainable for a partially-remote future of work; a close second was severe budget constraints, unsurprisingly. According to Naama Ben Dov of YL Ventures, both these concerns present opportunities for vendors in how they interact with CISOs from this point on.
3. The risk map is being reshuffled
Some things that CISOs worry about are getting easier in the new reality, such as controlling location-based risks: an employee residing in London could not have logged into their laptop from Moscow, for example, since there is no business travel. Similarly, Adrian Ludwig, CISO of software giant Atlassian, was a guest on Snyk’s webinar on working from home last week, and noted that he has seen an uptick in bug hygiene, as engineers have more time to be thorough and at times gravitate to smaller tasks.
Other issues are made more complicated, for example VPNs and DDoS mitigation. Yair Melmed, VP Solutions at DDoS start-up MazeBolt, reported that over 85% of the companies he works with are proactively identifying DDoS mitigation vulnerabilities that would have impacted their business continuity had they been hit by a DDoS attack. Because VPN gateways weren’t critical to business continuity before COVID-19, most companies are finding that their DDoS mitigation solutions don’t adequately protect them. This has become a critical issue, with the risk that employees could be cut off and other services impacted if they come under DDoS attack.
Engaging with CISOs in a time of crisis
As many vendors know, and as the YL Ventures report confirms, budgets are being scrutinized for the short term, but in many cases also for the long term—under the assumption that remote workforce concerns will stay for the foreseeable future. In general, says Ben Dov, for now it’s much more ‘defer and delay’ than ‘reprioritise’, and therefore how vendors react to their customers’ plight will be etched into the memory of CISOs and procurement officers alike when budgets recover or get retargeted.
1. Careful with those emails
The YL Ventures report explicitly calls out alarmist sales pitches, as well as sending too many emails. The CISOs polled recommend simply showing goodwill and empathy, with personalized messages that check in on customers’ state of mind. Alongside companies who keep sending alarmist messages, there are, of course, many examples of other vendors who are leaning in with empathy and patience towards their customers and prospects.
The report also calls out getting involved with communities and community initiatives related to the specific sector. Snyk’s developer advocacy team, for example, organized an online event called AllTheTalks, with all proceeds from ticket sales going towards the World Health Organization.
2. Walking the path with the buyer
While the budget may have been cut, the business need probably still exists. Many companies, from AWS to Atlassian and beyond, have come out with special offers to support businesses during this transition, as was summarized in this Forbes piece as well as elsewhere. Many, like Snyk, have created specific offers for small businesses and those in the healthcare, hospitality, travel and entertainment industries. MazeBolt has decided to offer free DDoS assessments to cover the most common DDoS vectors, resulting in a detailed vulnerability report.
3. Conscious relationship-building
On the Snyk webinar, Ludwig also spoke of his habit of informally communicating with colleagues in the company canteen (since a formal invite to a meeting with the company CISO is something no developer wants to receive). With everyone working remotely, in order to foster engagement with colleagues, Ludwig recommends making a list of those we need to reach out to and how often, and then scheduling those catch-up meetings. In some cases, ‘office hours’ sessions could be set up where everyone can drop in and share their perspectives and concerns on the day to day.
Crucially, there is no reason not to extend this practice to our customers, catching up about everything and nothing on a regular basis to cement our closer relationships. While this could seem artificial, it is an effective way to keep conversations going when the operating model does not allow for any coincidental conversations.
Perhaps that is a take-away that touches on all aspects of our lives under quarantine: being conscious about our priorities, about who we interact with, and how we do so in a way that helps us achieve our goals now and later on.
As software takes over more of IT, developers are taking more ownership of related parts of software delivery, and moving faster. In this shifting reality, with increasing velocity and complexity, more software can mean more risk. However, beyond the linear increase in risk, it is also the very nature of risk that is changing.
In the very recent past, developers would work on monolithic applications, which meant that the scope and rate of change were more manageable. If developers have a vision of the desired state, they can test for it in the place where they can validate changes; for most of the recent two decades, this place has been the Continuous Integration/Continuous Delivery (CI/CD) pipeline.
Sticky notes PHOTO BY PAUL HANAOKA ON UNSPLASH
Microservices increase complexity by shifting change validation from the pipeline into the network: if my application is made up of 100 services, the pipeline won’t build and test them all at the same time. Inherent to the concept and benefit of microservices, some services will go through the pipeline while most others are in a different state, from being rewritten to running in production. In this situation, a desired state can be a phantom, so we need a new testing paradigm.
From punchline to best practice
It used to be the case that to scare off a nosy manager, developers would use the phrase “we test in production”. This was a concept so absurd it couldn’t possibly be taken literally, and would be sure to end the conversation. Now, with cloud-native applications, testing in production can be seen as the least scary option of them all.
When there is a constant generation of change that still needs validation, production environments become the main place where a never-ending series of ‘desired states’ exist. With the right tools for observability and for quick reaction to issues, risk could be managed here. (Assuming, of course, that I have the right people and processes in place first.)
Companies like Honeycomb herald this approach. They claim that, when dealing with complex systems, most problems result from the convergence of numerous simultaneous failures. Therefore, Honeycomb focuses on observable state in order to understand exactly what is happening at any given moment.
Observability, short and sweet:
– can you understand whatever internal state the system has gotten itself into?
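As a rough illustration of what observing state can look like in code, here is a minimal Python sketch of “wide” structured events: one event per unit of work, with enough context attached to ask new questions later. This is my own simplification, not Honeycomb’s SDK or any vendor’s API; the service name, fields and values are invented.

```python
# A minimal sketch of "wide events" for observability (illustrative only,
# not any vendor's SDK): record one structured event per unit of work,
# with enough context to ask new questions about internal state later.

import json
import time
import uuid

def handle_request(user_id: str, cart_size: int) -> None:
    event = {
        "trace_id": str(uuid.uuid4()),
        "service": "checkout",
        "version": "2020-05-04.3",  # which build served this request
        "user_id": user_id,
        "cart_size": cart_size,
    }
    start = time.monotonic()
    try:
        # ... the actual business logic would run here ...
        event["status"] = "ok"
    except Exception as exc:
        event["status"] = "error"
        event["error"] = repr(exc)
        raise
    finally:
        event["duration_ms"] = round((time.monotonic() - start) * 1000, 2)
        # In production this would go to an event store or observability
        # backend; printing JSON stands in for that here.
        print(json.dumps(event))

handle_request(user_id="u-42", cart_size=3)
```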
This is observing state, not validating change. A different approach comes from where we validated change before: the CI/CD pipeline.
Rob Zuber, CTO of CircleCI ROB ZUBER/CIRCLECI
Putting change and state together
In an eBook published this week, Rob Zuber, CTO of developer tools company CircleCI, talks about the role of CI/CD together with testing of microservices.
As more and more companies outside of the tech industry come to see software as a competitive differentiator, Zuber sees greater use of 3rd-party services and tools, a proliferation of microservices architectures, and larger data sets.
In practice, Zuber claims, breaking change into smaller increments can look like deploying multiple changes—something that a CI/CD tool would naturally handle well. Being able to evolve your production environment quickly and with confidence is more important than ever, but using a combination of tools that validate change and others that observe state is key.
CircleCI itself, says Zuber, uses its own technology for CI/CD, and other tools to tie together the change and state validation—crash reporting, observability, and centralised logging are examples. At the center of this stack is the CI/CD tool, which in their view is better placed for this role than source control tools or operational systems.
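To illustrate what tying change and state together might look like inside a pipeline, here is a hedged Python sketch of a post-deploy verification step. It is not CircleCI’s configuration or API; the endpoints, URLs and threshold are invented. The idea is simply that the pipeline both validates the change (a smoke test against the new release) and observes the state (an error-rate check) before declaring the deployment healthy.

```python
# Hypothetical post-deployment verification step (illustrative, not CircleCI's API):
# 1) validate the change with a smoke test against the new release,
# 2) observe the state by checking an error-rate metric,
# and fail the pipeline if either check is unhealthy.

import sys
import urllib.request

SMOKE_URL = "https://staging.example.com/healthz"        # invented endpoint
METRICS_URL = "https://metrics.example.com/error-rate"   # invented endpoint
MAX_ERROR_RATE = 0.01  # fail if more than 1% of requests error after the deploy

def smoke_test() -> bool:
    """Validate the change: the new release should answer a basic health check."""
    with urllib.request.urlopen(SMOKE_URL, timeout=5) as resp:
        return resp.status == 200

def error_rate() -> float:
    """Observe the state: fetch the current error rate from a metrics endpoint."""
    with urllib.request.urlopen(METRICS_URL, timeout=5) as resp:
        return float(resp.read().decode().strip())

def main() -> int:
    if not smoke_test():
        print("Smoke test failed: change is not healthy")
        return 1
    rate = error_rate()
    if rate > MAX_ERROR_RATE:
        print(f"Error rate {rate:.2%} exceeds threshold: state is not healthy")
        return 1
    print("Change validated and state looks healthy")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```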
A full view of the software delivery cycle
Both approaches seem to agree that neither observing a state nor observing the change is enough: good coding practices, source control management, solid deployment practices—they are all crucial parts of the same picture. All of these elements together can lead to better cost and risk decisions, more robust code, and better products. Testing is another area of software delivery that is being utterly transformed by cloud-native technologies and open source, and should be examined with new realities and new risks in mind.
(NOTE: this article includes no medical or related advice, which, in any case, the author is completely unqualified to provide.)
It is a time of significant change, which none of us can predict. First reported from Wuhan, China, on 31 December 2019, Coronavirus disease (COVID-19) is beginning to change personal and work lives around the world. Considered by the World Health Organization to be less lethal but more contagious than its cousin SARS, COVID-19 is making its presence felt in ways that we can see, and in many ways that we can’t, yet.
Technology conferences represent an interesting case study: forward-thinking companies and communities, with vast technology resources, can take this time to reevaluate some of their strategies with regards to face-to-face events. More importantly, this transition can help countries, corporations, and humans everywhere to reduce their carbon footprint, tackling the greater challenge of our times.
Empty airport gate area PHOTO: ERIC PROUZET ON UNSPLASH
Storm Corona Hits Tech
The software industry, amongst others, has seen an immediate effect in the cancellation of many prominent tech conferences. The first cancellation many noticed was the mammoth, vendor-neutral annual telecoms event Mobile World Congress in Barcelona. The latest ‘victim’ in the vendor-neutral conference space has been KubeCon/CloudNativeCon EU, which has so far been delayed to July.
In the face of COVID-19, large tech companies have been in many cases first movers to cancel company-run conferences: Facebook with F8, Google with Google I/O and Cloud Next, Microsoft with its Ignite Tour, and more. This was part of a wider initiative by large companies in tech as well as other industries to curtail inessential business travel—which in itself would have a severe effect on the attendance of some of these events.
An Introvert’s Dream?
It is the beginning of a very difficult era for anyone in sectors which support events (events management, travel, hospitality, photography, catering and more), while those who support virtual interaction stand to benefit.
The stereotypical aversion of some developers to face-to-face human interaction has been the subject of many research papers, articles, and online memes. In that light, it could be that the virtual model will in some aspects prove to be more effective for engagement and content delivery.
If the aim of some of these events is to educate developers and disseminate information, do they need to be physical events at all? As many conferences pivot towards a virtual model in the coming months, we are about to find out. Consider that a huge majority of open source contributors already contribute remotely, so in that space at the very least, the ground is ready.
We might not see some of these conferences return at all: as suggested recently on tech strategy blog Stratechery, Facebook’s success depends more on management of its digital real estate than on its F8 developer conference, especially as it invests significant resources in security and privacy (also, it helps to avoid the shadow of Cambridge Analytica at F8).
A UK-based CISO asked recently on one of my chat groups, “we’re pulling back on travel, beefing up remote working capability and handing out hand cleaning gels. Anyone doing anything non-obvious?” Actually, we could start with the obvious, and look at successful tech events that are already virtual-only. Events such as All Day DevOps, Global Devops Bootcamp, and HashiTalks point the way for developer-focused events run by communities and vendors alike.
The technology is there. As an employee of HP in 2006, I regularly participated in Halo Room meetings, which felt uncannily like face-to-face (so much so that we would have fun watching newbies rush towards the screen to shake digital counterparts’ hands). Halo was prohibitively priced at the time, and was rooted in hardware pricing models—as was Cisco’s TelePresence. In 2020, backed by cloud infrastructure and software monetization models, I imagine that vendors can take similar solutions to market for a fraction of HP’s then-upfront price, offering an up-market alternative to Zoom.
A True Opportunity To Survive And Thrive
Scientists: you should wash your hands because of Coronavirus.
People: I'm gonna stop flying, hoard masks, work from home & totally rearrange my life.
Also Scientists: the #ClimateCrisis will kill millions – we must use clean power & change how we get to work.
The events crisis triggered by COVID-19 may prove to us all that many of the events we attend (and on rare occasion, enjoy) are not crucial. More importantly, if, all of a sudden, it is possible for corporations and governments to restrict travel to safeguard their public, then that can teach us an important lesson for the much more existential threat that is already here: climate change.
In a report issued in November 2019 by a team of world-class climate scientists, it was said that “almost 75 percent of the climate pledges are partially or totally insufficient to contribute to reducing GHG emissions by 50 percent by 2030”. As corporations come under increasing pressure from governments, investors, and employees to accelerate their efforts to reduce carbon emissions—reinventing cloud, open source, and software engineering conferences seems like a great place to make a real impact.
Digital transformation has reshaped countless businesses, through the combined forces of software explosion, cloud adoption, and DevOps re-education. This has enabled new entrants to disrupt industries (e.g., Airbnb), and non-software incumbents to position themselves as de-facto software companies (e.g., Bosch).
Yet, as many CISOs have found out during this time, more software equals more risk. As a result, as James Kaplan, McKinsey’s cybersecurity leader, has said, “for many companies cybersecurity is increasingly a critical business issue, not only or even primarily a technology issue.” Let’s look at how each of the major vectors of change has contributed to this dynamic new reality.
Shift PHOTO BY DANIEL JOSEF ON UNSPLASH
Breaking down digital transformation
First, as Marc Andreessen once predicted, software is eating the world; to make things even more dynamic, open source is eating software, and as a result of both, developers are increasingly influential when it comes to technology choices.
Second, cloud infrastructure has introduced huge amounts of new technology: consider as examples containers and serverless replacing servers and VMs, and infrastructure-as-code supplanting traditional datacenter operations and security. Many of these technologies are changing rapidly, and not all of them are mature—but since developers are more influential, CIOs often have no choice but to live with these risks in production.
Finally, the rise of DevOps methodologies has changed the process of how software is developed and delivered, and the ownership of it across the lifecycle. Examples of this include continuous integration and delivery, which has brought gates and delays to a minimum, and has created empowered and self-sufficient teams of developers. This means that these more opinionated and more influential teams can now move more freely than ever before.
Software-defined everything? This shift has clearly redefined the IT stack and its ownership model, as shown in the chart below.
Software is taking over the stack SNYK LTD.
Operations teams, through DevOps, typically aim for manageable ratios of one Operations engineer per 15 developers, or in many cases even leaner. Where does this leave security teams and CISOs, who in my experience are often expected to deliver a 1:30 or 1:40 ratio? Unfortunately, with their backs to the wall.
Security is left behind
On the one hand, security teams are very much still in the critical path of much of software delivery; however, they are also often separate from development and uninformed, and using outdated tools and processes. As a result, many security professionals are perceived by developers as slowing down the ever-accelerating process of delivery, and by executives as contributing to the release of vulnerable applications. This presents them with an almost impossible conflict of speed vs. security, which speed typically wins.
To make things even worse, a severe talent shortage perpetuates the state of understaffed security teams. Cybersecurity professional organisation (ISC)² claimed in a recent survey that the global IT security workforce shortage stands at four million people, and while half of the sample is looking to change how they deliver security, hiring is slowing them down and putting the business at risk. Over half of those polled said that their organisation is at moderate-to-extreme risk due to staff shortage.
Not just shift left, but top to bottom
In a previous piece, I examined how containers are challenging existing models of security ownership, and asked, “How far to the right should shift-left go?”
Yet if our stack is increasingly software-defined, from the top almost to the bottom, then it should be up to developers to secure it top-down. On a recent episode of The Secure Developer podcast, Justin Somaini, a well-known security industry leader with experience at the likes of Yahoo! and Verisign, stated that he expected a third to half of a security team’s headcount to move from today’s process management roles into security-minded developer roles.
This brave new world of Cloud Native Application Security means moving people to the left as well as tools, but also extending developer ownership of security as far as software reaches. This does not present a risk to security professionals’ jobs: it is a change in reporting lines, and an enhancement of their skills—a genuine career development opportunity. On the hiring front, this opens up the option to recruit into security-relevant functions not only security talent, but also programming talent. This shift in ownership and role definition will make everyone’s lives easier.
In an earlier piece, I claimed that the increasing use of containers and cloud-native tools for enterprise applications introduces disruption to existing security models, in four main ways:
Breaking an application down into microservices means there is less central control over the content and update frequency of each part;
Packaging into a container happens within the developer workflow;
Providing developers with access to native APIs for tools such as Kubernetes increases the blast radius of any action; and
Using containers inherently introduces much more third-party (open source) code into a company’s proprietary codebase.
Shifting left? Not so soon
According to Gartner, by 2022 more than 75% of global organizations will be running containerized applications in production. On the other hand, Snyk’s container security report earlier this year found that there has been a surge in container vulnerabilities, while 80% of developers polled didn’t test their container images during development.
Other reports, such as those highlighted in a piece by Forbes contributor Tony Bradley, show a concerning level of short-sightedness by security engineering teams—while most of the worry seems to be around misconfigurations, which occur during the build stage, most of the investment focus seems to be around runtime, which in terms of lifecycle is inefficient if not simply too late.
Putting it all together, it’s clear that shifting security left by giving developers more ownership of the security of their own applications is a good thing—but how much ownership should or could they be given? And what kinds of tools and processes need to be in place for this transition to be a success?
A developer workstation secured by a storm trooper PHOTO BY LIAM TUCKER ON UNSPLASH
Andreessen, revisited
In 2011, Netscape pioneer and current VC Marc Andreessen famously said that software is eating the world. If everything is becoming software defined, security is no exception: recall when endpoint and network security used to be delivered by hardware appliances, not as software, as they are today. Application security is no different, especially since the people that know applications best, and that are best placed to keep up with their evolution, are developers.
Easier said than done, when successful developer adoption is such an art. Obviously, empowering developers to secure their applications means meeting them where they are without friction (close integration to the tools developers actually use)—while keeping them out of trouble with security engineering teams and others who are used to the “us & them” mentality of yesteryear.
However, we can break that down even further: to succeed, developer-led security tools should look and feel like developer tools that are a delight to use; embrace community contribution and engagement, even if they are not themselves open source; be easy to self-consume but also to scale in an enterprise; and of course, protect against what matters with the proper security depth and intelligence.
How far to the right should shift-left go?
A large part of the promise of cloud-native technologies for IT organizations is encapsulated in the phrase, “you build it, you run it.” Making security both more efficient and more effective is a huge opportunity that is at every technology executive’s door these days. To truly empower developers to own the security of their own applications, CSOs and CIOs should think about this in a broader sense: “you build it, you run it, and you secure it.”
What this means is giving developers the tools they need to both build and run secure containers, including monitoring Kubernetes workloads for vulnerabilities. In this case, Security or Operations could move to a role of policy creation, enforcement and audit—while application risk identification and remediation would sit within the developer’s workflow. The recent launch of developer-focused container-scanning solutions by Amazon Web Services (NASDAQ: AMZN), Google Cloud (NASDAQ: GOOGL) and others is a start.
Following this brave but existential choice will help enterprises to drastically reduce the risk inherent in growing their container infrastructure, and to efficiently scale security best practice. Wrapping top-of-the-line security with a coat of delightful developer experiences is something possible today, and a direction that is inevitable. As Rachel Stephens of analyst firm Redmonk has written, tools can play a lead role in culture change.
A story about the late Oracle co-CEO, Mark Hurd, who passed away last week at the young age of 62.
It’s somewhere in May 2008, in Düsseldorf, on the last day of a huge, week-long tradeshow where HP (where I worked from 2006 to 2014) had a massive presence. All staff (about 50 people across sales and marketing)—tired, hungover, very cynical—are summoned to the booth for 7am (!) to hear the new-ish CEO speak.
At this point, everyone expects some generic pep talk full of corporate Americanisms. The figure that steps in, with reading glasses at the end of his nose, in khakis and a blue blazer, looks like a cross between an accountant and an American football quarterback—fitting our very low expectations.
He takes the mic, looks at his notes, says good morning, and says that he wants us to do one thing today, and one thing only. He pauses, looks up, drops his notes.
To everyone’s pre-caffeine shock, he starts shouting: “I want you to sell, I want you to kill the competition today, kick their asses, don’t leave any opening”, and he goes on and on. I look around and see some of the most experienced, cynical salespeople I’ve ever known smiling like kids, on their feet, shouting back in encouragement.
I’ve been in the military and I’ve been in field business roles, and I’ve never seen that kind of instant transformation, or a crew as motivated and focused as the HP booth staff that day. A proper Henry V moment.
I’m sure everyone who worked for Hurd has informed or uninformed, good or bad opinions of how he was as a manager, an exec, and a CEO, but he’s gone now, so what we have are our stories.