Why Automation Engines Matter To Best-In-Breed DevOps Vendors

One of the oldest and most persistent debates in any field goes back to a famous line from the ancient Greek poet Archilochus, “a fox knows many things, but a hedgehog one important thing”—and to Isaiah Berlin’s equally famous philosophical-literary essay, “The Hedgehog and the Fox”. Technology is only one of many areas in which this paradigm is used (or overused, depending on your viewpoint) to frame industry standards and operating models.

Fox in the city

A good example of this is the growing tension in cloud-native technologies between tools that strive to be “best of breed” and those that position themselves as a “one-stop shop”. Vendors such as CircleCI for CI/CD, Datadog for monitoring, and of course Snyk for developer-first security all aim to be the absolute best at the (relatively) narrow task at hand. On the other side of the fence, vendors such as GitLab look to add functionality to cover a wider surface area.

There will always be a tradeoff on this route to simplifying systems such as procurement or IT management: the Hedgehogs of this world will never match the Foxes on narrow sets of tasks, but they compensate with simplicity and a unified interface.

The leading public cloud providers such as AWS and Azure are good examples of companies that, on the one hand, have the product realism to build rich ecosystems of compatible partner solutions, but on the other hand put economic incentives in place for those customers who want to reduce the number of suppliers or solutions they work with.

In this context, a new category of solutions is looking to create automation engines that connect best-in-breed solutions, providing at least a common technical interface for linking discrete but closely related tasks. Examples of this approach are GitHub Actions and Bitbucket Pipelines, which aim to offer developers a way to automate their software workflows—build, test, and deploy—right from their software repositories.
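As a concrete sketch of what such a workflow looks like, here is a minimal GitHub Actions configuration. The project layout and commands are illustrative assumptions (a Node.js project with an `npm test` suite), not taken from any vendor mentioned above:

```yaml
# .github/workflows/ci.yml — hypothetical project; the build/test commands are assumptions
name: CI
on: [push, pull_request]     # trigger on every push and pull request

jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4      # fetch the repository contents
      - uses: actions/setup-node@v4    # assumes a Node.js project
        with:
          node-version: '20'
      - run: npm ci                    # install pinned dependencies
      - run: npm test                  # run the test suite
```

The point of the format is that the repository itself carries the automation: any tool that speaks this interface can build, test, and deploy without a separate orchestration product.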

Puppet Relay: automating DevOps and cloud operations

A new and more operations-focused entrant into this space was announced late last month: Puppet Relay. If this is where configuration management veterans Puppet are going, it could indeed be big news. Relay is an event-, task-, or anything-driven automation engine that sits behind the other technologies an enterprise uses. It automates processes across the cloud infrastructure, tools, and APIs that developers, DevOps engineers, and SREs (site reliability engineers) typically manage manually. Interoperability and ease of integration appear to be major focus areas for Puppet.

According to former Cloud Foundry Foundation Executive Director (and longtime Pivot before that) Abby Kearns, who recently joined Puppet as CTO, there are tons of boxes in the datacenter that traditional Puppet customers want to automate with configuration management, but with the move to the cloud, the potential surface area for automation is much bigger. Puppet’s ambition is to capture the automation and orchestration of the cloud movement, with multiple use cases such as continuous deployment, automatic fault remediation, cloud operations and more.

Relay already integrates with some of the most important enterprise infrastructure vendors, including HashiCorp, ServiceNow, Atlassian, and Splunk. Stephen O’Grady, Principal Analyst with RedMonk, was quoted in Puppet’s press release as saying that the rise of DevOps has created a hugely fragmented portfolio of tools, and that organizations are looking for ways to automate and integrate the different touch points. “This is the opportunity that Relay is built for,” concluded O’Grady.

Automation market trends point to a specialized future

Market analysis and data support this trend. In its Market Guide for Infrastructure Automation Tools, Gartner estimated that by 2023, 60% of organizations will use infrastructure automation tools as part of their DevOps toolchains, improving application deployment efficiency by 25%. Similarly, in its 2019-2021 I&O Automation Benchmark Report, Gartner reported that 47% of infrastructure and operations executives are actively investing in infrastructure automation tools and plan to continue doing so in 2020.

Ultimately, taste in architecture means using the right tool for the right job, under the financial or organizational constraints that each of us might have. By providing a backbone to automate the cross-vendor workflow, the new breed of automation engines such as Puppet Relay could potentially tilt the scale towards specialized solutions, and change day-to-day development and operations.

(Originally posted on Forbes.com)
