Managed Kubernetes on more public clouds: good news, but also distracting


Joining the fray
One of the most predictable surprises in the public IaaS market recently has been DigitalOcean’s public launch of their own managed Kubernetes service, at KubeCon Europe last May. In my opinion, it was predictable for at least three reasons. First, customers. DigitalOcean is one of the largest and most prominent public-cloud providers, and an extremely user-focused company. Its users were asking for this capability, so that they could try out Kubernetes on their existing and favorite provider.

Second, competition. Since the announcement of Amazon’s EKS and Fargate in late 2017, we’ve seen a flurry of reaction from what I call “tier 1.5” and “tier-2” providers (compared to the top-3 of AWS, Azure and Google Cloud), such as IBM, Rackspace, Alibaba, T-Systems and literally dozens of others. Not playing this game would be detrimental to any provider’s competitive position in this rapidly-consolidating market.

Third, it just makes sense, if you’re following this market. IaaS companies build services that help users spin up more machines, and Kubernetes is no different. With an impressive pedigree from Google, a vibrant and strong community in the CNCF, and a tailwind from the top-3 clouds, Kubernetes is the dominant container orchestration technology. If your users are writing microservices apps, they’ll be using Kubernetes much more often than they will any alternatives.

Cloud 66 ♥ DigitalOcean (as well as all our other cloud provider partners)
Cloud 66 has had a long and fruitful partnership with DigitalOcean. On average, our joint users spin up hundreds of new “Droplets” a month to deploy their Rails, Node, or microservices apps—with many thousands more still running. Many of these joint customers even use Maestro, our own multi-cloud lifecycle management tool, itself backed by Kubernetes.

Is the abundance of managed Kubernetes services good news? Kind of. As we’ve said in a previous post, if you’re a cloud provider who is not enabling users to spin up Kubernetes clusters in 10 minutes through a friendly UI, then most likely another provider will.

More options are a good thing, but in our experience, some of these services are better (more robust, feature-rich, performant) than others—which will be important to production users. Also, a wealth of options is valuable if it helps you get the most out of the Kubernetes promise of portability. In this case, you will need to make sure that you use managed Kubernetes services that don’t lock you out of a multi-cloud or hybrid architecture (again, that is where tools like Maestro come in handy).

So yes, it can definitely be good news for the savvy user, but at the same time, it’s not really interesting news, because this is ultimately a solved problem, and a distraction from the greater operational challenges embedded in Kubernetes, which we’ve discussed in this post.

Dev-friendly can also be Ops-friendly
While it’s great that anyone can get a Kubernetes cluster on their cloud of choice with a few clicks, that cluster still lives in an ops-friendly environment (IaaS); the real challenge is adding a dev-friendly experience on top of that layer, one that operators can trust.

So if managed-public-Kubernetes is a given, what are the next problems? From our experience running thousands of customer workloads on containers, they are mostly around security (container and code vulnerabilities, runtime access, secrets management etc.); lifecycle management (multi-cloud deployment, leveraging stateful workloads, network management etc.); and container pipeline (delivery and deployment).

We’ve had to solve most of these issues ourselves over the years by developing tools, and have been offering some of them in our container toolchain and our open source products. With regard to the pipeline, your challenges might revolve around things like:

  • Building images in a container-native way, with CI tools that understand how your 20 services and 3 databases interact within one app;
  • Taking configuration and secrets management out of the developers’ hands (see the sketch after this list);
  • Curating an easily-maintainable set of configuration files, with version control and role-based access, for both external/off-the-shelf and complex internal services;
  • Creating a mechanism for devs to do one-click deployments to multiple Kubernetes environments;
  • and much more.
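
On the secrets point specifically, here is a minimal sketch of what “out of the developers’ hands” can look like in plain Kubernetes. The names (`app-secrets`, `web`), the image reference and the values are purely illustrative; in practice the Secret would be created by operators or a vault integration rather than committed to Git alongside the application code.

```yaml
# Owned by operators (or generated from a vault); developers never handle the values.
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DATABASE_URL: postgres://app:not-a-real-password@db.internal:5432/app
  API_TOKEN: not-a-real-token
---
# The Deployment that developers own only references the Secret by name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0
          envFrom:
            - secretRef:
                name: app-secrets
```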

In the end, an operator’s job is about thinking ahead to what will happen when scale comes. What won’t scale is bespoke manual processes, reliance on custom tech in a commoditizing market, lack of portability, processes that make people wait, rusty knobs that are not fit for the new purpose, or wobbly ones that don’t work first time, every time.

What will scale is tested infrastructure, repeatable and reliable automation, self-service for developers that operators can trust, abstracted workload portability, and above all, tools that facilitate the shared world view that Kubernetes mandates.

The latter list has driven our product development since we started Cloud 66, and is embedded into our tooling. Check out our container toolchain and our open source products as the best complement to whatever Kubernetes, hosted or on-prem, you are using.

Originally published on the Cloud 66 blog and then on LinkedIn. Reposted with permission.

Container Pipelines: The Next Frontier

Our “8 Components” blog post, which was written quickly over lunch before the holidays, has turned out to be an all-time favourite—we even turned it into an eBook, complete with our own war stories.

One of the areas highlighted in the post and eBook was the container pipeline, and by popular demand, in this post I will aim to expand a bit on why current pipelines don’t cut it, what a Container Deployment Pipeline (CDP) needs to cover, and how we approach this challenge.

A faster engine, but not enough fuel

The release of AWS’s managed Kubernetes (EKS) and abstraction layer (Fargate) in late November of last year was an important moment for Kubernetes and the CNCF, but not only because it helped further cement the orchestrator’s dominance. It also signalled to container platform vendors that the days of getting excited about Kubernetes-with-a-UI are effectively over.

But things are far from settled. Removing one bottleneck usually exposes a handful of “new” ones around it. To bring it back to containers: now that clusters will be moving much faster, what key components will slow us down?

We think an interesting and critical area to look at is the pipeline.

How to think about a pipeline

Broadly, there are three main approaches to a container pipeline today:

  1. Build a stack of open source projects and automation scripts. Low software costs, high operational cost. Can be difficult to maintain and scale.
  2. Pay someone to build no. 1 for you, or use a managed service. Can be expensive, but at least minimizes the hidden costs of the previous case.
  3. Use a hosted CI tool. Usually inexpensive, but typically does not “speak config”, which is essential in Kubernetes.

OwnStack and Managed OwnStack. Someone I used to work with once said, “in cloud, open source software often means closed-source operations; you’re locking yourself in to either your own practices, or to a cloud vendor”. I find this especially relevant for more fragmented and younger ecosystems, such as containers.

Example: a friend of mine works for a major financial institution, and is part of an all-star cloud engineer team. They use a complex set of open source tools and homemade automation scripts. Of one of these tools he said, “that project is small, and the maintainer sometimes goes off the grid and things can stay broken for a while, but we can handle it in-house”.

The question is, when it comes to scaling that anecdote up into production, how many companies can afford the talent to do the same? A fully open-source stack could prove prohibitively expensive (in terms of operational costs) to build, automate, scale and maintain—which is why many companies turn to SIs and MSPs for help in building that OwnStack. However, that is usually an expensive exercise, and it risks not being sustainable as Kubernetes and the CNCF landscape continue to evolve (and fast!). Users have told us that a managed OwnStack could cost upwards of 5x a product approach.

Specifically for pipelines, large-scale users have told us that an OwnStack is tough to set up and upgrade, to standardize between teams and regions, to secure (e.g. deal with credentials distribution), and, ultimately, to scale.

Hosted CI. Everyone has their way of doing things, and change can be painful—so the inclination to leverage friendly, existing tools with their familiar, old ways of doing things is very human. However, in some cases, the change is so meaningful that these trusted tools and practices start slowing us down.

Take build & test. I thought that this post broke down the problem very well. When my app is made of numerous unique elements, it becomes very difficult to test for issues before this complex, fragile structure is deployed. And because it can be rebuilt and redeployed within minutes, testing needs to happen across the lifecycle and take into account wildly different environments and substrates.

The emphasis shifts from the pre-deploy test to the ability to reiterate often and quickly within a well-controlled policy.

Time for a Container Deployment Pipeline

Kubernetes brings Devs and Ops closer together. To keep the complexity of the infrastructure from impacting development pace, a Container Deployment Pipeline (CDP) solution should make it efficient to maintain container delivery that stays consistent with the code.

To recap the CDP section of our eBook, a CDP needs to:

  • Understand that microservices require a pipeline-wide view, from Git to Kubernetes;
  • Automate that pipeline while providing advanced observability, security and policy management;
  • “Speak config”: automate the creation, control and versioning of production-minded config files for any environment (easy environment creation should mean easy operations!), as sketched below;
  • Be easily scalable and deployable across teams, clusters and regions.
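
As a hypothetical illustration of “speaking config”, the sketch below uses Kustomize, one of several tools that can play this role; the file layout, names and values are invented. The idea is a version-controlled base manifest plus a small, reviewable overlay per environment, instead of hand-edited copies of YAML.

```yaml
# base/kustomization.yaml (the shared, version-controlled definition of the app)
resources:
  - deployment.yaml
  - service.yaml

# overlays/production/kustomization.yaml (the entire “production-specific” diff)
resources:
  - ../../base
replicas:
  - name: web
    count: 6
images:
  - name: registry.example.com/web
    newTag: "1.4.2"   # pinned per release and bumped by the pipeline, not by hand
```

Rendering and applying an environment then becomes a single repeatable step (for example, `kubectl apply -k overlays/production`), which is what turns easy environment creation into easy operations.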

Here’s a graphic representation of what a CDP should cover (functions also available in common CI tools are marked by a green asterisk):

All of that is covered by our pipeline, Cloud 66 Skycap, which has exciting new features on the way for even more automation, governance and flexibility.

Sign up for a free, full-functionality 14-day trial here.

Lastly, come talk to us at KubeCon (promo code included!) or at these conferences: CloudFest // CloudExpo // RailsConf.

Originally published on the Cloud 66 blog and then on LinkedIn.

Cloud-native transformation: containers in the enterprise

A few weeks ago, I attended a talk about moving to microservices, in which the speakers, both of them DevOps program/delivery managers at a large online retailer, mentioned a rule they had. Tooling, they said, was not the answer to a DevOps culture problem, but tools can be effective in slowing down or accelerating that very human effort.

This probably resonates well with anyone trying to drive technological change within an organization, and even more so with regards to cloud-native transformation in the enterprise, by which I mean the move to a microservices-based architecture and container-based infrastructure. It’s the right tools, that support the right human expertise, that get the job done.

Take delivering code to production as an example. In the old world (i.e., before containers…), CI tools took code from a repo, delivered an artefact, and then initiated some testing (essentially, using a framework-specific set of commands to execute a user-defined test). Then a set of scripts would configure servers, databases and load balancers, everything would rev up, and off you went. “Simple”, said nobody, ever; that is, at least until containers came along!

Now though, your app is broken down into parts, and those parts move quickly. And by the way, your code still lives in your Git repo, but it also ships as images from a Docker registry, so both need complex version control mechanisms, traceability/audit, and orchestration in order to be herded into production.
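
One common way to keep the Git and registry views of a release in sync is to tag every image with the commit that produced it. A minimal, hypothetical CI step might look like the sketch below; the registry and image names are invented.

```bash
#!/usr/bin/env sh
set -eu

# Tie the image to the exact commit that produced it, for traceability/audit.
GIT_SHA="$(git rev-parse --short HEAD)"
IMAGE="registry.example.com/myapp/web"

docker build -t "${IMAGE}:${GIT_SHA}" .
docker push "${IMAGE}:${GIT_SHA}"

# Downstream manifests reference this immutable tag, so whatever runs in
# production can always be traced back to a single commit in the repo.
echo "deployable image: ${IMAGE}:${GIT_SHA}"
```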

In delivery, many tools still deliver the same functionality as before, adapted for containers, but this isn’t enough anymore if you want to get from point C (images built) to point Z (production running). What about binding artefacts to the application as services? What about reducing runtime for efficiency but also IP protection? What about automated stop/start and redeployment hooks? What about observability for this complex system?
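
On the point about reducing what ships at runtime, one standard technique is a multi-stage build, sketched below with invented names and paths: source code, dev dependencies and build tooling stay in the builder stage, and only what the app needs to run ends up in the final image.

```dockerfile
# Builder stage: full toolchain, dev dependencies and the source tree.
FROM node:lts AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --production   # assumes a build script that emits dist/

# Runtime stage: only the built assets and production dependencies.
FROM node:lts-slim
ENV NODE_ENV=production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
USER node
CMD ["node", "dist/server.js"]
```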

In deployment, there is a rising class of CaaS tools (see our post here for how that term can be mis-used), but they mostly focus on getting Kubernetes clusters up and running quicker than through a command line. But infrastructure is only as good as its support of the app layer: what about dynamic scaling? What about non-container workloads like databases? What about firewall management? What about control over what gets deployed where?

Cloud 66 has been running containers in production since 2012, so we know what the production issues are. We’ve written a white paper to encapsulate what we think you can consider along this journey to production. If you want to learn more about container-native CI/CD and pipelines, click below.

Lastly, if you’re an SI or MSP, come talk to us in person about your views and experiences at AWS re:Invent in Las Vegas, NV, and KubeCon in Austin, TX. [Contact us](mailto:partners@cloud66.com) to book a meeting.

Originally published on the Cloud 66 blog and then on LinkedIn. Reposted with permission.

CaaS: Hope or Hype?

In searching for inspiration for this post, I decided to bottom out an old suspicion of mine. After a brief but brave foray into the meme rabbit holes of the internet, I can confirm that indeed, “Everything-as-a-Service” remains remarkably under-utilized as a buzzword. Something to strive for, I guess.

A shortage of memes might be behind the proliferation of the “-aaS” suffix in various areas. For example, let’s talk about Containers-as-a-Service, or “CaaS”. If containers are infrastructure, why do we need a new acronym, rather than using IaaS? And if your answer is “because IaaS is for VMs”, then why don’t we have VMaaS for virtual machines, BMaaS for bare metal, ARMaaS for ARM servers, and so forth?

So, is it just marketing hype, of the same variety that promotes “serverless” when most enterprise workloads are not even on the public cloud (yet), and those running on containers in production are just starting to emerge?

Or is one reason that actually, containers are different? It’s great to have these ephemeral little Docker things, but to be usable they need orchestration, and tooling on top that is built for purpose, bridges Devs and Ops, and considers the architectural differences between the world of apps as we have known it and the world of microservices. And in this case, what is the scope of a CaaS offering?

For any cloud provider that isn’t in the top-5 bracket, this isn’t just an amusing hypothetical question. As clouds like AWS and GCP add services and capabilities around containers, and expand into verticals and regions that were once strongholds for these regional providers, this becomes existential.

So let’s ask again, more accurately: what could CaaS mean, as an effective competitive tool, for your medium-sized cloud provider? The annoyingly obvious answer is, “it depends” on the provider’s overall competitive strategy and how containers fit into it. I see broadly two approaches.

For some, just having a UI on Kubernetes, or a minimal integration with a variety of open source services, would cover the urgent and visible need. This approach will likely run into problems once your usage goes from an experiment to production workloads at scale. There’s a reason why Google, the originator of Kubernetes, has a managed offering—they understand the complexity! The top clouds will no doubt be waiting patiently for these scenarios to unfold, taking advantage of how little value this approach adds for customers.

For other providers, this will be about finding a way to emphasize their USPs. For example:

  • Features that support specific use cases, such as compliance with financial or data sovereignty regulation (this is where I get to namedrop GDPR): e.g., observability, security permissions and firewall management.
  • Functionality that supports/encourages a hybrid environment (multi-tenant/single-tenant/on-prem and server/VM/container): e.g., hosted, dedicated and on-prem installations.
  • Capabilities that don’t just match those of GKE/ACS/ECS, but look to go beyond them.

This is a more thoughtful approach to product strategy in general, and as such, will likely require long-term thinking and commitment from the provider. It will also probably deliver much more value to you, the enterprise user.

Cloud 66 has been running its own stack on containers in production since 2012, and since then we (pragmatically) swapped out our own orchestration engine for Kubernetes. We come from the cloud-native production angle, and have run into many of the problems most enterprises haven’t yet considered. Most importantly for providers, working with partners and providers is core to our strategy.

Find out more about Cloud 66 Skycap, our container-native continuous deployment pipeline, and Cloud 66 Maestro, our application lifecycle management tool, or contact us at hello@cloud66.com.

Originally published on the Cloud 66 blog and then on LinkedIn. Reposted with permission.