Managed Kubernetes on more public clouds: good news, but also distracting


Joining the fray
One of the most predictable surprises in the public IaaS market recently has been DigitalOcean’s public launch of their own managed Kubernetes service, at KubeCon Europe last May. In my opinion, it was predictable for at least three reasons. First, customers. DigitalOcean is one of the largest and most prominent public-cloud providers, and an extremely user-focused company. Its users were asking for this capability, so that they could try out Kubernetes on their existing and favorite provider.

Second, competition. Since the announcement of Amazon’s EKS and Fargate in late 2017, we’ve seen a flurry of reaction from what I call “tier 1.5” and “tier-2” providers (compared to the top-3 of AWS, Azure and Google Cloud), such as IBM, Rackspace, Alibaba, T-Systems and literally dozens of others. Not playing this game would be detrimental to any provider’s competitive position in this rapidly-consolidating market.

Third, it just makes sense, if you’re following this market. IaaS companies build services that help users spin up more machines, and Kubernetes is no different. With an impressive pedigree from Google, a vibrant and strong community in the CNCF, and tailwind from the top-3 clouds, Kubernetes is the dominant container orchestration technology. If your users are writing microservices apps, they’ll be using Kubernetes much more often than they will any alternatives.

Cloud 66 ♥ DigitalOcean (as well as all our other cloud provider partners)
Cloud 66 has had a long and fruitful partnership with DigitalOcean. On average, our joint users spin up hundreds of new “Droplets” a month to deploy their Rails, Node, or microservices apps—with many thousands more still running. Many of these joint customers even use Maestro, our own multi-cloud lifecycle management tool, itself backed by Kubernetes.

Is the abundance of managed Kubernetes services good news? Kind of. As we’ve said in a previous post, if you’re a cloud provider who is not enabling users to spin up Kubernetes clusters in 10 minutes through a friendly UI, then most likely another provider will.

Having more options is a good thing, but in our experience, some of these services are better (more robust, feature-rich, performant) than others—which will be important to production users. A wealth of options is also valuable if it helps you get the most out of the Kubernetes promise of portability. In that case, you will need to make sure that the managed Kubernetes services you use don’t lock you out of a multi-cloud or hybrid architecture (again, that is where tools like Maestro come in handy).

So yes, it can definitely be good news for the savvy user, but at the same time, it’s not especially interesting news: this is ultimately a solved problem, and a distraction from the greater operational challenges embedded in Kubernetes, which we’ve discussed in this post.

Dev-friendly can also be Ops-friendly
While it’s great that anyone can now get a Kubernetes cluster on their cloud of choice with a few clicks, that cluster still lives in an ops-friendly environment (IaaS); the real challenge is adding a dev-friendly experience on top of that layer, one that operators can trust.

So if managed public Kubernetes is a given, what are the next problems? From our experience running thousands of customer workloads on containers, they are mostly around security (container and code vulnerabilities, runtime access, secrets management, etc.); lifecycle management (multi-cloud deployment, running stateful workloads, network management, etc.); and the container pipeline (delivery and deployment).

We’ve had to solve most of these issues ourselves over the years by developing tools, and we offer some of them in our container toolchain and our open source products. With regard to the pipeline, your challenges might revolve around things like:

  • Building images in a container-native way, with CI tools that understand how your 20 services and 3 databases interact within one app;
  • Taking configuration and secrets management out of the developers’ hands (see the sketch after this list);
  • Curating an easily-maintainable set of configuration files, with version control and role-based access, for both external/off-the-shelf and complex internal services;
  • Creating a mechanism for devs to do one-click deployments to multiple Kubernetes environments;
  • and much more.
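
As a minimal sketch of the configuration and secrets point above (the secret name, namespace and Vault path are hypothetical), one common pattern is for an operator-controlled step to create Kubernetes secrets, so that application manifests only reference them by name and developers never handle the raw values:

    # Hypothetical: an ops-owned pipeline step materializes the secret in the cluster...
    kubectl create secret generic payments-db-credentials \
      --namespace production \
      --from-literal=username=payments \
      --from-literal=password="$(vault kv get -field=password secret/payments-db)"

    # ...and developer-facing manifests only reference "payments-db-credentials"
    # (e.g. via secretKeyRef), so the raw values never live in the app repo.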

In the end, an operator’s job is to think about what will happen when scale comes. What won’t scale is bespoke manual processes, reliance on custom tech in a commoditizing market, lack of portability, processes that make people wait, rusty knobs that are not fit for the new purpose, or wobbly ones that don’t work first time, every time.

What will scale is tested infrastructure, repeatable and reliable automation, self-service for developers that operators can trust, abstracted workload portability, and above all, tools that facilitate the shared world view that Kubernetes mandates.

The latter list has driven our product development since we started Cloud 66, and is embedded into our tooling. Check out our container toolchain and our open source products as the best complement to whatever Kubernetes, hosted or on-prem, you are using.

Originally published on the Cloud 66 blog and then on LinkedIn. Reposted with permission.

Container Pipelines: The Next Frontier

Our “8 Components” blog post, which was written quickly over lunch before the holidays, has turned out to be an all-time favourite—we even turned it into an eBook, complete with our own war stories.

One of the areas highlighted in the post and eBook was the container pipeline, and by popular demand, in this post I will aim to expand a bit on why current pipelines don’t cut it, what a container deployment pipeline (CDP) needs to cover, and how we approach this challenge.

A faster engine, but not enough fuel

The release of AWS’s managed Kubernetes (EKS) and abstraction layer (Fargate) in late November of last year was an important moment for Kubernetes and the CNCF, but not only because it helped further cement the orchestrator’s dominance. It also signalled to container platform vendors that the days of getting excited about Kubernetes-with-a-UI are effectively over.

But things are far from settled. Removing one bottleneck usually exposes a handful of “new” ones around it. To bring it back to containers: now that clusters will be moving much faster, what key components will slow us down?

We think an interesting and critical area to look at is the pipeline.

How to think about a pipeline

Broadly, there are three main approaches to a container pipeline today:

  1. Build a stack of open source projects and automation scripts. Low software costs, high operational cost. Can be difficult to maintain and scale.
  2. Pay someone to build no. 1 for you, or use a managed service. Can be expensive, but at least minimizes hidden costs of the previous case.
  3. Use a hosted CI tool. Usually inexpensive, but usually does not “speak config”, which is essential in Kubernetes.

OwnStack and Managed OwnStack. Someone I used to work with said once, “in cloud, open source software often means closed-source operations; you’re locking yourself in to either your own practices, or to a cloud vendor”. I find this is especially relevant for more fragmented and younger ecosystems, such as containers.

Example: a friend of mine works for a major financial institution, and is part of an all-star cloud engineer team. They use a complex set of open source tools and homemade automation scripts. Of one of these tools he said, “that project is small, and the maintainer sometimes goes off the grid and things can stay broken for a while, but we can handle it in-house”.

The question is, when it comes to scaling that anecdote up into production, how many companies can afford the talent to do the same? A fully open-source stack could prove prohibitively expensive (in operational costs) to build, automate, scale and maintain—which is why many companies turn to SIs and MSPs for help in building that OwnStack. However, that is usually an expensive exercise, and it risks not being sustainable as Kubernetes and the CNCF landscape continue to evolve (and fast!). Users have told us that a managed OwnStack could cost upwards of 5x the cost of a product approach.

Specifically for pipelines, large-scale users have told us that an OwnStack is tough to set up and upgrade, to standardize between teams and regions, to secure (e.g. handling credentials distribution), and, ultimately, to scale.

Hosted CI. Everyone has their way of doing things, and change can be painful—so the inclination to leverage friendly, existing tools with their familiar, old ways of doing things is very human. However, in some cases, the change is so meaningful that these trusted tools and practices start slowing us down.

Take build & test. I thought that this post broke down the problem very well. When my app is made of numerous unique elements, it becomes very difficult to test for issues before this complex, fragile structure is deployed. What’s more, it can be rebuilt and redeployed within minutes, which means testing needs to happen across the lifecycle, and take into account wildly different environments and substrates.

The emphasis shifts from the pre-deploy test to the ability to iterate often and quickly within a well-controlled policy.

Time for a Container Deployment Pipeline

Kubernetes brings Devs and Ops closer together. To keep the complexity of the infrastructure from slowing down development, a Container Deployment Pipeline (CDP) solution should make it efficient to maintain container delivery that stays consistent with the code.

To recap the CDP section of our eBook, a CDP needs to:

  • Understand that microservices require a pipeline-wide view, from Git to Kubernetes;
  • Automate that pipeline while providing advanced observability, security and policy management;
  • “Speak config”: automate the creation, control and versioning of production-minded config files for any environment (easy environment creation should mean easy operations; see the sketch after this list);
  • Be easily scalable and deployable across teams, clusters and regions.
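
As a minimal sketch of what “speaking config” can look like in practice (the repository, file names and cluster contexts below are hypothetical), per-environment config files live in version control and reach the cluster only through reviewable changes:

    # Hypothetical layout: one Git repository holding rendered Kubernetes
    # manifests per environment, with role-based access enforced by the repo.
    git clone git@example.com:acme/k8s-config.git
    cd k8s-config
    ls environments/
    #   staging/    production/

    # A change is a reviewable commit, not a hand-edited file on a cluster.
    git checkout -b bump-api-image
    sed -i 's|api:1.4.2|api:1.4.3|' environments/staging/api-deployment.yaml
    git commit -am "Bump API image to 1.4.3 in staging"

    # After review, the same files are applied to the matching cluster context.
    kubectl --context staging apply -f environments/staging/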

Here’s a graphic representation of what a CDP should cover (functions also available in common CI tools are marked by a green asterisk).

All of that is covered by our pipeline, Cloud 66 Skycap, which has exciting new features on the way for even more automation, governance and flexibility.

Sign up for a free, full-functionality 14-day trial here.

Lastly, come talk to us at KubeCon (promo code included!) or at these conferences: CloudFest // CloudExpo // RailsConf.

Originally published on the Cloud 66 blog and then on LinkedIn.

CaaS in 2018: it’s a race


[Content warning: 2018 prediction!]

Three weeks post-Reinvent (and a month after posting my thoughts on what CaaS actually means), Google Cloud’s annual summary further underlines, in my opinion, just how fast the top clouds are moving.

It’s clear that all top clouds have or will have a Kubernetes-based managed service, which will go beyond the clusters and aim to leverage their services portfolio. What about smaller players, who don’t have that firepower or that catalog?

When it comes to containers on the public cloud, my bet is that most other providers will take one of the following three approaches:

1. Ignore containers and let customers play around with standard/upstream Kubernetes inside your VMs/servers. Few will take it beyond the PoC level; those that want production solutions will move on. The cloud keeps hold of a small but loyal customer base thanks to niche expertise (premium bare metal, extreme elasticity, location, etc.).

2. Build your own Kubernetes-based (or–gasp!–other technology) container engine, simple to use and infra-focused like the rest of the cloud. Beta in July, GA in October, maybe another GA/stable release in December—by which time Fargate, ACS and GKE have carved up the market and educated users that “infra isn’t interesting—it’s about app management”.

3. Cut time to market by working with an ISV who has a proven solution that you can embed, and which, capability-wise, may enable you to go beyond merely matching the GKEs of the world—stateful services, hybrid environments, DNS discovery… Literally a handful of these exist. (Spoiler: Cloud 66 is one of them.)

It promises to be a very exciting year for anyone in the container space, as vast new opportunities open up for enterprises to rethink their IT architecture and delivery. But with the widening of the market for enterprises, in my view the window for IaaS providers to make a claim in it is not that wide at all.

Original post

Cloud-native transformation: containers in the enterprise

A few weeks ago, I attended a talk about moving to microservices, in which the speakers, both of them DevOps program/delivery managers at a large online retailer, mentioned a rule they had. Tooling, they said, was not the answer to a DevOps culture problem, but tools can be effective in slowing down or accelerating that very human effort.

This probably resonates well with anyone trying to drive technological change within an organization, and even more so with regards to cloud-native transformation in the enterprise, by which I mean the move to a microservices-based architecture and container-based infrastructure. It’s the right tools, that support the right human expertise, that get the job done.

Take delivering code to production as an example. In the old world (i.e., before containers…), CI tools took code from a repo, delivered an artefact, and then initiated some testing (essentially, using a framework-specific set of commands to execute a user-defined test). Then a set of scripts would configure the servers, the databases and load balancers would rev up, and off you went. “Simple”, said nobody, ever; that is, at least until containers came along!

Now though, your app is broken down into parts, and those parts move quickly, and by the way, your code still lives in your repo but also in a Docker repo, so they all need complex version control mechanisms, traceability/audit, and orchestration in order to be herded into production.

In delivery, many tools still deliver the same functionality as before, adapted for containers, but this isn’t enough anymore if you want to get from point C (images built) to point Z (production running). What about binding artefacts to the application as services? What about reducing runtime for efficiency but also IP protection? What about automated stop/start and redeployment hooks? What about observability for this complex system?

In deployment, there is a rising class of CaaS tools (see our post here for how that term can be misused), but they mostly focus on getting Kubernetes clusters up and running quicker than through a command line. But infrastructure is only as good as its support of the app layer: what about dynamic scaling? What about non-container workloads like databases? What about firewall management? What about control over what gets deployed where?

Cloud 66 has been running containers in production since 2012, so we know what the production issues are. We’ve written a white paper to encapsulate what we think you can consider along this journey to production. If you want to learn more about container-native CI/CD and pipelines, click below.

Lastly, if you’re an SI or MSP, come talk to us in person about your views and experiences at AWS Re:invent in Las Vegas, NV, and KubeCon in Austin, TX. Contact us at partners@cloud66.com to book a meeting.

Originally published on the Cloud 66 blog and then on LinkedIn. Reposted with permission.

CaaS: Hope or Hype?

In searching for inspiration for this post, I decided to bottom out an old suspicion of mine. After a brief but brave foray into the meme rabbit holes of the internet, I can confirm that indeed, “Everything-as-a-Service” remains remarkably under-utilized as a buzzword. Something to strive for, I guess.

A shortage of memes might be behind the proliferation of the “-aaS” suffix in various areas. For example, let’s talk about Containers-as-a-Service, or “CaaS”. If containers are infrastructure, why do we need a new acronym, rather than using IaaS? And if your answer is “because IaaS is for VMs”, then why don’t we have VMaaS for virtual machines, BMaaS for bare metal, ARMaaS for ARM servers, and so forth?

So, is it just marketing hype, of the same variety that promotes “serverless” when most enterprise workloads are not even on the public cloud (yet), and those running on containers in production are just starting to emerge?

Or is one reason that actually, containers are different? It’s great to have these ephemeral little Docker things, but to be usable they need orchestration, and tooling on top that is built-for-purpose, bridges Devs and Ops, and considers the architectural differences between the world of apps until now and microservices. And in this case, what is the scope of a CaaS offering?

For any cloud provider that isn’t in the top-5 bracket, this isn’t necessarily an amusing hypothetical question. As clouds like AWS and GCP add services and capabilities around containers, and expand into verticals and regions that were once strongholds for these regional providers, this becomes existential.

So let’s ask again, more accurately: what could CaaS mean, as an effective competitive tool, for your medium-sized cloud provider? The annoyingly obvious answer is, “it depends” on the provider’s overall competitive strategy and how containers fit into it. I see broadly two approaches.

For some, just having a UI on Kubernetes, or a minimal integration with a variety of open source services, would cover the urgent and visible need. This approach will likely run into problems once your usage goes from an experiment to production workloads at scale. There’s a reason why Google, the originator of Kubernetes, has a managed offering—they understand the complexity! The top clouds will no doubt be waiting patiently for these scenarios to unfold, taking advantage of this approach’s lack of added value for customers.

For other providers, this will be about finding a way to emphasize their USPs. For example:

  • Features that support specific use cases, such as compliance with financial or data sovereignty regulation (this is where I get to namedrop GDPR): e.g., observability, security permissions and firewall management.
  • Functionality that supports/encourages a hybrid environment (multi-tenant/single-tenant/on-prem and server/VM/container): e.g., hosted, dedicated and on-prem installations.
  • Capabilities that don’t just match those of GKE/ACS/ECS, but look to go beyond them.

This is a more thoughtful approach to product strategy in general, and as such, will likely require long-term thinking and commitment from the provider. It will also probably deliver much more value to you, the enterprise user.

Cloud 66 has been running its own stack on containers in production since 2012, and we have since (pragmatically) swapped out our own orchestration engine for Kubernetes. We come from the cloud-native production angle, and have run into many of the problems most enterprises haven’t yet considered. Most importantly for providers, working with partners and providers is core to our strategy.

Find out more about Cloud 66 Skycap, our container-native continuous deployment pipeline, and Cloud 66 Maestro, our application lifecycle management tool, or contact us at hello@cloud66.com.

Originally published on the Cloud 66 blog and then on LinkedIn. Reposted with permission.

New beginnings in an exciting growth area

After over three years at Canonical, I’ve decided to join Cloud 66 as VP of Sales & Business Development. I start today!

In a way, this continues my transition from the hardware level, to the operating system, and now to the software that sits on top. Put another way, I moved from server environments, to virtualization, and now to containers.

Working at HP at the beginning of this decade, I powered through the Windows 8 launch and the emergence of cloud-focused form factors such as the Chromebook and the Surface. Wanting to be part of that transition, I looked for software companies on the bleeding edge of cloud.

I was lucky to have been hired into Canonical by Chris Kenyon in mid-2014, and to have had such amazing experiences growing the public cloud business exponentially. Apart from our business achievements, and Ubuntu’s continued dominance in every scale-out architecture, I have to say it’s rare to find a company full of people who, despite being so talented, are generally still so nice to work with.

But I digress. One of the strongest currents in IT in the past few years has been the emergence of containers as a viable alternative to virtualization. But the industry is only at the start of this journey, with not many companies running containers in production. At the risk of contributing to over-use of the vending machine analogy, I like to classify the marketplace this way:

  • Vendors that enable customers to build great shelves, or perhaps even build the shelves for them. In other words, they are focused on container infrastructure, perhaps with a sleek UI/installer, but not on the app. Still a fragmented and not very enterprise-y space with lots of DIY stacks. Devs rejoice; Ops worry.
  • Vendors that deliver a fully-stocked vending machine to customers, leaving them no choice in what gets sold through it. In other words, these are the highly opinionated/structured PaaS vendors, as well as GKE, ECS, and ACS. Devs feel constrained; Ops might worry about lock-in but hey, at least the thing runs reliably.
  • Vendors that deliver machines continuously specified by Ops, used by Devs, and managed by the vendors. In other words, this isn’t just about bringing up clusters, but also about how to deploy apps to that infra, and how to keep it all running intelligently at scale—with native databases, networking, firewall options and a good balance of flexibility and governance. This is what will need to happen for Kubernetes to be the new vSphere. This is where Cloud 66 shines.

At Cloud 66, Khash, Vic and their team have been providing solutions for this problem for years, and with the end-to-end container stack of Cloud 66 Skycap and Cloud 66 Maestro, backed by Kubernetes, I believe we are positioned extremely well to accelerate container adoption in production, at scale, in the enterprise. We will be looking for partners on this journey:

  • System Integrators and Cloud/DevOps Consultancies that want to complement their services with a proven product set.
  • IaaS providers who want to add an integrated, feature-rich container engine to their compute business.

Contact us at partners@cloud66.com, or DM me on LinkedIn. Request a demo here!

Original post

Ubuntu on AWS gets serious performance boost with AWS-tuned kernel


Canonical and Amazon Web Services have been working closely together to create the best experience of the world’s most popular cloud OS, on the world’s most popular public cloud. Official Ubuntu guest images have been available on AWS for years, and underlie the majority of workloads on the service—whether you use the EC2 Quickstart, Marketplace, or Lightsail. This week, and for the first time on the public cloud, Canonical, in collaboration with Amazon, is delighted to announce the availability of an AWS-tuned Ubuntu kernel for the Ubuntu 16.04 LTS release.

Thanks to our public cloud and kernel teams, as of March 29th, Ubuntu Cloud Images for Amazon have been enabled with the AWS-tuned Ubuntu kernel by default. The AWS-tuned Ubuntu kernel will receive the same level of support and security maintenance as all supported Ubuntu kernels for the duration of the Ubuntu 16.04 LTS.

The kernel itself is provided by the linux-aws kernel package.  The most notable highlights for this kernel include:

  • Up to 30% faster kernel boot speeds, on a 15% smaller kernel package
  • Full support for Elastic Network Adapter (ENA), including the latest driver version 1.1.2, supporting up to 20 Gbps network speeds for ENA instance types (currently I3, P2, R4, X1, and m4.16xlarge)
  • Improved i3 instance class support with NVMe storage disks under high IO load
  • Increased I/O performance for i3 instances
  • Improved instance initialization with NVMe backed storage disks
  • Disabled CONFIG_NO_HZ_FULL to eliminate deadlocks on some instance types
  • Resolved CPU throttling with AWS t2.micro instances

Any Ubuntu 16.04 LTS image brought up from EC2 Quickstart or AWS Marketplace on March 29th or later will be running on this AWS-tuned kernel. You can also query the EC2 API to confirm that these AMIs are ENA-enabled:
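
For example, with the AWS CLI (a sketch; replace the placeholder AMI ID with the image you are checking):

    # Returns true if the AMI is ENA-enabled.
    aws ec2 describe-images \
      --image-ids ami-xxxxxxxx \
      --query 'Images[0].EnaSupport'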

Instances using the AWS-tuned Ubuntu kernel will, of course, be supportable through Canonical’s Ubuntu Advantage service, available for purchase on an hourly metered basis on the AWS Marketplace (Standard or Advanced tiers), or at an annual price in our Shop. The AWS-tuned Ubuntu kernel does not support the Canonical Livepatch Service at the time of this announcement, but an investigation is underway to evaluate delivery of this service for users of the AWS-tuned Ubuntu kernel.

If, for now, you prefer stability over speed, you can still get going with Livepatch by reverting to the old kernel, using the following commands:
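
A sketch of the general approach (assuming the standard linux-virtual and linux-aws package names on Ubuntu 16.04 LTS; verify them on your instance first):

    # Install the generic virtual kernel, remove the AWS-tuned packages,
    # then reboot into the previous kernel.
    sudo apt-get update
    sudo apt-get install linux-virtual
    sudo apt-get purge linux-aws linux-image-aws linux-headers-aws
    sudo reboot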

Watch this space for more developments from Amazon and Canonical throughout the year, as we continue to optimize performance on a host of AWS services, and to make it easy to deploy and operate complex workloads in production.

Originally posted on the Ubuntu Insights blog. Reposted with permission.

Canonical and AWS partner to deliver world-class support in the cloud


In today’s software world, support is often an afterthought or an expensive contract used only to keep up with the latest patches, updates, and versions. Hidden costs of upgrading software, including downtime, scheduling, and planning, also need to be considered. Canonical does not believe the traditional norms of support apply. Our leading support product, Ubuntu Advantage (UA), is a professional support package that provides Ubuntu users with the backing needed to be successful.

As many of you have read on the AWS Blog, this week at AWS’ Re:invent 2016 conference we announced the ability to purchase UA Virtual Guest via the AWS Marketplace. Ubuntu Advantage Virtual Guest is designed for virtualized enterprise workloads on AWS which use official Ubuntu images. The tooling, technology, and expertise of UA are available via the AWS Marketplace with just a few clicks. It includes:

  • Access to Landscape (SaaS version), the systems management tool for using Ubuntu at scale
  • Canonical Livepatch Service, which allows users to apply critical kernel patches without rebooting on Ubuntu 16.04 LTS images using the Linux 4.4 kernel
  • Up to 24×7 telephone and web support and the option of a dedicated Canonical support engineer
  • Access to the Canonical Knowledge Hub and regular security bug fixes

Further, the added benefits of accessing Ubuntu Advantage through the AWS Marketplace SaaS model are hourly pricing based on customers’ actual Ubuntu usage on AWS and their SLA requirements, plus centralized billing through the user’s AWS Marketplace account. Customers pay for what they consume within their account, no more.

Innovation and leadership on display at Re:invent 2016

The ability to buy UA through the AWS Marketplace is just the beginning. At Re:invent we will be showcasing many of our solutions that support Big Software, including:

Containers are changing how software is deployed and operated. Canonical is actively innovating around containers with our machine container solution LXD, which provides the density and efficiency of containers with the manageability and security of virtual machines, as well as through enhanced partnerships with Docker, the CNCF and others around process-container orchestration. Finally, our Canonical Distribution of Kubernetes provides a ‘pure K8s’ experience across any cloud.

Juju for service modeling and Charms to make software deployments painless. Juju is an open source service modeling platform that makes it easy to deploy and operate complex, interlinked, dynamic software stacks. Juju has hundreds of preconfigured services, called Charms, available in the Juju store. For example, Juju makes it easy to stand up and scale up or down Hadoop, Kubernetes, Ceph, MySQL, etc., all without disruption to the cloud environment.
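
To give a flavour of what that looks like in practice (the charm chosen here is just an illustrative example from the store):

    # Bootstrap a controller on AWS, deploy a charm, then scale it out.
    juju bootstrap aws
    juju deploy mysql
    juju add-unit mysql -n 2    # add two more units without disrupting the rest of the model
    juju status                 # watch the units come up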

Snaps for product interoperability and enablement. Snaps are a new packaging format used to securely bundle any software as an app, making updates and rollbacks simple. A snap is a fancy zip file containing an application together with its dependencies, and a description of how it should be safely run on your system, especially the different ways it should talk to other software. Most importantly, snaps are secure, sandboxed, containerised applications, isolated from the underlying system and from other applications. Snaps allow the safe installation of apps from any vendor on mission-critical devices and desktops. Canonical’s Ubuntu Core is the leading open source snap-enabled production operating system, powering everything from robots, drones and industrial IoT gateways to network equipment, digital signage, mobile base stations, refrigerators, and more.
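
For a taste of the workflow (using the public hello-world demo snap), installing, updating and rolling back are one command each:

    sudo snap install hello-world   # install a sandboxed app from the store
    sudo snap refresh hello-world   # move to the latest published revision
    sudo snap revert hello-world    # roll back to the previously installed revision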

Even as the cost of software has declined, the expense of operating today’s complex and distributed solutions has increased, as many companies have found themselves managing these systems in a vacuum. Even for experts, deploying and operating containers and Kubernetes at scale can be a daunting task. However, deploying Ubuntu, Juju for software modeling, and Canonical’s Kubernetes distribution helps organizations simplify deployment. Further, we have certified our distribution of Kubernetes to work with most major public clouds as well as on-premises infrastructure such as VMware or bare metal via Metal as a Service (MAAS), thereby eliminating many of the integration and deployment headaches.

Most of these solutions can be used and deployed in production with your AWS EC2 credentials today. What’s more, they are supported with professional SLAs from Canonical. We are also looking for innovative ISVs and forward-thinking systems integrators to help us drive value for our customers and bring compelling solutions to market.

At AWS Re:invent 2016, we will be talking about all this and more at booth 2341 in Hall D.

Originally posted on the Ubuntu Insights blog. Reposted with permission.

Running Ubuntu VMs in Germany—without trading off security or data sovereignty

Data sovereignty is one of the hot topics in cloud computing. While US-based public cloud providers—such as AWS, Microsoft Azure, IBM and Google Cloud Platform—have continued to solidify their dominance both at home and abroad, 2016 has also seen an awakening in Europe.

At CeBIT in March of this year, T-Systems, a subsidiary of Deutsche Telekom, launched its own Open Telekom Cloud platform, based on solutions from Huawei, as a state-of-the-art European alternative to the US giants’ platforms. T-Systems is adopting a multi-cloud strategy, with OpenStack and a Microsoft Azure region as just part of a compelling European-based cloud portfolio.

While Canonical is a leader in OpenStack, we are also extremely passionate about bringing the best Ubuntu user experience to users of every public cloud. In addition, we are close partners of both Huawei and Deutsche Telekom. Therefore, naturally, we are delighted about our collaboration with T-Systems.

As of the time of writing, Open Telekom Cloud is the only German-based public cloud service that offers official Ubuntu images on all LTS versions, and that’s great for at least three reasons. First, Ubuntu users in Germany can now get the optimal, secure and stable Ubuntu experience they expect from Canonical, but without potential tradeoffs relating to data sovereignty.

Second, users of other operating systems on Open Telekom Cloud can move to the no. 1 cloud OS—stable, secure, fully open source.

Third, all Open Telekom Cloud users can access professional support, Landscape, and our new Canonical Livepatch Service by optionally purchasing our Ubuntu Advantage support packages.

Press release on Ubuntu Insights: insights.ubuntu.com/2016/11/03/open-telekom-cloud-joins-certified-public-cloud/

Original post

5 reasons you should only use certified images on the public cloud

Ubuntu has a long history in the cloud. Not only is it the world’s number one platform for deployments of OpenStack (as we’ve covered here), it also runs more public cloud workloads than all other platforms combined. Fast, secure, and proven in the most demanding production environments, it is extremely popular with the likes of Netflix, Waze, Airbnb, Uber, Heroku, and many others. Dustin Kirkland covered it brilliantly in his post last month.

Ubuntu is the choice of developers all over the world, and truly supports scale-out architecture. It is also a fully open-source operating system; in fact anyone can download an image from our public pages and even modify it, as long as it’s for their own use. So why be picky about which images you use on which public cloud?

Two of our values are especially relevant here:

  1. Ensuring widespread community usefulness for Ubuntu
  2. Building confidence that Ubuntu Just Works

For that, Ubuntu needs to provide a predictable, secure, and reliable user experience. When bad things happen, it can be annoying on your personal desktop, but when your project—and business—depends on reliable infrastructure, things need to run smoothly and efficiently at scale. Whether it’s an unforeseen incompatibility that requires extensive developer resource to fix, or a security vulnerability that’s hampering operations while you wait for a patch, the implications can often be measured in millions of dollars.

Certified images, developed and supported by Canonical, are managed centrally, delivered automatically, with bugs and vulnerabilities fixed fast. Here are the top reasons to ensure your workloads are running on certified images:

  1. The best Ubuntu experience at all times, dependable, always up-to-date, and optimised for the leading public clouds
  2. Consistency with your development and testing environments
  3. Fast issue resolution and bug fixes, with rapid updates and software installation
  4. 100% compatible with the cutting-edge Ubuntu cloud toolset, with an option for enterprise-grade Ubuntu Advantage support packages
  5. A rich ecosystem of services at your fingertips

An enormous amount of work goes into creating and maintaining certified images, because it’s necessary to ensure that the best Ubuntu experience is available to everyone, through our cloud partners. With a truly stellar engineering team, a cutting-edge tool set and enterprise commercial support available direct from Canonical, there’s no better choice in the cloud.

So if you’re considering using the services of a public cloud provider who currently doesn’t offer Certified Ubuntu images, it’s worth raising the issue with them. Because in today’s competitive cloud world, you need all the advantages you can get.

If you’d like to read more, download our new ebook here.

Originally posted on the Ubuntu Insights blog. Reposted with permission.