How Cloud Native Workloads are Making Bare Metal Great Again
Today we’re excited to announce that one of our absolute favorite services (CoreOS’s container image management service Quay.io) is now a Packet user! You can check out their full blog post here. If you’re a Quay user, you’ve probably already noticed the faster builds. As a daily (often hourly) user, we definitely have!
What’s special about Quay?
Quay is a container registry, developed in the early days of the Docker revolution to help developers host and manage their Docker images. It was acquired by CoreOS in 2014 and has since become the strongest registry for building, managing and distributing containers, especially among enterprises.
So why is Quay on Packet bare metal a big deal? After all, it was working fine yesterday, and now it’s still working today. The answer is simple: GIFEE, or “Google Infrastructure For Everyone Else”.
The Bigger Story
Before we dive into GIFEE and what makes the “cloud native” revolution so, well, revolutionary, let’s review a bit of background.
Quay’s move to Packet for its build system is a journey that started the way many cloud native projects do: with a need for greater performance without sacrificing scalability, availability or portability. Quay was originally built like many new web applications: on the most immediately available, easily scalable public cloud, AWS. Until now, Quay would spin up a new EC2 instance for each build, store the resulting image on S3 for delivery, and then destroy the EC2 instance.
This meant waiting around ~3 minutes for your build to start, and then, depending on the complexity of the build itself, a potentially large chunk of time for the actual work. To provide some context, builds for Packet’s core API were pushing 15 minutes to complete.
So, while this was a totally workable solution, at scale its limitations became apparent from both an application performance and a business standpoint. Not only was it difficult to make individual jobs faster, but Quay was paying for a full hour of each EC2 instance even though the instances were generally utilized for only a fraction of that time.
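To make the billing problem concrete, here is a back-of-the-envelope sketch. The hourly rate and build duration are hypothetical, purely for illustration of how round-up-to-the-hour billing wastes money on short, throwaway build instances:

```python
import math

def ephemeral_build_cost(build_minutes: float, hourly_rate: float) -> dict:
    """Cost of a single throwaway build instance under hourly billing."""
    # AWS-style hourly billing rounds every partial hour up to a full hour.
    billed_hours = math.ceil(build_minutes / 60)
    # Fraction of the paid-for time that actually did useful work.
    utilization = build_minutes / (billed_hours * 60)
    return {
        "billed": billed_hours * hourly_rate,
        "utilization": utilization,
    }

# A 15-minute build (like Packet's core API build) on a hypothetical
# $0.20/hr instance still pays for the full hour -- only 25% of the
# paid time does real work.
result = ephemeral_build_cost(15, 0.20)
```

Run at scale (a fresh instance per build, thousands of builds a day), that unused 75% adds up quickly.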
Kubernetes and Bare Metal
The CoreOS team is as closely aligned with Google’s Kubernetes (now under the stewardship of the Cloud Native Computing Foundation) as almost anyone in the space, so it is no surprise that this iteration of Quay includes a k8s strategy. With today’s announcement, Quay revealed that it now orchestrates its container-powered VMs with Kubernetes, on a fleet of Packet bare metal servers that it scales up and down as needed to meet demand.
Now, Kubernetes is a pretty flexible tool, which works with a variety of infrastructure and compute platforms. So why does this iteration of the service feature bare metal servers?
Well, over the last few years (and over more than a few post-meetup beers!) we connected with the Quay team and eventually sold them on our vision and promise of truly “on demand” bare metal. They recognized the power of having a non-virtual environment that was also programmable and could be orchestrated with tools they were already using and invested in.
Not coincidentally, this is the same way Google manages much of their platform as well: in containers, orchestrated by Kubernetes, on bare metal servers.
Quay Got GIFEE - What’s It to Me?
CoreOS’s co-founder Alex Polvi is known for evangelizing the idea of GIFEE (Google Infrastructure for Everyone Else) and how the various tools CoreOS has produced help everyday developers get its benefits. With its move to Kubernetes and bare metal, Quay is living the GIFEE dream!
The results are pretty striking: instead of a 1-3 minute wait for builds to begin, boot times are down to just a few seconds. The builds themselves are also faster, as jobs enjoy uncongested local I/O. If performance is a feature, then one could argue the Quay team just rolled out one of their biggest feature upgrades ever, simply by embracing GIFEE on bare metal.
Cattle Not Kittens. No Special Snowflakes Here.
While this is a somewhat insensitive analogy if one is a lover of all things feline, it is a good one when talking about infrastructure and cloud native application deployment.
Cloud Native users are cattle ranchers. Their applications are designed to require little maintenance, to handle failure and to scale out. Services are loosely coupled and stateless. If a server fails it is easier to replace than fix -- there is no emotional attachment.
Contrast this with many traditional IT departments, who look a lot more like pet owners. Their applications are designed from an enterprise perspective -- services are tightly coupled, relying on scale-up systems and relational databases. They are lovingly and expensively cared for, with great attention and affection.
For far too long, we’ve treated bare metal servers as pets and VMs as cattle. Dedicated servers have been special darlings that we painstakingly customize, monitor, maintain, tend to and protect, and never let go until they die of old age.
At Packet, we encourage our clients to think about bare metal servers just like they do ephemeral VMs -- just really, really fast VMs.
As more and more workloads move to a container-based or “cloud native” architecture, the value proposition becomes even clearer and more tangible: an application in a container doesn’t care whether it’s deployed on a VM or a bare metal server...not really. It just wants compute and network that let it do what it needs to do, and in the end the best performing resource for the cost wins.
Quay is Cloud Native. Cloud Native Loves Metal!
Aside from adding to our collection of ‘cool kids’ clients (we’re talking to you @hashicorp and @raintank and @ns1!), we think that the needs and capabilities of cloud native applications like Quay are shaping a future for internet infrastructure that is more portable and more “Google-ey”.
If you’re familiar with Geoffrey Moore’s famous tech marketing book “Crossing the Chasm” this will sound a bit like the technology adoption lifecycle: first come the early adopters, and then there is a huge gap before widespread adoption. When it comes to “containers in production” it feels like we’re vaulting over the chasm right now, thanks to Kubernetes, Mesos, CoreOS, and countless other projects.
The strange aspect of bare metal in the story of cloud native adoption is that it feels like déjà vu. It’s pretty funny to see innovative early adopters like Quay pushing forward the bare metal option, when just a few years ago it was considered the domain of the laggards! Maybe we’ll all meet in the middle?