Docker vs Kubernetes – Should We Really Compare?

Joseph Sibony
reading time: 7 minutes
May 6, 2021

If you are here, you are likely one of many in the software development industry looking to understand the differences between Docker and Kubernetes. Interestingly, comparing one against the other is probably not the right framing. The two serve different functions: Docker packages and runs containers, while Kubernetes extends that container functionality with the interoperability needed to build, deploy, and scale applications.

Looking at the two in this manner, Docker stands tall as the technology that brought containerization into the mainstream, bringing stability and ease of deployment to many applications. Support for multiple operating systems has helped Docker gain a solid foothold in development projects.

Kubernetes, on the other hand, exists to handle the orchestration side of a deployment. Bringing Docker-built containers into a Kubernetes cluster adds the higher-end features needed for real-world scenarios, and it is exactly those coordination and scaling capabilities that have made Kubernetes the go-to infrastructure for this kind of software delivery.

A Docker Overview


When Docker arrived on the development scene, it offered the first real glimpse of a standardized, self-contained application packaged as a single deployable unit. Users define the base OS and install the prerequisites for the workload the image is designed to run. Producing an artifact that runs consistently across environments is just one factor behind Docker's high adoption rate.

Coding Infrastructure

The move toward Infrastructure as Code (IaC) is also greatly advanced by checking Dockerfiles in alongside application code. In doing so, the application and everything needed to create its underlying environment are versioned and reviewed the same way as any other code. Since the instructions extend beyond local development, the phrase "Works on my machine!" comes up far less often.

Savvy teams also use the Dockerfile in their CI/CD process to dynamically create development and QA resources. Doing so provides a fresh environment for each release candidate and helps control costs for teams that would otherwise keep static resources running in the cloud. This combination of control and consistency is one factor that makes Docker attractive.
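As a rough illustration, a Dockerfile checked in next to the source might look something like the sketch below. The Node.js base image, port, and file names here are purely illustrative assumptions, not a prescription:

    # Hypothetical Dockerfile for a small Node.js service (names and versions are assumptions)
    FROM node:18-alpine

    # Install dependencies first so this layer is cached between builds
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci --omit=dev

    # Copy the application source last, since it changes most often
    COPY . .

    EXPOSE 3000
    CMD ["node", "server.js"]

Checked in alongside the code, a file like this is reviewed like any other change and doubles as documentation of the runtime environment.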

Containing “all the things!”

Developers can now run containers that serve their application while retaining more control over their local development environment. By explicitly stating requirements in the Dockerfile, everything needed is "contained" in the final result. The built images are then stored and distributed to one or more environments.

One reason Docker works well here and in automated build situations is its use of "layers": each build creates a new layer containing only the most recent changes, while existing layers are reused unless the build is specifically instructed otherwise. The completed images are published to a container registry, from which they can be deployed to multiple environments.
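As a sketch, a publish step for such an image could be as simple as the commands below; the registry host, image name, and tag are placeholders:

    # Build the image; unchanged layers are reused from the local build cache
    docker build -t registry.example.com/acme/web-api:1.4.0 .

    # Publish the finished image to the team's container registry
    docker push registry.example.com/acme/web-api:1.4.0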

Kubernetes Defined


Enter onto the scene, Kubernetes. Sometimes referred to as "k8s," it has been widely adopted as a production-class orchestration system and has seen a steady increase in usage in the relatively short time it has been available.

According to a study of usage among IT professionals, Kubernetes adoption has grown sharply: 87% of respondents reported using it in 2019, up from 55% just two years earlier.

There are many reasons why so many teams have integrated k8s into their environment.

Automation of Deployments

Looking at the similarities between Docker and Kubernetes, both allow for repeatable, consistent deployments. Kubernetes takes the containerized application and deploys it in a way that handles every aspect of bringing the service online. Driven by a handful of configuration files, the application is deployed with a predefined number of replicas, and the Kubernetes control plane instructs the nodes on how to bring those replicas online. The advantage over other methods becomes clear once you appreciate the level of orchestration available.
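A minimal Deployment manifest, reusing the hypothetical image name from the Docker examples above, might look roughly like this:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-api
    spec:
      replicas: 3                      # predefined number of replicas
      selector:
        matchLabels:
          app: web-api
      template:
        metadata:
          labels:
            app: web-api
        spec:
          containers:
            - name: web-api
              image: registry.example.com/acme/web-api:1.4.0   # placeholder image
              ports:
                - containerPort: 3000

Applying this with kubectl apply -f deployment.yaml hands the rest to the control plane, which schedules the three replicas onto available nodes and replaces them if they fail.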

Scalability

For cost control, running an application on Kubernetes can lead to much better use of cloud and hybrid-cloud resources. Likewise, the ability for an application to grow based on its own observed load is a huge advantage of Kubernetes. Scaling out increases the number of available replicas while keeping access to shared volumes, configuration, and security settings intact.
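As one illustration, the Deployment sketched above could be scaled automatically with a HorizontalPodAutoscaler; the CPU threshold and replica bounds below are arbitrary example values:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-api
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web-api
      minReplicas: 3
      maxReplicas: 10                  # upper bound keeps costs predictable
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # add replicas when average CPU passes 70%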

Management

Operating this kind of environment extends beyond just deploying an application. The management layer of Kubernetes allows for very complex deployments backed by monitoring and self-healing capabilities. For example, a series of probes can instruct the cluster to (a sketch follows the list):

  • Determine Readiness – a readiness probe checks that a container is ready before traffic is routed to it.
  • Verify State – using a "liveness" probe, the orchestration system can confirm containers are in a running, healthy state based on different types of checks.
  • Pause for Startup – a startup probe allows a much longer application warm-up, preventing the liveness probe from failing (and restarting) the container before it is fully available.
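Sketched as a fragment of a container spec, the three probes might look like this; the endpoints and timings are illustrative assumptions:

    containers:
      - name: web-api
        image: registry.example.com/acme/web-api:1.4.0   # placeholder image
        readinessProbe:                # only route traffic once this succeeds
          httpGet:
            path: /healthz/ready
            port: 3000
        livenessProbe:                 # restart the container if this starts failing
          httpGet:
            path: /healthz/live
            port: 3000
          periodSeconds: 10
        startupProbe:                  # allow up to 30 x 10s of warm-up before liveness kicks in
          httpGet:
            path: /healthz/live
            port: 3000
          failureThreshold: 30
          periodSeconds: 10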

Docker vs Kubernetes – The Right Tool at the Right Time

Which solution is best for your scenario depends mostly on how far your team has come in adopting each technology. Teams just entering the world of containerized software can easily get lost weighing the pros and cons of Docker vs Kubernetes.

It is a good idea to be on the “cutting edge” as long as you have the agility or time to allow for trial and error. For most, a well-structured, planned approach is going to provide the best results. Here are a few questions you can ask yourself and the team:

  • Is our infrastructure stack supportive of a shift to containers?
  • What additional training may be needed for the engineers?
  • What is the intention for production deployments?
    • How scalable does the service need to be?
    • Will it be to a data center or to the cloud?
  • Do our priorities support changing the CI/CD process with a new workflow?

A Comparison Table – Docker vs Kubernetes

It may be helpful to look at a loose comparison of the two:

                                        Docker                      Kubernetes
    Container Support                   Yes (containerd)            Yes (containerd + CRI)
    Persistent Storage                  Yes, with complexities      Yes
    Container Cross-Platform Support    No, limited to base image   Yes
    99.99% Uptime                       No                          Yes
    Initial Complexity                  Low                         High
    Auto-scaling                        No                          Yes
    Self-healing                        No                          Yes
    Load Balancing                      No                          Yes
    Community Support                   Yes                         Yes

Simply put, if you are moving toward a container solution, Kubernetes is the more complex yet more capable and stable technology. And while it is not directly comparable to Docker, it definitely embraces it. Teams already comfortable with containerized software delivery will find clear benefit in using Kubernetes as an orchestration tool.

Docker is not going away any time soon. Teams that have already built a solid foundation on its workflow would do well to add Kubernetes on top. Many find that progressing to a k8s cluster works very well with their existing Docker tooling.

Working Better Together


To reiterate, we should be looking at how Kubernetes has extended container technology like Docker, which goes much further than simply comparing Docker vs Kubernetes. At its core, k8s gave those already using Docker a seamless transition to the Container Runtime Interface (CRI).

One important factor to keep in mind is how recent versions of Kubernetes treat Docker support. Consider this statement from the Kubernetes v1.20 release notes:

“Docker support in the kubelet is now deprecated and will be removed in a future release. The kubelet uses a module called “dockershim” which implements CRI support for Docker and it has seen maintenance issues in the Kubernetes community. We encourage you to evaluate moving to a container runtime that is a full-fledged implementation of CRI (v1alpha1 or v1 compliant) as they become available.” 

The CRI is the API through which the kubelet drives the container runtime on Kubernetes, including starting and stopping containers. The "dockershim" is being deprecated so that development teams can move toward this newer standard and keep their applications running reliably in a Kubernetes cluster.

Ultimately, the change removes the reliance on the full Docker Engine runtime, which bundles many extra functions that Kubernetes already handles itself. Developers can still use Docker to build images; however, administrators and DevOps personnel may need to adjust to a CRI runtime such as containerd in place of the Docker-internal one.
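A quick way to check which runtime a cluster's nodes are actually using is the wide node listing, which includes a CONTAINER-RUNTIME column:

    # The CONTAINER-RUNTIME column shows, for example, containerd://1.6.x or docker://20.x
    kubectl get nodes -o wide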

Conclusion

It should be clear that looking at details surrounding Docker vs Kubernetes goes beyond a simple comparison of the two. Rather, Kubernetes adds layers of automation, stability, and scalability to the already widely adopted Docker development workflow.

Either technology can be put through its paces in a local development environment with very little fuss. The best thing you can do is take the time to evaluate both and see where they fit into your team's workflow. With such low barriers to entry, what is right for your application is more about requirements than implementation.


Incredibuild and Containers

Incredibuild highly accelerates container-based processes by allowing containers to harvest hundreds of unused cores you already own in your on-premises network, or by scaling to cost-effective compute instances in the public cloud. This transforms your containers into super containers with hundreds of cores that run faster builds, tests, and other compute-intensive processes. Try it free.
