Infrastructure Considerations for Containers and Kubernetes

Containers and Kubernetes are at the heart of a broad industry shift toward applications and services built on a microservices architecture. Microservices are being rapidly adopted as a means of building and modernizing distributed applications, making them more scalable, flexible, resilient, and easier to build.

Instead of building self-contained, monolithic applications, a microservices approach breaks applications into modular, independent components that can be dynamically integrated with one another using application programming interfaces (APIs).

Increasingly, companies are using containers to power their microservices application architectures. Containers encapsulate a lightweight runtime environment for an application, enabling finer-grained execution environments and strong application isolation.

Furthermore, a container includes everything an application needs to run: code, dependencies, libraries, binaries, and other elements. Today, Docker is the most popular choice for building and running containers.
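To illustrate how that packaging works in practice, a container image is typically described in a Dockerfile. The sketch below assumes a hypothetical small Python service; the file names (app.py, requirements.txt) are placeholders, not part of any specific product:

```dockerfile
# Hypothetical Dockerfile for a small Python service: the resulting
# image bundles the runtime, third-party libraries, and application code.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
# when only the application code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code itself.
COPY app.py .

# Command executed when a container is started from this image.
CMD ["python", "app.py"]
```

Building this file with `docker build` produces a self-contained image that runs identically on a developer laptop, a bare-metal server, or a VM.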

Compared to virtual machines (VMs), containers share the host OS kernel rather than carrying a full copy of it, so they take up less space. Because they avoid the OS boot time of a VM, containers start in seconds or even milliseconds. As such, containers deliver performance characteristics that match the needs of a microservices architecture; in particular, quick instantiation maps well to the unpredictable workload patterns associated with microservices.

The growing adoption of containers was validated in a 2019 industry container usage survey, which found that the median number of containers per host doubled to 30 between 2018 and 2019, and that the maximum per-node density reached 250 containers, a 38% increase over 2018.

Managing Your Containers

With such explosive growth in the use of containers, companies need a way to oversee and manage their efforts. That’s where Kubernetes comes in.

Kubernetes is an open-source container orchestration system for automating the deployment, scaling, and management of application containers across clusters of hosts. Originally designed by Google, it is now maintained by the Cloud Native Computing Foundation. Kubernetes works with a range of container tools, including Docker, and groups the containers that make up an application into logical units for easy management and discovery.

Kubernetes provides a framework for running distributed systems resiliently, taking care of scaling and failover for an application. For example, in a production environment, Kubernetes can start a new container if one goes down, helping to ensure there is no application downtime. Kubernetes also provides service discovery and load balancing, storage orchestration, automated rollouts and rollbacks, self-healing, configuration management, and more.
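A Kubernetes Deployment manifest shows both ideas at work. The sketch below is hypothetical (the name `web` and image `example/web:1.0` are placeholders): `replicas: 3` asks Kubernetes to keep three copies of the container running, replacing any that fail, and the liveness probe lets Kubernetes restart a container that stops responding:

```yaml
# Hypothetical Deployment: Kubernetes maintains three replicas of this
# pod, recreating any replica that fails (scaling and failover).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired number of identical pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # placeholder image name
        ports:
        - containerPort: 8080
        livenessProbe:           # self-healing: restart on failed checks
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

Applying this manifest declares the desired state; Kubernetes continuously reconciles the cluster toward it, which is what makes the failover automatic rather than operator-driven.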

While there are a handful of container orchestrators available today, Kubernetes dominates the market. In addition to the widely used open-source distribution, commercial offerings such as Red Hat OpenShift are built on Kubernetes, adding enterprise features and support.

Kubernetes can be deployed on a bare-metal cluster or on a cluster of virtual machines, and it can likewise orchestrate the containers it manages directly on bare metal or on VMs. Most Kubernetes deployments today run on VMs, whether on-premises or in the cloud.

Bare-metal deployments are less common, but there are use cases where they offer advantages. For example, a network edge application may be too latency-sensitive to tolerate the overhead a VM introduces, or an application (such as machine learning) may need GPUs or other hardware accelerators that do not lend themselves to virtualization.

Optimized, Integrated Solutions

Running container workloads ultimately comes down to hardware. Businesses need physical machines with CPUs, memory, and local persistent storage, along with shared persistent storage and networking to connect all the machines.

A suitable system must allow users to dynamically provision it for different data workflows. Many companies are therefore looking for turnkey solutions that combine the necessary processing, storage, memory, and interconnect technologies to provide either the bare-metal or VM foundation for their container and microservices efforts. Delivering such a solution requires expertise and real-world best practices across both the HPC and container/Kubernetes domains, plus deep industry knowledge of the specific applications.

PSSC Labs has a more than 30-year history of delivering systems for the most demanding workloads across industry, government, and academia. Its offerings include the PowerServe Uniti server line, which leverages the latest components from Intel® and Nvidia®. These servers are well suited to a wide range of applications, including AI and deep learning as well as computational and data analysis.

PSSC Labs also offers CloudOOP Big Data Servers, which combine the performance expected of an enterprise server with the cost-effectiveness of direct-attached storage for Big Data applications. The servers deliver sustained I/O speeds of 200+ MB/sec per hard drive (30%+ faster than other OEMs).

As container density per host and Kubernetes adoption continue to grow, these and other PSSC Labs systems are designed to meet the requirements of today's enterprises. Such systems will become increasingly important as companies explore new ways to use containers, Kubernetes, and microservices to serve their users better and react quickly to new business opportunities.
