19 Kubernetes Best Practices for Building Efficient Clusters

September 23, 2020

Introduction

Kubernetes offers an extraordinary level of flexibility for orchestrating a large cluster of distributed containers.

The sheer number of available features and options can present a challenge. Applying best practices helps you avoid potential hurdles and create a secure and efficient environment from the get-go.

Use the outlined Kubernetes best practices to build optimized containers, streamline deployments, administer reliable services, and manage a full-blown cluster.


Secure and Optimize Containers

Containers provide much less isolation than Virtual Machines. You should always verify container images and maintain strict control over user permissions.

Using small container images boosts efficiency, conserves resources, and reduces the attack surface.

Only Use Trusted Container Images

Ready-made container images are highly accessible and exceptionally useful. However, public images can quickly become stale and may contain exploits, bugs, or even malicious software that spreads rapidly throughout a Kubernetes cluster.

Only use images from trusted repositories and always scan images for potential vulnerabilities. Numerous online tools, such as Anchore or Clair, provide a quick static analysis of container images and inform you of potential threats and issues. Spend a few moments to scan container images before deploying them and avoid potentially disastrous consequences.

Non-Root Users and Read-Only Filesystems

Change the built-in security context to force all containers to run only with non-root users and with a read-only filesystem.

Avoid running containers as a root user. A security breach can quickly escalate if a user can grant themselves additional permissions.

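A minimal pod manifest sketch showing both settings; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo      # hypothetical pod name
spec:
  securityContext:
    runAsNonRoot: true             # refuse to start containers running as root
    runAsUser: 1000                # run as an unprivileged UID
  containers:
  - name: app
    image: my-registry/app:1.0     # placeholder image
    securityContext:
      readOnlyRootFilesystem: true # mount the root filesystem read-only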

If the filesystem is set to read-only, there is little opportunity to tamper with the container’s content. Instead of editing system files, an attacker would need to remove the entire container and put a new one in its place.

Note: Learn about Docker Privileged containers and why you shouldn’t run them.

Create Small and Layered Images

Small images speed up your builds and need less storage. Efficiently layering an image can reduce image size significantly. Try building your images FROM scratch (the empty base image) to achieve optimal results.

Use multiple FROM statements in a single Dockerfile if you need lots of different components. This technique, known as a multi-stage build, creates stages, each referencing a different base image. The final image no longer stores the intermediate layers, only the components you need from each stage, making the Docker container much slimmer.

Each stage is built from the base image referenced in its FROM command, but only the final stage determines what ships in the deployed container.
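
A minimal multi-stage Dockerfile sketch, assuming a Go application (file and image names are placeholders); only the artifacts copied into the final stage end up in the shipped image:

# Build stage: full Go toolchain, discarded after the build
FROM golang:1.15 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: starts from the empty scratch image and keeps only the binary
FROM scratch
COPY --from=builder /app /app
ENTRYPOINT ["/app"]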

Limit User Access with RBAC

Role-based access control (RBAC) ensures that no user has more permissions than they need to complete their tasks. You can enable RBAC by appending the following flag when starting the API server:

--authorization-mode=RBAC

RBAC uses the rbac.authorization.k8s.io API group to drive authorization decisions through the Kubernetes API.
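
As an illustration, a Role and RoleBinding that grant a single user read-only access to pods in one namespace might look like this (the user and namespace names are hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development          # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]                 # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: development
subjects:
- kind: User
  name: jane                      # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io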

Note: Keep secrets and passwords isolated from your container images. A user with permissions to create pods within a namespace can use that role to create a pod and access config maps or secrets.

Stdout and Stderr Logs

It is common practice to send application logs to the stdout (standard output) stream, and error logs to the stderr (standard error) stream. Once an app writes to stdout and stderr, a container engine, like Docker, redirects and stores records in a JSON file.

Kubernetes containers, pods, and nodes are dynamic entities. Logs need to be consistent and perpetually available. It is therefore recommended to keep your cluster-wide logs in a separate backend storage system.

Kubernetes can be integrated with a wide range of existing logging solutions, such as the ELK Stack.

Streamline Deployments

A Kubernetes deployment establishes a template that ensures pods are up and running, regularly updated, or rolled back as defined by the user.

Using clear labels, flags, linked containers, and DaemonSets can give you fine-grained control over the deployment process.

Using the --record Flag

When you append the --record flag, the executed kubectl command is stored as an annotation. By inspecting the deployment rollout history, you can easily track updates in the CHANGE-CAUSE column.

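A quick sketch of the workflow (the manifest file name is a placeholder):

kubectl apply -f deployment.yaml --record
kubectl rollout history deployment example-deployment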

Roll back to any revision by declaring the revision number in the undo command:

kubectl rollout undo deployment example-deployment --to-revision=1

Without the --record flag, it would be difficult to identify the specific revision.

Descriptive Labels

Try using as many descriptive labels as possible. Labels are key:value pairs that allow users to group and organize pods in meaningful subsets. Most features, plugins, and third-party solutions need labels to be able to identify pods and control automated processes.

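A short sketch of descriptive labels in a pod’s metadata (all names and values are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod              # hypothetical pod name
  labels:
    app: frontend                 # which application the pod belongs to
    environment: production       # deployment environment
    release: stable               # release track
spec:
  containers:
  - name: frontend
    image: my-registry/frontend:1.2   # placeholder image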

For example, Kubernetes DaemonSets depend on labels and node-selectors to manage pod deployment within a cluster.

Create Multiple Processes Within a Pod

Instead of trying to solve every problem inside a single container, use Kubernetes’ container linking abilities and deploy multiple coupled containers in a single Kubernetes pod. A good example is outsourcing security features to a proxy sidecar container.

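A sketch of a pod pairing the main container with a proxy sidecar (all names and images are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: main-app                # core application container
    image: my-registry/app:1.0    # placeholder image
    ports:
    - containerPort: 8080
  - name: proxy-sidecar           # handles TLS/auth in front of the app
    image: my-registry/proxy:1.0  # placeholder image
    ports:
    - containerPort: 443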

A coupled container can support or enhance the main container’s core functionality or help the main container adapt to its deployment environment.

Use Init Containers

One or more init containers usually perform utility tasks or security checks you do not want to include in the main application container. You can use init containers to ensure that a service is ready before initiating the pod’s main container.

Each init container must run to completion before the subsequent init container starts. Init containers delay the start of the pod’s main container until a precondition is satisfied; if an init container fails, Kubernetes restarts the pod until it succeeds. Once the prerequisite is met, the init container exits and allows the main container to start.
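
A minimal sketch of an init container that waits for a hypothetical db-service to become resolvable before the main container starts:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  initContainers:
  - name: wait-for-db             # blocks until the DB service resolves
    image: busybox:1.32
    command: ['sh', '-c', 'until nslookup db-service; do sleep 2; done']
  containers:
  - name: main-app
    image: my-registry/app:1.0    # placeholder image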

Avoid Using the Latest Tag

Refrain from deploying containers with no tag or with the :latest tag in a production environment. The latest tag makes it hard to determine which version of the image is actually running.

An effective way to make sure that the container always uses the same version of the image is to pin it with the unique image digest instead of a tag. In this example, a Redis image version is deployed using its unique digest:

redis@sha256:675hgjfn48324cf93ffg43269ee113168c194352dde3eds876677c5cb

Kubernetes does not automatically update the image version unless you change the digest value.

Set Up Readiness and Liveness Probes

Liveness and readiness probes help Kubernetes monitor and interpret the health of your applications. If you define a liveness check and a process does not meet the requirements, Kubernetes stops the container and starts a new instance to take its place.

Readiness probes conduct audits on the pod level and assess whether a pod can accept traffic. If a pod is not ready, the readiness probe removes it from the service’s endpoints so that it receives no traffic until the check passes.

Note: It is recommended to configure a time delay (initialDelaySeconds) when configuring probes. Large configuration files can take some time to load, and a liveness probe that fires before the app finishes loading can trigger a restart loop.

The documentation for configuring readiness and liveness probes is available on the official Kubernetes website.
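
A sketch of both probes on one container, including the initial delay mentioned in the note above (the endpoints and ports are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: my-registry/app:1.0    # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz            # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 15     # give the app time to load before probing
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready              # hypothetical readiness endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5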

Try Different Service Types

By learning how to utilize different service types, you can effectively administer internal and external pod traffic. Your goal is to create a stable networking environment by managing reliable endpoints such as IPs, ports, and DNS.

Static Ports with NodePort

Expose pods to external users by setting the service type to NodePort. If you specify a value in the nodePort field, Kubernetes reserves that port number across all nodes and forwards all incoming traffic meant for the pods that are part of the service. The service is accessible using both the internal cluster IP and the node’s IP with the reserved port.

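A NodePort service sketch (the name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: frontend                 # pods the service forwards traffic to
  ports:
  - port: 80                      # cluster-internal port
    targetPort: 8080              # container port
    nodePort: 30080               # reserved on every node (30000-32767)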

Users can contact the NodePort service from outside of the cluster by requesting:

NodeIP:NodePort

Always use a port number within the range configured for NodePort (30000-32767). If creating the service fails, check for port collisions with other services.

Ingress vs LoadBalancer

The LoadBalancer type exposes services externally using your provider’s load balancer. Each service you expose using the LoadBalancer type receives its own external IP address. If you have many services, this can lead to unplanned additional costs based on the number of exposed services.

What is an Ingress? An Ingress lets you route external traffic to multiple services within the cluster through a single IP address. Ingress is not a service type; it sits in front of multiple services and acts as a unique entry point for them. Ingress controllers provide a set of capabilities that let you set up subdomain and path-based routing to back-end services.
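
A sketch of subdomain and path-based routing with an Ingress (the host and back-end service names are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com          # hypothetical subdomain
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service      # hypothetical back-end service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service      # hypothetical back-end service
            port:
              number: 80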

A standard configuration requirement is to provide an ingress controller with an existing static public IP address. The static public IP address remains if the ingress controller is deleted. This approach allows you to use current DNS records and network configurations consistently throughout your applications’ life-cycle.

Map External Services to a DNS

The ExternalName type does not map services to a selector but uses a DNS name instead. Use the externalName parameter to map services by using a CNAME record. A CNAME record is a fully qualified domain name and not a numeric IP.

Clients connecting to the service bypass the service proxy and connect to the external resource directly. In this example, the pnap-service is mapped to the admin.phoenixnap.com external resource.

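A sketch of the mapping described in this section:

apiVersion: v1
kind: Service
metadata:
  name: pnap-service
spec:
  type: ExternalName
  externalName: admin.phoenixnap.com   # returned to clients as a CNAME record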

Access to the pnap-service works in the same way as with other services. The crucial difference is that the redirection now occurs at the DNS level.

Application Design

Automated container deployment with Kubernetes ensures that most operations now run without direct human input. Design your applications and container images so that they are interchangeable and do not require constant micromanagement.

Focus on Individual Services

Try splitting your application into multiple services and avoid bundling too much functionality in a single container. It is much easier to scale apps horizontally and reuse containers if each one focuses on a single function.

When creating your applications, assume that your containers are short-term entities that are going to be stopped and restarted regularly.

Note: Kubernetes is an automation tool, and your images should not depend on manual management or input once deployed. Containers are immutable and should not be modified but restarted from the base image.

Use Helm Charts

Helm, the Kubernetes application package manager, can streamline the installation process and deploy resources throughout the cluster very quickly. The Helm application packages are called Charts.

Applications like MySQL, PostgreSQL, MongoDB, Redis, and WordPress are in-demand solutions. Instead of creating and editing several complex configuration files, you can deploy readily available Helm Charts.

Use the following command to create the Deployments, Services, PersistentVolumeClaims, and Secrets needed to run Kafka Manager on your cluster:

helm install --name my-messenger stable/kafka-manager

You no longer need to analyze specific components and learn how to configure them to run Kafka properly.

If you are just starting with Helm, visit our guides How To Install Helm On Ubuntu, Mac And Windows and How to Add or Update Helm Repo. And if you want to find out more, or compare Helm with other tools, check our Helm vs Kustomize article.

Utilize Node and Pod Affinity

The affinity feature is used to define both node affinity and inter-pod affinity. Node affinity allows you to specify the nodes a pod is eligible to be scheduled on by using existing node labels.

  • requiredDuringSchedulingIgnoredDuringExecution – Establishes mandatory constraints that must be met for a pod to be scheduled to a node.
  • preferredDuringSchedulingIgnoredDuringExecution – Defines preferences that a scheduler prioritizes but does not guarantee.

If the node labels change at runtime, and the pod’s affinity rules are no longer met, the pod is not removed from the node. The nodeSelector parameter limits pods to specific nodes by using labels. In the example below, the Grafana pod is scheduled only on nodes that have the ssd label.

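A sketch of the nodeSelector approach, assuming the nodes carry a hypothetical disktype: ssd label:

apiVersion: v1
kind: Pod
metadata:
  name: grafana
spec:
  nodeSelector:
    disktype: ssd                 # only nodes carrying this label qualify
  containers:
  - name: grafana
    image: grafana/grafana:7.1.5  # placeholder tag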

The pod affinity/anti-affinity feature expands the types of constraints you can express. Instead of using node labels, you can use existing pod labels to delineate the nodes a pod can be scheduled on. This feature allows you to set rules so that individual pods get scheduled based on labels of other pods.

Node Taints and Tolerations

Kubernetes automatically tries to deploy pods to locations with the smallest workload. Node and Pod affinity allow you to control which node a pod gets deployed to. Taints can prevent the deployment of pods to specific nodes without altering existing pods. Pods that you want to deploy on a tainted node need to opt in to use the node.

  • Taints – Prevent new pods from being scheduled on nodes, define node preferences, and remove existing pods from a node.
  • Tolerations – Enable pods to be scheduled on nodes with matching Taints.

Taints and Tolerations produce optimal results when used together to ensure that pods get scheduled onto the appropriate nodes.
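
For example, you could taint a node and add a matching toleration to the pods that are allowed to run there (the node name, key, and value are placeholders):

kubectl taint nodes node1 dedicated=monitoring:NoSchedule

And in the pod spec:

tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "monitoring"
  effect: "NoSchedule"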

Note: Learn what to look out for when migrating a legacy application to containers in our article How to Containerize Legacy Applications.

Group Resources with Namespaces

Use Kubernetes namespaces to partition large clusters into smaller, easily identifiable groups. Namespaces allow you to create separate test, QA, production, or development environments and allocate adequate resources within a unique namespace. The names of Kubernetes resources only need to be unique within a single namespace. Different namespaces can have resources with the same name.

If multiple users have access to the same cluster, you can limit users and permit them to act only within a specific namespace’s confines. Separating users is a great way to delimit resources and avoid potential naming or versioning conflicts.

Namespaces are Kubernetes resources and are exceptionally easy to create. Create a YAML file defining the namespace name and use kubectl to post it to the Kubernetes API server. You can subsequently use the namespace to administer the deployment of additional resources.
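
A minimal sketch (the namespace name is a placeholder):

apiVersion: v1
kind: Namespace
metadata:
  name: test-environment          # hypothetical namespace name

Post it to the API server with kubectl apply -f namespace.yaml, then target the namespace with the --namespace flag on subsequent kubectl commands.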

Conclusion

A Kubernetes cluster represents a complex structure with a vast number of solutions and features. Not all of the suggested practices are applicable to all use cases.

Try to apply some of the practices outlined in this article and see the impact they have on the cohesion and functionality of your Kubernetes cluster.

Dejan Tucakov
Dejan is the Head of Content at phoenixNAP with over 8 years of experience in Web publishing and technical writing. Prior to joining PNAP, he was Chief Editor of several websites striving to advocate for emerging technologies. He is dedicated to simplifying complex notions and providing meaningful insight into data center and cloud technology.