
Kubernetes and Edge Computing

Kubernetes

  • Automation is key to making edge apps work
  • Kubernetes has rapidly become the most popular means of container management
  • Containerized apps will be the order of the day at the edge
  • Ergo, you will need Kubernetes for edge deployment

As the edge is defined and built out, we will get a clearer picture of just what workloads will be moving to the edge from inside the firewall of the data center. While the technology is still in its very early stages, one platform has already become the obvious choice for deploying containers at the edge: Kubernetes, developed by Google and released as open source.

Containers are minimal deployment environments in which an app runs. In a virtualized environment, the entire OS (Linux or Windows) runs inside the virtual machine, which means considerable overhead. A container whittles the OS down to just the libraries and APIs the specific app needs, so it requires a few megabytes of memory vs. gigabytes for a fully virtualized environment.

Kubernetes is a container orchestration platform for deploying containers to clusters, networks of physical or virtual machines. A cluster is made up of nodes, the individual servers that run the workloads, and because containers are so small, many of them can run on a single node. Docker was the first container technology to achieve mainstream adoption and has been embraced by open source and commercial software vendors alike, making it the de facto standard for containerized apps.

A Kubernetes cluster, then, is a cluster that uses Kubernetes orchestration to deploy, maintain, and scale Docker containers. The basic idea of Kubernetes is to abstract the underlying hardware and provide a single interface for deploying containers to all kinds of infrastructure.
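
As a rough sketch of what that single interface looks like, here is a minimal Deployment manifest; the app name, image, and replica count are illustrative, not from any real system. Applied with kubectl, the same file works whether the nodes sit in a data center or at an edge site:

```yaml
# Minimal Deployment sketch: asks Kubernetes to keep three copies of a
# containerized app running somewhere in the cluster. Names and image
# are placeholders, not a real application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app
spec:
  replicas: 3                      # desired number of identical pods
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      containers:
      - name: edge-app
        image: registry.example.com/edge-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```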

Containers are ideal for the edge because, while the latest Intel Xeon and AMD Epyc processors sport impressive performance, you can only cram so much computing power into an edge computing unit, which is usually about the size of a shipping container. In such a cramped space, you want each workload to use as few resources as possible.
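
One way to keep workloads frugal on constrained edge hardware is to declare explicit resource requests and limits on each container. The sketch below shows what that looks like in a pod spec; the names, image, and numbers are chosen purely for illustration:

```yaml
# Pod sketch with explicit resource requests and limits, so many small
# containers can be packed onto one resource-constrained edge node.
# Names, image, and numbers are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: edge-sensor-reader
spec:
  containers:
  - name: reader
    image: registry.example.com/sensor-reader:1.0   # hypothetical image
    resources:
      requests:
        cpu: 100m       # a tenth of a core reserved by the scheduler
        memory: 64Mi
      limits:
        cpu: 250m       # hard ceilings enforced on the node
        memory: 128Mi
```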

There are two types of containerized apps: on-premises apps that have been migrated, modernized, and retrofitted for the cloud, and new apps built from the ground up for the cloud.

What’s convenient about container development is that you can build your apps locally, even on a laptop, test and debug them, and then deploy them to a remote target such as a cloud server or an IoT edge network. There is a wide variety of Kubernetes developer tools to choose from, each suited to different needs, and all of them let you deploy your applications remotely once development is finished.

Kubernetes has several features that suit edge computing, in particular the Horizontal Pod Autoscaler. Pods are groups of one or more containers that are scheduled onto nodes, the hardware of the edge device. The Horizontal Pod Autoscaler automatically scales the number of pods up or down so the cluster can absorb sudden traffic spikes. Kubernetes detects a spike from resource metrics such as CPU utilization and automatically adjusts the number of pods to match the rise and fall in demand.
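
As a sketch of how that autoscaling is configured, a HorizontalPodAutoscaler watching CPU utilization might look like the following; the target Deployment name and the thresholds are assumptions for illustration, not defaults:

```yaml
# HorizontalPodAutoscaler sketch: grows or shrinks the edge-app
# Deployment between 2 and 10 pods, aiming for roughly 70% average
# CPU utilization. Target name and numbers are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: edge-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: edge-app          # hypothetical Deployment from earlier
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```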

Another automation element of Kubernetes is self-healing: it restarts containers that fail and kills containers that stop responding to health checks. Built-in monitoring and metrics provide real-time insight into traffic behavior, helping identify bottlenecks and opportunities for optimization.
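
That restart behavior is driven by health checks declared on the container. A minimal sketch, assuming a hypothetical HTTP health endpoint, might look like this:

```yaml
# Liveness probe sketch: if the /healthz endpoint stops answering,
# Kubernetes kills the container and starts a fresh one automatically.
# Path, port, and timings are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: edge-app
spec:
  containers:
  - name: edge-app
    image: registry.example.com/edge-app:1.0   # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5    # wait before the first check
      periodSeconds: 10         # check every 10 seconds
      failureThreshold: 3       # restart after 3 consecutive failures
```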

The exact boundaries of the edge vs. the cloud are still being determined, but one thing is clear: there is an intent to move computation closer to where things are happening and to process data where it is generated rather than send terabytes of data upstream to the data center.
One thing we do know is that Kubernetes at the edge offers clear benefits for customers. It reduces the complexity of running computation across many geographically distributed points of presence and a variety of architectures, which means any workload that can be containerized can now be deployed at the edge.

Containers are lightweight, and you can easily find yourself with thousands of them to manage; Kubernetes provides the underlying tools to manage those container workflows efficiently through automation.

Thanks to Kubernetes autoscaling, applications can scale to handle thousands if not millions of users while keeping response times low. And because Kubernetes is self-healing, failed or unresponsive containers are killed off or restarted automatically.

As the remaining features and functions are worked out, it’s not too soon to familiarize yourself with the whole concept of containers, Kubernetes, and remote deployment. What you deploy in the data center today can easily move out to the edge tomorrow. After all, ease of deployment is the whole point behind Kubernetes.