Kubernetes-based container orchestration
Containers bring a new cloud model: they package an application's file system, deploy it in a standardized way, and run it in an isolated partition of the host's kernel. Containers deploy extremely fast, scale easily, and provide high service availability, without the limitations of virtual machines.
Containers bring huge benefits to application providers, who need to leverage a new kind of tooling to help them scale with simplicity.
Containers are fast, portable and efficient. They deploy in seconds (from a public or private registry) and run on virtual machines or bare-metal servers, in your public or private cloud. Microservices that use one container per "concern" (the Separation of Concerns principle) may require running thousands of containers on a single host, and tens or hundreds of thousands of containers in a data center. This is where orchestration comes in.
Container orchestration enables container-based clouds (the evolution of VM-based clouds), with significant benefits:
- Scalability. Scale to planetary levels without complications or huge support teams.
- Flexible growth. Applications grow consistently, regardless of their complexity.
- Portability, or "run anywhere". The freedom to take advantage of on-premises, hybrid, or public cloud infrastructure.
- Service discovery and load balancing. No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers, and can load-balance across them.
- Automatic bin packing. Automatically place containers based on their resource requirements and other constraints, without sacrificing availability. Mix critical and best-effort workloads to drive up utilization and save even more resources.
- Storage orchestration. Automatically mount the storage system of your choice, whether local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
- Self-healing. Automatically restart containers that fail, replace and reschedule containers when nodes die, kill containers that don't respond to your user-defined health checks, and don't publish them to clients until they are ready to serve.
- Automated rollouts and rollbacks. Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn't kill all your instances at the same time. If something goes awry, Kubernetes rolls back the change for you. Take advantage of a growing ecosystem of deployment solutions.
- Secret and configuration management. Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.
- Horizontal scaling. Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.
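To make the last point concrete, here is a minimal sketch of the replica-count formula used by the Kubernetes Horizontal Pod Autoscaler (desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric)); the function name and the sample CPU figures are illustrative, not part of any Kubernetes API:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Replica count suggested by the HPA scaling formula:
    desired = ceil(current * currentMetric / targetMetric)."""
    return math.ceil(current_replicas * current_metric / target_metric)

# CPU at 90% against a 50% target: scale 4 pods out to 8.
print(desired_replicas(4, 90.0, 50.0))  # -> 8
# CPU at 20% against a 50% target: scale 4 pods in to 2.
print(desired_replicas(4, 20.0, 50.0))  # -> 2
```

Because the formula is a simple ratio, the same logic drives both scale-out and scale-in, and a workload already at its target utilization keeps its current replica count.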
Telcos are also benefiting from this new paradigm, as many of the newest Virtualized Network Functions are moving from the VM model to the container model, due to its reduced overhead and increased efficiency (a basic requirement for telcos that need to move huge amounts of traffic through their cloud applications).
Containers are being rapidly adopted due to their reduced overhead, increased portability, consistent operation and better application development, which in the end provide greater efficiency.
In its report, Datadog states that "Half of Docker Environments Are Orchestrated", confirming the need to orchestrate them in order to maximize efficiency and scalability.
Kubernetes by Google
Kubernetes (commonly abbreviated as k8s) is an open-source container orchestration system for automating application deployment, scaling, and management. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation.
Kubernetes was created by Joe Beda, Brendan Burns and Craig McLuckie, who were quickly joined by other Google engineers including Brian Grant and Tim Hockin, and was first announced by Google in mid-2014. Its development and design are heavily influenced by Google’s Borg system, and many of the top contributors to the project previously worked on Borg.
Its aim is to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools, including Docker.
Scalable Services, with Kubernetes
The basic scheduling unit in Kubernetes is a pod. It adds a higher level of abstraction by grouping containerized components. A pod consists of one or more containers that are guaranteed to be co-located on the same host machine and can share resources.
A Kubernetes service is a set of pods that work together, such as one tier of a multi-tier application.
Service discovery assigns a stable IP address and DNS name to the service, and load-balances traffic in a round-robin manner across the pods matching the service's selector (even as failures cause pods to move from machine to machine). By default, a service is exposed inside the cluster (e.g. back-end pods might be grouped into a service, with requests from the front-end pods load-balanced among them), but a service can also be exposed outside the cluster (e.g. so clients can reach front-end pods).
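The behavior described above can be sketched as a toy model: a stable virtual IP fronting a changing set of pod endpoints, with new connections handed out in round-robin order. The class name and the IP addresses are hypothetical, and in a real cluster this logic lives in kube-proxy (programmed as iptables or IPVS rules), not in application code:

```python
from itertools import cycle

class RoundRobinService:
    """Toy model of a Kubernetes Service: a stable cluster IP that
    load-balances new connections across pod endpoints, round-robin."""

    def __init__(self, cluster_ip: str, endpoints: list):
        self.cluster_ip = cluster_ip       # stable address clients use
        self._rr = cycle(list(endpoints))  # pod IPs behind the service

    def route(self) -> str:
        """Pick the pod IP that receives the next connection."""
        return next(self._rr)

svc = RoundRobinService("10.96.0.10",
                        ["10.244.1.5", "10.244.2.7", "10.244.3.9"])
print([svc.route() for _ in range(4)])
# -> ['10.244.1.5', '10.244.2.7', '10.244.3.9', '10.244.1.5']
```

The key design point is that clients only ever see the stable cluster IP; the set of endpoints behind it can change as pods are rescheduled, without clients noticing.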
WhiteMist is Whitestack's own Kubernetes distribution, geared toward accelerating the adoption of containers in:
- Cloud providers, which need to maximize their energy and space efficiency while, at the same time, offering their customers what they need now.
- E-commerce providers, which have been struggling to match demand with legacy technology, and are now in dire need of a definitive solution.
- Large organizations, which, due to their digital transformation processes, need to rely on containers rather than VMs.
- Telecom operators, which need to address the challenges of 5G with improved efficiencies, in order to support massive traffic.
With "40% of respondents from enterprise companies (5000+)" running Kubernetes in production (according to the CNCF Survey), it is pretty clear that most of the growth in cloud infrastructure will be on containers, orchestrated by a Kubernetes distribution.
WhiteMist follows the same design pattern as other Whitestack products: "Design for simplicity, design for accelerating adoption". Its installation process is so simple that other products that need Kubernetes can bundle WhiteMist to deploy Kubernetes in greenfield environments.
Containerization allows development teams to move fast, deploy software efficiently, and operate at an unprecedented scale. WhiteMist helps you orchestrate the container lifecycle, so you can scale with ease.