What’s New in Kubernetes 1.19? New Features and Updates

The last several months have been a busy time for the Kubernetes community, and especially the Kubernetes release team, amid the challenges caused by the ongoing pandemic. The Kubernetes project itself has felt the impact, with the upcoming release of version 1.19 having been postponed and the project’s release schedule adjusted to accommodate the ongoing disruption to people’s lives. Only three new Kubernetes versions, instead of the usual four, will be released this year, and it is unclear whether this will be a permanent change going forward.

With its extended release cycle, version 1.19 incorporates a number of changes and enhancements that emphasize the maturity and production readiness of Kubernetes, including several notable feature promotions to general availability (e.g., Ingress and seccomp), security enhancements (TLS 1.3 support), and improvements to address technical debt. This post covers the highlights of the release.

Notable New Features and Changes

Ingress goes GA

Introduced as an API in beta all the way back in Kubernetes version 1.1, Ingress handles external access to services in a cluster, exposing HTTP and HTTPS routes. It may also manage load balancing, terminate SSL/TLS, and provide name-based virtual hosting. In order for the Ingress resource to work, an Ingress controller must be used; the Kubernetes project currently supports and maintains the GCE and nginx controllers, and a list of additional Ingress controllers is available in the Kubernetes documentation.

In 1.19, Ingress graduates to general availability and is added to the networking v1 APIs. As part of this milestone, there are some key differences in v1 Ingress objects, including schema and validation changes. For example, the `pathType` field no longer has a default value and must be specified.
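For illustration, here is a minimal sketch of a v1 Ingress with the now-mandatory `pathType` set explicitly (the host, service name, and port are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        pathType: Prefix              # must be specified in v1; no default is applied
        backend:
          service:                    # the v1 schema nests the backend service here
            name: app-service
            port:
              number: 80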

For more details, see the following:

PR: ingress: Add Ingress to v1 API and update backend to defaultBackend

KEP: Graduate Ingress to GA

seccomp goes GA

Seccomp is a security facility in the Linux kernel for restricting the system calls that applications can make. Seccomp was introduced as a Kubernetes feature in alpha back in version 1.3. Until now, applying seccomp profiles to pods required using alpha annotations on the pod or on a PodSecurityPolicy. In 1.19, seccomp graduates to GA with a new `seccompProfile` field added to pod and container securityContext objects. Note that support for the existing annotations is deprecated and will be removed in version 1.22. Additionally, to ensure kubelet backwards compatibility, seccomp profiles will be enforced in the following priority order:

  1. Container-specific field.
  2. Container-specific annotation.
  3. Pod-wide field.
  4. Pod-wide annotation.

In conjunction with this change, the pod sandbox container is also configured with a separate `runtime/default` seccomp profile.
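As a sketch of the new GA fields, the pod below sets a pod-wide `RuntimeDefault` profile and overrides it for one container with a localhost profile (the profile path is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: audited-pod
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault            # pod-wide default
  containers:
  - name: app
    image: nginx
    securityContext:
      seccompProfile:
        type: Localhost               # container-specific override
        localhostProfile: profiles/audit.json   # path relative to the kubelet's seccomp profile root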

More details are covered in the following links:

PR: seccomp GA – Add new seccomp fields and update kubelet to use them

PR: Add seccomp least privilege for kuberuntime

KEP: Seccomp to GA

TLS 1.3 support

Kubernetes 1.19 addresses one of the recommendations from last year’s Kubernetes security audit by adding support for new TLS 1.3 cipher suites for use across Kubernetes components.
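As an illustrative sketch, assuming you manage the API server’s flags yourself, the minimum TLS version and allowed cipher suites could be pinned along these lines (the TLS 1.3 suite names follow Go’s naming):

kube-apiserver --tls-min-version=VersionTLS13 \
  --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256 \
  ...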

View the relevant PR and security audit findings here:

PR: Add support for TLS 1.3 ciphers

Security audit recommendation: TOB-K8S-037: Kubelet supports insecure TLS ciphersuites

Node debugging

Now available in alpha, running the `kubectl alpha debug` command will create and run a new pod in a node’s host OS namespaces, which can then be used to troubleshoot that node. This allows a user to inspect the node’s state directly and, for example, check the filesystem, execute additional debugging utilities, or initiate network requests from the host network namespace. Part of the motivation for this enhancement is to eliminate most uses of SSH for node debugging and maintenance.
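For example, a session along these lines starts a debugging pod on a specific node (the node name and image are placeholders; the node’s root filesystem is mounted in the pod at /host):

kubectl alpha debug node/my-node -it --image=busybox

Note that the debugging pod is not deleted automatically when the session ends, so remember to clean it up afterwards.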

Learn more at the following resources:

PR: kubectl debug: support debugging nodes

KEP: Node Troubleshooting with Privileged Containers

Admission webhook warnings

With this change in beta in 1.19, admission webhooks can now return non-fatal warnings to API clients making requests. This enhancement is intended to make it easy for users and cluster administrators to recognize problematic API use, including use of deprecated APIs from clients such as kubectl.
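A webhook surfaces a warning by populating the new `warnings` list in its AdmissionReview response. A minimal sketch (the message text is illustrative):

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid copied from the request>",
    "allowed": true,
    "warnings": [
      "spec.template.metadata.annotations[foo]: this annotation is deprecated; use bar instead"
    ]
  }
}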

The PR and broader proposal can be found at these links:

PR: Admission webhook warnings

KEP: Warning mechanism for use of deprecated APIs

Other notable changes

With the Kubernetes version 1.18 release a few months ago, we covered the release of the Pod Topology Spread feature in beta, which allows for simple definitions of complex pod layouts. A change in 1.19 automatically weights topologies, and better differentiation between nodes and zones yields more balanced results across constraints.
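The user-facing API itself is unchanged; for reference, a hedged sketch of a constraint that spreads matching pods across zones (the labels are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone    # spread across zones
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: web
  containers:
  - name: app
    image: nginx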

We also described the new Immutable Secrets and ConfigMaps feature in our coverage of version 1.18. That feature has now been promoted to beta.
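Marking a Secret or ConfigMap immutable is a single top-level field; a minimal sketch (the name and value are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: example-secret
immutable: true                 # once set, data cannot be changed; delete and recreate instead
data:
  password: cGFzc3dvcmQ=        # base64-encoded "password"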

Another change in 1.19 ensures that the default volume mount created for service account credentials has file permissions that enable increased security while running non-root containers.

Version 1.19 now supports JSON logging output from Kubernetes components by passing the flag `--logging-format=json`.
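For instance, starting a component with the flag switches its output to structured JSON; per the structured logging proposal, a log line resembles the following (the exact fields and values here are illustrative):

kube-controller-manager --logging-format=json ...
{"ts":1597082967075.39,"v":0,"msg":"Pod status updated","pod":{"name":"nginx-1","namespace":"default"},"status":"ready"}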

Finally, in 1.19 Kubernetes has changed terminology to reflect inclusive language.

Notable Deprecations

Hyperkube, an all-in-one binary for Kubernetes components, is now deprecated and will not be built by the Kubernetes project going forward.

Several older beta API versions are deprecated in 1.19 and will be removed in version 1.22. We will provide a follow-on update, since this means 1.22 will likely end up being a breaking release for many end users.

Looking Ahead

One feature enhancement that we have been tracking for some time is support for sidecar containers. This was slated to be released in 1.19 but has been postponed due to additional considerations by Kubernetes SIG-node. This enhancement will have a substantial impact for other projects such as Istio, and we will continue to track and provide updates as it works its way toward release in a future version of Kubernetes.


Set up Kubernetes Metrics Server and Horizontal Pod Autoscaler on IONOS Enterprise Cloud Kubernetes Clusters

Create a Kubernetes Metrics Server

1.    To clone the GitHub repository of metrics-server, run the following command:

git clone https://github.com/kubernetes-incubator/metrics-server.git
cd metrics-server/

2.    To install Metrics Server from the root of the Metrics Server directory, run the following command:

kubectl create -f deploy/1.8+/

3.    To confirm that Metrics Server is running, run the following command:

kubectl get pods -n kube-system

The output should look similar to the following:

$ kubectl get pods -n kube-system | grep metrics-server
metrics-server-85cc795fbf-79d72   1/1   Running   0          22s

Create a php-apache deployment and a service

1.    To create a php-apache deployment, run the following command:

kubectl create deployment php-apache --image=k8s.gcr.io/hpa-example

2.    To set the CPU requests, run the following command:

kubectl patch deployment php-apache -p='{"spec":{"template":{"spec":{"containers":[{"name":"hpa-example","resources":{"requests":{"cpu":"200m"}}}]}}}}'

Important: If you don’t set the CPU request correctly, the CPU utilization metric for the pod won’t be defined and the HPA can’t scale.

3.    To expose the deployment as a service, run the following command:

kubectl create service clusterip php-apache --tcp=80

4.    To create an HPA, run the following command:

kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

5.    To confirm that the HPA was created, run the following command:

kubectl get hpa
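Once the Metrics Server starts reporting CPU usage for the deployment, the output will look something like this (the values are illustrative; the target may show <unknown> for the first minute or so):

NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          30s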

6.    To create a pod to connect to the deployment that you created earlier, run the following command:

kubectl run --generator=run-pod/v1 -i --tty load-generator --image=busybox /bin/sh

7.    To generate load against the php-apache service from within the load-generator pod’s shell, run the following script:

while true; do wget -q -O- http://php-apache; done

Note: To exit the while loop and the tty session of the load generator pod, use CTRL + C to cancel the loop, and then use CTRL + D to exit the session.

8.    To see how the HPA scales the pod based on CPU utilization metrics, run the following command (preferably from another terminal window):

kubectl get hpa -w

The Metrics Server is now up and running, and you can use it to get resource-based metrics.

9.    To clean up the resources used for testing the HPA, run the following commands:

kubectl delete hpa,service,deployment php-apache
kubectl delete pod load-generator

A Comparison of Public Cloud Managed Kubernetes Services

In this article, I’ll provide some comparisons of public cloud vendors to help you decide where to run Kubernetes. Obviously, this assumes that you’ve already decided that Kubernetes is the way to go.
It’s important to understand the main features and capabilities of the major cloud providers, so I’ll present what I think are some clear criteria for choosing your target platform.

DIY or managed service?
Before I get into public cloud vendors, it’s important to highlight that Kubernetes is so modular, flexible, and extensible that it can be deployed on-prem, in a third-party data center, on any of the popular cloud providers, and even across multiple cloud providers. With such an array of choices, what should you do for your business and your peace of mind?
The answer, of course, is “it depends.”
Should you run your Kubernetes systems on-prem or in third-party data centers? You may have already invested a lot of time, money, and training in your bespoke infrastructure, but the challenges of DIY Kubernetes infrastructure become more and more burdensome as you invest time and operational cycles in standing up the environment and then managing it day to day.

Or should you run your Kubernetes system on one of the cloud providers? You may want to benefit from the goodness of Kubernetes without the headache of having to manage it and keep it in tip-top form with upgrades and security patching.

It’s also important to note that you’ll need to be containerized already. If you’re already there, then great; if not, taking that monolithic application to a brave new world is going to be a challenge, but one that brings benefits as you drive your business forward.

Choosing to run Kubernetes managed by your cloud provider is probably a no-brainer. You already run workloads in the cloud, right? A managed service replaces many layers of management, monitoring, and security that you would otherwise have to build, integrate with your processes, and maintain yourself, and for which you would need the in-house skill set.

There are actually quite a few cloud providers that support Kubernetes. I’ll focus here on the Big Three: Google’s GKE, Microsoft’s AKS, and Amazon’s EKS, and also provide a view of what IONOS Enterprise Cloud offers.


Google GKE (Google Kubernetes Engine)
Kubernetes, if you didn’t know already, came from Google, and GKE is Google’s managed Kubernetes offering. Google SREs manage the Kubernetes control plane for you, and you get auto-upgrades. Since Google has so much influence on Kubernetes and has used it as the container orchestration solution of the Google Cloud Platform from day one, it would be really weird if GKE didn’t have the best integration.

GKE may be the most up to date on releases. On GKE, you don’t have to pay for the Kubernetes control plane, which is important to bear in mind if controlling costs matters to your business (and I assume it does); with Google, you pay only for the worker nodes. Google can also provide GCR (Google Container Registry) and integrated central logging and monitoring via Stackdriver Logging and Stackdriver Monitoring, albeit at a steep price. If you’re interested in even tighter integration with your CI/CD pipeline, you can use Google Cloud Build, which adds even more cost. This is all great, but as with most PaaS offerings, once you get locked in, you’re locked in. The main thing to keep in mind is that flexibility is key with Kubernetes: most ancillary services can be bolted onto your hosted servers, so you’re not stove-piped into using the vendor’s tools if you don’t want to be.

GKE takes advantage of general-purpose Kubernetes concepts like Service and Ingress for fine-grained control over load balancing. If your Kubernetes service is of type LoadBalancer, GKE will expose it to the world via a plain L4 (TCP) load balancer. However, if you create an Ingress object in front of your service, GKE will create an L7 load balancer capable of doing SSL termination for you, and even allow gRPC traffic if you annotate it correctly. Of course, setting up your own Ingress controller is also possible should the need arise.
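To make the L4 case concrete, here is a hedged sketch of a LoadBalancer service (the names and ports are illustrative); putting an Ingress in front of a ClusterIP service gets you the L7 behavior instead:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer              # GKE provisions a plain L4 (TCP) load balancer for this
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080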


Microsoft Azure AKS (Azure Kubernetes Service)
Microsoft Azure originally had a solution called ACS that supported Apache Mesos, Kubernetes, and Docker Swarm, but in 2017 it introduced AKS as a dedicated Kubernetes hosting service.

AKS is very similar to GKE. It also manages a Kubernetes cluster for you free of charge. Microsoft has invested a lot in Kubernetes in general and AKS in particular. There is strong integration with Active Directory for authentication and authorization, integrated monitoring and logging, and Azure storage. You also get a built-in container registry, networking, and GPU-enabled nodes.
One of the most interesting features of AKS is its use of the virtual-kubelet project to integrate with ACI (Azure Container Instances). ACI takes away the need to provision nodes for your cluster.

Setting up a cluster on AKS takes a long time (20 minutes on average), and the startup time has high volatility (more than an hour on rare occasions). The developer experience is relatively poor: you need some combination of a web UI (Azure Portal Manager), PowerShell, and plain CLI to provision and set everything up.


Amazon AWS EKS (Elastic Kubernetes Service)
Amazon was a little late to the Kubernetes scene; it always had its own ECS (Elastic Container Service) container orchestration platform. But customer demand for Kubernetes was overwhelming, and many organizations were running their own Kubernetes clusters on EC2 using kops or similar tools, so eventually AWS decided to provide proper support with official integrations. EKS today integrates with IAM for identity management, AWS load balancers, networking, and various storage options.

AWS has promised integration with Fargate (similar to AKS + ACI). This will eliminate the need to provision worker nodes and potentially let Kubernetes automatically scale up and down for a truly elastic experience.
Note that on EKS you have to pay for the managed control plane. If you just want to play around and experiment with Kubernetes, or have lots of small clusters, that might be a limiting factor.

As far as performance goes, EKS takes 10–15 minutes to start a cluster. EKS is probably not the simplest to set up either: as with AKS, you’re moving between the management console, IAM, and the CLI to get the cluster up and running. It’s probably the most complex setup of the three cloud vendors, so in reality it could take a little under an hour from initial deployment to a running cluster.

IONOS Enterprise Cloud
So what about the other vendors? Well, there are quite a few, from the likes of Oracle, IBM, and DigitalOcean, and there is also IONOS Enterprise Cloud. If I were to compare how we at IONOS fare against the top three, I would say there is some catching up to do on ancillary PaaS services, but for creating a cluster and providing worker nodes, IONOS does this with an ease and simplicity that actually beats the competition. IONOS has UI integration with the Data Center Designer, which is missing from the top three providers; it’s such a simple process to get up and running that clusters can be ready to use in under 15 minutes.

Having the ability to choose the amount of CPU and RAM is a huge deal: you’re not forced into fixed sizes for your worker nodes. Adding and removing worker nodes is simple too; just remember to drain your nodes before you remove them. IONOS also has full API integration; in fact, a cluster and worker nodes can be up and running with four API calls. With IONOS you get dedicated CPU and RAM resources, so performance is a given. IONOS also brings GDPR-compliant cloud infrastructure without having to worry about the US CLOUD Act, which should be at the top of your list of cloud service requirements.

There are also services such as persistent volumes in the shape of HDD and SSD storage, and load balancer services, just like the other vendors, with more services on the roadmap to come. And as it’s vanilla Kubernetes, it’s easy to add things like Istio, Prometheus, Grafana, and Ingress load balancers too. I’ve not even touched on cost yet, but compared to the other vendors, IONOS comes in under the competition’s reserved-instance pricing, making it very attractive. Here are some rough figures to help you determine costs when choosing a Kubernetes platform. This monthly cost comparison assumes that you have 3 master nodes, 15 worker nodes, and each node has 4 vCPU and 16 GB of RAM.

| | AWS | Google Cloud Platform | Microsoft Azure | IONOS |
| --- | --- | --- | --- | --- |
| Hourly node price | £0.18 | £0.18 | £0.17 | £0.15 |
| Billable nodes | 18 (3 control plane) | 15 (free control plane) | 15 (free control plane) | 15 (free control plane) |
| Monthly compute cost | £2332 | £2194 | £1836 | £1620 |
| Instance type | M5 xLarge | n1-standard-4 | D4 v3 (4 vCPU, 16 GB) | 4 vCPU (2 dedicated CPU cores), 16 GB RAM |

Conclusion
Kubernetes itself is platform agnostic. In theory, you can easily switch from any cloud platform to another as well as run on your own infrastructure. In practice, when you choose a platform provider you often want to utilize and benefit from their specific services that will require some work to migrate to a different provider or on-prem.

There are a number of container orchestration tools out there, with the likes of Rancher, Swarm, etc., but it looks like Kubernetes has won the container orchestration wars. The big question for you is where you should run it. Usually, the answer is simple: if you’re already running on one of the cloud providers, check that your vendor is the right choice. This is where multi-cloud benefits you, allowing you to leverage the best the cloud has to offer so you can run your Kubernetes cluster with confidence.

Deploying a Managed Kubernetes Cluster in 15 minutes with IONOS Enterprise Cloud

First, log into the IONOS dashboard using your account credentials

If required, you can also create a group with the relevant privileges for creating and managing Kubernetes clusters within the account. Select the User Manager from the Manager Resources menu on the banner.

A group can be created and the associated privileges applied for creating Kubernetes clusters.

To start creating a Kubernetes cluster, select Kubernetes Manager from the Manager Resources menu.

Click ‘Create Cluster’

Provide a name for the cluster and click ‘Create Cluster’

The cluster will be created; this should take around 3-5 minutes.

Once created, the status will change to ‘Green’, and node pools can then be created within the cluster. Click ‘Create node pool’ to create a node pool.

Provide a name for the node pool

A node pool will be created within a data center, and you will be given the option either to create a new data center or to select an existing one.

With a data center selected, provide the number of nodes to provision, along with the CPU architecture, number of cores, RAM quantity, and other requirements such as availability zone, storage type, and storage size for the nodes.

A validation request is presented; click OK and the node pool will be provisioned.

To view the provisioning status of the node pool, click the expansion arrow to display more information. Note that the nodes are provisioned in the background.

While the nodes are being provisioned, you can take the opportunity to download the kubeconfig file, which is used for API access to the cluster via kubectl.

The status of the node pool will turn ‘green’ once the node pool is in an available state.

The kubeconfig file can be copied to your workstation of choice, and then administration of the cluster and node pools can begin. In the example below, the kubeconfig is exported and some basic checks are made to ensure the cluster is in a running state.
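For instance (assuming the file was saved as kubeconfig.yaml in the current directory):

export KUBECONFIG=$PWD/kubeconfig.yaml
kubectl cluster-info                # confirm the control plane is reachable
kubectl get nodes -o wide           # confirm the node pool has joined and is Ready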

And that’s how easy it is to provision a Kubernetes cluster with the IONOS Enterprise Cloud in around 15 minutes.