Kubernetes
Upgrading Kubernetes v1.19.0 to v1.20.0? Read this first!
By HiFX Engg. Team | March 01, 2021 | 6 min read
Kubernetes, an open-source, extensible platform for managing containerized workloads and services, is known for its service discovery, load balancing, automated mounting of storage systems, automated rollouts, bin packing, self-healing, and management of sensitive configuration.
When creating a new Kubernetes cluster with the Container Engine, you must know which Kubernetes version will run on the control plane nodes and the worker nodes. A Kubernetes version is generally written as x.y.z, where x is the major release, y the minor release, and z the patch release. The latest minor release of Kubernetes, the third and final release of 2020 with patch support for approximately one year, is v1.20; it can be downloaded as kubernetes.tar.gz or kubernetes-src.tar.gz, or from GitHub. The release incorporates 44 enhancements, of which 11 have become stable, 15 are entering beta, and 16 are moving to alpha.
The Major Themes and Changes to Expect While Upgrading to v1.20
The major changes in v1.20 since v1.19 are listed below:
- Deprecation of Dockershim: Dockershim, the container runtime interface (CRI) shim for Docker, has been deprecated. Docker-produced images will continue to work in your cluster with all CRI-compliant runtimes. However, you should ensure that the worker nodes are using a supported container runtime, as support for Docker will be removed in a future release. Your service provider can assist with the right upgrade planning and testing.
- Handling of exec probe timeouts: Exec probes now honor their configured timeout, a bug fix that may change the behavior of existing pod definitions. The previous behavior can be restored by setting the ExecProbeTimeout feature gate to false.
- External authentication for client-go: client-go credential plugins can now be passed cluster-specific configuration via the KUBERNETES_EXEC_INFO environment variable.
- Availability of the CronJob controller v2 through a feature gate: An alternative CronJob controller implementation, which uses informers instead of polling, is available as an alpha feature in v1.20.
- Easy setting of process ID (PID) limits: PID limits have graduated to general availability (GA) with both the SupportNodePidsLimit and SupportPodPidsLimit feature gates.
- Default enablement of API Priority and Fairness (APF): This lets the kube-apiserver classify incoming requests by priority.
- Re-implementation of IPv4/IPv6 dual stack: Based on user and community feedback, dual-stack Services are supported, with both an IPv4 and an IPv6 service cluster IP address assignable to a single Service. In addition, a Service can be converted from a single to a dual IP stack and vice versa.
- Introduction of graceful node shutdown: The alpha GracefulNodeShutdown feature makes the kubelet aware of node system shutdowns, allowing pods to terminate gracefully during a shutdown.
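Several of the changes above are driven through kubelet configuration. A minimal sketch of a KubeletConfiguration fragment is shown below; the field values are illustrative assumptions, not recommendations, and should be checked against your kubelet version's documentation.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Revert the exec-probe timeout behavior change if existing pod
  # definitions depend on the old behavior.
  ExecProbeTimeout: false
  # Alpha in v1.20: let the kubelet react to node system shutdowns.
  GracefulNodeShutdown: true
# PID limits graduated to GA: cap the PIDs a single pod may consume.
podPidsLimit: 4096
# Grace period granted to pods during a node shutdown (illustrative value).
shutdownGracePeriod: 30s
```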
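Regarding the Dockershim deprecation above, one way to see whether worker nodes still rely on Docker is to inspect each node's container runtime. The sketch below embeds a simplified sample of `kubectl get nodes -o wide` output (the real output has more columns) so the filter can be demonstrated offline; on a live cluster you would pipe the actual command output instead.

```shell
# Simplified sample of `kubectl get nodes -o wide` output; node names,
# versions, and runtimes here are illustrative.
sample_nodes='NAME     STATUS   VERSION   CONTAINER-RUNTIME
node-1   Ready    v1.19.0   docker://19.3.13
node-2   Ready    v1.19.0   containerd://1.4.3'

# Print the names of nodes whose runtime is Docker (i.e. served by dockershim).
echo "$sample_nodes" | awk 'NR > 1 && $4 ~ /^docker/ { print $1 }'
```

Nodes reported by this filter would need to be migrated to a CRI-compliant runtime such as containerd before Docker support is removed.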
The predominant feature of v1.20 is the graduation of volume snapshot operations in the Container Storage Interface (CSI) to general availability (GA). This makes it easy to trigger and incorporate stable snapshot operations in every Kubernetes environment and with the associated storage providers. Another noteworthy attribute of v1.20 is the debut of two beta features that give Kubernetes users and administrators adequate control over the permissions of volumes mounted within a pod. Further, this release graduates kubectl debug, which provides direct support for debugging workflows, to beta.
The new release supports troubleshooting the following scenarios:
- Workloads that crash on startup: A copy of the pod that uses a different command or image can be created to troubleshoot it.
- Distroless containers: A new container with debugging tools can be appended to a copy of the pod; alternatively, an ephemeral container can be used.
- Affected node: A node can be troubleshot by creating a container that runs in the host namespaces and has access to the host's filesystem.
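The three scenarios above map roughly onto `kubectl debug` invocations as follows; the pod, container, node, and image names are placeholders.

```shell
# 1. Workload crashing on startup: create a copy of the pod that runs a
#    shell instead of the original command, so it can be inspected.
kubectl debug mypod -it --copy-to=mypod-debug --container=app -- sh

# 2. Distroless container: attach an ephemeral debug container with tooling,
#    targeting the process namespace of the application container.
kubectl debug mypod -it --image=busybox --target=app

# 3. Affected node: run an interactive pod on the node using the host's
#    namespaces, with the host filesystem mounted at /host.
kubectl debug node/mynode -it --image=ubuntu
```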
Further, the built-in kubectl debug command now takes priority over any kubectl plugin named 'debug'. Users with such a plugin will need to rename it.
Other advancements in v1.20 include:
- Integration of Go 1.15.5
- Beta version of non-recursive volume ownership and permission
- Beta version of the CSI driver policy for FSGroup
- An alpha feature for security improvement in CSI drivers
- Pod resource metrics
- Graduation of RuntimeClass and API-type defaults to stable
- Graduation of TokenRequest / TokenRequestProjection to GA
- Exclusive shipping of the cloud controller manager by the respective cloud providers
The Systematic Order to Upgrade From v1.19 to v1.20
The following prerequisites for kube-apiserver, kube-controller-manager, kube-scheduler, cloud-controller-manager, kubelet, and kube-proxy must be ensured while upgrading to v1.20.
Kube-apiserver
- In a single-instance cluster, the existing kube-apiserver instance must be at v1.19.
- In a high-availability (HA) cluster, the kube-apiserver components can be at v1.19 or v1.20, guaranteeing a maximum skew of one minor version: while the newest kube-apiserver may be v1.20, the oldest must be v1.19.
- The kube-controller-manager, kube-scheduler, and cloud-controller-manager components that communicate with the API server must be at v1.19, ensuring they are no newer than the existing API server version.
- The kubelet components must be at v1.18 or v1.19, ensuring they are no newer than the existing API server version.
- Registered admission webhooks, such as ValidatingWebhookConfiguration and MutatingWebhookConfiguration, should be able to handle any new versions of the REST resources, and any new fields, added in v1.20.
- After ensuring all of the above, kube-apiserver can be upgraded from v1.19 to v1.20.
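Assuming the cluster was set up with kubeadm (other installation methods differ), the control-plane upgrade itself can be sketched as follows; the package version pin is illustrative for a Debian-based setup.

```shell
# Upgrade the kubeadm binary first, then let it validate the upgrade plan.
apt-get update && apt-get install -y kubeadm=1.20.0-00
kubeadm upgrade plan

# Apply the upgrade on the first control-plane node; on additional
# control-plane nodes, run `kubeadm upgrade node` instead.
kubeadm upgrade apply v1.20.0
```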
The API change guidelines and the API deprecation policy require that kube-apiserver upgrades never skip a minor version, even in single-instance clusters.
Kube-controller-manager, kube-scheduler, cloud-controller-manager
The upgrade of kube-apiserver to v1.20 must be completed before upgrading the kube-controller-manager, kube-scheduler, and cloud-controller-manager.
Kubelet
- As with the components above, the kube-apiserver components must be upgraded to v1.20 before upgrading the kubelet component to v1.20.
- Do not forget to drain the pods from a node before upgrading its kubelet.
- CAUTION: Running a kubelet two minor versions older than kube-apiserver is not recommended.
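A per-node kubelet upgrade can be sketched as below; the node name is a placeholder and the package pin is illustrative for a Debian-based setup.

```shell
# Drain the node so workloads move elsewhere before touching the kubelet.
kubectl drain <node-name> --ignore-daemonsets

# Upgrade and restart the kubelet on the node.
apt-get update && apt-get install -y kubelet=1.20.0-00
systemctl restart kubelet

# Mark the node schedulable again once it reports Ready at the new version.
kubectl uncordon <node-name>
```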
Kube-proxy
There are three prerequisites to consider for the kube-proxy upgrade.
- The version of kube-proxy must be the same as the kubelet version on the node.
- The version of kube-proxy must not be newer than the kube-apiserver component.
- The oldest allowed kube-proxy version is two minor versions older than kube-apiserver.
NOTE: Version skew among the kube-apiserver components in an HA cluster narrows the range of usable kubelet, kube-controller-manager, kube-scheduler, cloud-controller-manager, and kubectl versions.
Notably, kubectl is supported within one minor version (older or newer) of kube-apiserver: if kube-apiserver is at v1.20, kubectl v1.21, v1.20, and v1.19 are supported.
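The kubelet-vs-apiserver skew rule discussed above can be expressed as a small check; this is a minimal sketch, assuming versions written as "major.minor" or "major.minor.patch" (e.g. "1.20" or "1.20.0").

```shell
# Extract the minor version number from "major.minor[.patch]".
minor_of() { echo "${1#*.}" | cut -d. -f1; }

# Succeeds (exit 0) when a kubelet at version $2 may talk to an apiserver at
# version $1: the kubelet must not be newer, and at most two minors older.
kubelet_skew_ok() {
  api=$(minor_of "$1")
  kubelet=$(minor_of "$2")
  [ "$kubelet" -le "$api" ] && [ $((api - kubelet)) -le 2 ]
}

kubelet_skew_ok 1.20 1.18 && echo "v1.18 kubelet: allowed"
kubelet_skew_ok 1.20 1.17 || echo "v1.17 kubelet: too old"
```

The same shape of check applies to kube-proxy, whose oldest allowed version is likewise two minors behind kube-apiserver.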
Prevalent Drawbacks of v1.20
Absence of accelerator metrics (AcceleratorStats) in Summary API within kubelet.
Though the changes may seem a bit confusing at first, extended interaction with Kubernetes will make them feel natural in the long run.
Summary
The deprecation of Dockershim is a factor to consider: since support for Docker will eventually be removed, it is better to begin planning for the upgrade now.