Release Notes

1.28.15-gs0, 1.29.9-gs0, 1.30.5-gs0 (released: 2024-11-06)

Kubernetes Release Notes for 1.30

Kubernetes Release Notes for 1.29

Kubernetes Release Notes for 1.28

1.27.16-gs0, 1.28.13-gs0, 1.29.8-gs0, 1.30.4-gs0 (released: 2024-09-24)

Kubernetes Release Notes for 1.30

Kubernetes Release Notes for 1.29

Kubernetes Release Notes for 1.28

Kubernetes Release Notes for 1.27

1.30.3-gs0 (released: 2024-08-30)

Kubernetes Release Notes for 1.30

Improvements

  • Node Pools Support (Labs version). For more details, see here.

1.27.11-gs3, 1.28.10-gs1, 1.29.6-gs1 (released: 2024-07-03)

Security Patches

1.28.10-gs0, 1.29.6-gs0 (released: 2024-06-21)

Kubernetes Release Notes for 1.29

Kubernetes Release Notes for 1.28

Improvements

1.29.3-gs2 (released: 2024-05-27)

Improvements

  • Configure the CoreDNS replica count based on the number of nodes
  • Improve GSK operation stability.
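A node-count-based replica rule like the one above can be sketched as a simple linear heuristic. The ratio (one replica per 8 nodes) and the floor of 2 replicas below are illustrative assumptions; the notes only state that the count scales with the number of nodes.

```python
import math

# Hypothetical CoreDNS scaling rule: one replica per `nodes_per_replica`
# nodes, never fewer than `min_replicas`. The concrete coefficients are
# assumptions, not documented GSK values.
def coredns_replicas(nodes: int, nodes_per_replica: int = 8, min_replicas: int = 2) -> int:
    return max(min_replicas, math.ceil(nodes / nodes_per_replica))
```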

1.29.3-gs1 (released: 2024-04-23)

Bug fixes

  • Fix the Cilium configuration to add a missing source NAT rule on worker nodes for traffic from pods to PaaS servers and non-GSK servers in the same network

1.29.3-gs0 (released: 2024-04-11)

Kubernetes Release Notes for 1.29

Known Issues

  • Platform services and other non-GSK servers that are connected to the cluster’s private network are not reachable from inside pods, unless these pods are using host networking. This limitation stems from the way masquerading is performed with native routing in Cilium.

We are working on solutions for these issues and will provide them in upcoming GSK releases.

Improvements

Breaking changes

These might require further action from the customer.

  • Introduction of Cilium as the CNI plugin

    During the GSK upgrade to 1.29, a migration from Flannel to Cilium as the CNI plugin will be carried out.

    This makes it possible to use network policies to control cluster network traffic; beyond that, features such as cluster traffic encryption with WireGuard can be configured.

    Customers are advised to validate their workloads to ensure that no workload-specific configuration causes problems. Network policies are not enabled by default after an upgrade; this needs to be done by the customer. For more information, see the product documentation.
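Once Cilium is in place, traffic can be restricted with standard Kubernetes NetworkPolicy objects. As a sketch, here is the dict/JSON form of a minimal default-deny-ingress policy (the namespace name "demo" is an illustrative assumption; apply the manifest with your usual tooling):

```python
import json

# Standard Kubernetes NetworkPolicy denying all ingress traffic to pods in
# one namespace (egress is unaffected). An empty podSelector matches every
# pod in the namespace; listing "Ingress" with no ingress rules denies all.
default_deny_ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress", "namespace": "demo"},
    "spec": {
        "podSelector": {},
        "policyTypes": ["Ingress"],
    },
}

manifest = json.dumps(default_deny_ingress, indent=2)
```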

1.28.7-gs2, 1.27.11-gs1, 1.26.14-gs1, and 1.25.16-gs1 (released: 2024-04-02)

Improvements

  • Improve GSK operation stability.
  • Improve stability of PVC operations.

1.24.17-gs1 (released: 2024-04-02)

Bug fixes

  • Fix duplicate storages for the same PVC.
  • Fix an issue where a k8s cluster could not be deprovisioned or rolled back during provisioning due to a failed server deletion.

Improvements

  • Add support for volume cloning.
  • Improve GSK operation stability.
  • Improve stability of PVC operations.

1.28.7-gs1, 1.27.11-gs0, 1.26.14-gs0, and 1.25.16-gs0 (released: 2024-03-21)

Kubernetes Release Notes for 1.27

Kubernetes Release Notes for 1.26

Kubernetes Release Notes for 1.25

Bug fixes

  • Fix duplicate storages for the same PVC.
  • Fix an issue where a k8s cluster could not be deprovisioned or rolled back during provisioning due to a failed server deletion.

Improvements

  • Add support for volume cloning.
  • Improve GSK operation stability.

1.28.7-gs0 (released: 2024-02-19)

Kubernetes Release Notes for 1.28

Bug fixes

  • Fixed an issue where resource reservations were not applied in certain situations

1.28.6-gs0 (released: 2024-02-14)

Kubernetes Release Notes for 1.28

Breaking changes

These might require further action from the customer.

  • Introduction of Kubelet and System resource reservations

    Every worker node now reserves 450MiB of memory for system resources, and between 640MiB and 1465MiB for Kubernetes components, depending on the node size. At the same time, we’re reducing the pod eviction threshold from 1000MiB to 100MiB.

    This will ensure greater node stability and recovery in case of out-of-memory (OOM) and/or CPU starvation events due to workload overprovisioning or spiking.

    Customers are advised to reevaluate their worker node sizes, to match workload requirements. For more information see product documentation.

  • Limiting the number of pods that can run on nodes with less than 4GiB of memory

    On worker nodes with 2GiB or 3GiB of memory, we are reducing the concurrent number of pods (maxPods) to 35. This reduces the impact of the new resource reservations on the amount of memory available for workloads, since reservation amounts are a function of node pod density.

    On worker nodes with 4GiB of memory and above, the maxPods limit remains at 110.
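The arithmetic behind these reservations can be sketched as follows. The 450MiB system reservation and the 100MiB eviction threshold come from the notes above; the kube-reserved default used below is the documented lower bound (640MiB), not a real per-size lookup, which varies up to 1465MiB with node size.

```python
# All values in MiB. SYSTEM_RESERVED and EVICTION_THRESHOLD are the figures
# stated in the release notes; the kube_reserved default is the documented
# lower bound of the 640-1465 MiB range (illustrative, not a real lookup).
SYSTEM_RESERVED = 450
EVICTION_THRESHOLD = 100

def allocatable_memory(node_mib: int, kube_reserved: int = 640) -> int:
    """Memory left for workloads after node-level reservations."""
    return node_mib - SYSTEM_RESERVED - kube_reserved - EVICTION_THRESHOLD

# A 4 GiB node with the minimum kube-reserved value:
# 4096 - 450 - 640 - 100 = 2906 MiB available for pods.
```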

Improvements

  • Improved worker node stability in memory/CPU pressure scenarios
  • Cluster Autoscaler updated for GSK v1.28
  • Upgrade process performance and reliability improvements

Bug fixes

  • Includes the latest version of the CSI provisioner, which fixes an issue where a pod could remain stuck in Pending indefinitely on a deleted node.
  • Increased Flannel pod resource limits to avoid starvation in larger clusters.
  • Multiple supply-chain security updates, addressing CVE-2023-47108, CVE-2023-44487, and CVE-2022-41723

1.27.5-gs0, 1.26.8-gs0, 1.25.13-gs0, and 1.24.17-gs0 (released: 2023-09-06)

Kubernetes Release Notes for 1.27

Kubernetes Release Notes for 1.26

Kubernetes Release Notes for 1.25

Kubernetes Release Notes for 1.24

Bug fixes

  • Fix an issue causing custom storage classes to be deleted during the upgrade process. All cluster upgrades are available again.

1.25.12-gs0 (released: 2023-08-28)

Kubernetes Release Notes for 1.25

Improvements

1.26.7-gs0 (released: 2023-08-28)

Kubernetes Release Notes for 1.26

Improvements

1.27.4-gs0 (released: 2023-08-23)

Kubernetes Release Notes for 1.27

Improvements

1.26.5-gs0 (released: 2023-06-14)

Kubernetes Release Notes for 1.26

Improvements

  • Support rocket (local) storage to provision local PVCs for workloads that require storage with extreme IOPS. How to use rocket storage with GSK can be found here.

1.25.8-gs0, 1.24.12-gs0, 1.23.17-gs0, and 1.22.17-gs1 (released: 2023-03-30)

Kubernetes Release Notes for 1.25

Kubernetes Release Notes for 1.24

Kubernetes Release Notes for 1.23

Bug fixes

Improvements

  • Includes the latest version of the CSI-plugin, which adds the following labels to the storage:
    • SuccessfulAttachVolume: means the CSI-plugin attached the storage at least once during its lifetime.
    • VolumeToBeDeleted: means the CSI-plugin received a delete action from the provisioner to remove the storage (the PVC got deleted). The protection label will be removed, and the customer will be able to delete the storages via the API or the panel if they are not removed automatically by the CSI-plugin.

1.25.6-gs0 (released 2023-02-23)

Kubernetes Release Notes for 1.25

Improvements

  • Cluster private network IP range can now be configured via parameter k8s_cluster_cidr
    • accepts a private /16 CIDR block, which is then broken down into a /19 node block, a /18 service block and a /17 pod block
  • inotify sysctls have been increased
    • fs.inotify.max_user_instances is now 8192 (was: 128)
    • fs.inotify.max_user_watches is now 524288 (was: 327875)
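The /16 split described above can be illustrated with Python's ipaddress module. The base network 10.42.0.0/16 and the placement of each block within the /16 are assumptions for illustration; only the prefix lengths (/19 nodes, /18 services, /17 pods) come from the notes.

```python
import ipaddress

# Hypothetical k8s_cluster_cidr value; only the prefix lengths are
# documented, the offsets of the blocks within the /16 are assumptions.
cluster = ipaddress.ip_network("10.42.0.0/16")

halves = list(cluster.subnets(new_prefix=17))
pod_block = halves[0]                                  # /17 for pods
quarters = list(halves[1].subnets(new_prefix=18))
service_block = quarters[0]                            # /18 for services
node_block = next(quarters[1].subnets(new_prefix=19))  # /19 for nodes
```

The three blocks are non-overlapping and together occupy seven eighths of the /16, leaving one /19 spare under this layout.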

1.20.15-gs5, 1.21.14-gs4, 1.22.17-gs0, 1.23.15-gs1, and 1.24.9-gs0 (released: 2023-01-19)

Kubernetes Release Notes for 1.24

Kubernetes Release Notes for 1.23

Improvements

  • Includes the latest version of the CSI-plugin, which provisions PVCs (storages) with 0 reserved blocks. The CSI-plugin also tunes previously provisioned PVCs (storages) by setting reserved blocks to 0, so the customer does not need to perform any further action.
  • GSK upgrade from 1.23 to 1.24 is enabled

1.24.8-gs0 (released: 2023-01-04)

Kubernetes Release Notes for 1.24

Breaking change

  • Removal of Dockershim

    Dockershim was officially dropped by Kubernetes in 1.24. This means Kubernetes no longer uses Docker as a container runtime. We have now switched to containerd as our container runtime.

    This might require further action from the customer.

    For most customers this should have no impact. However, we encourage you to check whether you are affected on the official K8s docs: Check whether dockershim removal affects you

  • The logging has been changed to the standard CRI log format

    Earlier versions of GSK used Docker's journald log driver, which stored the logs in /var/log/journal. You can find out more about the CRI log format

    If you are using a log shipper you have to adjust the log shippers config in order to retrieve logs after the update.

    Now the logs can be found in /var/log/containers/*.log and /var/log/pods/*/*/*.log.

  • The Docker Engine socket /var/run/docker.sock can no longer be accessed

    Docker Engine was replaced with containerd. You can find out more about the Kubernetes containerd integration.
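If you process the new log files under /var/log/containers yourself, each line follows the CRI format `timestamp stream tag message`, where the tag is `F` for a full line or `P` for a partial (continued) one. A minimal parser sketch, with an illustrative sample line:

```python
import re

# CRI log lines: "<RFC3339Nano timestamp> <stdout|stderr> <P|F> <message>"
CRI_LINE = re.compile(r"^(\S+) (stdout|stderr) ([PF]) (.*)$")

def parse_cri_line(line: str) -> dict:
    m = CRI_LINE.match(line.rstrip("\n"))
    if m is None:
        raise ValueError(f"not a CRI log line: {line!r}")
    ts, stream, tag, message = m.groups()
    return {"time": ts, "stream": stream, "partial": tag == "P", "log": message}

sample = "2023-01-04T10:00:00.123456789Z stdout F container started"
record = parse_cri_line(sample)
```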

Improvements

  • The nodes use Ubuntu 22.04
  • The nodes use containerd as the container runtime instead of Docker engine, as dockershim was removed from Kubernetes 1.24

1.23.15-gs0 and 1.22.16-gs0 (released: 2022-12-16)

Improvements

  • The nodes use Ubuntu 22.04
  • CoreDNS is now scheduled evenly across the worker nodes
  • Includes the latest version of the csi-plugin

1.21.14-gs2, 1.20.15-gs3, and 1.19.16-gs3 (released: 2022-11-14)

Bug fixes

  • Fix the upgrade of the csi-plugin for worker nodes, so the customer can collect metrics of storages (PVs).
  • Fix the missing csi-plugin on the surge node, so the workload will be re-scheduled onto the surge node once the csi-plugin is ready to handle PVCs
  • Fix the scale-in of the surge node after the upgrade is done

1.21.14-gs1, 1.20.15-gs2, 1.19.16-gs2 (released: 2022-09-12)

Bug fixes

  • Fix an issue with the surge node upgrade, where the new configuration was not saved for further operations such as scale-out/in.

1.21.14-gs0, 1.20.15-gs1, 1.19.16-gs1 (released: 2022-07-19)

Improvements

  • Upgrade k8s: v1.21.14, v1.20.15, v1.19.16.
  • Support PVC volume usage metrics
  • Support PVC volume health
  • Support surge upgrades to avoid resource shortage during the upgrade

1.21.11-gs0, 1.20.15-gs0, 1.19.16-gs0

Improvements

  • Upgrade k8s: v1.21.11, v1.20.15, v1.19.16.
  • Avoid FailedScheduling warnings for pods with PVCs.
  • Spread pods across nodes evenly via PodTopologySpread.

Bug fixes

  • Fix an issue where storage could not be deleted when the StorageClass has reclaimPolicy: Retain.
  • Fix scale-out/in failing when one of the nodes is down.
  • Fix an issue where k8s did not recursively change ownership and permissions of a volume’s contents to match the fsGroup specified in a Pod’s securityContext when that volume is mounted.