Sneak Peek at Kubernetes 1.35: What I’m Watching#

Kubernetes keeps moving fast, and version 1.35 is next up with a planned release on December 17, 2025. This one leans into resource management, security, and compatibility, while also cutting some older baggage. Think more efficient clusters, fewer surprises during scaling, and a hard push toward modern Linux and container runtimes.

Here is how I look at 1.35 from a platform and DevOps point of view.

In Place Pod Resource Updates Hit GA#

One of the biggest quality of life wins in Kubernetes 1.35 is In-Place Update of Pod Resources reaching GA.

This feature first appeared as alpha in 1.27 and went beta in 1.33. It lets you change CPU and memory requests and limits for running pods without killing and recreating them.

In practice, that means you can vertically scale a workload on the fly. Perfect for noisy or spiky apps in production, where you would rather tune resources than restart a pod and hope the readiness checks are fast.

Under the hood, the Container Runtime Interface exposes an UpdateContainerResources API that works on both Linux and Windows, and you can see the updated values through ContainerStatus. The end result is less downtime, cleaner optimizations, and a much nicer story for high availability environments.
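
As a sketch of what this looks like in practice, a container can declare per-resource resize behavior via `resizePolicy` (the pod name and image below are placeholders, and the exact policy values shown follow the beta shape of the feature):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo                    # hypothetical example pod
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired       # CPU changes apply in place, no restart
    - resourceName: memory
      restartPolicy: RestartContainer  # memory changes restart this container
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 512Mi
```

New values are then applied through the pod's `resize` subresource, for example with `kubectl patch pod resize-demo --subresource resize --patch '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"750m"}}}]}}'`.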

If you run big, long lived workloads and hate disruptive rollouts just to tweak resources, this is a feature you should plan to adopt quickly.

New Alpha: Node Declared Features for Smarter Scheduling#

Kubernetes 1.35 brings in a new alpha called Node Declared Features. It adds a .status.declaredFeatures field to the Node API.

The idea is simple but powerful. Instead of manually labeling nodes with what they support, the node can effectively advertise its own capabilities, and the scheduler and admission controllers can use that to place pods only where they actually fit.

This is especially useful in messy real world clusters with mixed hardware, different OS versions, or specialized nodes for AI workloads. For example, you could avoid accidentally dropping a GPU hungry workload on a node that has no accelerators at all.
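
The exact schema is still settling while the feature is alpha, but conceptually a node status could end up looking something like this (the node name and feature names here are invented for illustration; KEP 5328 defines the real shape):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-gpu-01           # hypothetical node
status:
  declaredFeatures:             # advertised by the node itself, not hand-applied labels
  - UserNamespacesSupport       # illustrative entries, not an official list
  - InPlacePodVerticalScaling
```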

If you run multi cloud, edge, or heterogeneous clusters, this is worth tracking early. The design work lives in KEP 5328 if you want to go deep on the mechanics.

Security and Identity: Pod Certificates Go Beta#

Pod Certificates move to beta in 1.35, and this is a big deal for workload identity and mTLS.

With this feature, the kubelet can request and mount short lived X.509 certificates straight into a pod using projected volumes. That makes it much easier to roll out mutual TLS between services without bolting on a full external PKI stack for every use case.

You get automatic rotation and revocation as part of the story, which fits nicely with zero trust style designs and service meshes like Istio. For some setups, this may reduce the need for sidecar style proxies dedicated just to certificate plumbing.
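
A rough sketch of consuming such a certificate through a projected volume (the signer name, image, and paths are placeholders, and field names may still shift during beta):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mtls-client                             # hypothetical example pod
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10
    volumeMounts:
    - name: workload-cert
      mountPath: /var/run/certs
      readOnly: true
  volumes:
  - name: workload-cert
    projected:
      sources:
      - podCertificate:
          signerName: example.com/internal-mtls # placeholder signer
          keyType: ED25519
          credentialBundlePath: credentialbundle.pem
```

The kubelet handles issuance and rotation, so the application just reads the bundle from the mounted path.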

It showed up as alpha in 1.34, so the beta in 1.35 is the signal to start testing it more widely if you care about securing east west traffic inside the cluster.

Smarter Scheduling with Numeric Taint Operators#

Taints and tolerations get more expressive in 1.35 with numeric operators like Gt (greater than) and Lt (less than), instead of only exact matches.

That means you can drive pod scheduling and eviction from numeric thresholds. A simple example would be a NoExecute taint that evicts pods when some node metric falls below a cutoff, say a performance score dropping under 95.
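
As an illustrative sketch (the taint key and value are invented, and the exact matching semantics are defined by the 1.35 feature), a toleration could use a numeric operator so a pod only stays on nodes whose taint value clears a threshold:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slo-sensitive-app         # hypothetical example pod
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10
  tolerations:
  - key: example.com/perf-score   # invented taint key, set by your own node tooling
    operator: Gt                  # tolerate only while the taint value is greater than 95
    value: "95"
    effect: NoExecute             # otherwise the pod gets evicted
```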

This is particularly nice for auto scaling setups or environments with strict SLAs. Instead of manually watching dashboards and reacting late, you can encode some of that logic into the cluster itself.

Beta Marches On: User Namespaces and OCI Image Volumes#

Two important security and workload management features keep maturing in beta.

  • User Namespaces, beta since 1.30, let you map the container root user to an unprivileged account on the host. That cuts down the blast radius for container escapes and moves you closer to rootless style operation, which is especially important for multi tenant clusters.

  • Mounting OCI Images as Volumes, beta since 1.33 and enabled by default in 1.35, allows you to mount content from OCI artifacts directly as volumes. You can use this for configuration, binaries, models, or other artifacts without building heavy container images or wiring in extra init containers. It decouples runtime images from data and configuration and can make things like sharing ML models across workloads a lot more efficient.
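
Both features surface as small pod spec knobs. A minimal sketch combining them (the image and artifact references are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-model-demo          # hypothetical example pod
spec:
  hostUsers: false                 # user namespaces: container root maps to an unprivileged host UID
  containers:
  - name: app
    image: registry.example.com/inference:latest          # placeholder application image
    volumeMounts:
    - name: model
      mountPath: /models
      readOnly: true
  volumes:
  - name: model
    image:                         # OCI image volume source
      reference: registry.example.com/models/weights:v3   # placeholder OCI artifact
      pullPolicy: IfNotPresent
```

The app container never has to bake the model into its own image; the artifact is pulled and mounted independently.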

If your platform is heavy on security or you are trying to clean up long lived container images, these two features are worth a serious look.

Big Breaking Change: cgroup v1 Support Is Gone#

This is the one that will burn people if they do not prepare.

In Kubernetes 1.35, cgroup v1 support is removed on Linux nodes. Kubernetes is now fully aligned with cgroup v2, which has been stable since 1.25 and generally provides better isolation, better accounting, and a cleaner interface.

The impact is simple. On systems that only have cgroup v1, the kubelet will refuse to start. So if you have older distributions, custom kernels, or hosts that never finished the move to cgroup v2, you need to fix that before upgrading your clusters to 1.35. A quick sanity check on each node is `stat -fc %T /sys/fs/cgroup`, which prints `cgroup2fs` on a cgroup v2 host.

Expect kubeadm and other tooling to validate this and complain loudly.

Deprecations You Should Not Ignore#

Kubernetes 1.35 also trims some older paths.

  • ipvs mode in kube-proxy on Linux is deprecated in favor of an nftables based backend. If you still rely on ipvs, put a migration plan on your list.

  • Support for older containerd versions in the 1.x family is dropped in line with containerd end of life. You will want to be on containerd 2.0 or later and keep an eye on signals like the kubelet_cri_losing_support metric.
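
Switching kube-proxy modes is a one-line configuration change; a minimal sketch of the relevant fragment:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"   # replaces the deprecated ipvs mode on Linux
```

Test the switch in a non-production cluster first, since connection handling details differ between backends.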

Nothing here is shocking, but ignoring it will absolutely bite you later.

New Alpha Features for Folks Living on the Edge#

Beyond the headline items, 1.35 adds a bundle of new alpha features, especially around AI and advanced scheduling.

Highlights include more capabilities in the Dynamic Resource Allocation framework for GPUs and other AI hardware, better mixed version upgrade proxying for smoother rolling updates, and stronger CSI security context integrations.

All of these are clearly aimed at AI, ML, and complex hybrid environments. They are alpha, so they are not for every production cluster yet, but they are a good preview of where Kubernetes is pushing next.