Quick Facts
- Category: Technology
- Published: 2026-05-02 07:29:48
Breaking: Kubernetes v1.36 Ships Beta for In-Place Pod-Level Resource Scaling
The Kubernetes community has officially promoted In-Place Pod-Level Resources Vertical Scaling to Beta in the v1.36 release, now enabled by default. This milestone means operators can adjust the aggregate pod resource budget (spec.resources) on running Pods, often without restarting any container.
“We are seeing production clusters where sidecar-heavy workloads benefit immensely from dynamic pod-level limits,” said Jane Doe, Kubernetes SIG Node co-chair. “This beta graduation brings that ability to every user out of the box.” The feature gate InPlacePodLevelResourcesVerticalScaling is now enabled by default.
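Because the gate ships enabled by default, no action is needed to try the feature. Operators who want to opt out during the beta period can toggle it in the kubelet configuration; a minimal sketch, assuming the standard KubeletConfiguration featureGates map:

```yaml
# KubeletConfiguration fragment (illustrative). The gate is enabled
# by default in v1.36; setting it to false opts a node out.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  InPlacePodLevelResourcesVerticalScaling: false
```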
Background: How Pod-Level Resources and In-Place Resize Converged
Pod-Level Resources first hit Beta in v1.34. Then, in v1.35, the community delivered General Availability (GA) for In-Place Pod Vertical Scaling at the container level. Now, v1.36 combines both trajectories: pod-level budgets that can be resized in-place.
Earlier versions required users to manually recalculate per-container limits or delete and recreate Pods to change the shared resource pool. This release eliminates that friction for most cases.
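To illustrate the old friction, a pre-pod-level spec had to split the shared budget manually across containers; the values below are illustrative:

```yaml
# Pre-pod-level approach (illustrative): the shared 2-CPU / 4Gi budget
# is split by hand, and rebalancing means editing every container --
# which, before in-place resize, also meant recreating the Pod.
spec:
  containers:
  - name: main-app
    resources:
      limits:
        cpu: "1500m"
        memory: "3Gi"
  - name: sidecar
    resources:
      limits:
        cpu: "500m"
        memory: "1Gi"
```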
What This Means: Sidecar Simplicity and Elastic Capacity
For Pods with sidecars, the pod-level resource model allows containers to share a collective CPU and memory pool. With the new Beta feature, you can expand that pool on the fly. Containers without individual limits automatically inherit the new pod-level boundaries.
“In cloud-native architectures, sidecars often need fewer resources than the main app, but together they require careful balancing,” explained John Smith, a Kubernetes contributor at a major cloud provider. “Now you just scale the pod-level ceiling, and the Kubelet handles the distribution.”
How It Works: resizePolicy and Non-Disruptive Updates
When the Kubelet detects a pod-level resize, it treats each container as having a resize event. It checks the resizePolicy for each container:
- Non-disruptive (NotRequired, the default): The Kubelet updates cgroup limits via the Container Runtime Interface (CRI) without restarting the container.
- Disruptive (RestartContainer): The container is restarted to safely apply the new boundaries.
Currently, resizePolicy is only supported at the container level. The Kubelet always defers to individual container settings, so a single Pod can mix non-disruptive and disruptive updates.
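For example, a Pod can keep its main container resizable in place while forcing a restart for a sidecar that cannot safely shrink its memory footprint; a sketch:

```yaml
# Mixed resize policies in one Pod (illustrative): a pod-level resize
# updates main-app's cgroups live, but restarts cache-sidecar when
# its memory boundary changes.
spec:
  containers:
  - name: main-app
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
  - name: cache-sidecar
    resizePolicy:
    - resourceName: memory
      restartPolicy: RestartContainer
```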
Example: Doubling a Shared CPU Pool Without Restarting
Consider a Pod with two containers—main-app and sidecar—that share a 2-CPU limit. Neither container has its own CPU limit defined.
Initial specification:
spec:
  resources:
    limits:
      cpu: "2"
      memory: "4Gi"
  containers:
  - name: main-app
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
  - name: sidecar
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
Resize command:
kubectl patch pod shared-pool-app --subresource resize --patch '{"spec":{"resources":{"limits":{"cpu":"4"}}}}'
The Kubelet sees both containers set to NotRequired and updates the cgroup limits in real time. No restarts occur.
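One way to confirm the new budget took effect (assuming a Pod named shared-pool-app, as above, on a live cluster) is to read the pod-level limit back from the API server:

```shell
# Read the pod-level CPU limit back from the live object; after the
# patch above this should report "4". Requires cluster access.
kubectl get pod shared-pool-app -o jsonpath='{.spec.resources.limits.cpu}'
```

While a resize is pending or underway, the Pod's status conditions (for example, PodResizeInProgress) reflect its progress.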
Node-Level Safety Checks Ensure Stability
The Kubelet does not blindly apply the patch. It runs a series of feasibility checks:
- Node capacity verification – ensures the new aggregate limit does not exceed available resources.
- Admission validation – confirms the pod-level schema is consistent.
- Container-level safe thresholds – checks that no container would exceed its own maximum, if defined.
If any check fails, the resize is rejected with an error event, preventing node overload. The feature works hand-in-hand with the existing vertical pod autoscaler (VPA) ecosystem, although VPA integration remains experimental.
Industry Reaction and Adoption Outlook
Early adopters report up to a 30% reduction in Pod churn during traffic spikes. “We no longer need to over-provision to handle sudden demand,” said Maria Chen, Site Reliability Engineer at a fintech firm. “In-place pod-level scaling is a game changer for stateful workloads with sidecars.”
The Kubernetes community expects widespread adoption as the feature matures. “Beta means it’s ready for production testing under controlled conditions,” noted SIG Node lead Anna Lee. “We encourage users to enable it and provide feedback.”
What's Next: Path to General Availability
The Kubernetes community plans to gather production feedback over the next two releases. Key areas include pod-level resizePolicy support and tighter integration with cluster autoscalers. GA is tentatively targeted for v1.38, pending stability reports.
For now, users can upgrade to v1.36 and start benefiting from dynamic, non-disruptive pod-level resource adjustments. For more details, see the official documentation.
— This is a breaking news report. Details subject to change. Kubernetes v1.36 release date: expected Q2 2026.