Merge pull request #48159 from kubernetes/dev-1.32
Official 1.32 Release Docs
chanieljdan authored Dec 11, 2024
2 parents cd9b5fe + 426f5e2 commit 7295018
Showing 185 changed files with 7,475 additions and 571 deletions.
@@ -0,0 +1,26 @@
---
title: Compatibility Version For Kubernetes Control Plane Components
reviewers:
- jpbetz
- siyuanfoundation
content_type: concept
weight: 70
---

<!-- overview -->

Since release v1.32, Kubernetes control plane components provide configurable version compatibility and emulation options, making upgrades safer by giving cluster administrators more control and more granular upgrade steps.

<!-- body -->

## Emulated Version

The emulation option is set by the `--emulated-version` flag of control plane components. It allows the component to emulate the behavior (APIs, features, ...) of an earlier version of Kubernetes.

When used, the capabilities available will match the emulated version:
* Any capabilities present in the binary version that were introduced after the emulation version will be unavailable.
* Any capabilities removed after the emulation version will be available.

This enables a binary from a particular Kubernetes release to emulate the behavior of a previous version with sufficient fidelity that interoperability with other system components can be defined in terms of the emulated version.

The `--emulated-version` value must be less than or equal to `binaryVersion`. See the help message of the `--emulated-version` flag for the supported range of emulated versions.
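
For illustration only (a sketch, not taken from this page; check the flag's help message for the exact value format supported by your binary), a v1.32 `kube-apiserver` binary could be asked to emulate v1.31 behavior like this:

```shell
# Sketch: run a v1.32 kube-apiserver while emulating the v1.31 capability set.
# All other required flags (etcd endpoints, certificates, ...) are omitted here.
kube-apiserver --emulated-version=1.31 ...
```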
31 changes: 31 additions & 0 deletions content/en/docs/concepts/cluster-administration/logging.md
@@ -75,6 +75,37 @@ appending a container name to the command, with a `-c` flag, like so:
kubectl logs counter -c count
```


### Container log streams

{{< feature-state feature_gate_name="PodLogsQuerySplitStreams" >}}

As an alpha feature, the kubelet can split out the logs from the two standard streams produced
by a container: [standard output](https://en.wikipedia.org/wiki/Standard_streams#Standard_output_(stdout))
and [standard error](https://en.wikipedia.org/wiki/Standard_streams#Standard_error_(stderr)).
To use this behavior, you must enable the `PodLogsQuerySplitStreams`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/).
With that feature gate enabled, Kubernetes {{< skew currentVersion >}} allows access to these
log streams directly via the Pod API. You can fetch a specific stream by specifying the stream name (either `Stdout` or `Stderr`),
using the `stream` query string. You must have access to read the `log` subresource of that Pod.

To demonstrate this feature, you can create a Pod that periodically writes text to both the standard output and error stream.

{{% code_sample file="debug/counter-pod-err.yaml" %}}

To run this pod, use the following command:

```shell
kubectl apply -f https://k8s.io/examples/debug/counter-pod-err.yaml
```

To fetch only the stderr log stream, you can run:

```shell
kubectl get --raw "/api/v1/namespaces/default/pods/counter-err/log?stream=Stderr"
```
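
Similarly, to fetch only the stdout log stream of the same Pod (an additional illustrative command following the same pattern as above), you can run:

```shell
kubectl get --raw "/api/v1/namespaces/default/pods/counter-err/log?stream=Stdout"
```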


See the [`kubectl logs` documentation](/docs/reference/generated/kubectl/kubectl-commands#logs)
for more details.

26 changes: 23 additions & 3 deletions content/en/docs/concepts/cluster-administration/node-shutdown.md
@@ -217,9 +217,7 @@ these pods will be stuck in terminating status on the shutdown node forever.

To mitigate the above situation, a user can manually add the taint `node.kubernetes.io/out-of-service`
with either `NoExecute` or `NoSchedule` effect to a Node marking it out-of-service.
If a Node is marked out-of-service with this taint, the pods on the node will be forcefully deleted
if there are no matching tolerations on it and volume detach operations for the pods terminating on
the node will happen immediately. This allows the Pods on the out-of-service node to recover quickly
on a different node.
@@ -267,6 +265,28 @@ via the [Non-Graceful Node Shutdown](#non-graceful-node-shutdown) procedure ment
{{< /note >}}


## Windows graceful node shutdown {#windows-graceful-node-shutdown}

{{< feature-state feature_gate_name="WindowsGracefulNodeShutdown" >}}

The Windows graceful node shutdown feature depends on the kubelet running as a Windows service;
the kubelet then has a registered [service control handler](https://learn.microsoft.com/en-us/windows/win32/services/service-control-handler-function)
that delays the preshutdown event for a given duration.

Windows graceful node shutdown is controlled with the `WindowsGracefulNodeShutdown`
[feature gate](/docs/reference/command-line-tools-reference/feature-gates/),
which was introduced in 1.32 as an alpha feature.

Windows graceful node shutdown cannot be cancelled.

If the kubelet is not running as a Windows service, it will not be able to set and monitor
the [Preshutdown](https://learn.microsoft.com/en-us/windows/win32/api/winsvc/ns-winsvc-service_preshutdown_info) event;
in that case, the node will have to go through the [Non-Graceful Node Shutdown](#non-graceful-node-shutdown) procedure mentioned above.

If the Windows graceful node shutdown feature is enabled but the kubelet is not
running as a Windows service, the kubelet will continue running instead of failing. However,
it will log an error indicating that it needs to be run as a Windows service.
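
As an illustrative sketch only (the grace period values are assumptions, not taken from this page), enabling this behavior through the kubelet configuration file on a Windows node might look like this:

```yaml
# Sketch of a kubelet configuration for a Windows node where the kubelet runs as a service.
# The durations shown are assumptions; adjust them to your environment.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  WindowsGracefulNodeShutdown: true
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
```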

## {{% heading "whatsnext" %}}

Learn more about the following:
@@ -109,6 +109,26 @@ a Pod.
For a particular resource, a *Pod resource request/limit* is the sum of the
resource requests/limits of that type for each container in the Pod.

## Pod-level resource specification

{{< feature-state feature_gate_name="PodLevelResources" >}}

Starting in Kubernetes 1.32, you can also specify resource requests and limits at
the Pod level. At the Pod level, Kubernetes {{< skew currentVersion >}}
only supports resource requests or limits for specific resource types: `cpu` and/or
`memory`. This feature is currently in alpha. With the feature enabled,
Kubernetes allows you to declare an overall resource budget for the Pod, which is
especially helpful when dealing with a large number of containers where it can be
difficult to accurately gauge individual resource needs. Additionally, it enables
containers within a Pod to share idle resources with each other, improving resource
utilization.

For a Pod, you can specify resource limits and requests for CPU and memory by including the following fields (a minimal sketch follows this list):
* `spec.resources.limits.cpu`
* `spec.resources.limits.memory`
* `spec.resources.requests.cpu`
* `spec.resources.requests.memory`
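
As a minimal sketch (the names and image are hypothetical, and the `PodLevelResources` feature gate must be enabled), a Pod-level resource specification could look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-resources-sketch   # hypothetical name for illustration
spec:
  resources:               # Pod-level budget shared by all containers in the Pod
    requests:
      cpu: "1"
      memory: "100Mi"
    limits:
      cpu: "1"
      memory: "200Mi"
  containers:
  - name: app
    image: nginx           # assumed image for the sketch
```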

## Resource units in Kubernetes

### CPU resource units {#meaning-of-cpu}
@@ -192,6 +212,19 @@ spec:
cpu: "500m"
```

## Pod resources example {#example-2}

{{< feature-state feature_gate_name="PodLevelResources" >}}

The following Pod has an explicit request of 1 CPU and 100 MiB of memory, and an
explicit limit of 1 CPU and 200 MiB of memory. The `pod-resources-demo-ctr-1`
container has explicit requests and limits set. However, the
`pod-resources-demo-ctr-2` container will simply share the resources available
within the Pod resource boundaries, as it does not have explicit requests and limits
set.

{{% code_sample file="pods/resource/pod-level-resources.yaml" %}}

## How Pods with resource requests are scheduled

When you create a Pod, the Kubernetes scheduler selects a node for the Pod to
5 changes: 1 addition & 4 deletions content/en/docs/concepts/configuration/secret.md
@@ -666,10 +666,7 @@ Therefore, one Pod does not have access to the Secrets of another Pod.

### Configure least-privilege access to Secrets

To enhance the security measures around Secrets, use separate namespaces to isolate access to mounted secrets.

{{< warning >}}
Any containers that run with `privileged: true` on a node can access all
@@ -58,6 +58,10 @@ Resources consumed by the command are counted against the Container.
* Sleep - Pauses the container for a specified duration.
This is a beta-level feature, enabled by default via the `PodLifecycleSleepAction` [feature gate](/docs/reference/command-line-tools-reference/feature-gates/).

{{< note >}}
Enable the `PodLifecycleSleepActionAllowZero` feature gate if you want to set a sleep duration of zero seconds (effectively a no-op) for your Sleep lifecycle hooks.
{{< /note >}}
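
For illustration (a hedged sketch not taken from this page; the Pod name and image are assumptions), a `preStop` sleep hook with a zero-second duration would look roughly like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sleep-zero-demo            # hypothetical name for illustration
spec:
  containers:
  - name: app
    image: nginx                   # assumed image for the sketch
    lifecycle:
      preStop:
        sleep:
          seconds: 0               # effectively a no-op; requires PodLifecycleSleepActionAllowZero
```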

### Hook handler execution

When a Container lifecycle management hook is called,
2 changes: 1 addition & 1 deletion content/en/docs/concepts/containers/images.md
@@ -214,7 +214,7 @@ behalf of the two different Pods, when parallel image pulls is enabled.

### Maximum parallel image pulls

{{< feature-state for_k8s_version="v1.27" state="alpha" >}}
{{< feature-state for_k8s_version="v1.32" state="beta" >}}

When `serializeImagePulls` is set to false, the kubelet defaults to no limit on the
maximum number of images being pulled at the same time. If you would like to
@@ -316,9 +316,8 @@ may also be used with field selectors when included in the `spec.versions[*].sel
{{< feature-state feature_gate_name="CustomResourceFieldSelectors" >}}

The `spec.versions[*].selectableFields` field of a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}} may be used to
declare which other fields in a custom resource may be used in field selectors.

The following example adds the `.spec.color` and `.spec.size` fields as
selectable fields.

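The sample manifest itself is truncated in this diff view. As a rough, hedged sketch (the group, kind, and schema details are assumptions for illustration), declaring selectable fields in a CustomResourceDefinition looks approximately like this:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: shirts.stable.example.com      # hypothetical CRD for illustration
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: shirts
    singular: shirt
    kind: Shirt
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              color:
                type: string
              size:
                type: string
    selectableFields:
    - jsonPath: .spec.color              # exposes spec.color to field selectors
    - jsonPath: .spec.size               # exposes spec.size to field selectors
```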
@@ -46,6 +46,14 @@ Error from server (BadRequest): Unable to find "ingresses" that match label sele
| Node | `spec.unschedulable` |
| CertificateSigningRequest | `spec.signerName` |

### Custom resource fields

All custom resource types support the `metadata.name` and `metadata.namespace` fields.

Additionally, the `spec.versions[*].selectableFields` field of a {{< glossary_tooltip term_id="CustomResourceDefinition" text="CustomResourceDefinition" >}}
declares which other fields in a custom resource may be used in field selectors. See [selectable fields for custom resources](/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#crd-selectable-fields)
for more information about how to use field selectors with CustomResourceDefinitions.
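
For example (an illustrative command, assuming a hypothetical `Shirt` custom resource whose CRD declares `.spec.color` as a selectable field), you could filter custom objects like this:

```shell
kubectl get shirts --field-selector spec.color=blue
```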

## Supported operators

You can use the `=`, `==`, and `!=` operators with field selectors (`=` and `==` mean the same thing). This `kubectl` command, for example, selects all Kubernetes Services that aren't in the `default` namespace:
@@ -72,4 +80,4 @@ You can use field selectors across multiple resource types. This `kubectl` comma

```shell
kubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default
```
@@ -32,6 +32,12 @@ of the same resource. API resources are distinguished by their API group, resour
In cases when objects represent a physical entity, like a Node representing a physical host, when the host is re-created under the same name without deleting and re-creating the Node, Kubernetes treats the new host as the old one, which may lead to inconsistencies.
{{< /note >}}

The server may generate a name when `generateName` is provided instead of `name` in a resource create request.
When `generateName` is used, the provided value is used as a name prefix, to which the server appends a generated
suffix. Even though the name is generated, it may conflict with existing names, resulting in an HTTP 409 response.
This became far less likely to happen in Kubernetes v1.31 and later, since the server will make up to 8 attempts to
generate a unique name before returning an HTTP 409 response.
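
As a brief sketch (not taken from this page; the prefix and image are assumptions), a manifest that relies on `generateName` looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  generateName: demo-pod-   # the server appends a random suffix, for example demo-pod-x7k2q
spec:
  containers:
  - name: app
    image: nginx            # assumed image for the sketch
```

Submitting this with `kubectl create -f <file>` returns the object with its generated `metadata.name`.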

Below are four types of commonly used name constraints for resources.

### DNS Subdomain Names
