2.2.1 How declarative configuration works
As we saw in the previous exercise, declarative configuration management is powered
by the kubectl apply command. In contrast with imperative kubectl commands, like
scale and annotate, the kubectl apply command has one parameter, the path to
the file containing the resource manifest:
kubectl apply -f ./resource.yaml
The command figures out which changes need to be applied to the
matching resource in the Kubernetes cluster and updates the resource using the
Kubernetes API. This capability is a critical feature that makes Kubernetes a perfect fit for GitOps.
Let’s learn more about the logic behind kubectl apply and understand what it can
and cannot do. To understand which problems kubectl apply is solving, let’s go
through different scenarios using the Deployment resource we created earlier.
The simplest scenario is when the matching resource does not exist in the Kubernetes
cluster. In this case, kubectl creates a new resource using the manifest stored in
the specified file.
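For instance, the applied file might contain a minimal manifest like the following. This is an illustrative sketch; the file used in the earlier exercise may contain additional fields:

```yaml
# resource.yaml -- a minimal Deployment manifest (illustrative sketch;
# the actual file from the earlier exercise may differ)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-declarative
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-declarative
  template:
    metadata:
      labels:
        app: nginx-declarative
    spec:
      containers:
      - name: nginx
        image: nginx
```

Running kubectl apply -f ./resource.yaml against a cluster with no matching resource simply creates this Deployment.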
If the matching resource exists, why doesn’t kubectl replace it? The answer is obvious
if you look at the complete resource manifest using the kubectl get command.
Following is a partial listing of the Deployment resource that was created in the example.
Some parts of the manifest have been omitted for clarity (indicated with ellipses):
$ kubectl get deployment nginx-declarative -o=yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    environment: prod
    kubectl.kubernetes.io/last-applied-configuration: |
      { ... }
    organization: marketing
  creationTimestamp: "2019-10-15T00:57:44Z"
  generation: 2
  name: nginx-declarative
  namespace: default
  resourceVersion: "349411"
  selfLink: /apis/apps/v1/namespaces/default/deployments/nginx-declarative
  uid: d41cf3dc-a3e8-40dd-bc81-76afd4a032b1
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-declarative
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    ...
status:
  ...
As you may have noticed, a live resource manifest includes all the fields specified in
the file plus dozens of new fields, such as additional metadata, the status field, and
other fields in the resource spec. All these additional fields are populated by the
Deployment controller and contain important information about the resource’s running
state. The controller populates information about resource state in the status
field and applies default values for all unspecified optional fields, such as
revisionHistoryLimit and strategy. To preserve this information, kubectl apply
merges the manifest from the specified file and the live resource manifest. As a result,
the command updates only the fields specified in the file, keeping everything else
untouched. So if we decide to scale down the deployment and change the replicas
field to 1, then kubectl changes only that field in the live resource and saves it back to
Kubernetes using an update API.
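The field-level merge can be illustrated with a simplified sketch. The real kubectl apply performs a three-way strategic merge using the last-applied-configuration annotation; the recursive dictionary overlay below only captures the core idea that fields present in the file win, while everything else in the live manifest is preserved:

```python
# Simplified sketch of the "apply" merge idea: fields set in the file
# manifest overwrite the matching live fields; all server-populated
# fields (status, defaults) are left untouched. The real kubectl apply
# is a three-way strategic merge -- this is only an approximation.

def apply_fields(live: dict, desired: dict) -> dict:
    """Recursively overlay desired (file) fields onto the live manifest."""
    merged = dict(live)
    for key, value in desired.items():
        if isinstance(value, dict) and isinstance(live.get(key), dict):
            merged[key] = apply_fields(live[key], value)
        else:
            merged[key] = value
    return merged

# Live manifest has controller-populated fields the file never mentions.
live = {
    "spec": {"replicas": 3, "revisionHistoryLimit": 10},
    "status": {"readyReplicas": 3},
}
# The file only declares the field we want to change.
desired = {"spec": {"replicas": 1}}

result = apply_fields(live, desired)
print(result["spec"])    # replicas updated to 1; revisionHistoryLimit kept
print(result["status"])  # server-populated status left untouched
```

Note the asymmetry: the file drives which fields change, but it never has to repeat the fields the cluster manages on its own.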
In real life, we don’t want to control every field that influences resource
behavior in a declarative way. It makes sense to leave some room for imperativeness
and skip fields that should change dynamically. The replicas field of the
Deployment resource is a perfect example: instead of hardcoding the number of replicas,
you can use the Horizontal Pod Autoscaler to dynamically scale
your application up or down based on load.
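A HorizontalPodAutoscaler targeting the Deployment might look like the following sketch. The name is taken from the example; the autoscaling/v2 API version and the CPU target are assumptions, not part of the earlier exercise:

```yaml
# Sketch: let the Horizontal Pod Autoscaler manage replicas instead of
# hardcoding them in the Deployment manifest (values are illustrative).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-declarative
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-declarative
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

For this to work well with declarative management, the replicas field should be omitted from the applied Deployment manifest; otherwise each kubectl apply would reset the replica count that the autoscaler is trying to manage.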