kubectl apply --dry-run behaving weirdly

I am facing a weird behaviour with kubectl and --dry-run.
To simplify let's say that I have the following yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginxsdf
        imagePullPolicy: Always
        name: nginx
If I modify, for example, the image or the number of replicas:
kubectl apply -f Deployment.yaml -o yaml --dry-run prints the resource with the OLD specification
kubectl apply -f Deployment.yaml -o yaml prints the resource with the NEW specification
According to the documentation:
--dry-run=false: If true, only print the object that would be sent, without sending it.
However, the object printed is the old one, not the one that would be sent to the API server.
Tested on minikube and GKE v1.10.0.
In the meantime I opened a new GitHub issue for it:
https://github.com/kubernetes/kubernetes/issues/72644

I got the following answer in the kubernetes issue page:
When updating existing objects, kubectl apply doesn't send an entire object, just a patch. It is not exactly correct to print either the existing object or the new object in dry-run mode... the outcome of the merge is what should be printed.
For kubectl to be able to accurately reflect the result of the apply, it would need to have the server-side apply logic clientside, which is a non-goal.
Current efforts are directed at moving apply logic to the server. As part of that, the ability to dry-run server-side has been added. kubectl apply --server-dry-run will do what you want, printing the result of the apply merge, without actually persisting it.
@apelisse we should probably update the flag help for apply and possibly print a warning when using --dry-run when updating an object via apply to document the limitations of --dry-run and direct people to use --server-dry-run

In newer versions of the client, the server-side dry run is invoked as:
kubectl apply -f Deployment.yaml --dry-run=server
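
For comparison, a short sketch of the two modes (flag spellings as in current kubectl):

# Client-side dry run: validates and prints the local object only,
# without computing the server-side apply merge
kubectl apply -f Deployment.yaml --dry-run=client -o yaml

# Server-side dry run: the API server computes the full apply merge
# (defaulting, admission) and prints the result without persisting it
kubectl apply -f Deployment.yaml --dry-run=server -o yaml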

Related

How do I update a deployment via YAML with rollback support?

I am trying to update a deployment via the YAML file, similar to this question. I have the following yaml file...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-server-deployment
  labels:
    app: simple-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: simple-server
  template:
    metadata:
      labels:
        app: simple-server
    spec:
      containers:
      - name: simple-server
        image: nginx
        ports:
        - name: http
          containerPort: 80
I tried changing replicas: 3 to replicas: 1 in the file. Next I redeployed with kubectl apply -f simple-deployment.yml and got deployment.apps/simple-server-deployment configured. However, when I run kubectl rollout history deployment/simple-server-deployment I only see 1 entry...
REVISION  CHANGE-CAUSE
1         <none>
How do I do the same thing while increasing the revision so it is possible to rollback?
I know this can be done without the YAML but this is just an example case. In the real world I will have far more changes and need to use the YAML.
You can use the --record flag, so in your case the command will look like:
kubectl apply -f simple-deployment.yml --record
However, a few notes.
First, the --record flag is deprecated - you will see the following message when you run kubectl apply with the --record flag:
Flag --record has been deprecated, --record will be removed in the future
There is no replacement for this flag yet, but keep in mind that one will probably be introduced in the future.
Second, not every change will be recorded (even with the --record flag) - I tested your example from the main question and there is no new revision. Why? Because:
@deech this is expected behavior. The Deployment only creates a new revision (i.e. another Replica Set) when you update its pod template. Scaling it won't create another revision.
Considering the two points above, you need to think (and probably test) whether the --record flag is suitable for you. Maybe it's better to use a version control system like git, but as I said, it depends on your requirements.
A change of replicas does not create a new history record. You can add --record to your apply command and check the annotation later to see what the last applied spec was; a sketch follows.
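
As a sketch: --record writes the kubernetes.io/change-cause annotation, which you can read back, or set by hand instead of using the deprecated flag (deployment name as in the question):

# Read the change-cause annotation that --record populated
kubectl get deployment simple-server-deployment \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/change-cause}'

# Or set it manually; rollout history shows it the same way
kubectl annotate deployment simple-server-deployment \
  kubernetes.io/change-cause="reduce replicas to 1" --overwrite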

kubectl rollout restart deployment <deployment-name> doesn't get the latest image

Why doesn't kubectl rollout restart <deployment-name> pick up my latest image? I rebuilt my image, but it seems that Kubernetes doesn't update my deployment with the latest image.
tl;dr
I just wanted to add an answer here regarding the failure of kubectl rollout restart deployment [my-deployment-name]. My problem was that I changed the image name without running kubectl apply -f [my-deployment-filename].yaml first.
Long Answer
So my earlier image name was microservices/posts, which existed only locally, and the deployment looked like this.
# This is a file named `posts-depl.yaml`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
      - name: posts
        image: microservices/posts
However, since I needed to push it to Docker Hub, I rebuilt the image with a new name, [my docker hub username]/microservices_posts, and pushed it. Then I updated posts-depl.yaml to look like this.
# Still the same file `posts-depl.yaml`, but updated
apiVersion: apps/v1
kind: Deployment
metadata:
  name: posts-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: posts
  template:
    metadata:
      labels:
        app: posts
    spec:
      containers:
      - name: posts
        image: [my docker hub username]/microservices_posts # Notice that I only changed this part
Apparently, when I ran kubectl rollout restart deployment posts-depl, it didn't update. Then I finally decided to go to Stack Overflow. I thought I had made some mistake or had run into a Kubernetes bug or something.
But it turns out I had to run kubectl apply -f <your deployment filename>.yaml again. Then it ran fine.
Just sharing, might change someone's life. ;)
So, to review:
It seems that my past deployment, posts-depl, was cached with my earlier image name, microservices/posts, and since I had built a new image named [my docker hub username]/microservices_posts, it didn't know about it. So when I ran kubectl rollout restart deployment <deployment name>, it instead looked for the microservices/posts image, which only exists locally! And since that image was never updated, it didn't do a thing!
Hence, what I should have done was re-run kubectl apply -f <my deployment filename>.yaml, which had already been updated with the new image name, [my docker hub username]/microservices_posts!
Then, I live happily ever after.
Hope that helps and may you live happily ever after too.
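
In command form, the fix from the story boils down to this (file and deployment names as in the example above):

# Re-apply the manifest so the Deployment's pod template references the new image
kubectl apply -f posts-depl.yaml
# Only then will a restart roll out pods with the new image
kubectl rollout restart deployment posts-depl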

What would be the fastest way to generate persistentVolume, persistentVolumeClaim, and storageClass correct yaml file from kubectl?

Two years ago, when I took the CKA exam, I already had this question. At that time all I could do was consult the official k8s.io documentation. Now I'm just curious about generating pv / pvc / storageClass via the pure kubectl CLI. What I'm looking for is logic similar to generating a deployment, for example:
$ kubectl create deploy test --image=nginx --port=80 --dry-run -o yaml
W0419 23:54:11.092265 76572 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
Or similar logic to run a single pod:
$ kubectl run test-pod --image=nginx --port=80 --dry-run -o yaml
W0419 23:56:29.174692 76654 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: test-pod
  name: test-pod
spec:
  containers:
  - image: nginx
    name: test-pod
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
So what should I type in order to generate pv / pvc / storageClass YAML? Currently the fastest declarative way I know of is:
cat <<EOF | kubectl create -f -
<PV / PVC / storageClass yaml goes here>
EOF
Edited: Please note that I'm looking for any fast way to generate a correct pv / pvc / storageClass template without having to remember the specific syntax, through the CLI, and not necessarily via kubectl.
There is no kubectl command to create resources like PV, PVC, and storage class.
From the certification point of view, you have to go to k8s.io and look for PV, PVC, and storage class under the Tasks section.
Under the Tasks section, most of the YAML will be the same, and for now this is one of the fastest ways in the exam; a minimal sketch is shown below.
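
For reference, a minimal sketch of the three objects in the style of those docs examples (names, capacity, and paths are illustrative placeholders):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi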
TL;DR:
Look at, bookmark, and mentally index all the YAML files in this GitHub directory (content/en/examples/pods) before the exam. This is 100% legal according to the CKA curriculum.
https://github.com/kubernetes/website/tree/master/content/en/examples/pods/storage/pv-volume.yaml
Then use this form during the exam:
kubectl create -f https://k8s.io/examples/pods/storage/pv-volume.yaml
In case you need to edit and apply:
# curl
curl -sL https://k8s.io/examples/pods/storage/pv-volume.yaml -o /your/path/pv-volume.yaml
# wget
wget -O /your/path/pv-volume.yaml https://k8s.io/examples/pods/storage/pv-volume.yaml
vi /your/path/pv-volume.yaml
kubectl apply -f /your/path/pv-volume.yaml
Story:
After looking around for my own answer, I found an article floating around that suggested bookmarking these 100% legal pages:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume
https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#creating-a-cron-job
https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/
Note that kubectl can create objects directly from a URL:
kubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml
This raised two questions:
Where is the original https://k8s.io pointing to?
What else could I benefit from?
Then, digging into the "pods/storage/pv-volume.yaml" example above, the link points to:
https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/storage/pv-volume.yaml
which lives in:
https://github.com/kubernetes/website/tree/master/content/en/examples/pods
So https://k8s.io is a shortened URI as well as an HTTP 301 redirect to https://github.com/kubernetes/website/tree/master/content/en, meant to help exam candidates type (not copy-and-paste) the examples into the exam terminal.
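
You can verify the redirect yourself; a sketch (exact headers may vary):

# Fetch only the response headers for the shortened URL
curl -sI https://k8s.io/examples/pods/storage/pv-volume.yaml
# Expect a 301 whose Location header points at the raw GitHub content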

Kubectl error: the object has been modified; please apply your changes to the latest version and try again

I am getting the below error while trying to apply a patch:
core@dgoutam22-1-coreos-5760 ~ $ kubectl apply -f ads-central-configuration.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"data":{"default":"{\"dedicated_redis_cluster\": {\"nodes\": [{\"host\": \"192.168.1.94\", \"port\": 6379}]}}"},"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"data\":{\"default\":\"{\\\"dedicated_redis_cluster\\\": {\\\"nodes\\\": [{\\\"host\\\": \\\"192.168.1.94\\\", \\\"port\\\": 6379}]}}\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"creationTimestamp\":\"2018-06-27T07:19:13Z\",\"labels\":{\"acp-app\":\"acp-discovery-service\",\"version\":\"1\"},\"name\":\"ads-central-configuration\",\"namespace\":\"acp-system\",\"resourceVersion\":\"1109832\",\"selfLink\":\"/api/v1/namespaces/acp-system/configmaps/ads-central-configuration\",\"uid\":\"64901676-79da-11e8-bd65-fa163eaa7a28\"}}\n"},"creationTimestamp":"2018-06-27T07:19:13Z","resourceVersion":"1109832","uid":"64901676-79da-11e8-bd65-fa163eaa7a28"}}
to:
&{0xc4200bb380 0xc420356230 acp-system ads-central-configuration ads-central-configuration.yaml 0xc42000c970 4434 false}
for: "ads-central-configuration.yaml": Operation cannot be fulfilled on configmaps "ads-central-configuration": the object has been modified; please apply your changes to the latest version and try again
core@dgoutam22-1-coreos-5760 ~ $
It seems likely that your yaml configuration was copy-pasted from what was generated, and thus contains fields such as creationTimestamp (and resourceVersion, selfLink, and uid), which don't belong in a declarative configuration file.
Go through your yaml and clean it up. Remove things that are instance specific. Your final yaml should be simple enough that you can easily understand it.
Remove these lines from the file:
creationTimestamp:
resourceVersion:
selfLink:
uid:
Then try to apply again.
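
If you have yq (v4) available (an assumption; it is not part of kubectl), a sketch of stripping those instance-specific fields in one pass, where the hypothetical exported.yaml is your exported manifest:

# Delete server-populated fields from an exported manifest
yq eval 'del(.metadata.creationTimestamp) | del(.metadata.resourceVersion) | del(.metadata.selfLink) | del(.metadata.uid) | del(.status)' exported.yaml > clean.yaml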
Pay attention to include the latest resourceVersion in your update; you can get it by running:
kubectl get deployment <DEPLOYMENT-NAME> -o yaml | grep resourceVersion
You may have edited the same exported deployment file.
1 - try to re-export it with:
kubectl get deployment <DEPLOYMENT-NAME> -o yaml > deployment-file.yaml
2 - make the needed modifications in "deployment-file.yaml"
3 - apply the changes with:
kubectl apply -f deployment-file.yaml
OR:
you may want to edit the deployment directly; use:
kubectl edit deployment <DEPLOYMENT-NAME> -o yaml
Change the default editor if you aren't familiar with the vi editor: export EDITOR=nano
I am able to reproduce the issue in my test environment. Steps to reproduce:
Create a deployment from Kubernetes Engine > Workloads > Deploy
Input your Application Name, Namespace, Labels
Select cluster or create new cluster
You will be able to view the YAML file there; here is a sample:
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "nginx-1"
  namespace: "default"
  labels:
    app: "nginx-1"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "nginx-1"
  template:
    metadata:
      labels:
        app: "nginx-1"
    spec:
      containers:
      - name: "nginx"
        image: "nginx:latest"
---
apiVersion: "autoscaling/v2beta1"
kind: "HorizontalPodAutoscaler"
metadata:
  name: "nginx-1-hpa"
  namespace: "default"
  labels:
    app: "nginx-1"
spec:
  scaleTargetRef:
    kind: "Deployment"
    name: "nginx-1"
    apiVersion: "apps/v1"
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: "Resource"
    resource:
      name: "cpu"
      targetAverageUtilization: 80
After deployment, if you go to Kubernetes Engine > Workloads > nginx-1 (click on it):
a.) You will get Deployment details (Overview, Details, Revision history, events, YAML)
b.) click on YAML and copy the content from YAML tab
c.) create new YAML file and paste the content and save the file
d.) Now if you run the command $kubectl apply -f newyamlfile.yaml, it will show you the below error:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{\"deployment.kubernetes.io/revision\":\"1\"},\"creationTimestamp\":\"2019-09-17T21:34:39Z\",\"generation\":1,\"labels\":{\"app\":\"nginx-1\"},\"name\":\"nginx-1\",\"namespace\":\"default\",\"resourceVersion\":\"218884\",\"selfLink\":\"/apis/apps/v1/namespaces/default/deployments/nginx-1\",\"uid\":\"f41c5b6f-d992-11e9-9adc-42010a80023b\"},\"spec\":{\"progressDeadlineSeconds\":600,\"replicas\":3,\"revisionHistoryLimit\":10,\"selector\":{\"matchLabels\":{\"app\":\"nginx-1\"}},\"strategy\":{\"rollingUpdate\":{\"maxSurge\":\"25%\",\"maxUnavailable\":\"25%\"},\"type\":\"RollingUpdate\"},\"template\":{\"metadata\":{\"creationTimestamp\":null,\"labels\":{\"app\":\"nginx-1\"}},\"spec\":{\"containers\":[{\"image\":\"nginx:latest\",\"imagePullPolicy\":\"Always\",\"name\":\"nginx\",\"resources\":{},\"terminationMessagePath\":\"/dev/termination-log\",\"terminationMessagePolicy\":\"File\"}],\"dnsPolicy\":\"ClusterFirst\",\"restartPolicy\":\"Always\",\"schedulerName\":\"default-scheduler\",\"securityContext\":{},\"terminationGracePeriodSeconds\":30}}},\"status\":{\"availableReplicas\":3,\"conditions\":[{\"lastTransitionTime\":\"2019-09-17T21:34:47Z\",\"lastUpdateTime\":\"2019-09-17T21:34:47Z\",\"message\":\"Deployment has minimum availability.\",\"reason\":\"MinimumReplicasAvailable\",\"status\":\"True\",\"type\":\"Available\"},{\"lastTransitionTime\":\"2019-09-17T21:34:39Z\",\"lastUpdateTime\":\"2019-09-17T21:34:47Z\",\"message\":\"ReplicaSet \\\"nginx-1-7b4bb7fbf8\\\" has successfully progressed.\",\"reason\":\"NewReplicaSetAvailable\",\"status\":\"True\",\"type\":\"Progressing\"}],\"observedGeneration\":1,\"readyReplicas\":3,\"replicas\":3,\"updatedReplicas\":3}}\n"},"generation":1,"resourceVersion":"218884"},"spec":{"replicas":3},"status":{"availableReplicas":3,"observedGeneration":1,"readyReplicas":3,"replicas":3,"updatedReplicas":3}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx-1", Namespace: "default"
Object: &{map["apiVersion":"apps/v1" "metadata":map["name":"nginx-1" "namespace":"default" "selfLink":"/apis/apps/v1/namespaces/default/deployments/nginx-1" "uid":"f41c5b6f-d992-11e9-9adc-42010a80023b" "generation":'\x02' "labels":map["app":"nginx-1"] "annotations":map["deployment.kubernetes.io/revision":"1"] "resourceVersion":"219951" "creationTimestamp":"2019-09-17T21:34:39Z"] "spec":map["replicas":'\x01' "selector":map["matchLabels":map["app":"nginx-1"]] "template":map["metadata":map["labels":map["app":"nginx-1"] "creationTimestamp":<nil>] "spec":map["containers":[map["imagePullPolicy":"Always" "name":"nginx" "image":"nginx:latest" "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":"25%" "maxSurge":"25%"]] "revisionHistoryLimit":'\n' "progressDeadlineSeconds":'\u0258'] "status":map["observedGeneration":'\x02' "replicas":'\x01' "updatedReplicas":'\x01' "readyReplicas":'\x01' "availableReplicas":'\x01' "conditions":[map["message":"Deployment has minimum availability." "type":"Available" "status":"True" "lastUpdateTime":"2019-09-17T21:34:47Z" "lastTransitionTime":"2019-09-17T21:34:47Z" "reason":"MinimumReplicasAvailable"] map["lastTransitionTime":"2019-09-17T21:34:39Z" "reason":"NewReplicaSetAvailable" "message":"ReplicaSet \"nginx-1-7b4bb7fbf8\" has successfully progressed." "type":"Progressing" "status":"True" "lastUpdateTime":"2019-09-17T21:34:47Z"]]] "kind":"Deployment"]}
for: "test.yaml": Operation cannot be fulfilled on deployments.apps "nginx-1": the object has been modified; please apply your changes to the latest version and try again
To solve the problem, you need to find the exact yaml file and then edit it as per your requirements; after that you can run $kubectl apply -f nginx-1.yaml
Hope this information helps.
This error occurs because the deployment.yaml has an entry for resourceVersion. Remove it, as it's not needed, and you will be able to apply the new configuration.

Kubectl apply does not update pods or deployments

I'm using a CI to update my kubernetes cluster whenever there's an update to an image. Whenever the image is pushed with the latest tag, the CI runs kubectl apply on the existing deployment, but nothing gets updated.
This is what runs:
$ kubectl apply --record --filename /tmp/deployment.yaml
My goal is that a rolling deployment is executed when the apply is run.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us.gcr.io/joule-eed41/api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1337
        args:
        - /bin/sh
        - -c
        - echo running api;npm start
        env:
        - name: NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: config
              key: NAMESPACE
As others suggested, use a specific tag.
Set the new image using the following command:
kubectl set image deployment/deployment_name deployment_name=image_name:image_tag
In your case it would be
kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:0.1
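
If the CI keeps building with a moving tag, a sketch of switching to unique tags instead (assuming $GIT_COMMIT is a commit-SHA variable your CI provides):

# Build and push an image tagged with the commit SHA instead of :latest
IMAGE=us.gcr.io/joule-eed41/api:$GIT_COMMIT
docker build -t "$IMAGE" .
docker push "$IMAGE"
# Pointing the Deployment at the new, unique tag triggers a rolling update
kubectl set image deployment/api api="$IMAGE"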
As @ksholla20 mentioned, using kubectl set image is a good option for many (most?) cases.
But if you can't change the image tag consider using:
1 ) kubectl rollout restart deployment/<name>
(reference).
2 ) kubectl patch deployment <name> -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"version\":\"$CURRENT_BUILD_HASH_OR_DATE\"}}}}}" (reference)
(*) Notice that the patch command allows you to change specific properties in the deployment (or any other chosen object), like the label selector and the pod labels, or other properties like the value of the NAMESPACE environment variable in your example.
I've run into the same problem and none of the solutions posted so far helped. The solution is easy, but not easy to see or predict: the applied yaml generates both a deployment and a replicaset the first time it's run. Unfortunately, applying changes to the manifest may only replace the replicaset, while the deployment remains unchanged. This is a problem because some changes need to happen at the deployment level, but the old deployment hangs around. For best results, delete the deployment and ensure all previous deployments and replicasets are deleted, then apply the updated manifest; see the sketch below.
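
A sketch of that clean re-create (destructive: the running pods are deleted), using the names from the question above:

# Delete the Deployment; its ReplicaSets and Pods are removed with it
kubectl delete deployment api
# Re-create everything from the updated manifest
kubectl apply --record --filename /tmp/deployment.yaml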