Yaml version of `kubectl autoscale deployment` - kubernetes

When I run the command kubectl autoscale deployment xxx --min=1 --max=3 --cpu-percent=80 and no deployment named xxx exists, I receive an error indicating that the autoscaler could NOT be created because the target deployment is invalid (does not exist), which is desirable.
However, when I write the same in a yaml file:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: autoscaler-test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: xxx
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
I receive a message indicating that the autoscaler has been created successfully.
horizontalpodautoscaler.autoscaling/autoscaler-test created
This is of course undesirable, because it leaves me with a problem: has the autoscaler really been created successfully, and will it actually work (did I provide the right deployment name)?
Since I am not a Kubernetes pro and have been working with it for only a week, my question is: is the YAML configuration even correct, or in other words, is it equivalent to the command-line version? If not, how can I rewrite the command-line version as a YAML file?
I need to convert it to a yaml file because it would be much cleaner in a CI/CD pipeline.

I have replicated the above scenario. You are facing the issue with the autoscale command because the target deployment has likely not been created yet. Create the deployment first with:
kubectl apply -f xxx.yaml
and then run the autoscale command.
After the autoscale command, run the following to get the YAML of your HorizontalPodAutoscaler:
kubectl get hpa xxx -o yaml
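Since the API server accepts an HPA whose scaleTargetRef does not exist yet (as you observed), a CI/CD pipeline can fail fast by checking for the target deployment before applying the HPA manifest. A minimal sketch, where xxx and hpa.yaml are placeholders for your deployment name and manifest file:
# Fail the pipeline if the target deployment is missing
kubectl get deployment xxx || exit 1
# Apply the HPA manifest only once the target exists
kubectl apply -f hpa.yaml
# Optionally confirm the HPA can see its target
kubectl get hpa autoscaler-test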

Related

Kubernetes - ScaledObject - Keda - RabbitMQ

I have created a ScaledObject and a TriggerAuthentication using Keda, in order to horizontally autoscale my pods based on a RabbitMQ queue length.
But for some reason, when I try to query my ScaledObjects like this:
kubectl get ScaledObjects -n mynamespace
I am not getting anything.
But when I apply the YAML file which contains all of the information about the ScaledObject, the output is this:
scaledobject.keda.sh/rabbitmq-scaledobject unchanged
I am also able to edit this scaled object using this command:
kubectl edit scaledobject.keda.sh/rabbitmq-scaledobject -n mynamespace
But I am not sure why it is not listed when running this command:
kubectl get ScaledObjects -n mynamespace
The autoscaler does work; I am just wondering why it is not listed.
Thanks in advance,
Yaniv
This might be a case of having more than one Custom Resource defined with the same kind but a different apiVersion.
For example, these two versions of Keda create the ScaledObject with different apiVersion:
1.4:
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
2.0:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
So when you run kubectl get ScaledObjects -n mynamespace, it might be defaulting to the one you are not using.
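To see which API groups actually provide the ScaledObject kind on your cluster, and to query one group explicitly instead of relying on the default, you can use the fully qualified resource name. For example, for the keda.sh group:
# list the CRDs/groups that define a ScaledObject kind
kubectl api-resources | grep -i scaledobject
# query the keda.sh group explicitly
kubectl get scaledobjects.keda.sh -n mynamespace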

How to verify the rolling update?

I tried to automate the rolling update when ConfigMap changes are made, but I am confused about how I can verify whether the rolling update was successful. I found the command
kubectl rollout status deployment test-app -n test
But I guess this is used when we are performing a rollback rather than a rolling update. What's the best way to know if the rolling update is successful or not?
I think it is fine:
kubectl rollout status deployments/test-app -n test
can be used to verify the rollout of the deployment as well.
As an additional step, you can run:
kubectl rollout history deployments/test-app -n test
If you need further clarity, run:
kubectl get deployments -o wide
and check the READY and IMAGE fields.
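For reference, a successful rollout typically ends with a message like the following (exact wording can vary slightly between kubectl versions):
$ kubectl rollout status deployment test-app -n test
deployment "test-app" successfully rolled out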
ConfigMap generation and rolling update
I tried to automate the rolling update when the configmap changes are made
It is a good practice to create new resources instead of mutating (updating in place). kubectl kustomize supports this workflow:
The recommended way to change a deployment's configuration is to:
1. create a new configMap with a new name,
2. patch the deployment, modifying the name value of the appropriate configMapKeyRef field.
You can deploy using Kustomize to automatically create a new ConfigMap every time you want to change content by using configMapGenerator. The old ones can be garbage collected when not used anymore.
With the Kustomize configMapGenerator you get a generated name.
Example
kind: ConfigMap
metadata:
  name: example-configmap-2-g2hdhfc6tk
and this name gets reflected into your Deployment, which then triggers a new rolling update, but with a new ConfigMap, leaving the old one unchanged.
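A minimal kustomization.yaml sketch of this setup (the file and key names here are illustrative, not taken from the question):
# kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: example-configmap-2
  files:
  - application.properties  # any content change produces a new hashed ConfigMap name
Each time application.properties changes, Kustomize generates a ConfigMap with a new name suffix and rewrites the references in deployment.yaml accordingly.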
Deploy both Deployment and ConfigMap using
kubectl apply -k <kustomization_directory>
When handling changes this way, you are following the practice called Immutable Infrastructure.
Verify deployment
To verify a successful deployment, you are right. You should use:
kubectl rollout status deployment test-app -n test
When you leave the old ConfigMap unchanged and create a new ConfigMap for the new ReplicaSet, it is also clear which ConfigMap belongs to which ReplicaSet.
Rollback will be easier to understand as well, since the old and new ReplicaSets each use their own ConfigMap (when the content changes).
Your command is fine to check if an update went through.
Now, a ConfigMap change is eventually propagated to the Pod (for mounted ConfigMaps); there is no need to do a rolling update just for that. Depending on what you pass in the ConfigMap, restarting the service may be all you need.
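If you do need the Pods to pick up the change, for example when the ConfigMap is consumed through environment variables, a manual restart of the rollout is often enough. This assumes kubectl/Kubernetes 1.15 or newer, where rollout restart is available:
kubectl rollout restart deployment test-app -n test
kubectl rollout status deployment test-app -n test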
What's the best way to know if the rolling update is successful or not?
To check whether your rolling update was executed correctly, your command works fine; you could also check that your replicas are running.
I tried to automate the rolling update when the configmap changes are made.
You could use Reloader to perform your rolling updates automatically when a ConfigMap/Secret changes.
Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets.
Let's explore how Reloader works in a practical way, using an nginx deployment as an example.
First install Reloader in your cluster:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
You will see a new container named reloader-reloader-...; this container is responsible for 'watching' your deployments and performing rolling updates when necessary.
Create a ConfigMap with your values; in my case I'll create one called my-config and set a key called myvar.value with the value hello:
kubectl create configmap my-config --from-literal=myvar.value=hello
Now, let's create a simple deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
  labels:
    app: nginx
  annotations:
    configmap.reloader.stakater.com/reload: my-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: MYVAR
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: myvar.value
In this example, the nginx image is used, and the value from my ConfigMap is exposed in an environment variable called MYVAR.
For Reloader to work, you must specify the name of your ConfigMap in the annotations; in the example above it is:
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: my-config
Apply the deployment example with kubectl apply -f mydeployment-example.yaml and check the variable in the new pod.
$ kubectl exec -it $(kubectl get pods -l=app=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') env | grep MYVAR
MYVAR=hello
Now let's change the value of the variable:
Edit the ConfigMap with kubectl edit configmap my-config, change the value of myvar.value to hi, then save and close.
After saving, Reloader will recreate your container and pick up the new value from the ConfigMap.
To check if the rolling update was executed successfully:
kubectl rollout status deployment deployment-example
Check the new value:
$ kubectl exec -it $(kubectl get pods -l=app=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') env | grep MYVAR
MYVAR=hi
That's all!
Check the Reloader GitHub page to see more options.
I hope it helps!
Mounted ConfigMaps are updated automatically (see the reference in the Kubernetes documentation).
To check the rollout history, use the steps below:
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
REVISION  CHANGE-CAUSE
1         kubectl create --filename=busybox.yaml --record=true
Update the image on the deployment as below:
$ kubectl set image deployment.apps/busybox *=busybox --record
deployment.apps/busybox image updated
Check the new rollout history, which will list the new change cause for the rollout:
$ kubectl rollout history deployment busybox
REVISION  CHANGE-CAUSE
1         kubectl create --filename=busybox.yaml --record=true
2         kubectl set image deployment.apps/busybox *=busybox --record=true
To roll back the deployment, use the undo subcommand with the rollout command:
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout undo deployment busybox
deployment.apps/busybox rolled back
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
REVISION  CHANGE-CAUSE
2         kubectl set image deployment.apps/busybox *=busybox --record=true
3         kubectl create --filename=busybox.yaml --record=true
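If you need to roll back to a specific revision instead of just the previous one, kubectl rollout undo also accepts a --to-revision flag:
kubectl rollout undo deployment busybox --to-revision=1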

Imperative command for creating job and cronjob in Kubernetes

Is this a valid imperative command for creating a job?
kubectl create job my-job --image=busybox
I see this in https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands, but the command is not working. I am getting the error below:
Error: unknown flag: --image
What is the correct imperative command for creating a job?
Try this one
kubectl create cronjob my-job --schedule="0,15,30,45 * * * *" --image=busybox
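For completeness, a declarative equivalent of that command would look roughly like the sketch below (on clusters older than 1.21 the apiVersion would be batch/v1beta1):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-job
spec:
  schedule: "0,15,30,45 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-job
            image: busybox
          restartPolicy: OnFailure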
What you have should work, though it is not recommended as an approach anymore. I would check what version of kubectl you have, and possibly upgrade if you aren't using the latest.
That said, the more common approach these days is to write a YAML file containing the Job definition and then run kubectl apply -f myjob.yaml or similar. This file-driven approach allows for more natural version control, editing, review, etc.
Using the correct value for the --restart flag on kubectl run will make the run command create a deployment, a job, or a cronjob:
--restart='Always': The restart policy for this Pod. Legal values [Always, OnFailure, Never]. If set to 'Always'
a deployment is created, if set to 'OnFailure' a job is created, if set to 'Never', a regular pod is created. For the
latter two --replicas must be 1. Default 'Always', for CronJobs `Never`.
Use "kubectl run" for creating basic kubernetes job using imperatively command as below
master $ kubectl run nginx --image=nginx --restart=OnFailure --dry-run -o yaml > output.yaml
The above should produce an output.yaml like the example below. You can edit this YAML for advanced configuration as needed and create the job with kubectl create -f output.yaml, or, if you just need a basic job, remove the --dry-run option from the above command and the job will be created directly.
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
      restartPolicy: OnFailure
status: {}

Create Daemonset using kubectl?

I took the CKA exam and I needed to work with Daemonsets for quite a while there. Since it is much faster to do everything with kubectl instead of creating yaml manifests for k8s resources, I was wondering if it is possible to create Daemonset resources using kubectl.
I know that it is NOT possible to create it using a regular kubectl create daemonset, at least for now, and there is no description of it in the documentation. But maybe there is some other way to do it?
The best thing I could do right now is to create a Deployment first with kubectl create deployment and edit its output manifest. Any options here?
The fastest hack is to create a deployment file using
kubectl create deploy nginx --image=nginx --dry-run -o yaml > nginx-ds.yaml
Now replace the line kind: Deployment with kind: DaemonSet in nginx-ds.yaml and remove the line replicas: 1
However, the following command will give you a clean DaemonSet manifest, assuming that apps/v1 is the API version used for DaemonSet in your cluster:
kubectl create deploy nginx --image=nginx --dry-run -o yaml | \
sed '/null\|{}\|replicas/d;/status/,$d;s/Deployment/DaemonSet/g' > nginx-ds.yaml
You have your nginx DaemonSet.
CKA allows access to K8S documentation. So, it should be possible to get a sample YAML for different resources from there. Here is the one for the Daemonset from K8S documentation.
Also, not sure if the certification environment has access to resources in the kube-system namespace. If yes, then use the below command to get a sample yaml for Daemonset.
kubectl get daemonsets kube-flannel-ds-amd64 -o yaml -n=kube-system > daemonset.yaml
It's impossible. At least for Kubernetes 1.12. The only option is to get a sample Daemonset yaml file and go from there.
The fastest way to create one:
kubectl create deploy nginx --image=nginx --dry-run -o yaml > nginx-ds.yaml
Now replace the line kind: Deployment with kind: DaemonSet in nginx-ds.yaml and remove the lines replicas: 1, strategy: {} and status: {} as well.
Otherwise it shows errors for some required fields, like this:
error: error validating "nginx-ds.yaml": error validating data: [ValidationError(DaemonSet.spec): unknown field "strategy" in io.k8s.api.apps.v1.DaemonSetSpec, ValidationError(DaemonSet.status): missing required field "currentNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus,ValidationError(DaemonSet.status): missing required field "numberMisscheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "desiredNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberReady" in io.k8s.api.apps.v1.DaemonSetStatus]; if you choose to ignore these errors, turn validation off with --validate=false
There is no such option to create a DaemonSet using kubectl. But still, you can prepare a YAML file with a basic configuration for a DaemonSet, e.g. daemon-set-basic.yaml, and create it using kubectl create -f daemon-set-basic.yaml.
You can edit the new DaemonSet using kubectl edit daemonset <name-of-the-daemon-set>, or modify the YAML file and apply the changes with kubectl apply -f daemon-set-basic.yaml. Note: if you want to update the configuration by modifying the file and using the apply command, it is better to use apply instead of create when you first create the DaemonSet.
Here is the example of a simple DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20
You could take advantage of the Kubernetes architecture to obtain the definition of a DaemonSet from an existing cluster. Have a look at kube-proxy, which is a network component that runs on each node in your cluster.
kube-proxy is deployed as a DaemonSet, so you can extract its definition with the command below:
$ kubectl get ds kube-proxy -n kube-system -o yaml > kube-proxy.ds.yaml
Warning!
When extracting the definition of the DaemonSet from kube-proxy, be aware that:
You will have to do plenty of clean-up!
You will have to change the apiVersion from extensions/v1beta1 to apps/v1.
I used the following commands:
Create a ReplicaSet or Deployment with a Kubernetes imperative command:
kubectl create deployment <daemonset_name> --image= --dry-run -o yaml > file.txt
Edit the file: change the kind to DaemonSet, and remove the replicas and strategy fields. Then run:
kubectl apply -f file.txt
During the CKA examination you are allowed to access the Kubernetes documentation for DaemonSets. You could use the link and get examples of DaemonSet YAML files. Alternatively, you could use the approach you mentioned: change a Deployment specification into a DaemonSet specification. You need to change the kind to DaemonSet and remove the strategy, replicas and status fields. That would do.
By creating a deployment with an imperative command and modifying its output, one can create a DaemonSet very quickly.
Below is a one-line command to create a DaemonSet:
kubectl create deployment elasticsearch --namespace=kube-system --image=k8s.gcr.io/fluentd-elasticsearch:1.20 --dry-run -o yaml | grep -v "creationTimestamp\|status" | awk '{gsub(/Deployment/, "DaemonSet"); print }'

Kubernetes: Error from server when edit Deployment [duplicate]

I have defined a Deployment for my app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: 172.20.34.206:5000/myapp_img:2.0
        ports:
        - containerPort: 8080
Now, if I want to update my app's image from 2.0 to 3.0, I do this:
$ kubectl edit deployment/myapp-deployment
Vim opens. I change the image version from 2.0 to 3.0 and save.
How can this be automated? Is there a way to do it by just running a command? Something like:
$ kubectl edit deployment/myapp-deployment --image=172.20.34.206:5000/myapp:img:3.0
I thought about using the Kubernetes REST API, but I don't understand the documentation.
You could do it via the REST API using the PATCH verb. However, an easier way is to use kubectl patch. The following command updates your app's tag:
kubectl patch deployment myapp-deployment -p \
'{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"172.20.34.206:5000/myapp:img:3.0"}]}}}}'
According to the documentation, YAML format should be accepted as well. See Kubernetes issue #458 though (and in particular this comment) which may hint at a problem.
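If you prefer to keep the patch in YAML, newer kubectl versions also accept a patch file via --patch-file (available in kubectl 1.19+, which is an assumption about your environment). A sketch using the image name from your Deployment:
# patch.yaml
spec:
  template:
    spec:
      containers:
      - name: myapp
        image: 172.20.34.206:5000/myapp_img:3.0
kubectl patch deployment myapp-deployment --patch-file patch.yaml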
There is a set image command which may be useful in simple cases
Update existing container image(s) of resources.
Possible resources include (case insensitive):
pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs)
kubectl set image (-f FILENAME | TYPE NAME) CONTAINER_NAME_1=CONTAINER_IMAGE_1 ... CONTAINER_NAME_N=CONTAINER_IMAGE_N
http://kubernetes.io/docs/user-guide/kubectl/kubectl_set_image/
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
deployment "nginx-deployment" image updated
http://kubernetes.io/docs/user-guide/deployments/
(I would have posted this as a comment if I had enough reputation)
Yes, as per http://kubernetes.io/docs/user-guide/kubectl/kubectl_patch/ both JSON and YAML formats are accepted.
But I see that all the examples there are using JSON format.
Filed https://github.com/kubernetes/kubernetes.github.io/issues/458 to add a YAML format example.
I have recently built a tool to automate deployment updates when new images are available; it works with Kubernetes and Helm:
https://github.com/rusenask/keel
You only have to label your deployments with a Keel policy such as keel.sh/policy=major to enable major version updates; more info is in the readme. It works similarly with Helm, and no additional CLI/UI is required.
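Based on that description, the label would sit in the Deployment metadata roughly like this (a sketch; check the Keel readme for the exact policy keys and values):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    keel.sh/policy: major  # assumed from the policy example above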