Get replica set of the deployment - kubernetes

I can get a ReplicaSet by name using the API as below:
GET /apis/apps/v1/namespaces/{namespace}/replicasets/{name}
But how can I get the ReplicaSet based on the deployment?
Any help is appreciated.
Thank you

But how can I get the ReplicaSet based on the deployment?
With quite some gymnastics... If you inspect how kubectl does it (by executing kubectl -n my-namespace describe deploy my-deployment --v=9) you will see that it does the following:
first gets deployment details with: /apis/extensions/v1beta1/namespaces/my-namespace/deployments/my-deployment. From there it gets labels for replicaset selection.
then gets replicaset details using labels from deployment in previous step (say labels were my-key1:my-value1 and my-key2:my-value2) like so: /apis/extensions/v1beta1/namespaces/my-namespace/replicasets?labelSelector=my-key1%3Dmy-value1%2Cmy-key2%3Dmy-value2
The fiddly part is extracting the labels from the deployment output and formatting them for the ReplicaSet call; that is a job for grep, awk, jq or even Python, depending on your actual use case (from bash, Python, some client library or whatever...).
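For example, from bash with jq, something like the following should work (a minimal sketch; my-namespace and my-deployment are placeholders, and it assumes the Deployment's selector uses matchLabels):
# Build a label selector string from the Deployment's matchLabels, then list matching ReplicaSets
SELECTOR=$(kubectl -n my-namespace get deploy my-deployment -o json \
  | jq -r '.spec.selector.matchLabels | to_entries | map("\(.key)=\(.value)") | join(",")')
kubectl -n my-namespace get rs -l "$SELECTOR"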

Related

Get names of all deployment configs with no running pods

Is there a simple method (that won't require googling at every use) to get names of all deployment configs with no running pods (scaled to 0) in Kubernetes / Openshift? Methods without JSON tokens and awk please.
The docs of oc get dc --help are way too long to decipher for the occasional need.
The only CLI argument for advanced filtering without working with JSON is --field-selector, but it has a limited scope which does not include the spec.replicas field.
So some JSON magic is needed with another flag: -o jsonpath.
Here is a command to filter and print names of all deployments which are scaled to 0:
kubectl get deployments --all-namespaces -o=jsonpath='{range .items[?(@.spec.replicas==0)]}{.metadata.name}{"\n"}{end}'
The kubectl JSONPath reference is at https://kubernetes.io/docs/reference/kubectl/jsonpath/.
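Since the question mentions OpenShift deployment configs (oc get dc), the same pattern should also work with oc, assuming its jsonpath support matches kubectl's (a sketch, not verified here):
oc get dc --all-namespaces -o=jsonpath='{range .items[?(@.spec.replicas==0)]}{.metadata.name}{"\n"}{end}'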

How to get the custom attributes in Kubernetes?

How to list the current deployments running in Kubernetes with custom columns displayed as mentioned below:
DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE
The data should be sorted by the increasing order of the deployment name.
Look at the -o custom-columns feature. https://kubernetes.io/docs/reference/kubectl/overview/#custom-columns shows the basics. The hard one would be CONTAINER_IMAGE, since a pod can contain more than one container, but assuming you just want the first, something like .spec.template.spec.containers[0].image? Give it a shot and see how it goes.
Command to get the custom columns as per the question:
kubectl get deployments -o custom-columns=DEPLOYMENT:.metadata.name,CONTAINER_IMAGE:..image,READY_REPLICAS:..replicas,NAMESPACE:..namespace
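A more explicit variant (a sketch, not from the original answer) that spells out the field paths and adds the sort order the question asks for:
kubectl get deployments --all-namespaces --sort-by=.metadata.name \
  -o custom-columns=DEPLOYMENT:.metadata.name,CONTAINER_IMAGE:.spec.template.spec.containers[0].image,READY_REPLICAS:.status.readyReplicas,NAMESPACE:.metadata.namespace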

kubectl - Retrieving the current/"new" replicaset for a deployment in json format

We're trying to build some simple automation to detect if a deployment has finished a rolling update or not, or if a rolling update has failed. The naive way (which we do today) is to simply get all the pods for the deployment, and wait until they're ready. We look at all the pods, and if one of the pods has restarted 3 or more times (since we started the update), we roll back the update. This works fine most of the time. The times it doesn't are when the currently deployed version is in a failing state for any reason, so that the existing pods are continuously restarting, which triggers a rollback from our end.
So the idea I had was to monitor the pods in the new replicaset that is being rolled out after I initiate a rolling update. This way we won't detect failing pods in the previous version as failures of the rolling update. I have found out how to find the pods within a replicaset (PS: I use PowerShell as my shell but hopefully the translation to bash or whatever you prefer is fairly straightforward):
kubectl get pod -n <namespace> -l <selector> -o json | ConvertFrom-Json | Where {$_.metadata.ownerReferences.name -eq <replicaset-name>}
And I can easily find which replicaset belongs to a deployment using the same method, just querying for replicasets and filtering on their ownerReference again.
HOWEVER, a deployment has at least two replicasets during a rolling update: the new and the old. I can see the names of those if I use "kubectl describe deployment" -- but that's not very automation-friendly without some text processing. It seems like a fragile way to do it. It would be great if I could find the same information in the JSON, but it seems that doesn't exist.
So, my question is: How do I find the connection between the deployments and the current/old replicaset, preferably in JSON format? Is there some other resource that "connects" these two, or is there some information on the replicasets themselves I can use to distinguish the new from the old replicaset?
The deployment will indicate the current "revision" of the replica set with the deployment.kubernetes.io/revision annotation.
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
This will exist on both the deployment and the replicaset. The replicaset with revision N-1 will be the "old" one.
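For example (a sketch assuming jq is available; my-namespace and my-deployment are placeholders), you can read the revision off the Deployment and pick the ReplicaSet whose annotation matches:
# Current revision recorded on the Deployment
REV=$(kubectl -n my-namespace get deploy my-deployment -o json \
  | jq -r '.metadata.annotations["deployment.kubernetes.io/revision"]')
# Name of the ReplicaSet owned by the Deployment that carries the same revision
kubectl -n my-namespace get rs -o json | jq -r --arg rev "$REV" \
  '.items[]
   | select(any(.metadata.ownerReferences[]?; .name == "my-deployment"))
   | select(.metadata.annotations["deployment.kubernetes.io/revision"] == $rev)
   | .metadata.name'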
The way used in kubectl source code is:
sort.Sort(replicaSetsByCreationTimestamp(rsList))
which means:
Select all ReplicaSets created by the Deployment;
Sort them by creationTimestamp;
Choose the most recent one, which is the current (new) ReplicaSet.
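On the command line, a rough equivalent is the following (a sketch; app=my-app stands in for the Deployment's label selector):
kubectl -n my-namespace get rs -l app=my-app \
  --sort-by=.metadata.creationTimestamp -o name | tail -n 1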

Editing Kubernetes pod on-the-fly

For debugging and testing purposes I'd like to find the most convenient way of launching Kubernetes pods and altering their specification on the fly.
The launching part is quite easy with imperative commands.
Running
kubectl run nginx-test --image nginx --restart=Never
gives me exactly what I want: a single pod not managed by any controller like a Deployment or ReplicaSet. Easy to play with and clean up when needed.
However when I'm trying to edit the spec with
kubectl edit po nginx-test
I'm getting the following warning:
pods "nginx-test" was not valid:
* spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)
i.e. only a limited set of Pod spec fields is editable at runtime.
OPTIONS FOUND SO FAR:
Getting the Pod spec saved into a file:
kubectl get po nginx-test -oyaml > nginx-test.yaml
then editing it and recreating the pod with
kubectl apply -f nginx-test.yaml
A bit heavyweight for changing just one field, though (a full round trip is sketched after this list).
Creating a Deployment instead of a single Pod and then editing the spec section in the Deployment itself.
The cons are:
an additional API object is needed (the Deployment), which you should not forget to clean up when you are done
the Pod names are autogenerated in the form of nginx-test-xxxxxxxxx-xxxx and are less convenient to work with.
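For reference, the full round trip for the first option looks roughly like this (a sketch; the pod has to be deleted before re-applying, since the changed fields cannot be updated in place, and you may need to strip status/resourceVersion fields from the saved file):
kubectl get po nginx-test -oyaml > nginx-test.yaml
# edit nginx-test.yaml as needed
kubectl delete po nginx-test
kubectl apply -f nginx-test.yaml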
So is there any simpler option (or possibly some elegant workaround) for editing an arbitrary field in the Pod spec?
I would appreciate any suggestion.
You should absolutely use a Deployment here.
For the use case you're describing, most of the interesting fields on a Pod cannot be updated, so you need to manually delete and recreate the pod yourself. A Deployment manages that for you. If a Deployment owns a Pod, and you delete the Deployment, Kubernetes knows on its own to delete the matching Pod, so there's not really any more work.
(There's not really any reason to want a bare pod; you almost always want one of the higher-level controllers. The one exception I can think of is using kubectl run to get a debugging shell inside the cluster.)
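A minimal sketch of the Deployment-based workflow (names and image are placeholders):
kubectl create deployment nginx-test --image=nginx   # create the Deployment and its pod
kubectl edit deployment nginx-test                   # edit any field in the pod template; a replacement pod is rolled out
kubectl delete deployment nginx-test                 # cleanup also removes the pods it owns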
The Pod name being generated can be a minor hassle. One trick that's useful here: as of reasonably recent kubectl, you can give the deployment name to commands like kubectl logs:
kubectl logs deployment/nginx-test
There are also various "dashboard" type tools out there that will let you browse your current set of pods, so you can do things like read logs without having to copy-and-paste the full pod name. You may also be able to set up tab completion for kubectl, and type
kubectl logs nginx-test<TAB>

kubectl filter doesn't work with jobs?

I'm trying to delete jobs using a filter like the following
kubectl delete jobs -l ml=""
This returns no resources found. However if I do kubectl describe I see
Labels: ml=,job_type=worker,runtime_id=tf-runtime,task_index=0
The same filter and command works just fine with the pods created by the job controller.
The command also works just fine when my job is tagged with a single label, for example
Labels: ml=
So my filter appears to be incorrect when there are other labels on the resource. However, the same set of labels on other resources (services, pods) works just fine with that filter.
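One way to sanity-check the selector (a sketch) is to list the jobs with their labels and then with the same selector:
kubectl get jobs --show-labels
kubectl get jobs -l ml= -o name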
I just tried this with kubectl 1.3.0 and I can't reproduce it. Can you try the latest kubectl?