Each Kubernetes deployment gets this annotation:
$ kubectl describe deployment/myapp
Name: myapp
Namespace: default
CreationTimestamp: Sat, 24 Mar 2018 23:27:42 +0100
Labels: app=myapp
Annotations: deployment.kubernetes.io/revision=5
Is there a way to read that annotation (deployment.kubernetes.io/revision) from a pod that belongs to the deployment?
I tried the Downward API, but that only allows getting the annotations of the pod itself (not of its deployment):
kubectl get pod POD_NAME -o jsonpath='{.metadata.annotations}'
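For reference, the Downward API approach I tried looks roughly like this (a minimal sketch; the pod, container, and volume names are illustrative). It can only surface the pod's own metadata:
apiVersion: v1
kind: Pod
metadata:
  name: annotations-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/annotations && sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations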
It has been a long time, but here is what I do to get a specific annotation:
kubectl get ing test -o jsonpath='{.metadata.annotations.kubernetes\.io/ingress\.class}'
So for you it would be:
kubectl get deploy myapp -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}'
I hope it helps.
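If you need the value from inside a pod (as the question asks), one sketch is to query the API server with the pod's service account token. This assumes RBAC grants that service account get on deployments, and that curl and jq are available in the image:
# run inside the pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/apis/apps/v1/namespaces/default/deployments/myapp \
  | jq -r '.metadata.annotations["deployment.kubernetes.io/revision"]'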
As an alternative, you can combine a kubectl label selector (to query all pods with the label app=myapp) with jq (to filter and format the resulting JSON), getting the name and annotations for each of the pods:
kubectl get po -l app=myapp -o json | jq '.items[].metadata | {name: .name, annotations: .annotations}'
Yes, you can get the annotations of a pod using the command below:
kubectl describe pod your_podname
and you will find an Annotations section with all the annotations for the pod.
Yes, it's possible with the command below:
kubectl get pod myapp -n=default -o yaml | grep -A 8 annotations:
kubectl get pod myapp -n=default -o yaml gets all the details of the pod myapp in default namespace in yaml format.
grep -A 8 annotations: searches for the keyword 'annotations' and displays the following 8 lines (as specified by -A 8) to show all the annotations.
To get only the annotations section of the pod, you can use:
kubectl get pod YOUR_POD_NAME -o yaml | grep -i 'annotations'
You can also use jsonpath:
kubectl get pod YOUR_POD_NAME -o jsonpath='{.metadata.annotations}{"\n"}'
Related
I want to know how to use the API to CRUD my CRD resources, so that I can write an SDK to control those resources.
Using kubectl, it's just:
kubectl get inferenceservices test-sklearn -n kserve-test
kubectl apply -f xx.yaml -n kserve-test
kubectl delete -f xx.yaml -n kserve-test
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
name: "test-sklearn"
spec:
predictor:
sklearn:
storageUri: "http://xxxx"
To see the underlying API calls kubectl makes, capture its verbose output in process_log:
kubectl get inferenceservices test-sklearn -n kserve-test --v=8 > process_log 2>&1
Use kubectl proxy to test the API directly:
kubectl proxy --address 0.0.0.0 --accept-hosts='^.*'
Then test getting the resource status through the proxy:
GET http://xxx:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices/test-sklearn
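The remaining CRUD verbs follow the same path convention (a sketch using curl through the proxy; my-isvc.json is a hypothetical manifest file in the InferenceService JSON form):
# create
curl -X POST -H "Content-Type: application/json" --data @my-isvc.json \
  http://xxx:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices
# delete
curl -X DELETE \
  http://xxx:8001/apis/serving.kubeflow.org/v1beta1/namespaces/kserve-test/inferenceservices/test-sklearn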
I am using kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml to create a deployment.
I want to create the deployment in my own namespace, examplenamespace.
How can I do this?
There are three possible solutions.
Specify the namespace in the kubectl command:
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml -n my-namespace
Specify the namespace in your YAML files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
Change the default namespace in ~/.kube/config (note that the namespace is a property of a context, not of the cluster entry):
apiVersion: v1
kind: Config
clusters:
- name: "k8s-dev-cluster-01"
  cluster:
    server: "https://example.com/k8s/clusters/abc"
contexts:
- name: "k8s-dev"
  context:
    cluster: "k8s-dev-cluster-01"
    namespace: "my-namespace"
By adding -n <namespace> to the command you already have. It also works with other types of resources:
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml -n namespacename
First you need to create the namespace, like this:
kubectl create ns nameOfYourNamespace
Then you create your deployment under that namespace:
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml -n examplenamespace
The ns in
kubectl create ns nameOfYourNamespace
stands for namespace.
The -n in
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml -n examplenamespace
stands for --namespace.
So you first create your namespace, so that Kubernetes knows which namespace it is dealing with. Then, when you are about to apply your changes, you add the -n flag (which stands for --namespace) so Kubernetes knows in which namespace to create the resources.
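A quick way to confirm that the deployment actually landed in the right namespace:
kubectl get deployments -n examplenamespace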
I am trying to specify the namespace when executing the kubectl create deployment command.
This is what I tried:
kubectl create deployment test --image=banu/image1 namespace=test
and this doesn't work.
I also want to expose this deployment using a ClusterIP service within the cluster itself, in that given namespace. How can I do that using the kubectl command line?
You can specify either the -n or --namespace option. For example, run
kubectl create deployment test --image=nginx --namespace default --dry-run -o yaml
and inspect the resulting deployment YAML.
Or using kubectl run:
kubectl run test --namespace test --image nginx --port 9090 --dry-run -o yaml
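Either dry-run output can also be piped straight back to kubectl to actually create the resource (a sketch reusing the question's image; it assumes the target namespace already exists):
kubectl create deployment test --image=banu/image1 -n test --dry-run -o yaml | kubectl apply -f -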
You need to create the namespace first, like this:
kubectl create ns test
ns stands for namespace, so with this you tell kubectl that you want to create a namespace named test.
Then, while creating the deployment, you add the namespace you want:
kubectl create deployment test --image=banu/image1 -n test
The -n flag stands for --namespace; that way you tell Kubernetes that all resources related to that deployment will live under the test namespace.
To see all the resources under a specific namespace:
kubectl get all -n test
--namespace and -n are the same thing.
Use -n test instead of namespace=test.
Sample with nginx image:
$ kubectl create deployment nginx --image=nginx -n test
deployment.apps/nginx created
$ kubectl get deploy -n test
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 8s
For the second part, you need to create a Service and use the labels from the deployment's pods as its selector.
You can find the correct labels by running something like:
kubectl -n test describe deploy test |grep Labels:
and then apply a Service like:
apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: test
spec:
  ports:
  - name: test
    port: 80 # Change this port
    protocol: TCP
  type: ClusterIP
  selector:
    # Put the labels from the previous step here
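Alternatively, kubectl can generate this Service for you: kubectl expose copies the selector from the deployment, so you don't have to look the labels up yourself (a sketch; port 80 is an assumption):
kubectl expose deployment test --port=80 --type=ClusterIP -n test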
Is there a way to use kubectl to list only the pods belonging to a deployment?
Currently, I do this to get pods:
kubectl get pods | grep hello
But it seems an overkill to get ALL the pods when I am interested to know only the pods for a given deployment. I use the output of this command to see the status of all pods, and then possibly exec into one of them.
I also tried kc get -o wide deployments hellodeployment, but it does not print the Pod names.
There's a label on each pod matching the selector in the deployment; that's how a deployment manages its pods. For example, for the label/selector app=http-svc, you can do something like the following and avoid using grep against a listing of all the pods (this becomes useful as your number of pods becomes very large).
Here are some example command lines:
# single label
kubectl get pods -l=app=http-svc
kubectl get pods --selector=app=http-svc
# multiple labels
kubectl get pods --selector key1=value1,key2=value2
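Set-based selectors work as well, if you need to match one of several values (a small sketch):
# set-based selector
kubectl get pods -l 'app in (http-svc, other-app)'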
K8s components are linked to each other by labels and selectors. There are simply no built-in attributes like My-List-of-ReplicaSets or My-List-of-Pods on a deployment, and you can't get them from kubectl describe or kubectl get.
As #Rico suggested above, you have to use label filters. But you can't simply use the labels you specify in the deployment manifest, because the deployment also generates a hash and applies it as an additional label (pod-template-hash).
For example, below I have a deployment with two pods and a standalone pod, all sharing the same label app=http-svc. While the first two are managed by the deployment, the third one is not and shouldn't be in the result.
ma.chi#~/k8s/deployments % kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
http-9c89b5578-6cqbp 1/1 Running 0 7s app=http-svc,pod-template-hash=574561134
http-9c89b5578-vwqbx 1/1 Running 0 7s app=http-svc,pod-template-hash=574561134
nginx-standalone 1/1 Running 0 7s app=http-svc
ma.chi#~/k8s/deployments %
The source file is:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: http-svc
  name: http
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http-svc
  strategy: {}
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - image: nginx:1.9.1
        name: nginx1
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: http-svc
  name: nginx-standalone
spec:
  containers:
  - image: nginx:1.9.1
    name: nginx1-standalone
To spot exactly the pods created and managed by your deployment, you can use the script below (which is ugly, but it's the best I can do):
DEPLOY_NAME=http
# 1. Find the deployment's current ReplicaSet
RS_NAME=$(kubectl describe deployment $DEPLOY_NAME | grep "^NewReplicaSet" | awk '{print $2}'); echo $RS_NAME
# 2. Read its pod-template-hash label
POD_HASH_LABEL=$(kubectl get rs $RS_NAME -o jsonpath="{.metadata.labels.pod-template-hash}"); echo $POD_HASH_LABEL
# 3. List only the pods carrying that hash
POD_NAMES=$(kubectl get pods -l pod-template-hash=$POD_HASH_LABEL --show-labels | tail -n +2 | awk '{print $1}'); echo $POD_NAMES
Here's a tidier shell alias (based on this code by #kekaoyunfuwu) that only lists the pods of a deployment (no interim results are shown):
k_list_pods_in_deployment() (
  test $# -eq 0 && {
    echo "Missing deployment name" && kubectl get deployments
    return 1
  }
  deployment="$1"; shift
  replicaSet="$(kubectl describe deployment $deployment \
    | grep '^NewReplicaSet' \
    | awk '{print $2}'
  )"
  podHashLabel="$(kubectl get rs $replicaSet \
    -o jsonpath='{.metadata.labels.pod-template-hash}'
  )"
  kubectl get pods -l pod-template-hash=$podHashLabel --show-labels \
    | tail -n +2 | awk '{print $1}'
)
alias k.list-pods-in-deployment=k_list_pods_in_deployment
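If jq is available, a shorter sketch is to read the selector straight off the deployment and feed it back to kubectl. This assumes the deployment uses matchLabels only; note that, unlike the pod-template-hash trick above, it will also match unmanaged pods that happen to carry the same labels:
selector=$(kubectl get deploy http -o json \
  | jq -r '.spec.selector.matchLabels | to_entries | map("\(.key)=\(.value)") | join(",")')
kubectl get pods -l "$selector"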
My Kubernetes version is 1.10.4.
I am trying to create a ConfigMap for java keystore files:
kubectl create configmap key-config --from-file=server-keystore=/home/ubuntu/ssl/server.keystore.jks --from-file=server-truststore=/home/ubuntu/ssl/server.truststore.jks --from-file=client--truststore=/home/ubuntu/ssl/client.truststore.jks --append-hash=false
It says configmap "key-config" created.
But when I describe the ConfigMap, I get an empty Data section:
$ kubectl describe configmaps key-config
Name: key-config
Namespace: prod-es
Labels: <none>
Annotations: <none>
Data
====
Events: <none>
I know my Kubernetes version supports binary data in ConfigMaps and Secrets, but I am not sure what is wrong with my approach.
Any input on this is highly appreciated.
kubectl describe does not show binary data in ConfigMaps at the moment (kubectl version v1.10.4); also the DATA column of the kubectl get configmap output does not include the binary elements:
$ kubectl get cm
NAME DATA AGE
key-config 0 1m
But the data is there, it's just a poor UI experience at the moment. You can verify that with:
kubectl get cm key-config -o json
Or you can use this friendly command to check that the ConfigMap can be mounted and the projected contents matches your original files:
kubectl run cm-test --image=busybox --rm --attach --restart=Never --overrides='{"spec":{"volumes":[{"name":"cm", "configMap":{"name":"key-config"}}], "containers":[{"name":"cm-test", "image":"busybox", "command":["sh","-c","md5sum /cm/*"], "volumeMounts":[{"name":"cm", "mountPath":"/cm"}]}]}}'
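If you only want to see which keys made it into the ConfigMap, without the full base64 payloads, a short check (assuming jq is installed; kubectl stores non-UTF-8 files under the binaryData field):
kubectl get cm key-config -o json | jq '.binaryData | keys'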