I attempted this, but there is an error. I also see See 'kubectl run --help' for usage. in the output, and I can't fix it.
kubectl run pod pod4 --image=aamirpinger/helloworld:latest --port=80 --annotaions=createdBy="Muhammad Shahbaz" --restart=Never
Error: unknown flag: --annotaions
kubectl run supports specifying annotations via the --annotations flag, which can be repeated to apply multiple annotations (note that the flag in your command is misspelled: --annotaions should be --annotations).
For example:
$ kubectl run --image myimage --annotations="foo=bar" --annotations="another=one" mypod
results in the following:
$ kubectl get pod mypod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    foo: bar
    another: one
[...]
kubectl run doesn't have an option to set annotations (at least in older versions of kubectl).
Unless you're running a one-off debugging pod, it's usually better practice to write out the full (Deployment) YAML file, commit to source control, and install it using kubectl apply -f. That will let you specify any Kubernetes object property you need to.
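For reference, a minimal sketch of such a Deployment manifest (the names and labels here are made up; the image and annotation come from the question above) could look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
      annotations:
        createdBy: "Muhammad Shahbaz"
    spec:
      containers:
        - name: helloworld
          image: aamirpinger/helloworld:latest
          ports:
            - containerPort: 80
You would then install it with kubectl apply -f deployment.yaml.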
As David Maze mentioned, there is no --annotations flag for the kubectl run command. It is better to write a deployment YAML file than to create resources with kubectl run.
However, you can add annotations to Kubernetes resources using the kubectl annotate command. All Kubernetes objects support storing additional data with the object as annotations.
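For example, assuming the pod is named pod4 as in the question, something like:
kubectl annotate pod pod4 createdBy="Muhammad Shahbaz"
and you can verify it afterwards with:
kubectl get pod pod4 -o jsonpath='{.metadata.annotations}'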
Hope this helps.
Someone before me deployed a daemonset and a configmap. Is it possible for me to somehow get the values they used? Something like kubectl edit <name>, but the edit view also contains temporary data - the pod name with random characters, etc. What command would I need to use to get only the pure values used in those deployments/daemonsets?
kubectl get --export had several known bugs, and the --export flag is deprecated and slated for removal in k8s v1.18.
$ kubectl get --export
Flag --export has been deprecated, This flag is deprecated and will be removed in future.
...
kubectl get -o yaml can be used to get the values of a k8s resource's manifest along with the resource's metadata and status. Its output has the following three sections (see the example after this list):
metadata
spec with the values of the k8s resource's manifest
status with the resource's status
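For the daemonset and configmap from the question, that would be something along the lines of (resource and namespace names are placeholders):
kubectl get daemonset <daemonset-name> -n <namespace> -o yaml
kubectl get configmap <configmap-name> -n <namespace> -o yaml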
How about kubectl get --export?
I would like to get and see the random suffix that K8s generates for the pod name.
kubectl logs abc-abc-platform-xxxxx dicom-server
Here xxxxx has to be replaced with the randomly generated suffix.
Can you let me know which command I have to execute to find that suffix?
If the pod has the app=my-app label, you can find it and save its name in a bash variable:
POD=$(kubectl get pod -l app=my-app -o jsonpath="{.items[0].metadata.name}")
and then execute:
kubectl logs $POD
PS. This will work only if you have 1 pod with that specific label.
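If more than one pod can carry that label, a rough sketch (still assuming the app=my-app label) is to loop over all matches instead:
for POD in $(kubectl get pod -l app=my-app -o jsonpath="{.items[*].metadata.name}"); do
  kubectl logs "$POD"
done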
I am running the command
kubectl create -f mypod.yaml --namespace=mynamespace
as I need to specify the environment variables through a configMap I created and specified in the mypod.yaml file. Kubernetes returns
pod/mypod created
but kubectl get pods doesn't show it in my list of pods and I can't access it by name as if it does not exist. However, if I try to create it again, it says that the pod is already created.
What may cause this, and how would I diagnose the problem?
By default, kubectl commands operate in the default namespace. But you created your pod in the mynamespace namespace.
Try one of the following:
kubectl get pods -n mynamespace
kubectl get pods --all-namespaces
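If you mostly work in that namespace, you can also make it the default for your current context (supported by reasonably recent kubectl versions):
kubectl config set-context --current --namespace=mynamespace
After that, a plain kubectl get pods will list the pods in mynamespace.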
I am looking to list all the containers in a pod, in a script that gathers logs after running a test. kubectl describe pods -l k8s-app=kube-dns returns a lot of info, but I am just looking for output like:
etcd
kube2sky
skydns
I don't see a simple way to format the describe output. Is there another command? (and I guess worst case there is always parsing the output of describe).
Answer
kubectl get pods POD_NAME_HERE -o jsonpath='{.spec.containers[*].name}'
Explanation
This gets the JSON object representing the pod. It then uses kubectl's JSONpath to extract the name of each container from the pod.
You can use get and choose one of the supported output template with the --output (-o) flag.
Take jsonpath for example,
kubectl get pods -l k8s-app=kube-dns -o jsonpath='{.items[*].spec.containers[*].name}' gives you etcd kube2sky skydns.
Other supported output templates are go-template, go-template-file, and jsonpath-file. See http://kubernetes.io/docs/user-guide/jsonpath/ for how to use the jsonpath template. See https://golang.org/pkg/text/template/#pkg-overview for how to use a go template.
Update: Check this doc for other example commands to list container images: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/
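For comparison, a rough go-template equivalent of the jsonpath example above (same label selector assumed) would be:
kubectl get pods -l k8s-app=kube-dns -o go-template='{{range .items}}{{range .spec.containers}}{{.name}} {{end}}{{end}}'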
Quick hack to avoid constructing the JSONpath query for a single pod:
$ kubectl logs mypod-123
a container name must be specified for pod mypod-123, choose one of: [etcd kubesky skydns]
I put some ideas together into the following:
Simple line:
kubectl get po -o jsonpath='{range .items[*]}{"pod: "}{.metadata.name}{"\n"}{range .spec.containers[*]}{"\tname: "}{.name}{"\n\timage: "}{.image}{"\n"}{end}'
Split (for readability):
kubectl get po -o jsonpath='
{range .items[*]}
{"pod: "}
{.metadata.name}
{"\n"}{range .spec.containers[*]}
{"\tname: "}
{.name}
{"\n\timage: "}
{.image}
{"\n"}
{end}'
How to list BOTH init and non-init containers for all pods
kubectl get pod -o="custom-columns=NAME:.metadata.name,INIT-CONTAINERS:.spec.initContainers[*].name,CONTAINERS:.spec.containers[*].name"
Output looks like this:
NAME                                      INIT-CONTAINERS   CONTAINERS
helm-install-traefik-sjts9                <none>            helm
metrics-server-86cbb8457f-dkpqm           <none>            metrics-server
local-path-provisioner-5ff76fc89d-vjs6l   <none>            local-path-provisioner
coredns-6488c6fcc6-zp9gv                  <none>            coredns
svclb-traefik-f5wwh                       <none>            lb-port-80,lb-port-443
traefik-6f9cbd9bd4-pcbmz                  <none>            traefik
dc-postgresql-0                           init-chmod-data   dc-postgresql
backend-5c4bf48d6f-7c8c6                  wait-for-db       backend
If you want a clear output of which containers belong to each Pod:
kubectl get po -l k8s-app=kube-dns \
-o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name
To get the output in the separate lines:
kubectl get pods POD_NAME_HERE -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
Output:
base-container
sidecar-0
sidecar-1
sidecar-2
If you use json as the output format of kubectl get, you get plenty of details about a pod. With JSON processors like jq it is easy to select or filter for the parts you are interested in.
To list the containers of a pod, the jq query looks like this:
kubectl get --all-namespaces --selector k8s-app=kube-dns --output json pods \
| jq --raw-output '.items[].spec.containers[].name'
If you want to see all details regarding one specific container try something like this:
kubectl get --all-namespaces --selector k8s-app=kube-dns --output json pods \
| jq '.items[].spec.containers[] | select(.name=="etcd")'
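A small variation (a sketch using the same selector) prints each pod name together with its containers on one line:
kubectl get --all-namespaces --selector k8s-app=kube-dns --output json pods \
  | jq --raw-output '.items[] | .metadata.name + ": " + (.spec.containers | map(.name) | join(", "))'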
Use the command below:
kubectl get pods -o=custom-columns=PodName:.metadata.name,Containers:.spec.containers[*].name,Image:.spec.containers[*].image
To see verbose information along with configmaps of all containers in a particular pod, use this command:
kubectl describe pod/<pod name> -n <namespace name>
Use the command below to see all the information for a particular pod:
kubectl get pod <pod name> -n <namespace name> -o yaml
For overall details about the pod, try the following command to get the container details as well:
kubectl describe pod <podname>
I use this to display image versions on the pods.
kubectl get pods -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{end}{end}' && printf '\n'
It's just a small modification of an existing script: it appends a newline so the next console prompt starts on a new line, removes the commas at the end of each line, and lists only my pods rather than the service pods (i.e. the --all-namespaces option is removed).
There are enough answers here already, but sometimes you want to see the containers and initContainers of a deployment's pods. To do that:
1- Retrieve the deployment name
kubectl get deployment
2- Retrieve containers' names
kubectl get deployment <deployment-name> -o jsonpath='{.spec.template.spec.containers[*].name}'
3- Retrieve initContainers' names
kubectl get deployment <deployment-name> -o jsonpath='{.spec.template.spec.initContainers[*].name}'
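Or, as a sketch combining both into a single command (same placeholder deployment name):
kubectl get deployment <deployment-name> -o jsonpath='{.spec.template.spec.initContainers[*].name} {.spec.template.spec.containers[*].name}'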
Easiest way to know the containers in a pod:
kubectl logs <pod-name> -c <container-name> -n <namespace>
If you omit the container name on a multi-container pod, the error message lists all of its containers (as in the quick hack above).