Get environment variable from kubernetes pod? - kubernetes

What's the best way to list out the environment variables in a kubernetes pod?
(Similar to this, but for Kube, not Docker.)

kubectl exec -it <pod_name> -- env

Execute in bash:
kubectl exec -it <pod-name> -- printenv | grep -i env
This returns every environment variable whose name or value contains the keyword env.

Both answers have the following issues:
They assume you have the permissions to start a pod, which is not the case in a locked-down environment
They start a new pod, which is invasive and may give different environment variables than "a[n already running] kubernetes pod"
To inspect a running pod and get its environment variables, one can run:
kubectl describe pod <podname>
This is from Alexey Usharovski's comment.
I am hoping this gives more visibility to your great answer. If you would like to post it as an answer yourself, please let me know and I will delete mine.
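If kubectl describe is too verbose, a more targeted read-only option is a jsonpath query against the pod spec. This is only a sketch: it shows the variables defined in the manifest (including valueFrom references), not values injected at runtime, and <podname> is a placeholder.
kubectl get pod <podname> -o jsonpath='{.spec.containers[*].env}'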

kubectl exec <POD_NAME> -- sh -c 'echo $VAR_NAME'

I normally use:
kubectl exec -it <POD_NAME> -- env | grep "<VARIABLE_NAME>"

Related

App pod logs with linkerd | unable to view

I was able to view the app container logs using kubectl logs -f <pod-name> and was able to log in to the container using k exec --stdin --tty <pod-name> -- /bin/bash.
After injecting linkerd, I could not log in to the container. However, my goal is to check the app logs.
When I use k logs -f <pod-name> linkerd-proxy, I cannot see the app-related logs.
I tried injecting the debug sidecar as well.
I also tried k logs deploy/<deployment-name> linkerd-debug as well as k exec -it <pod-name> -c linkerd-debug -- tshark -i any -f "tcp" -V -Y "http.request",
but I still couldn't see the exact logs for my app in the pod. Please suggest.
Linkerd works by injecting an additional container into your pods; this is known as the "sidecar" pattern. Your application (or rather, container) logs are still accessible; however, because the pod now has more than one container, kubectl requires you to explicitly specify the container name.
For example, assuming you have a pod with two containers (linkerd-proxy and app), you'd have to specify app as the name of the container:
$ kubectl logs -f <pod-name> -c app
# You can specify the container name without the -c flag
$ kubectl logs -f <pod-name> app
# Specifying the container works for 'exec' too (use -c)
$ kubectl exec <pod-name> -c app -it -- sh
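If you are not sure which container names the injection added, one way to list them (a sketch; <pod-name> is a placeholder) is to read them from the pod spec:
$ kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'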

How can I enter a Kubernetes managed container faster?

Currently, if I want to inspect my container, I have to go through three steps:
1. kubectl get all -n {NameSpace}
2. kubectl describe {Podname from step 1} -n {NameSpace}, then find the node host and the container ID (my eyes are complaining!)
3. Switch to the host and execute docker exec -ti -u root {Container ID} bash
I am so mad about it right now. I wish somebody could offer some help to me and to those who may share the same issue.
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
So, if you want to "enter" a container, you just need to "exec" into the pod in a particular namespace. Kubernetes will get you the shell/command for that pod.
kubectl -n somenamespace exec -it podname -- bash
There is no need to mention the node here as Kubernetes internally knows on which node the pod is scheduled.
If a Pod has more than one container, use --container or -c to specify a container in the kubectl exec command. For example, suppose you have a Pod named my-pod, and the Pod has two containers named main-app and helper-app. The following command would open a shell to the main-app container.
kubectl exec -it my-pod -c main-app -- /bin/bash
https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/
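As a rough shortcut, the lookup and the exec can be collapsed into one line. This is only a sketch: dev stands in for your namespace, myapp for a fragment of the pod name, and it blindly takes the first match.
kubectl -n dev exec -it "$(kubectl -n dev get pods -o name | grep myapp | head -n 1)" -- bash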

How to login/enter in kubernetes pod

I have Kubernetes pods running, as shown by the command kubectl get all -A, and the same pods are shown by the command kubectl get pod -A.
I want to enter/log in to any of these pods (all are in the Running state). How can I do that? Please let me know the command.
Kubernetes Pods are not Virtual Machines, so not something you typically can "log in" to.
But you might be able to execute a command in a container. e.g. with:
kubectl exec <pod-name> -- <command>
Note that your container needs to contain the binary for <command>, otherwise this will fail.
See also Getting a shell to a container.
In addition to Jonas' answer above:
If you have more than one namespace, you need to specify the namespace your pod is running in, e.g. kubectl exec -n <namespace> -it <pod-name> -- /bin/sh
After successfully accessing your pod, you can go ahead and navigate through your container.

How to access a kubernetes pod by its partial name?

I often run tasks like:
Read the log of the service X
or
Attach a shell inside the service Y
I always use something in my history like:
kubectl logs `kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep <partial_name>`
or
kubectl exec -it `kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep <partial_name>` bash
Do you know if kubectl has already something in place for this? Or should I create my own set of aliases?
Kubernetes objects are loosely coupled by means of labels (key-value pairs). Because of that, Kubernetes provides various features that let you operate on sets of objects based on labels.
If you have several pods of the same service, chances are good that they are managed by some ReplicaSet using a specific label. You should see it if you run:
kubectl get pods --show-labels
Now, to aggregate logs, for instance, you could use a label selector like:
kubectl logs -l key=value
For more info please see: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ .
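For example, assuming your pods carry a hypothetical label app=my-service, you could tail recent logs from all of them at once (--all-containers and --tail are standard kubectl logs flags):
kubectl logs -l app=my-service --all-containers --tail=50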
added to my .zshconfig
sshpod () {
kubectl exec --stdin --tty `kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep ${1} | head -n 1` -- /bin/bash
}
usage
sshpod podname
This finds all pods, greps for the needed name, picks the first match, and opens a shell in that pod (kubectl exec, not a real ssh).
You can access a pod via its deployment/service/etc.:
kubectl exec -it svc/foo -- bash
kubectl exec -it deployment/bar -- bash
Kubernetes will pick a pod that matches the criteria and send you to it.
You can enable shell autocompletion. kubectl provides this support for Bash and Zsh, which will save you a lot of typing (you press TAB to get the suggestion/completion).
The Kubernetes documentation has a great set of information about how to enable autocompletion under Optional kubectl configurations. It covers Bash on Linux, Bash on macOS, and Zsh.
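For reference, the commands from that documentation page look roughly like this; adjust the rc file to your shell and OS.
# Bash
source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >> ~/.bashrc
# Zsh
source <(kubectl completion zsh)
echo 'source <(kubectl completion zsh)' >> ~/.zshrc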

How to access kube-apiserver on command line?

The documentation for installing Knative says it requires a Kubernetes cluster v1.11 or newer with the MutatingAdmissionWebhook admission controller enabled. Checking the documentation for this, I see the following command:
kube-apiserver -h | grep enable-admission-plugins
However, kube-apiserver is running inside a Docker container on the master. Logging in as admin to the master, I am not seeing this binary on the command line after the install. What steps do I need to take to run this command? It's probably a basic Docker question, but I don't see it documented anywhere in the Kubernetes documentation.
So what I really need to know is whether this command line is the best way to set these plugins, and also how exactly to enter the container to execute it:
Where is kube-apiserver located?
Should I enter the container? What is the name of the container, and how do I enter it to execute the command?
I think the answer from @embik that you've pointed out in the initial question is quite decent, but I'll try to shed light on some aspects that may be useful to you.
As @embik mentioned in his answer, the kube-apiserver binary actually resides in a particular container within the K8s api-server Pod, so feel free to check it; just execute /bin/sh in that Pod:
kubectl exec -it $(kubectl get pods -n kube-system| grep kube-apiserver|awk '{print $1}') -n kube-system -- /bin/sh
You might be able to pass the desired enable-admission-plugins through the kube-apiserver command inside this Pod; however, any modification will disappear once the api-server Pod re-spawns, e.g. on a master node reboot.
The essential api-server config is located in /etc/kubernetes/manifests/kube-apiserver.yaml. The node agent kubelet controls the kube-apiserver Pod at runtime, and whenever its health checks fail, kubelet re-creates the affected Pod from the primary kube-apiserver.yaml file.
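So if the goal is just to see which admission plugins are currently enabled, one option (assuming a kubeadm-style setup where that manifest path exists) is to read the flag straight from the static Pod manifest on the master node:
sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml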
This is old, but still, in case it benefits someone in need: @Nick_Kh's answer is good enough; I just want to extend it.
In case the api-server pod fails to give you the shell access, you may directly execute the command using kubectl exec like this:
kubectl exec -it kube-apiserver-rhino -n kube-system -- kube-apiserver -h | grep enable-admission-plugins
In this case, I wanted to know which admission plugins are enabled by default, and every time I tried accessing the pod's shell (bash, sh, etc.), I ended up with an error like this:
[root@rhino]# kubectl exec -it kube-apiserver-rhino -n kube-system -- /bin/sh
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
command terminated with exit code 126
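If the image genuinely ships no shell (common for distroless images like the api-server's), one workaround, sketched here under the assumption that your cluster supports ephemeral containers and that the pod and container names match the example above, is to attach a debug container that brings its own tooling:
kubectl debug -it kube-apiserver-rhino -n kube-system --image=busybox --target=kube-apiserver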