Inspecting a container using kubectl - kubernetes

Is there a way to inspect a container running in a pod directly from the Kubernetes command line (using kubectl) to see details such as whether it is running in privileged mode, for instance?
Something like:
kubectl inspect -c <containerName>
The only way I found is to SSH to the node hosting the pod and run docker inspect <containerID>, but this is rather tedious.
My Kubernetes version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Check kubectl describe pods/<pod_name>
If that is not enough for you, you can go for JSON output and filter it with jq:
kubectl get pod <pod_name> -o json | jq '.spec.containers[] | .securityContext'
Also, check the kubectl Cheat Sheet.
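If jq is not available, kubectl's own jsonpath output can pull the same field. A minimal sketch (<pod_name> is a placeholder; the privileged column stays empty for containers without a securityContext):
kubectl get pod <pod_name> -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.securityContext.privileged}{"\n"}{end}'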

You can use the following kubectl commands to get the details of a pod:
kubectl describe pod <pod_name> -n <namespacename>
kubectl get pod <pod_name> -n <namespacename> -o yaml # output in YAML format
kubectl get pod <pod_name> -n <namespacename> -o json # output in JSON format
If you want to know which containers are running in privileged mode from an audit point of view, then I suggest looking at the Falco project, which has a mechanism to write policies and trigger alerts when a container violates a policy. The policy could be that no container may run in privileged mode.
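If you only need a one-off audit rather than continuous enforcement like Falco, a cluster-wide check can be sketched with kubectl and jq (this assumes jq is installed; containers without a securityContext simply don't match):
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[] as $p | $p.spec.containers[] | select(.securityContext.privileged == true) | "\($p.metadata.namespace)/\($p.metadata.name): \(.name)"'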

I had a similar problem: a pod with status Evicted that I needed to inspect (the kubectl equivalent of inspect is describe). So I used:
kubectl describe pod <pod-name>
So I could see what I was looking for:
...
Status: Failed
Reason: Evicted
Message: Pod The node had condition: [DiskPressure].
...
Searching around, I found a very nice article: 12 Critical Kubernetes Health Conditions You Need to Monitor and Why.
I'm still working on a fix, but this output may help others.
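Since the eviction message points at a node condition (DiskPressure), it may also help to look at the node itself. A small sketch (<node-name> is a placeholder):
kubectl describe node <node-name>   # full view, including Conditions and Events
kubectl get node <node-name> -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'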

Related

you must specify an existing container or a new image when specifying args

According to the Kubernetes docs, you can start a debug version of a container and run a command on it like this:
$ kubectl debug (POD | TYPE[[.VERSION].GROUP]/NAME) [ -- COMMAND [args...] ]
But when I try and do this in real life I get the following:
$ kubectl debug mypod \
--copy-to=mypod-dev \
--env='PYTHONPATH="/my_app"' \
--set-image=mycontainer=myimage:dev -- python do_the_debugging.py
error: you must specify an existing container or a new image when specifying args.
If I don't specify -- python do_the_debugging.py I can create the debug container, but then I need a separate command to actually do the debugging:
kubectl exec -it mypod-dev -- python do_the_debugging.py
Why can't I do this all in one line as the docs seem to specify?
Some Kubernetes details:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-23T02:22:53Z", GoVersion:"go1.15.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-eks-ad4801", GitCommit:"ad4801fd44fe0f125c8d13f1b1d4827e8884476d", GitTreeState:"clean", BuildDate:"2020-10-20T23:27:12Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Try to add -it and --container flags to your command. In your specific case, it might look like this:
$ kubectl debug mypod \
--copy-to=mypod-dev \
--env='PYTHONPATH="/my_app"' \
--set-image=mycontainer=myimage:dev \
--container=mycontainer -it -- python do_the_debugging.py
I am not able to reproduce your exact issue because I don't have the do_the_debugging.py script, but I've created a simple example.
First, I created a Pod named web using the nginx image:
root@kmaster:~# kubectl run web --image=nginx
pod/web created
Then I ran the kubectl debug command to create a copy of web named web-test-1 but with the httpd image:
root@kmaster:~# kubectl debug web --copy-to=web-test-1 --set-image=web=httpd --container=web -it -- bash
If you don't see a command prompt, try pressing enter.
root@web-test-1:/usr/local/apache2#
Furthermore, I recommend upgrading your cluster to a newer version because your client and server versions are very different.
Your kubectl version is 1.20, so you should have kube-apiserver at version 1.19 or 1.20.
Generally speaking, if kube-apiserver is at version X, kubectl should be at version X-1, X, or X+1.
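To quickly see the skew, kubectl can print just the version strings (the output shown is simply the versions from the question; the --short flag is available in kubectl 1.20):
$ kubectl version --short
Client Version: v1.20.1
Server Version: v1.16.15-eks-ad4801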

Why does a simple kubectl (1.16) run show an error?

kubectl version
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-18T14:56:51Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
error
When I run kubectl run, an error occurs.
$ kubectl run nginx --image=nginx
WARNING: New generator "deployment/apps.v1" specified, but it isn't available. Falling back to "run/v1".
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
error: no matches for kind "Deployment" in version "apps/v1"
It seems like this is caused by the new version (1.16.x), doesn't it?
As far as I have searched, even the official documents don't explicitly mention anything related to this situation. How can I use kubectl run?
Try
kubectl create deployment --image nginx my-nginx
As the kubectl Usage Conventions suggest:
Specify the --generator flag to pin to a specific behavior when you
use generator-based commands such as kubectl run or kubectl expose
Use kubectl run --generator=run-pod/v1 nginnnnnnx --image nginx instead.
Also, @soltysh describes well enough why it's better to use kubectl create instead of kubectl run.
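To double-check the result of the kubectl create deployment command above, a quick verification sketch (my-nginx matches the example; pods created this way are labeled app=<deployment-name>):
kubectl get deployment my-nginx
kubectl get pods -l app=my-nginx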

The kubernetes "AVAILABLE" column indicates "0", but the former steps(in Kubernetes guide) are OK

I need to deploy some Docker images and manage them with Kubernetes.
I followed the tutorial"Interactive Tutorial - Deploying an App"(https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-interactive/).
But after typing the command kubectl get deployments, the AVAILABLE column in the result table shows 0 instead of 1, which is confusing me.
Could anyone kindly tell me what's going wrong and what I should do?
The OS is Ubuntu16.04;
The kubectl version command shows the server and client version information correctly.
The docker image is tagged already(a mysql:5.7 image).
devserver:~$ kubectl version    
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}  
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
devserver:~$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ap-mysql     1         1         1            0           1
hello-node   1         1         1            0           1
I'd like to understand this behavior and how to resolve it. I also need to deploy my image on minikube.
Katacoda uses hosted VMs, so sometimes it may be slow to respond to terminal input.
To verify whether any deployment is present you may run kubectl get deployments --all-namespaces. To see what's going on with your deployment you can run kubectl describe deployment DEPLOYMENT_NAME -n NAMESPACE. To inspect a pod you can do the same with kubectl describe pod POD_NAME -n NAMESPACE.
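For the AVAILABLE 0 case above, a typical diagnostic sequence might look like this (the deployment name is taken from the question and the default namespace is assumed):
kubectl get deployments --all-namespaces
kubectl describe deployment ap-mysql        # check Conditions and Events
kubectl get pods | grep ap-mysql            # Pending, CrashLoopBackOff, ImagePullBackOff?
kubectl describe pod <ap-mysql-pod-name>    # the Events section usually shows the root cause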

kubectl top doesn't work

I'm using Kubernetes 1.11.0 and running Heapster. When I run
kubectl top pod
it shows the error
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get services http:heapster:)
even though I have already installed Heapster:
kubectl create -f deploy/kube-config/influxdb/
kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml
Any suggestions?
Update:
The command kubectl top pod works now, but the endpoint still doesn't work:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods"
#Error from server (ServiceUnavailable): the server is currently unable to handle the request
Can you check and ensure that your kubectl binary is the latest? Something like
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T22:29:25Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
This generally happens if kubectl is older. Old kubectl versions looked for the heapster service to be present, but newer ones should not have this problem.
Hope this helps.
In addition to the above, you might want to consider moving to metrics-server, since Heapster is on its way to being deprecated:
https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md
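If you do switch to metrics-server, a minimal install-and-verify sketch could look like this (the manifest URL is the upstream release artifact and may differ for your cluster version):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl get apiservice v1beta1.metrics.k8s.io   # should eventually report Available=True
kubectl top pod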

Run kubectl inside a cluster

I have a Kubernetes 1.10 cluster up and running. Using the following command, I create a container running bash inside the cluster:
kubectl run tmp-shell --rm -i --tty --image centos -- /bin/bash
I download the correct version of kubectl inside the running container, make it executable and try to run
./kubectl get pods
but get the following error:
Error from server (Forbidden): pods is forbidden:
User "system:serviceaccount:default:default" cannot
list pods in the namespace "default"
Does this mean that kubectl detected it is running inside a cluster and is automatically connecting to it? How do I allow the service account to list the pods? My final goal is to run helm inside the container. According to the docs I found, this should work fine as soon as kubectl is working.
Does this mean that kubectl detected it is running inside a cluster and is automatically connecting to it?
Yes, it used the KUBERNETES_SERVICE_PORT and KUBERNETES_SERVICE_HOST envvars to locate the API server, and the credential in the auto-injected /var/run/secrets/kubernetes.io/serviceaccount/token file to authenticate itself.
How do I allow the serviceaccount to list the pods?
That depends on the authorization mode you are using. If you are using RBAC (which is typical), you can grant permissions to that service account by creating RoleBinding or ClusterRoleBinding objects.
See https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions for more information.
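For example, to let the default service account in the default namespace list pods, a minimal sketch using the imperative commands (the role and binding names are arbitrary):
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods -n default
kubectl create rolebinding default-pod-reader --role=pod-reader --serviceaccount=default:default -n default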
I believe helm requires extensive permissions (essentially superuser on the cluster). The first step would be to determine what service account helm was running with (check the serviceAccountName in the helm pods). Then, to grant superuser permissions to that service account, run:
kubectl create clusterrolebinding helm-superuser \
--clusterrole=cluster-admin \
--serviceaccount=$SERVICEACCOUNT_NAMESPACE:$SERVICEACCOUNT_NAME
True, kubectl will try to get everything it needs to authenticate with the master.
But with a ClusterRoleBinding to "cluster-admin" you give that pod unlimited permissions across all namespaces, which sounds a bit risky.
For me, it was a bit annoying to add an extra 43 MB for the kubectl client to my Kubernetes container, but the alternative was to use one of the SDKs to implement a more basic client. kubectl is easier to authenticate because the client gets the token it needs from /var/run/secrets/kubernetes.io/serviceaccount, plus we can use manifest files if we want. For most common Kubernetes setups you shouldn't need to add any extra environment variables or attach any secret volume; it will just work if you have the right ServiceAccount.
Then you can test whether it is working with something like:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n <YOUR_NAMESPACE>
NAME     READY   STATUS    RESTARTS   AGE
pod1-0   1/1     Running   0          6d17h
pod2-0   1/1     Running   0          6d16h
pod3-0   1/1     Running   0          6d17h
pod3-2   1/1     Running   0          67s
or permission denied:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:spinupcontainers" cannot list resource "pods" in API group "" in the namespace "kube-system"
command terminated with exit code 1
Tested on:
$ kubectl exec -it <your-container-with-the-attached-privs> -- /kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
You can check my answer at How to run kubectl commands inside a container? for RoleBinding and RBAC.