How to log in as root to a running pod in Kubernetes

I tried multiple syntaxes, including the one given below, with no luck yet:
kubectl exec -u root -it testpod -- bash
Error: unknown shorthand flag: 'u' in -u
See 'kubectl exec --help' for usage.
It is version 1.22.

There is no option available in kubectl exec to specify the user, because the user is decided either in the container image or in the pod.spec.containers.securityContext.runAsUser field.
So, to achieve what you want on a running container, just run kubectl exec -it testpod -- bash and then issue su - root from inside the container.
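A minimal sketch of that flow (assuming the image actually ships bash and su and has a root account; if not, you would need a different image or the runAsUser field instead):
# Exec into the running pod as whatever user the image/securityContext defines
kubectl exec -it testpod -- bash
# Inside the container, check the current user and then switch to root
id
su - root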

Related

App pod logs with linkerd | unable to view

I was able to view the app container logs using kubectl logs -f and was able to log in to the container using "k exec --stdin --tty -- /bin/bash".
After injecting linkerd, I could not log in to the container. However, my goal is to check the app logs.
When I use "k logs -f linkerd-proxy" I could not see the app-related logs.
I tried injecting the debug sidecar as well.
I tried "k logs deploy/ linkerd-debug - " as well as "k exec -it -c linkerd-debug -- tshark -i any -f "tcp" -V -Y "http.request"".
Still, I couldn't see the exact logs for my app in the pod. Please suggest.
Linkerd works by injecting an additional container into your pods; this is known as the "sidecar" pattern. Your application (or, more precisely, container) logs are still accessible; however, because the pod now has more than one container, kubectl requires you to specify the container name explicitly.
For example, assuming you have a pod with two containers (linkerd-proxy and app), you'd have to specify app as the name of the container:
$ kubectl logs -f <pod-name> -c app
# You can specify the container name without the -c flag
$ kubectl logs -f <pod-name> app
# This will work for 'exec' too
$ kubectl exec <pod-name> -c app -it -- sh
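If you are not sure what the containers are called, a couple of additional sketches (the pod name is a placeholder, and --all-containers needs a reasonably recent kubectl):
# Stream logs from every container in the pod at once
$ kubectl logs -f <pod-name> --all-containers=true
# Or list the container names defined in the pod spec, then pick the app one with -c
$ kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'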

Configure command to use for shell on pod

In k9s: Is there a way to configure the command which is used when starting a shell inside the pod?
I have looked through their documentation and briefly browsed the source without finding any hint of how a shell command could be specified.
The kubectl command-line tool must be configured to communicate with your cluster.
Create the Pod:
kubectl apply -f https://k8s.io/examples/application/shell-demo.yaml
Verify that the container is running:
kubectl get pod shell-demo
Get a shell to the running container:
kubectl exec --stdin --tty shell-demo -- /bin/bash
In your shell, list the root directory by running this command inside the container:
ls /
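If the image in question does not include bash, a fallback sketch using sh (most images ship at least /bin/sh):
kubectl exec --stdin --tty shell-demo -- /bin/sh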
Please refer to the Kubernetes documentation on getting a shell to a running container for more detail.

How can I enter a Kubernetes managed container faster?

Currently, if I want to inspect my container, I have to go through three steps:
1. kubectl get all -n {NameSpace}
2. kubectl describe {Podname from step 1} -n {NameSpace}, then find the node host and the container ID (my eyes are complaining!)
3. Switch to the host and execute "docker exec -ti -u root {Container ID} bash"
I am so mad about it right now. I wish somebody could offer some help to me and to those who may share the same issue.
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
So, if you want to "enter" a container, you just need to "exec" into the pod in a particular namespace; Kubernetes will run the shell/command inside that pod for you.
kubectl -n somenamespace exec -it podname -- bash
There is no need to mention the node here, as Kubernetes internally knows which node the pod is scheduled on.
If a Pod has more than one container, use --container or -c to specify a container in the kubectl exec command. For example, suppose you have a Pod named my-pod, and the Pod has two containers named main-app and helper-app. The following command would open a shell to the main-app container.
kubectl exec -it my-pod -c main-app -- /bin/bash
https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/
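To make this faster still, one possible sketch is to look the pod up by label and exec into it in a single step (the app=myapp label and the somenamespace namespace are placeholders; adjust them to your deployment):
# Grab the first pod matching the label, then exec straight into it
POD=$(kubectl get pods -n somenamespace -l app=myapp -o jsonpath='{.items[0].metadata.name}')
kubectl -n somenamespace exec -it "$POD" -- bash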

Restart=Never causes the MongoDB pod to terminate

I am trying to follow the instructions here: https://github.com/bitnami/charts/tree/master/bitnami/mongodb
helm install mongorelease --set mongodbRootPassword=secretpassword,mongodbUsername=my-user,mongodbPassword=my-password,mongodbDatabase=my-database bitnami/mongodb
which says:
To connect to your database run the following command:
kubectl run --namespace default mongorelease-mongodb-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb:4.2.5-debian-10-r44 --command -- mongo admin --host mongorelease-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
I run the command above (replacing $MONGODB_ROOT_PASSWORD with my password) and I see this error:
error: invalid restart policy: 'Never'
See 'kubectl run -h' for help and examples
I remove the single quotes around Never and see this:
MongoDB shell version v4.2.5
connecting to: mongodb://mongorelease-mongodb:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
2020-04-11T10:04:52.187+0000 E QUERY [js] Error: Authentication failed. :
connect#src/mongo/shell/mongo.js:341:17
#(connect):2:6
2020-04-11T10:04:52.189+0000 F - [main] exception: connect failed
2020-04-11T10:04:52.189+0000 E - [main] exiting with code 1
pod "mongorelease-mongodb-client" deleted
pod default/mongorelease-mongodb-client terminated (Error)
I then remove --restart=Never from the command and run it again. It then works as expected and I can interact with MongoDB; however, I am presented with this warning:
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
What is the command I should be using?
--restart=Never creates a pod. You can instead run the command with --generator=run-pod/v1, which also creates a pod. This avoids the use of --restart=Never, and the deprecation warning will not appear either.
kubectl run --rm --grace-period=1 --force=true --generator=run-pod/v1 --namespace default mongorelease-mongodb-client --tty -i --image docker.io/bitnami/mongodb:4.2.5-debian-10-r44 --command -- mongo admin --host mongorelease-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
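Regarding the earlier "Authentication failed" error: it may be worth reading the root password back from the secret created by the release rather than typing it in. A sketch, assuming the default secret and key names used by this chart version (mongorelease-mongodb and mongodb-root-password):
# Reads the stored root password and exports it for use in the command above
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongorelease-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)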

kubectl exec into container of a multi container pod

I have a problem logging into one container of a multi-container pod.
I get the container ID from kubectl describe pod <pod-name>:
kubectl describe pod ipengine-net-benchmark-488656591-gjrpc -c <container id>
When I try:
kubectl exec -ti ipengine-net-benchmark-488656591-gjrpc -c 70761432854f /bin/bash
It says: Error from server: container 70761432854f is not valid for pod ipengine-net-benchmark-488656591-gjrpc
Ah, a more detailed reading of the man page of kubectl exec shows:
Flags:
-c, --container="": Container name. If omitted, the first container in the pod will be chosen
-p, --pod="": Pod name
-i, --stdin[=false]: Pass stdin to the container
-t, --tty[=false]: Stdin is a TTY
So I just used the container name from my manifest.yaml and it worked like a charm.
Name of the container: ipengine-net-benchmark-iperf-server
kubectl exec -ti ipengine-net-benchmark-488656591-gjrpc -c ipengine-net-benchmark-iperf-server -- /bin/bash
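For reference, a quick sketch for listing the container names (which is what -c expects, rather than runtime container IDs) without opening the manifest:
# Prints the container names defined in the pod spec
kubectl get pod ipengine-net-benchmark-488656591-gjrpc -o custom-columns=CONTAINERS:.spec.containers[*].name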