Profile using eBPF inside Kubernetes

I am trying to profile a kubernetes pod using the BCC tools. The environment is:
minikube v1.8.0
Linux kernel headers installed on the host by following the steps in https://minikube.sigs.k8s.io/docs/tutorials/ebpf_tools_in_minikube/
I spun up a zlim/bcc container inside the cluster and verified that execsnoop works.
When I run "profile" inside the container, I can see BCC working and producing profile output.
However, if I run "profile -p <pid>" with the pid of a container, the profile output comes out empty.
There are two ways I get the pid of a container:
"docker ps" followed by "docker inspect --format '{{ .State.Pid }}' <container_id>"
Directly check "ps" on the minikube console.
Any help will be appreciated. Thanks.

Related

Start an interactive shell into a SQL Server 2019 container running in an AKS pod

I am using the mssql Docker image (Linux) for SQL Server 2019. The default user is not root but mssql.
I need to perform some operations as root inside the container:
docker exec -it sql bash
mssql@7f5a78a63728:/$ sudo <command>
bash: sudo: command not found
Then I start the shell as root:
docker exec -it --user=root sql bash
root@7f5a78a63728:/# <command>
...
This works.
Now I need to do this in a container deployed in an AKS cluster:
kubectl exec -it rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
mssql@rms-sql-1-sql-server-host:/$ sudo <command>
bash: sudo: command not found
as expected. But then:
kubectl exec -it --user=root rms-sql-1-sql-server-deployment-86cc45dc5c-tgtm2 -- bash
error: auth info "root" does not exist
So when the container is in an AKS cluster, starting a shell as root doesn't work.
I then tried to get a shell on the node and use docker from inside it:
kubectl debug node/aks-agentpool-30797540-vmss000000 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Creating debugging pod node-debugger-aks-agentpool-30797540-vmss000000-xfrsq with container debugger on node aks-agentpool-30797540-vmss000000.
If you don't see a command prompt, try pressing enter.
root@aks-agentpool-30797540-vmss000000:/# docker ...
bash: docker: command not found
Looks like a Kubernetes cluster node doesn't have docker installed!
Any clues?
EDIT
The image I used locally and in Kubernetes is exactly the same:
mcr.microsoft.com/mssql/server:2019-latest, untouched.
David Maze made a good point in a comment:
Any change you make in this environment will be lost as soon as the Kubernetes pod is deleted, including if you need to update the underlying image or if its node goes away outside of your control. Would building a custom image with your changes be a more maintainable solution?
Generally, if you want to change something permanently you have to create a new image; everything you described behaved exactly as it was supposed to. First, you exec'd into the container in Docker and then logged in as root, but in k8s it is a completely different container, and perhaps a different image is used. Second, even if you made a change, it would only exist until the container dies. If you want to modify something permanently, you have to build a new image with all the components and the configuration you need. For more information, look at the pod lifecycle documentation.
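As a minimal sketch of that approach (the registry, tag, and installed package here are illustrative assumptions, not a prescribed setup):
# bake the change into a derived image instead of patching a live container
cat > Dockerfile <<'EOF'
FROM mcr.microsoft.com/mssql/server:2019-latest
# switch to root only for the build step, then drop back to mssql
USER root
RUN apt-get update && apt-get install -y sudo && rm -rf /var/lib/apt/lists/*
USER mssql
EOF
docker build -t myregistry.azurecr.io/mssql-custom:2019.1 .
docker push myregistry.azurecr.io/mssql-custom:2019.1
# then point the image: field of the AKS deployment at the new tag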

Location of Kubernetes config directory with Docker Desktop on Windows

I am running a local Kubernetes cluster through Docker Desktop on Windows. I'm attempting to modify my kube-apiserver config, and all of the information I've found has said to modify /etc/kubernetes/manifests/kube-apiserver.yaml on the master. I haven't been able to find this file, and am not sure what the proper way is to do this. Is there a different process because the cluster is through Docker Desktop?
Is there a different process because the cluster is through Docker Desktop?
You can get access to kube-apiserver.yaml with a Kubernetes cluster that is running on Docker Desktop, but only in a "hacky" way. I've included the explanation below.
For setups that require such reconfiguration, I encourage you to use a different solution, for example minikube.
Minikube has a feature that allows you to pass additional options to the Kubernetes components. You can read more about the --extra-config option in this documentation:
Minikube.sigs.k8s.io: Docs: Commands: Start
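For example, a single flag can be passed through to the API server like this (the flag and value are just an illustration):
minikube start --extra-config=apiserver.service-node-port-range=1-65535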
As for the reconfiguration of kube-apiserver.yaml with Docker Desktop:
You need to run the following command:
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
The above command will allow you to run:
vi /etc/kubernetes/manifests/kube-apiserver.yaml
This lets you edit the API server configuration. The Pod running kube-apiserver will be restarted with the new parameters.
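If you want to confirm the restart, something like the following should work (assuming the static pod carries the usual kubeadm-style component label, which is an assumption on my part for Docker Desktop):
kubectl -n kube-system get pod -l component=kube-apiserver -w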
You can check the StackOverflow answers below for more reference:
Stackoverflow.com: Answer: Where are the Docker Desktop for Windows kubelet logs located?
Stackoverflow.com: Answer: How to change the default nodeport range on Mac (docker-desktop)?
I used this answer without the screen command and was able to reconfigure kube-apiserver on Docker Desktop on Windows.

Nginx ingress controller on Kubernetes not allowing installation of some packages

I am trying to execute
apt install tcpdump
but I am hitting a permission denial. When I try to switch to the root user, I am asked for a password, and I don't know where to get that password.
I installed the nginx helm chart from the stable/nginx repository with no RBAC.
Please see the snapshot for details of the error I hit while trying to install tcpdump in the pod after shelling into it.
In Using GDB with Nginx, you can find a troubleshooting section.
In short:
find the node where your pod is running (kubectl get pods -o wide)
ssh into the node
find the docker_ID for this image (docker ps | grep pod_name)
run docker exec -it --user=0 --privileged docker_ID bash
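Put together, the sequence looks roughly like this (the node name and container ID are illustrative, and the last step assumes a Debian-based image with apt available):
kubectl get pods -o wide                  # note the node the pod runs on
ssh node-1                                # get a shell on that node
docker ps | grep nginx-ingress            # find the container ID, e.g. 4f2d8c1ab9ee
docker exec -it --user=0 --privileged 4f2d8c1ab9ee bash
apt-get update && apt-get install -y tcpdump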
Note: Runtime privilege and Linux capabilities
When the operator executes docker run --privileged, Docker will enable access to all devices on the host as well as set some configuration in AppArmor or SELinux to allow the container nearly all the same access to the host as processes running outside containers on the host. Additional information about running with --privileged is available on the Docker Blog.
Additional resources:
ROOT IN CONTAINER, ROOT ON HOST
Hope this helps.

ImagePullBackOff Error

I'm using Minikube on a Windows-based machine. On the same machine, I also have docker-machine set up.
I've pointed the docker client towards minikube's docker environment. This way, I can see the Docker environment inside Kubernetes.
I can build docker images and run docker containers from the Minikube VM without issues. However, when I try to start any docker container via kubectl (from PowerShell), it fails to start, apparently because kubectl can't find the docker image, giving the following error:
Failed to pull image "image name": rpc error: code = Unknown desc =
Error response from daemon: repository "image-repo-name" not found:
does not exist or no pull access Error syncing pod
I don't know what's missing. If "docker run" can access the image, why can't "kubectl"?
Here is my Dockerfile:
FROM node:4.4
EXPOSE 9002
COPY server.js .
CMD node server.js
Make sure the image path in your yaml is correct and that the image exists on your local machine. Tag it with a specific version number rather than "latest": with the "latest" tag, Kubernetes defaults to always pulling from a registry, so it will not use your locally built image.
Have this in your deployment yaml:
image: redis:1.0.48
run "> docker images" to see the list of images on your machine.

Access docker within container on jenkins slave

My question is basically a combination of Access Docker socket within container and Accessing docker host from (jenkins) docker container.
My goal
To run Jenkins fully dockerized, including dynamic slaves, and to be able to create docker containers within the slaves.
Except for the last part, everything is already working thanks to https://github.com/maxfields2000/dockerjenkins_tutorial, provided the docker UNIX socket is properly exposed to the Jenkins master.
The problem
Unlike the slaves, which are provisioned dynamically, the master is started via docker-compose and thus has proper access to the UNIX socket.
For the dynamically spawned slaves, this approach does not work.
I tried to forward access to docker by adding
VOLUME /var/run/docker.sock
VOLUME /var/lib/docker
while building the image. Unfortunately, so far I get a Permission denied (socket: /run/docker.sock) when trying to access docker.sock in a slave created like this: https://gist.github.com/geoHeil/1752b46d6d38bdbbc460556e38263bc3
The strange thing is: the user in the slave is root.
So why do I not have access to docker.sock? Or how could I bake in the --privileged flag so that the permission denied problem goes away?
Docker 1.10 introduced user namespaces; with user namespace remapping enabled, sharing docker.sock isn't enough, because root inside the container is no longer root on the host machine.
I recently played with a Jenkins container as well, and I wanted to build containers using the host docker engine.
The steps I took were:
Find the group id of the docker group:
$ id
..... 999(docker)
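An alternative, illustrative way to find the same GID, assuming standard tools on the host:
getent group docker                   # e.g. docker:x:999:youruser
stat -c '%g' /var/run/docker.sock     # GID owning the socket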
Run the jenkins container with two volumes: one contains the docker client executable, the other shares the docker UNIX socket. Note how I use --group-add to add the container user to the docker group, to allow access:
docker run --name jenkins -tid -p 8080:8080 --group-add=999 -v /path-to-my-docker-client:/home/jenkins/docker -v /var/run/docker.sock:/var/run/docker.sock jenkins
Tested and found it indeed works:
docker exec -ti jenkins bash
./docker ps
See more about additional groups here
Another approach would be to use the --privileged flag instead of --group-add, yet it's better to avoid it if possible.
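Since the master here is started via docker-compose, the same group-add idea can be expressed in compose form; a minimal sketch (GID 999 is an assumption, check your host's docker group):
# compose fragment mirroring the docker run command above
cat > docker-compose.override.yml <<'EOF'
services:
  jenkins:
    group_add:
      - "999"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
EOF
docker compose up -d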