I tried to use the --privileged flag on my container but it didn't work.
Is there any possibility to use the --privileged flag with a Docker container in Bluemix?
Privileged mode is not supported for security reasons: IBM Containers run on shared infrastructure, so we cannot give users root access. If you need something like that, you should try the Bluemix Virtual Machines service to get full access to the machine.
The --privileged flag is not supported in IBM Containers.
The supported flags for cf ic run are the following:
--name, -it, -m, --memory, -e, --env, -p, --publish, -v, --volume, --link
This is documented at the link below:
https://www.ng.bluemix.net/docs/containers/container_cli_reference_native-docker.html
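For illustration, a hypothetical invocation that uses only the supported flags (the image path, names, and values are placeholders, not taken from the documentation):
cf ic run --name my-app -p 80 -m 256 -e LOG_LEVEL=info registry.ng.bluemix.net/mynamespace/myimage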
I know k3d can do this magically via k3d cluster create myname --token MYTOKEN --agents 1, but I am trying to figure out how to do the simplest version of that 'manually'. I want to create a server with something like:
docker run -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
And connect an agent with something like:
docker run -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://localhost:6443 rancher/k3s:latest agent
Does anyone know what ports need to be forwarded here? How can I set this up? With nearly everything I try, the agent complains that port 6444 is already in use, even if I disable as much as possible on the server with any combination of --no-deploy servicelb --disable-agent --no-deploy traefik.
Feel free to disable literally everything other than the server and the agent; I'm trying to make this ultra, ultra simple, but I'm just butting my head against a wall at the moment. Thanks!
The containers must "see" each other. Docker isolates networks by default, so "localhost" in your agent container is the agent container itself.
Possible solutions:
Run both containers without network isolation using --net=host, map the server's API port to the host with -p/--publish and use the host's IP in the agent container, or use docker-compose (a short sketch follows the link below).
A working example for docker-compose is described here: https://www.trion.de/news/2019/08/28/kubernetes-in-docker-mit-k3s.html
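For illustration, a minimal sketch of the port-mapping option; the token and <host-ip> are placeholders:
# publish the server's API port, then point the agent at the host's IP instead of localhost
# (k3s inside Docker typically also needs --privileged, which is how k3d starts its node containers)
docker run -d --name k3s-server --privileged -e K3S_TOKEN=MYTOKEN -p 6443:6443 rancher/k3s:latest server
docker run -d --name k3s-agent --privileged -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://<host-ip>:6443 rancher/k3s:latest agent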
I am running a local Kubernetes cluster through Docker Desktop on Windows. I'm attempting to modify my kube-apiserver config, and all of the information I've found has said to modify /etc/kubernetes/manifests/kube-apiserver.yaml on the master. I haven't been able to find this file, and am not sure what the proper way is to do this. Is there a different process because the cluster is through Docker Desktop?
Is there a different process because the cluster is through Docker Desktop?
You can get access to kube-apiserver.yaml with a Kubernetes that is running on Docker Desktop, but only in a "hacky" way. I've included the explanation below.
For setups that require such reconfiguration, I encourage you to use a different solution, for example minikube.
Minikube has a feature that allows you to pass additional options to the Kubernetes components. You can read more about the --extra-config option in this documentation:
Minikube.sigs.k8s.io: Docs: Commands: Start
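For example, an apiserver option can be passed at start time like this (the specific option shown is only an illustration):
minikube start --extra-config=apiserver.service-node-port-range=1-65535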
As for the reconfiguration of kube-apiserver.yaml with Docker Desktop:
You need to run the following command:
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
The above command will allow you to run:
vi /etc/kubernetes/manifests/kube-apiserver.yaml
This lets you edit the API server configuration. The Pod running kube-apiserver will be restarted with the new parameters.
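As a quick check, a sketch assuming the static Pod carries the standard component=kube-apiserver label:
# confirm the restarted Pod picked up the edited flags
kubectl -n kube-system get pod -l component=kube-apiserver -o jsonpath='{.items[0].spec.containers[0].command}'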
You can check the StackOverflow answers below for more reference:
Stackoverflow.com: Answer: Where are the Docker Desktop for Windows kubelet logs located?
Stackoverflow.com: Answer: How to change the default nodeport range on Mac (docker-desktop)?
I used this answer without the screen command and was able to reconfigure kube-apiserver on Docker Desktop on Windows.
How do I ssh to a node inside the cluster locally? I am using the Docker edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
There is no "ssh" command in kubectl yet, but there are plenty of options for getting a shell on a Kubernetes node.
If you are using a cloud provider, you can connect to nodes directly from the instance management interface.
For example, in GCP: select Menu -> Compute Engine -> VM instances, then press the SSH button next to the desired node instance.
If you are using local VMs (VMware, VirtualBox), you can configure sshd before rolling out the Kubernetes cluster, or use the VM console, which is available from the management GUI.
Vagrant provides its own command to access VMs: vagrant ssh.
If you are using minikube, there is the minikube ssh command to connect to the minikube VM. There are also other options.
I found no simple way to access the docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.
How to ssh to the node inside the cluster in local
Kubernetes is aware of nodes only at the level of secure communication with the kubelets on those nodes (getting the hostname and IP from the node), and as such does not provide cluster-level ssh to nodes out of the box. Depending on your actual provider/setup there are different ways of connecting to nodes, and they all boil down to locating your ssh key, opening the appropriate ports in the firewall/security groups, and issuing ssh -i key user@node_instance_ip to access the node. If you are running locally with virtual machines, you can set up your own ssh keypair and do the trick.
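For example (key path, user name, and node IP are placeholders):
ssh -i ~/.ssh/node_key <user>@<node-instance-ip>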
You can effectively shell into a pod using kubectl exec (I know it's not exactly what the question asks, but it might be helpful).
An example usage would be kubectl exec -it name-of-your-pod -- /bin/bash, assuming you have bash installed in the pod.
Hope that helps.
You first have to extend kubectl with plugins by adding https://github.com/luksa/kubectl-plugins.
Basically, to "install" ssh, e.g.:
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
Then make sure the kubectl-ssh file is on your path.
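A sketch of the remaining steps (the install directory is an assumption; any directory on your PATH works, and the plugin's README documents its exact subcommand syntax):
chmod +x kubectl-ssh
sudo mv kubectl-ssh /usr/local/bin/
kubectl ssh --help   # the kubectl-ssh executable is now picked up as the 'kubectl ssh' subcommand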
What is best practice to access the host's services within a docker container?
I'd like to access PostgreSQL running on the host within my application which runs in a docker container.
The easiest approach I've found is to use docker container run --net="host" which, based on this answer, behaves as follows:
Such a container will share the network stack with the docker host and from the container point of view, localhost (or 127.0.0.1) will refer to the docker host.
Be aware that any port opened in your docker container would be opened on the docker host. And this without requiring the -p or -P docker run option.
Which does not seem to be best practice since the containers should be isolated from the host.
Other approaches I've found involve awking the host's IP. Is that the way to go?
The best option in this case is to treat the host as a remote machine. That way the container will be portable and will not have a strict dependency on network locations when connecting to the database.
In addition to the drawbacks of --network=host already mentioned, that option tightly couples the container to the host by assuming the database is found on localhost.
The way to treat the machine as a remote one is to use standard network constructs such as IP and DNS. Define a new DNS entry in the container that points to the host where the DB is found, using the --add-host option of docker run:
docker run --add-host db-static:<ip-address-of-host> ...
Then, inside the container, you connect to the database via db-static.
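For example, with the PostgreSQL case from the question (user and database name are placeholders):
# inside the container, db-static resolves to the host running PostgreSQL
psql "host=db-static port=5432 user=postgres dbname=mydb"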
My question is basically a combination of Access Docker socket within container and Accessing docker host from (jenkins) docker container.
My goal
To run Jenkins fully dockerized, including dynamic slaves, and to be able to create Docker containers within the slaves.
Except for the last part, everything is already working thanks to https://github.com/maxfields2000/dockerjenkins_tutorial, provided the Docker UNIX socket is properly exposed to the Jenkins master.
The problem
Unlike the slaves, which are provisioned dynamically, the master is started via docker-compose and thus has proper access to the UNIX socket.
For the slaves, which are spawned dynamically, this approach does not work.
I tried to forward access to Docker like this:
VOLUME /var/run/docker.sock
VOLUME /var/lib/docker
during building of the image. Unfortunately, so far I get a Permission denied (socket: /run/docker.sock) when trying to access docker.sock in the slave, which was created as shown here: https://gist.github.com/geoHeil/1752b46d6d38bdbbc460556e38263bc3
The strange thing is: the user in the slave is root.
So why do I not have access to docker.sock? Or how could I burn in the --privileged flag so that the permission-denied problem goes away?
With Docker 1.10 a new user namespace feature was introduced, so sharing docker.sock isn't enough on its own, because root inside the container isn't necessarily root on the host machine anymore.
I recently played with Jenkins container as well, and I wanted to build containers using the host docker engine.
The steps I did are:
Find the group id of the docker group:
$ id
..... 999(docker)
Run the Jenkins container with two volumes: one contains the docker client executable, the other shares the Docker UNIX socket. Note how I use --group-add to add the container user to the docker group, to allow access:
docker run --name jenkins -tid -p 8080:8080 --group-add=999 -v /path-to-my-docker-client:/home/jenkins/docker -v /var/run/docker.sock:/var/run/docker.sock jenkins
Tested and found it indeed works:
docker exec -ti jenkins bash
./docker ps
See the docker run reference for more about additional groups.
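Putting the steps together, a sketch that looks up the group id instead of hard-coding it (the docker client path is a placeholder, and the client binary must work inside the container, e.g. be statically linked):
DOCKER_GID=$(getent group docker | cut -d: -f3)
docker run --name jenkins -tid -p 8080:8080 --group-add="$DOCKER_GID" \
  -v /usr/bin/docker:/home/jenkins/docker \
  -v /var/run/docker.sock:/var/run/docker.sock jenkins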
Another approach would be to use the --privileged flag instead of --group-add, yet it is better to avoid it if possible.