Use docker swarm without docker machine - docker-compose

Is it possible to use docker swarm with docker compose only without a docker machine?
If yes, how can I achieve that?

docker-machine is designed to create virtual Docker hosts in your environment. If you manage your virtual machines with other tools such as VMware ESX, you don't need docker-machine.
Docker swarm is not related to docker-machine, and you can initialize a swarm on a single node.
That single node is then a manager without any worker nodes.
After that you can describe your services with Docker Compose and deploy the stack on the swarm.
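For example, a minimal single-node setup might look like this (a sketch; it assumes a docker-compose.yml written in Compose file format version 3, which docker stack deploy expects, and the stack name mystack is arbitrary):
# make this machine a swarm manager (no docker-machine involved)
docker swarm init
# deploy the services from the compose file as a swarm stack
docker stack deploy -c docker-compose.yml mystack
# verify the services are running
docker service ls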

Related

kubernetes: Is a POD also like a PC

I see that kubernetes uses pods, and each pod can run multiple containers.
For example, I create a pod with
Container 1: Django server - running at port 8000
Container 2: Reactjs server - running at port 3000
Whereas, coming from a Docker background, in Docker we would do
docker run --name django -d -p 8000:8000 some-django
docker run --name reactjs -d -p 3000:3000 some-reactjs
So is a pod also like a PC with some Ubuntu OS on it?
No, a Pod is not like a PC/VM with Ubuntu on it.
There is no intermediate layer between your host and the containers in a pod. The only thing happening here is that the containers in a pod share some resources/namespaces in the host's kernel, and there are mechanisms in your host kernel to "protect" the containers from seeing other containers. Pods are just a mechanism to help you deploy a couple of containers that share some resources (like the network namespace) a little more easily. Fundamentally they are just Linux processes running directly on the host.
(one nuanced technicality/caveat on the above statement: Docker and tools like it will sometimes run their own VM and may try to make that invisible to you. For example, Docker Desktop does this. Usually you can ignore this layer, but it is great to know it is there. The answer holds though: That one single VM will host all of your pods/containers and there is not one VM per pod.)
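To make the question's example concrete, a pod running both containers might look roughly like this (a sketch; the image names some-django and some-reactjs are taken from the question and assumed to exist):
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: django        # reachable from the reactjs container at localhost:8000
      image: some-django
      ports:
        - containerPort: 8000
    - name: reactjs       # reachable from the django container at localhost:3000
      image: some-reactjs
      ports:
        - containerPort: 3000
Both containers share the pod's single network namespace and IP, so they reach each other over localhost; there is no guest OS in between.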

Run a K3S server in a docker container, and connect a K3S agent in another docker container

I know k3d can do this magically via k3d cluster create myname --token MYTOKEN --agents 1, but I am trying to figure out how to do the simplest version of that 'manually'. I want to create a server with something like:
docker run -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
And connect an agent with something like:
docker run -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://localhost:6443 rancher/k3s:latest agent
Does anyone know what ports need to be forwarded here? How can I set this up? Nearly everything I try ends with the agent complaining that port 6444 is already in use, even if I disable as much of the server as possible with any combination of --no-deploy servicelb --disable-agent --no-deploy traefik.
Feel free to disable literally everything other than the server and the agent; I'm trying to make this ultra simple, but I'm just butting my head against a wall at the moment. Thanks!
The containers must "see" each other. Docker isolates the networks by default, so "localhost" in your agent container is the agent container itself.
Possible solutions:
Run both containers without network isolation using --net=host; or publish the API port of the server to the host with -p 6443:6443 and use the host's IP in the agent container; or use docker-compose, which puts both containers on a shared network where they can reach each other by service name.
A working example for docker-compose is described here: https://www.trion.de/news/2019/08/28/kubernetes-in-docker-mit-k3s.html
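A minimal docker-compose sketch along these lines (assumptions on my part: rancher/k3s:latest, a shared token, and privileged mode, which k3s-in-docker generally needs; depending on the k3s version extra tmpfs mounts may also be required):
services:
  server:
    image: rancher/k3s:latest
    command: server
    privileged: true
    environment:
      - K3S_TOKEN=MYTOKEN
    ports:
      - "6443:6443"                   # Kubernetes API, so kubectl on the host can reach it
  agent:
    image: rancher/k3s:latest
    command: agent
    privileged: true
    environment:
      - K3S_TOKEN=MYTOKEN
      - K3S_URL=https://server:6443   # "server" resolves over the compose network, not localhost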

What are good workflows for deploying podman/buildah created container images to minikube?

I am exploring and learning about containers and kubernetes using podman and minikube on a linux workstation. I use podman to build images on the workstation and would like to deploy these images in minikube also running on the workstation using the kvm2 virtual machine driver. I also start minikube using the CRI-O container runtime.
What are efficient workflows to deploy these images from the workstation to minikube in this scenario? Docker is not running on the minikube VM, so reusing the Docker daemon as described in the minikube documentation is not an option. Sharing the host file system with minikube also appears not to be viable at this time when using kvm2.
Is running a local registry that is visible to both the workstation and the minikube vm the best option? Answers to How to use local docker images with Minikube? and (Kubernetes + Minikube) can't get docker image from local registry appear to offer good solutions for configuring a local registry.
Would skopeo be a solution?
Edit: this is a nice post describing how to set up a registry using podman: https://computingforgeeks.com/create-docker-container-registry-with-podman-letsencrypt/
Minikube documentation provides the foundation for a potential workflow at https://minikube.sigs.k8s.io/docs/tasks/docker_registry/. In order to use podman in lieu of docker I did the following:
Start minikube, as instructed, with the --insecure-registry flag. I specifically use
minikube start --network-plugin=cni --enable-default-cni --bootstrapper=kubeadm --container-runtime=cri-o --cpus 4 --memory 4g --insecure-registry "192.168.39.0/24"
Enable the minikube registry addon.
minikube addons enable registry
Configure podman to use the insecure minikube registry by adding the registry to the insecure registries section of /etc/containers/registries.conf. This section now looks like
[registries.insecure]
registries = ['192.168.39.175:5000']
where 192.168.39.175 is the minikube IP. This IP may change after minikube restarts.
Follow the build, push and run commands in https://minikube.sigs.k8s.io/docs/tasks/docker_registry/, substituting podman for docker. This assumes a Containerfile for test-img exists.
Build: podman build --tag $(minikube ip):5000/test-img .
Push: podman push $(minikube ip):5000/test-img
Run: kubectl run test-img --image=$(minikube ip):5000/test-img
This worked but suffers from a serious complication: there is no apparent way at this time to set the IP address for the minikube VM when using kvm2. The IP will always be in the 192.168.39.0/24 subnet, but that is the only certainty. Each time minikube is started the IP address of the registry will change, which has significant implications for podman and the workflow in general.
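One way to soften this (an assumption on my part, not something tested in the steps above) is to skip the static registries.conf entry and tell podman per push that the registry is untrusted:
podman push --tls-verify=false $(minikube ip):5000/test-img
skopeo, mentioned in the question, could play a similar role: skopeo copy --dest-tls-verify=false containers-storage:localhost/test-img docker://$(minikube ip):5000/test-img would copy an image built locally as test-img straight into the minikube registry.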
More to come on another solution.

Docker best practice to access host's services

What is best practice to access the host's services within a docker container?
I'd like to access PostgreSQL running on the host from my application, which runs in a docker container.
The easiest approach I've found is to use docker container run --net="host" which, based on this answer, behaves as follows:
Such a container will share the network stack with the docker host and from the container point of view, localhost (or 127.0.0.1) will refer to the docker host.
Be aware that any port opened in your docker container would be opened on the docker host. And this without requiring the -p or -P docker run option.
This does not seem to be best practice, since containers should be isolated from the host.
Other approaches I've found involve awking out the host's IP. Is this the way to go?
The best option in this case is to treat the host as a remote machine. That way the container stays portable and has no strict dependency on network locations when connecting to the database.
In addition to the drawbacks of --network=host mentioned above, that option tightly couples the container to the host by assuming the database is found on localhost.
The way to treat the machine as a remote one is to use standard network constructs such as IP and DNS: define a new host entry for the container that points to the host where the DB runs, using the --add-host option of docker run.
docker run --add-host db-static:<ip-address-of-host> ...
Then, inside the container, you connect to the database via the hostname db-static.
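With the PostgreSQL case from the question, that could look like the following (a sketch; the image name my-app and the connection details are placeholders, and on Docker 20.10+ the special value host-gateway can stand in for a hard-coded host IP):
# hypothetical: map db-static to the host (or substitute the host's real IP address)
docker run --add-host db-static:host-gateway -d my-app
# inside the container the application connects with something like
psql "host=db-static port=5432 user=app dbname=appdb"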

Docker Swarm mode - equivalent docker commands to docker run -it ubuntu

I am trying to deploy a mongodb cluster in docker swarm mode; all the mongod daemons are in the same overlay network.
I need to configure the mongodb cluster, and I am trying to find a command that works like docker run -it ubuntu in docker swarm mode so I can log in. Any ideas?
Or is there any other way to access the overlay network?
SOLUTION:
First ssh to the node that runs the container, then find the container ID using docker ps and simply use docker exec -it <container-id> bash to log on to the container.
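Put together, the steps might look like this (a sketch; mongo1 stands in for whatever the service is actually named in the stack):
# on a manager node: find out which node runs the task
docker service ps mongo1
# ssh to that node, then locate the container and open a shell in it
ssh <node-hostname>
docker exec -it $(docker ps -q --filter name=mongo1) bash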