Communicating between swarm containers and host machine - MongoDB

This is a total beginner question regarding Docker.
I have a basic swarm running on a single host as a testing environment. There are 11 different containers running, all communicating through the host (the literal machine I am now typing this on). Only 1 physical machine, 11 containers.
On my physical machine's localhost I have a MongoDB server running. I want to be able to communicate with this MongoDB server from within the containers in my swarm.
What do I have to configure to get this working? There is a lot of information about networking in Docker. I normally use:
docker run --net="host" --rm -ti <name_of_image>
and everything works fine. But as soon as I run a swarm (not a single container), I can't figure out how to connect everything together so I can talk to my MongoDB server.
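For example, I'd guess something like the following might work if I point the service at my machine's LAN IP (the 192.168.1.10 address and the names below are just placeholders I made up), but I'm not sure it's the right approach:

# hypothetical: pass the host's IP to the service instead of localhost
docker service create --name my_service \
  -e MONGO_URL="mongodb://192.168.1.10:27017/mydb" \
  <name_of_image>

since localhost inside a swarm container refers to the container itself, not to my physical machine.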
I realise this is probably a very basic question. I also appreciate that I probably need to read more of the swarm networking docs to understand this, but I don't know which documentation to look at. There seem to be multiple different ways to network my containers and physical machine together.
Any information would be much appreciated, even if it's just a link to some docs you think would be enlightening.
Cheers.

Related

DOCKER environment in production

I am new to Docker and just started playing around with it. I have the following setup of my app in production as of now:
Server machine 1 : running spring-boot microservices
Server machine 2 : running redis
Server machine 3 : running postgres
If I use Docker on server machine 1 and run all of the microservices as containers, and also run Redis and Postgres as containers on server machine 1, is this the correct thing to do? Or do I have to run Docker on all the server machines and run the containers separately?
Which is the best practice?
When first starting out I suggest doing it all on 1 machine. Your database containers can use volumes to save data on the machine itself, so when you need to switch to a different machine (because 1 machine is too slow) you can easily transfer your database data. When you start using more than 1 machine to run Docker, you probably want to use a deployment option like Kubernetes or Docker swarm. This will simplify the process of setting up your environments on different machines, because the orchestrator handles it for you.
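For example, a minimal sketch of the volume idea (the names, image tags, and password here are placeholders, not a prescription):

# named volumes keep the data on the host, outside the container's lifecycle
docker run -d --name db -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data postgres:15
docker run -d --name cache -v redisdata:/data redis:7

The pgdata and redisdata volumes survive on the machine even if the containers are removed and recreated.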
Also, when your application is getting a lot of traffic, you might want to switch to managed databases, which are provided by services like GCP, AWS, DigitalOcean, etc. A managed database scales automatically, gets updates frequently, and backs itself up automatically. This takes a lot of the burden off your shoulders. I use managed databases myself.
My suggestion for now: use 1 machine, and learn Kubernetes when your application gets more traffic. Look into managed databases (available for both Redis and Postgres).

How to test autodiscovery with just one computer?

I'm trying to write a Kotlin server with auto discovery, however I only have one computer to develop on. My server uses a port, and I just can't figure out how to test my application successfully. Thanks for your help!
JVMs
You may be able to run copies of the app in different JVMs, but would have to run them on different ports.
VMs
This may be slow, but it is an option.
Docker
Using Docker (and optionally Compose) you can run multiple copies of the app on the same port, with less overhead than using full VMs.
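For example, a rough sketch (the image and container names are placeholders): each container gets its own network namespace, so every copy can listen on the same port without clashing.

# three copies of the same server image on one user-defined network
docker network create disco
docker run -d --name node1 --network disco my-kotlin-server
docker run -d --name node2 --network disco my-kotlin-server
docker run -d --name node3 --network disco my-kotlin-server

Only publish a port with -p for the copies you need to reach from the host; inside the disco network the nodes can discover and talk to each other directly.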

About docker swarm mode with filters, strategy, affinity, etc

I've started using Docker swarm mode, and I couldn't find reliable information about a lot of the things covered in traditional swarm. Does anyone know about the following?
What kinds of filters are available? Traditional swarm used to have constraint, health, and containerslots, but I'm not sure how to set, change, or use those filters when creating services. I got a constraint label working by passing "--constraint node.labels.FOO==BAR" to docker service create, but I'm not sure about the other filters.
How do you set affinity, dependency, and port? Passing "-e" doesn't seem to be working.
Is there any way to set a strategy?
Not specific to swarm, but is there any way to check how much CPU or memory is reserved by containers? I couldn't find relevant information in docker info.
This question is also not specific to swarm. Is there any way to limit disk and network bandwidth?
I'm referring to this => https://docs.docker.com/swarm/scheduler/filter/ but I can't find an equivalent for swarm mode.
Someone should seriously be working on improving the swarm mode documentation...
Questions 1, 2 and 3 can be answered by the following link, I believe:
https://docs.docker.com/engine/swarm/manage-nodes/
For the 4th question:
You can run docker inspect on the containers to see the CPU and memory reservations. By default, Docker doesn't assign memory or CPU limits; a container will try to consume whatever is available on the host. If you have set limits, then you can see them through docker inspect.
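For example, a sketch of setting reservations and limits and then reading them back (the service name "web" and the nginx image are placeholders):

# reservations/limits are set per service in swarm mode
docker service create --name web \
  --reserve-cpu 0.5 --reserve-memory 256m \
  --limit-cpu 1 --limit-memory 512m \
  nginx
# read them back from the service spec...
docker service inspect --format '{{json .Spec.TaskTemplate.Resources}}' web
# ...or from an individual container (0 means no limit set)
docker inspect --format '{{.HostConfig.Memory}}' <container_id>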

Which way to run PostgreSQL in Docker?

Which of these methods is correct?
One db container for each app
One db container for all apps
Install db without docker
I tried to find information, but found nothing. Or did I search badly?
It is immature, but that doesn't seem to be stopping a lot of people from using Docker for persistence.
The official Postgres image has 4.5 million pulls. OK, this doesn't mean that all those images/containers are being used, but it does suggest that it is a popular solution.
If you have already decided that you would like to use Docker, because of what containers can offer your architecture, then I don't think you will have trouble using it for persistence - assuming you are happy learning Docker.
I'm using Postgres and MySQL in several projects quite successfully on Docker.
In choosing between options 1 and 2, I would say that unless your apps are related to the same problem domain/company/project, I would go with option 1. Of course, running costs will possibly be a factor as well.
I generally go with option 1.
All 3 options could be valid, but it depends on your use case.
On my server I have 1 container for each of the main PostgreSQL releases I currently use.
I run all of them on different ports (not random port numbers, but ones that are easy to remember, because one problem with Docker is remembering all the port numbers and other details for every container):
pg84 (port 8432), pg93 (port 9332), pg94 (port 9432)
I link the pgXX container to whichever container needs it, and that's perfect for me (see the sketch below).
So from my experience, I prefer option 2.
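For illustration, a sketch of that per-release setup (the volume names are placeholders and I'm assuming the official postgres image tags; each memorable host port maps to the standard 5432 inside its container):

# one container per PostgreSQL release, each on its own host port
docker run -d --name pg93 -p 9332:5432 \
  -v pg93data:/var/lib/postgresql/data postgres:9.3
docker run -d --name pg94 -p 9432:5432 \
  -v pg94data:/var/lib/postgresql/data postgres:9.4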

Docker: Running multiple applications VS running multiple containers

I am trying to run Wildfly, Jenkins and PostgreSQL in Docker containers.
As far as I could understand from articles I've read, the Docker way is to have each application run in a different container.
Is my assumption correct or is it better to have only one container containing these three applications?
AFAIK the basic philosophy behind Docker is to run one service per container. You could run the whole application inside a single container, but I don't think that would go well with the way Docker works. Running different services in different containers gives you more flexibility and better modularity for your app.
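For example, a minimal sketch of the one-service-per-container approach for that stack (the image names, ports, and password are assumptions based on common public images):

# one container per service, all on a shared user-defined network
docker network create appnet
docker run -d --name db --network appnet -e POSTGRES_PASSWORD=secret postgres
docker run -d --name jenkins --network appnet -p 8081:8080 jenkins/jenkins
docker run -d --name wildfly --network appnet -p 8080:8080 jboss/wildfly

Each service can then be restarted, upgraded, or scaled independently of the others.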