I am new to Docker and just started playing around with it. I have the following setup for my app in production as of now:
Server machine 1 : running spring-boot microservices
Server machine 2 : running redis
Server machine 3 : running postgres
If I use Docker on server machine 1 and run all of the microservices as containers, and also run redis and postgres as containers on server machine 1, is this the correct thing to do? Or do I have to run Docker on all the server machines and run the containers separately?
Which is the best practice?
When first starting out I suggest doing it all on 1 machine. Your database containers can use volumes to save data to the machine itself. So when you need to switch to a different machine, because 1 machine is too slow, you can easily transfer your database data. When starting to use more than 1 machine to run Docker you probably want to use a deployment option like Kubernetes or Docker swarm. This will simplify the process of setting up your environments on different machines, because it will be done by Kubernetes.
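For instance, a minimal single-machine sketch (image tags, container names and volume names below are just illustrative) could start the two stateful services with named volumes, so their data survives container restarts and can later be moved to another machine:
docker volume create pgdata
docker volume create redisdata
# postgres stores its data in /var/lib/postgresql/data inside the container
docker run -d --name postgres -p 5432:5432 -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres:15
# redis writes its snapshots to /data inside the container
docker run -d --name redis -p 6379:6379 -v redisdata:/data redis:7
The Spring Boot containers can then reach both services through a user-defined Docker network or the published host ports.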
Also, when your application is getting a lot of traffic you might want to switch to Managed Databases, which are provided by services like GCP, AWS, DigitalOcean, etc. A managed database will scale automatically, get frequent updates and back up automatically. This will take a lot of burden off your shoulders. I personally use Managed Databases myself.
My suggestion for now: Use 1 machine, learn Kubernetes when your application gets more traffic. Look into managed databases (available for Redis and Postgres).
Related
I'm really new to MongoDB and I'm having a hard time trying to understand sharding.
So I have my PC, which is the host, and I have created two VMs using VirtualBox. On my PC (host) there is a DB with some data. My issue is which of those 3 components should be the Config Server, the Shard and the Query Router (mongos). Can somebody please help me by explaining this? (I have read the documentation and still haven't understood it completely.)
A sharded cluster needs to run at least three processes; a "decent" sharded cluster runs at least 10 processes (1 mongos router, 3 config servers, 2 x 3 shard servers). In production they run on different machines, but it is no problem to run them all on the same machine.
I would suggest these steps for learning:
Deploy a standalone MongoDB
Deploy a Replica Set, see Deploy a Replica Set or Deploy Replica Set With Keyfile Authentication
Deploy a Sharded Cluster, see Deploy a Sharded Cluster or Deploy Sharded Cluster with Keyfile Authentication
In the beginning it is overkill to use different machines. Use localhost for all of them.
You can run multiple mongod/mongos instances on one machine; just ensure they use different ports (only one can use the default port 27017), a different dbPath each, and preferably also different log files.
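For example, a minimal sharded cluster can live entirely on localhost as long as every process gets its own port, dbPath and log file. A rough sketch (ports, paths and replica set names are made up for illustration; --fork assumes Linux/macOS):
mkdir -p /data/cfg0 /data/sh0
# config server replica set with a single member (fine for learning)
mongod --configsvr --replSet cfg --port 27019 --dbpath /data/cfg0 --logpath /data/cfg0/mongod.log --fork
mongosh --port 27019 --eval 'rs.initiate({_id: "cfg", configsvr: true, members: [{_id: 0, host: "localhost:27019"}]})'
# one shard, also a single-member replica set
mongod --shardsvr --replSet sh0 --port 27018 --dbpath /data/sh0 --logpath /data/sh0/mongod.log --fork
mongosh --port 27018 --eval 'rs.initiate({_id: "sh0", members: [{_id: 0, host: "localhost:27018"}]})'
# query router on the default port, so clients connect as usual
mongos --configdb cfg/localhost:27019 --port 27017 --logpath /data/mongos.log --fork
mongosh --port 27017 --eval 'sh.addShard("sh0/localhost:27018")'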
You may run your first trial without authentication. Once you have that working, use authentication with a keyfile, and as a next step use authentication with x.509 certificates.
If you use Windows, then have a look at https://github.com/Wernfried/mongoDB-oneclick
This is a total beginner question with regards to Docker.
I have a basic swarm running on a single host as a testing environment. There are 11 different containers running, all communicating through the host (the literal machine I am now typing this on). Only 1 physical machine, 11 containers.
On my physical machine's localhost I have a MongoDB server running. I want to be able to communicate with this MongoDB server from within the containers in my swarm.
What do I have to configure to get this working? There is a lot of information regarding networking in Docker. I normally use:
docker run --net="host" --rm -ti <name_of_image>
and everything works fine. But as soon as I run a swarm (not a single container) I can't seem to figure out how to connect everything together so I can talk to my MongoDB server.
I realise this is probably a very basic question. I also appreciate that I probably need to read some more of the swarm networking docs to understand this, but I don't know which documentation to look at. There seem to be multiple different ways to network my containers and physical machine together.
Any information would be much appreciated, even if it's just a link to some docs you think would be enlightening.
Cheers.
I have a web application hosted on a server; it uses virtualenv to separate the dev and prod instances. Both instances share the same postgres database (all on the same server).
I am kind of new to Docker and I would like to replace the dev and prod instances with Docker containers, and link each to its own dev or prod postgres container (or achieve a similar effect, so that a code change in development will not affect the production database).
What is the best design for this scenario? Should I have the dev and prod containers mapped to different ports? Can I have 1 dockerfile for both dev and prod containers? How do I deal with 2 postgres containers?
Your requirement does not seem very complicated, so I think you can run 2 pairs of containers (each pair has one application container and one postgres container) to achieve this. The basic structure is described below:
devContainer---> pgsDBContainer:<port_number1> ---> dataVolume1
prodContainer---> pgsDBContainer:<port_number2> ---> dataVolume2
Each container pair has one dedicated port number and one dedicated volume. The port number is used by the dev or prod application to connect to the corresponding postgres database, which should be easy to understand. But the volume is another story.
Please read the Manage data in containers doc for container volumes. As you mentioned, "a code change in development will not affect production database", which means you should have two separate volumes for the postgres containers, so the data of the two databases will not get mixed up.
Can I have 1 dockerfile for both dev and prod containers?
Yes you can. Just as I mentioned, you should give each postgres container a different port and volume config when you start them with the docker run command. docker run has the --publish (-p) option and the --volume (-v) option for you to configure the port mapping and the volume location.
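A rough sketch of what that could look like (container names, host ports, passwords and the image tag are made up for illustration; dataVolume1 and dataVolume2 correspond to the volumes in the structure above):
docker volume create dataVolume1
docker volume create dataVolume2
# dev database, published on host port 5433
docker run -d --name pgsDBContainer-dev -p 5433:5432 -e POSTGRES_PASSWORD=devpass -v dataVolume1:/var/lib/postgresql/data postgres:15
# prod database, published on host port 5432
docker run -d --name pgsDBContainer-prod -p 5432:5432 -e POSTGRES_PASSWORD=prodpass -v dataVolume2:/var/lib/postgresql/data postgres:15
Both containers are started from the same image; only the published port and the attached volume differ.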
Just a reminder: when you run a database in a container, you may need to consider data persistence in the container environment to avoid data loss caused by the container being removed. Some discussions of container data persistence can be found here and there.
We need to use Jenkins to test some web apps that each need:
a database (postgres in our case)
a search service (ElasticSearch in our case, but only sometimes)
a cache server, such as redis
So far, we've just had these services running on the Jenkins master, but this causes problems when we want to upgrade Postgres, ES or Redis versions. Not all apps can move in lockstep, and we want to run the tests on new versions before committing to the move in production.
What we'd like to do is have these services provided on a per-job-run basis, each one running in its own container.
What's the best way to orchestrate these containers?
How do you start up these ancillary containers and tear them down, regardless of whether the job succeeds or not?
How do you prevent port collisions between, say, the database in a job run for one web app and the database in a job run for another web app?
Check docker-compose and write a docker-compose file for your tests.
The latest network features of Docker (private network) will help you to isolate builds running in parallel.
However, start learning docker-compose as if you only had one build at a time. When you are confident with that, look further into the advanced Docker documentation around networking.
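A rough sketch of what a per-job-run setup could look like (file name, service names, image tags and project name are made up for illustration; BUILD_NUMBER is the variable Jenkins sets for each build, and ElasticSearch could be added as another service for the apps that need it):
cat > docker-compose.test.yml <<'EOF'
version: "3.8"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: test
  redis:
    image: redis:7
EOF
# -p gives every build its own compose project, so container names,
# networks and volumes never collide between parallel jobs
docker-compose -f docker-compose.test.yml -p "test-${BUILD_NUMBER:-local}" up -d
# ... run the test suite here; a test container attached to the same
# project network reaches the services as db:5432 and redis:6379, so
# no host ports need to be published at all ...
docker-compose -f docker-compose.test.yml -p "test-${BUILD_NUMBER:-local}" down -v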
I just completed my Vagrant box for a product made by my company.
I needed that because we're running the same product on different operating systems. I want to serve sites from inside virtual machines, and I have some questions:
Am I on the correct path? Can a virtual machine be used as a production server?
If you say yes:
How should I keep VirtualBox running? Is there any script or something to restart it if something crashes?
What happens if somebody accidentally gives the "vagrant destroy" command? What should I do if I don't want to lose my database and user-uploaded files?
We have some import scripts that run at the beginning of every month. Sometimes they use 7 GB of RAM (running 1500 lines of MySQL code with lots of asynchronous instances). Can it be dangerous to run this inside VirtualBox?
Are there any case study blog posts about this?
Vagrant is mainly for development environments. I personally recommend using a Type 1 (bare metal) hypervisor; VirtualBox is a desktop virtualization tool (Type 2, running on top of a traditional OS) and is not recommended for production.
AWS is ok; the VMs run as Xen guests, and Xen is on bare metal ;-)
I wouldn't.
The problem w/ Vagrant + VirtualBox is that these are development instances. I would look at Amazon Web Services for actually deploying your project into the wild.