I have one VM host on a physical server, with many Docker containers running inside it.
Here is a fragment of my fig.yml:
pg:
  image: pg...
redis:
  image: redis...
mongodb:
  image: mongodb...
app:
  image: myapp...
I would like the pg container to use only 25% of the host CPU, the app container only 50%, and so on.
Can I do this with fig, or with docker run while managing the links by hand?
In my case, when one of these containers is running a costly task, it affects the CPU performance of the others. And when the same physical server hosts other VMs with similar deployments inside, the problem gets dramatically worse.
For now, Fig doesn't support setting CPU and memory limits. It may support them in the future.
I encourage you to experiment with docker run -m for a memory limit and docker run -c for CPU shares. These flags allow you to set memory and CPU values when starting a container. Read more about the flags you can use with docker run here:
https://docs.docker.com/reference/commandline/cli/#run
Note, though, that these values can only be set when you create a new container; once the container has been created, you cannot change them.
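As a rough sketch of what that could look like for the services above (the image names and the specific share/limit values are placeholders; -c sets a relative CPU weight, not a hard percentage):
docker run -d --name pg -c 256 -m 512m my-pg-image
docker run -d --name app -c 512 -m 1g my-app-image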
Related
I see that Kubernetes uses Pods, and that each Pod can contain multiple containers.
For example, I create a Pod with:
Container 1: Django server - running on port 8000
Container 2: Reactjs server - running on port 3000
Whereas, coming from a Docker background, in Docker I would do:
docker run --name django -d -p 8000:8000 some-django
docker run --name reactjs -d -p 3000:3000 some-reactjs
So is a Pod also like a PC with some Ubuntu OS on it?
No, a Pod is not like a PC/VM with Ubuntu on it.
There is no intermediate layer between your host and the containers in a pod. The only thing happening here is that the containers in a pod share some resources/namespaces in the host's kernel, and there are mechanisms in your host kernel to "protect" the containers from seeing other containers. Pods are just a mechanism to help you deploy a couple of containers that share some resources (like the network namespace) a little more easily. Fundamentally, they are just Linux processes running directly on the host.
(One nuanced technicality/caveat on the above statement: Docker and tools like it will sometimes run their own VM and may try to make that invisible to you. For example, Docker Desktop does this. Usually you can ignore this layer, but it is good to know it is there. The answer holds, though: that one single VM hosts all of your pods/containers; there is not one VM per pod.)
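As a rough sketch, the two containers from the question could be declared in a single Pod like this (the Pod name and images are placeholders); because they share the Pod's network namespace, the two servers can reach each other on localhost:
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: django
      image: some-django
      ports:
        - containerPort: 8000
    - name: reactjs
      image: some-reactjs
      ports:
        - containerPort: 3000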
I'm using docker-compose to run my database container, which has a fairly small memory limit (set with the mem_limit setting).
Will the Docker host's disk cache (which has no memory limit) be used by the postgres container, or should I make sure the container has enough free memory for disk caching?
I'm using a Debian 4.9.246-2 Linux host.
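For reference, a minimal sketch of the kind of service definition described above (the image tag and the limit value are assumptions):
db:
  image: postgres
  mem_limit: 256m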
I stop/start Docker quite often when I am releasing new features in my application.
docker-compose up -d
docker-compose stop
I am using pretty much the bare bones postgres docker setup (see below).
I am mapping the /data folder to my host.
Is there anything I should be worried about if I stop/start docker many times in a day in terms of data getting corrupted?
Is calling docker-compose stop the best way to be stopping my postgres instance?
My postgres service in my docker-compose looks like this:
db:
  image: postgres:9.4
  volumes:
    - "/home/deploy/data/pgdata:/var/lib/postgresql/data"
  restart: always
This setup currently is running smoothly in development, but once it goes to production I want to make sure I am following best practices etc.
Use
docker-compose down -v
What it does is basically remove all the volumes you added. If you don't, those volumes will hang around and eat up your disk space. It only removes the volumes Docker manages for the container; a volume mapped to a directory on your host stays and survives container removal, in case you want that data to outlive the container.
Whenever you create a container with docker run, Docker creates a volume/directory to keep the details about that container. After you execute docker run, if you look into /var/lib/docker/containers you will see one directory for each container you started. If you have not removed the volumes for previous containers, you will see many directories under the containers directory, each with a very long name made of random letters and numbers. If you don't tell Docker to remove these directories when you take the container down, they will stay there forever. The -v option mentioned above deletes these directories when you take down the container.
Keep in mind that you can view the contents of /var/lib/docker only as the root user. To switch to the root user, run sudo -i before you attempt to view the contents of the directory.
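A quick way to see the per-container directories referred to above (the path is as on a default Linux install):
sudo -i
ls /var/lib/docker/containers
# one directory per container, named with a long hexadecimal ID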
Databases in particular are usually designed so that it's very hard to lose data, even if the machine loses power in the middle of writing something to disk. (This comes at some performance cost.) So long as you don't have more than one PostgreSQL instance at a time using the same backing data store, I'd expect it to not lose data or otherwise corrupt itself; the worst you should expect to see is a message at startup that it's recovering from a write-ahead log or something along those lines.
docker stop will send a signal to a container that prompts it to shut down cleanly, and PostgreSQL will take this as a cue to shut down. It looks like docker-compose stop, docker-compose down, and sending ^C to docker-compose up all use the same mechanism. So the way you're doing it now should result in a clean shutdown (provided PostgreSQL finishes its cleanup within 10 seconds).
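If the default 10-second grace period ever turns out to be too short for PostgreSQL's cleanup, the timeout can be raised when stopping; the 60-second value below is just an example:
docker-compose stop -t 60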
I believe you can docker-compose restart specific services, or docker-compose up --force-recreate them. This would help if you rebuilt your application container and needed to restart that, but not its database.
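For example, assuming the application service is named app (the name is an assumption), only that service can be recreated while the db service keeps running:
docker-compose up -d --no-deps --force-recreate app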
I've been trying to deploy Postgres within Docker for portability reasons, and noticed that query performance as measured by EXPLAIN ANALYZE is painfully slow compared to bare metal.
For a table with 1.7 million rows, a query on bare-metal Postgres takes about 1.2 sec vs 4.8 sec on Dockerized Postgres, an increase of 4 times! The comparison was done with the same mounted volume for both bare metal and Docker (for Docker, I'm using the -v option). The volume is a 60 GB gp2 volume, mounted through the AWS console.
A couple of things I tried:
Increasing the shared memory buffers option in postgresql.conf, which had a negligible effect
Several volume mapping options (delegated, cached, consistent)
Upgrading Docker from 17.06-ce to 17.12-ce
This is all done on an Amazon Linux 2 instance. At this point I'm hoping to get more suggestions on what to do to improve performance.
The docker run command I use:
docker run -p 5432:5432 --name postgres -v /vol/pgsql/10.0/data:/var/lib/postgresql/data postgres:latest
My question aims at verifying, and maybe rectifying, my idea of the reliability of Docker containers. I have read both the Docker documentation and several articles on VOLUME in the Dockerfile and on -v as an argument when running a container, as means to persist data outside a Docker container, be it in a data container or on the host system. As I would like to keep my setup simple, I would prefer not to copy/save/store data round about, but to keep it in the Docker container itself.
There are several cases through which I discovered the behaviour of Docker containers. I'd like to know if I missed a scenario where a container can be 100% lost unintentionally, i.e. NOT by doing $ docker rm -f mycontainer:
docker commands to pause, stop and kill a container
-> restartable by $ docker restart mycontainer or $ docker run mycontainer
Host system reboot
-> docker container exits with 0 or 255
Host system unexpected power off
-> What happens?
Application exception
-> docker container exits with -1
Updating or restarting docker (as pointed out by Greg)
-> expected behavior: like on system reboot (?)
In all those cases, the docker container still exists in the end. So is there any other scenario that can cause a docker container to be lost, as with $ docker rm -f mycontainer?
The background is that I have read a lot about mounted volumes and external data storage on the host system for Postgres, but I'd like to avoid storing data outside my containers on the host system if possible. On the other hand, I don't want to wake up and find all my data lost. (I do perform regular SQL dumps, but I don't want to do this every 5 minutes.) If a docker container itself is not reliable for persistent data, I don't see why I should create a second container to hold the data for the first one, increasing the complexity of my system by adding a new container while not gaining anything in terms of reliability.
Edit: There are two points in the Docker user guide on Volumes which do not explicitly explain what behaviour to expect, and which therefore make me question whether these concepts provide extra reliability:
Changes to a data volume will not be included when you update an image
-> Does that mean that they get lost or that the content of the volume won't be changed?
Volumes persist until no containers use them
-> What's the definition of 'use'? As long as a container is not stopped, killed, removed? Does that mean that the volume Docker created on the host system will get removed? Or does volume only refer to a virtual bridge between a directory inside Docker and one on the host system?
If you store all your data in the container, what are you going to do when you need to update the image? Updates to images are normally done by changing the Dockerfile and rebuilding the image. If my data is kept separate from my container, I can start a new version of the image, mount the data with --volumes-from or -v, and kill the old container. In your case, you have to keep the container running and try to patch it in place with something like Puppet.
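A sketch of that upgrade flow with a host-mounted data directory (the path, container names and image tags are placeholders):
docker stop db-old
docker run -d --name db-new -v /srv/pgdata:/var/lib/postgresql/data postgres:9.5
docker rm db-old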
Also, I'm not sure what you think you're saving. If you run the official postgres image, it will have declared volumes in its Dockerfile. Those volumes exist as normal directories on your host system whether you ran the container with -v or not. Even if your Dockerfile has no volumes, the union filesystem (UFS) is clearly being stored on your host anyway.
In general, you should consider containers to be temporary and stateless. Whilst you don't have to do this, you will find most of the tooling and support services are designed around this idiom.
Regarding your scenarios, there are a few you're missing:
A bug could make it impossible to restart a stopped container
The updating issue mentioned above
If you want to change the storage driver. This will cause a great deal of problems, as you need to migrate your images.
Just for clarity on the commands, docker start will restart stopped or exited containers and docker unpause will unpause paused containers.