How to confirm volumes are configured correctly for a docker service? - docker-compose

We have a docker-compose.yml with multiple services configured.
For one of the docker services we have set a volume. Example:
volumes:
  - ./src/main/resources/db/changelog:/init
We need to execute all the DB changelog scripts present in the changelog folder, but they are not executing. Can someone pinpoint the issue? What is the use of :/init at the end of the folder path?
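The part after the colon is the container-side path: the syntax is host_path:container_path, so the contents of ./src/main/resources/db/changelog appear inside the container at /init. To confirm a volume is mounted the way you expect, you can render the resolved compose configuration and inspect the running container (the container name below is a placeholder):

# show the fully resolved compose file, including volume mappings
docker compose config

# list the mounts of a running container
docker inspect -f '{{ json .Mounts }}' <container_name>

Note that mounting scripts does not execute them by itself; official database images, for example, only auto-run scripts placed under /docker-entrypoint-initdb.d, so whatever is supposed to read /init must be wired up explicitly.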

Related

Docker-compose does not see the new volume added in `docker-compose.yml`

I have a docker-compose.yml from which I started a couple of services. I added a new volume mapping to one of the services and then tried to restart the container with
docker compose restart <service_name>
but the volume is still not mapped and not available from within the container.
What is the right way to add a volume to a service defined with docker compose?
OK, so it turns out that restart just restarts the existing container but changes nothing in the parameters with which it was created.
In order to have compose take into account volume mapping changes in the docker-compose.yml file, one has to run:
docker compose up --build <service_name>
There might be other solutions, but this is what I ended up doing.
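If the image itself hasn't changed, rebuilding isn't strictly necessary; it is the recreation of the container that picks up the new mapping. A sketch, with placeholder names:

# recreate the container so the edited volume mapping takes effect
docker compose up -d --force-recreate <service_name>

# confirm the new mount is present
docker inspect -f '{{ json .Mounts }}' <container_name>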

How to run docker-compose on google cloud run?

I'm new to GCP, and I'm trying to deploy my Spring Boot web service using docker-compose.
In my docker-compose.yml file, I have 3 services: my app service, a MySQL service and a Cassandra service.
Locally, it works like a charm. I also added a cloudbuild.yaml file:
steps:
- name: 'docker/compose:1.28.2'
  args: ['up', '-d']
- name: 'gcr.io/cloud-builders/docker'
  args: ['tag', 'workspace_app:latest', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
The build succeeds on Google Cloud Build. But when I try to run the image on Google Cloud Run, it doesn't run docker-compose.
How should I proceed to use docker-compose in production?
With Cloud Run, you can deploy only one container image. The container can contain several binaries that you can run in parallel. But keep this in mind:
CPU is throttled when no requests are being processed. Background processes/apps aren't recommended on Cloud Run; prefer request/response apps (a web server).
Only HTTP requests are supported by Cloud Run. TCP connections (such as MySQL connections) aren't supported.
Cloud Run is stateless. You can't persist data in it.
All data is stored in memory (the /tmp directory is writable). Be careful not to exceed the total instance memory with your app footprint plus the files you store in memory.
Related to the previous point: when the instance is offloaded (you don't manage that; it's serverless), you lose everything you put in memory.
Thus, the MySQL and Cassandra services must be hosted elsewhere.
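Once the databases are hosted elsewhere, only the app image needs to be deployed. A sketch of that step, assuming the image tagged in the cloudbuild.yaml above (the service name and region are placeholders):

# deploy just the Spring Boot container to Cloud Run
gcloud run deploy my-app \
  --image gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA \
  --platform managed \
  --region us-central1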
docker-compose -f dirfile/cloudbuild.yaml up
To check the resulting images, run:
docker images
To check your containers:
docker container ls -a
And to check whether a container is running:
docker ps
Finally, I deployed my solution with docker-compose on a Google Compute Engine virtual machine instance.
First, we cloned our git repository onto the virtual machine instance.
Then, in the cloned repository, which of course contains the docker-compose.yml, the Dockerfile and the WAR file, we executed this command:
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:$PWD" \
-w="$PWD" \
docker/compose:1.29.1 up
And voilà, our solution is working in production with docker-compose.

How to mount "/" in kubernetes

Saving data in Kubernetes is not persistent, so we should use a volume.
For example, we can mount "/apt" to save data in "apt".
Now I want to mount "/" but I get this error:
Error: Error response from daemon: invalid volume specification:
'/var/lib/kubelet/pods/26c39eeb-85d7-11e9-933c-7c8bca006fec/volumes/kubernetes.io~rbd/pvc-d66d9039-853d-11e9-8aa3-7c8bca006fec:/':
invalid mount config for type "bind": invalid specification:
destination can't be '/'
The question is: how can I mount "/" in Kubernetes?
Not completely sure about your environment, but I ran into this issue today because I wanted to be able to browse the entire root filesystem of a container via SSH (WinSCP) to the host. I am using Docker in a Photon OS VM environment.
The answer I've come to is: you can't do what you're trying to do, but you may be able to accomplish what you're trying to accomplish.
Let's say I created a volume called mysql and I create a new (oversimplified) mysql container using that volume as root:
docker volume create --name mysql
docker run -d --name=mysqldb -v /var/lib/docker/volumes/mysql:/ mysql:5.7
Docker will cry and say I can't mount to root (destination can't be '/').
However, since we know the location where volumes live (/var/lib/docker/volumes/), we can simply create our container as normal and an arbitrarily-named volume will be placed in that folder.
So if your goal is (as mine was) to be able to SSH to the host and browse the files in the root of your container, you CAN do that; you just need to go to the correct arbitrarily-named volume.
In my case it is "/var/lib/docker/volumes/12dccb66f2eeaeefe8e1feabb86f3c6def87b091dabeccad2902851caa97f04c/_data", which isn't as pretty as "/var/lib/docker/volumes/mysql", but it gets the job done.
Hope that helps someone.
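If the underlying goal is persistence rather than literally replacing the container's root filesystem, mounting the volume at a subdirectory (as with "/apt" above) is the supported route. A minimal sketch, with placeholder names for the pod and the PersistentVolumeClaim:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: mysql:5.7
    volumeMounts:
    - name: data
      mountPath: /apt   # any path except "/" is accepted
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc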

How to use docker with mongo to achieve replication and with opening authentication

I want to use docker to run a MongoDB container, and at the same time have mongo use my own configuration file so that I can achieve replication and enable authentication.
I scanned some files but didn't resolve the problem.
Any ideas?
The docker mongo image has a docker-entrypoint.sh which it calls in its Dockerfile.
Check if you can:
create your own image which creates the right user and restarts mongo with authentication enabled: see "umputun/mongo-auth" and its init.sh script,
or mount a createUser.js script into docker-entrypoint-initdb.d.
See "how to make a mongo docker container with auth"

How to run a command once in Docker compose

So I'm working on a docker compose file to deploy my Go web server. My server uses mongo, so I added a data volume container and the mongo service in docker compose.
Then I wrote a Dockerfile in order to build my Go project, and finally run it.
However, there is another step that must be done. Once my project has been compiled, I have to run the following command:
./my-project -setup
This will add some necessary information to the database, and the information only needs to be added once.
I can't, however, add this step to the Dockerfile (in the build process) because mongo must already be started.
So, how can I achieve this? Even if I restart the server and then run docker-compose up again, I don't want this command to be executed again.
I think I'm missing some Docker understanding, because I don't actually understand everything about data volume containers (are they just stopped containers that mount a volume?).
Also, if I restart the server, and then run docker-compose up, which commands will be run? Will it just start the same container that was now stopped with the given CMD?
In any case, here is my docker-compose.yml:
version: '2'
services:
  mongodata:
    image: mongo:latest
    volumes:
      - /data/db
    command: --break-mongo
  mongo:
    image: mongo:latest
    volumes_from:
      - mongodata
    ports:
      - "28001:27017"
    command: --smallfiles --rest --auth
  my_project:
    build: .
    ports:
      - "6060:8080"
    depends_on:
      - mongo
      - mongodata
    links:
      - mongo
And here is my Dockerfile to build my project image:
FROM golang
ADD . /go/src/my_project
RUN cd /go/src/my_project && go get
RUN go install my_project
RUN my_project -setup
ENTRYPOINT /go/bin/my_project
EXPOSE 8080
I suggest to add an entrypoint-script to your container; in this entrypoint-script, you can check if the database has been initialized, and if it isn't, perform the required steps.
As you noticed in your question, the order in which services / containers are started should not be taken for granted, so it's possible your application container is started before the database container, so the script should take that into account.
As an example, have a look at the official WordPress image, which performs a one-time initialization of the database in its entrypoint-script. The script attempts to connect to the database (and retries if the database cannot be contacted yet), and checks if initialization is needed: https://github.com/docker-library/wordpress/blob/df190dc9c5752fd09317d836bd2bdcd09ee379a5/apache/docker-entrypoint.sh#L146-L171
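A minimal sketch of such an entrypoint script for this setup (the marker-file location, the netcat dependency, and the "mongo" host name, which is the compose service name, are assumptions):

#!/bin/sh
set -e

# wait until mongo accepts connections
until nc -z mongo 27017; do
  echo "waiting for mongo..."
  sleep 1
done

# run the one-time setup, guarded by a marker file on a persistent volume
if [ ! -f /data/.setup-done ]; then
  /go/bin/my_project -setup
  touch /data/.setup-done
fi

# hand over to the real process
exec /go/bin/my_project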
NOTE
I notice you created a "data-only container" to attach your volume to. Since docker 1.9, docker has volume management, including naming volumes. Because of this, you no longer need to use "data-only" containers.
You can remove the data-only container from your compose file, and change your mongo service to look something like this;
mongo:
  image: mongo:latest
  volumes:
    - mongodata:/data/db
  ports:
    - "28001:27017"
  command: --smallfiles --rest --auth
Note that in a version '2' compose file the named volume must also be declared in a top-level volumes: section:
volumes:
  mongodata:
This should create a new volume named mongodata if it doesn't exist, or re-use the existing volume with that name. You can list all volumes using docker volume ls and remove a volume with docker volume rm <some-volume> if you no longer need it.
You could try to use ONBUILD instruction:
The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.
Any build instruction can be registered as a trigger.
This is useful if you are building an image which will be used as a base to build other images, for example an application build environment or a daemon which may be customized with user-specific configuration.
For example, if your image is a reusable Python application builder, it will require application source code to be added in a particular directory, and it might require a build script to be called after that. You can’t just call ADD and RUN now, because you don’t yet have access to the application source code, and it will be different for each application build. You could simply provide application developers with a boilerplate Dockerfile to copy-paste into their application, but that is inefficient, error-prone and difficult to update because it mixes with application-specific code.
The solution is to use ONBUILD to register advance instructions to run later, during the next build stage.
Here’s how it works:
When it encounters an ONBUILD instruction, the builder adds a trigger to the metadata of the image being built. The instruction does not otherwise affect the current build.
At the end of the build, a list of all triggers is stored in the image manifest, under the key OnBuild. They can be inspected with the docker inspect command.
Later the image may be used as a base for a new build, using the FROM instruction. As part of processing the FROM instruction, the downstream builder looks for ONBUILD triggers, and executes them in the same order they were registered. If any of the triggers fail, the FROM instruction is aborted which in turn causes the build to fail. If all triggers succeed, the FROM instruction completes and the build continues as usual.
Triggers are cleared from the final image after being executed. In other words they are not inherited by “grand-children” builds.
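As an illustrative sketch (the image names and paths are hypothetical), a reusable builder image could register triggers like this:

# Dockerfile for a hypothetical reusable Go builder image
FROM golang:1.21
# these two instructions run later, in the downstream build's context
ONBUILD COPY . /go/src/app
ONBUILD RUN cd /go/src/app && go build -o /usr/local/bin/app .

A downstream project then only needs FROM that builder image, and the COPY and RUN triggers fire as if they were written immediately after its FROM line.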
In docker-compose you can define:
restart: "no"
(quoting "no" matters: a bare no is parsed as a YAML boolean) to run the container only once, which is useful for example for db-migration containers.
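A sketch of how that could look for the setup step from this question (the extra one-shot service is an assumption):

services:
  setup:
    build: .
    command: ./my-project -setup
    restart: "no"
    depends_on:
      - mongo

Keep in mind that the restart policy only prevents automatic restarts after the container exits; a fresh docker-compose up that recreates the container will run the command again, so an idempotence guard such as the marker file shown earlier is still useful.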
Your application needs some initial state to work. That means you should:
Check whether the required state already exists.
Depending on the result of the first step, initialize the state or not.
You can write a program that checks the current database state (here I'll use a bash script, but it could be a program in any other language):
RUN if ./check.sh; then my_project -setup; fi
In my case, if the script returns 0 (a success exit status), the setup command is called.
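A minimal sketch of such a check script (the "mongo" host name and the marker database/collection are assumptions; it relies on the legacy mongo shell):

#!/bin/bash
# check.sh: exit 0 (success) when setup has NOT been performed yet,
# i.e. when the marker collection is still empty
count=$(mongo --quiet --host mongo mydb --eval 'db.setup_flags.count()')
[ "$count" -eq 0 ]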