DevSpace tool: how to deploy a dependency in --force-build mode - azure-devops

I use the DevSpace tool to deploy my services into a local minikube cluster.
I have two services to deploy: auth-handler and mysql.
auth-handler declares mysql as a dependency in devspace.yaml, so it can't start until mysql has been deployed.
auth-handler's devspace.yaml:
dependencies:
- source:
    path: ../mysql
  namespace: databases
mysql has an image stage, where the Dockerfile performs logic to initialize the DB with some data:
images:
  backend:
    image: registry.kube-system.svc.cluster.local/mysql
    tags:
      - local
    dockerfile: ./mysql/Dockerfile
The first time, this works fine. But when I redeploy the services a second time, the image stage for mysql is skipped, because DevSpace caches an image stage once it has been built successfully. So my DB isn't initialized this time, since the image stage was skipped.
I can deploy mysql manually with -b / --force-build to force the image stage to run, but I don't want to deploy mysql manually. I want to initiate the deployment of auth-handler and have it deploy the mysql dependency in -b / --force-build mode.

Instead of populating your database within the Dockerfile, I would recommend adding a hook in the hooks section of devspace.yaml which could run devspace enter -c [mysql] -- command-to-populate-db or, alternatively, adding an init container to populate the database. This will be a lot more flexible.
For more details on hooks, have a look at the DevSpace docs: https://devspace.sh/cli/docs/configuration/hooks/basics
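For illustration, a rough sketch of such a hook (assuming the v5-style hooks schema from the docs linked above; the seed-script path is hypothetical):

hooks:
- command: devspace
  # run the (hypothetical) seed script inside the deployed mysql container
  args: ["enter", "-c", "mysql", "--", "sh", "/scripts/populate-db.sh"]
  when:
    after:
      deployments: all

Since the hook runs after every deployment, the seed script itself should be idempotent (e.g. guard with a SELECT before inserting).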

Related

Docker containers gone after GitLab CI pipeline

I installed a GitLab runner with the docker executor on my Raspberry Pi. In my GitLab repository I have a docker-compose.yaml file which should run 2 containers, 1 for the application and 1 for the database. It works on my laptop. Then I built a simple pipeline with 2 stages, test and deploy. This is my deploy stage:
deploy-job:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker info
    - docker compose down
    - docker compose build
    - docker compose up -d
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
In the pipeline logs I can see that the network, volumes and containers get created and the containers are started. It then says:
Cleaning up project directory and file based variables 00:03
Job succeeded
When I SSH into my Raspberry Pi and run docker ps -a, none of the containers are displayed. It is as if nothing has happened.
I compared my setup to the one in this video https://www.youtube.com/watch?v=RV0845KmsNI&t=352s and my pipeline looks similar. The only difference I can find is that the video uses a shell executor for the GitLab runner.
There are some differences between the docker and the shell executor. When you use the docker executor, docker compose starts your application and database inside the container created to run the job; when the job finishes, that container is stopped by the GitLab runner, along with your application and database inside it. With the shell executor, on the other hand, all the commands of the job are executed directly in the system's shell, so when the job execution has finished, the containers of your application and database remain running on the system. One of the advantages of the docker executor is precisely that it isolates the job execution inside a docker container: when it finishes, the job container is stopped and the system where the GitLab runner is running is not affected at all (this may change if you have configured the runner to run docker as root).
So my suggested solution is to change the executor to shell (you will have to handle the security implications yourself).
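For illustration, re-registering the runner with the shell executor could look like this (the description is arbitrary; fill the token placeholder from your project's CI/CD settings):

sudo gitlab-runner register \
  --url https://gitlab.com/ \
  --registration-token <your-token> \
  --executor shell \
  --description "raspi-shell-runner"

With the shell executor, the script lines (docker compose up -d, etc.) run directly against the Pi's Docker daemon, so the containers outlive the job.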

How to confirm volumes configured correctly for docker service?

We have docker-compose.yml with multiple services configured.
In one of the docker services we have set volumes for the service. Example:
volumes:
  - ./src/main/resources/db/changelog:/init
We need to execute all the DB changelog scripts present in the changelog folder, but they are not being executed. Can someone pinpoint the issue? What is the use of :/init at the end of the folder path?

How to run docker-compose on Google Cloud Run?

I'm new to GCP, and I'm trying to deploy my Spring Boot web service using docker-compose.
In my docker-compose.yml file, I have 3 services: my app service, a mysql service and a Cassandra service.
Locally, it works like a charm. I also added a cloudbuild.yaml file:
steps:
  - name: 'docker/compose:1.28.2'
    args: ['up', '-d']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['tag', 'workspace_app:latest', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
The build succeeds on Google Cloud Build. But when I try to run the image on Google Cloud Run, it doesn't invoke docker-compose.
How should I proceed to use docker-compose in production?
With Cloud Run, you can deploy only one container image. The container can contain several binaries that you can run in parallel. But keep the following in mind:
- CPU is throttled when no requests are being processed. Background processes/apps aren't recommended on Cloud Run; prefer request/response apps (a web server).
- Only HTTP requests are supported by Cloud Run. TCP connections (such as a MySQL connection) aren't supported.
- Cloud Run is stateless. You can't persist data in it.
- All data is stored in memory (the /tmp directory is writable). Be careful not to exceed the total instance memory (your app footprint + the files you store in memory).
- Related to the previous point: when the instance is offloaded (you don't manage that; it's serverless), you lose everything you put in memory.
Thus, the MySQL and Cassandra services must be hosted elsewhere.
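For illustration, deploying just the app container to Cloud Run while pointing it at an externally hosted database might look like this (service name, environment variable and region are assumptions):

gcloud run deploy workspace-app \
  --image gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA \
  --region us-central1 \
  --set-env-vars MYSQL_HOST=<external-db-host>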
You can also bring things up with docker-compose directly:
docker-compose -f dirfile/cloudbuild.yaml up
Then, to check the images, run:
docker images
To check your containers:
docker container ls -a
And to check whether a container is running:
docker ps
Finally, I deployed my solution with docker-compose on a Google Compute Engine virtual machine instance.
First, we cloned our git repository onto the virtual machine instance.
Then, from the cloned repository, which of course contains the docker-compose.yml, the Dockerfile and the war file, we executed this command:
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:$PWD" \
-w="$PWD" \
docker/compose:1.29.1 up
And voilà, our solution is working in production with docker-compose.

Docker - Waiting for Mongo to load before running create indexes script

I have a script that creates text indexes in MongoDB. However, if I use it with my current Dockerfile (which in turn is called from my docker-compose file), it results in an error because Mongo hasn't fully loaded yet.
I've seen depends_on, however that appears to apply to an entire service. I've also come across https://github.com/ufoscout/docker-compose-wait/, however I'd like to avoid using it if there's something much simpler.
Has anyone solved this - waiting for Mongo to load before RUN?
My Dockerfile:
FROM mongo:4
COPY ./scripts /scripts
RUN mongo < scripts/create-text-indexes
And docker-compose snippet:
dashboard-mongo:
  build:
    context: ./infrastructure
    dockerfile: Dockerfile
  image: a-custom-mongo:SNAPSHOT
  ports:
    - "27017:27017"
  networks:
    default:
    cms:
      aliases:
        - dashboard-mongo
And the script I'm running looks like:
use customboard
print("---> Deleting existing indexes...");
db.test.dropIndexes();
print("---> Creating indexes...");
db.test.createIndex({
  "_id": "text",
  "code": "text"
});
Thanks.
RUN mongo < scripts/create-text-indexes will not work. Remember, the RUN instruction is for installation and configuration at build time, not for interacting with a database or starting a process; you should start the process in CMD or an entrypoint.
One way is to use the official image, or to clone the official repo and take advantage of the official Docker image's entrypoint. Why reinvent the wheel?
With the official image, all you need is:
FROM mongo:4
COPY ./scripts /docker-entrypoint-initdb.d
Once you build and start the container, the official image will start the DB process, wait until it is able to handle connections, and then run the scripts:
Initializing a fresh instance
When a container is started for the first time it will execute files with extensions .sh and .js that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. .js files will be executed by mongo using the database specified by the MONGO_INITDB_DATABASE variable, if it is present, or test otherwise. You may also switch databases within the .js script.
(from the docker mongo image documentation)
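One caveat about the asker's script (an observation, not from the answer): "use customboard" is interactive mongo-shell syntax and is not valid in a .js file executed this way. Two options, sketched below: set MONGO_INITDB_DATABASE on the compose service, or switch databases inside the script with getSiblingDB.

dashboard-mongo:
  build:
    context: ./infrastructure
    dockerfile: Dockerfile
  image: a-custom-mongo:SNAPSHOT
  environment:
    # .js init scripts run against this database instead of the default "test"
    MONGO_INITDB_DATABASE: customboard
  ports:
    - "27017:27017"

Alternatively, replace the "use customboard" line in the script with:

db = db.getSiblingDB("customboard");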

How to run a command once in Docker compose

So I'm working on a docker compose file to deploy my Go web server. My server uses mongo, so I added a data volume container and the mongo service in docker compose.
Then I wrote a Dockerfile in order to build my Go project, and finally run it.
However, there is another step that must be done. Once my project has been compiled, I have to run the following command:
./my-project -setup
This will add some necessary information to the database, and the information only needs to be added once.
I can't, however, add this step to the Dockerfile (in the build process), because mongo must already be started.
So, how can I achieve this? Even if I restart the server and then run docker-compose up again, I don't want this command to be executed again.
I think I'm missing some Docker understanding, because I don't actually understand everything about data volume containers (are they just stopped containers that mount a volume?).
Also, if I restart the server and then run docker-compose up, which commands will be run? Will it just start the same container that was stopped, with the given CMD?
In any case, here is my docker-compose.yml:
version: '2'
services:
  mongodata:
    image: mongo:latest
    volumes:
      - /data/db
    command: --break-mongo
  mongo:
    image: mongo:latest
    volumes_from:
      - mongodata
    ports:
      - "28001:27017"
    command: --smallfiles --rest --auth
  my_project:
    build: .
    ports:
      - "6060:8080"
    depends_on:
      - mongo
      - mongodata
    links:
      - mongo
And here is my Dockerfile to build my project image:
FROM golang
ADD . /go/src/my_project
RUN cd /go/src/my_project && go get
RUN go install my_project
RUN my_project -setup
ENTRYPOINT /go/bin/my_project
EXPOSE 8080
I suggest adding an entrypoint script to your container; in this entrypoint script, you can check if the database has been initialized and, if it isn't, perform the required steps.
As you noticed in your question, the order in which services/containers are started should not be taken for granted, so it's possible your application container starts before the database container; the script should take that into account.
As an example, have a look at the official WordPress image, which performs a one-time initialization of the database in its entrypoint script. The script attempts to connect to the database (and retries if the database cannot be contacted yet), and checks if initialization is needed: https://github.com/docker-library/wordpress/blob/df190dc9c5752fd09317d836bd2bdcd09ee379a5/apache/docker-entrypoint.sh#L146-L171
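A minimal sketch of such an entrypoint script, under stated assumptions (the marker-file approach, the /data path and the availability of nc in the image are illustrative, not from the answer; the mongo host name comes from the compose file):

#!/bin/sh
set -e

# Wait until the mongo service accepts connections
until nc -z mongo 27017; do
  echo "waiting for mongo..."
  sleep 1
done

# Run the one-time setup only if it hasn't run before; keep the marker
# file on a volume so it survives container restarts
if [ ! -f /data/.setup-done ]; then
  /go/bin/my_project -setup
  touch /data/.setup-done
fi

# Hand off to the real server process
exec /go/bin/my_project

To wire it in, drop the RUN my_project -setup line from the Dockerfile and replace the ENTRYPOINT with COPY docker-entrypoint.sh / and ENTRYPOINT ["/docker-entrypoint.sh"].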
NOTE
I notice you created a "data-only container" to attach your volume to. Since Docker 1.9, Docker has volume management, including named volumes. Because of this, you no longer need to use "data-only" containers.
You can remove the data-only container from your compose file, and change your mongo service to look something like this:
mongo:
  image: mongo:latest
  volumes:
    - mongodata:/data/db
  ports:
    - "28001:27017"
  command: --smallfiles --rest --auth

and declare the named volume at the top level of the compose file (the version '2' format requires this):

volumes:
  mongodata:
This should create a new volume named mongodata if it doesn't exist, or re-use the existing volume with that name. You can list all volumes using docker volume ls and remove a volume with docker volume rm <some-volume> if you no longer need it.
You could try to use the ONBUILD instruction:
The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.
Any build instruction can be registered as a trigger.
This is useful if you are building an image which will be used as a base to build other images, for example an application build environment or a daemon which may be customized with user-specific configuration.
For example, if your image is a reusable Python application builder, it will require application source code to be added in a particular directory, and it might require a build script to be called after that. You can’t just call ADD and RUN now, because you don’t yet have access to the application source code, and it will be different for each application build. You could simply provide application developers with a boilerplate Dockerfile to copy-paste into their application, but that is inefficient, error-prone and difficult to update because it mixes with application-specific code.
The solution is to use ONBUILD to register advance instructions to run later, during the next build stage.
Here’s how it works:
When it encounters an ONBUILD instruction, the builder adds a trigger to the metadata of the image being built. The instruction does not otherwise affect the current build.
At the end of the build, a list of all triggers is stored in the image manifest, under the key OnBuild. They can be inspected with the docker inspect command.
Later the image may be used as a base for a new build, using the FROM instruction. As part of processing the FROM instruction, the downstream builder looks for ONBUILD triggers, and executes them in the same order they were registered. If any of the triggers fail, the FROM instruction is aborted which in turn causes the build to fail. If all triggers succeed, the FROM instruction completes and the build continues as usual.
Triggers are cleared from the final image after being executed. In other words they are not inherited by “grand-children” builds.
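For concreteness, a small sketch of a base image that registers such triggers (the image split is illustrative; note this does not by itself solve the run-once-against-mongo problem, since ONBUILD triggers still run at build time):

# Base image: registers build steps that run during a downstream build,
# immediately after its FROM line
FROM golang
ONBUILD ADD . /go/src/my_project
ONBUILD RUN cd /go/src/my_project && go get && go install my_project

A downstream Dockerfile then needs only a FROM pointing at this base image, and the registered ADD/RUN steps execute in its context.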
In docker-compose you can define:
restart: "no"
to run the container only once, which is useful, for example, for db-migration containers. (Note the quotes around "no": unquoted, YAML parses it as a boolean.)
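A brief sketch of how that could look in the compose file from the question (the extra service and its wiring are assumptions; entrypoint is overridden because the image's shell-form ENTRYPOINT would otherwise ignore a command):

my_project_setup:
  build: .
  entrypoint: ["/go/bin/my_project", "-setup"]
  restart: "no"
  depends_on:
    - mongo

docker-compose up then runs the setup container once and does not restart it when it exits; a later docker-compose up would run it again, so the setup command should still be idempotent.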
Your application needs some initial state in order to work. That means you should:
1. Check whether the required state already exists.
2. Depending on the result of the first step, initialize the state or not.
You can write a program that checks the current database state (here I use a shell script, but it could be written in any other language):
RUN if ./check.sh; then my_project -setup; fi
In my case, if the script returns 0 (a success exit status), the setup command is called.
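A hedged sketch of what such a check script might look like (the marker collection, database and host names are assumptions for illustration; mongo must be reachable when this runs, so in practice the check belongs in an entrypoint or startup step rather than a build-time RUN):

#!/bin/sh
# check.sh - exits 0 (success) when setup has NOT been run yet, so that
# "if ./check.sh; then my_project -setup; fi" performs the setup exactly once
COUNT=$(mongo --host mongo --quiet --eval 'db.setup_markers.count()' my_project_db)
[ "$COUNT" -eq 0 ]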