Docker containers gone after GitLab CI pipeline - docker-compose

I installed a GitLab runner with the Docker executor on my Raspberry Pi. In my GitLab repository I have a docker-compose.yaml file which starts 2 containers, 1 for the application and 1 for the database. It works on my laptop. I then built a simple pipeline with 2 stages, test and deploy. This is my deploy stage:
deploy-job:
  stage: deploy
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker info
    - docker compose down
    - docker compose build
    - docker compose up -d
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
In the pipeline logs I can see that the network, volumes and containers get created and the containers are started. It then says:
Cleaning up project directory and file based variables 00:03
Job succeeded
When I ssh into my Raspberry Pi and run docker ps -a, none of the containers are displayed. It is as if nothing has happened.
I compared my setup to the one in this video https://www.youtube.com/watch?v=RV0845KmsNI&t=352s and my pipeline looks similar. The only difference I can spot is that the video uses a shell executor for the GitLab runner.

There are some differences between using the docker and the shell executor. With the docker executor, docker-compose starts your application and database inside the container created to run the job; when the job finishes, the GitLab runner stops that container, and your application and database stop with it. With the shell executor, on the other hand, all the job's commands are executed directly in the system's shell, so when the job has finished the application and database containers remain running on the system.
One of the advantages of the docker executor is precisely that it isolates the job execution inside a Docker container: when the job finishes, the job container is stopped and the system where the GitLab runner is running is not affected at all (this may change if you have configured the runner to run Docker as root).
So my suggested solution is to change the executor to shell (you will have to handle the security implications yourself), as sketched below.
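For reference, a minimal sketch of the same deploy job for a shell executor could look like this. It assumes the runner on the Raspberry Pi is registered with executor = "shell" and that the gitlab-runner user is allowed to talk to the Docker daemon (for example by being in the docker group); the image, services and DinD variables are no longer needed because the commands run directly on the host:
deploy-job:
  stage: deploy
  # No image/services/DOCKER_* variables: with the shell executor the
  # script below runs directly in the Raspberry Pi's shell.
  script:
    - docker info
    - docker compose down
    - docker compose build
    - docker compose up -d
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
Since nothing is torn down when the job ends, the containers started by docker compose up -d keep running on the Pi after the pipeline finishes.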

Related

Keep containers running after build

We are using the Docker Compose TeamCity build runner and would like the containers to continue running after the build.
I understand that the docker-compose build step itself follows a successful 'up' with a 'down', so I have attempted to bring them back up in a subsequent command line step with simply:
docker-compose up -d
I can see from the log that this is initially successful but when the build process exits, so do the containers. I have also tried:
nohup docker-compose up -d &
The outcome is the same.
How do we keep the containers running when the build has finished?
For info, the environment is both TeamCity and its BuildAgent running on the same Ubuntu box.
I have achieved this by NOT using the Docker Compose build runner. I now just have a single command line build step doing:
docker-compose down
docker-compose up -d
This works, and I feel rather silly ;-)

Docker compose equivalent of `docker run --gpus=all` option

To automate the configuration (docker run arguments) used to launch a docker container, I am writing a docker-compose.yml file.
My container should have access to the GPU, and so I currently use the docker run --gpus=all parameter. This is described in the Expose GPUs for use docs:
Include the --gpus flag when you start a container to access GPU resources. Specify how many GPUs to use. For example:
$ docker run -it --rm --gpus all ubuntu nvidia-smi
Unfortunately, Enabling GPU access with Compose doesn't describe this use case exactly. That guide uses the deploy yaml element, but in the context of reserving machines with GPUs. In fact, another part of the documentation says that it will be ignored by docker-compose:
This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
After trying it and solving a myriad of problems along the way, I have realized that it is simply the documentation that is out of date.
Adding the following yaml block to my docker-compose.yml resulted in nvidia-smi being available to use.
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
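For context, a minimal sketch of a complete docker-compose.yml using that block (the service name is made up, and it mirrors the docker run example from the docs, where the NVIDIA runtime makes nvidia-smi available inside the plain ubuntu image):
services:
  gpu-test:              # hypothetical service name
    image: ubuntu
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
Running docker compose up with this file should produce the same nvidia-smi output as the docker run --gpus all example quoted above.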

How to run docker-compose on Google Cloud Run?

I'm new to GCP, and I'm trying to deploy my Spring Boot web service using docker-compose.
In my docker-compose.yml file, I have 3 services: my app service, a MySQL service and a Cassandra service.
Locally, it works like a charm. I also added a cloudbuild.yaml file:
steps:
  - name: 'docker/compose:1.28.2'
    args: ['up', '-d']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['tag', 'workspace_app:latest', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
images: ['gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
The build on Google Cloud Build succeeds. But when I try to run the image on Google Cloud Run, docker-compose is never invoked.
How should I proceed to use docker-compose in production?
With Cloud Run, you can deploy only one container image. The container can contain several binaries that you can run in parallel. But keep this in mind:
CPU is throttled when no requests are being processed. Background processes/apps aren't recommended on Cloud Run; prefer request/response apps (a webserver).
Only HTTP requests are supported by Cloud Run. TCP connections (such as a MySQL connection) aren't supported.
Cloud Run is stateless. You can't persist data in it.
All data is stored in memory (the /tmp directory is writable). Take care not to exceed the instance's total memory (your app's footprint plus the files you store in memory).
Related to the previous point: when the instance is offloaded (you don't manage that, it's serverless), you lose everything you put in memory.
Thus, the MySQL and Cassandra services must be hosted elsewhere.
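If you still want Cloud Build to do the deployment, a sketch of a cloudbuild.yaml that builds, pushes and deploys a single image to Cloud Run could look like the following; the service name and region are placeholders, and the MySQL and Cassandra backends would have to live elsewhere (for example Cloud SQL or a VM):
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA']
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'run'
      - 'deploy'
      - 'my-service'        # placeholder service name
      - '--image'
      - 'gcr.io/$PROJECT_ID/$REPO_NAME:$COMMIT_SHA'
      - '--region'
      - 'europe-west1'      # placeholder region
      - '--platform'
      - 'managed'
The Cloud Build service account also needs permission to deploy to Cloud Run (typically the Cloud Run Admin and Service Account User roles) for the last step to succeed.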
You can also try:
docker-compose -f dirfile/cloudbuild.yaml up
To check it, run this command:
docker images
To check your containers:
docker container ls -a
To check whether the container is running:
docker ps
Finally, I deployed my solution with docker-compose on a Google Compute Engine virtual machine instance.
First, we clone our git repository onto the virtual machine instance.
Then, in the cloned repository, which of course contains the docker-compose.yml, the Dockerfile and the WAR file, we execute this command:
docker run --rm \
-v /var/run/docker.sock:/var/run/docker.sock \
-v "$PWD:$PWD" \
-w="$PWD" \
docker/compose:1.29.1 up
And voilà, our solution is running in production with docker-compose.

Docker stack - always get the image from hub.docker.com

Summary: when I perform a 'docker stack deploy' in an AWS / EC2 environment, the local (old) image is used. How can I overrule this behaviour and have 'docker stack' use the new image from hub.docker.com? As a workaround I first do a 'docker pull' of the image from index.docker.io before executing the 'docker stack deploy'. Is this extra step needed?
Situation:
On a Jenkins server (not on AWS / EC2) I have the following build steps:
Maven build
docker login -u ${env.DOCKER_USERNAME} -p ${env.DOCKER_PASSWORD}
docker build -t local-username/image-name:latest
docker tag local-username/image-name dockerhub-username/image-name:latest
docker push dockerhub-username/image-name:latest
The next steps in my Jenkinsfile are executed via a secure shell (ssh) on my AWS environment:
docker stack deploy -c docker-compose.yml stackname
When I execute this Jenkins job, the Docker image is taken from the local image repo on AWS. I want to use the newest image pushed to hub.docker.com.
When I insert the following action BEFORE the 'docker stack deploy' everything works smoothly:
docker pull index.docker.io/dockerhub-username/image-name:latest
My questions:
Why do I need this extra 'docker pull' action?
How can I remove this action? Just by adding 'index.docker.io' in front of the image in the docker-compose.yml file? Or is there a better approach?
The extra docker pull should, of course, not be needed.
What will help you?
The answer from #Tarun may work.
Or just name the Docker Hub registry explicitly. Use the following lines in your docker-compose.yml (stack) file:
servicename:
  image: index.docker.io/dockerhub-username/image-name
That will help you.
Maybe this is due to the fact that you build locally or on a separate Jenkins server, push the image to Docker Hub, and then deploy from a remote shell on EC2. On that EC2 instance only the previously pulled image is present.
I tried the above solution for you, and it worked.
You can just execute the 'docker stack deploy' and the right image is used.
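Put together, a minimal sketch of such a stack file, with placeholder names, would be:
version: '3.7'
services:
  servicename:
    image: index.docker.io/dockerhub-username/image-name:latest
Pinning an explicit tag (or, better still, an image digest) in the stack file also removes any ambiguity about which version the swarm nodes should pull.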

Locally deploying a GCloud app with a custom Docker image

I have been deploying my app from a bash terminal using an app.yaml script and the command:
gcloud app deploy app.yaml
This runs a main.app script to set up the environment from a custom-made Docker image.
How can I deploy this locally so that I can make small changes and see their effects before actually deploying, which takes quite a while?
If you want to run your app locally, you should be able to do that outside of the docker container. We actually place very few restrictions on the environment - largely you just need to make sure you're listening on port 8080.
However if you really want to test locally with docker - you can...
# generate the Dockerfile for your applications
gcloud beta app gen-config --custom
# build the docker container
docker build -t myapp .
# run the container
docker run -it -p 8080:8080 myapp
From there, you should be able to hit http://localhost:8080 and see your app running.
Hope this helps!