I am running Windows 10 with WSL and Docker Desktop. I have the following test YML, which errors out:
version: '2.3'
services:
  cli:
    image: smartsheet-www
    volumes_from:
      - container:amazeeio-ssh-agent
➜ ~ ✗ docker-compose -f test.yml up
no such service: amazeeio-ssh-agent
Why does it try to find a service when I specified container: ?
The container exists, runs and has a volume.
docker inspect -f "{{ .Config.Volumes }}" amazeeio-ssh-agent
map[/tmp/amazeeio_ssh-agent:{}]
docker exec -it amazeeio-ssh-agent /bin/sh -c 'ls -l /tmp/amazeeio_ssh-agent/'
total 0
srw------- 1 drupal drupal 0 Apr 1 03:54 socket
Removing the volumes_from key and the line below it lets the cli service start just fine.
After a bit of searching I finally found https://github.com/docker/compose/issues/8874 and https://github.com/pygmystack/pygmy-legacy/issues/60#issue-1037009622, which fix it:
uncheck "Use Docker Compose V2" in Docker Desktop's settings (Compose V2 does not support volumes_from with an external container, hence the "no such service" error).
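If you would rather stay on Compose V2, one possible workaround is to share the socket directory through an external named volume instead of volumes_from. This is only a sketch: it assumes the agent's socket directory can live in a named volume mounted into the amazeeio-ssh-agent container, and the volume name here is hypothetical.

version: '2.3'
services:
  cli:
    image: smartsheet-www
    volumes:
      # hypothetical named volume holding /tmp/amazeeio_ssh-agent
      - amazeeio-ssh-agent:/tmp/amazeeio_ssh-agent
volumes:
  amazeeio-ssh-agent:
    external: true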
In my Kubernetes cluster I am running GitLab EE 15.8.0 with a GitLab Runner. This runner is configured for the Kubernetes executor, and I have mounted /var/run/docker.sock into it via the ConfigMap. When running a pipeline that brings up a docker-compose-test.yml, I can see that all pods in the cluster start to crash and get restarted. After that the pipeline is still in the Running state, but no runner is working on it. The last command the runner executed in the pipeline was: docker-compose -f docker-compose-test.yml up -d.
I expected the pipeline to just bring up the docker containers and run the Laravel tests using the database container and the application container, but instead it messes up the Nginx-Ingress resource.
I am running GitLab-ee:15.8.0 with the gitlab-runner version 15.8.2
Here is the gitlab-ci.yml:
image: docker:20.10.16

services:
  - docker:20.10.16-dind

variables:
  DOCKER_COMPOSE_CMD: "docker-compose -f docker-compose-test.yml"

stages:
  - test
  - build

test:
  stage: test
  script:
    - docker-compose --version
    - $DOCKER_COMPOSE_CMD down --volumes --remove-orphans
    - $DOCKER_COMPOSE_CMD up -d
    - $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test ./scripts/wait-for.sh database-test:54321 -t 60 -- echo "Database connection established"
    - $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test php artisan passport:keys
    - $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test php artisan migrate
    - $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test sh -c "vendor/bin/phpunit ./tests $PARAMETERS --coverage-text --colors=never --stderr"
    - $DOCKER_COMPOSE_CMD down --volumes --remove-orphans
  # only:
  #   - tags

build:
  stage: build
  script:
    - export IMAGE_TAG=$(echo "$CI_COMMIT_TAG" | awk -F '/' '{print $NF}')
    - docker build -t laravel-api:"$IMAGE_TAG" .
    - docker login -u "$CONTAINER_REGISTRY_USERNAME" -p "$CONTAINER_REGISTRY_PASSWORD" "$CONTAINER_REGISTRY_URL"
    - docker push laravel-api:"$IMAGE_TAG"
  only:
    - tags
And this is the docker-compose-test.yml that seems to mess things up:
version: "3.7"
services:
  laravel-api-test:
    build:
      args:
        user: laravel
        uid: 1000
      context: .
      dockerfile: docker/development/Dockerfile
    working_dir: /var/www/
    volumes:
      - ./:/var/www
    ports:
      - ${APP_PORT}:9000
    networks:
      - application
  database-test:
    image: postgres:15.1-alpine
    ports:
      - 54321:5432
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
    networks:
      - application
networks:
  application:
    driver: bridge
The last thing that is probably relevant is the gitlab-runner config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: gitlab-runner-config
  namespace: gitlab-runner
data:
  config.toml: |-
    concurrent = 4

    [[runners]]
      name = "Runner_1"
      url = "https://gitlab.project.com/ci"
      token = "my-token"
      executor = "kubernetes"
      [runners.kubernetes]
        namespace = "gitlab-runner"
        privileged = true
        poll_timeout = 600
        cpu_request = "1"
        service_cpu_request = "200m"
        [[runners.kubernetes.volumes.host_path]]
          name = "docker"
          mount_path = "/var/run/docker.sock"
          host_path = "/var/run/docker.sock"
Finally, this is the output from the pipeline after it crashed:
Running with gitlab-runner 15.8.2 (4d1ca121)
on Runner_1 eNNz4y9k, system ID: r_y3jEhmF8fN58
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image docker:20.10.16 ...
Using attach strategy to execute scripts...
Preparing environment
00:04
Waiting for pod gitlab-runner/runner-ennz4y9k-project-117-concurrent-0f24cx to be running, status is Pending
Running on runner-ennz4y9k-project-117-concurrent-0f24cx via gitlab-runner-56cd6f4bb5-zrbd9...
Getting source from Git repository
00:01
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/Clients/opus-volvere/laravel-api/.git/
Created fresh repository.
Checking out 3890412c as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
$ docker-compose --version
Docker Compose version v2.6.0
$ $DOCKER_COMPOSE_CMD down --volumes --remove-orphans
Container laravel-api-database-test-1 Stopping
Container laravel-api-laravel-api-test-1 Stopping
Container laravel-api-database-test-1 Stopping
Container laravel-api-laravel-api-test-1 Stopping
Container laravel-api-database-test-1 Stopped
Container laravel-api-database-test-1 Removing
Container laravel-api-laravel-api-test-1 Stopped
Container laravel-api-laravel-api-test-1 Removing
Container laravel-api-laravel-api-test-1 Removed
Container laravel-api-database-test-1 Removed
Network laravel-api_application Removing
Network laravel-api_application Removed
$ $DOCKER_COMPOSE_CMD up -d
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 827B done
#1 DONE 0.1s
#2 [internal] load .dockerignore
#2 transferring context: 88B done
#2 DONE 0.1s
I am not really sure where to look (log files or anything), so any help with debugging this issue is really appreciated...
As far as I can see, the trouble only starts when I try to launch the docker compose. I already built the image in the pipeline and that worked like it should, but it starts to go wrong when I actually try to run the containers. Maybe that helps? This is just a really annoying problem that isn't really my expertise, so I am reading, learning and trying a lot :(
I followed this tutorial on how to add a GitLab runner to Kubernetes. Maybe it has something to do with the fact that it tries to create a new pod for the pipeline, because the tutorial I linked says:
The second is a ServiceAccount, Role, and RoleBinding to give the
Runner the privileges to add new Pods to the Namespace.
Again, I am not familiar with all this stuff, so for me it's also a shot in the dark, but I really want this fixed so I can continue working on this project.
What could cause this GitLab pipeline to crash my entire Kubernetes cluster?
Never expose the host's container runtime to workloads inside the cluster.
This can lead to the situation where the GitLab runner "cleans up" and removes the containers that operate your cluster components.
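To see why this is dangerous: with /var/run/docker.sock mounted, the docker CLI inside the job talks directly to the node's Docker daemon, so a docker-compose down/up can stop and remove containers the job never created. A hedged illustration (on nodes that use the Docker runtime, Kubernetes-managed containers carry a k8s_ name prefix):

# Run inside the job pod: this lists the NODE's containers,
# including the Kubernetes-managed ones, not just the job's own.
docker ps --format '{{.Names}}' | grep '^k8s_'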
In addition, you get tied to a specific container runtime, which should be an implementation detail of your cluster.
As an alternative you can, for example, use docker-in-docker for the GitLab runner, as sketched below.
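A minimal sketch of that pattern for the job in the question (the TLS variables follow GitLab's documented docker-in-docker setup; you would also remove the [[runners.kubernetes.volumes.host_path]] docker.sock mount from the runner config so jobs can no longer reach the node's runtime):

image: docker:20.10.16

services:
  - docker:20.10.16-dind

variables:
  # Talk to the dind service instead of the node's daemon
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_TLS_VERIFY: 1
  DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"

The privileged = true already present in the runner config is what dind needs; it is the host_path docker.sock mount that breaks the cluster.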
How do I kill a specific Docker container using docker-compose? I have tried the below but it didn't work; I'd appreciate your help.
root@docker:/opt/dockercompose# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
792663a9f2de nginx "nginx -g 'daemon of…" About an hour ago Up 6 minutes 0.0.0.0:8005->80/tcp dockercompose_webapp2_1
1f94ff0e70cf nginx "nginx -g 'daemon of…" About an hour ago Up 6 minutes 0.0.0.0:8000->80/tcp dockercompose_webapp1_1
root@docker:/opt/dockercompose# docker-compose kill 792663a9f2de
ERROR: No such service: 792663a9f2de
root@docker:/opt/dockercompose#
All of the docker-compose commands take the service names as specified in the docker-compose.yml file. The docker ps output you show could be created from a docker-compose.yml file like:
version: '3.8'
services:
  webapp1:
    image: nginx
    ports: ['8000:80']
  webapp2:
    image: nginx
    ports: ['8005:80']
If you want to kill off a specific Compose-managed container, you can run docker-compose kill webapp2; Compose will find the service in the docker-compose.yml and match it to the running container via the labels it sets on containers it creates.
For most practical things, if you're in a Compose-managed environment, you can use exclusively docker-compose commands: docker-compose ps to list the containers, docker-compose logs to see a container's output, and so on. All of these again take the Compose service name, not the Docker container name or ID.
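If all you have is a container ID, you can also recover the service name from those labels and then kill by service name (a quick sketch; com.docker.compose.service is the standard label Compose sets):

# Print the Compose service name for a given container ID
docker inspect -f '{{ index .Config.Labels "com.docker.compose.service" }}' 792663a9f2de
# => webapp2
docker-compose kill webapp2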
The command below works:
root@docker:/opt/dockercompose# docker-compose -f docker-compose.yaml -p 792663a9f2de kill
I'm facing an issue executing a CronJob. Below is a snippet of the code:
containers:
  - name: ssm1db
    image: anuragh/ubuntu:mycronjob5
    imagePullPolicy: Always
    command:
      - "/bin/sh"
      - "-c"
      - "kubectl exec ssm1db-0 -- bash -c 'whoami; /db2/db2inst1/dba/jobs/dbactivate.sh -d wdp'"
For example, I'm able to execute the code below directly. Here db2inst1 is the user the script needs to be executed as:
/bin/su -c ./full_online_backup.sh - db2inst1
But when executing it via kubectl, I get the error below:
/bin/su: /bin/su: cannot execute binary file
command terminated with exit code 126
[root@ssm1db-0 /]#
Related question : How to start crond as non-root user in a Docker Container?
You will encounter permission issues when running crond as a non-root user.
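For the error above, one hedged sketch (names and paths taken from the question) is to let the exec'd shell invoke su itself, so the su binary is executed rather than read as a script:

kubectl exec ssm1db-0 -- bash -c "su - db2inst1 -c '/db2/db2inst1/dba/jobs/dbactivate.sh -d wdp'"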
I tried to install GitLab with Docker Compose. I set up this docker-compose file:
gitlab:
  image: 'gitlab/gitlab-ce:latest'
  volumes:
    - '/srv/docker/gitlab/data:/var/opt/gitlab'
    - '/srv/docker/gitlab/config:/etc/gitlab'
    - '/srv/docker/gitlab/logs:/var/log/gitlab'
  ports:
    - "10080:10080"
    - "10443:443"
    - "10022:22"
  restart: always
  hostname: '1.1.1.1'
  dns:
    - xx.xx.xx.xx
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      gitlab_rails['gravatar_enabled'] = false
      gitlab_rails['time_zone'] = 'Asia/Tokyo'
When I run docker-compose up, it fails and says:
gitlab_1 | If this container fails to start due to permission problems try to fix it by executing:
gitlab_1 |
gitlab_1 | docker exec -it gitlab update-permissions
gitlab_1 | docker restart gitlab
gitlab_1 |
gitlab_1 | Installing gitlab.rb config...
gitlab_1 | cp: cannot create regular file '/etc/gitlab/gitlab.rb': Permission denied
gitlab_gitlab_1 exited with code 1
As instructed, I tried to run
docker exec -it gitlab update-permissions
But it errored with
Error response from daemon: No such container: gitlab
Can anyone help?
Just for info, here is docker ps.
Result:
CONTAINER ID IMAGE COMMAND CREATED
xxxxxxx gitlab/gitlab-ce:latest "/assets/wrapper" 24 hours ago
And the file permissions:
ls -la /etc/gitlab/gitlab.rb
-rw-------. 1 root root 0 Dec 12 17:00 /etc/gitlab/gitlab.rb
It seems that the container doesn't have the permission to create files beneath your mounted volumes:
/srv/docker/gitlab/data
/srv/docker/gitlab/config
/srv/docker/gitlab/logs
The file permissions show that gitlab.rb is only readable and writable by root. I just checked; my container uses the same permissions.
So it might be a problem with your Docker host that somehow prevents you from creating/writing these files. Maybe the filesystem is mounted read-only, or the permissions of the host volume folders don't allow it.
SELinux or AppArmor could also be a problem!
I'd recommend removing all files in the volumes and setting the permissions of the three folders to 777, as shown below. After it starts you'll see which user/group IDs are needed and you can tighten things down.
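The reset described above, as commands (777 is deliberately loose; tighten it once you know which IDs the container actually uses):

sudo rm -rf /srv/docker/gitlab/data/* /srv/docker/gitlab/config/* /srv/docker/gitlab/logs/*
sudo chmod 777 /srv/docker/gitlab/data /srv/docker/gitlab/config /srv/docker/gitlab/logs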
Do you use AppArmor or SELinux? What kind of host OS do you use?
Finally, I tried changing the content of docker-compose.yml.
I don't know the exact reason why it didn't work before; maybe a different setting in GITLAB_OMNIBUS_CONFIG.
For the docker-compose file I referred to this:
docker-compose.yml
I deleted the existing Docker image and container, then ran docker-compose again, and it works fine.
I had the same problem, and running this solved it, because I was on SELinux:
sudo docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 80:80 --publish 22:22 \
--name gitlab \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab:Z \
--volume $GITLAB_HOME/logs:/var/log/gitlab:Z \
--volume $GITLAB_HOME/data:/var/opt/gitlab:Z \
--shm-size 256m \
gitlab/gitlab-ee:latest
The :Z option makes Docker relabel the volume content for SELinux, which ensures the container has enough permission to create the configuration files in the mounted volumes.
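If you prefer docker-compose over docker run, the same SELinux relabeling can be expressed by appending :Z to each bind mount in the compose file from the question (a sketch):

gitlab:
  image: 'gitlab/gitlab-ce:latest'
  volumes:
    - '/srv/docker/gitlab/data:/var/opt/gitlab:Z'
    - '/srv/docker/gitlab/config:/etc/gitlab:Z'
    - '/srv/docker/gitlab/logs:/var/log/gitlab:Z'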
Using the docker-compose.yml file below, if I run docker-compose up or docker-compose up -d, both containers exit immediately. When I run docker restart <postgres-containerId> it comes up and stays running, but when I run docker restart <java8-containerId> it restarts and exits again.
Could you please suggest which parameters I need to specify to keep these containers up and running after docker-compose up? And how do I attach to the Java container? I tried docker attach <java8-containerId> but was not able to attach.
docker-compose.yml file -
postgres:
  image: postgres:9.4
  ports:
    - "5430:5432"

javaapp:
  image: java8:latest
  volumes:
    - /pgm:/pgm
  working_dir: /pgm
  links:
    - postgres
  command: /bin/bash
docker-compose ps results -
Name Command State Ports
--------------------------------------------------------------------
compose_javaapp_1 /bin/bash Exit 0
compose_postgres_1 /docker-entrypoint.sh postgres Exit 0
To see available containers:
docker ps -a
To open a container shell:
docker exec -it <container-name> /bin/bash
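As for keeping the javaapp container running: /bin/bash exits immediately when it has no terminal attached, which is why that container keeps stopping. A sketch of one common fix is to allocate a TTY and keep stdin open:

javaapp:
  image: java8:latest
  volumes:
    - /pgm:/pgm
  working_dir: /pgm
  links:
    - postgres
  command: /bin/bash
  tty: true          # allocate a pseudo-TTY so bash does not exit
  stdin_open: true   # keep STDIN open, like docker run -i

With those set, docker attach <java8-containerId> should drop you into the shell.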