Is there any way to use compose v2 in bitbucket pipeline? - docker-compose

First of all: no, I cannot switch from Bitbucket Pipelines to something more appropriate; unfortunately, it is a direct requirement.
I have searched other SO questions and Google; the following two questions are related:
Bitbucket Pipeline - docker compose error (no answer)
How to use docker compose V2 in Bitbucket Pipelines (the answer does not work even when copied verbatim into the pipeline definition, for one of the reasons below)
Working v1 main pipeline (only the significant step and job; the real file is, of course, larger):
image: python:3.10
definitions:
  steps:
    - step: &run-tests
        name: Test
        image: docker/compose:debian-1.29.2
        caches:
          - docker
        services:
          - docker
        script:
          - COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
          # - ... (wait until ready and run tests, ignored, because error happens earlier)
pipelines:
  default:
    - parallel:
        - step: *run-tests
Encountered errors
I'll refer to them multiple times, so let's define short aliases:
403
+ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
listing workers for Build: failed to list workers: Unavailable: connection error: desc = "transport: Error while dialing unable to upgrade to h2c, received 403"
privileged
+ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 2.8s done
#1 creating container buildx_buildkit_default 0.0s done
#1 ERROR: Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
------
> [internal] booting buildkit:
------
Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
Unfortunately, there is no docker/compose image for v2, and our deployment uses v2, so inconsistencies creep in. I'm now trying to use v2 in the pipeline: I replaced the docker-compose references with docker compose and am trying to keep that command from crashing. Important thing to note: I need Docker BuildKit and cannot go without it, because I use Dockerfile.name.dockerignore files, separate for prod and dev, and Docker without BuildKit does not support them (the builds will simply fail).
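To illustrate why BuildKit is non-negotiable (the file names below are made up for the example; the point is that a Dockerfile.<name>.dockerignore next to its Dockerfile is only honoured by BuildKit, while the legacy builder only ever reads .dockerignore):

# hypothetical layout; our real names differ:
#   Dockerfile.prod + Dockerfile.prod.dockerignore
#   Dockerfile.dev  + Dockerfile.dev.dockerignore
# BuildKit picks up the matching per-Dockerfile ignore file:
DOCKER_BUILDKIT=1 docker build -f Dockerfile.prod -t reporting-app:prod .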
Things I tried (debug statements like docker version and docker compose version always worked fine in these cases):
using image: linuxserver/docker-compose:2.10.2-v2. Result: 403.
using image: library/docker:20.10.18, in two variants:
with no other changes. Result: privileged.
adding docker buildx create --driver-opt image=moby/buildkit:v0.10.4-rootless --use as a step. Result: privileged (the logs show this image is actually used: pulling image moby/buildkit:v0.10.4-rootless 6.3s done).
using no explicit image (relying on the Bitbucket Docker installation), in two variants:
with the official Compose installation method (result: 403):
- mkdir -p /usr/local/lib/docker/cli-plugins/
- wget -O /usr/local/lib/docker/cli-plugins/docker-compose https://github.com/docker/compose/releases/download/v2.10.2/docker-compose-linux-x86_64
- chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
with the solution from the 2nd link above (result: 403, but with partial success: it brought up the two services that do not require building, postgres and redis, and only failed after that).
In case it matters, here is the compose file for CI (only healthchecks trimmed; everything else is untouched):
# We need this file without volumes due to bitbucket limitations.
version: '3.9'
services:
  db:
    image: mariadb:10.8.3-jammy
    env_file: .env.ci
    volumes:
      - ./tests/db_init/:/docker-entrypoint-initdb.d
    networks:
      - app_network
  redis:
    image: redis:alpine
    environment:
      - REDIS_REPLICATION_MODE=master
    networks:
      - app_network
  app:
    build:
      context: .
      args:
        - APP_USER=reporting
        - APP_PORT
    env_file: .env.ci
    depends_on:
      - db
      - redis
    networks:
      - app_network
  nginx:
    build:
      context: .
      dockerfile: configs/Dockerfile.nginx
    env_file: .env.ci
    environment:
      - APP_HOST=app
    ports:
      - 80:80
    depends_on:
      - app
    networks:
      - app_network
networks:
  app_network:
    driver: bridge
For now I have reverted everything and keep using v1. The limitations of Bitbucket Pipelines drive me mad: I can easily run the same stack in GitHub Actions, but here I had to remove one service (it relies on Docker directory mounting, so it cannot run on Bitbucket) and spend a whole day trying to upgrade Compose. Sorry for the tone; this really makes me want to quit Bitbucket forever and never touch it again.

Related

Sort out docker container permission when running silverstripe dev/build

I have created a fresh SilverStripe project using Composer and I want to get my containers up and running via docker-compose up.
I have written a very basic Dockerfile:
FROM brettt89/silverstripe-web:7.4-apache
ENV DOCUMENT_ROOT /var/www/html/public
COPY . $DOCUMENT_ROOT
WORKDIR $DOCUMENT_ROOT
RUN chown www-data:www-data $DOCUMENT_ROOT
USER www-data
as well as a simple Compose YAML file which specifies almost all the services required for it to work. Here's what it looks like:
version: "3.8"
services:
silverstripe:
build:
context: .
volumes:
- .:/var/www/html
depends_on:
- database
environment:
- DOCUMENT_ROOT=/var/www/html/public
- SS_TRUSTED_PROXY_IPS=*
- SS_ENVIRONMENT_TYPE=dev
- SS_DATABASE_SERVER=database
- SS_DATABASE_NAME=SS_mysite
- SS_DATABASE_USERNAME=root
- SS_DATABASE_PASSWORD=
- SS_DEFAULT_ADMIN_USERNAME=admin
- SS_DEFAULT_ADMIN_PASSWORD=password
ports:
- 8088:80
database:
image: mysql:5.7
environment:
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
volumes:
- db-data:/var/lib/mysql
volumes:
db-data:
I can get my containers up and running, but when I go to 127.0.0.1:8080/dev/build, it raises a mkdir(): permission denied warning.
I can see that the files in the container have 1000:1000 ownership, which I assume is still root?
So I'm wondering how I can fix this. I have seen examples of setting things up so that containers could be created via docker build, but I just want to be able to run things via docker-compose up.
I am using Ubuntu 20.04 and the project was created by $USER.
The quickest trick to fix this, for setting up your local environment, is to change the www-data user's UID to 1000 (your host user's UID) using the usermod command:
RUN usermod -u 1000 www-data
Then, of course, you can skip the last two lines of your Dockerfile.
You can find more info here:
https://blog.gougousis.net/file-permissions-the-painful-side-of-docker/
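If you want to double-check the effect, here is a quick sketch (using the silverstripe service name from the compose file above):

docker-compose up -d --build
docker-compose exec silverstripe id www-data            # should now report uid=1000
docker-compose exec silverstripe ls -ln /var/www/html   # host-owned files (1000:1000) now match www-data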

Can't connect with docker-compose to Postgres database

I'm trying to build a docker-compose file that will spin up my EF Core web api project, connecting to my Postgres database.
I'm having a hard time getting the EF project to connect to the database.
This is what I currently have for my docker-compose.yml:
version: '3.8'
services:
  web:
    container_name: 'mybackendcontainer'
    image: 'myuser/mybackend:0.0.6'
    build:
      context: .
      dockerfile: backend.dockerfile
    ports:
      - 8080:80
    depends_on:
      - postgres
    networks:
      - mybackend-network
  postgres:
    container_name: 'postgres'
    image: 'postgres:latest'
    environment:
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=MySuperSecurePassword!
      - POSTGRES_DB=MyDatabase
    networks:
      - mybackend-network
    expose:
      - 5432
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - mybackend-network
    volumes:
      - ./pgadmin-data/:/var/lib/pgadmin/
networks:
  mybackend-network:
    driver: bridge
And my web project Dockerfile looks like this:
# Get base SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Copy the CSPROJ file and restore any dependencies (via NUGET)
COPY *.csproj ./
RUN dotnet restore
# Copy the project files and build our release
COPY . ./
RUN dotnet publish -c Release -o out
# Generate runtime image - do not include the whole SDK to save image space
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
EXPOSE 80
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyBackend.dll"]
And my connection string looks like this:
User ID =bootcampdb;Password=MySuperSecurePassword!;Server=postgres;Port=5432;Database=MyDatabase; Integrated Security=true;Pooling=true;
Currently I have two problems:
I'm getting Npgsql.PostgresException (0x80004005): 57P03: the database system is starting up when I do docker-compose up. I tried to add a healthcheck to my postgres db but that did not work. When I go to my Docker Desktop app and start my backend again, that message goes away and I get my second problem...
Secondly, after the DB has started, it says: FATAL: password authentication failed for user "username". It looks like it's not creating my user for the database. I even stopped using .env files and put the values directly in my docker-compose file, but it's still not working. I've tried docker-compose down -v to ensure my volumes get deleted.
Sorry these might be silly questions, I'm still new to containerization and trying to get this to work.
Any help will be appreciated!
Problem 1: Having depends_on only means that docker-compose will wait until your postgres container is started before it starts the web container. The postgres container needs some time to get ready to accept connections and if you attempt to connect before it's ready, you get the error you're seeing. You need to code your backend in a way that it'll wait until Postgres is ready by retrying the connection with a delay.
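For illustration only, a minimal sketch of that retry-with-a-delay idea as an entrypoint wrapper script (assuming bash is available in the runtime image; the same logic could just as well live in the C# startup code or a compose healthcheck):

#!/bin/bash
# wait-for-postgres.sh -- hypothetical wrapper: block until Postgres accepts TCP connections
# usage in the Dockerfile: ENTRYPOINT ["./wait-for-postgres.sh", "postgres", "5432", "dotnet", "MyBackend.dll"]
host="$1"; port="$2"; shift 2
until (echo -n > "/dev/tcp/${host}/${port}") 2>/dev/null; do
  echo "Postgres at ${host}:${port} is not ready yet, retrying in 2s..."
  sleep 2
done
exec "$@"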
Problem 2: Postgres only creates the user and database if no database already exists. You probably have an existing database in ./db-data/ on the host. Try deleting ./db-data/ and Postgres should create the user and database using the environment variables you've set.
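For completeness, a quick sketch of that reset, using the host path from the compose file above (note this throws away any existing database data):

docker-compose down         # stop and remove the containers
rm -rf ./db-data/           # drop the stale bind-mounted data directory
docker-compose up --build   # Postgres re-initialises and creates the user/database from the env vars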

Migrating Ruby on Rails application to Docker: Issues with Docker-Compose

I'm new to creating my own Docker images. I've been following along with this guide. I've successfully built my app by using docker-compose build in the root directory.
However, I encounter the same issue every time I try to run: docker-compose up
I get the following error:
Pulling postgresql (postgresql:latest)...
ERROR: pull access denied for postgresql, repository does not exist or may require 'docker login'
I've setup a docker account. I can run a postgresql image using the documentation.
I'm at a loss as to what to do. I'm thinking I should modify my Dockerfile for my project or the docker-compose.yml file, but I'm unsure.
Also, when I build my app, I get the following at the beginning:
postgresql uses an image, skipping
My docker-compose.yml file looks like:
web:
build: .
command: rails s -e production
ports:
- 3000
links:
- postgresql
- postgresql:postgresql.cloud66.local
environment:
- RAILS_ENV=production
- RACK_ENV=production
postgresql:
image: postgresql
You may be running an outdated version of docker-compose.
Also, your YAML seems to have an indentation error:
web:
build: .
links:
- postgresql
postgresql:
image: postgresql
This should be:
web:
  build: .
  links:
    - postgresql
postgresql:
  image: postgresql
Maybe it was just a copy & paste error, because the error message implies it was parsed correctly.

docker-compose on Windows volume not working

I've been playing with Docker for the past week and think the container idea is very useful, but despite reading everything I can for the past 3 days, I can't get the volume mapping to work, i.e. get docker-compose to use my existing volume.
Docker Version: 18.03.1-ce
docker-compose version 1.21.1, build 7641a569
I created a volume using the following via a Dockerfile
# Reference SQL image
FROM microsoft/mssql-server-windows-developer
# Create directory within SQL container for database files mapped to the volume
VOLUME sqldata:c:/MSSQL
and here it shows:
C:\ProgramData\Docker\volumes>docker volume ls
local sqldata
Now I've tried probably 60+ different "solutions" based on StackOverflow and the Docker forums, but none of them work. (Note: despite the names below mentioning Azure, I am simply trying to get this to run locally; Azure is the next hurdle.)
Docker-compose.yaml:
version: '3.4'
services:
  ws:
    image: wsManager
    container_name: azure-wcf
    ports:
      - "80"
    depends_on:
      - db
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - \sqldata:/mssql
      # - type: volume
      #   source: sqldata
      #   target: /mssql
    ports:
      - "1433"
I've added a volumes section but it does not help:
volumes:
  sqldata:
    external:
      name: sqldata
I changed the - \sqldata:/mssql line to every possible slash variant (\, /, .., ., ~, whatever) and moved the yaml file to C:\ProgramData\Docker\volumes - basically any suggestion that showed up in my search results. The dbimage is a SQL Server image whose data I need to persist, but I'm wondering what the magic is, as nothing I've tried works. Any help is GREATLY appreciated.
I'm running on Windows 10 Pro build 1803.
Why does this have to be so hard?
Thank you to whoever knows how to make this actually work.
The solution is to reference the true path on Windows using the volumes: option as below:
sqldb:
  image: sqlimage
  container_name: azure-db
  volumes:
    - "C:\\ProgramData\\Docker\\volumes\\sqldata:c:\\mssql"
To persist the data I used the following:
environment:
  - "sa_password=ddsql2017##"
  - "ACCEPT_EULA=Y"
  - 'attach_dbs=[{"dbName":"MyDb","dbFiles":["C:\\MSSQL\\MyDb.mdf","C:\\MSSQL\\MyDb.ldf"]}]'
Hope this helps someone else, as many of the examples I found searching both on SO and elsewhere did not work for me, and in the Docker forums there are a lot of posts saying that mounting volumes does not work on Windows.
For those who are using Ubuntu WSL:
sudo mkdir /c
sudo mount --bind /mnt/c /c
navigate to your project folder using the new path (/c/your-project-path instead of /mnt/c/your-project-path)
edit your docker-compose.yml and use a relative path for the volume (like ./src instead of /c/your-project-path/src)
docker-compose up
I was struggling with a similar problem when trying to mount a volume to a specific path on my Windows machine: basically it didn't work, so every time I restarted my Docker instance I lost all my DB data.
I finally found out that this is because Docker for Windows by default cannot interpret Windows paths, so the flag COMPOSE_CONVERT_WINDOWS_PATHS has to be activated. To do so:
Run the command "set COMPOSE_CONVERT_WINDOWS_PATHS=1"
Restart Docker
Go to Settings > Shared Drives > Reset credentials and then select drive and then apply
From the command line, kill the containers (docker container rm -f )
Re-run the containers (a small consolidated sketch of the command-line part follows)
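As an alternative to setting the variable in every new shell, a sketch of the command-line part (assuming docker-compose picks up COMPOSE_* settings from a .env file next to docker-compose.yml, which it normally does):

# persist the flag per project instead of per shell session
echo "COMPOSE_CONVERT_WINDOWS_PATHS=1" >> .env
# recreate the containers so the volume paths are converted
docker-compose down
docker-compose up -d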
Hope it helps
If your Windows account credentials have been changed, you also have to reset the credentials for shared drives (Settings > Shared Drives > Reset credentials).
In my case, the password was changed by my company security policy.
Are you sure you really need to map to a certain host directory? If not, my solution is to create a volume beforehand and use it in docker-compose.yaml. I use the same scripts for both windows and linux. That is the beauty of docker.
Here is what I did to start both postgres and mysql:
create_db.sh (you can run it in Git Bash or a similar environment on Windows):
docker volume create --name postgres-data -d local
docker volume create --name mysql-data -d local
docker-compose up -d
docker-compose.yaml:
version: '3'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_DB: datasource
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
  mysql:
    image: mysql:latest
    environment:
      MYSQL_DATABASE: 'train'
      MYSQL_USER: 'mysql'
      MYSQL_PASSWORD: 'mysql'
      MYSQL_ROOT_PASSWORD: 'mysql'
    ports:
      - 3306:3306
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  postgres-data:
    external: true
  mysql-data:
    external: true
By default, it looks like drive sharing is disabled after installing Docker on Windows, so you won't be able to use volumes (which are stored on disks).
Enabling such sharing, through Docker in the tray > right click > Settings, helped me; volumes started working fine.
Docker on Windows has strange behavior, since Windows has limitations with credentials and also with the virtual machine that Docker is using (Hyper-V or VirtualBox, depending on your Docker version and setup).
Basically, you are correct to map a folder in the volumes: section of your service. The path would be:
version: '3.4'
services:
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - c:/Temp/sqldata:/mssql
Importantly, you do not need to explicitly create the volume in a volumes: section; docker-compose up will create it (the same goes for docker run).
The strange thing is that it will never show up in docker volume ls, but it will be usable, with the same files visible inside the Windows directory and inside the container at /mssql.
You can test it with:
docker run --rm -v c:/Temp/sqldata:/data alpine ls /data
or
docker run --rm -v c:/Temp:/data alpine ls /data
If it disappears, it has probably lost the credentials; reset them via Docker > Settings > Shared Drives > Reset credentials.
I hope it was clear and covered all the aspects for you.
Launch Docker from your windows taskbar
Click on Settings icon on top
Click Resources
Click File Sharing
Click on (+) sign and add path of local folder in which you want to map the container volume.
It worked for me.

Metadata fetch failed stack driver logging Google Compute Engine

I am integrating my Go application with Stackdriver Logging via cloud.google.com/go/logging. My application works perfectly fine when deployed to GCP on the Flex engine. However, when I run my app locally, as soon as I hit localhost:8080 I get the following error on my console and the application gets killed automatically:
Metadata fetch failed: Get http://metadata/computeMetadata/v1/instance/attributes/gae_project: dial tcp: lookup metadata on 127.0.0.11:53: server misbehaving
My understanding is that when running locally, the code should not try to access Google's internal metadata, which is what is happening above. I dug deeper and it looks like this part is handled in cloud.google.com/go/compute/metadata/metadata.go. I might be wrong here, but it looks like I have to set an env variable for the code to work properly. Pasting from the documentation in metadata.go:
// metadataHostEnv is the environment variable specifying the
// GCE metadata hostname. If empty, the default value of
// metadataIP ("169.254.169.254") is used instead.
// This is variable name is not defined by any spec, as far as
// I know; it was made up for the Go package.
metadataHostEnv = "GCE_METADATA_HOST"
If all of my understanding is true, what should I set GCE_METADATA_HOST to? If I am wrong about my understanding, why am I seeing this error? Is it possible that this error has something to do with my Docker and not with Stackdriver logging?
I am running my app in a container with docker-compose. I run go install, which generates the binary, and then I simply execute the binary.
EDIT: This is my compose file:
version: '3'
services:
  dev:
    image: <gcr_image>
    entrypoint:
      - /bin/sh
      - -c
      - "cat ./config-scripts/config.sh >> /root/.bashrc; bash"
    command: bash
    stdin_open: true
    tty: true
    working_dir: /code
    environment:
      - ENV1=value1
      - ENV2=value2
    ports:
      - "8080:8080"
    volumes:
      - .:/code
      - ~/.npmrc:/root/.npmrc
      - ~/.config/gcloud:/root/.config/gcloud
      - /var/run/docker.sock:/var/run/docker.sock