I'm new to creating my own Docker images. I've been following along with this guide. I've successfully built my image by running docker-compose build in the root directory.
However, I encounter the same issue every time I try to run: docker-compose up
I get the following error:
Pulling postgresql (postgresql:latest)...
ERROR: pull access denied for postgresql, repository does not exist or may require 'docker login'
I've set up a Docker account, and I can run a PostgreSQL image by following the documentation.
I'm at a loss as to what to do. I'm thinking I should modify my project's Dockerfile or the docker-compose.yml file, but I'm unsure.
Also, when I build my app, I get the following at the beginning:
postgresql uses an image, skipping
My docker-compose.yml file looks like:
web:
  build: .
  command: rails s -e production
  ports:
    - 3000
  links:
    - postgresql
    - postgresql:postgresql.cloud66.local
  environment:
    - RAILS_ENV=production
    - RACK_ENV=production
postgresql:
  image: postgresql
You may be running an outdated version of docker-compose.
Also, your YAML seems to have an indentation error:
web:
  build: .
  links:
    - postgresql
  postgresql:
    image: postgresql
This should be:
web:
  build: .
  links:
    - postgresql
postgresql:
  image: postgresql
Maybe it was just a copy & paste error, because the error message implies it was parsed correctly.
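It may also be worth checking the image name itself: Docker Hub's official PostgreSQL image is published as postgres, not postgresql, and a nonexistent repository produces exactly this "pull access denied ... repository does not exist" error. A minimal fragment, assuming the intent was the official image:

```yaml
# The service name can stay "postgresql"; only the image reference changes.
postgresql:
  image: postgres   # official Docker Hub image; there is no "postgresql" repository
```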
I'm using Docker for the first time to set up a test database that my team can then use. I'm having some trouble seeing my data in DBeaver after running my docker-compose file. The issue I'm facing is that my database does not show up in DBeaver (along with the relevant schemas and tables that I also create/populate in my initialization SQL script).
Here is my docker-compose.yml
version: "3"
services:
  test_database:
    image: postgres:latest
    build:
      context: ./
      dockerfile: Dockerfile
    restart: always
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=test1234
      - POSTGRES_DB=testdb
    container_name: test_database
In this, I specify the Dockerfile I want it to use for building. Here is the Dockerfile:
# syntax = docker/dockerfile:1.3
FROM postgres:latest
ADD test_data.tar .
COPY init_test_db.sql /docker-entrypoint-initdb.d/
Now, when I run docker-compose build and docker-compose up, I can see from the logs that my SQL commands (CREATE, COPY, etc.) do get executed and the rows do get added. But when I connect to this instance through DBeaver, I can't see any of it. In fact, the only database there is the default postgres database, even though the logs say I'm connected to test_database.
I followed some other solutions and used docker volume prune as well, but that didn't affect anything (I read some solutions about clearing up volumes, and at that point, I had volumes: /tmp:/tmp as well). Any ideas?
Wow, this wasn't an error after all. All I had to do was go to the connection settings in DBeaver and check 'Show all databases' under the PostgreSQL tab. Hope this can help someone :)
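A side observation on the compose file above: when both build: and image: are given, Compose builds from the Dockerfile and tags the result with the image: value, so the locally built image ends up tagged postgres:latest, shadowing the official image. One way to avoid that confusion is to use a distinct tag (the tag name here is just an example):

```yaml
services:
  test_database:
    build:
      context: ./
      dockerfile: Dockerfile
    image: test_database:local   # hypothetical tag; does not shadow postgres:latest
```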
First of all: no, I cannot switch from Bitbucket Pipelines to something more appropriate; unfortunately, it is a direct requirement.
I have searched other SO questions and Google; the following two questions are related:
Bitbucket Pipeline - docker compose error (no answer)
How to use docker compose V2 in Bitbucket Pipelines (the answer does not work even when copied literally into the pipeline definition, for one of the reasons below)
Working v1 main pipeline (only the significant step and job; the real one is larger, of course):
image: python:3.10

definitions:
  steps:
    - step: &run-tests
        name: Test
        image: docker/compose:debian-1.29.2
        caches:
          - docker
        services:
          - docker
        script:
          - COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
          # - ... (wait until ready and run tests; omitted, because the error happens earlier)

pipelines:
  default:
    - parallel:
        - step: *run-tests
Encountered errors
I'll refer to them multiple times, so let's define short aliases:
403
+ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
listing workers for Build: failed to list workers: Unavailable: connection error: desc = "transport: Error while dialing unable to upgrade to h2c, received 403"
privileged
+ COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose --project-name reporting --env-file .env.ci -f docker-compose.ci.yaml up -d --build
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 2.8s done
#1 creating container buildx_buildkit_default 0.0s done
#1 ERROR: Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
------
> [internal] booting buildkit:
------
Error response from daemon: authorization denied by plugin pipelines: --privileged=true is not allowed
Unfortunately, there is no docker/compose v2 image, and our deployment uses v2, so some inconsistencies arise. I'm trying to use v2 in the pipeline now: I replaced docker-compose references with docker compose and am trying to keep this command from crashing. An important thing to note: I need Docker BuildKit and cannot go without it, because I'm using Dockerfile.name.dockerignore files, which are separate for prod and dev, and Docker without BuildKit does not support them (builds will simply fail).
Things I tried (debug commands such as docker version and docker compose version always worked fine in these cases):
using image: linuxserver/docker-compose:2.10.2-v2. Result: 403.
using image: library/docker:20.10.18.
No more changes. Result: privileged.
Add docker buildx create --driver-opt image=moby/buildkit:v0.10.4-rootless --use as a step. Result: privileged (logs show that this image is actually used: pulling image moby/buildkit:v0.10.4-rootless 6.3s done).
using no explicit image (relying on bitbucket docker installation).
with official compose installation method (result: 403):
- mkdir -p /usr/local/lib/docker/cli-plugins/
- wget -O /usr/local/lib/docker/cli-plugins/docker-compose https://github.com/docker/compose/releases/download/v2.10.2/docker-compose-linux-x86_64
- chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
with the solution from the 2nd link above (result: 403, but with some measure of success: it pulled the two services that do not require building, postgres and redis, and only failed after that)
If it is important, here is the compose file for CI (only healthchecks trimmed; everything else untouched):
# We need this file without volumes due to bitbucket limitations.
version: '3.9'
services:
  db:
    image: mariadb:10.8.3-jammy
    env_file: .env.ci
    volumes:
      - ./tests/db_init/:/docker-entrypoint-initdb.d
    networks:
      - app_network
  redis:
    image: redis:alpine
    environment:
      - REDIS_REPLICATION_MODE=master
    networks:
      - app_network
  app:
    build:
      context: .
      args:
        - APP_USER=reporting
        - APP_PORT
    env_file: .env.ci
    depends_on:
      - db
      - redis
    networks:
      - app_network
  nginx:
    build:
      context: .
      dockerfile: configs/Dockerfile.nginx
    env_file: .env.ci
    environment:
      - APP_HOST=app
    ports:
      - 80:80
    depends_on:
      - app
    networks:
      - app_network
networks:
  app_network:
    driver: bridge
For now I have reverted everything and keep using v1. The limitations of Bitbucket Pipelines drive me mad: I can easily run the same stuff in GitHub Actions, but here I had to remove one service (it uses Docker directory mounting, so it cannot run on Bitbucket) and spent a whole day trying to upgrade Compose. Sorry for the tone, but this really makes me want to quit Bitbucket forever and never touch it again.
I'm trying to build a docker-compose file that will spin up my EF Core Web API project, connected to my Postgres database.
I'm having a hard time getting the EF project to connect to the database.
This is what I currently have for my docker-compose.yml:
version: '3.8'
services:
  web:
    container_name: 'mybackendcontainer'
    image: 'myuser/mybackend:0.0.6'
    build:
      context: .
      dockerfile: backend.dockerfile
    ports:
      - 8080:80
    depends_on:
      - postgres
    networks:
      - mybackend-network
  postgres:
    container_name: 'postgres'
    image: 'postgres:latest'
    environment:
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=MySuperSecurePassword!
      - POSTGRES_DB=MyDatabase
    networks:
      - mybackend-network
    expose:
      - 5432
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - mybackend-network
    volumes:
      - ./pgadmin-data/:/var/lib/pgadmin/
networks:
  mybackend-network:
    driver: bridge
And my web project's Dockerfile looks like this:
# Get base SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Copy the CSPROJ file and restore any dependencies (via NUGET)
COPY *.csproj ./
RUN dotnet restore
# Copy the project files and build our release
COPY . ./
RUN dotnet publish -c Release -o out
# Generate runtime image - do not include the whole SDK to save image space
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
EXPOSE 80
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyBackend.dll"]
And my connection string looks like this:
User ID=bootcampdb;Password=MySuperSecurePassword!;Server=postgres;Port=5432;Database=MyDatabase;Integrated Security=true;Pooling=true;
Currently I have two problems:
I'm getting Npgsql.PostgresException (0x80004005): 57P03: the database system is starting up when I run docker-compose up. I tried to add a healthcheck to my postgres db, but that did not work. When I go to the Docker Desktop app and start my backend again, that message goes away and I get my second problem...
Secondly, after the DB has started, it says: FATAL: password authentication failed for user "username". It looks like it's not creating my user for the database. I even changed it to not use .env files and put the values directly in my docker-compose file, but it's still not working. I've tried docker-compose down -v to ensure my volumes get deleted.
Sorry these might be silly questions, I'm still new to containerization and trying to get this to work.
Any help will be appreciated!
Problem 1: Having depends_on only means that docker-compose will wait until your postgres container is started before it starts the web container. The postgres container needs some time to get ready to accept connections and if you attempt to connect before it's ready, you get the error you're seeing. You need to code your backend in a way that it'll wait until Postgres is ready by retrying the connection with a delay.
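An alternative to retrying in application code is to let Compose gate startup on a healthcheck. A sketch reusing the service names from the question (the long depends_on form requires a Compose version that supports condition: service_healthy):

```yaml
services:
  postgres:
    image: 'postgres:latest'
    healthcheck:
      # pg_isready ships with the postgres image and exits 0 once connections are accepted
      test: ["CMD-SHELL", "pg_isready -U username -d MyDatabase"]
      interval: 5s
      timeout: 3s
      retries: 10
  web:
    depends_on:
      postgres:
        condition: service_healthy   # wait for the healthcheck, not just container start
```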
Problem 2: Postgres only creates the user and database if no database already exists. You probably have an existing database in ./db-data/ on the host. Try deleting ./db-data/ and Postgres should create the user and database using the environment variables you've set.
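Note also that docker-compose down -v only removes named volumes; a bind mount like ./db-data/ is just a host directory, which Compose never deletes. A named-volume variant, sketched:

```yaml
services:
  postgres:
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume instead of ./db-data/ bind mount
volumes:
  db-data:
```

With this layout, docker-compose down -v genuinely resets the database, so the first-time user/database initialization runs again.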
Mac OS 10.13.6
Docker 19.03.5, build 633a0ea
I create two docker containers, web and db using docker-compose.yml. It worked for several months. Recently I decided to rebuild containers from scratch, so I actually removed the existing ones and started over:
$ docker-compose -f docker-compose-dev.yml build --no-cache
$ docker-compose -f docker-compose-dev.yml up -d
During the "build" run, building the db container yields this:
DETAIL: The data directory was initialised by PostgreSQL version 9.6,
which is not compatible with this version 11.2.
exited with code 1
The db container does not start, so I cannot check what it's got inside.
My containers are defined like this:
version: '3'
services:
  web:
    restart: unless-stopped
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "8000:8000"
    environment:
      DJANGO_SETTINGS_MODULE: '<my_app>.settings.postgres'
      DB_NAME: 'my_db'
      DB_USER: 'my_db_user'
      DB_PASS: 'my_db_user'
      DB_HOST: 'my_db_host'
      PRODUCTION: 'false'
      DEBUG: 'True'
    depends_on:
      - db
    volumes:
      - ./:/usr/src/app/
  db:
    image: postgres:11.2-alpine
    volumes:
      - myapp-db-dev:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=<my_db>
      - POSTGRES_USER=<my_db_user>
      - POSTGRES_PASSWORD=<my_db_password>
volumes:
  myapp-db-dev:
My local PostgreSQL is 11.3 (which should be irrelevant):
$ psql --version
psql (PostgreSQL) 11.3
and my local PostgreSQL data directory was removed completely:
$ rm -rf /usr/local/var/postgres
However, it's up-to-date:
$ brew postgresql-upgrade-database
Error: postgresql data already upgraded!
I read Stack Overflow 17822974 and Stack Overflow 19076980; that advice did not help.
How do I fix this data incompatibility? If possible, I would like to avoid downgrading Postgres. I don't even understand what data it's talking about at this point; all the data is migrated later in a separate step.
It seems like Postgres 9.6 was specified as the image on the first run, so the container was initialized and the data was put into the myapp-db-dev named volume. Then someone changed the version, and you got this error. A possible solution would be:
Temporarily downgrade the version to Postgres 9.6, e.g. specify postgres:9.6.
Go into the container and dump the data with the pg_dump utility.
Change the version to 11.2 and specify a new volume (it's a good idea to use a host volume).
Restore the data.
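The downgrade step can be sketched as a temporary override file (the filename is hypothetical; service and volume names are taken from the question):

```yaml
# docker-compose.downgrade.yml (hypothetical override file)
version: '3'
services:
  db:
    image: postgres:9.6   # the version that initialised the existing volume
    volumes:
      - myapp-db-dev:/var/lib/postgresql/data
volumes:
  myapp-db-dev:
```

Starting only the db service with both files, e.g. docker-compose -f docker-compose-dev.yml -f docker-compose.downgrade.yml up -d db, lets you run pg_dump (or pg_dumpall) inside the container before switching back to 11.2 with a fresh volume and restoring.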
I want to have a MongoDB service running in Docker in order to serve a Flask app. What I've tried is to create a container using this docker-compose.yml:
my_mongo_service:
  image: mongo
  environment:
    - MONGO_INITDB_ROOT_USERNAME=${MONGO_ROOT_USER}
    - MONGO_INITDB_ROOT_PASSWORD=${MONGO_ROOT_PASSWORD}
    - MONGO_INITDB_DATABASE=${MY_DATABASE_NAME}
  ports:
    - "27017:27017"
  volumes:
    - "/data/db:/data/db"
  command: mongod
Imagine we have an .env file like this:
MONGO_ROOT_USER=my_fancy_username
MONGO_ROOT_PASSWORD=my_fancy_password
MY_DATABASE_NAME=my_fancy_database
What I would expect (reading the docs) is that a database matching the MY_DATABASE_NAME value is created, a user matching MONGO_ROOT_USER is created too, and I could authenticate with the pair (MONGO_ROOT_USER, MONGO_ROOT_PASSWORD).
OK, I launch my container with docker-compose up and enter it with docker exec -it <container-id> bash. I run mongo in the console, and when I try to authenticate, it fails:
> use my_fancy_database
switched to db my_fancy_database
> db.auth('my_fancy_username','my_fancy_password')
Error: Authentication failed.
In the log, the error I find is the following:
[...] authentication failed for my_fancy_username on my_fancy_database from client [...] ; UserNotFound: Could not find user my_fancy_username#my_fancy_database
The docker-compose.yml configuration (as posted in the official documentation) is not working. What am I doing wrong?
Thanks in advance.
I don't get it. Are you using environment variables that are not actually set in the environment? It sure looks like it.
If you run echo $MY_DATABASE_NAME in your terminal and see empty output, then that is the answer to your question. You either have to define the variable first with export (or source it from a file), or redefine your docker-compose.yml.
For that, it's best to use env_file directive:
my_mongo_service:
  image: mongo
  env_file:
    - .env
  ports:
    - "27017:27017"
  volumes:
    - "/data/db:/data/db"
And set your .env like this:
MONGO_INITDB_ROOT_USERNAME=my_fancy_username
MONGO_INITDB_ROOT_PASSWORD=my_fancy_password
MONGO_INITDB_DATABASE=my_fancy_database
Side note: using command: mongod is not necessary; the base image already runs it.