I'm trying to reload the test database with data using a dump.
The idea is to prefill postgres:14.1 with the dump before running the tests.
So far I have the following .gitlab-ci.yml, but the DB can't find the dump file.
image: "custom_image:latest"
services:
- "postgres:14.1"
variables:
RAILS_ENV: test
POSTGRES_DB: test
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
PGPASSWORD: postgres
POSTGRES_HOST_AUTH_METHOD: trust
DATABASE_URL: "postgresql://postgres:postgres#postgres:5432/test"
pg_restore:
stage: build
image: postgres:14.1
script:
- pg_restore --version
- pg_restore --no-privileges --no-owner --dbname=postgresql://postgres:postgres#0.0.0.0:5432/test db/test.dump
artifacts:
paths:
- ./db
test:
stage: test
dependencies:
- pg_restore
script:
- bundle exec rake db:migrate
- bundle exec rake test
In the case where you use services:, the database is not on localhost, so 0.0.0.0 is not the correct host to use here. Instead, it will be the hostname postgres. The DATABASE_URL value is what you want to use instead: postgresql://postgres:postgres@postgres:5432/test
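For example, the restore line in the pg_restore: job would become (a sketch; only the host part changes from the question's version):

    - pg_restore --no-privileges --no-owner --dbname=postgresql://postgres:postgres@postgres:5432/test db/test.dump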
You can also define an explicit hostname alias:
services:
  - name: "postgres:14.1"
    alias: mydatabasehostname
Additionally, services are ephemeral and do not carry state between jobs, so your test: job won't see any changes made to the database in any other job. You must set up the DB in every job.
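For example, instead of restoring in a separate build job, each job that needs the data could restore the dump itself. A sketch based on the config above (it assumes the job image provides pg_restore; otherwise install postgresql-client first):

test:
  stage: test
  before_script:
    # restore into the fresh service database before anything else runs
    - pg_restore --no-privileges --no-owner --dbname="$DATABASE_URL" db/test.dump
  script:
    - bundle exec rake db:migrate
    - bundle exec rake test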
Related
Hello, I get this error after I run docker-compose up --build:

postgres_1 | Error: Database is uninitialized and superuser password is not specified.

Down below is my docker-compose.yml file.
version: '3.6'
services:
  smart-brain-api:
    container_name: backend
    build: ./
    command: npm start
    working_dir: /usr/src/smart-brain-api
    ports:
      - "3000:3000"
    volumes:
      - ./:/usr/src/smart-brain-api

  # Postgres database
  postgres:
    image: postgres
    ports:
      - "5432:5432"
You can use the POSTGRES_HOST_AUTH_METHOD environment variable by making the following change to your docker-compose.yml:
db:
  image: postgres:9.6-alpine
  environment:
    POSTGRES_DB: "db"
    POSTGRES_HOST_AUTH_METHOD: "trust"
The above will solve the error.
To avoid that, you can specify the following environment variables for the postgres container in your docker-compose file.
POSTGRES_PASSWORD
This environment variable is normally required for you to use the PostgreSQL image. This environment variable sets the superuser password for PostgreSQL. The default superuser is defined by the POSTGRES_USER environment variable.
POSTGRES_DB
This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used.
For more information about Environment Variables check:
https://hub.docker.com/_/postgres
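For example, a minimal sketch (the database name and password are placeholders):

db:
  image: postgres
  environment:
    POSTGRES_PASSWORD: "example_password"  # required: sets the superuser password
    POSTGRES_DB: "example_db"              # optional: defaults to POSTGRES_USER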
How to run the container is already mentioned in the interactive-mode error message; if you don't find it, use the following:
To allow all connections without a password use:
docker run -e POSTGRES_HOST_AUTH_METHOD=trust postgres:9.6 (use the tag you need).
To specify postgres password for the superuser, use:
docker run -e POSTGRES_PASSWORD=<your_password> postgres:9.6 (use the tag you need).
You can make a change to your docker-compose.yml file like in this example:
db:
  image: postgres:13
  environment:
    - "POSTGRES_HOST_AUTH_METHOD=trust"
Summing up, the command from the official Docker site:
docker run --name <YOUR_POSTGRES_DB> -e POSTGRES_PASSWORD=<YOUR_POSTGRES_PASSWORD> -d postgres
You can make your connection using the docker command below.
docker run -e POSTGRES_PASSWORD=<your_password> postgres:9.6
I am working on a project which uses Postgres as a backend. Jenkins is used for CI. So far only unit tests were included as part of the daily build, but now we want to include integration tests as well. This requires an API to access a database. The database is Postgres; Liquibase is used as a publisher.
What I want to do via Jenkins:
1. Create database,
2. Publish it using liquibase
3. Run Integration tests
I created this docker-compose
version: '3.4'
services:
  xyz.api:
    image: xyz.api
    build:
      context: .
      dockerfile: Dockerfile.ci
    ports:
      - 3001:3001
    depends_on:
      - publisher
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "xyz"
      POSTGRES_DB: "xyz_db"
  publisher:
    image: publish
    build:
      context: .
      dockerfile: Dockerfile-publish.ci
    entrypoint: ""
    depends_on:
      - postgresql
    command: ["./wait-for-it.sh", "db:5433"]
  postgresql:
    image: postgres:10.2
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "xyz"
      POSTGRES_DB: "xyz_db"
    ports:
      - target: 5432
        published: 5433
        protocol: tcp
networks:
  xyzapi:
Dockerfile-publish.ci:
FROM sequenceiq/liquibase:latest
WORKDIR /workspace
# copy project and restore as distinct layers
RUN liquibase --driver=org.postgresql.Driver --classpath=/usr/local/bin/postgresql-9.3-1102.jdbc41.jar --changeLogFile=xyz_ChangeLog.xml --url=jdbc:postgresql://postgresql:5433/xyz_db --username=postgres --password=xyz update
docker-compose up throws
---> Running in 18c564dac2de
Unexpected error running Liquibase: org.postgresql.util.PSQLException: The connection attempt failed.
After trying a number of options, including docker-compose, I still could not get it working. I don't think I am on the right track here, and I am asking for a hint or direction on how to run integration tests in Jenkins for a project that depends on a database.
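For what it's worth, two things stand out in the setup above: the liquibase update runs in a RUN instruction, i.e. at image build time, when the postgresql service is not reachable from the build container; and the JDBC URL points at port 5433, which is the host-published port, while container-to-container traffic inside the compose network uses the target port 5432. A sketch of the publish Dockerfile that runs the update at container start instead (same flags as above; RUN becomes CMD and the port changes):

FROM sequenceiq/liquibase:latest
WORKDIR /workspace
# assumes the changelog sits next to the Dockerfile
COPY xyz_ChangeLog.xml .
# run the update when the container starts and the postgresql service is up,
# not at image build time
CMD liquibase --driver=org.postgresql.Driver \
    --classpath=/usr/local/bin/postgresql-9.3-1102.jdbc41.jar \
    --changeLogFile=xyz_ChangeLog.xml \
    --url=jdbc:postgresql://postgresql:5432/xyz_db \
    --username=postgres --password=xyz update

Combined with something like ./wait-for-it.sh postgresql:5432 beforehand, the database would actually be accepting connections before the update starts.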
I'm setting up a simple backend that performs CRUD actions against a postgres database, and I want to create the database and run migrations automatically when docker-compose up runs.
I have already tried to add the code below to the Dockerfile or entrypoint.sh, but none of it works.
createdb --host=localhost -p 5432 --username=postgres --no-password pg_development
createdb db:migrate
This code will work if run separately after docker is fully up.
I have already tried adding - ./db-init:/docker-entrypoint-initdb.d to the volumes, but that didn't work either.
This is the Dockerfile
FROM node:10.12.0
# Create app directory
RUN mkdir -p /restify-pg
WORKDIR /restify-pg
EXPOSE 1337
ENTRYPOINT [ "./entrypoint.sh" ]
This is my docker-compose.yml
version: '3'
services:
  db:
    image: "postgres:11.2"
    ports:
      - "5432:5432"
    volumes:
      - ./pgData:/var/lib/psotgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD:
      POSTGRES_DB: pg_development
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - .:/restify-pg
    environment:
      DB_HOST: db
entrypoint.sh (in here I get createdb: command not found)
#!/bin/bash
cd app
createdb --host=localhost -p 5432 --username=postgres --no-password pg_development
sequelize db:migrate
npm install
npm run dev
I expect that when I run docker-compose up, the database creation and the migration will happen.
entrypoint.sh (in here I get createdb: command not found)
Running createdb in the nodejs container will not work because it is a postgres-specific command and it's not installed by default in the nodejs image.
If you specify the POSTGRES_DB: pg_development env var on the postgres container, the database will be created automatically when the container starts, so there is no need to run createdb in the entrypoint.sh that is mounted in the nodejs container.
In order to make sequelize db:migrate work you need to:
add sequelize-cli to dependencies in package.json
run npm install so it gets installed
run npx sequelize db:migrate
Here is a proposal:
# docker-compose.yml
version: '3'
services:
  db:
    image: "postgres:11.2"
    ports:
      - "5432:5432"
    volumes:
      - ./pgData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD:
      POSTGRES_DB: pg_development
  app:
    working_dir: /restify-pg
    entrypoint: ["/bin/bash", "./entrypoint.sh"]
    image: node:10.12.0
    ports:
      - "3000:3000"
    volumes:
      - .:/restify-pg
    environment:
      DB_HOST: db
# package.json
{
  ...
  "dependencies": {
    ...
    "pg": "^7.9.0",
    "pg-hstore": "^2.3.2",
    "sequelize": "^5.2.9",
    "sequelize-cli": "^5.4.0"
  }
}
# entrypoint.sh
npm install
npx sequelize db:migrate
npm run dev
If you can run your migrations from nodejs instead of docker, then consider this solution instead.
I'm trying to create and restore a postgres backup using Docker, but Docker fails to do it and gives me the following error:
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
Dockerfile:
FROM postgres:11
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
ENV POSTGRES_DB dbName
COPY backup.backup /
COPY initdb.sh /docker-entrypoint-initdb.d
initdb.sh:
pg_restore --username=postgres --create --exit-on-error --verbose --dbname=dbName backup.backup
docker-compose.yml:
version: '2'
services:
  db:
    image: postgres:11
    expose:
      - "5432"
    ports:
      - "15432:5432"
    volumes:
      - dock-volume:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD= postgres
      - POSTGRES_DB= dbName
volumes:
  dock-volume:
I tried to add the environment variables to docker-compose.yml but it doesn't help.
You should not create a Dockerfile for Postgres, because you already have the definition in your docker-compose file. The variables that you define under environment are visible inside Postgres. If you want to have a backup and make sure it runs when the database initializes, you can do:
volumes:
  - ~/Downloads/data/my_buckup.psql:/docker-entrypoint-initdb.d/stage.sql
Then, when Postgres initializes, this script will be run. You can see the documentation here, under the section How to extend this image.
I hope it makes sense.
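If the backup is a custom-format dump (which the pg_restore call in the question implies) rather than a plain SQL file, the same mechanism works with a small shell script, since the entrypoint also executes *.sh files in that directory on first initialization. A sketch reusing the names from the question:

# docker-compose.yml, added under the db service
volumes:
  - ./backup.backup:/backup.backup
  - ./initdb.sh:/docker-entrypoint-initdb.d/initdb.sh

# initdb.sh
#!/bin/bash
set -e
# restore into the database created via POSTGRES_DB
pg_restore --username=postgres --dbname=dbName --no-owner --exit-on-error --verbose /backup.backup

Note that the ignoring /docker-entrypoint-initdb.d/* message appears when the data volume already contains a database: init scripts only run on a fresh volume, so remove dock-volume first (for example with docker-compose down -v).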
So I've set up a bitbucket-pipelines.yml for my python app. It needs a postgres database, so I've followed the tutorial here (https://confluence.atlassian.com/bitbucket/test-with-databases-in-bitbucket-pipelines-856697462.html), which has led me to the following config:
image: node
pipelines:
  default:
    - step:
        script:
          - npm install
          - npm test
        services:
          - postgres
definitions:
  services:
    postgres:
      image: postgres
      environment:
        POSTGRES_DB: 'pipelines'
        POSTGRES_USER: 'test_user'
        POSTGRES_PASSWORD: 'test_user_password'
I need some specific extensions in my db; how can I add these? I tried to add an extra step in the script that installs them, but at that point postgres doesn't seem to be up and running yet.
This worked for me, without the need to build my own image (I add 2 extensions to postgres):
image: node:8.11.1 # or any image you need
clone:
  depth: 1 # include the last commit

definitions:
  services:
    postgres:
      image: postgres
      environment:
        POSTGRES_DB: test
        POSTGRES_USER: postgres
        POSTGRES_PASSWORD: your_password

pipelines:
  default:
    - step:
        caches:
          - node
        script:
          - npm install
          - apt-get update
          - apt-get -y install postgresql-client
          - ./bin/utilities/wait-for-it.sh -h localhost -p 5432 -t 30
          - PGPASSWORD=your_password psql -h localhost -p 5432 -c "create extension if not exists \"uuid-ossp\"; create extension if not exists pg_trgm;" -U postgres test;
          - npm test test/drivers/* test/helpers/* test/models/*
        services:
          - postgres
wait-for-it.sh makes sure that postgres is ready to run; you can find it here: https://github.com/vishnubob/wait-for-it
Then I run psql to create the extensions in the test database. Note the PGPASSWORD variable I set before running it, and the -h and -p parameters used to connect to the running postgres instance (otherwise it will try to connect over Unix sockets, which doesn't seem to work here).
You need to create your own image based on postgres, then push it to a repository and use it in the pipeline:
definitions:
  services:
    postgres:
      image: your_custom_image_based_on_postgres
      environment:
        POSTGRES_DB: 'pipelines'
        POSTGRES_USER: 'test_user'
        POSTGRES_PASSWORD: 'test_user_password'
You can also find an image that suits your requirements on https://hub.docker.com/.
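A minimal sketch of such an image, using the two extensions from the answer above (scripts in /docker-entrypoint-initdb.d run automatically the first time the service initializes):

# Dockerfile
FROM postgres
COPY create-extensions.sql /docker-entrypoint-initdb.d/

# create-extensions.sql
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS pg_trgm;

Build it, push it to a registry you can reach from Bitbucket, and reference that tag as the service image.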