I am trying to get the Cogstack NiFi Docker implementation running on my Windows machine.
https://github.com/CogStack/CogStack-NiFi
When you run the Docker command:
docker-compose -f services.yml up -d samples-db
It is supposed to run this service definition from services.yml (https://github.com/CogStack/CogStack-NiFi/blob/master/deploy/services.yml):
services:
  #---------------------------------------------------------------------------#
  # Postgres container with sample data                                       #
  #---------------------------------------------------------------------------#
  samples-db:
    image: postgres:11.4-alpine
    container_name: cogstack-samples-db
    restart: always
    volumes:
      # mapping postgres data dump and initialization
      - ../services/pgsamples/db_dump/db_samples-pdf-text-small.sql.gz:/data/db_samples.sql.gz:ro
      - ../services/pgsamples/init_db.sh:/docker-entrypoint-initdb.d/init_db.sh:ro
      # data persistence
      - samples-vol:/var/lib/postgresql/data
    ports:
      # <host:container> expose the postgres DB to host for debugging purposes
      - 5555:5432
    #expose:
    #  - 5432
    networks:
      - cognet
And thus it should run init_db.sh (https://github.com/CogStack/CogStack-NiFi/blob/master/services/pgsamples/init_db.sh).
But unfortunately the data does not get populated.
When I try running init_db.sh manually, it gives an error:
/docker-entrypoint-initdb.d # ./init_db.sh
Creating database: db_samples and user: test
psql: FATAL: role "root" does not exist
What could be the reason?
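(A note on the error: psql connects as the current operating-system user when no username is given, and a shell opened inside the container runs as root, so psql looks for a nonexistent "root" role. A sketch of re-running the script as the postgres user instead, with the container name taken from services.yml:)
docker exec -u postgres cogstack-samples-db bash /docker-entrypoint-initdb.d/init_db.sh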
Related
I'm trying to run a bash script after a Postgres container starts which 1) creates a new table within the Postgres DB, and 2) runs a copy command that dumps the contents of a csv file into the newly created table.
Currently, I'm specifying the execution of the script within my docker-compose.yml file using the "command" argument, but I find that it doesn't allow the Postgres container to successfully start. I receive the following information from the log:
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
When I remove the "command" argument everything is fine. Here is what my docker-compose.yml file looks like now:
# docker-compose.yml
version: '3'

services:
  web:
    build: .
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn app.main:app --host 0.0.0.0'
    volumes:
      - .:/app
    expose:  # new
      - 8000
    environment:
      - DATABASE_URL=postgresql://fastapi_traefik:fastapi_traefik@db:5432/fastapi_traefik
    depends_on:
      - db
    labels:  # new
      - "traefik.enable=true"
      - "traefik.http.routers.fastapi.rule=Host(`fastapi.localhost`)"
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - "/Users/theComputerPerson/:/tmp"
    expose:
      - 5432
    environment:
      - POSTGRES_USER=fastapi_traefik
      - POSTGRES_PASSWORD=fastapi_traefik
      - POSTGRES_DB=fastapi_traefik
    command: /bin/bash -c "/tmp/newtable.sh"
  traefik:  # new
    image: traefik:v2.2
    ports:
      - 8008:80
      - 8081:8080
    volumes:
      - "./traefik.dev.toml:/etc/traefik/traefik.toml"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

volumes:
  postgres_data:
It may be worth noting that I'm trying to customize some aspects of this FastAPI project, so please turn your attention to the development files rather than the production ones. Let me know if I can provide any additional information in the comments.
You are overriding the default container image startup command.
According to the official PostgreSQL container image page, you can extend initialization by adding your .sh scripts (or even .sql files) to the /docker-entrypoint-initdb.d directory.
See https://hub.docker.com/_/postgres.
This approach has a caveat: scripts in that directory are only executed when the data directory is empty, i.e. on the container's first initialization.
Another approach is to override the default container image command, chaining yours on in shell style: postgres; /bin/bash -c "/tmp/newtable.sh";
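A minimal sketch of the first approach, assuming the table-creation script sits next to the compose file as ./newtable.sh: drop the command: override on the db service and mount the script into the init directory, so the entrypoint runs it itself once the server is up during first initialization:
db:
  image: postgres:13-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data/
    # executed automatically by the entrypoint on first init, after the server accepts connections
    - ./newtable.sh:/docker-entrypoint-initdb.d/newtable.sh:ro
  environment:
    - POSTGRES_USER=fastapi_traefik
    - POSTGRES_PASSWORD=fastapi_traefik
    - POSTGRES_DB=fastapi_traefik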
I've created a Docker container for a Postgres service, but when I start it and try to connect to the database I get errors as if the user and database were never defined for the Postgres instance. I've already tried changing the docker-compose file to find the problem, but found nothing.
The relevant files are attached below:
Dockerfile:
FROM wyveo/nginx-php-fpm:latest
RUN chmod -R 775 /usr/share/nginx/
RUN export pwd=$(pwd)
docker-compose.yml:
version: '3'
services:
  laravel-app_prm:
    build: .
    ports:
      - "8099:80"
    volumes:
      - ${pwd}/.docker/nginx/:/usr/share/nginx
  postgres_prm:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=db_usr
      - POSTGRES_PASSWORD=postgres_password
      - POSTGRES_DB=db_prm
    ports:
      - "5432:5440"
    volumes:
      - ${pwd}/.docker/dbdata:/var/lib/postgresql/data/
When I try to connect to the database directly through the container's bash, I get an error that the user and database, both entered exactly as defined in docker-compose.yml, do not exist.
sudo docker exec -it <postgres_container_id> bash
psql -h localhost -U db_usr
... and so on...
And to set up the connection in pgAdmin, I got the container IP using:
sudo docker container inspect <postgres_container_id>
and reading the value of the IPAddress attribute.
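(A one-line sketch of the same lookup, using docker inspect's Go-template format flag to extract just that attribute:)
sudo docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <postgres_container_id>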
I am trying to run a Bamboo server in a Docker container and connect it to a Postgres DB running in another container. First I run the Postgres container and create an empty database named bamboo, with user postgres and password postgres.
Then I run these commands from https://hub.docker.com/r/atlassian/bamboo to start the Bamboo server:
$> docker volume create --name bambooVolume
$> docker run -v bambooVolume:/var/atlassian/application-data/bamboo --name="bamboo" -d -p 8085:8085 -p 54663:54663 atlassian/bamboo
Then I open localhost:8085, generate a license, and reach the point where I see this error:
Error accessing database: org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
What is the problem?
SOLUTION:
It worked with this docker-compose YAML:
version: '2'
services:
  bamboo:
    image: atlassian/bamboo
    container_name: bamboo
    ports:
      - '54663:5436'
      - '8085:8085'
    networks:
      - bamboonet
    volumes:
      - bamboo-data:/var/atlassian/application-data/bamboo
    hostname: bamboo
    environment:
      CATALINA_OPTS: -Xms256m -Xmx1g
      BAMBOO_PROXY_NAME:
      BAMBOO_PROXY_PORT:
      BAMBOO_PROXY_SCHEME:
      BAMBOO_DELAYED_START:
    labels:
      com.blacklabelops.description: "Atlassian Bamboo"
      com.blacklabelops.service: "bamboo"
  db-bamboo:
    image: postgres
    container_name: postgres
    hostname: postgres
    networks:
      - bamboonet
    volumes:
      - bamboo-data-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: bamboo
      POSTGRES_DB: bamboo
      POSTGRES_ENCODING: UTF8
      POSTGRES_COLLATE: C
      POSTGRES_COLLATE_TYPE: C
      PGDATA: /var/lib/postgresql/data/pgdata
    labels:
      com.blacklabelops.description: "PostgreSQL Database Server"
      com.blacklabelops.service: "postgresql"

volumes:
  bamboo-data:
    external: false
  bamboo-data-db:
    external: false

networks:
  bamboonet:
    driver: bridge
If you don't configure a network for your containers, Docker uses the default bridge mode.
I think the problem is that you need to use {containerName}:5432 instead of localhost:5432 in your JDBC connection string: inside a container, localhost refers to that container itself rather than the real machine, so you can't reach the DB that way.
jdbc:postgresql://bamboo-pg-db-container:5432/bamboo
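For completeness, a sketch of the same fix without compose (the container name bamboo-pg-db-container is illustrative): put both containers on a user-defined network so Bamboo can resolve the database by name:
docker network create bamboonet
docker run -d --name bamboo-pg-db-container --network bamboonet \
  -e POSTGRES_USER=bamboo -e POSTGRES_PASSWORD=password -e POSTGRES_DB=bamboo \
  postgres
docker run -d --name bamboo --network bamboonet \
  -v bambooVolume:/var/atlassian/application-data/bamboo \
  -p 8085:8085 -p 54663:54663 \
  atlassian/bamboo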
I'm using a postgres service through docker compose with multiple databases in the same container (one for Keycloak and a test DB for non-identity-management application data). As per the postgres image docs, I've mounted a volume with a shell script to create multiple databases. This part is working fine, and once the containers are up and running I can access the databases.
Inside the volume there's also a .sql file with queries that create tables and insert rows into them. While the docker logs show that the tables are created and the insertions performed, I can't see the tables in either of the created databases. How would I go about ensuring this script runs on the testdb?
Docker compose file:
volumes:
  postgres_data:
    driver: local

services:
  test-webportal-db:
    env_file:
      - ./.env
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./db-init:/docker-entrypoint-initdb.d
    environment:
      - POSTGRES_DB=${TEST_DB}
      - POSTGRES_MULTIPLE_DATABASES=${POSTGRES_MULTIPLE_DATABASES}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    ports:
      - 5432:5432
  test-webportal-db-adminer:
    image: adminer:latest
    restart: always
    depends_on:
      - test-webportal-db
    ports:
      - 8090:8080
  test-webportal-identity:
    env_file:
      - ./.env
    image: quay.io/keycloak/keycloak:latest
    environment:
      - DB_VENDOR=${DB_VENDOR}
      - DB_ADDR=${DB_ADDR}
      - DB_DATABASE=${DB_DATABASE}
      - DB_USER=${DB_USER}
      - DB_SCHEMA=${DB_SCHEMA}
      - DB_PASSWORD=${DB_PASSWORD}
      - KEYCLOAK_USER=${KEYCLOAK_USER}
      - KEYCLOAK_PASSWORD=${KEYCLOAK_PASSWORD}
    ports:
      - 8080:8080
    depends_on:
      - test-webportal-db
Shell script:
#!/bin/bash
set -e
set -u

# function to create user and database with POSTGRES_PASSWORD
function create_user_and_database() {
    local database=$1
    echo "  Creating user and database '$database'"
    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
        CREATE USER $database WITH PASSWORD '$POSTGRES_PASSWORD';
        CREATE DATABASE $database;
        GRANT ALL PRIVILEGES ON DATABASE $database TO $database;
EOSQL
}

# create a database for each one listed in the POSTGRES_MULTIPLE_DATABASES env variable
if [ -n "$POSTGRES_MULTIPLE_DATABASES" ]; then
    echo "Multiple database creation requested: $POSTGRES_MULTIPLE_DATABASES"
    for db in $(echo $POSTGRES_MULTIPLE_DATABASES | tr ',' ' '); do
        create_user_and_database $db
    done
    echo "Multiple databases created"
fi
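(For reference, the script expects POSTGRES_MULTIPLE_DATABASES as a comma-separated list in the .env file; the exact database names below are illustrative:)
POSTGRES_MULTIPLE_DATABASES=keycloak,testdb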
I managed a workaround by removing the .sql file and placing the queries into the shell script as follows:
psql -U $TEST_DB $TEST_DB << EOF
<queries here>
EOF
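An alternative sketch that keeps the .sql file, relying on the fact that plain *.sql files in /docker-entrypoint-initdb.d are only executed against $POSTGRES_DB: mount the file somewhere else (the ./db-scripts path and tables.sql name are illustrative) and run it against the test database from the init shell script:
# extra bind mount on the test-webportal-db service:
#   - ./db-scripts:/db-scripts
# appended to the init shell script:
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$TEST_DB" -f /db-scripts/tables.sql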
I have a docker-compose.yaml that spins up an express server and postgres database in separate containers. The postgres container maps a volume to a db folder, which contains database seeding scripts. Locally, docker-compose runs fine and I can execute integration tests against the networked containers. However, the same scripts fail when running on Jenkins, with the following error:
ERROR: for db Cannot start service db: error while creating mount
source path
'/var/jenkins_home/workspace/project_name/db':
mkdir /var/jenkins_home: read-only file system
docker-compose.yaml
...
  db:
    image: postgres:10.5
    restart: always
    networks:
      - cloud
    environment:
      POSTGRES_USER: someuser
      POSTGRES_PASSWORD: somepassword
      POSTGRES_DB: somedb
    ports:
      - 5432:5432
    volumes:
      - ./db:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U someuser -d somedb"]
      interval: 10s
      timeout: 5s
      retries: 5
I have read in several places that volume sources require absolute paths (not relative ones), but I tried hardcoding /var/jenkins_home/workspace/project_name/db on the left side of the volume config, to no avail.
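(A hedged diagnostic, assuming Jenkins itself runs in a container against the host's Docker daemon: bind-mount source paths are resolved on the daemon's host, not inside the Jenkins container, so a source path that exists only in the Jenkins container cannot be created there. Listing the Jenkins container's mounts shows which host path actually backs /var/jenkins_home; the container name jenkins is illustrative:)
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' jenkins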