I'm trying to create a table for my application to use when I start up the container. The application is a simple web scraper that goes to a site, scrapes data, puts the data into a PostgreSQL database, and then displays the scraped data on a site.
I understand that the init script only needs to run once, on initial creation. So I open Docker and, with no containers running, I run docker-compose up, and then I get the error below:
app | 2022-10-08 19:48:21 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method CoreStats.spider_closed of <scrapy.extensions.corestats.CoreStats object at 0x7fc61cc3b2b0>>
app | Traceback (most recent call last):
app | File "/usr/local/lib/python3.9/site-packages/scrapy/crawler.py", line 103, in crawl
app | yield self.engine.open_spider(self.spider, start_requests)
app | psycopg2.OperationalError: could not connect to server: Connection refused
app | Is the server running on host "postgres" (172.18.0.2) and accepting
app | TCP/IP connections on port 5432?
So then I run docker-compose down -v, followed by docker-compose up --build, and the table is still not being created. However, if I go into the db container, create the database by hand, and re-run everything, it works, so I know it's just something in my setup. Is something wrong with my YAML file or my init SQL file (both posted below)?
init.sql file
\c docker;
DROP TABLE quotes;
CREATE TABLE IF NOT EXISTS quotes (
id serial not null,
author text not null,
quote text not null
);
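For comparison, here is a sketch of an init script that the stock entrypoint should be able to run to completion, assuming the table is meant to live in the default postgres database created by POSTGRES_DB in the compose file below. The entrypoint runs each script with psql and ON_ERROR_STOP set, so the failing \c docker (no database named docker exists here) would abort initialization, and the unconditional DROP TABLE would do the same on a fresh cluster:
-- the entrypoint already connects to the database named by POSTGRES_DB,
-- so no \c is needed; IF EXISTS keeps the drop from failing on a new cluster
DROP TABLE IF EXISTS quotes;
CREATE TABLE IF NOT EXISTS quotes (
    id serial not null,
    author text not null,
    quote text not null
);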
docker-compose yaml file
services:
  postgres:
    image: postgres
    container_name: postgres
    ports:
      - "5432:5432"
    restart: unless-stopped
    hostname: postgres
    volumes:
      - ./app/db/init.sql:/docker-entrypoint-initdb.d/init.sql
      # - ./postgres/data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
  # adminer:
  #   image: adminer
  #   restart: always
  #   depends_on:
  #     - postgres
  #   # ports:
  #   #   - "8080:8080"
  #   networks:
  #     - docker
  app:
    container_name: app
    image: app
    depends_on:
      - postgres
    ports:
      - "8080:8080"
    build: ./app
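As a quick sanity check (a sketch using the service name and credentials from this compose file), you can wipe the volumes so the entrypoint re-runs the init scripts and then look for the table:
docker-compose down -v          # remove containers and anonymous volumes so Postgres re-initializes
docker-compose up --build -d
docker-compose exec postgres psql -U postgres -d postgres -c '\dt'   # "quotes" should be listed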
Related
I have a docker-compose file that brings up the PostgreSQL database as below. Currently I'm trying to connect to it with pgAdmin 4 (not in a Docker container) and view it. I've been having trouble authenticating with the DB and I don't understand why.
docker-compose
version: "3"
services:
# nginx and server also have an override, but not important for this q.
nginx:
ports:
- 1234:80
- 1235:443
server:
build: ./server
ports:
- 3001:3001 # app server port
- 9230:9230 # debugging port
env_file: .env
command: yarn dev
volumes:
# Mirror local code but not node_modules
- /server/node_modules/
- ./server:/server
database:
container_name: column-db
image: 'postgres:latest'
restart: always
ports:
- 5432:5432
environment:
POSTGRES_USER: postgres # The PostgreSQL user (useful to connect to the database)
POSTGRES_PASSWORD: root # The PostgreSQL password (useful to connect to the database)
POSTGRES_DB: postgres # The PostgreSQL default database (automatically created at first launch)
volumes:
- ./db-data/:/var/lib/postgresql/data/
networks:
app-network:
driver: bridge
I do docker-compose up, then check the logs, and it says that it is ready for connections. I go to pgAdmin and enter my connection settings, where the password is root. I then get this error:
FATAL: password authentication failed for user "postgres"
I check the docker logs and I see
DETAIL: Role "postgres" does not exist.
I'm not sure what I'm doing wrong; according to the docs, the superuser should be created with those settings. Am I missing something? I've been banging my head against this for an hour now. Any help is appreciated!
@jjanes solved it in a comment: I had used a mapped volume and never properly set up the DB. I removed the volume and we're good to go.
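To make that concrete (a sketch assuming the ./db-data path and credentials from the compose file above): the POSTGRES_* variables only take effect when the data directory is empty, so a data directory left over from an earlier run keeps the old cluster and its roles:
docker-compose down
rm -rf ./db-data              # discard the stale cluster so Postgres re-initializes
docker-compose up -d database
# now pgAdmin can log in on localhost:5432 as postgres / root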
I am trying to run a Bamboo server in a Docker container and connect it to a Postgres DB that is running in another container. First I run the Postgres DB and create an empty database named bamboo with a user postgres and password postgres.
Then I run these commands to start the Bamboo server, from https://hub.docker.com/r/atlassian/bamboo:
$> docker volume create --name bambooVolume
$> docker run -v bambooVolume:/var/atlassian/application-data/bamboo --name="bamboo" -d -p 8085:8085 -p 54663:54663 atlassian/bamboo
Then I open localhost:8085, generate a license, and reach the point where I see this error:
Error accessing database: org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
What is the problem?
SOLUTION:
It worked with this docker-compose YAML:
version: '2'
services:
  bamboo:
    image: atlassian/bamboo
    container_name: bamboo
    ports:
      - '54663:5436'
      - '8085:8085'
    networks:
      - bamboonet
    volumes:
      - bamboo-data:/var/atlassian/application-data/bamboo
    hostname: bamboo
    environment:
      CATALINA_OPTS: -Xms256m -Xmx1g
      BAMBOO_PROXY_NAME:
      BAMBOO_PROXY_PORT:
      BAMBOO_PROXY_SCHEME:
      BAMBOO_DELAYED_START:
    labels:
      com.blacklabelops.description: "Atlassian Bamboo"
      com.blacklabelops.service: "bamboo"
  db-bamboo:
    image: postgres
    container_name: postgres
    hostname: postgres
    networks:
      - bamboonet
    volumes:
      - bamboo-data-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: bamboo
      POSTGRES_DB: bamboo
      POSTGRES_ENCODING: UTF8
      POSTGRES_COLLATE: C
      POSTGRES_COLLATE_TYPE: C
      PGDATA: /var/lib/postgresql/data/pgdata
    labels:
      com.blacklabelops.description: "PostgreSQL Database Server"
      com.blacklabelops.service: "postgresql"
volumes:
  bamboo-data:
    external: false
  bamboo-data-db:
    external: false
networks:
  bamboonet:
    driver: bridge
If you don't define a network for your containers, Docker uses the default bridge mode.
I think the problem is that you should use {containerName}:5432 instead of localhost:5432 in your JDBC connection string, because localhost refers to the web application's own container rather than the real machine, so you can't reach the DB that way.
jdbc:postgresql://bamboo-pg-db-container:5432/bamboo
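Applied to the compose file above, that would be something like the following (an assumption on my part: db-bamboo is the service name, which Compose makes resolvable on the bamboonet network, and bamboo is the database it creates):
jdbc:postgresql://db-bamboo:5432/bamboo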
I have a Java Spring Boot app which works with a Postgres database. I want to use Docker for both of them. I initially put just the Postgres in Docker, and I had a docker-compose.yml file defined like this:
version: '2'
services:
  db:
    container_name: sample_db
    image: postgres:9.5
    volumes:
      - sample_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=sample
      - POSTGRES_USER=sample
      - POSTGRES_DB=sample
      - PGDATA=/var/lib/postgresql/data/pgdata
    ports:
      - 5432:5432
volumes:
  sample_db: {}
Then, when I issued the commands sudo dockerd and sudo docker-compose -f docker-compose.yml up, the database started. I could connect using pgAdmin, for example, by using localhost as the server and port 5432. Then, in my Spring Boot app, inside the application.properties file, I defined the following properties:
spring.datasource.url=jdbc:postgresql://localhost:5432/sample
spring.datasource.username=sample
spring.datasource.password=sample
spring.jpa.generate-ddl=true
At this point I could run my Spring Boot app locally through Spring Suite, and it all worked fine. Then I wanted to also add my Spring Boot app as a Docker image. I first created a Dockerfile in my project directory, which looks like this:
FROM java:8
EXPOSE 8080
ADD /target/manager.jar manager.jar
ENTRYPOINT ["java","-jar","manager.jar"]
Then I entered the project directory and issued mvn clean followed by mvn install. Next, I issued docker build -f Dockerfile -t manager . followed by docker tag 9c6b1e3f1d5e myuser/manager:latest (the ID is correct). Finally, I edited my existing docker-compose.yml file to look like this:
version: '2'
services:
  web:
    image: myuser/manager:latest
    ports:
      - 8080:8080
    depends_on:
      - db
  db:
    container_name: sample_db
    image: postgres:9.5
    volumes:
      - sample_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=sample
      - POSTGRES_USER=sample
      - POSTGRES_DB=sample
      - PGDATA=/var/lib/postgresql/data/pgdata
    ports:
      - 5432:5432
volumes:
  sample_db: {}
But now, if I issue the sudo docker-compose -f docker-compose.yml up command, the database again starts correctly, but I get errors and exit code 1 for the web app part. The problem is the connection string. I believe I have to change it to something else, but I don't know what it should be. I get the following error messages:
web_1 | 2017-06-27 22:11:54.418 ERROR 1 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
web_1 |
web_1 | org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections
Any ideas?
Each container has its own network interface with its own localhost. So change how Java points to Postgres:
spring.datasource.url=jdbc:postgresql://localhost:5432/sample
To:
spring.datasource.url=jdbc:postgresql://db:5432/sample
db will resolve to the proper Postgres IP.
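A quick way to confirm the name resolution from inside the app container (a sketch; web and db are the service names from the compose file above, and it assumes getent is available in the image):
docker-compose exec web getent hosts db    # prints the IP Compose assigned to the db service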
Bonus. With docker-compose you don't need to build your image by hand. So change:
web:
  image: myuser/manager:latest
To:
web:
  build: .
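Put together, the web service could look like this (a sketch, assuming the Dockerfile sits next to docker-compose.yml); docker-compose up --build then rebuilds the image for you:
web:
  build: .
  ports:
    - 8080:8080
  depends_on:
    - db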
I had the same problem, and I lost some time understanding and solving it:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
I'll show all the properties so that everyone can follow.
application.properties:
spring.datasource.url=jdbc:postgresql://localhost:5432/testdb
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.username=postgres
spring.datasource.password=postgres
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQL82Dialect
spring.jpa.hibernate.ddl-auto=update
docker-compose.yml:
version: "3"
services:
springapp:
build: .
container_name: springapp
environment:
SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/testdb
ports:
- 8000:8080
restart: always
depends_on:
- db
db:
image: postgres
container_name: db
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=testdb
- PGDATA=/var/lib/postgresql/data/pgdata
ports:
- 5000:5432
volumes:
- pgdata:/var/lib/postgresql/data
restart: always
volumes:
pgdata:
To start the Spring application with a local database, we use localhost in the URL.
To connect to the database container, we need to change 'localhost' to the name of the database service; in my case, 'localhost' becomes 'db'.
Solution: add a SPRING_DATASOURCE_URL environment variable in docker-compose.yml, which overrides the spring.datasource.url value used for the connection:
environment:
SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/testdb
I hope this helps someone save some time.
You can use this.
version: "2"
services:
sample_db-postgresql:
image: postgres:9.5
ports:
- 5432:5432
environment:
- POSTGRES_PASSWORD=sample
- POSTGRES_USER=sample
- POSTGRES_DB=sample
volumes:
- sample_db:/var/lib/postgresql/data
volumes:
sample_db:
You can use an ENV variable to change the DB address from your docker-compose file.
Dockerfile:
FROM java:8
EXPOSE 8080
ENV POSTGRES localhost
ADD /target/manager.jar manager.jar
ENTRYPOINT exec java $JAVA_OPTS -jar manager.jar --spring.datasource.url=jdbc:postgresql://$POSTGRES:5432/sample
docker-compose:
container_name: springapp
environment:
  - POSTGRES=db
I am trying to get the Cogstack NiFi Docker implementation running on my Windows machine.
https://github.com/CogStack/CogStack-NiFi
When you run the Docker command:
docker-compose -f services.yml up -d samples-db
it is supposed to bring up this service from services.yml (https://github.com/CogStack/CogStack-NiFi/blob/master/deploy/services.yml):
services:
  #---------------------------------------------------------------------------#
  # Postgres container with sample data                                        #
  #---------------------------------------------------------------------------#
  samples-db:
    image: postgres:11.4-alpine
    container_name: cogstack-samples-db
    restart: always
    volumes:
      # mapping postgres data dump and initialization
      - ../services/pgsamples/db_dump/db_samples-pdf-text-small.sql.gz:/data/db_samples.sql.gz:ro
      - ../services/pgsamples/init_db.sh:/docker-entrypoint-initdb.d/init_db.sh:ro
      # data persistence
      - samples-vol:/var/lib/postgresql/data
    ports:
      # <host:container> expose the postgres DB to host for debugging purposes
      - 5555:5432
    #expose:
    #  - 5432
    networks:
      - cognet
and thus run init_db.sh (https://github.com/CogStack/CogStack-NiFi/blob/master/services/pgsamples/init_db.sh).
But unfortunately the data does not get populated.
When I try manually running init_db.sh, it gives an error:
/docker-entrypoint-initdb.d # ./init_db.sh
Creating database: db_samples and user: test
psql: FATAL: role "root" does not exist
What could be the reason?
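One detail that would explain the manual run failing (an assumption, since only the error is shown): a plain docker exec shell runs as root, and psql defaults the database role to the current OS user, so it tries the non-existent root role. Running the script as the postgres user sidesteps that:
# container name and script path taken from the compose file above
docker exec -u postgres -it cogstack-samples-db sh /docker-entrypoint-initdb.d/init_db.sh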
I have a Django app that I am attempting to host in Docker. I have been unsuccessful in launching my Postgres server before standing up the Django app. Here is my docker-compose.yaml:
version: '3'
services:
  flyway:
    image: boxfuse/flyway
    command: -url=jdbc:postgresql://db/dbname -schemas=schemaName -user=user -password=pwd migrate
    volumes:
      - ./flyway:/flyway/sql
    depends_on:
      - db
  db:
    image: postgres:9.6
    restart: always
    ports:
      - 5432:5432
    environment:
      - POSTGRES_PASSWORD=pwd
    healthcheck:
      test: "pg_isready -q -U postgres"
  app:
    image: myimage
    ports:
      - 8000:8000
Services db and app both seem to stand up fine, but I am unable to get Flyway to run against the default Postgres setup. Here are the errors I'm getting:
flyway_1 | SEVERE: Connection error:
flyway_1 | org.postgresql.util.PSQLException: Connection to db:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
ERROR:
flyway_1 | Unable to obtain connection from database (jdbc:postgresql://db/dbname) for user 'user': Connection to db:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
I couldn't find a good example of how to use Flyway with Postgres. How do I go about getting this to work? TIA
Version '3' of the docker-compose file format doesn't support the condition parameter in the depends_on block, but version '2.1' does. So you can create a compose file like the following, which uses a healthcheck on the postgres service, for example:
version: '2.1'
services:
  my-app:
    # ...
    # ...
    depends_on:
      - flyway
  flyway:
    image: boxfuse/flyway:5-alpine
    command: -url=jdbc:postgresql://postgres:5432/mydb -schemas=public -user=postgres -password=postgres migrate
    volumes:
      - ./migration:/flyway/sql
    depends_on:
      postgres:
        condition: service_healthy
  postgres:
    domainname: postgres
    build: ./migration
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-U", "postgres"]
      interval: 5s
      timeout: 1s
      retries: 2
The depends_on on the flyway service does not actually check that the database inside the db container is up and running; it only checks that the container has started. This is quite different: the container can be up while the database inside it is still starting and not yet accepting connections.
For such a case, you should specify a health check to make sure your database is accepting connections. You can even find an example of how to do this with PostgreSQL in the official docker-compose docs.
Please use -connectRetries to make Flyway wait for Postgres; for example, to wait for up to 60 seconds: -connectRetries=60
More details here:
https://github.com/flyway/flyway-docker
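Applied to the compose file from the question, that just means appending the flag to the Flyway command (a sketch; the URL, schema, and credentials are the placeholders used there):
flyway:
  image: boxfuse/flyway
  # retry for up to 60 seconds while the db service finishes starting
  command: -url=jdbc:postgresql://db/dbname -schemas=schemaName -user=user -password=pwd -connectRetries=60 migrate
  volumes:
    - ./flyway:/flyway/sql
  depends_on:
    - db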