I have the following docker-compose file:
services:
  pgdatabase:
    image: postgres:13
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=ny_taxi
    volumes:
      - "./data:/var/lib/postgresql/data:rw"
    ports:
      - "5432:5432"
  pgadmin:
    image: dpage/pgadmin4
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@admin.com
      - PGADMIN_DEFAULT_PASSWORD=root
    volumes:
      - "./data_pgadmin:/var/lib/pgadmin"
    ports:
      - "8080:80"
I'm trying to connect to postgres using pgadmin but I'm getting the following error:
Unable to connect to server: could not translate host name "pgdatabase" to address: Name does not resolve
Running docker network ls I get:
NAME DRIVER SCOPE
bridge bridge local
docker-sql-pg_default bridge local
host host local
none null local
Then running docker network inspect docker-sql-pg_default I get
[
{
"Name": "docker-sql-pg_default",
"Id": "bfee2f08620b5ffc1f8e10d8bed65c4d03a98a470deb8b987c4e52a9de27c3db",
"Created": "2023-01-24T17:57:27.831702189Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.24.0.0/16",
"Gateway": "172.24.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"8f53be84a95c9c0591df6cc6edb72d4ca070243c3c067ab2fb14c2094b23bcee": {
"Name": "docker-sql-pg-pgdatabase-1",
"EndpointID": "7f3ddb29b000bc4cfda9c54a4f13e0aa30f1e3f8e5cc1a8ba91cee840c16cd60",
"MacAddress": "02:42:ac:18:00:02",
"IPv4Address": "172.24.0.2/16",
"IPv6Address": ""
},
"bf2eb29b73fe9e49f4bef668a1f70ac2c7e9196b13350f42c28337a47fcd71f4": {
"Name": "docker-sql-pg-pgadmin-1",
"EndpointID": "b3a9504d75e11aa0f08f6a2b5c9c2f660438e23f0d1dd5d7cf4023a5316961d2",
"MacAddress": "02:42:ac:18:00:03",
"IPv4Address": "172.24.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "docker-sql-pg",
"com.docker.compose.version": "2.13.0"
}
}
]
I tried to connect to the gateway IP 172.24.0.1 and to the postgres container's IP 172.24.0.2, but I got a timeout error. Why isn't my network working?
Basically, I solved my problem with the following steps (the commands are sketched below):
First, I accessed the pgadmin container and used ping to verify that pgadmin could reach postgres.
Since pgadmin could reach postgres, I ran sudo netstat -tulpn | grep LISTEN on my host machine to check which ports were in use. Surprisingly, I had two instances of pgadmin listening on 8080 (one of them stale).
I used docker-compose down to stop the services and docker system prune to clean up the leftover containers and images.
I checked the ports again and one pgadmin instance was still listening on 8080.
I used pidof to find the PID of the stale pgadmin process.
Then I used kill -9 to kill the process.
Finally, I ran docker-compose up -d again and pgadmin could connect to postgres through the pgadmin web interface.
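In terms of commands, the steps above looked roughly like this. This is only a sketch: the container name comes from the docker network inspect output earlier, ping is assumed to be available in the pgadmin image, the process name passed to pidof is a placeholder for whatever netstat showed holding port 8080, and <PID> stands for the process id it returns.

# 1. check DNS/connectivity from inside the pgadmin container
docker exec -it docker-sql-pg-pgadmin-1 ping -c 3 pgdatabase

# 2. on the host, see what is listening (this showed two listeners on 8080)
sudo netstat -tulpn | grep LISTEN

# 3. tear the stack down and prune leftovers
docker-compose down
docker system prune

# 4. if something is still bound to 8080, find and kill it
#    ("pgadmin" is a placeholder process name; <PID> is the id pidof reports)
pidof pgadmin
sudo kill -9 <PID>

# 5. bring the stack back up
docker-compose up -d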
I'm testing a replica set with Docker Compose on my local machine and am having trouble getting Compass to connect to it. Everything deploys fine and I can see the exposed ports listening on the host machine, but I just keep getting ECONNREFUSED.
Here is the Docker Compose file.
services:
  mongo1:
    hostname: mongo1
    container_name: mongo1
    image: mongo:latest
    networks:
      - mongo-network
    expose:
      - 27017
    ports:
      - 30001:27017
    volumes:
      - md1:/data/db
    restart: always
    command: mongod --replSet mongo-net
  mongo2:
    hostname: mongo2
    container_name: mongo2
    image: mongo:latest
    networks:
      - mongo-network
    expose:
      - 27017
    ports:
      - 30002:27017
    volumes:
      - md2:/data/db
    restart: always
    command: mongod --replSet mongo-net
  mongo3:
    hostname: mongo3
    container_name: mongo3
    image: mongo:latest
    networks:
      - mongo-network
    expose:
      - 27017
    ports:
      - 30003:27017
    volumes:
      - md3:/data/db
    restart: always
    command: mongod --replSet mongo-net
  mongoinit:
    image: mongo
    restart: "no"
    depends_on:
      - mongo1
      - mongo2
      - mongo3
    command: >
      sh -c "mongosh --host mongo1:27017 --eval
      '
      db = (new Mongo("localhost:27017")).getDB("test");
      config = {
        "_id" : "mongo-net",
        "members" : [
          {
            "_id" : 0,
            "host" : "localhost:27017",
            "priority": 1
          },
          {
            "_id" : 1,
            "host" : "localhost:27017",
            "priority": 2
          },
          {
            "_id" : 2,
            "host" : "localhost:27017",
            "priority": 3
          }
        ]
      };
      rs.initiate(config);
      '"
networks:
  mongo-network:
volumes:
  md1:
  md2:
  md3:
The containers and replica set deploy fine and are communicating. I can see the defined ports exposed and listening.
My issue is connecting Compass to the replica set: I get ECONNREFUSED.
What's odd is that I can actually see the client connect in the primary's logs, but I get no other information about why the connection was refused or dropped.
{
"attr": {
"connectionCount": 7,
"connectionId": 129,
"remote": "192.168.32.1:61508",
"uuid": "f78c3803-fc7c-4655-9b77-94329bd41a7d"
},
"c": "NETWORK",
"ctx": "listener",
"id": 22943,
"msg": "Connection accepted",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.032+00:00"
}
}
{
"attr": {
"client": "conn129",
"doc": {
"application": {
"name": "MongoDB Compass"
},
"driver": {
"name": "nodejs",
"version": "4.8.1"
},
"os": {
"architecture": "arm64",
"name": "darwin",
"type": "Darwin",
"version": "22.1.0"
},
"platform": "Node.js v16.5.0, LE (unified)|Node.js v16.5.0, LE (unified)"
},
"remote": "192.168.32.1:61508"
},
"c": "NETWORK",
"ctx": "conn129",
"id": 51800,
"msg": "client metadata",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.034+00:00"
}
}
{
"attr": {
"connectionCount": 6,
"connectionId": 129,
"remote": "192.168.32.1:61508",
"uuid": "f78c3803-fc7c-4655-9b77-94329bd41a7d"
},
"c": "NETWORK",
"ctx": "conn129",
"id": 22944,
"msg": "Connection ended",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.044+00:00"
}
}
I have a MongoDB container which I named e-learning,
and I have a Docker image which should connect to the MongoDB container to update my database, but it's not working. I get this error:
Unknown, Last error: connection() error occurred during connection handshake: dial tcp 127.0.0.1:27017: connect: connection refused }
Here's my Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.18
WORKDIR /go/src/github.com/worker
COPY go.mod go.sum main.go ./
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM jrottenberg/ffmpeg:4-alpine
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
ENV LD_LIBRARY_PATH=/usr/local/lib
COPY --from=jrottenberg/ffmpeg / /
COPY app.env /root
COPY --from=0 /go/src/github.com/worker/app .
CMD ["./app"]
My docker-compose file:
version: "3.9"
services:
  worker:
    image: worker
    environment:
      - MONGO_URI="mongodb://localhost:27017/"
      - MONGO_DATABASE=e-learning
      - RABBITMQ_URI=amqp://user:password@rabbitmq:5672/
      - RABBITMQ_QUEUE=upload
    networks:
      - app_network
    external_links:
      - e-learning
      - rabbitmq
    volumes:
      - worker:/go/src/github.com/worker:rw
networks:
  app_network:
    external: true
volumes:
  worker:
My docker network inspect output:
[
{
"Name": "app_network",
"Id": "f688edf02a194fd3b8a2a66076f834a23fa26cead20e163cde71ef32fc1ab598",
"Created": "2022-06-27T12:18:00.283531947+03:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2907482267e1f6e42544e5e8d852c0aac109ec523c6461e003572963e299e9b0": {
"Name": "rabbitmq",
"EndpointID": "4b46e091e4d5a79782185dce12cb2b3d79131b92d2179ea294a639fe82a1e79a",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/16",
"IPv6Address": ""
},
"8afd004a981715b8088af53658812215357e156ede03905fe8fdbe4170e8b13f": {
"Name": "e-learning",
"EndpointID": "1c12d592a0ef6866d92e9989f2e5bc3d143602fc1e7ad3d980efffcb87b7e076",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
},
"ad026f7e10c9c1c41071929239363031ff72ad1b9c6765ef5c977da76f24ea31": {
"Name": "video-transformation-worker-1",
"EndpointID": "ce3547014a6856725b6e815181a2c3383d307ae7cf7132e125c58423f335b73f",
"MacAddress": "02:42:ac:14:00:04",
"IPv4Address": "172.20.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Change MONGO_URI="mongodb://localhost:27017/" to MONGO_URI="mongodb://e-learning:27017/" (working on the assumption that e-learning is the mongo container).
Within a container attached to a bridge network (the default), localhost (127.0.0.1) is the container itself. So your app container is trying to access the database at port 27017 on itself (not on the host or on the db container). The easiest solution is to use the automatic DNS resolution between containers that Docker provides.
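A minimal sketch of the relevant part of the compose file with that change (assuming the worker stays attached to the same external app_network that the e-learning container is on, as shown in the network inspect output):

services:
  worker:
    image: worker
    environment:
      # Docker's embedded DNS resolves the container name on the shared network;
      # the value is written without surrounding quotes, which in this list form
      # would otherwise become part of the variable's value
      - MONGO_URI=mongodb://e-learning:27017/
      - MONGO_DATABASE=e-learning
    networks:
      - app_network
networks:
  app_network:
    external: true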
I added extra_hosts and changed my Mongo URI to host.docker.internal, and it solved my problem:
version: "3.9"
services:
  worker:
    image: worker
    environment:
      - MONGO_URI="mongodb://host.docker.internal:27017/"
      - MONGO_DATABASE=e-learning
      - RABBITMQ_URI=amqp://user:password@rabbitmq:5672/
      - RABBITMQ_QUEUE=upload
    networks:
      - app_network
    external_links:
      - e-learning
      - rabbitmq
    volumes:
      - worker:/go/src/github.com/worker:rw
    extra_hosts:
      - "host.docker.internal:host-gateway"
networks:
  app_network:
    external: true
volumes:
  worker:
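A quick way to confirm that the host-gateway mapping was applied is to look at /etc/hosts inside the container; a sketch, assuming the service name worker from the file above:

docker compose exec worker cat /etc/hosts
# expect a line mapping host.docker.internal to the host's gateway IP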
#Docker compose file
version: "3.4" # optional since v1.27.0
services:
  flaskblog:
    build:
      context: flaskblog
      dockerfile: Dockerfile
    image: flaskblog
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      APP_PORT: 5000
      MONGODB_DATABASE: flaskdb
      MONGODB_USERNAME: flaskuser
      MONGODB_PASSWORD:
      MONGODB_HOSTNAME: mongodbuser
    volumes:
      - appdata:/var/www
    depends_on:
      - mongodb
    networks:
      - frontend
      - backend
  mongodb:
    image: mongo:4.2
    #container_name: mongodb
    restart: unless-stopped
    command: mongod --auth
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD:
      MONGO_INITDB_DATABASE: flaskdb
      MONGODB_DATA_DIR: /data/db2
      MONDODB_LOG_DIR: /dev/null
    volumes:
      - mongodbdata:/data/db2
      #- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    networks:
      - backend
networks:
  backend:
    driver: bridge
volumes:
  mongodbdata:
    driver: local
  appdata:
    driver: local
#Flask container file
FROM python:3.8-slim-buster
#python:3.6-stretch
LABEL MAINTAINER="Shekhar Banerjee"
ENV GROUP_ID=1000 \
USER_ID=1000 \
SECRET_KEY="4" \
EMAIL_USER="dankml.com" \
EMAIL_PASS="nzw"
RUN mkdir /home/logix3
WORKDIR /home/logix3
ADD . /home/logix3/
RUN pip install -r requirements.txt
RUN pip install gunicorn
RUN groupadd --gid $GROUP_ID www
RUN useradd --create-home -u $USER_ID --shell /bin/sh --gid www www
USER www
EXPOSE 5000/tcp
#CMD ["python","run.py"]
CMD [ "gunicorn", "-w", "4", "--bind", "127.0.0.1:5000", "run:app"]
These are my docker-compose file and Dockerfile for an application. The MongoDB and Flask containers work fine individually, but when I try to run them together I get no response on localhost, and there are no error messages in the containers' logs.
curl on the host gives this:
$ curl -i http://127.0.0.1:5000
curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused
Can anyone suggest how I can debug this?
Only the backend network is functional:
[
{
"Name": "mlspace_backend",
"Id": "e7e37cad0058330c99c55ffea2b98281e0c6526763e34550db24431b30030b77",
"Created": "2022-05-15T22:52:25.0281001+05:30",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"12c50f5f70d18b4c7ffc076177b59ff063a8ff81c4926c8cae0bf9e74dc2fc83": {
"Name": "mlspace_mongodb_1",
"EndpointID": "8278f672d9211aec9b539e14ae4eeea5e8f7aaef95448f44aab7ec1a8c61bb0b",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
},
"75cde7a34d6b3c4517c536a06349579c6c39090a93a017b2c280d987701ed0cf": {
"Name": "mlspace_flaskblog_1",
"EndpointID": "20489de8841d937f01768170a89f74c37ed049d241b096c8de8424c51e58704c",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "backend",
"com.docker.compose.project": "mlspace",
"com.docker.compose.version": "1.23.2"
}
}
]
I checked the logs of the containers: no errors found. The Flask engine seems to be running, but the site won't load and even curl gives no output.
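One way to narrow this down is to compare what the host sees with what the container itself sees; a sketch, assuming the service names from the compose file above (curl is usually not present in python:3.8-slim, so the in-container check uses the Python interpreter that is already there):

# is any host port published for flaskblog at all? (the service above has no "ports:" mapping)
docker-compose ps

# does the app answer from inside its own container on the address gunicorn is bound to?
docker-compose exec flaskblog python -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:5000').status)"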
I am pretty new to Docker. I have a Spring Boot application that connects to a postgres database (running in a container). I want to dockerize my Spring Boot application and be able to connect it to the postgres database container.
What works:
Running the application without using docker works just fine: java -jar product-manager-0.0.1-SNAPSHOT.jar
What doesn't work:
Dockerizing and running the application with Docker gives me an exception.
Note that I am not using docker-compose here.
My app Dockerfile:
FROM openjdk:11
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Steps I've taken to run the app:
Run and start up the postgres database in a container using these commands:
docker run --rm --name lil-postgres -e POSTGRES_PASSWORD=password -d -v $HOME/srv/postgres:/var/lib/postgresql/data -p 5432:5432 postgres
postgres -D /usr/local/var/postgres
I've built the app's image:
docker build -t ensa/product-manager .
This is the result of running the last command:
Sending build context to Docker daemon 38.01MB
Step 1/4 : FROM openjdk:11
---> 612d4d483eee
Step 2/4 : ARG JAR_FILE=target/*.jar
---> Running in 1b8674e959ca
Removing intermediate container 1b8674e959ca
---> d2c2b90680de
Step 3/4 : COPY ${JAR_FILE} app.jar
---> 01295beecd1b
Step 4/4 : ENTRYPOINT ["java","-jar","/app.jar"]
---> Running in 31230e7ff323
Removing intermediate container 31230e7ff323
---> c4487683e7b1
Successfully built c4487683e7b1
Successfully tagged ensa/product-manager:latest
Finally, I've created a running container using:
docker run --net="host" -it ensa/product-manager
The third step resulted in an exception:
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "hamzabelmellouki"
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:520) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:141) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.Driver.makeConnection(Driver.java:458) ~[postgresql-42.2.8.jar!/:42.2.8]
at org.postgresql.Driver.connect(Driver.java:260) ~[postgresql-42.2.8.jar!/:42.2.8]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:353) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:201) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:473) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:562) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) ~[HikariCP-3.4.1.jar!/:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) ~[HikariCP-3.4.1.jar!/:na]
at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:158) ~[spring-jdbc-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:116) ~[spring-jdbc-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:79) ~[spring-jdbc-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:324) ~[spring-jdbc-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.boot.jdbc.EmbeddedDatabaseConnection.isEmbedded(EmbeddedDatabaseConnection.java:120) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateDefaultDdlAutoProvider.getDefaultDdlAuto(HibernateDefaultDdlAutoProvider.java:42) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration.lambda$getVendorProperties$1(HibernateJpaConfiguration.java:130) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateSettings.getDdlAuto(HibernateSettings.java:41) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateProperties.determineDdlAuto(HibernateProperties.java:136) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateProperties.getAdditionalProperties(HibernateProperties.java:102) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateProperties.determineHibernateProperties(HibernateProperties.java:94) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration.getVendorProperties(HibernateJpaConfiguration.java:132) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.autoconfigure.orm.jpa.JpaBaseConfiguration.entityManagerFactory(JpaBaseConfiguration.java:133) ~[spring-boot-autoconfigure-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:640) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:625) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1338) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1177) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:557) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:517) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:323) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:321) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1108) ~[spring-context-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:868) ~[spring-context-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550) ~[spring-context-5.2.1.RELEASE.jar!/:5.2.1.RELEASE]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:747) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:397) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:315) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1215) ~[spring-boot-2.2.1.RELEASE.jar!/:2.2.1.RELEASE]
at com.ensa.productmanager.ProductManagerApplication.main(ProductManagerApplication.java:10) ~[classes!/:0.0.1-SNAPSHOT]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) ~[app.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) ~[app.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:51) ~[app.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:52) ~[app.jar:0.0.1-SNAPSHOT]
I've spent the whole day debugging and reading threads about this error, but I couldn't find a clue. Any help will be much appreciated.
UPDATE
I've run docker inspect on the Postgres container:
[
{
"Id": "426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10",
"Created": "2020-01-16T19:14:14.7290504Z",
"Path": "docker-entrypoint.sh",
"Args": [
"postgres"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 8057,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-01-16T19:14:15.5451891Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:e2d75d1c1264a777df31dcbd4fd452b238134eb27854c2a173fdbfaa47ce9b87",
"ResolvConfPath": "/var/lib/docker/containers/426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10/hostname",
"HostsPath": "/var/lib/docker/containers/426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10/hosts",
"LogPath": "/var/lib/docker/containers/426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10/426744f8c0a90b504c4d7c22242929f4b7f833e3ff4ddb9a112139d18ffd7c10-json.log",
"Name": "/lil-postgres",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/Users/hamzabelmellouki/srv/postgres:/var/lib/postgresql/data"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"5432/tcp": [
{
"HostIp": "",
"HostPort": "5432"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": true,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/baabe07f985a61979a3752f7e3c43116b980c45682d27c660b2867a6af3da28b-init/diff:/var/lib/docker/overlay2/ac6700580b74ace6c2f4eb9c274ef405124afb8019948c51516b496ed62eae8d/diff:/var/lib/docker/overlay2/35c484f8059617661ae54f749d0b938e7d81bc2089dd81ef81952aed6079c30b/diff:/var/lib/docker/overlay2/1649505946cf497fd95a5b7a118150a09315767464671f93459e3324f8e5ac42/diff:/var/lib/docker/overlay2/69a4818765c800f405e11c7755e184e4fd0837b68b4664821cf9c80daa7b629b/diff:/var/lib/docker/overlay2/177d3596510c3fdb235ec84728b8b6baf23a406dca3e9a3bfb45e98b48fe2395/diff:/var/lib/docker/overlay2/07d15f6f98f43ff88e6bdf9e361fadaaa46f83121278b5dbddac372cb95d99b8/diff:/var/lib/docker/overlay2/5e10c2001ef13459900f124ae825c210e2e001b0aa2f8b5e1e7c8c5c27d9e403/diff:/var/lib/docker/overlay2/b1e7828ebe991a7401746f581fd640bca3ae74f3a9820fd15e0fb2c666a08754/diff:/var/lib/docker/overlay2/e1b274467493537d29e25cce71acf9c3020d8a2e25080c75aa0fca10a5c05789/diff:/var/lib/docker/overlay2/11141fed542b6c519ec7621e0c4f341d2b2822a7c2885980216a2ef27609ffe0/diff:/var/lib/docker/overlay2/c3e4e9634bfe6a9e4b26f3d2d5751b2fa01aae0e8e1679a800accd94179d527d/diff:/var/lib/docker/overlay2/e7da8a834521c827046a2dc64f6f39afa6fded7f1af3241c75c592756d3053d6/diff:/var/lib/docker/overlay2/91d22f9b7d3aa1fae5cabcfe245a18c7dc6ac372696c73be70b65f7c8b42f1f4/diff:/var/lib/docker/overlay2/1a2021467ef42428eda11a9aef133e4b7ee9364266255abe5a0f2df4a9ff2584/diff",
"MergedDir": "/var/lib/docker/overlay2/baabe07f985a61979a3752f7e3c43116b980c45682d27c660b2867a6af3da28b/merged",
"UpperDir": "/var/lib/docker/overlay2/baabe07f985a61979a3752f7e3c43116b980c45682d27c660b2867a6af3da28b/diff",
"WorkDir": "/var/lib/docker/overlay2/baabe07f985a61979a3752f7e3c43116b980c45682d27c660b2867a6af3da28b/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/Users/hamzabelmellouki/srv/postgres",
"Destination": "/var/lib/postgresql/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "426744f8c0a9",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"5432/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"POSTGRES_USER=hamzabelmellouki",
"POSTGRES_PASSWORD=password",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/11/bin",
"GOSU_VERSION=1.11",
"LANG=en_US.utf8",
"PG_MAJOR=11",
"PG_VERSION=11.5-1.pgdg90+1",
"PGDATA=/var/lib/postgresql/data"
],
"Cmd": [
"postgres"
],
"Image": "postgres",
"Volumes": {
"/var/lib/postgresql/data": {}
},
"WorkingDir": "",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "2649b309ebde1f690927a1081497c9dc05dc249642b429f25d930243eb81e08f",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"5432/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5432"
}
]
},
"SandboxKey": "/var/run/docker/netns/2649b309ebde",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "f01f60128018d8634f26a0c683ee6865e20361548b3a49007804dc913bfc4292",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "045d48c0f1d22cb7c319d6e850f6e9985a08e6d08884a79e823dbd0ac6c8a1b9",
"EndpointID": "f01f60128018d8634f26a0c683ee6865e20361548b3a49007804dc913bfc4292",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]
Maybe you can use docker-compose to define and run a multi-container Docker application (postgres + spring-boot), using a YAML file to configure your application's services.
You should also create new volumes with a different path so the data directory is initialized with the new user and configuration. If you reuse a volume that was already initialized in a previous run, you will hit problems like "password authentication failed for user 'hamzabelmellouki'", "role 'hamzabelmellouki' does not exist", or "database <name> does not exist", because the old data directory still contains the users created before.
#####docker-compose.yml#####
version: '3'
services:
  db:
    image: postgres
    container_name: lil-postgres
    ports:
      - "5432:5432"
    restart: unless-stopped
    volumes:
      - "./postgres/data:/var/lib/postgresql/data"
    environment:
      POSTGRES_USER: hamzabelmellouki
      POSTGRES_PASSWORD: password
      POSTGRES_DB: hamzabelmellouki
    networks:
      - product-net
  product:
    image: ensa/product-manager
    restart: unless-stopped
    ports:
      - "8080:8080"
    depends_on:
      - db
    networks:
      - product-net
networks:
  product-net:
    driver: bridge
volumes:
  db:
    driver: local
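With this layout the Spring app reaches the database through the service name db rather than localhost, so the datasource settings in application.properties would look roughly like this (a sketch; the property names assume a standard Spring Boot datasource configuration):

spring.datasource.url=jdbc:postgresql://db:5432/hamzabelmellouki
spring.datasource.username=hamzabelmellouki
spring.datasource.password=password

Then docker-compose up -d starts both containers on the product-net network, and the db hostname is resolved by Docker's embedded DNS.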
Try the below.
While running the Postgres container, you have not provided any username; it should be something like:
docker run --rm --net="host" --name lil-postgres -e POSTGRES_USER=hamzabelmellouki -e POSTGRES_PASSWORD=password -d -v $HOME/srv/postgres:/var/lib/postgresql/data -p 5432:5432 postgres:9.6
By default the Postgres container runs on the bridge network (the default Docker network), while you've run your app with --net="host".
So maybe due to that, they are not able to communicate.
Provide the same network when running both (see the sketch after this answer).
You can check it with the command below:
docker inspect container_name (or container_id)
It's not directly related, but I've also pinned the image tag (postgres:9.6), as it's always good to use a specific tag instead of the default latest.
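Without Compose, a sketch of the same idea with a user-defined network (the network name product-net is illustrative; the other values are taken from the commands above):

docker network create product-net

docker run --rm --name lil-postgres --network product-net \
  -e POSTGRES_USER=hamzabelmellouki -e POSTGRES_PASSWORD=password \
  -d -v $HOME/srv/postgres:/var/lib/postgresql/data postgres:9.6

# the app container can then reach the database as lil-postgres:5432
docker run --network product-net -it ensa/product-manager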
Try replacing localhost with the IP address 172.17.0.2 in application.properties: spring.datasource.url=jdbc:postgresql://172.17.0.2:5432/testdb
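If you go that route, the container's current address can be read with docker inspect; a sketch (note that this bridge IP can change whenever the container is recreated, so a container name on a shared network is usually more robust):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' lil-postgres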