Internal Server Error (500) when accessing a Spring Boot endpoint in Docker with MongoDB

I am trying to learn app deployment with Docker. My configuration files are as follows:
application.properties
####### Mongo Properties ###########
spring.data.mongodb.uri=mongodb://mongo/locationsdb
Dockerfile
FROM openjdk:14-alpine
ARG JAR_FILE=./target/*jar
COPY ${JAR_FILE} jarapp.jar
EXPOSE 8080
ENTRYPOINT ["java", "-Dspring.profiles.active=docker", "-jar", "jarapp.jar"]
docker-compose.yml
version: "3"
services:
mongodb-container:
image: mongo:latest
container_name: "mongodb-container"
restart: always
ports:
- 27017:27017
server-container:
image: server_side
container_name: "server-container"
restart: always
ports:
- 8080:8080
links:
- mongodb-container
depends_on:
- mongodb-container
After that, I ran the following:
docker-compose config
docker-compose up --build
But I was getting the error below:
server-container | 2021-09-02 09:44:41.253 INFO 1 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017
server-container |
server-container | com.mongodb.MongoSocketOpenException: Exception opening socket
server-container | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-4.2.3.jar!/:na]
server-container | at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:143) ~[mongodb-driver-core-4.2.3.jar!/:na]
server-container | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:188) ~[mongodb-driver-core-4.2.3.jar!/:na]
server-container | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:144) ~[mongodb-driver-core-4.2.3.jar!/:na]
server-container | at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
server-container | Caused by: java.net.ConnectException: Connection refused
server-container | at java.base/sun.nio.ch.Net.pollConnect(Native Method) ~[na:na]
server-container | at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:589) ~[na:na]
server-container | at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:542) ~[na:na]
server-container | at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597) ~[na:na]
server-container | at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:333) ~[na:na]
server-container | at java.base/java.net.Socket.connect(Socket.java:648) ~[na:na]
server-container | at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:107) ~[mongodb-driver-core-4.2.3.jar!/:na]
server-container | at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-4.2.3.jar!/:na]
server-container | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-4.2.3.jar!/:na]
server-container | ... 4 common frames omitted
server-container |
server-container | 2021-09-02 09:44:43.395 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
server-container | 2021-09-02 09:44:43.429 INFO 1 --- [ main] c.f.virtuallab.VirtuallabApplication : Started VirtuallabApplication in 26.943 seconds (JVM running for 28.445)
mongodb-container | {"t":{"$date":"2021-09-02T09:45:13.967+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1630575913:967258][1:0x7fef40740700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 34, snapshot max: 34 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1"}}
As the log shows, there was an "Exception opening socket" problem, and afterwards it says: server-container | 2021-09-02 09:44:43.429 INFO 1 --- [ main] c.f.virtuallab.VirtuallabApplication : Started VirtuallabApplication in 26.943 seconds (JVM running for 28.445).
When I tried my endpoint, localhost:8080/api/v1/locations, I only got Internal Server Error (500).
Could someone guide me on how to properly connect to MongoDB and get the application working?

Try changing your Mongo URI to:
####### Mongo Properties ###########
spring.data.mongodb.uri=mongodb://mongodb-container/locationsdb
You are using mongo as your MongoDB host, but you have declared the MongoDB service as mongodb-container in your docker-compose file. On the Compose network, service names act as hostnames, so your MongoDB instance should be reached via mongodb-container, not mongo.
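Alternatively (a minimal sketch on my part, not from the original answer), you could leave application.properties untouched and override the URI from docker-compose.yml, since Spring Boot's relaxed binding maps the environment variable SPRING_DATA_MONGODB_URI to spring.data.mongodb.uri:
  server-container:
    image: server_side
    ports:
      - 8080:8080
    environment:
      # overrides spring.data.mongodb.uri inside the container;
      # "mongodb-container" resolves to the Mongo service on the Compose network
      SPRING_DATA_MONGODB_URI: mongodb://mongodb-container:27017/locationsdb
    depends_on:
      - mongodb-container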

I think there is no need to publish the MongoDB port to the host machine if only the application container talks to it. But you do have to pass the same username and password, along with the hostname (that is, the container name), when connecting to the MongoDB container.
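For illustration only (the credentials below are hypothetical, not from the question), a Compose setup with authentication could look roughly like this; MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD are the official mongo image's variables for creating the root user on first start:
  mongodb-container:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: appuser    # hypothetical
      MONGO_INITDB_ROOT_PASSWORD: changeme   # hypothetical
  server-container:
    image: server_side
    environment:
      # user, password and authSource must match the root user created above
      SPRING_DATA_MONGODB_URI: mongodb://appuser:changeme@mongodb-container:27017/locationsdb?authSource=admin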

Related

Airflow runs a connections check against the metadata DB before init db

I am running Airflow locally, based on a Dockerfile, .env, docker-compose.yaml, and entrypoint.sh,
as "docker-compose -f docker-compose.yaml up".
Just after "airflow init db" in entrypoint.sh I am getting the following error:
[After that everything is fine and I can run Airflow, but this drives me crazy. Can anyone help me resolve it, please?]
What's strange is that the service queries the tables even before the DB has been initialized:
airflow_webserver | initiating db
airflow_webserver | DB: postgresql://airflow:***@airflow_metadb:5432/airflow
airflow_webserver | [2022-02-22 13:52:26,318] {db.py:929} INFO - Dropping tables that exist
airflow_webserver | [2022-02-22 13:52:26,570] {migration.py:201} INFO - Context impl PostgresqlImpl.
airflow_webserver | [2022-02-22 13:52:26,570] {migration.py:204} INFO - Will assume transactional DDL.
airflow_metadb | 2022-02-22 13:52:26.712 UTC [71] ERROR: relation "connection" does not exist at character 55
airflow_metadb | 2022-02-22 13:52:26.712 UTC [71] STATEMENT: SELECT connection.conn_id AS connection_conn_id
airflow_metadb | FROM connection GROUP BY connection.conn_id
airflow_metadb | HAVING count(*) > 1
airflow_metadb | 2022-02-22 13:52:26.714 UTC [72] ERROR: relation "connection" does not exist at character 55
airflow_metadb | 2022-02-22 13:52:26.714 UTC [72] STATEMENT: SELECT connection.conn_id AS connection_conn_id
airflow_metadb | FROM connection
airflow_metadb | WHERE connection.conn_type IS NULL
airflow_webserver | [2022-02-22 13:52:26,733] {db.py:921} INFO - Creating tables
airflow 2.2.3
postgres 13
In the Dockerfile:
ENTRYPOINT ["/entrypoint.sh"]
In docker-compose.yaml:
webserver:
  env_file: ./.env
  image: airflow
  container_name: airflow_webserver
  restart: always
  depends_on:
    - postgres
  environment:
    <<: *env_common
    AIRFLOW__CORE__LOAD_EXAMPLES: ${AIRFLOW__CORE__LOAD_EXAMPLES}
    AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: ${AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION}
    EXECUTOR: ${EXECUTOR}
    _AIRFLOW_DB_UPGRADE: ${_AIRFLOW_DB_UPGRADE}
    _AIRFLOW_WWW_USER_CREATE: ${_AIRFLOW_WWW_USER_CREATE}
    _AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME}
    _AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD}
    _AIRFLOW_WWW_USER_ROLE: ${_AIRFLOW_WWW_USER_ROLE}
    _AIRFLOW_WWW_USER_EMAIL: ${_AIRFLOW_WWW_USER_EMAIL}
  logging:
    options:
      max-size: 10m
      max-file: "3"
  volumes:
    - ./dags:bla-bla
    - ./logs:bla-bla
  ports:
    - ${AIRFLOW_WEBSERVER_PORT}:${AIRFLOW_WEBSERVER_PORT}
  command: webserver
  healthcheck:
    test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
    interval: 30s
    timeout: 30s
    retries: 3

Unable to connect to MongoDB as a docker-compose service from another service

When I launch my application using docker-compose, I get an error that my application cannot connect to the database, although the port is exposed and they are on the same network.
This is my docker-compose.yml file:
version: '3'
volumes:
  db-data:
    driver: local
  mongo-config:
    driver: local
services:
  pulseq-mongodb:
    image: mongo:latest
    container_name: server-mongodb
    restart: always
    networks:
      - server-net
    ports:
      - "27017:27017"
    expose:
      - 27017
    volumes:
      - db-data:/data/db
      - mongo-config:/data/configdb
  server:
    image: my-server:0.0.1-pre-alpha.1
    container_name: server
    restart: always
    networks:
      - server-net
    ports:
      - "8080:8080"
    depends_on:
      - server-mongodb
networks:
  server-net:
    driver: bridge
I'm getting the following error on startup:
server | 2021-11-01 13:05:10.409 INFO 1 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017
server |
server | com.mongodb.MongoSocketOpenException: Exception opening socket
server | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-4.2.3.jar:na]
server | at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:143) ~[mongodb-driver-core-4.2.3.jar:na]
server | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:188) ~[mongodb-driver-core-4.2.3.jar:na]
server | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:144) ~[mongodb-driver-core-4.2.3.jar:na]
server | at java.base/java.lang.Thread.run(Unknown Source) ~[na:na]
I have tried many solutions, but nothing has helped. Any answer will be appreciated.
Thanks.
I fixed this by using the container name instead of localhost in the application configuration.
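For example (a sketch only; I am assuming a Spring Boot app with the standard MongoDB properties, and the database name is hypothetical), the connection would point at the Compose service instead of localhost:
spring:
  data:
    mongodb:
      # "pulseq-mongodb" is the Compose service name; on the user-defined
      # server-net network the container name "server-mongodb" also resolves
      uri: mongodb://pulseq-mongodb:27017/mydatabase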

Unable to load data from Kafka topic to Postgres using JDBCSinkConnector

I have dockerized Kafka and Postgres. I use the JDBC Sink connector to load data from a Kafka topic into a Postgres table. First I create a topic and a stream on top of it with the AVRO value format.
CREATE STREAM TEST01 (ROWKEY VARCHAR KEY, COL1 INT, COL2 VARCHAR)
WITH (KAFKA_TOPIC='test01', PARTITIONS=1, VALUE_FORMAT='AVRO');
This is the command that creates the sink connector:
curl -X PUT http://localhost:8083/connectors/sink-jdbc-postgre-01/config \
-H "Content-Type: application/json" -d '{
"connector.class" : "io.confluent.connect.jdbc.JdbcSinkConnector",
"connection.url" : "jdbc:postgresql://postgres:5432/",
"topics" : "test01",
"key.converter" : "org.apache.kafka.connect.storage.StringConverter",
"value.converter" : "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://schema-registry:8081",
"connection.user" : "postgres",
"connection.password" : "********",
"auto.create" : true,
"auto.evolve" : true,
"insert.mode" : "insert",
"pk.mode" : "record_key",
"pk.fields" : "MESSAGE_KEY"
}'
Then I check Postgres for any data that came from Kafka using the \dt command, and it returns the following: Did not find any relations.
Then I check the kafka-connect logs, and they return the following:
[2021-03-30 10:05:07,546] INFO Attempting to open connection #2 to PostgreSql (io.confluent.connect.jdbc.util.CachedConnectionProvider)
connect | [2021-03-30 10:05:07,577] INFO Unable to connect to database on attempt 2/3. Will retry in 10000 ms. (io.confluent.connect.jdbc.util.CachedConnectionProvider)
connect | org.postgresql.util.PSQLException: The connection attempt failed.
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:296)
connect | at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
connect | at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:211)
connect | at org.postgresql.Driver.makeConnection(Driver.java:459)
connect | at org.postgresql.Driver.connect(Driver.java:261)
connect | at java.sql.DriverManager.getConnection(DriverManager.java:664)
connect | at java.sql.DriverManager.getConnection(DriverManager.java:208)
connect | at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.getConnection(GenericDatabaseDialect.java:224)
connect | at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:93)
connect | at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:62)
connect | at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:56)
connect | at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
connect | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
connect | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
connect | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
connect | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
connect | at java.lang.Thread.run(Thread.java:748)
connect | Caused by: java.net.UnknownHostException: postgres
connect | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
connect | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
connect | at java.net.Socket.connect(Socket.java:589)
connect | at org.postgresql.core.PGStream.<init>(PGStream.java:81)
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:92)
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:196)
connect | ... 22 more
connect | [2021-03-30 10:05:17,578] INFO Attempting to open connection #3 to PostgreSql (io.confluent.connect.jdbc.util.CachedConnectionProvider)
connect | [2021-03-30 10:05:17,732] ERROR WorkerSinkTask{id=sink-jdbc-postgre-01-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: org.postgresql.util.PSQLException: The connection attempt failed. (org.apache.kafka.connect.runtime.WorkerSinkTask)
connect | org.apache.kafka.connect.errors.ConnectException: org.postgresql.util.PSQLException: The connection attempt failed.
connect | at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:69)
connect | at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:56)
connect | at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
connect | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
connect | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
connect | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
connect | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
connect | at java.lang.Thread.run(Thread.java:748)
connect | Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:296)
connect | at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
connect | at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:211)
connect | at org.postgresql.Driver.makeConnection(Driver.java:459)
connect | at org.postgresql.Driver.connect(Driver.java:261)
connect | at java.sql.DriverManager.getConnection(DriverManager.java:664)
connect | at java.sql.DriverManager.getConnection(DriverManager.java:208)
connect | at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.getConnection(GenericDatabaseDialect.java:224)
connect | at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:93)
connect | at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:62)
connect | ... 13 more
connect | Caused by: java.net.UnknownHostException: postgres
connect | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
connect | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
connect | at java.net.Socket.connect(Socket.java:589)
connect | at org.postgresql.core.PGStream.<init>(PGStream.java:81)
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:92)
connect | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:196)
connect | ... 22 more
connect | [2021-03-30 10:05:17,734] ERROR WorkerSinkTask{id=sink-jdbc-postgre-01-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
I supposed that the problem might be a missing PostgreSQL connector .jar file in /usr/share/java/kafka-connect-jdbc, but it is there:
root@connect:/usr/share/java/kafka-connect-jdbc# ls -l
total 8412
-rw-r--r-- 1 root root 17555 Apr 18 2020 common-utils-5.5.0.jar
-rw-r--r-- 1 root root 317816 Apr 18 2020 jtds-1.3.1.jar
-rw-r--r-- 1 root root 230113 Apr 18 2020 kafka-connect-jdbc-5.5.0.jar
-rw-r--r-- 1 root root 927447 Apr 18 2020 postgresql-42.2.10.jar
-rw-r--r-- 1 root root 41139 Apr 18 2020 slf4j-api-1.7.26.jar
-rw-r--r-- 1 root root 7064881 Apr 18 2020 sqlite-jdbc-3.25.2.jar
What could be the solution to this problem?
Thanks to @Robin Moffatt's tutorial and @OneCricketeer's tip, I found the way to solve this problem: Kafka Connect and Postgres should be in one docker-compose.yml file, so they share a network and can resolve each other by service name. I attach the docker-compose.yml below. Hope this helps anyone who faces the same problem:
---
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-server:5.5.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - zookeeper
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
  connect:
    image: cnfldemos/cp-server-connect-datagen:0.3.2-5.5.0
    hostname: connect
    container_name: connect
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.5.0.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:5.5.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:5.5.0
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
  rest-proxy:
    image: confluentinc/cp-kafka-rest:5.5.0
    depends_on:
      - zookeeper
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
  postgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
Here is the error from your stack trace:
java.net.UnknownHostException: postgres
This means that your Kafka Connect worker container cannot resolve the hostname postgres.
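If you did want to keep Postgres in a separate compose file instead, one option (a sketch under that assumption; the network name is illustrative) is to attach both stacks to a shared, pre-created Docker network so the hostname postgres resolves from the connect container:
# create the shared network once: docker network create kafka-postgres-net
# Kafka stack docker-compose.yml (fragment)
services:
  connect:
    networks:
      - default              # keep talking to broker, schema-registry, etc.
      - kafka-postgres-net
networks:
  kafka-postgres-net:
    external: true
# Postgres stack docker-compose.yml (fragment)
services:
  postgres:
    networks:
      - kafka-postgres-net
networks:
  kafka-postgres-net:
    external: true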

MongoDB connection error through Docker in Spring Boot

I am learning Docker and trying to create a docker-compose.yml file for a Spring Boot + MongoDB application (JDK 10). Spring Boot picks up 27017 as the default port for MongoDB, so if I start a mongo container with the command below:
docker run -d -p 27017:27017 mongo
and then start my application in IntelliJ, everything works fine. When I try to use a docker-compose.yml, I get a connection exception.
This is my Dockerfile:
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
This is my docker-compose.yml:
services:
  java:
    image: adoptopenjdk/openjdk10:latest
  mongo:
    image: mongo
    expose:
      - "27017"
    ports:
      - "0.0.0.0:27017:27017"
  spring_boot_mongo:
    build: .
    ports:
      - "8080:8080"
    links:
      - java
version: "2"
Error while running docker-compose up command:
localhost:27017
spring_boot_mongo_1 |
spring_boot_mongo_1 | com.mongodb.MongoSocketOpenException: Exception opening socket
spring_boot_mongo_1 | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:67) ~[mongodb-driver-core-3.8.2.jar!/:na]
spring_boot_mongo_1 | at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:126) ~[mongodb-driver-core-3.8.2.jar!/:na]
spring_boot_mongo_1 | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongodb-driver-core-3.8.2.jar!/:na]
spring_boot_mongo_1 | at java.lang.Thread.run(Thread.java:748) [na:1.8.0_212]
spring_boot_mongo_1 | Caused by: java.net.ConnectException: Connection refused (Connection refused)
spring_boot_mongo_1 | at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_212]
spring_boot_mongo_1 | at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_212]
spring_boot_mongo_1 | at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[na:1.8.0_212]
spring_boot_mongo_1 | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_212]
spring_boot_mongo_1 | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_212]
spring_boot_mongo_1 | at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_212]
spring_boot_mongo_1 | at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongodb-driver-core-3.8.2.jar!/:na]
spring_boot_mongo_1 | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:62) ~[mongodb-driver-core-3.8.2.jar!/:na]
spring_boot_mongo_1 | ... 3 common frames omitted
spring_boot_mongo_1 |
This is what I see when i run docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4da4948d04d9 spring-boot-mongodb-master_spring_boot_mongo "java -jar /app.jar" 11 seconds ago Up 10 seconds 0.0.0.0:8080->8080/tcp spring-boot-mongodb-master_spring_boot_mongo_1
e9a79f3ba8ab mongo "docker-entrypoint.s…" 12 seconds ago Up 10 seconds 0.0.0.0:27017->27017/tcp spring-boot-mongodb-master_mongo_1
This means the mongo port mapping works as expected (I see the same output when I start the MongoDB container with the docker run command mentioned above), but the application port mapping was not achieved. Can someone please help me?
Thanks!
Services from the same docker-compose file are attached to the same default network. You should use the service name in your URL when you want to access another container; the service name resolves to the container's IP automatically. You cannot reach another container via localhost, so use the service name instead. In your case you can set an environment variable for your spring_boot_mongo service:
  spring_boot_mongo:
    build: .
    ports:
      - "8080:8080"
    environment:
      - SPRING_DATA_MONGODB_HOST=mongo
if you are using Spring Data MongoDB. Otherwise, set the environment variable that overrides the URI in your application container.
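For example (a sketch; the database name mydb is hypothetical), the URI form of the same override would be:
  spring_boot_mongo:
    build: .
    ports:
      - "8080:8080"
    environment:
      # maps to spring.data.mongodb.uri via Spring Boot's relaxed binding;
      # "mongo" is the service name from the compose file above
      - SPRING_DATA_MONGODB_URI=mongodb://mongo:27017/mydb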

JHipster: unable to connect to containerized MongoDB

I'm trying to use Docker to run my JHipster microservices.
I had no problem when running without Docker, but now I'm facing a problem when trying to run my microservice using Docker.
Every time I execute the command
docker-compose -f app.yml up
on my UAA server, it shows this error while the UAA server is starting:
uaa-app_1 | 2017-04-17 07:38:59.725 DEBUG 6 --- [ main] i.c.f.uaa.config.CacheConfiguration : No cache
uaa-app_1 | 2017-04-17 07:39:06.897 DEBUG 6 --- [ main] i.c.f.u.c.apidoc.SwaggerConfiguration : Starting Swagger
uaa-app_1 | 2017-04-17 07:39:06.914 DEBUG 6 --- [ main] i.c.f.u.c.apidoc.SwaggerConfiguration : Started Swagger in 16 ms
uaa-app_1 | 2017-04-17 07:39:07.004 DEBUG 6 --- [ main] i.c.f.uaa.config.DatabaseConfiguration : Configuring Mongobee
uaa-app_1 | 2017-04-17 07:39:07.020 INFO 6 --- [ main] com.github.mongobee.Mongobee : Mongobee has started the data migration sequence..
uaa-app_1 | 2017-04-17 07:39:37.056 WARN 6 --- [ main] ationConfigEmbeddedWebApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongobee' defined in class path resource [id/co/fifgroup/uaa/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
uaa-app_1 | 2017-04-17 07:39:37.074 INFO 6 --- [ main] i.c.f.uaa.config.CacheConfiguration : Closing Cache Manager
uaa-app_1 | 2017-04-17 07:39:37.129 ERROR 6 --- [ main] o.s.boot.SpringApplication : Application startup failed
uaa-app_1 |
uaa-app_1 | org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongobee' defined in class path resource [id/co/fifgroup/uaa/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
And this is my UAA server's app.yml inside the docker directory:
version: '2'
services:
  uaa-app:
    image: uaa
    external_links:
      - uaa-mongodb:mongodb
      - jhipster-registry:registry
    environment:
      - SPRING_PROFILES_ACTIVE=dev,swagger
      - SPRING_CLOUD_CONFIG_URI=http://admin:admin@registry:8761/config
      - SPRING_DATA_MONGODB_URI=mongodb://mongodb:27017
      - SPRING_DATA_MONGODB_DATABASE=uaa
      - JHIPSTER_SLEEP=15 # gives time for the database to boot before the application
  uaa-mongodb:
    extends:
      file: mongodb.yml
      service: uaa-mongodb
  jhipster-registry:
    extends:
      file: jhipster-registry.yml
      service: jhipster-registry
    environment:
      - SPRING_CLOUD_CONFIG_SERVER_NATIVE_SEARCH_LOCATIONS=file:./central-config/docker-config/
uaa-mongodb and jhipster-registry work fine with Docker, but my UAA server is unable to connect to uaa-mongodb.
And why does the error keep saying that I am using localhost:27017, even though I tried to change SPRING_DATA_MONGODB_URI and spring.data.mongodb.uri inside application-dev.yml and application-prod.yml to different values?
Can someone help me with this problem?
To answer my own question, this is all I did with my scripts.
I changed the app.yml file to:
version: '2'
services:
  uaa:
    image: uaa:latest
    external_links:
      - uaa-mongodb:mongodb
      - jhipster-registry:registry
    links:
      - uaa-mongodb:mongodb
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger,test
      - SPRING_CLOUD_CONFIG_URI=http://admin:admin@registry:8761/config
      - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://admin:admin@registry:8761/config
      - SPRING_DATA_MONGODB_HOST=mongodb
      - SPRING_DATA_MONGODB_PORT=27017
      - SPRING_DATA_MONGODB_DATABASE=uaa
      - JHIPSTER_SLEEP=15 # gives time for the database to boot before the application
  jhipster-registry:
    extends:
      file: jhipster-registry.yml
      service: jhipster-registry
    environment:
      - SPRING_CLOUD_CONFIG_SERVER_NATIVE_SEARCH_LOCATIONS=file:./central-config/docker-config/
  uaa-mongodb:
    extends:
      file: mongodb.yml
      service: uaa-mongodb
and my application-prod.yml to:
eureka:
  instance:
    prefer-ip-address: true
  client:
    enabled: true
    healthcheck:
      enabled: true
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: ${EUREKA_CLIENT_SERVICEURL_DEFAULTZONE}
spring:
  devtools:
    restart:
      enabled: false
    livereload:
      enabled: false
  data:
    mongodb:
      host: ${SPRING_DATA_MONGODB_HOST}
      port: ${SPRING_DATA_MONGODB_PORT}
      database: ${SPRING_DATA_MONGODB_DATABASE}
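(As a side note and a sketch only, assuming Spring Boot's standard property mapping: the same three values could instead be expressed as a single spring.data.mongodb.uri property; use one form or the other, not both.)
spring:
  data:
    mongodb:
      # single-URI alternative to the host/port/database settings above
      uri: mongodb://${SPRING_DATA_MONGODB_HOST}:${SPRING_DATA_MONGODB_PORT}/${SPRING_DATA_MONGODB_DATABASE}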
Now my UAA server successfully connects to the containerized MongoDB database.
But now I'm having a new error:
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
uaa_1 | at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:111)
uaa_1 | at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56)
uaa_1 | at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$1.execute(EurekaHttpClientDecorator.java:59)
uaa_1 | at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77)
uaa_1 | at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56)
uaa_1 | at com.netflix.discovery.DiscoveryClient.register(DiscoveryClient.java:815)
uaa_1 | at com.netflix.discovery.InstanceInfoReplicator.run(InstanceInfoReplicator.java:104)
uaa_1 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
uaa_1 | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
uaa_1 | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
uaa_1 | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
uaa_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
uaa_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
uaa_1 | at java.lang.Thread.run(Thread.java:745)