Flink cluster startup error: Could not resolve ResourceManager address (akka) - flink-sql

Need help with the following error, as I can't seem to find the actual issue. I am trying to run a Flink cluster on Docker Desktop on Windows 10 Professional.
Dockerfile:
FROM SOME-LOCAL-REGISTRY-URL/flink:1.11
ADD build/libs/demoapp-service-all.jar /opt/flink/usrlib/demoapp-service-all.jar
VOLUME /tmp
ADD conf/flink-conf.yaml /opt/flink/conf/flink-conf.yaml
ADD conf/log4j.properties /opt/flink/conf/log4j.properties
flink-conf.yaml:
jobmanager.rpc.address: jobmanager
jobmanager.rpc.port: 8092
jobmanager.memory.process.size: 1600m
taskmanager.memory.process.size: 1728m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
state.backend: rocksdb
state.checkpoints.dir: file:///c:/Users/demo/checkpoint_dir
state.backend.rocksdb.memory.managed: true
I am building the "demo/demoapp:1.0" image manually from the Dockerfile and then starting the Flink cluster with "docker-compose up".
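For completeness, the build-and-run steps are presumably something like:

docker build -t demo/demoapp:1.0 .
docker-compose up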
docker-compose.yml:
version: "2.2"
services:
jobmanager:
image: demo/demoapp:1.0
ports:
- "8092:8092"
command: ["standalone-job", "-Dspring.profiles.active=dev"]
taskmanager:
image: demo/demoapp:1.0
depends_on:
- jobmanager
command: ["taskmanager", "-Dspring.profiles.active=dev"]
scale: 1
Logs:
jobmanager_1 | Starting Job Manager
taskmanager_1 | Starting Task Manager
jobmanager_1 | Starting standalonejob as a console application on host aaf9a34c154f.
taskmanager_1 | Starting taskexecutor as a console application on host a96dd08d9ae6.
---------------------------------------------------------------------------------------------
taskmanager_1 | TM_RESOURCE_PARAMS extraction logs:
taskmanager_1 | jvm_params: -Xmx536870902 -Xms536870902 -XX:MaxDirectMemorySize=268435458 -XX:MaxMetaspaceSize=268435456
taskmanager_1 | dynamic_configs: -D taskmanager.memory.framework.off-heap.size=134217728b -D taskmanager.memory.network.max=134217730b -D taskmanager.memory.network.min=134217730b -D taskmanager.memory.framework.heap.size=134217728b -D taskmanager.memory.managed.size=536870920b -D taskmanager.cpu.cores=2.0 -D taskmanager.memory.task.heap.size=402653174b -D taskmanager.memory.task.off-heap.size=0b
taskmanager_1 | logs: INFO [] - Loading configuration property: jobmanager.rpc.address, a96dd08d9ae6
taskmanager_1 | INFO [] - Loading configuration property: jobmanager.rpc.port, 8092
taskmanager_1 | INFO [] - Loading configuration property: jobmanager.memory.process.size, 1600m
taskmanager_1 | INFO [] - Loading configuration property: taskmanager.memory.process.size, 1728m
taskmanager_1 | INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 2
taskmanager_1 | INFO [] - Loading configuration property: parallelism.default, 1
taskmanager_1 | INFO [] - Loading configuration property: state.backend, rocksdb
taskmanager_1 | INFO [] - Loading configuration property: state.checkpoints.dir, file:///c:/Users/demo/checkpoint_dir
taskmanager_1 | INFO [] - Loading configuration property: state.backend.rocksdb.memory.managed, true
taskmanager_1 | INFO [] - Loading configuration property: blob.server.port, 6124
taskmanager_1 | INFO [] - Loading configuration property: query.server.port, 6125
--------------------------------------------------------------------------------------
jobmanager_1 | JM_RESOURCE_PARAMS extraction logs:
jobmanager_1 | jvm_params: -Xmx1073741824 -Xms1073741824 -XX:MaxMetaspaceSize=268435456
jobmanager_1 | logs: INFO [] - Loading configuration property: jobmanager.rpc.address, aaf9a34c154f
jobmanager_1 | INFO [] - Loading configuration property: jobmanager.rpc.port, 8092
jobmanager_1 | INFO [] - Loading configuration property: jobmanager.memory.process.size, 1600m
jobmanager_1 | INFO [] - Loading configuration property: taskmanager.memory.process.size, 1728m
jobmanager_1 | INFO [] - Loading configuration property: taskmanager.numberOfTaskSlots, 1
jobmanager_1 | INFO [] - Loading configuration property: parallelism.default, 1
jobmanager_1 | INFO [] - Loading configuration property: state.backend, rocksdb
jobmanager_1 | INFO [] - Loading configuration property: state.checkpoints.dir, file:///c:/Users/demo/checkpoint_dir
jobmanager_1 | INFO [] - Loading configuration property: state.backend.rocksdb.memory.managed, true
jobmanager_1 | INFO [] - Loading configuration property: blob.server.port, 6124
jobmanager_1 | INFO [] - Loading configuration property: query.server.port, 6125
---------------------------------------------------------------------------------------------
Error Logs:
taskmanager_1 | 2020-11-25 10:15:41,179 INFO org.apache.flink.runtime.net.ConnectionUtils [] - Trying to connect to address a96dd08d9ae6/172.18.0.3:8092
taskmanager_1 | 2020-11-25 10:15:41,180 INFO org.apache.flink.runtime.net.ConnectionUtils [] - Failed to connect from address 'a96dd08d9ae6/172.18.0.3': Connection refused (Connection refused)
taskmanager_1 | 2020-11-25 10:15:41,181 INFO org.apache.flink.runtime.net.ConnectionUtils [] - Failed to connect from address '/172.18.0.3': Connection refused (Connection refused)
taskmanager_1 | 2020-11-25 10:15:41,181 INFO org.apache.flink.runtime.net.ConnectionUtils [] - Failed to connect from address '/172.18.0.3': Connection refused (Connection refused)
taskmanager_1 | 2020-11-25 10:15:41,182 INFO org.apache.flink.runtime.net.ConnectionUtils [] - Failed to connect from address '/127.0.0.1': Connection refused (Connection refused)
taskmanager_1 | 2020-11-25 10:15:41,183 INFO org.apache.flink.runtime.net.ConnectionUtils [] - Failed to connect from address '/172.18.0.3': Connection refused (Connection refused)
taskmanager_1 | 2020-11-25 10:15:41,183 INFO org.apache.flink.runtime.net.ConnectionUtils [] - Failed to connect from address '/127.0.0.1': Connection refused (Connection refused)
taskmanager_1 | 2020-11-25 10:16:19,730 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Could not resolve ResourceManager address akka.tcp://flink@a96dd08d9ae6:8092/user/rpc/resourcemanager_*, retrying in 10000 ms: Could not connect to rpc endpoint under address akka.tcp://flink@a96dd08d9ae6:8092/user/rpc/resourcemanager_*.
Also, apart from the error, I don't understand from the logs why the TaskManager reads "jobmanager.rpc.address" and "taskmanager.numberOfTaskSlots" differently from what is in flink-conf.yaml, whereas the JobManager reads them correctly.
Please help me figure out what I am missing here.

Instead of defining jobmanager.rpc.address inside flink-conf.yaml, defining it inside the docker-compose.yml file solved the problem for me:
Dockerfile:
FROM flink:1.12.2-scala_2.12-java8
COPY --chown=flink:flink ./path/to/assembly.jar /opt/flink/usrlib/
COPY --chown=flink:flink ./conf/* /opt/flink/conf/
docker-compose.yml:
environment:
  FLINK_PROPERTIES: |-
    jobmanager.rpc.address: jobmanager
flink-conf.yaml:
# Other configurations.
# ...
# Leave last line empty.
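Put together, the compose file from the question would then look roughly like this (an untested sketch, assuming the base image keeps the official Flink entrypoint, which appends FLINK_PROPERTIES to flink-conf.yaml; setting the property on both services lets the TaskManager resolve the JobManager by its Compose service name):

version: "2.2"
services:
  jobmanager:
    image: demo/demoapp:1.0
    ports:
      - "8092:8092"
    command: ["standalone-job", "-Dspring.profiles.active=dev"]
    environment:
      FLINK_PROPERTIES: |-
        jobmanager.rpc.address: jobmanager
  taskmanager:
    image: demo/demoapp:1.0
    depends_on:
      - jobmanager
    command: ["taskmanager", "-Dspring.profiles.active=dev"]
    scale: 1
    environment:
      FLINK_PROPERTIES: |-
        jobmanager.rpc.address: jobmanager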

Related

Strimzi Kafka - Kafka Connect not getting installed

I have Strimzi Kafka set up on GKE, and it is working fine.
I have a requirement to set up MirrorMaker2 to push data from a source Kafka topic to a target Kafka topic.
From what I understand, MirrorMaker2 requires KafkaConnect.
I'm trying to install KafkaConnect on the GKE cluster in namespace kafka-connect; the YAML used is the following.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
  # annotations:
  #   strimzi.io/use-connector-resources: "true"
spec:
  version: 3.0.0
  replicas: 1
  bootstrapServers: versa-kafka-gke-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: versa-kafka-gke-cluster-ca-cert
        certificate: ca.crt
  config:
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
    # -1 means it will use the default replication factor configured in the broker
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
Command run to install:
kubectl apply -f kafka-connect.yaml -n kafka-connect
Note: Strimzi Kafka is installed in namespace 'kafka', version 3.0.0.
I was expecting KafkaConnect to get installed and the pods to be created as well; however, the KafkaConnect resource is created but does not show as Ready.
Also, the pods are not getting created.
(base) Karans-MacBook-Pro:kafkaConnect karanalang$ kc get kafkaconnect my-connect-cluster -n kafka-connect
NAME                 DESIRED REPLICAS   READY
my-connect-cluster   1
On describing the my-connect-cluster, here is the output (not showing any errors)
(base) Karans-MacBook-Pro:kafkaConnect karanalang$ kc describe kafkaconnect my-connect-cluster -n kafka-connect
Name:         my-connect-cluster
Namespace:    kafka-connect
Labels:       <none>
Annotations:  <none>
API Version:  kafka.strimzi.io/v1beta2
Kind:         KafkaConnect
Metadata:
  Creation Timestamp:  2023-02-16T06:38:41Z
  Generation:          1
  Managed Fields:
    API Version:  kafka.strimzi.io/v1beta2
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:bootstrapServers:
        f:config:
          .:
          f:config.storage.replication.factor:
          f:config.storage.topic:
          f:group.id:
          f:offset.storage.replication.factor:
          f:offset.storage.topic:
          f:status.storage.replication.factor:
          f:status.storage.topic:
        f:replicas:
        f:tls:
          .:
          f:trustedCertificates:
        f:version:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2023-02-16T06:38:41Z
  Resource Version:  266457732
  UID:               bdd697f8-e38a-466b-8ddf-81ed1ae54efe
Spec:
  Bootstrap Servers:  versa-kafka-gke-kafka-bootstrap:9093
  Config:
    config.storage.replication.factor:  -1
    config.storage.topic:               connect-cluster-configs
    group.id:                           connect-cluster
    offset.storage.replication.factor:  -1
    offset.storage.topic:               connect-cluster-offsets
    status.storage.replication.factor:  -1
    status.storage.topic:               connect-cluster-status
  Replicas:  1
  Tls:
    Trusted Certificates:
      Certificate:  ca.crt
      Secret Name:  versa-kafka-gke-cluster-ca-cert
  Version:  3.0.0
Events:  <none>
How do I debug/fix this?
Thanks in advance!
Update:
Based on the note from Jakub, I re-installed KafkaConnect in the same namespace as Strimzi Kafka (i.e. namespace kafka), and the pods are coming up now. However, the logs show the error below:
(base) Karans-MacBook-Pro:kafkaConnect karanalang$ kc logs -f pod/my-connect-cluster-connect-67f76f5d89-nv9sj -n kafka
Preparing truststore
Certificate was added to keystore
Preparing truststore is complete
Starting Kafka Connect with configuration:
# Bootstrap servers
bootstrap.servers=versa-kafka-gke-w-kafka-bootstrap:9093
# REST Listeners
rest.port=8083
rest.advertised.host.name=10.6.0.199
rest.advertised.port=8083
# Plugins
plugin.path=/opt/kafka/plugins
# Provided configuration
offset.storage.topic=connect-cluster-offsets
value.converter=org.apache.kafka.connect.json.JsonConverter
config.storage.topic=connect-cluster-configs
key.converter=org.apache.kafka.connect.json.JsonConverter
group.id=connect-cluster
status.storage.topic=connect-cluster-status
config.storage.replication.factor=-1
offset.storage.replication.factor=-1
status.storage.replication.factor=-1
security.protocol=SSL
producer.security.protocol=SSL
consumer.security.protocol=SSL
admin.security.protocol=SSL
# TLS / SSL
ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
ssl.truststore.password=[hidden]
ssl.truststore.type=PKCS12
producer.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
producer.ssl.truststore.password=[hidden]
consumer.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
consumer.ssl.truststore.password=[hidden]
admin.ssl.truststore.location=/tmp/kafka/cluster.truststore.p12
admin.ssl.truststore.password=[hidden]
# Additional configuration
client.rack=
2023-02-17 06:43:08,952 INFO WorkerInfo values:
jvm.args = -Xms128M, -XX:+UseG1GC, -XX:MaxGCPauseMillis=20, -XX:InitiatingHeapOccupancyPercent=35, -XX:+ExplicitGCInvokesConcurrent, -XX:MaxInlineLevel=15, -Djava.awt.headless=true, -Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dkafka.logs.dir=/opt/kafka, -Dlog4j.configuration=file:/opt/kafka/custom-config/log4j.properties
jvm.spec = Red Hat, Inc., OpenJDK 64-Bit Server VM, 11.0.12, 11.0.12+7-LTS
jvm.classpath = /opt/kafka/bin/../libs/accessors-smart-2.4.7.jar:/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/annotations-13.0.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.6.1.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/automaton-1.11-8.jar:/opt/kafka/bin/../libs/checker-qual-3.5.0.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang-2.6.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-3.0.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-3.0.0.jar:/opt/kafka/bin/../libs/connect-file-3.0.0.jar:/opt/kafka/bin/../libs/connect-json-3.0.0.jar:/opt/kafka/bin/../libs/connect-mirror-3.0.0.jar:/opt/kafka/bin/../libs/connect-mirror-client-3.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-3.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-
......
2023-02-17 06:43:24,944 INFO Added alias 'InsertHeader' to plugin 'org.apache.kafka.connect.transforms.InsertHeader' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'RegexRouter' to plugin 'org.apache.kafka.connect.transforms.RegexRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'TimestampRouter' to plugin 'org.apache.kafka.connect.transforms.TimestampRouter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'ValueToKey' to plugin 'org.apache.kafka.connect.transforms.ValueToKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'HasHeaderKey' to plugin 'org.apache.kafka.connect.transforms.predicates.HasHeaderKey' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'RecordIsTombstone' to plugin 'org.apache.kafka.connect.transforms.predicates.RecordIsTombstone' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'TopicNameMatches' to plugin 'org.apache.kafka.connect.transforms.predicates.TopicNameMatches' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added alias 'BasicAuthSecurityRestExtension' to plugin 'org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added aliases 'AllConnectorClientConfigOverridePolicy' and 'All' to plugin 'org.apache.kafka.connect.connector.policy.AllConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added aliases 'NoneConnectorClientConfigOverridePolicy' and 'None' to plugin 'org.apache.kafka.connect.connector.policy.NoneConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:24,945 INFO Added aliases 'PrincipalConnectorClientConfigOverridePolicy' and 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader) [main]
2023-02-17 06:43:25,347 INFO DistributedConfig values:
access.control.allow.methods =
access.control.allow.origin =
admin.listeners = null
bootstrap.servers = [versa-kafka-gke-w-kafka-bootstrap:9093]
client.dns.lookup = use_all_dns_ips
client.id =
config.providers = []
config.storage.replication.factor = -1
config.storage.topic = connect-cluster-configs
connect.protocol = sessioned
connections.max.idle.ms = 540000
connector.client.config.override.policy = All
group.id = connect-cluster
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
heartbeat.interval.ms = 3000
inter.worker.key.generation.algorithm = HmacSHA256
inter.worker.key.size = null
inter.worker.key.ttl.ms = 3600000
inter.worker.signature.algorithm = HmacSHA256
inter.worker.verification.algorithms = [HmacSHA256]
key.converter = class org.apache.kafka.connect.json.JsonConverter
listeners = [http://:8083]
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
offset.flush.interval.ms = 60000
offset.flush.timeout.ms = 5000
offset.storage.partitions = 25
offset.storage.replication.factor = -1
offset.storage.topic = connect-cluster-offsets
plugin.path = [/opt/kafka/plugins]
rebalance.timeout.ms = 60000
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 40000
response.http.headers.config =
rest.advertised.host.name = 10.6.0.199
rest.advertised.listener = null
rest.advertised.port = 8083
rest.extension.classes = []
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
scheduled.rebalance.max.delay.ms = 300000
security.protocol = SSL
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = /tmp/kafka/cluster.truststore.p12
ssl.truststore.password = [hidden]
ssl.truststore.type = PKCS12
status.storage.partitions = 5
status.storage.replication.factor = -1
status.storage.topic = connect-cluster-status
task.shutdown.graceful.timeout.ms = 5000
topic.creation.enable = true
topic.tracking.allow.reset = true
topic.tracking.enable = true
value.converter = class org.apache.kafka.connect.json.JsonConverter
worker.sync.timeout.ms = 3000
worker.unsync.backoff.ms = 300000
(org.apache.kafka.connect.runtime.distributed.DistributedConfig) [main]
2023-02-17 06:43:25,355 INFO Creating Kafka admin client (org.apache.kafka.connect.util.ConnectUtils) [main]
2023-02-17 06:43:25,359 INFO AdminClientConfig values:
bootstrap.servers = [versa-kafka-gke-w-kafka-bootstrap:9093]
client.dns.lookup = use_all_dns_ips
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = /tmp/kafka/cluster.truststore.p12
ssl.truststore.password = [hidden]
ssl.truststore.type = PKCS12
(org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,848 WARN The configuration 'producer.ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,849 WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,849 WARN The configuration 'rest.advertised.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,849 WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'admin.security.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'consumer.ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'producer.ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,850 WARN The configuration 'consumer.security.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'admin.ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'consumer.ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,851 WARN The configuration 'producer.security.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'client.rack' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,852 WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,853 WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,853 WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,855 WARN The configuration 'admin.ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig) [main]
2023-02-17 06:43:27,856 INFO Kafka version: 3.0.0 (org.apache.kafka.common.utils.AppInfoParser) [main]
2023-02-17 06:43:27,857 INFO Kafka commitId: 8cb0a5e9d3441962 (org.apache.kafka.common.utils.AppInfoParser) [main]
2023-02-17 06:43:27,857 INFO Kafka startTimeMs: 1676616207856 (org.apache.kafka.common.utils.AppInfoParser) [main]
2023-02-17 06:43:30,857 INFO [AdminClient clientId=adminclient-1] Failed re-authentication with versa-kafka-gke-w-kafka-bootstrap/10.6.131.57 (Failed to process post-handshake messages) (org.apache.kafka.common.network.Selector) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,867 ERROR [AdminClient clientId=adminclient-1] Connection to node -1 (versa-kafka-gke-w-kafka-bootstrap/10.6.131.57:9093) failed authentication due to: Failed to process post-handshake messages (org.apache.kafka.clients.NetworkClient) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,870 WARN [AdminClient clientId=adminclient-1] Metadata update failed due to authentication error (org.apache.kafka.clients.admin.internals.AdminMetadataManager) [kafka-admin-client-thread | adminclient-1]
org.apache.kafka.common.errors.SslAuthenticationException: Failed to process post-handshake messages
Caused by: javax.net.ssl.SSLException: Tag mismatch!
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:133)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:349)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:292)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:287)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:123)
at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:681)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:636)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:454)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:433)
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637)
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:567)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:95)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1389)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1320)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: javax.crypto.AEADBadTagException: Tag mismatch!
at java.base/com.sun.crypto.provider.GaloisCounterMode.decryptFinal(GaloisCounterMode.java:623)
at java.base/com.sun.crypto.provider.CipherCore.finalNoPadding(CipherCore.java:1116)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(CipherCore.java:1053)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:941)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(AESCipher.java:491)
at java.base/javax.crypto.CipherSpi.bufferCrypt(CipherSpi.java:779)
at java.base/javax.crypto.CipherSpi.engineDoFinal(CipherSpi.java:730)
at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2497)
at java.base/sun.security.ssl.SSLCipher$T13GcmReadCipherGenerator$GcmReadCipher.decrypt(SSLCipher.java:1903)
at java.base/sun.security.ssl.SSLEngineInputRecord.decodeInputRecord(SSLEngineInputRecord.java:240)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:197)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:160)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:111)
... 16 more
2023-02-17 06:43:30,946 INFO App info kafka.admin.client for adminclient-1 unregistered (org.apache.kafka.common.utils.AppInfoParser) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,947 INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager) [kafka-admin-client-thread | adminclient-1]
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited. Call: fetchMetadata
2023-02-17 06:43:30,948 INFO [AdminClient clientId=adminclient-1] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager) [kafka-admin-client-thread | adminclient-1]
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited. Call: fetchMetadata
2023-02-17 06:43:30,949 INFO [AdminClient clientId=adminclient-1] Timed out 2 remaining operation(s) during close. (org.apache.kafka.clients.admin.KafkaAdminClient) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,959 INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,960 INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,961 INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics) [kafka-admin-client-thread | adminclient-1]
2023-02-17 06:43:30,961 ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed) [main]
org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:70)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:51)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:97)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:80)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SslAuthenticationException: Failed to process post-handshake messages
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
... 3 more
Caused by: org.apache.kafka.common.errors.SslAuthenticationException: Failed to process post-handshake messages
Caused by: javax.net.ssl.SSLException: Tag mismatch!
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:133)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:349)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:292)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:287)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:123)
at java.base/sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:681)
at java.base/sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:636)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:454)
at java.base/sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:433)
at java.base/javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637)
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:567)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:95)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1389)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1320)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: javax.crypto.AEADBadTagException: Tag mismatch!
at java.base/com.sun.crypto.provider.GaloisCounterMode.decryptFinal(GaloisCounterMode.java:623)
at java.base/com.sun.crypto.provider.CipherCore.finalNoPadding(CipherCore.java:1116)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(CipherCore.java:1053)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(CipherCore.java:941)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(AESCipher.java:491)
at java.base/javax.crypto.CipherSpi.bufferCrypt(CipherSpi.java:779)
at java.base/javax.crypto.CipherSpi.engineDoFinal(CipherSpi.java:730)
at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2497)
at java.base/sun.security.ssl.SSLCipher$T13GcmReadCipherGenerator$GcmReadCipher.decrypt(SSLCipher.java:1903)
at java.base/sun.security.ssl.SSLEngineInputRecord.decodeInputRecord(SSLEngineInputRecord.java:240)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:197)
at java.base/sun.security.ssl.SSLEngineInputRecord.decode(SSLEngineInputRecord.java:160)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:111)
... 16 more
Please note:
The Strimzi Kafka version is 3.0.0, hence I've set the version of KafkaConnect to 3.0.0 as well.
You say you want to use MirrorMaker2? Then you should be using that kind (KafkaMirrorMaker2), not KafkaConnect.
https://strimzi.io/blog/2020/03/30/introducing-mirrormaker2/
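For illustration, a minimal KafkaMirrorMaker2 resource could look roughly like this (an untested sketch: the source alias, its bootstrap address and the topics pattern are placeholders; only the target values are taken from the question):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker-2
spec:
  version: 3.0.0
  replicas: 1
  connectCluster: "target"    # alias of the cluster the MM2 Connect workers run against
  clusters:
    - alias: "source"         # placeholder: the cluster to mirror from
      bootstrapServers: source-kafka-bootstrap:9092
    - alias: "target"         # the existing GKE cluster from the question
      bootstrapServers: versa-kafka-gke-kafka-bootstrap:9093
      tls:
        trustedCertificates:
          - secretName: versa-kafka-gke-cluster-ca-cert
            certificate: ca.crt
  mirrors:
    - sourceCluster: "source"
      targetCluster: "target"
      sourceConnector: {}     # default MirrorSourceConnector settings
      topicsPattern: ".*"     # placeholder: narrow this to the topics you need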
And, as commented, ensure the operator is watching the namespace where you install any resources. Just because you can get/describe the resource doesn't mean the operator knows about it or is processing it (look at its logs).
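To see whether the operator is reconciling the resource at all, its logs are the first place to check (this assumes the default deployment name and that the operator runs in the kafka namespace):

kubectl logs deployment/strimzi-cluster-operator -n kafka | grep -i my-connect-cluster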

Spring Boot application context error: Kotlin, Gradle, database, Docker PostgreSQL

I wanted to set up a connection between my PostgreSQL database, which runs inside a Docker container, and my project.
The container and everything around it is set up via a Docker Compose file, which looks like this:
version: '3.8'
services:
  db:
    container_name: postgres_container
    image: postgres
    restart: always
    environment:
      POSTGRES_DB: postgres_db
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
      PGDATA: /var/lib/postgresql/data
    ports:
      - "5432:5432"
    volumes:
      - db-data:/var/lib/postgresql/data
  pgadmin:
    container_name: pgadmin4_container
    image: dpage/pgadmin4:5.5
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: secret
      PGADMIN_LISTEN_PORT: 80
    ports:
      - "8080:80"
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    links:
      - "db:pgsql-server"
volumes:
  db-data:
  pgadmin-data:
My project is supposed to be written in Kotlin, and I set up Spring Boot with the initializer, configured as follows:
[screenshot of the Spring Initializr configuration]
I added the following to the application.properties file to get a connection to the database:
spring.datasource.url=jdbc:postgresql://localhost:5432/postgres_db
spring.datasource.username=admin
spring.datasource.password=secret
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto=update
This is my main file:
package com.nosiaj.nstudycloud

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication

@SpringBootApplication
class NstudycloudApplication

fun main(args: Array<String>) {
    runApplication<NstudycloudApplication>(*args)
}
And I wanted to create a table via code:
package com.nosiaj.nstudycloud.entity

import javax.persistence.Entity
import javax.persistence.Id
import javax.persistence.Table

@Table(name = "customer")
@Entity
class Customer {
    @Id
    private var uId: String = TODO("initialize me")
    // defaults added so the snippet compiles; JPA populates the fields via reflection
    private val firstName: String = ""
    private val lastName: String = ""
    private val email: String = ""
    private val phone: String = ""
    private val IBAN: String = ""
}
This is my build.gradle.kts file:
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile // needed for tasks.withType<KotlinCompile> below

plugins {
    id("org.springframework.boot") version "2.7.2"
    id("io.spring.dependency-management") version "1.0.12.RELEASE"
    kotlin("jvm") version "1.6.21"
    kotlin("plugin.spring") version "1.6.21"
    kotlin("plugin.jpa") version "1.6.21"
}

group = "com.nosiaj"
version = "0.0.1-SNAPSHOT"
java.sourceCompatibility = JavaVersion.VERSION_17

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.springframework.boot:spring-boot-starter-data-jpa")
    implementation("org.springframework.boot:spring-boot-starter-web")
    implementation("com.fasterxml.jackson.module:jackson-module-kotlin")
    implementation("org.jetbrains.kotlin:kotlin-reflect")
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8")
    runtimeOnly("org.postgresql:postgresql")
    testImplementation("org.springframework.boot:spring-boot-starter-test")
}

tasks.withType<KotlinCompile> {
    kotlinOptions {
        freeCompilerArgs = listOf("-Xjsr305=strict")
        jvmTarget = "17"
    }
}

tasks.withType<Test> {
    useJUnitPlatform()
}
Now when I start the application, it connects to the database and creates the table, but I still get an error and the server stops. This is the output:
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.7.2)
2022-08-07 16:54:22.736 INFO 55209 --- [ main] c.n.n.NstudycloudApplicationKt : Starting NstudycloudApplicationKt using Java 17.0.4 on simba-desktop with PID 55209 (/home/simba/Dokumente/Project/nstudycloud/nstudycloud/build/classes/kotlin/main started by simba in /home/simba/Dokumente/Project/nstudycloud/nstudycloud)
2022-08-07 16:54:22.739 INFO 55209 --- [ main] c.n.n.NstudycloudApplicationKt : No active profile set, falling back to 1 default profile: "default"
2022-08-07 16:54:23.342 INFO 55209 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2022-08-07 16:54:23.359 INFO 55209 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 3 ms. Found 0 JPA repository interfaces.
2022-08-07 16:54:23.743 INFO 55209 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2022-08-07 16:54:23.750 INFO 55209 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2022-08-07 16:54:23.751 INFO 55209 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.65]
2022-08-07 16:54:23.812 INFO 55209 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2022-08-07 16:54:23.812 INFO 55209 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1026 ms
2022-08-07 16:54:23.949 INFO 55209 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2022-08-07 16:54:23.991 INFO 55209 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.6.10.Final
2022-08-07 16:54:24.110 INFO 55209 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
2022-08-07 16:54:24.182 INFO 55209 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2022-08-07 16:54:24.286 INFO 55209 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2022-08-07 16:54:24.298 INFO 55209 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.PostgreSQLDialect
2022-08-07 16:54:24.594 INFO 55209 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2022-08-07 16:54:24.602 INFO 55209 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2022-08-07 16:54:24.635 WARN 55209 --- [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
2022-08-07 16:54:24.928 WARN 55209 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'webServerStartStop'; nested exception is org.springframework.boot.web.server.WebServerException: Unable to start embedded Tomcat server
2022-08-07 16:54:24.930 INFO 55209 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-08-07 16:54:24.932 INFO 55209 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-08-07 16:54:24.938 INFO 55209 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
2022-08-07 16:54:24.943 INFO 55209 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2022-08-07 16:54:24.955 INFO 55209 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-08-07 16:54:24.975 ERROR 55209 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'webServerStartStop'; nested exception is org.springframework.boot.web.server.WebServerException: Unable to start embedded Tomcat server
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181) ~[spring-context-5.3.22.jar:5.3.22]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.22.jar:5.3.22]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.22.jar:5.3.22]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.22.jar:5.3.22]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.22.jar:5.3.22]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) ~[spring-context-5.3.22.jar:5.3.22]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) ~[spring-context-5.3.22.jar:5.3.22]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:147) ~[spring-boot-2.7.2.jar:2.7.2]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:734) ~[spring-boot-2.7.2.jar:2.7.2]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:408) ~[spring-boot-2.7.2.jar:2.7.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:308) ~[spring-boot-2.7.2.jar:2.7.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1306) ~[spring-boot-2.7.2.jar:2.7.2]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1295) ~[spring-boot-2.7.2.jar:2.7.2]
at com.nosiaj.nstudycloud.NstudycloudApplicationKt.main(NstudycloudApplication.kt:13) ~[main/:na]
Caused by: org.springframework.boot.web.server.WebServerException: Unable to start embedded Tomcat server
at org.springframework.boot.web.embedded.tomcat.TomcatWebServer.start(TomcatWebServer.java:229) ~[spring-boot-2.7.2.jar:2.7.2]
at org.springframework.boot.web.servlet.context.WebServerStartStopLifecycle.start(WebServerStartStopLifecycle.java:43) ~[spring-boot-2.7.2.jar:2.7.2]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.22.jar:5.3.22]
... 14 common frames omitted
Caused by: java.lang.IllegalArgumentException: standardService.connector.startFailed
at org.apache.catalina.core.StandardService.addConnector(StandardService.java:238) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at org.springframework.boot.web.embedded.tomcat.TomcatWebServer.addPreviouslyRemovedConnectors(TomcatWebServer.java:282) ~[spring-boot-2.7.2.jar:2.7.2]
at org.springframework.boot.web.embedded.tomcat.TomcatWebServer.start(TomcatWebServer.java:213) ~[spring-boot-2.7.2.jar:2.7.2]
... 16 common frames omitted
Caused by: org.apache.catalina.LifecycleException: Protocol handler start failed
at org.apache.catalina.connector.Connector.startInternal(Connector.java:1077) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at org.apache.catalina.core.StandardService.addConnector(StandardService.java:234) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
... 18 common frames omitted
Caused by: java.net.BindException: Die Adresse wird bereits verwendet [Address already in use]
at java.base/sun.nio.ch.Net.bind0(Native Method) ~[na:na]
at java.base/sun.nio.ch.Net.bind(Net.java:555) ~[na:na]
at java.base/sun.nio.ch.ServerSocketChannelImpl.netBind(ServerSocketChannelImpl.java:337) ~[na:na]
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:294) ~[na:na]
at org.apache.tomcat.util.net.NioEndpoint.initServerSocket(NioEndpoint.java:275) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at org.apache.tomcat.util.net.NioEndpoint.bind(NioEndpoint.java:230) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at org.apache.tomcat.util.net.AbstractEndpoint.bindWithCleanup(AbstractEndpoint.java:1227) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at org.apache.tomcat.util.net.AbstractEndpoint.start(AbstractEndpoint.java:1313) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at org.apache.coyote.AbstractProtocol.start(AbstractProtocol.java:614) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
at org.apache.catalina.connector.Connector.startInternal(Connector.java:1074) ~[tomcat-embed-core-9.0.65.jar:9.0.65]
... 20 common frames omitted
Process finished with exit code 1
So why does this error appear, and how can I solve it? I don't know what I am doing wrong; if someone could help me, that would be really great!
Thanks a lot!
It means another process is already running on the same port, so you have to kill it.
First, list which app is using that port and get its PID:
lsof -i:8080
Then kill it with:
kill <pid>
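In this particular setup, a likely culprit is the pgadmin4 container from the Compose file above, which is published on host port 8080, the same port embedded Tomcat binds by default. As an alternative to killing the other process, the Spring Boot app can be moved to a free port (a one-line sketch; 8081 is an arbitrary choice):

# application.properties
server.port=8081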

NullPointerException: ha.BootstrapStandby.run(BootstrapStandby.java:549)

$HADOOP_HOME/bin/hdfs --config $HADOOP_CONF_DIR namenode -bootstrapStandby
I am getting the error below in Hadoop HA 3.3.3 on the standby node when running the above command.
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853529569,"date":"2022-08-07 06:25:29,569","level":"INFO","thread":"main","message":"registered UNIX signal handlers for [TERM, HUP, INT]"}
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853529682,"date":"2022-08-07 06:25:29,682","level":"INFO","thread":"main","message":"createNameNode [-bootstrapStandby]"}
{"name":"org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby","time":1659853530068,"date":"2022-08-07 06:25:30,068","level":"INFO","thread":"main","message":"Found nn: apache-hadoop-namenode-0.apache-hadoop-namenode.backend.svc.cluster.local, ipc: hdfs:8020"}
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853530069,"date":"2022-08-07 06:25:30,069","level":"ERROR","thread":"main","message":"Failed to start namenode.","exceptionclass":"java.io.IOException","stack":["java.io.IOException: java.lang.NullPointerException","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:549)","\tat org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1741)","\tat org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1834)","Caused by: java.lang.NullPointerException","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.parseConfAndFindOtherNN(BootstrapStandby.java:435)","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:114)","\tat org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:81)","\tat org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:95)","\tat org.apache.hadoop.hdfs.server.namenode.ha.BootstrapStandby.run(BootstrapStandby.java:544)","\t... 2 more"]}{"name":"org.apache.hadoop.util.ExitUtil","time":1659853530074,"date":"2022-08-07 06:25:30,074","level":"INFO","thread":"main","message":"Exiting with status 1: java.io.IOException: java.lang.NullPointerException"}
{"name":"org.apache.hadoop.hdfs.server.namenode.NameNode","time":1659853530083,"date":"2022-08-07 06:25:30,083","level":"INFO","thread":"shutdown-hook-0","message":"SHUTDOWN_MSG: \n\nSHUTDOWN_MSG: Shutting down NameNode at apache-hadoop-namenode-1.apache-hadoop-namenode.backend.svc.cluster.local"}

Lagom: create a static µ-service cluster

I wrote a few µ-services with Lagom.
I have a docker-compose file in which all µ-services are created manually.
Now I have a problem.
If I use discovery with akka-dns, nothing is found.
If I use static service location and give the services their exposed ports, nothing is found either. Where is my mistake?
Docker compose file:
security:
  container_name: panakeia-security
  image: nexus.familieschmidt.online/andre-user/panakeia/security-impl:latest
  environment:
    - APPLICATION_SECRET=hlPlU12MK?[oF1Xj`>xd>CtCjTHohfu0=ekVFOo?r]lH^GpFo5o?kurLFO6sQPzD
    - POSTGRESQL_URL=jdbc:postgresql://****SERVER-IP****:12000/panakeia
    - POSTGRESQL_USERNAME=panakeia
    - POSTGRESQL_PASSWORD=123456789
    - INIT_USERNAME=test@test.de
    - INIT_USERPASS=123456
    - REQUIRED_CONTACT_POINT_NR=1
    - JAVA_OPTS=-Dconfig.resource=prod-application.conf -Dplay.server.pidfile.path=/dev/null
  expose:
    - 9000
    - 15000
    # - 8558
  ports:
    - "15000:15000"
    - "14999:9000"
  networks:
    - panakeia-network
My loader class:

class SecurityLoader extends LagomApplicationLoader {

  override def load(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with ConfigurationServiceLocatorComponents {
      // override def staticServiceUri: URI = URI.create("http://localhost:9000")
    } // AkkaDiscoveryComponents

  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[SecurityService])
}
and my production.conf:

include "application"

play {
  server {
    pidfile.path = "/dev/null"
  }
  http.secret.key = "${APPLICATION_SECRET}"
}

db.default {
  url = ${POSTGRESQL_URL}
  username = ${POSTGRESQL_USERNAME}
  password = ${POSTGRESQL_PASSWORD}
}

user.init {
  username = ${INIT_USERNAME}
  password = ${INIT_USERPASS}
}

pac4j.jwk = {"kty":"oct","k":${JWK_KEY},"alg":"HS512"}

lagom.persistence.jdbc.create-tables.auto = true

//akka {
//  discovery.method = akka-dns
//
//  cluster {
//    shutdown-after-unsuccessful-join-seed-nodes = 60s
//  }
//
//  management {
//    cluster.bootstrap {
//      contact-point-discovery {
//        discovery-method = akka.discovery
////        discovery-method = akka-dns
//        service-name = "security-service"
//        required-contact-point-nr = ${REQUIRED_CONTACT_POINT_NR}
//      }
//    }
//  }
//}

lagom.services {
  security-impl = "http://****SERVER-IP****:15000"
  // serviceB = "http://10.1.2.4:8080"
}
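One thing worth double-checking with ConfigurationServiceLocatorComponents: the keys under lagom.services are looked up by the service name declared with Service.named(...) in the descriptor, not by the artifact or image name. A hedged sketch ("security-service" is illustrative and has to match whatever SecurityService actually declares):

// inside SecurityService, the Lagom service descriptor
override final def descriptor: Descriptor =
  named("security-service").withCalls()

and, correspondingly, in production.conf:

lagom.services {
  security-service = "http://****SERVER-IP****:15000"
}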
For other domains on the same server I have an nginx reverse proxy, but not for this project.
When I open http://SERVER-IP:14999 in a browser, I get the typical Play 404 Not Found page. :(
I have no problem writing the compose file and linking the services manually, and I also have no problem using akka-dns. But I will not have Kubernetes or anything like it.
Thanks for your help.
Here is a log of one of the µ-services:
2021-04-02T14:38:24.151Z [info] akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started
2021-04-02T14:38:24.409Z [info] akka.remote.artery.tcp.ArteryTcpTransport [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=ArteryTcpTransport(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.409UTC] - Remoting started with transport [Artery tcp]; listening on address [akka://application@192.168.0.5:25520] with UID [-580933006609059378]
2021-04-02T14:38:24.427Z [info] akka.cluster.Cluster [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.427UTC] - Cluster Node [akka://application@192.168.0.5:25520] - Starting up, Akka version [2.6.8] ...
2021-04-02T14:38:24.527Z [info] akka.cluster.Cluster [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.527UTC] - Cluster Node [akka://application@192.168.0.5:25520] - Registered cluster JMX MBean [akka:type=Cluster]
2021-04-02T14:38:24.528Z [info] akka.cluster.Cluster [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.527UTC] - Cluster Node [akka://application@192.168.0.5:25520] - Started up successfully
2021-04-02T14:38:24.552Z [info] akka.cluster.Cluster [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-2, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.552UTC] - Cluster Node [akka://application@192.168.0.5:25520] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
2021-04-02T14:38:24.552Z [info] akka.cluster.Cluster [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-2, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.552UTC] - Cluster Node [akka://application@192.168.0.5:25520] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
2021-04-02T14:38:25.383Z [info] akka.io.DnsExt [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=DnsExt(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.383UTC] - Creating async dns resolver async-dns with manager name SD-DNS
2021-04-02T14:38:25.386Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.386UTC] - Bootstrap using default `akka.discovery` method: AggregateServiceDiscovery
2021-04-02T14:38:25.394Z [info] akka.management.internal.HealthChecksImpl [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=HealthChecksImpl(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.394UTC] - Loading readiness checks List(NamedHealthCheck(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck))
2021-04-02T14:38:25.394Z [info] akka.management.internal.HealthChecksImpl [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=HealthChecksImpl(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.394UTC] - Loading liveness checks List()
2021-04-02T14:38:25.488Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.488UTC] - Binding Akka Management (HTTP) endpoint to: 192.168.0.5:8558
2021-04-02T14:38:25.541Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.541UTC] - Including HTTP management routes for ClusterHttpManagementRouteProvider
2021-04-02T14:38:25.581Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.581UTC] - Including HTTP management routes for ClusterBootstrap
2021-04-02T14:38:25.585Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.584UTC] - Using self contact point address: http://192.168.0.5:8558
2021-04-02T14:38:25.600Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.600UTC] - Including HTTP management routes for HealthCheckRoutes
2021-04-02T14:38:26.108Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.108UTC] - Bound Akka Management (HTTP) endpoint to: 192.168.0.5:8558
2021-04-02T14:38:26.154Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.154UTC] - Initiating bootstrap procedure using akka.discovery method...
2021-04-02T14:38:26.163Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.163UTC] - Locating service members. Using discovery [akka.discovery.aggregate.AggregateServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider]
2021-04-02T14:38:26.164Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.163UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:26.189Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application@192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application@192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.189UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:26.195Z [info] play.api.db.DefaultDBApi [] - Database [default] initialized
2021-04-02T14:38:26.200Z [info] play.api.db.HikariCPConnectionPool [] - Creating Pool for datasource 'default'
2021-04-02T14:38:26.207Z [info] com.zaxxer.hikari.HikariDataSource [] - HikariPool-1 - Starting...
2021-04-02T14:38:26.218Z [info] com.zaxxer.hikari.HikariDataSource [] - HikariPool-1 - Start completed.
2021-04-02T14:38:26.221Z [info] play.api.db.HikariCPConnectionPool [] - datasource [default] bound to JNDI as DefaultDS
2021-04-02T14:38:26.469Z [info] akka.cluster.sharding.ShardRegion [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-4, akkaSource=akka://application#192.168.0.5:25520/system/sharding/ProfileProcessor, sourceActorSystem=application, akkaTimestamp=14:38:26.469UTC] - ProfileProcessor: Idle entities will be passivated after [2.000 min]
2021-04-02T14:38:26.480Z [info] akka.cluster.sharding.typed.scaladsl.ClusterSharding [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ClusterSharding(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.480UTC] - Starting Shard Region [ProfileEntity]...
2021-04-02T14:38:26.483Z [info] akka.cluster.sharding.ShardRegion [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-3, akkaSource=akka://application#192.168.0.5:25520/system/sharding/ProfileEntity, sourceActorSystem=application, akkaTimestamp=14:38:26.483UTC] - ProfileEntity: Idle entities will be passivated after [2.000 min]
2021-04-02T14:38:26.534Z [info] play.api.Play [] - Application started (Prod) (no global state)
2021-04-02T14:38:26.553Z [info] play.core.server.AkkaHttpServer [] - Listening for HTTP on /0.0.0.0:9000
2021-04-02T14:38:27.188Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:27.187UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:27.363Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:27.363UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:27.369Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-19, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:27.369UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:28.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-19, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:28.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:28.563Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:28.563UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:28.564Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:28.564UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:29.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:29.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:29.724Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-22, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:29.723UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:29.729Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:29.729UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:30.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:30.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
It looks like the startup process is going completely wrong.
Inside the security application class, a user should be created with this code:
// assumes the usual typed cluster-sharding imports and scala.concurrent.duration._ for 30.seconds
val newUUID = UUID.randomUUID()
clusterSharding
  .entityRefFor(Profile.typedKey, newUUID.toString)
  .ask[ProfileConfirmation](reply =>
    CreateProfile(username, password, Some(EmailAddress(username)), Set(SuperAdminRole), PersonalData("", "", ""), reply)
  )(30.seconds)
  .map {
    case ProfileCmdAccepted(profile) => s"Profile ${profile.login} created"
    case ProfileCmdRejected(err)     => s"ERROR while default user init: $err"
    case a                           => s"a: $a"
  }
But it throws a timeout exception. I think the cluster service is not initialized. In the demo setup everything works.
Does anyone have an idea?
Okay, found it.
In the production conf I added:
lagom.services {
  security-service = "http://"${REMOTE_IP}":15000"
  binary-service = "http://"${REMOTE_IP}":15001"
}
lagom.cluster.bootstrap.enabled = false
In the Lagom application loader, in the load method, I mixed ConfigurationServiceLocatorComponents into my application.
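For illustration, a minimal sketch of such a loader using Lagom's Scala DSL; the names SecurityApplication and SecurityApplicationLoader are placeholders, not from the original post:
import com.lightbend.lagom.scaladsl.client.ConfigurationServiceLocatorComponents
import com.lightbend.lagom.scaladsl.devmode.LagomDevModeComponents
import com.lightbend.lagom.scaladsl.server._

class SecurityApplicationLoader extends LagomApplicationLoader {
  // Production: resolve other services from the lagom.services block shown above
  override def load(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with ConfigurationServiceLocatorComponents

  // Dev mode keeps using the Lagom dev-mode service locator
  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with LagomDevModeComponents
}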
And now, really important: Docker must be configured! I used Docker Compose.
First, add a network:
networks:
  panakeia-network:
    # driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
And now I give cluster.seed-nodes.0 the IP of the service, and I name the actor system! That is also important!
security:
  container_name: panakeia-security
  image: security-impl:latest
  environment:
    - APPLICATION_SECRET=****
    - POSTGRESQL_URL=jdbc:postgresql://postgres-database:5432/panakeia
    - POSTGRESQL_USERNAME=????
    - POSTGRESQL_PASSWORD=******
    - INIT_USERNAME=test@test.de
    - INIT_USERPASS=*******
    - JWK_KEY=******
    - REQUIRED_CONTACT_POINT_NR=1
    - KAFKA_BROKER=REALLY_IP/OR_KAFKA_SERVICE_NAME:9092
    - REMOTE_IP=????
    - JAVA_OPTS=-Dconfig.resource=prod-application.conf -Dplay.server.pidfile.path=/dev/null -Dakka.cluster.seed-nodes.0=akka://panakeia@172.28.1.5:25520 -Dplay.akka.actor-system=panakeia
  expose:
    - 9000
    - 15000
  ports:
    - "15000:15000"
    - "14999:9000"
  networks:
    panakeia-network:
      ipv4_address: 172.28.1.5
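For reference, the two Akka-related flags in JAVA_OPTS above are equivalent to the following prod-application.conf entries (a sketch; the key point is that the name in play.akka.actor-system must match the system name inside the seed-node address):
play.akka.actor-system = "panakeia"
akka.cluster.seed-nodes = ["akka://panakeia@172.28.1.5:25520"]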
This way it looks like it is possible to build a simple, small microservices cluster without complex systems like Kubernetes.

akka simple cluster with two seed nodes

I'm using Akka cluster 2.3.6.
I compiled two separate jars, and inside each I have a main class:
val configuration = ConfigFactory.load()
val bucket = configuration.getString("bucket")
val system = ActorSystem(bucket, configuration)
val resultDispatcherActor = system.actorOf(Props(new MyActor))
I'm running two separate jars:
java -Dconfig.file=poc.conf -jar poc.jar
where my poc.conf is the following:
akka {
  loglevel = INFO
  stdout-loglevel = INFO
  event-handlers = ["akka.event.Logging$DefaultLogger"]
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = ""
      host = "10.0.0.5"
      port = 2551
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://myCluster@10.0.0.5:2551",
      "akka.tcp://myCluster@10.0.0.5:2552"]
  }
}
Inside the netty.tcp block, the first application is assigned port 2551 and the second port 2552.
However, when I start both jars, each of them prints the following log:
[INFO] [11/16/2014 18:43:53.890] [main] [Remoting] Starting remoting
[INFO] [11/16/2014 18:43:54.739] [main] [Remoting] Remoting started; listening on addresses :[akka.tcp://myCluster#10.0.0.5:2551]
[INFO] [11/16/2014 18:43:54.757] [main] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Starting up...
[INFO] [11/16/2014 18:43:54.848] [main] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Registered cluster JMX MBean [akka:type=Cluster]
[INFO] [11/16/2014 18:43:54.848] [main] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Started up successfully
[INFO] [11/16/2014 18:43:54.858] [myCluster-akka.actor.default-dispatcher-15] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Metrics will be retreived from MBeans, and may be incorrect on some platforms. To increase metric accuracy add the 'sigar.jar' to the classpath and the appropriate platform-specific native libary to 'java.library.path'. Reason: java.lang.ClassNotFoundException: org.hyperic.sigar.Sigar
[INFO] [11/16/2014 18:43:54.863] [myCluster-akka.actor.default-dispatcher-15] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Metrics collection has started successfully
[WARN] [11/16/2014 18:43:54.963] [myCluster-akka.remote.default-remote-dispatcher-6] [Remoting] Tried to associate with unreachable remote address [akka.tcp://myCluster#10.0.0.5:2552]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: /10.0.0.5:2552
[INFO] [11/16/2014 18:43:54.972] [myCluster-akka.actor.default-dispatcher-16] [akka://myCluster/deadLetters] Message [akka.cluster.InternalClusterAction$InitJoin$] from Actor[akka://myCluster/system/cluster/core/daemon/firstSeedNodeProcess-1#-86755168] to Actor[akka://myCluster/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
What am I doing wrong?
You need to provide a port to the program when you run it. Your netty.tcp.port value is 2551 in both cases.
An example of how to do this:
// 0 means a random unused port will be assigned
val port = if (args.isEmpty) "0" else args(0)
// override the port, fall back to the rest of the config file
val config = ConfigFactory
  .parseString(s"akka.remote.netty.tcp.port=$port")
  .withFallback(ConfigFactory.load())
Also, see the cluster samples for an example on how to do this.
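With that change, the two nodes from the question could be started with distinct ports matching the seed-node list (a usage sketch, assuming the main method reads the port from args as above):
java -Dconfig.file=poc.conf -jar poc.jar 2551
java -Dconfig.file=poc.conf -jar poc.jar 2552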
Alternatively, make both seed-node entries use the same port, 2551, and configure the remoting transport and cluster dispatcher as below:
remote {
  transport = "akka.remote.netty.NettyRemoteTransport"
}
cluster {
  seed-nodes = [
    "akka.tcp://myCluster@10.0.0.5:2551",
    "akka.tcp://myCluster@10.0.0.5:2551"]
  # run the cluster's internal actors on the dedicated dispatcher below
  use-dispatcher = cluster-dispatcher
}
cluster-dispatcher {
  type = "Dispatcher"
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-max = 4
  }
}