Exception on spring application startup with spring-cloud-kubernetes config maps dependencies present - spring-cloud

I have a few Spring services which have both the Eureka client and spring-cloud-starter-kubernetes-fabric8-all dependencies. By default, Eureka is enabled and Kubernetes is disabled.
management:
  endpoints:
    web:
      exposure:
        include: '*'
eureka:
  client:
    enabled: true
    serviceUrl:
      defaultZone: http://localhost:8080/eureka/
spring:
  main:
    banner-mode: off
  application:
    name: service-one
  cloud:
    kubernetes:
      enabled: false
      config:
        enable-api: false
        enabled: false
      reload:
        enabled: false
  zipkin:
    enabled: false
When I start up the app, I get the following exception as a warning, even though Kubernetes is disabled.
Running - docker run --rm dhananjay12/demo-service-one:latest
2021-05-02 23:35:51.458 INFO [,,] 1 --- [ main] ubernetesProfileEnvironmentPostProcessor : Did not find service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace]. Ignoring.
2021-05-02 23:35:51.464 WARN [,,] 1 --- [ main] ubernetesProfileEnvironmentPostProcessor : Not running inside kubernetes. Skipping 'kubernetes' profile activation.
2021-05-02 23:35:51.980 WARN [,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
2021-05-02 23:35:51.984 WARN [,,] 1 --- [ main] io.fabric8.kubernetes.client.Config : Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
2021-05-02 23:35:51.987 WARN [,,] 1 --- [ main] o.s.c.k.f.Fabric8AutoConfiguration : No namespace has been detected. Please specify KUBERNETES_NAMESPACE env var, or use a later kubernetes version (1.3 or later)
2021-05-02 23:35:52.441 WARN [service-one,,] 1 --- [ main] s.c.k.f.c.Fabric8ConfigMapPropertySource : Can't read configMap with name: [service-one] in namespace:[null]. Ignoring.
io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [ConfigMap] with name: [service-one] in namespace: [null] failed.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64) ~[kubernetes-client-4.13.2.jar:na]
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:72) ~[kubernetes-client-4.13.2.jar:na]
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:225) ~[kubernetes-client-4.13.2.jar:na]
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:186) ~[kubernetes-client-4.13.2.jar:na]
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:84) ~[kubernetes-client-4.13.2.jar:na]
at org.springframework.cloud.kubernetes.fabric8.config.Fabric8ConfigMapPropertySource.getData(Fabric8ConfigMapPropertySource.java:61) ~[spring-cloud-kubernetes-fabric8-config-2.0.2.jar:2.0.2]
at org.springframework.cloud.kubernetes.fabric8.config.Fabric8ConfigMapPropertySource.<init>(Fabric8ConfigMapPropertySource.java:50) ~[spring-cloud-kubernetes-fabric8-config-2.0.2.jar:2.0.2]
at org.springframework.cloud.kubernetes.fabric8.config.Fabric8ConfigMapPropertySourceLocator.getMapPropertySource(Fabric8ConfigMapPropertySourceLocator.java:51) ~[spring-cloud-kubernetes-fabric8-config-2.0.2.jar:2.0.2]
at org.springframework.cloud.kubernetes.commons.config.ConfigMapPropertySourceLocator.getMapPropertySourceForSingleConfigMap(ConfigMapPropertySourceLocator.java:81) ~[spring-cloud-kubernetes-commons-2.0.2.jar:2.0.2]
at org.springframework.cloud.kubernetes.commons.config.ConfigMapPropertySourceLocator.lambda$locate$0(ConfigMapPropertySourceLocator.java:67) ~[spring-cloud-kubernetes-commons-2.0.2.jar:2.0.2]
at java.base/java.util.Collections$SingletonList.forEach(Unknown Source) ~[na:na]
at org.springframework.cloud.kubernetes.commons.config.ConfigMapPropertySourceLocator.locate(ConfigMapPropertySourceLocator.java:67) ~[spring-cloud-kubernetes-commons-2.0.2.jar:2.0.2]
at org.springframework.cloud.bootstrap.config.PropertySourceLocator.locateCollection(PropertySourceLocator.java:51) ~[spring-cloud-context-3.0.2.jar:3.0.2]
at org.springframework.cloud.bootstrap.config.PropertySourceLocator.locateCollection(PropertySourceLocator.java:47) ~[spring-cloud-context-3.0.2.jar:3.0.2]
at org.springframework.cloud.bootstrap.config.PropertySourceBootstrapConfiguration.initialize(PropertySourceBootstrapConfiguration.java:95) ~[spring-cloud-context-3.0.2.jar:3.0.2]
at org.springframework.boot.SpringApplication.applyInitializers(SpringApplication.java:650) ~[spring-boot-2.4.5.jar:2.4.5]
at org.springframework.boot.SpringApplication.prepareContext(SpringApplication.java:403) ~[spring-boot-2.4.5.jar:2.4.5]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338) ~[spring-boot-2.4.5.jar:2.4.5]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1340) ~[spring-boot-2.4.5.jar:2.4.5]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1329) ~[spring-boot-2.4.5.jar:2.4.5]
at com.mynotes.microservices.demo.serviceone.ServiceOneApplication.main(ServiceOneApplication.java:15) ~[classes/:na]
Caused by: java.net.UnknownHostException: kubernetes.default.svc: Name or service not known
The exception is not there if I put the same property in an environment variable.
Running - docker run --rm -e spring.cloud.kubernetes.config.enable-api=false dhananjay12/demo-service-one:latest
While the app does spin up and the rest works, can someone explain why this exception is there in the first place?
Code - https://github.com/dhananjay12/spring-microservice-demo/blob/master/service-one/src/main/resources/application.yml
Spring boot - 2.4.5, Spring cloud - 2020.0.2

On further analysis and after going through the docs: disabling these features must be done in bootstrap.yml - https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#kubernetes-ecosystem-awareness.
Of course, the environment variable takes precedence.
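A minimal bootstrap.yml sketch, using the same property names as the application.yml above (the point being that spring-cloud-kubernetes reads these during the bootstrap phase, so they only take effect here or as environment variables):

```yaml
# bootstrap.yml -- evaluated before application.yml, which is why the
# same flags in application.yml arrive too late to stop the ConfigMap lookup
spring:
  cloud:
    kubernetes:
      enabled: false
      config:
        enabled: false
        enable-api: false
      reload:
        enabled: false
```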

Related

Kubernetes : WARN datanode.DataNode: Problem connecting to server: namenode:9001

I am a novice to Kubernetes and I am currently trying to deploy Hadoop in Kubernetes. I followed this docker-compose file to deploy it: https://github.com/big-data-europe/docker-hadoop/blob/master/docker-compose-v3.yml Below is my YAML for the namenode:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hadoop-namenode
  labels:
    traefik.docker.network: hbase-namenode
    traefik.port: "9870"
  namespace: hdfs
spec:
  selector:
    matchLabels:
      app: hadoop-namenode
      traefik.docker.network: hbase-namenode
      traefik.port: "9870"
  template:
    metadata:
      labels:
        app: hadoop-namenode
        traefik.docker.network: hbase-namenode
        traefik.port: "9870"
    spec:
      nodeSelector:
        kubernetes.io/hostname: data-mesh-hdfs
      containers:
        - name: hadoop-namenode
          image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
          ports:
            - containerPort: 9870
          env:
            - name: CLUSTER_NAME
              value: test
          envFrom:
            - secretRef:
                name: hadoop-secret
          volumeMounts:
            - name: data-namenode
              mountPath: /hadoop/dfs/name
      volumes:
        - name: data-namenode
          persistentVolumeClaim:
            claimName: namenode-pvc
I created the PV and PVC for my pod and attached my service to the pod.
apiVersion: v1
kind: Service
metadata:
  name: service-namenode
  namespace: hdfs
spec:
  selector:
    traefik.docker.network: hbase-namenode
  type: NodePort
  ports:
    - name: namenode
      protocol: TCP
      port: 9870
      targetPort: 9870
When I deployed my pod hadoop-namenode, it was up and running correctly on my cluster. I had these logs:
2021-12-14 15:43:25,893 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2021-12-14 15:43:26,044 INFO namenode.NameNode: createNameNode []
2021-12-14 15:43:26,202 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2021-12-14 15:43:26,367 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2021-12-14 15:43:26,368 INFO impl.MetricsSystemImpl: NameNode metrics system started
2021-12-14 15:43:26,422 INFO namenode.NameNodeUtils: fs.defaultFS is hdfs://namenode:9001
2021-12-14 15:43:26,423 INFO namenode.NameNode: Clients should use namenode:9001 to access this namenode/service.
2021-12-14 15:43:26,634 INFO util.JvmPauseMonitor: Starting JVM pause monitor
2021-12-14 15:43:26,676 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:9870
And finally, when I set up my datanode and service using this file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hadoop-datanode
  labels:
    traefik.docker.network: hbase-datanode
    traefik.port: "9864"
  namespace: hdfs
spec:
  selector:
    matchLabels:
      app: hadoop-datanode
      traefik.docker.network: hbase-datanode
      traefik.port: "9864"
  template:
    metadata:
      labels:
        app: hadoop-datanode
        traefik.docker.network: hbase-datanode
        traefik.port: "9864"
    spec:
      nodeSelector:
        kubernetes.io/hostname: data-mesh-hdfs
      containers:
        - name: hadoop-datanode
          image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
          ports:
            - containerPort: 9864
          env:
            - name: SERVICE_PRECONDITION
              value: "10.47.0.1:9870"
          envFrom:
            - secretRef:
                name: hadoop-secret
          volumeMounts:
            - name: datanode
              mountPath: /hadoop/dfs/data
      volumes:
        - name: datanode
          persistentVolumeClaim:
            claimName: datanode-pvc
All my pods are up and running, but I got this error on my datanode.
kubectl get pod -n hdfs
NAME READY STATUS RESTARTS AGE
hadoop-datanode-7f9bdb4f54-4hh6t 1/1 Running 0 2d21h
hadoop-namenode-6bddbc7b6-h8bqk 1/1 Running 0 2d22h
2021-12-17 14:46:57,697 INFO server.AbstractConnector: Started ServerConnector@4b5c304e{HTTP/1.1,[http/1.1]}{localhost:41327}
2021-12-17 14:46:57,698 INFO server.Server: Started @2486ms
2021-12-17 14:46:57,943 INFO web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:9864
2021-12-17 14:46:57,952 INFO util.JvmPauseMonitor: Starting JVM pause monitor
2021-12-17 14:46:57,958 INFO datanode.DataNode: dnUserName = root
2021-12-17 14:46:57,958 INFO datanode.DataNode: supergroup = supergroup
2021-12-17 14:46:58,026 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue, queueCapacity: 1000, scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler, ipcBackoff: false.
2021-12-17 14:46:58,043 INFO ipc.Server: Starting Socket Reader #1 for port 9867
2021-12-17 14:46:58,310 INFO datanode.DataNode: Opened IPC server at /0.0.0.0:9867
2021-12-17 14:46:58,332 INFO datanode.DataNode: Refresh request received for nameservices: null
2021-12-17 14:46:58,359 WARN hdfs.DFSUtilClient: Namenode for null remains unresolved for ID null. Check your hdfs-site.xml file to ensure namenodes are configured properly.
2021-12-17 14:46:58,363 INFO datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2021-12-17 14:46:58,389 INFO datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to namenode:9001 starting to offer service
2021-12-17 14:46:58,408 INFO ipc.Server: IPC Server Responder: starting
2021-12-17 14:46:58,409 INFO ipc.Server: IPC Server listener on 9867: starting
2021-12-17 14:46:58,480 WARN datanode.DataNode: Problem connecting to server: namenode:9001
2021-12-17 14:47:03,481 WARN datanode.DataNode: Problem connecting to server: namenode:9001
2021-12-17 14:47:08,483 WARN datanode.DataNode: Problem connecting to server: namenode:9001
2021-12-17 14:47:13,484 WARN datanode.DataNode: Problem connecting to server: namenode:9001
2021-12-17 14:47:18,486 WARN datanode.DataNode: Problem connecting to server: namenode:9001
I use a Secret resource to set up the environment variables for the datanode and namenode, which gives this output for my datanode at startup:
Configuring core
- Setting hadoop.proxyuser.hue.hosts=*
- Setting fs.defaultFS=hdfs://namenode:9001
- Setting hadoop.http.staticuser.user=root
- Setting io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec
- Setting hadoop.proxyuser.hue.groups=*
Configuring hdfs
- Setting dfs.datanode.data.dir=file:///hadoop/dfs/data
- Setting dfs.namenode.datanode.registration.ip-hostname-check=false
- Setting dfs.webhdfs.enabled=true
- Setting dfs.permissions.enabled=false
Configuring yarn
- Setting yarn.timeline-service.enabled=true
- Setting yarn.scheduler.capacity.root.default.maximum-allocation-vcores=4
- Setting yarn.resourcemanager.system-metrics-publisher.enabled=true
- Setting yarn.resourcemanager.store.class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
- Setting yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage=98.5
- Setting yarn.log.server.url=http://historyserver:8188/applicationhistory/logs/
- Setting yarn.resourcemanager.fs.state-store.uri=/rmstate
- Setting yarn.timeline-service.generic-application-history.enabled=true
- Setting yarn.log-aggregation-enable=true
- Setting yarn.resourcemanager.hostname=resourcemanager
- Setting yarn.scheduler.capacity.root.default.maximum-allocation-mb=8192
- Setting yarn.nodemanager.aux-services=mapreduce_shuffle
- Setting yarn.resourcemanager.resource_tracker.address=resourcemanager:8031
- Setting yarn.timeline-service.hostname=historyserver
- Setting yarn.resourcemanager.scheduler.address=resourcemanager:8030
- Setting yarn.resourcemanager.address=resourcemanager:8032
- Setting mapred.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
- Setting yarn.nodemanager.remote-app-log-dir=/app-logs
- Setting yarn.resourcemanager.scheduler.class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
- Setting mapreduce.map.output.compress=true
- Setting yarn.nodemanager.resource.memory-mb=16384
- Setting yarn.resourcemanager.recovery.enabled=true
- Setting yarn.nodemanager.resource.cpu-vcores=8
Configuring httpfs
Configuring kms
Configuring mapred
- Setting mapreduce.map.java.opts=-Xmx3072m
- Setting mapreduce.reduce.memory.mb=8192
- Setting mapreduce.reduce.java.opts=-Xmx6144m
- Setting yarn.app.mapreduce.am.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
- Setting mapreduce.map.memory.mb=4096
- Setting mapred.child.java.opts=-Xmx4096m
- Setting mapreduce.reduce.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
- Setting mapreduce.framework.name=yarn
- Setting mapreduce.map.env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
Configuring for multihomed network
[1/100] 10.47.0.1:9001 is available.
2021-12-17 14:46:55,883 INFO datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hadoop-datanode-7f9bdb4f54-4mb8r/10.47.0.6
STARTUP_MSG: args = []
STARTUP_MSG: version = 3.2.1
If I look at the open ports on my namenode, I see this:
netstat -lpten
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:9001 0.0.0.0:* LISTEN 0 13617204 361/java
tcp 0 0 0.0.0.0:9870 0.0.0.0:* LISTEN 0 13617098 361/java
In the core-site.xml file on the datanode, I have the data below:
<configuration>
<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
<property><name>fs.defaultFS</name><value>hdfs://namenode:9001</value></property>
<property><name>hadoop.http.staticuser.user</name><value>root</value></property>
<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
<property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
</configuration>
Same content in the core-site.xml file of the namenode:
<configuration>
<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
<property><name>fs.defaultFS</name><value>hdfs://namenode:9001</value></property>
<property><name>hadoop.http.staticuser.user</name><value>root</value></property>
<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
<property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
</configuration>
Can anyone please help me identify the issue?
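One hedged observation on the config above (an educated guess, not a confirmed answer): fs.defaultFS is hdfs://namenode:9001, so the datanode tries to resolve the hostname namenode, but the only Service defined is named service-namenode, which would explain why namenode:9001 stays unreachable. A Service actually named namenode in the hdfs namespace would give that DNS name a target; a sketch (the app-label selector is an assumption, reusing the label from the namenode Deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: namenode          # matches the hostname in fs.defaultFS (hdfs://namenode:9001)
  namespace: hdfs
spec:
  selector:
    app: hadoop-namenode  # selects the namenode pods via their app label
  ports:
    - name: fs
      protocol: TCP
      port: 9001
      targetPort: 9001
```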

Creating docker compose for Spring boot app with Mongodb atlas

I am trying to create a docker-compose file for my Spring Boot app that uses MongoDB Atlas, but when I send a request it's not working locally.
When I try without Docker, it works fine locally.
In the logs, Mongo is not making a connection:
2021-07-02 10:05:38.867 INFO 1 --- [ main] com.example.demo.DemoAwsApplication : Starting DemoAwsApplication v0.0.1-SNAPSHOT using Java 11.0.10 on 08195c02d4c8 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)
2021-07-02 10:05:38.873 INFO 1 --- [ main] com.example.demo.DemoAwsApplication : No active profile set, falling back to default profiles: default
2021-07-02 10:05:40.170 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 5000 (http)
2021-07-02 10:05:40.189 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2021-07-02 10:05:40.189 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.43]
2021-07-02 10:05:40.274 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2021-07-02 10:05:40.275 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1315 ms
2021-07-02 10:05:40.589 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2021-07-02 10:05:40.873 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 5000 (http) with context path ''
2021-07-02 10:05:40.893 INFO 1 --- [ main] com.example.demo.DemoAwsApplication : Started DemoAwsApplication in 2.845 seconds (JVM running for 3.37)
2021-07-02 10:20:37.316 INFO 1 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
I hope someone can give me an example or some guidance.
docker-compose
version: "2.1"
services:
  app:
    image: idanovadia/demo-ecs:0.0.1-SNAPSHOT
    ports:
      - 8080:5000
response
{
  "timestamp": "2021-07-02T13:34:04.025+00:00",
  "status": 404,
  "error": "Not Found",
  "message": "",
  "path": "/api/auth/signin"
}
You don't need to define a port since you're using Atlas.
Remove => spring.data.mongodb.port=27017
And change the URI as below. You can keep only that in your application.properties:
spring.data.mongodb.uri=mongodb+srv://user:password@name.cvbdl.mongodb.net/test?retryWrites=true&w=majority
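To pass the same URI into the container from the compose file above, a sketch (this relies on Spring Boot's relaxed binding mapping the SPRING_DATA_MONGODB_URI environment variable to spring.data.mongodb.uri; the credentials and host are the same placeholders as in the URI above):

```yaml
version: "2.1"
services:
  app:
    image: idanovadia/demo-ecs:0.0.1-SNAPSHOT
    ports:
      - "8080:5000"
    environment:
      # relaxed binding maps this env var to spring.data.mongodb.uri
      SPRING_DATA_MONGODB_URI: "mongodb+srv://user:password@name.cvbdl.mongodb.net/test?retryWrites=true&w=majority"
```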

How to use aws elastic-search in Zeebe operate

I have deployed the Zeebe cluster and Zeebe Operate separately. In the Zeebe cluster, Elasticsearch is configured with the AWS ELK, but I'm facing an issue with Zeebe Operate.
Configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "zeebe-operate.fullname" . }}
data:
  application.yml: |
    # Operate configuration file
    camunda.operate:
      elasticsearch:
        host: {{ .Values.global.elasticsearch.host }}
        port: {{ .Values.global.elasticsearch.port }}
        username: {{ .Values.global.elasticsearch.username }}
        password: {{ .Values.global.elasticsearch.password }}
        prefix: zeebe-record-operate
ERROR:
2020-06-29 05:35:10.397 ERROR 6 --- [ main] o.c.o.e.ElasticsearchConnector : Error occurred while connecting to Elasticsearch: clustername [elasticsearch], https://xx.xx.xx.x.x.x.x.ap-south-1.es.amazonaws.com:443. Will be retried...
java.io.IOException: https://xxxxx-xxxx-xxx-xx-xxx-x..ap-south-1.es.amazonaws.com: Name or service not known
at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:964) ~[elasticsearch-rest-client-6.8.7.jar!/:6.8.7]
at org.elasticsearch.client.RestClient.performRequest(RestClient.java:233) ~[elasticsearch-rest-client-6.8.7.jar!/:6.8.7]
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1764) ~[elasticsearch-rest-high-level-client-6.8.8.jar!/:6.8.7]
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1734) ~[elasticsearch-rest-high-level-client-6.8.8.jar!/:6.8.7]
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1696) ~[elasticsearch-rest-high-level-client-6.8.8.jar!/:6.8.7]
at org.elasticsearch.client.ClusterClient.health(ClusterClient.java:146) ~[elasticsearch-rest-high-level-client-6.8.8.jar!/:6.8.7]
at org.camunda.operate.es.ElasticsearchConnector.checkHealth(ElasticsearchConnector.java:89) ~[camunda-operate-common-0.23.0.jar!/:?]
at org.camunda.operate.es.ElasticsearchConnector.createEsClient(ElasticsearchConnector.java:75) ~[camunda-operate-common-0.23.0.jar!/:?]
at org.camunda.operate.es.ElasticsearchConnector.esClient(ElasticsearchConnector.java:51) ~[camunda-operate-common-0.23.0.jar!/:?]
at org.camunda.operate.es.ElasticsearchConnector$$EnhancerBySpringCGLIB$$670b527.CGLIB$esClient$0() ~[camunda-operate-common-0.23.0.jar!/:?]
at org.camunda.operate.es.ElasticsearchConnector$$EnhancerBySpringCGLIB$$670b527$$FastClassBySpringCGLIB$$af2d84c1.invoke() ~[camunda-operate-common-0.23.0.jar!/:?]
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244) ~[spring-core-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:331) ~[spring-context-5.2.5.RELEASE.jar!/:5.2.5.RELEASE]
at org.camunda.operate.es.ElasticsearchConnector$$EnhancerBySpringCGLIB$$670b527.esClient() ~[camunda-o
The URL is accessible from the Zeebe cluster, but from Zeebe Operate it is not working, as you can see in the Zeebe broker logs.
Logs of Broker:
2020-06-23 08:23:29.759 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [7/10]: topology manager started in 1 ms
2020-06-23 08:23:29.761 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [8/10]: metric's server
2020-06-23 08:23:29.820 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [8/10]: metric's server started in 58 ms
2020-06-23 08:23:29.821 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [9/10]: leader management request handler
2020-06-23 08:23:29.826 [] [main] DEBUG io.zeebe.broker.system - Bootstrap Broker-0 [9/10]: leader management request handler started in 2 ms
2020-06-23 08:23:29.826 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [10/10]: zeebe partitions
2020-06-23 08:23:29.832 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 partitions [1/1]: partition 1
2020-06-23 08:23:30.231 [] [main] DEBUG io.zeebe.broker.exporter - Exporter configured with ElasticsearchExporterConfiguration{url='https://xxx.xxx.xxx.xx.xx.xx.xx.x.ap-south-1.es.amazonaws.com:443', index=IndexConfiguration{indexPrefix='zeebe-record', createTemplate=true, command=false, event=true, rejection=false, error=true, deployment=true, incident=true, job=true, message=false, messageSubscription=false, variable=true, variableDocument=true, workflowInstance=true, workflowInstanceCreation=false, workflowInstanceSubscription=false, ignoreVariablesAbove=8191}, bulk=BulkConfiguration{delay=5, size=1000}, authentication=AuthenticationConfiguration{username='kibana'}}
2020-06-23 08:23:30.236 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Removing follower partition service for partition PartitionId{id=1, group=raft-partition}
2020-06-23 08:23:30.325 [Broker-0-ZeebePartition-1] [Broker-0-zb-actors-1] DEBUG io.zeebe.broker.system - Partition role transitioning from null to LEADER
Thanks in Advance!!
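One hedged reading of the error above: the IOException names "https://…es.amazonaws.com" as the host that could not be resolved, which suggests the https:// scheme ended up inside the host value that Operate hands to the DNS resolver. If that is the case, the rendered application.yml would need the bare hostname only (all values below are placeholders, not taken from the actual deployment):

```yaml
camunda.operate:
  elasticsearch:
    # hostname only -- no "https://" prefix; the UnknownHostException above
    # shows the scheme being treated as part of the hostname
    host: vpc-mydomain.ap-south-1.es.amazonaws.com  # placeholder (assumption)
    port: 443
    username: kibana      # placeholder credentials (assumption)
    password: changeme
    prefix: zeebe-record-operate
```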

Istio blocking connection to MySQL

I would like to deploy the java petstore for kubernetes. In order to achieve this I have 2 simple deployments. The first one is the java web app and the second one is a MySQL database.
When istio is disabled the connection between the app and the DB works well.
Unfortunatly when the istio sidecar is injected the communication between the two stops working.
Here is the deployment file of the web app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jpetstoreweb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jpetstoreweb
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: jpetstoreweb
          image: wingardiumleviosa/petstore:v7
          env:
            - name: VERSION
              value: "1"
            - name: DB_URL
              value: "jpetstoredb-service"
            - name: DB_PORT
              value: "3306"
            - name: DB_NAME
              value: "jpetstore"
            - name: DB_USERNAME
              value: "jpetstore"
            - name: DB_PASSWORD
              value: "foobar"
          ports:
            - containerPort: 9080
          readinessProbe:
            httpGet:
              path: /
              port: 9080
            initialDelaySeconds: 10
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: jpetstoreweb-service
spec:
  selector:
    app: jpetstoreweb
  ports:
    - port: 80
      targetPort: 9080
---
And next, the deployment file of the MySQL database:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jpetstoredb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jpetstoredb
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: jpetstoredb
          image: wingardiumleviosa/petstoredb:v1
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "foobar"
            - name: MYSQL_DATABASE
              value: "jpetstore"
            - name: MYSQL_USER
              value: "jpetstore"
            - name: MYSQL_PASSWORD
              value: "foobar"
---
apiVersion: v1
kind: Service
metadata:
  name: jpetstoredb-service
spec:
  selector:
    app: jpetstoredb
  ports:
    - port: 3306
      targetPort: 3306
Finally, the error logs from the web app trying to connect to the DB:
Exception thrown by application class 'org.springframework.web.servlet.FrameworkServlet.processRequest:488'
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: Communication link failure: java.io.EOFException, underlying cause: null ** BEGIN NESTED EXCEPTION ** java.io.EOFException STACKTRACE: java.io.EOFException at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1395) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:1539) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:1930) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1168) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1279) at com.mysql.jdbc.MysqlIO.sqlQuery(MysqlIO.java:1225) at com.mysql.jdbc.Connection.execSQL(Connection.java:2278) at com.mysql.jdbc.Connection.execSQL(Connection.java:2237) at com.mysql.jdbc.Connection.execSQL(Connection.java:2218) at com.mysql.jdbc.Connection.setAutoCommit(Connection.java:548) at org.apache.commons.dbcp.DelegatingConnection.setAutoCommit(DelegatingConnection.java:331) at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setAutoCommit(PoolingDataSource.java:317) at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:221) at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:350) at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:261) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:101) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89) at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at com.sun.proxy.$Proxy28.getCategory(Unknown Source) at org.springframework.samples.jpetstore.web.spring.ViewCategoryController.handleRequest(ViewCategoryController.java:31) at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:874) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:808) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:476) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:431) at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1255) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:743) at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:440) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.invokeTarget(WebAppFilterChain.java:182) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:93) at com.ibm.ws.security.jaspi.JaspiServletFilter.doFilter(JaspiServletFilter.java:56) at com.ibm.ws.webcontainer.filter.FilterInstanceWrapper.doFilter(FilterInstanceWrapper.java:201) at com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:90) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:996) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1134) at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1005) at 
com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75) at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:927) at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1023) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.wrapHandlerAndExecute(HttpDispatcherLink.java:417) at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink.ready(HttpDispatcherLink.java:376) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleDiscrimination(HttpInboundLink.java:532) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.handleNewRequest(HttpInboundLink.java:466) at com.ibm.ws.http.channel.internal.inbound.HttpInboundLink.processRequest(HttpInboundLink.java:331) at com.ibm.ws.http.channel.internal.inbound.HttpICLReadCallback.complete(HttpICLReadCallback.java:70) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.requestComplete(WorkQueueManager.java:501) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.attemptIO(WorkQueueManager.java:571) at com.ibm.ws.tcpchannel.internal.WorkQueueManager.workerRun(WorkQueueManager.java:926) at com.ibm.ws.tcpchannel.internal.WorkQueueManager$Worker.run(WorkQueueManager.java:1015) at com.ibm.ws.threading.internal.ExecutorServiceImpl$RunnableWrapper.run(ExecutorServiceImpl.java:232) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.lang.Thread.run(Thread.java:812) ** END NESTED EXCEPTION **
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:488)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:431)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
Extract: Could not open JDBC Connection for transaction
Additional info:
1) I can curl the DB from the web app container and it answers correctly.
2) I use Cilium instead of Calico
3) I installed Istio using HELM
4) Kubernetes is installed on bare metal (no cloud provider)
5) kubectl get pods -n istio-system all istio pods are running
6) kubectl get pods -n kube-system all cilium pods are running
7) Istio is injected using kubectl apply -f <(~/istio-1.0.5/bin/istioctl kube-inject -f ~/jpetstore.yaml) -n foo. If I use any other method Istio is not injecting itself in the Web pod (But works for the DB pod, god knows why)
8) The DB pod is always happy and working well
9) Logs of the istio-proxy container inside the WebApp pod : kubectl logs jpetstoreweb-84c7d8964-s642k istio-proxy -n myns
2018-12-28T03:52:30.610101Z info Version root#6f6ea1061f2b-docker.io/istio-1.0.5-c1707e45e71c75d74bf3a5dec8c7086f32f32fad-Clean
2018-12-28T03:52:30.610167Z info Proxy role: model.Proxy{ClusterID:"", Type:"sidecar", IPAddress:"10.233.72.142", ID:"jpetstoreweb-84c7d8964-s642k.myns", Domain:"myns.svc.cluster.local", Metadata:map[string]string(nil)}
2018-12-28T03:52:30.611217Z info Effective config: binaryPath: /usr/local/bin/envoy
configPath: /etc/istio/proxy
connectTimeout: 10s
discoveryAddress: istio-pilot.istio-system:15007
discoveryRefreshDelay: 1s
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: jpetstoreweb
zipkinAddress: zipkin.istio-system:9411
2018-12-28T03:52:30.611249Z info Monitored certs: []envoy.CertSource{envoy.CertSource{Directory:"/etc/certs/", Files:[]string{"cert-chain.pem", "key.pem", "root-cert.pem"}}}
2018-12-28T03:52:30.611829Z info Starting proxy agent
2018-12-28T03:52:30.611902Z info Received new config, resetting budget
2018-12-28T03:52:30.611912Z info Reconciling configuration (budget 10)
2018-12-28T03:52:30.611926Z info Epoch 0 starting
2018-12-28T03:52:30.613236Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster jpetstoreweb --service-node sidecar~10.233.72.142~jpetstoreweb-84c7d8964-s642k.myns~myns.svc.cluster.local --max-obj-name-len 189 --allow-unknown-fields -l warn --v2-config-only]
[2018-12-28 03:52:30.630][20][info][main] external/envoy/source/server/server.cc:190] initializing epoch 0 (hot restart version=10.200.16384.256.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=4882536)
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:192] statically linked extensions:
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:194] access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:197] filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash,istio_authn,jwt-auth,mixer
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:200] filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:203] filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.rbac,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy,mixer
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:205] stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:207] tracers: envoy.dynamic.ot,envoy.lightstep,envoy.zipkin
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:210] transport_sockets.downstream: alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-12-28 03:52:30.631][20][info][main] external/envoy/source/server/server.cc:213] transport_sockets.upstream: alts,envoy.transport_sockets.capture,raw_buffer,tls
[2018-12-28 03:52:30.634][20][info][config] external/envoy/source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2018-12-28 03:52:30.638][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-12-28 03:52:30.638][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-12-28 03:52:30.638][20][info][config] external/envoy/source/server/configuration_impl.cc:60] loading 1 listener(s)
[2018-12-28 03:52:30.640][20][info][config] external/envoy/source/server/configuration_impl.cc:94] loading tracing configuration
[2018-12-28 03:52:30.640][20][info][config] external/envoy/source/server/configuration_impl.cc:103] loading tracing driver: envoy.zipkin
[2018-12-28 03:52:30.640][20][info][config] external/envoy/source/server/configuration_impl.cc:116] loading stats sink configuration
[2018-12-28 03:52:30.640][20][info][main] external/envoy/source/server/server.cc:432] starting main dispatch loop
[2018-12-28 03:52:32.010][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-12-28 03:52:32.011][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-12-28 03:52:34.691][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
[2018-12-28 03:52:34.691][20][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2018-12-28 03:52:38.483][20][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:130] cm init: initializing cds
[2018-12-28 03:53:01.596][20][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:494] add/update cluster outbound|443||kubernetes.default.svc.cluster.local during init
...
[2018-12-28T04:09:09.561Z] - 115 1548 6 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40318 10.233.72.142:9080 10.233.72.1:43098
[2018-12-28T04:09:14.555Z] - 115 1548 8 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40350 10.233.72.142:9080 10.233.72.1:43130
[2018-12-28T04:09:19.556Z] - 115 1548 5 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40364 10.233.72.142:9080 10.233.72.1:43144
[2018-12-28T04:09:24.558Z] - 115 1548 6 "127.0.0.1:9080" inbound|80||jpetstoreweb-service.myns.svc.cluster.local 127.0.0.1:40378 10.233.72.142:9080 10.233.72.1:43158
10) Using Istio 1.0.5 and kubernetes 1.13.0
All ideas are welcome ;-)
Thanks
So there really is an issue with Istio 1.0.5 and MySQL's JDBC driver.
The temporary solution is to delete the mesh policy resource as follows:
kubectl delete meshpolicies.authentication.istio.io default
As stated here and referencing this.
(FYI: I deleted the resource BEFORE deploying my petstore app.)
As of Istio 1.1.1 there is more data on this problem in the FAQ
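The FAQ's narrower workaround is to disable mTLS only for the database service instead of deleting the global mesh policy. A sketch, assuming the MySQL service is named jpetstoredb-service in namespace myns (both names are guesses from the pod names above; adjust to your deployment):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: jpetstoredb-mtls-off
  namespace: myns
spec:
  host: jpetstoredb-service.myns.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE   # send plain TCP to the DB; the mTLS handshake breaks the MySQL wire protocol
```

This keeps mTLS active for the rest of the mesh while exempting only the DB traffic.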

Jhipster, Unable to connect containerized mongodb

I'm trying to use Docker to run my JHipster microservices.
I had no problem when running without Docker, but now I'm facing problems trying to run my microservice with Docker.
Every time I execute
docker-compose -f app.yml up
on my UAA server, it shows this error:
uaa-app_1 | 2017-04-17 07:38:59.725 DEBUG 6 --- [ main] i.c.f.uaa.config.CacheConfiguration : No cache
uaa-app_1 | 2017-04-17 07:39:06.897 DEBUG 6 --- [ main] i.c.f.u.c.apidoc.SwaggerConfiguration : Starting Swagger
uaa-app_1 | 2017-04-17 07:39:06.914 DEBUG 6 --- [ main] i.c.f.u.c.apidoc.SwaggerConfiguration : Started Swagger in 16 ms
uaa-app_1 | 2017-04-17 07:39:07.004 DEBUG 6 --- [ main] i.c.f.uaa.config.DatabaseConfiguration : Configuring Mongobee
uaa-app_1 | 2017-04-17 07:39:07.020 INFO 6 --- [ main] com.github.mongobee.Mongobee : Mongobee has started the data migration sequence..
uaa-app_1 | 2017-04-17 07:39:37.056 WARN 6 --- [ main] ationConfigEmbeddedWebApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongobee' defined in class path resource [id/co/fifgroup/uaa/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
uaa-app_1 | 2017-04-17 07:39:37.074 INFO 6 --- [ main] i.c.f.uaa.config.CacheConfiguration : Closing Cache Manager
uaa-app_1 | 2017-04-17 07:39:37.129 ERROR 6 --- [ main] o.s.boot.SpringApplication : Application startup failed
uaa-app_1 |
uaa-app_1 | org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mongobee' defined in class path resource [id/co/fifgroup/uaa/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=localhost:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused (Connection refused)}}]
And this is my UAA server's app.yml inside the docker directory:
version: '2'
services:
  uaa-app:
    image: uaa
    external_links:
      - uaa-mongodb:mongodb
      - jhipster-registry:registry
    environment:
      - SPRING_PROFILES_ACTIVE=dev,swagger
      - SPRING_CLOUD_CONFIG_URI=http://admin:admin@registry:8761/config
      - SPRING_DATA_MONGODB_URI=mongodb://mongodb:27017
      - SPRING_DATA_MONGODB_DATABASE=uaa
      - JHIPSTER_SLEEP=15 # gives time for the database to boot before the application
  uaa-mongodb:
    extends:
      file: mongodb.yml
      service: uaa-mongodb
  jhipster-registry:
    extends:
      file: jhipster-registry.yml
      service: jhipster-registry
    environment:
      - SPRING_CLOUD_CONFIG_SERVER_NATIVE_SEARCH_LOCATIONS=file:./central-config/docker-config/
uaa-mongodb and jhipster-registry work fine with Docker, but my UAA server is unable to connect to uaa-mongodb.
And why does the error keep saying I'm using localhost:27017, even though I tried changing SPRING_DATA_MONGODB_URI and spring.data.mongodb.uri inside application-dev.yml and application-prod.yml to different values?
Can someone help me with this problem?
To answer my own question, this is all I did with my scripts.
I changed the app.yml file to:
version: '2'
services:
  uaa:
    image: uaa:latest
    external_links:
      - uaa-mongodb:mongodb
      - jhipster-registry:registry
    links:
      - uaa-mongodb:mongodb
    environment:
      - SPRING_PROFILES_ACTIVE=prod,swagger,test
      - SPRING_CLOUD_CONFIG_URI=http://admin:admin@registry:8761/config
      - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://admin:admin@registry:8761/config
      - SPRING_DATA_MONGODB_HOST=mongodb
      - SPRING_DATA_MONGODB_PORT=27017
      - SPRING_DATA_MONGODB_DATABASE=uaa
      - JHIPSTER_SLEEP=15 # gives time for the database to boot before the application
  jhipster-registry:
    extends:
      file: jhipster-registry.yml
      service: jhipster-registry
    environment:
      - SPRING_CLOUD_CONFIG_SERVER_NATIVE_SEARCH_LOCATIONS=file:./central-config/docker-config/
  uaa-mongodb:
    extends:
      file: mongodb.yml
      service: uaa-mongodb
And my application-prod.yml to:
eureka:
  instance:
    prefer-ip-address: true
  client:
    enabled: true
    healthcheck:
      enabled: true
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: ${EUREKA_CLIENT_SERVICEURL_DEFAULTZONE}
spring:
  devtools:
    restart:
      enabled: false
    livereload:
      enabled: false
  data:
    mongodb:
      host: ${SPRING_DATA_MONGODB_HOST}
      port: ${SPRING_DATA_MONGODB_PORT}
      database: ${SPRING_DATA_MONGODB_DATABASE}
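For what it's worth, Spring Boot's relaxed binding already maps the environment variable SPRING_DATA_MONGODB_HOST onto the property spring.data.mongodb.host, so the explicit ${...} placeholders are not strictly required. Assuming nothing else overrides these properties, the same section could be written with the compose alias directly:

```yaml
spring:
  data:
    mongodb:
      host: mongodb   # the alias created by the links entry in app.yml
      port: 27017
      database: uaa
```

The placeholder form does have one advantage: startup fails loudly if the environment variable is missing, instead of silently falling back to localhost.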
Now my UAA server successfully connects to the containerized MongoDB database.
But now I'm getting a new error:
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
uaa_1 | at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:111)
uaa_1 | at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56)
uaa_1 | at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$1.execute(EurekaHttpClientDecorator.java:59)
uaa_1 | at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77)
uaa_1 | at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.register(EurekaHttpClientDecorator.java:56)
uaa_1 | at com.netflix.discovery.DiscoveryClient.register(DiscoveryClient.java:815)
uaa_1 | at com.netflix.discovery.InstanceInfoReplicator.run(InstanceInfoReplicator.java:104)
uaa_1 | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
uaa_1 | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
uaa_1 | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
uaa_1 | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
uaa_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
uaa_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
uaa_1 | at java.lang.Thread.run(Thread.java:745)
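This TransportException usually means the Eureka URL is unreachable or wrong. In the compose file above, EUREKA_CLIENT_SERVICEURL_DEFAULTZONE points at .../config, which is the registry's config-server path; the JHipster registry normally serves Eureka under /eureka/. A hedged fix, assuming a standard JHipster registry setup:

```yaml
environment:
  # point the client at the Eureka endpoint, not the config-server endpoint
  - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://admin:admin@registry:8761/eureka/
```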