I am currently trying to set up Keycloak with a PostgreSQL database on minikube. The problem is that Keycloak can't connect to my Postgres DB and ends up in a CrashLoopBackOff.
Here is my configuration:
Postgres-deployment.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-volume
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/postgresql/data"
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-database
  labels:
    app: postgres-database
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: postgres-database
  template:
    metadata:
      labels:
        app: postgres-database
    spec:
      containers:
        - name: postgres-database
          env:
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-database
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-password
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-username
          image: postgres
          ports:
            - containerPort: 5431
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-storage
      restartPolicy: Always
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-volume-claim
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: postgres-database
  name: postgres-keycloak
spec:
  ports:
    - name: postgres-keycloak
      port: 5431
      targetPort: 5431
  selector:
    app: postgres-database
status:
  loadBalancer: {}
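As a sanity check, the Service can be probed directly, independently of Keycloak. This is only a sketch: the user, password, and database placeholders stand in for the values stored in `postgres-secret`, and note that the official `postgres` image listens on 5432 inside the container regardless of the `containerPort` declared above.

```shell
# Run a throwaway psql client inside the cluster and point it at the Service.
# <user>, <password>, <database> are placeholders for the postgres-secret values.
kubectl run psql-test --rm -it --image=postgres --restart=Never -- \
  psql "postgresql://<user>:<password>@postgres-keycloak:5431/<database>" -c 'SELECT 1;'
```

If this hangs or times out, the problem is between the Service and Postgres rather than in Keycloak itself.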
keycloak-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: keycloak
  name: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: keycloak
  template:
    metadata:
      labels:
        io.kompose.service: keycloak
    spec:
      containers:
        - env:
            - name: DB_ADDR
              value: postgres-keycloak
            - name: DB_DATABASE
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-database
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-password
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: postgres-username
            - name: DB_VENDOR
              value: POSTGRES
            - name: KEYCLOAK_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: keycloak-secret
                  key: keycloak-password
            - name: KEYCLOAK_USER
              valueFrom:
                secretKeyRef:
                  name: keycloak-secret
                  key: keycloak-username
          image: quay.io/keycloak/keycloak:10.0.1
          imagePullPolicy: ""
          name: keycloak
          ports:
            - containerPort: 8443
          resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
Keycloak-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: keycloak
spec:
  ports:
    - name: "8443"
      port: 8443
      targetPort: 8443
  selector:
    io.kompose.service: keycloak
  type: NodePort
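Since this is a NodePort service on minikube, the easiest way to reach it once the pod is healthy is via minikube's helper (a sketch; assumes the service name above):

```shell
# Print the URL (node IP + allocated NodePort) for the keycloak service.
minikube service keycloak --url
```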
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/keycloak-84fd96bbbf-wmpgt 0/1 CrashLoopBackOff 5 8m14s
pod/postgres-database-6467d78d5d-b6pj9 1/1 Running 0 8m13s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/keycloak NodePort 10.98.180.162 <none> 8443:31882/TCP 8m13s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8m36s
service/postgres-keycloak ClusterIP 10.100.233.109 <none> 5431/TCP 8m13s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/keycloak 0/1 1 0 8m14s
deployment.apps/postgres-database 1/1 1 1 8m13s
NAME DESIRED CURRENT READY AGE
replicaset.apps/keycloak-84fd96bbbf 1 1 0 8m14s
replicaset.apps/postgres-database-6467d78d5d 1 1 1 8m13s
And the error when running kubectl logs on the Keycloak pod:
23:27:43,693 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 71) WFLYCLINF0002: Started authorizationRevisions cache from keycloak container
23:27:43,714 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (ServerService Thread Pool -- 71) Node name: keycloak-84fd96bbbf-wmpgt, Site name: null
23:27:54,349 WARN [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (ServerService Thread Pool -- 71) IJ000604: Throwable while attempting to get a new connection: null: javax.resource.ResourceException: IJ031084: Unable to create connection
at org.jboss.ironjacamar.jdbcadapters#1.4.20.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:345)
at org.jboss.ironjacamar.jdbcadapters#1.4.20.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.getLocalManagedConnection(LocalManagedConnectionFactory.java:352)
at org.jboss.ironjacamar.jdbcadapters#1.4.20.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createManagedConnection(LocalManagedConnectionFactory.java:287)
at org.jboss.ironjacamar.impl#1.4.20.Final//org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.createConnectionEventListener(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:1326)
at org.jboss.ironjacamar.impl#1.4.20.Final//org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedDequeManagedConnectionPool.getConnection(SemaphoreConcurrentLinkedDequeManagedConnectionPool.java:499)
at org.jboss.ironjacamar.impl#1.4.20.Final//org.jboss.jca.core.connectionmanager.pool.AbstractPool.getSimpleConnection(AbstractPool.java:632)
at org.jboss.ironjacamar.impl#1.4.20.Final//org.jboss.jca.core.connectionmanager.pool.AbstractPool.getConnection(AbstractPool.java:604)
at org.jboss.ironjacamar.impl#1.4.20.Final//org.jboss.jca.core.connectionmanager.AbstractConnectionManager.getManagedConnection(AbstractConnectionManager.java:624)
at org.jboss.ironjacamar.impl#1.4.20.Final//org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.getManagedConnection(TxConnectionManagerImpl.java:440)
at org.jboss.ironjacamar.impl#1.4.20.Final//org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:789)
at org.jboss.ironjacamar.jdbcadapters#1.4.20.Final//org.jboss.jca.adapters.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:151)
at org.jboss.as.connector#19.1.0.Final//org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyDataSource.java:64)
at org.keycloak.keycloak-model-jpa#10.0.1//org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.getConnection(DefaultJpaConnectionProviderFactory.java:371)
at org.keycloak.keycloak-model-jpa#10.0.1//org.keycloak.connections.jpa.updater.liquibase.lock.LiquibaseDBLockProvider.lazyInit(LiquibaseDBLockProvider.java:65)
at org.keycloak.keycloak-model-jpa#10.0.1//org.keycloak.connections.jpa.updater.liquibase.lock.LiquibaseDBLockProvider.lambda$waitForLock$2(LiquibaseDBLockProvider.java:96)
at org.keycloak.keycloak-server-spi-private#10.0.1//org.keycloak.models.utils.KeycloakModelUtils.suspendJtaTransaction(KeycloakModelUtils.java:682)
at org.keycloak.keycloak-model-jpa#10.0.1//org.keycloak.connections.jpa.updater.liquibase.lock.LiquibaseDBLockProvider.waitForLock(LiquibaseDBLockProvider.java:94)
at org.keycloak.keycloak-services#10.0.1//org.keycloak.services.resources.KeycloakApplication$1.run(KeycloakApplication.java:145)
at org.keycloak.keycloak-server-spi-private#10.0.1//org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:227)
at org.keycloak.keycloak-services#10.0.1//org.keycloak.services.resources.KeycloakApplication.startup(KeycloakApplication.java:138)
at org.keycloak.keycloak-wildfly-extensions#10.0.1//org.keycloak.provider.wildfly.WildflyPlatform.onStartup(WildflyPlatform.java:29)
at org.keycloak.keycloak-services#10.0.1//org.keycloak.services.resources.KeycloakApplication.<init>(KeycloakApplication.java:125)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.jboss.resteasy.resteasy-jaxrs#3.11.2.Final//org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:152)
at org.jboss.resteasy.resteasy-jaxrs#3.11.2.Final//org.jboss.resteasy.spi.ResteasyProviderFactory.createProviderInstance(ResteasyProviderFactory.java:2805)
at org.jboss.resteasy.resteasy-jaxrs#3.11.2.Final//org.jboss.resteasy.spi.ResteasyDeployment.createApplication(ResteasyDeployment.java:369)
at org.jboss.resteasy.resteasy-jaxrs#3.11.2.Final//org.jboss.resteasy.spi.ResteasyDeployment.startInternal(ResteasyDeployment.java:281)
at org.jboss.resteasy.resteasy-jaxrs#3.11.2.Final//org.jboss.resteasy.spi.ResteasyDeployment.start(ResteasyDeployment.java:92)
at org.jboss.resteasy.resteasy-jaxrs#3.11.2.Final//org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.init(ServletContainerDispatcher.java:119)
at org.jboss.resteasy.resteasy-jaxrs#3.11.2.Final//org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.init(HttpServletDispatcher.java:36)
at io.undertow.servlet#2.1.0.Final//io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
at org.wildfly.extension.undertow#19.1.0.Final//org.wildfly.extension.undertow.security.RunAsLifecycleInterceptor.init(RunAsLifecycleInterceptor.java:78)
at io.undertow.servlet#2.1.0.Final//io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:103)
at io.undertow.servlet#2.1.0.Final//io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:305)
at io.undertow.servlet#2.1.0.Final//io.undertow.servlet.core.ManagedServlet.createServlet(ManagedServlet.java:145)
at io.undertow.servlet#2.1.0.Final//io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:585)
at io.undertow.servlet#2.1.0.Final//io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:556)
at io.undertow.servlet#2.1.0.Final//io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
at io.undertow.servlet#2.1.0.Final//io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow#19.1.0.Final//org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
at org.wildfly.extension.undertow#19.1.0.Final//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
at org.wildfly.extension.undertow#19.1.0.Final//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
at org.wildfly.extension.undertow#19.1.0.Final//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
at org.wildfly.extension.undertow#19.1.0.Final//org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1541)
at io.undertow.servlet#2.1.0.Final//io.undertow.servlet.core.DeploymentManagerImpl.start(DeploymentManagerImpl.java:598)
at org.wildfly.extension.undertow#19.1.0.Final//org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:97)
at org.wildfly.extension.undertow#19.1.0.Final//org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:78)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.jboss.threads#2.3.3.Final//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads#2.3.3.Final//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1982)
at org.jboss.threads#2.3.3.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
at org.jboss.threads#2.3.3.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.base/java.lang.Thread.run(Thread.java:834)
at org.jboss.threads#2.3.3.Final//org.jboss.threads.JBossThread.run(JBossThread.java:485)
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
at org.postgresql.jdbc#42.2.5//org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:292)
at org.postgresql.jdbc#42.2.5//org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
at org.postgresql.jdbc#42.2.5//org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
at org.postgresql.jdbc#42.2.5//org.postgresql.Driver.makeConnection(Driver.java:454)
at org.postgresql.jdbc#42.2.5//org.postgresql.Driver.connect(Driver.java:256)
at org.jboss.ironjacamar.jdbcadapters#1.4.20.Final//org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory.createLocalManagedConnection(LocalManagedConnectionFactory.java:321)
... 57 more
Caused by: java.net.SocketTimeoutException: connect timed out
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:609)
at org.postgresql.jdbc#42.2.5//org.postgresql.core.PGStream.<init>(PGStream.java:70)
at org.postgresql.jdbc#42.2.5//org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91)
at org.postgresql.jdbc#42.2.5//org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192)
... 62 more
23:27:54,357 FATAL [org.keycloak.services] (ServerService Thread Pool -- 71) java.lang.RuntimeException: Failed to connect to database
23:27:54,359 INFO [org.jboss.as.server] (Thread-1) WFLYSRV0220: Server shutdown has been requested via an OS signal
23:27:54,359 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002225: Deploying javax.ws.rs.core.Application: class org.keycloak.services.resources.KeycloakApplication
23:27:54,361 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002200: Adding class resource org.keycloak.services.resources.ThemeResource from Application class org.keycloak.services.resources.KeycloakApplication
23:27:54,361 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002205: Adding provider class org.keycloak.services.filters.KeycloakSecurityHeadersFilter from Application class org.keycloak.services.resources.KeycloakApplication
23:27:54,362 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002205: Adding provider class org.keycloak.services.filters.KeycloakTransactionCommitter from Application class org.keycloak.services.resources.KeycloakApplication
23:27:54,362 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002205: Adding provider class org.keycloak.services.error.KeycloakErrorHandler from Application class org.keycloak.services.resources.KeycloakApplication
23:27:54,362 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002200: Adding class resource org.keycloak.services.resources.JsResource from Application class org.keycloak.services.resources.KeycloakApplication
23:27:54,362 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002210: Adding provider singleton org.keycloak.services.util.ObjectMapperResolver from Application class org.keycloak.services.resources.KeycloakApplication
23:27:54,362 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002220: Adding singleton resource org.keycloak.services.resources.RobotsResource from Application class org.keycloak.services.resources.KeycloakApplication
23:27:54,363 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002220: Adding singleton resource org.keycloak.services.resources.WelcomeResource from Application class org.keycloak.services.resources.KeycloakApplication
23:27:54,363 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002220: Adding singleton resource org.keycloak.services.resources.RealmsResource from Application class org.keycloak.services.resources.KeycloakApplication
23:27:54,363 INFO [org.jboss.resteasy.resteasy_jaxrs.i18n] (ServerService Thread Pool -- 71) RESTEASY002220: Adding singleton resource org.keycloak.services.resources.admin.AdminRoot from Application class org.keycloak.services.resources.KeycloakApplication
23:27:54,393 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) WFLYJCA0010: Unbound data source [java:jboss/datasources/KeycloakDS]
23:27:54,396 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-2) ISPN000080: Disconnecting JGroups channel ejb
23:27:54,398 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000080: Disconnecting JGroups channel ejb
23:27:54,401 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-2) ISPN000080: Disconnecting JGroups channel ejb
23:27:54,402 INFO [org.jboss.as.mail.extension] (MSC service thread 1-1) WFLYMAIL0002: Unbound mail session [java:jboss/mail/Default]
23:27:54,420 INFO [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0008: Undertow HTTPS listener https suspending
23:27:54,430 INFO [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0007: Undertow HTTPS listener https stopped, was bound to 0.0.0.0:8443
23:27:54,430 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-1) WFLYJCA0019: Stopped Driver service with driver-name = postgresql
23:27:54,439 INFO [org.hibernate.validator.internal.util.Version] (ServerService Thread Pool -- 71) HV000001: Hibernate Validator 6.0.18.Final
23:27:54,457 INFO [org.jboss.resteasy.plugins.validation.i18n] (ServerService Thread Pool -- 71) RESTEASY008550: Unable to find CDI supporting ValidatorFactory. Using default ValidatorFactory
23:27:54,631 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 71) WFLYUT0021: Registered web context: '/auth' for server 'default-server'
23:27:54,633 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 61) WFLYUT0022: Unregistered web context: '/auth' from server 'default-server'
23:27:54,638 INFO [org.jboss.modcluster] (ServerService Thread Pool -- 65) MODCLUSTER000002: Initiating mod_cluster shutdown
23:27:54,638 INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0008: Undertow AJP listener ajp suspending
23:27:54,639 INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0007: Undertow AJP listener ajp stopped, was bound to 0.0.0.0:8009
23:27:54,641 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) WFLYJCA0010: Unbound data source [java:jboss/datasources/ExampleDS]
23:27:54,643 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-1) WFLYJCA0019: Stopped Driver service with driver-name = h2
23:27:54,652 INFO [org.wildfly.extension.undertow] (MSC service thread 1-2) WFLYUT0019: Host default-host stopping
23:27:54,657 INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0008: Undertow HTTP listener default suspending
23:27:54,659 INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0007: Undertow HTTP listener default stopped, was bound to 0.0.0.0:8080
23:27:54,659 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 65) WFLYCLINF0003: Stopped client-mappings cache from ejb container
23:27:54,660 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 61) WFLYCLINF0003: Stopped users cache from keycloak container
23:27:54,659 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 64) WFLYCLINF0003: Stopped realms cache from keycloak container
23:27:54,659 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 69) WFLYCLINF0003: Stopped keys cache from keycloak container
23:27:54,662 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 66) WFLYCLINF0003: Stopped authorization cache from keycloak container
23:27:54,662 INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0004: Undertow 2.1.0.Final stopping
23:27:54,671 INFO [org.jboss.as.server.deployment] (MSC service thread 1-2) WFLYSRV0028: Stopped deployment keycloak-server.war (runtime-name: keycloak-server.war) in 309ms
23:27:54,679 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 70) WFLYCLINF0003: Stopped loginFailures cache from keycloak container
23:27:54,680 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 60) WFLYCLINF0003: Stopped sessions cache from keycloak container
23:27:54,681 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 63) WFLYCLINF0003: Stopped offlineClientSessions cache from keycloak container
23:27:54,682 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 62) WFLYCLINF0003: Stopped offlineSessions cache from keycloak container
23:27:54,683 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 68) WFLYCLINF0003: Stopped clientSessions cache from keycloak container
23:27:54,684 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 67) WFLYCLINF0003: Stopped authenticationSessions cache from keycloak container
23:27:54,685 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 72) WFLYCLINF0003: Stopped actionTokens cache from keycloak container
23:27:54,715 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 71) WFLYCLINF0003: Stopped work cache from keycloak container
23:27:54,715 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000080: Disconnecting JGroups channel ejb
23:27:54,731 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-2) ISPN000080: Disconnecting JGroups channel ejb
23:27:54,760 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([("subsystem" => "microprofile-metrics-smallrye")]): java.lang.NullPointerException
at org.wildfly.extension.microprofile.metrics-smallrye#19.1.0.Final//org.wildfly.extension.microprofile.metrics.MicroProfileMetricsSubsystemAdd$2.execute(MicroProfileMetricsSubsystemAdd.java:86)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:999)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:743)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:467)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1413)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.ModelControllerImpl.boot(ModelControllerImpl.java:527)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractControllerService.boot(AbstractControllerService.java:515)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractControllerService.boot(AbstractControllerService.java:477)
at org.jboss.as.server#11.1.1.Final//org.jboss.as.server.ServerService.boot(ServerService.java:448)
at org.jboss.as.server#11.1.1.Final//org.jboss.as.server.ServerService.boot(ServerService.java:401)
at org.jboss.as.controller#11.1.1.Final//org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:416)
at java.base/java.lang.Thread.run(Thread.java:834)
You can try out these files, which I have tested and which are working for me: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:10.0.0
          env:
            - name: KEYCLOAK_USER
              value: "admin"
            - name: KEYCLOAK_PASSWORD
              value: "admin"
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
            - name: DB_VENDOR
              value: POSTGRES
            - name: DB_ADDR
              value: postgres
            - name: DB_DATABASE
              value: keycloak
            - name: DB_USER
              value: root
            - name: DB_PASSWORD
              value: password
            - name: KEYCLOAK_HTTP_PORT
              value: "80"
            - name: KEYCLOAK_HTTPS_PORT
              value: "443"
            - name: KEYCLOAK_HOSTNAME
              value: keycloak.harshmanvar.tk # replace with ingress URL
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
I figured out what the problem was: my DB wasn't starting. The reason was that my VM didn't have enough memory to fulfill the PVC.
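For anyone hitting the same thing, a sketch of how to confirm and fix this on minikube (the memory and CPU values here are illustrative; note that deleting the VM discards all cluster state):

```shell
# Compare what the node can actually hand out against what the pods request.
kubectl describe node minikube | grep -A 5 Allocatable

# Recreate the minikube VM with more resources.
minikube stop
minikube delete
minikube start --memory=4096 --cpus=2
```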
Related
Filebeat cannot connect to Logstash. This happened when I upgraded Kubernetes to 1.24; on this version the log path changed, because Docker is no longer used as the container runtime.
I only changed this path, and now it cannot connect to Logstash:
paths:
  - "/var/log/containers/*.log"
But when I run this on Kubernetes with the Docker runtime, it works:
paths:
  - /var/lib/docker/containers/*/*.log
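One detail worth checking on a containerd-based node: the files under `/var/log/containers` are symlinks into `/var/log/pods`, so Filebeat needs `symlinks: true` and both directories mounted into its pod to be able to follow them. A quick check on the node itself:

```shell
# Each entry should resolve to a file under /var/log/pods/<ns>_<pod>_<uid>/<container>/
ls -l /var/log/containers | head -3
```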
Here is my filebeat.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted `filebeat-prospectors` configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false

    processors:
      - add_cloud_metadata:

    output.logstash:
      hosts: ["log.xxx.com:5044"]
      ssl.certificate_authorities: ["/usr/share/filebeat/certs/elk-ca.pem"]
      ssl.certificate: "/usr/share/filebeat/certs/beats-client.pem"
      ssl.key: "/usr/share/filebeat/certs/beats-client.key"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: log
      enabled: true
      symlinks: true
      paths:
        - "/var/log/containers/*.log"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
            default_matchers.enabled: false
            matchers:
              - logs_path:
                  logs_path: /var/log/containers/
logs
2022-12-19T13:06:35.189Z INFO instance/beat.go:611 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2022-12-19T13:06:35.190Z INFO instance/beat.go:618 Beat UUID: 68776622-4469-4ff4-bf1a-442d13456c75
2022-12-19T13:06:35.190Z INFO [seccomp] seccomp/seccomp.go:116 Syscall filter successfully installed
2022-12-19T13:06:35.190Z INFO [beat] instance/beat.go:931 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "68776622-4469-4ff4-bf1a-442d13456c75"}}}
2022-12-19T13:06:35.191Z INFO [beat] instance/beat.go:940 Build info {"system_info": {"build": {"commit": "1d55b4bd9dbf106a4ad4bc34fe9ee425d922363b", "libbeat": "6.7.1", "time": "2019-04-02T15:01:15.000Z", "version": "6.7.1"}}}
2022-12-19T13:06:35.191Z INFO [beat] instance/beat.go:943 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.10.8"}}}
2022-12-19T13:06:35.193Z INFO [beat] instance/beat.go:947 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2022-12-15T13:22:10Z","containerized":true,"name":"filebeat-2gh9x","ip":["127.0.0.1/8","::1/128","10.220.48.144/32","fe80::74:65ff:fe3e:27f0/64"],"kernel_version":"3.10.0-1160.80.1.el7.x86_64","mac":["02:74:65:3e:27:f0"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":6,"patch":1810,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0}}}
2022-12-19T13:06:35.195Z INFO [beat] instance/beat.go:976 Process info {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2022-12-19T13:06:34.150Z"}}}
2022-12-19T13:06:35.195Z INFO instance/beat.go:280 Setup Beat: filebeat; Version: 6.7.1
2022-12-19T13:06:35.197Z INFO [publisher] pipeline/module.go:110 Beat name: filebeat-2gh9x
2022-12-19T13:06:35.197Z WARN [cfgwarn] beater/filebeat.go:89 DEPRECATED: config.prospectors are deprecated, Use `config.inputs` instead. Will be removed in version: 7.0.0
2022-12-19T13:06:35.198Z INFO instance/beat.go:402 filebeat start running.
2022-12-19T13:06:35.198Z INFO registrar/registrar.go:97 No registry file found under: /usr/share/filebeat/data/registry. Creating a new registry file.
2022-12-19T13:06:35.200Z INFO registrar/registrar.go:134 Loading registrar data from /usr/share/filebeat/data/registry
2022-12-19T13:06:35.200Z INFO registrar/registrar.go:141 States Loaded from registrar: 0
2022-12-19T13:06:35.200Z WARN beater/filebeat.go:367 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2022-12-19T13:06:35.200Z INFO crawler/crawler.go:72 Loading Inputs: 0
2022-12-19T13:06:35.245Z INFO [monitoring] log/log.go:117 Starting metrics logging every 30s
2022-12-19T13:06:35.484Z INFO kubernetes/util.go:86 kubernetes: Using pod name filebeat-2gh9x and namespace kube-system to discover kubernetes node
2022-12-19T13:06:35.576Z INFO add_cloud_metadata/add_cloud_metadata.go:345 add_cloud_metadata: hosting provider type detected as ec2, metadata={"availability_zone":"ap-southeast-1a","instance_id":"i-01eae62cd2b63a734","machine_type":"t3.xlarge","provider":"ec2","region":"ap-southeast-1"}
2022-12-19T13:06:35.730Z INFO kubernetes/util.go:93 kubernetes: Using node ip-10-12-xxx.ap-southeast-1.compute.internal discovered by in cluster pod node query
2022-12-19T13:06:35.731Z INFO kubernetes/watcher.go:182 kubernetes: Performing a resource sync for *v1.PodList
2022-12-19T13:06:35.766Z INFO kubernetes/watcher.go:198 kubernetes: Resource sync done
2022-12-19T13:06:35.767Z INFO kubernetes/watcher.go:242 kubernetes: Watching API for resource events
2022-12-19T13:06:35.767Z INFO log/input.go:138 Configured paths: [/var/log/containers/*.log]
2022-12-19T13:06:35.767Z INFO cfgfile/reload.go:150 Config reloader started
2022-12-19T13:06:35.767Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 0
2022-12-19T13:06:35.768Z INFO cfgfile/reload.go:150 Config reloader started
2022-12-19T13:06:35.768Z INFO cfgfile/reload.go:205 Loading of config files completed.
2022-12-19T13:06:35.770Z INFO kubernetes/util.go:86 kubernetes: Using pod name filebeat-2gh9x and namespace kube-system to discover kubernetes node
2022-12-19T13:06:35.928Z INFO kubernetes/util.go:93 kubernetes: Using node ip-10-12-xxx.ap-southeast-1.compute.interna discovered by in cluster pod node query
2022-12-19T13:06:35.928Z INFO kubernetes/watcher.go:182 kubernetes: Performing a resource sync for *v1.PodList
2022-12-19T13:06:35.934Z INFO kubernetes/watcher.go:198 kubernetes: Resource sync done
2022-12-19T13:06:35.934Z INFO kubernetes/watcher.go:242 kubernetes: Watching API for resource events
2022-12-19T13:06:35.935Z INFO log/input.go:138 Configured paths: [/var/log/containers/*.log]
2022-12-19T13:06:35.935Z INFO input/input.go:114 Starting input of type: log; ID: 12715084902863065571
2022-12-19T13:06:35.935Z INFO cfgfile/reload.go:205 Loading of config files completed.
UPDATE
It turned out to be an SELinux problem.
Any idea how to fix this without switching SELinux to permissive mode?
type=AVC msg=audit(1671531416.824:408337): avc: denied { read } for pid=10185 comm="filebeat" name="containers" dev="dm-3" ino=524357 scontext=system_u:system_r:container_t:s0:c579,c853 tcontext=system_u:object_r:container_log_t:s0 tclass=dir permissive=0
type=SYSCALL msg=audit(1671531416.824:408337): arch=c000003e syscall=257 success=no exit=-13 a0=ffffffffffffff9c a1=c4202b3b20 a2=80000 a3=0 items=0 ppid=10123 pid=10185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="filebeat" exe="/usr/share/filebeat/filebeat" subj=system_u:system_r:container_t:s0:c579,c853 key=(null)
type=PROCTITLE msg=audit(1671531416.824:408337): proctitle=66696C6562656174002D63002F6574632F66696C65626561742E796D6C002D65
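For an AVC denial like the one above, the usual alternative to going permissive is to generate a targeted local policy module from the audit log. A hedged sketch (assumes the `audit2allow`/`semodule` tooling from policycoreutils is installed; always review the generated .te file before loading it):

```shell
# Collect the filebeat denials and turn them into a local policy module
grep 'comm="filebeat"' /var/log/audit/audit.log | audit2allow -M filebeat_local

# Inspect filebeat_local.te first -- for the denial above it should contain
# roughly an allow rule granting container_t read access to container_log_t dirs

# Load the module so the filebeat container can read /var/log/containers
semodule -i filebeat_local.pp
```

This only grants the specific accesses that were denied, rather than disabling enforcement system-wide.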
I am trying to deploy Istio on virtual machines. In my current architecture I have a Kubernetes cluster running the Istio control plane (istiod) and a VM running the ratings service of the well-known Bookinfo sample application. I am following the multi-network implementation described here (https://istio.io/latest/docs/setup/install/virtual-machine/). I have followed each step of the documentation and successfully completed the setup.
Error:
When I try to call the service running in Kubernetes I get the error upstream connect error or disconnect/reset before headers. reset reason: connection failure
I can successfully call the service running on the VM from Kubernetes.
Log of the Istio services running on the VM:
2022-09-02T14:24:08.165388Z info FLAG: --domain=""
2022-09-02T14:24:08.165394Z info FLAG: --help="false"
2022-09-02T14:24:08.165396Z info FLAG: --log_as_json="false"
2022-09-02T14:24:08.165399Z info FLAG: --log_caller=""
2022-09-02T14:24:08.165401Z info FLAG: --log_output_level="dns:debug"
2022-09-02T14:24:08.165404Z info FLAG: --log_rotate=""
2022-09-02T14:24:08.165407Z info FLAG: --log_rotate_max_age="30"
2022-09-02T14:24:08.165409Z info FLAG: --log_rotate_max_backups="1000"
2022-09-02T14:24:08.165412Z info FLAG: --log_rotate_max_size="104857600"
2022-09-02T14:24:08.165414Z info FLAG: --log_stacktrace_level="default:none"
2022-09-02T14:24:08.165420Z info FLAG: --log_target="[stdout]"
2022-09-02T14:24:08.165423Z info FLAG: --meshConfig="./etc/istio/config/mesh"
2022-09-02T14:24:08.165426Z info FLAG: --outlierLogPath=""
2022-09-02T14:24:08.165428Z info FLAG: --proxyComponentLogLevel=""
2022-09-02T14:24:08.165431Z info FLAG: --proxyLogLevel="debug"
2022-09-02T14:24:08.165433Z info FLAG: --serviceCluster="istio-proxy"
2022-09-02T14:24:08.165436Z info FLAG: --stsPort="0"
2022-09-02T14:24:08.165438Z info FLAG: --templateFile=""
2022-09-02T14:24:08.165441Z info FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2022-09-02T14:24:08.165450Z info FLAG: --vklog="0"
2022-09-02T14:24:08.165457Z info Version 1.13.2-91533d04e894ff86b80acd6d7a4517b144f9e19a-Clean
2022-09-02T14:24:08.165587Z info Proxy role ips=[10.243.0.35 fe80::3cff:fe38:afc8] type=sidecar id=istio-on-vm-three.ratings domain=ratings.svc.cluster.local
2022-09-02T14:24:08.165626Z info Apply mesh config from file defaultConfig:
discoveryAddress: istiod.istio-system.svc:15012
meshId: mesh1
proxyMetadata:
CANONICAL_REVISION: latest
CANONICAL_SERVICE: ratings
ISTIO_META_AUTO_REGISTER_GROUP: ratings
ISTIO_META_CLUSTER_ID: cc90a48f0mfd7shso5g0
ISTIO_META_DNS_CAPTURE: "true"
ISTIO_META_MESH_ID: mesh1
ISTIO_META_NETWORK: ""
ISTIO_META_WORKLOAD_NAME: ratings
ISTIO_METAJSON_LABELS: '{"app":"ratings","service.istio.io/canonical-name":"ratings","service.istio.io/canonical-revision":"latest"}'
POD_NAMESPACE: ratings
SERVICE_ACCOUNT: bookinfo-ratings
TRUST_DOMAIN: cluster.local
tracing:
zipkin:
address: zipkin.istio-system:9411
2022-09-02T14:24:08.166897Z info Apply proxy config from env
serviceCluster: ratings.ratings
controlPlaneAuthPolicy: MUTUAL_TLS
2022-09-02T14:24:08.167480Z info Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
meshId: mesh1
parentShutdownDuration: 60s
proxyAdminPort: 15000
proxyMetadata:
CANONICAL_REVISION: latest
CANONICAL_SERVICE: ratings
ISTIO_META_AUTO_REGISTER_GROUP: ratings
ISTIO_META_CLUSTER_ID: cc90a48f0mfd7shso5g0
ISTIO_META_DNS_CAPTURE: "true"
ISTIO_META_MESH_ID: mesh1
ISTIO_META_NETWORK: ""
ISTIO_META_WORKLOAD_NAME: ratings
ISTIO_METAJSON_LABELS: '{"app":"ratings","service.istio.io/canonical-name":"ratings","service.istio.io/canonical-revision":"latest"}'
POD_NAMESPACE: ratings
SERVICE_ACCOUNT: bookinfo-ratings
TRUST_DOMAIN: cluster.local
serviceCluster: ratings.ratings
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s
tracing:
zipkin:
address: zipkin.istio-system:9411
2022-09-02T14:24:08.167495Z info JWT policy is third-party-jwt
2022-09-02T14:24:13.167597Z info timed out waiting for platform detection, treating it as Unknown
2022-09-02T14:24:13.167892Z info Opening status port 15020
2022-09-02T14:24:13.168029Z debug dns initialized DNS search=[.] servers=[127.0.0.53:53]
2022-09-02T14:24:13.169553Z info dns Starting local udp DNS server on 127.0.0.1:15053
2022-09-02T14:24:13.169584Z info dns Starting local tcp DNS server on 127.0.0.1:15053
2022-09-02T14:24:13.169628Z info CA Endpoint istiod.istio-system.svc:15012, provider Citadel
2022-09-02T14:24:13.169647Z info Using CA istiod.istio-system.svc:15012 cert with certs: /etc/certs/root-cert.pem
2022-09-02T14:24:13.169782Z info citadelclient Citadel client using custom root cert: /etc/certs/root-cert.pem
2022-09-02T14:24:13.182361Z info ads All caches have been synced up in 5.02146778s, marking server ready
2022-09-02T14:24:13.182736Z info sds SDS server for workload certificates started, listening on "etc/istio/proxy/SDS"
2022-09-02T14:24:13.182795Z info xdsproxy Initializing with upstream address "istiod.istio-system.svc:15012" and cluster "cc90a48f0mfd7shso5g0"
2022-09-02T14:24:13.182770Z info sds Starting SDS grpc server
2022-09-02T14:24:13.183203Z info starting Http service at 127.0.0.1:15004
2022-09-02T14:24:13.184810Z info Pilot SAN: [istiod.istio-system.svc]
2022-09-02T14:24:13.186415Z info Starting proxy agent
2022-09-02T14:24:13.186444Z info Epoch 0 starting
2022-09-02T14:24:13.186463Z info Envoy command: [-c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --drain-strategy immediate --parent-shutdown-time-s 60 --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --log-format %Y-%m-%dT%T.%fZ %l envoy %n %v -l debug --concurrency 2]
2022-09-02T14:24:13.264923Z info xdsproxy connected to upstream XDS server: istiod.istio-system.svc:15012
2022-09-02T14:24:13.284519Z info cache generated new workload certificate latency=101.82115ms ttl=23h59m59.715492792s
2022-09-02T14:24:13.284554Z info cache Root cert has changed, start rotating root cert
2022-09-02T14:24:13.284578Z info ads XDS: Incremental Pushing:0 ConnectedEndpoints:0 Version:
2022-09-02T14:24:13.284993Z info cache returned workload trust anchor from cache ttl=23h59m59.715012276s
2022-09-02T14:24:13.327799Z info ads ADS: new connection for node:istio-on-vm-three.ratings-1
2022-09-02T14:24:13.327908Z info cache returned workload certificate from cache ttl=23h59m59.672096732s
2022-09-02T14:24:13.328260Z info ads SDS: PUSH request for node:istio-on-vm-three.ratings resources:1 size:4.0kB resource:default
2022-09-02T14:24:13.367689Z info ads ADS: new connection for node:istio-on-vm-three.ratings-2
2022-09-02T14:24:13.367769Z info cache returned workload trust anchor from cache ttl=23h59m59.63223465s
2022-09-02T14:24:13.367948Z info ads SDS: PUSH request for node:istio-on-vm-three.ratings resources:1 size:1.1kB resource:ROOTCA
2022-09-02T14:24:13.387123Z debug dns updated lookup table with 96 hosts
2022-09-02T14:24:22.280792Z info Agent draining Proxy
2022-09-02T14:24:22.280825Z info Status server has successfully terminated
2022-09-02T14:24:22.281118Z error accept tcp [::]:15020: use of closed network connection
2022-09-02T14:24:22.282028Z info Graceful termination period is 5s, starting...
2022-09-02T14:24:27.282551Z info Graceful termination period complete, terminating remaining proxies.
2022-09-02T14:24:27.282610Z warn Aborted all epochs
2022-09-02T14:24:27.282622Z warn Aborting epoch 0
2022-09-02T14:24:27.282889Z info Epoch 0 aborted normally
2022-09-02T14:24:27.282899Z info Agent has successfully terminated
2022-09-02T14:24:57.386419Z info FLAG: --concurrency="0"
2022-09-02T14:24:57.386463Z info FLAG: --domain=""
2022-09-02T14:24:57.386471Z info FLAG: --help="false"
2022-09-02T14:24:57.386474Z info FLAG: --log_as_json="false"
2022-09-02T14:24:57.386477Z info FLAG: --log_caller=""
2022-09-02T14:24:57.386480Z info FLAG: --log_output_level="dns:debug"
2022-09-02T14:24:57.386482Z info FLAG: --log_rotate=""
2022-09-02T14:24:57.386486Z info FLAG: --log_rotate_max_age="30"
2022-09-02T14:24:57.386489Z info FLAG: --log_rotate_max_backups="1000"
2022-09-02T14:24:57.386492Z info FLAG: --log_rotate_max_size="104857600"
2022-09-02T14:24:57.386495Z info FLAG: --log_stacktrace_level="default:none"
2022-09-02T14:24:57.386504Z info FLAG: --log_target="[stdout]"
2022-09-02T14:24:57.386507Z info FLAG: --meshConfig="./etc/istio/config/mesh"
2022-09-02T14:24:57.386510Z info FLAG: --outlierLogPath=""
2022-09-02T14:24:57.386512Z info FLAG: --proxyComponentLogLevel=""
2022-09-02T14:24:57.386515Z info FLAG: --proxyLogLevel="debug"
2022-09-02T14:24:57.386518Z info FLAG: --serviceCluster="istio-proxy"
2022-09-02T14:24:57.386521Z info FLAG: --stsPort="0"
2022-09-02T14:24:57.386533Z info FLAG: --templateFile=""
2022-09-02T14:24:57.386544Z info FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2022-09-02T14:24:57.386553Z info FLAG: --vklog="0"
2022-09-02T14:24:57.386559Z info Version 1.13.2-91533d04e894ff86b80acd6d7a4517b144f9e19a-Clean
2022-09-02T14:24:57.386705Z info Proxy role ips=[10.243.0.35 fe80::3cff:fe38:afc8] type=sidecar id=istio-on-vm-three.ratings domain=ratings.svc.cluster.local
2022-09-02T14:24:57.386749Z info Apply mesh config from file defaultConfig:
discoveryAddress: istiod.istio-system.svc:15012
meshId: mesh1
proxyMetadata:
CANONICAL_REVISION: latest
CANONICAL_SERVICE: ratings
ISTIO_META_AUTO_REGISTER_GROUP: ratings
ISTIO_META_CLUSTER_ID: cc90a48f0mfd7shso5g0
ISTIO_META_DNS_CAPTURE: "true"
ISTIO_META_MESH_ID: mesh1
ISTIO_META_NETWORK: ""
ISTIO_META_WORKLOAD_NAME: ratings
ISTIO_METAJSON_LABELS: '{"app":"ratings","service.istio.io/canonical-name":"ratings","service.istio.io/canonical-revision":"latest"}'
POD_NAMESPACE: ratings
SERVICE_ACCOUNT: bookinfo-ratings
TRUST_DOMAIN: cluster.local
tracing:
zipkin:
address: zipkin.istio-system:9411
2022-09-02T14:24:57.387852Z info Apply proxy config from env
serviceCluster: ratings.ratings
controlPlaneAuthPolicy: MUTUAL_TLS
2022-09-02T14:24:57.388363Z info Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: ./etc/istio/proxy
controlPlaneAuthPolicy: MUTUAL_TLS
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
meshId: mesh1
parentShutdownDuration: 60s
proxyAdminPort: 15000
proxyMetadata:
CANONICAL_REVISION: latest
CANONICAL_SERVICE: ratings
ISTIO_META_AUTO_REGISTER_GROUP: ratings
ISTIO_META_CLUSTER_ID: cc90a48f0mfd7shso5g0
ISTIO_META_DNS_CAPTURE: "true"
ISTIO_META_MESH_ID: mesh1
ISTIO_META_NETWORK: ""
ISTIO_META_WORKLOAD_NAME: ratings
ISTIO_METAJSON_LABELS: '{"app":"ratings","service.istio.io/canonical-name":"ratings","service.istio.io/canonical-revision":"latest"}'
POD_NAMESPACE: ratings
SERVICE_ACCOUNT: bookinfo-ratings
TRUST_DOMAIN: cluster.local
serviceCluster: ratings.ratings
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s
tracing:
zipkin:
address: zipkin.istio-system:9411
2022-09-02T14:24:57.388378Z info JWT policy is third-party-jwt
2022-09-02T14:25:02.388947Z info timed out waiting for platform detection, treating it as Unknown
2022-09-02T14:25:02.389180Z debug dns initialized DNS search=[.] servers=[127.0.0.53:53]
2022-09-02T14:25:02.389248Z info dns Starting local udp DNS server on 127.0.0.1:15053
2022-09-02T14:25:02.389249Z info Opening status port 15020
2022-09-02T14:25:02.389413Z info dns Starting local tcp DNS server on 127.0.0.1:15053
2022-09-02T14:25:02.389432Z info CA Endpoint istiod.istio-system.svc:15012, provider Citadel
2022-09-02T14:25:02.389445Z info Using CA istiod.istio-system.svc:15012 cert with certs: /etc/certs/root-cert.pem
2022-09-02T14:25:02.389532Z info citadelclient Citadel client using custom root cert: /etc/certs/root-cert.pem
2022-09-02T14:25:02.402154Z info ads All caches have been synced up in 5.019952409s, marking server ready
2022-09-02T14:25:02.402449Z info sds SDS server for workload certificates started, listening on "etc/istio/proxy/SDS"
2022-09-02T14:25:02.402475Z info xdsproxy Initializing with upstream address "istiod.istio-system.svc:15012" and cluster "cc90a48f0mfd7shso5g0"
2022-09-02T14:25:02.402487Z info sds Starting SDS grpc server
2022-09-02T14:25:02.402794Z info starting Http service at 127.0.0.1:15004
2022-09-02T14:25:02.403926Z info Pilot SAN: [istiod.istio-system.svc]
2022-09-02T14:25:02.405489Z info Starting proxy agent
2022-09-02T14:25:02.405522Z info Epoch 0 starting
2022-09-02T14:25:02.405560Z info Envoy command: [-c etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --drain-strategy immediate --parent-shutdown-time-s 60 --local-address-ip-version v4 --file-flush-interval-msec 1000 --disable-hot-restart --log-format %Y-%m-%dT%T.%fZ %l envoy %n %v -l debug --concurrency 2]
2022-09-02T14:25:02.480867Z info xdsproxy connected to upstream XDS server: istiod.istio-system.svc:15012
2022-09-02T14:25:02.552937Z info ads ADS: new connection for node:istio-on-vm-three.ratings-1
2022-09-02T14:25:02.592884Z info ads ADS: new connection for node:istio-on-vm-three.ratings-2
2022-09-02T14:25:02.602362Z info cache generated new workload certificate latency=199.854356ms ttl=23h59m59.397649371s
2022-09-02T14:25:02.602401Z info cache Root cert has changed, start rotating root cert
2022-09-02T14:25:02.602421Z info ads XDS: Incremental Pushing:0 ConnectedEndpoints:2 Version:
2022-09-02T14:25:02.602531Z info cache returned workload trust anchor from cache ttl=23h59m59.397477611s
2022-09-02T14:25:02.602586Z info cache returned workload certificate from cache ttl=23h59m59.397417006s
2022-09-02T14:25:02.602881Z info cache returned workload trust anchor from cache ttl=23h59m59.397122534s
2022-09-02T14:25:02.604303Z info ads SDS: PUSH request for node:istio-on-vm-three.ratings resources:1 size:1.1kB resource:ROOTCA
2022-09-02T14:25:02.604361Z info cache returned workload trust anchor from cache ttl=23h59m59.395642519s
2022-09-02T14:25:02.604393Z info ads SDS: PUSH for node:istio-on-vm-three.ratings resources:1 size:1.1kB resource:ROOTCA
2022-09-02T14:25:02.604384Z info ads SDS: PUSH request for node:istio-on-vm-three.ratings resources:1 size:4.0kB resource:default
2022-09-02T14:25:02.622631Z debug dns updated lookup table with 96 hosts
2022-09-02T14:25:04.329218Z debug dns request ;; opcode: QUERY, status: NOERROR, id: 24280
;; flags: rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; QUESTION SECTION:
;details.default.svc. IN AAAA
;; ADDITIONAL SECTION:
;; OPT PSEUDOSECTION:
; EDNS: version 0; flags: ; udp: 1200
protocol=udp edns=true id=6240baac-c243-45be-9a10-dfe500a83cfa
2022-09-02T14:25:04.329282Z debug dns response for hostname "details.default.svc." (found=true): ;; opcode: QUERY, status: NOERROR, id: 24280
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;details.default.svc. IN AAAA
protocol=udp edns=true id=6240baac-c243-45be-9a10-dfe500a83cfa
2022-09-02T14:25:04.329305Z debug dns request ;; opcode: QUERY, status: NOERROR, id: 17619
;; flags: rd ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; QUESTION SECTION:
;details.default.svc. IN A
;; ADDITIONAL SECTION:
;; OPT PSEUDOSECTION:
; EDNS: version 0; flags: ; udp: 1200
protocol=udp edns=true id=30fd3d3c-efed-4a27-b8ba-113f56efb67d
2022-09-02T14:25:04.329371Z debug dns response for hostname "details.default.svc." (found=true): ;; opcode: QUERY, status: NOERROR, id: 17619
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;details.default.svc. IN A
;; ANSWER SECTION:
details.default.svc. 30 IN A 172.21.199.92
protocol=udp edns=true id=30fd3d3c-efed-4a27-b8ba-113f56efb67d
Gateway configuration for istiod
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"istiod-gateway","namespace":"istio-system"},"spec":{"selector":{"istio":"eastwestgateway"},"servers":[{"hosts":["*"],"port":{"name":"tls-istiod","number":15012,"protocol":"tls"},"tls":{"mode":"PASSTHROUGH"}},{"hosts":["*"],"port":{"name":"tls-istiodwebhook","number":15017,"protocol":"tls"},"tls":{"mode":"PASSTHROUGH"}}]}}
  creationTimestamp: '2022-09-02T13:54:17Z'
  generation: 1
  managedFields:
    - apiVersion: networking.istio.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:selector:
            .: {}
            f:istio: {}
          f:servers: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: '2022-09-02T13:54:17Z'
  name: istiod-gateway
  namespace: istio-system
  resourceVersion: '3685'
  uid: 23f776c9-a4d1-43a7-8992-72be4f933d9d
spec:
  selector:
    istio: eastwestgateway
  servers:
    - hosts:
        - '*'
      port:
        name: tls-istiod
        number: 15012
        protocol: tls
      tls:
        mode: PASSTHROUGH
    - hosts:
        - '*'
      port:
        name: tls-istiodwebhook
        number: 15017
        protocol: tls
      tls:
        mode: PASSTHROUGH
Virtual service for istiod
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"annotations":{},"name":"istiod-vs","namespace":"istio-system"},"spec":{"gateways":["istiod-gateway"],"hosts":["*"],"tls":[{"match":[{"port":15012,"sniHosts":["*"]}],"route":[{"destination":{"host":"istiod.istio-system.svc.cluster.local","port":{"number":15012}}}]},{"match":[{"port":15017,"sniHosts":["*"]}],"route":[{"destination":{"host":"istiod.istio-system.svc.cluster.local","port":{"number":443}}}]}]}}
  creationTimestamp: '2022-09-02T13:54:17Z'
  generation: 1
  managedFields:
    - apiVersion: networking.istio.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:gateways: {}
          f:hosts: {}
          f:tls: {}
      manager: kubectl-client-side-apply
      operation: Update
      time: '2022-09-02T13:54:17Z'
  name: istiod-vs
  namespace: istio-system
  resourceVersion: '3686'
  uid: d1b88fde-20a3-48dd-a549-dfe77407e206
spec:
  gateways:
    - istiod-gateway
  hosts:
    - '*'
  tls:
    - match:
        - port: 15012
          sniHosts:
            - '*'
      route:
        - destination:
            host: istiod.istio-system.svc.cluster.local
            port:
              number: 15012
    - match:
        - port: 15017
          sniHosts:
            - '*'
      route:
        - destination:
            host: istiod.istio-system.svc.cluster.local
            port:
              number: 443
Please let me know if you need more information to debug.
After a lot of debugging and trial and error I found and solved the problem. First, the variables in the WorkloadGroup definition are not explained properly in the official Istio documentation. The docs say we need to specify the network of the VM, but not which network, since a VM can have interfaces mapped to both a public and a private network. The solution is to specify the IP of the default network interface; in my case the eth0 interface was mapped to the private IP of the VM, so my WorkloadGroup definition looked like this:
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: "${VM_APP}"
  namespace: "${VM_NAMESPACE}"
spec:
  metadata:
    labels:
      app: "${VM_APP}"
  template:
    serviceAccount: "${SERVICE_ACCOUNT}"
    network: "${VM'S_PRIVATE_IP}"
  probe:
    periodSeconds: 5
    initialDelaySeconds: 1
    httpGet:
      port: 8080
      path: /ready
Second, the command provided in the docs to create the workload entry is incomplete. To get mesh expansion working in a multi-network mesh, the command should be:
istioctl x workload entry configure -f workloadgroup.yaml -o "${WORK_DIR}" --clusterID "${CLUSTER}" --ingressIP ${EAST_WEST_GATEWAY_IP_ADDRESS} --externalIP ${PRIVATE_IP_OF_THE_VM or ETH0_IP_ADDRESS} --autoregister
I have a custom SPI JavaScript provider, packaged in a .jar file as described in the official Keycloak docs.
For local development, I'm using the jboss/keycloak Docker image via a docker-compose file, with a volume mapping to the standalone/deployments folder.
The package is deployed and works fine, but Keycloak keeps redeploying the same file every 5 seconds:
11:43:58,304 INFO [org.keycloak.subsystem.server.extension.KeycloakProviderDeploymentProcessor] (MSC service thread 1-8) Deploying Keycloak provider: custom-auth-provider.jar
11:43:58,320 INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) WFLYSRV0013: Redeployed "custom-auth-provider.jar"
11:44:03,388 INFO [org.keycloak.subsystem.server.extension.KeycloakProviderDeploymentProcessor] (MSC service thread 1-4) Undeploying Keycloak provider: custom-auth-provider.jar
11:44:03,395 INFO [org.jboss.as.server.deployment] (MSC service thread 1-6) WFLYSRV0028: Stopped deployment custom-auth-provider.jar (runtime-name: custom-auth-provider.jar) in 9ms
11:44:03,397 INFO [org.jboss.as.server.deployment] (MSC service thread 1-6) WFLYSRV0027: Starting deployment of "custom-auth-provider.jar" (runtime-name: "custom-auth-provider.jar")
11:44:03,409 INFO [org.keycloak.subsystem.server.extension.KeycloakProviderDeploymentProcessor] (MSC service thread 1-6) Deploying Keycloak provider: custom-auth-provider.jar
11:44:03,425 INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) WFLYSRV0016: Replaced deployment "custom-auth-provider.jar" with deployment "custom-auth-provider.jar"
11:44:08,471 INFO [org.keycloak.subsystem.server.extension.KeycloakProviderDeploymentProcessor] (MSC service thread 1-1) Undeploying Keycloak provider: custom-auth-provider.jar
11:44:08,477 INFO [org.jboss.as.server.deployment] (MSC service thread 1-8) WFLYSRV0028: Stopped deployment custom-auth-provider.jar (runtime-name: custom-auth-provider.jar) in 11ms
11:44:08,479 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) WFLYSRV0027: Starting deployment of "custom-auth-provider.jar" (runtime-name: "custom-auth-provider.jar")
11:44:08,493 INFO [org.keycloak.subsystem.server.extension.KeycloakProviderDeploymentProcessor] (MSC service thread 1-5) Deploying Keycloak provider: custom-auth-provider.jar
11:44:08,517 INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) WFLYSRV0013: Redeployed "custom-auth-provider.jar"
11:44:13,573 INFO [org.keycloak.subsystem.server.extension.KeycloakProviderDeploymentProcessor] (MSC service thread 1-3) Undeploying Keycloak provider: custom-auth-provider.jar
11:44:13,581 INFO [org.jboss.as.server.deployment] (MSC service thread 1-6) WFLYSRV0028: Stopped deployment custom-auth-provider.jar (runtime-name: custom-auth-provider.jar) in 11ms
Is this desired behavior, or can I stop it somehow?
Apparently, the docker-compose volume was the issue. During the automatic deployment of the .jar package by the WildFly application server, the file's properties got changed, which caused WildFly to pick it up again as an updated file.
Changing the particular volume mapping to the 'bind' type
volumes:
  - type: bind # the bind mount type prevents file changes
    source: ./standalone/deployments/
    target: /opt/jboss/keycloak/standalone/deployments
did the trick.
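For context, a minimal docker-compose sketch with the bind mount in place might look like the following (the image tag, ports, and credentials are assumptions for illustration, not taken from the original setup):

```yaml
version: "3.8"
services:
  keycloak:
    image: jboss/keycloak:16.1.1      # hypothetical tag
    environment:
      KEYCLOAK_USER: admin            # hypothetical dev-only credentials
      KEYCLOAK_PASSWORD: admin
    ports:
      - "8080:8080"
    volumes:
      - type: bind                    # bind mount: file metadata is preserved,
        source: ./standalone/deployments   # so WildFly's scanner stops re-deploying
        target: /opt/jboss/keycloak/standalone/deployments
```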
I know this will be a late answer, but:
You should specify the .jar path in docker-compose like this:
volumes:
  - type: bind
    source: ./keycloak/spi/keycloak-event-listener-spi-0.1.jar
    target: /opt/jboss/keycloak/standalone/deployments/keycloak-event-listener-spi-0.1.jar
This solved the problem for me.
In my case, the issue was caused by using a folder outside of WSL.
The volume seems to act weirdly when it points to a Windows folder.
I am trying to set up Kafka on Minikube, a very basic setup. I can't validate whether Kafka and ZooKeeper have been set up correctly because kafkacat fails.
Here is my config:
zookeeper
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper-deployment-1
spec:
  selector:
    matchLabels:
      app: zookeeper-1
  template:
    metadata:
      labels:
        app: zookeeper-1
    spec:
      containers:
        - name: zoo1
          image: digitalwonderland/zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zoo1
---
apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: follower
      port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
  selector:
    app: zookeeper-1
kafka
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kafka-broker0
spec:
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
        id: "0"
    spec:
      containers:
        - name: kafka
          image: wurstmeister/kafka
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-service
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zoo1:2181
            - name: KAFKA_BROKER_ID
              value: "0"
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka
    id: "0"
pods
kafka-broker0-6885746967-6vktz 1/1 Running 0 5m20s
zookeeper-deployment-1-7f5bb9785f-7pplk 1/1 Running 0 5m25s
svc
kafka-service ClusterIP 10.99.226.129 <none> 9092/TCP 6m30s
zoo1 ClusterIP 10.96.140.187 <none> 2181/TCP,2888/TCP,3888/TCP 6m35s
kafkacat logs
✗ kafkacat -L -b kafka-service:9092 -d broker
%7|1596239513.610|BRKMAIN|rdkafka#producer-1| [thrd::0/internal]: :0/internal: Enter main broker thread
%7|1596239513.610|BROKER|rdkafka#producer-1| [thrd:app]: kafka-service:9092/bootstrap: Added new broker with NodeId -1
%7|1596239513.610|BRKMAIN|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: Enter main broker thread
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:app]: kafka-service:9092/bootstrap: Selected for cluster connection: bootstrap servers added (broker has 0 connection attempt(s))
%7|1596239513.610|INIT|rdkafka#producer-1| [thrd:app]: librdkafka v1.4.0 (0x10400ff) rdkafka#producer-1 initialized (builtin.features gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins,zstd,sasl_oauthbearer, CC CXX PKGCONFIG OSXLD LIBDL PLUGINS ZLIB SSL SASL_CYRUS ZSTD HDRHISTOGRAM LZ4_EXT SYSLOG SNAPPY SOCKEM SASL_SCRAM SASL_OAUTHBEARER CRC32C_HW, debug 0x2)
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: Received CONNECT op
%7|1596239513.610|STATE|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: Broker changed state INIT -> TRY_CONNECT
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: broker in state TRY_CONNECT connecting
%7|1596239513.610|STATE|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: Broker changed state TRY_CONNECT -> CONNECT
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596239513.610|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596239513.614|BROKERFAIL|rdkafka#producer-1| [thrd:kafka-service:9092/bootstrap]: kafka-service:9092/bootstrap: failed: err: Local: Host resolution failure: (errno: Bad address)
NODEPORT UPDATE
✗ kafkacat -L -b kafka-service:30236 -d broker
%7|1596476848.078|STATE|rdkafka#producer-1| [thrd:kafka-service:30236/bootstrap]: kafka-service:30236/bootstrap: Broker changed state CONNECT -> DOWN
%7|1596476848.078|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 46ms: application metadata request
%7|1596476848.078|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 46ms: application metadata request
%7|1596476849.065|CONNECT|rdkafka#producer-1| [thrd:app]: Cluster connection already in progress: application metadata request
%7|1596476849.065|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
% ERROR: Failed to acquire metadata: Local: Broker transport failure
minikube ip
✗ kafkacat -L -b 192.168.64.2:30236 -d broker
|1596476908.164|BRKMAIN|rdkafka#producer-1| [thrd:192.168.64.2:30236/bootstrap]: 192.168.64.2:30236/bootstrap: Enter main broker thread
%7|1596476908.164|CONNECT|rdkafka#producer-1| [thrd:app]: 192.168.64.2:30236/bootstrap: Selected for cluster connection: bootstrap servers added (broker has 0 connection attempt(s))
%7|1596476908.164|INIT|rdkafka#producer-1| [thrd:app]: librdkafka v1.4.0 (0x10400ff) rdkafka#producer-1 initialized (builtin.features gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins,zstd,sasl_oauthbearer, CC CXX PKGCONFIG OSXLD LIBDL PLUGINS ZLIB SSL SASL_CYRUS ZSTD HDRHISTOGRAM LZ4_EXT SYSLOG SNAPPY SOCKEM SASL_SCRAM SASL_OAUTHBEARER CRC32C_HW, debug 0x2)
%7|1596476908.164|CONNECT|rdkafka#producer-1| [thrd:192.168.64.2:30236/bootstrap]: 192.168.64.2:30236/bootstrap: Received CONNECT op
%7|1596476908.164|STATE|rdkafka#producer-1| [thrd:192.168.64.2:30236/bootstrap]: 192.168.64.2:30236/bootstrap: Broker changed state INIT -> TRY_CONNECT
%7|1596476908.164|CONNECT|rdkafka#producer-1| [thrd:192.168.64.2:30236/bootstrap]: 192.168.64.2:30236/bootstrap: broker in state TRY_CONNECT connecting
%7|1596476908.164|STATE|rdkafka#producer-1| [thrd:192.168.64.2:30236/bootstrap]: 192.168.64.2:30236/bootstrap: Broker changed state TRY_CONNECT -> CONNECT
%7|1596476908.164|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
✗ kafkacat -L -b localhost:30236 -d broker
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Received CONNECT op
%7|1596477098.454|INIT|rdkafka#producer-1| [thrd:app]: librdkafka v1.4.0 (0x10400ff) rdkafka#producer-1 initialized (builtin.features gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins,zstd,sasl_oauthbearer, CC CXX PKGCONFIG OSXLD LIBDL PLUGINS ZLIB SSL SASL_CYRUS ZSTD HDRHISTOGRAM LZ4_EXT SYSLOG SNAPPY SOCKEM SASL_SCRAM SASL_OAUTHBEARER CRC32C_HW, debug 0x2)
%7|1596477098.454|STATE|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Broker changed state INIT -> TRY_CONNECT
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: broker in state TRY_CONNECT connecting
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596477098.454|STATE|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Broker changed state TRY_CONNECT -> CONNECT
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596477098.454|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 49ms: application metadata request
%7|1596477098.460|CONNECT|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Connecting to ipv4#127.0.0.1:30236 (plaintext) with socket 9
%7|1596477098.461|BROKERFAIL|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: failed: err: Local: Broker transport failure: (errno: Connection refused)
%7|1596477098.461|STATE|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Broker changed state CONNECT -> DOWN
%7|1596477098.461|STATE|rdkafka#producer-1| [thrd:localhost:30236/bootstrap]: localhost:30236/bootstrap: Broker changed state DOWN -> INIT
%7|1596477098.461|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 43ms: application metadata request
%7|1596477098.461|CONNECT|rdkafka#producer-1| [thrd:app]: Not selecting any broker for cluster connection: still suppressed for 42ms: application metadata request
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  type: NodePort
  ports:
    - port: 9092
      nodePort: 30236
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka
    id: "0"
I have never used kafkacat, but if it is a CLI installed on your host (and not inside another container), you can now use it like this:
✗ kafkacat -L -b localhost:30236 -d broker
using localhost, or the IP of your Kubernetes node.
By default a Service is of type ClusterIP, which can only be accessed from inside the Kubernetes cluster; a NodePort Service additionally exposes it on each node's IP at the given port.
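As a minimal sketch (assuming minikube and kafkacat are installed on the host), the broker address for a NodePort Service is built from the node's IP plus the nodePort from the manifest above:

```shell
#!/bin/sh
# Build the bootstrap address for the NodePort Service above.
# NODE_PORT must match spec.ports[].nodePort in the Service manifest.
NODE_IP=$(minikube ip)   # the Kubernetes node's IP, e.g. 192.168.64.2
NODE_PORT=30236
BROKER="${NODE_IP}:${NODE_PORT}"

# Query cluster metadata through the NodePort.
kafkacat -L -b "$BROKER"
```

Note that this only connects if Kafka's advertised listeners resolve from outside the cluster as well; otherwise the metadata response will point clients back at unreachable internal addresses.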
I am getting the below error when I start JBoss. I am running my application in Eclipse with JBoss, and even when I run it from the terminal I get the same error.
Is this a server issue or an application configuration issue?
20:22:24,053 INFO [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final-redhat-1
20:22:29,362 INFO [org.jboss.msc] (main) JBoss MSC version 1.1.5.Final-redhat-1
20:22:29,455 INFO [org.jboss.as] (MSC service thread 1-6) JBAS015899: JBoss EAP 6.3.0.GA (AS 7.4.0.Final-redhat-19) starting
20:22:30,470 INFO [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) JBAS015003: Found md-edrs.war in deployment directory. To trigger deployment create a file called md-edrs.war.dodeploy
20:22:30,495 INFO [org.xnio] (MSC service thread 1-11) XNIO Version 3.0.10.GA-redhat-1
20:22:30,496 INFO [org.jboss.as.server] (Controller Boot Thread) JBAS015888: Creating http management service using socket-binding (management-http)
20:22:30,501 INFO [org.xnio.nio] (MSC service thread 1-11) XNIO NIO Implementation Version 3.0.10.GA-redhat-1
20:22:30,524 INFO [org.jboss.remoting] (MSC service thread 1-11) JBoss Remoting version (unknown)
20:22:30,544 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 29) JBAS010280: Activating Infinispan subsystem.
20:22:30,543 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 44) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
20:22:30,548 INFO [org.jboss.as.naming] (ServerService Thread Pool -- 37) JBAS011800: Activating Naming Subsystem
20:22:30,550 INFO [org.jboss.as.webservices] (ServerService Thread Pool -- 46) JBAS015537: Activating WebServices Extension
20:22:30,550 INFO [org.jboss.as.security] (ServerService Thread Pool -- 42) JBAS013171: Activating Security Subsystem
20:22:30,564 INFO [org.jboss.as.jsf] (ServerService Thread Pool -- 35) JBAS012605: Activated the following JSF Implementations: [main, 1.2]
20:22:30,571 INFO [org.jboss.as.security] (MSC service thread 1-13) JBAS013170: Current PicketBox version=4.0.19.SP8-redhat-1
20:22:30,604 INFO [org.jboss.as.connector.logging] (MSC service thread 1-11) JBAS010408: Starting JCA Subsystem (IronJacamar 1.0.26.Final-redhat-1)
20:22:30,627 INFO [org.jboss.as.naming] (MSC service thread 1-13) JBAS011802: Starting Naming Service
20:22:30,630 INFO [org.jboss.as.mail.extension] (MSC service thread 1-10) JBAS015400: Bound mail session [java:jboss/mail/Default]
20:22:30,686 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 25) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
20:22:30,836 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-5) JBWEB003001: Coyote HTTP/1.1 initializing on : http-localhost/127.0.0.1:8080
20:22:30,847 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-5) JBWEB003000: Coyote HTTP/1.1 starting on: http-localhost/127.0.0.1:8080
20:22:30,932 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 25) JBAS010403: Deploying JDBC-compliant driver class com.microsoft.sqlserver.jdbc.SQLServerDriver (version 4.0)
20:22:31,032 INFO [org.jboss.ws.common.management] (MSC service thread 1-11) JBWS022052: Starting JBoss Web Services - Stack CXF Server 4.3.0.Final-redhat-3
20:22:31,211 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-2) JBAS015012: Started FileSystemDeploymentService for directory /Users/raja/JBOSS/jboss-eap-6.3-1/standalone/deployments
20:22:31,214 INFO [org.jboss.as.server.deployment] (MSC service thread 1-3) JBAS015876: Starting deployment of "md-edrs.war" (runtime-name: "md-edrs.war")
20:22:31,232 INFO [org.jboss.as.remoting] (MSC service thread 1-6) JBAS017100: Listening on 127.0.0.1:9999
20:22:31,233 INFO [org.jboss.as.remoting] (MSC service thread 1-9) JBAS017100: Listening on 127.0.0.1:4447
20:22:34,225 INFO [org.jboss.as.pojo] (MSC service thread 1-7) JBAS017000: Found legacy bean/pojo namespace: urn:jboss:bean-deployer:2.0 - might be missing some xml features (potential exceptions).
20:22:34,876 WARN [org.jboss.as.ee] (MSC service thread 1-1) JBAS011006: Not installing optional component edu.ucdavis.dmm.validation.validator.PasswordResetRequestValidator due to an exception (enable DEBUG log level to see the cause)
20:22:34,876 WARN [org.jboss.as.ee] (MSC service thread 1-1) JBAS011006: Not installing optional component edu.ucdavis.dmm.validation.validator.PasswordResetValidator due to an exception (enable DEBUG log level to see the cause)
20:22:35,014 WARN [org.jboss.weld.deployer] (MSC service thread 1-1) JBAS016012: Deployment deployment "md-edrs.war" contains CDI annotations but beans.xml was not found.
20:22:35,972 INFO [org.jboss.web] (ServerService Thread Pool -- 74) JBAS018210: Register web context: /md-edrs
20:22:35,989 INFO [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/md-edrs]] (ServerService Thread Pool -- 74) No Spring WebApplicationInitializer types detected on classpath
20:22:35,994 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) JBAS010400: Bound data source [java:/jdbc/mdedrsDS2]
20:22:35,994 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-6) JBAS010400: Bound data source [java:/jdbc/mdedrsDS]
20:22:35,993 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-8) JBAS010400: Bound data source [java:/jdbc/mdedrsDS3]
20:22:36,004 INFO [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/md-edrs]] (ServerService Thread Pool -- 74) Initializing Spring root WebApplicationContext
20:22:37,295 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/md-edrs]] (ServerService Thread Pool -- 74) JBWEB000287: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener: org.springframework.beans.factory.BeanDefinitionStoreException: Invalid bean definition with name 'configurationHelper' defined in ServletContext resource [/WEB-INF/config/web-application-config.xml]: Could not resolve placeholder 'build.number'
at org.springframework.beans.factory.config.PlaceholderConfigurerSupport.doProcessProperties(PlaceholderConfigurerSupport.java:209) [spring-beans-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.beans.factory.config.PropertyPlaceholderConfigurer.processProperties(PropertyPlaceholderConfigurer.java:220) [spring-beans-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.beans.factory.config.PropertyResourceConfigurer.postProcessBeanFactory(PropertyResourceConfigurer.java:84) [spring-beans-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:681) [spring-context-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:656) [spring-context-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:446) [spring-context-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:385) [spring-web-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:284) [spring-web-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111) [spring-web-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3339) [jbossweb-7.4.8.Final-redhat-4.jar:7.4.8.Final-redhat-4]
at org.apache.catalina.core.StandardContext.start(StandardContext.java:3777) [jbossweb-7.4.8.Final-redhat-4.jar:7.4.8.Final-redhat-4]
at org.jboss.as.web.deployment.WebDeploymentService.doStart(WebDeploymentService.java:161) [jboss-as-web-7.4.0.Final-redhat-19.jar:7.4.0.Final-redhat-19]
at org.jboss.as.web.deployment.WebDeploymentService.access$000(WebDeploymentService.java:59) [jboss-as-web-7.4.0.Final-redhat-19.jar:7.4.0.Final-redhat-19]
at org.jboss.as.web.deployment.WebDeploymentService$1.run(WebDeploymentService.java:94) [jboss-as-web-7.4.0.Final-redhat-19.jar:7.4.0.Final-redhat-19]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_80]
at org.jboss.threads.JBossThread.run(JBossThread.java:122)
20:22:37,333 INFO [javax.enterprise.resource.webcontainer.jsf.config] (ServerService Thread Pool -- 74) Initializing Mojarra 2.1.28-jbossorg-2 for context '/md-edrs'
20:22:38,194 INFO [org.hibernate.validator.internal.util.Version] (ServerService Thread Pool -- 74) HV000001: Hibernate Validator 4.3.1.Final-redhat-1
20:22:39,010 ERROR [org.apache.catalina.core] (ServerService Thread Pool -- 74) JBWEB001103: Error detected during context /md-edrs start, will stop it
20:22:39,011 INFO [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/md-edrs]] (ServerService Thread Pool -- 74) Closing Spring root WebApplicationContext
20:22:39,013 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 74) MSC000001: Failed to start service jboss.web.deployment.default-host./md-edrs: org.jboss.msc.service.StartException in service jboss.web.deployment.default-host./md-edrs: org.jboss.msc.service.StartException in anonymous service: JBAS018040: Failed to start context
at org.jboss.as.web.deployment.WebDeploymentService$1.run(WebDeploymentService.java:97)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_80]
at org.jboss.threads.JBossThread.run(JBossThread.java:122)
Caused by: org.jboss.msc.service.StartException in anonymous service: JBAS018040: Failed to start context
at org.jboss.as.web.deployment.WebDeploymentService.doStart(WebDeploymentService.java:166)
at org.jboss.as.web.deployment.WebDeploymentService.access$000(WebDeploymentService.java:59)
at org.jboss.as.web.deployment.WebDeploymentService$1.run(WebDeploymentService.java:94)
... 6 more
20:22:39,045 INFO [org.jboss.as.server] (ServerService Thread Pool -- 26) JBAS018559: Deployed "md-edrs.war" (runtime-name : "md-edrs.war")
20:22:39,045 INFO [org.jboss.as.controller] (Controller Boot Thread) JBAS014774: Service status report
JBAS014777: Services which failed to start: service jboss.web.deployment.default-host./md-edrs: org.jboss.msc.service.StartException in service jboss.web.deployment.default-host./md-edrs: org.jboss.msc.service.StartException in anonymous service: JBAS018040: Failed to start context
20:22:39,050 INFO [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://127.0.0.1:9990/management
20:22:39,050 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990
20:22:39,050 ERROR [org.jboss.as] (Controller Boot Thread) JBAS015875: JBoss EAP 6.3.0.GA (AS 7.4.0.Final-redhat-19) started (with errors) in 15361ms - Started 292 of 334 services (2 services failed or missing dependencies, 60 services are lazy, passive or on-demand)
20:42:05,936 INFO [org.jboss.as.server.deployment] (MSC service thread 1-14) JBAS015877: Stopped deployment md-edrs.war (runtime-name: md-edrs.war) in 26ms
20:42:05,938 INFO [org.jboss.as.server.deployment] (MSC service thread 1-10) JBAS015876: Starting deployment of "md-edrs.war" (runtime-name: "md-edrs.war")
20:42:07,591 INFO [org.jboss.as.pojo] (MSC service thread 1-9) JBAS017000: Found legacy bean/pojo namespace: urn:jboss:bean-deployer:2.0 - might be missing some xml features (potential exceptions).
20:42:08,055 WARN [org.jboss.as.ee] (MSC service thread 1-12) JBAS011006: Not installing optional component edu.ucdavis.dmm.validation.validator.PasswordResetRequestValidator due to an exception (enable DEBUG log level to see the cause)
20:42:08,056 WARN [org.jboss.as.ee] (MSC service thread 1-12) JBAS011006: Not installing optional component edu.ucdavis.dmm.validation.validator.PasswordResetValidator due to an exception (enable DEBUG log level to see the cause)
20:42:08,188 WARN [org.jboss.weld.deployer] (MSC service thread 1-12) JBAS016012: Deployment deployment "md-edrs.war" contains CDI annotations but beans.xml was not found.
20:42:08,197 INFO [org.jboss.web] (ServerService Thread Pool -- 99) JBAS018210: Register web context: /md-edrs
20:42:08,204 INFO [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/md-edrs]] (ServerService Thread Pool -- 99) No Spring WebApplicationInitializer types detected on classpath
20:42:08,211 INFO [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/md-edrs]] (ServerService Thread Pool -- 99) Initializing Spring root WebApplicationContext
20:42:09,197 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/md-edrs]] (ServerService Thread Pool -- 99) JBWEB000287: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener: org.springframework.beans.factory.BeanDefinitionStoreException: Invalid bean definition with name 'configurationHelper' defined in ServletContext resource [/WEB-INF/config/web-application-config.xml]: Could not resolve placeholder 'build.number'
at org.springframework.beans.factory.config.PlaceholderConfigurerSupport.doProcessProperties(PlaceholderConfigurerSupport.java:209) [spring-beans-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.beans.factory.config.PropertyPlaceholderConfigurer.processProperties(PropertyPlaceholderConfigurer.java:220) [spring-beans-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.beans.factory.config.PropertyResourceConfigurer.postProcessBeanFactory(PropertyResourceConfigurer.java:84) [spring-beans-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:681) [spring-context-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:656) [spring-context-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:446) [spring-context-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:385) [spring-web-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:284) [spring-web-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111) [spring-web-3.1.1.RELEASE.jar:3.1.1.RELEASE]
at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3339) [jbossweb-7.4.8.Final-redhat-4.jar:7.4.8.Final-redhat-4]
at org.apache.catalina.core.StandardContext.start(StandardContext.java:3777) [jbossweb-7.4.8.Final-redhat-4.jar:7.4.8.Final-redhat-4]
at org.jboss.as.web.deployment.WebDeploymentService.doStart(WebDeploymentService.java:161) [jboss-as-web-7.4.0.Final-redhat-19.jar:7.4.0.Final-redhat-19]
at org.jboss.as.web.deployment.WebDeploymentService.access$000(WebDeploymentService.java:59) [jboss-as-web-7.4.0.Final-redhat-19.jar:7.4.0.Final-redhat-19]
at org.jboss.as.web.deployment.WebDeploymentService$1.run(WebDeploymentService.java:94) [jboss-as-web-7.4.0.Final-redhat-19.jar:7.4.0.Final-redhat-19]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_80]
at org.jboss.threads.JBossThread.run(JBossThread.java:122)
20:42:09,200 INFO [javax.enterprise.resource.webcontainer.jsf.config] (ServerService Thread Pool -- 99) Initializing Mojarra 2.1.28-jbossorg-2 for context '/md-edrs'
20:42:10,295 ERROR [org.apache.catalina.core] (ServerService Thread Pool -- 99) JBWEB001103: Error detected during context /md-edrs start, will stop it
20:42:10,296 INFO [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/md-edrs]] (ServerService Thread Pool -- 99) Closing Spring root WebApplicationContext
20:42:10,297 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 99) MSC000001: Failed to start service jboss.web.deployment.default-host./md-edrs: org.jboss.msc.service.StartException in service jboss.web.deployment.default-host./md-edrs: org.jboss.msc.service.StartException in anonymous service: JBAS018040: Failed to start context
at org.jboss.as.web.deployment.WebDeploymentService$1.run(WebDeploymentService.java:97)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_80]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_80]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_80]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_80]
at org.jboss.threads.JBossThread.run(JBossThread.java:122)
Caused by: org.jboss.msc.service.StartException in anonymous service: JBAS018040: Failed to start context
at org.jboss.as.web.deployment.WebDeploymentService.doStart(WebDeploymentService.java:166)
at org.jboss.as.web.deployment.WebDeploymentService.access$000(WebDeploymentService.java:59)
at org.jboss.as.web.deployment.WebDeploymentService$1.run(WebDeploymentService.java:94)
... 6 more
20:42:10,340 INFO [org.jboss.as.server.deployment] (MSC service thread 1-13) JBAS015877: Stopped deployment md-edrs.war (runtime-name: md-edrs.war) in 38ms
20:42:10,343 INFO [org.jboss.as.controller] (DeploymentScanner-threads - 1) JBAS014774: Service status report
JBAS014775: New missing/unsatisfied dependencies:
service jboss.deployment.unit."md-edrs.war".component."com.sun.faces.config.ConfigureListener".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."javax.faces.webapp.FacesServlet".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."javax.servlet.jsp.jstl.tlv.PermittedTaglibsTLV".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."net.sf.ehcache.constructs.web.filter.SimpleCachingHeadersPageCachingFilter".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.apache.catalina.servlets.DefaultServlet".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.apache.jasper.servlet.JspServlet".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.context.support.HttpRequestHandlerServlet".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.filter.CharacterEncodingFilter".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.filter.DelegatingFilterProxy".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.DispatcherServlet".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.BindTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.EscapeBodyTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.HtmlEscapeTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.MessageTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.NestedPathTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.ThemeTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.TransformTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.UrlTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.form.ButtonTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.form.ErrorsTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.form.FormTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.form.HiddenInputTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.form.OptionTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.form.OptionsTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.form.PasswordInputTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.form.RadioButtonsTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.form.SelectTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.deployment.unit."md-edrs.war".component."org.springframework.web.servlet.tags.form.TextareaTag".START (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
service jboss.web.deployment.default-host./md-edrs.realm (missing) dependents: [service jboss.deployment.unit."md-edrs.war".deploymentCompleteService]
JBAS014777: Services which failed to start: service jboss.web.deployment.default-host./md-edrs
service jboss.web.deployment.default-host./md-edrs
20:42:37,471 INFO [org.apache.catalina.core] (MSC service thread 1-9) JBWEB001079: Container org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/] has not been started
20:42:37,475 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) JBAS010409: Unbound data source [java:/jdbc/mdedrsDS]
20:42:37,475 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-2) JBAS010409: Unbound data source [java:/jdbc/mdedrsDS2]
20:42:37,477 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-7) JBAS010409: Unbound data source [java:/jdbc/mdedrsDS3]
20:42:38,477 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-5) JBWEB003075: Coyote HTTP/1.1 pausing on: http-localhost/127.0.0.1:8080
20:42:38,478 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-5) JBWEB003077: Coyote HTTP/1.1 stopping on : http-localhost/127.0.0.1:8080
20:42:38,486 INFO [org.jboss.as] (MSC service thread 1-3) JBAS015950: JBoss EAP 6.3.0.GA (AS 7.4.0.Final-redhat-19) stopped in 1012ms
Can somebody help me with this?
20:22:37,295 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/md-edrs]] (ServerService Thread Pool -- 74) JBWEB000287: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener: org.springframework.beans.factory.BeanDefinitionStoreException: Invalid bean definition with name 'configurationHelper' defined in ServletContext resource [/WEB-INF/config/web-application-config.xml]: Could not resolve placeholder 'build.number'
The deployment fails because Spring cannot resolve the placeholder 'build.number' referenced by the 'configurationHelper' bean. Provide a value for 'build.number' in the properties source that web-application-config.xml's placeholder configurer loads, or supply it at runtime (e.g. as a JVM system property). This is an application configuration issue, not a server issue.
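A minimal sketch of what that could look like in the Spring XML (the file name build.properties is a hypothetical example; your build typically generates such a file):

```xml
<!-- In web-application-config.xml: load properties so ${build.number} resolves.
     Assumes the context namespace is declared and build.properties is on the classpath. -->
<context:property-placeholder location="classpath:build.properties"/>
```

with a corresponding build.properties containing a line such as `build.number=1.0.0`. Alternatively, since PropertyPlaceholderConfigurer also checks system properties by default, starting JBoss with `-Dbuild.number=1.0.0` in the JVM options would satisfy the placeholder without touching the XML.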