Helm Keycloak Postgres ERROR: column "client_id" does not exist

I followed this guide to configure Keycloak + Postgres using Helm.
I tried the default H2 database first and it works fine. Then I configured it to use Postgres; below is my config.
keycloak:
  replicas: 1
  image:
    repository: jboss/keycloak
    tag: 3.4.0.Final
  username: admin
  password: admin
  service:
    type: LoadBalancer
  persistence:
    deployPostgres: false
    dbVendor: POSTGRES
    dbName: keycloak
    dbHost: xxx.pgsql.domain.com
    dbPort: 5432
    dbUser: keycloak
    # Only used if no existing secret is specified. In this case a new secret is created
    dbPassword: xxxxxxxxxx
The connection seems fine, but the logs show this error:
17:18:24,071 ERROR [org.keycloak.connections.jpa.updater.liquibase.conn.DefaultLiquibaseConnectionProvider] (ServerService Thread Pool -- 56) Change Set META-INF/jpa-changelog-authz-3.4.0.CR1.xml::authz-3.4.0.CR1-resource-server-pk-change-part2::glavoie@gmail.com failed. Error: ERROR: column "client_id" does not exist
Position: 73 [Failed SQL: UPDATE RESOURCE_SERVER_POLICY p SET RESOURCE_SERVER_CLIENT_ID = (SELECT CLIENT_ID FROM RESOURCE_SERVER s WHERE s.ID = p.RESOURCE_SERVER_ID)]: liquibase.exception.DatabaseException: ERROR: column "client_id" does not exist
Position: 73 [Failed SQL: UPDATE RESOURCE_SERVER_POLICY p SET RESOURCE_SERVER_CLIENT_ID = (SELECT CLIENT_ID FROM RESOURCE_SERVER s WHERE s.ID = p.RESOURCE_SERVER_ID)]
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:316)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:55)
at liquibase.executor.jvm.JdbcExecutor.execute(JdbcExecutor.java:122)
at liquibase.database.AbstractJdbcDatabase.execute(AbstractJdbcDatabase.java:1247)
at liquibase.database.AbstractJdbcDatabase.executeStatements(AbstractJdbcDatabase.java:1230)
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:548)
at liquibase.changelog.visitor.UpdateVisitor.visit(UpdateVisitor.java:51)
at liquibase.changelog.ChangeLogIterator.run(ChangeLogIterator.java:73)
at liquibase.Liquibase.update(Liquibase.java:210)
at liquibase.Liquibase.update(Liquibase.java:190)
at liquibase.Liquibase.update(Liquibase.java:186)
at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.updateChangeSet(LiquibaseJpaUpdaterProvider.java:135)
at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.update(LiquibaseJpaUpdaterProvider.java:88)
at org.keycloak.connections.jpa.updater.liquibase.LiquibaseJpaUpdaterProvider.update(LiquibaseJpaUpdaterProvider.java:67)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.update(DefaultJpaConnectionProviderFactory.java:322)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.migration(DefaultJpaConnectionProviderFactory.java:308)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.lambda$lazyInit$0(DefaultJpaConnectionProviderFactory.java:179)
at org.keycloak.models.utils.KeycloakModelUtils.suspendJtaTransaction(KeycloakModelUtils.java:544)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.lazyInit(DefaultJpaConnectionProviderFactory.java:130)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.create(DefaultJpaConnectionProviderFactory.java:78)
at org.keycloak.connections.jpa.DefaultJpaConnectionProviderFactory.create(DefaultJpaConnectionProviderFactory.java:56)
at org.keycloak.services.DefaultKeycloakSession.getProvider(DefaultKeycloakSession.java:163)
at org.keycloak.models.jpa.JpaRealmProviderFactory.create(JpaRealmProviderFactory.java:51)
at org.keycloak.models.jpa.JpaRealmProviderFactory.create(JpaRealmProviderFactory.java:33)
at org.keycloak.services.DefaultKeycloakSession.getProvider(DefaultKeycloakSession.java:163)
at org.keycloak.models.cache.infinispan.RealmCacheSession.getDelegate(RealmCacheSession.java:144)
at org.keycloak.models.cache.infinispan.RealmCacheSession.getMigrationModel(RealmCacheSession.java:137)
at org.keycloak.migration.MigrationModelManager.migrate(MigrationModelManager.java:74)
at org.keycloak.services.resources.KeycloakApplication.migrateModel(KeycloakApplication.java:244)
at org.keycloak.services.resources.KeycloakApplication.migrateAndBootstrap(KeycloakApplication.java:185)
at org.keycloak.services.resources.KeycloakApplication$1.run(KeycloakApplication.java:144)
at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:227)
at org.keycloak.services.resources.KeycloakApplication.<init>(KeycloakApplication.java:135)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:150)
at org.jboss.resteasy.spi.ResteasyProviderFactory.createProviderInstance(ResteasyProviderFactory.java:2298)
at org.jboss.resteasy.spi.ResteasyDeployment.createApplication(ResteasyDeployment.java:340)
at org.jboss.resteasy.spi.ResteasyDeployment.start(ResteasyDeployment.java:253)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.init(ServletContainerDispatcher.java:120)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.init(HttpServletDispatcher.java:36)
at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
at org.wildfly.extension.undertow.security.RunAsLifecycleInterceptor.init(RunAsLifecycleInterceptor.java:78)
at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:103)
at io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:250)
at io.undertow.servlet.core.ManagedServlet.createServlet(ManagedServlet.java:133)
at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:565)
at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:536)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentInfoService$UndertowThreadSetupAction.lambda$create$0(UndertowDeploymentInfoService.java:1508)
at io.undertow.servlet.core.DeploymentManagerImpl.start(DeploymentManagerImpl.java:578)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:100)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:320)
Caused by: org.postgresql.util.PSQLException: ERROR: column "client_id" does not exist
Position: 73
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2477)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2190)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:300)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:428)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:354)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:301)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:287)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:264)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:260)
at org.jboss.jca.adapters.jdbc.WrappedStatement.execute(WrappedStatement.java:198)
at liquibase.executor.jvm.JdbcExecutor$ExecuteStatementCallback.doInStatement(JdbcExecutor.java:314)
... 65 more
17:18:24,096 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 56) MSC000001: Failed to start service jboss.undertow.deployment.default-server.default-host./auth: org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./auth: java.lang.RuntimeException: RESTEASY003325: Failed to construct public org.keycloak.services.resources.KeycloakApplication(javax.servlet.ServletContext,org.jboss.resteasy.core.Dispatcher)
at org.wild...
Please help me to fix it. Many thanks.

The chart has several places where it expects
deployPostgres: true
when using Postgres.
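As a minimal sketch, that means flipping the flag in the values from the question. Whether the chart then provisions its bundled Postgres or still honors the external dbHost depends on the chart version, so treat this as an assumption to verify against the chart's templates:

keycloak:
  persistence:
    deployPostgres: true   # several templates gate the Postgres wiring on this flag
    dbVendor: POSTGRES
    dbHost: xxx.pgsql.domain.com
    dbPort: 5432
    dbUser: keycloak

Separately, the failed Liquibase statement can be sanity-checked directly against the database. This diagnostic query uses only the standard information_schema (nothing chart-specific); Postgres folds unquoted identifiers to lowercase, hence 'resource_server':

SELECT column_name
FROM information_schema.columns
WHERE table_name = 'resource_server';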

Related

master-datasources.xml content always reverts back to initial configuration when I start wso2server 5.9

I am a beginner with WSO2, and I'm trying to configure the Identity Server datasource for PostgreSQL, following the documentation.
The JDBC driver is in place.
My latest master-datasources.xml is:
<datasources-configuration xmlns:svns="http://org.wso2.securevault/configuration">
  <providers>
    <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
  </providers>
  <datasources>
    <datasource>
      <name>WSO2_CARBON_DB</name>
      <description>The datasource used for registry and user manager</description>
      <jndiConfig>
        <name>jdbc/WSO2CarbonDB</name>
      </jndiConfig>
      <definition type="RDBMS">
        <configuration>
          <url>jdbc:postgresql://localhost:5432/wso2_db</url>
          <username>postgres</username>
          <password>root</password>
          <driverClassName>org.postgresql.Driver</driverClassName>
          <maxActive>50</maxActive>
          <maxWait>60000</maxWait>
          <testOnBorrow>true</testOnBorrow>
          <validationQuery>SELECT 1; COMMIT</validationQuery>
          <validationInterval>30000</validationInterval>
          <defaultAutoCommit>true</defaultAutoCommit>
          <commitOnReturn>true</commitOnReturn>
        </configuration>
      </definition>
    </datasource>
    <datasource>
      <name>WSO2_SHARED_DB</name>
      <description>Shared Database for user and registry data</description>
      <jndiConfig>
        <name>jdbc/SHARED_DB</name>
      </jndiConfig>
      <definition type="RDBMS">
        <configuration>
          <url>jdbc:postgresql://localhost:5432/wso2_db</url>
          <username>postgres</username>
          <password>root</password>
          <driverClassName>org.postgresql.Driver</driverClassName>
          <testOnBorrow>true</testOnBorrow>
          <maxWait>60000</maxWait>
          <defaultAutoCommit>true</defaultAutoCommit>
          <validationInterval>30000</validationInterval>
          <maxActive>50</maxActive>
          <jmxEnabled>false</jmxEnabled>
        </configuration>
      </definition>
    </datasource>
    <datasource>
      <name>WSO2_IDENTITY_DB</name>
      <description>Shared database for identity data</description>
      <jndiConfig>
        <name>jdbc/WSO2IdentityDB</name>
      </jndiConfig>
      <definition type="RDBMS">
        <configuration>
          <url>jdbc:postgresql://localhost:5432/wso2_db</url>
          <username>postgres</username>
          <password>root</password>
          <driverClassName>org.postgresql.Driver</driverClassName>
        </configuration>
      </definition>
    </datasource>
  </datasources>
</datasources-configuration>
When I start the WSO2 server, master-datasources.xml reverts back to the initial H2 configuration.
I modified deployment.toml based on the suggestion from @Piraveena Paralogarajah.
[server]
hostname = "localhost"
node_ip = "127.0.0.1"
base_path = "https://$ref{server.hostname}:${carbon.management.port}"
[super_admin]
username = "admin"
password = "admin"
create_admin_account = true
[user_store]
type = "read_write_ldap"
connection_url = "ldap://localhost:${Ports.EmbeddedLDAP.LDAPServerPort}"
connection_name = "uid=admin,ou=system"
connection_password = "admin"
base_dn = "dc=wso2,dc=org" #refers the base dn on which the user and group search bases will be generated
[database.identity_db]
type = "postgre"
hostname = "localhost"
name = "wso2_db"
username = "postgres"
password = "root"
port = "5432"
[database.shared_db]
type = "postgre"
hostname = "localhost"
name = "wso2_db"
username = "postgres"
password = "root"
port = "5432"
[keystore.primary]
name = "wso2carbon.jks"
password = "wso2carbon"
Executed queries:
<IS-HOME>/dbscripts/identity/postgresql.sql
<IS-HOME>/dbscripts/identity/uma/postgresql.sql
<IS-HOME>/dbscripts/consent/postgresql.sql
This time master-datasources.xml was updated for Postgres, but I got an exception while running the server.
[2020-02-19 16:44:35,247] [] ERROR {org.wso2.carbon.user.core.common.DefaultRealm} - nullType class java.lang.reflect.InvocationTargetException org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:397)
at org.wso2.carbon.user.core.common.DefaultRealm.initializeObjects(DefaultRealm.java:224)
at org.wso2.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:129)
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:264)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:102)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:115)
at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:72)
at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:842)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.internal.framework.BundleContextImpl.startActivator(BundleContextImpl.java:834)
at org.eclipse.osgi.internal.framework.BundleContextImpl.start(BundleContextImpl.java:791)
at org.eclipse.osgi.internal.framework.EquinoxBundle.startWorker0(EquinoxBundle.java:1013)
at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.startWorker(EquinoxBundle.java:365)
at org.eclipse.osgi.container.Module.doStart(Module.java:598)
at org.eclipse.osgi.container.Module.start(Module.java:462)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel$1.run(ModuleContainer.java:1820)
at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor$2$1.execute(EquinoxContainerAdaptor.java:150)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1813)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1770)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1735)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1661)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:351)
... 25 more
Caused by: org.wso2.carbon.user.core.UserStoreException: Error occurred while checking is existing domain : PRIMARY for tenant : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:860)
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.persistDomain(AbstractUserStoreManager.java:6190)
at org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager.<init>(ReadOnlyLDAPUserStoreManager.java:240)
at org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager.<init>(ReadWriteLDAPUserStoreManager.java:120)
... 30 more
Caused by: org.wso2.carbon.user.core.UserStoreException: DB error occurred while checking is existing domain : PRIMARY & tenant id : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:1009)
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:849)
... 33 more
Caused by: org.postgresql.util.PSQLException: ERROR: relation "um_domain" does not exist
Position: 26
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2510)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2245)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:311)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:447)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:368)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:159)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
at com.sun.proxy.$Proxy53.executeQuery(Unknown Source)
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:998)
... 34 more
[2020-02-19 16:44:35,275] [] ERROR {org.wso2.carbon.user.core.internal.Activator} - Cannot start User Manager Core bundle org.wso2.carbon.user.core.UserStoreException: Cannot initialize the realm.
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:274)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:102)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:115)
at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:72)
at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:842)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.internal.framework.BundleContextImpl.startActivator(BundleContextImpl.java:834)
at org.eclipse.osgi.internal.framework.BundleContextImpl.start(BundleContextImpl.java:791)
at org.eclipse.osgi.internal.framework.EquinoxBundle.startWorker0(EquinoxBundle.java:1013)
at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.startWorker(EquinoxBundle.java:365)
at org.eclipse.osgi.container.Module.doStart(Module.java:598)
at org.eclipse.osgi.container.Module.start(Module.java:462)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel$1.run(ModuleContainer.java:1820)
at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor$2$1.execute(EquinoxContainerAdaptor.java:150)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1813)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1770)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1735)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1661)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345)
Caused by: org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
at org.wso2.carbon.user.core.common.DefaultRealm.initializeObjects(DefaultRealm.java:318)
at org.wso2.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:129)
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:264)
... 22 more
Caused by: org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:397)
at org.wso2.carbon.user.core.common.DefaultRealm.initializeObjects(DefaultRealm.java:224)
... 24 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:351)
... 25 more
Caused by: org.wso2.carbon.user.core.UserStoreException: Error occurred while checking is existing domain : PRIMARY for tenant : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:860)
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.persistDomain(AbstractUserStoreManager.java:6190)
at org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager.<init>(ReadOnlyLDAPUserStoreManager.java:240)
at org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager.<init>(ReadWriteLDAPUserStoreManager.java:120)
... 30 more
Caused by: org.wso2.carbon.user.core.UserStoreException: DB error occurred while checking is existing domain : PRIMARY & tenant id : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:1009)
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:849)
... 33 more
Caused by: org.postgresql.util.PSQLException: ERROR: relation "um_domain" does not exist
Position: 26
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2510)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2245)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:311)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:447)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:368)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:159)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
at com.sun.proxy.$Proxy53.executeQuery(Unknown Source)
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:998)
... 34 more
I tried this but it is not working.
With the 4.5.0 carbon-kernel release, all WSO2 products, such as APIM 3.0.0 and IS 5.9.0, introduced a new config model. According to the new config model, there is a centralized configuration file (deployment.toml) where users add their configurations; those configurations are then propagated to the respective .xml files.
So if you want to make changes to the master-datasources.xml file, you have to add the relevant configs to the deployment.toml file according to the new config model. With the new config model, any changes you make in the XML config files are overridden by the toml configs during server startup.
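For instance, a block like the one below in deployment.toml (values mirroring the question's config) is what generates the corresponding <datasource> entry in master-datasources.xml on startup; any manual edit to the XML is regenerated from it the next time the server starts:

[database.identity_db]
type = "postgre"
hostname = "localhost"
name = "wso2_db"
username = "postgres"
password = "root"
port = "5432"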
Please refer to this documentation for further information on the new config model.
Related documents:
https://wso2.com/blogs/thesource/2019/10/simplifying-configuration-with-WSO2-identity-server
Please follow this documentation if you are trying to configure WSO2 Identity Server with a Postgres DB.
https://is.docs.wso2.com/en/next/setup/changing-to-postgresql/
[Updated according to the new issue]
Please also execute this script:
/dbscripts/postgresql.sql
From the error logs it says relation "um_domain" does not exist. That table is created by this script, and you haven't executed this particular script.
Caused by: org.postgresql.util.PSQLException: ERROR: relation "um_domain" does not exist
Position: 26
It seems you are missing some tables; maybe your DB schema is not compliant with the WSO2 DB schema.
To fix that, you need to run the WSO2 DB scripts on the Postgres DB. You can find the scripts inside the product under {is-home}/dbscripts and {is-home}/dbscripts/identity; the Postgres scripts are named "postgresql.sql".
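As a sketch of how those scripts could be executed, assuming the local database from the question (psql's -f flag runs a script file; replace <IS-HOME> with the actual product path):

psql -h localhost -p 5432 -U postgres -d wso2_db -f <IS-HOME>/dbscripts/postgresql.sql
psql -h localhost -p 5432 -U postgres -d wso2_db -f <IS-HOME>/dbscripts/identity/postgresql.sql
psql -h localhost -p 5432 -U postgres -d wso2_db -f <IS-HOME>/dbscripts/identity/uma/postgresql.sql
psql -h localhost -p 5432 -U postgres -d wso2_db -f <IS-HOME>/dbscripts/consent/postgresql.sql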
Note that deployment.toml works as the master (publisher) file, so the configuration rolled back to the H2 database because LDAP was still configured on localhost.
Please follow the steps below.
Open deployment.toml, found under C:\Program Files\WSO2\Identity Server\5.11.0\repository\conf.
Remove the LDAP / AD configuration and add this:
[user_store]
type = "database_unique_id"
Change the user database configuration
[database.user]
url = "jdbc:postgresql://localhost:5432/wso2"
username = "postgres"
password = "MohsenPass"
driver = "org.postgresql.Driver"
Change the identity_db database configuration
[database.identity_db]
type = "postgre"
hostname = "localhost"
name = "wso2"
username = "postgres"
password = "PassMohsen"
port = "5432"
Change the shared_db database configuration
[database.shared_db]
type = "postgre"
hostname = "localhost"
name = "wso2"
username = "postgres"
password = "MohsenPass"
port = "5432"
Now start up the server; that process will initialize the new configuration and the new destination as well.
I hope this helps fix your issues.
For any questions about setting up and developing with WSO2 Identity Server, ask me on Twitter @MohsenEnazi.

WebLogic 12c deployment null pointer exception

I encounter a null pointer exception, but I cannot find any hints in the log:
EJB deployment state REMOVED for [pas-app-master-app-4.0.5-SNAPSHOT-1:pas-app-master-app-4.0.5-SNAPSHOT-1, pas-app-master-ejb-4.0.5-SNAPSHOT.jar:pas-app-master-ejb-4.0.5-SNAPSHOT:pas-app-master-ejb-4.0.5-SNAPSHOT.jar](PASServiceFacade, SampleSessionBean)
Failure occurred in the execution of deployment request with ID "680923808590731" for task "116" on [partition-name: DOMAIN]. Error is: "weblogic.management.DeploymentException: java.lang.NullPointerException"
weblogic.management.DeploymentException: java.lang.NullPointerException
at weblogic.application.internal.flow.EnvContextFlow.activate(EnvContextFlow.java:81)
at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:752)
at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45)
at weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:262)
at weblogic.application.internal.EarDeployment.activate(EarDeployment.java:66)
at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:165)
at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:90)
at weblogic.deploy.internal.targetserver.operations.AbstractOperation.activate(AbstractOperation.java:631)
at weblogic.deploy.internal.targetserver.operations.ActivateOperation.activateDeployment(ActivateOperation.java:171)
at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doCommit(ActivateOperation.java:121)
at weblogic.deploy.internal.targetserver.operations.StartOperation.doCommit(StartOperation.java:151)
at weblogic.deploy.internal.targetserver.operations.AbstractOperation.commit(AbstractOperation.java:348)
at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentCommit(DeploymentManager.java:907)
at weblogic.deploy.internal.targetserver.DeploymentManager.activateDeploymentList(DeploymentManager.java:1468)
at weblogic.deploy.internal.targetserver.DeploymentManager.handleCommit(DeploymentManager.java:459)
at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.commit(DeploymentServiceDispatcher.java:181)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doCommitCallback(DeploymentReceiverCallbackDeliverer.java:217)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$100(DeploymentReceiverCallbackDeliverer.java:14)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$2.run(DeploymentReceiverCallbackDeliverer.java:69)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:678)
at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)
Caused By: java.lang.NullPointerException
at weblogic.wsee.jaxws.ServiceRefProcessorImpl.colocated(ServiceRefProcessorImpl.java:508)
at weblogic.wsee.jaxws.ServiceRefProcessorImpl.getWsdlURL(ServiceRefProcessorImpl.java:338)
at weblogic.wsee.jaxws.ServiceRefProcessorImpl.createService(ServiceRefProcessorImpl.java:262)
at weblogic.wsee.jaxws.ServiceRefProcessorImpl.createTargetRef(ServiceRefProcessorImpl.java:130)
at weblogic.wsee.jaxws.ServiceRefProcessorImpl.bindServiceRef(ServiceRefProcessorImpl.java:420)
at weblogic.application.naming.EnvironmentBuilder.bindServiceRef(EnvironmentBuilder.java:1179)
at weblogic.application.naming.EnvironmentBuilder.bindServiceReferences(EnvironmentBuilder.java:1146)
at weblogic.application.naming.EnvironmentBuilder.bindServiceReferences(EnvironmentBuilder.java:1518)
at weblogic.application.naming.EnvironmentBuilder.bindEnvEntriesFromDDs(EnvironmentBuilder.java:2118)
at weblogic.application.naming.EnvironmentBuilder.bindEnvEntriesFromDDs(EnvironmentBuilder.java:2089)
at weblogic.application.internal.flow.EnvContextFlow.activate(EnvContextFlow.java:79)
at weblogic.application.internal.BaseDeployment$2.next(BaseDeployment.java:752)
at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:45)
at weblogic.application.internal.BaseDeployment.activate(BaseDeployment.java:262)
at weblogic.application.internal.EarDeployment.activate(EarDeployment.java:66)
at weblogic.application.internal.DeploymentStateChecker.activate(DeploymentStateChecker.java:165)
at weblogic.deploy.internal.targetserver.AppContainerInvoker.activate(AppContainerInvoker.java:90)
at weblogic.deploy.internal.targetserver.operations.AbstractOperation.activate(AbstractOperation.java:631)
at weblogic.deploy.internal.targetserver.operations.ActivateOperation.activateDeployment(ActivateOperation.java:171)
at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doCommit(ActivateOperation.java:121)
at weblogic.deploy.internal.targetserver.operations.StartOperation.doCommit(StartOperation.java:151)
at weblogic.deploy.internal.targetserver.operations.AbstractOperation.commit(AbstractOperation.java:348)
at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentCommit(DeploymentManager.java:907)
at weblogic.deploy.internal.targetserver.DeploymentManager.activateDeploymentList(DeploymentManager.java:1468)
at weblogic.deploy.internal.targetserver.DeploymentManager.handleCommit(DeploymentManager.java:459)
at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.commit(DeploymentServiceDispatcher.java:181)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doCommitCallback(DeploymentReceiverCallbackDeliverer.java:217)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$100(DeploymentReceiverCallbackDeliverer.java:14)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$2.run(DeploymentReceiverCallbackDeliverer.java:69)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:678)
at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)

Elasticsearch cluster: java.lang.IllegalStateException: Future got interrupted

After a restart, our Elasticsearch cluster is showing this in the logs:
[2018-08-30T14:13:15,111][ERROR][o.e.x.m.c.c.ClusterStatsCollector] [myelastic01] collector [cluster_stats] failed to collect data
java.lang.IllegalStateException: Future got interrupted
at org.elasticsearch.common.util.concurrent.FutureUtils.get(FutureUtils.java:53) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:34) ~[elasticsearch-6.3.2.jar:6.3.2]
(...)
Caused by: java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998) ~[?:1.8.0_181]
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) ~[?:1.8.0_181]
(...)
[2018-08-30T14:15:22,972][ERROR][o.e.x.m.c.i.IndexStatsCollector] [myelastic01] collector [index-stats] failed to collect data
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:166) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:68) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.checkGlobalBlock(TransportIndicesStatsAction.java:45) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.<init>(TransportBroadcastByNodeAction.java:255) ~[elasticsearch-6.3.2.jar:6.3.2]
[2018-08-30T14:18:50,307][WARN ][r.suppressed ] path: /.kibana/_search, params: {ignore_unavailable=true, index=.kibana, filter_path=aggregations.types.buckets}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:288) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:128) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:249) ~[elasticsearch-6.3.2.jar:6.3.2]
(...)
[2018-08-30T14:18:50,304][WARN ][r.suppressed ] path: /.kibana/doc/config%3A6.3.2, params: {index=.kibana, id=config:6.3.2, type=doc}
org.elasticsearch.action.NoShardAvailableActionException: No shard available for [get [.kibana][doc][config:6.3.2]: routing [null]]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:207) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.start(TransportSingleShardAction.java:186) ~[elasticsearch-6.3.2.jar:6.3.2]
at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:95) ~[elasticsearch-6.3.2.jar:6.3.2]
(...)
This is an ELK stack that was working before; the configuration has not been modified. What could be the cause? I've searched around but found nothing relevant.
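For context, the blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized] lines mean these requests arrived before the cluster had recovered its state after the restart. One way to watch the recovery progress uses the standard Elasticsearch health and recovery endpoints (the host and port here are assumptions):

curl -s 'http://localhost:9200/_cluster/health?pretty'
curl -s 'http://localhost:9200/_cat/recovery?v'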

:INIT_FAILURE, Fail to create InputInitializerManager

I am using EMR services from Amazon Web Services and am attempting to run a count query on an external table that I've built. The data for the table is stored in MongoDB, and the table is an external table in Hive. The query I'm trying to run is
select user_id, count (*) from myTable group by user_id;
I can run select * from myTable, but I cannot run any other queries. When I try to, I get this error:
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 1 container FAILED -1 0 0 -1 0 0
Reducer 2 container KILLED 1 0 0 1 0 0
----------------------------------------------------------------------------------------------
VERTICES: 00/02 [>>--------------------------] 0% ELAPSED TIME: 0.03 s
----------------------------------------------------------------------------------------------
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1476467351971_0008_2_00, diagnostics=[Vertex vertex_1476467351971_0008_2_00 [Map 1] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:70)
at org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:151)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:148)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:148)
at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:121)
at org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:3986)
at org.apache.tez.dag.app.dag.impl.VertexImpl.access$3100(VertexImpl.java:204)
at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:2818)
at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2765)
at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2747)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:59)
at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1888)
at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:203)
at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2242)
at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2228)
at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:114)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68)
... 25 more
Caused by: java.lang.RuntimeException: Failed to load plan: hdfs://ip-172-31-33-88.ec2.internal:8020/tmp/hive/hadoop/8c1eca9a-84ba-4d79-b39c-e633f1c6a646/hive_2016-10-14_18-57-38_728_4862537458701624850-1/hadoop/_tez_scratch_dir/157c0e27-0bfa-4acf-b0e9-b2fed595a8de/map.xml: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find class: com.mongodb.hadoop.hive.input.HiveMongoInputFormat
Serialization trace:
inputFileFormatClass (org.apache.hadoop.hive.ql.plan.PartitionDesc)
aliasToPartnInfo (org.apache.hadoop.hive.ql.plan.MapWork)
at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:451)
at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:298)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.<init>(HiveSplitGenerator.java:131)
... 30 more
Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find class: com.mongodb.hadoop.hive.input.HiveMongoInputFormat
Serialization trace:
inputFileFormatClass (org.apache.hadoop.hive.ql.plan.PartitionDesc)
aliasToPartnInfo (org.apache.hadoop.hive.ql.plan.MapWork)
at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClass(SerializationUtilities.java:180)
at org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultSerializers$ClassSerializer.read(DefaultSerializers.java:326)
at org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultSerializers$ClassSerializer.read(DefaultSerializers.java:314)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:759)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObjectOrNull(SerializationUtilities.java:198)
at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:132)
at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:175)
at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:161)
at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:213)
at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:686)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:205)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializeObjectByKryo(SerializationUtilities.java:583)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializePlan(SerializationUtilities.java:492)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializePlan(SerializationUtilities.java:469)
at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:411)
... 32 more
Caused by: java.lang.ClassNotFoundException: com.mongodb.hadoop.hive.input.HiveMongoInputFormat
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:154)
... 55 more
]
Vertex killed, vertexName=Reducer 2, vertexId=vertex_1476467351971_0008_2_01, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1476467351971_0008_2_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1476467351971_0008_2_00, diagnostics=[Vertex vertex_1476467351971_0008_2_00 [Map 1] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1476467351971_0008_2_01, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1476467351971_0008_2_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
hive>
I am using a single-node EMR cluster from AWS with the "Core Hadoop" configuration, but I had the same errors using a three-node cluster. I have already added the Mongo Hadoop Core, Mongo Hadoop Hive, and Mongo Java driver jars to the master node. These are the jars I'm using:
mongo-hadoop-hive-2.0.1.jar
mongo-java-driver-3.3.0.jar
mongo-hadoop-core-2.0.1.jar
These jars were downloaded from Maven.
A similar question had previously been asked on StackOverflow and was resolved by adding the src for Apache Tez 0.8.4 and building it. Apache Tez 0.8.4 is already on the cluster though, at least according to the EMR configuration we chose.
We are using Hadoop 2.7.2 and Hive 2.1.0.
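One hedged possibility, given that the ClassNotFoundException is thrown while Tez deserializes the query plan: the connector jars have to be registered with the Hive session (which localizes them for the Tez containers), not merely copied onto the master's filesystem. A sketch, assuming the jars were placed in /home/hadoop (adjust the paths):

-- register the MongoDB connector jars with the Hive session
ADD JAR /home/hadoop/mongo-hadoop-core-2.0.1.jar;
ADD JAR /home/hadoop/mongo-hadoop-hive-2.0.1.jar;
ADD JAR /home/hadoop/mongo-java-driver-3.3.0.jar;
-- then re-run the failing query
select user_id, count(*) from myTable group by user_id;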

nutch2.0 with cassandra

Exception in thread "main" org.apache.gora.util.GoraException: java.io.IOException
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:167)
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:135)
at org.apache.nutch.storage.StorageUtils.createWebStore(StorageUtils.java:75)
at org.apache.nutch.crawl.InjectorJob.run(InjectorJob.java:214)
at org.apache.nutch.crawl.Crawler.runTool(Crawler.java:68)
at org.apache.nutch.crawl.Crawler.run(Crawler.java:136)
at org.apache.nutch.crawl.Crawler.run(Crawler.java:250)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.crawl.Crawler.main(Crawler.java:257)
Caused by: java.io.IOException
at org.apache.gora.cassandra.store.CassandraStore.initialize(CassandraStore.java:88)
at org.apache.gora.store.DataStoreFactory.initializeDataStore(DataStoreFactory.java:102)
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:161)
... 8 more
Caused by: java.lang.NullPointerException
at org.apache.gora.cassandra.store.CassandraMapping.<init>(CassandraMapping.java:117)
at org.apache.gora.cassandra.store.CassandraMappingManager.get(CassandraMappingManager.java:84)
at org.apache.gora.cassandra.store.CassandraClient.initialize(CassandraClient.java:84)
at org.apache.gora.cassandra.store.CassandraStore.initialize(CassandraStore.java:85)
... 10 more
I just ran Nutch 2.0 on Cassandra. The above is the output of the crawl, and the output of TestGoraStorage is as follows:
Starting!
Exception in thread "main" org.apache.gora.util.GoraException: java.io.IOException
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:167)
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:135)
at org.apache.nutch.storage.StorageUtils.createWebStore(StorageUtils.java:75)
at org.apache.nutch.storage.TestGoraStorage.main(TestGoraStorage.java:204)
Caused by: java.io.IOException
at org.apache.gora.cassandra.store.CassandraStore.initialize(CassandraStore.java:88)
at org.apache.gora.store.DataStoreFactory.initializeDataStore(DataStoreFactory.java:102)
at org.apache.gora.store.DataStoreFactory.createDataStore(DataStoreFactory.java:161)
... 3 more
Caused by: java.lang.NullPointerException
at org.apache.gora.cassandra.store.CassandraMapping.<init>(CassandraMapping.java:117)
at org.apache.gora.cassandra.store.CassandraMappingManager.get(CassandraMappingManager.java:84)
at org.apache.gora.cassandra.store.CassandraClient.initialize(CassandraClient.java:84)
at org.apache.gora.cassandra.store.CassandraStore.initialize(CassandraStore.java:85)
... 5 more
I can connect to Cassandra with cassandra-cli, and I just checked out Nutch from SVN.
Here is the effective config in gora.properties:
gora.datastore.default=org.apache.gora.cassandra.store.CassandraStore
gora.sqlstore.jdbc.driver=org.hsqldb.jdbc.JDBCDriver
gora.sqlstore.jdbc.url=jdbc:hsqldb:hsql://210.44.138.8/nutchtest
gora.sqlstore.jdbc.user=sa
gora.sqlstore.jdbc.password=
gora.cassandrastore.servers=210.44.138.8:9160
and the config in gora-cassandra-mapping.xml:
<keyspace name="webpage" cluster="My Cluster" host="210.44.138.8">
<family name="p"/>
<family name="f"/>
<family name="sc" type="super"/>
</keyspace>
210.44.138.8 is a node of my cluster, and the cluster name is "My Cluster".
More info: the firewall is closed, and I run it in Eclipse. I would be very pleased if someone could give me any help.
I'm not sure if I had the exact same problem, but I found that in the gora-cassandra-mapping.xml file I had to add a keyspace attribute (keyspace="ks1") to the class element:
<keyspace name="ks1" cluster="My Cluster" host="1.2.3.4">
...
</keyspace>
<class keyspace="ks1" keyClass="java.lang.String" name="org.apache.nutch.storage.WebPage">
...
</class>
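Putting the question's keyspace together with that fix, a minimal sketch of the whole mapping file might look like the following. The <field> entries are illustrative placeholders, not the complete WebPage mapping that ships with Nutch:

<gora-orm>
  <keyspace name="webpage" cluster="My Cluster" host="210.44.138.8">
    <family name="p"/>
    <family name="f"/>
    <family name="sc" type="super"/>
  </keyspace>
  <!-- the class element must reference the keyspace it maps into -->
  <class keyspace="webpage" keyClass="java.lang.String" name="org.apache.nutch.storage.WebPage">
    <field name="baseUrl" family="f" qualifier="bas"/>
    <field name="status" family="f" qualifier="st"/>
  </class>
</gora-orm>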