Spring Boot with MongoDB but connecting to Azure Cosmos DB - mongodb

I have a Spring MongoDB app that I wanted to take to Azure, so I decided to use Cosmos DB. I made the following changes to my application.properties file:
spring.data.mongodb.uri = mongodb://[username]:[password]#[dbname].documents.azure.com:10255/?ssl=true
spring.data.mongodb.database=dbname
I am getting the following exception:
ActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4' on server dz5prdddc02-docdb-1.documents.azure.com:10255. The full response is { "_t" : "OKMongoResponse", "ok" : 0, "code" : 2, "errmsg" : "Message: {\"Errors\":[\"Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed.\"]}\r\nActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4", "$err" : "Message: {\"Errors\":[\"Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed.\"]}\r\nActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4" }
at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:107) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.createIndex(MongoPersistentEntityIndexCreator.java:162) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.checkForAndCreateIndexes(MongoPersistentEntityIndexCreator.java:133) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.checkForIndexes(MongoPersistentEntityIndexCreator.java:125) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.<init>(MongoPersistentEntityIndexCreator.java:91) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.<init>(MongoPersistentEntityIndexCreator.java:68) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.data.mongodb.core.MongoTemplate.<init>(MongoTemplate.java:233) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
at org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration.mongoTemplate(MongoDataAutoConfiguration.java:101) ~[spring-boot-autoconfigure-1.5.9.RELEASE.jar:1.5.9.RELEASE]
at org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration$$EnhancerBySpringCGLIB$$7c4704f8.CGLIB$mongoTemplate$1() ~[spring-boot-autoconfigure-1.5.9.RELEASE.jar:1.5.9.RELEASE]
at org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration$$EnhancerBySpringCGLIB$$7c4704f8$$FastClassBySpringCGLIB$$e0a4d3c3.invoke() ~[spring-boot-autoconfigure-1.5.9.RELEASE.jar:1.5.9.RELEASE]
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) ~[spring-core-4.3.13.RELEASE.jar:4.3.13.RELEASE]
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358) ~[spring-context-4.3.13.RELEASE.jar:4.3.13.RELEASE]
at org.springframework.boot.autoconfigure.data.mongo.MongoDataAutoConfiguration$$EnhancerBySpringCGLIB$$7c4704f8.mongoTemplate() ~[spring-boot-autoconfigure-1.5.9.RELEASE.jar:1.5.9.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_151]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_151]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_151]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_151]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162) ~[spring-beans-4.3.13.RELEASE.jar:4.3.13.RELEASE]
... 47 common frames omitted
Caused by: com.mongodb.MongoCommandException: Command failed with error 2: 'Message: {"Errors":["Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed."]}
ActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4' on server dz5prdddc02-docdb-1.documents.azure.com:10255. The full response is { "_t" : "OKMongoResponse", "ok" : 0, "code" : 2, "errmsg" : "Message: {\"Errors\":[\"Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed.\"]}\r\nActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4", "$err" : "Message: {\"Errors\":[\"Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed.\"]}\r\nActivityId: 25611363-0000-0000-0000-000000000000, Request URI: /apps/bbbd93b0-83ee-44a2-9015-ca7226457764/services/63c75889-e342-42b3-81b0-4851cae426d7/partitions/89ba02e8-b034-4b75-b8a0-57194d79f785/replicas/131587440168033880p, RequestStats: , SDK: Microsoft.Azure.Documents.Common/1.19.121.4" }
at com.mongodb.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:115) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.connection.CommandProtocol.execute(CommandProtocol.java:114) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:168) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:289) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.connection.DefaultServerConnection.command(DefaultServerConnection.java:176) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:216) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:207) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:146) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CommandOperationHelper.executeWrappedCommandProtocol(CommandOperationHelper.java:139) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CreateIndexesOperation$1.call(CreateIndexesOperation.java:150) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CreateIndexesOperation$1.call(CreateIndexesOperation.java:144) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:426) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:417) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CreateIndexesOperation.execute(CreateIndexesOperation.java:144) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.operation.CreateIndexesOperation.execute(CreateIndexesOperation.java:71) ~[mongodb-driver-core-3.4.3.jar:na]
at com.mongodb.Mongo.execute(Mongo.java:845) ~[mongodb-driver-3.4.3.jar:na]
at com.mongodb.Mongo$2.execute(Mongo.java:828) ~[mongodb-driver-3.4.3.jar:na]
at com.mongodb.DBCollection.createIndex(DBCollection.java:1618) ~[mongodb-driver-3.4.3.jar:na]
at org.springframework.data.mongodb.core.index.MongoPersistentEntityIndexCreator.createIndex(MongoPersistentEntityIndexCreator.java:142) ~[spring-data-mongodb-1.10.9.RELEASE.jar:na]
... 63 common frames omitted

This error - “Too many 'included' paths (106) specified in policy. A maximum of 100 is allowed.” - occurs when you create multiple indexes on the account and exceed the limit (100). However, you don’t have to create most of these indexes (if any): in contrast to MongoDB, Cosmos DB automatically indexes all paths in a document, so explicit indexes aren’t required. Remove createIndex/ensureIndex commands and their Spring equivalents unless they create unique indexes (you still need createIndex for those, since Cosmos DB doesn’t know which fields you want the constraint on).
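On the Spring side this usually means dropping the plain @Indexed annotations from your entities and keeping only the ones that enforce uniqueness. A minimal sketch (the entity and field names below are made up for illustration):
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.index.Indexed;
import org.springframework.data.mongodb.core.mapping.Document;

// Hypothetical entity, for illustration only.
@Document(collection = "users")
public class User {

    @Id
    private String id;

    // A unique constraint still needs an explicit index, even on Cosmos DB.
    @Indexed(unique = true)
    private String email;

    // Previously annotated with @Indexed; the annotation can be dropped
    // because Cosmos DB indexes every path by default.
    private String displayName;

    // getters and setters omitted
}
Newer Spring Boot versions also expose a spring.data.mongodb.auto-index-creation property to switch automatic index creation off globally, but with the 1.5.x/1.10.x versions shown in your stack trace the simplest route is trimming the annotations.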

Related

File domain patron error when running Spark streaming

I faced this error after running my application for several hours.
My Spark application reads a stream from a streaming Hudi table (a Hudi table that is constantly updated) and writes to a Parquet file. Another stream reads that same Parquet file and writes to another Hudi table. The flow is as follows:
Hudi -> stream 1 -> parquet -> stream 2 -> hudi
The error appears when stream 2 reads from the Parquet file. The underlying storage is OneFS.
User class threw exception: org.apache.spark.sql.streaming.StreamingQueryException: Failed to get file domain patron for path /path/_temporary. Error: Name: _temporary Status: STATUS_OBJECT_NAME_NOT_FOUND
=== Streaming Query ===
Identifier: [id = 72e6b29c-a641-47ff-82fc-ccd8146a4226, runId = 22af0796-7cb4-4599-b29f-ee95bda27cb3]
Current Committed Offsets: {FileStreamSource[hdfs://path]: {"logOffset":60}}
Current Available Offsets: {FileStreamSource[hdfs://path]: {"logOffset":60}}
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
FileStreamSource[hdfs://path]
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:356)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:244)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed to get file domain patron for path path/_temporary. Error: Name: _temporary Status: STATUS_OBJECT_NAME_NOT_FOUND
at org.apache.hadoop.ipc.Client.call(Client.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1413)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy10.getListing(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:578)
at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy11.getListing(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2086)
at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:944)
at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.<init>(DistributedFileSystem.java:927)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:872)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:868)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:886)
at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1696)
at org.apache.spark.util.HadoopFSUtils$.listLeafFiles(HadoopFSUtils.scala:220)
at org.apache.spark.util.HadoopFSUtils$.$anonfun$parallelListLeafFilesInternal$1(HadoopFSUtils.scala:95)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at org.apache.spark.util.HadoopFSUtils$.parallelListLeafFilesInternal(HadoopFSUtils.scala:85)
at org.apache.spark.util.HadoopFSUtils$.parallelListLeafFiles(HadoopFSUtils.scala:69)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.bulkListLeafFiles(InMemoryFileIndex.scala:158)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.listLeafFiles(InMemoryFileIndex.scala:131)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.refresh0(InMemoryFileIndex.scala:94)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.<init>(InMemoryFileIndex.scala:66)
at org.apache.spark.sql.execution.streaming.FileStreamSource.allFilesUsingInMemoryFileIndex(FileStreamSource.scala:248)
at org.apache.spark.sql.execution.streaming.FileStreamSource.fetchAllFiles(FileStreamSource.scala:301)
at org.apache.spark.sql.execution.streaming.FileStreamSource.fetchMaxOffset(FileStreamSource.scala:128)
at org.apache.spark.sql.execution.streaming.FileStreamSource.latestOffset(FileStreamSource.scala:325)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$3(MicroBatchExecution.scala:394)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:357)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:355)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$2(MicroBatchExecution.scala:385)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:128)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$constructNextBatch$1(MicroBatchExecution.scala:382)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.withProgressLocked(MicroBatchExecution.scala:613)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.constructNextBatch(MicroBatchExecution.scala:378)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:

How to read MongoDB DBRef on PrestoDB/PrestoSQL?

MongoDB has references from one document to another (similar to a foreign key) called DBRefs. These are like any other property but with two parts: the name of the collection they reference ($ref) and the id of the document they refer to ($id).
Both parts start with $, and these DBRefs are not picked up automatically when querying the collections.
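For illustration, a DBRef field stored inside a document looks roughly like this (the collection and field names are made up):
{
    "customer" : {
        "$ref" : "customers",
        "$id" : ObjectId("5e6f1a2b3c4d5e6f1a2b3c4d")
    }
}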
I defined them manually in the _schema collection like this:
{
    "name" : "otherCollection",
    "type" : "row($id ObjectId, $ref varchar)",
    "hidden" : false
}
But it throws an exception at querying time. I'm using PrestoDB v0.232 but planning to move to PrestoSQL.
2020-03-12T20:56:32.164-0600 ERROR remote-task-callback-7 com.facebook.presto.execution.StageExecutionStateMachine Stage execution 20200313_025632_00008_hnu22.2.0 failed
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalArgumentException: Bad type signature: 'row($id ObjectId)'
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
at com.google.common.cache.LocalCache.get(LocalCache.java:3952)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
at com.facebook.presto.mongodb.MongoSession.getTable(MongoSession.java:153)
at com.facebook.presto.mongodb.MongoMetadata.getTableMetadata(MongoMetadata.java:278)
at com.facebook.presto.mongodb.MongoMetadata.listTableColumns(MongoMetadata.java:130)
at com.facebook.presto.metadata.MetadataManager.listTableColumns(MetadataManager.java:590)
at com.facebook.presto.metadata.MetadataListing.listTableColumns(MetadataListing.java:93)
at com.facebook.presto.connector.system.jdbc.ColumnJdbcTable.cursor(ColumnJdbcTable.java:126)
at com.facebook.presto.connector.system.SystemPageSourceProvider$1.cursor(SystemPageSourceProvider.java:124)
at com.facebook.presto.split.MappedRecordSet.cursor(MappedRecordSet.java:53)
at com.facebook.presto.spi.RecordPageSource.<init>(RecordPageSource.java:38)
at com.facebook.presto.connector.system.SystemPageSourceProvider.createPageSource(SystemPageSourceProvider.java:103)
at com.facebook.presto.spi.connector.ConnectorPageSourceProvider.createPageSource(ConnectorPageSourceProvider.java:40)
at com.facebook.presto.split.PageSourceManager.createPageSource(PageSourceManager.java:58)
at com.facebook.presto.operator.ScanFilterAndProjectOperator.getOutput(ScanFilterAndProjectOperator.java:227)
at com.facebook.presto.operator.Driver.processInternal(Driver.java:379)
at com.facebook.presto.operator.Driver.lambda$processFor$8(Driver.java:283)
at com.facebook.presto.operator.Driver.tryWithLock(Driver.java:675)
at com.facebook.presto.operator.Driver.processFor(Driver.java:276)
at com.facebook.presto.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1077)
at com.facebook.presto.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:162)
at com.facebook.presto.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:545)
at com.facebook.presto.$gen.Presto_0_232_cc1019c____20200313_025536_1.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: Bad type signature: 'row($id ObjectId)'
at com.facebook.presto.spi.type.TypeSignature.checkArgument(TypeSignature.java:360)
at com.facebook.presto.spi.type.TypeSignature.parseRowTypeSignature(TypeSignature.java:200)
at com.facebook.presto.spi.type.TypeSignature.parseTypeSignature(TypeSignature.java:119)
at com.facebook.presto.spi.type.TypeSignature.parseTypeSignature(TypeSignature.java:106)
at com.facebook.presto.mongodb.MongoSession.buildColumnHandle(MongoSession.java:197)
at com.facebook.presto.mongodb.MongoSession.loadTableSchema(MongoSession.java:183)
at com.google.common.cache.CacheLoader$FunctionToCacheLoader.load(CacheLoader.java:165)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
... 28 more
Is there a way to read these kinds of fields?
UPDATE:
I tried moving to the latest PrestoSQL version (331) and surrounding the names with ", like this:
"{
"name" : "otherCollection",
"type" : "row(\"$id\" ObjectId, \"$ref\" varchar)",
"hidden" : false
}"
And the column started to be displayed in queries, but the contents are always null. I also tried surrounding with ' but in that case got this exception:
java.util.concurrent.ExecutionException: io.prestosql.sql.parser.ParsingException: line 1:5: mismatched input ''$id''. Expecting: <type>
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:531)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:492)
at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:83)
at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:196)
at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2312)
at com.google.common.cache.LocalCache$Segment$1.run(LocalCache.java:2292)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:398)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1029)
at com.google.common.util.concurrent.AbstractFuture.addListener(AbstractFuture.java:675)
at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.addListener(AbstractFuture.java:105)
at com.google.common.cache.LocalCache$Segment.loadAsync(LocalCache.java:2287)
at com.google.common.cache.LocalCache$Segment.refresh(LocalCache.java:2359)
at com.google.common.cache.LocalCache$Segment.scheduleRefresh(LocalCache.java:2337)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2034)
at com.google.common.cache.LocalCache.get(LocalCache.java:3952)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
at io.prestosql.plugin.mongodb.MongoSession.getTable(MongoSession.java:164)
at io.prestosql.plugin.mongodb.MongoMetadata.getColumnHandles(MongoMetadata.java:114)
at io.prestosql.plugin.mongodb.MongoMetadata.getTableProperties(MongoMetadata.java:226)
at io.prestosql.metadata.MetadataManager.getTableProperties(MetadataManager.java:415)
at io.prestosql.sql.planner.DistributedExecutionPlanner.getTableInfo(DistributedExecutionPlanner.java:145)
at io.prestosql.sql.planner.DistributedExecutionPlanner.lambda$doPlan$0(DistributedExecutionPlanner.java:133)
at com.google.common.collect.CollectCollectors.lambda$toImmutableMap$1(CollectCollectors.java:61)
at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Collections$2.tryAdvance(Collections.java:4719)
at java.util.Collections$2.forEachRemaining(Collections.java:4727)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at io.prestosql.sql.planner.DistributedExecutionPlanner.doPlan(DistributedExecutionPlanner.java:133)
at io.prestosql.sql.planner.DistributedExecutionPlanner.doPlan(DistributedExecutionPlanner.java:124)
at io.prestosql.sql.planner.DistributedExecutionPlanner.plan(DistributedExecutionPlanner.java:96)
at io.prestosql.execution.SqlQueryExecution.planDistribution(SqlQueryExecution.java:433)
at io.prestosql.execution.SqlQueryExecution.start(SqlQueryExecution.java:339)
at io.prestosql.$gen.Presto_331____20200318_020345_2.run(Unknown Source)
at io.prestosql.execution.SqlQueryManager.createQuery(SqlQueryManager.java:240)
at io.prestosql.dispatcher.LocalDispatchQuery.lambda$startExecution$7(LocalDispatchQuery.java:132)
at io.prestosql.$gen.Presto_331____20200318_020345_2.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.prestosql.sql.parser.ParsingException: line 1:5: mismatched input ''$id''. Expecting: <type>
at io.prestosql.sql.parser.ErrorHandler.syntaxError(ErrorHandler.java:108)
at org.antlr.v4.runtime.ProxyErrorListener.syntaxError(ProxyErrorListener.java:41)
at org.antlr.v4.runtime.Parser.notifyErrorListeners(Parser.java:544)
at org.antlr.v4.runtime.DefaultErrorStrategy.reportNoViableAlternative(DefaultErrorStrategy.java:310)
at org.antlr.v4.runtime.DefaultErrorStrategy.reportError(DefaultErrorStrategy.java:136)
at io.prestosql.sql.parser.SqlBaseParser.type(SqlBaseParser.java:10326)
at io.prestosql.sql.parser.SqlBaseParser.standaloneType(SqlBaseParser.java:386)
at io.prestosql.sql.parser.SqlParser.invokeParser(SqlParser.java:146)
at io.prestosql.sql.parser.SqlParser.createType(SqlParser.java:96)
at io.prestosql.metadata.TypeRegistry.fromSqlType(TypeRegistry.java:164)
at io.prestosql.metadata.MetadataManager.fromSqlType(MetadataManager.java:1262)
at io.prestosql.type.InternalTypeManager.fromSqlType(InternalTypeManager.java:51)
at io.prestosql.plugin.mongodb.MongoSession.buildColumnHandle(MongoSession.java:208)
at io.prestosql.plugin.mongodb.MongoSession.loadTableSchema(MongoSession.java:194)
at com.google.common.cache.CacheLoader$FunctionToCacheLoader.load(CacheLoader.java:165)
at com.google.common.cache.CacheLoader.reload(CacheLoader.java:100)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3531)
at com.google.common.cache.LocalCache$Segment.loadAsync(LocalCache.java:2286)
... 35 more
Caused by: org.antlr.v4.runtime.NoViableAltException
at org.antlr.v4.runtime.atn.ParserATNSimulator.noViableAlt(ParserATNSimulator.java:2028)
at org.antlr.v4.runtime.atn.ParserATNSimulator.execATN(ParserATNSimulator.java:467)
at org.antlr.v4.runtime.atn.ParserATNSimulator.adaptivePredict(ParserATNSimulator.java:393)
at io.prestosql.sql.parser.SqlBaseParser.type(SqlBaseParser.java:10012)
... 47 more

master-datasources.xml content always reverts back to the initial configuration when I start wso2server 5.9

I am a beginner with WSO2, and I'm trying to configure the Identity Server datasource to PostgreSQL, using the documentation.
The PostgreSQL JDBC driver is used.
My latest master-datasources.xml is:
<datasources-configuration xmlns:svns="http://org.wso2.securevault/configuration">
    <providers>
        <provider>org.wso2.carbon.ndatasource.rdbms.RDBMSDataSourceReader</provider>
    </providers>
    <datasources>
        <datasource>
            <name>WSO2_CARBON_DB</name>
            <description>The datasource used for registry and user manager</description>
            <jndiConfig>
                <name>jdbc/WSO2CarbonDB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:postgresql://localhost:5432/wso2_db</url>
                    <username>postgres</username>
                    <password>root</password>
                    <driverClassName>org.postgresql.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1; COMMIT</validationQuery>
                    <validationInterval>30000</validationInterval>
                    <defaultAutoCommit>true</defaultAutoCommit>
                    <commitOnReturn>true</commitOnReturn>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_SHARED_DB</name>
            <description>Shared Database for user and registry data</description>
            <jndiConfig>
                <name>jdbc/SHARED_DB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:postgresql://localhost:5432/wso2_db</url>
                    <username>postgres</username>
                    <password>root</password>
                    <driverClassName>org.postgresql.Driver</driverClassName>
                    <testOnBorrow>true</testOnBorrow>
                    <maxWait>60000</maxWait>
                    <defaultAutoCommit>true</defaultAutoCommit>
                    <validationInterval>30000</validationInterval>
                    <maxActive>50</maxActive>
                    <jmxEnabled>false</jmxEnabled>
                </configuration>
            </definition>
        </datasource>
        <datasource>
            <name>WSO2_IDENTITY_DB</name>
            <description>Shared database for identity data</description>
            <jndiConfig>
                <name>jdbc/WSO2IdentityDB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:postgresql://localhost:5432/wso2_db</url>
                    <username>postgres</username>
                    <password>root</password>
                    <driverClassName>org.postgresql.Driver</driverClassName>
                </configuration>
            </definition>
        </datasource>
    </datasources>
</datasources-configuration>
When I start the WSO2 server, master-datasources.xml reverts back to the initial H2 configuration.
I modified deployment.toml based on the suggestion from Piraveena Paralogarajah:
[server]
hostname = "localhost"
node_ip = "127.0.0.1"
base_path = "https://$ref{server.hostname}:${carbon.management.port}"
[super_admin]
username = "admin"
password = "admin"
create_admin_account = true
[user_store]
type = "read_write_ldap"
connection_url = "ldap://localhost:${Ports.EmbeddedLDAP.LDAPServerPort}"
connection_name = "uid=admin,ou=system"
connection_password = "admin"
base_dn = "dc=wso2,dc=org" #refers the base dn on which the user and group search bases will be generated
[database.identity_db]
type = "postgre"
hostname = "localhost"
name = "wso2_db"
username = "postgres"
password = "root"
port = "5432"
[database.shared_db]
type = "postgre"
hostname = "localhost"
name = "wso2_db"
username = "postgres"
password = "root"
port = "5432"
[keystore.primary]
name = "wso2carbon.jks"
password = "wso2carbon"
I executed these scripts:
<IS-HOME>/dbscripts/identity/postgresql.sql
<IS-HOME>/dbscripts/identity/uma/postgresql.sql
<IS-HOME>/dbscripts/consent/postgresql.sql
This time master-datasources.xml was updated for PostgreSQL, but I got an exception while running the server:
[2020-02-19 16:44:35,247] [] ERROR {org.wso2.carbon.user.core.common.DefaultRealm} - nullType class java.lang.reflect.InvocationTargetException org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:397)
at org.wso2.carbon.user.core.common.DefaultRealm.initializeObjects(DefaultRealm.java:224)
at org.wso2.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:129)
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:264)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:102)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:115)
at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:72)
at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:842)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.internal.framework.BundleContextImpl.startActivator(BundleContextImpl.java:834)
at org.eclipse.osgi.internal.framework.BundleContextImpl.start(BundleContextImpl.java:791)
at org.eclipse.osgi.internal.framework.EquinoxBundle.startWorker0(EquinoxBundle.java:1013)
at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.startWorker(EquinoxBundle.java:365)
at org.eclipse.osgi.container.Module.doStart(Module.java:598)
at org.eclipse.osgi.container.Module.start(Module.java:462)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel$1.run(ModuleContainer.java:1820)
at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor$2$1.execute(EquinoxContainerAdaptor.java:150)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1813)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1770)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1735)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1661)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:351)
... 25 more
Caused by: org.wso2.carbon.user.core.UserStoreException: Error occurred while checking is existing domain : PRIMARY for tenant : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:860)
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.persistDomain(AbstractUserStoreManager.java:6190)
at org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager.<init>(ReadOnlyLDAPUserStoreManager.java:240)
at org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager.<init>(ReadWriteLDAPUserStoreManager.java:120)
... 30 more
Caused by: org.wso2.carbon.user.core.UserStoreException: DB error occurred while checking is existing domain : PRIMARY & tenant id : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:1009)
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:849)
... 33 more
Caused by: org.postgresql.util.PSQLException: ERROR: relation "um_domain" does not exist
Position: 26
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2510)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2245)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:311)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:447)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:368)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:159)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
at com.sun.proxy.$Proxy53.executeQuery(Unknown Source)
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:998)
... 34 more
[2020-02-19 16:44:35,275] [] ERROR {org.wso2.carbon.user.core.internal.Activator} - Cannot start User Manager Core bundle org.wso2.carbon.user.core.UserStoreException: Cannot initialize the realm.
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:274)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:102)
at org.wso2.carbon.user.core.common.DefaultRealmService.<init>(DefaultRealmService.java:115)
at org.wso2.carbon.user.core.internal.Activator.startDeploy(Activator.java:72)
at org.wso2.carbon.user.core.internal.BundleCheckActivator.start(BundleCheckActivator.java:61)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:842)
at org.eclipse.osgi.internal.framework.BundleContextImpl$3.run(BundleContextImpl.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.internal.framework.BundleContextImpl.startActivator(BundleContextImpl.java:834)
at org.eclipse.osgi.internal.framework.BundleContextImpl.start(BundleContextImpl.java:791)
at org.eclipse.osgi.internal.framework.EquinoxBundle.startWorker0(EquinoxBundle.java:1013)
at org.eclipse.osgi.internal.framework.EquinoxBundle$EquinoxModule.startWorker(EquinoxBundle.java:365)
at org.eclipse.osgi.container.Module.doStart(Module.java:598)
at org.eclipse.osgi.container.Module.start(Module.java:462)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel$1.run(ModuleContainer.java:1820)
at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor$2$1.execute(EquinoxContainerAdaptor.java:150)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1813)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.incStartLevel(ModuleContainer.java:1770)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.doContainerStartLevel(ModuleContainer.java:1735)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1661)
at org.eclipse.osgi.container.ModuleContainer$ContainerStartLevel.dispatchEvent(ModuleContainer.java:1)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:234)
at org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:345)
Caused by: org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
at org.wso2.carbon.user.core.common.DefaultRealm.initializeObjects(DefaultRealm.java:318)
at org.wso2.carbon.user.core.common.DefaultRealm.init(DefaultRealm.java:129)
at org.wso2.carbon.user.core.common.DefaultRealmService.initializeRealm(DefaultRealmService.java:264)
... 22 more
Caused by: org.wso2.carbon.user.core.UserStoreException: nullType class java.lang.reflect.InvocationTargetException
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:397)
at org.wso2.carbon.user.core.common.DefaultRealm.initializeObjects(DefaultRealm.java:224)
... 24 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.wso2.carbon.user.core.common.DefaultRealm.createObjectWithOptions(DefaultRealm.java:351)
... 25 more
Caused by: org.wso2.carbon.user.core.UserStoreException: Error occurred while checking is existing domain : PRIMARY for tenant : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:860)
at org.wso2.carbon.user.core.common.AbstractUserStoreManager.persistDomain(AbstractUserStoreManager.java:6190)
at org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager.<init>(ReadOnlyLDAPUserStoreManager.java:240)
at org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager.<init>(ReadWriteLDAPUserStoreManager.java:120)
... 30 more
Caused by: org.wso2.carbon.user.core.UserStoreException: DB error occurred while checking is existing domain : PRIMARY & tenant id : -1234
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:1009)
at org.wso2.carbon.user.core.util.UserCoreUtil.persistDomain(UserCoreUtil.java:849)
... 33 more
Caused by: org.postgresql.util.PSQLException: ERROR: relation "um_domain" does not exist
Position: 26
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2510)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2245)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:311)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:447)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:368)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:159)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
at com.sun.proxy.$Proxy53.executeQuery(Unknown Source)
at org.wso2.carbon.user.core.util.UserCoreUtil.isExistingDomain(UserCoreUtil.java:998)
... 34 more
I tried this but it is not working.
With the 4.5.0 carbon-kernel release, all WSO2 products (such as APIM 3.0.0 and IS 5.9.0) introduced a new config model. According to the new config model, there is a centralized configuration file (deployment.toml) where users add their configurations, and those configurations are then written into the respective .xml files.
So if you want to change anything in the master-datasources.xml file, you have to add the relevant configs to the deployment.toml file according to the new config model. Any changes you make directly in the xml config files will be overridden by the toml configs during server startup.
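For example, a datasource that you would previously have edited in master-datasources.xml is expressed in deployment.toml roughly like this (a minimal sketch mirroring the snippet in the question; host, database name and credentials are placeholders):
[database.identity_db]
type = "postgre"
hostname = "localhost"
name = "wso2_db"
username = "postgres"
password = "root"
port = "5432"
During startup the server regenerates the corresponding WSO2_IDENTITY_DB entry in master-datasources.xml from this block.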
Please refer to the documentation below for further information on this new config model.
Related documents:
https://wso2.com/blogs/thesource/2019/10/simplifying-configuration-with-WSO2-identity-server
Please follow this documentation if you are trying to configure WSO2 Identity Server with a PostgreSQL DB:
https://is.docs.wso2.com/en/next/setup/changing-to-postgresql/
[Updated according to the new issue]
Please also execute this script:
/dbscripts/postgresql.sql
The error log says relation "um_domain" does not exist. That table is created by this script, which you haven't executed.
Caused by: org.postgresql.util.PSQLException: ERROR: relation "um_domain" does not exist
Position: 26
It seems you are missing some tables; maybe your DB schema is not compliant with the WSO2 DB schema.
To fix that, you need to run the WSO2 DB scripts on the PostgreSQL DB. You can find the scripts inside the product under {is-home}/dbscripts and {is-home}/dbscripts/identity; the PostgreSQL scripts are named "postgresql.sql".
Keep in mind that the deployment.toml configuration acts as the source that generates the other config files, so your setup rolled back to the H2 database because the LDAP user store was still configured on localhost.
Please follow the steps below.
Open deployment.toml, found under C:\Program Files\WSO2\Identity Server\5.11.0\repository\conf.
Remove the LDAP / AD configuration and add this:
[user_store]
type = "database_unique_id"
Change the user database configuration
[database.user]
url = "jdbc:postgresql://localhost:5432/wso2"
username = "postgres"
password = "MohsenPass"
driver = "org.postgresql.Driver"
Change the identity_db database configuration
[database.identity_db]
type = "postgre"
hostname = "localhost"
name = "wso2"
username = "postgres"
password = "PassMohsen"
port = "5432"
Change the shared_db database configuration
[database.shared_db]
type = "postgre"
hostname = "localhost"
name = "wso2"
username = "postgres"
password = "MohsenPass"
port = "5432"
Now start up the server; that process will initialize the new configuration and the new destination as well.
I hope this helps fix your issues.
For any questions about setting up and developing with WSO2 Identity Server, ask me on Twitter: #MohsenEnazi.

Databricks job getting javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure when calling an API running in Google Cloud

A Spark job running as a Databricks job tries to access an external REST API over HTTPS, and the following error occurs:
ERROR ScalaDriverLocal: User Code Stack Trace:
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
Here is the code making the HTTP call:
Request request = new Request.Builder()
.url("https://some_url")
.get()
.addHeader("cache-control", "no-cache")
.build();
Response response = client.newCall(request).execute();
I have tried setting the https.protocols system property in the code as follows:
System.setProperty("https.protocols","TLSv1,TLSv1.1,TLSv1.2");
but without results.
Here is the full stacktrace of the error:
ERROR ScalaDriverLocal: User Code Stack Trace:
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:2020)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1127)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.kt:351)
at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.kt:310)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.kt:178)
at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.kt:236)
at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.kt:109)
at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.kt:77)
at okhttp3.internal.connection.Transmitter.newExchange$okhttp(Transmitter.kt:162)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:35)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:112)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:87)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:82)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:112)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:87)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:84)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:112)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:71)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:112)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:87)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.kt:184)
at okhttp3.RealCall.execute(RealCall.kt:66)
at com.mycompany.metadata.MetadataRepository.loadAggregations(MetadataRepository.java:50)
at com.mycompany.jobs.DefaultJob.run(DefaultJob.java:50)
at com.mycompany.run.Main.main(Main.java:26)
at line7ccb0b1a0bd6475aac11185531c9050025.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command--1:1)
at line7ccb0b1a0bd6475aac11185531c9050025.$read$$iw$$iw$$iw$$iw$$iw.<init>(command--1:44)
at line7ccb0b1a0bd6475aac11185531c9050025.$read$$iw$$iw$$iw$$iw.<init>(command--1:46)
at line7ccb0b1a0bd6475aac11185531c9050025.$read$$iw$$iw$$iw.<init>(command--1:48)
at line7ccb0b1a0bd6475aac11185531c9050025.$read$$iw$$iw.<init>(command--1:50)
at line7ccb0b1a0bd6475aac11185531c9050025.$read$$iw.<init>(command--1:52)
at line7ccb0b1a0bd6475aac11185531c9050025.$read.<init>(command--1:54)
at line7ccb0b1a0bd6475aac11185531c9050025.$read$.<init>(command--1:58)
at line7ccb0b1a0bd6475aac11185531c9050025.$read$.<clinit>(command--1)
at line7ccb0b1a0bd6475aac11185531c9050025.$eval$.$print$lzycompute(<notebook>:7)
at line7ccb0b1a0bd6475aac11185531c9050025.$eval$.$print(<notebook>:6)
at line7ccb0b1a0bd6475aac11185531c9050025.$eval.$print(<notebook>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:215)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:197)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)
at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)
at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:679)
at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:632)
at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:197)
at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:368)
at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:345)
at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:238)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:233)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:48)
at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:271)
at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:48)
at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:345)
at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
at scala.util.Try$.apply(Try.scala:192)
at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:639)
at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:485)
at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)
at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)
at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
at java.lang.Thread.run(Thread.java:748)
I couldn't pinpoint the cause of the problem, but I found a workaround, which is not to use OkHttp. Replacing the code that makes the request with this worked:
HttpResponse<String> response = Unirest.get("<some_url>")
.header("cache-control", "no-cache")
.asString();
This is because of a cipher suite incompatibility:
Databricks chooses efficient cipher suites
OkHttp defaults to modern, secure cipher suites
To make it work, you can analyze the API with https://www.ssllabs.com/ssltest/
There is a section listing the cipher suites supported by the API; you have to set them explicitly, like this (in my case CBC_SHA256 was the deciding factor):
ConnectionSpec spec = new ConnectionSpec.Builder(ConnectionSpec.MODERN_TLS)
.tlsVersions(TlsVersion.TLS_1_2)
.cipherSuites(
CipherSuite.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
CipherSuite.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
CipherSuite.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
)
.build();
OkHttpClient client = new OkHttpClient.Builder()
.connectionSpecs(Collections.singletonList(spec))
.build();
https://square.github.io/okhttp/https/
I've found a lot of information and description here:
https://github.com/square/okhttp/issues/6138

:INIT_FAILURE, Fail to create InputInitializerManager

I am using EMR from Amazon Web Services and am attempting to run a count query on an external table that I've built. The data for the table is stored in MongoDB, and the table is an external table in Hive. The query I'm trying to run is:
select user_id, count (*) from myTable group by user_id;
I can run select * from myTable, but I cannot run any other queries. When I try to, I get this error:
----------------------------------------------------------------------------------------------
VERTICES MODE STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
----------------------------------------------------------------------------------------------
Map 1 container FAILED -1 0 0 -1 0 0
Reducer 2 container KILLED 1 0 0 1 0 0
----------------------------------------------------------------------------------------------
VERTICES: 00/02 [>>--------------------------] 0% ELAPSED TIME: 0.03 s
----------------------------------------------------------------------------------------------
Status: Failed
Vertex failed, vertexName=Map 1, vertexId=vertex_1476467351971_0008_2_00, diagnostics=[Vertex vertex_1476467351971_0008_2_00 [Map 1] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator
at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:70)
at org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:151)
at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:148)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:148)
at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:121)
at org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:3986)
at org.apache.tez.dag.app.dag.impl.VertexImpl.access$3100(VertexImpl.java:204)
at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:2818)
at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2765)
at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2747)
at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:59)
at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1888)
at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:203)
at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2242)
at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2228)
at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:114)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68)
... 25 more
Caused by: java.lang.RuntimeException: Failed to load plan: hdfs://ip-172-31-33-88.ec2.internal:8020/tmp/hive/hadoop/8c1eca9a-84ba-4d79-b39c-e633f1c6a646/hive_2016-10-14_18-57-38_728_4862537458701624850-1/hadoop/_tez_scratch_dir/157c0e27-0bfa-4acf-b0e9-b2fed595a8de/map.xml: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find class: com.mongodb.hadoop.hive.input.HiveMongoInputFormat
Serialization trace:
inputFileFormatClass (org.apache.hadoop.hive.ql.plan.PartitionDesc)
aliasToPartnInfo (org.apache.hadoop.hive.ql.plan.MapWork)
at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:451)
at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:298)
at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.<init>(HiveSplitGenerator.java:131)
... 30 more
Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find class: com.mongodb.hadoop.hive.input.HiveMongoInputFormat
Serialization trace:
inputFileFormatClass (org.apache.hadoop.hive.ql.plan.PartitionDesc)
aliasToPartnInfo (org.apache.hadoop.hive.ql.plan.MapWork)
at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156)
at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClass(SerializationUtilities.java:180)
at org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultSerializers$ClassSerializer.read(DefaultSerializers.java:326)
at org.apache.hive.com.esotericsoftware.kryo.serializers.DefaultSerializers$ClassSerializer.read(DefaultSerializers.java:314)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:759)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObjectOrNull(SerializationUtilities.java:198)
at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:132)
at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:175)
at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:161)
at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:213)
at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551)
at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:686)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:205)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializeObjectByKryo(SerializationUtilities.java:583)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializePlan(SerializationUtilities.java:492)
at org.apache.hadoop.hive.ql.exec.SerializationUtilities.deserializePlan(SerializationUtilities.java:469)
at org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:411)
... 32 more
Caused by: java.lang.ClassNotFoundException: com.mongodb.hadoop.hive.input.HiveMongoInputFormat
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:154)
... 55 more
]
Vertex killed, vertexName=Reducer 2, vertexId=vertex_1476467351971_0008_2_01, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1476467351971_0008_2_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1476467351971_0008_2_00, diagnostics=[Vertex vertex_1476467351971_0008_2_00 [Map 1] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator ... (same stack trace as above) ...]
Vertex killed, vertexName=Reducer 2, vertexId=vertex_1476467351971_0008_2_01, diagnostics=[Vertex received Kill in NEW state., Vertex vertex_1476467351971_0008_2_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
hive>
I am using a single-node EMR cluster from AWS with the "Core Hadoop" configuration, but I got the same errors on a three-node cluster. I have already added the Mongo Hadoop Core, Mongo Hadoop Hive, and Mongo Java driver jars to the master node. These are the jars I'm using:
mongo-hadoop-hive-2.0.1.jar
mongo-java-driver-3.3.0.jar
mongo-hadoop-core-2.0.1.jar
These jars were downloaded from Maven.
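For reference, registering these jars at the Hive prompt would look roughly like the sketch below. The /home/hadoop/lib paths are an assumption about where the jars were copied on the master node, not something taken from the cluster; ADD JAR and LIST JARS are standard Hive CLI commands:

-- Sketch (assumed paths): register the mongo-hadoop jars in the Hive session so that
-- com.mongodb.hadoop.hive.input.HiveMongoInputFormat can be resolved when the plan is deserialized.
ADD JAR /home/hadoop/lib/mongo-hadoop-core-2.0.1.jar;
ADD JAR /home/hadoop/lib/mongo-hadoop-hive-2.0.1.jar;
ADD JAR /home/hadoop/lib/mongo-java-driver-3.3.0.jar;
-- Verify that all three jars are visible to the current session
LIST JARS;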
A similar question was previously asked on Stack Overflow and was resolved by adding the source for Apache Tez 0.8.4 and building it. Apache Tez 0.8.4 is already on the cluster, though, at least according to the EMR configuration we chose.
We are using Hadoop 2.7.2 and Hive 2.1.0.
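Since the ClassNotFoundException is raised inside the Tez vertex rather than in the Hive client, the jars presumably also need to be on the auxiliary classpath that Hive ships to every container. One common way to do that is hive.aux.jars.path in hive-site.xml; the snippet below is only a sketch under the same assumption about jar locations, with hypothetical file:// paths:

<!-- Sketch (assumed paths): add the mongo-hadoop jars to Hive's auxiliary classpath -->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///home/hadoop/lib/mongo-hadoop-core-2.0.1.jar,file:///home/hadoop/lib/mongo-hadoop-hive-2.0.1.jar,file:///home/hadoop/lib/mongo-java-driver-3.3.0.jar</value>
</property>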