Neo4j embedded online backup from Java - scala

I am using Neo4j (embedded) Enterprise edition 1.9.4 along with the Scala-Neo4j wrapper in my project. I tried to back up the Neo4j data through the Java API like below:
def backup_data() {
  val backupPath: File = new File("D:/neo4j-enterprise-1.9.4/data/backup/")
  val backup = OnlineBackup.from("127.0.0.1")
  if (backupPath.list().length > 0) {
    backup.incremental(backupPath.getPath())
  } else {
    backup.full(backupPath.getPath())
  }
}
It works fine for the full backup, but the incremental backup part throws a NullPointerException.
Where did I go wrong?
EDIT
Building the GraphDatabase instance through Scala-Neo4j wrapper
class MyNeo4jClass extends SomethingClass with Neo4jWrapper with EmbeddedGraphDatabaseServiceProvider {
  def neo4jStoreDir = "/tmp/temp-neo-test"
  . . .
}
Stacktrace
Exception in thread "main" java.lang.NullPointerException
at org.neo4j.consistency.checking.OwnerChain$3.checkReference(OwnerChain.java:111)
at org.neo4j.consistency.checking.OwnerChain$3.checkReference(OwnerChain.java:106)
at org.neo4j.consistency.report.ConsistencyReporter$DiffReportHandler.checkReference(ConsistencyReporter.java:330)
at org.neo4j.consistency.report.ConsistencyReporter.dispatchReference(ConsistencyReporter.java:109)
at org.neo4j.consistency.report.PendingReferenceCheck.checkReference(PendingReferenceCheck.java:50)
at org.neo4j.consistency.store.DirectRecordReference.dispatch(DirectRecordReference.java:39)
at org.neo4j.consistency.report.ConsistencyReporter$ReportInvocationHandler.forReference(ConsistencyReporter.java:236)
at org.neo4j.consistency.report.ConsistencyReporter$ReportInvocationHandler.dispatchForReference(ConsistencyReporter.java:228)
at org.neo4j.consistency.report.ConsistencyReporter$ReportInvocationHandler.invoke(ConsistencyReporter.java:192)
at $Proxy17.forReference(Unknown Source)
at org.neo4j.consistency.checking.OwnerChain.check(OwnerChain.java:143)
at org.neo4j.consistency.checking.PropertyRecordCheck.checkChange(PropertyRecordCheck.java:57)
at org.neo4j.consistency.checking.PropertyRecordCheck.checkChange(PropertyRecordCheck.java:35)
at org.neo4j.consistency.report.ConsistencyReporter.dispatchChange(ConsistencyReporter.java:101)
at org.neo4j.consistency.report.ConsistencyReporter.forPropertyChange(ConsistencyReporter.java:382)
at org.neo4j.consistency.checking.incremental.StoreProcessor.checkProperty(StoreProcessor.java:61)
at org.neo4j.consistency.checking.AbstractStoreProcessor.processProperty(AbstractStoreProcessor.java:95)
at org.neo4j.consistency.store.DiffRecordStore$DispatchProcessor.processProperty(DiffRecordStore.java:207)
at org.neo4j.kernel.impl.nioneo.store.PropertyStore.accept(PropertyStore.java:83)
at org.neo4j.kernel.impl.nioneo.store.PropertyStore.accept(PropertyStore.java:43)
at org.neo4j.consistency.store.DiffRecordStore.accept(DiffRecordStore.java:159)
at org.neo4j.kernel.impl.nioneo.store.RecordStore$Processor.applyById(RecordStore.java:180)
at org.neo4j.consistency.store.DiffStore.apply(DiffStore.java:73)
at org.neo4j.kernel.impl.nioneo.store.StoreAccess.applyToAll(StoreAccess.java:174)
at org.neo4j.consistency.checking.incremental.IncrementalDiffCheck.execute(IncrementalDiffCheck.java:43)
at org.neo4j.consistency.checking.incremental.DiffCheck.check(DiffCheck.java:39)
at org.neo4j.consistency.checking.incremental.intercept.CheckingTransactionInterceptor.complete(CheckingTransactionInterceptor.java:160)
at org.neo4j.kernel.impl.transaction.xaframework.InterceptingXaLogicalLog$1.intercept(InterceptingXaLogicalLog.java:79)
at org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog$LogDeserializer.readAndWriteAndApplyEntry(XaLogicalLog.java:1120)
at org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog.applyTransaction(XaLogicalLog.java:1292)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.applyCommittedTransaction(XaResourceManager.java:766)
at org.neo4j.kernel.impl.transaction.xaframework.XaDataSource.applyCommittedTransaction(XaDataSource.java:246)
at org.neo4j.com.ServerUtil.applyReceivedTransactions(ServerUtil.java:423)
at org.neo4j.backup.BackupService.unpackResponse(BackupService.java:453)
at org.neo4j.backup.BackupService.incrementalWithContext(BackupService.java:388)
at org.neo4j.backup.BackupService.doIncrementalBackup(BackupService.java:286)
at org.neo4j.backup.BackupService.doIncrementalBackup(BackupService.java:273)
at org.neo4j.backup.OnlineBackup.incremental(OnlineBackup.java:147)
at Saddahaq.User_node$.backup_data(User_node.scala:1637)
at Saddahaq.User_node$.main(User_node.scala:2461)
at Saddahaq.User_node.main(User_node.scala)

After the backup is taken, the backup target is checked for consistency. The incremental version of the consistency checker currently suffers from a bug that leads to the observed NPE.
Workaround: either always take full backups with backup.full, or skip the consistency check on incremental backups by using
backup.incremental(backupPath.getPath(), false);
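Putting it together, a minimal sketch of the revised method (imports as implied by the question and the stack trace, java.io.File and org.neo4j.backup.OnlineBackup; the null guard is an addition, since File.list() returns null when the directory does not exist):

import java.io.File
import org.neo4j.backup.OnlineBackup

def backup_data() {
  val backupPath: File = new File("D:/neo4j-enterprise-1.9.4/data/backup/")
  val backup = OnlineBackup.from("127.0.0.1")
  val existing = backupPath.list() // null if the directory does not exist yet
  if (existing != null && existing.length > 0) {
    backup.incremental(backupPath.getPath(), false) // false skips the buggy consistency check
  } else {
    backup.full(backupPath.getPath())
  }
}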

Related

Getting "java.lang.IllegalStateException: Tried to lookup lag for unknown task 3_0" after upgrading Kafka Stream from 2.5.1 to 2.6.2

I just upgraded our Kafka Stream application from 2.5.1 to 2.6.2. It used to work, now it doesn't.
Here is the troublesome topology (I have omitted the irrelevant Serdes):
val builder = new StreamsBuilder()

val contractEventStream: KStream[TariffId, ContractEvent] =
  builder.stream[String, ContractUpsertAvro](settings.contractsTopicName)
    .flatMap { (_, contractAvro) =>
      ContractEvent.from(contractAvro)
        .map(contractEvent => (contractEvent.tariffId, contractEvent))
    }

val tariffsTable: KTable[TariffId, Tariff] =
  builder.stream[String, TariffUpdateEventAvro](settings.tariffTopicName)
    .flatMapValues(Tariff.fromAvro(_))
    .selectKey((_, tariff) => tariff.tariffId)
    .toTable(Materialized.`with`(tariffIdSerde, tariffSerde)) // Materialized.as also throws the same IllegalStateException

contractEventStream
  .join(tariffsTable)(JourneyStep.from(_, _).asInstanceOf[ContractCreated])(Joined.`with`(tariffIdSerde, contractEventSerde, tariffSerde))
  .selectKey((_, contractUpdated) => contractUpdated.accountId)
  .foreach((_, journeyStep) => println(journeyStep))
The join gives the following exception:
java.lang.IllegalStateException: Tried to lookup lag for unknown task 3_0
at org.apache.kafka.streams.processor.internals.assignment.ClientState.lagFor(ClientState.java:306)
at java.util.Comparator.lambda$comparingLong$6043328a$1(Comparator.java:511)
at java.util.Comparator.lambda$thenComparing$36697e65$1(Comparator.java:216)
at java.util.TreeMap.compare(TreeMap.java:1295)
at java.util.TreeMap.put(TreeMap.java:538)
at java.util.TreeSet.add(TreeSet.java:255)
at java.util.AbstractCollection.addAll(AbstractCollection.java:344)
at java.util.TreeSet.addAll(TreeSet.java:312)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.getPreviousTasksByLag(StreamsPartitionAssignor.java:1275)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assignTasksToThreads(StreamsPartitionAssignor.java:1189)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.computeNewAssignment(StreamsPartitionAssignor.java:940)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assign(StreamsPartitionAssignor.java:399)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:589)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:684)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1000(AbstractCoordinator.java:111)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:597)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:560)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1160)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1135)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:206)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:169)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:129)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:602)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:412)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1296)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:767)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:624)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
I can't see what I am doing wrong. The code above works with Kafka 2.5.1. Does anyone have any idea what is going on?
The problem is caused by the local state that Kafka Streams keeps on disk. This state is specific to the Kafka version and to the Kafka Streams topology you use (i.e. a change in your topology could also lead to this error).
The state directory is usually found under /tmp, or elsewhere if you passed the "state.dir" property to Kafka Streams. Clear that directory and you should be able to start cleanly again.
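For example, a sketch of doing this at startup rather than by hand (the application id, broker address, and state path here are hypothetical; KafkaStreams#cleanUp() deletes this instance's local state directory and must only be called while the application is not running):

import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "contracts-tariffs-joiner") // hypothetical
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")        // hypothetical
props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams")        // instead of the /tmp default

val streams = new KafkaStreams(builder.build(), props)
streams.cleanUp() // wipes local state for this application id; run once after the upgrade
streams.start()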

java.lang.NoClassDefFoundError: Could not initialize class - spark/scala

I am new to Spark/Scala development. I am using Maven to build my project and my IDE is IntelliJ. I am trying to query a Hive table and then iterate over the resulting DataFrame (using foreach). Here's my code:
try {
  val DF_1 = hiveContext.sql("select distinct(address) from test_table where trim(address)!=''")
  println("number of rows: " + DF_1.count)
  DF_1.foreach(x => {
    val y = hiveContext.sql("select place from test_table where address='" + x(0).toString + "'")
    if (y.count > 1) {
      println("Multiple place values for address: " + x(0).toString)
      y.foreach(r => println(r))
      println("*************")
    }
  })
} catch {
  case e: Exception => e.printStackTrace()
}
With each iteration I am querying the same table to get another column, trying to see if there are multiple place values for each address in test_table. I have no compilation errors and the application builds successfully. But when I run the above code, I get the following error:
java.lang.NoClassDefFoundError: Could not initialize class xxxxxxxx
The application launches successfully, prints the count of rows in DF_1, and then fails with the above error at the foreach loop. I did a jar xvf on my jar and can see the main class, driver.class:
com/.../driver$$anonfun$1$$anonfun$apply$1.class
com/.../driver$$anonfun$1.class
com/.../driver$$anonfun$2.class
com/.../driver$$anonfun$3.class
com/.../driver$$anonfun$4.class
com/.../driver$$anonfun$5.class
com/.../driver$$anonfun$main$1$$anonfun$apply$1.class
com/.../driver$$anonfun$main$1$$anonfun$apply$2.class
com/.../driver$$anonfun$main$1$$anonfun$apply$3.class
com/.../driver$$anonfun$main$1.class
com/.../driver$$anonfun$main$10$$anonfun$apply$9.class
com/.../driver$$anonfun$main$10.class
com/.../driver$$anonfun$main$11.class
com/.../driver$$anonfun$main$12.class
com/.../driver$$anonfun$main$13.class
com/.../driver$$anonfun$main$14.class
com/.../driver$$anonfun$main$15.class
com/.../driver$$anonfun$main$16.class
com/.../driver$$anonfun$main$17.class
com/.../driver$$anonfun$main$18.class
com/.../driver$$anonfun$main$19.class
com/.../driver$$anonfun$main$2$$anonfun$apply$4.class
com/.../driver$$anonfun$main$2$$anonfun$apply$5.class
com/.../driver$$anonfun$main$2$$anonfun$apply$6.class
com/.../driver$$anonfun$main$2.class
com/.../driver$$anonfun$main$20.class
com/.../driver$$anonfun$main$21.class
com/.../driver$$anonfun$main$22.class
com/.../driver$$anonfun$main$23.class
com/.../driver$$anonfun$main$3$$anonfun$apply$7.class
com/.../driver$$anonfun$main$3$$anonfun$apply$8.class
com/.../driver$$anonfun$main$3.class
com/.../driver$$anonfun$main$4$$anonfun$apply$9.class
com/.../driver$$anonfun$main$4.class
com/.../driver$$anonfun$main$5.class
com/.../driver$$anonfun$main$6$$anonfun$apply$1.class
com/.../driver$$anonfun$main$6$$anonfun$apply$2.class
com/.../driver$$anonfun$main$6$$anonfun$apply$3.class
com/.../driver$$anonfun$main$6$$anonfun$apply$4.class
com/.../driver$$anonfun$main$6$$anonfun$apply$5.class
com/.../driver$$anonfun$main$6.class
com/.../driver$$anonfun$main$7$$anonfun$apply$1.class
com/.../driver$$anonfun$main$7$$anonfun$apply$2.class
com/.../driver$$anonfun$main$7$$anonfun$apply$3.class
com/.../driver$$anonfun$main$7$$anonfun$apply$4.class
com/.../driver$$anonfun$main$7$$anonfun$apply$5.class
com/.../driver$$anonfun$main$7$$anonfun$apply$6.class
com/.../driver$$anonfun$main$7$$anonfun$apply$7.class
com/.../driver$$anonfun$main$7$$anonfun$apply$8.class
com/.../driver$$anonfun$main$7.class
com/.../driver$$anonfun$main$8$$anonfun$apply$10.class
com/.../driver$$anonfun$main$8$$anonfun$apply$4.class
com/.../driver$$anonfun$main$8$$anonfun$apply$5.class
com/.../driver$$anonfun$main$8$$anonfun$apply$6.class
com/.../driver$$anonfun$main$8$$anonfun$apply$7.class
com/.../driver$$anonfun$main$8$$anonfun$apply$8.class
com/.../driver$$anonfun$main$8$$anonfun$apply$9.class
com/.../driver$$anonfun$main$8.class
com/.../driver$$anonfun$main$9$$anonfun$apply$11.class
com/.../driver$$anonfun$main$9$$anonfun$apply$7.class
com/.../driver$$anonfun$main$9$$anonfun$apply$8.class
com/.../driver$$anonfun$main$9$$anonfun$apply$9.class
com/.../driver$$anonfun$main$9.class
com/.../driver$.class
com/.../driver.class
I do not face the error when I launch the job in local mode instead of yarn. What is causing the issue, and how can it be corrected?
Any help would be appreciated, thank you.
Looks like your jar or some of its dependencies aren't distributed to the worker nodes. In local mode it works because the jars are already in place. In yarn mode you need to build a fat jar with all dependencies, including the Hive and Spark libraries, in it.
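The question uses Maven, where the maven-shade-plugin is the usual way to build such a jar; as a sketch of the same idea with sbt and the sbt-assembly plugin (version numbers are placeholders for whatever your cluster runs):

// build.sbt (sketch, assuming the sbt-assembly plugin is enabled)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.0", // placeholder versions
  "org.apache.spark" %% "spark-hive" % "1.6.0"
)

// bundle everything into one jar that yarn can ship to the executors
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _                             => MergeStrategy.first
}

Then run sbt assembly and pass the resulting fat jar to spark-submit.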

ReactiveMongo/Play: `LastError: DatabaseException['<none>']`, while db still updates

Weird problem: while my Play application inserts/updates records in some MongoDB collections using ReactiveMongo, the operation seems to fail with a mysterious message, but the record does, actually, get inserted/updated.
More info:
Inserting into the problematic collections from the mongo console works well
Reading from all collections works well
Reading from and writing to other collections in the same db works well
Writing to the problematic collections used to work
Error message is:
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[LastError: DatabaseException['<none>']]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:280)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:206)
at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:100)
at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:99)
at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:344)
at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:343)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at play.api.libs.iteratee.Execution$trampoline$.execute(Execution.scala:70)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
Caused by: reactivemongo.api.commands.LastError: DatabaseException['<none>']
Using ReactiveMongo 0.11.14, Play 2.5.4, Scala 2.11.7, MongoDB 3.4.0.
Thanks!
UPDATE - The mystery thickens!
Based on #Yaroslav_Derman's answer, I added a .recover clause, like so:
collectionRef.flatMap( c =>
c.update( BSONDocument("_id" -> publicationWithId.id.get), publicationWithId.asInstanceOf[PublicationItem], upsert=true))
.map(wr => {
Logger.warn("Write Result: " + wr )
Logger.warn("wr.inError: " + wr.inError)
Logger.warn("*************")
publicationWithId
}).recover({
case de:DatabaseException => {
Logger.warn("DatabaseException: " + de.getMessage())
Logger.warn("Cause: " + de.getCause())
Logger.warn("Code: " + de.code)
publicationWithId
}
})
The recover clause does get called. Here's the log:
[info] application - Saving pub t3
[warn] application - *************
[warn] application - Saving publication Publication(Some(BSONObjectID("5848101d7263468d01ff390d")),t3,2016-12-07,desc,auth,filename,None)
[info] application - Resolving database...
[info] application - Resolving database...
[warn] application - DatabaseException: DatabaseException['<none>']
[warn] application - Cause: null
[warn] application - Code: None
So no cause, no code, message is "'<none>'", but still an error. What gives?
I tried to move to 0.12, but that caused some compilation errors across the app, plus I'm not sure that would solve the problem. So I'd like to understand what's wrong first.
UPDATE #2:
Migrated to reactive-mongo 0.12.0. Problem persists.
Problem solved by downgrading to MongoDB 3.2.8. It turns out ReactiveMongo 0.12.0 is not compatible with MongoDB 3.4.
Thanks everyone who looked into this.
For Play ReactiveMongo 0.12.0 you can do it like this:
def appsDb = reactiveMongoApi.database.map(_.collection[JSONCollection](DesktopApp.COLLECTION_NAME))

def save(id: String, user: User, body: JsValue) = {
  val version = (body \ "version").as[String]
  val app = DesktopApp(id, version, user)
  appsDb.flatMap(
    _.insert(app)
      .map(_ => app)
      .recover(processError)
  )
}

def processError[T]: PartialFunction[Throwable, T] = {
  case ex: DatabaseException if ex.code.exists(Set(10054, 10056, 10058, 10107, 13435, 13436)) =>
    // Custom exception which is processed in the error handler
    throw new AppException(ResponseCode.ALREADY_EXISTS, "Entity already exists")
  case ex: DatabaseException if ex.code.exists(Set(10057, 15845, 16550)) =>
    // Custom exception which is processed in the error handler
    throw new AppException(ResponseCode.ENTITY_NOT_FOUND, "Entity not found")
  case ex: Exception =>
    // Custom exception which is processed in the error handler
    throw new InternalServerErrorException(ex.getMessage)
}
You can also add logging in the processError method.
LastError was deprecated in 0.11 and replaced by WriteResult.
LastError does not, actually, mean an error; it can also represent a successful result. You need to check the inError property of the LastError object to see whether it is a real error. As I see it, the '<none>' error message is a good hint that this is not an error.
Here is the example "how it was in 0.10": http://reactivemongo.org/releases/0.10/documentation/tutorial/write-documents.html
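A sketch of that advice applied to the code from the question (collectionRef and publicationWithId as defined there; LastError comes from reactivemongo.api.commands, as the stack trace shows):

collectionRef.flatMap(c =>
  c.update(BSONDocument("_id" -> publicationWithId.id.get), publicationWithId.asInstanceOf[PublicationItem], upsert = true))
  .map(_ => publicationWithId)
  .recover {
    // a LastError whose inError is false is not a real failure, so keep the result
    case le: LastError if !le.inError => publicationWithId
  }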

Debugging a standalone jetty server - how to specify single threaded mode?

I have successfully created a standalone Scalatra/Jetty server, using the official instructions from Scalatra (http://www.scalatra.org/2.3/guides/deployment/standalone.html).
I am debugging it under Ensime, and would like to limit the number of threads handling messages to a single one, so that single-stepping through the servlet methods will be easier.
I used this code to achieve it:
package ...

import org.eclipse.jetty.server.Server
import org.eclipse.jetty.servlet.{DefaultServlet, ServletContextHandler}
import org.eclipse.jetty.webapp.WebAppContext
import org.scalatra.servlet.ScalatraListener
import org.eclipse.jetty.util.thread.QueuedThreadPool
import org.eclipse.jetty.server.ServerConnector

object JettyLauncher {
  def main(args: Array[String]) {
    val port =
      if (System.getenv("PORT") != null)
        System.getenv("PORT").toInt
      else
        4080

    // DEBUGGING MODE BEGINS
    val threadPool = new QueuedThreadPool()
    threadPool.setMaxThreads(8)
    val server = new Server(threadPool)
    val connector = new ServerConnector(server)
    connector.setPort(port)
    server.setConnectors(Array(connector))
    // DEBUGGING MODE ENDS

    val context = new WebAppContext()
    context setContextPath "/"
    context.setResourceBase("src/main/webapp")
    context.addEventListener(new ScalatraListener)
    context.addServlet(classOf[DefaultServlet], "/")

    server.setHandler(context)
    server.start
    server.join
  }
}
It works fine - except for one minor detail...
I can't tell Jetty to use 1 thread - the minimum value is 8!
If I do, this is what happens:
$ sbt assembly
...
$ java -jar ./target/scala-2.11/CurrentVersions-assembly-0.1.0-SNAPSHOT.jar
18:13:27.059 [main] INFO org.eclipse.jetty.util.log - Logging initialized #41ms
18:13:27.206 [main] INFO org.eclipse.jetty.server.Server - jetty-9.1.z-SNAPSHOT
18:13:27.220 [main] WARN o.e.j.u.component.AbstractLifeCycle - FAILED org.eclipse.jetty.server.Server#1ac539f: java.lang.IllegalStateException: Insufficient max threads in ThreadPool: max=1 < needed=8
java.lang.IllegalStateException: Insufficient max threads in ThreadPool: max=1 < needed=8
...which is why you see setMaxThreads(8) instead of setMaxThreads(1) in my code above.
Any ideas why this happens?
The reason is that the required size of the thread pool also depends on the number of connectors you have defined. If you look at the source code of the Jetty server, you'll see this:
// check size of thread pool
SizedThreadPool pool = getBean(SizedThreadPool.class);
int max = pool == null ? -1 : pool.getMaxThreads();
int selectors = 0;
int acceptors = 0;
if (mex.size() == 0)
{
    for (Connector connector : _connectors)
    {
        if (connector instanceof AbstractConnector)
            acceptors += ((AbstractConnector) connector).getAcceptors();
        if (connector instanceof ServerConnector)
            selectors += ((ServerConnector) connector).getSelectorManager().getSelectorCount();
    }
}
int needed = 1 + selectors + acceptors;
if (max > 0 && needed > max)
    throw new IllegalStateException(String.format("Insufficient threads: max=%d < needed(acceptors=%d + selectors=%d + request=1)", max, acceptors, selectors));
So even a single ServerConnector needs more than one thread: 1 request thread plus its acceptors and selectors. The default acceptor and selector counts scale with the number of CPU cores, which is likely where your needed=8 comes from.
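A sketch of a workaround along those lines: ServerConnector has a constructor that takes explicit acceptor and selector counts, so forcing both to 1 should bring the required pool size down to 3 (1 acceptor + 1 selector + 1 request thread), which is about as close to single-threaded request handling as Jetty allows:

// replaces the "DEBUGGING MODE" block above
val threadPool = new QueuedThreadPool(3, 3) // (maxThreads, minThreads)
val server = new Server(threadPool)
val connector = new ServerConnector(server, 1, 1) // (server, acceptors, selectors)
connector.setPort(port)
server.setConnectors(Array(connector))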

Exception using mongodb as infinispan cache store

I want to use MongoDB as a cache store for Infinispan, to persist the data evicted according to the eviction policy.
I am posting the snippet of code that is causing the exception, along with the exception itself:
ConfigurationBuilder config = new ConfigurationBuilder();
MongoDBCacheStore strgBuilder = new MongoDBCacheStore();
ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence()
    .addStore(MongoDBCacheStoreConfigurationBuilder.class)
    .host("localhost")
    .port(27017)
    .timeout(1500)
    .acknowledgment(0)
    .username("")
    .password("")
    .database("infinispan_cachestore")
    .collection("entries");
/* DefaultCacheManager manager = new DefaultCacheManager(b.build());
   Cache ch = manager.getCache();
   ch.put("username", "sogani"); */
final Configuration configcache = b.build();
MongoDBCacheStoreConfiguration store = (MongoDBCacheStoreConfiguration) configcache.persistence().stores().get(0);
The exception that I am getting is:
java.lang.NoSuchMethodException: org.infinispan.loaders.mongodb.configuration.MongoDBCacheStoreConfigurationBuilder.
Any pointer will be of great help.
Thanks.
The MongoDB cache store was not updated after the new persistence API was adopted in Infinispan. Try Infinispan 5.2.7.Final, maybe 5.3.0.Final, or look into the adaptor52x modules. Or, even better, try to reimplement it using the new CacheWriter interface and submit a PR; the existing code should provide you some guidelines.
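If you go the reimplementation route, a rough Scala skeleton of the idea against the Infinispan 6.x persistence SPI might look like the following (signatures paraphrased from the 6.x javadoc, so check them against the exact version you build on; the MongoDB client wiring is left as a stub):

import org.infinispan.marshall.core.MarshalledEntry
import org.infinispan.persistence.spi.{CacheWriter, InitializationContext}

class MongoDBCacheWriter[K, V] extends CacheWriter[K, V] {
  private var ctx: InitializationContext = _

  def init(ctx: InitializationContext): Unit = this.ctx = ctx
  def start(): Unit = () // open the MongoDB client here
  def stop(): Unit = ()  // close the MongoDB client here

  def write(entry: MarshalledEntry[_ <: K, _ <: V]): Unit = {
    // serialize entry.getKey / entry.getValue / entry.getMetadata and upsert
    // them into the MongoDB collection here
  }

  def delete(key: AnyRef): Boolean = {
    // remove the document stored under this key; return whether anything was deleted
    false
  }
}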