Exception using MongoDB as Infinispan cache store

I want to use MongoDB as a cache store for Infinispan, to persist the data evicted according to the eviction policy.
I am posting the snippet of code that is causing the exception, along with the exception itself.
ConfigurationBuilder config = new ConfigurationBuilder();
MongoDBCacheStore strgBuilder = new MongoDBCacheStore();
ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence()
    .addStore(MongoDBCacheStoreConfigurationBuilder.class)
    .host("localhost")
    .port(27017)
    .timeout(1500)
    .acknowledgment(0)
    .username("")
    .password("")
    .database("infinispan_cachestore")
    .collection("entries");
/* DefaultCacheManager manager = new DefaultCacheManager(b.build());
   Cache ch = manager.getCache();
   ch.put("username", "sogani"); */
final Configuration configcache = b.build();
MongoDBCacheStoreConfiguration store = (MongoDBCacheStoreConfiguration) configcache.persistence().stores().get(0);
The exception that I am getting is:
java.lang.NoSuchMethodException: org.infinispan.loaders.mongodb.configuration.MongoDBCacheStoreConfigurationBuilder.
Any pointer will be of great help.
Thanks.

The MongoDB cache store was not updated after the new persistence API was adopted in Infinispan. Try Infinispan 5.2.7.Final, maybe 5.3.0.Final, or look into the adaptor52x code. Or, even better, try to reimplement it using the new CacheWriter interface and issue a PR - the existing code should give you some guidelines.
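If you go the reimplementation route, a minimal skeleton might look like the sketch below. This is only an illustration, assuming the Infinispan 6.x persistence SPI (org.infinispan.persistence.spi.CacheWriter, with MarshalledEntry in org.infinispan.marshall.core) and the legacy MongoDB Java driver; the class name, connection details, and serialization are placeholders, not the actual store implementation.

import org.infinispan.marshall.core.MarshalledEntry;
import org.infinispan.persistence.spi.CacheWriter;
import org.infinispan.persistence.spi.InitializationContext;

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class MongoDBCacheWriter<K, V> implements CacheWriter<K, V> {

    private InitializationContext ctx;
    private MongoClient client;
    private DBCollection collection;

    @Override
    public void init(InitializationContext ctx) {
        // Called before start(); keeps a handle on the context (marshaller, configuration, ...)
        this.ctx = ctx;
    }

    @Override
    public void start() {
        // Hypothetical connection settings; a real store would read them from
        // a StoreConfiguration built by its ConfigurationBuilder.
        client = new MongoClient("localhost", 27017);
        collection = client.getDB("infinispan_cachestore").getCollection("entries");
    }

    @Override
    public void write(MarshalledEntry<? extends K, ? extends V> entry) {
        // Persist the evicted/passivated entry; a real store would serialize the
        // key and value with the marshaller from the context instead of toString().
        BasicDBObject doc = new BasicDBObject("_id", entry.getKey().toString())
                .append("value", entry.getValue().toString());
        collection.save(doc);
    }

    @Override
    public boolean delete(Object key) {
        return collection.remove(new BasicDBObject("_id", key.toString())).getN() > 0;
    }

    @Override
    public void stop() {
        if (client != null) {
            client.close();
        }
    }
}

A complete store would also implement CacheLoader so evicted entries can be read back, and would expose its host/port/database settings through a store configuration and builder pair, as the original MongoDBCacheStoreConfigurationBuilder does.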

Related

Intermittent DB connection timeout in .NET 6 Console Application connecting to Azure SQL

We have a .NET Core console application accessing Azure SQL (Gen5, 4 vCores), deployed as a web job in Azure.
We recently upgraded our small console application to EF 6 (6.0.11).
For quite some time, the application has intermittently been throwing the below exception for READ operations (highlighted in the code below):
Microsoft.Data.SqlClient.SqlException (0x80131904): A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The wait operation timed out.)
We are clueless about the root cause of this issue. Any hints on where to start looking for the root cause?
Any pointer would be highly appreciated.
NOTE: The connection string has the following settings in Azure:
"ConnectionStrings": { "DBContext": "Server=Trusted_Connection=False;Encrypt=False;" }
Overall, the code looks something like below:
var config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .Build();
var builder = new SqlConnectionStringBuilder(config.GetConnectionString("DBContext"));
builder.Password = "";
builder.UserID = "";
builder.DataSource = "";
builder.InitialCatalog = "";
string _connection = builder.ConnectionString;
var sp = new ServiceCollection()
    .AddDbContext<DBContext>(x => x.UseSqlServer(_connection, providerOptions => providerOptions.EnableRetryOnFailure()))
    .BuildServiceProvider();
var db = sp.GetService<DBContext>();
lock (db)
{
    var NewTriggers = db.Triggers.Where(x => x.IsSubmitted == false && x.Error == null).OrderBy(x => x.CreateOn).ToList();
}
We tried migrating from EF 3.1 to EF 6.0.11. We were expecting a smooth transition.

Debugging a standalone jetty server - how to specify single threaded mode?

I have successfully created a standalone Scalatra / Jetty server, using the official instructions from Scalatra (http://www.scalatra.org/2.3/guides/deployment/standalone.html).
I am debugging it under Ensime, and I would like to limit the number of threads handling messages to a single one, so that single-stepping through the servlet methods will be easier.
I used this code to achieve it:
package ...

import org.eclipse.jetty.server.Server
import org.eclipse.jetty.servlet.{DefaultServlet, ServletContextHandler}
import org.eclipse.jetty.webapp.WebAppContext
import org.scalatra.servlet.ScalatraListener
import org.eclipse.jetty.util.thread.QueuedThreadPool
import org.eclipse.jetty.server.ServerConnector

object JettyLauncher {
  def main(args: Array[String]) {
    val port =
      if (System.getenv("PORT") != null)
        System.getenv("PORT").toInt
      else
        4080

    // DEBUGGING MODE BEGINS
    val threadPool = new QueuedThreadPool()
    threadPool.setMaxThreads(8)
    val server = new Server(threadPool)
    val connector = new ServerConnector(server)
    connector.setPort(port)
    server.setConnectors(Array(connector))
    // DEBUGGING MODE ENDS

    val context = new WebAppContext()
    context setContextPath "/"
    context.setResourceBase("src/main/webapp")
    context.addEventListener(new ScalatraListener)
    context.addServlet(classOf[DefaultServlet], "/")

    server.setHandler(context)
    server.start
    server.join
  }
}
It works fine - except for one minor detail...
I can't tell Jetty to use 1 thread - the minimum value is 8!
If I do, this is what happens:
$ sbt assembly
...
$ java -jar ./target/scala-2.11/CurrentVersions-assembly-0.1.0-SNAPSHOT.jar
18:13:27.059 [main] INFO org.eclipse.jetty.util.log - Logging initialized #41ms
18:13:27.206 [main] INFO org.eclipse.jetty.server.Server - jetty-9.1.z-SNAPSHOT
18:13:27.220 [main] WARN o.e.j.u.component.AbstractLifeCycle - FAILED org.eclipse.jetty.server.Server#1ac539f: java.lang.IllegalStateException: Insufficient max threads in ThreadPool: max=1 < needed=8
java.lang.IllegalStateException: Insufficient max threads in ThreadPool: max=1 < needed=8
...which is why you see setMaxThreads(8) instead of setMaxThreads(1) in my code above.
Any ideas why this happens?
The reason is that the size of the thread pool also depends on the number of connectors you have defined. If you look at the source code of the Jetty server, you'll see this:
// check size of thread pool
SizedThreadPool pool = getBean(SizedThreadPool.class);
int max=pool==null?-1:pool.getMaxThreads();
int selectors=0;
int acceptors=0;

if (mex.size()==0)
{
    for (Connector connector : _connectors)
    {
        if (connector instanceof AbstractConnector)
            acceptors+=((AbstractConnector)connector).getAcceptors();

        if (connector instanceof ServerConnector)
            selectors+=((ServerConnector)connector).getSelectorManager().getSelectorCount();
    }
}

int needed=1+selectors+acceptors;
if (max>0 && needed>max)
    throw new IllegalStateException(String.format("Insufficient threads: max=%d < needed(acceptors=%d + selectors=%d + request=1)",max,acceptors,selectors));
So the minimum with a single ServerConnector is 2. It looks like you've got a couple of other default connectors or selectors running.
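If you just want as few threads as possible for debugging, you can also shrink the connector itself rather than only the pool. Below is a hedged sketch in plain Java (assuming Jetty 9.x, where ServerConnector has a (Server, acceptors, selectors) constructor; the port and pool size are illustrative), and the same calls map one-to-one onto the Scala launcher above:

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class SmallPoolLauncher {
    public static void main(String[] args) throws Exception {
        // Budget: 1 acceptor + 1 selector + 1 for the server + 1 request thread
        QueuedThreadPool threadPool = new QueuedThreadPool();
        threadPool.setMaxThreads(4);

        Server server = new Server(threadPool);

        // Pin the connector to one acceptor and one selector so the
        // "needed" count computed in the snippet above stays minimal.
        ServerConnector connector = new ServerConnector(server, 1, 1);
        connector.setPort(4080);
        server.setConnectors(new ServerConnector[] { connector });

        server.start();
        server.join();
    }
}

With only one thread left over for requests, single-stepping through the servlet is effectively serialized anyway; a plain breakpoint is usually simpler than trying to push maxThreads all the way down to 1.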

Mongoid4 GridFS connection failure

I'm working on Rails 4, Mongoid 4 and GridFS. I'm not able to connect to the GridFS filesystem.
class GridfsController < ApplicationController
  def serve
    gridfs_path = env["PATH_INFO"].gsub("/uploads/", "")
    begin
      gridfs_file = Mongo::GridFileSystem.new(Mongo::DB.new('database_name', Mongo::Connection.new('localhost'))).open(gridfs_path, 'r')
      self.response_body = gridfs_file.read
      self.content_type = gridfs_file.content_type
    rescue Exception => e
      self.status = :file_not_found
      self.content_type = 'text/plain'
      self.response_body = ''
      raise e
    end
  end
end
I am getting this error:
NameError (uninitialized constant GridfsController::Mongo):
app/controllers/gridfs_controller.rb:7:in `serve'
Mongoid doesn't use the "official" Ruby driver to talk to MongoDB and that's where Mongo::GridFileSystem comes from. Mongoid uses Moped to talk to MongoDB and Moped doesn't know anything about GridFS.
AFAIK the usual GridFS solution is to use mongoid-grid_fs to talk to GridFS:
self.response_body = Mongoid::GridFs[gridfs_path].data
or if you have the id instead of the path:
self.response_body = Mongoid::GridFs.get(gridfs_id).data
There is an implementation of the GridFS spec for the Moped driver here: moped-gridfs.
It's better than loading two drivers (moped and mongo-ruby-driver).

Neo4j embedded online backup from Java

I am using Neo4j (embedded) Enterprise Edition 1.9.4 along with the Scala-Neo4j wrapper in my project. I tried to back up the Neo4j data from Java like below:
def backup_data() {
  val backupPath: File = new File("D:/neo4j-enterprise-1.9.4/data/backup/")
  val backup = OnlineBackup.from("127.0.0.1")
  if (backupPath.list().length > 0) {
    backup.incremental(backupPath.getPath())
  } else {
    backup.full(backupPath.getPath())
  }
}
It is working fine for the full backup, but the incremental backup part is throwing a NullPointerException.
Where did I go wrong?
EDIT
Building the GraphDatabase instance through the Scala-Neo4j wrapper:
class MyNeo4jClass extends SomethingClass with Neo4jWrapper with EmbeddedGraphDatabaseServiceProvider {
  def neo4jStoreDir = "/tmp/temp-neo-test"
  . . .
}
Stacktrace
Exception in thread "main" java.lang.NullPointerException
at org.neo4j.consistency.checking.OwnerChain$3.checkReference(OwnerChain.java:111)
at org.neo4j.consistency.checking.OwnerChain$3.checkReference(OwnerChain.java:106)
at org.neo4j.consistency.report.ConsistencyReporter$DiffReportHandler.checkReference(ConsistencyReporter.java:330)
at org.neo4j.consistency.report.ConsistencyReporter.dispatchReference(ConsistencyReporter.java:109)
at org.neo4j.consistency.report.PendingReferenceCheck.checkReference(PendingReferenceCheck.java:50)
at org.neo4j.consistency.store.DirectRecordReference.dispatch(DirectRecordReference.java:39)
at org.neo4j.consistency.report.ConsistencyReporter$ReportInvocationHandler.forReference(ConsistencyReporter.java:236)
at org.neo4j.consistency.report.ConsistencyReporter$ReportInvocationHandler.dispatchForReference(ConsistencyReporter.java:228)
at org.neo4j.consistency.report.ConsistencyReporter$ReportInvocationHandler.invoke(ConsistencyReporter.java:192)
at $Proxy17.forReference(Unknown Source)
at org.neo4j.consistency.checking.OwnerChain.check(OwnerChain.java:143)
at org.neo4j.consistency.checking.PropertyRecordCheck.checkChange(PropertyRecordCheck.java:57)
at org.neo4j.consistency.checking.PropertyRecordCheck.checkChange(PropertyRecordCheck.java:35)
at org.neo4j.consistency.report.ConsistencyReporter.dispatchChange(ConsistencyReporter.java:101)
at org.neo4j.consistency.report.ConsistencyReporter.forPropertyChange(ConsistencyReporter.java:382)
at org.neo4j.consistency.checking.incremental.StoreProcessor.checkProperty(StoreProcessor.java:61)
at org.neo4j.consistency.checking.AbstractStoreProcessor.processProperty(AbstractStoreProcessor.java:95)
at org.neo4j.consistency.store.DiffRecordStore$DispatchProcessor.processProperty(DiffRecordStore.java:207)
at org.neo4j.kernel.impl.nioneo.store.PropertyStore.accept(PropertyStore.java:83)
at org.neo4j.kernel.impl.nioneo.store.PropertyStore.accept(PropertyStore.java:43)
at org.neo4j.consistency.store.DiffRecordStore.accept(DiffRecordStore.java:159)
at org.neo4j.kernel.impl.nioneo.store.RecordStore$Processor.applyById(RecordStore.java:180)
at org.neo4j.consistency.store.DiffStore.apply(DiffStore.java:73)
at org.neo4j.kernel.impl.nioneo.store.StoreAccess.applyToAll(StoreAccess.java:174)
at org.neo4j.consistency.checking.incremental.IncrementalDiffCheck.execute(IncrementalDiffCheck.java:43)
at org.neo4j.consistency.checking.incremental.DiffCheck.check(DiffCheck.java:39)
at org.neo4j.consistency.checking.incremental.intercept.CheckingTransactionInterceptor.complete(CheckingTransactionInterceptor.java:160)
at org.neo4j.kernel.impl.transaction.xaframework.InterceptingXaLogicalLog$1.intercept(InterceptingXaLogicalLog.java:79)
at org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog$LogDeserializer.readAndWriteAndApplyEntry(XaLogicalLog.java:1120)
at org.neo4j.kernel.impl.transaction.xaframework.XaLogicalLog.applyTransaction(XaLogicalLog.java:1292)
at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.applyCommittedTransaction(XaResourceManager.java:766)
at org.neo4j.kernel.impl.transaction.xaframework.XaDataSource.applyCommittedTransaction(XaDataSource.java:246)
at org.neo4j.com.ServerUtil.applyReceivedTransactions(ServerUtil.java:423)
at org.neo4j.backup.BackupService.unpackResponse(BackupService.java:453)
at org.neo4j.backup.BackupService.incrementalWithContext(BackupService.java:388)
at org.neo4j.backup.BackupService.doIncrementalBackup(BackupService.java:286)
at org.neo4j.backup.BackupService.doIncrementalBackup(BackupService.java:273)
at org.neo4j.backup.OnlineBackup.incremental(OnlineBackup.java:147)
at Saddahaq.User_node$.backup_data(User_node.scala:1637)
at Saddahaq.User_node$.main(User_node.scala:2461)
at Saddahaq.User_node.main(User_node.scala)
After the backup is taken, the backup target is checked for consistency. The incremental version of the consistency checker currently suffers from a bug that leads to the observed NPE.
Workaround: either always take full backups with backup.full, or skip consistency checking on incremental backups by using
backup.incremental(backupPath.getPath(), false);
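Applied to the backup_data logic from the question, the workaround could look like the sketch below (written in Java for illustration; the class and method names are made up, and the host and path are the ones from the question):

import java.io.File;

import org.neo4j.backup.OnlineBackup;

public class BackupRunner {
    public static void backupData() {
        File backupPath = new File("D:/neo4j-enterprise-1.9.4/data/backup/");
        OnlineBackup backup = OnlineBackup.from("127.0.0.1");

        String[] existing = backupPath.list();
        if (existing != null && existing.length > 0) {
            // 'false' disables the consistency check whose incremental
            // variant currently triggers the NullPointerException
            backup.incremental(backupPath.getPath(), false);
        } else {
            backup.full(backupPath.getPath());
        }
    }
}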

Can't bind MongoDB service on Appfog

Hi, I'm trying to bind the MongoDB service to my Express.js app on Appfog.
I have a config file like this:
config.js
var config = {}
config.dev = {};
config.prod = {};
//DEV
config.dev.host = "localhost";
config.dev.port = 3000;
config.dev.mdbhost = "localhost";
config.dev.mdbport = 27017;
config.dev.db = "detysi";
//PROD
config.prod.service_type = "mongo-1.8";
config.prod.json = process.env.VCAP_SERVICES ? JSON.parse(process.env.VCAP_SERVICES) : '';
config.prod.credentials = process.env.VCAP_SERVICES ? config.prod.json[config.prod.service_type][0]["credentials"] : null;
config.prod.mdbhost = config.prod.credentials["host"];
config.prod.mdbport = config.prod.credentials["port"];
config.prod.db = config.prod.credentials["db"];
config.prod.port = process.env.VCAP_APP_PORT || process.env.PORT;
module.exports = config;
And this is my MongoDB configuration, depending on the environment:
app.js
if (process.env.VCAP_SERVICES) {
    server = new Server(config.prod.mdbhost, config.prod.mdbport, { auto_reconnect: true });
    db = new Db(config.prod.db, server);
} else {
    server = new Server(config.dev.mdbhost, config.dev.mdbport, { auto_reconnect: true });
    db = new Db(config.dev.db, server);
}
I bind the service manually from https://console.appfog.com; my app is using the AWS Virginia infra. I also use the MongoHQ add-on to create one collection with two documents.
When I go to the Windows console and run af update myapp, it throws me the following error:
Cannot read property '0' of undefined
This is because process.env.VCAP_SERVICES is undefined.
I was investigating that, and it may be that my MongoDB service is incorrectly bound.
After that I tried to bind the MongoDB service from the Windows console like below:
af bind-service mongodb myapp
But it throws me the following error:
Service mongodb and App myapp are not on the same infra
At this point I don't know what I can do.
I had the same problem. What fixed it for me:
Go to console > services > (re)start the mongodb service.
Run the same command again.
Push again.
It should work.