Neo4j - database shutdown - scala

I am using Neo4j embedded in my Scala project. I have been including the following piece of code in each and every function, before the beginning of the transaction:
ShutdownHookThread {
  shutdown(ds)
}
Do I need to include it in every function? What happens if I don't include it?

ShutdownHookThread registers a piece of code to be executed when your application is about to exit. You need to use it only once, somewhere in your app's bootstrap code, because there is no point in shutting down the database more than once.
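For illustration, a minimal sketch of registering the hook a single time at start-up, assuming the embedded GraphDatabaseFactory API (Neo4j 3.x); the store path and object names are illustrative:

import java.io.File
import org.neo4j.graphdb.factory.GraphDatabaseFactory
import scala.sys.ShutdownHookThread

object Bootstrap extends App {
  // Create the embedded database once at start-up (illustrative store path).
  val ds = new GraphDatabaseFactory().newEmbeddedDatabase(new File("data/graph.db"))

  // Register the shutdown hook once, right after the database is created.
  ShutdownHookThread {
    ds.shutdown()
  }

  // ... open transactions in your functions as usual; no per-function hook is needed.
}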

Related

How to test a change unit in Mongock with its multiple attributes/lifecycle methods?

We recently migrated from MongoBee to Mongock, and with Mongock version 5 the @ChangeLog and @ChangeSet annotations are deprecated. Writing the @ChangeUnit is easy enough, and the rollback methods are very helpful.
However, I'm unable to figure out how to write a test that simulates the migration in a test DB and validates the changes, as there are @BeforeExecution, @RollbackBeforeExecution, @Execution and @RollbackExecution lifecycle methods in a @ChangeUnit.
Earlier, I used to just call the method carrying the @ChangeSet annotation, like:
assertOriginalStructure();
someMigrationChangeLog.updateIndexOnSomething();
assertIndexUpdated();
Now, I'm unsure if there is a clean way to write the above test, as there is some logic in @BeforeExecution and also in @Execution. I know that individually calling the annotated methods will work, but I wanted to know if there is a way to just run one @ChangeUnit as a whole.
In the new version 5, the basic change is that a ChangeUnit holds the unit of execution. That is normally done in the method annotated with @Execution, so the first approach is just doing the same thing you are doing now, but calling the @Execution method:
assertOriginalStructure();
someMigrationChangeUnit.updateIndexOnSomething(); // annotated with @Execution
assertIndexUpdated();
However, your ChangeUnit can also provide @BeforeExecution, which is used to perform any action that cannot happen within the execution. For example, in a transactional MongoDB migration, DDL operations are not allowed inside a transaction, so they would be done in the @BeforeExecution method. So if your ChangeUnit has both @Execution and @BeforeExecution, you should do this:
assertOriginalStructure();
someMigrationChangeUnit.beforeExecution(); // annotated with @BeforeExecution
someMigrationChangeUnit.updateIndexOnSomething(); // annotated with @Execution
assertIndexUpdated();
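As a sketch of how such a change unit and test might fit together, assuming Mongock 5's io.mongock.api.annotations package and an injected MongoDatabase parameter; the class, method and id names are hypothetical, and Scala is used here purely for illustration (Mongock's annotations are plain Java annotations):

import io.mongock.api.annotations.{BeforeExecution, ChangeUnit, Execution, RollbackBeforeExecution, RollbackExecution}
import com.mongodb.client.MongoDatabase

@ChangeUnit(id = "update-index-on-something", order = "001")
class SomeMigrationChangeUnit {

  @BeforeExecution
  def beforeExecution(db: MongoDatabase): Unit = {
    // Non-transactional work (e.g. DDL/index changes) goes here.
  }

  @RollbackBeforeExecution
  def rollbackBeforeExecution(db: MongoDatabase): Unit = ()

  @Execution
  def updateIndexOnSomething(db: MongoDatabase): Unit = {
    // The transactional part of the migration goes here.
  }

  @RollbackExecution
  def rollbackExecution(db: MongoDatabase): Unit = ()
}

// In the test, call the lifecycle methods in the order Mongock would run them:
//   assertOriginalStructure()
//   changeUnit.beforeExecution(testDb)
//   changeUnit.updateIndexOnSomething(testDb)
//   assertIndexUpdated()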

Not calling Cluster.close() with the Cassandra Java driver causes application to be "zombie" and not exit

When my connection is open, the application won't exit. This causes some nasty problems for me (the code is highly concurrent and nested, using a shared session, so I don't know when each part is finished). Is there a way to make sure that the cluster doesn't "hang" the application?
For example here:
import com.datastax.driver.core.Cluster

object ZombieTest extends App {
  val session = Cluster.builder().addContactPoint("localhost").build().connect()
  // the app doesn't exit unless we call:
  session.getCluster.close() // won't exit unless this is called
}
In a slightly biased answer, you could look at https://github.com/outworkers/phantom instead of using the standard java driver.
You get scala.concurrent.Future, monix.eval.Task or even com.twitter.util.Future from a query automatically. You can choose between all three.
DB connection pools are better isolated inside the ContactPoint and Database abstraction layers, which have shutdown methods you can easily wire into your app lifecycle.
It's far faster than the Java driver, as the serialization and deserialization of types is wired in at compile time via more advanced macro mechanisms.
The short answer is that you need a lifecycle hook that calls session.close or session.closeAsync when you shut down everything else; that is how the driver is designed to work.
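A minimal sketch of that lifecycle approach with the plain DataStax Java driver (3.x Cluster/Session API); the contact point is illustrative, and in a real app you would call close from your shutdown/lifecycle hook rather than a try/finally in main:

import com.datastax.driver.core.Cluster

object NonZombieTest extends App {
  val cluster = Cluster.builder().addContactPoint("localhost").build()
  val session = cluster.connect()

  try {
    // ... run queries with the shared session ...
  } finally {
    session.close()  // release the session's resources
    cluster.close()  // stops the driver's internal threads so the JVM can exit
  }
}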

Connection not available in Play for Scala

I have the following configuration in application.conf for an in-memory database HSQLDB:
db {
  inmemory {
    jndiName = jndiInMemory
    driver = org.hsqldb.jdbc.JDBCDriver
    url = "jdbc:hsqldb:mem:inmemory"
  }
}
And I connect in a controller with the following statements:
import slick.jdbc.JdbcBackend.Database

val database = Database.forName("jndiInMemory")
val session = database.createSession
val conn = session.conn
// JDBC statements
The problem is that when the code runs several times, I get an exception in session.conn:
HikariPool-34 - Connection is not available, request timed out after 30000ms.
Since I'm using JNDI, I figured that the connections are reused. Do I have to drop the session after I finish using it? How do I fix this code?
It's hard to tell without looking at the actual code, but in general: when you create a database connection at the start of the application, you usually reuse it until the application ends, and then you should close the connection.
If you spawn a new connection every time you run a query, without closing the previous ones, you will hit the connection limit pretty fast.
An easy pattern is: create a session at the beginning and then use dependency injection to pass it to wherever you need to run queries.
BTW, I noticed that some setups, e.g. Slick, create connections statically (as in: store them as static class properties). So you need to create a handler that closes the session when the application exits. That runs OK... until you start the application several times over in sbt, which by default uses the same JVM to run itself and the spawned application. In such cases it is better to run things in a forked JVM: for tests I use Test / fork := true, and for run I use sbt-revolver, though I am not sure how that would play out with Play.
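For illustration, a minimal sketch (assuming Slick's JdbcBackend sits behind the jndiInMemory JNDI name) of making sure every session you open is closed again, so its connection goes back to the HikariCP pool instead of leaking; the object and table names are illustrative:

import slick.jdbc.JdbcBackend.Database

object Db {
  // Look up the Database once at application start-up and reuse it everywhere.
  lazy val database = Database.forName("jndiInMemory")

  // Borrow a session per unit of work and always release it.
  def withConnection[A](work: java.sql.Connection => A): A = {
    val session = database.createSession()
    try work(session.conn)
    finally session.close() // returns the underlying connection to the pool
  }
}

// Usage in a controller (hypothetical table name):
// val count = Db.withConnection { conn =>
//   val rs = conn.createStatement().executeQuery("SELECT count(*) FROM some_table")
//   rs.next(); rs.getInt(1)
// }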

What is the best way to tell MongoDB has started?

I have some automated tests that I run in order to test a MongoDB-related library. To do that, I start a mongod server with a temporary data directory, on an ephemeral port, connect to it, and run some tests.
This leads to a race condition, obviously. So in my first version of these tests, I paused for a fixed amount of time to make sure mongod had time to start before the tests began.
This was frustrating (and inefficient), so I decided to monitor mongod's standard output and wait for a line matching the regular expression:
/\[initandlisten\] waiting for connections/
This got it working, so then I prepared to circle back and find a more robust way to do it. I recalled that a Java library called "embedmongo" runs MongoDB-based tests and figured it must solve the same problem. It does this (from GitHub):
protected String successMessage() {
    return "waiting for connections on port";
}
... and uses that to figure out whether the process has started correctly.
So, are we right? Is examining the mongod process output log (is it ever internationalized? could the wording of the message ever change?) the very best way to do this? Or is there something more robust that we're both missing?
What we do in a similar scenario is (a sketch of the first two steps follows the list):
1. Try to connect to the configured port (simply new Socket(host, port)) in a loop, with a 10 ms delay, until it works. This ensures that the mongo client, which starts an internal monitoring thread, does not throw exceptions due to "connection refused".
2. Connect to the MongoDB instance and query something. This is important, as all mongo client objects are lazily initialized. (A simple listDatabaseNames() on the client is enough, but make sure to actually read the result.)
3. All the time, check that the process has not terminated.
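A minimal sketch of steps 1 and 2 in Scala, assuming the synchronous mongodb-driver-sync client; the host, port and timeout values are illustrative:

import java.net.Socket
import com.mongodb.client.MongoClients

object MongoReadiness {
  def waitForMongod(host: String, port: Int, timeoutMs: Long = 30000L): Unit = {
    val deadline = System.currentTimeMillis() + timeoutMs

    // Step 1: poll the port until a TCP connection succeeds.
    var connected = false
    while (!connected) {
      try {
        new Socket(host, port).close()
        connected = true
      } catch {
        case _: java.io.IOException =>
          if (System.currentTimeMillis() > deadline)
            sys.error(s"mongod did not accept connections within $timeoutMs ms")
          Thread.sleep(10)
      }
    }

    // Step 2: issue a real command and read its result, since client objects are lazy.
    val client = MongoClients.create(s"mongodb://$host:$port")
    try client.listDatabaseNames().into(new java.util.ArrayList[String]())
    finally client.close()

    // Step 3 (checking that the mongod process is still alive) is omitted in this sketch.
  }
}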
I just wrote a small untilMongod command which does just that, and which you can use in bash scripting: https://github.com/FGM/untilMongod
It includes a bash + Node.JS example use case.

Entity Framework SaveChanges "hangs" the program

My code is pretty simple:
Context.AddObject("EntitiesSetName", newObjectName);
Context.SaveChanges();
It worked fine, but only once, the first time. That time, I interrupted my program with Shift+F5 after SaveChanges() had been traced. It was a debugging session, so I manually removed the newly created record from the DB and ran the program again in debug mode. But it does not work anymore: it “hangs” when SaveChanges() is called.
Another strange thing that I see:
If, before AddObject() and SaveChanges() are called, I write something like:
var tempResult = (from mydbRecord in Context.EntitiesSetName
                  where mydbRecord.myKey == 123
                  select mydbRecord.myKey).Count();
// 123 is the key value of the record that is being created when the program hangs.
tempResult will have the value 1.
So it seems that the record was created (when the program hung) and now exists, but when I check the DB manually using other tools, it does not!
What am I doing wrong? Is it some kind of cache issue or something else?
EDIT:
I've found the source of the problem.
It was not an EF problem at all, but a problem with the tool I use to manage the database manually (Benthic).
My program ends up in some kind of deadlock with that tool (when I call SaveChanges()) while the tool is connected to the same DB.
So the problem is in the synchronization area, imho, and my question can be marked as solved.