OrientDB inconsistent behavior calling reload()

I have encountered a strange problem. I load a Vertex and, at some point later, I call reload() to refresh its data and get an ORecordNotFoundException. The vertex exists, because get() retrieved it. Why does reload() fail?
All of this is executed within the same tx.
Even worse, sometimes it fails and sometimes it works. When I run the tests, sometimes all pass and sometimes one fails. All the tests are self-contained: each test creates an entity, stores it, dereferences it, gets a new instance, and checks that everything is fine.
Could it be because the tx is never closed? I call commit/rollback on the tx many times, but only call shutdown() at the end.
All of this is tested against OrientDB 3.0.7
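Roughly, the pattern looks like this (a minimal sketch against the OrientDB 3.0 API; the database, class and property names are illustrative):
import com.orientechnologies.orient.core.db.ODatabaseSession;
import com.orientechnologies.orient.core.db.ODatabaseType;
import com.orientechnologies.orient.core.db.OrientDB;
import com.orientechnologies.orient.core.db.OrientDBConfig;
import com.orientechnologies.orient.core.record.OVertex;

OrientDB orient = new OrientDB("embedded:./databases", OrientDBConfig.defaultConfig());
orient.createIfNotExists("test", ODatabaseType.MEMORY);
try (ODatabaseSession session = orient.open("test", "admin", "admin")) {
    session.begin();
    OVertex v = session.newVertex("V");
    v.setProperty("name", "alice");
    v.save();
    session.commit();

    session.begin();
    OVertex loaded = session.load(v.getIdentity()); // the get() succeeds
    loaded.reload(); // intermittently throws ORecordNotFoundException
    session.rollback();
}
orient.close();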

Related

NpgsqlConnection fails when database has been dropped and recreated

For an XUnit integration test automation project that runs against a PostgreSQL database, I have created a script that first drops and then recreates the database, so that every test can start with the same set of data as input. When I run the tests individually (one by one) through the test explorer, they all run fine. When I try to run them all in the same test run, it fails on the second test that is executed.
The structure of every test is:
initialize the new database using the script that drops, creates and fills it with data
run the test
open an NpgsqlConnection to the database
query the database and check whether the resulting content matches my expectations
The second time around, this causes an Npgsql.NpgsqlException: Exception while writing to stream.
It seems that when the connection is created for the second time, Npgsql sees it as a previously used connection and reuses it. But the database behind it has been dropped and recreated, so the pooled connection can't be used again.
If, for instance, I don't run the query on the first connection but only on the second one, it also works fine.
I hope someone can give me a good suggestion on how to deal with this. It is the first time I have used PostgreSQL in one of my projects. I could maybe use the Entity Framework data provider for PostgreSQL, but I will try asking this first...
I added Pooling=false to the connection string and now it works. I can now drop and recreate the database as often as I want in the same test, and simply reconnect to it from the C# code.
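For illustration, the connection string then looks something like this (host, database and credentials are placeholders):
Host=localhost;Port=5432;Database=testdb;Username=postgres;Password=secret;Pooling=false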

How can I roll back DAO tests using scala PlaySpec and Slick

I'm trying to flesh out my application's abstract DAO test harness to support rolling back any test modifications to the database. I know Slick supports transactions with db.run(<some DBIOAction>.transactionally), but that doesn't work as part of an abstract test class, since the DB actions need to be run inside the actual test method.
Currently, I'm attempting to wrap the test method with BeforeAndAfter's runTest method and to find some Slick method that lets me wrap the test execution in a transaction. That feels like the correct first step, but I'm struggling to figure out how to avoid interfering with regular test creation while still being able to roll back transactions (i.e. I don't want to have to manually add a DBIOAction.failure in every DB test that changes the DB state).
I've tried setting autocommit=false around the method, e.g.
db.run(
  SimpleJdbcOperation(_.connection.setAutoCommit(false)) andThen
  DBIOAction.successful(super.runTest) zip
  SimpleJdbcOperation(_.connection.rollback()))
but I think the connection pool is foiling that particular approach: reading the autocommit status inside the test method returns true, and the rollback doesn't do anything.
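For clarity, the pattern I'm after, sketched in plain JDBC terms (dataSource and runTestBody are illustrative placeholders; the Slick version would have to pin everything to one connection, e.g. via withPinnedSession):
import java.sql.Connection;
import javax.sql.DataSource;

Connection conn = dataSource.getConnection();
try {
    conn.setAutoCommit(false);   // open a transaction scope on this connection
    runTestBody(conn);           // the test's DB work, all on the SAME connection
    conn.rollback();             // discard everything the test changed
} finally {
    conn.setAutoCommit(true);
    conn.close();                // return the connection to the pool
}
The pool foils the db.run approach because each action may be executed on a different pooled connection, so the setAutoCommit and the rollback don't necessarily hit the connection the test actually used.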
Is there anything I can do here short of hacky (manual DBIOAction.failure()) or wasteful (drop and recreate table/schema after every test) solutions?
For now I'm going with https://stackoverflow.com/a/34953817/1676006, but I still feel like there should be a better way.

Entity Framework Code First - Model change breaks Seed

We've been using Entity Framework Code First 5 for a little while now, without major issue.
I've recently discovered that ANY change I make to my model (such as adding or removing a field) means that the Seed method no longer runs, leaving my database in an invalid state.
If I reverse the change, the seed method runs fine.
I have tried making changes to varying parts of my model, so it's not the specific change which is relevant.
Does anyone know how I can (a) debug what the specific issue is, or (b) have you come across this yourself and know how to fix it?
UPDATE: After the model change, no matter how many times I query the database, the Seed method doesn't run. However, I have found that if I manually run IISRESET and then re-execute the web service that issues the query, the Seed does run! Does anyone know why this would be the case, and why I suddenly need to reset IIS between the database initialization and the Seed executing?
Many thanks, Steve

Play Model save function isn't actually writing to the database

I have a Play model called "JobStatus" and it has just one property: a JobState enum (Running/NotRunning).
The class extends Model and is implemented as a singleton. You call its getInstance() method to get the only record in the underlying table.
I have a job that runs every month, and in the job I toggle the state of the JobStatus object back and forth at various times and call .save().
I've noticed it isn't actually saving.
When the job starts, its first lines of code are
JobStatus thisJobStatus = JobStatus.getInstance();
...// exit if already running
thisJobStatus.JobState = JobState.Running;
thisJobStatus.save();
Then, when the job is done, it changes the status back to NotRunning and saves again.
The issue is that when I look in the MySQL database, the actual record value never changes.
This causes a catastrophic failure: when other nodes check the state before running the job, they see it as NotRunning, so they all try to run the job as well.
So my clever scheme for managing job state is failing because the actual value isn't getting committed to the DB.
How do I force Play to write to the DB right away when I call .save() on a model?
Thanks
Josh
Try adding this to your JobStatus and call it after save():
public static void commit() {
    // commit the current transaction and immediately start a new one
    JobStatus.em().getTransaction().commit();
    JobStatus.em().getTransaction().begin();
    // synchronize and detach the persistence context
    JobStatus.em().flush();
    JobStatus.em().clear();
}
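Usage in the job would then be (sketch):
thisJobStatus.save();
JobStatus.commit();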
I suppose you want to mark your job as "running" pretty much as the first thing when the job starts? In that case, you shouldn't have any other ongoing database statements yet...
To commit your changes in the database immediately (instead of after the job has ended), add the following commands after the thisJobStatus.save(); method call:
JPA.em().flush();
JPA.em().getTransaction().commit();
Additionally, since you're using MySQL, you might want to lock the row immediately upon retrieval using the SELECT ... FOR UPDATE clause. (See the MySQL Reference Manual for more information.) Of course, you wouldn't want that in your getInstance() method itself, otherwise every fetch operation would lock the record.
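For example, with JPA 2 the lock can be taken when loading the record; a sketch (the entity id 1L is illustrative, and PESSIMISTIC_WRITE translates to SELECT ... FOR UPDATE on MySQL/InnoDB):
import javax.persistence.LockModeType;

JobStatus status = JPA.em().find(JobStatus.class, 1L, LockModeType.PESSIMISTIC_WRITE); // row stays locked until commit
status.JobState = JobState.Running;
status.save();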

Salesforce deployment error because of test class failure

We are encountering a deployment error caused by some test classes in which a batch Apex class is called. The error is:
"System.UnexpectedException: No more than one executeBatch can be called within a test method."
In our test class there are insert and update statements, which in turn call the batch Apex class from a trigger. We have also tried to limit the batch query by using the Test.isRunningTest() method, but we still get the same error.
The code works fine in the sandbox, and the error appears only at the time of deployment to production.
Also, the test classes causing the error previously worked fine in production.
Please provide some pointers/solutions for the above-mentioned error.
Thank you.
I would suggest the best approach is to ensure the trigger doesn't execute the batch when Test.isRunningTest() is true, and then to test the batch class with its own test method. I suspect your trigger is fired twice, so batch instances are created and run more than once.
Using a dedicated test method you can execute the batch with a limit on the query, and you should use the optional batch-size parameter to control the number of calls to execute(), i.e. if your limit is 50, but you do this:
Database.executeBatch(myBatchInstance, 25);
It'll still need to call the execute() method twice to cover all the records, and this is where you hit problems like the one you mentioned.