For an xUnit integration test automation project that runs against a PostgreSQL database, I have created a script that first drops and then recreates the database, so that every test can start with the same set of input data. When I run the tests individually (one by one) through the Test Explorer, they all run fine. When I try to run them all in the same test run, it fails on the second test that is executed.
The structure of every test is as follows (a rough code sketch of this structure follows the list):
initialize the new database using the script that drops, creates and fills it with data
run the test
open a NpgsqlConnection to the database
query the database and check if the resulting content matches my expectations
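A minimal sketch of that structure, assuming xUnit and Npgsql; RunDatabaseSetupScript, MyImportService, the table name, and the expected count are placeholders, not the real project code:

using Npgsql;
using Xunit;

public class ImportTests
{
    // Placeholder connection string; the real one points at the test database.
    private const string ConnectionString =
        "Host=localhost;Database=testdb;Username=test;Password=test";

    [Fact]
    public void Import_WritesExpectedRows()
    {
        RunDatabaseSetupScript();                       // drop, recreate and fill the database

        new MyImportService(ConnectionString).Run();    // run the code under test

        // open a connection and check that the resulting content matches expectations
        using (var conn = new NpgsqlConnection(ConnectionString))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand("SELECT count(*) FROM some_table", conn))
            {
                Assert.Equal(42L, (long)cmd.ExecuteScalar());
            }
        }
    }
}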
The second time, this causes an Npgsql.NpgsqlException: "Exception while writing to stream".
It seems that when the connection is created for the second time, Npgsql sees it as a previously used connection and reuses it from the pool, but the database it points to has been dropped, so it can't be used again.
If, for instance, I don't run the query on the first connection and only run it on the second connection, everything also works fine.
I hope someone can give me a good suggestion on how to deal with this. It is the first time I have used PostgreSQL in one of my projects. I could maybe use the Entity Framework data provider for PostgreSQL, but I will try asking this first...
I added Pooling=false to the connection string and now it works. I can drop and recreate the database as often as I want in the same test run and simply reconnect to it from the C# code.
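For reference, the relevant part of the connection string looks roughly like this (host/database/credentials are placeholder values); an alternative that keeps pooling enabled is to flush Npgsql's connection pools right after the database has been dropped:

// Only Pooling=false is the relevant part; the rest are placeholder values.
const string connString =
    "Host=localhost;Database=testdb;Username=test;Password=test;Pooling=false";

// Alternative: keep pooling, but clear the pools after dropping the database
// so no stale physical connection can be handed out again.
NpgsqlConnection.ClearAllPools();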
I'm currently working on a Spring Batch application that should insert some log records when a certain type of error happens. The problem is that if the batch job fails, it automatically rolls back everything done, and that's perfect, but it also rolls back the error logs.
I need to achieve something similar to the AUTONOMOUS_TRANSACTION of Oracle while using PostgreSQL (14).
I've looked at DBLINK and it seems to be the only thing close to an alternative, but I have found some problems:
I need to avoid the connection string, because the database host/port/name changes between environments; is that possible? I need to persist the data in the same database, so technically I don't need to connect to any other database, just use the calling connection.
Is it possible to create a function/procedure that takes care of all of this so that I only have to call it from the Java side? Maybe that way I can somehow pass the connection data as a parameter, in case avoiding it entirely is not possible.
In a best case scenario I would be able to do something like:
dblink_exec(text sql);
which, without any connection arguments, would target the same database where it is being executed.
The problem is that I need this to be done without specifying any connection data. It will live inside a function on the executing database, in the same schema; that function will move from one environment to the next, and the code needs to be identical, so any host/user/password must be avoided since they change per environment. And since everything happens in the same database and schema, they can technically be inferred.
Thanks in advance!
At the moment I haven't tried anything; I'm trying to gather some information first.
I need to add SQL scripts to a new database created with the Code First approach. I couldn't find anything about this when googling for it. How is it done, please?
Background:
I need to add triggers to the database that need to run every time certain tables are updated (which is an external process not controlled by my application). So I need to install the triggers in the database when it is created.
Edit (09/16/22 3:34 pm)
Using a migration is not desired. Everything needs to be done in the code, which will already create the database if it is not present.
Edit (09/16/22 4:31 pm)
The script is not meant to be executed when the server starts. It's a trigger the db server should execute whenever a table gets changed (externally). So an ExecuteSqlRaw() call during startup of the server is not what I am looking for.
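A trigger is one-time DDL: once the CREATE FUNCTION / CREATE TRIGGER statements are stored in the database, the database server fires the trigger on every (external) change to the table without any involvement from the application. A rough sketch, assuming EF Core, of running such a script only when the database is actually created; the context type and script path are placeholders:

using System.IO;
using Microsoft.EntityFrameworkCore;

public static class DatabaseSetup
{
    public static void EnsureCreatedWithTriggers(MyDbContext context)
    {
        // EnsureCreated() returns true only when it actually had to create the
        // database, so the script below runs once, not on every server start.
        if (context.Database.EnsureCreated())
        {
            // Hypothetical script file containing the CREATE FUNCTION /
            // CREATE TRIGGER statements; once installed, the database server
            // executes the trigger whenever the table changes.
            var triggerSql = File.ReadAllText("Sql/create_triggers.sql");
            context.Database.ExecuteSqlRaw(triggerSql);
        }
    }
}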
We have a WebSphere cluster with four clones. Identical code runs on each of the clones. We have Quartz periodically kick off a job that runs the code.
The code tries to update a row in a table so that only one of the clones will be able to successfully update the table, and then that clone will run the rest of the job. Something like:
update <table> set status = 'RUNNING' where job_name = 'JOB1' and status = 'STOPPED'
We do not start a transaction when we execute the update statement.
What we see sometimes is that all four clones fail to update the table, and all get a lock timeout error (sql code -913).
We've also tried an alternative where we start a transaction, select to see if the row is marked as running, and if not, perform the update and commit; otherwise we roll back.
That had the same problem.
One solution we have not tried yet is to modify the select to be a "select for update", although from my googling, I have doubts as to whether that will help.
Any suggestions?
This ended up not being a problem (that's what I get for listening to someone without checking it out myself).
I tested this out in our development environment with two clones. One of the clones would see the -913 lock timeout error occasionally while the other clone would successfully update the table. Other than the ugly log message, everything worked as it should.
Usually, however, we would not get the -913 error, but rather a warning indicating that there was no row to update from one of the clones. Again, this behavior is fine.
So, as we originally thought, and Clockwork-Muse also suggests, using UPDATE statements in this manner to enforce a lock works just fine in DB2.
We've been using Entity Framework Code First 5 for a little while now, without major issue.
I've recently discovered that ANY change I make to my model (such as adding or removing a field) means that the Seed method no longer runs, leaving my database in an invalid state.
If I reverse the change, the seed method runs fine.
I have tried making changes to varying parts of my model, so it's not the specific change which is relevant.
Does anyone know how I can debug what the specific issue is, or has anyone come across this themselves and knows how to fix it?
UPDATE: After the model change, however many times I query the database, it doesn't run the Seed. However, I have found that if I manually run IISRESET and then re-execute the web service which executes the query, it does then run the Seed! Does anyone know why this would be the case, and why I suddenly need to reset IIS between the database initialization and the Seed executing?
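For reference, this is the shape of the initializer/Seed wiring in question and a way to force initialization to run eagerly instead of lazily on the first query. The context, entity, and initializer names below are placeholders, assuming an EF 5 DropCreateDatabaseIfModelChanges-style setup:

using System.Data.Entity;

// Placeholder names throughout; the point is the standard EF 5 Code First
// initializer whose Seed runs when the model has changed.
public class MyInitializer : DropCreateDatabaseIfModelChanges<MyContext>
{
    protected override void Seed(MyContext context)
    {
        context.Settings.Add(new Setting { Name = "Default" });
        context.SaveChanges();
    }
}

public static class EfBootstrap
{
    // Called once from Application_Start in Global.asax.
    public static void InitializeDatabase()
    {
        Database.SetInitializer(new MyInitializer());
        using (var context = new MyContext())
        {
            // Forcing initialization here makes the initializer (and its Seed)
            // run at a predictable time instead of lazily on the first query
            // after an app-pool recycle.
            context.Database.Initialize(force: true);
        }
    }
}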
Many thanks Steve
I am currently using Entity Framework 4.2 with Code First. I currently have a Windows 2008 application server and a database server running on Amazon EC2. The application server has a Windows Service installed that runs once per day. The service executes the following code:
// returns between 2000-4000 records
var users = userRepository.GetSomeUsers();
// do some work
foreach (var user in users)
{
    var userProcessed = new UserProcessed { User = user };
    userProcessedRepository.Add(userProcessed);
}
// Calls SaveChanges() on DbContext
unitOfWork.Commit();
This code takes a few minutes to run. It also maxes out the CPU on the application server. I have tried the following measures:
Removed the unitOfWork.Commit() call to see whether the problem is network-related when the application server talks to the database. This did not change the outcome.
Changed my application server from a medium instance to a high CPU instance on Amazon to see if it is resource related. This caused the server not to max out the CPU anymore and the execution time improved slightly. However, the execution time was still a few minutes.
As a test, I modified the above code to run three times to see how the execution time of the second and third loops compared when using the same DbContext. Every consecutive loop took longer to run than the previous one, but that could be related to using the same DbContext.
Am I missing something? Is it really possible that something as simple as this takes minutes to run? Even if I don't commit to the database after each loop? Is there a way to speed this up?
Entity Framework (as it stands) isn't really well suited to this kind of bulk operation. Are you able to use one of the bulk insert methods with EC2? Otherwise, you might find that hand-coding the T-SQL INSERT statements is significantly faster. If performance is important then that probably outweighs the benefits of using EF.
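As an illustration of the bulk-insert route (assuming the target database is SQL Server; the table and column names here are made up for the example), SqlBulkCopy sends the whole batch in one operation instead of one INSERT per entity and bypasses EF change tracking entirely:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class UserProcessedBulkInserter
{
    // Hypothetical table/column names for illustration only.
    public static void Insert(string connectionString, IEnumerable<int> userIds)
    {
        var table = new DataTable();
        table.Columns.Add("UserId", typeof(int));
        table.Columns.Add("ProcessedOn", typeof(DateTime));

        foreach (var id in userIds)
            table.Rows.Add(id, DateTime.UtcNow);

        using (var bulkCopy = new SqlBulkCopy(connectionString))
        {
            bulkCopy.DestinationTableName = "dbo.UserProcessed";
            bulkCopy.WriteToServer(table);   // one bulk operation instead of thousands of INSERTs
        }
    }
}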
My guess is that your ObjectContext is accumulating a lot of entity instances. SaveChanges seems to have a phase whose running time is linear in the number of entities loaded, which is likely why each loop takes longer and longer.
A way to resolve this is to use multiple, smaller ObjectContexts to get rid of old entity instances.
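A rough sketch of that idea, with illustrative names and batch size (MyDbContext and its Users/UserProcessed sets are assumptions, not the original code): process the users in pages and give each page its own context, so the change tracker never holds more than one batch of entities.

const int batchSize = 500;   // illustrative

for (var page = 0; ; page++)
{
    using (var context = new MyDbContext())
    {
        var users = context.Users
            .OrderBy(u => u.Id)
            .Skip(page * batchSize)
            .Take(batchSize)
            .ToList();
        if (users.Count == 0) break;

        foreach (var user in users)
            context.UserProcessed.Add(new UserProcessed { User = user });

        context.SaveChanges();   // each context only ever tracks one small batch
    }
}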