EF Core 6 code first: Add triggers to new database? - entity-framework-core

I need to add SQL scripts to a new database created with the code-first approach. I couldn't find anything about that when googling for it. How is it done, please?
Background:
I need to add triggers to the database that must run every time certain tables are updated (by an external process not controlled by my application). So I need to install the triggers in the database on its creation.
Edit (09/16/22 3:34 pm)
Using a migration is not desired. Everything needs to be done in the code, which already creates the database if it is not present.
Edit (09/16/22 4:31 pm)
The script is not meant to be executed when the server starts. It's a trigger the database server should execute whenever a table gets changed (externally). So an ExecuteSqlRaw() call during startup of the server is not what I am looking for.
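For reference, one way to do this purely in code (no migration) is to key off the return value of EnsureCreated(): it returns true only when the database was actually created, so the trigger DDL is installed exactly once at creation time, and from then on the database server itself fires the trigger on every table change — nothing trigger-related runs at application startup. A minimal sketch, assuming a hypothetical MyDbContext with an ExternalEvents table; the trigger body is SQL Server syntax and purely illustrative:

```csharp
// Sketch: install a trigger right after code-first database creation.
// MyDbContext, ExternalEvents and trg_external_events_audit are placeholder
// names; adapt the DDL to your schema and database engine.
using Microsoft.EntityFrameworkCore;

public static class DatabaseInitializer
{
    public static void Initialize(MyDbContext context)
    {
        // EnsureCreated() returns true only when it actually created the
        // database, so the trigger is installed once, not on every startup.
        if (context.Database.EnsureCreated())
        {
            context.Database.ExecuteSqlRaw(@"
                CREATE TRIGGER trg_external_events_audit
                ON ExternalEvents
                AFTER UPDATE
                AS
                BEGIN
                    SET NOCOUNT ON;
                    -- react to the external update here
                END");
        }
    }
}
```

Note that ExecuteSqlRaw here only installs the trigger; executing it on each table change is done by the database server, which is exactly the behaviour asked for.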

Related

NpgsqlConnection fails when database has been dropped and recreated

For an XUnit integration test automation project that runs against a PostgreSQL database, I have created a script that first drops and then recreates the database, so that every test can start with the same set of input data. When I run the tests individually (one by one) through the test explorer, they all run fine. When I try to run them all in the same test run, it fails on the second test that is being executed.
The structure of every test is:
- initialize the new database using the script that drops, creates and fills it with data
- run the test
- open an NpgsqlConnection to the database
- query the database and check whether the resulting content matches my expectations
The second time, this causes an Npgsql.NpgsqlException: Exception while writing to stream.
It seems that when the connection is created for the second time, Npgsql sees it as a previously used connection and reuses it. But the database has been dropped and recreated, so that connection can't be used again.
If, for instance, I don't run the query on the first connection but only on the second one, it also works fine.
I hope someone can give me a good suggestion on how to deal with this. It is the first time I have used PostgreSQL in one of my projects. I could maybe use the Entity Framework data provider for PostgreSQL, but I will try asking this first...
I added Pooling=false to the connection string and now it works. I can drop and recreate the database as often as I want in the same test run, and simply reconnect to it from the C# code.
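For anyone hitting the same error, the fix is a single connection-string flag. A sketch (host and credentials are placeholders); an alternative that keeps pooling enabled is to clear the pool right after the drop/recreate script:

```csharp
// Sketch: Pooling=false forces a fresh physical connection each time, so a
// dropped-and-recreated database never hands back a stale pooled connection.
// Host and credentials below are placeholders.
using Npgsql;

var connString = "Host=localhost;Database=testdb;Username=test;Password=secret;Pooling=false";
using var conn = new NpgsqlConnection(connString);
conn.Open();

// Alternative that keeps the pool: invalidate it after dropping the database.
// NpgsqlConnection.ClearAllPools();
```

ClearAllPools() (or ClearPool(conn) for a single connection string) lets the rest of the test suite keep the performance benefit of pooling while still discarding connections to the dropped database.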

Keeping database table clean using Entity Framework

I am using Entity Framework and I have a table that I use to record events generated by a 3rd party. These events are valid for 3 days, so I would like to clean out any events older than 3 days to keep my database table lean. Is there any way I can do this, ideally with an approach that won't cause performance issues during cleanup?
To do the above, there are some options:
1) Define a stored procedure mapped in your EF model, and use a Quartz trigger to execute it on a schedule.
https://www.quartz-scheduler.net
2) A SQL Server Agent job which runs every day at the least busy time and removes your rows.
https://learn.microsoft.com/en-us/sql/ssms/agent/schedule-a-job?view=sql-server-2017
If the scheduled task is required only for cleanup and nothing else, I recommend you go with option 2.
First of all, make sure the 3rd party writes a timestamp on every record; that way you will be able to track how old a record is.
Then create a script that deletes all records older than 3 days:
DELETE FROM yourTable WHERE DATEDIFF(day, getdate(), thatColumn) < -3
Now create a scheduled job in SQL Server Management Studio:
In SQL Server Management Studio, navigate to the server, then expand the SQL Server Agent item, and finally the Jobs folder to view, edit and add scheduled jobs.
Set the script to run once every day, or however often you please :)
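Option 1 above can be sketched with Quartz.NET roughly as follows. This is a minimal sketch, not a definitive implementation: the job body, job identity and the 24-hour interval are all placeholders to be adapted.

```csharp
// Sketch of option 1: a Quartz.NET job that runs the cleanup on a schedule.
// The job body and the schedule are illustrative placeholders.
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public class CleanupJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // Call the mapped stored procedure / run the DELETE statement here.
        return Task.CompletedTask;
    }
}

public static class SchedulerSetup
{
    public static async Task StartAsync()
    {
        var scheduler = await StdSchedulerFactory.GetDefaultScheduler();
        await scheduler.Start();

        var job = JobBuilder.Create<CleanupJob>()
            .WithIdentity("cleanup")
            .Build();

        var trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInHours(24).RepeatForever())
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}
```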

Processing a row externally that fired a trigger

I'm working on a PostgreSQL 9.3-database on an Ubuntu 14 server.
I am trying to write a trigger function (AFTER EACH ROW) that launches an external process that needs to access the row that fired the trigger.
My problem:
Even though I can run queries on the table, including the new row, inside the trigger, the external process does not see it (while the trigger function is still running).
Is there a way to manage that?
I thought about starting some kind of asynchronous function call to give the trigger some time to terminate first, but that is of course really ugly.
Also, I read about notifiers and listeners, but that would require some refactoring of my existing code plus an additional listener process, which is what I tried to avoid with my trigger. (I'm also afraid of new problems that may occur down this road.)
Any more thoughts?
Robin
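For what it's worth, the LISTEN/NOTIFY route mentioned above can be quite small. A sketch with hypothetical table, function and channel names: the trigger only queues a notification, and PostgreSQL delivers it after the transaction commits, so the external listener is guaranteed to see the committed row when it looks it up.

```sql
-- Sketch (hypothetical names): the payload carries the new row's id so the
-- external listener can fetch the row after commit.
CREATE OR REPLACE FUNCTION notify_row_change() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('row_changed', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- EXECUTE PROCEDURE is the syntax valid on PostgreSQL 9.3.
CREATE TRIGGER my_table_notify
AFTER INSERT OR UPDATE ON my_table
FOR EACH ROW EXECUTE PROCEDURE notify_row_change();
```

This also explains the observed behaviour: queries inside the trigger run within the still-open transaction and can see the new row, while a separate external session cannot until that transaction commits.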

flyway Working with existing Db

I'm starting to use Flyway in my project and have configured it to apply new changes to my DB, but I want to know if I can pass two params, init-method="init, migrate". Every time I want to start on a new computer, I need to restore a DB backup from pgAdmin and change the init-method to init, so that Flyway creates the table it uses to track the updates; then I need to change my init-method back to migrate to apply future changes.
I want to know if there is a better way to do this.
Sorry for my English.
Flyway 2.1 will come with an initOnMigrate setting that does exactly what you need. Until then, you can always wrap the Flyway class with one containing the logic you need.
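Assuming Flyway 2.1's announced initOnMigrate setting (renamed baselineOnMigrate in later Flyway versions) and the Flyway 2.x package name, the Spring configuration would then look roughly like this, with a single init-method:

```xml
<!-- Sketch, assuming Flyway 2.1's initOnMigrate setting: a plain migrate call
     then baselines an existing database on first contact and migrates as
     usual afterwards. Bean ids and the dataSource ref are placeholders. -->
<bean id="flyway" class="com.googlecode.flyway.core.Flyway" init-method="migrate">
    <property name="dataSource" ref="dataSource"/>
    <property name="initOnMigrate" value="true"/>
</bean>
```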

Committing to database during process phase in spring batch job

I have a conventional Spring Batch job where I read from a database, process domain objects, and write them out to a file.
I need to slightly tweak the functionality during the processor phase so that I can update and commit the domain object to the database and then write it out to a file. I would need the commit to happen immediately, as I require the database ID for the write phase.
When I tried updating the domain object and saving it, I noticed that the entity was only getting committed after the write phase.
Is there any way to force the commit to happen instantly during the processor phase and continue as before?
I would need the commit to happen instantly as I would require the database ID for the write phase.
I am not sure which ID you need, as you should already have one when you are updating an (existing) entry.
If you meant insert, you can work around this issue by using database-specific functions to get the ID of the inserted but not yet committed object,
e.g. for Oracle - Obtain id of an insert in the same statement
When I tried updating the domain object and saving it, I noticed that the entity was getting committed after the write phase.
That is the desired behaviour, because the write part is the last one within the (chunk) transaction: if it is successful, commit; if not, rollback. Imagine a successful commit followed by a problem with the file; the item in the database would then have a wrong state.
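As a concrete illustration of reading a generated ID inside the still-open chunk transaction (before any commit), Spring's JdbcTemplate can return the key at insert time via a GeneratedKeyHolder; the table and column names here are placeholders:

```java
// Sketch: reading the generated id inside the same (uncommitted) transaction.
// Table/column names are placeholders; the id is visible to this transaction
// immediately, before the chunk commits at the end of the write phase.
import java.sql.PreparedStatement;
import java.sql.Statement;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.support.GeneratedKeyHolder;
import org.springframework.jdbc.support.KeyHolder;

public class DomainObjectDao {
    private final JdbcTemplate jdbcTemplate;

    public DomainObjectDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public long insertAndReturnId(String payload) {
        KeyHolder keyHolder = new GeneratedKeyHolder();
        jdbcTemplate.update(con -> {
            PreparedStatement ps = con.prepareStatement(
                "INSERT INTO events (payload) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS);
            ps.setString(1, payload);
            return ps;
        }, keyHolder);
        return keyHolder.getKey().longValue();
    }
}
```

The processor can call this and pass the returned ID along to the writer, while the actual commit (or rollback) of the whole chunk still happens after the write phase, preserving the consistency guarantee described above.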