I have 2 migrations that I wish to run.
1. Create a new table of currencies.
2. Link the existing table to currencies with an FK, plus an update script:
Sql("UPDATE dbo.Rates" +
" SET CurrencyId = c.Id" +
" from Rates r" +
" join Currencies c on c.IsoCode = r.CurrencyIso");
I also have seed data that populates the currencies table.
I need to be able to run migration 1, run the seed, then run migration 2 all in one hit.
How do I force the seed data into db before I reference the FK?
I have a .NET Core Web API endpoint which extracts data from an Oracle DB and saves it into a PostgreSQL DB. I am using the latest Entity Framework Core, with Oracle.ManagedDataAccess.Core and Npgsql.EntityFrameworkCore.PostgreSQL to connect to the respective databases.
Internally, in the service layer that the API uses to call my infrastructure layer, I have separate methods to call:
Call OracleRepository - to pull records from 5 tables (each in a different method inside the repository):
Fetch Table A data
Fetch Table B data... and so on up to Table E
Call PostgreSqlRepository - to store the data from each table (fetched from Oracle) into the Postgres DB using the Code First approach (again, each in a different method inside the repository).
Number of records in each table:
A - 6.7k
B - 113k
C - 56k
D - 5.8k
E - 5.3k
Now all of the above steps take around 45 seconds to complete. Any suggestions to improve the performance here?
Is there a way to fetch data asynchronously from the DB and store it? I have used a Transient lifetime for both the Oracle and Postgres repositories and all other services in my .NET Core application.
Note: each time, I am emptying my PostgreSQL tables before inserting data (using EF Core's RemoveRange method).
When bulk-loading tables in relational databases, there's something to keep in mind.
By default, each INSERT statement runs in its own transaction. The ACID rules of the database require things to work correctly even if many concurrent sessions are querying the table you're loading, so each of those automatic commits is expensive.
You can work around this when bulk loading. The best way is the COPY command. To use it, you extract the data from your source database (Oracle) and write it to a temporary CSV flat file on your file system; that part is fast enough. Then you use a query like the one below to load the file into the table. COPY is designed to avoid the per-row commit overhead.
COPY A FROM `/the/path/to/your/file.csv` DELIMITER ',' CSV HEADER;
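One caveat: COPY ... FROM '/path/to/file' reads the file from the database server's own filesystem. If the CSV ends up on the client machine instead (for example, the box running the .NET service), the psql \copy meta-command takes the same options but streams the file over the connection; Npgsql also exposes the COPY protocol programmatically if you want to skip the intermediate file entirely. A minimal sketch with the same hypothetical path and table:
\copy A FROM '/the/path/to/your/file.csv' DELIMITER ',' CSV HEADER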
If you can't use COPY, know this: if you batch up your INSERTs in explicit transactions,
START TRANSACTION;
INSERT INTO A (col, col) VALUES (val, val);
INSERT INTO A (col, col) VALUES (val, val);
INSERT INTO A (col, col) VALUES (val, val);
INSERT INTO A (col, col) VALUES (val, val);
INSERT INTO A (col, col) VALUES (val, val);
COMMIT;
things work a lot faster than if you do the inserts one by one. Batches of about 100 rows will typically get you a tenfold performance improvement.
In EF Core, you can do transactions with code like the following. This is still only a sketch: it lacks exception handling, and the context, DbSet, and entity names (WhateverContext, Table, Row) are placeholders.
using var context = new WhateverContext();

const int transactionLength = 100;
var rows = transactionLength;
var transaction = context.Database.BeginTransaction();

foreach (var row in yourInputRows)   // whatever the source of your rows is
{
    context.Table.Add(new Row { /* map the columns here */ });
    if (--rows <= 0)
    {
        // Finish the current transaction and start a new one.
        context.SaveChanges();
        transaction.Commit();
        transaction.Dispose();
        transaction = context.Database.BeginTransaction();
        rows = transactionLength;
    }
}

// Finish the last, partial batch.
context.SaveChanges();
transaction.Commit();
transaction.Dispose();
I have a Products table which contains an attribute that will get updated via an ERP update by an end user. When that happens, I need the update to be replicated in another table. I am not at all experienced with creating T-SQL triggers, but I believe one will accomplish my objective.
Example:
In the IC_Products table:
Productkey = 456
StockLocation = 'GA-07-A250'
In the IC_ProductCustomFields table (will start out the same because I will run a script to make it so):
Productkey = 456
CustomFieldKey = 13
Value = 'GA-07-A250'
When the IC_Products.StockLocation column gets updated, I want the corresponding IC_ProductCustomFields.Value to be updated automatically and immediately.
If a new record is created in IC_Products then I want a new record to also be created in IC_ProductCustomFields.
I would like to know how to write the trigger script as well as how to implement it. I am using SQL Server 2005.
You want something like this:
CREATE TRIGGER [dbo].[tr_Products_SyncCustomFields] ON [dbo].[IC_Products]
FOR INSERT, UPDATE
AS
-- First, handle the update. If the custom-field record doesn't exist yet, the INSERT below handles that.
UPDATE cf
SET Value = i.StockLocation
FROM IC_ProductCustomFields cf
INNER JOIN inserted i -- Yes, we want inserted. In triggers you just get the inserted and deleted pseudo-tables
    ON cf.Productkey = i.Productkey AND cf.CustomFieldKey = 13;
-- Now handle the insert where required. Note the NOT EXISTS criteria.
INSERT INTO IC_ProductCustomFields (Productkey, CustomFieldKey, Value)
SELECT i.Productkey, 13, i.StockLocation
FROM inserted i
WHERE NOT EXISTS
(
    SELECT *
    FROM IC_ProductCustomFields cf
    WHERE cf.Productkey = i.Productkey AND cf.CustomFieldKey = 13
);
GO
You could, I think, do separate triggers for insert and update, but this version also has the side effect of restoring your (supposed?) invariants if the custom fields ever get out of sync: even on an update, if the custom-field row doesn't exist, it inserts the new record as required to bring things back into compliance with your spec.
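A quick way to sanity-check the trigger once it's created - treat this as a hypothetical smoke test, since your real IC_Products table almost certainly has more columns than the two shown in the question:
-- Insert a product: the trigger should create the matching custom-field row
INSERT INTO IC_Products (Productkey, StockLocation) VALUES (456, 'GA-07-A250');
-- Update its location: the trigger should push the new value through
UPDATE IC_Products SET StockLocation = 'GA-09-B110' WHERE Productkey = 456;
-- Expect one row with Value = 'GA-09-B110'
SELECT Productkey, CustomFieldKey, Value
FROM IC_ProductCustomFields
WHERE Productkey = 456 AND CustomFieldKey = 13;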
I would like to periodically, e.g. once a year, archive a set of table rows from our DB2 9.7 database based on some criteria. So, e.g. once a year, archive all EMPLOYEE rows that have a creation date more than one year old?
By archive I mean that the data is moved out of the DB schema and stored in some other location, in a retrievable format. Is this possible to do?
System doesn't need to access archived data
If your program doesn't need to access the archived data, then I would suggest this:
create an export script (reference here), e.g.:
echo '===================== export started ';
values current time;
-- maybe ixf format would be better?
export to tablename.del of del
select * from tablename
where creation_date < (current date - 1 year)
;
echo '===================== export finished ';
create a delete db2 script, e.g.:
echo '===================== delete started ';
values current time;
delete from tablename
where creation_date < (current date - 1 year)
;
commit;
echo '===================== delete finished ';
write a batch/shell script which calls everything and copies the new export file to a safe location. The script must ensure the delete is not run until the data is safely stored:
db2 connect to db user xx using xxx
db2 -s -vtf export.sql
if [ $? -ge 4 ]; then exit 1; fi      # return code 4 or 8 from the DB2 CLP means an error
7z a safe-location-<date-time>.7z tablename.del || exit 1
db2 -s -vtf delete.sql
register the script as a cron job so this runs automatically
Again, since deleting is a very sensitive operation, I would suggest having more than one backup mechanism to ensure that no data is lost (e.g. give the delete a different timeframe - say, delete rows older than 1.5 years).
System should access archived data
If you need your system to access archived data, then I would suggest one of the following methods:
export / import to another db or table, then delete (see the import sketch after this list)
a stored procedure which does the select + insert into another db or table, then the delete - for example, you can adapt the stored procedure in answer no. 3 to this question
do table partitioning - reference here
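For the export / import route, the export side can reuse the script above; loading the delimited file into an archive database could then look roughly like this (the archive database and table names are assumptions):
connect to archivedb user xx using xxx;
-- load the previously exported delimited file, committing every 1000 rows
import from tablename.del of del
  commitcount 1000
  insert into archive_tablename;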
Sure, why not? One fairly straightforward way is to write a stored procedure that basically would:
extract all records of the given table that you wish to archive into a temp table,
insert those temp records into the archive table,
delete from the given table where the primary key is IN the temp table
If you wanted only a subset of columns to go into your archive, you could extract from a view containing just those columns, as long as you still capture the primary key in your temp table.
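A rough DB2 sketch of those three steps - the names (EMPLOYEE, EMP_ID, CREATED, EMPLOYEE_ARCHIVE) are placeholders, the archive table is assumed to already exist with the same columns, and the declared temp table needs a user temporary table space:
-- 1. extract the rows to archive into a session temp table
DECLARE GLOBAL TEMPORARY TABLE SESSION.EMP_TO_ARCHIVE
  AS (SELECT * FROM EMPLOYEE) DEFINITION ONLY
  ON COMMIT PRESERVE ROWS NOT LOGGED;
INSERT INTO SESSION.EMP_TO_ARCHIVE
  SELECT * FROM EMPLOYEE WHERE CREATED < (CURRENT DATE - 1 YEAR);
-- 2. copy them into the archive table
INSERT INTO EMPLOYEE_ARCHIVE
  SELECT * FROM SESSION.EMP_TO_ARCHIVE;
-- 3. delete them from the live table by primary key
DELETE FROM EMPLOYEE
  WHERE EMP_ID IN (SELECT EMP_ID FROM SESSION.EMP_TO_ARCHIVE);
COMMIT;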
Windows/NET/ODBC
I would like to get the query results into a new table in some handy way so that I can see them through a data adapter, but I can't find a way to do it.
There aren't many examples around at a beginner's level for this.
I don't know whether the table should be temporary or not; after seeing the results, the table is no longer needed, so I can delete it 'by hand' or it can be deleted automatically.
This is what I try:
mCmd = New OdbcCommand("CREATE TEMP TABLE temp1 ON COMMIT DROP AS " & _
"SELECT dtbl_id, name, mystr, myint, myouble FROM " & myTable & " " & _
"WHERE myFlag='1' ORDER BY dtbl_id", mCon)
n = mCmd.ExecuteNonQuery
This runs without error, and in 'n' I get the correct number of matched rows!
But in pgAdmin I can't see that table anywhere, no matter whether I look inside the open transaction or after the transaction is closed.
Second, should I define the columns of the temp1 table first, or can they be created automatically based on the query results (that would be nice!)?
Please give a minimal example, based on the code above, of how to get a new table filled with the query results.
A shorter way to do the same thing your current code does is with CREATE TEMPORARY TABLE AS SELECT ... . See the entry for CREATE TABLE AS in the manual.
Temporary tables are not visible outside the session ("connection") that created them, they're intended as a temporary location for data that the session will use in later queries. If you want a created table to be accessible from other sessions, don't use a TEMPORARY table.
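For completeness, a minimal sketch in plain SQL (mytable stands in for whatever your myTable variable holds). Without ON COMMIT DROP the temp table survives until the connection closes, so later queries on the same connection can read it, and the columns are created automatically from the SELECT list - you don't have to define them first:
CREATE TEMP TABLE temp1 AS
SELECT * FROM mytable WHERE myFlag = '1';
SELECT count(*) FROM temp1;  -- works here, but only on this same connection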
Maybe you want UNLOGGED (9.2 or newer) for data that's generated and doesn't need to be durable, but must be visible to other sessions?
See related: Is there a way to access temporary tables of other sessions in PostgreSQL?
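A sketch of the UNLOGGED variant (same placeholder names as above). Unlike a temp table it is visible from other sessions such as pgAdmin, so drop it yourself when you're done:
CREATE UNLOGGED TABLE results AS
SELECT * FROM mytable WHERE myFlag = '1';
-- inspect it from pgAdmin or anywhere else, then:
DROP TABLE results;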
When selecting 'update model from database', none of the system tables (SYS schema) are available in the list of tables.
How can I add a system table to my EF model?
Sybase (ASA12) is the database platform which I am using.
As a workaround, I created a view on the system table.
It is then available and can be updated automatically by the edmx generator.
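A sketch of that workaround - the SYS.SYSTAB catalog view and the dba owner are only examples; substitute whichever system table you actually need:
CREATE VIEW dba.v_systab AS
SELECT * FROM SYS.SYSTAB;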
I created a script that recreates all the catalog views, i.e. sys.*, as views in a user schema:
Note: This is T-SQL, and SQL Server object names, but I'm sure you can adapt the concept to Sybase.
SELECT
'CREATE VIEW ' + 'dpc.' + name + ' AS SELECT * FROM ' + 'sys.' + name + char(13) + char(10) + ' GO' + char(13) + char(10)
FROM
sys.all_objects
WHERE
type = 'v'
and is_ms_shipped = 1
and schema_name(schema_id) = 'sys'
ORDER BY
name
Then I ran the script output by the above query, which copied each sys.x view to a new dpc.x view, and added all the dpc.* views to my EDMX model.
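For example, the generated script contains one statement per catalog view, each looking like this (here for sys.tables); it assumes the dpc schema already exists (CREATE SCHEMA dpc):
CREATE VIEW dpc.tables AS SELECT * FROM sys.tables
GO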