Cannot drop Firebird table when using multiple connections - ado.net

I would like to safely drop a Firebird table. I have 3 transactions: one to recreate the table, one to do something with it (just inserting a single row to keep it simple), and the last one to drop the table.
If all these transactions are executed on a single connection, this works. If I use a different connection for each, then the drop command fails with
lock conflict on no wait transaction
unsuccessful metadata update
object TABLE "DEMO" is in use
private static void Test() {
    using var conn1 = new FbConnection(ConnectionString);
    using var conn2 = new FbConnection(ConnectionString);
    using var conn3 = new FbConnection(ConnectionString);
    conn1.Open();
    conn2.Open();
    conn3.Open();

    ExecuteTxn(conn1, cmd => {
        cmd.CommandText = "recreate table demo (id int primary key)";
        cmd.ExecuteNonQuery();
    });

    ExecuteTxn(conn2, cmd => {
        cmd.CommandText = "insert into demo (id) values (1)";
        cmd.ExecuteNonQuery();
    });

    ExecuteTxn(conn3, cmd => {
        cmd.CommandText = "drop table demo";
        cmd.ExecuteNonQuery();
    });
}

private static void ExecuteTxn(FbConnection conn, Action<FbCommand> todo) {
    using (var txn = conn.BeginTransaction())
    using (var cmd = conn.CreateCommand()) {
        cmd.Transaction = txn;
        todo(cmd);
        txn.Commit();
    }
}
I realized that changing the transaction options, as in
txn = conn.BeginTransaction(new FbTransactionOptions { TransactionBehavior = FbTransactionBehavior.Wait })
seems to help. But I'm not sure whether this is the right thing to do or just a coincidence...
Using Firebird 3.0.6, FirebirdSql.Data.FirebirdClient.dll 7.5.0.0

As far as I understand it, the problem has to do with how Firebird caches certain metadata, which can result in existence locks being retained and prevent deletion of the object. In addition, it is possible - this is a guess! - that the Firebird ADO.NET provider retains the statement handle with the insert statement prepared, which would also result in an existence lock being retained.
Executing in a WAIT transaction (optionally with a timeout) is considered an appropriate workaround by the Firebird core developers.
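A minimal sketch of that workaround (assumptions: FbTransactionBehavior is a flags enum, so the isolation mode is combined with Wait, and the WaitTimeout property is available in your provider version; drop it if it is not):

private static void ExecuteWaitTxn(FbConnection conn, Action<FbCommand> todo) {
    var options = new FbTransactionOptions {
        // Wait for conflicting (existence) locks instead of failing immediately.
        TransactionBehavior = FbTransactionBehavior.Concurrency | FbTransactionBehavior.Wait,
        // Assumed optional lock timeout; omit if your client version lacks it.
        WaitTimeout = TimeSpan.FromSeconds(10)
    };
    using (var txn = conn.BeginTransaction(options))
    using (var cmd = conn.CreateCommand()) {
        cmd.Transaction = txn;
        todo(cmd);
        txn.Commit();
    }
}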
For reference, see the following tickets:
CORE-3766 - Transaction can't change metadata if it is run in no_wait and there is another connect that once had queried these metadata
CORE-6382 - Triggers accessing a table prevent concurrent DDL command from dropping that table
In certain cases, switching from Firebird ClassicServer or Firebird SuperClassic to Firebird SuperServer can also prevent this problem.
However, if you want a more in-depth explanation, it might be worthwhile to ask this question on the firebird-devel mailing list.

Related

Entity Framework Core: how to add additional SQL query to all queries?

By session I mean a Postgres session (the same TCP connection).
My question is related to https://stackoverflow.com/a/19410907/1251169
I need to pass a user id to a Postgres trigger. From my research, the only way I can see is to use set session custom.userid = 'NAME' (see the url above).
But obviously that only works if I pass the query in the same session.
For example, this works
using (var context = new MyDbContext())
{
    context.Database.ExecuteSqlCommand(
        "set session custom.userid = 'NAME'; " +
        "insert into my_table values ('abc', 1)");
}
Inside the my_table trigger I can read
select current_setting('custom.userid'); -- NAME
This doesn't work:
using (var context = new MyDbContext())
{
    context.Database.ExecuteSqlCommand("set session custom.userid = 'NAME';");
    context.Database.ExecuteSqlCommand("insert into my_table values ('abc', 1)");
}
Because these are two different sessions.
select current_setting('custom.userid'); -- empty string
I use EF Code First and manipulate models, not raw queries
using (var context = new MyDbContext())
{
    context.Database.ExecuteSqlCommand("set session custom.userid = 'NAME';");
    context.MyTables.Add(new MyTable
    {
        Code = "abc",
        Value = 1
    });
    context.SaveChanges();
}
This doesn't work either, because there are two different sessions. As far as I understand, EF builds the raw SQL query internally and executes it in a separate session.
Is it possible to access these internals (via a hook function or something similar) to add a "set session" query to every query issued by EF?
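One possible hook - a sketch only, assuming EF Core 3.0 or later where connection interceptors are available; the class name and the way the user id is supplied are placeholders - is to set the variable every time EF opens a connection:

using System.Data.Common;
using Microsoft.EntityFrameworkCore.Diagnostics;

public class SessionUserIdInterceptor : DbConnectionInterceptor
{
    private readonly string _userId; // placeholder: however you obtain the user id

    public SessionUserIdInterceptor(string userId) => _userId = userId;

    public override void ConnectionOpened(DbConnection connection, ConnectionEndEventData eventData)
    {
        // set_config(..., false) behaves like "set session" and accepts parameters.
        // For async opens, override ConnectionOpenedAsync in the same way.
        using var cmd = connection.CreateCommand();
        cmd.CommandText = "select set_config('custom.userid', @userid, false)";
        var p = cmd.CreateParameter();
        p.ParameterName = "userid";
        p.Value = _userId;
        cmd.Parameters.Add(p);
        cmd.ExecuteNonQuery();
    }
}

// Registered on the context options, e.g.:
// optionsBuilder.AddInterceptors(new SessionUserIdInterceptor("NAME"));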
P.S.
A possible solution is to use set local (not session) and create an additional transaction (if one does not already exist). But I'd like to avoid creating a transaction just to pass some data.
public class MyDbContext : DbContext
{
    public override int SaveChanges()
    {
        IDbContextTransaction trans = null;
        try
        {
            if (Database.CurrentTransaction == null)
            {
                trans = Database.BeginTransaction();
            }
            Database.ExecuteSqlCommand("set LOCAL custom.userid = 'test987';");
            var result = base.SaveChanges();
            trans?.Commit(); // commit only if SaveChanges succeeded
            return result;
        }
        finally
        {
            trans?.Dispose(); // rolls back if it was never committed
        }
    }
}

Parallel.Foreach and BulkCopy

I have a C# library which connects to 59 servers with the same database structure and imports data into the same table in my local db. At the moment I am retrieving data server by server in a foreach loop:
foreach (var systemDto in systems)
{
    var sourceConnectionString = _systemService.GetConnectionStringAsync(systemDto.Ip).Result;
    var dbConnectionFactory = new DbConnectionFactory(sourceConnectionString,
        "System.Data.SqlClient");
    var dbContext = new DbContext(dbConnectionFactory);
    var storageRepository = new StorageRepository(dbContext);
    var usedStorage = storageRepository.GetUsedStorageForCurrentMonth();

    var dtUsedStorage = new DataTable();
    dtUsedStorage.Load(usedStorage);

    var dcIp = new DataColumn("IP", typeof(string)) { DefaultValue = systemDto.Ip };
    var dcBatchDateTime = new DataColumn("BatchDateTime", typeof(string))
    {
        DefaultValue = batchDateTime
    };
    dtUsedStorage.Columns.Add(dcIp);
    dtUsedStorage.Columns.Add(dcBatchDateTime);

    using (var blkCopy = new SqlBulkCopy(destinationConnectionString))
    {
        blkCopy.DestinationTableName = "dbo.tbl";
        blkCopy.WriteToServer(dtUsedStorage);
    }
}
Because there are many systems to retrieve data from, I wonder if it is possible to use a Parallel.ForEach loop. Will SqlBulkCopy lock the table during WriteToServer, so that the next WriteToServer has to wait until the previous one completes?
-- EDIT 1
I've changed the foreach to Parallel.ForEach, but I face one problem. Inside this loop I have an async method: _systemService.GetConnectionStringAsync(systemDto.Ip),
and this line returns an error:
System.NotSupportedException: A second operation started on this
context before a previous asynchronous operation completed. Use
'await' to ensure that any asynchronous operations have completed
before calling another method on this context. Any instance members
are not guaranteed to be thread safe.
Any ideas how I can handle this?
In general, it will get blocked and will wait until the previous operation completes.
There are some factors that may affect whether SqlBulkCopy can be run in parallel or not.
I remember that when adding the Parallel feature to my .NET Bulk Operations, I had a hard time making it work correctly in parallel, but it worked well when the table had no index (which is almost never the case).
Even when it worked, the performance gain was not very large.
Perhaps you will find more information here: MSDN - Importing Data in Parallel with Table Level Locking
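For what it's worth, a rough sketch of the parallel shape, under these assumptions: each iteration gets its own service instance (a shared DbContext is not thread safe), and CreateSystemService and LoadUsedStorage are hypothetical helpers standing in for the code in the loop above; TableLock is optional:

Parallel.ForEach(systems, systemDto =>
{
    // Hypothetical factory: a fresh, non-shared service per iteration,
    // so no DbContext instance is used from two threads at once.
    using var systemService = CreateSystemService();
    var sourceConnectionString = systemService
        .GetConnectionStringAsync(systemDto.Ip)
        .GetAwaiter()
        .GetResult();

    // Hypothetical helper wrapping the DataTable-building code from the question.
    DataTable dtUsedStorage = LoadUsedStorage(sourceConnectionString, systemDto.Ip, batchDateTime);

    using var blkCopy = new SqlBulkCopy(destinationConnectionString, SqlBulkCopyOptions.TableLock);
    blkCopy.DestinationTableName = "dbo.tbl";
    blkCopy.WriteToServer(dtUsedStorage);
});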

Which transaction isolation level to choose for atomic "Get Or Create" Scenario

Which transaction IsolationLevel is the best to guarantee that only one DataRow gets created?
Assume SQL Server 2012 and Entity Framework 6 are used.
using (var db = new XyzContext())
{
    using (var dbContextTransaction = db.Database.BeginTransaction(???))
    {
        try
        {
            Item obj = db.Item.SingleOrDefault(o => o.Hashcode.Equals(hashCode));
            // it is possible that 2 threads are coming through here and both have obj == null
            if (obj == null)
            {
                obj = db.Item.Add(new Item
                {
                    Hashcode = hashCode,
                    State = 0,
                });
            }
            db.SaveChanges();
            dbContextTransaction.Commit();
        }
        catch (Exception)
        {
            dbContextTransaction.Rollback();
        }
    }
}
If your scenario were an update, then Snapshot would be good (which is the default behavior of EF 6).
But in your case, which is an insert, most of those methods will not work properly.
You must be sure that your lock escalation level is table (which is the default).
Then apply the RepeatableRead transaction mode.
It prevents other threads from reading the table until the first thread is done.
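In code that might look like this (a sketch only; EF6's BeginTransaction accepts a System.Data.IsolationLevel):

using (var db = new XyzContext())
using (var dbContextTransaction =
           db.Database.BeginTransaction(System.Data.IsolationLevel.RepeatableRead))
{
    // ... the same get-or-create logic as in the question ...
    dbContextTransaction.Commit();
}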
It's better to have a unique constraint on one of your columns instead of this method (sketched below).
Alternatively, create a special table in your SQL Server database and take a row lock on a specific record of that table before your main query and insert, then do your work; there is no bottleneck for your other operations on that table, and it is fast enough.
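A rough sketch of the unique-constraint variant, under the assumptions that a unique index exists on Hashcode and that SQL Server reports duplicate keys as error 2601 or 2627:

Item GetOrCreate(string hashCode)
{
    using (var db = new XyzContext())
    {
        var existing = db.Item.SingleOrDefault(o => o.Hashcode.Equals(hashCode));
        if (existing != null)
            return existing;

        try
        {
            var item = db.Item.Add(new Item { Hashcode = hashCode, State = 0 });
            db.SaveChanges();
            return item;
        }
        catch (DbUpdateException ex) when (
            ex.GetBaseException() is SqlException sql &&
            (sql.Number == 2601 || sql.Number == 2627))
        {
            // Another thread won the race: the unique index rejected our duplicate,
            // so read back the row that thread created (a fresh context also works).
            return db.Item.AsNoTracking().Single(o => o.Hashcode.Equals(hashCode));
        }
    }
}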
Good luck

Get connection used by DatabaseFactory.GetDatabase().ExecuteReader()

We have two different query strategies that we'd ideally like to use in conjunction on our site without opening redundant connections. One strategy uses the Enterprise Library to pull Database objects and Execute_____(DbCommand)s on the Database, without directly selecting any sort of connection. Effectively like this:
Database db = DatabaseFactory.CreateDatabase();
DbCommand q = db.GetStoredProcCommand("SomeProc");
using (IDataReader r = db.ExecuteReader(q))
{
    List<RecordType> rv = new List<RecordType>();
    while (r.Read())
    {
        rv.Add(RecordType.CreateFromReader(r));
    }
    return rv;
}
The other, newer strategy, uses a library that asks for an IDbConnection, which it Close()es immediately after execution. So, we do something like this:
DbConnection c = DatabaseFactory.CreateDatabase().CreateConnection();
using (QueryBuilder qb = new QueryBuilder(c))
{
    return qb.Find<RecordType>(ConditionCollection);
}
But the connection returned by CreateConnection() isn't the same one used by Database.ExecuteReader(), which is apparently left open between queries. So, when we call a data access method using the new strategy after one using the old strategy inside a TransactionScope, it causes unnecessary promotion to a distributed transaction -- promotion that I'm not sure we have the ability to configure for (we don't have administrative access to the SQL Server).
Before we go down the path of modifying the query-builder-library to work with the Enterprise Library's Database objects ... Is there a way to retrieve, if existent, the open connection last used by one of the Database.Execute_______() methods?
Yes, you can get the connection associated with a transaction. Enterprise Library internally manages a collection of transactions and the associated database connections, so if you are in a transaction you can retrieve the connection associated with a Database using the static TransactionScopeConnections.GetConnection method:
using (var scope = new TransactionScope())
{
    IEnumerable<RecordType> records = GetRecordTypes();
    Database db = DatabaseFactory.CreateDatabase();
    DbConnection connection = TransactionScopeConnections.GetConnection(db).Connection;
}

public static IEnumerable<RecordType> GetRecordTypes()
{
    Database db = DatabaseFactory.CreateDatabase();
    DbCommand q = db.GetStoredProcCommand("GetLogEntries");
    using (IDataReader r = db.ExecuteReader(q))
    {
        List<RecordType> rv = new List<RecordType>();
        while (r.Read())
        {
            rv.Add(RecordType.CreateFromReader(r));
        }
        return rv;
    }
}

How to delete many-to-many relationship in Entity Framework without loading all of the data

Does anyone know how to delete many-to-many relationship in ADO.NET Entity Framework without having to load all of the data? In my case I have an entity Topic that has a property Subscriptions and I need to remove a single subscription. The code myTopic.Subscriptions.Remove(...) works but I need to load all subscriptions first (e.g. myTopic.Subscriptions.Load()) and I don't want to do that because there are lots (and I mean lots) of subscriptions.
You can Attach() a subscription and then Remove() it - note that we're not using Add() here, just Attach, so effectively we're telling EF that we know the object is already in the store, and asking it to behave as if that were true.
var db = new TopicDBEntities();
var topic = db.Topics.FirstOrDefault(x => x.TopicId == 1);
// Get the subscription you want to delete
var subscription = db.Subscriptions.FirstOrDefault(x => x.SubscriptionId == 2);
topic.Subscriptions.Attach(subscription); // Attach it (the ObjectContext now 'thinks' it belongs to the topic)
topic.Subscriptions.Remove(subscription); // Remove it
db.SaveChanges(); // Flush changes
This whole exchange, including getting the original topic from the database, sends these 3 queries to the database:
SELECT TOP (1)
[Extent1].[TopicId] AS [TopicId],
[Extent1].[Description] AS [Description]
FROM [dbo].[Topic] AS [Extent1]
WHERE 1 = [Extent1].[TopicId]
SELECT TOP (1)
[Extent1].[SubscriptionId] AS [SubscriptionId],
[Extent1].[Description] AS [Description]
FROM [dbo].[Subscription] AS [Extent1]
WHERE 2 = [Extent1].[SubscriptionId]
exec sp_executesql N'delete [dbo].[TopicSubscriptions]
where (([TopicId] = @0) and ([SubscriptionId] = @1))',N'@0 int,@1 int',@0=1,@1=2
so it's not pulling all the subscriptions at any point.
This is how to delete without first loading any data. This works in EF5. Not sure about earlier versions.
var db = new TopicDBEntities();
var topic = new Topic { TopicId = 1 };
var subscription = new Subscription { SubscriptionId = 2 };
topic.Subscriptions.Add(subscription);

// Attach the topic (and, through it, the subscription) as unchanged
// so that they will not be added to the db,
// but EF starts tracking changes to the entities
db.Topics.Attach(topic);

// Remove the subscription;
// EF will know that the subscription should be removed from the topic
topic.Subscriptions.Remove(subscription);

// commit the changes
db.SaveChanges();
One way would be to have a stored proc that deletes your child records directly in the DB; include it in your EF model and just call it from your context.
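For example, a hedged sketch (the procedure name DeleteTopicSubscription and its function import are hypothetical, not part of the original answer):

// Assumed stored procedure, created in the database:
//
//   CREATE PROCEDURE dbo.DeleteTopicSubscription
//       @TopicId INT, @SubscriptionId INT
//   AS
//       DELETE FROM dbo.TopicSubscriptions
//       WHERE TopicId = @TopicId AND SubscriptionId = @SubscriptionId;
//
// After importing it into the EF model as a function import:
using (var db = new TopicDBEntities())
{
    db.DeleteTopicSubscription(1, 2); // hypothetical function import
}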
Here is my example, where I know the foreign keys and I don't want to do a DB round trip.
I hope this helps someone...
Given:
[client]<--- many-to-many --->[Medication]
Client objClient = new Client() { pkClientID = pkClientID };
EntityKey entityKey = _commonContext.CreateEntityKey("Client", objClient);
objClient.EntityKey = entityKey;
_commonContext.Attach(objClient); //just load entity key ...no db round trip
Medication objMed = new Medication() { pkMedicationID = pkMedicationID };
EntityKey entityKeyMed = _commonContext.CreateEntityKey("Medication", objMed);
objMed.EntityKey = entityKeyMed;
_commonContext.Attach(objMed);
objClient.Medication.Attach(objMed);
objClient.Medication.Remove(objMed); //this deletes
_commonContext.SaveChanges();
If the foreign keys are set up, referential integrity should be handled automatically by the DBMS itself when deleting the parent entities.
If you use Code First, as far as I learned in an MVA tutorial, ON DELETE CASCADE is the default behavior set by EF6. If running DB First, you should alter your child table(s)...
Here is the link: https://mva.microsoft.com/en-US/training-courses/implementing-entity-framework-with-mvc-8931?l=pjxcgEC3_7104984382
In the video it's mentioned from 20:00 onwards, and in the slide presentation it is said on page 14.
Cheers