Entity Framework Core: how to add an additional SQL query to all queries? (PostgreSQL)

By session I mean a Postgres session (the same TCP connection).
My question relates to https://stackoverflow.com/a/19410907/1251169
I need to pass a user id to a Postgres trigger. From my research, the only way I can see is to use set session custom.userid = 'NAME' (see the answer linked above).
But obviously this works only if I issue everything in the same session.
For example, this works:
using (var context = new MyDbContext())
{
    context.Database.ExecuteSqlCommand(
        @"set session custom.userid = 'NAME';
          insert into my_table values ('abc', 1)");
}
Inside the my_table trigger I can read:
select current_setting('custom.userid'); -- NAME
This doesn't work:
using (var context = new MyDbContext())
{
    context.Database.ExecuteSqlCommand("set session custom.userid = 'NAME';");
    context.Database.ExecuteSqlCommand("insert into my_table values ('abc', 1)");
}
Because these are two different sessions:
select current_setting('custom.userid'); -- empty string
I use EF Code First and work with models, not raw queries:
using (var context = new MyDbContext())
{
    context.Database.ExecuteSqlCommand("set session custom.userid = 'NAME';");
    context.MyTables.Add(new MyTable
    {
        Code = "abc",
        Value = 1
    });
    context.SaveChanges();
}
This doesn't work either, because there are two different sessions. As far as I understand, EF builds the raw SQL query internally and executes it in a separate session.
Is it possible to hook into these internals (via a hook function or otherwise) to prepend a "set session" statement to every query EF issues?
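For what it's worth, one candidate hook (an assumption: it requires EF Core 3.0 or later, where interceptors were introduced) is a connection interceptor that runs the set session statement every time EF opens a connection, so the setting travels with whatever session EF actually uses:

using System.Data.Common;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

// Sketch of a hook: run "set session" on every connection EF opens,
// so every subsequent query in that session sees custom.userid.
public class UserIdConnectionInterceptor : DbConnectionInterceptor
{
    public override void ConnectionOpened(DbConnection connection, ConnectionEndEventData eventData)
    {
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = "set session custom.userid = 'NAME';";
            cmd.ExecuteNonQuery();
        }
    }
}

// Registered in OnConfiguring (or AddDbContext):
// optionsBuilder.AddInterceptors(new UserIdConnectionInterceptor());

Because Npgsql pools physical connections, re-applying the setting on every open (rather than assuming a fresh session) is the safer choice here.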
P.S.
A possible solution is to use set local (not session) and create an additional transaction (if one does not already exist). But I'd like to avoid creating a transaction just to pass some data.
public class MyDbContext : DbContext
{
    public override int SaveChanges()
    {
        // Begin a transaction if the caller hasn't already started one,
        // so that "set LOCAL" applies to the same transaction as the save.
        IDbContextTransaction trans = null;
        try
        {
            if (Database.CurrentTransaction == null)
            {
                trans = Database.BeginTransaction();
            }
            Database.ExecuteSqlCommand("set LOCAL custom.userid = 'test987';");
            var result = base.SaveChanges();
            trans?.Commit(); // commit only on success, not in finally
            return result;
        }
        finally
        {
            trans?.Dispose();
        }
    }
}

Related

Cannot drop Firebird table when using multiple connections

I would like to safely drop a Firebird table. I have three transactions: one to recreate the table, one to do something with it (just inserting a single row, to keep it simple), and the last one to drop the table.
If all these transactions are executed using a single connection, this works. If I use different connections, then the drop command fails with:
lock conflict on no wait transaction
unsuccessful metadata update
object TABLE "DEMO" is in use
private static void Test() {
    using var conn1 = new FbConnection(ConnectionString);
    using var conn2 = new FbConnection(ConnectionString);
    using var conn3 = new FbConnection(ConnectionString);
    conn1.Open();
    conn2.Open();
    conn3.Open();
    ExecuteTxn(conn1, cmd => {
        cmd.CommandText = "recreate table demo (id int primary key)";
        cmd.ExecuteNonQuery();
    });
    ExecuteTxn(conn2, cmd => {
        cmd.CommandText = "insert into demo (id) values (1)";
        cmd.ExecuteNonQuery();
    });
    ExecuteTxn(conn3, cmd => {
        cmd.CommandText = "drop table demo";
        cmd.ExecuteNonQuery();
    });
}

private static void ExecuteTxn(FbConnection conn, Action<FbCommand> todo) {
    using (var txn = conn.BeginTransaction())
    using (var cmd = conn.CreateCommand()) {
        cmd.Transaction = txn;
        todo(cmd);
        txn.Commit();
    }
}
I realized that changing the transaction options to
txn = conn.BeginTransaction(new FbTransactionOptions { TransactionBehavior = FbTransactionBehavior.Wait })
seems to help. But I'm not sure whether this is the right thing to do or just a coincidence...
Using Firebird 3.0.6, FirebirdSql.Data.FirebirdClient.dll 7.5.0.0
As far as I understand it, the problem has to do with how Firebird caches certain metadata, which might result in existence locks being retained, which will prevent deletion of the object. In addition, it is possible (this is a guess!) that the Firebird ADO.NET provider retains the statement handle with the insert statement prepared, which will also result in an existence lock being retained.
Executing in a WAIT transaction (optionally with a timeout) is considered an appropriate workaround by the Firebird core developers.
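For illustration, a minimal variant of the ExecuteTxn helper above using the WAIT behavior (the commented-out WaitTimeout property is an assumption; verify that the FirebirdClient version you use exposes it):

private static void ExecuteTxnWait(FbConnection conn, Action<FbCommand> todo) {
    // WAIT: block until the conflicting lock is released instead of
    // failing immediately with "lock conflict on no wait transaction".
    var options = new FbTransactionOptions {
        TransactionBehavior = FbTransactionBehavior.Wait
        // WaitTimeout = TimeSpan.FromSeconds(10) // assumed available in newer FirebirdClient versions
    };
    using (var txn = conn.BeginTransaction(options))
    using (var cmd = conn.CreateCommand()) {
        cmd.Transaction = txn;
        todo(cmd);
        txn.Commit();
    }
}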
For reference, see the following tickets:
CORE-3766 - Transaction can`t change metadata if it is run in no_wait and there is another connect that once had queried these metadata
CORE-6382 - Triggers accessing a table prevent concurrent DDL command from dropping that table
In certain cases, switching from Firebird ClassicServer or Firebird SuperClassic to Firebird SuperServer can also prevent this problem.
However, if you want a more in-depth explanation, it might be worthwhile to ask this question on the firebird-devel mailing list.

Which transaction isolation level to choose for atomic "Get Or Create" Scenario

Which transaction IsolationLevel is best to guarantee that only one data row gets created?
Assume SQL Server 2012 and Entity Framework 6 are used.
using (var db = new XyzContext())
{
    using (var dbContextTransaction = db.Database.BeginTransaction(???))
    {
        try
        {
            Item obj = db.Item.SingleOrDefault(o => o.Hashcode.Equals(hashCode));
            // It is possible that 2 threads get here and both have obj == null.
            if (obj == null)
            {
                obj = db.Item.Add(new Item
                {
                    Hashcode = hashCode,
                    State = 0,
                });
            }
            db.SaveChanges();
            dbContextTransaction.Commit();
        }
        catch (Exception)
        {
            dbContextTransaction.Rollback();
        }
    }
}
If your scenario were an update, then Snapshot would be good (which is the default behavior of EF 6).
But since your case is an insert, most methods will not work properly.
You must make sure your lock escalation level is table (which is the default), then apply the RepeatableRead transaction mode; it prevents other threads from reading the table until the first thread is done.
It's better to have a unique constraint on one of your columns instead of this method, as sketched below.
Alternatively, create a special table in your database and take a row lock on a specific record of that table before your main query and insert, then do your work. That creates no bottleneck for your other operations on the main table and is fast enough.
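For illustration, a rough sketch of the unique-constraint variant (names are taken from the question; the error numbers 2627 and 2601 are what SQL Server raises for unique-constraint and unique-index violations):

public Item GetOrCreate(string hashCode)
{
    using (var db = new XyzContext())
    {
        var existing = db.Item.SingleOrDefault(o => o.Hashcode.Equals(hashCode));
        if (existing != null)
            return existing;

        try
        {
            var obj = db.Item.Add(new Item { Hashcode = hashCode, State = 0 });
            db.SaveChanges(); // relies on a unique index/constraint on Item.Hashcode
            return obj;
        }
        catch (DbUpdateException ex) when (IsUniqueViolation(ex))
        {
            // Another thread won the race; read the row it created.
            return db.Item.Single(o => o.Hashcode.Equals(hashCode));
        }
    }
}

private static bool IsUniqueViolation(DbUpdateException ex)
{
    var sql = ex.GetBaseException() as SqlException;
    return sql != null && (sql.Number == 2627 || sql.Number == 2601);
}

No elevated isolation level is needed this way; the database enforces uniqueness and the losing thread simply re-reads.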
Good luck

TransactionScope with two datacontexts doesn't roll back

I am trying to solve a situation with rolling back our data contexts.
We are using one TransactionScope and, inside it, two data contexts for two different databases.
At the end we want to save the changes to both databases, so we call .SaveChanges(), but the problem is that when an error occurs in the second database, the changes to the first database are still saved.
What am I doing wrong here, so that the first database doesn't roll back?
Thank you,
Jakub
public void DoWork()
{
    using (var scope = new TransactionScope())
    {
        using (var rawData = new IntranetRawDataDevEntities())
        {
            rawData.Configuration.AutoDetectChangesEnabled = true;
            using (var dataWareHouse = new IntranetDataWareHouseDevEntities())
            {
                dataWareHouse.Configuration.AutoDetectChangesEnabled = true;

                // ... some operations with the data; no SaveChanges() is called here.

                // Save changes for all items.
                if (!errors)
                {
                    // First database save.
                    rawData.SaveChanges();

                    // Fake data to fail the second database save.
                    dataWareHouse.Tasks.Add(new PLKPIDashboards.DataWareHouse.Task()
                    {
                        Description = string.Empty,
                        Id = 0,
                        OperationsQueue = new OperationsQueue(),
                        Queue_key = 79,
                        TaskTypeSLAs = new Collection<TaskTypeSLA>(),
                        Tasktype = null
                    });

                    // Second database save.
                    dataWareHouse.SaveChanges();
                    scope.Complete();
                }
                else
                {
                    scope.Dispose();
                }
            }
        }
    }
}
From the article http://blogs.msdn.com/b/alexj/archive/2009/01/11/savechanges-false.aspx, try to use:
rawData.SaveChanges(false);
dataWareHouse.SaveChanges(false);
// If everything is OK:
scope.Complete();
rawData.AcceptAllChanges();
dataWareHouse.AcceptAllChanges();
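A caveat: SaveChanges(bool) lives on ObjectContext, not on DbContext, so with the DbContext-based contexts shown above you would go through IObjectContextAdapter. A sketch under that assumption (EF 5/6 namespaces):

// using System.Data.Entity.Core.Objects;   // EF6 (System.Data.Objects in EF4)
// using System.Data.Entity.Infrastructure;

var rawCtx = ((IObjectContextAdapter)rawData).ObjectContext;
var dwCtx  = ((IObjectContextAdapter)dataWareHouse).ObjectContext;

// Save without accepting changes, so the change trackers keep their state
// and a failed save can be retried after the transaction rolls back.
rawCtx.SaveChanges(SaveOptions.DetectChangesBeforeSave);
dwCtx.SaveChanges(SaveOptions.DetectChangesBeforeSave);

scope.Complete();

// Only discard the tracked changes once both saves have succeeded.
rawCtx.AcceptAllChanges();
dwCtx.AcceptAllChanges();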

Get connection used by DatabaseFactory.GetDatabase().ExecuteReader()

We have two different query strategies that we'd ideally like to operate in conjunction on our site without opening redundant connections. One strategy uses the Enterprise Library to pull Database objects and run Execute_____(DbCommand) calls on the Database, without directly selecting any sort of connection. Effectively like this:
Database db = DatabaseFactory.CreateDatabase();
DbCommand q = db.GetStoredProcCommand("SomeProc");
using (IDataReader r = db.ExecuteReader(q))
{
    List<RecordType> rv = new List<RecordType>();
    while (r.Read())
    {
        rv.Add(RecordType.CreateFromReader(r));
    }
    return rv;
}
The other, newer strategy uses a library that asks for an IDbConnection, which it Close()es immediately after execution. So, we do something like this:
DbConnection c = DatabaseFactory.CreateDatabase().CreateConnection();
using (QueryBuilder qb = new QueryBuilder(c))
{
    return qb.Find<RecordType>(ConditionCollection);
}
But the connection returned by CreateConnection() isn't the same one used by Database.ExecuteReader(), which is apparently left open between queries. So, when we call a data access method using the new strategy after one using the old strategy inside a TransactionScope, it causes unnecessary promotion to a distributed transaction -- promotion that I'm not sure we have the ability to configure for (we don't have administrative access to the SQL Server).
Before we go down the path of modifying the query-builder library to work with the Enterprise Library's Database objects: is there a way to retrieve, if it exists, the open connection last used by one of the Database.Execute_______() methods?
Yes, you can get the connection associated with a transaction. Enterprise Library internally manages a collection of transactions and the associated database connections, so if you are in a transaction you can retrieve the connection associated with a database using the static TransactionScopeConnections.GetConnection method:
using (var scope = new TransactionScope())
{
    IEnumerable<RecordType> records = GetRecordTypes();
    Database db = DatabaseFactory.CreateDatabase();
    DbConnection connection = TransactionScopeConnections.GetConnection(db).Connection;
}

public static IEnumerable<RecordType> GetRecordTypes()
{
    Database db = DatabaseFactory.CreateDatabase();
    DbCommand q = db.GetStoredProcCommand("GetLogEntries");
    using (IDataReader r = db.ExecuteReader(q))
    {
        List<RecordType> rv = new List<RecordType>();
        while (r.Read())
        {
            rv.Add(RecordType.CreateFromReader(r));
        }
        return rv;
    }
}

ADO.NET - Bad Practice?

I was reading an article on MSDN several months ago and have recently started using the following snippet to execute ADO.NET code, but I get the feeling it could be bad. Am I overreacting, or is it perfectly acceptable?
private void Execute(Action<SqlConnection> action)
{
    SqlConnection conn = null;
    try {
        conn = new SqlConnection(ConnectionString);
        conn.Open();
        action.Invoke(conn);
    } finally {
        if (conn != null && conn.State == ConnectionState.Open) {
            try {
                conn.Close();
            } catch {
            }
        }
    }
}
public bool GetSomethingById(int id) {
    SomeThing aSomething = null;
    Execute(conn =>
    {
        using (SqlCommand cmd = conn.CreateCommand()) {
            cmd.CommandText = "..."; // elided in the original
            // ...
            SqlDataReader reader = cmd.ExecuteReader();
            // ...
            aSomething = new SomeThing(Convert.ToString(reader["aDbField"]));
        }
    });
    return aSomething != null;
}
What is the point of doing that when you can do this?
public SomeThing GetSomethingById(int id)
{
    using (var con = new SqlConnection(ConnectionString))
    {
        con.Open();
        using (var cmd = con.CreateCommand())
        {
            // prepare command
            using (var rdr = cmd.ExecuteReader())
            {
                // read fields
                return new SomeThing(data);
            }
        }
    }
}
You can promote code reuse by doing something like this.
public static void ExecuteToReader(string connectionString, string commandText, IEnumerable<KeyValuePair<string, object>> parameters, Action<IDataReader> action)
{
    using (var con = new SqlConnection(connectionString))
    {
        con.Open();
        using (var cmd = con.CreateCommand())
        {
            cmd.CommandText = commandText;
            foreach (var pair in parameters)
            {
                var parameter = cmd.CreateParameter();
                parameter.ParameterName = pair.Key;
                parameter.Value = pair.Value;
                cmd.Parameters.Add(parameter);
            }
            using (var rdr = cmd.ExecuteReader())
            {
                action(rdr);
            }
        }
    }
}
You could use it like this:
// At the top of the file, create an alias:
using DbParams = Dictionary<string, object>;

ExecuteToReader(
    connectionString,
    commandText,
    new DbParams { { "key1", 1 }, { "key2", 2 } },
    reader =>
    {
        // ...
        // No need to dispose
    });
IMHO it is indeed a bad practice, since you're creating and opening a new database connection for every statement that you execute.
Why is it bad:
performance-wise (although connection pooling helps decrease the performance hit): you should open your connection, execute the statements that have to be executed, and close the connection when you don't know when the next statement will be executed.
but certainly context-wise: how will you handle transactions? Where are your transaction boundaries? Your application layer knows when a transaction has to be started and committed, but with this way of working you're unable to span multiple statements in the same SQL transaction (see the sketch below).
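For example, a minimal sketch of what the per-statement wrapper cannot express (the table and column names are made up): two statements on one connection that must commit or roll back together.

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var txn = conn.BeginTransaction())
    using (var cmd = conn.CreateCommand())
    {
        cmd.Transaction = txn;
        cmd.Parameters.AddWithValue("@amt", 100m);

        cmd.CommandText = "UPDATE Accounts SET Balance = Balance - @amt WHERE Id = 1";
        cmd.ExecuteNonQuery();

        cmd.CommandText = "UPDATE Accounts SET Balance = Balance + @amt WHERE Id = 2";
        cmd.ExecuteNonQuery();

        txn.Commit(); // both updates succeed or fail together
    }
}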
This is a very reasonable approach to use.
By wrapping your connection logic into a method which takes an Action<SqlConnection>, you're helping prevent duplicated code and the potential for introduced error. Since we can now use lambdas, this becomes an easy, safe way to handle this situation.
That's acceptable. I created a SqlUtilities class two years ago that had a similar method. You can take it one step further if you like.
EDIT: I couldn't find the code, but I typed up a small example (probably with many syntax errors ;))
SQLUtilities
public delegate T CreateMethod<T>(SqlDataReader reader);

public static T CreateEntity<T>(string query, CreateMethod<T> createMethod, params SqlParameter[] parameters) {
    // Open the SQL connection and create a command for the query/stored procedure.
    using (var conn = new SqlConnection(ConnectionString))
    using (var cmd = new SqlCommand(query, conn)) {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddRange(parameters);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader()) {
            reader.Read();
            return createMethod(reader);
        }
    }
}
Calling code
private SomeThing Create(SqlDataReader reader) {
    SomeThing something = new SomeThing();
    something.ID = Convert.ToInt32(reader["ID"]);
    // ...
    return something;
}

public SomeThing GetSomeThingByID(int id) {
    return SqlUtilities.CreateEntity<SomeThing>("something_getbyid", Create, ....);
}
Of course you could use a lambda expression instead of the Create method, and you could easily make a CreateCollection method (sketched below) and reuse the existing Create method.
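For completeness, a sketch of that CreateCollection variant (hypothetical, mirroring CreateEntity above):

public static List<T> CreateCollection<T>(string query, CreateMethod<T> createMethod, params SqlParameter[] parameters) {
    using (var conn = new SqlConnection(ConnectionString))
    using (var cmd = new SqlCommand(query, conn)) {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddRange(parameters);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader()) {
            var results = new List<T>();
            while (reader.Read()) {
                results.Add(createMethod(reader)); // reuse the single-row Create method per row
            }
            return results;
        }
    }
}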
However, if this is a new project, check out LINQ to Entities. It is far easier and more flexible than ADO.NET.
Well, in my opinion, check what you are doing before going through with it. Something that works doesn't mean it is good programming practice. Find a concrete example and the benefit of using it. But if you are considering this for big projects, it would be better to use a framework like NHibernate, because a lot of projects and even frameworks have been developed on top of it, like http://www.cuyahoga-project.org/.