ADO.Net Transaction when Command's transaction is not set

I came across an ADO.NET sample where a transaction was used without setting the command's Transaction property, as in the code below.
Is this possible, or does one need to explicitly set the command's Transaction property?
// Start a local transaction.
SqlTransaction sqlTran = connection.BeginTransaction();
// Enlist a command in the current transaction.
SqlCommand command = connection.CreateCommand();
-----
-----
sqlTran.Commit();

That will throw a runtime exception. That is, if a SqlConnection has an active transaction and you execute a SqlCommand on it without assigning the corresponding SqlTransaction to the command's Transaction property, the command fails with an InvalidOperationException.
In short, always set the Transaction property explicitly when executing a command on a connection that has an active transaction.
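For illustration, a minimal corrected sketch (the connection string and SQL text are placeholders, not from the original sample):
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    // Start a local transaction.
    SqlTransaction sqlTran = connection.BeginTransaction();
    // Enlist the command in the transaction by assigning the Transaction property.
    SqlCommand command = connection.CreateCommand();
    command.Transaction = sqlTran;
    try
    {
        command.CommandText = "INSERT INTO SomeTable (Name) VALUES ('example')";  // placeholder SQL
        command.ExecuteNonQuery();
        sqlTran.Commit();
    }
    catch
    {
        sqlTran.Rollback();
        throw;
    }
}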


JDBC batch for multiple prepared statements

Is it possible to batch together commits from multiple JDBC prepared statements?
In my app the user will insert one or more records along with records in related tables. For example, we'll need to update a record in the "contacts" table, delete related records in the "tags" table, and then insert a fresh set of tags.
UPDATE contacts SET name=? WHERE contact_id=?;
DELETE FROM tags WHERE contact_id=?;
INSERT INTO tags (contact_id,tag) values (?,?);
// insert more tags as needed here...
These statements need to be part of a single transaction, and I want to do them in a single round trip to the server.
To send them in a single round trip, there are two choices: create one Statement and call .addBatch() with each full SQL string, or create a PreparedStatement for each command, call .setString(), .setInt() etc. for the parameter values, and then call .addBatch().
The problem with the first choice is that sending a full SQL string in the .addBatch() call is inefficient and you don't get the benefit of sanitized parameter inputs.
The problem with the second choice is that it may not preserve the order of the SQL statements. For example,
Connection con = ...;
PreparedStatement updateState = con.prepareStatement("UPDATE contacts SET name=? WHERE contact_id=?;");
PreparedStatement deleteState = con.prepareStatement("DELETE FROM tags WHERE contact_id=?;");
PreparedStatement insertState = con.prepareStatement("INSERT INTO tags (contact_id,tag) values (?,?);");
updateState.setString(1, "Bob");
updateState.setInt(2, 123);
updateState.addBatch();
deleteState.setInt(1, 123);
deleteState.addBatch();
... etc ...
... now add more parameters to updateState, and addBatch()...
... repeat ...
con.commit();
In the code above, are there any guarantees that all of the statements will execute in the order we called .addBatch(), even across different prepared statements? Ordering is obviously important; we need to delete tags before we insert new ones.
I haven't seen any documentation that says that ordering of statements will be preserved for a given connection.
I'm using Postgres and the default Postgres JDBC driver, if that matters.
The batch is per statement object, so a batch is executed per executeBatch() call on a Statement or PreparedStatement object. In other words, this only executes the statements (or value sets) associated with the batch of that statement object. It is not possible to 'order' execution across multiple statement objects. Within an individual batch, the order is preserved.
If you need statements executed in a specific order, then you need to explicitly execute them in that order. This either means individual calls to execute() per value set, or using a single Statement object and generating the statements on the fly. Because of the risk of SQL injection, this last approach is not recommended.
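If it helps, here is a minimal sketch of ordered execution, reusing the prepared statements from the question (the tag value "friend" and the rollback handling are just illustrative):
// One batch per PreparedStatement, executed explicitly in the order we need.
// This costs one round trip per executeBatch() call, but the ordering is guaranteed.
con.setAutoCommit(false);
try {
    updateState.setString(1, "Bob");
    updateState.setInt(2, 123);
    updateState.addBatch();

    deleteState.setInt(1, 123);
    deleteState.addBatch();

    insertState.setInt(1, 123);
    insertState.setString(2, "friend");
    insertState.addBatch();

    updateState.executeBatch();   // 1) all updates
    deleteState.executeBatch();   // 2) then all deletes
    insertState.executeBatch();   // 3) then all inserts
    con.commit();
} catch (SQLException e) {
    con.rollback();
    throw e;
}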

Azure Data Factory Stop 2 triggers from executing at same time

I have two ADFv2 triggers.
One is set to execute every 3 mins and another every 20 mins.
They execute different pipelines, but there is an overlap: both touch the same database table, and I want to prevent them from running at the same time.
Is there a way to set them up so if one is already running and the other is scheduled to start, it is instead queued until the running trigger is finished?
Not natively AFAIK. You can use the pipeline's concurrency property setting to get this behaviour but only for a single pipeline.
Instead you could (we have):
Use Validation activity to block if a sentinel blob exists and have your other pipeline write and delete the blob when it starts/ends.
Likewise have one pipeline set a flag in a control table on the database that you can examine
If you can tolerate changing your frequencies to have a common factor, create a master pipeline that calls your current two pipelines via Execute Pipeline activities; make the longer one run only every n-th execution using MOD. Then you can use the concurrency setting on the outer pipeline to make sure the next trigger gets queued until the current run ends.
Use REST API https://learn.microsoft.com/en-us/azure/data-factory/monitor-programmatically#rest-api in one pipeline to check if the other is running
Jason's post gave me an idea for a simpler solution.
I have two triggers. Each executes on a different schedule and runs a different pipeline.
On occasion the schedules of these triggers can overlap. In that case the trigger that fires while the other is already running should not run; only one should be running at any one time.
I did this using the following.
Create a control table with an IsJobRunning BIT (flag) column.
When a trigger fires, the pipeline associated with it executes a stored procedure that checks the control table.
If the value of IsJobRunning is 0, then UPDATE the IsJobRunning column to 1 and continue executing;
if it is 1, then RAISERROR (a dummy error) and stop executing.
IF (SELECT IsJobRunning FROM [[Control table]]) = 1
BEGIN
SET @ERRMSG = N'**INFORMATIONAL ONLY** Other ETL trigger job is running - so stop this attempt ' ;
SET @ErrorSeverity = 16 ;
-- Note: this is only an INFORMATIONAL message and not an actual error.
RAISERROR (@ERRMSG, @ErrorSeverity, 1) WITH NOWAIT;
RETURN 1;
END ;
ELSE
BEGIN
-- set IsJobRunning to RUNNING
EXEC [[ UPDATE IsJobRunning on Control table ]] ;
END ;
This logic is in both Pipelines.
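Pieced together, the stored procedure might look like the sketch below. The table and procedure names are placeholders I've invented, and resetting the flag back to 0 at the end (or on failure) of each pipeline is assumed:
CREATE PROCEDURE dbo.usp_TryStartEtlJob   -- placeholder name
AS
BEGIN
    DECLARE @ErrMsg NVARCHAR(200), @ErrorSeverity INT;

    -- dbo.EtlControl is a placeholder single-row control table with an IsJobRunning BIT column
    IF (SELECT IsJobRunning FROM dbo.EtlControl) = 1
    BEGIN
        SET @ErrMsg = N'**INFORMATIONAL ONLY** Other ETL trigger job is running - so stop this attempt';
        SET @ErrorSeverity = 16;
        RAISERROR (@ErrMsg, @ErrorSeverity, 1) WITH NOWAIT;   -- dummy error fails the Stored Procedure activity
        RETURN 1;
    END;

    UPDATE dbo.EtlControl SET IsJobRunning = 1;   -- claim the flag and let the pipeline continue
    RETURN 0;
END;

-- At the end of (or on failure of) each pipeline, reset the flag:
-- UPDATE dbo.EtlControl SET IsJobRunning = 0;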

How to make SQLAlchemy issue additional SQL after flushing the current session?

I have some SQL that I'd like SQLAlchemy to issue after it flushes the current session.
So I'm trying to write a Python function that will do the following "at the end of this specific SQLAlchemy session, 1) flush the session, 2) then send this extra SQL to the database as well, 3) then finally commit the session", but only if I call it within that particular session.
I don't want it on all sessions globally, so if I didn't call the function within this session, then don't execute the SQL.
I know SQLAlchemy has a built-in events system, and I played around with it, but I can't figure out how to register an event listener for only the current session, and not all sessions globally. I read the docs, but I'm still not getting it.
I am aware of database triggers, but they won't work for this particular scenario.
I'm using Flask-SQLAlchemy, which uses scoped sessions.
Not sure why it does not work for you: you can register the listener on the session instance itself (rather than on the Session class or sessionmaker), and it then fires only for that session. The sample code below runs as expected:
from sqlalchemy import create_engine, event, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')          # in-memory SQLite, just for the demo
Base = declarative_base()
Session = sessionmaker(bind=engine)

class Stuff(Base):
    __tablename__ = 'stuff'
    id = Column(Integer, primary_key=True)
    name = Column(String)

Base.metadata.create_all(engine)
session = Session()

# The listener is registered on this session instance only, not on all sessions.
@event.listens_for(session, 'after_flush')
def _handle_event(session, context):
    print('>> --- after_flush started ---')
    # on SQLAlchemy 1.4+/2.0, wrap the raw SQL in sqlalchemy.text()
    rows = session.execute("SELECT 1 AS XXX").fetchall()
    print(rows)
    print('>> --- after_flush finished ---')

# create test data
s1 = Stuff(name='uno')
session.add(s1)

print('--- before calling commit ---')
session.commit()
print('--- after calling commit ---')
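To illustrate that the listener is tied to that one session instance, a hypothetical continuation of the sample above:
# A second session from the same factory has no listener registered on it,
# so flushing/committing it does not call _handle_event.
other_session = Session()
other_session.add(Stuff(name='dos'))
other_session.commit()   # no '>> --- after_flush ... ---' output for this session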

Returning rowcount from CLR stored procedure

If I have a stored procedure like this:
CREATE PROCEDURE [dbo].[sp_update_dummy]
AS
BEGIN
update update_dummy set value = value + 1 where id = 1
END
and call it using executeUpdate() (from the standard java.sql library), then the updated row count is returned to the Java program (assuming, of course, that the update statement updates a row in the table).
However if I execute a CLR stored procedure coded like this:
[Microsoft.SqlServer.Server.SqlProcedure]
public static void clr_update_dummy()
{
using (SqlConnection conn = new SqlConnection("context connection=true"))
{
SqlCommand command = new SqlCommand("update update_dummy set value = value + 1 where id = 1", conn);
conn.Open();
command.ExecuteNonQuery();
conn.Close();
}
}
Then the Java program does not get the updated row count (it seems to get a value of -1 returned). This is also what happens if I put SET NOCOUNT ON into the SQL stored procedure.
So it looks to me that a CLR stored procedure acts as if SET NOCOUNT ON is used.
Is there any way to code a CLR stored procedure so that row count can be picked up in the same way it is for a SQL stored procedure? Unfortunately it isn't possible to change the Java program (it is a 3rd party component) to, for example, pick up an OUTPUT parameter. I've looked at SqlContext.Pipe but there is nothing obvious there. Also I'm not sure of the mechanism by which the row count is returned to the executeUpdate procedure.
I can probably create a hack to get around the problem (Java executes a SQL stored procedure which in turn executes a CLR stored procedure for instance) but if possible I'd like to not introduce another layer into the call stack.
If it makes sense to just be turning around and running a piece of SQL from inside your CLR procedure, you probably want to call SqlContext.Pipe.ExecuteAndSend. From its documentation:
In addition to any actual results, other messages and errors are also sent directly to the client.
SqlCommand command = new SqlCommand("update update_dummy set value = value + 1 where id = 1", conn);
conn.Open();
SqlContext.Pipe.ExecuteAndSend(command);
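Put back into the procedure from the question, the revised method would look roughly like this (a sketch, not tested against the 3rd-party caller):
[Microsoft.SqlServer.Server.SqlProcedure]
public static void clr_update_dummy()
{
    using (SqlConnection conn = new SqlConnection("context connection=true"))
    {
        SqlCommand command = new SqlCommand("update update_dummy set value = value + 1 where id = 1", conn);
        conn.Open();
        // ExecuteAndSend streams the statement's results and messages (including the
        // "1 row affected" count) straight back to the caller, which is what
        // executeUpdate() on the Java side reads.
        SqlContext.Pipe.ExecuteAndSend(command);
        conn.Close();
    }
}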

SQL Anywhere, Entity Framework 4 and Transactions

I have a process in my program that uses an Entity Framework 4 EDM. The entity context object contains function imports for calling stored procedures.
The process receives a batch of data from a remote server. The batch can consist of data for any of our tables / data types (each data type is stored in its own table). The batch can also contain data for the same row multiple times. It has to handle this as a single insert (for the first occurrence) and one or more updates (for each subsequent occurrence). The stored procedures therefore implement an upsert operation using the INSERT ... ON EXISTING UPDATE command.
Our code basically determines which stored procedure to call and then calls it using the entity context object's method for that stored procedure. The entire batch has to be done in a single transaction, so we call context.Connection.BeginTransaction() at the beginning of the batch.
There is one data type that has millions of rows. We need to load that data as quickly as possible. I'm implementing logic to import that data type using the SABulkCopy class. This also needs to be part of the single transaction already started. The issue is that I need to pass an SATransaction to the SABulkCopy class's constructor (there is no way to set it using properties) and I don't have an SATransaction. context.Connection.BeginTransaction() returns a DbTransaction. I tried to cast this to an SATransaction without success.
What's the right way to get the SABulkCopy object to join the transaction?
We gave up on the SABulkCopy class. It turns out that it doesn't do a bulk load. It creates an SACommand object that executes an INSERT statement and inserts the rows one at a time. And it does it inefficiently, to boot.
I still needed to get at the SATransaction associated with the DbTransaction returned by context.Connection.BeginTransaction(). I was given some reflection code that does this in response to another question I posted about this:
SATransaction saTransaction = (SATransaction) dbTransaction.GetType()
    .InvokeMember( "StoreTransaction",
        BindingFlags.FlattenHierarchy | BindingFlags.Instance |
        BindingFlags.NonPublic | BindingFlags.GetProperty,
        null, dbTransaction, new object[0] );
The program does what it needs to do. It's unfortunate, though, that Microsoft didn't make the StoreTransaction property of the EntityTransaction class public.
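For completeness, a sketch of how the extracted transaction can be used from the batch code. This assumes the SQL Anywhere ADO.NET provider mirrors the SqlClient pattern (an SACommand constructor taking a connection and a transaction); the table and SQL are placeholders:
// Start the batch transaction through the entity context, as before.
DbTransaction dbTransaction = context.Connection.BeginTransaction();

// The provider-level connection is reachable via the EntityConnection.
SAConnection saConnection =
    (SAConnection) ((EntityConnection) context.Connection).StoreConnection;

// Extract the provider-level transaction with the reflection call shown above.
SATransaction saTransaction = (SATransaction) dbTransaction.GetType()
    .InvokeMember( "StoreTransaction",
        BindingFlags.FlattenHierarchy | BindingFlags.Instance |
        BindingFlags.NonPublic | BindingFlags.GetProperty,
        null, dbTransaction, new object[0] );

// Provider-level work now takes part in the same transaction as the entity context.
using (SACommand command = new SACommand(
        "INSERT INTO big_table (id, val) VALUES (1, 'x')",   // placeholder SQL
        saConnection, saTransaction))
{
    command.ExecuteNonQuery();
}

dbTransaction.Commit();   // commits the entity context work and the provider-level work together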