I have this scheme: two tables, A and B, in SQL Server, with triggers TA and TB. Changing table A runs trigger TA, which changes table B; changing table B runs trigger TB, which changes table A. So when I change A, TA runs and consequently TB runs as well, but the code in TB raises an error. The triggers run in a cycle. In my scheme there must be no cycle; the triggers should run in one direction only.
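For illustration, a minimal sketch of the setup described, assuming hypothetical column names (a_id, a_value, b_value) that stand in for the real ones:

-- Sketch of the two triggers; column names are hypothetical.
CREATE TRIGGER TA ON A AFTER UPDATE
AS
    UPDATE B
    SET b_value = i.a_value
    FROM B JOIN inserted AS i ON B.a_id = i.id;   -- this UPDATE fires TB
GO
CREATE TRIGGER TB ON B AFTER UPDATE
AS
    UPDATE A
    SET a_value = i.b_value
    FROM A JOIN inserted AS i ON A.id = i.a_id;   -- and this UPDATE fires TA again
GO

With the default 'nested triggers' server setting enabled, the UPDATE inside TA fires TB, whose UPDATE fires TA again, which is exactly the cycle described.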
How can I organize it?
Best Regards
Temur
Given....
CREATE PROCEDURE NotaRandomUpdate (IN p_table1_id INT, IN p_new_value INT)
LANGUAGE SQL
BEGIN
    -- both statements run in the same unit of work
    UPDATE Table1
    SET field1 = p_new_value
    WHERE id = p_table1_id;

    INSERT INTO Table2 VALUES (p_new_value);

    COMMIT;
END
In the above (very) simplified situation, if there are 2 separate triggers, one on each of Table1 & Table2, which trigger would execute first?
I'm looking to take the combined result of the full transaction (with information not referenced in the transaction itself) and save that combined result elsewhere - so I need to bring the data from the join of Table1 => Table2 out.
If Table1-Trigger executes 1st, then I'm faced with not having data needed (at that instance) from Table2.
If Table2-Trigger executes 1st, then I'm faced with not having data needed (at that instance) from Table1.
I presume the triggers only execute during/after the commit phase... or are they executed immediately when the Table1 update & Table2 insert statements are executed, and thus the overall database updates are wrapped up inside the full transaction?
This is due to happen in a DB2 database.
Is a solution possible?
Or am I faced with running a "some time later" activity (like pre-EOD) which executes a query joining the 2 tables after all relevant updates (for that day) have been completed - providing, of course, that each of Table1 & Table2 has some timestamp columns that can be tracked.
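If it comes to that, a rough sketch of such an end-of-day query might look like this; the join key and the timestamp columns (updated_ts, inserted_ts) are assumptions, not taken from the schema above:

-- Hypothetical end-of-day reconciliation joining the two tables.
SELECT t1.id, t1.field1, t2.some_value
FROM Table1 t1
JOIN Table2 t2
    ON t2.table1_id = t1.id                  -- assumed join key
WHERE DATE(t1.updated_ts) = CURRENT DATE     -- assumed tracking columns
  AND DATE(t2.inserted_ts) = CURRENT DATE;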
Any relevant triggers for Table1 will fire before any relevant triggers on Table2, assuming no rollback.
Db2 triggers execute as part of the Insert, Update, or Delete statement that fires them, whether they are per-row or per-statement. Hence the statements inside the trigger body will only run (assuming the trigger is valid) during execution of the triggering statement. Commit will not invoke trigger logic.
Each of your Insert/Update/Delete statements fires any relevant valid triggers while that statement executes, before execution of the next statement begins.
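As an illustration, here is a minimal per-row AFTER UPDATE trigger in Db2 SQL; the trigger name and the AuditLog table are made up for the example:

-- Hypothetical trigger on Table1; it runs while the UPDATE statement in the
-- procedure executes, i.e. before the following INSERT INTO Table2 has run.
CREATE TRIGGER table1_after_upd
    AFTER UPDATE ON Table1
    REFERENCING NEW AS n
    FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
    INSERT INTO AuditLog (table1_id, new_value, changed_at)
    VALUES (n.id, n.field1, CURRENT TIMESTAMP);
END

So at the moment this trigger fires, the row the procedure is about to insert into Table2 does not exist yet, which is exactly the ordering problem described in the question.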
My Postgres version is 9.6
I tried today to drop an index from my DB with this command:
drop index index_name;
And it caused a lot of locks - the whole application was stuck until I killed all the sessions of the drop (why was it divided into several sessions?).
When I checked the locks I saw that almost all the blocked sessions execute that query:
SELECT a.attname, format_type(a.atttypid, a.atttypmod),
pg_get_expr(d.adbin, d.adrelid), a.attnotnull, a.atttypid, a.atttypmod
FROM
pg_attribute a LEFT JOIN pg_attrdef d
ON a.attrelid = d.adrelid AND a.attnum = d.adnum
WHERE a.attrelid = <index_table_name>::regclass
AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
Does it make sense that this would block system actions?
So I decided to drop the index with the concurrently option to prevent locks.
drop index concurrently index_name;
I am executing it now from pgAdmin (because you can't run it inside a normal transaction).
It has been running for over 20 minutes and hasn't finished yet. The index size is about 20 MB.
And when I check the DB for locks I see that there is a select query on that table, and that it is blocking the drop command.
But when I took this select and executed it in another session, it was very fast (2-3 seconds).
So why is it blocking my drop? Is there another option to do this? Maybe disable the index instead?
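For reference, on 9.6 the blocking sessions can be listed with pg_blocking_pids(); one possible query (the filter is just one way to do it):

-- For every session that is waiting, show which PIDs block it and what it runs.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;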
drop index and drop index concurrently are usually very fast commands, but both of them, like all DDL commands, require exclusive access to the table.
They differ only in how they try to achieve this exclusive access. A plain drop index simply requests an exclusive lock on the table. This blocks all queries (even selects) that try to use the table after the start of the drop query, and it keeps blocking until it gets the exclusive lock - that is, until all transactions touching the table in any way that started before the drop command have finished and the transaction with the drop is committed. This explains why your application stopped working.
The concurrently version also needs brief exclusive access, but it works differently: it does not block the table; instead it waits until no other query is touching it and then does its (usually brief) work. But if the table is constantly busy, it may never find such a moment and will wait indefinitely. Also, I suppose it simply retries locking the table every X milliseconds until it succeeds, so a later parallel execution can be luckier and finish faster.
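If blocking the whole application is the main worry, a lock timeout can make the plain version fail fast instead of queueing behind (and in front of) everything else; a minimal sketch:

-- Give up after 5 seconds instead of stalling every later query on the table.
SET lock_timeout = '5s';
DROP INDEX index_name;
RESET lock_timeout;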
If you see multiple simultaneous sessions trying to drop an index, and you do not expect that, then you have a bug in your application. The database would never do this on its own.
I am trying to dynamically set my SSIS Default Buffer Max Rows property by running the following query in an Execute SQL Task on the PreExecute event of a Data Flow Task:
SELECT
SUM (max_length) [row_length]
FROM sys.tables t
JOIN sys.columns c
ON t.object_id=c.object_id
JOIN sys.schemas s
ON t.schema_id=s.schema_id
WHERE t.name = 'TableName'
AND s.name = 'SchemaName'
I am attempting to be clever and use the name of my Data Flow Task (which is the same as my Destination table name) as a parameter in my Execute SQL Task by referencing the System::TaskName variable.
The problem is that everything seems to validate correctly, but when it comes to executing the process, the PreExecute event fails.
I have since found out that this is because System::TaskName is not available in anything other than the Execute phase.
Are there any alternative variables / methods that I can use other than dumping a load of script tasks that manually change a variable into my flow?
Thanks,
Daniel
Found the answer:
I used System::SourceName instead.
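For example, the row-length query from the question can be parameterised in the Execute SQL Task inside the PreExecute event handler, mapping System::SourceName to the single ? placeholder; a sketch assuming an OLE DB connection:

-- SQLStatementSource of the Execute SQL Task; ? is bound to System::SourceName.
SELECT SUM(c.max_length) AS row_length
FROM sys.tables t
JOIN sys.columns c ON t.object_id = c.object_id
JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE t.name = ?
  AND s.name = 'SchemaName';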
Daniel
Well I've been using #temp tables in standard T-SQL coding for years and thought I understood them.
However, I've been dragged into a project based in MS Access, utilizing pass-through queries, and found something that has really got me puzzled.
Though maybe it's the inner workings of Access that have me fooled!?
Here we go: under normal usage, I understand that if I create a temp table in a sproc, its scope ends with the end of the sproc, and the table is dropped by default.
In the Access example, I found it was possible to do this in one Query:
select top(10) * into #myTemp from dbo.myTable
And then this in second separate query:
select * from #myTemp
How is this possible?
If a temp table dies with the current session, does this mean that Access keeps a single session open, and uses that session for all queries executed?
Or has my fundamental understanding of scope been wrong all this time?
Hope someone out there can help clarify what is occurring under the hood!?
Many Thanks
I found this answer to a somewhat similar question:
A temp table is stored in tempdb until the connection is dropped (or, in the case of global temp tables, when the last connection using it is dropped). You can also (and it is good practice to do so) manually drop the table when you are finished using it, with a drop table statement.
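For the "drop it yourself" part, a minimal T-SQL sketch that can be sent as one more pass-through query on the same connection:

-- Drop the temp table explicitly once it is no longer needed.
IF OBJECT_ID('tempdb..#myTemp') IS NOT NULL
    DROP TABLE #myTemp;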
I hope this helps out.
Could you tell me why this query works in pgAdmin, but doesn't when run from software using ODBC:
CREATE TEMP TABLE temp296 WITH (OIDS) ON COMMIT DROP AS
SELECT age_group AS a, male AS m, mode AS t, AVG(speed) AS avg_speed
FROM person JOIN info ON person.ppid = info.ppid
WHERE info.mode = 2
GROUP BY age_group, male, mode;

SELECT age_group, male, mode,
CASE
    WHEN age_group = 1 AND male = 0 THEN (info_dist_km / (SELECT avg_speed FROM temp296 WHERE a = 1 AND m = 0)) * 60
    ELSE 0
END AS info_durn_min
FROM person JOIN info ON person.ppid = info.ppid
WHERE info.mode IN (7) AND info.info_dist_km > 2;
I got "42P01: ERROR: relation "temp296" does not exist".
I have also tried wrapping it in "BEGIN; [...] COMMIT;" - that gives "HY010: The cursor is open".
PostgreSQL 9.0.10, compiled by Visual C++ build 1500, 64-bit
psqlODBC 09.01.0200
Windows 7 x64
I think the reason it did not work for you is that, by default, ODBC works in autocommit mode. If you executed your statements serially, the very first statement
CREATE TEMP TABLE temp296 ON COMMIT DROP ... ;
must have autocommitted after finishing, and thus dropped your temp table.
Unfortunately, ODBC does not directly support statements like BEGIN TRANSACTION; ... COMMIT; for handling transactions.
Instead, you can disable auto-commit using SQLSetConnectAttr function like this:
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF, 0);
But, after you do that, you must remember to commit any change by using SQLEndTran like this:
SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);
While the WITH approach has worked for you as a workaround, it is worth noting that using transactions appropriately is faster than running in autocommit mode.
For example, if you need to insert many rows into a table (thousands or millions), using transactions can be hundreds or even thousands of times faster than autocommit.
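For completeness, the WITH workaround mentioned above would fold the temp table into a common table expression in the same statement, roughly like this (column names taken from the question):

WITH temp296 AS (
    SELECT age_group AS a, male AS m, mode AS t, AVG(speed) AS avg_speed
    FROM person JOIN info ON person.ppid = info.ppid
    WHERE info.mode = 2
    GROUP BY age_group, male, mode
)
SELECT age_group, male, mode,
       CASE
           WHEN age_group = 1 AND male = 0
               THEN (info_dist_km / (SELECT avg_speed FROM temp296 WHERE a = 1 AND m = 0)) * 60
           ELSE 0
       END AS info_durn_min
FROM person JOIN info ON person.ppid = info.ppid
WHERE info.mode IN (7) AND info.info_dist_km > 2;

Because it is a single statement, it does not depend on the temp table surviving an autocommit.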
It is not uncommon for temporary tables not to be available via SQLPrepare/SQLExecute in ODBC, i.e., on prepared statements; MS SQL Server, for example, behaves this way. The solution is usually to use SQLExecDirect.