Identify cause of cascading trigger

DB2
I am getting SQLCODE -724, indicating a cascading trigger at the 17th level. I suppose that some procedure, called by a procedure that is called by a trigger, updates a column that fires the trigger again.
How would I create a monitor to help identify the sequence in which procedures/triggers are being called so I can put an end to this cascading-trigger problem?

You can use the DB2 profiler written by Serge Rielau:
More than a TRACE. Extending the SQL PL Profiler to do tracing https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/tracing?lang=en
Reality Check: SQL PL Profiler and plan explain with actual row counts https://www.ibm.com/developerworks/community/blogs/SQLTips4DB2LUW/entry/sql_pl_profiler_and_plan_explain_with_actual_row_counts23?lang=en
Also, you can add a logging tool to the code, such as log4db2: https://github.com/angoca/log4db2
With these tools you can see what is happening in the code and identify the source of the problem.
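If installing the profiler or a logging framework is not an option, a hand-rolled trace can also narrow down the call chain. Below is a minimal sketch, not log4db2's actual API: the call_trace table and trace_call procedure are invented names, and it relies on DB2 LUW's AUTONOMOUS procedures (9.7 and later) so that trace rows survive even if the cascading transaction rolls back.

-- Illustrative names; run with a non-default statement terminator, e.g. db2 -td@
CREATE TABLE call_trace (
    ts      TIMESTAMP    NOT NULL DEFAULT CURRENT TIMESTAMP,
    caller  VARCHAR(128) NOT NULL,
    message VARCHAR(1024)
)@

CREATE OR REPLACE PROCEDURE trace_call (
    IN p_caller  VARCHAR(128),
    IN p_message VARCHAR(1024)
)
AUTONOMOUS
BEGIN
    -- AUTONOMOUS: the insert commits independently of the caller,
    -- so the trace survives a rollback of the failing statement.
    INSERT INTO call_trace (caller, message)
    VALUES (p_caller, p_message);
END@

Call trace_call at the top of each trigger and procedure in the suspected chain; ordering call_trace by ts then shows the exact sequence that reaches the 17th level.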

Related

Best way to track the progress of a long-running function (from outside) - PostgreSQL 11?

What is the best way to track progress of a long-running function in PostgreSQL 11?
Since every function executes in a single transaction, even if the function writes to some "log" table, no other session/transaction can see that output unless the function completes successfully.
I read about some attempts here but they are from 2010.
https://www.endpointdev.com/blog/2010/04/viewing-postgres-function-progress-from/
Also, this approach looks terribly inconvenient.
As of today what is the best way to track progress?
One approach that I know of is to turn the function into a procedure and then do partial commits in the procedure. But what if I want to return some result set from the function? In that case I cannot turn it into a procedure, right? So... how should I proceed in that case?
Many thanks in advance.
NOTE: The function is written in PL/pgSQL, the most common procedural SQL language available in PostgreSQL.
I don't know that there's a great way to do it built into Postgres yet, but there are a couple of ways to achieve logging that will be visible outside of a function.
1. Use the pg_background extension to run an insert in the background that will be visible outside of the function. This requires compiling and installing the extension.
2. Use dblink to connect to the same database and insert data. This will most likely require setting up some permissions.
Neither option is ideal, but hopefully one can work for you. Converting your function to a procedure may also work, but you won't be able to call the procedure from within a transaction.
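As a rough illustration of the dblink option, here is a minimal sketch. The progress_log table and log_progress helper are invented for the example, and the bare connection string assumes local trust authentication; a real setup will likely need credentials and permissions.

CREATE EXTENSION IF NOT EXISTS dblink;

CREATE TABLE IF NOT EXISTS progress_log (
    ts   timestamptz NOT NULL DEFAULT now(),
    step text
);

CREATE OR REPLACE FUNCTION log_progress(p_step text) RETURNS void AS
$$
BEGIN
    -- dblink opens a second connection, so this INSERT commits
    -- independently of the caller's still-open transaction.
    PERFORM dblink_exec(
        'dbname=' || current_database(),
        format('INSERT INTO progress_log(step) VALUES (%L)', p_step)
    );
END;
$$ LANGUAGE plpgsql;

Call PERFORM log_progress('step 1 done'); at milestones inside the long-running function, then watch SELECT * FROM progress_log ORDER BY ts; from another session while the function is still running.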

Firebird "For Each Row" trigger syntax

There is confusion in the documentation and in what I can find online about support for statement-level and row-level triggers. According to the documentation I've seen, the latest version of Firebird supports both.
Firebird supposedly supports SQL-92/99. The standard approach is to use "for each row" in the trigger SQL; however, this causes an error in Firebird.
Here is my statement level trigger, which works:
CREATE TRIGGER myExampleTrigger FOR myTable
AFTER UPDATE
AS
BEGIN
POST_EVENT 'testEvent';
END;
Here is my row level trigger, which doesn't work:
CREATE TRIGGER myExampleTrigger FOR myTable
AFTER UPDATE
AS
FOR EACH ROW
BEGIN
POST_EVENT 'testEvent';
END;
The statement-level trigger works to post an event for updates on myTable. When I update multiple rows it will only post one event.
What is the syntax for the trigger statement to get it to do a row-level trigger so that I can post an event FOR EACH ROW that is updated?
Firebird does not have statement-level triggers. Just create the trigger as in your first example; it is already a row-level trigger.
You said it posts only one event. It seems you also misunderstood how Firebird events work. The event is delivered a single time, but you can see how many times it was posted from the event counter. Events are posted on commit.
Triggers in Firebird are always row level, never statement level. The documentation (InterBase 6.0 Language Reference, page 82; available from the Firebird website) says:
CREATE TRIGGER defines a new trigger to a database. A trigger is a self-contained program associated with a table or view that automatically performs an action when a row in the table or view is inserted, updated, or deleted.
As Adriano already explained, events are sent on transaction commit. If you post the same event multiple times in a single transaction, only a single event will be posted (with the count in the event).
Events are used to signal other applications, not the database itself (that is what triggers themselves are for), so - afaik - you can't register for, nor determine the count of, an event from within a trigger or stored procedure. The application registers for events; how this is done depends on the programming language and driver.
A lot of the (old) InterBase documentation shows examples using EVENT INIT and EVENT WAIT; however, that applies only to embedded SQL, which requires a preprocessor and is hardly used anymore. With Java and Jaybird you can use FBEventManager to listen for events, with C# and the Firebird .NET provider you can use FbRemoteEvent, and if you use the Firebird C API you need isc_que_events.

Error when saving modified record using LightSwitch on Azure

I am using LightSwitch on Azure.
After I modified a column in a record and clicked the Save button, I got:
"Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries."
When I debug this LightSwitch app with VS 2012 on my dev machine, it works fine: no errors when I modify the same column on the same records and save.
Does anybody in this forum have an idea what could cause this, and how I should work around it?
I suspect the Azure machine doesn't have the same version of Entity Framework as my dev machine, but I could not find EF referenced in either the client or the server project of the LightSwitch solution. So I don't know how I can bring the EF DLL on my machine up to the Azure machine.
Could anybody give me some suggestions on this?
Thanks
Chris
Usually it's a side effect of optimistic concurrency. This article can give you an idea of how it works in LightSwitch:
LightSwitch 2012 Concurrency Enhancements
When it works on the dev machine but not on Azure, I'd guess something is not right in your production database.
You can also take a look at Entity Framework: affected an unexpected number of rows (0).
With INSTEAD OF insert/update triggers, SQL Server sometimes does not report back a scope identity for each newly inserted/updated row, so EF cannot determine the number of affected rows.
Normally, any insert into a table with an identity column is immediately followed by a select of SCOPE_IDENTITY() to populate the associated value in Entity Framework. The INSTEAD OF trigger causes this second step to be missed, which leads to the "0 rows affected" error.
You can change your trigger to an AFTER trigger, or tweak it by adding the following line at the end of it:
SELECT [Id] FROM [dbo].[TableXXX] WHERE @@ROWCOUNT > 0 AND [Id] = SCOPE_IDENTITY()
Find more details in this or this thread.
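To make the workaround concrete, here is a hedged sketch of an INSTEAD OF INSERT trigger with that line appended; dbo.TableXXX, its identity column [Id], and SomeColumn are placeholder names:

CREATE TRIGGER trg_TableXXX_InsteadOfInsert
ON dbo.TableXXX
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Perform the insert that the trigger intercepted.
    INSERT INTO dbo.TableXXX (SomeColumn)
    SELECT SomeColumn FROM inserted;

    -- Hand the new identity back so EF sees the row it expects;
    -- without this, EF raises "affected an unexpected number of rows (0)".
    SELECT [Id] FROM dbo.TableXXX
    WHERE @@ROWCOUNT > 0 AND [Id] = SCOPE_IDENTITY();
END;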

How to see variable values in SQL Profiler when Trigger fires?

I am creating an "After Update" Trigger on a SQL Server 2008 table. The trigger fires just fine but one of the values it's updating in another table isn't correct. I am looking at a trace in SQL Profiler, but I can't see my variable's values in there.
I read this other question and so added the RPC: Completed Event to my trace, but there were not instances of that event in my trace for some reason. That is, I see it at other places in the trace but not where my trigger is firing.
Just to (hopefully) be clear, my trigger is EXECUTING an SP like this:
EXEC SP_UpdateSomeStuff @variable1, @variable2
... and that's all that I see in the trace. What I wish to see is:
EXEC SP_UpdateSomeStuff @variable1 = 111, @variable2 = 222
... but I can't figure out which events to add to get this. Thanks for any ideas.
"RPC" stands for "Remote Procedure Call" -- generally, queries submitted "from outside" to SQL Server. Trigger events are anything but outside calls, which should be why you are not seeing them in Profiler.
I suspect that you won't be able to see your parameter values via SQL Profiler. Can you temporarily put in debugging code (INSERT INTO DebugTable VALUES (@variable1, ...)) so that the values you are working with get logged somewhere?
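A minimal sketch of that debugging idea, using the question's parameter names; DebugTable and its column types are assumptions:

CREATE TABLE dbo.DebugTable (
    LoggedAt  datetime2 NOT NULL DEFAULT SYSDATETIME(),
    Variable1 int,
    Variable2 int
);

-- Inside the trigger, just before the EXEC:
INSERT INTO dbo.DebugTable (Variable1, Variable2)
VALUES (@variable1, @variable2);

EXEC SP_UpdateSomeStuff @variable1, @variable2;

After the trigger fires, SELECT * FROM dbo.DebugTable ORDER BY LoggedAt shows the values that were actually passed, independent of what Profiler captures.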

Eclipse BIRT and Oracle: Need to set role before running report

Is it possible to set a database role before running a report? I have a number of databases, each containing a number of schemas with the same set of tables, where each schema has a number of roles to control read, write, data management and so on. None of these are default roles.
In SQL*Plus or TOAD I can do SET ROLE <role> before running a select statement. I would like to do the same in BIRT.
It may be possible to do this using the afterOpen event for the ODA Data Source, but I have not found any examples on how to get and use the native connection in JavaScript.
I am not allowed to add or change anything on the server end.
You can make an additional call to the database in the afterOpen event of the Data Source. You can use a JavaScript or Java event handler to execute the SET ROLE statement, or to call a stored procedure that executes it for you. This happens after the initial DB connection is made, but before the Data Set query runs. It will be a little tricky to use the Data Source connection to make that call, however, and I don't have the code right now to provide as an example.
Another way is to create a stored procedure Data Set that executes the desired command, and have that execute first. Drag and drop the Data Set into the report design and make it invisible; it will run before any other queries. Not the cleanest solution, but easy to do.
Hope that helps
Le Birt Expert
You can write a logon trigger and do a SET ROLE in this trigger (PL/SQL: DBMS_SESSION.SET_ROLE). You can determine the username, OS user, program, and machine of the user who wants to log in.
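A rough sketch of that idea follows; the trigger name, BIRT_USER, and READ_ROLE are invented, creating a database-level logon trigger requires DBA privileges, and Oracle restricts the contexts in which DBMS_SESSION.SET_ROLE may run, so verify the behavior in your version before relying on it.

CREATE OR REPLACE TRIGGER set_reporting_role
AFTER LOGON ON DATABASE
BEGIN
    -- Only touch sessions opened by the reporting user (illustrative name).
    IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'BIRT_USER' THEN
        DBMS_SESSION.SET_ROLE('READ_ROLE');
    END IF;
END;
/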
The approach of using a stored procedure to set the role won't work, at least not on Apache Derby. Reason: the lifetime of the SET ROLE is limited to the execution of the procedure itself; after returning from the procedure, the role is the same as it was before the procedure was called, i.e. the report executes as if no role had ever been set.