Does every procedure call create a recursive session in Oracle? - oracle10g

I'm trying to solve a case of constant ORA-00018 "maximum number of sessions exceeded" crashes, even though the SESSIONS parameter is set to 1500. During those crashes the number of v$session entries is sometimes 50% off from the current_utilization value of the sessions resource in v$resource_limit. So we suspect a lot of recursive sessions are being created, and that quickly brings the DB server down.

I know that triggers can cause a lot of recursive sessions. Does every procedure call have the same effect, or is there a specific kind of procedure that generates them? I tried to test by checking the current_utilization value before and after running a simple procedure and didn't see any difference, maybe because my test procedure is too simple and too fast to notice.

I've read this article http://tech.e2sn.com/oracle/oracle-internals-and-architecture/recursive-sessions-and-ora-00018-maximum-number-of-sessions-exceeded but it's not clear to me whether every procedure runs in a different session. I'm using Oracle 10g.
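For what it's worth, a quick way to watch the two numbers side by side while this is happening (a diagnostic sketch; it assumes SELECT access to the V$ views):

-- high-water mark and current usage of the sessions/processes resources
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('sessions', 'processes');

-- entries actually visible in v$session, split by type
SELECT type, COUNT(*)
FROM   v$session
GROUP  BY type;

Comparing current_utilization for 'sessions' against the v$session count shows the gap described above.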

Related

How to create a warning message in a trigger?

Is it possible to create a warning message in a trigger in Firebird 2.5?
I know I can create an exception message which will stop the user from saving the record changes, but in this instance I don't mind if the user continues.
Could I call a procedure that generates the message?
There is no mechanism in Firebird to produce warnings in PSQL code; you can only raise exceptions, which in a trigger will cause the effect of the statement that fired the trigger to be undone.
In short, this is not possible.
There are workarounds possible, but they would require an 'external' protocol, for example inserting the warning message into a global temporary table and requiring the calling code to explicitly select from that temporary table after execution.
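As an illustration of that workaround, a minimal sketch (table, trigger, and column names are made up):

/* warning "mailbox", emptied automatically at commit */
CREATE GLOBAL TEMPORARY TABLE trigger_warnings (
    msg VARCHAR(200)
) ON COMMIT DELETE ROWS;

SET TERM ^ ;
CREATE TRIGGER orders_warn FOR orders
ACTIVE AFTER INSERT POSITION 0
AS
BEGIN
  IF (NEW.qty > 1000) THEN
    INSERT INTO trigger_warnings (msg)
    VALUES ('Unusually large quantity: ' || CAST(NEW.qty AS VARCHAR(20)));
END^
SET TERM ; ^

After its INSERT, the calling code would SELECT msg FROM trigger_warnings to pick up any warnings and decide for itself how to proceed.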
The SQL model does not provide a way to put a query on pause and then wait for extra input from the client to either resume it or fail it. SQL is not a user-interactive service and there are no confirmation dialogs. You have to rethink your application design.
One possible avenue, nominally staying within the 2-tier client-server framework, would be to create temporary tables for all the data you want to save (for example transaction-scope GTTs) and then have TWO stored procedures. One SP would do the sanity checking and return a list of warnings, if any. The other SP would then dump the data from the GTTs into the main, persistent tables without doing those checks.
Your client app would select warnings from the check-SP first; if it returns any, show them to the user, then either call the save-SP and commit, or roll back without calling the save-SP.
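A very rough sketch of that two-procedure protocol (all table, column, and procedure names are made up):

CREATE GLOBAL TEMPORARY TABLE pending_orders (
    id  INTEGER,
    qty INTEGER
) ON COMMIT DELETE ROWS;

SET TERM ^ ;
/* selectable procedure: one output row per warning */
CREATE PROCEDURE check_orders
RETURNS (warning_text VARCHAR(200))
AS
BEGIN
  FOR SELECT 'Large quantity on order ' || CAST(id AS VARCHAR(20))
      FROM pending_orders
      WHERE qty > 1000
      INTO :warning_text
  DO SUSPEND;
END^

/* no checks here: just move the data into the persistent table */
CREATE PROCEDURE save_orders
AS
BEGIN
  INSERT INTO orders (id, qty)
  SELECT id, qty FROM pending_orders;
END^
SET TERM ; ^

The client fills pending_orders, runs SELECT warning_text FROM check_orders and shows any rows it gets back, then either runs EXECUTE PROCEDURE save_orders and commits, or rolls back.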
This is abusing the C/S idea, so there would be dragons. First of all, you would have to have several GTTs and two SPs for E-V-E-R-Y pausable data-saving operation in your app. And that can be a lot.
Also, notice that the database data may change after you call the check-SP and before you call the save-SP, because some OTHER application running elsewhere could be changing and committing data during that pause. Especially if your transaction is of the READ COMMITTED kind, but with a SNAPSHOT transaction too.
A better approach would be to drop the C/S scheme and go to a 3-tier model, AKA multi-tier, AKA "Application Server". That way your client app sends the "briefcase" of data to the app server; it is the app server (not SQL triggers) that does all the data validation, and it then saves the data to the storage backend, SQL or any other.
There would, of course, still be the problem that the data could have been changed by other users while you paused one user and waited for him to read and decide. But you would have more flexibility for data reconciliation in an app server than you would with plain SQL.

Executing sp_cursoropen causes SSMS to report a severe error

Our program executes a stored procedure via sp_cursoropen. I've managed to extract the exact call via a trace, and when running the code directly in SSMS, we get a severe error:
declare @p1 int
set @p1=0
declare @p3 int
set @p3=16388
declare @p4 int
set @p4=8196
declare @p5 int
set @p5=0
exec sp_cursoropen @p1 output,N' EXEC [dbo].[EventSearchSP] ''hip'' ',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5
When running the above code we get the following error:
Executing SQL directly; no cursor.
Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
This code used to work, and only fairly recently has it started failing. It fails across all databases on our instance (over 100).
When running the code on other servers, the procedure executes correctly and it does not fail.
The actual data returned by the stored procedure is fairly generic, a single column of numbers. I don't believe it's the stored procedure itself, as this problem happens when executing any stored procedure from the client program.
I'm fast running out of ideas on how to resolve the issue. Does anyone know what could cause this error to suddenly start where it used to work fine?
We've found the problem, and if anyone knows why it is a problem I would love to know, so I can find a way around it.
We recently turned on SET XACT_ABORT ON for connections to the SQL Server, after research suggested it is generally accepted as best practice. However, with this setting on we get the above error; after turning it off again, the procedure began working as before.
Unfortunately we turned XACT_ABORT ON in order to solve another issue, which we now need to find another solution to!
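If the rest of the workload still needs XACT_ABORT ON, one thing worth trying is to switch it off only around the cursor call, since SET XACT_ABORT can be changed at any point in a session (a hedged sketch based on the trace above; whether this actually avoids the severe error would need to be verified on the affected instance):

SET XACT_ABORT OFF;   -- relax the setting only for this call

declare @p1 int, @p3 int, @p4 int, @p5 int
set @p1=0
set @p3=16388
set @p4=8196
set @p5=0
exec sp_cursoropen @p1 output,N' EXEC [dbo].[EventSearchSP] ''hip'' ',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5

SET XACT_ABORT ON;    -- restore the preferred default afterwards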

Best way to track the progress of a long-running function (from outside) - PostgreSQL 11?

What is the best way to track the progress of a long-running function in PostgreSQL 11?
Since every function executes within a single transaction, even if the function writes to some "log" table, no other session/transaction can see that output until the function's transaction commits successfully.
I read about some attempts here but they are from 2010.
https://www.endpointdev.com/blog/2010/04/viewing-postgres-function-progress-from/
Also, this approach looks terribly inconvenient.
As of today what is the best way to track progress?
One approach that I know of is to turn the function into a procedure and then do partial commits in the procedure. But what if I want to return some result set from the function? In that case I cannot turn it into a procedure, right? So how should I proceed in that case?
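For reference, the procedure-with-intermediate-commits idea would look roughly like this in PostgreSQL 11 (a minimal sketch; progress_log and long_job are made-up names):

CREATE TABLE progress_log (ts timestamptz DEFAULT now(), step text);

CREATE PROCEDURE long_job()
LANGUAGE plpgsql
AS $$
DECLARE
  i int;
BEGIN
  FOR i IN 1..100 LOOP
    -- ... one batch of real work here ...
    INSERT INTO progress_log (step) VALUES ('finished batch ' || i);
    COMMIT;  -- allowed in a procedure; makes the log row visible to other sessions
  END LOOP;
END;
$$;

CALL long_job();  -- must not be run inside an explicit transaction block, or COMMIT fails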
Many thanks in advance.
NOTE: The function is written in PL/pgSQL, the most common procedural SQL language available in PostgreSQL.
I don't know of a great way to do it built into Postgres yet, but there are a couple of ways to achieve logging that will be visible outside of a function.
You can use the pg_background extension to run an insert in the background that will be visible outside of the function. This requires compiling and installing this extension.
Use dblink to connect to the same database and insert data. This will most likely require setting up some permissions.
Neither option is ideal, but hopefully one can work for you. Converting your function to a procedure may also work, but you won't be able to call the procedure from within a transaction.
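As a sketch of the dblink route (progress_log is a made-up table; dblink_exec opens a second connection back to the same database, so its INSERTs commit independently of the calling function's transaction; the connection string may need host/user/password details depending on your authentication setup):

CREATE EXTENSION IF NOT EXISTS dblink;
CREATE TABLE IF NOT EXISTS progress_log (ts timestamptz DEFAULT now(), step text);

CREATE OR REPLACE FUNCTION long_running() RETURNS bigint
LANGUAGE plpgsql
AS $$
DECLARE
  i int;
BEGIN
  FOR i IN 1..100 LOOP
    -- ... one slice of the real work here ...
    PERFORM dblink_exec(
      'dbname=' || current_database(),
      format('INSERT INTO progress_log (step) VALUES (%L)', 'finished batch ' || i)
    );
  END LOOP;
  RETURN 42;  -- the function can still return its result as before
END;
$$;

-- from another session, while long_running() is still executing:
-- SELECT * FROM progress_log ORDER BY ts;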

Processing a row externally that fired a trigger

I'm working on a PostgreSQL 9.3 database on an Ubuntu 14 server.
I'm trying to write a trigger function (AFTER ... FOR EACH ROW) that launches an external process, and that process needs to access the row that fired the trigger.
My problem:
Even though I can run queries on the table, including the new row, inside the trigger, the external process does not see the row (while the trigger function is still running).
Is there a way to manage that?
I thought about starting some kind of asynchronous function call to give the trigger some time to terminate first, but that's of course really ugly.
Also, I read about notifiers and listeners, but that would require some refactoring of my existing code and an additional listener process, which is what I was trying to avoid by using a trigger. (I'm also afraid of new problems that may appear down that road.)
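For reference, the listen/notify route mentioned above would look roughly like this (a sketch with made-up table, column, and channel names; the external process would keep its own connection, run LISTEN row_inserted, and query the table only after the notification arrives, i.e. after the inserting transaction has committed):

CREATE OR REPLACE FUNCTION notify_new_row() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
  -- NOTIFY is delivered only when the surrounding transaction commits,
  -- which is exactly when the new row becomes visible to other sessions
  PERFORM pg_notify('row_inserted', NEW.id::text);
  RETURN NEW;
END;
$$;

CREATE TRIGGER my_table_notify
AFTER INSERT ON my_table
FOR EACH ROW EXECUTE PROCEDURE notify_new_row();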
Any more thoughts?
Robin

Informing user about the progress of sql procedure execution in MS Access

In my application I have SQL Server 2008 on the backend and MS Access 2010 on the front end. I have long-running procedures that are executed through MS Access. I need to show a message box to the user in the MS Access front end while the SQL procedure is running. Please give me an idea of how I can accomplish that. Thanks in advance!
You can't use MS Access for asynchronous calls - even if you had a way to set a file-based counter from your stored procedure, you wouldn't be able to track it from inside Access while the stored procedure is running.
The only two ways I see that could do something like this are to:

1. change your stored procedure to handle some kind of paging counter - using a loop in VBA, and passing, tracking and incrementing a first pointer and last pointer to your stored procedure - then reporting on progress (see the sketch below), or
2. find a way to shell-nowait out to call the stored procedure and then monitor and report on the progress in VBA.
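A rough T-SQL sketch of the first option (procedure, table, and column names are all made up; the VBA side would call it repeatedly with an increasing row range and update its message box between calls):

-- process the work in slices so the caller can report progress between calls
CREATE PROCEDURE dbo.ProcessEventsBatch
    @FirstRow  int,
    @LastRow   int,
    @TotalRows int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT @TotalRows = COUNT(*) FROM dbo.EventsToProcess;

    ;WITH numbered AS (
        SELECT *, ROW_NUMBER() OVER (ORDER BY EventId) AS rn
        FROM dbo.EventsToProcess
    )
    UPDATE numbered
    SET    Processed = 1
    WHERE  rn BETWEEN @FirstRow AND @LastRow;
END

Access would then loop in VBA, calling the procedure for rows 1-1000, 1001-2000, and so on, updating the user between batches.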