oracle 12c cursor_sharing alter multiple sessions - oracle12c

Question:
Is there an equivalent to dbms_system.set_int_param_in_session that works for init parameters whose value is a character string rather than an integer or a boolean?
Backstory:
I've got a problem with a 3rd party application whose APIs don't use bind variables. This causes high CPU usage and slow performance because the application issues the same SQL over and over (hard parsing). I've discovered that setting the cursor_sharing parameter to FORCE improves performance; however, there are security issues with doing that. The 3rd party application maintains many sessions (~30-50), so my current approach is to set cursor_sharing to FORCE while the 3rd party application is doing its thing and then set it back to EXACT when it is done. Kludgy? Yes, very. We did some research and found the dbms_system.set_int_param_in_session procedure, which seems like it would be an adequate solution except that it only works with init parameters of data type integer. Is there an equivalent for parameters with values of data type string?
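For context, the current kludge amounts to something like the following, run manually around the application's window (a sketch assuming the change is made system-wide, since the application holds many sessions):
-- before the 3rd party application starts its work:
ALTER SYSTEM SET cursor_sharing = FORCE;
-- after it is done:
ALTER SYSTEM SET cursor_sharing = EXACT;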

Though I'm not well versed in dbms_system, based on the info available for the dbms_system package I tried to use add_parameter_value at the session level and ended up with the following error:
SQL> set serveroutput on
DECLARE
  sid_value VARCHAR2(10);
BEGIN
  dbms_system.get_env('ORACLE_SID', sid_value);
  dbms_output.put_line(sid_value);
  dbms_system.add_parameter_value('cursor_sharing', 'EXACT', 'MEMORY', sid_value, 4);
END;
/
mydb
DECLARE
*
ERROR at line 1:
ORA-20004: cursor_sharing is not a list parameter
SQL>
So I tried the following trigger-based approach, which sets cursor_sharing to FORCE only for the user SCOTT (change that as per your requirement).
SQL> create or replace trigger csf
after logon
on database
begin
  if user = 'SCOTT' then
    execute immediate 'alter session set cursor_sharing=force';
  else
    execute immediate 'alter session set cursor_sharing=exact';
  end if;
end;
/
Trigger created.
SQL>
SQL>
SQL> conn scott/tiger
Connected.
SQL> show parameter cursor_sharing
NAME TYPE VALUE
--------------------- --------- ---------------
cursor_sharing string FORCE
SQL>
SQL> conn system/manager
Connected.
SQL> show parameter cursor_sharing
NAME TYPE VALUE
--------------------- --------- ---------------
cursor_sharing string EXACT
SQL>
Hope this helps you avoid having to set the value back to the original afterwards.

Related

kill the long running queries automatically

I want to automatically kill queries that have been running for more than 2 hours.
I tried creating a trigger like the one below:
create or replace function stop_query()
RETURNS trigger
language plpgsql
as $$
begin
with pid_tbl as
(
SELECT
pid
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '120 minutes';
)
select * from pid_tbl;
SELECT pg_cancel_backend(var_pid);
end;$$
CREATE TRIGGER stop_query
FOR EACH ROW EXECUTE FUNCTION stop_query();
Please advise me how I can achieve this. Is there any way to achieve it without writing a trigger function?
You don't need this trigger at all. As I mentioned in the comment, it should be enough for you to run one of these queries:
SET LOCAL statement_timeout='2 h';--applies only until the end of the current transaction within the current session
SET SESSION statement_timeout='2 h';--only in the current session/connection
ALTER ROLE your_user_name SET statement_timeout='2 h';--all new sessions of this user
ALTER DATABASE your_db_name SET statement_timeout='2 h';--all new sessions on this db
ALTER SYSTEM SET statement_timeout='2 h';--all new sessions on all dbs on this system
They all set the statement_timeout setting, which is 0 by default (meaning "no limit"), to '2 h' (which simply stands for "2 hours"). It's best to apply this only to the specific context where it's required, e.g. for a specific user that tends to run queries you don't want hanging for too long.
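For example, a minimal sketch of scoping the limit to a single transaction with SET LOCAL (pg_sleep merely stands in for a long-running query):
BEGIN;
SET LOCAL statement_timeout = '2 h';
-- every statement in this transaction is now cancelled once it runs past 2 hours;
-- this 3-hour sleep would be aborted with a timeout error:
SELECT pg_sleep(3 * 60 * 60);
COMMIT;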
Documentation:
statement_timeout (integer)
Abort any statement that takes more than the specified amount of time. If log_min_error_statement is set to ERROR or lower, the statement that timed out will also be logged. If this value is specified without units, it is taken as milliseconds. A value of zero (the default) disables the timeout.
The timeout is measured from the time a command arrives at the server until it is completed by the server. If multiple SQL statements appear in a single simple-Query message, the timeout is applied to each statement separately. (PostgreSQL versions before 13 usually treated the timeout as applying to the whole query string.) In extended query protocol, the timeout starts running when any query-related message (Parse, Bind, Execute, Describe) arrives, and it is canceled by completion of an Execute or Sync message.
Setting statement_timeout in postgresql.conf is not recommended because it would affect all sessions.
If you try to use unsupported units, you'll get a hint with your error:
ERROR: invalid value for parameter "statement_timeout": "2 hours"
HINT: Valid units for this parameter are "us", "ms", "s", "min", "h", and "d".
Which are microseconds, milliseconds, seconds, minutes, hours and days respectively.

INSERT OPENQUERY timeout

I'm trying to execute an insert query to a linked server in SQL Server.
For that I'm using INSERT INTO OPENQUERY statement.
The linked server is an Apache HIVE using Cloudera ODBC Provider.
The insert operation takes around 1 minute in my setup when performed from HIVE client.
However, SQL INSERT always times out after 30 seconds.
I set the Query Timeout parameter to 0, but it does not seem to affect the INSERT statement; it works fine for SELECT statements that take longer.
Is this a known limitation?
Is there a way to change the timeout for the insert statement when using OPENQUERY?
EDIT
I would like to clarify the setup I'm working with.
----------                      ----------------------    ---------------
| MS SQL | => Linked Server => | Hive ODBC Provider | => | Hive Server |
----------                      ----------------------    ---------------
In Hive, I have a table called calc_result where I would like to periodically store calculation results from the SQL server. For example, I try to insert using a query like this.
insert openquery(HIVE, 'select timestamp timestamp , tag tag, value value from calc_result')
values('2019-04-22 11:50:41', 'test',2.0)
The insert operation is captured correctly by HIVE server and a MapReduce job starts. However, the job will be killed after 30 seconds due to timeout.
The SQL server will show the below error message.
OLE DB provider "MSDASQL" for linked server "HIVE" returned message "[Cloudera][Hardy] (72) Query execution timeout expired.".
However, SELECT OPENQUERY works fine and follows the Query Timeout settings of the linked server (which is set to 0 in this case).
Edit: that is a completely different use case from what I'd imagined. In that case there should not be any difference between SELECT and INSERT.
Since you have already configured the linked server's query timeout, there is a second place to check: a Command Timeout setting in the provider string of the linked server properties.
Another option that comes to mind is the instance-wide timeout. The default is 600 seconds (10 minutes), which is way above your 30 seconds, but you can still try it to see if there is any impact.
For infinite wait:
sp_configure 'show advanced options',1
go
reconfigure
go
sp_configure 'remote query timeout (s)',0
go
reconfigure
go
I would try using SELECT ... INTO a temporary table and then materializing it with a regular INSERT INTO:
SELECT c1, c2
INTO #temp_tab
FROM OPENQUERY(mylinkedserver, 'SELECT c1, c2 FROM remote_table');
INSERT INTO normal_table(col1, col2)
SELECT c1, c2
FROM #temp_tab;
EDIT:
You could try wrapping it in a transaction and removing the aliases:
BEGIN TRAN;
insert openquery(HIVE, 'select timestamp, tag, value from calc_result')
values('2019-04-22 11:50:41', 'test',2.0);
COMMIT;
If necessary set up DTC: How can I enable distributed transactions for a linked server?
While I didn't find a way to change the OPENQUERY timeout from 30 seconds, I found that EXEC ... AT linked_server works fine for INSERT queries while adhering to the timeout settings.
I accidentally stumbled upon the solution in this 2009 blog post. Databases might not be my strength, but I feel the SQL Server documentation could be improved; a simple page listing the possible ways to interact with a linked server could have saved me lots of retries.
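For reference, a minimal sketch of that form against the setup from the question (the inner statement is passed verbatim to Hive, so it must be valid HiveQL, and the linked server needs the RPC Out option enabled for EXEC ... AT to work):
-- runs the statement on the HIVE linked server itself rather than through OPENQUERY
EXEC ('insert into calc_result values (''2019-04-22 11:50:41'', ''test'', 2.0)') AT HIVE;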

PostgreSQL : relation [temp table] does not exist error for temporary table

I have a function in PostgreSQL that calls multiple functions depending on certain conditions. I create a temporary table in the main function dynamically using an EXECUTE statement, and that temporary table is used for insertion and selection
in the other functions (the same dynamic EXECUTE approach) that I call from the main function.
It's working fine as per my requirement, but sometimes it throws a 'relation does not exist' error for the temporary table when a subroutine (inner function) performs a selection or insertion on it.
SAMPLE
Main Function
CREATE OR REPLACE FUNCTION public.sample_function(
    param bigint)
  RETURNS TABLE(isfinished boolean)
  LANGUAGE 'plpgsql'
  COST 100.0
  VOLATILE ROWS 1000.0
AS $function$
DECLARE
  st_dt DATE;
  end_dt DATE;
  var4 CHARACTER VARYING := CURRENT_TIME;
  var1 character varying;
BEGIN
  SELECT SUBSTRING(REPLACE(REPLACE(REPLACE(var4,':',''),'.',''),'+','') FROM 5 FOR 7) INTO var4;
  EXECUTE 'CREATE TABLE sampletable'||var4||' (
    "emp_id" UUid,
    "emp_name" Character Varying( 2044 ),
    "start_date" Date,
    "end_date" Date)';
  select public.innerfunction (st_dt,end_dt,var4)
  into var1;
  EXECUTE 'DROP TABLE sampletable'||var4;
  return query select true;
END;
$function$;
Inner Function
CREATE OR REPLACE FUNCTION public.innerfunction(
    st_dt timestamp without time zone,
    end_dt timestamp without time zone,
    var4 bigint)
  RETURNS integer
  LANGUAGE 'plpgsql'
  COST 100.0
  VOLATILE
AS $function$
DECLARE
  date1 timestamp without time zone := st_dt;
  return_val integer;
BEGIN
  EXECUTE 'INSERT INTO sampletable'||var4||'
    SELECT *
    from "abc"';
  return return_val;
END;
$function$;
Error Message
ERROR:
relation "sampletable1954209" does not exist
LINE 1: INSERT INTO sampletable1954209
QUERY: INSERT INTO sampletable1954209
SELECT *
from "abc"
;
CONTEXT: PL/pgSQL function innerfunction(timestamp without time zone,
timestamp without time zone) line 51 at EXECUTE
SQL statement "SELECT public.innerfunction(st_dt ,end_dt)"
PL/pgSQL function sample_function(bigint) line 105 at SQL statement
********** Error **********
In the above example I created a main function 'sample_function', in which I dynamically create a table 'sampletable' with a random number attached to its name. I use that table in 'innerfunction' for insertion.
When I call the main function it works as required, but sometimes it gives the mentioned error: relation "sampletable1954209" does not exist.
I am not able to pin down the issue.
I just ran into this problem. It turned out to be because the database was behind pgBouncer in transaction pooling mode, which meant each query could conceivably run in a different connection.
There were two symptoms of this behaviour:
I could run CREATE TEMPORARY TABLE test (LIKE sometable); in one connection, and then run SELECT * FROM test in another connection and see the temporary table. This isn't supposed to happen as temporary tables are meant to be session-specific, but because pgBouncer is pooling connections it means multiple sessions can share temporary tables unpredictably, if they are created outside of transactions.
Sometimes the connection would randomly change and my temporary table would disappear. I was creating and accessing the temporary table outside of a transaction which is why pgBouncer thought it was ok to switch connections.
The solution is to wrap everything up inside a transaction, however this was problematic for me because I was calling other code that used transactions. Postgres doesn't support nested transactions, so I was unable to wrap my code in one. (It does approximate nested transactions with savepoints, but this requires your code to use different SQL if it's running inside a transaction or not, which is not practical.)
Another solution is to change the pgBouncer configuration to session pooling instead, which causes it to run the DISCARD ALL statement when a client disconnects, before giving the session to another client. This drops all temporary tables and makes the connections behave more like direct connections.
There's a good write-up of the issues at the VMWare support hub.
I see a problem with this code, but it does not quite match your exact error message:
You pass var4 to innerfunction as bigint.
Now if var4 starts with a zero like 0649940, then sample_function will use sampletable0649940, while innerfunction will try to access sampletable649940 because the leading zero is lost in the conversion.
Your error message has a seven-digit number though, so it might be a different problem.
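To illustrate the leading-zero problem (using the hypothetical value 0649940):
-- as text the zero survives; cast through bigint it is lost:
SELECT 'sampletable' || '0649940'          AS name_in_main_function,   -- sampletable0649940
       'sampletable' || '0649940'::bigint  AS name_in_inner_function;  -- sampletable649940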
Why don't you use a temporary table with a fixed name? A temporary table is only visible in one session, so there cannot be any name collisions. That's what temporary tables are for.
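A minimal sketch of that suggestion, based on the structure from the question (ON COMMIT DROP is one cleanup option, making the explicit DROP TABLE unnecessary):
-- in the main function, replace the dynamic CREATE TABLE with a fixed-name temp table:
CREATE TEMPORARY TABLE sampletable (
  emp_id uuid,
  emp_name character varying(2044),
  start_date date,
  end_date date
) ON COMMIT DROP;
-- innerfunction can then reference it directly, with no EXECUTE or name suffix:
INSERT INTO sampletable SELECT * FROM "abc";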

PLS-00357 sequence.nextval not allowed

I'm trying to create a trigger and am getting the error "[Error] PLS-00357: Table, View Or Sequence reference 'table_data_seq.nextval' not allowed in this context"
I have read a lot of information on the error and cannot find the difference between the PL/SQL that people say works and mine. Below is my code for creating the trigger (keeping it as basic as possible to get it working):
create or replace trigger tr_tabData
before insert on table_data
for each row
DECLARE
seq_value int;
begin
select table_data_sq.nextval into seq_value from dual;
end;
Oracle version is 10.2.0.5
As requested here it the script for the sequence:
DROP SEQUENCE DATA_ADMIN.TABLE_DATA_SQ;
CREATE SEQUENCE DATA_ADMIN.TABLE_DATA_SQ
START WITH 1000
MAXVALUE 999999999999999999999999999
MINVALUE 1
NOCYCLE
CACHE 20
NOORDER;
Using sequence_name.NEXTVAL in a regular PL/SQL assignment is only possible from 11g onward, not before. Prior to 11g you have to select it from dual, like this:
select TABLE_DATA_SQ.NEXTVAL into :NEW.YourID from dual;
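For comparison, on 11g and later the direct assignment form is allowed inside the trigger body (a sketch reusing the question's sequence and the hypothetical YourID column):
-- 11g+ only: a sequence reference may appear directly in a PL/SQL expression
:NEW.YourID := table_data_sq.NEXTVAL;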
It turned out that this was a bug with the TOAD version and my Oracle database version. The same code in SQL*Plus and SQL Developer worked as expected.

Accessing variables inside trigger function

I'm trying to make a trigger function that creates a timestamp based on a base date stored in a variable plus an interval in seconds.
This base date is given to the psql script with the -v option, e.g. "-v start_time='2013-10-10 13:48:00'".
I want to access this variable from within a trigger function a do something like:
NEW.mytimestamp = timestamp :start_time + interval NEW.elapsed_seconds ' s';
Unfortunately I cannot figure out the right syntax for that. Any ideas?
It is impossible. psql variables (accessed via :varname) are client-side variables. Trigger functions are executed on the server and cannot access these variables.
There is a way around this, but it is a little difficult (one cannot simply initialize values via the command line). You can use custom configuration settings:
postgres=# select set_config('public.xxx', '10', false);
set_config
------------
10
(1 row)
create or replace function foo_trg()
returns trigger as $$
begin
raise notice '%', current_setting('public.xxx');
return new;
end;
$$ language plpgsql;
create table foo(a int);
create trigger hh before insert on foo for each row execute procedure foo_trg();
postgres=# insert into foo values(200);
NOTICE: 10
INSERT 0 1
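To bridge the psql variable from the question into this mechanism, you can interpolate it into the set_config call on the client side (a sketch; public.start_time is an assumed setting name):
-- invoked as: psql -v start_time='2013-10-10 13:48:00' -f script.sql
SELECT set_config('public.start_time', :'start_time', false);
-- inside the trigger function, read it back and apply the offset:
-- NEW.mytimestamp := current_setting('public.start_time')::timestamp
--                    + NEW.elapsed_seconds * interval '1 second';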
Another (more established) technique would be to use an auxiliary table.
On second thought, trigger parametrization (based on some global value) is usually a terrible idea. It indicates you are doing something wrong. Use a function instead.
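A minimal sketch of that alternative, with hypothetical names, passing the base date explicitly instead of reading it from a global:
create or replace function add_elapsed(base_time timestamp, elapsed_seconds int)
returns timestamp as $$
  -- pure function: no hidden state, the caller supplies the base date
  select base_time + elapsed_seconds * interval '1 second';
$$ language sql;
-- usage from psql, interpolating the -v variable on the client side:
-- SELECT add_elapsed(:'start_time'::timestamp, 42);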