DbVisualizer inserts a parameter into my Oracle DB function call

I'm trying to write a function in an Oracle database. When I'm in the function editor and attempt to run it, DbVis inserts an additional parameter:
#call [dbvis - v0] = SP_GET_ANNUAL_SALES_HISTORY( [dbvis - v1], 'DAL', '00105315', '2013' );
#echo returnValue = [dbvis - v0];
#echo p1 = [dbvis - v0];
Then I get this error:
... Physical database connection acquired for: JdaTest
10:36:40 [#CALL - 0 row(s), 0.000 secs] [Error Code: 6550, SQL State: 65000] ORA-06550: line 1, column 13:
PLS-00306: wrong number or types of arguments in call to 'SP_GET_ANNUAL_SALES_HISTORY'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
10:36:40 [#ECHO - 0 row(s), 0.000 secs] returnValue = null
10:36:40 [#ECHO - 0 row(s), 0.000 secs] p1 = null
... 3 statement(s) executed, 0 row(s) affected, exec/fetch time: 0.000/0.000 sec [2 successful, 0 warnings, 1 errors]
This is my function. It's my first foray into stored procedures. At this point, I'm just trying to get it to run and deliver some results. The return type is a type I also created: 'CREATE OR REPLACE TYPE "SAPMGR"."ANNUAL_SALES_HISTORY" is Varray(12) of number'.
CREATE OR REPLACE FUNCTION "SAPMGR"."SP_GET_ANNUAL_SALES_HISTORY" (loc_in IN varchar2, item_in IN varchar2, year_in IN varchar2)
RETURN annual_sales_history
AS
  yearStart   Date;
  yearEnd     Date;
  start_date  sales_history.start_date%TYPE;
  qty         sales_history.quantity%TYPE;
  ash         annual_sales_history;
  cursor c1 is
    select start_date, QTY
      from sales_history
     where item = item_in
       and loc = loc_in
       and start_date between yearStart and yearEnd
     order by item, loc, start_date;
BEGIN
  -- yearStart/yearEnd were never assigned; derive them from year_in (assumed intent)
  yearStart := to_date(year_in || '-01-01', 'YYYY-MM-DD');
  yearEnd   := to_date(year_in || '-12-31', 'YYYY-MM-DD');
  -- a varray must be initialized and extended before its elements can be assigned
  ash := annual_sales_history();
  ash.extend(12);
  open c1;  -- the cursor was never opened in the original version
  Loop
    fetch c1 into start_date, qty;
    exit when c1%notfound;
    ash(extract(month from start_date)) := qty;  -- varray subscripts run 1..12
    DBMS_OUTPUT.PUT_LINE('ash ' || ash(1) || ' ' || ash(2) || ' ' || ash(3) || ' ' || ash(4) || ' ' || ash(5) || ' ' || ash(6) || ' ' || ash(7) || ' ' || ash(8));
  End loop;
  close c1;
  RETURN ash;  -- no commit needed; the function only reads data
EXCEPTION
  WHEN OTHERS THEN
    raise_application_error(-20001, 'An error was encountered - ' || SQLCODE || ' -ERROR- ' || SQLERRM);
END;
What is [dbvis - v1] and how do I get rid of it? Or please show me where I'm leaving something out.
Thanks.

I received mail from DbVisualizer saying that their editor does not recognize custom data types, which is what I was attempting to use. So stay with native types and you're OK.
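As a workaround, the function can still be exercised from a plain SQL script instead of the generated #call, since the return type is the schema-level ANNUAL_SALES_HISTORY varray. A minimal sketch, using the sample arguments from the question:
DECLARE
  result annual_sales_history;
BEGIN
  result := sapmgr.sp_get_annual_sales_history('DAL', '00105315', '2013');
  DBMS_OUTPUT.PUT_LINE('elements returned: ' || result.COUNT);
END;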

How to subtract dates in Oracle PL SQL

I'm using Oracle 18c.
I'm trying to determine elapsed time, but I get an error when I subtract two date variables in PL/SQL.
The following code works fine:
DECLARE
  l_zero_date     date;
  l_current_date  date;
  l_elapsed_time  date;
BEGIN
  Execute Immediate 'ALTER SESSION set nls_timestamp_format = "DD-MM-YYYY HH24:MI:SS"';
  l_zero_date := to_date('01-01-1900 00:00:00', 'DD-MM-YYYY HH24:MI:SS');
  dbms_output.put_line('The value of l_zero_date is: ' || l_zero_date);
  Select ls.duration Into l_current_date From LIT_STATS ls Where ls.prim_key = 1002;
  dbms_output.put_line('The value of l_curr_date is: ' || l_current_date);
  -- dbms_output.put_line('The elapsed time is: ' || l_current_date - l_zero_date);
END;
This produces the results:
The value of l_zero_date is: 1900-01-01 00:00:00
The value of l_curr_date is: 1900-01-01 00:35:22
However, if I un-comment the last dbms_output line, I get this error:
Error report -
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 14
06502. 00000 - "PL/SQL: numeric or value error%s"
*Cause: An arithmetic, numeric, string, conversion, or constraint error
occurred. For example, this error occurs if an attempt is made to
assign the value NULL to a variable declared NOT NULL, or if an
attempt is made to assign an integer larger than 99 to a variable
declared NUMBER(2).
*Action: Change the data, how it is manipulated, or how it is declared so
that values do not violate constraints.
I don't understand why I get the error on subtraction involving two fields declared as DATE. For example, the following code works fine:
declare
  a date;
  b date;
begin
  a := sysdate;
  dbms_lock.sleep(10); -- sleep about 10 seconds give or take
  b := sysdate;
  dbms_output.put_line( b-a || ' of a day has elapsed' );
  dbms_output.put_line( (b-a)*24 || ' of an hour has elapsed' );
  dbms_output.put_line( (b-a)*24*60 || ' of a minute has elapsed' );
  dbms_output.put_line( (b-a)*24*60*60 || ' seconds has elapsed' );
end;
Why does the line dbms_output.put_line('The elapsed time is: ' || l_current_date - l_zero_date); produce an error?
Thanks for looking at this.
As I mentioned in the comments, this is an order of operations issue. Take the following example:
SELECT 'TEST'||SYSDATE-SYSDATE FROM DUAL
When this runs, I get the following error: ORA-00932: inconsistent datatypes: expected CHAR got DATE
But when I wrap the dates in ( and ):
SELECT 'TEST'||(SYSDATE-SYSDATE) FROM DUAL
The result is TEST0.
It is an order of operations issue: the expression is evaluated left to right, so the concatenation is attempted before the subtraction unless parentheses force the date subtraction to happen first.
Here is a DBFiddle showing the queries being run (LINK)
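Applied to the block in the question, the fix is simply to parenthesize the subtraction so it is evaluated before the concatenation (a sketch using the same variable names):
dbms_output.put_line('The elapsed time is: ' || (l_current_date - l_zero_date));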

Postgresql SKU Generator

I'm looking for a SKU generator function to generate a SKU based on the product name: a combination of letters and a 5-digit unique number, in PostgreSQL.
For example:
generate_sku('GREENFIELDS FULL CREAM MILK')
will return the first three letters of the first three words plus a random number:
GRE-FUL-CRE-987652
Any idea?
As suggested, create a sequence. Since you have a specific value range, restrict the range of the sequence. Then create a function which takes a single parameter, the name, and returns the SKU. See Fiddle.
create sequence sku_seq
as integer
increment 1
start with 10000
minvalue 10000
maxvalue 99999
no cycle;
create or replace
function item_sku(item_name_in text)
  returns text
  language sql
  volatile strict  -- nextval() is volatile, so the function must not be declared immutable
as $$
with parm (snam) as
  ( select string_to_array(item_name_in, ' ') )
select sku || to_char(nextval('sku_seq'), 'FM99999')
  from ( select case
                  when array_length(snam, 1) > 2
                    then substr(snam[1], 1, 3) || '_' ||
                         substr(snam[2], 1, 3) || '_' ||
                         substr(snam[3], 1, 3) || '_'
                  when array_length(snam, 1) = 2
                    then substr(snam[1], 1, 3) || '_' ||
                         substr(snam[2], 1, 3) || '_'
                  else substr(snam[1], 1, 3) || '_'
                end sku  -- swap '_' for '-' if the hyphenated format from the question is preferred
           from parm
       ) s;
$$;
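A quick check of the function (the numeric suffix will vary, since it comes from sku_seq):
select item_sku('GREENFIELDS FULL CREAM MILK');
-- e.g. GRE_FUL_CRE_10000 right after the sequence is created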

AWS Redshift: FATAL: connection limit "500" exceeded for non-bootstrap users

Hope you're all okay.
We hit this limit quite often. We know there is no way to raise the 500-connection limit for concurrent user connections in Redshift. We also know certain views (pg_user_info) provide info on a user's actual limit.
We are looking for answers not found in this forum, plus any guidance based on your experience.
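For reference, the pg_user_info view mentioned above can be queried for a per-user limit. A sketch, assuming the usename and useconnlimit columns documented for that view (it is typically visible only to superusers), and a hypothetical user name:
SELECT usename, useconnlimit
FROM pg_user_info
WHERE usename = 'app_user';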
Questions:
Would recreating the cluster with bigger EC2 instances yield a higher limit value?
Would adding new nodes to the existing cluster yield a higher limit value?
From the app development perspective: what specific strategies/actions would you recommend in order to spot or predict a situation where this limit will be hit?
Txs - Jimmy
Okay folks, thanks to all who answered.
I posted a support ticket with AWS and below is their recommendation. I'm pasting it all here; it's long, but I hope it helps the many people running into this issue. The idea is to catch the situation before it happens:
To monitor the number of connections made to the database, you can create a CloudWatch alarm based on the DatabaseConnections metric that triggers a Lambda function when a certain threshold is reached. That Lambda function can then terminate idle connections by calling a procedure that terminates idle sessions.
Please find below the queries that create a procedure to log and terminate long-running inactive sessions:
1. Add a view to get all current inactive sessions in the cluster:
CREATE OR REPLACE VIEW inactive_sessions as (
select a.process,
trim(a.user_name) as user_name,
trim(c.remotehost) as remotehost,
a.usesysid,
a.starttime,
datediff(s,a.starttime,sysdate) as session_dur,
b.last_end,
datediff(s,case when b.last_end is not null then b.last_end else a.starttime end,sysdate) idle_dur
FROM
(
select starttime,process,u.usesysid,user_name
from stv_sessions s, pg_user u
where
s.user_name = u.usename
and u.usesysid>1
and process NOT IN (select pid from stv_inflight where userid>1
union select pid from stv_recents where status != 'Done' and userid>1)
) a
LEFT OUTER JOIN (
select
userid,pid,max(endtime) as last_end from svl_statementtext
where userid>1 and sequence=0 group by 1,2) b ON a.usesysid = b.userid AND a.process = b.pid
LEFT OUTER JOIN (
select username, pid, remotehost from stl_connection_log
where event = 'initiating session' and username <> 'rsdb') c on a.user_name = c.username AND a.process = c.pid
WHERE (b.last_end > a.starttime OR b.last_end is null)
ORDER BY idle_dur
);
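Once the view exists, it can be queried directly to preview what would be terminated, for example sessions idle for more than five minutes:
SELECT process, user_name, remotehost, session_dur, idle_dur
FROM inactive_sessions
WHERE idle_dur >= 300
ORDER BY idle_dur DESC;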
2. Add a table for logging information about the long-running inactive sessions that were terminated:
CREATE TABLE IF NOT EXISTS terminated_inactive_sessions (
process int,
user_name varchar(50),
remotehost varchar(50),
starttime timestamp,
session_dur int,
idle_dur int,
terminated_on timestamp DEFAULT GETDATE()
);
3. Add a procedure to log and terminate any inactive sessions that have been idle for longer than n seconds:
CREATE OR REPLACE PROCEDURE terminate_and_log_inactive_sessions (n INTEGER)
AS $$
DECLARE
expired RECORD ;
BEGIN
FOR expired IN SELECT process, user_name, remotehost, starttime, session_dur, idle_dur FROM inactive_sessions where idle_dur >= n
LOOP
EXECUTE 'INSERT INTO terminated_inactive_sessions (process, user_name, remotehost, starttime, session_dur, idle_dur) values (' || expired.process || ' , ''' || expired.user_name || ''' , ''' || expired.remotehost || ''' , ''' || expired.starttime || ''' , ' || expired.session_dur || ' , ' || expired.idle_dur || ');';
EXECUTE 'SELECT PG_TERMINATE_BACKEND(' || expired.process || ')';
END LOOP ;
END ;
$$ LANGUAGE plpgsql;
4. Execute the procedure by running the following command:
call terminate_and_log_inactive_sessions(100);
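After a run, the log table can be checked to verify what was terminated:
SELECT * FROM terminated_inactive_sessions ORDER BY terminated_on DESC;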
Here is a sample Lambda function that attempts to close idle connections by querying the 'inactive_sessions' view created above; you can use it as a reference.
import datetime
import logging
import sys

import psycopg2

# db_* connection settings and session_idle_limit are assumed to be defined
# elsewhere (e.g. read from environment variables or the Lambda configuration)

# Current time
now = datetime.datetime.now()
query = "SELECT process, user_name, session_dur, idle_dur FROM inactive_sessions where idle_dur >= %d"
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    try:
        conn = psycopg2.connect("dbname=" + db_database + " user=" + db_user + " password=" + db_password + " port=" + db_port + " host=" + db_host)
        conn.autocommit = True
    except:
        logger.error("ERROR: Unexpected error: Could not connect to Redshift cluster.")
        sys.exit()
    logger.info("SUCCESS: Connection to RDS Redshift cluster succeeded")
    with conn.cursor() as cur:
        cur.execute(query % (session_idle_limit))
        row_count = cur.rowcount
        if row_count >= 1:
            result = cur.fetchall()
            for row in result:
                print("terminating session with pid %s that has been idle for %d seconds at %s" % (row[0], row[3], now))
                cur.execute("SELECT PG_TERMINATE_BACKEND(%s);" % (row[0]))
            conn.close()
        else:
            conn.close()
As you said, this is a hard limit in Redshift and there is no way to raise it. Redshift is not a high-concurrency / high-connection database.
I expect that if you need the large data analytic horsepower of Redshift, you can get around this with connection sharing. Pgpool is a common tool for this.

Why do I have a date error in my SQL query?

I am running a SQL SELECT query, but I get this error message:
"SQL Error [22007]: [SQL0181] A value of date, time, or timestamp
string is incorrect."
Here is my query:
SELECT *
FROM ROXDTA400.STKF0300 A
JOIN ROXDTA400.TABJ00141 B ON A.STNSIT = B.CDSITE
WHERE ( A.STNLIB <> '-- Trémie --'
AND A.STNSIT <> 40
AND DATE(LEFT(STNDAV,4) || '-' || substr(STNDAV,5,2) || '-' || RIGHT(STNDAV,2))
BETWEEN DATE('2019-01-01') AND DATE('2019-01-04') );
The problem seems to come from the date built from the STNDAV field, because if I replace it with, for example, DATE('2019-01-03'), it works.
DATE(LEFT(STNDAV, 4) || '-' || substr(STNDAV, 5, 2) || '-' || RIGHT(STNDAV, 2)) gives me the correct date format.
Where could the problem come from?
Thank you.
Ensure that the dates stored in STNDAV are valid; that is, check whether there are any invalid dates such as February 30th or '99999999'. If the source is an IBM i (iSeries or AS/400), it will also be faster to avoid functions in the WHERE clause, so STNDAV BETWEEN '20190101' AND '20190104' will perform better.
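Applied to the query in the question, the function-free comparison would look like this (a sketch, assuming STNDAV holds 8-character YYYYMMDD values):
SELECT *
FROM ROXDTA400.STKF0300 A
JOIN ROXDTA400.TABJ00141 B ON A.STNSIT = B.CDSITE
WHERE A.STNLIB <> '-- Trémie --'
  AND A.STNSIT <> 40
  AND A.STNDAV BETWEEN '20190101' AND '20190104';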

SQL Scalar function element not recognized in TSQL program

I have an input DB2 table with two elements, loan_number and debt_to_income; this table's name is #Input_Table. I am trying to test the function by running a SQL program against this table. The problem is that the function's output column is not being recognized in the SQL program for some reason; maybe I have been looking at it too long? I need to validate that the output in the table will be ordered by the debt_to_income field.
Here is the function code:
ALTER FUNCTION [dbo].[FN_DTI_BANDS]
(
    -- the parameters for the function here
    @FN_DTI_Band decimal(4,3)
)
RETURNS varchar(16)
AS
BEGIN
    declare @Return varchar(16)
    select @Return =
        Case
            when @FN_DTI_Band is NULL then ' Missing'
            when @FN_DTI_Band = 00.00 then ' Missing'
            when @FN_DTI_Band <= 0.31 then 'Invalid'
            when @FN_DTI_Band between 0.31 and 0.34 then '31-34'
            when @FN_DTI_Band between 0.34 and 0.38 then '34-38'
            when @FN_DTI_Band >= 0.38 then '38+'
            else null
        end
    -- Return the result of the function
    RETURN @Return
END
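For a quick sanity check of the function on its own (assuming it has already been created in the current database), a value of 0.36 should fall in the '34-38' band:
SELECT dbo.FN_DTI_BANDS(0.36) AS FN_DTI_Band;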
Here is the T-SQL test program:
SELECT loan_number,dbo.FN_DTI_BANDS(debt_to_income)as FN_DTI_Band
from #Input_table
SELECT COUNT(*), FN_DTI_Band
FROM #Input_table
GROUP BY FN_DTI_Band
ORDER BY FN_DTI_Band
Here is the error:
Msg 207, Level 16, State 1, Line 7
Invalid column name 'FN_DTI_Band'.
Msg 207, Level 16, State 1, Line 5
Invalid column name 'FN_DTI_Band'.
Can someone help me spot what I am overlooking? Thank you!
The table #Input_table does not have a column called FN_DTI_Band; only the result set of the first SELECT statement has that column name.
You need to make the first SELECT statement a subquery of the second, something like this:
SELECT COUNT(*), T.FN_DTI_Band
FROM
(
SELECT loan_number,dbo.FN_DTI_BANDS(debt_to_income) as FN_DTI_Band
from #Input_table
) T
GROUP BY T.FN_DTI_Band
ORDER BY T.FN_DTI_Band
Try prepending "dbo" onto the name of the function.
Select Count(*), dbo.FN_DTI_Band
From....