Not able to drop schema in DB2 Using ADMIN_DROP_SCHEMA Stored Procedure - db2

I found in several places that you can drop a schema in DB2 along with all of its contents (indexes, SPs, triggers, sequences etc.) using
CALL SYSPROC.ADMIN_DROP_SCHEMA('schema_name', NULL, 'ERRORSCHEMA', 'ERRORTAB');
However, I am getting the following error while using this command:
1) [Code: -469, SQL State: 42886] The parameter mode OUT or INOUT is not valid for a parameter in the routine named "ADMIN_DROP_SCHEMA" with specific name "ADMIN_DROP_SCHEMA" (parameter number "3", name "ERRORTABSCHEMA").. SQLCODE=-469, SQLSTATE=42886, DRIVER=4.22.29
2) [Code: -727, SQL State: 56098] An error occurred during implicit system action type "2". Information returned for the error includes SQLCODE "-469", SQLSTATE "42886" and message tokens "ADMIN_DROP_SCHEMA|ADMIN_DROP_SCHEMA|3|ERRORTABSCHEMA".. SQLCODE=-727, SQLSTATE=56098, DRIVER=4.22.29
Can anyone suggest what's wrong here? I tried looking in several places but didn't get any ideas. It doesn't seem to be an authorization issue. I am using DB2 version 11.5.

You are using the ADMIN_DROP_SCHEMA procedure parameters incorrectly, assuming you are CALLing the procedure from SQL and not the CLP.
The third and fourth parameters cannot be literals (despite the documentation giving such an example); instead they must be host variables, because the procedure requires them to be input/output (INOUT) parameters.
If the stored procedure completes without errors, it sets these parameters to NULL, so your code should check for this.
If the stored procedure detects errors, it creates the specified table, adds rows to it, and leaves the values of these parameters unchanged; you must then query that table to list the error(s). You should drop this table before calling the stored procedure, otherwise the procedure will fail with -601.
Example:
--#SET TERMINATOR #
drop table errschema.errtable#
set serveroutput on#
begin
  declare v_errschema varchar(20) default 'ERRSCHEMA';
  declare v_errtab varchar(20) default 'ERRTABLE';
  CALL SYSPROC.ADMIN_DROP_SCHEMA('SOMESCHEMA', NULL, v_errschema, v_errtab);
  if v_errschema is null and v_errtab is null
  then
    call dbms_output.put_line('The admin_drop_schema reported success');
  else
    call dbms_output.put_line('admin_drop_schema failed and created/populated table '||rtrim(v_errschema)||'.'||rtrim(v_errtab) );
  end if;
end#
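If the call does fail, the procedure will have created and populated ERRSCHEMA.ERRTABLE, and you can inspect it directly. A minimal check (the exact column layout of the error table is determined by the procedure, so SELECT * is the safe option):
select * from errschema.errtable#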

You can use global variables if you would like to use ADMIN_DROP_SCHEMA outside of compound SQL.
E.g.
CREATE OR REPLACE VARIABLE ERROR_SCHEMA VARCHAR(128) DEFAULT 'SYSTOOLS';
CREATE OR REPLACE VARIABLE ERROR_TAB VARCHAR(128) DEFAULT 'ADS_ERRORS';
DROP TABLE IF EXISTS SYSTOOLS.ADS_ERRORS;
CALL ADMIN_DROP_SCHEMA('MY_SCHEMA', NULL, ERROR_SCHEMA, ERROR_TAB);
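After the CALL you can check whether the variables were reset to NULL (success) or left unchanged (failure). A quick check along the lines of the example above (VALUES is standard Db2 syntax for reading global variables):
VALUES (ERROR_SCHEMA, ERROR_TAB);
-- both NULL: the schema was dropped successfully
-- both unchanged: query SYSTOOLS.ADS_ERRORS for the details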

PostgreSQL : relation [temp table] does not exist error for temporary table

I have a function in PostgreSQL that calls multiple functions depending on certain conditions. I create a temporary table in the main function dynamically using an "EXECUTE" statement, and I use that temporary table for insertion and selection in the other functions (with the same dynamic "EXECUTE" approach), which I call from the main function.
It is working fine as per my requirement, but sometimes it throws the error 'relation does not exist' for the temporary table when a subroutine (inner function) performs a selection or insertion on it.
SAMPLE
Main Function
CREATE OR REPLACE FUNCTION public.sample_function(
    param bigint)
  RETURNS TABLE(isfinished boolean)
  LANGUAGE 'plpgsql'
  COST 100.0
  VOLATILE ROWS 1000.0
AS $function$
DECLARE
  st_dt DATE;
  end_dt DATE;
  var4 CHARACTER VARYING := CURRENT_TIME;
  var1 character varying;
BEGIN
  SELECT SUBSTRING(REPLACE(REPLACE(REPLACE(var4,':',''),'.',''),'+','') FROM 5 FOR 7) INTO var4;
  -- dynamically create a table whose name includes a time-based suffix
  EXECUTE 'CREATE TABLE sampletable'||var4||' (
    "emp_id" UUid,
    "emp_name" Character Varying( 2044 ),
    "start_date" Date,
    "end_date" Date)';
  select public.innerfunction (st_dt,end_dt,var4)
    into var1;
  EXECUTE 'DROP TABLE sampletable'||var4;
  return query select true;
END;
$function$;
Inner Function
CREATE OR REPLACE FUNCTION public.innerfunction(
    st_dt timestamp without time zone,
    end_dt timestamp without time zone,
    var4 bigint)
  RETURNS integer
  LANGUAGE 'plpgsql'
  COST 100.0
  VOLATILE
AS $function$
DECLARE
  date1 timestamp without time zone := st_dt;
BEGIN
  -- insert into the dynamically named table created by the caller
  EXECUTE 'INSERT INTO sampletable'||var4||'
    SELECT *
    from "abc"';
  return return_val;
END;
$function$;
Error Message
ERROR:
relation "sampletable1954209" does not exist
LINE 1: INSERT INTO sampletable1954209
QUERY: INSERT INTO sampletable1954209
SELECT *
from "abc"
;
CONTEXT: PL/pgSQL function innerfunction(timestamp without time zone,
timestamp without time zone) line 51 at EXECUTE
SQL statement "SELECT public.innerfunction(st_dt ,end_dt)"
PL/pgSQL function sample_function(bigint) line 105 at SQL statement
********** Error **********
In the above example I created a main function 'sample_function', and inside it I dynamically create a table 'sampletable' with a random number attached to its name. I am using that table in 'innerfunction' for insertion.
When I call the main function it works as required, but sometimes it gives the mentioned error 'relation "sampletable1954209" does not exist'.
I am not able to find the cause of the issue.
I just ran into this problem. It turned out to be because the database was behind pgBouncer in transaction pooling mode, which meant each query could conceivably run in a different connection.
There were two symptoms of this behaviour:
I could run CREATE TEMPORARY TABLE test (LIKE sometable); in one connection, and then run SELECT * FROM test in another connection and see the temporary table. This isn't supposed to happen as temporary tables are meant to be session-specific, but because pgBouncer is pooling connections it means multiple sessions can share temporary tables unpredictably, if they are created outside of transactions.
Sometimes the connection would randomly change and my temporary table would disappear. I was creating and accessing the temporary table outside of a transaction which is why pgBouncer thought it was ok to switch connections.
The solution is to wrap everything up inside a transaction; however, this was problematic for me because I was calling other code that used transactions. Postgres doesn't support nested transactions, so I was unable to wrap my code in one. (It does approximate nested transactions with savepoints, but this requires your code to use different SQL depending on whether it's running inside a transaction or not, which is not practical.)
Another solution is to change the pgBouncer configuration to session pooling instead, which causes it to run the DISCARD ALL statement when a client disconnects, before giving the session to another client. This drops all temporary tables and makes the connections behave more like direct connections.
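For reference, the relevant setting lives in pgbouncer.ini and looks something like this (a sketch; the file location and the rest of the configuration depend on your installation):
; pgbouncer.ini
[pgbouncer]
pool_mode = session
; DISCARD ALL is the default reset query and is applied in session pooling,
; so temporary tables are cleaned up when a client disconnects
server_reset_query = DISCARD ALL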
There's a good write up of the issues at the VMWare support hub.
I see a problem with this code, but it does not quite match your exact error message:
You pass var4 to innerfunction as bigint.
Now if var4 starts with a zero like 0649940, then sample_function will use sampletable0649940, while innerfunction will try to access sampletable649940 because the leading zero is lost in the conversion.
Your error message has a seven-digit number though, so it might be a different problem.
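You can see that conversion effect in isolation (a quick illustration, not data from the question):
SELECT 'sampletable' || '0649940'::bigint;  -- yields 'sampletable649940', not 'sampletable0649940'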
Why don't you use a temporary table and use a fixed name for it? A temporary table is only visible in one session, and there cannot be any name collisions. That's what temporary tables are for.
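A minimal sketch of that approach, reusing the column list from the question (ON COMMIT DROP is optional and assumes both functions run in the same transaction, which they do when innerfunction is called from sample_function):
-- in sample_function, instead of EXECUTE with a generated name:
CREATE TEMPORARY TABLE sampletable (
  emp_id uuid,
  emp_name character varying(2044),
  start_date date,
  end_date date
) ON COMMIT DROP;
-- innerfunction can then use the fixed name directly, no EXECUTE needed:
INSERT INTO sampletable SELECT * FROM "abc";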

How to return values from dynamically generated "insert" command?

I have a stored procedure that performs inserts and updates on the tables. It was created to try to centralize all the scan functions before inserting or updating records. Today the need arose to return the value of the table's ID field so that my application can locate the record and run other stored procedures.
Stored procedure
SET TERM ^ ;
CREATE OR ALTER procedure sp_insupd (
iaction varchar(3),
iusuario varchar(20),
iip varchar(15),
imodulo varchar(30),
ifieldsvalues varchar(2000),
iwhere varchar(1000),
idesclogs varchar(200))
returns (
oid integer)
as
declare variable vdesc varchar(10000);
begin
if (iaction = 'ins') then
begin
vdesc = idesclogs;
/*** the error is on the line below ***/
execute statement 'insert into '||:imodulo||' '||:ifieldsvalues||' returning ID into '||:oid||';';
end else
if (iaction = 'upd') then
begin
execute statement 'select '||:idesclogs||' from '||:imodulo||' where '||:iwhere into :vdesc;
execute statement 'execute procedure SP_CREATE_AUDIT('''||:imodulo||''');';
execute statement 'update '||:imodulo||' set '||:ifieldsvalues||' where '||:iwhere||';';
end
insert into LOGS(USUARIO, IP, MODULO, TIPO, DESCRICAO) values (
:iusuario, :iip, :imodulo, (case :iaction when 'ins' then 1 when 'upd' then 2 end), :vdesc);
end^
SET TERM ; ^
The error on the line above is a syntax error. The procedure compiles normally, that is, the error does not happen at compile time, since the line in question is executed through "execute statement". When there was no need to return the value of the ID field, the procedure worked normally with the line like this:
...
execute statement 'insert into '||:imodulo||' '||:ifieldsvalues||';';
...
What would be the correct way for the value of the ID field to be stored in the OID variable?
What is the REAL VALUE of ifieldsvalues?
You cannot have BOTH
'insert into '||:imodulo||' '||:ifieldsvalues
'update '||:imodulo||' set '||:ifieldsvalues
because the ways column names and column values are specified in INSERT and UPDATE statements are fundamentally different! You would end up with either a broken UPDATE statement or a broken INSERT statement.
The error in the above line is occurring due to syntax error
This is not enough. Show the real error text, all of it.
It includes the actual command you generate, and it seems you generated it in a really wrong way.
all the scan functions before inserting or updating records
Move those functions out of the SQL server and into your application server.
Then you would not have to build the insert/update in that "string splicing" way, which is VERY fragile and "SQL injection" friendly. You have stepped onto the road to hell here.
the error does not happen in the compilation
Exactly. And that is only for starters. You are removing all the safety checks that should have helped you in application development.
http://searchsoftwarequality.techtarget.com/definition/3-tier-application
https://en.wikipedia.org/wiki/Multitier_architecture#Three-tier_architecture
http://bobby-tables.com
On modern Firebird versions the EXECUTE STATEMENT command can have the same INTO clause as the PSQL SELECT command.
https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-psql-coding.html#fblangref25-psql-execstmt
Use http://translate.ru to read http://www.firebirdsql.su/doku.php?id=execute_statement
Or just see the SQL examples there. Notice, however, that those examples all use a dynamic SELECT command, not INSERT, so I am not sure it would work that way.
This works in Firebird 2.5 (but not in Firebird 2.1) PSQL blocks.
execute statement 'insert into Z(payload) values(2) returning id' into :i;
To run it from the IBExpert/FlameRobin/isql interactive shell, add the obvious boilerplate:
execute block returns (i integer) as
begin
execute statement 'insert into Z(payload) values(2) returning id' into :i;
suspend;
end
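Applied to the procedure in the question, the insert branch could then be written like this (a sketch assuming Firebird 2.5 or later; everything else in sp_insupd stays as it is):
execute statement 'insert into '||:imodulo||' '||:ifieldsvalues||' returning ID' into :oid;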

Pentaho Data Integration Input / Output Bit Type Error

I am using Pentaho Data Integration for numerous projects at work. We predominantly use Postgres for our databases. One of our older tables has two columns that are set to type bit(1) to store 0 for false and 1 for true.
My task is to synchronize a production table with a copy in our development environment. I am reading the data in using Table Input and immediately trying to do an Insert/Update. However, it fails because of the conversion to Boolean by PDI. I updated the query to cast the values to integers to retain the 0 and 1 but when I run it again, my transformation fails because an integer cannot be a bit value.
I have spent several days trying different things, like using the JavaScript step to convert to a bit, but I have not been able to successfully read in a bit type and use the Insert/Update step to store the data. I also do not believe the Insert/Update step has the capability of changing the SQL that defines the data type for the column.
The database connection is set up using:
Connection Type: PostgreSQL
Access: Native (JDBC)
Supports the boolean data type: true
Quote all in database: true
Note: Altering the table to change the data type is not an option at this point in time. Too many applications currently depend on this table, so altering it in this way could cause undesirable effects.
Any help would be appreciated. Thank you.
You can create a cast object (for example from character varying to bit) in your destination database with the "as assignment" option. AS ASSIGNMENT allows the cast to be applied automatically during inserts.
http://www.postgresql.org/docs/9.3/static/sql-createcast.html
Here is some proof-of-concept for you:
CREATE FUNCTION cast_char_to_bit (arg CHARACTER VARYING)
RETURNS BIT(1) AS
$$
SELECT
CASE WHEN arg = '1' THEN B'1'
WHEN arg = '0' THEN B'0'
ELSE NULL
END
$$
LANGUAGE SQL;
CREATE CAST (CHARACTER VARYING AS BIT(1))
WITH FUNCTION cast_char_to_bit(CHARACTER VARYING)
AS ASSIGNMENT;
Now you should be able to insert/update single-character strings into the bit(1) column. However, you will need to cast your input column to character varying/text, so that it is converted to String in the Table Input step and to CHARACTER VARYING in the Insert/Update step.
Probably you could create the cast object using existing cast functions that are already defined in Postgres (see the pg_cast, pg_type and pg_proc tables, joined by oid), but I haven't managed to do this, unfortunately.
Edit 1:
Sorry for the previous solution. Adding a cast from boolean to bit looks much more reasonable: you will not even need to cast data in your table input step.
CREATE FUNCTION cast_bool_to_bit (arg boolean)
RETURNS BIT(1) AS
$$
SELECT
CASE WHEN arg THEN B'1'
WHEN NOT arg THEN B'0'
ELSE NULL
END
$$
LANGUAGE SQL;
CREATE CAST (BOOLEAN AS BIT(1))
WITH FUNCTION cast_bool_to_bit(boolean)
AS ASSIGNMENT;
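With that cast in place, boolean values coming from PDI can be assigned straight into the bit(1) column. A quick sanity check against a hypothetical table (not one from the question):
CREATE TABLE bit_test (flag bit(1));
INSERT INTO bit_test (flag) VALUES (true);   -- succeeds via the assignment cast
SELECT flag FROM bit_test;                   -- returns 1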
I solved this by writing out the Postgres insert SQL (with B'1' and B'0' for the bit values) in a previous step and using "Execute row SQL Script" at the end to run each insert as individual SQL statements.

Trying to convert simple execution stored procedure to postgres function - can't get CURRVAL('table_seq') to function

The MySQL Stored Procedure was:
BEGIN
  SET @sql = _sql;
  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
  SET _ires = LAST_INSERT_ID();
END$$
I tried to convert it to:
BEGIN
EXECUTE _sql;
SELECT INTO _ires CURRVAL('table_seq');
RETURN;
END;
I get the error:
SQL error:
ERROR: relation "table_seq" does not exist
LINE 1: SELECT CURRVAL('table_seq')
^
QUERY: SELECT CURRVAL('table_seq')
CONTEXT: PL/pgSQL function "myexecins" line 4 at SQL statement
In statement:
SELECT myexecins('SELECT * FROM tblbilldate WHERE billid = 2')
The query used here is for testing purposes only. I believe this function is used to get the row id of the row inserted or created by the query. Any suggestions?
When you create tables with serial columns, sequences are, by default, named as tablename_columnname_seq, but it seems that you're trying to access tablename_seq. So with a table called foobar and primary key column foobar_id it ends up being foobar_foobar_id_seq.
By the way, a cleaner way to get the primary key after an insert is using the RETURNING clause in INSERT. E.g.:
_sql = 'INSERT INTO sometable (foobar) VALUES (123) RETURNING sometable_id';
EXECUTE _sql INTO _ires;
PostgreSQL is saying that there is no sequence called "table_seq". Are you sure that is the right name? The name you would use depends on what is in _sql, as each SERIAL or BIGSERIAL column gets its own sequence; you can also define sequences and wire them up by hand.
In any case, lastval() is a closer match to MySQL's LAST_INSERT_ID(); it returns the most recently returned value from any sequence in the current session:
lastval
Return the value most recently returned by nextval in the current session. This function is identical to currval, except that instead of taking the sequence name as an argument it fetches the value of the last sequence used by nextval in the current session. It is an error to call lastval if nextval has not yet been called in the current session.
Using lastval() also means that you don't have to worry about what's in _sql, unless of course it doesn't use a sequence at all.
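Putting that together, the converted function could look something like this (a minimal sketch; the name myexecins and the test call are taken from the question, the exact signature is an assumption):
CREATE OR REPLACE FUNCTION myexecins(_sql text) RETURNS bigint AS $$
DECLARE
  _ires bigint;
BEGIN
  EXECUTE _sql;
  _ires := lastval();  -- errors if _sql never called nextval()
  RETURN _ires;
END;
$$ LANGUAGE plpgsql;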

Oracle error ORA-22905: cannot access rows from a non-nested table item

Here's the stored procedure I wrote. In this proc, "p_subjectid" is an array of numbers passed from the front end.
PROCEDURE getsubjects(p_subjectid subjectid_tab,p_subjects out refCursor)
as
BEGIN
open p_subjects for select * from empsubject where subject_id in
(select column_value from table(p_subjectid));
--select * from table(cast(p_subjectid as packg.subjectid_tab))
END getsubjects;
This is the error I am getting.
Oracle error ORA-22905: cannot access rows from a non-nested table item OR
As I have seen in a different post, I tried the cast "cast(p_subjectid as packg.subjectid_tab)" inside the table function, as shown in the commented-out line above, but I am getting another error: ORA-00902: invalid datatype.
And this is the definition of the "subjectid_tab".
type subjectid_tab is table of number index by binary_integer;
Can anyone please tell me what the error is? Is anything wrong with my procedure?
You have to declare the type on "the database level" as ammoQ suggested:
CREATE TYPE subjectid_tab AS TABLE OF NUMBER;
(note that the INDEX BY clause from the original declaration is not allowed at the schema level)
instead of declaring the type within PL/SQL. If you declare the type just in the PL/SQL block, it won't be available to the SQL "engine".
Oracle has two execution scopes: SQL and PL/SQL. When you use a SELECT/INSERT/UPDATE (etc) statement you are working in the SQL scope and, in Oracle 11g and below, you cannot reference types that are defined in the PL/SQL scope. (Note: Oracle 12 changed this so you can reference PL/SQL types.)
TYPE subjectid_tab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
Is an associative array and can only be defined in the PL/SQL scope so cannot be used in SQL statements.
What you want is to define a collection (not an associative array) in the SQL scope using:
CREATE TYPE subjectid_tab IS TABLE OF NUMBER;
(Note: you do not need the INDEX BY clause for a collection.)
Then you can do:
OPEN p_subjects FOR
SELECT *
FROM empsubject
WHERE subject_id MEMBER OF p_subjectid;
or
OPEN p_subjects FOR
SELECT *
FROM empsubject
WHERE subject_id IN ( SELECT COLUMN_VALUE FROM TABLE( p_subjectid ) );
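For completeness, a sketch of how a caller might populate the SQL-level collection and invoke the procedure (the sample values and variable names are made up; this assumes getsubjects is visible to the caller and its ref cursor parameter is compatible with SYS_REFCURSOR):
-- once, at the schema level:
CREATE TYPE subjectid_tab AS TABLE OF NUMBER;
/
-- calling the procedure from an anonymous block:
DECLARE
  v_ids      subjectid_tab := subjectid_tab(101, 102, 103);
  v_subjects SYS_REFCURSOR;
BEGIN
  getsubjects(v_ids, v_subjects);
  -- fetch from v_subjects as usual ...
END;
/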
This is the right solution.
You cannot use table(cast()) if the type you are casting to is declared in the DECLARE part of the PL/SQL block.
You REALLY need to use CREATE TYPE my_type [...]. Otherwise, it will throw the "cannot fetch row [...]" exception.
I just had this problem yesterday.
DECLARE
  TYPE number_table IS TABLE OF NUMBER;
  result_ids number_table := number_table();
BEGIN
  /* .. bunch of code that uses my type successfully */
  OPEN ? FOR
    SELECT *
    FROM TABLE(CAST(result_ids AS number_table)); /* BOOM! */
END;
This fails in both of the ways you described when called from a Java routine. I discovered this was because the type number_table is not defined in an exportable manner that can be shipped off the database. The type works great internally to the routine, but as soon as you try to execute a returnable recordset that references it in any way (including IN clauses?!?) you get a "datatype not defined" error.
So the solution really is CREATE TYPE myschema.number_table IS TABLE OF NUMBER; Then drop the type declaration from your block and use the schema level declaration. Use the schema qualifier to reference the type just to be sure you are using the right one.
You have to cast the results of the pipelined query. If your pipelined function returns a row type of varchar2, then define a type, for example:
CREATE OR REPLACE TYPE char_array_t IS VARRAY(32) OF varchar2(255);
and then
select * from table(cast(fn(x) as char_array_t));
will now work.