ALL_TAB_PRIVS gives different results from package context - oracle12c

I'm trying to use the view ALL_TAB_PRIVS to determine whether a certain grant exists for a table for a specific user. Querying the view itself works as intended, but when I'm querying it from a procedure within a package it misses a grantee.
Here's the code snippet I use:
declare
  procedure foo is
  begin
    for s in (select * from all_tab_privs p
              where p.table_schema = 'B' and p.table_name = 'TABLE_NAME') loop
      dbms_output.put_line(s.grantee);
    end loop;
  end;
begin
  tester.foo;
  dbms_output.put_line('--');
  foo;
end;
I run this code from schema A, and I want to check whether schema C has a SELECT grant on table TABLE_NAME in schema B. TESTER is a package in schema A with a procedure foo that does the same thing as the standalone procedure in the code snippet. However, the output is:
A
--
A
C
So the packaged procedure is missing the grant for C. How is this possible? Is the view built differently depending on where it's called from? I checked the documentation, but couldn't find a hint on why this is happening.
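For completeness, the TESTER package is essentially just a wrapper around the same loop; a stripped-down version looks roughly like this:
create or replace package tester as
  procedure foo;
end tester;
/
create or replace package body tester as
  procedure foo is
  begin
    for s in (select * from all_tab_privs p
              where p.table_schema = 'B' and p.table_name = 'TABLE_NAME') loop
      dbms_output.put_line(s.grantee);
    end loop;
  end foo;
end tester;
/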

Related

Is there any way to check whether a PostgreSQL record- or row-type variable contains a specific field from inside a function?

I have a trigger defined on several tables to fire after all INSERT, UPDATE, or DELETE, all using the same trigger function. The trigger function performs an expensive check, but I can speed it up significantly by filtering some of the intermediate steps of that check using either a WHERE machine_serial = NEW.machine_serial or WHERE machine_serial = OLD.machine_serial clause, depending on what type of statement fired the trigger. However, not all the tables actually have a machine_serial column, so I can't perform this filtering when the trigger is fired on one of those tables. I am currently trying to find a good solution to making the decision of whether to filter or not from within the trigger function, and I believe that simply checking whether NEW or OLD has the machine_serial field would be easiest, clearest, and fastest. I can't find any way to do that in the documentation though, but checking whether a RECORD contains a certain field seems like such a basic, commonplace operation for anyone that has to work with RECORDs that I assume that I've just got to be missing it somewhere - I can't imagine that it's just not possible.
For completeness, I'll go over the alternatives I've considered to the hypothetical does-RECORD-have-field check:
I could create two trigger functions, do_expensive_check_with_machine_serial() and do_expensive_check_without_machine_serial(), and use one or the other depending on whether the table has the machine_serial column. But if I or anyone after me needs to alter the logic in either one of these functions, they'll need to remember to alter the logic in the other one, too.
I could stick with the one trigger function I currently have, and figure out whether the firing table has machine_serial by just trying to access NEW.machine_serial or OLD.machine_serial. If that raises an exception, I can catch it and then I'll know the field isn't present. But the manual explicitly suggests avoiding using exception blocks unless absolutely necessary, due to performance impacts.
I could stick with the one trigger function I currently have, and just add a check like this: IF (TG_TABLE_SCHEMA = x AND TG_TABLE_NAME = y) OR (TG_TABLE_SCHEMA = w AND TG_TABLE_NAME = z) OR ..., and then maintain that list of every table that has a machine_serial column. But then I and anyone that comes after me would need to alter that check in the trigger function any time the trigger is added to a new table, which is less than ideal.
Of course, the above three alternatives would all function, but they all feel like bad design choices to me. Maybe it's because I'm used to the dynamic nature of Python, but if I used any of these alternatives, I would feel like I'm doing something wrong. And PostgreSQL is pretty good about offering lots of operators on all sorts of data types, so I just can't imagine that something as basic as checking whether a RECORD or ROW-type variable contains a certain field is impossible.
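For illustration, the exception-based check from the second alternative above would look something like this (do_expensive_check() is a hypothetical helper standing in for the real check):
CREATE OR REPLACE FUNCTION expensive_check_trigger() RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE
    rec record;
    serial text;
BEGIN
    -- NEW is populated for INSERT/UPDATE, OLD for DELETE
    IF TG_OP = 'DELETE' THEN
        rec := OLD;
    ELSE
        rec := NEW;
    END IF;

    BEGIN
        serial := rec.machine_serial;
    EXCEPTION
        WHEN undefined_column THEN
            serial := NULL;   -- the firing table has no machine_serial column
    END;

    -- filter the expensive check by machine_serial when one is available
    PERFORM do_expensive_check(serial);
    RETURN NULL;              -- return value is ignored for AFTER row triggers
END;
$$;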
Before I show the solution, I have to say that this requirement can be a signal of an unhappy design. Maybe you are trying to implement functionality that should not be implemented in triggers. Triggers are good, but triggers that are too smart, too generic, or too rich can be very slow and very hard to maintain and debug (though, as with everything in life, there are exceptions to the rule).
First, you can look at the system catalog:
CREATE FUNCTION public.foo_trg() RETURNS trigger
LANGUAGE plpgsql
AS $$
begin
  -- new.tableoid identifies the firing table, so pg_attribute tells us
  -- whether that table has a column with the given name
  raise notice 'a exists %', exists(select * from pg_attribute
                                    where attrelid = new.tableoid and attname = 'a');
  raise notice 'd exists %', exists(select * from pg_attribute
                                    where attrelid = new.tableoid and attname = 'd');
  return new;
end;
$$;
CREATE TABLE public.foo (
a integer,
b integer
);
CREATE TRIGGER foo_trg_insert
AFTER INSERT ON public.foo
FOR EACH ROW EXECUTE FUNCTION public.foo_trg();
(2022-09-02 06:18:41) postgres=# insert into foo values(1,2);
NOTICE: a exists t
NOTICE: d exists f
INSERT 0 1
The second solution is based on a record-to-jsonb transformation:
CREATE OR REPLACE FUNCTION public.foo_trg()
RETURNS trigger
LANGUAGE plpgsql
AS $$
declare
  j jsonb;
begin
  -- cast the row to jsonb, then test for a key with the ? operator
  j := to_jsonb(new);
  raise notice 'a exists %', j ? 'a';
  raise notice 'd exists %', j ? 'd';
  return new;
end;
$$;
(2022-09-02 06:24:54) postgres=# insert into foo values(1,2);
NOTICE: a exists t
NOTICE: d exists f
INSERT 0 1
The second solution can be faster because it doesn't require queries against the system catalog (it only hits the system catalog cache), but it doesn't work on some legacy PostgreSQL releases.
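Applied to the machine_serial case from the original question, a minimal sketch of this second approach could look like the following (do_machine_check() is a hypothetical helper standing in for the expensive check):
CREATE OR REPLACE FUNCTION public.expensive_check_trg() RETURNS trigger
LANGUAGE plpgsql
AS $$
declare
  j jsonb;
begin
  -- NEW is populated for INSERT/UPDATE, OLD for DELETE
  if TG_OP = 'DELETE' then
    j := to_jsonb(old);
  else
    j := to_jsonb(new);
  end if;

  if j ? 'machine_serial' then
    -- the firing table has a machine_serial column: run the filtered check
    perform do_machine_check(j ->> 'machine_serial');
  else
    -- no machine_serial column: run the unfiltered check
    perform do_machine_check(null);
  end if;

  return null;  -- result is ignored for AFTER ... FOR EACH ROW triggers
end;
$$;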

Query has no result in destination data when calling colpivot inside pgsql stored procedure

I have created a procedure that generates a temp table using colpivot (https://github.com/hnsl/colpivot) and saves the result into a physical table, as below, in PL/pgSQL:
create or replace procedure create_report_table()
language plpgsql
as $$
begin
drop table if exists reports;
select colpivot('_report',
'select
u.username,
c.shortname as course_short_name,
to_timestamp(cp.timecompleted)::date as completed
FROM mdl_course_completions AS cp
JOIN mdl_course AS c ON cp.course = c.id
JOIN mdl_user AS u ON cp.userid = u.id
WHERE c.enablecompletion = 1
ORDER BY u.username' ,array['username'], array['course_short_name'], '#.completed', null);
create table reports as (SELECT * FROM _report);
commit;
end; $$
The colpivot function, the drop table and the create table work fine in isolation, but when I create the procedure as above and call it, it throws the error "Query has no result in destination data".
Is there any way I can use colpivot in combination with several queries, as I am currently trying to do?
Use PERFORM instead of SELECT. That will execute the statement without the need to keep the result anywhere. This is what the manual says:
Sometimes it is useful to evaluate an expression or SELECT query but discard the result, for example when calling a function that has side-effects but no useful result value. To do this in PL/pgSQL, use the PERFORM statement.
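Applied to the procedure from the question, that would look roughly like this (untested sketch; the colpivot arguments are kept exactly as posted):
create or replace procedure create_report_table()
language plpgsql
as $$
begin
    drop table if exists reports;
    -- PERFORM runs colpivot() and discards its return value,
    -- so no INTO target is needed
    perform colpivot('_report',
        'select
             u.username,
             c.shortname as course_short_name,
             to_timestamp(cp.timecompleted)::date as completed
         FROM mdl_course_completions AS cp
         JOIN mdl_course AS c ON cp.course = c.id
         JOIN mdl_user AS u ON cp.userid = u.id
         WHERE c.enablecompletion = 1
         ORDER BY u.username', array['username'], array['course_short_name'], '#.completed', null);
    create table reports as (select * from _report);
    commit;
end; $$;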

DB2oC (DB2 on Cloud): Facing "attempted to modify data but was not defined as MODIFIES SQL DATA" error

I have created a very simple function in DB2oC as below, which has one UPDATE SQL statement and one SELECT SQL statement, along with MODIFIES SQL DATA. But I still get the error below, even though I have specified MODIFIES SQL DATA. I did GRANT ALL on the TEST table to my user id and also did GRANT EXECUTE ON FUNCTION to my user id, to be on the safe side. Can you please help explain what the issue could be?
I have simply invoked the function using a SELECT statement like the one below:
SELECT TESTSCHEMA.MAIN_FUNCTION() FROM TABLE(VALUES(1));
SQL Error [38002]: User defined routine "TESTSCHEMA.MAIN_FUNCTION" (specific name "SQL201211013006981") attempted to modify data but was not defined as MODIFIES SQL DATA.. SQLCODE=-577, SQLSTATE=38002, DRIVER=4.27.25
CREATE OR REPLACE FUNCTION MAIN_FUNCTION()
RETURNS VARCHAR(20)
LANGUAGE SQL
MODIFIES SQL DATA
BEGIN
    DECLARE val VARCHAR(20);
    UPDATE TEST t SET t.CONTENT_TEXT = 'test value' WHERE t.ID = 1;
    SELECT CONTENT_TEXT INTO val FROM TEST WHERE ID = 1;
    RETURN val;
END;
Appreciate your help.
When a function is defined with the MODIFIES SQL DATA clause, its usage is restricted on Db2-LUW.
These restrictions do not apply to user-defined functions that do not modify data.
For your specific example, that UDF will work when used as the sole expression on the right-hand side of an assignment statement in a compound SQL (compiled) statement.
For example:
create or replace variable my_result varchar(20) default null;
begin
set my_result = main_function();
end#
Consider using stored procedures to modify table contents, instead of user defined functions.
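For example, a minimal sketch of a procedure version could look like this (MAIN_PROCEDURE is just an illustrative name; depending on your client you may need an alternative statement terminator, as in the compound example above):
CREATE OR REPLACE PROCEDURE MAIN_PROCEDURE(OUT val VARCHAR(20))
LANGUAGE SQL
BEGIN
    -- SQL procedures default to MODIFIES SQL DATA, so the UPDATE is allowed
    UPDATE TEST t SET t.CONTENT_TEXT = 'test value' WHERE t.ID = 1;
    SELECT CONTENT_TEXT INTO val FROM TEST WHERE ID = 1;
END;
It would then be invoked with CALL MAIN_PROCEDURE(?) instead of a SELECT.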
You could also avoid using a function altogether and just use a single "data change statement":
SELECT CONTENT_TEXT
FROM NEW TABLE(
UPDATE TEST t
SET t.CONTENT_TEXT = 'test value'
WHERE t.ID = 1
)

Hana - Select without Cursor

I am doing a select in a stored procedure with a cursor, and I would like to know if I could do the same select without using a cursor.
PROCEDURE "DOUGLAS"."tmp.douglas.yahoo::testando" ( )
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
DEFAULT SCHEMA DOUGLAS
AS
BEGIN
/*****************************
procedure logic
*****************************/
declare R varchar(50);
declare cursor users FOR select * from USERS WHERE CREATE_TIME between ADD_SECONDS (CURRENT_TIMESTAMP , -7200 ) and CURRENT_TIMESTAMP;
FOR R AS users DO
CALL _SYS_REPO.GRANT_ACTIVATED_ROLE('dux.health.model.roles::finalUser',R.USER_NAME);
END FOR;
END;
Technically you could convert the result set into an ARRAY and then loop over the array - but for what?
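For completeness, a rough, untested sketch of that array approach inside the procedure body might look like the following (recent_users is an illustrative table-variable name, and USER_NAME is assumed to fit into an NVARCHAR(256)):
DECLARE user_names NVARCHAR(256) ARRAY;
DECLARE uname NVARCHAR(256);
DECLARE i INTEGER;

-- materialize only the column that is actually needed
recent_users = SELECT USER_NAME
               FROM USERS
               WHERE CREATE_TIME BETWEEN ADD_SECONDS(CURRENT_TIMESTAMP, -7200)
                                     AND CURRENT_TIMESTAMP;

-- collect the user names into an array and loop over it
user_names := ARRAY_AGG(:recent_users.USER_NAME);

FOR i IN 1 .. CARDINALITY(:user_names) DO
    uname := :user_names[:i];
    CALL _SYS_REPO.GRANT_ACTIVATED_ROLE('dux.health.model.roles::finalUser', :uname);
END FOR;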
The main problem is that you want to automatically grant permissions on any users that match your time based WHERE condition.
This is not a good idea in most scenarios.
The point of avoiding cursors is to let the DBMS optimize SQL commands: you tell the DBMS what data you want, not how to produce it.
In this example it really wouldn't make any difference performance-wise.
A much more important point is that you run SELECT * even though you only need USER_NAME, and that your R variable is declared as VARCHAR(50) (which would be wrong if you wanted to store the USER_NAME in it) but is never actually used.
The R variable in the FOR loop exists in a different validity context and actually contains the current row of the cursor.

Execute trigger function of another schema on the current schema

My problem is easy to explain with an example: I have a 'common' schema (the public one?) where I store data shared between the instances of a clustered application.
For every instance of my application, I have a role (used as the application user).
I also have a common role, app_users, with read-only privileges on the common schema, and every application role is a member of app_users.
Now my problem is: how can I set a trigger on the app_a schema that executes a function (procedure) in the common schema, but affects the (and only the) app_a tables?
I mean:
// common_scheme, dummy function to emulate MySQL's on update = now()
CREATE OR REPLACEFUNCTION update_etime() RETURNS TRIGGER AS $$
BEGIN
    NEW.etime = date_part('epoch'::text, now())::int;
    RETURN NEW;
END;
$$ language plpgsql;

// now, in the app_foo schema, I have the table:
CREATE TABLE foo_table (fid serial not null primary key unique, label char(25));

// and the trigger:
CREATE TRIGGER foo_table_update_etime BEFORE UPDATE ON foo_table FOR EACH ROW EXECUTE PROCEDURE update_etime();
// ERROR: function update_etime() does not exist

CREATE TRIGGER foo_table_update_etime BEFORE UPDATE ON foo_table FOR EACH ROW EXECUTE PROCEDURE common_scheme.update_etime();
// ERROR: function common_scheme.update_etime() does not exist
The user that will access app_foo has the execute privilege on update_etime() function in common_schema.
Any idea?
I've googled around, but the only solution I found for calling functions from other schemas is something like execute 'select * from ' || schema_name || '.table_name'; however, I don't think this will do the trick in my case, because the function must work with the 'local' schema.
Your second set of syntax should work... the one with "EXECUTE PROCEDURE common_scheme.update_etime();"
If it isn't finding the function, I'd guess that you either created it in a different schema than you think, or you haven't created it at all. (Note that your example create syntax has a bug: there is no space between "replace" and "function", which would cause an error when trying to create the function.) Try running:
\df *.update_etime
as superuser to verify that the function exists and is in the location you think it is in. HTH.
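For reference, a corrected sketch of your own example (note the space after REPLACE and the schema-qualified names) would be:
-- in the common schema
CREATE OR REPLACE FUNCTION common_scheme.update_etime() RETURNS TRIGGER AS $$
BEGIN
    NEW.etime = date_part('epoch'::text, now())::int;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- in app_foo: the trigger can reference the function across schemas,
-- and NEW still refers to the row of the local app_foo table
CREATE TRIGGER foo_table_update_etime
    BEFORE UPDATE ON foo_table
    FOR EACH ROW EXECUTE PROCEDURE common_scheme.update_etime();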