DB2 function accepting rows

I have a function that accepts an id of type VARCHAR. The id represents the primary key. But sometimes the primary key consists of two or more columns. So, is it possible to make the function more generic, so that the input parameter is the object (row) from which the key should be taken, and the type is resolved inside the function?
CREATE OR REPLACE FUNCTION "CHECK_VALID"
(ID VARCHAR(8))
RETURNS BOOLEAN
LANGUAGE SQL
SPECIFIC SQL23562233244578
RETURN
ID IS ...
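To my knowledge, a DB2 SQL function cannot accept an arbitrary row as a parameter. A commonly used workaround is overloading: one function per key shape, sharing the same name. A minimal sketch (the two-column variant and the column names are assumptions):

```sql
-- One-column primary key.
CREATE OR REPLACE FUNCTION CHECK_VALID (ID VARCHAR(8))
  RETURNS BOOLEAN
  LANGUAGE SQL
  RETURN ID IS NOT NULL;

-- Two-column (composite) primary key: same name, different signature.
CREATE OR REPLACE FUNCTION CHECK_VALID (ID1 VARCHAR(8), ID2 VARCHAR(8))
  RETURNS BOOLEAN
  LANGUAGE SQL
  RETURN ID1 IS NOT NULL AND ID2 IS NOT NULL;
```

DB2 resolves the call by the argument list, so callers can pass whichever key shape their table uses.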

Related

How can generic functions be used for computed fields in Hasura?

I have a logs table which contains all the actions (created, updated) taken by operators (admin users).
Two of its columns (indexed as hash), target_entity and target_id, respectively store:
table name: the name of the table the action was taken on.
record id: the id of the added or updated record in the target table.
So, what I am trying to achieve:
I would like to add a computed field named e.g. logs which depends on a function;
FUNCTION public."fetchLogs"(
referenceId integer,
referenceName TEXT
)
The first parameter is the current table's primary key.
I'm not sure if I can automatically send the primary key as the first argument,
so it should probably be something like table_row table instead.
The second parameter is a static value, the table's name, which I plan to pass as a literal argument.
This function returns a JSON object;
RETURNS json LANGUAGE plpgsql STABLE
AS $function$
It should return log records related to this record.
At this point there are two things that need to be tackled:
Since the first parameter is the reference (primary) key, I don't know if I can just use the primary key as an argument. I'm guessing I need to use something like table_row anytable (if that's a thing) and then table_row.id.
In the Hasura console, the Add Computed Field > Function Name selector does not list this function, I'm guessing because the function doesn't explicitly indicate which table it operates on.
What I'm looking for is whether this is achievable, or whether there is a better practice for this kind of thing.
Maybe I need a wrapper function for each table that needs this computed column, but I'm not sure whether that's possible or how it could be done.
P.S. In case you are wondering: yes, all primary keys have the same name and type. All tables (that will use this computed column) have a primary key named id of type integer.
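The per-table wrapper idea from the question could be sketched like this, assuming Hasura's requirement that a computed-field function take the table's row type as its first argument; the table name invoices and the json_agg result shape are assumptions, not part of the original question:

```sql
-- Hypothetical wrapper for one table ("invoices"). Hasura can list this
-- function because its first argument is the table's row type.
CREATE OR REPLACE FUNCTION public."invoiceLogs"(invoice_row invoices)
RETURNS json
LANGUAGE sql STABLE
AS $$
  SELECT coalesce(json_agg(l), '[]'::json)
  FROM logs l
  WHERE l.target_entity = 'invoices'   -- the static table name
    AND l.target_id = invoice_row.id;  -- the row's primary key
$$;
```

Each table that needs the computed column would get its own thin wrapper with the table name baked in; the body stays a one-liner, so the boilerplate per table is small.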

insert function with side-effects: how to take insert parameter?

I'm interested in writing a Postgres function that inserts a new row to e.g. an invoice table, and performs some side effects based on the result of the insertion. My invoice table has some columns with default values (e.g. auto-generated primary key column id), and some that are optional.
I'm wondering if there's a way to take a parameter that represents a row of the invoice table, possibly without default and optional fields, and insert that value directly as a row.
CREATE OR REPLACE FUNCTION public.insert_invoice(new_invoice invoice)
RETURNS uuid
LANGUAGE sql
AS $$
  WITH invoice_insert_result AS (
    -- This fails: new_invoice has type invoice, but expected type uuid
    -- (because it thinks we want to put `new_invoice` in the "id" column)
    INSERT INTO invoice VALUES (new_invoice)
    RETURNING id
  )
  -- Use the result to perform side-effects
  SELECT invoice_insert_result.id FROM invoice_insert_result
$$;
I know this is possible to do by replicating the schema of the invoice table in the list of parameters of the function, however I'd prefer not to do that since it would mean additional boilerplate and maintenance burden.
The uuid here is a value that is generated automatically (by the column default), so you are not supposed to supply a uuid value in the insert.
The target column names can be listed in any order. If no list of
column names is given at all, the default is all the columns of the
table in their declared order; or the first N column names, if there
are only N columns supplied by the VALUES clause or query. The values
supplied by the VALUES clause or query are associated with the
explicit or implicit column list left-to-right.
https://www.postgresql.org/docs/current/sql-insert.html
The quoted part means that your INSERT command can either explicitly mention a column list, or, if no column list is given, the VALUES clause (or query) must supply a value for every column of the table.
To achieve your intended result, the INSERT command must specify a column list. Without one, you would have to supply a uuid value yourself, but you can't, since the uuid is auto-generated. The same would apply to a table with a bigserial column: you don't insert a value into it, because it is auto-generated.
The remaining, non-automatic columns can be aggregated into a custom composite type.
Demo:
create type inv_insert_template as (receiver text, base_amount numeric,tax_rate numeric);
full function:
CREATE OR REPLACE FUNCTION public.insert_invoice(new_invoice inv_insert_template)
RETURNS bigint
LANGUAGE sql
AS $$
  WITH invoice_insert_result AS (
    INSERT INTO invoices (receiver, base_amount, tax_rate)
    VALUES (new_invoice.receiver,
            new_invoice.base_amount,
            new_invoice.tax_rate)
    RETURNING inv_no
  )
  SELECT invoice_insert_result.inv_no FROM invoice_insert_result;
$$;
call it: select * from public.insert_invoice(row('person_c', 1000, 0.1));
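For completeness, you can also keep the table's own row type as the parameter, as the question originally wanted. Expanding the composite with `(new_invoice).*` is valid SQL, but it supplies an explicit value for every column, so column defaults such as the auto-generated id do not fire; whatever is inside the composite (typically NULL) is inserted instead. A hedged sketch of that trade-off:

```sql
-- Sketch: accepts the table's row type directly. Note the caveat below.
CREATE OR REPLACE FUNCTION public.insert_invoice(new_invoice invoice)
RETURNS uuid
LANGUAGE sql
AS $$
  INSERT INTO invoice
  SELECT (new_invoice).*   -- expands the composite into one value per column
  RETURNING id;            -- caveat: id is taken from new_invoice as-is,
                           -- the column default is NOT applied
$$;
```

So this variant only works if the caller fills in the generated fields (or the function overwrites them) before the insert; otherwise the explicit column list from the answer above is the safer approach.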

PostgreSQL - return n-sized varchar from function

As I found in the documentation:
Parenthesized type modifiers (e.g., the precision field for type
numeric) are discarded by CREATE FUNCTION
Are there any alternatives to return varchar(N) type from plpgsql function?
Question update:
In the attached screenshot, the Name column is recognised as varchar(128), while the Number column is recognised as unsized varchar.
The f_concat function returns: cast(res as varchar(255));
You can preserve the type modifier for a function result by creating a domain. Postgres will use the underlying varchar(N) type when sending column descriptions to your client:
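A minimal sketch of the domain approach; the domain name and the two-argument f_concat signature are illustrative, not taken from the question:

```sql
-- The domain carries the varchar(255) modifier, and Postgres reports the
-- underlying varchar(255) in the result-set column description.
CREATE DOMAIN varchar_255 AS varchar(255);

CREATE OR REPLACE FUNCTION f_concat(a text, b text)
RETURNS varchar_255
LANGUAGE sql
AS $$ SELECT cast(a || b AS varchar_255) $$;
```

Unlike a plain `RETURNS varchar(255)`, whose type modifier CREATE FUNCTION discards, the domain's modifier survives into the function's declared return type.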

postgresql exception catching or error handling in postgresql

In the code below I have created a sample table and written a stored function with exception handling. The problem is that if I insert integer values into the columns name and email, it executes anyway. If I pass integer values for the name and email columns, it should throw an exception saying that the data types being passed are wrong for those columns.
Can anyone help me?
CREATE TABLE people
(
  id integer NOT NULL,
  name text,
  email text,
  CONSTRAINT people_pkey PRIMARY KEY (id)
);
CREATE OR REPLACE FUNCTION test() RETURNS integer AS $$
BEGIN
  BEGIN
    INSERT INTO people(id, name, email) VALUES (1, 5, 6);
  EXCEPTION
    WHEN OTHERS THEN RETURN -1;
  END;
  RETURN 1;
END
$$ LANGUAGE plpgsql;
select * from test();
select * from people;
This is the normal behavior, and it's not related to exception or error handling.
Assigning a numeric value to a text field in an SQL query works seamlessly, because PostgreSQL applies an implicit cast to the numeric literal (this also works with just about any datatype, since they all have a text representation through their I/O conversion routine).
This is tangentially mentioned in the doc for CREATE CAST:
It is normally not necessary to create casts between user-defined
types and the standard string types (text, varchar, and char(n), as
well as user-defined types that are defined to be in the string
category). PostgreSQL provides automatic I/O conversion casts for
that. The automatic casts to string types are treated as assignment
casts, while the automatic casts from string types are explicit-only.
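If the goal is for wrongly typed values to be rejected, one option (a sketch, not the only approach) is to move the values into typed function parameters. Function-call resolution only applies implicit casts, and as the quote notes, the automatic casts to string types are assignment casts, so they apply inside the INSERT but not when matching call arguments. The wrapper name and parameters below are hypothetical:

```sql
-- Hypothetical wrapper: typed parameters instead of literals in the body.
CREATE OR REPLACE FUNCTION insert_person(p_id integer, p_name text, p_email text)
RETURNS integer AS $$
BEGIN
  INSERT INTO people(id, name, email) VALUES (p_id, p_name, p_email);
  RETURN 1;
EXCEPTION
  WHEN OTHERS THEN RETURN -1;
END
$$ LANGUAGE plpgsql;

-- select insert_person(1, 5, 6);
-- likely fails with "function insert_person(integer, integer, integer)
-- does not exist", because integer is not implicitly cast to text during
-- function-call resolution.
```

With this shape, a mismatched call is rejected before the function body ever runs, which is closer to the behavior the question asks for.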

Call PL/pgSQL function multiple times with SELECT results

I have a PL/pgSQL function which requires one input parameter which is the primary key of the table on which it works. I call it as follows:
select myFunction('0001');
It then does some calculations on the data in the row identified by '0001' of a particular table and performs an UPDATE.
How can I call the function repeatedly for each primary key returned by a query? Something like the following:
select myFunction(select ID from theTable);
Maybe you should call the function as follows:
select myfunction(id) from thetable;
where id is the primary key of the table. The function is then invoked once per row returned by the query.
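If this per-row call happens inside another PL/pgSQL block rather than a plain SELECT, PERFORM is the idiomatic way to run it while discarding the results; a short sketch, with the function and table names taken from the question:

```sql
DO $$
BEGIN
  -- Invokes myFunction once for each id in theTable, discarding the
  -- return values; the UPDATE side effect still runs for every row.
  PERFORM myFunction(id) FROM theTable;
END
$$;
```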