Filter bigint values on insert in PostgreSQL

I have two tables in PostgreSQL with the same schema; the only difference is that in one of the tables the id field is of type bigint. The schema of the table I need to fill with data looks like this:
create table test_int_table (
    id          int,
    description text,
    hash_code   int
);
I need to copy the data from test_table (with the bigint id) to public.test_int_table, and the values that fall outside the int range should be filtered out. How can I track those values without hardcoding the range?
I can do something like this, but I would like to build more generic solution:
insert into test_int_table
select * from test_table as test
where test.id not between 2147483647 and 9223372036854775808
By generic I mean without constraints on the column names or their number: if I have multiple columns of type bigint in other tables, how can I filter the values of all of those columns generically (without specifying column names)?

There is no generic solution, as far as I can tell.
But I would write it as:
INSERT INTO test_int_table
SELECT *
FROM test_table AS t
WHERE t.id BETWEEN -2147483648 AND 2147483647;

You can do something like this if you want to track the out-of-range values.
Create a function like this:
CREATE OR REPLACE FUNCTION convert_to_integer(v_input bigint)
RETURNS integer AS $$
DECLARE
    v_int_value integer DEFAULT NULL;
BEGIN
    BEGIN
        v_int_value := v_input::integer;
    EXCEPTION WHEN OTHERS THEN
        RAISE NOTICE 'Invalid integer value: "%". Returning NULL.', v_input;
        RETURN NULL;
    END;
    RETURN v_int_value;
END;
$$ LANGUAGE plpgsql;
and write a query like this:
INSERT INTO test_int_table
SELECT *
FROM test_table AS t
WHERE convert_to_integer(t.id) IS NOT NULL;
Or you can modify the function to return 0 instead.
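For example, a minimal variant (the name convert_to_integer_or_zero is illustrative) that maps out-of-range values to 0 instead of NULL, catching only the relevant error condition:
CREATE OR REPLACE FUNCTION convert_to_integer_or_zero(v_input bigint)
RETURNS integer AS $$
BEGIN
    RETURN v_input::integer;
EXCEPTION WHEN numeric_value_out_of_range THEN
    -- out-of-range bigint values become 0 instead of NULL
    RETURN 0;
END;
$$ LANGUAGE plpgsql;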

Related

Postgres - function get the values from one query then insert as dynamic sql string

I am building a function in PostgreSQL: basically, it receives an id from one table, rebuilds the INSERT statement for that row, and stores it as a string in a column of another table.
I have this table; in the insert_query column I want to store the INSERT statement of one row, with its actual values:
create table get_insert (tabname varchar(30), insert_query varchar(5000));
I want to store something like this in the insert_query column:
Insert into baseball_table (code, name) select '01','Robet';
Currently my function is storing just this, which doesn't work, since I need to store the real values:
INSERT INTO baseball_table(code,name) SELECT code,name FROM baseball_table WHERE id=1;
This is my function:
CREATE OR REPLACE FUNCTION get_values(
    _id character varying
)
RETURNS boolean
LANGUAGE plpgsql
VOLATILE PARALLEL UNSAFE
AS $function$
DECLARE
    sql_query varchar;
BEGIN
    sql_query := 'INSERT INTO baseball_table(code,name) SELECT code,name FROM core.brand WHERE id=' || _id || ';';
    INSERT INTO core.clone_brand (tabname, insert_query)
    VALUES ('brand', sql_query);
    RETURN true;
END;
$function$;
Which is the best way to get the real values without making variables of each column?
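One possible sketch, using only built-ins (row_to_json(), json_each_text(), quote_ident(), quote_nullable()): serialize the selected row to key/value pairs and rebuild the literal statement from them, so no per-column variables are needed. Taking the id as integer and emitting every column of core.brand (including id) are simplifying assumptions here:
CREATE OR REPLACE FUNCTION get_values(_id integer)
RETURNS boolean
LANGUAGE plpgsql
AS $function$
DECLARE
    sql_query varchar;
BEGIN
    -- rebuild a literal INSERT from the row's actual values;
    -- quote_ident()/quote_nullable() handle identifier and value quoting
    SELECT format('INSERT INTO baseball_table (%s) VALUES (%s);',
                  string_agg(quote_ident(j.key), ', '),
                  string_agg(quote_nullable(j.value), ', '))
    INTO sql_query
    FROM core.brand AS b,
         json_each_text(row_to_json(b)) AS j
    WHERE b.id = _id;

    INSERT INTO core.clone_brand (tabname, insert_query)
    VALUES ('brand', sql_query);
    RETURN true;
END;
$function$;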

Postgres: inserting dynamic number of columns and values in table

I am very new to PostgreSQL. My requirement is to pass a dynamic number of (columnId, columnValue) pairs and insert each combination into a table (example: an employeeId, employeeName combination). The length of the list can be anything. I am thinking of building a dynamic query on the code side and passing it as a string to a function that executes the statement. Is there a better approach to this problem? Any example or idea will be much appreciated.
If you are allowed to pass that information as a structured JSON value, this gets quite easy: Postgres can map a JSON value to a table type using the function json_populate_record().
Sample table:
create table some_table
(
    id          integer primary key,
    some_name   text,
    some_date   date,
    some_number integer
);
The insert function:
create function do_insert(p_data text)
  returns void
as
$$
  insert into some_table (id, some_name, some_date, some_number)
  select (json_populate_record(null::some_table, p_data::json)).*;
$$
language sql;
Then you can use:
select do_insert('{"id": 42, "some_name": "Arthur"}');
select do_insert('{"id": 1, "some_value": 42}');
Note that columns that are not part of the passed JSON string are explicitly set to NULL using this approach.
If the passed string contains column names that do not exist, they are simply ignored, so
select do_insert('{"id": 12, "some_name": "Arthur", "not_there": 123}');
will ignore the not_there "column".
Online example: https://rextester.com/JNIBL25827
Edit
A similar approach can be used for updating:
create function do_update(p_data text)
  returns void
as
$$
  update some_table
     set some_name = t.some_name,
         some_date = t.some_date,
         some_number = t.some_number
    from json_populate_record(null::some_table, p_data::json) as t
   where some_table.id = t.id;
$$
language sql;
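Usage mirrors do_insert, e.g. (values are illustrative):
-- updates the row whose id matches the JSON value
select do_update('{"id": 42, "some_name": "Zaphod"}');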
or using insert on conflict to cover both use cases with one function:
create function do_upsert(p_data text)
returns void
as
$$
insert into some_table (id, some_name, some_date, some_number)
select (json_populate_record(null::some_table, p_data::json)).*
on conflict (id) do update
set id = excluded.id,
some_name = excluded.some_name,
some_date = excluded.some_date,
some_number = excluded.some_number
$$
language sql;
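For example (values are illustrative):
-- the first call inserts row 42, the second updates it
select do_upsert('{"id": 42, "some_name": "Arthur"}');
select do_upsert('{"id": 42, "some_name": "Zaphod"}');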

PostgreSQL: Cast Data Type from Text to Bigint when using WHERE IN

I have a problem when trying to cast from TEXT to BIGINT with WHERE ... IN inside a PostgreSQL function. It always gives:
operator does not exist: bigint = text
I tried casting the variable in the query, but I still get the same error. This is the example query:
DECLARE
    -- $1 params text
BEGIN
    SELECT * FROM table_a
    WHERE colId IN ($1);  -- the error is raised here; colId is bigint
END
/*Call the procedure*/
SELECT my_function('1,2,3,4,5');
How do we cast the variable? Thanks!
Using a string for a list of ids is the wrong design. You can use arrays in PostgreSQL.
For example
CREATE OR REPLACE FUNCTION foo(VARIADIC ids int[])
RETURNS SETOF table_a AS $$
SELECT * FROM table_a WHERE id = ANY($1)
$$ LANGUAGE sql;
SELECT foo(1,2,3);
But wrapping simple SQL in functions usually looks like broken design. Procedures should not replace views.
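That said, if the input really must arrive as a single comma-separated string (as in the original call), a sketch using the built-in string_to_array():
-- cast the text parameter to bigint[] and compare with ANY
SELECT *
FROM table_a
WHERE colId = ANY (string_to_array('1,2,3,4,5', ',')::bigint[]);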

Is it possible to write a postgres function that will handle a many to many join?

I have a job table and an industries table. Jobs and industries have a many-to-many relationship via a join table called industriesjobs. Both tables have uuid as their primary key. My question is twofold. First, is it feasible to write two functions to insert data like this? If it is, how do I express an array of the uuid column type? I'm unsure of the syntax.
CREATE OR REPLACE FUNCTION linkJobToIndustries(jobId uuid, industiresId uuid[]) RETURNS void AS $$
DECLARE
industryId uuid[];
BEGIN
FOREACH industryId SLICE 1 IN ARRAY industriesId LOOP
INSERT INTO industriesjobs (industry_id, job_id) VALUES (industryId, jobId);
END LOOP;
RETURN;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION insertJobWithIndistries(orginsation varchar, title varchar, addressId uuid, industryIds uuid[]) RETURNS uuid AS $$
DECLARE
jobId uuid;
BEGIN
INSERT INTO jobs ("organisation", "title", "address_id") VALUES (orginsation, title, addressId) RETURNING id INTO jobId;
SELECT JobbaLinkJobToIndustries(jobId, industryIds);
END;
$$ LANGUAGE plpgsql;
SELECT jobId FROM insertJobWithIndistries(
'Acme Inc'::varchar,
'Bomb Tester'::varchar,
'0030cfb3-1a03-4c5a-9afa-6b69376abe2e',
{ 19c2e0ee-acd5-48b2-9fac-077ad4d49b19, 21f8ffb7-e155-4c8f-acf0-9e991325784, 28c18acd-99ba-46ac-a2dc-59ce952eecf2 }
);
Thanks in advance.
Key to a solution is the function unnest(), which (per documentation) can:
expand an array to a set of rows
combined with a data-modifying CTE.
A simple query does the job:
WITH ins_job AS (
INSERT INTO jobs (organisation, title, address_id)
SELECT 'Acme Inc', 'Bomb Tester', '0030cfb3-1a03-4c5a-9afa-6b69376abe2e' -- job-data here
RETURNING id
)
INSERT INTO industriesjobs (industry_id, job_id)
SELECT indid, id
FROM ins_job i -- that's a single row, so a CROSS JOIN is OK
, unnest('{19c2e0ee-acd5-48b2-9fac-077ad4d49b19
, 21f8ffb7-e155-4c8f-acf0-9e9913257845
, 28c18acd-99ba-46ac-a2dc-59ce952eecf2}'::uuid[]) indid; -- industry IDs here
Also demonstrating proper syntax for an array of uuid. (White space between elements and separators is irrelevant while not inside double-quotes.)
One of your UUIDs was one character short:
21f8ffb7-e155-4c8f-acf0-9e991325784
Must be something like:
21f8ffb7-e155-4c8f-acf0-9e9913257845 -- one more character
If you need functions, you can do that, too:
CREATE OR REPLACE FUNCTION link_job_to_industries(_jobid uuid, _indids uuid[])
RETURNS void AS
$func$
INSERT INTO industriesjobs (industry_id, job_id)
SELECT _indid, _jobid
FROM unnest(_indids) _indid;
$func$ LANGUAGE sql;
Etc.
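For example, the second function could wrap the same CTE (a sketch; the function and parameter names are illustrative):
CREATE OR REPLACE FUNCTION insert_job_with_industries(
    _organisation text, _title text, _address_id uuid, _indids uuid[])
RETURNS uuid AS
$func$
   WITH ins_job AS (
      INSERT INTO jobs (organisation, title, address_id)
      VALUES (_organisation, _title, _address_id)
      RETURNING id
      )
   , ins_links AS (
      INSERT INTO industriesjobs (industry_id, job_id)
      SELECT indid, i.id
      FROM   ins_job i, unnest(_indids) indid
      )
   SELECT id FROM ins_job;
$func$ LANGUAGE sql;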
Related:
Insert data in 3 tables at a time using Postgres
How to insert multiple rows using a function in PostgreSQL

Calling set-returning function with each element in array

I have a set-returning function (SRF) that accepts an integer argument and returns a set of rows from a table. I call it using SELECT * FROM tst.mySRF(3);, and then manipulate the returned value as if it were a table.
What I would like to do is to execute it on each element of an array; however, when I call it using SELECT * FROM tst.mySRF(unnest(array[3,4]));, an error is returned "set-valued function called in context that cannot accept a set". If I instead call it using SELECT tst.mySRF(unnest(array[3,4]));, I get a set of the type tst.tbl.
Table definition:
DROP TABLE tst.tbl CASCADE;
CREATE TABLE tst.tbl (
id serial NOT NULL,
txt text NOT NULL,
PRIMARY KEY (id)
);
INSERT INTO tst.tbl(txt) VALUES ('a'), ('b'), ('c'), ('d');
Function definition:
CREATE OR REPLACE FUNCTION tst.mySRF(
IN p_id integer
)
RETURNS setof tst.tbl
LANGUAGE plpgsql
AS $body$
DECLARE
BEGIN
RETURN QUERY
SELECT id, txt
FROM tst.tbl
WHERE id = p_id;
END;
$body$;
Calls:
SELECT * FROM tst.mySRF(3) returns a table, as expected.
SELECT tst.mySRF(unnest(array[3,4])) returns a table with a single column of the type tst.tbl, as expected.
SELECT * FROM tst.mySRF(unnest(array[3,4])) returns the error described above, I had expected a table.
To avoid the "table of a single column" problem, you need to explicitly expand the SRF results with the (row).* notation:
SELECT (tst.mySRF(unnest(array[3,4]))).*;
If I understood @depesz correctly, LATERAL provides a more efficient or more straightforward way to achieve the same result.
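For reference, a sketch of that LATERAL form (available since PostgreSQL 9.3):
SELECT t.*
FROM unnest(array[3,4]) AS u(id)
CROSS JOIN LATERAL tst.mySRF(u.id) AS t;  -- each array element drives one SRF call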