PostgreSQL Type Method - postgresql

How do I create type methods in PostgreSQL?
Let's take the following type for example:
create type Employee as (
name varchar(20),
salary integer)
How do I do this?
create method giveraise (percent integer) for Employee
begin
set self.salary = self.salary + (self.salary * percent) / 100;
end

You have been told in the comments that Postgres doesn't have type methods.
However, Postgres supports attribute notation for the execution of functions with a single parameter. This looks almost exactly like a method for the type. Consider this simple example:
CREATE OR REPLACE FUNCTION raise10(numeric)
RETURNS numeric LANGUAGE sql AS 'SELECT $1 * 1.1';
Call:
SELECT (100).raise10;
Result:
raise10
---------
110.0
A major limitation is that this only works for a function with a single parameter; there is no way to pass additional parameters like a percentage for a variable raise.
It works for composite types just as well. More about such "computed fields" in this related answer:
Store common query as column?
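To illustrate the "computed field" idea on a composite type, here is a minimal sketch (table and function names are illustrative, not from the question):

```sql
-- A function taking the row type, callable with attribute notation
CREATE TABLE emp (name text, salary numeric);
INSERT INTO emp VALUES ('foo', 100);

CREATE FUNCTION raised_salary(emp) RETURNS numeric
  LANGUAGE sql AS 'SELECT $1.salary * 1.1';

-- Both notations invoke the same function:
SELECT raised_salary(e) FROM emp e;  -- functional notation
SELECT e.raised_salary  FROM emp e;  -- attribute notation
```

Postgres treats the two notations as equivalent for functions with a single composite-type argument, which is what makes the "computed field" trick work.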
To take this one step further, one can even call an UPDATE on the row and persist the change:
CREATE TABLE employee (
name text PRIMARY KEY,
salary numeric);
INSERT INTO employee VALUES
('foo', 100)
,('bar', 200);
CREATE OR REPLACE FUNCTION giveraise10(employee)
RETURNS numeric AS
$func$
UPDATE employee
SET salary = salary * 1.1 -- constant raise of 10%
WHERE name = ($1).name
RETURNING salary;
$func$ LANGUAGE sql;
Call:
SELECT *, e.giveraise10 FROM employee e;
Result:
name | salary | giveraise10
------+--------+-------------
foo | 100 | 110.0
bar | 200 | 220.0
The SELECT displays the pre-UPDATE value for salary, but the field has actually been updated!
SELECT * FROM employee;
name | salary
------+--------
foo | 110.0
bar | 220.0
Whether it's wise to use such trickery is for you to decide. There are more efficient and transparent ways to update a table.
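For comparison, the transparent way to do the same thing is a plain UPDATE, which also allows a variable raise (the 10 here is an illustrative percentage, not from the original answer):

```sql
-- Explicit UPDATE against the employee table from the example above
UPDATE employee
SET    salary = salary * (1 + 10 / 100.0)  -- 10% raise
WHERE  name = 'foo'
RETURNING salary;
```

This states the side effect in the statement itself instead of hiding it behind a function call in a SELECT list.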

Related

Calling an insert *function* from a CTE in a SELECT query in Postgres 13.4

I'm writing up utility code to run through pg_cron, and sometimes want the routines to insert some results into a custom table at dba.event_log. I've got a basic log table as a starting point:
DROP TABLE IF EXISTS dba.event_log;
CREATE TABLE IF NOT EXISTS dba.event_log (
dts timestamp NOT NULL DEFAULT now(),
name citext NOT NULL DEFAULT '',
details citext NOT NULL DEFAULT '');
The toy example below performs a select operation, then uses that value both as the result of the outer query and as a VALUES element of an insert into the event_log:
WITH
values_cte AS (
select clock_timestamp() as ct
),
log as(
insert into event_log (
name,
details)
values (
'CTE INSERT check',
'clock = ' || (select ct::text from values_cte)
)
)
select * from values_cte;
select * from event_log;
Every time I run this, I get a new log entry, with the clock_timestamp() to make it easy to see that something is happening:
+----------------------------+------------------+---------------------------------------+
| dts | name | details |
+----------------------------+------------------+---------------------------------------+
| 2021-11-10 11:58:43.919151 | CTE INSERT check | clock = 2021-11-10 11:58:43.919821+11 |
| 2021-11-10 11:58:56.769512 | CTE INSERT check | clock = 2021-11-10 11:58:56.769903+11 |
| 2021-11-10 11:58:59.632619 | CTE INSERT check | clock = 2021-11-10 11:58:59.632822+11 |
| 2021-11-10 12:00:50.442282 | CTE INSERT check | clock = 2021-11-10 12:00:50.442646+11 |
+----------------------------+------------------+---------------------------------------+
I'll likely enrich the table later, and I'd like to make the log inserts a simple call now. Below is a simple insert function:
DROP FUNCTION IF EXISTS dba.event_log_add(citext,citext);
CREATE FUNCTION dba.event_log_add(
name_in citext,
description_in citext)
RETURNS int4
LANGUAGE sql AS
$BODY$
insert into event_log (name, details)
values (name_in, description_in)
returning 1;
$BODY$;
It seems like I should be able to rewrite the original query to call the function, like this:
WITH
values_cte AS (
select clock_timestamp() as ct
),
log as (
select * from dba.event_log_add(
'CTE event_log_add check',
'clock = ' || (select ct::text from values_cte)
)
)
select * from values_cte;
The only difference here is that the VALUES are now passed as parameters to dba.event_log_add, rather than used in an INSERT directly in the query. I get this error:
ERROR: function dba.event_log_add(unknown, text) does not exist
LINE 8: select * from dba.event_log_add(
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I've tried:
Explicit casts
Rewriting the function as a stored procedure and using CALL
Rewriting the function in PL/pgSQL, returning VOID, and running PERFORM
Nothing seemed to work. I've checked the search_path, used qualified names, checked permissions, etc. Some approaches throw errors that don't seem to apply, like the one above; others throw no error and insert no data. Run directly, the function works fine; it only blows up within the CTE.
I think I'm missing something about using a function instead of a direct INSERT. Is there a good way to do this? After looking at the docs and hunting around here for more information, I'm a bit clearer on the rules, but not entirely. If I'm reading it right, a data-modifying CTE is governed by the outer query. There are definitely subtleties I'm not grasping. Am I changing the context in some way by moving the INSERT into a function, changing how the code in the query and CTE is interpreted?
https://www.postgresql.org/docs/13/queries-with.html#QUERIES-WITH-MODIFYING
Your function expects parameters of type citext, but you are passing an untyped string literal and a text value (the result of the concatenation). You need to cast the parameters:
WITH values_cte AS (
  select clock_timestamp() as ct
), log as (
  select event_log_add('CTE event_log_add check'::citext,
                       ('clock = ' || (select ct::text from values_cte))::citext)
)
select *
from log;
Note that the final statement now selects from log: a CTE that contains only a SELECT and is never referenced by the outer query is not executed at all (unlike a data-modifying CTE), which also explains the attempts that threw no error and inserted no data.
It's probably easier to define the parameters as text, during the INSERT the casting will then be done automatically:
CREATE FUNCTION event_log_add(
name_in text,
description_in text)
RETURNS int4
LANGUAGE sql AS
$BODY$
insert into event_log (name, details)
values (name_in, description_in)
returning 1;
$BODY$;
WITH values_cte AS (
select clock_timestamp() as ct
),log as (
select event_log_add('CTE event_log_add check',
'clock = ' || (select ct::text from values_cte))
)
select *
from log;
If you want, you can add an explicit cast inside the function.
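A sketch of that variant, with text parameters and the casts moved into the function body (the casts are not strictly required here, since an assignment to a citext column coerces automatically):

```sql
-- Same function as above, but casting explicitly inside the body
CREATE OR REPLACE FUNCTION event_log_add(
  name_in text,
  description_in text)
RETURNS int4
LANGUAGE sql AS
$BODY$
insert into event_log (name, details)
values (name_in::citext, description_in::citext)
returning 1;
$BODY$;
```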

How to order results with Postgres Full Text Search function using 2 tables with pg_trgm

I have two tables:
Table author
id (pk) | first_name
1 | John
Table post
id (pk) | author_id (fk) | content
1 | 1 | test content
I've created an extra table to hold the search results from the function:
Table post_search
author_id (fk) | post_id (fk) | created_at | content | first_name
I want to be able to provide a search term from front end and full text search on both tables, specifically first_name and content.
I have enabled pg_trgm on Postgresql:
CREATE EXTENSION pg_trgm;
This is the Function:
CREATE
OR REPLACE FUNCTION public.search_posts(search text)
RETURNS SETOF post_search
LANGUAGE sql STABLE AS $function$
SELECT
P.author_id,
P.id,
P.created_at,
P.content,
A.first_name
FROM
post P
JOIN author A ON A.id = P.author_id
WHERE
search <% concat_ws(' ', first_name, content)
ORDER BY
similarity(search, concat_ws(' ', first_name, content))
LIMIT
100;
$function$
The results from this function are not ordered by first_name; they're mixed, e.g. the first result has the search keyword in the content and not in the author's first_name. Is there a way to order the results by first_name and then by content?
Also, is it possible to have a function like this without the need of creating the extra post_search table? If so, can someone assist on the function code?
Update:
I've ended up using this function instead; I added a score column and order by it.
CREATE OR REPLACE FUNCTION search_posts(search text)
RETURNS SETOF post_search
LANGUAGE sql
STABLE
AS $function$
SELECT P.author_id, P.id, P.created_at, P.content, A.first_name,
       ((similarity(search, P.content) + similarity(search, A.first_name)) / 2) as score
FROM post P
JOIN profile A ON A.id = P.author_id
WHERE search % first_name or search % content
ORDER BY score desc
LIMIT 100;
$function$
Also, is it possible to have a function like this without the need of creating the extra post_search table?
One part of your question is well defined and easy to answer. You could declare the function to return an anonymous table with a specific structure:
....
RETURNS table (author_id int, post_id int, created_at timestamptz, content text, first_name text)
Or you could declare a named composite type
create type post_search as (author_id int, post_id int, created_at timestamptz, content text, first_name text);
And then:
....
RETURNS setof post_search
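Putting both pieces together, the anonymous-table return type and an ordering that ranks author-name matches before content matches, a full function might look like this sketch (column types are assumed from the question's tables; `search` is the function parameter):

```sql
CREATE OR REPLACE FUNCTION search_posts(search text)
RETURNS TABLE (author_id int, post_id int, created_at timestamptz,
               content text, first_name text)
LANGUAGE sql STABLE AS
$func$
SELECT P.author_id, P.id, P.created_at, P.content, A.first_name
FROM   post P
JOIN   author A ON A.id = P.author_id
WHERE  search % A.first_name OR search % P.content
ORDER  BY similarity(search, A.first_name) DESC,  -- name matches rank first
          similarity(search, P.content) DESC
LIMIT  100;
$func$;
```

Sorting on two similarity keys means rows matching on first_name sort ahead of rows that only match on content, without any extra post_search table.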

Do you need surrounding parentheses in Postgres SELECT statement?

I noticed that there is different output for this
SELECT id,name,description FROM table_name;
as opposed to this
SELECT (id,name,description) FROM table_name;
Is there any big difference between the two?
What is the purpose of this?
create table table_name(id int, name text, description text);
insert into table_name
values (1, 'John', 'big one');
select (id, name, description), id, name, description
from table_name;
row | id | name | description
--------------------+----+------+-------------
(1,John,"big one") | 1 | John | big one
(1 row)
The difference is important. Columns enclosed in parentheses form a row constructor, also known as a composite value, returned in a single column. Usually, separate columns are preferred as a query result. Row constructors are necessary when a row as a whole is needed (e.g. in the VALUES of the above INSERT command). They are also used as values of composite types.
The following query actually is selecting a ROW type value:
SELECT (id, name, description) FROM table_name;
This syntax by itself would not be very useful, and more typically you would use this if you were doing an INSERT INTO ... SELECT into a table which had a row type in its definition. Here is an example of how you might use this.
CREATE TYPE your_type AS (
id INTEGER,
name VARCHAR,
description VARCHAR
);
CREATE TABLE your_table (
id INTEGER,
t your_type
);
INSERT INTO your_table (id, t)
SELECT 1, (id, name, description)
FROM table_name;
From the Postgres documentation on composite types:
Whenever you create a table, a composite type is also automatically created, with the same name as the table, to represent the table's row type.
So you have already been working with row types, whether or not you knew it.
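To get the individual fields back out of a composite value, wrap the column in parentheses and use dot notation; a quick illustration against the your_table example above:

```sql
-- Expand the composite column back into ordinary columns:
SELECT id, (t).name, (t).description FROM your_table;

-- Or expand all fields of the composite at once:
SELECT id, (t).* FROM your_table;
```

The parentheses are required so the parser reads `t` as a value rather than a table alias.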

Import CSV Table Definitions into PostgreSQL

I have a file of table definitions, in the following format
Table Name Field Name Field Data Type
ATableName1 AFieldName1 VARCHAR2
ATableName1 AFieldName2 NUMBER
...
ATableNameX AFieldNameX1 TIMESTAMP(6)
Is there any easy way to import this into Postgres to automatically create the tables?
What if I split the file up into individual tables, and just had a csv of field names/data types for each table?
Field Name Field Data Type
AFieldName1 VARCHAR2
AFieldName2 NUMBER
My searching has only yielded data import via copy, and table creation (based on data) using pgfutter.
Mind that I changed varchar2 to varchar and number to integer. Also, you have TSV; to use it, change chr(44) (comma) in my code to chr(9) (tab). Mind that I don't check for SQL injection. Otherwise, here's a working example:
t=# do
$$
declare
_r record;
begin
for _r in (
with t(l) as (values('ATableName1,AFieldName1i, VARCHAR
ATableName1,AFieldName2,INTEGER
ATableNameX,AFieldNameX1,TIMESTAMP(6)'::text)
)
, r as (select unnest(string_to_array(l,chr(10))) rw from t)
, p as (select split_part(rw,chr(44),1) tn, split_part(rw,chr(44),2) cn,split_part(rw,chr(44),3) tp from r)
select tn||' ('||string_agg(cn||' '||tp, ', ')||')' s from p
group by tn
) loop
raise info '%','create table '||_r.s;
execute 'create table '||_r.s;
end loop;
end;
$$
;
INFO: create table ATableNameX (AFieldNameX1 TIMESTAMP(6))
INFO: create table ATableName1 (AFieldName1i VARCHAR, AFieldName2 INTEGER)
DO
Time: 16.743 ms
t=# \dt atablename*
List of relations
Schema | Name | Type | Owner
--------+-------------+-------+-------
public | atablename1 | table | vao
public | atablenamex | table | vao
SQL is your friend; it is very expressive. You can construct your table definitions using the string_agg function. Have a look at the example here:
http://sqlfiddle.com/#!17/0fe14/1
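If the definitions live in an actual file rather than a string literal, the same idea works by loading the file into a staging table first; a sketch (the file path and staging-table name are illustrative):

```sql
-- Staging table for the raw definition rows
CREATE TABLE table_defs (table_name text, field_name text, field_type text);

-- Load the TSV file (server-side path; use DELIMITER ',' for CSV)
COPY table_defs FROM '/tmp/table_defs.tsv' WITH (FORMAT csv, DELIMITER E'\t');

-- Generate and execute one CREATE TABLE per table
DO $$
DECLARE
  _r record;
BEGIN
  FOR _r IN
    SELECT format('CREATE TABLE %I (%s)', table_name,
                  string_agg(format('%I %s', field_name, field_type), ', ')) AS ddl
    FROM table_defs
    GROUP BY table_name
  LOOP
    EXECUTE _r.ddl;
  END LOOP;
END
$$;
```

Using format('%I') quotes the table and column identifiers safely; note the type part is still interpolated as-is, so the input file must be trusted.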

Duplicate single database record

Hello, what is the easiest way to duplicate a DB record within the same table?
My problem is that the table where I am doing this has many columns (100+), and I don't like how the solution looks. Here is what I do (this is inside a plpgsql function):
...
1. duplicate record
INSERT INTO history
SELECT NEXTVAL('history_id_seq'), col_1, col_2, ... , col_100
FROM history
WHERE history_id = 1234
ORDER BY datetime DESC
LIMIT 1
RETURNING history_id INTO new_history_id;
2. update some columns
UPDATE history
SET
col_5 = 'test_5',
col_23 = 'test_23',
datetime = CURRENT_TIMESTAMP
WHERE history_id = new_history_id;
Here are the problems I am attempting to solve
Listing all these 100+ columns looks lame
When new column is added eventually the function should be updated too
On separate DB instances the column order might differ, which would cause the function to fail
I am not sure if I can list them once more (solving issue 3) like insert into <table> (<columns_list>) values (<query>) but then the query looks even uglier.
I would like to achieve something like 'insert into ', but this seems impossible: the unique primary key constraint will raise a duplication error.
Any suggestions?
Thanks in advance for your time.
This isn't pretty or particularly optimized, but there are a couple of ways to go about this. Ideally, you might want to do this in an UPDATE trigger, though you could implement a duplication function something like this:
-- create source table
CREATE TABLE history (history_id serial not null primary key, col_2 int, col_3 int, col_4 int, datetime timestamptz default now());
-- add some data
INSERT INTO history (col_2, col_3, col_4)
SELECT g, g * 10, g * 100 FROM generate_series(1, 100) AS g;
-- function to duplicate record
CREATE OR REPLACE FUNCTION fn_history_duplicate(p_history_id integer) RETURNS SETOF history AS
$BODY$
DECLARE
cols text;
insert_statement text;
BEGIN
-- build list of columns
SELECT array_to_string(array_agg(column_name::name), ',') INTO cols
FROM information_schema.columns
WHERE (table_schema, table_name) = ('public', 'history')
AND column_name <> 'history_id';
-- build insert statement
insert_statement := 'INSERT INTO history (' || cols || ') SELECT ' || cols || ' FROM history WHERE history_id = $1 RETURNING *';
-- execute statement
RETURN QUERY EXECUTE insert_statement USING p_history_id;
RETURN;
END;
$BODY$
LANGUAGE plpgsql;
-- test
SELECT * FROM fn_history_duplicate(1);
history_id | col_2 | col_3 | col_4 | datetime
------------+-------+-------+-------+-------------------------------
101 | 1 | 10 | 100 | 2013-04-15 14:56:11.131507+00
(1 row)
As I noted in my original comment, you might also take a look at the colnames extension as an alternative to querying the information schema.
You don't need the update anyway, you can supply the constant values directly in the SELECT statement:
INSERT INTO history
SELECT NEXTVAL('history_id_seq'),
col_1,
col_2,
col_3,
col_4,
'test_5',
...
'test_23',
...,
col_100
FROM history
WHERE history_id = 1234
ORDER BY datetime DESC
LIMIT 1
RETURNING history_id INTO new_history_id;