select/delete cannot use DEFAULT in postgresql?

begin;
create table test101(col1 int default 2, col2 text default 'hello world');
insert into test101 values (1,default);
insert into test101 values (default,default);
insert into test101 values (default,'dummy');
insert into test101 values (5,'dummy');
commit;
UPDATE: OK.
update test101 set col2 = default where col1 = 4;
SELECT and DELETE: not OK.
select * from test101 where col1 = COALESCE (col1,default);
delete from test101 where col1 = COALESCE (col1,default);
error code:
ERROR: 42601: DEFAULT is not allowed in this context
LINE 1: delete from test101 where col1 = COALESCE (col1,default);
^
LOCATION: transformExprRecurse, parse_expr.c:285
Also tried: delete from test101 where col1 = default;
The default value is not easy to find at query time. There is an existing question, Get the default values of table columns in Postgres?, so wanting a select/delete that works with the default is not that weird.

In the question you linked to they do:
SELECT column_name, column_default
FROM information_schema.columns
WHERE (table_schema, table_name) = ('public', 'test101')
ORDER BY ordinal_position;
which produces something like:
 column_name |   column_default
-------------+---------------------
 col1        | 2
 col2        | 'hello world'::text
Maybe you can combine this query with yours? (But I would not recommend it, because ...)
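For what it's worth, here is one way the two could be combined; a hedged sketch that only works when the stored default is a plain literal (a default such as nextval(...) or now() is stored as an expression string, and the cast below would fail):
DELETE FROM test101
WHERE col1 = (
    SELECT column_default::int
    FROM information_schema.columns
    WHERE (table_schema, table_name, column_name)
        = ('public', 'test101', 'col1')
);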

Postgres EXECUTE with CTE

Is it possible to EXECUTE a prepared statement using parameters you'd get from a CTE?
The samples below are simplified versions of my code, but they replicate exactly the problem I have.
Here's how far I've been able to go without a CTE:
BEGIN;
CREATE TEMPORARY TABLE testTable
(
col1 NUMERIC,
col2 TEXT
) ON COMMIT DROP;
INSERT INTO testTable
VALUES (1, 'foo'), (2, 'bar');
PREPARE myStatement AS
WITH cteTable AS
(
SELECT col1, col2
FROM testTable
WHERE col1 = $1
)
SELECT col2 FROM cteTable;
EXECUTE myStatement(2);
DEALLOCATE myStatement;
COMMIT;
Here's the result:
col2
bar
And now, here's what I am trying to achieve:
BEGIN;
CREATE TEMPORARY TABLE testTable
(
col1 NUMERIC,
col2 TEXT
) ON COMMIT DROP;
INSERT INTO testTable
VALUES (1, 'foo'), (2, 'bar');
PREPARE myStatement AS
WITH cteTable AS
(
SELECT col1, col2
FROM testTable
WHERE col1 = $1
)
SELECT col2 FROM cteTable;
-- Using a CTE here to get the parameters for the prepared statement
WITH parameters AS
(
SELECT 2 val
)
EXECUTE myStatement(SELECT val FROM parameters);
DEALLOCATE myStatement;
COMMIT;
The error message I get is
Syntax error at or near EXECUTE
Even if I try to run the EXECUTE part without attempting to use the CTE values, I still get the same error message.
As I haven't been able to find anyone else with the same issue despite my research, I guess I could be doing it wrong. If so, could someone please point me in the right direction?
Thanks
As Vao Tsun commented, this is not possible: EXECUTE is a top-level statement and cannot be embedded in, or parameterized by, a CTE.
I have converted my prepared statement to a function that returns a table, then used a CTE to SELECT and UNION the function's results for multiple parameters.
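A minimal sketch of that workaround (illustrative names, and testTable from the example must still exist; here each CTE row drives one function call via an implicit lateral join, rather than explicit UNIONs):
CREATE OR REPLACE FUNCTION my_statement(p_col1 NUMERIC)
RETURNS TABLE (col2_out TEXT) AS $$
    SELECT col2
    FROM testTable
    WHERE col1 = p_col1;
$$ LANGUAGE sql;
-- the CTE supplies the parameters, one function call per row
WITH parameters AS
(
    SELECT 2 AS val
    UNION ALL
    SELECT 1 AS val
)
SELECT f.col2_out
FROM parameters p, my_statement(p.val) AS f;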

Postgresql, select a "fake" row

In Postgres 8.4 or higher, what is the most efficient way to get a row of data populated by defaults without actually creating the row? E.g., as a transaction (pseudocode):
create table "mytable"
(
id serial PRIMARY KEY NOT NULL,
parent_id integer NOT NULL DEFAULT 1,
random_id integer NOT NULL DEFAULT random()
)
begin transaction
fake_row = insert into mytable (id) values (0) returning *;
delete from mytable where id=0;
return fake_row;
end transaction
Basically I'd expect a query with a single row where parent_id is 1 and random_id is a random number (or other function return value), but I don't want this record to persist in the table or affect the primary key sequence serial_id_seq.
My options seem to be using a transaction like the above or creating views which are copies of the table with the fake row added, but I don't know all the pros and cons of each or whether a better way exists.
I'm looking for an answer that assumes no prior knowledge of the datatypes or default values of any column except id, nor of the number or ordering of the columns. Only the table name will be known, and that a record with id 0 should not exist in the table.
In the past I created the fake record 0 as a permanent record but I've come to consider this record a type of pollution (since I typically have to filter it out of future queries).
You can copy the table definition and defaults to the temp table with:
CREATE TEMP TABLE table_name_rt (LIKE table_name INCLUDING DEFAULTS);
Then use this temp table to generate dummy rows, as sketched below. Such a table will be dropped at the end of the session (or transaction) and will only be visible to the current session.
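A minimal sketch against the asker's mytable (id is supplied explicitly, so the copied nextval() default is skipped and the real serial sequence is never advanced):
BEGIN;
CREATE TEMP TABLE mytable_rt (LIKE mytable INCLUDING DEFAULTS) ON COMMIT DROP;
-- every column except id is filled in by the copied defaults
INSERT INTO mytable_rt (id) VALUES (0);
SELECT * FROM mytable_rt;  -- the "fake" row
COMMIT;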
You can query the catalog and build a dynamic query.
Say we have this table:
create table test10(
id serial primary key,
first_name varchar( 100 ),
last_name varchar( 100 ) default 'Tom',
age int not null default 38,
salary float default 100.22
);
When you run the following query:
SELECT string_agg( txt, ' ' order by id )
FROM (
select 1 id, 'SELECT ' txt
union all
select 2, -9999 || ' as id '
union all
select 2 + ordinal_position, ', '
|| coalesce( column_default, 'null'||'::'||c.data_type )
|| ' as ' || c.column_name
from information_schema.columns c
where table_schema = 'public'
and table_name = 'test10'
and ordinal_position > 1
) xx
;
you will get this string as a result:
"SELECT -9999 as id , null::character varying as first_name ,
'Tom'::character varying as last_name , 38 as age , 100.22 as salary"
Then execute this query and you will get the "phantom row".
We can build a function that builds and executes the query and returns our row as a result:
CREATE OR REPLACE FUNCTION get_phantom_rec (p_i test10.id%type )
returns test10 as $$
DECLARE
v_sql text;
myrow test10%rowtype;
begin
SELECT string_agg( txt, ' ' order by id )
INTO v_sql
FROM (
select 1 id, 'SELECT ' txt
union all
select 2, p_i || ' as id '
union all
select 2 + ordinal_position, ', '
|| coalesce( column_default, 'null'||'::'||c.data_type )
|| ' as ' || c.column_name
from information_schema.columns c
where table_schema = 'public'
and table_name = 'test10'
and ordinal_position > 1
) xx
;
EXECUTE v_sql INTO myrow;
RETURN myrow;
END$$ LANGUAGE plpgsql ;
and then this simple query gives you what you want:
select * from get_phantom_rec ( -9999 );
  id   | first_name | last_name | age | salary
-------+------------+-----------+-----+--------
 -9999 |            | Tom       |  38 | 100.22
I would just select the fake values as literals:
select 1 id, 1 parent_id, 1 user_id
The returned row will be (virtually) indistinguishable from a real row.
To get the values from the catalog:
select
0 as id, -- special case for serial type, just return 0
(select column_default::int -- Cast to int, because we know the column is int
from INFORMATION_SCHEMA.COLUMNS
where table_name = 'mytable'
and column_name = 'parent_id') as parent_id,
(select column_default::int -- Cast to int, because we know the column is int
from INFORMATION_SCHEMA.COLUMNS
where table_name = 'mytable'
and column_name = 'user_id') as user_id;
Note that you must know what the columns are and their types, but this is reasonable. If you change the table schema (other than default values), you would need to tweak the query.
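If you prefer the system catalogs over information_schema, the same default expressions can be read back with pg_get_expr; a sketch:
SELECT a.attname,
       pg_get_expr(d.adbin, d.adrelid) AS default_expr
FROM pg_attrdef d
JOIN pg_attribute a
  ON a.attrelid = d.adrelid AND a.attnum = d.adnum
WHERE d.adrelid = 'mytable'::regclass;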

Duplicate single database record

Hello, what is the easiest way to duplicate a DB record in the same table?
My problem is that the table where I am doing this has many columns, 100+, and I don't like how the solution looks. Here is what I do (this is inside a plpgsql function):
...
1. duplicate record
INSERT INTO history
(SELECT NEXTVAL('history_id_seq'), col_1, col_2, ... , col_100
FROM history
WHERE history_id = 1234
ORDER BY datetime DESC
LIMIT 1)
RETURNING
history_id INTO new_history_id;
2. update some columns
UPDATE history
SET
col_5 = 'test_5',
col_23 = 'test_23',
datetime = CURRENT_TIMESTAMP
WHERE history_id = new_history_id;
Here are the problems I am attempting to solve:
Listing all these 100+ columns looks lame
When a new column is added, the function has to be updated too
On separate DB instances the column order might differ, which would cause the function to fail
I am not sure if I can list them once more (solving issue 3), like insert into <table> (<columns_list>) values (<query>), but then the query looks even uglier.
I would like to achieve something like 'insert into ', but this seems impossible: the unique primary key constraint will raise a duplication error.
Any suggestions?
Thanks in advance for your time.
This isn't pretty or particularly optimized but there are a couple of ways to go about this. Ideally, you might want to do this all in an UPDATE trigger though you could implement a duplication function something like this:
-- create source table
CREATE TABLE history (history_id serial not null primary key, col_2 int, col_3 int, col_4 int, datetime timestamptz default now());
-- add some data
INSERT INTO history (col_2, col_3, col_4)
SELECT g, g * 10, g * 100 FROM generate_series(1, 100) AS g;
-- function to duplicate record
CREATE OR REPLACE FUNCTION fn_history_duplicate(p_history_id integer) RETURNS SETOF history AS
$BODY$
DECLARE
cols text;
insert_statement text;
BEGIN
-- build list of columns
SELECT array_to_string(array_agg(column_name::name), ',') INTO cols
FROM information_schema.columns
WHERE (table_schema, table_name) = ('public', 'history')
AND column_name <> 'history_id';
-- build insert statement
insert_statement := 'INSERT INTO history (' || cols || ') SELECT ' || cols || ' FROM history WHERE history_id = $1 RETURNING *';
-- execute statement
RETURN QUERY EXECUTE insert_statement USING p_history_id;
RETURN;
END;
$BODY$
LANGUAGE plpgsql;
-- test
SELECT * FROM fn_history_duplicate(1);
 history_id | col_2 | col_3 | col_4 |           datetime
------------+-------+-------+-------+-------------------------------
        101 |     1 |    10 |   100 | 2013-04-15 14:56:11.131507+00
(1 row)
As I noted in my original comment, you might also take a look at the colnames extension as an alternative to querying the information schema.
You don't need the update anyway; you can supply the constant values directly in the SELECT statement:
INSERT INTO history
SELECT NEXTVAL('history_id_seq'),
col_1,
col_2,
col_3,
col_4,
'test_5',
...
'test_23',
...,
col_100
FROM history
WHERE history_id = 1234
ORDER BY datetime DESC
LIMIT 1
RETURNING history_id INTO new_history_id;

db2 equivalent of tsql temp table

How would I do the following TSQL query in DB2? I'm having problems creating a temp table based on the results from a query.
SELECT
COLUMN_1, COLUMN_2, COLUMN_3
INTO #TEMP_A
FROM TABLE_A
WHERE COLUMN_1 = 1 AND COLUMN_2 = 2
The error message is:
"Error: SQL0104N An unexpected token "#TEMP_A" was found following "". Expected tokens may include: ":". SQLSTATE=42601"
You have to declare a temp table in DB2 before you can use it. Either with the same query you are running:
DECLARE GLOBAL TEMPORARY TABLE SESSION.YOUR_TEMP_TABLE_NAME AS (
SELECT COLUMN_1, COLUMN_2, COLUMN_3
FROM TABLE_A
) DEFINITION ONLY
Or "manually" define the columns:
DECLARE GLOBAL TEMPORARY TABLE SESSION.YOUR_TEMP_TABLE_NAME (
COLUMN_1 CHAR(10)
,COLUMN_2 TIMESTAMP
,COLUMN_3 INTEGER
)
Then populate it:
INSERT INTO SESSION.YOUR_TEMP_TABLE_NAME
SELECT COLUMN_1, COLUMN_2, COLUMN_3
FROM TABLE_A
WHERE COLUMN_1 = 1
AND COLUMN_2 = 2
It's not quite as straightforward as in SQL Server. :)
And even though it's called a "global" temporary table, it only exists for the current session. Note that all temp tables should be prefixed with the SESSION schema. If you do not provide a schema name, then SESSION will be implied.
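For completeness, a hedged sketch of options that often go with a declared temp table (option names from DB2 for LUW; check your platform's syntax):
DECLARE GLOBAL TEMPORARY TABLE SESSION.YOUR_TEMP_TABLE_NAME (
COLUMN_1 CHAR(10)
,COLUMN_2 TIMESTAMP
,COLUMN_3 INTEGER
)
WITH REPLACE             -- no error if it was already declared in this session
ON COMMIT PRESERVE ROWS  -- keep rows across commits (default is DELETE ROWS)
NOT LOGGED;              -- temp data is not written to the transaction log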
Maybe the WITH clause is what you're looking for:
with TEMP_A as (
SELECT COLUMN_1, COLUMN_2, COLUMN_3
FROM TABLE_A
WHERE COLUMN_1 = 1 AND COLUMN_2 = 2
)
-- now use TEMP_A
select * from TEMP_A
As it turned out, I did not have permissions to create temp tables.

postgres: Setting the value of an array with a subquery?

In postgres, can you set the value of an array in an INSERT to the result of a subquery? Like:
INSERT INTO mytable
VALUES( SELECT list_of_integers FROM someothertable WHERE somekey = somevalue);
Where mytable has just one column, of type integer[], and the column list_of_integers in the other table is also of type integer[]?
You want the unnest function. I think you'd use it like:
INSERT INTO mytable
SELECT set_of_integers
FROM unnest(
(SELECT list_of_integers
FROM someothertable
WHERE somekey = somevalue)
) t(set_of_integers)
But I don't have PostgreSQL to hand to try it out myself.
Yes:
INSERT INTO
mytable
(column1, column2, an_array_of_integers_column)
VALUES
(2, 'bbb', (SELECT list_of_integers FROM someothertable WHERE somekey = somevalue));
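If the other columns are not needed, a plain INSERT ... SELECT does the same job without going through VALUES; a sketch against the same (assumed) tables:
INSERT INTO mytable (an_array_of_integers_column)
SELECT list_of_integers
FROM someothertable
WHERE somekey = somevalue;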