How to insert DEFAULT into a prepared statement in PostgreSQL

I'm trying to insert values into a database using prepared statements, but sometimes I need to insert the literal DEFAULT for a certain value. How do I do this?
CREATE TABLE test (id int, firstname text default 'john', lastname text default 'doe');
This is what I want to do, but then using a prepared statement:
insert into test (id, firstname, lastname) VALUES ('1', DEFAULT, DEFAULT);
But this is resulting in an error (for obvious reasons):
PREPARE testprep (integer, text, text) AS INSERT INTO test (id, firstname, lastname) VALUES ($1, $2, $3);
EXECUTE testprep('1',DEFAULT,DEFAULT);
The Error:
ERROR: syntax error at or near "DEFAULT"
I created both examples using SQL Fiddle:
http://sqlfiddle.com/#!15/243ae/1/0
http://sqlfiddle.com/#!15/243ae/3/0

There is no way to do that with a prepared statement.
One workaround would be a BEFORE INSERT trigger on the table that replaces certain sentinel values (e.g. NULL) with the default value. But this is not a nice solution and will cost performance.
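A minimal sketch of that trigger, assuming NULL is the agreed-upon sentinel for "use the default" (function and trigger names are made up):
CREATE OR REPLACE FUNCTION test_fill_defaults() RETURNS trigger AS
$$
BEGIN
    -- replace the NULL sentinel with the defaults from the table definition
    IF NEW.firstname IS NULL THEN NEW.firstname := 'john'; END IF;
    IF NEW.lastname IS NULL THEN NEW.lastname := 'doe'; END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER test_fill_defaults
BEFORE INSERT ON test
FOR EACH ROW
EXECUTE PROCEDURE test_fill_defaults();
With that in place, EXECUTE testprep('1', NULL, NULL); stores the default names. Note the trigger duplicates the defaults instead of reading them from the catalog, which is part of why this is not a nice solution.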
The other workaround is to use several prepared statements, one for each combination of columns you want set to their defaults.
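For the table above that might look like this (a sketch; the statement names are made up):
PREPARE ins_all (integer, text, text) AS
INSERT INTO test (id, firstname, lastname) VALUES ($1, $2, $3);
PREPARE ins_id_only (integer) AS
INSERT INTO test (id) VALUES ($1);
-- pick the statement that matches the columns you actually have values for:
EXECUTE ins_id_only('1');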

You may try omitting the default columns from the insert statement:
PREPARE testprep (integer) AS
INSERT INTO test (id) VALUES ($1);
EXECUTE testprep('1');
Postgres should rely on the default values in the table definition for the firstname and lastname columns. From the Postgres documentation:
When a new row is created and no values are specified for some of the columns, those columns will be filled with their respective default values.
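So with the table from the question, EXECUTE testprep('1'); stores a row whose names come from the column defaults (a quick check):
SELECT * FROM test;
-- id | firstname | lastname
--  1 | john      | doe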

Related

Generated UUIDs behavior in postgres INSERT rule compared to the UPDATE rule

I have a postgres database with a single table. The primary key of this table is a generated UUID. I am trying to add a logging table to this database such that whenever a row is added or updated, the logging table gets an entry. My table has the following structure
CREATE TABLE configuration (
id uuid NOT NULL DEFAULT uuid_generate_v4(),
name text,
data json
);
My logging table has the following structure
CREATE TABLE configuration_log (
configuration_id uuid,
new_configuration_data json,
old_configuration_data json,
"user" text,
time timestamp
);
I have added the following rules:
CREATE OR REPLACE RULE log_configuration_insert AS ON INSERT TO "configuration"
DO INSERT INTO configuration_log VALUES (
NEW.id,
NEW.data,
'{}',
current_user,
current_timestamp
);
CREATE OR REPLACE RULE log_configuration_update AS ON UPDATE TO "configuration"
WHERE NEW.data::json::text != OLD.data::json::text
DO INSERT INTO configuration_log VALUES (
NEW.id,
NEW.data,
OLD.data,
current_user,
current_timestamp
);
Now, if I insert a row into the configuration table, the UUIDs in the configuration table and the configuration_log table are different. For example, with the insert query
INSERT INTO configuration (name, data)
VALUES ('test', '{"property1":"value1"}')
the UUID is c2b6ca9b-1771-404d-baae-ae2ec69785ac in the configuration table, whereas in the configuration_log table it is 16109caa-dddc-4959-8054-0b9df6417406.
However, the update rule works as expected. So if I write an update query as
UPDATE "configuration"
SET "data" = '{"property1":"abcd"}'
WHERE "id" = 'c2b6ca9b-1771-404d-baae-ae2ec69785ac';
The configuration_log table gets the correct UUID, i.e. c2b6ca9b-1771-404d-baae-ae2ec69785ac.
I am using NEW.id in both rules, so I was expecting the same behavior. Can anyone point out what I might be doing wrong here?
Thanks
This is another good example of why rules should be avoided.
Quote from the manual:
For any reference to NEW, the target list of the original query is searched for a corresponding entry. If found, that entry's expression replaces the reference.
So NEW.id is replaced with uuid_generate_v4() which explains why you are seeing a different value.
You should rewrite this to a trigger.
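A minimal sketch of such a trigger, reusing the tables from the question (function and trigger names are made up):
CREATE OR REPLACE FUNCTION log_configuration() RETURNS trigger AS
$$
BEGIN
    -- an AFTER trigger sees the row as actually stored,
    -- so NEW.id is the UUID generated by the column default
    IF TG_OP = 'INSERT' THEN
        INSERT INTO configuration_log
        VALUES (NEW.id, NEW.data, '{}', current_user, current_timestamp);
    ELSIF NEW.data::text IS DISTINCT FROM OLD.data::text THEN
        INSERT INTO configuration_log
        VALUES (NEW.id, NEW.data, OLD.data, current_user, current_timestamp);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER log_configuration
AFTER INSERT OR UPDATE ON configuration
FOR EACH ROW
EXECUTE PROCEDURE log_configuration();
The ELSIF branch reproduces the data-changed check from the update rule.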
Btw: jsonb is preferred over json; with jsonb you can also get rid of the (essentially incorrect) cast of the json column to text to compare the content.

How to read a SQL Server insert/update OUTPUT value with myBatis?

I'm trying to retrieve the value of a SQL Server temporal table's SYSSTARTTIME field with myBatis and can't figure it out. I had something like this:
<insert id="insertVersionedTableData" parameterType="com.me.dao.vo.DataVO">
<selectKey keyProperty="systemStartTm" resultType="java.util.Date" order="AFTER">
SELECT sysstarttm FROM @MyTableVar;
</selectKey>
DECLARE @MyTableVar TABLE (sysstarttm datetime);
INSERT INTO
dbo.versioned_table (id, first_name, last_name)
OUTPUT inserted.system_start_tm INTO @MyTableVar
VALUES (#{id}, #{firstName}, #{lastName});
</insert>
When I run this, I can see the insert in the console logging, but it then throws an exception saying that @MyTableVar must be declared. It's as if the table variable goes out of scope after the insert and the selectKey tag runs just like a regular SELECT.
I've tried the generated-keys functionality too, but myBatis expects the key to be the first field in the table, which my autogenerated SYSTEMSTARTTIME is not.

dashDB: insert into generated default column using select

I have a simple test table
CREATE TABLE TEST (
KEY INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1),
INTENTS VARCHAR(255),
NO_FOUND SMALLINT );
I am then trying to insert data into this table using the following command from within dashDB's SQL dashboard.
Insert into table from (select item1,item2,item3 from TEST2 where some_condition );
However, I cannot get the command to run without returning an error.
I have tried the DB2 'DEFAULT', '0' (the default for integer), and even NULL as values for item1.
I have also tried the insert using VALUES, but then the column headings cause the system to report that multiple values were returned.
I have also tried 'OVERRIDING USER VALUE',
but this then complains about not finding a JOIN element.
Any ideas welcome.
I would try something like this:
Insert into test(intents,no_found)
(select item2,item3 from TEST2 where some_condition );
You specify that only two of the three columns receive values; the KEY column is generated. Hence you only select the two related columns.
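If you do want to name all three columns, the DEFAULT keyword should work in a VALUES list (a sketch, untested on dashDB; DEFAULT is not allowed inside a SELECT, which is why the attempts above failed):
INSERT INTO TEST (KEY, INTENTS, NO_FOUND) VALUES (DEFAULT, 'some intent', 0);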

Getting error for auto increment fields when inserting records without specifying columns

We're in process of converting over from SQL Server to Postgres. I have a scenario that I am trying to accommodate. It involves inserting records from one table into another, WITHOUT listing out all of the columns. I realize this is not recommended practice, but let's set that aside for now.
drop table if exists pk_test_table;
create table public.pk_test_table
(
recordid SERIAL PRIMARY KEY NOT NULL,
name text
);
--example 1: works and will insert a record with an id of 1
insert into pk_test_table values(default,'puppies');
--example 2: fails
insert into pk_test_table
select first_name from person_test;
Error I receive in the second example:
column "recordid" is of type integer but expression is of type
character varying Hint: You will need to rewrite or cast the
expression.
The default keyword will tell the database to grab the next value.
Is there any way to utilize this keyword in the second example? Or is there some way to tell the database to ignore auto-incremented columns and just let them be populated as normal?
I would prefer to not use a subquery to grab the next "id".
This functionality works in SQL Server and hence the question.
Thanks in advance for your help!
If you can't list column names, you should instead use the DEFAULT keyword, as you've done in the simple insert example. This won't work with an insert into ... select ....
For that, you need to invoke nextval. A subquery is not required, just:
insert into pk_test_table
select nextval('pk_test_table_recordid_seq'), first_name from person_test;
You do need to know the sequence name. You could get that from information_schema based on the table name and inferring its primary key, using a function that takes just the table name as an argument. It'd be ugly, but it'd work. I don't think there's any way around needing to know the table name.
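As a sketch, pg_get_serial_sequence (built into Postgres) resolves the sequence from the table and column names, so it doesn't have to be hard-coded:
insert into pk_test_table
select nextval(pg_get_serial_sequence('pk_test_table', 'recordid')), first_name
from person_test;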
You're inserting the value into the first column, but you need it to go into the second.
Therefore you can use the INSERT INTO table(field) VALUES(value) syntax.
Since you need to fetch the values from another table, you remove VALUES and put the query there:
insert into pk_test_table(name)
select first_name from person_test;
I hope this helps.
I do it this way via a separate function, though I think I'm really getting around the issue by having DEFAULT settings on a per-field basis at the table level.
-- the sequence must exist before the table definition can reference it
CREATE SEQUENCE pk_test_table_id_seq;
create table public.pk_test_table
(
recordid integer NOT NULL DEFAULT nextval('pk_test_table_id_seq'),
name text,
field3 integer NOT NULL DEFAULT 64,
null_field_if_not_set integer,
CONSTRAINT pk_test_table_pkey PRIMARY KEY ("recordid")
);
With function:
CREATE OR REPLACE FUNCTION func_pk_test_table() RETURNS void AS
$BODY$
INSERT INTO pk_test_table (name)
SELECT first_name FROM person_test;
$BODY$
LANGUAGE sql VOLATILE;
Then just execute the function via SELECT func_pk_test_table();
Notice it doesn't have to specify all the fields, as long as the constraints allow it.

postgres autoincrement not updated on explicit id inserts

I have the following table in postgres:
CREATE TABLE "test" (
"id" serial NOT NULL PRIMARY KEY,
"value" text
)
I am doing following insertions:
insert into test (id, value) values (1, 'alpha')
insert into test (id, value) values (2, 'beta')
insert into test (value) values ('gamma')
In the first two inserts I explicitly specify the id. However, the table's auto-increment counter is not updated in this case. Hence in the third insert I get the error:
ERROR: duplicate key value violates unique constraint "test_pkey"
DETAIL: Key (id)=(1) already exists.
I never faced this problem in MySQL with either the MyISAM or InnoDB engine. Explicit or not, MySQL always updates the auto-increment counter based on the maximum row id.
What is the workaround for this problem in postgres? I need it because I want tighter control over some ids in my table.
UPDATE:
I need it because for some values I need to have a fixed id. For other new entries I don't mind creating new ones.
I think it may be possible by manually incrementing the nextval pointer to max(id) + 1 whenever I am explicitly inserting the ids. But I am not sure how to do that.
That's how it's supposed to work: nextval('test_id_seq') is only called when the system needs a value for this column and you have not provided one. If you provide a value, no such call is performed, and consequently the sequence is not "updated".
You could work around this by manually setting the value of the sequence after your last insert with explicitly provided values:
SELECT setval('test_id_seq', (SELECT MAX(id) from "test"));
The name of the sequence is autogenerated and by default is tablename_columnname_seq.
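If you would rather not hard-code that name, pg_get_serial_sequence (built into Postgres) can resolve it for you (a sketch):
SELECT setval(pg_get_serial_sequence('test', 'id'), (SELECT MAX(id) FROM test));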
In recent versions of Django, this topic is discussed in the documentation:
Django uses PostgreSQL’s SERIAL data type to store auto-incrementing primary keys. A SERIAL column is populated with values from a sequence that keeps track of the next available value. Manually assigning a value to an auto-incrementing field doesn’t update the field’s sequence, which might later cause a conflict.
Ref: https://docs.djangoproject.com/en/dev/ref/databases/#manually-specified-autoincrement-pk
There is also the management command manage.py sqlsequencereset app_label ... that is able to generate SQL statements for resetting sequences for the given app name(s).
Ref: https://docs.djangoproject.com/en/dev/ref/django-admin/#django-admin-sqlsequencereset
For example, these SQL statements were generated by manage.py sqlsequencereset my_app_in_my_project:
BEGIN;
SELECT setval(pg_get_serial_sequence('"my_project_aaa"','id'), coalesce(max("id"), 1), max("id") IS NOT null) FROM "my_project_aaa";
SELECT setval(pg_get_serial_sequence('"my_project_bbb"','id'), coalesce(max("id"), 1), max("id") IS NOT null) FROM "my_project_bbb";
SELECT setval(pg_get_serial_sequence('"my_project_ccc"','id'), coalesce(max("id"), 1), max("id") IS NOT null) FROM "my_project_ccc";
COMMIT;
It can be done automatically using a trigger. This way you are sure that the sequence is always kept in sync with the largest id in the table.
CREATE OR REPLACE FUNCTION set_serial_id_seq()
RETURNS trigger AS
$BODY$
BEGIN
-- TG_ARGV[0] is the name of the serial column; the sequence follows
-- the default <table>_<column>_seq naming pattern
EXECUTE (FORMAT('SELECT setval(''%s_%s_seq'', (SELECT MAX(%s) from %s));',
TG_TABLE_NAME,
TG_ARGV[0],
TG_ARGV[0],
TG_TABLE_NAME));
-- the return value of a statement-level trigger is ignored
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER set_mytable_id_seq
AFTER INSERT OR UPDATE OR DELETE
ON mytable
FOR EACH STATEMENT
EXECUTE PROCEDURE set_serial_id_seq('id');
The function can be reused for multiple tables: change "mytable" to the table of interest and pass the name of its serial column as the argument.
For more info regarding triggers:
https://www.postgresql.org/docs/9.1/plpgsql-trigger.html
https://www.postgresql.org/docs/9.1/sql-createtrigger.html