dashDB: insert into generated default column using select - db2

I have a simple test table
CREATE TABLE TEST (
  KEY INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1),
  INTENTS VARCHAR(255),
  NO_FOUND SMALLINT
);
I am then trying to insert data into this table using the following command from within dashDB's SQL dashboard.
Insert into table from (select item1,item2,item3 from TEST2 where some_condition );
However, I cannot get the command to run without returning an error.
I have tried DB2's DEFAULT keyword, '0' (the default for an integer), and even NULL as the value for item1.
I have also tried the insert using VALUES, but then the column headings cause the system to report that multiple values were returned.
I have also tried OVERRIDING USER VALUE, but that complains about not finding a JOIN element.
Any ideas welcome.

I would try something like this:
INSERT INTO TEST (INTENTS, NO_FOUND)
(SELECT item2, item3 FROM TEST2 WHERE some_condition);
You specify that only two of the three columns receive values; the KEY column is generated. Hence you select only the two related columns.
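If you want to keep all three source columns in the statement, DB2 also has an OVERRIDING USER VALUE clause, which goes between the column list and the fullselect. A sketch of that shape (not verified against dashDB, and whether it applies to a GENERATED BY DEFAULT identity column is worth checking in the docs):
INSERT INTO TEST (KEY, INTENTS, NO_FOUND)
OVERRIDING USER VALUE
SELECT item1, item2, item3 FROM TEST2 WHERE some_condition;
-- the value selected for KEY is ignored; the system generates a new identity value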

Related

sqldeveloper exports integer inserts as strings

It seems like a bug in version 19.4 which is fixed in 20+.
I exported the content of my tables in SQL Developer and the INSERT statements all have numbers as strings.
Example:
Insert into testtable(id,stuff) values ('1','Hello');
ID 1 becomes '1' in the export, and I have trouble reading it back in.
This is the case for every table. Is there a way to avoid the two quote characters?
The DDL is:
create table TESTTABLE(
ID INTEGER not null
);
after the export, it looks like this in SQL Developer:
create table testtable(
"ID" NUMBER(*,0) NOT NULL ENABLE
);
I noticed that I'm able to insert such a line if the constraints are not active. It seems like SQL Developer converts the string to a number internally.
create table testtable(
"ID" Number(*,0) NOT NULL ENABLE
);
insert into testtable values (1);
commit;
select /*insert*/ * from testtable;
Running this, I get:
Table TESTTABLE created.
1 row inserted.
Commit complete.
REM INSERTING into TESTTABLE
SET DEFINE OFF;
Insert into TESTTABLE (ID) values (1);
No quotes on the number value/field. I did this with version 20.2 of SQL Developer.
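As a side note: even the quoted form usually still loads, because Oracle implicitly converts a string literal to NUMBER in a numeric context. A sketch against the TESTTABLE above:
Insert into TESTTABLE (ID) values ('1'); -- Oracle applies an implicit TO_NUMBER('1')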

Generated UUIDs behavior in postgres INSERT rule compared to the UPDATE rule

I have a postgres database with a single table. The primary key of this table is a generated UUID. I am trying to add a logging table to this database such that whenever a row is added or deleted, the logging table gets an entry. My table has the following structure
CREATE TABLE configuration (
id uuid NOT NULL DEFAULT uuid_generate_v4(),
name text,
data json
);
My logging table has the following structure
CREATE TABLE configuration_log (
configuration_id uuid,
new_configuration_data json,
old_configuration_data json,
"user" text,
time timestamp
);
I have added the following rules:
CREATE OR REPLACE RULE log_configuration_insert AS ON INSERT TO "configuration"
DO INSERT INTO configuration_log VALUES (
NEW.id,
NEW.data,
'{}',
current_user,
current_timestamp
);
CREATE OR REPLACE RULE log_configuration_update AS ON UPDATE TO "configuration"
WHERE NEW.data::json::text != OLD.data::json::text
DO INSERT INTO configuration_log VALUES (
NEW.id,
NEW.data,
OLD.data,
current_user,
current_timestamp
);
Now, if I insert a row into the configuration table, the UUIDs in the configuration table and the configuration_log table are different. For example, with the insert query
INSERT INTO configuration (name, data)
VALUES ('test', '{"property1":"value1"}');
the UUID in the configuration table is c2b6ca9b-1771-404d-baae-ae2ec69785ac, whereas in the configuration_log table it is 16109caa-dddc-4959-8054-0b9df6417406.
However, the update rule works as expected. So if I write an update query as
UPDATE "configuration"
SET "data" = '{"property1":"abcd"}'
WHERE "id" = 'c2b6ca9b-1771-404d-baae-ae2ec69785ac';
The configuration_log table gets the correct UUID, i.e. c2b6ca9b-1771-404d-baae-ae2ec69785ac.
I am using NEW.id in both the rules so I was expecting the same behavior. Can anyone point out what I might be doing wrong here?
Thanks
This is another good example of why rules should be avoided.
Quote from the manual:
For any reference to NEW, the target list of the original query is searched for a corresponding entry. If found, that entry's expression replaces the reference.
So NEW.id is replaced with uuid_generate_v4(), which means the function is evaluated a second time for the log insert and explains why you are seeing a different value.
You should rewrite this as a trigger.
Btw: jsonb is preferred over json; with jsonb you can also get rid of the (essentially incorrect) cast of the json column to text when comparing the content.
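A minimal sketch of what such a trigger could look like (function and trigger names are my own):
CREATE OR REPLACE FUNCTION log_configuration() RETURNS trigger AS $$
BEGIN
  IF TG_OP = 'INSERT' THEN
    INSERT INTO configuration_log
    VALUES (NEW.id, NEW.data, '{}', current_user, current_timestamp);
  ELSIF TG_OP = 'UPDATE' AND NEW.data::text IS DISTINCT FROM OLD.data::text THEN
    -- with jsonb columns the cast to text would be unnecessary
    INSERT INTO configuration_log
    VALUES (NEW.id, NEW.data, OLD.data, current_user, current_timestamp);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER log_configuration_changes
AFTER INSERT OR UPDATE ON configuration
FOR EACH ROW EXECUTE FUNCTION log_configuration();
In a trigger, NEW.id is the value that was actually stored, so both tables get the same UUID. (EXECUTE FUNCTION requires PostgreSQL 11+; older versions use EXECUTE PROCEDURE.)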

PostgreSQL id column not defined

I am new to PostgreSQL and I am working with this database.
I imported a file, and I am trying to get rows with a certain ID. But no ID column is defined in the table, so how do I access this ID? I want to use an SQL command like this:
SELECT * from table_name WHERE ID = 1;
If any order of rows is ok for you, just add a row number according to the current arbitrary sort order:
CREATE SEQUENCE tbl_tbl_id_seq;
ALTER TABLE tbl ADD COLUMN tbl_id integer DEFAULT nextval('tbl_tbl_id_seq');
The new default value is filled in automatically in the process. You might want to run VACUUM FULL ANALYZE tbl to remove bloat and update statistics for the query planner afterwards. And possibly make the column your new PRIMARY KEY ...
To make it a fully fledged serial column:
ALTER SEQUENCE tbl_tbl_id_seq OWNED BY tbl.tbl_id;
See:
Creating a PostgreSQL sequence to a field (which is not the ID of the record)
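To also make it the PRIMARY KEY mentioned above, something like this should work (the constraint name is my choice):
ALTER TABLE tbl ADD CONSTRAINT tbl_pkey PRIMARY KEY (tbl_id);
-- adding the primary key implicitly marks tbl_id as NOT NULL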
What you see are just row numbers that pgAdmin displays; they are not actually stored in the database.
If you want an artificial numeric primary key for the table, you'll have to create it explicitly.
For example:
CREATE TABLE mydata (
id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
obec text NOT NULL,
datum timestamp with time zone NOT NULL,
...
);
Then to copy the data from a CSV file, you would run
COPY mydata (obec, datum, ...) FROM '/path/to/csvfile' (FORMAT 'csv');
Then the id column is automatically filled.

Postgresql check insert input

I'm making a database with PostgreSQL. In one of the columns of a table it should only be possible to insert digits and the characters "+" and "-", but no other characters like "A", "B" or "!".
Is it possible to check the input when using INSERT INTO?
I don't know how, because I'm just a beginner with PostgreSQL and didn't find a solution on the internet.
Thanks if anybody knows an answer!
Instead of a trigger, you could use a CHECK constraint on the column's value (or even make it a domain or type):
CREATE TABLE meuk
( id serial NOT NULL PRIMARY KEY
, thefield varchar CHECK (thefield SIMILAR TO '[0-9+-]+')
);
INSERT INTO meuk(thefield) VALUES ('1234');
INSERT INTO meuk(thefield) VALUES ('+1234');
INSERT INTO meuk(thefield) VALUES ('-1234');
INSERT INTO meuk(thefield) VALUES ('-1234a'); -- this one should fail
SELECT * FROM meuk;
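The domain variant hinted at above could look like this (the domain name is my own invention):
CREATE DOMAIN signed_digits AS varchar
CHECK (VALUE SIMILAR TO '[0-9+-]+');

CREATE TABLE meuk2
( id serial NOT NULL PRIMARY KEY
, thefield signed_digits
);
-- every value stored in thefield is now validated by the domain's CHECK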

how to insert DEFAULT into a prepared statement in PostgreSQL

I'm trying to insert values into a database using prepared statements, but sometimes I need to insert the keyword DEFAULT for a certain value. How do I do this?
CREATE TABLE test (id int, firstname text default 'john', lastname text default 'doe');
This is what I want to do, but then using a prepared statement:
insert into test (id, firstname, lastname) VALUES ('1', DEFAULT, DEFAULT);
But this is resulting in an error (for obvious reasons):
PREPARE testprep (integer, text, text) AS INSERT INTO test (id, firstname, lastname) VALUES ($1, $2, $3);
EXECUTE testprep('1',DEFAULT,DEFAULT);
The Error:
ERROR: syntax error at or near "DEFAULT"
Both examples I created using SQL-Fiddle:
http://sqlfiddle.com/#!15/243ae/1/0
http://sqlfiddle.com/#!15/243ae/3/0
There is no way to do that with a prepared statement.
One escape would be a BEFORE INSERT trigger on the table that replaces certain data values (e.g. NULL) with the default value. But this is not a nice solution and will cost performance.
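For illustration, such a trigger might look like this (the names are my own, and note the defaults are duplicated from the table definition):
CREATE OR REPLACE FUNCTION test_fill_defaults() RETURNS trigger AS $$
BEGIN
  -- treat NULL as "use the default"
  NEW.firstname := coalesce(NEW.firstname, 'john');
  NEW.lastname  := coalesce(NEW.lastname, 'doe');
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_fill_defaults
BEFORE INSERT ON test
FOR EACH ROW EXECUTE FUNCTION test_fill_defaults();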
The other escape route is to use several prepared statements, one for each combination of values you want set to default.
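For the two default columns of the example table that means up to four variants, for instance:
PREPARE ins_all (integer, text, text) AS
  INSERT INTO test (id, firstname, lastname) VALUES ($1, $2, $3);
PREPARE ins_default_firstname (integer, text) AS
  INSERT INTO test (id, lastname) VALUES ($1, $2);
PREPARE ins_default_both (integer) AS
  INSERT INTO test (id) VALUES ($1);
-- plus one more variant for a defaulted lastname only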
You may try omitting the default columns from the insert statement:
PREPARE testprep (integer) AS
INSERT INTO test (id) VALUES ($1);
EXECUTE testprep('1');
Postgres should rely on the default values in the table definition for the firstname and lastname columns. From the Postgres documentation:
When a new row is created and no values are specified for some of the columns, those columns will be filled with their respective default values.