I'm currently trying to automate a DB2 database through shell scripts, but there seems to be a problem with the INSERT operation.
The tables I create look like this:
CREATE TABLE "TRAINING1"
(ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1 INCREMENT BY 1),
F1 VARCHAR(20) NOT NULL,
name VARCHAR(40) NOT NULL,
surname VARCHAR(40) NOT NULL,
CONSTRAINT pk_train1_id primary key(ID));
Now the creation works just fine for all similar tables, but when I attempt to insert data:
db2 "insert into TRAINING1 values ('hello', 'world', 'foo', 'bar')"
I get this error:
SQL0117N The number of values assigned is not the same as the number
of specified or implied columns or variables. SQLSTATE=42802
As far as I understand, the identity column I specified should generate values automatically and cannot have values explicitly assigned to it. Out of curiosity I did this:
db2 "insert into TRAINING1 values (1, 'hello', 'world', 'foo', 'bar')"
and it then complains with this error:
SQL0798N A value cannot be specified for column "ID" which is defined
as GENERATED ALWAYS. SQLSTATE=428C9
I'm still fairly new to DB2, but almost a week later I still haven't found a solution to this.
I'm running DB2 Express Edition on a 64-bit Ubuntu virtual machine.
Any thoughts on why it's doing this?
Thanks
In order to skip over a column in an INSERT statement, you must specify the columns that are not to be skipped:
db2 "insert into TRAINING1 (f1, name, surname) values ('hello', 'world', 'foo')"
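Alternatively, you can keep the positional VALUES form and pass the DEFAULT keyword in the identity column's position, which tells DB2 to generate the value (a sketch using the table from the question):

```sql
-- ID is GENERATED ALWAYS, so DEFAULT is the only acceptable "value" for it
db2 "insert into TRAINING1 values (DEFAULT, 'hello', 'world', 'foo')"
```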
Related
My table has following structure
CREATE TABLE myTable
(
user_id VARCHAR(100) NOT NULL,
task_id VARCHAR(100) NOT NULL,
start_time TIMESTAMP NOT NULL,
SOME_COLUMN VARCHAR,
col1 INTEGER,
col2 INTEGER DEFAULT 0
);
ALTER TABLE myTable
ADD CONSTRAINT pk_4_col_constraint UNIQUE (task_id, user_id, start_time, SOME_COLUMN);
ALTER TABLE myTable
ADD CONSTRAINT pk_3_col_constraint UNIQUE (task_id, user_id, start_time);
CREATE INDEX IF NOT EXISTS index_myTable ON myTable USING btree (task_id);
However, when I try to insert data into the table using
INSERT INTO myTable VALUES (...)
ON CONFLICT (task_id, user_id, start_time) DO UPDATE
SET ... --updating other columns except for [task_id, user_id, start_time]
I get the following error:
ERROR: duplicate key value violates unique constraint "pk_4_col_constraint"
Detail: Key (task_id, user_id, start_time, SOME_COLUMN)=(XXXXX, XXX, 2021-08-06 01:27:05, XXXXX) already exists.
I got the above error when I tried to insert the row programmatically. I was able to execute the same query successfully via a SQL IDE.
Now I have the following questions:
How is that possible? If 'pk_3_col_constraint' already ensures my data is unique on those 3 columns, adding one extra column should not change anything. What is happening here?
I am aware that although my constraint names start with 'pk', I am using UNIQUE constraints rather than a PRIMARY KEY constraint (probably a mistake while creating the constraints, but either way this error should not have occurred).
Why didn't I get the error when using the SQL IDE?
I read in a few articles that a unique constraint works a little differently from a primary key constraint and hence causes this issue at times. If this is a known issue, is there any way I can replicate the error to understand it in more detail?
I am running PostgreSQL 11.9 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc (GCC) 7.4.0, 64-bit. My programmatic environment was a Java AWS Lambda.
I have noticed people have faced this error occasionally in the past.
https://www.postgresql.org/message-id/15556-7b3ae3aba2c39c23%40postgresql.org
https://www.postgresql.org/message-id/flat/65AECD9A-CE13-4FCB-9158-23BE62BB65DD%40msqr.us#d05d2bb7b2f40437c2ccc9d485d8f41e but there are no firm conclusions as to why it happens.
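Those threads do not settle the root cause, but the general class of error, an ON CONFLICT statement tripping over a unique constraint other than its arbiter, can be replicated deterministically when the DO UPDATE itself creates a collision. The following is a sketch with hypothetical values; it assumes the DO UPDATE rewrites start_time, which is not necessarily what the real application does:

```sql
INSERT INTO myTable (user_id, task_id, start_time, SOME_COLUMN)
VALUES ('u1', 't1', TIMESTAMP '2021-08-06 01:27:05', 'A');
INSERT INTO myTable (user_id, task_id, start_time, SOME_COLUMN)
VALUES ('u1', 't1', TIMESTAMP '2021-08-07 09:00:00', 'A');

-- Conflicts with the first row on (task_id, user_id, start_time); the
-- DO UPDATE then moves that row onto the second row's start_time, so the
-- updated row collides with the second row and the statement fails with
-- a duplicate-key error. Which of the two unique constraints is named in
-- the error message depends on the order the indexes are checked.
INSERT INTO myTable (user_id, task_id, start_time, SOME_COLUMN)
VALUES ('u1', 't1', TIMESTAMP '2021-08-06 01:27:05', 'B')
ON CONFLICT (task_id, user_id, start_time) DO UPDATE
SET start_time = TIMESTAMP '2021-08-07 09:00:00';
```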
This one is most odd. I've got a DB2 instance with 50+ tables defined, and whilst I can insert and query data, DB2 is being extremely picky about formatting and keeps complaining about both table and column context whilst insisting on everything being quoted.
Most weird of all, none of the tables show in the results of a 'list tables' command, whilst 2 other tables defined via the API do..?
Syntax I used to create the tables..
CREATE TABLE Shell.Customers
(
"idCustomers" BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY ( INCREMENT BY 1 NO CYCLE ORDER ),
"Name" VARCHAR(64) NOT NULL,
"Code" VARCHAR(6) NOT NULL,
PRIMARY KEY ("idCustomers")
) COMPRESS YES ADAPTIVE WITH RESTRICT ON DROP;
Any ideas where I messed it up?
Thanks in advance.. :)
The LIST TABLES command without a FOR clause shows tables for the current user only. Your table will not be listed unless your current user name is SHELL.
Use the LIST TABLES FOR SCHEMA SHELL (or FOR ALL) command to list the table you mentioned.
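From the shell, the difference looks like this (a sketch):

```shell
db2 list tables                    # tables in the current user's schema only
db2 list tables for schema SHELL   # tables in the SHELL schema
db2 list tables for all            # every schema you are authorized to see
```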
I have a simple test table
CREATE TABLE TEST (
KEY INTEGER PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1),
INTENTS VARCHAR(255),
NO_FOUND SMALLINT );
I am then trying to insert data into this table using the following command from within dashDB's sql dashboard.
Insert into table from (select item1,item2,item3 from TEST2 where some_condition );
However, the command always returns an error.
I have tried DB2's DEFAULT keyword, '0' (the default for an integer), and even NULL as the value for item1.
I have also tried the insert using VALUES, but then the column headings cause the system to report that multiple values were returned.
I have also tried 'OVERRIDING USER VALUE', but this then complains about not finding a JOIN element.
Any ideas welcome.
I would try something like this:
Insert into test(intents,no_found)
(select item2,item3 from TEST2 where some_condition );
You specify that only two of the three columns receive values; the KEY column is generated. Hence you select only the two related columns.
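Since KEY is defined GENERATED BY DEFAULT (not ALWAYS), an explicit key may also be supplied when the source rows carry one. A sketch, assuming item1 holds unique integers:

```sql
Insert into test(key, intents, no_found)
(select item1, item2, item3 from TEST2 where some_condition);
```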
I've recently started developing apps with PostgreSQL as the backend DB (imposed on me) with no previous experience of Postgres. So far it hasn't been too bad, but now I have run into a problem to which I cannot find an answer.
I created a batch script that runs a pg_dump command for a particular database on the server. This batch file is executed on a schedule by pgAgent.
The pg_dump itself seems to work ok. All the database structure and data are dumped to a file. However the sequences are all set to 1. For example for table tbl_departments the sequence dump looks like this:
CREATE SEQUENCE "tbl_departments_iID_seq"
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
ALTER TABLE "tbl_departments_iID_seq" OWNER TO postgres;
ALTER SEQUENCE "tbl_departments_iID_seq" OWNED BY tbl_departments."iID";
In this particular example the sequence should be set to start with 8, since the last inserted record has iID = 7.
How do I make the pg_dump set the sequence starting number the next one available for each table?
The command for dump is:
%PGBIN%pg_dump -h 192.168.0.112 -U postgres -F p -b -v --inserts -f "\\192.168.0.58\PostgresDB\backup\internals_db.sql" Internals
EDIT:
I think I have found the issue, although I still don't know how to resolve this:
If I open pgAdmin and generate CREATE script for tbl_departments, it look like this:
CREATE TABLE tbl_departments
(
"iID" serial NOT NULL, -- id, autoincrement
"c150Name" character varying(150) NOT NULL, -- human readable name for department
"bRetired" boolean NOT NULL DEFAULT false, -- if TRUE that it is no longer active
"iParentDept" integer NOT NULL DEFAULT 0, -- ID of the parent department
CONSTRAINT tbl_departments_pkey PRIMARY KEY ("iID")
)
The pg_dump statement is:
CREATE TABLE tbl_departments (
"iID" integer NOT NULL,
"c150Name" character varying(150) NOT NULL,
"bRetired" boolean DEFAULT false NOT NULL,
"iParentDept" integer DEFAULT 0 NOT NULL
);
ALTER TABLE tbl_departments OWNER TO postgres;
COMMENT ON TABLE tbl_departments IS 'list of departments';
COMMENT ON COLUMN tbl_departments."iID" IS 'id, autoincrement';
COMMENT ON COLUMN tbl_departments."c150Name" IS 'human readable name for department';
COMMENT ON COLUMN tbl_departments."bRetired" IS 'if TRUE that it is no longer active';
COMMENT ON COLUMN tbl_departments."iParentDept" IS 'ID of the parent department';
CREATE SEQUENCE "tbl_departments_iID_seq"
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
ALTER TABLE "tbl_departments_iID_seq" OWNER TO postgres;
ALTER SEQUENCE "tbl_departments_iID_seq" OWNED BY tbl_departments."iID";
INSERT INTO tbl_departments VALUES (1, 'Information Technologies', false, 0);
INSERT INTO tbl_departments VALUES (2, 'Quality Control', false, 0);
INSERT INTO tbl_departments VALUES (3, 'Engineering', false, 0);
INSERT INTO tbl_departments VALUES (5, 'Quality Assurance', false, 0);
INSERT INTO tbl_departments VALUES (6, 'Production', false, 2);
ALTER TABLE ONLY tbl_departments
ADD CONSTRAINT tbl_departments_pkey PRIMARY KEY ("iID");
SELECT pg_catalog.setval('"tbl_departments_iID_seq"', 1, false);
pg_dump sets the iID column to integer rather than serial, which disables the auto-increment. The setval is also set to 1 rather than 7, as one would expect.
When I open the front-end application and try to add a new department, it fails, because all I am providing is: the name of the new department, active/disabled (true/false), and the ID of the parent department (0 if no parent).
I am expecting the primary key iID for the new record to be created automatically by the DB, which as far as I know is an expected basic feature of any RDBMS.
Because pg_dump converts the serial columns to integer, the auto-increment stops working.
There is no reason for concern.
The generated SQL file will restore current values of sequences.
Open the file with an editor and look for setval.
There should be lines like this:
SELECT pg_catalog.setval('test_id_seq', 1234, true);
If you cannot find them it means that INSERT commands set the proper value of a sequence.
As Craig noticed, the current value of the sequence had to be equal to 1 at the time of dump of the original database. You have probably inserted iID values directly, not using default. In that case the sequence is not used.
Therefore I suggest starting from the beginning, but in two databases:
make an SQL dump as in the question,
create a new database,
run the sql script in the new database,
check whether corresponding serial columns have the same declaration in both databases,
compare current values of corresponding sequences in both databases.
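The current value of a sequence can be read directly; for the sequence in the question, something like:

```sql
SELECT last_value, is_called FROM "tbl_departments_iID_seq";
```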
pg_dump sets the iID column to integer rather than serial, which disables the auto-increment.
That's normal. See the manual.
SERIAL is basically just shorthand for a CREATE SEQUENCE plus an integer column whose default is nextval('seq_name') on that sequence.
The setval is also set to 1 rather than 7, as one would expect.
I can only explain that one by assuming that the sequence start point is 1 in the DB. Perhaps due to a prior attempt at running DDL that altered it, such as a setval or alter sequence?
Use setval to set it to the start point you expect. Then, so long as you don't run other setval commands, ALTER SEQUENCE commands, etc., you'll be fine.
Or maybe the app inserted values directly, without using the sequence? In that case the sequence can be resynchronised from the table's current maximum:
SELECT setval(pg_get_serial_sequence('public.table', 'id'), coalesce(max(id), 0) + 1, false) FROM public.table;
I've got a PgSQL 9.4.3 server setup and previously I was only using the public schema and for example I created a table like this:
CREATE TABLE ma_accessed_by_members_tracking (
reference bigserial NOT NULL,
ma_reference bigint NOT NULL,
membership_reference bigint NOT NULL,
date_accessed timestamp without time zone,
points_awarded bigint NOT NULL
);
Using the Windows program PgAdmin III, I can see that it created the proper information and sequence.
I've recently added another schema called "test" to the same database and created the exact same table as before.
This time, however, I see:
CREATE TABLE test.ma_accessed_by_members_tracking
(
reference bigint NOT NULL DEFAULT nextval('ma_accessed_by_members_tracking_reference_seq'::regclass),
ma_reference bigint NOT NULL,
membership_reference bigint NOT NULL,
date_accessed timestamp without time zone,
points_awarded bigint NOT NULL
);
My question / curiosity is: why, in the public schema, does the reference column show bigserial, but in the test schema it shows bigint with a nextval default?
Both work as expected. I just do not understand why a difference in schemas would produce different table definitions. I realize that bigint and bigserial allow the same range of integers.
Merely A Notational Convenience
According to the documentation on serial types, smallserial, serial, and bigserial are not true data types. Rather, they are a notational convenience for creating, at once, both a sequence and a column whose default value points to that sequence.
I created a test table in the public schema. The psql \d command shows a bigint column type, so maybe it's PgAdmin behaviour?
Update
I checked the PgAdmin source code. The function pgColumn::GetDefinition() scans the pg_depend table for an auto dependency, and when it finds one, it replaces bigint with bigserial to reconstruct the original CREATE TABLE code.
When you create a serial column in the standard way:
CREATE TABLE new_table (
new_id serial);
Postgres creates a sequence with commands:
CREATE SEQUENCE new_table_new_id_seq ...
ALTER SEQUENCE new_table_new_id_seq OWNED BY new_table.new_id;
From documentation: The OWNED BY option causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well.
The standard name of such a sequence is built from the table name, the column name, and the suffix _seq.
If a serial column was created this way, PgAdmin shows its type as serial.
If the sequence has a non-standard name or is not associated with the column, PgAdmin shows nextval() as the default value.
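For reference, the documentation's expansion of a serial column looks roughly like this (a sketch for the new_table example above):

```sql
CREATE SEQUENCE new_table_new_id_seq;
CREATE TABLE new_table (
    new_id integer NOT NULL DEFAULT nextval('new_table_new_id_seq')
);
ALTER SEQUENCE new_table_new_id_seq OWNED BY new_table.new_id;
```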