Postgresql: Why does \d+ table not show default as null

I'm new to Postgres from MySQL.
I've created a table with a column 'col' that has its default as null.
CREATE TABLE widgets (id serial primary key, col bigint default null);
CREATE TABLE
test=# \d+ widgets;
Table "public.widgets"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------+---------+-----------+----------+-------------------------------------+---------+--------------+-------------
id | integer | | not null | nextval('widgets_id_seq'::regclass) | plain | |
col | bigint | | | | plain | |
Indexes:
"widgets_pkey" PRIMARY KEY, btree (id)
test=#
However, in the Default column there is nothing to indicate it's a null field.
Should the default value show as null in the schema output?
I've also tried the same example above, but with default 99. Nothing shows in the Default column when typing \d+. In that example, I insert a row, INSERT INTO widgets (id) VALUES (1); it gets the default value 99, so I know the default value is being used (at least for the integer value).
Update: the default column did in fact show 99. I had used \d+ with autocomplete and viewed the wrong table.
With default null, when I SELECT * FROM widgets the column shows up blank.
Does Postgres not explicitly show a value marked as 'null'? If not, how is it possible to differentiate between an empty text field and a field with a null value?

When nothing is specified, a column's default is always null. Apparently the psql developers thought that showing null as the default value makes no sense, maybe because it is the standard behavior or because null is not really a "value".
Only non-null default values are shown in the \d output.
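You can confirm this in the catalogs. A quick sketch querying information_schema (nothing at all is stored for a plain "default null", so column_default simply comes back null):
SELECT column_name, column_default, is_nullable
FROM information_schema.columns
WHERE table_name = 'widgets';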
how is it possible to differentiate between an empty text field and a field with a null value?
An empty string '' is something different from a null value, and you will see the difference in the output:
postgres=# CREATE TABLE null_test (c1 text default null, c2 text default '');
CREATE TABLE
postgres=# \d null_test
Table "public.null_test"
Column | Type | Collation | Nullable | Default
--------+------+-----------+----------+----------
c1 | text | | |
c2 | text | | | ''::text
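To make the difference visible in query results as well, psql can print a marker for null values. A small sketch with hypothetical data (the marker text is arbitrary):
INSERT INTO null_test DEFAULT VALUES;  -- c1 becomes null, c2 becomes ''
\pset null '(null)'
SELECT c1, c2, c1 IS NULL AS c1_is_null, c2 = '' AS c2_is_empty FROM null_test;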

Related

How to add a row in the postgres table when it is showing duplicate id error even though I haven't passed an id? [duplicate]

This question already has answers here:
How to reset Postgres' primary key sequence when it falls out of sync?
Why do SQL id sequences go out of sync (specifically using Postgres)?
So, I generated a table called person from Mockaroo with about 1000 rows.
Column | Type | Collation | Nullable | Default
------------------+-----------------------+-----------+----------+------------------------------------
id | bigint | | not null | nextval('person_id_seq'::regclass)
first_name | character varying(100) | | not null |
last_name | character varying(100) | | not null |
gender | character varying(7) | | not null |
email | character varying(100) | | |
date_of_birth | date | | not null |
country_of_birth | character varying(100) | | not null |
Indexes:
"person_pkey" PRIMARY KEY, btree (id)
"person_email_key" UNIQUE CONSTRAINT, btree (email)
Above are the table details.
I am trying to insert a row into the table. Since I gave id the BIGSERIAL datatype, it's supposed to auto-increment the id for me every time I insert a row.
But now, as I try to insert a new row, it shows me a duplicate id error.
test=# INSERT INTO person (first_name, last_name, gender, email, date_of_birth, country_of_birth) VALUES ('Sean', 'Paul','Male', 'paul#gmail.com','2001-03-02','India');
ERROR: duplicate key value violates unique constraint "person_pkey"
DETAIL: Key (id)=(2) already exists.
The problem can be one of the following:
somebody ran ALTER SEQUENCE or called the setval function to reset the sequence counter
somebody INSERTed rows with explicit values for id (such as 2), so the default was overridden and the sequence was never advanced; the next nextval() then produces an id that already exists
You can reduce the danger of the latter happening by using identity columns with GENERATED ALWAYS AS IDENTITY.
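A hedged sketch of how to resynchronize the sequence with the data already in the table (table and column names taken from the question):
-- Point the sequence behind person.id at the current maximum id,
-- so the next nextval() returns max(id) + 1:
SELECT setval(pg_get_serial_sequence('person', 'id'), COALESCE(MAX(id), 1))
FROM person;
-- Alternatively, an identity column rejects explicit ids unless you write
-- OVERRIDING SYSTEM VALUE (PostgreSQL 10 and later):
-- id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY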

How to get the standard price field value in odoo?

I have a problem.
The standard_price is a computed field and is not stored in the product_template or product_product table. How can I get the standard price field value in an Odoo xlsx report?
The error is:
Record does not exist or has been deleted.: None
Any solution or idea would help.
Check the cost field of the product_price_history table. I think that is what you are looking for. This table is related to the product_product table through the field product_id:
base=# \dS product_price_history
Table "public.product_price_history"
Column | Type | Modifiers
-------------+-----------------------------+--------------------------------------------------------------------
id | integer | not null default nextval('product_price_history_id_seq'::regclass)
create_uid | integer |
product_id | integer | not null
company_id | integer | not null
datetime | timestamp without time zone |
cost | numeric |
write_date | timestamp without time zone |
create_date | timestamp without time zone |
write_uid | integer |
Indexes:
"product_price_history_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"product_price_history_company_id_fkey" FOREIGN KEY (company_id) REFERENCES res_company(id) ON DELETE SET NULL
"product_price_history_create_uid_fkey" FOREIGN KEY (create_uid) REFERENCES res_users(id) ON DELETE SET NULL
"product_price_history_product_id_fkey" FOREIGN KEY (product_id) REFERENCES product_product(id) ON DELETE CASCADE
"product_price_history_write_uid_fkey" FOREIGN KEY (write_uid) REFERENCES res_users(id) ON DELETE SET NULL

"duplicate key value violates unique constraint" during updating non unique field

We are trying to move our application to a new PostgreSQL cluster.
While doing that, we noticed that the application threw an exception like this:
[2017-06-02 14:43:34,530] ........ (psycopg2.IntegrityError) duplicate key value violates unique constraint "items_url"
DETAIL: Key (url)=(http://www.domainname.ru/ap_module/content/article/400-professional/140-professional/11880) already exists.
[SQL: 'UPDATE items SET status=%(status)s WHERE items.id IN ....
It's very strange because:
the application writes to the items table, not to items_url; items_url is actually just an index on items
the UPDATE only changes the status field, which is not flagged unique and is not the primary key
table items:
id | integer | not null default nextval(('public.items_id_seq'::text)::regclass)
ctime | timestamp without time zone | not null default now()
pubdate | timestamp without time zone | not null default now()
resource_id | integer | not null default 0
url | text |
title | text |
description | text |
body | text |
status | smallint | not null default 0
image | text |
orig_id | integer | not null default 0
mtime | timestamp without time zone | not null default now()
checksum | text |
video_url | text |
audio_url | text |
content_type | smallint | default 0
author | text |
video | text |
fulltext_status | smallint | default 0
summary | text |
image_id | integer |
video_id | integer |
priority | smallint |
Indexes:
"items_pkey" PRIMARY KEY, btree (id)
"items_url" UNIQUE, btree (url)
"items_resource_id" btree (resource_id)
"ndx__items__ctime" btree (ctime)
"ndx__items__image" btree (image_id)
"ndx__items__mtime" btree (mtime)
"ndx__items__pubdate" btree (pubdate)
"ndx__items__video" btree (video_id)
Foreign-key constraints:
"items_fkey1" FOREIGN KEY (image_id) REFERENCES images(id) ON UPDATE CASCADE ON DELETE SET NULL
"items_fkey2" FOREIGN KEY (video_id) REFERENCES videos(id) ON UPDATE CASCADE ON DELETE SET NUL
Well, the question is why this happens and how I can troubleshoot it.
Thank you.
UPD1:
I tried to reproduce it on 9.4 - and it reproduced.
Played with client_encoding. The encoding is the same everywhere.
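A hedged troubleshooting sketch (an assumption, not something confirmed in the thread): an UPDATE writes a new row version and re-inserts its index entries, so the unique index on url is re-checked even though url itself never changes. One thing worth checking is whether the heap already contains duplicate url values that the index fails to catch, for example because the index was corrupted during the cluster move:
-- Look for url values that occur more than once despite the unique index:
SELECT url, count(*)
FROM items
GROUP BY url
HAVING count(*) > 1;
-- If duplicates appear, clean up the offending rows and rebuild the index:
REINDEX INDEX items_url;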

Altering a parent table in Postgresql 8.4 breaks child table defaults

The problem: In PostgreSQL, if table temp_person_two inherits from temp_person, default column values on the child table are ignored if the parent table is altered.
How to replicate:
First, create a table and a child table. The child table should have one column that has a default value.
CREATE TEMPORARY TABLE temp_person (
person_id SERIAL,
name VARCHAR
);
CREATE TEMPORARY TABLE temp_person_two (
has_default character varying(4) DEFAULT 'en'::character varying NOT NULL
) INHERITS (temp_person);
Next, create a trigger on the parent table that copies its data to the child table (I know this looks like bad design, but it is a minimal test case to show the problem).
CREATE FUNCTION temp_person_insert() RETURNS trigger
LANGUAGE plpgsql
AS '
BEGIN
INSERT INTO temp_person_two VALUES ( NEW.* );
RETURN NULL;
END;
';
CREATE TRIGGER temp_person_insert_trigger
BEFORE INSERT ON temp_person
FOR EACH ROW
EXECUTE PROCEDURE temp_person_insert();
Then insert data into parent and select data from child. The data should be correct.
INSERT INTO temp_person (name) VALUES ('ovid');
SELECT * FROM temp_person_two;
person_id | name | has_default
-----------+------+-------------
1 | ovid | en
(1 row )
Finally, alter the parent table by adding a new, unrelated column. Attempt to insert data and watch a not-null constraint violation occur:
ALTER TABLE temp_person ADD column foo text;
INSERT INTO temp_person(name) VALUES ('Corinna');
ERROR: null value in column "has_default" violates not-null constraint
CONTEXT: SQL statement "INSERT INTO temp_person_two VALUES ( $1 .* )"
PL/pgSQL function "temp_person_insert" line 2 at SQL statement
My version:
testing=# select version();
version
-------------------------------------------------------------------------------------------------------
PostgreSQL 8.4.17 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit
(1 row)
It's there all the way to 9.3, but it's going to be tricky to fix, and I'm not sure if it's just undesirable behaviour rather than a bug.
The constraint is still there, but look at the column-order.
Table "pg_temp_2.temp_person"
Column | Type | Modifiers
-----------+-------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
Number of child tables: 1 (Use \d+ to list them.)
Table "pg_temp_2.temp_person_two"
Column | Type | Modifiers
-------------+----------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
has_default | character varying(4) | not null default 'en'::character varying
Inherits: temp_person
ALTER TABLE
Table "pg_temp_2.temp_person_two"
Column | Type | Modifiers
-------------+----------------------+-----------------------------------------------------------------
person_id | integer | not null default nextval('temp_person_person_id_seq'::regclass)
name | character varying |
has_default | character varying(4) | not null default 'en'::character varying
foo | text |
Inherits: temp_person
It works in your first example because you are effectively doing:
INSERT INTO temp_person_two (person_id,name)
VALUES (person_id, name)
BUT look where your new column is added in the child table - at the end! So you end up with
INSERT INTO temp_person_two (person_id,name,has_default)
VALUES (person_id, name, foo)
rather than what you hoped for:
INSERT INTO temp_person_two (person_id,name,foo)...
So - what's the correct behaviour here? If PostgreSQL shuffled the columns in the child table that could break code. If it doesn't, that can also break code. As it happens, I don't think the first option is do-able without substantial PG code changes, so it's unlikely to do that in the medium term.
Moral of the story: explicitly list your INSERT column-names.
Could take a while by hand. You know any languages with regexes? ;-)
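A hedged sketch of the trigger rewritten with explicit column names (reusing the names from the question; after the ALTER TABLE, foo exists in both parent and child, so has_default keeps its default):
CREATE OR REPLACE FUNCTION temp_person_insert() RETURNS trigger
LANGUAGE plpgsql
AS '
BEGIN
    -- Naming the target columns means new parent columns can never shift
    -- values into has_default:
    INSERT INTO temp_person_two (person_id, name, foo)
    VALUES (NEW.person_id, NEW.name, NEW.foo);
    RETURN NULL;
END;
';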
It's not a bug. NEW.* expands to the values of each column in the new row, so you're doing INSERT INTO temp_person_two VALUES ( NEW.person_id, NEW.name, NEW.foo ), the last of which is indeed NULL if you didn't specify it (and wrong if you did).
I'm surprised it even works before you added the new column, since the number of values doesn't match the number of fields in the child table. Presumably it assumes the default for missing trailing values.

reference to a sequence column (postgresql)

I encountered a problem when creating a foreign key referencing a sequence; see the code example below.
But on creating the tables I receive the following error:
"Detail: Key columns "product" and "id" are of incompatible types: integer and ownseq"
I've already tried different datatypes for the product column (like smallint, bigint) but none of them is accepted.
CREATE SEQUENCE ownseq INCREMENT BY 1 MINVALUE 100 MAXVALUE 99999;
CREATE TABLE products (
id ownseq PRIMARY KEY,
...);
CREATE TABLE basket (
basket_id SERIAL PRIMARY KEY,
product INTEGER REFERENCES products(id));
CREATE SEQUENCE ownseq INCREMENT BY 1 MINVALUE 100 MAXVALUE 99999;
CREATE TABLE products (
id integer PRIMARY KEY default nextval('ownseq'),
...
);
alter sequence ownseq owned by products.id;
The key change is that id is defined as an integer, rather than as ownseq. This is what would happen if you used the SERIAL pseudo-type to create the sequence.
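With that change the foreign key from the question works as written, since both columns are plain integers (a sketch reusing the table names from the question):
CREATE TABLE basket (
    basket_id SERIAL PRIMARY KEY,
    product   INTEGER REFERENCES products(id)
);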
Try
CREATE TABLE products (
id INTEGER DEFAULT nextval(('ownseq'::text)::regclass) NOT NULL PRIMARY KEY,
...);
or don't create the sequence ownseq and let postgres do it for you:
CREATE TABLE products (
id SERIAL NOT NULL PRIMARY KEY
...);
In the above case the name of the sequence Postgres creates for you should be products_id_seq.
Hope this helps.
PostgreSQL is powerful and you have just been bitten by an advanced feature.
Your DDL is quite valid but not at all what you think it is.
A sequence can be thought of as an extra-transactional simple table used for generating next values for some columns.
What you meant to do
You meant to have the id field defined thus, as per the other answer:
id integer PRIMARY KEY default nextval('ownseq'),
What you did
What you did was actually define a nested data structure for your table. Suppose I create a test sequence:
CREATE SEQUENCE testseq;
Then suppose I \d testseq on Pg 9.1, I get:
Sequence "public.testseq"
Column | Type | Value
---------------+---------+---------------------
sequence_name | name | testseq
last_value | bigint | 1
start_value | bigint | 1
increment_by | bigint | 1
max_value | bigint | 9223372036854775807
min_value | bigint | 1
cache_value | bigint | 1
log_cnt | bigint | 0
is_cycled | boolean | f
is_called | boolean | f
This is the definition of the row type associated with the sequence.
Now suppose I:
create table seqtest (test testseq, id serial);
I can insert into it:
INSERT INTO seqtest (id, test) values (default, '("testseq",3,4,1,133445,1,1,0,f,f)');
I can then select from it:
select * from seqtest;
test | id
----------------------------------+----
(testseq,3,4,1,133445,1,1,0,f,f) | 2
Moreover I can expand test:
SELECT (test).* FROM seqtest;
sequence_name | last_value | start_value | increment_by | max_value | min_value | cache_value | log_cnt | is_cycled | is_called
---------------+------------+-------------+--------------+-----------+-----------+-------------+---------+-----------+-----------
              |            |             |              |           |           |             |         |           |
testseq       |          3 |           4 |            1 |    133445 |         1 |           1 |       0 | f         | f
(2 rows)
This sort of thing is actually very powerful in PostgreSQL but full of unexpected corners (for example not null and check constraints don't work as expected with nested data types). I don't generally recommend nested data types, but it is worth knowing that PostgreSQL can do this and will be happy to accept SQL commands to do it without warning.