Peewee Python default not reflected in DateTimeField - PostgreSQL

I am not able to set a default timestamp despite using DateTimeField(default=datetime.datetime.now). The column is created as NOT NULL, but no default value is set.
Here is my model:
import datetime

from peewee import *

# dbname, username, password, host are placeholders for the real connection values.
database = PostgresqlDatabase(dbname, user=username, password=password, host=host)

class BaseModel(Model):
    class Meta:
        database = database

class UserInfo(BaseModel):
    id = PrimaryKeyField()
    username = CharField(unique=True)
    password = CharField()
    email = CharField(null=True)
    created_date = DateTimeField(default=datetime.datetime.now)
When I create the table from this model with the code below:
database.connect()
database.create_tables([UserInfo])
I get the following table:
Table "public.userinfo"
Column | Type | Modifiers
--------------+-----------------------------+------------------------- ------------------------------
id | integer | not null default nextval('userinfo_id_seq'::regclass)
username | character varying(255) | not null
password | character varying(255) | not null
email | character varying(255) |
created_date | timestamp without time zone | not null
Indexes:
"userinfo_pkey" PRIMARY KEY, btree (id)
"userinfo_username" UNIQUE, btree (username)
As you can see, created_date is not given any default in the table definition.

When using the default parameter, the default values are applied by Peewee rather than being part of the actual table and column definition. If you want the default in the schema itself, declare it as a server-side constraint:
created_date = DateTimeField(constraints=[SQL('DEFAULT CURRENT_TIMESTAMP')])
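For illustration, a minimal sketch contrasting the two approaches (the connection parameters and field names here are placeholders, not from the original question):

import datetime

from peewee import Model, PostgresqlDatabase, DateTimeField, SQL

database = PostgresqlDatabase('mydb', user='user', password='secret', host='localhost')

class UserInfo(Model):
    # Client-side default: Peewee fills this in at insert time;
    # the column gets no DEFAULT in the DDL.
    created_py = DateTimeField(default=datetime.datetime.now)
    # Server-side default: PostgreSQL fills this in itself, and \d
    # shows it in the Modifiers column.
    created_db = DateTimeField(constraints=[SQL('DEFAULT CURRENT_TIMESTAMP')])

    class Meta:
        database = database

database.connect()
database.create_tables([UserInfo])

With the constraints form, rows inserted outside Peewee (psql, other applications) also receive the timestamp, which a Python-level default cannot guarantee.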

Related

How to add a row in the postgres table when it is showing duplicate id error even though I haven't passed an id? [duplicate]

This question already has answers here:
How to reset Postgres' primary key sequence when it falls out of sync?
Why do SQL id sequences go out of sync (specifically using Postgres)?
So, I generated a table called person from Mockaroo with about 1,000 rows.
      Column      |          Type          | Collation | Nullable |               Default
------------------+------------------------+-----------+----------+-------------------------------------
 id               | bigint                 |           | not null | nextval('person_id_seq'::regclass)
 first_name       | character varying(100) |           | not null |
 last_name        | character varying(100) |           | not null |
 gender           | character varying(7)   |           | not null |
 email            | character varying(100) |           |          |
 date_of_birth    | date                   |           | not null |
 country_of_birth | character varying(100) |           | not null |
Indexes:
    "person_pkey" PRIMARY KEY, btree (id)
    "person_email_key" UNIQUE CONSTRAINT, btree (email)
Above are the table details.
I am trying to insert a row into the table. Since I declared id as BIGSERIAL, it is supposed to auto-increment every time I insert a row.
But now, when I try to insert a new row, I get a duplicate key error.
test=# INSERT INTO person (first_name, last_name, gender, email, date_of_birth, country_of_birth) VALUES ('Sean', 'Paul', 'Male', 'paul@gmail.com', '2001-03-02', 'India');
ERROR: duplicate key value violates unique constraint "person_pkey"
DETAIL: Key (id)=(2) already exists.
The problem can be one of the following:
somebody ran ALTER SEQUENCE or called the setval function to reset the sequence counter
somebody INSERTed a row with an explicit value of 2 for id, so that the default was overridden rather than taken from the sequence
You can reduce the danger of the latter happening by using identity columns with GENERATED ALWAYS AS IDENTITY; both fixes are sketched below.
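A sketch of both remedies, assuming the person table and the sequence name shown in the \d output above:

-- Re-sync the sequence with the highest id already in the table,
-- so the next generated default no longer collides:
SELECT setval('person_id_seq', (SELECT MAX(id) FROM person));

-- For new tables, an identity column rejects explicit ids outright
-- (unless OVERRIDING SYSTEM VALUE is requested), so the sequence
-- cannot silently fall out of sync:
CREATE TABLE person_v2 (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    first_name varchar(100) NOT NULL
);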

psycopg2.errors.UndefinedColumn: column excluded.number does not exist

I have two tables in two different schemas on one database:
CREATE TABLE IF NOT EXISTS target_redshift.dim_collect_projects (
    project_id BIGINT NOT NULL UNIQUE,
    project_number BIGINT,
    project_name VARCHAR(300) NOT NULL,
    connect_project_id BIGINT NOT NULL,
    project_desc VARCHAR(5000) NOT NULL,
    project_type VARCHAR(50) NOT NULL,
    project_status VARCHAR(100),
    project_path VARCHAR(32768),
    language_code VARCHAR(10),
    country_code VARCHAR(10),
    timezone VARCHAR(10),
    project_created_at TIMESTAMP WITHOUT TIME ZONE,
    project_modified_at TIMESTAMP WITHOUT TIME ZONE,
    date_created TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT NOW(),
    date_updated TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT NOW()
);
CREATE TABLE IF NOT EXISTS source_redshift.dim_collect_projects (
    id BIGINT NOT NULL UNIQUE,
    number BIGINT,
    name VARCHAR(300) NOT NULL,
    connect_project_id BIGINT NOT NULL,
    description VARCHAR(5000) NOT NULL,
    type VARCHAR(50) NOT NULL,
    status VARCHAR(100),
    path VARCHAR(32768),
    language VARCHAR(10),
    country VARCHAR(10),
    timezone VARCHAR(10),
    created TIMESTAMP WITHOUT TIME ZONE NULL DEFAULT NOW(),
    modified TIMESTAMP WITHOUT TIME ZONE NULL DEFAULT NOW()
);
I need to copy the data from the second table into the first.
I do it like this:
INSERT INTO target_redshift.dim_collect_projects AS t
SELECT id, number, name, connect_project_id, description,
type, status, path, language, country, timezone, created,
modified
FROM source_redshift.dim_collect_projects
ON CONFLICT (project_id)
DO UPDATE SET
(t.project_number, t.project_name, t.connect_project_id, t.project_desc,
t.project_type, t.project_status, t.project_path, t.language_code,
t.country_code, t.timezone, t.project_created_at, t.project_modified_at,
t.date_created, t.date_updated) = (EXCLUDED.number, EXCLUDED.name, EXCLUDED.connect_project_id,
EXCLUDED.description, EXCLUDED.type, EXCLUDED.status,
EXCLUDED.path, EXCLUDED.language, EXCLUDED.country,
EXCLUDED.timezone, EXCLUDED.created, EXCLUDED.modified, t.date_created, NOW())
And Airflow raises an error:
psycopg2.errors.UndefinedColumn: column excluded.number does not exist
LINE 12: t.date_created, t.date_updated) = (EXCLUDED.number, ...
You need to use the target_redshift.dim_collect_projects field names for the excluded.* fields, e.g. excluded.project_number. The target table is the controlling one for the column names, as that is where the insert is being attempted.
UPDATE
Using an example table from my test database:
\d animals
Table "public.animals"
Column | Type | Collation | Nullable | Default
--------+------------------------+-----------+----------+---------
id | integer | | not null |
cond | character varying(200) | | not null |
animal | character varying(200) | | not null |
Indexes:
"animals_pkey" PRIMARY KEY, btree (id)
\d animals_target
Table "public.animals_target"
Column | Type | Collation | Nullable | Default
---------------+------------------------+-----------+----------+---------
target_id | integer | | not null |
target_cond | character varying(200) | | |
target_animal | character varying(200) | | |
Indexes:
"animals_target_pkey" PRIMARY KEY, btree (target_id)
INSERT INTO animals_target
SELECT * FROM animals
ON CONFLICT (target_id)
DO UPDATE SET
    (target_id, target_cond, target_animal) =
    (excluded.target_id, excluded.target_cond, excluded.target_animal);
NOTE: No table alias is used for the table being inserted into.
The target table is the one the data is being inserted into. The attempted INSERT is into its columns, so they are the ones that are potentially excluded.
For anyone who might come here later, I had a table which was created from an Excel import, and unwittingly one of the column names started with a Unicode character (in other words, an invisible character).
ERROR: column excluded.columnname does not exist
LINE 5: ... (yada, yada) = (excluded.columnname, excluded.yada)
HINT: Perhaps you wanted to reference the column "excluded.columnname".
Since none of the answers have been marked as correct, I suggest that errors like the above may arise, even though everything looks perfectly fine, when a column name begins with one of these invisible characters. At least that was the case for me, and I had to scratch my head for quite a while before I figured it out.
One way to avoid such issues is to not create tables automatically based on the contents of Excel files; a catalog query for spotting such column names is sketched below.
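As a hedged aid (not from the original answers): PostgreSQL regular expressions support the [[:ascii:]] character class, so column names containing invisible or otherwise non-ASCII characters can be listed straight from the catalog:

SELECT table_schema, table_name, column_name
FROM information_schema.columns
WHERE column_name ~ '[^[:ascii:]]'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');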
I did it like this:
INSERT INTO dim_collect_projects_1 AS t (project_id, project_number, project_name, connect_project_id, project_desc,
project_type, project_status, project_path, language_code,
country_code, timezone, project_created_at, project_modified_at)
SELECT s.id, s.number, s.name, s.connect_project_id, s.description,
s.type, s.status, s.path, s.language, s.country, s.timezone, s.created,
s.modified
FROM dim_collect_projects_2 AS s
ON CONFLICT (project_id)
DO UPDATE SET
(project_number, project_name, connect_project_id, project_desc,
project_type, project_status, project_path, language_code,
country_code, timezone, project_created_at, project_modified_at,
date_updated) = (EXCLUDED.project_number,
EXCLUDED.project_name, EXCLUDED.connect_project_id,
EXCLUDED.project_desc, EXCLUDED.project_type, EXCLUDED.project_status,
EXCLUDED.project_path, EXCLUDED.language_code, EXCLUDED.country_code,
EXCLUDED.timezone, EXCLUDED.project_created_at,
EXCLUDED.project_modified_at, NOW())
WHERE t.project_number != EXCLUDED.project_number
OR t.project_name != EXCLUDED.project_name
OR t.connect_project_id != EXCLUDED.connect_project_id
OR t.project_desc != EXCLUDED.project_desc
OR t.project_type != EXCLUDED.project_type
OR t.project_status != EXCLUDED.project_status
OR t.project_path != EXCLUDED.project_path
OR t.language_code != EXCLUDED.language_code
OR t.country_code != EXCLUDED.country_code
OR t.timezone != EXCLUDED.timezone
OR t.project_created_at != EXCLUDED.project_created_at
OR t.project_modified_at != EXCLUDED.project_modified_at;
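One caveat worth flagging as an aside: several of these columns are nullable (project_status, project_path, language_code, and so on), and != yields NULL rather than TRUE when either side is NULL, so such rows silently skip the update. IS DISTINCT FROM treats NULL as an ordinary comparable value; a sketch of the same WHERE clause with that operator:

WHERE t.project_number IS DISTINCT FROM EXCLUDED.project_number
   OR t.project_status IS DISTINCT FROM EXCLUDED.project_status
   OR t.project_path IS DISTINCT FROM EXCLUDED.project_path
   -- ...and likewise for the remaining columns.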

Postgresql: Why does \d+ table not show default as null

I'm new to Postgres from MySQL.
I've created a table with a column 'col' that has its default as null.
CREATE TABLE widgets (id serial primary key, col bigint default null);
CREATE TABLE
test=# \d+ widgets
                                            Table "public.widgets"
 Column |  Type   | Collation | Nullable |               Default               | Storage | Stats target | Description
--------+---------+-----------+----------+-------------------------------------+---------+--------------+-------------
 id     | integer |           | not null | nextval('widgets_id_seq'::regclass) | plain   |              |
 col    | bigint  |           |          |                                     | plain   |              |
Indexes:
    "widgets_pkey" PRIMARY KEY, btree (id)
However, in the Default column there is nothing to indicate it's a null field.
Should the default value show as null in the schema output?
I've also tried the same example above, but with default 99. Nothing shows in the Default column when typing \d+. In that example, I insert a row, INSERT INTO widgets (id) VALUES (1); it gets the default value 99, so I know the default value is being used (at least for the integer value).
Update: the default column did in fact show 99. I had used \d+ with autocomplete and viewed the wrong table.
With default null, when I SELECT * FROM widgets the column shows up blank.
Does Postgres not explicitly show a value as 'null'? If not, how is it possible to differentiate between an empty text field and a field with a null value?
When nothing is specified, a column's default is null. Apparently the psql developers thought that showing null as the default value makes no sense, maybe because it is the standard behavior, or because null is not a "value".
Only non-null default values are shown in the \d output.
how is it possible to differentiate between an empty text field and a field with a null value?
An empty string '' is something different than a null value and you will see the difference in the output:
postgres=# CREATE TABLE null_test (c1 text default null, c2 text default '');
CREATE TABLE
postgres=# \d null_test
              Table "public.null_test"
 Column | Type | Collation | Nullable | Default
--------+------+-----------+----------+----------
 c1     | text |           |          |
 c2     | text |           |          | ''::text
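To make the difference visible in query output as well, psql can print a placeholder for null; a short transcript continuing the example above:

postgres=# INSERT INTO null_test DEFAULT VALUES;
INSERT 0 1
postgres=# \pset null '(null)'
Null display is "(null)".
postgres=# SELECT * FROM null_test;
   c1   | c2
--------+----
 (null) |
(1 row)

Here c1 (default null) prints as (null), while c2 (default '') prints as an empty string.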

How to get the standard price field value in odoo?

I have a problem.
standard_price is a computed field and is not stored in the product_template or product_product tables. How can I get the standard price field value in an Odoo XLSX report?
The error is:
Record does not exist or has been deleted.: None
Any solution or idea would help.
Check the cost field of the product_price_history table; I think that is what you are looking for. This table is related to the product_product table through the field product_id (a sample query follows the table below):
base=# \dS product_price_history
                       Table "public.product_price_history"
   Column    |            Type             |                            Modifiers
-------------+-----------------------------+---------------------------------------------------------------------
 id          | integer                     | not null default nextval('product_price_history_id_seq'::regclass)
 create_uid  | integer                     |
 product_id  | integer                     | not null
 company_id  | integer                     | not null
 datetime    | timestamp without time zone |
 cost        | numeric                     |
 write_date  | timestamp without time zone |
 create_date | timestamp without time zone |
 write_uid   | integer                     |
Indexes:
    "product_price_history_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
    "product_price_history_company_id_fkey" FOREIGN KEY (company_id) REFERENCES res_company(id) ON DELETE SET NULL
    "product_price_history_create_uid_fkey" FOREIGN KEY (create_uid) REFERENCES res_users(id) ON DELETE SET NULL
    "product_price_history_product_id_fkey" FOREIGN KEY (product_id) REFERENCES product_product(id) ON DELETE CASCADE
    "product_price_history_write_uid_fkey" FOREIGN KEY (write_uid) REFERENCES res_users(id) ON DELETE SET NULL

How to use subquery In Sphinx's multi-valued attributes (MVA) for query type

My database is PostgreSQL, and I want to use the Sphinx search engine to index my data.
How can I use sql_attr_multi to fetch the relationship data?
The table schemas in PostgreSQL are:
crm=# \d orders
                              Table "orders"
    Column    |            Type             |                Modifiers
--------------+-----------------------------+-----------------------------------------
 id           | bigint                      | not null
 trade_id     | bigint                      | not null
 item_id      | bigint                      | not null
 price        | numeric(10,2)               | not null
 total_amount | numeric(10,2)               | not null
 subject      | character varying(255)      | not null default ''::character varying
 status       | smallint                    | not null default 0
 created_at   | timestamp without time zone | not null default now()
 updated_at   | timestamp without time zone | not null default now()
Indexes:
    "orders_pkey" PRIMARY KEY, btree (id)
    "orders_trade_id_idx" btree (trade_id)
crm=# \d trades
                              Table "trades"
     Column      |            Type             |        Modifiers
-----------------+-----------------------------+-------------------------
 id              | bigint                      | not null
 operator_id     | bigint                      | not null
 customer_id     | bigint                      | not null
 category_ids    | bigint[]                    | not null
 total_amount    | numeric(10,2)               | not null
 discount_amount | numeric(10,2)               | not null
 created_at      | timestamp without time zone | not null default now()
 updated_at      | timestamp without time zone | not null default now()
Indexes:
    "trades_pkey" PRIMARY KEY, btree (id)
The Sphinx's config is:
source trades_src
{
type = pgsql
sql_host = 10.10.10.10
sql_user = ******
sql_pass = ******
sql_db = crm
sql_port = 5432
sql_query = \
SELECT id, operator_id, customer_id, category_ids, total_amount, discount_amount, \
date_part('epoch',created_at) AS created_at, \
date_part('epoch',updated_at) AS updated_at \
FROM public.trades;
#attributes
sql_attr_bigint = operator_id
sql_attr_bigint = customer_id
sql_attr_float = total_amount
sql_attr_float = discount_amount
sql_attr_multi = bigint category_ids from field category_ids
#sql_attr_multi = bigint order_ids from query; SELECT id FROM orders
# How can I add a WHERE condition to the query for orders? e.g. WHERE trade_id = ?
sql_attr_timestamp = created_at
sql_attr_timestamp = updated_at
}
I used an MVA (multi-valued attribute) for the category_ids field, which is an ARRAY type in PostgreSQL.
But I do not know how to define an MVA for order_ids. Should it be done through a subquery?
Copied from the Sphinx forum:
sql_attr_multi = bigint order_ids from query; SELECT trade_id,id FROM orders ORDER BY trade_id
The first column of the query is the Sphinx document_id (i.e. the id in the main sql_query).
The second column is the value to insert into the MVA array for that document.
(The ORDER BY might not strictly be needed, but IIRC Sphinx is much quicker at processing the data if it is ordered by document_id.)
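Placed into the source block from the question, the relevant line would look like this (everything else stays as in the original config; no per-document WHERE is needed, because Sphinx matches each row to a document via the first column):

source trades_src
{
    # ...connection settings and sql_query as above...

    # one MVA entry per matching orders row; column 1 = document id,
    # column 2 = the value appended to that document's order_ids
    sql_attr_multi = bigint order_ids from query; SELECT trade_id, id FROM orders ORDER BY trade_id
}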