insert multiple rows into table with column that has default value - postgresql

I have a table in PostgreSQL and one of the columns has a default value.
The DDL of the table is:
CREATE TABLE public.my_table_name
(int_column_1 character varying(6) NOT NULL,
text_column_1 character varying(20) NOT NULL,
text_column_2 character varying(15) NOT NULL,
default_column numeric(10,7) NOT NULL DEFAULT 0.1,
time_stamp_column date NOT NULL);
I am trying to insert multiple rows in a single query. For some of the rows I have a value for default_column, and for others I don't and I want Postgres to use the default value for those rows.
Here's what I tried:
INSERT INTO "my_table_name"(int_column_1, text_column_1, text_column_2, default_column, time_stamp_column)
VALUES
(91,'text_row_11','text_row_21',8,current_timestamp),
(91,'text_row_12','text_row_22',,current_timestamp),
(91,'text_row_13','text_row_23',19,current_timestamp),
(91,'text_row_14','text_row_24',,current_timestamp),
(91,'text_row_15','text_row_25',27,current_timestamp);
This gives me an error. So, when I try to insert:
INSERT INTO "my_table_name"(int_column_1, text_column_1, text_column_2, default_column, time_stamp_column)
VALUES (91,'text_row_12','text_row_22',,current_timestamp), -- I want NULL to be appended here, so I left it empty.
-- error from this query: ERROR: syntax error at or near ","
and
INSERT INTO "my_table_name"(int_column_1, text_column_1, text_column_2, default_column, time_stamp_column)
VALUES (91,'text_row_14','text_row_24',NULL,current_timestamp),
-- error from this query is: ERROR: new row for relation "glycemicindxdir" violates check constraint "food_item_check"
So, how do I fix this and insert the value when I have one, or have Postgres insert the default when I don't?

Use the DEFAULT keyword:
INSERT INTO my_table_name
(int_column_1, text_column_1, text_column_2, default_column, time_stamp_column)
VALUES
(91, 'text_row_11', 'text_row_21', 8 , current_timestamp),
(91, 'text_row_12', 'text_row_22', default, current_timestamp),
(91, 'text_row_13', 'text_row_23', 19 , current_timestamp),
(91, 'text_row_14', 'text_row_24', default, current_timestamp),
(91, 'text_row_15', 'text_row_25', 27 , current_timestamp);
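In a multi-row VALUES list, DEFAULT stands in for the column's default on just that row; it is only valid where a column value is expected (INSERT ... VALUES, UPDATE ... SET). As a quick sanity check against the table above (a sketch; rows 12 and 14 should come back with the 0.1 default from the DDL):
SELECT text_column_1, default_column
FROM my_table_name
WHERE int_column_1 = '91'  -- int_column_1 is varchar(6), so compare as text
ORDER BY text_column_1;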

Related

How to change date format of a column based on regex in PostgreSQL 11.0

I have a table in PostgreSQL 11.0 with the following column holding dates (column type: character varying):
 id | date_col
----+------------
  1 | April2006
  2 | May2005
  3 | null
  4 |
  5 | May16,2019
I would like to convert the column to type date.
As there are two different date formats, I am using a CASE statement to alter the column type based on the date pattern.
select *,
case
when date_col ~ '^[A-Za-z]+\d+,\d+' then alter table tbl alter date_col type date using to_date((NULLIF(date_col , 'null')), 'MonthDD,YYYY')
when date_col ~ '^[A-Za-z]+,\d+' then alter table tbl alter date_col type date using to_date((NULLIF(date_col, 'null')), 'MonthYYYY')
else null
end
from tbl
I am getting following error:
[Code: 0, SQL State: 42601] ERROR: syntax error at or near "table"
Position: 93 [Script position: 93 - 98]
The expected output is:
 id | date_col
----+------------
  1 | 2006-04-01
  2 | 2005-05-01
  3 | null
  4 |
  5 | 2019-05-16
Any help is highly appreciated!!
You definitely can't alter a column one row at a time. Your better bet is to update the existing values so they are all the same format, then issue a single ALTER statement.
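A minimal sketch of that approach, assuming tbl and date_col as in the question and that the two formats shown are the only non-null, non-'null' values present:
-- 1) normalize the 'MonthYYYY' values to 'MonthDD,YYYY' by splicing in day 01
UPDATE tbl
SET date_col = regexp_replace(date_col, '(\d{4})$', '01,\1')
WHERE date_col ~ '^[A-Za-z]+\d{4}$';
-- 2) every populated value now matches a single format, so one ALTER suffices
ALTER TABLE tbl
ALTER COLUMN date_col TYPE date
USING to_date(NULLIF(NULLIF(date_col, 'null'), ''), 'MonthDD,YYYY');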

Select largest absolute value column pairs with headers per row

I am using: Microsoft SQL Server 2014 - 12.0.4213.0
Here is my sample table (numbers fuzzed):
CREATE TABLE most_recent_counts(
State VARCHAR(2) NOT NULL PRIMARY KEY
,BuildDate DATE NOT NULL
,Count_1725_Change INTEGER NOT NULL
,Count_1725_Percent_Change NUMERIC(20,2) NOT NULL
,Count_2635_Change INTEGER NOT NULL
,Count_2635_Percent_Change NUMERIC(20,2) NOT NULL
,Count_3645_Change INTEGER NOT NULL
,Count_3645_Percent_Change NUMERIC(20,2) NOT NULL
);
INSERT INTO most_recent_counts
(State, BuildDate, Count_1725_Change, Count_1725_Percent_Change,
 Count_2635_Change, Count_2635_Percent_Change,
 Count_3645_Change, Count_3645_Percent_Change)
VALUES
 ('AK','2018-06-05',   1025, 5.00,   1700,  2.50,   2050,  3.00)
,('AL','2018-06-02',  15000, 4.00,  10400,  2.00,   6800,  1.25)
,('AR','2018-06-07',   2300, 1.00,   2700,  1.00,   1800,  0.50)
,('AZ','2018-04-26', 107000, 5.50,  45400,  3.00, 180000, 16.00)
,('CA','2018-06-07', 140000, 6.00, 550000, 14.00, 600000, 18.00);
It should look something like this:
IMG: https://i.imgur.com/KGkfm66.png
In the real table, I have around 600 such count columns.
I would like to produce, from this table, the top ten column pairs per state by magnitude (i.e. the absolute change together with its percent change). For example, if Alabama's row had a minus 10 million count in the sales to people in the 46-55 range, that pair should definitely be part of the result set, even if the rest of the columns are positive accruals in the thousands.
What's the best way to do this?
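One way to approach this on SQL Server 2014 is to unpivot the change/percent pairs with CROSS APPLY (VALUES ...) and number them by absolute change within each state. A hedged sketch against the sample table above (the age-band labels and output column names are assumptions; the real table would list all ~600 pairs in the VALUES constructor):
SELECT State, AgeBand, Change, PercentChange
FROM (
    SELECT m.State, v.AgeBand, v.Change, v.PercentChange,
           ROW_NUMBER() OVER (PARTITION BY m.State
                              ORDER BY ABS(v.Change) DESC) AS rn
    FROM most_recent_counts AS m
    CROSS APPLY (VALUES
        ('17-25', m.Count_1725_Change, m.Count_1725_Percent_Change),
        ('26-35', m.Count_2635_Change, m.Count_2635_Percent_Change),
        ('36-45', m.Count_3645_Change, m.Count_3645_Percent_Change)
    ) AS v (AgeBand, Change, PercentChange)
) AS ranked
WHERE rn <= 10  -- top ten pairs per state; this sample only has three
ORDER BY State, rn;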

unique date field postgresql default value

I have a date column which I want to be unique once populated, but want the date field to be ignored if it is not populated.
In MySQL the way this is accomplished is to set the date column to "not null" and give it a default value of '0000-00-00' - this allows all other fields in the unique index to be "checked" even if the date column is not populated yet.
This does not work in PostgreSQL because '0000-00-00' is not a valid date, so you cannot store it in a date field (this makes sense to me).
At first glance, leaving the field nullable seemed like an option, but this creates a problem:
=> create table uniq_test(NUMBER bigint not null, date DATE, UNIQUE(number, date));
CREATE TABLE
=> insert into uniq_test(number) values(1);
INSERT 0 1
=> insert into uniq_test(number) values(1);
INSERT 0 1
=> insert into uniq_test(number) values(1);
INSERT 0 1
=> insert into uniq_test(number) values(1);
INSERT 0 1
=> select * from uniq_test;
number | date
--------+------
1 |
1 |
1 |
1 |
(4 rows)
NULL apparently "isn't equal to itself" and so it does not count towards constraints.
If I add an additional unique constraint only on the number field, it checks only number and not date, so I cannot have two rows with the same number and different dates.
I could pick a default date that is a valid date but outside the working scope, and could in fact get away with that for the current project. But there are cases I might encounter in the next few years where it will not be evident that the date is a placeholder just because it is "a long time ago" or "in the future."
The advantage the '0000-00-00' mechanic had for me was precisely that this date isn't real and therefore indicated a non-populated entry (where 'non-populated' was a valid uniqueness attribute). When I look around for solutions to this on the internet, most of what I find is "just use NULL" and "storing zeros is stupid."
TL;DR
Is there a PostgreSQL best practice for needing to include "not populated" as a possible value in a unique constraint including a date field?
Not clear what you want. This is my guess:
create table uniq_test (number bigint not null, date date);
create unique index i1 on uniq_test (number, date)
where date is not null;
create unique index i2 on uniq_test (number)
where date is null;
There will be a unique constraint for non-null dates and another one for null dates, effectively turning the (number, date) tuples into distinct values.
See partial indexes in the PostgreSQL documentation.
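A quick demonstration of how the two partial indexes behave, assuming the table and indexes above:
insert into uniq_test values (1, null);          -- ok
insert into uniq_test values (1, null);          -- fails: duplicate key violates "i2"
insert into uniq_test values (1, '2020-01-01');  -- ok
insert into uniq_test values (1, '2020-01-01');  -- fails: duplicate key violates "i1"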
It's not a best practice, but you can do it this way:
t=# create table so35(i int, d date);
CREATE TABLE
t=# create unique index i35 on so35(i, coalesce(d,'-infinity'));
CREATE INDEX
t=# insert into so35 (i) select 1;
INSERT 0 1
t=# insert into so35 (i) select 2;
INSERT 0 1
t=# insert into so35 (i) select 2;
ERROR: duplicate key value violates unique constraint "i35"
DETAIL: Key (i, (COALESCE(d, '-infinity'::date)))=(2, -infinity) already exists.
STATEMENT: insert into so35 (i) select 2;

Need to apply CHECK constraint and length check on number in SQL

I need to have a CHECK constraint on a column that has to follow the format 010000 through 129999 with the leading zero preserved, but I don't know how to achieve this. Basically, as is evident, it's a numeric month-year.
I have tried using numeric(6,0) and integer, but I don't know how to use a CHECK that preserves the leading zero.
I also don't know how I could achieve this more easily using character varying(6), and that's not preferred anyway, as I think it'll be harder to use in the application layer.
Any suggestions? I'm using Postgres.
Three ways (there may be more):
-- (1) use a date type for a date
CREATE TABLE mmyyyy
( id SERIAL NOT NULL PRIMARY KEY
, yyyymm01 DATE NOT NULL CHECK (date_trunc('month', yyyymm01) = yyyymm01)
);
INSERT INTO mmyyyy(yyyymm01) VALUES
('1901-01-01') ,('0001-01-01') ,('2016-02-01') ;
INSERT INTO mmyyyy(yyyymm01) VALUES ('1901-13-01') ; -- should fail
INSERT INTO mmyyyy(yyyymm01) VALUES ('2016-02-13') ; -- should fail
SELECT id, to_char(yyyymm01, 'mmyyyy') AS this FROM mmyyyy ;
-- (2) use a char type and apply the check on the cast_to_int result
CREATE TABLE omg
( id SERIAL NOT NULL PRIMARY KEY
, mmyyyy varchar(6) NOT NULL CHECK (
length(mmyyyy) = 6 AND
left(mmyyyy,2)::integer BETWEEN 1 AND 12)
);
INSERT INTO omg(mmyyyy) VALUES ('011901') ,('010001') ,('022016') ;
INSERT INTO omg(mmyyyy) VALUES ('131901') ; -- should fail
INSERT INTO omg(mmyyyy) VALUES ('002016') ; -- should fail
SELECT id, mmyyyy FROM omg ;
-- (3) use an int type and apply the check to the value/10000
CREATE TABLE wtf
( id SERIAL NOT NULL PRIMARY KEY
, mmyyyy INTEGER NOT NULL CHECK (
mmyyyy/10000 BETWEEN 1 AND 12)
);
INSERT INTO wtf(mmyyyy) VALUES
(11901) ,(10001) ,(22016)
;
INSERT INTO wtf(mmyyyy) VALUES (131901) ; -- should fail
INSERT INTO wtf(mmyyyy) VALUES (2016) ; -- should fail
SELECT id, to_char(mmyyyy, '099999') AS mmyyyy
FROM wtf
;
-- (extra) use a date/char/int type as the base class for a domain (or type):
-- (this can come in handy if the "type" is used in more than one place)
CREATE DOMAIN omgwtf AS
INTEGER CHECK ( value/10000 BETWEEN 1 AND 12)
;
CREATE TABLE tralala
( id SERIAL NOT NULL PRIMARY KEY
, mmyyyy omgwtf NOT NULL
);
INSERT INTO tralala(mmyyyy) VALUES
(11901) ,(10001) ,(22016)
;
INSERT INTO tralala(mmyyyy) VALUES (131901) ; -- should fail
INSERT INTO tralala(mmyyyy) VALUES (2016) ; -- should fail
SELECT id, to_char(mmyyyy, '099999') AS mmyyyy
FROM tralala
;
The output:
CREATE TABLE
INSERT 0 3
ERROR: date/time field value out of range: "1901-13-01"
LINE 1: INSERT INTO mmyyyy(yyyymm01) VALUES ('1901-13-01') ;
^
HINT: Perhaps you need a different "datestyle" setting.
ERROR: new row for relation "mmyyyy" violates check constraint "mmyyyy_yyyymm01_check"
DETAIL: Failing row contains (4, 2016-02-13).
id | this
----+--------
1 | 011901
2 | 010001
3 | 022016
(3 rows)
CREATE TABLE
INSERT 0 3
ERROR: new row for relation "omg" violates check constraint "omg_mmyyyy_check"
DETAIL: Failing row contains (4, 131901).
ERROR: new row for relation "omg" violates check constraint "omg_mmyyyy_check"
DETAIL: Failing row contains (5, 002016).
id | mmyyyy
----+--------
1 | 011901
2 | 010001
3 | 022016
(3 rows)
CREATE TABLE
INSERT 0 3
ERROR: new row for relation "wtf" violates check constraint "wtf_mmyyyy_check"
DETAIL: Failing row contains (4, 131901).
ERROR: new row for relation "wtf" violates check constraint "wtf_mmyyyy_check"
DETAIL: Failing row contains (5, 2016).
id | mmyyyy
----+---------
1 | 011901
2 | 010001
3 | 022016
(3 rows)
CREATE DOMAIN
CREATE TABLE
INSERT 0 3
ERROR: value for domain omgwtf violates check constraint "omgwtf_check"
ERROR: value for domain omgwtf violates check constraint "omgwtf_check"
id | mmyyyy
----+---------
1 | 011901
2 | 010001
3 | 022016
(3 rows)
I ended up using a yyyymm format, as suggested by @lad2025.
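A sketch of what that yyyymm variant could look like (an assumption; the final code isn't shown here):
CREATE DOMAIN yyyymm AS
INTEGER CHECK ( value % 100 BETWEEN 1 AND 12 ) -- the last two digits are the month
;
-- 201605 passes the check; 201613 fails it.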

an empty row with null-like values in not-null field

I'm using PostgreSQL 9.0 beta 4.
After inserting a lot of data into a partitioned table, I found something weird. When I query the table, I can see an empty row with null-like values in 'not-null' fields.
In the query result, the 689th row is empty. The first 3 fields, (stid, d, ticker), make up the primary key, so they should not be null. The query I used is this:
select * from st_daily2 where stid=267408 order by d
I can even do the group by on this data.
select stid, date_trunc('month', d) ym, count(*) from st_daily2
where stid=267408 group by stid, date_trunc('month', d)
The 'group by' result still has the empty row; the 1st row is empty.
But if I query where 'stid' or 'd' is null, it returns nothing.
Is this a bug in PostgreSQL 9.0 beta 4, or some data corruption?
EDIT :
I added my table definition.
CREATE TABLE st_daily
(
stid integer NOT NULL,
d date NOT NULL,
ticker character varying(15) NOT NULL,
mp integer NOT NULL,
settlep double precision NOT NULL,
prft integer NOT NULL,
atr20 double precision NOT NULL,
upd timestamp with time zone,
ntrds double precision
)
WITH (
OIDS=FALSE
);
CREATE TABLE st_daily2
(
CONSTRAINT st_daily2_pk PRIMARY KEY (stid, d, ticker),
CONSTRAINT st_daily2_strgs_fk FOREIGN KEY (stid)
REFERENCES strgs (stid) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE,
CONSTRAINT st_daily2_ck CHECK (stid >= 200000 AND stid < 300000)
)
INHERITS (st_daily)
WITH (
OIDS=FALSE
);
The data in this table is simulation results. Multiple multithreaded simulation engines written in C# insert data into the database using Npgsql.
psql also shows the empty row.
You'd better file a report at http://www.postgresql.org/support/submitbug
Some questions:
- Could you show us the table definitions and constraints for the partitions?
- How did you load your data?
- Do you get the same result when using another tool, like psql?
The answer to your problem may very well lie in your first sentence:
I'm using postgresql 9.0 beta 4.
Why would you do that? Upgrade to a stable release. Preferably the latest point-release of the current version.
This is 9.1.4 as of today.
I got to the same point: "what in the heck is that blank value?"
No, it's not a NULL, it's a -infinity.
To filter for such a row use:
WHERE CASE WHEN mytestcolumn = '-infinity'::timestamp
             OR mytestcolumn = 'infinity'::timestamp
           THEN NULL
           ELSE mytestcolumn
      END IS NULL
instead of:
WHERE mytestcolumn IS NULL
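An equivalent filter, as a sketch using isfinite() (which returns false for +/-infinity on date/timestamp values, so NULLs still need their own test):
WHERE mytestcolumn IS NULL
OR NOT isfinite(mytestcolumn)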