PostgreSQL 12 CAST now() to date not working - postgresql

I find it strange that my database gives the following result:
my1db=# select now()::date;
now
----------------------------
2020-10-08 19:57:24.483647
(1 row)
But here is the result from my other database:
my2db=# SELECT now()::date;
now
------------
2020-10-08
(1 row)
My 1st DB is not casting the value the way my 2nd DB does.
1st DB: OS - RHEL 7, PostgreSQL 12.2
2nd DB: OS - RHEL 7, PostgreSQL 12.1
What am I missing?
This is the reason I am not getting correct results from my applications.
Edit (images removed, replaced with text):
1st DB
template1=# \dC date
List of casts
Source type | Target type | Function | Implicit?
-----------------------------+-----------------------------+--------------+---------------
date | text | (with inout) | yes
date | timestamp without time zone | timestamp | yes
date | timestamp with time zone | timestamptz | yes
timestamp without time zone | date | date | in assignment
timestamp with time zone | date | date | in assignment
(5 rows)
template1=# select castfunc::regproc from pg_cast where casttarget = 'date'::regtype;
castfunc
------------------------
pg_catalog."timestamp"
pg_catalog."timestamp"
pg_catalog."timestamp"
(3 rows)
template1=# select castfunc::regproc, proname, proowner, prosrc, rolname from pg_cast, pg_proc, pg_authid where casttarget = 'date'::regtype and castfunc = pg_proc.oid and proowner = pg_authid.oid;
castfunc | proname | proowner | prosrc | rolname
------------------------+-----------+----------+-----------------------+---------
pg_catalog."timestamp" | timestamp | 10 | date_timestamp | pgadmin
pg_catalog."timestamp" | timestamp | 10 | timestamptz_timestamp | pgadmin
pg_catalog."timestamp" | timestamp | 10 | timestamp_scale | pgadmin
(3 rows)
2nd DB - cast behaves normally
psql (12.1)
Type "help" for help.
template1=# \dC date
List of casts
Source type | Target type | Function | Implicit?
-----------------------------+-----------------------------+-------------+---------------
date | timestamp without time zone | timestamp | yes
date | timestamp with time zone | timestamptz | yes
timestamp without time zone | date | date | in assignment
timestamp with time zone | date | date | in assignment
(4 rows)
template1=# select castfunc::regproc from pg_cast where casttarget = 'date'::regtype;
castfunc
-----------------
pg_catalog.date
pg_catalog.date
(2 rows)
template1=# select castfunc::regproc, proname, proowner, prosrc, rolname from pg_cast, pg_proc, pg_authid where casttarget = 'date'::regtype and castfunc = pg_proc.oid and proowner = pg_authid.oid;
castfunc | proname | proowner | prosrc | rolname
-----------------+---------+----------+------------------+----------
pg_catalog.date | date | 10 | timestamp_date | postgres
pg_catalog.date | date | 10 | timestamptz_date | postgres
(2 rows)
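The diagnostics show that on the 1st DB the casts whose target is date point at timestamp-returning functions (prosrc date_timestamp, timestamptz_timestamp, timestamp_scale) instead of the stock date(...) functions. Built-in casts are pinned and cannot be dropped, so someone most likely updated pg_cast directly. A hedged repair sketch, assuming only pg_cast was tampered with (superuser required, catalog surgery at your own risk; restoring from a dump onto a fresh cluster is the safer route):
begin;
-- point the two standard casts back at the stock pg_catalog.date functions
update pg_cast
   set castfunc = 'pg_catalog.date(timestamp without time zone)'::regprocedure
 where castsource = 'timestamp without time zone'::regtype
   and casttarget = 'date'::regtype;
update pg_cast
   set castfunc = 'pg_catalog.date(timestamp with time zone)'::regprocedure
 where castsource = 'timestamp with time zone'::regtype
   and casttarget = 'date'::regtype;
commit;
-- the stray implicit date -> text cast is not pinned and can simply be dropped
drop cast (date as text);
Note also that a stock catalog has only two casts targeting date; the third row (the timestamp_scale one) does not belong there and deserves a closer look.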

Related

SELECT date and time without second-fractions from unix epoch

The answer to this isn't exactly what I wanted.
I have for example:
select to_timestamp(1476544839);
to_timestamp
------------------------
2016-10-15 16:20:39+01
(1 row)
But I do not want the fractional part of seconds.
Expected:
to_timestamp
------------------------
2016-10-15 16:20:39
PGSQL version:
select version();
version
-------------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 9.6.20 on x86_64-pc-linux-gnu (Ubuntu 9.6.20-1.pgdg18.04+1), compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
(1 row)
EDIT
As a follow-up to the question, I have a column named timestamp of type timestamp with time zone.
> \d my_table;
View "postgres.my_table"
Column | Type | Collation | Nullable | Default
------------+--------------------------+-----------+----------+---------
session_id | integer | | |
timestamp | timestamp with time zone | | |
seconds | integer | | |
... ...
And so
SELECT timestamp FROM my_table
timestamp
----------------------------
2016-03-17 08:33:51.842+00
2016-03-17 08:33:51.738+00
2016-03-17 08:33:50.794+00
(3 rows)
It is these fractional parts of seconds (.842, .738, .794) that I do not want.
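For the record, a minimal sketch of two ways to drop the fractional seconds: date_trunc truncates and keeps the timestamptz type, while a cast to timestamp(0) rounds to whole seconds and also drops the offset, matching the expected output above.
select date_trunc('second', to_timestamp(1476544839));
select to_timestamp(1476544839)::timestamp(0);
For the my_table column it would be, e.g., select date_trunc('second', "timestamp") from my_table;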

Outdated row doesn't move out of a foreign table partition - postgres

I'm trying to learn how sharding is configured in Postgres.
My Postgres setup has a temperature table with 4 partitions, each covering a different range of "timestamp" values.
postgres=# \d+ temperature
Partitioned table "public.temperature"
Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
-----------+-----------------------------+-----------+----------+-----------------------------------------+---------+--------------+-------------
id | bigint | | not null | nextval('temperature_id_seq'::regclass) | plain | |
city_id | integer | | not null | | plain | |
timestamp | timestamp without time zone | | not null | | plain | |
temp | numeric(5,2) | | not null | | main | |
Partition key: RANGE ("timestamp")
Partitions: temperature_201901 FOR VALUES FROM ('2019-01-01 00:00:00') TO ('2019-02-01 00:00:00'),
temperature_201902 FOR VALUES FROM ('2019-02-01 00:00:00') TO ('2019-03-01 00:00:00'),
temperature_201903 FOR VALUES FROM ('2019-03-01 00:00:00') TO ('2019-04-01 00:00:00'),
temperature_201904 FOR VALUES FROM ('2019-04-01 00:00:00') TO ('2019-05-01 00:00:00')
The temperature_201904 partition, in particular, is a foreign table:
postgres=# \d+ temperature_201904
Foreign table "public.temperature_201904"
Column | Type | Collation | Nullable | Default | FDW options | Storage | Stats target | Description
-----------+-----------------------------+-----------+----------+-----------------------------------------+-------------+---------+--------------+-------------
id | bigint | | not null | nextval('temperature_id_seq'::regclass) | | plain | |
city_id | integer | | not null | | | plain | |
timestamp | timestamp without time zone | | not null | | | plain | |
temp | numeric(5,2) | | not null | | | main | |
Partition of: temperature FOR VALUES FROM ('2019-04-01 00:00:00') TO ('2019-05-01 00:00:00')
Partition constraint: (("timestamp" IS NOT NULL) AND ("timestamp" >= '2019-04-01 00:00:00'::timestamp without time zone) AND ("timestamp" < '2019-05-01 00:00:00'::timestamp without time zone))
Server: shard02
Insert works as expected. If I insert the following value and check from the remote host shard02, then the value exists. Fantastic!
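The insert, reconstructed from the row shown below, would have looked roughly like this:
insert into temperature (city_id, "timestamp", temp)
values (1, '2019-04-02 00:00:00', 12.30);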
postgres=# select * from temperature_201904;
id | city_id | timestamp | temp
----+---------+---------------------+-------
1 | 1 | 2019-04-02 00:00:00 | 12.30
(1 row)
However, if I update the timestamp of this row such that it's no longer valid for the range defined for the partition, I'd expect it to get moved out and placed into the correct partition, temperature_201901, but it's not.
postgres=# update temperature set timestamp = '2019-01-04' where id=1;
UPDATE 1
postgres=# select * from temperature_201904 ;
id | city_id | timestamp | temp
----+---------+---------------------+-------
1 | 1 | 2019-01-04 00:00:00 | 12.30
Again, just to reiterate, this table has a range temperature_201904 FOR VALUES FROM ('2019-04-01 00:00:00') TO ('2019-05-01 00:00:00') and is a foreign table.
Feels like I'm missing something here.
Is this an expected behavior? If so, is there a way to configure such that data are automatically moved between nodes as their partition constraints are changed?
Thanks in advance!
postgres=# SELECT version();
version
------------------------------------------------------------------------------------------------------------------
PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
This seems to be expected. From the docs
While rows can be moved from local partitions to a foreign-table partition (provided the foreign data wrapper supports tuple routing), they cannot be moved from a foreign-table partition to another partition.
Now, I would have expected an ERROR rather than silently violating the implied constraint, but I wouldn't expect this to have worked the way you want it to.
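A hedged workaround sketch, until cross-node row movement is supported: delete the stray row from the foreign partition and re-insert it through the parent, letting tuple routing place it in the correct local partition.
with moved as (
    delete from temperature_201904 where id = 1 returning *
)
insert into temperature select * from moved;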

How to retrieve information from three tables with the below conditions in PostgreSQL

I have three tables.
TABLE_1:
T2_ID | ver     | date                          | boolean
------+---------+-------------------------------+---------
1 | X-20-50 | 2019-01-01 16:20:51.722336+00 | TRUE
2 | X-50-30 | 2019-02-26 16:20:51.722336+00 | TRUE
3 | X-20-32 | 2019-03-20 16:20:51.722336+00 | FALSE
1 | X-20-50 | 2019-01-09 16:20:51.722336+00 | FALSE
2 | X-20-50 | 2019-12-02 16:20:51.722336+00 | TRUE
3 | X-20-50 | 2019-01-24 16:20:51.722336+00 | TRUE
TABLE_2:
id | type | scheduler
--------------------------------------------------
1 | ABC | w1,w2,w3,w4,w5,w6,w7,w8,w9,w10,w11,w12
2 | PQR | w5,w9
3 | TRC | w1,w4,w8
TABLE_3
start_date_of_ver | end_date_of_ver | ver_name
-----------------------------------------------------------
2019-01-01 00:00:00+00 | 2019-04-01 00:00:00+00 | X-20-50
2019-02-25 00:00:00+00 | 2019-05-26 00:00:00+00 | X-50-30
2019-03-15 00:00:00+00 | 2019-06-06 00:00:00+00 | X-20-32
Table 4 should fulfill the conditions below:
it takes a version name (ver_name) as input;
from this ver_name it takes the start date and end date of the version (from table_3); if the version period is 3 months, it creates a 12-week table with id (type) as the first column and makes an entry for each of the twelve weeks according to the scheduler in table_2;
information in table 4 will be updated as and when table 1 has entries for that particular week which are TRUE.
Note: table_1 entries are generated on a daily basis.
Desired table: it takes only ver_name as input and calculates the table below.
When table_1 doesn't have any entries, table_4 should look like this:
Table_4: X-20-50
id_of_table_2 | week_1 | week_2 | week_3 | week_4 | week_5 | week_6 | week_7 | week_8 | week_9 | week_10 | week_11 | week_12 |
------------------------------------------------------------------------------------------------------------------------------
ABC | w1 | w2 | w3 | w4 | w5 | w6 | w7 | w8 | w9 | w10 | w11 | w12 |
PQR | | | | | w5 | | | | w9 | | | |
TRC | w1 | | | w4 | | | | w8 | | | | |
When table_1 has entries, table_4 should look like this:
X-20-50
id_of_table_2 | week_1 | week_2 | week_3 | week_4 | week_5 | week_6 | week_7 | week_8 | week_9 | week_10 | week_11 | week_12 |
------------------------------------------------------------------------------------------------------------------------------
ABC | Done | Done | w3 | w4 | w5 | w6 | w7 | w8 | w9 | w10 | w11 | w12 |
PQR | | | | | w5 | | | | w9 | | | |
TRC | Done | | | w4 | | | | w8 | | | | |
You can create a function that takes the starting date of a week as input.
Example-
create function a(start_date date)
RETURNS json
LANGUAGE plpgsql
COST 100
VOLATILE
AS $BODY$
DECLARE
    outputjson json;
BEGIN
    -- aggregate one week of rows: start_date up to, but not including, start_date + 7 days
    EXECUTE format(
        'select json_agg(t) from table_name t where date >= %L and date < %L',
        start_date, start_date + 7)
    INTO outputjson;
    RETURN outputjson;
END;
$BODY$;
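A hypothetical call (table_name and its date column are placeholders, not real objects):
select a('2019-01-01'::date);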
Hope this will help.
Your requirement needs a little refinement. You specify retrieving weekly data yet fail to define your week. On what day does it begin? Are all weeks 7 days long? What happens when Dec 31 falls on a Tuesday: is Friday, Jan 3 in the same week (see the current year's calendar)? Then there is the issue of user input and what it represents. Is it the desired start date, making the week that date plus the next 6 days, or is it any date within the weekly period?
The following assumes the ISO 8601 definition (google it - lots of stuff). Every week begins on Monday and all weeks are 7 days long. (Thus the week containing 31-Dec-2019 also includes 3-Jan-2020.) The routine extracts the ISO year and ISO week from the user-entered date.
--setup
create table weekly_something( c1 text, c2 text, date1 timestamptz, someem boolean);
insert into weekly_something( c1, c2, date1, someem )
values ('ABC','AB-20-50','2019-11-25 16:20:51.722336+00',TRUE)
, ('PQR','AB-50-30','2019-11-26 16:20:51.722336+00',TRUE)
, ('TRC','CD-20-32','2019-11-27 16:20:51.722336+00',FALSE)
, ('ABC','AB-20-50','2019-12-02 16:20:51.722336+00',FALSE)
, ('ABC','AB-20-50','2019-12-02 16:20:51.722336+00',TRUE)
, ('JFF','yy-45-89','2019-12-31 16:20:51.722336+00',TRUE)
, ('JFF','yy-89-30','2020-01-03 16:20:51.722336+00',TRUE) ;
-- JFF Just For Fun
-- SQL Function
create function week_of(week_date date)
returns setof weekly_something
language sql stable strict
as $$
select *
from weekly_something
where (extract('isoyear' from week_date), extract('week' from week_date)) =
(extract('isoyear' from date1), extract('week' from date1));
$$;
-- test
select * from week_of('2019-11-26');
select * from week_of('2019-12-30');
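Note that the second call should return both JFF rows: 2019-12-31 and 2020-01-03 both fall in ISO week 1 of ISO year 2020, the week beginning Monday, 2019-12-30.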

Update column with correct daterange using generate_series

I have a column with incorrect dateranges (a day is missing). The code
to generate these dateranges was written by a previous employee and
cannot be found.
The dateranges look like this, notice the missing day:
+-------+--------+-------------------------+
| id | client | date_range |
+-------+--------+-------------------------+
| 12885 | 30 | [2016-01-07,2016-01-13) |
| 12886 | 30 | [2016-01-14,2016-01-20) |
| 12887 | 30 | [2016-01-21,2016-01-27) |
| 12888 | 30 | [2016-01-28,2016-02-03) |
| 12889 | 30 | [2016-02-04,2016-02-10) |
| 12890 | 30 | [2016-02-11,2016-02-17) |
| 12891 | 30 | [2016-02-18,2016-02-24) |
+-------+--------+-------------------------+
And should look like this:
+-------------------------+
| range |
+-------------------------+
| [2016-01-07,2016-01-14) |
| [2016-01-14,2016-01-21) |
| [2016-01-21,2016-01-28) |
| [2016-01-28,2016-02-04) |
| [2016-02-04,2016-02-11) |
| [2016-02-11,2016-02-18) |
| [2016-02-18,2016-02-25) |
| [2016-02-25,2016-03-03) |
+-------------------------+
The code I've written to generate correct dateranges looks like this:
create or replace function generate_date_series(startsOn date, endsOn date, frequency interval)
returns setof date as $$
select (startsOn + (frequency * count))::date
from (
select (row_number() over ()) - 1 as count
from generate_series(startsOn, endsOn, frequency)
) series
$$ language sql immutable;
select DATERANGE(
generate_date_series(
'2016-01-07'::date, '2024-11-07'::date, interval '7days'
)::date,
generate_date_series(
'2016-01-14'::date, '2024-11-13'::date, interval '7days'
)::date
) as range;
However, I'm having trouble trying to update the column with the
correct dateranges. I initially executed this UPDATE query on a test
database I created:
update factored_daterange set date_range = dt.range from (
select daterange(
generate_date_series(
'2016-01-07'::date, '2024-11-07'::date, interval '7days'
)::date,
generate_date_series(
'2016-01-14'::date, '2024-11-14'::date, interval '7days'
)::date ) as range ) dt where client_id=30;
But that is not correct; it simply assigns the first generated daterange to each row. I want to essentially update the dateranges row by row, since there is no other join or condition I can match the dates up to. Any assistance in this matter is greatly appreciated.
You're working too hard. Just update the upper range value.
update your_table_name
set date_range = daterange(lower(date_range),(upper(date_range) + interval '1 day')::date) ;
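A hedged sanity check, reusing the table and column names from the question, to preview the rewrite before running the UPDATE:
select id, date_range,
       daterange(lower(date_range), (upper(date_range) + interval '1 day')::date) as fixed
from factored_daterange
where client_id = 30;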

How to select rows in a postgres table based on a date and store it in a new table?

I have a Postgres table with some data. Each row has a date associated with it. I want to extract the rows whose dates fall in the month of April. Here is a CSV version of my Postgres table data:
,date,location,device,provider,cpu,mem,load,drops,id,latency,gw_latency,upload,download,sap_drops,sap_latency,alert_id
0,2018-02-10 11:52:59.342269+00:00,BEM,10.11.100.1,COD,6.0,23.0,11.75,0.0,,,,,,,,
1,2018-02-10 11:53:04.006971+00:00,VER,10.11.100.1,KOD,6.0,23.0,4.58,0.0,,,,,,,,
2,2018-03-25 20:28:36.186015+00:00,RET,10.11.100.1,POL,7.0,26.0,9.83,0.0,,86.328,5.0,4.33,15.33,0.0,23.0,
3,2018-03-25 20:28:59.155453+00:00,ASR,10.12.100.1,VOL,5.0,14.0,2.67,0.0,,52.406,12.0,2.17,3.17,0.0,28.0,
4,2018-04-01 13:16:44.472119+00:00,RED,10.19.0.1,SEW,6.0,14.0,2.77,0.0,,52.766,2.0,3.25,2.29,0.0,1.0,0.0
5,2018-04-01 13:16:48.478708+00:00,RED,10.19.0.1,POL,6.0,14.0,4.065,0.0,,52.766,1.0,6.63,1.5,0.0,1.0,0.0
6,2018-04-06 21:00:44.769702+00:00,GOK,10.61.100.1,FDE,4.0,22.0,3.08,0.0,,54.406,8.0,3.33,2.83,0.0,19.0,0.0
7,2018-04-06 21:01:07.211395+00:00,WER,10.4.100.1,FDE,3.0,3.0,9.28,0.0,,0.346,2.0,10.54,8.02,0.0,33.0,0.0
8,2018-04-13 11:18:08.411550+00:00,DER,10.19.0.1,CVE,14.0,14.0,7.88,0.0,,50.545,2.0,6.17,9.59,0.0,1.0,0.0
9,2018-04-13 11:18:12.420974+00:00,RTR,10.19.0.1,BOL,14.0,14.0,1.345,0.0,,50.545,1.0,2.26,0.43,0.0,1.0,0.0
So I want only the rows with April data, so that I end up with a table which looks something like this:
4,2018-04-01 13:16:44.472119+00:00,RED,10.19.0.1,SEW,6.0,14.0,2.77,0.0,,52.766,2.0,3.25,2.29,0.0,1.0,0.0
5,2018-04-01 13:16:48.478708+00:00,RED,10.19.0.1,POL,6.0,14.0,4.065,0.0,,52.766,1.0,6.63,1.5,0.0,1.0,0.0
6,2018-04-06 21:00:44.769702+00:00,GOK,10.61.100.1,FDE,4.0,22.0,3.08,0.0,,54.406,8.0,3.33,2.83,0.0,19.0,0.0
7,2018-04-06 21:01:07.211395+00:00,WER,10.4.100.1,FDE,3.0,3.0,9.28,0.0,,0.346,2.0,10.54,8.02,0.0,33.0,0.0
8,2018-04-13 11:18:08.411550+00:00,DER,10.19.0.1,CVE,14.0,14.0,7.88,0.0,,50.545,2.0,6.17,9.59,0.0,1.0,0.0
9,2018-04-13 11:18:12.420974+00:00,RTR,10.19.0.1,BOL,14.0,14.0,1.345,0.0,,50.545,1.0,2.26,0.43,0.0,1.0,0.0
Now, if I try to extract a particular date with the below query:
select * from metrics_data where date = 2018-04-13;
I get the error message
No operator matches the given name and argument type(s). You might need to add explicit type casts.
How do I get the rows for the month of April and store them in a new table, say april_data?
Below is the structure of my existing table
Column | Type | Modifiers | Storage | Stats target | Description
-------------+--------------------------+-----------+----------+--------------+-------------
date | timestamp with time zone | | plain | |
location | character varying(255) | | extended | |
device | character varying(255) | | extended | |
provider | character varying(255) | | extended | |
cpu | double precision | | plain | |
mem | double precision | | plain | |
load | double precision | | plain | |
drops | double precision | | plain | |
id | integer | | plain | |
latency | double precision | | plain | |
gw_latency | double precision | | plain | |
upload | double precision | | plain | |
download | double precision | | plain | |
sap_drops | double precision | | plain | |
sap_latency | double precision | | plain | |
alert_id | double precision | | plain | |
The type of the column date in your table is timestamp with time zone. In your query the literal 2018-04-13 is not quoted, so it is parsed as integer arithmetic rather than a date, and the comparison becomes timestamp with time zone = integer, for which no operator exists - hence the error.
So, to fix it, you should make one side match the type of the other.
In your case I suggest the following:
Match exactly one day:
select * from metrics_data where date(date) = '2018-04-13';
Match within one month:
select * from metrics_data where date BETWEEN '2018-04-01 00:00:00' AND '2018-04-30 23:59:59.999';
OR
select * from metrics_data where date(date) BETWEEN '2018-04-01' AND '2018-04-30';
OR
select * from metrics_data where to_char(date,'YYYY-MM') = '2018-04';
Match April of any year:
select * from metrics_data where to_char(date,'MM') = '04';
OR
select * from metrics_data where extract(month from date) = 4;
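A half-open range comparison avoids the 23:59:59.999 edge case and, unlike the function-based variants, can use a plain index on date:
select * from metrics_data where date >= '2018-04-01' and date < '2018-05-01';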
Hopefully this answer will help you.
You need single quotes around the string literal.
PostgreSQL will automatically cast it to the correct data type (timestamp with time zone).
You could use the extract function to select only the dates from April:
SELECT * FROM yourtable WHERE extract (month FROM yourtable.date) = 4;
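To store the result in a new table named april_data, as the question asks, a minimal sketch (add a year filter such as extract(year FROM date) = 2018 if you want April of one year only):
CREATE TABLE april_data AS
SELECT * FROM metrics_data
WHERE extract(month FROM date) = 4;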