SELECT * FROM items WHERE created_time >= 20210505143012999
On MySQL, we can give a condition like WHERE created_time >= 20210505143012999.
But I want to filter with a value in the same format (20210505143012999) on PostgreSQL. How can I do this?
MySQL seems to be a little lax with data types (or perhaps just more forgiving); in Postgres your value is just a number (bigint). You need to convert it with the to_timestamp function. It is not a Unix epoch value, so the single-argument epoch form of to_timestamp does not apply. You can either pre-convert the value to a string and then call to_timestamp, or cast it to text within the function parameters. Either way, specify the format: (see demo)
select to_timestamp ('20210505143012999', 'yyyymmddhh24missms');
select to_timestamp (20210505143012999::text, 'yyyymmddhh24missms');
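Applied to the original filter, a minimal sketch (assuming created_time on items is a timestamp column; names taken from the question):
SELECT *
FROM items
WHERE created_time >= to_timestamp('20210505143012999', 'yyyymmddhh24missms');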
I'm working myself through the Datacamp SQL track, and I'm currently working with date values. I've encountered two examples which seem contradictory to me.
-- Count requests created on January 31, 2017
SELECT count(*)
FROM evanston311
WHERE date_created::date='2017-01-31';
And:
-- Count requests created on February 29, 2016
SELECT count(*)
FROM evanston311
WHERE date_created>= '2016-02-29'
AND date_created< '2016-03-01';
Why do I need to cast the value as date in the first case but not the other?
As with most typed languages, you can rely on implicit type casting... until you can't.
With something like date_created >= '2016-02-29', Postgres can use the type of date_created to figure out how to implicitly cast '2016-02-29'. There's no ambiguity. But sometimes Postgres can't make a guess at all.
On the other hand, a function like date_part has multiple signatures: date_part(text, timestamp) and date_part(text, interval). If you pass it a date string...
test=# select date_part('day', '2019-01-03');
ERROR: function date_part(unknown, unknown) is not unique
LINE 1: select date_part('day', '2019-01-03');
^
HINT: Could not choose a best candidate function. You might need to add explicit type casts.
...Postgres cannot make a guess because the second string could be interpreted as either a timestamp or an interval type. You need to resolve this ambiguity.
# select date_part('day', '2019-01-03'::date);
date_part
-----------
3
Now that Postgres knows you're passing in a date it can correctly guess to use it as a timestamp.
Another reason to cast is as a cheap way to truncate timestamps. In your example, date_created::date = '2017-01-31' will truncate date_created to a date and make the comparison work. Of course, date_created should already be a date...
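For illustration, a quick sketch of what the cast does (the literal is made up):
-- ::date drops the time-of-day part, so equality against a plain date works
SELECT '2017-01-31 14:30:12'::timestamp::date;  -- 2017-01-31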
You can use it on the value being compared if you're not sure if that value will be a date or a timestamp.
select * from table
where date_created = $1::date
This will work the same with '2019-01-02' or '2019-01-02 03:04:05'.
Which brings us to our final reason: making up for bad schemas. Like if date_created is actually a timestamp, or, all too common, text. In that case you need to explicitly control how comparisons are made. For example, let's say we had text_created of type text that contained timestamps as strings: naughty. And maybe some poorly formatted data crept in that has extra spaces on the end...
-- Text comparison compares the values exactly.
test=# select * from test where text_created = '2019-01-04';
date_created | time_created | text_created
--------------+--------------+--------------
-- Date comparison compares as dates ignoring the extra whitespace.
test=# select * from test where text_created::date = '2019-01-04';
date_created | time_created | text_created
--------------+--------------+--------------
| | 2019-01-04
See Chapter 10. Type Conversion in the Postgres docs for more.
I have a column saved as a character data type. This column is what I am going to be using as a date. The column goes "YYYY-MM-DD" in that format.
This is a problem because if I ever need to filter by date, I have to go
select col_1, col_2
from table
where date LIKE '2016-04%';
If I want to search for a date range, this turns into a giant complicated mess.
What is the easiest way to convert this to a "date" data type? I want it to continue to be in YYYY-MM-DD order (no timestamp).
My ultimate goal is to be able to search for dates in a format like this:
select col_1, col_2
from table
where date between 2016-01-01 AND 2016-05-31;
What do you guys recommend? I am terrified I am going to corrupt my data if I use an alter statement to convert my data type. (I have a copy of the data saved and can upload it again, but it will take forever.)
Edit: This is a VERY Large table.
Edit Part 2: I originally stored the data as a varchar data type because my dates were not uploading correctly and I got an error message when I tried to save them as a date data type. Every date in this column is in the "YYYY-MM-DD" format. My solution was to save it as varchar to avoid the error message (I couldn't figure out what was wrong. I even got rid of leading and trailing spaces.)
Storing a date as a varchar was the wrong choice to begin with. It's very good that you want to change that.
The first step is to convert the columns using an ALTER TABLE statement:
ALTER TABLE the_table
ALTER COLUMN col_1 TYPE date USING col_1::date,
ALTER COLUMN col_2 TYPE date USING col_2::date;
Note that this will fail if you have any value in those columns that cannot be converted to a valid date. If that happens, you need to fix those invalid strings first, before you can change the data type.
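If you want to spot the offending rows up front, a hedged sketch (assuming the strings should look like YYYY-MM-DD; this only catches format problems, not impossible dates like 2016-02-30):
SELECT col_1
FROM the_table
WHERE col_1 !~ '^\s*\d{4}-\d{2}-\d{2}\s*$';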
I want it to continue to be in YYYY-MM-DD order
This is a misconception. A DATE (or timestamp) does not have a "format". Once it's stored as a date you can display it in any format you want.
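For example, once the column is a real date, display formatting is just a presentation step, e.g. with to_char() (column and table names assumed from the question):
SELECT to_char(col_1, 'YYYY-MM-DD') AS col_1_formatted
FROM the_table;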
My ultimate goal is to be able to search for dates in a format like this:
2016-01-01 is not a valid date literal. A proper (i.e. correctly typed) date constant can be specified e.g. using date '2016-01-01' (note the single quotes!).
So your query becomes:
select col_1, col_2
from table
where col_1 between date '2016-01-01' AND date '2016-05-31';
If you have a lot of queries like that you should consider creating an index on the date columns.
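A minimal sketch (the index name is made up):
CREATE INDEX the_table_col_1_idx ON the_table (col_1);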
Regarding the date constant format:
Are you telling me that despite having the varchar data types, I can still (as of right now) search between specific dates by just typing the word date and putting single quotes between two dates
No, that's not the case. SQL is a strongly typed language and as such will only compare values of the same type.
Using an ANSI date literal (or e.g. to_date()) results in a typed constant (i.e. a value with a specific data type).
The difference between date '2016-01-01' and '2016-01-01' is the same as between 42 (a number) and '42' (a string).
If you compare a string with a date, you are comparing apples and oranges and the database will do an implicit data type conversion from one type to the other. This is something that should be avoided at all costs.
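One quick way to see which type a value actually carries is pg_typeof() (a small sketch):
SELECT pg_typeof(date '2016-01-01');                    -- date
SELECT pg_typeof(to_date('2016-01-01', 'YYYY-MM-DD'));  -- date
SELECT pg_typeof('2016-01-01'::varchar);                -- character varying, i.e. just a string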
If you do not want to change the table, you should use the query sagi provided, which explicitly converts the strings to dates and then does the comparison on (real) date values (not strings).
You can use the Postgres to_date() conversion function:
SELECT col_1, col_2
FROM Your_Table
WHERE to_date(date_col, 'yyyy-mm-dd') BETWEEN to_date('2016-01-01', 'yyyy-mm-dd') AND to_date('2016-05-31', 'yyyy-mm-dd')
What #a_horse said.
Plus, if you can't change the data type for some odd reason, to_date() is a safe option to convert the column, but there is no point in using the same expression for the provided constants. So:
SELECT col_1, col_2
FROM tbl
WHERE to_date(date, 'YYYY-MM-DD') BETWEEN date '2016-01-01' AND date '2016-05-31';
Or just use string literals without an explicit type. The type date is derived from the context in this expression. And you don't even need to_date(), since you are already using ISO format. A plain cast is safe:
WHERE date::date BETWEEN '2016-01-01' AND '2016-05-31';
Be sure to use ISO 8601 format for all date strings, so they are unambiguous and valid with any locale.
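To illustrate why ISO 8601 matters, a small sketch (the ambiguous literal 05/04/2016 is made up):
SET datestyle = 'ISO, DMY';
SELECT '05/04/2016'::date;   -- 2016-04-05 (day first)
SET datestyle = 'ISO, MDY';
SELECT '05/04/2016'::date;   -- 2016-05-04 (month first)
SELECT '2016-04-05'::date;   -- 2016-04-05 regardless of DateStyle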
You can even have an expression index to support the query. Match the actual expression used in queries:
CREATE INDEX tbl_date_idx ON tbl ((date::date)); -- parentheses required!
But I wouldn't use the basic type name date as an identifier to begin with.
I have a TIMESTAMP WITHOUT TIME ZONE field
when I do:
select datex
from A
it shows me datex as: 2015-09-16 10:59:59.073629
how do I modify it to be 2015-09-16 10:59:59?
I don't need the tail after the seconds.
I read this http://www.postgresql.org/docs/9.1/static/functions-datetime.html but couldn't find a match.
You can either configure your SQL client tool to not show fractional seconds, or you can use the to_char() function to format the output:
select to_char(datex, 'yyyy-mm-dd hh24:mi:ss') as datex
from A;
See the manual for details on the format string. This is documented in the chapter "Data Type Formatting Functions":
http://www.postgresql.org/docs/current/static/functions-formatting.html
try:
SELECT date_trunc('second', datex);
The function date_trunc is similar to the trunc function for numbers.
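A quick sketch with the value from the question:
SELECT date_trunc('second', timestamp '2015-09-16 10:59:59.073629');
-- 2015-09-16 10:59:59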
The timestamp has milliseconds, so if any records are created via automation they will likely have the same seconds value but different millisecond values. I need to do this:
Version.uniq(:created_at)
But, this doesn't work because they are all unique. How can I use to_i, or whatever else might work, to pull this off?
You'll need the date_trunc() PostgreSQL function:
SELECT DISTINCT date_trunc('second', created_at) FROM "version"
In ruby:
Version.select("date_trunc('second', created_at)").distinct
To just get rid of fractional seconds, cast to the equivalent type with 0 fractional digits.
SELECT DISTINCT created_at::timestamp(0) FROM "version"
Or timestamptz(0), if that is your column's type; you did not disclose your exact type.
For more specific needs use date_trunc().
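One caveat worth knowing (small sketch): casting to timestamp(0) rounds to the nearest second, while date_trunc() truncates.
SELECT '2012-03-01 23:34:19.879707'::timestamp(0);                     -- 2012-03-01 23:34:20 (rounded)
SELECT date_trunc('second', '2012-03-01 23:34:19.879707'::timestamp);  -- 2012-03-01 23:34:19 (truncated)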
More details:
Discard millisecond part from timestamp
I have a table in a PostgreSQL database with a column of TIMESTAMP WITHOUT TIME ZONE type. I need to order the records by this column and apparently PostgreSQL has some trouble doing it as both
...ORDER BY time_column
and
...ORDER BY time_column DESC
give me the same order of elements for my 3-element sample of records, which all have the same time_column value except for the milliseconds.
It seems that while sorting, it does not consider milliseconds in the value.
I am sure the milliseconds are in fact stored in the database because when I fetch the records, I can see them in my DateTime field.
When I first load all the records and then order them by the time_column in memory, the result is correct.
Am I missing some option to make the ordering behave correctly?
EDIT: I was apparently missing a lot. The problem was not in PostgreSQL, but in NHibernate stripping the milliseconds off the DateTime property.
It's a foolish notion that PostgreSQL wouldn't be able to sort timestamps correctly.
Run a quick test and rest assured:
CREATE TEMP TABLE t (x timestamp without time zone);
INSERT INTO t VALUES
('2012-03-01 23:34:19.879707')
,('2012-03-01 23:34:19.01386')
,('2012-03-01 23:34:19.738593');
SELECT x FROM t ORDER by x DESC;
SELECT x FROM t ORDER by x;
q.e.d.
Then try to find out what's really happening in your query. If you can't, post a testcase and you will be helped presto pronto.
Try casting your column to ::timestamp like this:
SELECT * FROM TABLE
ORDER BY time_column::timestamp