I use PostgreSQL
I can run a single query per day, but it will take a long time to go through every day.
The "zone" and "reader" also changes, so to run single queries every time will keep me up until late.
If at best I can only change the "reader" and "zone" every time it would help. The main "PAIN" I have, is to change the dates every time. It will be from 2022 11 18 18:00 to 2022 12 01 19:00.
P.S - I'm new to SQL, please be gentle :)
My current query:
select * from vw_tracking_resource_events
where "when_enter_dt_timezone" between '2022 11 18 18:00:00' and '2022 11 18 19:00:00'
and "zone" = '085 Level'
and "site" = 'MK'
and "reader" = 'RV Shaft'
and "group" = 'Lamp'
If you cast your field so that you can compare the date part and the time part separately against the desired ranges, it becomes super easy:
WHERE when_enter_dt_timezone BETWEEN '2022-11-18' AND '2022-12-01T23:59:59.999'
AND when_enter_dt_timezone::time BETWEEN '18:00' AND '19:00'
Edit:
@Stefanov.sm makes a very good point regarding the casting of the timestamp to type date (1st criterion above) when an index could otherwise be used to retrieve the data.
I corrected the query to take his remark into account.
Disclaimer: With when_enter_dt_timezone::date BETWEEN ... AND '2022-12-01', you include e.g. 2022-12-01T18:30.
Without the cast, the upper bound 2022-12-01 is implicitly set to midnight (morning); you either have to change the upper bound to 2022-12-02 (which @Stefanov.sm suggested, and which works very well since you have a condition on the time anyway) or set the upper bound to 2022-12-01T23:59:59.999 (which is what I did above, though only to draw your attention to this specific issue).
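Putting it all together with the other filters from the question, the whole range from 2022-11-18 to 2022-12-01, 18:00 to 19:00 each day, could be covered in a single query. This is only a sketch built from the columns shown above, using the 2022-12-02 upper bound mentioned in the remark:
select * from vw_tracking_resource_events
where when_enter_dt_timezone between '2022-11-18' and '2022-12-02'
  and when_enter_dt_timezone::time between '18:00' and '19:00'
  and "zone" = '085 Level'
  and "site" = 'MK'
  and "reader" = 'RV Shaft'
  and "group" = 'Lamp';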
You can try something like this to get records for the last 14 days between 6:00 p.m. and 7:00 p.m.
select * from vw_tracking_resource_events
where when_enter_dt_timezone > current_date - interval '14' day and
when_enter_dt_timezone::time between time '18:00' AND time '19:00'
Demo in sqldaddy.io
Modified using notes from @Atmo and @a_horse_with_no_name.
My database table has the timestamp format "2019-12-08T13:03:16.502639-0600". I am having difficulty figuring out how to filter a running 24-hour range of dates so I only get the data from inside that running 24-hour period. I need something like now(), then back 24 hours, on a running basis. I have tried several ways but none of them work.
Do I need to parse the timestamp first and only use the parts I need?
In Python, the query could look like this:
r.table('MyDB').filter(r.row['timestamp'] > r.now() - 24*60*60).run(con)
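For comparison, if this data lived in a PostgreSQL timestamptz column (the table and column names below are only placeholders), a rolling 24-hour filter could look like this:
select *
from my_table                                    -- placeholder table name
where event_ts > now() - interval '24 hours';    -- event_ts stands in for your timestamp column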
I understand why PostgreSQL uses month, day and second fields to represent the SQL interval datatype. A month is not always the same length, and a day can have 23, 24 or 25 hours if a daylight saving time adjustment is involved. This is from the PostgreSQL documentation.
But I then do not understand why this is not handled consistently for both months and days. See the following query, which calculates an exact interval where the number of seconds between the two points in time is exactly calculable:
select ('2017-01-01'::timestamp - '2016-01-01'::timestamp); --> 366 days
PostgreSQL chooses to give a result in days, not in months and not in seconds.
But why is the result days and not seconds? It is NOT defined how long days are (they can be 23, 24 or 25 hours long), so why does it not give output in seconds?
Then, since the length of months is also not defined, why doesn't PostgreSQL give an output of 12 months instead of 366 days?
It does not care that the length of days is not defined, but obviously it cares that the length of months is not defined.
Why this asymmetry?
For further explanation, see this query:
select ('10 days'::interval-'24 hours'::interval); --> 10 days -24:00:00
You see that PostgreSQL correctly refuses to answer with 9 days. It is quite aware of the problem that days and hours cannot be interchanged. But then again, why does the first query return days?
I can't answer your question, but I think I can point you in the right direction. I think the book SQL-99 Complete, Really is the most accessible source for understanding SQL intervals. It's available online: https://mariadb.com/kb/en/sql-99/08-temporal-values/.
SQL standards describe two kinds of intervals: year-month intervals and day-time intervals. It does this to prevent month parts and day parts from appearing in the same interval, because, as you already know, the number of days in a month is ambiguous. The number of days in the interval '3' month depends on which three months you're talking about.
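For illustration, both kinds can be written out explicitly in PostgreSQL (the values in the comments are what the default interval output style prints):
select interval '1-2' year to month;          -- 1 year 2 mons     (year-month interval)
select interval '3 04:05:06' day to second;   -- 3 days 04:05:06   (day-time interval)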
I think this is the verbose, standard SQL way to write your first query.
select cast(timestamp '2017-01-01' - timestamp '2016-01-01' as interval day to hour) as new_column;
new_column
interval day to hour
--
366 days
I suspect that you'll find that SQL standards have rules for what a SQL dbms is supposed to do when things like interval day to hour are omitted. PostgreSQL might or might not follow those rules.
"PostgreSQL chooses to give a result in days, not in months and not in seconds."
Standard SQL prevents month parts and day parts from appearing in the same interval. Also, the range of valid seconds is from 0 to 59.
select interval '59' second;
interval
interval second
--
00:00:59
select interval '60' second;
interval
interval second
--
00:01:00
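As an aside, if what you actually want is the difference expressed in seconds, you can extract the epoch from the interval yourself in PostgreSQL:
select extract(epoch from (timestamp '2017-01-01' - timestamp '2016-01-01'));  -- 31622400, i.e. 366 * 86400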
Once a week I need to run a report where I query an Access database for any product that will expire in 9 months or less. The way they want it calculated is to take the date 9 months into the future and return anything that expires at the end of that month or sooner. If it were simply 270 days or less, I'd have no problem. (I'd also have no problem if I could do it in Excel, but that's not an option for now).
I came up with a solution that works every month of the year, unless it happens to be March (more specifically between March 6th and April 5th).
< DateValue(Month(Date()+270)+1 & "/1/" & Year(Date()+270))
So basically I'm:
adding 270 days to today's date
extracting the resulting month
adding 1 to the month
putting it back together as a text string so I can use < the 1st of the following month
for the year, I'm using the year from the date +270 days so I don't end up using the current year by accident
The trouble is that for the date range above (which I unhappily discovered today), I land in December when I add 270 days, so the following month is in a different year. As a result, my report only produced items that already expired.
In other words, on March 5th, I would have needed a list of everything expiring prior to December 1, but on March 6th, I need everything before January 1 of the next year.
Is there a more effective way to do this that avoids this issue? I thought of using
You may have had DateDiff in mind, and it can be used:
Where DateDiff("m", Date(), [YourDateField]) Between 0 And 9
However, that will ignore an index you might have on [YourDateField].
This, however, will include products that expired previously in the current month.
The alternative is DateSerial, as Hans showed, but he forgot that in SQL Date() must be used and that only those products that have yet to expire should be listed:
Where [YourDateField] Between Date() And DateSerial(Year(Date()), Month(Date()) + 10, 0)
Use the DateSerial Function to compute the future date you need.
Here is a demonstration in the Access Immediate window which computes the date 9 months from today:
? Date
3/6/2015
? DateSerial(Year(Date), Month(Date) + 9, Day(Date))
12/6/2015
However, as I understand your requirement, you actually want dates from that entire month. In that case you can compute the first of the month which is 10 months from today and ask for everything less than that date.
? DateSerial(Year(Date), Month(Date) + 10, 1)
1/1/2016
You can include that expression in your query like this ...
WHERE expire_date < DateSerial(Year(Date()), Month(Date()) + 10, 1)
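As a complete statement that might look like this (Products is just a placeholder for your table name):
SELECT *
FROM Products
WHERE expire_date < DateSerial(Year(Date()), Month(Date()) + 10, 1);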
I have data from a text file I'm reading into a postgres 9.1 table, and the data looks like this:
451,22:30:00,22:30:00,San Jose,1
451,22:35:00,22:35:00,Santa Clara,2
451,22:40:00,22:40:00,Lawrence,3
451,22:44:00,22:44:00,Sunnyvale,4
451,22:49:00,22:49:00,Mountain View,5
451,22:53:00,22:53:00,San Antonio,6
451,22:57:00,22:57:00,California Ave,7
451,23:01:00,23:01:00,Palo Alto,8
451,23:04:00,23:04:00,Menlo Park,9
451,23:07:00,23:07:00,Atherton,10
451,23:11:00,23:11:00,Redwood City,11
451,23:15:00,23:15:00,San Carlos,12
451,23:18:00,23:18:00,Belmont,13
451,23:21:00,23:21:00,Hillsdale,14
451,23:24:00,23:24:00,Hayward Park,15
451,23:27:00,23:27:00,San Mateo,16
451,23:30:00,23:30:00,Burlingame,17
451,23:33:00,23:33:00,Broadway,18
451,23:38:00,23:38:00,Millbrae,19
451,23:42:00,23:42:00,San Bruno,20
451,23:47:00,23:47:00,So. San Francisco,21
451,23:53:00,23:53:00,Bayshore,22
451,23:58:00,23:58:00,22nd Street,23
451,24:06:00,24:06:00,San Francisco,24
It is from a timetable for a commuter rail line, Caltrain. I'm trying to query stations to get train arrival and departure times. I did this several months ago in MySQL, and my query was:
select * from trains as a, trains as b
where a.trip_id = b.trip_id
  and a.stop_id = 'San Antonio'
  and b.stop_id = 'San Carlos'
  and a.arrival_time < b.arrival_time;
So far so good, pretty straightforward. However, when I tried copying the data into a PostgreSQL database, I got an error for the various columns that had times after midnight, either 24:00:00-something or 25:00:00-something. However, if I change them to 00:00:00-something and 01:00:00-something, won't that mess with the query? A time after midnight would appear to be before the starting time. MySQL apparently didn't have a problem with those times, and I'm not sure what to do. I'm thinking I should use the last column, or maybe convert the times to something that doesn't take AM/PM into account?
You should try using the interval type for the time columns. Those will keep track of the number of hours, minutes, and seconds instead of trying to record a time of day.
See the PostgreSQL documentation on dates and times.
An interval can have a time component greater than 24 hours, unlike the time datatype, which is confined to the range 00:00:00 to 24:00:00.
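Here is a minimal sketch of how that could look; the table and column names are guesses based on the MySQL query in the question:
create table trains (
    trip_id        integer,
    arrival_time   interval,   -- accepts values past midnight, e.g. '24:06:00'
    departure_time interval,
    stop_id        text,
    stop_sequence  integer
);

insert into trains values (451, interval '23:58:00', interval '23:58:00', '22nd Street', 23);
insert into trains values (451, interval '24:06:00', interval '24:06:00', 'San Francisco', 24);

select a.stop_id, a.arrival_time, b.stop_id, b.arrival_time
from trains as a
join trains as b on a.trip_id = b.trip_id
where a.stop_id = '22nd Street'
  and b.stop_id = 'San Francisco'
  and a.arrival_time < b.arrival_time;   -- '24:06:00' correctly sorts after '23:58:00'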