I have a table with the date and time fields separated
Table1
data hora id
2015-01-01 11:40:06 1
2015-01-01 15:40:06 2
2015-01-02 15:40:06 3
2015-01-05 10:40:06 4
2015-01-05 15:40:06 5
2015-01-06 08:23:00 6
Now I need to query the ids between 2015-01-01 12:00:00 and 2015-01-05 12:00:00, which should return the ids 2, 3, 4.
I'm trying to combine the separate date and time fields into a single datetime value so that I can use BETWEEN, but I can't get the syntax right. Can someone give an example?
It works!
SELECT
*
FROM
tableA
WHERE
(dataemissao + hora) BETWEEN (date '2015-01-21' + time '14:00')
AND (date '2015-01-21' + time '18:00')
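Applied to Table1 from the question, here is a minimal sketch assuming data is a date column and hora is a time column; adding them produces a timestamp that BETWEEN can compare:
SELECT
id
FROM
Table1
WHERE
(data + hora) BETWEEN timestamp '2015-01-01 12:00:00'
AND timestamp '2015-01-05 12:00:00';
-- should return ids 2, 3 and 4 for the sample data above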
I am trying to get week numbers in a year starting from a certain day.
I've checked Stack Overflow but I'm still quite confused.
SELECT EXTRACT(WEEK FROM TIMESTAMP '2021-01-01'),
extract('year' from TIMESTAMP '2021-01-01')
The output is 53|2021
I want it to be 01|2021
I understand the principle of the ISO week, but I want the year to start on 01-01-2021.
The aim is to use intervals from that day to determine the week numbers:
Week No | End Date
1 | 01-01-2021
2 | 01-08-2021
5 | 01-29-2021
...
This is a really strange way to determine the week number, but in the end it's a simple calculation: the number of days since January 1st, integer-divided by 7, plus 1.
You can create a function for this:
create function custom_week(p_input date)
returns int
as
$$
-- days elapsed since January 1st of the same year, integer-divided by 7, plus 1
select (p_input - date_trunc('year', p_input)::date) / 7 + 1;
$$
language sql
immutable;
So this:
select date, custom_week(date)
from (
values
(date '2021-01-01'),
(date '2021-01-08'),
(date '2021-01-29')
) as v(date)
yields
date | custom_week
-----------+------------
2021-01-01 | 1
2021-01-08 | 2
2021-01-29 | 5
I have a SQL table (PostgreSQL/TimescaleDB) with hourly values, e.g.:
Timestamp Value
...
2021-02-17 13:00:00 2
2021-02-17 14:00:00 4
...
2021-02-18 13:00:00 3
2021-02-18 14:00:00 3
...
I want to get the average value for each hour, mapped to today's date, within a specific timespan, so something like this:
select avg(value)
from table
where Timestamp between '2021-02-10' and '2021-02-20'
group by *hourpart of timestamp*
result today (2021-10-08) should be:
...
Timestamp Value
2021-10-08 13:00:00 2.5
2021-10-08 14:00:00 3.5
...
If I run the same select tomorrow (2021-10-09), the result should change to:
...
Timestamp Value
2021-10-09 13:00:00 2.5
2021-10-09 14:00:00 3.5
...
I solved the problem myself.
Solution:
SELECT EXTRACT(HOUR FROM table."Timestamp") as hour,
avg(table."Value") as average
from table
where Timestamp between '2021-02-10' and '2021-02-20'
group by hour
order by hour;
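If the hour should also be mapped onto today's date, as in the expected output, here is a minimal sketch assuming the same column names and a hypothetical table name readings (table itself is a reserved word): add the extracted hour to current_date.
SELECT current_date + make_interval(hours => EXTRACT(HOUR FROM r."Timestamp")::int) AS "Timestamp",
       avg(r."Value") AS average
FROM readings r
WHERE r."Timestamp" BETWEEN '2021-02-10' AND '2021-02-20'
GROUP BY 1
ORDER BY 1;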
You have to write your query like this:
select avg(value)
from table
where Timestamp between '2021-02-10' and '2021-02-20'
group by substring(Timestamp::text, 1, 10), substring(Timestamp::text, 11, 9)
Given the following sqlfiddle: http://www.sqlfiddle.com/#!17/f483a/2/0
create table test (
start_date date
);
insert into test values ('2019/01/01');
select
start_date,
age(now()::date,start_date) as date_diff
from test;
Which generates the following output:
date_diff | 0 years 7 mons 27 days 0 hours 0 mins 0.00 secs
How could I instead generate the correct number of calendar days
239 days
without using a custom function?
Don't use the age function. Subtracting a date from a date yields an integer. now() returns a timestamp so you need to use current_date instead.
select start_date,
current_date - start_date as date_diff
from test;
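A minimal variant, assuming the same test table: since subtracting a date from a date yields an integer number of days, casting now() to date first works as well.
select start_date,
       now()::date - start_date as date_diff
from test;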
I'm using PostgreSQL 9 and I'm struggling with counting and grouping when no rows match.
Let's assume the following schema:
create table views (
  date_event timestamp with time zone,
  event_id integer
);
Let's imagine the following content:
2012-01-01 00:00:05 2
2012-01-01 01:00:05 5
2012-01-01 03:00:05 8
2012-01-01 03:00:15 20
I want to group by hour and count the number of rows. I wish I could retrieve the following:
2012-01-01 00:00:00 1
2012-01-01 01:00:00 1
2012-01-01 02:00:00 0
2012-01-01 03:00:00 2
2012-01-01 04:00:00 0
2012-01-01 05:00:00 0
.
.
2012-01-07 23:00:00 0
That is, for each time slot I want to count the number of rows in my table whose date falls in that slot, and otherwise return a row with a count of zero.
The following will definitely not work (it yields only rows whose count is > 0).
SELECT extract ( hour from date_event ),count(*)
FROM views
where date_event > '2012-01-01' and date_event <'2012-01-07'
GROUP BY extract ( hour from date_event );
Please note I might also need to group by minute, hour, day, month, or year (multiple queries are possible, of course).
I can only use plain old SQL, and since my views table can be very big (>100M records), I try to keep performance in mind.
How can this be achieved?
Thank you!
Given that you don't have the dates in the table, you need a way to generate them. You can use the generate_series function:
SELECT * FROM generate_series('2012-01-01'::timestamp, '2012-01-07 23:00', '1 hour') AS ts;
This will produce results like this:
ts
---------------------
2012-01-01 00:00:00
2012-01-01 01:00:00
2012-01-01 02:00:00
2012-01-01 03:00:00
...
2012-01-07 21:00:00
2012-01-07 22:00:00
2012-01-07 23:00:00
(168 rows)
The remaining task is to join the two selects using an outer join, like this:
select extract(day from ts) as day,
       extract(hour from ts) as hour,
       coalesce(cnt.count, 0) as count
from (
    select extract(day from date_event) as day,
           extract(hour from date_event) as hr,
           count(*) as count
    from views
    where date_event > '2012-01-01' and date_event < '2012-01-07'
    group by extract(day from date_event), extract(hour from date_event)
) as cnt
right outer join (
    select * from generate_series('2012-01-01'::timestamp, '2012-01-07 23:00', '1 hour') as ts
) as dtetable on extract(hour from ts) = cnt.hr and extract(day from ts) = cnt.day
order by day, hour asc;
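A hedged alternative sketch, assuming the views table and date_event column from the question: aggregate on date_trunc and left-join the generated series directly. This keeps the full timestamp in the output, and switching the granularity to minute, day, month, or year only means changing the truncation unit and the series step.
select gs.ts, coalesce(v.cnt, 0) as count
from generate_series('2012-01-01'::timestamp, '2012-01-07 23:00', '1 hour') as gs(ts)
left join (
    select date_trunc('hour', date_event) as ts, count(*) as cnt
    from views
    where date_event >= '2012-01-01' and date_event < '2012-01-08'
    group by 1
) as v on v.ts = gs.ts
order by gs.ts;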
This query will give you the output you are looking for:
select to_char(date_event, 'YYYY-MM-DD HH24:00') as time,
       count(to_char(date_event, 'HH24:00')) as count
from views
where date(date_event) > '2012-01-01'
  and date(date_event) < '2012-01-07'
group by time
order by time;
Requirements
I have a data table that saves data in date ranges.
Each record is allowed to overlap previous record(s) (each record has a CreatedOn datetime column).
A new record can define its own date range if it needs to, hence it can overlap several older records.
Each new overlapping record overrides the settings of the older records that it overlaps.
Result set
What I need is per-day data for any date range, applying the record-overlap rules. It should return one record per day with the corresponding data for that particular day.
To convert ranges to days I was thinking of a numbers/dates table and a user-defined function (UDF) to get the data for each day in the range, but I wonder whether there's any other (as in better, or even faster) way of doing this, since I'm using the latest SQL Server 2008 R2.
Stored data
Imagine my stored data looks like this
ID | RangeFrom | RangeTo | Starts | Ends | CreatedOn (not providing data)
---|-----------|----------|--------|-------|-----------
1 | 20110101 | 20110331 | 07:00 | 15:00
2 | 20110401 | 20110531 | 08:00 | 16:00
3 | 20110301 | 20110430 | 06:00 | 14:00 <- overrides both partially
Results
If I wanted to get data from 1st January 2011 to 31st May 2011, the resulting table should look like the following (obvious rows omitted):
DayDate | Starts | Ends
--------|--------|------
20110101| 07:00 | 15:00 <- defined by record ID = 1
20110102| 07:00 | 15:00 <- defined by record ID = 1
... many rows omitted for obvious reasons
20110301| 06:00 | 14:00 <- defined by record ID = 3
20110302| 06:00 | 14:00 <- defined by record ID = 3
... many rows omitted for obvious reasons
20110501| 08:00 | 16:00 <- defined by record ID = 2
20110502| 08:00 | 16:00 <- defined by record ID = 2
... many rows omitted for obvious reasons
20110531| 08:00 | 16:00 <- defined by record ID = 2
Actually, since you are working with dates, a Calendar table would be more helpful.
Declare @StartDate date
Declare @EndDate date
;With Calendar As
(
Select @StartDate As [Date]
Union All
Select DateAdd(d,1,[Date])
From Calendar
Where [Date] < @EndDate
)
Select ...
From Calendar
Left Join MyTable
On Calendar.[Date] Between MyTable.Start And MyTable.End
Option ( Maxrecursion 0 );
Addition
Missed the part about the trumping rule in your original post:
Set DateFormat MDY;
Declare @StartDate date = '20110101';
Declare @EndDate date = '20110501';
-- This first CTE is obviously to represent
-- the source table
With SampleData As
(
Select 1 As Id
, Cast('20110101' As date) As RangeFrom
, Cast('20110331' As date) As RangeTo
, Cast('07:00' As time) As Starts
, Cast('15:00' As time) As Ends
, CURRENT_TIMESTAMP As CreatedOn
Union All Select 2, '20110401', '20110531', '08:00', '16:00', DateAdd(s,1,CURRENT_TIMESTAMP )
Union All Select 3, '20110301', '20110430', '06:00', '14:00', DateAdd(s,2,CURRENT_TIMESTAMP )
)
, Calendar As
(
Select @StartDate As [Date]
Union All
Select DateAdd(d,1,[Date])
From Calendar
Where [Date] < @EndDate
)
, RankedData As
(
Select C.[Date]
, S.Id
, S.RangeFrom, S.RangeTo, S.Starts, S.Ends
, Row_Number() Over( Partition By C.[Date] Order By S.CreatedOn Desc ) As Num
From Calendar As C
Join SampleData As S
On C.[Date] Between S.RangeFrom And S.RangeTo
)
Select [Date], Id, RangeFrom, RangeTo, Starts, Ends
From RankedData
Where Num = 1
Option ( Maxrecursion 0 );
In short, I rank all the sample data preferring the newer rows that overlap the same date.
Why do it all in the DB when you can do it better in memory?
This is the solution I eventually used, as it seemed most reasonable in terms of data transferred, speed, and resources:
get the actual range definitions from the DB to the mid tier (a smaller amount of data)
generate an in-memory calendar for the required date range (faster than in the DB)
apply those DB definitions to the in-memory calendar (much easier and faster than in the DB)
And that's it. I realised that complicating certain things in the DB is not worth it when you have executable in-memory code that can do the same manipulation faster and more efficiently.