Is there a way to add an extra group by to the toolkit_experimental.interpolated_average function? Say my data has power measurements for different sensors; how would I add a group by on the sensor_id?
with s as (
    select
        sensor_id,
        time_bucket('30 minutes', timestamp) bucket,
        time_weight('LOCF', timestamp, value) agg
    from measurements m
    inner join sensor_definition sd on m.sensor_id = sd.id
    where asset_id = '<battery_id>'
      and sensor_name = 'power'
      and timestamp between '2023-01-05 23:30:00' and '2023-01-07 00:30:00'
    group by sensor_id, bucket)
select
    sensor_id,
    bucket,
    toolkit_experimental.interpolated_average(
        agg,
        bucket,
        '30 minutes'::interval,
        lag(agg) over (order by bucket),
        lead(agg) over (order by bucket)
    )
from s
group by sensor_id;
The above query does not work as written, because I'd need to add bucket and agg as group by columns as well.
You can find the relevant schemas below.
create table measurements
(
sensor_id uuid not null,
timestamp timestamp with time zone not null,
value double precision not null
);
create table sensor_definition
(
id uuid default uuid_generate_v4() not null
primary key,
asset_id uuid not null,
sensor_name varchar(256) not null,
sensor_type varchar(256) not null,
unique (asset_id, sensor_name, sensor_type)
);
Any suggestions?
This is a great question and a cool use case. There's definitely a way to do this! I like your CTE at the top, though I prefer to name CTEs a little more descriptively. The join looks good for selection, and at some point in the future you could quite easily sub out the "on-the-fly" aggregation for a continuous aggregate and just do the same join against the continuous aggregate...so that's great!
The only thing you need to do is modify the window clause of the lead and lag functions so that they work per sensor rather than over the full ordered data set; then you don't need a group by clause at all!
WITH weighted_sensor AS (
    SELECT
        sensor_id,
        time_bucket('30 minutes', timestamp) bucket,
        time_weight('LOCF', timestamp, value) agg
    FROM measurements m
    INNER JOIN sensor_definition sd ON m.sensor_id = sd.id
    WHERE asset_id = '<battery_id>'
      AND sensor_name = 'power'
      AND timestamp BETWEEN '2023-01-05 23:30:00' AND '2023-01-07 00:30:00'
    GROUP BY sensor_id, bucket)
SELECT
    sensor_id,
    bucket,
    toolkit_experimental.interpolated_average(
        agg,
        bucket,
        '30 minutes'::interval,
        lag(agg) OVER (PARTITION BY sensor_id ORDER BY bucket),
        lead(agg) OVER (PARTITION BY sensor_id ORDER BY bucket)
    )
FROM weighted_sensor;
You can also split the window clause out into a separate named clause in the query; this helps especially if you're using it multiple times. So if you were to use the interpolated_integral function as well, for instance, to get total energy utilization in a period, you might do something like this:
WITH weighted_sensor AS (
    SELECT
        sensor_id,
        time_bucket('30 minutes', timestamp) bucket,
        time_weight('LOCF', timestamp, value) agg
    FROM measurements m
    INNER JOIN sensor_definition sd ON m.sensor_id = sd.id
    WHERE asset_id = '<battery_id>'
      AND sensor_name = 'power'
      AND timestamp BETWEEN '2023-01-05 23:30:00' AND '2023-01-07 00:30:00'
    GROUP BY sensor_id, bucket)
SELECT
    sensor_id,
    bucket,
    toolkit_experimental.interpolated_average(
        agg,
        bucket,
        '30 minutes'::interval,
        lag(agg) OVER sensor_times,
        lead(agg) OVER sensor_times
    ),
    toolkit_experimental.interpolated_integral(
        agg,
        bucket,
        '30 minutes'::interval,
        lag(agg) OVER sensor_times,
        lead(agg) OVER sensor_times,
        'hours'
    )
FROM weighted_sensor
WINDOW sensor_times AS (PARTITION BY sensor_id ORDER BY bucket);
I used hours as the unit, since I figure energy is often measured in watt-hours or the like...
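If you do eventually move to a continuous aggregate, a minimal sketch of what it might look like (assuming measurements is a hypertable, a Toolkit/TimescaleDB version that allows time_weight in continuous aggregates, and a made-up view name):

CREATE MATERIALIZED VIEW weighted_sensor_cagg
WITH (timescaledb.continuous) AS
SELECT
    sensor_id,
    time_bucket('30 minutes', timestamp) AS bucket,
    time_weight('LOCF', timestamp, value) AS agg   -- same partial aggregate as in the CTE
FROM measurements
GROUP BY sensor_id, bucket;

The join against sensor_definition, and the interpolated_average call with its window clause, would then run against weighted_sensor_cagg instead of the CTE.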
I have a very simple table like below
Events:

Event_name | Event_time
-----------+-----------
A          | 2022-02-10
B          | 2022-05-11
C          | 2022-07-17
D          | 2022-10-20
New events are added to this table over time, but we always take the events from the last X days (for example, 30 days), so the query result for this table changes.
I would like to transform the above table into this:

A          | B          | C          | D
-----------+------------+------------+-----------
2022-02-10 | 2022-05-11 | 2022-07-17 | 2022-10-20
In general, the number of columns won't be constant. But if that's not possible, we can add a limit on the number of columns, for example 10 columns.
I tried crosstab, but I had to add the column names manually, which is not what I want, and it doesn't work with the CTE query:
WITH CTE AS (
    SELECT DISTINCT
        1 AS "Id",
        event_time,
        event_name,
        ROW_NUMBER() OVER (ORDER BY event_time) AS nr
    FROM events
    WHERE event_time >= CURRENT_DATE - INTERVAL '31 days')
SELECT *
FROM crosstab(
    'SELECT id, event_name, event_time
     FROM CTE
     WHERE nr <= 10
     ORDER BY nr') AS ct(id int,
                         event_name text,
                         EventTime1 timestamp,
                         EventTime2 timestamp,
                         EventTime3 timestamp,
                         EventTime4 timestamp,
                         EventTime5 timestamp,
                         EventTime6 timestamp,
                         EventTime7 timestamp,
                         EventTime8 timestamp,
                         EventTime9 timestamp,
                         EventTime10 timestamp)
This query will be used as the data source in Tableau (data visualization and analysis software), so it would be great if it could be one query (without temp tables, adding new functions, etc.).
Thanks!
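One workaround worth sketching (untested, and it assumes the tablefunc extension is installed): crosstab only ever sees the query string passed to it, so the CTE has to live inside that string rather than outside the call. Note also that with the single-argument form of crosstab, the output row is the row identifier followed only by the value columns; the category column (event_name here) must not be declared in ct(...). The column names and count still have to be fixed up front, which is crosstab's inherent limitation:

SELECT *
FROM crosstab(
    $$
    WITH cte AS (
        SELECT 1 AS id,
               event_name,
               event_time,
               ROW_NUMBER() OVER (ORDER BY event_time) AS nr
        FROM events
        WHERE event_time >= CURRENT_DATE - INTERVAL '31 days')
    SELECT id, event_name, event_time
    FROM cte
    WHERE nr <= 10
    ORDER BY nr
    $$
) AS ct(id int,
        event_1 timestamp, event_2 timestamp, event_3 timestamp,
        event_4 timestamp, event_5 timestamp, event_6 timestamp,
        event_7 timestamp, event_8 timestamp, event_9 timestamp,
        event_10 timestamp);

Missing events simply come back as NULL columns, so the fixed 10-column limit from the question is respected.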
I have a large table with hundreds of millions of rows. Because it is so big, it is partitioned by date range first, and then each of those partitions is also partitioned by period_id.
CREATE TABLE research.ranks
(
    security_id integer NOT NULL,
    period_id smallint NOT NULL,
    classificationtype_id smallint NOT NULL,
    dtz timestamp with time zone NOT NULL,
    create_dt timestamp with time zone NOT NULL DEFAULT now(),
    update_dt timestamp with time zone NOT NULL DEFAULT now(),
    rank_1 smallint,
    rank_2 smallint,
    rank_3 smallint
) PARTITION BY RANGE (dtz);

CREATE TABLE zpart.ranks_y1990 PARTITION OF research.ranks
    FOR VALUES FROM ('1990-01-01 00:00:00+00') TO ('1991-01-01 00:00:00+00')
    PARTITION BY LIST (period_id);

CREATE TABLE zpart.ranks_y1990p1 PARTITION OF zpart.ranks_y1990
    FOR VALUES IN ('1');
Every year has a partition like this, and there are another dozen list partitions for each year.
I needed to see the result for security_ids side by side for different period_ids.
So the join I initially used was one like this:
select c1.security_id, c1.dtz, c1.rank_2 as rank_2_1, c9.rank_2 as rank_2_9
from research.ranks c1
left join research.ranks c9
       on c1.dtz = c9.dtz
      and c1.security_id = c9.security_id
      and c9.period_id = 9
where c1.period_id = 1
  and c1.dtz > now() - interval '10 years'
which was slow, but acceptable. I'll call this the JOIN version.
Then, we wanted to show two more period_ids and extended the above to add additional joins on the new period_ids.
This slowed down the join enough for us to look at a different solution.
We found that the following type of query runs about 6 or 7 times faster:
select c1.security_id, c1.dtz,
       sum(case when c1.period_id = 1  then c1.rank_2 end) as rank_2_1,
       sum(case when c1.period_id = 9  then c1.rank_2 end) as rank_2_9,
       sum(case when c1.period_id = 11 then c1.rank_2 end) as rank_2_11,
       sum(case when c1.period_id = 14 then c1.rank_2 end) as rank_2_14
from research.ranks c1
where c1.period_id in (1, 11, 14, 9)
  and c1.dtz > now() - interval '10 years'
group by c1.security_id, c1.dtz;
We can use the sum because the table has unique indexes so we know there will only ever be one record that is being "summed". I'll call this the SUM version.
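For illustration, a unique index consistent with that guarantee might look like the following (a hypothetical definition, since the actual index isn't shown in the question; on a partitioned table a unique index must include all partition key columns, here dtz and period_id):

CREATE UNIQUE INDEX ranks_security_dtz_period_uniq
    ON research.ranks (security_id, dtz, period_id);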
The speed is so much better that I'm questioning half of the code I have written previously! Two questions:
Should I be trying to use the SUM version rather than the JOIN version everywhere or is the efficiency likely to be a factor of the specific structure and not likely to be as useful in other circumstances?
Is there a problem with the logic of the SUM version in cases that I haven't considered?
To be honest, I don't think your "join" version was ever a good idea anyway. You only have one (partitioned) table so there never was a need for any join.
SUM() is the way to go, but I would use SUM(...) FILTER (WHERE ...) instead of a CASE:
SELECT
    security_id,
    dtz,
    SUM(rank_2) FILTER (WHERE period_id = 1)  AS rank_2_1,
    SUM(rank_2) FILTER (WHERE period_id = 9)  AS rank_2_9,
    SUM(rank_2) FILTER (WHERE period_id = 11) AS rank_2_11,
    SUM(rank_2) FILTER (WHERE period_id = 14) AS rank_2_14
FROM research.ranks
WHERE period_id IN (1, 11, 14, 9)
  AND dtz > now() - INTERVAL '10 years'
GROUP BY security_id, dtz;
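A note on the aggregate choice, to address the second question above: because at most one row exists per (security_id, dtz, period_id), any aggregate that maps a single row to itself works here, so

    MIN(rank_2) FILTER (WHERE period_id = 1) AS rank_2_1

would give exactly the same result. The logic only breaks if the uniqueness guarantee is ever violated: SUM would then silently add the duplicate values together, while MIN or MAX would silently pick one of them.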
I have a data set as shown in the picture.
I am trying to get the date difference between eligenddate (first row) and eligstartdate (second row). I would really appreciate any suggestions.
Thank you
SQL 2005:
One solution is to insert the rows, together with a row number (generated using [elig]startdate as the ORDER BY criterion; see also note #1), into a table variable (@DateWithRowNum, if the number of rows is small) or a temp table (#DateWithRowNum, if the number of rows is high), and then do a self join:
DECLARE #DateWithRowNum TABLE (
memberid VARCHAR(50) NOT NULL,
rownum INT,
PRIMARY KEY(memberid, rownum),
startdate DATETIME NOT NULL,
enddate DATETIME NOT NULL
)
INSERT #DateWithRowNum (memberid, rownum, startdate, enddate)
SELECT memberid,
ROW_NUMBER() OVER(PARTITION BY memberid ORDER By startdate),
startdate,
enddate
FROM dbo.MyTable
SELECT crt.*, DATEDIFF(MONTH, crt.enddate, prev.startdate) AS gap
FROM #DateWithRowNum crt
LEFT JOIN #DateWithRowNum prev ON crt.memberid = prev.memberid AND crt.rownum - 1 = prev.rownum
ORDER BY crt.memberid, crt.rownum
Another solution is to use a common table expression instead of the table variable / temp table:
;WITH DateWithRowNum AS (
    SELECT memberid,
           ROW_NUMBER() OVER(PARTITION BY memberid ORDER BY startdate) AS rownum,
           startdate,
           enddate
    FROM dbo.MyTable
)
SELECT crt.*, DATEDIFF(MONTH, prev.enddate, crt.startdate) AS gap
FROM DateWithRowNum crt
LEFT /*HASH*/ JOIN DateWithRowNum prev ON crt.memberid = prev.memberid AND crt.rownum - 1 = prev.rownum
ORDER BY crt.memberid, crt.rownum
Note #1: I assume that you need to calculate these values for every memberid.
Note #2: The HASH hint (shown commented out above) forces SQL Server to evaluate each data source (crt or prev) of the LEFT JOIN just once.
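As an aside, on SQL Server 2012 or later (not the SQL 2005 target above), the self join can be avoided entirely with the LEAD() window function; a minimal sketch against the same assumed dbo.MyTable:

SELECT memberid,
       startdate,
       enddate,
       DATEDIFF(MONTH, enddate,
                LEAD(startdate) OVER (PARTITION BY memberid
                                      ORDER BY startdate)) AS gap
FROM dbo.MyTable
ORDER BY memberid, startdate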
I have two tables that I am joining together. I want to filter the results based on whether or not a row was created within the previous 24 hours. Here are my tables.
create table user_infos (
    id integer,
    date_created timestamp with time zone,
    name varchar(40)
);

create table user_data (
    id integer,
    team_name varchar(40)
);
This is my query that I am using to join them together and hopefully filter them:
SELECT timestampdiff(HOUR, user_infos.date_created, now()) as hours_since,
user_data.id, user_data.team_name,
user_infos.name, user_infos.date_created
FROM user_data
JOIN user_infos
ON user_infos.id=user_data.id
WHERE timestampdiff(HOUR, user_infos.date_created, now()) < 24
ORDER BY name ASC, id ASC
LIMIT 50 OFFSET 0
What I am trying to do is join the two tables such that id, team_name, name, and date_created are treated as one table.
Then I would like to filter it such that I only get the results that were created within the last 24 hours. This is what I am using the timestampdiff for.
Then I order them by name and id in ascending order.
Then I limit the results to 50.
Everything looks good except that it doesn't work. When I run this query it tells me that the "hour" column does not exist.
Clearly there is something subtle here that is messing everything up. Does anyone have any suggestions?
Alternatively, I've tried this, but it tells me that there is a syntax error at 1:
SELECT
user_data.id, user_data.team_name,
user_infos.name, user_infos.date_created
FROM user_data
JOIN user_infos
ON user_infos.id=user_data.id
WHERE user_infos.date_created
BETWEEN DATE( DATE_SUB( NOW() , INTERVAL 1 DAY ) ) AND
DATE ( NOW() )
ORDER BY name ASC, id ASC
LIMIT 50 OFFSET 0
I think your problem is with your data types. You are checking whether a timestamp field is between a casted date field (DATE() removes the time from the date) and another date, and NOW() is different from DATE(NOW()).
So you have two options: you can either remove the DATE() casting, as below, or cast date_created to a date.
SELECT
user_data.id, user_data.team_name,
user_infos.name, user_infos.date_created
FROM user_data
JOIN user_infos
ON user_infos.id=user_data.id
WHERE user_infos.date_created
BETWEEN DATE_SUB( NOW() , INTERVAL 1 DAY ) AND
NOW()
ORDER BY name ASC, id ASC
LIMIT 50 OFFSET 0
SQL Fiddle Demo
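One caveat: timestampdiff() and DATE_SUB() are MySQL functions, while the timestamp with time zone column type in the schema is PostgreSQL syntax, and PostgreSQL has neither function (which would also explain the 'column "hour" does not exist' error, since PostgreSQL parses HOUR as a column reference). If this is actually running on PostgreSQL, a sketch of the equivalent query would be:

SELECT EXTRACT(EPOCH FROM now() - ui.date_created) / 3600 AS hours_since,
       ud.id, ud.team_name, ui.name, ui.date_created
FROM user_data ud
JOIN user_infos ui ON ui.id = ud.id
WHERE ui.date_created > now() - INTERVAL '24 hours'
ORDER BY ui.name ASC, ud.id ASC
LIMIT 50 OFFSET 0;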
Given this table:
SELECT * FROM CommodityPricing order by dateField
"SILVER";60.45;"2002-01-01"
"GOLD";130.45;"2002-01-01"
"COPPER";96.45;"2002-01-01"
"SILVER";70.45;"2003-01-01"
"GOLD";140.45;"2003-01-01"
"COPPER";99.45;"2003-01-01"
"GOLD";150.45;"2004-01-01"
"MERCURY";60;"2004-01-01"
"SILVER";80.45;"2004-01-01"
As of 2004, COPPER was dropped and MERCURY was introduced.
How can I get the value of (array_agg(value order by date desc))[1] as NULL for COPPER?
select commodity, (array_agg(value order by date desc)) -- [1]
from CommodityPricing
group by commodity
"COPPER";"{99.45,96.45}"
"GOLD";"{150.45,140.45,130.45}"
"MERCURY";"{60}"
"SILVER";"{80.45,70.45,60.45}"
SQL Fiddle
select commodity,
       array_agg(
           case when commodity = 'COPPER' then null else value end
           order by date desc
       )
from CommodityPricing
group by commodity;
To "pad" missing rows with NULL values in the resulting array, build your query on a full grid of rows and LEFT JOIN the actual values to the grid.
Given this table definition:
CREATE TEMP TABLE price (
commodity text
, value numeric
, ts timestamp -- using ts instead of the inappropriate name date
);
I use generate_series() to get a list of timestamps representing the years, and CROSS JOIN it to a unique list of all commodities (SELECT DISTINCT ...).
SELECT commodity, (array_agg(value ORDER BY ts DESC)) AS years
FROM generate_series ('2002-01-01 00:00:00'::timestamp
, '2004-01-01 00:00:00'::timestamp
, '1y') t(ts)
CROSS JOIN (SELECT DISTINCT commodity FROM price) c(commodity)
LEFT JOIN price p USING (ts, commodity)
GROUP BY commodity;
Result:
COPPER {NULL,99.45,96.45}
GOLD {150.45,140.45,130.45}
MERCURY {60,NULL,NULL}
SILVER {80.45,70.45,60.45}
SQL Fiddle.
I cast the array to text in the fiddle, because the display sucks and would swallow NULL values otherwise.