I have looked high and low for a way to do this, and several previously asked questions have similarities, but none help me with what I want to do.
This is what I have:
select siteid, count(dmovedin) as dmovedin
from Ledgers
where dDeleted is null and iTferFromLedID is NULL and dmovedin between '2016-05-01 00:00:00' and '2016-05-31 23:59:59'
group by siteid
order by siteid
go
select siteid, count(dMovedOut) as dmovedout
from Ledgers
where dDeleted is null and iTfertoLedID is NULL and dmovedout between '2016-05-01 00:00:00' and '2016-05-31 23:59:59'
group by siteid
order by siteid
Currently SQL returns two result sets, each with a siteid column and its own count column.
What I want is those two count columns side by side in a single result set, but I cannot figure out how to do it.
Does anyone have any ideas?
It looks like everything is the same except the second column in the select.
In that case, combine them there:
select siteid, count(dmovedin) as dmovedin, count(dmovedout) as dmovedout
Then just combine your Where:
where dDeleted is null
and (iTfertoLedID is NULL
and iTferFromLedID is NULL)
and (dmovedin between '2016-05-01 00:00:00' and '2016-05-31 23:59:59'
**and** dmovedout between '2016-05-01 00:00:00' and '2016-05-31 23:59:59')
Depending on the logic you want, the bolded 'and' might be an 'or' (which is why I nested those lines in parentheses). I don't know what you're trying to do so I can't say for sure.
Everything else should be the same. Have you tried that? This seems to be basic SQL.
What if you turn each count query into a derived table in your query? Since you have different WHERE clauses, I think it will be easier.
SELECT A.siteid, A.dmovedin, ISNULL(B.dmovedout, 0) dmovedout
FROM
(
select siteid, count(dmovedin) as dmovedin
from Ledgers
where dDeleted is null and iTferFromLedID is NULL and dmovedin between '2016-05-01 00:00:00' and '2016-05-31 23:59:59'
group by siteid
) A
LEFT JOIN (
select siteid, count(dMovedOut) as dmovedout
from Ledgers
where dDeleted is null and iTfertoLedID is NULL and dmovedout between '2016-05-01 00:00:00' and '2016-05-31 23:59:59'
group by siteid
) B ON A.siteid = B.siteid
ORDER BY A.siteid
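A runnable sketch of this derived-table pattern, using SQLite through Python's sqlite3 module. The table, columns, and sample rows are simplified stand-ins for the Ledgers schema in the question (the real query targets SQL Server, and the deleted/transfer filters are dropped for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Ledgers (siteid INTEGER, dmovedin TEXT, dmovedout TEXT);
INSERT INTO Ledgers VALUES
  (1, '2016-05-10', NULL),
  (1, NULL,         '2016-05-12'),
  (2, '2016-05-03', '2016-05-20');
""")

rows = conn.execute("""
SELECT A.siteid, A.dmovedin, COALESCE(B.dmovedout, 0) AS dmovedout
FROM (
  SELECT siteid, COUNT(dmovedin) AS dmovedin      -- move-in counts per site
  FROM Ledgers
  WHERE dmovedin BETWEEN '2016-05-01' AND '2016-05-31'
  GROUP BY siteid
) A
LEFT JOIN (
  SELECT siteid, COUNT(dmovedout) AS dmovedout    -- move-out counts per site
  FROM Ledgers
  WHERE dmovedout BETWEEN '2016-05-01' AND '2016-05-31'
  GROUP BY siteid
) B ON A.siteid = B.siteid
ORDER BY A.siteid
""").fetchall()
print(rows)  # [(1, 1, 1), (2, 1, 1)]
```

Note the LEFT JOIN only keeps sites that appear in A; a site with move-outs but no move-ins in the range would be dropped, so a FULL OUTER JOIN (or a UNION of the site lists) may be needed depending on the data.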
I want to get the number of consecutive days from the current date using Postgres SQL.
[screenshot omitted: sample data with the expected consecutive-days counts highlighted]
The screenshot above shows the scenario; the consecutive-days count should be as highlighted there.
Below is the SQL query I have created, but it's not returning the expected result:
with grouped_dates as (
select user_id, created_at::timestamp::date,
(created_at::timestamp::date - (row_number() over (partition by user_id order by created_at::timestamp::date) || ' days')::interval)::date as grouping_date
from watch_history
)
select * , dense_rank() over (partition by grouping_date order by created_at::timestamp::date) as in_streak
from grouped_dates where user_id = 702
order by created_at::timestamp::date
Can anyone please help me resolve this issue?
If we could somehow apply DISTINCT to the created_at field in the query below, that would solve my problem.
WITH list AS
(
SELECT user_id,
(created_at::timestamp::date - (row_number() over (partition by user_id order by created_at::timestamp::date) || ' days')::interval)::date as next_day
FROM watch_history
)
SELECT user_id, count(*) AS number_of_consecutive_days
FROM list
WHERE next_day IS NOT NULL
GROUP BY user_id
Does anyone have an idea how to apply DISTINCT to created_at in the above query?
To get the "number of consecutive days" for the same user_id :
WITH list AS
(
SELECT user_id
, array_agg(created_at) OVER (PARTITION BY user_id ORDER BY created_at RANGE BETWEEN CURRENT ROW AND '1 day' FOLLOWING) AS consecutive_days
FROM watch_history
)
SELECT user_id, count(DISTINCT d.day) AS number_of_consecutive_days
FROM list
CROSS JOIN LATERAL unnest(consecutive_days) AS d(day)
WHERE array_length(consecutive_days, 1) > 1
GROUP BY user_id
To get the list of "consecutive days" for the same user_id :
WITH list AS
(
SELECT user_id
, array_agg(created_at) OVER (PARTITION BY user_id ORDER BY created_at RANGE BETWEEN CURRENT ROW AND '1 day' FOLLOWING) AS consecutive_days
FROM watch_history
)
SELECT user_id
, array_agg(DISTINCT d.day ORDER BY d.day) AS list_of_consecutive_days
FROM list
CROSS JOIN LATERAL unnest(consecutive_days) AS d(day)
WHERE array_length(consecutive_days, 1) > 1
GROUP BY user_id
full example & result in dbfiddle
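The DISTINCT the question asks about can also be applied in a CTE before the row-number trick from the original query. A portable sketch in SQLite (via Python's sqlite3; table and column names follow the question's watch_history, the sample rows are invented, and a duplicate day is included to show the de-duplication):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE watch_history (user_id INTEGER, created_at TEXT);
INSERT INTO watch_history VALUES
  (702, '2023-01-01'), (702, '2023-01-01'),   -- duplicate day
  (702, '2023-01-02'), (702, '2023-01-05');
""")

rows = conn.execute("""
WITH days AS (          -- de-duplicate to one row per user per day
  SELECT DISTINCT user_id, date(created_at) AS d
  FROM watch_history
),
numbered AS (           -- same day-offset trick as in the question
  SELECT user_id, d,
         ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY d) AS rn
  FROM days
)
SELECT user_id, date(d, '-' || rn || ' days') AS grp, COUNT(*) AS streak_len
FROM numbered
GROUP BY user_id, grp   -- consecutive days share the same grp date
ORDER BY grp
""").fetchall()
print(rows)  # [(702, '2022-12-31', 2), (702, '2023-01-02', 1)]
```

Each streak collapses to one anchor date (grp), so streak_len is the length of each run of consecutive days.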
I want to count how many documents do not have a date in field1, field2, field3, or field4. I have created the query below, but it does not really look good.
select
count(doc)
where true
and field1 is not null
and field2 is not null
and field3 is not null
and field4 is not null
How can I apply one filter for multiple columns?
Thanks in advance.
There is nothing at all wrong with your current query, and it is probably what I would be using here. However, you could use a COALESCE trick here:
SELECT COUNT(*)
FROM yourTable
WHERE COALESCE(field1, field2, field3, field4) IS NOT NULL;
This works because COALESCE returns its first non-NULL argument: the result IS NOT NULL whenever at least one of the four fields has a date, and IS NULL only when all four fields are NULL.
Note that this counts records having at least one non NULL field. If instead you want to count records where all four fields are NULL, then use:
SELECT COUNT(*)
FROM yourTable
WHERE COALESCE(field1, field2, field3, field4) IS NULL;
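A small runnable check of both variants of the COALESCE trick, using SQLite via Python's sqlite3 (the docs table and its rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE docs (field1 TEXT, field2 TEXT, field3 TEXT, field4 TEXT);
INSERT INTO docs VALUES
  ('2021-01-01', NULL, NULL, NULL),   -- at least one date
  (NULL, NULL, NULL, NULL),           -- no dates at all
  (NULL, NULL, '2021-03-01', NULL);   -- at least one date
""")

# Records with at least one non-NULL field
any_date = conn.execute(
    "SELECT COUNT(*) FROM docs "
    "WHERE COALESCE(field1, field2, field3, field4) IS NOT NULL"
).fetchone()[0]

# Records where all four fields are NULL
no_date = conn.execute(
    "SELECT COUNT(*) FROM docs "
    "WHERE COALESCE(field1, field2, field3, field4) IS NULL"
).fetchone()[0]

print(any_date, no_date)  # 2 1
```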
I want to select the count of impressions between two dates, and the all-time impression count as well. Can we do this in one query?
This is my query, in which I can only get impressions between the dates:
SELECT
robotAds."Ad_ID",
count(robotScraper."adIDAdID") as ad_impression
FROM
robot__ads robotAds
LEFT JOIN robot__session__scraper__data robotScraper
ON robotScraper."adIDAdID" = robotAds."Ad_ID"
LEFT JOIN robot__session_data robotSession
ON robotSession."id" = robotScraper."sessionIDId"
AND robotSession."Session_start" BETWEEN '2020-11-25 00:00:00'
AND '2021-04-01 00:00:00'
GROUP BY
robotAds."Ad_ID"
What do I have to do to get the all-time impression count in this same query?
Thanks
Yes, you can:
SELECT
robotAds."Ad_ID",
count(robotScraper."adIDAdID") filter (where robotSession."Session_start" BETWEEN '2020-11-25 00:00:00' AND '2021-04-01 00:00:00') as ad_impression,
count(robotScraper."adIDAdID") as count_alltime
FROM
robot__ads robotAds
LEFT JOIN robot__session__scraper__data robotScraper
ON robotScraper."adIDAdID" = robotAds."Ad_ID"
LEFT JOIN robot__session_data robotSession
ON robotSession."id" = robotScraper."sessionIDId"
GROUP BY
robotAds."Ad_ID"
"Conditional aggregation" should meet this need. Essentially this is using a case expression inside the aggregation function, like this:
SELECT
robotAds."Ad_ID"
, count(CASE
WHEN robotSession."Session_start" BETWEEN '2020-11-25 00:00:00'
AND '2021-04-01 00:00:00'
THEN 1
END) AS range_ad_impression
, count(robotScraper."adIDAdID") AS all_ad_impression
FROM robot__ads robotAds
LEFT JOIN robot__session__scraper__data robotScraper ON robotScraper."adIDAdID" = robotAds."Ad_ID"
LEFT JOIN robot__session_data robotSession ON robotSession."id" = robotScraper."sessionIDId"
GROUP BY robotAds."Ad_ID"
Note: the count() function ignores NULLs. Above I have omitted an explicit instruction to return NULL, but some prefer to spell it out with ELSE, i.e.
,count(CASE
WHEN robotSession."Session_start" BETWEEN '2020-11-25 00:00:00'
AND '2021-04-01 00:00:00'
THEN 1 ELSE NULL
END) AS range_count
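The conditional-aggregation pattern can be checked in miniature with SQLite via Python's sqlite3. The three-table join is collapsed into a single invented impressions table, since the pattern itself only depends on the aggregate expressions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE impressions (ad_id INTEGER, session_start TEXT);
INSERT INTO impressions VALUES
  (1, '2020-12-01'),   -- in range
  (1, '2021-05-01'),   -- out of range
  (2, '2020-11-30');   -- in range
""")

rows = conn.execute("""
SELECT ad_id,
       COUNT(CASE WHEN session_start BETWEEN '2020-11-25' AND '2021-04-01'
                  THEN 1 END)  AS range_count,   -- NULL outside range, so not counted
       COUNT(*)               AS all_time
FROM impressions
GROUP BY ad_id
ORDER BY ad_id
""").fetchall()
print(rows)  # [(1, 1, 2), (2, 1, 1)]
```

The same idea underlies the FILTER clause in the other answer; FILTER is more readable where the database supports it, while the CASE form works nearly everywhere.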
My table is something like
CREATE TABLE table1
(
_id text,
name text,
data_type int,
data_value int,
data_date timestamp -- insertion time
);
Now, due to a system bug, many duplicate entries were created, and I need to remove those duplicates and keep only unique entries, excluding data_date from the comparison because it is a system-generated date.
My query to do that is something like:
DELETE FROM table1 A
USING ( SELECT _id, name, data_type, data_value, MIN(data_date) min_date
FROM table1
GROUP BY _id, name, data_type, data_value
HAVING count(data_date) > 1) B
WHERE A._id = B._id
AND A.name = B.name
AND A.data_type = B.data_type
AND A.data_value = B.data_value
AND A.data_date != B.min_date;
While this query works, the table has millions of records, so I want a faster way. My idea is to create a new column whose value is a partition over [_id, name, data_type, data_value] (the GROUP BY columns), but I could not find a way to create such a column.
I would appreciate it if anyone could suggest a way to create such a column.
Edit 1:
There is another thing to add, I don't want to use CTE or subquery for updating this new column because it will be same as my existing query.
The best way is simply creating a new table without duplicated records:
CREATE TABLE table1_dedup AS
SELECT _id, name, data_type, data_value, MIN(data_date) min_date
FROM table1
GROUP BY _id, name, data_type, data_value;
Alternatively, you can create a rank and then filter, but a subquery is needed.
RANK() OVER (PARTITION BY your_variables ORDER BY data_date ASC) r
And then filter r=1.
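A runnable sketch of the rank-and-filter delete, using SQLite via Python's sqlite3 with invented sample rows. SQLite's rowid plays the role Postgres's ctid would play for addressing physical rows (in Postgres you would delete WHERE ctid IN (...) instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (_id TEXT, name TEXT, data_type INTEGER,
                     data_value INTEGER, data_date TEXT);
INSERT INTO table1 VALUES
  ('a', 'x', 1, 10, '2021-01-01 00:00:00'),
  ('a', 'x', 1, 10, '2021-01-02 00:00:00'),  -- duplicate, later date
  ('b', 'y', 2, 20, '2021-01-01 00:00:00');
""")

# Rank rows within each duplicate group by date, keep rank 1, delete the rest
conn.execute("""
DELETE FROM table1
WHERE rowid IN (
  SELECT rowid FROM (
    SELECT rowid,
           ROW_NUMBER() OVER (PARTITION BY _id, name, data_type, data_value
                              ORDER BY data_date) AS r
    FROM table1
  )
  WHERE r > 1
)
""")

remaining = conn.execute("SELECT COUNT(*) FROM table1").fetchone()[0]
print(remaining)  # 2
```

Only the earliest row of each duplicate group survives, matching the MIN(data_date) behaviour of the original DELETE.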
Given this table:
SELECT * FROM CommodityPricing order by dateField
"SILVER";60.45;"2002-01-01"
"GOLD";130.45;"2002-01-01"
"COPPER";96.45;"2002-01-01"
"SILVER";70.45;"2003-01-01"
"GOLD";140.45;"2003-01-01"
"COPPER";99.45;"2003-01-01"
"GOLD";150.45;"2004-01-01"
"MERCURY";60;"2004-01-01"
"SILVER";80.45;"2004-01-01"
As of 2004, COPPER was dropped and MERCURY introduced.
How can I get the value of (array_agg(value order by date desc) ) [1] as NULL for COPPER?
select commodity,(array_agg(value order by date desc) ) --[1]
from CommodityPricing
group by commodity
"COPPER";"{99.45,96.45}"
"GOLD";"{150.45,140.45,130.45}"
"MERCURY";"{60}"
"SILVER";"{80.45,70.45,60.45}"
SQL Fiddle
select
commodity,
array_agg(
case when commodity = 'COPPER' then null else value end
order by date desc
)
from CommodityPricing
group by commodity
;
To "pad" missing rows with NULL values in the resulting array, build your query on full grid of rows and LEFT JOIN actual values to the grid.
Given this table definition:
CREATE TEMP TABLE price (
commodity text
, value numeric
, ts timestamp -- using ts instead of the inappropriate name date
);
I use generate_series() to get a list of timestamps representing the years and CROSS JOIN to a unique list of all commodities (SELECT DISTINCT ...).
SELECT commodity, (array_agg(value ORDER BY ts DESC)) AS years
FROM generate_series ('2002-01-01 00:00:00'::timestamp
, '2004-01-01 00:00:00'::timestamp
, '1y') t(ts)
CROSS JOIN (SELECT DISTINCT commodity FROM price) c(commodity)
LEFT JOIN price p USING (ts, commodity)
GROUP BY commodity;
Result:
COPPER {NULL,99.45,96.45}
GOLD {150.45,140.45,130.45}
MERCURY {60,NULL,NULL}
SILVER {80.45,70.45,60.45}
SQL Fiddle.
I cast the array to text in the fiddle, because the default display would otherwise swallow the NULL values.
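The grid-plus-LEFT-JOIN idea can be demonstrated without Postgres arrays, using SQLite via Python's sqlite3. Here the year grid is a literal VALUES list instead of generate_series() (which SQLite lacks), and the padded NULL shows up as Python's None:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE price (commodity TEXT, value REAL, ts TEXT);
INSERT INTO price VALUES
  ('COPPER', 96.45,  '2002-01-01'), ('COPPER', 99.45,  '2003-01-01'),
  ('GOLD',   130.45, '2002-01-01'), ('GOLD',   140.45, '2003-01-01'),
  ('GOLD',   150.45, '2004-01-01');
""")

rows = conn.execute("""
WITH years(ts) AS (VALUES ('2002-01-01'), ('2003-01-01'), ('2004-01-01'))
SELECT c.commodity, y.ts, p.value
FROM years y
CROSS JOIN (SELECT DISTINCT commodity FROM price) c   -- full grid
LEFT JOIN price p ON p.ts = y.ts AND p.commodity = c.commodity
ORDER BY c.commodity, y.ts DESC
""").fetchall()

print(rows[0])  # ('COPPER', '2004-01-01', None) -- the padded gap
```

Aggregating p.value per commodity over this grid then yields arrays (or lists) with NULL in the missing positions, exactly as in the Postgres result above.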