How to create a report in SQL based on a query parameter date and its corresponding weekday - rdbms

My SQL table is as under
The data is updated by users on a daily basis, and Report_date changes whenever any user updates the data. Now I need to generate a report from the above table based on a date passed as a parameter. For example, if the query parameter (Report_date) is 1-Feb-20, the report should contain:
Rly; the sum of trains scheduled on the weekday corresponding to the query date (the sum of trains where Sch_SAT is Y, since 1-Feb-20 falls on a Saturday); the sum of trains where Report_date equals the query date and Sch_SAT is Y; and the sum of trains where Sch_SAT is Y and Report_date does not equal the query date. The desired report is as under:
Is it possible to generate the above report? If possible, please help.
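(A sketch of one possible approach, using conditional aggregation keyed off DATENAME. The table and column names here, Train_schedule, Train, Report_date and Sch_SUN through Sch_SAT, are assumptions taken from the question's description, not the real schema.)
DECLARE @dt date = '2020-02-01';

SELECT Rly,
       COUNT(Train) AS Trains_on_weekday,                                            -- scheduled on the weekday of @dt
       COUNT(CASE WHEN Report_date = @dt THEN Train END) AS Reported,                -- reported for the query date
       COUNT(CASE WHEN Report_date <> @dt OR Report_date IS NULL THEN Train END) AS Not_reported
FROM Train_schedule                               -- hypothetical table name
WHERE CASE DATENAME(weekday, @dt)                 -- pick the schedule flag matching the parameter date's weekday
          WHEN 'Sunday'    THEN Sch_SUN
          WHEN 'Monday'    THEN Sch_MON
          WHEN 'Tuesday'   THEN Sch_TUE
          WHEN 'Wednesday' THEN Sch_WED
          WHEN 'Thursday'  THEN Sch_THU
          WHEN 'Friday'    THEN Sch_FRI
          WHEN 'Saturday'  THEN Sch_SAT
      END = 'Y'
GROUP BY Rly;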

Thanks for the valuable tips on asking questions in a presentable form on this forum. I will keep them in mind for future issues. I have achieved the desired result by splitting and rejoining the table. My SQL query (stored procedure) is:
USE [Loco_bank]
GO
/****** Object: StoredProcedure [dbo].[HOG_summary] Script Date: 02/13/2020 10:19:07 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[HOG_summary]
    @Sunday varchar(1), @Monday varchar(1), @Tuesday varchar(1), @Wednesday varchar(1),
    @Thursday varchar(1), @Friday varchar(1), @Saturday varchar(1), @dt DATETIME
AS
Begin
    -- Trains scheduled on the selected weekday(s); a NULL parameter disables that weekday's filter
    SELECT Zrly as Master_Zrly, COUNT(Train) as Trains_sch
    INTO #tab1
    FROM Hotel_load
    WHERE ([Saturday] = @Saturday OR @Saturday IS NULL) AND ([Sunday] = @Sunday OR @Sunday IS NULL)
      AND ([Monday] = @Monday OR @Monday IS NULL) AND ([Tuesday] = @Tuesday OR @Tuesday IS NULL)
      AND ([Wednesday] = @Wednesday OR @Wednesday IS NULL) AND ([Thursday] = @Thursday OR @Thursday IS NULL)
      AND ([Friday] = @Friday OR @Friday IS NULL)
    GROUP BY Zrly

    -- Trains actually reported on the query date
    SELECT Zrly, COUNT(updated_on) as Reported, COUNT(HOG_loco_type_master) as HOG_Run,
           (COUNT(updated_on) - COUNT(HOG_loco_type_master)) as NONHOG_Run
    INTO #tab2
    FROM Hotel_load
    WHERE ([Saturday] = @Saturday OR @Saturday IS NULL) AND ([Sunday] = @Sunday OR @Sunday IS NULL)
      AND ([Monday] = @Monday OR @Monday IS NULL) AND ([Tuesday] = @Tuesday OR @Tuesday IS NULL)
      AND ([Wednesday] = @Wednesday OR @Wednesday IS NULL) AND ([Thursday] = @Thursday OR @Thursday IS NULL)
      AND ([Friday] = @Friday OR @Friday IS NULL)
      AND updated_on = @dt
    GROUP BY Zrly

    -- Scheduled vs reported, per railway
    SELECT #tab1.Master_Zrly, #tab1.Trains_sch, #tab2.Zrly, #tab2.Reported, #tab2.HOG_Run, #tab2.NONHOG_Run
    FROM #tab1
    LEFT JOIN #tab2 ON #tab1.Master_Zrly = #tab2.Zrly
End
My query string is:
http://127.0.0.1/HOG/HOG_Report_BD.aspx?Wednesday=Y&dt=12-Feb-2020
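For reference, the page presumably maps those query-string values onto the procedure parameters; the equivalent direct call, with the unspecified weekdays passed as NULL, would be something like:
EXEC [dbo].[HOG_summary]
     @Sunday = NULL, @Monday = NULL, @Tuesday = NULL, @Wednesday = 'Y',
     @Thursday = NULL, @Friday = NULL, @Saturday = NULL,
     @dt = '2020-02-12';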
And the output is:
| Master_Zrly | Trains_sch | Reported | HOG_Run | NONHOG_Run |
|-------------|------------|----------|---------|------------|
| SCR | 25 | 9 | 18 | 2 |
| NCR | 9 | 2 | 5 | 4 |
| SWR | 13 | 4 | | |
| ER | 28 | 7 | 22 | 1 |
| WCR | 13 | 6 | 7 | 5 |
| CR | 40 | 18 | 21 | 10 |
| KR | 1 | 1 | | |
| NWR | 17 | 16 | | |
| ECoR | 11 | 2 | | |
| NR | 99 | 30 | 62 | 6 |

Related

Leaderboards in PostgreSQL: get the 2 next and previous rows

We use PostgreSQL 14.1.
I have sample data that contains over 50 million records.
base table:
+------+----------+--------+--------+--------+
| id | item_id | battles| wins | damage |
+------+----------+--------+--------+--------+
| 1 | 255 | 35 | 52.08 | 1245.2 |
| 2 | 255 | 35 | 52.08 | 1245.2 |
| 3 | 255 | 35 | 52.08 | 1245.3 |
| 4 | 255 | 35 | 52.08 | 1245.3 |
| 5 | 255 | 35 | 52.09 | 1245.4 |
| 6 | 255 | 35 | 52.08 | 1245.3 |
| 7 | 255 | 35 | 52.08 | 1245.3 |
| 8 | 255 | 35 | 52.08 | 1245.7 |
| 1 | 460 | 18 | 47.35 | 1010.1 |
| 2 | 460 | 27 | 49.18 | 1518.9 |
| 3 | 460 | 16 | 50.78 | 1171.2 |
+------+----------+--------+--------+--------+
We need to get the target row number and 2 next and 2 previous rows as quickly as possible.
Indexed columns:
id
item_id
Sorting:
damage (DESC)
wins (DESC)
battles (ASC)
id (ASC)
In the example, we need to find the row number and the 2 rows on either side where id = 4 and item_id = 255. The result table should be:
+------+----------+--------+--------+--------+------+
| id | item_id | battles| wins | damage | rank |
+------+----------+--------+--------+--------+------+
| 5 | 255 | 35 | 52.09 | 1245.4 | 2 |
| 3 | 255 | 35 | 52.08 | 1245.3 | 3 |
| 4 | 255 | 35 | 52.08 | 1245.3 | 4 |
| 6 | 255 | 35 | 52.08 | 1245.3 | 5 |
| 7 | 255 | 35 | 52.08 | 1245.3 | 6 |
+------+----------+--------+--------+--------+------+
How can I do this with the ROW_NUMBER window function?
Is there any way to optimize the query to make it faster, given that the other columns have no indexes?
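Before wrapping anything in a function, the core idea can be sketched as a plain query, with the example values 255 and 4 hard-coded (the battles >= 20 filter used in the functions below is left out here):
WITH ranked AS (
    SELECT id, item_id, battles, wins, damage,
           ROW_NUMBER() OVER (ORDER BY damage DESC, wins DESC, battles, id) AS rnk
    FROM public.my_table
    WHERE item_id = 255
)
SELECT r.id, r.item_id, r.battles, r.wins, r.damage, r.rnk AS "rank"
FROM ranked AS r
JOIN ranked AS target ON target.id = 4   -- the row we are centering on
WHERE r.rnk BETWEEN target.rnk - 2 AND target.rnk + 2
ORDER BY r.rnk;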
CREATE OR REPLACE FUNCTION find_top(in_id integer, in_item_id integer) RETURNS TABLE (
    r_id int,
    r_item_id int,
    r_battles int,
    r_wins real,
    r_damage real,
    r_rank bigint,
    r_eff real,
    r_frags int
) AS $$
DECLARE
    center_place bigint;
BEGIN
    -- Rank all rows of the requested item and remember the rank of the target row.
    SELECT place INTO center_place
    FROM (
        SELECT id, item_id,
               ROW_NUMBER() OVER (ORDER BY damage DESC, wins DESC, battles, id) AS place
        FROM public.my_table
        WHERE item_id = in_item_id
          AND battles >= 20
    ) AS s
    WHERE s.id = in_id;

    -- Return the target row plus the two rows before and after it.
    -- The columns must match the declared RETURNS TABLE list in order and count;
    -- eff and frags are not in the sample table, so NULL placeholders are returned.
    RETURN QUERY
    SELECT pt.id, pt.item_id, pt.battles, pt.wins, pt.damage, s.place,
           NULL::real, NULL::int
    FROM (
        SELECT * FROM (
            SELECT ROW_NUMBER() OVER (ORDER BY damage DESC, wins DESC, battles, id) AS place,
                   id, item_id
            FROM public.my_table
            WHERE item_id = in_item_id
              AND battles >= 20
        ) x
        WHERE x.place BETWEEN (center_place - 2) AND (center_place + 2)
    ) s
    JOIN public.my_table pt
      ON pt.id = s.id AND pt.item_id = s.item_id
    ORDER BY s.place;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION find_top(in_id integer, in_item_id integer) RETURNS TABLE (
    r_id int,
    r_item_id int,
    r_battles int,
    r_wins real,
    r_damage real,
    r_rank bigint,
    r_eff real,
    r_frags int
) AS $$
BEGIN
    RETURN QUERY
    -- For every row, collect the ids of the row and its two neighbors on either side
    -- (in leaderboard order) into arrays; keep only the window whose center element
    -- (index 3) is the requested row, then join back to the table for the details.
    SELECT c.id, c.item_id, c.battles, c.wins, c.damage,
           b.ord - 3,             -- offset from the center row (-2 .. +2)
           NULL::real, NULL::int  -- eff and frags are not in the sample table
    FROM (
        SELECT array_agg(id) OVER w AS id,
               array_agg(item_id) OVER w AS item_id
        FROM public.my_table
        WINDOW w AS (ORDER BY damage DESC, wins DESC, battles, id
                     ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING)
    ) AS a
    CROSS JOIN LATERAL unnest(a.id, a.item_id) WITH ORDINALITY AS b(id, item_id, ord)
    INNER JOIN public.my_table AS c
            ON c.id = b.id
           AND c.item_id = b.item_id
    WHERE a.item_id[3] = in_item_id
      AND a.id[3] = in_id
    ORDER BY b.ord;
END;
$$ LANGUAGE plpgsql;
test result in dbfiddle
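On the speed question: this is an assumption rather than something verified against the 50-million-row table, but an index that matches both the item_id filter and the sort order lets the ROW_NUMBER() pass read rows already ordered instead of sorting them on every call:
-- covering the filter column plus the leaderboard sort order
CREATE INDEX IF NOT EXISTS my_table_leaderboard_idx
    ON public.my_table (item_id, damage DESC, wins DESC, battles, id);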

How to retrieve information from three tables under the conditions below in PostgreSQL

I have three tables.
TABLE_1:
T2_ID | ver | date | boolean
------+-----+------+---------
1 | X-20-50 | 2019-01-01 16:20:51.722336+00 | TRUE
2 | X-50-30 | 2019-02-26 16:20:51.722336+00 | TRUE
3 | X-20-32 | 2019-03-20 16:20:51.722336+00 | FALSE
1 | X-20-50 | 2019-01-09 16:20:51.722336+00 | FALSE
2 | X-20-50 | 2019-12-02 16:20:51.722336+00 | TRUE
3 | X-20-50 | 2019-01-24 16:20:51.722336+00 | TRUE
TABLE_2:
id | type | scheduler
--------------------------------------------------
1 | ABC | w1,w2,w3,w4,w5,w6,w7,w8,w9,w10,w11,w12
2 | PQR | w5,w9
3 | TRC | w1,w4,w8
TABLE_3
start_date_of_ver | end_date_of_ver | ver_name
-----------------------------------------------------------
2019-01-01 00:00:00+00 | 2019-04-01 00:00:00+00 | X-20-50
2019-02-25 00:00:00+00 | 2019-05-26 00:00:00+00 | X-50-30
2019-03-15 00:00:00+00 | 2019-06-06 00:00:00+00 | X-20-32
Table 4 should fulfill the conditions below.
It takes the version name (ver_name) as input.
From this ver_name it takes the start date and end date of the version (from table_3); if the version period is 3 months, it creates a 12-week table with the id (type) as the first column and creates an entry for each of the twelve weeks according to the scheduler in table_2.
Information in table_4 is updated as and when table_1 has entries for that particular week which are TRUE.
Note: table_1 entries are generated on a daily basis.
The desired table takes only ver_name as input and calculates the table below.
When table_1 does not have any entries, table_4 should look like this:
Table_4: X-20-50
id_of_table_2 | week_1 | week_2 | week_3 | week_4 | week_5 | week_6 | week_7 | week_8 | week_9 | week_10 | week_11 | week_12 |
------------------------------------------------------------------------------------------------------------------------------
ABC | w1 | w2 | w3 | w4 | w5 | w6 | w7 | w8 | w9 | w10 | w11 | w12 |
PQR | | | | | w5 | | | | w9 | | | |
TRC | w1 | | | w4 | | | | w8 | | | | |
When table_1 has entries, table_4 should look like this:
X-20-50
id_of_table_2 | week_1 | week_2 | week_3 | week_4 | week_5 | week_6 | week_7 | week_8 | week_9 | week_10 | week_11 | week_12 |
------------------------------------------------------------------------------------------------------------------------------
ABC | Done | Done | w3 | w4 | w5 | w6 | w7 | w8 | w9 | w10 | w11 | w12 |
PQR | | | | | w5 | | | | w9 | | | |
TRC | Done | | | w4 | | | | w8 | | | | |
You can create a function which takes the starting date of a week as input.
Example:
create function a(start_date date)
RETURNS json
LANGUAGE plpgsql
COST 100
VOLATILE
AS $BODY$
DECLARE
    outputjson json;
BEGIN
    -- Aggregate every row whose date falls within the 7 days starting at start_date.
    EXECUTE 'select json_agg(t) from table_name t where t.date >= $1 and t.date < $1 + 7'
        INTO outputjson
        USING start_date;
    RETURN outputjson;
END;
$BODY$;
Hope this helps.
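To enumerate the twelve week windows for one version from TABLE_3, generate_series can be cross-joined against the version row. This is only a sketch: table and column names follow the question, and whether a "week" starts on the version start date or on a calendar Monday is exactly the point raised in the next answer.
SELECT w.week_no,
       t3.start_date_of_ver + (w.week_no - 1) * interval '7 days' AS week_start,
       t3.start_date_of_ver + w.week_no * interval '7 days'       AS week_end
FROM table_3 AS t3
CROSS JOIN generate_series(1, 12) AS w(week_no)
WHERE t3.ver_name = 'X-20-50'   -- the ver_name input
ORDER BY w.week_no;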
Your requirement needs a little refinement. You want to retrieve weekly data yet do not define your week. On what day does it begin? Are all weeks 7 days long? What happens when Dec 31 falls on a Tuesday: is Friday, Jan 3 in the same week (see the current year's calendar)? Then there is the issue of the user input and what it represents. Is it the desired start date, with the week being that date plus the next 6 days, or is it any date within the weekly period?
The following assumes the ISO 8601 definition (google it - lots of material). Every week begins on Monday and all weeks are 7 days long. (Thus the week containing 31-Dec-2019 also includes 3-Jan-2020.) The routine extracts the ISO year and ISO week from the user-entered date.
--setup
create table weekly_something( c1 text, c2 text, date1 timestamptz, someem boolean);
insert into weekly_something( c1, c2, date1, someem )
values ('ABC','AB-20-50','2019-11-25 16:20:51.722336+00',TRUE)
, ('PQR','AB-50-30','2019-11-26 16:20:51.722336+00',TRUE)
, ('TRC','CD-20-32','2019-11-27 16:20:51.722336+00',FALSE)
, ('ABC','AB-20-50','2019-12-02 16:20:51.722336+00',FALSE)
, ('ABC','AB-20-50','2019-12-02 16:20:51.722336+00',TRUE)
, ('JFF','yy-45-89','2019-12-31 16:20:51.722336+00',TRUE)
, ('JFF','yy-89-30','2020-01-03 16:20:51.722336+00',TRUE) ;
-- JFF Just For Fun
-- SQL Function
create function week_of(week_date date)
returns setof weekly_something
language sql stable strict
as $$
select *
from weekly_something
where (extract('isoyear' from week_date), extract('week' from week_date)) =
(extract('isoyear' from date1), extract('week' from date1));
$$;
-- test
select * from week_of('2019-11-26');
select * from week_of('2019-12-30');

PostgreSQL array unique aggregation

I have a large table with structure
CREATE TABLE t (
id SERIAL primary key ,
a_list int[] not null,
b_list int[] not null,
c_list int[] not null,
d_list int[] not null,
type int not null
)
I want to query all unique values from a_list, b_list, c_list, and d_list for a given type, like this:
select
    some_array_unique_agg_function(a_list),
    some_array_unique_agg_function(b_list),
    some_array_unique_agg_function(c_list),
    some_array_unique_agg_function(d_list),
    count(1)
from t
where type = 30
For example, for this data
+----+---------+--------+--------+---------+------+
| id | a_list | b_list | c_list | d_list | type |
+----+---------+--------+--------+---------+------+
| 1 | {1,3,4} | {2,4} | {1,1} | {2,4,5} | 30 |
| 1 | {1,2,4} | {2,4} | {4,1} | {2,4,5} | 30 |
| 1 | {1,3,5} | {2} | {} | {2,4,5} | 30 |
+----+---------+--------+--------+---------+------+
I want the following result:
+-------------+--------+--------+-----------+-------+
| a_list | b_list | c_list | d_list | count |
+-------------+--------+--------+-----------+-------+
| {1,2,3,4,5} | {2,4} | {1,4} | {2,4,5} | 3 |
+-------------+--------+--------+-----------+-------+
Is there some_array_unique_agg_function for my purposes?
Try this:
with cte as (
    select
        -- In PostgreSQL 10+, several set-returning functions in one SELECT list expand
        -- in lockstep; shorter (or empty) arrays are padded with NULLs, which is where
        -- the NULL entries in the result below come from.
        unnest(a_list) as a_list,
        unnest(b_list) as b_list,
        unnest(c_list) as c_list,
        unnest(d_list) as d_list,
        (select count(*) from t where type = 30) as type
    from t
    where type = 30
)
select array_agg(distinct a_list), array_agg(distinct b_list),
       array_agg(distinct c_list), array_agg(distinct d_list), type
from cte
group by type;
Result:
"{1,2,3,4,5}";"{2,4,NULL}";"{1,4,NULL}";"{2,4,5}";3

Valid periods - SQL VIEW

I have 2 tables (actually there are 4, but for now let's say there are 2) with data like this:
Table PersonA
+----------+----+----------+-----------+
| ClientID | ID | From     | Till      |
+----------+----+----------+-----------+
| 1        | 10 | 1.1.2017 | 30.4.2017 |
| 1        | 12 | 1.8.2017 | 2.1.2018  |
+----------+----+----------+-----------+
Table PersonB
+----------+----+----------+-----------+
| ClientID | ID | From     | Till      |
+----------+----+----------+-----------+
| 1        | 6  | 1.3.2017 | 30.6.2017 |
+----------+----+----------+-----------+
And I need to generate a view that would show something like this:
+----------+----------+-----------+---------+---------+
| ClientID | From     | Till      | PersonA | PersonB |
+----------+----------+-----------+---------+---------+
| 1        | 1.1.2017 | 28.2.2017 | 10      | NULL    |
| 1        | 1.3.2017 | 30.4.2017 | 10      | 6       |
| 1        | 1.5.2017 | 30.6.2017 | NULL    | 6       |
| 1        | 1.8.2017 | 2.1.2018  | 12      | NULL    |
+----------+----------+-----------+---------+---------+
So basically I need to create a view that shows which "persons" each client had in a given period.
So when there is an overlap, the client has both PersonA and PersonB (the same should apply for PersonC and PersonD).
So in the final view one client can't have any overlapping date ranges.
I don't know how to approach this.
In an adaptation of this algorithm, we can already handle the overlaps:
declare @PersonA table(ClientID int, ID int, [From] date, Till date);
insert into @PersonA values (1,10,'20170101','20170430'),(1,12,'20170801','20180112');
declare @PersonB table(ClientID int, ID int, [From] date, Till date);
insert into @PersonB values (1,6,'20170301','20170630');
declare @PersonC table(ClientID int, ID int, [From] date, Till date);
insert into @PersonC values (1,12,'20170401','20170625');
declare @PersonD table(ClientID int, ID int, [From] date, Till date);
insert into @PersonD values (1,14,'20170501','20170525'),(1,14,'20170510','20171122');
with X(ClientID,EdgeDate)
as (select ClientID
,case
when toggle = 1
then Till
else [From]
end as EdgeDate
from
(
select ClientID,[From],Till from @PersonA
union all
select ClientID,[From],Till from @PersonB
union all
select ClientID,[From],Till from @PersonC
union all
select ClientID,[From],Till from @PersonD
) as concated
cross join
(
select -1 as toggle
union all
select 1 as toggle
) as toggler
),merged
as (select distinct
S.ClientID
,S.EdgeDate as [From]
,min(E.EdgeDate) as Till
from
X as S
inner join X as E
on S.ClientID = E.ClientID
and S.EdgeDate < E.EdgeDate
group by S.ClientID
,S.EdgeDate
),prds
as (select distinct
merged.ClientID
,merged.[From]
,merged.Till
,A.ID as PersonA
,B.ID as PersonB
,C.ID as PersonC
,D.ID as PersonD
from
merged
left join @PersonA as A
on merged.ClientID = A.ClientID
and A.[From] <= merged.[From]
and merged.Till <= A.Till
left join @PersonB as B
on merged.ClientID = B.ClientID
and B.[From] <= merged.[From]
and merged.Till <= B.Till
left join @PersonC as C
on merged.ClientID = C.ClientID
and C.[From] <= merged.[From]
and merged.Till <= C.Till
left join @PersonD as D
on merged.ClientID = D.ClientID
and D.[From] <= merged.[From]
and merged.Till <= D.Till
where not(A.ID is null
and B.ID is null
and C.ID is null
and D.ID is null
)
)
select ClientID
,[From]
,case
when Till = lead([From]
) over(order by Till)
then dateadd(d,-1,Till)
else Till
end as Till
,PersonA
,PersonB
,PersonC
,PersonD
from
prds
order by ClientID
,[From]
,Till;
Output with just the two Person tables given in the question:
+----------+------------+------------+---------+---------+
| ClientID | From | Till | PersonA | PersonB |
+----------+------------+------------+---------+---------+
| 1 | 2017-01-01 | 2017-02-28 | 10 | NULL |
| 1 | 2017-03-01 | 2017-04-29 | 10 | 6 |
| 1 | 2017-04-30 | 2017-06-30 | NULL | 6 |
| 1 | 2017-08-01 | 2018-01-12 | 12 | NULL |
+----------+------------+------------+---------+---------+
Output of script as it is above, with four Person tables:
+----------+------------+------------+---------+---------+---------+---------+
| ClientID | From | Till | PersonA | PersonB | PersonC | PersonD |
+----------+------------+------------+---------+---------+---------+---------+
| 1 | 2017-01-01 | 2017-02-28 | 10 | NULL | NULL | NULL |
| 1 | 2017-03-01 | 2017-03-31 | 10 | 6 | NULL | NULL |
| 1 | 2017-04-01 | 2017-04-29 | 10 | 6 | 12 | NULL |
| 1 | 2017-04-30 | 2017-04-30 | NULL | 6 | 12 | NULL |
| 1 | 2017-05-01 | 2017-05-09 | NULL | 6 | 12 | 14 |
| 1 | 2017-05-10 | 2017-05-24 | NULL | 6 | 12 | 14 |
| 1 | 2017-05-25 | 2017-06-24 | NULL | 6 | 12 | 14 |
| 1 | 2017-06-25 | 2017-06-29 | NULL | 6 | NULL | 14 |
| 1 | 2017-06-30 | 2017-07-31 | NULL | NULL | NULL | 14 |
| 1 | 2017-08-01 | 2017-11-21 | 12 | NULL | NULL | 14 |
| 1 | 2017-11-22 | 2018-01-12 | 12 | NULL | NULL | NULL |
+----------+------------+------------+---------+---------+---------+---------+
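Since the goal is a view (and table variables cannot be referenced from one), here is a sketch of the same interval-splitting idea against assumed permanent tables dbo.PersonA and dbo.PersonB (PersonC and PersonD extend the same pattern). It builds the edge list once and uses LEAD (SQL Server 2012+) instead of the triangular self-join, and it reproduces the inclusive end dates from the question's desired output:
CREATE VIEW dbo.ClientPersonPeriods AS
WITH edges AS (
    -- every interval start, plus the day after every interval end, is a boundary
    SELECT ClientID, [From] AS EdgeDate FROM dbo.PersonA
    UNION SELECT ClientID, DATEADD(day, 1, Till) FROM dbo.PersonA
    UNION SELECT ClientID, [From] FROM dbo.PersonB
    UNION SELECT ClientID, DATEADD(day, 1, Till) FROM dbo.PersonB
), periods AS (
    -- each period runs from one boundary up to the day before the next boundary
    SELECT ClientID,
           EdgeDate AS [From],
           DATEADD(day, -1, LEAD(EdgeDate) OVER (PARTITION BY ClientID ORDER BY EdgeDate)) AS Till
    FROM edges
)
SELECT p.ClientID, p.[From], p.Till, a.ID AS PersonA, b.ID AS PersonB
FROM periods AS p
LEFT JOIN dbo.PersonA AS a
       ON a.ClientID = p.ClientID AND a.[From] <= p.[From] AND p.Till <= a.Till
LEFT JOIN dbo.PersonB AS b
       ON b.ClientID = p.ClientID AND b.[From] <= p.[From] AND p.Till <= b.Till
WHERE p.Till IS NOT NULL
  AND (a.ID IS NOT NULL OR b.ID IS NOT NULL);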

Grouping in T-SQL with latest dates

I have a table like this
Event ID | Contract ID | Event date | Amount |
----------------------------------------------
1 | 1 | 2009-01-01 | 100 |
2 | 1 | 2009-01-02 | 20 |
3 | 1 | 2009-01-03 | 50 |
4 | 2 | 2009-01-01 | 80 |
5 | 2 | 2009-01-04 | 30 |
For each contract I need to fetch the latest event and the amount associated with that event, and get something like this:
Event ID | Contract ID | Event date | Amount |
----------------------------------------------
3 | 1 | 2009-01-03 | 50 |
5 | 2 | 2009-01-04 | 30 |
I can't figure out how to group the data correctly. Any ideas?
Thanks in advance.
SQL 2k5/2k8:
with cte_ranked as (
select *
, row_number() over (
partition by ContractId order by EventDate desc) as [rank]
from [table])
select *
from cte_ranked
where [rank] = 1;
SQL 2k:
select t.*
from [table] as t
join (
select max(EventDate) as MaxDate
, ContractId
from [table]
group by ContractId) as mt
on t.ContractId = mt.ContractId
and t.EventDate = mt.MaxDate
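For what it's worth, on SQL Server 2005 and later there is also a compact variant of the same idea, ordering by the row number directly (same [table] as above):
select top (1) with ties *
from [table]
order by row_number() over (partition by ContractId order by EventDate desc);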