Select inside select and minimizing running time in PostgreSQL

I am trying to calculate the total booking number and the percentage for each hotel per year using PostgreSQL. Here is my code:
WITH distribution_per_year AS
(
    SELECT hotel, arrival_date_year,
           COUNT(*) AS booking_by_hotel,
           -- uncorrelated scalar subquery: the grand total over the whole table
           (SELECT COUNT(*) AS total_booking FROM "Full_Data")
    FROM "Full_Data"
    GROUP BY hotel, arrival_date_year
)
SELECT hotel, arrival_date_year, booking_by_hotel, total_booking,
       round(booking_by_hotel * 100.00 / total_booking, 2) AS percent
FROM distribution_per_year
It works and gives me the results I wanted:
| hotel | arrival_date_year | booking_by_hotel | total_booking | percent |
|:------ |:----------------- |:---------------- |:------------- |:--------|
| Hotel1 | 2015 | 6526 | 100561 | 6.49 |
| Hotel1 | 2016 | 33210 | 100561 | 33.02 |
| Hotel1 | 2017 | 20064 | 100561 | 19.95 |
| Hotel2 | 2015 | 6758 | 100561 | 6.72 |
| Hotel2 | 2016 | 22434 | 100561 | 22.31 |
| Hotel2 | 2017 | 11569 | 100561 | 11.50 |
My question is: I noticed this code takes time to run. I think it's because the subquery is re-evaluated for every group:
(SELECT COUNT(*) AS total_booking FROM "Full_Data" )
Is there a way to improve this code?
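One way to compute the grand total without a subquery (a sketch, not from the original thread, assuming the same "Full_Data" table) is a window function over the grouped counts: SUM(COUNT(*)) OVER () sums the per-group counts across all groups, so the table is scanned only once:
SELECT hotel, arrival_date_year,
       COUNT(*) AS booking_by_hotel,
       SUM(COUNT(*)) OVER () AS total_booking,
       round(COUNT(*) * 100.00 / SUM(COUNT(*)) OVER (), 2) AS percent
FROM "Full_Data"
GROUP BY hotel, arrival_date_year
ORDER BY hotel, arrival_date_year;
That said, PostgreSQL normally plans an uncorrelated scalar subquery as an InitPlan that runs only once, not once per group, so it is worth checking EXPLAIN ANALYZE before assuming the subquery is the bottleneck.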

Related

Create Calculated Pivot from Several Query Results in PostgreSQL

I have a question regarding how to make a calculated pivot table from several query results in PostgreSQL. I've managed to produce three query results but don't have any idea how to combine and calculate all the data into a single table. I've tried to google it but found that most of the questions are about how to make a pivot table from a single table, which I'm able to do using sum, case, and group by. Well, here's the simplified version of my query results.
Result from query 1, which contains gross values:
| city | code | gross |
|-------|------|--------|
| city1 | 21 | 194793 |
| city1 | 25 | 139241 |
| city1 | 28 | 231365 |
| city2 | 21 | 282025 |
| city2 | 25 | 334458 |
| city2 | 28 | 410852 |
| city3 | 21 | 109237 |
Result from query 2, which contains positive adjustments:
| city | code | adj_pos |
|-------|------|---------|
| city1 | 21 | 16259 |
| city1 | 25 | 13634 |
| city1 | 28 | 45854 |
| city2 | 25 | 18060 |
| city2 | 28 | 18220 |
Result from query 3, which contains negative adjustments:
| city | code | adj_neg |
|-------|------|---------|
| city1 | 25 | 23364 |
| city2 | 21 | 27478 |
| city2 | 25 | 23474 |
And what I want to do is create something like this:
| city | 21_gross | 25_gross | 28_gross | 21_pos | 25_pos | 28_pos | 21_neg | 25_neg | 28_neg |
|-------|----------|----------|----------|--------|--------|--------|--------|--------|--------|
| city1 | 194793 | 139241 | 231365 | 16259 | 13634 | 45854 | | 23364 | |
| city2 | 282025 | 334458 | 410852 | | 18060 | 18220 | 27478 | 23474 | |
| city3 | 109237 | | | | | | | | |
or perhaps a final calculation, which comes from gross + positive adjustment - negative adjustment for each city and each code, like this:
| city | 21_nett | 25_nett | 28_nett |
|-------|---------|---------|---------|
| city1 | 211052 | 129511 | 277219 |
| city2 | 254547 | 329044 | 429072 |
| city3 | 109237 | 0 | 0 |
Any suggestions will be appreciated. Thank you!
I think the best you can achieve is to get the pivoting part as JSON - http://sqlfiddle.com/#!17/b7d64/23:
select
    city,
    json_object_agg(
        code,
        coalesce(gross,0) + coalesce(adj_pos,0) - coalesce(adj_neg,0)
    ) as js
from q1
left join q2 using (city, code)
left join q3 using (city, code)
group by city
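If hard-coding the fixed set of codes (21, 25, 28) is acceptable, the JSON from the query above can be unpacked back into the desired nett columns; a sketch building on that query (the combined CTE name is mine):
with combined as (
    select
        city,
        json_object_agg(
            code,
            coalesce(gross,0) + coalesce(adj_pos,0) - coalesce(adj_neg,0)
        ) as js
    from q1
    left join q2 using (city, code)
    left join q3 using (city, code)
    group by city
)
select
    city,
    -- a missing key yields NULL, which coalesce turns into 0
    coalesce((js ->> '21')::numeric, 0) as "21_nett",
    coalesce((js ->> '25')::numeric, 0) as "25_nett",
    coalesce((js ->> '28')::numeric, 0) as "28_nett"
from combined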

SQL server left join not returning expected records from left table

I have two objects within a SQL Server 2008 R2 database which I am trying to join with a left join, but I am unable to get the left join to return all records from the left table.
1 table - tt_activityoccurrence
1 view - vw_academicweeks
vw_academicweeks is a view that contains, for each academic year, a week number together with the first and last day of that week; it holds 52 records per academic year.
tt_activityoccurrence is a table that contains occurrences of lessons within a year; lessons will not occur in all 52 weeks of the year.
With my query I am trying to return all rows from the vw_academicweeks view, to produce the following information:
+------------+------------+------------+------------+---------+
| ActivityID | WeekStart | StartTime | EndTime | week_no |
+------------+------------+------------+------------+---------+
| 59936 | 04/09/2017 | 05/09/2017 | 05/09/2017 | 6 |
| 59936 | 11/09/2017 | 12/09/2017 | 12/09/2017 | 7 |
| 59936 | 18/09/2017 | 19/09/2017 | 19/09/2017 | 8 |
| 59936 | 25/09/2017 | 26/09/2017 | 26/09/2017 | 9 |
| 59936 | 02/10/2017 | 03/10/2017 | 03/10/2017 | 10 |
| 59936 | 09/10/2017 | 10/10/2017 | 10/10/2017 | 11 |
| 59936 | 16/10/2017 | 17/10/2017 | 17/10/2017 | 12 |
| 59936 | Null | Null | Null | 13 |
| 59936 | 30/10/2017 | 31/10/2017 | 31/10/2017 | 14 |
| 59936 | 06/11/2017 | 07/11/2017 | 07/11/2017 | 15 |
| 59936 | 13/11/2017 | 14/11/2017 | 14/11/2017 | 16 |
| 59936 | 20/11/2017 | 21/11/2017 | 21/11/2017 | 17 |
| 59936 | 27/11/2017 | 28/11/2017 | 28/11/2017 | 18 |
| 59936 | 04/12/2017 | 05/12/2017 | 05/12/2017 | 19 |
| 59936 | 11/12/2017 | 12/12/2017 | 12/12/2017 | 20 |
| 59936 | 18/12/2017 | 19/12/2017 | 19/12/2017 | 21 |
| 59936 | Null | Null | Null | 22 |
| 59936 | Null | Null | Null | 23 |
+------------+------------+------------+------------+---------+
With the left join I can return all values except the nulls, so the week_no column is missing rows 13, 22 and 23. I have also tried this with an outer join but get the same result.
I feel I am missing something obvious but it is escaping me at the moment.
select
    ttao.ActivityID
    ,dateadd(dd, datediff(dd, 0, DATEADD(dd, -(DATEPART(dw, ttao.StartTime) - 1), ttao.StartTime)), 0) WeekStart
    ,ttao.StartTime
    ,ttao.EndTime
    ,aw.week_no
from
    vw_AcademicWeeks AW
    left join TT_ActivityOccurrence TTAO
        on (dateadd(dd, datediff(dd, 0, DATEADD(dd, -(DATEPART(dw, ttao.StartTime) - 1), ttao.StartTime)), 0)) = aw.ay_start
where
    ay_code = '1718'
    and TTAO.ActivityID = '59936'
order by aw.week_no asc
Your where clause makes it an inner join: filtering on TTAO.ActivityID eliminates exactly the rows where the right table's columns came back null. You need to move that predicate up into your join condition; the filter on the view's own ay_code column can stay in the where clause. Note, I didn't validate your join condition (the dateadd...datediff logic).
select
    ttao.ActivityID
    ,dateadd(dd, datediff(dd, 0, DATEADD(dd, -(DATEPART(dw, ttao.StartTime) - 1), ttao.StartTime)), 0) WeekStart
    ,ttao.StartTime
    ,ttao.EndTime
    ,aw.week_no
from
    vw_AcademicWeeks AW
    left join TT_ActivityOccurrence TTAO
        on (dateadd(dd, datediff(dd, 0, DATEADD(dd, -(DATEPART(dw, ttao.StartTime) - 1), ttao.StartTime)), 0)) = aw.ay_start
        and TTAO.ActivityID = '59936'
where
    ay_code = '1718'
order by aw.week_no asc

PostgreSQL Crosstab generate_series of weeks for columns

From a table of "time entries" I'm trying to create a report of weekly totals for each user.
Sample of the table:
+-----+---------+-------------------------+--------------+
| id | user_id | start_time | hours_worked |
+-----+---------+-------------------------+--------------+
| 997 | 6 | 2018-01-01 03:05:00 UTC | 1.0 |
| 996 | 6 | 2017-12-01 05:05:00 UTC | 1.0 |
| 998 | 6 | 2017-12-01 05:05:00 UTC | 1.5 |
| 999 | 20 | 2017-11-15 19:00:00 UTC | 1.0 |
| 995 | 6 | 2017-11-11 20:47:42 UTC | 0.04 |
+-----+---------+-------------------------+--------------+
Right now I can run the following and basically get what I need:
SELECT COALESCE(SUM(time_entries.hours_worked), 0) AS total,
       time_entries.user_id,
       week::date
-- Using generate_series here to account for weeks with no time entries
-- when doing the join
FROM generate_series( DATE_TRUNC('week', '2017-11-01 00:00:00'::date),
                      DATE_TRUNC('week', '2017-12-31 23:59:59.999999'::date),
                      interval '7 day') AS week
LEFT JOIN time_entries
       ON DATE_TRUNC('week', time_entries.start_time) = week
GROUP BY week, time_entries.user_id
ORDER BY week
This will return
+-------+---------+------------+
| total | user_id | week |
+-------+---------+------------+
| 14.08 | 5 | 2017-10-30 |
| 21.92 | 6 | 2017-10-30 |
| 10.92 | 7 | 2017-10-30 |
| 14.26 | 8 | 2017-10-30 |
| 14.78 | 10 | 2017-10-30 |
| 14.08 | 13 | 2017-10-30 |
| 15.83 | 15 | 2017-10-30 |
| 8.75 | 5 | 2017-11-06 |
| 10.53 | 6 | 2017-11-06 |
| 13.73 | 7 | 2017-11-06 |
| 14.26 | 8 | 2017-11-06 |
| 19.45 | 10 | 2017-11-06 |
| 15.95 | 13 | 2017-11-06 |
| 14.16 | 15 | 2017-11-06 |
| 1.00 | 20 | 2017-11-13 |
| 0 | | 2017-11-20 |
| 2.50 | 6 | 2017-11-27 |
| 0 | | 2017-12-04 |
| 0 | | 2017-12-11 |
| 0 | | 2017-12-18 |
| 0 | | 2017-12-25 |
+-------+---------+------------+
However, this is difficult to parse, particularly when there's no data for a week. What I would like is a pivot or crosstab table where the weeks are the columns and the rows are the users, with the empty cases included (for instance a user with no entries in a week, or a week with no entries from any user).
Something like this
+---------+---------------+--------------+--------------+
| user_id | 2017-10-30 | 2017-11-06 | 2017-11-13 |
+---------+---------------+--------------+--------------+
| 6 | 4.0 | 1.0 | 0 |
| 7 | 4.0 | 1.0 | 0 |
| 8 | 4.0 | 0 | 0 |
| 9 | 0 | 1.0 | 0 |
| 10 | 4.0 | 0.04 | 0 |
+---------+---------------+--------------+--------------+
I've been looking around online and it seems that "dynamically" generating a list of columns for crosstab is difficult. I'd rather not hard-code them, which seems weird to do anyway for dates, or use something like a CASE expression on the week number.
Should I look for another solution besides crosstab? If I could get the series of weeks for each user including all nulls I think that would be good enough. It just seems that right now my join strategy isn't returning that.
Personally I would use a Date Dimension table and use that table as the basis for the query. I find it far easier to use tabular data for these types of calculations, as it leads to SQL that's easier to read and maintain. There's a great article on creating a Date Dimension table in PostgreSQL at https://medium.com/@duffn/creating-a-date-dimension-table-in-postgresql-af3f8e2941ac, though you could get away with a much simpler version of this table.
Ultimately what you would do is use the Date table as the base for the SELECT cols FROM table section and then join against that, or probably use Common Table Expressions, to create the calculations.
I'll write up a solution demonstrating how you could create such a query, if you would like.
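In the meantime, the missing (user, week) combinations can also be produced without a date table by building the full grid first and joining the entries onto it; a sketch, not from the original answer, assuming the time_entries table from the question:
WITH weeks AS (
    -- one row per truncated week in the reporting window
    SELECT d::date AS week
    FROM generate_series(DATE_TRUNC('week', '2017-11-01'::date),
                         DATE_TRUNC('week', '2017-12-31'::date),
                         interval '7 day') AS d
), users AS (
    SELECT DISTINCT user_id FROM time_entries
)
SELECT u.user_id,
       w.week,
       COALESCE(SUM(te.hours_worked), 0) AS total
FROM weeks w
CROSS JOIN users u
LEFT JOIN time_entries te
       ON te.user_id = u.user_id
      AND DATE_TRUNC('week', te.start_time)::date = w.week
GROUP BY u.user_id, w.week
ORDER BY w.week, u.user_id;
Every user then gets a row for every week, zeros included, which makes the final pivot (in crosstab or in the client) straightforward.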

Crosstab function and Dates PostgreSQL

I have to create a crosstab table from a query where dates are turned into column names. These order-date columns can increase or decrease in number depending on the dates passed to the query. The order date is in Unix format, which is converted into a normal date.
The query is as follows:
Select cd.cust_id
     , od.order_id
     , od.order_size
     , (TIMESTAMP 'epoch' + od.order_date * INTERVAL '1 second')::Date As order_date
From consumer_details cd
   , consumer_order od
Where cd.cust_id = od.cust_id
  And od.order_date Between 1469212200 And 1469212600
Order By od.order_id, od.order_date
Table as follows:
cust_id | order_id | order_size | order_date
-----------|----------------|---------------|--------------
210721008 | 0437756 | 4323 | 2016-07-22
210721008 | 0437756 | 4586 | 2016-09-24
210721019 | 10749881 | 0 | 2016-07-28
210721019 | 10749881 | 0 | 2016-07-28
210721033 | 13639 | 2286145 | 2016-09-06
210721033 | 13639 | 2300040 | 2016-10-03
Result will be:
cust_id | order_id | 2016-07-22 | 2016-09-24 | 2016-07-28 | 2016-09-06 | 2016-10-03
-----------|----------------|---------------|---------------|---------------|---------------|---------------
210721008 | 0437756 | 4323 | 4586 | | |
210721019 | 10749881 | | | 0 | |
210721033 | 13639 | | | | 2286145 | 2300040
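No answer is recorded in this thread; as a sketch of one option (mine, not from the thread), conditional aggregation with FILTER (available since PostgreSQL 9.4) produces this pivot when the date columns are hard-coded. Note that crosstab from the tablefunc extension has the same limitation: the output column list must be fixed, so a truly dynamic set of dates requires generating the statement with dynamic SQL.
SELECT cust_id,
       order_id,
       -- one hard-coded column per order date in the requested range
       MAX(order_size) FILTER (WHERE order_date = DATE '2016-07-22') AS "2016-07-22",
       MAX(order_size) FILTER (WHERE order_date = DATE '2016-09-24') AS "2016-09-24",
       MAX(order_size) FILTER (WHERE order_date = DATE '2016-07-28') AS "2016-07-28",
       MAX(order_size) FILTER (WHERE order_date = DATE '2016-09-06') AS "2016-09-06",
       MAX(order_size) FILTER (WHERE order_date = DATE '2016-10-03') AS "2016-10-03"
FROM (
    Select cd.cust_id
         , od.order_id
         , od.order_size
         , (TIMESTAMP 'epoch' + od.order_date * INTERVAL '1 second')::Date As order_date
    From consumer_details cd
       , consumer_order od
    Where cd.cust_id = od.cust_id
      And od.order_date Between 1469212200 And 1469212600
) As t
GROUP BY cust_id, order_id
ORDER BY order_id;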

Sort in MDX query not working

I use SSAS and SQL Server 2008 R2.
I use the AdventureWorksDW dimensional database.
I wrote this query:
Select
    [Measures].[Internet Sales Amount] on columns,
    order(
        [Product].[Product Categories].[Subcategory],
        [Measures].[Internet Sales Amount],
        asc
    ) on rows
From [Adventure Works]
I got a result like this:
I also wrote this query:
Select
    [Measures].[Internet Sales Amount] on columns,
    non empty order(
        crossjoin(
            [Product].[Category].[Category],
            [Product].[Subcategory].[Subcategory]
        ),
        [Measures].[Internet Sales Amount],
        desc
    ) on rows
From [Adventure Works]
And the result is not sorted either:
Why is the result not sorted?
Query (SQL Server 2012):
Select
    [Measures].[Internet Sales Amount] on columns,
    order(
        [Product].[Product Categories].[Subcategory],
        [Measures].[Internet Sales Amount],
        basc
    ) on rows
From [Adventure Works]
I think the problem was that the data is hierarchical, and with basc you sort just by the amount. As the documentation puts it:
The Order function can either be hierarchical (as specified by using
the ASC or DESC flag) or nonhierarchical (as specified by using the
BASC or BDESC flag).
Result (SQL Server 2012):
| GG | INTERNET SALES AMOUNT |
|-------------------|-----------------------|
| Lights | (null) |
| Locks | (null) |
| Panniers | (null) |
| Pumps | (null) |
| Bib-Shorts | (null) |
| Tights | (null) |
| Bottom Brackets | (null) |
| Brakes | (null) |
| Chains | (null) |
| Cranksets | (null) |
| Derailleurs | (null) |
| Forks | (null) |
| Handlebars | (null) |
| Headsets | (null) |
| Mountain Frames | (null) |
| Pedals | (null) |
| Road Frames | (null) |
| Saddles | (null) |
| Touring Frames | (null) |
| Wheels | (null) |
| Socks | $5,106.32 |
| Cleaners | $7,218.60 |
| Caps | $19,688.10 |
| Gloves | $35,020.70 |
| Vests | $35,687.00 |
| Bike Racks | $39,360.00 |
| Bike Stands | $39,591.00 |
| Hydration Packs | $40,307.67 |
| Fenders | $46,619.58 |
| Bottles and Cages | $56,798.19 |
| Shorts | $71,319.81 |
| Jerseys | $172,950.68 |
| Helmets | $225,335.60 |
| Tires and Tubes | $245,529.32 |
| Touring Bikes | $3,844,801.05 |
| Mountain Bikes | $9,952,759.56 |
| Road Bikes | $14,520,584.04 |