I'm trying to recreate a simple SQL query in DAX. The output query needs to work in Power BI Report Builder, and I have spent all day reading all sorts of Power BI / DAX online resources trying to rewrite it.
A little bit about the data:
The data is structured in three tables, CustomCar, Engine and Chassis.
Basically "CarId" is the key that connects all three tables.
Let's assume all tables have more than 20 columns, so only a few of the columns are needed in the final output.
All three tables (CustomCar, Chassis and Engine) have an IsActive column. The relationship of Engine and Chassis to CustomCar is many-to-one: an engine might blow up and be replaced, so we want to track which engine is on the car today and which engine was on it last year. At any given time, however, there is only one active engine for each car; the same goes for Chassis.
Both Engine and Chassis have 'Manufacturer' and 'Model' columns, so in the output query they need to be distinguished from each other.
I am not trying to sum any sort of sales number; I just want a list of cars with their current configuration.
Any help is appreciated.
Select
CC.Name, CC.Model as 'CustomCarModel', CC.MaxSpeed,
Ch.Manufacturer as 'ChassisManufacturer', Ch.Model as 'ChassisModel', Ch.ManufacturedDate as 'ChassisManfDate',
E.Manufacturer as 'EngineManufacturer', E.Model as 'EngineModel', E.Power, E.CylCount, E.ManufacturedDate
From CustomCars CC
Join Chassis Ch on Ch.CarID = CC.CarId
Join Engine E on E.CarID = CC.CarID
where
CC.IsActive = 1 and CC.FirstTestDriveYear < 1980 and
Ch.IsActive = 1 and
E.IsActive = 1
For more info, here are my tables.
CustomCar:
CarId (Primary Key) | Model | MaxSpeed | NumOfPax | TankCapacity | IsActive | FirstTestDriveYear |....
1 | SuperChev | 220 | 2 | 60 | 1 | 1985 |
2 | CustomBranco | 185 | 2 | 90 | 1 | 1979 |
3 | RebuiltToyo | 251 | 4 | 20 | 0 | 1990 |
Chassis:
ChassisId (Primary Key) | CarId (Foreign Key)| IsActive | Manufacturer | Model | ManufacturedDate | ...
1 | 1 | 0 | ACME Chassis | M1 | '04-Jan-1985' | ...
2 | 1 | 1 | SuperChassis | T5 | '03-Feb-1987' | ...
3 | 2 | 0 | Ford | S2 | '25-Mar-1965' | ...
4 | 2 | 0 | Ford | S2 | '25-Mar-1968' | ...
5 | 3 | 0 | JapanChass | X123 | '25-Feb-1988' | ...
6 | 2 | 1 | Ford | S8 | '08-Jul-1978' | ...
7 | 2 | 0 | Ford | S2 | '25-Mar-1968' | ...
8 | 3 | 1 | JapanChass | Y765 | '25-Feb-1992' | ...
Engine:
EngineId (Primary Key) | CarId (Foreign Key)| IsActive | Manufacturer | Model | ManufacturedDate | Power | CylCount | ...
1 | 1 | 0 | GM | AB1 | '04-Jan-1985' | 320 | 8 | ...
2 | 1 | 1 | Bently | ZY2 | '03-Feb-1987' | 285 | 8 | ...
3 | 2 | 0 | Ford | S2 | '25-Mar-1965' | 290 | 6 | ...
4 | 2 | 0 | Ford | S2 | '25-Mar-1968' | 292 | 6 | ...
5 | 3 | 0 | Toyota | X123 | '25-Feb-1988' | 180 | 4 | ...
6 | 2 | 1 | Ford | S8 | '08-Jul-1978' | 222 | 8 | ...
7 | 2 | 0 | Ford | S2 | '25-Mar-1968' | 320 | 8 | ...
8 | 3 | 1 | Toyota | Y765 | '25-Feb-1992' | 211 | 6 | ...
I have found a workaround for this: I added the query when setting up the data source in the Power BI dashboard and will use the values from that query as-is.
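Even with that workaround in place, here is roughly what the equivalent DAX query could look like. This is only a sketch, untested against your model: it assumes the model has active many-to-one relationships from Chassis and Engine to CustomCar on CarId, and that at most one chassis and one engine per car have IsActive = 1 (otherwise SELECTEDVALUE returns blank):

EVALUATE
SELECTCOLUMNS (
    FILTER (
        CustomCar,
        CustomCar[IsActive] = 1 && CustomCar[FirstTestDriveYear] < 1980
    ),
    "Name", CustomCar[Name],
    "CustomCarModel", CustomCar[Model],
    "MaxSpeed", CustomCar[MaxSpeed],
    "ChassisManufacturer", CALCULATE ( SELECTEDVALUE ( Chassis[Manufacturer] ), Chassis[IsActive] = 1 ),
    "ChassisModel", CALCULATE ( SELECTEDVALUE ( Chassis[Model] ), Chassis[IsActive] = 1 ),
    "ChassisManfDate", CALCULATE ( SELECTEDVALUE ( Chassis[ManufacturedDate] ), Chassis[IsActive] = 1 ),
    "EngineManufacturer", CALCULATE ( SELECTEDVALUE ( Engine[Manufacturer] ), Engine[IsActive] = 1 ),
    "EngineModel", CALCULATE ( SELECTEDVALUE ( Engine[Model] ), Engine[IsActive] = 1 ),
    "Power", CALCULATE ( SELECTEDVALUE ( Engine[Power] ), Engine[IsActive] = 1 ),
    "CylCount", CALCULATE ( SELECTEDVALUE ( Engine[CylCount] ), Engine[IsActive] = 1 ),
    "EngineManfDate", CALCULATE ( SELECTEDVALUE ( Engine[ManufacturedDate] ), Engine[IsActive] = 1 )
)

The CALCULATE wrapper triggers context transition, so each column expression is evaluated against just the chassis/engine rows related to the current car, narrowed further to the active one.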
Suppose a spreadsheet like the following in an Org table:
|------------+-------+------------+--------+--------+------------|
| Date | Items | Unit Price | Amount | Amount | Categories |
|------------+-------+------------+--------+--------+------------|
| 2019/09/17 | A | 2.64 | 1 | 2.64 | materials |
| | B | 52.67 | 2 | 105.34 | diagnosis |
| | C | 3.08 | 1 | 3.08 | materials |
| | D | 3.85 | 2 | 7.7 | materials |
| | E | 33.66 | 2 | 67.32 | materials |
| | F | 40 | 1 | 40 | treatments |
| | G | 16.5 | 1 | 16.5 | materials |
| | H | 4 | 3 | 12 | treatments |
| | I | 40 | 1 | 40 | bed |
| | M | 6 | 13 | 78 | treatments |
|------------+-------+------------+--------+--------+------------|
#+TBLFM: $5=$3*$4
How could I copy the date 2019/09/17 down to the bottom of the Date column?
The link that @manandearth posted in the comments describes how to duplicate (perhaps with slight modifications) the entries in a column. Briefly, pressing S-RET in an empty cell duplicates the contents of the cell above it (if that is not empty); if the current cell is full and the cell below is empty, it duplicates the full cell into the empty one. If the contents are numeric, then the "duplication" involves a slight modification: it increases the value by 1. The same happens with a date: it advances the date to the next day, but the date has to be in a format that Org mode recognizes, either an active date <YYYY-MM-DD> or an inactive date [YYYY-MM-DD]. The increment by default is 1 in these cases, but it can be changed by setting the variable org-table-copy-increment to a different value. That's the "interactive" case I mention in my comment.
The other way to fill a column in a table is by using a formula. For example here's a formula to fill the first column with a copy of the first entry in the column:
#+TBLFM: #3$1..#>$1 = #2$1
This says: Set all rows from row 3 (#3) to the last row (#>) of column 1 ($1) to the value of the cell in row 2 (#2), column 1 ($1). Note that row 1 is the header. Press C-c C-c on the table formula line above and ... wait, what happened?
|------------+-------+------------+--------+--------+------------|
| Date | Items | Unit Price | Amount | Amount | Categories |
|------------+-------+------------+--------+--------+------------|
| 2019/09/17 | A | 2.64 | 1 | 2.64 | materials |
| 13.196078 | B | 52.67 | 2 | 105.34 | diagnosis |
| 13.196078 | C | 3.08 | 1 | 3.08 | materials |
| 13.196078 | D | 3.85 | 2 | 7.7 | materials |
| 13.196078 | E | 33.66 | 2 | 67.32 | materials |
| 13.196078 | F | 40 | 1 | 40 | treatments |
| 13.196078 | G | 16.5 | 1 | 16.5 | materials |
| 13.196078 | H | 4 | 3 | 12 | treatments |
| 13.196078 | I | 40 | 1 | 40 | bed |
| 13.196078 | M | 6 | 13 | 78 | treatments |
|------------+-------+------------+--------+--------+------------|
#+TBLFM: #3$1..#>$1 = #2$1
It does not quite work in this case for a technical reason: Org mode uses Calc in table formula calculations and Calc looks at 2019/09/17 and says: "Aha, I have to divide 2019 by 9 and then divide the result by 17", and fills the rest of the column with the result of the divisions: 13.196078. You may have meant 2019/09/17 to be a date, but Org mode does not know that: it gives it to Calc which interprets it as an arithmetic expression. The solution here is the same as in the linked answer: make Org mode aware that it's a date by making it either an active date: <2019-09-17> or an inactive date: [2019-09-17]:
|------------------+-------+------------+--------+--------+------------|
| Date | Items | Unit Price | Amount | Amount | Categories |
|------------------+-------+------------+--------+--------+------------|
| [2019-09-17] | A | 2.64 | 1 | 2.64 | materials |
| [2019-09-17 Tue] | B | 52.67 | 2 | 105.34 | diagnosis |
| [2019-09-17 Tue] | C | 3.08 | 1 | 3.08 | materials |
| [2019-09-17 Tue] | D | 3.85 | 2 | 7.7 | materials |
| [2019-09-17 Tue] | E | 33.66 | 2 | 67.32 | materials |
| [2019-09-17 Tue] | F | 40 | 1 | 40 | treatments |
| [2019-09-17 Tue] | G | 16.5 | 1 | 16.5 | materials |
| [2019-09-17 Tue] | H | 4 | 3 | 12 | treatments |
| [2019-09-17 Tue] | I | 40 | 1 | 40 | bed |
| [2019-09-17 Tue] | M | 6 | 13 | 78 | treatments |
|------------------+-------+------------+--------+--------+------------|
#+TBLFM: #3$1..#>$1 = #2$1
This does not do automatic incrementation but if that's what you want, it's easy to accomplish: Calc can do calculations on dates, so we can increment daily by adding to the date in each row the row number minus 2 (e.g. row 3 would get an increment of 3 - 2 = 1, row 4 would get 4 - 2 = 2, etc). To accomplish this, you have to get the row number of the current row: the idiom is ##. Then the formula becomes:
#+TBLFM: #3$1..#>$1 = #2$1 + ## - 2
and the table becomes:
|------------------+-------+------------+--------+--------+------------|
| Date | Items | Unit Price | Amount | Amount | Categories |
|------------------+-------+------------+--------+--------+------------|
| [2019-09-17] | A | 2.64 | 1 | 2.64 | materials |
| [2019-09-18 Wed] | B | 52.67 | 2 | 105.34 | diagnosis |
| [2019-09-19 Thu] | C | 3.08 | 1 | 3.08 | materials |
| [2019-09-20 Fri] | D | 3.85 | 2 | 7.7 | materials |
| [2019-09-21 Sat] | E | 33.66 | 2 | 67.32 | materials |
| [2019-09-22 Sun] | F | 40 | 1 | 40 | treatments |
| [2019-09-23 Mon] | G | 16.5 | 1 | 16.5 | materials |
| [2019-09-24 Tue] | H | 4 | 3 | 12 | treatments |
| [2019-09-25 Wed] | I | 40 | 1 | 40 | bed |
| [2019-09-26 Thu] | M | 6 | 13 | 78 | treatments |
|------------------+-------+------------+--------+--------+------------|
#+TBLFM: #3$1..#>$1 = #2$1 + ## - 2
The various anomalies of the display of dates (do we include the day of the week? do we include the time?) might be worked around using org-time-stamp-custom-formats, but that gets us into waters that I have not explored.
I am a novice teaching myself Microsoft Access.
I have an MS Access database with a table of students (Table1).
Table1
+----+-----------+----------+------------+------------+
| id | firstname | lastname | Year_Group | Form_Group |
+----+-----------+----------+------------+------------+
| 2 | mnb | nbgfv | 7 | 1 |
| 3 | jhg | uhgf | 8 | 2 |
| 4 | poi | ijuy | 9 | 2 |
| 5 | tgf | tgfd | 10 | 2 |
| 6 | wer | qwes | 11 | 2 |
+----+-----------+----------+------------+------------+
Every day, each student's day is recorded, sort of like in Table2.
Table2
+----------+----+-----------+----------+------------+--------+-----------+----------+
| Date | id | firstname | lastname | Year_Group | Effort | Behaviour | Homework |
+----------+----+-----------+----------+------------+--------+-----------+----------+
| 28/02/19 | 2 | mnb | nbgfv | 7 | Good | Good | Y |
| 28/02/19 | 3 | jhg | uhgf | 8 | OK | OK | Y |
| 28/02/19 | 4 | poi | ijuy | 9 | Bad | Bad | N |
| 01/03/19 | 5 | tgf | tgfd | 10 | Good | OK | Y |
| 01/03/19 | 6 | wer | qwes | 11 | Good | Good | Y |
+----------+----+-----------+----------+------------+--------+-----------+----------+
Is there a way (when using a list box or combo box) to select a student from Table1 so that their information is used for the corresponding columns in Table2?
Or is there a more efficient way to do this?
Firstly, you should normalise your data.
Currently, you are repeating the firstname, lastname, and Year_Group data in two separate tables, which not only bloats your database, but also means that such data must be maintained in two separate places, potentially leading to inconsistencies and then uncertainty as to which is the master.
Instead, I would suggest that your Students table should contain all information pertaining to the characteristics of a student:
Students
+----+-----------+----------+------------+------------+
| id | firstname | lastname | Year_Group | Form_Group |
+----+-----------+----------+------------+------------+
| 2 | mnb | nbgfv | 7 | 1 |
| 3 | jhg | uhgf | 8 | 2 |
| 4 | poi | ijuy | 9 | 2 |
| 5 | tgf | tgfd | 10 | 2 |
| 6 | wer | qwes | 11 | 2 |
+----+-----------+----------+------------+------------+
And the information pertaining to each school day should only reference the student ID in the Students table:
SchoolDays
+----------+----+--------+-----------+----------+
| Date | id | Effort | Behaviour | Homework |
+----------+----+--------+-----------+----------+
| 28/02/19 | 2 | Good | Good | Y |
| 28/02/19 | 3 | OK | OK | Y |
| 28/02/19 | 4 | Bad | Bad | N |
| 01/03/19 | 5 | Good | OK | Y |
| 01/03/19 | 6 | Good | Good | Y |
+----------+----+--------+-----------+----------+
Then, if you want to display the data in its entirety, you would use a query which joins the two tables, e.g.:
select
t2.date,
t1.firstname,
t1.lastname,
t1.year_group,
t2.effort,
t2.behaviour,
t2.homework
from
students t1 inner join schooldays t2 on t1.id = t2.id
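As for the list box / combo box part of your question: with the normalised design, a combo box on your data-entry form (bound to the id column of SchoolDays) can do the lookup for you. A minimal sketch of its Row Source, assuming the table and field names above; set Bound Column to 1 and the first part of Column Widths to 0 so the user picks a name but the id is what gets stored:

select id, lastname & ", " & firstname as studentname
from students
order by lastname, firstname

This way nothing about the student is ever copied into SchoolDays; the name is only displayed, and the join query above recovers it whenever you need the full picture.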
I am having a hard time wrapping my head around the pivot/unpivot concepts, and I'm hoping someone can help or give me some guidance on how to approach my problem.
Here is a simplified sample table I have
+-------+------+------+------+------+------+
| SAUID | COM1 | COM2 | COM3 | COM4 | COM5 |
+-------+------+------+------+------+------+
| 1 | 24 | 22 | 100 | 0 | 45 |
| 2 | 34 | 55 | 789 | 23 | 0 |
| 3 | 33 | 99 | 5552 | 35 | 4675 |
+-------+------+------+------+------+------+
The end result I am looking for is a table similar to the one below:
+-------+-----------+-------+
| SAUID | OCCUPANCY | VALUE |
+-------+-----------+-------+
| 1 | COM1 | 24 |
| 1 | COM2 | 22 |
| 1 | COM3 | 100 |
| 1 | COM4 | 0 |
| 1 | COM5 | 45 |
| 2 | COM1 | 34 |
| 2 | COM2 | 55 |
| 2 | COM3 | 789 |
| 2 | COM4 | 23 |
| 2 | COM5 | 0 |
| 3 | COM1 | 33 |
| 3 | COM2 | 99 |
| 3 | COM3 | 5552 |
| 3 | COM4 | 35 |
| 3 | COM5 | 4675 |
+-------+-----------+-------+
I'm looking around, but most of the examples seem to use pivot, and I'm having a hard time applying that to my case since I need the values all in one column.
I'm hoping to experiment with some hardcoding to get familiar with my example, but my actual tables have ~100 columns with varying numbers of SAUID rows per table, so it looks like this will require dynamic SQL?
Thanks for the help in advance.
Use UNPIVOT:
SELECT u.SAUID, u.OCCUPANCY, u.VALUE
FROM yourTable t
UNPIVOT
(
VALUE FOR OCCUPANCY IN (COM1, COM2, COM3, COM4, COM5)
) u
ORDER BY
u.SAUID, u.OCCUPANCY;
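Since you mention ~100 columns: the only part that changes is the IN (...) list, so the dynamic version just builds that list from the catalog. Here is a sketch assuming SQL Server 2017+ (for STRING_AGG) and a table named dbo.yourTable whose value columns all start with COM; both names are placeholders for your real ones. On older versions, build the list with FOR XML PATH instead.

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- build the (COM1, COM2, ...) list from the catalog, in column order;
-- the CONVERT keeps the aggregate in nvarchar(max) territory
SELECT @cols = STRING_AGG(CONVERT(nvarchar(max), QUOTENAME(c.name)), ', ')
               WITHIN GROUP (ORDER BY c.column_id)
FROM sys.columns c
WHERE c.object_id = OBJECT_ID(N'dbo.yourTable')
  AND c.name LIKE N'COM%';

SET @sql = N'
SELECT u.SAUID, u.OCCUPANCY, u.VALUE
FROM dbo.yourTable t
UNPIVOT (VALUE FOR OCCUPANCY IN (' + @cols + N')) u
ORDER BY u.SAUID, u.OCCUPANCY;';

EXEC sys.sp_executesql @sql;

One caveat either way: UNPIVOT silently drops rows whose value is NULL, so if you need to keep those rows, look at the CROSS APPLY (VALUES ...) pattern instead.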
My Situation
I have some tables in my Redshift cluster that all break down into either an order_id, shipment_id, or shipment_item_id depending on how granular the table is. order_id is one-to-many on shipment_id, and shipment_id is one-to-many on shipment_item_id.
My Question
I distribute on order_id, so all shipment_id and shipment_item_id records should be on the same nodes across the tables, since they are grouped by order_id. My question is: when I have to join on shipment_id or shipment_item_id, will Redshift know that the records are on the same nodes, or will it still broadcast the tables since they aren't joined on order_id?
Example Tables
unified_order shipment_details
+----------+-------------+------------------+ +-------------+-----------+--------------+
| order_id | shipment_id | shipment_item_id | | shipment_id | ship_day | ship_details |
+----------+-------------+------------------+ +-------------+-----------+--------------+
| 1 | 1 | 1 | | 1 | 1/1/2017 | stuff |
| 1 | 1 | 2 | | 2 | 5/1/2017 | other stuff |
| 1 | 1 | 3 | | 3 | 6/14/2017 | more stuff |
| 1 | 2 | 4 | | 4 | 5/13/2017 | less stuff |
| 1 | 2 | 5 | | 5 | 6/19/2017 | that stuff |
| 1 | 3 | 6 | | 6 | 7/31/2017 | what stuff |
| 2 | 4 | 7 | | 7 | 2/5/2017 | things |
| 2 | 4 | 8 | +-------------+-----------+--------------+
| 3 | 5 | 9 |
| 3 | 5 | 10 |
| 4 | 6 | 11 |
| 5 | 7 | 12 |
| 5 | 7 | 13 |
+----------+-------------+------------------+
Distribution
distribution_by_node
+------+----------+-------------+------------------+
| node | order_id | shipment_id | shipment_item_id |
+------+----------+-------------+------------------+
| 1 | 1 | 1 | 1 |
| 1 | 1 | 1 | 2 |
| 1 | 1 | 1 | 3 |
| 1 | 1 | 2 | 4 |
| 1 | 1 | 2 | 5 |
| 1 | 1 | 3 | 6 |
| 1 | 5 | 7 | 12 |
| 1 | 5 | 7 | 13 |
| 2 | 2 | 4 | 7 |
| 2 | 2 | 4 | 8 |
| 3 | 3 | 5 | 9 |
| 3 | 3 | 5 | 10 |
| 4 | 4 | 6 | 11 |
+------+----------+-------------+------------------+
The Amazon Redshift documentation does not go into detail about how information is shared between nodes, but it is doubtful that it simply "broadcasts the tables".
Rather, information is probably sent between nodes based on need: only the relevant columns would be shared, and possibly only sub-ranges of the data.
Rather than worrying too much about the internal implementation, you should test various DISTKEY and SORTKEY strategies against real queries to determine performance.
Follow the recommendations from Choose the Best Distribution Style to minimize the amount of data that needs to be sent between nodes and consult Amazon Redshift Best Practices for Designing Queries to improve queries.
You can EXPLAIN your query to see how data will be distributed (or not) during execution. This doc shows how to read the query plan:
Evaluating the Query Plan
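For example, a quick check using the sample tables above (just a sketch; the table and column names are the ones from your example):

EXPLAIN
SELECT s.ship_day, u.order_id, u.shipment_item_id
FROM unified_order u
JOIN shipment_details s ON s.shipment_id = u.shipment_id;

In the resulting plan, the label on the join step tells you what happened: DS_DIST_NONE means no redistribution was needed, while labels like DS_BCAST_INNER or DS_DIST_BOTH mean rows are being broadcast or redistributed between nodes for that join.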
I have two subqueries. Here is the output of subquery A....
id | date_lat_lng | stat_total | rnum
-------+--------------------+------------+------
16820 | 2016_10_05_10_3802 | 9 | 2
15701 | 2016_10_05_10_3802 | 9 | 3
16821 | 2016_10_05_11_3802 | 16 | 2
17861 | 2016_10_05_11_3802 | 16 | 3
16840 | 2016_10_05_12_3683 | 42 | 2
17831 | 2016_10_05_12_3767 | 0 | 2
17862 | 2016_10_05_12_3802 | 11 | 2
17888 | 2016_10_05_13_3683 | 35 | 2
17833 | 2016_10_05_13_3767 | 24 | 2
16823 | 2016_10_05_13_3802 | 24 | 2
and here is the output of subquery B, in which date_lat_lng and stat_total have commonality with subquery A, but id does not:
id | date_lat_lng | stat_total | rnum
-------+--------------------+------------+------
17860 | 2016_10_05_10_3802 | 9 | 1
15702 | 2016_10_05_11_3802 | 16 | 1
17887 | 2016_10_05_12_3683 | 42 | 1
15630 | 2016_10_05_12_3767 | 20 | 1
16822 | 2016_10_05_12_3802 | 20 | 1
16841 | 2016_10_05_13_3683 | 35 | 1
15632 | 2016_10_05_13_3767 | 23 | 1
17863 | 2016_10_05_13_3802 | 3 | 1
16842 | 2016_10_05_14_3683 | 32 | 1
15633 | 2016_10_05_14_3767 | 12 | 1
Both subquery A and subquery B pull data from the same table. I want to delete the rows in that table that share an id with subquery A, but only where date_lat_lng and stat_total also have a match in subquery B.
Effectively I need:
DELETE FROM table WHERE
id IN
(SELECT id FROM (subqueryA) WHERE
subqueryA.date_lat_lng=subqueryB.date_lat_lng
AND subqueryA.stat_total=subqueryB.stat_total)
Except I'm not sure where to place subquery B, or if I need an entirely different structure.
Something like this; note that the join has to be on the two shared columns only, since the ids in subquery A and subquery B never match:
DELETE FROM table WHERE
id IN (
SELECT DISTINCT a.id
FROM (subqueryA) a
JOIN (subqueryB) b
USING (date_lat_lng, stat_total)
)
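If it helps to see the whole statement assembled, here is a sketch assuming PostgreSQL, with a hypothetical ranked CTE standing in for however subqueries A and B are really produced; the table name some_table and the ROW_NUMBER window are placeholders, not your real code:

WITH ranked AS (
    SELECT id, date_lat_lng, stat_total,
           ROW_NUMBER() OVER (PARTITION BY date_lat_lng
                              ORDER BY id) AS rnum  -- placeholder ordering
    FROM some_table
),
a AS (SELECT * FROM ranked WHERE rnum >= 2),  -- stands in for subquery A
b AS (SELECT * FROM ranked WHERE rnum = 1)    -- stands in for subquery B
DELETE FROM some_table
WHERE id IN (
    SELECT a.id
    FROM a
    JOIN b USING (date_lat_lng, stat_total)
);

Joining on date_lat_lng and stat_total alone is the important part: the ids being deleted come from subquery A, and a row goes away only when its (date_lat_lng, stat_total) pair also appears in subquery B.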