I have a table for player stats like so:
player_id | game_id | rec | rec_yds | td | pas_att | pas_yds | ...
--------------------------------------------------------
1 | 3 | 1 | 5 | 0 | 3 | 20 |
2 | 3 | 0 | 8 | 1 | 7 | 20 |
3 | 3 | 3 | 9 | 0 | 0 | 0 |
4 | 3 | 5 | 15 | 0 | 0 | 0 |
I want to return the max values for every column in the table except player_id and game_id.
I know I can return the max of one single column by doing something like so:
SELECT MAX(rec) FROM stats
However, this table has almost 30 columns, so I would just be repeating the query below for all 30 stats, only replacing the name of the stat.
SELECT MAX(rec) as rec FROM stats
This would get tedious real quick, and won't scale.
Is there any way to kind of loop over columns, get every column in the table and return the max value like so:
player_id | game_id | rec | rec_yds | td | pas_att | pas_yds | ...
--------------------------------------------------------
4 | 3 | 5 | 15 | 1 | 7 | 20 |
You can get the maximum of multiple columns in a single query:
SELECT
MAX(rec) AS rec_max,
MAX(rec_yds) AS rec_yds_max,
MAX(td) AS td_max,
MAX(pas_att) AS pas_att_max,
MAX(pas_yds) AS pas_yds_max
FROM stats
However, there is no way to reference an arbitrary, dynamic set of columns inside a single static query. You could dynamically build the query by loading all column names of the table and then applying conditions such as "except player_id and game_id", but that has to happen outside the query itself.
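If you happen to be on PostgreSQL (other databases expose similar catalog views), here is a hedged sketch of that dynamic approach: generate the statement text from information_schema.columns, then run the generated text from application code or a DO block.
-- builds e.g. SELECT MAX(rec) AS rec_max, MAX(rec_yds) AS rec_yds_max, ... FROM stats
SELECT 'SELECT '
       || string_agg(format('MAX(%I) AS %I', column_name, column_name || '_max'),
                     ', ' ORDER BY ordinal_position)
       || ' FROM stats' AS generated_query
FROM information_schema.columns
WHERE table_name = 'stats'
  AND column_name NOT IN ('player_id', 'game_id');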
I have a table that has 2 columns: one is a type column and the other is a value amount column. There are only 2 types. I would like to select rows of this table into another table, combining them into 2 columns based on type and value. For example, the first table may have an order with the 2 types in 2 rows; it would be inserted into the 2nd table as one row.
Example:
Table 1
| ID | OrderID | Type | Value |
|:-----|:--------:|:------------:|-------:|
| 1 | 300 | bike | 100 |
| 2 | 300 | skateboard | 150 |
| 3 | 700 | bike | 200 |
| 4 | 700 | skateboard | 50 |
| 5 | 800 | bike | 150 |
| 6 | 800 | skateboard | 100 |
What is the TSQL to have it inserted into the 2nd table with these values?
Table 2
| ID | OrderID | BikeValue | SkateboardValue |
|:----|:--------:|:----------:|-----------------:|
| 1 | 300 | 100 | 150 |
| 2 | 700 | 200 | 50 |
| 3 | 800 | 150 | 100 |
Just make it simple for yourself. Do two SQL statements. One to insert and another to update.
INSERT INTO Table2 (OrderID, BikeValue)
SELECT Table1.OrderID, Table1.Value
FROM Table1 (NOLOCK)
WHERE Table1.Type = 'bike'
UPDATE Table2 SET Table2.SkateboardValue = Table1.Value
FROM Table2
INNER JOIN Table1 ON Table1.OrderID = Table2.OrderID
WHERE Table1.Type = 'skateboard'
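If you would rather load Table2 in a single statement, a hedged sketch using conditional aggregation instead (this assumes each OrderID has at most one row per Type, and that Table2.ID is an identity column that fills itself in):
-- Table2.ID is assumed to be an identity column, so it is not listed here
INSERT INTO Table2 (OrderID, BikeValue, SkateboardValue)
SELECT OrderID,
       MAX(CASE WHEN Type = 'bike' THEN Value END) AS BikeValue,
       MAX(CASE WHEN Type = 'skateboard' THEN Value END) AS SkateboardValue
FROM Table1
GROUP BY OrderID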
In Tableau, I am joining two tables where a header can have multiple details
Work Order Header
Work Order Details
The joined data looks like this:
Header.ID | Header.ManualTotal | Details.ID | Details.LineTotal
A | 1000 | 1 | 550
A | 1000 | 2 | 35
A | 1000 | 3 | 100
B | 335 | 1 | 250
B | 335 | 2 | 300
C | null | 1 | 50
C | null | 2 | 25
C | null | 3 | 5
C | null | 4 | 5
Where there is a manual total, use that; if there is no manual total, use the sum of the line totals:
ID | Total
A | 1000
B | 335
C | 85
I tried something like this:
ifnull( sum({fixed [Header ID] : [Manual Total] }), sum([Line Total]) )
Basically I need to use ifnull to take the manual total if it exists, or the sum of the line totals if it doesn't.
Please advise on how to use LODs or some other solution to get the correct answer.
Here is a solution that does not require a level-of-detail calculation.
Just try this:
use an inner join on the ID of the two tables
create this calculation: ifnull(median([Manual Total]), sum([Line Total]))
insert agg(your_calculation) into your sheet
Since [Manual Total] repeats on every detail row of a header, median() simply returns that single value (and returns null when there is no manual total), so the ifnull() falls back to the sum of the line totals only when it has to.
I need help with updating a table from another table in a Postgres DB.
Long story short, we ended up with corrupted data in the DB, and now I need to update one table with values from another.
I have a table wfc with this data:
| step_id | command_id | commands_order |
|---------|------------|----------------|
| 1 | 1 | 0 |
| 1 | 2 | 1 |
| 1 | 3 | 2 |
| 1 | 4 | 3 |
| 1 | 1 | 0 |
| 2 | 2 | 0 |
| 2 | 3 | 1 |
| 2 | 3 | 1 |
| 2 | 4 | 3 |
and I want to update the values in the commands_order column from another table, so that I get a result like this:
| step_id | command_id | commands_order|
|---------|------------|---------------|
| 1 | 1 | 0 |
| 1 | 2 | 1 |
| 1 | 3 | 2 |
| 1 | 4 | 3 |
| 1 | 1 | 4 |
| 2 | 2 | 0 |
| 2 | 3 | 1 |
| 2 | 3 | 2 |
| 2 | 4 | 3 |
It looked like an easy task, but the problem is that for rows with the same command_id it writes the same value into commands_order.
The SQL that I tried is:
UPDATE wfc
SET commands_order = CAST(sq.input_step_id as INTEGER)
FROM (
SELECT wfp.step_id, wfp.input_command_id, wfp.input_step_id
from wfp
order by wfp.step_id, wfp.input_step_id
) AS sq
WHERE (wfc.step_id=sq.step_id AND wfc.command_id=CAST(sq.input_command_id as INTEGER));
SQL Fiddle http://sqlfiddle.com/#!15/4efff4/4
I am pretty stuck with this, please help.
Thanks in advance.
Assuming you are trying to number the rows in the order in which they were created, and as long as you understand that ctid will change on update and with VACUUM FULL, you can do the following:
select step_id, command_id, rank - 1 as command_order
from (select step_id, command_id, ctid as wfc_ctid,
             rank() over (partition by step_id order by ctid) as rank
      from wfc) as wfc_ordered;
This will give you the wfc table with the ordering that you want. If you do update the original table, the ctids will change, so it's probably safer to create a copy of the table with the above query.
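If you do make that copy, a minimal sketch (the table name wfc_fixed is just an example):
-- materialize the computed ordering into a new table; ctid approximates insertion order as above
CREATE TABLE wfc_fixed AS
SELECT step_id, command_id, rn - 1 AS commands_order
FROM (SELECT step_id, command_id,
             rank() OVER (PARTITION BY step_id ORDER BY ctid) AS rn
      FROM wfc) AS wfc_ordered;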
I'm running Postgres 9.4.
I'm essentially updating an existing unorganized structure to a folder-based organization. I'm auto-assigning an order number to each item for user reordering, but I'm doing the initial setting of all of these values with a one-time UPDATE statement. However, it seems like SET is taking my subquery's FROM clause and not re-evaluating it for each successive row that it sets.
Here's my query example:
UPDATE folder_items
SET order_number =
(SELECT COALESCE(MAX(folder_items_2.order_number), 0) + 1
FROM folder_items AS folder_items_2
WHERE folder_items.parent_folder_id = folder_items_2.parent_folder_id
AND folder_items.folder_set_id = folder_items_2.folder_set_id
AND folder_items.id != folder_items_2.id);
With my initial table:
| folder_id | folder_set_id | order_number
row 1 | 1 | 1 | null
row 2 | 2 | 1 | null
row 3 | 3 | 2 | null
row 4 | 4 | 2 | null
row 5 | 5 | 2 | null
row 6 | 6 | 3 | null
When I run my query I get something like:
| folder_id | folder_set_id | order_number
row 1 | 1 | 1 | 1
row 2 | 2 | 1 | 1
row 3 | 3 | 2 | 1
row 4 | 4 | 2 | 1
row 5 | 5 | 2 | 1
row 6 | 6 | 3 | 1
However, I want results that look like this:
| folder_id | folder_set_id | order_number
row 1 | 1 | 1 | 1
row 2 | 2 | 1 | 2
row 3 | 3 | 2 | 1
row 4 | 4 | 2 | 2
row 5 | 5 | 2 | 3
row 6 | 6 | 3 | 1
Is there a way to get these desired results? Is the best way to use some sort of window function that counts how many rows in the same folder_set_id come before each row?
Use ROW_NUMBER to calculate the order_number, then update the table. (Your original UPDATE assigns 1 everywhere because the subquery sees the table as it was when the statement started, when every order_number was still NULL.)
with new_order as (
SELECT "folder_id",
row_number() over ( partition by "folder_set_id"
order by "folder_id") as rn
FROM Table1
)
UPDATE Table1 AS t
SET "order_number" = n.rn
FROM new_order AS n
WHERE t."folder_id" = n."folder_id";
SQL DEMO
OUTPUT
| row_id | folder_id | folder_set_id | order_number |
|--------|-----------|---------------|--------------|
| row 1 | 1 | 1 | 1 |
| row 2 | 2 | 1 | 2 |
| row 3 | 3 | 2 | 1 |
| row 4 | 4 | 2 | 2 |
| row 5 | 5 | 2 | 3 |
| row 6 | 6 | 3 | 1 |
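Applied to the folder_items table from the question, a sketch only (it assumes the column names your own UPDATE uses, i.e. id, parent_folder_id and folder_set_id, and numbers rows by ascending id within each folder set):
with new_order as (
    -- compute the per-set position once, before touching the table
    SELECT id,
           row_number() over ( partition by parent_folder_id, folder_set_id
                               order by id ) as rn
    FROM folder_items
)
UPDATE folder_items AS t
SET order_number = n.rn
FROM new_order AS n
WHERE t.id = n.id;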
I need to set a sequence in T-SQL where the first column contains a sequence marker (which repeats) and another column is used for ordering.
It is hard to explain, so I'll try with an example.
This is what I need:
|------------|-------------|----------------|
| Group Col | Order Col | Desired Result |
|------------|-------------|----------------|
| D | 1 | NULL |
| A | 2 | 1 |
| C | 3 | 1 |
| E | 4 | 1 |
| A | 5 | 2 |
| B | 6 | 2 |
| C | 7 | 2 |
| A | 8 | 3 |
| F | 9 | 3 |
| T | 10 | 3 |
| A | 11 | 4 |
| Y | 12 | 4 |
|------------|-------------|----------------|
So my marker is A (each time I meet A I must start a new group in my result). All rows before the first A must be set to NULL.
I know that I can achieve that with a loop, but it would be a slow solution and I need to update a lot of rows (sometimes maybe several thousand).
Is there a way to achieve this without a loop?
You can use the window version of COUNT to get the desired result:
SELECT [Group Col], [Order Col],
       COUNT(CASE WHEN [Group Col] = 'A' THEN 1 END)
           OVER (ORDER BY [Order Col]) AS [Desired Result]
FROM mytable
If you need all the rows before the first A set to NULL, then use SUM instead of COUNT, as sketched below.
Demo here
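For the sample data above, that SUM variant would look like this (same query, with SUM swapped in for COUNT):
SELECT [Group Col], [Order Col],
       SUM(CASE WHEN [Group Col] = 'A' THEN 1 END)
           OVER (ORDER BY [Order Col]) AS [Desired Result]
FROM mytable
The first row (D) has only NULLs in its running window, so SUM returns NULL for it, matching the desired result, whereas COUNT would return 0.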