I have this data in PostgreSQL:
Id type quantity
1 order 10
2 order 12
3 order 11
4 purchase 5
5 purchase 4
6 credit 2
I would like the quantity to be returned as negative when the type is 'order' or 'credit':
Id type quantity
1 order -10
2 order -12
3 order -11
4 purchase 5
5 purchase 4
6 credit -2
How do I do this in PostgreSQL?
You can do it with a CASE expression:
select Id,
       type,
       case when type in ('order', 'credit') then quantity * -1
            else quantity
       end as quantity
from tableName
If the quantity can already hold a negative value, you will need an extra condition so it is not flipped back to positive.
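For example, a minimal sketch (assuming the same tableName, and that 'order' and 'credit' rows should always come out negative even if some quantities are already stored as negative) can normalise the sign with ABS:

select Id,
       type,
       case when type in ('order', 'credit') then -abs(quantity)  -- always negative for these types
            else quantity
       end as quantity
from tableName;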
I have the following tables (example)
Analyze_Line
id  game_id  bet_result  game_type
1   1        WIN         0
2   2        LOSE        0
3   3        WIN         0
4   4        LOSE        0
5   5        LOSE        0
6   6        WIN         0
Game
id  league_id  home_team_id  away_team_id
1   1          1             2
2   2          2             3
3   3          3             4
4   1          1             2
5   2          2             3
6   3          3             4
Required Data:
league_id  WIN  LOSE  GameCnt
1          1    1     2
2          0    2     2
3          2    0     2
The Analyze_Line table is joined with the Game table, and I can easily get GameCnt by grouping on league_id, but I am not sure how to calculate the WIN count and LOSE count from bet_result.
You can use conditional aggregation to split the win and lose bet results per league:
select
    g.league_id,
    sum(case when a.bet_result = 'WIN' then 1 else 0 end) as win,
    sum(case when a.bet_result = 'LOSE' then 1 else 0 end) as lose,
    count(*) as gamecnt
from
    game g
    inner join analyze_line a on g.id = a.game_id
group by
    g.league_id
Since there is no mention of the PostgreSQL version, I can't recommend the FILTER clause (PostgreSQL-specific), since it might not work for you.
Adding to Kamil's answer - PostgreSQL introduced the filter clause in PostgreSQL 9.4, released about eight years ago (December 2014). At this point, I think it's safe enough to use in answers. IMHO, it's a tad more elegant than summing over a case expression, but it does have the drawback of being PostgreSQL specific syntax, and thus not portable:
SELECT g.league_id,
COUNT(*) FILTER (WHERE a.bet_result = 'WIN') AS win,
COUNT(*) FILTER (WHERE a.bet_result = 'LOSE') AS lose,
COUNT(*) AS gamecnt
FROM game g
JOIN analyze_line a ON g.id = a.game_id
GROUP BY g.league_id
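A small side note on this form: COUNT(*) with FILTER returns 0 rather than NULL for a league with no matching results, so no ELSE 0 or COALESCE is needed.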
Given the following table
time kind counter key1 value
----------------------------------------
1 1 1 1 1
2 0 1 1 2
3 0 1 2 3
5 0 1 1 4
5 1 2 2 5
6 0 2 3 6
7 0 2 2 7
8 1 3 3 8
9 1 4 3 9
How would one select the value in the row immediately after and the row immediately before each row of kind 1, ordered by time, where the key1 value is the same in both instances, i.e.:
time value prevvalue nextvalue
---------------------------------------------
1 1 0n 2
5 5 3 7
8 8 6 0n
9 9 6 0n
Here are some of the things I have tried, though to be honest I have no idea how to canonically achieve something like this in q, where the prior value has a variable offset from the current row:
select prev[value], next[value], by key1 where kind<>1
update 0N^prevval,0N^nextval from update prevval:prev value1,nextval:next value1 by key1 from table
Some advice or a pointer on how to achieve this would be great!
Thanks
I was able to use the following code to return a table meeting your requirements. If this is correct, then the sample output you have provided is incorrect; otherwise I have misunderstood the question.
q)table:([] time:1 2 3 5 5 6 7 8 9;kind:1 0 0 0 1 0 0 1 1;counter:1 1 1 1 2 2 2 3 4;key1:1 1 2 1 2 3 2 3 3;value1:1 2 3 4 5 6 7 8 9)
q)tab2:update 0N^prevval,0N^nextval from update prevval:prev value1,nextval:next value1 by key1 from table
q)tab3:select from tab2 where kind=1
q)select time, value1, prevval, nextval from tab3
time value1 prevval nextval
---------------------------
1    1              2
5    5      3       7
8    8      6       9
9    9      8
The update statement in tab2:
update 0N^prevval,0N^nextval from update prevval:prev value1,nextval:next value1 by key1 from table
is simply adding two columns onto the original table with the previous and next values for each row within its key1 group. 0N^ fills the empty fields with nulls.
The select statement in tab3:
tab3:select from tab2 where kind=1
is filtering tab2 for rows where kind=1.
The final select statement:
select time,value1,prevval,nextval from tab3
is selecting the rows you want to be returned in the final result.
Hope this answers your question.
Thanks,
Caitlin
In Tableau I have a table with this form:
Rows: Score.
Columns: MY(month), SUM(Good), SUM(Bad).
This is the information shown when I use month 201811:
201611 201612 ... 201801 ... 201811 TOTAL
Score Good Bad Good Bad Good Bad ... Good Bad
1 3 0 7 3 6 3 2 1
2 5 1 1 1 1 1 4 4
3 10 3 2 1 0 3 3 3
I want to use a filter on the 'Month' column: when I filter month=201811, show the totals from 201611 to 201711 (the last 12 months) in the Total column (for both the Good and Bad columns), by Score.
Filter: 201811
Formula: sum(Good) and sum(Bad) since '201611' to '201711'
I tried "IF DATEDIFF('month', [Good], TODAY()) <= 12" but it doesn't work.
Thanks for your help.
Try this:
IF DATEDIFF("month", TODAY(), [Your Date Field], "Sunday") <= -12
THEN [Your Date Field]
ELSE NULL
END
Then use that as your date column. The "Sunday" argument should be whatever you consider the starting day of the week. I wasn't sure what your date field is named, so I called it [Your Date Field].
I have a table that looks like this:
GroupID UserID Value
1 1 10
1 2 20
1 3 30
1 4 40
1 5 45
1 6 49
1 7 80
1 8 90
2 1 2
2 2 24
2 3 34
2 4 48
2 5 56
3 1 etc.
3 2
3 3
3 4
4 1
4 2
4 3
I am trying to write a LEAD function that will give me the midpoint between each value. To do this I have written the following:
SELECT
[GroupID]
, [UserID]+0.5
, (LEAD ([Value], 1) OVER (ORDER BY GroupID, UserID) + [Value])/2 as [Value]
from dbo.myTable
The problem with this function is that when it gets to the last User in a group, it gives me a bad value, because it takes the [Value] from the current row and the value from the first row of the next group.
What I want to do is stop it when it reaches the maximum UserID for each Group. In other words, when it gets to GroupID = 1 and UserID = 8, it should end and start at the next Group. I do not want a row that looks like this:
GroupID UserID Value
1 8.5 46
I could run a DELETE statement after I INSERT the rows into the original table, but I don't have anything to identify when a row is the "maximum" User for its Group. Ideally, I would like to somehow tell the LEAD statement not to calculate it in the first place.
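A minimal sketch of one way to do this (assuming the same dbo.myTable and bracketed SQL Server-style syntax from the question): adding PARTITION BY GroupID to the window makes LEAD return NULL on the last row of each group, so the unwanted midpoint rows come out NULL and can be filtered away before the INSERT.

SELECT GroupID, UserID, Value
FROM (
    SELECT
        [GroupID]
        , [UserID] + 0.5 AS [UserID]
        , (LEAD([Value], 1) OVER (PARTITION BY [GroupID] ORDER BY [UserID]) + [Value]) / 2 AS [Value]
    FROM dbo.myTable
) AS midpoints
WHERE [Value] IS NOT NULL;  -- drops the last row of each group, where LEAD returned NULL

The outer query is only there because window functions cannot appear directly in a WHERE clause.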
I would like a query that will show a sum of columns with a default value for missing data. For example assume I have a table as follows:
type_lookup:
id name
-----------
1 self
2 manager
3 peer
And a table as follows
data:
id type_lookup_id value
--------------------------
1 1 1
2 1 4
3 2 9
4 2 1
5 2 9
6 1 5
7 2 6
8 1 2
9 1 1
After running a query I would like a result set as follows:
type_lookup_id value
----------------------
1 13
2 25
3 0
I would like all rows in type_lookup table to be included in the result set - even if they don't appear in the data table.
It's a bit hard to read your data layout, but something like the following should do the trick:
SELECT tl.id AS type_lookup_id, tl.name, coalesce(sum(da.value), 0) AS how_much
from type_lookup tl
left outer join data da
  on da.type_lookup_id = tl.id
group by tl.id, tl.name
order by tl.id
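The LEFT OUTER JOIN is what keeps type_lookup rows that have no matching rows in data, and COALESCE turns the NULL sum for those rows into the 0 expected for type_lookup_id 3.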