I have 2 tables in kdb as below:
q)table1:([]A:1 2 3 5 5 6 2 1;B:`HAK`ZAK`NAK`AAK`AZK`HAK`ZAK`HAK;C:2000.01.01+0 1 2 3 4 0 1 0)
q)table1
A B C
----------------
1 HAK 2000.01.01
2 ZAK 2000.01.02
3 NAK 2000.01.03
5 AAK 2000.01.04
5 AZK 2000.01.05
6 HAK 2000.01.01
2 ZAK 2000.01.02
1 HAK 2000.01.01
q)table2:([]B:`HAK`ZAK`NAK`AAK`AZK;Z:`NAFK`RFK`NAFK`RFK`ORQ)
q)table2
B Z
--------
HAK NAFK
ZAK RFK
NAK NAFK
AAK RFK
AZK ORQ
I want to modify table1 column B as per the mapping in table2.
E.g. wherever column B in table1 has the symbol HAK, look it up in column B of table2 and replace the table1 value with the corresponding table2 column Z value.
The same applies to every row of table1.
The final output I want is table1 updated like below:
A B C
-----------------
1 NAFK 2000.01.01
2 RFK 2000.01.02
3 NAFK 2000.01.03
5 RFK 2000.01.04
5 ORQ 2000.01.05
6 NAFK 2000.01.01
2 RFK 2000.01.02
1 NAFK 2000.01.01
The function which I came up with is below.
hfun:{$[
  x in `$("HAK");`$("NAFK");
  x in `$("ZAK");`$("RFK");
  x in `$("NAK");`$("NAFK");
  x in `$("AAK");`$("RFK");
  x in `$("AZK");`$("ORQ");
  x]}
finalOutput:update B:hfun'[B] from table1
The above function works as expected, but it's not feasible to write a new function every time there are new mappings, or if table2 has 200 rows.
Can someone please take a look and advise further?
Could also use an amend to achieve this:
@[table1;`B;(!/)table2`B`Z]
update B:({x[;0]!x[;1]}flip value flip table2)'[B]from table1
This achieves the desired outcome without the need to define extra variables or conditional statements.
It also works with spaces in the symbols.
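For illustration (a quick check, not part of the original answers), both forms build the same lookup dictionary out of table2:
q)(!/)table2`B`Z
HAK| NAFK
ZAK| RFK
NAK| NAFK
AAK| RFK
AZK| ORQ
q)((!/)table2`B`Z)~{x[;0]!x[;1]}flip value flip table2
1b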
You can use a dictionary in the update instead of the loop with conditionals:
dict:`HAK`ZAK`NAK`AAK`AZK!`NAFK`RFK`NAFK`RFK`ORQ;
update B^dict B from table1
(with spaces)
table1:([]A:1 2 3 5 5 6 2 1;B:(`$"HAK z";`$"ZAK";`$"NAK";`$"AAK";`$"AZK";`$"HAK";`$"ZAK";`$"HAK");C:2000.01.01+0 1 2 3 4 0 1 0)
table2:([]B:(`$"HAK z";`$"ZAK";`$"NAK";`$"AAK";`$"AZK");Z:`NAFK`RFK`NAFK`RFK`ORQ)
dict:exec B!Z from table2;
update B^dict B from table1
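As a side note, a small sketch of what the B^ fill guards against, using the first dict above (`XYZ is a hypothetical symbol with no mapping): a lookup miss returns a null symbol, and ^ falls back to the original value.
q)dict:`HAK`ZAK`NAK`AAK`AZK!`NAFK`RFK`NAFK`RFK`ORQ
q)dict`HAK`XYZ
`NAFK`
q)`HAK`XYZ^dict`HAK`XYZ
`NAFK`XYZ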
You can also use an lj (left join):
q)select A,B:B^Z,C from table1 lj `B xkey table2
A B C
-----------------
1 NAFK 2000.01.01
2 RFK 2000.01.02
3 NAFK 2000.01.03
5 RFK 2000.01.04
5 ORQ 2000.01.05
6 NAFK 2000.01.01
2 RFK 2000.01.02
1 NAFK 2000.01.01
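For reference, lj expects its right argument keyed on the join column; with the original table2 (without spaces) the keyed table looks like this:
q)`B xkey table2
B  | Z
---| ----
HAK| NAFK
ZAK| RFK
NAK| NAFK
AAK| RFK
AZK| ORQ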
Related
How do I select all rows from a table where a specific column value equals something?
I have tried the following:
select from tablename where columnvalue = value
Thanks
You could do:
q)table:([]a:1 2 3 4 5;b:`a`b`c`d`e;c:`hi`bye`bye`bye`hi)
q)table
a b c
-------
1 a hi
2 b bye
3 c bye
4 d bye
5 e hi
q)select from table where c=`bye
a b c
-------
2 b bye
3 c bye
4 d bye
You could do:
q)tbl:([] a:1 2 3;b:4 5 6;c:7 8 9)
q)tbl
a b c
-----
1 4 7
2 5 8
3 6 9
q)select a from tbl
a
-
1
2
3
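As a small combined sketch of my own (using the table defined in the first answer), you can pick columns and filter rows in a single select:
q)select a,b from table where c=`bye
a b
---
2 b
3 c
4 d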
I was wondering if there is any way of combining these two counts in the same table, like (Titulo, count1, count2).
First one:
select Titulo, count(genero)
from livro natural inner join genero
group by titulo;
Output:
titulo count
1 A lei 2
2 Olhar misterioso 2
3 Pensamento ao anoitecer 2
4 Ajudar e proteger 2
5 A corrupcao 2
6 O crime do seculo 2
7 Sem volta 2
8 Andar protegido 2
9 A bem ou mal 2
10 Diarios de um policia 2
Second one:
select Titulo, count(IDMemb)
from genero natural inner join livro natural inner join gosta
group by titulo;
Output:
titulo count
1 A lei 6
2 Olhar misterioso 4
3 Pensamento ao anoitecer. 4
4 Ajudar e proteger 4
5 A corrupcao 6
6 O crime do seculo 6
7 Sem volta 4
8 Andar protegido 4
9 A bem ou mal 4
10 Diarios de um policia 4
Desired output:
titulo count count
1 A lei 2 6
2 Olhar misterioso 2 4
3 Pensamento ao anoitecer 2 4
4 Ajudar e proteger 2 4
5 A corrupcao 2 6
6 O crime do seculo 2 6
7 Sem volta 2 4
8 Andar protegido 2 4
9 A bem ou mal 2 4
10 Diarios de um policia 2 4
Thanks for your help
You can count only the distinct genero values in the second SQL, like this (the join to gosta multiplies the rows per title, so count(distinct genero) recovers the first count):
select Titulo, count(distinct genero), count(IDMemb)
from genero natural inner join livro natural inner join gosta
group by titulo;
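Alternatively, as a sketch of my own (not from the answer above; the aliases genre_count and member_count are made up), you can aggregate each count in its own subquery and join the results on titulo, which avoids relying on distinct after the row-multiplying join:
select g.titulo, g.genre_count, m.member_count
from (select titulo, count(genero) as genre_count
      from livro natural inner join genero
      group by titulo) g
inner join (select titulo, count(IDMemb) as member_count
            from genero natural inner join livro natural inner join gosta
            group by titulo) m
  on m.titulo = g.titulo;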
Suppose I have data formatted in the following way (FYI, total row count is over 30K):
customer_id order_date order_rank
A 2017-02-19 1
A 2017-02-24 2
A 2017-03-31 3
A 2017-07-03 4
A 2017-08-10 5
B 2016-04-24 1
B 2016-04-30 2
C 2016-07-18 1
C 2016-09-01 2
C 2016-09-13 3
I need a 4th column, let's call it days_since_last_order, which should be 0 where order_rank = 1 and otherwise the number of days since the previous order (the one with rank n-1).
So, the above would return:
customer_id order_date order_rank days_since_last_order
A 2017-02-19 1 0
A 2017-02-24 2 5
A 2017-03-31 3 35
A 2017-07-03 4 94
A 2017-08-10 5 38
B 2016-04-24 1 0
B 2016-04-30 2 6
C 2016-07-18 1 79
C 2016-09-01 2 45
C 2016-09-13 3 12
Is there an easier way to calculate the above with a window function (or similar), rather than joining the entire dataset against itself (e.g. on A.order_rank = B.order_rank - 1) and doing the calculation?
Thanks!
Use the LAG window function:
SELECT
    customer_id
    , order_date
    , order_rank
    , COALESCE(
          DATE(order_date)
          - DATE(LAG(order_date) OVER (PARTITION BY customer_id ORDER BY order_date))
        , 0) AS days_since_last_order
FROM <table_name>
Example data: SQL Fiddle http://sqlfiddle.com/#!15/c8a17/4
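If you want to try it outside the fiddle, here is a minimal setup sketch (the table name orders and the column types are my assumptions, and only a few of the question's rows are included):
CREATE TABLE orders (
    customer_id text,
    order_date  date,
    order_rank  int
);
INSERT INTO orders VALUES
    ('A', '2017-02-19', 1),
    ('A', '2017-02-24', 2),
    ('A', '2017-03-31', 3),
    ('B', '2016-04-24', 1),
    ('B', '2016-04-30', 2);
-- Substitute orders for <table_name> above; customer A then gets 0, 5 and 35
-- days, matching the expected output in the question.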
My Query
select
    unnest(array[g,g1,g2]) as disp,
    unnest(array[g,g||'-'||g1,g||'-'||g1||'-'||g2]) as grp,
    unnest(array[1,2,3]) as ord,
    unnest(array['assesvalue','lst2','salesvalue','itemprofit','profitper','itemstockvalue']) as analysis,
    unnest(array[value1,tt,sv,tp,per,tsv]) as val
from (
    select
        g,
        g1,
        g2,
        sum(value1) as value1,
        sum(tt) as tt,
        sum(sv) as sv,
        sum(tp) as tp,
        sum(per) as per,
        sum(tsv) as tsv
    from table1
    group by g,g1,g2
) as ta
It shows the output like this:
disp grp ord analysis val
A A 1 assesvalue 100
B A-B 2 lst2 30
C A-B-C 3 salesvalue 20
A A 1 itemprofit 5
B A-B 2 profitper 1
C A-B-C 3 itemstockvalue 10
Expected result:
disp grp ord analysis val
A A 1 assesvalue 100
A A 1 lst2 30
A A 1 salesvalue 20
A A 1 itemprofit 5
A A 1 profitper 1
A A 1 itemstockvalue 10
B A-B 2 assesvalue 100
B A-B 2 lst2 30
B A-B 2 salesvalue 20
B A-B 2 itemprofit 5
B A-B 2 profitper 1
B A-B 2 itemstockvalue 10
C A-B-C 3 assesvalue 100
C A-B-C 3 lst2 30
C A-B-C 3 salesvalue 20
C A-B-C 3 itemprofit 5
C A-B-C 3 profitper 1
C A-B-C 3 itemstockvalue 10
In the query I am using multiple unnest calls.
The first 3 unnests have 3 elements each and the other 2 have 6 elements each, and that shows the wrong output; but if the last 2 unnests have fewer than 6 elements, it shows my expected result.
What am I doing wrong in the query?
I am using PostgreSQL 9.3.
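For what it's worth, a minimal sketch of the behaviour described above, assuming PostgreSQL 9.3 semantics where multiple set-returning functions in the SELECT list expand to the least common multiple of their lengths:
SELECT unnest(array['A','B','C'])  AS three_items,
       unnest(array[1,2,3,4,5,6])  AS six_items;
-- 6 rows: A,B,C,A,B,C paired with 1..6, so a 3-element unnest placed next to a
-- 6-element unnest cycles rather than producing one row per combination.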
I have a table like this:
ID ORDER TEAM TIME
IL-1 1 A_Team 11
IL-1 2 A_Team 3
IL-1 3 B_Team 2
IL-1 4 A_Team 1
IL-1 5 A_Team 1
IL-2 1 A_Team 5
IL-2 2 C_Team 3
What I want is to group the same-named teams that are also sequential (according to the ORDER column).
So the result table should look like:
IL-1 1 A_Team 14
IL-1 2 B_Team 2
IL-1 3 A_Team 2
IL-2 1 A_Team 5
IL-2 2 C_Team 3
Thanks
Edit: Following nang's answer, I added the ID column to my table.
There is a problem in your example. Why should rows #6 and #2 not be "sequential teams"?
1 A_Team 5
2 A_Team 3
However, maybe the following is useful for you:
select neworder, name, sum([time]) from (
    select min(n1.[order]) neworder, n2.[order], n2.name, n2.[time]
    from mytable n1, mytable n2
    where n1.Name = n2.Name
      and n2.[order] >= n1.[order]
      and not exists(select 1 from mytable n3
                     where n3.name != n1.name
                       and n3.[order] > n1.[order]
                       and n3.[order] < n2.[order])
    group by n2.[order], n2.name, n2.[time]) x
group by neworder, name
Result:
neworder name (No column name)
1 A_Team 19
4 A_Team 2
3 B_Team 2
2 C_Team 3
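As an alternative sketch (my own, not part of the answer above): the classic gaps-and-islands approach with window functions, assuming a SQL Server version that supports ROW_NUMBER and the edited table that includes the ID column (the table name mytable is reused from the answer above):
select id,
       row_number() over (partition by id order by min([order])) as [order],
       team,
       sum([time]) as [time]
from (
    select *,
           -- the difference of the two row numbers is constant within each run
           -- of consecutive rows for the same team, so it identifies the run
           row_number() over (partition by id order by [order])
         - row_number() over (partition by id, team order by [order]) as grp
    from mytable
) x
group by id, team, grp
order by id, min([order])
Grouping by id, team and that difference merges exactly the sequential duplicates, which for the sample data gives the desired IL-1/IL-2 result shown in the question.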