I have a CASE statement like the one below:
CASE WHEN col1 LIKE '%other%' THEN 'No' ELSE col5 END AS col5
Just like in SQL, I need to implement this CASE statement in Talend, with different columns and a wildcard check for the word 'other'. How can this be done?
Your question is not entirely clear without any screenshots or further explanation.
I assume that you have some input component, like tOracleInput, with a row flowing out of it that has multiple columns in its schema. I would suggest using a tMap component to manipulate the contents of the schema, especially its Expression Builder.
P.S. I personally prefer tJavaFlex for column validations/manipulations, as it makes the code more readable, but it is a more advanced technique.
If the columns are not dynamically created, you can add the CASE logic in a tMap, for example.
So, for a new boolean column you could create the expression:
row1.mycolumn1.toLowerCase().contains("other") ? Boolean.TRUE : Boolean.FALSE
The boolean field would then hold the result of the check.
EDIT
Since a new boolean column is not wanted here, your specific requirement would look like this:
row1.product_code.toLowerCase().contains("other") ? "No data" : row1.product_code
Please find the input as follows.
col1 col2 col3 col4 col5
---- ---- ---- ---- ----
aaaother aaa aaa aaa aaa
otherbbb bbb bbb bbb bbb
ccc ccc ccc ccc ccc
In the tMap, you have to put the following statement in the Expression Builder against col5:
input.col1.contains("other") ? "No" : input.col5
Then the output will be as follows:
col1 col2 col3 col4 col5
---- ---- ---- ---- ----
aaaother aaa aaa aaa No
otherbbb bbb bbb bbb No
ccc ccc ccc ccc ccc
So I have the following table:
q)show t:flip`col1`col2`col3!((enlist`na;(`na`emea);`na);`test1`test2`test3;(enlist`uat;enlist`prd;(`uat`prd`dr)))
col1 col2 col3
--------------------------
,`na test1 ,`uat
`na`emea test2 ,`prd
`na test3 `uat`prd`dr
Can I use ungroup on this table?
Short answer is no. The first line of the documentation for ungroup (https://code.kx.com/q/ref/ungroup/) states: "Where x is a table, in which some cells are lists, but for any row, all lists are of the same length".
The second row of your table contains lists in col1 and col3, but these are of different lengths.
ungroup will work on the first and last rows of your table, as in each of those rows all cells that are lists have the same length.
q)ungroup 1#t
col1 col2 col3
---------------
na test1 uat
q)ungroup -1#t
col1 col2 col3
---------------
na test3 uat
na test3 prd
na test3 dr
q)(1#t),-1#t
col1 col2 col3
----------------------
,`na test1 ,`uat
`na test3 `uat`prd`dr
q)ungroup (1#t),-1#t
col1 col2 col3
---------------
na test1 uat
na test3 uat
na test3 prd
na test3 dr
Like my answer on your other related question (How can I prep this table for ungroup), you can ungroup one column at a time if you use a custom ungrouper: it replicates each row by the length of the list in the given column, then overwrites that column with its razed values.
q){@[x where count each x y;y;:;raze x y]}/[t;`col1`col3]
col1 col2 col3
---------------
na test1 uat
na test2 prd
emea test2 prd
na test3 uat
na test3 prd
na test3 dr
I would like to join 2 tables based on the below criteria:
I would like to pick the substring from Table_A where the "some_name" column has data like 'AB-FBb3', match it against Table_B by replacing FB with SC (so 'AB-FBb3' should match 'SCb3'), and then fetch the "desc" details.
Table_A:
AB some_name G_NAME Status some_time
------------------------------------------------------------
AAA Job1 xxxxxxxxx Ended OK 2020-06-29 10:37:52
AAA Job2 xxxxxxxxx Ended OK 2020-06-29 10:37:52
BBB AB-Job1 xxxxxxxxx Ended OK 2020-06-29 10:37:52
BBB AB-Job2 xxxxxxxxx Ended OK 2020-06-29 10:37:52
BBB AB-FBb3 xxxxxxxxx Ended OK 2020-06-29 10:37:52
Table_B:
RM j_name desc rand_time
----------------------------------------------------
111 Job1 Sometext 2020-06-29 06:30:51
111 AB-Job1 Sometext1 2020-06-29 09:31:52
222 AB-Job5 Sometext2 2020-06-29 09:34:11
222 DPF-AB-Job2 Sometext3 2020-06-29 03:39:33
222 SCb3 Sometext4 2020-06-29 11:32:23
Currently what I have (I would like to add the above-mentioned condition to this):
SELECT a.some_name, a.G_NAME, b.desc
FROM Table_A a
LEFT JOIN Table_B b
ON b.j_name IN (a.some_name, 'DPF-' || a.some_name)
WHERE a.some_name LIKE 'AB-%'
Any suggestions? Also, the substring position is not fixed, so I would need to find the substring first and then join the tables.
FYI: This is an extension of my earlier question, hence posted as a separate question.
I think I found an answer by trying an OR condition with SUBSTR combined with INSTR:
SELECT a.some_name, a.G_NAME, b.desc
FROM Table_A a
LEFT JOIN Table_B b
ON
(b.j_name IN (a.some_name, 'SC' || SUBSTR(a.some_name,
 INSTR(a.some_name, 'FB') + 2, 3)))
OR
(b.j_name IN (a.some_name, 'DPF-' || a.some_name))
WHERE a.some_name LIKE 'AB-%'
I'm not sure if it's a perfect solution, but if anyone has a better one, please feel free to let me know.
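A possible simplification (an untested sketch, using only the table and column names from the question): take the substring starting at 'FB' and let REPLACE swap the prefix, which avoids hard-coding the length of the trailing part. Note that REPLACE would also rewrite any later occurrence of 'FB' in the tail:
SELECT a.some_name, a.G_NAME, b.desc
FROM Table_A a
LEFT JOIN Table_B b
ON b.j_name IN (a.some_name,
                'DPF-' || a.some_name,
                -- 'AB-FBb3': SUBSTR from the position of 'FB' gives 'FBb3', REPLACE gives 'SCb3';
                -- if 'FB' is absent, INSTR returns 0 and the expression falls back to the full name
                REPLACE(SUBSTR(a.some_name, INSTR(a.some_name, 'FB')), 'FB', 'SC'))
WHERE a.some_name LIKE 'AB-%'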
I need to create a query in which each row's value is the previous row's value incremented by 8%.
The table (let's name it money) contains one row (and two columns), and it looks like:
AMOUNT ID
100.00 AAA
I just need to populate data from this table in the following way (one SELECT from this table, e.g. 6 iterations):
100.00 AAA
108.00 AAA
116.64 AAA
125.97 AAA
136.04 AAA
146.93 AAA
You can do that with a recursive common table expression.
E.g. if your source looks like this:
db2 "create table money(amount decimal(31,2), id varchar(10))"
db2 "insert into money values (100,'AAA')"
You can create the input data with the following query (I will include a counter column for clarity):
db2 "with
cte(c1,c2,counter)
as
(select
amount, id, 1
from
money
union all
select
c1*1.08, c2, counter+1
from
cte
where counter < 10)
select * from cte"
C1 C2 COUNTER
--------------------------------- ---------- -----------
100.00 AAA 1
108.00 AAA 2
116.64 AAA 3
125.97 AAA 4
136.04 AAA 5
146.92 AAA 6
158.67 AAA 7
171.36 AAA 8
185.06 AAA 9
199.86 AAA 10
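To reproduce the exact six rows from the question, the same CTE works with the recursion simply capped at 6 instead of 10:
db2 "with
cte(c1,c2,counter)
as
(select
amount, id, 1
from
money
union all
select
c1*1.08, c2, counter+1
from
cte
where counter < 6)
select c1, c2 from cte"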
To populate the existing table without repeating the existing row, you can use an INSERT like this:
$ db2 "insert into money
with
cte(c1,c2,counter)
as
(select
amount*1.08, id, 1
from
money
union all
select
c1*1.08, c2, counter+1
from
cte
where counter < 10) select c1,c2 from cte"
$ db2 "select * from money"
AMOUNT ID
--------------------------------- ----------
100.00 AAA
108.00 AAA
116.64 AAA
125.97 AAA
136.04 AAA
146.93 AAA
158.68 AAA
171.38 AAA
185.09 AAA
199.90 AAA
215.89 AAA
11 record(s) selected.
I need to achieve the below requirement, i.e.
Input -- at the very first time:
Order value
1111 aaa
222 bbb
333 ccc
In the target (insert) I will have:
Order value
1111 aaa
222 bbb
333 ccc
Input -- at the second time:
Order value
1111 Aaa1
222 Bbb2
333 ccc
Output must be:
Order value
1111 aaa Aaa1
222 bbb Bbb2
And so on. I need to keep appending changed values for the corresponding key column, like this:
111 aaa aaa1 aaa2 aaa3
Please help.
You can follow these steps:
1. Use a CDC (Change Data Capture) stage. By default it assigns the following change codes: 0 for a copy/duplicate record, 1 for an insert, 2 for a delete, 3 for an update.
2. Take the link that carries the copy records and connect it to a Transformer, where you declare a stage variable that is incremented by 1.
3. In the field derivation, concatenate the existing string with the new value and the stage-variable counter, e.g. something like CONCAT(string, 'aa', svarcount).
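Outside DataStage, for clarity, the same append-on-change logic could be sketched in plain SQL. This is only an illustrative sketch, not the DataStage solution itself; the table and column names (target, staging, order_id, value_history, value) are hypothetical, and the change test is crude:
-- Hypothetical sketch: staging holds the latest input rows,
-- target accumulates the per-key history of values.
MERGE INTO target t
USING staging s
  ON (t.order_id = s.order_id)
-- Crude change test: append only when the history does not already end with the new value.
WHEN MATCHED AND t.value_history NOT LIKE '%' || s.value THEN
  UPDATE SET t.value_history = t.value_history || ' ' || s.value
WHEN NOT MATCHED THEN
  INSERT (order_id, value_history) VALUES (s.order_id, s.value);
Inside DataStage itself, the Transformer derivation from step 3 plays the role of the UPDATE branch.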
I need to do some advanced grouping in T-SQL with data that looks like this:
PK YEARMO DATA
1 201201 AAA
1 201202 AAA
1 201203 AAA
1 201204 AAA
1 201205 (null)
1 201206 BBB
1 201207 AAA
2 201301 CCC
2 201302 CCC
2 201303 CCC
2 201304 DDD
2 201305 DDD
And then, every time DATA changes per primary key, pull up the date range for said item so that it looks something like this:
PK START_DT STOP_DT DATA
1 201201 201204 AAA
1 201205 201205 (null)
1 201206 201206 BBB
1 201207 201207 AAA
2 201301 201303 CCC
2 201304 201305 DDD
I've been playing around with ranking functions but haven't had much success. Any pointers in the right direction would be supremely awesome and appreciated.
You can use the ROW_NUMBER() function to partition your data into ranges: the difference between a row number computed per PK and one computed per PK and DATA stays constant within each consecutive run of the same DATA value, so it identifies each "island":
SELECT
PK,
START_DT = MIN(YEARMO),
STOP_DT = MAX(YEARMO),
DATA
FROM (
SELECT
PK, DATA, YEARMO,
ROW_NUMBER() OVER (PARTITION BY PK ORDER BY YEARMO) -
ROW_NUMBER() OVER (PARTITION BY PK, DATA ORDER BY YEARMO) grp
FROM your_table
) A
GROUP BY PK, DATA, grp
ORDER BY MIN(YEARMO)
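To see why this works, here are the intermediate values for PK = 1 (rn1 and rn2 are just labels for the first and second ROW_NUMBER() calls; grp = rn1 - rn2):
YEARMO DATA   rn1 rn2 grp
------ ------ --- --- ---
201201 AAA    1   1   0
201202 AAA    2   2   0
201203 AAA    3   3   0
201204 AAA    4   4   0
201205 (null) 5   1   4
201206 BBB    6   1   5
201207 AAA    7   5   2
Consecutive rows with the same DATA share a grp value, so grouping by PK, DATA and grp collapses each island into one row; PARTITION BY and GROUP BY both treat NULLs as equal, which is why the (null) row forms its own island.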