Talend filter from two input files

I have two data files (delimited files):
- The first one contains 3 columns, ID, num_phone, trafic_etl: the SIM card may be 3G, 4G or anything else.
- The second one contains 1 column, num_phone_4g: the SIM card has to be 4G.
The thing is, I want to fill an Oracle table with the numbers that have a 4G SIM card (second file) and 0 trafic_etl in total, knowing that the first file may have more than one row for the same num_phone.
I did this with SQL statements by storing the files in tables.
But I have to do it using Talend, and I am new to this tool.
Thanks in advance.
Images of the two files: File1, File2

Here's a solution using this sample data.
*File 1*
num_phone;trafic_etl;annee;mois;jour
123456;111111;2018;Juillet;20
123457;222222;2018;Juillet;20
123458;0;2018;Juillet;20
123456;333333;2018;Juillet;20
123457;444444;2018;Juillet;20
123458;0;2018;Juillet;20
*File 2*
num_phone_4g
123456
123457
123458
123459
The expected output is 123458 (because it has a total of 0 trafic) and 123459 (because it's not present in file 1; I don't know if this is possible in your use case).
I aggregate the data of file1 by phone number to get the total trafic for each phone number (assuming the date is not important). Then I use this aggregated data as a lookup to file2. In tMap_1, there is a join between the 2 flows on the phone number, and I only output the rows from file2 where the total trafic is null or zero.
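For clarity, here is the same aggregate-then-lookup logic sketched outside Talend in pandas (the file names and separators are assumptions based on the samples above, and the Oracle output step is left out):

import pandas as pd

# Hypothetical file names; separator taken from the sample data above
traffic = pd.read_csv("file1.csv", sep=";")   # num_phone;trafic_etl;annee;mois;jour
sim4g = pd.read_csv("file2.csv", sep=";")     # num_phone_4g

# Aggregation step: total trafic_etl per phone number
totals = traffic.groupby("num_phone", as_index=False)["trafic_etl"].sum()

# tMap_1 equivalent: lookup join on the phone number, keeping all file2 rows
joined = sim4g.merge(totals, left_on="num_phone_4g", right_on="num_phone", how="left")

# Keep the 4g numbers whose total trafic is zero or missing from file1
result = joined.loc[joined["trafic_etl"].fillna(0) == 0, ["num_phone_4g"]]

With the sample data above, this returns 123458 and 123459.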
Let me know if my assumptions are correct. If they are not, I will update my answer.

Related

Is it possible to limit the number of rows in the output of a Dataprep flow?

I'm using Dataprep on GCP to wrangle a large file with a billion rows. I would like to limit the number of rows in the output of the flow, as I am prototyping a Machine Learning model.
Let's say I would like to keep one million rows out of the original billion. Is it possible to do this with Dataprep? I have reviewed the documentation on sampling, but that only applies to the input of the Transformer tool and not the outcome of the process.
You can do this, but it does take a bit of extra work in your recipe: set up a formula in a new column using something like RANDBETWEEN to give you a random integer between 1 and 1,000 (in this million-out-of-a-billion case). From there, you can filter rows on whichever of those random integers you decide to keep, and your output will only contain that randomized subset. Just have the last part of the recipe remove this temporary column.
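The recipe syntax itself isn't reproduced here, but the logic amounts to the following (a rough pandas sketch with a hypothetical input file, just to illustrate the idea):

import numpy as np
import pandas as pd

df = pd.read_csv("big_file.csv")   # hypothetical billion-row input

# Temporary column: a random integer between 1 and 1,000 per row (the RANDBETWEEN step)
df["bucket"] = np.random.randint(1, 1001, size=len(df))

# Keep a single bucket, i.e. roughly 1/1,000 of the rows, then drop the helper column
sample = df[df["bucket"] == 1].drop(columns="bucket")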
So indeed there are 2 approaches to this.
As Courtney Grimes said, you can use one of the 2 functions that create a random number out of a range: randbetween or rand.
These methods can be used to slice an "even" portion of your data. As suggested, add a randbetween(1,1000) column, then filter on a single value of it, because each value covers roughly 1/1000 of the data (a million out of a billion).
Alternatively, if you just want a million records in your output but either
- don't want to rely on knowing the size of the entire table, or
- just want the first million rows, regardless of how many rows there are,
you can use 2 of the row filtering methods instead: top rows or range.
P.S.
Using the $sourcerownumber metadata parameter (see the in-product documentation), you can filter/keep a portion of the data (as per the first scenario) in one step, i.e. without creating an additional column.
BTW, an easy way to discover how-tos in Trifacta is to just type what you're looking for in the search transformation pane (accessed via Ctrl+K). By searching "filter", you'll get most of the relevant options for your problem.
Cheers!

kdb q - efficiently count tables in flatfiles

I have a lot of tables stored in flat files (in a directory called basepath) and I want to check their number of rows. The best I can do right now is:
c:([] filename:system "ls ",basepath;
tablesize:count each get each hsym `$basepath,/:system "ls ",basepath)
which loads each table entirely into memory and then performs the count (that's quite slow). Is saving as splayed tables the only way to make this faster (because I would only load 1 column and count that) or is there a trick in q that I can use?
Thanks for the help
If you have basepath defined as a string of the path to the directory where all your flat tables are stored, then you can create a dictionary of the row counts as follows:
q)cnt:{count get hsym x}
q)filename:key hsym `$basepath
q)filename!cnt each filename
t| 2
g| 3
This is where I have flat tables t and g saved in my basepath directory. This avoids using system commands, which are often less efficient.
The function cnt takes the path of each flat table (as a symbol) and returns the number of rows without saving them into memory.
The best solution, if you have control of the process of saving such files down, is to add an extra step of saving the meta information of the row count somewhere separate at the same time as saving the raw data. This would allow you to quickly access the table size from this file instead of reading the full table in each time.
However, note that to avoid pulling them into memory at all you would have to instead use read1 and look at the headers on the binary data. As you said it would be better to save as a splayed table and read in one column.
UPDATE: I would not recommend doing this and strongly suggest doing the above, but out of curiosity, after looking into using read1, here's an example of what a hacky solution might look like:
f:{
  b:read1(y;0;x);                                          / read a chunk of x bytes from the start of the file
  if[not 0x62630b~b[2 4 5];'`$"not a table"];              / check it looks like a flip (98) of a dict (99) with symbol (11) keys
  cc:first first((),"i";(),4)1:b 7+til 4;                  / number of column names, as a little-endian 4-byte int
  if[null ce:first where cc=sums 0x0=11 _ b;:.z.s[x*2;y]]; / names don't all fit in this chunk: retry with double the size
  c:`$"\000" vs "c"$b[11+til ce];                          / split the null-terminated names and convert to symbols
  n:first first((),"i";(),4)1:b[(20+ce)+til 4];            / row count from the header of the first column
  :`columns`rows!(c;n);
  }[2000]
The q binary file format isn't documented anywhere; the only way to figure it out is to save different things and see how the bytes change. It's also subject to change between versions - the above is written for 3.5 and is probably valid for 3.0-3.5 only, not the latest 3.6 release or anything 2.x.
The given code works in the following way:
- reads a chunk from the front of the file
- validates that it looks like a flat unkeyed table (flip[98] of a dict[99] with symbol[11] keys)
- reads the count of symbols in the list of columns as a little-endian 4-byte int
- scans through the null-terminated strings for that many zero bytes; if the columns are so numerous or verbose that we don't have them all in this chunk, it doubles the size of the chunk and tries again
- turns the strings into symbols
- using the offset we get from the end of the column list, skips a bit more of the header for the mixed list of columns
- then reads the count from the header of the first column
Hope this answers your question!
From experimenting with the binary files, it seems that the table count is saved as part of the binary file when you save down a flat file, taking up 4 bytes after the initial object type and column headings which will vary from table to table.
`:test set ([]a:1 2 3;b:4 5 6;c:7 8 9;aa:10 11 12;bb:13 14 15)
q)read1 `:test
0xff016200630b000500000061006200630061610062620000000500000009000300000
0 7 11 31
bytes | example | meaning
---------------------------------------------------------------------------------------
0 - 5 | 0xff016200630b | object is a flat table
7 - 10 | 0x05000000 | number of columns (5)
12 - 22 | 0x6100620063006161006262 | the ascii values of the column names "a", "b", "c", "aa", "bb", separated by null bytes
23 - 30 | 0x0000050000000900 | 8 bytes that can be skipped
31 - 34 | 0x03000000 | 4 bytes for the row count of the first column (3)
This should help you understand the function that Fiona posted.
The binary is saved down little-endian, meaning the most significant byte is the right-most one - doing this in decimal for the number 100 would give 001, with the hundreds (most significant) on the right, then the tens, and finally the units on the left. In the binary file, each group of 2 hex digits is a byte.
You can use 1: to read in the contents of a binary file, with additional arguments in the list specifying the offset - where to start reading from - and how many bytes to read. In our case we want to start at byte 31 and read 4 bytes, specifying that the output should be an integer and that the input should be cut into separate 4-byte chunks.
q)first first (enlist "i";enlist 4)1:(`:test;31;4)
3i
Converting the little-endian bytes into an integer gives us the row count. Since this only has to read 4 bytes instead of the whole file, it is a lot quicker.
For a table with 10000 rows and 2 columns there is not much difference:
q)\t 0x0 sv reverse first (enlist "x";enlist 1)1:(`:test10000;31;4)
0
q)\t count get `:test10000
0
For a table with 100m rows and 2 columns:
q)\t 0x0 sv reverse first (enlist "x";enlist 1)1:(`:test10m;31;4)
0
q)\t count get `:test10m
2023
If you have a splayed table instead, you can read the number of elements in one of the columns from the 4 bytes starting at offset 8, like so, assuming the column is a simple list:
q)first first (enlist "i";enlist 4)1:(`:a;8;4)
3i
You can read more about reading in from binary files here https://code.kx.com/q/ref/filenumbers/#1-binary-files
You can make what you currently have more efficient by using the following
counttables:{count each get each hsym `$basepath}
This will improve the speed of the count by not including the extra read of the data as well as the join which you are currently doing. You are correct though that if the tables were saved splayed you would only have to read in the one column, making it much more efficient.
If your tables are stored uncompressed there's probably something quite hacky you could do with a read1 on the headers within the file until you find the first column header.
But v hacky :-(
Are you responsible for saving these down? Can you keep a running state as you do?

Tableau collate data from multiple columns

I am not sure what the technical term for what I am trying to do is.
Hoping raw data and output below will clearly define the use case.
Raw data :
This is what my raw data looks like
Output 1 :
This is what I am trying to extract first
Here I am trying to get a table where the first column has the name of the guests and 2nd column has the count of times they have featured in the table as a guest.
Output 2 :
This is what I am trying to extract next
Here I am trying to map months against names and see how many nights one has collected in which month.
One way to achieve this would be to create a temp table with 5 columns:
- column 1 with the guest names,
- column 2 with the count of occurrences in the Guest 1 column of the raw data table,
- column 3 with the count of occurrences in the Guest 2 column of the raw data table,
- column 4 with the count of occurrences in the Guest 3 column of the raw data table,
- column 5 with the total of the previous 3 columns.
But I am trying to find a proper solution through Tableau, if possible, because this way would not help me achieve Output 2.
Plain text raw data if you'd like to work on it :
booking by,Guest 1,Guest 2,Guest 3,stay start,stay end,hotel code
Ram,Seema,Ram,,May 1 2018,May 2 2018,BBST
Karan,Ram,Seema,,May 6 2018,May 7 2018,BRRLY
Mahesh,Mahesh,Seema,Ram,June 2 2018,June 4 2018,BBST
Krishna,Krishna,,,June 2 2018,June 3 2018,BRRLY
Seema,Seema,,,June 7 2018,June 8 2018,BRRLY
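For what it's worth, the temp-table idea described above amounts to unpivoting the three guest columns and counting. Here is a rough pandas sketch over the plain-text data (the file name is hypothetical, and nights are assumed to be stay end minus stay start, which may not match your definition):

import pandas as pd

raw = pd.read_csv("bookings.csv")   # the plain-text raw data above

# Unpivot Guest 1/2/3 into a single "guest" column, one row per guest per stay
guests = raw.melt(id_vars=["booking by", "stay start", "stay end", "hotel code"],
                  value_vars=["Guest 1", "Guest 2", "Guest 3"],
                  value_name="guest").dropna(subset=["guest"])

# Output 1: number of times each person appears as a guest
out1 = guests["guest"].value_counts()

# Output 2: nights per guest per month (nights = stay end - stay start)
guests["nights"] = (pd.to_datetime(guests["stay end"])
                    - pd.to_datetime(guests["stay start"])).dt.days
guests["month"] = pd.to_datetime(guests["stay start"]).dt.month_name()
out2 = guests.pivot_table(index="guest", columns="month",
                          values="nights", aggfunc="sum", fill_value=0)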

Merge Columns from various files in Talend

I am trying to achieve a column merge of files in a folder using Talend. (The files are local.)
Example: 4 files are in a folder (there could be 'n' number of files).
Each file would have one column having 100 values.
So after merge, the output file would have 4 or 'n' number of columns with 100 records in it.
Is it possible to merge this way using Talend components ?
Tried with 2 files in tMap, but the output records become multiplied (the records in the first file * the records in the second file).
Any help would be appreciated.
Thanks.
You have to determine how to join data from the different files.
If row number N of each file has to be matched with row number N of the other files, then you must add a sequence to each of your files, and join the sequences in order to get your result. Careful: you are totally depending on the order of data in each file.
Then you can have this job :
tFileInputdelimited_1 --> tMap_1 --->{tMap_5
tFileInputdelimited_2 --> tMap_2 --->{tMap_5
tFileInputdelimited_3 --> tMap_3 --->{tMap_5
tFileInputdelimited_4 --> tMap_4 --->{tMap_5
In tMap_1 to tMap_4, copy the input to the output, and add a "sequence" column (datatype Integer) to your output, populated with Numeric.sequence("IDENTIFIER1",1,1). Then you have 2 columns in the output: your data and a unique sequence.
Be careful to use different identifiers for each source.
Then in tMap_5, just join the different sequences, and get your inputColumn.
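To make the matching rule explicit, here is the equivalent of that sequence join outside Talend, as a small pandas sketch (the folder and file names are hypothetical; like the job above, it depends entirely on row order):

import glob
import pandas as pd

# One single-column file per input; the row index plays the role of the sequence column
frames = [pd.read_csv(path, header=None) for path in sorted(glob.glob("folder/*.csv"))]

# Join on row number: column N of the output comes from file N
merged = pd.concat(frames, axis=1)
merged.to_csv("merged.csv", index=False, header=False)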

Select value in table in tableau

I am quite new to Tableau, so have patience with me :)
I have two tables,
Table one (T1) contains all my data, with the first column being Year-Week, like 2014-01, 2014-02, and so on. Quick question regarding this: how do I make Tableau consider this as a date, and not as a string?
T1 contains a lot of data that looks like this:
YearWeek Spend TV Movies
2014-01 5000 42 12
2014-02 4800 41 32
2014-03 2000 24 14
....
2015-24 7000 45 65
I have another table (T2) that contains some values I want to multiply with the T1 columns. T2 looks like:
NAME TV Movies
Weight 2 5
Response 6 3
Ad 7 2
Version 1 0
I want to create a calculated field (TVNEW) that takes the values of TV from T1, adds Response(TV) to them, and multiplies them by Weight(TV).
So something like this:
(T1[TV]+T2[TV[Response]])*T2[TV[Weight]]
This looks like this for the rows:
(42+6)*2
(41+6)*2
(24+6)*2
...
(45+6)*2
So the calculation should take a specific value from T2, and do the calculation for each value in T1[TV]
Thanks in advance
The easy answer to your question will be: No, not natively.
What you want to do sounds like accessing a 2 dimensional array and that's not really the intention of Tableau. Additionally you have 2 completely independent tables without a common attribute to JOIN on. Tableau is just not meant to work that way.
I cannot think of a way to dynamically extract that value. (I assume your example is just that, an example; in your case you don't just use two values in the calculation, otherwise you could create 2 parameters to use in your calculated fields.)
When I look at your tables it looks like you could transpose and join them so that they ideally look like this: (Edit: a comment says transposing is not an option)
Medium Value YearWeek Spend
Movies 12 2014-01 5,000
Movies 32 2014-02 4,800
Movies 14 2014-03 2,000
Movies 65 2015-24 7,000
TV 42 2014-01 5,000
TV 41 2014-02 4,800
TV 24 2014-03 2,000
TV 45 2015-24 7,000
and
Medium Weight Response Ad Version
TV 2 6 7 1
Movies 5 3 2 0
Depending on the systems you work with you could already put it in one CSV or table so you wouldn't have to do a JOIN in Tableau.
Now you can create the first table natively in Tableau (from version 9.0 onwards): if you open your data source, in the Data Source preview choose the columns TV and Movies, click on the small triangle and then on Pivot. (At this point you can also choose the YearWeek column, click on the triangle and Split to create a separate field for Year and Week. You won't be able to assign the type date to it, but that shouldn't give you any disadvantages.)
For the second table I can think of two possibilities:
you have access to a tool that can transpose your table (Excel can do that, see: Convert matrix to 3-column table ('reverse pivot', 'unpivot', 'flatten', 'normalize')). Once you have done that you can open it in Tableau and join the two tables on Medium
You could create calculated fields depending on the medium:
Field: Weight
CASE [Medium]
WHEN 'TV' THEN 2
WHEN 'Movies' THEN 5
END
And accordingly for Response, Ad and Version
Obviously that is only reasonable if you really just need a handful of values.
Once this is done it's only a matter of creating a calculated field with
([Value]+[Response])*[Weight]
And this will calculate all the values for your table.
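To see the whole flow end to end outside Tableau, here is a rough pandas equivalent of the pivot, the join on Medium and the final formula (file names are hypothetical; this is a sketch of the logic, not the Tableau steps themselves):

import pandas as pd

t1 = pd.read_csv("t1.csv")   # YearWeek, Spend, TV, Movies
t2 = pd.read_csv("t2.csv")   # NAME, TV, Movies (Weight/Response/Ad/Version rows)

# Pivot T1 so TV and Movies become a single Medium/Value pair per row
long1 = t1.melt(id_vars=["YearWeek", "Spend"], var_name="Medium", value_name="Value")

# Transpose T2 so each Medium is a row with Weight/Response/Ad/Version columns
long2 = t2.set_index("NAME").T.reset_index().rename(columns={"index": "Medium"})

# Join on Medium and apply (Value + Response) * Weight
joined = long1.merge(long2, on="Medium")
joined["TVNEW"] = (joined["Value"] + joined["Response"]) * joined["Weight"]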