I would like to create some marks where the size information comes from one dataset and the color information comes from another dataset.
Is this possible?
Or can I update marks already created with dataset 1 using information from a second dataset?
Yes, you can do it.
You can use the lookup transform, provided there is a shared lookup key in both datasets.
In this example, 'category' is the key on which the lookup transform is performed.
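A minimal Vega-Lite sketch of the idea, assuming two hypothetical inline datasets that share a 'category' key, with sizes in the primary dataset and colors in the looked-up one (setting the color scale to null makes Vega-Lite use the looked-up values as literal colors):

{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "data": {"values": [
    {"category": "A", "size": 10},
    {"category": "B", "size": 40}
  ]},
  "transform": [{
    "lookup": "category",
    "from": {
      "data": {"values": [
        {"category": "A", "color": "red"},
        {"category": "B", "color": "blue"}
      ]},
      "key": "category",
      "fields": ["color"]
    }
  }],
  "mark": "circle",
  "encoding": {
    "x": {"field": "category", "type": "nominal"},
    "size": {"field": "size", "type": "quantitative"},
    "color": {"field": "color", "type": "nominal", "scale": null}
  }
}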
I'm using Grafana v9.3.2.2 on Azure Grafana
I have a line chart whose labels contain an ID. I also have an SQL table in which the IDs are mapped to simple strings. I want to alias the IDs in the labels to the strings from the SQL table.
I am looking for a transformation to do the conversion.
There is a "Rename by regex" transformation, but that would require me to hardcode each case. Is there something similar with which I don't have to hardcode each case?
There is something similar for variables - https://grafana.com/blog/2019/07/17/ask-us-anything-how-to-alias-dashboard-variables-in-grafana-in-sql/. But I don't see anything for transformations.
Use 2 queries in the panel - one for the data with IDs and a second one for the ID-to-string mapping. Then add an Outer join transformation and use the ID field to join the query results into one result.
You may also need an Organize fields transformation to rename or hide unwanted fields, so that only the right fields end up in the label.
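As a sketch, the resulting transformation stack would look roughly like this in the panel JSON (the field name ID is an assumption here; transformation ids and options can differ between Grafana versions):

"transformations": [
  { "id": "joinByField", "options": { "byField": "ID", "mode": "outer" } },
  { "id": "organize", "options": { "excludeByName": { "ID": true }, "renameByName": {} } }
]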
I am new to Scala and Spark. I am exploring the Amazon Deequ library for data profiling.
How do I get the count of rows having a particular value while using ColumnProfilerRunner()?
The AnalysisRunner has a "compliance" option; I am looking for a similar option to filter rows that comply with a given column constraint.
I have multiple columns, hence I want to check dynamically instead of hardcoding column names.
Appreciate any help.
Thanks
Deequ's column profiler computes a fixed set of statistics. If you want to compute custom statistics on your data, you should use the VerificationSuite. Check out the examples on Deequ's GitHub page.
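A minimal Scala sketch, assuming an existing SparkSession (spark) and a DataFrame df with a hypothetical status column; the satisfies constraint is Deequ's compliance-style check:

import com.amazon.deequ.{VerificationResult, VerificationSuite}
import com.amazon.deequ.checks.{Check, CheckLevel}

// df is a hypothetical DataFrame; "status" and the predicate are placeholders
val result = VerificationSuite()
  .onData(df)
  .addCheck(
    Check(CheckLevel.Error, "compliance checks")
      // satisfies() computes the fraction of rows matching the SQL predicate;
      // the assertion then decides whether the check passes
      .satisfies("status = 'active'", "active rows", _ >= 0.9))
  .run()

// turn the check results into a DataFrame for inspection
VerificationResult.checkResultsAsDataFrame(spark, result).show()

Since a Check is an immutable value and satisfies returns a new Check, you can also add one constraint per entry of df.columns in a loop when the column names are only known at runtime.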
I thought this would be fairly straightforward, but I can't really find a simple way of doing it. I want to add a unique row number to a source dataset in an ADF Mapping Data Flow. In SSIS I would have done this with a script component, but there's no option for that as far as I can see in ADF. I've looked for suitable functions in the Derived Column expression editor and also the Aggregate component, but there doesn't appear to be one.
Any ideas how this could be achieved?
Thanks
Many options:
Add a surrogate key transform
Hash row columns in Derived Column using SHA2
Use the rowNumber() function in a Window transformation
Give those a shot and let us know what you think
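For the hashing option, the Derived Column expression could look like this in the data flow expression language (rowHash is just an example name; sha2() and columns() are built-in functions, and columns() passes in all columns of the row):

rowHash = sha2(256, columns())

Note that a hash identifies a row by its content, so exact duplicate rows get the same value; for a guaranteed-unique sequence, use the Surrogate Key transform or rowNumber() in a Window transformation.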
I did it like this:
Add a column with the same value in all rows (I used an integer with value = 1);
Add a Window transformation, using the column created in step 1 in its Over tab;
In the Window columns tab, add a column with any name and rowNumber() as its expression.
I have some data which I need to pivot in Talend. This is a sample:
brandname,metric,value
A,xyz,2
B,xyz,2
A,abc,3
C,def,1
C,ghi,6
A,ghi,1
Now I need this data to be pivoted on the metric column like this:
brandname,abc,def,ghi,xyz
A,3,null,1,2
B,null,null,null,2
C,null,1,6,null
Currently I am using tPivotToColumnsDelimited to pivot the data to a file and then reading it back from that file. However, having to store data in an external file and read it back is messy and adds unnecessary overhead.
Is there a way to do this in Talend without writing to an external file? I tried tDenormalize but, as far as I understand, it returns the rows as 1 column, which is not what I need. I also looked for a 3rd-party component on Talend Exchange but couldn't find anything useful.
Thank you for your help.
Assuming that your metrics are fixed, you can use their names as columns of the output. The pivot then has two parts: first, a tMap that transposes the value of each input row (flow in) into the corresponding column of the output row (flow out), and second, a tAggregateRow that groups the map's output rows by brandname.
For the tMap you'd have to fill the columns conditionally like this; example for the output column named "abc":
out.abc = "abc".equals(in.metric)?in.value:null
In the tAggregateRow you'd group by out.brandname and aggregate each column with the sum function, ignoring nulls. Note that the value columns must use the nullable Integer type (not the primitive int) in the tMap schema, otherwise the null branch of the ternary won't compile.
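With the sample data above, the full set of tMap output expressions follows the same pattern (again assuming the value columns are typed as nullable Integer):

out.abc = "abc".equals(in.metric)?in.value:null
out.def = "def".equals(in.metric)?in.value:null
out.ghi = "ghi".equals(in.metric)?in.value:null
out.xyz = "xyz".equals(in.metric)?in.value:null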
I have this kind of data:
I need to transpose this data into something like this using Talend:
Help would be much appreciated.
dbh's suggestion should indeed work, but I did not try it.
However, I have another solution which doesn't require changing the input format and is not too complicated to implement. The job has only 2 transformation components (tDenormalize and tMap).
The job looks like the following:
Explanation:
Your input is read from a CSV file (it could be a database or any other kind of input).
The tDenormalize component denormalizes your value column (column 2), grouped by the id column (column 1) and separating fields with a specific delimiter (";" in my case), resulting in 2 rows as shown.
The tMap splits the aggregated column back into multiple columns, using Java's String.split() method and spreading the resulting array over multiple columns; see the expression sketch after this explanation.
Since Talend doesn't accept storing array objects, make sure to store the split String in a variable of type Object. Then cast that Object back to an array on the output side of the map.
That approach should give you the expected result.
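In practice, the tMap entries could look like this (row2, parts and the output column names are placeholders; the variable must be typed Object because tMap variables don't support array types directly):

Var.parts (type Object): row2.value.split(";")
out.col1: ((String[]) Var.parts)[0]
out.col2: ((String[]) Var.parts)[1]
out.col3: ((String[]) Var.parts)[2]

...and so on for the remaining columns.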
IMPORTANT:
tDenormalize might shuffle the rows, meaning that for bigger inputs you might get unsorted output. Make sure to sort it if needed, or use tDenormalizeSortedRow instead.
tDenormalize behaves like an aggregation component, meaning it scans the whole input before producing output, which can cause performance issues on particularly big inputs (tens of millions of records).
Your input is probably wrong (you have 5 entries with id 1 and 6 entries with id 2). Since 6 output columns are expected, you should always have 6 lines per id. If not, you should implement dbh's solution, and you will probably have to add a key column.
You can use Talend's tPivotToColumnsDelimited component to achieve this. You will most likely need an additional column in your data to represent the field name,
like "identifier, field name, value".
Then you can use this component to pivot the data and write a file as output. If you need to process the data further, read the resulting file back with tFileInputDelimited.
See docs and an example at
https://help.talend.com/display/TalendOpenStudioComponentsReferenceGuide521EN/13.43+tPivotToColumnsDelimited
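For illustration, the input would then be shaped roughly like this (hypothetical values), with tPivotToColumnsDelimited configured to pivot on the field-name column and aggregate the value column:

identifier,field name,value
1,col1,A
1,col2,B
2,col1,C
2,col2,D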