How to output a hash in table format in Ruby

I have a hash like:
rows = [{"col1"=>"v1", "col2"=>"v2"}, {"col1"=>"v3", "col2"=>"v4"}]
I want to output it in table format using Ruby. Could you please provide an answer?
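A minimal sketch of one way to do this, assuming a plain-text layout with padded columns (the formatting choices here are just one option, not the only approach):

rows = [{"col1"=>"v1", "col2"=>"v2"}, {"col1"=>"v3", "col2"=>"v4"}]

# Collect the headers and work out a display width for each column.
headers = rows.flat_map(&:keys).uniq
widths  = headers.map { |h| [h.length, *rows.map { |r| r[h].to_s.length }].max }

# Print a header row, a separator line, and one row per hash.
puts headers.each_with_index.map { |h, i| h.ljust(widths[i]) }.join(" | ")
puts widths.map { |w| "-" * w }.join("-+-")
rows.each do |row|
  puts headers.each_with_index.map { |h, i| row[h].to_s.ljust(widths[i]) }.join(" | ")
end

This prints:

col1 | col2
-----+-----
v1   | v2
v3   | v4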

Related

MD5/SHA Field Dataset in Data Fusion

I need to concatenate a few string values and then obtain the SHA256 hash of the resulting string. I've seen Data Fusion has a plugin to do the job:
The documentation, however, is very poor and nothing I've tried seems to work. I created a table in BQ with the string fields I need to concatenate, but the output is the same as the input. Can anyone provide an example of how to use this plugin?
EDIT
Below I present the example.
This is what the workflow looks like:
For testing purposes, I added one column with the following string:
2022-01-01T00:00:00+01:00
And here's the output:
You can use Wrangler to concatenate the string values.
I tried your scenario, adding Wrangler to the pipeline:
Joining 2 columns:
I named the column new_col, using , as the delimiter:
Output:
What you described can be achieved with two Wranglers:
The first Wrangler will be what @angela-b described. Use the merge directive to create a new column with the concatenation of two columns. Example directive that joins columns a and b using , as the delimiter and stores the result in column a_b:
merge a b a_b ,
The second Wrangler will use the hash directive which will hash the column in place using a specified algorithm. Example of a directive that hashes column a_b using MD5:
hash :a_b 'MD5' true
Remember to set the last parameter encode to true so that you get a string output instead of a byte array.
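Putting the two directives together for the original SHA256 requirement, the recipe would look something like the following (the column names a and b come from the examples above; 'SHA-256' as the algorithm name is an assumption to check against the Wrangler hash directive documentation):

merge a b a_b ,
hash :a_b 'SHA-256' true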

Split a string column which has a delimiter (',') in SQL

Hi, I am trying to split a string column which has a delimiter (',').
DROP TABLE #Address;
CREATE TABLE #Address (stir VARCHAR(MAX));
GO
INSERT INTO #Address(stir)
values('aa,"","7453adeg3","tom","jon","1900-01-01","14155","","2"')
,('ca,"23","42316eg3","pom","","1800-01-01","9999","","1"')
,('daa,"","1324567a","","catty","","756432","213",""')
GO
Expected output:
I am using PARSENAME but it is returning NULL values. Can you guide me towards my expected output?
Thanks in advance.
PARSENAME won't help here: it splits on periods and handles at most four parts, which is why you are getting NULL values. The best solution would be to create a flat CSV file based on your current insert data, and then use SQL Server's bulk import tool to load it into a table. The following CSV data should be workable here:
aa,"","7453adeg3","tom","jon","1900-01-01","14155","","2"
ca,"23","42316eg3","pom","","1800-01-01","9999","","1"
daa,"","1324567a","","catty","","756432","213",""
Just make sure that you specify the double quote as the field quote character (the CSV text qualifier).
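For example, a minimal BULK INSERT sketch along those lines, assuming the CSV above is saved to a hypothetical path C:\data\address.csv and you are on SQL Server 2017 or later (which supports FORMAT = 'CSV' and FIELDQUOTE); the target table and column names are also just placeholders:

-- Target table with one column per CSV field (names are placeholders).
CREATE TABLE dbo.Address
(
    col1 VARCHAR(50), col2 VARCHAR(50), col3 VARCHAR(50),
    col4 VARCHAR(50), col5 VARCHAR(50), col6 VARCHAR(50),
    col7 VARCHAR(50), col8 VARCHAR(50), col9 VARCHAR(50)
);

BULK INSERT dbo.Address
FROM 'C:\data\address.csv'
WITH
(
    FORMAT          = 'CSV',  -- parse as quoted CSV
    FIELDQUOTE      = '"',    -- double quote as the text qualifier
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n'
);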

kdb - how to generate a HEX colour code as a string or symbol

I would like to create a column on an in-memory table that generates a colour HEX code based on a person's name (another column). A quick Google search didn't give much, so I wondered if any pointers could be given here.
e.g.
update colour: <some code and use username col as input> from table
In kdb+ you can run a function on a column via an update statement but there are slight differences depending on whether the function is vectorised or not. If vectorised:
update colour:{<some code>}[username] from table
update colour:someFunction[username] from table
If not vectorised, then an iterator such as each (') is required:
update colour:{<some code>}'[username] from table
update colour:someFunction'[username] from table
This function will generate hex codes from the first 3 characters of a string.
q)hex:{a:i-16*j:(i:`int$3#x)div 16;"0123456789ABCDEF"raze(j-16*j div 16),'a}
q)hex"Hello"
"48656C"
q)update colour:hex'[username] from table

Convert XML PATH sample code from SQL Server to DB2

I'm converting SQL Server code to DB2.
I need a solution for STUFF and FOR XML PATH.
Example:
SELECT STUFF((SELECT ' ' + something
              FROM tablename
              WHERE condition
              FOR XML PATH('')), 1, 1, '')
Please convert this into DB2.
Your code is an old-school XML "trick" to convert multiple values into a single string (often comma separated, but in this case space separated). Since those days, DB2 (and the SQL standard) have added a new function called LISTAGG which is designed to solve this exact problem:
SELECT LISTAGG(something, ' ')
FROM tablename
WHERE condition
DB2 docs:
https://www.ibm.com/support/knowledgecenter/en/SSEPEK_12.0.0/sqlref/src/tpc/db2z_bif_listagg.html
https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_74/db2/rbafzcollistagg.htm

Talend: shuffle the order of the columns

I was trying to merge all the rows of a file into columns based on a certain sequence number. This has been achieved with tpivotToColumnDelimited (this has to be done, it cannot be changed).
But after using it, the column ordering changed.
Is there any way of reading a file according to one schema and writing it according to another schema in Talend? (Basically, shuffling the column ordering in a file.)
I tried setting tdynamicschema on the input and output but was not able to read and write the data properly.
Any help would be highly appreciated.
I have solved the issue.
I simply added a column holding the index number read from the file. Before using tpivotToColumnDelimited, I used that column to sort the results dynamically and write them to a tmp file; then, with the help of tpivotToColumnDelimited, the output now follows the input schema.