Convert a csv string to a table - kdb

If we have a file containing csv then we can read it using 0:.
Say we have a file x.csv on disk; converting it to a table is as easy as:
("SFJ";enlist",")0:`:/x.csv
But how can we convert a csv string to a table?
string:
"sym,px,vol
GG,10.2,100
AA,11.2,1000"
Expected output: table
sym px vol
"GG" 10.2 100
"AA" 11.2 1000

A list of strings can be passed to 0: in place of a file handle, and the table will be created as normal:
q)s:("sym,px,vol";"GG,10.2,100";"AA,11.2,1000")
q)s
"sym,px,vol"
"GG,10.2,100"
"AA,11.2,1000"
q)("SFJ";enlist",")0:s
sym px vol
-------------
GG 10.2 100
AA 11.2 1000

If you needed to programmatically get to Eliot's s from one big csv string, there are a few options depending on the format of the string.
// \n delimited
s:` vs "sym,px,vol\nGG,10.2,100\nAA,11.2,1000"
// if you know the row and col count - reshape, then rejoin each row with commas
s:","sv'3 3#"," vs "sym,px,vol,GG,10.2,100,AA,11.2,1000"
// if you just know the col count
s:"sym,px,vol,GG,10.2,100,AA,11.2,1000"
f:{[str;noCol]
  str:"," vs str;                  / split on every comma
  noRow:`long$(count str)%noCol;   / derive the row count
  ","sv'(noRow,noCol)#str          / reshape, then rejoin each row
  }
f[s;3]
All three output ("sym,px,vol";"GG,10.2,100";"AA,11.2,1000").
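Putting the pieces together, here is a minimal end-to-end sketch, assuming the newline-delimited form (csvToTable is a hypothetical helper name, not a built-in):
/ split a \n-delimited csv string with ` vs, then parse the lines with 0:
csvToTable:{[types;str](types;enlist",")0:` vs str}
csvToTable["SFJ";"sym,px,vol\nGG,10.2,100\nAA,11.2,1000"] / yields the same table as above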

Related

PostgreSQL COPY cannot read JSON from CSV file

I'm copying data from a CSV file into a PostgreSQL table using COPY.
My CSV file is simply:
0\"a string"
And my table "Test" was created by the following:
create table test (
id integer,
data jsonb
);
My copy statement, and the error I received, were the following:
williazz=# \copy test from 'test/test.csv' delimiters '\' CSV
ERROR: invalid input syntax for type json
DETAIL: Token "a" is invalid.
CONTEXT: JSON data, line 1: a...
COPY test, line 1, column data: "a string"
Interestingly, when I changed the JSON value in my CSV file to a number, there was no problem.
CSV:
0\1505
williazz=# \copy test from 'test/test.csv' delimiters '\' CSV
COPY 1
williazz=# select * from test;
id | data
----+------
0 | 1505
(1 row)
Furthermore, numbers in arrays also work:
CSV:
1\[0,1,2,3,4,5]
williazz=# select * from test;
id | data
----+---------------
0 | 1505
1 | [0,1,2,3,4,5]
(2 rows)
But as soon as I introduce a non-numeric string into the JSON, the COPY stops working:
0\[1,2,"three",4,5]
ERROR: invalid input syntax for type json
DETAIL: Token "three" is invalid.
CONTEXT: JSON data, line 1: [1, 2, three...
COPY test, line 1, column data: "[1, 2, three, 4, 5]"
I cannot get Postgres to read a non-numeric string in JSON format. I've also tried changing the data type of column "data" from jsonb to json, and using basically every combination of single and double quotes.
Could someone please help me identify the problem? Thank you.
Because your file is CSV encoded, it does not contain what you think it does.
0\"a string"
With a delimiter of \ this is two values: the number 0 and the string a string. Note the lack of quotes; those quotes are part of the CSV string formatting. a string is not valid JSON - the quotes are required.
Instead you need to include the JSON string quotes inside the CSV string quotes. Quotes in CSV are escaped by doubling them.
0\"""a string"""
Now that is the number 0 and the string "a string" including quotes.
And as an observation, it would be simpler to remove the complication of embedding JSON into a CSV and use a pure JSON file.
[
[0, "a string"],
[1, "other string"]
]

kdb+: Save table with a column with a list of float into a csv file

I have a table "floats" with two columns: sym and prices. The sym elements are symbols and the prices elements are lists of floats.
q)LF:((3.0;1.0;2.0);(5.0;7.0;4.0);(2.0;8.0;9.0))
q)show floats:flip `sym`prices!(`6AH0`6AH6`6AH7;LF)
sym prices
-----------
6AH0 3 1 2
6AH6 5 7 4
6AH7 2 8 9
I want to export the table "floats" to a csv file but I get this error:
q)save `:floats.csv
'type
[0] save `:floats.csv
I followed this post kdb+: Save table into a csv file, which solves the problem if the column is a list of strings. Unfortunately, when I try to convert the "prices" column to a list of chars and then save to CSV using the internal function, the procedure returns errors:
q))#[`floats;`prices;" " sv']
'type
[7] #[`floats;`prices;" " sv']
^
q))#[`floats;`prices;string]
'noamend: `. `floats
[10] #[`floats;`prices;string]
^
q))#[`floats;string `prices;" " sv']
'noamend: `. `floats
[10] #[`floats;string `prices;" " sv']
^
Please help me convert the "prices" column to a list of chars and then save to CSV using the internal function, or provide valid alternatives to export the table to a text file.
First, you need to convert the floats to strings, then use sv with the each-right adverb (/:):
floats: update " " sv/: string each prices from floats
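If you would rather not amend the global table, a minimal alternative sketch (the output file name is illustrative) writes a flattened copy straight to disk using the 0: approach described in the next answer:
/ flatten the prices column on the fly and push the csv text to a file handle
`:floats.csv 0: csv 0: update " " sv/: string each prices from floats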

kdb+: Save table into a csv file

I have the below table "dates"; it has a sym column with symbols and a d column with lists of strings, and I would like to save it into a regular CSV file. I couldn't find a good way to do it. Any suggestions?
q)dates
sym d
----------------------------------------------------------------------------
6AH0 "1970.03.16" "1980.03.17" "1990.03.19" "2010.03.15"
6AH6 "1976.03.15" "1986.03.17" "1996.03.18" "2016.03.14"
6AH7 "1977.03.14" "1987.03.16" "1997.03.17" "2017.03.13"
6AH8 "1978.03.13" "1988.03.14" "1998.03.16" "2018.03.19"
6AH9 "1979.03.19" "1989.03.13" "1999.03.15" "2019.03.18"
When I try to do a regular save, the below error happens:
q)save `:dates.csv
k){$[t&77>t:#y;$y;x;-14!'y;y]}
'type
q))
The internal table->csv conversion function within kdb+ is not able to handle nested lists in columns. The d column in your table is a list of lists of chars. However, the conversion function is able to handle a simply-nested column (depth of 1).
Therefore, you can convert the d column to a list of chars and then save to CSV using the internal function:
/ generate a table of dummy data
q)show dates:flip `sym`d!(`6AH0`6AH6`6AH7;string (3;0N)#12?.z.d)
sym d
--------------------------------------------------------
6AH0 "2008.02.04" "2015.01.02" "2003.07.05" "2005.02.25"
6AH6 "2012.10.25" "2008.08.28" "2017.01.25" "2007.12.27"
6AH7 "2004.02.01" "2005.06.06" "2013.02.11" "2010.12.20"
/ convert 'd' column to simple list - the (" " sv') is the conversion func here
q)#[`dates;`d;" " sv']
`dates
/ review what was done
q)show dates
sym d
--------------------------------------------------
6AH0 "2008.02.04 2015.01.02 2003.07.05 2005.02.25"
6AH6 "2012.10.25 2008.08.28 2017.01.25 2007.12.27"
6AH7 "2004.02.01 2005.06.06 2013.02.11 2010.12.20"
/ save to csv
q)save `:dates.csv
`:dates.csv
/ review saved csv
q)\cat dates.csv
"sym,d"
"6AH0,2008.02.04 2015.01.02 2003.07.05 2005.02.25"
"6AH6,2012.10.25 2008.08.28 2017.01.25 2007.12.27"
"6AH7,2004.02.01 2005.06.06 2013.02.11 2010.12.20"
As per the CSV specification, you'll want to flatten the list out, separate each item with a comma, and double-quote the whole list.
'save' is limited in that the file must be named the same as the global variable you are saving.
If I were tasked with your question, I'd do it like so:
`:myFileNamedWhatever.csv 0: csv 0: select sym,csv sv'd from dates
Explanation:
csv 0: table / csv is a variable, literally defined as "," - it's good for readability. csv 0: table converts the table to a comma-separated list of strings
`:file 0: listOfStrings / this takes a LIST of strings and pushes them to the file handle. Each element of the list is a new line in the file
I'd prefer this approach as it is general and allows the saving of various types. You can use it within a function etc..
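For instance, a quick sketch on a throwaway two-row table (the data is illustrative only):
q)csv 0: ([]a:1 2;b:`x`y)
"a,b"
"1,x"
"2,y"
Each string in the result then becomes one line of the file when pushed to a file handle with `:file 0:.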
At a later date I decided that I wanted it saved as a pipe-separated (or anything-separated) file:
`:myNewFile.psv 0: "|" 0: select sym,"|"sv'd from dates

Putting keyword data into a csv file MATLAB

Given a table of the following format in MATLAB:
userid | itemid | keywords
A = [ 3 10 'book'
3 10 'briefcase'
3 10 'boat'
12 20 'windows'
12 20 'picture'
12 35 'love'
4 10 'day'
12 10 'working day'
... ... ... ];
where A is a table of size (58000*3), I want to write the data in a csv file with the following format:
csv.file
itemid keywords
10 book, briefcase, boat, day, working day, ...
20 windows, picture, ...
35 love, ...
where the list of itemids is stored in Iids = [10,20,35,...]
I would like to avoid using loops for this since, as you can imagine, the table is large. Any idea is appreciated.
I wasn't able to think of a solution without loops. But you can optimize your loop by:
using logical indexing
running the loop only M times (where M is the number of unique itemid values) instead of N times (where N is the number of rows in your table).
The solution I came up with is as follows.
First of all, create your table:
A=table([3;3;3;12;12;12;4;12], [10;10;10;20;20;35;10;10],{'book','briefcase','boat','windows','picture','love','day','working day'}','VariableNames',{'userid','itemid','keywords'});
Select the unique values for column itemid (your Iids):
Iids=unique(A.itemid);
Create a new, empty, table which will contain the results:
NewTable=table();
And now the minimal loop I've come up with:
for id=Iids'
% select rows with given itemid value
RowsWithGivenId=A(A.itemid==id,:);
% create new row in NewTable with the id and the (joined together) keywords from the selected rows
NewTable=[NewTable; table(id,{strjoin(RowsWithGivenId.keywords,', ')})];
end
Also, assign the new column names in NewTable:
NewTable.Properties.VariableNames = {'itemid','keywords'};
NewTable now contains one row per unique itemid, with the corresponding keywords joined together.
Please note: because the keywords in the new table are themselves separated by commas, CSV is not the format I would recommend. If you save it with writetable(NewTable,'myfile.csv'), the comma-separated keyword lists will clash with the CSV field separators. Replacing the comma separator in strjoin() with a semicolon gives a nicer format.

Get substring into a new column

I have a table that contains a column with data in the following format - let's call the column "title" and the table "s":
title
ab.123
ab.321
cde.456
cde.654
fghi.789
fghi.987
I am trying to get a unique list of the characters that come before the "." so that I end up with this:
ab
cde
fghi
I have tried selecting the initial column into a table, then doing an update to create a new column holding the position of the dot, using "ss".
Something like this:
t: select title from s
update thedot: (title ss `.)[0] from t
I was then going to try to add a third column that would be the first N characters of "title", where N is the value stored in the "thedot" column.
All I get when I try the update is a "type" error.
Any ideas? I am very new to kdb, so no doubt I am doing something simple in a very silly way.
The reason you get the type error is that ss only works on the string type, not symbols. Also, ss is not a vector-based function, so you need to combine it with each (').
q)update thedot:string[title] ss' "." from t
title thedot
---------------
ab.123 2
ab.321 2
cde.456 3
cde.654 3
fghi.789 4
fghi.987 4
There are a few ways to solve your problem:
q)select distinct(`$"." vs' string title)[;0] from t
x
----
ab
cde
fghi
q)select distinct(` vs' title)[;0] from t
x
----
ab
cde
fghi
You can read here for more info: http://code.kx.com/q/ref/casting/#vs
An alternative is to make use of the 0: operator to parse around the "." delimiter. This operator is especially useful if you have a fixed number of 'columns', like in a csv file. In this case, where there is a fixed number of columns and we only want the first, the distinct values before the "." can be returned with:
exec distinct raze("S ";".")0:string title from t
`ab`cde`fghi
OR:
distinct raze("S ";".")0:string t`title
`ab`cde`fghi
Where "S " defines the types of each column and "." is the record delimiter. For records with differing number of columns it would be better to use the vs operator.
A variation of WooiKent's answer using each-right (/:):
q)exec distinct (` vs/:title)[;0] from t
`ab`cde`fghi