Scheme read specific data from file - matlab

I have a txt file that looks like this:
1 17.3
2 18.2
3 18.6
I would like to make a variable (for example temp) which would store the first value (17.3). I would then compare this value with something else (< temp 20). The next step would be to store the second value in temp (18.2), so I could compare the values again.
Any help would be appreciated!
In Matlab it would look like this:
A = importdata(...);
i = 1;
while i <= size(A,1)
    temp = A(i,2);
    if temp < 20
        ...
    end
    i = i + 1;
end

There are several ways to skin this cat in R6RS:
You can use read. read will read any Scheme datum, and since these are all numbers each call to read returns the next number in the file (a minimal sketch of this follows below).
You can make your own parser. Read one char at a time, and when you hit a space or linefeed pass the list of chars you have collected through list->string to get a string and then string->number to get the number. This can also be done in two stages, reading lines and then parsing each line, or by slurping the whole file first and then processing the string.
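A minimal sketch of the first approach (using read), assuming an R6RS implementation and a file named data.txt laid out like the one in the question; the file name and the (< temp 20) test are only illustrative:

(import (rnrs))

(define (process-temps filename)
  (call-with-input-file filename
    (lambda (port)
      (let loop ()
        (let* ((index (read port))    ; first column: the line number
               (temp  (read port)))   ; second column: the temperature
          (unless (eof-object? temp)
            (when (< temp 20)
              ;; temp is below the threshold; do the real work here
              (display temp)
              (newline))
            (loop))))))))

(process-temps "data.txt")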

MATLAB fwrite\fread issue: two variables are being concatenated

I am reading in a binary EDF file and I have to split it into multiple smaller EDF files at specific points and then adjust some of the values inside. Overall it works quite well, but when I read in the file it combines two character arrays with each other. Obviously everything afterwards gets corrupted as well. I am at a dead end and have no idea what I'm doing wrong.
The part of the code (writing) that has to contain the problem:
byt=fread(fid,8,'*char');
fwrite(tfid,byt,'*char');
fwrite(tfid,fread(fid,44));
%new number of records
s = records;
fwrite(tfid,s,'*char');
fseek(fid,8,0);
%test
fwrite(tfid,fread(fid,8,'*char'),'*char');
When I use the reader it combines the records (fwrite(tfid,s,'*char'))
with the value of the next variable. All variables before this are displayed correctly. The relevant code of the reader:
hdr.bytes = str2double(fread(fid,8,'*char')');
reserved = fread(fid,44);%#ok
hdr.records = str2double(fread(fid,8,'*char')');
if hdr.records == -1
beep
disp('There appears to be a problem with this file; it returns an out-of-spec value of -1 for ''numberOfRecords.''')
disp('Attempting to read the file with ''edfReadUntilDone'' instead....');
[hdr, record] = edfreadUntilDone(fname, varargin);
return
end
hdr.duration = str2double(fread(fid,8,'*char')');
The likely problem is that your character array s does not have 8 characters in it, but you expect there to be 8 when you read it from the file. Whatever the number of characters in the array is, that's how many values fwrite will write out to the file. Anything less than 8 characters and you'll end up reading part of the next piece of data when you read from the file.
One fix would be to pad s with blanks before writing it:
s = [blanks(8-numel(records)) records];
In addition, the syntax '*char' is only valid when using fread: the * indicates that the output class should be 'char' as well. It's unnecessary when using fwrite.
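For illustration, a minimal sketch of the corrected write, assuming records is a character array holding the record count as text:

s = [blanks(8 - numel(records)) records];  % pad to exactly 8 characters
fwrite(tfid, s, 'char');                   % plain 'char'; the '*char' form is only for fread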

Can I directly load text with numbers in CCC,CC format ? (K4)

I have input with floats stored like 1000,50, i.e. the decimal points are replaced by commas.
Is there an option in K to load these numbers directly into floats?
When using
data:("SFF" ;";",";") 0:. filename
I get 0ns, of course, because the numbers are not recognized as floats.
I load them as strings now, and convert them using ssr like
c:.:' .q.ssr'[data;",";"."]
but that is extremely slow.
Is there an option somewhere to have K load these numbers in CCC,CC format as floats directly? Normal format and ccc,cc format are not mixed; any file has just one of them.
If there is not, I imagine that it must be quite easy to replace a "." somewhere in the Q-binary where the load-function sits with a ",", to get a version which loads these numbers. Has anybody tried that? Or any other tip to load big files with these numbers in reasonable time?
Cheers,
Co
If ssr' is slow for your task you may find this tiny function useful:
c2p:{c:-1_sums count each x;p:ss[r:raze x;","];r[p]:".";(0,c) _ r}
Update: an alternative version:
c2p:{p:ss[r:raze x;","];r[p]:".";(0,-1_sums count'[x])_r}
It concatenates all strings into a single long string, finds positions of commas, replaces commas with periods then splits that long string:
q)N:1000000
q)s:string[N?100000],'",",'string N?1000
q)\t r1:ssr'[s;",";"."]
4284
q)\t r2:c2p s
242
q)r1~r2
1b
I was thinking something like find (?) combined with indexing/applying
q)N:1000000
q)s:string[N?100000],'",",'string N?1000
q)\ts {s[x;y]:"."}./:flip(til count s;s?\:",")
967 52972144
q)s
"93912.794"
"57144.788"
"77809.659"
"7839.47"
"6363.523"
"44761.244"
"65699.712"
It's not perfect but that's the general idea. I'm sure there is an easier way...

Reading large amount of data stored in lines from csv

I need to read in a lot of data (~10^6 data points) from a *.csv file:
- the data is stored in lines
- I can't know how many data points per line or how many lines there are before I read it in
- the number of data points per line can be different for each line
So the *.csv-file could look like this:
x Header
x1,x2
y Header
y1,y2,y3, ...
z Header
z1,z2
...
Right now I read in every line as a string and split it at every comma. This is what my code looks like:
index = 1;
headerLine = textscan(csvFileHandle,'%s',1,'Delimiter','\n');
while ~isempty(headerLine{1})
    dummy = textscan(csvFileHandle,'%s',1,'Delimiter','\n', ...
        'BufSize',2^31 - 1);
    rawData(index) = textscan(dummy{1}{1},'%f','Delimiter',',');
    headerLine = textscan(csvFileHandle,'%s',1,'Delimiter','\n');
    index = index + 1;
end
It's working, but it's pretty slow. Most of the time (~95%) is spent splitting the string with textscan.
I preallocated rawData with sample data, but it did next to nothing for speed.
Is there a better way than mine to read in something like this?
If not, is there a faster way to split this string?
First suggestion: to read a single line as a string when looping over a file, just use fgetl (returns a nice single string so no faffing with cell arrays).
Also, you might consider (if possible), reading everything in a single go rather than making repeating reads from file:
output = textscan(fid, '%*s%s','Delimiter','\n'); % skips headers with *
If the file is so big that you can't do everything at once, try to read in blocks (e.g. tackle 1000 lines at a time, parsing data as you go).
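A rough sketch of the block-wise idea, assuming the alternating header/data lines shown in the question; the file name and the block size of 1000 are only illustrative:

fid = fopen('data.csv','r');              % hypothetical file name
allLines = {};
while ~feof(fid)
    block = textscan(fid,'%*s%s',1000,'Delimiter','\n');  % skip header line, keep data line
    allLines = [allLines; block{1}];      %#ok<AGROW> grows once per block, which is cheap
end
fclose(fid);
% ...then convert each line in allLines to numbers, e.g. with sscanf as below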
For converting the string, there are the options of str2num or strsplit+str2double, but the only thing I can think of that might be slightly quicker than textscan is sscanf. Since sscanf doesn't accept the delimiter as a separate input, put it in the format string (the last value doesn't end with a comma, true, but sscanf can handle that).
output = output{1};              % textscan wraps the lines in an outer cell
data = cell(size(output));       % preallocate
for n = 1:length(output)
    data{n} = sscanf(output{n},'%f,');
end
Tests with a limited patch of test data suggest sscanf is a bit quicker (but this might depend on machine/version/data sizes).

How to get the number of columns of a csv file?

I have a huge csv file that I want to load with MATLAB. However, I'm only interested in specific columns whose names I know.
As a first step, I would like to just check how many columns the csv file has. How can I do that with matlab?
As Jonesy and erelender suggest, I would think this will do it:
fid=fopen(filename);
tline = fgetl(fid);
fclose(fid);
length(find(tline==','))+1
Since you don't seem to know what kind of carriage return character (or character encoding?) is being used, I would suggest progressively sampling your file until you encounter a recognizable CR character. One way to do this is to loop over something like
A = fscanf(fileID, ['%' num2str(N) 'c'], sizeA);
where N is the number of characters to read. At each iteration, test A for the presence of carriage return characters and stop if one is encountered. Once you know where the carriage return is, just repeat with the right N and perform the length(find...) operation, or alternatively accumulate the number of commas at each iteration. You may want to check that your file is being read along rows (is it always?); check a few samples to make sure it is.
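A minimal sketch of that idea, here using fread for the chunked reads instead of the fscanf line above; the file name and the chunk size of 1024 are only illustrative:

fid = fopen('huge.csv','r');                  % hypothetical file name
N = 1024;                                     % characters to sample per read
firstLine = '';
while true
    chunk = fread(fid, N, '*char')';          % next chunk as a char row vector
    cr = find(chunk == char(10) | chunk == char(13), 1);  % first LF or CR
    if ~isempty(cr)
        firstLine = [firstLine chunk(1:cr-1)];
        break
    end
    firstLine = [firstLine chunk];            %#ok<AGROW>
    if feof(fid), break, end
end
fclose(fid);
numCols = length(find(firstLine == ',')) + 1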
1-) Read the first line of file
2-) Count the number of commas, or separator characters if the separator is not a comma
3-) Add 1 to the count and the result is the number of columns in the file.
If the csv has only numeric values you can use:
M=csvread('file_name.csv');
[row,col]=size(M);

How to randomly select from a list of 47 names that are entered from a data file?

I have managed to input a number data file into a matrix but have been unable to do so for any data that is not a number.
I have a list of 47 names and am supposed to generate a random name from the list. I have tried to use the function textscan but was not getting anywhere. Also, how do I generate a random name from the list? All I have been able to do is generate a random number between 1 and 47.
I appreciate the replies. I should have said I need it in MATLAB, sorry.
Here is a sample list of data in my data file
name01
name02
name03
and the code to read it:
fid = fopen('names.dat','rt');
headerChars = fgetl(fid);
data = fscanf(fid,'%f,%f,%f,%f',[4 47]).';
fclose(fid);
The above is what I have to read the data file into a matrix, but it is only reading the first line. (Yes, it was modified from a previous post here on these forums :/)
Edit: As per the helpful comments from mtrw, and the fixed formatting of the sample data file, I've updated my answer with more detail.
With a single name (i.e. "Bob", "Bob Smith", or "Smith, Bob") on each line of the file, you can use the function TEXTSCAN by specifying '%s' as the format argument (to denote reading a string) and the newline character '\n' as the 'Delimiter' (the character that separates the strings in the file):
fid = fopen('namefile.txt','r');
names = textscan(fid,'%s','Delimiter','\n');
fclose(fid);
Then it's a matter of randomly picking one of the names. You can use the function RANDI to generate a random integer in the range from 1 to the number of names read from the file (found using the NUMEL function):
names = names{1}; %# Get the contents from the cell returned by TEXTSCAN
selectedName = names{randi(numel(names))};
Sounds like you're halfway home. Take that random number and use it as an index for the list.
For example, if you randomly generate the number 23, then fetch the 23rd entry in the list, which gives you a randomly drawn name.
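In MATLAB terms, a minimal sketch of that idea, assuming the names are already in a cell array (the variable names here are hypothetical):

idx = randi(numel(nameList));    % random integer from 1 to the number of names
randomName = nameList{idx};      % fetch the corresponding entry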
Use the RANDBETWEEN function to get a random number within your range. Use INDEX to get the actual cell value. For instance:
=INDEX(A1:A47, RANDBETWEEN(1, 47))
The above will work for your specific case of 47 names, assuming they're in column A. In general, you'd want something like:
=INDEX(MyNames, RANDBETWEEN(ROW(MyNames), ROW(MyNames) + ROWS(MyNames) - 1))
This assumes you've named your range of cells "MyNames" (for example, by selecting all the cells in your range and setting a name in the naming box). The above formula works by using the ROW function to return the top row of the MyNames array and the ROWS function to get the total rows in MyNames.