Matlab interp1 gives last row as NaN

I have a problem similar to here. However, it doesn't seem that there is a resolution.
My problem is as such: I need to import some files, for example, 5. There are 20 columns in each file, but the number of lines varies. Column 1 is time in terms of crank-angle degrees, and the rest are data.
So my code first imports all of the files, finds the file with the most rows, then creates a multidimensional array with that many rows. The timing is in engine cycles, so I then remove lines from each imported file that go beyond a whole engine cycle. This way, I always have data in terms of X whole engine cycles. Then I just interpolate the data onto the pre-allocated array to get one giant multidimensional array for the 5 data files.
However, this always seems to result in the last row of every column of every page being filled with NaNs. Please have a look at the code below; I can't see where I'm going wrong. Oh, and by the way, as I have been screwed over before, this is NOT homework.
maxlines = 0;
maxcycle = 999;
for i = 1:designs % loop over all designs, not just the first file
    filename = sprintf('C:\\Directory\\%s\\file.out',folder{i});
    file = filelines(filename); % Import file clean
    lines = size(file,1); % Find number of lines in file
    if lines > maxlines
        maxlines = lines; % If this file has the most lines so far, save it
    end
    lastCAD = file(end,1); % Add simstart to shift the start of the cycle to 0 CAD
    lastcycle = fix((lastCAD-simstart)./cycle); % Find number of whole engine cycles
    if lastcycle < maxcycle
        maxcycle = lastcycle; % Find lowest number of whole engine cycles amongst all designs
    end
    cols = size(file,2); % Find number of columns in files
end
lastcycleCAD = maxcycle.*cycle+simstart; % Define last CAD of whole cycle that can be used for analysis
% Import files
thermo = zeros(maxlines,cols,designs); % Initialize array to proper size
xq = linspace(simstart,lastcycleCAD,maxlines); % Define the CAD degrees
for i = 1:designs
    filename = sprintf('C:\\Directory\\%s\\file.out',folder{i});
    file = importthermo(filename, 6, inf); % Import the file clean
    [~,lastcycleindex] = min(abs(file(:,1)-lastcycleCAD)); % Find index of end of last whole cycle
    file = file(1:lastcycleindex,:); % Remove all CAD after that
    thermo(:,1,i) = xq;
    for j = 2:17
        thermo(:,j,i) = interp1(file(:,1),file(:,j),xq);
    end
    sprintf('file from folder %s imported OK',folder{i})
end
thermo(end,:,:) = []; % Remove NaN row
Thank you very much for your help!

Are you sampling outside the range of your data? interp1 returns NaN for query points that lie outside the range of the sample points; if that is what's happening, you need to tell interp1 that you want extrapolation:
interp1(file(:,1),file(:,j),xq,'linear','extrap');
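To see the behaviour in isolation, here is a minimal sketch with made-up data, where the last query point lies just outside the sampled range:
x = 0:10;                             % sample points
y = x.^2;                             % sample values
xq = linspace(0, 10.001, 5);          % last query point is just beyond x(end)
interp1(x, y, xq)                     % last element comes back as NaN
interp1(x, y, xq, 'linear', 'extrap') % last element is extrapolated instead
In the question's code the same thing can happen: xq runs all the way to lastcycleCAD, but min(abs(file(:,1)-lastcycleCAD)) may pick an index whose CAD is slightly before lastcycleCAD, so the truncated file(:,1) ends just short of the last query point.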


Optimizing reading the data in Matlab

I have a large data file with text formatted as a single column with n rows. Each row is either a real number or a string with the value No Data. I have imported this text as an nx1 cell array named Data. Now I want to filter the data and create an nx1 numeric array out of it, with NaN values in place of No Data. I have managed to do it using a simple loop (see below); the problem is that it is quite slow.
z = zeros(n,1);
for i = 1:n
    if Data{i}(1) ~= 'N'
        z(i) = str2double(Data{i});
    else
        z(i) = NaN;
    end
end
Is there a way to optimize it?
Actually, the whole parsing can be performed with a one-liner using a properly parametrized readtable function call (no iterations, no sanitization, no conversion, etc...):
data = readtable('data.txt','Delimiter','\n','Format','%f','ReadVariableNames',false,'TreatAsEmpty','No data');
Here is the content of the text file I used as a template for my test:
9.343410
11.54300
6.733000
-135.210
No data
34.23000
0.550001
No data
1.535000
-0.00012
7.244000
9.999999
34.00000
No data
And here is the output (which can be retrieved in the form of a vector of doubles using data.Var1):
ans =
9.34341
11.543
6.733
-135.21
NaN
34.23
0.550001
NaN
1.535
-0.00012
7.244
9.999999
34
NaN
Delimiter: specified as a line break, since you are working with a single column... this prevents No data from producing two columns because of the whitespace.
Format: you want numerical values.
TreatAsEmpty: this tells the function to treat a specific string as empty, and empty doubles are set to NaN by default.
If you run this you can find out which approach is faster. It creates an 11MB text file and reads it with the various approaches.
filename = 'data.txt';

%% generate data
fid = fopen(filename,'wt');
N = 1E6;
for ct = 1:N
    val = rand(1);
    if val < 0.01
        fwrite(fid,sprintf('%s\n','No Data'));
    else
        fwrite(fid,sprintf('%f\n',val*1000));
    end
end
fclose(fid);

%% Tommaso Belluzzo
tic
data = readtable(filename,'Delimiter','\n','Format','%f','ReadVariableNames',false,'TreatAsEmpty','No Data');
toc

%% Camilo Rada
tic
[txtMat, nLines] = txt2mat(filename);
NoData = txtMat(:,1)=='N';
z = zeros(nLines,1);
z(NoData) = nan;
toc

%% Gelliant
tic
fid = fopen(filename,'rt');
z = textscan(fid, '%f', 'Delimiter','\n', 'whitespace',' ', 'TreatAsEmpty','No Data', 'EndOfLine','\n','TextType','char');
z = z{1};
fclose(fid);
toc
result:
Elapsed time is 0.273248 seconds.
Elapsed time is 0.304987 seconds.
Elapsed time is 0.206315 seconds.
txt2mat is slow: even without converting the resulting string matrix to numbers, it is outperformed by readtable and textscan. textscan is slightly faster than readtable, probably because it skips some of the internal sanity checks and does not convert the resulting data to a table.
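Incidentally, tic/toc timings like these vary from run to run. As a side note, a sketch using timeit (R2013b+), which calls a function several times and reports a typical execution time, might look like this (the handle f is made up for illustration):
f = @() readtable(filename,'Delimiter','\n','Format','%f', ...
    'ReadVariableNames',false,'TreatAsEmpty','No Data');
t = timeit(f) % typical runtime in seconds, averaged over several calls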
Depending on how big your files are and how often you read such files, you might want to go beyond readtable, which can be quite slow.
EDIT: After testing, with a file this simple the method below provides no advantage. The method was developed to read RINEX files, which are large and complex in the sense that they are alphanumeric, with different numbers of columns and different delimiters in different rows.
The most efficient way I've found is to read the whole file as a char matrix; then you can easily find your "No data" lines. And if your real numbers are formatted with fixed width, you can transform them from char into numbers much more efficiently than with str2double or similar functions.
The function I wrote to read a text file into a char matrix is:
function [txtMat, nLines] = txt2mat(filename)
% txt2mat Read the content of a text file into a char matrix
%   Reads all the content of a text file into a matrix as wide as the
%   longest line in the file. Shorter lines are padded with blank spaces.
%   New lines are not included in the output.
%   New lines are identified by newline \n characters.

% Read the whole file into a string
fid = fopen(filename,'r');
fileData = char(fread(fid));
fclose(fid);
% Find the new line positions
newLines = fileData==sprintf('\n');
linesEndPos = find(newLines)-1;
% Calculate the number of lines
nLines = length(linesEndPos);
% Calculate the width (number of characters) of each line
linesWidth = diff([-1; linesEndPos])-1;
% Number of characters per row including new lines
charsPerRow = max(linesWidth)+1;
% Initialize the output var with blank spaces
txtMat = char(zeros(charsPerRow,nLines,'uint8')+' ');
% Compute a logical index from all characters of the input string to
% their final positions
charIdx = false(charsPerRow,nLines);
% Indexes of all new lines
linearInd = sub2ind(size(txtMat), (linesWidth+1)', 1:nLines);
charIdx(linearInd) = true;
charIdx = cumsum(charIdx)==0;
% Fill the output matrix
txtMat(charIdx) = fileData(~newLines);
% Crop the last row, corresponding to the newline characters, and transpose
txtMat = txtMat(1:end-1,:)';
end
Then, once you have all your data in a matrix (let's assume it is named txtMat), you can do:
NoData = txtMat(:,1)=='N';
And if your number fields have fixed width, you can transform them to numbers far more efficiently than with str2num, with something like:
values = (txtMat(:,1:10)-'0') * [1e6; 1e5; 1e4; 1e3; 1e2; 10; 1; 0; 1e-1; 1e-2];
Here I've assumed the numbers have 7 digits and two decimal places, but you can easily adapt it to your case.
And to finish you need to set the NaN values with:
values(NoData) = NaN;
This is more cumbersome than readtable or similar functions, but if you are looking to optimize the reading, it is WAY faster. And if you don't have fixed-width numbers you can still do it this way, by adding a couple of lines to count the number of digits and find the place of the decimal point before doing the conversion. That will slow things down a little, but I think it will still be faster.
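To make the digit-weight trick concrete, here is a tiny hypothetical check with a made-up fixed-width value (7 digits, a decimal point, 2 decimals, matching the weight vector above; note this simple form does not handle minus signs):
row = '0012345.67';                                   % one fixed-width field as char
w   = [1e6; 1e5; 1e4; 1e3; 1e2; 10; 1; 0; 1e-1; 1e-2];
val = (row - '0') * w                                 % 12345.67; the '.' column has weight 0, so it drops out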

Only Import File when it contains certain numbers from a Table

I have a couple hundred sensor measurement files, all containing the date and time of measurement. All the file names include the date and time. Example:
07-06-2016_17-58-32.wf
07-06-2016_18-02-32.wf
...
...
08-06-2016_17:48-26.wf
I have a function (importfile) and a loop that imports my data. The loop looks like this:
Files = dir('C:\Osci\User\*.waveform');
numFiles = length(Files);
Data = cell(1, numFiles);
for fileNum = 1:numFiles
    Data{fileNum} = importfile(Files(fileNum).name);
end
Not all of these waveform files are useful. The measurement files are only useful if they were generated in a certain time period. I have a table that shows my allowed time periods:
07-Jun-2016 18:00:01
07-Jun-2016 18:01:31
07-Jun-2016 18:02:01
...
I want to modify my loop so that the .waveform files are only imported if the numbers for day (first number), hour (4th number) and minute (5th number) in the file name match the numbers in the table of allowed time periods.
EDIT: Rather than a scalar hour, minute, and second, there is a vector of each. In my case, MyDay, MyHour and MyMinute are 1100x1 vectors, while fileTimes only has 361 rows.
So, using the provided example the loop should only import file
07-06-2016_18-02-32.wf
since it is the only one where the numbers match (in this case 7, 18, 02).
EDIT2: Using @erfan's answer (and changing some directories and variable names) I have the following working code:
fmtstr = 'O:\\Basic_Research_All\\Lange\\Skripe ISAT\\Rohdaten\\*_%02i-*-*_%02i-%02i-*.wf';
Files = struct([]);
n = size(MyDayMyHourMyMinute,1);
for N = 1:n
    Files = [Files; dir(sprintf(fmtstr, MyDayMyHourMyMinute(N,:)))];
end
numFiles = length(Files);
WaveformData = cell(1, numFiles);
for fileNum = 1:numFiles
    WaveformData{fileNum} = importfile(Files(fileNum).name);
end
Since your filenames are pretty well defined as dates and times, you can prefilter your list by turning them into actual dates and times:
% Get the file list
Files = dir('C:\Osci\User\*.waveform');
% You only need the names
Files = {Files.name};
% Get just the filename w/o the extension
[~, baseFileNames] = cellfun(@(x) fileparts(x), Files, 'UniformOutput', false);
% Your filename is just a date, so parse it as such
fileTimes = datevec(baseFileNames, 'mm-dd-yyyy_HH-MM-SS');
% Now pick out the files you want
% goodFiles = fileTimes(:, 4) == myHour & fileTimes(:, 5) == myMinute & fileTimes(:, 6) == mySecond;
goodFiles = ismember(fileTimes(:, 4:6), [myHour(:), myMinute(:), mySecond(:)], 'rows');
% Pare down your list of filenames
Files = Files(goodFiles);
% Preallocate your data cell
Data = cell(1, numel(Files));
% Now do your loop
for idx = 1:numel(Data)
    Data{idx} = importfile(Files{idx});
end
You will, of course, need to define myHour, myMinute and mySecond. Using the logical indexing in goodFiles, you could impose any sort of time criteria, such as a time or date range. If you find that your filenames aren't so well defined, you can parse out the pieces you want with textscan or strfind. The important thing is that cell arrays can be indexed in much the same way as numeric or string arrays, and it's often better to vectorize your filter criteria and then loop only over the parts you have to.
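For instance, here is a hypothetical sketch of pulling the day/hour/minute numbers straight out of a name like the ones in the question with sscanf instead of datevec (the field order is assumed from the question's day-first naming scheme):
name = '07-06-2016_18-02-32.wf';
v = sscanf(name, '%d-%d-%d_%d-%d-%d'); % [day; month; year; hour; minute; second]
dayHourMin = v([1 4 5])'               % -> [7 18 2], as in the question's example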
The OP indicated in a comment that rather than a scalar hour, minute, and second, there is a vector of each. In that case, use ismember to match the two sets of times and return a logical index vector. With R2015a, MathWorks introduced the function ismembertol, which allows one to check membership within a certain tolerance.
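As a hypothetical sketch of the tolerant variant (assuming R2015a or later; the one-second tolerance is made up for illustration):
% Match [hour minute second] rows while allowing up to 1 second of drift;
% 'DataScale',1 makes the tolerance absolute rather than relative.
goodFiles = ismembertol(fileTimes(:,4:6), [myHour(:), myMinute(:), mySecond(:)], ...
    1, 'ByRows', true, 'DataScale', 1);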
You can apply your selection from the beginning. Imagine the acceptable values for day, hour and minute are saved in acc as an n*3 matrix. If you replace the first line of your code with:
fmtstr = 'C:\Osci\User\%02i-*-*_%02i-%02i-*.wf';
Files = struct([]);
for ii = 1:n
    Files = [Files; dir(sprintf(fmtstr, acc(ii,:)))];
end
Then you have already applied your criteria to Files. The rest is the same.

Variable labels in MATLAB

I have a huge table, data = {1000 x 1000}, of binary data.
The table's variable names are encoded, e.g. D1, D2, ..., DA2, DA3, ..., with their real labels given in a .txt file.
The .txt file also contains some text, for example:
D1: Age
Mean age: 33
Median :
.
.
.
D2: weight
I would just like to pick out these names from the text file and create a table with the real variable names.
Any suggestions?
If there is a fixed number of lines between the labels, you can extract them by reading in the file and looping over the relevant lines. For each label, it's simple to extract it with strsplit().
e.g. let's say there are 5 lines between each label:
uselessLines = 5;
% imports as a vertical cell array with one element per line of the file
dataLabelsFile = importdata(filename);
% get the total number of lines
numLines = numel(dataLabelsFile);
% pre-allocate a cell array for the labels (cells are used for strings)
dataLabels = cell(ceil(numLines/(uselessLines+1)), 1);
% use a separate counting variable
m = 1;
% now, for each label, we add it to the dataLabels array
for i = 1:(uselessLines+1):numLines
    line = strsplit(dataLabelsFile{i}); % by default splits on whitespace
    dataLabels(m) = line(2);
    m = m + 1;
end
By the end of that loop you should have a variable called dataLabels that holds all of the labels. Now you can very easily work out which label goes with which set of data, provided they are still in the same order: the index of the label matches the index of the data.
This is a method you could try if the labels are evenly spaced.
However, if the labels are separated by a random number of lines, then you probably want to check each line with a regular expression, as the person below me has suggested. Then you just replace the last two lines of the loop with something like this:
...
if ~isempty(regexp(dataLabelsFile{i}, '^[A-Z]+\d*:', 'once')) % matched a label line like 'D1: Age'
    dataLabels(m) = line(2);
    m = m + 1;
end
...
That being said, while regular expressions are flexible, if you can get away with replacing one with literally one function call, it's usually better to do that. The efficiency of a regex is determined by the skill of the programmer, while built-in functions have generally been tested by some of the best programmers in the world. Additionally, regexes are harder to understand if you ever want to go back and change them.
Of course there are times when regexes are amazing; I'm just not convinced this is one of them.
An implementation of the approach in my earlier comment:
fid = fopen(filename);
varNames = cell(0);
proceed = true;
while proceed
    line = fgetl(fid);
    if ischar(line)
        startIdx = regexp(line,'(?<=^[A-Z]*\d*:)\s');
        if ~isempty(startIdx)
            varNames{end+1} = strtrim(line(startIdx:end)); %#ok<SAGROW>
        end
    else
        proceed = false;
    end
end
fclose(fid);
I can't put the resulting varNames into a table for you, since I have a version of Matlab that does not support tables.
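For what it's worth, on a version that does support tables (R2013b+), a minimal sketch of applying the extracted names might look like this (it assumes data has one column per extracted label; matlab.lang.makeValidName needs R2014a+ and sanitizes the labels into valid variable names):
T = array2table(data, 'VariableNames', matlab.lang.makeValidName(varNames));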

Performance issues when processing huge text files

I need to extract data from a text file that has both numbers and characters in it. The data I want (the numbers) are separated by rows of characters describing the following dataset. The text file is rather large (>2,000,000 lines).
I am trying to put every dataset (the rows of numbers between two rows of characters) into a matrix. Each matrix should be named according to the description (frequency) in the text line above its dataset. I have working code, but I face performance problems: one file currently takes about 15 minutes, and I need the numbers in matrices to process them further. Maybe someone can help me speed it up.
Snippet from the text file:
21603 2135 21339 21604
103791 94 1 1 1 4
21339 1702 21600 21604
-1
-1
2414
1
Velocity (magnitude) Response at Structural FE Nodes
1
Frequency = 10.00 Hz
Result = Engineering Units
Component = Vmag
Location =
Form & Units = RMS Magnitude in m/s
1 5 1 11 2 1
1 0 1 1 1 0 0 0
1 2161
0.00000e+000 1.00000e+001 0.00000e+000 0.00000e+000 0.00000e+000 0.00000e+000
0.00000e+000 0.00000e+000 0.00000e+000 0.00000e+000 0.00000e+000 0.00000e+000
20008
1.23285e-004
20428
1.21613e-004
Here is my code:
file = 'large_file.txt';
fid = fopen(file,'r');
k = 1;
filerows = 2164986; % nr of rows in textfile
A = zeros(filerows,6); % preallocate matrix where the textfile should be saved in
for count = 1:8 % get rid of first 8 lines
    fgets(fid);
end
name = 0;
start = 1;
while ~feof(fid)
    a = fgets(fid);
    b = str2double(strread(a,'%s')); % turn the read row into a vector
    if isnan(b(1)) % check whether there are characters in the row
        if strfind(a,'Frequency') % check if 'Frequency' is in the row
            Matrixname = sprintf('Frequency%i=A(%i:%i,:);',name,start,k);
            eval(Matrixname);
            name = b(3);
            for count = 1:10 % get rid of next 10 lines
                fgets(fid);
            end
            start = k+1;
        end
    else % if there are just numbers in the row, insert it into the matrix
        A(k,1:length(b)) = b; % populate matrix A with the row entries
        k = k+1;
    end
    k/filerows % show progress
end
fclose(fid);
Matrixname = sprintf('Frequency%i=A(%i:end,:);',name,start);
eval(Matrixname);
Reading text files line by line is very time consuming, especially in Matlab. When I have to read in text files, I usually read in the entire file at once. You may be limited by memory, so read it in the largest size chunks your machine can handle. Once it's all in memory, use some kind of logical indexing to find the parts of the data you're interested in. Again, in Matlab, for and while loops are very slow. For the data set you have there, I would do the following:
fid = fopen(file);
data = fread(fid,[1 maxBytes],'char=>char'); % maxBytes: the largest chunk you can hold in memory
blockIndices = strfind(data,'Velocity'); % Calculate offsets based on data format

% Another approach, much faster than for loops
lineData = regexp(data,sprintf('\n'),'split'); % Now each line is in a cell
processedData = cellfun(@processData,lineData,'Uniform',false);

function y = processData(x)
% do something with x
end
Once I had the block indices I could calculate offsets to the parts of the data I want. I don't think two million lines is that much data: most computers these days have multiple gigabytes of memory, and it doesn't look like each line is more than a couple hundred characters, so the file is probably less than half a GB. Good luck.
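A hypothetical sketch of what "calculate offsets" could look like, slicing the raw character data into one chunk per 'Velocity' header (the variable names are made up):
blockStarts = [blockIndices numel(data)+1]; % append an end-of-data sentinel
blocks = cell(1, numel(blockIndices));
for b = 1:numel(blockIndices)
    blocks{b} = data(blockStarts(b):blockStarts(b+1)-1); % one header plus its numbers
end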
Using the matlab profiler will help you see which lines of code are taking the most amount of time so that you can figure out what to optimize.
As the original poster determined, the line causing trouble in this case was
k/filerows % show progress
Printing to the screen many times is very time consuming. If you want to show progress without slowing the code way down, you could do
if mod(k, round(filerows/100)) == 0 % filerows/100 is not an integer, so round it
    fprintf('%d rows processed\n', k);
end
That code will cause an update to be displayed 100 times, or every 3.5 seconds in that particular case.
If you want to get really fancy, check out waitbar, but it is usually overkill.
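If you do want the graphical route, a minimal waitbar sketch might look like this (update it sparingly, for the same reason as above):
h = waitbar(0, 'Processing file...');
% inside the loop, update only occasionally:
if mod(k, 10000) == 0
    waitbar(k/filerows, h);
end
% after the loop:
close(h);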
Finally I got the sscanf solution to work. I used that function to replace str2double to gain some speed, as suggested in Why is str2double so slow in matlab as compared to a mex-function?.
Sadly it didn't do too much, but at least it helped a bit.
So, the starting point was ca. 850 s.
Per the profiler, after removing the progress display: ca. 450 s.
Per the profiler, after replacing str2double with sscanf: ca. 330 s.
The code now is:
file = 'test.txt';
fid = fopen(file,'r');
k = 1;
filerows = 2164986; % nr of rows in textfile
A = zeros(filerows,6); % preallocate matrix where the textfile should be saved in
for count = 1:8 % get rid of first 8 lines
    fgets(fid);
end
name = 0;
start = 1;
while ~feof(fid)
    a = fgets(fid);
    b = strread(a,'%s');
    b = sscanf(sprintf('%s#', b{:}), '%g#')';
    if isempty(b) % check whether there had been characters in the row
        if strfind(a,'Frequency') % check whether 'Frequency' was in the row
            Matrixname = sprintf('Frequency%i=A(%i:%i,:);',name,start,k);
            eval(Matrixname);
            b = str2double(strread(a,'%s'));
            name = b(3);
            for count = 1:8 % get rid of next 8 lines
                fgets(fid);
            end
            start = k+1;
        end
    else % if there were just numbers in the row, insert it into the matrix
        A(k,1:length(b)) = b; % populate matrix A with the row entries
        k = k+1;
    end
end
fclose(fid);
Matrixname = sprintf('Frequency%i=A(%i:%i,:);',name,start,k);
eval(Matrixname);

Explanation for MATLAB code

I'm new to MATLAB.
Can someone explain the following code to me? It is used for training a neural network.
N = xlsread('data.xls','Sheet1');
N = N(1:150,:);
UN = xlsread('data.xls','Sheet2');
UN = UN(1:150,:);
traindata = [N ; UN];
save('traindata.mat','traindata');
label = [];
for i = 1 : size(N,1)*2
    if( i <= size(N,1))
        % label = [label ;sum(traindata(i,:))/size(traindata(i,:),2)];
        label = [label ;sum(traindata(i,:))/10];
    else
        % label = [label ;sum(traindata(i,:))/size(traindata(i,:),2)];
        label = [label ;sum(traindata(i,:))/10];
    end
end
weightMat = BpTrainingProcess(4,0.0001,0.1,0.9,15,[size(traindata,1) 1],traindata,label);
I cannot find a Neural Network toolbox built-in that corresponds to BpTrainingProcess(), so this must be a file you have access to locally (or you need to obtain from the person who gave you this code). It likely strings together several function calls to Neural Network toolbox functions, or perhaps is an original implementation of a back-propagation training method.
Otherwise, the code has some drawbacks. For one, the interior if-else statement doesn't actually do anything: both branches are identical, and even the lines that are commented out would leave a totally useless if-else setup. It looks like the if-else is intended to let you do different label normalization for the data loaded from Sheet1 of the Excel file vs. data loaded from Sheet2. Maybe that is important for you, but it's currently not happening in the program.
Lastly, the code uses an empty array for label and then proceeds to append rows to it. This is unneeded, because you already know how many rows there will be (it will total size(N,1)*2 = 150*2 = 300 rows). You could just as easily set label = zeros(300,1) and then use ordinary indexing at each iteration of the for-loop: label(i) = .... This saves time and space, but arguably won't matter much for a 300-row data set (assuming that the length of each row is not too large).
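For illustration, here is a sketch of the preallocated version; since both branches of the if-else are identical, the whole loop also collapses to a single vectorized line:
label = zeros(size(traindata,1), 1); % preallocate: 300 rows in this example
for i = 1:size(traindata,1)
    label(i) = sum(traindata(i,:)) / 10;
end
% ...or, equivalently, with no loop at all:
label = sum(traindata, 2) / 10;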
I put documentation next to the code below.
% The function 'xlsread()' reads data from an Excel file.
% Here it is storing the values from Sheet 1 of the file 'data.xls'
% into the variable N, and then using the syntax N = N(1:150,:) to
% change N from being all of the data into being only the first
% 150 rows of the data
N = xlsread('data.xls','Sheet1');
N = N(1:150,:);
% Now do the same thing for Sheet 2 from the Excel file.
UN = xlsread('data.xls','Sheet2');
UN = UN(1:150,:);
% This concatenates the two different data arrays together, making
% one large array where N is the top half and UN is the bottom half.
% This is basically just stacking N on top of UN into one array.
traindata = [N ; UN];
% This saves a copy of the newly stacked array into the Matlab data file
% 'traindata.mat'. From now on, you should be able to load the data from
% this file, without needing to read it from the Excel sheet above.
save('traindata.mat','traindata');
% This makes an empty array which will have new things appended to it below.
label = [];
% Because UN and N have the same number of rows, then the training data
% has twice as many rows. So this sets up a for loop that will traverse
% all of these rows of the training data. The 'size()' function can be
% used to get the different dimensions of an array.
for i = 1 : size(N,1)*2
    % Here, an if statement is used to check whether the current row number, i,
    % is less than or equal to the number of rows in N. This implies
    % that this part of the if-statement only handles the top half
    % of 'traindata', that is, the stuff coming from the variable N.
    if( i <= size(N,1))
        % The line below was already commented out. Maybe it had an old use
        % but is no longer needed?
        % label = [label ;sum(traindata(i,:))/size(traindata(i,:),2)];
        % This syntax appends new rows to the variable 'label', which
        % started out as an empty array. This is usually bad practice, memory-wise
        % and also for readability.
        % Here, the sum of the training data row is computed and divided by 10
        % in every case, then appended as a new row in 'label'. Hopefully,
        % if you are familiar with the data, you will know why the data in 'N'
        % always needs to be divided by 10.
        label = [label ;sum(traindata(i,:))/10];
    % Otherwise, if i > # of rows, handle the data differently.
    % Really this means the code below treats only data from the variable UN.
    else
        % The line below was already commented out. Maybe it had an old use
        % but is no longer needed?
        % label = [label ;sum(traindata(i,:))/size(traindata(i,:),2)];
        % Just like above, the data is divided by 10. Given that there
        % is nothing different about the code here and how it modifies 'label',
        % there is no need for the if-else statement, and it only wastes time.
        label = [label ;sum(traindata(i,:))/10];
    % This is needed to show the end of the if-else block.
    end
% This is needed to show the end of the for-loop.
end
% This appears to be a Back-Propagation Neural Network training function.
% This doesn't match any built-in Matlab function I can find, but you might
% check in the Neural Network toolbox to see if the local function
% BpTrainingProcess is a wrapper for a collection of built-in training functions.
weightMat = BpTrainingProcess(4, 0.0001, 0.1, 0.9, 15, ...
    [size(traindata,1) 1], traindata, label);
Here is a link to an example Matlab Neural Network toolbox function for back-propagation training. You might want to look around the documentation there to see if any of it resembles the interior of BpTrainingProcess().