Trying to save datenum - matlab

I have a small problem with saving datenum values in MATLAB.
I have a sensor which reads data in real time. I then add the time at which each reading was received by the computer.
I am constructing a matrix with the time, as given by the function now, in the first column and the data in the second. This is done in real time in MATLAB. Everything works perfectly until I have to save the data.
When saving the data, the dates are rounded automatically. If I now plot my time (the da variable), I get an increasing function.
However, if I plot mam(1,:), I get a flat line.
I have tried many things, but with the same result.
Do you know how I can save the matrix (ma) in MATLAB in such a way as to preserve all the decimals of the dates?
Here is a small script simulating my problem:
s=0;
j=1;
for i=1:10
s(j)=s(end)+i;
da(j)=now;
pause(1);
j=j+1;
end
ma= [da; s];
dlmwrite('mam.dat',ma);

The code you provided works fine. This can be verified by looking at the difference between ma(1,1) and ma(1,2) with ma(1,1) - ma(1,2), which does not return 0.
The rounding is occurring when the data is displayed. By default MATLAB displays only about five significant digits (format short); the command format('long') will cause the full precision to be displayed.
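For example (the value shown is only illustrative, not from the original post):
format long
ma(1,1)        % displays something like 7.371929591157871e+05 instead of 7.3719e+05
format short   % restore the default display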
Style note:
The logic in your loop is a little odd; here is a more MATLAB-like way to do what you've written above:
nSample = 10;
s = nan(nSample,1); % preallocate arrays, much faster for big arrays
da = nan(nSample,1);
for i = 1:nSample
    if i == 1
        s(i) = 1;
    else
        s(i) = s(i-1) + i;
    end
    da(i) = now;
    pause(1); % keep the one-second gap between readings, as in the original loop
end
ma = [da.'; s.']; % transpose so ma is 2-by-nSample, the same shape as before
dlmwrite('mam.dat', ma);
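As a further aside, the running sum in this example is just a cumulative sum, so s could also be computed without the loop (the timestamps in da still need one):
s = cumsum(1:nSample).'; % same values as the loop above, as a column vector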

If you want to save the data with as much precision as stored in the variables, export to a binary MAT-file instead of textual files:
save mam.mat ma
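As an aside (not part of the original answers): if you do need a text file, dlmwrite also accepts a 'precision' argument controlling how many significant digits are written, for example:
dlmwrite('mam.dat', ma, 'precision', 16); % write up to 16 significant digits per value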


Faster way to load .csv files in folder and display them using imshow in MATLAB

I have a piece of MATLAB code that works fine, but I wanted to know if there is any faster way of performing the same task, where each .csv file is a 768*768 matrix.
Current code:
for k = 1:143
matFileName = sprintf('ang_thresholded%d.csv', k);
matData = load(matFileName);
imshow(matData)
end
Any help in this regard will be very helpful. Thank You!
In general, it's better to separate the loading, computational and graphical stuff.
If you have enough memory, you should try to change your code to:
n_files = 143;
% If you know the size of your images a priori:
matData = zeros(768, 768, n_files); % preallocate for speed
for k = 1:n_files
    matFileName = sprintf('ang_thresholded%d.csv', k);
    matData(:,:,k) = load(matFileName);
end
seconds = 0.01;
for k = 1:n_files
    %clf; % Not needed in your case, but needed if you want to plot more than one thing (hold on)
    imshow(matData(:,:,k));
    pause(seconds); % control "framerate"
end
Note the use of pause().
Here is another option using MATLAB's datastores, which are designed to work with large datasets or lots of smaller sets. The TabularTextDatastore is specifically for this kind of text-based data.
Something like the following. However, note that since I don't have any test files, it is a sort of notional example ...
ttds = tabularTextDatastore('.\yourDirPath\*.csv'); %Create the data store
while ttds.hasdata %This turns false after reading the last file.
temp = read(ttds); %Returns a Matlab table class
imshow(temp.Variables)
end
Since it looks like your filenames' numbering is not zero-padded (e.g. 1 instead of 001), the file order might get scrambled, so that may need to be addressed as well. Anyway, I thought this might be a good alternative approach worth considering, depending on what else you want to do with the data and how much of it there might be.
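If the ordering does turn out to be a problem, one possible fix (my own sketch, assuming the datastore's Files property can be reassigned and the ang_thresholded<k>.csv naming from the question) is to sort the file list by its numeric suffix before reading:
[~, names] = cellfun(@fileparts, ttds.Files, 'UniformOutput', false);
nums = cellfun(@(s) sscanf(s, 'ang_thresholded%d'), names); % numeric suffix of each file
[~, order] = sort(nums);
ttds.Files = ttds.Files(order); % read the files in numeric order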

How to merge outputs that are produced in for-loops separately in Matlab

I have some code that is executed in a for loop at the moment, but I will eventually use parfor. That is why I need to save the output for each loop separately:
for Year = 2008:2016
for PartOfYear = 1:12
% some code that produces numerical values, vectors and strings
end
end
I want to save the outputs for each loop separately and in the end merge them together, so that all the outputs are vertically concatenated, starting with Year = 2008, PartOfYear = 1 in the first row, then Year = 2008, PartOfYear = 2, and so on. I am stuck as to how to write this code - I looked into tables, cells, the eval and the sprintf functions but couldn't make it work for my case.
You can use a cell array (that's what I use mostly).
Check out the code:
a = 1; % some random const
OParray = cell(1);
idx = 1; colforYear = 1; colforPart = 2; colforA = 3;
for Year = 2008:2016
    for PartOfYear = 1:12
        str1 = 'monday';
        a = a + 1; % some random operation
        outPut = strcat(str1, num2str(a));
        OParray{idx, colforYear} = Year;
        OParray{idx, colforPart} = PartOfYear;
        OParray{idx, colforA} = outPut;
        idx = idx + 1;
    end
end
Steer clear of eval: it makes code very difficult to debug and interpret, and in any case creating dynamic variables isn't recommended as good practice in MATLAB. Also, always index starting from 1 and going upwards, because it just makes your life easier in data handling.
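Since the question also mentions tables: once the loop finishes, a cell array like OParray can be converted in one call (my own aside; the names passed to 'VariableNames' are just illustrative):
T = cell2table(OParray, 'VariableNames', {'Year', 'PartOfYear', 'Output'});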
You're best off creating a container (a cell array works well, as below) and saving each output as an element indexed with the same value as the one in your for loop. Something like:
Years = 2008:1:2016;
for Year = 1:length(Years)
    for PartofYear = 1:12
        Monthly_Out{PartofYear} = []; % whatever code generates your output goes here
    end
    Yearly_Out{Year} = vertcat(Monthly_Out{:});
end
Total_Output = vertcat(Yearly_Out{:});
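Since the eventual goal stated in the question is parfor, here is a sketch (my own, not from either answer) of how the same cell-collection idea can be made parfor-friendly: collect each year's twelve outputs locally, then assign them as one slice of the outer cell array.
Years = 2008:2016;
out = cell(numel(Years), 12);          % one cell per (Year, PartOfYear) pair
parfor y = 1:numel(Years)
    Year = Years(y);                   % the actual year, for use by your own code
    row = cell(1, 12);                 % collect this year's 12 outputs locally
    for PartOfYear = 1:12
        row{PartOfYear} = [];          % placeholder: whatever code generates your output
    end
    out(y,:) = row;                    % sliced assignment, allowed in parfor
end
outT = out.';                          % transpose so {:} walks months within each year
Total_Output = vertcat(outT{:});       % 2008/1, 2008/2, ..., 2016/12 from top to bottom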

MATLAB error message when plotting big data files?

I have written code to plot data from very large .txt files (20 GB to 60 GB). The .txt files contain two columns of data, representing the outputs of two sensors from an experiment that I did. The reason the data files are so large is that the data was recorded at 4M samples/s.
The code works well for plotting relatively small .txt files (10 GB), however when I try to plot my larger data files (60 GB) I get the following error message:
Attempted to access TIME(0); index must be a positive integer or logical.
Error in textscan_loop (line 17)
TIME = ((TIME(end)+sample_rate):sample_rate:(sample_rate*(size(d,1)))+(TIME(end)));%shift Time along
The basic idea behind my code is to conserve RAM by reading Nlines of data from the .txt file on disk into the MATLAB variable C in RAM, plotting C, then clearing C. This process occurs in a loop, so the data is plotted in chunks until the end of the .txt file is reached. The code can be found below:
Nlines = 1e6; % set number of lines to sample per cycle
sample_rate = (1); %sample rate
DECE= 1000;% decimation factor
TIME = (0:sample_rate:sample_rate*((Nlines)-1));%first instance of time vector
format = '%f\t%f';
fid = fopen('H:\PhD backup\Data/ONK_PP260_G_text.txt');
while(~feof(fid))
    C = textscan(fid, format, Nlines, 'CollectOutput', true);
    d = C{1}; % immediately clear C at this point you need the memory!
    clearvars C ;
    TIME = ((TIME(end)+sample_rate):sample_rate:(sample_rate*(size(d,1)))+(TIME(end)));%shift Time along
    plot((TIME(1:DECE:end)),(d(1:DECE:end,:)))%plot and decimate
    hold on;
    clearvars d;
end
fclose(fid);
I think the while loop does around 110 cycles before the code stops executing and the error message is displayed; I know this because the graph shows around 110e7 data points and the loop processes 1e6 data points at a time.
If anyone knows why this error might be occurring please let me know.
Cheers,
Jim
The error that you encounter is in fact not in the plotting, but in the line referenced by the error message.
Though I have been unable to reproduce the exact error, I suspect it to be related to this:
Time = 1:0
Time(end)
In any case, the way forward is clear. You need to run this code with dbstop if error and observe all relevant variables in the line that throws the error.
From here you will likely figure out what is causing the problem, hopefully just something simple like your code being unable to deal with a data size that is an exact multiple of 1000 or so.
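One concrete guard worth trying (my guess, building on the diagnosis above): if a textscan call returns no rows, the colon expression produces an empty TIME, and TIME(end) then fails on the next pass. Inside the while loop, right after d is extracted, something like this would avoid the crash:
if isempty(d) % nothing was read in this chunk; stop before TIME becomes empty
    break
end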
Trying to use plot for big data is problematic, as MATLAB tries to plot every single data point.
Obviously the screen cannot display all of these points (many will overlap), and therefore it is recommended to plot only the relevant points. One could subsample and do this manually, as you seem to have tried, but fortunately there is a ready-to-use solution for this:
The Plot (Big) File Exchange Submission
Here is the introduction:
This simple tool intercepts data going into a plot and reduces it to
the smallest possible set that looks identical given the number of
pixels available on the screen. It then updates the data as a user
zooms or pans. This is useful when a user must plot a very large
amount of data and explore it visually.
This works with MATLAB's built-in line plot functions, allowing the
functionality of those to be preserved.
Instead of:
plot(t, x);
One could use:
reduce_plot(t, x);
Most plot options, such as multiple series and line properties, can be
passed in too, such that 'reduce_plot' is largely a drop-in
replacement for 'plot'.
h = reduce_plot(t, x(1, :), 'b:', t, x(2, :), t, x(3, :), 'r--*');
This function works on plots where the "x" data is always increasing,
which is the most common, such as for time series.

Find a Binary Data Sequence in a Signal

Here's my goal:
I'm trying to find a way to search through a data signal and find (index) locations where a known, repeating binary data sequence is located. Then, because the spreading code and demodulation are known, pull out the corresponding chip of data and read it. Currently, I believe xcorr will do the trick.
Here's my problem:
I can't seem to interpret my result from xcorr or xcorr2 to give me what I'm looking for. I'm either having a problem cross-referencing from the vector location of my xcorr function to my time vector, or a problem properly identifying my data sequence with xcorr, or both. Other possibilities may exist.
Where I am at/What I have:
I have created a random BPSK signal that consists of the data sequence of interest and garbage data over a repeating period. I have tried processing it using xcorr, which is where I am stuck.
Here's my code:
%% Clear Variables
clc;
clear all, close all;
%% Create random data
nbits = 2^10;
ngarbage = 3*nbits;
data = randi([0,1],1,nbits);
garbage = randi([0,1],1,ngarbage);
stream = horzcat(data,garbage);
%% Convert from Unipolar to Bipolar Encoding
stream_b = 2*stream - 1;
%% Define Parameters
%%% Variable Parameters
nsamples = 20*nbits;
nseq = 5 %# Iterate stream nseq times
T = 10; %# Number of periods
Ts = 1; %# Symbol Duration
Es = Ts/2; %# Energy per Symbol
fc = 1e9; %# Carrier frequency
%%% Dependent Parameters
A = sqrt(2*Es/Ts); %# Amplitude of Carrier
omega = 2*pi*fc %# Frequency in radians
t = linspace(0,T,nsamples) %# Discrete time from 0 to T periods with nsamples samples
nspb = nsamples/length(stream) %# Number of samples per bit
%% Creating the BPSK Modulation
%# First we have to stretch the stream to fit the time vector. We can quickly do this using
%# simple matrix manipulation.
% Replicate each bit nspb/nseq times
repStream_b = repmat(stream_b',1,nspb/nseq);
% Tranpose and replicate nseq times to be able to fill to t
modSig_proto = repmat(repStream_b',1,nseq);
% Tranpose column by column, then rearrange into a row vector
modSig = modSig_proto(:)';
%% The Carrier Wave
carrier = A*cos(omega*t);
%% Modulated Signal
sig = modSig.*carrier;
Using XCORR
I use xcorr2() to eliminate the zero-padding effect of xcorr on unequal vectors. See comments below for clarification.
corr = abs(xcorr2(data,sig)); %# pull the absolute correlation between data and sig
[val,ind] = sort(corr(:),'descend'); %# sort the correlation data and assign values and indices
ind_max = ind(1:nseq); %# pull the nseq highest valued indices and send to ind_max
Now, I think this should pull the five highest correlations between data and sig. These should correspond to the end bit of data in the stream for every iteration of stream, because I would think that is where the data would most strongly cross-correlate with sig, but they do not. Sometimes the maxes are not even one stream length apart. So I'm confused here.
Question
In a three part question:
Am I missing a certain step? How do I use xcorr in this case to find where data and sig are most strongly correlated?
Is my entire method wrong? Should I not be looking for the max correlations?
Or should I be attacking this problem from another angle, id est, not use xcorr and maybe use filter or another function?
Your overall method is great and makes a lot of sense. The problem you're having is that you're getting some actual correlation with your garbage data. I noticed that you shifted all of your stream to be zero-centered, but didn't do the same to your data. If you zero-center the data, your correlation peaks will be better defined (at least that worked when I tried it).
data = 2*data -1;
Also, I don't recommend using a simple sort to find your peaks. If you have a wide peak, which is especially possible with a noisy signal, you could have two high points right next to each other. Find a single maximum, and then zero that point and a few neighbors. Then just repeat however many times you like. Alternatively, if you know how long your epoch is, only do a correlation with one epoch's worth of data, and iterate through the signal as it arrives.
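A minimal sketch of that repeated-max idea (my own illustration; the neighborhood width halfWin is an arbitrary choice, and corr/nseq are the variables from the code above):
halfWin = 50;                         % samples to zero out on each side of a found peak
c = corr;                             % work on a copy of the correlation
peakIdx = zeros(1, nseq);
for p = 1:nseq
    [~, peakIdx(p)] = max(c);         % location of the current strongest peak
    lo = max(1, peakIdx(p) - halfWin);
    hi = min(length(c), peakIdx(p) + halfWin);
    c(lo:hi) = 0;                     % suppress this peak and its neighbors before the next pass
end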
With @David K's and @Patrick Mineault's help I managed to track down where I went wrong. First, @Patrick Mineault suggested I flip the signals. The best way to see what you would expect from the result is to slide the small vector along the larger, searched vector. So
corr = xcorr2(sig,data);
Then I like to chop off the end there because it's just extra. I did this with a trim function I made that simply takes the signal you're sliding and trims its irrelevant pieces off the end of the xcorr result.
trim = @(x,s2) x(1:end - (length(s2) - 1));
corr = trim(corr,data);
Then, as @David K suggests, you need to have the data stream you're looking for encoded the same way as your searched signal. So in this case
data = 2*data-1;
Second, if you just have your data at its original bit length, and not at its stretched, iterated length, it can be found in the signal but it will be VERY noisy. To reduce the noise, simply stretch the data to match its stretched length in the iterated signal. So
rdata = repmat(data',1,nspb/nseq);
rdata = repmat(rdata',1,nseq);
data = rdata(:)';
Now finally, we should have crystal clear correlations for this case. And to pull out the maxes that should correspond to those correlations I wrote
[sortedValues, sortIndex] = sort(corr(:),'descend');
c = 0;
for r = 1:length(sortedValues)
    if sortedValues(r) == max(corr)
        c = c + 1;
        maxIndex(1,c) = sortIndex(r);
    else
        break % If you don't do this, you get loop lock
    end
end
Now c should end up equal to nseq for this case, and you should have 5 indices marking where the correlation peaks are! You can easily pull out the bits with another loop using c or length(maxIndex). I've also made this into a more "real world" toy script, where there is a data stream, Doppler, fading, and it's over a time vector in seconds instead of samples.
Thanks for the help!
Try flipping the signal, i.e.:
corr = abs(xcorr2(data,sig(end:-1:1)));
Is that any better?

MATLAB query about for loop, reading in data and plotting

I am a complete novice at using MATLAB and am trying to work out if there is a way of optimising my code. Essentially I have data from model outputs and I need to plot them using MATLAB. In addition I have reference data (with 95% confidence intervals) which I plot on the same graph to get a visual idea of how close the model outputs and reference data are.
In terms of the model outputs I have several thousand files (number sequentially) which I open in a loop and plot. The problem/question I have is whether I can preprocess the data and then plot later - to save time. The issue I seem to be having when I try this is that I have a legend which either does not appear or is inaccurate.
My code (apologies if it is not elegant):
fn= xlsread(['tbobserved' '.xls']);
time= fn(:,1);
totalreference=fn(:,4);
totalreferencelowerci=fn(:,6);
totalreferenceupperci=fn(:,7);
figure
plot(time,totalreference,'-', time, totalreferencelowerci,'--', time, totalreferenceupperci,'--');
xlabel('Year');
ylabel('Reference incidence per 100,000 population');
title ('Total');
clickableLegend('Observed reference data', 'Totalreferencelowerci', 'Totalreferenceupperci','Location','BestOutside');
xlim([1910 1970]);
hold on
start_sim=10000;
end_sim=10005;
h = zeros (1,1000);
for i=start_sim:end_sim %is there any way of doing this earlier to save time?
a=int2str(i);
incidenceFile =strcat('result_', 'Sim', '_', a, 'I_byCal_total.xls');
est_tot=importdata(incidenceFile, '\t', 1);
cal_tot=est_tot.data;
magnitude=1;
t1=cal_tot(:,1)+1750;
totalmodel=cal_tot(:,3)+cal_tot(:,5);
h(a)=plot(t1,totalmodel);
xlim([1910 1970]);
ylim([0 500]);
hold all
clickableLegend(h(a),a,'Location','BestOutside')
end
Essentially I was hoping to have a way of reading in the data and then plotting later - i.e. optimising the code.
I hope you might be able to help.
Thanks.
mp
Regarding your issue concerning "I have a legend which either does not appear or is inaccurate", have a look at the following extracts from your code.
...
h = zeros (1,1000);
...
a=int2str(i);
...
h(a)=plot(t1,totalmodel);
...
You are using a character array as an index. Instead of h(a) you should use h(i). MATLAB casts the character array a to double, as shown in the following example for i = 10 (so a = '10').
>> double(int2str(10))
ans =
    49    48
Instead of h(10) the plot handle will be assigned to h([49 48]) which is not your intention.
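The question also asked about building the legend reliably; here is a sketch of one way to fix the indexing (my own, reusing the variables from the question and assuming clickableLegend accepts a vector of handles plus a cell array of labels, like the built-in legend does):
nSims = end_sim - start_sim + 1;
h = zeros(1, nSims);
labels = cell(1, nSims);
for i = start_sim:end_sim
    k = i - start_sim + 1;                    % 1-based position for this simulation
    a = int2str(i);
    incidenceFile = strcat('result_', 'Sim', '_', a, 'I_byCal_total.xls');
    est_tot = importdata(incidenceFile, '\t', 1);
    cal_tot = est_tot.data;
    t1 = cal_tot(:,1) + 1750;
    totalmodel = cal_tot(:,3) + cal_tot(:,5);
    h(k) = plot(t1, totalmodel);              % index with a running integer, not a string
    labels{k} = a;                            % keep the label for the legend call below
    hold all
end
clickableLegend(h, labels, 'Location', 'BestOutside');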