I'm attempting to run this simple diffusion case (I understand that it isn't ideal generally), and I'm doing fine with getting the inside of the solid, but need some help with the outer edges.
global M
size=100;
M=zeros(size,size);
M(25,25)=50;
for diffusive_steps=1:500
    oldM=M;
    newM=zeros(size,size);
    for i=2:size-1
        for j=2:size-1
            % we're considering the ij-th pixel
            pixel_conc=oldM(i,j);
            newM(i,j+1)=newM(i,j+1)+pixel_conc/4;
            newM(i,j-1)=newM(i,j-1)+pixel_conc/4;
            newM(i+1,j)=newM(i+1,j)+pixel_conc/4;
            newM(i-1,j)=newM(i-1,j)+pixel_conc/4;
        end
    end
    M=newM;
end
It's a pretty simple piece of code, and I know that. I'm not very good at using Octave yet (chemist by trade), so I'd appreciate any help!
If you have concerns about the border of your simulation, you could pad your matrix with NaN values and then remove the border after the simulation has completed. NaN stands for "not a number" and is often used to denote blank or missing data. Many MATLAB functions work in a useful way with these values.
e.g. finding the mean of an array which has blanks:
nanmean([0 nan 5 nan 10])
ans =
5
In your case, I would start by adding a border of NaNs to your M matrix. I'm using 'n' instead of 'size', since size is an important function in MATLAB, and using it as a variable can lead to confusing errors.
n=100;
blankM=zeros(n+2,n+2);
blankM([1,end],:) = nan;
blankM(:, [1,end]) = nan;
Now we can define M. Note that the first row and column are now NaNs, so the original index needs an offset (25+1):
M = blankM;
M(26,26)=50;
Run the simulation through,
m = size(blankM, 1);
n = size(blankM, 2);
for diffusive_steps=1:500
    oldM = M;
    newM = blankM;
    for i=2:m-1
        for j=2:n-1
            pixel_conc=oldM(i,j);
            newM(i,j+1)=newM(i,j+1)+pixel_conc/4;
            newM(i,j-1)=newM(i,j-1)+pixel_conc/4;
            newM(i+1,j)=newM(i+1,j)+pixel_conc/4;
            newM(i-1,j)=newM(i-1,j)+pixel_conc/4;
        end
    end
    M=newM;
end
and then extract the area of interest
finalResult = M(2:end-1, 2:end-1);
One simple change you might make is to add a boundary of ghost cells, or halo, around the domain of interest. Rather than misuse the name size I've used a variable called sz. Replace:
M=zeros(sz,sz)
with
M=zeros(sz+2,sz+2)
and then compute your diffusion over the interior of this augmented matrix, i.e. over cells (2:sz+1,2:sz+1). When it comes to considering the results, discard or just ignore the halo.
Even simpler would be to take what you already have and ignore the cells in your existing matrix which are on the N, S, E and W edges.
This technique is widely used in problems such as, and similar to, yours; it avoids the need to write special-case code for cells which don't have a full complement of neighbours. Setting the appropriate value for the contents of the halo cells is a problem-dependent matter; 0 isn't always the right value.
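To make this concrete, here is a minimal sketch of the halo version of the original loop (sz, the step count and the seed position mirror the question; the halo is filled with zeros here, which, as noted, may not be right for every problem):
sz = 100;
M = zeros(sz+2, sz+2);          % augmented matrix with a one-cell halo on every side
M(25+1, 25+1) = 50;             % offset the seed by 1 to account for the halo
for diffusive_steps = 1:500
    oldM = M;
    newM = zeros(sz+2, sz+2);   % halo cells reset to 0 each step
    for i = 2:sz+1              % interior cells only
        for j = 2:sz+1
            pixel_conc = oldM(i,j);
            newM(i,j+1) = newM(i,j+1) + pixel_conc/4;
            newM(i,j-1) = newM(i,j-1) + pixel_conc/4;
            newM(i+1,j) = newM(i+1,j) + pixel_conc/4;
            newM(i-1,j) = newM(i-1,j) + pixel_conc/4;
        end
    end
    M = newM;
end
result = M(2:end-1, 2:end-1);   % discard the halo when reading off results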
I am trying to sum in the second dimension a matrix QI in Matlab. The trick is, the columns contain a series of increasing numbers, but not all columns have the same number of elements (i.e. numel(QI(:,1)) ~= numel(QI(:,2)) and so on). For the sake of clarity, I attach a picture of it. Note that I padded the missing areas with 0, so the previous condition becomes nnz(QI(:,1)) ~= nnz(QI(:,2)).
One initial strategy that I thought of was to treat this as an image and construct a mask for each different gradient level, but that seems like a tedious job.
Anyone has a better idea on how to do this? I should also mention that I am able to freely modify how QI is generated, but I'd rather not if there is a solution for this problem.
EDIT:
Hopefully the new colored image should give a better understanding.
FYI, each column was previously stored in a cell array without the trailing zeros. Then I extracted the columns one by one and stored them in a matrix in order to perform the summation, padding the extra zeros whenever the length isn't the same.
Generally these column data should have the same number of rows, but sometimes that's not the case, and even worse, they do not align properly.
I'm starting to wonder whether it would be better to rework the code that generates the cell arrays rather than deal with this matrix. Thoughts?
Thank you,
Edit: following your comment, I modified the answer. Be aware that your data cannot really be "aligned", because the columns do not have the same number of values.
A way would be to use a cell as a storage for your measures.
valueMissing = 0; % here you can put the defauld value you want
% transform you matrix in a cell
QICell = arrayfun(@(x) QI(QI(:,x)~=valueMissing,x), 1:size(QI,2), 'UniformOutput', false);
Now you can sum the last element of the vectors inside the cell
QIsum = sum(cellfun(@(x) x(end), QICell))
Or reorder the vectors so that your last element are "aligned"
QICellReordered = cellfun(@(x) x(end:-1:1), QICell, 'UniformOutput', false);
Then you can make all possible sums:
m = min(cellfun(@numel, QICellReordered));
QIsum = zeros(m,1);
for i=1:m
    QIsum(i) = sum(cellfun(@(x) x(i), QICellReordered));
end
% reorder QISum to your original order
QIsum = QIsum(end:-1:1);
I hope this helps!
I have a problem with the following code. I want to store all the values I am creating in the for loop below so that I can make a plot of it. I have tried several things, but nothing works. Does anyone know a simple method to create a vector of the results and then plot them?
dx=0.1;
t=1;
e=1;
for x=-1:dx:1
    lower_bound=-100;
    upper_bound=x/(sqrt(4*t*e));
    e=1;
    u=(1/sqrt(pi))*quad(@integ,lower_bound,upper_bound);
    plot(x,u)
    hold on
end
hold off
I would like to use as much of this matlab code as possible.
dx=0.1;
t=1;
e=1;
xval=[-1:dx:1].';
upper_bound = zeros(numel(xval),1);
u = zeros(numel(xval),1);
for ii=1:numel(xval)
    x = xval(ii);
    lower_bound=-100;
    upper_bound(ii,1)=x/(sqrt(4*t*e));
    u(ii,1)=(1/sqrt(pi))*quad(@integ,lower_bound,upper_bound(ii));
end
figure;
plot(xval,u)
By adding the index (ii) behind your statements, each pass through the loop saves its result into the corresponding element of an array. I did not do that for lower_bound since it is a constant.
Note that I first created an array xval and index into it with the integer ii, since subscript indices must be positive integers in MATLAB. I also initialised both upper_bound and u as zero vectors before the loop executes. This is handy since growing an existing vector inside a loop is memory- and time-consuming in MATLAB, and since you already know how big they will get (the same number of elements as xval), you might as well preallocate.
I also moved the plot call outside the loop, to prevent you from plotting 21 separate single-point lines in one figure.
In Matlab, is it possible to measure local variation of a signal across an entire signal without using for loops? I.e., can I implement the following:
window_length = <something>
for n = 1:(length_of_signal - window_length + 1)
    global_variance(n) = var(my_signal(n:n+window_length-1))
end
in a vectorized format?
If you have the image processing toolbox, you can use STDFILT:
global_std = stdfilt(my_signal(:),ones(window_length,1));
% square to get the variance
global_variance = global_std.^2;
You could create a 2D array where each row is shifted one w.r.t. to the row above, and with the number of rows equal to the window width; then computing the variance is trivial. This doesn't require any toolboxes. Not sure if it's much faster than the for loop though:
longSignal = repmat(mySignal(:), [1 window_length+1]);
longSignal = reshape(longSignal(1:((length_of_signal+1)*window_length)), [length_of_signal+1, window_length])';
global_variance = sum(longSignal.*longSignal, 2);
global_variance = global_variance(1:length_of_signal-window_length);
Note that each column is shifted down by one element relative to the previous one, which lines up the blocks of data we want to operate on; the transpose and the sum then reduce each block to a single value. However, there is a bit of wrapping of data going on, so we have to limit the result to the number of "good" values.
I don't have matlab handy right now (I'm at home), so I was unable to test the above - but I think the general idea should work. It's vectorized - I can't guarantee it's fast...
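If the above needs adjusting, here is a different but related vectorised construction that may be easier to check: build an explicit index matrix whose columns hold the sample indices of each window and call var column-wise. This is my own variant, not the repmat/reshape trick above, and the example signal is made up:
my_signal = randn(1, 1000);                 % example signal
window_length = 50;
length_of_signal = numel(my_signal);
n_windows = length_of_signal - window_length + 1;
idx = bsxfun(@plus, (1:window_length).', 0:n_windows-1);  % window_length-by-n_windows indices
windows = my_signal(idx);                   % each column is one full window
global_variance = var(windows, 0, 1);       % one variance value per window
It trades memory (a window_length-by-n_windows array) for the loop, so it is only worthwhile when the signal fits comfortably in memory.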
Check the "moving window standard deviation" function at Matlab Central. Your code would be:
movingstd(my_signal, window_length, 'forward').^2
There's also moving variance code, but it seems to be broken.
The idea is to use the filter function.
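For reference, a minimal sketch of that filter idea, computing the moving variance as the running mean of x.^2 minus the square of the running mean of x (the trailing scale factor converting to var's N-1 normalisation is my addition; the example signal is made up):
my_signal = randn(1, 1000);          % example signal
window_length = 50;
kernel = ones(window_length, 1) / window_length;    % moving-average kernel
mu  = filter(kernel, 1, my_signal(:));              % running mean of x
mu2 = filter(kernel, 1, my_signal(:).^2);           % running mean of x.^2
moving_var = mu2 - mu.^2;                                     % population (N) variance
moving_var = moving_var * window_length/(window_length-1);    % match var's N-1 normalisation
moving_var = moving_var(window_length:end);                   % drop the partial leading windows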
I am trying to put my dataset into the MATLAB [ranked,weights] = relieff(X,Ylogical,10, 'categoricalx', 'on') function to rank the importance of my predictor features. The dataset (a double matrix of size n*m) has n observations and m discrete (i.e. categorical) features. It happens that each observation (row) in my dataset has at least one NaN value. These NaNs represent unobserved, i.e. missing or null, predictor values in the dataset. (There is no corruption in the dataset, it is just incomplete.)
relieff() uses this function below to remove any rows that contain a NaN:
function [X,Y] = removeNaNs(X,Y)
% Remove observations with missing data
NaNidx = bsxfun(@or, isnan(Y), any(isnan(X),2));
X(NaNidx,:) = [];
Y(NaNidx,:) = [];
This is not ideal, especially for my case, since it leaves me with X=[] and Y=[] (i.e. no observations!)
In this case:
1) Would replacing all NaNs with an arbitrary value, e.g. 99999, help? By doing this, I am introducing a new feature state for all the predictor features, so I guess it is not ideal.
2) Or is replacing NaNs with the mode of the corresponding feature column vector (as below) statistically more sound? (I am not vectorising, for clarity's sake.)
function [matrixdata] = replaceNaNswithModes(matrixdata)
for i=1:size(matrixdata,2)
    cv = matrixdata(:,i);
    modevalue = mode(cv);
    cv(isnan(cv)) = modevalue;
    matrixdata(:,i) = cv;
end
3) Or any other sensible way that would make sense for "categorical" data?
P.S: This link gives possible ways to handle missing data.
I suggest to use a table instead of a matrix.
Then you have functions such as ismissing (for the entire table), and isundefined to deal with missing values for categorical variables.
T = array2table(matrix);
% NaN is already the standard missing value for doubles; standardizeMissing
% needs an indicator and is useful when another placeholder (e.g. -99, used
% here as an example) marks missing data:
T = standardizeMissing(T, -99);
var1 = categorical(T{:,1});
missing = isundefined(var1);
T = T(~missing,:);       % removes rows where the first variable is missing
matrix = table2array(T);
For a start, both solutions (1) and (2) do not help you handle your data more properly, since NaN is in fact a label that MATLAB handles appropriately; warnings will be issued where relevant. What you should do is:
Handle the NaNs per case
Use try catch blocks
NaN is like a number, and there is nothing bad about it. Even if you divide by NaN, MATLAB will treat it properly and give you a NaN.
If you still want to replace them, then you will need an assumption that holds. For example, if your data is engine speeds in a timeseries that have been input by the engine operator, but some time instances have not been specified, then there is more than one way to handle the NaNs that will appear in the matrix:
Replace with 0s
Replace with the previous value
Replace with the next value
Replace with the average of the previous and the next value
and many more.
As you can see your problem is ill-posed, and depends on the predictor and the data source.
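As an illustration of a few of the strategies listed above, fillmissing (available from R2016b onwards) covers most of them in a single call; the engine-speed vector below is made up:
speeds = [1200 NaN 1350 NaN NaN 1500 1480];          % made-up engine-speed samples
filled_zero = fillmissing(speeds, 'constant', 0);    % replace with 0s
filled_prev = fillmissing(speeds, 'previous');       % replace with the previous value
filled_next = fillmissing(speeds, 'next');           % replace with the next value
filled_lin  = fillmissing(speeds, 'linear');         % interpolate between neighbours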
In the case of categorical data, e.g. three categories {0,1,2}, and supposing NaN occurs in Y:
for k=1:size(Y,2)
    id = isnan(Y(:,k));
    m(k) = median(Y(~id,k));
    Y(id,k) = round(m(k));
end
I feel really bad that I had to write a for-loop but I cannot see any other way. As you can see I made a number of assumptions, by using median and round. You may want to use a threshold depending on your knowledge about the data.
I think the answer to this has been given by gd047 in dimension-reduction-in-categorical-data-with-missing-values:
I am going to look into this, if anyone has any other suggestions or particular MatLab implementations, it would be great to hear.
You can take a look at this page http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html, the first a1a dataset; it describes transforming categorical features into binary ones. Could possibly work. (:
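For reference, a minimal toolbox-free sketch of that categorical-to-binary (one-hot) transformation; the small X below is made up, and any NaNs would still have to be dealt with first:
X = [1 2; 2 1; 3 2; 1 1];           % made-up categorical predictors coded 1..K per column
Xbin = [];
for c = 1:size(X,2)
    cats = unique(X(:,c));                       % category levels present in this column
    Xbin = [Xbin, bsxfun(@eq, X(:,c), cats')];   % one binary column per level
end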
I have a cell array in which each cell contains a sequence of values as a row vector. The sequences contain some missing values represented by NaN.
I would like to replace all NaNs using some sort of interpolation method, how can I can do this in MATLAB? I am also open to other suggestions on how to deal with these missing values.
Consider this sample data to illustrate the problem:
seq = {randn(1,10); randn(1,7); randn(1,8)};
for i=1:numel(seq)
    %# simulate some missing values
    ind = rand( size(seq{i}) ) < 0.2;
    seq{i}(ind) = nan;
end
The resulting sequences:
seq{1}
ans =
-0.50782 -0.32058 NaN -3.0292 -0.45701 1.2424 NaN 0.93373 NaN -0.029006
seq{2}
ans =
0.18245 -1.5651 -0.084539 1.6039 0.098348 0.041374 -0.73417
seq{3}
ans =
NaN NaN 0.42639 -0.37281 -0.23645 2.0237 -2.2584 2.2294
Edit:
Based on the responses, I think there's been a confusion: obviously I'm not working with random data, the code shown above is simply an example of how the data is structured.
The actual data is some form of processed signals. The problem is that during the analysis, my solution would fail if the sequences contain missing values, hence the need for filtering/interpolation (I already considered using the mean of each sequence to fill the blanks, but I am hoping for something more powerful)
Well, if you're working with time-series data then you can use Matlab's built in interpolation function.
Something like this should work for your situation, but you'll need to tailor it a little, i.e. if you don't have equally spaced sampling you'll need to modify the times line.
nseq = cell(size(seq));
for i = 1:numel(seq)
    times = 1:length(seq{i});
    mask = ~isnan(seq{i});
    nseq{i} = seq{i};
    nseq{i}(~mask) = interp1(times(mask), seq{i}(mask), times(~mask));
end
You'll need to play around with the options of interp1 to figure out which ones work best for your situation.
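One thing to watch for: with the call above, NaNs at the very start or end of a sequence (as in seq{3}) lie outside the range of the known samples, and interp1 returns NaN for them by default. Passing an extrapolation option is one way around that; this variant of the loop body is my own tweak, not part of the answer above:
% 'extrap' also fills NaNs before the first and after the last known sample
nseq{i}(~mask) = interp1(times(mask), seq{i}(mask), times(~mask), 'linear', 'extrap');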
I would use inpaint_nans, a tool designed to replace nan elements in 1-d or 2-d matrices by interpolation.
seq{1} = [-0.50782 -0.32058 NaN -3.0292 -0.45701 1.2424 NaN 0.93373 NaN -0.029006];
seq{2} = [0.18245 -1.5651 -0.084539 1.6039 0.098348 0.041374 -0.73417];
seq{3} = [NaN NaN 0.42639 -0.37281 -0.23645 2.0237];
for i = 1:3
    seq{i} = inpaint_nans(seq{i});
end
seq{:}
ans =
-0.50782 -0.32058 -2.0724 -3.0292 -0.45701 1.2424 1.4528 0.93373 0.44482 -0.029006
ans =
0.18245 -1.5651 -0.084539 1.6039 0.098348 0.041374 -0.73417
ans =
2.0248 1.2256 0.42639 -0.37281 -0.23645 2.0237
If you have access to the System Identification Toolbox, you can use the MISDATA function to estimate missing values. According to the documentation:
This command linearly interpolates missing values to estimate the first model. Then, it uses this model to estimate the missing data as parameters by minimizing the output prediction errors obtained from the reconstructed data.
Basically the algorithm alternates between estimating missing data and estimating models, in a way similar to the Expectation Maximization (EM) algorithm.
The estimated model can be any of the linear idmodel models (AR/ARX/...); if none is given, a default-order state-space model is used.
Here's how to apply it to your data:
for i=1:numel(seq)
    dat = misdata( iddata(seq{i}(:)) );
    seq{i} = dat.OutputData;
end
Use griddedInterpolant.
There are also other functions such as interp1. For data that follows a curve, spline interpolation is often the best method for filling in missing values.
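A minimal sketch of how griddedInterpolant could be used to fill the NaNs in one of the sequences from the question (spline is chosen here purely as an example method):
y = seq{1};                          % one sequence containing NaNs
t = 1:numel(y);                      % sample positions
good = ~isnan(y);
F = griddedInterpolant(t(good), y(good), 'spline');  % interpolant built from known samples
y(~good) = F(t(~good));              % evaluate it at the missing positions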
As JudoWill says, you need to assume some sort of relationship between your data.
One trivial option would be to compute the mean of your total series and use that for missing data. Another trivial option would be to take the mean of the n previous and n next values.
But be very careful with this: if you're missing data, you're generally better to deal with those missing data, than to make up some fake data that could screw up your analysis.
Consider the following example
X=some Nx1 array
Y=F(X) with some NaNs in it
then use
X1 = X(~isnan(Y));
Y1 = Y(~isnan(Y));
Now interpolate over X1 and Y1 to compute all values at all X.
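For instance, a minimal sketch of that last step using interp1 (linear interpolation assumed):
idx = isnan(Y);
Yfilled = Y;
Yfilled(idx) = interp1(X1, Y1, X(idx), 'linear');   % estimate Y at the NaN positions only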