I have a filtering function that takes an input image, performs convolution with a given kernel, and returns the resulting image. However, I can't work out how to make it take different kernel sizes. For example, instead of the pre-defined 3x3 kernel in the code below, it could take a 5x5 or 7x7 kernel, and the user could choose the type of kernel/filter they want (depending on the intended effect). I can't seem to wrap my head around it; I'm quite new to MATLAB.
function [newImg] = kernelFunc(imgB)
    img = imread(imgB);
    figure, imshow(img);
    img2 = zeros(size(img)+2);
    newImg = zeros(size(img));
    for rgb = 1:3
        for x = 1:size(img,1)
            for y = 1:size(img,2)
                img2(x+1,y+1,rgb) = img(x,y,rgb);
            end
        end
    end
    for rgb = 1:3
        for i = 1:size(img2,1)-2
            for j = 1:size(img2,2)-2
                window = zeros(9,1);
                inc = 1;
                for x = 1:3
                    for y = 1:3
                        window(inc) = img2(i+x-1,j+y-1,rgb);
                        inc = inc+1;
                    end
                end
                kernel = [1;2;1;2;4;2;1;2;1]/16;
                med = window.*kernel;
                disp(med);
                med = sum(med);
                med = floor(med);
                newImg(i,j,rgb) = med;
            end
        end
    end
    newImg = uint8(newImg);
    figure, imshow(newImg);
end
I have commented the code and marked the places to change with <--. kernel in this example is a column vector of 3*3 = 9 elements. Besides changing the kernel itself, you may also need to change the amount of zero-padding around the image; e.g., for a 5x5 kernel you would need two rows and two columns of padding instead of one. Then update the inner loops I have marked "grab each pixel" to pull an area the size of the kernel (e.g., for x=1:5 and for y=1:5 for a 5x5 kernel). The actual convolution is unchanged.
Reminder: the function takes a uint8 (0..255-valued) RGB image. window and kernel are double, so the floor truncates any fractional part before the new pixel value is stored in the uint8 newImg.
function [newImg] = kernelFunc(imgB)
    img = imread(imgB);
    figure, imshow(img);
    img2 = zeros(size(img)+2);   % one extra column on each side, and one extra
                                 % row top and bottom. <-- May need more padding.
    newImg = zeros(size(img));   % the destination
    % Pad the image with zeros at the left and top (transform img -> img2)
    for rgb = 1:3
        for x = 1:size(img,1)           % for each row
            for y = 1:size(img,2)       % for each column
                img2(x+1,y+1,rgb) = img(x,y,rgb);   % <-- adjust per kernel size
            end
        end
    end
    % Process the padded image (img2 -> newImg)
    for rgb = 1:3
        for i = 1:size(img2,1)-2        % for each row
            for j = 1:size(img2,2)-2    % for each column
                % Build a column vector of the pixels to be convolved.
                window = zeros(9,1);    % <-- 9 = kernel size
                inc = 1;
                for x = 1:3             % <-- grab each pixel
                    for y = 1:3         % <-- under the kernel
                        window(inc) = img2(i+x-1,j+y-1,rgb);
                        inc = inc+1;
                    end
                end
                kernel = [1;2;1;2;4;2;1;2;1]/16;   % <-- the kernel itself
                med = window.*kernel;   % start the convolution
                disp(med);
                med = sum(med);         % finish the convolution
                med = floor(med);       % fit the pixels back into uint8
                newImg(i,j,rgb) = med;  % store the result
            end   % for each column
        end   % for each row
    end   % for each color channel
    newImg = uint8(newImg);
    figure, imshow(newImg);
end
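As a concrete illustration of those changes, here is a minimal sketch of a generalized version that takes the kernel as a second argument. kernelFuncN is a hypothetical name, the kernel is assumed to be a square matrix with odd size (e.g. fspecial('gaussian', 5, 1) or ones(7)/49), and the floor/uint8 handling follows the original code:
function newImg = kernelFuncN(imgB, kernel)
    % Sketch only: generalized version of kernelFunc above.
    img = imread(imgB);
    figure, imshow(img);
    n   = size(kernel, 1);       % kernel assumed square, n odd
    pad = (n - 1) / 2;           % rows/columns of zero padding on each side
    img2   = zeros(size(img,1) + 2*pad, size(img,2) + 2*pad, 3);
    newImg = zeros(size(img));
    for rgb = 1:3                % zero-pad each channel
        img2(pad+1:pad+size(img,1), pad+1:pad+size(img,2), rgb) = double(img(:,:,rgb));
    end
    for rgb = 1:3
        for i = 1:size(img2,1) - (n-1)       % for each output row
            for j = 1:size(img2,2) - (n-1)   % for each output column
                window = img2(i:i+n-1, j:j+n-1, rgb);           % n-by-n patch
                newImg(i,j,rgb) = floor(sum(window(:) .* kernel(:)));
            end
        end
    end
    newImg = uint8(newImg);
    figure, imshow(newImg);
end
You would call it as, for example, newImg = kernelFuncN('peppers.png', fspecial('average', 5)); the user-selectable "type" of filter then just amounts to which kernel matrix is passed in.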
I'm working on a filtered back projection algorithm using the central slice theorem for a homework assignment, and while I understand the theory on paper, I've run into an issue implementing it in MATLAB. I was provided with a skeleton to follow, but there is a step that I think I may be misunderstanding. Here is what I have:
function img = sampleFBP(sino,angs)
% This step is necessary so that frequency information is preserved: we pad
% the sinogram with zeros so that this is ensured.
sino = padarray(sino, floor(size(sino,1)/2), 'both');
% diagDim should be the length of an individual row of the sinogram - don't
% hardcode this!
diagDim = size(sino, 2);
% The 2DFT (2D Fourier transform) of our image will start as a matrix of
% all zeros.
fimg = zeros(diagDim);
% Design your 1-d ramp filter.
rampFilter_1d = abs(linspace(-1, 1, diagDim))';
rowIndex = 1;
for nn = angs
    % Each contribution to the image's 2DFT will also begin as all zeros.
    imContrib = zeros(diagDim);
    % Get the current row of the sinogram - use rowIndex.
    curRow = sino(rowIndex,:);
    % Take the 1D Fourier transform of the current row - be careful, as it's
    % necessary to perform ifftshift and fftshift, since MATLAB tends to
    % place the zero-frequency components of a spectrum at the edges.
    fourierCurRow = fftshift(fft(ifftshift(curRow)));
    % Take the Fourier-transformed sino row and place it at the center of
    % the next image contribution. Add the ramp filter in the Fourier domain.
    imContrib(floor(diagDim/2), :) = fourierCurRow;
    imContrib = imContrib * fft(rampFilter_1d);
    % Rotate the current image contribution to be at the correct angle on
    % the 2D Fourier-space image.
    imContrib = imrotate(imContrib, nn, 'crop');
    % Add the current image contribution to the running representation of
    % the image in Fourier space!
    fimg = fimg + imContrib;
    rowIndex = rowIndex + 1;
end
% Finally, just take the inverse 2D Fourier transform of the image! Don't
% forget - you may need an fftshift or ifftshift here.
rcon = fftshift(ifft2(ifftshift(fimg)));
The sinogram I'm inputting is just the output of the radon function on a Shepp-Logan phantom from 0 to 179 degrees. Running the code as it is now gives me a black image. I think I'm missing something in the loop where I add the FTs of rows to the image. From my understanding of the central slice theorem, what I think should be happening is this:
Initialize an array the same size as what the 2DFT will be (i.e., diagDim x diagDim). This is the Fourier space.
Take a row of the sinogram which corresponds to the line integral information from a single angle and apply a 1D FT to it
According to the Central Slice Theorem, the FT of this line integral is a line through the Fourier domain that passes through the origin at an angle that corresponds to the angle at which the projection was taken. So to emulate that, I take the FT of that line integral and place it in the center row of the diagDim x diagDim matrix I created
Next I take the FT of the 1D ramp filter I created and multiply it with the FT of the line integral. Multiplication in the Fourier domain is equivalent to a convolution in the spatial domain so this convolves the line integral with the filter.
Now I rotate the entire matrix by the angle the projection was taken at. This should give me a diagDim x diagDim matrix with a single line of information passing through the center at an angle. Matlab increases the size of the matrix when it is rotated but since the sinogram was padded at the beginning, no information is lost and the matrices can still be added
If all of these empty matrices with a single line through the center are added up together, it should give me the complete 2D FT of the image. All that needs to be done is take the inverse 2D FT and the original image should be the result.
If the problem I'm running into is something conceptual, I'd be grateful if someone could point out where I messed up. If instead this is a Matlab thing (I'm still kind of new to Matlab), I'd appreciate learning what it is I missed.
The code that you have posted is a pretty good example of filtered backprojection (FBP), and I believe it could be useful to people who want to learn the basics of FBP. One can use the function iradon(...) in MATLAB (see here) to perform FBP with a variety of filters. In your case, of course, the point is to learn the basis of the central slice theorem, so finding a shortcut is not the point. I have also learned a lot and refreshed my knowledge by answering your question!
Your code is well commented and describes the steps that need to be taken. There are a couple of subtle programming issues that need to be fixed so that the code works properly.
First, your image representation in the Fourier domain may end up missing a row due to floor(diagDim/2), depending on the size of the sinogram. I would change this to round(diagDim/2) to get a complete dataset in fimg. Be aware that, if not handled correctly, this may lead to an error for certain sinogram sizes. I would encourage you to visualize fimg to understand what that missing row is and why it matters.
The second issue is that your sinogram needs to be transposed to be consistent with your algorithm, hence the addition of sino = sino'. Again, I do encourage you to try the code without this to see what happens! Note that zero padding must happen along the views to avoid aliasing artifacts; I will demonstrate an example of this in this answer.
Third, and most importantly, imContrib is a temporary holder for a row that gets added into fimg, so it must maintain the same size as fimg. Hence the line
imContrib = imContrib * fft(rampFilter_1d);
should be replaced with
imContrib(floor(diagDim/2), :) = imContrib(floor(diagDim/2), :)' .* rampFilter_1d;
Note that the ramp filter is linear in the frequency domain (thanks to @Cris Luengo for correcting this error). Therefore, you should drop the fft in fft(rampFilter_1d), as this filter is already defined in the frequency domain (remember that fft(x) transforms x from its original domain, such as time or space, into its frequency content).
Now a complete example to show how it works using the modified Shepp-Logan phantom:
angs = 0:359; % angles of rotation 0, 1, 2... 359
init_img = phantom('Modified Shepp-Logan', 100); % Initial image 2D [100 x 100]
sino = radon(init_img, angs); % Create a sinogram using radon transform
% Here is your function ....
% This step is necessary so that frequency information is preserved: we pad
% the sinogram with zeros so that this is ensured.
sino = padarray(sino, floor(size(sino,1)/2), 'both');
% Rotate the sinogram 90 degrees to be compatible with your code's definition of view and radial positions
% dim 1 -> view
% dim 2 -> Radial position
sino = sino';
% diagDim should be the length of an individual row of the sinogram - don't
% hardcode this!
diagDim = size(sino, 2);
% The 2DFT (2D Fourier transform) of our image will start as a matrix of
% all zeros.
fimg = zeros(diagDim);
% Design your 1-d ramp filter.
rampFilter_1d = abs(linspace(-1, 1, diagDim))';
rowIndex = 1;
for nn = angs
    % fprintf('rowIndex = %g => nn = %g\n', rowIndex, nn);
    % Each contribution to the image's 2DFT will also begin as all zeros.
    imContrib = zeros(diagDim);
    % Get the current row of the sinogram - use rowIndex.
    curRow = sino(rowIndex,:);
    % Take the 1D Fourier transform of the current row - be careful, as it's
    % necessary to perform ifftshift and fftshift, since MATLAB tends to
    % place the zero-frequency components of a spectrum at the edges.
    fourierCurRow = fftshift(fft(ifftshift(curRow)));
    % Take the Fourier-transformed sino row and place it at the center of
    % the next image contribution. Add the ramp filter in the Fourier domain.
    imContrib(round(diagDim/2), :) = fourierCurRow;
    imContrib(round(diagDim/2), :) = imContrib(round(diagDim/2), :)' .* rampFilter_1d; % <-- NOT fft(rampFilter_1d)
    % Rotate the current image contribution to be at the correct angle on
    % the 2D Fourier-space image.
    imContrib = imrotate(imContrib, nn, 'crop');
    % Add the current image contribution to the running representation of
    % the image in Fourier space!
    fimg = fimg + imContrib;
    rowIndex = rowIndex + 1;
end
% Finally, just take the inverse 2D Fourier transform of the image! Don't
% forget - you may need an fftshift or ifftshift here.
rcon = fftshift(ifft2(ifftshift(fimg)));
Note that your image has complex values, so I use imshow(abs(rcon),[]) to show it. A couple of helpful images (food for thought) together with the final reconstructed image rcon:
And here is the same image if you comment out the zero padding step (i.e. comment out sino = padarray(sino, floor(size(sino,1)/2), 'both');):
Note the different object size in the reconstructed images with and without zero padding. The object shrinks when the sinogram is zero padded since the radial contents are compressed.
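If you want a quick sanity check against MATLAB's built-in FBP (the iradon function mentioned at the start of this answer), here is a minimal hedged sketch; it assumes init_img, angs, and rcon from the example above, and the interpolation/filter choices are just illustrative defaults:
% Compare the central-slice reconstruction against iradon on the un-padded sinogram.
sino_ref   = radon(init_img, angs);
img_iradon = iradon(sino_ref, angs, 'linear', 'Ram-Lak');
figure;
subplot(1,3,1); imshow(init_img, []);   title('phantom');
subplot(1,3,2); imshow(abs(rcon), []);  title('central slice FBP');
subplot(1,3,3); imshow(img_iradon, []); title('iradon');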
I want to take the discrete convolution of two 1-D vectors. The vectors correspond to intensity data as a function of frequency. My goal is to convolve one intensity vector B with itself, then convolve the result with the original vector B, and so on, each time convolving the running result with the original vector B. I want the final result to have the same length as the original vector B.
I am starting from IDL code that I am trying to port to MATLAB. The relevant part of the code reads:
for i=1,10 do begin
    if i lt 2 then A=B else A=Rt
    n=i+1
    Rt=convol(A,B,s,center=0,/edge_zero)
    ...
endfor
which I have rewritten in MATLAB
for i = 2:11
    if i < 3
        A = B; % indices start at 0, not 1
    else
        A = Rt;
    end
    n = i + 1;
    % Scale by 1/s
    Rt = (1/s).*conv(A,B);
    ...
end
But I am not sure how to incorporate the zero-padding that the edge_zero option provides. In IDL, the convolution computes the values of elements at the edge of the vector as if the vector were padded with zeros. The optional third argument of MATLAB's conv function includes the option 'same', which returns the central part of the convolution with the same size as u for conv(u,v), but that doesn't seem to be the right way to go about this problem. How do I do analogous zero-padding in MATLAB?
Here is a code I needed for my doctoral research that I know does the zero padding correctly. I hope it helps.
function conv_out = F_convolve_FFT(f,g,dw,flipFlag)
if(nargin<4), flipFlag = 0; end;
% length of function f to be convolved, initialization of conv_out, and padding p
N=length(f); conv_out=zeros(1,N); p=zeros(1,N);
% uncomment if working with tensor f,g's of rank 3 or greater
% f=F_reduce_rank(f); g=F_reduce_rank(g);
% padding. also, this was commented out: EN=length(fp);
fp = [f p]; gp = [g p];
% if performing convolution of the form c(w) = int(f(wp)g(w+wp),wp) = int(f(w-wp)gbar(wp),wp) due to reverse-order domain on substitution
if(flipFlag==1), gp=fliplr(gp); end;
% perform the convolution. You do NOT need to multiply the invocation of "F_convolve_FFT(f,g,dw,flipFlag)" in your program by "dx", the finite-element.
c1 = ifft(fft(fp).*fft(gp))*dw;
% if performing "reverse" convolution, an additional circshift is necessary to offset the padding
if(flipFlag==1), c1=circshift(c1',N)'; end;
% offset the padding by dNm
if(mod(N,2)==0), dNm=N/2; elseif(mod(N,2)==1), dNm=(N-1)/2; end;
% padding. also, this was commented out: EN=length(fp);
conv_out(:)=c1(dNm+1:dNm+N);
return
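For reference, here is a hedged sketch of how the repeated self-convolution from the question might call this helper. It assumes B is a row vector of intensities and that dw plays the role of the question's 1/s scaling; the names are illustrative:
% Repeatedly convolve the running result with the original B.
% Each call zero-pads and returns the central N samples, so Rt keeps
% the same length as B (mirroring IDL's /edge_zero behaviour).
Rt = B;
for i = 1:10
    Rt = F_convolve_FFT(Rt, B, dw, 0);
end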
I have a fairly simple question. I am trying to segment an image using MATLAB. I have tried the imageSegmenter app, a toolbox with GUI. The tool seems to be working perfectly, especially when I use the "Flood Fill" option with almost any tolerance parameter.
Is there a function (not a tool) form of this flood fill? If yes, what is the name of the function? The documentation does not seem to include this information.
The function grayconnected(I,row,column,tolerance) does what the Flood Fill tool in the imageSegmenter app does: it is initialized with a seed point (a row/column index into the image) and, starting from there, "floods" the surrounding pixels that lie within a gray-value range given by the tolerance parameter (top-left in the Flood Fill GUI).
Actually you only need that one line (given your gray-valued img, an initialization point row, column, and a chosen tolerance, e.g. 12):
%>>> this is where the magic happens <<<%
segmentation = grayconnected(img, row, column, 12);
For convenience, though, see below a code snippet with visualization, where you can select your initialization points. The input is a color image (if it is already grayscale, skip rgb2gray). The output (a segmentation mask) corresponding to each point i is in segmentations(:,:,i). You may merge these individual segmentation masks into one or assign them to different objects.
Note that this is a very basic segmentation method, prone to noise and poor results if you don't have clear contrast (in which case a single threshold operation might already give you good results without any initialization). You can use this initial segmentation as a starting point to be refined, e.g. with active contours; see the sketch after the snippet.
[img] = imread('test.jpg');
img = rgb2gray(img);
tolerance = 12; % default setting in imageSegmenter
%% >>>>>>>>>> GET INITIALIZATION POINTS <<<<<<<<<< %%
str = 'Click to select initialization points. Press ENTER to confirm.';
fig_sel = figure(); imshow(img);
title(str,'Color','b','FontSize',10);
fprintf(['\nNote: ' str '\n'...
'Pressing ENTER ends the selection without adding a final point.\n' ...
'Pressing BACKSPACE/DELETE removes the previously selected point.\n'] );
% select points in figure and close afterwards
[x, y] = getpts(fig_sel);
close(fig_sel);
%% >>>>>>>>>> PROCESS INITIALIZATION POINTS <<<<<<<<<< %%
if isempty(x)
    fprintf('\nError: No points specified. An initialization point is needed!');
else
    segmentations = zeros([size(img) length(x)]);
    fig_result = figure(); hold on;
    for i = 1:length(x)
        % confusing: y corresponds to row, x to column in img
        column = ceil(x(i));
        row = ceil(y(i));
        %>>> this is where the magic happens <<<%
        segmentations(:,:,i) = grayconnected(img, row, column, tolerance);
        % show corresponding initialization point ...
        subplot(1,2,1); imshow(img); hold on;
        title('Active point (red)');
        plot(x(i), y(i), 'r.', 'MarkerSize', 10);                  % active in red
        plot(x(1:end ~= i), y(1:end ~= i), 'b.', 'MarkerSize', 5); % ... others in blue
        hold off;
        % ... with segmentation result
        subplot(1,2,2); imshow(segmentations(:,:,i));
        title('Segmentation result');
        % click through results
        waitforbuttonpress
    end
    close(fig_result);
end
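If the flood-filled masks need refining, as mentioned above, one option is a few iterations of active contours. A minimal sketch, assuming img and segmentations from the snippet and that the Image Processing Toolbox's activecontour is available (the iteration count and method are arbitrary choices):
% Refine one grayconnected mask with Chan-Vese active contours.
mask    = logical(segmentations(:,:,1));
refined = activecontour(img, mask, 100, 'Chan-Vese');
figure; imshowpair(img, refined, 'montage');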
I have a 2d array (doubles) representing some data, and it has a bunch of NaNs in it. The contour plot of the data looks like this:
All of the white spaces are NaNs, the gray diamond is there for reference, and the filled contour shows the shape of my data. When I filter the data with imfilter, the NaNs significantly chew into the data, so we end up with something like this:
You can see that the support set is significantly contracted. I can't use this, as it has chewed into some of the more interesting variations on the edges (for reasons specific to my experiments, those edges are important).
Is there a function to filter within an island of NaNs that treats the edges like the edges of a rectangular filtering window, instead of just killing them? Sort of like a nanmean function, except for convolving images?
Here is my filter code:
filtWidth = 7;
filtSigma = 5;
imageFilter = fspecial('gaussian', filtWidth, filtSigma);
%convolve them
dataFiltered = imfilter(rfVals,imageFilter,'symmetric','conv');
and the code for plotting the contour plot:
figure
contourf(dataFiltered); hold on
plot([-850 0 850 0 -850], [0 850 0 -850 0], 'Color', [.7 .7 .7],'LineWidth', 1); %the square (limits are data-specific)
axis equal
There is some code at the Mathworks file exchange (ndanfilter.m) that comes close to what I want, but I believe it only interpolates NaNs that are sprinkled on the interior of an image, not data showing this island-type effect.
Note: I just found nanconv.m, which does exactly what I want, with a very intuitive usage (convolve an image, ignoring NaN, much like nanmean works). I've made this part of my accepted answer, and include a comparison to the performance of the other answers.
Related questions
Gaussian filtering a image with Nan in Python
The technique I ended up using was the function nanconv.m from MATLAB's File Exchange. It does exactly what I was looking for: it runs the filter in a way that ignores the NaNs, just the way MATLAB's built-in function nanmean does. This is hard to decipher from the function's documentation, which is a tad cryptic.
Here's how I use it:
filtWidth = 7;
filtSigma = 5;
imageFilter=fspecial('gaussian',filtWidth,filtSigma);
dataFiltered = nanconv(data,imageFilter, 'nanout');
I'm pasting the nanconv function below (it is covered by the BSD license). I will post images etc. when I get a chance; I just wanted to post what I ended up doing for anyone curious about what I did.
Comparison to other answers
Using gnovice's solution, the results look intuitively very nice, but there are some quantitative blips on the edges that were a concern. In practice, the extrapolation of the image beyond the edges led to many spuriously high values at the edges of my data.
Using krisdestruction's suggestion of replacing the missing bits with the original data also looks pretty decent (especially for very small filters), but (by design) you end up with unfiltered data at the edges, which is a problem for my application.
nanconv
function c = nanconv(a, k, varargin)
% NANCONV Convolution in 1D or 2D ignoring NaNs.
% C = NANCONV(A, K) convolves A and K, correcting for any NaN values
% in the input vector A. The result is the same size as A (as though you
% called 'conv' or 'conv2' with the 'same' shape).
%
% C = NANCONV(A, K, 'param1', 'param2', ...) specifies one or more of the following:
% 'edge' - Apply edge correction to the output.
% 'noedge' - Do not apply edge correction to the output (default).
% 'nanout' - The result C should have NaNs in the same places as A.
% 'nonanout' - The result C should have ignored NaNs removed (default).
% Even with this option, C will have NaN values where the
% number of consecutive NaNs is too large to ignore.
% '2d' - Treat the input vectors as 2D matrices (default).
% '1d' - Treat the input vectors as 1D vectors.
% This option only matters if 'a' or 'k' is a row vector,
% and the other is a column vector. Otherwise, this
% option has no effect.
%
% NANCONV works by running 'conv2' either two or three times. The first
% time is run on the original input signals A and K, except all the
% NaN values in A are replaced with zeros. The 'same' input argument is
% used so the output is the same size as A. The second convolution is
% done between a matrix the same size as A, except with zeros wherever
% there is a NaN value in A, and ones everywhere else. The output from
% the first convolution is normalized by the output from the second
% convolution. This corrects for missing (NaN) values in A, but it has
% the side effect of correcting for edge effects due to the assumption of
% zero padding during convolution. When the optional 'noedge' parameter
% is included, the convolution is run a third time, this time on a matrix
% of all ones the same size as A. The output from this third convolution
% is used to restore the edge effects. The 'noedge' parameter is enabled
% by default so that the output from 'nanconv' is identical to the output
% from 'conv2' when the input argument A has no NaN values.
%
% See also conv, conv2
%
% AUTHOR: Benjamin Kraus (bkraus#bu.edu, ben#benkraus.com)
% Copyright (c) 2013, Benjamin Kraus
% $Id: nanconv.m 4861 2013-05-27 03:16:22Z bkraus $
% Process input arguments
for arg = 1:nargin-2
switch lower(varargin{arg})
case 'edge'; edge = true; % Apply edge correction
case 'noedge'; edge = false; % Do not apply edge correction
case {'same','full','valid'}; shape = varargin{arg}; % Specify shape
case 'nanout'; nanout = true; % Include original NaNs in the output.
case 'nonanout'; nanout = false; % Do not include NaNs in the output.
case {'2d','is2d'}; is1D = false; % Treat the input as 2D
case {'1d','is1d'}; is1D = true; % Treat the input as 1D
end
end
% Apply default options when necessary.
if(exist('edge','var')~=1); edge = false; end
if(exist('nanout','var')~=1); nanout = false; end
if(exist('is1D','var')~=1); is1D = false; end
if(exist('shape','var')~=1); shape = 'same';
elseif(~strcmp(shape,'same'))
error([mfilename ':NotImplemented'],'Shape ''%s'' not implemented',shape);
end
% Get the size of 'a' for use later.
sza = size(a);
% If 1D, then convert them both to columns.
% This modification only matters if 'a' or 'k' is a row vector, and the
% other is a column vector. Otherwise, this argument has no effect.
if(is1D);
if(~isvector(a) || ~isvector(k))
error('MATLAB:conv:AorBNotVector','A and B must be vectors.');
end
a = a(:); k = k(:);
end
% Flat function for comparison.
o = ones(size(a));
% Flat function with NaNs for comparison.
on = ones(size(a));
% Find all the NaNs in the input.
n = isnan(a);
% Replace NaNs with zero, both in 'a' and 'on'.
a(n) = 0;
on(n) = 0;
% Check that the filter does not have NaNs.
if(any(isnan(k)));
error([mfilename ':NaNinFilter'],'Filter (k) contains NaN values.');
end
% Calculate what a 'flat' function looks like after convolution.
if(any(n(:)) || edge)
flat = conv2(on,k,shape);
else flat = o;
end
% The line above will automatically include a correction for edge effects,
% so remove that correction if the user does not want it.
if(any(n(:)) && ~edge); flat = flat./conv2(o,k,shape); end
% Do the actual convolution
c = conv2(a,k,shape)./flat;
% If requested, replace output values with NaNs corresponding to input.
if(nanout); c(n) = NaN; end
% If 1D, convert back to the original shape.
if(is1D && sza(1) == 1); c = c.'; end
end
One approach would be to replace the NaN values with nearest-neighbor interpolates using scatteredInterpolant (or TriScatteredInterp in older MATLAB versions) before performing the filtering, then replacing those points again with NaN values afterward. This would be akin to filtering a full 2-D array using the 'replicate' argument as opposed to the 'symmetric' argument as a boundary option for imfilter (i.e. you're replicating as opposed to reflecting values at the jagged NaN boundary).
Here's what the code would look like:
% Make your filter:
filtWidth = 7;
imageFilter = fspecial('gaussian', filtWidth, filtWidth);
% Interpolate new values for Nans:
nanMask = isnan(rfVals);
[r, c] = find(~nanMask);
[rNan, cNan] = find(nanMask);
F = scatteredInterpolant(c, r, rfVals(~nanMask), 'nearest');
interpVals = F(cNan, rNan);
data = rfVals;
data(nanMask) = interpVals;
% Filter the data, replacing Nans afterward:
dataFiltered = imfilter(data, imageFilter, 'replicate', 'conv');
dataFiltered(nanMask) = nan;
Okay, without using your plot function, I can still give you a solution. What you want to do is find all the new NaNs and replace them with the original unfiltered data (assuming it is correct). While that data is not filtered, this is better than reducing the domain of your contour image.
% Toy Example Data
rfVals= rand(100,100);
rfVals(1:2,:) = nan;
rfVals(:,1:2) = nan;
% Create and Apply Filter
filtWidth = 3;
imageFilter=fspecial('gaussian',filtWidth,filtWidth);
dataFiltered = imfilter(rfVals,imageFilter,'symmetric','conv');
sum(sum(isnan( dataFiltered ) ) )
% Replace New NaN with Unfiltered Data
newnan = ~isnan( rfVals) & isnan( dataFiltered );
dataFiltered( newnan ) = rfVals( newnan );
sum(sum(isnan( rfVals) ) )
sum(sum(isnan( dataFiltered ) ) )
Detect new NaN using the following code. You can also probably use the xor function.
newnan = ~isnan( rfVals) & isnan( dataFiltered );
Then this line sets the indices in dataFiltered to the values in rfVals
dataFiltered( newnan ) = rfVals( newnan );
Results
From the lines printed to the console by my code, you can see that the number of NaNs in dataFiltered is reduced from 688 to 396, matching the number of NaNs in rfVals.
ans =
688
ans =
396
ans =
396
Alternate Solution 1
You can also use a smaller filter near the edges by specifying a smaller kernel and merging the result in afterwards (see the sketch below), but if you just want valid data with minimal code, my main solution will work.
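A minimal sketch of what that merge could look like, reusing the names from the question (rfVals, imageFilter, dataFiltered); the 3x3 fallback size is an arbitrary choice:
% Where the large kernel produced NaN but a smaller one does not,
% fall back to the small-kernel result.
smallFilter = fspecial('gaussian', 3, 3);
dataSmall   = imfilter(rfVals, smallFilter, 'symmetric', 'conv');
fallback    = isnan(dataFiltered) & ~isnan(dataSmall);
dataMerged  = dataFiltered;
dataMerged(fallback) = dataSmall(fallback);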
Alternate Solution 2
An alternate approach is to replace the NaN values with zero or some constant of your choice so that the filtering will work, and then truncate the result afterwards. However, for signal processing/filtering you will probably want my main solution.
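And a minimal sketch of that alternate approach, again reusing the question's variable names and taking "truncate" to mean restoring the original NaN support afterwards:
% Zero-fill the NaNs so imfilter can run everywhere, then put them back.
nanMask           = isnan(rfVals);
dataZero          = rfVals;
dataZero(nanMask) = 0;                                    % or any constant
dataFilteredZero  = imfilter(dataZero, imageFilter, 'symmetric', 'conv');
dataFilteredZero(nanMask) = NaN;                          % restore original support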
nanfilter does exactly the same thing as nanconv when filtering, as long as the filter is the same. If you record the NaN locations before you use nanfilter and then add them back to the filtered matrix, you will get the same result as you get from nanconv with the option 'nanout', as long as you use the same filter.
I have a two-dimensional matrix whose elements represent data that can be colormapped and represented as an image. I want to interpolate them, but I get strange behavior at some of the boundaries that I can't explain.
Here is the original image, the image after 3 iterations of the interpolation routine, and the image after 10 interpolation iterations.
Here's my code:
close all
ifactor = 0.9; % Interpolation factor
% Cut out the meaningless stuff at the edges
bshift_i = bshift(1:16, 1:50);
[m, n] = size(bshift_i);
% Plot the initial data using colormapping
figure()
imagesc(bshift_i, [7.5 10.5])
colorbar()
% Main processing loop
for ii = 1:10
    % Every iteration, we generate a grid that has the same size as the
    % existing data, and another one whose axis step sizes are
    % (ifactor) times smaller than the existing axes.
    [b_x, b_y] = meshgrid(1:ifactor^(ii - 1):n, 1:ifactor^(ii - 1):m);
    [b_xi, b_yi] = meshgrid(1:ifactor^ii:n, 1:ifactor^ii:m);
    % Interpolate our data and reassign to bshift_i so that we can use it
    % in the next iteration
    bshift_i = interp2(b_x, b_y, bshift_i, b_xi, b_yi);
end
% Plot the interpolated image
figure()
imagesc(bshift_i, [7.5 10.5])
colorbar()
I'm mainly wondering why the blue artifacts at the bottom and right edges occur, and how I can work around or avoid them.
The problem is how you define the x and y ranges for the interpolation.
Let's take a look at 1:ifactor^(ii - 1):m:
For the first iteration, you get 16 values, from 1 to 16.
For the second iteration, you get 17 values, from 1 to 15.4
For the third iteration, you get 19 values, from 1 to 15.58
And this is enough to illustrate the problem. With the second iteration, everything is fine, because your upper interpolation bound is inside the value range. For the third iteration, however, your upper bound is outside the value range (15.58 > 15.4).
interp2 does not extrapolate; it returns NaN for query points outside the input grid (after the 3rd iteration, bshift_i(end,end) is NaN). These NaNs get plotted as 0, hence blue.
To fix this, you have to ensure that your x and y range always include the last value. One way to do this could be:
[b_x, b_y] = meshgrid(linspace(1,n,n./(ifactor^(ii - 1))), ...
linspace(1,m,m./(ifactor^(ii - 1))));
[b_xi, b_yi] = meshgrid(linspace(1,n,n./(ifactor^(ii))), ...
linspace(1,m,m./(ifactor^(ii))));
linspace always includes the first and the last element. However, the third input is not the increment but the number of elements, which is why the element counts above are computed from n, m, and the interpolation factor so as to approximate your original step sizes.
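A quick way to see the difference at the prompt, using the third-iteration numbers from above (ifactor = 0.9 and 16 rows); this is only an illustration:
ifactor = 0.9;
v_colon = 1:ifactor^2:16;                        % 19 values, ends at 15.58
v_lin   = linspace(1, 16, ceil(16/ifactor^2));   % 20 values, always ends at 16
[v_colon(end), v_lin(end)]                       % 15.58 vs 16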