How can I use lsqcurvefit for image registration? - MATLAB

I have two 3D images, and I need to register them using "lsqcurvefit". I know that I can use "imregister", but I want to implement my own registration with "lsqcurvefit" in MATLAB. My images follow a Gaussian distribution. The documentation does not explain how to set this up; can anyone help me in detail?
Image registration is a repeated process of mapping a source image to a target image, e.g. with an affine transform. I want to use intensity-based registration using all the voxels of my image, so I need to fit the two images as closely as possible.
Thanks

Here's an example of how to do point-wise image registration using lsqcurvefit. Basically, you write a function that takes a set of points and an affine matrix (we're just going to use the translation and rotation parts, but you can add skew and magnification if desired) and returns a new set of points. There's probably a built-in function for this already, but it's only two lines, so it's easy to write. One subtlety: lsqcurvefit calls its model function as fcn(parameters, xdata), so the transformation matrix (the parameters being fit) must be the first argument. That function is:
function TformPts = TransformPoints(TransformMatrix, TformPoints)
% lsqcurvefit passes the fit parameters (the 3x3 transform) first and the
% x-data (the 3xN point list, one point per column) second
TformPts = TransformMatrix*TformPoints;
end
Here's a script that generates some points, rotates and translates them by a random angle and vector, then uses the TransformPoints function as the input for lsqcurvefit to fit the needed transformation matrix for the registration. Then it's just a matrix multiplication to generate the registered set of points. If we did this all right the red circles (original data) will line up with the black stars (shifted then registered points) very well when the code below is run.
% 20 random points in x and y between 0 and 100
% row of ones pads out third dimension
pointsList = [100*rand(2, 20); ones(1, 20)];
rotateTheta = pi*rand(1); % add rotation, in radians
translateVector = 10*rand(1,2); % add translation, up to 10 units here
% 2D transformation matrix
% last row pads out third dimension
inputTransMatrix = [cos(rotateTheta), -sin(rotateTheta), translateVector(1);
                    sin(rotateTheta),  cos(rotateTheta), translateVector(2);
                    0,                 0,                1];
% Transform starting points by this matrix to make an array of shifted
% points.
% For point-wise registration, pointsList represents points from one image,
% shiftedPoints points from the other image
shiftedPoints = inputTransMatrix*pointsList;
% Add some random noise
% Remove this line if you want the registration to be exact
shiftedPoints = shiftedPoints + rand(size(shiftedPoints));
% Plot starting sets of points
figure(1)
plot(pointsList(1,:), pointsList(2,:), 'ro');
hold on
plot(shiftedPoints(1,:), shiftedPoints(2,:), 'bx');
hold off
% Fitting routine
% Make some initial, random guesses
initialFitTheta = pi*rand(1);
initialFitTranslate = [2, 2];
guessTransMatrix = [cos(initialFitTheta), -sin(initialFitTheta), initialFitTranslate(1);
                    sin(initialFitTheta),  cos(initialFitTheta), initialFitTranslate(2);
                    0,                     0,                    1];
% fit = lsqcurvefit(@fcn, initialGuess, xdata, ydata)
fitTransMatrix = lsqcurvefit(@TransformPoints, guessTransMatrix, pointsList, shiftedPoints);
% Un-shift second set of points by fit values
fitShiftPoints = fitTransMatrix\shiftedPoints;
% Plot it up
figure(1)
hold on
plot(fitShiftPoints(1,:), fitShiftPoints(2,:), 'k*');
hold off
% Display start transformation and result fit
disp(inputTransMatrix)
disp(fitTransMatrix)
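Since the question asks for intensity-based registration over all voxels, here is a minimal hedged sketch of the same lsqcurvefit idea driven by image intensities instead of point lists. It is 2D for brevity (interp2 becomes interp3 for volumes); the rigid parameterization p = [theta, tx, ty] and the names resample and pFit are my own illustration, not part of the answer above.
% Intensity-based sketch: fit p = [theta, tx, ty] so that resampling the
% moving image under a rigid transform reproduces the fixed image
fixed = phantom(64);                    % stand-in for the fixed image
[X, Y] = meshgrid(1:64, 1:64);
c = 32.5;                               % rotate about the image center
% Inverse-map each output pixel into the input image and interpolate
resample = @(p, img) interp2(img, ...
    cos(p(1))*(X - c) - sin(p(1))*(Y - c) + c + p(2), ...
    sin(p(1))*(X - c) + cos(p(1))*(Y - c) + c + p(3), 'linear', 0);
moving = resample([0.1, 2, -3], fixed); % synthesize a "moving" image
% lsqcurvefit minimizes sum((resample(p, moving) - fixed).^2) over p
pFit = lsqcurvefit(resample, [0, 0, 0], moving, fixed);
disp(pFit) % approximately the inverse of the synthetic motion [0.1, 2, -3]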

Related

Implementing a filtered backprojection algorithm using the central slice theorem in Matlab

I'm working on a filtered back projection algorithm using the central slice theorem for a homework assignment, and while I understand the theory on paper, I've run into an issue implementing it in Matlab. I was provided with a skeleton to follow, but there is a step that I think I may be misunderstanding. Here is what I have:
function img = sampleFBP(sino,angs)
% This step is necessary so that frequency information is preserved: we pad
% the sinogram with zeros so that this is ensured.
sino = padarray(sino, floor(size(sino,1)/2), 'both');
% diagDim should be the length of an individual row of the sinogram - don't
% hardcode this!
diagDim = size(sino, 2);
% The 2DFT (2D Fourier transform) of our image will start as a matrix of
% all zeros.
fimg = zeros(diagDim);
% Design your 1-d ramp filter.
rampFilter_1d = abs(linspace(-1, 1, diagDim))';
rowIndex = 1;
for nn = angs
% Each contribution to the image's 2DFT will also begin as all zero.
imContrib = zeros(diagDim);
% Get the current row of the sinogram - use rowIndex.
curRow = sino(rowIndex,:);
% Take the 1D Fourier transform of the current row - be careful, as it's
% necessary to perform ifftshift and fftshift as Matlab tends to
% place zero-frequency components of a spectrum at the edges.
fourierCurRow = fftshift(fft(ifftshift(curRow)));
% Place the Fourier-transformed sino row at the center of the next
% image contribution. Add the ramp filter in the Fourier domain.
imContrib(floor(diagDim/2), :) = fourierCurRow;
imContrib = imContrib * fft(rampFilter_1d);
% Rotate the current image contribution to be at the correct angle on
% the 2D Fourier-space image.
imContrib = imrotate(imContrib, nn, 'crop');
% Add the current image contribution to the running representation of
% the image in Fourier space!
fimg = fimg + imContrib;
rowIndex = rowIndex + 1;
end
% Finally, just take the inverse 2D Fourier transform of the image! Don't
% forget - you may need an fftshift or ifftshift here.
rcon = fftshift(ifft2(ifftshift(fimg)));
The sinogram I'm inputting is just the output of the radon function on a Shepp-Logan phantom from 0 to 179 degrees. Running the code as it is now gives me a black image. I think I'm missing something in the loop where I add the FTs of rows to the image. From my understanding of the central slice theorem, what I think should be happening is this:
Initialize an array the same size as what the 2DFT will be (i.e., diagDim x diagDim). This is the Fourier space.
Take a row of the sinogram which corresponds to the line integral information from a single angle and apply a 1D FT to it
According to the Central Slice Theorem, the FT of this line integral is a line through the Fourier domain that passes through the origin at an angle that corresponds to the angle at which the projection was taken. So to emulate that, I take the FT of that line integral and place it in the center row of the diagDim x diagDim matrix I created
Next I take the FT of the 1D ramp filter I created and multiply it with the FT of the line integral. Multiplication in the Fourier domain is equivalent to a convolution in the spatial domain so this convolves the line integral with the filter.
Now I rotate the entire matrix by the angle the projection was taken at. This should give me a diagDim x diagDim matrix with a single line of information passing through the center at an angle. Matlab increases the size of the matrix when it is rotated but since the sinogram was padded at the beginning, no information is lost and the matrices can still be added
If all of these empty matrices with a single line through the center are added up together, it should give me the complete 2D FT of the image. All that needs to be done is take the inverse 2D FT and the original image should be the result.
If the problem I'm running into is something conceptual, I'd be grateful if someone could point out where I messed up. If instead this is a Matlab thing (I'm still kind of new to Matlab), I'd appreciate learning what it is I missed.
The code that you have posted is a pretty good example of filtered backprojection (FBP), and I believe it could be useful to people who want to learn the basics of FBP. One can use the function iradon(...) in MATLAB (see here) to perform FBP using a variety of filters. In your case, of course, the point is to learn the basics of the central slice theorem, so finding a shortcut is not the point. I have also learned a lot and refreshed my knowledge by answering your question!
Your code is well commented and describes the steps that need to be taken. There are a couple of subtle [programming] issues that need to be fixed so that the code works correctly.
First, your image representation in the Fourier domain may end up with a missing row due to floor(diagDim/2), depending on the size of the sinogram. I would change this to round(diagDim/2) so that fimg holds the complete dataset. Be aware that floor may lead to an error for certain sinogram sizes if not handled correctly. I encourage you to visualize fimg to understand what that missing row is and why it matters.
The second issue is that your sinogram needs to be transposed to be consistent with your algorithm, hence the addition of sino = sino'. Again, I encourage you to try the code without this to see what happens! Note that zero padding must be applied along the radial direction of the projections to avoid aliasing artifacts; I demonstrate an example of this at the end of this answer.
Third and most importantly, imContrib is a temporary holder for a single row of fimg. Therefore, it must maintain the same size as fimg, so
imContrib = imContrib * fft(rampFilter_1d);
should be replaced with
imContrib(round(diagDim/2), :) = imContrib(round(diagDim/2), :) .* rampFilter_1d.';
(note round(diagDim/2) for consistency with the first fix, and the non-conjugating transpose .' applied to the filter rather than ' applied to the row, since ' would conjugate the complex Fourier data).
Note that the ramp filter is linear in the frequency domain (thanks to @Cris Luengo for correcting this error). Therefore, you should drop the fft in fft(rampFilter_1d), as this filter is applied in the frequency domain (remember that fft(x) decomposes x from its original domain, such as time or space, into its frequency content).
Now a complete example to show how it works using the modified Shepp-Logan phantom:
angs = 0:359; % angles of rotation 0, 1, 2... 359
init_img = phantom('Modified Shepp-Logan', 100); % Initial image 2D [100 x 100]
sino = radon(init_img, angs); % Create a sinogram using radon transform
% Here is your function ....
% This step is necessary so that frequency information is preserved: we pad
% the sinogram with zeros so that this is ensured.
sino = padarray(sino, floor(size(sino,1)/2), 'both');
% Transpose the sinogram to be compatible with your code's definition of
% view and radial position:
% dim 1 -> view
% dim 2 -> radial position
sino = sino';
% diagDim should be the length of an individual row of the sinogram - don't
% hardcode this!
diagDim = size(sino, 2);
% The 2DFT (2D Fourier transform) of our image will start as a matrix of
% all zeros.
fimg = zeros(diagDim);
% Design your 1-d ramp filter.
rampFilter_1d = abs(linspace(-1, 1, diagDim))';
rowIndex = 1;
for nn = angs
% fprintf('rowIndex = %g => nn = %g\n', rowIndex, nn);
% Each contribution to the image's 2DFT will also begin as all zero.
imContrib = zeros(diagDim);
% Get the current row of the sinogram - use rowIndex.
curRow = sino(rowIndex,:);
% Take the 1D Fourier transform of the current row - be careful, as it's
% necessary to perform ifftshift and fftshift as Matlab tends to
% place zero-frequency components of a spectrum at the edges.
fourierCurRow = fftshift(fft(ifftshift(curRow)));
% Place the Fourier-transformed sino row at the center of the next
% image contribution. Apply the ramp filter in the Fourier domain.
imContrib(round(diagDim/2), :) = fourierCurRow;
imContrib(round(diagDim/2), :) = imContrib(round(diagDim/2), :) .* rampFilter_1d.'; % <-- NOT fft(rampFilter_1d); .' avoids conjugating the spectrum
% Rotate the current image contribution to be at the correct angle on
% the 2D Fourier-space image.
imContrib = imrotate(imContrib, nn, 'crop');
% Add the current image contribution to the running representation of
% the image in Fourier space!
fimg = fimg + imContrib;
rowIndex = rowIndex + 1;
end
% Finally, just take the inverse 2D Fourier transform of the image! Don't
% forget - you may need an fftshift or ifftshift here.
rcon = fftshift(ifft2(ifftshift(fimg)));
Note that your image has complex values, so I use imshow(abs(rcon),[]) to show it. A couple of helpful images (food for thought) with the final reconstructed image rcon:
And here is the same image if you comment out the zero padding step (i.e. comment out sino = padarray(sino, floor(size(sino,1)/2), 'both');):
Note the different object size in the reconstructed images with and without zero padding. The object shrinks when the sinogram is zero padded since the radial contents are compressed.
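As a quick sanity check, you can also compare rcon against MATLAB's built-in FBP (iradon), which applies the Ram-Lak ramp filter internally. Something along these lines, reusing init_img and angs from above (the name rcon_builtin is just illustrative):
% Built-in filtered backprojection for comparison
rcon_builtin = iradon(radon(init_img, angs), angs, 'linear', 'Ram-Lak');
figure;
subplot(1,2,1); imshow(abs(rcon), []); title('Central slice FBP');
subplot(1,2,2); imshow(rcon_builtin, []); title('iradon');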

Sparse 3D reconstruction MATLAB example

I have a stereo camera system and I am trying MATLAB's Computer Vision Toolbox example (http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html) with my own images and camera calibration files. I used Caltech's camera calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/).
First I calibrated each camera separately based on the first example, found the intrinsic camera calibration matrices for each camera, and saved them. I also undistorted the left and right images using the Caltech toolbox, so I commented out the corresponding code in the MATLAB example.
Here are the intrinsic camera matrices:
K1=[1050 0 630;0 1048 460;0 0 1];
K2=[1048 0 662;0 1047 468;0 0 1];
BTW, these are the right and center lenses of a Bumblebee XB3 camera.
Question: aren't they supposed to be the same?
Then I did stereo calibration based on the fifth example. I saved the rotation matrix (R) and translation vector (T) from that and commented out the corresponding code in the MATLAB example.
Here are the rotation and translation matrices:
R=[0.9999 -0.0080 -0.0086;0.0080 1 0.0048;0.0086 -0.0049 1];
T=[120.14 0.55 1.04];
Then I fed all these images, calibration files, and camera matrices into the MATLAB example and tried to find the 3D point cloud, but the results are not promising. I am attaching the code here. I think there are two problems:
1- My epipolar constraint values are too large! (on the order of 10^16)
2- I am not sure about the camera matrices and how I calculated them from the R and T given by the Caltech toolbox!
P.S. As far as feature extraction goes, that is fine.
It would be great if someone could help.
clear
close all
clc
files = {'Left1.tif';'Right1.tif'};
for i = 1:numel(files)
files{i}=fullfile('...\sparse_matlab', files{i});
images(i).image = imread(files{i});
end
figure;
montage(files); title('Pair of Original Images')
% Intrinsic camera parameters
load('Calib_Results_Left.mat')
K1 = KK;
load('Calib_Results_Right.mat')
K2 = KK;
%Extrinsics using stereo calibration
load('Calib_Results_stereo.mat')
Rotation=R;
Translation=T';
images(1).CameraMatrix=[Rotation; Translation] * K1;
images(2).CameraMatrix=[Rotation; Translation] * K2;
% Detect feature points and extract SURF descriptors in images
for i = 1:numel(images)
%detect SURF feature points
images(i).points = detectSURFFeatures(rgb2gray(images(i).image),...
'MetricThreshold',600);
%extract SURF descriptors
[images(i).featureVectors,images(i).points] = ...
extractFeatures(rgb2gray(images(i).image),images(i).points);
end
% Visualize several extracted SURF features from the Left image
figure; imshow(images(1).image);
title('1500 Strongest Feature Points from Globe01');
hold on;
plot(images(1).points.selectStrongest(1500));
indexPairs = ...
matchFeatures(images(1).featureVectors, images(2).featureVectors,...
'Prenormalized', true,'MaxRatio',0.4) ;
matchedPoints1 = images(1).points(indexPairs(:, 1));
matchedPoints2 = images(2).points(indexPairs(:, 2));
figure;
% Visualize correspondences
showMatchedFeatures(images(1).image,images(2).image,matchedPoints1,matchedPoints2,'montage' );
title('Original Matched Features from Globe01 and Globe02');
% Set a value near zero, It will be used to eliminate matches that
% correspond to points that do not lie on an epipolar line.
epipolarThreshold = .05;
for k = 1:length(matchedPoints1)
% Compute the fundamental matrix using the example helper function
% Evaluate the epipolar constraint
epipolarConstraint =[matchedPoints1.Location(k,:),1]...
*helperCameraMatricesToFMatrix(images(1).CameraMatrix,images(2).CameraMatrix)...
*[matchedPoints2.Location(k,:),1]';
%%%% here my epipolarConstraint results are bad %%%%%%%%%%%%%
% Only consider feature matches where the absolute value of the
% constraint expression is less than the threshold.
valid(k) = abs(epipolarConstraint) < epipolarThreshold;
end
validpts1 = images(1).points(indexPairs(valid, 1));
validpts2 = images(2).points(indexPairs(valid, 2));
figure;
showMatchedFeatures(images(1).image,images(2).image,validpts1,validpts2,'montage');
title('Matched Features After Applying Epipolar Constraint');
% convert image to double format for plotting
doubleimage = im2double(images(1).image);
points3D = ones(length(validpts1),4); % store homogeneous world coordinates
color = ones(length(validpts1),3); % store color information
% For all point correspondences
for i = 1:length(validpts1)
% For all image locations from a list of correspondences build an A
pointInImage1 = validpts1(i).Location;
pointInImage2 = validpts2(i).Location;
P1 = images(1).CameraMatrix'; % Transpose to match the convention in
P2 = images(2).CameraMatrix'; % in [1]
A = [
pointInImage1(1)*P1(3,:) - P1(1,:);...
pointInImage1(2)*P1(3,:) - P1(2,:);...
pointInImage2(1)*P2(3,:) - P2(1,:);...
pointInImage2(2)*P2(3,:) - P2(2,:)];
% Compute the 3-D location using the smallest singular value from the
% singular value decomposition of the matrix A
[~,~,V]=svd(A);
X = V(:,end);
X = X/X(end);
% Store location
points3D(i,:) = X';
% Store pixel color for visualization
y = round(pointInImage1(1));
x = round(pointInImage1(2));
color(i,:) = squeeze(doubleimage(x,y,:))';
end
% add green point representing the origin
points3D(end+1,:) = [0,0,0,1];
color(end+1,:) = [0,1,0];
% show images
figure('units','normalized','outerposition',[0 0 .5 .5])
subplot(1,2,1); montage(files,'Size',[1,2]); title('Original Images')
% plot point-cloud
hAxes = subplot(1,2,2); hold on; grid on;
scatter3(points3D(:,1),points3D(:,2),points3D(:,3),50,color,'fill')
xlabel('x-axis (mm)');ylabel('y-axis (mm)');zlabel('z-axis (mm)')
view(20,24);axis equal;axis vis3d
set(hAxes,'XAxisLocation','top','YAxisLocation','left',...
'ZDir','reverse','Ydir','reverse');
grid on
title('Reconstructed Point Cloud');
First of all, the Computer Vision System Toolbox now includes a Camera Calibrator App for calibrating a single camera, and also support for programmatic stereo camera calibration. It would be easier for you to use those tools, because the example you are using and the Caltech Calibration Toolbox use somewhat different conventions.
The example uses the pre-multiply convention, i.e. row vector * matrix, while the Caltech toolbox uses the post-multiply convention (matrix * column vector). That means that if you use the camera parameters from Caltech, you have to transpose the intrinsic matrix and the rotation matrices. That could be the main cause of your problems.
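For illustration, a rough sketch of that conversion (the variable names KK_left, KK_right, R, T follow the Caltech stereo output; double-check them against your Calib_Results_stereo.mat):
% Sketch: build pre-multiply (row-vector) camera matrices from Caltech's
% post-multiply stereo calibration. Camera 1 is taken as the origin.
load('Calib_Results_stereo.mat')       % provides R, T, KK_left, KK_right
P1 = [eye(3); zeros(1, 3)] * KK_left'; % 4x3, used as [x y z 1] * P1
P2 = [R'; T'] * KK_right';             % transpose R and K for the new convention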
As far as the intrinsics being different between your two cameras, that is perfectly normal. All cameras are slightly different.
It would also help to see the matched features that you've used for triangulation. Given that you are reconstructing an elongated object, it doesn't seem too surprising to see the reconstructed points form a line in 3D...
You could also try rectifying the images and doing a dense reconstruction, as in the example I've linked to above.
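If you redo the calibration with the toolbox's own stereo calibration (giving you a stereoParameters object, called stereoParams below), the dense workflow is roughly:
% Rough sketch of rectification + dense reconstruction (newer toolbox API)
[J1, J2] = rectifyStereoImages(images(1).image, images(2).image, stereoParams);
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));
xyzPoints = reconstructScene(disparityMap, stereoParams); % 3-D points, in mm here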

Matlab - Trying to use vectors with grid coordinates and value at each point for a color plot

I'm trying to make a color plot in MATLAB using output data from another program. What I have are three vectors indicating the x-position, the y-position (both in milliarcseconds, since this represents an image of the surroundings of a black hole), and the value (which will be assigned a color) of every point in the desired image. Apparently I can't use pcolor, because the values that indicate the color of each "pixel" are not in a matrix, and I don't know a way other than meshgrid to create a matrix out of the vectors, which didn't work due to the size of the vectors.
Thanks in advance for any help, I may not be able to reply immediately.
If we make no assumptions about the arrangement of the x,y coordinates (i.e. non-monotonic) and the sparsity of the data samples, the best way to get a nice image out of your vectors is to use TriScatteredInterp. Here is an example:
% samplesToGrid.m
function [vi,xi,yi] = samplesToGrid(x,y,v)
F = TriScatteredInterp(x,y,v);
[yi,xi] = ndgrid(min(y(:)):max(y(:)), min(x(:)):max(x(:)));
vi = F(xi,yi);
Here's an example of taking 500 "pixel" samples on a 100x100 grid and building a full image:
% exampleSparsePeakSamples.m
x = randi(100,[500 1]); y = randi(100,[500 1]);
v = exp(-(x-50).^2/50) .* exp(-(y-50).^2/50) + 1e-2*randn(size(x));
vi = samplesToGrid(x,y,v);
imagesc(vi); axis image
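Note that TriScatteredInterp has since been superseded by scatteredInterpolant in newer MATLAB releases; the equivalent gridding looks roughly like this:
% scatteredInterpolant version (newer MATLAB releases)
F = scatteredInterpolant(x, y, v, 'natural', 'none');
[xi, yi] = meshgrid(min(x):max(x), min(y):max(y));
vi = F(xi, yi); % NaN outside the convex hull with 'none' extrapolation
imagesc(vi); axis image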
Gordon's answer will work if the coordinates are integer-valued, but the image will be sparse.
You can assign your values to a matrix based on the x and y coordinates and then use imagesc (or a similar function).
% Assuming the X and Y coords start at 1
max_x = max(Xcoords);
max_y = max(Ycoords);
data = nan(max_y, max_x); % Note the order of y and x
indexes = sub2ind(size(data), Ycoords, Xcoords); % one linear index per sample
data(indexes) = Values;
imagesc(data); % note that NaN values will be colored with the minimum colormap value

matlab: how to plot multidimensional array

Let's say I have 9 MxN black and white images that are in some way related to one another (i.e. time lapse of some event). What is a way that I can display all of these images on one surface plot?
Assume the MxN matrices only contain 0's and 1's. Assume the images simply contain white lines on a black background (i.e. pixel value == 1 if that pixel is part of a line, 0 otherwise). Assume images are ordered in such a way as to suggest movement progression of line(s) in subsequent images. I want to be able to see a "side-view" (or volumetric representation) of these images which will show the surface that a particular line "carves out" in its movement across the images.
Coding is done in MATLAB. I have looked at plot (but it only does 2D plots) and surf, which does 3D plots but doesn't work for my MxNx9 matrix of images. I have also tried to experiment with contourslice, but not sure what parameters to pass it.
Thanks!
Mariya
Are these images black and white with simple features on a "blank" field, or greyscale, with more dense information?
I can see a couple of approaches.
You can use movie() to display a sequence of images as an animation.
For a static view of sparse, simple data, you could plot each image as a separate layer in a single figure, giving each layer a different color for the foreground, and using AlphaData to make the background transparent so all the steps in the sequence show through. The gradient of colors corresponds to position in the image sequence. Here's an example.
function plotImageSequence
% Made-up test data
nLayers = 9;
x = zeros(100,100,nLayers);
for i = 1:nLayers
x(20+(3*i),:,i) = 1;
end
% Plot each image as a "layer", indicated by color
figure;
hold on;
for i = 1:nLayers
layerData = x(:,:,i);
alphaMask = layerData == 1;
layerData(logical(layerData)) = i; % So each layer gets its own color
image('CData',layerData,...
'AlphaData',alphaMask,...
'CDataMapping','scaled');
end
hold off
Directly showing the path of movement a "line" carves out is hard with raster data, because Matlab won't know which "moved" pixels in two subsequent images are associated with each other. I don't suppose you have underlying vector data for the geometric features in the images? Plot3() might allow you to show their movement, with time as the z axis, as in the toy sketch below. Or you could use the regular plot() and some manual fiddling to plot the paths of all the control points or vertices in the geometric features.
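For instance, a toy plot3 sketch with made-up positions (not your data):
% Toy example: one feature tracked across 9 frames, frame index as Z
t = 1:9;                       % frame (image) index
featRow = 20 + 3*t;            % made-up row position per frame
featCol = repmat(50, size(t)); % made-up, constant column position
plot3(featCol, featRow, t, '-o');
xlabel('column'); ylabel('row'); zlabel('frame index');
grid on; view(3);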
EDIT: Here's a variation that uses patch() to draw each pixel as a little polygon floating in space at the Z level of its index in the image sequence. I think this will look more like the "surface" style plots you are asking for. You could fiddle with the FaceAlpha property to make dense plots more legible.
function plotImageSequencePatch
% Made-up test data
nLayers = 6;
sz = [50 50];
img = zeros(sz(1),sz(2),nLayers);
for i = 1:nLayers
img(20+(3*i),:,i) = 1;
end
% Plot each image as a "layer", indicated by color
% With each "pixel" as a separate patch
figure;
set(gca, 'XLim', [0 sz(1)]);
set(gca, 'YLim', [0 sz(2)]);
hold on;
for i = 1:nLayers
layerData = img(:,:,i);
[x,y] = find(layerData); % X,Y of all pixels
% Reshape in to patch outline
x = x';
y = y';
patch_x = [x; x+1; x+1; x];
patch_y = [y; y; y+1; y+1];
patch_z = repmat(i, size(patch_x));
patch(patch_x, patch_y, patch_z, i);
end
hold off
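By default the axes show the layers top-down; after calling plotImageSequencePatch, switch to a 3-D view to see the surface the line carves out:
view(3); % rotate to a 3-D perspective
zlabel('image index');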

How do I create an image matrix with a line drawn in it in MATLAB?

I want to plot a line from one well-defined point to another, and then turn it into an image matrix so I can use a Gaussian filter on it for smoothing. For this I use the functions line and getframe to plot a line and capture the figure window in an image, but getframe is very slow and not very reliable. I noticed that it does not capture anything when the computer is locked, and I got an out-of-memory error after 170 executions.
My questions are:
Is there a substitute to getframe that I can use?
Is there a way to create the image matrix and draw the line directly in it?
Here is a minimal code sample:
figure1=line([30 35] ,[200 60]);
F= getframe;
hsize=40; sigma=20;
h = fspecial('gaussian',hsize,sigma);
filteredImg = imfilter(double(F.cdata), h,256);
imshow(uint8(filteredImg));
[update]
High-Performance Mark's idea with linspace looks very promising, but how do I access the matrix coordinates calculated with linspace? I tried the following code, but it does not work as I think it should. I assume it is a very simple and basic MATLAB thing, but I just can not wrap my head around it:
matrix=zeros(200,60);
diagonal=round([linspace(30,200,numSteps); linspace(35,60,numSteps)]);
matrix(diagonal(1,:), diagonal(2,:))=1;
imshow(matrix);
Here's one example of drawing a line directly into a matrix. First, we'll create a matrix of zeros for an empty image:
mat = zeros(250, 250, 'uint8'); % A 250-by-250 matrix of type uint8
Then, let's say we want to draw a line running from (30, 35) to (200, 60). We'll first compute how many pixels long the line will have to be:
x = [30 200]; % x coordinates (running along matrix columns)
y = [35 60]; % y coordinates (running along matrix rows)
nPoints = max(abs(diff(x)), abs(diff(y)))+1; % Number of points in line
Next, we compute row and column indices for the line pixels using linspace, convert them from subscripted indices to linear indices using sub2ind, then use them to modify mat:
rIndex = round(linspace(y(1), y(2), nPoints)); % Row indices
cIndex = round(linspace(x(1), x(2), nPoints)); % Column indices
index = sub2ind(size(mat), rIndex, cIndex); % Linear indices
mat(index) = 255; % Set the line pixels to the max value of 255 for uint8 types
You can then visualize the line and the filtered version with the following:
subplot(1, 2, 1);
image(mat); % Show original line image
colormap(gray); % Change colormap
title('Line');
subplot(1, 2, 2);
h = fspecial('gaussian', 20, 10); % Create filter
filteredImg = imfilter(mat, h); % Filter image
image(filteredImg); % Show filtered line image
title('Filtered line');
If you have the Computer Vision System Toolbox, there is a ShapeInserter object available. It can be used to draw lines, circles, rectangles, and polygons on an image.
mat = zeros(250,250,'uint8');
shapeInserter = vision.ShapeInserter('Shape', 'Lines', 'BorderColor', 'White');
y = step(shapeInserter, mat, int32([30 60 180 210]));
imshow(y);
http://www.mathworks.com/help/vision/ref/vision.shapeinserterclass.html
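In newer releases of the toolbox, insertShape does the same job without constructing a System object; roughly:
% insertShape alternative (newer Computer Vision Toolbox releases)
img = insertShape(zeros(250, 250, 'uint8'), 'Line', [30 60 180 210], ...
    'Color', 'white');
imshow(img);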
You can check my answer here. It is a robust way to achieve what you are asking for. The advantage of my approach is that it doesn't need additional parameters to control the density of the drawn line.
Something like this:
[linspace(30,200,numSteps); linspace(35,60,numSteps)]
Does that work for you ?
Mark