Image Processing - Using Radon Transform for Pattern Recognition in MATLAB

I am attempting to extract the Radon Signature in order to recognize patterns of clothing (striped, plaid, irregular, and patternless), as done in [1].
Algorithm to be implemented:
1. Use the Sobel operator to compute the gradient map as f(x,y).
2. Perform the Radon transform based on the maximum disk area.
3. Compute the variance of r under all theta directions.
4. Employ the L2-norm to normalize the feature vector.
5. Plot the Radon Signature as a bar chart of var(r) for all theta values.
I have done the following:
img = imread('plaid.jpg');
grey = rgb2gray(img);
img2 = edge(grey, 'sobel');
theta = -89:90;                    % define theta before taking its size
vararray = zeros(1,size(theta,2));
for j = 1:size(theta,2)
    [R3,xp3] = radon(img2,theta(j));
    vararray(j) = var(R3);
end
vararray = vararray/norm(vararray);
figure(1), bar(theta,vararray), title('Radon Signature');
I believe that my error lies in the first 2 steps. I am unsure how to perform Radon only on the maximum disk area.
My results are shown on the right, while those from the article (referenced below) are shown on the left.
However, my results should show at least 2 distinct peaks, as in the article's results, but they do not.
Any assistance is appreciated.
Source of algorithm: "Assistive Clothing Pattern Recognition for Visually Impaired People" by Xiaodong Yang, Shuai Yuan, and YingLi Tian (IEEE).

Maximum disk area is, as @beaker suspected, defined by the largest filled circle that fits inside the bounding box of the image; you can see this in Fig. 3(b) of the article.
Another thing you did wrong is using the edge detector edge(grey, 'sobel'), when you should use the gradient map or, more formally, the gradient magnitude. Here's code that produces a curve close to what is shown in Fig. 3(d). How to quantify it to six peaks remains a question.
A = imread('Layer-5.png');            % image from the article
A = double(rgb2gray(A));
% gradient magnitude
dx = imfilter(A, fspecial('sobel'));  % x, 3x3 kernel
dy = imfilter(A, fspecial('sobel')'); % y
gradmag = sqrt(dx.^2 + dy.^2);
% mask by disk
R = min(size(A)/2);                   % radius
disk = insertShape(zeros(size(A)), 'FilledCircle', [size(A)/2, R]);
mask = double(rgb2gray(disk) ~= 0);
gradmag = mask.*gradmag;
% radon transform
theta = linspace(0, 180, 180);
vars = zeros(size(theta));
for u = 1:length(theta)
    [rad, xp] = radon(gradmag, theta(u));
    % ignore radii outside the maximum disk area,
    % so you don't sum zeroes into the variance
    indices = find(abs(xp) < R);
    vars(u) = var(rad(indices));
end
vars = vars/norm(vars);
figure; plot(vars);
Bear in mind that images copied from the article come with JPG artefacts; after good denoising (a tad too much here) you get much more prominent results.
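For instance, a minimal denoising sketch would be a Gaussian blur before the gradient step (the kernel size and sigma below are guesses, not the values used for the figure):
A = imfilter(A, fspecial('gaussian',9,2), 'replicate'); % smooth out JPG artefacts before computing gradients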

Related

Two-dimensional matched filter

I want to implement two dimensional matched filter for blood vessel extraction according to the paper "Detection of Blood Vessels in Retinal Images Using Two-Dimensional Matched Filters" by Chaudhuri et al., IEEE Trans. on Medical Imaging, 1989 (there's a PDF on the author's web site).
A brief description is that a blood vessel's cross-section has a Gaussian distribution, so I want to use a Gaussian matched filter to increase the SNR. Such a kernel may be mathematically expressed as:
K(x,y) = -exp(-x^2/(2*sigma^2)) for |x| < 3*sigma, |y| < L/2
L here is the length of a vessel segment with fixed orientation. Experimentally, sigma = 1.5 and L = 7.
My MATLAB code for this part is:
s = 1.5;          % sigma
t = -3*s:3*s;
theta = 0:15:165; % different rotations
% one-dimensional kernel
x = 1/sqrt(6*s)*exp(-t.^2/(2*s.^2));
L = 7;
% two-dimensional Gaussian kernel
x2 = repmat(x,L,1);
Consider the response of this filter for a pixel belonging to the background retina. Assuming the background to have constant intensity with zero mean additive Gaussian white noise, the expected value of the filter output should ideally be zero. The convolution kernel is, therefore, modified by subtracting the mean value of s(t) from the function itself. The mean value of the kernel is determined as: m = Sum(K(x,y))/(number of points).
Thus, the convolutional mask used in this algorithm is given by K'(x,y) = K(x,y) - m.
My MATLAB code:
m = sum(x2(:))/(size(x2,1)*size(x2,2));
x2 = x2-m;
A vessel may be oriented at any angle 0 < theta < 180, and the matched filter response is maximum when it is aligned at theta ± 90 (it is the cross-section distribution that is Gaussian, not the vessel itself).
Thus we need to rotate the matched filter 12 times in 15-degree increments.
My MATLAB code is attached here but I don't get a desirable result. Any help is appreciated.
%apply rotated matched filter on image
r = cell(1,12);
for k = 1:12
    x3 = imrotate(x2,theta(k),'crop'); %figure;imagesc(x3);colormap gray;
    r{k} = conv2(img,x3);
end
w = []; h = zeros(584,565);
for i = 1:565
    for j = 1:584
        for k = 1:12 % one response per rotation (12, not 32)
            w = [w, r{k}(j,i)];
        end
        h(j,i) = max(abs(w));
        w = [];
    end
end
%show result
figure('Name','after matched filter'); imagesc(h); colormap gray
For rotation I used imrotate, which seems more sensible to me, but in the paper it is different: suppose p = [x, y] is a discrete point in the kernel. To compute the coefficients of the rotated kernel we have [u, v] = p * Rotation_Matrix, where
Rotation_Matrix = [cos(theta), sin(theta); -sin(theta), cos(theta)]
And the kernel is:
K(x,y) = -exp(-u^2/(2*s^2))
But the new kernel doesn't have a Gaussian shape anymore, while using imrotate preserves the Gaussian shape. So what is the benefit of using the rotation matrix?
Input image is:
Output:
Matched filtering helps increase SNR but background noise is amplified too.
Am I right to use imrotate to rotate the kernel? My main problem is with the rotation matrix: why is it used, and what is the right code to implement it?
The reason to build the filter from its analytic expression for each rotation, rather than using imrotate, is that the filter extent is not circular, and therefore rotating brings in "new" pixel values and pushes some other pixels out of the kernel. Furthermore, rotating a kernel constructed as here (smooth transition along one direction, step edge along the other dimension) requires different interpolation methods along each dimension, which imrotate cannot do. The resulting rotated kernel will always be wrong.
Both these issues can be easily seen when displaying the kernel you make together with two rotated versions:
This display brings an additional issue to the front: the kernel is not centered on a pixel, causing it to shift the output by half a pixel.
Note also that, when subtracting the mean, it is important that this mean be computed only over the original domain of the filter, and that any zeros used to pad this domain to a rectangular shape remain zero (these should not become negative).
The rotated kernels can be constructed as follows:
m = max(ceil(3*s),(L-1)/2);
[x,y] = meshgrid(-m:m,-m:m); % non-rotated coordinate system, contains (0,0)
t = pi/6; % angle in radian
u = cos(t)*x - sin(t)*y; % rotated coordinate system
v = sin(t)*x + cos(t)*y; % rotated coordinate system
N = (abs(u) <= 3*s) & (abs(v) <= L/2); % domain
k = exp(-u.^2/(2*s.^2)); % kernel
k = k - mean(k(N));
k(~N) = 0; % set kernel outside of domain to 0
This is the result for the three rotations used in the example above (the grey around the edges of the kernel corresponds to the value 0, the black pixels have a negative value):
Another issue is that you use conv2 with the default 'full' output shape; you should be using 'same' here, so that the output of the filter matches the input.
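For example, a quick size check (with a hypothetical 100x100 image and 13x13 kernel, not values from the answer):
A = rand(100); k = ones(13);
size(conv2(A,k))          % 112x112: 'full' (the default) grows the output
size(conv2(A,k,'same'))   % 100x100: matches the input image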
Note that, instead of computing all filter responses, and computing the max afterwards, it is much easier to compute the max as you compute each filter response. All of the above leads to the following code:
img = im2double(rgb2gray(img));
s = 1.5;          % sigma
L = 7;
theta = 0:15:165; % different rotations
out = zeros(size(img));
m = max(ceil(3*s),(L-1)/2);
[x,y] = meshgrid(-m:m,-m:m); % non-rotated coordinate system, contains (0,0)
for t = theta
    t = t / 180 * pi;        % angle in radians
    u = cos(t)*x - sin(t)*y; % rotated coordinate system
    v = sin(t)*x + cos(t)*y; % rotated coordinate system
    N = (abs(u) <= 3*s) & (abs(v) <= L/2); % domain
    k = exp(-u.^2/(2*s.^2)); % kernel
    k = k - mean(k(N));
    k(~N) = 0;               % set kernel outside of domain to 0
    res = conv2(img,k,'same');
    out = max(out,res);
end
out = out/max(out(:)); % force output to be in [0,1] interval that MATLAB likes
imwrite(out,'so_result.png')
I get the following output:

Count circle objects in an image using matlab

How to count circle objects in a bright image using MATLAB?
The input image is:
The imfindcircles function can't find any circles in this image.
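For reference, a typical call would look something like this (the radius range and options are guesses, not taken from the question):
img = rgb2gray(imread('Mlj6r.jpg'));
[centers, radii] = imfindcircles(img, [5 20], ...
    'ObjectPolarity','bright', 'Sensitivity',0.9); % returns empty here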
Based on well-known image processing techniques, you can write your own processing tool:
img = imread('Mlj6r.jpg');             % read the image
imgGray = rgb2gray(img);               % convert to grayscale
sigma = 1;
imgGray = imgaussfilt(imgGray, sigma); % filter the image (we will take derivatives, which are sensitive to noise)
imshow(imgGray)                        % show the image
[gx, gy] = gradient(double(imgGray));  % take the first derivatives
[gxx, gxy] = gradient(gx);             % take the second derivatives
[gxy, gyy] = gradient(gy);             % take the second derivatives
k = 0.04; % 0.04-0.15 (see Wikipedia)
blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2); % Harris corner detector (high second derivatives in two perpendicular directions)
blob = blob .* (gxx < 0 & gyy < 0);    % keep only bright blobs (negative second derivatives at the peak)
figure
imshow(blob) % show the blobs
blobThreshold = 1;
circles = imregionalmax(blob) & blob > blobThreshold; % find local maxima and apply a threshold
figure
imshow(imgGray) % show the original image
hold on
[X, Y] = find(circles); % find the positions of the circles
plot(Y, X, 'w.');       % plot the circle positions on top of the original figure
nCircles = length(X)
This code counts 2710 circles, which is probably a slight (but not so bad) overestimation.
The following figure shows the original image with the circle positions indicated as white dots. Some wrong detections are made at the border of the object. You can try adjusting the constants sigma, k and blobThreshold to obtain better results. In particular, a higher k may be beneficial. See Wikipedia for more information about the Harris corner detector.
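A minimal sketch of such a sweep, reusing the variables from the code above (the k values are arbitrary):
for k = [0.04 0.08 0.15]
    blob = (gxx.*gyy - gxy.*gxy - k*(gxx + gyy).^2) .* (gxx < 0 & gyy < 0);
    circles = imregionalmax(blob) & blob > blobThreshold;
    fprintf('k = %.2f -> %d circles\n', k, nnz(circles));
end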

Fourier transform for fiber alignment

I'm working on an application to determine from an image the degree of alignment of a fiber network. I've read several papers on this issue and they basically do this:
Find the 2D discrete Fourier transform (DFT = F(u,v)) of the image (gray, range 0-255)
Find the Fourier Spectrum (FS = abs(F(u,v))) and the Power Spectrum (PS = FS^2)
Convert the spectrum to polar coordinates and divide it into 1° intervals.
Calculate the number-averaged line intensities (FI) for each interval (theta), that is, the average of all the intensities (pixels) forming "theta" degrees with respect to the horizontal axis.
Transform FI(theta) to Cartesian coordinates:
Cxy(theta) = [FI*cos(theta), FI*sin(theta)]
Find the eigenvalues (lambda1 and lambda2) of the matrix Cxy'*Cxy
Find the alignment index as alpha = 1 - lambda2/lambda1
I've implemented this in MATLAB (code below), but I'm not sure whether it is correct, since points 3 and 4 are not really clear to me (I'm getting similar results to those of the papers, but not in all cases). For instance, in point 3, does "spectrum" refer to FS or to PS? And in point 4, how should this average be done? Are all the pixels considered (even though there are more pixels on the diagonal)?
rgb = imread('network.tif'); % 513x513 pixels
im = rgb2gray(rgb);
im = imrotate(im,-90); % since FFT space is rotated 90°
FT = fft2(im);
FS = abs(FT); % Fourier spectrum
PS = FS.^2;   % Power spectrum
FS = fftshift(FS);
PS = fftshift(PS);
xoffset = (513-1)/2;
yoffset = (513-1)/2;
% Avoid low frequency points
x1 = 5;
y1 = 0;
% Maximum high frequency pixels
x2 = 255;
y2 = 0;
i = 0; % initialize the interval counter
for theta = 0:pi/180:pi
    % Transposed rotation matrix
    Rt = [cos(theta) sin(theta);
         -sin(theta) cos(theta)];
    % Find radial lines necessary for improfile
    xy1_rot = Rt * [x1; y1] + [xoffset; yoffset];
    xy2_rot = Rt * [x2; y2] + [xoffset; yoffset];
    plot([xy1_rot(1) xy2_rot(1)], ...
         [xy1_rot(2) xy2_rot(2)], ...
         'linestyle','none', ...
         'marker','o', ...
         'color','k');
    prof = improfile(FS,[xy1_rot(1) xy2_rot(1)],[xy1_rot(2) xy2_rot(2)]); % using FS here (whether FS or PS is the question above)
    i = i + 1;
    FI(i) = sum(prof(:))/length(prof);
    Cxy(i,:) = [FI(i)*cos(theta), FI(i)*sin(theta)];
end
C = Cxy'*Cxy;
[V,D] = eig(C)
lambda2 = D(1,1);
lambda1 = D(2,2);
alpha = 1 - lambda2/lambda1
Figure: A) original image, B) plot of log(P+1), C) polar plot of FI.
My main concern is that when I use an artificial, perfectly aligned image (attached figure), I get alpha = 0.91, when it should be exactly 1.
Any help will be greatly appreciated.
PS: those black dots in the middle plot are just the points used by improfile.
I believe there are a couple of sources of potential error here that are keeping you from getting a perfect alpha value.
Discrete Fourier Transform
You have discrete imaging data, which forces you to take a discrete Fourier transform, which inevitably (depending on the resolution of the input data) has some accuracy issues.
Binning vs. Sampling Along a Line
The way that you have done the binning is that you literally drew a line (rotated by a particular angle) and sampled the image along that line using improfile. Using improfile performs interpolation of your data along that line introducing yet another potential source of error. The default is nearest neighbor interpolation which in the example shown below can cause multiple "profiles" to all pick up the same points.
This was with a rotation of 1 degree off vertical, when technically you'd want those peaks to appear only for a perfectly vertical line. It is easy to see how this sort of interpolation of the Fourier spectrum can lead to a spread around the "correct" answer.
Data Undersampling
Similar to Nyquist sampling in the Fourier domain, sampling in the spatial domain has some requirements as well.
Imagine for a second that you wanted to use 45-degree bin widths instead of 1-degree ones. Your approach would still sample along a thin line and use that sample to represent 45 degrees' worth of data. Clearly, this is a gross under-sampling of the data, and you can imagine that the result wouldn't be very accurate.
It becomes more and more of an issue the further you get from the center of the image, since the data in this "bin" is really pie-wedge shaped and you're approximating it with a line.
A Potential Solution
A different approach to binning would be to determine the polar coordinates (r, theta) of all pixel centers in the image, bin the theta components into 1-degree bins, and then sum all of the values that fall into each bin.
This has several advantages:
It removes the undersampling that we talked about and draws samples from the entire "pie wedge" regardless of the sampling angle.
It ensures that each pixel belongs to one and only one angular bin
I have implemented this alternate approach in the code below with some false horizontal line data and am able to achieve an alpha value of 0.988 which I'd say is pretty good given the discrete nature of the data.
% Draw a bunch of horizontal lines
data = zeros(101);
data(5:5:end,:) = 1;
fourier = fftshift(fft2(data));
FS = abs(fourier);
PS = FS.^2;
center = fliplr(size(FS)) / 2;
[xx,yy] = meshgrid(1:size(FS,2), 1:size(FS,1));
coords = [xx(:), yy(:)];
% De-mean coordinates to center at the middle of the image
coords = bsxfun(@minus, coords, center);
[theta, R] = cart2pol(coords(:,1), coords(:,2));
% Convert to degrees and round them to the nearest degree
degrees = mod(round(rad2deg(theta)), 360);
degreeRange = 0:359;
% Band-pass to ignore high and low frequency components
lowfreq = 5;
highfreq = size(FS,1)/2;
% Now average everything with the same degrees (sum over PS and average by the number of pixels)
ps_integral = zeros(size(degreeRange));
fs_integral = zeros(size(degreeRange));
for k = degreeRange
    ps_integral(k+1) = mean(PS(degrees == k & R > lowfreq & R < highfreq));
    fs_integral(k+1) = mean(FS(degrees == k & R > lowfreq & R < highfreq));
end
thetas = deg2rad(degreeRange);
Cxy = [ps_integral.*cos(thetas);
       ps_integral.*sin(thetas)]';
C = Cxy' * Cxy;
[V,D] = eig(C);
lambda2 = D(1,1);
lambda1 = D(2,2);
alpha = 1 - lambda2/lambda1;

How can I get projections of an image without using Radon

I need the Radon transform of an image, but I am not permitted to use MATLAB's radon function, so I have to write my own.
I have done some research but couldn't find any satisfying examples.
Is there any other way to get projections of an image without using radon?
If not, how can I write my own radon function (at least a little clue)?
The Radon transform converts a 2D image into a set of 1D projections: for each angle theta, we calculate the sums of pixel intensities along parallel lines through the image.
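In continuous form, the projection at angle theta is the line integral (a standard textbook definition, added here for reference):
R(rho, theta) = ∬ f(x,y) · δ(x·cos(theta) + y·sin(theta) − rho) dx dy
which the code below approximates by rotating the image and summing down the columns.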
Original code by Justin K. Romberg (Rice University) on https://www.clear.rice.edu/elec431/projects96/DSP/projections.html
projections.m
%%This MATLAB function takes an image matrix and vector of angles and
%%then finds the 1D projection (Radon transform) at each of the angles.
%%It returns a matrix whose columns are the projections at each angle.
%% Written by: Justin K. Romberg
function PR = projections(IMG, THETA)
% pad the image with zeros so we don't lose anything when we rotate
[iLength, iWidth] = size(IMG);
iDiag = sqrt(iLength^2 + iWidth^2);
LengthPad = ceil(iDiag - iLength) + 2;
WidthPad = ceil(iDiag - iWidth) + 2;
padIMG = zeros(iLength+LengthPad, iWidth+WidthPad);
padIMG(ceil(LengthPad/2):(ceil(LengthPad/2)+iLength-1), ...
       ceil(WidthPad/2):(ceil(WidthPad/2)+iWidth-1)) = IMG;
% loop over the angles, rotate by 90-theta (so the projection can be taken
% by summing down the columns); bilinear interpolation is used for the rotation
n = length(THETA);
PR = zeros(size(padIMG,2), n);
for i = 1:n
    tic
    tmpimg = imrotate(padIMG, 90-THETA(i), 'bilinear', 'crop');
    PR(:,i) = (sum(tmpimg))';
    THETA(i) % print progress
    toc
end
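A usage sketch (the file name and angle set are placeholders):
IMG = double(rgb2gray(imread('myimage.png')));
PR = projections(IMG, 0:179);                           % one projection per degree
figure, imagesc(0:179, 1:size(PR,1), PR), colormap gray % view as a sinogram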
I don't know if it is what you're searching for, but have a look at that: MATLAB code at the end.

Measuring weighted mean length from an electrophoresis gel image

Background:
My question relates to extracting a feature from an electrophoresis gel (see below). In this gel, DNA is loaded from the top and allowed to migrate under a voltage gradient. The gel has sieves, so smaller molecules migrate further than longer ones, resulting in the separation of DNA by length. So the higher up the molecule is, the longer it is.
Question:
In this image there are 9 lanes each with separate source of DNA. I am interested in measuring the mean location (value on the y axis) of each lane.
I am really new to image processing, but I do know MATLAB and I can get by with R with some difficulty. I would really appreciate it if someone can show me how to go about finding the mean of each lane.
Here's my try. It requires that the gels are nice (i.e. straight lanes and the gel should not be rotated), but should otherwise work fairly generically. Note that there are two image-size-dependent parameters that will need to be adjusted to make this work on images of different size.
%# first size-dependent parameter: should be about 1/4th-1/5th
%# of the lane width in pixels.
minFilterWidth = 10;
%# second size-dependent parameter for filtering the
%# lane profiles
gaussWidth = 5;
%# read the image, normalize to 0...1
img = imread('http://img823.imageshack.us/img823/588/gele.png');
img = rgb2gray(img);
img = double(img)/255;
%# Otsu thresholding to (roughly) find lanes
thMsk = img < graythresh(img);
%# count the mask-pixels in each columns. Due to
%# lane separation, there will be fewer pixels
%# between lanes
cts = sum(thMsk,1);
%# widen the local minima, so that we get a nice
%# separation between lanes
ctsEroded = imerode(cts,ones(1,minFilterWidth));
%# use imregionalmin to identify the separation
%# between lanes. Invert to get a positive mask
laneMsk = ~repmat(imregionalmin(ctsEroded),size(img,1),1);
Image with lanes that will be used for analysis
%# for each lane, create an averaged profile
lblMsk = bwlabel(laneMsk);
nLanes = max(lblMsk(:));
profiles = zeros(size(img,1),nLanes);
midLane = zeros(1,nLanes);
for i = 1:nLanes
    profiles(:,i) = mean(img.*(lblMsk==i),2);
    midLane(:,i) = mean(find(lblMsk(1,:)==i));
end
%# Gauss-filter the profiles (each column is an
%# averaged intensity profile)
G = exp(-(-gaussWidth*5:gaussWidth*5).^2/(2*gaussWidth^2));
G = G./sum(G);
profiles = imfilter(profiles,G','replicate');
%# find the minima
[~,idx] = min(profiles,[],1);
%# plot
figure,imshow(img,[])
hold on, plot(midLane,idx,'.r')
Here's my stab at a simple template for an interactive way to do this:
% Load image
img = imread('gel.png');
img = rgb2gray(img);
% Identify lanes
imshow(img)
[x,y] = ginput;
% Invert image
img = max(img(:)) - img;
% Subtract background
[xn,yn] = ginput(1);
xn = round(xn); yn = round(yn); % ginput returns non-integer coordinates
noise = img((yn-2):(yn+2), (xn-2):(xn+2));
noise = mean(noise(:));
img = img - noise;
% Calculate means
means = (1:size(img,1)) * double(img(:,round(x))) ./ sum(double(img(:,round(x))), 1);
% Plot
hold on
plot(x, means, 'r.')
The first thing to do is convert your RGB image to grayscale:
gr = rgb2gray(imread('gelk.png'));
Then, take a look at the image intensity histogram using imhist. Notice anything funny about it? Use imcontrast(imshow(gr)) to pull up the contrast adjustment tool. I found that eliminating the weird stuff after the major intensity peak was beneficial.
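For example, clipping the range past the main peak might look like this (the 0.7 cutoff is a guess; read the actual value off the histogram):
gr2 = imadjust(im2double(gr), [0 0.7], [0 1]); % saturate intensities above the cutoff
figure, imhist(gr2)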
The image processing task itself can be divided into several steps.
Separate each lane
Identify ('segment') the band in each lane
Calculate the location of the bands
Step 1 can be done "by hand," if the lane widths are guaranteed. If not, the line detection offered by the Hough transform is probably the way to go. The documentation for the Image Processing Toolbox has a really nice tutorial on this topic. My code recapitulates that tutorial with better parameters for your image. I only spent a few minutes with them; I'm sure you can improve the results by tuning the parameters further.
Step 2 can be done in a few ways. The easiest technique to use is Otsu's method for thresholding grayscale images. This method works by determining a threshold that minimizes the intra-class variance, or, equivalently, maximizes the inter-class variance. Otsu's method is present in MATLAB as the graythresh function. If Otsu's method isn't working well you can try multi-level Otsu or a number of other histogram based threshold determination methods.
Step 3 can be done as you suggest, by calculating the mean y value of the segmented band pixels. This is what my code does, though I've restricted the check to just the center column of each lane, in case the separation was off. I'm worried that the result may not be as good as calculating the band centroid and using its location.
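A minimal sketch of steps 2 and 3 on a single grayscale lane (assuming a lane image lane with values in [0,1]; the bands are dark, so we keep pixels below the Otsu threshold):
lane = im2double(gr);   % or one lane cut out of gr
lvl = graythresh(lane); % Otsu threshold, returned in [0,1]
bandMask = lane < lvl;  % segmented band pixels
[y,~] = find(bandMask);
meanY = mean(y)         % mean migration distance (y location)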
Here is my solution:
function [locations, lanesBW, lanes, cols] = segmentGel(gr)
%%# Detect lane boundaries
unsharp = fspecial('unsharp');       %# Sharpening filter
I = imfilter(gr,unsharp);            %# Apply filter
bw = edge(I,'canny',[0.01 0.3],0.5); %# Canny edges with parameters
[H,T,R] = hough(bw);                 %# Hough transform of edges
P = houghpeaks(H,20,'threshold',ceil(0.5*max(H(:))));     %# Find peaks of Hough transform
lines = houghlines(bw,T,R,P,'FillGap',30,'MinLength',20); %# Use peaks to identify lines
%%# Plot detected lines above image, for quality control
max_len = 0;
imshow(I);
hold on;
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
    %# Plot beginnings and ends of lines
    plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
    plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
    %# Determine the endpoints of the longest line segment
    len = norm(lines(k).point1 - lines(k).point2);
    if (len > max_len)
        max_len = len;
    end
end
hold off;
%%# Use the first endpoint of each line to separate lanes
cols = zeros(length(lines),1);
for k = 1:length(lines)
    cols(k) = lines(k).point1(1);
end
cols = sort(cols); %# The lines are in no particular order
lanes = cell(length(cols)-1,1);
for k = 2:length(cols)
    lanes{k-1} = im2double( gr(:,cols(k-1):cols(k)) ); %# im2double for compatibility with graythresh
end
otsu = cellfun(@graythresh,lanes); %# Calculate threshold for each lane
lanesBW = cell(size(lanes));
for k = 1:length(lanes)
    lanesBW{k} = lanes{k} < otsu(k); %# Apply thresholds
end
%%# Use segmented bands to determine migration distance
locations = zeros(size(lanesBW));
for k = 1:length(lanesBW)
    width = size(lanesBW{k},2);
    [y,~] = find(lanesBW{k}(:,round(width/2))); %# Only use the center of the lane
    locations(k) = mean(y);
end
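A usage sketch (the file name is a placeholder):
gr = rgb2gray(imread('gel.png'));
[locations, lanesBW, lanes, cols] = segmentGel(gr);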
I suggest you carefully examine not only each output value, but the results from each step of the function, before using it for actual research purposes. In order to get really good results, you will have to read a bit about Hough transforms, Canny edge detection and Otsu's method, and then tune the parameters. You may also have to alter how the lanes are split; this code assumes that there will be lines detected on either side of the image.
Let me add another implementation similar in concept to @JohnColby's, only without the manual user interaction:
%# read image
I = rgb2gray(imread('gele.png'));
%# middle position of each lane
%# (assuming lanes are somewhat evenly spread and of similar width)
x = linspace(1,size(I,2),10);
x = round( (x(1:end-1)+x(2:end))./2 );
%# compute the mean value across those columns
m = mean(I(:,x));
%# find the y-indices of the mean values
[~,idx] = min( bsxfun(@minus, double(I(:,x)), m) );
%# show the result
figure(1)
imshow(I, 'InitialMagnification',100, 'Border','tight')
hold on, plot(x, idx, ...
'Color','r', 'LineStyle','none', 'Marker','.', 'MarkerSize',10)
and applied on the smaller image: