I'm a beginner in image processing and I'm using MATLAB to extract HOG features from images to train an SVM classifier. The training images are 480*640 pixels and I get 167796 features with the default settings of the built-in MATLAB extractHOGFeatures function. However, when I test the model it gives me fewer features (only 216!), even though the test images are the same size as the training images. I get this error in MATLAB: "The number of columns in TEST and training data must be equal".
Do you have any clue how to solve this problem and get feature vectors of the same size for the training and testing sets?
Here is the code,
[fpos,fneg] = featuress(pathPos, pathNeg);
%train SVM
HOG_featV = loadingV(fpos,fneg); % loading and labeling each training example
%% Detection
tSize = [24 32];
testImPath = '.\face_detection\dataset\bikes_and_persons2\';
imlist = dir([testImPath '*.bmp']);
for j = 1:length(imlist)
    disp('inside for loop');
    img = imread([testImPath imlist(j).name]);
    axis equal; axis tight; axis off;
    imshow(img); hold on;
    detect(img, model, tSize);
end
%% training
function [fpos, fneg] = featuress(pathPos, pathNeg)
% extract features for positive examples
imlist = dir([pathPos '*.bmp']);
for i = 1:length(imlist)
    im = imread([pathPos imlist(i).name]);
    fpos{i} = extractHOGFeatures(double(im));
end
% extract features for negative examples
imlist = dir([pathNeg '*.bmp']);
for i = 1:length(imlist)
    im = imread([pathNeg imlist(i).name]);
    fneg{i} = extractHOGFeatures(double(im));
end
end
%% testing function
function detect(im, model, wSize)
topLeftRow = 1;
topLeftCol = 1;
[bottomRightCol, bottomRightRow, d] = size(im);
fcount = 1;
for y = topLeftCol:bottomRightCol-wSize(2)
    for x = topLeftRow:bottomRightRow-wSize(1)
        p1 = [x, y];
        p2 = [x+(wSize(1)-1), y+(wSize(2)-1)];
        po = [p1; p2];
        img = imcut(po, im);
        featureVector{fcount} = extractHOGFeatures(double(img));
        boxPoint{fcount} = [x, y];
        fcount = fcount+1;
        x = x+1;
    end
end
lebel = ones(length(featureVector), 1);
P = cell2mat(featureVector');
% each row of P' correspond to a window
[predictions] = svmclassify(model, P); % classifying each window
[a, indx] = max(predictions);
bBox = cell2mat(boxPoint(indx));
rectangle('Position', [bBox(1), bBox(2), 24, 32], 'LineWidth', 1, 'EdgeColor', 'r');
end
Thanks in advance.
What's the size of P? Is it 167796 x 216? If so, then you should not transpose featureVector when you call cell2mat, or you should transpose P before you use it. You can also make featureVector a matrix rather than a cell array: since you know that the length of the HOG vector is 167796 and you know how many images you have, you can pre-allocate it up front and fill in the rows.
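For illustration, a minimal sketch of that pre-allocation in the training function (the 167796 length is the one quoted in the question; treat it as an assumption if your HOG settings differ):

imlist = dir([pathPos '*.bmp']);
hogLen = 167796;                          % HOG length for a 480*640 image with default settings
fpos = zeros(length(imlist), hogLen);     % one row per training image
for i = 1:length(imlist)
    im = imread([pathPos imlist(i).name]);
    fpos(i,:) = extractHOGFeatures(double(im));  % fill one row per image
end

The same pattern applies to the negative examples and to the sliding-window features in detect.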
I am using histograms in MATLAB to look at the distribution of some data from my experiments. I want to find the mean distribution (mean height of the bars) from a group of tests, then produce an average histogram.
By using this code:
data = zeros(26,31);
for i = 1:length(files6)
    x = csvread(files6(i).name);
    x = x(1:end,:);
    time = x(:,1);
    variable = x(:,3);
    thing(:,1) = x(:,1);
    thing(:,2) = x(:,3);
    figure()
    binCenter = {0:tbinstep:tbinend 0:varbinstep:varbinend};
    hist3(thing, 'Ctrs', binCenter, 'CDataMode','auto','FaceColor','interp');
    colorbar
    [N,C] = hist3(thing, 'Ctrs', binCenter);
    data = data + N;
    clearvars x time variable
end
avedata = data / i;
I can find the mean of N, which will be the Z value for the plot (histogram) I want, and I have X,Y (which are the same for all tests) from:
x = 0:tbinstep:tbinend;
y = 0:varbinstep:varbinend;
But how do I bring these together to make the graphical output that shows the average height of the bars? I can't use hist3 again, as that will just calculate the distribution of avedata.
At the risk of starting an XY problem: using bar3 has been suggested, but that raises the question "how do I go from 2 vectors and a matrix to 1 matrix that bar3 can handle? I.e. how do I plot x(1), y(1), avedata(1,1) and so on for all the data points in avedata?"
TIA
Looking at the hist3 source code in MATLAB R2014b, it has its own plotting implemented internally, which prepares the data and plots it using the surf method. Here is a function that reproduces the same output, heavily inspired by the hist3 function, with your options ('CDataMode','auto','FaceColor','interp'). You can put this in a new file called hist3plot.m:
function [ h ] = hist3plot( N, C )
%HIST3PLOT Plot a bivariate histogram from bin counts N and bin centers C
%   Reproduces the surf-based rendering that hist3 builds internally.
xBins = C{1};
yBins = C{2};
% Computing edges and width
nbins = [length(xBins), length(yBins)];
xEdges = [0.5*(3*xBins(1)-xBins(2)), 0.5*(xBins(2:end)+xBins(1:end-1)), 0.5*(3*xBins(end)-xBins(end-1))];
yEdges = [0.5*(3*yBins(1)-yBins(2)), 0.5*(yBins(2:end)+yBins(1:end-1)), 0.5*(3*yBins(end)-yBins(end-1))];
xWidth = xEdges(2:end)-xEdges(1:end-1);
yWidth = yEdges(2:end)-yEdges(1:end-1);
del = .001; % space between bars, relative to bar size
% Build x-coords for the eight corners of each bar.
xx = xEdges;
xx = [xx(1:nbins(1))+del*xWidth; xx(2:nbins(1)+1)-del*xWidth];
xx = [reshape(repmat(xx(:)',2,1),4,nbins(1)); NaN(1,nbins(1))];
xx = [repmat(xx(:),1,4) NaN(5*nbins(1),1)];
xx = repmat(xx,1,nbins(2));
% Build y-coords for the eight corners of each bar.
yy = yEdges;
yy = [yy(1:nbins(2))+del*yWidth; yy(2:nbins(2)+1)-del*yWidth];
yy = [reshape(repmat(yy(:)',2,1),4,nbins(2)); NaN(1,nbins(2))];
yy = [repmat(yy(:),1,4) NaN(5*nbins(2),1)];
yy = repmat(yy',nbins(1),1);
% Build z-coords for the eight corners of each bar.
zz = zeros(5*nbins(1), 5*nbins(2));
zz(5*(1:nbins(1))-3, 5*(1:nbins(2))-3) = N;
zz(5*(1:nbins(1))-3, 5*(1:nbins(2))-2) = N;
zz(5*(1:nbins(1))-2, 5*(1:nbins(2))-3) = N;
zz(5*(1:nbins(1))-2, 5*(1:nbins(2))-2) = N;
% Plot the bars in a light steel blue.
cc = repmat(cat(3,.75,.85,.95), [size(zz) 1]);
% Plot the surface
h = surf(xx, yy, zz, cc, 'CDataMode','auto','FaceColor','interp');
% Setting x-axis and y-axis limits
xlim([yBins(1)-yWidth(1) yBins(end)+yWidth(end)]) % x-axis limit
ylim([xBins(1)-xWidth(1) xBins(end)+xWidth(end)]) % y-axis limit
end
You can then call this function when you want to plot the outputs of MATLAB's hist3 function. Note that it can handle non-uniform positioning of bins:
close all; clear all;
data = rand(10000,2);
xBins = [0,0.1,0.3,0.5,0.6,0.8,1];
yBins = [0,0.1,0.3,0.5,0.6,0.8,1];
figure()
hist3(data, {xBins yBins}, 'CDataMode','auto','FaceColor','interp')
title('Using hist3')
figure()
[N,C] = hist3(data, {xBins yBins});
hist3plot(N, C); % The function is called here
title('Using hist3plot')
Here is a comparison of the two outputs:
So if I understand your question and code correctly, you are plotting the distribution of multiple experiments' data as histograms, then you want to calculate the average shape of all the previous histograms.
I usually avoid suggesting approaches the asker isn't explicitly asking for, but in this case I must comment that this is a very strange thing to do; I've never heard of calculating the average shape of multiple histograms before. So, just in case, you could simply append all your experiments' data into a single variable and plot a normalized histogram of that using histogram2. This code outputs a relative frequency histogram. (Other normalization methods)
% Append all data in a single matrix
x = [];
for i = 1:length(files6)
    x = [x; csvread(files6(i).name)];
end
% Plot normalized bivariate histogram
xEdges = 0:tbinstep:tbinend;
yEdges = 0:varbinstep:varbinend;
histogram2(x(:,1), x(:,3), xEdges, yEdges, 'Normalization', 'probability')
Now, if you really are looking to draw the average shape of multiple histograms, then yes, use bar3. Since bar3 doesn't accept an (x,y) value argument, you can follow the other answer, or modify the XTickLabel and YTickLabel properties afterwards to match whatever your bin range is.
... % data = yourAverageData;
bar3(data);
% Set the tick labels on the current axes to match the bin ranges
set(gca, 'XTickLabel', 0:tbinstep:tbinend, 'YTickLabel', 0:varbinstep:varbinend);
I'm trying to implement stochastic gradient descent in MATLAB; however, I am not seeing any convergence. Mini-batch gradient descent worked as expected, so I think the cost function and gradient steps are correct.
The two main issues I am having are:
Randomly shuffling the data in the training set before the for-loop
Selecting one example at a time
Here is my MATLAB code:
Generating Data
alpha = 0.001;
num_iters = 10;
xrange = (-10:0.1:10);                 % data range
ydata = 5*(xrange)+30;                 % data with gradient 5, intercept 30
% plot(xrange,ydata); grid on;
noise = (2*randn(1,length(xrange)));   % generating noise
target = ydata + noise;                % adding noise to data
f1 = figure;
subplot(2,2,1);
scatter(xrange,target); grid on; hold on;  % plot a scatter
title('Linear Regression')
xlabel('xrange')
ylabel('ydata')
tita0 = randn(1,1); % intercept (randomised)
tita1 = randn(1,1); % gradient (randomised)
% Initialize Objective Function History
J_history = zeros(num_iters, 1);
% Number of training examples
m = (length(xrange));
Shuffling data, Gradient Descent and Cost Function
% STEP1 : we shuffle the data
data = [ xrange, ydata];
data = data(randperm(size(data,1)),:);
y = data(:,1);
X = data(:,2:end);
for iter = 1:num_iters
    for i = 1:m
        x = X(:,i);              % STEP2 Select one example
        h = tita0 + tita1.*x;    % building the estimate  %Changed to xrange in BGD
        %c = (1/(2*length(xrange)))*sum((h-target).^2)
        temp0 = tita0 - alpha*((1/m)*sum((h-target)));
        temp1 = tita1 - alpha*((1/m)*sum((h-target).*x));  %Changed to xrange in BGD
        tita0 = temp0;
        tita1 = temp1;
        fprintf("here\n %d; %d", i, x)
    end
    J_history(iter) = (1/(2*m))*sum((h-target).^2);  % Calculating cost from data to estimate
    fprintf('Iteration #%d - Cost = %d... \r\n', iter, J_history(iter));
end
On plotting the cost vs. iterations and linear regression graphs, the MSE settles (local minimum?) at around 420, which is wrong.
On the other hand if I re-run the exact same code however using batch gradient descent I get acceptable results. In batch gradient descent I am changing x to xrange:
Any suggestions on what I am doing wrong?
EDIT:
I also tried selecting random indexes using:
f = round(1+rand(1,1)*201); %generating random indexes
and then selecting one example:
x = xrange(f); % STEP2 Select one example
Proceeding to use x in the hypothesis and GD steps also yield a cost of 420.
First we need to shuffle the data correctly:
data = [ xrange', target'];
data = data(randperm(size(data,1)),:);
Next we need to index X and y correctly:
y = data(:,2);
X = data(:,1);
Then, during gradient descent, the update needs to be based on a single value y(i), not on the whole target vector, like so:
tita0 = tita0 - alpha*((1/m)*((h-y(i))));
tita1 = tita1 - alpha*((1/m)*((h-y(i)).*x));
Theta converges to [5, 30] with the changes above.
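For reference, a minimal sketch combining the three fixes, keeping the question's variable names and the answer's 1/m scaling (alpha, num_iters, m, tita0, tita1 and J_history initialized as in the question):

% Sketch of the corrected SGD loop (not verbatim from the answer)
data = [xrange', target'];               % one example per row
data = data(randperm(size(data,1)), :);  % shuffle rows
X = data(:,1);
y = data(:,2);
for iter = 1:num_iters
    for i = 1:m
        x = X(i);                        % one example at a time
        h = tita0 + tita1*x;             % prediction for this example
        tita0 = tita0 - alpha*(1/m)*(h - y(i));
        tita1 = tita1 - alpha*(1/m)*(h - y(i))*x;
    end
    J_history(iter) = (1/(2*m))*sum((tita0 + tita1*X - y).^2);  % cost over all examples
end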
I have a synthetic image. I want to do an eigenvalue decomposition of its local structure tensor (LST) for some edge detection purposes. I use the eigenvalues l1, l2 and eigenvectors e1, e2 of the LST to generate an adaptive ellipse for each pixel of the image. Unfortunately I get unequal eigenvalues l1, l2, and therefore unequal ellipse semi-axis lengths, in the homogeneous regions of my figure:
However, I get a good response for a simple test image:
I don't know what is wrong with my code:
function [H,e1,e2,l1,l2] = LST_eig(I,sigma1,rw)
% LST_eig - compute the structure tensor and its eigen
% value decomposition
%
% H = LST_eig(I,sigma1,rw);
%
% sigma1 is pre smoothing width (in pixels).
% rw is filter bandwidth radius for tensor smoothing (in pixels).
%
n = size(I,1);
m = size(I,2);
if nargin<2
sigma1 = 0.5;
end
if nargin<3
rw = 0.001;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% pre smoothing
J = imgaussfilt(I,sigma1);
% compute gradient using the Scharr operator
Sch = [-3 0 3;-10 0 10;-3 0 3];
%h = fspecial('sobel');
gx = imfilter(J,Sch,'replicate');
gy = imfilter(J,Sch','replicate');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% compute tensors
gx2 = gx.^2;
gy2 = gy.^2;
gxy = gx.*gy;
% smooth
gx2_sm = imgaussfilt(gx2,rw); %rw/sqrt(2*log(2))
gy2_sm = imgaussfilt(gy2,rw);
gxy_sm = imgaussfilt(gxy,rw);
H = zeros(n,m,2,2);
H(:,:,1,1) = gx2_sm;
H(:,:,2,2) = gy2_sm;
H(:,:,1,2) = gxy_sm;
H(:,:,2,1) = gxy_sm;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% eigen decomposition
l1 = zeros(n,m);
l2 = zeros(n,m);
e1 = zeros(n,m,2);
e2 = zeros(n,m,2);
for i = 1:n
    for j = 1:m
        Hmat = zeros(2);
        Hmat(:,:) = H(i,j,:,:);
        [V,D] = eigs(Hmat);
        D = abs(D);
        l1(i,j) = D(1,1); % eigenvalues
        l2(i,j) = D(2,2);
        e1(i,j,:) = V(:,1); % eigenvectors
        e2(i,j,:) = V(:,2);
    end
end
Any help is appreciated.
This is my ellipse drawing code:
% determining ellipse parameteres from eigen value decomposition of LST
M = input('Enter the maximum allowed semi-major axes length: ');
I = input('Enter the input data: ');
row = size(I,1);
col = size(I,2);
a = zeros(row,col);
b = zeros(row,col);
cos_phi = zeros(row,col);
sin_phi = zeros(row,col);
for m = 1:row
    for n = 1:col
        a(m,n) = (l2(m,n)+eps)/(l1(m,n)+l2(m,n)+2*eps)*M;
        b(m,n) = (l1(m,n)+eps)/(l1(m,n)+l2(m,n)+2*eps)*M;
        cos_phi1 = e1(m,n,1);
        sin_phi1 = e1(m,n,2);
        len = hypot(cos_phi1,sin_phi1);
        cos_phi(m,n) = cos_phi1/len;
        sin_phi(m,n) = sin_phi1/len;
    end
end
%% plot elliptic structuring elements using parametric equation and superimpose on the image
figure; imagesc(I); colorbar; hold on
t = linspace(0,2*pi,50);
for i = 10:10:row-10
    for j = 10:10:col-10
        x0 = j;
        y0 = i;
        x = a(i,j)/2*cos(t)*cos_phi(i,j) - b(i,j)/2*sin(t)*sin_phi(i,j) + x0;
        y = a(i,j)/2*cos(t)*sin_phi(i,j) + b(i,j)/2*sin(t)*cos_phi(i,j) + y0;
        plot(x,y,'r','linewidth',1);
        hold on
    end
end
This is my new result with the Gaussian derivative kernel:
This is the new plot with axis equal:
I created a test image similar to yours (probably less complicated) as follows:
pos = yy([400,500]) + 100 * sin(xx(400)/400*2*pi);
img = gaussianlineclip(pos+50,7) + gaussianlineclip(pos-50,7);
I = double(stretch(img));
(This requires DIPimage to run)
Then I ran your LST_eig on it (sigma1=1 and rw=3) and your code to draw ellipses (no change to either, except adding axis equal), and got this result:
I suspect there is some non-uniformity in some of the blue areas of your image, which causes very small gradients to appear there. The problem with the definition of the ellipses as you use them is that, for sufficiently oriented patterns, you'll get a line even if that pattern is imperceptible. You can get around this by defining your ellipse axis lengths as follows:
a = repmat(M,size(l2)); % longest axis is always the same
b = M ./ (l2+1); % shortest axis is shorter the more important the largest eigenvalue is
The smallest eigenvalue l1 is high in regions with strong gradients but no clear direction. The above does not take this into account. One option could be to make a depend on both energy and anisotropy measures, and b depend only on energy:
T = 1000; % some threshold
r = M ./ max(l1+l2-T,1); % circle radius, smaller for higher energy
d = (l2-l1) ./ (l1+l2+eps); % anisotropy measure in range [0,1]
a = M*d + r.*(1-d); % use `M` length for high anisotropy, use `r` length for high isotropy (circle)
b = r; % use `r` width always
This way, the whole ellipse shrinks if there are strong gradients but no clear direction, whereas it stays large and circular when there are only weak or no gradients. The threshold T depends on image intensities, adjust as needed.
You should probably also consider taking the square root of the eigenvalues, as they correspond to the variance.
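For instance, a minimal one-line adjustment to the outputs of LST_eig before computing the ellipse axes:

% Work with standard deviations instead of variances
l1 = sqrt(l1);
l2 = sqrt(l2);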
Some suggestions:
You can write
a = (l2+eps)./(l1+l2+2*eps) * M;
b = (l1+eps)./(l1+l2+2*eps) * M;
cos_phi = e1(:,:,1);
sin_phi = e1(:,:,2);
without a loop. Note that e1 is normalized by definition, there is no need to normalize it again.
Use Gaussian gradients instead of Gaussian smoothing followed by Sobel or Scharr filters. See here for some MATLAB implementation details.
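For illustration only, here is a minimal sketch of separable Gaussian derivative filtering; the kernel construction is an assumption for the example, not code from the linked page:

% Separable derivative-of-Gaussian filters; replaces smoothing + Scharr
xk = -ceil(3*sigma1):ceil(3*sigma1);
g  = exp(-xk.^2/(2*sigma1^2));  g = g/sum(g);   % 1-D Gaussian
dg = -xk./(sigma1^2) .* g;                      % its derivative
gx = imfilter(imfilter(I, g', 'replicate'), dg,  'replicate');  % smooth vertically, differentiate horizontally
gy = imfilter(imfilter(I, g,  'replicate'), dg', 'replicate');  % smooth horizontally, differentiate vertically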
Use eig, not eigs, when you need all eigenvalues. Especially for such a small matrix, there is no advantage to using eigs. eig seems to produce more consistent results. There is no need to take the absolute value of the eigenvalues (D = abs(D)), as they are non-negative by definition.
Your default value of rw = 0.001 is way too small, a sigma of that size has no effect on the image. The goal of this smoothing is to average gradients in a local neighborhood. I used rw=3 with good results.
Use DIPimage. There is a structuretensor function, Gaussian gradients, and a lot more useful stuff. The 3.0 version (still in development) is a major rewrite that improves significantly on dealing with vector- and matrix-valued images. I can write all of your LST_eig as follows:
I = dip_image(I);
g = gradient(I, sigma1);
H = gaussf(g*g.', rw);
[e,l] = eig(H);
% Equivalences with your outputs:
l1 = l{2};
l2 = l{1};
e1 = e{2,:};
e2 = e{1,:};
I am using MATLAB code to implement my work, which is summarized as follows:
I am using a program that applies a Bayesian approach to fiber tracking. I have a problem loading the dataset, which is a medical image. I would appreciate any hint that helps me open this dataset so I can continue my work. The following code shows the dataset loading function:
function data = Load_DMRI_Data(dataset)
% Load diffusion tensor MRI data and return it in
% a structure together with gradients and b-values.
% The data must be stored in a data structure for further
% processing, see code.
% The gradients are stored as a (3xg) matrix, where g is the
% number of acquired DWI volumes (including the b=0 ones).
% The b-values are stored in a corresponding (1xg) vector.
if strcmp(dataset,'gordon')
    readdir = '/projects/lmi/data/diffusion/gk-3t/041020-02156-bvalexpr/data/003/';
    intensity = zeros(256,256,31,32);
    for g = 1:32
        for slice = 1:31
            fid = fopen(sprintf('%sI.%03d',readdir,(g-1)*31+slice),'r');
            im = fread(fid,'int16');
            im = im(end-256^2+1:end);
            intensity(:,:,slice,g) = reshape(im,[256 256])';
            fclose(fid);
        end
    end
    G = load('gradients.mat');
    G = [[1;1;1] G.g]; % Add arbitrary gradient direction for b=0
    b = [0 1000*ones(1,31)];
    data = struct('intensity',intensity,'G',G,'b',b,'FOV',240,'SliceThickness',4);
elseif strcmp(dataset,'pc')
    readdir = 'c:\Work\DiffusionData\';
    intensity = zeros(256,256,1,32);
    for g = 1:32
        for slice = 16:16 % Use slice 16 as test slice
            fid = fopen(sprintf('%sI.%03d',readdir,(g-1)*31+slice),'r','ieee-be');
            im = fread(fid,'int16');
            im = im(end-256^2+1:end);
            intensity(:,:,slice-15,g) = reshape(im,[256 256])';
            fclose(fid);
        end
    end
    G = load(sprintf('%sgradients.mat',readdir));
    G = [[1;1;1] G.g]; % Add arbitrary gradient direction for b=0
    b = [0 1000*ones(1,31)]; % b-values
    data = struct('intensity',intensity,'G',G,'b',b);
end
The website for the dataset: http://www.sci.utah.edu/~gk/DTI-data/
I'm having a bit of trouble understanding how to change a colormap of a grayscale GIF image after performing histogram equalization on the image. The process is perfectly simple with image compression types that don't have an associated colormap, such as JPEG, and I've gotten it to work with grayscale JPEG images.
clear
clc
[I,map] = imread('moon.gif');
h = zeros(256,1); %array value holds number of pixels with same value
hmap = zeros(256,1);
P = zeros(256,1); %probability that pixel intensity will appear in image
Pmap = zeros(256,1);
s = zeros(256,1); %calculated CDF using P
smap = zeros(256,1);
M = size(I,1);
N = size(I,2);
I = double(I);
Inew = double(zeros(M,N));
mapnew = zeros(256,3);
for x = 1:M
    for y = 1:N
        for l = 1:256
            %count pixel intensities and probability
        end
    end
end
for j = 2:256
    for i = 2:j
        %calculate CDF of P
    end
end
s(1) = P(1);
smap(1) = Pmap(1);
for x = 1:M
    for y = 1:N
        for l = 1:256
            %calculates adjusted CDF and sets it to new image
        end
    end
end
mapnew = mapnew/256;
Inew = uint8(Inew);
I = uint8(I);
subplot(1,2,1), imshow(Inew,map); %comparing the difference between original map
subplot(1,2,2), imshow(Inew,mapnew); %to'enhanced' colormap, but both turn out poorly
All is fine in terms of the equalization of the actual image, but I'm not sure what to change about the color map. I tried performing the same operations on the colormap that I did with the image, but no dice.
Sorry that I can't post images because of my low rep, but I'll try to provide all the info I can on request.
Any help would be greatly appreciated.
function J = histeqo(I)
J = I;
[m,n] = size(I);
[h,d] = imhist(I);
ch = cumsum(h);                % The cumulative frequency
imagesize = (m*n);             % The image size
lightsize = size(d,1);         % The lighting range
tr = ch*(lightsize/imagesize); % Adjustment (transfer) function
for x = 1:m
    for y = 1:n
        J(x,y) = tr(I(x,y)+1);
    end
end
subplot(1,2,1); imshow(J);
subplot(1,2,2); imhist(J);
end
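As a side note on the colormap part of the question: for indexed images, the Image Processing Toolbox's histeq can also return an equalized colormap instead of remapping the pixel values. A minimal sketch using the question's moon.gif might look like this:

% Equalize via the colormap rather than the index values (sketch, not the answer's code)
[X, map] = imread('moon.gif');   % X holds indices, map the grayscale colormap
newmap = histeq(X, map);         % colormap transformed so the displayed image is equalized
figure;
subplot(1,2,1), imshow(X, map),    title('original colormap');
subplot(1,2,2), imshow(X, newmap), title('equalized colormap');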