I divided cameraman.tif into 3x3 blocks and I want to apply the histeq function to each of these 3x3 blocks to get a new image. I need help with histogram equalization of these 3x3 blocks in MATLAB.
I = imread("cameraman.tif");
for i = 1:3:size(I,1)-2
    for j = 1:3:size(I,2)-2
        B = I(i:i+2 , j:j+2);  % current 3x3 block
        J = double(B);         % histogram equalization of B should go here
    end
end
Try this:
img = imread('cameraman.tif');
fun = @(heq) histeq(heq.data);
b = blockproc(img,[3,3],fun);
figure, imshow(imtile([img b]), []);
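For comparison, here is a minimal explicit-loop sketch of the same idea (it only visits full 3x3 blocks, while blockproc also deals with the partial blocks at the image border for you):
I = imread('cameraman.tif');
J = zeros(size(I), 'like', I);
for i = 1:3:size(I,1)-2
    for j = 1:3:size(I,2)-2
        J(i:i+2, j:j+2) = histeq(I(i:i+2, j:j+2));
    end
end
figure, imshowpair(I, J, 'montage');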
I have a synthetic image. I want to do an eigenvalue decomposition of its local structure tensor (LST) for some edge detection purposes. I use the eigenvalues l1, l2 and eigenvectors e1, e2 of the LST to generate an adaptive ellipse for each pixel of the image. Unfortunately, I get unequal eigenvalues l1, l2, and therefore unequal ellipse semi-axis lengths, in the homogeneous regions of my figure:
However, I get a good response for a simple test image:
I don't know what is wrong in my code:
function [H,e1,e2,l1,l2] = LST_eig(I,sigma1,rw)
% LST_eig - compute the structure tensor and its eigen
% value decomposition
%
% H = LST_eig(I,sigma1,rw);
%
% sigma1 is pre smoothing width (in pixels).
% rw is filter bandwidth radius for tensor smoothing (in pixels).
%
n = size(I,1);
m = size(I,2);
if nargin<2
    sigma1 = 0.5;
end
if nargin<3
    rw = 0.001;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% pre smoothing
J = imgaussfilt(I,sigma1);
% compute gradient using the Scharr operator
Sch = [-3 0 3;-10 0 10;-3 0 3];
%h = fspecial('sobel');
gx = imfilter(J,Sch,'replicate');
gy = imfilter(J,Sch','replicate');
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% compute tensors
gx2 = gx.^2;
gy2 = gy.^2;
gxy = gx.*gy;
% smooth
gx2_sm = imgaussfilt(gx2,rw); %rw/sqrt(2*log(2))
gy2_sm = imgaussfilt(gy2,rw);
gxy_sm = imgaussfilt(gxy,rw);
H = zeros(n,m,2,2);
H(:,:,1,1) = gx2_sm;
H(:,:,2,2) = gy2_sm;
H(:,:,1,2) = gxy_sm;
H(:,:,2,1) = gxy_sm;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% eigen decomposition
l1 = zeros(n,m);
l2 = zeros(n,m);
e1 = zeros(n,m,2);
e2 = zeros(n,m,2);
for i = 1:n
    for j = 1:m
        Hmat = zeros(2);
        Hmat(:,:) = H(i,j,:,:);
        [V,D] = eigs(Hmat);
        D = abs(D);
        l1(i,j) = D(1,1); % eigenvalues
        l2(i,j) = D(2,2);
        e1(i,j,:) = V(:,1); % eigenvectors
        e2(i,j,:) = V(:,2);
    end
end
Any help is appreciated.
This is my ellipse drawing code:
% determining ellipse parameteres from eigen value decomposition of LST
M = input('Enter the maximum allowed semi-major axes length: ');
I = input('Enter the input data: ');
row = size(I,1);
col = size(I,2);
a = zeros(row,col);
b = zeros(row,col);
cos_phi = zeros(row,col);
sin_phi = zeros(row,col);
for m = 1:row
    for n = 1:col
        a(m,n) = (l2(m,n)+eps)/(l1(m,n)+l2(m,n)+2*eps)*M;
        b(m,n) = (l1(m,n)+eps)/(l1(m,n)+l2(m,n)+2*eps)*M;
        cos_phi1 = e1(m,n,1);
        sin_phi1 = e1(m,n,2);
        len = hypot(cos_phi1,sin_phi1);
        cos_phi(m,n) = cos_phi1/len;
        sin_phi(m,n) = sin_phi1/len;
    end
end
%% plot elliptic structuring elements using parametric equation and superimpose on the image
figure; imagesc(I); colorbar; hold on
t = linspace(0,2*pi,50);
for i = 10:10:row-10
    for j = 10:10:col-10
        x0 = j;
        y0 = i;
        x = a(i,j)/2*cos(t)*cos_phi(i,j)-b(i,j)/2*sin(t)*sin_phi(i,j)+x0;
        y = a(i,j)/2*cos(t)*sin_phi(i,j)+b(i,j)/2*sin(t)*cos_phi(i,j)+y0;
        plot(x,y,'r','linewidth',1);
        hold on
    end
end
This is my new result with the Gaussian derivative kernel:
This is the new plot with axis equal:
I created a test image similar to yours (probably less complicated) as follows:
pos = yy([400,500]) + 100 * sin(xx(400)/400*2*pi);
img = gaussianlineclip(pos+50,7) + gaussianlineclip(pos-50,7);
I = double(stretch(img));
(This requires DIPimage to run)
Then ran your LST_eig on it (sigma1=1 and rw=3) and your code to draw ellipses (no change to either, except adding axis equal), and got this result:
I suspect there is some non-uniformity in some of the blue areas of your image, which causes very small gradients to appear. The problem with the ellipses as you define them is that, for a sufficiently oriented pattern, you'll get a line even if that pattern is imperceptible. You can get around this by defining your ellipse axis lengths as follows:
a = repmat(M,size(l2)); % longest axis is always the same
b = M ./ (l2+1); % shortest axis is shorter the more important the largest eigenvalue is
The smallest eigenvalue l1 is high in regions with strong gradients but no clear direction. The above does not take this into account. One option could be to make a depend on both energy and anisotropy measures, and b depend only on energy:
T = 1000; % some threshold
r = M ./ max(l1+l2-T,1); % circle radius, smaller for higher energy
d = (l2-l1) ./ (l1+l2+eps); % anisotropy measure in range [0,1]
a = M*d + r.*(1-d); % use `M` length for high anisotropy, use `r` length for high isotropy (circle)
b = r; % use `r` width always
This way, the whole ellipse shrinks if there are strong gradients but no clear direction, whereas it stays large and circular when there are only weak or no gradients. The threshold T depends on image intensities, adjust as needed.
You should probably also consider taking the square root of the eigenvalues, as they correspond to the variance.
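For instance, a minimal sketch of that last suggestion, applied right before the axis lengths are computed:
% work with standard-deviation-like quantities instead of variances
l1 = sqrt(l1);
l2 = sqrt(l2);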
Some suggestions:
You can write
a = (l2+eps)./(l1+l2+2*eps) * M;
b = (l1+eps)./(l1+l2+2*eps) * M;
cos_phi = e1(:,:,1);
sin_phi = e1(:,:,2);
without a loop. Note that e1 is normalized by definition, there is no need to normalize it again.
Use Gaussian gradients instead of Gaussian smoothing followed by Sobel or Scharr filters. See here for some MATLAB implementation details.
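A minimal sketch of Gaussian gradients with separable 1-D kernels (this particular construction is my own, not from the linked page), replacing the imgaussfilt and Scharr steps in LST_eig:
x  = -ceil(3*sigma1):ceil(3*sigma1);
g  = exp(-x.^2/(2*sigma1^2));  g = g/sum(g);   % 1-D Gaussian, normalized
dg = -(x/sigma1^2) .* g;                       % its first derivative
gx = imfilter(imfilter(I, dg, 'replicate'), g', 'replicate');   % d/dx
gy = imfilter(imfilter(I, g,  'replicate'), dg', 'replicate');  % d/dy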
Use eig, not eigs, when you need all eigenvalues. Especially for such a small matrix, there is no advantage to using eigs. eig seems to produce more consistent results. There is no need to take the absolute value of the eigenvalues (D = abs(D)), as they are non-negative by definition.
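A minimal sketch of that replacement inside the loop, sorting explicitly since eig does not guarantee an ordering (here the smaller eigenvalue goes into l1 and the larger into l2, matching how l1 and l2 are used in the ellipse definitions above):
[V,D] = eig(Hmat);           % all eigenvalues of the symmetric 2x2 tensor
[d,idx] = sort(diag(D));     % ascending: d(1) <= d(2)
l1(i,j) = d(1);  e1(i,j,:) = V(:,idx(1));
l2(i,j) = d(2);  e2(i,j,:) = V(:,idx(2));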
Your default value of rw = 0.001 is way too small, a sigma of that size has no effect on the image. The goal of this smoothing is to average gradients in a local neighborhood. I used rw=3 with good results.
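For example, with the parameter values that gave the result shown earlier:
[H,e1,e2,l1,l2] = LST_eig(I, 1, 3);   % sigma1 = 1, rw = 3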
Use DIPimage. There is a structuretensor function, Gaussian gradients, and a lot more useful stuff. The 3.0 version (still in development) is a major rewrite that improves significantly on dealing with vector- and matrix-valued images. I can write all of your LST_eig as follows:
I = dip_image(I);
g = gradient(I, sigma1);
H = gaussf(g*g.', rw);
[e,l] = eig(H);
% Equivalences with your outputs:
l1 = l{2};
l2 = l{1};
e1 = e{2,:};
e2 = e{1,:};
In the Matlab SVM tutorial, it says
You can set your own kernel function, for example, kernel, by setting 'KernelFunction','kernel'. kernel must have the following form:
function G = kernel(U,V)
where:
U is an m-by-p matrix.
V is an n-by-p matrix.
G is an m-by-n Gram matrix of the rows of U and V.
When I followed the custom SVM kernel example, I set a breakpoint in the mysigmoid.m function. However, I found that U and V were in fact 1-by-p vectors and G was a scalar.
Why doesn't MATLAB process the kernel with matrices?
My custom kernel function is
function G = mysigmoid(U,V)
% Sigmoid kernel function with slope gamma and intercept c
gamma = 0.5;
c = -1;
G = tanh(gamma*U*V' + c);
end
My Matlab script is
%% Train SVM Classifiers Using a Custom Kernel
rng(1); % For reproducibility
n = 100; % Number of points per quadrant
r1 = sqrt(rand(2*n,1)); % Random radius
t1 = [pi/2*rand(n,1); (pi/2*rand(n,1)+pi)]; % Random angles for Q1 and Q3
X1 = [r1.*cos(t1), r1.*sin(t1)]; % Polar-to-Cartesian conversion
r2 = sqrt(rand(2*n,1));
t2 = [pi/2*rand(n,1)+pi/2; (pi/2*rand(n,1)-pi/2)]; % Random angles for Q2 and Q4
X2 = [r2.*cos(t2), r2.*sin(t2)];
X = [X1; X2]; % Predictors
Y = ones(4*n,1);
Y(2*n + 1:end) = -1; % Labels
% Plot the data
figure(1);
gscatter(X(:,1),X(:,2),Y);
title('Scatter Diagram of Simulated Data');
SVMModel1 = fitcsvm(X,Y,'KernelFunction','mysigmoid','Standardize',true);
% Compute the scores over a grid
d = 0.02; % Step size of the grid
[x1Grid,x2Grid] = meshgrid(min(X(:,1)):d:max(X(:,1)),...
min(X(:,2)):d:max(X(:,2)));
xGrid = [x1Grid(:),x2Grid(:)]; % The grid
[~,scores1] = predict(SVMModel1,xGrid); % The scores
figure(2);
h(1:2) = gscatter(X(:,1),X(:,2),Y);
hold on;
h(3) = plot(X(SVMModel1.IsSupportVector,1),X(SVMModel1.IsSupportVector,2),...
'ko','MarkerSize',10);
% Support vectors
contour(x1Grid,x2Grid,reshape(scores1(:,2),size(x1Grid)),[0,0],'k');
% Decision boundary
title('Scatter Diagram with the Decision Boundary');
legend({'-1','1','Support Vectors'},'Location','Best');
hold off;
CVSVMModel1 = crossval(SVMModel1);
misclass1 = kfoldLoss(CVSVMModel1);
disp(misclass1);
Kernels add dimensions to a feature. If you have, for example, one feature per sample, x = {a}, the kernel will expand it into something like x = {a_1, ..., a_q}. As you do this for all of your data at once, you get an M x P matrix (M is the number of examples in your training set and P is the number of features). The second matrix it asks for is P x N, where N is the number of examples in the training/test set.
That said, your output should be M x N. Since it is instead a scalar, it means that you have U = 1 x M and V = N x 1 with N = M. To get an output of M x N, it follows that you should simply transpose your inputs.
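For reference, here is a quick, illustrative check of the shapes with the documented m-by-p and n-by-p inputs and the mysigmoid formula (the sizes below are arbitrary):
U = rand(5,2);               % m-by-p
V = rand(7,2);               % n-by-p
G = tanh(0.5*(U*V') - 1);    % same formula as mysigmoid
size(G)                      % [5 7], i.e. m-by-n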
I recently found an interesting article on image rectification for stereo image pairs. I liked the algorithm because it is very compact and, from what the article suggests, it does the right thing. After I implemented the MATLAB version on two images, I did not get a correct rectified image: I got an image that was pitch black apart from a line of pixels along the left and bottom edges, plus a handful of gray pixels from the original image. Below I posted the MATLAB code, the link to the article, and an example of the result I got for one image (for the other image it was the same).
This is the link to the article: A compact algorithm for rectification of stereo pairs.
A screenshot with the initial images and the results is below:
The initial images are the following two (so that you do not have to search for another stereo pair):
function [T1,T2,Pn1,Pn2] = rectify(Po1,Po2)
% RECTIFY: compute rectification matrices
% factorize old PPMs
[A1,R1,t1] = art(Po1);
[A2,R2,t2] = art(Po2);
% optical centers (unchanged)
c1 = - inv(Po1(:,1:3))*Po1(:,4);
c2 = - inv(Po2(:,1:3))*Po2(:,4);
% new x axis (= direction of the baseline)
v1 = (c1-c2);
% new y axes (orthogonal to new x and old z)
v2 = cross(R1(3,:)',v1);
% new z axes (orthogonal to baseline and y)
v3 = cross(v1,v2);
% new extrinsic parameters
R = [v1'/norm(v1)
v2'/norm(v2)
v3'/norm(v3)];
% translation is left unchanged
% new intrinsic parameters (arbitrary)
A = (A1 + A2)./2;
A(1,2)=0; % no skew
A(1,3) = A(1,3) + 160;
% new projection matrices
Pn1 = A * [R -R*c1 ];
Pn2 = A * [R -R*c2 ];
% rectifying image transformation
T1 = Pn1(1:3,1:3)* inv(Po1(1:3,1:3));
T2 = Pn2(1:3,1:3)* inv(Po2(1:3,1:3));
function [A,R,t] = art(P)
% ART: factorize a PPM as P=A*[R;t]
Q = inv(P(1:3, 1:3));
[U,B] = qr(Q);
R = inv(U);
t = B*P(1:3,4);
A = inv(B);
A = A ./A(3,3);
This is the "main" code from which I call my rectify function
img1 = imread('D:\imag1.png');
img2 = imread('D:\imag2.png');
im1 = rgb2gray(img1);
im2 = rgb2gray(img2);
im1 = im2double(im1);
im2 = im2double(im2);
figure; imshow(im1, 'border', 'tight')
figure; imshow(im2, 'border', 'tight')
%pair projection matrices obtained after the calibration P01,P02
a = double(9.765*(10^2))
b = double(5.790*(10^-1))
format bank;
Po1 = double([a 5.382*10 -2.398*(10^2) 3.875*(10^5);
9.849*10 9.333*(10^2) 1.574*(10^2) 2.428*(10^5);
b 1.108*(10^(-1)) 8.077*(10^(-1)) 1.118*(10^3)]);
Po2 = [9.767*(10^2) 5.376*10 -2.400*(10^2) 4.003*(10^4);
9.868*10 9.310*(10^2) 1.567*(10^2) 2.517*(10^5);
5.766*(10^(-1)) 1.141*(10^(-1)) 8.089*(10^(-1)) 1.174*(10^3)];
[T1, T2, Pn1, Pn2] = rectify(Po1, Po2);
imnoua = conv2(im1, T1);
imnoua2 = conv2(im2, T2);
fprintf('The new image is\n');
figure; imshow(imnoua, 'border', 'tight')
figure; imshow(imnoua2, 'border', 'tight')
Thank you for your time!
As Shai says, T1 and T2 are projective transformation matrices, not filter kernels. You should be using imwarp, rather than conv2:
imnoua = imwarp(im1, projective2d(T1));
imnoua2 = imwarp(im2, projective2d(T2));
Better yet, use rectifyStereoImages from the Computer Vision System Toolbox. Check out this example.
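A minimal sketch of that route, assuming you have a stereoParameters object (called stereoParams below) from MATLAB's stereo camera calibrator rather than the raw Po1/Po2 matrices above:
[J1, J2] = rectifyStereoImages(im1, im2, stereoParams);
figure; imshow(stereoAnaglyph(J1, J2), 'border', 'tight')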
I have a KxLxM matrix A, which is an image with a feature vector of length M at each pixel location.
I also have a feature vector v of length M. At each pixel location of image A, I want to calculate the correlation of the pixel's feature vector with my feature vector v.
I've already done this using a loop, but loops are slow in MATLAB. Does anyone have a suggestion for how to vectorize this?
function test()
A = rand(4,5,3);
v = [1 2 3];
c = somecorr(A, v);
size(c)
function c = somecorr(a,v)
c = a(:,:,1).*0;
for y = 1:size(a,1)
    for x = 1:size(a,2)
        c(y,x) = corr2(squeeze(a(y,x,1:length(v)))',v);
    end
end
>>test()
ans =
4 5
You could try this and see if it's faster:
function c = somecorr2(a,v)
as = reshape(a,size(a,1)*size(a,2),size(a,3));
cs = corr(as',v');
c = reshape(cs,size(a,1),size(a,2));
size(c)
I only did some small tests, but it seems to be more than 100x faster, at least for my test cases.
If you do not have the corr function, you can use this one instead, inspired by this answer (What is a fast way to compute column by column correlation in matlab):
function C = manualCorr(A,B)
An = bsxfun(@minus,A,mean(A,1)); %%% zero-mean
Bn = bsxfun(@minus,B,mean(B,1)); %%% zero-mean
An = bsxfun(@times,An,1./sqrt(sum(An.^2,1))); %% L2-normalization
Bn = bsxfun(@times,Bn,1./sqrt(sum(Bn.^2,1))); %% L2-normalization
C = sum(An.*repmat(Bn,1,size(An,2)),1); %% correlation
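For example, inside somecorr2 the call to corr could be swapped like this (a sketch; manualCorr correlates columns, hence the transposes):
cs = manualCorr(as', v');                 % 1-by-(rows*cols) correlations
c  = reshape(cs, size(a,1), size(a,2));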
For a 100x100x3 matrix I get the following runtimes:
Your version: 1.643065 seconds.
mine with 'corr': 0.007191 seconds.
mine with 'manualCorr': 0.006206 seconds.
I was using Matlab R2012a.
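If you want to reproduce such a comparison yourself, here is a minimal sketch (assuming somecorr and somecorr2 are saved as functions on the MATLAB path):
A = rand(100,100,3);
v = [1 2 3];
tic; c1 = somecorr(A, v);  toc   % loop version
tic; c2 = somecorr2(A, v); toc   % vectorized version
max(abs(c1(:) - c2(:)))          % difference should be negligible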