How to create diagonal stripe patterns and checkerboard patterns? - matlab

Based on this question, I can confirm that horizontal patterns can be imposed onto a matrix (in this case an image) by multiplying it with a modulation signal created like this:
vModulationSignal = 1 + (0.5 * cos(2 * pi * (signalFreq / numRows) * [0:(numRows - 1)].'));
It would also be great if someone could explain why the above modulation signal works.
Now I want to create diagonal patterns such as:
And criss-cross (checkered) patterns such as this:
using a similar vModulationSignal.
Code Excerpt where the modulation signal is created
numRows = size(mInputImage, 1);
numCols = size(mInputImage, 2);
signalFreq = floor(numRows / 1.25);
vModulationSignal = 1 + (0.5 * cos(2 * pi * (signalFreq / numRows) * [0:(numRows - 1)].'));
mOutputImage = bsxfun(@times, mInputImage, vModulationSignal);
Code Excerpt where I'm trying to create the criss-cross signal
numRows = size(mInputImage, 1);
numCols = size(mInputImage, 2);
signalFreq1 = floor(numRows / 1.25);
signalFreq2 = floor(numCols / 1.25);
vModulationSignal1 = 1 + (0.5 * cos(2 * pi * (signalFreq1 / numRows) * [0:(numRows - 1)].'));
vModulationSignal2 = 1 + (0.5 * cos(2 * pi * (signalFreq2 / numCols) * [0:(numCols - 1)].'));
mOutputImage = bsxfun(@times, mInputImage, vModulationSignal1);
figure();
imshow(mOutputImage);

For horizontal, vertical, diagonal stripes:
fx = 1 / 20; % 1 / period in x direction
fy = 1 / 20; % 1 / period in y direction
Nx = 200; % image dimension in x direction
Ny = 200; % image dimension in y direction
[xi, yi] = ndgrid(1 : Nx, 1 : Ny);
mask = sin(2 * pi * (fx * xi + fy * yi)) > 0; % for binary mask
mask = (sin(2 * pi * (fx * xi + fy * yi)) + 1) / 2; % for gradual [0,1] mask
imagesc(mask); % only if you want to see it
Just choose fx and fy accordingly: set fy = 0 for horizontal stripes, fx = 0 for vertical stripes, and fx, fy equal for diagonal stripes. By the way, the period of the stripes (in pixels) is exactly
period_in_pixel = 1 / sqrt(fx^2 + fy^2);
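So, for example, to get stripes with a chosen direction and period you can simply invert this formula (a quick sketch; the angle here is the direction of the wave vector, i.e. perpendicular to the stripes):
stripe_angle = pi / 4; % direction of the wave vector (perpendicular to the stripes)
period = 20; % desired period in pixels
fx = cos(stripe_angle) / period;
fy = sin(stripe_angle) / period;
Nx = 200;
Ny = 200;
[xi, yi] = ndgrid(1 : Nx, 1 : Ny);
mask = sin(2 * pi * (fx * xi + fy * yi)) > 0;
imagesc(mask); % period_in_pixel evaluates to 20 here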
For checkerboard patterns:
f = 1 / 20; % 1 / period
Nx = 200;
Ny = 200;
[xi, yi] = ndgrid(1 : Nx, 1 : Ny);
mask = sin(2 * pi * f * xi) .* sin(2 * pi * f * yi) > 0; % for binary mask
mask = (sin(2 * pi * f * xi) .* sin(2 * pi * f * yi) + 1) / 2; % for more gradual mask
imagesc(mask);
Here the number of black and white squares per x, y direction is:
number_squares_x = 2 * f * Nx
number_squares_y = 2 * f * Ny
And if you know the size of your image and the number of squares that you want, you can use this to calculate the parameter f.
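For example, a quick sketch that inverts this relation to get f from a desired number of squares:
Nx = 200;
Ny = 200;
number_squares_x = 10; % desired number of squares along x
f = number_squares_x / (2 * Nx); % from number_squares_x = 2 * f * Nx
[xi, yi] = ndgrid(1 : Nx, 1 : Ny);
mask = sin(2 * pi * f * xi) .* sin(2 * pi * f * yi) > 0;
imagesc(mask);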
Multiplying the mask with the image:
That part is easy. The binary mask is a logical array (white = true, black = false); you only have to decide which part you want to keep (the white or the black part).
Multiply your image with the mask
masked_image = original_image .* mask;
to keep the white areas in the mask and
masked_image = original_image .* ~mask;
for the opposite.
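For example, with a uint8 image (here the Image Processing Toolbox demo image cameraman.tif) it is safest to cast the mask to the image class, or convert the image to double, before multiplying:
original_image = imread('cameraman.tif'); % uint8 grayscale demo image
[nrows, ncols] = size(original_image);
[ri, ci] = ndgrid(1 : nrows, 1 : ncols); % row and column indices
mask = sin(2 * pi * (ri + ci) / 20) > 0; % diagonal stripes, logical
masked_image = original_image .* uint8(mask); % keep the white stripes
figure; imshow(masked_image);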

This is actually an extension of Trilarion's answer that gives better control over the stripes' appearance:
function out = diagStripes( outSize, stripeAngle, stripeDistance, stripeThickness )
stripeAngle = wrapTo2Pi(-stripeAngle+pi/2);
if (stripeAngle == pi/2) || (stripeAngle == 3*pi/2)
f = @(fx, fy, xi, yi) cos(2 * pi * (fy * yi)); % vertical stripes
elseif (stripeAngle == 0)||(stripeAngle == pi)
f = @(fx, fy, xi, yi) cos(2 * pi * (fx * xi)); % horizontal stripes
else
f = @(fx, fy, xi, yi) cos(2 * pi * (fx * xi + fy * yi)); % diagonal stripes
end
if numel(outSize) == 1
outSize = [outSize outSize];
end
fx = cos(stripeAngle) / stripeDistance; % 1 / period in x direction
fy = sin(stripeAngle) / stripeDistance; % 1 / period in y direction
Nx = outSize(2); % image dimension in x direction
Ny = outSize(1); % image dimension in y direction
[yi, xi] = ndgrid((1 : Ny)-Ny/2, (1 : Nx)-Nx/2);
mask = (f(fx, fy, xi, yi)+1)/2; % for gradual [0,1] mask
out = mask < (cos(pi*stripeThickness)+1)/2; % for binary mask
end
outSize is a one- or two-element vector that gives the dimensions of the output image in pixels, stripeAngle gives the slope of the stripes in radians, stripeDistance is the distance between centers of stripes in pixels, and stripeThickness is a float value in [0 .. 1] that gives the fraction of coverage of the (black) stripes over the (white) background.
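For example (note that wrapTo2Pi comes from the Mapping Toolbox; mod(-stripeAngle + pi/2, 2*pi) is a close toolbox-free alternative):
% 300 x 400 mask with stripes at 30 degrees, 40 px apart, 25% black coverage
mask = diagStripes([300 400], pi/6, 40, 0.25);
figure; imshow(mask);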
There are also answers to this other question for generating customized checkerboard patterns.

Related

Convert Fisheye Video into regular Video

I have a video stream coming from a 180 degree fisheye camera. I want to do some image-processing to convert the fisheye view into a normal view.
After some research and reading lots of articles I found this paper.
It describes an algorithm (and some formulas) to solve this problem.
I tried to implement this method in MATLAB. Unfortunately it doesn't work, and I failed to make it work: the "corrected" image looks exactly like the original photograph, there is no removal of distortion, and I only receive the top-left part of the image rather than the complete image. Changing the value of 'K' to 1.9 gives me the whole image, but it is still exactly the same image.
Input image:
Result:
When the value of K is 1.15 as mentioned in the article
When the value of K is 1.9
Here is my code:
image = imread('image2.png');
[Cx, Cy, channel] = size(image);
k = 1.5;
f = (Cx * Cy)/3;
opw = fix(f * tan(asin(sin(atan((Cx/2)/f)) * k)));
oph = fix(f * tan(asin(sin(atan((Cy/2)/f)) * k)));
image_new = zeros(opw, oph,channel);
for i = 1: opw
for j = 1: oph
[theta,rho] = cart2pol(i,j);
R = f * tan(asin(sin(atan(rho/f)) * k));
r = f * tan(asin(sin(atan(R/f))/k));
X = ceil(r * cos(theta));
Y = ceil(r * sin(theta));
for k = 1: 3
image_new(i,j,k) = image(X,Y,k);
end
end
end
image_new = uint8(image_new);
warning('off', 'Images:initSize:adjustingMag');
imshow(image_new);
This is what solved my problem.
input:
strength as floating point >= 0. 0 = no change, high numbers equal stronger correction.
zoom as floating point >= 1. (1 = no change in zoom)
algorithm:
set halfWidth = imageWidth / 2
set halfHeight = imageHeight / 2
if strength = 0 then strength = 0.00001
set correctionRadius = squareroot(imageWidth ^ 2 + imageHeight ^ 2) / strength
for each pixel (x,y) in destinationImage
set newX = x - halfWidth
set newY = y - halfHeight
set distance = squareroot(newX ^ 2 + newY ^ 2)
set r = distance / correctionRadius
if r = 0 then
set theta = 1
else
set theta = arctangent(r) / r
set sourceX = halfWidth + theta * newX * zoom
set sourceY = halfHeight + theta * newY * zoom
set color of pixel (x, y) to color of source image pixel at (sourceX, sourceY)
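In MATLAB, a minimal sketch of this pseudocode looks like the following (my transcription, not the paper's code; strength and zoomFactor are example values corresponding to 'strength' and 'zoom' above, and nearest-neighbour sampling is used for simplicity):
img = im2double(imread('image2.png')); % fisheye input (file name from the question)
strength = 2.5;   % > 0, larger = stronger correction
zoomFactor = 1.2; % >= 1 ('zoom' in the pseudocode)
[h, w, c] = size(img);
halfWidth = w / 2;
halfHeight = h / 2;
if strength == 0, strength = 0.00001; end
correctionRadius = sqrt(w^2 + h^2) / strength;
out = zeros(h, w, c);
for y = 1 : h
    for x = 1 : w
        newX = x - halfWidth;
        newY = y - halfHeight;
        distance = sqrt(newX^2 + newY^2);
        r = distance / correctionRadius;
        if r == 0
            theta = 1;
        else
            theta = atan(r) / r;
        end
        srcX = round(halfWidth + theta * newX * zoomFactor);
        srcY = round(halfHeight + theta * newY * zoomFactor);
        if srcX >= 1 && srcX <= w && srcY >= 1 && srcY <= h
            out(y, x, :) = img(srcY, srcX, :);
        end
    end
end
imshow(out);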

Applying 2D Gabor Wavelet on Image

EDIT: This is the code I am using, which generates an edge-detected-looking image:
cookiesImage = rgb2gray(imread('Cookies.png'));
width = 45;
height = 45;
KMAX = pi / 2;
f = sqrt(2);
delta = pi / 3;
output = zeros(size(cookiesImage, 1), size(cookiesImage, 2), 8);
for i = 0 : 7
wavelets = GaborWavelet(width, height, KMAX, f, i, 2, delta);
figure(1);
subplot(1, 8, i + 1), imshow(real(wavelets), []);
output(:, :, i + 1) = imfilter(cookiesImage, wavelets, 'symmetric');
end
display = sum(abs(output).^2, 3).^0.5;
display = display./max(display(:));
figure(2); imshow(display);
function GWKernel = GaborWavelet (width, height, KMAX, f, u , v, delta)
delta2 = delta * delta;
kv = KMAX / (f^v);
thetaU = (u * pi) / 8;
kuv = kv * exp (1i * thetaU);
kuv2 = abs(kuv)^2;
GWKernel = zeros (height, width);
for y = -height/ 2 + 1 : height / 2
for x = -width / 2 + 1 : width / 2
GWKernel(y + height / 2, x + width / 2) = (kuv2 / delta2) * exp(-0.5 * kuv2 * (x * x + y * y) / delta2) * (exp(1i * (real(kuv) * y + imag (kuv) * x )) - exp (-0.5 * delta2));
end
end
This is the function that I am using for the wavelets, and this is how I am trying to apply them, but all I am getting is an edge-detected-looking image rather than one like in this link.
Running your code, I see the following wavelets being generated:
These look a lot like rotated second derivatives. This is only the real (even) component of the Gabor filter kernels. The imaginary (odd) counterparts look like first derivatives.
This is why you feel like your result is like an edge-detected image. It sort of is.
Try increasing the size of the filter (not the footprint widthxheight, but the delta that determines the size of the envelope). This will make it so that you see a larger portion of the sinusoid waves that form the Gabor kernel.
Next, the result image you show is the sum of the square magnitudes of the individual Gabor filters. Try displaying the real or imaginary component of one of the filter results; you'll see it looks more like what you'd expect:
imshow(real(output(:,:,3)),[])
I'm not familiar with this parametrization of the Gabor kernel, but note that it has a Gaussian envelope. Therefore, the footprint (width, height) of the kernel can be adjusted to the size of this Gaussian (which seems to use delta as the sigma). I typically recommend using a kernel footprint of 2*ceil(3*sigma)+1 for the Gaussian kernel. The same applies here:
width = 2*ceil(3*delta)+1;
height = width;
This will also speed up computations. As you can see in the image above, your kernels have lots of near-zero values in them, so it is possible to crop them to a smaller size without affecting the output.
The GaborWavelet function can also be simplified a lot using vectorization:
function GWKernel = GaborWavelet (width, height, KMAX, f, u , v, delta)
delta2 = delta * delta;
kv = KMAX / (f^v);
thetaU = (u * pi) / 8;
kuv = kv * exp (1i * thetaU);
kuv2 = abs(kuv)^2;
x = -width/2 + 1 : width/2;
[x,y] = meshgrid(x,x);
GWKernel = (kuv2 / delta2) * exp(-0.5 * kuv2 * (x .* x + y .* y) / delta2) ...
.* (exp(1i * (real(kuv) * y + imag (kuv) * x )) - exp (-0.5 * delta2));
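As a usage sketch combining the footprint suggestion above with this vectorized kernel (file name and parameters taken from your code; filtering is done separately on the real and imaginary parts so that imfilter only ever sees real kernels):
img = im2double(rgb2gray(imread('Cookies.png')));
KMAX = pi / 2;
f = sqrt(2);
delta = pi; % larger envelope than the pi/3 in the question
sz = 2 * ceil(3 * delta) + 1; % kernel footprint sized to the Gaussian envelope
resp = zeros(size(img, 1), size(img, 2), 8);
for u = 0 : 7
    g = GaborWavelet(sz, sz, KMAX, f, u, 2, delta);
    % even (real) and odd (imaginary) responses
    resp(:, :, u + 1) = imfilter(img, real(g), 'symmetric') + 1i * imfilter(img, imag(g), 'symmetric');
end
figure; imshow(real(resp(:, :, 3)), []); % even response of one orientation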

Math behind some quaternion code and gimbal lock

I have the following MATLAB code that gives a rotation matrix from a quaternion. The person who did this said he used some library, and that it is originally based on this rotation matrix:
The rotation matrix according to the code looks like
The diagonal elements and the signs are changed. Can someone explain to me the math, or the conditions, behind the disparity between the two?
Q0 = Quat(1);
Q1 = Quat(2);
Q2 = Quat(3);
Q3 = Quat(4);
%% set f2q to 2*q0 and calculate products
f2q = 2 * Q0;
f2q0q0 = f2q * Q0;
f2q0q1 = f2q * Q1;
f2q0q2 = f2q * Q2;
f2q0q3 = f2q * Q3;
%% set f2q to 2*q1 and calculate products
f2q = 2 * Q1;
f2q1q1 = f2q * Q1;
f2q1q2 = f2q * Q2;
f2q1q3 = f2q * Q3;
%% set f2q to 2*q2 and calculate products
f2q = 2 * Q2;
f2q2q2 = f2q * Q2;
f2q2q3 = f2q * Q3;
f2q3q3 = 2 * Q3 * Q3;
%% calculate the rotation matrix assuming the quaternion is normalized
R(1, 1) = f2q0q0 + f2q1q1 - 1;
R(1, 2) = f2q1q2 + f2q0q3;
R(1, 3) = f2q1q3 - f2q0q2;
R(2, 1) = f2q1q2 - f2q0q3;
R(2, 2) = f2q0q0 + f2q2q2 - 1;
R(2, 3) = f2q2q3 + f2q0q1;
R(3, 1) = f2q1q3 + f2q0q2;
R(3, 2) = f2q2q3 - f2q0q1;
R(3, 3) = f2q0q0 + f2q3q3 - 1;
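For comparison, here is a quick sketch (my own, not from the library) of the commonly quoted quaternion-to-rotation-matrix formula; for a normalized quaternion, the R built above comes out as the transpose of this matrix, i.e. the two differ by the vector-rotation vs. frame-rotation convention:
% Standard formula, evaluated for the same quaternion (assumes Quat is normalized)
q0 = Quat(1); q1 = Quat(2); q2 = Quat(3); q3 = Quat(4);
Rstd = [1 - 2*(q2^2 + q3^2), 2*(q1*q2 - q0*q3),   2*(q1*q3 + q0*q2);
        2*(q1*q2 + q0*q3),   1 - 2*(q1^2 + q3^2), 2*(q2*q3 - q0*q1);
        2*(q1*q3 - q0*q2),   2*(q2*q3 + q0*q1),   1 - 2*(q1^2 + q2^2)];
disp(max(max(abs(R - Rstd.')))); % zero up to rounding error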
Also...
The Rotation Matrix is used to find Euler Angles that also deals with Gimbal lock
%%calculate the pitch angle -90.0 <= Theta <= 90.0 deg
pitchdeg = asin(-R(1, 3)) * (180 / pi);
%% calculate the roll angle range -180.0 <= Phi < 180.0 deg
rolldeg = atan2(R(2, 3), R(3, 3)) * (180 / pi);
%% map +180 roll onto the functionally equivalent -180 deg roll
if (rolldeg == 180)
rolldeg = -180;
end
%% calculate the yaw (compass) angle 0.0 <= Psi < 360.0 deg
if (pitchdeg == 90)
%% vertical upwards gimbal lock case
yawdeg = (atan2(R(3, 2), R(2, 2)) * (180 / pi)) + rolldeg;
elseif (pitchdeg == -90)
%% vertical downwards gimbal lock case
yawdeg = (atan2(-R(3, 2), R(2, 2)) * (180 / pi)) - rolldeg;
else
%% general case
yawdeg = atan2(R(1, 2), R(1, 1)) * (180 / pi);
end
%% map yaw angle Psi onto range 0.0 <= Psi < 360.0 deg
if (yawdeg < 0)
yawdeg = yawdeg + 360;
end
%% check for rounding errors mapping small negative angle to 360 deg
if (yawdeg >= 360)
yawdeg = 0;
end
Euler.Roll = rolldeg;
Euler.Pitch = pitchdeg;
Euler.Yaw = yawdeg;
Googling the formula to convert from quaternion to Euler angles gave me a formula which I could not attach here because I don't have enough reputation, but you can find it on the wiki page Conversion_between_quaternions_and_Euler_angles.
The terms differ when compared with the diagonal of the rotation matrix. Also, is there math behind the compensation for gimbal lock?
The IMU uses a North-East-Down system.
I am using Euler angles to visualize the gyro. Any kind of guidance is appreciated.

Intersection point between circle and line (Polar coordinates)

I'm wondering if there is a way of finding the intersection point between a line and a circle written in polar coordinates.
% Line
x_line = 10 + r * cos(th);
y_line = 10 + r * sin(th);
%Circle
circle_x = circle_r * cos(alpha);
circle_y = circle_r * sin(alpha);
So far I've tried using the intersect(y_line, circle_y) function without any success. I'm relatively new to MATLAB so bear with me.
I have generalised the below so that other values than a=10 can be used...
a = 10; % constant line offset
th = 0; % constant angle of line
% rl = ?? - variable to find
% Coordinates of line:
% [xl, yl] = [ a + rl * cos(th), a + rl * sin(th) ];
rc = 1; % constant radius of circle
% alpha = ?? - variable to find
% Coordinates of circle:
% [xc, yc] = [ rc * cos(alpha), rc * sin(alpha) ];
We want the intersection, so xl = xc, yl = yc
% a + rl * cos(th) = rc * cos(alpha)
% a + rl * sin(th) = rc * sin(alpha)
Square both equations and sum them; using sin(th)^2 + cos(th)^2 = 1, expanding the brackets, and simplifying gives
% rl^2 + 2 * a * rl * (cos(th) + sin(th)) + 2 * a^2 - rc^2 = 0
Now you can use the quadratic formula to get the value of rl.
Test the discriminant:
dsc = (2 * a * (cos(th) + sin(th)))^2 - 4 * (2 * a^2 - rc^2);
rl = [];
if dsc < 0
% no intersection
elseif dsc == 0
% one intersection at
rl = -a * (cos(th) + sin(th));
else
% two intersection points
rl = -a * (cos(th) + sin(th)) + [ sqrt(dsc)/2, -sqrt(dsc)/2];
end
% Get alpha from an earlier equation (note: acos only returns angles in [0, pi],
% so use the sign of yl = a + rl .* sin(th) to pick the correct quadrant)
alpha = acos( ( a + rl .* cos(th) ) ./ rc );
Now you have 0, 1 or 2 points of intersection of the line with the circle, given the known values describing each curve. Essentially this is just a system of simultaneous equations; see the start of this article for the underlying maths:
https://en.wikipedia.org/wiki/System_of_linear_equations
Do you need to do it numerically? This problem has an easy analytical solution: the point (10 + r*cos(th), 10 + r*sin(th)) lies on a circle with radius R iff
(10+r*cos(th))^2 + (10+r*sin(th))^2 == R^2
<=> 200 + r^2 + 20*r*(cos(th)+sin(th)) == R^2
<=> r^2 + 20*r*sqrt(2)*sin(th+pi/4) + 200 - R^2 = 0
which is a quadratic equation in r. If the discriminant is positive, there are two solutions (corresponding to two intersection points), otherwise, there are none.
If you work out the math, the condition for intersection is 100*(sin(2*th)-1)+circle_r^2 >= 0 and the roots are -10*sqrt(2)*sin(th+pi/4)*[1,1] + sqrt(100*(sin(2*th)-1)+circle_r^2)*[1,-1].
Here is a Matlab plot as an example for th = pi/3 and circle_r = 15. The magenta markers are calculated in closed-form using the equation shown above.
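For reference, a small sketch that reproduces such a plot from the closed-form roots above (th, circle_r and the offset 10 as in the question):
th = pi/3;
circle_r = 15;
dsc = 100*(sin(2*th) - 1) + circle_r^2; % must be >= 0 for real intersections
r_roots = -10*sqrt(2)*sin(th + pi/4) + sqrt(dsc)*[1, -1]; % closed-form roots
x_int = 10 + r_roots*cos(th); % intersection points in Cartesian coordinates
y_int = 10 + r_roots*sin(th);
r = linspace(-40, 40, 200);
alpha = linspace(0, 2*pi, 200);
plot(10 + r*cos(th), 10 + r*sin(th), 'b-'); hold on; % the line
plot(circle_r*cos(alpha), circle_r*sin(alpha), 'r-'); % the circle
plot(x_int, y_int, 'mo', 'MarkerFaceColor', 'm'); % intersection points
axis equal; hold off;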

Scaling a rotated ellipse

I have a function that draws an ellipse with some major-axis scale (vx) and minor-axis scale (vy), rotated clockwise by an angle a. I would like to adjust it so that the unrotated ellipse satisfies the equation:
(x / vx)^2 + (y / vy)^2 = s
For some value s which is passed in.
function [] = plotellipse(cx, cy, vx, vy, s, a)
t = linspace(0, 2 * pi);
x = cx + vx * cos(t) * cos(-a) - vy * sin(t) * sin(-a);
y = cy + vy * sin(t) * cos(-a) + vx * cos(t) * sin(-a);
plot(x,y,'y-');
The usual equation for an ellipse, which you have implemented correctly, is
(x / vx)^2 + (y / vy)^2 = 1
To reduce the desired equation to the same form, divide through by s:
(x / (vx * sqrt(s)))^2 + (y / (vy * sqrt(s)))^2 = 1
Now x and y become
vxs = vx * sqrt(s)
vys = vy * sqrt(s)
x = cx + vxs * cos(t) * cos(-a) - vys * sin(t) * sin(-a);
y = cy + vys * sin(t) * cos(-a) + vxs * cos(t) * sin(-a);
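Putting it together, a minimal sketch of the adjusted function (same signature as the original):
function [] = plotellipse(cx, cy, vx, vy, s, a)
% Draws the ellipse (x/vx)^2 + (y/vy)^2 = s, rotated clockwise by angle a
vxs = vx * sqrt(s); % scaled semi-axes
vys = vy * sqrt(s);
t = linspace(0, 2 * pi);
x = cx + vxs * cos(t) * cos(-a) - vys * sin(t) * sin(-a);
y = cy + vys * sin(t) * cos(-a) + vxs * cos(t) * sin(-a);
plot(x, y, 'y-');
end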