Implementing i(t) function in Matlab

I have a problem in which I have to implement the following equation in MATLAB:
i(t) = A2 * sin(wr*t) * exp(-alpha*t), for t in [0, 10] s with a step of 0.5 s
My approach is as follows
clc;
clear;
% Given Data
Vs = 220;
L = 5e-3;
C = 10e-6;
R = 22;
Vo = 50;
% a)
alpha = R / (2 * L);
omega_not = 1 / sqrt(L*C);
omega_r = sqrt( omega_not^2 - alpha^2 );
A2 = Vs / (omega_r * L);
t = 1:0.5:10;
i = A2 * sin( omega_r * t ) .* exp(-alpha * t);
% b)
t1 = pi / omega_r;
% c)
plot(t, i);
But it yields all the values of the current equal to zero. Please help me solve the problem.

I think the problem is this part of the expression:
exp(-alpha * t)
When I run your code, alpha equals 2200, so the exponent -alpha*t is at most -2200 over your time vector. The exponential of such a large negative number is so small that it underflows to zero:
>> exp(-200)
ans =
1.3839e-87
>> exp(-1000)
ans =
0
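A further point worth noting: with alpha = 2200, the factor exp(-alpha*t) dies out within a few milliseconds, so a time vector spanning [0, 10] s in 0.5 s steps samples only the tail of the response (also note that t = 1:0.5:10 starts at 1 s, not at 0 s as the problem states). A minimal sketch with the same component values but a millisecond time base makes the damped sinusoid visible:
% Sketch: same parameters, but a time base matched to the decay rate 1/alpha.
Vs = 220; L = 5e-3; C = 10e-6; R = 22;
alpha   = R / (2 * L);               % 2200 1/s
omega_0 = 1 / sqrt(L * C);
omega_r = sqrt(omega_0^2 - alpha^2);
A2      = Vs / (omega_r * L);
t = 0:1e-5:3e-3;                     % a few multiples of 1/alpha instead of 10 s
i = A2 * sin(omega_r * t) .* exp(-alpha * t);
plot(t * 1e3, i); xlabel('t (ms)'); ylabel('i(t)');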

Related

What type of (probably syntactic) mistake am I making when using a generic function on an array for plotting?

I am trying to plot a geodesic on a 3D surface (tractrix) with Matlab. This worked for me in the past when I didn't need to parametrize the surface (see here). However, the tractrix called for parameterization, chain rule differentiation, and collection of u,v,x,y and f(x,y) values.
After many mistakes, I think I'm getting the right values for x = f1(u,v) and y = f2(u,v), describing a spiral right at the base of the surface.
What I can't understand is why the z value, or height, of the 3D plot of the curve is consistently zero, when I'm applying the same mathematical formula that allowed me to plot the surface in the first place, i.e. f = @(x,y) a.* (y - tanh(y)).
Here is the code, which runs without any errors on Octave. I've added a special note in upper case on the crucial calls. Also note that I have restricted the number of geodesic lines to 1 to decrease the execution time.
a = 0.3;
u = 0:0.1:(2 * pi);
v = 0:0.1:5;
[X,Y] = meshgrid(u,v);
% NOTE THAT THESE FORMULAS RESULT IN A SUCCESSFUL PLOT OF THE SURFACE:
x = a.* cos(X) ./ cosh(Y);
y = a.* sin(X) ./ cosh(Y);
z = a.* (Y - tanh(Y));
h = surf(x,y,z);
zlim([0, 1.2]);
set(h,'edgecolor','none')
colormap summer
hold on
% THESE ARE THE GENERIC FUNCTIONS (f) WHICH DON'T SEEM TO WORK AT THE END:
f = @(x,y) a.* (y - tanh(y));
f1 = @(u,v) a.* cos(u) ./ cosh(v);
f2 = @(u,v) a.* sin(u) ./ cosh(v);
dfdu = @(u,v) ((f(f1(u,v)+eps, f2(u,v)) - f(f1(u,v) - eps, f2(u,v)))/(2 * eps) .* ...
(f1(u+eps,v)-f1(u-eps,v))/(2*eps) + ...
(f(f1(u,v), f2(u,v)+eps) - f(f1(u,v), f2(u,v)-eps))/(2 * eps) .* ...
(f2(u+eps,v)-f2(u-eps,v))/(2*eps));
dfdv = @(u,v) ((f(f1(u,v)+eps, f2(u,v)) - f(f1(u,v) - eps, f2(u,v)))/(2 * eps) .* ...
(f1(u,v+eps)-f1(u,v-eps))/(2*eps) + ...
(f(f1(u,v), f2(u,v)+eps) - f(f1(u,v), f2(u,v)-eps))/(2 * eps) .* ...
(f2(u,v+eps)-f2(u,v-eps))/(2*eps));
% Normal vector to the surface:
N = @(u,v) [- dfdu(u,v), - dfdv(u,v), 1]; % Normal vec to surface @ any pt.
% Some colors to draw the lines:
C = {'k','r','g','y','m','c'};
for s = 1:1 % No. of lines to be plotted.
% Starting points:
u0 = [0, u(length(u))];
v0 = [0, v(length(v))];
du0 = 0.001;
dv0 = 0.001;
step_size = 0.00005; % Will determine the progression rate from pt to pt.
eta = step_size / sqrt(du0^2 + dv0^2); % Normalization.
eps = 0.0001; % Epsilon
max_num_iter = 100000; % Number of dots in each line.
% Semi-empty vectors to collect results:
U = [[u0(s), u0(s) + eta*du0], zeros(1,max_num_iter - 2)];
V = [[v0(s), v0(s) + eta*dv0], zeros(1,max_num_iter - 2)];
for i = 2:(max_num_iter - 1) % Creating the geodesic:
ut = U(i);
vt = V(i);
xt = f1(ut,vt);
yt = f2(ut,vt);
ft = f(xt,yt);
utm1 = U(i - 1);
vtm1 = V(i - 1);
xtm1 = f1(utm1,vtm1);
ytm1 = f2(utm1,vtm1);
ftm1 = f(xtm1,ytm1);
usymp = ut + (ut - utm1);
vsymp = vt + (vt - vtm1);
xsymp = f1(usymp,vsymp);
ysymp = f2(usymp,vsymp);
fsymp = ft + (ft - ftm1);
df = fsymp - f(xsymp,ysymp); % Is the surface changing? How much?
n = N(ut,vt); % Normal vector at point t
gamma = df * n(3); % Scalar x change f x z value of N
xtp1 = xsymp - gamma * n(1); % Gamma to modulate incre. x & y.
ytp1 = ysymp - gamma * n(2);
U(i + 1) = usymp - gamma * n(1);
V(i + 1) = vsymp - gamma * n(2);
end
% THE PROBLEM! f(f1(U,V),f2(U,V)) below YIELDS ALL ZEROS!!! The expected values are between 0 and 1.2.
P = [f1(U,V); f2(U,V); f(f1(U,V),f2(U,V))]; % Compiling results into a matrix.
units = 35; % Determines speed (smaller, faster)
packet = floor(size(P,2)/units);
P = P(:,1: packet * units);
for k = 1:packet:(packet * units)
hold on
plot3(P(1, k:(k+packet-1)), P(2,(k:(k+packet-1))), P(3,(k:(k+packet-1))), ...
'.', 'MarkerSize', 5, 'color', C{s})
drawnow
end
end
The credit for the answer goes to Cris Luengo, who noticed that the upper-case variable Y used to calculate the height z of the surface lives in the parametrization space (u,v), not in the manifold (x,y). Consequently, the height of the curve also has to be computed from the parameters, i.e. f(U,V), and not from f(f1(U,V), f2(U,V)). I don't use Matlab/Octave other than for occasional simulations, and I was trying every other syntactical permutation I could think of without realizing that f feeds directly from v (as intended). I have now renamed the variables to make the code cleaner.
Here is the revised code:
a = 0.3;
u = 0:0.1:(3 * pi);
v = 0:0.1:5;
[U,V] = meshgrid(u,v);
x = a.* cos(U) ./ cosh(V);
y = a.* sin(U) ./ cosh(V);
z = a.* (V - tanh(V));
h = surf(x,y,z);
zlim([0, 1.2]);
set(h,'edgecolor','none')
colormap summer
hold on
f = @(x,y) a.* (y - tanh(y));
f1 = @(u,v) a.* cos(u) ./ cosh(v);
f2 = @(u,v) a.* sin(u) ./ cosh(v);
dfdu = @(u,v) ((f(f1(u,v)+eps, f2(u,v)) - f(f1(u,v) - eps, f2(u,v)))/(2 * eps) .* ...
(f1(u+eps,v)-f1(u-eps,v))/(2*eps) + ...
(f(f1(u,v), f2(u,v)+eps) - f(f1(u,v), f2(u,v)-eps))/(2 * eps) .* ...
(f2(u+eps,v)-f2(u-eps,v))/(2*eps));
dfdv = @(u,v) ((f(f1(u,v)+eps, f2(u,v)) - f(f1(u,v) - eps, f2(u,v)))/(2 * eps) .* ...
(f1(u,v+eps)-f1(u,v-eps))/(2*eps) + ...
(f(f1(u,v), f2(u,v)+eps) - f(f1(u,v), f2(u,v)-eps))/(2 * eps) .* ...
(f2(u,v+eps)-f2(u,v-eps))/(2*eps));
% Normal vector to the surface:
N = @(u,v) [- dfdu(u,v), - dfdv(u,v), 1]; % Normal vec to surface @ any pt.
% Some colors to draw the lines:
C = {'y','r','k','m','w',[0.8 0.8 1]}; % Color scheme
for s = 1:6 % No. of lines to be plotted.
% Starting points:
u0 = [0, -pi/2, 2*pi, 4*pi/3, pi/4, pi];
v0 = [0, 0, 0, 0, 0, 0];
du0 = [0, -0.0001, 0.001, - 0.001, 0.001, -0.01];
dv0 = [0.1, 0.01, 0.001, 0.001, 0.0005, 0.01];
step_size = 0.00005; % Will determine the progression rate from pt to pt.
eta = step_size / sqrt(du0(s)^2 + dv0(s)^2); % Normalization.
eps = 0.0001; % Epsilon
max_num_iter = 180000; % Number of dots in each line.
% Semi-empty vectors to collect results:
Uc = [[u0(s), u0(s) + eta*du0(s)], zeros(1,max_num_iter - 2)];
Vc = [[v0(s), v0(s) + eta*dv0(s)], zeros(1,max_num_iter - 2)];
for i = 2:(max_num_iter - 1) % Creating the geodesic:
ut = Uc(i);
vt = Vc(i);
xt = f1(ut,vt);
yt = f2(ut,vt);
ft = f(xt,yt);
utm1 = Uc(i - 1);
vtm1 = Vc(i - 1);
xtm1 = f1(utm1,vtm1);
ytm1 = f2(utm1,vtm1);
ftm1 = f(xtm1,ytm1);
usymp = ut + (ut - utm1);
vsymp = vt + (vt - vtm1);
xsymp = f1(usymp,vsymp);
ysymp = f2(usymp,vsymp);
fsymp = ft + (ft - ftm1);
df = fsymp - f(xsymp,ysymp); % Is the surface changing? How much?
n = N(ut,vt); % Normal vector at point t
gamma = df * n(3); % Scalar x change f x z value of N
xtp1 = xsymp - gamma * n(1); % Gamma to modulate incre. x & y.
ytp1 = ysymp - gamma * n(2);
Uc(i + 1) = usymp - gamma * n(1);
Vc(i + 1) = vsymp - gamma * n(2);
end
x = f1(Uc,Vc);
y = f2(Uc,Vc);
P = [x; y; f(Uc,Vc)]; % Compiling results into a matrix.
units = 35; % Determines speed (smaller, faster)
packet = floor(size(P,2)/units);
P = P(:,1: packet * units);
for k = 1:packet:(packet * units)
hold on
plot3(P(1, k:(k+packet-1)), P(2,(k:(k+packet-1))), P(3,(k:(k+packet-1))), ...
'.', 'MarkerSize', 5, 'color', C{s})
drawnow
end
end
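For reference, the corresponding fix in the first listing (a minimal sketch, keeping its variable names U and V) is to build the third row of P from the collected parameters rather than from the manifold coordinates:
% Height of the curve taken from the (u,v) parameters, not from (x,y):
P = [f1(U,V); f2(U,V); f(U,V)];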

How to compute the derivatives automatically within for-loop in Matlab?

For the sake of generality, I would like MATLAB to compute the 1st and 2nd derivatives of the associated function f(x) automatically (in case I change f(x) = sin(6x) to, say, f(x) = sin(8x)).
I know there are built-in commands called diff() and syms, but I cannot figure out how to use them with the index i in the for-loop. This is the key problem I am struggling with.
How should I modify the following code? I am using MATLAB R2019b.
n = 10;
h = (2.0 * pi) / (n - 1);
for i = 1 : n
x(i) = 0.0 + (i - 1) * h;
f(i) = sin(6 * x(i));
dfe(i) = 6 * cos(6 * x(i)); % first derivative
ddfe(i) = -36 * sin(6 * x(i)); % second derivative
end
You can simply use subs and double to do that. For your case:
% x is given here
n = 10;
h = (2.0 * pi) / (n - 1);
syms 'y';
g = sin(6 * y);
for i = 1 : n
x(i) = 0.0 + (i - 1) * h;
f(i) = double(subs(g,y,x(i)));
dfe(i) = double(subs(diff(g),y,x(i))); % first derivative
ddfe(i) = double(subs(diff(g,2),y,x(i))); % second derivative
end
Per @Daivd's comment, you can vectorize the loop as well:
% x is given here
n = 10;
h = (2.0 * pi) / (n - 1);
syms 'y';
g = sin(6 * y);
x = 0.0 + ((1:n) - 1) * h;
f = double(subs(g,y,x));
dfe = double(subs(diff(g),y,x)); % first derivative
ddfe = double(subs(diff(g,2),y,x)); % second derivative
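If evaluating subs this way feels slow, another option (a sketch, assuming the Symbolic Math Toolbox) is to convert the symbolic expression and its derivatives into numeric function handles once with matlabFunction, then evaluate them directly:
n = 10;
h = (2.0 * pi) / (n - 1);
syms y
g = sin(6 * y);
gf   = matlabFunction(g);          % numeric handle for f(x)
dgf  = matlabFunction(diff(g));    % handle for the first derivative
ddgf = matlabFunction(diff(g, 2)); % handle for the second derivative
x = ((1:n) - 1) * h;
f    = gf(x);
dfe  = dgf(x);
ddfe = ddgf(x);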

2D Discrete Fourier Transform and Inverse DFT in matlab

I am currently implementing the 2D DFT and IDFT for images in MATLAB without using the built-in functions. I successfully output a spectrum image after the DFT, but I fail to get back the original image after the IDFT.
Here's my code for the DFT:
input = im2double(img_input);
[M, N] = size(input);
Wm = zeros(M, M);
Wn = zeros(N, N);
for x = 1:M-1
for y = 1:N-1
input(x, y) = input(x, y) * (-1)^(x + y);
end
end
for u = 0:M-1
for x = 0:M-1
Wm(u+1, x+1) = exp(-1i * pi * 2 * u * x / M);
end
end
for v = 0:N-1
for y = 0:N-1
Wn(v+1, y+1) = exp(-1i * pi * 2 * v * y / N);
end
end
F = Wm * input * Wn / 200;
output = im2uint8(log(1 + abs(F)));
IDFT:
[M, N] = size(input);
Wm = zeros(M, M);
Wn = zeros(N, N);
for x = 0:M-1
for u = 0:M-1
Wm(x+1, u+1) = exp(2 * pi * 1i * u * x/ M);
end
end
for y = 0:N-1
for v = 0:N-1
Wn(y+1, v+1) = exp(2 * pi * 1i * v * y / N);
end
end
f = Wm * input * Wn;
for x = 1:M-1
for y = 1:N-1
f(x, y) = f(x, y) * (-1)^(x + y);
end
end
output = im2uint8(abs(f));
I multiply the input by (-1)^(x+y) in order to shift the coordinate origin. I have no idea why I can't get back the original image after performing IDFT on the DFTed image.
(Figures: original image, image after DFT, image after IDFT.)
Your code works fine. To get the output of the second function to be identical to img_input of the first function, I had to make the following changes:
1st function:
F = Wm * input * Wn; % Don't divide by 200 here.
output = im2uint8(log(1 + abs(F))); % Skip this line altogether
2nd function: Make sure F from the first function is used as input here.
f = Wm * input * Wn / N / M; % Divide by N*M, proper normalization
Note that the normalization is usually put into the IDFT, but you can also put it into the DFT if you prefer. The normalization by 200 is not correct though.
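As a quick sanity check of the normalization (a sketch, assuming the cameraman.tif sample image that ships with the Image Processing Toolbox), the matrix form of the transform pair can be compared directly against fft2: the forward transform is left unnormalized and the inverse divides by M*N:
img = im2double(imread('cameraman.tif'));
[M, N] = size(img);
Wm = exp(-2i * pi * (0:M-1)' * (0:M-1) / M);   % M-point DFT kernel
Wn = exp(-2i * pi * (0:N-1)' * (0:N-1) / N);   % N-point DFT kernel
F = Wm * img * Wn;                             % forward DFT, no normalization
f = conj(Wm) * F * conj(Wn) / (M * N);         % inverse DFT, divide by M*N
G = fft2(img);
max(abs(F(:) - G(:)))                          % agrees with fft2 up to round-off
max(abs(f(:) - img(:)))                        % recovers the original image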

Recomposing vector input to algorithm from matrix output

I've written some code to implement an algorithm that takes as input a vector q of real numbers, and returns as an output a complex matrix R. The Matlab code below produces a plot showing the input vector q and the output matrix R.
Given only the complex matrix output R, I would like to obtain the input vector q. Can I do this using least-squares optimization? Since there is a recursive running sum in the code (rs_r and rs_i), the calculation for a column of the output matrix is dependent on the calculation of the previous column.
Perhaps a non-linear optimization can be set up to recompose the input vector q from the output matrix R?
Looking at this in another way, I've used an algorithm to compute a matrix R. I want to run the algorithm "in reverse," to get the input vector q from the output matrix R.
If there is no way to recompose the starting values from the output, thereby treating the problem as a "black box," then perhaps the mathematics of the model itself can be used in the optimization? The program evaluates an equation for Utilde(tau, omega), which is the output matrix R. The tau (time) variable runs along the columns of the response matrix R, whereas the omega (frequency) variable runs along its rows. The integration is performed as a recursive running sum from tau = 0 up to the current tau timestep.
Here is the full program code:
N = 1001;
q = zeros(N, 1); % here is the input
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
R = get_response(N, q, dt, wSize, Glim, ginv); % R is output matrix
rows = wSize;
cols = N;
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure; imagesc(abs(R)); title('Matrix output of algorithm')
colorbar
Here is the function that performs the calculation:
function response = get_response(N, Q, dt, wSize, Glim, ginv)
fs = 1 / dt;
Npad = wSize - 1;
N1 = wSize + Npad;
N2 = floor(N1 / 2 + 1);
f = (fs/2)*linspace(0,1,N2);
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
sigma2 = exp(-(0.23*Glim + 1.63));
sign = 1;
if(ginv == 1)
sign = -1;
end
ratio = omega ./ omegah;
rs_r = zeros(N2, 1);
rs_i = zeros(N2, 1);
termr = zeros(N2, 1);
termi = zeros(N2, 1);
termr_sub1 = zeros(N2, 1);
termi_sub1 = zeros(N2, 1);
response = zeros(N2, N);
% cycle over cols of matrix
for ti = 1:N
term0 = omega ./ (2 .* Q(ti));
gamma = 1 / (pi * Q(ti));
% calculate for the real part
if(ti == 1)
Lambda = ones(N2, 1);
termr_sub1(1) = 0;
termr_sub1(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
else
termr(1) = 0;
termr(2:end) = term0(2:end) .* (ratio(2:end).^-gamma);
rs_r = rs_r - dt.*(termr + termr_sub1);
termr_sub1 = termr;
Beta = exp( -1 .* -0.5 .* rs_r );
Lambda = (Beta + sigma2) ./ (Beta.^2 + sigma2); % vector
end
% calculate for the complex part
if(ginv == 1)
termi(1) = 0;
termi(2:end) = (ratio(2:end).^(sign .* gamma) - 1) .* omega(2:end);
else
termi = (ratio.^(sign .* gamma) - 1) .* omega;
end
rs_i = rs_i - dt.*(termi + termi_sub1);
termi_sub1 = termi;
integrand = exp( 1i .* -0.5 .* rs_i );
if(ginv == 1)
response(:,ti) = Lambda .* integrand;
else
response(:,ti) = (1 ./ Lambda) .* integrand;
end
end % ti loop
No, you cannot do so unless you know the "model" itself for this process. If you intend to treat the process as a complete black box, then it is impossible in general, although in any specific instance, anything can happen.
Even if you DO know the underlying process, it may still not work: any least-squares estimator depends on the starting values, so if you do not have a good guess there, it may converge to the wrong set of parameters.
It turns out that by using the mathematics of the model, the input can be estimated. This is not true in general, but for my problem it seems to work. The cumulative integral is eliminated by a partial derivative.
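To sketch the reasoning (read directly from get_response above, for the ginv = 0 case): each column of R is (1 ./ Lambda) .* exp(-0.5i .* rs_i), where -0.5*rs_i is a trapezoid-rule running integral of termi over tau. Therefore
imag(log(R)) is approximately the cumulative integral of termi over tau, and
d/dtau imag(log(R)) is approximately termi = ((omega./omegah).^gamma - 1) .* omega, with gamma = 1/(pi*q(tau)),
which no longer involves the running sum and depends only on q at the current timestep. That is what the code below does: deriv_3pt differentiates imag(log(R)) along tau, and fminbnd fits the termi expression to each differentiated column to recover q.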
N = 1001;
q = zeros(N, 1);
q(1:200) = 55;
q(201:300) = 120;
q(301:400) = 70;
q(401:600) = 40;
q(601:800) = 100;
q(801:1001) = 70;
dt = 0.0042;
fs = 1 / dt;
wSize = 101;
Glim = 20;
ginv = 0;
R = get_response(N, q, dt, wSize, Glim, ginv);
rows = wSize;
cols = N;
cut_val = 200;
imagLogR = imag(log(R));
Mderiv = zeros(rows, cols-2);
for k = 1:rows
val = deriv_3pt(imagLogR(k,:), dt);
val(val > cut_val) = 0;
Mderiv(k,:) = val(1:end-1);
end
disp('Running iteration');
q0 = 10;
q1 = 500;
NN = cols - 2;
qout = zeros(NN, 1);
for k = 1:NN
data = Mderiv(:,k);
qout(k) = fminbnd(@(q) curve_fit_to_get_q(q, dt, rows, data), q0, q1);
end
figure; plot(q); title('q value input as vector');
ylim([0 200]); xlim([0 1001])
figure;
plot(qout); title('Reconstructed q')
ylim([0 200]); xlim([0 1001])
Here are the supporting functions:
function output = deriv_3pt(x, dt)
% Function to compute dx/dt using the 3pt symmetrical rule
% dt is the timestep
N = length(x);
N0 = N - 1;
output = zeros(N0, 1);
denom = 2 * dt;
for k = 2:N0
output(k - 1) = (x(k+1) - x(k-1)) / denom;
end
function sse = curve_fit_to_get_q(q, dt, rows, data)
fs = 1 / dt;
N2 = rows;
f = (fs/2)*linspace(0,1,N2); % vector for frequency along cols
omega = 2 * pi .* f';
omegah = 2 * pi * f(end);
ratio = omega ./ omegah;
gamma = 1 / (pi * q);
termi = ((ratio.^(gamma)) - 1) .* omega;
Error_Vector = termi - data;
sse = sum(Error_Vector.^2);

Double Summation in MATLAB and vectorized loops

Here's my attempt at implementing this lovely formula.
http://dl.dropbox.com/u/7348856/Picture1.png
%WIGNER Computes Wigner-Distribution on an image (difference of two images).
function[wd] = wigner(difference)
%Image size
[M, N, ~] = size(difference);
%Window size (5 x 5)
Md = 5;
Nd = 5;
%Fourier Transform
F = fft2(difference);
%Initializing the wigner picture
wd = zeros(M, N, 'uint8');
lambda =0.02;
value = (4/(Md*Nd));
for x = 1+floor(Md/2):M - floor(Md/2)
for y = 1+floor(Nd/2):N - floor(Nd/2)
for l = -floor(Nd/2) : floor(Nd/2)
for k = -floor(Md/2) : floor(Md/2)
kernel = exp(-lambda * norm(k,l));
kernel = kernel * value;
theta = 4 * pi * ((real(F(x, y)) * (k/M) )+ (imag(F(x, y)) * (l/N)));
wd(x, y) = (wd(x, y)) + (cos(theta) * difference(x + k, y + l) * difference(x - k, y - l) * (kernel));
end
end
end
end
end
As you can see, the outer two loops are for the sliding window, while the remaining inner ones are for the variables of the summation.
Now, my request to you, my beloved Stack Overflow users, is: can you help me improve these very nasty for loops that take more than their share of time, and turn them into vectorized code?
And would that improvement make a significant difference?
Thank you.
This might not be what you are asking, but it seems (at first glance) that the summations are independent of the loop order, so instead of iterating {x,y,l,k} you could go {l,k,x,y}. Doing this will allow you to evaluate kernel far fewer times by keeping it in the outermost loops.
Those four nested loops are basically processing each pixel in the image in a sliding-neighborhood style. I immediately thought of NLFILTER and IM2COL functions.
Here is my attempt at vectorizing the code. Note that I haven't thoroughly tested it or compared its performance against the loop-based solution:
function WD = wigner(D, Md, Nd, lambda)
%# window size and lambda
if nargin<2, Md = 5; end
if nargin<3, Nd = 5; end
if nargin<4, lambda = 5; end
%# image size
[M,N,~] = size(D);
%# kernel = exp(-lambda*norm([k,l]))
[K,L] = meshgrid(-floor(Md/2):floor(Md/2), -floor(Nd/2):floor(Nd/2));
K = K(:); L = L(:);
kernel = exp(-lambda .* sqrt(K.^2+L.^2));
%# frequency-domain part
F = fft2(D);
%# f(x+k,y+l) * f(x-k,y-l) * kernel
C = im2col(D, [Md Nd], 'sliding');
X1 = bsxfun(@times, C .* flipud(C), kernel);
%# cos(theta)
C = im2col(F, [Md Nd], 'sliding');
C = C(round(Md*Nd/2),:); %# take center pixels
theta = bsxfun(@times, real(C), K/M) + bsxfun(@times, imag(C), L/N);
X2 = cos(4*pi*theta);
%# combine both parts for each sliding-neighborhood
WD = col2im(sum(X1.*X2,1), [Md Nd], size(F), 'sliding') .* (4/(M*N));
%# pad array with zeros to be of same size as input image
WD = padarray(WD, ([Md Nd]-1)./2, 0, 'both');
end
For what it's worth, here is the loop-based version with the improvement that @Laurbert515 suggested:
function WD = wigner_loop(D, Md, Nd, lambda)
%# window size and lambda
if nargin<2, Md = 5; end
if nargin<3, Nd = 5; end
if nargin<4, lambda = 5; end
%# image size
[M,N,~] = size(D);
%# frequency-domain part
F = fft2(D);
WD = zeros([M,N]);
for l = -floor(Nd/2):floor(Nd/2)
for k = -floor(Md/2):floor(Md/2)
%# kernel = exp(-lambda*norm([k,l]))
kernel = exp(-lambda * norm([k,l]));
for x = (1+floor(Md/2)):(M-floor(Md/2))
for y = (1+floor(Nd/2)):(N-floor(Nd/2))
%# cos(theta)
theta = 4 * pi * ( real(F(x,y))*k/M + imag(F(x,y))*l/N );
%# f(x+k,y+l) * f(x-k,y-l)* kernel
WD(x,y) = WD(x,y) + ( cos(theta) * D(x+k,y+l) * D(x-k,y-l) * kernel );
end
end
end
end
WD = WD * ( 4/(M*N) );
end
and how I test it (based on what I understood from the paper you previously linked to):
%# difference between two consecutive frames
A = imread('AT3_1m4_02.tif');
B = imread('AT3_1m4_03.tif');
D = imsubtract(A,B);
%#D = rgb2gray(D);
D = im2double(D);
%# apply Wigner-Distribution
tic, WD1 = wigner(D); toc
tic, WD2 = wigner_loop(D); toc
figure(1), imshow(WD1,[])
figure(2), imshow(WD2,[])
You might then need to scale/normalize the matrix and apply thresholding.
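A minimal sketch of one way to do that (mat2gray and graythresh are from the Image Processing Toolbox; the Otsu threshold is just an illustrative choice):
WD1n = mat2gray(WD1);            %# rescale the result to [0, 1] for display
mask = WD1n > graythresh(WD1n);  %# Otsu threshold on the rescaled values
figure(3), imshow(WD1n)
figure(4), imshow(mask)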