How to do circular convolution between 2 functions with cconv? - matlab

I was asked to do circular convolution between two functions by sampling them, using the function cconv. A known result of this sort of convolution is: cconv(sin(x), sin(x)) == -pi*cos(x)
To test the above I did:
w = linspace(0,2*pi,1000);
l = linspace(0,2*pi,1999);
stem(l,cconv(sin(w),sin(w)))
but the result I got was:
which is absolutely not -pi*cos(x).
Can anybody please explain what is wrong with my code and how to fix it?

In the documentation of cconv it says that:
c = cconv(a,b,n) circularly convolves vectors a and b. n is the length of the resulting vector. If you omit n, it defaults to length(a)+length(b)-1. When n = length(a)+length(b)-1, the circular convolution is equivalent to the linear convolution computed with conv.
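A quick way to see this equivalence (a small check of my own, not from the documentation):
a = [1 2 3];
b = [4 5 6];
c_lin  = conv(a, b);                        % linear convolution
c_circ = cconv(a, b, numel(a)+numel(b)-1);  % cconv with the default n
max(abs(c_lin - c_circ))                    % ~0, up to round-off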
I believe the reason for your problem is that you do not specify the 3rd input to cconv, so it falls back to the default value, which is not the right one for your case. I have made an animation showing what happens when different values of n are chosen.
If you compare my result for n=200 to your plot you will see that the amplitude of your data is 10 times larger, and the length of your linspace is also 10 times bigger. This means that some normalization is needed, most likely a multiplication by the linspace step.
Indeed, after proper scaling and choice of n we get the right result:
res = 100; % resolution
w = linspace(0,2*pi,res);
dx = diff(w(1:2)); % grid step
stem( linspace(0,2*pi,res), dx * cconv(sin(w),sin(w),res) );
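As a sanity check (my addition, using the variables from the snippet above), you can overlay the analytic result on the same axes:
hold on
plot(w, -pi*cos(w), 'r', 'LineWidth', 1.5)   % analytic result
legend('dx * cconv(sin,sin)', '-\pi cos(x)')
hold off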
This is the code I used for the animation:
hF = figure();
subplot(1,2,1); hS(1) = stem(1,cconv(1,1,1)); title('Autoscaling');
subplot(1,2,2); hS(2) = stem(1,cconv(1,1,1)); xlim([0,7]); ylim(50*[-1,1]); title('Constant limits');
w = linspace(0,2*pi,100);
for ind1 = 1:200
    set(hS,'XData',linspace(0,2*pi,ind1));
    set(hS,'YData',cconv(sin(w),sin(w),ind1));
    suptitle("n = " + ind1);
    drawnow
    % export_fig(char("D:\BLABLA\F" + ind1 + ".png"),'-nocrop');
end

Related

Gaussian iterative curve fitting [duplicate]

I have a set of frequency data with peaks to which I need to fit a Gaussian curve and then extract the full width at half maximum. The FWHM part I can do; I already have code for that, but I'm having trouble writing code to fit the Gaussian.
Does anyone know of any functions that will do this for me, or could you point me in the right direction? (I can do least-squares fitting for lines and polynomials but I can't get it to work for Gaussians.)
Also it would be helpful if it was compatible with both Octave and Matlab as I have Octave at the moment but don't get access to Matlab until next week.
Any help would be greatly appreciated!
Fitting a single 1D Gaussian directly is a non-linear fitting problem. You'll find ready-made implementations here, or here, or here for 2D, or here (if you have the statistics toolbox) (have you heard of Google? :)
Anyway, there might be a simpler solution. If you know for sure that your data y will be well described by a Gaussian, and it is reasonably well distributed over your entire x-range, you can linearize the problem (these are equations, not statements):
y = 1/(σ·√(2π)) · exp( -½ ( (x-μ)/σ )² )
ln y = ln( 1/(σ·√(2π)) ) - ½ ( (x-μ)/σ )²
= Px² + Qx + R
where the substitutions
P = -1/(2σ²)
Q = +2μ/(2σ²)
R = ln( 1/(σ·√(2π)) ) - ½(μ/σ)²
have been made. Now solve the linear system Ax = b with (these are MATLAB statements):
% design matrix for least squares fit
xdata = xdata(:);
A = [xdata.^2, xdata, ones(size(xdata))];
% log of your data
b = log(y(:));
% least-squares solution for x
x = A\b;
The vector x you found this way will equal
x == [P Q R]
which you then have to reverse-engineer to find the mean μ and the standard-deviation σ:
mu = -x(2)/x(1)/2;
sigma = sqrt( -1/2/x(1) );
You can cross-check this with x(3) == R (there should only be small differences).
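Putting it all together, a minimal self-contained sketch (my example, using noise-free synthetic data just to illustrate the mechanics):
% synthetic Gaussian
mu_true = 2; sigma_true = 0.5;
xdata = linspace(0, 4, 100).';
y = 1/(sigma_true*sqrt(2*pi)) * exp(-0.5*((xdata - mu_true)/sigma_true).^2);
% linearized least-squares fit of log(y) against a quadratic in x
A = [xdata.^2, xdata, ones(size(xdata))];
b = log(y);
x = A\b;
mu    = -x(2)/x(1)/2        % recovers ~2
sigma = sqrt( -1/2/x(1) )   % recovers ~0.5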
Perhaps this has the thing you are looking for? Not sure about compatibility:
http://www.mathworks.com/matlabcentral/fileexchange/11733-gaussian-curve-fit
From its documentation:
[sigma,mu,A]=mygaussfit(x,y)
[sigma,mu,A]=mygaussfit(x,y,h)
this function fits the data to the function
y = A * exp( -(x-mu)^2 / (2*sigma^2) )
the fitting is done with a polyfit on the
log of the data.
h is the threshold: the fraction of the
maximum y height above which the data
are used for the fit.
h should be a number between 0 and 1.
if h is not given it is set to 0.2
by default.
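A hypothetical call would look like this (my sketch, assuming mygaussfit.m is on your path and your data are in x and y):
[sigma, mu, A] = mygaussfit(x, y);
yfit = A * exp(-(x - mu).^2 / (2*sigma^2));
plot(x, y, 'k.', x, yfit, 'r-'); legend('data', 'Gaussian fit');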
I had a similar problem.
This was the first result on Google, and some of the scripts linked here made my MATLAB crash.
Finally I found here that MATLAB has a built-in fit function that can fit Gaussians too.
It looks like this:
>> v=-30:30;
>> fit(v', exp(-v.^2)', 'gauss1')
ans =
General model Gauss1:
ans(x) = a1*exp(-((x-b1)/c1)^2)
Coefficients (with 95% confidence bounds):
a1 = 1 (1, 1)
b1 = -8.489e-17 (-3.638e-12, 3.638e-12)
c1 = 1 (1, 1)
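Note that the gauss1 model is parameterized as a1*exp(-((x-b1)/c1)^2), so if you want the usual mean and standard deviation you can read them off the fit object (a small sketch, assuming the same Curve Fitting Toolbox call as above):
f = fit(v', exp(-v.^2)', 'gauss1');
mu    = f.b1;            % mean
sigma = f.c1 / sqrt(2);  % standard deviation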
I found that the MATLAB "fit" function was slow, so I used "lsqcurvefit" with an inline Gaussian function. This is for fitting a Gaussian FUNCTION; if you just want to fit data to a normal distribution, use "normfit".
Check it out:
% % Generate synthetic data (for example) % % %
nPoints = 200; binSize = 1/nPoints ;
fauxMean = 47 ;fauxStd = 8;
faux = fauxStd.*randn(1,nPoints) + fauxMean; % REPLACE WITH YOUR ACTUAL DATA
xaxis = 1:length(faux) ;fauxData = histc(faux,xaxis);
yourData = fauxData; % replace with your actual distribution
xAxis = 1:length(yourData) ;
gausFun = @(hms,x) hms(1) .* exp(-(x-hms(2)).^2 ./ (2*hms(3)^2)); % Gaussian FUNCTION
% % Provide estimates for initial conditions (for lsqcurvefit) % %
height_est = max(fauxData)*rand ; mean_est = fauxMean*rand; std_est=fauxStd*rand;
x0 = [height_est;mean_est; std_est]; % parameters need to be in a single variable
options=optimset('Display','off'); % avoid pesky messages from lsqcurvefit (optional)
[params]=lsqcurvefit(gausFun,x0,xAxis,yourData,[],[],options); % meat and potatoes
lsq_mean = params(2); lsq_std = params(3) ; % what you want
% % % Plot data with fit % % %
myFit = gausFun(params,xAxis);
figure;hold on;plot(xAxis,yourData./sum(yourData),'k');
plot(xAxis,myFit./sum(myFit),'r','linewidth',3) % normalization optional
xlabel('Value');ylabel('Probability');legend('Data','Fit')

MATLAB: 3d reconstruction using eight point algorithm

I am trying to achieve 3d reconstruction from 2 images. Steps I followed are,
1. Found corresponding points between 2 images using SURF.
2. Implemented eight point algo to find "Fundamental matrix"
3. Then, I implemented triangulation.
I have got the fundamental matrix and the triangulation results so far. How do I proceed further to get the 3D reconstruction? I'm confused reading all the material available on the internet.
Also, this is my code. Let me know if it is correct or not.
Ia=imread('1.jpg');
Ib=imread('2.jpg');
Ia=rgb2gray(Ia);
Ib=rgb2gray(Ib);
% My surf addition
% collect Interest Points from Each Image
blobs1 = detectSURFFeatures(Ia);
blobs2 = detectSURFFeatures(Ib);
figure;
imshow(Ia);
hold on;
plot(selectStrongest(blobs1, 36));
figure;
imshow(Ib);
hold on;
plot(selectStrongest(blobs2, 36));
title('Thirty strongest SURF features in I2');
[features1, validBlobs1] = extractFeatures(Ia, blobs1);
[features2, validBlobs2] = extractFeatures(Ib, blobs2);
indexPairs = matchFeatures(features1, features2);
matchedPoints1 = validBlobs1(indexPairs(:,1),:);
matchedPoints2 = validBlobs2(indexPairs(:,2),:);
figure;
showMatchedFeatures(Ia, Ib, matchedPoints1, matchedPoints2);
legend('Putatively matched points in I1', 'Putatively matched points in I2');
for i=1:matchedPoints1.Count
    xa(i,:)=matchedPoints1.Location(i,1);
    ya(i,:)=matchedPoints1.Location(i,2);
    xb(i,:)=matchedPoints2.Location(i,1);
    yb(i,:)=matchedPoints2.Location(i,2);
end
matchedPoints1.Count
figure(1) ; clf ;
imshow(cat(2, Ia, Ib)) ;
axis image off ;
hold on ;
xbb=xb+size(Ia,2);
set=[1:matchedPoints1.Count];
h = line([xa(set)' ; xbb(set)'], [ya(set)' ; yb(set)']) ;
pts1=[xa,ya];
pts2=[xb,yb];
pts11=pts1;pts11(:,3)=1;
pts11=pts11';
pts22=pts2;pts22(:,3)=1;pts22=pts22';
width=size(Ia,2);
height=size(Ib,1);
F=eightpoint(pts1,pts2,width,height);
[P1new,P2new]=compute2Pmatrix(F);
XP = triangulate(pts11, pts22,P2new);
eightpoint()
function [ F ] = eightpoint( pts1, pts2,width,height)
X = 1:width;
Y = 1:height;
[X, Y] = meshgrid(X, Y);
x0 = [mean(X(:)); mean(Y(:))];
X = X - x0(1);
Y = Y - x0(2);
denom = sqrt(mean(mean(X.^2+Y.^2)));
N = size(pts1, 1);
%Normalized data
T = sqrt(2)/denom*[1 0 -x0(1); 0 1 -x0(2); 0 0 denom/sqrt(2)];
norm_x = T*[pts1(:,1)'; pts1(:,2)'; ones(1, N)];
norm_x_ = T*[pts2(:,1)';pts2(:,2)'; ones(1, N)];
x1 = norm_x(1, :)';
y1= norm_x(2, :)';
x2 = norm_x_(1, :)';
y2 = norm_x_(2, :)';
A = [x1.*x2, y1.*x2, x2, ...
x1.*y2, y1.*y2, y2, ...
x1, y1, ones(N,1)];
% compute the SVD
[~, ~, V] = svd(A);
F = reshape(V(:,9), 3, 3)';
[FU, FS, FV] = svd(F);
FS(3,3) = 0; %rank 2 constrains
F = FU*FS*FV';
% rescale fundamental matrix
F = T' * F * T;
end
triangulate()
function [ XP ] = triangulate( pts1,pts2,P2 )
n=size(pts1,2);
X=zeros(4,n);
for i=1:n
A=[-1,0,pts1(1,i),0;
0,-1,pts1(2,i),0;
pts2(1,i)*P2(3,:)-P2(1,:);
pts2(2,i)*P2(3,:)-P2(2,:)];
[~,~,va] = svd(A);
X(:,i) = va(:,4);
end
XP(:,:,1) = [X(1,:)./X(4,:);X(2,:)./X(4,:);X(3,:)./X(4,:); X(4,:)./X(4,:)];
end
function [ P1,P2 ] = compute2Pmatrix( F )
P1=[1,0,0,0;0,1,0,0;0,0,1,0];
[~, ~, V] = svd(F');
ep = V(:,3)/V(3,3);
P2 = [skew(ep)*F,ep];
end
From a quick look, it looks correct. Some notes follow:
Your normalization code in eightpoint() is not ideal.
It is best done on the points involved. Each set of points should get its own scaling matrix. That is:
[pts1_n, T1] = normalize_pts(pts1);
[pts2_n, T2] = normalize_pts(pts2);
% ... code
% solution
F = T2' * F * T1;
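If you don't already have such a helper, here is one possible sketch of normalize_pts (the standard Hartley-style normalization; pts is assumed to be N-by-2):
function [pts_n, T] = normalize_pts(pts)
% Move the centroid to the origin and scale so the mean distance
% from the origin is sqrt(2); return the similarity transform T.
c = mean(pts, 1);                                     % centroid (1x2)
d = mean(sqrt(sum(bsxfun(@minus, pts, c).^2, 2)));    % mean distance to centroid
s = sqrt(2) / d;
T = [s 0 -s*c(1); 0 s -s*c(2); 0 0 1];
pts_h = T * [pts.'; ones(1, size(pts,1))];            % apply to homogeneous points
pts_n = pts_h(1:2, :).';
end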
As a side note (for efficiency) you should do
[~,~,V] = svd(A, 0);
You also want to enforce the constraint that the fundamental matrix has rank-2. After you compute F, you can do:
[U,D,V] = svd(F);
F = U * diag([D(1,1), D(2,2), 0]) * V';
In either case, normalization is not the only key to making the algorithm work. You'll want to wrap the estimation of the fundamental matrix in a robust estimation scheme like RANSAC.
Estimation problems like this are very sensitive to non-Gaussian noise and outliers. If you have a small number of wrong correspondences, or points with high error, the algorithm will break.
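As a rough sketch of what that wrapping could look like (sampsonError() is a hypothetical helper that returns one residual per correspondence; the iteration count and threshold are placeholders you would tune):
nPts = size(pts1, 1);
bestF = eye(3);
bestInliers = false(nPts, 1);
nIter = 2000; thresh = 1e-3;                           % placeholder values
for it = 1:nIter
    idx = randperm(nPts, 8);                           % minimal 8-point sample
    Fcand = eightpoint(pts1(idx,:), pts2(idx,:), width, height);
    err = sampsonError(Fcand, pts1, pts2);             % hypothetical residual helper
    inliers = err(:) < thresh;
    if nnz(inliers) > nnz(bestInliers)
        bestInliers = inliers;
        bestF = Fcand;
    end
end
% refine on all inliers of the best model
F = eightpoint(pts1(bestInliers,:), pts2(bestInliers,:), width, height);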
Finally, in triangulate() you want to make sure that the points are not at infinity prior to the homogeneous division.
I'd recommend testing the code with 'synthetic' data. That is, generate your own camera matrices and correspondences. Feed them to the estimate routine with varying levels of noise. With zero noise, you should get an exact solution up to floating point accuracy. As you increase the noise, your estimation error increases.
In its current form, running this on real data will probably not do well unless you 'robustify' the algorithm with RANSAC, or some other robust estimator.
Good luck.
Which version of MATLAB do you have?
There is a function called estimateFundamentalMatrix in the Computer Vision System Toolbox, which will give you the fundamental matrix. It may give you better results than your code, because it is using RANSAC under the hood, which makes it robust to spurious matches. There is also a triangulate function, as of version R2014b.
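For example, a minimal sketch of the robust estimation step (assuming the toolbox is installed and using the matched points from your code):
[F, inliersIdx] = estimateFundamentalMatrix(matchedPoints1, matchedPoints2, ...
    'Method', 'RANSAC', 'NumTrials', 2000, 'DistanceThreshold', 1e-2);
inlierPts1 = matchedPoints1(inliersIdx, :);
inlierPts2 = matchedPoints2(inliersIdx, :);
figure;
showMatchedFeatures(Ia, Ib, inlierPts1, inlierPts2);
title('Inlier matches after RANSAC');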
What you are getting is sparse 3D reconstruction. You can plot the resulting 3D points, and you can map the color of the corresponding pixel to each one. However, for what you want, you would have to fit a surface or a triangular mesh to the points. Unfortunately, I can't help you there.
If what you're asking is how to proceed from the fundamental matrix plus corresponding points to a dense model, then you still have a lot of work ahead of you.
The relative camera pose (R, T) can be calculated from a fundamental matrix, assuming you know the internal camera parameters (up to scale, rotation, translation). To get a full dense reconstruction there are a few ways to go. You can try using an existing library (PMVS for example). I'd look into OpenMVG, but I'm not sure about its MATLAB interface.
Another way to go: you can compute a dense optical flow (many implementations are available for MATLAB). Look for an epipolar OF (it takes a fundamental matrix and restricts the solution to lie on the epipolar lines). Then you can triangulate every pixel to get a depth map.
Finally, you will have to play with format conversions to get from a depth map to VRML (you can look at MeshLab).
Sorry my answer isn't more Matlab oriented.

empirical mean and variance plot in matlab with the normal distribution

I'm new to MATLAB programming. I should write a script to generate a random sequence (x1, ..., xN) of size N following the normal distribution N(0, 1), and calculate the empirical mean mN and variance σN^2.
Then I should plot them.
This is my attempt:
function f = normal_distribution(n)
x =randn(n);
muem = 1./n .* (sum(x));
muem
%mean(muem)
vaem = 1./n .* (sum((x).^2));
vaem
hold on
plot(x,muem,'-')
grid on
plot(x,vaem,'*')
NB: these are the formulas that I have used:
I have obtained a figure and I don't know whether it is correct or not. Thanks for the help.
From your question, it seems what you want to do is calculate the mean and variance from a sample of size N (not an NxN matrix) drawn from a standard normal distribution. So you may want to use randn(n, 1) instead of randn(n). Also, as @ThP pointed out, it does not make sense to plot mean and variance vs. x. What you could do instead is calculate means and variances for increasing sample sizes n1, n2, ..., nm, and then plot sample size vs. mean or variance, to see them converge to 0 and 1. See the code below:
function [] = plotMnV(nIter)
means = zeros(nIter, 1);
vars = zeros(nIter, 1);
for pow = 1:nIter
    n = 2^pow;
    x = randn(n, 1);
    means(pow) = 1./n * sum(x);
    vars(pow) = 1./n * sum(x.^2);
end
plot(1:nIter, means, 'o-');
hold on;
plot(1:nIter, vars, '*-');
end
For example, plotMnV(20) gave me the plot below.
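If you want the convergence targets to be visible as well, you could overlay reference lines at the theoretical values (a small addition of mine; hold is still on after the call):
plotMnV(20);
plot([1 20], [0 0], 'k--');   % theoretical mean of N(0,1)
plot([1 20], [1 1], 'k--');   % theoretical variance of N(0,1)
legend('sample mean', 'sample variance', 'E[x] = 0', 'Var[x] = 1');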

Multiply an arbitrary number of matrices an arbitrary number of times

I have found several questions/answers for vectorizing and speeding up routines for multiplying a matrix and a vector in a single loop, but I am trying to do something a little more general, namely multiplying an arbitrary number of matrices together, and then performing that operation an arbitrary number of times.
I am writing a general routine for calculating thin-film reflection from an arbitrary number of layers vs optical frequency. For each optical frequency W each layer has an index of refraction N and an associated 2x2 transfer matrix L and 2x2 interface matrix I which depends on the index of refraction and the thickness of the layer. If n is the number of layers, and m is the number of frequencies, then I can vectorize the index into an n x m matrix, but then in order to calculate the reflection at each frequency, I have to do nested loops. Since I am ultimately using this as part of a fitting routine, anything I can do to speed it up would be greatly appreciated.
This should provide a minimum working example:
W = 1260:0.1:1400; %frequency in cm^-1
N = rand(4,numel(W))+1i*rand(4,numel(W)); %dummy complex index of refraction
D = [0 0.1 0.2 0]/1e4; %thicknesses in cm
[n,m] = size(N);
r = zeros(size(W));
for x = 1:m %loop over frequencies
C = eye(2); % first medium is air
for y = 2:n %loop over layers
na = N(y-1,x);
nb = N(y,x);
%I = InterfaceMatrix(na,nb); % calculate the 2x2 interface matrix
I = [1 na*nb;na*nb 1]; % dummy matrix
%L = TransferMatrix(nb) % calculate the 2x2 transfer matrix
L = [exp(-1i*nb*W(x)*D(y)) 0; 0 exp(+1i*nb*W(x)*D(y))]; % dummy matrix
C = C*I*L;
end
a = C(1,1);
c = C(2,1);
r(x) = c/a; % reflectivity, the answer I want.
end
Running this twice for two different polarizations for a three layer (air/stuff/substrate) problem with 2562 frequencies takes 0.952 seconds while solving the exact same problem with the explicit formula (vectorized) for a three layer system takes 0.0265 seconds. The problem is that beyond 3 layers, the explicit formula rapidly becomes intractable and I would have to have a different subroutine for each number of layers while the above is completely general.
Is there hope for vectorizing this code or otherwise speeding it up?
(edited to add that I've left several things out of the code to shorten it, so please don't try to use this to actually calculate reflectivity)
Edit: In order to clarify, I and L are different for each layer and for each frequency, so they change in each loop. Simply taking the exponent will not work. For a real world example, take the simplest case of a soap bubble in air. There are three layers (air/soap/air) and two interfaces. For a given frequency, the full transfer matrix C is:
C = L_air * I_air2soap * L_soap * I_soap2air * L_air;
and I_air2soap ~= I_soap2air. Thus, I start with L_air = eye(2) and then go down successive layers, computing I_(y-1,y) and L_y, multiplying them with the result from the previous loop, and going on until I get to the bottom of the stack. Then I grab the first and third values, take the ratio, and that is the reflectivity at that frequency. Then I move on to the next frequency and do it all again.
I suspect that the answer is going to somehow involve a block-diagonal matrix for each layer as mentioned below.
I'm not next to a MATLAB right now, so this is only a starter.
Instead of the double loop you can write na*nb as Nab = N(1:end-1,:).*N(2:end,:);
The term in the exponent, nb*W(x)*D(y), can be written as e = N(2:end,:).*(D(2:end).'*W);
The result of I*L is a 2x2 block matrix that has this form:
M = [1, Nab; Nab, 1]*[e-, 0; 0, e+] = [e-, Nab*e+; Nab*e-, e+]
with e- as exp(-1i*e) and e+ as exp(1i*e).
See kron for how to get the block matrix form; to vectorize the propagation C = C*I*L, just take M^n.
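A tiny illustration of the block form (my sketch, not from the original answer): kron with a sparse identity replicates a 2x2 block down the diagonal.
M = [0 1; 1 0];                 % some 2x2 block
blockDiag = kron(speye(3), M);  % 6x6 sparse block-diagonal matrix: three copies of M
full(blockDiag)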
@Lama put me on the right path by suggesting block matrices, but the ultimate answer ended up being more complicated, so I put it here for posterity. Since the transfer and interface matrices are different for each layer, I keep the loop over the layers, but construct a large sparse block matrix where each block represents a frequency.
W = 1260:0.1:1400; %frequency in cm^-1
N = rand(4,numel(W))+1i*rand(4,numel(W)); %dummy complex index of refraction
D = [0 0.1 0.2 0]/1e4; %thicknesses in cm
[n,m] = size(N);
r = zeros(size(W));
C = speye(2*m); % first medium is air
even = 2:2:2*m;
odd = 1:2:2*m-1;
for y = 2:n %loop over layers
    na = N(y-1,:);
    nb = N(y,:);
    % get the reflection and transmission coefficients from subroutines as a vector
    % of length m, one value for each frequency
    %t = Tab(na, nb);
    %r = Rab(na, nb);
    t = rand(size(W)); % dummy vector for MWE
    r = rand(size(W)); % dummy vector for MWE
    % create diagonal and off-diagonal elements. each block is [1 r;r 1]/t
    Id(even) = 1./t;
    Id(odd) = Id(even);
    Io(even) = 0;
    Io(odd) = r./t;
    It = [Io;Id/2].';
    I = spdiags(It,[-1 0],2*m,2*m);
    I = I + I.';
    b = 1i.*(2*pi*D(y).*nb).*W;
    B(even) = -b;
    B(odd) = b;
    L = spdiags(exp(B).',0,2*m,2*m);
    C = C*I*L;
end
a = spdiags(C,0);
a = a(odd).';
c = spdiags(C,-1);
c = c(odd).';
r = c./a; % reflectivity, the answer I want.
With the 3 layer system mentioned above, it isn't quite as fast as the explicit formula, but it's close and probably can get a little faster after some profiling. The full version of the original code clocks at 0.97 seconds, the formula at 0.012 seconds and the sparse diagonal version here at 0.065 seconds.

Can someone explain how to graph this sum in MATLAB using contourf?

I'm going to start off by stating that, yes, this is homework (my first homework question on stackoverflow!). But I don't want you to solve it for me, I just want some guidance!
The equation in question is this:
I'm told to take N = 50, phi1 = 300, phi2 = 400, 0<=x<=1, and 0<=y<=1, and to let x and y be vectors of 100 equally spaced points, including the end points.
So the first thing I did was set those variables, and used x = linspace(0,1) and y = linspace(0,1) to make the correct vectors.
The question is Write a MATLAB script file called potential.m which calculates phi(x,y) and makes a filled contour plot versus x and y using the built-in function contourf (see the help command in MATLAB for examples). Make sure the figure is labeled properly. (Hint: the top and bottom portions of your domain should be hotter at about 400 degrees versus the left and right sides which should be at 300 degrees).
However, previously, I've calculated phi using either x or y as a constant. How am I supposed to calculate it where both are variables? Do I hold x steady, while running through every number in the vector of y, assigning that to a matrix, incrementing x to the next number in its vector after running through every value of y again and again? And then doing the same process, but slowly incrementing y instead?
If so, I've been using a loop that increments to the next row every time it loops through all 100 values. If I did it that way, I would end up with a massive matrix that has 200 rows and 100 columns. How would I use that in the linspace function?
If that's correct, this is how I'm finding my matrix:
clear
clc
format compact
x = linspace(0,1);
y = linspace(0,1);
N = 50;
phi1 = 300;
phi2 = 400;
phi = 0;
sum = 0;
for j = 1:100
    for i = 1:100
        for n = 1:N
            sum = sum + ((2/(n*pi))*(((phi2-phi1)*(cos(n*pi)-1))/((exp(n*pi))-(exp(-n*pi))))*((1-(exp(-n*pi)))*(exp(n*pi*y(i)))+((exp(n*pi))-1)*(exp(-n*pi*y(i))))*sin(n*pi*x(j)));
        end
        phi(j,i) = phi1 - sum;
    end
end
for j = 1:100
    for i = 1:100
        for n = 1:N
            sum = sum + ((2/(n*pi))*(((phi2-phi1)*(cos(n*pi)-1))/((exp(n*pi))-(exp(-n*pi))))*((1-(exp(-n*pi)))*(exp(n*pi*y(j)))+((exp(n*pi))-1)*(exp(-n*pi*y(j))))*sin(n*pi*x(i)));
        end
        phi(j+100,i) = phi1 - sum;
    end
end
This is the definition of contourf. I think I have to use contourf(X,Y,Z):
contourf(X,Y,Z), contourf(X,Y,Z,n), and contourf(X,Y,Z,v) draw filled contour plots of Z using X and Y to determine the x- and y-axis limits. When X and Y are matrices, they must be the same size as Z and must be monotonically increasing.
Here is the new code:
N = 50;
phi1 = 300;
phi2 = 400;
[x, y, n] = meshgrid(linspace(0,1),linspace(0,1),1:N)
f = phi1-((2./(n.*pi)).*(((phi2-phi1).*(cos(n.*pi)-1))./((exp(n.*pi))-(exp(-n.*pi)))).*((1-(exp(-1.*n.*pi))).*(exp(n.*pi.*y))+((exp(n.*pi))-1).*(exp(-1.*n.*pi.*y))).*sin(n.*pi.*x));
g = sum(f,3);
[x1,y1] = meshgrid(linspace(0,1),linspace(0,1));
contourf(x1,y1,g)
Vectorize the code. For example you can write f(x,y,n) with:
[x y n] = meshgrid(-1:0.1:1,-1:0.1:1,1:10);
f=exp(x.^2-y.^2).*n ;
f is now a 3D matrix; just sum over the right dimension:
g=sum(f,3);
In order to use contourf, we'll take only the 2D part of x and y:
[x1 y1] = meshgrid(-1:0.1:1,-1:0.1:1);
contourf(x1,y1,g)
The reason your code takes so long to calculate the phi matrix is that you didn't pre-allocate the array. The error about size happens because phi is not 100x100. But instead of fixing those things, there's an even better way...
MATLAB is a MATrix LABoratory so this type of equation is pretty easy to compute using matrix operations. Hints:
Instead of looping over the values, rows, or columns of x and y, construct matrices to represent all the possible input combinations. Check out meshgrid for this.
You're still going to need a loop to sum over n = 1:N. But for each value of n, you can evaluate your equation for all x's and y's at once (using the matrices from hint 1). The key to making this work is using element-by-element operators, such as .* and ./.
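A minimal sketch of that pattern (with a hypothetical term() left for you to write, so the homework stays yours):
[X, Y] = meshgrid(linspace(0,1), linspace(0,1));   % all 100x100 (x,y) combinations
S = zeros(size(X));
for n = 1:N
    S = S + term(n, X, Y, phi1, phi2);   % hypothetical element-wise series term
end
phi = phi1 - S;
contourf(X, Y, phi)
colorbar; xlabel('x'); ylabel('y'); title('\phi(x,y)')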
Using matrix operations like this is The Matlab Way. Learn it and love it. (And get frustrated when using most other languages that don't have them.)
Good luck with your homework!