Optimization by perturbing variable - matlab

My main script contains the following code:
%# Grid and model parameters
nModel=50;
nModel_want=1;
nI_grid1=5;
Nth=1;
nRow.Scale1=5;
nCol.Scale1=5;
nRow.Scale2=5^2;
nCol.Scale2=5^2;
theta = 90; % degrees
a_minor = 2; % range along minor direction
a_major = 5; % range along major direction
sill = var(reshape(Deff_matrix_NthModel,nCell.Scale1,1)); % variance of the coarse data matrix of size nRow.Scale1 X nCol.Scale1
%# Covariance computation
% Scale 1
for ihRow = 1:nRow.Scale1
for ihCol = 1:nCol.Scale1
[cov.Scale1(ihRow,ihCol),heff.Scale1(ihRow,ihCol)] = general_CovModel(theta, ihCol, ihRow, a_minor, a_major, sill, 'Exp');
end
end
% Scale 2
for ihRow = 1:nRow.Scale2
for ihCol = 1:nCol.Scale2
[cov.Scale2(ihRow,ihCol),heff.Scale2(ihRow,ihCol)] = general_CovModel(theta, ihCol/(nCol.Scale2/nCol.Scale1), ihRow/(nRow.Scale2/nRow.Scale1), a_minor, a_major, sill/(nRow.Scale2*nCol.Scale2), 'Exp');
end
end
%# Scale-up of fine scale values by averaging
[covAvg.Scale2,var_covAvg.Scale2,varNorm_covAvg.Scale2] = general_AverageProperty(nRow.Scale2/nRow.Scale1,nCol.Scale2/nCol.Scale1,1,nRow.Scale1,nCol.Scale1,1,cov.Scale2,1);
I am using two functions, general_CovModel() and general_AverageProperty(), in my main script, which are given as follows:
function [cov,h_eff] = general_CovModel(theta, hx, hy, a_minor, a_major, sill, mod_type)
% mod_type should be in strings
angle_rad = theta*(pi/180); % theta in degrees, angle_rad in radians
R_theta = [sin(angle_rad) cos(angle_rad); -cos(angle_rad) sin(angle_rad)];
h = [hx; hy];
lambda = a_minor/a_major;
D_lambda = [lambda 0; 0 1];
h_2prime = D_lambda*R_theta*h;
h_eff = sqrt((h_2prime(1)^2)+(h_2prime(2)^2));
if strcmp(mod_type,'Sph')==1 || strcmp(mod_type,'sph') ==1
if h_eff<=a_minor
cov = sill - sill.*(1.5*(h_eff/a_minor)-0.5*((h_eff/a_minor)^3));
else
cov = sill;
end
elseif strcmp(mod_type,'Exp')==1 || strcmp(mod_type,'exp') ==1
cov = sill-(sill.*(1-exp(-(3*h_eff)/a_minor)));
elseif strcmp(mod_type,'Gauss')==1 || strcmp(mod_type,'gauss') ==1
cov = sill-(sill.*(1-exp(-((3*h_eff)^2/(a_minor^2)))));
end
and
function [PropertyAvg,variance_PropertyAvg,NormVariance_PropertyAvg]=...
general_AverageProperty(blocksize_row,blocksize_col,blocksize_t,...
nUpscaledRow,nUpscaledCol,nUpscaledT,PropertyArray,omega)
% This function computes average of a property and variance of that averaged
% property using power averaging
PropertyAvg=zeros(nUpscaledRow,nUpscaledCol,nUpscaledT);
%# Average of property
for k=1:nUpscaledT,
for j=1:nUpscaledCol,
for i=1:nUpscaledRow,
sum=0;
for a=1:blocksize_row,
for b=1:blocksize_col,
for c=1:blocksize_t,
sum=sum+(PropertyArray((i-1)*blocksize_row+a,(j-1)*blocksize_col+b,(k-1)*blocksize_t+c).^omega); % add all the property values in 'blocksize_x','blocksize_y','blocksize_t' to one variable
end
end
end
PropertyAvg(i,j,k)=(sum/(blocksize_row*blocksize_col*blocksize_t)).^(1/omega); % take average of the summed property
end
end
end
%# Variance of averaged property
variance_PropertyAvg=var(reshape(PropertyAvg,...
nUpscaledRow*nUpscaledCol*nUpscaledT,1),1,1);
%# Normalized variance of averaged property
NormVariance_PropertyAvg=variance_PropertyAvg./(var(reshape(...
PropertyArray,numel(PropertyArray),1),1,1));
Question: Using Matlab, I would like to optimize covAvg.Scale2 such that it matches closely with cov.Scale1 by perturbing/varying any (or all) of the following variables
1) a_minor
2) a_major
3) theta
I am aware that I can use fminsearch; however, I am not able to work out how to perturb only the variables I want while using fminsearch.

I won't pretend to understand everything that you are doing, but it sounds like a typical minimization problem. What you want to do is come up with a single function that takes a_minor, a_major and theta as arguments, and returns a scalar measure of the mismatch between covAvg.Scale2 and cov.Scale1 (since these are matrices, the sum of squared element-wise differences is a natural choice). Something like this:
function diff = minimize_me(a_minor, a_major, theta)
... your script goes here
diff = sum((covAvg.Scale2(:) - cov.Scale1(:)).^2); % sum of squared element-wise differences
end
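For concreteness, here is a minimal sketch of what that wrapper could look like. It is not your exact script: the grid sizes are copied from the question, sill is a placeholder you would replace with the variance of your coarse data matrix, and general_CovModel / general_AverageProperty from the question are assumed to be on the path.
function diff = minimize_me(a_minor, a_major, theta)
% Sketch only -- grid sizes copied from the question, sill is a placeholder
nRow.Scale1 = 5;  nCol.Scale1 = 5;
nRow.Scale2 = 25; nCol.Scale2 = 25;
sill = 1; % placeholder; use the variance of the coarse data matrix here
% Scale 1 covariance (coarse grid)
for ihRow = 1:nRow.Scale1
for ihCol = 1:nCol.Scale1
cov.Scale1(ihRow,ihCol) = general_CovModel(theta, ihCol, ihRow, a_minor, a_major, sill, 'Exp');
end
end
% Scale 2 covariance (fine grid)
for ihRow = 1:nRow.Scale2
for ihCol = 1:nCol.Scale2
cov.Scale2(ihRow,ihCol) = general_CovModel(theta, ihCol/(nCol.Scale2/nCol.Scale1), ihRow/(nRow.Scale2/nRow.Scale1), a_minor, a_major, sill/(nRow.Scale2*nCol.Scale2), 'Exp');
end
end
% Upscale the fine-grid covariance by averaging, as in the question
covAvg.Scale2 = general_AverageProperty(nRow.Scale2/nRow.Scale1, nCol.Scale2/nCol.Scale1, 1, nRow.Scale1, nCol.Scale1, 1, cov.Scale2, 1);
% Objective: sum of squared differences between the two covariance matrices
diff = sum((covAvg.Scale2(:) - cov.Scale1(:)).^2);
end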
Then you need matlab to minimize this function. There's more than one option here. Since you only have three variables to minimize over, fminsearch is a good place to start. You would call it something like this:
opts = optimset('display', 'iter');
x = fminsearch( @(x) minimize_me(x(1), x(2), x(3)), [a_minor_start a_major_start theta_start], opts)
The first argument to fminsearch is the function you want to optimize. It must take a single argument: a vector of the variables that will be perturbed in order to find the minimum value. Here I use an anonymous function to extract the values from this vector and pass them into minimize_me. The second argument to fminsearch is a vector containing the values to start searching at. The third argument is a set of options that affect the search; it's a good idea to set display to iter when you first start optimizing, so that you can get an idea of how well the optimizer is converging.
If your parameters have restricted domains (e.g. they must all be positive) take a look at fminsearchbnd on the file exchange.
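For example, if a_minor, a_major and theta must all stay positive (and theta within 0-180 degrees, say), a bounded call could look roughly like this; the bounds here are illustrative, not taken from the question:
lb = [0 0 0]; % lower bounds for [a_minor a_major theta]
ub = [Inf Inf 180]; % upper bounds (illustrative)
x = fminsearchbnd(@(x) minimize_me(x(1), x(2), x(3)), [a_minor_start a_major_start theta_start], lb, ub, opts);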
If I have misunderstood your problem, and this doesn't help at all, try posting code that we can run to reproduce the problem ourselves.

Related

Verify Law of Large Numbers in MATLAB

The problem:
If a large number of fair N-sided dice are rolled, the average of the simulated rolls is likely to be close to the mean of 1,2,...N i.e. the expected value of one die. For example, the expected value of a 6-sided die is 3.5.
Given N, simulate 1e8 N-sided dice rolls by creating a vector of 1e8 uniformly distributed random integers. Return the difference between the mean of this vector and the mean of integers from 1 to N.
My code:
function dice_diff = loln(N)
% the mean of integer from 1 to N
A = 1:N
meanN = sum(A)/N;
% I do not have any idea what I am doing here!
V = randi(1e8);
meanvector = V/1e8;
dice_diff = meanvector - meanN;
end
First of all, every time you ask a question, make sure it is as clear as possible, to make it easier for other users to read.
If you check how randi works, you can see this:
R = randi(IMAX,N) returns an N-by-N matrix containing pseudorandom
integer values drawn from the discrete uniform distribution on 1:IMAX.
randi(IMAX,M,N) or randi(IMAX,[M,N]) returns an M-by-N matrix.
randi(IMAX,M,N,P,...) or randi(IMAX,[M,N,P,...]) returns an
M-by-N-by-P-by-... array. randi(IMAX) returns a scalar.
randi(IMAX,SIZE(A)) returns an array the same size as A.
So, if you want to use randi in your problem, you have to use it like this:
V=randi(N, 1e8,1);
and you need some more changes:
function dice_diff = loln(N)
%the mean of integer from 1 to N
A = 1:N;
meanN = mean(A);
V = randi(N, 1e8,1);
meanvector = mean(V);
dice_diff = meanvector - meanN;
end
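As a quick sanity check of the corrected function (the exact output varies from run to run, but by the law of large numbers it should be small):
dice_diff = loln(6) % expected value of a 6-sided die is 3.5; dice_diff is typically on the order of 1e-4 or smaller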
For future problems, try using the command
help randi
and MATLAB will explain how randi (or any other function) works.
Make sure to check if the code above gives the desired result
As pointed out, take a closer look at the use of randi(). From the general case
X = randi([LowerInt,UpperInt],NumRows,NumColumns); % UpperInt > LowerInt
you can adapt to dice rolling by
Rolls = randi([1 NumSides],NumRolls,NumSamplePaths);
as an example. Exchanging NumRolls and NumSamplePaths will yield Rolls.', or transpose(Rolls).
According to the Law of Large Numbers, the sample average, updated after each roll, should converge to the true mean, ExpVal (short for expected value), as the number of rolls (trials) increases. The plot produced by the code below shows this for two sample paths.
To get the sample mean for each number of dice rolls, I used arrayfun() with
CumulativeAvg1 = arrayfun(@(jj)mean(Rolls(1:jj,1)),[1:NumRolls]);
which is equivalent to using the cumulative sum, cumsum(), to get the same result.
CumulativeAvg1 = (cumsum(Rolls(:,1))./(1:NumRolls).'); % equivalent
% MATLAB R2019a
% Create Dice
NumSides = 6; % positive nonzero integer
NumRolls = 200;
NumSamplePaths = 2;
% Roll Dice
Rolls = randi([1 NumSides],NumRolls,NumSamplePaths);
% Output Statistics
ExpVal = mean(1:NumSides);
CumulativeAvg1 = arrayfun(@(jj)mean(Rolls(1:jj,1)),[1:NumRolls]);
CumulativeAvgError1 = CumulativeAvg1 - ExpVal;
CumulativeAvg2 = arrayfun(@(jj)mean(Rolls(1:jj,2)),[1:NumRolls]);
CumulativeAvgError2 = CumulativeAvg2 - ExpVal;
% Plot
figure
subplot(2,1,1), hold on, box on
plot(1:NumRolls,CumulativeAvg1,'b--','LineWidth',1.5,'DisplayName','Sample Path 1')
plot(1:NumRolls,CumulativeAvg2,'r--','LineWidth',1.5,'DisplayName','Sample Path 2')
yline(ExpVal,'k-')
title('Average')
xlabel('Number of Trials')
ylim([1 NumSides])
subplot(2,1,2), hold on, box on
plot(1:NumRolls,CumulativeAvgError1,'b--','LineWidth',1.5,'DisplayName','Sample Path 1')
plot(1:NumRolls,CumulativeAvgError2,'r--','LineWidth',1.5,'DisplayName','Sample Path 2')
yline(0,'k-')
title('Error')
xlabel('Number of Trials')

defining the X values for a code

I have this task to create a script that acts similarly to normcdf on matlab.
x=linspace(-5,5,1000); %values for x
p= 1/sqrt(2*pi) * exp((-x.^2)/2); % THE PDF for the standard normal
t=cumtrapz(x,p); % the CDF for the standard normal distribution
plot(x,t); %shows the graph of the CDF
The problem is that the t values are indexed by 1:1000 instead of by the x values from -5 to 5. I want to know how to associate the correct x values (linspace(-5,5,1000)) with the t output, so that when I evaluate it at a point n I get the same result as normcdf(n).
Just to clarify: the problem is that I cannot simply write something like t(-5) and get the CDF value at that point, as I can with normcdf, because the values calculated by cumtrapz are indexed by 1:1000 instead of by -5 to 5.
Updated answer
Ok, having read your comment; here is how to do what you want:
x = linspace(-5,5,1000);
p = 1/sqrt(2*pi) * exp((-x.^2)/2);
cdf = cumtrapz(x,p);
q = 3; % Query point
disp(normcdf(q)) % For reference
[~,I] = min(abs(x-q)); % Find closest index
disp(cdf(I)) % Show the value
Sadly, there is no matlab syntax which will do this nicely in one line, but if you abstract finding the closest index into a different function, you can do this:
cdf(findClosest(x,q))
function I = findClosest(x,q)
if q>max(x) || q<min(x)
warning('q outside the range of x');
end
[~,I] = min(abs(x-q));
end
Also; if you are certain that the exact value of the query point q exists in x, you can just do
cdf(x==q);
But beware of floating point errors. You may think that a certain range ought to contain a certain value, only to find that it differs by a tiny roundoff error. You can see that in action here, for example:
x1 = linspace(0,1,1000); % Range
x2 = asin(sin(x1)); % Ought to be the same thing
plot((x1-x2)/eps); grid on; % But they differ by roughly 1 unit of machine precision
Old answer
As far as I can tell, running your code does reproduce the result of normcdf(x) well. If you want to do exactly what normcdf does, then use erfc.
close all; clear; clc;
x = linspace(-5,5,1000);
cdf = normcdf(x); % Result of normcdf for comparison
%% 1 Trapezoidal integration of normal pd
p = 1/sqrt(2*pi) * exp((-x.^2)/2);
cdf1 = cumtrapz(x,p);
%% 2 But error function IS the integral of the normal pd
cdf2 = (1+erf(x/sqrt(2)))/2;
%% 3 Or, even better, use the error function complement (works better for large negative x)
cdf3 = erfc(-x/sqrt(2))/2;
fprintf('1: Mean error = %.2d\n',mean(abs(cdf1-cdf)));
fprintf('2: Mean error = %.2d\n',mean(abs(cdf2-cdf)));
fprintf('3: Mean error = %.2d\n',mean(abs(cdf3-cdf)));
plot(x,cdf1,x,cdf2,x,cdf3,x,cdf,'k--');
This gives me
1: Mean error = 7.83e-07
2: Mean error = 1.41e-17
3: Mean error = 00 <- Because that is literally what normcdf is doing
If your goal is not to use predefined MATLAB functions, but instead to calculate the result numerically (i.e. to calculate the error function yourself), then it's an interesting challenge which you can read about, for example, here or in this stats stackexchange post. Just as an example, the following piece of code calculates the error function by implementing eq. 2 from the first link:
nerf = @(x,n) (-1)^n*2/sqrt(pi)*x.^(2*n+1)./factorial(n)/(2*n+1);
figure(1); hold on;
temp = zeros(size(x)); p =[];
for n = 0:20
temp = temp + nerf(x/sqrt(2),n);
if~mod(n,3)
p(end+1) = plot(x,(1+temp)/2);
end
end
ylim([-1,2]);
title('\Sigma_{n=0}^{inf} ( 2/sqrt(pi) ) \times ( (-1)^n x^{2*n+1} ) \div ( n! (2*n+1) )');
p(end+1) = plot(x,cdf,'k--');
legend(p,'n = 0','\Sigma_{n} 0->3','\Sigma_{n} 0->6','\Sigma_{n} 0->9',...
'\Sigma_{n} 0->12','\Sigma_{n} 0->15','\Sigma_{n} 0->18','normcdf(x)',...
'location','southeast');
grid on; box on;
xlabel('x'); ylabel('norm. cdf approximations');
Marcin's answer suggests a way to find the nearest sample point. It is easier, IMO, to interpolate. Given x and t as defined in the question,
interp1(x,t,n)
returns the estimated value of the CDF at x==n, for whatever value of n. But note that, for values outside the computed range, it will return NaN by default (and requesting extrapolation there would produce unreliable values anyway).
You can define an anonymous function that works like normcdf:
my_normcdf = @(n)interp1(x,t,n);
my_normcdf(-5)
Try replacing x with 0.01 when you call cumtrapz. You can either use a vector or a scalar spacing for cumtrapz (https://www.mathworks.com/help/matlab/ref/cumtrapz.html), and this might solve your problem. Also, have you checked the original x-values? Is the problem with linspace (i.e. you are not getting the correct x vector), or with cumtrapz?
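A minimal sketch of that suggestion (note that the exact spacing of linspace(-5,5,1000) is 10/999, not 0.01, so it is safer to compute the spacing from x):
x = linspace(-5,5,1000);
p = 1/sqrt(2*pi) * exp((-x.^2)/2);
dx = x(2) - x(1); % exact grid spacing (roughly 0.01)
t = cumtrapz(dx, p); % scalar-spacing form of cumtrapz
% t is still indexed 1:1000, so you still need interp1(x,t,q) or the nearest-index approach above to evaluate it at a query point q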

Two functions in Matlab to approximate integral - not enough input arguments?

I want to write a function that approximates integrals with the trapezoidal rule.
I first defined a function in one file:
function[y] = integrand(x)
y = x*exp(-x^2); %This will be integrand I want to approximate
end
Then I wrote my function that approximates definite integrals with lower bound a and upper bound b (also in another file):
function [result] = trapez(integrand,a,b,k)
sum = 0;
h = (b-a)/k; %split up the interval in equidistant spaces
for j = 1:k
x_j = a + j*h; %this are the points in the interval
sum = sum + ((x_j - x_(j-1))/2) * (integrand(x_(j-1)) + integrand(x_j));
end
result = sum
end
But when I want to call this function from the command window, using result = trapez(integrand,0,1,10) for example, I always get an error 'not enough input arguments'. I don't know what I'm doing wrong?
There are numerous issues with your code:
x_(j-1) is not defined, and is not really a valid Matlab syntax (assuming you want that to be a variable).
By calling trapez(integrand,0,1,10) you're actually calling the integrand function with no input arguments before trapez ever runs, which is what triggers the error. If you want to pass a handle, use @integrand instead. But in this case there's no need to pass it at all (a handle-based variant is sketched after the working version below).
You should avoid variable names that coincide with Matlab functions, such as sum. This can easily lead to issues which are difficult to debug, if you also try to use sum as a function.
Here's a working version (note also a better code style):
function res = trapez(a, b, k)
res = 0;
h = (b-a)/k; % split up the interval in equidistant spaces
for j = 1:k
x_j1 = a + (j-1)*h;
x_j = a + j*h; % this are the points in the interval
res = res+ ((x_j - x_j1)/2) * (integrand(x_j1) + integrand(x_j));
end
end
function y = integrand(x)
y = x*exp(-x^2); % This will be integrand I want to approximate
end
And the way to call it is: result = trapez(0, 1, 10);
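If you do want to keep passing the integrand (so the same routine can integrate different functions), the change is small: accept a function handle and call it inside the loop. A sketch, assuming it is called with @integrand:
function res = trapez(fun, a, b, k)
% fun is a function handle, e.g. @integrand or @(x) x.*exp(-x.^2)
res = 0;
h = (b-a)/k; % split up the interval in equidistant spaces
for j = 1:k
x_j1 = a + (j-1)*h;
x_j = a + j*h;
res = res + ((x_j - x_j1)/2) * (fun(x_j1) + fun(x_j));
end
end
Called as result = trapez(@integrand, 0, 1, 10); note the @, which passes the function itself rather than calling it.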
Your integrand function requires an input argument x, which you are not supplying in your command-line function call.

Storing and accessing a piecewise interpolant using a single handle

I have a dataset containing two vectors of points, X and Y that represents measurements of an "exponential-like" phenomenon (i.e. Y = A*exp(b*x)). When fitting it with an exponential equation I'm getting a nice-looking fit, but when I'm using it to compute things it turns out that the fit is not quite as accurate as I would hope.
Currently my most promising idea is a piecewise exponential fit (taking about 6 (x,y) pairs each time), which seems to provide better results in cases I tested manually. Here's a diagram to illustrate what I mean to do:
// Assuming a window of size WS=4:
- - - - - - - - - - - - //the entire X span of the data
[- - - -]- - // the fit that should be evaluated for X(1)<= x < X(floor(WS/2)+1)
-[- - - -]- // the fit that should be evaluated for X(3)<= x < X(4)
...
- - - - - -[- - - -]- - // X(8)<= x < X(9)
... //and so on
Some Considerations:
I considered filtering the data before fitting, but this is tricky since I don't really know anything about the type of noise it contains.
I would like the piecewise fit (including all different cases) to be accessible using a single function handle. It's very similar to MATLAB's Shape-preserving "PCHIP" interpolant, only that it should use an exponential function instead.
The creation of the fit does not need to happen during the runtime of another code. I even prefer to create it separately.
I'm not worried about the potential unsmoothness of the final function.
The only way of going about this I could think of is defining an .m file as explained in Fit a Curve Defined by a File, but that would require manually writing conditions for almost as many cases as I have points (or obviously write a code that generates this code for me, which is also a considerable effort).
Relevant code snippets:
clear variables; clc;
%% // Definitions
CONSTS.N_PARAMETERS_IN_MODEL = 2; %// For the model y = A*exp(B*x);
CONSTS.WINDOW_SIZE = 4;
assert(~mod(CONSTS.WINDOW_SIZE,2) && CONSTS.WINDOW_SIZE>0,...
'WINDOW_SIZE should be a natural even number.');
%% // Example Data
X = [0.002723682418630,0.002687088539567,0.002634005004610,0.002582978173834,...
0.002530684550171,0.002462144527884,0.002397219225698,0.002341097974950,...
0.002287544321171,0.002238889510803]';
Y = [0.178923076435990,0.213320004768074,0.263918364216839,0.324208349386613,...
0.394340431220509,0.511466688684182,0.671285738221314,0.843849959919278,...
1.070756756433334,1.292800046096531]';
assert(CONSTS.WINDOW_SIZE <= length(X),...
'WINDOW_SIZE cannot be larger than the amount of points.');
X = flipud(X); Y = flipud(Y); % ascending sorting is needed later for histc
%% // Initialization
nFits = length(X)-CONSTS.WINDOW_SIZE+1;
coeffMat(nFits,CONSTS.N_PARAMETERS_IN_MODEL) = 0; %// Preallocation
ft = fittype( 'exp1' );
opts = fitoptions( 'Method', 'NonlinearLeastSquares' );
%% // Populate coefficient matrix
for ind1 = 1:nFits
[xData, yData] = prepareCurveData(...
X(ind1:ind1+CONSTS.WINDOW_SIZE-1),Y(ind1:ind1+CONSTS.WINDOW_SIZE-1));
%// Fit model to data:
fitresult = fit( xData, yData, ft, opts );
%// Save coefficients:
coeffMat(ind1,:) = coeffvalues(fitresult);
end
clear ft opts ind1 xData yData fitresult ans
%% // Get the transition points
xTrans = [-inf; X(CONSTS.WINDOW_SIZE/2+1:end-CONSTS.WINDOW_SIZE/2); inf];
At this point, xTrans and coeffMat contain all the required information to evaluate the fits. To show this we can look at a vector of some test data:
testPts = [X(1), X(1)/2, mean(X(4:5)), X(CONSTS.WINDOW_SIZE)*1.01,2*X(end)];
...and finally the following should roughly happen internally within the handle:
%% // Get the correct fit# to be evaluated:
if ~isempty(xTrans) %// The case of nFits==1
rightFit = find(histc(testPts(3),xTrans));
else
rightFit = 1;
end
%% // Actually evaluate the right fit
f = @(x,A,B)A*exp(B*x);
yy = f(testPts(3),coeffMat(rightFit,1),coeffMat(rightFit,2));
And so my problem is how to hold that last bit (along with all the fit coefficients) inside a single handle, that accepts an arbitrarily-sized input of points to interpolate on?
Related resources:
stackoverflow.com/questions/16777921/matlab-curve-fitting-exponential-vs-linear/
It's not entirely clear to me what you need, but why not put things into a class?
classdef Piecewise < handle
methods
% Construction
function [this] = Piecewise(xmeas, ymeas)
... here compute xTrans and coeffMat...
end
% Interpolation
function [yinterp] = Evaluate(this, xinterp)
... Here use previously computed xTrans and coeffMat ...
end
end
properties(SetAccess=private, GetAccess=private)
xTrans;
coeffMat;
end
end
In this way you can precompute the xTrans vector and the coeffMat matrix during construction, and then later reuse these properties when you need to evaluate the interpolant at xinterp values in the Evaluate method.
% The real measured data
xmeas = ...
ymeas = ...
% Build piecewise interpolation object
piecewise = Piecewise(xmeas, ymeas);
% Rebuild curve at any new positions
xinterp = ...
yinterp = piecewise.Evaluate(xinterp);
Function like syntax
If you truly prefer to have function-handle like syntax, you can still internally use above Piecewise object and add extra static method to return it as a function handle:
classdef Piecewise < handle
... see code above...
methods(Static=true)
function [f] = MakeHandle(xmeas, ymeas)
%[
obj = Piecewise(xmeas, ymeas);
f = @(x)obj.Evaluate(x);
%]
end
end
end
This can be used like this:
f = Piecewise.MakeHandle(xmeas, ymeas);
yinterp = f(xinterp);
PS1: You can later make the Evaluate method and the Piecewise constructor private if you absolutely want to force this syntax.
PS2: To fully hide the object-oriented design, you can turn MakeHandle into a fully classic routine (it will work the same as the static method, and users won't have to type Piecewise. in front of MakeHandle).
A last solution without oop
Put everything in a single .m file :
function [f] = MakeHandle(xmeas, ymeas)
... Here compute xTrans and coeffMat ...
f = @(x)compute(x, xTrans, coeffMat); % Passing xTrans/coeffMat as captured (hidden) parameters
end
function [yinterp] = compute(xinterp, xTrans, coeffMat)
... do interpolation here...
end
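For completeness, here is a hedged sketch of what that compute routine might contain, reusing the segment-selection idea (histc over xTrans) and the A*exp(B*x) model from the question; looping over the query points is my own assumption, to support arbitrarily-sized inputs:
function [yinterp] = compute(xinterp, xTrans, coeffMat)
yinterp = zeros(size(xinterp));
for ii = 1:numel(xinterp)
% Find which piecewise segment the query point falls into
rightFit = find(histc(xinterp(ii), xTrans), 1, 'first');
% Evaluate the exponential model y = A*exp(B*x) of that segment
yinterp(ii) = coeffMat(rightFit,1) * exp(coeffMat(rightFit,2) * xinterp(ii));
end
end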
As an extension of CitizenInsane's answer, the following is an alternative approach that allows a "handle-y" access to the inner Evaluate function.
function b = subsref(this,s)
switch s(1).type
case '()'
xval = s.subs{:};
b = this.Evaluate(xval);
otherwise %// Fallback to the default behavior for '.' and '{}':
b = builtin('subsref',this,s);
end
end
References: docs1, docs2, docs3, question1
P.S. docs2 is referenced because I initially tried to do subsref@handle (which is calling the superclass method, as one would expect in OOP when overriding methods in a subclass), but this doesn't work in MATLAB, which instead requires builtin() to achieve the same functionality.
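Usage then looks like ordinary indexing (hypothetical example, assuming the subsref overload above has been added to the Piecewise class from the earlier answer):
pw = Piecewise(xmeas, ymeas); % xmeas/ymeas are the measured data vectors
yq = pw(0.0025); % dispatches to pw.Evaluate(0.0025) through the '()' case of subsref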

how to write this script in matlab

I've just started learning Matlab(a few days ago) and I have the following homework that I just don't know how to code it:
Write a script that creates a graphic using the positions of the roots of the polynomial p(z) = z^n - 1 in the complex plane, for various values of the natural number n (the nth roots of unity).
so I am assuming the function you are using is p(z) = (z^n) - 1 where n is some integer value.
you can then find the roots of this equation by simply plugging into the roots function. The array passed to roots contains the coefficients of the polynomial.
Example
f = 5x^2-2x-6 -> Coefficients are [5,-2,-6]
To get the roots, enter roots([5,-2,-6]). This will find all values of x at which the function equals 0.
so in your case you would enter funcRoots = roots([1,zeros(1,n-1),-1]);
You can then plot these values however you want, but a simple plot like plot(funcRoots) would likely suffice.
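For instance, for a single illustrative value n = 5, plotting the real part against the imaginary part shows the five 5th roots of unity evenly spaced on the unit circle:
n = 5;
funcRoots = roots([1, zeros(1,n-1), -1]); % coefficients of z^5 - 1
plot(real(funcRoots), imag(funcRoots), 'o', 'MarkerSize', 10)
axis equal; grid on
title('5th roots of unity')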
To do this for several values of n in a loop, use the following. Mind you, if different values of n share roots (for example z = 1), there will be some overlap, so you may not be able to see certain markers.
minN = 1;
maxN = 10;
nVals = minN:maxN;
figure; hold on;
colors = hsv(numel(nVals));
legendLabels = cell(1,numel(nVals));
iter = 1;
markers = {'x','o','s','+','d','^','v'};
for n = nVals
funcRoots = roots([1,zeros(1,n-1),-1]);
plot(real(funcRoots),imag(funcRoots),...
markers{mod(iter,numel(markers))+1},...
'Color',colors(iter,:),'MarkerSize',10,'LineWidth',2)
legendLabels{iter} = [num2str(n),'-order'];
iter = iter+1;
end
hold off;
xlabel('Real Value')
ylabel('Imaginary Value')
title('Unity Roots')
axis([-1,1,-1,1])
legend(legendLabels)