Computing and determining the convergence rate - matlab

Write MATLAB code to compute and determine the convergence rate of:
(exp(h) - (1 + h + 1/2*h^2))/h with h = 1/2, 1/2^2, ..., 1/2^10
My code was:
h0=(0.5)^i;
TOL=10^(-8);
N=10;
i=1;
flag=0;
table=zeros(30,1);
table(1)=h0
while i < N
    h = (exp(h0)-(1+h0+0.5*h0^2))/h0;
    table(i+1) = h;
    if abs(h-h0) < TOL
        flag = 1;
        break;
    end
    i = i+1;
    h0 = h;
end
if flag==1
    h
else
    error('failed');
end
The answer I get does not quite make sense. Please help.

The main problem is in your tolerance check
if abs(h-h0) < TOL
If your expression approaches its limit fast enough, h can become 0 while h0 is still larger than the tolerance. In that case abs(h-h0) is not below TOL, the criterion isn't met and the loop continues. On the next iteration h0 is 0 and h will be evaluated as NaN (since division by 0 is bad, here it is 0/0).
If you, like in this case, expect a convergence towards 0, you could instead check
if h > TOL
or you could also add a NaN-check
if abs(h-h0) < TOL || isnan(h)
Apart from that, there are a few other problems with your code.
First, you initialize h0 using i (which at that point is still MATLAB's imaginary unit); you probably intended to use i = 1, but that assignment comes further down in your code.
You use a while loop, but since you intend to increment i, a for loop would do just as well.
Using all three of the variables table, h and h0 is unnecessary. Make a single result vector, initialized with your starting value at the first index - see the example below.
TOL = 1e-8; % Tolerance
N = 10; % Max number of iterations
% Value vector
h = zeros(N+1,1);
% Init value
h(1) = (0.5)^1;
for k = 1:N
    h(k+1) = (exp(h(k)) - (1 + h(k) + 0.5*h(k)^2))/h(k);
    if isnan(h(k+1)) || abs(diff(h(k + [0 1]))) < TOL
        N = k;
        break
    end
end
% Crop vector
h = h(1:N);
% Display result
fprintf('Converged after %d iterations\n', N)
% Plot (with logarithmic y-axis)
semilogy(1:N, h,'*-')
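As a side note on the "determine the convergence rate" part of the question: since exp(h) = 1 + h + h^2/2 + h^3/6 + ..., the expression itself behaves like h^2/6 for small h. A minimal sketch (not part of the original answer) that evaluates it directly at h = 1/2, ..., 1/2^10 and estimates the observed order p from successive ratios:
hvals = 0.5.^(1:10)';                                  % h = 1/2, 1/2^2, ..., 1/2^10
E = (exp(hvals) - (1 + hvals + 0.5*hvals.^2))./hvals;  % expression values
% Observed order: assume E ~ C*h^p, so p = log(E2/E1)/log(h2/h1)
p = log(E(2:end)./E(1:end-1)) ./ log(hvals(2:end)./hvals(1:end-1));
disp([hvals(2:end) E(2:end) p])                        % p should approach 2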

Related

Matlab rref() function precision error after 12th column of hilbert matrices

My question may be a simple one, but I could not think of a logical explanation for it:
When I use
rref(hilb(8)), rref(hilb(9)), rref(hilb(10)), rref(hilb(11))
it gives me the result I expected, a unit matrix.
However, when it comes to
rref(hilb(12))
it does not give a nonsingular matrix as expected. I used Wolfram and it gives the unit matrix for the same case, so I am sure it should have given a unit matrix. There may be a round-off error or something like that, but then 1/11 or 1/7 also have some troublesome decimals,
so why does Matlab behave like this when it comes to 12?
It does indeed seem like a precision error. This makes sense, as the determinant of the Hilbert matrix of order n tends to 0 as n tends to infinity (see here). However, you can use rref with the tol parameter:
[R,jb] = rref(A,tol)
and take tol to be very small to get more precise results. For example, rref(hilb(12),1e-20)
will give you the identity matrix.
EDIT: more details regarding the role of the tol parameter.
The source code of rref is provided at the bottom of the answer. The tol is used when searching for the element of largest absolute value in the remaining part of a column, in order to find the pivot row.
% Find value and index of largest element in the remainder of column j.
[p,k] = max(abs(A(i:m,j))); k = k+i-1;
if (p <= tol)
   % The column is negligible, zero it out.
   A(i:m,j) = zeros(m-i+1,1);
   j = j + 1;
If all the elements are smaller than tol in absolute value, the relevant part of the column is filled with zeros. This seems to be where the precision error for rref(hilb(12)) occurs. By reducing tol we avoid this issue in rref(hilb(12),1e-20).
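For reference (a small sketch, not part of the original answer), the default tolerance that rref computes when none is passed, per the nargin < 2 line in the source below, can be evaluated directly:
A = hilb(12);
[m, n] = size(A);
default_tol = max(m,n)*eps(class(A))*norm(A,'inf')   % the tolerance rref(A) would use
% Pivots that fall below this value are treated as negligible and zeroed out,
% which is why the default call loses rank; a much smaller tolerance such as
% rref(A, 1e-20) keeps them.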
source code:
function [A,jb] = rref(A,tol)
%RREF Reduced row echelon form.
% R = RREF(A) produces the reduced row echelon form of A.
%
% [R,jb] = RREF(A) also returns a vector, jb, so that:
% r = length(jb) is this algorithm's idea of the rank of A,
% x(jb) are the bound variables in a linear system, Ax = b,
% A(:,jb) is a basis for the range of A,
% R(1:r,jb) is the r-by-r identity matrix.
%
% [R,jb] = RREF(A,TOL) uses the given tolerance in the rank tests.
%
% Roundoff errors may cause this algorithm to compute a different
% value for the rank than RANK, ORTH and NULL.
%
% Class support for input A:
% float: double, single
%
% See also RANK, ORTH, NULL, QR, SVD.
% Copyright 1984-2005 The MathWorks, Inc.
% $Revision: 5.9.4.3 $ $Date: 2006/01/18 21:58:54 $
[m,n] = size(A);
% Does it appear that elements of A are ratios of small integers?
[num, den] = rat(A);
rats = isequal(A,num./den);
% Compute the default tolerance if none was provided.
if (nargin < 2), tol = max(m,n)*eps(class(A))*norm(A,'inf'); end
% Loop over the entire matrix.
i = 1;
j = 1;
jb = [];
while (i <= m) && (j <= n)
   % Find value and index of largest element in the remainder of column j.
   [p,k] = max(abs(A(i:m,j))); k = k+i-1;
   if (p <= tol)
      % The column is negligible, zero it out.
      A(i:m,j) = zeros(m-i+1,1);
      j = j + 1;
   else
      % Remember column index
      jb = [jb j];
      % Swap i-th and k-th rows.
      A([i k],j:n) = A([k i],j:n);
      % Divide the pivot row by the pivot element.
      A(i,j:n) = A(i,j:n)/A(i,j);
      % Subtract multiples of the pivot row from all the other rows.
      for k = [1:i-1 i+1:m]
         A(k,j:n) = A(k,j:n) - A(k,j)*A(i,j:n);
      end
      i = i + 1;
      j = j + 1;
   end
end
% Return "rational" numbers if appropriate.
if rats
   [num,den] = rat(A);
   A = num./den;
end

Accuracy issues with multiplication of matrices in Matlab R2012b

I have implemented a script that does constrained optimization to solve for the optimal parameters of a Support Vector Machine model. I noticed that my script for some reason gives inaccurate results (although very close to the real values). For example, the typical situation is that the result of a calculation should be exactly 0, but instead it is something like
-1/18014398509481984 = -5.551115123125783e-17
This situation happens when I multiply matrices with vectors. What makes this even stranger is that if I do the multiplication by hand in the command window in Matlab, I get exactly 0 as the result.
Let me give an example: if I take the vectors Aq = [-1 -1 1 1] and x = [12/65 28/65 32/65 8/65]', their multiplication gives exactly 0 when I do it in the command window, as you can see in the picture below:
If, on the other hand, I do this in my function script, the result is not 0 but rather the value -1/18014398509481984.
Here is the part of my script that is responsible for this multiplication (I've added the Aq and x into the script to show the contents of Aq and x as well):
disp('DOT PRODUCT OF ACTIVE SET AND NEW POINT: ')
Aq
x
Aq*x
Here is the result of the code above when run:
As you can see, the value isn't exactly 0 even though it really should be. Note that this problem doesn't occur for all possible values of Aq and x. If Aq = [-1 -1 1 1] and x = [4/13 4/13 4/13 4/13]', the result is exactly 0, as you can see below:
What is causing this inaccuracy? How can I fix this?
P.S. I didn't include my whole code because it's not very well documented and few hundred lines long, but I will if requested.
Thank you!
UPDATE: new test, by using Ander Biguri's advice:
UPDATE 2: THE CODE
function [weights, alphas, iters] = solveSVM(data, labels, C, e)
% FUNCTION [weights, alphas, iters] = solveSVM(data, labels, C, e)
%
% AUTHOR: jjepsuomi
%
% VERSION: 1.0
%
% DESCRIPTION:
% - This function will attempt to solve the optimal weights for a Support
% Vector Machines (SVM) model using active set method with gradient
% projection.
%
% INPUTS:
% "data" a n-by-m data matrix. The number of rows 'n' corresponds to the
% number of data points and the number of columns 'm' corresponds to the
% number of variables.
% "labels" a 1-by-n row vector of data labels from the set {-1,1}.
% "C" Box costraint upper limit. This will constrain the values of 'alphas'
% to the range 0 <= alphas <= C. If hard-margin SVM model is required set
% C=Inf.
% "e" a real value corresponding to the convergence criterion, that is if
% solution Xi and Xi-1 are within distance 'e' from each other stop the
% learning process, i.e. IF |F(Xi)-F(Xi-1)| < e ==> stop learning process.
%
% OUTPUTS:
% "weights" a vector corresponding to the optimal decision line parameters.
% "alphas" a vector of alpha-values corresponding to the optimal solution
% of the dual optimization problem of SVM.
% "iters" number of iterations until learning stopped.
%
% EXAMPLE USAGE 1:
%
% 'Hard-margin SVM':
%
% data = [0 0;2 2;2 0;3 0];
% labels = [-1 -1 1 1];
% [weights, alphas, iters] = solveSVM(data, labels, Inf, 10^-100)
%
% EXAMPLE USAGE 2:
%
% 'Soft-margin SVM':
%
% data = [0 0;2 2;2 0;3 0];
% labels = [-1 -1 1 1];
% [weights, alphas, iters] = solveSVM(data, labels, 0.8, 10^-100)
% STEP 1: INITIALIZATION OF THE PROBLEM
format long
% Calculate linear kernel matrix
L = kron(labels', labels);
K = data*data';
% Hessian matrix
Qd = L.*K;
% The minimization function
L = @(a) (1/2)*a'*Qd*a - ones(1, length(a))*a;
% Gradient of the minimizable function
gL = @(a) a'*Qd - ones(1, length(a));
% STEP 2: THE LEARNING PROCESS, ACTIVE SET WITH GRADIENT PROJECTION
% Initial feasible solution (required by gradient projection)
x = zeros(length(labels), 1);
iters = 1;
optfound = 0;
while optfound == 0 % criterion met
% Negative of the gradient at initial solution
g = -gL(x);
% Set the active set and projection matrix
Aq = labels; % In plane y^Tx = 0
P = eye(length(x))-Aq'*inv(Aq*Aq')*Aq; % In plane projection
% Values smaller than 'eps' are changed into 0
P(find(abs(P-0) < eps)) = 0;
d = P*g'; % Projection onto plane
if ~isempty(find(x==0 | x==C)) % Constraints active?
acinds = find(x==0 | x==C);
for i = 1:length(acinds)
if (x(acinds(i)) == 0 && d(acinds(i)) < 0) || x(acinds(i)) == C && d(acinds(i)) > 0
% Make the constraint vector
constr = zeros(1,length(x));
constr(acinds(i)) = 1;
Aq = [Aq; constr];
end
end
% Update the projection matrix
P = eye(length(x))-Aq'*inv(Aq*Aq')*Aq; % In plane / box projection
% Values smaller than 'eps' are changed into 0
P(find(abs(P-0) < eps)) = 0;
d = P*g'; % Projection onto plane / border
end
%%%% DISPLAY INFORMATION, THIS PART IS NOT NECESSARY, ONLY FOR DEBUGGING
if Aq*x ~= 0
disp('ACTIVE SET CONSTRAINTS Aq :')
Aq
disp('CURRENT SOLUTION x :')
x
disp('MULTIPLICATION OF Aq and x')
Aq*x
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Values smaller than 'eps' are changed into 0
d(find(abs(d-0) < eps)) = 0;
if ~isempty(find(d~=0)) && rank(P) < length(x) % Line search for optimal lambda
lopt = ((g*d)/(d'*Qd*d));
lmax = inf;
for i = 1:length(x)
if d(i) < 0 && -x(i) ~= 0 && -x(i)/d(i) <= lmax
lmax = -x(i)/d(i);
elseif d(i) > 0 && (C-x(i))/d(i) <= lmax
lmax = (C-x(i))/d(i);
end
end
lambda = max(0, min([lopt, lmax]));
if abs(lambda) < eps
lambda = 0;
end
xo = x;
x = x + lambda*d;
iters = iters + 1;
end
% Check whether search direction is 0-vector or 'e'-criterion met.
if isempty(find(d~=0)) || abs(L(x)-L(xo)) < e
optfound = 1;
end
end
%%% STEP 3: GET THE WEIGHTS
alphas = x;
w = zeros(1, length(data(1,:)));
for i = 1:size(data,1)
w = w + labels(i)*alphas(i)*data(i,:);
end
svinds = find(alphas>0);
svind = svinds(1);
b = 1/labels(svind) - w*data(svind, :)';
%%% STEP 4: OPTIMALITY CHECK, KKT conditions. See KKT-conditions for reference.
weights = [b; w'];
datadim = length(data(1,:));
Q = [zeros(1,datadim+1); zeros(datadim, 1), eye(datadim)];
A = [ones(size(data,1), 1), data];
for i = 1:length(labels)
A(i,:) = A(i,:)*labels(i);
end
LagDuG = Q*weights - A'*alphas;
Ac = A*weights - ones(length(labels),1);
alpA = alphas.*Ac;
LagDuG(any(abs(LagDuG-0) < 10^-14)) = 0;
if ~any(alphas < 0) && all(LagDuG == zeros(datadim+1,1)) && all(abs(Ac) >= 0) && all(abs(alpA) < 10^-6)
disp('Optimal found, Karush-Kuhn-Tucker conditions satisfied.')
else
disp('Optimal not found, Karush-Kuhn-Tucker conditions not satisfied.')
end
% VISUALIZATION FOR 2D-CASE
if size(data, 2) == 2
pinds = find(labels > 0);
ninds = find(labels < 0);
plot(data(pinds, 1), data(pinds, 2), 'o', 'MarkerFaceColor', 'red', 'MarkerEdgeColor', 'black')
hold on
plot(data(ninds, 1), data(ninds, 2), 'o', 'MarkerFaceColor', 'blue', 'MarkerEdgeColor', 'black')
Xb = min(data(:,1))-1;
Xe = max(data(:,1))+1;
Yb = -(b+w(1)*Xb)/w(2);
Ye = -(b+w(1)*Xe)/w(2);
lineh = plot([Xb Xe], [Yb Ye], 'LineWidth', 2);
supvh = plot(data(find(alphas~=0), 1), data(find(alphas~=0), 2), 'g.');
legend([lineh, supvh], 'Decision boundary', 'Support vectors');
hold off
end
NOTE:
If you run the EXAMPLE 1, you should get an output starting with the following:
As you can see, the multiplication of Aq and x doesn't produce the value 0, even though it should. This is not a big problem in this particular example, but if I have more data points with lots of decimals in them, this inaccuracy becomes a bigger and bigger problem, because the calculations are not exact. This is bad, for example, when I'm searching for a new direction vector while moving towards the optimal solution in the gradient projection method: the search direction isn't exactly the correct direction, only close to it. This is why I want the exactly correct values... is this possible?
I wonder if the decimals in the data points have something to do with the accuracy of my results. See the picture below:
So the question is: is this caused by the data, or is there something wrong in the optimization procedure?
Do you use the format function inside your script? It looks like you used format rat somewhere.
You can always use Matlab's eps function, which returns the precision used inside Matlab. The absolute value of -1/18014398509481984 is smaller than this, according to my Matlab R2014b:
format long
a = abs(-1/18014398509481984)
b = eps
a < b
This basically means that the result is zero (Matlab stopped refining the calculation because, according to the eps value, the result was already fine).
Otherwise you can just use format long inside your script before the calculation.
Edit
I see the inv function inside your code; try replacing it with the \ operator (mldivide). The results will be more accurate, as it uses Gaussian elimination without forming the inverse.
The inv documentation states:
In practice, it is seldom necessary to form the explicit inverse of a
matrix. A frequent misuse of inv arises when solving the system of
linear equations Ax = b. One way to solve this is with x = inv(A)*b. A
better way, from both an execution time and numerical accuracy
standpoint, is to use the matrix division operator x = A\b. This
produces the solution using Gaussian elimination, without forming the
inverse.
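For instance, the projection-matrix lines in the posted function could be rewritten without inv (a sketch of that substitution, mathematically equivalent to the original expression):
% same projection matrix, solving (Aq*Aq')\Aq instead of forming inv(Aq*Aq')
P = eye(length(x)) - Aq'*((Aq*Aq')\Aq);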
With the provided code, this is how I tested:
I added a break-point on the following code:
if Aq*x ~= 0
disp('ACTIVE SET CONSTRAINTS Aq :')
Aq
disp('CURRENT SOLUTION x :')
x
disp('MULTIPLICATION OF Aq and x')
Aq*x
end
When the if branch was taken, I typed at console:
K>> format rat; disp(x);
12/65
28/65
32/65
8/65
K>> disp(x == [12/65; 28/65; 32/65; 8/65]);
0
1
0
0
K>> format('long'); disp(max(abs(x - [12/65; 28/65; 32/65; 8/65])));
1.387778780781446e-17
K>> disp(eps(8/65));
1.387778780781446e-17
This suggests that this is a display problem: format rat deliberately uses small integers to express the value, at the expense of precision. Apparently, the true value of x(4) is the double-precision value adjacent to 8/65, one ulp (eps(8/65)) away from it.
So, this begs the question: are you sure that numeric convergence depends on flipping the least significant bit in a double precision value?

Finding optimal weight factor for SOR

I am using the SOR method and need to find the optimal weight factor. I think a good way to go about this is to run my SOR code with a number of omegas between 0 and 2, then store the number of iterations for each of these. Then I can see which count is the lowest and which omega it corresponds to. Being a novice programmer, however, I am unsure how to go about this.
Here is my SOR code:
function [x, l] = SORtest(A, b, x0, TOL, w)
[m, n] = size(A);   % number of rows and columns of A
l = 0;              % iteration counter
x = [0;0;0];        % solution vector
max_iter = 200;
while (l < max_iter)              % loop until max # of iterations
    l = l + 1;                    % increment counter
    for i = 1:m                   % loop through rows of A
        sum1 = 0; sum2 = 0;       % introducing sum1 and sum2
        for j = 1:i-1             % columns already updated in this sweep
            sum1 = sum1 + A(i,j)*x(j);   % sum using the new values in x
        end
        for j = i+1:n             % remaining columns
            sum2 = sum2 + A(i,j)*x0(j);  % sum using the previous iterate x0
        end
        x(i) = (1-w)*x0(i) + w*(-sum1-sum2+b(i))/A(i,i); % SOR update for element i
    end
    if abs(norm(x) - norm(x0)) < TOL   % checking tolerance
        break
    end
    x0 = x;   % store the current solution in x0 before the next sweep
end
end
That's pretty easy to do. Simply loop over values of w and record the total number of iterations needed at each w. After each call, check whether this is the new minimum number of iterations required to reach a solution; if it is, update the minimum and keep that solution. Once you have gone over all w, the result is the solution vector that needed the fewest iterations to converge. Bear in mind that SOR requires 0 < w < 2 (neither endpoint is allowed), so we can't include 0 or 2 in the range. As such, do something like this:
omega_vec = 0.01:0.01:1.99;
final_x = x0;
min_iter = intmax;
for w = omega_vec
    [x, iter] = SORtest(A, b, x0, TOL, w);
    if iter < min_iter
        min_iter = iter;
        final_x = x;
    end
end
The loop checks whether the total number of iterations at each w is less than the current minimum. If it is, it records this count and the corresponding solution vector. The solution that needed the fewest iterations over all w ends up stored in final_x.
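If you also want to see how the iteration count varies with omega, a small sketch along the same lines (assuming A, b, x0 and TOL are already defined for your system) records the count for every omega and plots it, which makes the optimum easy to spot:
iter_counts = zeros(size(omega_vec));
for k = 1:numel(omega_vec)
    [~, iter_counts(k)] = SORtest(A, b, x0, TOL, omega_vec(k));
end
plot(omega_vec, iter_counts, '.-')
xlabel('omega'); ylabel('iterations to converge');
[best_iter, idx] = min(iter_counts);
fprintf('Best omega = %.2f with %d iterations\n', omega_vec(idx), best_iter);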

Infinite while loop in Matlab, counter doesn't exit loop

I'm writing a function for the Gauss-Seidel method of solving a linear system of equations Ax = b, x being the unknown we are looking for.
I am having a problem with the while loop in my function; it seems to run infinitely. I can't figure out why.
This is my function for creating the coefficient matrix A and the column vectors x and b, all with the same number of rows of course. No problem with this one.
function [A, b, x0] = test_system(n)
u = ones(n, 1);
A = spdiags([u 4*u u], [-1 0 1], n, n);
b = zeros(n, 1);
b(1) = 3;
b(2 : 2 : end-2) = -2;
b(3 : 2 : end-1) = 2;
b(end) = -3;
x0 = ones(n, 1);
This is my function for solving the system. I have included all of it just in case, but I believe the real problem is in the while loop at the very end, which runs infinitely when I execute the function. The counter doesn't break out of it either. I can't really see what the problem is. Any clues?
Be gentle, I'm new at Matlab :)
function [x] = GaussSeidel(A,b,x0,tol)
% implementation of the GaussSeidel iterative method
% for solving a linear system of equations Ax = b
%INPUTS:
% A: coefficient matrix
% b: column vector of constants
% x0: setup for the unknown vector (using vector of ones)
% tol: result must be within 'tol' of correct answer.
%OUTPUTS:
% x: unknown
%check that A is a matrix
if ~(ismatrix(A))
error('A is not a matrix');
end
%check that A is square
[m,n] = size(A);
if m ~= n
error('Matrix A is not square');
end
%check that b is a column vector
if ~(iscolumn(b))
error('b is not a column vector');
end
%check that x0 is a column vector
if ~(iscolumn(x0))
error('x0 is not a column vector');
end
%check that A, b and x0 agree in size
[rowA,colA] = size(A);
[rowb,colb] = size(b);
[rowx0,colx0] = size(x0);
if ~isequal(colA,rowb)||~isequal(rowb,rowx0)
error('matrix dimensions of A, b and xo do not agree');
end
%check that A and b have real entries
if ~isreal(A) || ~isreal(b)
error('matrix A or vector b do not have real entries');
end
%check that the provided tolerance is positive
if tol <= 0
error('tolerance must be positive');
end
%check that A is strictly diagonally dominant
absoluteA = abs(A);
row_sum=sum(absoluteA,2);
diagonal=diag(absoluteA);
if ~all(2*diagonal > row_sum)
warning('matrix A is not strictly diagonally dominant');
end
L = tril(A,-1);
U = triu(A,+1);
D = diag(diag(A));
x = x0;
M1 = inv(D).*L;
M2 = inv(D).*U;
M3 = D\b;
k = 0; %iterations counter
disp(size(M1));
disp(size(M2));
disp(size(M3));
disp(size(x));
while (norm(A*x - b) > tol)
    for i = 1:n
        x(i) = - M1(i,:).*x - M2(i,:).*x + M3(i,:);
    end
    k = k + 1;
    if(k >= 10e4)
        error('too many iterations carried out');
    end
end
end %end function
Coding Discussion
The source of the coding error is the use of element-wise operations instead of matrix-matrix and matrix-vector operations.
These operations produce zero-matrices, so no progress is made.
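For example (a small illustration, not from the original answer), with a tridiagonal matrix like the one built by test_system:
u = ones(4,1);
A = full(spdiags([u 4*u u], [-1 0 1], 4, 4));
D = diag(diag(A));  L = tril(A,-1);
inv(D).*L   % all zeros: a diagonal matrix and a strictly lower triangular
            % matrix have no overlapping nonzeros element-wise
inv(D)*L    % the intended matrix product, which is not zero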
The M matrices should be defined by
M1 = inv(D)*L; % Note that D\L is more efficient
M2 = inv(D)*U; % Note that D\U is more efficient
M3 = D\b;
and the iterator performing the update should be
x(i) = - M1(i,:)*x - M2(i,:)*x + M3(i,:);
Method Discussion
I also think it is worth mentioning that the code is currently implementing the Jacobi method, since the updater has the form
x_new = -M1*x_old - M2*x_old + M3
while a Gauss-Seidel update has the form
x_new = -M1*x_new - M2*x_old + M3
that is, Gauss-Seidel already uses the components updated in the current sweep for the lower-triangular part.
I don't have 50 reputation, so I cannot post this as a comment.
The line if(k >= 10e4) does not do what you think it does: 10e4 is 100,000, whereas 1e4 is 10,000. That is why your counter appears not to work; Matlab is still running because the loop runs ten times longer than you expect. I also ran into the same problem knedlsepp already pointed out.
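A quick check at the command line makes the difference obvious:
10e4   % ans = 100000  (10 * 10^4)
1e4    % ans = 10000   (1 * 10^4)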

how to output all the iteration results from a FOR loop into matrix and plot the graph

I have two nested for loops. The second loop calculates the final equation. The display of the result is outside the second loop so that it is shown once the second loop has completed.
Below is the logic I used in MATLAB. I need to plot a graph of eqn2 vs x.
L=100
for x=1:10
eqn1
for y=1:L
eqn2
end
value = num2strn eqn2
disp value
end
Currently the problem I am facing is that value, the output of eqn2, is overwritten on each cycle until x reaches 10. Hence the workspace only shows the last value of eqn2 and value. My intention is to record the output value for every cycle of x from 1 to 10.
How can I do this?
You pseudo-coded a little too strongly for my taste - I have tried to reconstruct what you were trying to do. If I understood correctly, this should address your question (store the intermediate results of the calculation in the array z):
L = 100;
z = zeros(L, 10);
for x = 1:10
    % perform some calculations
    eqn1
    for y = 1:L
        % perform some more calculations; it is not clear whether the result of
        % this loop over y=1:L yields one value, or L. I am going to assume L values
        z(y, x) = eqn2(x, y)
    end
    value = num2strn eqn2
    disp value
end
% now you have the value of each evaluation of the innermost loop available. You can plot it as follows:
figure;
plot(x, z);   % multiple plots with a common x parameter; may need to use z' (transpose)...
title 'this is my plot';
xlabel 'this is the x axis';
ylabel 'this is the y axis';
As for the other questions you asked in your comments, you could probably find inspiration in the following:
L = 100;
nx = 20; ny = 99;          % I am choosing how many x and y values to test
Z = zeros(ny, nx);         % allocate space for the results
x = linspace(0, 10, nx);   % x and y don't need to be integers
y = linspace(1, L, ny);
myFlag = 0;                % flag can be used for breaking out of both loops
for xi = 1:nx              % xi and yi are integers
    for yi = 1:ny
        % evaluate "some function" of x(xi) and y(yi)
        % note that these are not constrained to be integers
        Z(yi, xi) = (x(xi)-4).^2 + 3*(y(yi)-5).^2 + 2;
        % the break condition you were asking for
        if Z(yi, xi) < 5
            fprintf(1, 'Z less than 5 with x=%.1f and y=%.1f\n', x(xi), y(yi));
            myFlag = 1;    % set flag so we break out of both loops
            break
        end
    end
    if myFlag==1, break; end   % break out of the outer loop as well
end
This may not be what you had in mind - I cannot understand "run the loop until all the values of z(y,x) < 5 and then it should output x". If you run the outer loop to completion (the only way to know "all the values of z(y,x)"), then your value of x will simply be the last value it had... This is why I was suggesting running through all values of x and y, collecting the whole matrix Z, and then examining Z for the things you want.
For example, if you wonder whether there is a value of x for which all Z < 5, you could do this (provided you didn't break out of the for loops):
highestZ = max(Z, [], 1);   % the highest value of Z along dimension 1 (down each column)
fprintf(1, 'Z is always < 5 for x = %g\n', x(highestZ < 5));
etc.
If you can't figure it out from here, I give up...