Image corruption in MATLAB parfor loop

I have some code for an adaptive median filter (see below) which works perfectly until I attempt to run it as a parallel loop, at which point values go missing or get corrupted and the image I get at the end is incorrect (see http://i.stack.imgur.com/Rt6dV.jpg).
MATLAB doesn't throw any errors, but it appears to be either dropping or overwriting random data. The only thing I could find that even vaguely matched this problem was information on custom type definitions and set functions, but all of my input images are uint8, so I don't have any custom set functions or type definitions. (http://fluffynukeit.com/problems-with-matlab-parfor-data-disappearing/)
The error occurs specifically between lines 63 and 86, when I attempt to change
for Index = 1:length(ss)
to
parfor Index = 1:length(ss)
and I can't for the life of me figure out why.
I usually call the function with an image, I, in uint8 format (although any format should work). w_size is an odd number giving the size of the filtering window, and M can be set from 0 to 2, where lower values reduce the number of pixels considered in the median filtering by increasing the weight of the standard deviation of the current block. A typical function call looks like this:
I = imread('image.tif');
J = Par_AMF(I, 5, 1);
figure;
imshow(J)
I should finish by saying that, for the image sizes I typically work with, the parallel processing is a huge time saver, usually cutting the runtime roughly in half compared with the non-parallel execution, so although there isn't much point for smaller images, the ones I'm using would really benefit from it.
function [FIm] = Par_AMF(Im, w_size, M)
% PAR_AMF Adaptive median filter
% Performs a local adaptive median filtering operation
% Inputs: Im - Image to be filtered
% w_size - Window size for filter
% M - Sigma weighting, range from 0 to 2
%
% Output: FIm - Filtered image
tic;
sz_oldim = size(Im);
w_option = floor(w_size/2);
% ----------------------------------------------------------------------- %
% Padding edges of image matrix
Pad_im = padarray(Im,[w_option, w_option],0,'both');
% ----------------------------------------------------------------------- %
% Stage 1
% Calculate the local Mean and Standard Deviation for finding Speckle
% Pixels
% ----------------------------------------------------------------------- %
sz = size(Pad_im);
start_index = w_option+1;
end_index1=(sz(1)-w_option);
end_index2=(sz(2)-w_option);
LB = zeros(sz_oldim);
UB = LB;
first_loop1 = start_index:end_index1;
first_loop2 = start_index:end_index2;
ss = cell(numel(LB), 1);
Index = 1;
for c = first_loop2
    for r = first_loop1
        ss{Index} = Pad_im(r-w_option:r+w_option,c-w_option:c+w_option);
        Index = Index+1;
    end
end
% This parfor functions as expected:
parfor Index = 1:length(ss)
    ss1=ss{Index}(:);
    if any(ss1 > 0)
        local_mn = mean(ss1);
        local_sig = std(double([ss1;local_mn]));
        LB(Index) = local_mn - (M*local_sig);
        UB(Index) = local_mn + (M*local_sig);
    end
end
% ----------------------------------------------------------------------- %
% UB is the Upper Bound Limit; LB is the Lower Bound Limit
% ----------------------------------------------------------------------- %
% ----------------------------------------------------------------------- %
% Using UB and LB, mark pixels as speckle or valid pixels and update only
% if the central pixel of the window is a speckle pixel, with the median
% value calculated again using only the speckle pixels within the window
% ----------------------------------------------------------------------- %
TempOut = Im;
% When this for loop is changed to a parfor (which should definitely work)
% the output is corrupted horribly:
% parfor Index = 1:length(ss)
for Index = 1:length(ss)
    ss1 = ss{Index}(:);
    ss2 = ss1;
    if any(ss1>0)
        zz = 0;
        for z = 1:length(ss1)
            if (ss1(z) < LB(Index) || ss1(z) > UB(Index))
                ss1(z) = 0;
                ss2(z) = 0;
            else
                ss1(z) = 1;
                zz = zz+1;
            end
        end
        if zz > 0
            new_ss1 = ss2(ss2~=0);
            if (ss1(ceil(length(ss1)/2)) == 0)
                md = median(new_ss1);
                TempOut(Index) = md;
            end
        end
    end
end
looptime = toc;
disp(['Total processing time ', num2str(looptime./60), ' minutes'])
FIm = TempOut;
end
EDIT: I'm using MATLAB version R2015a.
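One way to narrow this down (a diagnostic sketch only, assuming the two variants are saved as separate functions with the hypothetical names Par_AMF_serial and Par_AMF_parallel) is to run the filter once with the second loop as a for and once as a parfor, and compare the outputs pixel by pixel:
I = imread('image.tif');
J_serial = Par_AMF_serial(I, 5, 1);   % second loop left as a plain for
J_par    = Par_AMF_parallel(I, 5, 1); % second loop changed to parfor
diffmask = (J_serial ~= J_par);       % pixels that differ between the two runs
fprintf('%d of %d pixels differ\n', nnz(diffmask), numel(diffmask));
figure; imshow(diffmask)              % shows where the parfor run diverges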

Related

Directional artifacts in MATLAB randn arrays?

I'm generating 3d fractal noise in MATLAB using a variety of methods. It's working relatively well, but I'm having an issue where I see vertical striping artifacts in my noise. This happens regardless of what data type or resolution I use.
Edit: I figured it out. The solution is posted as an answer below. Thanks everyone for your thoughts and guidance!
expo = 2^6;
dims = [expo,expo,expo];
beta = -4.5;
render = randnd(beta, dims); % Create volumetric fractal
render = render - min(render); % Set floor to zero
render = render ./ max(render); % Set ceiling to one
%render = imbinarize(render); % BW Threshold option
render = render .* 255; % For greyscale
slicer = 1; % Turn on image slicer/saver
i = 0; % Page counter
format = '.png';
imagename = '___testDump/slice';
imshow(render(:,:,1),[0 255]); %Single test image
if slicer == 1
    for c = 1:length(render)
        i = i+1;
        pagenumber = num2str(i);
        filename = [imagename, pagenumber, format];
        imwrite(uint8(render(:,:,i)),filename)
    end
end
function X = randnd(beta,varargin)
seed = 999;
rng(seed); % Set seed
%% X = randnd(beta,varargin)
% Based on similar functions by Jon Yearsley and Hristo Zhivomirov
% Written by Marcin Konowalczyk
% Timmel Group # Oxford University
%% Parse the input
narginchk(0,Inf); nargoutchk(0,1);
if nargin < 2 || isempty(beta); beta = 0; end % Default to white noise
assert(isnumeric(beta) && isequal(size(beta),[1 1]),'''beta'' must be a number');
assert(-6 <= beta && beta <= 6,'''beta'' out of range'); % Put on reasonable bounds
%% Generate N-dimensional white noise with 'randn'
X = randn(varargin{:});
if isempty(X); return; end; % Usually happens when size vector contains zeros
% Squeeze prevents an error if X has more than one leading singleton dimension
% This is a slight deviation from the pure functionality of 'randn'
X = squeeze(X);
% Return if white noise is requested
if beta == 0; return; end;
%% Generate corresponding N-dimensional matrix of multipliers
N = size(X);
% Create matrix of multipliers (M) of X in the frequency domain
M = [];
for j = 1:length(N)
    n = N(j);
    if (rem(n,2)~=0) % if n is odd
        % Nyquist frequency bin does not show up in odd-numbered fft
        k = ifftshift(-(n-1)/2:(n-1)/2);
    else
        k = ifftshift(-n/2:n/2-1);
    end
    % Spectral multipliers
    m = (k.^2)';
    if isempty(M)
        M = m;
    else
        % Create the permutation vector
        M_perm = circshift(1:length(size(M))+1,[0 1]);
        % Permute a singleton dimension to the beginning of M
        M = permute(M,M_perm);
        % Add m along the first dimension of M
        M = bsxfun(@plus,M,m);
    end
end
% Reverse M to match X (since new dimensions were being added from the left)
M = permute(M,length(size(M)):-1:1);
assert(isequal(size(M),size(X)),'Bad programming error'); % This should never occur
% Shape the amplitude multipliers by beta/4 which corresponds to shaping the power by beta
M = M.^(beta/4);
% Set the DC component to zero
M(1,1) = 0;
%% Multiply X by M in frequency domain
Xstd = std(X(:));
Xmean = mean(X(:));
X = real(ifftn(fftn(X).*M));
% Force zero mean unity standard deviation
X = X - mean(X(:));
X = X./std(X(:));
% Restore the standard deviation and mean from before the spectral shaping.
% This ensures the random sample from randn is truly random. After all, if
% the mean was always exactly zero it would not be all that random.
X = X + Xmean;
X = X.*Xstd;
end
Here is my solution:
My "min/max" code (lines 6 and 7) was bad. I wanted to divide all values in the matrix by the single largest value in the matrix so that all values would be between 0 and 1. Because I used max() improperly, I was stepping through the max value of each column and using that as my divisor; thus the vertical stripes.
In the end this is what my code looks like. X is the 3 dimensional matrix:
minVal = min(X,[],'all'); % Get the lowest value in the entire matrix
X = X - minVal; % Set min value to zero
maxVal = max(X,[],'all'); % Get the highest value in the entire matrix
X = X ./ maxVal; % Set max value to one
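A small portability note (not from the original answer): the 'all' option for min and max only exists in newer MATLAB releases (R2018b onward, as far as I know). On older versions the same whole-matrix normalisation can be written by flattening the matrix first:
minVal = min(X(:));  % lowest value in the entire matrix
X = X - minVal;      % set min value to zero
maxVal = max(X(:));  % highest value in the entire matrix
X = X ./ maxVal;     % set max value to one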

Creating a heatmap of the logistic map for different values of lambda in matlab

So I am trying to create a heatmap of the logistic map for lambda = 2.5 up to lambda = 4, showing that some outcomes are more common than others, as part of my thesis. However, so far I haven't got very far: I can plot the logistic map, but the heatmap part is a bit of a hassle and I can't find out how to do it. So, how do I create a heatmap using the code that I have?
% Logistics Map
% Classic chaos example. Plots semi-stable values of
% x(n+1) = r*x(n)*(1-x(n)) as r increases to 4.
%
clear
scale = 1000; % determines the level of rounding
maxpoints = 200; % determines maximum values to plot
N = 3000; % number of "r" values to simulate
a = 2.5; % starting value of "r"
b = 4; % final value of "r"... anything higher diverges.
rs = linspace(a,b,N); % vector of "r" values
M = 500; % number of iterations of logistics equation
% Loop through the "r" values
for j = 1:length(rs)
    r = rs(j);      % get current "r"
    x = zeros(M,1); % allocate memory
    x(1) = 0.5;     % initial condition (can be anything from 0 to 1)
    for i = 2:M     % iterate
        x(i) = r*x(i-1)*(1-x(i-1));
    end
    % only save those unique, semi-stable values
    out{j} = unique(round(scale*x(end-maxpoints:end)));
end
% Rearrange cell array into a large n-by-2 vector for plotting
data = [];
for k = 1:length(rs)
    n = length(out{k});
    data = [data; rs(k)*ones(n,1), out{k}];
end
% Plot the data
figure(97);clf
h=plot(data(:,1),data(:,2)/scale,'b.');
set(h,'markersize',0.25)
ylim([0 1])
set(gcf,'color','w')
Thanks a lot in advance!
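One possible approach (a sketch only, not a tested answer): bin the (r, x) pairs already collected in data into a 2-D histogram with histcounts2 and display the counts with imagesc. Note that the unique() call above discards how often each value occurs, so for a true density map you may want to keep the raw iterates instead of the unique ones.
edgesR = linspace(a, b, 500);            % bins along the r axis
edgesX = linspace(0, 1, 500);            % bins along the x axis
counts = histcounts2(data(:,1), data(:,2)/scale, edgesR, edgesX);
figure(98); clf
imagesc(edgesR, edgesX, log1p(counts')); % log scale keeps rare outcomes visible
axis xy                                  % put x = 0 at the bottom of the plot
xlabel('r'); ylabel('x'); colorbar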

Unable to get contrast of image as described by its formula

I was trying to get the contrast of an image using a formula, but the contrast value never exceeds 255. Likewise, whenever I perform some operation on my image matrix, the element values also never exceed 255. I tried converting the image matrix to double, but then the element values changed and no longer equal the original pixel values.
clc;
clear all;
close all;
h = imread('C:\Users\LAXMIDHAR\Desktop\My proj files\abc.jpg');
g = rgb2gray(h);
% f = im2double(g);
[M,N] = size(g);
%
% for i=1:M
% for j=1:N
% f(i,j) = f(i,j).*((i-j).^2);
% end
% end
%
% s = sum(sum(f));
s = 0;
for i = 1:M
    for j = 1:N
        s = s + (g(i,j).*((i-j).^2));
    end
end
% s is the contrast of image
s is expected to be large, but it's not exceeding 255. This is the contrast formula (as implemented above): s = sum over i and j of (i - j)^2 * g(i, j).
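The cap at 255 is consistent with uint8 saturation: g is uint8, so both g(i,j).*((i-j).^2) and the running sum s get clamped to the uint8 range. A minimal sketch of the usual fix (a suggestion, not part of the original post) is to cast each pixel to double before accumulating:
s = 0;
for i = 1:M
    for j = 1:N
        s = s + double(g(i,j)) * (i - j)^2;  % accumulate in double, no 255 clamp
    end
end
% Vectorised equivalent:
% [J, I] = meshgrid(1:N, 1:M);
% s = sum(double(g(:)) .* (I(:) - J(:)).^2);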

Looping my algorithm to plot for a different parameter value on the same graph(MATLAB)

I've implemented an algorithm for my physics project which does exactly what I want. The problem I'm stuck on isn't the physics itself, so hopefully it's fairly simple to explain. I'm mainly stuck with how MATLAB's plotting works when I loop over the same algorithm to produce similar graphs with a slightly different value of one parameter. Here's my code:
clear; clc; close all;
% Parameters:
z_nn = 4; % Number of nearest-neighbour in lattice (square = 4).
z_nnn = 4; % Number of next-nearest-neighbours in lattice (square = 4).
Lx = 40; % Number of sites along x-axis.
Ly = 40; % Number of sites along y-axis.
sigma = 1; % Size of a site (defines our units of length).
beta = 1.2; % Inverse temperature beta*epsilon.
mu = -2.53; % Chemical potential mu/epsilon.
mu_2 = -2.67; % Chemical potential mu/epsilon for 2nd line.
J = linspace(1, 11, 11);%J points for the line graph plot
potential = zeros(Ly);
attract = 1.6; %wall attraction constant
k = 1; %wall depth
rho_0 = 0.4; % Initial density.
tol = 1e-12; % Convergence tolerance.
count = 30000; % Upper limit for iterations.
alpha = 0.01; % Mixing parameter.
conv = 1; cnt = 1; % Convergence value and counter.
rho = rho_0*ones(Ly); % Initialise rho to the starting guess(i-th rho_old) in Eq(47)
rho_rhs = zeros(Ly); % Initialise rho_new to zeros.
% Solve equations iteratively:
while conv>=tol && cnt<count
    cnt = cnt + 1; % Increment counter.
    % Loop over all lattice sites:
    for j=1:Ly
        %Defining the Lennard-Jones potential
        if j<k
            potential(j) = 1000000000;
        else
            potential(j) = -attract*(j-k)^(-3);
        end
        % Handle the periodic boundaries for x and y:
        %left = mod((i-1)-1,Lx) + 1; % i-1, maps 0 to Lx.
        %right = mod((i+1)-1,Lx) + 1; % i+1, maps Lx+1 to 1.
        if j<k+1 %depth of wall
            rho_rhs(j) = 0;
            rho(j) = 0;
        elseif j<(20+k)
            rho_rhs(j) = (1 - rho(j))*exp((beta*((3/2)*rho(j-1) + (3/2)*rho(j+1) + 2*rho(j) + mu) - potential(j)));
        else
            rho_rhs(j) = rho_rhs(j-1);
        end
    end
    conv = sum(sum((rho - rho_rhs).^2)); % Convergence value is the sum of the differences between new and current solution.
    rho = alpha*rho_rhs + (1 - alpha)*rho; % Mix the new and current solutions for next iteration.
end
% disp(['conv = ' num2str(conv_2) ' cnt = ' num2str(cnt)]); % Display final answer.
% figure(2);
% pcolor(rho_2);
figure(1);
plot(J, rho(1:11));
hold on;
% plot(J, rho_2(1,1:11));
hold off;
disp(['conv = ' num2str(conv) ' cnt = ' num2str(cnt)]); % Display final answer.
figure(3);
pcolor(rho);
Running this code should give you a graph like this
Now I want to produce a similar graph, but with the value of one of the variables changed and plotted on the same graph. The approach I've tried is below:
clear; clc; close all;
% Parameters:
z_nn = 4; % Number of nearest-neighbour in lattice (square = 4).
z_nnn = 4; % Number of next-nearest-neighbours in lattice (square = 4).
Lx = 40; % Number of sites along x-axis.
Ly = 40; % Number of sites along y-axis.
sigma = 1; % Size of a site (defines our units of length).
beta = 1.2; % Inverse temperature beta*epsilon.
mu = [-2.53,-2.67]; % Chemical potential mu/epsilon.
mu_2 = -2.67; % Chemical potential mu/epsilon for 2nd line.
J = linspace(1, 10, 10);%J points for the line graph plot
potential = zeros(Ly, length(mu));
gamma = zeros(Ly, length(mu));
attract = 1.6; %wall attraction constant
k = 1; %wall depth
rho_0 = 0.4; % Initial density.
tol = 1e-12; % Convergence tolerance.
count = 30000; % Upper limit for iterations.
alpha = 0.01; % Mixing parameter.
conv = 1; cnt = 1; % Convergence value and counter.
rho = rho_0*[Ly,length(mu)]; % Initialise rho to the starting guess(i-th rho_old) in Eq(47)
rho_rhs = zeros(Ly,length(mu)); % Initialise rho_new to zeros.
figure(3);
hold on;
% Solve equations iteratively:
while conv>=tol && cnt<count
    cnt = cnt + 1; % Increment counter.
    % Loop over all lattice sites:
    for j=1:Ly
        for i=1:length(mu)
            y = 1:Ly;
            MU = mu(i).*ones(Ly)
            %Defining the Lennard-Jones potential
            if j<k
                potential(j) = 1000000000;
            else
                potential(j) = -attract*(j-k).^(-3);
            end
            % Handle the periodic boundaries for x and y:
            %left = mod((i-1)-1,Lx) + 1; % i-1, maps 0 to Lx.
            %right = mod((i+1)-1,Lx) + 1; % i+1, maps Lx+1 to 1.
            if j<k+1 %depth of wall
                rho_rhs(j) = 0;
                rho(j) = 0;
            elseif j<(20+k)
                rho_rhs(j) = (1 - rho(j))*exp((beta*((3/2)*rho(j-1) + (3/2)*rho(j+1) + 2*rho(j) + MU - potential(j)));
            else
                rho_rhs(j) = rho_rhs(j-1);
            end
        end
    end
    conv = sum(sum((rho - rho_rhs).^2)); % Convergence value is the sum of the differences between new and current solution.
    rho = alpha*rho_rhs + (1 - alpha)*rho; % Mix the new and current solutions for next iteration.
    disp(['conv = ' num2str(conv) ' cnt = ' num2str(cnt)]); % Display final answer.
    figure(1);
    pcolor(rho);
    plot(J, rho(1:10));
end
hold off;
The only variable I'm changing here is mu. I would like to loop my first code so that I can enter an arbitrary number of different values of mu and plot them all on the same graph. Naturally I had to change the arrays' dimensions from (1 by Ly) to (number of mu values by Ly), so that on each pass of the loop the i-th mu value produces an array as long as Ly. I thought I would do the plotting within the loop and wrap the whole loop in "hold on" so that the plot generated in each iteration wouldn't be erased. But I've spent hours trying to figure out the semantics of MATLAB and I still can't work out what to do, so hopefully I can get some help on this!
hold on only applies to the active figure; it is not a generic property shared among all figures. What it does is change the value of the current figure's NextPlot property, which governs the behavior when adding plots to a figure.
hold on is equivalent to set(gcf,'NextPlot','add');
hold off is equivalent to set(gcf,'NextPlot','replace');
In your code you have:
figure(3); % Makes figure 3 the active figure
hold on; % Sets figure 3 'NextPlot' property to 'add'
% Do some things %
while conv>=tol && cnt<count
    % Do many things %
    figure(1); % Makes figure 1 the active figure; 'hold on' was not applied to that figure
    plot(J, rho(1:10)); % plots rho while erasing the previous plot
end
You should try adding another hold on statement after figure(1):
figure(1);
hold on
plot(J, rho(1:10));
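If it helps, here is a tiny standalone illustration (independent of the physics code above) of hold being a per-figure setting:
figure(3); hold on;   % hold is now on in figure 3 only
figure(1);            % hold is still off in figure 1
plot(1:5, rand(1,5));
plot(1:5, rand(1,5)); % replaces the previous line in figure 1
hold on               % applies to the current figure, i.e. figure 1
plot(1:5, rand(1,5));
plot(1:5, rand(1,5)); % now both of these lines are kept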

Accuracy issues with multiplication of matrices in Matlab R2012b

I have implemented a script that does constrained optimization to solve for the optimal parameters of a Support Vector Machine model. I noticed that my script for some reason gives slightly inaccurate results (although very close to the real value). For example, the typical situation is that the result of a calculation should be exactly 0, but instead it is something like
-1/18014398509481984 = -5.551115123125783e-17
This happens when I multiply matrices with vectors. What makes it even stranger is that if I do the multiplication by hand in the MATLAB command window, I get exactly 0.
Let me give an example: if I take the vectors Aq = [-1 -1 1 1] and x = [12/65 28/65 32/65 8/65]', I get exactly 0 from their multiplication when I do it in the command window, as you can see in the picture below:
If, on the other hand, I do this in my function script, the result isn't 0 but rather the value -1/18014398509481984.
Here is the part of my script that is responsible for this multiplication (I've added the Aq and x into the script to show the contents of Aq and x as well):
disp('DOT PRODUCT OF ACTIVE SET AND NEW POINT: ')
Aq
x
Aq*x
Here is the result of the code above when run:
As you can see the value isn't exactly 0 even though it really should be. Note that this problem doesn't occur for all possible values of Aq and x. If Aq = [-1 -1 1 1] and x = [4/13 4/13 4/13 4/13] the result is exactly 0 as you can see below:
What is causing this inaccuracy? How can I fix this?
P.S. I didn't include my whole code because it's not very well documented and is a few hundred lines long, but I will if requested.
Thank you!
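For reference, the reported value has a familiar origin: 18014398509481984 is 2^54, and the same magnitude appears in the textbook double-precision rounding example (shown here purely as an illustration; it is not taken from the question's code):
format long
(0.1 + 0.2) - 0.3     % gives 5.551115123125783e-17, i.e. 1/2^54, rather than exactly 0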
UPDATE: new test, using Ander Biguri's advice:
UPDATE 2: THE CODE
function [weights, alphas, iters] = solveSVM(data, labels, C, e)
% FUNCTION [weights, alphas, iters] = solveSVM(data, labels, C, e)
%
% AUTHOR: jjepsuomi
%
% VERSION: 1.0
%
% DESCRIPTION:
% - This function will attempt to solve the optimal weights for a Support
% Vector Machines (SVM) model using active set method with gradient
% projection.
%
% INPUTS:
% "data" a n-by-m data matrix. The number of rows 'n' corresponds to the
% number of data points and the number of columns 'm' corresponds to the
% number of variables.
% "labels" a 1-by-n row vector of data labels from the set {-1,1}.
% "C" Box constraint upper limit. This will constrain the values of 'alphas'
% to the range 0 <= alphas <= C. If hard-margin SVM model is required set
% C=Inf.
% "e" a real value corresponding to the convergence criterion, that is if
% solution Xi and Xi-1 are within distance 'e' from each other stop the
% learning process, i.e. IF |F(Xi)-F(Xi-1)| < e ==> stop learning process.
%
% OUTPUTS:
% "weights" a vector corresponding to the optimal decision line parameters.
% "alphas" a vector of alpha-values corresponding to the optimal solution
% of the dual optimization problem of SVM.
% "iters" number of iterations until learning stopped.
%
% EXAMPLE USAGE 1:
%
% 'Hard-margin SVM':
%
% data = [0 0;2 2;2 0;3 0];
% labels = [-1 -1 1 1];
% [weights, alphas, iters] = solveSVM(data, labels, Inf, 10^-100)
%
% EXAMPLE USAGE 2:
%
% 'Soft-margin SVM':
%
% data = [0 0;2 2;2 0;3 0];
% labels = [-1 -1 1 1];
% [weights, alphas, iters] = solveSVM(data, labels, 0.8, 10^-100)
% STEP 1: INITIALIZATION OF THE PROBLEM
format long
% Calculate linear kernel matrix
L = kron(labels', labels);
K = data*data';
% Hessian matrix
Qd = L.*K;
% The minimization function
L = @(a) (1/2)*a'*Qd*a - ones(1, length(a))*a;
% Gradient of the minimizable function
gL = @(a) a'*Qd - ones(1, length(a));
% STEP 2: THE LEARNING PROCESS, ACTIVE SET WITH GRADIENT PROJECTION
% Initial feasible solution (required by gradient projection)
x = zeros(length(labels), 1);
iters = 1;
optfound = 0;
while optfound == 0 % criterion met
% Negative of the gradient at initial solution
g = -gL(x);
% Set the active set and projection matrix
Aq = labels; % In plane y^Tx = 0
P = eye(length(x))-Aq'*inv(Aq*Aq')*Aq; % In plane projection
% Values smaller than 'eps' are changed into 0
P(find(abs(P-0) < eps)) = 0;
d = P*g'; % Projection onto plane
if ~isempty(find(x==0 | x==C)) % Constraints active?
acinds = find(x==0 | x==C);
for i = 1:length(acinds)
if (x(acinds(i)) == 0 && d(acinds(i)) < 0) || x(acinds(i)) == C && d(acinds(i)) > 0
% Make the constraint vector
constr = zeros(1,length(x));
constr(acinds(i)) = 1;
Aq = [Aq; constr];
end
end
% Update the projection matrix
P = eye(length(x))-Aq'*inv(Aq*Aq')*Aq; % In plane / box projection
% Values smaller than 'eps' are changed into 0
P(find(abs(P-0) < eps)) = 0;
d = P*g'; % Projection onto plane / border
end
%%%% DISPLAY INFORMATION, THIS PART IS NOT NECESSARY, ONLY FOR DEBUGGING
if Aq*x ~= 0
disp('ACTIVE SET CONSTRAINTS Aq :')
Aq
disp('CURRENT SOLUTION x :')
x
disp('MULTIPLICATION OF Aq and x')
Aq*x
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Values smaller than 'eps' are changed into 0
d(find(abs(d-0) < eps)) = 0;
if ~isempty(find(d~=0)) && rank(P) < length(x) % Line search for optimal lambda
lopt = ((g*d)/(d'*Qd*d));
lmax = inf;
for i = 1:length(x)
if d(i) < 0 && -x(i) ~= 0 && -x(i)/d(i) <= lmax
lmax = -x(i)/d(i);
elseif d(i) > 0 && (C-x(i))/d(i) <= lmax
lmax = (C-x(i))/d(i);
end
end
lambda = max(0, min([lopt, lmax]));
if abs(lambda) < eps
lambda = 0;
end
xo = x;
x = x + lambda*d;
iters = iters + 1;
end
% Check whether search direction is 0-vector or 'e'-criterion met.
if isempty(find(d~=0)) || abs(L(x)-L(xo)) < e
optfound = 1;
end
end
%%% STEP 3: GET THE WEIGHTS
alphas = x;
w = zeros(1, length(data(1,:)));
for i = 1:size(data,1)
w = w + labels(i)*alphas(i)*data(i,:);
end
svinds = find(alphas>0);
svind = svinds(1);
b = 1/labels(svind) - w*data(svind, :)';
%%% STEP 4: OPTIMALITY CHECK, KKT conditions. See KKT-conditions for reference.
weights = [b; w'];
datadim = length(data(1,:));
Q = [zeros(1,datadim+1); zeros(datadim, 1), eye(datadim)];
A = [ones(size(data,1), 1), data];
for i = 1:length(labels)
A(i,:) = A(i,:)*labels(i);
end
LagDuG = Q*weights - A'*alphas;
Ac = A*weights - ones(length(labels),1);
alpA = alphas.*Ac;
LagDuG(any(abs(LagDuG-0) < 10^-14)) = 0;
if ~any(alphas < 0) && all(LagDuG == zeros(datadim+1,1)) && all(abs(Ac) >= 0) && all(abs(alpA) < 10^-6)
disp('Optimal found, Karush-Kuhn-Tucker conditions satisfied.')
else
disp('Optimal not found, Karush-Kuhn-Tucker conditions not satisfied.')
end
% VISUALIZATION FOR 2D-CASE
if size(data, 2) == 2
pinds = find(labels > 0);
ninds = find(labels < 0);
plot(data(pinds, 1), data(pinds, 2), 'o', 'MarkerFaceColor', 'red', 'MarkerEdgeColor', 'black')
hold on
plot(data(ninds, 1), data(ninds, 2), 'o', 'MarkerFaceColor', 'blue', 'MarkerEdgeColor', 'black')
Xb = min(data(:,1))-1;
Xe = max(data(:,1))+1;
Yb = -(b+w(1)*Xb)/w(2);
Ye = -(b+w(1)*Xe)/w(2);
lineh = plot([Xb Xe], [Yb Ye], 'LineWidth', 2);
supvh = plot(data(find(alphas~=0), 1), data(find(alphas~=0), 2), 'g.');
legend([lineh, supvh], 'Decision boundary', 'Support vectors');
hold off
end
NOTE:
If you run EXAMPLE 1, you should get an output starting with the following:
As you can see, the multiplication of Aq and x doesn't produce the value 0, even though it should. This is not a problem in this particular example, but if I have more data points with lots of decimals in them, this inaccuracy becomes a bigger and bigger problem, because the calculations are not exact. This matters, for example, when I'm searching for a new direction vector while moving towards the optimal solution in the gradient projection method: the search direction isn't exactly the correct direction, only close to it. This is why I want the exactly correct values... is this possible?
I wonder if the decimals in the data points have something to do with the accuracy of my results. See the picture below:
So the question is: is this caused by the data, or is there something wrong in the optimization procedure?
Do you use the format function inside your script? It looks like you have used format rat somewhere.
You can always use MATLAB's eps function, which returns the precision used inside MATLAB. The absolute value of -1/18014398509481984 is smaller than this, according to my MATLAB R2014b:
format long
a = abs(-1/18014398509481984)
b = eps
a < b
This basically means that the result is zero (but MATLAB stopped the calculation there because, according to the eps value, the result was just fine).
Otherwise you can just use format long inside your script before the calculation.
Edit
I see the inv function inside your code; try replacing it with the \ operator (mldivide). The results will be more accurate, as it uses Gaussian elimination without forming the inverse.
The inv documentation states:
In practice, it is seldom necessary to form the explicit inverse of a
matrix. A frequent misuse of inv arises when solving the system of
linear equations Ax = b. One way to solve this is with x = inv(A)*b. A
better way, from both an execution time and numerical accuracy
standpoint, is to use the matrix division operator x = A\b. This
produces the solution using Gaussian elimination, without forming the
inverse.
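For concreteness, a minimal sketch of that substitution applied to the projection-matrix line from the question (same variables as in the posted code, only inv replaced by mldivide):
% Before: P = eye(length(x)) - Aq'*inv(Aq*Aq')*Aq;
P = eye(length(x)) - Aq' * ((Aq*Aq') \ Aq);   % solve with \ instead of forming the inverse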
With the provided code, this is how I tested:
I added a break-point on the following code:
if Aq*x ~= 0
disp('ACTIVE SET CONSTRAINTS Aq :')
Aq
disp('CURRENT SOLUTION x :')
x
disp('MULTIPLICATION OF Aq and x')
Aq*x
end
When the if branch was taken, I typed at the console:
K>> format rat; disp(x);
12/65
28/65
32/65
8/65
K>> disp(x == [12/65; 28/65; 32/65; 8/65]);
0
1
0
0
K>> format('long'); disp(max(abs(x - [12/65; 28/65; 32/65; 8/65])));
1.387778780781446e-17
K>> disp(eps(8/65));
1.387778780781446e-17
This suggests that this is a display problem: format rat deliberately uses small integers to express the value, at the expense of precision. Apparently, the true value of x(4) is the double-precision number adjacent to 8/65.
So, this begs the question: are you sure that numeric convergence depends on flipping the least significant bit in a double precision value?
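A related suggestion (not part of the answer above): rather than requiring Aq*x to be bit-exact zero, the debugging check in the question could compare against a small tolerance, which sidesteps the last-bit issue entirely:
tol = 1e-10;                 % assumed tolerance; pick one suited to the scale of your data
if any(abs(Aq*x) > tol)      % treat anything below tol as zero
    disp('ACTIVE SET CONSTRAINTS Aq :'); disp(Aq)
    disp('CURRENT SOLUTION x :');        disp(x)
    disp('MULTIPLICATION OF Aq and x');  disp(Aq*x)
end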