Learning XOR with deep neural network - matlab

I am a novice at deep learning, so I am starting with the simplest test case: learning XOR.
In the new edition of Digital Image Processing by G & W the authors give an example of XOR learning by a deep net with 3 layers: input, hidden and output (each layer has 2 neurons), with a sigmoid as the network activation function.
For network initialization they say: "We used alpha = 1.0, an initial set of Gaussian random weights of zero mean and standard deviation of 0.02" (alpha is the gradient descent learning rate).
Training was made with 4 labeled examples:
X = [1 -1 -1 1;1 -1 1 -1];%MATLAB syntax
R = [1 1 0 0;0 0 1 1];%Labels
I have written the following MATLAB code to implement the network learning process:
function output = neuralNet4e(input,specs)
NumPat = size(input.X,2);%Number of patterns
NumLayers = length(specs.W);
for kEpoch = 1:specs.NumEpochs
    % forward pass
    A = cell(NumLayers,1);%Output of each neuron in each layer
    derZ = cell(NumLayers,1);%Activation function derivative on each neuron dot product
    A{1} = input.X;
    for kLayer = 2:NumLayers
        B = repmat(specs.b{kLayer},1,NumPat);
        Z = specs.W{kLayer} * A{kLayer - 1} + B;
        derZ{kLayer} = specs.activationFuncDerive(Z);
        A{kLayer} = specs.activationFunc(Z);
    end
    % backprop
    D = cell(NumLayers,1);
    D{NumLayers} = (A{NumLayers} - input.R).* derZ{NumLayers};
    for kLayer = (NumLayers-1):-1:2
        D{kLayer} = (specs.W{kLayer + 1}' * D{kLayer + 1}).*derZ{kLayer};
    end
    %Update weights and biases
    for kLayer = 2:NumLayers
        specs.W{kLayer} = specs.W{kLayer} - specs.alpha * D{kLayer} * A{kLayer - 1}' ;
        specs.b{kLayer} = specs.b{kLayer} - specs.alpha * sum(D{kLayer},2);
    end
end
output.A = A;
end
Now, when I use their setup (i.e., weight initialization with std = 0.02):
clearvars
s = 0.02;
input.X = [1 -1 -1 1;1 -1 1 -1];
input.R = [1 1 0 0;0 0 1 1];
specs.W = {[];s * randn(2,2);s * randn(2,2)};
specs.b = {[];s * randn(2,1);s * randn(2,1)};
specs.activationFunc = @(x) 1./(1 + exp(-x));
specs.activationFuncDerive = @(x) exp(-x)./(1 + exp(-x)).^2;
specs.NumEpochs = 1e4;
specs.alpha = 1;
output = neuralNet4e(input,specs);
I'm getting (after 10000 epochs) that the final output of the net is
output.A{3} = [0.5 0.5 0.5 0.5;0.5 0.5 0.5 0.5]
but when I changed s = 0.02; to s = 1; I got output.A{3} = [0.989 0.987 0.010 0.010;0.010 0.012 0.98 0.98], as it should be.
Is it possible to get these results with s = 0.02; and I am doing something wrong in my code, or is a standard deviation of 0.02 just a typo?

Based on your code, I don't see any errors. To my knowledge, the result that you got,
output.A{3} = [0.5 0.5 0.5 0.5;0.5 0.5 0.5 0.5]
is a typical result of overfitting. There are many reasons for this to happen, such as too many epochs, too large a learning rate, too little sample data, and others.
In your example, s = 0.02 limits the values of the randomized weights and biases. Changing that to s = 1 leaves the randomized values unchanged/unscaled.
To make the s = 0.02 case work, you can try reducing the number of epochs or lowering alpha.
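For example, a quick sketch of that experiment (reusing the neuralNet4e function and the setup from the question; the particular alpha and epoch values below are only guesses to try, not prescriptions):
s = 0.02;
input.X = [1 -1 -1 1;1 -1 1 -1];
input.R = [1 1 0 0;0 0 1 1];
for alpha = [1 0.5 0.1]
    for NumEpochs = [1e3 5e3 1e4]
        rng(0); % same random initialization for a fair comparison
        specs.W = {[];s * randn(2,2);s * randn(2,2)};
        specs.b = {[];s * randn(2,1);s * randn(2,1)};
        specs.activationFunc = @(x) 1./(1 + exp(-x));
        specs.activationFuncDerive = @(x) exp(-x)./(1 + exp(-x)).^2;
        specs.NumEpochs = NumEpochs;
        specs.alpha = alpha;
        output = neuralNet4e(input,specs);
        fprintf('alpha = %.2f, epochs = %g:\n', alpha, NumEpochs);
        disp(output.A{3})
    end
end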
Hope this helps.

Related

Is there a way to derive an implicit ODE function from a big symbolic expression?

I'm Davide and I have a problem with the derivation of a function that later should be given to ode15i in MATLAB.
Basically, I've derived a big symbolic expression that describes the motion of a spacecraft with a flexible appendage (like a solar panel). My goal is to obtain a function handle that can be integrated using the built-in implicit solver in MATLAB (ode15i).
The problem I've encountered is the slowness of the symbolic computations, especially in the function "daeFunction" (I ran it and lost any hope of a response after 3-4 hours had passed).
The system of equations, which is derived using Lagrange's method, is an implicit ODE.
The complex nature of the system arises from the flexibility modelling of the solar panel.
I am open to any suggestions that would help me in:
running the code properly.
running it as efficiently as possible.
Thanks in advance.
Below I copy the code. Note: MATLAB R2021a was used.
close all
clear
clc
syms t
syms r(t) [3 1]
syms angle(t) [3 1]
syms delta(t)
syms beta(t) [3 1]
mu = 3.986e14;
mc = 1600;
mi = 10;
k = 10;
kt = 10;
Ii = [1 0 0 % for the first link it is different, thus I should write a function or something that writes everything into an array or a vector
0 5 0
0 0 5];
% Dimension of satellite
a = 1;
b = 1.3;
c = 1;
Ic = 1/12*mc* [b^2+c^2 0 0
0 c^2+a^2 0
0 0 a^2+b^2];
ra_c = [0 1 0]';
a = diff(r,t,t);
ddelta = diff(delta,t);
dbeta = diff(beta,t);
dddelta = diff(delta,t,t);
ddbeta = diff(beta,t,t);
R= [cos(angle1).*cos(angle3)-cos(angle2).*sin(angle1).*sin(angle3) sin(angle1).*cos(angle3)+cos(angle2).*cos(angle1).*sin(angle3) sin(angle2).*sin(angle3)
-cos(angle1).*sin(angle3)-cos(angle2).*sin(angle1).*cos(angle3) -sin(angle1).*sin(angle3)+cos(angle2).*cos(angle1).*cos(angle3) sin(angle2).*cos(angle3)
sin(angle2).*sin(angle3) -sin(angle2).*cos(angle1) cos(angle2)];
d_angle1 = diff(angle1,t);
d_angle2 = diff(angle2,t);
d_angle3 = diff(angle3,t);
dd_angle1 = diff(angle1,t,t);
dd_angle2 = diff(angle2,t,t);
dd_angle3 = diff(angle3,t,t);
d_angle = [d_angle1;d_angle2;d_angle3];
dd_angle = [dd_angle1;dd_angle2;dd_angle3];
omega = [d_angle2.*cos(angle1)+d_angle3.*sin(angle2).*sin(angle1);d_angle2.*sin(angle1)-d_angle3.*sin(angle2).*cos(angle1);d_angle1+d_angle3.*cos(angle2)]; % this should describe correctly omega_oc
d_omega = diff(omega,t);
v1 = diff(r1,t);
v2 = diff(r2,t);
v3 = diff(r3,t);
v = [v1; v2; v3];
[J,r_cgi,R_ci]= Jacobian_Rob(4,delta,beta);
% Perform matrix multiplication
for mm = 1:4
vel(:,mm) = J(:,:,mm)*[ddelta;dbeta];
end
vel = formula(vel);
dr_Ccgi = vel(1:3,:);
omega_ci = vel(4:6,:);
assumeAlso(angle(t),'real');
assumeAlso(d_angle(t),'real');
assumeAlso(dd_angle(t),'real');
assumeAlso(r(t),'real');
assumeAlso(a(t),'real');
assumeAlso(v(t),'real');
assumeAlso(beta(t),'real');
assumeAlso(delta(t),'real');
assumeAlso(dbeta(t),'real');
assumeAlso(ddelta(t),'real');
assumeAlso(ddbeta(t),'real');
assumeAlso(dddelta(t),'real');
omega = formula(omega);
Tc = 1/2*v'*mc*v+1/2*omega'*R*Ic*R'*omega;
% kinetic energy of all appendices
for h = 1:4
Ti(h) = 1/2*v'*mi*v+mi*v'*skew(omega)*R*ra_c+mi*v'*skew(omega)*R*r_cgi(:,h)+mi*v'*R*dr_Ccgi(:,h)+1/2*mi*ra_c'*R'*skew(omega)'*skew(omega)*R*ra_c ...
+ mi*ra_c'*R'*skew(omega)'*skew(omega)*R*r_cgi(:,h)+mi*ra_c'*R'*skew(omega)'*R*dr_Ccgi(:,h)+1/2*omega'*R*R_ci(:,:,h)*Ii*(R*R_ci(:,:,h))'*omega ...
+ omega'*R*R_ci(:,:,h)*Ii*R_ci(:,:,h)'*omega_ci(:,h)+1/2*omega_ci(:,h)'*R_ci(:,:,h)*Ii*R_ci(:,:,h)'*omega_ci(:,h)+1/2*mi*r_cgi(:,h)'*R'*skew(omega)'*skew(omega)*R*r_cgi(:,h)+mi*r_cgi(:,h)'*R'*skew(omega)'*R*dr_Ccgi(:,h)...
+ 1/2*mi*dr_Ccgi(:,h)'*dr_Ccgi(:,h);
Ugi(h) = -mu*mi/norm(r,2)+mu*mi*r'/(norm(r,2)^3)*(R*ra_c+R*R_ci(:,:,h)*r_cgi(:,h));
end
Ugc = -mu*mc/norm(r,2);
Ue = 1/2*kt*(delta)^2+sum(1/2*k*(beta).^2);
U = Ugc+sum(Ugi)+Ue;
L = Tc + sum(Ti) - U;
D = 1/2 *100* (ddelta^2+sum(dbeta.^2));
%% Equation of motion derivation
eq = [diff(jacobian(L,v),t)'-jacobian(L,r)';
diff(jacobian(L,d_angle),t)'-jacobian(L,angle)';
diff(jacobian(L,ddelta),t)'-jacobian(L,delta)'+jacobian(D,ddelta)';
diff(jacobian(L,dbeta),t)'-jacobian(L,beta)'+jacobian(D,dbeta)'];
%% Reduction to first order sys
[sys,newVars,R1]=reduceDifferentialOrder(eq,[r(t); angle(t); delta(t); beta(t)]);
DAEs = sys;
DAEvars = newVars;
%% ode15i implicit solver
pDAEs = symvar(DAEs);
pDAEvars = symvar(DAEvars);
extraParams = setdiff(pDAEs,pDAEvars);
f = daeFunction(DAEs,DAEvars,'File','ProvaSum');
y0est = [6778e3 0 0 0.01 0.1 0.3 0 0.12 0 0 0 7400 0 0 0 0 0 0 0 0]';
yp0est = zeros(20,1);
opt = odeset('RelTol', 10.0^(-7),'AbsTol',10.0^(-7),'Stats', 'on');
[y0,yp0] = decic(f,0,y0est,[],yp0est,[],opt);
% Integration
[tSol,ySol] = ode15i(f,[0 0.5],y0,yp0,opt);
%% Functions
function [J,p_cgi,R_ci]=Jacobian_Rob(N,delta,beta)
% Function to compute Jacobian see Robotics by Siciliano
% N total number of links
% delta [1x1], beta [(N-1)x1]: variables that describe the position of the solar
% panel elements
beta = formula(beta);
L_link = [1 1 1 1]'; % Length of each link elements in [m], later to be derived from file or as function input
for I = 1 : N
A1 = Homog_Matrix(I,delta,beta);
A1 = formula(A1);
R_ci(:,:,I) = A1(1:3,1:3);
if I ~= 1
p_cgi(:,I) = A1(1:3,4) + A1(1:3,1:3)*[1 0 0]'*L_link(I)/2;
else
p_cgi(:,I) = A1(1:3,4) + A1(1:3,1:3)*[0 0 1]'*L_link(I)/2;
end
for j = 1:I
A_j = formula(Homog_Matrix(j,delta,beta));
z_j = A_j(1:3,3);
Jp(:,j) = skew(z_j)*(p_cgi(:,I)-A_j(1:3,4));
Jo(:,j) = z_j;
end
if N-I > 0
Jp(:,I+1:N) = zeros(3,N-I);
Jo(:,I+1:N) = zeros(3,N-I);
end
J(:,:,I)= [Jp;Jo];
end
J = formula(J);
p_cgi = formula(p_cgi);
R_ci = formula(R_ci);
end
function [A_CJ]=Homog_Matrix(J,delta,beta)
% This function is made specifically for the solar panel
% define basic rotation matrices
Rx = @(angle) [1 0 0
0 cos(angle) -sin(angle)
0 sin(angle) cos(angle)];
Ry = @(angle) [ cos(angle) 0 sin(angle)
0 1 0
-sin(angle) 0 cos(angle)];
Rz = @(angle) [cos(angle) -sin(angle) 0
sin(angle) cos(angle) 0
0 0 1];
if isa(beta,"sym")
beta = formula(beta);
end
L_link = [1 1 1 1]'; % Length of each link elements in [m], later to be derived from file or as function input
% Rotation matrix how C sees B
R_CB = Rz(-pi/2)*Ry(-pi/2); % Clarify notation: R_CB is the rotation matrix that describes how frame B is seen from C
% (it is the same as if it were written R_B2C,
% because it brings a vector written in B to the C
% frame --> p_C = R_CB p_B;
% same convention as used in Siciliano: how C sees frame B)
A_AB = [R_CB zeros(3,1)
zeros(1,3) 1];
A_B1 = [Rz(delta) zeros(3,1)
zeros(1,3) 1];
A_12 = [Ry(-pi/2)*Rx(-pi/2)*Rz(beta(1)) L_link(1)*[0 0 1]'
zeros(1,3) 1];
if J == 1
A_CJ = A_AB*A_B1;
elseif J == 0
A_CJ = A_AB;
else
A_CJ = A_AB*A_B1*A_12;
end
for j = 3:J
A_Jm1J = [Rz(beta(j-1)) L_link(j-1)*[1 0 0]'
zeros(1,3) 1];
A_CJ = A_CJ*A_Jm1J;
end
end
function [S]=skew(r)
S = [ 0 -r(3) r(2); r(3) 0 -r(1); -r(2) r(1) 0];
end
I found your question beautiful. My suggestion is to handle the problem numerically. Symbolic manipulation in MATLAB is good, but it is much slower than numerical calculation. You can easily define the ODE as a system of first-order ODEs and solve them using numerical integration functions like ode45. Your code is very lengthy and I couldn't manage to follow its details.
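For what it's worth, here is a minimal sketch of that idea on a toy equation (not your spacecraft model): the second-order ODE m*x'' + c*x' + k*x = 0 rewritten as a first-order system y = [x; x'] and integrated with ode45.
m = 1; c = 0.5; k = 10; % toy parameters, chosen arbitrarily
odefun = @(t,y) [y(2); (-c*y(2) - k*y(1))/m]; % [x'; v'] = [v; (-c*v - k*x)/m]
y0 = [1; 0]; % initial position and velocity
[tSol,ySol] = ode45(odefun,[0 10],y0);
plot(tSol,ySol(:,1)), xlabel('t'), ylabel('x(t)')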
All the Best.
Yasien

Matlab neural network handwritten digit recognition, output going to indifference

Using Matlab I am trying to construct a neural network that can classify handwritten digits that are 30x30 pixels. I use backpropagation to find the correct weights and biases. The network starts with 900 inputs, then has 2 hidden layers with 16 neurons and it ends with 10 outputs. Each output neuron has a value between 0 and 1 that represents the belief that the input should be classified as a certain digit. The problem is that after training, the output becomes almost indifferent to the input and it goes towards a uniform belief of 0.1 for each output.
My approach is to take each image of 30x30 pixels and reshape it into a 900x1 vector (note that 'Images_vector' is already in the vector format when it is loaded). The weights and biases are initialized with random values between 0 and 1. I am using stochastic gradient descent to update the weights and biases with 10 randomly selected samples per batch. The equations are as described by Nielsen.
The script is as follows.
%% Inputs
numberofbatches = 1000;
batchsize = 10;
alpha = 1;
cutoff = 8000;
layers = [900 16 16 10];
%% Initialization
rng(0);
load('Images_vector')
Images_vector = reshape(Images_vector', 1, 10000);
labels = [ones(1,1000) 2*ones(1,1000) 3*ones(1,1000) 4*ones(1,1000) 5*ones(1,1000) 6*ones(1,1000) 7*ones(1,1000) 8*ones(1,1000) 9*ones(1,1000) 10*ones(1,1000)];
newOrder = randperm(10000);
Images_vector = Images_vector(newOrder);
labels = labels(newOrder);
images_training = Images_vector(1:cutoff);
images_testing = Images_vector(cutoff + 1:10000);
w = cell(1,length(layers) - 1);
b = cell(1,length(layers));
dCdw = cell(1,length(layers) - 1);
dCdb = cell(1,length(layers));
for i = 1:length(layers) - 1
    w{i} = rand(layers(i+1),layers(i));
    b{i+1} = rand(layers(i+1),1);
end
%% Learning process
batches = randi([1 cutoff - batchsize],1,numberofbatches);
cost = zeros(numberofbatches,1);
c = 1;
for batch = batches
    for i = 1:length(layers) - 1
        dCdw{i} = zeros(layers(i+1),layers(i));
        dCdb{i+1} = zeros(layers(i+1),1);
    end
    for n = batch:batch+batchsize
        y = zeros(10,1);
        disp(labels(n))
        y(labels(n)) = 1;
        % Network
        a{1} = images_training{n};
        z{2} = w{1} * a{1} + b{2};
        a{2} = sigmoid(0, z{2});
        z{3} = w{2} * a{2} + b{3};
        a{3} = sigmoid(0, z{3});
        z{4} = w{3} * a{3} + b{4};
        a{4} = sigmoid(0, z{4});
        % Cost
        cost(c) = sum((a{4} - y).^2) / 2;
        % Gradient
        d{4} = (a{4} - y) .* sigmoid(1, z{4});
        d{3} = (w{3}' * d{4}) .* sigmoid(1, z{3});
        d{2} = (w{2}' * d{3}) .* sigmoid(1, z{2});
        dCdb{4} = dCdb{4} + d{4} / 10;
        dCdb{3} = dCdb{3} + d{3} / 10;
        dCdb{2} = dCdb{2} + d{2} / 10;
        dCdw{3} = dCdw{3} + (a{3} * d{4}')' / 10;
        dCdw{2} = dCdw{2} + (a{2} * d{3}')' / 10;
        dCdw{1} = dCdw{1} + (a{1} * d{2}')' / 10;
        c = c + 1;
    end
    % Adjustment
    b{4} = b{4} - dCdb{4} * alpha;
    b{3} = b{3} - dCdb{3} * alpha;
    b{2} = b{2} - dCdb{2} * alpha;
    w{3} = w{3} - dCdw{3} * alpha;
    w{2} = w{2} - dCdw{2} * alpha;
    w{1} = w{1} - dCdw{1} * alpha;
end
figure
plot(cost)
ylabel 'Cost'
xlabel 'Batches trained on'
With the sigmoid function being the following.
function y = sigmoid(derivative, x)
if derivative == 0
    y = 1 ./ (1 + exp(-x));
else
    y = sigmoid(0, x) .* (1 - sigmoid(0, x));
end
end
Other than this I have also tried to have 1 of each digit in each batch, but this gave the same result. Also I have tried varying the batch size, the number of batches and alpha, but with no success.
Does anyone know what I am doing wrong?
Correct me if I'm wrong: you have 10000 samples in your data, which you divide into 1000 batches of 10 samples. Your training process consists of running over these 10000 samples once.
This might be too little; normally a training process consists of several epochs (one epoch = iterating over every sample once). You can try going over your batches multiple times.
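As a rough illustration of the epoch idea, here is a hedged, self-contained toy example (plain linear regression trained with mini-batch gradient descent over several epochs, not your digit network):
rng(0);
X = rand(2,1000); % 1000 toy samples, 2 features
wTrue = [3; -2];
y = wTrue' * X + 0.01 * randn(1,1000); % noisy targets
w = zeros(2,1); % initial weights
alpha = 0.1; batchsize = 10; numepochs = 20;
for epoch = 1:numepochs % several passes over the whole data set
    order = randperm(size(X,2)); % reshuffle the samples each epoch
    for b = 1:batchsize:numel(order)
        idx = order(b:b + batchsize - 1);
        err = w' * X(:,idx) - y(idx); % 1 x batchsize residuals
        w = w - alpha * (X(:,idx) * err') / batchsize; % mini-batch update
    end
end
disp(w) % should end up close to wTrue after enough epochs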
Also for 900 inputs your network seems small. Try it with more neurons in the second layer. Hope it helps!

Matlab Error 'Dimensions of matrices being concatenated are not consistent.'

I know this problem comes up a lot on here and I've read through many threads but am still stuck on my specific problem. I'm using a mass matrix with an ODE solver to solve 4 equations simultaneously, 2 algebraic and 2 differential:
function ade
% Equation set
% y(1)== radius 1, y(2)== concentration Cc
% y(3)== radius 2 y(4)== concentration Cc2
%==========================================================================
clc
close all
M = [1 0 0 0
0 0 0 0
0 0 1 0
0 0 0 0];
options = odeset('Mass',M);
y0= [1,1,1,1];
tspan = [0:1:1440];
[t,y]= ode15s(@equations,tspan,y0,options);
function mainequations = equations (t, y)
m = 18;
m2 = 599;
ro = 1000;
ro2 = 1050;
k = 0.085;
k2 = 0.085;
theta = 46/8;
theta2 = 46/8;
De = 7.65*10^-14;
Cb = 0.993;
R = 8.29*10^-7;
alpha = (m*k)/ro;
alpha1 = (m2*k2)/ro2;
beta = (theta*k*y(2))/De;
beta2 = (theta2*k2*y(4))/De;
mainequations = [-alpha*y(2)
y(4) - y(2)-(beta * y(1)^2 *(1/y(1) - y(3)))
-alpha1*y(4)
Cb - y(4) -(beta * y(1)^2 + beta2*(y(3))^2)*(1/y(3) - 1/R)];
I'm pretty sure the problem is in the equations at the bottom; I'm guessing one is the wrong size, but when debugging, all the y values are the same size, i.e., 4x1. Can anyone help?
Thanks

How to create a "skill-bias diagram" (meteorology)?

In my research area (meteorology), graphs within graphs are commonly produced.
More information about them can be found here.
Each of those lines joins up data points that have:
An x-value, between 0 and 1 (values greater than 1 should not be represented in the graph).
A y-value, between 0 and 1.
A PSS value, between 1 and -1.
A Frequency Bias value, ranging from 0 to +∞, but values higher than 4 are not displayed.
A False Alarm Ratio (FAR) value, ranging from 0.0 to 0.9. The False Alarm Ratio value is held constant at a particular value for each data point on any given line.
EDIT: To make things really concrete, I've drawn a pink dot on the graph. That dot represents a data point for which x=0.81, y=0.61, PSS=-0.2, B=3.05, FAR=0.8.
I am trying to reproduce something similar in MATLAB. Googling turned up a lot of answers like this, which feature inset figures rather than what I'm looking for.
I have the data organized in a 3D array, where each page refers to a different level of False Alarm Ratio. The page with a FAR of 0.8 (data here) starts out like this
Then there are other pages on the 3D array devoted to FARs of 0.7, 0.6, and so on.
Questions
1. Is it even possible to create such a graph in MATLAB?
2. If so, what function should I use, and what approach should I take? EDIT: I have working code (below) that creates a somewhat similar figure using the linear plot function, but the documentation for this function does not indicate any way to insert a graph inside another graph. I am not sure how helpful this code is, but have inserted it in response to the downvoter.
H = [0:0.01:1];
figure; hold on
fill([0 1 1],[0 0 1],[0 0.2 0.4]) % Deep blue
fill([0 1 0],[0 1 1],[0.4 0 0]) % Purple
low_colours = {[0 0.501 1],[0 0.8 0.4], [0.4 0.8 0], [0.8 0.8 0]};
high_colours = {[0.6 0 0],[0.8 0 0], [1 0.5019 0], [0.988 0.827 0.196]};
colour_counter = 0;
for ii = -0.8:0.2:0
colour_counter = colour_counter + 1;
if colour_counter < 5
colour_now = low_colours{colour_counter};
end
ORSS = ones(1,size(H,2))*ii;
F = (H .* (1-ORSS)) ./ ((1-2.*H) .* ORSS + 1);
plot(F,H)
fill(F,H,colour_now);
end
colour_counter = 0;
for ii = 0.8:-0.2:0
colour_counter = colour_counter + 1;
if colour_counter < 5
colour_now = high_colours{colour_counter};
end
ORSS = ones(1,size(H,2))*ii;
F = (H .* (1-ORSS)) ./ ((1-2.*H) .* ORSS + 1);
plot(F,H)
fill(F,H,colour_now);
end
I think I got what you want, but before you go to the code below, notice the following:
I didn't need any of your functions in the link (and I have no idea what they do).
I also don't really use the x and y columns in the data; they are redundant with the PSS and B coordinates.
I concatenate all the 'pages' in your data into one long table (FAR below) with 5 columns (FAR, x, y, PSS, FB); a sketch of this step appears after the corner rows below.
If you take a closer look at the data you see that some areas that are supposed to be colored in the graph have no representation in it (i.e., no values). So in order to interpolate the color there we need to add the corners:
FAR{end+1,:} = [0.8 0 0 0 4];
FAR{end+1,:} = [0.9 0 0 -0.66 3.33];
FAR{end+1,:} = [1 0 0 0 0];
FAR{end+1,:} = [1 0 0 -1 3];
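(As a side note, here is a hedged sketch of the concatenation step mentioned above. I am assuming the 3D array is called data, that its columns are [x y PSS FB], and that its pages are ordered by FAR level, so adjust to your actual layout.)
FARlevels = 0:0.1:0.9; % assumed: one page per FAR level, in this order
allRows = [];
for k = 1:size(data,3)
    page = data(:,:,k);
    allRows = [allRows; repmat(FARlevels(k),size(page,1),1) page]; %#ok<AGROW>
end
FAR = array2table(allRows,'VariableNames',{'FAR','x','y','PSS','FB'});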
Next, the process has 2 parts. First we make a matrix for each variable, ordered in columns by the corresponding FAR value, so, for instance, in the PSS matrix the first column is all PSS values where FAR is 0, the second column is all PSS values where FAR is 0.1, and so on. We make such matrices for FAR (F), PSS and FreqBias (B), and we initialize them with NaNs so we can have columns with different numbers of values:
F = nan(max(histcounts(FAR.FAR,10)),10);
PSS = F;
B = F;
c = 1;
f = unique(FAR.FAR).';
for k = f
    valid = FAR.FAR==k & FAR.x<=1;
    B(1:sum(valid),c) = FAR.FB(valid);
    B(sum(valid):end,c) = B(sum(valid),c);
    PSS(1:sum(valid),c) = FAR.PSS(valid);
    PSS(sum(valid):end,c) = PSS(sum(valid),c);
    F(:,c) = k;
    c = c+1;
end
Then we set the colors for the colormap (which I partially took from you), and set the label positions:
colors = [0 0.2 0.4
0 0.501 1;
0 0.8 0.4;
0.4 0.8 0;
0.8 0.8 0;
0.988 0.827 0.196;
1 0.5019 0;
0.8 0 0;
0.6 0 0.2;
0.4 0.1 0.5];
label_pos =[0.89 0.77
1.01 0.74
1.14 0.69
1.37 0.64
1.7 0.57
2.03 0.41
2.65 0.18
2.925 -0.195
2.75 -0.55];
And we use contourf to plot everything together, and set all kinds of properties to make it look good:
[C,h] = contourf(B,PSS,F);
xlim([0 4])
ylim([-1 1])
colormap(colors)
caxis([0 1])
xlabel('Frequency Bias B')
ylabel('Pierce Skill Score PSS')
title('False Alarm Ratio')
ax = h.Parent;
ax.XTick = 0:4;
ax.YTick = -1:0.5:1;
ax.FontSize = 20;
for k = 1:numel(f)-2
text(label_pos(k,1),label_pos(k,2),num2str(f(k+1)),...
'FontSize',12+k)
end
And here is the result:
Getting the labels position:
If you wonder what a fast way to obtain the variable label_pos is, here is how I made it...
You run the code above without the last for loop. Then you run the following code:
clabel(C,'manual')
f = gcf;
label_pos = zeros(numel(f.Children.Children)-1,2);
for k = 1:2:size(label_pos,1)
label_pos(k,:) = f.Children.Children(k).Position(1:2);
end
label_pos(2:2:size(label_pos,1),:) = [];
After the first line the script will pause and you will see this message in the command window:
Carefully select contours for labeling.
When done, press RETURN while the Graph window is the active window.
Click on the figure where you want to have a label, and press Enter.
That's it! Now the variable label_pos has the positions of the labels, just as I used it above.

Calculate Quantization error in MATLAB

I was given this solution to a problem in my course material.
Problem:
A signal x(t) is sampled at 10 samples/sec. Consider the first 10 samples of x(t):
x(t) = 0.3 cos(2*pi*t);
Using an 8-bit quantiser, find the quantisation error.
Solution:
(256 quantisation levels)
t=1:10;
x=(0.3)*cos(2*pi*(t-1)/10);
mx=max(abs(x));
q256=mx*(1/128)*floor(128*(x/mx));
stem(q256)
e256=(1/10)*sum(abs(x-q256))
Error: e256 = 9.3750e-04
There was no explanation for this; can you explain how this was calculated in detail?
For the first two code lines I prefer,
Fs = 10;
L = 10;
t = (0 : L - 1) / Fs;
x = 0.3 * cos(2 * pi * t);
where Fs is the sampling frequency, L is the number of samples, and t is the time vector.
Note that x is sinusoidal with a frequency of Fx = 1 Hz, or we can say that it's periodic with Tx = 1 sec.
For 8-bit quantization we have 256 levels. Since L / Fs = [10 samples] / [10 samples/sec] = 1 sec is equal to Tx (a whole period of x), we can work with positive samples.
mx = max(abs(x));
mx is defined because, in order to use floor, we need to scale x.
q256 = mx*(1/128)*floor(128*(x/mx));
mx is the maximum value of |x|, so x/mx will take values over [-1, 1], and 128*x/mx over [-128, 128] will cover all 256 levels.
So we quantize it with floor and scale it back (mx*(1/128)).
e256 = (1/L)*sum(abs(x-q256))
e256 simply shows the mean absolute error over the 10 samples.
Note that if L / Fs < Tx then this quantization won't be the optimum one.
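Putting the pieces above together, a minimal self-contained version (with my preferred definitions) would look something like this:
Fs = 10; % sampling frequency [samples/sec]
L = 10; % number of samples
t = (0 : L - 1) / Fs; % time instants of the samples
x = 0.3 * cos(2 * pi * t); % sampled signal
mx = max(abs(x)); % scale factor so that x/mx lies in [-1 1]
q256 = mx * (1/128) * floor(128 * (x / mx)); % 8-bit (256-level) quantization
e256 = (1/L) * sum(abs(x - q256)) % mean absolute quantization error
stem(t, q256) % quantized samples vs. time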
Bear in mind
The answer that you were given has some problems!
Suppose x = [-1 -.2 0 .7 1]; and we want to quantize it with 2 bits.
mx = max(abs(x));
q4 = mx * (1/2) * floor(2*(x/mx));
This will give q4 = [-1 -0.5 0 0.5 1], which has 5 levels (instead of 2^2 = 4).
It might not be a big problem; you can remove the level at x = 1 and have q4 = [-1 -0.5 0 0.5 0.5], but the code still needs some improvement and of course the error will increase.
A simple solution is to add
[~,ind] = max(x);
x(ind) = x(ind) - 1e-10;
after the definition of mx, so the maximum value of x will be quantized one level lower.
The error will increase to 0.0012.
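To see the effect end to end, a small self-contained check of that 2-bit example (before and after the suggested fix) could look like this:
x = [-1 -.2 0 .7 1]; % toy signal for a 2-bit (4-level) quantizer
mx = max(abs(x));
q4 = mx * (1/2) * floor(2 * (x / mx)); % gives [-1 -0.5 0 0.5 1], i.e. 5 levels
[~,ind] = max(x); % suggested fix: nudge the maximum sample down
x(ind) = x(ind) - 1e-10;
q4fixed = mx * (1/2) * floor(2 * (x / mx)); % gives [-1 -0.5 0 0.5 0.5], i.e. 4 levels
disp(q4), disp(q4fixed)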