Why am I getting the wrong sign on my cos(x) approximation? - matlab

I am writing a Matlab script that will approximate sin(x) and cos(x) using their Maclaurin polynomials.
When I input
arg = (5*pi)/4 I expect to get the correct approximations for
sin((5*pi)/4) = -0.7071067811865474617
cos((5*pi)/4) = -0.7071067811865476838.
Instead I get the following when running the script:
Approximation of sin(3.92699) >> -0.7071067811865474617
Actual sin(3.92699) = -0.7071067811865474617
Error approximately = 0.0000000000000000000 (0)
----------------------------------------------------------
Approximation of cos(3.92699) >> 0.7071067811865474617
Actual cos(3.92699) = -0.7071067811865476838
Error approximately = 0.0000000000000001110 (1.1102e-16)
I am getting the correct answers for sin but incorrect for cosine when the argument (angle) is in quadrant 3 or 4. The problem is that I am getting the wrong sign on the cos(arg) value. Where have I messed up?
CalculatorForSineCosine.m
% Argument for sine/cosine in radians.
arg = (5*pi)/4;
% Move the argument x so it's within [0, pi/2].
newArg = moveArgumentV2(arg);
% Calculate what degree we need for our Taylor polynomial.
TOL = 0; % If 0, assume we want Machine Epsilon.
r = findDegreeV2(TOL);
% Plot the nth-degree Taylor polynomial around x = 0 for sine,
% and calculate the approximation of sin(x).
[approximatedSin, errorSin] = sin_taylorV2(r, newArg);
eS = num2str(errorSin); % errorSin in string format
% Plot the nth-degree Taylor polynomial around x = 0 for cosine,
% and calculate the approximation of cos(x).
[approximatedCos, errorCos] = cos_taylorV2(r, newArg);
eC = num2str(errorCos); % errorCos in string format
% Print out the result.
fprintf('\nApproximation of sin(%.5f)\t >> %.19f\n', arg, approximatedSin);
fprintf('Actual sin(%.5f)\t\t\t\t = %.19f\n', arg, sin(arg));
fprintf('Error approximately\t\t\t\t = %.19f (%s)\n', errorSin, eS);
disp("----------------------------------------------------------")
fprintf('Approximation of cos(%.5f)\t >> %.19f\n', arg, approximatedCos);
fprintf('Actual cos(%.5f)\t\t\t\t = %.19f\n', arg, cos(arg));
fprintf('Error approximately\t\t\t\t = %.19f (%s)\n\n', errorCos, eC);
sin_taylorV2.m
function [approximatedSin, errorSin] = sin_taylorV2(r, x)
%% Approximates sin(x) with its Maclaurin polynomial of degree r.
% Q_2n+1(x) where 2n+1 = degree of polynomial.
n = (r - 1)/2;
% Approximate sin(x) using its Taylor polynomial.
approximatedSin = 0;
for k = 0:n
approximatedSin = approximatedSin + (((-1).^k) .* (x.^(2.*k+1)))./(factorial(2.*k+1));
end
% Calculate the error.
errorSin = abs(sin(x) - approximatedSin);
end
cos_taylorV2.m
function [approximatedCos, errorCos] = cos_taylorV2(r, x)
%% Approximates cos(x) with its Maclaurin polynomial of degree r.
% Q_2n+1(x) where 2n+1 = degree of polynomial and n = # terms.
n = (r - 1)/2;
% Approximate cos(x) using its Taylor polynomial.
approximatedCos = 0;
for k = 0:n
approximatedCos = approximatedCos + (((-1).^k) .* (x.^(2.*k)))./(factorial(2.*k));
end
% Calculate the error.
errorCos = abs(cos(x) - approximatedCos);
end
moveArgumentV2.m
function newArg = moveArgumentV2(arg)
%% Moves the argument x to the interval [0, pi/2].
% Make use of sine's periodicity and choose n as ceil((x-pi)/(2*pi)).
n = ceil((arg-pi)/(2*pi));
x1 = arg - 2*pi*n; % New angle will be in [-pi, pi]
x2 = abs(x1); % Angle will be in [0, pi]
if (x2 < pi/2) && (x2 > 0)
x3 = x2;
else
x3 = pi - x2;
end
newArg = x3*sign(x1); % Angle will be in [0, pi/2]
end

I would like to point out two things in your code.
First, you don't need the moveArgumentV2(arg) function: remember that the radius of convergence of the Maclaurin/Taylor series of sin(x)/cos(x) is infinite, so the series converges for every real x, disregarding the round-off errors inherent to every arithmetic operation done on a computer.
As a matter of fact, following your code, we can write a function that approximates the cos as:
function y = mycos(x,n)
y = 0;
for k=0:n
term = (-1)^k*x.^(2*k)/factorial(2*k);
y = y + term;
end
end
Notice this function works for values outside the range [-pi,pi]:
x = -10*pi:0.1:10*pi;
ye = cos(x); % exact value
ya = mycos(x,100); % approximated value
plot(x,ye,x,ya,'o')
The values returned by the mycos function are close to the exact values given by the built-in cos function. This happens because I calculated the approximation with the first 100 terms. The error, however, becomes extremely large for higher values of x if we use just a few terms.
ya = mycos(x,10); % approximated value with 10 terms only
plot(x,ye-ya); title('error')
The problem now is that we can't just increase the number of terms without running into another problem.
If we increase the number of terms, the mycos function crumbles due to round-off errors, because the factorial function overflows. A good idea is to change your code so that it avoids the factorial function altogether. Notice the recurrence between successive terms in the Maclaurin expansion of the cos function, and you can create another function without the use of the factorial:
function y = mycos2(x,n)
term = 1;
y = 1;
for k=1:n
term = -term.*x.^2/(2*k-1)/(2*k);
y = y + term;
end
end
Here, we calculate each term in the series expansion from the previously calculated term. We avoid the calculation of the factorial and make use of what we already have. This speeds up the code and avoids overflow. As a matter of fact, if we now calculate the cos approximation with 500 terms, we get:
x = -10*pi:0.5:10*pi;
ye = cos(x); % exact value
ya = mycos(x,500); % approximated value
ya2 = mycos2(x,500); % approximated value
plot(x,ye,x,ya,'x',x,ya2,'s')
legend('ye','ya','ya2')
Notice in this figure that the x marks are the calculations done with the mycos function, while the square marks are done with mycos2, i.e. without the factorial function. The first function crumbles for values outside the range [-2,2], but the second one runs just fine. It works even when I use 1e5 terms. Increasing the number of terms reduces the error, so you can estimate how many terms you will need for an approximation, given a desired tolerance. If this number is greater than 170, the first function will not work properly.
factorial(170) returns 7.2574e+306, but factorial(171) returns Inf, so any value that requires more than 170 terms will have problems in the first function. Avoid the calculation of factorial at all costs.
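If you want to pick the number of terms from a tolerance rather than fixing it in advance, one possible helper (a sketch, not part of the code above; mycos_tol is a hypothetical name and it handles a scalar x only) keeps adding terms with the same factorial-free recurrence until the newest term drops below the tolerance:
function y = mycos_tol(x, tol)
% Sum the Maclaurin series of cos(x) for a scalar x, stopping once the most
% recently added term is smaller than tol. Same recurrence as mycos2.
term = 1; % k = 0 term
y = 1;
k = 0;
while abs(term) > tol
    k = k + 1;
    term = -term * x^2 / ((2*k-1) * (2*k));
    y = y + term;
end
end
Since the series alternates and its terms eventually decrease, the first omitted term gives a rough bound on the truncation error, e.g. mycos_tol(pi/4, 1e-12).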

This is what I tried:
x = -3*pi:0.01:3*pi;
y = x;
for ii=1:numel(y)
y(ii) = moveArgumentV2(y(ii)); % not vectorized
end
plot(sin(x))
hold on
plot(sin(y))
Both sin(x) and sin(y) produce the same plot. But:
plot(cos(x))
hold on
plot(cos(y))
Now we see that cos(x) and cos(y) are not the same! This is because moveArgumentV2 changes the angle to be in the first and fourth quadrant (in the range [-pi/2, pi/2]), which is what you need for the sin function, but is not adequate for the cos function.
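You can see the same thing numerically with the argument from the question (a quick check, assuming moveArgumentV2.m is on the path):
arg = 5*pi/4;             % moveArgumentV2(arg) is -pi/4
sin(arg)                  % -0.7071...
sin(moveArgumentV2(arg))  % -0.7071... -- the reduction preserves sin
cos(arg)                  % -0.7071...
cos(moveArgumentV2(arg))  %  0.7071... -- but it flips the sign of cos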
I would modify sin_taylorV2 and cos_taylorV2 to call moveArgumentV2, so you don't rely on the caller to know what the valid input range is. In cos_taylorV2 you would need to call it this way:
x = moveArgumentV2(x+pi/2) - pi/2;
and in sin_taylorV2 you'd call it the same way you do now.
Or, better, write cos_taylorV2 in terms of sin_taylorV2, which we know to be correct. This avoids code duplication.
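A minimal sketch of that last suggestion (assuming sin_taylorV2 and moveArgumentV2 stay as they are): reduce the shifted argument inside the routine and use cos(x) = sin(x + pi/2).
function [approximatedCos, errorCos] = cos_taylorV2(r, x)
%% Approximates cos(x) via the sine series, using cos(x) = sin(x + pi/2).
xr = moveArgumentV2(x + pi/2);             % reduction is safe here: it preserves sin
[approximatedCos, ~] = sin_taylorV2(r, xr);
errorCos = abs(cos(x) - approximatedCos);
end
With this version the calling script would pass the original arg (not newArg) to cos_taylorV2.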

Related

How to use matlab to quickly judge whether a function is convex?

For example, f(x) = x^2 + sin(x).
Just out of curiosity, I don't want to use the CVX toolbox to do this.
You can check this within some interval [a,b] by checking if the second derivative is nonnegative. For this you have to define a vector of x-values, find the numerical second derivative and check whether it is not too negative:
a = 0;
b = 1;
margin = 1e-5;
point_count = 100;
f = @(x) x.^2 + sin(x);
x = linspace(a, b, point_count);
is_convex = all(diff(f(x), 2) > -margin);
Since this is a numerical test, you need to adjust the parameters to the properties of the function; that is, if the function does wild things on a small scale, we might not be able to pick it up. E.g. with a sampling this coarse, the test can falsely report a rapidly oscillating function such as f = @(x) sin(99.5*2*pi*x-3) as convex.
clear
syms x real
syms f(x) d(x) d1(x)
f = x^2 + sin(x)
d = diff(f,x,2) == 0
d1 = diff(f,x,2)
expSolution = solve(d, x)
if size(expSolution,1) == 0
    if eval(subs(d1,x,0)) > 0
        disp("condition 1 -- the graph is concave upward");
    else
        disp("condition 2 -- the graph is concave downward");
    end
else
    disp("condition 3 -- not certain")
end

Laguerre's method to obtain poly roots (Matlab)

I must write, using Laguerre's method, a piece of code to find the real and complex roots of the polynomial:
P = X^5 - 5*X^4 - 6*X^3 + 6*X^2 - 3*X + 1
I have a small doubt: I implemented the algorithm in MATLAB, but 3 out of the 5 roots it reports are the same, and I don't think that is correct.
syms X %Declaring X as a symbolic variable
P=X^5-5*X^4-6*X^3+6*X^2-3*X+1; %Equation we interest to solve
n=5; % The eq. order
Pd1 = diff(P,X,1); % first derivative of P
Pd2 = diff(P,X,2); % second derivative of P
err=0.00001; %Answer tolerance
N=100; %Max. # of Iterations
x(1)=1e-3; % Initial Value
for k=1:N
G=double(vpa(subs(Pd1,X,x(k))/subs(P,X,x(k))));
H=G^2 - double(subs(Pd2,X,x(k))) /subs(P,X,x(k));
D1= (G+sqrt((n-1)*(n*H-G^2)));
D2= (G-sqrt((n-1)*(n*H-G^2)));
D = max(D1,D2);
a=n/D;
x(k+1)=x(k)-a
Err(k) = abs(x(k+1)-x(k));
if Err(k) <=err
break
end
end
output (roots of polynomial):
x =
0.0010 + 0.0000i 0.1434 + 0.4661i 0.1474 + 0.4345i 0.1474 + 0.4345i 0.1474 + 0.4345i
What you actually see are all the values x(k) which arose in the loop. The last one, 0.1474 + 0.4345i, is the end result of this loop: the approximation of a root that lies within your given tolerance. The code
syms X %Declaring x as a variable
P = X^5 - 5 * X^4 - 6 * X^3 + 6 * X^2 - 3 * X + 1; %Polynomial
n=5; %Degree of the polynomial
Pd1 = diff(P,X,1); %First derivative of P
Pd2 = diff(P,X,2); %Second derivative of P
err = 0.00001; %Answer tolerance
N = 100; %Maximal number of iterations
x(1) = -2; %Initial value
for k = 1:N
G = double(vpa(subs(Pd1,X,x(k)) / subs(P,X,x(k))));
H = G^2 - double(subs(Pd2,X,x(k))) / subs(P,X,x(k));
D1 = (G + sqrt((n-1) * (n * H-G^2)));
D2 = (G - sqrt((n-1) * (n * H-G^2)));
D = max(D1,D2);
a = n/D;
x(k+1) = x(k) - a;
Err(k) = abs(x(k+1)-x(k));
if Err(k) <=err
fprintf('Initial value %f, result %f%+fi', x(1), real(x(k)), imag(x(k)))
break
end
end
results in
Initial value -2.000000, result -1.649100+0.000000i
If you want to get other roots, you have to use other initial values. For example one can obtain
Initial value 10.000000, result 5.862900+0.000000i
Initial value -2.000000, result -1.649100+0.000000i
Initial value 3.000000, result 0.491300+0.000000i
Initial value 0.000000, result 0.147400+0.434500i
Initial value 1.000000, result 0.147400-0.434500i
These are all zeros of the polynomial.
A method for calculating the next root once you have found one is to divide the polynomial by the corresponding linear factor and run your loop on the resulting, deflated polynomial. Note that this is in general not very easy to handle, since rounding errors can have a big influence on the result.
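In MATLAB the division can be done with deconv. A rough sketch of the idea, using the approximate root quoted above as a stand-in for one you have already computed:
P = [1 -5 -6 6 -3 1];              % coefficients of X^5 - 5*X^4 - 6*X^3 + 6*X^2 - 3*X + 1
r1 = 0.1474 + 0.4345i;             % approximate root found by the Laguerre iteration
[Pdef, rem] = deconv(P, [1 -r1]);  % divide out the linear factor (X - r1)
% rem should be small; now rerun the Laguerre loop on the deflated polynomial Pdef.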
Problems with the existing code
You do not implement the Laguerre method properly as a method in complex numbers. The denominator candidates D1, D2 are in general complex numbers, and it is inadvisable to use a simple max, which only gives sensible results for real inputs. The aim is for a = n/D to be the smaller of the two variants, so one has to look for the D in [D1, D2] with the larger absolute value. If there were a conditional assignment operator as in C, this would look like
D = (abs(D1)>abs(D2)) ? D1 : D2;
As that does not exist, one has to use commands with a similar result
D = D1; if (abs(D1)<abs(D2)) D=D2; end
The resulting sequence of approximation points is
x(0) = 0.0010000
x(1) = 0.143349512707684+0.466072958423667i
x(2) = 0.164462212064089+0.461399841949893i
x(3) = 0.164466373475316+0.461405404094130i
There is a point where one cannot expect the (residual) polynomial value at the root approximation to decrease substantially. The value close to zero is obtained by adding and subtracting rather large terms in the sum expression of the polynomial, and the accuracy lost in these catastrophic cancellation events cannot be recovered.
The threshold for polynomial values that are effectively zero can be estimated as the machine epsilon of the double type times the polynomial value with all coefficients and the evaluation point replaced by their absolute values. In the code below, this test serves primarily to avoid divisions by zero or near-zero values.
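In code, with P the coefficient vector and x the current approximation, that estimate is a one-liner (eps is MATLAB's double-precision machine epsilon):
ptol = eps * polyval(abs(P), abs(x));           % scale of round-off when evaluating P at x
isEffectivelyZero = abs(polyval(P, x)) < ptol;  % then x can be accepted as a root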
Finding all roots
One approach is to apply the method to a sufficiently large number of initial points along some circle containing all the roots, with some strict rules for early termination when convergence is too slow. One would then have to make the list of roots found unique while keeping track of multiplicities, ...
The other standard method is to apply deflation, that is, to divide out the linear factor of the root found. This works well for low degrees.
There is no need for the slower symbolic operations, as there are functions that work directly on the coefficient array, such as polyval and polyder. Deflation by division with remainder can be achieved using the deconv function.
For real polynomials, we know that the complex conjugate of a root is also a root. Thus, after deflating, initialize the next iteration of the loop with that conjugate.
Other points:
There is no point in the double conversions, as at no point is there a conversion into the single type.
If you don't do anything with it, there is no sense in creating an array, especially not for Err.
Roots of the example
Implementing all this I get a log of
x(0) = 0.001000000000000+0.000000000000000i, |Pn(x(0))| = 0.99701
x(1) = 0.143349512707684+0.466072958423667i, |dx|= 0.48733
x(2) = 0.164462212064089+0.461399841949893i, |dx|=0.021624
x(3) = 0.164466373475316+0.461405404094130i, |dx|=6.9466e-06
root found x=0.164466373475316+0.461405404094130i with value P0(x)=-2.22045e-16+9.4369e-16i
Deflation
x(0) = 0.164466373475316-0.461405404094130i, |Pn(x(0))| = 2.1211e-15
root found x=0.164466373475316-0.461405404094130i with value P0(x)=-2.22045e-16-9.4369e-16i
Deflation
x(0) = 0.164466373475316+0.461405404094130i, |Pn(x(0))| = 4.7452
x(1) = 0.586360702193454+0.016571894375927i, |dx|= 0.61308
x(2) = 0.562204173408499+0.000003168181059i, |dx|=0.029293
x(3) = 0.562204925474889+0.000000000000000i, |dx|=3.2562e-06
root found x=0.562204925474889+0.000000000000000i with value P0(x)=2.22045e-16-1.33554e-17i
Deflation
x(0) = 0.562204925474889-0.000000000000000i, |Pn(x(0))| = 7.7204
x(1) = 3.332994579372812-0.000000000000000i, |dx|= 2.7708
root found x=3.332994579372812-0.000000000000000i with value P0(x)=6.39488e-14-3.52284e-15i
Deflation
x(0) = 3.332994579372812+0.000000000000000i, |Pn(x(0))| = 5.5571
x(1) = -2.224132251798332+0.000000000000000i, |dx|= 5.5571
root found x=-2.224132251798332+0.000000000000000i with value P0(x)=-3.33067e-14+1.6178e-15i
for the modified code
P = [1, -2, -6, 6, -3, 1];
P0 = P;
deg=length(P)-1; % The eq. degree
err=1e-05; %Answer tolerance
N=10; %Max. # of Iterations
x=1e-3; % Initial Value
for n=deg:-1:1
dP = polyder(P); % first derivative of P
d2P = polyder(dP); %second derivative of P
fprintf("x(0) = %.15f%+.15fi, |Pn(x(0))| = %8.5g\n", real(x),imag(x), abs(polyval(P,x)));
for k=1:N
Px = polyval(P,x);
dPx = polyval(dP,x);
d2Px = polyval(d2P,x);
if abs(Px) < 1e-14*polyval(abs(P),abs(x))
break % if value is zero in relative accuracy
end
G = dPx/Px;
H=G^2 - d2Px / Px;
D1= (G+sqrt((n-1)*(n*H-G^2)));
D2= (G-sqrt((n-1)*(n*H-G^2)));
D = D1;
if abs(D2)>abs(D1) D=D2; end % select the larger denominator
a=n/D;
x=x-a;
fprintf("x(%d) = %.15f%+.15fi, |dx|=%8.5g\n",k,real(x),imag(x), abs(a));
if abs(a) < err*(err+abs(x))
break
end
end
y = polyval(P0,x); % check polynomial value of the original polynomial
fprintf("root found x=%.15f%+.15fi with value P0(x)=%.6g%+.6gi\n", real(x),imag(x),real(y),imag(y));
disp("Deflation");
[ P,R ] = deconv(P,[1,-x]); % division with remainder
x = conj(x); % shortcut for conjugate pairs and clustered roots
end

Solve for independent variable between data points in MATLAB

I have many sets of data over the same time period, with a timestep of 300 seconds. Sets that terminate before the end of the observation period (here I've truncated it to 0 to 3000 seconds) have NaNs in the remaining spaces:
x = [0;300;600;900;1200;1500;1800;2100;2400;2700;3000];
y(:,1) = [4.65;3.67;2.92;2.39;2.02;1.67;1.36;1.07;NaN;NaN;NaN];
y(:,2) = [4.65;2.65;2.33;2.18;2.03;1.89;1.75;1.61;1.48;1.36;1.24];
y(:,3) = [4.65;2.73;1.99;1.49;1.05;NaN;NaN;NaN;NaN;NaN;NaN];
I would like to know at what time each dataset would reach the point where y is equal to a specific value, in this case y = 2.5
I first tried finding the nearest y value to 2.5, and then using the associated time, but this isn't very accurate (the dots should all fall on the same horizontal line):
ybreak = 2.5;
for ii = 1:3
[~, index] = min(abs(y(:,ii)-ybreak));
yclosest(ii) = y(index,ii);
xbreak(ii) = x(index);
end
I then tried doing a linear interpolation between data points, and then solving for x at y=2.5, but wasn't able to make this work:
First I removed the NaNs (it seems like there must be a simpler way of doing this?):
for ii = 1:3
NaNs(:,ii) = isnan(y(:,ii));
for jj = 1:length(x);
if NaNs(jj,ii) == 0;
ycopy(jj,ii) = y(jj,ii);
end
end
end
Then tried fitting:
for ii = 1:3
f(ii) = fit(x(1:length(ycopy(:,ii))),ycopy(:,ii),'linearinterp');
end
And get the following error message:
Error using cfit/subsasgn (line 7)
Can't assign to an empty FIT.
When I try fitting outside the loop (for just one dataset), it works fine:
f = fit(x(1:length(ycopy(:,1))),ycopy(:,1),'linearinterp');
f =
Linear interpolant:
f(x) = piecewise polynomial computed from p
Coefficients:
p = coefficient structure
But I then still can't solve f(x)=2.5 to find the time at which y=2.5
syms x;
xbreak = solve(f(x) == 2.5,x);
Error using cfit/subsref>iParenthesesReference (line 45)
Cannot evaluate CFIT model for some reason.
Error in cfit/subsref (line 15)
out = iParenthesesReference( obj, currsubs );
Any advice or thoughts on other approaches to this would be much appreciated. I need to be able to do it for many, many datasets, all of which have different numbers of NaN values.
As you mention, y = 2.5 is not in your data set, so the value of x which corresponds to it depends on the interpolation method you use. For linear interpolation, you could use something like the following:
x = [0;300;600;900;1200;1500;1800;2100;2400;2700;3000];
y(:,1) = [4.65;3.67;2.92;2.39;2.02;1.67;1.36;1.07;NaN;NaN;NaN];
y(:,2) = [4.65;2.65;2.33;2.18;2.03;1.89;1.75;1.61;1.48;1.36;1.24];
y(:,3) = [4.65;2.73;1.99;1.49;1.05;NaN;NaN;NaN;NaN;NaN;NaN];
N = size(y, 2);
x_interp = NaN(N, 1);
for i = 1:N
idx = find(y(:,i) >= 2.5, 1, 'last');
x_interp(i) = interp1(y(idx:idx+1, i), x(idx:idx+1), 2.5);
end
figure
hold on
plot(x, y)
scatter(x_interp, repmat(2.5, N, 1))
hold off
It's worth keeping in mind that the above code is assuming your data is monotonically decreasing (as your data is), but this solution could be adapted for monotonically increasing as well.
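One way to adapt it (a sketch, assuming the non-NaN part of each column is monotonic, so the y values are distinct and valid sample points for interp1) is to interpolate x as a function of y over all non-NaN samples; this works for decreasing and increasing data alike:
ybreak = 2.5;
x_interp = NaN(N, 1);
for i = 1:N
    valid = ~isnan(y(:,i));                              % drop the NaN tail
    x_interp(i) = interp1(y(valid,i), x(valid), ybreak); % x as a function of y
end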

Matlab : Confusion regarding unit of entropy to use in an example

Figure 1. Hypothesis plot. y axis: Mean entropy. x axis: Bits.
This question is a continuation of a previous one: Matlab : Plot of entropy vs digitized code length.
I want to calculate the entropy of a random variable that is a discretized version (0/1) of a continuous random variable x. The random variable denotes the state of a nonlinear dynamical system called the Tent Map. Iterating the Tent Map yields a time series of length N.
The code should exit as soon as the entropy of the discretized time series becomes equal to the entropy of the dynamical system. It is known theoretically that the entropy of the system, H, is log_e(2), i.e. ln(2) ≈ 0.69. The objective of the code is to find the number of iterations, j, needed to produce the same entropy as the entropy of the system, H.
Problem 1: When I calculate the entropy of the binary time series, which is the information message, should I be doing it in the same base as H? Or should I convert the value of H to bits because the information message is in 0/1? The two give different results, i.e., different values of j.
Problem 2: It can happen that the probability of 0's or 1's becomes zero, so the entropy term corresponding to it blows up. To prevent this, I thought of putting in a check using if-else. But the check
if entropy(:,j)==NaN
entropy(:,j)=0;
end
does not seem to be working. I shall be grateful for ideas and help to solve this problem. Thank you.
UPDATE: I implemented the suggestions and answers to correct the code. However, my earlier logic was not proper. In the revised code, I want to calculate the entropy for time series of length 2, 8, 16, 32 bits. For each code length, the entropy is calculated. The entropy calculation for each code length is repeated N times, each time starting from a different initial condition of the dynamical system. This approach is adopted to check at which code length the entropy becomes 1. The plot of entropy vs bits should increase from zero and gradually approach 1, after which it saturates, remaining constant for all the remaining bits. I am unable to get this curve (Figure 1). I would appreciate help in finding where I am going wrong.
clear all
H = 1 %in bits
Bits = [2,8,16,32,64];
threshold = 0.5;
N=100; %Number of runs of the experiment
for r = 1:length(Bits)
t = Bits(r)
for Runs = 1:N
x(1) = rand;
for j = 2:t
% Iterating over the Tent Map
if x(j - 1) < 0.5
x(j) = 2 * x(j - 1);
else
x(j) = 2 * (1 - x(j - 1));
end % if
end
%Binarizing the output of the Tent Map
s = (x >=threshold);
p1 = sum(s == 1 ) / length(s); %calculating probability of number of 1's
p0 = 1 - p1; % calculating probability of number of 0's
entropy(t) = -p1 * log2(p1) - (1 - p1) * log2(1 - p1); %calculating entropy in bits
if isnan(entropy(t))
entropy(t) = 0;
end
%disp(abs(lambda-H))
end
Entropy_Run(Runs) = entropy(t)
end
Entropy_Bits(r) = mean(Entropy_Run)
plot(Bits,Entropy_Bits)
For problem 1, H and entropy can be in either nats or bits, as long as they are both computed using the same units. In other words, you should use either log for both or log2 for both. With the code sample you provided, H and entropy are correctly calculated using consistent nats units. If you prefer to work in units of bits, the conversion of H should give you H = log(2)/log(2) = 1 (or, using the conversion factor 1/log(2) ~ 1.443, H ~ 0.69 * 1.443 ~ 1).
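For instance, both of the following comparisons are consistent, as long as you never mix the two unit systems (p1 here is just an example probability, not a value from your run):
p1 = 0.3;  p0 = 1 - p1;                  % example probabilities only
H_nats = log(2);                         % system entropy in nats, ~0.6931
H_bits = log(2)/log(2);                  % the same entropy in bits, = 1
S_nats = -p1*log(p1)  - p0*log(p0);      % time-series entropy in nats
S_bits = -p1*log2(p1) - p0*log2(p0);     % time-series entropy in bits
% Compare S_nats with H_nats, or S_bits with H_bits -- never across units.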
For problem 2, as @noumenal already pointed out, you can check for NaN using isnan. Alternatively you could check if p1 is within (0,1) (excluding 0 and 1) with:
if (p1 > 0 && p1 < 1)
entropy(:,j) = -p1 * log(p1) - (1 - p1) * log(1 - p1); %calculating entropy in natural base e
else
entropy(:, j) = 0;
end
First you just define the following function:
function [mean_entropy, bits] = compute_entropy(bits, blocks, threshold, replicate)
if replicate
disp('Replication is ON');
else
disp('Replication is OFF');
end
%%
% Populate random vector
if replicate
seed = 849;
rng(seed);
else
rng('default');
end
rs = rand(blocks, 1); % one random initial condition per block
%%
% Get random
trial_entropy = zeros(length(bits), length(rs));
for r = 1:length(rs)
bit_entropy = zeros(length(bits), 1); % H
% Traverse bit trials
for b = 1:length(bits) % N
tent_map = zeros(bits(b), 1); %Preallocate for memory management
%Initialize
tent_map(1) = rs(r);
for j = 2:bits(b) % j is the iterator, bits(b) is the current bit length
if tent_map(j - 1) < threshold
tent_map(j) = 2 * tent_map(j - 1);
else
tent_map(j) = 2 * (1 - tent_map(j - 1));
end % if
end
%Binarize the output of the Tent Map
s = (tent_map >= threshold);
p1 = sum(s) / length(s); %calculate probability of number of 1's
%p0 = 1 - p1; % calculate probability of number of 0's
bit_entropy(b) = -p1 * log2(p1) - (1 - p1) * log2(1 - p1); %calculate entropy in bits
if isnan(bit_entropy(b))
bit_entropy(b) = 0;
end
%disp(abs(lambda-h))
end
trial_entropy(:, r) = bit_entropy;
disp('Trial Statistics')
data = get_summary(bit_entropy);
disp('Mean')
disp(data.mean);
disp('SD')
disp(data.sd);
end
% TO DO Compute the mean for each BIT index in trial_entropy
mean_entropy = 0;
disp('Overall Statistics')
data = get_summary(trial_entropy);
disp('Mean')
disp(data.mean);
disp('SD')
disp(data.sd);
%This is the wrong mean...
mean_entropy = data.mean;
function summary = get_summary(entropy)
summary = struct('mean', mean(entropy), 'sd', std(entropy));
end
end
and then you just have to run the following script:
% Entropy Script
clear all
%% Settings
replicate = false; % = false % Use true for debugging only.
%H = 1; %in bits
Bits = 2.^(1:6);
Threshold = 0.5;
%Tolerance = 0.001;
Blocks = 100; %Number of runs of the experiment
%% Run
[mean_entropy, bits] = compute_entropy(Bits, Blocks, Threshold, replicate);
%What we want
%plot(bits, mean_entropy);
%What we have
plot(1:length(mean_entropy), mean_entropy);

Numerical Integral in MatLab using integral command

I am trying to compute the value of this integral using Matlab.
Here the other parameters have been defined or computed in the earlier part of the program as follows:
N = 2;
sigma = [0.01 0.1];
l = [15];
meu = 4*pi*10^(-7);
f = logspace ( 1, 6, 500);
w=2*pi.*f;
for j = 1 : length(f)
q2(j)= sqrt(sqrt(-1)*2*pi*f(j)*meu*sigma(2));
q1(j)= sqrt(sqrt(-1)*2*pi*f(j)*meu*sigma(1));
C2(j)= 1/(q2(j));
C1(j)= (q1(j)*C2(j) + tanh(q1(j)*l))/(q1(j)*(1+q1(j)*C2(j)*tanh(q1(j)*l)));
Z(j) = sqrt(-1)*2*pi*f(j)*C1(j);
Apprho(j) = meu*(1/(2*pi*f(j))*(abs(Z(j))^2));
Phi(j) = atan(imag(Z(j))/real(Z(j)));
end
%integration part
c1=w./(2*pi);
rho0=1;
fun = @(x) log(Apprho(x)/rho0)/(x.^2-w^2);
c2= integral(fun,0,Inf);
phin=pi/4-c1.*c2;
I am getting an error like this:
Could anyone help and tell me where I am going wrong? Thanks in advance.
Define Apprho in a separate *.m function file, instead of storing it in an array:
function [ result ] = Apprho(x)
%
% Calculate f and Z based on input argument x
%
% ...
%
meu = 4*pi*10^(-7);
result = meu*(1/(2*pi*f)*(abs(Z)^2));
end
How you calculate f and Z is up to you.
MATLAB's integral works by calling the function (in this case, Apprho) repeatedly at many different x values. The x values passed by integral don't necessarily correspond to the 1:length(f) indices used in your original code, which is why you received errors.
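A minimal sketch of such a function, assuming x stands for the frequency f in the integrand and reusing the formulas from the loop in your question (the parameter values are hard-coded here only for illustration):
function result = Apprho(x)
% Apparent resistivity as a function of frequency x, written with element-wise
% operations so that integral can evaluate it at a whole vector of x values.
sigma = [0.01 0.1];
l = 15;
meu = 4*pi*10^(-7);
q2 = sqrt(1i*2*pi.*x*meu*sigma(2));
q1 = sqrt(1i*2*pi.*x*meu*sigma(1));
C2 = 1./q2;
C1 = (q1.*C2 + tanh(q1*l)) ./ (q1.*(1 + q1.*C2.*tanh(q1*l)));
Z  = 1i*2*pi.*x.*C1;
result = meu .* (1./(2*pi.*x)) .* abs(Z).^2;
end
The integrand handle then also needs element-wise operations and a scalar frequency, e.g. something like fun = @(x) log(Apprho(x)/rho0)./(x.^2 - w(j)^2), evaluated for one w(j) at a time.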