Should I use the optional argument v0 in scipy.sparse.linalg.eigs to kickstart the Arnoldi iterations - scipy

I'm using scipy.sparse.linalg.eigs to find the leading eigenvalues/vectors of a LinearOperator. It so happens that I already have at my disposal a pretty good approximation of a few eigenvalues/vectors that I just want refined.
Would it be reasonable to feed them to scipy.sparse.linalg.eigs via the v0 argument?
The documentation relative to v0 is rather scarce:
v0 ndarray, optional
Starting vector for iteration. Default: random

How do I find the integral of the movmean function on the top graph
The intz doesn't work, and I'm not so sure what trapz gives me:
subplot(3,1,1);
Fz1=detrend(Fz(500:1200000));
plot(Fz1,'Color','k');
hold on
M1 = movmean(Fz(500:1200000),[2000,2000]);
M11=detrend(M1);
plot(M11,'Color','r')
trapz(M11)
intz = int(M11)
This line, intz = int(M11), is throwing an exception. Please tell me what is wrong here.
You are confusing your commands. trapz is the command that does the integration, using the trapezoidal method. int is not a numerical integration command: base MATLAB has no function named int (the integer conversion functions are int8, int16, int32, and int64), and the int function in the Symbolic Math Toolbox performs symbolic integration of symbolic expressions, not numeric integration of double arrays.
Most likely you are seeing this kind of error
M11 = [1.2 3.4]; % as an example
int(M11)
Undefined function 'int' for input arguments of type 'double'.
because int is not defined for double arrays such as M11. In short, int is not needed for this; just use trapz. See help trapz in MATLAB for details on using this command.
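If what you want is the numerical integral of the smoothed curve, here is a minimal sketch reusing the variables from your code (the sample spacing dt and time vector t are assumptions you would have to supply):
M1  = movmean(Fz(500:1200000), [2000, 2000]);  % centred moving average
M11 = detrend(M1);                             % remove the linear trend
I1  = trapz(M11);       % trapezoidal integral, assuming unit spacing between samples
% If your samples are dt apart (e.g. dt = 1/fs for sample rate fs), scale the result,
% or pass an explicit abscissa vector t of the same length as M11:
% I2 = dt * trapz(M11);
% I3 = trapz(t, M11);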

fzero or fsolve? Different results - which is correct?

I have a function
b=2.02478;
g=3.45581;
s=0.6;
R=1;
p = @(r) 1 - (b./r).^2 - (g^-2)*((2/15)*(s/R)^9 *(1./(r - 1).^9 - 1./(r + 1).^9 - 9./(8*r).*(1./(r - 1).^8 - 1./(r + 1).^8)) -(s/R)^3 *(1./(r-1).^3 - 1./(r+1).^3 - 3./(2*r).*(1./(r-1).^2 - 1./(r+1).^2)));
options = optimset('Display','off');
tic
r2 = fzero(p,[1.001,100])
toc
tic
r3 = fsolve(p,[1.001,100],options)
toc
and the output is:
r2 =
2.0198
Elapsed time is 0.002342 seconds.
r3 =
2.1648 2.2745
Elapsed time is 0.048991 seconds.
Which is more reliable? fzero returns different values than fsolve.
You should always look at the exit flag (or output struct) of a function, especially when your result is not as expected.
This is what I get:
fzero(func,[1.00001,100]):
X = 4.9969
FVAL
EXITFLAG = 1 % fzero found a zero X.
OUTPUT.message = 'Zero found in the interval [1.00001, 100]'
fzero(func,1.1):
X = 1
FVAL = 8.2304e+136
EXITFLAG = -5 % fzero may have converged to a singular point.
OUTPUT.message = 'Current point x may be near a singular point. The interval [0.975549, 1.188] reduced to the requested tolerance and the function changes sign in the interval, but f(x) increased in magnitude as the interval reduced.'
The meaning of the exit flag is explained in the MATLAB documentation:
1 Function converged to a solution x.
-5 Algorithm might have converged to a singular point.
-6 fzero did not detect a sign change.
So, based on this information it is clear that the first one gives you the correct result.
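A minimal sketch of how to request these diagnostics yourself, reusing p and options from the question (the other variable names are just illustrative):
% Ask fzero/fsolve for the exit flag and output struct and check them,
% rather than trusting the returned value blindly.
[x1, f1, flag1, out1] = fzero(p, [1.001, 100]);   % bracketing interval
if flag1 ~= 1
    warning('fzero did not report clean convergence: %s', out1.message);
end
[x2, f2, flag2, out2] = fsolve(p, 1.1, options);  % scalar starting point
if flag2 <= 0
    warning('fsolve did not report convergence: %s', out2.message);
end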
Why does fzero fail
As documented in the manual, fzero calculates the zero by finding a sign change:
tries to find a point x where fun(x) = 0. This solution is where fun(x) changes sign—fzero cannot find a root of a function such as x^2.
Therefore, X = 1 is also a solution of your formulation, as the sign changes at this location from +inf to -inf (which you can see by plotting the function near r = 1).
Note that it is always a good idea to provide a search range if possible as mentioned in the manual:
Calling fzero with a finite interval guarantees fzero will return a value near a point where FUN changes sign.
Tip: Calling fzero with an interval (x0 with two elements) is often faster than calling it with a scalar x0.
Alternative: fsolve
Note that this method is designed for solving systems of multiple nonlinear equations, so it is not as efficient as fzero (~20x slower in your case). fsolve uses gradient-based methods (check the manual for more information), which may work better in certain situations but may also get stuck in a local extremum. In this case, the gradient of your function points in the correct direction as long as your initial value is larger than 1, so for this specific function fsolve is somewhat more robust than fzero with a single initial value, i.e. fsolve(func, 1.1) returns the expected value.
Conclusion: for a single variable, use fzero with a search range rather than a single initial value whenever possible, and use fsolve for systems of multiple variables. If one method fails, try another method or a different starting point.
As you can read in the documentation:
The algorithm, which was originated by T. Dekker, uses a combination of bisection, secant, and inverse quadratic interpolation methods.
So it is sensitive to the initial point and to the interval in which it searches for the solution. Hence you get different results for different initial values and search ranges.
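To see that sensitivity with the function from the question, compare a bracketing interval with a scalar start near the pole at r = 1 (the exact numbers depend on your tolerances):
[ra, ~, flag_a] = fzero(p, [1.001, 100]);  % expected: root near 2.02, exit flag 1
[rb, ~, flag_b] = fzero(p, 1.1);           % may stop at the singular point near r = 1
                                           % with a negative exit flag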

precision matlab vpa

I'm working with an algorithm that uses hyperbolic functions, and to get more accurate results from it I need to increase the precision. I would like to do this with the vpa function, but I'm not quite sure how to implement it. Here is some code to clarify the situation:
x=18; %the hyperbolic relation is valid until x=18
cosh(x)^2-sinh(x)^2
ans = 1
x=19; %the hyperbolic relation is no longer valid
cosh(x)^2-sinh(x)^2
ans = 0
working with the VPA function:
a=vpa('cosh(40)',30); %the hyperbolic relation is valid beyond x=19
b=vpa('sinh(40)',30);
a^2-b^2
ans = 1.00008392333984375
The problem now is that I don't know how to get the value from vpa using a control variable x.
I tried this, but it didn't work:
x=40;
a=vpa('cosh(x)',x,30);
b=vpa('sinh(x)',30);
a^2-b^2
When doing symbolic math or variable-precision arithmetic, you must be careful about where floating-point values get converted. In this case, you need to convert your input, x, to variable precision before passing it to cosh or sinh (otherwise only the output of these functions will be converted to variable precision). For your example:
x = vpa(40,30);
a = cosh(x);
b = sinh(x);
a^2-b^2
which returns the expected 1.0. I'm not sure where you found the use of vpa with string inputs, but that form is no longer used (using strings may even produce different results, because different functions get called). Note also that the default setting for digits in current versions of MATLAB is 32.
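If you would rather not pass the precision to every call, here is a small sketch using digits instead (assumes the Symbolic Math Toolbox):
digits(50);              % use 50 significant digits for subsequent vpa conversions
x = vpa(40);             % convert the input, not just the outputs
cosh(x)^2 - sinh(x)^2    % evaluates to 1.0 (up to tiny rounding in the last digits)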

Cannot use operator ^ on a vector value

function [df] = getDiscountFactor(t,T,r,i)
y = getYearFraction(t,T);
a = r(i,1)
df = 1/((1+a)^y);
end
This is my code. The problem is that the (1+a)^y operation fails because MATLAB says one of the inputs is a vector, although a is only a single value, r(i,1). When I call it from the command line it prints a as 0.03 and then shows this error:
a =
0.0300
Error using ^
Inputs must be a scalar and a square matrix.
To compute elementwise POWER, use POWER (.^) instead.
If I use the .^ operator, the problem is solved, but then df becomes a vector and I need it to be a single number.
You've checked the first input and seen that it's a scalar, so presumably the other input, y, is neither a scalar nor a square matrix. If you check what y is too, you will probably see what has gone wrong.
And if not, MATLAB has functions such as class, size and whos that tell you what type and shape it thinks an object has. Run them on a and on y, or, to be extra pedantic, on 1+a. That should clear things up.
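If it turns out that y really is supposed to be a single number, a hypothetical defensive variant of your function makes the failure explicit (getYearFraction is taken from your code and is the likely source of the vector):
function df = getDiscountFactor(t, T, r, i)
    % Defensive sketch: check the operands before using ^ on them.
    y = getYearFraction(t, T);
    a = r(i, 1);
    if ~isscalar(y)
        error('Expected a scalar y, got a %s of class %s.', mat2str(size(y)), class(y));
    end
    df = 1/((1 + a)^y);
end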

Why does "pi" become symbolic in MATLAB?

In the Matlab command window I type:
syms f;
s = 2*pi*f*j;
s
which returns
s =
pi*f*2*j
Why is pi not calculated as 3.141592...? What's wrong with the code I entered into the command window?
Welcome to symbolic math, where you get exact answers as opposed to floating-point approximations. If you just want to "get a number", you can use non-symbolic functions and operations, or you can convert symbolic results back to floating point.
For example:
syms f
s = pi*f*2*j
s2 = subs(s,f,2)
s3 = double(s2)
Alternatively, you can use variable-precision arithmetic to represent pi as a decimal approximation with a specified number of digits inside a symbolic expression:
syms f
s = vpa(pi)*f*j
See the documentation for vpa for further details. You can also use the sym function to achieve similar things.
However, you can lose some of the power of symbolic math if you convert to a decimal or floating point representation too soon. For example, compare the difference between the following expressions:
sin(pi) % 1.224646799147353e-16
sin(vpa(pi)) % -3.2101083013100396069547145883568e-40
sin(sym(pi)) % 0, sin(sym(1)*pi) and sin(sym(pi,'r')) also return zero
Only the last one will be fully cancelled out of an expression, thus simplifying it.
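A small follow-up sketch of that last point, assuming the Symbolic Math Toolbox:
% Only the exact symbolic pi gives an identically zero sine, so only that
% term drops out of a larger symbolic expression.
isAlways(sin(sym(pi)) == 0)   % true
isAlways(sin(vpa(pi)) == 0)   % false, a residue of about -3.2e-40 remains
sin(pi) == 0                  % false, ordinary double roundoff of about 1.2e-16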