How can I numerically solve equations containing Bessel functions in MATLAB?

I have encountered an equation containing Bessel functions of the first kind on one side and modified Bessel functions of the second kind on the other. I want to find its solutions (values of u). The equation is as follows:
u*besselj(s-1,u)/besselj(s,u) = -w*besselk(s-1,w)/besselk(s,w)
where s is an arbitrary integer, for example 2.
w can be written as a function of u:
w=sqrt(1-u^2);
and so this equation has only one variable: u
I'm new to MATLAB and have no idea how to approach this. Could anyone please help me?

A quick thing to try is the fzero function, MATLAB's generic scalar nonlinear zero finder. To learn how to use it, work through the examples in its documentation, then rewrite your equation as a function whose root is the solution and pass it to fzero.
(Note: I haven't tried this, but since there were no replies yet, it may be better than nothing.)
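For instance, a minimal sketch along those lines (untested on the original problem; the scan range and the restriction to 0 < u < 1, which keeps w = sqrt(1 - u^2) real, are my assumptions):

```matlab
% Root-finding sketch for s = 2. Move everything to one side:
%   g(u) = u*J_{s-1}(u)/J_s(u) + w*K_{s-1}(w)/K_s(w),  w = sqrt(1 - u^2).
% w is only real for |u| < 1, and g has poles at the zeros of besselj(s,u),
% so inspect the sign of g before handing fzero a guess or bracket.
s = 2;
w = @(u) sqrt(1 - u.^2);
g = @(u) u.*besselj(s-1, u)./besselj(s, u) ...
       + w(u).*besselk(s-1, w(u))./besselk(s, w(u));

uu = linspace(0.05, 0.95, 200);   % assumed scan grid inside (0, 1)
plot(uu, g(uu)); grid on          % look for sign changes first

% If g changes sign on some subinterval [a, b], bracket the root there,
% e.g. uroot = fzero(g, [a, b]); depending on s there may be no root
% in (0, 1) at all, which the plot will show.
```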

Related

How can I get a properly expressed answer (expressed only with constants) in TI-nspire CAS (system of linear equations)?

I'm struggling with solving linear equations on the TI-nspire. I want to solve a system of equations in two variables (i, v).
The two equations are:
ai=v+iq
v=(i-1-bv)p
When I use the TI-nspire CAS function 'solve', the answer i=…, v=… still contains i and v, but I want to express them in the unknown constants (a, b, p, q) only. How can I do that?
I tried to express them in other ways, but it doesn't work.
Also, sometimes the answer to a system of equations contains an expression like 'x=c5'. I wonder what c5 means.
Thank you.
Remember to use explicit multiplication: the CAS parses ai as a single variable named "ai", not as a*i. Enter the equations as a*i=v+i*q and v=(i-1-b*v)*p, and solve will return i and v in terms of a, b, p, and q. (As for c5: names like c1, c2, … are arbitrary constants the CAS introduces when a system has infinitely many solutions.)

Find minimum of nonlinear system of equations with nonlinear equality and inequality constraints in MATLAB

I need to solve the problem described in the title. I have two nonlinear equations in four variables, together with two nonlinear inequality constraints. I have found that the function fmincon is probably the best approach, as it lets me set everything this situation requires (please let me know otherwise). However, I have some doubts at the implementation stage. Below I present the complete case; I think it's simple enough to show in its real form.
The first thing I did was to define the objective function in a separate file.
function fcns=eqns(x,phi_b,theta_b,l_1,l_2)
fcns=[sin(theta_b)*(x(1)*x(4)-x(2)*x(3))+x(4)*sqrt(x(1)^2+x(2)^2-l_2^2)-x(2)*sqrt(x(3)^2+x(4)^2-l_1^2);
cos(theta_b)*sin(phi_b)*(x(1)*x(4)-x(2)*x(3))+x(3)*sqrt(x(1)^2+x(2)^2-l_2^2)-x(1)*sqrt(x(3)^2+x(4)^2-l_1^2)];
Then the inequality constraints, also in another file.
function [c,ceq]=nlinconst(x,phi_b,theta_b,l_1,l_2)
c=[-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
ceq=[];
The next step was to actually run it in a script. Below, since the objective function requires extra variables, I defined an anonymous function f; in the next line, I did the same for the constraint. After that, it's pretty self-explanatory.
f=@(x)norm(eqns(x,phi_b,theta_b,l_1,l_2));
f_c=@(x)nlinconst(x,phi_b,theta_b,l_1,l_2);
x_0=[15 14 16 18],
LB=0.5*[l_2 l_2 l_1 l_1];
UB=1.5*[l_2 l_2 l_1 l_1];
[res,fval]=fmincon(f,x_0,[],[],[],[],LB,UB,f_c),
The first thing to notice is that I had to transform my original objective function by the use of norm, otherwise I'd get a "User supplied objective function must return a scalar value." error message. So, is this the best approach, or is there a better way around it?
This actually works, but according to my research (one question from stackoverflow actually!) you can guide the optimization procedure if you define an equality constraint from the objective function, which makes sense. I did that through the following code at the constraint file:
ceq=eqns(x,phi_b,theta_b,l_1,l_2);
After that, I found out I could use the deal function and define the constraints within the script.
c=@(x)[-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
f_c=@(x)deal(c(x),f(x));
So, which is the best method to do it? Through the constraint file or with this function?
Additionally, I found in MATLAB's documentation that it is suggested in these cases to set:
f=@(x)0;
since the original objective function is already in the equality constraints. However, the optimization then doesn't move beyond the initial guess (the cost is already 0 at every point), which makes sense but leaves me wondering why it is suggested in the documentation (last section here: http://www.mathworks.com/help/optim/ug/nonlinear-systems-with-constraints.html).
Any input will be valued, and sorry for the long text, I like to go into detail if you didn't pick up on it yet... Thank you!
I believe fmincon is well suited for your problem. Naturally, as with most minimization problems, the objective function is a multivariate scalar function. Since you are dealing with a vector function, fmincon complained about that.
Is using the norm the "best" approach? The short answer is: it depends. The reason is that norm in MATLAB is, by default, the Euclidean (or L2) norm, which is the most natural choice for most problems. Sometimes, however, it may be easier (or more physically meaningful) to use an L1 norm or the more stringent infinity norm. I defer a thorough discussion of norms to the following superb blog post: https://rorasa.wordpress.com/2012/05/13/l0-norm-l1-norm-l2-norm-l-infinity-norm/
As for why the example on Mathworks is formulated the way it is: they are solving a system of nonlinear equations - not minimizing a function. They first use the standard approach, using fsolve, but then they propose alternate methods of solving the same problem.
One such way is to reformulate solving the nonlinear equations as a minimization problem with an equality constraint. By using f=@(x)0 with fmincon, the objective function f is naturally already minimized, and the only thing that has to be satisfied in this case is the equality constraint - which would be the solution to the system of nonlinear equations. Clever indeed.
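To make that concrete, here is a sketch of the feasibility formulation using the asker's eqns function (it must be on the path; the parameter values and bounds below are placeholders, not values from the original problem):

```matlab
% Feasibility formulation: constant objective, equations as equality constraints.
% phi_b, theta_b, l_1, l_2 and x_0 are placeholder values for illustration.
phi_b = 0.1; theta_b = 0.2; l_1 = 10; l_2 = 12;
f   = @(x) 0;                                   % nothing to minimize
c   = @(x) [-x(1)^2 - x(2)^2 + l_2^2;           % inequality constraints c <= 0
            -x(3)^2 - x(4)^2 + l_1^2];
ceq = @(x) eqns(x, phi_b, theta_b, l_1, l_2);   % the equations, as ceq = 0
f_c = @(x) deal(c(x), ceq(x));                  % pack both for fmincon
x_0 = [15 14 16 18];
LB  = 0.5*[l_2 l_2 l_1 l_1];
UB  = 1.5*[l_2 l_2 l_1 l_1];
[res, fval] = fmincon(f, x_0, [], [], [], [], LB, UB, f_c);
```

Any point fmincon returns with the constraints satisfied is then a solution of the system, regardless of fval.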

MATLAB: how does ode45 work?

I asked a question regarding how matlabFunction works (here), which spurred a question about the ode45 function. Using the example I gave in my post on matlabFunction, when I pass this function through ode45 with some initial conditions, does ode45 read the derivative -s.*(x-y) as defining the unknown function x, and likewise -y+r.*x-x.*z for y, and -b.*z+x.*y for z? More specifically, we have the
matlabFunction([s*(y-x);-x*z+r*x-y; x*y-b*z],
'vars',{t,[x;y;z],[s;r;b]},'file','Example2');
and then we use
[tout,yout]=ode45(@(t,Y) Example2(t,Y,[10;5;8/3]),[0,50],[1;0;0]);
to approximately solve for the unknown functions x, y, and z. How does ode45 know to take the functions, which are defined as variables, [x;y;z], and approximate them? I have an inkling that my question is rather vague, but I would just like to know the connection between these things.
The semantics of your commands is that x'(t)=s*(y(t)-x(t)), y'(t)=-x(t)*z(t)+r*x(t)-y(t), and z'(t)=x(t)*y(t)-b*z(t), with the constants you have given for s, r, and b. MATLAB will follow the commands you have given and compute a numerical approximation to this system. I am not entirely sure what you mean by your question,
How does ode45 know to take the functions, […] and approximate them?
That is exactly what you told it to do, and it is the only thing ode45 ever does. (By the time you call ode45, the names x, y, z are completely irrelevant, by the way. The function only cares for a vector of values.) If you are asking about the algorithm behind approximating the solution of an ODE given this way, you can easily find any number of books and lectures on the topic using google or any other search engine.
You may be interested in the function odeToVectorField, by the way, which simplifies getting these functions from a differential equation written in more traditional form.
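The point that ode45 only ever sees a vector of values can be made concrete by writing the same right-hand side by hand, with no matlabFunction or symbolic names involved (same constants as in the call above):

```matlab
% The same system, written directly as an anonymous function. Y(1), Y(2),
% Y(3) play the roles of x, y, z -- ode45 never sees those names, only
% the current vector of values and the derivatives we return for it.
s = 10; r = 5; b = 8/3;
rhs = @(t,Y) [ s*(Y(2) - Y(1));
              -Y(1)*Y(3) + r*Y(1) - Y(2);
               Y(1)*Y(2) - b*Y(3) ];
[tout, yout] = ode45(rhs, [0, 50], [1; 0; 0]);
plot(tout, yout)   % columns of yout approximate x(t), y(t), z(t)
```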

Explanation of two integral equations and implementation

I have a problem with these two equations shown in the pictures.
I have two vectors representing the C(m) and S(m) in the two equations. I am trying to implement these equations in MATLAB. Instead of doing a continuous integral, I think I should do a summation. For example, for the first equation:
A1 = sqrt(sum(C.^2));
Am I right? Also, I am not sure how to implement equation two, which contains a ||dM||. Please help.
What is the mathematical meaning of these two equations? I think the first one may be related to a 'sum of squares': if C(m) is a vector, does this equation measure the total variance of the random variable in vector C, or some kind of average of vector C? What about the second one?
Thanks very much for your help!
A.
In MATLAB there are typically two different ways to do an integration.
For people who have access to the symbolic toolbox, algebraic integration is an option. If this is the case for you, I would look into help int and see which inputs you need.
For the rest, numerical integration is available. This basically means that you evaluate the function at a large number of points and approximate the integral from those samples, for example with trapz or the integral function.
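Since the equations are only visible as pictures, here is only a guess at the shape of the computation: if the first equation is A1 = sqrt(∫ C(m)² dm) rather than a plain sum, a quadrature rule such as trapz is the numerical analogue (the placeholder data and the sample grid m are my assumptions):

```matlab
% Assumed form: A1 = sqrt( integral of C(m)^2 dm ), with C sampled on a grid.
C  = randn(1, 100);             % placeholder data for the vector C(m)
m  = linspace(0, 1, numel(C));  % assumed sample points of the variable m
A1 = sqrt(trapz(m, C.^2));      % trapezoidal rule; on a uniform grid this is
                                % essentially sum(C.^2) scaled by the spacing
```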
For the mathematical meaning some more context would be helpful, and you may want to ask this question at math.stackexchange.com or on the site of whatever field you are in. (stats, physics?)

writing null space basis in MATLAB without using null(A)

My teacher wants us to find a basis of the null space of a matrix in MATLAB. This is the exact question:
Use the MATLAB function rref and the function lead above to write a MATLAB
function N=nullbase(A) which computes a matrix N whose columns form a basis
for the nullspace of A. Your file nullbase.m should not use the MATLAB functions
rank or null.
Please someone help me, thank you.
What you're looking for is the SVD (singular value decomposition). Because this sounds like a homework problem, I won't give away the whole answer, but this hint should help. MATLAB has an SVD function: http://www.mathworks.com/help/techdoc/ref/svd.html
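To avoid spoiling the rref-based exercise, here is only how the SVD hint can be used to check a finished nullbase against a known example (the matrix A below is made up):

```matlab
% Checking a null-space basis via the SVD: columns of V whose singular
% values are (numerically) zero span the null space of A.
A = [1 2 3; 2 4 6];                 % example: rank 1, so a 2-D null space
[~, S, V] = svd(A);
sv  = diag(S);
tol = max(size(A)) * eps(max(sv));  % tolerance similar to what rank uses
r   = sum(sv > tol);                % numerical rank
N   = V(:, r+1:end);                % orthonormal basis for the null space
norm(A*N)                           % should be close to 0
```

If your rref-based nullbase(A) returns a matrix whose columns also satisfy A*N ≈ 0 and are linearly independent, the two answers span the same space even if the bases differ.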