Computation of Confluent Hypergeometric Function of the First Kind in Matlab

Is there a way to perform a computation of the confluent hypergeometric function of the first kind in Matlab (specifically in R2013a)?
In Mathematica, this function is called Hypergeometric1F1. I've seen kummerU in Matlab, but the definitions look different.
In Mathematica, the definition is:

    Hypergeometric1F1(a, b, z) = Γ(b)/(Γ(b−a)Γ(a)) ∫_0^1 e^(z t) t^(a−1) (1−t)^(b−a−1) dt

While in Matlab, the definition is given as:

    kummerU(a, b, z) = 1/Γ(a) ∫_0^∞ e^(−z t) t^(a−1) (1+t)^(b−a−1) dt

How do I calculate the confluent hypergeometric function of the first kind, i.e., the first of the two integrals, in Matlab?

The two are different because they return different solutions to the same second-order ODE, but the names can make them easy to confuse. Mathematica's Hypergeometric1F1 calculates the confluent hypergeometric function, also known as Kummer's function. Matlab's kummerU calculates the confluent hypergeometric Kummer U function, also known as Tricomi's confluent hypergeometric function. The two are related by a simple relation, as shown here (see also the relations here and here).
In Matlab, you can calculate the confluent hypergeometric function symbolically via the general hypergeom function (a numeric solution is returned if all of the input arguments are floating point):
A = hypergeom(a,b,z);
This will return a result equivalent to that from Mathematica's Hypergeometric1F1. If you need faster solutions, you can try my optimized hypergeomq described in this Math.SE answer. For a purely numeric solution, you could also try this File Exchange submission.
In Mathematica, you can use HypergeometricU to produce a result equivalent to Matlab's kummerU.
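If you want to sanity-check the relation between the two functions numerically, SciPy happens to expose both, so here is a small Python aside (scipy.special.hyp1f1 and scipy.special.hyperu are SciPy's names, not Matlab's). The standard connection formula, valid for non-integer b, is U(a,b,z) = Γ(1−b)/Γ(a−b+1)·M(a,b,z) + Γ(b−1)/Γ(a)·z^(1−b)·M(a−b+1, 2−b, z):

```python
import numpy as np
from scipy.special import hyp1f1, hyperu, gamma

a, b, z = 0.5, 0.25, 2.0  # arbitrary test point; b must not be an integer

# Build Kummer's U out of two evaluations of Kummer's M (= Hypergeometric1F1)
# using the connection formula above.
m_combo = (gamma(1 - b) / gamma(a - b + 1) * hyp1f1(a, b, z)
           + gamma(b - 1) / gamma(a) * z**(1 - b) * hyp1f1(a - b + 1, 2 - b, z))

# Agreement check against the direct U evaluation
print(abs(m_combo - hyperu(a, b, z)))
```

The same check can be run across Matlab and Mathematica by evaluating hypergeom/kummerU and Hypergeometric1F1/HypergeometricU at the same point.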

Related

MATLAB collect terms with common denominator

I'm writing some MATLAB code that gives a symbolic equation. The equation has a number of fractional terms where the denominators are different functions. I would like to group the terms with the same denominator. To give an example of what I'm trying to achieve assume the following equation:
[1]
Where the x_i's are different functions in my case. Is there a function in MATLAB that can achieve this? If not, an algorithm that does would be extremely helpful.
[1]: https://i.stack.imgur.com/TtYGc.png
If you are using Matlab's Symbolic Math Toolbox™ (meaning using syms to create symbolic variables and combining those into functions, etc.), then the simplify function should do the trick. For more, read: Performing symbolic computations
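For intuition about what such grouping looks like in a symbolic system, here is a small SymPy sketch (Python, not Matlab; the symbol names are invented for illustration) that gathers terms sharing a denominator by collecting powers of it:

```python
import sympy as sp

a, b, c, x1, x2 = sp.symbols('a b c x1 x2')
expr = a/x1 + b/x1 + c/x2

# collect() groups terms by powers of x1, including the 1/x1 (x1**-1) terms,
# so a/x1 + b/x1 becomes (a + b)/x1 while c/x2 is left alone
grouped = sp.collect(expr, x1)
print(grouped)
```

Matlab's symbolic collect has a similar power-grouping behavior, which is why simplify (or collect on the relevant denominator) is the usual tool here.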

Is it possible to output stepwise function values in minimization (scipy)?

I want to minimize some function J using gradient information. I found two functions in scipy that may do the job, scipy.optimize.fmin_tnc (here) and scipy.optimize.minimize (here), and I implemented them. However, now I need the stepwise output of the function evaluations at each step of the (e.g. Newton) algorithm to plot its convergence. Is it possible to get this vector somehow out of these functions? It is not part of default return values as it seems.
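scipy.optimize.minimize supports this via its callback argument, which is invoked once per iteration with the current iterate xk. A minimal sketch (the objective J here is a made-up quadratic purely for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def J(x):
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

history = []
def record(xk):
    # re-evaluates J at each iterate; costs one extra call per iteration
    history.append(J(xk))

res = minimize(J, x0=np.array([5.0, 5.0]), method='BFGS', callback=record)
print(history)  # per-iteration objective values, ready to plot
```

If J is expensive, an alternative is to append to history inside J itself, which records every function evaluation (including line-search trials) rather than one value per outer iteration.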

MATLAB's glmfit vs fitglm

I'm trying to perform logistic regression to do classification using MATLAB. There seem to be two different methods in MATLAB's statistics toolbox to build a generalized linear model 'glmfit' and 'fitglm'. I can't figure out what the difference is between the two. Is one preferable over the other?
Here are the links for the function descriptions.
http://uk.mathworks.com/help/stats/glmfit.html
http://uk.mathworks.com/help/stats/fitglm.html
The difference is what the functions output. glmfit just outputs a vector of the regression coefficients (and some other stuff if you ask for it). fitglm outputs a regression object that packs all sorts of information and functionality inside (see the docs on the GeneralizedLinearModel class). I would assume that fitglm is intended to replace glmfit.
In addition to Dan's answer, I would like to add the following.
The function fitglm, like newer functions from the statistics toolbox, accepts more flexible inputs than glmfit. For example, you can use a table as the data source, specify a formula of the form Y ~ X1 + X2 + ..., and use categorical variables.
As a side note, the function lassoglm uses (depends on) glmfit.
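For intuition about what either function does under the hood, generalized linear models are typically fit by iteratively reweighted least squares (IRLS). A bare-bones numpy sketch for the logistic case follows; this illustrates the algorithm, not Matlab's actual implementation, and the synthetic data is made up:

```python
import numpy as np

def logistic_irls(X, y, n_iter=25):
    """Fit logistic-regression coefficients by IRLS (Newton's method)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))   # mean of the Bernoulli response
        w = mu * (1.0 - mu)               # IRLS weights
        z = eta + (y - mu) / w            # working response
        # weighted least-squares step: (X' W X) beta = X' W z
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

# tiny synthetic example: true intercept 0.5, true slope 1.5
rng = np.random.default_rng(0)
x = rng.normal(size=200)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
y = (rng.random(200) < p).astype(float)
X = np.column_stack([np.ones(200), x])
beta = logistic_irls(X, y)
```

Both glmfit and fitglm return coefficients equivalent to this kind of fit; fitglm simply wraps them, together with diagnostics, in a GeneralizedLinearModel object.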

Matlab How Does ode45 Work

I asked a question regarding how matlabFunction works (here), which spurred a question related to the ode45 function. Using the example I gave in my post on matlabFunction: when I pass this function through ode45 with some initial conditions, does ode45 read the derivative -s.*(x-y) as approximating the unknown function x, and likewise -y+r.*x-x.*z for y, and -b.*z+x.*y for z? More specifically, we have
matlabFunction([s*(y-x);-x*z+r*x-y; x*y-b*z],
'vars',{t,[x;y;z],[s;r;b]},'file','Example2');
and then we use
[tout,yout]=ode45(@(t,Y,p) Example2(t,Y,[10;5;8/3]),[0,50],[1;0;0]);
To approximately solve for the unknown functions x,y, and z. How does ode45 know to take the functions, which are defined as variables, [x;y;z] and approximate them? I have an inkling of a feeling that my question is rather vague, but I would just like to know the connection between these things.
The semantics of your commands is that x'(t)=s*(y(t)-x(t)), y'(t)=-x(t)*z(t)+r*x(t)-y(t), and z'(t)=x(t)*y(t)-b*z(t), with the constants you have given for s, r, and b. MATLAB will follow the commands you have given and compute a numerical approximation to this system. I am not entirely sure what you mean by your question,
How does ode45 know to take the functions, […] and approximate them?
That is exactly what you told it to do, and it is the only thing ode45 ever does. (By the time you call ode45, the names x, y, z are completely irrelevant, by the way. The function only cares for a vector of values.) If you are asking about the algorithm behind approximating the solution of an ODE given this way, you can easily find any number of books and lectures on the topic using google or any other search engine.
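To make the "the solver only cares for a vector of values" point concrete, here is the same pattern in Python with scipy.integrate.solve_ivp, using the system and parameter values from the question (an illustrative analogue, not the original Matlab code). The solver is handed a function mapping (t, Y) to derivative values and never sees the names x, y, z:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, Y, s, r, b):
    # Y is just a vector of values; unpacking it is purely for readability
    x, y, z = Y
    return [s * (y - x), -x * z + r * x - y, x * y - b * z]

# same interval, initial condition, and parameters as the ode45 call
sol = solve_ivp(f, (0.0, 50.0), [1.0, 0.0, 0.0], args=(10.0, 5.0, 8.0 / 3.0))
# sol.t plays the role of tout, and sol.y the role of yout (transposed)
```

Exactly as with ode45, the correspondence between rows of the returned solution and the symbols x, y, z exists only in your head: it comes from the order in which you wrote the derivative expressions.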
You may be interested in the function odeToVectorfield, by the way, which simplifies getting these functions from a differential equation written in more traditional form.

Matlab activation function for values 0 and 1

I am working on an artificial neural network. I want to implement it in Matlab, but I am unable to find a proper activation function. I need a step function because my output is either 0 or 1. Is there any function in Matlab that can be used for this kind of output? Also, I want the inverse of the same activation function. logsig and tansig are not working for me.
Both tansig and logsig are part of the Neural Network Toolbox as the online documentation makes clear. So, if which tansig returns nothing, then you don't have that toolbox (or at least don't have a version current enough to contain that function). However, both of these functions are extremely simple, and the documentation even gives you the formulae under the "Algorithms" section: tansig, logsig. Both can be implemented as a one line anonymous function if you wanted.
If your question is actually about how to produce a Heaviside step function, Matlab has heaviside (it's part of the Symbolic Math Toolbox, but a pure numeric version is included – type edit heaviside to see the simple code). However, note that using such a non-differentiable function is problematic for some types of neural networks, as this StackOverflow question and answer addresses.
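To make the "one line" remark concrete, here are the two transfer functions, plus a hard 0/1 threshold, written out in Python/numpy (the names follow Matlab's; the step function is a numeric heaviside analogue, and the exact formulae are the ones from the "Algorithms" sections linked above):

```python
import numpy as np

logsig = lambda n: 1.0 / (1.0 + np.exp(-n))               # outputs in (0, 1)
tansig = lambda n: 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0   # identical to tanh(n)
step   = lambda n: (n >= 0).astype(float)                  # hard 0/1 threshold
```

For the inverses the question asks about: logsig inverts to the logit, n = log(p/(1−p)), and tansig inverts to atanh. The hard step has no inverse, which is another reason it is awkward inside a trainable network.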
Heaviside did not work for me. I finally normalized my data between -1 and 1 and then applied tansig.
Thanks