I want to compute a Fréchet/Gâteaux derivative of a function which is not entirely explicit, and my question is: what would be the most efficient way to do it? Which language would you recommend I use?
More precisely, my problem is that I have a function, say F, which is the square of the Euclidean norm of a sum of products of pairs of multidimensional functions (i.e. from R^n to R^k).
AFAIK, if I use Maple or Maxima, they will ask me to make the functions involved in the formula explicit, whereas I would like to keep them abstract. Moreover, I really need to compute a Fréchet/Gâteaux derivative so as to keep the expressions simple. Indeed, when I proceed the standard way and expand the square of the Euclidean norm as a sum of squares, there are a lot of indices. Since my goal is a Taylor expansion with integral remainder up to third order, the expression becomes, in my view, humanly intractable (the formula is more than one A4 page long).
So I would prefer to use a Fréchet/Gâteaux derivative, which would allow me, among other things, to keep scalar products instead of sums.
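For concreteness, the kind of identity I have in mind (with placeholder functions, not my actual ones) is: if F(u) = ||S(u)||^2 with S(u) = sum_i f_i(u)*g_i(u), then the Gâteaux derivative in a direction h is
DF(u)[h] = d/dt F(u + t*h) |_{t=0} = 2*<S(u), DS(u)[h]>,
which keeps the scalar product <.,.> instead of expanding everything into sums of squares with indices.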
As the functions involved have some similarities with their derivatives (due to the presence of exponentials), there is only a small number of rules to know. So I thought that I might write such a single-purpose computer algebra system myself.
I started to learn Lisp, as I read that it would be well suited to my problem, but I am a little bit lost now, since this language is very different and I am still used to thinking in terms of C/Python/Perl...
Here is another question: do you have links to courses or articles about how a computer algebra system for symbolic computation is built (preferably in Lisp)? Any suggestions are welcome.
Thank you very much for your answers.
My advice is to use Maxima. Maxima is inspired by Lisp, and implemented in Lisp, so using Maxima will save you a tremendous amount of time and trouble. If Lisp is suitable for your problem, Maxima is even more so.
Maxima will allow you to use undefined terms in an expression; it is not necessary to define all terms.
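For comparison, the same kind of fully abstract computation can also be sketched in SymPy; everything below (f, g, h and the toy expression) is a placeholder, just to show that undefined functions can be differentiated symbolically:

import sympy as sp

x, eps = sp.symbols('x epsilon')
f, g, h = sp.Function('f'), sp.Function('g'), sp.Function('h')   # stay undefined

# Toy stand-in for F, differentiated along the direction h (Gateaux style)
F = (f(x + eps * h(x)) * g(x + eps * h(x)))**2
gateaux = sp.diff(F, eps).subs(eps, 0)
print(sp.simplify(gateaux))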
Post a message to the Maxima mailing list (maxima@math.utexas.edu) to ask for specific advice. Please explain in detail what you are trying to accomplish.
Related
I have been looking for a MATLAB function that can do a nonlinear total least squares fit, basically fit a custom function to data which has errors in all dimensions. The simplest case is x and y data points with different given standard deviations in x and y for every single point. This is a very common scenario in all natural sciences, and just because most people only know how to do a least squares fit with errors in y does not mean it wouldn't be extremely useful. I know the problem is far more complicated than a simple y-error; this is probably why most people (not even physicists like myself) never learned how to do this properly with multidimensional errors.
I would expect software like MATLAB to be able to do it, but unless I'm bad at reading the otherwise mostly useful help pages, even a 'full' MATLAB license doesn't seem to provide such fitting functionality. Other tools like Origin, Igor and SciPy use the freely available Fortran package "ODRPACK95", for instance. There are a few contributions about total least squares or Deming fits on the File Exchange, but they're for linear fits only, which is of little use to me.
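For reference, the kind of fit I am after can be written in a few lines with SciPy's ODR wrapper around ODRPACK; the model and data below are only placeholders:

import numpy as np
from scipy import odr

# Placeholder model y = A*sin(x) + B with parameter vector beta = [A, B]
def model(beta, x):
    return beta[0] * np.sin(x) + beta[1]

# Placeholder data with per-point standard deviations in x and y
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * np.sin(x) + 0.5 + rng.normal(0, 0.1, x.size)
sx = np.full(x.size, 0.05)   # sigma_x for every point
sy = np.full(x.size, 0.10)   # sigma_y for every point

data = odr.RealData(x, y, sx=sx, sy=sy)
result = odr.ODR(data, odr.Model(model), beta0=[1.0, 0.0]).run()
print(result.beta, result.sd_beta)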
I'd be happy for any hint that can help me out.
Kind regards
First I should point out that I haven't used MATLAB much since I graduated last year (also as a physicist). That being said, I remember using
lsqcurvefit()
in MATLAB to perform non-linear curve fits. Now, this may or may not work, depending on what you mean by a custom function. I'm assuming you want to fit some known expression similar to one of these:
y = A*sin(x)+B
y = A*e^(B*x) + C
It is extremely difficult to perform a fit without knowing the form, e.g. as above. Ultimately, most well-behaved functions can be approximated by polynomials on small enough intervals. This is something you might want to consider, as MATLAB does have lots of tools for doing polynomial regression.
In the end, I would actually recommend you write your own fit function. There are tons of examples of this online. The idea is to know the form of the true solution, as above, and guess the parameters A, B, C, .... Create an error (or cost) function which produces a quantitative error (deviation) between your data and the guessed solution. The problem is then reduced to minimizing the error, for which MATLAB has lots of built-in functionality.
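To make that last idea concrete, here is a rough Python sketch of the cost-function approach; the model, the synthetic data, and the effective-variance weighting (which folds the x-error into the chi-square via the model slope) are my assumptions, not the only way to do it:

import numpy as np
from scipy.optimize import minimize

# Placeholder model y = A*sin(x) + B and its derivative with respect to x
def f(x, A, B):
    return A * np.sin(x) + B

def dfdx(x, A, B):
    return A * np.cos(x)

# Placeholder data with per-point errors in x and y
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = f(x, 2.0, 0.5) + rng.normal(0, 0.1, x.size)
sx = np.full(x.size, 0.05)
sy = np.full(x.size, 0.10)

def cost(p):
    A, B = p
    # Effective variance: propagate the x-error through the model slope so that
    # both error sources contribute to the chi-square that gets minimized.
    var_eff = sy**2 + (dfdx(x, A, B) * sx)**2
    return np.sum((y - f(x, A, B))**2 / var_eff)

res = minimize(cost, x0=[1.0, 0.0], method='Nelder-Mead')
print(res.x)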
Is there a way to check if a rational function is a polynomial in Matlab?
I have a big rational function, call it R, that I am trying to show is a polynomial. I've tried the simplify and simplifyFraction functions and the following (not very effective) procedure:
Split it into denominator and numerator:
[num,den] = numden(R);
Calculate the roots of both polynomials:
r_num = roots(sym2poly(num));
r_den = roots(sym2poly(den));
Check if all the elements of r_den belong to r_num:
Because of numerical imprecision I haven't been able to come up with a reliable way of doing this.
This is not an easy problem, and finding the greatest common divisor of polynomials is a very active area of research. There are tons of publications and you can find them online.
The main problem is that root finding is an ill-conditioned problem, and recently a few experts have been trying to combine numerical computation with symbolic representations. If you google for the ERES method you will find an entry point, together with the thesis of Christou.
This problem is particularly important for signals and control people because of transfer function representations and pole/zero cancellations. MATLAB goes to great lengths to make sure that all is OK, and a minimal neighborhood of each pole/zero pair is accepted as a cancellation.
So as a quick remedy, convert your polynomial coefficients to 1D vectors, say a and b, and use minreal(tf(a,b)). Then you can extract the num and den of that transfer function representation.
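If exact symbolic arithmetic is an option, another quick check is to skip root finding entirely and test divisibility directly. A SymPy sketch of that idea (R below is a toy example, not your actual function):

import sympy as sp

x = sp.symbols('x')
R = (x**3 - 1) / (x - 1)                    # toy rational function

num, den = sp.fraction(sp.cancel(R))        # cancel common factors exactly
is_polynomial = not den.free_symbols        # denominator reduced to a constant
print(is_polynomial, sp.expand(num / den) if is_polynomial else None)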
Shameless plug: I am the author of a Python 3 library in which I also implemented a system-theoretic approach. Here and here are the full implementation details, with citations for the LCM and GCD operations.
I have read many tutorials and papers and I understand the concept of a genetic algorithm, but I have some problems implementing it in MATLAB.
In summary, I have:
A chromosome containing three genes [ a b c ], each gene constrained by its own limits.
An objective function to be evaluated to find the best solution.
What I did:
Generated random values of a, b and c for a population of 20 solutions, i.e.
[a1 b1 c1] [a2 b2 c2] ... [a20 b20 c20]
For each solution, I evaluated the objective function and ranked the solutions from best to worst.
Difficulties I faced:
Now, why should we go for crossover and mutation? Is the best solution I found not enough?
I know the concept of doing crossover (generating random numbers, probabilities, etc.), but which parents, and how many of them, will be selected for crossover or mutation?
Should I do the crossover for all 20 solutions (parents) or only two of them?
Generally a genetic algorithm is used to find a good solution to a problem with a huge search space, where finding an exact solution is either very difficult or impossible. Obviously, I don't know the range of your values, but since you have only three genes it's likely that a good solution will be found by a genetic algorithm (or even a simpler search strategy) without any additional operators. Selection and crossover are usually carried out on all chromosomes in the population (although it's not uncommon to carry some of the best from each generation forward as they are). The general idea is that the fitter chromosomes are more likely to be selected and to undergo crossover with each other.
Mutation is usually used to stop the genetic algorithm from prematurely converging on a non-optimal solution. You should analyse the results without mutation to see if it's needed. Mutation is usually run on the entire population, at every generation, but with a very small probability; giving every gene a 0.05% chance of mutating isn't uncommon. You usually want to give a small chance of mutation, without it completely overriding the results of selection and crossover.
As has been suggested, I'd do a little bit more general background reading on genetic algorithms to get a better understanding of the concepts.
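To make the selection/crossover/mutation loop concrete, here is a minimal Python sketch; the gene bounds, the objective, the population size, tournament selection and the 5% per-gene mutation rate are all placeholders you would replace with your own choices:

import numpy as np

rng = np.random.default_rng(0)

# Placeholder per-gene bounds for [a, b, c] and a placeholder objective (lower is better)
lo = np.array([0.0, -1.0, 10.0])
hi = np.array([5.0,  1.0, 20.0])
def objective(g):
    return (g[0] - 2.0)**2 + (g[1] + 0.3)**2 + (g[2] - 15.0)**2

pop = rng.uniform(lo, hi, size=(20, 3))            # 20 random chromosomes

for generation in range(100):
    fitness = np.array([objective(g) for g in pop])

    def tournament():
        i, j = rng.integers(0, len(pop), 2)        # pick two at random,
        return pop[i] if fitness[i] < fitness[j] else pop[j]   # keep the fitter one

    children = [pop[np.argmin(fitness)].copy()]    # elitism: carry the best forward
    while len(children) < len(pop):
        p1, p2 = tournament(), tournament()
        cut = rng.integers(1, 3)                   # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        mutate = rng.random(3) < 0.05              # small per-gene mutation chance
        child[mutate] = rng.uniform(lo, hi, 3)[mutate]
        children.append(child)
    pop = np.array(children)

print(pop[np.argmin([objective(g) for g in pop])])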
Sharing a bit of advice from the book 'Practical Neural Network Recipes in C++'... It is a good idea to have a significantly larger population for your first epoch; then you are likely to include features which will contribute to an acceptable solution. Later epochs, which can have smaller populations, will then tune, combine, or discard these favourable features.
And Handbook-Multiparent-Eiben seems to indicate that four parents are better than two. However, bed manufacturers have not caught on to this yet and seem to only produce single and double beds.
I need to find the roots of the following equation (Mathematica):
Sqrt[3]/2*x - (I - x*Sqrt[3]/2*c^2)*I/Sqrt[2*Pi]/d^3*Integrate[t*Exp[-t^2/2/d^2]/(Sqrt[3]/2*x + I*(t + b0)), {t, -Infinity, Infinity}] == 0
where c, d, and b0 are given parameters, and x is the complex root to be found.
I have tried several methods, including scanning the real and imaginary parts of x and an iteration approach, but none of them could handle all the cases.
Are there any general approaches that can solve this kind of equation efficiently, e.g. in MATLAB or Mathematica?
Did you try MATLAB's MuPAD? It is a powerful symbolic tool, very similar to Maple, which gives really good results in non-numerical mathematics. Try there: declare the equation, give assumptions to the software, e.g. assume c is real and positive (don't copy this literally, I don't remember the proper syntax), and then solve. It will very likely find a solution if one exists (sometimes even in cases you wouldn't expect!).
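If the symbolic route does not get anywhere, a purely numerical alternative is to evaluate the left-hand side by numerical quadrature and hand its real and imaginary parts to a root finder. A SciPy sketch of that idea (the parameter values and the initial guess are placeholders):

import numpy as np
from scipy.integrate import quad
from scipy.optimize import root

c, d, b0 = 1.0, 1.0, 0.5          # placeholder parameter values

def lhs(x):
    # Left-hand side of the equation for a complex x.
    s = np.sqrt(3.0) / 2.0
    integrand = lambda t: t * np.exp(-t**2 / (2.0 * d**2)) / (s * x + 1j * (t + b0))
    re, _ = quad(lambda t: integrand(t).real, -np.inf, np.inf)
    im, _ = quad(lambda t: integrand(t).imag, -np.inf, np.inf)
    return s * x - (1j - x * s * c**2) * 1j / np.sqrt(2.0 * np.pi) / d**3 * (re + 1j * im)

def residual(v):
    # Pack real and imaginary parts so a real-valued root finder can be used.
    val = lhs(v[0] + 1j * v[1])
    return [val.real, val.imag]

sol = root(residual, x0=[0.1, 0.1])   # the initial guess matters; try several
print(sol.x, residual(sol.x))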
This question could refer to any computer algebra system which has the ability to compute a Gröbner basis from a set of polynomials (Mathematica, Singular, GAP, Macaulay2, MATLAB, etc.).
I am working with an overdetermined system of polynomials for which the full Gröbner basis is too difficult to compute. However, it would be valuable for me to be able to print out the Gröbner basis elements as they are found, so that I may know whether a particular polynomial is in the basis. Is there any way to do this?
If you implement Buchberger's algorithm on your own, then you can simply print out the elements as they are found.
If you have Mathematica, you can use this code as your starting point.
https://www.msu.edu/course/mth/496/snapshot.afs/groebner.m
See the function BuchbergerSteps.
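For illustration, a naive Buchberger loop along those lines fits in a few lines of SymPy, printing each new element the moment it is added to the working basis; the example ideal is a placeholder and no optimizations (pair selection, criteria) are applied:

from itertools import combinations
from sympy import symbols, LT, lcm, expand, reduced

x, y, z = symbols('x y z')

def buchberger_verbose(F, gens):
    # Naive Buchberger: print every new basis element as soon as it is found.
    G = list(F)
    pairs = list(combinations(range(len(G)), 2))
    while pairs:
        i, j = pairs.pop()
        lt_i, lt_j = LT(G[i], *gens), LT(G[j], *gens)
        l = lcm(lt_i, lt_j)
        s = expand(l / lt_i * G[i] - l / lt_j * G[j])   # S-polynomial (up to a constant)
        _, r = reduced(s, G, *gens)                     # remainder modulo the current G
        if r != 0:
            print("new element:", r)
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G

basis = buchberger_verbose([x**2 + y, x*y - z], (x, y, z))   # placeholder ideal, lex order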
Due to the way the Buchberger algorithm works (see, for instance, Wikipedia or IVA), the partial results that you could obtain by printing intermediate results are not guaranteed to constitute a Gröbner basis.
Depending on your ultimate goal, you may want to try instead an algorithm for triangularization of ideals, such as Ritt-Wu's algorithm (see IVA or Shang-Ching Chou's book). This is somewhat similar to reduction to row echelon form in Linear Algebra, and you may interrupt the algorithm at any point to get a partially reduced system of polynomial equations.