How to push-back a vector in elisp

I found some instructions for pushing onto a vector using a function called vector-push, but the problem is that it seems to work only in Common Lisp.
What if I want to push to the back of a vector in elisp? How can I do that?
I'm familiar with C++, where you write something like vector.push_back(element);.
The documentation on vector functions in elisp is quite sparse.

Emacs Lisp does not have extensible vectors.
You can emulate them, of course, just as you can emulate multidimensional arrays, but reinventing the wheel in this day and age is a waste of time.
Why use an inadequate tool?
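For concreteness, here is a minimal sketch of both options: emulating a push-back with vconcat (which copies the whole vector on every push), and the usual Emacs Lisp idiom of collecting elements in a list and converting once at the end. The variable names are made up for illustration.

    ;; Emulated push-back: build a fresh, longer vector each time (O(n) per push).
    (setq my-vec [1 2 3])
    (setq my-vec (vconcat my-vec (vector 4)))   ; => [1 2 3 4]

    ;; The more adequate tool: collect in a list, convert once at the end.
    (setq my-items nil)
    (push 1 my-items)
    (push 2 my-items)
    (setq my-vec (vconcat (nreverse my-items))) ; => [1 2]

If many elements are appended, the list version avoids the repeated copying that the vconcat version incurs.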

Related

Complex inverse and complex pseudo-inverse in Scala?

I'm considering learning Scala for my algorithm development, but I first need to know whether the language has implemented (or is implementing) complex inverse and pseudo-inverse functions. I looked at the documentation (here, here), and although it states that these functions are for real matrices, looking at the code I don't see why it wouldn't accept complex matrices.
There's also the following comment left in the code:
pinv for anything that can be transposed, multiplied with that transposed, and then solved
Is this just my wishful thinking, or will it not accept complex matrices?
Breeze implementer here:
I haven't implemented inv etc. for complex numbers yet, because I haven't figured out a good way to store complex numbers unboxed in a way that is compatible with BLAS and LAPACK and doesn't break the current API. You can set the call up yourself using netlib-java, following a recipe similar to the code you linked.
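As a stop-gap, here is a rough sketch (not part of Breeze's API; the helper name complexInv is made up) that sidesteps complex LAPACK calls entirely by embedding the complex matrix A = X + iY in the real block matrix [[X, -Y], [Y, X]] and reusing the real-valued inv that Breeze already ships:

    import breeze.linalg._
    import breeze.math.Complex

    // Invert a complex matrix via its real block-matrix representation:
    // if A = X + iY, then the inverse of [[X, -Y], [Y, X]] is [[U, -V], [V, U]]
    // where inv(A) = U + iV.
    def complexInv(a: DenseMatrix[Complex]): DenseMatrix[Complex] = {
      val n = a.rows
      val x = a.map(_.real)
      val y = a.map(_.imag)
      val big = DenseMatrix.vertcat(
        DenseMatrix.horzcat(x, -y),
        DenseMatrix.horzcat(y, x))
      val bigInv = inv(big)                      // real inversion only
      val u = bigInv(0 until n, 0 until n)
      val v = bigInv(n until 2 * n, 0 until n)
      DenseMatrix.tabulate(n, n)((i, j) => Complex(u(i, j), v(i, j)))
    }

This does a constant factor more work than a native complex routine would, so treat it as a workaround until unboxed complex storage is sorted out.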

Why is the Matlab function interpn being modified?

The Matlab function interpn does n-grid interpolation. According to the documentation page:
In a future release, interpn will not accept mixed combinations of row and column vectors for the sample and query grids.
This page provides a bit more information but is still kind of cryptic.
My question is this: Why is this modification being implemented? In particular, are there any pitfalls to using interpn?
I am writing a program in Fortran that is supposed to produce results similar to a Matlab program that uses interpn as a crucial component. I'm wondering if the Matlab program might have a problem related to this modification.
No, I don't think this indicates that there is any sort of problem with using interpn, or any of the other MATLAB interpolation functions.
Over the last few releases MathWorks has been introducing some new/better functionality for interpolation (for example the griddedInterpolant, scatteredInterpolant and delaunayTriangulation classes). This has been going on in small steps since R2009a, when they replaced the underlying QHULL libraries for computational geometry with CGAL.
It seems likely to me that interpn has for a long time supported an unusual form of input arguments (i.e. mixed row and column vectors to define the sample grid) that is probably a bit confusing for people, hardly ever used, and a bit of a pain for MathWorks to support. So as they move forward with the newer functionality, they're just taking the opportunity to simplify some of the syntaxes supported: it doesn't mean that there is any problem with interpn.
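To make the deprecated form concrete, here is a small, made-up illustration of a call that mixes a row and a column sample vector (the form being phased out) next to calls that will keep working:

    x = 0:0.5:2;                                 % row vector
    y = (0:0.5:2).';                             % column vector
    [X, Y] = ndgrid(x, y);
    V = X.^2 + Y.^2;                             % values on the sample grid
    vq_mixed = interpn(x, y, V, 1.25, 0.75);     % mixed orientations: being phased out
    vq_rows  = interpn(x, y.', V, 1.25, 0.75);   % consistently oriented vectors: fine
    vq_grid  = interpn(X, Y, V, 1.25, 0.75);     % full ndgrid arrays: also fine

All three calls currently return the same value; only the first uses the syntax that the documentation says will stop being accepted.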

Diagonalization of Hermitian matrices: Julia vs. Fortran

I have a program written in Fortran and in Julia. In one of my cases I have symmetric matrices, and I get more or less similar results with both programs. When I switch to a case where I have Hermitian matrices, the Julia program and the Fortran program give me different results. I would guess that the difference comes from the diagonalization procedure. In Fortran I use:
ZHEEVD(..)
while in Julia I simply use:
eig(matrix)
The first thing that I notice is that ZHEEVD fixes the first row of the eigenvector matrices to real numbers (no imaginary part), while eig fixes the last row to real numbers.
Any idea how to overcome these tiny differences? Any more info that could be useful when dealing with Julia's linear algebra built-ins?
Digging into the Julia methods (the @less macro is very handy for this), you'll find that it eventually calls the LAPACK.syevr! method, which in the Complex128 case is a wrapper for the ZHEEVR LAPACK routine (scroll down a bit to see the actual definition).
If you'd prefer to keep using ZHEEVD, you can access it via the ccall interface: see the manual section on Calling C and Fortran code. The LAPACK wrappers linked above should provide plenty of examples (LAPACK comes as part of OpenBLAS, which is included in Julia, so you shouldn't need to install anything else).
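If the goal is just to compare the two programs, note that each eigenvector is only determined up to a unit-modulus phase factor. A minimal sketch (my own suggestion, not something the LAPACK wrappers provide) is to normalize both outputs to a common convention before comparing, e.g. forcing the first component of each eigenvector to be real and non-negative:

    # Rescale each eigenvector so its first component is real and >= 0.
    # Apply the same normalization to the eigenvectors returned by ZHEEVD
    # before comparing the two programs.
    function fix_phases!(V)
        for j in 1:size(V, 2)
            p = V[1, j]
            if abs(p) > 0
                V[:, j] *= conj(p) / abs(p)
            end
        end
        return V
    end

    d, V = eig(matrix)   # as in the question
    fix_phases!(V)

Degenerate eigenvalues can still mix their eigenvectors differently between the two libraries, so in that case compare the spanned subspaces rather than individual columns.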

Suffix tree in Matlab

I am trying to find the longest substring of a text T such that it is a prefix of a string S. I have devised an algorithm using a suffix tree, which gives a solution of lower complexity, but since Matlab doesn't have pointers or any other kind of reference, I am stuck on the implementation.
Could somebody please suggest a solution, or an alternative approach to this problem, if possible in Matlab?
Here are a few suggestions for using "pointers" in Matlab:
You can simply use cell array indices as pointers to reference cell array elements. This is probably the simplest approach (see the sketch after this list).
You can use a handle class to create classes whose instances you can hold references to. This is a little more involved, but very nice from a software-engineering point of view.
As a less Matlab-y solution, you could write the algorithm in C and use MEX to interface between Matlab and your algorithm.
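A toy sketch of the first suggestion, with made-up field names (first, len, children), just to show the index-as-pointer mechanics:

    % Each node lives in a cell array; "pointers" to children are cell indices.
    nodes = {};
    nodes{1} = struct('first', 0, 'len', 0, 'children', []);   % root
    nodes{2} = struct('first', 1, 'len', 3, 'children', []);   % one child edge
    nodes{1}.children(end+1) = 2;          % link root -> child by index
    child = nodes{nodes{1}.children(1)};   % "dereference" by indexing

Growing the cell array one element at a time is quadratic overall, so preallocate it if the tree will be large.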

Which language to compute a Frechet/Gateaux derivative of an abstract function?

I would like to compute a Frechet/Gateaux derivative of a function which is not entirely explicit, and my question is: what would be the most efficient way to do it? Which language would you recommend I use?
More precisely, my problem is that I have a function, say F, which is the square of the Euclidean norm of a sum of products of pairs of multidimensional functions (i.e. functions from R^n to R^k).
AFAIK, if I use Maple or Maxima, they will ask me to make the functions involved in the formula explicit, whereas I would like to keep them abstract. Also, I necessarily need to compute a Frechet/Gateaux derivative so as to keep the expressions simple. Indeed, when I proceed the standard way, I start by expanding the square of the Euclidean norm as a sum of squares, and there are a lot of indices. Since my goal is a Taylor expansion with integral remainder to third order, the expression becomes, in my opinion, humanly infeasible (the formula is more than one A4 page long).
So I would prefer to use a Frechet/Gateaux derivative, which would allow me, among other things, to keep scalar products instead of sums.
As the functions involved have some similarities with their derivatives (due to the presence of exponentials), there is only a small number of rules to know. So I thought that I might build such a single-purpose computer algebra system myself.
And I started to learn Lisp, as I read that it would be well suited to my problem, but I am a little bit lost now, since this language is very different and I am still used to thinking in terms of C/Python/Perl...
Here is another question: do you have links to courses or articles about how a computer algebra system for symbolic computation is built (preferably in Lisp)? Any suggestions are welcome.
Thank you very much for your answers.
My advice is to use Maxima. Maxima is inspired by Lisp, and implemented in Lisp, so using Maxima will save you a tremendous amount of time and trouble. If Lisp is suitable for your problem, Maxima is even more so.
Maxima will allow you to use undefined terms in an expression; it is not necessary to define all terms.
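For example (a minimal sketch relying only on the standard depends mechanism), differentiation works with f and g left completely abstract:

    /* f and g are never defined; depends() just records that they depend on x */
    depends([f, g], x);
    diff(f * g, x);
    /* result: the product rule, with 'diff(f,x,1) and 'diff(g,x,1) left symbolic */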
Post a message to the Maxima mailing list (maxima@math.utexas.edu) to ask for specific advice. Please explain in detail what you are trying to accomplish.