Diagonalization of Hermitian matrices: Julia vs Fortran (fortran90)

I have a program written in both Fortran and Julia. In one of my cases the matrices are symmetric,
and I get more or less the same results from both programs. When I switch to a case with Hermitian matrices, however, the Julia program and the Fortran program give me different results. I would guess that the difference comes from the diagonalization procedure. In Fortran I use:
ZHEEVD(..)
while in Julia I simply use:
eig(matrix)
The first thing I notice is that ZHEEVD fixes the first row of the eigenvector matrix to real numbers (no imaginary part), while eig fixes the last row to real numbers.
Any idea how to overcome these tiny differences? Is there any other info that would be useful to know when dealing with Julia's linear algebra built-ins?

Digging into the Julia methods (the @less macro is very handy for this), you'll find that eig eventually calls the LAPACK.syevr! method, which in the Complex128 case is a wrapper for the ZHEEVR LAPACK routine (scroll down a bit to see the actual definition).
If you'd prefer to keep using ZHEEVD, you can access it via the ccall interface: see the manual section on Calling C and Fortran code. The LAPACK wrappers linked above should provide plenty of examples (LAPACK comes as part of OpenBLAS, which is included in Julia, so you shouldn't need to install anything else).
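If all you need is to reconcile the two programs' output, note that each eigenvector of a Hermitian matrix is only defined up to a complex phase, so you can rotate every column into whichever convention you like before comparing. Here is a minimal sketch, assuming a recent Julia where eig has become eigen in the LinearAlgebra stdlib (adapt the names for older versions):
using LinearAlgebra

A = rand(ComplexF64, 4, 4)
H = Hermitian(A + A')        # A + A' has a real diagonal, so this is a valid Hermitian matrix
vals, vecs = eigen(H)

# Each eigenvector is unique only up to a phase factor exp(i*theta).
# Rotate every column so a chosen entry -- here the first, matching the
# ZHEEVD convention described in the question -- becomes real and >= 0.
# (Real code should guard against a near-zero entry in that position.)
for j in 1:size(vecs, 2)
    vecs[:, j] ./= vecs[1, j] / abs(vecs[1, j])
end
Recent Julia versions also appear to wrap ZHEEVD itself as LAPACK.syevd! (the complex-element methods dispatch to zheevd), which would be simpler than a raw ccall; check the lapack.jl source of your version before relying on it.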

Related

Is there a way to define a general function in matlab?

Matlab lets you work with known functions that you define explicitly.
But sometimes I want to do a complex symbolic calculation using a general function, say A(x), without specifying A(x).
In other words, is it possible for me to make a statement like
diff(A(x^2+1), x), where the answer should involve a symbolic derivative of A?
diff(A(x^2+1), x) = A'(x^2+1) * diff(x^2+1, x)
where A' is the derivative of A.
Yes. The functionality you describe is part of the Symbolic Math Toolbox -- note that it comes with some fairly significant limitations -- but, in short, all you need is
syms x A(x)             % declare x as a symbolic variable and A as an abstract symbolic function
diff(A(x), x)           % returns D(A)(x), the undetermined derivative of A
diff(A(x^2 + 1), x)     % your case: the chain rule gives 2*x*D(A)(x^2 + 1)
Note that ' is reserved for transpose, even with symbolic functions. (Although, personally, I'd suggest Mathematica over Matlab for any serious symbolic algebra any day -- symbolic computation is the intended purpose of that whole product, whereas the Symbolic Math Toolbox is exactly what the name says: a toolbox added onto the core strength of Matlab, namely fast numerical linear algebra.)

Complex inverse and complex pseudo-inverse in Scala?

I'm considering learning Scala for my algorithm development, but first I need to know whether the language has implemented (or is implementing) complex inverse and pseudo-inverse functions. I looked at the documentation (here, here), and although it states these functions are for real matrices, from the code I don't see why they wouldn't accept complex matrices.
There's also the following comment left in the code:
pinv for anything that can be transposed, multiplied with that transposed, and then solved
Is this just my wishful thinking, or will it not accept complex matrices?
Breeze implementer here:
I haven't implemented inv etc. for complex numbers yet, because I haven't figured out a good way to store complex numbers unboxed in a way that is compatible with BLAS and LAPACK and doesn't break the current API. You can set the call up yourself using netlib-java, following a recipe similar to the code you linked.

Why is Matlab function interpn being modified?

The Matlab function interpn does n-grid interpolation. According to the documentation page:
In a future release, interpn will not accept mixed combinations of row and column vectors for the sample and query grids.
This page provides a bit more information but is still kind of cryptic.
My question is this: Why is this modification being implemented? In particular, are there any pitfalls to using interpn?
I am writing a program in Fortran that is supposed to produce results similar to those of a Matlab program that uses interpn as a crucial component. I'm wondering if the Matlab program might have a problem related to this modification.
No, I don't think this indicates that there is any sort of problem with using interpn, or any of the other MATLAB interpolation functions.
Over the last few releases MathWorks has been introducing some new/better functionality for interpolation (for example the griddedInterpolant, scatteredInterpolant and delaunayTriangulation classes). This has been going on in small steps since R2009a, when they replaced the underlying QHULL libraries for computational geometry with CGAL.
It seems likely to me that interpn has for a long time supported an unusual form of input arguments (i.e. mixed row and column vectors to define the sample grid) that is probably a bit confusing for people, hardly ever used, and a bit of a pain for MathWorks to support. So as they move forward with the newer functionality, they're just taking the opportunity to simplify some of the syntaxes supported: it doesn't mean that there is any problem with interpn.

is there a faster version of fminbnd in matlab?

I am using fminbnd in Matlab, and I find it relatively slow (I am calling it inside a nested loop). The function itself, its interface, and the values it returns are great, but looking into the .m file I see it is not optimized. As a matter of fact, I was hoping for something like this to be implemented as a MEX file.
Does anyone know of an alternative to fminbnd that works much faster and does not have as much overhead?
It's written like that because it has to evaluate (feval) your user-defined function(s) on every iteration; Matlab's ODE solvers work the same way. In current Matlab it is costly for C/C++ code to call a user-defined Matlab function and read its return values iteratively.
Make sure you're using the options correctly, that fminbnd is the right tool (maybe a simpler scheme would be better, or, since this is in a loop, maybe a multi-dimensional method like fminsearch would be more appropriate), and that you have optimized your objective function. The next easiest thing would be to try compiling your Matlab code to C or C++ (see codegen). You'll likely need to compile in your objective function, and all of the options as well, in order to avoid the callback slowdown mentioned above. I've not tried this for fminbnd, but I did see mention of it working online. If your objective function itself is complicated, you could try converting just it to a MEX function.
fminbnd is based on Brent's method. You can find C, C++, and FORTRAN code for that here. The GSL also has a version: gsl_min_fminimizer_brent.
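To see how little work the algorithm itself does per iteration, here is a minimal golden-section search -- the simpler relative of the Brent's method that fminbnd uses -- sketched in Julia, since the linked implementations aren't reproduced here. The per-iteration cost is two function calls and a little arithmetic, which is why the callback overhead dominates in Matlab:
# Golden-section search for the minimum of f on [a, b].
# Illustrative only: a production version (like Brent's method) would
# cache one function evaluation per iteration and add a parabolic step.
function golden_min(f, a, b; tol = 1e-8)
    phi = (sqrt(5) - 1) / 2          # inverse golden ratio, about 0.618
    while b - a > tol
        c = b - phi * (b - a)        # interior points, c < d
        d = a + phi * (b - a)
        if f(c) < f(d)
            b = d                    # minimum lies in [a, d]
        else
            a = c                    # minimum lies in [c, b]
        end
    end
    return (a + b) / 2
end

golden_min(x -> (x - 2)^2, 0, 5)     # returns approximately 2.0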

How does MATLAB vectorized code work "under the hood"?

I understand how using vectorization in a language like MATLAB speeds up the code by removing the overhead of maintaining a loop variable, but how does the vectorization actually take place in the assembly / machine code? I mean there still has to be a loop somewhere, right?
Matlab's 'vectorization' concept is completely different from the concept of CPU vector instructions such as SSE. This is a common misunderstanding between two groups of people: Matlab programmers and C/asm programmers. Matlab 'vectorization', as the word is commonly used, is only about expressing loops in the form of (vectors of) matrix indices, and sometimes about writing things in terms of basic matrix/vector operations (BLAS), instead of writing the loop itself. Matlab 'vectorized' code is not necessarily expressed as vectorized CPU instructions. Consider the following code:
A = rand(1000);
B = (A(1:2:end,:)+A(2:2:end,:))/2;
This code computes the mean of each pair of adjacent matrix rows. It is a 'vectorized' Matlab expression. However, since Matlab stores matrices column-wise (columns are contiguous in memory), this operation is not trivially turned into operations on SSE vectors: because we perform the operations row-wise, the data that needs to be loaded into the vector registers is not stored contiguously in memory.
This code on the other hand
A = rand(1000);
B = (A(:,1:2:end)+A(:,2:2:end))/2;
can take advantage of SSE and streaming instructions, since we operate on two adjacent columns at a time and adjacent columns are contiguous in memory.
So, Matlab 'vectorization' is not equivalent to using CPU vector instructions; it is just a word used to signify the absence of a loop written in Matlab itself. To add to the confusion, people sometimes even use the word to say that a loop has been implemented using a built-in function such as arrayfun or bsxfun, which is even more misleading, since those functions can be significantly slower than native Matlab loops. As robince said, not all loops are slow in Matlab nowadays, though you do need to know when they work well and when they don't.
Either way, there is always a loop somewhere; it is just implemented in Matlab's built-in functions / BLAS instead of in the user's Matlab code.
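The column-major point is easy to demonstrate outside Matlab too. Here is the same pair of slicings sketched in Julia (which, like Matlab and Fortran, stores arrays column-major); the function names are made up for the example, and timings are indicative only, varying with hardware and library versions:
A = rand(1000, 1000)

rowpairs(A) = @views (A[1:2:end, :] .+ A[2:2:end, :]) ./ 2   # strided memory access
colpairs(A) = @views (A[:, 1:2:end] .+ A[:, 2:2:end]) ./ 2   # contiguous memory access

rowpairs(A); colpairs(A)    # run once to compile both
@time rowpairs(A)           # typically slower: gathers from strided locations
@time colpairs(A)           # typically faster: streams through contiguous columns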
Yes, there is still a loop, but it runs directly in compiled code. Loops in Fortran (on which Matlab was originally based), C, or C++ are not inherently slow. That they are slow in Matlab is a property of the dynamic runtime (they are also slower in other dynamic languages like Python).
Since Matlab introduced a just-in-time (JIT) compiler, loop performance has actually increased dramatically, so the old guidelines to avoid loops are less important with recent versions than they once were.