Turn off "smart behavior" in Matlab - matlab

There is one thing I do not like about Matlab: it sometimes tries to be too smart. For instance, take the square root of a negative number, as in
a = -1; sqrt(a)
Matlab does not throw an error but silently switches to complex numbers. The same happens for logarithms of negative numbers. This can lead to hard-to-find errors in a more complicated algorithm.
A similar problem is that Matlab silently "solves" non-square linear systems, as in the following example:
A=eye(3,2); b=ones(3,1); x = A \ b
Obviously x does not satisfy A*x==b (it solves a least-squares problem instead).
Is there any way to turn these "features" off, or at least make Matlab print a warning message in such cases? That would really help a lot in many situations.

I don't think there is anything like "being smart" in your examples. The square root of a negative number is complex. Similarly, the left-division operator is defined in Matlab as calculating the pseudoinverse for non-square inputs.
If you have an application that should not return complex numbers (beware of floating point errors!), then you can use isreal to test for that. If you do not want the left division operator to calculate the pseudoinverse, test for whether A is square.
Alternatively, if for some reason you are really unable to do input validation, you can overload both sqrt and \ to only work on positive numbers, and to not calculate the pseudoinverse.
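If overloading the built-ins feels too invasive, a minimal sketch of an equivalent pair of wrapper functions could look like the following (the names strictSqrt and strictSolve are mine, not built-ins; save each in its own file or as local functions):
function y = strictSqrt(x)
% Like sqrt, but errors instead of silently returning complex values.
if ~isreal(x) || any(x(:) < 0)
    error('strictSqrt:negativeInput', 'Input must be real and nonnegative.');
end
y = sqrt(x);
end

function x = strictSolve(A, b)
% Like A\b, but refuses to silently return a least-squares solution.
if size(A,1) ~= size(A,2)
    error('strictSolve:notSquare', 'A is %d-by-%d, not square.', size(A,1), size(A,2));
end
x = A \ b;
end
With these, strictSqrt(-1) and strictSolve(eye(3,2), ones(3,1)) both raise an error instead of quietly switching behavior.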

You need to understand all of the implications of what you're writing and make sure that you use the right functions if you're going to guarantee good code. For example:
For the first case, use realsqrt instead
For the second case, use inv(A) * b instead (inv errors for a non-square A rather than silently computing a least-squares solution)
Or alternatively, include the appropriate checks before/after you call the built-in functions. If you need to do this every time, then you can always write your own functions.
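As a quick illustration of the first suggestion (realsqrt, and its companion reallog, both refuse negative input; a small sketch):
realsqrt(4)        % returns 2
reallog(exp(2))    % returns 2
try
    realsqrt(-1)   % errors instead of returning 0 + 1i
catch err
    disp(['realsqrt refused: ' err.message])
end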

Related

Convert symbolic expression into a rational polynomial

I have a lengthy symbolic expression that involves rational polynomials (basic arithmetic and integer powers). I'd like to simplify it into a single (simple) rational polynomial.
numden does it, but it seems to use some expensive optimization, which probably addresses a more general case. When tried on my example below, it crashed after a few hours with an out-of-memory error (32GB).
I believe something more efficient is possible, even though I don't have C++ access to Matlab functionality (e.g. children).
Motivation: I have an objective function that involves polynomials. I manually derived it, and I'd like to verify and compare the derivatives: I subtract the two expressions, and the result should vanish.
Currently, my interest in this is academic since, practically, I simply substitute some random values, get zero, and that's enough for me.
I'll try to find the time to play with this at some point, and I'll update here about it, but I posted it in case someone finds it interesting and would like to give it a try before then.
To run my function:
x = sym('x', [1 32], 'real')
e = func(x)
The function (and, believe it or not, this is just the Jacobian; I also have the Hessian) can't be pasted here since the text limit is 30K:
https://drive.google.com/open?id=1imOAa4VS87WDkOwAK0NoFCJPTK_2QIRj
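For reference, the random-substitution check mentioned above can be done along these lines (e1 and e2 are small stand-ins here, since the real expression is only available via the link):
x  = sym('x', [1 3], 'real');
e1 = (x(1) + x(2))^2 / (1 + x(3)^2);                    % "manual" expression
e2 = (x(1)^2 + 2*x(1)*x(2) + x(2)^2) / (1 + x(3)^2);    % "derived" expression
d  = double(subs(e1 - e2, x, rand(1, numel(x))));       % substitute random values
fprintf('difference at a random point: %g\n', d)        % should be ~0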

Matlab Symbolic expression creates overflow

I am using the Matlab Symbolic Toolbox to create a function of high complexity, which is then written to a .m-file (using matlabFunction). For some reason, after simplifying the function, it is returned in a form that looks like fun = (A*1.329834759483753e310 + B*5.873798798237459e305 + ...)*7.577619127319697e-320, where A and B are functions of my variables (too complex to repeat here). That is, all the terms within the parentheses are on the order of about 1e280 to 1e300. The problem arises when the values become larger than about 1.79e308, as this causes an overflow for doubles when calling the generated .m-function. The true magnitude of the function is nowhere close to an overflow, but this way of expressing it is. This would be solved if simplify multiplied the 1e-320 into the parentheses, but for some reason it doesn't.
Any idea why the symbolic toolbox chooses to represent my function this way?
I have found that I can call expand(fun) to multiply the 1e-320 into the parentheses. The resulting expression then has exponents of the expected sizes (in the range -1 to -30), but I would prefer to know why the expression looks like this in the first place, and whether there are better options than calling expand to avoid the problem. Besides, calling expand seems to create a more complex function than the one I have, and I am trying to obtain a function that evaluates very fast here.
Large exponent multipliers are probably due to some floats in the formula. Try to avoid them and use rationals (1/2 instead of 0.5).
The best reason I've read is this one:
The floating point numbers tend to get converted to rational numbers, which usually involves multiplying by 2^53 and then factoring out the gcd from the top and bottom of the ratio. Square such a value and you are working with numbers on the order of 2^100... and so on.
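You can see this effect directly (a small sketch; the exact display may vary by release):
sym(0.1)         % default rational conversion recognizes 1/10
sym(0.1, 'f')    % exact binary value: 3602879701896397/36028797018963968
% Passing exact rationals keeps the coefficients small:
a = sym(1)/2;    % exactly 1/2, not the double 0.5
b = sym('1/3');  % exactly 1/3
simplify(a + b)  % 5/6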

Unbounded (infinite) repetitions in transitions for covergroup bins

How can I define coverage bin for transition that might have many repetitions in it? What I'm trying to do is something like this:
bins st = (2=> 3[* 1:$] => 2);
Of course, this doesn't work. Simulators give error messages when compiling this line.
Edit: Simulators complain about the $ symbol; they don't recognize it as an "unbounded maximum". However, when writing sequences, it is legal to use the consecutive repetition operator as [* 1:$]. I hope the next version of SystemVerilog makes it legal for covergroups too.
As a crude workaround, I substituted $ with a large number so it works fine for my case.
bins st = (2=> 3[* 1:1000] => 2);
SystemVerilog transition bins were not designed to handle anything but simple transitions. Anything more complex should be modeled using a cover directive, or some combination of the sequence.triggered() method and a covergroup.

Matlab, economy QR decomposition, control precision?

There is a [Q,R] = qr(A,0) function in Matlab, which, according to the documentation, returns an "economy" version of the QR decomposition of A. norm(A-Q*R) returns ~1e-12 for my data set. Also, Q'*Q should theoretically return I. In practice there are small nonzero elements above and below the diagonal (on the order of 1e-6 or so), as well as diagonal elements that are slightly greater than 1 (again, by 1e-6 or so). Is anyone aware of a way to control the precision of qr(.,0), or the quality (orthogonality) of the resulting Q, either by specifying an epsilon or via a number of iterations? The size of the data set makes qr(A) run out of memory, so I have to use qr(A,0).
When I try the non-economy setting, I actually get comparable results for A-Q*R, even for a tiny matrix containing small numbers, as shown here:
A = magic(20);
[Q, R] = qr(A); %Result does not change when using qr(A,0)
norm(A-Q*R)
As such, I don't believe the 'economy' setting is the problem, as confirmed by @horchler in the comments; you have simply run into the limits of how accurately calculations can be done with data of type 'double'.
Even if you could change the accuracy somehow, you will always be dealing with an approximation, so perhaps the first thing to consider here is whether you really need greater accuracy than you already have. If you do, there may be a way, but I doubt it will be a straightforward one.
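As a quick check of what double precision can deliver, you could measure both the residual and the orthogonality error directly (a sketch on a small matrix; your A would go in its place):
A = magic(20);
[Q, R] = qr(A, 0);                        % economy-size QR
relResid = norm(A - Q*R) / norm(A);       % relative backward error
orthErr  = norm(Q'*Q - eye(size(Q,2)));   % departure from orthogonality
fprintf('relative residual:   %.2e\n', relResid)
fprintf('orthogonality error: %.2e\n', orthErr)
% For well-scaled double data both are typically a small multiple of eps (~2.2e-16).
If you see errors around 1e-6, it is worth checking class(A): single-precision input has eps('single') of roughly 1.2e-7, which would explain numbers of that size.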

Matlab: poor accuracy of optimizers/solvers

I am having difficulty achieving sufficient accuracy in a root-finding problem on Matlab. I have a function, Lik(k), and want to find the value of k where Lik(k)=L0. Basically, the problem is that various built-in Matlab solvers (fzero, fminbnd, fmincon) are not getting as close to the solution as I would like or expect.
Lik() is a user-defined function which involves extensive coding to compute a numerical inverse Laplace transform, etc., and I therefore do not include the full code. However, I have used this function extensively and it appears to work properly. Lik() actually takes several input parameters, but for the current step, all of these are fixed except k. So it is really a one-dimensional root-finding problem.
I want to find the value of k >= 165.95 for which Lik(k)-L0 = 0. Lik(165.95) is less than L0 and I expect Lik(k) to increase monotonically from here. In fact, I can evaluate Lik(k)-L0 in the range of interest and it appears to smoothly cross zero: e.g. Lik(165.95)-L0 = -0.7465, ..., Lik(170.5)-L0 = -0.1594, Lik(171)-L0 = -0.0344, Lik(171.5)-L0 = 0.1015, ... Lik(173)-L0 = 0.5730, ..., Lik(200)-L0 = 19.80. So it appears that the function is behaving nicely.
However, I have tried to "automatically" find the root with several different methods and the accuracy is not as good as I would expect...
Using fzero(@(k) Lik(k)-L0): If constrained to the interval (165.95,173), fzero returns k=170.96 with Lik(k)-L0=-0.045. Okay, although not great. And for practical purposes, I would not know such a precise upper bound without a lot of manual trial and error. If I use the interval (165.95,200), fzero returns k=167.19 where Lik(k)-L0 = -0.65, which is rather poor. I have been running these tests with Display set to iter so I can see what's going on, and it appears that fzero hits 167.19 on the 4th iteration and then stays there on the 5th iteration, meaning that the change in k from one iteration to the next is less than TolX (set to 0.001) and thus the procedure ends. The exit flag indicates that it successfully converged to a solution.
I also tried minimizing abs(Lik(k)-L0) using fminbnd (giving upper and lower bounds on k) and fmincon (giving a starting point for k) and ran into similar accuracy issues. In particular, with fmincon one can set both TolX and TolFun, but playing around with these (down to 10^-6, much higher precision than I need) did not make any difference. Confusingly, sometimes the optimizer even finds a k-value on an earlier iteration that is closer to making the objective function zero than the final k-value it returns.
So, it appears that the algorithm is iterating to a certain point, then failing to take any further step of sufficient size to find a better solution. Does anyone know why the algorithm does not take another, larger step? Is there anything I can adjust to change this? (I have looked at the list under optimset but did not come up with anything useful.)
Thanks a lot!
As you seem to have a 'wild' function that does appear to be monotone in the region, a fairly small range of interest, and not a very high requirement on precision, I think all the criteria are met for recommending the brute-force approach.
Assuming it does not take too much time to evaluate the function at a point, please try this:
Find an upper bound xmax and a lower bound xmin, choose a preferred stepsize, and evaluate your function at
xmin:stepsize:xmax
If required (and if monotonicity really applies), you can take a tighter upper and lower bound from the result and repeat the process for better accuracy.
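A rough sketch of that grid-and-refine idea (f here is just a placeholder with a root near 171.4; in your case it would be @(k) Lik(k) - L0, possibly wrapped in arrayfun if Lik is not vectorized):
f  = @(k) (k - 171.4).^3 / 50;     % placeholder for Lik(k) - L0
lo = 165.95;  hi = 200;
for pass = 1:5
    ks   = linspace(lo, hi, 101);                           % evaluate on a grid
    vals = f(ks);
    i    = find(vals(1:end-1) < 0 & vals(2:end) >= 0, 1);   % locate the sign change
    if isempty(i), error('no sign change found on this grid'); end
    lo = ks(i);  hi = ks(i+1);                              % shrink the bracket
end
root = (lo + hi)/2                 % bracket width shrinks by a factor 100 per pass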
I also encountered this problem while using fmincon. Here is how I fixed it.
I needed to find the solution of a single-variable function within an optimization loop (over multiple variables). Because of this, I needed to provide a large interval for the solution of the single-variable function. The problem is that fmincon (or fzero) does not converge to a solution if the search interval is too large. To get past this, I solve the problem inside a while loop, starting with a huge upper bound (1e200) and checking the fval value returned by the solver. If the resulting fval is not small enough, I decrease the upper bound by a factor. The code looks something like this:
fval = 1;
factor = 1;
while fval > 1e-7
    UB = factor*1e200;   % shrink the upper bound on every pass
    [x, fval, exitflag] = fminbnd(@(x) myFun(x, ...), LB, UB, options);   % myFun stands for the single-variable function
    factor = factor * 0.001;
end
The loop exits when a good solution is found. You can of course also play with the LB by introducing another factor and/or increasing the factor step.
My 1st language isn't English so I apologize for any mistakes made.
Cheers,
Cristian
Why not use a simple bisection method? You always evaluate the midpoint of the current interval and then keep the right or left half, so that you always have one bound giving a negative value and the other giving a positive value. Since you halve the interval each time, you can reach arbitrary precision very quickly.
I would suspect, however, that there is some other problem with that function, such as discontinuities. It seems strange that fzero would work so badly. It is a deterministic function, right?
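For completeness, the bisection suggested above is only a few lines (again with a placeholder for Lik(k) - L0, since the real function is not available here):
g   = @(k) (k - 171.4).^3 / 50;    % placeholder; use @(k) Lik(k) - L0
lo  = 165.95;  hi = 200;           % g(lo) < 0 < g(hi), as in the question
tol = 1e-8;
while (hi - lo) > tol
    mid = (lo + hi)/2;
    if g(mid) < 0
        lo = mid;                  % root is in the upper half
    else
        hi = mid;                  % root is in the lower half
    end
end
root = (lo + hi)/2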