Scala: recursion and stack overflow error

I'm trying to do the following exercise:
Define a recursive function inteDef that approximates the definite integral of a function f on the interval [a,b] using the composite trapezoidal rule. The function will take eps as a parameter, which marks the maximum length of the subintervals into which the interval [a,b] will be divided. Thus, if the length of [a,b] is less than or equal to eps, the function will return the
area of the corresponding trapezoid. Otherwise, it will divide the interval [a,b] into two subintervals of the same length (half of the original interval) and it will recursively approximate the integral of both intervals and return their sum.
My solution attempt is:
def inteDef(f: Double => Double, x: Double, y: Double, eps: Double): Double = {
  if (abs(y - x) <= eps)
    (y - x) * (f(x) + f(y)) / 2.0
  else
    inteDef(f, x, (y - x) / 2.0, eps) + inteDef(f, (y - x) / 2.0, y, eps)
}
It works when eps is greater than or equal to abs(y - x), but gives a stack overflow error otherwise.

(y - x) / 2.0 is not the midpoint between x and y; (x + y) / 2.0 is. Because the split point is wrong, the subintervals do not necessarily get shorter than eps, so the recursion never reaches the base case and the stack overflows.
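For reference, a minimal corrected sketch (assuming scala.math.abs is imported) that splits at the midpoint:

import scala.math.abs

def inteDef(f: Double => Double, x: Double, y: Double, eps: Double): Double = {
  if (abs(y - x) <= eps)
    // base case: area of a single trapezoid over [x, y]
    (y - x) * (f(x) + f(y)) / 2.0
  else {
    // otherwise split [x, y] at the midpoint and sum both halves
    val mid = (x + y) / 2.0
    inteDef(f, x, mid, eps) + inteDef(f, mid, y, eps)
  }
}

For example, inteDef(x => x * x, 0.0, 1.0, 1e-4) should come out very close to 1.0 / 3.0.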

Related

How to get the average of two floating point numbers

I want to get the average of two floating-point numbers. My function for the integer variant
let int_average x y = (x + y) / 2
works fine but when I try to write it for floats
let float_average x y = (x +. y) / 2.
it fails with the error
This expression has type float but an expression was expected of type
int
You forgot to "floatify" the division operator. / should be /., just as +. is the float variant of +:
let float_average x y = (x +. y) /. 2.

initial point in CORDIC algorithm

I am trying to reduce the number of iterations required to calculate a multiplication using the CORDIC algorithm, because I am using this algorithm in a continuous function to calculate the square function. Here is the algorithm, assuming -1 < x < 1:
function z = square(x)
  y = x;
  z = 0;
  for i = 1:15
    if (x > 0)
      x = x - 2^(-i);
      z = z + y*2^(-i);
    else
      x = x + 2^(-i);
      z = z - y*2^(-i);
    end
  end
  return
end
I already know a value close to the multiplication result (the previous result, call it pr) and the value of x (x varies continuously). Does this help in any way to decrease the number of iterations?
If you are multiplying twice by the same constant, say a·x and then a·x', you can instead multiply by the delta and add: a·x' = a·x + a·(x' - x), where x' - x has fewer significant digits.
In case both factors vary, you can still use
x'·y' = (x' - x)·(y' - y) + x·(y' - y) + (x' - x)·y + x·y
where the first term may be negligible.
For a square,
x'² = (x' - x)² + 2·x·(x' - x) + x²
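A rough illustration of the last identity (a sketch in Scala, not part of the original answer): the new square can be updated from the previous one using only the small delta, whose terms need far fewer iterations than a full multiplication.

// Incrementally update x^2 when x changes by a small delta,
// using x'^2 = x^2 + 2*x*(x' - x) + (x' - x)^2.
def updatedSquare(prevX: Double, prevSquare: Double, newX: Double): Double = {
  val delta = newX - prevX
  prevSquare + 2.0 * prevX * delta + delta * delta
}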

How to work with absolute values (including variables) in objective function of Matlab Optimization

I'm trying to minimize an objective function that contains absolute terms including some of the variables. The target function looks like this (I'll only write down two terms, the actual problem contains between 500 and 5000, depending on other parameters):
min |f_11 * x_1 + f_21 * x_2 - y_1| + |f_12 * x_1 + f_22 * x_2 - y_2|
There can also be different types of constraints. Since I don't have the Symbolic Toolbox I have no clue how to put this into Matlab.
I thought of interpreting this as a quadratic program, where I square each term and then take the square root of it. With an anonymous function this would look like this, I think:
f = @(X) sqrt((F*X - Y) .* (F*X - Y)) * ones(size(Y));
Where F and Y contain the values of f_ij and y_j. So in my case F is of size ix2, Y is of size ix1 and X is of size 1x2.
The problem here is, I can't calculate the numerical Hessian via the DERIVEST suite (http://www.mathworks.com/matlabcentral/fileexchange/13490-adaptive-robust-numerical-differentiation). I get the error:
Error using *
Inner matrix dimensions must agree.
Error in calcHq>@(W)sqrt((F*W-Y).*(F*W-Y))*ones(size(Y)) (line 16)
f = @(W) sqrt((F*W - Y) .* (F*W - Y)) * ones(size(Y));
Error in hessdiag>@(xi)fun(swapelement(x0,ind,xi)) (line 60)
@(xi) fun(swapelement(x0,ind,xi)), ...
Error in derivest (line 337)
f_x0(j) = fun(x0(j));
Error in hessdiag (line 59)
[HD(ind),err(ind),finaldelta(ind)] = derivest( ...
Error in hessian2 (line 74)
[hess,err] = hessdiag(fun,x0);
I assume there is some problem with the elementwise multiplication, but I really can't figure out what I'm doing wrong. Maybe someone can give me a hint.
Ok guys, thank you very much. I just found out what I did wrong and it is so embarrassing ...
The order of multiplications is wrong ...
f = @(X) sqrt((F*X - Y) .* (F*X - Y)) * ones(size(Y));
This gives back an ixi matrix, while
f = @(X) ones(size(Y)) * sqrt((F*X - Y) .* (F*X - Y));
gives back a scalar.
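Just to spell out the intended objective (a sketch in Scala with hypothetical names, outside the original Matlab workflow): the value being minimized is the scalar sum of absolute residuals over all rows of F.

// Scalar objective sum_i |f_i1*x_1 + f_i2*x_2 + ... - y_i|.
def sumAbsResiduals(fRows: Array[Array[Double]], y: Array[Double], x: Array[Double]): Double =
  fRows.zip(y).map { case (row, yi) =>
    val fit = row.zip(x).map { case (fij, xj) => fij * xj }.sum
    math.abs(fit - yi)
  }.sum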

matlab quadprog constraints issue

I have a portfolio of weights and I am using quadprog in Matlab.
I have all the inputs for the quadprog optimizer. I am just having some trouble formulating the constraints.
I would like my constraints to have a lower bound of either 0 or 1%. Is there a way to do that while maintaining my objective function?
Thanks!
I am not sure if I understand your question well.
If your weights are already defined in terms of percentages, this is directly covered by quadprog's lower-bound argument:
x = quadprog(H, f, [], [], [], [], lb, [])
Here H and f are as given in the Matlab description of quadprog:
"quadprog(H,f) returns a vector x that minimizes 1/2 * x' * H * x + f' * x. H must be positive definite for the problem to have a finite minimum."
And lb is the vector of lower bounds. For instance, if x is a 3 x 1 vector and the desired percentage is 0.01 (1%), then lb = [0.01; 0.01; 0.01].
On the other hand, let's assume that sum_{i=1}^{n} w_i is not equal to 1, so w_i is not defined in terms of percentage.
Then the constraint you need is p_i (percentage) = w_i / (sum_j w_j) >= 0.01 (in the case where the lower bound is 1%).
Note that the constraint in this case is
w_i >= 0.01 * (sum w_i)
Or
-0.01 * (sum_{j=1}^{i-1} w_j) + 0.99 * w_i - 0.01 * (sum_{j=i+1}^{n} w_j) >= 0
Or
0.01 * (sum_{j=1}^{i-1} w_j) - 0.99 * w_i + 0.01 * (sum_{j=i+1}^{n} w_j) <= 0
Therefore, this is a constraint of the type Ax <= b.
So
A_ij = 0.01 when i is different from j, A_ii = -0.99, and b = zeros(n, 1).
In this case you are using
x = quadprog(H, f, A, b)
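A small sketch (in Scala, just to show the shape of A and b; the actual call stays in Matlab) of how this constraint matrix could be assembled for n weights and lower bound p:

// Build A and b for the relative lower-bound constraint
// p * sum_{j != i} w_j - (1 - p) * w_i <= 0, written as A * w <= b.
def relativeLowerBound(n: Int, p: Double = 0.01): (Array[Array[Double]], Array[Double]) = {
  val a = Array.tabulate(n, n)((i, j) => if (i == j) -(1.0 - p) else p)
  val b = Array.fill(n)(0.0)
  (a, b)
}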
I hope I helped you!
Daniel

ezmesh offsets all z values by over 500

I'm using octave's ezmesh to plot a linear regression defined as follows:
f = @(x,y) 1 * theta(1) + x * theta(2) + y * theta(3) + x * y * theta(4)
For some fixed vector theta:
octave:275> theta
theta =
9.4350e+00
1.7410e-04
3.3702e-02
1.6498e-07
I'm using a domain of [0 120000 0 1400], and can evaluate:
octave:276> f(0, 0)
ans = 9.4350
octave:277> f(120000, 1400)
ans = 105.23
However, if I run:
octave:278> ezmesh(f, [0 120000 0 1400])
The resulting mesh has a z value of around 570 for (0, 0) and just under 640 for (120000, 1400). I'm baffled. What could be causing this?
EDIT: Even if I simplify f to the following, similar behavior occurs:
octave:308> f = @(x, y) (x * y)
Why is ezmesh not handling multiplication the way I expect? The function evaluates correctly on its own, but the values change when it is used inside ezmesh.
ezmesh invokes the function handle on whole matrices of grid values (to benefit from vectorization), so * performs matrix multiplication instead of pointwise multiplication. Use .* for elementwise multiplication.
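Roughly what the handle has to support (a sketch in Scala with hypothetical names, just to illustrate the idea): ezmesh evaluates the function over a whole grid at once and expects one value per grid point, which is exactly the elementwise behaviour that .* gives you.

// Apply f at every (x, y) point of a rectangular grid,
// mirroring the pointwise evaluation ezmesh expects.
def evalOnGrid(f: (Double, Double) => Double,
               xs: Array[Double], ys: Array[Double]): Array[Array[Double]] =
  ys.map(y => xs.map(x => f(x, y)))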