Does f(x) = 2x + 1 belong to $o(x)$?

Suppose a function f: R -> R is defined as
f(x) = mx + c for some m, c > 0 and all x in R. Does f(x) belong to o(x)?
If the answer is "NO", can we conclude that o(x) does not properly contain the set of sub-linear functions?
The reason I'm asking this:
It is easy to see that f(x) is sub-linear because
f(x1) + f(x2) = mx1 + c + mx2 + c > m(x1+x2) + c = f(x1+x2).
But lim x -> infinity f(x)/x = 2, so in this sense f(x) is not in o(x). Yet o(x) is supposed to represent the set of sub-linear functions, and that is where my confusion comes from.

No, f(x) = 2x + 1 ∉ o(x).
I think your confusion comes from the definition of sublinear. Linear algebra and computer science use two different meanings here:
In linear algebra, sublinear functions are a generalization of linear functions, i.e. every linear function is a sublinear function. As you have shown in the question, your f(x) satisfies the subadditivity criterion.
In computer science, linear and sublinear describe the asymptotic behavior. A sublinear function is a function which grows slower than every linear function, given a large enough input. Thus, no linear function is a sublinear function.
Thus, your f(x) is sublinear in the linear-algebra sense, but it is not sublinear in the computer-science sense.
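For reference, a quick check against the asymptotic definition (my own addition, using the usual convention that f ∈ o(g) means lim_{x→∞} f(x)/g(x) = 0):

$$\lim_{x\to\infty}\frac{f(x)}{x}=\lim_{x\to\infty}\frac{mx+c}{x}=m\neq 0,$$

so f(x) = mx + c is not in o(x) for any m > 0; for f(x) = 2x + 1 the limit is 2, exactly as computed in the question.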

Related

Identify powers in an algebraic expression for a Buckingham Pi calculation in MatLab

This is a continuation of an earlier question I asked here.
If I create a symbolic expression in MatLab
syms L M T
F = M*L/T^2
I want to identify the powers of each dimension M, L, or T. In this case, the answer should be
for M, 1
for L, 1
for T, -2
There would be a relatively easy way to do this if the expression F were a polynomial, employing MatLab's coeffs function. However, my expression is clearly not a polynomial as far as MatLab is concerned.
In the end, I will be working with at least two parameters so I will put them in a cell array since I anticipate cellfun will be useful.
V = L/T
param = {F,V};
The final output should be a table where the rows correspond to each dimension (L, M and T) and the columns to each parameter (F and V).
syms L M T
F = M*L/T^2
% Taking the log turns the powers into linear coefficients:
%   log(F) = log(L) + log(M) - 2*log(T)
[C,T] = coeffs(expand(log(F),'IgnoreAnalyticConstraints',true))  % note: T now holds the terms, shadowing the symbolic T
[exp(T).' C.']                                                   % one row per dimension: [dimension, power]
It returns a table with one row per dimension, pairing each of L, M and T with its power (1, 1 and -2 here).
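For the multi-parameter case mentioned in the question, here is a minimal sketch along the same lines (my own addition, not from the original answer; it assumes every parameter is a pure monomial in L, M, T and reads off each power with the identity d*diff(p,d)/p = exponent of d, instead of using coeffs):

syms L M T
F = M*L/T^2;
V = L/T;
dims  = {L; M; T};          % rows of the final table
param = {F, V};             % columns of the final table

% exponent of dimension d in monomial p: for p = d^k * (rest), d*dp/dd / p = k
expOf  = @(p, d) double(simplify(d * diff(p, d) / p));
powers = cell2mat(cellfun(@(p) cellfun(@(d) expOf(p, d), dims), ...
                          param, 'UniformOutput', false))
% powers is a 3-by-2 matrix: rows L, M, T; columns F, V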

Displaying integers with lagrange four square theorem in mathematica or matlab

As I am new to MatLab and Mathematica, I am trying to solve two (easy) problems using one of these two programs.
"In number theory, Lagrange's four-square theorem states that every natural number n can be written as n = a^2 + b^2 + c^2 + d^2, where a, b, c, d are integers.
Given a natural number n, display all possible integers a, b, c, d.
The number of ways to write a natural number
n as the sum of four squares is denoted by r4(n). Using Jacobi's theorem, plot the function r4(n)
and compare it with the function 8n√(log n)."
This is a partial answer using Mathematica built-in functions.
PowerRepresentations[n,k,p] gives the distinct representations of the integer n as a sum of k non-negative p-th integer powers.
Attention: by distinct we mean that if n = n1^p + n2^p + n3^p + ..., the function returns only the k-tuples with n1 <= n2 <= n3 <= ...
Example:
PowerRepresentations[20,4,2]
gives
{{0,0,2,4},{1,1,3,3}}
To get the number of possible representations of the integer n as a sum of d squares, you can use the SquaresR[d,n] function (your r_d(n) function).
Example:
SquaresR[4,20]
prints
144
However, as you explained, there is still some work to do, because SquaresR[d,n] also counts sign changes and permutations.
For instance:
SquaresR[2,20]
returns
8
You must understand 8 as counting without distinction:
4 sign changes:
{2,4},{2,-4},{-2,4},{-2,-4}
times
2 permutations
{2,4},{4,2}
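If you want to reproduce these counts on the MATLAB side, here is a small brute-force sketch (my own addition, not part of the answer above, and only practical for small n): it enumerates every signed, ordered quadruple, which is exactly what SquaresR[4,n] counts.

n = 20;
m = floor(sqrt(n));
[a, b, c, d] = ndgrid(-m:m);                   % all candidate quadruples with entries in -m..m
keep = (a.^2 + b.^2 + c.^2 + d.^2 == n);
quads = [a(keep), b(keep), c(keep), d(keep)];  % one row per signed, ordered representation
r4 = size(quads, 1)                            % 144 for n = 20, matching SquaresR[4,20]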

Row--wise application of an inline function

I defined an inline function f that takes as argument a (1,3) vector
a = [3;0.5;1];
b = 3 ;
f = @(x) x*a + b;
Suppose I have a matrix X of size (N,3). If I want to apply f to each row of X, I can simply write :
f(X)
I verified that f(X) is a (N,1) vector such that f(X)(i) = f(X(i,:)).
Now, if I a add a quadratic term :
f = @(x) x*A*x' + x*a + b;
the command f(X) raises an error :
Error using +
Matrix dimensions must agree.
Error in @(x) x*A*x' + x*a + b
I guess Matlab is considering the whole matrix X as the input to f, so it does not create a vector whose i-th entry equals f(X(i,:)). How can I do it?
I found out that there exists a built-in function rowfun that could help me, but it seems to be available only from release R2016 on (I have R2015a).
That is correct, and expected.
MATLAB tries to stay close to mathematical notation, and what you are doing (X*A*X' for A 3×3 and X N×3) is valid math, but not quite what you intend to do -- you'll end up with an N×N matrix, which you cannot add to the N×1 vector X*a.
The workaround is simple, but ugly:
f_vect = @(x) sum( (x*A).*x, 2 ) + x*a + b;
Now, unless your N is enormous, and you have to do this billions of times every minute of every day, the performance of this is more than acceptable.
If, however, this really and truly is your program's bottleneck, then I'd suggest taking a look at MMX on the File Exchange. Together with permute(), it will allow you to use those fast BLAS/MKL operations for this calculation, speeding it up a notch.
Note that bsxfun isn't going to work here, because that does not support mtimes() (matrix multiplication).
You can also upgrade to MATLAB R2016b, which will have built-in implicit dimension expansion, presumably also for mtimes() -- but better check, not sure about that one.
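For completeness, a small sanity check of the workaround (my own sketch; A, a, b and X are made-up example data, not from the question):

a = [3; 0.5; 1];
b = 3;
A = magic(3);                              % any 3-by-3 matrix
X = rand(5, 3);                            % N = 5 example rows

f      = @(x) x*A*x' + x*a + b;            % original row-at-a-time definition
f_vect = @(x) sum((x*A).*x, 2) + x*a + b;  % vectorized workaround from above

y_loop = zeros(size(X,1), 1);              % explicit loop over the rows for comparison
for i = 1:size(X,1)
    y_loop(i) = f(X(i,:));
end
max(abs(f_vect(X) - y_loop))               % should be (numerically) zero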

best way to obtain one answer that satisfy a linear equation in matlab

I have a linear equation:
vt = v1*x1 + v2*x2 + v3*x3
vt, v1, v2, v3 are scalars with values between 0 and 1. What is the best way to generate one set (any set will be fine) of x1, x2 and x3 that satisfies the equation above and also satisfies
x1>0
x2>0
x3>0
I have couple thousand sets of vt,v1,v2 and v3, therefore I need to be able to generate x1, x2 and x3 programmatically.
There are two ways you could approach this:
Use the method that you have devised in your post: randomly generate x1 and x2, ensure that v1*x1 + v2*x2 < vt, then go ahead and solve for x3 = (vt - v1*x1 - v2*x2) / v3.
Formulate this as a linear program. A linear program solves for a set of variables that optimize a linear objective subject to inequality or equality constraints. In other words, in canonical form:

maximize    c'*x
subject to  A*x <= b
            x >= 0

As such, we can translate your problem into a linear programming problem. The "maximize" statement is what is known as the objective function -- the overall goal of what you are trying to accomplish. In linear programming problems, we try to minimize or maximize this objective while satisfying the inequalities in the "subject to" condition. Usually the program is stated in canonical form, where every variable is additionally constrained to be non-negative.
The maximize condition can be arbitrary as you don't care about the objective. You just care about any solution. This whole paradigm can be achieved by linprog in MATLAB. What you should be careful with is how linprog is specified. In fact, the objective is minimized instead of maximized. The conditions, however, are the same with the exception of ensuring that all of the variables are positive. We will have to code that in ourselves.
In terms of the arbitrary objective, we can simply use x1 + x2 + x3, so c = [1 1 1]. Our equality constraint is v1*x1 + v2*x2 + v3*x3 = vt. We must also make sure that x is positive. linprog does not support strict inequalities (i.e. x > 0), so we circumvent this by requiring each value of x to be greater than some small constant. Since linprog expects its inequalities in the form A*x <= b, the common trick is to negate x >= 0 into -x <= 0 and, to keep the values away from zero, use -x <= -eps instead, where eps is a small constant.
However, in my experiments, doing it with a fixed eps made two of the variables end up with the same value. What I would recommend instead is to make the lower bounds random each time: draw them from a uniform distribution, as you said. This gives us a fresh problem (and hence a fresh solution) every time we want to solve it.
Therefore, our inequalities are:
-x1 <= -rand1
-x2 <= -rand2
-x3 <= -rand3
rand1, rand2, rand3 are three randomly generated numbers that are between 0 and 1. In matrix form, this is:
[-1  0  0] [x1]      [-rand1]
[ 0 -1  0] [x2]  <=  [-rand2]
[ 0  0 -1] [x3]      [-rand3]
Finally, our equality constraint from before is:
            [x1]
[v1 v2 v3]  [x2]  =  [vt]
            [x3]
Now, to use linprog, you would do this:
X = linprog(c, A, b, Aeq, beq);
c is the coefficient vector that defines the objective; in this case it is [1 1 1]. A and b are the matrix and column vector defining the inequality constraints, and Aeq and beq are the matrix and column vector defining the equality constraints. X then gives us the solution (i.e. x1, x2, x3) once linprog converges. As such, you would do this:
A = -eye(3,3);
b = -rand(3,1);
Aeq = [v1 v2 v3];
beq = vt;
c = [1 1 1];
X = linprog(c, A, b, Aeq, beq);
As an example, suppose v1 = 0.33, v2 = 0.5, v3 = 0.2, and vt = 2.5. Therefore:
rng(123); %// Set seed for reproducibility
v1 = 0.33; v2 = 0.5; v3 = 0.2;
vt = 2.5;
A = -eye(3,3);
b = -rand(3,1);
Aeq = [v1 v2 v3];
beq = vt;
c = [1 1 1];
X = linprog(c, A, b, Aeq, beq);
I get:
X =
0.6964
4.4495
0.2268
To verify that this equals vt, we would do:
s = Aeq*X
s = 2.5000
The above simply computes v1*x1 + v2*x2 + v3*x3. It is written as a dot product to keep things easy: X is a column vector, and v1, v2, v3 are already stored in Aeq, which is a row vector.
As such, either way is good, but at least with linprog, you don't have to keep looping until you get that condition to be satisfied!
Small Caveat
One small caveat that I forgot to mention in the above approach is that you need to make sure that vt >= v1*rand1 + v2*rand2 + v3*rand3, otherwise the problem is infeasible and linprog will not converge. Since you said that v1, v2, v3 are bounded between 0 and 1, the worst case is when v1, v2, v3 are all equal to 1, so it is enough to ensure that vt > rand1 + rand2 + rand3. If this is not the case, simply divide each of rand1, rand2, rand3 by (rand1 + rand2 + rand3) / vt. This makes the total sum equal vt even when all of the weights are 1, which allows the linear program to converge properly.
If you don't, the problem may be infeasible because of the inequality conditions placed on b, and you won't get the right answer. Just some food for thought! As such, do this to b before you run linprog:
if sum(-b) > vt
b = b ./ (sum(-b) / vt);
end
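Since you mention having a couple thousand sets of vt, v1, v2, v3, here is a hedged sketch (my own addition) of how the above could be looped over all of them. It assumes the sets are stored in an M-by-4 matrix V with one row per set, ordered [vt v1 v2 v3] -- that name and layout are mine, not from the question.

M = size(V, 1);
X = zeros(M, 3);                          % one feasible (x1, x2, x3) per row
c = [1 1 1];
A = -eye(3);
for k = 1:M
    vt  = V(k, 1);
    Aeq = V(k, 2:4);
    b   = -rand(3, 1);
    if sum(-b) > vt                       % rescale b as described above
        b = b ./ (sum(-b) / vt);
    end
    X(k, :) = linprog(c, A, b, Aeq, vt).';
end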
Good luck!

doing optimizations in matlab: figuring out constraint equation

I have N lines that are defined by a y-intercept and an angle, q. The constraint is that all N lines must intersect at one point. The equations I can come up with to eventually get the constraint are these:
Y = tan(q(1))X + y(1)
Y = tan(q(2))X + y(2)
...
I can, by hand, get the constraint if N = 3 or 4, but I am having trouble getting a single constraint if N is greater than 4. If N = 3 or 4, then when I solve the equations above for X, I get 2 equations and can just set them equal to each other. If N > 4, I get more than 2 equations that equal X, and I don't know how to condense them down into one constraint. If I cannot condense them into one constraint, being able to solve the optimization problem with multiple constraints created dynamically (depending on the N that is passed in) would also be fine.
To better understand what I am doing I will show how I get the constraints for N = 3. I start off with these three equations:
Y = tan(q(1))X + y(1)
Y = tan(q(2))X + y(2)
Y = tan(q(3))X + y(3)
I then set them equal to each other and get these equations:
tan(q(1))X + y(1) = tan(q(2))X + y(2)
tan(q(2))X + y(2) = tan(q(3))X + y(3)
I then solve for X and get this constraint:
(y(2) - y(1)) / (tan(q(1)) - tan(q(2))) = (y(3) - y(2)) / (tan(q(2)) - tan(q(3)))
Notice how I have 2 equations to solve for X. When N > 4 I end up with more than 2. This would be OK if I were able to dynamically create the constraints and then call an optimization function in MATLAB that can handle multiple constraints, but so far I have not found one.
You say the optimization algorithm needs to adjust q such that the "real" problem is minimized while the above equations also hold.
Note that Euclid's fifth axiom (the parallel postulate) ensures that all lines will always intersect all other lines, unless two q's are equal while the corresponding y0's are not. This last case is so rare (in a floating-point context) that I'm going to skip it here, but for added robustness you should eventually include it.
Now, first, think in terms of matrices. Your constraints can be formulated by the matrix equation:
y = tan(q)*x + y0
where q, y and y0 are [Nx1] vectors and x is an unknown scalar. Note that y = c*ones(N,1), i.e. a vector containing the same constant in every entry. This is actually a non-linear constraint -- that is, it cannot be expressed as
A*q <= b or A*q == b
with A some design matrix and b some solution vector. So, you'll have to write a function defining this non-linear constraint, which you can pass on to an optimizer like fmincon. From the documentation:
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon) subjects the minimization to the nonlinear inequalities c(x) or equalities ceq(x) defined in nonlcon. fmincon optimizes such that c(x) ≤ 0 and ceq(x) = 0. If no bounds exist, set lb = [] and/or ub = [].
Note that you were actually going in the right direction. You can solve for the x-location of the intersection for any pair of lines q(n),y0(n) and q(m),y0(m) with the equation:
x(n,m) = (y0(n) - y0(m)) / (tan(q(m)) - tan(q(n)))
Your nonlcon function should find x for all possible pairs n,m, and check if they are all equal. You can do this conveniently something like so:
function [c, ceq] = nonlcon(q, y0)
% not using inequalities
c = -1; % NOTE: setting it like this will always satisfy this constraint
% compute tangents
tanq = tan(q);
% compute solutions to x for all pairs
x = bsxfun(@minus, y0, y0.') ./ -bsxfun(@minus, tanq, tanq.');
% equality constraints: they all need to be equal
ceq = diff(x(~isnan(x))); % NOTE: if all(ceq==0), converged.
end
Note that you're not actually solving for q explicitly (nor do you need the y-coordinate of the intersection at all) -- that is all fmincon's job.
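To make the wiring explicit, here is a minimal call sketch (my own addition; myCost stands in for your "real" objective and the y0 and q0 values are made up):

y0 = [0; 1; 2];                          % example y-intercepts
q0 = [0.1; 0.4; 0.8];                    % initial guess for the angles
obj = @(q) myCost(q);                    % hypothetical objective of the "real" problem
qOpt = fmincon(obj, q0, [], [], [], [], [], [], @(q) nonlcon(q, y0));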
You will need to do some experimenting, because sometimes it is sufficient to define
x = x(~isnan(x));
ceq = norm(x-x(1)); % e.g., only 1 equality constraint
which will be faster (less derivatives to compute), but other problems really need
x = x(~isnan(x));
ceq = x-x(1); % e.g., N constraints
or similar tricks. It really depends on the rest of the problem how difficult the optimizer will find each case.