I am attempting to use InteriorPointSolver to solve a standard Quadratic Programming problem with linear constraints (per the definition that can be found here). My problem has no linear term (the "c" vector in the definition). I am setting up the "Q" matrix by using SetCoefficient(Int32, Rational, Int32, Int32) across all my variables (passing the "goal" row as the vidRow). Am I correct in assuming that the InteriorPointSolver is minimizing the objective function as defined in the standard definition of the quadratic programming problem?
I ask this because when I calculate x^T * Q * x myself (using the optimal solution for x that I get from the solver), I get a value that is substantially different than what the solver claims the optimal objective function value is (via Statistics.Primal or GetValue(goal)). The only time my calculation and the solver's optimal value agree is when I use an identity matrix for Q. I am guessing that I am setting something up wrong or am not understanding exactly what function is being minimized.
I have consulted all the documentation I can find and cannot find a good explanation of exactly what function the interior point solver is minimizing. Can anyone guide me in the right direction?
As it turns out,
SetCoefficient(goal, 2.0, x, y)
has exactly the same effect as
SetCoefficient(goal, 2.0, y, x)
The effect of both calls is to set the coefficient of the x*y term in your objective function; the second call simply overwrites the coefficient set by the first. The solver does not treat the xy term as distinct from the yx term, and it does not add the coefficients (as I had expected). So, if your goal is to have a 4xy term in your objective function, you must make the single call:
SetCoefficient(goal, 4.0, x, y)
instead of the two calls listed above.
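To connect this back to the textbook objective: for a symmetric Q, the quadratic form expands as

x^T Q x = \sum_i Q_{ii} x_i^2 + \sum_{i<j} (Q_{ij} + Q_{ji}) x_i x_j = \sum_i Q_{ii} x_i^2 + \sum_{i<j} 2 Q_{ij} x_i x_j,

so if you want the solver's objective to match x^T Q x, each off-diagonal pair (i,j) must be set once with the combined coefficient 2*Q_ij (and each diagonal entry with Q_ii). This also explains why the two values only agreed for the identity matrix: it has no off-diagonal terms.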
I want to implement densities of probability measures in Matlab. For that I define density as a function handle such that the integral of some function f (given as a function handle) on the interval [a,b] can be computed by
syms x
int(f(x)*density(x),x,a,b)
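This works as expected for ordinary densities; for instance, with a standard normal density (a minimal sketch):

syms x
f = @(x) x.^2;                          % some integrand
density = @(x) exp(-x.^2/2)/sqrt(2*pi); % standard normal density
int(f(x)*density(x), x, -inf, inf)      % returns 1, the variance of N(0,1)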
When it comes to the Dirac measure the problem is that
int(dirac(x),x,0,b)
delivers the value 1/2 instead of 1 for all b>0. However if I type
int(dirac(x),x,a,b)
where a<0 and b>0, the returned value is 1, as it should be. For this reason, multiplying by 2 will not suffice, since I want my density to be valid for all intervals [a,b]. I also don't want to distinguish cases before integrating, so that the code remains valid for a large class of densities.
Does someone know how I can implement the Dirac probability measure (as defined here) in Matlab?
There's no unique, accepted definition for int(delta, 0, b). The problem here isn't so much that you're getting the "wrong" answer as that you want to impose a different convention on what the delta function is than the one Matlab provides. (Their choice is defensible, but not unique.) If you evaluate this in Wolfram Alpha, for example, it will give you theta(0), which is not defined as anything in particular; here theta is the Heaviside function. If you want to impose your own convention, implement your own delta function.
EDIT
I see you wrote a comment on the question while I was writing this answer, so.... Keep in mind that the Dirac measure, or Dirac delta function, is not a function at all. The problem you're having, along with what's described below, all relates to trying to give a functional form to something that is inherently not a function. What you are doing is not well-defined in the framework you have in Matlab.
END OF EDIT
To put the point about conventions in context, the delta function can be defined by different properties. One is that int(delta(x) f(x), a, b) = f(0) when a < 0 < b; this doesn't tell you anything about the integral you want. Another, which probably leads to the answer you're getting from Matlab, is to define it as a limit, one choice (but not the only one) being the limit of zero-mean Gaussians as the variance goes to 0.
If you want to use the convention int(delta(x) f(x), a, b) = f(0) when a <= 0 < b, that probably won't get you in much trouble; just keep in mind that it's a convention you've chosen, not a "right" or "wrong" answer relative to what you got from Matlab.
As a related note, there is a similar choice to be made for the step function (Heaviside function) at x = 0. There are conventions in which it is (a) undefined, (b) 0, (c) 1, and (d) 1/2. None is "wrong." This choice corresponds roughly to the choice for the Dirac function, since the Heaviside is (roughly) the integral of the Dirac.
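In fact, assuming a recent Symbolic Math Toolbox, Matlab lets you pick the Heaviside convention yourself, which underlines that it is a convention:

sympref('HeavisideAtOrigin', 1);  % make heaviside(0) return 1 instead of the default 1/2
heaviside(0)                      % ans = 1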
I want to differentiate the following function with respect to t in MATLAB:

T(e(x(t),t)/p(t))

My problem is that I only know the derivatives of x numerically (I am inside a kind of odefun).

I want to use diff to make my code generalizable to high-order derivatives, but the derivatives of x are numeric constants rather than symbolic expressions. I would also like all of this to sit in an anonymous function in which I can carry out the differentiation and then substitute the time and the required derivative of x, so that I don't have to write a separate function for every state of my system.
My code is as follows:
syms q x star;
qd=symfun(90*pi/180+30*pi/180*cos(q),[q]);
p=symfun(79*pi/180*exp(-1.25*q)+pi/180,[q]);
T=log(-(1+star)/star);
e=symfun(x-qd,[x,q]);
and I want to write, for example, a function of the form

@(t,y)(d^2/dt^2 T(e(x(t),t)/p(t)) + d/dt T(e(x(t),t)/p(t)) + T(e(x(t),t)/p(t)))
I am not sure of the implementation details but in general this is one approach that you could take. It involves two steps.
1. In the T(.) function, replace x with exp(t). This way, when you differentiate, exp(t) survives every differentiation, so the higher-order derivatives can still be taken, while the outer functions get differentiated with respect to x at the same time. After you call diff you should receive an expression that contains exp(t) (not tested, so hopefully that is the case). At this point exp(t) stands in for the time derivative of x, and you only need to evaluate the expression at t, replacing exp(t) by the derivative of x as you do so. I do not know whether that replacement can be done directly; if not, then perhaps using y instead of exp(t), with the constraint y = exp(t), would do it, but you will need to figure out the correct implementation yourself.

2. Here you need to substitute the derivative of x at the right t. If you do not have the value at that particular t, do what I suggested in the comment: pre-calculate it beforehand at many points and interpolate in this step.

This approach relies on swapping x(t) with exp(t). If that does not work, I would do what I suggested in the comment: approximate x(t) by a known function and use that instead of x in your code, as in the sketch below.
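A minimal sketch of that fallback (the fit x_fit is hypothetical; in practice it would come from fitting your numeric x(t) data):

syms t
qd    = 90*pi/180 + 30*pi/180*cos(t);     % the question's qd, written in t
p     = 79*pi/180*exp(-1.25*t) + pi/180;  % the question's p
x_fit = qd - 0.3 + 0.01*t;                % hypothetical smooth stand-in for the numeric x(t)
e     = x_fit - qd;                       % the error e(x(t),t)
star  = e/p;
T     = log(-(1+star)/star);              % the question's T, evaluated at star = e/p
F     = matlabFunction(diff(T,t,2) + diff(T,t) + T); % @(t) handle, usable inside an odefun
F(0.1)                                    % evaluate at a particular time

Because x_fit is an explicit function of t, diff handles arbitrary derivative orders, which is the generality asked for.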
I'm not too familiar with MATLAB or computational mathematics, so I was wondering how I might solve an equation involving a sum of squares, where each term involves two vectors, one known and one unknown. The formula is supposed to represent the error, and I need to minimize that error. I think I'm supposed to use least squares, but I don't know much about it, so I'm wondering which function is best for this and which arguments would represent my equation. My teacher also mentioned something about taking derivatives, and he formed a matrix using derivatives, which confused me even more. Am I required to take derivatives?
The problem you must be trying to solve is

min_{beta} u'u = min_{beta} \sum_i u_i^2, where u = y - X*beta,

u being the error, y the vector of dependent variables you are trying to explain, X a matrix of independent variables, and beta the vector you want to estimate.

Since \sum_i u_i^2 is differentiable (and convex), you can find the minimum by computing its derivative and setting it equal to zero.

If you do that, you find that beta = inv(X'X)X'y. This may be calculated using the Matlab function regress (http://www.mathworks.com/help/stats/regress.html) or by writing the formula directly in Matlab. However, you should be careful about how you evaluate the inverse of (X'X); see Most efficient matrix inversion in MATLAB.
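A minimal sketch with made-up data (using the backslash operator, which solves the least-squares problem via QR instead of forming the inverse explicitly):

X = [ones(5,1), (1:5)'];        % example design matrix: intercept plus one regressor
y = [1.1; 1.9; 3.2; 3.9; 5.1];  % example observations
beta = X \ y;                   % least-squares estimate of beta
u = y - X*beta;                 % residuals
sse = sum(u.^2)                 % the minimized sum of squared errors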
I need to calculate the degenerate hypergeometric function of two variables (Humbert's \Phi_1), given by the integral formula

\Phi_1(l, h; n; o, p) = \frac{\Gamma(n)}{\Gamma(l)\Gamma(n-l)} \int_0^1 x^{l-1} (1-x)^{n-l-1} (1-ox)^{-h} e^{px}\, dx,

and I used Matlab to take the numerical integral:
l = 0.067;
h = 0.933;
n = 1.067;
o = 0.2942;
p = 0.633;
func_F = @(x) (x.^(l-1)).*((1-x).^(n-l-1)).*((1-x.*o).^(-h)).*exp(x.*p);
hyper = quadl(func_F, 0, 1, 1e-6); % quadl takes a plain tolerance; it has no 'AbsTol' name-value option
disp(hyper);
The result I got is 54.9085, and I know this value is wrong! So please help me calculate the true value of the above integral, which has a singularity at 0.
I don't see where you have the Gamma functions in your code. Did you forget them, or did the value you were expecting already compensate for the lack of them?
Also, maybe you can state why "this value is wrong." Otherwise we are just guessing.
Edit: one more thing, as per the Matlab help page on this function, it might be better to use quadgk. See the following quote (near the bottom of the page):
The quadgk function will integrate functions that are singular at finite endpoints if the singularities are not too strong. For example, it will integrate functions that behave at an endpoint c like log|x-c| or |x-c|^p for p >= -1/2. If the function is singular at points inside (a,b), write the integral as a sum of integrals over subintervals with the singular points as endpoints, compute them with quadgk, and add the results.
Bottom line: the singularities near the endpoints (when your x gets near 0 or 1) might cause some problems.
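A minimal sketch along those lines. Two caveats: the Gamma prefactor assumes the standard Phi_1 integral representation quoted in the question, and x^(l-1) with l = 0.067 is a stronger singularity than the |x-c|^p, p >= -1/2, that quadgk is guaranteed to handle, so the substitution x = t^(1/l), which removes the singularity exactly, is the safer route:

l = 0.067; h = 0.933; n = 1.067; o = 0.2942; p = 0.633;
pref = gamma(n)/(gamma(l)*gamma(n-l));  % Gamma prefactor of the integral representation
k = 1/l;                                % substitute x = t^k, so x^(l-1) dx becomes k dt
g = @(t) k.*((1 - t.^k).^(n-l-1)).*((1 - o.*t.^k).^(-h)).*exp(p.*t.^k);
hyper = pref * quadgk(g, 0, 1, 'AbsTol', 1e-10);
disp(hyper);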
I want to numerically integrate the following:

\frac{1}{4\pi^2} \int_{-\pi}^{\pi}\int_{-\pi}^{\pi} \cos^2\!\left(\frac{x+y}{2}\right) \frac{n(x)-n(y)}{(\epsilon(x)-\epsilon(y))^2}\, dx\, dy

where

n(x) = \frac{1}{1+e^{\beta\epsilon(x)}}, \qquad \epsilon(x) = 2(a - b\cos x),

and a, b and β are constants which, for simplicity, can all be set to 1.

Neither Matlab using dblquad nor Mathematica using NIntegrate can deal with the singularity created by the denominator. Since it's a double integral, I can't specify where the singularity is in Mathematica.

I'm sure that it is not infinite, since this integral comes from perturbation theory, and a version without the [omitted factor] has been found before (just not by me, so I don't know how it's done).
Any ideas?
(1) It would be helpful if you provide the explicit code you use. That way others (read: me) need not code it up separately.
(2) If the integral exists, it has to be zero. This is because the n(y)-n(x) factor changes sign when you swap x and y, while the rest stays the same. Yet the symmetry of the integration range means that the swap amounts to just renaming your variables, so the integral must also stay the same.
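In symbols (my own restatement of the argument): with f the integrand and I = \iint_{[-\pi,\pi]^2} f(x,y)\,dx\,dy, the swap gives f(y,x) = -f(x,y), while relabeling the dummy variables gives \iint f(y,x)\,dx\,dy = I. Together these force I = -I, hence I = 0 (understood as a principal value).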
(3) Here is some code that shows it will be zero, at least if we zero out the singular part and a small band around it.
a = 1;
b = 1;
beta = 1;
eps[x_] := 2*(a-b*Cos[x])
n[x_] := 1/(1+Exp[beta*eps[x]])
delta = .001;
pw[x_,y_] := Piecewise[{{1,Abs[Abs[x]-Abs[y]]>delta}}, 0]
We add 1 to the integrand just to avoid accuracy issues with results that are near zero.
NIntegrate[1+Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y])^2*pw[Cos[x],Cos[y]],
{x,-Pi,Pi}, {y,-Pi,Pi}] / (4*Pi^2)
I get the result below.
NIntegrate::slwcon:
Numerical integration converging too slowly; suspect one of the following:
singularity, value of the integration is 0, highly oscillatory integrand,
or WorkingPrecision too small.
NIntegrate::eincr:
The global error of the strategy GlobalAdaptive has increased more than
2000 times. The global error is expected to decrease monotonically after a
number of integrand evaluations. Suspect one of the following: the
working precision is insufficient for the specified precision goal; the
integrand is highly oscillatory or it is not a (piecewise) smooth
function; or the true value of the integral is 0. Increasing the value of
the GlobalAdaptive option MaxErrorIncreases might lead to a convergent
numerical integration. NIntegrate obtained 39.4791 and 0.459541
for the integral and error estimates.
Out[24]= 1.00002
This is a good indication that the unadulterated result will be zero.
(4) Substituting cx for cos(x) and cy for cos(y), and removing extraneous factors for purposes of convergence assessment, gives the expression below.
((1 + E^(2*(1 - cx)))^(-1) - (1 + E^(2*(1 - cy)))^(-1))/
(2*(1 - cx) - 2*(1 - cy))^2
A series expansion in cy, centered at cx, indicates a pole of order 1. So it does appear to be a singular integral.
Daniel Lichtblau
The integral looks like a Cauchy Principal Value type integral (i.e. it has a strong singularity). That's why you can't apply standard quadrature techniques.
Have you tried PrincipalValue->True in Mathematica's Integrate?
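For reference, the principal-value prescription excises the singular point symmetrically; in one dimension, for a < c < b,

\mathrm{PV}\int_a^b \frac{f(x)}{x-c}\,dx = \lim_{\epsilon \to 0^+}\left(\int_a^{c-\epsilon} + \int_{c+\epsilon}^{b}\right)\frac{f(x)}{x-c}\,dx,

which is finite for smooth f even though the integrand itself is not absolutely integrable.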
In addition to Daniel's observation about integrating an odd integrand over a symmetric range (so that symmetry indicates the result should be zero), you can also do the following to understand its convergence better. (I'll use LaTeX; writing this out with pen and paper should make it easier to read. It took a lot longer to write than to do; it's not that complicated.)
First, \epsilon(x)-\epsilon(y) \propto \cos(y)-\cos(x) = 2\sin(\xi_+)\sin(\xi_-), where I have defined \xi_\pm = (x \pm y)/2 (so I've rotated the axes by \pi/4). The region of integration is then \xi_+ between -\pi and \pi, and \xi_- between \pm(\pi - |\xi_+|). The squared denominator then contributes a factor \frac{1}{\sin^2(\xi_-)\sin^2(\xi_+)}; the numerator n(x)-n(y) vanishes to first order on the same set, which cancels one order and leaves first-order poles (consistent with the series expansion above). So the integral is not absolutely convergent as presented; at best it exists as a principal value.
Perhaps you could email the people who obtained an answer with the cos term and ask precisely what they did. Perhaps there's a physical regularisation procedure being employed. Or you could have given more information on the physical origin of this (some sort of second-order perturbation theory for some sort of bosonic system?), had that not been off-topic here...
Maybe I am missing something here, but look at the integrand

f[x,y] = Cos[(x+y)/2]^2*(n[x]-n[y])/(eps[x]-eps[y])^2, with n[x] = 1/(1+Exp[beta*eps[x]]) and eps[x] = 2(a - b*Cos[x]).

Swapping x and y flips the sign of n[x]-n[y] and leaves everything else unchanged, so f[y,x] = -f[x,y]. Therefore its integral over any domain that is symmetric under the swap, such as [-u,u]x[-u,u], is zero whenever it exists. No numerical integration seems to be needed here; the result is just zero.