I am using the qhull library for computing the intersection of half-spaces. Although this problem is the dual of the convex hull problem, as its input it needs an interior point of the intersection. As stated on their webpage, here, such a point can be found using linear programming. However, even for simple 2D cases, this LP problem does not have a bounded solution. Is there something wrong with the instructions given on the qhull website?
Well, I found the answer myself! Yes, the LP would be unbounded, and we need to set an upper bound, depending on the context of the given problem.
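For anyone hitting the same issue, here is a minimal sketch of the approach in Python with scipy (whose HalfspaceIntersection wraps Qhull). The half-spaces and the cap R_MAX are made-up illustration values; the LP is the usual Chebyshev-centre formulation, with an explicit upper bound on the radius so it stays bounded even when the intersection region is unbounded:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial import HalfspaceIntersection

# Half-spaces in the scipy/Qhull convention: each row [a1, a2, b] means a1*x + a2*y + b <= 0.
# Hypothetical example: the unit square.
halfspaces = np.array([
    [-1.0,  0.0,  0.0],   # -x     <= 0
    [ 0.0, -1.0,  0.0],   # -y     <= 0
    [ 1.0,  0.0, -1.0],   #  x - 1 <= 0
    [ 0.0,  1.0, -1.0],   #  y - 1 <= 0
])
A, b = halfspaces[:, :-1], halfspaces[:, -1]

# Chebyshev-centre LP: maximise the radius r of a ball inside the intersection,
#   A x + r * ||A_i|| <= -b.
# If the intersection is unbounded, the LP is unbounded too, hence the explicit
# cap R_MAX on r (the "upper bound depending on the context").
R_MAX = 1e3
norms = np.linalg.norm(A, axis=1, keepdims=True)
c = np.zeros(A.shape[1] + 1)
c[-1] = -1.0                                   # maximise r == minimise -r
res = linprog(c,
              A_ub=np.hstack([A, norms]),
              b_ub=-b,
              bounds=[(None, None)] * A.shape[1] + [(0, R_MAX)])

interior_point = res.x[:-1]                    # a point strictly inside the region
hs = HalfspaceIntersection(halfspaces, interior_point)
print(interior_point)
print(hs.intersections)                        # vertices of the intersection
```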
I am looking for something that is incremental (with accessible state), so that likely means some merge method is exposed.
So in general I want to start with a set of points that has a ConvexHull calculated, and add a point to it (which trivially has itself as its convex hull). I was looking for alternatives to Bowyer-Watson via convex hull merges. Not sure if this is a bad idea. Not sure if this should be a question on CS, except that it's about finding a real solution in the Python ecosystem.
I see some related content here.
Merging two tangled convex hulls
And Qhull (which scipy's Delaunay and ConvexHull use) has a lot of options I do not yet understand:
http://www.qhull.org/html/qh-optq.htm
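For concreteness, this is roughly the kind of interface I have in mind; scipy's ConvexHull does seem to expose an incremental mode (the points below are just random toy data):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.random((30, 2))

# Build the hull once in incremental mode; Qhull keeps its state alive
# so later points can be added without recomputing from scratch.
hull = ConvexHull(points, incremental=True)
print(hull.vertices)

# Add a new point (trivially its own convex hull) and let Qhull update the state.
hull.add_points(np.array([[2.0, 2.0]]))
print(hull.vertices)

hull.close()  # release Qhull resources once no more points will be added
```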
You can use Andrew's modification of the Graham scan algorithm.
Here is a reference to some short Python code implementing it.
What makes it suited for your needs is that after the points are sorted in xy-order, the upper and lower hulls are computed in linear time. Since you already have the convex hull (possibly both convex hulls), the xy-sorting of the convex hull points will take linear time (e.g., reverse the lower hulls and merge-sort four sorted lists). The rest of the algorithm will take linear time (in the number of points on the convex hulls, which may well be much smaller than the original number of points).
All the functionality for this implementation is in the code referenced above, and for the merge you can use the code from this SO answer or implement your own.
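In case it helps, here is a self-contained sketch of the monotone chain itself, assuming 2-D points given as tuples. The merge at the end simply re-runs the scan on the combined hull vertices, which is O(k log k) because of the sort; the linear-time merge described above would replace the sorted() call:

```python
def cross(o, a, b):
    """2-D cross product of OA and OB; positive means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def monotone_chain(points):
    """Andrew's monotone chain: convex hull of 2-D points, counter-clockwise."""
    pts = sorted(set(map(tuple, points)))           # xy-order (lexicographic)
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                                   # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):                         # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]                  # endpoints are shared, drop duplicates

# Merging two hulls (or a hull and a single new point): re-run the scan on the
# union of their vertices.  The sort makes this O(k log k) in the hull sizes;
# the merge described in the answer would bring it down to O(k).
hull_a = monotone_chain([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
hull_b = monotone_chain([(2, 2), (3, 2), (2.5, 3)])
print(monotone_chain(hull_a + hull_b))
```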
I have a simple unconstrained non-convex optimization problem. Since problems of this type have multiple local minima, I am looking for a global optimization algorithm that yields a unique/global minimum. On the internet I came across global optimization algorithms like genetic algorithms, simulated annealing, etc., but for solving a simple one-variable unconstrained non-convex optimization problem, using such heavyweight algorithms doesn't seem like a good idea. Could anyone recommend a simple global algorithm for solving such a problem? I would highly appreciate ideas on this.
"Since problems of these type have multiple local minima". It's not true, the real situation is the following:
Maybe you have one local minimum
Maybe you have infinite set of local miminums
Maybe you have finite number of local minimums
Maybe minimum is not attained
Maybe problem is unbounded below
Also, the big picture is that there are methods which truly solve such problems (numerically, and they are slow), but it is common slang to say that a method "solves" a problem even when it does not necessarily find the minimum value of the function.
In fact |M^n| = |M| for any finite n and any infinite set M, so the fact that your problem has one dimension buys you nothing: from a theoretical point of view it is as hard as a problem with 1,000,000 parameters drawn from the same set M.
If you are interested in approximately solving the problem to a known precision epsilon on a bounded domain, then split the domain into on the order of 1/epsilon subintervals, evaluate the function at the midpoint of each, and select the minimum, as in the sketch below.
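A minimal sketch of that sampling idea, with a made-up test function (note that the guarantee on the achieved precision also depends on how fast the function can vary, e.g. on a Lipschitz constant):

```python
import numpy as np

def grid_minimize(f, a, b, eps):
    """Approximate global minimum of f on [a, b] by sampling subinterval midpoints."""
    n = int(np.ceil((b - a) / eps))            # roughly 1/eps subintervals
    mids = a + (np.arange(n) + 0.5) * (b - a) / n
    values = f(mids)                           # f is assumed to accept numpy arrays
    i = np.argmin(values)
    return mids[i], values[i]

# Made-up non-convex test function with several local minima on [0, 10].
f = lambda x: np.sin(3 * x) + 0.1 * (x - 5) ** 2
print(grid_minimize(f, 0.0, 10.0, eps=1e-4))
```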
The method I will describe below is a rigorous method. Other methods (particle estimation, sequential convex programming, alternating directions, particle swarm, the Nelder-Mead simplex method, multistart gradient/subgradient descent, or any descent algorithm such as Newton's method or coordinate descent) have no guarantees for non-convex problems, and some of them cannot even be applied if the function is non-convex.
If you are interested in really solving the problem to some precision on the function value, then pay attention to the method called branch-and-bound, which truly finds the minimum. The algorithms you described do not, I think, solve the problem and find the minimum in the strong sense:
The basic idea of branch-and-bound is to partition the domain into convex sets (in your case, intervals) and improve lower/upper bounds.
You need a routine to find an upper bound on the optimal (minimum) value: you can do this e.g. by sampling the subdomain and taking the smallest value, or by running a local optimization method from a random starting point.
But you also need a lower bound on the optimal (minimum) value by some principle, and this is the hard part:
convex relaxation of integer variables to make them real variables
use the Lagrange dual function
use a Lipschitz constant of the function, etc.
This is the sophisticated step.
If these two values are close, we're done; otherwise, partition or refine the partition.
Get the lower and upper bounds of the child subproblems, then take the minimum of the upper bounds and the minimum of the lower bounds of the children. If a child returns a lower bound worse than its parent's, it can be tightened using the parent's bound. A sketch of this for the one-dimensional case is below.
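A sketch of a one-dimensional branch-and-bound along these lines, using a Lipschitz constant for the lower bound (the test function and its constant are made up for illustration):

```python
import heapq
import math

def branch_and_bound_1d(f, a, b, lipschitz, tol=1e-6, max_iter=100000):
    """Global minimum of a Lipschitz-continuous f on [a, b], to within tol.

    Lower bound on an interval [lo, hi] via the Lipschitz constant:
        f(x) >= f(mid) - lipschitz * (hi - lo) / 2   for all x in [lo, hi].
    """
    def bounds(lo, hi):
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        return fm - lipschitz * (hi - lo) / 2, fm, mid   # (lower, upper, midpoint)

    lb0, best_ub, x_best = bounds(a, b)
    heap = [(lb0, a, b)]            # best-first: split the interval with the smallest lower bound
    for _ in range(max_iter):
        if not heap:
            break                   # every region pruned: best_ub is within tol of the optimum
        lb, lo, hi = heapq.heappop(heap)
        if best_ub - lb <= tol:     # global gap closed
            break
        mid = 0.5 * (lo + hi)
        for clo, chi in ((lo, mid), (mid, hi)):          # branch
            clb, cub, cx = bounds(clo, chi)
            clb = max(clb, lb)      # a child's bound is never worse than its parent's
            if cub < best_ub:       # improve the global upper bound
                best_ub, x_best = cub, cx
            if clb <= best_ub - tol:                     # prune hopeless subintervals
                heapq.heappush(heap, (clb, clo, chi))
    return x_best, best_ub

# Made-up test function with several local minima; 5.0 is a valid Lipschitz constant on [0, 10].
f = lambda x: math.sin(3 * x) + 0.1 * (x - 5) ** 2
print(branch_and_bound_1d(f, 0.0, 10.0, lipschitz=5.0))
```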
References:
For a great explanation, see:
EE364B, Lecture 18, Prof. Stephen Boyd, Stanford University. It's available on YouTube and on iTunes U. If you are new to this area, I recommend the EE263, EE364A, and EE364B courses by Stephen P. Boyd. You will love them.
Since this is a one-dimensional problem, things are easier.
A simple steepest descent procedure may be used as follows.
Suppose the interval of search is a<x<b.
Start the SD from a, minimizing your function, say f(x). You recover the first minimum Xm1. You should use a fine step, not too large.
Shift this point by adding a small positive constant, giving Xm1+ε. Then maximize f, or minimize -f, starting from this point. You get a maximum of f; shift it by ε, start a minimization from there, and so on, until you reach b. Keep the smallest of the minima found along the way.
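Here is a rough Python sketch of that sweep, with a made-up test function; step is the fine descent step and eps the small shift past each stationary point:

```python
import math

def sweep_minimize(f, a, b, step=1e-4, eps=1e-3):
    """Left-to-right sweep of [a, b], alternating local descents and ascents.

    Returns the best local minimum encountered along the way.
    """
    x = a
    x_best, f_best = a, f(a)
    while x < b:
        while x + step <= b and f(x + step) < f(x):   # descend while f decreases
            x += step
        if f(x) < f_best:                             # record the local minimum just found
            x_best, f_best = x, f(x)
        x += eps                                      # shift past the minimum ...
        while x + step <= b and f(x + step) > f(x):   # ... climb out of the valley (maximize f)
            x += step
        x += eps                                      # shift past the maximum and repeat
    if f(b) < f_best:                                 # don't miss a minimum at the right endpoint
        x_best, f_best = b, f(b)
    return x_best, f_best

f = lambda x: math.sin(3 * x) + 0.1 * (x - 5) ** 2    # hypothetical multi-minimum example
print(sweep_minimize(f, 0.0, 10.0))
```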
I must solve an overdetermined problem (more equations than unknowns), so I have to use the least squares method.
First I create the coefficient matrix; it is a 225*375 matrix. To invert it, I use the pinv() function and then multiply by the load matrix.
My problem is plate bending under a uniform load with clamped edges. I expect at least a correct answer on my boundary (the deflection must be zero), but even on the boundary I get a wrong answer.
I have read in a book that sometimes an error occurs in the least squares method which should be corrected manually by the user, but I couldn't find any more explanation about it elsewhere.
First of all we need more data about your problem:
What's the model?
Where are the measurements coming from?
Still, a few notes about what I could figure out from your description:
If you have constraints on the solution, you should use constrained least squares. In MATLAB this can easily be done (look at quadratic programming as well).
Does the L2 error fit your problem? Maybe you should use a different error measure.
There's no bug in MATLAB's implementation. Using pinv gives the minimum-norm solution (both in the residual L2 norm and in the norm of the solution vector) over the range of the given matrix. It may be that you either constructed the data in the wrong way or the model you're using isn't adequate.
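To illustrate the constrained least squares point in Python rather than MATLAB: with linear equality constraints such as zero deflection on the boundary, you can solve the KKT system directly. The data below is purely hypothetical:

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Solve min ||A x - b||^2 subject to C x = d via the KKT system:

        [ 2 A^T A   C^T ] [ x ]   [ 2 A^T b ]
        [   C        0  ] [ l ] = [    d    ]
    """
    n, m = A.shape[1], C.shape[0]
    K = np.block([[2.0 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    sol = np.linalg.lstsq(K, rhs, rcond=None)[0]   # lstsq in case K is singular
    return sol[:n]                                 # drop the Lagrange multipliers

# Tiny hypothetical example: fit y ~ c0 + c1*x while forcing the fit to be exactly
# zero at x = 0 (a stand-in for "deflection must be zero on the boundary").
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.3, 1.1, 2.2, 2.9])
A = np.column_stack([np.ones_like(x), x])   # model: y = c0 + c1*x
C = np.array([[1.0, 0.0]])                  # constraint: c0 = 0
d = np.array([0.0])
print(constrained_lstsq(A, y, C, d))        # plain lstsq/pinv would give c0 != 0
```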
I need to calculate some kind of distance between two curves.
Those are general curves and may not be functions; that is, some values of x may be mapped to more than one value.
EDIT
The curves are given as a list of (X, Y) pairs, and the logical curve is the line passing through all the points in the order given. A typical data set will include about 1000 points.
As noted, the curve may not be a function, but it is usually similar to one.
This issue is what prevents using interp1 or the Curve Fitting Toolbox (in MATLAB).
The distance measure I was thinking of is the area of the region between the curves, but any reasonable alternative is OK.
EDIT
A sample illustration of two curves, and the area I want to compute:
A MATLAB solution is preferred, but other languages are also fine.
If you have functions that are of the type y = f(x) and they are defined over the same domain, then a common way to find the "distance" is to use the L2 norm, as explained here: http://en.wikipedia.org/wiki/L2_norm#p-norm. This is essentially the integral of the squared difference between the functions. If you have parametric curves then you cannot directly employ this approach. If the L2 norm is not good enough for your requirements then you will need to provide a more concrete definition of what you mean by "distance". If you are unclear as to what you need, try looking at different types of mathematical norms and see if any of the commonly used ones are what you need (e.g. the L1 norm, the uniform norm). The Wikipedia link above is a good starting point. If the L2 norm is good enough, then you need a way to calculate the integral; there are many numerical integration techniques out there, and I suggest Google is your friend here (or a good numerical analysis textbook).
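For instance, a quick sketch of that integral in Python, with two made-up curves sampled on a shared x grid:

```python
import numpy as np

# Two made-up curves sampled over the same x support (the single-valued case).
x = np.linspace(0.0, 10.0, 1001)
f = np.sin(x)
g = np.sin(x) + 0.05 * x

l1_distance = np.trapz(np.abs(f - g), x)           # integral of |f - g|
l2_distance = np.sqrt(np.trapz((f - g) ** 2, x))   # L2 norm of the difference
print(l1_distance, l2_distance)
```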
If you do have parametric curves then this is very nontrivial. Using the "area" between the curves is not a good idea, as there is no clear way to define this area, and it becomes even more complicated in the general case where you could have self-intersecting curves. If your curves are parametrized in the same way, you could try a crude measure where you evaluate points on each curve at equally spaced values over the parameter range, calculate the distance between each pair, and take the average as a notion of "closeness": i.e. partition your parameter range into a set {u_0, ..., u_n}, evaluate curve1(u_i) and curve2(u_i) for each i to generate a set of paired points, then sum (or average) the Euclidean distances between the pairs. A sketch of this is below.
This is very crude, though, and if the parametrizations are different then it won't be much use.
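A rough Python sketch of that idea; here I resample both point lists by normalized arc length to get a common parametrization, which is an extra assumption on my part:

```python
import numpy as np

def resample_by_arclength(xy, n=500):
    """Resample a polyline (m x 2 array of points) at n parameter values equally
    spaced in normalized arc length, giving a common parametrization u in [0, 1]."""
    seg = np.sqrt(np.sum(np.diff(xy, axis=0) ** 2, axis=1))
    u = np.concatenate([[0.0], np.cumsum(seg)])
    u /= u[-1]
    ug = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(ug, u, xy[:, 0]),
                            np.interp(ug, u, xy[:, 1])])

def curve_distance(xy1, xy2, n=500):
    """Average Euclidean distance between corresponding resampled points."""
    p1, p2 = resample_by_arclength(xy1, n), resample_by_arclength(xy2, n)
    return np.mean(np.sqrt(np.sum((p1 - p2) ** 2, axis=1)))

# Made-up curves given as ordered (x, y) point lists, as in the question.
t = np.linspace(0, 2 * np.pi, 1000)
curve1 = np.column_stack([t, np.sin(t)])
curve2 = np.column_stack([t, np.sin(t) + 0.1 * np.cos(3 * t)])
print(curve_distance(curve1, curve2))
```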
You need to define what you mean by distance between the curves. If it is the closest approach between two general curves, then it becomes quite difficult to solve the problem.
If the "curves" are not even representable as single valued functions of x, then it becomes more complex yet.
Merely telling us that you need to define "some kind of distance" is too broad of a statement to be on-topic here, and it says that you have not yet thought out the problem you wish to solve.
If all you are willing to tell us is that the curves are two totally general parametric curves, which may be closed or not, or they may not even lie over the same domain, then the question becomes so totally ill-posed as to be impossible to answer. What is the area between two curves in that case?
If the curves are defined over the SAME support, then subtracting them and integration of the absolute value or square of the difference will be adequate. But you have already told us that these "curves" may be multi-valued. In that case, it is essentially impossible to do what you are asking.
I have a set of points, (x, y), where each y has an error range y.low to y.high. Assume a linear regression is appropriate (in some cases the data may originally have followed a power law, but has been transformed [log, log] to be linear).
Calculating a best fit line is easy, but I need to make sure the line stays within the error range for every point. If the regressed line goes outside the ranges and I simply push it up or down to stay between them, is this the best fit available, or might the slope need to change as well?
I realize that in some cases, a lower bound of 1 point and an upper bound of another point may require a different slope, in which case presumably just touching those 2 bounds is the best fit.
The constrained problem as stated can have both a different intercept and a different slope compared to the unconstrained problem.
Consider the following example (the solid line shows the OLS fit):
Now imagine very tight [y.low, y.high] bounds around the first two points and extremely loose bounds on the last one. The constrained fit would then be close to the dotted line. Clearly, the two fits have different slopes and different intercepts.
Your problem is essentially the least squares with linear inequality constraints. The relevant algorithms are treated, for example, in "Solving least squares problems" by Charles L. Lawson and Richard J. Hanson.
Here is a direct link to the relevant chapter (I hope the link works). Your problem can be trivially transformed to Problem LSI (by multiplying your y.high constraints by -1).
As far as coding this up, I'd suggest taking a look at LAPACK: there may already be a function there that solves this problem (I haven't checked).
I know MATLAB has an optimization library that can do constrained SQP (sequential quadratic programming) and also lots of other methods for solving quadratic minimization problems with inequality constraints. The cost function you want to minimize will be the sum of the squared errors between your fit and the data. The constraints are those you mentioned. I'm sure there are free libraries that do the same thing too.
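For illustration, here is a sketch of that quadratic-programming idea in Python, with scipy's SLSQP standing in for MATLAB's solvers; the data and error ranges are made up:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up data: x, y, and per-point error ranges [y_low, y_high]
# (tight on the first two points, loose on the rest).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.2, 1.1, 2.0, 3.0, 4.6])
y_low  = y - np.array([0.05, 0.05, 0.8, 0.8, 0.8])
y_high = y + np.array([0.05, 0.05, 0.8, 0.8, 0.8])

A = np.column_stack([np.ones_like(x), x])       # model: y = beta0 + beta1 * x

def sse(beta):                                  # cost: sum of squared errors
    return np.sum((A @ beta - y) ** 2)

constraints = [
    {"type": "ineq", "fun": lambda beta: A @ beta - y_low},   # fitted line >= y_low
    {"type": "ineq", "fun": lambda beta: y_high - A @ beta},  # fitted line <= y_high
]

beta_ols = np.linalg.lstsq(A, y, rcond=None)[0]               # unconstrained fit as a start
res = minimize(sse, beta_ols, method="SLSQP", constraints=constraints)
print("unconstrained:", beta_ols)
print("constrained:  ", res.x)   # both intercept and slope can differ from OLS
```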