Real-time integer/binary programming (microsecond compute time)

My binary programming problem is:
max: (a1 * x1) + (a2 * x2) + ... + (an * xn)
subject to:
(c1 * x1) + (c2 * x2) + ... + (cn * xn) < C
n = 10
a1, ... an, c1, ... cn, C are known
x1, ... xn are binary
This is a process task assignment problem. In my case, the overhead of solving the binary/integer programming problem needs to be very small (< 1 millisecond). When I run this with the CBC solver / lpsolve, it reports a time of 2 ms - 7 ms. I don't have SCIP/Gurobi. Is there any way to solve this in less than a millisecond? Does it even seem reasonable to expect to solve it in less than a millisecond?
(I did disable the printfs in CBC, but I am not sure whether there are other system operations it is stuck on, e.g. any log files it is writing.)

This is the standard knapsack problem, which can be solved efficiently using dynamic programming. It has been discussed nicely in the knapsack wiki (and also in multiple Stack Overflow posts, e.g. here: DP algo for knapsack, and here: recursive knapsack).
A C++ implementation measures ~20 µs on my Core i7 @ 3 GHz.
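For reference, a minimal sketch of that 0/1 knapsack DP (shown in Python here; the timing above refers to a C++ version, and the DP assumes the costs ci and the capacity C are small integers):

def knapsack(values, costs, capacity):
    # best[w] = maximum total value achievable with total cost <= w
    best = [0] * (capacity + 1)
    for v, c in zip(values, costs):
        # scan capacities downwards so each item is taken at most once
        for w in range(capacity, c - 1, -1):
            best[w] = max(best[w], best[w - c] + v)
    return best[capacity]

print(knapsack([6, 10, 12], [1, 2, 3], 5))  # prints 22 (items 2 and 3)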

Related

Good Values for Knuth's multiplication method - Hashing

Any guesses as to what good values for c are?
// h(k) = ⌊m · (k · c − ⌊k · c⌋)⌋
return (int) Math.floor(distinctElements * (key * c - Math.floor(key * c)));
I have found this one: (Math.sqrt(5) - 1) / 2
However, are there any other good choices known?
However are there any other good choices known?
The value you've found, (√5 − 1)/2 ≈ 0.618, is the multiplicative inverse of the golden ratio φ = (1 + √5)/2; that is, it is φ⁻¹ (which also equals φ − 1). That is exactly the constant you want: φ⁻¹ and φ⁻² have the special property of tending to spread the hash values out optimally regardless of the range of your inputs.
However, for a sufficiently large range, pretty much any irrational number will do.
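As an illustration, a minimal sketch of the multiplication method with c = φ⁻¹ (Python here, mirroring the Java snippet above; the names and bucket count are illustrative):

import math

C = (math.sqrt(5) - 1) / 2  # phi^-1 ~ 0.6180339887, Knuth's suggested constant

def multiplicative_hash(key, m):
    # h(k) = floor(m * frac(k * c)), mapping key into [0, m)
    frac = (key * C) % 1.0
    return int(m * frac)

print([multiplicative_hash(k, 8) for k in range(1, 11)])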

Orange Bayes algorithm with continuous features

I have a two-class Bayes classification problem with four continuous features. I'm trying to partially reproduce the Bayes algorithm that Orange uses for calculating probabilities, but I haven't succeeded in obtaining the same values that Orange outputs.
Data set size: 150 (class0: 88 and class1: 62)
I use the following algorithm
p(class0 | X1, X2, X3, X4) = L0 / (L0 + L1)
p(class1 | X1, X2, X3, X4) = L1 / (L0 + L1)
where L0 and L1 are likelihoods
L0 = prior_class0 * product( p(Xi|class0) )
L1 = prior_class1 * product( p(Xi|class1) )
prior_class0 and prior_class1 are Laplacian estimators
prior_class0 = (88 + 1) / (150 + 2)
prior_class1 = (62 + 1) / (150 + 2)
Orange uses LOESS for calculating conditional probabilities (I guess it's not necessary to reproduce that). For this dataset it outputs 49 points for both classes, as given in the Python object classifier.conditional_distributions. By using linear interpolation between the surrounding points for Xi, I can calculate p(Xi|class0) and p(Xi|class1).
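In code, the computation described above looks roughly like this (a Python sketch; loess_points stands in for the data read out of classifier.conditional_distributions, and its exact layout here is my assumption):

import numpy as np

def cond_prob(x, xs, ps):
    # linear interpolation between the surrounding LOESS points
    return np.interp(x, xs, ps)

def posterior(x_values, loess_points, priors):
    # L_c = prior_c * prod_i p(X_i | c); then normalize the two likelihoods
    likelihoods = []
    for c in (0, 1):
        L = priors[c]
        for i, x in enumerate(x_values):
            xs, ps = loess_points[c][i]  # the 49 (x, p) points per class/feature
            L *= cond_prob(x, xs, ps)
        likelihoods.append(L)
    total = sum(likelihoods)
    return [L / total for L in likelihoods]

priors = [(88 + 1) / (150 + 2), (62 + 1) / (150 + 2)]  # Laplace estimators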
1) Can anyone comment on Orange Bayes algorithm with continuous features?
2) Or any technical advice how to setup compiler/IDE that I could debug Orange C++ code and inspect some intermediary results from functions in orange/source/orange/bayes.cpp?
Orange uses a slightly different formula that, according to Kononenko, gives the same result but allows for better interpretability and m-estimates of probabilities. Instead of product( p(Xi|class0) ) it computes product( p(class0|Xi) / p(class0) ). I don't think this should affect your computation, but you can check. The code that computes those probabilities is at https://github.com/biolab/orange/blob/master/source/orange/bayes.cpp#L169. Note that it does it for all classes in parallel.
The other piece of the code you're interested in is the computation of probabilities from LOESS density estimates. It's at https://github.com/biolab/orange/blob/master/source/orange/estimateprob.cpp#L307. Note that most operations there are on vectors, e.g. all variables in *result *= (x-x1)/(x2-x1); are actually vectors.
As for debugging: I wrote this code with Visual Studio many years ago (and, seeing the terrible coding style I used, am somewhat ashamed to admit it). I forget the version and can't check, since I no longer use Windows, but I never really debugged Orange on any other OS.
If you load the project and build a debug version, you'll also have to build a debug version of Python. This is actually simple (see the instructions in the Python source code); the problem is that you'd then have to build debug versions of any other binary libraries you use as well (e.g. numpy). A simpler way is to build a release version of Orange but switch the debug-info flags on. This way you can use standard Python and libraries.

Accurate matrix multiplication in Matlab

There are 2 matrices:
A: (6 x 78) max=22.2953324329113, min=0
B: (6 x 6 ) max=2187.9013214004 , min=-377.886378385521
B is symmetric and, as a result, C = A' * B * A must (theoretically) be a symmetric matrix, but this is not the case when I compute it in Matlab. In fact:
max(max(abs(C - C'))) = 2.3283064365386963e-010
How can I multiply them and get an accurate result?
or
What is a safe way to round the elements of C?
I read this question: efficient-multiplication-of-very-large-matrices-in-matlab, but my problem is not speed or memory. I need an accurate result.
Thanks.
You can consider a Cholesky decomposition of B, provided B is positive definite (symmetry alone is not enough for chol):
B = R'*R
R = chol(B); % in Matlab
Then C = A'*R'*R*A = D'*D, where D = R*A.
With C = D'*D you should have machine-epsilon precision, although you introduce a possible error due to the accuracy of the decomposition.
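A quick demonstration of the D'D trick (NumPy here for a self-contained example; the positive definite B is synthetic):

import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 78))
M = rng.random((6, 6))
B = M @ M.T + 6 * np.eye(6)      # symmetric positive definite, so chol exists

R = np.linalg.cholesky(B).T      # upper-triangular R with B = R' * R
D = R @ A
C = D.T @ D                      # Gram matrix, symmetric up to machine epsilon
print(np.max(np.abs(C - C.T)))   # ~1e-16 or exactly 0, versus ~1e-10 before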
You need to read "What Every Computer Scientist Should Know About Floating-Point Arithmetic":
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Realize that computers will never be able to give perfect floating point results, and that leaves you with a few options:
Do as few operations as possible, that is, pick the order of operations for the purpose of having the fewest rounding errors
Fixed decimal point arithmetic - or integer arithmetic - this isn't always practical for all applications, but in some applications, you can get away with this. Financial applications are the commonly cited example (multiply by 100 to make pennies go away! divide by 100 when you are done!).
There are other tricks I can't think of this late.
I'm going to have to give your operations a spin - on my machine, eps gives me 2.2204e-16, which is six orders of magnitude lower than what you are getting. See what eps is on your machine - it should be similar - if it is something like 1e-12 or so, I'd say your result is exactly what you'd expect from those operations.
When I do this with random numbers, I get
a = rand(6, 78);
b = rand(6, 6);
b = b + b'; % To make b symmetric
c = a' * b * a;
max(max(abs(c - c')))
ans =
7.1054e-15
Which is a bit closer to what I'd expect with rounding errors after that many operations, but I am not sure of your input, your machine, and I have no idea what else might be affecting things.

Problem with arithmetic using logarithms to avoid numerical underflow (take 2)

I have two lists of fractions;
say A = [ 1/212, 5/212, 3/212, ... ]
and B = [ 4/143, 7/143, 2/143, ... ].
If we define A' = a[0] * a[1] * a[2] * ... and B' = b[0] * b[1] * b[2] * ...
I want to calculate a normalised value of A' and B'
ie specifically the values of A' / (A'+B') and B' / (A'+B')
My trouble is that A and B are both quite long and each value is small, so calculating the products causes numerical underflow very quickly...
I understand turning the product into a sum through logarithms can help me determine which of A' or B' is greater
ie max( log(a[0])+log(a[1])+..., log(b[0])+log(b[1])+... )
and that using logs I can calculate the value of A' / B', but how do I do A' / (A'+B')?
My best bet to date is to keep the number representations as fractions, i.e. A = [ [1,212], [5,212], [3,212], ... ], and implement my own arithmetic, but it's getting clumsy and I have a feeling there is a (simple) way with logarithms that I'm just missing...
The numerators for A and B don't come from a sequence; they might as well be random for the purpose of this question. If it helps, the denominators for all values in A are the same, as are all the denominators for B.
Any ideas most welcome!
( ps. I asked a similar question 24 hours ago regarding the ratio A'/B' but it was actually the wrong question to ask. I'm actually after A'/(A'+B'). Sorry, my mistake. )
I see a few ways here.
First of all, you can notice that
A' / (A'+B') = 1 / (1 + B'/A')
and you know how to calculate B'/A' with logarithms.
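For instance, a Python sketch of that identity (the example values are from the question; a real implementation should guard against exp overflowing when B' is vastly larger than A'):

import math

A = [1/212, 5/212, 3/212]
B = [4/143, 7/143, 2/143]

log_A = sum(math.log(a) for a in A)   # log(A') as a sum, no underflow
log_B = sum(math.log(b) for b in B)

ratio = math.exp(log_B - log_A)       # B'/A'
norm_A = 1 / (1 + ratio)              # A'/(A'+B')
norm_B = 1 - norm_A                   # B'/(A'+B')
print(norm_A, norm_B)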
Another way is to implement your own rational arithmetic, but you don't need to go far with it. Since you know that the denominators are the same across each array, it immediately gives you
numerator(A') = numerator(a[0]) * numerator(a[1]) * ...
denominator(A') = denominator(a[0]) ^ A.length
All you now need to do is sum A' and B', which is easy, and then multiply A' by 1/(A'+B'), which is also easy. The hardest part here is reducing the resulting fraction, which is done with a gcd computation (Euclid's algorithm, i.e. repeated modulo operations) and is straightforward.
Alternatively, since you are most likely using some popular scripting language, note that most of them have classes for rational arithmetic built in; Python and Ruby have them for sure.
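In Python, for example, the whole computation stays exact with the built-in fractions module (illustrative values again):

from fractions import Fraction
from math import prod

A = [Fraction(1, 212), Fraction(5, 212), Fraction(3, 212)]
B = [Fraction(4, 143), Fraction(7, 143), Fraction(2, 143)]

A_prod = prod(A)                     # exact rational product, no underflow
B_prod = prod(B)
norm_A = A_prod / (A_prod + B_prod)  # exact A'/(A'+B')
print(norm_A, float(norm_A))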
My best bet to date is to keep the number representations as fractions and implement my own arithmetic but it's getting clumsy
What language are you using? If you can overload operators, it should be really easy to make up a Fraction class that you can treat as a number pretty much everywhere.
As an example, determining whether a fraction A / B is larger than C / D (with positive denominators) is basically comparing whether A * D is larger than B * C.
Both A and B have the same denominator in each fraction, you mention. Is that true for every term in each list? If so, why not factor it out when you calculate the product? The denominator will simply be X^n, where X is the common denominator and n is the number of terms in the list.
If you do that, you'll have the opposite problem: overflow in the numerator. You know that its product can't be larger than max(X)^n, where max(X) is the maximum numerator value and n is the number of terms in the list. If you can calculate that bound, you can see whether your computer will have a problem. You can't put 10 pounds of anything in a 5-pound bag.
Unfortunately, the properties of logarithms limit you to the following simplifications:
log(x * y) = log(x) + log(y)
and
log(x / y) = log(x) - log(y)
So you're stuck with:
log(A' / (A' + B')) = log(A') - log(A' + B')
and there is no identity that breaks log(A' + B') into the individual logs.
If you're using a language that supports infinite precision numbers (e.g., Java BigDecimal) it might make your life a little easier. But there's still a good argument for doing some thinking before you compute. Why use brute force when you can be elegant?
Well... if you know A'/(A'+B'), then B'/(A'+B') should be one minus that. I personally would not use logarithms; I would use the actual fractions. I would also use some sort of BigInt class to represent the numerator and denominator. Which language are you using? Python can be a good fit.

Solving Algebraic Equations Programmatically [closed]

I have six parametric equations using 18 (not actually 26) different variables, 6 of which are unknown.
I could sit down with a couple of pads of paper and work out what the equations for each of the unknowns are, but is there a simple programmatic solution (I'm thinking of Matlab) that will spit out the six equations I'm looking for?
EDIT:
Shame this has been closed, but I guess I can see why. In case anyone is still interested, the equations are (I believe) non-linear:
r11^2 = (l_x1*s_x + m_x)^2 + (l_y1*s_y + m_y)^2
r12^2 = (l_x2*s_x + m_x)^2 + (l_y2*s_y + m_y)^2
r13^2 = (l_x3*s_x + m_x)^2 + (l_y3*s_y + m_y)^2
r21^2 = (l_x1*s_x + m_x - t_x)^2 + (l_y1*s_y + m_y - t_y)^2
r22^2 = (l_x2*s_x + m_x - t_x)^2 + (l_y2*s_y + m_y - t_y)^2
r23^2 = (l_x3*s_x + m_x - t_x)^2 + (l_y3*s_y + m_y - t_y)^2
(Squared the r's, good spot @gnovice!)
Where I need to find t_x t_y m_x m_y s_x and s_y
Why am I calculating these? There are two points, p1 (at (0,0)) and p2 (at (t_x,t_y)). For each of three coordinates (l_x,l_y){1,2,3}, I know the distances (r1 and r2) to that point from p1 and p2, but in a different coordinate system. The variables s_x and s_y define how much I'd need to scale one set of coordinates to get to the other, and m_x and m_y how much I'd need to translate them (with t_x and t_y being a way to account for rotation differences between the two systems).
Oh! And I forgot to mention: I also know that the point (l_x,l_y) is below the higher of p1 and p2, i.e. l_y < max(0,t_y), as well as l_y > 0 and l_y < t_y.
It does seem specific enough that I might have to just get my pad out and work it through mathematically!
If you have the Symbolic Toolbox, you can use the SOLVE function. For example:
>> solve('x^2 + y^2 = z^2','z') %# Solve for the symbolic variable z
ans =
(x^2 + y^2)^(1/2)
-(x^2 + y^2)^(1/2)
You can also solve a system of N equations for N variables. Here's an example with 2 equations, 2 unknowns to solve for (x and y), and 6 parameters (a through f):
>> S = solve('a*x + b*y = c','d*x - e*y = f','x','y')
>> S.x
ans =
(b*f + c*e)/(a*e + b*d)
>> S.y
ans =
-(a*f - c*d)/(a*e + b*d)
Are they linear? If so, then you can use principles of linear algebra to set up a 6x6 matrix that represents the system of equations and solve it using any standard matrix-inversion routine.
If they are not linear, then you need to use numerical-analysis methods.
As I recall from many years ago, you then create a system of linear approximations to the non-linear equations and solve that linear system over and over again iteratively, feeding the answers back into the inputs each time, until some error metric gets small enough to indicate you have reached the solution. It's obviously done with a computer, and I'm sure there are numerical-analysis software packages that will do this for you. But since an arbitrary system of non-linear equations can involve almost unlimited types and levels of complexity, these packages generally can't create the linear approximations for you (except maybe in the most straightforward standard cases), and you will have to do that part manually.
Yes, there is (assuming these are linear equations): you do this by creating a matrix equation equivalent to your six linear equations. For example, if you had the two equations:
6x + 12y = 9
7x - 8y = 14
This could be equivalently represented as:
|6  12| |x|   |9 |
|7  -8| |y| = |14|
(where the matrix is multiplied by the vector). Matlab can then solve this for the solution vector (x, y).
I don't have matlab installed, so I'm afraid I'm going to have to leave the details up to you :-)
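For what it's worth, the same system solved outside Matlab (a NumPy sketch; the 6x6 case is identical, just with bigger arrays):

import numpy as np

M = np.array([[6.0, 12.0],
              [7.0, -8.0]])
rhs = np.array([9.0, 14.0])
x, y = np.linalg.solve(M, rhs)   # LU-based solve; prefer this over inv(M) @ rhs
print(x, y)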
As mentioned above, the answer will depend on whether your equations are linear or non-linear. For linear systems, you can set up a simple matrix system (but don't use matrix inversion; use LU decomposition, assuming your system is well-conditioned).
For non-linear systems, you'll need to use a more advanced solver, most likely some variation on Newton's method. Essentially you'll give Matlab your six equations, and ask it to simultaneously solve for the root (zero) of all of the equations. There are several caveats and complications that come into play when dealing with non-linear systems, one of which is the need for an initial guess that assigns each of your six unknown variables a value close to the true solution. Without a good initial guess, the solver may take a long time finding a solution, or may not converge to a solution at all, even if one exists.
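As a toy illustration of that idea (SciPy's fsolve implements such a quasi-Newton iteration; the system and initial guess below are made up, not the questioner's equations):

from scipy.optimize import fsolve

def residuals(v):
    x, y = v
    # each entry is zero exactly at a solution of the system
    return [x**2 + y**2 - 25,
            x * y - 12]

guess = [4.0, 3.0]               # a decent initial guess matters for convergence
x, y = fsolve(residuals, guess)
print(x, y)                      # ~ (4, 3)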
Decades ago, MIT developed MACSYMA, a symbolic algebra system for just this kind of thing. MIT sold MACSYMA to Symbolics, which has pretty well folded, dried up, and blown away. However, because of the miracle of military funding, an early version of MACSYMA was required to be released to the government. THAT version was subsequently released under the GPL, and is continuing to be maintained, under the name MAXIMA.
See http://maxima.sourceforge.net/ for more information.