Dear all,
I have a model with two boolean decision variables, where only one can be equal to 1.
Is the solver faster if I use AddAllDifferent, or if I use a simple linear constraint (Add) x + y = 1?
In this case, I would drop both ideas and stick to boolean clauses:
x.Not() OR y.Not()
AND
x OR y
or, if ortools supports it:
x XOR y
(= pseudocode -> not necessarily any sane syntax)
This is as simple as it gets and can be reasoned about very efficiently by unit propagation. Furthermore, 2-SAT machinery might be at work internally (implication graph and so on).
No need to reason about integer arithmetic (including potential channeling of bool/int) or global constraints.
The above is something which is sometimes even used in SAT when decomposing all-different by prohibiting all possible pairs.
(ortools might be clever enough to analyze your approaches and arrive at the same boiled-down formulation; but maybe not -> why not help it if it's so simple?)
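In CP-SAT's Python API, the two clauses would presumably be posted with model.AddBoolOr([x.Not(), y.Not()]) and model.AddBoolOr([x, y]) (there is also AddBoolXOr). As a quick plain-Python sanity check, enumerating the truth table confirms that the two clauses together admit exactly the assignments where x != y, i.e. they encode XOR:

```python
from itertools import product

# The two clauses from the answer: (not x OR not y) AND (x OR y)
def clauses(x, y):
    return (not x or not y) and (x or y)

# Enumerate all four boolean assignments and keep the satisfying ones
sols = [(x, y) for x, y in product([False, True], repeat=2) if clauses(x, y)]
print(sols)  # only the assignments where x != y survive
```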
Related
I saw from another question the following definitions, which clarify things somewhat:
Collision-resistance:
Given: x and h(x)
Hard to find: y that is distinct from x and such that h(y)=h(x).
Hiding:
Given: h(r|x), where r|x is the concatenation of r and x
Secret: x and a randomly chosen r that is highly unlikely to be guessed
Hard to find: y such that h(y) = h(r|x).
This is different from collision-resistance in that it doesn’t matter whether or not y=r|x.
My question:
Does this mean that any hash function h(x) is non-hiding if there is no secret r, that is, the hash is h(x) rather than h(r|x)?
Example:
Say I make a simple hash function h(x) = g^x mod n, where g is a generator of the group. The hash should be collision resistant with P(h(x_1) = h(x_2) for x_1 != x_2) = 1/2^(n/2), but I would think it is hiding as well?
Hash functions can, in a sense, offer collision-resistance.
Commitments have to be hiding.
Contrary to popular opinion, these primitives are not the same!
Very strictly speaking, the thing that you think of as a hash function cannot offer collision resistance: there always ARE collisions. The input space is infinite in theory, yet the function always produces a fixed-length output. The terminology should actually be "H is randomly drawn from a family of collision-resistant functions". In practice, however, we just call that function collision-resistant and ignore that it technically isn't.
A commitment has to offer two properties: hiding and binding. Binding means that you can only open it to one value (this is where the relation to collision-resistance comes in). Hiding means that it is impossible to learn anything about the element contained in it. This is why a secure commitment MUST use randomness (or nonces, but after all is said and done, those boil down to the same thing). Imagine any hash function, no matter how perfect you want it to be: you can use a random oracle if you want. If I give you a hash H(m) of a value m, you can compute H(0), compare the result, and learn whether m = 0, meaning it is not hiding.
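A minimal sketch of that guessing attack in Python, with SHA-256 standing in for an arbitrary hash function and b"0" as a made-up committed value:

```python
import hashlib

def H(m: bytes) -> bytes:
    return hashlib.sha256(m).digest()

# A "commitment" without randomness: just the hash of the message
commitment = H(b"0")

# The attacker hashes candidate messages and compares against the commitment
guess = next(m for m in [b"42", b"1", b"0"] if H(m) == commitment)
print(guess)  # b"0" -- the commitment leaked the message
```

With a random r prepended, i.e. h(r|x), the attacker would also have to guess r, which is infeasible for, say, 128 random bits.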
This is also why g^x is not a hiding commitment scheme. Whether it is binding depends on what you allow as the message space: if you allow all integers, then the simple attack y = x + phi(n) produces H(y) = H(x). If you define the message space as ℤ_p, where p is the group order, then it is perfectly binding, as it is an information-theoretically collision-resistant one-way function. (Since message space and target space are of the same size, this time a single function actually CAN be truly collision-resistant!)
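A tiny numeric illustration of that attack, with made-up toy parameters (n = 35, so phi(n) = 24, and g = 2, which is coprime to n):

```python
n, g = 35, 2   # toy modulus: 35 = 5 * 7, so phi(n) = (5-1)*(7-1) = 24
phi = 24
x = 5
y = x + phi    # a second "message" with the same hash

# By Euler's theorem, g^phi(n) = 1 (mod n) when gcd(g, n) = 1,
# so g^(x + phi(n)) = g^x (mod n): H is not binding over all integers.
print(pow(g, x, n), pow(g, y, n))  # both 32
```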
I have two matrices S and T, each with n columns, and a row vector v of length n. By my construction, I know that S does not have any duplicates. What I'm looking for is a fast way to find out whether or not the row vector v appears as one of the rows of S. Currently I'm using the test
if min([sum(abs(S - repmat(v,size(S,1),1)),2);sum(abs(T - repmat(v,size(T,1),1)),2)]) ~= 0 ....
When I first wrote it, I had a for loop testing each (I knew this would be slow, I was just making sure the whole thing worked first). I then changed this to defining a matrix diff by the two components above and then summing, but this was slightly slower than the above.
All the advice I've found online says to use the function unique. However, this is very slow because it sorts my matrix as well. I don't need the sorting, and it's a massive waste of time (it makes the process really slow). This is a bottleneck in my code -- taking nearly 90% of the run time. If anyone has any advice on how to speed this up, I'd be most appreciative!
I imagine there's a fairly straightforward way, but I'm not that experienced with Matlab: I know the basic stuff, just not some of the more specialist functions.
Thanks!
To clarify, following Sardar_Usama's comment: I want this to work for a matrix with any number of rows and a single vector. I'd also forgotten to mention that the elements are all in the set {0, 1, ..., q-1}. I don't know whether that helps to make it faster!
You may want this:
ismember(v,S,'rows')
Swap the arguments S and v to get logical indices of the duplicate rows:
ismember(S,v,'rows')
Alternatively, to test whether v is a member of S:
any(all(bsxfun(@eq,S,v),2))
and this returns logical indices of all duplicates:
all(bsxfun(@eq,S,v),2)
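For comparison, a NumPy analogue of the broadcasting test, on made-up data: S == v broadcasts the row vector against every row of S, much like bsxfun:

```python
import numpy as np

S = np.array([[0, 1, 2],
              [1, 1, 0],
              [2, 0, 1]])
v = np.array([1, 1, 0])

row_matches = np.all(S == v, axis=1)   # one boolean per row of S
is_member = bool(np.any(row_matches))  # True: v equals the second row
print(is_member)
```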
My question is: given an array A, how can you create another array identical to A except with all negative values changed to 0 (without changing the values in A)?
My way to do this is:
B = A;
B(B<0) = 0;
Is there any one-line command to do this that also avoids creating another copy of A?
While this particular problem does happen to have a one-liner solution, e.g. as pointed out by Luis and Ian's suggestions, in general if you want a copy of a matrix with some operation performed on it, then the way to do it is exactly how you did it. Matlab doesn't allow you to index into the result of an expression, so you generally have no choice but to assign to a temporary variable in this manner.
However, if it makes you feel better, B = A is efficient, as it will not result in any newly allocated memory unless / until B or A changes later on. In other words, before the B(B<0) = 0 step, B is simply a reference to A and takes no extra memory. This is just how Matlab's copy-on-write mechanism works under the hood, ensuring no memory is wasted on simple aliases.
PS. There is nothing efficient about one-liners per se; in fact, you should avoid them if they lead to obscure code. It's better to have things defined over multiple lines if it makes the logic and intent of the algorithm clearer.
e.g, this is also a valid one-liner that solves your problem:
B = subsasgn(A, substruct('()',{A<0}), 0)
This is in fact the literal answer to your question (i.e. this is pretty much the code that Matlab calls under the hood for your commands). But is this clearer, more elegant code just because it's a one-liner? No, right?
Try
B = A.*(A>=0)
Explanation:
A>=0 - creates a matrix where each element is 1 if the corresponding element of A is >= 0, and 0 otherwise
A.*(A>=0) - multiplies element-wise, zeroing out the negative entries
B = A.*(A>=0) - assigns the above to B.
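The same mask-and-multiply trick carries over to NumPy (made-up data); np.where is the more idiomatic spelling there:

```python
import numpy as np

A = np.array([-2, 3, -1, 4])

B = A * (A >= 0)           # mask-and-multiply, as in the Matlab answer
C = np.where(A < 0, 0, A)  # the more idiomatic NumPy spelling
print(B, C)  # both [0 3 0 4]
```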
We have a problem formulation as shown in this link.
Considering that the first call of bintprog gives a solution x that, after some post-processing, does not adequately address the physical problem, is it possible to call bintprog again and exclude the prior solution x?
You need a no-good cut.
Suppose you find a solution \hat{x} that you then decide is infeasible (through some sort of post-processing). Let x and \hat{x} be indexed by i.
You can add a constraint of the following form:
\sum_{i : \hat{x}_i = 0} x_i + \sum_{i : \hat{x}_i = 1} (1 - x_i) \geq 1
This constraint is an example of a no-good cut: the solution must differ from \hat{x} in at least one index i, otherwise it is infeasible. If your variables are not binary, no-goods can be a little more complex.
You can add a no-good to your model by appending the constraint as a row to your constraint matrix and re-solving with the bintprog() function. I'll leave it to you to rewrite it in the MATLAB notation.
You didn't say what your post-processing does, but it would be even better if the post-processing could infer from your solution \hat{x} that other solutions are also infeasible, and you can add more than one row per iteration. This is a form of logic-based Benders decomposition, and the inference of other infeasible solutions is called solving an inference dual (as opposed to standard Benders decomposition, where you're solving the linear programming dual). More on logic based Benders decomposition from the man who coined the term, John Hooker of CMU.
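As a sketch, here is how the cut for one \hat{x} could be turned into a single constraint row in plain Python; since bintprog expects inequalities in the form A*x <= b, the >= 1 cut is negated below:

```python
def nogood_row(xhat):
    """Return (a, b) with a . x <= b, violated only by x == xhat.

    Derived from: sum_{i: xhat_i=0} x_i + sum_{i: xhat_i=1} (1 - x_i) >= 1,
    multiplied by -1 to match the <= form of bintprog's A and b arguments.
    """
    a = [2 * xi - 1 for xi in xhat]  # coefficient +1 where xhat_i = 1, -1 where 0
    b = sum(xhat) - 1
    return a, b

a, b = nogood_row([1, 0, 1])
# The prior solution now violates the cut (2 > 1), while every other
# binary vector still satisfies it.
print(sum(ai * xi for ai, xi in zip(a, [1, 0, 1])), ">", b)
```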
There is one thing I do not like about Matlab: it sometimes tries to be too smart. For instance, if I have a negative square root like
a = -1; sqrt(a)
Matlab does not throw an error but silently switches to complex numbers. The same happens for negative logarithms. This can lead to hard-to-find errors in a more complicated algorithm.
A similar problem is that Matlab silently "solves" non-square linear systems, as in the following example:
A=eye(3,2); b=ones(3,1); x = A \ b
Obviously x does not satisfy A*x == b (Matlab solves a least-squares problem instead).
Is there any possibility to turn these "features" off, or at least have Matlab print a warning message in these cases? That would really help a lot in many situations.
I don't think there is anything like "being smart" in your examples. The square root of a negative number is complex. Similarly, the left-division operator is defined in Matlab to return a least-squares solution for non-square inputs.
If you have an application that should not return complex numbers (beware of floating-point errors!), then you can use isreal to test for that. If you do not want the left-division operator to solve a least-squares problem, test whether A is square first.
Alternatively, if for some reason you are really unable to do input validation, you can overload both sqrt and \ to only work on positive numbers, and to not calculate the pseudoinverse.
You need to understand all of the implications of what you're writing and make sure that you use the right functions if you're going to guarantee good code. For example:
For the first case, use realsqrt instead
For the second case, use inv(A) * b instead (inv throws an error for non-square A)
Or alternatively, include the appropriate checks before/after you call the built-in functions. If you need to do this every time, then you can always write your own functions.
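For what it's worth, Python's standard library draws exactly this strict-vs-permissive line, which makes the realsqrt idea easy to illustrate: math.sqrt rejects negative input outright, while cmath.sqrt silently goes complex like Matlab's sqrt:

```python
import math
import cmath

# Strict, like realsqrt: raises an error instead of going complex
try:
    math.sqrt(-1)
    raised = False
except ValueError:
    raised = True
print(raised)  # True

# Permissive, like Matlab's sqrt: silently returns a complex number
print(cmath.sqrt(-1))  # 1j
```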