Distinguishing cryptographic properties: hiding and collision resistance - hash

I saw the following definitions in another question, which clarify things somewhat:
Collision-resistance:
Given: x and h(x)
Hard to find: y that is distinct from x and such that h(y)=h(x).
Hiding:
Given: h(r|x), where r|x is the concatenation of r and x
Secret: x and a randomly chosen r (drawn so that any particular value is highly unlikely, i.e. with high min-entropy)
Hard to find: y such that h(y)=h(r|x).
This is different from collision-resistance in that here it doesn't matter whether or not y=r|x.
My question:
Does this mean that any hash function h(x) is non-hiding if there is no secret r, that is, if the hash is h(x) rather than h(r|x)?
Example:
Say I make a simple hash function h(x) = g^x mod n, where g is a generator of the group. The hash should be collision resistant with P(h(x_1) = h(x_2) for x_1 ≠ x_2) = 1/2^(n/2), but I would think it is hiding as well?

Hash functions can, kind of, offer collision resistance.
Commitments have to be hiding.
Contrary to popular opinion, these primitives are not the same!
Very strictly speaking, the thing that you think of as a hash function cannot offer collision resistance: there always ARE collisions. The input space is infinite in theory, yet the function always produces a fixed-length output, so by the pigeonhole principle collisions must exist. The terminology should actually be "H is randomly drawn from a family of collision-resistant functions". In practice, however, we will just call that function collision-resistant and ignore that it technically isn't.
A commitment has to offer two properties: hiding and binding. Binding means that you can only open it to one value (this is where the relation to collision resistance comes in). Hiding means that it is impossible to learn anything about the element that is contained in it. This is why a secure commitment MUST use randomness (or nonces, but when all is said and done, those boil down to the same thing). Imagine any hash function, no matter how perfect you want it to be: you can use a random oracle if you want. If I give you a hash H(m) of a value m, you can compute H(0), compare the result, and learn whether m = 0, meaning it is not hiding.
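To make the guess-and-compare point concrete, here is a minimal Python sketch (SHA-256 via the standard-library hashlib stands in for the hash function; the message m = b"0" and the variable names are purely illustrative):
import hashlib
import secrets

def H(data: bytes) -> bytes:
    # SHA-256 stands in for "any hash function, no matter how perfect"
    return hashlib.sha256(data).digest()

m = b"0"                      # a low-entropy secret: the attacker can simply guess it
c_plain = H(m)                # "commitment" without randomness
print(c_plain == H(b"0"))     # True -> the attacker learns that m = 0

r = secrets.token_bytes(32)   # fresh high-entropy randomness
c_hidden = H(r + m)           # commitment to m as H(r | m)
print(c_hidden == H(b"0"))    # False -> guessing m alone no longer reveals anything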
This is also why g^x is not a hiding commitment scheme. Whether it is binding depends on what you allow as the message space: if you allow all integers, then the simple attack y = x + φ(n) produces H(y) = H(x), since g^φ(n) ≡ 1 (mod n). If you define the message space as ℤ_p, where p is the group order, then it is perfectly binding, as it is an information-theoretically collision-resistant one-way function. (Since message space and target space are of the same size, this time a single function actually CAN be truly collision-resistant!)
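As a quick numeric sanity check of the attack on an unrestricted message space, here is a toy Python example (the parameters n = 15, g = 2, x = 3 are made up for illustration; any g with gcd(g, n) = 1 works):
n = 15          # modulus, with phi(15) = 8
phi_n = 8
g = 2           # gcd(2, 15) = 1, so Euler's theorem applies
x = 3
y = x + phi_n   # the colliding message

print(pow(g, x, n), pow(g, y, n))  # prints "8 8" -> h(x) == h(y) although x != y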


Defining a natural variable n in TI-Nspire CAS

I'm wondering if it's possible to define a natural variable n in TI-Nspire CAS. For example I'd like to write:
You can't define your own natural variables. However, Nspire has the following special variables you can use:
#n0...#n255: Restricted to natural numbers
#c0...#c255: Restricted to real numbers
You can replace the original variables with them by hand, or for convenience just put |x=#n0 and y=#n1 at the end of the line.
Example: You are calculating Fourier coefficients and know that the variable k will only take natural numbers from the Σ operation. Replacing k with #n1 will simplify the function.
(Picture of the example omitted.)
(Calculator needs to be in RAD mode if you want to try)
The answer is no. Variables in Nspire store a value; a variable has no type. Solve might return #n1 in a result to indicate an arbitrary natural number, but you can tell solve to look for integer solutions only.

Cp-Sat AddAllDifferent vs add constraint

Dear all,
I have a model with two boolean decision variables, of which only one can be equal to 1.
Is the solver faster if I use AddAllDifferent, or if I use a simple linear constraint (Add) x + y == 1?
In this case, I would drop both ideas and stick to boolean clauses:
x.Not() OR y.Not()
AND
x OR y
or if ortools supports it:
x XOR y
(pseudocode, not necessarily any sane syntax)
This is as simple as it gets and can be reasoned about very efficiently by unit propagation. Furthermore, 2-SAT machinery might be at work internally (implication graph and so on).
There is no need to reason about integer arithmetic (including potential channeling between booleans and integers) or global constraints.
The above is something that is sometimes even used in SAT when decomposing all-different by prohibiting all possible pairs.
(or-tools might be clever enough to analyze your approaches and arrive at the same boiled-down formulation; but maybe not, so why not help it when it's this simple?)
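For reference, a minimal sketch of what the clause version could look like with the OR-Tools CP-SAT Python API (variable names are illustrative; AddBoolXOr is the XOR form mentioned above):
from ortools.sat.python import cp_model

model = cp_model.CpModel()
x = model.NewBoolVar("x")
y = model.NewBoolVar("y")

# "exactly one of x, y is 1", written as the two clauses above
model.AddBoolOr([x, y])               # x OR y         (at least one)
model.AddBoolOr([x.Not(), y.Not()])   # NOT x OR NOT y (at most one)
# equivalently, a single XOR clause could replace the two lines above:
# model.AddBoolXOr([x, y])

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.Value(x), solver.Value(y))
If your or-tools version has it, model.AddExactlyOne([x, y]) expresses the same thing in a single call.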

What happens when we pass-by-value-result in this function?

Consider this code.
foo(int x, int y){
x = y + 1;
y = 10;
x++;
}
int n = 5;
foo(n,n);
print(n);
If we assume that the language supports pass-by-value-result, what would the answer be? As far as I know, pass-by-value-result copies in and out, but I am not sure what n's value would be when it is copied into two different formal parameters. Should x and y act like references? Or should n get the value of either x or y depending on which is copied out last?
Thanks
Regardless of whether it's plain pass-by-value or pass-by-value-result, x and y would become separate copies of n; they are in no way tied to each other, except for the fact that they start with the same value.
However, pass-by-value-result assigns the values back to the original variable upon function exit, meaning that n would take on the value of x and of y. Which one it gets first (or, more importantly, last, since that will be its final value) is open to interpretation, since you haven't specified which language you're actually using.
The Wikipedia page on this has the following to say on the subject ("call-by-copy-restore" is its terminology for what you're asking about, and I've emphasised the important bit and paraphrased to make it clearer):
The semantics of call-by-copy-restore also differ from those of call-by-reference where two or more function arguments alias one another; that is, point to the same variable in the caller's environment.
Under call-by-reference, writing to one will affect the other immediately; call-by-copy-restore avoids this by giving the function distinct copies, but leaves the result in the caller's environment undefined depending on which of the aliased arguments is copied back first. Will the copies be made in left-to-right order both on entry and on return?
I would hope that the language specification clarifies the actual behaviour and makes it consistent, so as to avoid all those undefined-behaviour corners you often see in C and C++ :-)
Examine the code below, slightly modified from your original since I'm inherently lazy and don't want to have to calculate the final values :-)
foo(int x, int y){
x = 7;
y = 42;
}
int n = 5;
foo(n,n);
print(n);
The immediate possibilities I see as the most likely are (a small simulation of the first two follows this list):
Strict left-to-right copy-on-exit: n will become x then y, so 42.
Strict right-to-left copy-on-exit: n will become y then x, so 7.
Undefined behaviour: n may take on either, or possibly any, value.
The compiler raises a diagnostic and refuses to compile, if it has no strict rule and doesn't want your code to end up behaving in a (seemingly) random manner.
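If it helps to see the first two possibilities side by side, here is a small Python simulation of call-by-copy-restore (Python itself does not use this calling convention; the copy-in/copy-out steps are written out by hand and the names are purely illustrative):
def foo(x, y):
    # body of the modified example above
    x = 7
    y = 42
    return x, y                      # the formal parameters to be copied back on exit

def call_copy_restore(n, left_to_right=True):
    x, y = foo(n, n)                 # copy-in: x and y start as independent copies of n
    if left_to_right:                # copy-out: both formals are written back to n
        n = x                        # n becomes 7 ...
        n = y                        # ... then 42
    else:
        n = y                        # n becomes 42 ...
        n = x                        # ... then 7
    return n

print(call_copy_restore(5, left_to_right=True))   # 42
print(call_copy_restore(5, left_to_right=False))  # 7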

Replace values in an array in matlab without changing the original array

My question is: given an array A, how can you produce another array identical to A except with all negative values changed to 0 (without changing the values in A)?
My way to do this is:
B = A;
B(B<0)=0
Is there any one-line command to do this, that also doesn't require creating another copy of A?
While this particular problem does happen to have a one-liner solution, e.g. as pointed out by Luis and Ian's suggestions, in general, if you want a copy of a matrix with some operation performed on it, then the way to do it is exactly how you did it. Matlab doesn't allow chained operations or indexing into compound expressions, so you generally have no choice but to assign to a temporary variable in this manner.
However, if it makes you feel better, B = A is efficient, as it will not result in any newly allocated memory unless/until B or A changes later on. In other words, before the B(B<0)=0 step, B is simply a reference to A and takes no extra memory. This is copy-on-write, which is how Matlab works under the hood to ensure no memory is wasted on simple aliases.
PS. There is nothing inherently efficient about one-liners; in fact, you should avoid them if they lead to obscure code. It's better to have things defined over multiple lines if it makes the logic and intent of the algorithm clearer.
e.g. this is also a valid one-liner that solves your problem:
B = subsasgn(A, substruct('()',{A<0}), 0)
This is in fact the literal answer to your question (i.e. this is pretty much the code that Matlab will call under the hood for your commands). But is this clearer, more elegant code just because it's a one-liner? No, right?
Try
B = A.*(A>=0)
Explanation:
A>=0 - creates a matrix where each element is 1 if the corresponding element of A is >= 0, and 0 otherwise
A.*(A>=0) - multiplies element-wise, zeroing out the negative entries
B = A.*(A>=0) - assigns the above to B.

Turn off "smart behavior" in Matlab

There is one thing I do not like about Matlab: it sometimes tries to be too smart. For instance, if I take the square root of a negative number, like
a = -1; sqrt(a)
Matlab does not throw an error but silently switches to complex numbers. The same happens for logarithms of negative numbers. This can lead to hard-to-find errors in a more complicated algorithm.
A similar problem is that Matlab silently "solves" non-square linear systems, as in the following example:
A=eye(3,2); b=ones(3,1); x = A \ b
Obviously x does not satisfy A*x==b (it solves a least-squares problem instead).
Is there any possibility to turn those "features" off, or at least to have Matlab print a warning message in these cases? That would really help a lot in many situations.
I don't think there is anything like "being smart" in your examples. The square root of a negative number is complex. Similarly, the left-division operator is defined in Matlab as calculating the pseudoinverse for non-square inputs.
If you have an application that should not return complex numbers (beware of floating-point errors!), then you can use isreal to test for that. If you do not want the left-division operator to calculate the pseudoinverse, test whether A is square.
Alternatively, if for some reason you are really unable to do input validation, you can overload both sqrt and \ to work only on positive numbers and to not calculate the pseudoinverse.
You need to understand all of the implications of what you're writing and make sure that you use the right functions if you're going to guarantee good code. For example:
For the first case, use realsqrt instead.
For the second case, use inv(A) * b instead.
Or alternatively, include the appropriate checks before/after you call the built-in functions. If you need to do this every time, then you can always write your own functions.