I'm interested in multi-level data integrity checking and correcting, where multiple error-correcting codes are used (they can be two codes of the same type). I'm under the impression that a system using two codes would achieve maximum effectiveness if the two hash codes being used were orthogonal to each other.
Is there a list of which codes are orthogonal to what? Or do you need to use the same hashing function but with different parameters or usage?
I expect that the first-level ECC will be a Reed-Solomon code, though I do not actually have control over this first function; hence I cannot use a single code with improved capabilities.
Note that I'm not concerned with encryption security.
Edit: This is not a duplicate of
"When are hash functions orthogonal to each other?", since that question essentially asks for the definition of orthogonal hash functions. I want examples of hash functions that are orthogonal.
I'm not certain it is even possible to enumerate all orthogonal hash functions. However, you only asked for some examples, so I will endeavour to provide some as well as some intuition as to what properties seem to lead to orthogonal hash functions.
From a related question, these two functions are orthogonal to each other:
Domain: Reals --> Codomain: Reals
f(x) = x + 1
g(x) = x + 2
This is a pretty obvious case. It is easier to determine orthogonality if the hash functions are (both) perfect hash functions such as these are. Please note that the term "perfect" is meant in the mathematical sense, not in the sense that these should ever be used as hash functions.
Perfect hash functions satisfy the orthogonality requirement more or less trivially: whenever both functions are injective, they are perfect hash functions and thus orthogonal. Similar examples:
Domain: Integers --> Codomain: Integers
f(x) = 2x
g(x) = 3x
In this case, each function is injective but not bijective: each element of the domain maps to a distinct element of the codomain, but there are many elements of the codomain that are not mapped to at all. These are still adequate for both perfect hashing and orthogonality. (Note that if the domain/codomain were the reals, these would be bijections.)
Functions that are not injective are trickier to analyze. Note, however, that if even one of the two functions is injective, the pair is still orthogonal: orthogonality only fails when two distinct inputs collide under both functions at once, and an injective function never collides at all:
Domain: Reals --> Codomain: Reals
f(x) = e^x // Injective -- every x produces a unique value
g(x) = x^2 // Not injective -- every number other than 0 can be produced by two different x's
Here g collides (for example, g(-2) = g(2) = 4), but f distinguishes every pair of inputs, so the pair is orthogonal. So one trick is to check whether at least one function is injective. But what if neither is? I do not presently know of an algorithm for the general case that will determine this other than brute force.
Domain: Naturals --> Codomain: Naturals
j(x) = ceil(sqrt(x))
k(x) = ceil(x / 2)
Neither function is injective, in this case because of the presence of the obviously non-injective ceil combined with a restricted domain. (In practice most hash functions will not have a domain more permissive than integers.) Testing out values shows that j has non-unique results where k does not, and vice versa:
j(1) = ceil(sqrt(1)) = ceil(1) = 1
j(2) = ceil(sqrt(2)) = ceil(~1.41) = 2
k(1) = ceil(1 / 2) = ceil(0.5) = 1
k(2) = ceil(2 / 2) = ceil(1) = 1
Push a little further, though, and brute force turns up a pair of inputs where both functions collide: j(3) = j(4) = 2 and k(3) = k(4) = 2. So these two functions are in fact not orthogonal, which shows how easily intuition fails in the non-injective case.
But what about these functions?
Domain: Integers --> Codomain: Reals
m(x) = cos(x^3) % 117
n(x) = ceil(e^x)
In these cases, neither function is injective (due to the modulus and the ceil), but when do they have a collision? More importantly, for which pairs of values of x do they both have a collision? These questions are hard to answer. I suspect they are not orthogonal, but without a specific counterexample, I'm not sure I could prove that.
These are not the only hash functions you could encounter, of course. So the procedure for determining orthogonality is: first, see whether at least one of the functions is injective; if so, they are orthogonal. Second, identify the pieces of each function that cause it to be non-injective, determine their periods or special cases (such as x = 0), and try to come up with counterexamples where both functions collide at once. Third, visit Math Stack Exchange and hope someone can tell you where they break orthogonality, or prove that they don't. A brute-force check over a finite slice of the domain, as sketched below, is often the fastest way to find a counterexample.
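Here is a minimal brute-force sketch in Python, assuming the definition from the linked question (two hash functions are orthogonal when no two distinct inputs collide under both at once); the helper name is mine:

import math
from itertools import combinations

def j(x):
    return math.ceil(math.sqrt(x))

def k(x):
    return math.ceil(x / 2)

def orthogonal_counterexample(f, g, sample):
    # Return a pair (x, y) that collides under both f and g, or None.
    # Only the given finite sample is checked, so None is evidence of
    # orthogonality on that sample, not a proof over the whole domain.
    for x, y in combinations(sample, 2):
        if f(x) == f(y) and g(x) == g(y):
            return (x, y)
    return None

print(orthogonal_counterexample(j, k, range(1, 100)))  # (3, 4)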
There are many mathematical programs out there, some of which are able to solve calculus-based problems; GeoGebra and Qalculate! to name a few.
How are those programs able to solve calculus-based problems which humans need to evaluate using a long procedure?
For example, it takes a lot of steps for humans to solve one such problem, as shown here on Quora.
How can those mathematical programs solve them with such good accuracy?
The Church-Turing thesis implies that anything a human being can calculate can be calculated by any Turing-equivalent system of computation - including programs running on computers. That is to say, if we can solve the problem (or calculate an approximate answer that meets some criteria) then a computer program can be made to do the same thing. Let's consider a simpler example:
f(x) = x
a = Integral(f, 0, 1)
A human being presented with this problem has two options:
try to compute the antiderivative using some procedure, then use procedures to evaluate the definite integral over the supplied range
use some numerical method to calculate an approximate value for the definite integral which meets some criteria for closeness to the true value
In either case, human beings have a set of tools that allow them to do this:
recognize that f(x) is a polynomial in x. There are rules for constructing the antiderivatives of polynomials. Specifically, each term ax^b in the polynomial can be converted to (a/(b+1))x^(b+1), and then an arbitrary constant c added to the end. We then say ∫f(x)dx = (1/2)x^2 + c. Now that we have the antiderivative, we have a procedure for computing the definite integral over a range: evaluate the antiderivative at the high value, then subtract from that its value at the low value. This gives ((1/2)1^2) - ((1/2)0^2) = 1/2 - 0 = 1/2.
decide that for our purposes a Riemann sum with dx=1/10 is sufficient and that we'll take the midpoint value. We get 10 rectangles with base 1/10 and heights 1/20, 3/20, 5/20, 7/20, 9/20, 11/20, 13/20, 15/20, 17/20 and 19/20, respectively. The areas are 1/200, 3/200, 5/200, 7/200, 9/200, 11/200, 13/200, 15/200, 17/200 and 19/200. The sum of these is (1+3+5+7+9+11+13+15+17+19)/200 = 100/200 = 1/2. We happened to get the exact answer since we used the midpoint value and evaluated the definite integral of a linear function; in general, we'd have been close but not exact. (Both routes are sketched in code just below.)
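Both routes are mechanical enough to write down directly. Here is a minimal sketch in Python (my own illustration, with the antiderivative hard-coded for this particular f; the midpoint rule is general):

def f(x):
    return x

# Route 1: the antiderivative of f(x) = x is F(x) = (1/2)x^2 (+ c), so the
# definite integral over [0, 1] is F(1) - F(0).
def F(x):
    return x ** 2 / 2

exact = F(1) - F(0)

# Route 2: midpoint Riemann sum with n rectangles of width (hi - lo)/n.
def midpoint_sum(f, lo, hi, n):
    dx = (hi - lo) / n
    return dx * sum(f(lo + (i + 0.5) * dx) for i in range(n))

approx = midpoint_sum(f, 0, 1, 10)
print(exact, approx)  # 0.5 0.5 -- exact here only because f is linear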
The only difficulty is in adequately specifying the procedure human beings use to solve these problems in various ways. Once specified, computers are perfectly capable of doing them. And make no mistake, human beings have a procedure - conscious or subconscious - for doing these problems reliably.
I noticed that in Coq's definition of the rationals the inverse of zero is defined to be zero. (Usually, division by zero is not well-defined/legal/allowed.)
Require Import QArith.
Lemma inv_zero_is_zero: (/ 0) == 0.
Proof. unfold Qeq. reflexivity. Qed.
Why is it so?
Could it cause problems in calculations with rationals, or is it safe?
The short answer is: yes, it is absolutely safe.
When we say that division by zero is not well-defined, what we actually mean is that zero doesn't have a multiplicative inverse. In particular, we can't have a function that computes a multiplicative inverse for zero. However, it is possible to write a function that computes the multiplicative inverse for all other elements and returns some arbitrary value when such an inverse doesn't exist (e.g. for zero). This is exactly what this function is doing.
Having this inverse operator be defined everywhere means that we'll be able to define other functions that compute with it without having to argue explicitly that its argument is different from zero, making it more convenient to use. Indeed, imagine what a pain it would be if we made this function return an option instead, failing when we pass it zero: we would have to make our entire code monadic, making it harder to understand and reason about. We would have a similar problem if writing a function that requires a proof that its argument is non-zero.
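To see the convenience argument outside of Coq, here is a rough analogy in Python (entirely my own sketch, not the Coq library's API): the total version composes freely, while the Option-like version forces every caller to deal with the missing case:

from fractions import Fraction
from typing import Optional

def inv_total(q: Fraction) -> Fraction:
    # Total function: defined everywhere, returning the arbitrary value 0
    # at 0, in the same spirit as Coq's Qinv.
    return Fraction(0) if q == 0 else 1 / q

def inv_option(q: Fraction) -> Optional[Fraction]:
    # Partial flavour: honest about the missing inverse, but every caller
    # now has to propagate the None case itself.
    return None if q == 0 else 1 / q

# With the total version, code composes without ceremony; the side
# condition q != 0 only matters when we need q * inv_total(q) == 1 to hold.
def harmonic_mean(a: Fraction, b: Fraction) -> Fraction:
    return 2 * inv_total(inv_total(a) + inv_total(b))

print(harmonic_mean(Fraction(1, 2), Fraction(1, 3)))  # 2/5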
So, what's the catch? Well, when trying to prove anything about a function that uses the inverse operator, we will have to add explicit hypotheses saying that we're passing it an argument that is different from zero, or argue that its argument can never be zero. The lemmas about this function then get additional preconditions, e.g.
forall q, ~ q == 0 -> q * / q == 1
Many other libraries are structured like that, cf. for instance the definition of the field axioms in the algebra library of MathComp.
There are some cases where we want to internalize the additional preconditions required by certain functions as type-level constraints. This is what we do for instance when we use length-indexed vectors and a safe get function that can only be called on numbers that are in bounds. So how do we decide which one to go for when designing a library, i.e. whether to use a rich type with a lot of extra information and prevent bogus calls to certain functions (as in the length-indexed case) or to leave this information out and require it as explicit lemmas (as in the multiplicative inverse case)? Well, there's no definite answer here, and one really needs to analyze each case individually and decide which alternative will be better for that particular case.
I would like to partition a number into a set of almost equal values. The only criterion is that each value must be between 60 and 80.
For example, if I have a value = 300, this means that 75 * 4 = 300.
I would like to know a method to get this 4 and 75 in the above example. In some cases the values don't all need to be equal, but they should be between 60 and 80. Any operations can be used (addition, subtraction, etc.). However, the outputs must not be floating point.
Also, the total does not have to be exactly 300 as in this case: it may exceed the target by up to +40, so for the case of 300 the numbers can sum up to 340 if required.
Assuming only addition, you can formulate this problem as a linear programming problem. You would choose an objective function that maximizes the sum of all of the factors chosen to generate that number for you. Therefore, your objective function would be:
maximize x_1 + x_2 + ... + x_n
In this case, n would be the number of factors you are trying to decompose your number into. Each x_i is a particular factor in the overall sum of the value you want to decompose. I'm also going to assume that none of the factors can be floating point, and can only be integer. As such, you need to use a special case of linear programming called integer programming, where the constraints and the actual solution to your problem are all in integers. In general, the integer programming problem is formulated thusly:

minimize f^T x
subject to: A*x <= b     (inequality constraints)
            Aeq*x = beq  (equality constraints)
            lb <= x <= ub (bounds on x)
            x integer
You are actually trying to minimize this objective function, producing a parameter vector x that is subject to all of these constraints. In our case, x would be a vector of numbers where each element forms part of the sum to the value you are trying to decompose (300 in your case).
You have inequalities, equalities and also boundaries of x that each parameter in your solution must respect. You also need to make sure that each parameter of x is an integer. As such, MATLAB has a function called intlinprog that will perform this for you. However, this function assumes that you are minimizing the objective function, and so if you want to maximize, simply minimize on the negative. f is a vector of weights to be applied to each value in your parameter vector, and with our objective function, you just need to set all of these to -1.
Therefore, to formulate your problem in an integer programming framework, you are actually doing:
minimize -(x_1 + x_2 + ... + x_n)
subject to: x_1 + x_2 + ... + x_n = V
            60 <= x_i <= 80 for all i
            x_i integer for all i
V would be the value you are trying to decompose (so 300 in your example).
The standard way to call intlinprog is in the following way:
x = intlinprog(f,intcon,A,b,Aeq,beq,lb,ub);
f is the vector that weights each parameter of the solution you want to solve for, and intcon denotes which of your parameters need to be integer. In this case you want all of them to be integer, so you would supply an increasing vector from 1 to n, where n is the number of factors you want to decompose the number V into (same as before). A and b are the matrix and vector that define your inequality constraints; because you have no inequality constraints, only an equality one, you'd set these to empty ([]). Aeq and beq are the same as A and b, but for equality constraints. Because you only have one constraint here, Aeq is simply a matrix of one row where each value is set to 1, and beq is a single value which denotes the number you are trying to factorize. lb and ub are the lower and upper bounds for each value in the parameter vector, so these would be 60 and 80 respectively, and you'd have to specify one bound per parameter to ensure that each parameter is bounded between these two values.
Now, because you don't know how many factors will evenly decompose your value, you'll have to loop over a range of factor counts (like 1 to 10, or 1 to 20, etc.), place your results in a cell array, and then examine each result to see whether or not an integer decomposition was successful.
num_factors = 20; %// Number of factors to try and decompose your value
V = 300;
results = cell(1, num_factors);

%// Try to solve the problem for a number of different factors
for n = 1 : num_factors
    x = intlinprog(-ones(n,1), 1:n, [], [], ones(1,n), V, 60*ones(n,1), 80*ones(n,1));
    results{n} = x;
end
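If you don't have MATLAB's Optimization Toolbox at hand, a rough equivalent can be sketched in Python with SciPy's milp (my own translation, assuming SciPy >= 1.9; this is not the toolbox function the answer describes):

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds  # requires SciPy >= 1.9

V = 300           # value to decompose
num_factors = 20  # number of factor counts to try

results = {}
for n in range(1, num_factors + 1):
    res = milp(c=-np.ones(n),  # maximize the sum by minimizing its negative
               constraints=LinearConstraint(np.ones((1, n)), lb=V, ub=V),  # parts must sum to V
               integrality=np.ones(n),  # every x_i must be integer
               bounds=Bounds(60, 80))   # 60 <= x_i <= 80
    if res.success:
        results[n] = np.round(res.x).astype(int)

print(results)  # for V = 300, only n = 4 and n = 5 turn out to be feasible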
You can then go through results and see which value of n was successful in decomposing your number into that said number of factors.
One small problem here is that we also don't know how many factors we should check up to. That unfortunately I don't have an answer to, and so you'll have to play with this value until you get good results. This is also an unconstrained parameter, and I'll talk about this more later in this post.
However, intlinprog was only released in recent versions of MATLAB. If you want to do the same thing without it, you can use linprog, which is the floating point version of integer programming... actually, it's just the core linear programming framework itself. You would call linprog this way:
x = linprog(f,A,b,Aeq,beq,lb,ub);
All of the variables are the same, except that intcon is not used here... which makes sense, as linprog may generate floating point numbers as part of its solution. Because linprog can generate floating point solutions, you can check whether the result for a given value of n was integral: take the floor of the solution vector, subtract it from the solution, and sum the differences. If you get a value of 0, you had a completely integer result. Therefore, you'd do something like:
num_factors = 20; %// Number of factors to try and decompose your value
V = 300;
results = cell(1, num_factors);

%// Try to solve the problem for a number of different factors
for n = 1 : num_factors
    x = linprog(-ones(n,1), [], [], ones(1,n), V, 60*ones(n,1), 80*ones(n,1));
    results{n} = x;
end
%// Loop through and determine which decompositions were successful integer ones.
%// Guard against empty cells, in case linprog returned no solution for some n.
out = cellfun(@(x) isempty(x) || sum(abs(floor(x) - x)) ~= 0, results);
%// Determine which values of n were successful in the integer composition.
final_factors = find(~out);
final_factors will contain those numbers of factors that were successful in an integer decomposition. Now, if final_factors is empty, this means that no number of factors decomposed the value into integers. Noting your problem description, you said you can allow for tolerances, so perhaps scan through results, determine which overall sum best matches the value, and choose whatever number of factors gave you that result as the final answer.
Now, noting from my comments, you'll see that this problem is very unconstrained. You don't know how many factors are required to get an integer decomposition of your value, which is why we had to semi-brute-force it. In fact, this is a more general case of the subset sum problem, which is NP-complete. Basically, what this means is that it is not known whether there is a polynomial-time algorithm for this kind of problem, and the only known general way to get a valid solution is to enumerate each possible solution and check whether it satisfies the problem. Brute-forcing solutions usually requires exponential time, which is intractable for large problems.

Another interesting fact is that modern cryptography also banks on computational intractability: the only way for an attacker to determine the right key that was used to encrypt your plaintext should be to check all possible keys, which is an intractable problem... especially if you use 128-bit encryption! That means checking 2^128 possibilities, and assuming a moderately fast computer, the worst-case time to find the right key exceeds the current age of the universe. Check out this cool Wikipedia post for more details on intractability with regards to key breaking in cryptography.
In fact, NP-complete problems are very popular, and there have been many attempts to determine whether there is or there isn't a polynomial-time algorithm to solve such problems. An interesting property is that if you find a polynomial-time algorithm for any one NP-complete problem, you will have found a polynomial-time algorithm for them all.
The Clay Mathematics Institute has what are known as Millennium Problems where if you solve any problem listed on their website, you get a million dollars.
Also, that's for each problem, so one problem solved == 1 million dollars!
The P vs. NP problem is among the seven problems up for solving. These problems were first posed to the public in the year 2000 (hence "millennium"...), and if I recall correctly, only one of them has been solved so far. So... it has been about 14 years and only one problem solved. Don't let that discourage you though! If you want to invest some time and try to solve one of the problems, please do!
Hopefully this will be enough to get you started. Good luck!
Suppose that f is a function of one parameter whose output is an n-dimensional (m1 × m2… × mn) array, and that B is a vector of length k whose elements are all valid arguments for f.
I am looking for a convenient, and more importantly, "shape-agnostic", MATLAB expression (or recipe) for producing the (n+1)-dimensional (m1 × m2 ×…× mn × k) array obtained by "stacking" the k n-dimensional arrays f(b), where the parameter b ranges over B.
To do this in numpy, I would use an expression like this one:
C = concatenate([f(b)[..., None] for b in B], -1)
In case it's of any use, I'll unpack this numpy expression below (see APPENDIX), but the feature of it that I want to emphasize now is that it is entirely agnostic about the shapes/sizes of f(b) and B. For the types of applications I have in mind, the ability to write such "shape-agnostic" code is of utmost importance. (I stress this point because much MATLAB code I come across for doing this sort of manipulation is decidedly not "shape-agnostic", and I don't know how to make it so.)
APPENDIX
In general, if A is a numpy array, then the expression A[..., None] can be thought of as "reshaping" A so that it gets one extra, trivial dimension. Thus, if f(b) is an n-dimensional (m1 × m2 ×…× mn) array, then f(b)[..., None] is the corresponding (n+1)-dimensional (m1 × m2 ×…× mn × 1) array. (The reason for adding this trivial dimension will become clear below.)
With this clarification out of the way, the meaning of the first argument to concatenate, namely:
[f(b)[..., None] for b in B]
is not too hard to decipher. It is a standard Python "list comprehension", and it evaluates to the sequence of the k (n+1)-dimensional (m1 × m2 ×…× mn × 1) arrays f(b)[..., None], as the parameter b ranges over the vector B.
The second argument to concatenate is the "axis" along which the concatenation is to be performed, expressed as the index of the corresponding dimension of the arrays to be concatenated. In this context, the index -1 plays the same role as the end keyword does in MATLAB. Therefore, the expression
concatenate([f(b)[..., None] for b in B], -1)
says "concatenate the arrays f(b)[..., None] along their last dimension". It is in order to provide this "last dimension" to concatenate over that it becomes necessary to reshape the f(b) arrays (with, e.g., f(b)[..., None]).
One way of doing that is:
% input:
f=@(x) x*ones(2,2)
b=1:3;
%%%%
X=arrayfun(f,b,'UniformOutput',0);
X=cat(ndims(X{1})+1,X{:});
Maybe there are more elegant solutions?
Shape agnosticity is an important difference between the philosophies underlying NumPy and Matlab; it's a lot harder to accomplish in Matlab than it is in NumPy. And in my view, shape agnosticity is a bad thing, too -- the shape of matrices has mathematical meaning. If some function or class were to completely ignore the shape of the inputs, or change them in a way that is not in accordance with mathematical notations, then that function destroys part of the language's functionality and intent.
In programmer terms, it's an actually useful feature designed to prevent shape-related bugs. Granted, it's often a "programmatic inconvenience", but that's no reason to adjust the language. It's really all in the mindset.
Now, having said that, I doubt an elegant solution for your problem exists in Matlab :) My suggestion would be to stuff all of the requirements into the function, so that you don't have to do any post-processing:
f = @(x) bsxfun(@times, permute(x(:), [2:numel(x) 1]), ones(2,2, numel(x)) )
Now obviously this is not quite right, since f(1) doesn't work and f(1:2) does something other than f(1:4), so obviously some tinkering has to be done. But as the ugliness of this oneliner already suggests, a dedicated function might be a better idea. The one suggested by Oli is pretty decent, provided you lock it up in a function of its own:
function y = f(b)
    g = @(x) x*ones(2,2); % or whatever else you want
    y = arrayfun(g, b, 'uni', false);
    y = cat(ndims(y{1})+1, y{:});
end
so that f(b) for any b produces the right output.
I'm trying to implement a Count-Min Sketch algorithm in Scala, and so I need to generate k pairwise independent hash functions.
This is lower-level than anything I've ever programmed before, and I don't know much about hash functions except from algorithms classes, so my question is: how do I generate these k pairwise independent hash functions?
Am I supposed to use a hash function like MD5 or MurmurHash? Do I just generate k hash functions of the form f(x) = ax + b (mod p), where p is a prime and a and b are random integers? (i.e., the universal hashing family everyone learns in algorithms 101)
I'm looking more for simplicity than raw speed (e.g., I'll take something 5x slower if it's simpler to implement).
Scala already has MurmurHash implemented (it's scala.util.MurmurHash). It's very fast and very good at distributing values. A cryptographic hash is overkill--you'll just take tens or hundreds of times longer than you need to. Just pick k different seeds to start with and, since it's nearly cryptographic in quality, you'll get k largely independent hash codes. (In 2.10, you should probably switch to using scala.util.hashing.MurmurHash3; the usage is rather different but you can still do the same thing with mixing.)
If you only need near values to be mapped to randomly far values this will work; if you want to avoid collisions (i.e. if A and B collide using hash 1 they will probably not also collide using hash 2), then you'll need to go at least one more step and hash not the whole object but subcomponents of it so there's an opportunity for the hashes to start out different.
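To illustrate the one-seed-per-function idea in runnable form (my own sketch in Python using the third-party mmh3 package rather than Scala's MurmurHash, but the principle is identical):

import mmh3  # pip install mmh3 -- MurmurHash3 bindings

k = 4
seeds = range(k)  # any k distinct seeds will do

def hash_codes(value: bytes):
    # One largely independent 32-bit hash code per seed.
    return [mmh3.hash(value, seed) for seed in seeds]

print(hash_codes(b"example"))  # k different hash codes for the same input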
Probably the simplest approach is to take some cryptographic hash function and "seed" it with different sequences of bytes. For most practical purposes, the results should be independent, as this is one of the key properties a cryptographic hash function should have (if you replace any part of a message, the hash should be completely different).
I'd do something like:
// for each 0 <= i < k, generate a sequence of random bytes
val randomSeeds: Array[Array[Byte]] = ... // initialize with random sequences

def hash(i: Int, value: Array[Byte]): Array[Byte] = {
  val dg = java.security.MessageDigest.getInstance("SHA-1")
  // "seed" the digest with a random value based on the index
  dg.update(randomSeeds(i))
  dg.digest(value)
  // if you need integer hash values, just take 4 bytes
  // of the result and convert them to an int
}
Edit:
I don't know the precise requirements of the Count-Min Sketch; maybe a simple hash function would suffice, but it doesn't seem to be the simplest solution to implement.
I suggested a cryptographic hash function, because there you have quite strong guarantees that the resulting hash functions will be very different, and it's easy to implement, just use the standard libraries.
On the other hand, if you have two hash functions of the form f1(x) = ax + b (mod p) and f2(x) = cx + d (mod p), then you can compute one from the other (without knowing x) using the simple linear formula f2(x) = (c / a) * (f1(x) - b) + d (mod p), where dividing by a means multiplying by its modular inverse. This suggests that they aren't very independent, so you could run into unexpected problems here.
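For completeness, here is a minimal sketch of the algorithms-101 family the question mentions, drawn with fresh random parameters per function (the names and the choice of prime are mine); strictly speaking, h(x) = ((a*x + b) mod p) mod w is a universal family, which is close enough to pairwise independent for Count-Min-style analyses:

import random

P = (1 << 61) - 1  # a Mersenne prime, larger than any key we expect

def make_hash(w):
    # One member of the family h(x) = ((a*x + b) mod p) mod w.
    a = random.randrange(1, P)  # a != 0
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % w

k, w = 5, 1024
hashes = [make_hash(w) for _ in range(k)]  # k independently drawn functions
print([h(42) for h in hashes])             # k bucket indices for the key 42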