Complexity in distributed system - distributed-computing

If I have a network of n nodes with unique IDs chosen from a range, say 1 to n^2, then on the order of log n bits are sufficient and also necessary to address the nodes in the network.

Assume each node is given an ID in [1, n^2], and that we have no control over the assignment of IDs. You will agree that if we can encode any such ID in Θ(log n) bits, then the sufficiency part of your statement follows.
Now, any integer x requires Θ(log x) bits in binary. Thus, writing our maximum integer n^2 in binary requires Θ(log n^2) bits. We notice that:
Θ(log n^2)
= Θ(2 log n)      | properties of logarithms
= Θ(log n)        | properties of Θ
This shows the sufficiency. As for necessity, it follows even more directly: with fewer than log2(n) bits there are fewer than n distinct bit strings, so the n nodes cannot all be given distinct addresses.
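As a quick sanity check (a sketch of my own, not part of the original answer), the following snippet compares, for a few values of n, the number of bits needed to write n and n^2 in binary; the latter is roughly twice the former, i.e. still Θ(log n).

```java
public class IdBits {
    public static void main(String[] args) {
        // For a few network sizes n, compare the bits needed for the
        // largest possible ID (n^2) with the bits needed for n itself.
        long[] sizes = {10, 1_000, 1_000_000};
        for (long n : sizes) {
            long maxId = n * n;
            int bitsForN = 64 - Long.numberOfLeadingZeros(n);        // = ceil(log2(n+1))
            int bitsForMaxId = 64 - Long.numberOfLeadingZeros(maxId);
            System.out.printf("n=%d: %d bits for n, %d bits for n^2%n",
                    n, bitsForN, bitsForMaxId);
        }
    }
}
```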

Related

Proof of the pumping lemma for linear context-free languages

Where can I find the proof of the pumping lemma for linear context-free languages?
I am looking for the proof that is specific to linear context-free languages.
I also looked for a formal proof and could not find one. I'm not sure if the following is a formal proof, but it may give you some idea.
The lemma: For every linear context-free language L there is an n > 0 such that every w in L with |w| > n can be written as w = uvxyz with |vy| > 0, |uvyz| <= n, and uv^i x y^i z in L for every i >= 0.
"Proof":
Imagine a parse tree for some long string w in L with start symbol S. Also, let's assume that the tree does not contain useless nodes. If w is long enough, there will be at least one non-terminal that occurs more than once. Let's call the first repeating non-terminal going down the tree X, its first occurrence (from the top) X[1] and its second occurrence X[2]. Let x be the substring of w generated by X[2], vxy the substring generated by X[1], and uvxyz the full string w generated by S.
Since the path from X[1] to X[2] generates v and y, we could generate a new tree in which we replicate this part of the derivation any number of times before descending from X[1]. This proves that uv^i x y^i z is in L for every i >= 0. Since our tree contains no useless nodes, the derivation from X[1] to X[2] must generate some terminals, and this proves that |vy| > 0.
L is linear, which means that on every level of the tree there is a single non-terminal symbol. Each node in the tree therefore covers a substring of w whose length is bounded by a linear function of the node's height. The derivation from S down to X[2] covers uv and yz of w, and the number of tree levels traversed is bounded by (2 * the number of non-terminal symbols + 1). Since the number of levels traversed is bounded and the tree is linear, this also bounds the yield of the derivation from S to X[2], which means |uvyz| <= n for some n >= 0.
Note: Keep in mind that we construct X[1] and X[2] top-down, in contrast to how we prove the "regular" pumping lemma for context-free grammars in general. In the "regular" pumping lemma there is a bound on the height of X[1] and therefore a bound on |vxy|. In our case there is no bound on the height of X[1]; it can be as high as required by the length of w. There is a bound, however, on the number of tree levels from S to X[2]. This would not mean much if the grammar were not linear, since the output generated going from S to X[2] would still only be bounded by the height of S (which is unbounded). But in the linear case this output is bounded, and therefore |uvyz| <= n.
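To make the decomposition concrete, here is a small worked example of my own (not from the original answer) using the linear grammar S -> aSb | c, which generates L = { a^k c b^k : k >= 0 }. For w = aacbb, take X[1] to be the root S and X[2] the S one level below it:

```latex
% w = aacbb, X[1] = root S, X[2] = the S one level below it
\[
  u = \varepsilon, \quad v = a, \quad x = acb, \quad y = b, \quad z = \varepsilon,
\]
\[
  u v^i x y^i z = a^i (acb)\, b^i = a^{i+1} c\, b^{i+1} \in L \quad \text{for every } i \ge 0,
\]
\[
  |vy| = 2 > 0, \qquad |uvyz| = 2 \le n .
\]
```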

Space complexity for a simple streaming algorithm

I want to determine the space complexity of the go-to example of a simple streaming algorithm.
If you get a stream containing n-1 of the distinct numbers 1 to n (in arbitrary order) and have to detect the one missing number, you calculate the sum of all numbers from 1 to n using the formula n(n+1)/2 and then subtract each incoming number. The result is your missing number. I found a German Wikipedia article stating that the space complexity of this algorithm is O(log n). (https://de.wikipedia.org/wiki/Datenstromalgorithmus)
What I do not understand is: the number of bits needed to store a number n is log2(n), OK... but I do have to calculate the sum, though. So n(n+1)/2 is larger than n and therefore needs more space than just log(n), right?
Can someone help me with this? Thanks in advance!
If integer A in binary coding requires N_a bits and integer B requires N_b bits, then A*B requires no more than N_a + N_b bits (not N_a * N_b). So the expression n(n+1)/2 requires no more than log2(n) + log2(n+1) = O(2 log2(n)) = O(log2(n)) bits.
Moreover, you may raise n to any fixed power i and it will still use O(log2(n)) space: n itself, n^10, n^500, n^10000000 all require O(log(n)) bits of storage.
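A minimal Java sketch of that algorithm (the class and method names are mine): the entire state is two counters, each of which fits in O(log n) bits, which is exactly the O(log n) space bound from the Wikipedia article.

```java
import java.util.stream.LongStream;

public class MissingNumber {
    // Finds the missing number in a stream containing n-1 of the values 1..n.
    // Space: two counters (the expected sum and the running difference),
    // each of them an O(log n)-bit integer.
    static long findMissing(long n, LongStream stream) {
        long expected = n * (n + 1) / 2;                     // about 2*log2(n) bits
        return stream.reduce(expected, (acc, x) -> acc - x); // stays between 0 and expected
    }

    public static void main(String[] args) {
        // Example: n = 6 and the value 4 is missing from the stream.
        System.out.println(findMissing(6, LongStream.of(1, 2, 3, 5, 6))); // prints 4
    }
}
```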

Is it possible to implement universal hashing for the complete range of integers?

I am reading about universal hashing on integers. The mandatory precondition seems to be that we choose a prime number p greater than the largest possible key.
I am not clear on this point.
If our set of keys is of type int, then this means that the prime number needs to be of the next bigger data type, e.g. long.
But eventually whatever we get as the hash would need to be down-casted to an int to index the hash table. Doesn't this down-casting affect the quality of the Universal Hashing (I am referring to the distribution of the keys over the buckets) somehow?
"If our set of keys are integers then this means that the prime number needs to be of the next bigger data type e.g. long."
That is not a problem. Sometimes it is necessary otherwise the hash family cannot be universal. See below for more information.
"But eventually whatever we get as the hash would need to be down-casted to an int to index the hash table. Doesn't this down-casting affect the quality of the Universal Hashing (I am referring to the distribution of the keys over the buckets) somehow?"
The answer is no. I will try to explain.
Whether p has another data type or not is not important for the hash family to be universal. What matters is that p is equal to or larger than u (the maximum integer of the universe of integers), i.e. that p is big enough (p >= u).
"A hash family is universal when the collision probability is equal or smaller than 1/m."
So the idea is to hold that constraint.
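For reference, the standard definition that this sentence paraphrases (my wording, following the usual textbook/Wikipedia formulation):

```latex
% A family H of hash functions h : U -> {0, 1, ..., m-1} is universal if,
% for every pair of distinct keys x != y in U,
\[
  \Pr_{h \in H}\bigl[\, h(x) = h(y) \,\bigr] \;\le\; \frac{1}{m}.
\]
```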
The value of p, in theory, can be as big as a long or more. It just needs to be an integer and prime.
u is the size of the domain/universe (or the number of keys). Given the universe U = {0, ..., u-1}, u denotes the size |U|.
m is the number of bins or buckets
p is a prime which must be equal to or greater than u
The hash family is defined as H = {h_a_b} with h_a_b(x) = ((a * x + b) mod p) mod m. Note that a and b are randomly chosen integers modulo the prime p (with a != 0); they can be either smaller or larger than m, the number of bins/buckets, but here too the data type (the domain of values) does not matter. See "Hashing integers" under universal hashing on Wikipedia for the notation.
Follow the proof on Wikipedia and you conclude that the collision probability is at most ⌊p/m⌋ * 1/(p-1), where ⌊ ⌋ denotes the floor (truncating the decimals). For p >> m (p considerably bigger than m) this bound tends to 1/m (but this does not mean that the probability gets better the larger p is).
In other words, answering your question: p being of a bigger data type is not a problem here and can even be required. p has to be equal to or greater than u, and a and b have to be randomly chosen integers modulo p, no matter the number of buckets m. With these constraints you can construct a universal hash family.
Maybe a mathematical example could help
Let U be the universe of integers that correspond to unsigned char (in C for example). Then U = {0, ..., 255}
Let p be the next prime equal to or greater than 256. Note that p can be of any of these types (short, int or long, signed or unsigned); the data type does not play a role (in programming, the type mainly denotes a domain of values). Whether 257 is a short, an int or a long does not matter for the correctness of the mathematical proof. We could also have chosen a larger p (i.e. a bigger data type); this does not change the proof's correctness.
The next possible prime number would be 257.
We say we have 25 buckets, i.e. m = 25. This means a hash family would be universal if the collision probability is equal or less than 1/25, i.e. approximately 0.04.
Put in the values for ⌊p/m⌋ * 1/(p-1): ⌊257/25⌋ * 1/256 = 10/256 = 0.0390625, which is smaller than 0.04. It is a universal hash family with the chosen parameters.
We could have chosen m = u = 256 buckets. Then the bound is ⌊257/256⌋ * 1/256 = 1/256 = 0.00390625, which is not larger than 1/m, so the hash family is still universal.
Let's try with m being bigger than p, e.g. m = 300. Collision probability is 0, which is smaller than 1/300 ~= 0.003333333333. Trivial, we had more buckets than keys. Still universal, no collisions.
Implementation detail example
We have the following:
x of type int (an element of U)
a, b, p of type long
m we'll see later in the example
Choose p so that it is bigger than the maximum element of U (i.e. p >= u); p is of type long.
Choose a and b (modulo p) randomly. They are of type long, but always < p.
For an x (of type int, from U) calculate ((a*x + b) mod p). a*x is of type long, (a*x + b) is also of type long, and so ((a*x + b) mod p) is also of type long. Note that the result of ((a*x + b) mod p) is < p. Let's denote that result h_a_b(x).
h_a_b(x) is now taken modulo m, which means that at this step it depends on the data type of m whether there will be downcasting or not. However, it does not really matter: the result of h_a_b(x) mod m is < m, because we take it modulo m. Hence the value of h_a_b(x) mod m fits into m's data type, and if it has to be downcast there is no loss of value. And so you have mapped a key to a bin/bucket.
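Putting those steps together, here is a minimal Java sketch (the class name is mine, and p = 257, m = 25 are just the example values from above, not a prescribed choice):

```java
import java.util.concurrent.ThreadLocalRandom;

public class UniversalHash {
    // h_a_b(x) = ((a*x + b) mod p) mod m, for keys in {0, ..., p-1}.
    private final long p;  // prime, at least the size of the key universe; fits in a long
    private final int m;   // number of buckets
    private final long a;  // random in {1, ..., p-1}
    private final long b;  // random in {0, ..., p-1}

    UniversalHash(long p, int m) {
        this.p = p;
        this.m = m;
        this.a = ThreadLocalRandom.current().nextLong(1, p); // a != 0
        this.b = ThreadLocalRandom.current().nextLong(0, p);
    }

    int bucket(int x) {
        long h = (a * x + b) % p; // all arithmetic done in long, result < p
        return (int) (h % m);     // result < m, so the downcast to int loses nothing
    }

    public static void main(String[] args) {
        UniversalHash h = new UniversalHash(257, 25); // universe {0, ..., 255}, 25 buckets
        for (int key : new int[]{0, 1, 42, 255}) {
            System.out.println(key + " -> bucket " + h.bucket(key));
        }
    }
}
```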

Why should the length of the string be greater than or equal to the number of states in the pumping lemma?

If L is a regular language, then there exists a constant n (which depends on L) such that every string w in L whose length is greater than or equal to n can be divided into three strings, w = xyz.
Here |w| is the length of the string and n is the number of states.
Why should we pick |w| greater than or equal to n?
And what is the pumping length?
If you look at the complete statement of the lemma (http://en.wikipedia.org/wiki/Pumping_lemma_for_regular_languages), you can see that it is actually stating that every sufficiently long string in the language is formed by a prefix x, a part y that can be repeated any number of times, and a suffix z. In the shortest case (when the repeating part is taken only once), the length of w corresponds to the number of states needed for the language. This Wikipedia image is very useful:
http://en.wikipedia.org/wiki/File:Pumping-Lemma_xyz_svg.svg
You seem to be misunderstanding the lemma (which you also have not stated completely), and mixing aspects of a proof with what you did state. The lemma says that for every regular language L, there is a constant p such that every string of at least p symbols that belongs to L has a non-empty substring of length no greater than p that can be "pumped", always yielding another element of L. The constant p is the (a) "pumping length".
This can be proved by observing that if a language is regular then there is a finite state automaton that accepts it, and taking p to be the number of states in that automaton (details omitted).
That does not imply, however, that the number of states in the smallest FSA that recognizes a given regular language is the smallest possible pumping length for that language. For instance, consider the language consisting of the union of { a^n } and { b^n } for all n. You need a four-state FSA to recognize this language, but its minimum pumping length is 1.
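A small sketch of my own to illustrate that example: a four-state DFA for the union of { a^n } and { b^n } (i.e. a* ∪ b*), together with a check that pumping the very first symbol of a non-empty string keeps it in the language, so a pumping length of 1 already works.

```java
public class PumpingExample {
    // Four-state DFA for a* ∪ b*: START, ONLY_A, ONLY_B, DEAD (all but DEAD accepting).
    enum State { START, ONLY_A, ONLY_B, DEAD }

    static boolean accepts(String w) { // alphabet assumed to be {a, b}
        State s = State.START;
        for (char c : w.toCharArray()) {
            switch (s) {
                case START:  s = (c == 'a') ? State.ONLY_A : State.ONLY_B; break;
                case ONLY_A: s = (c == 'a') ? State.ONLY_A : State.DEAD;   break;
                case ONLY_B: s = (c == 'b') ? State.ONLY_B : State.DEAD;   break;
                default:     s = State.DEAD;
            }
        }
        return s != State.DEAD;
    }

    public static void main(String[] args) {
        // Pumping length 1: for any non-empty w in L, take x = "", y = first symbol, z = rest.
        String w = "aaaa";
        String y = w.substring(0, 1), z = w.substring(1);
        for (int i = 0; i <= 3; i++) {
            String pumped = y.repeat(i) + z;
            System.out.println(pumped + " in L? " + accepts(pumped)); // true for every i
        }
    }
}
```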

Expected chain length after rehashing - Linear Hashing

There is one thing I am confused about regarding the load factor. Some sources say that it is just the number of keys in the hash table divided by the total number of slots, which is the same as the expected chain length for each slot. But that holds only under simple uniform hashing, right?
Suppose hash table T has n elements and we expand T into T1 by redistributing the elements in slot T[0], rehashing them using h'(k) = k mod 2m. The hash function of T1 is h1(k) = k mod 2m if k mod m < 1 (i.e. the old slot 0, which has been split) and k mod m if k mod m >= 1. Many sources say that we "expand and rehash to maintain the load factor" (does this imply the expected chain length is still the same?). Since this is not simple uniform hashing, I think the probability that any key k enters a slot is (1/4 + 1/(2(m-1))).
For a key k (randomly selected), h(k) is first evaluated (there is a 50-50 chance of it being less than 1 or greater than or equal to 1). If it is less than 1, key k has just two ways to go, slot 0 or slot m, hence probability 1/4 (1/2 * 1/2). But if it is greater than or equal to 1, it has m-1 slots it could enter, hence probability 1/2 * 1/(m-1). So the expected chain length would now be n/4 + n/(2(m-1)). Am I on the right track?
The calculation for linear hashing should be the same as for "non-linear" hashing. With a certain initial number of buckets, uniform distribution of hash values results in uniform placement. With enough expansions to double the size of the table, each of those values is randomly split over the larger space via the incremental rehashing, and new values are also distributed over the larger space. Incrementally, each key is equally likely to end up at (its initial bucket position) or (its initial bucket position + the previous table size) as the table expands to that length.
There is a paper here which goes into detail about the chain length calculation under different circumstances (not just the average), specifically for linear hashing.
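For anyone who wants to experiment with this, here is a rough, self-contained simulation sketch of my own (it assumes the textbook split rule h_level(k) = k mod (m * 2^level) and a load-factor threshold of 0.75, neither of which comes from the answer above). It splits one bucket whenever the load factor exceeds the threshold, so the table keeps roughly the same load factor, and hence average chain length, as it grows.

```java
import java.util.*;

public class LinearHashingSketch {
    static final int INITIAL_BUCKETS = 4;   // m
    static final double MAX_LOAD = 0.75;    // split one bucket when the load factor exceeds this

    List<List<Long>> buckets = new ArrayList<>();
    int level = 0;  // current round: h_level(k) = k mod (INITIAL_BUCKETS * 2^level)
    int split = 0;  // next bucket to split; buckets below it already use h_{level+1}
    long size = 0;

    LinearHashingSketch() {
        for (int i = 0; i < INITIAL_BUCKETS; i++) buckets.add(new ArrayList<>());
    }

    int bucketOf(long k) {
        int b = (int) (k % (INITIAL_BUCKETS << level));
        if (b < split) b = (int) (k % (INITIAL_BUCKETS << (level + 1))); // already split
        return b;
    }

    void insert(long k) {
        buckets.get(bucketOf(k)).add(k);
        size++;
        if ((double) size / buckets.size() > MAX_LOAD) splitOne();
    }

    void splitOne() {
        // Redistribute bucket 'split' between itself and one new bucket using the next-level hash.
        buckets.add(new ArrayList<>());
        List<Long> old = buckets.set(split, new ArrayList<>());
        for (long k : old) {
            buckets.get((int) (k % (INITIAL_BUCKETS << (level + 1)))).add(k);
        }
        split++;
        if (split == (INITIAL_BUCKETS << level)) { split = 0; level++; } // round finished
    }

    public static void main(String[] args) {
        LinearHashingSketch t = new LinearHashingSketch();
        Random rnd = new Random(42);
        for (int i = 0; i < 100_000; i++) t.insert(Math.floorMod(rnd.nextLong(), 1_000_000_007L));
        System.out.printf("buckets=%d, average chain length (load factor)=%.3f%n",
                t.buckets.size(), (double) t.size / t.buckets.size());
    }
}
```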