Huffman tree - highest possible frequency that gives perfect tree - encoding

Suppose you have an alphabet of 4 characters: A, B, C, D. What is the highest possible frequency of the most frequent character, given that the Huffman tree is perfect?
We have a theory that it is 2/5 of the total length, but we would like to see a more concrete proof or explanation.

Without loss of generality, we will assume that p(A) <= p(B) <= p(C) <= p(D). The Huffman algorithm will combine A and B into a branch. (Again, without loss of generality if some of the probabilities are equal.) In order for the resulting tree to be perfect (flat), we must then combine C and D into a branch. The final step will be to combine those two branches.
To ensure that we combine C and D into a branch, p(C) and p(D) must both be less than p(A) + p(B); in particular p(D) < p(A) + p(B). Note that if p(C) = p(D) = p(A) + p(B), then the Huffman algorithm has the option to pick any pair in the next step, and two of those choices result in a skewed tree. So p(D) must be strictly less than p(A) + p(B).
The rest is left as an exercise for the reader.
(Your guess is close. It must be strictly less than 2/5, so 2/5 - ϵ, where ϵ is the smallest amount by which the frequency (presumably an integer count) can drop below 2/5 of the total. An example set of probabilities that reaches the maximum is {1/5, 1/5, 1/5+ϵ, 2/5-ϵ}.)
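To make this concrete, here is a small Python sketch (my own addition, not part of the original answer) that runs the Huffman algorithm on integer counts out of 100 and reports every symbol's code length; a perfect tree shows up as all lengths equal to 2. The counts {20, 20, 21, 39} realize the {1/5, 1/5, 1/5+ϵ, 2/5-ϵ} example above, while pushing the most frequent character just above 2/5 skews the tree.

import heapq
import itertools

def huffman_code_lengths(freqs):
    """Run the Huffman algorithm and return each symbol's code length."""
    tiebreak = itertools.count()  # keeps the heap from comparing dicts
    heap = [(f, next(tiebreak), {sym: 0}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

# Counts out of 100 realizing {1/5, 1/5, 1/5+eps, 2/5-eps} with eps = 1/100:
print(huffman_code_lengths({'A': 20, 'B': 20, 'C': 21, 'D': 39}))
# -> {'A': 2, 'B': 2, 'C': 2, 'D': 2}: a perfect tree

# Push the most frequent character just above 2/5 and the tree skews:
print(huffman_code_lengths({'A': 19, 'B': 20, 'C': 20, 'D': 41}))
# -> D gets a length-1 code, A and B length-3 codes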

Online Algorithm approach for alternating subsequence

Consider a sequence A = a1, a2, a3, ..., an of integers. A subsequence B of A is a sequence B = b1, b2, ..., bm which is created from A by removing some elements but keeping the order. Given an integer sequence A, the goal is to compute a longest alternating subsequence B, i.e. a sequence b1, ..., bm such that for all i in {2, 3, ..., m-1}: if b{i-1} < b{i} then b{i} > b{i+1}, and if b{i-1} > b{i} then b{i} < b{i+1}.
Consider an online version of the problem, where the sequence A is given element-by-element and each time, one needs to directly decide whether to include the next element in the subsequence B. Is it possible to achieve a constant competitive ratio (by using a deterministic online algorithm)? Either give an online algorithm which achieves a constant competitive ratio or show that it is not possible to find such an online algorithm.
Assume sequence [9,8,9,8,9,8, .... , 9,8,9,8,2,1,2,9,8,9, ... , 8,9,8,9,8,9]
My Argumentation:
The algorithm must decide immediately whether it inserts an incoming number into the subsequence. If the algorithm now gets the numbers 2, then 1, then 2, it will eventually decide that they are part of the subsequence, and is thus worse than the optimal solution of length n-3 by more than a constant factor.
-> No constant competitive ratio!
Is this a proper argument?
If I understood what you meant, your argument is correct, but the sequence you gave in the example is wrong: for example, the algorithm may choose all the 9's and 8's.
You can alter your argument slightly to make it more accurate, for example consider the sequence
3,4,3,4,3,4, ..., then either 5,6,5,6, ... or 1,2,1,2, ... (chosen adversarially, as explained below)
Explanation:
You start the sequence with 3,4,3,4, ... etc. until the algorithm picks two numbers. If it never does, it is obviously not competitive (it gets 0 or 1 elements out of n).
If the algorithm picked a 3, then a 4, it must next take a number lower than 4. By continuing with 5,6,5,6, ... the algorithm cannot take another number.
If the algorithm chose to take a 4 then a 3, by similar reasoning we can easily see how continuing with 1,2,1,2, ... prevents the algorithm from taking another number.
Thus, in any case, the algorithm cannot take more than 2 numbers for every n, which, as you stated, isn't a constant competitive ratio.
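To see the adversary at work, here is a Python sketch (my own illustration, not from the original exchange). It assumes the online algorithm is exposed as a callable that returns True when it keeps an element, and that it only makes legal alternating picks.

def legal(picks, x):
    """Can x legally extend the alternating subsequence picks?"""
    if not picks:
        return True
    if len(picks) == 1:
        return x != picks[0]
    if picks[-2] < picks[-1]:     # last step went up, next must go down
        return x < picks[-1]
    return x > picks[-1]          # last step went down, next must go up

def adversary(algorithm, n):
    """Feed algorithm(x) -> bool a sequence of n numbers, adaptively.

    Strategy from the answer: alternate 3,4 until two elements are
    accepted, then switch to 5,6,5,6,... (if it took 3 then 4) or
    1,2,1,2,... (if it took 4 then 3), which can never be extended.
    """
    seq, taken = [], []
    for i in range(n):
        if len(taken) < 2:
            x = (3, 4)[i % 2]
        elif taken[0] < taken[1]:
            x = (5, 6)[i % 2]
        else:
            x = (1, 2)[i % 2]
        seq.append(x)
        if algorithm(x):
            taken.append(x)
    return seq, taken

def opt_length(seq):
    """Length of a longest alternating subsequence (offline optimum)."""
    up = down = 1
    for prev, cur in zip(seq, seq[1:]):
        if cur > prev:
            up = down + 1
        elif cur < prev:
            down = up + 1
    return max(up, down)

picks = []                        # a greedy online algorithm's state
def greedy(x):
    """Take every element that legally extends the current picks."""
    if legal(picks, x):
        picks.append(x)
        return True
    return False

seq, taken = adversary(greedy, 50)
print(len(taken), opt_length(seq))   # e.g. 2 vs 48: the gap grows with n

Whatever deterministic algorithm is plugged in, it keeps at most 2 elements while the offline optimum grows linearly in n, so no constant competitive ratio is possible.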

Proof of the pumping lemma for linear context-free languages

Where can I find the proof of the pumping lemma for linear context-free languages?
I am looking for the proof that is specific to linear context-free languages.
I also looked for the formal proof and could not find one. Not sure if the below is a formal proof, but it may give you some idea.
The lemma: For every linear context-free language L there is an n > 0 so that for every w in L with |w| > n we can write w as uvxyz such that |vy| > 0, |uvyz| <= n, and uv^i x y^i z is in L for every i >= 0.
"Proof":
Imagine a parse tree for some long string w in L, with start symbol S. Let's also assume that the tree contains no useless nodes. If w is long enough, there will be at least one non-terminal repeating more than once. Let's call the first repeating non-terminal going down the tree X, its first occurrence (from the top) X[1], and its second occurrence X[2]. Let x be the string in w generated by X[2], vxy the string generated by X[1], and uvxyz the full string w generated by S.

Since the derivation from X[1] to X[2] generates v and y, we could generate a new tree in which we replicate this derivation multiple times before continuing from X[1] downwards. This proves that uv^i x y^i z is in L for every i >= 0. Since our tree contains no useless nodes, the derivation from X[1] to X[2] must generate some terminals, and this proves that |vy| > 0.

L is linear, which means that on every level of the tree we have a single non-terminal symbol. Each node in the tree covers a substring of w whose length is bounded by a linear function of the node's height. The derivation from S to X[2] covers u, v, y and z of w, and the number of tree levels traveled is bounded by (2 * the number of non-terminal symbols + 1). Since the number of levels traveled is bounded and the grammar is linear, this also bounds the yield of the derivation from S to X[2] outside the subtree of X[2], which means |uvyz| <= n for a suitable n > 0.
Note: Keep in mind that we pick X[1], X[2] top-down, in contrast to how the "regular" pumping lemma for context-free grammars is usually proved. In the "regular" pumping lemma there is a bound on the height of X[1] and therefore a bound on |vxy|. In our case there is no bound on the height of X[1]; it can be as high as required by the length of w. There is, however, a bound on the number of tree levels from S to X[2]. This would not mean much if the grammar were not linear, since the yield of the path from S to X[2] would still be bounded only by the height of S (which is unbounded). But in the linear case this yield is bounded, and therefore |uvyz| <= n.
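To make the decomposition concrete, here is a small Python sketch (my illustration, not part of the original answer) for the linear language L = {a^n b^n : n >= 1}, generated by the linear grammar S -> aSb | ab. The split mirrors the proof: X = S, X[1] is the root, X[2] its child, so v and y are single symbols and |uvyz| stays bounded.

def in_L(w):
    # membership test for L = { a^n b^n : n >= 1 }
    half = len(w) // 2
    return (len(w) >= 2 and len(w) % 2 == 0
            and w[:half] == 'a' * half and w[half:] == 'b' * half)

w = 'aaaabbbb'                      # a^4 b^4, assumed longer than n
u, v, x, y, z = '', 'a', w[1:-1], 'b', ''
assert u + v + x + y + z == w       # a valid decomposition with |uvyz| = 2

for i in range(5):
    pumped = u + v * i + x + y * i + z
    print(i, pumped, in_L(pumped))  # in_L is True for every i >= 0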

Is it possible to implement universal hashing for the complete range of integers?

I am reading about universal hashing on integers. The mandatory precondition seems to be that we choose a prime number p greater than any possible key.
I am not clear on this point.
If our set of keys is of type int, then this means that the prime number needs to be of the next bigger data type, e.g. long.
But eventually whatever we get as the hash would need to be down-cast to an int to index the hash table. Doesn't this down-casting somehow affect the quality of the universal hashing (I am referring to the distribution of the keys over the buckets)?
If our set of keys are integers then this means that the prime number needs to be of the next bigger data type, e.g. long.
That is not a problem. Sometimes it is even necessary; otherwise the hash family cannot be universal. See below for more information.
But eventually whatever we get as the hash would need to be down-cast to an int to index the hash table. Doesn't this down-casting affect the quality of the universal hashing (I am referring to the distribution of the keys over the buckets) somehow?
The answer is no. I will try to explain.
Whether p has another data type or not is not important for the hash family to be universal. What is important is that p is greater than or equal to u, the size of the universe (so p is larger than every key in U = {0, ..., u-1}).
A hash family is universal when the collision probability is at most 1/m.
So the idea is to maintain that constraint.
The value of p, in theory, can be as big as a long or more. It just needs to be an integer and prime.
u is the size of the domain/universe (or the number of keys). Given the universe U = {0, ..., u-1}, u denotes the size |U|.
m is the number of bins or buckets
p is a prime which must be greater than or equal to u
the hash family is defined as H = {h(a,b)} with h(a,b)(x) = ((a * x + b) mod p) mod m. Here a and b are integers chosen uniformly at random modulo p, with a != 0; that is, a is from {1, ..., p-1} and b is from {0, ..., p-1}. After the reduction modulo p they may be smaller or larger than m, the number of bins/buckets, but here too the data type (domain of values) does not matter. See Hashing integers on Wikipedia for the notation.
Follow the proof on Wikipedia and you conclude that the collision probability is floor(p/m) * 1/(p-1), where floor means rounding down (truncating the decimals). For p >> m (p considerably bigger than m) this probability tends to 1/m (but this does not mean that the probability gets better the larger p is).
In other words, answering your question: p being of a bigger data type is not a problem here and can even be required. p has to be greater than or equal to u, and a and b have to be randomly chosen integers modulo p, no matter the number of buckets m. With these constraints you can construct a universal hash family.
Maybe a mathematical example could help
Let U be the universe of integers that correspond to unsigned char (in C for example). Then U = {0, ..., 255}
Let p be the next prime greater than or equal to 256. Note that p can be stored in any sufficiently large type (short, int, long, signed or unsigned); the data type does not play a role (in programming, a type mainly denotes a domain of values). Whether 257 is a short, an int or a long doesn't matter for the correctness of the mathematical proof. We could also have chosen a larger p (i.e. a bigger data type); this does not change the proof's correctness.
The next possible prime number would be 257.
We say we have 25 buckets, i.e. m = 25. This means a hash family is universal if the collision probability is at most 1/25 = 0.04.
Put in the values for floor(p/m) * 1/(p-1): floor(257/25) * 1/256 = 10/256 = 0.0390625, which is smaller than 0.04. It is a universal hash family with the chosen parameters.
We could have chosen m = u = 256 buckets. Then the collision probability is floor(257/256) * 1/256 = 1/256 = 0.00390625, which does not exceed 1/m = 1/256. The hash family is still universal.
Let's try with m bigger than p, e.g. m = 300. The collision probability is floor(257/300) * 1/256 = 0, which is smaller than 1/300 ≈ 0.00333. Trivial: we have more buckets than keys. Still universal, no collisions.
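If you want to sanity-check these numbers, here is a small Python sketch (my own addition) with a smaller universe, so that every hash function and every key pair can be enumerated exhaustively; it measures the worst-case collision probability and compares it with floor(p/m)/(p-1) and 1/m.

from itertools import combinations

u, p, m = 16, 17, 4                # a small universe so we can enumerate all

def h(a, b, x):
    return ((a * x + b) % p) % m

funcs = [(a, b) for a in range(1, p) for b in range(p)]   # all hash functions

worst = max(
    sum(h(a, b, x) == h(a, b, y) for a, b in funcs) / len(funcs)
    for x, y in combinations(range(u), 2)
)
print(worst)                       # worst-case empirical collision probability
print((p // m) / (p - 1), 1 / m)   # the bound floor(p/m)/(p-1), and 1/m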
Implementation detail example
We have the following:
x of type int (an element of U)
a, b, p of type long
m we'll see later in the example
Choose p so that it is bigger than the maximum element of U; p is of type long.
Choose a and b (modulo p) randomly. They are of type long, but always < p.
For an x (of type int, from U) calculate ((a*x + b) mod p). a*x is of type long, (a*x + b) is also of type long, and so ((a*x + b) mod p) is also of type long. Note that the result of ((a*x + b) mod p) is < p. Let's denote that result h_a_b(x).
h_a_b(x) is now taken modulo m, which means that at this step it depends on the data type of m whether there will be down-casting or not. However, it does not really matter: h_a_b(x) mod m is < m, so its value fits into m's data type. If it has to be down-cast, there won't be a loss of value. And so you have mapped a key to a bin/bucket.
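Here is the same recipe as a runnable Python sketch (names and values are mine). Python integers are arbitrary precision, so the int/long distinction disappears, but the key point survives: the value modulo m is always < m, so it fits whatever index type the table uses.

import random

U_MAX = 255                        # universe: unsigned char, U = {0, ..., 255}
p = 257                            # next prime >= 256
m = 25                             # number of buckets

a = random.randrange(1, p)         # a from {1, ..., p-1}
b = random.randrange(0, p)         # b from {0, ..., p-1}

def h(x):
    """((a*x + b) mod p) mod m, always in {0, ..., m-1}."""
    assert 0 <= x <= U_MAX
    return ((a * x + b) % p) % m   # in C: long arithmetic, then a safe cast

buckets = [[] for _ in range(m)]
for key in range(U_MAX + 1):
    buckets[h(key)].append(key)    # the index always fits m's data type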

Spearman's rank correlation significance

I'm calculating Spearman's rank correlation in matlab with the following code:
[RHO,PVAL] = corr(x,y,'Type','Spearman');
RHO =
0.7211
PVAL =
4.9473e-04
and then with different variables
[RHO,PVAL] = corr(x2,y2,'Type','Spearman');
RHO =
0.3277
PVAL =
0.0060
How do you categorize these as p < 0.05, p < 0.01, p < 0.001, etc.? Commonly in scientific journals these p-values are represented as in the examples I've shown and not as one number. Would both of these be p < 0.01? When deciding whether a correlation is significant at a specific level, do you always look for the smallest error, i.e. if PVAL = 0.0005, both p > 0.05 and p > 0.001 would be correct here; do we simply write the lowest, i.e. p > 0.001?
As Martin Dinov wrote, this is at least partially a matter of journal policy. But as long as there is no explicit journal convention against it, I would recommend always reporting the actual p-value, in this case in the form p = 4.9·10^-4 and p = 0.006, respectively. You can then proceed to say that the effect you found is statistically significant, usually based on comparison with a previously chosen significance level, typically 0.05, unless you need to correct for multiple comparisons.
The reason is that the commonly used significance levels are purely a matter of convention. Saying only that p is below one conventional threshold withholds valuable information from the reader, which she might use to make up her own mind about the result; and this truncation is not even justified by a relevant saving of print space.
You should also, of course, report the value of the correlation coefficient itself (which in this case doubles as a test statistic and an effect size) as well as the sample size.
At least for the field of psychology, these are official recommendations:
Hypothesis tests. It is hard to imagine a situation in which a dichotomous accept-reject decision is better than reporting an actual p value or, better still, a confidence interval.
…
Effect sizes. Always present effect sizes for primary outcomes. If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standardized measure (r or d).
L. Wilkinson and the Task Force on Statistical Inference, "Statistical Methods in Psychology Journals. Guidelines and Explanations"
You mean PVAL is < 0.05 and also < 0.001, not >. In general, you do want to show that it is smaller than the smallest significance level (alpha) threshold that you can. So yes, for the second example it is best to say that the p-value is < 0.001. Depending on the journal convention, it may be preferable to put in the actual p-value (so, for the first example, 4.9473e-04) or just that it is < some good alpha (0.001 for the first case).
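For what it's worth, here is a Python sketch of the same workflow (scipy.stats.spearmanr is the analogue of MATLAB's corr with 'Type','Spearman'; the data and the reporting helper are my own illustration, following the convention above: print the actual p-value, and fall back to a '<' threshold only below some floor).

from scipy import stats
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 0.8 * x + rng.normal(size=20)     # made-up correlated data

rho, pval = stats.spearmanr(x, y)     # analogue of corr(x, y, 'Type', 'Spearman')

def report(p, floor=1e-3):
    """Report the actual p-value; use a '<' threshold only below floor."""
    return f"p = {p:.2g}" if p >= floor else f"p < {floor:g}"

print(f"rho = {rho:.2f}, {report(pval)}")
print(report(4.9473e-04))   # -> p < 0.001
print(report(0.0060))       # -> p = 0.006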

Minimizing objective function by changing a variable - in Matlab?

I have a 101x82 size matrix called A. Using this variable matrix, I compute two other variables called:
1) B, a 1x1 scalar, and
2) C, a 50x6 matrix.
I compare 1) and 2) with their analogous variables 3) and 4), whose values are fixed:
3) D, a 1x1 scalar, and
4) E, a 50x6 matrix.
Now, I want to perturb/change the values of A matrix, such that:
1) ~ 3), i.e. B is nearly equal to D , and
2) ~ 4), i.e. C is nearly equal to E
Note that on perturbing A, B and C will change, but not D and E.
Any ideas how to do this? Thanks!
I can't run your code as it demands loading data (which I don't have), and it's not immediately obvious how to calculate B or C.
Fortunately I may be able to answer your problem. You're describing an optimization problem, and the solution would be to use fminsearch (or something of that variety).
What you do is define a function that returns a single scalar for fminsearch to minimize (fminsearch expects a scalar objective, not a vector), e.g.:
y1 = (B - D)^weight1;
y2 = norm(C - E, weight2);
y = y1 + y2;
with the weights controlling how strongly you penalize deviations (weight1 = weight2 = 2 is usually sufficient).
Your function variable would be A.
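In case a sketch helps to see the shape of this, here is a Python version (scipy.optimize.minimize with the Nelder-Mead method is the closest analogue of fminsearch). compute_B and compute_C are hypothetical stand-ins for your actual computations, and the array sizes are shrunk so the example runs quickly.

import numpy as np
from scipy.optimize import minimize

def compute_B(A):                  # hypothetical stand-in: scalar from A
    return A.sum()

def compute_C(A):                  # hypothetical stand-in: matrix from A
    return A ** 2

D = 5.0                            # fixed scalar target
E = np.full((4, 3), 2.0)           # fixed matrix target

def cost(a_flat):
    A = a_flat.reshape(4, 3)
    # one scalar combining both mismatches, as fminsearch requires
    return (compute_B(A) - D) ** 2 + np.linalg.norm(compute_C(A) - E) ** 2

res = minimize(cost, np.zeros(12), method='Nelder-Mead')   # fminsearch analogue
print(res.fun, compute_B(res.x.reshape(4, 3)))

For an unknown as large as 101x82, a derivative-based method (e.g. method='L-BFGS-B', or the gradient-descent scheme in the next answer) will scale far better than Nelder-Mead.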
From my understanding you have a few functions.
fb(A) = B
fc(A) = C
Do you know the functions listed above, that is do you know the mappings from A to each of these?
If you want to try to optimize, so that B is close to D, you need to pick:
What close means. You can look at some vector norm for the B and D case, like minimizing ||B-D||^2. The standard sum of the squares of the elements of this difference will probably do the trick and is computationally nice.
How to optimize. This depends a lot on your functions, whether you want local or global minima, etc.
So basically, we've now boiled the problem down to minimizing:
Cost = ||fb(A) - D||^2 (an analogous term ||fc(A) - E||^2 can be added for the second condition)
One thing you can certainly do is compute the gradient of this cost function with respect to the individual elements of A, and then perform minimization steps with the forward Euler method and a suitable "time step". This might not be fast, but with a small enough time step and well-behaved enough functions it will at least get you to a local minimum.
Computing the gradient of this (D is fixed, so its gradient vanishes; since B is a scalar the norm is just a square):
grad_A(cost) = 2*(fb(A) - D)*grad_A(fb)(A)
Where grad_A means gradient with respect to A, and grad_A(fb)(A) means the gradient with respect to A of the function fb, evaluated at A.
Computing grad_A(fb)(A) depends on the form of fb, but here are some pages that have "matrix calculus" identities and explanations.
Matrix calculus identities
Matrix calculus explanation
Then you simply perform gradient descent on A by doing forward Euler updates:
A_next = A_prev - timestep * grad_A(cost)
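A minimal numerical version of this update, as a Python/numpy sketch (fb and the target D are made-up placeholders, and the gradient is approximated by central finite differences so the sketch works for any smooth fb you substitute):

import numpy as np

def fb(A):                         # hypothetical placeholder function
    return A.sum()

D = 5.0                            # fixed target

def cost(A):
    return (fb(A) - D) ** 2

def num_grad(f, A, eps=1e-6):
    """Central finite-difference gradient of f w.r.t. each element of A."""
    g = np.zeros_like(A)
    for idx in np.ndindex(A.shape):
        A_plus, A_minus = A.copy(), A.copy()
        A_plus[idx] += eps
        A_minus[idx] -= eps
        g[idx] = (f(A_plus) - f(A_minus)) / (2 * eps)
    return g

A = np.zeros((4, 3))               # shrunk from 101x82 so the sketch is quick
timestep = 0.01
for _ in range(200):               # forward Euler on the gradient flow
    A = A - timestep * num_grad(cost, A)

print(cost(A))                     # close to 0: fb(A) has been pulled to D

With an analytic grad_A(fb) from the matrix-calculus identities above, you would replace num_grad by the exact expression, which is much faster.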