Is there a Hamiltonian circuit in the graph in the link below? - discrete-mathematics

Like the question says in the image in the link.
Is there a Hamiltonian circuit in the graph above?
I found a Hamiltonian path, for example:
c - b - a - j - i - h - f - e - d - g
but no Hamiltonian circuit.
I can't add the picture here since Stack Overflow didn't let me.

There cannot be a Hamiltonian cycle.
Proof:
In a Hamiltonian cycle, every vertex must be visited and no edge can be used twice. Thus, if a vertex has degree two, both of its edges must be used in any such cycle.
a, c, and g have degree two, so it follows that if there is a Hamiltonian cycle it must contain the path j - a - b - c - d - g - h. However, this path does not contain e, yet it already uses both edges at two of e's neighbors, b and d, so the cycle cannot enter e through either of them. That leaves e with only one usable neighbor, f, while a cycle needs two edges at e. Hence there is no way to extend the path to a Hamiltonian cycle containing e, and there can be no Hamiltonian cycle in the graph.
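Since the picture is unavailable, here is a brute-force checker you can run against the real edge list. It is a minimal Python sketch; the edge list below is only a hypothetical reconstruction from the Hamiltonian path in the question and the degree facts quoted above, not the actual figure.

from itertools import permutations

def has_hamiltonian_cycle(vertices, edges):
    # Build an adjacency map, then try every ordering of the vertices and
    # test whether consecutive vertices (plus the wrap-around pair) are adjacent.
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = vertices[0]  # fixing one start vertex avoids re-checking rotations
    for perm in permutations(vertices[1:]):
        cycle = (start,) + perm
        if all(cycle[i + 1] in adj[cycle[i]] for i in range(len(cycle) - 1)) \
                and start in adj[cycle[-1]]:
            return True
    return False

# Hypothetical edge list (the original image is gone):
vertices = list("abcdefghij")
edges = [("a", "b"), ("a", "j"), ("b", "c"), ("b", "e"), ("c", "d"), ("d", "e"),
         ("d", "g"), ("e", "f"), ("f", "h"), ("g", "h"), ("h", "i"), ("i", "j")]
print(has_hamiltonian_cycle(vertices, edges))  # False for this edge list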

Related

Huffman tree - highest possible frequency that gives perfect tree

Suppose you have an alphabet of 4 characters: A, B, C, D. What is the highest possible frequency of the most frequent character, given that the Huffman tree is perfect?
We have a theory that it is 2/5 of the total length, but we would like to see a more concrete proof or explanation.
Without loss of generality, we will assume that p(A) <= p(B) <= p(C) <= p(D). The Huffman algorithm will combine A and B into a branch. (Again, without loss of generality if some of the probabilities are equal.) In order for the resulting tree to be flat, we must then combine C and D into a branch. Then the final step will be to combine those two branches.
To ensure that we combine C and D into a branch, p(C) and p(D) must both be less than p(A) + p(B). So p(D) < p(A) + p(B). Note that if p(C) = p(D) = p(A) + p(B), then the Huffman algorithm has the option to pick any pair in the next step, and two of those choices result in a skewed tree. So p(D) must be strictly less than p(A) + p(B).
The rest is left as an exercise for the reader.
(Your guess is close. It must be less than 2/5. So 2/5 - ϵ, where ϵ is the smallest amount that allows the probabilities, presumably computed from integer frequencies, to stay below 2/5. An example set of probabilities that reaches the maximum is {1/5, 1/5, 1/5 + ϵ, 2/5 - ϵ}.)
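One way to sanity-check the bound is to run the Huffman algorithm on candidate weight sets and inspect the resulting code lengths. A minimal Python sketch follows; the integer weights are made up to match the cases discussed above.

import heapq

def huffman_depths(weights):
    # Each heap entry is (subtree weight, tie-breaker, [(symbol, depth), ...]).
    heap = [(w, i, [(i, 0)]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    counter = len(weights)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = [(s, d + 1) for s, d in left + right]  # everything one level deeper
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return dict(heap[0][2])  # symbol -> code length

# p(D) = 3/6 = 1/2 > 2/5: the tree comes out skewed (depths 1, 2, 3, 3).
print(huffman_depths([1, 1, 1, 3]))
# p(D) = 39/100 < 2/5, with p(C) slightly above 1/5: all depths are 2 (perfect).
print(huffman_depths([20, 20, 21, 39]))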

Proof of the pumping lemma for linear context-free languages

Where can I find the proof of the pumping lemma for linear context-free languages?
I am looking for the proof that is specific to linear context-free languages.
I also looked for the formal proof and could not find one. I'm not sure the following is a formal proof, but it may give you some idea.
The lemma: for every linear context-free language L there is an n > 0 such that every w in L with |w| > n can be written as w = uvxyz with |vy| > 0, |uvyz| <= n, and uv^i x y^i z in L for every i >= 0.
"Proof":
Imagine a parse tree for some long string w in L with start symbol S, and assume the tree contains no useless nodes. If w is long enough, at least one non-terminal will repeat. Call the first repeating non-terminal going down the tree X, its first occurrence (from the top) X[1], and its second occurrence X[2]. Let x be the substring of w generated by X[2], vxy the substring generated by X[1], and uvxyz the full string w generated by S.
Since the derivation step from X[1] to X[2] generates v and y, we could build a new tree in which this step is replicated any number of times before descending from X[1]. This proves that uv^i x y^i z is in L for every i >= 0. Since our tree contains no useless nodes, the derivation from X[1] to X[2] must generate some terminals, and this proves |vy| > 0.
L is linear, which means every level of the tree holds a single non-terminal symbol. Each node in the tree therefore covers a substring of w whose length is bounded by a linear function of the node's height. The derivation from S to X[2] covers u, v, y, and z, and the number of tree levels it spans is bounded by (2 * the number of non-terminal symbols + 1), because the first repeated non-terminal must appear within that many levels. Since the number of levels traveled is bounded and the grammar is linear, the yield of the derivation from S to X[2] is also bounded, which gives |uvyz| <= n for some n >= 0.
Note: keep in mind that we pick X[1] and X[2] top-down, in contrast to how the "regular" pumping lemma for general context-free grammars is proved. In the "regular" pumping lemma there is a bound on the height of X[1] and therefore a bound on |vxy|. In our case there is no bound on the height of X[1]; it can be as high as the length of w requires. There is, however, a bound on the number of tree levels between S and X[2]. This would not mean much if the grammar were not linear, since the yield of the derivation from S to X[2] would then be bounded only by the height of the tree, which is unbounded. But in the linear case this yield is bounded, and therefore |uvyz| <= n.
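As a small illustration (mine, not part of the proof above): for the linear language L = { a^n b^n : n >= 0 }, generated by the linear grammar S -> aSb | ε, a valid decomposition puts v and y at the two ends of w, so |uvyz| stays small while the middle part x absorbs the rest. A quick Python check:

def in_L(w):
    # Membership test for L = { a^n b^n : n >= 0 }.
    half = len(w) // 2
    return w == "a" * half + "b" * half

p = 5
w = "a" * p + "b" * p
# Decompose w = uvxyz with the pumpable parts v and y at the two ends:
u, v, x, y, z = "", "a", "a" * (p - 1) + "b" * (p - 1), "b", ""
assert u + v + x + y + z == w
assert len(v + y) > 0           # |vy| > 0
assert len(u + v + y + z) <= p  # |uvyz| is small, as the linear lemma requires
for i in range(6):
    assert in_L(u + v * i + x + y * i + z)  # pumping both ends stays in L
print("all pumped strings are in L")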

Basic Pumping Lemma proof doesn't make sense

Proving that a^n b^n, n >= 0, is non-regular.
Using the string a^p b^p.
Every example I've seen claims that y can contain a's, b's, or both. But I don't see how y can contain anything other than a's, because if y contained any b's, the length of xy would have to be greater than p, which is not allowed.
Conversely, for examples such as:
the language of strings www with w in {a, b}*, where the string used is a^p b a^p b a^p b. In the proofs I've seen, it is claimed that y cannot contain anything other than a's, for the reason I stated above. Why is this different?
Also throwing in another question:
Describe the error in the following "proof" that 0* 1* is not a regular language. (An error must exist because 0* 1* is regular.) The proof is by contradiction. Assume that 0* 1* is regular. Let p be the pumping length for 0* 1* given by the pumping lemma. Choose s to be the string 0^p 1^p. You know that s is a member of 0* 1*, but 0^p 1^p cannot be pumped. Thus you have a contradiction. So 0* 1* is not regular.
I can't find any problem with this proof. I only know that 0*1* is a regular language because I can construct a DFA.
The pumping lemma states that for a regular language L:
for every string s in L of length at least p there exists a subdivision s = xyz such that:
For all i >= 0, x y^i z is in L;
|y| > 0; and
|xy| <= p.
Now the claim that y can contain a's, b's, or both originates from the first item alone: if y contained both a's and b's, then pumping with i = 2 would produce a string of the form aa...abb...baa...abb...b, which is not in L. That is all the statement wants to say.
The third item then indeed makes it obvious that y can contain only a's. In other words, what the textbooks say is a conclusion derived from the first item, which the third item narrows down.
Finally, combining 1., 2. and 3. produces the contradiction: by 2., y must contain at least one character, and by 3. it can contain only a's. Say y contains k a's. If we "pump" this with i = 2, we generate the string
s' = x y^2 z = a^(p+k) b^p
We know, however, that s' is not in L, although by 1. it should be, so we reach an inconsistency.
You can thus only make the proof work by combining the three items. It is not enough to know that y consists only of a's: that alone does not produce a contradiction. The contradiction arises because no subdivision satisfies all three constraints simultaneously.
About your second question: in that case L looks different. You can't reuse the proof for a^n b^n because 0* 1* is perfectly happy if the string contains more 0's. In other words, you can't find a contradiction; the last step of the proof fails. As long as y contains only one type of character, regardless of its length, it can satisfy all three constraints.
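To make both points concrete, here is a small Python check (my own illustration, not from the answer above). It enumerates every subdivision of a^p b^p that the lemma allows and shows that each one fails at i = 2, then exhibits a subdivision of 0^p 1^p that pumps happily inside 0* 1*.

def in_anbn(w):
    # L = { a^n b^n }
    half = len(w) // 2
    return w == "a" * half + "b" * half

p = 7
s = "a" * p + "b" * p
# |xy| <= p forces x and y to lie inside the leading block of a's.
for xy_len in range(1, p + 1):
    for x_len in range(xy_len):  # so |y| = xy_len - x_len >= 1
        x, y, z = s[:x_len], s[x_len:xy_len], s[xy_len:]
        assert not in_anbn(x + y * 2 + z)  # every allowed split fails at i = 2
print("a^p b^p cannot be pumped: no valid subdivision survives")

def in_zeros_ones(w):
    # 0* 1*: all 0's must precede all 1's.
    return w == "0" * w.count("0") + "1" * w.count("1")

s2 = "0" * p + "1" * p
x, y, z = "", "0", s2[1:]  # |xy| = 1 <= p and |y| = 1 > 0
assert all(in_zeros_ones(x + y * i + z) for i in range(6))
print("0^p 1^p pumps just fine inside 0* 1*")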

How to allocate the memory for b in LAPACK sgelsd routine

According to the official user guide, sgelsd is used to solve the least-squares problem
min_x || b - Ax ||_2
and allows the matrix A to be rectangular and rank-deficient. According to the interface description in the sgelsd source code, b is an input-output parameter: when sgelsd finishes, b stores the solution. So b occupies m*sizeof(float) bytes, while the solution x needs n*sizeof(float) bytes (assuming A is an m*n matrix and b is an m*1 vector).
However, when n > m, the memory of b is too small to store the solution x. How do I deal with this situation? I could not work it out from the comments in the sgelsd source code. Can I just allocate n*sizeof(float) bytes for b and use the first m*sizeof(float) bytes to store the b vector?
Thanks.
This example from Intel MKL has the answer: B is allocated as LDB*NRHS (with LDB = max(M,N)) and zero-padded. Note that the input B is not necessarily a single vector; SGELSD can handle multiple least-squares problems at the same time (hence NRHS).
From the Lapack docs for SGELSD:
[in,out] B
B is REAL array, dimension (LDB,NRHS)
On entry, the M-by-NRHS right hand side matrix B.
On exit, B is overwritten by the N-by-NRHS solution
matrix X. If m >= n and RANK = n, the residual
sum-of-squares for the solution in the i-th column is given
by the sum of squares of elements n+1:m in that column.
[in] LDB
LDB is INTEGER
The leading dimension of the array B. LDB >= max(1,max(M,N)).
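If you call LAPACK from Python rather than Fortran or C, SciPy's high-level wrapper takes care of the max(M,N) sizing for you. A minimal sketch with made-up data (lapack_driver='gelsd' selects the ?GELSD routine underneath):

import numpy as np
from scipy.linalg import lstsq

# Underdetermined case: m = 3 equations, n = 5 unknowns, so the solution x
# has more entries than the right-hand side b.
m, n = 3, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n)).astype(np.float32)
b = rng.standard_normal(m).astype(np.float32)

x, residues, rank, sv = lstsq(A, b, lapack_driver='gelsd')
print(x.shape)  # (5,) -- the solution occupies n entries, not m

This mirrors the allocation rule above: in the raw interface you would hand SGELSD a B array with LDB = max(M,N) rows (here 5), with the actual right-hand side in the first M rows (the MKL example zero-pads the rest).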

Minimizing objective function by changing a variable - in Matlab?

I have a 101x82 matrix called A. Using this matrix, I compute two other variables:
1) B, a 1x1 scalar, and
2) C, a 50x6 matrix.
I compare 1) and 2) with their analogous variables 3) and 4), whose values are fixed:
3) D, a 1x1 scalar, and
4) E, a 50x6 matrix.
Now, I want to perturb/change the values of A matrix, such that:
1) ~ 3), i.e. B is nearly equal to D , and
2) ~ 4), i.e. C is nearly equal to E
Note that on perturbing A, B and C will change, but not D and E.
Any ideas how to do this? Thanks!
I can't run your code, as it demands loading data (which I don't have), and it's not immediately obvious how to calculate B or C.
Fortunately, I may be able to answer your question anyway. You're describing an optimization problem, and the solution would be to use fminsearch (or something of that variety).
What you do is define a function that returns a single scalar, since fminsearch minimizes a scalar, not a vector. Combine the two mismatches:
y1 = (B - D)^weight1;
y2 = norm(C - E, weight2);
y = y1 + y2;
with the weights controlling how strongly deviations are penalized (weight = 2, i.e. squared error, is usually sufficient).
Your function variable would be A.
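For reference, here is a minimal sketch of the same approach in Python (the question is about MATLAB, but scipy.optimize.minimize with method='Nelder-Mead' implements the same simplex algorithm as fminsearch). compute_B, compute_C, D, and E are made-up placeholders, since the actual mappings from A are not shown in the question.

import numpy as np
from scipy.optimize import minimize

def compute_B(A):
    return A.sum()            # placeholder for the real A -> B computation

def compute_C(A):
    return A[:50, :6] ** 2    # placeholder for the real A -> C computation

D = 3.0                       # fixed targets (made-up values)
E = np.ones((50, 6))

def objective(a_flat):
    A = a_flat.reshape(101, 82)
    # One scalar combining both mismatches, as fminsearch-style solvers require.
    return (compute_B(A) - D) ** 2 + np.linalg.norm(compute_C(A) - E) ** 2

result = minimize(objective, np.zeros(101 * 82), method='Nelder-Mead',
                  options={'maxiter': 1000})
A_opt = result.x.reshape(101, 82)

Note, though, that simplex methods scale poorly to the 101*82 = 8282 unknowns here; for a problem of this size a gradient-based method (as in the answer below) is more realistic.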
As I understand it, you have a few functions:
fb(A) = B
fc(A) = C
Do you know the functions listed above? That is, do you know the mappings from A to each of these?
If you want to optimize so that B is close to D, you need to pick:
What "close" means. You can look at some vector norm of the difference between B and D, like minimizing ||B - D||^2. The standard sum of squares of the elements of this difference will probably do the trick and is computationally nice.
How to optimize. This depends a lot on your functions, whether you want local or global minima, etc.
So basically, now we've boiled the problem down to minimizing:
cost = ||fb(A) - D||^2
(D is fixed, so it enters the cost as a constant; the term for C and E is analogous.)
One thing you can certainly do is compute the gradient of this cost function with respect to the individual elements of A, and then perform minimization steps with the forward Euler method and a suitable "time step". This might not be fast, but with a small enough time step and well-behaved enough functions it will at least reach a local minimum.
Computing the gradient of this (treating B as the 1x1 scalar it is, and remembering that D is constant, so its gradient vanishes):
grad_A(cost) = 2*(fb(A) - D)*grad_A(fb)(A)
where grad_A means the gradient with respect to A, and grad_A(fb)(A) means the gradient with respect to A of the function fb, evaluated at A.
Computing grad_A(fb)(A) depends on the form of fb, but here are some pages that have "matrix calculus" identities and explanations.
Matrix calculus identities
Matrix calculus explanation
Then you simply perform gradient descent on A by doing forward Euler updates:
A_next = A_prev - timestep * grad_A(cost)
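As a concrete illustration of these updates, here is a minimal Python sketch (the question is in MATLAB, but the idea carries over) that approximates grad_A(cost) by central finite differences and performs the forward Euler descent. It reuses the made-up placeholders compute_B, compute_C, D, and E from the sketch above.

import numpy as np

def cost(A):
    return (compute_B(A) - D) ** 2 + np.linalg.norm(compute_C(A) - E) ** 2

def numerical_grad(f, A, eps=1e-4):
    # Central-difference gradient of f with respect to every entry of A.
    # Slow (two cost evaluations per entry), but fine as an illustration.
    g = np.zeros_like(A)
    for idx in np.ndindex(A.shape):
        old = A[idx]
        A[idx] = old + eps
        f_plus = f(A)
        A[idx] = old - eps
        f_minus = f(A)
        A[idx] = old
        g[idx] = (f_plus - f_minus) / (2 * eps)
    return g

A = np.zeros((101, 82))
timestep = 1e-3
for step in range(100):
    A = A - timestep * numerical_grad(cost, A)  # forward Euler update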