If you have a binary search tree with ten nodes, storing the integers 0 through 9, how do you decide whether a given sequence cannot represent a postorder traversal of the tree? I understand that the root has to be the last element in the sequence, but I could not find any pattern beyond that. Pseudocode would be great too! (It's not homework, I'm practicing for interviews.)
As you said, you know what the root is, so you know the range of values in each subtree. If the sequence, minus the root, doesn't split into two contiguous parts, one entirely less than the root followed by one entirely greater than the root, then it isn't valid. If it does, you recursively check the two sub-traversals. If everything checks out, the sequence is valid.
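Here is a minimal sketch of that recursion in Scala (the function name isValidPostorder is made up for illustration; it assumes distinct values, as in the 0 through 9 example):

```scala
// Returns true if `seq` could be a postorder traversal of some BST with distinct values.
// The last element is the root; everything before it must split into a prefix of
// smaller values followed by a suffix of larger values, and each part must recurse.
def isValidPostorder(seq: Seq[Int]): Boolean = {
  if (seq.length <= 1) true
  else {
    val root = seq.last
    val body = seq.dropRight(1)
    val (left, right) = body.span(_ < root)   // split at the first value >= root
    right.forall(_ > root) &&                 // everything after the split must be greater
      isValidPostorder(left) &&
      isValidPostorder(right)
  }
}
```

For example, isValidPostorder(Seq(0, 2, 1, 9, 3)) returns true, while isValidPostorder(Seq(2, 0, 1, 9, 3)) returns false because a value smaller than the root appears after a larger one.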
I have two CONTROLs (buttons specifically) that when activated act as one bit each.
So it basically means the highest number I can produce is 2 by having both buttons activated at the same time. EDIT: Okay what I meant to say was that the highest output I'm going to be able to produce is two because I only have 2 buttons, each representing a 1. So 1+1=2.
However, this is only understood logically, because the bits still have to be converted to a numerical (decimal) format. I could use a 'Boolean to 0,1' converter directly to get the values, but I'm instructed to use a case structure to complete this.
Right now I'm completely perplexed, because a case structure needs exactly ONE case selector but I have TWO buttons. Secondly, this problem seems way too SIMPLE to require a case structure, which makes it harder to justify using the more complex method.
So it basically means the highest number I can produce is 2 by having both buttons activated at the same time.
A 2-bit number can have four values, 0 to 3, hmm?
In general, if the two booleans are the bits of the number, or the number can somehow be calculated from the booleans, just calculate it.
But if the number can take predefined values which depend on the booleans but cannot be calculated from them, you need some other kind of case distinction. Maybe whoever instructed you had this in mind.
You can make a case structure for the first boolean and, in each case, insert a second case structure for the second boolean. This is good when more complex code and logic depend on the booleans, because you can concentrate on one combination of values at a time. For simple cases it lacks overview, and when you add a third boolean, it's a lot of work.
Calculate an interim value and connect it to a single case structure. Now there is only one case structure, but you have no overview over all cases. Note I've changed the radix of the case structure to boolean, so you can see the bits in the selector. (A textual sketch of this approach follows after this list.)
Use a simple array and index into it to take a value
Create a lookup table with predefined conditions and values
(Note that the first two solutions force you to implement each case, while the last two do not: what if your arrays only have three elements?)
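Since the original is a LabVIEW block diagram, there is no direct code to show, but here is a rough textual analogue in Scala (names made up) of the "interim value feeding a single case distinction" idea from the second option above:

```scala
// Rough analogue (Scala, not LabVIEW) of combining both booleans into one interim
// value and driving a single case distinction with it. The value assigned to each
// combination is up to you: the sum of the bits, a binary number, or any predefined value.
def buttonsToNumber(b1: Boolean, b0: Boolean): Int =
  (b1, b0) match {
    case (false, false) => 0
    case (false, true)  => 1
    case (true,  false) => 1   // or 2, if the buttons are the bits of a binary number
    case (true,  true)  => 2   // or 3, for the binary interpretation
  }
```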
Infinite Adder
Write a program that can add 2 integers of ANY LENGTH (limited only by computer memory). Store the 2 integers as a linked list of digits (each node is a single digit from 0 – 9). Read each number from a comma-delimited file of digits in reverse order (for example, a file with “2,3,7,0,1” represents the number 10732). One file should be called “num1.txt” and the other file should be called “num2.txt”. The Hard Part: When you print out the answer, it should be in “normal” order.
It may be wise to write a doubly-linked list in order to make things easier.
Help would be very much appreciated.
I can tell you what I need to do, but I don't know how to write it code-wise.
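A hedged sketch of the core idea in Scala (file names taken from the prompt; Scala's built-in immutable List stands in for a hand-rolled linked list): because the digits are stored least-significant first, you can add them with a running carry exactly like grade-school addition, and reverse the result for printing in "normal" order.

```scala
import scala.io.Source

object InfiniteAdder {
  // "2,3,7,0,1" becomes List(2, 3, 7, 0, 1): least-significant digit first.
  def readDigits(path: String): List[Int] =
    Source.fromFile(path).mkString.trim.split(",").map(_.trim.toInt).toList

  // Add two least-significant-first digit lists with a running carry.
  def addDigits(a: List[Int], b: List[Int], carry: Int = 0): List[Int] =
    (a, b) match {
      case (Nil, Nil)         => if (carry == 0) Nil else List(carry)
      case (x :: xs, Nil)     => ((x + carry) % 10) :: addDigits(xs, Nil, (x + carry) / 10)
      case (Nil, y :: ys)     => ((y + carry) % 10) :: addDigits(Nil, ys, (y + carry) / 10)
      case (x :: xs, y :: ys) =>
        val s = x + y + carry
        (s % 10) :: addDigits(xs, ys, s / 10)
    }

  def main(args: Array[String]): Unit = {
    val sum = addDigits(readDigits("num1.txt"), readDigits("num2.txt"))
    println(sum.reverse.mkString)  // reverse so the most significant digit prints first
  }
}
```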
When you have a Merkle tree, what is the minimal number of hashes needed to verify a change to one leaf node?
Am I correct in my understanding that, at first, only the top hash (the Merkle tree root or hash of the Merkle tree root) is needed? And then once a leaf is modified, you need to obtain the hashes of each row "visited" while descending to the leaf node that got modified?
So if a root has, say, ten children and one grandchild that is modified, and I want to verify that particular grandchild, I need to obtain the new Merkle root hash, the hashes of the ten children, and the hashes of the children of the grandchild's parent.
So at every modification you always need to obtain, at the very least, all the hashes from the first row? (Otherwise how do you reconstruct and verify the Merkle root hash?)
In general, Merkle trees have not been designed to indicate which hash value is actually incorrect. Instead, they make it possible to compute an efficient hash over large data structures. The hash of each leaf node can be calculated separately (and, of course, each branch as well, although those are just hashes of hashes).
If you want to determine which node is invalid, you should keep the entire Merkle tree. If you have another party doing the calculations, you can indeed descend into a branch of the tree to find the altered leaf node.
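To make the "how many hashes" part concrete: in a binary Merkle tree, verifying a single leaf against a known root only needs the sibling hash at each level along the path (roughly log2(n) hashes), not every hash in a row. A hedged Scala sketch, assuming SHA-256 and a simple concatenate-then-hash rule (both are assumptions for illustration, not what any particular implementation does):

```scala
import java.security.MessageDigest

// Verifying one leaf against a known root: fold the leaf hash up the tree,
// combining it with the supplied sibling hash at each level.
object MerkleProof {
  def sha256(bytes: Array[Byte]): Array[Byte] =
    MessageDigest.getInstance("SHA-256").digest(bytes)

  // Each proof element is (siblingHash, siblingIsOnLeft).
  def verify(leaf: Array[Byte],
             proof: Seq[(Array[Byte], Boolean)],
             root: Array[Byte]): Boolean = {
    val computed = proof.foldLeft(sha256(leaf)) {
      case (acc, (sibling, siblingIsLeft)) =>
        if (siblingIsLeft) sha256(sibling ++ acc) else sha256(acc ++ sibling)
    }
    computed.sameElements(root)
  }
}
```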
Could somebody explain to me how the Merkle tree implementation works in riak-core, please?
https://github.com/basho/riak_core/blob/develop/src/merkerl.erl
I don't understand what the offset is, for example.
Thanks!
The tree is both a K/V lookup tree and a Merkle tree in one, more or less. The tree is defined by looking at a 160-bit SHA-1 hash. The 160 bits give 20 bytes. At the first level of the tree, we store up to 256 subtrees according to the 0th byte of the hash. At the next level, it is the 1st byte, then the 2nd, and so on.
This is called a digital tree scheme, where the digits in the hash encode the path to take through the tree. This allows us to replace data in the tree. Alternatively, look up the concept of a trie. At the same time, we sign each node's children with SHA-1 to track changes in the given subtree. When running the diff, we can thus ignore subtrees with the same signature, as they must be equivalent by construction.
The value offset encodes how far into the 160-bit key we currently are. The offset_key/1 function offsets to the right byte of the key to look at.
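This isn't the actual merkerl.erl code, just a small Scala sketch of the byte-indexing idea described above (the descend name is made up): each byte of the 20-byte SHA-1 key selects one of up to 256 children, and the offset is simply which byte we are currently consuming.

```scala
// One step of the digital-tree descent: the byte at `offset` in the SHA-1 key
// is the child index (0..255) to follow at this level of the tree.
def descend(key: Array[Byte], offset: Int): Int = {
  require(key.length == 20, "SHA-1 keys are 20 bytes")
  key(offset) & 0xff
}
```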
Looking at this question, where the questioner is interested in the first and last instances of some element in a List, it seems a more efficient solution would be to use a DoubleLinkedList that could search backwards from the end of the list. However, there is only one implementation in the collections API, and it's mutable.
Why is there no immutable version?
Because you would have to copy the whole list each time you want to make a change. With a normal linked list, you can at least prepend to the list without having to copy everything. And if you do want to copy everything on every change, you don't need a linked list for that. You can just use an immutable array.
There are many impediments to such a structure, but one is very pressing: a doubly linked list cannot be persistent.
The logic behind this is pretty simple: from any node on the list, you can reach any other node. So, if I added an element X to this list DL, and tried to use a part of DL, I'd face this contradiction: from the node pointing to X one can reach every element in part(DL), but, by the properties of the doubly linked list, that means from any element of part(DL) I can reach the node pointing to X. Since part(DL) is supposed to be immutable and part of DL, and since DL did not include the node pointing to X, that just cannot be.
Non-persistent immutable data structures might have some uses, but they are generally bad for most operations, since they need to be recreated whenever a derivative is produced.
Now, there's the minor matter of creating mutually referencing strict objects, but this is surmountable. One can use by-name parameters and lazy vals, or one can do like Scala's List: actually create a mutable collection, and then "freeze" it into an immutable state (see ListBuffer and its toList method).
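A minimal sketch of the lazy-val trick mentioned above (all names made up): two nodes that end up referring to each other, with the laziness hiding the mutation that makes this possible.

```scala
object CircularDemo {
  // By-name constructor parameters defer evaluation; lazy vals cache the result.
  class Node(val value: Int, next: => Option[Node], prev: => Option[Node]) {
    lazy val nextNode: Option[Node] = next
    lazy val prevNode: Option[Node] = prev
  }

  lazy val a: Node = new Node(1, next = Some(b), prev = None)
  lazy val b: Node = new Node(2, next = None, prev = Some(a))
}

// CircularDemo.a.nextNode.get.prevNode.get.value == 1
```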
Because it is logically impossible to create a mutually (circular) referential data-structure with strict immutability.
You cannot create two nodes that point to each other, simply because of existential ordering: at least one of the nodes will not yet exist when the other is created.
It is possible to get this circularity with tricks involving laziness (which is implemented with mutation), but the real question then becomes why you would want this thing in the first place?
As others have noted, there is no persistent implementation of a doubly-linked list. You will need some kind of tree to get close to the characteristics you want.
In particular, you may want to look at finger trees, which provide O(1) access to the front and back, amortized O(1) insertion to the front and back, and O(log n) insertion elsewhere. (That's in contrast to most other commonly-used trees which have O(log n) access and insertion everywhere.)
See also:
video explanation of finger trees (by the implementor of finger trees in clojure.contrib)
finger tree implementation in Scala (I haven't used it personally, but it's the top google hit)
As a supplement to the answer by @KimStebel, I'd like to add:
If you are searching for a data structure suitable for the problem that motivated you to ask this question, you might have a look at Extreme Cleverness: Functional Data Structures in Scala by @DanielSpiewak.