[data-structure]: the tail pointer of a cyclic queue

In the implementation of a cyclic queue, the tail pointer points to the position one past the last element in the queue:
|1|2|3|4|5| | |
 ^         ^
 front     tail
Why?
I think I can implement the cyclic queue with the tail pointer pointing to the very last element, not one past the last.

You can indeed implement it that way. There's a certain symmetry to having the tail pointer point to the position one past the last element:
front points to the first (oldest) used element - the next element to be read
tail points to the first (oldest) unused element - the next element to be written
In either case, you need to do a bit more to distinguish between a full cyclic queue and an empty one. Some of the alternatives (including doing things your way) are discussed in the Wikipedia article on circular buffers.

It looks like that is how the implementation you're using determines whether the queue is empty or full.
http://en.wikipedia.org/wiki/Circular_buffer#Full_.2F_Empty_Buffer_Distinction
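For concreteness, here is a minimal sketch of that convention (written in Scala for illustration; the class and member names are made up rather than taken from any particular implementation). Both front and tail advance modulo the capacity, and an explicit element count is one common way to tell a full queue from an empty one, since front == tail in both situations:
class CircularQueue[A: scala.reflect.ClassTag](capacity: Int) {
  private val buf   = new Array[A](capacity)
  private var front = 0 // index of the first (oldest) element: the next slot to read
  private var tail  = 0 // index one past the last element: the next slot to write
  private var count = 0 // number of elements currently stored

  def isEmpty: Boolean = count == 0
  def isFull: Boolean  = count == capacity

  def enqueue(x: A): Unit = {
    require(!isFull, "queue is full")
    buf(tail) = x
    tail = (tail + 1) % capacity // wrap around at the end of the array
    count += 1
  }

  def dequeue(): A = {
    require(!isEmpty, "queue is empty")
    val x = buf(front)
    front = (front + 1) % capacity
    count -= 1
    x
  }
}
The Wikipedia article above covers the alternatives: keep a count or flag as done here, leave one slot permanently unused, or have tail point at the last element as the question suggests, in which case an empty queue needs its own marker.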

Related

How to define `last` iterator without collecting/allocating?

Using the example from the Julia Docs, we can define an iterator like the following:
struct Squares
    count::Int
end
Base.iterate(S::Squares, state=1) = state > S.count ? nothing : (state*state, state+1)
Base.eltype(::Type{Squares}) = Int # Note that this is defined for the type
Base.length(S::Squares) = S.count
But even though there's a length defined, asking for last(Squares(5)) results in an error:
julia> last(Squares(5))
ERROR: MethodError: no method matching lastindex(::Squares)
Since length is defined, is there a way to iterate through and return the last value without doing an allocating collect? If so, would it be bad to extend the Base.last method for my type?
As you can read in the docstring of last:
Get the last element of an ordered collection, if it can be computed in O(1) time. This is accomplished by calling lastindex to get the last index.
The crucial part is O(1) computation time. In your example the cost of computing the last element is O(count), at least if we restrict ourselves to the iteration protocol (for this particular iterator the last value could of course also be computed directly in O(1) time).
The idea is to avoid defining last for collections for which it is expensive to compute it. For this reason the default definition of last is:
last(a) = a[end]
which requires not only lastindex but also getindex defined for the passed value (as the assumption is that if someone defines lastindex and getindex for some type then these operations can be performed fast).
If you look at the Interfaces section of the Julia manual, you will notice that the iteration interface (which your example implements) is less demanding than the indexing interface (which is defined for your example in the next section of the manual). The usual convention is that the indexing interface is only added for collections that can be indexed efficiently.
If you still want last to work on your type you can either:
add a definition to Base.last specifically - there is nothing wrong with doing this;
add definitions of getindex, firstindex, and lastindex to make the collection indexable (and then the default definition of last will work) - this is the approach presented in the Julia manual.

Queue implementation in Odersky Scala book. Chapter 19

I see this code on page 388 of the Odersky book on Scala:
class SlowAppendQueue[T](elems: List[T]) {
  def head = elems.head
  def tail = new SlowAppendQueue(elems.tail)
  def enqueue(x: T) = new SlowAppendQueue(elems ::: List(x))
}

class SlowHeadQueue[T](smele: List[T]) {
  def head = smele.last
  def tail = new SlowHeadQueue(smele.init)
  def enqueue(x: T) = new SlowHeadQueue(x :: smele)
}
Is the following correct to say:
Both implementations of tail take time proportional to the number of elements in the queue.
The second implementation of head is slower than the first: it takes time proportional to the length of the queue. Why is this? How is it implemented? Is it like a linked list where each element has a pointer to the next?
Why does Odersky say the second class' implementation of tail is problematic but not the first?
No. In the first case, tail works in constant time, because elems.tail is a constant time operation (it just returns the tail of the list). The constructor new SlowAppendQueue(...) is also a constant time operation, because it just wraps the list.
Because if smele has N > 1 elements, then smele.init must rebuild a new list with N - 1 elements from scratch. This takes linear time, therefore it is much slower than the O(1) operation from the first queue implementation.
O(N) operations are problematic because they are slow for large N, whereas O(1) operations are essentially never problematic.
I think you should take a closer look at how the immutable singly linked list is implemented, and what it takes to prepend an element (O(1)), append an element (O(N)), access the tail (O(1)), or rebuild the init (O(N)). Then everything else becomes obvious.
No, the first tail implementation takes constant time. This is because List.tail is a constant time operation due to structural sharing, and wrapping the list in a new SlowAppendQueue is also a constant time operation.
The second implementation of head takes linear time because of the way functional linked lists (including Scala's List class) work. Each list node has a link to the node after it, but not to the node before it, so reaching the last element means walking the whole list. Likewise, in order to remove the last element via init, the entire list must be rebuilt.
In summary, List is fast when operating on the beginning, but not when solely operating on the end. See also the Scala docs for List.
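To make that concrete, here is a small REPL-style sketch (not from the book) showing that a List shares its tail but has to be rebuilt for anything that touches its end:
val xs = List(1, 2, 3)
val ys = 0 :: xs   // prepend: one new cell whose tail is the existing list
ys.tail eq xs      // true: reference equality, the old list is shared as-is
xs.tail            // List(2, 3): O(1), it just follows the head's link
xs.init            // List(1, 2): O(n), a new list has to be built
xs :+ 4            // List(1, 2, 3, 4): O(n), all existing cells are copied
That is exactly why SlowAppendQueue has a cheap tail and an expensive enqueue, while SlowHeadQueue has a cheap enqueue but an expensive head and tail.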

Swift pointer from ArraySlice

I'm trying to determine the "Swift-y" way of creating my own contiguous memory containers (in my particular case, I'm building n-dimensional arrays). I want my containers to be as close to Swift's builtin Array as possible - in terms of functionality and usability.
I need to access the pointer to memory of my containers for stuff like Accelerate and BLAS operations.
I want to know whether an ArraySlice's pointer would point to the first element of the slice, or the first element of its base.
When I tried to test UnsafePointer<Int>(array) == UnsafePointer<Int>(array[1...2]) it looks like Swift doesn't allow pointer construction from ArraySlices (or I just did it incorrectly).
I'm looking for advice on which way would be the most "Swift-y"?
I understand that when slicing an array the following is true:
let array = [1, 2, 3]
array[1] == array[1...2][1]
and
array[1...2][0] != 2 // index out of bounds error
In other words, indexing is always performed relative to the base Array.
Which suggests: we should return a pointer to the base's first element, because slices are relative to their base.
However, iteration through a slice (obviously) only considers elements of that slice:
for i in array[1...2] { ... } // i takes on 2 followed by 3
Which suggests: we should return a pointer to the slice's first element, because slices have their own starting point.
If my user wanted to operate on a slice in a BLAS operation it would be intuitive to expect:
mmul(matrix1[1...2, 0...1].pointer, matrix2[4...5, 0...1].pointer)
to point to the first elements of the slices, but I don't know if this is the way a Swift ArraySlice would do things.
My Question: Should a container slice object's pointer point to the first element of the slice, or, the first element of the base container.
This operation is unsafe:
UnsafePointer<Int>(array)
What you mean is:
array.withUnsafeBufferPointer { ... }
This applies to your types as well, and is the pattern you should employ to interoperate with BLAS and Accelerate. You should not try to use a pointer method IMO.
There is no promise that array will continue to exist by the time you actually access the pointer, even if that happens in the same line of code. ARC is free to destroy that memory shockingly quickly.
UnsafeBufferPointer is actually a very nice type in that it is already promised to be contiguous and it behaves as a Collection.
My suggestion here would be to manage your own memory internally, probably with a ManagedBuffer, but maybe just with a UnsafeMutablePointer that you alloc and destroy yourself. It's very important that you manage the layout of the data so that it's compatible with Accelerate. You don't want Array<Array<UInt8>>. That's going to add too much structure. You want a blob of bytes that you index into in the good-ol' C ways (row*width+column, etc). You probably don't want your slices to return pointers at all directly. Your mmul function is likely going to need special logic to understand how to pull the pieces it needs out of slices with minimal copying so that it works with vDSP_mmul. "Generic" and "Accelerate" seldom go together.
For example, considering this:
mmul(matrix1[1...2, 0...1].pointer, matrix2[4...5, 0...1].pointer)
(Obviously I assume your real matrices are dramatically larger; this kind of matrix doesn't make much sense to send to vDSP.)
You're going to have to write your own mmul here obviously since this memory isn't laid out correctly. So you might as well pass the slices. Then you'd do something like (totally untested, uncompiled, and I'm sure the syntax is wildly wrong):
func mmul(m1: MatrixSlice, m2: MatrixSlice) -> Matrix {
    var s1 = UnsafeMutablePointer<Float>.alloc(m1.rows * m1.columns)
    // use vDSP_vgathr to copy each sliced row out of m1 into s1
    var s2 = UnsafeMutablePointer<Float>.alloc(m2.rows * m2.columns)
    // use vDSP_vgathr to copy each sliced row out of m2 into s2
    var result = UnsafeMutablePointer<Float>.alloc(m1.rows * m2.columns)
    vDSP_mmul(s1, 1, s2, 1, result, 1, m1.rows, m2.columns, m1.columns)
    s1.destroy()
    s2.destroy()
    // This will need to call result.move() or moveInitializeFrom or something
    return Matrix(result)
}
I'm just throwing out stuff here, but this is probably the kind of structure you'd want.
To your underlying question about whether the pointer to the container or to the data is usually passed by Swift, the answer is unfortunately "magic" for Array and no one else. Passing an Array to something that wants a pointer will magically (by the compiler, not the stdlib) pass a pointer to the storage of the Array. No other type gets this magic. Not even ContiguousArray gets this magic. If you pass a ContiguousArray to something that wants a pointer, you'll pass the pointer to the container (and if it's mutable, corrupt the container; true story, hated that one…)
Thanks in part to @RobNapier, the answer to my question is: an ArraySlice's pointer should point to the slice's first element.
The way I verified this was simply:
var array = [5, 4, 3, 325, 67, 7, 3]
let basePtr  = array.withUnsafeBufferPointer { $0.baseAddress }         // points to 5's address
let slicePtr = array[3...6].withUnsafeBufferPointer { $0.baseAddress }  // points to 325's address
basePtr != slicePtr // true

Scala append and prepend method performance

I was following a Scala video tutorial, and it mentioned that prepend (::) takes constant time while the time for append (:+) increases with the length of the list. It also mentioned that, most of the time, reversing the list, prepending, and re-reversing it gives better performance than appending.
Question 1
Why does prepend (::) take constant time, while the time for append (:+) increases with the length of the list?
The reason is not mentioned in the tutorial, and I couldn't find the answer on Google, but I did find another surprising thing.
Question 2
ListBuffer takes constant time for both append and prepend. If that is possible, why wasn't List implemented that way?
Obviously there is a reason behind this! I'd appreciate it if someone could explain.
Answer 1:
List is implemented as a linked list, and the reference you hold is to its head.
For example, if you have a list of 4 elements (1 to 4), it will be:
[1]->[2]->[3]->[4]->//
Prepending means adding a new element in front of the head and returning the new head:
[5]->[1]->[2]->[3]->[4]->//
The reference to the old head [1] is still valid, and from its point of view there are still 4 elements.
On the other hand, appending means adding an element to the end of the list.
Since List is immutable, we can't just attach it to the end; we have to clone the entire list:
[1']->[2']->[3']->[4']->[5]->//
Since cloning means copying the entire list in the same order, we need to iterate over every element, which is why the time grows with the length of the list.
Answer 2:
ListBuffer is a mutable collection: changing it changes what is seen through every reference to it, whereas an immutable List can always be shared safely.
Ad 1. The list in Scala is defined (simplifying) as a head and a tail, where the tail is also a list. Prepending an element means creating a new list with the new element as its head and the existing list as its tail. The existing list is not changed, which is why prepending is a constant-time operation.
Appending to a list requires rebuilding the existing list, which cannot be done in constant time.
Ad 2. ListBuffer is a mutable collection. It may be more efficient in some applications, but on the other hand immutable collections are thread-safe and can be shared freely.
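As for the reverse trick mentioned in the question, here is a rough sketch of the idea (the function names are made up for this example):
// Building a list with repeated :+ copies the whole list at every step: O(n^2) overall.
def buildByAppend(items: Seq[Int]): List[Int] =
  items.foldLeft(List.empty[Int])((acc, x) => acc :+ x)
// Prepend everything (each :: is O(1)) and reverse once at the end: O(n) overall.
def buildByPrependAndReverse(items: Seq[Int]): List[Int] =
  items.foldLeft(List.empty[Int])((acc, x) => x :: acc).reverse
ListBuffer achieves the same effect mutably: it keeps a reference to the last cell, so both append and prepend are constant time, and toList hands over the built list cheaply.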

head and tail calls on empty list bringing an exception

I'm following a tutorial. (Real World Haskell)
I have a beginner question about head and tail called on empty lists: in GHCi they raise an exception.
Intuitively I would say they both should return an empty list. Could you correct me? Why not? (As far as I remember, in OzML the left or right of an empty list returns nil.)
I surely have not yet covered this topic in the tutorial, but isn't it a source of bugs (if no arguments are provided)?
I mean, if you ever pass a function a list of arguments which may be optional, couldn't reading them with head lead to a bug?
I only know the GHCi behaviour; I don't know what happens in compiled code.
Intuitively I would say they both should return an empty list. Could you correct me? Why not?
Well, head is [a] -> a. It returns the single first element, not a list.
And when there is no first element, as in an empty list? Well, what should it return? You can't create a value of type a from nothing, so all that remains is undefined - an error.
And tail? tail is basically the list without its first element, i.e. one item shorter than the original. You can't uphold these laws when there is no first element.
When you take one apple out of a box, you can't end up with the same box (which is what would happen if tail [] == []). The behaviour has to be undefined too.
This leads to the following conclusion:
I surely have not yet covered this topic in the tutorial, but isn't it a source of bugs? I mean, if you ever pass a function a list of arguments which may be optional, couldn't reading them with head lead to a bug?
Yes, it is a source of bugs, but only because it allows you to write flawed code: code that is basically trying to read a value that doesn't exist. So: don't ever use head/tail; use pattern matching instead.
sum [] = 0
sum (x:xs) = x + sum xs
The compiler can guarantee that all possible cases are covered, values are always defined and it's much cleaner to read.