How to define `last` iterator without collecting/allocating? - interface

Using the example from the Julia Docs, we can define an iterator like the following:
struct Squares
    count::Int
end
Base.iterate(S::Squares, state=1) = state > S.count ? nothing : (state*state, state+1)
Base.eltype(::Type{Squares}) = Int # Note that this is defined for the type
Base.length(S::Squares) = S.count
But even though there's a length defined, asking for last(Squares(5)) results in an error:
julia> last(Squares(5))
ERROR: MethodError: no method matching lastindex(::Squares)
Since length is defined, is there a way to iterate through and return the last value without doing an allocating collect? If so, would it be bad to extend the Base.last method for my type?

As you can read in the docstring of last:
Get the last element of an ordered collection, if it can be computed in O(1) time. This is accomplished by calling lastindex to get the last index.
The crucial part is the O(1) computation time. In your example the cost of computing the last element via iteration is O(count) (that is, if we insist on going through the definition of the iterator; for this particular type it could of course be computed in O(1) time, since the answer is simply S.count^2).
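For completeness, here is a minimal sketch of that O(count) computation, using only the iteration protocol and no allocating collect (the helper name iterate_to_last is made up):
function iterate_to_last(itr)
    y = iterate(itr)
    y === nothing && throw(ArgumentError("collection must be non-empty"))
    el, state = y
    while true
        y = iterate(itr, state)
        y === nothing && return el   # iterator exhausted: el is the last element
        el, state = y
    end
end

iterate_to_last(Squares(5))  # 25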
The idea is to avoid defining last for collections for which it is expensive to compute it. For this reason the default definition of last is:
last(a) = a[end]
which requires not only lastindex but also getindex to be defined for the passed value (the assumption being that if someone defines lastindex and getindex for some type, then these operations can be performed fast).
If you look at the Interfaces section of the Julia manual you will notice that the iteration interface (something your example implements) is less demanding than the indexing interface (something that is defined for your example in the next section of the manual). Usually the distinction is that the indexing interface is only added for collections that can be indexed efficiently.
If you still want last to work on your type you can either:
add a definition to Base.last specifically - there is nothing wrong with doing this;
add a definition of getindex, firstindex, and lastindex to make the collection indexable (and then the default definition of last would work) - this is the approach presented in the Julia manual.
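For the Squares type above, a minimal sketch of both options might look like this (option 2 follows the pattern from the manual):
# Option 1: extend Base.last directly; O(1) for this particular iterator.
Base.last(S::Squares) = S.count * S.count

# Option 2: implement the indexing interface, after which the default
# definition last(a) = a[end] works unchanged.
function Base.getindex(S::Squares, i::Int)
    1 <= i <= S.count || throw(BoundsError(S, i))
    return i * i
end
Base.firstindex(S::Squares) = 1
Base.lastindex(S::Squares) = length(S)

last(Squares(5))  # 25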

Related

What is the use of minizinc fix function?

I see that the fix documentation says:
http://www.minizinc.org/doc-lib/doc-builtins-reflect.html#Ifunction-dd-T-cl-fix-po-var-opt-dd-T-cl-x-pc
function array [$U] of $T: fix(array [$U] of var opt $T: x)
Check if the value of every element of the array x is fixed at this point in evaluation. If all are fixed, return an array of their values, otherwise abort.
I am thinking it can be used to coerce a var to a par.
Here is the code.
array [1..num] of var int: value ;
%% generate random numbers from 0..num-1, this should fix the value of the var "value" or so i think
constraint forall(i in index_set(value))(let {int:temp_value=discrete_distribution([1|i in index_set(value)]); } in value[i]=trace(show(temp_value)++"\n", temp_value));
%%% this i was expecting to work, as "value" elements are fixed above
array [1..num] of int:value__ =[ trace(show(fix(value[i])), fix(value[i])) | i in index_set(value)] ;
But i get:
MiniZinc: evaluation error:
with i = 1
in call 'trace'
in call 'fix'
expression is not fixed
My questions are:
1) Should I expect this error, given that MiniZinc is not a sequential execution language?
2) The examples of fix in the user guide appear only where an output statement is used. Is that the only place fix can be used?
3) How would I coerce a var to a par?
By the way, I am trying this var-to-par conversion because I am having a problem with an array generator expression. Here is the code:
int:num__=200;
int:seed=134;
int: two_m=2097184;
%% prepare weights for generating numbers from 1..(two_m div 64), basically same weight
array [1..(two_m div 64)] of int: value_6_wt= [seed+1 | i in 1..(two_m div 64)] ;
%% generate numbers. this does not work; gives
%% in variable declaration for 'value6'
%% parameter value out of range
array [1..num__] of int : value6 = [ discrete_distribution(value_6_wt) | j in 1..num__];
In the MiniZinc language the difference between a parameter and a variable is only the fact that a parameter must have a value at compile time. Within the compiler we turn as many variables into parameters as we can. This saves the solver from having to do some work. When we know that a variable has been turned into a parameter, then we can use the fix function to convince the type system that we really can use this variable as a parameter and see its value.
The problem here, however, is that fix is defined to abort when the variable is not fixed to one value. If no testing is done first, using it requires some (almost magical) knowledge about the compilation process. In your case it seems that the second array is evaluated before the optimisation stage, in which all aliasing is resolved; this is the reason why it does not work. (This is indeed one of the consequences of a declarative language.)
Although fix might only be used in the output statements in the examples (where it's guaranteed to work), it is used in many locations in the MiniZinc libraries. If we for example look at the library that is used for MIP solvers, there are many constraints that can be encoded more efficiently if one of the arguments is a parameter. Therefore, you will often see that a constraint in this library first tests its arguments with is_fixed, and then uses a better encoding if this returns true.
Within an output statement, or after is_fixed has returned true, you have the guarantee that a variable is fixed, so the compilation won't abort. There is no other way to coerce a variable to a parameter, but unless you are dealing with dependent predicate definitions, you can just trust the MiniZinc compiler to ensure that the resulting FlatZinc will contain a parameter instead of a variable.
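As a minimal sketch of the test-then-fix pattern described above (the predicate name and the two encodings are made up for illustration):
predicate at_least(var int: x, var int: y) =
  if is_fixed(y) then
    x >= fix(y)  % y is known at compile time: use it as a parameter
  else
    x >= y       % general encoding with both variables
  endif;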

Ambiguous use of 'lazy'

I have no idea why this example is ambiguous. (My apologies for not adding the code here, it's simply too long.)
I have added prefix(_ maxLength:) as an overload to LazyDropWhileBidirectionalCollection. subscript(position:) is defined on LazyPrefixCollection. Yet the following code from the above example, which shouldn't be ambiguous, is:
print([0, 1, 2].lazy.drop(while: {_ in false}).prefix(2)[0]) // Ambiguous use of 'lazy'
It is my understanding that an overload that's higher up in the protocol hierarchy will get used.
According to the compiler it can't choose between two types; namely LazyRandomAccessCollection and LazySequence. (Which doesn't make sense since subscript(position) is not a method of LazySequence.) LazyRandomAccessCollection would be the logical choice here.
If I remove the subscript, it works:
print(Array([0, 1, 2].lazy.drop(while: {_ in false}).prefix(2))) // [0, 1]
What could be the issue?
The trail here is just too complicated and ambiguous. You can see this by dropping elements. In particular, drop the last subscript:
let z = [0, 1, 2].lazy.drop(while: {_ in false}).prefix(2)
In this configuration, the compiler wants to type z as LazyPrefixCollection<LazyDropWhileBidirectionalCollection<[Int]>>. But that isn't indexable by integers. I know it feels like it should be, but it isn't provable by the current compiler. (see below) So your [0] fails. And backtracking isn't powerful enough to get back out of this crazy maze. There are just too many overloads with different return types, and the compiler doesn't know which one you want.
But this particular case is trivially fixed:
print([0, 1, 2].lazy.drop(while: {_ in false}).prefix(2).first!)
That said, I would absolutely avoid pushing the compiler this hard. This is all too clever for Swift today. In particular overloads that return different types are very often a bad idea in Swift. When they're simple, yes, you can get away with it. But when you start layering them on, the compiler doesn't have a strong enough proof engine to resolve it. (That said, if we studied this long enough, I'm betting it actually is ambiguous somehow, but the diagnostic is misleading. That's a very common situation when you get into overly-clever Swift.)
Now that you describe it (in the comments), the reasoning is straightforward.
LazyDropWhileCollection can't have an integer index. Index subscripting is required to be O(1). That's the meaning of the Index subscript versus other subscripts. (The Index subscript must also return the Element type or crash; it can't return an Element?. That's why there's a DictionaryIndex that's separate from Key.)
Since the collection is lazy and has an arbitrary number of missing elements, looking up any particular integer "count" (first, second, etc.) is O(n). It's not possible to know what the 100th element is without walking through at least 100 elements. To be a collection, its O(1) index has to be in a form that can only be created by having previously walked the sequence. It can't be Int.
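For instance, a minimal sketch (current Swift; the concrete type names vary slightly between versions) showing that the drop-while view is subscripted by its own opaque Index rather than by Int:
let xs = [0, 1, 2].lazy.drop(while: { _ in false })
// xs is a LazyDropWhileCollection<[Int]>; its Index wraps the base index
// and can only be obtained by walking from startIndex.
let i = xs.startIndex
print(xs[i])                   // 0
print(xs[xs.index(after: i)])  // 1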
This is important because when you write code like:
for i in 1...1000 { print(xs[i]) }
you expect that to be on the order of 1000 "steps," but if this collection had an integer index, it would be on the order of 1 million steps. By wrapping the index, they prevent you from writing that code in the first place.
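The idiomatic alternative, which stays on the order of n steps overall, is to iterate the collection's own indices (a sketch, with xs as above):
for i in xs.indices {
    print(xs[i])  // each index is produced by advancing the previous one: O(n) total
}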
This is especially important in highly generic languages like Swift where layers of general-purpose algorithms can easily cascade an unexpected O(n) operation into completely unworkable performance (by "unworkable" I mean things that you expected to take milliseconds taking minutes or more).
Change the last line to this:
let x = [0, 1, 2]
let lazyX: LazySequence = x.lazy
let lazyX2: LazyRandomAccessCollection = x.lazy
let lazyX3: LazyBidirectionalCollection = x.lazy
let lazyX4: LazyCollection = x.lazy
print(lazyX.drop(while: {_ in false}).prefix(2)[0])
You can see that the same array supports four different lazy types - you will have to be explicit.

Need an explanation for a confusing way the AND boolean works

I am tutoring someone in basic searches and sorts. In insertion sort I iterate backwards while the current value is smaller than the one before it. Now of course this approach can cause issues, because there is a check which calls for array[-1], which does not exist.
As shown in bold below, adding the and x > 0 condition prevents the index issue.
My question is: how is this the case? Wouldn't the call to array[-1] still be made in order to evaluate both booleans?
the_list = [10,2,4,3,5,7,8,9,6]
for x in range(1,len(the_list)):
    value = the_list[x]
    while value < the_list[x-1] **and x > 0**:
        the_list[x] = the_list[x-1]
        x = x-1
    the_list[x] = value
print the_list
I'm not sure I completely understand the question, and I don't know what programming language this is, but most modern programming languages use so-called short-circuit Boolean evaluation by default so that the logical expression isn't evaluated further once the outcome is known.
You can use that to guard against range overflow, like this:
while x > 0 and value < the_list[x-1]
but the check of x's range here must come before the use.
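A minimal standalone demonstration of the short-circuit behaviour (in Python, since that is what the question's code appears to be; check() is a made-up helper):
def check(i):
    print("right operand evaluated")
    return i < 10

x = 0
# The left operand is False, so check() is never called and nothing is printed.
result = x > 0 and check(x)
print(result)  # False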
An AND operation returns true if and only if both arguments are true, so once one argument is known to be false there is no point in checking the others: the final value is already determined. As for your example, evaluation goes from left to right, so the array lookup on the left-hand side is still performed before x > 0 is checked. Note that in Python, which this code appears to be, the_list[-1] is legal and refers to the last element of the list, which is why the lookup does not crash; in most popular languages you would still crash unless the x > 0 test were evaluated before the lookup, which is why the guard should be written first, as shown above.

Why doesn't scala's parallel sequences have a contains method?

Why does
List.range(0,100).contains(2)
work, while
List.range(0,100).par.contains(2)
does not?
Is this planned for the future?
The non-teleological answer is that it's because contains is defined in SeqLike but not in ParSeqLike.
If that doesn't satisfy your curiosity, you can find that SeqLike's contains is defined thus:
def contains(elem: Any): Boolean = exists (_ == elem)
So for your example you can write
List.range(0,100).par.exists(_ == 2)
ParSeqLike is missing a few other methods as well, some of which would be hard to implement efficiently (e.g. indexOfSlice) and some for less obvious reasons (e.g. combinations - maybe because that's only useful on small datasets). But if you have a parallel collection you can also use .seq to get back to the linear version and get your methods back:
List.range(0,100).par.seq.contains(2)
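If you want the method back on parallel sequences, a minimal sketch (the name containsElem is made up, not part of the standard library; written for Scala 2.12, where parallel collections are in the standard library) is an implicit wrapper that delegates to exists:
import scala.collection.parallel.ParSeq

object ParOps {
  implicit class ParContains[A](val xs: ParSeq[A]) extends AnyVal {
    // Mirrors SeqLike.contains: a linear search, performed in parallel.
    def containsElem(elem: Any): Boolean = xs.exists(_ == elem)
  }
}

import ParOps._
List.range(0, 100).par.containsElem(2)  // true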
As for why the library designers left it out... I'm totally guessing, but maybe they wanted to reduce the number of methods for simplicity's sake, and it's nearly as easy to use exists.
This also raises the question, why is contains defined on SeqLike rather than on the granddaddy of all collections, GenTraversableOnce, where you find exists? A possible reason is that contains for Map is semantically a different method to that on Set and Seq. A Map[A,B] is a Traversable[(A,B)], so if contains were defined for Traversable, contains would need to take a tuple (A,B) argument; however Map's contains takes just an A argument. Given this, I think contains should be defined in GenSeqLike - maybe this is an oversight that will be corrected.
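To see the semantic mismatch, compare the argument types (a small sketch, not library code):
val m = Map(1 -> "a", 2 -> "b")
m.contains(1)              // true: Map's contains takes a key of type A
m.exists(_ == (1 -> "a"))  // true: viewed as a Traversable[(A, B)], the elements are pairs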
(I thought at first that parallel sequences might lack contains because a search that stops as soon as it finds its target is a lot less efficient on parallel collections than on linear ones (the various threads do a lot of unnecessary work after the value is found: see this question), but that can't be right because exists is there.)

How to delete elements from a transformed collection using a predicate?

If I have an ArrayList<Double> dblList and a Predicate<Double> IS_EVEN I am able to remove all even elements from dblList using:
Collections2.filter(dblList, IS_EVEN).clear()
If dblList, however, is the result of a transformation like
dblList = Lists.transform(intList, TO_DOUBLE)
this no longer works, as the transformed list is immutable :-)
Any solution?
Lists.transform() accepts a List and helpfully returns a result that is a RandomAccess list. Iterables.transform() only accepts an Iterable, and its result is not RandomAccess. Finally, Iterables.removeIf (as far as I can see, the only method in Iterables that does this) has an optimization for the case where the given argument is RandomAccess; its point is to make the algorithm linear instead of quadratic. Think, for example, of what would happen if you had a big ArrayList (and not an ArrayDeque - that should be more popular) and kept removing elements from its start till it's empty.
But the optimization depends not on the iterator's remove(), but on List.set(), which cannot possibly be supported in a transformed list. If this were to be fixed, we would need another marker interface to denote that "the optional set() actually works".
So the options you have are:
Call the Iterables.removeIf() version, and run a quadratic algorithm (it won't matter if your list is small or you only remove a few elements)
Copy the List into another List that supports all optional operations, then call Iterables.removeIf().
The following approach should work, though I haven't tried it yet.
Collection<Double> dblCollection =
    Collections.checkedCollection(dblList, Double.class);
Collections2.filter(dblCollection, IS_EVEN).clear();
The checkedCollection() method generates a view of the list that doesn't implement List. [It would be cleaner, but more verbose, to create a ForwardingCollection instead.] Then Collections2.filter() won't call the unsupported set() method.
The library code could be made more robust. Iterables.removeIf() could generate a composed Predicate, as Michael D suggested, when passed a transformed list. However, we previously decided not to complicate the code by adding special-case logic of that sort.
Maybe:
Collection<Double> odds = Collections2.filter(dblList, Predicates.not(IS_EVEN));
or
dblList = Lists.newArrayList(Lists.transform(intList, TO_DOUBLE));
Collections2.filter(dblList, IS_EVEN).clear();
As long as you have no need for the intermediate collection, then you can just use Predicates.compose() to create a predicate that first transforms the item, then evaluates a predicate on the transformed item.
For example, suppose I have a List<Double> from which I want to remove all items where the Integer part is even. I already have a Function<Double,Integer> that gives me the Integer part, and a Predicate<Integer> that tells me if it is even.
I can use these to get a new predicate, INTEGER_PART_IS_EVEN
Predicate<Double> INTEGER_PART_IS_EVEN = Predicates.compose(IS_EVEN, DOUBLE_TO_INTEGER);
Collections2.filter(dblList, INTEGER_PART_IS_EVEN).clear();
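For reference, the two helpers assumed above could be defined like this (a sketch using Guava's pre-Java-8 functional types; the names match the example):
import com.google.common.base.Function;
import com.google.common.base.Predicate;

Function<Double, Integer> DOUBLE_TO_INTEGER = new Function<Double, Integer>() {
    @Override
    public Integer apply(Double d) {
        return d.intValue();  // the integer part
    }
};

Predicate<Integer> IS_EVEN = new Predicate<Integer>() {
    @Override
    public boolean apply(Integer i) {
        return i % 2 == 0;
    }
};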
After some tries, I think I've found it :)
final ArrayList<Integer> ints = Lists.newArrayList(1, 2, 3, 4, 5);
Iterables.removeIf(Iterables.transform(ints, intoDouble()), even());
System.out.println(ints);
[1, 3, 5]
(This works because Iterables.transform() does not return a RandomAccess list, so Iterables.removeIf() falls back to iterator-based removal, which the transformed view does support.)
I don't have a solution; instead, I found a problem with Iterables.removeIf() in combination with Lists.TransformingRandomAccessList.
The transformed list implements RandomAccess, thus Iterables.removeIf() delegates to Iterables.removeIfFromRandomAccessList() which depends on an unsupported List.set() operation.
Calling Iterators.removeIf() however would be successful, as the remove() operation IS supported by Lists.TransformingRandomAccessList.
see: Iterables: 147
Conclusion: instanceof RandomAccess does not guarantee List.set().
Addition:
In special situations calling removeIfFromRandomAccessList() even works: if and only if the elements to erase form a compact group at the tail of the List, or all elements are covered by the Predicate.