Io: How to instantiate a subclassed primitive (e.g. Number)?

In the book 7 Languages in 7 Weeks there is a question:
How would you change / to return 0 if the denominator is zero?
Thanks to the thread What's the significance of self inside of a method? I have a working solution, but I wanted to try to do it without clobbering the Number "/" method, and instead subclass Number. Here is what I tried:
Zeroable := Number clone
Zeroable / = method(denom, if(denom == 0, 0, self proto / denom))
However, this doesn't work. If I try to instantiate an instance of Zeroable, it behaves just like a number:
Io> ten := Zeroable 10
==> 10
Io> ten type
==> Number
Io> ten / 5
==> 2
Io> ten / 0
==> inf
Io> ten slotNames
==> list()
If I instantiate Zeroable the "normal" way, it works, but the value is always 0 and there doesn't appear to be a way to change it:
Io> zero := Zeroable clone
==> 0
Io> zero type
==> Zeroable
Io> zero / 0
==> 0
Io> zero / 2
==> 0
I think the issue is the way that ten is instantiated, but I cannot figure out how to pass "arguments" to a clone method, or otherwise how to create a Zeroable that is not 0. What is going on here?

Arguments cannot be passed to clone; clone is effectively set up like this:
clone := method(
    obj := primitiveAllocateMemory(sizeof(self))
    obj parent := self
    obj do(?init)
)
Secondly, you can't subclass Number like that. Number objects are created by the lexer when it encounters a literal number and are given the type Number. This object is set as the cached result of the message, meaning that even if you intercept the message and evaluate it as some other object, you'll still get back a Number. Effectively this is a short-circuit of message evaluation for performance reasons.
If you want a different Number type, you'll have to implement it yourself with the operations you want. This means subclassing Object (or some other object) and implementing all the behaviours you want. Note that if you subclass Number, the implementations of the methods on Number won't be able to make sense of your subclass (how it stores its numbers): Number methods assume a numerical value encoded in the object itself, not in its slot table.
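One way to get the behaviour asked about, sketched here under the assumption that you don't need a real Number (the names Zeroable, value, and setValue are only illustrative): hold the numeric value in an ordinary slot on an Object clone and define just the operators you need.
Zeroable := Object clone
Zeroable value ::= 0                      # ::= also generates a setValue setter
Zeroable setSlot("/", method(denom,
    if(denom == 0, 0, value / denom)
))

ten := Zeroable clone
ten setValue(10)
writeln(ten / 5)                          # ==> 2
writeln(ten / 0)                          # ==> 0
The arithmetic is delegated to the Number stored in value, so the result of a division is a plain Number rather than another Zeroable, which is usually acceptable for this exercise.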

Related

Access variable after its assumptions are cleared

After this
restart; about(x); assume(x>0); f:=3*x; x:='x': about(x);
Originally x, renamed x~:
is assumed to be: RealRange(Open(0),infinity)
f := 3*x~
x:
nothing known about this object
I can use indets to access the variable x (looks like it's still x~ in f), for example
eval(f,indets(f)[1]=2);
6
but it's not efficient when f has many variables. I've tried to access variable x (or x~) directly, but it didn't work:
eval(f,x=2);
eval(f,x~=2);
eval(f,'x'=2);
eval(f,'x~'=2);
eval(f,`x`=2);
eval(f,`x~`=2);
eval(f,cat(x,`~`)=2);
since the result in all those cases was 3*x~ (not 6).
Is there a way to access a specific variable directly (i.e. without using indets) after its assumptions are cleared?
There is no direct way, if utilizing assume, without programmatically extracting/converting/replacing the assumed names in the previously assigned expression.
You can store the (assumed) name in another variable, and utilize that -- even after unassigning x.
Or you can pick off the names (including previously assumed names) from f using indets -- even after unassigning x.
But both of those are awkward, and it gets more cumbersome if there are many such names.
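For instance, the first of those options (storing the assumed name before unassigning x) might look like the following minimal sketch, where xref is just an illustrative name:
restart;
assume(x > 0):
xref := x:                # xref now refers to the assumed name x~
f := 3*x:
x := 'x':
eval(f, xref = 2);
6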
That is one reason why some people prefer to utilize assuming instead of the assume facility.
You can construct lists/sets/sequences of the relevant assumptions, and then re-utilize those in multiple assuming instances. But the global names are otherwise left alone, and your problematic mismatch situation is avoided.
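A small sketch of that assuming style (the expressions are only illustrative):
restart;
f := 3*x + y:
simplify(sqrt(x^2)) assuming x > 0;
x
is(f > 0) assuming x > 0, y > 0;
true
eval(f, [x = 2, y = 1]);    # nothing was renamed, so the plain names work in eval
7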
Another alternative is to utilize the command Physics:-Assume instead of assume.
Here's an example. Notice that, while it is in force, the assumption that x is positive still allows simplifications that depend upon it.
restart;
Physics:-Assume(x>0);
{x::(RealRange(Open(0),infinity))}
about(x);
Originally x, renamed x:
is assumed to be: RealRange(Open(0),infinity)
f:=3*x;
f := 3 x
simplify(sqrt(x^2));
x
Physics:-Assume('clear'={x});
{}
about(x);
x:
nothing known about this object
eval(f, [x=2]);
6
As for handling the original example utilizing assume, and substituting into f for the (still present) assumed names, it can be done programmatically to alleviate some of the awkwardness with, say, a larger set of assumptions. For example,
restart;
# re-usable utility
K := (e, p) -> eval(e,
        eval(map(nm -> nm = parse(sprintf("%a", nm)),
                 indets(e, And(name, satisfies(hasassumptions)))),
             p)):
assume(x>0, y>0, t<z);
f:=3*x;
f := 3 x
g:=3*x+y-sin(t);
g := 3 x + y - sin(t)
x:='x': y:='y': t:='t':
K(f,[x=2]);
6
K(g,[x=2,y=sqrt(2),t=11]);
6 + 2^(1/2) - sin(11)

Haskell-like range in PureScript?

Is there a native function that behaves like Haskell's range?
I found out that 2..1 returns a list [2, 1] in PureScript, unlike Haskell's [2..1], which returns an empty list []. After Googling around, I found the behavior is described in the Differences from Haskell documentation, but it doesn't give a rationale behind it.
In my opinion, this behavior is somewhat inconvenient/unintuitive since 0 .. (len - 1) doesn't give an empty list when len is zero, and this could possibly lead to cryptic bugs.
Is there a way to obtain the expected array (i.e. range of length len incrementing from 0) without handling the length == 0 case every time?
Also, why did PureScript decide to make range behave like that?
P.S. How I ran into this question: I wanted to write a getLocalStorageKeys function, which gets all keys from the local storage. My implementation gets the number of keys using the length function, creates a range from 0 to numKeys - 1, and then traverses it with the key function. However, the range didn't behave as I expected.
How about just making it yourself?
-- needs: import Data.Array ((..))
indices :: Int -> Array Int
indices n = if n <= 0 then [] else 0 .. (n - 1)
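With that helper, the len == 0 case from the question needs no special handling; a quick sketch of its behavior:
indices 0    -- []  (whereas 0 .. (0 - 1) gives [0, -1])
indices 4    -- [0, 1, 2, 3]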
As far as "why", I can only speculate, and my speculation is that the idea was to avoid this kind of iffy logic for creating "reverse" ranges - which is something that does come up for me in Haskell once in a while.

Should I count up in Perl 6 with a sequence or a range?

Perl 6 has lazy lists, but it also has unbounded Range objects. Which one should you choose for counting up by whole numbers?
There's the unbounded Range with two dots:
0 .. *
There's the Seq (sequence) with three dots:
0 ... *
A Range generates lists of consecutive thingys using their natural order. It inherits from Iterable, but also Positional, so you can index a range. You can check if something is within a Range, but that's not part of the task.
A Seq can generate just about anything you like, as long as it knows how to get to the next element. It inherits from Iterable, but also PositionalBindFailover, which fakes the Positional stuff through a cache and list conversion. I don't think that's a big deal if you're only moving from one element to the next.
I'm going back and forth on this. At the moment I'm thinking it's Range.
Both 0 .. * and 0 ... * are fine.
Iterating over them, for example with a for loop, has exactly the same effect in both cases. (Neither will leak memory by keeping around already iterated elements.)
Assigning them to an @ variable produces the same lazy Array.
So as long as you only want to count up numbers to infinity by a step of 1, I don't see a downside to either.
The ... sequence construction operator is more generic though, in that it can also be used to
count with a different step (1, 3 ... *)
count downwards (10 ... -Inf)
follow a geometric sequence (2, 4, 8 ... *)
follow a custom iteration formula (1, 1, *+* ... *)
so when I need to do something like that, then I'd consider using ... for any nearby and related "count up by one" as well, for consistency.
On the other hand:
A Range can be indexed efficiently without having to generate and cache all preceding elements, so if you want to index your counter in addition to iterating over it, it is preferable. The same goes for other list operations that deal with element positions, like reverse: Range has efficient overloads for them, whereas using them on a Seq has to iterate and cache its elements first.
If you want to count upwards to a variable end-point (as in 1 .. $n), it's safer to use a Range because you can be sure it'll never count downwards, no matter what $n is. (If the endpoint is less than the startpoint, as in 1 .. 0, it will behave as an empty sequence when iterated, which tends to get edge-cases right in practice.)
Conversely, if you want to safely count downwards ensuring it will never unexpectedly count upwards, you can use reverse 1 .. $n.
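A quick REPL sketch of those points (Rakudo; the values are only illustrative):
> (1..1000)[^5]              # indexing a Range needs no preceding elements
(1 2 3 4 5)
> my $n = 0; (1..$n).elems   # endpoint below the start: empty, never downwards
0
> reverse 1..5
(5 4 3 2 1)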
Lastly, a Range is a more specific/high-level representation of the concept of "numbers from x to y", whereas a Seq represents the more generic concept of "a sequence of values". A Seq is, in general, driven by arbitrary generator code (see gather/take) - the ... operator is just semantic sugar for creating some common types of sequences. So it may feel more declarative to use a Range when "numbers from x to y" is the concept you want to express. But I suppose that's a purely psychological concern... :P
Semantically speaking, a Range is a static thing (a bounded set of values), a Seq is a dynamic thing (a value generator) and a lazy List a static view of a dynamic thing (an immutable cache for generated values).
Rule of thumb: Prefer static over dynamic, but simple over complex.
In addition, a Seq is an iterable thing, a List is an iterable positional thing, and a Range is an ordered iterable positional thing.
Rule of thumb: Go with the most generic or most specific depending on context.
As we're dealing with iteration only and are not interested in positional access or bounds, using a Seq (which is essentially a boxed Iterator) seems like a natural choice. However, ordered sets of consecutive integers are exactly what an integer Range represents, and personally that's what I would see as most appropriate for your particular use case.
When there is no clear choice, I tend to prefer ranges for their simplicity anyway (and try to avoid lazy lists, the heavy-weight).
Note that the language syntax also nudges you in the direction of Range, which is rather heavily Huffman-coded (two-char infix .., one-char prefix ^).
There is a difference between ".." (Range) and "..." (Seq):
$ perl6
> 1..10
1..10
> 1...10
(1 2 3 4 5 6 7 8 9 10)
> 2,4...10
(2 4 6 8 10)
> (3,6...*)[^5]
(3 6 9 12 15)
The "..." operator can intuit patterns!
https://docs.perl6.org/language/operators#index-entry-..._operators
As I understand it, you can traverse a Seq only once. It's meant for streaming, where you don't need to go back (e.g., reading a file). I would think a Range should be a fine choice.

Ada - Any defined behavior for incrementing past end of range?

If I define a range in Ada as being from 1 .. 1000, is there a defined behavior by the Ada spec if I increment past 1000?
For example:
type Index is range 1..1000;
idx : Index := 0;
procedure Increment is
begin
   idx := idx + 1;
end Increment;
What should I expect to happen once I call Increment with idx = 1000?
Wraps around (idx = 1)
Out of Range exception
Undefined behavior
Something else?
Your program will fail with a CONSTRAINT_ERROR. However, this is not because you eventually try to set idx to 1001; rather, its initial value of 0 is not within your predefined range either. Thankfully, the compiler will already warn you about this at compile time.
If you had set idx to a permitted value and then incremented it beyond its upper limit in a way the compiler cannot statically detect, CONSTRAINT_ERROR would again be raised (but without any hint at compile time).
This error is technically an exception which you can handle like any other exception in that language.
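For instance, a minimal self-contained sketch of catching it (a hypothetical program, not from the question):
with Ada.Text_IO; use Ada.Text_IO;

procedure Demo is
   type Index is range 1 .. 1000;
   Idx : Index := Index'Last;
begin
   Idx := Idx + 1;   -- out of range: raises Constraint_Error at run time
exception
   when Constraint_Error =>
      Put_Line ("Idx cannot exceed" & Index'Image (Index'Last));
end Demo;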
Note: I intentionally linked to the ancient Ada '83 specs above to show that this behavior has been part of the language since the beginning of time.

Having trouble with an Ada declaration "at 0 range 18..20"

I'm having trouble with this variable declaration:
Code_Length at 0 range 18..20;
I'm familiar with constraints, but the at 0 is what's giving me fits, and I can't find any working examples online anywhere else.
If I had to guess (and I'm totally guessing), the at 0 initializes the value to 0, then the constraint is enforced on any subsequent assignment operation. But I can't find anything to verify that.
That's not a variable declaration; it's a representation clause for a field of a record. Whatever record declaration you excerpted that from has a field named "Code_Length", and this representation clause indicates that the storage for it (3 bits) will be offset 0 bytes from the start of the whole record's storage and will occupy bits 18 through 20.
Providing more contextual code would help the explanation.
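In the absence of that context, here is a hypothetical record in which such a component clause could appear (all names and the layout are made up):
type Flag_Field   is mod 2 ** 18;   -- 18 bits
type Length_Field is mod 2 ** 3;    --  3 bits

type Message is record
   Flags       : Flag_Field;
   Code_Length : Length_Field;
end record;

for Message use record
   Flags       at 0 range  0 .. 17;   -- byte offset 0, bits 0 .. 17
   Code_Length at 0 range 18 .. 20;   -- byte offset 0, bits 18 .. 20 (3 bits)
end record;
The at part gives the offset in storage units (typically bytes) from the start of the record, and range picks the bits within that.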