Units of Measure, Interfaces and Mixins

Consider the following F# code:
type ILinear =
    interface
    end

type IMetric =
    interface
    end

[<Measure>] type cm =
    interface ILinear
    interface IMetric

[<Measure>] type m =
    interface ILinear
    interface IMetric

[<Measure>] type time
I want to use these interfaces as a means of both grouping the measure types and allowing a level of genericness between "any measure" and "a specific measurement" - something akin to:
(* Yes, I know this syntax is probably incorrect *)
let rate (distance:float<'u:#ILinear,#IMetric>) (time:float<time>) =
    distance/time
I realize this is probably pushing the limits of possibility but I'm just kind of curious if this is possible and if so, what the syntax would be. As I say, this is using interfaces as sort of a poor man's mixin.

I don't think this is possible, but I quite like the idea :-).
If it was possible, then the constraints would be probably written using the same syntax that you can use to write interface constraints for ordinary (non-measure) type parameters:
let rate<[<Measure>] 'u when 'u :> IMetric> (distance:float<'u>) (time:float<time>) =
    distance/time
The error message clearly says that constraints can only be specified on ordinary type parameters (actually, I was even surprised that units of measure can implement interfaces - it doesn't look very useful, as they are completely erased during compilation):
error FS0703: Expected type parameter, not unit-of-measure parameter
The best workaround that I can think of is to write a simple wrapper that stores a value (with some unit) and additional (phantom) type that represents the constraints:
[<Struct>]
type FloatValue<[<Measure>] 'u, 'constr>(value:float<'u>) =
    member x.Value = value

let cm f = FloatValue<_, IMetric>(f * 1.0<cm>)
The cm function takes a float and wraps it into a FloatValue. The second type argument is an ordinary type argument, so it can be provided with some type that implements interfaces (or with just a single interface). The rate function then looks like this:
let rate (distance:FloatValue<'u, #IMetric>) (time:float<time>) =
    distance.Value / time
Since the constraints cannot be specified on a unit type, we have to specify them on the second type argument. You can then call the function using:
rate (cm 10.0) 5.0<time>
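The same pattern extends to other units. A small sketch reusing the declarations above (the meters helper is my own, hypothetical name):

let meters f = FloatValue<_, IMetric>(f * 1.0<m>)

rate (meters 10.0) 5.0<time>  // distance.Value / time : float<m/time>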


What does "existential type" mean in Swift?

I am reading Swift Evolution proposal 244 (Opaque Result Types) and don't understand what the following means:

One could compose these transformations by using the existential type Shape instead of generic arguments, but doing so would imply more dynamism and runtime overhead than may be desired.
An example of an existential type is given in the evolution proposal itself:
protocol Shape {
    func draw(to: Surface)
}
An example of using protocol Shape as an existential type would look like
func collides(with: any Shape) -> Bool
as opposed to using a generic argument Other:
func collides<Other: Shape>(with: Other) -> Bool
It's important to note here that the Shape protocol is not an existential type by itself; only using it in a "protocols-as-types" context, as above, "creates" an existential type from it. See this post from a member of the Swift Core Team:
Also, protocols currently do double-duty as the spelling for existential types, but this relationship has been a common source of confusion.
Also, citing the Swift Generics Evolution article (I recommend reading the whole thing, which explains this in more detail):
The best way to distinguish a protocol type from an existential type is to look at the context. Ask yourself: when I see a reference to a protocol name like Shape, is it appearing at a type level, or at a value level? Revisiting some earlier examples, we see:
func addShape<T: Shape>() -> T
// Here, Shape appears at the type level, and so is referencing the protocol type
var shape: any Shape = Rectangle()
// Here, Shape appears at the value level, and so creates an existential type
Deeper dive
Why is it called an "existential"? I never saw an unambiguous confirmation of this, but I assume that the feature is inspired by languages with more advanced type systems, e.g. consider Haskell's existential types:
class Buffer -- declaration of type class `Buffer` follows here

data Worker x y = forall b. Buffer b => Worker {
    buffer :: b,
    input :: x,
    output :: y
}
which is roughly equivalent to this Swift snippet (if we assume that Swift's protocols more or less represent Haskell's type classes):
protocol Buffer {}

struct Worker<X, Y> {
    let buffer: any Buffer
    let input: X
    let output: Y
}
Note that the Haskell example uses the forall quantifier here. You could read it as: "for all types b that conform to the Buffer type class ("protocol" in Swift), values of type Worker have exactly the same type, as long as their X and Y type parameters are the same". Thus, given
extension String: Buffer {}
extension Data: Buffer {}
values Worker(buffer: "", input: 5, output: "five") and Worker(buffer: Data(), input: 5, output: "five") would have exactly the same types.
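In code form, a minimal sketch reusing the declarations above (Data requires importing Foundation):

import Foundation

// Both elements have the single concrete type Worker<Int, String>,
// so they can even share one homogeneous array:
let workers: [Worker<Int, String>] = [
    Worker(buffer: "", input: 5, output: "five"),
    Worker(buffer: Data(), input: 5, output: "five"),
]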
This is a powerful feature, which allows things such as heterogeneous collections, and it can be used in many more places where you need to "erase" the original type of a value and "hide" it under an existential type. Like all powerful features, it can be abused and can make code less type-safe and/or less performant, so it should be used with care.
If you want an even deeper dive, check out Protocols with Associated Types (PATs), which currently can't be used as existentials for various reasons. There are also a few Generalized Existentials proposals being pitched more or less regularly, but nothing concrete as of Swift 5.3. In fact, the original Opaque Result Types proposal linked by the OP can solve some of the problems caused by the use of PATs, and it significantly alleviates the lack of generalized existentials in Swift.
I'm sure you've already used existentials a lot before without noticing it.
A summary of Max's answer is that for:
var rec: Shape = Rectangle() // Example A
only Shape properties can be accessed. While for:
func addShape<T: Shape>() -> T // Example B
Any property of T can be accessed. Because T conforms to Shape, all of Shape's properties can be accessed as well.
The first example is an existential; the second is not.
Example of real code:
protocol Shape {
    var width: Double { get }
    var height: Double { get }
}

struct Rectangle: Shape {
    var width: Double
    var height: Double
    var area: Double
}
let rec1: Shape = Rectangle(width: 1, height: 2, area: 2)
rec1.area // ❌
However:
let rec2 = Rectangle(width: 1, height: 2, area: 2)
func addShape<T: Shape>(_ shape: T) -> T {
    print(type(of: shape)) // Rectangle
    return shape
}
let rec3 = addShape(rec2)
print(rec3.area) // ✅
I'd argue that most Swift users already understand abstract classes and concrete classes; this extra jargon makes it slightly confusing.
The trickiness is that in the second example, to the compiler, the type you return isn't Shape, it's Rectangle, i.e. the function signature transforms into this:
func addShape(_ shape: Rectangle) -> Rectangle {
This is only possible because of (constrained) generics.
Yet for rec: Shape = Whatever(), to the compiler the type is Shape regardless of the type being assigned. <-- a box type
Why is it named Existential?
The term "existential" in computer science and programming is borrowed from philosophy, where it refers to the concept of existence and being. In the context of programming, "existential" is used to describe a type that represents the existence of any specific type, without specifying which type it is.
The term is used to reflect the idea that, by wrapping a value in an existential type, you are abstracting away its specific type and only acknowledging its existence.
In other words, an existential type provides a way to handle values of different types in a unified way, while ignoring their specific type information†. This allows you to work with values in a more generic and flexible manner, which can be useful in many situations, such as when creating collections of heterogeneous values, or when working with values of unknown or dynamic types.
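For example, a minimal sketch of a heterogeneous collection, reusing Shape and Rectangle from above plus a hypothetical Circle:

struct Circle: Shape {
    var width: Double
    var height: Double
}

// Each element is erased to the same existential type, `any Shape`:
let shapes: [any Shape] = [Rectangle(width: 1, height: 2, area: 2),
                           Circle(width: 3, height: 3)]
for s in shapes { print(s.width) } // only Shape's members are visible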
The other day I took my kid to an ice-cream shop. She asked what are you having, and I didn't know the flavor I picked, so I didn't say it's strawberry flavored or chocolate, I just said "I'm having an ice-cream".
I just specified that it's an ice-cream without saying its flavor. My daughter could no longer determine if it was Red, or Brown. If it was having a fruit flavor or not. I gave her existential-like information.
Had I told her it was chocolate, I would have given her specific information, which is then not existential.
†: In Example B, we're not ignoring the specific type information.
Special thanks to a friend who helped me come up with this answer
I feel like it's worth adding something about why the phrase is important in Swift. And in particular, I think almost always, Swift is talking about "existential containers". They talk about "existential types", but only really with reference to "stuff that is stored in an existential container". So what is an "existential container"?
As I see it, the key thing is, if you have a variable that you're passing as a parameter or using locally, etc. and you define the type of the variable as Shape then Swift has to do some things under the hood to make it work and that's what they are (obliquely) referring to.
If you think about defining a function in a library/framework module that you're writing that is publicly available and takes for example the parameters public func myFunction(shape1: Shape, shape2: Shape, shape1Rotation: CGFloat?) -> Shape... imagine it (optionally) rotates shape1, "adds" it to shape2 somehow (I leave the details up to your imagination) then returns the result. Coming from other OO languages, we instinctively think that we understand how this works... the function must be implemented only with members available in the Shape protocol.
But the question is for the compiler, how are the parameters represented in memory? Instinctively, again, we think... it doesn't matter. When someone writes a new program that uses your function at some point in the future, they decide to pass their own shapes in and define them as class Dinosaur: Shape and class CupCake: Shape. As part of defining those new classes, they will have to write implementations of all the methods in protocol Shape, which might be something like func getPointsIterator() -> Iterator<CGPoint>. That works just fine for classes. The calling code defines those classes, instantiates objects from them, passes them into your function. Your function must have something like a vtable (I think Swift calls it a witness table) for the Shape protocol that says "if you give me an instance of a Shape object, I can tell you exactly where to find the address of the getPointsIterator function". The instance pointer will point to a block of memory on the stack, the start of which is a pointer to the class metadata (vtables, witness tables, etc.) So the compiler can reason about how to find any given method implementation.
But what about value types? Structs and enums can have just about any format in memory, from a one byte Bool to a 500 byte complex nested struct. These are usually passed on the stack or registers on function calls for efficiency. (When Swift exactly knows the type, all code can be compiled knowing the data format and passed on the stack or in registers, etc.)
Now you can see the problem. How can Swift compile the function myFunction so it will work with any possible future value type/struct defined in any code? As I understand it, this is where "existential containers" come in.
The simplest approach would be that any function that takes parameters of one of these "existential types" (types defined just by conforming to a Protocol) must insist that the calling code "box" the value type... that it store the value in a special reference counted "box" on the heap and pass a pointer to this (with all the usual ARC retain/release/autorelease/ownership rules) to your function when the function takes a parameter of type Shape.
Then when a new, weird and wonderful, type is written by some code author in the future, compiling the methods of Shape would have to include a way to accept "boxed" versions of the type. Your myFunction would always handle these "existential types" by handling the box and everything works. I would guess that C# and Java do something like this (boxing) if they have the same problem with non class types (Int, etc.)?
The thing is that for a lot of value types, this can be very inefficient. After all, we are compiling mostly for 64 bit architectures, where a single register holds 8 bytes, so a couple of registers are enough for many simple structures. So Swift came up with a compromise (again, I might be a bit inaccurate on this; I'm giving my idea of the mechanism... feel free to correct). They created "existential containers" of a small fixed size: three machine words of inline storage plus pointers to the type metadata and witness tables, i.e. five words, or 40 bytes, on a "normal" 64 bit architecture (most CPUs that run Swift these days).
If you define a struct that conforms to a protocol and it fits in those three words (24 bytes or less), then it is stored in the existential container directly. The trailing pointers give access to the type information/witness tables/etc., so that myFunction can find an address for any function in the Shape protocol (just like in the classes case above). If your struct/enum is larger than 24 bytes, the container instead points to a boxed version of the value type on the heap. Obviously this was considered an optimum compromise, and it seems reasonable... the container will be passed around in a handful of registers in most cases, or in stack slots if "spilled".
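A small self-contained experiment (assuming a 64-bit platform and Swift 5.7+ for the any spelling) makes the container visible; the exact numbers are an implementation detail:

protocol Shape { func area() -> Double }

struct Small: Shape {                   // 16 bytes: fits the inline buffer
    var w, h: Double
    func area() -> Double { w * h }
}

struct Large: Shape {                   // 32 bytes: spills into a heap box
    var a, b, c, d: Double
    func area() -> Double { a * b }
}

print(MemoryLayout<Small>.size)         // 16
print(MemoryLayout<Large>.size)         // 32
print(MemoryLayout<any Shape>.size)     // 40: 3-word buffer + metadata + witness table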
I think the reason the Swift team ends up mentioning "existential containers" to the wider community is that they have implications for various ways of using Swift. One obvious implication is performance: there's a sudden performance drop when using functions in this way if the structs don't fit the 24 byte inline buffer.
Another, more fundamental, implication is that protocols can be used as parameter types in this way only if they don't have associated type or Self requirements... such parameters are not generic. Otherwise you're into generic function definitions, which is different. That's why we sometimes need to change things like func myFunction(shape: Shape, reflection: Bool) -> Shape into something like func myFunction<S: Shape>(shape: S, reflection: Bool) -> S. They are implemented in very different ways under the hood.
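For instance, a sketch of the restriction (Rotatable and spin are made-up names):

protocol Rotatable {
    func rotated() -> Self
}

// func spin(shape: Rotatable) -> Rotatable
// ^ rejected before Swift 5.7: Rotatable can only be used as a generic
//   constraint because it has a Self requirement

func spin<S: Rotatable>(shape: S) -> S {  // the generic version always works
    shape.rotated()
}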

Overloading vs. Overriding in Julia

I am not familiar with Julia, but I've noticed that it seems to allow you to define functions multiple times with different signatures, such as this:
FK5Coords{e}(ra::T, dec::T) where {e,T<:AbstractFloat} = FK5Coords{e, T}(ra, dec)
FK5Coords{e}(ra::Real, dec::Real) where {e} =
    FK5Coords{e}(promote(float(ra), float(dec))...)
To me it looks like this allows you to call FK5Coords with two different signatures.
So I'm wondering (a) if that is true, if Julia allows overloading functions like this, and (b) if Julia allows something like super in a function, which seems like it would conflict with overloading. And (c), what an example snippet of Julia code looks like that shows (1) overloading in one example, and (2) overriding in the other.
The reason I'm asking is because I am wondering how Julia solves the problem of having both super and function overloading, because both require defining the function again and it seems you would have to flag it with some metadata or something to say "in this case I am overriding" or "in this case I am overloading".
Note: If that was not an example of overloading, then (from Wikipedia) this was what I was imagining Julia supported (along these lines):
// volume of a cylinder
double volume(const double r, const int h)
{
    return 3.1415926*r*r*static_cast<double>(h);
}

// volume of a cuboid
long volume(const long l, const int b, const int h)
{
    return l*b*h;
}
So I'm wondering (a) if that is true, if Julia allows overloading functions like this
Julia allows you to write different versions of the same function (different "methods" for the function) that differ in the type/number of arguments. That's pretty similar to overloading, except that overloading usually means the function to be called is decided based on the compile-time type of the arguments, whereas in Julia it's decided based on the run-time type of the arguments. This is commonly called dynamic dispatch. See this C++ example to see what overloading lacks and dispatch gives you.
(b) if Julia allows something like super in a function, which seems like it would conflict with overloading
The reason I'm asking is because I am wondering how Julia solves the problem of having both super and function overloading, because both require defining the function again and it seems you would have to flag it with some metadata or something to say "in this case I am overriding" or "in this case I am overloading".
I'm not sure why you think overloading would conflict with super. In C++, overriding involves having exactly the same argument numbers and types, whereas overloading requires that either the number or the type of the arguments differ. Compilers are smart enough to easily distinguish between those two cases, so AFAICT C++ could have a super method call despite having both overloading and overriding. I believe (with my limited C++ knowledge) that multiple inheritance, not overloading, is the reason C++ doesn't have a super method call.
Anyway, if you peel back the object-oriented curtain and look at method signatures, you'll see that all overriding is really a particular kind of overloading: Dog::run(int dist, int dir) can override Animal::run(int dist, int dir) (assume Dog inherits from Animal), but that's equivalent to overloading a run(Animal a, int dist, int dir) function with a run(Dog d, int dist, int dir) definition. (If run were a virtual function, this would be dynamic dispatch instead of overloading, but that's a separate discussion.)
In Julia we do this explicitly, so the definitions would be run(d::Dog, dist::Int, dir::Int) and run(a::Animal, dist::Int, dir::Int). However, in Julia you can only inherit from abstract types, so here the supertype Animal would be an abstract type, and you can't really call the second method with an Animal instance - the second method definition is really a shorthand way of saying "call this method for any instance of some concrete subtype of Animal, unless that subtype has its own separate method definition" (which Dog does, in this case). The closest thing to a super call here is the built-in invoke function, which calls a method for an explicitly given (less specific) argument-type signature.
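A minimal sketch of invoke in that role (the types are made up for the example):

abstract type Animal end
struct Dog <: Animal end

run(a::Animal, dist::Int, dir::Int) = println("Animal running $dist")
function run(d::Dog, dist::Int, dir::Int)
    println("Dog running $dist")
    invoke(run, Tuple{Animal,Int,Int}, d, dist, dir)  # the "super" call
end

run(Dog(), 10, 1)  # prints "Dog running 10", then "Animal running 10"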
(You can also 'override' a method from another module with import, but if it has completely the same parameters and parameter types, you'd probably be committing type piracy, which is usually a bad idea. I'm not aware of any way of getting back the original method after this type of overriding. "Overloading" (using dispatch) by defining and using your own types is much more common anyway.)
(c), what an example snippet of Julia code looks like that shows (1) overloading in one example, and (2) overriding in the other.
The first code snippet you posted is an example of using dispatch (which is what Julia uses instead of overloading). For another example, let's first define our base type and function:
abstract type Vehicle end

function move(v::Vehicle, dist::Float64)
    println("Moving by $dist meters")
end
Now we can create another method of this function for dispatch ("overload" it) this way, defining a small LightYears type for the example:

struct LightYears; value::Float64; end

function move(v::Vehicle, dist::LightYears)
    println("Blazing across $(dist.value) light years")
end
We can do an object-oriented style "override" too (though at the language level this is just seen as another method for dispatch):
struct Car <: Vehicle
    model::String
end

function move(c::Car, dist::Float64)
    println("$(c.model) is moving $dist meters")
end
This is the equivalent of overriding Vehicle.move(float dist) in a derived class as Car.move(float dist).
And just for the heck of it, the volume function from the question:
# volume of a cylinder
volume(r::Float64, h::Int) = π*r*r*h
volume(l::Int, b::Int, h::Int) = l*b*h
Now the correct volume method to call will be decided based on the number (and type) of arguments passed (and the return type is automatically inferred by the compiler, Float64 for the first method and Int for the second one).

def layout[A](x: A) = ... syntax in Scala

I'm a Scala beginner struggling with the syntax.
I got the line of code from https://www.tutorialspoint.com/scala/higher_order_functions.htm.
I know (x: A) is the argument of the layout function (which means an argument x of type A).
But what is [A] between layout and (x: A)?
I've been googling Scala function syntax, but couldn't find it.
def layout[A](x: A) = "[" + x.toString() + "]"
It's a type parameter, meaning that the method is parameterised (some also say "generic"). Without it, the compiler would think that x: A denotes a variable of some concrete type A, and when it didn't find any such type, it would report a compile error.
This is a fairly common thing in statically typed languages; for example, Java has the same thing, only the syntax is <A>.
Type parameters have rules about where they can occur, which involve the concepts of covariance and contravariance, denoted as [+A] and [-A]. Variance is definitely not in the scope of this question and is probably too much for you to handle right now, but it's an important concept, so I figured I'd mention it, at least to let you know what those plus and minus signs mean when you see them (and you will).
Also, type parameters can be upper or lower bounded, denoted as [A <: SomeType] and [A >: SomeType]. This means that the generic parameter needs to be a subtype/supertype of another type, in this case the made-up type SomeType.
There are even more constructs that contribute extra information about the type (e.g. context bounds, denoted as [A : Foo], used for typeclass mechanism), but you'll learn about those later.
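To make the bound syntax concrete, here is a small sketch (Animal, Dog, describe and biggest are made-up names):

// Upper bound: A must be a subtype of Animal
trait Animal { def name: String }
class Dog(val name: String) extends Animal

def describe[A <: Animal](a: A): String = "[" + a.name + "]"

// Context bound: an implicit Ordering[A] must be in scope
def biggest[A: Ordering](xs: Seq[A]): A = xs.max

describe(new Dog("Rex")) // "[Rex]"
biggest(Seq(3, 1, 2))    // 3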
This means that the method is using a generic type as its parameter. Every type with a .toString definition can be passed through layout.
For example, you could pass both Int and String arguments to layout, since you can call .toString on both of them:
val i = 1
val s = "hi"
layout(i) // would give "[1]"
layout(s) // would give "[hi]"
Without the generic parameter, for this example you would have to write two definitions of layout: one that accepts an integer parameter and one that accepts a string parameter. Even worse: every time you needed another type, you'd have to write another definition that accepts it.
Take a look at this example here and you'll understand it better.
I also recommend you take a look at generic classes here.
A is a type parameter. Rather than being a concrete data type itself (e.g. a case class A), it is generic, allowing any data type to be accepted by the function. So both of these will work:
layout(123f) (a Float) will output: "[123.0]"
layout("hello world") (a String) will output: "[hello world]"
Hence, whichever datatype is passed, the function will accept it. Type parameters can also specify rules; these are called contravariance and covariance. Read more about them here!

Scala recursive type alias error

I have a couple of functions whose only parameter requirement is that it has some sort of collection that is also growable (i.e. it could be a Queue, List, PriorityQueue, etc.), so I attempted to create the following type alias:
type Frontier = Growable[Node] with TraversableLike[Node, Frontier]
to use with function definitions like so:
def apply(frontier: Frontier) = ???
but the type alias produces the error "illegal cyclic reference involving type Frontier". Is there any way to get around the illegal cyclic reference and use the type alias, or something similar to it?
One solution is to use the following:
def apply[F <: Growable[Node] with TraversableLike[Node, F]](f: F) = ???
but this seems to add unnecessary verbosity when the function definition is doing seemingly the exact same thing as the type alias. The type is also used in other places, so a type alias would greatly increase readability.
From section 4.3 of the spec:
The scope rules for definitions (§4) and type parameters (§4.6) make it possible that a type name appears in its own bound or in its right-hand side. However, it is a static error if a type alias refers recursively to the defined type constructor itself.
So no, there's no way to do this directly, but you can accomplish much the same thing with a type parameter on the type alias:
type Frontier[F <: Frontier[F]] = Growable[Node] with TraversableLike[Node, F]
Now you just write your apply like this:
def apply[F <: Frontier[F]](frontier: F) = ???
Still a little more verbose than your hypothetical first version, but shorter than writing the whole thing out.
You could also just use the wildcard shorthand for an existential type:
type Frontier = Growable[Node] with TraversableLike[Node, _]
Now your first apply will work as it is. You're just saying that there has to be some type that fits that slot, but you don't care what it is.
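A quick sketch of that version in use (against the pre-2.13 collections, where Growable lives in scala.collection.generic and TraversableLike still exists; Node is a made-up type):

import scala.collection.generic.Growable
import scala.collection.TraversableLike
import scala.collection.mutable

case class Node(id: Int)

type Frontier = Growable[Node] with TraversableLike[Node, _]

def apply(frontier: Frontier): Unit = frontier += Node(0)

apply(mutable.Queue.empty[Node])      // a Queue is both Growable and Traversable-like
apply(mutable.ListBuffer.empty[Node]) // so is a ListBuffer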
In this case specifically, though, is there a reason you're not using Traversable[Node] instead? It would accomplish practically the same thing, and isn't parameterized on its representation type.

How can I serialize anything without specifying type?

I'm integrating with ZeroMQ and Akka to send case classes from different instances. Problem is that when I try to compile this code:
def persistRelay(relayEvent:String, relayData:Any) = {
    val relayData = ser.serialize(relayData).fold(throw _, identity)
    relayPubSocket ! ZMQMessage(Seq(Frame(relayEvent), Frame(relayData)))
}
The Scala compiler throws back recursive value relayData needs type.
The case classes going in are all different and look like Team(users:List[Long], teamId:Long) and so on.
Is there a method to allow any type in the serializer or a workaround? I'd prefer to avoid writing a serializer for every single function creating the data unless absolutely necessary.
Thanks!
This isn't really a typing issue. The problem is:
val relayData = ser.serialize(relayData).fold(throw _, identity)
You're declaring a val relayData in the same line that you're making a reference to the method parameter relayData. The Scala compiler doesn't understand that you have/want two variables with the same name, and, instead, interprets it as a recursive definition of val relayData. Changing the name of one of those variables should fix the error.
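For example, a minimal sketch of the fix (ser, relayPubSocket, ZMQMessage and Frame are the identifiers from the question):

def persistRelay(relayEvent: String, relayData: Any) = {
    // `serialized` no longer shadows the parameter, so `relayData` on the
    // right-hand side refers to the parameter again
    val serialized = ser.serialize(relayData).fold(throw _, identity)
    relayPubSocket ! ZMQMessage(Seq(Frame(relayEvent), Frame(serialized)))
}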
Regardless, since you didn't quite follow what the Scala compiler was asking for, I think it would also be good to fill you in on what the compiler actually wanted from you (even though that advice, if followed, would probably just have led to yet another error that wouldn't seem to make much sense, given the circumstances).
It said "recursive value relayData needs type". The meaning of this is that it wanted you to simply specify the type of relayData by having
val relayData = ...
become something like
val relayData: Serializable = ...
(or, in place of Serializable, use whatever type it was that you wanted relayData to have)
It needs this information in order to create a recursive definition. For instance, take the simple case of
val x = x + 1
This code is... bizarre, to say the least, but what I'm doing is defining x in a (shallowly) recursive way. But there's a problem: how can the compiler know what type to use for the inner x? It can't really determine the type through type inference, because type inference involves leveraging the type information of other definitions, and this definition requires x's type information. Now, we might be able to infer that I'm probably talking about an Int, but, theoretically, x could be so many things! In fact, here's the ambiguity in action:
val x: Int = x + 1 // Default value for an Int is '0'
x: Int = 1
val y: String = y + 1 // Default value for a String is 'null'
y: String = null1
All that really changed was the type annotation, but the results are drastically different - and this is only a very simple case! So, yeah, to summarize all this... in most cases, when the compiler complains about recursive values needing types, you should just have some empathy for the poor compiler and give it the type information that it so direly craves. It would do the same for you, DeLongey! It would do the same for you!