ReasonML syntax: meaning of |

What does the | symbol mean in ReasonML?
E.g.:
type something =
| SomeFunc()
| AnotherFunc()
I couldn't really find an answer in the ReasonML docs.

Essentially, this particular one is a case of defining a custom type.
We are defining a new type called something, whose values can be created using either the function SomeFunc or AnotherFunc. More specifically, these functions are called constructor functions, and they are quite useful with pattern matching.
You can read more about them in the OCaml documentation.
You can also find the pipe symbol (|) inside pattern-matching constructs, where it separates the individual match cases.
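For illustration, here is a minimal sketch in Reason syntax (the constructor payloads and the describe function are invented, not taken from the question):

type something =
  | SomeFunc(int)
  | AnotherFunc(string);

let describe = (x: something) =>
  switch (x) {
  | SomeFunc(n) => "SomeFunc carrying " ++ string_of_int(n)
  | AnotherFunc(s) => "AnotherFunc carrying " ++ s
  };

/* describe(SomeFunc(3)) evaluates to "SomeFunc carrying 3" */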

Related


How do purely functional compilers annotate the AST with type info?

In the syntax analysis phase, an imperative compiler can build an AST out of nodes that already contain a type field that is set to null during construction, and then later, in the semantic analysis phase, fill in the types by assigning the declared/inferred types into the type fields.
How do purely functional languages handle this, where you do not have the luxury of assignment? Is the type-less AST mapped to a different kind of type-enriched AST? Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?
Are there purely functional programming tricks that help the compiler writer with this problem?
I usually rewrite a source AST (or one that has already been lowered in several steps) into a new form, replacing each expression node with a pair (tag, expression).
Tags are unique numbers or symbols which are then used by the next pass, which derives type equations from the AST. E.g., a + b will yield something like { numeric(Tag_a). numeric(Tag_b). equals(Tag_a, Tag_b). equals(Tag_e, Tag_a). }, where Tag_e is the tag of the whole a + b expression.
Then the type equations are solved (e.g., by simply running them as a Prolog program) and, if successful, all the tags (which are variables in this program) end up bound to concrete types; if not, they are left as type parameters.
In the next step, our previous AST is rewritten again, this time replacing the tags with the inferred type information.
The whole process is a sequence of pure rewrites; there is no need to update anything in your AST destructively. A typical compilation pipeline may take a couple of dozen rewrites, some of them changing the AST datatype.
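As a rough sketch of that first tagging rewrite in Haskell (the datatypes and names are invented here, not the author's code):

import Control.Monad.State

-- Untagged and tagged expression forms.
data Expr       = Var String | Add Expr Expr
data TaggedExpr = TVar Int String | TAdd Int TaggedExpr TaggedExpr

-- Hand out a fresh tag by threading a counter purely through the State monad.
fresh :: State Int Int
fresh = do { n <- get; put (n + 1); return n }

-- Rewrite the tree into a new, tagged one; nothing is updated in place.
tag :: Expr -> State Int TaggedExpr
tag (Var x)   = TVar <$> fresh <*> pure x
tag (Add a b) = TAdd <$> fresh <*> tag a <*> tag b

-- e.g. evalState (tag (Add (Var "a") (Var "b"))) 0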
There are several options to model this. You may use the same kind of nullable data fields as in your imperative case:
data Exp = Var Name (Maybe Type) | ...
parse :: String -> Maybe Exp -- types are Nothings here
typeCheck :: Exp -> Maybe Exp -- turns Nothings into Justs
or even, using a more precise type
data Exp ty = Var Name ty | ...
parse :: String -> Maybe (Exp ())
typeCheck :: Exp () -> Maybe (Exp Type)
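To make the second option concrete, here is a self-contained toy version (the Type type and the constructors are invented for illustration):

data Type   = TInt | TBool deriving Show
data Exp ty = LitI Int ty | LitB Bool ty | Not (Exp ty) ty deriving Show

expType :: Exp Type -> Type
expType (LitI _ t) = t
expType (LitB _ t) = t
expType (Not _ t)  = t

-- Turns the () placeholders into real Type annotations, or fails.
typeCheck :: Exp () -> Maybe (Exp Type)
typeCheck (LitI n ()) = Just (LitI n TInt)
typeCheck (LitB b ()) = Just (LitB b TBool)
typeCheck (Not e ())  = do
  e' <- typeCheck e
  case expType e' of
    TBool -> Just (Not e' TBool)
    _     -> Nothing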
I can't speak for how it is supposed to be done, but I did do this in F# for a C# compiler here.
The approach was basically: build an AST from the source, leaving things like type information unconstrained - so AST.fs is basically the AST, with strings for the type names, function names, etc.
As the AST starts to be compiled to (in this case) .NET IL, we end up with more type information (we create the types in the source - let's call these type stubs). This then gives us the information needed to create method stubs (the code may have signatures that include type stubs as well as built-in types). From here we now have enough type information to resolve any of the type names or method signatures in the code.
I store that in the file TypedAST.fs. I do this in a single pass; however, the approach may be naive.
Now we have a fully typed AST you could then do things like compile it, fully analyze it, or whatever you like with it.
So in answer to the question "Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?", I can't say definitively that this is the case, but it is certainly what I did, and it appears to be what Microsoft have done with Roslyn (although IIRC they have essentially decorated the original tree with type info).
"Are there purely functional programming tricks that help the compiler writer with this problem?"
Given that the ASTs are essentially mirrored in my case, it would be possible to make the node type generic and transform the tree, but the code may end up (more) horrendous.
i.e.
type 'ty AST =
  | MethodInvoke of 'ty * Name * 'ty list
  | ....
As when dealing with relational databases, in functional programming it is often a good idea not to put everything into a single data structure.
In particular, there may not be a data structure that is "the AST".
Most probably, there will be data structures that represent parsed expressions. One possible way to deal with type information is to assign a unique identifier (like an integer) to each node of the tree already during parsing and have some suitable data structure (like a hash map) that associates those node-ids with types. The job of the type inference pass, then, would be just to create this map.
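A possible sketch of that node-id approach in Haskell (all names are invented, and the "inference" is deliberately trivial):

import qualified Data.Map as Map

type NodeId = Int
data Type   = TInt deriving Show

-- Each node carries only its id; inferred types live in a separate map.
data Expr = Lit NodeId Int | Add NodeId Expr Expr deriving Show

-- The inference pass only builds the id-to-type map; the tree itself is untouched.
inferTypes :: Expr -> Map.Map NodeId Type
inferTypes = go Map.empty
  where
    go env (Lit i _)   = Map.insert i TInt env
    go env (Add i a b) = Map.insert i TInt (go (go env a) b)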

Fortran numeric class

In Fortran I can use Class (*) in a subroutine and use
Select Type (ir)
Type Is (Integer (Int8))
Type Is (Integer (Int16))
End Select
Does there exist any way to pass a numeric value rather than using Class (*), for example by using Class (Integer) or something similar?
Intrinsic types are not extended types; they have no common ancestor, and nothing like that exists. You can use unlimited polymorphism (class(*)), or you must indicate the exact type and kind (e.g., real(dp)). You can also write type(real) in Fortran 2008, but that does not change anything; it is just a different syntax for the same thing.
Have a look at some common techniques for generic programming with different kinds, for example How to make some generic programming in fortran 90/95 working with intrinsic types, STL analogue in Fortran, and others. You normally write a separate procedure for each kind and paste the code body in from an include file.
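As a minimal sketch of that pattern (the module and procedure names are invented; in real code the two bodies would typically be pasted in from a shared include file):

module describe_mod
  use iso_fortran_env, only: int8, int16
  implicit none
  interface describe
    module procedure describe_int8, describe_int16
  end interface describe
contains
  subroutine describe_int8(ir)
    integer(int8), intent(in) :: ir
    print *, '8-bit integer:', ir
  end subroutine describe_int8
  subroutine describe_int16(ir)
    integer(int16), intent(in) :: ir
    print *, '16-bit integer:', ir
  end subroutine describe_int16
end module describe_mod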

Decoupling via Interfaces in Go... Slice of interface implementors?

OK. I know this is a FAQ, and I think the answer is "give up, it doesn't work that way", but I just want to make sure I'm not missing something.
I am still wrapping my head around best practices and rules for use of interfaces. I have code in different packages that I'd prefer to keep decoupled, something like so (doesn't work, or I wouldn't be here):
package A

type Foo struct{}

func (f *Foo) Bars() ([]*Foo, error) {
    foos := make([]*Foo, 0)
    // some loop which appends a bunch of related *Foo to foos
    return foos, nil
}

package B

type Foolike interface {
    Bars() ([]Foolike, error)
}

func DoSomething(f Foolike) error {
    // blah
}
With this, the compiler complains:
cannot use f (type *A.Foo) as type Foolike in argument to B.DoSomething:
*A.Foo does not implement Foolike (wrong type for Bars method)
have Bars() ([]*A.Foo, error)
want Bars() ([]Foolike, error)
Now, I grok that []Foolike is not an interface signature itself; it's the signature for a slice of Foolike interfaces. I think I also grok that the compiler treats []*A.Foo and []Foolike as different things because ... (mumble memory allocation, strict typing mumble).
My question is: Is there a correct way to do what I ultimately want, which is to let B.DoSomething() accept an *A.Foo without having to import A and use *A.Foo in B.DoSomething()'s function signature (or worse, in the interface definition)? I'm not hung up on trying to trick the compiler or get into crazy runtime tricks. I understand that I could probably change the implementation of Foo.Bars() to return []Foolike, but that seems stupid and wrong (Why should A have to know anything about B? That breaks the whole point of decoupling things!).
I guess another option is to remove Bars() as a requirement for implementing the interface and rely on other methods to enforce the requirement. That feels less than ideal, though (what if Bars() is the only exported method?). Edit: No, that won't work because then I can't use Bars() in DoSomething(), because it's not defined in the interface. Sigh.
If I'm just Doing It Wrong™, I'll accept that and figure something else out, but I hope I'm just not getting some aspect of how it's supposed to work.
As the error message says, you can't treat the []Foolike and []*Foo types interchangeably.
For a []*Foo slice, the backing array will look something like this in memory:
| value1 | value2 | value3 | ... | valueN |
Since we know the values are going to be of type *Foo, they can be stored sequentially in a straightforward manner. In contrast, each element of a []Foolike slice could be of a different type (provided it conforms to Foolike), so the backing array would look more like:
| type1 | value1 | type2 | value2 | type3 | value3 | ... | typeN | valueN |
So it isn't possible to do a simple cast between the types: it would be necessary to create a new slice and copy over the values.
So your underlying type will need to return a slice of the interface type for this to work.
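A minimal, self-contained sketch of that copy (the Namer interface and the Foo type here are invented for illustration):

package main

import "fmt"

type Namer interface {
    Name() string
}

type Foo struct{ name string }

func (f *Foo) Name() string { return f.name }

func main() {
    foos := []*Foo{{name: "a"}, {name: "b"}}

    // A direct conversion such as []Namer(foos) will not compile; the layouts differ.
    namers := make([]Namer, 0, len(foos))
    for _, f := range foos {
        namers = append(namers, f) // each *Foo is wrapped in an interface value here
    }
    fmt.Println(len(namers), namers[0].Name())
}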

What is the difference between = and := in Scala?

What is the difference between = and := in Scala?
I have googled extensively for "scala colon-equals", but was unable to find anything definitive.
= in Scala is the actual assignment operator -- it does a handful of specific things that, for the most part, you don't have control over, such as:
Giving a val or var a value when it's created
Changing the value of a var
Changing the value of a field on a class
Making a type alias
Probably others
:= is not a built-in operator -- anyone can overload it and define it to mean whatever they like. The reason people like to use := is that it looks very assignment-like and is used as an assignment operator in other languages.
So, if you're trying to find out what := means in the particular library you're using... my advice is look through the Scaladocs (if they exist) for a method named :=.
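For instance, a user-defined := might look like this (a sketch; the Cell class is invented, not taken from any real library):

class Cell[A](private var value: A) {
  def :=(newValue: A): Unit = { value = newValue } // user-defined "assignment"
  def get: A = value
}

val c = new Cell(1)
c := 42        // just sugar for c.:=(42)
println(c.get) // prints 42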
from Martin Odersky:
Initially we had colon-equals for assignment—just as in Pascal, Modula, and Ada—and a single equals sign for equality. A lot of programming theorists would argue that that's the right way to do it. Assignment is not equality, and you should therefore use a different symbol for assignment. But then I tried it out with some people coming from Java. The reaction I got was, "Well, this looks like an interesting language. But why do you write colon-equals? What is it?" And I explained that it's like that in Pascal. They said, "Now I understand, but I don't understand why you insist on doing that." Then I realized this is not something we wanted to insist on. We didn't want to say, "We have a better language because we write colon-equals instead of equals for assignment." It's a totally minor point, and people can get used to either approach. So we decided to not fight convention in these minor things, when there were other places where we did want to make a difference.
from The Goals of Scala's Design
= performs assignment. := is not defined in the standard library or the language specification. It's a name that is free for other libraries or your code to use, if you wish.
Scala allows for operator overloading, where you can define the behaviour of an operator just like you could write a method.
As in other languages, = is an assignment operator.
There is no standard operator I'm aware of called :=, but you could define one with this name. If you see an operator like this, you should check the documentation of whatever you're looking at, or search for where that operator is defined.
There is a lot you can do with Scala operators. You can essentially make an operator out of virtually any characters you like.
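For instance, here is a small sketch of operator overloading (the Vec class and the |*| operator are made up):

case class Vec(x: Double, y: Double) {
  def +(other: Vec): Vec = Vec(x + other.x, y + other.y) // overloads +
  def |*|(k: Double): Vec = Vec(x * k, y * k)            // an arbitrary symbolic name
}

val v = Vec(1, 2) + Vec(3, 4) // Vec(4.0, 6.0)
val w = Vec(1, 2) |*| 2.0     // Vec(2.0, 4.0)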