Is there a way to overload the constructor for a struct in Racket, so I can make the inherited parameters optional?
In my case, I want to define some custom exceptions for my app.
For example:
(struct exn:my-app exn ())
(struct exn:my-app:illegal-access exn:my-app ())
However, to instantiate an illegal-access exception, I have to call the constructor with the 2 arguments inherited from exn (message and continuation-marks), which is quite cumbersome.
Is it possible to define (for exn:my-app and all its descendants) a constructor that makes both parameters optional? Then I could call either:
(raise (exn:my-app:illegal-access))
(raise (exn:my-app:illegal-access "Message"))
Here's one way to do it:
(struct exn:my-app exn ()
  ;; change the name of the constructor
  #:constructor-name make-exn:my-app)

;; custom constructor
(define (exn:my-app [msg "default msg"]
                    [marks (current-continuation-marks)])
  (make-exn:my-app msg marks))
(exn:my-app) ; this works now
Since you need to do this for each structure type, you may want to define a macro that abstracts over this. I bet someone has already shared such a macro on the Racket mailing list, but I don't recall one off the top of my head so I'll update this answer if I find a reference.
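In the meantime, here is a rough sketch of what such a macro could look like. Everything here is illustrative rather than from any library: define-my-exn is a made-up name, and the macro takes a caller-supplied name for the convenience constructor (the *-suffixed names below) so it cannot clash with the struct binding. The raw constructor name is macro-introduced, so hygiene keeps each expansion's copy distinct:

(define-syntax-rule (define-my-exn name parent ctor)
  (begin
    ;; hide the raw constructor behind a macro-introduced name
    (struct name parent ()
      #:constructor-name raw-ctor)
    ;; public constructor with both arguments optional
    (define (ctor [msg "default msg"]
                  [marks (current-continuation-marks)])
      (raw-ctor msg marks))))

;; usage:
(define-my-exn exn:my-app exn exn:my-app*)
(define-my-exn exn:my-app:illegal-access exn:my-app exn:my-app:illegal-access*)

;; (raise (exn:my-app:illegal-access*))           ; default message
;; (raise (exn:my-app:illegal-access* "Message")) ; custom message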
If we have optional values foo and bar, Swift will allow us to write:
foo?.doSomething(bar)
Which will evaluate to nil if foo is nil. But it will not let us write:
foo?.doSomething(bar?)
That is, optional chaining only works on the value outside a function call, not on arguments inside the argument list. (The reasons for this limitation are unclear, but here we are.)
Suppose I want to write an apply function that lets me move things into the jurisdiction of optional chaining, like so:
bar?.apply { foo?.doSomething($0) }
Here, apply is a generic function that takes one argument (in this case bar) and then executes the closure. So if either foo or bar is nil, the expression will be nil.
Here's what I’ve tried:
public protocol HasApply {}

extension HasApply {
    public func apply<T>(_ f: (Self) -> T) -> T {
        f(self)
    }
}
That’s fine as far as it goes. But to make it work, I still have to explicitly apply the protocol to the types I care about:
extension Int : HasApply {}
OK, that makes it work with Int. But I don’t want to copy & paste for every type. So I try this:
extension AnyObject : HasApply {}
No, that won’t work: the error is Non-nominal type 'AnyObject' cannot be extended.
Hence the question: is there no way to make this generic function work as a protocol method?
is there no way to make this generic function work as a protocol method?
No, you must "explicitly apply the protocol to the types I care about".
However, you are in fact reinventing the wheel. This is the use case of flatMap/map. If both foo and bar are optional, you can write:
bar.flatMap { foo?.doSomething($0) }
Note the lack of ? after bar. You are calling flatMap on Optional, rather than bar's type. If doSomething returns T, the above expression will return T?.
If only bar is optional, use map:
bar.map { foo.doSomething($0) }
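To make the types in the flatMap case concrete, here is a minimal self-contained sketch; Greeter and its doSomething are stand-ins for whatever your real types are:

struct Greeter {
    func doSomething(_ n: Int) -> String { "got \(n)" }
}

let foo: Greeter? = Greeter()
let bar: Int? = 42

// flatMap is called on bar (an Int?), and the closure returns String?
// because of the optional chaining on foo, so the whole expression is
// String?: nil if either foo or bar is nil, otherwise "got 42".
let result: String? = bar.flatMap { foo?.doSomething($0) }
print(result ?? "nothing")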
As Sweeper pointed out, the language already provides the tools for this, in the form of the map/flatMap functions.
But you could also write
if let foo = foo, let bar = bar {
    foo.doSomething(bar)
}
This code is easier to read, understand, and maintain, and it clearly conveys the intent: you want doSomething to be called only if both the receiver of the call and its argument are non-nil.
Now, as to why it would not be a good idea for the language to have this feature built in: it comes down to the way the code is evaluated, from left to right.
Optional chaining is a short-circuiting operator, thus
foo?.someExpensiveComputation().doSomething(bar)
will stop at runtime as soon as it detects that foo is nil, which means that someExpensiveComputation will not be executed. The same cannot be said about a construct like this:
foo?.someExpensiveComputation().doSomething(bar?)
Assuming foo is not nil but bar is nil, the program would execute someExpensiveComputation just to find out that doSomething doesn't need to be executed. Thus, the short-circuit no longer applies.
Let's take another example and assume doSomething has two parameters:
foo?.doSomething(someExpensiveComputation(), bar)
Again, evaluation proceeds from left to right, so the expensive computation would be performed, only to be thrown away once the program detects at runtime that the second argument is nil.
Now, yes, the compiler could implement some advanced heuristics to look ahead for possible nil values, but this would be highly complicated and would add a lot of runtime overhead.
The bottom line: the language gives you short-circuits as long as they are well behaved, predictable, and don't overwhelm the compiler.
I have the following two classes:
(defclass person () ())

(defmethod speak ((s person) string)
  (format t "~A" string))

(defmethod speak :before ((s person) string)
  (print "Hello! "))

(defmethod speak :after ((s person) string)
  (print "Have a nice day!"))

(defclass speaker (person) ())

(defmethod speak ((i speaker) string)
  (print "Bonjour!"))

(speak (make-instance 'speaker) "Can I help you?")
And the output of this is:
"Hello! "
"Bonjour!"
"Have a nice day!"
What I'm trying to figure out is the order in which these methods are executed. I cannot seem to grasp what is happening and why. Supposedly there is a precedence rule governing this, but I'm not sure where to find it. For example, why does the person method that prints the string ("Can I help you?") never fire in this case?
When you don't have any around methods, the order of method application is: all before methods from most specific to least specific, then the most specific primary method, and then the after methods from least specific to most specific. In your case you have two primary methods (methods without :before or :after next to the name), one which specializes on person and the other on speaker. Since speaker is more specific than person, only the speaker primary method is called. If you want to call multiple primary methods, look at call-next-method.
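For example, building on the classes above: if the speaker primary method invokes call-next-method, the person primary method runs as well:

(defmethod speak ((i speaker) string)
  (print "Bonjour!")
  (call-next-method)) ; now also runs the person primary method

(speak (make-instance 'speaker) "Can I help you?")
;; "Hello! "
;; "Bonjour!"
;; Can I help you?    <- printed by (format t "~A" string)
;; "Have a nice day!"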
While I see that there's already an accepted answer, Common Lisp has some very nice documentation in the HyperSpec, and it's useful to know where to find the full description of what happens. In this case, it's 7.6.6.2 Standard Method Combination, which says (abbreviated):
The semantics of standard method combination is as follows:
If there are any around methods, the most specific around method is called. It supplies the value or values of the generic function. Inside the body of an around method, call-next-method can be used to call the next method. When the next method returns, the around method can execute more code, perhaps based on the returned value or values. The generic function no-next-method is invoked if call-next-method is used and there is no applicable method to call. The function next-method-p may be used to determine whether a next method exists.
If an around method invokes call-next-method, the next most specific around method is called, if one is applicable. If there are no around methods or if call-next-method is called by the least specific around method, the other methods are called as follows:
All the before methods are called, in most-specific-first order. Their values are ignored. An error is signaled if call-next-method is used in a before method.
The most specific primary method is called. Inside the body of a primary method, call-next-method may be used to call the next most specific primary method. When that method returns, the previous primary method can execute more code, perhaps based on the returned value or values. The generic function no-next-method is invoked if call-next-method is used and there are no more applicable primary methods. The function next-method-p may be used to determine whether a next method exists. If call-next-method is not used, only the most specific primary method is called.
All the after methods are called in most-specific-last order. Their values are ignored. An error is signaled if call-next-method is used in an after method.
If no around methods were invoked, the most specific primary method supplies the value or values returned by the generic function. The value or values returned by the invocation of call-next-method in the least specific around method are those returned by the most specific primary method.
There's a particularly helpful illustration at the end of that page that describes the behavior and its motivation:
The before methods are run in most-specific-first order while the after methods are run in least-specific-first order. The design rationale for this difference can be illustrated with an example. Suppose class C1 modifies the behavior of its superclass, C2, by adding before methods and after methods. Whether the behavior of the class C2 is defined directly by methods on C2 or is inherited from its superclasses does not affect the relative order of invocation of methods on instances of the class C1. Class C1's before method runs before all of class C2's methods. Class C1's after method runs after all of class C2's methods.
By contrast, all around methods run before any other methods run. Thus a less specific around method runs before a more specific primary method.
If only primary methods are used and if call-next-method is not used, only the most specific method is invoked; that is, more specific methods shadow more general ones.
In addition to the other answers, note that you can define a custom method combination with the macro DEFINE-METHOD-COMBINATION. There are already ten predefined method combinations, so I don't think it is common to define custom ones. Of course, being able to do so can be very useful at times (see Joshua Taylor's comment).
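For illustration only, here is a minimal sketch of a custom combination (the name list-of-results is made up) that calls every applicable primary method, most specific first, and collects the results into a list:

(define-method-combination list-of-results ()
  ((primary () :required t))
  ;; build a form that calls each applicable primary method
  ;; and returns all of their results as a list
  `(list ,@(mapcar (lambda (method) `(call-method ,method))
                   primary)))

(defgeneric greet (x)
  (:method-combination list-of-results))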
Also, the way your methods are invoked is subject to class inheritance, which by default takes into account parent-child relationships as well as the order of superclasses. Please read "Fundamentals of CLOS". The class precedence list can be changed with the Meta-Object Protocol: see COMPUTE-CLASS-PRECEDENCE-LIST.
I am trying to use a global variable in Scala, accessible in the whole program.
val numMax: Int = 300
object Foo {.. }
case class Costumer { .. }
case class Client { .. }
object main {
  var lst = List[Client]
  // I would like to use Client as an object.
}
I got this error :
error: missing arguments for method apply in object List;
follow this method with `_' if you want to treat it as a partially applied function
var lst = List[A]
How can I create global variables in Scala that are accessible in the main program? Should I use a class or a case class in this case?
This isn't a global variable thing. Rather, you want to say this:
val lst = List(client1, client2)
However, I disagree somewhat with the other answers. Scala isn't just a functional language. It is both functional (maybe not as purely as it should be if you ask the Clojure fans) and object-oriented. Therefore, your OO expertise translates perfectly.
There is nothing wrong with global variables per se. The concern is mutability. Prefer val to var as I did. Also, you need to use object for singletons rather than the static paradigm you might be used to from Java.
The error you quote is unrelated to your attempt to create a global variable: you are missing () after List[Client].
If you must create a global variable, you can put it in an object like Foo and reference it from other objects using Foo.numMax if the variable is called numMax.
However, global variables are discouraged. Maybe pass the data you need into the functions that need it instead. That is the functional way.
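Putting the two answers together, a minimal sketch (all names here are illustrative):

object Config {
  val numMax: Int = 300 // immutable "global", referenced as Config.numMax
}

case class Client(name: String)

object Main extends App {
  val lst = List(Client("a"), Client("b")) // note the parentheses
  val empty = List[Client]()               // an empty list needs () too
  println(Config.numMax)
  println(lst ++ empty)
}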
In a Haskell program I'm trying to debug, there is a class defined:
class Dictionary d where
  lookupIn :: d -> Word -> String
I'd like to create a variable called runs, and make it of type Dictionary so I could use it with the lookupIn function. However, nothing works: I've tried type runs = Dictionary, and even data runs = Dictionary, but neither compiles.
Haskell is not an object-oriented language. A typeclass is not a class. A variable is not a mutable "variable" (although this is irrelevant here), and it is certainly not an object.
See this post.
P.S. I guess this is homework. Try to learn the language first (even a bit); Haskell is most likely more fun than you think.
In Haskell this is not possible. It is possible in other languages with typeclass-like constructs (Scala, Agda), but not in Haskell.
It is possible to make an instance of a class in Haskell:
instance Dictionary () where
  lookupIn _ _ = "no"
And then to use it:
main = do
  putStrLn $ lookupIn () "hello"
And it is true that instances do act a lot like data -- and they are represented by data at runtime. This is why in other languages you can store instances in variables, and pass them around explicitly.
But, in Haskell, it is not possible to name an instance, or to store it in a variable. That is, you cannot do this, or anything like it:
thisInstance :: Dictionary ()
thisInstance = ???
The reason is that in Haskell, for any given typeclass and type there can be at most one instance of that typeclass for that type. That is, you can only ever define one instance Dictionary (). Since there can be only one, there is no point in naming it. This is convenient for Haskell's type inference: any needed instances can be pulled up to "arguments" (really typeclass constraints) of the current function.
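(As an aside, and assuming the same Dictionary class as above: the standard workaround when you genuinely want a second, differently behaving instance for the same underlying type is a newtype wrapper. A sketch:)

newtype Loud = Loud ()

instance Dictionary Loud where
  lookupIn _ _ = "NO!"

-- lookupIn (Loud ()) w now selects this instance rather than the () one.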
Of course it is possible to achieve the same kind of behavior, just not with typeclasses -- records work well for this:
data DictionaryType d = DictionaryData { lookupIn :: d -> Word -> String }
Now lookupIn has the type DictionaryType d -> d -> Word -> String, which is a literal translation of the typeclass-using type (Dictionary d) => d -> Word -> String. You can use it like this:
myDictionary :: DictionaryType ()
myDictionary = DictionaryData (\_ _ -> "no")

main = do
  putStrLn $ lookupIn myDictionary () "hello"
This is functionally identical to the typeclass solution; the only difference is in how the syntax and type checking work.
One way to think of it is this:
In an OO language a class defines both a type (i.e. a set of potential values) and a set of other types (i.e. the potential descendant classes).
In Haskell a typeclass defines only a set of types (i.e. the potential "instances" of the typeclass). A typeclass is not itself a type.
(Actually I'm skating over the distinction in set theory between a set and a class, which is why they are called "typeclasses" not "typesets". But that's not important here.)
Can I specify interfaces when I declare a member?
After thinking about this question for a while, it occurred to me that a static-duck-typed language might actually work. Why can't predefined classes be bound to an interface at compile time? Example:
public interface IMyInterface
{
    void MyMethod();
}

public class MyClass // Does not explicitly implement IMyInterface
{
    public void MyMethod() // But contains a compatible method definition
    {
        Console.WriteLine("Hello, world!");
    }
}

...

public void CallMyMethod(IMyInterface m)
{
    m.MyMethod();
}

...

MyClass obj = new MyClass();
CallMyMethod(obj); // Automatically recognize that MyClass "fits"
                   // IMyInterface, and force a type-cast.
Do you know of any languages that support such a feature? Would it be helpful in Java or C#? Is it fundamentally flawed in some way? I understand you could subclass MyClass and implement the interface or use the Adapter design pattern to accomplish the same thing, but those approaches just seem like unnecessary boilerplate code.
To give a brand-new answer to this question: Go has exactly this feature. I think it's really cool and clever (though I'll be interested to see how it plays out in real life), and kudos for thinking of it.
As documented in the official documentation (as part of the Tour of Go, with example code):
Interfaces are implemented implicitly
A type implements an interface by implementing its methods. There is no explicit declaration of intent, no "implements" keyword.
Implicit interfaces decouple the definition of an interface from its implementation, which could then appear in any package without prearrangement.
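A minimal sketch mirroring the question's example (identifiers adapted to Go conventions):

package main

import "fmt"

type MyInterface interface {
    MyMethod()
}

type MyClass struct{}

// MyClass satisfies MyInterface simply by having this method;
// no "implements" declaration appears anywhere.
func (MyClass) MyMethod() {
    fmt.Println("Hello, world!")
}

func CallMyMethod(m MyInterface) {
    m.MyMethod()
}

func main() {
    CallMyMethod(MyClass{})
}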
How about using templates in C++?
#include <iostream>

class IMyInterface // Inheritance from this is optional
{
public:
    virtual void MyMethod() = 0;
};

class MyClass // Does not explicitly implement IMyInterface
{
public:
    void MyMethod() // But contains a compatible method definition
    {
        std::cout << "Hello, world!" << "\n";
    }
};

template<typename MyInterface>
void CallMyMethod(MyInterface& m)
{
    m.MyMethod(); // instantiation succeeds iff MyInterface has MyMethod
}

int main()
{
    MyClass obj;
    CallMyMethod(obj); // Automatically generate code with MyClass as
                       // MyInterface
}
I haven't actually compiled this code, but I believe it's workable and a pretty trivial C++-ization of the originally proposed (but nonworking) code.
Statically typed languages, by definition, check types at compile time, not at run time. The obvious constraint on the system described above is that the compiler must be able to verify the match when the program is compiled, with no run-time information to fall back on.
Now, you could build more intelligence into the compiler so it could derive types rather than having the programmer declare them explicitly; the compiler might be able to see that MyClass implements a MyMethod() method and handle this case accordingly, without the need to explicitly declare interfaces (as you suggest). Such a compiler could use type inference in the style of Hindley-Milner.
Of course, some statically typed languages like Haskell already do something similar to what you suggest: the Haskell compiler is able to infer types (most of the time) without the need to declare them explicitly. But obviously, Java and C# don't have this ability.
I don't see the point. Why not be explicit that the class implements the interface and be done with it? Implementing the interface is what tells other programmers that this class is supposed to behave in the way that interface defines. Simply having the same name and signature on a method conveys no guarantee that the designer intended the method to perform similar actions. That may be the case, but why leave it open to interpretation (and misuse)?
The reason you can "get away" with this successfully in dynamic languages has more to do with TDD than with the language itself. In my opinion, if the language offers a facility for giving this sort of guidance to others who use or view the code, you should use it. It actually improves clarity and is worth the few extra characters. When you don't have access to do this, an Adapter serves the same purpose of explicitly declaring how the interface relates to the other class.
F# supports static duck typing, though with a catch: you have to use member constraints. Details are available in this blog entry.
Example from the cited blog:
let inline speak (a: ^a) =
    let x = (^a : (member speak: unit -> string) (a))
    printfn "It said: %s" x
    let y = (^a : (member talk: unit -> string) (a))
    printfn "Then it said %s" y

type duck() =
    member x.speak() = "quack"
    member x.talk() = "quackity quack"

type dog() =
    member x.speak() = "woof"
    member x.talk() = "arrrr"

let x = new duck()
let y = new dog()

speak x
speak y
TypeScript!
Well, ok... so it's a JavaScript superset and maybe does not constitute a "language", but this kind of static duck typing is vital in TypeScript.
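For instance, here is a direct translation of the question's example; TypeScript's structural typing accepts it as-is:

interface IMyInterface {
    myMethod(): void;
}

class MyClass { // never mentions IMyInterface
    myMethod(): void {
        console.log("Hello, world!");
    }
}

function callMyMethod(m: IMyInterface): void {
    m.myMethod();
}

callMyMethod(new MyClass()); // accepted: the shapes match structurally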
Most of the languages in the ML family support structural types with inference and constrained type schemes, which is the geeky language-designer terminology for what the original question most likely means by "static duck-typing".
The more popular languages in this family that spring to mind include: Haskell, Objective Caml, F# and Scala. The one that most closely matches your example, of course, would be Objective Caml. Here's a translation of your example:
open Printf

class type iMyInterface = object
  method myMethod: unit
end

class myClass = object
  method myMethod = printf "Hello, world!"
end

let callMyMethod: #iMyInterface -> unit = fun m -> m#myMethod

let myClass = new myClass
let () = callMyMethod myClass
Note: some of the names you used have to be changed to comply with OCaml's notion of identifier case semantics, but otherwise, this is a pretty straightforward translation.
Also worth noting: neither the type annotation in the callMyMethod function nor the definition of the iMyInterface class type is strictly necessary. Objective Caml can infer everything in your example without any type declarations at all.
Crystal is a statically duck-typed language. Example:
def add(x, y)
  x + y
end

add(true, false)
The call to add causes this compilation error:
Error in foo.cr:6: instantiating 'add(Bool, Bool)'
add(true, false)
^~~
in foo.cr:2: undefined method '+' for Bool
x + y
^
A pre-release design for Visual Basic 9 had support for static duck typing using dynamic interfaces, but the feature was cut in order to ship on time.
Boo definitely is a static duck-typed language: http://boo.codehaus.org/Duck+Typing
An excerpt:
Boo is a statically typed language, like Java or C#. This means your boo applications will run about as fast as those coded in other statically typed languages for .NET or Mono. But using a statically typed language sometimes constrains you to an inflexible and verbose coding style, with the sometimes necessary type declarations (like "x as int", but this is not often necessary due to boo's Type Inference) and sometimes necessary type casts (see Casting Types). Boo's support for Type Inference and eventually generics help here, but...
Sometimes it is appropriate to give up the safety net provided by static typing. Maybe you just want to explore an API without worrying too much about method signatures or maybe you're creating code that talks to external components such as COM objects. Either way the choice should be yours not mine.
Along with the normal types like object, int, string... boo has a special type called "duck". The term is inspired by the ruby programming language's duck typing feature ("If it walks like a duck and quacks like a duck, it must be a duck").
New versions of C++ move in the direction of static duck typing. You can some day (today?) write something like this:
auto plus(auto x, auto y) {
    return x + y;
}
and it would fail to compile if there's no matching function call for x+y.
As for your criticism:
A new "CallMyMethod" is created for each different type you pass to it, so it's not really type inference.
But it IS type inference (you can say foo(bar) where foo is a templated function), and it has the same effect, except that it's more time-efficient and takes more space in the compiled code.
Otherwise, you would have to look up the method during runtime. You'd have to find a name, then check that the name has a method with the right parameters.
Or you would have to store all that information about matching interfaces, and look into every class that matches an interface, then automatically add that interface.
In either case, that would allow you to implicitly and accidentally break the class hierarchy, which is bad for a new feature because it goes against the habits of C#/Java programmers. With C++ templates, you already know you're in a minefield (and C++ is also adding features ("concepts") to allow restrictions on template parameters).
Structural types in Scala do something like this.
See Statically Checked “Duck Typing” in Scala
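For a concrete sketch (Scala 2 syntax; the import silences the reflective-calls feature warning, since structural member access uses reflection at runtime):

import scala.language.reflectiveCalls

object StructuralDemo extends App {
  // accepts any value that has a parameterless myMethod(): Unit
  def callMyMethod(m: { def myMethod(): Unit }): Unit = m.myMethod()

  class MyClass { // never names an interface
    def myMethod(): Unit = println("Hello, world!")
  }

  callMyMethod(new MyClass)
}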
D (http://dlang.org) is a statically compiled language and provides duck-typing via wrap() and unwrap() (http://dlang.org/phobos-prerelease/std_typecons.html#.unwrap).
Sounds like Mixins or Traits:
http://en.wikipedia.org/wiki/Mixin
http://www.iam.unibe.ch/~scg/Archive/Papers/Scha03aTraits.pdf
The latest version of my programming language, Heron, supports something similar through a structural-subtyping coercion operator called as. So instead of:
MyClass obj = new MyClass();
CallMyMethod(obj);
You would write:
MyClass obj = new MyClass();
CallMyMethod(obj as IMyInterface);
Just like in your example, MyClass does not have to explicitly implement IMyInterface in this case, but if it did, the cast could happen implicitly and the as operator could be omitted.
I wrote a bit more about this technique, which I call explicit structural subtyping, in this article.