Reading the Swift language guide, I cannot find explicit information about whether Swift is statically dispatched (like basic C++, Java, and C#) or dynamically dispatched (like Objective-C).
The documentation of language features (classes, extensions, generics, etc.) seems to suggest that it is statically dispatched, which might be the source of the claimed speed improvements. However, Apple stated in the WWDC 2014 keynote that the language uses the same runtime as Objective-C and is highly compatible with Cocoa/Cocoa Touch, which suggests dynamic dispatch.
Describing C++, Java, and C# as statically dispatched is not particularly accurate. All three languages can and often do use dynamic dispatch.
Swift, similarly, can do both. It does differ from Objective-C in that it does not always dispatch dynamically. Methods marked final can be statically dispatched, as can struct and enum methods. I believe non-final methods can also be statically dispatched if the runtime type can be proven at compile time (similar to C++ devirtualization), but I'm not certain about the details there.
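As a rough illustration, here is a minimal sketch (the types are made up for the example) of where each strategy can apply:

struct Point {
    var x = 0.0
    func magnitude() -> Double { return abs(x) }  // struct method: static dispatch
}

class Animal {
    final func kingdom() -> String { return "Animalia" }  // final: static dispatch, may be inlined
    func speak() -> String { return "..." }               // overridable: table (virtual) dispatch
}

class Dog: Animal {
    override func speak() -> String { return "Woof" }
}

let pet: Animal = Dog()
print(pet.speak())     // resolved at runtime through the class's method table
print(pet.kingdom())   // can be resolved at compile time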
According to this Hacker News comment (https://news.ycombinator.com/item?id=7835099):
From a user's point of view, it's basically straight out of the Rust book, all the gravy with also relaxed ownership and syntax.
It has it all [1]: static typing, type inference, explicit mutability, closures, pattern matching, optionals (with own syntax! also "any"), generics, interfaces, weak ownership, tuples, plus other nifty things like shorthand syntax, final and explicit override...
It screams "modern!", has all the latest circlejerk features. It even comes with a light-table/bret-victor style playground. But is still a practical language which looks approachable and straightforward.
Edit: [1]: well, almost. I don't think I've caught anything about generators, first-class concurrency and parallelism, or tail-call optimization, among others.
Related
I'm just curious whether Swift is dynamic like PHP or static; I mean, can I generate classes while an application is running?
It is static - very static. The compiler must have all information about all classes and functions at compile time. You can "extend" an existing class (with an extension), but even then you must define completely at compile time what that extension consists of.
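A small sketch of that point (shouted is a hypothetical method added purely for illustration): the entire body of an extension is fixed at compile time.

extension String {
    // Everything this extension adds is known to the compiler;
    // nothing can be attached to String at runtime from pure Swift.
    func shouted() -> String { return uppercased() + "!" }
}

print("hello".shouted())   // "HELLO!"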
Objective-C is dynamic, and since, in real life, you will probably be using Swift in the presence of Cocoa, you can use the Objective-C runtime to inject / swizzle methods in a Swift class that is exposed to Objective-C. You can do this even though you are writing in the Swift language. But Swift itself is static, and in fact is explicitly designed to minimize or eliminate the use of Objective-C-style dynamism.
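For instance, here is a minimal sketch of swizzling from Swift, assuming an Apple platform where the Objective-C runtime is available (Greeter and its methods are hypothetical):

import Foundation
import ObjectiveC

class Greeter: NSObject {
    @objc dynamic func greet() -> String { return "hello" }
    @objc func swappedGreet() -> String { return "goodbye" }
}

// Exchange the two implementations through the Objective-C runtime.
let original = class_getInstanceMethod(Greeter.self, #selector(Greeter.greet))!
let replacement = class_getInstanceMethod(Greeter.self, #selector(Greeter.swappedGreet))!
method_exchangeImplementations(original, replacement)

print(Greeter().greet())   // "goodbye": the call was rerouted at runtime

Note that greet is marked dynamic so that even a direct Swift call goes through the runtime and sees the swap.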
Swift itself is statically typed. When used with Cocoa, you get access to the Objective-C runtime library, which gives you the ability to use dynamic classes, messages, and all the rest. This doesn't mean the language itself is dynamically typed. You could do the same with C or any other language that supports a bridge to C by using libobjc.A.dylib.
Generally you never say a language is just "static". You say it is statically or dynamically typed, and separately that it is strongly or weakly typed.
So Java is a statically typed language as well as a strongly typed one: the type of every expression is known at compile time, and types are strictly enforced.
And JavaScript is both dynamically typed and weakly typed, because types are only checked at runtime and values are freely coerced between types.
Swift lets you omit type declarations and leave it to the compiler to infer the type by itself, but that inference happens entirely at compile time, so Swift is still statically typed. It is also strongly typed, because you must use types strictly: even when you haven't declared a type, once the compiler infers a value to be a String it cannot be treated as any other type.
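A brief sketch of that distinction:

var greeting = "hello"   // inferred as String at compile time
greeting = "world"       // fine: still a String
// greeting = 42         // compile-time error: an Int cannot be assigned to a String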
Hope it's helpful.
Only when you'd like to use an Objective-C library. Swift itself is statically typed; the bridge is provided so you can gradually move to Swift.
I'm sure my terminology is off, so here's an example:
C++ has non-virtual and virtual methods (and C has plain functions). Both have the opportunity to be inlined at compile time.
C#'s CIL has call and callvirt instructions (which closely resemble C++'s non-virtual and virtual methods). Although almost all method calls in C# become callvirt (due to a language snafu), the JIT compiler is able to optimize most back to call instructions and then (if worthwhile) also inline them.
Objective-C method calls are done very differently (and less efficiently): a selector is resolved through objc_msgSend every time you call a method. It's a form of dynamic dispatch and can never be inlined.
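To make the contrast concrete, here is a small sketch of sending a message by selector from Swift; perform(_:) goes through the Objective-C runtime machinery described above:

import Foundation

let obj: NSObject = NSDate()
// The selector is resolved at runtime (objc_msgSend under the hood),
// so this call can never be inlined.
let result = obj.perform(#selector(NSObject.copy))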
Reading up on the language specification for functions in Swift, I can't tell whether Swift uses the same messaging system as Objective-C or something different.
Sometimes yes, sometimes no. If you have pure Swift code and do not expose your classes/protocols to Objective-C with the @objc attribute, it appears that pure-Swift method calls are not dispatched via objc_msgSend; in other cases they are. If the protocol your Swift object adopts is declared in Objective-C, or if the Swift protocol is marked @objc, then calls to protocol methods, even from Swift objects to other Swift objects, are dispatched via objc_msgSend.
The documentation is currently a little thin, and I'm sure there are other nuances, but empirically speaking (i.e. I've tried it out) some Swift method calls go through objc_msgSend and others don't. I think getting the best performance will depend on keeping your code as pure-Swift as possible, crossing the Objective-C/Swift boundary as little as possible, and doing so through bottleneck interfaces/protocols, so as to limit the number of Swift calls that have to be dispatched dynamically.
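A sketch of that empirical observation (the types are hypothetical):

import Foundation

protocol PureSwift { func ping() }        // pure Swift protocol: witness-table dispatch
@objc protocol Bridged { func ping() }    // @objc protocol: calls go through objc_msgSend

class A: PureSwift { func ping() {} }
class B: NSObject, Bridged { func ping() {} }

let p: PureSwift = A()
p.ping()   // no objc_msgSend involved
let q: Bridged = B()
q.ping()   // dispatched via objc_msgSend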
I'm sure more detailed docs will emerge sooner or later.
Unlike C++, it is not necessary to designate that a method is virtual in Swift. The compiler will work out which of the following to use:
The performance figures below of course depend on the hardware.
Inline the method: 0 ns
Static dispatch: < 1.1 ns
Virtual dispatch: 1.1 ns (like Java, C#, or C++ when designated)
Dynamic dispatch: 4.9 ns (like Objective-C)
Objective-C of course always uses the latter. The 4.9 ns overhead is not usually a problem, as it represents a small fraction of the overall method execution time. However, where necessary, Objective-C developers could seamlessly fall back to C or C++. This is still somewhat of an option in Swift, but the compiler instead analyzes which of the faster mechanisms can be used and tries to decide on your behalf.
One side effect of this is that some of the powerful features afforded by dynamic dispatch may not be available, whereas this could previously be assumed for any Objective-C method. Dynamic dispatch is used for method interception, which is in turn used by:
Cocoa-style property observers.
Core Data model object instrumentation.
Aspect-oriented programming.
With the latest release of Swift, even if an object is marked @objc or extends NSObject, the compiler may still not use dynamic dispatch. There is a dynamic attribute that can be added to a method to opt in.
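A sketch of opting in (in current Swift the attribute is paired with @objc; Model is a hypothetical class):

import Foundation

class Model: NSObject {
    // `dynamic` forces Objective-C message dispatch for this property,
    // so interception-based features such as KVO keep working.
    @objc dynamic var title = ""

    // Without the attribute, members remain eligible for static
    // dispatch or inlining, even on an NSObject subclass.
    final func summary() -> String { return title }
}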
I've heard people claim that:
Scala's type system is amazing (existential types, covariance and contravariance)
Because of the power of macros, everything is a library in Clojure (pattern matching, logic programming, non-determinism, ...)
Question:
If both assertions are true, why is Scala's type system not a library in Clojure? Is it because:
types are one of those things that do not work well as a library? [i.e. the changes would somehow have to be threaded through every existing Clojure library, including clojure.core?]
is Scala's notion of types fundamentally incompatible with Clojure protocols / records?
... ?
It's an interesting question.
You are certainly right about Scala having an amazing type system, and about Clojure being phenomenal for meta-programming and extension of the language (although that is about more than just macros....).
A few reasons I can think of:
Clojure is a dynamically typed language while Scala is a statically typed language. Powerful type inference isn't of much use in a language where you can assume relatively little about the types of your inputs.
Clojure already has a very interesting project to add typing as a library (Typed Clojure) which looks very promising - however it's very different in approach to Scala as it is designed for a dynamic language from the start (inspired more by Typed Racket, I believe).
Clojure philosophy actually discourages certain OOP concepts (particularly implementation inheritance, mutable objects, and data encapsulation). A type system that supports these things (as Scala does) wouldn't be a good fit for Clojure idioms - at best they would be ignored, but they could easily encourage a style of development that would cause people to run into severe problems later.
Clojure already provides tools that solve many of the problems you would typically solve with types in other languages - e.g. the use of protocols for polymorphism.
There's a strong focus in the Clojure community on simplicity (in the sense of the excellent video "Simple Made Easy" - see particularly the slide at 39:30). While Scala's type system is certainly amazing, I think it's a stretch to describe it as "simple".
Putting in a Scala-style type system would probably require a complete rewrite of the Clojure compiler and make it substantially more complex. Nobody seems to have signed up so far to take on that particular challenge... and there's a risk that even if someone were willing and able to do this then the changes could be rejected for the various cultural / technical reasons covered above.
In the absence of a major change to Clojure itself (which I think would be unlikely), one interesting possibility would be to create a DSL within Clojure that provided Scala-style type inference for a specific domain and compiled this DSL directly to optimised Java bytecode. I could see that being a useful approach for specific problem domains (large-scale numerical data crunching with big matrices, for example).
To simply answer your question "... why is Scala's type system not a library in Clojure?":
Because the type system is part of the Scala compiler, not of the Scala library. The whole power of Scala's type system exists only at compile time. The JVM has no support for things like that, because of type erasure, and also because it would simply slow down execution. And there is no need for it: if you have a statically typed language, you don't need type information at runtime, unless you want to do dirty stuff.
edit:
@mikera: the JVM is surely capable of running the Scala compiler; I did not say anything like that. I just said that the JVM has no support for type systems like that. It does not even support generics; at runtime all these types are gone. The compiler checks the correctness of a program and then removes all the higher-kinded types / generics.
example:
val xs: List[Int] = List(1, 2, 3, 4)
val x1: Int = xs.head

will at runtime look like this:

val xs: List = List.apply(1, 2, 3, 4)    // the element type Int has been erased
val x1: Int = xs.head.asInstanceOf[Int]  // the compiler inserted the cast
But it doesn't matter, because the compiler checked it before. You can only get in trouble here when you use reflection, because then you could put any value in the list, and it would break at runtime exactly where the value is cast to Int.
And this is one of the reasons why the Scala type system is not part of the Scala library, but built into the compiler.
Also, the OP's question was "... why is Scala's type system not a library in Clojure?", not "Is it possible to create a type system such as Scala's for Clojure?", and that is exactly the question I answered.
I've been hearing a lot about different JVM languages, still in vaporware mode, that propose to implement reification somehow. I have this nagging half-remembered (or wholly imagined, don't know which) thought that somewhere I read that Scala somehow took advantage of the JVM's type erasure to do things that it wouldn't be able to do with reification. Which doesn't really make sense to me since Scala is implemented on the CLR as well as on the JVM, so if reification caused some kind of limitation it would show up in the CLR implementation (unless Scala on the CLR is just ignoring reification).
So, is there a good side to type erasure for Scala, or is reification an unmitigated good thing?
See Ola Bini's blog. As we all know, Java has use-site covariance, implemented by having little question marks wherever you think variance is appropriate. Scala has definition-site covariance, implemented by the class designer. He says:
Generics is a complicated language feature. It becomes even more complicated when added to an existing language that already has subtyping. These two features don't play very well together in the general case, and great care has to be taken when adding them to a language. Adding them to a virtual machine is simple if that machine only has to serve one language - and that language uses the same generics. But generics isn't done. It isn't completely understood how to handle correctly and new breakthroughs are happening (Scala is a good example of this). At this point, generics can't be considered "done right". There isn't only one type of generics - they vary in implementation strategies, feature and corner cases.
...
What this all means is that if you want to add reified generics to the JVM, you should be very certain that that implementation can encompass both all static languages that want to do innovation in their own version of generics, and all dynamic languages that want to create a good implementation and a nice interfacing facility with Java libraries. Because if you add reified generics that doesn't fulfill these criteria, you will stifle innovation and make it that much harder to use the JVM as a multi language VM.
i.e. If we had reified generics in the JVM, most likely those reified generics wouldn't be suitable for the features we really like about Scala, and we'd be stuck with something suboptimal.
To my surprise, I am developing more and more interest in dynamic languages like Ruby and Python. The claim is that they are 100% object-oriented, but as I read on, several basic concepts like interfaces, method overloading, and operator overloading seem to be missing. Are these somehow built in under the covers, or do these languages just not need them? If the latter is true, are they 100% object-oriented?
EDIT: Based on some answers, I see that overloading is available in both Python and Ruby. Is that the case in Ruby 1.8.6 and Python 2.5.2?
Dynamic languages use duck typing: any code can call methods on any object that supports those methods, so the concept of interfaces is extraneous.
Python does in fact support operator overloading (see section 3.3, "Special method names", in the language reference), as does Ruby.
Anyway, you seem to be focusing on aspects that are not essential to object oriented programming. The main focus is on concepts like encapsulation, inheritance, and polymorphism, which are 100% supported in Python and Ruby.
Thanks to late binding, they do not need it. In Java/C#, interfaces are used to declare that some class has certain methods, and this is checked at compile time; in Python, whether a method exists is checked at runtime.
Method overloading in the Java/C++ sense (several methods with the same name but different signatures) doesn't exist in Python: a name refers to a single method, and default or keyword arguments cover most of the same uses. Method overriding, which the following example actually shows, does work:
>>> class A:
... def foo(self):
... return "A"
...
>>> class B(A):
... def foo(self):
... return "B"
...
>>> B().foo()
'B'
Are they object-oriented? I'd say yes. It's more of an approach thing rather than if any concrete language has feature X or feature Y.
I can only speak for python, but there have been proposals for interfaces as well as home-written interface examples in the past.
However, the way python works with objects dynamically tends to reduce the need for (and the benefit of) interfaces to some extent.
With a dynamic language, your type binding happens at runtime - interfaces are mostly used for compile time constraints on objects - if this happens at runtime, it eliminates some of the need for interfaces.
Name-based polymorphism
"For those of you unfamiliar with Python, here's a quick intro to name-based polymorphism. Python objects have an internal dictionary that contains a string for every attribute and method. When you access an attribute or method in Python code, Python simply looks up the string in the dict. Therefore, if what you want is a class that works like a file, you don't need to inherit from file, you just create a class that has the file methods that are needed.
Python also defines a bunch of special methods that get called by the appropriate syntax. For example, a+b is equivalent to a.__add__(b). There are a few places in Python's internals where it directly manipulates built-in objects, but name-based polymorphism works as you expect about 98% of the time. "
Python does provide operator overloading, e.g. you can define a method __add__ if you want to overload +.
You typically don't need to provide method overloading, since you can pass arbitrary parameters into a single method. In many cases, that single method can have a single body that works for all kinds of objects in the same way. If you want different code for different parameter types, you can inspect the type or use double dispatch.
Interfaces are mostly unnecessary because of duck typing, as rossfabricant points out. A few remaining cases are covered in Python by ABCs (abstract base classes) or Zope interfaces.