Motivation behind Scala package objects [closed]

What are the advantages of the package object syntax over simply letting you add functions and variables to a package?
example:
package object something {
  def hello = 0
}
package something {
}
Why not simply:
package something {
  def hello = 0
  // other classes and such
}

You could even go one step further: why have packages at all, when we have objects?
Scala is intended to be useful as a "hosted language", i.e. a language that can play nicely being hosted on top of another language's platform. The original implementations of Scala were on the Java platform and the ISO Common Language Infrastructure platform (or rather its primary implementations, .NET and Mono). Today, we also have an implementation on the ECMAScript platform (Scala.js), and the Java platform implementation can also be used on Android, for example. You can imagine other interesting platforms as well, that you would like to run Scala on, e.g. the Windows UWP platform, the Swift/Objective-C/Core Foundation/macOS/iOS/tvOS/watchOS platform, the Gnome/GObject platform, the Qt/C++ platform, etc.
Scala does not just intend to run on those platforms, it intends to fulfill two, often conflicting, goals:
high performance
tight integration with a "native" feel
So, in Scala, implementation concerns are part of language design. (Rich Hickey, the designer of Clojure, once said in a talk "the JVM is not an implementation detail"; the same applies to Scala.) A good example is proper tail calls: in order to support proper tail calls, Scala would have to manage its own stack instead of using the host platform's native call stack on platforms like the JVM, which, however, means that you can no longer easily call Scala code from other code on the platform and vice versa. So, while proper tail calls would be nice to have, Scala settles for the less powerful proper immediate tail recursion.
Theoretically, Scala needs only objects, traits, methods, types, and paths (. and #). Everything else is basically just there to ease integration with the host platform. This includes, for example, null, classes, and packages.
Ideally, there should be an easy mapping from host platform constructs to Scala constructs and vice versa. So, for example, there is an easy mapping between Scala methods and JVM methods. There is an easy mapping between JVM interfaces and Scala traits with only abstract members. There is no easy mapping between Scala traits and JVM classes, which is why Scala has the (redundant) concept of classes as well. And similarly, there is no easy mapping between Scala objects and JVM packages (or CLI namespaces), which is why Scala has the (redundant) concept of packages as well.
However, we would really like packages (which are, after all, somewhat like Scala objects) to have members as well. But JVM packages and CLI namespaces can't have members other than interfaces and classes, and since we only introduced them in Scala for compatibility with the host platform it simply doesn't make sense to make them incompatible with the host platform by adding members to them.
So, we introduce yet another separate concept, the package object, which holds the members we would like to add to packages but can't, because of host platform interoperability.
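For illustration, here is a minimal sketch of what that looks like in practice (the file names follow the usual convention and the members are made up):

// something/package.scala
package object something {
  // These members behave as if they were defined directly in the package.
  val defaultGreeting: String = "hello"
  def hello: Int = 0
}

// something/Caller.scala
package something {
  object Caller {
    // defaultGreeting and hello are visible here without any import.
    def greet(): String = defaultGreeting
  }
}

On the JVM the package object compiles to an ordinary synthetic class (something along the lines of something.package$), which is exactly the interop trick described above: the package itself stays a plain JVM package, and the extra members live in an object next to it.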
tl;dr:
we have packages because of interop (even though we already have objects, which can do all the same things as packages)
we would like packages to have members
packages can't have members, because interop
so, we have package objects as companions to packages

Related

Scala project organization [closed]

How does one organise code in a Scala project?
After years of developing with Java (most of the time using Spring), we're trying to come up with a quick prototype in Scala.
One of the first questions that popped up is: will we just basically use the same package names and code organisation and just write code in Scala?
For example, we're used to having helpers for our entities (AccountHelper, CacheHelper...) and also sometimes we use services too (AccountService...).
OT: further down the road we'll also investigate how to port our Maven submodules to sbt, but that's a different story altogether.
The answer partly depends on what is most important to you. If you are really serious about the quick prototype part, then as far as the physical file/directory layout goes, I would just start with one flat file and only start breaking it up when there is enough code to make that awkward. This should at least make global restructuring of your code easier until you get the overall structure right. Scala does not enforce the package:directory, class:file correspondence, and given the conciseness of Scala that can be overkill in many cases anyway.
There is nothing to stop you organizing things into multiple packages within the one file before you break it up physically once you have the structure right. Actually breaking the file up when you need to should be very easy then.
You did not say much about what your Helper & Service classes do, but from the naming convention they sound like good candidates for generic (aka parametric) traits or classes. This would allow you to factor out what all the different Helpers have in common (and similarly for Services). They should have a fair bit in common to justify the naming convention. You would then end up using or possibly extending types like Helper[Cache] and Service[Account]. I am also guessing these types will have few instances with rather broad scope and may benefit from being passed around implicitly, making Helper[_] and Service[_] into type classes. It is also possible that you will no longer need Spring at this point, since implicit lookup may give you the dependency injection you need. However, I am just going by a few class names you provided and reading an awful lot into them, so the chances are that I am completely off base here.
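To make this concrete, here is a rough sketch of the kind of thing I mean (the Account/Helper/Service names echo yours; everything else is invented for illustration):

trait Helper[A] {
  def describe(a: A): String
}

trait Service[A] {
  def save(a: A): Unit
}

final case class Account(id: Long, owner: String)

object Account {
  // Putting the instances in the companion object lets implicit search find them.
  implicit val accountHelper: Helper[Account] = new Helper[Account] {
    def describe(a: Account): String = s"Account(${a.id}) owned by ${a.owner}"
  }
  implicit val accountService: Service[Account] = new Service[Account] {
    def save(a: Account): Unit = println(s"saving account ${a.id}")
  }
}

object Registration {
  // "Dependency injection" via implicit parameters instead of a container.
  def register[A](a: A)(implicit h: Helper[A], s: Service[A]): String = {
    s.save(a)
    h.describe(a)
  }
}

// Registration.register(Account(1L, "fred"))  // saves, then returns the description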
Another possibility is that auxiliary classes like Helper & Service are just closures in disguise. This is a fairly common case with such classes in Java. In this case you should just implement them as functions in Scala, but again I am just guessing from the names...
You could also look into the Layer Cake pattern and see if that makes sense for your project.
More information about your project would probably get you better advice than these products of my overactive imagination :).
Here are some potentially useful links:
http://jonasboner.com/real-world-scala-dependency-injection-di/
Where does Scala look for implicits?
http://www.youtube.com/watch?v=yLbdw06tKPQ
https://vimeo.com/20308847
http://www.youtube.com/watch?v=YZxL0alO1yc
I organize packages in basically the same way, but wind up implementing things differently.
When I first started writing Scala coming from Java, it took me a while to get used to a few things:
Use companion objects instead of "static"
class Bla { }
object Bla { ... }
Forget about "get*", "set*" getters and setters - use val, var - prefer val.
Get to know scala.collection._, Option, and scala.collection.JavaConversions._ (which provides implicit conversions between Java and Scala collection types), so you can write code like this:
class Helper {
  def lookupUser( name:String ):Option[UserInfo]
  ...
}

val jsonUsers:Seq[String] =
  Seq( "fred", "mary", "jose" ).flatMap(
    name => helper.lookupUser( name )
  ).map( info => helper.toJson( info ) )

or

val jsonUsers:Seq[String] = for ( name <- Seq( "fred", "mary", "jose" );
                                  info <- helper.lookupUser( name )
                                ) yield helper.toJson( info )

or

val jsResult = helper.lookupUser( "fred" ).map( info => helper.toJson( info )
               ).getOrElse( jsErrorResponse )
If you feel comfortable with that kind of code, then you have a good start ...
Good luck!
We already had our whole platform written in Java, and I was very enthusiastic about Scala at work. So I just added my Scala code to the existing code base at the same level, i.e. src/main/java, and I found almost 100% compatibility from Scala to Java with great ease. Just include the Maven Scala plugin and it works.
But I would suggest keeping the Scala code under src/main/scala for a cleaner code base organisation; it also helps in resolving minor compiler dependencies. Also, having a dual build in both sbt and mvn gives great flexibility in the build process.

Traits vs. Packages in Scala

After watching Martin's keynote on Reflection and Compilers I can't seem to get this crazy question out of my head. Martin talks among other things about the "(Wedding) Cake Pattern", where traits play the central part. I'm wondering, why in the world do we need packages when we already have traits? Is there anything a package can do, what a trait (at least theoretically) cannot?
I'm not talking about the current implementation, I'm just trying to imagine what programming would be like if we replace packages with traits. In my head it would be like this:
one keyword less (package is unneeded)
no need for package objects
To summarize all my questions:
Is it theoretically possible to remove packages from the language and use traits instead?
What other benefits would we gain from this change? (I was thinking about first class packages and first class imports, but mixin composition is a compile time thing, although the super calls are dynamically bound)
Is Java/JVM compatibility the only thing, which would stand in the way?
Update
Daniel Spiewak talks in this keynote about Dependency Injection being just the tip of the iceberg of all the stuff you can do with the Cake Pattern.
Martin Odersky has said that Scala could get by with just traits, objects, methods and paths (I hope I didn't forget something).
Both classes and packages are just there because Scala is intended to be a hosted language, i.e. a language which runs on (this is actually not the interesting bit) and interoperates with (this is the important point) a host platform. Some of the host platforms that Scala is intended to interoperate with are the Java platform and the CLI, both of which have a concept of classes and packages (namespaces in the case of the CLI) that is distinct enough that it cannot be easily expressed as traits or objects. This is unlike interfaces, which can be trivially mapped to and from purely abstract traits.
The above statement was made in a discussion about potentially removing generics from Scala, because everything generics can do can also be achieved by abstract types.
In Scala, objects and packages serve almost the same purpose, and objects are also called modules. Objects deserve to be thought of as modules because they can contain any definition, including other objects of course and, significantly, types.
A trait can be thought of as an abstract module. It can contain any definition, and any member can be abstract including, again significantly, type members. I am reciting all this just to highlight the symmetry. Perhaps OT, but to me traits seem to be as big an innovation in Scala as the merging of object-oriented and functional ideas.
To finally give an answer:
I think packages could be removed in favour of objects (not traits).
The benefit would be a simplification - package objects would not need to be explicitly defined.
I think packages are distinct from objects for Java/JVM compatibility.
Some more commentary: in the video Martin talks of traits (abstract modules) more than concrete modules because the latter only appear at the last moment to assemble and reify some combination of abstract modules.
It is good to use abstract modules even when not "mixing a cake". e.g. when sketching out some code you might define a module to contain definitions. But as soon as you come to a type or value you are not ready to fill in, don't supply a dummy such as null. Instead switch the object to a trait and leave the member abstract.
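A small sketch of that last point (all names invented for illustration):

// An "abstract module": a trait whose members - including a type - are still abstract.
trait UserModule {
  type User
  def defaultUser: User            // no dummy value such as null needed
  def greet(u: User): String
}

// Reified later, once you know what you actually want:
object ProductionUsers extends UserModule {
  final case class User(name: String)
  val defaultUser: User = User("guest")
  def greet(u: User): String = s"hello, ${u.name}"
}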

What reflection capabilities can we expect from Scala 2.10?

Scala 2.10 brings reflection other than that provided by the JVM (or, I guess, the CLR).
What in particular do we have to look forward to, and how will it improve on the platform?
For example, will there be a class that reflects the language's convertibility between fields and accessor methods, so that I can iterate over the properties of an object?
update 2012-07-04:
Daniel Sobral (also on SO) details in his blog post "JSON serialization with reflection in Scala! Part 1 - So you want to do reflection?" some of the features coming with reflection:
To recapitulate, Scala 2.10 will come with a Scala reflection library.
That library is used by the compiler itself, but divided into layers through the cake pattern, so different users see different levels of detail, keeping jar sizes adequate to each one's use, and hopefully hiding unwanted detail.
The reflection library also integrates with the upcoming macro facilities, enabling enterprising coders to manipulate code at compile time.
update 2012-06-14. (from Eugene Burmako):
In Scala 2.10.0-M4, we have released the new reflection API that will most likely make it into 2.10.0-final without significant changes.
More details about the API can be found:
SO answer Get companion object instance with new Scala reflection API
Scala Reflection SIP, June 2012 by Martin Odersky (SIP, actually "Scala Improvement Process")
summary and migration route from M3
Extracts:
Universes and mirrors are now separate entities: universes host reflection artifacts (trees, symbols, types, etc.), while mirrors abstract the loading of those artifacts (e.g. JavaMirror loads stuff using a classloader and annotation unpickler, while GlobalMirror uses the internal compiler classreader to achieve the same goal).
The public reflection API is split into scala.reflect.base and scala.reflect.api. The former represents a minimalistic snapshot that is exactly enough to build reified trees and types - to build, but not to analyze; everything smart (for example, getting a type signature) is implemented in scala.reflect.api.
Both reflection domains have their own universe: scala.reflect.basis and scala.reflect.runtime.universe. The former is super lightweight and doesn't involve any classloaders, while the latter represents a stripped-down compiler.
Initial answer, Sept. 2011:
You can see evolutions of the reflect package in the Scala GitHub repo, with these two very recent commits:
Changes to Liftcode to use new reflection semantics, where a compiler uses type checking.
Started work on compiler toolbox that can compile reflect trees at runtime.
(Liftcode, according to this thread, aims at simplifying "writing code that writes code".)
The class scala/reflect/internal/Importers.scala (created yesterday!) is a good example of using those latest reflection features.
Two links which should be of interest:
The scala-internals mailing list discussion on the reflection api.
The nightly build api doc for 2.10-SNAPSHOT.
Personally I am hoping to use this for doing runtime discovery of extensions (i.e. a type that extends a known trait), and generating UI forms and a few other things from those.
With current 2.10M4 you already can iterate over members of a class:
reflect.runtime.universe.typeOf[MyClass].members.filter(!_.isMethod)
The above code lists Symbol objects representing members of a class MyClass which are not methods. There are tons of ways you can fine-tune this.
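For the property-iteration use case in the question, something along these lines works with the 2.10 API (Person is just a made-up example class):

import scala.reflect.runtime.universe._

case class Person(name: String, age: Int)

// Collect the accessor methods the compiler generated for the fields:
val getters = typeOf[Person].members.collect {
  case m: MethodSymbol if m.isGetter => m
}

// getters.map(_.name) would give something like: age, name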

What is the purpose of Scala programming language? [closed]

It is my opinion that every language was created for a specific purpose. What was Scala created for and what problems does it best solve?
One of the things mentioned in talks by Martin Odersky on Scala is it being a language which scales well to tackle various problems. He wasn't talking about scaling in the sense of performance but in the sense that the language itself can seemingly be extended via libraries. So that:
val lock = new ReentrantReadWriteLock
lock withReadLock {
  // do stuff
}
It looks like there is some special syntactic sugar for dealing with j.u.c locks. But this is not the case; it's just using the Scala language in such a way that it appears to be. The code is more readable, isn't it?
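For instance, the withReadLock syntax above could be provided by an enrichment roughly like this (a sketch, not the actual library code):

import java.util.concurrent.locks.ReentrantReadWriteLock

object LockSyntax {
  implicit class RichReadWriteLock(lock: ReentrantReadWriteLock) {
    def withReadLock[A](body: => A): A = {
      lock.readLock.lock()
      try body finally lock.readLock.unlock()
    }
  }
}

// With LockSyntax._ imported, `lock withReadLock { ... }` is just an ordinary method call.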
In particular, the various parsing rules of the Scala language make it very easy to create libraries which look like a domain-specific language (or DSL). Look at ScalaTest, for example:
describe("MyCoolClass") {
it("should do cool stuff") {
val c = new MyCoolClass
c.prop should be ("cool")
}
}
(There are lots more examples of this - I found this one out yesterday.) There is much talk about which new features are going into the Java language in JDK 7 (Project Coin). Many of these features are special syntactic sugar to deal with some specific issue. Scala has been designed with some simple rules that mean new keywords for every little annoyance are not needed.
Another goal of Scala was to bridge the gap between functional and object-oriented languages. It contains many constructs inspired by (i.e. copied from!) functional languages. I'm thinking of the incredibly powerful pattern matching, the actor-based concurrency framework and (of course) first-class and higher-order functions.
Of course, your question said that there was a specific purpose and I've just given 3 separate reasons; you'll probably have to ask Martin Odersky!
One more of the original design goals was of course to create a language which runs on the Java Virtual Machine and is fully interoperable with Java classes. This has (at least) two advantages:
you can take advantage of the ubiquity, stability, features and reputation of the JVM. (think management extensions, JIT compilation, advanced Garbage Collection etc)
you can still use all your favourite Java libraries, both 3rd party and your own. If this wasn't the case, it would be a significant obstacle to using Scala commercially in many cases (mine for example).
Agree with previous answers but recommend the Introduction to An Overview of the Scala Programming Language:
The work on Scala stems from a research effort to develop better language support for component software. There are two hypotheses that we would like to validate with the Scala experiment. First, we postulate that a programming language for component software needs to be scalable in the sense that the same concepts can describe small as well as large parts. Therefore, we concentrate on mechanisms for abstraction, composition, and decomposition rather than adding a large set of primitives which might be useful for components at some level of scale, but not at other levels. Second, we postulate that scalable support for components can be provided by a programming language which unifies and generalizes object-oriented and functional programming. For statically typed languages, of which Scala is an instance, these two paradigms were up to now largely separate. (Odersky)
I'd personally classify Scala alongside Python in terms of which problems it solves and how. The conspicuous difference and occasional complaint is Type complexity. I agree Scala's abstractions are complicated and at times seemingly convoluted but for a few points:
They're also mostly optional.
Scala's compiler is like free testing and documentation as cyclomatic complexity and lines of code escalate.
When aptly implemented Scala can perform otherwise all but impossible operations behind consistent and coherent APIs. From Scala 2.8 Collections:
For instance, a String (or rather: its backing class RichString) can be seen as a sequence of Chars, yet it is not a generic collection type. Nevertheless, mapping a character to character map over a RichString should again yield a RichString, as in the following interaction with the Scala REPL:
scala> "abc" map (x => (x + 1).toChar)
res1: scala.runtime.RichString = bcd
But what happens if one applies a function from Char to Int to a string? In that case, we cannot produce a string as result, it has to be some sequence of Int elements instead. Indeed one gets:
"abc" map (x => (x + 1))
res2: scala.collection.immutable.Vector[Int] = Vector(98, 99, 100)
So it turns out that map yields different types depending on what the result type of the passed function argument is! (Odersky)
Since it's functional and uses actors (as I understand it, please comment if I've got this wrong) it makes it very easy to scale nearly anything up to any number of CPUs.
That said, I see Scala as kind of a test bed for new language features. Throw in the kitchen sink and see what happens.
My personal opinion is that for any apps involving a team of more than 3 people you are more productive with a language with Very Simple and Restrictive Syntax just because the entire job becomes more how you interact with others as opposed to just coding to make the computer do something.
The more people you add, the more time you are going to spend explaining what ?: means or the difference between | and || as applied to two booleans (In Java, you'll find very few people know).

What are the key differences between Scala and Groovy? [closed]

On the surface Groovy and Scala look pretty similar, aside from Scala being statically typed, and Groovy dynamic.
What are the other key differences, and advantages each have over the other?
How similar are they really?
Is there competition between the two?
If so, who do you think will win in the long run?
They're both object oriented languages for the JVM that have lambdas and closures and interoperate with Java. Other than that, they're extremely different.
Groovy is a "dynamic" language in not only the sense that it is dynamically typed but that it supports dynamic meta-programming.
Scala is a "static" language in that it is statically typed and has virtually no dynamic meta-programming beyond the awkward stuff you can do in Java. Note, Scala's static type system is substantially more uniform and sophisticated than Java's.
Groovy is syntactically influenced by Java but semantically influenced more by languages like Ruby.
Scala is syntactically influenced by both Ruby and Java. It is semantically influenced more by Java, SML, Haskell, and a very obscure OO language called gBeta.
Groovy has "accidental" multiple dispatch due to the way it handles Java overloading.
Scala is single dispatch only, but has SML inspired pattern matching to deal with some of the same kinds of problems that multiple dispatch is meant to handle. However, where multiple dispatch can only dispatch on runtime type, Scala's pattern matching can dispatch on runtime types, values, or both. Pattern matching also includes syntactically pleasant variable binding. It's hard to overstress how pleasant this single feature alone makes programming in Scala.
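A quick illustration of dispatching on type, value, and both (made-up types):

sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

def describe(s: Shape): String = s match {
  case Circle(0.0)          => "a degenerate circle"     // dispatch on a value
  case Circle(r)            => s"a circle of radius $r"  // dispatch on type, bind r
  case Rect(w, h) if w == h => s"a square of side $w"    // type plus a guard
  case Rect(w, h)           => s"a ${w}x${h} rectangle"
}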
Both Scala and Groovy support a form of multiple inheritance with mixins (though Scala calls them traits).
Scala supports both partial function application and currying at the language level, Groovy has an awkward "curry" method for doing partial function application.
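For example, on the Scala side:

def add(x: Int)(y: Int): Int = x + y      // a curried method

val addTwo: Int => Int = add(2) _          // partial application
// addTwo(3)                               // 5

val plus = (x: Int, y: Int) => x + y
val plusCurried = plus.curried             // Int => Int => Int
// plusCurried(2)(3)                       // 5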
Scala does direct tail recursion optimization. I don't believe Groovy does. That's important in functional programming but less important in imperative programming.
Both Scala and Groovy are eagerly evaluated by default. However, Scala supports call-by-name parameters. Groovy does not - call-by-name must be emulated with closures.
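A small example of a call-by-name parameter in Scala:

// `body` is evaluated only when (and each time) it is used inside `unless`.
def unless(cond: Boolean)(body: => Unit): Unit = if (!cond) body

// unless(xs.isEmpty) { println(xs.head) }   // xs.head is evaluated only if xs is non-empty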
Scala has "for comprehensions", a generalization of list comprehensions found in other languages (technically they're monad comprehensions plus a bit - somewhere between Haskell's do and C#'s LINQ).
Scala has no concept of "static" fields, inner classes, methods, etc - it uses singleton objects instead. Groovy uses the static concept.
Scala does not have built in selection of arithmetic operators in quite the way that Groovy does. In Scala you can name methods very flexibly.
Groovy has the elvis operator for dealing with null. Scala programmers prefer to use Option types to using null, but it's easy to write an elvis operator in Scala if you want to.
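As a sketch of that last point, a null-coalescing enrichment (the ?? name and everything else here is invented):

object ElvisSyntax {
  implicit class Coalesce[A <: AnyRef](private val a: A) {
    def ??(default: => A): A = if (a ne null) a else default
  }
}

// import ElvisSyntax._
// val name: String = null
// name ?? "anonymous"    // "anonymous"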
Finally, there are lies, there are damn lies, and then there are benchmarks. The Computer Language Benchmarks Game ranks Scala as substantially faster than Groovy (ranging from twice to 93 times as fast) while retaining roughly the same source size.
I'm sure there are many, many differences that I haven't covered. But hopefully this gives you a gist.
Is there a competition between them? Yes, of course, but not as much as you might think. Groovy's real competition is JRuby and Jython.
Who's going to win? My crystal ball is as cracked as anybody else's.
Scala is meant to be an OO/functional hybrid language and is very well planned and designed. Groovy is more like a set of enhancements that many people would love to use in Java.
I took a closer look at both, so I can tell :)
Neither of them is better or worse than the other. Groovy is very good at meta-programming, Scala is very good at everything that does not need meta-programming, so... I tend to use both.
Scala has Actors, which make concurrency much easier to implement. And Traits which give true, typesafe multiple inheritance.
You've hit the nail on the head with the static and dynamic typing. Both are part of the new generation of dynamic languages, with closures, lambda expressions, and so on. There are a handful of syntactic differences between the two as well, but functionally, I don't see a huge difference between Groovy and Scala.
Scala implements Lists a bit differently; in Groovy, pretty much everything is an instance of java.util.List, whereas Scala uses both Lists and primitive arrays. Groovy has (I think) better string interpolation.
Scala is faster, it seems, but the Groovy folks are really pushing performance for the 2.0 release. 1.6 gave a huge leap in speed over the 1.5 series.
I don't think that either language will really 'win', as they target two different classes of problems. Scala is a high-performance language that is very Java-like without having quite the same level of boilerplate as Java. Groovy is for rapid prototyping and development, where speed is less important than the time it takes for programmers to implement the code.
Scala has a much steeper learning curve than Groovy. Scala has much more support for functional programming with its pattern matching and tail based recursion, meaning more tools for pure FP.
Scala also has dynamic compilation, and I have done it using the Twitter Eval lib (https://github.com/twitter/util). I kept Scala code in a flat file (without any extension) and used Eval to create a Scala class at run time.
I would say Scala is capable of meta-programming and has the feature of dynamic compilation.
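From memory, the usage looked roughly like this (treat the exact Eval API as an assumption and check the util-eval docs; MyConfig is a hypothetical class):

import com.twitter.util.Eval
import java.io.File

val eval = new Eval()

// Compile and evaluate a Scala expression at run time:
val sum: Int = eval[Int]("21 + 21")

// Or compile a flat file containing Scala source (e.g. a config object):
// val config = eval[MyConfig](new File("config.scala"))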