Mimicking C# out and ref in Scala -- ready to use features? - scala

In a limited sense it is very easy to write out and ref classes on your own, but my question is not how to do it -- rather, are there some such features (or classes) already available, ready to use?
The closest thing I found is the Reference trait (but it is only a trait).
I need exactly those, not a tuple, not Option, and not Either as the result, because only ref/out makes chaining ifs elegant.

No, Scala supports parameter passing by value or by name only. Parameter passing by reference is actually quite difficult to accomplish correctly on the JVM, which is probably one reason why none of the popular JVM languages have it. Additionally, out and ref parameters encourage programming via side effects, something that the design of Scala attempts to avoid wherever possible.
As for chaining of ifs, there are a variety of ways to achieve similar effects in Scala. match expressions are the most obvious, and you might also look into monadic composition using Option or Either.
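For completeness, here is a minimal sketch of the hand-rolled approach the question alludes to (the Ref class and tryParse below are purely illustrative), followed by the Option-based chaining the answer recommends instead:
class Ref[A](var value: A) // a trivially simple mutable cell, standing in for C#'s ref/out

def tryParse(s: String, out: Ref[Int]): Boolean =
  try { out.value = s.toInt; true } catch { case _: NumberFormatException => false }

// The more idiomatic alternative: return Option and chain, no mutation needed
def parse(s: String): Option[Int] = scala.util.Try(s.toInt).toOption

val n = parse("42").orElse(parse("0")).getOrElse(-1)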

Related

Why are "context" dependencies so often passed implicitly in Scala?

I'm curious why passing "context" dependencies is so often done implicitly in Scala. I'm looking at the fs2-kafka library, and as usual, deserializers are passed implicitly. I'm having a hard time seeing the advantages of this; it seems like it just obfuscates the code by hiding dependencies. Does anyone know what the upsides are of passing parameters in this way?
Edit: To be clear, I'm not asking whether this is a good practice; that's subjective. I'm wondering what the reasoning is. We wouldn't pass most dependencies implicitly just to avoid the inconvenience of having to pass them in explicitly, so why is it so often done with context dependencies like deserializers?
Deserializers are usually typeclasses, and the way that Scala implements typeclasses is via implicit arguments. Doing this manually would mean a lot more code and require the programmer to do type matching that the compiler can do itself with a typeclass.
A better example of a simple dependency is the ExecutionContext required by scala.concurrent.Future methods; in that case the implicit simplifies the calling code and makes it easy to provide a different execution context within a given scope.
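As a minimal sketch of that typeclass pattern (the Deserializer trait and instances below are illustrative, not fs2-kafka's actual API):
trait Deserializer[A] { def deserialize(bytes: Array[Byte]): A }

object Deserializer {
  // the compiler finds this instance wherever a Deserializer[String] is needed (SAM syntax, Scala 2.12+)
  implicit val stringDeserializer: Deserializer[String] =
    (bytes: Array[Byte]) => new String(bytes, "UTF-8")
}

def decode[A](bytes: Array[Byte])(implicit d: Deserializer[A]): A =
  d.deserialize(bytes)

val s: String = decode[String]("hi".getBytes("UTF-8")) // no instance passed explicitly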

Run-time Cost of Coproduct?

After reading the first sentence of the conclusion of Solving Problems in a Generic Way using Shapeless:
In this article, I've demonstrated how generic solutions can be created for ADTs without relying on an expensive runtime feature such as reflection
Does that mean that Shapeless's coproducts do not use run-time reflection or casting, contrary to ADTs in Scala?
I'm the author of the blog post. I think there's been a misunderstanding: I didn't mean to imply that the ADTs rely on runtime reflection. What I was referring to was this sentence from the introduction:
Traditionally, generic programs have been written with the help of reflection APIs.
As far as I know, ADTs don't use runtime reflection, but without shapeless there are not many options for traversing an ADT in a generic way. One way to achieve this is to use reflection to look up an object's fields at runtime and iterate through them. You could also write code that traverses your ADT and pattern matches on every ADT node, but that solution will only work for your particular ADT and not for any other ADT, i.e. the solution would not be generic.
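For illustration, a minimal sketch of that non-generic approach (the Shape ADT here is just an example):
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(width: Double, height: Double) extends Shape

// No reflection involved, but this traversal only works for Shape,
// not for any other ADT -- i.e. it is not generic.
def describe(s: Shape): String = s match {
  case Circle(r)  => s"circle of radius $r"
  case Rect(w, h) => s"rect $w x $h"
}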

Scala naming convention for options

When I return something of type Option, it seems useful to indicate in the function's name that it returns an option, not the thing itself. For example, seqs have reduceOption. Is there a standard naming convention? Things I have seen:
maybeFunctionName
functionNameOption
- neither seems to be all that great.
reduceOption and friends (headOption, etc.) are only named that way to distinguish them from their unsafe alternatives (which arguably shouldn't exist in the first place, i.e. there should just be a head that returns an Option[A]).
whateverOption isn't the usual practice in the standard library (or most other Scala libraries that I'm aware of), and in general you shouldn't need or want to use this kind of Hungarian notation in Scala.
Why would you want to make your function names longer? It doesn't contribute anything, as the fact that it returns an Option is obvious when looking at the function's type.
reduceOption is sort of a special case, since in most cases you really want to use reduce, except that it doesn't work on empty sequences.
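To make the point concrete (a small sketch; lookupUser is just an illustrative name):
val xs: Seq[Int] = Seq.empty
// xs.head would throw NoSuchElementException; the Option suffix marks the safe variant
val first: Option[Int] = xs.headOption

// For your own code, the return type already says it all:
def lookupUser(id: Long): Option[String] = None // no "Option"/"maybe" needed in the name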

Debunking Scala myths [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
What are the most commonly held misconceptions about the Scala language, and what counter-examples exist to these?
UPDATE
I was thinking more about various claims I've seen, such as "Scala is dynamically typed" and "Scala is a scripting language".
I accept that "Scala is [Simple/Complex]" might be considered a myth, but it's also a viewpoint that's very dependent on context. My personal belief is that it's the very same features that can make Scala appear either simple or complex, depending on who's using them. Ultimately, the language just offers abstractions, and it's the way these are used that shapes perceptions.
Not only that, but it has a certain tendency to inflame arguments, and I've not yet seen anyone change a strongly-held viewpoint on the topic...
Myth: That Scala’s “Option” and Haskell’s “Maybe” types won’t save you from null. :-)
Debunked: Why Scala's "Option" and Haskell's "Maybe" types will save you from null by James Iry.
Myth: Scala supports operator overloading.
Actually, Scala just has very flexible method naming rules and infix syntax for method invocation, with special rules for determining method precedence when the infix syntax is used with 'operators'. This subtle distinction has critical implications for the utility and potential for abuse of this language feature compared to true operator overloading (a la C++), as explained more thoroughly in James Iry's answer to this question.
Myth: methods and functions are the same thing.
In fact, a function is a value (an instance of one of the FunctionN classes), while a method is not. Jim McBeath explains the differences in greater detail. The most important practical distinctions (sketched in code after this list) are:
Only methods can have type parameters
Only methods can take implicit arguments
Only methods can have named and default parameters
When referring to a method, an underscore is often necessary to distinguish method invocation from partial function application (e.g. str.length evaluates to a number, while str.length _ evaluates to a zero-argument function).
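A small sketch of these distinctions (names are illustrative):
object MethodVsFunction {
  def m(x: Int): Int = x + 1              // a method: not itself a value
  val f: Int => Int = x => x + 1          // a function value, i.e. a Function1 instance

  val g: Int => Int = m _                 // eta-expansion turns the method into a function

  // Only a method can have type parameters and default/named parameters:
  def pick[A](a: A, b: A, first: Boolean = true): A = if (first) a else b

  val s = "hello"
  val len: Int = s.length                 // method invocation: evaluates to 5
  val lenF: () => Int = s.length _        // underscore: a zero-argument function value
}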
I disagree with the argument that Scala is hard because you can use very advanced features to do hard stuff with it. The scalability of Scala means that you can write DSL abstractions and high-level APIs in Scala itself that would otherwise need a language extension. So, to be fair, you need to compare Scala libraries to other languages' compilers. People don't say that C# is hard because (I assume; I don't have first-hand knowledge of this) the C# compiler is pretty impenetrable. For Scala it's all out in the open. But we need to get to the point where we make clear that most people don't need to write code at this level, nor should they.
I think a common misconception amongst many Scala developers, including those at EPFL (and yourself, Kevin), is that "Scala is a simple language". The argument usually goes something like this:
scala has few keywords
scala reuses the same few constructs (e.g. PartialFunction syntax is used as the body of a catch block)
scala has a few simple rules which allow you to create library code (which may appear as if the language had special keywords/constructs). I'm thinking here of implicits; methods ending in a colon; the allowed identifier symbols; the equivalence of X(a, b) and a X b with extractors. And so on
scala's declaration-site variance means that the type system just gets out of your way. No more wildcards and ? super T
My personal opinion is that this argument is completely and utterly bogus. Scala's type system taken together with implicits allows one to write frankly impenetrable code for the average developer. Any suggestion otherwise is just preposterous, regardless of what the above "metrics" might lead you to think. (Note here that those who I've seen scoffing at the non-complexity of Java on Twitter and elsewhere happen to be uber-clever types who, it sometimes seems, had a grasp of monads, functors and arrows before they were out of short pants).
The obvious arguments against this are (of course):
you don't have to write code like this
you don't have to pander to the average developer
Of these, it seems to me that only the second is valid. Whether or not you write code quite as complex as scalaz, I think it's just silly to use the language (and continue to use it) with no real understanding of the type system. How else can one get the best out of the language?
There is a myth that Scala is difficult because Scala is a complex language.
This is false--by a variety of metrics, Scala is no more complex than Java. (Size of grammar, lines of code or number of classes or number of methods in the standard API, etc..)
But it is undeniably the case that Scala code can be ferociously difficult to understand. How can this be, if Scala is not a complex language?
The answer is that Scala is a powerful language. Unlike Java, which has many special constructs (like enums) that accomplish one particular thing--and requires you to learn specialized syntax that applies just to that one thing, Scala has a variety of very general constructs. By mixing and matching these constructs, one can express very complex ideas with very little code. And, unsurprisingly, if someone comes along who has not had the same complex idea and tries to figure out what you're doing with this very compact code, they may find it daunting--more daunting, even, than if they saw a couple of pages of code to do the same thing, since then at least they'd realize how much conceptual stuff there was to understand!
There is also an issue of whether things are more complex than they really need to be. For example, some of the type gymnastics present in the collections library make the collections a joy to use but perplexing to implement or extend. The goals here are not particularly complicated (e.g. subclasses should return their own types), but the methods required (higher-kinded types, implicit builders, etc.) are complex. (So complex, in fact, that Java just gives up and doesn't try, rather than doing it "properly" as in Scala. Also, in principle, there is hope that this will improve in the future, since the method can evolve to more closely match the goal.) In other cases, the goals are complex; list.filter(_<5).sorted.grouped(10).flatMap(_.tail.headOption) is a bit of a mess, but if you really want to take all numbers less than 5, and then take every 2nd number out of 10 in the remaining list, well, that's just a somewhat complicated idea, and the code pretty much says what it does if you know the basic collections operations.
Summary: Scala is not complex, but it allows you to compactly express complex ideas. Compact expression of complex ideas can be daunting.
There is a myth that Scala is non-deployable, whereas a wide range of third-party Java libraries can be deployed without a second thought.
To the extent that this myth exists, I suspect it exists among people who are not accustomed to separating a virtual machine and API from a language and compiler. If java == javac == Java API in your mind, you might get a little nervous if someone suggests using scalac instead of javac, because you see how nicely your JVM runs.
Scala ends up as JVM bytecode, plus its own custom library. There's no reason to be any more worried about deploying Scala on a small scale, or as part of some other large project, than about deploying any other library that may or may not stay compatible with whichever JVM you prefer. Granted, the Scala development team is not backed by quite as much force as the Google collections or Apache Commons, but it's got at least as much weight behind it as something like the Java Advanced Imaging project.
Myth:
def foo() = "something"
and
def bar = "something"
are the same.
It is not; you can call foo(), but bar() tries to call the apply method of StringLike with no arguments (results in an error).
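A quick illustration of the difference (sketch):
def foo() = "something"
def bar = "something"

foo()    // fine: calling the explicit empty parameter list
bar      // fine
// bar() // does not compile: it desugars to bar.apply(), and StringOps.apply wants an index
bar(0)   // compiles, and returns the Char 's' via StringOps.apply(0)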
Some common misconceptions related to Actors library:
Actors handle incoming messages in parallel, in multiple threads / against a thread pool (in fact, handling messages in multiple threads is contrary to the actors concept and may lead to race conditions; all messages are handled sequentially in one thread: thread-based actors use one thread both for mailbox processing and execution, while event-based actors may share one VM thread for execution, using a multi-threaded executor to schedule mailbox processing)
Uncaught exceptions don't change actor's behavior/state (in fact, all uncaught exceptions terminate the actor)
Myth: You can replace a fold with a reduce when computing something like a sum from zero.
This is a common mistake/misconception among new users of Scala, particularly those without prior functional programming experience. The following expressions are not equivalent:
seq.foldLeft(0)(_+_)
seq.reduceLeft(_+_)
The two expressions differ in how they handle the empty sequence: the fold produces a valid result (0), while the reduce throws an exception.
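Concretely (a small sketch):
val empty = Seq.empty[Int]
empty.foldLeft(0)(_ + _)    // 0
// empty.reduceLeft(_ + _)  // throws UnsupportedOperationException on an empty sequence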
Myth: Pattern matching doesn't fit well with the OO paradigm.
Debunked here by Martin Odersky himself. (Also see this paper - Matching Objects with Patterns - by Odersky et al.)
Myth: this.type refers to the same type represented by this.getClass.
As an example of this misconception, one might assume that in the following code the type of v.me is B:
trait A { val me: this.type = this }
class B extends A
val v = new B
In reality, this.type refers to the type whose only instance is this. In general, x.type is the singleton type whose only instance is x. So in the example above, the type of v.me is v.type. The following session demonstrates the principle:
scala> val s = "a string"
s: java.lang.String = a string
scala> var v: s.type = s
v: s.type = a string
scala> v = "another string"
<console>:7: error: type mismatch;
found : java.lang.String("another string")
required: s.type
v = "another string"
Scala has type inference and refinement types (structural types), whereas Java does not.
The myth is busted by James Iry.
Myth: that Scala is highly scalable, without qualifying what forms of scalability.
Scala may indeed be highly scalable in terms of the ability to express higher-level denotational semantics, and this makes it a very good language for experimentation and even for scaling production at the project-level scale of top-down coordinated compositionality.
However, every referentially opaque language (i.e. one that allows mutable data structures) is imperative (not declarative) and will not scale to WAN-wide, bottom-up, uncoordinated compositionality and security. In other words, imperative languages are compositional (and security) spaghetti w.r.t. uncoordinated development of modules. I realize such uncoordinated development is perhaps currently considered by most to be a "pipe dream" and thus perhaps not a high priority. And this is not to disparage the benefit to compositionality (i.e. eliminating corner cases) that higher-level semantic unification can provide, e.g. a category-theory model for the standard library.
There will possibly be significant cognitive dissonance for many readers, especially since there are popular misconceptions about imperative vs. declarative (i.e. mutable vs. immutable), (and eager vs. lazy,) e.g. monadic semantics are never inherently imperative, yet it is often claimed that they are. Yes, in Haskell the IO monad is imperative, but its being imperative has nothing to do with its being a monad.
I explained this in more detail in the "Copute Tutorial" and "Purity" sections, which is either at the home page or temporarily at this link.
My point is that I am very grateful Scala exists, but I want to clarify what Scala scales and what it does not. I need Scala for what it does well, i.e. for me it is the ideal platform to prototype a new declarative language, but Scala itself is not exclusively declarative, and afaik referential transparency can't be enforced by the Scala compiler, other than by remembering to use val everywhere.
I think my point applies to the complexity debate about Scala. I have found (so far and mostly conceptually, since so far limited in actual experience with my new language) that removing mutability and loops, while retaining diamond multiple inheritance subtyping (which Haskell doesn't have), radically simplifies the language. For example, the Unit fiction disappears, and afaics, a slew of other issues and constructs become unnecessary, e.g. non-category theory standard library, for comprehensions, etc..

Is there a method_missing in scala?

similar to the one in Ruby
Yes, as of Scala 2.9 with the -Xexperimental option, one can use the Dynamic trait
(scaladoc). Classes that extend Dynamic get the magical method applyDynamic(methodName, args) which behaves like Ruby's method_missing.
Among other things, the Dynamic trait can be useful for interfacing with dynamic languages on the JVM.
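A minimal sketch of the Dynamic trait in use (in Scala 2.10 and later it requires the scala.language.dynamics import rather than -Xexperimental; the class and method names below are illustrative):
import scala.language.dynamics

class MethodCatcher extends Dynamic {
  // called for any otherwise "missing" method, e.g. obj.greet("world")
  def applyDynamic(name: String)(args: Any*): String =
    s"called $name with (${args.mkString(", ")})"
}

val obj = new MethodCatcher
obj.greet("world") // "called greet with (world)"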
The following is no longer strictly true with the Dynamic trait found in [experimental] Scala 2.9. See the answer from Kipton Barros, for example.
However, Dynamic is still not quite like method_missing, but rather employs compiler magic to effectively rewrite method calls to "missing" methods, as determined statically, to a proxy (applyDynamic). It is the approach of statically-determining the "missing" methods that differentiates it from method_missing from a polymorphism viewpoint: one would need to try and dynamically forward (e.g. with reflection) methods to get true method_missing behavior. (Of course this can be avoided by avoiding sub-types :-)
No. Such a concept does not exist in Java or Scala.
As in Java, all methods in Scala are 'bound' at compile time (this also determines which overload is used, etc.). If a program compiles, said method exists (or did, according to the compiler); otherwise it does not. This is why you can get a NoSuchMethodError if you change a class definition without rebuilding all affected classes.
If you are just worried about trying to call a method on an object which conforms to a signature ("typed duck typing"), then perhaps you may be able to get away with structural typing. Structural typing in Scala is magical access over reflection -- it thus defers the 'binding' until runtime, and a runtime error may be generated. Unlike method_missing this does not allow the target to handle the error, but it does allow the caller to handle it (and the caller could theoretically call a defined methodMissing method on the target... but this is probably not the best way to approach Scala; Scala is not Ruby :-)
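A minimal sketch of that structural-typing ("typed duck typing") approach; it relies on reflection at runtime, hence the import below to enable the feature:
import scala.language.reflectiveCalls

def greet(x: { def hello(): String }): String = x.hello()

class Greeter { def hello(): String = "hi" }

greet(new Greeter) // "hi" -- no shared trait or interface required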
Not really. It doesn't make sense. Scala is a statically-typed language in which methods are bound at compile time; Ruby is a dynamically-typed language in which messages are passed to objects, and these messages are evaluated at runtime, which allows Ruby to handle messages that it doesn't directly respond to, à la method_missing.
You can mimic method_missing in a few ways in Scala, notably by using the Actors library, but it's not quite the same (or nearly as easy) as Ruby's method_missing.
No, this is not possible in Scala 2.8 and earlier.