I have a bunch of Scala classes (like Lift's Box, Scala's Option, etc.) that I'd like to
use in Clojure as a Clojure ISeq.
How do I tell Clojure how to make these classes into an ISeq so that all the various sequence
related functions "just work"?
To build on Arthur's answer, you can provide a generic wrapper class in Scala along these lines:
class WrapCollection(repr: TraversableOnce[_]) extends clojure.lang.Seqable { ... }
If the classes implement the Iterable interface, you can just call seq on them to get a sequence. Most of the functions in the sequence library will do this for you, though, so in almost all normal cases you can pass them to seq functions like first and count as is.
I'm reading the first section of the book "Scala in Depth". It gives an example which converts a Java JdbcTemplate interface to Scala:
Java code:
public interface JdbcTemplate {
    List query(PreparedStatementCreator psc,
               RowMapper rowMapper);
}
Scala code:
trait JdbcTemplate {
  def query[ResultItem](psc: Connection => PreparedStatement,
                        rowMapper: (ResultSet, Int) => ResultItem): List[ResultItem]
}
Then it says:
With a few simple transformations, we've created an interface that works directly against functions. This is a more functional approach simply because scala's function traits allow composition. By the time you're finished reading this book, you'll be able to approach the design of this interface completely differently.
I can't understand "traits allow composition" here, since I can't find any "composition" in the example provided.
Am I missing anything?
You've missed the first part of this phrase:
scala's function traits allow composition
In Scala, a function is a simple object constructed from a trait like Function1[-T, +R], and on all such function traits you have two methods: andThen and compose.
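For instance, a minimal illustration of those two methods:

```scala
// Function1 values can be combined with compose and andThen:
// (f compose g)(x) == f(g(x)), while (f andThen g)(x) == g(f(x)).
val addOne: Int => Int = _ + 1
val double: Int => Int = _ * 2

val doubleThenAdd: Int => Int = addOne compose double // applies double first
val addThenDouble: Int => Int = addOne andThen double // applies addOne first

println(doubleThenAdd(3)) // 7
println(addThenDouble(3)) // 8
```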
Your code snippet is not a composition sample, but it would allow for composition. Indeed, a trait alone composes nothing more than... itself with itself :)
Traits allow composition when an object is created, thanks to the with keyword.
Basically, it lets you add a trait's methods to an associated trait or class, or even to an object at creation time!
Composition is known to be a runtime feature (its major benefit), while inheritance is known to be a compile-time feature, and thus static.
Here's a sample using trait composition at runtime:
val obj1 = new MyClass() with JdbcTemplate // assumes query ends up implemented, since JdbcTemplate leaves it abstract
// obj1.query(...)
Compared to Java: if MySuperClass were a class or abstract class, you could still add or alter behaviour from elsewhere at runtime by implementing some kind of Strategy pattern, but that would require you to describe an interface plus an implementation... boring.
A trait, on the contrary, lets you define the methods in one place.
That's why in Scala a trait is said to "allow composition", and obviously functional programming concepts fit better with composition than with inheritance.
UPDATE ----------------------
@Alexiv is right => the important words were Scala's FUNCTION traits.
My explanation above is still right, but it doesn't correspond to what the author meant.
IMHO, he meant: Connection => PreparedStatement is a Function1, and since functions lend themselves to composition, this enables some nice processing from the functional programming world to build the required query method parameter.
The point the author is making is that replacing PreparedStatementCreator and RowMapper with function types yields a more functional API (surprise!). The API better supports code reuse since the psc and rowMapper functions can be composed from simple functional components. You might have functions, for example, that filter the ResultSet, extract various pieces of data, transform the data (map), aggregate it (fold), etc.
As @Alexiv pointed out, all Scala function objects (via the various Function traits) provide andThen and compose methods that support function composition.
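As a sketch of what that enables (the Row type and the helper functions below are hypothetical stand-ins, not the book's API):

```scala
// Hypothetical stand-in for a database row, keeping the sketch self-contained.
type Row = Map[String, String]

// Small, reusable pieces...
val extractName: Row => String = row => row("name")
val normalize: String => String = _.trim.toLowerCase

// ...composed into a row mapper, which is exactly what a function-typed
// parameter like rowMapper in the query method can accept.
val rowMapper: Row => String = extractName andThen normalize

println(rowMapper(Map("name" -> "  Alice "))) // alice
```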
I'm new to Scala (just started learning it), but I have noticed something strange: the classes Array and List both have methods/functions such as foreach, forall, map, etc. But none of these methods appears to be inherited from some special class (trait). From a Java perspective, if Array and List provide some common contract, that contract has to be declared in an interface and partially implemented in abstract classes. Why does each type (Array and List) in Scala declare its own set of methods? Why don't they have some common type?
But any of these methods aren't inherited from some special class(trait)
That's simply not true.
If you open the scaladoc, look up, say, the .map method of Array and List, and then click on it, you'll see where it is defined:
For List: it is inherited through the collections hierarchy (TraversableLike in the 2.x library).
For Array: it comes from the ArrayOps wrapper, applied via an implicit conversion.
See also the info about Traversable and Iterable, both of which define most of the contracts in Scala collections (though some collections may re-implement methods defined in Traversable/Iterable, e.g. for efficiency).
You may also want to look at relations between collections (scroll to the two diagrams) in general.
I'll extend om-nom-nom's answer here.
Scala doesn't have its own Array -- that's the Java array, and the Java array doesn't implement any interface. In fact, it isn't even a proper class, if I'm not mistaken, and it is certainly implemented through special mechanisms at the bytecode level.
In Scala, however, everything is a class -- an Int (Java's int) is a class, and so is Array. But in these cases, where the actual class comes from Java, Scala is limited by the type hierarchy provided by Java.
Now, going back to foreach, map, etc, they are not methods present in Java. However, Scala allows one to add implicit conversions from one class to another, and, through that mechanism, add methods. When you call arr.foreach(println), what is really done is Predef.refArrayOps(arr).foreach(println), which means foreach belongs to the ArrayOps class -- as you can see in the scaladoc documentation.
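The same mechanism can be sketched with a made-up method (RichArray and second here are illustrative; the real conversion is Predef.refArrayOps returning an ArrayOps):

```scala
// An implicit wrapper class supplies the extra method; the compiler applies
// the conversion automatically when `second` is not found on Array itself.
implicit class RichArray[A](arr: Array[A]) {
  def second: A = arr(1)
}

val xs = Array(10, 20, 30)
println(xs.second) // 20 -- really RichArray(xs).second under the hood
```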
In groovy one can do:
class Foo {
Integer a,b
}
Map map = [a:1,b:2]
def foo = new Foo(map) // map expanded, object created
I understand that Scala is not, in any sense of the word, Groovy, but I am wondering whether map expansion in this context is supported.
Simplistically, I tried and failed with:
case class Foo(a:Int, b:Int)
val map = Map("a"-> 1, "b"-> 2)
Foo(map: _*) // no dice, always applied to first property
A related thread that shows possible solutions to the problem.
Now, from what I've been able to dig up, as of Scala 2.9.1 at least, reflection in regard to case classes is basically a no-op. The net effect, then, appears to be that one is forced into some form of manual object creation, which, given the power of Scala, is somewhat ironic.
I should mention that the use case involves the servlet request parameters map. Specifically, using Lift, Play, Spray, Scalatra, etc., I would like to take the sanitized params map (filtered via routing layer) and bind it to a target case class instance without needing to manually create the object, nor specify its types. This would require "reliable" reflection and implicits like "str2Date" to handle type conversion errors.
Perhaps in 2.10, with the new reflection library, implementing the above will be cake. Only 2 months into Scala, so I'm just scratching the surface; I do not see any straightforward way to pull this off right now (for seasoned Scala developers, it may be doable).
Well, the good news is that Scala's Product interface, implemented by all case classes, actually doesn't make this very hard to do. I'm the author of a Scala serialization library called Salat that supplies some utilities for using pickled Scala signatures to get typed field information:
https://github.com/novus/salat - check out some of the utilities in the salat-util package.
Actually, I think this is something that Salat should do - what a good idea.
Re: D.C. Sobral's point about the impossibility of verifying params at compile time - point taken, but in practice this should work at runtime just like deserializing anything else with no guarantees about structure, like JSON or a Mongo DBObject. Also, Salat has utilities to leverage default args where supplied.
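For reference, here is a small self-contained illustration of what the Product interface exposes on any case class:

```scala
case class Foo(a: Int, b: Int)

val foo = Foo(1, 2)
// Every case class implements Product, giving access to its arity
// and field values without any hand-written boilerplate:
println(foo.productArity)           // 2
println(foo.productIterator.toList) // List(1, 2)
```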
This is not possible, because it is impossible to verify at compile time that all parameters were passed in that map.
I just read and enjoyed the Cake pattern article. However, to my mind, one of the key reasons to use dependency injection is that you can vary the components being used by either an XML file or command-line arguments.
How is that aspect of DI handled with the Cake pattern? The examples I've seen all involve mixing traits in statically.
Since mixing in traits is done statically in Scala, if you want to vary the traits mixed into an object, create different objects based on some condition.
Let's take a canonical cake pattern example. Your modules are defined as traits, and your application is constructed as a simple object with a bunch of functionality mixed in:
val application =
  new Communications
    with Parsing
    with Persistence
    with Logging
    with ProductionDataSource {}
application.startup
Now all of those modules have nice self-type declarations which define their inter-module dependencies, so that line only compiles if all your inter-module dependencies exist, are unique, and are well-typed. In particular, the Persistence module has a self-type which says that anything implementing Persistence must also implement DataSource, an abstract module trait. Since ProductionDataSource inherits from DataSource, everything's great, and that application construction line compiles.
But what if you want to use a different DataSource, pointing at some local database for testing purposes? Assume further that you can't just reuse ProductionDataSource with different configuration parameters, loaded from some properties file. What you would do in that case is define a new trait TestDataSource which extends DataSource, and mix it in instead. You could even do so dynamically based on a command line flag.
val application =
  if (test)
    new Communications
      with Parsing
      with Persistence
      with Logging
      with TestDataSource {}
  else
    new Communications
      with Parsing
      with Persistence
      with Logging
      with ProductionDataSource {}
application.startup
Now that looks a bit more verbose than we would like, particularly if your application needs to vary its construction on multiple axes. On the plus side, you usually only have one chunk of conditional construction logic like that in an application (or at worst one per identifiable component lifecycle), so at least the pain is minimized and fenced off from the rest of your logic.
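The shape of the pattern can be condensed into a self-contained sketch (trait names and bodies here are illustrative stand-ins for the modules above):

```scala
trait DataSource { def connectionString: String }
trait ProductionDataSource extends DataSource {
  def connectionString = "jdbc:prod"
}
trait TestDataSource extends DataSource {
  def connectionString = "jdbc:test"
}
// The self-type declares the inter-module dependency:
// anything implementing Persistence must also be a DataSource.
trait Persistence { self: DataSource =>
  def save(x: String): String = s"saved $x via $connectionString"
}

val test = true
// Choose which DataSource to mix in based on a runtime condition:
val app: Persistence =
  if (test) new Persistence with TestDataSource {}
  else new Persistence with ProductionDataSource {}

println(app.save("x")) // saved x via jdbc:test
```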
Scala is also a scripting language. So your configuration XML can be a Scala script. It is type-safe and not a different language.
Simply look at startup:
scala -cp first.jar:second.jar startupScript.scala
is not so different from:
java -cp first.jar:second.jar com.example.MyMainClass context.xml
You can always use DI, but you have one more tool.
The short answer is that Scala doesn't currently have any built-in support for dynamic mixins.
I am working on the autoproxy-plugin to support this, although it's currently on hold until the 2.9 release, when the compiler will have new features making it a much easier task.
In the meantime, the best way to achieve almost exactly the same functionality is by implementing your dynamically added behavior as a wrapper class, then adding an implicit conversion back to the wrapped member.
Until the AutoProxy plugin becomes available, one way to achieve the effect is to use delegation:
trait Module {
def foo: Int
}
trait DelegatedModule extends Module {
var delegate: Module = _
def foo = delegate.foo
}
class Impl extends Module {
def foo = 1
}
// later
val composed: Module with ... with ... = new DelegatedModule with ... with ...
composed.delegate = choose() // choose is linear in the number of `Module` implementations
But beware: the downside of this is that it's more verbose, and you have to be careful about the initialization order if you use vars inside a trait. Another downside is that if there are path-dependent types within Module above, you won't be able to use delegation that easily.
But if there is a large number of different implementations that can be varied, it will probably cost you less code than listing all the possible combinations as separate cases.
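Filling in the elided pieces with trivial stand-ins, the delegation approach looks like this in use:

```scala
trait Module { def foo: Int }
trait DelegatedModule extends Module {
  var delegate: Module = _   // assigned after construction
  def foo = delegate.foo
}
class Impl extends Module { def foo = 1 }
class Impl2 extends Module { def foo = 2 }

val useFirst = true
val composed = new DelegatedModule {}
// choose() from the original snippet, reduced to a simple condition here:
composed.delegate = if (useFirst) new Impl else new Impl2
println(composed.foo) // 1, forwarded to the chosen delegate
```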
Lift has something along those lines built in. It's mostly in scala code, but you have some runtime control. http://www.assembla.com/wiki/show/liftweb/Dependency_Injection
Trait Traversable has methods such as toList, toMap, toSeq. Given that List, Map, and Seq are subclasses of Traversable, this creates a circular dependency, which is generally not a desirable design pattern.
I understand that this is constrained to the collections library and it provides some nice transformation methods.
Was there any alternative design considered? Such as a "utility" class, or adding the conversion methods to Predef?
Say I want to add a new class: class RandomList extends List {...}. It would be nice to have a toRandomList method available for all Traversable classes, but for that I would need to "pimp my library" with an implicit on Traversable, which seems a bit of an overkill. With a utility-class design, I could just extend that class (or Predef) to add my conversion method. What would be the recommended design here?
An alternative and extensible approach would be a single generic method: to[List], to[RandomList].
It's a bit tricky to add this with implicits, though. https://gist.github.com/445874/2a4b0bb0bde29485fec1ad1a5bbf968df80f2905
To add a toRandomList you'd have to resort to the pimp-my-library pattern indeed. However, why do you think that is overkill? The overhead is negligible. And it wouldn't work by extending a utility class -- why would Scala look into your new class for that method? Not to mention that you'd have to instantiate such a class to be able to access its methods.
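For instance, the enrichment described above could look like this (RandomList is a hypothetical stand-in for the question's class, and Iterable is used here for portability across Scala versions):

```scala
// Hypothetical target class from the question, reduced to a trivial wrapper.
class RandomList[A](val items: List[A])

// "Pimp my library": an implicit wrapper makes toRandomList available
// on any Iterable, without touching the collection classes themselves.
implicit class RandomListOps[A](xs: Iterable[A]) {
  def toRandomList: RandomList[A] = new RandomList(xs.toList)
}

println(List(1, 2, 3).toRandomList.items) // List(1, 2, 3)
```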
There is no circular dependency here.
A circular dependency is what happens when there are several independent components that refer to each other.
The Scala standard library is one component. Since it is always built in one step, there is no problem.
You are right. Let's remove toString from the String class...