I am trying to make a core java class implement an interface.
I am trying something along the lines of:
(extend-protocol clojure.lang.Seqable
  java.lang.Integer
  (seq [this] (seq (str this))))
but this does not seem to work because Seqable is just an interface and not a protocol.
Is it possible to make (seq 123) work? How was seq implemented for java.lang.String?
proxy also does not seem capable of doing this.
I know I must be missing something really obvious here.
Not possible. clojure.lang.RT/seqFrom has special cases for a number of Java built-in types, like Collection and String, and you can't add your own for classes that don't implement Seqable directly.
If the Java class implements Iterable, you can wrap it in seq and use it as a sequence, with certain restrictions.
We are pretty familiar with implicits in Scala by now, but macros are a pretty unexplored area (at least for me) and, despite the presence of some great articles by Eugene Burmako, it is still not easy material to just dive into.
In this particular question I'd like to find out if there is a possibility to achieve the analogous to the following code functionality using just macros:
implicit class Nonsense(val s: String) {
  def ##(i: Int) = s.charAt(i)
}
So "asd" ## 0 will return 'a', for example. Can I implement macros that use infix notation? The reason to this is I'm writing a DSL for some already existing project and implicits allow making the API clear and concise, but whenever I write a new implicit class, I feel like introducing a new speed-reducing factor. And yes, I do know about value classes and stuff, I just think it would be really great if my DSL transformed into the underlying library API calls during compilation rather than in runtime.
TL;DR: Can I replace implicits with macros without changing the API? Can I write macros in infix form? Is there something even more suitable for this case? Is it worth the trouble?
UPD. To those advocating value classes: in my case I have a little more than just a simple wrapper - the wrappers are often stacked. For example, I have an implicit class that takes some parameters and returns a lambda wrapping those parameters (i.e. a partial function), and a second implicit class made specifically for wrapping this type of function. I can achieve something like this:
a --> x ==> b
where the first class wraps a and adds the --> method, and the second one wraps the return type of a --> x and defines ==>(b). Plus, it may well be the case that the user creates a considerable number of objects in this fashion. I just don't know whether this will be efficient, so if you could tell me that value classes cover this case, I'd be really glad to know it.
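For what it's worth, the stacked wrappers described above can be written as value classes (extends AnyVal), which usually avoids the allocation. This is only a sketch under assumed names - connect, ArrowOps and StepOps are illustrative, not part of the real DSL:

object StackedDsl {
  // Hypothetical underlying library call that the DSL should compile down to.
  def connect(a: String, x: Int): Int => String = b => s"$a -> $x ==> $b"

  // First wrapper: adds --> to the left-hand operand.
  implicit class ArrowOps(val a: String) extends AnyVal {
    def -->(x: Int): Int => String = connect(a, x)
  }

  // Second wrapper: adds ==> to the result of a --> x.
  implicit class StepOps(val f: Int => String) extends AnyVal {
    def ==>(b: Int): String = f(b)
  }
}

object StackedDslDemo extends App {
  import StackedDsl._
  println("a" --> 1 ==> 2)   // parses as ("a" --> 1) ==> 2, since - binds tighter than =
}

Note that value classes still box in some situations (when used generically, stored in collections, etc.), so whether the wrapper allocation really disappears depends on usage; a macro-based rewrite, as discussed in the answer below, is the only way to guarantee it.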
Back in the day (2.10.0-RC1) I had trouble using implicit classes for macros (sorry, I don't recall exactly why), but the solution was to use:
an implicit def macro to convert to a class
the infix operator defined as a def macro in that class
So something like the following might work for you:
implicit def toNonsense(s: String): Nonsense = macro ...
...
class Nonsense(...) {
  ...
  def ##(...): ... = macro ...
  ...
}
That was pretty painful to implement. That being said, macros have become easier to implement since then.
If you want to check what I did (though I'm not sure it applies to what you want to do), refer to this excerpt of my code (non-idiomatic style).
I won't address the relevance of that here, as it's been commented by others.
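For concreteness, here is a hedged sketch of the same idea using Scala 2.11-style def macros and quasiquotes. The names CharAtOps/toCharAtOps and the tree handling are illustrative assumptions, not the code referenced above, and the implicit conversion is shown as a plain def for simplicity where the answer used a macro:

import scala.language.experimental.macros
import scala.language.implicitConversions
import scala.reflect.macros.blackbox

class CharAtOps(val s: String) {
  // The infix operator is a def macro; its implementation lives in the companion object.
  def ##(i: Int): Char = macro CharAtOps.hashImpl
}

object CharAtOps {
  implicit def toCharAtOps(s: String): CharAtOps = new CharAtOps(s)

  def hashImpl(c: blackbox.Context)(i: c.Expr[Int]): c.Expr[Char] = {
    import c.universe._
    // c.prefix is the receiver after implicit conversion, e.g. toCharAtOps("asd").
    // If it is just the one-argument wrapper construction, unwrap it so that no
    // CharAtOps survives in the generated code.
    val str = c.prefix.tree match {
      case Apply(_, underlying :: Nil) => underlying
      case other                       => Select(other, TermName("s"))
    }
    c.Expr[Char](q"$str.charAt(${i.tree})")
  }
}

With this in place, "asd" ## 0 should expand at compile time to roughly "asd".charAt(0). As usual with macros, the call sites have to live in a separate compilation unit (e.g. another sbt subproject) from the macro implementation.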
I'm new to Scala (I've just started learning it), but I have noticed something that seems strange to me: the classes Array and List both have methods such as foreach, forall, map, etc., yet none of these methods appear to be inherited from some special class (trait). From a Java perspective, if Array and List provide a common contract, that contract has to be declared in an interface and perhaps partially implemented in abstract classes. Why does each type (Array and List) in Scala declare its own set of methods? Why don't they share some common type?
none of these methods appear to be inherited from some special class (trait)
That is simply not true.
If you open the Scaladoc and look up, say, the .map method of Array and of List, and then click through, you'll see where it is defined for each of them:
For List:
For Array:
See also the info about Traversable and Iterable, both of which define most of the contracts in the Scala collections (though some collections may re-implement methods defined in Traversable/Iterable, e.g. for efficiency).
You may also want to look at the relations between collections in general (scroll down to the two diagrams).
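As a quick illustration (pre-2.13 collections assumed, where Traversable still exists), a single function written against the shared trait works for several concrete collection types:

object SharedContractDemo extends App {
  // map and headOption are declared in the shared collection traits,
  // so one function can accept List, Vector, Set, ... alike.
  def firstDoubled(xs: Traversable[Int]): Option[Int] = xs.map(_ * 2).headOption

  println(firstDoubled(List(1, 2, 3)))    // Some(2)
  println(firstDoubled(Vector(1, 2, 3)))  // Some(2)
}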
I'll extend om-nom-nom's answer here.
Scala doesn't have an Array -- that's Java's Array, and Java's Array doesn't implement any interface. In fact, it isn't even a proper class, if I'm not mistaken, and it is certainly implemented through special mechanisms at the bytecode level.
In Scala, however, everything is a class -- an Int (Java's int) is a class, and so is Array. But in these cases, where the actual class comes from Java, Scala is limited by the type hierarchy provided by Java.
Now, going back to foreach, map, etc.: they are not methods present in Java. However, Scala allows one to add implicit conversions from one class to another and, through that mechanism, add methods. When you call arr.foreach(println), what is really executed is Predef.refArrayOps(arr).foreach(println), which means foreach belongs to the ArrayOps class -- as you can see in the Scaladoc documentation.
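A small sketch of that expansion (Scala 2.12-era Predef assumed; the exact wrapper type differs slightly across versions):

object ArrayOpsDemo extends App {
  val arr = Array("a", "b", "c")

  // What you write:
  arr.foreach(println)

  // Roughly what the compiler inserts: Array itself has no foreach,
  // it comes from the ArrayOps wrapper produced by an implicit in Predef.
  Predef.refArrayOps(arr).foreach(println)
}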
I have a bunch of Scala classes (like Lift's Box, Scala's Option, etc.) that I'd like to use in Clojure as a Clojure ISeq.
How do I tell Clojure how to make these classes into an ISeq so that all the various sequence-related functions "just work"?
To build on Arthur's answer, you can provide a generic wrapper class in Scala along these lines:
class WrapCollection(repr: TraversableOnce[_]) extends clojure.lang.Seqable { ... }
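For instance, the elided body might be filled in along the following lines (a sketch only; it assumes Scala 2.12's JavaConverters and Clojure's clojure.lang.RT, and it materializes the collection, so other bridging choices are possible):

import scala.collection.JavaConverters._

class WrapCollection(repr: TraversableOnce[_]) extends clojure.lang.Seqable {
  // Seqable requires a single method, seq(). Hand Clojure's runtime a
  // java.util.List and let RT.seq build the ISeq from it (nil/null when empty).
  override def seq(): clojure.lang.ISeq =
    clojure.lang.RT.seq(repr.toSeq.asJava)
}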
If the classes implement the Iterable interface then you can just call seq on them to get a sequence. Most of the functions in the sequence library will do this for you, though, so in almost all normal cases you can just pass them to seq functions like first and count as is.
Because I know a little bit of Java, I was trying to use Java types like java.lang.Integer, java.lang.Character, java.lang.Boolean, and so on in all my Scala code. Now people have told me: "No! For everything in Scala there is its own type; the Java stuff will work, but you should always prefer the Scala types and objects."
OK, now I see that everything that is in Java is also in Scala. I'm not sure why it is better to use, for example, Scala's Boolean instead of Java's Boolean, but fine. If I look at the types I see scala.Boolean, scala.Int, scala.Byte ... and then I look at String, but it's not scala.String (well, it's not even java.lang.String, confusingly); it's just String. But I thought I should use everything that comes directly from scala. Maybe I don't understand Scala correctly; could somebody explain it, please?
First, the statement 'well, it's not even java.lang.String' is not quite correct. The plain String name comes from a type alias defined in the Predef object:
type String = java.lang.String
and the contents of Predef are imported into every Scala source file, hence you write String instead of the full java.lang.String, but in fact they are the same.
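A quick way to convince yourself of that:

object StringAliasCheck extends App {
  val s: String = "hi"
  val j: java.lang.String = s               // no conversion: the two names denote the same type
  implicitly[String =:= java.lang.String]   // compiles only because the types are identical
  println(s == j)                           // true
}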
java.lang.String is a very special class that the JVM treats in a special way. As @pagoda_5b said, it is declared final, so it is not possible to extend it (and this is a good thing, in fact), so the Scala library provides a wrapper (RichString) with additional operations and an implicit conversion String -> RichString that is available by default.
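A sketch of how that plays out in current Scala versions, where the wrapper is called StringOps and the conversion lives in Predef:

object StringWrapperDemo extends App {
  val s = "scala"

  // map is not a method of java.lang.String; it comes from the wrapper.
  println(s.map(_.toUpper))                        // SCALA

  // Roughly what the compiler inserts via the implicit conversion in Predef.
  println(Predef.augmentString(s).map(_.toUpper))  // SCALA
}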
However, the situation with Integer, Character, Boolean, etc. is slightly different. You see, even though String is treated specially by the JVM, it is still a plain class whose instances are plain objects. Semantically it is no different from, say, the List class.
The situation with primitive types is different again. The Java types int, char and boolean are not classes, and values of these types are not objects. But Scala is a fully object-oriented language; there are no primitive types. It would be possible to use java.lang.{Integer,Boolean,...} everywhere you need the corresponding types, but this would be awfully inefficient because of boxing.
Because of this, Scala needed a way to present Java's primitive types in an object-oriented setting, and so the scala.{Int,Boolean,...} classes were introduced. These types are treated specially by the Scala compiler: scalac generates code that works with primitives when it encounters one of these classes. They also extend the AnyVal class, which prevents you from using null as a value for these types. This approach solves the efficiency problem, leaves the java.lang.{Integer,Boolean,...} classes available where you really need boxing, and also provides an elegant way to use the primitives of another host system (e.g. the .NET runtime).
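A small sketch of the practical difference (the comments describe the usual compilation behaviour, which this snippet illustrates rather than proves):

object PrimitiveDemo extends App {
  val x: Int = 42                            // scala.Int compiles down to the JVM primitive int
  val boxed: java.lang.Integer = Int.box(x)  // explicit boxing where the Java class is really needed
  val xs: List[Int] = List(1, 2, 3)          // generic containers still box their elements at runtime

  println(x + boxed.intValue)                // 84: unbox and compute with primitives again
  println(xs.sum)                            // 6
}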
I'm just guessing here
If you look at the docs, you can see that the Scala versions of the primitives give you all the expected operators that work on numeric or boolean types, plus sensible conversions, without resorting to boxing and unboxing as with the java.lang wrappers.
I think this choice was made to give uniform and natural access to what is expected of primitive types, while at the same time making them objects like any other Scala type.
I suppose that java.lang.String required a different approach, being an Object already and final in its implementation. So the "path of least pain" was to create an implicit Rich wrapper around it to get the missing operations on String, while leaving the rest untouched.
To see it another way, java.lang.String was already good enough as-is, being immutable and so on.
It's worth mentioning that the other "primitive" types in Scala have their own Rich wrappers that provide additional, sensible operations.
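For example (a sketch; intWrapper is the Predef conversion in Scala 2.12):

object RichWrapperDemo extends App {
  // `to` is not defined on scala.Int itself; it comes from the RichInt wrapper.
  println(1 to 3)                      // the inclusive range 1, 2, 3
  println(Predef.intWrapper(1).to(3))  // roughly what the implicit conversion expands to
}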
The trait Traversable has methods such as toList, toMap and toSeq. Given that List, Map and Seq are subclasses of Traversable, this creates a circular dependency, which is generally not a desirable design pattern.
I understand that this is constrained to the collections library and it provides some nice transformation methods.
Was any alternative design considered, such as a "utility" class, or adding the conversion methods to Predef?
Say I want to add a new class: class RandomList extends List {...}. It would be nice to have a method toRandomList available for all Traversable classes, but for that I would need to "pimp my library" with an implicit on Traversable? This seems a bit of overkill. With a utility-class design, I could just extend that class (or Predef) to add my conversion method. What would be the recommended design here?
An alternative and extensible approach is the to method: to[List], to[RandomList].
It's a bit tricky to add this with implicits, though. https://gist.github.com/445874/2a4b0bb0bde29485fec1ad1a5bbf968df80f2905
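The built-in part already works out of the box on the 2.10-2.12 collections (2.13 changed the signature to .to(Vector)):

object ToDemo extends App {
  val xs = List(1, 2, 3)
  println(xs.to[Vector])   // Vector(1, 2, 3)
  println(xs.to[Set])      // Set(1, 2, 3)
}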
To add a toRandomList you'd indeed have to resort to the pimp-my-library pattern. However, why do you think that is overkill? The overhead is negligible. And it wouldn't work by extending a utility class -- why would Scala look into your new class for that method? Not to mention that you'd have to instantiate such a class to be able to access its methods.
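A hedged sketch of that route, with a purely illustrative RandomList (the question's class RandomList extends List isn't reproduced here, since List is sealed; a small wrapper stands in for it):

import scala.util.Random

// Illustrative stand-in for the question's RandomList.
class RandomList[A](underlying: List[A]) {
  def draw(): A = underlying(Random.nextInt(underlying.size))
}

object RandomListSyntax {
  // The "pimp my library" part: one small value class on Traversable.
  implicit class ToRandomListOps[A](val xs: Traversable[A]) extends AnyVal {
    def toRandomList: RandomList[A] = new RandomList(xs.toList)
  }
}

object RandomListDemo extends App {
  import RandomListSyntax._
  println(List(1, 2, 3).toRandomList.draw())
  println(Set("a", "b").toRandomList.draw())
}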
There is no circular dependency here.
A circular dependency is what happens when there are several independent components that refer to each other.
The Scala standard library is one component. Since it is always built in one step, there is no problem.
You are right. Let's remove toString from the String class...