Using Calcite's ReflectiveSchema from Scala

I'm experimenting with Calcite from Scala, and while trying to pass a simple Scala class for creating a schema at runtime (using ReflectiveSchema), I'm running into trouble.
For example, re-implementing the FoodMart JDBC example (which works well in Java), I'm calling it as simply as new ReflectiveSchema(new HR()), with the HR class rewritten in Scala as:
class HR {
  val emps: Array[Employee] = Array(new Employee(100, "Bill"))
}
I'm getting an error: ...SqlValidatorException: Object 'emps' not found within 'hr'. The problem seems to be that Scala compiles a val into a private field plus a public accessor method, while Calcite apparently discovers columns only through fields reachable via Class.getFields(), which returns public fields only.
So I suppose this direction requires a lot more hacking than a simple my_field.setAccessible(true) or similar.
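A quick check with plain Java reflection (assuming the HR class above) makes the mismatch visible:
// What java.lang.reflect sees for the Scala class above:
classOf[HR].getFields.map(_.getName)                // Array() – no public fields at all
classOf[HR].getDeclaredFields.map(_.getName)        // Array(emps) – the field exists, but is private
classOf[HR].getMethods.exists(_.getName == "emps")  // true – the public accessor is a method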
Is there any other way to construct a schema through the API, avoiding reflection and the use of JSON?
Thanks in advance for any suggestion.
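One direction worth exploring (only a sketch, and the exact signatures depend on the Calcite version) is to skip ReflectiveSchema entirely and implement the schema SPI programmatically, e.g. by extending AbstractSchema and AbstractTable; the class and column names below are made up:
import org.apache.calcite.DataContext
import org.apache.calcite.linq4j.{Enumerable, Linq4j}
import org.apache.calcite.rel.`type`.{RelDataType, RelDataTypeFactory}
import org.apache.calcite.schema.{ScannableTable, Table}
import org.apache.calcite.schema.impl.{AbstractSchema, AbstractTable}
import org.apache.calcite.sql.`type`.SqlTypeName

// A table whose row type and rows are supplied programmatically, no field reflection involved.
class EmpsTable extends AbstractTable with ScannableTable {
  override def getRowType(typeFactory: RelDataTypeFactory): RelDataType =
    typeFactory.builder()
      .add("empid", SqlTypeName.INTEGER)
      .add("name", SqlTypeName.VARCHAR)
      .build()

  override def scan(root: DataContext): Enumerable[Array[AnyRef]] =
    Linq4j.asEnumerable(Array(Array[AnyRef](Integer.valueOf(100), "Bill")))
}

// The schema exposes its tables through getTableMap instead of public fields.
class HrSchema extends AbstractSchema {
  override protected def getTableMap: java.util.Map[String, Table] =
    java.util.Collections.singletonMap[String, Table]("emps", new EmpsTable)
}
The schema could then be registered much like the reflective one, e.g. rootSchema.add("hr", new HrSchema).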

Related

Passing Spark dataframe between scala methods - Performance

Recently, I developed a Spark Streaming application using Scala and Spark. In this application, I made extensive use of implicit classes (the pimp-my-library pattern) to implement more general utilities, such as writing a DataFrame to HBase, by creating an implicit class that wraps Spark's DataFrame. For example:
implicit class DataFrameExtension(private val dataFrame: DataFrame) extends Serializable {
  ..... // Custom methods to perform some computations
}
However, a senior architect on my team refactored the code (citing style mismatch and performance as reasons) and copied these methods into a new class. Now, these methods accept a DataFrame as an argument.
Can anyone help me with the following?
Does Scala's implicit class create any overhead at run-time?
Does moving a DataFrame object between methods create any overhead, either in terms of method calls or serialization?
I have searched a bit, but couldn't find any style guide that gives guidelines on using implicit classes or methods over traditional methods.
Thanks in advance.
Does Scala's implicit class create any overhead at run-time?
Not in your case. There is some overhead when the wrapped type is an AnyVal (and thus needs to be boxed). Implicits are resolved at compile time, and except for maybe a few virtual method calls there should be no overhead.
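For completeness, the wrapper allocation itself can usually be avoided by declaring the extension as a value class; a minimal sketch with made-up names:
object Extensions {
  // A value-class extension: the wrapper is normally not allocated at all
  // (it may still box in some generic contexts).
  implicit class IntOps(val n: Int) extends AnyVal {
    def squared: Int = n * n
  }
}
// import Extensions._
// 3.squared   // compiles to what is effectively a static method call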
Does moving a DataFrame object between methods create any overhead, either in terms of method calls or serialization?
No, no more than any other type. Obviously there will be no serialization.
... if I pass dataframes between methods in Spark code, it might create a closure and, as a result, bring in the parent class that holds the dataframe object.
Only if you use scoped variables inside your dataframe operations, for example filter($"col" === myVar) where myVar is declared in the scope of the method. In that case, Spark might serialize the wrapping class, but it's easy to avoid. Please remember that dataframes are passed quite often and quite deep inside Spark code, and probably in every other library that you might be using (data sources, for example).
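A common way to avoid that capture, sketched with made-up names:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

class Report {
  val threshold = 10 // member of the enclosing, possibly non-serializable, class

  def filtered(df: DataFrame): DataFrame = {
    val t = threshold            // copy into a local val: the closure captures `t`, not `this`
    df.filter(col("value") > t)
  }
}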
It is very common (and handy) to use extension implicit classes like you did.

Can Scala classes be used in Java

class Wish {
  val s = "Hello! User. Wish you a Great day."
}

object Wish {
  def main(args: Array[String]) {
    val w = new Wish()
    println("Value - " + w.s)
  }
}
Java classes can be used in Scala. Similarly, can Scala classes be used in Java?
Yes, Scala classes can be called from Java and vice versa.
The text below is taken from the Scala FAQ:
What does it mean that Scala is compatible with Java?
The standard Scala backend is a Java VM. Scala classes are Java classes, and vice versa. You can call the methods of either language from methods in the other one. You can extend Java classes in Scala, and vice versa. The main limitation is that some Scala features do not have equivalents in Java, for example traits.
The following post could also be helpful to you: how to call Scala from Java
Yes. If you want to do this, there are a few things you might want to remember:
Do not use operators in your method names, or provide a wordy alternative alongside them. Operator names can be called from Java, but they are mangled into something very ugly.
Java users might expect Java-style getters and setters. You can generate those automatically by adding the @BeanProperty annotation to fields (see the sketch below).
In the same way, Java users might be accustomed to factory methods such as ClassName.of, where Scala uses .apply. Those you have to provide by hand if you want to offer that convenience.
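Putting the last two points together, a small sketch (re-using the Wish class from the question, with the field made a constructor parameter):
import scala.beans.BeanProperty

class Wish(@BeanProperty var s: String) // generates getS()/setS(...) for Java callers

object Wish {
  def apply(s: String): Wish = new Wish(s) // idiomatic from Scala: Wish("...")
  def of(s: String): Wish = new Wish(s)    // Java-friendly factory: Wish.of("...")
}
From Java this reads as Wish w = Wish.of("Hello"); String v = w.getS();, since Scala emits static forwarders for the companion object's public methods.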

Scala macros: Generate factory or ctor for Java Pojo

I'm currently working with a reasonably large code base where new code is written in Scala but a lot of old Java code remains. In particular, there are a lot of Java APIs we have to talk to. The old code uses simple Java POJOs with public non-final fields, without any methods or constructors, e.g.:
public class MyJavaPojo {
  public String myField1;
  public MyOtheJavaPojo myField2;
}
Note that we don't have the option of adding helper methods or constructors to these types. They are currently populated like old C structs (from before named parameters), like this:
val myPojo = new MyJavaPojo
myPojo.myField1 = ...
myPojo.myField2 = ...
Because of this, it's very easy to forget to assign one of the fields; especially when we suddenly add new fields to the MyJavaPojo class, the compiler won't complain that I've left a field null.
NOTE: We don't have the option of modifying the Java types or adding constructors the normal way. We also don't want to start creating lots and lots of manually written helper functions for object creation; we would really like to find a solution based on Scala macros instead, if possible!
What I would like is a macro that generates either a constructor-like method for my POJOs, or a factory that allows named parameters (basically letting a macro do the work instead of writing a gazillion helper methods in Scala by hand).
Do you know of any way to do this with scala macros? (I'm certain it's possible, but I've never written a scala macro in my life)
Desired API alternative 1:
val myPojo = someMacro[MyJavaPojo](myField1 = ..., myField2 = ...)
Desired API alternative 2:
val factory = someMacro[MyJavaPojo]
val myPojo = factory.apply(myField1 = ..., myField2 = ...)
NOTE/Important: Named parameters!
I'm looking for either a ready-to-use solution or hints as to where I can read up on making one.
All ideas and input appreciated!
Take a look at scala-beanutils.
@beanCompanion[MyJavaPojo] object MyScalaPojo
MyScalaPojo(...)
It probably won't work directly, as your classes are not beans and it has only been made for Scala 2.10, but the source code is < 200 lines and should give you an idea of where to start.
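For orientation, this is roughly the code such a macro would need to generate; the object below is written by hand purely to show the target shape (the factory name is made up, the field names come from the question):
// Hand-written stand-in for what someMacro[MyJavaPojo] would expand to:
object MyJavaPojoFactory {
  def apply(myField1: String, myField2: MyOtheJavaPojo): MyJavaPojo = {
    val p = new MyJavaPojo
    p.myField1 = myField1
    p.myField2 = myField2
    p
  }
}
// val myPojo = MyJavaPojoFactory(myField1 = "x", myField2 = other)
Because both parameters are required and named, forgetting a field no longer compiles; the macro's job is to derive this object automatically so that it stays in sync when fields are added to the POJO.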

Scala Case Class Map Expansion

In Groovy one can do:
class Foo {
  Integer a, b
}

Map map = [a: 1, b: 2]
def foo = new Foo(map) // map expanded, object created
I understand that Scala is not, in any sense of the word, Groovy, but I am wondering whether map expansion in this context is supported.
Simplistically, I tried and failed with:
case class Foo(a:Int, b:Int)
val map = Map("a"-> 1, "b"-> 2)
Foo(map: _*) // no dice, always applied to first property
A related thread that shows possible solutions to the problem.
Now, from what I've been able to dig up, as of Scala 2.9.1 at least, reflection in regard to case classes is basically a no-op. The net effect then appears to be that one is forced into some form of manual object creation, which, given the power of Scala, is somewhat ironic.
I should mention that the use case involves the servlet request parameters map. Specifically, using Lift, Play, Spray, Scalatra, etc., I would like to take the sanitized params map (filtered via routing layer) and bind it to a target case class instance without needing to manually create the object, nor specify its types. This would require "reliable" reflection and implicits like "str2Date" to handle type conversion errors.
Perhaps in 2.10, with the new reflection library, implementing the above will be cake. I'm only 2 months into Scala, so I'm just scratching the surface; I do not see any straightforward way to pull this off right now (for seasoned Scala developers it may be doable).
Well, the good news is that Scala's Product interface, implemented by all case classes, actually doesn't make this very hard to do. I'm the author of a Scala serialization library called Salat that supplies some utilities for using pickled Scala signatures to get typed field information.
https://github.com/novus/salat - check out some of the utilities in the salat-util package.
Actually, I think this is something that Salat should do - what a good idea.
Re: D.C. Sobral's point about the impossibility of verifying params at compile time - point taken, but in practice this should work at runtime just like deserializing anything else with no guarantees about structure, like JSON or a Mongo DBObject. Also, Salat has utilities to leverage default args where supplied.
This is not possible, because it is impossible to verify at compile time that all parameters were passed in that map.
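Absent compile-time guarantees, a purely manual runtime binder is one fallback; a sketch for the Foo above (conversions hard-coded per field, returning None when a key is missing or malformed):
import scala.util.Try

case class Foo(a: Int, b: Int)

// Binds a servlet-style Map[String, String] to Foo at runtime, no reflection involved:
def fooFromParams(params: Map[String, String]): Option[Foo] =
  for {
    a <- params.get("a").flatMap(s => Try(s.toInt).toOption)
    b <- params.get("b").flatMap(s => Try(s.toInt).toOption)
  } yield Foo(a, b)

// fooFromParams(Map("a" -> "1", "b" -> "2"))  // Some(Foo(1, 2))
// fooFromParams(Map("a" -> "1"))              // None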

How do you do dependency injection with the Cake pattern without hardcoding?

I just read and enjoyed the Cake pattern article. However, to my mind, one of the key reasons to use dependency injection is that you can vary the components being used by either an XML file or command-line arguments.
How is that aspect of DI handled with the Cake pattern? The examples I've seen all involve mixing traits in statically.
Since mixing in traits is done statically in Scala, if you want to vary the traits mixed in to an object, create different objects based on some condition.
Let's take a canonical cake pattern example. Your modules are defined as traits, and your application is constructed as a simple Object with a bunch of functionality mixed in
val application =
  new Object
    with Communications
    with Parsing
    with Persistence
    with Logging
    with ProductionDataSource

application.startup
Now all of those modules have nice self-type declarations which define their inter-module dependencies, so that line only compiles if all your inter-module dependencies exist, are unique, and are well-typed. In particular, the Persistence module has a self-type which says that anything implementing Persistence must also implement DataSource, an abstract module trait. Since ProductionDataSource inherits from DataSource, everything's great, and that application construction line compiles.
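For concreteness, the self-type wiring described above looks roughly like this (a sketch; the method bodies are invented):
trait DataSource { def query(sql: String): List[String] }

trait ProductionDataSource extends DataSource {
  def query(sql: String): List[String] = Nil // talk to the real database here
}

// The self-type says: anything that mixes in Persistence must also be a DataSource.
trait Persistence { self: DataSource =>
  def save(record: String): Unit = { query(s"INSERT ... $record"); () }
}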
But what if you want to use a different DataSource, pointing at some local database for testing purposes? Assume further that you can't just reuse ProductionDataSource with different configuration parameters, loaded from some properties file. What you would do in that case is define a new trait TestDataSource which extends DataSource, and mix it in instead. You could even do so dynamically based on a command line flag.
val application =
  if (test)
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with TestDataSource
  else
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with ProductionDataSource

application.startup
Now that looks a bit more verbose than we would like, particularly if your application needs to vary its construction along multiple axes. On the plus side, you usually have only one chunk of conditional construction logic like that in an application (or at worst one per identifiable component lifecycle), so at least the pain is minimized and fenced off from the rest of your logic.
Scala can also be used as a scripting language, so your configuration "XML" can be a Scala script. It is type-safe and not a different language.
Simply look at startup:
scala -cp first.jar:second.jar startupScript.scala
is not so different than:
java -cp first.jar:second.jar com.example.MyMainClass context.xml
You can always use DI, but you have one more tool.
The short answer is that Scala doesn't currently have any built-in support for dynamic mixins.
I am working on the autoproxy-plugin to support this, although it's currently on hold until the 2.9 release, when the compiler will have new features making it a much easier task.
In the meantime, the best way to achieve almost exactly the same functionality is by implementing your dynamically added behavior as a wrapper class, then adding an implicit conversion back to the wrapped member.
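A minimal sketch of that wrapper-plus-implicit-conversion idea (the Service trait and all names are invented for illustration):
object DynamicBehaviour {
  trait Service { def call(x: Int): Int }

  // Wrapper adding behaviour that Service itself does not have:
  class MeteredService(val underlying: Service) {
    private var calls = 0
    def call(x: Int): Int = { calls += 1; underlying.call(x) }
    def callCount: Int = calls
  }

  // Implicit conversion back to the wrapped member, so a MeteredService can
  // still be passed wherever a plain Service is expected:
  implicit def meteredToService(m: MeteredService): Service = m.underlying
}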
Until the AutoProxy plugin becomes available, one way to achieve the effect is to use delegation:
trait Module {
  def foo: Int
}

trait DelegatedModule extends Module {
  var delegate: Module = _
  def foo = delegate.foo
}

class Impl extends Module {
  def foo = 1
}

// later
val composed: Module with ... with ... = new DelegatedModule with ... with ...
composed.delegate = choose() // choose is linear in the number of `Module` implementations
But beware, the downside of this is that it's more verbose, and you have to be careful about the initialization order if you use vars inside a trait. Another downside is that if there are path dependent types within Module above, you won't be able to use delegation that easily.
But if there is a large number of different implementations that can be varied, it will probably cost you less code than listing cases with all possible combinations.
Lift has something along those lines built in. It's mostly in Scala code, but you have some runtime control. http://www.assembla.com/wiki/show/liftweb/Dependency_Injection