I am trying to define a function that takes an integer and an implicit object that has the code to process that number, but I get a NullPointerException and I don't know why.
If I delete the first println, the code works.
Is there some problem with the way I am defining the implicit objects?
Here is my code:
class A {
  def apply(n: Int) = n * 2
}
def f(n: Int)(implicit o: A) = o(n)
implicit val a = new A
println(f(3))
class B extends A {
  override def apply(n: Int) = n * 3
}
implicit val b = new B
println(f(3))
And this is the error:
java.lang.NullPointerException
at Main$$anon$1.f(open.scala:5)
at Main$$anon$1.<init>(open.scala:8)
at Main$.main(open.scala:1)
at Main.main(open.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at scala.tools.nsc.util.ScalaClassLoader$$anonfun$run$1.apply(ScalaClassLoader.scala:78)
at scala.tools.nsc.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:24)
at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.asContext(ScalaClassLoader.scala:88)
at scala.tools.nsc.util.ScalaClassLoader$class.run(ScalaClassLoader.scala:78)
at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.run(ScalaClassLoader.scala:101)
at scala.tools.nsc.ObjectRunner$.run(ObjectRunner.scala:33)
at scala.tools.nsc.ObjectRunner$.runAndCatch(ObjectRunner.scala:40)
at scala.tools.nsc.ScriptRunner.scala$tools$nsc$ScriptRunner$$runCompiled(ScriptRunner.scala:171)
at scala.tools.nsc.ScriptRunner$$anonfun$runScript$1.apply(ScriptRunner.scala:188)
at scala.tools.nsc.ScriptRunner$$anonfun$runScript$1.apply(ScriptRunner.scala:188)
at scala.tools.nsc.ScriptRunner$$anonfun$withCompiledScript$1$$anonfun$apply$mcZ$sp$1.apply(ScriptRunner.scala:157)
at scala.tools.nsc.ScriptRunner$$anonfun$withCompiledScript$1$$anonfun$apply$mcZ$sp$1.apply(ScriptRunner.scala:157)
at scala.Option.exists(Option.scala:187)
at scala.tools.nsc.ScriptRunner$$anonfun$withCompiledScript$1.apply$mcZ$sp(ScriptRunner.scala:157)
at scala.tools.nsc.ScriptRunner$$anonfun$withCompiledScript$1.apply(ScriptRunner.scala:131)
at scala.tools.nsc.ScriptRunner$$anonfun$withCompiledScript$1.apply(ScriptRunner.scala:131)
at scala.tools.nsc.util.package$.waitingForThreads(package.scala:26)
at scala.tools.nsc.ScriptRunner.withCompiledScript(ScriptRunner.scala:130)
at scala.tools.nsc.ScriptRunner.runScript(ScriptRunner.scala:188)
at scala.tools.nsc.ScriptRunner.runScriptAndCatch(ScriptRunner.scala:201)
at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:58)
at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:80)
at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:89)
at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
This isn't a problem with implicits per se. The issue is that Scala decides to use b as the implicit parameter to f for both calls, but b isn't initialized yet when the first call happens, so f receives null. The Scala compiler should probably complain about ambiguous implicit values here, but the presence of both types A and B seems to confuse it.
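If that is what's happening, one way to sidestep the initialization-order problem is to pass the instance explicitly for the first call. This is a minimal sketch of one possible workaround, not the only fix; it reuses the definitions from the question:
implicit val a = new A
println(f(3)(a)) // pass a explicitly, so the not-yet-initialized b is never consulted
implicit val b = new B
println(f(3))    // by this point b has been initialized, so the implicit lookup is safe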
I am a C++ programmer trying to learn Scala. I want to achieve something similar to the following code using C++ templates:
template<typename T>
class Foo {
public:
    T* bar;
    ///////////////// Other Code Omitted //////////////////////////
};
Its counterpart in Scala will not compile due to type erasure:
class Foo[E]() {
  val bar = new E() // Will not compile
}
I have been searching the whole night for a workaround, and this seems to be one of them:
package test
import scala.reflect._
object Type {
  def newInstance[T: ClassTag](init_args: AnyRef*): T = {
    classTag[T].runtimeClass.getConstructors.head.newInstance(init_args: _*).asInstanceOf[T]
  }
}
class Foo[T1: ClassTag](init_args: AnyRef*) {
  val bar = Type.newInstance[T1](init_args)
}
class TestClass(val arg: String) {
  val data = arg
}
class TestClass(val arg:String){
val data = arg
}
However, when I try to instantiate one (val test = new Foo[TestClass]("test")) in the Scala console, it gives the following error:
java.lang.IllegalArgumentException: argument type mismatch
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at ParActor.Type$.newInstance(ParActor.scala:32)
at ParActor.Foo.<init>(ParActor.scala:37)
... 35 elided
I am not exactly sure what causes the problem or how to fix it. Other workarounds are also welcome.
You should turn
Type.newInstance[T1](init_args)
into
Type.newInstance[T1](init_args: _*)
What : _* does is tell the compiler to expand a list or sequence into a varargs argument. A varargs parameter AnyRef* is actually an IndexedSeq[AnyRef], more specifically a WrappedArray[AnyRef]. So when you pass init_args to newInstance without telling the compiler to interpret it as a varargs argument, you are actually passing in WrappedArray(WrappedArray("test")).
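With that change, the class from the question becomes (same names as above; only the forwarding of the varargs changes):
class Foo[T1: ClassTag](init_args: AnyRef*) {
  val bar = Type.newInstance[T1](init_args: _*)
}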
I was searching for secondary sort using Spark and found this solution:
case class RFMCKey(cId: String, R: Double, F: Double, M: Double, C: Double)
class RFMCPartitioner(partitions: Int) extends Partitioner {
  require(partitions >= 0, s"Number of partitions ($partitions) cannot be negative.")
  override def numPartitions: Int = partitions
  override def getPartition(key: Any): Int = {
    val k = key.asInstanceOf[RFMCKey]
    k.cId.hashCode() % numPartitions
  }
}
object RFMCKey {
  implicit def orderingBycId[A <: RFMCKey]: Ordering[A] = {
    Ordering.by(k => (k.R, k.F * -1, k.M * -1, k.C * -1))
  }
}
Now this is the code that I am using for my RFMC (Recency, Frequency, Monetary, Clumpiness) program.
In the same code, at the end, I am doing:
val rfmcTableSorted = rfmcTable.repartitionAndSortWithinPartitions(new RFMCPartitioner(1))
But when I load this file in spark-shell, I get the following error:
<console>:130: error: RFMCKey is already defined as (compiler-generated) case class companion object RFMCKey
object RFMCKey {
^
<console>:198: error: RFMCKey.type does not take parameters
case (custId, (((rVal, fVal), mVal),cVal)) => (RFMCKey(custId, rVal, fVal, mVal, cVal), rVal+","+fVal+","+mVal+","+cVal)
^
<console>:200: error: value repartitionAndSortWithinPartitions is not a member of org.apache.spark.rdd.RDD[Nothing]
val rfmcTableSorted = rfmcTable.repartitionAndSortWithinPartitions(new RFMCPartitioner(1)).cache()
How do I circumvent this issue?
Update 1
I tried changing the order of declaration of my case class and companion object and, surprisingly, the shell loaded the file without throwing any errors. But when I ran my program, it threw a new error:
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
at org.apache.spark.SparkContext.clean(SparkContext.scala:1623)
at org.apache.spark.rdd.RDD.map(RDD.scala:286)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$rfmc$.constructRFMC(<console>:113)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
at $iwC$$iwC$$iwC.<init>(<console>:53)
at $iwC$$iwC.<init>(<console>:55)
at $iwC.<init>(<console>:57)
at <init>(<console>:59)
at .<init>(<console>:63)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:656)
at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:664)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:669)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:996)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.NotSerializableException: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$rfmc$
Serialization stack:
- object not serializable (class: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$rfmc$, value: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$rfmc$#757fc606)
- field (class: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$rfmc$$anonfun$17, name: $outer, type: class $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$rfmc$)
- object (class $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$rfmc$$anonfun$17, <function1>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:38)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:164)
... 52 more
Update 2
The way I am defining my objects and functions is like this:
object rfmc {
  def constructRFMC() = {
    // Everything goes inside, including the custom key and partitioner
    // code defined above
  }
}
Update 3
The way I am defining my code in Eclipse, which works perfectly, is:
object rfmc extends App {
  // Everything goes inside, including the custom key and partitioner
  // code defined above
}
I also created a JAR for this code and ran it using spark-submit, and that too worked perfectly.
To address the issue that RFMCKey is already defined, you need to swap the order of your case class and object declarations, as explained in this issue.
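Concretely, with the declarations from your code the reordering would look like this (only the order changes, the bodies stay the same):
object RFMCKey {
  implicit def orderingBycId[A <: RFMCKey]: Ordering[A] = {
    Ordering.by(k => (k.R, k.F * -1, k.M * -1, k.C * -1))
  }
}
case class RFMCKey(cId: String, R: Double, F: Double, M: Double, C: Double)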
Regarding your updates, there may be some limitations in the spark-shell that prevent it from executing arbitrary code (such as with accumulators). To get more insight into the serialization mechanism, you should pass the following option: -Dsun.io.serialization.extendedDebugInfo=true. Remember that the spark-shell is more of an exploratory utility for testing small portions of code or new features iteratively thanks to the REPL, not a fully-fledged, production-ready utility that should be used extensively to test your code.
Your safest option here is to package your app into a jar, set up Spark in standalone mode, and run spark-submit with your packaged jar. As reflected in updates 2 and 3 of your post, you'll need to wrap your code into an object so that it is the entry point of your job. This will let you make sure your code is not at fault here.
I was playing around with Scala's structural types when I discovered what looks like a bug to me. Here is my code:
type toD = { def toDouble: Double }
def foo(t: toD) = t.toDouble
foo(5)
And I got this error:
java.lang.NoSuchMethodException
at scala.runtime.BoxesRunTime.toDouble(Unknown Source)
at .foo(<console>:9)
at .<init>(<console>:11)
at .<clinit>(<console>)
at .<init>(<console>:11)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:704)
at scala.tools.nsc.interpreter.IMain$Request$$anonfun$14.apply(IMain.scala:920)
at scala.tools.nsc.interpreter.Line$$anonfun$1.apply$mcV$sp(Line.scala:43)
at scala.tools.nsc.io.package$$anon$2.run(package.scala:25)
at java.lang.Thread.run(Unknown Source)
First, I don't know why this isn't working. Second, it's weird that the code compiles just fine and throws an exception at runtime saying that the method doesn't actually exist.
Does anyone have an explanation for this?
I just played around a bit with this and it really seems to be a bug. However, it works when you just set the return type to Any:
type toD = { def toDouble: Any }
I think it may have something to do with autoboxing and the way primitives are handled.
Edit:
I just found a workaround:
type toD[A] = { def toDouble: A }
def foo[A](x: toD[A])(implicit y: A =:= Double) = x.toDouble
This ensures that the return type of toDouble (the type A) is Double.
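For completeness, this is how the workaround reads end to end at the call site (same definitions as above; the claim that it avoids the runtime exception is the workaround's, reproduced here as a sketch):
// on newer Scala versions you may also need: import scala.language.reflectiveCalls
type toD[A] = { def toDouble: A }
def foo[A](x: toD[A])(implicit y: A =:= Double) = x.toDouble
foo(5) // A is inferred as Double, so the A =:= Double evidence is found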
Given the following code:
import akka.actor._
object TraitTest {
  trait A {
    def something()
  }
  trait B extends A
  class C extends TypedActor with B {
    override def something() {
      println("Why am I not implemented?")
    }
  }
  def main(args: Array[String]) {
    val service = TypedActor.newInstance(classOf[B], classOf[C])
    service.something()
  }
}
When running the main, I get the following exception:
Exception in thread "main" java.lang.AbstractMethodError: TraitTest$B$$ProxiedByAWDelegation$$1322144340710.something()V
at TraitTest$.main(TraitTest.scala:29)
at TraitTest.main(TraitTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Now, Google finally spat out this link, but I don't understand how this 'works as designed'.
Could anybody please shed some light on the issue?
Thanks!
Edit
If I change my code as follows, I obviously do not get the error. However, this is of course not a solution, but rather a temporary workaround.
trait B extends A {
  override def something()
}
We've completely reworked the TypedActor implementation for 2.0 and scrapped the use of bytecode weaving to implement it. This leads to a much more robust and powerful feature.
Are you running it on JDK 7? (We've seen some shenanigans with that.)
Can you try to implement the method with the correct signature? (omitted return type == Any, but you override it with a Unit return type)
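If that diagnosis applies to your setup, making the result type explicit in both the trait and the implementation would look something like this sketch (it assumes the answer's reading of the problem; adjust to your actual code):
trait A {
  def something(): Unit
}
trait B extends A
class C extends TypedActor with B {
  override def something(): Unit = {
    println("Implemented with an explicit Unit result type")
  }
}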
I'm starting to learn Lift and I'm stuck. I have a problem with a simple snippet:
class Util {
  def in(html: NodeSeq): NodeSeq = {
    if (User.loggedIn_?)
      Helpers.bind("user", html, "name" -> User.currentUser.map(_.lastName).open_!)
    else
      NodeSeq.Empty
  }
}
It should inject the current user's name, but I'm receiving an exception:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.scala_tools.maven.executions.MainHelper.runMain(MainHelper.java:105)
at org.scala_tools.maven.executions.MainWithArgsInFile.main(MainWithArgsInFile.java:26)
Caused by: scala.tools.nsc.symtab.Types$TypeError: type mismatch;
found : x$1.lastName.type (with underlying type object x$1.lastName)
required: com.liftworkshop.model.User#lastName.type
at scala.tools.nsc.typechecker.Contexts$Context.error(Contexts.scala:352)
What is going on?
The problem here is that _.lastName is actually a singleton object of type MappedString, not the actual string value. To get at the String value you should do:
_.lastName.is
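Applied to the bind call in your snippet, that means (only the .is accessor is added; everything else stays as in the question):
Helpers.bind("user", html, "name" -> User.currentUser.map(_.lastName.is).open_!)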