How to import Play in the Scala REPL - scala

How can I import Play in the Scala REPL?
scala> import play.api.libs.json._
<console>:11: error: not found: value play
import play.api.libs.json._

1) Set up the simple build tool (sbt). It's easy: download it from http://www.scala-sbt.org/download.html, and installation instructions are at http://www.scala-sbt.org/0.13/docs/Installing-sbt-on-Windows.html.
2) Create an empty folder containing a build.sbt with the following contents:
//your-test-project/build.sbt
scalaVersion := "2.11.8"
resolvers += "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"
libraryDependencies += "com.typesafe.play" %% "play" % "2.5.12"
3) Then simply run sbt console in the root of that folder, which will download Play and make it available to your console.
$ ls -l ~/.ivy2/cache/com.typesafe.play/play_2.11/jars/
total 15392
-rw-r--r-- 1 as18 185223974 4107407 Jan 22 15:59 play_2.11-2.5.12.jar
Then you are good to go.
scala> import play.api.libs.json._
import play.api.libs.json._
scala> val json: JsValue = Json.parse("""{ "compiler" : "scala", "ratings" : 5 }""")
json: play.api.libs.json.JsValue = {"compiler":"scala","ratings":5}
scala> val compiler = ( json \ "compiler" )
compiler: play.api.libs.json.JsLookupResult = JsDefined("scala")
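If you need the raw value out of the lookup result, you can convert it with as (a small, illustrative continuation of the session above, not part of the original answer):
scala> val compilerName = compiler.as[String]
compilerName: String = scala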
Alternatively, you can provide the jar directly on the classpath if you already have it, as below:
scala -cp ~/.ivy2/cache/com.typesafe.play/play_2.11/jars/play_2.11-2.5.12.jar
scala> import play.api.libs._
import play.api.libs._

Things are much simpler with the Ammonite REPL:
load.ivy("com.typesafe.play" %% "play" % "2.5.12")
import whatever.you.need
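The load.ivy call is the older Ammonite API; on more recent Ammonite versions the equivalent is the $ivy magic import. A minimal sketch using the same coordinates as the sbt example above:
import $ivy.`com.typesafe.play::play:2.5.12`
import play.api.libs.json._
Json.parse("""{ "compiler" : "scala", "ratings" : 5 }""")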

The package is not found because it is not in the class path of the REPL.
If you know the location of Play Framework's JAR on your computer, you can add it to the class path when starting the REPL:
> scala -cp path/to/play.jar
You can also add this directly from inside a REPL session:
:require play.jar
Note that you will still need to import your classes as before.
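For example, a session using the jar from the ivy cache listed earlier might look like this (the path is illustrative; :require expects a full path, not shell shorthand like ~):
scala> :require /home/you/.ivy2/cache/com.typesafe.play/play_2.11/jars/play_2.11-2.5.12.jar
scala> import play.api.libs.json._
import play.api.libs.json._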

Related

(run-main-0) scala.ScalaReflectionException: class java.sql.Date in JavaMirror with ClasspathFilter

Hi, I have a file given to me by my teacher. It is about Scala and Spark.
When I run the code it gives me this exception:
(run-main-0) scala.ScalaReflectionException: class java.sql.Date in
JavaMirror with ClasspathFilter
The file itself looks like this:
import org.apache.spark.ml.feature.Tokenizer
import org.apache.spark.sql.Dataset
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._
object Main {

  type Embedding = (String, List[Double])
  type ParsedReview = (Integer, String, Double)

  org.apache.log4j.Logger getLogger "org" setLevel (org.apache.log4j.Level.WARN)
  org.apache.log4j.Logger getLogger "akka" setLevel (org.apache.log4j.Level.WARN)

  val spark = SparkSession.builder
    .appName("Sentiment")
    .master("local[9]")
    .getOrCreate

  import spark.implicits._

  val reviewSchema = StructType(Array(
    StructField("reviewText", StringType, nullable = false),
    StructField("overall",    DoubleType, nullable = false),
    StructField("summary",    StringType, nullable = false)))

  // Read file and merge the text and summary into a single text column
  def loadReviews(path: String): Dataset[ParsedReview] =
    spark
      .read
      .schema(reviewSchema)
      .json(path)
      .rdd
      .zipWithUniqueId
      .map[(Integer, String, Double)] { case (row, id) =>
        (id.toInt, s"${row getString 2} ${row getString 0}", row getDouble 1) }
      .toDS
      .withColumnRenamed("_1", "id")
      .withColumnRenamed("_2", "text")
      .withColumnRenamed("_3", "overall")
      .as[ParsedReview]

  // Load the GLoVe embeddings file
  def loadGlove(path: String): Dataset[Embedding] =
    spark
      .read
      .text(path)
      .map { _ getString 0 split " " }
      .map(r => (r.head, r.tail.toList.map(_.toDouble))) // yuck!
      .withColumnRenamed("_1", "word")
      .withColumnRenamed("_2", "vec")
      .as[Embedding]

  def main(args: Array[String]) = {
    val glove   = loadGlove("Data/glove.6B.50d.txt")     // take glove
    val reviews = loadReviews("Data/Electronics_5.json") // FIXME

    // replace the following with the project code
    glove.show
    reviews.show

    spark.stop
  }
}
I need to keep the line
import org.apache.spark.sql.Dataset
because some code depends on it, but it is exactly because of this import that the exception is thrown.
My build.sbt file looks like this:
name := "Sentiment Analysis Project"
version := "1.1"
scalaVersion := "2.11.12"
scalacOptions ++= Seq("-unchecked", "-deprecation")
initialCommands in console :=
"""
import Main._
"""
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0"
libraryDependencies += "org.apache.spark" %% "spark-mllib" %
"2.3.0"
libraryDependencies += "org.scalactic" %% "scalactic" % "3.0.5"
libraryDependencies += "org.scalatest" %% "scalatest" % "3.0.5" %
"test"
The Scala guide recommends compiling with Java 8:
We recommend using Java 8 for compiling Scala code. Since the JVM is backward compatible, it is usually safe to use a newer JVM to run your code compiled by the Scala compiler for older JVM versions.
Although it's only a recommendation, I found it to fix the issue you mention.
To install Java 8 using Homebrew, it's best to use jenv, which will help you handle multiple Java versions should you need to.
brew install jenv
Then run the following to add a tap (repository) of alternative versions of casks, since Java 8 is not in the default tap anymore:
brew tap homebrew/cask-versions
To install Java 8:
brew cask install homebrew/cask-versions/adoptopenjdk8
Run the following to add the previously installed Java version to jenv's list of versions:
jenv add /Library/Java/JavaVirtualMachines/<installed_java_version>/Contents/Home
Finally run
jenv global 1.8
or
jenv local 1.8
to use Java 1.8 globally or locally (in the current folder).
For more information, follow the instructions on jenv's website.
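To double-check which JVM sbt and the REPL are actually running on after the switch, you can print the relevant system properties from any Scala session (a trivial check, not part of the original answer):
// In any Scala REPL (e.g. sbt console), confirm which JVM is in use
println(sys.props("java.version"))  // should report a 1.8.x version after `jenv local 1.8`
println(sys.props("java.vm.name"))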

value wholeTextFiles is not a member of org.apache.spark.SparkContext

I have Scala code like below:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark._
object RecipeIO {
  val sc = new SparkContext(new SparkConf().setAppName("Recipe_Extraction"))

  def read(INPUT_PATH: String): org.apache.spark.rdd.RDD[(String)] = {
    val data = sc.wholeTextFiles("INPUT_PATH")
    val files = data.map { case (filename, content) => filename }
    (files)
  }
}
When I compile this code using sbt it gives me the error:
value wholeTextFiles is not a member of org.apache.spark.SparkContext.
I am importing everything that is required, but it still gives me this error.
However, when I replace wholeTextFiles with textFile, the code compiles.
What might be the problem here and how do I resolve that?
Thanks in advance!
Environment:
Scala compiler version 2.10.2
spark-1.2.0
Error:
[info] Set current project to RecipeIO (in build file:/home/akshat/RecipeIO/)
[info] Compiling 1 Scala source to /home/akshat/RecipeIO/target/scala-2.10.4/classes...
[error] /home/akshat/RecipeIO/src/main/scala/RecipeIO.scala:14: value wholeTexFiles is not a member of org.apache.spark.SparkContext
[error] val data = sc.wholeTexFiles(INPUT_PATH)
[error] ^
[error] one error found
[error] {file:/home/akshat/RecipeIO/}default-55aff3/compile:compile: Compilation failed
[error] Total time: 16 s, completed Jun 15, 2015 11:07:04 PM
My build.sbt file looks like this:
name := "RecipeIO"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "0.9.0-incubating"
libraryDependencies += "org.eclipse.jetty" % "jetty-server" % "8.1.2.v20120308"
ivyXML :=
<dependency org="org.eclipse.jetty.orbit" name="javax.servlet" rev="3.0.0.v201112011016">
<artifact name="javax.servlet" type="orbit" ext="jar"/>
</dependency>
You have a typo: it should be wholeTextFiles instead of wholeTexFiles.
As a side note, I think you want sc.wholeTextFiles(INPUT_PATH) and not sc.wholeTextFiles("INPUT_PATH") if you really want to use the INPUT_PATH variable.
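Putting both points together, a corrected version of the method could look like the sketch below (original names kept; this assumes a Spark version that provides wholeTextFiles):
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

object RecipeIO {
  val sc = new SparkContext(new SparkConf().setAppName("Recipe_Extraction"))

  def read(INPUT_PATH: String): RDD[String] = {
    // Note the spelling (wholeTextFiles) and that the variable, not the literal "INPUT_PATH", is passed.
    // wholeTextFiles yields (filename, content) pairs; keep only the filenames.
    val data = sc.wholeTextFiles(INPUT_PATH)
    data.map { case (filename, _) => filename }
  }
}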

No RowReaderFactory can be found for this type error when trying to map Cassandra row to case object using spark-cassandra-connector

I am trying to get a simple example working that maps rows from Cassandra to a Scala case class using Apache Spark 1.1.1, Cassandra 2.0.11, and the spark-cassandra-connector (v1.1.0). I have reviewed the documentation at the spark-cassandra-connector GitHub page, planetcassandra.org, DataStax, and generally searched around, but have not found anyone else encountering this issue. So here goes...
I am building a tiny Spark application using sbt (0.13.5), Scala 2.10.4, and Spark 1.1.1 against Cassandra 2.0.11. Modelling the example from the spark-cassandra-connector docs, the following two lines produce an error in my IDE and fail to compile.
case class SubHuman(id:String, firstname:String, lastname:String, isGoodPerson:Boolean)
val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray
The simple error presented by eclipse is:
No RowReaderFactory can be found for this type
The compile error is only slightly more verbose:
> compile
[info] Compiling 1 Scala source to /home/bkarels/dev/simple-case/target/scala-2.10/classes...
[error] /home/bkarels/dev/simple-case/src/main/scala/com/bradkarels/simple/SimpleApp.scala:82: No RowReaderFactory can be found for this type
[error] val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
[error] Total time: 1 s, completed Dec 10, 2014 9:01:30 AM
>
Scala source:
package com.bradkarels.simple
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import com.datastax.spark.connector.rdd._
// Likely don't need this import - but throwing darts hits the bullseye once in a while...
import com.datastax.spark.connector.rdd.reader.RowReaderFactory
object CaseStudy {
  def main(args: Array[String]) {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext("spark://127.0.0.1:7077", "simple", conf)

    case class SubHuman(id: String, firstname: String, lastname: String, isGoodPerson: Boolean)
    val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id", "firstname", "lastname", "isGoodPerson").toArray
  }
}
With the bothersome lines removed, everything compiles fine, assembly works, and I can perform other Spark operations normally. For example, if I remove the problem lines and drop in:
val rdd:CassandraRDD[CassandraRow] = sc.cassandraTable("nicecase", "human")
I get back the RDD and work with it as expected. That said, I suspect that my sbt project, assembly plugin, etc. are not contributing to the issues. The working source (less the new attempt to map to a case class as the connector intended) can be found on GitHub here.
But, to be more thorough, my build.sbt:
name := "Simple Case"
version := "0.0.1"
scalaVersion := "2.10.4"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.1.1",
  "org.apache.spark" %% "spark-sql" % "1.1.1",
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.0" withSources() withJavadoc()
)
So the question is what have I missed? Hoping this is something silly, but if you have encountered this and can help me get past this puzzling little issue I would very much appreciate it. Please let me know if there are any other details that would be helpful in troubleshooting.
Thank you.
This may just be my newness to Scala in general, but I resolved this issue by moving the case class declaration out of the main method. So the simplified source now looks like this:
package com.bradkarels.simple
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import com.datastax.spark.connector.rdd._
object CaseStudy {

  case class SubHuman(id: String, firstname: String, lastname: String, isGoodPerson: Boolean)

  def main(args: Array[String]) {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext("spark://127.0.0.1:7077", "simple", conf)

    val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id", "firstname", "lastname", "isGoodPerson").toArray
  }
}
The complete source (updated and fixed) can be found on GitHub: https://github.com/bradkarels/spark-cassandra-to-scala-case-class
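For what it's worth, once the mapping works, foo is just an Array[SubHuman], so it can be used like any other collection (a tiny usage sketch, not from the original post):
val goodPeople = foo.filter(_.isGoodPerson)
goodPeople.foreach(h => println(s"${h.firstname} ${h.lastname}"))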

Using unboundid ldap in scala ... strange compile error

I am trying to use LDAP via unboundid in Scala, but the compiler keeps crashing.
I just created an object that looks like this:
package utils
import com.unboundid.ldap.sdk._
object LdapHelper {
  val ldap = LDAPConnection("ldap.example.com", 389)
}
I added this: "com.unboundid" % "unboundid-ldapsdk" % "2.3.1" to my appDependencies in Build.scala. I use Play 2.1 and Scala Version 2.10.1.
I get a very strange error message (see below). It is so strange that I really don't know where to begin looking for hints.
I am not sure whether the problem is in unboundid, Play, Scala, or sbt.
How can I successfully integrate unboundid into my Scala project?
Thanks
Error in Scala compiler: assertion failed: while compiling: C:\play\todolist\app\utils\LdapHelper.scala during phase: global=typer, atPhase=parser library version: version 2.10.2 compiler version: version 2.10.2 reconstructed args: -classpath C:\play\todolist.target;C:\eclipse\scala-SDK-3.0.1-vfinal-2.10-win32.win32.x86_64\configuration\org.eclipse.
...
last tree to typer: Ident(LDAPConnection)
symbol: (flags: )
symbol definition:
symbol owners:
context owners: value ldap -> object LdapHelper -> package utils
== Enclosing template or block ==
Template( // val : in object LdapHelper
"java.lang.Object" // parents
ValDef(
private
"_"
)
// 3 statements
DefDef( // def : in object LdapHelper
""
[]
List(Nil)
Block(
Apply(
super.""
Nil
)
()
)
)
DefDef( // def x: in object LdapHelper
"x"
[]
Nil
()
)
ValDef( // private[this] val ldap: in object LdapHelper
private
"ldap"
Apply(
"LDAPConnection"
// 2 arguments
"ldap.example.com"
389
)
)
)
There was a warning that turned into an assert in Scala 2.10.2, which is what causes this.
There is a bug open here:
https://issues.scala-lang.org/browse/SI-7014
And a fix staged for 2.10.4:
https://github.com/scala/scala/pull/2829
You can ask Play to use Scala 2.10.4-SNAPSHOT by using the following Build.scala:
import sbt._
import Keys._
import play.Project._

object ApplicationBuild extends Build {

  val appName = "AppName"
  val appVersion = "1.0-SNAPSHOT"

  val mainDeps = Seq(
    jdbc,
    anorm,
    cache
  )

  lazy val main = play.Project(appName, appVersion, mainDeps).settings(
    scalaVersion := "2.10.4-SNAPSHOT"
  )
}
If you are using build.sbt the file would look like:
import play.Project._
playScalaSettings
name := "AppName"
version := "1.0-SNAPSHOT"
scalaVersion := "2.10.4-SNAPSHOT"
libraryDependencies ++= Seq(jdbc, anorm, cache)
Note: if building from sbt (instead of play) you may have to add a repository resolver under the scalaVersion line such as:
resolvers += "Typesafe repository" at "http://repo.typesafe.com/typesafe/repo/"
The answer from #jeckhart works.
First I used Scala 2.10.4-RC1 to build the Play 2.3 SNAPSHOT, then used that output to compile against UnboundID.
Finally everything compiles with no assertion failure or error.
To build Play 2.3 SNAPSHOT using Scala 2.10.4-RC1, I modified the file framework/project/Build.scala.
Change these two settings from
val buildScalaVersion = propOr("scala.version", "2.10.3")
val buildScalaVersionForSbt = propOr("play.sbt.scala.version", "2.10.3")
to
val buildScalaVersion = propOr("scala.version", "2.10.4-RC1")
val buildScalaVersionForSbt = propOr("play.sbt.scala.version", "2.10.4-RC1")
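Independently of the compiler version, note that LDAPConnection in the UnboundID SDK is a plain Java class with no companion apply, so the helper itself needs new. A minimal sketch of the object from the question with that change (not part of the original answers):
package utils

import com.unboundid.ldap.sdk.LDAPConnection

object LdapHelper {
  // LDAPConnection is a Java class, so it must be instantiated with `new`;
  // host and port are taken from the question.
  val ldap = new LDAPConnection("ldap.example.com", 389)
}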

value resolveOne is not a member of akka.actor.ActorSelection

I get the above error message from here:
implicit val askTimeout = Timeout(60 seconds)
val workerFuture = workerContext actorSelection(payload.classname) resolveOne()
val worker = Await.result(workerFuture, 10 seconds)
worker ask Landau(List("1", "2", "3"))
specifically from the second line. The imports made are:
import akka.actor._
import akka.util.Timeout
import akka.pattern.{ ask, pipe }
import scala.concurrent.duration._
import scala.concurrent.Await
import java.util.concurrent.TimeUnit
The Akka version is 2.2.1 and Scala is 2.10.2; I'm using sbt 0.13 to build it all.
I cannot really understand what's wrong, since resolveOne is definitely coming from that package.
EDIT: I printed all the methods of the class with
ActorSelection.getClass.getMethods.map(_.getName).foreach { p => println(p)}
and this is the result:
apply
toScala
wait
wait
wait
equals
toString
hashCode
getClass
notify
notifyAll
I had the same problem and changed my Scala and Akka versions as described in the link below.
Here is part of my build.sbt, for simplicity:
scalaVersion := "2.10.4"
resolvers += "Akka Snapshot Repository" at "http://repo.akka.io/snapshots/"
libraryDependencies ++= Seq("com.typesafe.akka" %% "akka-actor" % "2.4-SNAPSHOT")
link: http://doc.akka.io/docs/akka/snapshot/intro/getting-started.html
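For reference, once a compatible Akka version is on the classpath, the lookup from the question compiles along these lines (a self-contained sketch with a hypothetical worker actor, not the original project code):
import akka.actor._
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._

class Worker extends Actor {
  def receive = { case msg => println(s"got: $msg") }
}

object ResolveOneExample extends App {
  val system = ActorSystem("example")
  system.actorOf(Props[Worker], "worker")

  // resolveOne() takes an implicit Timeout and returns a Future[ActorRef]
  implicit val resolveTimeout = Timeout(60.seconds)
  val workerFuture = system.actorSelection("/user/worker").resolveOne()
  val worker = Await.result(workerFuture, 10.seconds)

  worker ! "hello"
  system.terminate()
}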