js.Dynamic.global.require generating different code in scala 2.12.0 - scala.js

Example 1:
import scala.scalajs.js.Dynamic.{global => g}
val image1 = g.require("./images/thumbnails/like.png")
Scala 2.11.8 - fastOptJS output:
this.image1$1 = require("./images/thumbnails/like.png");
Scala 2.12.0 - fastOptJS output:
this.image1$1 = require(($m_sjs_js_Any$(), "./images/thumbnails/like.png"));
Example 2:
import scala.scalajs.js
import scala.scalajs.js.Dynamic.{global => g}
@inline def load[T](lib: String): T = g.require(lib).asInstanceOf[T]
@inline def loadDynamic(lib: String): js.Dynamic = load[js.Dynamic](lib)
val image2 = loadDynamic("./images/thumbnails/like.png")
Scala 2.11.8 - fastOptJS output:
this.image2$1 = require("./images/thumbnails/like.png");
Scala 2.12.0 - fastOptJS output:
this.image2$1 = ($m_Lsri_mobile_package$all$(), require(($m_sjs_js_Any$(), "./images/thumbnails/like.png")));
Scala.js version: 0.6.13

This is a known inefficiency of the Scala.js optimizer with code produced by Scala 2.12. An issue has been filed for it and a fix is pending. Note that the resulting code is still correct, merely inefficient.

Related

Could not find implicit value of org.json4s.AsJsonInput in json4s 4.0.4

json4s version
In sbt:
"org.json4s" %% "json4s-jackson" % "4.0.4"
Scala version
2.12.15
JDK version
JDK 8
My problem
While learning to use json4s to read a JSON file "file.json" (following an example from the book "Scala Design Patterns"), I wrote the following:
import org.json4s._
import org.json4s.jackson.JsonMethods._
trait DataReader {
  def readData(): List[Person]
  def readDataInefficiently(): List[Person]
}

class DataReaderImpl extends DataReader {
  implicit val formats = DefaultFormats

  private def readUntimed(): List[Person] =
    parse(StreamInput(getClass.getResourceAsStream("file.json"))).extract[List[Person]]

  override def readData(): List[Person] = readUntimed()

  override def readDataInefficiently(): List[Person] = {
    (1 to 10000).foreach(_ => readUntimed())
    readUntimed()
  }
}

object DataReaderExample {
  def main(args: Array[String]): Unit = {
    val dataReader = new DataReaderImpl
    println(s"I just read the following data efficiently:${dataReader.readData()}")
    println(s"I just read the following data inefficiently:${dataReader.readDataInefficiently()}")
  }
}
It does not compile, and throws:
could not find implicit value for evidence parameter of type org.json4s.AsJsonInput[org.json4s.StreamInput]
Error occurred in an application involving default arguments.
parse(StreamInput(getClass.getResourceAsStream("file.json"))).extract[List[Person]]
When I change the json4s version to 3.6.0-M2 in sbt:
"org.json4s" %% "json4s-jackson" % "3.6.0-M2"
It works well.
Why does this happen? How should I fix it in 4.0.4 or a higher version?
Thank you for your help.
I tried many ways to solve this problem, and finally the fix was to remove StreamInput in:
import java.io.InputStream

private def readUntimed(): List[Person] = {
  val inputStream: InputStream = getClass.getResourceAsStream("file.json")
  // parse(StreamInput(inputStream)).extract[List[Person]] // this works in 3.6.0-M2
  parse(inputStream).extract[List[Person]]
}
and now it works!
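If you prefer to keep an explicit step rather than passing the raw InputStream, another option is to read the resource into a String first and parse that, since parse also accepts a String in json4s 4.x. A minimal sketch (assuming the same file.json on the classpath and the implicit formats from DataReaderImpl; the method name is just a placeholder):
import scala.io.Source

private def readUntimedFromString(): List[Person] = {
  // Read the whole resource into memory and parse the resulting String
  val json = Source.fromInputStream(getClass.getResourceAsStream("file.json")).mkString
  parse(json).extract[List[Person]]
}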

main class not found in spark scala program

//package com.jsonReader
import play.api.libs.json._
import play.api.libs.json.Reads._
import play.api.libs.json.Json.JsValueWrapper
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.SQLContext
//import org.apache.spark.implicits._
//import sqlContext.implicits._

object json {
  def flatten(js: JsValue, prefix: String = ""): JsObject = js.as[JsObject].fields.foldLeft(Json.obj()) {
    case (acc, (k, v: JsObject)) => {
      val nk = if (prefix.isEmpty) k else s"$prefix.$k"
      acc.deepMerge(flatten(v, nk))
    }
    case (acc, (k, v: JsArray)) => {
      val nk = if (prefix.isEmpty) k else s"$prefix.$k"
      val arr = flattenArray(v, nk).foldLeft(Json.obj())(_ ++ _)
      acc.deepMerge(arr)
    }
    case (acc, (k, v)) => {
      val nk = if (prefix.isEmpty) k else s"$prefix.$k"
      acc + (nk -> v)
    }
  }

  def flattenArray(a: JsArray, k: String = ""): Seq[JsObject] = {
    flattenSeq(a.value.zipWithIndex.map {
      case (o: JsObject, i: Int) =>
        flatten(o, s"$k[$i]")
      case (o: JsArray, i: Int) =>
        flattenArray(o, s"$k[$i]")
      case a =>
        Json.obj(s"$k[${a._2}]" -> a._1)
    })
  }

  def flattenSeq(s: Seq[Any], b: Seq[JsObject] = Seq()): Seq[JsObject] = {
    s.foldLeft[Seq[JsObject]](b) {
      case (acc, v: JsObject) =>
        acc :+ v
      case (acc, v: Seq[Any]) =>
        flattenSeq(v, acc)
    }
  }

  def main(args: Array[String]) {
    val appName = "Stream example 1"
    val conf = new SparkConf().setAppName(appName).setMaster("local[*]")
    //val spark = new SparkContext(conf)
    val sc = new SparkContext(conf)
    //val sqlContext = new SQLContext(sc)
    val sqlContext = new SQLContext(sc)
    //val spark=sqlContext.sparkSession
    val spark = SparkSession.builder().appName("json Reader")
    val df = sqlContext.read.json("C://Users//ashda//Desktop//test.json")
    val set = df.select($"user", $"status", $"reason", explode($"dates")).show()
    val read = flatten(df)
    read.printSchema()
    df.show()
  }
}
I'm trying to use this code to flatten a highly nested JSON. For this I created a project and converted it to a Maven project. I edited the pom.xml and included the libraries I needed, but when I run the program it says "Error: Could not find or load main class".
I tried converting the code to an sbt project and running it, but I get the same error. I also tried packaging the code and running it through spark-submit, which gives the same error. Please let me know what I am missing here; I have tried everything I could.
Thanks
Hard to say, but maybe you have several classes that qualify as a main class, so the build tool does not know which one to choose. Try cleaning the project first with sbt clean.
Anyway, in Scala the preferred way to define a main class is to extend the App trait.
object SomeApp extends App
Then the whole object body will become your main method.
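For example, a minimal runnable object could look like this (the object name and message are just placeholders):
object SomeApp extends App {
  // The whole object body runs as the program's main method
  println("Hello from SomeApp")
}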
You can also define the main class in your build.sbt. This is necessary if you have many objects that extend the App trait.
mainClass in (Compile, run) := Some("io.example.SomeApp")
I am answering this question for sbt configurations. I ran into the same issue recently, resolved it, and made some basic mistakes along the way which I would like you to note:
1. Configure your sbt file
Go to your build.sbt file and check that the Scala version you are using is compatible with Spark. As per Spark 2.4.0 (https://spark.apache.org/docs/latest/), the required Scala version is 2.11.x, not 2.12.x. So, even though your IDE (Eclipse/IntelliJ) shows the latest Scala version or the version you downloaded, change it to a compatible version. Also, include this line:
libraryDependencies += "org.scala-lang" % "scala-library" % "2.11.6"
(2.11.x stands for your Scala version.)
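Putting step 1 together, a minimal build.sbt for this project might look like the sketch below (the version numbers and the play-json dependency are assumptions based on the imports in the question):
name := "jsonReader"

version := "0.1"

scalaVersion := "2.11.12"

// Spark 2.4.x artifacts are published for Scala 2.11
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.4.0",
  "org.apache.spark" %% "spark-sql" % "2.4.0",
  "com.typesafe.play" %% "play-json" % "2.6.10"
)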
2. File Hierarchy
Make sure your Scala file is under the src/main/scala directory.
3. Terminal
If your IDE allows you to launch a terminal within it, launch it there (IntelliJ does; not sure about Eclipse or others), or open a terminal and change to your project directory, then run:
sbt clean
This clears any previously loaded libraries and any folders created by earlier compilations.
sbt package
This packs your files into a single jar file under the target/scala-<version>/ directory.
Then submit it to Spark (options must come before the application jar):
spark-submit --class <ClassName> --master local[*] target/scala-<version>/<jar file>
(In your case the class name is com.jsonReader.json.)
Note that --master, if the master is already set in the program via setMaster, is not required here.
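For example, with a hypothetical jar name as produced by sbt package for this project, the full command could look like:
spark-submit --class com.jsonReader.json --master local[*] target/scala-2.11/jsonreader_2.11-0.1.jar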

Error: value r is not a member of String for a simple regex

For the following straightforward regex, which works fine in the REPL:
val tsecs = """[^\d]+([\d]+)*""".r
tsecs: scala.util.matching.Regex = [^\d]+([\d]+)*
Why would it not compile, either in IntelliJ or on the command line via mvn compile?
val tsecs = """[^\d]+([\d]+)*""".r
error: value r is not a member of String
[ERROR] val tsecs = """[^\d]+([\d]+)*""".r
The version is Scala 2.10.5 in all cases.
The .r method is added to String by Predef (via the augmentString conversion to StringOps), so this error means Predef is effectively disabled in your build. There are a few ways to disable Predef and reproduce it.
import scala.Predef.{wrapString => _, augmentString => _, _}
object Test extends App {
  def r = "x".r
}
Or
object Test extends App {
  val wrapString, augmentString = 42
  def r = "x".r
}
Or compile with -Yno-imports.
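Conversely, even when the implicit conversion is masked as in the examples above, you can still get .r by calling the conversion explicitly instead of relying on Predef being in scope. A small sketch, assuming nothing beyond the standard library:
object ExplicitR extends App {
  // Call the String => StringOps conversion explicitly rather than via the implicit from Predef
  val tsecs = scala.Predef.augmentString("""[^\d]+([\d]+)*""").r
  println(tsecs.findFirstMatchIn("abc123").map(_.group(1))) // Some(123)
}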

Scala script in 2.11

I have found example code for Scala runtime scripting in an answer to "Generating a class from string and instantiating it in Scala 2.10"; however, the code seems to be obsolete in 2.11 - I cannot find any function corresponding to build.setTypeSignature. Even if it worked, I find the code hard to read and follow.
How can Scala scripts be compiled and executed in Scala 2.11?
Let us assume I want the following:
define several variables (names and values)
compile script
(optional improvement) change variable values
execute script
For simplicity, consider the following example:
I want to define the following variables (programmatically, from the code, not from the script text):
val a = 1
val s = "String"
I want the following script to be compiled and, on execution, to return the String value "a is 1, s is String":
s"a is $a, s is $s"
What should my functions look like?
def setupVariables() = ???
def compile() = ???
def changeVariables() = ???
def execute() : String = ???
Scala 2.11 adds a JSR-223 scripting engine. It should give you the functionality you are looking for. Just as a reminder, as with all of these sorts of dynamic things, including the example listed in the description above, you will lose type safety. You can see below that the return type is always Object.
Scala REPL Example:
scala> import javax.script.ScriptEngineManager
import javax.script.ScriptEngineManager
scala> val e = new ScriptEngineManager().getEngineByName("scala")
e: javax.script.ScriptEngine = scala.tools.nsc.interpreter.IMain@566776ad
scala> e.put("a", 1)
a: Object = 1
scala> e.put("s", "String")
s: Object = String
scala> e.eval("""s"a is $a, s is $s"""")
res6: Object = a is 1, s is String
An additional example, as an application running under Scala 2.11.6:
import javax.script.ScriptEngineManager
object EvalTest {
  def main(args: Array[String]): Unit = {
    val e = new ScriptEngineManager().getEngineByName("scala")
    e.put("a", 1)
    e.put("s", "String")
    println(e.eval("""s"a is $a, s is $s""""))
  }
}
For this application to work, make sure to include the library dependency:
libraryDependencies += "org.scala-lang" % "scala-compiler" % scalaVersion.value
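Mapping this back onto the stub functions from the question, a rough sketch using only put and eval (the script is simply re-evaluated each time, so there is no separate compile step here; object and method names are placeholders):
import javax.script.{ScriptEngine, ScriptEngineManager}

object ScriptRunner {
  private val engine: ScriptEngine = new ScriptEngineManager().getEngineByName("scala")
  private val script = """s"a is $a, s is $s""""

  def setupVariables(): Unit = {
    engine.put("a", 1)
    engine.put("s", "String")
  }

  def changeVariables(): Unit = engine.put("a", 2)

  // The bindings set via put are visible to the script at evaluation time
  def execute(): String = String.valueOf(engine.eval(script))
}

object ScriptRunnerExample extends App {
  ScriptRunner.setupVariables()
  println(ScriptRunner.execute()) // a is 1, s is String
  ScriptRunner.changeVariables()
  println(ScriptRunner.execute()) // a is 2, s is String
}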

Scala to Java8 stream compatibility issue

(Scala)
Files.walk(Paths.get("")).forEach(x => log.info(x.toString))
gives
Error:(21, 16) missing parameter type
.forEach(x => log.info(x.toString))
^
and (Java 8)
Files.walk(Paths.get("")).forEach(x -> System.out.println(x.toString()));
works fine
What's wrong?
stream.forEach(x -> foo()) in Java is syntactic sugar for
stream.forEach(
  new Consumer<Path>() { public void accept(Path x) { foo(); } }
)
This is not at all the same as x => ... in Scala, which is an instance of Function1[Path, Unit].
Try this:
import java.util.function.Consumer

Files.walk(Paths.get(""))
  .forEach(new Consumer[Path] { def accept(s: Path): Unit = println(s) })
An alternative route: instead of converting your Scala function to a Java Consumer, you can convert the Java stream to a Scala stream and use normal Scala functions.
scala> import scala.collection.JavaConverters._
scala> import java.nio.file._
scala> val files = Files.walk(Paths.get("/tmp")).iterator.asScala.toStream
files: scala.collection.immutable.Stream[java.nio.file.Path] = Stream(/tmp, ?)
scala> files.foreach(println(_))
Scala 2.12 comes with much better Java 8 interoperability (see the Scala 2.12 announcement); as a result, your code as written compiles just fine in 2.12:
Files.walk(Paths.get("")).forEach(x => System.out.println(x.toString))
If you need this to work in 2.11, use scala-java8-compat. Here's the dependency:
libraryDependencies += "org.scala-lang.modules" %% "scala-java8-compat" % "0.8.0"
and in this case you can use it like this:
import scala.compat.java8.FunctionConverters._
Files.walk(Paths.get("")).forEach( asJavaConsumer { x => println(x.toString) } )