GraphFrame: missing or invalid dependency detected while loading class file - scala

I am trying to create a graph using Spark GraphFrames. Here is the code:
import org.graphframes._
// Vertex DataFrame
val v = sqlContext.createDataFrame(List(
  ("a", "Alice", 34),
  ("b", "Bob", 36),
  ("c", "Charlie", 30),
  ("d", "David", 29),
  ("e", "Esther", 32),
  ("f", "Fanny", 36),
  ("g", "Gabby", 60)
)).toDF("id", "name", "age")
// Edge DataFrame
val e = sqlContext.createDataFrame(List(
  ("a", "b", "friend"),
  ("b", "c", "follow"),
  ("c", "b", "follow"),
  ("f", "c", "follow"),
  ("e", "f", "follow"),
  ("e", "d", "friend"),
  ("d", "a", "friend"),
  ("a", "e", "friend")
)).toDF("src", "dst", "relationship")
// Create a GraphFrame
val g = GraphFrame(v, e)
But this is the error I am getting:
error: missing or invalid dependency detected while loading class file
'GraphFrame.class'. Could not access type Logging in package
org.apache.spark, because it (or its dependencies) are missing. Check
your build definition for missing or conflicting dependencies. (Re-run
with -Ylog-classpath to see the problematic classpath.) A full
rebuild may help if 'GraphFrame.class' was compiled against an
incompatible version of org.apache.spark.
I am using Apache Spark 2.1 and Scala 2.11. Any suggestions on what the issue could be?

Download the following packages from the Maven central repository:
com.typesafe.scala-logging_scala-logging-api_2.11-2.1.2.jar
graphframes_graphframes-0.5.0-spark2.1-s_2.11.jar
org.slf4j_slf4j-api-1.7.7.jar
com.typesafe.scala-logging_scala-logging-slf4j_2.11-2.1.2.jar
org.scala-lang_scala-reflect-2.11.0.jar
Add the following to your spark-defaults.conf file (a comma-separated list of the absolute paths where the downloaded jars are located):
spark.jars path_2_jar/org.slf4j_slf4j-api-1.7.7.jar,path_2_jar/org.scala-lang_scala-reflect-2.11.0.jar,path_2_jar/graphframes_graphframes-0.5.0-spark2.1-s_2.11.jar,path_2_jar/com.typesafe.scala-logging_scala-logging-slf4j_2.11-2.1.2.jar,path_2_jar/com.typesafe.scala-logging_scala-logging-api_2.11-2.1.2.jar
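Alternatively, Spark can resolve the package and its transitive dependencies for you through the spark.jars.packages property. A minimal sketch for spark-defaults.conf; the coordinates and repository URL below are assumptions inferred from the jar name above, so check the GraphFrames release page for the exact values for your Spark version:
spark.jars.packages graphframes:graphframes:0.5.0-spark2.1-s_2.11
spark.jars.repositories https://repos.spark-packages.org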

Related

Flatten a map into pairs (key, value) in Scala [closed]

Say I construct the following map in scala:
val map = Map.empty[String, Seq[String]] + ("A" -> ("1", "2", "3", "4"), "B" -> ("2", "3"), "C" -> ("3", "4"))
My output should be a Sequence of single key, value pairs. Namely, it should look like this:
[("A", "1"), ("A", "2"), ("A", "3"), ("A", "4"), ("B", "2"), ("B", "3"), ("B", "2"), ("C", "3"),
("C", "4")]
How can I obtain this using flatMap?
I would guess that your original goal was to create the following map (note the added Seqs):
val map = Map.empty[String, Seq[String]] + ("A" -> Seq("1", "2", "3", "4"), "B" -> Seq("2", "3"), "C" -> Seq("3", "4"))
Then you will be able to transform it easily with:
val result = map.toSeq.flatMap { case (k, v) => v.map((k, _)) }
Also, you can create the map directly; there is no need to append to an empty map:
val map = Map("A" -> Seq("1", "2", "3", "4"), "B" -> Seq("2", "3"), "C" -> Seq("3", "4"))

Spark SQL - Check for a value in multiple columns

I have a status dataset like below:
I want to select all the rows from this dataset which have "FAILURE" in any of these 5 status columns.
So, I want the result to contain only IDs 1,2,4 as they have FAILURE in one of the Status columns.
I guess in SQL we can do something like below:
SELECT * FROM status WHERE "FAILURE" IN (Status1, Status2, Status3, Status4, Status5);
In spark, I know I can do a filter by comparing each Status column with "FAILURE"
status.filter(s => {s.Status1.equals(FAILURE) || s.Status2.equals(FAILURE) ... and so on..})
But I would like to know if there is a smarter way of doing this in Spark SQL.
Thanks in advance!
In case there are many columns to be examined, consider a recursive function that short-circuits upon the first match, as shown below:
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{col, lit, when}
import spark.implicits._

val df = Seq(
  (1, "T", "F", "T", "F"),
  (2, "T", "T", "T", "T"),
  (3, "T", "T", "F", "T")
).toDF("id", "c1", "c2", "c3", "c4")

// Returns a Column that is true only if none of the given columns equals `elem`;
// evaluation stops at the first column that matches.
def checkFor(elem: Column, cols: List[Column]): Column = cols match {
  case Nil =>
    lit(true)
  case h :: tail =>
    when(h === elem, lit(false)).otherwise(checkFor(elem, tail))
}

val cols = df.columns.filter(_.startsWith("c")).map(col).toList
df.where(checkFor(lit("F"), cols)).show
// +---+---+---+---+---+
// | id| c1| c2| c3| c4|
// +---+---+---+---+---+
// | 2| T| T| T| T|
// +---+---+---+---+---+
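For comparison, the same predicate can be built without recursion by folding the per-column comparisons into a single expression (a small sketch reusing df and cols from above; it does not spell out the short-circuit, but Catalyst still evaluates it as one expression):
// true only for rows in which no c-column equals "F"
val noneIsF = cols.map(_ =!= "F").reduce(_ && _)
df.where(noneIsF).show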
Here is a similar example you can modify to filter on the newly added column (I leave that to you); it checks for zeroes in all columns except the first:
import org.apache.spark.sql.functions._
import spark.implicits._
val df = sc.parallelize(Seq(
  ("r1", 0.0, 0.0, 0.0, 0.0),
  ("r2", 6.4, 4.9, 6.3, 7.1),
  ("r3", 4.2, 0.0, 7.2, 8.4),
  ("r4", 1.0, 2.0, 0.0, 0.0)
)).toDF("ID", "a", "b", "c", "d")

// Count how many of the non-ID columns are 0.0 in each row
val count_some_val = df.columns.tail.map(x => when(col(x) === 0.0, 1).otherwise(0)).reduce(_ + _)
val df2 = df.withColumn("some_val_count", count_some_val)
df2.filter(col("some_val_count") > 0).show(false)
As far as I know it is not easy to stop as soon as the first match is found, but a smarter person than myself once showed me this approach with a lazy exists, which I believe does stop at the first match. It is a different approach, but one that I like:
import org.apache.spark.sql.functions._
import spark.implicits._
val df = sc.parallelize(Seq(
  ("r1", 0.0, 0.0, 0.0, 0.0),
  ("r2", 6.0, 4.9, 6.3, 7.1),
  ("r3", 4.2, 0.0, 7.2, 8.4),
  ("r4", 1.0, 2.0, 0.0, 0.0)
)).toDF("ID", "a", "b", "c", "d")

// The lazy exists stops at the first zero it encounters in the row
df.map { r => (r.getString(0), r.toSeq.tail.exists(c => c.asInstanceOf[Double] == 0)) }
  .toDF("ID", "ones")
  .show()
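To select only the flagged rows, the resulting boolean column can be filtered afterwards (a small sketch reusing the DataFrame above):
df.map { r => (r.getString(0), r.toSeq.tail.exists(c => c.asInstanceOf[Double] == 0)) }
  .toDF("ID", "ones")
  .filter($"ones")
  .show()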
scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._
scala> import spark.implicits._
import spark.implicits._
scala> val df = Seq(
| ("Prop1", "SUCCESS", "SUCCESS", "SUCCESS", "FAILURE" ,"SUCCESS"),
| ("Prop2", "SUCCESS", "FAILURE", "SUCCESS", "FAILURE", "SUCCESS"),
| ("Prop3", "SUCCESS", "SUCCESS", "SUCCESS", "SUCCESS", "SUCCESS" ),
| ("Prop4", "SUCCESS", "FAILURE", "SUCCESS", "FAILURE", "SUCCESS"),
| ("Prop5", "SUCCESS", "SUCCESS", "SUCCESS", "SUCCESS","SUCCESS")
| ).toDF("Name", "Status1", "Status2", "Status3", "Status4","Status5")
df: org.apache.spark.sql.DataFrame = [Name: string, Status1: string ... 4 more fields]
scala> df.show
+-----+-------+-------+-------+-------+-------+
| Name|Status1|Status2|Status3|Status4|Status5|
+-----+-------+-------+-------+-------+-------+
|Prop1|SUCCESS|SUCCESS|SUCCESS|FAILURE|SUCCESS|
|Prop2|SUCCESS|FAILURE|SUCCESS|FAILURE|SUCCESS|
|Prop3|SUCCESS|SUCCESS|SUCCESS|SUCCESS|SUCCESS|
|Prop4|SUCCESS|FAILURE|SUCCESS|FAILURE|SUCCESS|
|Prop5|SUCCESS|SUCCESS|SUCCESS|SUCCESS|SUCCESS|
+-----+-------+-------+-------+-------+-------+
scala> df.where($"Name".isin("Prop1","Prop4") and $"Status1".isin("SUCCESS","FAILURE")).show
+-----+-------+-------+-------+-------+-------+
| Name|Status1|Status2|Status3|Status4|Status5|
+-----+-------+-------+-------+-------+-------+
|Prop1|SUCCESS|SUCCESS|SUCCESS|FAILURE|SUCCESS|
|Prop4|SUCCESS|FAILURE|SUCCESS|FAILURE|SUCCESS|
+-----+-------+-------+-------+-------+-------+
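Building on the df in this transcript, the check the question actually asks for (FAILURE in any of the status columns) can be expressed by folding a disjunction over those columns; a sketch:
import org.apache.spark.sql.functions.col
val statusCols = df.columns.filter(_.startsWith("Status"))
val anyFailure = statusCols.map(col(_) === "FAILURE").reduce(_ || _)
df.where(anyFailure).show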

How to create a sample dataframe in Scala / Spark

I'm trying to create a simple DataFrame as follows:
import sqlContext.implicits._
val lookup = Array("one", "two", "three", "four", "five")
val theRow = Array("1",Array(1,2,3), Array(0.1,0.4,0.5))
val theRdd = sc.makeRDD(theRow)
case class X(id: String, indices: Array[Integer], weights: Array[Float] )
val df = theRdd.map{
case Array(s0,s1,s2) => X(s0.asInstanceOf[String],s1.asInstanceOf[Array[Integer]],s2.asInstanceOf[Array[Float]])
}.toDF()
df.show()
df is defined as
df: org.apache.spark.sql.DataFrame = [id: string, indices: array<int>, weights: array<float>]
which is what I want.
Upon executing, I get
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 13.0 failed 1 times, most recent failure: Lost task 1.0 in stage 13.0 (TID 50, localhost): scala.MatchError: 1 (of class java.lang.String)
Where is this MatchError coming from? And, is there a simpler way to create sample DataFrames programmatically?
First, theRow should be a Row and not an Array. Then, if you modify your types so that the compatibility between Java and Scala is respected, your example will work:
import org.apache.spark.sql.Row

val theRow = Row("1", Array[java.lang.Integer](1, 2, 3), Array[Double](0.1, 0.4, 0.5))
val theRdd = sc.makeRDD(Array(theRow))
case class X(id: String, indices: Array[Integer], weights: Array[Double])
val df = theRdd.map {
  case Row(s0, s1, s2) => X(s0.asInstanceOf[String], s1.asInstanceOf[Array[Integer]], s2.asInstanceOf[Array[Double]])
}.toDF()
df.show()
//+---+---------+---------------+
//| id| indices| weights|
//+---+---------+---------------+
//| 1|[1, 2, 3]|[0.1, 0.4, 0.5]|
//+---+---------+---------------+
Here is another example you can refer to:
import spark.implicits._
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val columns = Array("id", "first", "last", "year")
val df1 = sc.parallelize(Seq(
  (1, "John", "Doe", 1986),
  (2, "Ive", "Fish", 1990),
  (4, "John", "Wayne", 1995)
)).toDF(columns: _*)
val df2 = sc.parallelize(Seq(
  (1, "John", "Doe", 1986),
  (2, "IveNew", "Fish", 1990),
  (3, "San", "Simon", 1974)
)).toDF(columns: _*)
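With spark.implicits._ in scope, the intermediate RDD is not strictly needed; a local Seq converts directly (a minimal sketch reusing the columns array above):
val df3 = Seq(
  (1, "John", "Doe", 1986),
  (3, "San", "Simon", 1974)
).toDF(columns: _*)
df3.show()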

Dataframe state before save and after load - what's different?

I have a DF that contains some SQL expressions (coalesce, case/when etc.).
I later try to map/flatMap this DF, where I get a Task not serializable error due to the fields that contain the SQL expressions.
(Why I need to map/flatMap this DF is a separate question)
When I save this DF to a Parquet file and load it afterwards, the error is gone and I can convert to RDD and do transformations no problem!
How is the DF different before saving and after loading? In some way, the SQL expressions must have been evaluated and made persistent. How can I achieve the same thing without saving and loading? (df.persist() did not do the trick.)
Here's test code:
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions._
import sqlContext.implicits._

val data = Seq((1, "sku1", "EUR", 99.0, 89.0), (2, "sku2", "USD", 89.0, 79.0), (3, "sku3", "USD", 49.0, 39.9))
val aditionalStuffForCertainSKUsMap = Map("sku1" -> List(10, 20, 30))

val listedPrice = coalesce(
  List("EUR", "USD").map(c => when($"currency" === c, col(c)).otherwise(lit(null))): _*)

val df = (sc.parallelize(data)
  .toDF("id", "sku", "currency", "EUR", "USD")
  .withColumn("price_in_given_currency", when($"currency" === "EUR", $"EUR" * 2).otherwise(1))
  // .withColumn("fails_price_in_given_currency", listedPrice)
)
df.show
df.write.mode(SaveMode.Overwrite).parquet("test_df")
The data contains a sku, and some SKUs represent bundles (like sku1) for which I want to add some other fields to the DF. Only when I try to access this Map[String, List[Int]] within map() do I get complaints, and only with the fails_price_in_given_currency column, not with price_in_given_currency:
// If I load the df first, the map() works even when using `fails_price_in_given_currency`
//val df = sqlContext.read.parquet("test_df")
val out = df.map(d => {
  val key = d.getAs[String]("sku")
  aditionalStuffForCertainSKUsMap.getOrElse(key, None)
})
The error is thrown when I use fails_price_in_given_currency instead. If, however, I load df before the map, it runs fine again!

How to add a list of elements to session in Play?

I'm new to the Play framework. How do I add a list of elements to a session?
The compiler always complains about the code:
val cookies: List[(String, String)] = List[("a", "b), ("c", "d")]
Ok(views.html.hello(info)).withSession(request.session + cookies)
You don't need to copy the existing session yourself:
val cookies: List[(String, String)] = List(("a", "b"), ("c", "d"))
Ok(views.html.hello(info)).addingToSession(cookies: _*)
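addingToSession keeps whatever is already in the incoming request's session and appends the given pairs, which is why there is no need to read request.session first. To read a value back on a later request, a minimal sketch in a hypothetical controller:
import play.api.mvc._

class HelloController extends Controller {
  // "a" was stored via addingToSession on a previous response
  def check = Action { implicit request =>
    Ok(request.session.get("a").getOrElse("missing"))
  }
}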