I want to transpose multiple columns in Spark SQL table
I found this solution for only two columns; I want to know how to use the zip function with three columns, varA, varB and varC.
import org.apache.spark.sql.functions.{udf, explode}
val zip = udf((xs: Seq[Long], ys: Seq[Long]) => xs.zip(ys))
df.withColumn("vars", explode(zip($"varA", $"varB"))).select(
$"userId", $"someString",
$"vars._1".alias("varA"), $"vars._2".alias("varB")).show
This is my DataFrame schema:
root
|-- owningcustomerid: string (nullable = true)
|-- event_stoptime: string (nullable = true)
|-- balancename: string (nullable = false)
|-- chargedvalue: string (nullable = false)
|-- newbalance: string (nullable = false)
I tried this code:
val zip = udf((xs: Seq[String], ys: Seq[String], zs: Seq[String]) => (xs, ys, zs).zipped.toSeq)
df.printSchema
val df4 = df.withColumn("vars", explode(zip($"balancename", $"chargedvalue", $"newbalance"))).select(
  $"owningcustomerid", $"event_stoptime",
  $"vars._1".alias("balancename"), $"vars._2".alias("chargedvalue"), $"vars._3".alias("newbalance"))
I got this error:
cannot resolve 'UDF(balancename, chargedvalue, newbalance)' due to data type mismatch: argument 1 requires array<string> type, however, '`balancename`' is of string type. argument 2 requires array<string> type, however, '`chargedvalue`' is of string type. argument 3 requires array<string> type, however, '`newbalance`' is of string type.;;
'Project [owningcustomerid#1085, event_stoptime#1086, balancename#1159, chargedvalue#1160, newbalance#1161, explode(UDF(balancename#1159, chargedvalue#1160, newbalance#1161)) AS vars#1167]
In Scala in general, you can use Tuple3.zipped:
val zip = udf((xs: Seq[Long], ys: Seq[Long], zs: Seq[Long]) =>
(xs, ys, zs).zipped.toSeq)
zip($"varA", $"varB", $"varC")
Specifically in Spark SQL (>= 2.4) you can use the arrays_zip function:
import org.apache.spark.sql.functions.arrays_zip
arrays_zip($"varA", $"varB", $"varC")
However, note that your data doesn't contain array<string> columns but plain strings, so arrays_zip and explode cannot be applied directly; you have to parse your data first.
The same index-based approach generalizes to any number of columns, for example four:
val zip = udf((a: Seq[String], b: Seq[String], c: Seq[String], d: Seq[String]) =>
  a.indices.map(i => (a(i), b(i), c(i), d(i))))
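If the string columns actually hold delimiter-separated values, a minimal parse-then-zip sketch could look like the following (the comma delimiter and equal element counts per row are assumptions, not given in the question):
import spark.implicits._
import org.apache.spark.sql.functions.{explode, split, udf}

// Assumption: each string column contains comma-separated values of the same length per row
val zip3 = udf((xs: Seq[String], ys: Seq[String], zs: Seq[String]) => (xs, ys, zs).zipped.toSeq)

val df4 = df
  .withColumn("vars", explode(zip3(
    split($"balancename", ","),
    split($"chargedvalue", ","),
    split($"newbalance", ","))))
  .select(
    $"owningcustomerid", $"event_stoptime",
    $"vars._1".alias("balancename"),
    $"vars._2".alias("chargedvalue"),
    $"vars._3".alias("newbalance"))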
Related
I have the following DataFrame, let's say DF1:
root
|-- VARIANTS: string (nullable = true)
|-- VARIANT_ID: long (nullable = false)
|-- CASE_ID: string (nullable = true)
|-- APP_ID: integer (nullable = false)
where VARIANTS (string) looks like:
Activity_1,Activity_2,Activity_2,Activity_3,Activity_5...
I am trying to get a new column, Variants_stats, which per row would look like:
Activity_1:1, Activity_2:2, Activity_3:1, Activity_5:1
The approach I have taken so far is:
1) Create a UDF:
val countActivityFrequences = udf((value: String) =>
  value.split(",").map(_.trim)
    .groupBy(identity).mapValues(_.length)
    .map { case (k, v) => k + ":" + v }
    .mkString(","))
val dfNew = df1.withColumn("Variants_stats", countActivityFrequences($"VARIANTS"))
It seems to be OK (at least Spark doesn't complain), until I try to run any SQL or call dfNew.show(false), which always gives me back:
java.lang.StringIndexOutOfBoundsException: String index out of range: -84
at java.lang.String.substring(String.java:1931)
at java.lang.Class.getSimpleBinaryName(Class.java:1448)
at java.lang.Class.getSimpleName(Class.java:1309)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF.udfErrorMessage$lzycompute(ScalaUDF.scala:1055)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF.udfErrorMessage(ScalaUDF.scala:1054)
at org.apache.spark.sql.catalyst.expressions.ScalaUDF.doGenCode(ScalaUDF.scala:1006)
at org.apache.spark.sql.catalyst.expressions.Expression$$anonfun$genCode$2.apply(Expression.scala:108)
at org.apache.spark.sql.catalyst.expressions.Expression$$anonfun$genCode$2.apply(Expression.scala:105)
at scala.Option.getOrElse(Option.scala:121)
I can't figure out what I am doing wrong here.
I am using Spark 2.1+.
To reproduce:
val items = List(
"A_001,A_002,A_010,A_0200,A_0201,A_0201,A_0202,A_0206,A_0207,A_0208,A_0208,A_0209,A_070,A_071,A_072,A_073,A_073,A_074",
"A_001,A_002,A_010,A_0201,A_0201,A_0201,A_0202,A_0206,A_0207,A_0208,A_0208,A_0209,A_070,A_071,A_072,A_073,A_073,A_073")
val df = sc.parallelize(items).toDF("VARIANTS")
df.show(false)
df.printSchema
// create UDF function
val countActivityFrequences = udf((value: String) =>
  value.split(",").map(_.trim)
    .groupBy(identity).mapValues(_.length)
    .map { case (k, v) => k + ":" + v }
    .mkString(","))
// Apply UDF against our little DF
var dfNew = df.withColumn("Variants_stats", countActivityFrequences($"VARIANTS"))
dfNew.printSchema
// Error thrown: either "Malformed class name" or java.lang.StringIndexOutOfBoundsException
dfNew.show(false)
Update:
The issue only appeared in our AWS EMR environment, under Zeppelin.
Restarting the interpreter made it work.
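For completeness, a hedged workaround sketch (not part of the original resolution): the "Malformed class name" error comes from Class.getSimpleName failing on classes nested deeply inside REPL/notebook-generated wrappers, so hosting the function in a top-level object sometimes avoids it without restarting the interpreter:
import org.apache.spark.sql.functions.udf

// Hypothetical helper object (an assumption, not from the original post): keeping the
// lambda out of the notebook-generated anonymous classes avoids the deeply nested
// class names that trip up Class.getSimpleName.
object VariantStats extends Serializable {
  val countFrequencies: String => String = value =>
    value.split(",").map(_.trim)
      .groupBy(identity).mapValues(_.length)
      .map { case (k, v) => k + ":" + v }
      .mkString(",")
}

val countActivityFrequences = udf(VariantStats.countFrequencies)
val dfNew = df.withColumn("Variants_stats", countActivityFrequences($"VARIANTS"))
dfNew.show(false)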
Let's say I have the following code:
case class MyTypeInt(a: String, b: MyType2)
case class MyType2(v: Int)
case class MyTypeLong(a: String, b: MyType3)
case class MyType3(v: Long)
val typedDataset = TypedDataset.create(Seq(MyTypeInt("v", MyType2(1))))
typedDataset.withColumnRenamed(???, typedDataset.colMany('b, 'v).cast[Long]).as[MyTypeLong]
How can I implement this transformation when the field that I am trying to transform is nested? The signature of withColumnRenamed asks for a Symbol as the first parameter, so I don't know how to do this...
withColumnRenamed does not allow you to transform a column. To do that, you should use withColumn. One approach would then be to cast the column and recreate the struct.
scala> val new_ds = ds.withColumn("b", struct($"b.v" cast "long" as "v")).as[MyTypeLong]
scala> new_ds.printSchema
root
|-- a: string (nullable = true)
|-- b: struct (nullable = false)
| |-- v: long (nullable = true)
Another approach would be to use map and build the object yourself:
ds.map{ case MyTypeInt(a, MyType2(b)) => MyTypeLong(a, MyType3(b)) }
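Both snippets above operate on a plain Dataset ds rather than the frameless TypedDataset from the question; a minimal setup sketch (assumed, not shown in the original answer) would be:
import spark.implicits._
import org.apache.spark.sql.functions.struct

// Plain Dataset built from the question's case classes (assumed setup)
val ds = Seq(MyTypeInt("v", MyType2(1))).toDS()
val new_ds = ds.withColumn("b", struct($"b.v" cast "long" as "v")).as[MyTypeLong]
new_ds.show()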
Imagine that you have the following case classes:
case class B(key: String, value: Int)
case class A(name: String, data: B)
Given an instance of A, how do I create a Spark Row? e.g.
val a = A("a", B("b", 0))
val row = ???
NOTE: Given row I need to be able to get data with:
val name: String = row.getAs[String]("name")
val b: Row = row.getAs[Row]("data")
The following seems to match what you're looking for.
scala> spark.version
res0: String = 2.3.0
scala> val a = A("a", B("b", 0))
a: A = A(a,B(b,0))
import org.apache.spark.sql.Encoders
val schema = Encoders.product[A].schema
scala> schema.printTreeString
root
|-- name: string (nullable = true)
|-- data: struct (nullable = true)
| |-- key: string (nullable = true)
| |-- value: integer (nullable = false)
val values = a.productIterator.toSeq.toArray
import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema
val row: Row = new GenericRowWithSchema(values, schema)
scala> val name: String = row.getAs[String]("name")
name: String = a
// the following won't work since B =!= Row
scala> val b: Row = row.getAs[Row]("data")
java.lang.ClassCastException: B cannot be cast to org.apache.spark.sql.Row
... 55 elided
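A hedged follow-up sketch (not in the original answer): if the nested value is itself built as a GenericRowWithSchema, the nested getAs[Row] call works as well:
// Reusing the imports and `schema` from above; the nested schema is derived the same way
val dataSchema = Encoders.product[B].schema
val nested = new GenericRowWithSchema(a.data.productIterator.toSeq.toArray, dataSchema)
val row2: Row = new GenericRowWithSchema(Array[Any](a.name, nested), schema)

val b: Row = row2.getAs[Row]("data")       // [b,0]
val key: String = b.getAs[String]("key")   // "b"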
Very short, but probably not the fastest, as it first creates a DataFrame and then collects it again:
import session.implicits._
val row = Seq(a).toDF().first()
@Jacek Laskowski's answer is great!
To complete it, here is some syntactic sugar:
val row = Row(a.productIterator.toSeq: _*)
And a recursive method, if you happen to have nested case classes:
def productToRow(product: Product): Row = {
  val sequence = product.productIterator.toSeq.map {
    case nested: Product => productToRow(nested)
    case e => e
  }
  Row(sequence: _*)
}
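A short usage sketch (assumed, not in the original answer). Note that a Row built this way carries no schema, so fields are read by position rather than by name:
val a = A("a", B("b", 0))
val row = productToRow(a)

val name = row.getString(0)     // "a"
val data = row.getAs[Row](1)    // nested Row: [b,0]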
I don't think there exists a public API that can do it directly. Internally, Spark uses the Encoder.toRow method to convert objects to org.apache.spark.sql.catalyst.expressions.UnsafeRow, but this method is private. You could try to:
Obtain an Encoder for the class:
val enc: Encoder[A] = ExpressionEncoder()
Use reflection to access the toRow method and make it accessible.
Call it to convert the object to an UnsafeRow.
Obtain a RowEncoder for the expected schema (enc.schema).
Convert the UnsafeRow to a Row.
I haven't tried this, so I cannot guarantee whether it will work.
I have the following types in a DataFrame:
root
|-- id: string (nullable = true)
|-- items: array (nullable = true)
| |-- element: string (containsNull = true)
input:
val rawData = Seq(("id1",Array("item1","item2","item3","item4")),("id2",Array("item1","item2","item3")))
val data = spark.createDataFrame(rawData)
and a list of items:
val filter_list = List("item1", "item2")
I would like to filter out items that are not in the filter_list, similar to how array_contains would function, but it's not working on a provided list of strings, only on a single value.
So the output would look like this:
val rawData = Seq(("id1",Array("item1","item2")),("id2",Array("item1","item2")))
val data = spark.createDataFrame(rawData)
I tried solving this with the following UDF, but I am probably mixing up types between Scala and Spark:
def filterItems(flist: List[String]) = udf {
(recs: List[String]) => recs.filter(item => flist.contains(item))
}
I'm using Spark 2.2
thanks!
Your code is almost right. All you have to do is replace List with Seq:
def filterItems(flist: List[String]) = udf {
(recs: Seq[String]) => recs.filter(item => flist.contains(item))
}
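For example, a usage sketch (assuming the columns are named as in the schema shown in the question):
import org.apache.spark.sql.functions.col

val data = spark.createDataFrame(rawData).toDF("id", "items")
val filtered = data.withColumn("items", filterItems(filter_list)(col("items")))
filtered.show(false)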
It would also make sense to change the signature from List[String] => UserDefinedFunction to Seq[String] => UserDefinedFunction, but it is not required.
Reference: SQL Programming Guide - Data Types.
I have a DataFrame with the following schema:
root
|-- journal: string (nullable = true)
|-- topicDistribution: vector (nullable = true)
The topicDistribution field is a vector of doubles: [0.1, 0.2, 0.15, ...]
What I want is to explode each row into several rows to obtain the following schema:
root
|-- journal: string
|-- topic-prob: double // this is the value from the vector
|-- topic-id : integer // this is the index of the value from the vector
To clarify, I've created a case class:
case class JournalDis(journal: String, topic_id: Integer, prob: Double)
I've managed to achieve this using dataset.explode in a very awkward way:
val df1 = df.explode("topicDistribution", "topic") {
topics: DenseVector => topics.toArray.zipWithIndex
}.select("journal", "topic")
val df2 = df1.withColumn("topic_id", df1("topic").getItem("_2")).withColumn("topic_prob", df1("topic").getItem("_1")).drop(df1("topic"))
But dataset.explode is deprecated. I wonder how to achieve this using the flatMap method?
Not tested but should work:
import spark.implicits._
import org.apache.spark.ml.linalg.Vector
df.as[(String, Vector)].flatMap {
case (j, ps) => ps.toArray.zipWithIndex.map {
case (p, ti) => JournalDis(j, ti, p)
}
}
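An alternative sketch (not from the original answer): converting the vector to an array with a small UDF and then using posexplode also yields the index and the value as columns:
import spark.implicits._
import org.apache.spark.sql.functions.{posexplode, udf}
import org.apache.spark.ml.linalg.Vector

// Hypothetical helper (an assumption): turn the ML vector into array<double>
val vecToArray = udf((v: Vector) => v.toArray)

val exploded = df
  .select($"journal", posexplode(vecToArray($"topicDistribution")))
  .withColumnRenamed("pos", "topic-id")
  .withColumnRenamed("col", "topic-prob")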