Pass column and a Map to a Scala UDF - scala

I come from Pyspark. I know how to do this in Pyspark but haven't managed to do the same thing in Scala.
Here is a dataframe,
val df = Seq(
  ("u1", Array[Int](2, 3, 4)),
  ("u2", Array[Int](7, 8, 9))
).toDF("id", "mylist")
// +---+---------+
// | id| mylist|
// +---+---------+
// | u1|[2, 3, 4]|
// | u2|[7, 8, 9]|
// +---+---------+
and here is a Map object,
val myMap = (1 to 4).toList.map(x => (x,0)).toMap
//myMap: scala.collection.immutable.Map[Int,Int] = Map(1 -> 0, 2 -> 0, 3 -> 0, 4 -> 0)
so this map has keys from 1 to 4.
For each row of df, I want to check whether any element of "mylist" is a key of myMap. If so, return that element (any one of them if several match); otherwise return -1.
So the result should look like:
+---+---------+-------+
| id| mylist| label|
+---+---------+-------+
| u1|[2, 3, 4]| 2 |
| u2|[7, 8, 9]| -1 |
+---+---------+-------+
I have tried the following approaches:
The function below works on an Array object, but I don't know how to apply it to a column:
def list2label(ls: Array[Int], m: Map[Int, Int]): Int = {
  var flag = 0
  for (element <- ls) {
    if (m.contains(element)) flag = element
  }
  flag
}
val testls = Array[Int](2,3,4)
list2label(testls, myMap)
//testls: Array[Int] = Array(2, 3, 4)
//res33: Int = 4
Trying to use a UDF, I got an error:
def list2label_udf(m: Map[Int, Int]) = udf( (ls: Array[Int]) => (
  var flag = 0
  for (element <- ls) {
    if (m.contains(element)) flag = element
  }
  flag
  )
)
//<console>:3: error: illegal start of simple expression
// var flag = 0
// ^
I think my UDF is in the wrong format.
In Pyspark I can do this as I wish:
%pyspark
myDict = {1: 0, 2: 0, 3: 0, 4: 0}

def list2label(ls, myDict):
    for i in ls:
        if i in myDict:
            return i
    return 0

def list2label_UDF(myDict):
    return udf(lambda c: list2label(c, myDict))

df = df.withColumn("label", list2label_UDF(myDict)(col("mylist")))
Any help would be appreciated!

The solution is shown below:
scala> df.show
+---+---------+
| id| mylist|
+---+---------+
| u1|[2, 3, 4]|
| u2|[7, 8, 9]|
+---+---------+
scala> def customUdf(m: Map[Int,Int]) = udf((s: Seq[Int]) => {
     |   val intersection = s.toList.intersect(m.keys.toList)
     |   if (intersection.isEmpty) -1 else intersection(0)
     | })
customUdf: (m: Map[Int,Int])org.apache.spark.sql.expressions.UserDefinedFunction
scala> df.select($"id", $"myList", customUdf(myMap)($"myList").as("new_col")).show
+---+---------+-------+
| id| myList|new_col|
+---+---------+-------+
| u1|[2, 3, 4]| 2|
| u2|[7, 8, 9]| -1|
+---+---------+-------+
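A side note on why the original attempt failed: a multi-statement lambda body must be wrapped in braces, not parentheses (hence the "illegal start of simple expression" error), and Spark hands an array column to a Scala UDF as a Seq[Int] (a WrappedArray), not an Array[Int], which is why the UDF parameter above is declared as Seq[Int]. As a sketch (the name firstKeyUdf is illustrative, not from the original answer), the same lookup can also be written with find, which stops at the first matching key:
def firstKeyUdf(m: Map[Int, Int]) = udf { (s: Seq[Int]) =>
  // first element of s that is a key of m, or -1 if there is none
  s.find(m.contains).getOrElse(-1)
}
df.select($"id", $"mylist", firstKeyUdf(myMap)($"mylist").as("label")).show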
Another approach could be to pass the list of the map's keys instead of the map itself, since you are only checking against the keys. The solution for this is shown below:
scala> def customUdf1(m: List[Int]) = udf((s: Seq[Int]) => {
     |   val intersection = s.toList.intersect(m)
     |   if (intersection.isEmpty) -1 else intersection(0)
     | })
customUdf1: (m: List[Int])org.apache.spark.sql.expressions.UserDefinedFunction
scala> df.select($"id",$"myList", customUdf1(myMap.keys.toList)($"myList").as("new_col")).show
+---+---------+-------+
| id| myList|new_col|
+---+---------+-------+
| u1|[2, 3, 4]| 2|
| u2|[7, 8, 9]| -1|
+---+---------+-------+
Let me know if it helps!!

Related

Evaluate formulas in Spark DataFrame

Is it possible to evaluate formulas in a dataframe which refer to columns? e.g. if I have data like this (Scala example):
val df = Seq(
  (1, "(a+b)/d",     1, 20, 2, 3, 1),
  (2, "(c+b)*(a+e)", 0, 1, 2, 3, 4),
  (3, "a*(d+e+c)",   7, 10, 6, 2, 1)
).toDF("Id", "formula", "a", "b", "c", "d", "e")
df.show()
Expected results:
+---+------+
| Id|result|
+---+------+
|  1|     7|
|  2|    12|
|  3|    63|
+---+------+
I have been unable to get selectExpr, expr, eval() or combinations of them to work.
You can use the Scala toolbox eval in a UDF:
import org.apache.spark.sql.functions.{array, col, udf}
import scala.reflect.runtime.universe
import scala.tools.reflect.ToolBox
val tb = universe.runtimeMirror(getClass.getClassLoader).mkToolBox()
val cols = df.columns.tail
val eval_udf = udf(
  (r: Seq[String]) =>
    tb.eval(tb.parse(
      ("val %s = %s;" * cols.tail.size).format(
        cols.tail.zip(r.tail).flatMap(x => List(x._1, x._2)): _*
      ) + r(0)
    )).toString
)
val df2 = df.select(col("id"), eval_udf(array(df.columns.tail.map(col):_*)).as("result"))
df2.show
+---+------+
| id|result|
+---+------+
| 1| 7|
| 2| 12|
| 3| 63|
+---+------+
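To make the mechanics concrete, for each row the UDF assembles a small Scala snippet from the column names and row values and evaluates it with the toolbox. For the first row the generated code is roughly the following (an illustrative sketch reusing tb from above, not part of the original answer):
val snippet = "val a = 1;val b = 20;val c = 2;val d = 3;val e = 1;(a+b)/d"
tb.eval(tb.parse(snippet)).toString  // "7"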
A slightly different version of mck's answer: replace the variables in the formula column with their corresponding values from the other columns, then call an eval UDF:
import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox
val eval = udf((f: String) => {
  val toolbox = currentMirror.mkToolBox()
  toolbox.eval(toolbox.parse(f)).toString
})
val formulaExpr = expr(df.columns.drop(2).foldLeft("formula")((acc, c) => s"replace($acc, '$c', $c)"))
df.select($"Id", eval(formulaExpr).as("result")).show()
//+---+------+
//| Id|result|
//+---+------+
//| 1| 7|
//| 2| 12|
//| 3| 63|
//+---+------+
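For reference, the foldLeft wraps one SQL replace call per value column around the formula column, so with columns a through e the expression handed to expr is roughly (illustrative; the nesting follows df.columns.drop(2)):
// formulaExpr is equivalent to:
// expr("replace(replace(replace(replace(replace(formula, 'a', a), 'b', b), 'c', c), 'd', d), 'e', e)")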

Select a literal based on a column value in Spark

I have a map:
val map = Map("A" -> 1, "B" -> 2)
And I have a DataFrame. A column in the data frame contains the keys of the map. I am trying to select a column in a new DF that has the map values in it based on the key:
val newDF = DfThatContainsTheKeyColumn.select(concat(col(SomeColumn), lit("|"),
lit(map.get(col(ColumnWithKey).toString()).get) as newColumn)
But this is resulting in the following error:
java.lang.RuntimeException: Unsupported literal type class scala.None$ None
I made sure that the column ColumnWithKey has As and Bs only and does not have empty values in it.
Is there another way to get the result I am looking for? Any help would be appreciated.
The problem with this statement (besides the syntax problems)
val newDF = DfThatContainsTheKeyColumn.select(concat(col(SomeColumn), lit("|"),
lit(map.get(col(ColumnWithKey).toString()).get) as newColumn)
is that col(ColumnWithKey) will not take the value of a specific row; it is just a column expression, so map.get is evaluated on the driver against the column's string representation and returns None, which cannot be turned into a literal.
In your case I would suggest joining your map to your dataframe:
val map = Map("A" -> 1, "B" -> 2)
val df_map = map.toSeq.toDF("key", "value")

val DfThatContainsTheKeyColumn = Seq(
  "A",
  "A",
  "B",
  "B"
).toDF("myCol")

DfThatContainsTheKeyColumn
  .join(broadcast(df_map), $"mycol" === $"key")
  .select(concat($"mycol", lit("|"), $"value").as("newColumn"))
  .show()
gives
+---------+
|newColumn|
+---------+
|      A|1|
|      A|1|
|      B|2|
|      B|2|
+---------+
You can use case classes to make it easy. This is an example:
Given this input
val givenMap = Map("A" -> 1, "B" -> 2)
import spark.implicits._
val df = Seq(
(1, "A"),
(2, "A"),
(3, "B"),
(4, "B")
).toDF("col_a", "col_b")
df.show()
The above code outputs:
+-----+-----+
|col_a|col_b|
+-----+-----+
| 1| A|
| 2| A|
| 3| B|
| 4| B|
+-----+-----+
givenMap: scala.collection.immutable.Map[String,Int] = Map(A -> 1, B -> 2)
import spark.implicits._
df: org.apache.spark.sql.DataFrame = [col_a: int, col_b: string]
The code that you need will look like:
case class MyInput(col_a: Int, col_b: String)
case class MyOutput(col_a: Int, col_b: String, new_column: Int)
df.as[MyInput].map(row=> MyOutput(row.col_a, row.col_b, givenMap(row.col_b))).show()
With the case classes you can cast your df and use object notation to access your column values within a .map. The above code will output:
+-----+-----+----------+
|col_a|col_b|new_column|
+-----+-----+----------+
| 1| A| 1|
| 2| A| 1|
| 3| B| 2|
| 4| B| 2|
+-----+-----+----------+
defined class MyInput
defined class MyOutput
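One caveat with givenMap(row.col_b): it throws a NoSuchElementException if a key is missing from the map. A defensive variant (a sketch, not part of the original answer) uses getOrElse with a default value:
df.as[MyInput]
  .map(row => MyOutput(row.col_a, row.col_b, givenMap.getOrElse(row.col_b, -1)))
  .show()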
You can look up a map using a key from a column as follows:
val map = Map("A" -> 1, "B" -> 2)

val df = spark.createDataset(Seq("dummy"))
  .withColumn("key", lit("A"))

df.map { row =>
  val k = row.getAs[String]("key")
  val v = map.getOrElse(k, 0)
  (k, v)
}.toDF("key", "value").show(false)
Result -
+---+-----+
|key|value|
+---+-----+
|A |1 |
+---+-----+
You can look up a map present inside a column with a literal key using Column.getItem; see the example below.
val mapKeys = Array("A","B")
val mapValues = Array(1,2)
val df = spark.createDataset(Seq("dummy"))
.withColumn("key",lit("A"))
.withColumn("keys",lit(mapKeys))
.withColumn("values",lit(mapValues))
.withColumn("map",map_from_arrays($"keys",$"values"))
.withColumn("lookUpTheMap",$"map".getItem("A"))
//A dataframe with Map is created.
//A map is looked up using a hard coded String key.
df.show(false)
Result
+-----+---+------+------+----------------+------------+
|value|key|keys |values|map |lookUpTheMap|
+-----+---+------+------+----------------+------------+
|dummy|A |[A, B]|[1, 2]|[A -> 1, B -> 2]|1 |
+-----+---+------+------+----------------+------------+
To look up a map present inside a column based on another column containing the key, you can use a UDF or use the map function on the dataframe, as shown below.
//A map is looked up using a Column key.
df.map { row =>
  val m = row.getAs[Map[String, Int]]("map")
  val k = row.getAs[String]("key")
  val v = m.getOrElse(k, 0)
  (m, k, v)
}.toDF("map", "key", "value").show(false)
Result
+----------------+---+-----+
|map |key|value|
+----------------+---+-----+
|[A -> 1, B -> 2]|A |1 |
+----------------+---+-----+
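If you are on Spark 2.4 or later (an assumption, not stated in the original answer), the same column-key lookup can be done without a UDF using the built-in element_at function; it returns null for missing keys, so coalesce supplies the 0 default:
import org.apache.spark.sql.functions.{coalesce, element_at, lit}

df.withColumn("value", coalesce(element_at($"map", $"key"), lit(0)))
  .select("map", "key", "value")
  .show(false)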
I think a simpler option could be to use typedLit:
val map = typedLit(Map("A" -> 1, "B" -> 2))
val newDF = DfThatContainsTheKeyColumn.select(concat(col(SomeColumn), lit("|"),
map(col(ColumnWithKey))) as newColumn)
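A self-contained sketch of the typedLit approach (the input dataframe and column names here are made up for illustration):
import spark.implicits._
import org.apache.spark.sql.functions.{col, concat, lit, typedLit}

val keyToValue = typedLit(Map("A" -> 1, "B" -> 2))
val input = Seq("A", "B").toDF("ColumnWithKey")

input
  .select(concat(col("ColumnWithKey"), lit("|"), keyToValue(col("ColumnWithKey"))).as("newColumn"))
  .show()
// +---------+
// |newColumn|
// +---------+
// |      A|1|
// |      B|2|
// +---------+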

Scala Dataframe get max value of specific row

Given a dataframe with an index column ("Z"):
val tmp= Seq(("D",0.1,0.3, 0.4), ("E",0.3, 0.1, 0.4), ("F",0.2, 0.2, 0.5)).toDF("Z", "a", "b", "c")
+---+---+---+---+
|  Z|  a|  b|  c|
+---+---+---+---+
|  D|0.1|0.3|0.4|
|  E|0.3|0.1|0.4|
|  F|0.2|0.2|0.5|
+---+---+---+---+
Say I'm interested in the first row, where Z = "D":
tmp.filter(col("Z") === "D")
+---+---+---+---+
|  Z|  a|  b|  c|
+---+---+---+---+
|  D|0.1|0.3|0.4|
+---+---+---+---+
How do I get the min and max values of that dataframe row and their corresponding column names, while keeping the index column?
Desired output if I want the top 2 max values:
+---+---+---+
|  Z|  b|  c|
+---+---+---+
|  D|0.3|0.4|
+---+---+---+
Desired output if I want the min:
+---+---+
| Z | a|
+---+---+
| D |0.1|
+---+---+
What I tried:
// first convert that DF to an array
val tmp = df.collect.map(_.toSeq).flatten
// returns
tmp: Array[Any] = Array(0.1, 0.3, 0.4) <-- don't know why Any is returned
//take top values of array
val n = 1
tmp.zipWithIndex.sortBy(-_._1).take(n).map(_._2)
But I got this error:
No implicit Ordering defined for Any.
Is there any way to do it straight from the dataframe instead of via an array?
You can do something like this:
tmp
  .where($"a" === 0.1)
  .take(1)
  .map { row =>
    Seq(row.getDouble(0), row.getDouble(1), row.getDouble(2))
  }
  .head
  .sortBy(d => -d)
  .take(2)
Or, if you have a large number of fields, you can take the schema and pattern match the row fields against the schema data types, like this:
import org.apache.spark.sql.types._
val schemaWithIndex = tmp.schema.zipWithIndex
tmp
  .where($"a" === 0.1)
  .take(1)
  .map { row =>
    for {
      tuple <- schemaWithIndex
    } yield {
      val field = tuple._1
      val index = tuple._2
      field.dataType match {
        case DoubleType => row.getDouble(index)
      }
    }
  }
  .head
  .sortBy(d => -d)
  .take(2)
Maybe there is an easier way to do this.
Definitely not the fastest way, but it works straight from the dataframe.
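As an aside, another way to keep the column names around is to read the filtered row as a Map with Row.getValuesMap and sort by value (a sketch against the question's tmp with the Z column; not part of the original answer):
val row = tmp.filter($"Z" === "D").head()
val values = row.getValuesMap[Double](Seq("a", "b", "c"))  // Map(a -> 0.1, b -> 0.3, c -> 0.4)
val top2 = values.toSeq.sortBy(-_._2).take(2)              // Seq((c,0.4), (b,0.3))
val minCol = values.minBy(_._2)                            // (a,0.1)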
More generic solution:
// somewhere in codebase
import spark.implicits._
import org.apache.spark.sql.functions._
def transform[T, R: Encoder](ds: DataFrame, colsToSelect: Seq[String])(func: Map[String, T] => Map[String, R])
                            (implicit encoder: Encoder[Map[String, R]]): DataFrame = {
  ds.map(row => func(row.getValuesMap(colsToSelect)))
    .toDF()
    .select(explode(col("value")))
    .withColumn("idx", lit(1))
    .groupBy(col("idx")).pivot(col("key")).agg(first(col("value")))
    .drop("idx")
}
From here on you work with a Map, where the key is a field name and the value is the field value.
def fuzzyStuff(values: Map[String, Any]): Map[String, String] = {
  val valueForA = values("a").asInstanceOf[Double]
  // Do whatever you want to do
  // ...
  // use a Map as the return type, where the key is a column name and the value is whatever you want
  Map("x" -> s"fuzzyA-$valueForA")
}

def maxN(n: Int)(values: Map[String, Double]): Map[String, Double] = {
  println(values)
  values.toSeq.sorted.reverse.take(n).toMap
}
Usage:
val tmp = Seq((0.1,0.3, 0.4), (0.3, 0.1, 0.4), (0.2, 0.2, 0.5)).toDF("a", "b", "c")
val filtered = tmp.filter(col("a") === 0.1)
transform(filtered, colsToSelect = Seq("a", "b", "c"))(maxN(2))
.show()
+---+---+
| b| c|
+---+---+
|0.3|0.4|
+---+---+
transform(filtered, colsToSelect = Seq("a", "b", "c"))(fuzzyStuff)
.show()
+----------+
| x|
+----------+
|fuzzyA-0.1|
+----------+
Define max and min functions
def maxN(values: Map[String, Double], n: Int): Map[String, Double] = {
  values.toSeq.sorted.reverse.take(n).toMap
}

def min(values: Map[String, Double]): Map[String, Double] = {
  Map(values.toSeq.min)
}
Create dataset
val tmp= Seq((0.1,0.3, 0.4), (0.3, 0.1, 0.4), (0.2, 0.2, 0.5)).toDF("a", "b", "c")
val filtered = tmp.filter(col("a") === 0.1)
Explode and pivot the map type
val df = filtered.map(row => maxN(row.getValuesMap(Seq("a", "b", "c")), 2)).toDF()
val exploded = df.select(explode($"value"))
+---+-----+
|key|value|
+---+-----+
| a| 0.1|
| b| 0.3|
+---+-----+
//Then pivot
exploded.withColumn("idx", lit(1))
.groupBy($"idx").pivot($"key").agg(first($"value"))
.drop("idx")
.show()
+---+---+
| b| c|
+---+---+
|0.3|0.4|
+---+---+

collect_set equivalent spark 1.5 UDAF method verification

Can someone tell me the equivalent function for collect_set in Spark 1.5?
Is there any workaround to get results similar to collect_set(col(name))?
Is this a correct approach:
class CollectSetFunction[T](val colType: DataType) extends UserDefinedAggregateFunction {

  def inputSchema: StructType =
    new StructType().add("inputCol", colType)

  def bufferSchema: StructType =
    new StructType().add("outputCol", ArrayType(colType))

  def dataType: DataType = ArrayType(colType)

  def deterministic: Boolean = true

  def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer.update(0, new scala.collection.mutable.ArrayBuffer[T])
  }

  def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    val list = buffer.getSeq[T](0)
    if (!input.isNullAt(0)) {
      val sales = input.getAs[T](0)
      buffer.update(0, list :+ sales)
    }
  }

  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    buffer1.update(0, buffer1.getSeq[T](0).toSet ++ buffer2.getSeq[T](0).toSet)
  }

  def evaluate(buffer: Row): Any = {
    buffer.getSeq[T](0)
  }
}
The code looks correct. Furthermore, I tested it in 1.6.2 in local mode and got the same result (see below). I don't know of any simpler alternative using the DataFrame API. Using RDDs it's pretty straightforward, and in 1.5 it might sometimes be preferable to take a detour through the RDD API, as data frames are not fully implemented there.
scala> val rdd = sc.parallelize((1 to 10)).map(x => (x%5,x))
scala> rdd.groupByKey.mapValues(_.toSet.toList).toDF("k","set").show
+---+-------+
| k| set|
+---+-------+
| 0|[5, 10]|
| 1| [1, 6]|
| 2| [2, 7]|
| 3| [3, 8]|
| 4| [4, 9]|
+---+-------+
And if you want to factor it out, an initial version (which can be improved) could be the following:
def collectSet(df: DataFrame, k: Column, v: Column) = df
.select(k.as("k"),v.as("v"))
.map( r => (r.getInt(0),r.getInt(1)))
.groupByKey()
.mapValues(_.toSet.toList)
.toDF("k","v")
but if you want to make other aggregations, you will not be able to avoid a join.
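For example, combining the collected sets with another aggregate might look roughly like this (a sketch reusing the collectSet helper above, not from the original answer):
val sets = collectSet(df, 'k, 'v)
df.groupBy('k)
  .agg(count('v).as("cnt"))
  .join(sets, "k")
  .show()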
scala> val df = sc.parallelize((1 to 10)).toDF("v").withColumn("k", pmod('v,lit(5)))
df: org.apache.spark.sql.DataFrame = [v: int, k: int]
scala> val csudaf = new CollectSetFunction[Int](IntegerType)
scala> df.groupBy('k).agg(collect_set('v),csudaf('v)).show
+---+--------------+---------------------+
| k|collect_set(v)|CollectSetFunction(v)|
+---+--------------+---------------------+
| 0| [5, 10]| [5, 10]|
| 1| [1, 6]| [1, 6]|
| 2| [2, 7]| [2, 7]|
| 3| [3, 8]| [3, 8]|
| 4| [4, 9]| [4, 9]|
+---+--------------+---------------------+
test 2:
scala> val df = sc.parallelize((1 to 100000)).toDF("v").withColumn("k", floor(rand*10))
df: org.apache.spark.sql.DataFrame = [v: int, k: bigint]
scala> df.groupBy('k).agg(collect_set('v).as("a"),csudaf('v).as("b"))
.groupBy('a==='b).count.show
+-------+-----+
|(a = b)|count|
+-------+-----+
| true| 10|
+-------+-----+

Spark, Scala, DataFrame: create feature vectors

I have a DataFrame that looks like follow:
userID, category, frequency
1,cat1,1
1,cat2,3
1,cat9,5
2,cat4,6
2,cat9,2
2,cat10,1
3,cat1,5
3,cat7,16
3,cat8,2
The number of distinct categories is 10, and I would like to create a feature vector for each userID and fill the missing categories with zeros.
So the output would be something like:
userID,feature
1,[1,3,0,0,0,0,0,0,5,0]
2,[0,0,0,6,0,0,0,0,2,1]
3,[5,0,0,0,0,0,16,2,0,0]
It is just an illustrative example; in reality I have about 200,000 unique userIDs and 300 unique categories.
What is the most efficient way to create the features DataFrame?
A slightly more DataFrame-centric solution:
import org.apache.spark.ml.feature.VectorAssembler
val df = sc.parallelize(Seq(
(1, "cat1", 1), (1, "cat2", 3), (1, "cat9", 5), (2, "cat4", 6),
(2, "cat9", 2), (2, "cat10", 1), (3, "cat1", 5), (3, "cat7", 16),
(3, "cat8", 2))).toDF("userID", "category", "frequency")
// Create a sorted array of categories
val categories = df
.select($"category")
.distinct.map(_.getString(0))
.collect
.sorted
// Prepare vector assemble
val assembler = new VectorAssembler()
.setInputCols(categories)
.setOutputCol("features")
// Aggregation expressions
val exprs = categories.map(
c => sum(when($"category" === c, $"frequency").otherwise(lit(0))).alias(c))
val transformed = assembler.transform(
df.groupBy($"userID").agg(exprs.head, exprs.tail: _*))
.select($"userID", $"features")
and a UDAF alternative:
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.mllib.linalg.{Vectors, VectorUDT}
import org.apache.spark.sql.types.{StructType, ArrayType, DoubleType, IntegerType}
import scala.collection.mutable.WrappedArray
class VectorAggregate(n: Int) extends UserDefinedAggregateFunction {

  def inputSchema = new StructType()
    .add("i", IntegerType)
    .add("v", DoubleType)

  def bufferSchema = new StructType().add("buff", ArrayType(DoubleType))

  def dataType = new VectorUDT()

  def deterministic = true

  def initialize(buffer: MutableAggregationBuffer) = {
    buffer.update(0, Array.fill(n)(0.0))
  }

  def update(buffer: MutableAggregationBuffer, input: Row) = {
    if (!input.isNullAt(0)) {
      val i = input.getInt(0)
      val v = input.getDouble(1)
      val buff = buffer.getAs[WrappedArray[Double]](0)
      buff(i) += v
      buffer.update(0, buff)
    }
  }

  def merge(buffer1: MutableAggregationBuffer, buffer2: Row) = {
    val buff1 = buffer1.getAs[WrappedArray[Double]](0)
    val buff2 = buffer2.getAs[WrappedArray[Double]](0)
    for ((x, i) <- buff2.zipWithIndex) {
      buff1(i) += x
    }
    buffer1.update(0, buff1)
  }

  def evaluate(buffer: Row) = Vectors.dense(
    buffer.getAs[Seq[Double]](0).toArray)
}
with example usage:
import org.apache.spark.ml.feature.StringIndexer
val indexer = new StringIndexer()
.setInputCol("category")
.setOutputCol("category_idx")
.fit(df)
val indexed = indexer.transform(df)
.withColumn("category_idx", $"category_idx".cast("integer"))
.withColumn("frequency", $"frequency".cast("double"))
val n = indexer.labels.size + 1
val transformed = indexed
.groupBy($"userID")
.agg(new VectorAggregate(n)($"category_idx", $"frequency").as("vec"))
transformed.show
// +------+--------------------+
// |userID| vec|
// +------+--------------------+
// | 1|[1.0,5.0,0.0,3.0,...|
// | 2|[0.0,2.0,0.0,0.0,...|
// | 3|[5.0,0.0,16.0,0.0...|
// +------+--------------------+
In this case order of values is defined by indexer.labels:
indexer.labels
// Array[String] = Array(cat1, cat9, cat7, cat2, cat8, cat4, cat10)
In practice I would prefer the solution by Odomontois, so these are provided mostly for reference.
Suppose:
val cs: SparkContext
val sc: SQLContext
val cats: DataFrame
where userId and frequency are bigint columns, which correspond to scala.Long.
We are creating intermediate mapping RDD:
val catMaps = cats.rdd
  .groupBy(_.getAs[Long]("userId"))
  .map { case (id, rows) =>
    id -> rows
      .map { row => row.getAs[String]("category") -> row.getAs[Long]("frequency") }
      .toMap
  }
Then we collect all the categories present, in lexicographic order:
val catNames = cs.broadcast(catMaps.map(_._2.keySet).reduce(_ union _).toArray.sorted)
Or create it manually:
val catNames = cs.broadcast(1 to 10 map {n => s"cat$n"} toArray)
Finally, we transform the maps into arrays, with 0 for the missing categories:
import sc.implicits._
val catArrays = catMaps
.map { case (id, catMap) => id -> catNames.value.map(catMap.getOrElse(_, 0L)) }
.toDF("userId", "feature")
now catArrays.show() prints something like
+------+--------------------+
|userId| feature|
+------+--------------------+
| 2|[0, 1, 0, 6, 0, 0...|
| 1|[1, 0, 3, 0, 0, 0...|
| 3|[5, 0, 0, 0, 16, ...|
+------+--------------------+
This might not be the most elegant solution for dataframes, as I am barely familiar with this area of Spark.
Note, that you could create your catNames manually to add zeros for missing cat3, cat5, ...
Also note that the catMaps RDD is otherwise computed twice; you might want to .persist() it.
Given your input:
val df = Seq((1, "cat1", 1), (1, "cat2", 3), (1, "cat9", 5),
(2, "cat4", 6), (2, "cat9", 2), (2, "cat10", 1),
(3, "cat1", 5), (3, "cat7", 16), (3, "cat8", 2))
.toDF("userID", "category", "frequency")
df.show
+------+--------+---------+
|userID|category|frequency|
+------+--------+---------+
| 1| cat1| 1|
| 1| cat2| 3|
| 1| cat9| 5|
| 2| cat4| 6|
| 2| cat9| 2|
| 2| cat10| 1|
| 3| cat1| 5|
| 3| cat7| 16|
| 3| cat8| 2|
+------+--------+---------+
Just run:
val pivoted = df.groupBy("userID").pivot("category").avg("frequency")
val dfZeros = pivoted.na.fill(0)
dfZeros.show
+------+----+-----+----+----+----+----+----+
|userID|cat1|cat10|cat2|cat4|cat7|cat8|cat9|
+------+----+-----+----+----+----+----+----+
| 1| 1.0| 0.0| 3.0| 0.0| 0.0| 0.0| 5.0|
| 3| 5.0| 0.0| 0.0| 0.0|16.0| 2.0| 0.0|
| 2| 0.0| 1.0| 0.0| 6.0| 0.0| 0.0| 2.0|
+------+----+-----+----+----+----+----+----+
Finally, use VectorAssembler to create an org.apache.spark.ml.linalg.Vector; a sketch of that step follows the note below.
NOTE: I have not checked performance on this yet...
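A minimal sketch of that final assembler step, assuming the pivoted dfZeros from above (the column selection and output column name are illustrative):
import org.apache.spark.ml.feature.VectorAssembler

val featureCols = dfZeros.columns.filter(_ != "userID")
val assembler = new VectorAssembler()
  .setInputCols(featureCols)
  .setOutputCol("feature")

assembler.transform(dfZeros).select("userID", "feature").show(false)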
EDIT: Possibly more complex, but likely more efficient!
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{collect_list, sort_array, struct, udf}

def toSparseVectorUdf(size: Int) = udf[Vector, Seq[Row]] {
  (data: Seq[Row]) => {
    val indices = data.map(_.getDouble(0).toInt).toArray
    val values = data.map(_.getInt(1).toDouble).toArray
    Vectors.sparse(size, indices, values)
  }
}
val indexer = new StringIndexer().setInputCol("category").setOutputCol("idx")
val indexerModel = indexer.fit(df)
val totalCategories = indexerModel.labels.size
val dataWithIndices = indexerModel.transform(df)
val data = dataWithIndices.groupBy("userId").agg(sort_array(collect_list(struct($"idx", $"frequency".as("val")))).as("data"))
val dataWithFeatures = data.withColumn("features", toSparseVectorUdf(totalCategories)($"data")).drop("data")
dataWithFeatures.show(false)
+------+--------------------------+
|userId|features |
+------+--------------------------+
|1 |(7,[0,1,3],[1.0,5.0,3.0]) |
|3 |(7,[0,2,4],[5.0,16.0,2.0])|
|2 |(7,[1,5,6],[2.0,6.0,1.0]) |
+------+--------------------------+
NOTE: StringIndexer will sort categories by frequency => most frequent category will be at index=0 in indexerModel.labels. Feel free to use your own mapping if you'd like and pass that directly to toSparseVectorUdf.