Extracting vectors from features - pyspark

Let's say I have the dataframe below:
+---+------+-------+
|id |string|string2|
+---+------+-------+
|1 |foo |hello |
|2 |bar |hellow |
|3 |bar |hellow |
|4 |baz |hello |
+---+------+-------+
Column string contains 3 distinct values [foo, bar, baz] and string2 contains 2 [hello, hellow].
How can I extract vectors for each column in the following way:
If column string contains foo, I want to map it to the vector [1,0,0], bar to [0,1,0], and so on. The same goes for the string2 column (hello -> [1,0], hellow -> [0,1]).
Final dataframe should look something like this:
+---+----------+-----------+
|id |string_vec|string2_vec|
+---+----------+-----------+
|1 |[1,0,0] |[1,0] |
|2 |[0,1,0] |[0,1] |
|3 |[0,1,0] |[0,1] |
|4 |[0,0,1] |[0,1] |
+---+----------+-----------+
Finally I want to combine the _vec columns to:
+---+-----------+
|id |features |
+---+-----------+
|1 |[1,0,0,1,0]|
|2 |[0,1,0,0,1]|
|3 |[0,1,0,0,1]|
|4 |[0,0,1,0,1]|
+---+-----------+
I can do this with a for loop, but it is not efficient. My main problem is the mapping process; I guess for the rest I can use the VectorAssembler.

You can create a simple udf.
Your dataframe:
values = [("foo", "hello"), ("bar", "hellow"), ("bar", "hellow"), ("baz", "hello")]

from pyspark.sql.functions import udf
from pyspark.sql.types import *

df = spark.createDataFrame(values, ["string", "string2"])
df.show()
+------+-------+
|string|string2|
+------+-------+
| foo| hello|
| bar| hellow|
| bar| hellow|
| baz| hello|
+------+-------+
udf:
def encode(string1, string2):
    values = ["foo", "bar", "baz", "hello", "hellow"]
    string_values = [string1, string2]
    return [1 if x in string_values else 0 for x in values]

encode_udf = udf(encode, ArrayType(IntegerType()))
result:
df.withColumn("features", encode_udf("string","string2")).show()
+------+-------+---------------+
|string|string2| features|
+------+-------+---------------+
| foo| hello|[1, 0, 0, 1, 0]|
| bar| hellow|[0, 1, 0, 0, 1]|
| bar| hellow|[0, 1, 0, 0, 1]|
| baz| hello|[0, 0, 1, 1, 0]|
+------+-------+---------------+

Related

groupBy and get count of records for multiple columns in scala

As part of a bigger task, I am facing some issues when I try to find the count of records in each column, grouped by another column. I am not very experienced with manipulating dataframe columns.
I have a Spark dataframe as below.
+---+------------+--------+--------+--------+
|id | date|signal01|signal02|signal03|
+---+------------+--------+--------+--------+
|050|2021-01-14 |1 |3 |0 |
|050|2021-01-15 |1 |3 |0 |
|050|2021-02-02 |1 |3 |0 |
|051|2021-01-14 |1 |3 |0 |
|051|2021-01-15 |1 |3 |0 |
|051|2021-02-02 |1 |3 |0 |
|051|2021-02-03 |1 |3 |0 |
|052|2021-03-03 |1 |3 |0 |
|052|2021-03-05 |1 |3 |0 |
|052|2021-03-06 |1 |3 |0 |
|052|2021-03-16 |1 |3 |0 |
+---+------------+--------+--------+--------+
I am working in Scala and am trying to get the result shown below from this data frame.
+---+--------+--------+--------+
|id |signal01|signal02|signal03|
+---+--------+--------+--------+
|050|3 |3 |3 |
|051|4 |4 |4 |
|052|4 |4 |4 |
+---+--------+--------+--------+
For each id, the count of each signal should be the output.
Also, is there any way to pass a condition when counting, such as the count of signals with value > 0?
I have tried the following; it gives the total count, but not grouped by id as expected.
val signalColumns = ((Temp01DF.columns.toBuffer) -= ("id","date"))
val Temp02DF = Temp01DF.select(signalColumns.map(c => count(col(c)).alias(c)): _*).show()
+--------+--------+--------+
|signal01|signal02|signal03|
+--------+--------+--------+
|51 |51 |51 |
+--------+--------+--------+
Is there any way to achieve this in Scala?
You are probably looking for groupBy, agg and count.
You can do something like this:
// define some data
val df = Seq(
  ("050", 1, 3, 0),
  ("050", 1, 3, 0),
  ("050", 1, 3, 0),
  ("051", 1, 3, 0),
  ("051", 1, 3, 0),
  ("051", 1, 3, 0),
  ("051", 1, 3, 0),
  ("052", 1, 3, 0),
  ("052", 1, 3, 0),
  ("052", 1, 3, 0),
  ("052", 1, 3, 0)
).toDF("id", "signal01", "signal02", "signal03")

val countColumns = Seq("signal01", "signal02", "signal03").map(c => count("*").as(c))
df.groupBy("id").agg(countColumns.head, countColumns.tail: _*).show
/*
+---+--------+--------+--------+
| id|signal01|signal02|signal03|
+---+--------+--------+--------+
|052| 4| 4| 4|
|051| 4| 4| 4|
|050| 3| 3| 3|
+---+--------+--------+--------+
*/
Instead of counting "*", you can have a predicate:
val countColumns = Seq("signal01", "signal02", "signal03").map(c => count(when(col(c) > 0, 1)).as(c))
df.groupBy("id").agg(countColumns.head, countColumns.tail: _*).show
/*
+---+--------+--------+--------+
| id|signal01|signal02|signal03|
+---+--------+--------+--------+
|052| 4| 4| 0|
|051| 4| 4| 0|
|050| 3| 3| 0|
+---+--------+--------+--------+
*/
A PySpark Solution
from pyspark.sql import functions as F

df = spark.createDataFrame([(50, 1, 3, 0), (50, 1, 3, 0), (50, 1, 3, 0), (51, 1, 3, 0), (51, 1, 3, 0), (51, 1, 3, 0), (51, 1, 3, 0), (52, 1, 3, 0), (52, 1, 3, 0), (52, 1, 3, 0), (52, 1, 3, 0)], ["col1", "col2", "col3", "col4"])
df.show()
df_grp = df.groupBy("col1").agg(F.count("col2").alias("col2"), F.count("col3").alias("col3"), F.count("col4").alias("col4"))
df_grp.show()
Output
+----+----+----+----+
|col1|col2|col3|col4|
+----+----+----+----+
| 50| 3| 3| 3|
| 51| 4| 4| 4|
| 52| 4| 4| 4|
+----+----+----+----+
For the first part, I found that the required result can be achieved this way:
val signalCount = df.groupBy("id")
  .agg(count("signal01"), count("signal02"), count("signal03"))
Make sure you have the spark functions imported:
import org.apache.spark.sql.functions._
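If you want to derive the signal columns from the schema instead of listing them by hand, as in your own attempt, you can build the aggregation expressions programmatically and pass them to agg. A minimal sketch, assuming id and date are the only non-signal columns, as in the original dataframe:
import org.apache.spark.sql.functions.{col, count, when}

// Derive the signal column names from the schema (assumes "id" and "date"
// are the only non-signal columns).
val signalColumns = df.columns.filterNot(Set("id", "date"))

// Count only the rows where the signal value is > 0, one expression per column.
val aggExprs = signalColumns.map(c => count(when(col(c) > 0, 1)).as(c))

df.groupBy("id").agg(aggExprs.head, aggExprs.tail: _*).show()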

Find min value for every 5 hour interval

My df
val df = Seq(
  ("1", 1),
  ("1", 1),
  ("1", 2),
  ("1", 4),
  ("1", 5),
  ("1", 6),
  ("1", 8),
  ("1", 12),
  ("1", 12),
  ("1", 13),
  ("1", 14),
  ("1", 15),
  ("1", 16)
).toDF("id", "time")
In this case the first interval starts at hour 1, so every row up to 6 (1 + 5) is part of this interval.
But 8 - 1 > 5, so the second interval starts at 8 and goes up to 13.
Then 14 - 8 > 5, so the third one starts, and so on.
The desired result
+---+----+--------+
|id |time|min_time|
+---+----+--------+
|1 |1 |1 |
|1 |1 |1 |
|1 |2 |1 |
|1 |4 |1 |
|1 |5 |1 |
|1 |6 |1 |
|1 |8 |8 |
|1 |12 |8 |
|1 |12 |8 |
|1 |13 |8 |
|1 |14 |14 |
|1 |15 |14 |
|1 |16 |14 |
+---+----+--------+
I'm trying to do it using the min function, but I don't know how to account for this condition.
val window = Window.partitionBy($"id").orderBy($"time")
df
  .select($"id", $"time")
  .withColumn("min_time", when(($"time" - min($"time").over(window)) <= 5, min($"time").over(window)).otherwise($"time"))
  .show(false)
what I get
+---+----+--------+
|id |time|min_time|
+---+----+--------+
|1 |1 |1 |
|1 |1 |1 |
|1 |2 |1 |
|1 |4 |1 |
|1 |5 |1 |
|1 |6 |1 |
|1 |8 |8 |
|1 |12 |12 |
|1 |12 |12 |
|1 |13 |13 |
|1 |14 |14 |
|1 |15 |15 |
|1 |16 |16 |
+---+----+--------+
You can go with your first idea of using an aggregation function over a window. But instead of using some combination of Spark's already defined functions, you can define your own Spark user-defined aggregate function (UDAF).
Analysis
As you correctly supposed, we should use a kind of min function over a window. On the rows of this window, we want to implement the following rule:
Given rows sorted by time, if the difference between the time of the current row and the min_time of the previous row is greater than 5, then the current row's min_time should be the current row's time; otherwise the current row's min_time should be the previous row's min_time.
However, with the aggregate functions provided by Spark, we can't access the previous row's min_time. There is a lag function, but it can only access values that already exist in previous rows. As the previous row's min_time is being computed rather than already present, we can't access it.
Thus we have to define our own aggregate function.
Solution
Defining a tailor-made aggregate function
To define our aggregate function, we need to create an object that extends the Aggregator abstract class. Below is the complete implementation:
import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.{Encoder, Encoders}

object MinByInterval extends Aggregator[Integer, Integer, Integer] {

  def zero: Integer = null

  def reduce(buffer: Integer, time: Integer): Integer = {
    if (buffer == null || time - buffer > 5) time else buffer
  }

  def merge(b1: Integer, b2: Integer): Integer = {
    throw new NotImplementedError("should not use as general aggregation")
  }

  def finish(reduction: Integer): Integer = reduction

  def bufferEncoder: Encoder[Integer] = Encoders.INT

  def outputEncoder: Encoder[Integer] = Encoders.INT
}
We use Integer for the input, buffer and output types. We chose Integer as it is a nullable Int. We could have used Option[Int]; however, the Spark documentation advises against recreating objects in aggregator methods for performance reasons, which is what would happen if we used a wrapper type like Option.
We implement the rule defined in Analysis section in reduce method:
def reduce(buffer: Integer, time: Integer): Integer = {
  if (buffer == null || time - buffer > 5) time else buffer
}
Here time is the value of the time column in the current row, and buffer is the previously computed value, corresponding to the min_time column of the previous row. As our window sorts the rows by time, time is always greater than or equal to buffer. The null buffer case only happens when processing the first row.
The merge method is not used when applying an aggregate function over a window, so we don't implement it.
The finish method is the identity, as we don't need to perform a final calculation on the aggregated value, and both the buffer and output encoders are Encoders.INT.
Calling user defined aggregate function
Now we can call our user defined aggregate function with the following code:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, udaf}
val minTime = udaf(MinByInterval)
val window = Window.partitionBy("id").orderBy("time")
df.withColumn("min_time", minTime(col("time")).over(window))
Run
Given the input dataframe in the question, we get:
+---+----+--------+
|id |time|min_time|
+---+----+--------+
|1 |1 |1 |
|1 |1 |1 |
|1 |2 |1 |
|1 |4 |1 |
|1 |5 |1 |
|1 |6 |1 |
|1 |8 |8 |
|1 |12 |8 |
|1 |12 |8 |
|1 |13 |8 |
|1 |14 |14 |
|1 |15 |14 |
|1 |16 |14 |
+---+----+--------+
Input data
val df = Seq(
  ("1", 1),
  ("1", 1),
  ("1", 2),
  ("1", 4),
  ("1", 5),
  ("1", 6),
  ("1", 8),
  ("1", 12),
  ("1", 12),
  ("1", 13),
  ("1", 14),
  ("1", 15),
  ("1", 16),
  ("2", 4),
  ("2", 8),
  ("2", 10),
  ("2", 11),
  ("2", 11),
  ("2", 12),
  ("2", 13),
  ("2", 20)
).toDF("id", "time")
The data must be sorted, otherwise the result will be incorrect.
// Note: the Row case class and its companion object (defined below) must already be in scope.
val window = Window.partitionBy($"id").orderBy($"time")

df
  .withColumn("min", row_number().over(window))
  .as[Row]
  .map(_.getMin)
  .show(40)
Next, the case class; var min is used to hold the minimum value and is only updated when the conditions are met.
case class Row(id: String, time: Int, min: Int) {
  def getMin: Row = {
    if (time - Row.min > 5 || Row.min == -99 || min == 1) {
      Row.min = time
    }
    Row(id, time, Row.min)
  }
}

object Row {
  var min: Int = -99
}
Result
+---+----+---+
| id|time|min|
+---+----+---+
| 1| 1| 1|
| 1| 1| 1|
| 1| 2| 1|
| 1| 4| 1|
| 1| 5| 1|
| 1| 6| 1|
| 1| 8| 8|
| 1| 12| 8|
| 1| 12| 8|
| 1| 13| 8|
| 1| 14| 14|
| 1| 15| 14|
| 1| 16| 14|
| 2| 4| 4|
| 2| 8| 4|
| 2| 10| 10|
| 2| 11| 10|
| 2| 11| 10|
| 2| 12| 10|
| 2| 13| 10|
| 2| 20| 20|
+---+----+---+
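For completeness, another way to avoid both a custom UDAF and shared mutable state is to handle each id with groupByKey and flatMapGroups, carrying the current interval start while walking the sorted rows. A minimal sketch, assuming each id's rows fit in memory (TimeRow is a helper case class introduced here, not part of the answers above):
import spark.implicits._

case class TimeRow(id: String, time: Int)

val result = df.as[TimeRow]
  .groupByKey(_.id)
  .flatMapGroups { (_, rows) =>
    // Sort the group by time, then walk it once, remembering where the
    // current interval started.
    var intervalStart = Int.MinValue
    rows.toSeq.sortBy(_.time).map { r =>
      if (intervalStart == Int.MinValue || r.time - intervalStart > 5)
        intervalStart = r.time // gap larger than 5: open a new interval
      (r.id, r.time, intervalStart)
    }
  }
  .toDF("id", "time", "min_time")

result.orderBy("id", "time").show(false)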

Spark: explode multiple columns into one

Is it possible to explode multiple columns into one new column in spark? I have a dataframe which looks like this:
userId varA varB
1 [0,2,5] [1,2,9]
desired output:
userId bothVars
1 0
1 2
1 5
1 1
1 2
1 9
What I have tried so far:
val explodedDf = df.withColumn("bothVars", explode($"varA")).drop("varA")
  .withColumn("bothVars", explode($"varB")).drop("varB")
which doesn't work. Any suggestions are much appreciated.
You could wrap the two arrays into one and flatten the nested array before exploding it, as shown below:
val df = Seq(
  (1, Seq(0, 2, 5), Seq(1, 2, 9)),
  (2, Seq(1, 3, 4), Seq(2, 3, 8))
).toDF("userId", "varA", "varB")

df.
  select($"userId", explode(flatten(array($"varA", $"varB"))).as("bothVars")).
  show
// +------+--------+
// |userId|bothVars|
// +------+--------+
// | 1| 0|
// | 1| 2|
// | 1| 5|
// | 1| 1|
// | 1| 2|
// | 1| 9|
// | 2| 1|
// | 2| 3|
// | 2| 4|
// | 2| 2|
// | 2| 3|
// | 2| 8|
// +------+--------+
Note that flatten is available on Spark 2.4+.
Use array_union and then the explode function.
scala> df.show(false)
+------+---------+---------+
|userId|varA |varB |
+------+---------+---------+
|1 |[0, 2, 5]|[1, 2, 9]|
|2 |[1, 3, 4]|[2, 3, 8]|
+------+---------+---------+
scala> df
.select($"userId",explode(array_union($"varA",$"varB")).as("bothVars"))
.show(false)
+------+--------+
|userId|bothVars|
+------+--------+
|1 |0 |
|1 |2 |
|1 |5 |
|1 |1 |
|1 |9 |
|2 |1 |
|2 |3 |
|2 |4 |
|2 |2 |
|2 |8 |
+------+--------+
array_union is available in Spark 2.4+
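Note that array_union also removes duplicate elements, which is why the duplicated 2 for userId 1 (and 3 for userId 2) does not appear in the output above. If you need to keep duplicates, concat also works on array columns in Spark 2.4+; a minimal sketch:
import org.apache.spark.sql.functions.{concat, explode}

// concat on array columns keeps duplicates, unlike array_union.
df.select($"userId", explode(concat($"varA", $"varB")).as("bothVars"))
  .show(false)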

Create New Column with range of integer by using existing Integer Column in Spark Scala Dataframe

Suppose I have a Spark Scala DataFrame object like:
+--------+
|col1 |
+--------+
|1 |
|3 |
+--------+
And I want a DataFrame like:
+----+---------+
|col1|col2     |
+----+---------+
|1   |[0,1]    |
|3   |[0,1,2,3]|
+----+---------+
Spark offers plenty of APIs/functions to play around with; most of the time the default functions come in handy, but for a specific task a user-defined function (UDF) can be written.
Reference https://jaceklaskowski.gitbooks.io/mastering-spark-sql/spark-sql-udfs.html
import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.functions.{col, udf}
import spark.implicits._

// UDF that maps an index n to the inclusive range [0, n]
def indexToRange: UserDefinedFunction = udf((index: Integer) => for (i <- 0 to index) yield i)

val df = spark.sparkContext.parallelize(Seq(1, 3)).toDF("index")
val rangeDF = df.withColumn("range", indexToRange(col("index")))
rangeDF.show(10)
You can achieve it with the approach below:
val input_df = spark.sparkContext.parallelize(List(1, 2, 3, 4, 5)).toDF("col1")
input_df.show(false)
Input:
+----+
|col1|
+----+
|1 |
|2 |
|3 |
|4 |
|5 |
+----+
val output_df = input_df.rdd.map(x => x(0).toString()).map(x => (x, Range(0, x.toInt + 1).mkString(","))).toDF("col1", "col2")
output_df.withColumn("col2", split($"col2", ",")).show(false)
Output:
+----+------------------+
|col1|col2 |
+----+------------------+
|1 |[0, 1] |
|2 |[0, 1, 2] |
|3 |[0, 1, 2, 3] |
|4 |[0, 1, 2, 3, 4] |
|5 |[0, 1, 2, 3, 4, 5]|
+----+------------------+
Hope this helps!
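On Spark 2.4+ you can also skip both the UDF and the RDD round-trip and use the built-in sequence function, which generates the inclusive integer range as an array column. A minimal sketch:
import org.apache.spark.sql.functions.{col, lit, sequence}

// sequence(start, stop) returns the array [start, start+1, ..., stop].
val output_df = input_df.withColumn("col2", sequence(lit(0), col("col1")))
output_df.show(false)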

Perform Arithmetic Operations on multiple columns in Spark dataframe

I have an input spark-dataframe named df as
+---------------+---+---+---+-----------+
|Main_CustomerID| P1| P2| P3|Total_Count|
+---------------+---+---+---+-----------+
| 725153| 1| 0| 2| 3|
| 873008| 0| 0| 3| 3|
| 625109| 1| 1| 0| 2|
+---------------+---+---+---+-----------+
Here, Total_Count is the sum of P1, P2, P3, where P1, P2, P3 are the product names. I need to find the frequency of each product by dividing the product values by Total_Count. I need to create a new spark-dataframe named frequencyTable as follows:
+---------------+------------------+---+------------------+-----------+
|Main_CustomerID| P1| P2| P3|Total_Count|
+---------------+------------------+---+------------------+-----------+
| 725153|0.3333333333333333|0.0|0.6666666666666666| 3|
| 873008| 0.0|0.0| 1.0| 3|
| 625109| 0.5|0.5| 0.0| 2|
+---------------+------------------+---+------------------+-----------+
I have done this using Scala as,
val df_columns = df.columns.toSeq
var frequencyTable = df
for (index <- df_columns) {
  if (index != "Main_CustomerID" && index != "Total_Count") {
    frequencyTable = frequencyTable.withColumn(index, df.col(index) / df.col("Total_Count"))
  }
}
But I would prefer to avoid this for loop because my df is large. What is an optimized solution?
If you have a dataframe as
val df = Seq(
  ("725153", 1, 0, 2, 3),
  ("873008", 0, 0, 3, 3),
  ("625109", 1, 1, 0, 2)
).toDF("Main_CustomerID", "P1", "P2", "P3", "Total_Count")
+---------------+---+---+---+-----------+
|Main_CustomerID|P1 |P2 |P3 |Total_Count|
+---------------+---+---+---+-----------+
|725153 |1 |0 |2 |3 |
|873008 |0 |0 |3 |3 |
|625109 |1 |1 |0 |2 |
+---------------+---+---+---+-----------+
You can simply use foldLeft on the columns except Main_CustomerID and Total_Count, i.e. on P1, P2 and P3:
val df_columns = df.columns.filterNot(Set("Main_CustomerID", "Total_Count")).toList
df_columns.foldLeft(df) { (tempdf, colName) =>
  tempdf.withColumn(colName, df.col(colName) / df.col("Total_Count"))
}.show(false)
which should give you
+---------------+------------------+---+------------------+-----------+
|Main_CustomerID|P1 |P2 |P3 |Total_Count|
+---------------+------------------+---+------------------+-----------+
|725153 |0.3333333333333333|0.0|0.6666666666666666|3 |
|873008 |0.0 |0.0|1.0 |3 |
|625109 |0.5 |0.5|0.0 |2 |
+---------------+------------------+---+------------------+-----------+
I hope the answer is helpful
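Another option, if you prefer not to rebuild the dataframe column by column at all, is to compute every output column in a single select; a minimal sketch assuming the same column names as above:
import org.apache.spark.sql.functions.col

// Keep the id and total columns as-is and divide every product column by Total_Count.
val productCols = df.columns.filterNot(Set("Main_CustomerID", "Total_Count"))
val selectCols =
  Seq(col("Main_CustomerID")) ++
    productCols.map(c => (col(c) / col("Total_Count")).as(c)) ++
    Seq(col("Total_Count"))

df.select(selectCols: _*).show(false)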