Suppose I have a DataFrame:
val testDf = sc.parallelize(Seq(
(1,2,"x", Array(1,2,3,4)))).toDF("one", "two", "X", "Array")
+---+---+---+------------+
|one|two| X| Array|
+---+---+---+------------+
| 1| 2| x|[1, 2, 3, 4]|
+---+---+---+------------+
I want to replicate the single elements, let's say 4 times, in order to achieve a single row DataFrame with each field as an array of four elements. The desired output would be:
+------------+------------+------------+------------+
| one| two| X| Array|
+------------+------------+------------+------------+
|[1, 1, 1, 1]|[2, 2, 2, 2]|[x, x, x, x]|[1, 2, 3, 4]|
+------------+------------+------------+------------+
You can use the built-in array function to replicate a column of your choice n times.
Below is proof-of-concept code.
import org.apache.spark.sql.functions._
val replicate = (n: Int, colName: String) => array((1 to n).map(s => col(colName)):_*)
val replicatedCol = Seq("one", "two", "X").map(s => replicate(4, s).as(s))
val cols = col("Array") +: replicatedCol
val testDf = sc.parallelize(Seq(
(1,2,"x", Array(1,2,3,4)))).toDF("one", "two", "X", "Array").select(cols:_*)
testDf.show(false)
+------------+------------+------------+------------+
|Array |one |two |X |
+------------+------------+------------+------------+
|[1, 2, 3, 4]|[1, 1, 1, 1]|[2, 2, 2, 2]|[x, x, x, x]|
+------------+------------+------------+------------+
In case you want a different n for each column:
val testDf = sc.parallelize(Seq(
  (1, 2, "x", Array(1, 2, 3, 4)))).toDF("one", "two", "X", "Array")
  .select(
    replicate(2, "one").as("one"),
    replicate(3, "X").as("X"),
    replicate(4, "two").as("two"),
    $"Array")
testDf.show(false)
+------+---------+------------+------------+
|one |X |two |Array |
+------+---------+------------+------------+
|[1, 1]|[x, x, x]|[2, 2, 2, 2]|[1, 2, 3, 4]|
+------+---------+------------+------------+
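For reference, on Spark 2.4+ the built-in array_repeat does the same replication without building the arrays by hand. A minimal sketch under that version assumption, against the original (non-replicated) DataFrame; baseDf is just an illustrative name:
import org.apache.spark.sql.functions.array_repeat

val baseDf = sc.parallelize(Seq((1, 2, "x", Array(1, 2, 3, 4)))).toDF("one", "two", "X", "Array")
baseDf.select(
  array_repeat($"one", 4).as("one"),   // repeat the column value 4 times into an array
  array_repeat($"two", 4).as("two"),
  array_repeat($"X", 4).as("X"),
  $"Array"
).show(false)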
Well, here is my solution:
First declare the columns you want to replicate:
val columnsToReplicate = List("one", "two", "X")
Then define the replication factor and the udf to perform it:
val replicationFactor = 4
val replicate = (s:String) => {
for {
i <- 1 to replicationFactor
} yield s
}
val replicateudf = functions.udf(replicate)
Then just perform a foldLeft over the DataFrame's columns, replacing a column only when its name belongs to your list of desired column names:
testDf.columns.foldLeft(testDf)((acc, colname) =>
  if (columnsToReplicate.contains(colname)) acc.withColumn(colname, replicateudf(acc.col(colname)))
  else acc)
Output:
+------------+------------+------------+------------+
| one| two| X| Array|
+------------+------------+------------+------------+
|[1, 1, 1, 1]|[2, 2, 2, 2]|[x, x, x, x]|[1, 2, 3, 4]|
+------------+------------+------------+------------+
Note: you need this import:
import org.apache.spark.sql.functions
EDIT:
A per-column replication factor, as suggested in the comments:
val mapColumnsToReplicate = Map("one"->4, "two"->5, "X"->6)
val replicateudf2 = functions.udf ((s: String, replicationFactor: Int) =>
for {
i <- 1 to replicationFactor
} yield s
)
testDf.columns.foldLeft(testDf)((acc, colname) =>
  if (mapColumnsToReplicate.contains(colname)) acc.withColumn(colname, replicateudf2(acc.col(colname), functions.lit(mapColumnsToReplicate(colname))))
  else acc)
Output with those values above:
+------------+---------------+------------------+------------+
| one| two| X| Array|
+------------+---------------+------------------+------------+
|[1, 1, 1, 1]|[2, 2, 2, 2, 2]|[x, x, x, x, x, x]|[1, 2, 3, 4]|
+------------+---------------+------------------+------------+
You can use explode and groupBy/collect_list:
val testDf = sc.parallelize(
Seq((1, 2, "x", Array(1, 2, 3, 4)),
(3, 4, "y", Array(1, 2, 3)),
(5,6, "z", Array(1)))
).toDF("one", "two", "X", "Array")
testDf
.withColumn("id",monotonically_increasing_id())
.withColumn("tmp", explode($"Array"))
.groupBy($"id")
.agg(
collect_list($"one").as("cl_one"),
collect_list($"two").as("cl_two"),
collect_list($"X").as("cl_X"),
first($"Array").as("Array")
)
.select(
$"cl_one".as("one"),
$"cl_two".as("two"),
$"cl_X".as("X"),
$"Array"
)
.show()
+------------+------------+------------+------------+
| one| two| X| Array|
+------------+------------+------------+------------+
| [5]| [6]| [z]| [1]|
|[1, 1, 1, 1]|[2, 2, 2, 2]|[x, x, x, x]|[1, 2, 3, 4]|
| [3, 3, 3]| [4, 4, 4]| [y, y, y]| [1, 2, 3]|
+------------+------------+------------+------------+
This solution has the advantage that it does not rely on constant array sizes.
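On Spark 2.4+ there is also a shorter route that adapts to each row's array size: the transform higher-order function builds, per row, an array of the same length as Array whose elements are all the outer column's value. A hedged sketch against the same testDf (the lambda variable e is deliberately unused):
import org.apache.spark.sql.functions.{col, expr}

testDf.select(
  expr("transform(Array, e -> one)").as("one"),   // one value per element of Array
  expr("transform(Array, e -> two)").as("two"),
  expr("transform(Array, e -> X)").as("X"),
  col("Array")
).show(false)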
I have a Spark DataFrame with a column containing several arrays of integers with varying lengths. I need to create new columns holding the quantiles for each of these.
This is the input DataFrame :
+---------+------------------------+
|Comm |List_Nb_total_operations|
+---------+------------------------+
| comm1| [1, 1, 2, 3, 4]|
| comm4| [2, 2]|
| comm3| [2, 2]|
| comm0| [1, 1, 1, 2, 2, 2, 3,3]|
| comm2| [1, 1, 1, 2, 3]|
+---------+------------------------+
This is the desired result :
+---------+------------------------+----+----+
|Comm |List_Nb_total_operations|QT25|QT75|
+---------+------------------------+----+----+
| comm1| [1, 1, 2, 3, 4]| 1| 3|
| comm4| [2, 2]| 2| 2|
| comm3| [2, 2]| 2| 2|
| comm0| [1, 1, 1, 2, 2, 2, 3,3]| 1| 3|
| comm2| [1, 1, 1, 2, 3]| 1| 2|
+---------+------------------------+----+----+
The function you want to use is percentile_approx (since Spark 3.1):
val df = Seq(
("comm1", Seq(1,1,2,3,4)),
("comm4", Seq(2,2)),
("comm3", Seq(2,2)),
("comm0", Seq(1,1,1,2,2,2,3,3)),
("comm2", Seq(1,1,1,2,3))
).toDF("Comm", "ops")
val dfQ = df.select(
col("Comm"),
explode(col("ops")) as "ops")
.groupBy("Comm")
.agg(
percentile_approx($"ops", lit(0.25), lit(100)) as "q25",
percentile_approx($"ops", lit(0.75), lit(100)) as "q75"
)
val dfWithQ = df.join(dfQ, Seq("Comm"))
The documentation has more information regarding tuning the parameters for accuracy.
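If you need tighter results, the third argument trades memory for precision (the relative error is roughly 1.0 / accuracy, and 10000 is the default). A minimal sketch, assuming Spark 3.1+ and the same df as above; dfQPrecise is just an illustrative name:
import org.apache.spark.sql.functions.{col, explode, lit, percentile_approx}

val dfQPrecise = df
  .select(col("Comm"), explode(col("ops")) as "ops")
  .groupBy("Comm")
  .agg(
    percentile_approx($"ops", lit(0.25), lit(10000)) as "q25",  // higher accuracy, more memory
    percentile_approx($"ops", lit(0.75), lit(10000)) as "q75"
  )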
Thank you for your help. I've found another solution that works very well in my case:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.Column
import org.apache.spark.sql.catalyst.expressions.aggregate.ApproximatePercentile
def percentile_approxx(col: Column, percentage: Column, accuracy: Column): Column = {
val expr = new ApproximatePercentile(
col.expr, percentage.expr, accuracy.expr
).toAggregateExpression
new Column(expr)
}
val perc_df = df.groupBy("Comm").agg(percentile_approxx(col("ops"), lit(0.75), lit(100)))
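For reference, on Spark 2.x the same expression is also reachable through SQL with expr after exploding the array, without building the aggregate by hand. A hedged sketch assuming the same df and column names as above; perc_df2 is just an illustrative name:
import org.apache.spark.sql.functions.{col, explode, expr}

val perc_df2 = df
  .select(col("Comm"), explode(col("ops")) as "op")
  .groupBy("Comm")
  .agg(
    expr("percentile_approx(op, 0.25, 100)").as("q25"),
    expr("percentile_approx(op, 0.75, 100)").as("q75")
  )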
I need to find the count of occurrences of specific elements present in an array column. We could use the array_contains function, but I am looking for another solution that works below Spark 2.2.
Input:
+----+------------------+
|col1| array_col2|
+----+------------------+
| x| [1, 2, 3, 7, 7]|
| z|[3, 2, 8, 9, 4, 9]|
| a| [4, 5, 2, 8]|
+----+------------------+
result1 -> count of occurrences of 1, 2 in the array column array_col2
result2 -> count of occurrences of 3, 7, 9 in the array column array_col2
Expected Output:
+----+------------------+----------+----------+
|col1| array_col2| result1| result2|
+----+------------------+----------+----------+
| x| [1, 2, 3, 7, 7]| 2| 3|
| z|[3, 2, 8, 9, 4, 9]| 1| 3|
| a| [4, 5, 2, 8]| 1| 0|
+----+------------------+----------+----------+
You can use a UDF:
val count_occ = udf((s: Seq[Int], f: Seq[Int]) => s.filter(f.contains(_)).size)
val df1 = df.withColumn(
"result1",
count_occ($"array_col2", array(lit(1), lit(2)))
).withColumn(
"result2",
count_occ($"array_col2", array(lit(3), lit(7), lit(9)))
)
df1.show
//+----+------------------+-------+-------+
//|col1| array_col2|result1|result2|
//+----+------------------+-------+-------+
//| x| [1, 2, 3, 7, 7]| 2| 3|
//| z|[3, 2, 8, 9, 4, 9]| 1| 3|
//| a| [4, 5, 2, 8]| 1| 0|
//+----+------------------+-------+-------+
You can also explode the array, then groupBy and count:
val df1 = df.withColumn(
"col2",
explode($"array_col2")
).groupBy("col1", "array_col2").agg(
count(when($"col2".isin(1, 2), 1)).as("result1"),
count(when($"col2".isin(3, 7, 9), 1)).as("result2")
)
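For completeness, if you can relax the version constraint, Spark 2.4+ higher-order functions give the same counts without a UDF. A minimal sketch under that assumption, against the same df; counted is just an illustrative name:
import org.apache.spark.sql.functions.expr

val counted = df.withColumn(
  "result1",
  expr("size(filter(array_col2, x -> x in (1, 2)))")     // keep matching elements, then count them
).withColumn(
  "result2",
  expr("size(filter(array_col2, x -> x in (3, 7, 9)))")
)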
I have a dataframe like below
c1 Value
A Array[47,97,33,94,6]
A Array[59,98,24,83,3]
A Array[77,63,93,86,62]
B Array[86,71,72,23,27]
B Array[74,69,72,93,7]
B Array[58,99,90,93,41]
C Array[40,13,85,75,90]
C Array[39,13,33,29,14]
C Array[99,88,57,69,49]
I need an output as below.
c1 Value
A Array[183,258,150,263,71]
B Array[218,239,234,209,75]
C Array[178,114,175,173,153]
Which is nothing but grouping by column c1 and summing the values in column Value element-wise (index by index).
Please help, I couldn't find any way of doing this on Google.
It is not very complicated. As you mention, you can simply group by "c1" and aggregate the values of the array index by index.
Let's first generate some data:
val df = spark.range(6)
.select('id % 3 as "c1",
array((1 to 5).map(_ => floor(rand * 10)) : _*) as "Value")
df.show()
+---+---------------+
| c1| Value|
+---+---------------+
| 0|[7, 4, 7, 4, 0]|
| 1|[3, 3, 2, 8, 5]|
| 2|[2, 1, 0, 4, 4]|
| 0|[0, 4, 2, 1, 8]|
| 1|[1, 5, 7, 4, 3]|
| 2|[2, 5, 0, 2, 2]|
+---+---------------+
Then we need to iterate over the values of the array so as to aggregate them. It is very similar to the way we created them:
val n = 5 // if you know the size of the array
val n = df.select(size('Value)).first.getAs[Int](0) // If you do not
df
.groupBy("c1")
.agg(array((0 until n).map(i => sum(col("Value").getItem(i))) :_* ) as "Value")
.show()
+---+------------------+
| c1| Value|
+---+------------------+
| 0|[11, 18, 15, 8, 9]|
| 1| [2, 10, 5, 7, 4]|
| 2|[7, 14, 15, 10, 4]|
+---+------------------+
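On Spark 2.4+ you can also express the element-wise sum with higher-order functions instead of indexing by position. A hedged sketch, assuming the same df as above (long or int array elements, with all arrays in a group having the same length); summed is just an illustrative name:
import org.apache.spark.sql.functions.{collect_list, expr}

val summed = df
  .groupBy("c1")
  .agg(collect_list($"Value").as("vals"))
  .select(
    $"c1",
    // Start from an array of zeros the size of the first array, then add each array element-wise.
    expr("aggregate(vals, array_repeat(0L, size(vals[0])), (acc, v) -> zip_with(acc, v, (a, b) -> a + b))").as("Value")
  )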
I have an RDD List[(String, List[Int])] like List(("A",List(1,2,3,4)),("B",List(5,6,7)))
How to transform them to List(("A",1),("A",2),("A",3),("A",4),("B",5),("B",6),("B",7))
Then the action would be reducing by key and generating a result like List(("A",2.5),("B",6)), where 2.5 is the average for "A" and 6 is the average for "B".
I have tried using map(e => List(e._1, e._2)) but it's not giving the desired result.
Help me with these set of transformation and actions.
Thanks in advance
There are several ways to get what you want. You could use a for comprehension as well, but the very first one that came to my mind is this implementation:
val l = List(("A", List(1, 2, 3)), ("B", List(1, 2, 3)))
val flattenList = l.flatMap {
case (elem, _elemList) =>
_elemList.map((elem, _))
}
Output:
List((A,1), (A,2), (A,3), (B,1), (B,2), (B,3))
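To also cover the second part of the question (the per-key average), one hedged continuation with the RDD API could look like this, assuming the flattened pairs are parallelized into an RDD; pairs and avgByKey are illustrative names:
val pairs = sc.parallelize(flattenList)
val avgByKey = pairs
  .aggregateByKey((0, 0))(
    (acc, v) => (acc._1 + v, acc._2 + 1),   // running (sum, count) within a partition
    (a, b) => (a._1 + b._1, a._2 + b._2)    // merge partial (sum, count) pairs
  )
  .mapValues { case (sum, count) => sum.toDouble / count }
// avgByKey.collect() yields the average per key, e.g. (A,2.0) and (B,2.0) for the sample list above.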
If what you want in the end is the average of each list, then it's not necessary to break them up into individual elements with a flatMap. Doing so would unnecessarily shuffle a lot of data with a large data set.
Since they are already aggregated by key, just transform them with something like this:
val l = spark.sparkContext.parallelize(Seq(
("A", List(1, 2, 3, 4)),
("B", List(5, 6, 7))
))
val avg = l.map(r => {
(r._1, (r._2.sum.toDouble / r._2.length.toDouble))
})
avg.collect.foreach(println)
Bear in mind that this will produce NaN (0.0 / 0.0) if any of your lists are empty. If you have some zero-length lists, you'll have to put a check condition in the map; see the sketch after the output below.
The above code gives you:
(A,2.5)
(B,6.0)
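A hedged sketch of the check mentioned above, assuming an empty list should simply yield 0.0 (pick whatever default fits your data); safeAvg is just an illustrative name:
val safeAvg = l.map { case (key, values) =>
  // Guard against empty lists, which would otherwise produce NaN (0.0 / 0.0).
  val avg = if (values.isEmpty) 0.0 else values.sum.toDouble / values.length
  (key, avg)
}
safeAvg.collect.foreach(println)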
You can try explode()
scala> val df = List(("A",List(1,2,3,4)),("B",List(5,6,7))).toDF("x","y")
df: org.apache.spark.sql.DataFrame = [x: string, y: array<int>]
scala> df.withColumn("z",explode('y)).show(false)
+---+------------+---+
|x |y |z |
+---+------------+---+
|A |[1, 2, 3, 4]|1 |
|A |[1, 2, 3, 4]|2 |
|A |[1, 2, 3, 4]|3 |
|A |[1, 2, 3, 4]|4 |
|B |[5, 6, 7] |5 |
|B |[5, 6, 7] |6 |
|B |[5, 6, 7] |7 |
+---+------------+---+
scala> val df2 = df.withColumn("z",explode('y))
df2: org.apache.spark.sql.DataFrame = [x: string, y: array<int> ... 1 more field]
scala> df2.groupBy("x").agg(sum('z)/count('z) ).show(false)
+---+-------------------+
|x |(sum(z) / count(z))|
+---+-------------------+
|B |6.0 |
|A |2.5 |
+---+-------------------+
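For what it's worth, the built-in avg aggregate computes the same ratio directly; a small hedged variant against df2 above:
import org.apache.spark.sql.functions.avg

df2.groupBy("x").agg(avg($"z").as("avg")).show(false)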
I have a Dataframe like this:
+---------------------------------------------------------------------+
|ARRAY |
+---------------------------------------------------------------------+
|[WrappedArray(1, 2, 3), WrappedArray(4, 5, 6), WrappedArray(7, 8, 9)]|
+---------------------------------------------------------------------+
I use this code to create it:
case class MySchema(arr: Array[Array[Int]])
val df = sc.parallelize(Seq(
Array(Array(1,2,3),
Array(4,5,6),
Array(7,8,9))))
.map(x => MySchema(x))
.toDF("ARRAY")
I would like to get a result like this:
+-----------+
|ARRAY      |
+-----------+
|[1, 2, 3] |
|[4, 5, 6] |
|[7, 8, 9] |
+-----------+
Do you have any idea?
I already tried calling a UDF to do a flatMap(x => x) on my array column, but I got an incorrect result:
+---------------------------+
|ARRAY |
+---------------------------+
|[1, 2, 3, 4, 5, 6, 7, 8, 9]|
+---------------------------+
Thank you for your help
You can explode:
import org.apache.spark.sql.functions.{col, explode}
df.select(explode(col("array")))
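Note that explode names its output column col by default; if you want to keep the original name, alias it. A small hedged addition against the same df:
df.select(explode(col("ARRAY")).as("ARRAY")).show()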