So I have a Spark DataFrame called ngram_df that looks something like this:
| Name   | nGrams               |
|--------|----------------------|
| Alice  | [ALI, LIC, ICE]      |
| Alicia | [ALI, LIC, ICI, CIA] |
And I want to produce an output in a dictionary form such as:
ALI: 2, LIC: 2, ICE: 1, ICI: 1, CIA: 1
I've been trying to turn the nGrams column into an RDD so that I can use the reduceByKey function:
rdd = ngram_df.map(lambda row: row['nGrams'])
test = rdd.reduceByKey(add).collect()
However, I get the error:
ValueError: too many values to unpack
Even using flatMap doesn't help, as I get the error:
ValueError: need more than 1 value to unpack
This is possible with a combination of the flatMap and reduceByKey methods. reduceByKey expects an RDD of (key, value) pairs, so each list of n-grams first has to be flattened into (ngram, 1) pairs:
rdd = spark.sparkContext.parallelize([('Alice', ['ALI', 'LIC', 'ICE']), ('Alicia', ['ALI', 'LIC', 'ICI', 'CIA'])])
result = rdd.flatMap(lambda x: [(y, 1) for y in x[1]] ).reduceByKey(lambda x,y: x+y)
result.collect()
[('ICI', 1), ('CIA', 1), ('ALI', 2), ('ICE', 1), ('LIC', 2)]
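To go straight from the original DataFrame to the dictionary the question asks for, a minimal sketch (assuming the column names Name and nGrams shown above) would be:
from operator import add

# DataFrame -> RDD of Rows, then one (ngram, 1) pair per list element
counts = (ngram_df.rdd
          .flatMap(lambda row: [(ng, 1) for ng in row['nGrams']])
          .reduceByKey(add)       # sum the 1s per n-gram
          .collectAsMap())        # returns a plain Python dict
# counts -> {'ALI': 2, 'LIC': 2, 'ICE': 1, 'ICI': 1, 'CIA': 1}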
I have a DataFrame that can have multiple columns of Array type, like "Array1", "Array2", etc. These array columns all have the same number of elements. I need to compute a new Array-typed column that is the element-wise sum of the arrays. How can I do it?
Spark version = 2.3
For example:
Input:
| Column1 | ... | ArrayColumn2 | ArrayColumn3 |
|---------|-----|--------------|--------------|
| T1      | ... | [1, 2, 3]    | [2, 5, 7]    |
Output:
| Column1 | ... | AggregatedColumn |
|---------|-----|------------------|
| T1      | ... | [3, 7, 10]       |
The number of array columns is not fixed, so I need a generalized solution. I would have a list of the columns that need to be aggregated.
Thanks!
Consider using inline and the higher-order function aggregate (available in Spark 2.4+) to compute the element-wise sums from the Array-typed columns, followed by a groupBy/agg to collect the element-wise sums back into arrays:
val df = Seq(
(101, Seq(1, 2), Seq(3, 4), Seq(5, 6)),
(202, Seq(7, 8), Seq(9, 10), Seq(11, 12))
).toDF("id", "arr1", "arr2", "arr3")
val arrCols = df.columns.filter(_.startsWith("arr")).map(col)
For Spark 3.0+
df.
withColumn("arr_structs", arrays_zip(arrCols: _*)).
select($"id", expr("inline(arr_structs)")).
select($"id", aggregate(array(arrCols: _*), lit(0), (acc, x) => acc + x).as("pos_elem_sum")).
groupBy("id").agg(collect_list($"pos_elem_sum").as("arr_elem_sum")).
show
// +---+------------+
// | id|arr_elem_sum|
// +---+------------+
// |101| [9, 12]|
// |202| [27, 30]|
// +---+------------+
For Spark 2.4+
df.
withColumn("arr_structs", arrays_zip(arrCols: _*)).
select($"id", expr("inline(arr_structs)")).
select($"id", array(arrCols: _*).as("arr_pos_elems")).
select($"id", expr("aggregate(arr_pos_elems, 0, (acc, x) -> acc + x)").as("pos_elem_sum")).
groupBy("id").agg(collect_list($"pos_elem_sum").as("arr_elem_sum")).
show
For Spark 2.3 or below
val sumArrElems = udf{ (arr: Seq[Int]) => arr.sum }
df.
withColumn("arr_structs", arrays_zip(arrCols: _*)).
select($"id", expr("inline(arr_structs)")).
select($"id", sumArrElems(array(arrCols: _*)).as("pos_elem_sum")).
groupBy("id").agg(collect_list($"pos_elem_sum").as("arr_elem_sum")).
show
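Note that arrays_zip itself was only added in Spark 2.4, so on an actual 2.3 cluster the snippet above would not compile. A minimal sketch for 2.3 that avoids arrays_zip entirely, assuming the same df and arrCols as above, is a single UDF over the array of arrays:
import org.apache.spark.sql.functions.{array, udf}

// transpose the per-row list of arrays and sum each position
val sumArraysElementwise = udf { (arrs: Seq[Seq[Int]]) => arrs.transpose.map(_.sum) }

df.withColumn("arr_elem_sum", sumArraysElementwise(array(arrCols: _*))).show
// id 101 -> [9, 12], id 202 -> [27, 30]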
A SQL expression like array(ArrayColumn2[0]+ArrayColumn3[0], ArrayColumn2[1]+...) can be used to calculate the expected result.
val df = ...
//get all array columns
val arrayCols = df.schema.fields.filter(_.dataType.isInstanceOf[ArrayType]).map(_.name)
//get the size of the first array of the first row
val firstArray = arrayCols(0)
val arraySize = df.selectExpr(s"size($firstArray)").first().getAs[Int](0)
//generate the sql expression for the sums
val sums = (for (i <- 0 to arraySize - 1)
  yield arrayCols.map(c => s"$c[$i]").mkString("+")).mkString(",")
//sums = ArrayColumn2[0]+ArrayColumn3[0],ArrayColumn2[1]+ArrayColumn3[1],ArrayColumn2[2]+ArrayColumn3[2]
//create a new column using sums
df.withColumn("AggregatedColumn", expr(s"array($sums)")).show()
Output:
+-------+------------+------------+----------------+
|Column1|ArrayColumn2|ArrayColumn3|AggregatedColumn|
+-------+------------+------------+----------------+
| T1| [1, 2, 3]| [2, 5, 7]| [3, 7, 10]|
+-------+------------+------------+----------------+
Using this single (long) SQL expression will avoid shuffling data over the network and thus improve performance.
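Since the question mentions already having a list of the columns to aggregate, a variant of the same idea that starts from an explicit column list (the list below is hypothetical) rather than discovering the array columns by type might look like this:
import org.apache.spark.sql.functions.expr

val colsToAggregate = Seq("ArrayColumn2", "ArrayColumn3")   // hypothetical list of column names

val arraySize = df.selectExpr(s"size(${colsToAggregate.head})").first().getInt(0)
val sums = (0 until arraySize)
  .map(i => colsToAggregate.map(c => s"$c[$i]").mkString("+"))
  .mkString(",")

df.withColumn("AggregatedColumn", expr(s"array($sums)")).show()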
I have a DataFrame with several columns; the i-th column contains strings. I want to apply the string sliding(n) function to each string in that column. Is there a way to do so without using user-defined functions?
Example:
My dataframe is
var df = Seq((0, "hello"), (1, "hola")).toDF("id", "text")
I want to apply the sliding(3) function to each element of column "text" to obtain a DataFrame corresponding to:
Seq(
  (0, ("hel", "ell", "llo")),
  (1, ("hol", "ola"))
)
How can I do this?
For Spark version >= 2.4.0, this can be done using the built-in functions array_repeat, transform and substring.
import org.apache.spark.sql.functions.{array_repeat, expr, length}
//Repeat the string (length - n + 1) times, here with n = 3
val repeated_df = df.withColumn("tmp", array_repeat($"text", length($"text") - 3 + 1))
//Get the slices with the transform higher-order function
val res = repeated_df.withColumn("str_slices",
  expr("transform(tmp, (x, i) -> substring(x from i+1 for 3))")
)
//res.show()
+---+-----+---------------------+---------------+
|id |text |tmp |str_slices |
+---+-----+---------------------+---------------+
|0 |hello|[hello, hello, hello]|[hel, ell, llo]|
|1 |hola |[hola, hola] |[hol, ola] |
+---+-----+---------------------+---------------+
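An alternative sketch for Spark 2.4+ that avoids materialising the repeated copies of the string: build the start positions with sequence and slice directly (assuming the same df, and that every string has at least n characters, as in the original):
import org.apache.spark.sql.functions.expr

val n = 3
val res2 = df.withColumn("str_slices",
  expr(s"transform(sequence(1, length(text) - $n + 1), i -> substring(text, i, $n))")
)
// res2.show() gives [hel, ell, llo] for "hello" and [hol, ola] for "hola"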
I have the following DataFrame in Spark (I'm using Scala):
[[1003014, 0.95266926], [15, 0.9484202], [754, 0.94236785], [1029530, 0.880922], [3066, 0.7085166], [1066440, 0.69400793], [1045811, 0.663178], [1020059, 0.6274495], [1233982, 0.6112905], [1007801, 0.60937023], [1239278, 0.60044676], [1000088, 0.5789191], [1056268, 0.5747936], [1307569, 0.5676605], [10334513, 0.56592846], [930, 0.5446228], [1170206, 0.52525467], [300, 0.52473146], [2105178, 0.4972785], [1088572, 0.4815367]]
I want to get a DataFrame with only the first Int of each sub-array, something like:
[1003014, 15, 754, 1029530, 3066, 1066440, ...]
That is, keeping only x[0] of each sub-array x of the array listed above.
I'm new to Scala and couldn't find the right anonymous map function.
Thanks in advance for any help.
For Spark >= 2.4, you can use the higher-order function transform with a lambda function to extract the first element of each value array.
scala> df.show(false)
+----------------------------------------------------------------------------------------+
|arrays |
+----------------------------------------------------------------------------------------+
|[[1003014.0, 0.95266926], [15.0, 0.9484202], [754.0, 0.94236785], [1029530.0, 0.880922]]|
+----------------------------------------------------------------------------------------+
scala> df.select(expr("transform(arrays, x -> x[0])").alias("first_array_elements")).show(false)
+-----------------------------------+
|first_array_elements |
+-----------------------------------+
|[1003014.0, 15.0, 754.0, 1029530.0]|
+-----------------------------------+
Spark < 2.4
Explode the initial array and then aggregate with collect_list to collect the first element of each sub array:
df.withColumn("exploded_array", explode(col("arrays")))
.agg(collect_list(col("exploded_array")(0)))
.show(false)
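Note that with agg alone this collects the first elements across the whole DataFrame; if each row should keep its own list, a sketch with a grouping key (assuming a hypothetical id column) would be:
df.withColumn("exploded_array", explode(col("arrays")))
  .groupBy("id")  // hypothetical per-row key
  .agg(collect_list(col("exploded_array")(0)).as("first_array_elements"))
  .show(false)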
EDIT:
In case the array contains structs rather than sub-arrays, just change the access to use dots for the struct fields:
val transform_expr = "transform(arrays, x -> x.canonical_id)"
df.select(expr(transform_expr).alias("first_array_elements")).show(false)
Using Spark 2.4:
val df = Seq(
Seq(Seq(1.0,2.0),Seq(3.0,4.0))
).toDF("arrs")
df.show()
+--------------------+
| arrs|
+--------------------+
|[[1.0, 2.0], [3.0...|
+--------------------+
df
.select(expr("transform(arrs, x -> x[0])").as("arr_first"))
.show()
+----------+
| arr_first|
+----------+
|[1.0, 3.0]|
+----------+
How can I convert a Spark DataFrame column of arrays into tuples of 2 in Scala?
I tried to explode the array and create a new column with the help of the lead function, so that I could use the two columns to create the tuples.
In order to use the lead function I need a column to sort by, and I don't have one.
Please suggest the best way to solve this.
Note: I need to retain the same order in the array.
For example, the input looks something like this:
id1 | [text1, text2, text3, text4]
id2 | [txt, txt2, txt4, txt5, txt6, txt7, txt8, txt9]
Expected output (tuples of length 2):
id1 | [(text1, text2), (text2, text3), (text3,text4)]
id2 | [(txt, txt2), (txt2, txt4), (txt4, txt5), (txt5, txt6), (txt6, txt7), (txt7, txt8), (txt8, txt9)]
You can create a UDF that builds the list of tuples using the sliding collection method:
val df = Seq(
("id1", List("text1", "text2", "text3", "text4")),
("id2", List("txt", "txt2", "txt4", "txt5", "txt6", "txt7", "txt8", "txt9"))
).toDF("id", "text")
val sliding = udf((value: Seq[String]) => {
  // collect (rather than map) skips arrays with fewer than two elements instead of failing
  value.toList.sliding(2).collect { case List(a, b) => (a, b) }.toList
})
val result = df.withColumn("text", sliding($"text"))
Output:
+---+-------------------------------------------------------------------------------------------------+
|id |text |
+---+-------------------------------------------------------------------------------------------------+
|id1|[[text1, text2], [text2, text3], [text3, text4]] |
|id2|[[txt, txt2], [txt2, txt4], [txt4, txt5], [txt5, txt6], [txt6, txt7], [txt7, txt8], [txt8, txt9]]|
+---+-------------------------------------------------------------------------------------------------+
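If you are on Spark 2.4 or later and prefer to avoid a UDF, a minimal sketch with the built-in slice and arrays_zip functions (assuming the same df) would be:
import org.apache.spark.sql.functions.expr

val result2 = df.withColumn("text",
  expr("arrays_zip(slice(text, 1, size(text) - 1), slice(text, 2, size(text) - 1))")
)
// produces the same array of consecutive pairs (as structs), preserving the original order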
Hope this helps!
I created a DataFrame in the Spark Scala shell for SFPD incidents. I queried the data for the count per Category, and the result is a DataFrame. I want to plot this data as a graph using Wisp. Here is my DataFrame:
+--------------+--------+
| Category|catcount|
+--------------+--------+
| LARCENY/THEFT| 362266|
|OTHER OFFENSES| 257197|
| NON-CRIMINAL| 189857|
| ASSAULT| 157529|
| VEHICLE THEFT| 109733|
| DRUG/NARCOTIC| 108712|
| VANDALISM| 91782|
| WARRANTS| 85837|
| BURGLARY| 75398|
|SUSPICIOUS OCC| 64452|
+--------------+--------+
I want to convert this DataFrame into an array/list of key-value pairs, so I want a result like this, with (String, Int) type:
(LARCENY/THEFT,362266)
(OTHER OFFENSES,257197)
(NON-CRIMINAL,189857)
(ASSAULT,157529)
(VEHICLE THEFT,109733)
(DRUG/NARCOTIC,108712)
(VANDALISM,91782)
(WARRANTS,85837)
(BURGLARY,75398)
(SUSPICIOUS OCC,64452)
I tried converting this DataFrame (t) into an RDD with val rddt = t.rdd and then used flatMapValues:
rddt.flatMapValues(x=>x).collect()
but still couldn't get the required result.
Or is there a way to feed the DataFrame output directly into Wisp?
In PySpark it would be as below; Scala will be quite similar.
Creating test data
rdd = sc.parallelize([(0,1), (0,1), (0,2), (1,2), (1,1), (1,20), (3,18), (3,18), (3,18)])
df = sqlContext.createDataFrame(rdd, ["id", "score"])
Map the test data, reformatting from an RDD of Rows to an RDD of tuples, then use collect to extract all the tuples as a list.
df.rdd.map(lambda x: (x[0], x[1])).collect()
[(0, 1), (0, 1), (0, 2), (1, 2), (1, 1), (1, 20), (3, 18), (3, 18), (3, 18)]
Here's the Scala Spark Row documentation, which should help you convert this to Scala Spark code.
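A rough Scala equivalent (assuming the DataFrame is t with a string Category column and a catcount column that is a long, as produced by a count aggregation) could be:
val pairs: Array[(String, Long)] =
  t.rdd
    .map(row => (row.getAs[String]("Category"), row.getAs[Long]("catcount")))
    .collect()
// pairs: Array((LARCENY/THEFT,362266), (OTHER OFFENSES,257197), ...)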