How to find the max String length of a column in Spark using dataframe? - scala

I have a dataframe. I need to calculate the max length of the String values in a column and print both the value and its length.
I have written the code below, but its output is only the max length, not the corresponding value.
The question How to get max length of string column from dataframe using scala? helped me arrive at the query below.
df.agg(max(length(col("city")))).show()

Use the row_number() window function over length('city) in descending order.
Then add a length('city) column to the dataframe and keep only the row where row_number is 1.
Ex:
val df=Seq(("A",1,"US"),("AB",1,"US"),("ABC",1,"US"))
.toDF("city","num","country")
val win=Window.orderBy(length('city).desc)
df.withColumn("str_len",length('city))
.withColumn("rn", row_number().over(win))
.filter('rn===1)
.show(false)
+----+---+-------+-------+---+
|city|num|country|str_len|rn |
+----+---+-------+-------+---+
|ABC |1  |US     |3      |1  |
+----+---+-------+-------+---+
(or)
In spark-sql:
df.createOrReplaceTempView("lpl")
spark.sql("select * from (select *,length(city)str_len,row_number() over (order by length(city) desc)rn from lpl)q where q.rn=1")
.show(false)
+----+---+-------+-------+---+
|city|num|country|str_len| rn|
+----+---+-------+-------+---+
| ABC|  1|     US|      3|  1|
+----+---+-------+-------+---+
Update:
Find min,max values:
val win_desc = Window.orderBy(length('city).desc)
val win_asc  = Window.orderBy(length('city).asc)

df.withColumn("str_len", length('city))
  .withColumn("rn", row_number().over(win_desc))
  .withColumn("rn1", row_number().over(win_asc))
  .filter('rn === 1 || 'rn1 === 1)
  .show(false)
Result:
+----+---+-------+-------+---+---+
|city|num|country|str_len|rn |rn1|
+----+---+-------+-------+---+---+
|A   |1  |US     |1      |3  |1  |  // min length value
|ABC |1  |US     |3      |1  |3  |  // max length value
+----+---+-------+-------+---+---+

If multiple rows share the same maximum length, the window-function solution won't return all of them, since it keeps only the first row after ordering.
Another way is to create a new column with the length of the string, find its maximum, and filter the dataframe on that maximum value.
import org.apache.spark.sql._
import org.apache.spark.sql.functions._
import spark.implicits._
val df=Seq(("A",1,"US"),("AB",1,"US"),("ABC",1,"US"), ("DEF", 2, "US"))
.toDF("city","num","country")
val dfWithLength = df.withColumn("city_length", length($"city")).cache()
dfWithLength.show()
+----+---+-------+-----------+
|city|num|country|city_length|
+----+---+-------+-----------+
|   A|  1|     US|          1|
|  AB|  1|     US|          2|
| ABC|  1|     US|          3|
| DEF|  2|     US|          3|
+----+---+-------+-----------+
val Row(maxValue: Int) = dfWithLength.agg(max("city_length")).head()
dfWithLength.filter($"city_length" === maxValue).show()
+----+---+-------+-----------+
|city|num|country|city_length|
+----+---+-------+-----------+
| ABC|  1|     US|          3|
| DEF|  2|     US|          3|
+----+---+-------+-----------+
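If you only need the single longest value and its length (not every tying row), a windowless alternative is to aggregate over a struct whose first field is the length, since max on a struct compares fields left to right. A minimal sketch, assuming the same df as above and Spark 2.x+:
import org.apache.spark.sql.functions._

// max over (str_len, city) structs picks the greatest length first, breaking ties by the city value
df.agg(max(struct(length($"city").as("str_len"), $"city".as("city"))).as("longest"))
  .select($"longest.city", $"longest.str_len")
  .show(false)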

Find the maximum string length of a string column with PySpark
from pyspark.sql.functions import length, col, max
df2 = df.withColumn("len_Description",length(col("Description"))).groupBy().max("len_Description")

Related

Spark GroupBy and Aggregate Strings to Produce a Map of Counts of the Strings Based on a Condition

I have a dataframe with multiple columns, two of which are id and label, as shown below.
+---+------+
| id| label|
+---+------+
|  1| "abc"|
|  1| "abc"|
|  1| "def"|
|  2| "def"|
|  2| "def"|
+---+------+
I want to group by "id" and aggregate the label column into a map of label to count (ignoring nulls). The expected result is shown below:
+---+-------------------+
| id|              label|
+---+-------------------+
|  1| {"abc":2, "def":1}|
|  2|          {"def":2}|
+---+-------------------+
Is it possible to do this without using user-defined aggregate functions? I saw a similar answer here, but it doesn't aggregate based on the count of each item.
I apologize if this question is silly, I am new to both Scala and Spark.
Thanks
Without Custom UDFs
import org.apache.spark.sql.functions.{map, collect_list}
import spark.implicits._

df.groupBy("id", "label")
  .count
  .select($"id", map($"label", $"count").as("map"))
  .groupBy("id")
  .agg(collect_list("map"))
  .show(false)
+---+------------------------+
|id |collect_list(map) |
+---+------------------------+
|1 |[[def -> 1], [abc -> 2]]|
|2 |[[def -> 2]] |
+---+------------------------+
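If you would rather end up with a single map per id instead of a list of one-entry maps, and you are on Spark 2.4+ (an assumption, not part of the original answer), map_from_entries over a collected list of (label, count) structs is one possible sketch:
import org.apache.spark.sql.functions.{collect_list, count, map_from_entries, struct}

// build (label, count) pairs per id, then fold them into one map column
df.groupBy("id", "label")
  .agg(count($"label").as("cnt"))
  .groupBy("id")
  .agg(map_from_entries(collect_list(struct($"label", $"cnt"))).as("label_counts"))
  .show(false)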
Using a custom UDF:
import org.apache.spark.sql.functions.{udf, collect_list}

val customUdf = udf((seq: Seq[String]) => {
  seq.groupBy(x => x).map(x => x._1 -> x._2.size)
})

df.groupBy("id")
  .agg(collect_list("label").as("list"))
  .select($"id", customUdf($"list").as("map"))
  .show(false)
+---+--------------------+
|id |map |
+---+--------------------+
|1 |[abc -> 2, def -> 1]|
|2 |[def -> 2] |
+---+--------------------+

How to randomly choose element in array column of different size?

Given a dataframe with a column of arrays of integers with different sizes:
scala> sampleDf.show()
+------------+
|      arrays|
+------------+
|[15, 16, 17]|
|[15, 16, 17]|
|        [14]|
|        [11]|
|        [11]|
+------------+
scala> sampleDf.printSchema()
root
|-- arrays: array (nullable = true)
| |-- element: integer (containsNull = true)
I would like to generate a new column with a randomly chosen item from each array.
I've tried two solutions:
1. Using UDF:
import scala.util.Random
def getRandomElement(arr: Array[Int]): Int = {
  arr(Random.nextInt(arr.size))
}
val getRandomElementUdf = udf{arr: Array[Int] => getRandomElement(arr)}
sampleDf.withColumn("randomItem", getRandomElementUdf('arrays)).show
crashes on the last line with a long error message: (extracts)
...
Caused by: org.apache.spark.SparkException: Failed to execute user defined function($anonfun$1: (array<int>) => int)
...
Caused by: java.lang.ClassCastException: scala.collection.mutable.WrappedArray$ofRef cannot be cast to [I
I've tried with the alternative udf definition:
val getRandomElementUdf = udf[Int, Array[Int]] (getRandomElement)
but I get the same error.
2. Second method by creating intermediary columns with a random index in the range of the corresponding array:
// Return a dataframe with a column with random index from column of Arrays with different sizes
def choice(df: DataFrame, colName: String): DataFrame = {
  df.withColumn("array_size", size(col(colName)))
    .withColumn("random_idx", least('array_size, floor(rand * 'array_size)))
}
choice(sampleDf, "arrays").show
outputs:
+------------+----------+----------+
|      arrays|array_size|random_idx|
+------------+----------+----------+
|[15, 16, 17]|         3|         2|
|[15, 16, 17]|         3|         1|
|        [14]|         1|         0|
|        [11]|         1|         0|
|        [11]|         1|         0|
+------------+----------+----------+
and ideally we would like to use the column random_idx to choose an item in column arrays, kind of:
sampleDf.withColumn("choosen_item", 'arrays.getItem('random_idx))
Unfortunaltely, getItem cannot take a column as argument.
Any suggestion is welcome.
You can use the UDF below to select a random element from the array:
import scala.util.Random
import org.apache.spark.sql.functions.udf
import spark.implicits._

val getRandomElement = udf((array: Seq[Integer]) => {
  array(Random.nextInt(array.size))
})

df.withColumn("c1", getRandomElement($"arrays"))
  .withColumn("c2", getRandomElement($"arrays"))
  .withColumn("c3", getRandomElement($"arrays"))
  .withColumn("c4", getRandomElement($"arrays"))
  .withColumn("c5", getRandomElement($"arrays"))
  .show(false)
You can see that a random element is selected on each use, shown as a new column.
+------------+---+---+---+---+---+
|arrays      |c1 |c2 |c3 |c4 |c5 |
+------------+---+---+---+---+---+
|[15, 16, 17]|15 |16 |16 |17 |16 |
|[15, 16, 17]|16 |16 |17 |15 |15 |
|[14]        |14 |14 |14 |14 |14 |
|[11]        |11 |11 |11 |11 |11 |
|[11]        |11 |11 |11 |11 |11 |
+------------+---+---+---+---+---+
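One caveat worth noting (my addition, not from the original answer): because the UDF calls Random it is non-deterministic, and on Spark 2.3+ it can be marked as such so the optimizer does not collapse or cache repeated invocations:
import scala.util.Random
import org.apache.spark.sql.functions.udf

// declaring the UDF non-deterministic makes each reference re-evaluate it
val getRandomElement = udf((array: Seq[Integer]) => {
  array(Random.nextInt(array.size))
}).asNondeterministic()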
If you want to remain udf-free, here is a possibility:
First add a key to the dataframe output by choice (assume its name is choiceDf):
val myDf = choiceDf.withColumn("key", monotonically_increasing_id())
Then create an intermediary dataframe that explodes the arrays column and keeps the index of each value:
val tmp = myDf.select('key, posexplode('arrays))
Finally, join on key and random_idx:
myDf.join(tmp.withColumnRenamed("pos", "random_idx"), Seq("key", "random_idx"), "left")
The item you are looking for is stored in the column col:
+---+----------+------------+----------+---+
|key|random_idx|      arrays|array_size|col|
+---+----------+------------+----------+---+
|  0|         2|[15, 16, 17]|         3| 17|
|  1|         1|[15, 16, 17]|         3| 16|
|  2|         0|        [14]|         1| 14|
+---+----------+------------+----------+---+
You can extract a random element from an array in one line using Spark SQL:
sampleDF.createOrReplaceTempView("sampleDF")
spark.sql("select arrays[Cast((FLOOR(RAND() * FLOOR(size(arrays)))) as INT)] as random from sampleDF")

In Spark windowing, how do you fill in null when the number of rows selected is less than the window size?

assume there is a dataframe as follows:
machine_id | value
1| 5
1| 3
1| 4
I want to produce a final dataframe like this
machine_id | value | sum
1| 5|null
1| 3| 8
1| 4| 7
Basically I have to use a window of size two, but for the first row I don't want to sum with zero; it should just be filled with null.
var winSpec = Window.orderBy("machine_id").partitionBy("machine_id").rangeBetween(-1, 0)
df.withColumn("sum", sum("value").over(winSpec))
You can use the lag function and add the value column to lag(value, 1):
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
import spark.implicits._

val df = Seq((1,5),(1,3),(1,4)).toDF("machine_id", "value")
val window = Window.partitionBy("machine_id").orderBy("id")

df.withColumn("id", monotonically_increasing_id)
  .withColumn("sum", $"value" + lag($"value", 1).over(window))
  .drop("id")
  .show()
+----------+-----+----+
|machine_id|value| sum|
+----------+-----+----+
|         1|    5|null|
|         1|    3|   8|
|         1|    4|   7|
+----------+-----+----+
You should be using the rowsBetween API rather than rangeBetween, as below:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
import spark.implicits._

val winSpec = Window.partitionBy("machine_id").orderBy("machine_id").rowsBetween(-1, 0)

df.withColumn("sum", sum("value").over(winSpec))
  .withColumn("sum", when($"sum" === $"value", null).otherwise($"sum"))
  .show(false)
which should give you your expected result
+----------+-----+----+
|machine_id|value|sum |
+----------+-----+----+
|1         |5    |null|
|1         |3    |8   |
|1         |4    |7   |
+----------+-----+----+
I hope the answer is helpful
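A slightly more defensive variant (my addition, not part of the original answer) checks how many rows actually fall inside the frame instead of comparing sum with value, which avoids a spurious null whenever the preceding value happens to be 0:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

val winSpec = Window.partitionBy("machine_id").orderBy("machine_id").rowsBetween(-1, 0)

// the frame holds fewer than 2 rows only for the first row of each partition
df.withColumn("sum", when(count("value").over(winSpec) < 2, lit(null)).otherwise(sum("value").over(winSpec)))
  .show(false)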
If you want a general solution where n is the window size:
Spark (Scala)
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

val winSpec = Window.partitionBy("machine_id").orderBy("machine_id").rowsBetween(-(n - 1), 0)
val winSpec2 = Window.partitionBy("machine_id").orderBy("machine_id")

df.withColumn("sum", when(row_number().over(winSpec2) < n, lit(null)).otherwise(sum("value").over(winSpec)))
  .show(false)
PySpark
from pyspark.sql.window import Window
from pyspark.sql.functions import *

winSpec = Window.partitionBy("machine_id").orderBy("machine_id").rowsBetween(-(n - 1), 0)
winSpec2 = Window.partitionBy("machine_id").orderBy("machine_id")

df.withColumn("sum", when(row_number().over(winSpec2) < n, lit(None)).otherwise(sum("value").over(winSpec))) \
  .show(truncate=False)

Pass Array[Seq[String]] to a UDF in Spark Scala

I am new to UDFs in Spark. I have also read the answer here.
Problem statement: I'm trying to do pattern matching on a dataframe column.
Ex: Dataframe
val df = Seq((1, Some("z")), (2, Some("abs,abc,dfg")),
(3,Some("a,b,c,d,e,f,abs,abc,dfg"))).toDF("id", "text")
df.show()
+---+--------------------+
| id|                text|
+---+--------------------+
|  1|                   z|
|  2|         abs,abc,dfg|
|  3|a,b,c,d,e,f,abs,a...|
+---+--------------------+
df.filter($"text".contains("abs,abc,dfg")).count()
// returns 2, as "abs,abc,dfg" exists in the 2nd and 3rd rows
Now I want to do this pattern matching for every row in the text column and add a new column called count.
Result:
+---+--------------------+-----+
| id|                text|count|
+---+--------------------+-----+
|  1|                   z|    1|
|  2|         abs,abc,dfg|    2|
|  3|a,b,c,d,e,f,abs,a...|    1|
+---+--------------------+-----+
I tried to define a UDF passing the text column as Array[Seq[String]], but I am not able to get what I intended.
What I tried so far:
val txt = df.select("text").collect.map(_.toSeq.map(_.toString)) //convert column to Array[Seq[String]
val valsum = udf((txt:Array[Seq[String],pattern:String)=> {txt.count(_ == pattern) } )
df.withColumn("newCol", valsum( lit(txt) ,df(text)) )).show()
Any help would be appreciated
You will have to know all the elements of the text column, which can be done by using collect_list to group all the rows of your dataframe into one array. Then just count, for each row, how many elements of the collected array contain that row's text, as in the following code.
import scala.collection.mutable
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions._
import sqlContext.implicits._

val df = Seq((1, Some("z")), (2, Some("abs,abc,dfg")), (3, Some("a,b,c,d,e,f,abs,abc,dfg"))).toDF("id", "text")

val valsum = udf((txt: String, array: mutable.WrappedArray[String]) => array.filter(element => element.contains(txt)).size)

df.withColumn("grouping", lit("g"))
  .withColumn("array", collect_list("text").over(Window.partitionBy("grouping")))
  .withColumn("count", valsum($"text", $"array"))
  .drop("grouping", "array")
  .show(false)
You should have the following output:
+---+-----------------------+-----+
|id |text                   |count|
+---+-----------------------+-----+
|1  |z                      |1    |
|2  |abs,abc,dfg            |2    |
|3  |a,b,c,d,e,f,abs,abc,dfg|1    |
+---+-----------------------+-----+
I hope this is helpful.
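If collecting every row into a single window partition becomes a concern at larger scale (an assumption about your data, not something from the original answer), another sketch is to collect only the text values to the driver and broadcast them, assuming a SparkSession named spark:
import org.apache.spark.sql.functions.udf

// broadcast the (small) list of texts and count, per row, how many of them contain it
val texts = df.select("text").na.drop().collect().map(_.getString(0))
val bTexts = spark.sparkContext.broadcast(texts)
val countMatches = udf((t: String) => bTexts.value.count(_.contains(t)))

df.withColumn("count", countMatches($"text")).show(false)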

Dataframe.map needs to produce more rows than are in the dataset

I am using Scala and Spark and have a simple dataframe.map to produce the required transformation on the data. However, I need to output an additional row of data alongside the modified original. How can I do this with dataframe.map?
ex:
dataset from:
id, name, age
1, john, 23
2, peter, 32
if age < 25 default to 25.
dataset to:
id, name, age
1, john, 25
1, john, -23
2, peter, 32
Would a 'UnionAll' handle it?
eg.
df1 = original dataframe
df2 = transformed df1
df1.unionAll(df2)
EDIT: implementation using unionAll()
val df1 = sqlContext.createDataFrame(Seq((1, "john", 23), (2, "peter", 32)))
  .toDF("id", "name", "age")

def udfTransform = udf[Int, Int] { age => if (age < 25) 25 else age }

val df2 = df1.withColumn("age2", udfTransform($"age"))
  .where("age != age2")
  .drop("age2")

df1.withColumn("age", udfTransform($"age"))
  .unionAll(df2)
  .orderBy("id")
  .show()
+---+-----+---+
| id| name|age|
+---+-----+---+
| 1| john| 25|
| 1| john| 23|
| 2|peter| 32|
+---+-----+---+
Note: the implementation differs a bit from the originally proposed (naive) solution. The devil is always in the detail!
EDIT 2: implementation using nested array and explode
val df1 = sx.createDataFrame(Seq((1, "john", 23), (2, "peter", 32)))
  .toDF("id", "name", "age")

def udfArr = udf[Array[Int], Int] { age =>
  if (age < 25) Array(age, 25) else Array(age)
}

val df2 = df1.withColumn("age", udfArr($"age"))
df2.show()
+---+-----+--------+
| id| name| age|
+---+-----+--------+
| 1| john|[23, 25]|
| 2|peter| [32]|
+---+-----+--------+
df2.withColumn("age",explode($"age") ).show()
+---+-----+---+
| id| name|age|
+---+-----+---+
| 1| john| 23|
| 1| john| 25|
| 2|peter| 32|
+---+-----+---+
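For completeness, the same nested-array-plus-explode idea can be written without a UDF, using only built-in functions (a sketch, not part of the original answer):
import org.apache.spark.sql.functions._

// build an array holding the original age plus the default, then explode it into rows
df1.withColumn("age", explode(when($"age" < 25, array($"age", lit(25))).otherwise(array($"age"))))
  .orderBy("id")
  .show()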