Update dataframe column by comparing with existing data in another column using Levenshtein algorithm - scala

How can I update the m_name column, replacing the nulls using the Levenshtein algorithm?
+--------------------+--------------------+-------------------+
|       original_name|              m_name|            created|
+--------------------+--------------------+-------------------+
|            New York|            New York|2017-08-01 09:33:40|
|            new york|                null|2017-08-01 15:15:06|
|       New York city|                null|2017-08-01 15:15:06|
|          california|          California|2017-09-01 09:33:40|
| California,000IU...|                null|2017-09-01 01:40:00|
|         Californiya|          California|2017-09-01 11:38:21|
+--------------------+--------------------+-------------------+
For every "original_name" value, the nearest "m_name" value should be taken, as found by an algorithm based on Levenshtein distance (edit distance).
similarity(s1,s2) = [max(len(s1), len(s2)) − editDistance(s1,s2)] / max(len(s1), len(s2))
"ideal" final result should be like that
+--------------------+--------------------+-------------------+
|       original_name|              m_name|            created|
+--------------------+--------------------+-------------------+
|            New York|            New York|2017-08-01 09:33:40|
|            new york|            New York|2017-08-01 15:15:06|
|       New York city|            New York|2017-08-01 15:15:06|
|          california|          California|2017-09-01 09:33:40|
| California,000IU...|          California|2017-09-01 01:40:00|
|         Californiya|          California|2017-09-01 11:38:21|
+--------------------+--------------------+-------------------+

Credit goes to Rosetta Code's Levenshtein distance implementation.
You can do the following (commented for clarity and explanation):
// collect the m_name values into a unique set, filter out the "null" strings,
// and broadcast the list so it can be used inside the udf function
import org.apache.spark.sql.functions._
val collectedList = df.select(collect_set("m_name")).rdd.collect().flatMap(row => row.getAs[Seq[String]](0).filterNot(_ == "null")).toList
val broadcastedList = sc.broadcast(collectedList)

// Levenshtein distance (edit distance) implementation
import scala.math.{min => mathmin, max => mathmax}
def minimum(i1: Int, i2: Int, i3: Int) = mathmin(mathmin(i1, i2), i3)
def editDistance(s1: String, s2: String) = {
  val dist = Array.tabulate(s2.length + 1, s1.length + 1) { (j, i) => if (j == 0) i else if (i == 0) j else 0 }
  for (j <- 1 to s2.length; i <- 1 to s1.length)
    dist(j)(i) = if (s2(j - 1) == s1(i - 1)) dist(j - 1)(i - 1)
                 else minimum(dist(j - 1)(i) + 1, dist(j)(i - 1) + 1, dist(j - 1)(i - 1) + 1)
  dist(s2.length)(s1.length)
}

// udf function that finds the closest match from the broadcasted list for the original_name column
def levenshteinUdf = udf((str1: String) => {
  val distances = for (str2 <- broadcastedList.value) yield (str2, editDistance(str1.toLowerCase, str2.toLowerCase))
  distances.minBy(_._2)._1
})

// apply the udf only when m_name is null (or the string "null")
df.withColumn("m_name", when(col("m_name").isNull || col("m_name") === "null", levenshteinUdf(col("original_name"))).otherwise(col("m_name"))).show(false)
which should give you
+-------------------+----------+-------------------+
|original_name |m_name |created |
+-------------------+----------+-------------------+
|New York |New York |2017-08-01 09:33:40|
|new york |New York |2017-08-01 15:15:06|
|New York city |New York |2017-08-01 15:15:06|
|california |California|2017-09-01 09:33:40|
|California,000IU...|California|2017-09-01 01:40:00|
|Californiya |California|2017-09-01 11:38:21|
+-------------------+----------+-------------------+
Note: I didn't use your similarity(s1,s2) = [max(len(s1), len(s2)) − editDistance(s1,s2)] / max(len(s1), len(s2)) logic, as it was giving the wrong output.
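If you do want to rank candidates by the similarity score from the question rather than raw edit distance, the UDF could be adjusted roughly as follows (a sketch reusing the editDistance and broadcastedList defined above; as noted, plain edit distance already produced the expected result for this data):

def similarity(s1: String, s2: String): Double = {
  val maxLen = math.max(s1.length, s2.length)
  if (maxLen == 0) 1.0 else (maxLen - editDistance(s1, s2)).toDouble / maxLen
}

def similarityUdf = udf((str1: String) => {
  // score every broadcast candidate and keep the one with the highest similarity
  val scored = for (str2 <- broadcastedList.value)
    yield (str2, similarity(str1.toLowerCase, str2.toLowerCase))
  scored.maxBy(_._2)._1
})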

Related

Spark creating a new column based on a mapped value of an existing column

I am trying to map the values of one column in my dataframe to a new value and put it into a new column using a UDF, but I am unable to get the UDF to accept a parameter that isn't also a column. For example, I have a dataframe dfOriginal like this:
+-----------+-----+
|high_scores|count|
+-----------+-----+
| 9| 1|
| 21| 2|
| 23| 3|
| 7| 6|
+-----------+-----+
And I'm trying to get a sense of the bin the numeric value falls into, so I may construct a list of bins like this:
case class Bin(binMax: BigDecimal, binWidth: BigDecimal) {
  val binMin = binMax - binWidth

  // only one of the two evaluations can include an "or=", otherwise a value could fit in 2 bins
  def fitsInBin(value: BigDecimal): Boolean = value > binMin && value <= binMax

  def rangeAsString(): String = {
    val sb = new StringBuilder()
    sb.append(trimDecimal(binMin)).append(" - ").append(trimDecimal(binMax))
    sb.toString()
  }
}
And then I want to transform my old dataframe like this to make dfBin:
+-----------+-----+---------+
|high_scores|count|bin_range|
+-----------+-----+---------+
| 9| 1| 0 - 10 |
| 21| 2| 20 - 30 |
| 23| 3| 20 - 30 |
| 7| 6| 0 - 10 |
+-----------+-----+---------+
So that I can ultimately get a count of the instances of the bins by calling .groupBy("bin_range").count().
I am trying to generate dfBin by using the withColumn function with a UDF.
Here's the code with the UDF I am attempting to use:
val convertValueToBinRangeUDF = udf((value: String, binList: List[Bin]) => {
  val number = BigDecimal(value)
  val bin = binList.find(bin => bin.fitsInBin(number)).getOrElse(Bin(BigDecimal(0), BigDecimal(0)))
  bin.rangeAsString()
})

val binList = List(Bin(10, 10), Bin(20, 10), Bin(30, 10), Bin(40, 10), Bin(50, 10))

val dfBin = dfOriginal.withColumn("bin_range", convertValueToBinRangeUDF(col("high_scores"), binList))
But it's giving me a type mismatch:
Error:type mismatch;
found : List[Bin]
required: org.apache.spark.sql.Column
val valueCountsWithBin = valuesCounts.withColumn(binRangeCol, convertValueToBinRangeUDF(col(columnName), binList))
Seeing the definition of a UDF makes me think it should handle the conversion fine, but it clearly doesn't. Any ideas?
The problem is that all parameters to a UDF must be of Column type. One solution would be to convert binList into a Column and pass it to the UDF, similar to the current code.
However, it is simpler to adjust the UDF slightly and turn it into a def that returns a UDF. This way you can easily pass in other, non-column data:
def convertValueToBinRangeUDF(binList: List[Bin]) = udf((value: String) => {
  val number = BigDecimal(value)
  val bin = binList.find(bin => bin.fitsInBin(number)).getOrElse(Bin(BigDecimal(0), BigDecimal(0)))
  bin.rangeAsString()
})
Usage:
val dfBin = valuesCounts.withColumn("bin_range", convertValueToBinRangeUDF(binList)($"columnName"))
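For the example dataframe from the question, the call might look like this (a sketch; high_scores is cast to string since the UDF expects a String value):

val dfBin = dfOriginal.withColumn("bin_range",
  convertValueToBinRangeUDF(binList)(col("high_scores").cast("string")))
dfBin.show()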
Try this -
scala> case class Bin(binMax: BigDecimal, binWidth: BigDecimal) {
     |   val binMin = binMax - binWidth
     |
     |   // only one of the two evaluations can include an "or=", otherwise a value could fit in 2 bins
     |   def fitsInBin(value: BigDecimal): Boolean = value > binMin && value <= binMax
     |
     |   def rangeAsString(): String = {
     |     val sb = new StringBuilder()
     |     sb.append(binMin).append(" - ").append(binMax)
     |     sb.toString()
     |   }
     | }
defined class Bin
scala> val binList = List(Bin(10, 10), Bin(20, 10), Bin(30, 10), Bin(40, 10), Bin(50, 10))
binList: List[Bin] = List(Bin(10,10), Bin(20,10), Bin(30,10), Bin(40,10), Bin(50,10))
scala> spark.udf.register("convertValueToBinRangeUDF", (value: String) => {
     |   val number = BigDecimal(value)
     |   val bin = binList.find(bin => bin.fitsInBin(number)).getOrElse(Bin(BigDecimal(0), BigDecimal(0)))
     |   bin.rangeAsString()
     | })
res13: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,StringType,Some(List(StringType)))
//-- Testing with one record
scala> val dfOriginal = spark.sql(s""" select "9" as `high_scores`, "1" as count """)
dfOriginal: org.apache.spark.sql.DataFrame = [high_scores: string, count: string]
scala> dfOriginal.createOrReplaceTempView("dfOriginal")
scala> val dfBin = spark.sql(s""" select high_scores, count, convertValueToBinRangeUDF(high_scores) as bin_range from dfOriginal """)
dfBin: org.apache.spark.sql.DataFrame = [high_scores: string, count: string ... 1 more field]
scala> dfBin.show(false)
+-----------+-----+---------+
|high_scores|count|bin_range|
+-----------+-----+---------+
|9 |1 |0 - 10 |
+-----------+-----+---------+
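Since the function is registered with spark.udf.register, it could also be invoked from the DataFrame API via callUDF rather than spark.sql (a sketch):

import org.apache.spark.sql.functions.{callUDF, col}
dfOriginal.withColumn("bin_range", callUDF("convertValueToBinRangeUDF", col("high_scores"))).show(false)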
Hope this will help.

array[array["string"]] with explode option dropping null rows in spark/scala [duplicate]

I have a Dataframe that I am trying to flatten. As part of the process, I want to explode it, so if I have a column of arrays, each value of the array will be used to create a separate row. For instance,
id | name | likes
_______________________________
1 | Luke | [baseball, soccer]
should become
id | name | likes
_______________________________
1 | Luke | baseball
1 | Luke | soccer
This is my code
private DataFrame explodeDataFrame(DataFrame df) {
    DataFrame resultDf = df;
    for (StructField field : df.schema().fields()) {
        if (field.dataType() instanceof ArrayType) {
            resultDf = resultDf.withColumn(field.name(), org.apache.spark.sql.functions.explode(resultDf.col(field.name())));
            resultDf.show();
        }
    }
    return resultDf;
}
The problem is that in my data, some of the array columns have nulls. In that case, the entire row is deleted. So this dataframe:
id | name | likes
_______________________________
1 | Luke | [baseball, soccer]
2 | Lucy | null
becomes
id | name | likes
_______________________________
1 | Luke | baseball
1 | Luke | soccer
instead of
id | name | likes
_______________________________
1 | Luke | baseball
1 | Luke | soccer
2 | Lucy | null
How can I explode my arrays so that I don't lose the null rows?
I am using Spark 1.5.2 and Java 8
Spark 2.2+
You can use explode_outer function:
import org.apache.spark.sql.functions.explode_outer
df.withColumn("likes", explode_outer($"likes")).show
// +---+----+--------+
// | id|name| likes|
// +---+----+--------+
// | 1|Luke|baseball|
// | 1|Luke| soccer|
// | 2|Lucy| null|
// +---+----+--------+
Spark <= 2.1
This is in Scala, but the Java equivalent should be almost identical (to import individual functions use import static).
import org.apache.spark.sql.functions.{array, col, explode, lit, when}

val df = Seq(
  (1, "Luke", Some(Array("baseball", "soccer"))),
  (2, "Lucy", None)
).toDF("id", "name", "likes")

df.withColumn("likes", explode(
  when(col("likes").isNotNull, col("likes"))
    // If null explode an array<string> with a single null
    .otherwise(array(lit(null).cast("string")))))
The idea here is basically to replace NULL with an array(NULL) of the desired type. For complex types (a.k.a. structs) you have to provide the full schema:
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

val dfStruct = Seq((1L, Some(Array((1, "a")))), (2L, None)).toDF("x", "y")

val st = StructType(Seq(
  StructField("_1", IntegerType, false), StructField("_2", StringType, true)
))

dfStruct.withColumn("y", explode(
  when(col("y").isNotNull, col("y"))
    .otherwise(array(lit(null).cast(st)))))
or
dfStruct.withColumn("y", explode(
  when(col("y").isNotNull, col("y"))
    .otherwise(array(lit(null).cast("struct<_1:int,_2:string>")))))
Note:
If the array column has been created with containsNull set to false, you should change this first (tested with Spark 2.1):
df.withColumn("array_column", $"array_column".cast(ArrayType(SomeType, true)))
You can use the explode_outer() function.
Following up on the accepted answer, when the array elements are a complex type it can be difficult to define the schema by hand (e.g. with large structs).
To do it automatically I wrote the following helper method:
import org.apache.spark.sql.{Dataset, Row}
import org.apache.spark.sql.functions.{array, col, explode, lit, size, when}
import org.apache.spark.sql.types.ArrayType

def explodeOuter(df: Dataset[Row], columnsToExplode: List[String]) = {
  // map of array column name -> its ArrayType, used to look up the element type
  val arrayFields = df.schema.fields
    .map(field => field.name -> field.dataType)
    .collect { case (name, dt: ArrayType) => (name, dt) }
    .toMap

  columnsToExplode.foldLeft(df) { (dataFrame, arrayCol) =>
    dataFrame.withColumn(arrayCol, explode(when(size(col(arrayCol)) =!= 0, col(arrayCol))
      .otherwise(array(lit(null).cast(arrayFields(arrayCol).elementType)))))
  }
}
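A possible usage, assuming a dataframe like the one in the question with an array column named likes (a sketch):

val exploded = explodeOuter(df, List("likes"))
exploded.show()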
Edit: it seems that Spark 2.2 and newer have this built in.
To handle an empty map type column (for Spark <= 2.1):
val df = List(
  (1, Array(2, 3, 4), Map(1 -> "a")),
  (2, Array(5, 6, 7), Map(2 -> "b")),
  (3, Array[Int](), Map[Int, String]())).toDF("col1", "col2", "col3")
df.show()

df.select('col1, explode(when(size(map_keys('col3)) === 0, map(lit("null"), lit("null"))).
  otherwise('col3))).show()
from pyspark.sql.functions import *

def flatten_df(nested_df):
    flat_cols = [c[0] for c in nested_df.dtypes if c[1][:6] != 'struct']
    nested_cols = [c[0] for c in nested_df.dtypes if c[1][:6] == 'struct']
    flat_df = nested_df.select(flat_cols +
                               [col(nc + '.' + c).alias(nc + '_' + c)
                                for nc in nested_cols
                                for c in nested_df.select(nc + '.*').columns])
    print("flatten_df_count :", flat_df.count())
    return flat_df

def explode_df(nested_df):
    flat_cols = [c[0] for c in nested_df.dtypes if c[1][:6] != 'struct' and c[1][:5] != 'array']
    array_cols = [c[0] for c in nested_df.dtypes if c[1][:5] == 'array']
    for array_col in array_cols:
        schema = nested_df.select(array_col).dtypes[0][1]
        nested_df = nested_df.withColumn(array_col, when(col(array_col).isNotNull(), col(array_col)).otherwise(array(lit(None)).cast(schema)))
    nested_df = nested_df.withColumn("tmp", arrays_zip(*array_cols)).withColumn("tmp", explode("tmp")).select([col("tmp." + c).alias(c) for c in array_cols] + flat_cols)
    print("explode_dfs_count :", nested_df.count())
    return nested_df

new_df = flatten_df(myDf)
while True:
    array_cols = [c[0] for c in new_df.dtypes if c[1][:5] == 'array']
    if len(array_cols):
        new_df = flatten_df(explode_df(new_df))
    else:
        break

new_df.printSchema()
This uses arrays_zip and explode to do it faster and to address the null issue.

Iterate through rows in DataFrame and transform one to many

As an example in Scala, I have a list, and I want every item that matches a condition to appear twice (this may not be the best option for this use case, but it's the idea that counts):
l.flatMap {
  case n if n % 2 == 0 => List(n, n)
  case n => List(n)
}
I would like to do something similar in Spark - iterate over rows in a DataFrame and if a row matches a certain condition then I need to duplicate the row with some modifications in the copy. How can this be done?
For example, if my input is the table below:
| name | age |
|-------|-----|
| Peter | 50 |
| Paul | 60 |
| Mary | 70 |
I want to iterate through the table and test each row against multiple conditions, and for each condition that matches, an entry should be created with the name of the matched condition.
E.g. condition #1 is "age > 60" and condition #2 is "name.length <=4". This should result in the following output:
| name | age |condition|
|-------|-----|---------|
| Paul | 60 | 2 |
| Mary | 70 | 1 |
| Mary | 70 | 2 |
You can filter the dataframe for each matching condition and then union all of the results.
import org.apache.spark.sql.functions._
val condition1DF = df.filter($"age" > 60).withColumn("condition", lit(1))
val condition2DF = df.filter(length($"name") <= 4).withColumn("condition", lit(2))
val finalDF = condition1DF.union(condition2DF)
You should get your desired output:
+----+---+---------+
|name|age|condition|
+----+---+---------+
|Mary|70 |1 |
|Paul|60 |2 |
|Mary|70 |2 |
+----+---+---------+
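If there were many conditions, the same filter-and-union pattern could be driven by a list of (condition, id) pairs and reduced, for example (a sketch):

val conditionList = Seq(($"age" > 60, 1), (length($"name") <= 4, 2))
val finalDF2 = conditionList
  .map { case (cond, id) => df.filter(cond).withColumn("condition", lit(id)) }
  .reduce(_ union _)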
I hope the answer is helpful
You can also use a combination of a UDF and explode(), like in the following example:
// set up example data
case class Pers1(name: String, age: Int)
val d = Seq(Pers1("Peter", 50), Pers1("Paul", 60), Pers1("Mary", 70))
val df = spark.createDataFrame(d)

// conditions logic - as complex as you'd like
// probably should use a Set instead of a Sequence but I digress..
val conditions: (String, Int) => Seq[Int] = { (name, age) =>
  (if (age > 60) Seq(1) else Seq.empty) ++
    (if (name.length <= 4) Seq(2) else Seq.empty)
}

// define the UDF for Spark
import org.apache.spark.sql.functions.{col, explode, udf}
val conditionsUdf = udf(conditions)

// explode() works just like flatMap
val result = df.withColumn("condition",
  explode(conditionsUdf(col("name"), col("age"))))
result.show
+----+---+---------+
|name|age|condition|
+----+---+---------+
|Paul| 60| 2|
|Mary| 70| 1|
|Mary| 70| 2|
+----+---+---------+
Here is one way to flatten it with rdd.flatMap:
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row

val new_rdd = df.rdd.flatMap(r => {
  val conditions = Seq((1, r.getAs[Int](1) > 60), (2, r.getAs[String](0).length <= 4))
  conditions.collect { case (i, c) if c => Row.fromSeq(r.toSeq :+ i) }
})

val new_schema = StructType(df.schema :+ StructField("condition", IntegerType, true))
spark.createDataFrame(new_rdd, new_schema).show
+----+---+---------+
|name|age|condition|
+----+---+---------+
|Paul| 60| 2|
|Mary| 70| 1|
|Mary| 70| 2|
+----+---+---------+

How to append List[String] to every row of DataFrame?

After a series of validations over a DataFrame,
I obtain a List of String with certain values like this:
List[String]=(lvalue1, lvalue2, lvalue3, ...)
And I have a Dataframe with n values:
dfield 1 | dfield 2 | dfield 3
___________________________
dvalue1 | dvalue2 | dvalue3
dvalue1 | dvalue2 | dvalue3
I want to append the values of the List at the beginning of my DataFrame, in order to get a new DF with something like this:
dfield 1 | dfield 2 | dfield 3 | dfield4 | dfield5 | dfield6
__________________________________________________________
lvalue1 | lvalue2 | lvalue3 | dvalue1 | dvalue2 | dvalue3
lvalue1 | lvalue2 | lvalue3 | dvalue1 | dvalue2 | dvalue3
I have found something using a UDF. Could this be correct for my purpose?
Regards.
TL;DR Use select or withColumn with lit function.
I'd use lit function with select operator (or withColumn).
lit(literal: Any): Column Creates a Column of literal value.
A solution could be as follows.
import org.apache.spark.sql.functions.{col, lit}

val values = List("lvalue1", "lvalue2", "lvalue3")
val dfields = values.indices.map(idx => s"dfield ${idx + 1}")

val dataset = Seq(
  ("dvalue1", "dvalue2", "dvalue3"),
  ("dvalue1", "dvalue2", "dvalue3")
).toDF("dfield 1", "dfield 2", "dfield 3")

// shift the existing column names (dfield 1..3 become dfield 4..6)
// so the new literal columns can take the first positions
val offsets = dataset.
  columns.
  indices.
  map { idx => idx + values.size + 1 }
val offsetDF = offsets.zip(dataset.columns).
  foldLeft(dataset) { case (df, (off, col)) => df.withColumnRenamed(col, s"dfield $off") }

// literal columns from the list, followed by all the renamed data columns
val newcols = values.zip(dfields).
  map { case (v, dfield) => lit(v) as dfield } :+ col("*")
scala> offsetDF.select(newcols: _*).show
+--------+--------+--------+--------+--------+--------+
|dfield 1|dfield 2|dfield 3|dfield 4|dfield 5|dfield 6|
+--------+--------+--------+--------+--------+--------+
| lvalue1| lvalue2| lvalue3| dvalue1| dvalue2| dvalue3|
| lvalue1| lvalue2| lvalue3| dvalue1| dvalue2| dvalue3|
+--------+--------+--------+--------+--------+--------+
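The withColumn variant mentioned in the TL;DR could look roughly like this (a sketch; the temporary lfield names are just placeholders, and a final select puts the literal columns in front):

val withLits = values.zipWithIndex.foldLeft(dataset) {
  case (df, (v, idx)) => df.withColumn(s"lfield ${idx + 1}", lit(v))
}
val reordered = withLits.select(
  (values.indices.map(i => col(s"lfield ${i + 1}")) ++ dataset.columns.map(col)): _*)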

Spark Dataframe access of previous calculated row

I have the following data:
+-----+-----+----+
|Col1 |t0 |t1 |
+-----+-----+----+
| A |null |20 |
| A |20 |40 |
| B |null |10 |
| B |10 |20 |
| B |20 |120 |
| B |120 |140 |
| B |140 |320 |
| B |320 |340 |
| B |340 |360 |
+-----+-----+----+
And what I want is something like this:
+-----+-----+----+----+
|Col1 |t0 |t1 |grp |
+-----+-----+----+----+
| A |null |20 |1A |
| A |20 |40 |1A |
| B |null |10 |1B |
| B |10 |20 |1B |
| B |20 |120 |2B |
| B |120 |140 |2B |
| B |140 |320 |3B |
| B |320 |340 |3B |
| B |340 |360 |3B |
+-----+-----+----+----+
Explanation:
The extra column is based on Col1 and the difference between t1 and t0.
When the difference between the two is too high, a new number is generated (in the dataset above, when the difference is greater than 50).
I build t0 with:
val windowSpec = Window.partitionBy($"Col1").orderBy("t1")
df = df.withColumn("t0", lag("t1", 1) over windowSpec)
Can someone help me do this?
I searched but didn't come up with a good idea.
I'm a little bit lost because I need the value of the previously calculated row of grp...
Thanks
I solved it myself
val grp = (coalesce(
  ($"t" - lag($"t", 1).over(windowSpec)),
  lit(0)
) > 50).cast("bigint")

df = df.withColumn("grp", sum(grp).over(windowSpec))
With this I don't need both columns (t0 and t1) anymore; I can use only t1 (or t) without computing t0.
(I still need to append the value of Col1, but the most important part, the number, is done and works fine; a sketch of that concatenation is shown below.)
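A sketch of appending Col1 to the computed number, using only t1 (the +1 offset so that numbering starts at 1, matching the desired output, is my addition):

import org.apache.spark.sql.functions.{coalesce, concat, lag, lit, sum}

val grpNum = (coalesce($"t1" - lag($"t1", 1).over(windowSpec), lit(0)) > 50).cast("bigint")
// +1 so the group numbers start at 1, then append Col1 (e.g. "1A", "2B", ...)
df = df.withColumn("grp",
  concat((sum(grpNum).over(windowSpec) + 1).cast("string"), $"Col1"))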
I got the solution from:
Spark SQL window function with complex condition
thanks for your help
You can use a udf function to generate the grp column:
def testUdf = udf((col1: String, t0: Int, t1: Int) => (t1 - t0) match {
  case x: Int if x > 50 => 2 + col1
  case _ => 1 + col1
})
Call the udf function as
df.withColumn("grp", testUdf($"Col1", $"t0", $"t1"))
The udf function above won't work properly due to null values in t0, which can be replaced by 0:
df.na.fill(0)
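Putting those two steps together (a sketch):

val result = df.na.fill(0).withColumn("grp", testUdf($"Col1", $"t0", $"t1"))
result.show(false)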
I hope this is the answer you are searching for.
Edited
Here's the complete solution using a UDAF. The process is complex. You've already got an easy answer, but it might help somebody who wants to use it.
First, define the UDAF:
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types.{IntegerType, StringType, StructType}

class Boendal extends UserDefinedAggregateFunction {

  def inputSchema = new StructType().add("Col1", StringType).add("t0", IntegerType).add("t1", IntegerType).add("rank", IntegerType)
  def bufferSchema = new StructType().add("buff", StringType).add("buffer1", IntegerType)
  def dataType = StringType
  def deterministic = true

  def initialize(buffer: MutableAggregationBuffer) = {
    buffer.update(0, "")
    buffer.update(1, 0)
  }

  def update(buffer: MutableAggregationBuffer, input: Row) = {
    if (!input.isNullAt(0)) {
      val buff = buffer.getString(0)
      val col1 = input.getString(0)
      val t0 = input.getInt(1)
      val t1 = input.getInt(2)
      val rank = input.getInt(3)

      var value = 1
      if ((t1 - t0) < 50)
        value = 1
      else
        value = (t1 - t0) / 50

      val lastValue = buffer(1).asInstanceOf[Integer]
      // if(!buff.isEmpty) {
      if (value < lastValue)
        value = lastValue
      // }
      buffer.update(1, value)

      var finalString = ""
      if (buff.isEmpty)
        finalString = rank + ";" + value + col1
      else
        finalString = buff + "::" + rank + ";" + value + col1

      buffer.update(0, finalString)
    }
  }

  def merge(buffer1: MutableAggregationBuffer, buffer2: Row) = {
    val buff1 = buffer1.getString(0)
    val buff2 = buffer2.getString(0)
    buffer1.update(0, buff1 + buff2)
  }

  def evaluate(buffer: Row): String = {
    buffer.getString(0)
  }
}
Then some udfs
def rankUdf = udf((grp: String)=> grp.split(";")(0))
def removeRankUdf = udf((grp: String) => grp.split(";")(1))
And finally call the UDAF and udfs:
val windowSpec = Window.partitionBy($"Col1").orderBy($"t1")

df = df.withColumn("t0", lag("t1", 1) over windowSpec)
  .withColumn("rank", rank() over windowSpec)

df = df.na.fill(0)

val boendal = new Boendal

val df2 = df.groupBy("Col1").agg(boendal($"Col1", $"t0", $"t1", $"rank").as("grp2")).withColumnRenamed("Col1", "Col2")
  .withColumn("grp2", explode(split($"grp2", "::")))
  .withColumn("rank2", rankUdf($"grp2"))
  .withColumn("grp2", removeRankUdf($"grp2"))

df = df.join(df2, df("Col1") === df2("Col2") && df("rank") === df2("rank2"))
  .drop("Col2", "rank", "rank2")

df.show(false)
Hope it helps