I am trying to implement a query in my Scala code that uses a regexp on a Spark Column to find all the rows in the column that contain a certain value, like:
column.rlike(".*" + str + ".*")
str is a String that can be anything (except null or empty).
This works fine for the basic queries that I am testing. However, being new to Spark/Scala, I am unsure whether there are any special cases that could break this code. Are there any characters I need to escape, or other special cases I need to worry about here?
This can be broken by any invalid regexp. You don't even have to try hard:
Seq("[", "foo", " ba.r ").toDF.filter($"value".rlike(".*" + "[ " + ".*")).show
or it can give unexpected results if str is a non-trivial pattern itself. For simple cases like this, you'll be better off with Column.contains:
Seq("[", "foo", " ba.r ").toDF.filter($"value".contains("[")).show
Seq("[", "foo", " ba.r ").toDF.filter($"value".contains("a.r")).show
You can use rlike as zero suggested, together with Pattern.quote to handle the special regex characters. Suppose you have this DF:
val df = Seq(
  "hel?.o",
  "bbhel?.o",
  "hel?.oZZ",
  "bye"
).toDF("weird_string")
df.show()
+------------+
|weird_string|
+------------+
| hel?.o|
| bbhel?.o|
| hel?.oZZ|
| bye|
+------------+
Here's how to find all the strings that contain "hel?.o".
import java.util.regex.Pattern
df
.withColumn("has_hello", $"weird_string".rlike(Pattern.quote("hel?.o")))
.show()
+------------+---------+
|weird_string|has_hello|
+------------+---------+
| hel?.o| true|
| bbhel?.o| true|
| hel?.oZZ| true|
| bye| false|
+------------+---------+
You could also add the quote characters manually to get the same result:
df
.withColumn("has_hello", $"weird_string".rlike("""\Qhel?.o\E"""))
.show()
If you don't properly escape the regex, you won't get the right result:
df
.withColumn("has_hello", $"weird_string".rlike("hel?.o"))
.show()
+------------+---------+
|weird_string|has_hello|
+------------+---------+
| hel?.o| false|
| bbhel?.o| false|
| hel?.oZZ| false|
| bye| false|
+------------+---------+
See this post for more details.
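Tying this back to the original expression: if you do want to keep the rlike form with an arbitrary str, a safe variant of the same idea (a sketch, not a drop-in snippet) is to quote only the user-supplied part:

import java.util.regex.Pattern

// str is the arbitrary user-supplied String from the question. Pattern.quote wraps
// it in \Q...\E so any regex metacharacters in it are treated literally; rlike
// already matches anywhere in the value, so the surrounding ".*" is optional.
column.rlike(".*" + Pattern.quote(str) + ".*")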
I just started with Scala 2 days ago.
Here's the thing: I have a df and a list. The df contains two columns, paragraphs and authors; the list contains words (strings). I need to get, for each word in the list, the count of paragraphs in which that word appears, grouped by author.
So far my idea was to loop over the list, query the df with rlike, and build a new df from the results, but even if that works, I wouldn't know how to do it. Any help is appreciated!
Edit: Adding example data and expected output
// Example df and list
val df = Seq(("auth1", "some text word1"), ("auth2","some text word2"),("auth3", "more text word1").toDF("a","t")
df.show
+-----+---------------+
|    a|              t|
+-----+---------------+
|auth1|some text word1|
|auth2|some text word2|
|auth3|more text word1|
+-----+---------------+
val list = List("word1", "word2")
// Expected output
newDF.show
+-----+-----+----------+
| word|    a|text count|
+-----+-----+----------+
|word1|auth1|         1|
|word1|auth3|         1|
|word2|auth2|         1|
+-----+-----+----------+
You can do a filter and aggregation for each word in the list, and combine all the resulting dataframes using unionAll:
import org.apache.spark.sql.functions.{count, lit}

val result = list.map(word =>
  df.filter(df("t").rlike(s"\\b${word}\\b"))
    .groupBy("a")
    .agg(lit(word).as("word"), count(lit(1)).as("text count"))
).reduce(_ unionAll _)
result.show
+-----+-----+----------+
| a| word|text count|
+-----+-----+----------+
|auth3|word1| 1|
|auth1|word1| 1|
|auth2|word2| 1|
+-----+-----+----------+
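Note that unionAll is deprecated in Spark 2.x in favour of union, so .reduce(_ union _) does the same thing. If the word list is long, launching one filter/aggregation per word can get slow; a hedged alternative sketch (same df, list and column names a/t as above) does it in a single pass with explode:

import org.apache.spark.sql.functions.{array, col, count, explode, lit, when}

// Build one (possibly null) element per word, explode into rows, drop the misses,
// then count matching paragraphs per (word, author) in a single aggregation.
val singlePass = df
  .withColumn("word", explode(array(list.map(w => when(col("t").rlike(s"\\b${w}\\b"), lit(w))): _*)))
  .where(col("word").isNotNull)
  .groupBy("word", "a")
  .agg(count(lit(1)).as("text count"))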
I am trying to create a SHA256 hash of each row in a dataframe.
import org.apache.spark.sql.functions.{col, concat, sha2}
val finalResultWithHash = finalResult.withColumn("ROWHASH", sha2(concat(finalResult.columns.map(col):_*), 256))
When I had only one column in the dataframe it seemed to be working.
Later in the code I write the dataframe as a CSV and the rowhash column is empty.
I haven't been able to find any documentation on what I am doing wrong.
Thank you in advance.
Another way of doing it is by using a foldLeft():
val df2 = df.withColumn("rowsha",sha2(df.columns.foldLeft(lit(""))((x,y)=>concat(x,col(y))),256))
Folding concatenates all the columns left to right before hashing:
df.withColumn("rowsha",sha2(df.columns.foldLeft(lit(""))((x,y)=>concat(x,col(y))),256)).explain()
== Physical Plan ==
*(1) Project [c1#10, c2#11, c3#12, c4#13, sha2(cast(concat(, c1#10, c2#11, c3#12, c4#13) as binary), 256) AS rowsha#165]
+- *(1) ...
However, if any of the concatenated columns contains a NULL, the result will also be NULL. To safeguard against that you might want to use something like:
val df2 = df.withColumn("rowsha",sha2(df.columns.foldLeft(lit(""))((x,y)=>concat(x,coalesce(col(y),lit("n/a"))),256))
For some reason the code below works for me with multiple columns:
val finalResultWithHash = personDF.withColumn("ROWHASH", sha2(concat(personDF.columns.map(col): _*), 256))
+-----+-----+---+------+--------------------+
|FName|LName|Age|Gender| ROWHASH|
+-----+-----+---+------+--------------------+
| A| B| 29| M|c4ae6946a295e9d74...|
| A| C| 12| |89a18fdc3ddb3c2fd...|
| B| D| 35| F|ef1c89dfc765c7e1e...|
| Q| D| 85| |cd91aa387a7e6a180...|
| W| R| 14| |e9ff9bb78fd93a13a...|
+-----+-----+---+------+--------------------+
Could it just be a bracket placement mistake ...
In PySpark, how can I use expr to check whether a whole column contains the value in columnA of that row?
Pseudo code below:
df = df.withColumn("Result", expr(if any of the rows in column1 contains the value in colA (for this row) then 1 else 0))
Take an arbitrary example:
valuesCol = [('rose','rose is red'),('jasmine','I never saw Jasmine'),('lily','Lili dont be silly'),('daffodil','what a flower')]
df = sqlContext.createDataFrame(valuesCol,['columnA','columnB'])
df.show()
+--------+-------------------+
| columnA| columnB|
+--------+-------------------+
| rose| rose is red|
| jasmine|I never saw Jasmine|
| lily| Lili dont be silly|
|daffodil| what a flower|
+--------+-------------------+
Here is an application of expr(). To figure out how to use expr(), just look up the corresponding SQL syntax; it will mostly work as-is inside expr().
df = df.withColumn('columnA_exists',expr("(case when instr(lower(columnB), lower(columnA))>=1 then 1 else 0 end)"))
df.show()
+--------+-------------------+--------------+
| columnA| columnB|columnA_exists|
+--------+-------------------+--------------+
| rose| rose is red| 1|
| jasmine|I never saw Jasmine| 1|
| lily| Lili dont be silly| 0|
|daffodil| what a flower| 0|
+--------+-------------------+--------------+
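For reference, since the rest of this page is Scala: the same SQL fragment works unchanged through Scala's expr(). A sketch, assuming a DataFrame df with string columns columnA and columnB as in the example above:

import org.apache.spark.sql.functions.expr

// instr() is 1-based and returns 0 when the substring is not found.
val withFlag = df.withColumn(
  "columnA_exists",
  expr("case when instr(lower(columnB), lower(columnA)) >= 1 then 1 else 0 end")
)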
Is there any way to transpose dataframe rows into columns?
I have the following structure as input:
val inputDF = Seq(("pid1","enc1", "bat"),
("pid1","enc2", ""),
("pid1","enc3", ""),
("pid3","enc1", "cat"),
("pid3","enc2", "")
).toDF("MemberID", "EncounterID", "entry" )
inputDF.show:
+--------+-----------+-----+
|MemberID|EncounterID|entry|
+--------+-----------+-----+
| pid1| enc1| bat|
| pid1| enc2| |
| pid1| enc3| |
| pid3| enc1| cat|
| pid3| enc2| |
+--------+-----------+-----+
expected result:
+--------+----------+----------+----------+-----+
|MemberID|Encounter1|Encounter2|Encounter3|entry|
+--------+----------+----------+----------+-----+
| pid1| enc1| enc2| enc3| bat|
| pid3| enc1| enc2| null| cat|
+--------+----------+----------+----------+-----+
Please suggest if there is any optimized direct API available for transposing rows into columns.
My input data size is quite large, so I won't be able to perform actions like collect, as that would bring all the data onto the driver.
I am using Spark 2.x.
I am not sure that what you need is what you actually asked. Yet, just in case, here is an idea:
import org.apache.spark.sql.functions.collect_list

val entries = inputDF
  .where('entry.isNotNull)
  .where('entry =!= "")
  .select("MemberID", "entry")
  .distinct

val df = inputDF
  .groupBy("MemberID")
  .agg(collect_list("EncounterID") as "encounterList")
  .join(entries, Seq("MemberID"))
df.show
+--------+-------------------------+-----+
|MemberID| encounterList |entry|
+--------+-------------------------+-----+
| pid1| [enc2, enc1, enc3]| bat|
| pid3| [enc2, enc1]| cat|
+--------+-------------------------+-----+
The order of the list is not deterministic, but you can sort it and then extract new columns from it with .withColumn("Encounter1", sort_array($"encounterList")(0)) and so on, as sketched below.
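A sketch of that extraction (it assumes the df built just above and at most three encounters per member; indexing past the end of the sorted array yields null, which matches the null Encounter3 for pid3 in the expected output):

import org.apache.spark.sql.functions.sort_array

val wide = df
  .withColumn("sortedList", sort_array($"encounterList"))
  .select(
    $"MemberID",
    $"sortedList"(0).as("Encounter1"),
    $"sortedList"(1).as("Encounter2"),
    $"sortedList"(2).as("Encounter3"),
    $"entry"
  )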
Other idea
In case what you want is to put the value of entry in the corresponding "Encounter" column, you can use a pivot:
inputDF
.groupBy("MemberID")
.pivot("EncounterID", Seq("enc1", "enc2", "enc3"))
.agg(first("entry")).show
+--------+----+----+----+
|MemberID|enc1|enc2|enc3|
+--------+----+----+----+
| pid1| bat| | |
| pid3| cat| | |
+--------+----+----+----+
Passing Seq("enc1", "enc2", "enc3") is optional, but since you know the content of the column it speeds up the computation: without the explicit values, Spark first has to compute the distinct values of EncounterID itself.
I have two Spark dataframes, df1 and df2:
+-------+-----+---+
| name|empNo|age|
+-------+-----+---+
|shankar|12121| 28|
| ramesh| 1212| 29|
| suresh| 1111| 30|
| aarush| 0707| 15|
+-------+-----+---+
+------+-----+---+-----+
| eName| eNo|age| city|
+------+-----+---+-----+
|aarush|12121| 15|malmo|
|ramesh| 1212| 29|malmo|
+------+-----+---+-----+
I need to get the non-matching records from df1, based on a number of columns specified in another file.
For example, the column lookup file is something like this:
df1col,df2col
name,eName
empNo, eNo
Expected output is:
+-------+-----+---+
| name|empNo|age|
+-------+-----+---+
|shankar|12121| 28|
| suresh| 1111| 30|
| aarush| 0707| 15|
+-------+-----+---+
The question is how to build that condition dynamically for the above scenario, because the lookup file is configurable and might have 1 to n fields.
You can use the except DataFrame method. For simplicity, I'm assuming that the columns to use are in two lists. The order of the two lists must match, since columns at the same position in the lists are compared with each other (regardless of column name). After except, use join to get the missing columns back from the first dataframe.
val df1 = Seq(("shankar","12121",28),("ramesh","1212",29),("suresh","1111",30),("aarush","0707",15))
.toDF("name", "empNo", "age")
val df2 = Seq(("aarush", "12121", 15, "malmo"),("ramesh", "1212", 29, "malmo"))
.toDF("eName", "eNo", "age", "city")
val df1Cols = List("name", "empNo")
val df2Cols = List("eName", "eNo")
import org.apache.spark.sql.functions.broadcast

val tempDf = df1.select(df1Cols.head, df1Cols.tail: _*)
  .except(df2.select(df2Cols.head, df2Cols.tail: _*))

val df = df1.join(broadcast(tempDf), df1Cols)
The resulting dataframe will look as wanted:
+-------+-----+---+
| name|empNo|age|
+-------+-----+---+
| aarush| 0707| 15|
| suresh| 1111| 30|
|shankar|12121| 28|
+-------+-----+---+
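To avoid hard-coding df1Cols and df2Cols, one option (a sketch; the path is a placeholder and the header names follow the example lookup file above) is to read the small mapping file and collect it to the driver:

// The lookup file is tiny, so collecting it to the driver is fine.
val mapping = spark.read
  .option("header", "true")
  .csv("/path/to/lookup.csv")
  .collect()
  .map(row => (row.getString(0).trim, row.getString(1).trim))

val df1Cols = mapping.map(_._1).toList
val df2Cols = mapping.map(_._2).toList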
If you're doing this from a SQL query, I would remap the column names in the SQL query itself, along the lines of "Changing a SQL column title via query". You could do a simple text replace in the query to normalize them to the df1 or df2 column names.
Once you have that, you can diff using something like the approach in "How to obtain the difference between two DataFrames?".
If you need more columns that wouldn't be used in the diff (e.g. age) you can reselect the data again based on your diff results. This may not be the optimal way of doing it but it would probably work.