Hi, I'm starting to use PySpark and want to put a when/otherwise condition in:
df_1 = df.withColumn("test", when(df.first_name == df2.firstname & df.last_namne == df2.lastname, "1. Match on First and Last Name").otherwise ("No Match"))
I get the error below and wanted some assistance to understand why the above is not working.
Both df.first_name and df.last_name are strings, and df2.firstname and df2.lastname are strings too.
Error:
ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.
Thanks in advance
There are several issues in your statement:
For df.withColumn(), you cannot use df and df2 columns in one statement. First join the two dataframes using df.join(df2, on="some_key", how="left/right/full").
Enclose each side of the and condition of the "when" clause in round brackets: (df.first_name == df2.firstname) & (df.last_name == df2.lastname)
The string literals of "when" and "otherwise" should be enclosed in lit() like: lit("1. Match on First and Last Name") and lit("No Match").
There is possibly a typo in your field name df.last_namne.
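Putting those points together, a minimal sketch of the corrected statement (the join key "id" and the left join are assumptions for illustration, adjust them to your data):
from pyspark.sql.functions import when, lit

# Join the two dataframes first ("id" is only an assumed key for this sketch)
joined = df.join(df2, on="id", how="left")

df_1 = joined.withColumn(
    "test",
    when(
        (df.first_name == df2.firstname) & (df.last_name == df2.lastname),
        lit("1. Match on First and Last Name")
    ).otherwise(lit("No Match"))
)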
I am using Scala.
I have a dataframe with a column date which looks like this:
| date |
|2017-09-24T11:05:52.647+02:00|
|2018-09-24T11:05:52.647+02:00|
|2018-10-24T11:05:52.647+02:00|
I have a regex to check the date format:
val pattern = """[12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])T\d{2}:\d{2}:\d{2}\.\d{3}\+\d{2}:\d{2}""".r
I want to check whether each row in the dataframe matches the regex: if yes, return true, and if not, return false. I need to return just true or false, not a list.
Any help is welcome and many thanks for your help.
This should work, but I turn it around and find the first non-match:
import scala.util.Try
val result = Try(Option(df.filter($"cityid" rlike "[^0-9]").first)).toOption.flatten
if (result.isEmpty) { println("Empty")}
I use a DataFrame as the outcome, and you can just check whether it is empty or not.
Please tailor it to your own situation, e.g. not empty, your own regex.
Without the Try and such, .first generates an error if the result is empty; with it, None is returned when empty and you can do the empty check.
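Tailored to the question itself, a minimal sketch of the same idea that returns a single Boolean (assuming the column is named date, a SparkSession spark in scope, and the question's pattern anchored so the whole value must match):
import spark.implicits._   // or sqlContext.implicits._ on older versions, for the $"..." syntax

// Pattern from the question, anchored at both ends
val pattern = """^[12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])T\d{2}:\d{2}:\d{2}\.\d{3}\+\d{2}:\d{2}$"""

// true when every row matches, false as soon as any row does not
val allMatch: Boolean = df.filter(!($"date".rlike(pattern))).head(1).isEmpty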
I have a few JSON messages like
{"column1":"abc","column2":"123","column3":qwe"r"ty,"column4":"abc123"}
{"column1":"defhj","column2":"45","column3":asd"f"gh,"column4":"def12d"}
I need to add double quotes on both sides of the column3 value and replace the double quotes inside the column3 value with single quotes, using Scala.
You have mentioned in a comment above:
I have a huge dataset in Kafka. I am trying to read from Kafka and write to HDFS through Spark using Scala. I am using a JSON parser but am unable to parse it because of the column3 issue, so I need to manipulate the message to turn it into valid JSON.
So you must have a collection of malformed JSONs as in the question. I have created a list as
val kafkaMsg = List("""{"column1":"abc","column2":"123","column3":qwe"r"ty,"column4":"abc123"}""", """{"column1":"defhj","column2":"45","column3":asd"f"gh,"column4":"def12d"}""")
and since you are reading it through Spark, you must have RDDs like
val rdd = sc.parallelize(kafkaMsg)
All you need is some parsing of the malformed text JSON to make it a valid JSON string:
val validJson = rdd.map(msg =>
  msg.replaceAll("[}\"{]", "")                          // strip {, } and every double quote
    .split(",")                                         // -> "key:value" pairs
    .map(_.split(":").mkString("\"", "\":\"", "\""))    // re-quote each key and value
    .mkString("{", ",", "}")                            // rebuild the JSON object
)
validJson should be
{"column1":"abc","column2":"123","column3":"qwerty","column4":"abc123"}
{"column1":"defhj","column2":"45","column3":"asdfgh","column4":"def12d"}
You can create a dataframe from the validJson rdd as
sqlContext.read.json(validJson).show(false)
which should give you
+-------+-------+-------+-------+
|column1|column2|column3|column4|
+-------+-------+-------+-------+
|abc |123 |qwerty |abc123 |
|defhj |45 |asdfgh |def12d |
+-------+-------+-------+-------+
Or you can do as per your requirement.
Goal
add double quotes on both sides of the column3 value and replace the double quotes inside the column3 value with single quotes, using Scala.
I would recommend using a regex because it gives you more flexibility.
Here is the solution:
val kafkaMsg = List("""{"column1":"abc","column2":"123","column3":qwe"r"ty,"column4":"abc123"}""", """{"column1":"defhj","column2":"45","column3":asd"f"gh,"column4":"def12d"}""", """{"column1":"defhj","column2":"45","column3":without-quotes,"column4":"def12d"}""")
val rdd = sc.parallelize(kafkaMsg)
val rePattern = """(^\{.*)("column3":)(.*)(,"column4":.*)""".r
val newRdd = rdd.map(r =>
  r match {
    case rePattern(start, col3, col3Value, end) =>
      start + col3 + '"' + col3Value.replaceAll("\"", "'") + '"' + end
    case _ => r
  }
)
newRdd.foreach(println)
Explanation:
The first and second statements are the RDD initialization.
The third line defines the regex pattern; you may need to adjust it to your situation.
The regex produces 4 groups of values (whatever is inside a () is a group):
the string starting with "{" and everything after it until we meet "column3":
"column3": itself
whatever comes after "column3": but before ,"column4":
everything from ,"column4": to the end
I use these 4 groups in the next statement.
Iterate over your RDD, run each record against the regex, and change it: replace the double quotes with single quotes and add opening/closing quotes. If there is no match, the original string is returned.
Because the regex was defined with 4 groups, I use 4 variables to map the matches:
case rePattern(start, col3, col3Value, end) =>
Note: the code doesn't check whether the value actually contains double quotes or not; it just runs the update. You can add validation on your own if you need it.
Show the results.
Important notes:
The regex I used is strictly tied to your source string format. Keep in mind that you have JSON, so the order of the keys is not guaranteed; as a result, "column4" (which is used as the end marker of the column3 value) may come before "column3".
If you use a comma as the key/value ending, make sure it does not appear inside the column3 value.
Bottom line: you need to adjust my regex to properly identify the end of the column3 value.
Hope it helps.
I have a data frame with n columns and I want to replace the empty strings in all of them with nulls.
I tried using
val ReadDf = rawDF.na.replace("columnA", Map( "" -> null));
and
val ReadDf = rawDF.withColumn("columnA", if($"columnA"=="") lit(null) else $"columnA" );
Neither of them worked.
Any leads would be highly appreciated. Thanks.
Your first approach seems to fail due to a bug that prevents replace from being able to replace values with nulls, see here.
Your second approach fails because you're confusing driver-side Scala code with executor-side DataFrame instructions: your if-else expression would be evaluated once on the driver (and not per record). You'd want to replace it with a call to the when function. Moreover, to compare a column's value you need to use the === operator, not Scala's ==, which just compares the driver-side Column objects:
import org.apache.spark.sql.functions._
rawDF.withColumn("columnA", when($"columnA" === "", lit(null)).otherwise($"columnA"))
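Since the question mentions n columns, the same expression can be folded over every column; a minimal sketch under that assumption (and assuming the columns are string-typed):
import org.apache.spark.sql.functions._

// Fold the empty-string -> null replacement over all columns of the frame
val cleanedDF = rawDF.columns.foldLeft(rawDF) { (acc, c) =>
  acc.withColumn(c, when(col(c) === "", lit(null)).otherwise(col(c)))
}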
I'm attempting to remove rows in which a Spark dataframe column contains blank strings. Originally I did val df2 = df1.na.drop(), but it turns out many of these values are being encoded as "".
I'm stuck using Spark 1.3.1 and also cannot rely on DSL. (Importing spark.implicit_ isn't working.)
Removing things from a dataframe requires filter().
newDF = oldDF.filter("colName != ''")
or am I misunderstanding your question?
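If the blanks can show up in any of the columns, one way to extend this, sticking to string expressions (so no DSL is needed, which matters on 1.3.1), might be:
// Build one condition string over all columns; note that rows containing nulls
// in any column are dropped by this condition as well
val condition = oldDF.columns.map(c => s"$c != ''").mkString(" AND ")
val newDF = oldDF.filter(condition)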
In case someone doesn't want to drop the records with blank strings but just wants to convert the blank strings to some constant value:
val newdf = df.na.replace(df.columns,Map("" -> "0")) // to convert blank strings to zero
newdf.show()
You can use this:
df.filter(!($"col_name"===""))
It filters out the rows where the value of "col_name" is "", i.e. nothing/blank string. I'm using the match filter and then inverting it with "!".
I am also new to Spark, so I don't know whether the code below is more complex than necessary, but it works.
Here we are creating a UDF which converts blank values to null.
sqlContext.udf().register("convertToNull",(String abc) -> (abc.trim().length() > 0 ? abc : null),DataTypes.StringType);
After the above code you can use "convertToNull" (works on strings) in a select clause to make all blank fields null, and then use .na().drop().
crimeDataFrame.selectExpr("C0","convertToNull(C1)","C2","C3").na().drop()
Note: You can use the same approach in Scala.
https://jaceklaskowski.gitbooks.io/mastering-apache-spark/content/spark-sql-udfs.html
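For reference, a rough Scala sketch of the same idea (the column names C0 to C3 are taken from the example above; treat it as untested):
import org.apache.spark.sql.functions.{col, udf}

// Scala version of the same UDF: blank or whitespace-only strings become null
val convertToNull = udf((abc: String) => if (abc != null && abc.trim.nonEmpty) abc else null)

crimeDataFrame
  .select(col("C0"), convertToNull(col("C1")).as("C1"), col("C2"), col("C3"))
  .na.drop()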
rlike works fine but not rlike throws an error:
scala> sqlContext.sql("select * from T where columnB rlike '^[0-9]*$'").collect()
res42: Array[org.apache.spark.sql.Row] = Array([412,0], [0,25], [412,25], [0,25])
scala> sqlContext.sql("select * from T where columnB not rlike '^[0-9]*$'").collect()
java.lang.RuntimeException: [1.35] failure: ``in'' expected but `rlike' found
val df = sc.parallelize(Seq(
(412, 0),
(0, 25),
(412, 25),
(0, 25)
)).toDF("columnA", "columnB")
Or is it a continuation of issue https://issues.apache.org/jira/browse/SPARK-4207 ?
A concise way to do it in PySpark is:
df.filter(~df.column.rlike(pattern))
There is no such thing as not rlike, but in regex you have something called a negative lookahead, which lets you match the strings that do not contain a given pattern.
For the above query, you can use a regex like the one below. Let's say you want the rows where columnB does not contain any digit from 1 to 9 (i.e. only zeros).
Then you can do it like this:
sqlContext.sql("select * from T where columnB rlike '^(?!.*[1-9]).*$'").collect()
Result: Array[org.apache.spark.sql.Row] = Array([412,0])
What I mean overall is that you have to negate the match within the regex itself, not with rlike. rlike simply matches the regex that you give it: if your regex expresses a negation, it applies that; if your regex is a positive match, it does that.
The answers above suggest using a negative lookahead. That can work in some cases. However, regexes were not designed to make an efficient negative match.
Such regexes are error-prone and hard to read.
Spark does support "not rlike" since version 2.0.
# given 'url' is a column on the dataframe
df.filter("url not rlike 'stackoverflow.com'")
The only usage known to me is a SQL string expression (as above). I could not find a "not" SQL DSL function in the Python API. There might be one in Scala.
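That said, in the Scala Column API the negation appears to be available through the unary ! operator (or the not function); a small sketch, assuming a url column as above:
import org.apache.spark.sql.functions.{col, not}

// Same filter expressed through the Column API instead of a SQL string
val filtered = df.filter(not(col("url").rlike("stackoverflow.com")))
// equivalently: df.filter(!col("url").rlike("stackoverflow.com"))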
I know your question is getting a bit old, but just in case: have you simply tried Scala's unary "!" operator?
In Java you would do something like this:
DataFrame df = sqlContext.table("T");
DataFrame notLikeDf = df.filter(
    df.col("columnB").rlike("^[0-9]*$").unary_$bang()
);
In PySpark, I did this as:
df = load_your_df()
matching_regex = "yourRegexString"
matching_df = df.filter(df.fieldName.rlike(matching_regex))
non_matching_df = df.subtract(matching_df)