PySpark: Getting a directory from a path string

I have a string holding my current working directory, something like "Aw/Bt/Ce/Dr".
I need to search the string and retrieve "Bt".
Is there a way I could do that in PySpark?
TIA, Jagan

If you are always interested in the directory name between the first and the second /, you can split the string on / and choose the second element.
from pyspark.sql import functions as F
df = spark.sql("select 'Aw/Bt/Ce/Dr' as path")
df.withColumn("output", F.split(F.col("path"), "/")[1]).show()
Output
+-----------+------+
|       path|output|
+-----------+------+
|Aw/Bt/Ce/Dr|    Bt|
+-----------+------+
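If the path is only a plain Python string (as "current working directory" suggests) rather than a DataFrame column, the same idea works without Spark; a minimal sketch using the example path from the question:
# Plain Python: split on "/" and take the element between the first and second slash
path = "Aw/Bt/Ce/Dr"
second_segment = path.split("/")[1]
print(second_segment)  # Bt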

Try this:
from pyspark.sql.functions import regexp_extract, col
df.withColumn("New", regexp_extract(col("Column_name"), '(Bt)', 1))
Result:
+---+----------+---+
| Id|      Name|New|
+---+----------+---+
|  1|         e|   |
|  2|Aw/Bt/C/Dr| Bt|
|  3| A/B/Ce/Dr|   |
+---+----------+---+
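Note that the pattern '(Bt)' only matches the literal value Bt. If the goal is the segment between the first and second / for any path, a more general pattern (my assumption, not part of the original answer) could be:
from pyspark.sql.functions import regexp_extract, col
# Capture whatever sits between the first and second "/"; returns "" when there is no match
df.withColumn("New", regexp_extract(col("Column_name"), r'^[^/]+/([^/]+)', 1))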

Related

PySpark: Remove leading numbers and full stop from dataframe column

I'm trying to remove the numbers and full stops that precede the names of horses in a betting dataframe.
The format is like this, with a number and a full stop in front of each name:
1. Horse Name
2. Horse Name
I would like the resulting df column to just have the horse's name.
I've tried splitting the column at the full stop but am not getting the required result.
import pyspark.sql.functions as F
runners_returns = runners_returns.withColumn('runner_name', F.split(F.col('runner_name'), '.'))
Any help is greatly appreciated
With a DataFrame like the following:
df.show()
+---+-----------+
| ID|runner_name|
+---+-----------+
|  1|   123.John|
|  2|   5.42Anna|
|  3|   .203Josh|
|  4|    102Paul|
+---+-----------+
You can remove the leading numbers and periods like this:
import pyspark.sql.functions as F
df = (df.withColumn("runner_name",
                    F.regexp_replace('runner_name', r'(^[\d\.]+)', '')))
df.show()
+---+-----------+
| ID|runner_name|
+---+-----------+
|  1|       John|
|  2|       Anna|
|  3|       Josh|
|  4|       Paul|
+---+-----------+
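For a self-contained test, the sample DataFrame can be rebuilt with spark.createDataFrame (column names taken from the output above) and the same replacement applied:
import pyspark.sql.functions as F

df = spark.createDataFrame(
    [(1, "123.John"), (2, "5.42Anna"), (3, ".203Josh"), (4, "102Paul")],
    ["ID", "runner_name"])
# Strip any leading run of digits and periods from the name
df = df.withColumn("runner_name", F.regexp_replace("runner_name", r"^[\d\.]+", ""))
df.show()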

Is there a way we can rename or alias a column with a blank space in PySpark?

I am getting an error while renaming a column. Is there any way I can rename it, given that there is a space in the column name?
df=df.withColumnRenamed("std deviation","stdDeviation")
Error:AnalysisException: Attribute name "std deviation" contains invalid character(s) among " ,;{}()\n\t=". Please use alias to rename it.
I tried another way using alias, but with no success.
df=df.select(col("std deviation").alias("stdDeviation"))
Is there a way I can rename columns that contain a space?
Yes, it's possible.
>>> input_df.show()
+-----+
|value|
+-----+
|    1|
|    2|
|    3|
+-----+
>>> input_df = input_df.withColumnRenamed("value", "test value")
>>> input_df.show()
+----------+
|test value|
+----------+
|         1|
|         2|
|         3|
+----------+
# The other way round #
>>> input_df = input_df.withColumnRenamed("test value", "value")
>>> input_df.show()
+-----+
|value|
+-----+
|    1|
|    2|
|    3|
+-----+
It is odd that you are using
df = spark.read.option("header", "true").parquet(source_file_path)
The header option is not needed when reading Parquet files.
Besides that, you should use
df = df.withColumnRenamed("`old name`", "new_name")
No, according to the source code, there is no way to do that with Spark. One alternative is to use pandas.read_parquet; it should work. However, I'm not sure how big your file is and whether your local computer (or cluster driver) can handle it.
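A rough sketch of that pandas detour; source_file_path is the placeholder from the question, and this assumes the file fits in driver memory:
import pandas as pd

# pandas (via pyarrow) can usually read Parquet column names containing spaces
pdf = pd.read_parquet(source_file_path)
pdf = pdf.rename(columns={"std deviation": "stdDeviation"})

# hand the cleaned frame back to Spark if a Spark DataFrame is needed
df = spark.createDataFrame(pdf)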

How to do regexp_replace in one line in pyspark dataframe?

I have a PySpark dataframe column:
df.groupBy('Gender').count().show()
+------+------+
|Gender| count|
+------+------+
|     F| 44015|
|  null| 42175|
|     M|104423|
|      |     1|
+------+------+
I am doing regexp_replace:
#df = df.fillna({'Gender':'missing'})
df = df.withColumn('Gender', regexp_replace('Gender', 'F','Female'))
df = df.withColumn('Gender', regexp_replace('Gender', 'M','Male'))
df = df.withColumn('Gender', regexp_replace('Gender', ' ','missing'))
Instead of calling df for each line, can this be done in one line?
If you do not want to use regexp_replace three times, you can use a when/otherwise clause.
from pyspark.sql import functions as F
from pyspark.sql.functions import when
df.withColumn("Gender", F.when(F.col("Gender")=='F',F.lit("Female"))\
.when(F.col("Gender")=='M',F.lit("Male"))\
.otherwise(F.lit("missing"))).show()
+-------+------+
| Gender| count|
+-------+------+
| Female| 44015|
|missing| 42175|
|   Male|104423|
|missing|     1|
+-------+------+
Or you could chain your three regexp_replace calls in one line like this:
from pyspark.sql.functions import regexp_replace
df.withColumn('Gender', regexp_replace(regexp_replace(regexp_replace('Gender', 'F','Female'),'M','Male'),' ','missing')).show()
+-------+------+
| Gender| count|
+-------+------+
| Female| 44015|
|   null| 42175|
|   Male|104423|
|missing|     1|
+-------+------+
I think when/otherwise should outperform the three regexp_replace calls, because with regexp_replace you will also need fillna to handle the nulls.
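For comparison, the regexp_replace route would need the fillna that is commented out in the question to catch the nulls; roughly:
from pyspark.sql.functions import regexp_replace

# fill nulls first, then do all three replacements in one chained expression
df = df.fillna({'Gender': 'missing'})
df = df.withColumn('Gender',
                   regexp_replace(regexp_replace(regexp_replace('Gender', 'F', 'Female'), 'M', 'Male'), ' ', 'missing'))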

Spark (scala) dataframes - Check whether strings in column exist in a column of another dataframe

I have a Spark dataframe, and I wish to check whether each string in a particular column exists in a pre-defined column of another dataframe.
I have found a similar problem in Spark (scala) dataframes - Check whether strings in column contain any items from a set
but I want to check whether the strings in a column exist in a column of another dataframe, not in a List or a Set as in that question. I don't know how to convert a column to a set or a list, and I don't know of an "exists" method on a dataframe. Can anyone help me?
My data is similar to this
df1:
+---+-----------------+
| id|              url|
+---+-----------------+
|  1|       google.com|
|  2|     facebook.com|
|  3|       github.com|
|  4|stackoverflow.com|
+---+-----------------+
df2:
+---+------------+
| id|   urldetail|
+---+------------+
| 11|  google.com|
| 12|   yahoo.com|
| 13|facebook.com|
| 14| twitter.com|
| 15| youtube.com|
+---+------------+
Now I am trying to create a third column with the result of a comparison checking whether each string in the $"urldetail" column exists in $"url":
+---+------------+-----+
| id|   urldetail|check|
+---+------------+-----+
| 11|  google.com|    1|
| 12|   yahoo.com|    0|
| 13|facebook.com|    1|
| 14| twitter.com|    0|
| 15| youtube.com|    0|
+---+------------+-----+
I want to use a UDF, but I don't know how to check whether a string exists in a column of a dataframe. Please help me!
I have a spark dataframe, and I wish to check whether each string in a particular column contains any number of words from a pre-defined column of another dataframe.
Here is the way, using = or like:
package examples
import org.apache.log4j.Level
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, _}

object CompareColumns extends App {
  val logger = org.apache.log4j.Logger.getLogger("org")
  logger.setLevel(Level.WARN)
  val spark = SparkSession.builder()
    .appName(this.getClass.getName)
    .config("spark.master", "local").getOrCreate()
  import spark.implicits._
  val df1 = Seq(
    (1, "google.com"),
    (2, "facebook.com"),
    (3, "github.com"),
    (4, "stackoverflow.com")).toDF("id", "url").as("first")
  df1.show
  val df2 = Seq(
    (11, "google.com"),
    (12, "yahoo.com"),
    (13, "facebook.com"),
    (14, "twitter.com")).toDF("id", "url").as("second")
  df2.show
  val df3 = df2.join(df1, expr("first.url like second.url"), "full_outer").select(
    col("first.url"),
    col("first.url").contains(col("second.url")).as("check")).filter("url is not null")
  df3.na.fill(Map("check" -> false))
    .show
}
Result:
+---+-----------------+
| id|              url|
+---+-----------------+
|  1|       google.com|
|  2|     facebook.com|
|  3|       github.com|
|  4|stackoverflow.com|
+---+-----------------+
+---+------------+
| id|         url|
+---+------------+
| 11|  google.com|
| 12|   yahoo.com|
| 13|facebook.com|
| 14| twitter.com|
+---+------------+
+-----------------+-----+
|              url|check|
+-----------------+-----+
|       google.com| true|
|     facebook.com| true|
|       github.com|false|
|stackoverflow.com|false|
+-----------------+-----+
With a full outer join we can achieve this.
For more details, see my article covering all the joins in my LinkedIn post.
Note: Instead of 0 for false and 1 for true, I have used booleans here; you can translate them into whatever you want.
UPDATE: If rows keep being added to the second dataframe, you can use this; it won't miss any rows from the second one.
val df3 = df2.join(df1, expr("first.url like second.url"), "full").select(
    col("second.*"),
    col("first.url").contains(col("second.url")).as("check"))
  .filter("url is not null")
df3.na.fill(Map("check" -> false))
  .show
Also, you can try regexp_extract as shown in the post below:
https://stackoverflow.com/a/53880542/647053
Read in your data and use the trim operation, just to be conservative, to remove whitespace before joining on the strings.
val df = Seq((1, "google.com"), (2, "facebook.com"), (3, "github.com "), (4, "stackoverflow.com"))
  .toDF("id", "url").select($"id", trim($"url").as("url"))
val df2 = Seq((11, "google.com"), (12, "yahoo.com"), (13, "facebook.com"), (14, "twitter.com"), (15, "youtube.com"))
  .toDF("id", "urldetail").select($"id", trim($"urldetail").as("urldetail"))
df.join(df2.withColumn("flag", lit(1)).drop("id"), df("url") === df2("urldetail"), "left_outer")
  .withColumn("contains_bool", when($"flag" === 1, true).otherwise(false))
  .drop("flag", "urldetail").show
+---+-----------------+-------------+
| id|              url|contains_bool|
+---+-----------------+-------------+
|  1|       google.com|         true|
|  2|     facebook.com|         true|
|  3|       github.com|        false|
|  4|stackoverflow.com|        false|
+---+-----------------+-------------+
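Since the rest of this page is PySpark, here is a rough PySpark sketch of the same join idea that produces the 1/0 check column the question asked for (the sample data is re-created with spark.createDataFrame; this is my own translation, not part of either answer):
from pyspark.sql import functions as F

df1 = spark.createDataFrame(
    [(1, "google.com"), (2, "facebook.com"), (3, "github.com"), (4, "stackoverflow.com")],
    ["id", "url"])
df2 = spark.createDataFrame(
    [(11, "google.com"), (12, "yahoo.com"), (13, "facebook.com"),
     (14, "twitter.com"), (15, "youtube.com")],
    ["id", "urldetail"])

# flag every url that exists in df1, left-join onto df2, and fill the misses with 0
flags = df1.select("url").distinct().withColumn("check", F.lit(1))
(df2.join(flags, df2["urldetail"] == flags["url"], "left")
    .drop("url")
    .fillna({"check": 0})
    .show())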

Sum of single column across rows based on a condition in Spark Dataframe

Consider the following dataframe:
+-------+-----------+-------+
|    rid|  createdon|  count|
+-------+-----------+-------+
|    124| 2017-06-15|      1|
|    123| 2017-06-14|      2|
|    123| 2017-06-14|      1|
+-------+-----------+-------+
I need to sum the count column across rows that have the same createdon and rid.
Therefore the resulting dataframe should be as follows:
+-------+-----------+-------+
|    rid|  createdon|  count|
+-------+-----------+-------+
|    124| 2017-06-15|      1|
|    123| 2017-06-14|      3|
+-------+-----------+-------+
I am using Spark 2.0.2.
I have tried agg, conditions inside select, etc., but couldn't find a solution. Can anyone help me?
Try this:
import org.apache.spark.sql.{functions => func}
df.groupBy($"rid", $"createdon").agg(func.sum($"count").alias("count"))
This should do what you want:
import org.apache.spark.sql.functions.sum
df
  .groupBy($"rid", $"createdon")
  .agg(sum($"count").as("count"))
  .show
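The same aggregation in PySpark, for anyone following along from the earlier questions on this page (a sketch assuming the same column names):
from pyspark.sql import functions as F

# sum the count column per (rid, createdon) pair
df.groupBy("rid", "createdon").agg(F.sum("count").alias("count")).show()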