How to apply nltk.pos_tag on a pyspark dataframe

I'm trying to apply POS tagging to one of my tokenized columns, called "removed", in a pyspark dataframe.
I'm trying with:
nltk.pos_tag(df_removed.select("removed"))
But all I get is a ValueError: Cannot apply 'in' operator against a column: please use 'contains' in a string column or 'array_contains' function for an array column.
How can I make it work?

It seems the answer is in the error message: the input of pos_tag should be a list of tokens (or a string), but you are providing a whole column. You should apply pos_tag to each row of your column, for example by wrapping it in a UDF and using withColumn.
For example (see the full sketch below for the required imports), you can start by writing:
pos_tag_udf = F.udf(nltk.pos_tag, ArrayType(StructType([StructField("word", StringType()), StructField("tag", StringType())])))
my_new_df = df_removed.withColumn("removed", pos_tag_udf(df_removed.removed))
You can also do:
my_new_df = df_removed.select("removed").rdd.map(lambda row: (nltk.pos_tag(row.removed),)).toDF(["removed"])
Here is the documentation.
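For reference, a more complete, self-contained sketch of the UDF approach might look like this (it assumes the "removed" column holds an array of token strings and that the NLTK tagger data is available on every worker; the helper name and output column are illustrative):

import nltk
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StringType

# nltk.pos_tag returns a list of (token, tag) tuples; format each pair as
# "token/tag" so the result fits a plain array<string> column.
def tag_tokens(tokens):
    if tokens is None:
        return None
    return ["{}/{}".format(word, tag) for word, tag in nltk.pos_tag(tokens)]

pos_tag_udf = F.udf(tag_tokens, ArrayType(StringType()))

tagged_df = df_removed.withColumn("tagged", pos_tag_udf(F.col("removed")))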

Related

Pyspark: How to take the minimum of a timestamp column?

In pyspark, I tried to do this:
df = df.select(F.col("id"),
               F.col("mp_code"),
               F.col("mp_def"),
               F.col("mp_desc"),
               F.col("mp_code_desc"),
               F.col("zdmtrt06_zstation").alias("station"),
               F.to_timestamp(F.col("date_time"), "yyyyMMddHHmmss").alias("date_time_utc"))
df = df.groupBy("id", "mp_code", "mp_def", "mp_desc", "mp_code_desc", "station").min(F.col("date_time_utc"))
But I have an issue:
raise TypeError("Column is not iterable")
TypeError: Column is not iterable
Here is an extract from the pyspark documentation:
GroupedData.min(*cols)
    Computes the min value for each numeric column for each group.
    New in version 1.3.0.
    Parameters: cols : str
In other words, the min function does not support Column arguments. It only works with column names (strings), like this:
df.groupBy("x").min("date_time_utc")
# you can also specify several column names
df.groupBy("x").min("y", "z")
Note that if you want to use a column object, you have to use agg:
df.groupBy("x").agg(F.min(F.col("date_time_utc")))

How do I replace a string with 0 in multiple columns in Pyspark?

As in the title. I have a list of columns and need to replace a certain string with 0 in these columns. I can do that using a select statement with a nested when function, but I want to preserve my original dataframe and only change the columns in question. df.replace(string, 0, list_of_columns) doesn't work as there is a data type mismatch.
So I ended up with something like this, which worked for me:
for column in column_list:
    df = df.withColumn(column, F.when(F.col(column) == "string", "0").otherwise(F.col(column)))
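For reference, a self-contained sketch of this loop on toy data (the column names, the placeholder string "N/A", and column_list are made up for illustration):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("N/A", "5", "x"), ("3", "N/A", "y")],
    ["col_a", "col_b", "other"],
)

column_list = ["col_a", "col_b"]

# Rewrite only the listed columns; every other column is left untouched.
for column in column_list:
    df = df.withColumn(
        column,
        F.when(F.col(column) == "N/A", "0").otherwise(F.col(column)),
    )

df.show()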

pyspark add int column to a fixed date

I have a fixed date "2000/01/01" and a dataframe:
data1 = [{'index':1,'offset':50}]
data_p = sc.parallelize(data1)
df = spark.createDataFrame(data_p)
I want to create a new column by adding the offset column to this fixed date.
I tried different methods but could not pass the column directly, and expr gives an error such as:
function is neither a registered temporary function nor a permanent function registered in the database 'default'
The only solution I can think of is:
df = df.withColumn("zero",lit(datetime.strptime('2000/01/01', '%Y/%m/%d')))
df.withColumn("date_offset",expr("date_add(zero,offset)")).drop("zero")
Since I cannot use lit and datetime.strptime inside expr, I have to use this approach, which creates a redundant column and redundant operations.
Any better way to do it?
Since you have tagged this as a pyspark question, in Python you can do the following:
df_a3.withColumn("date_offset",F.lit("2000-01-01").cast("date") + F.col("offset").cast("int")).show()
Edit: as per the comment below, let's assume there is an extra column type; based on it, the code below can be used:
df_a3.withColumn("date_offset",F.expr("case when type ='month' then add_months(cast('2000-01-01' as date),offset) else date_add(cast('2000-01-01' as date),cast(offset as int)) end ")).show()

Sum() and count() are not working together with agg in pyspark2

I am using a JSON data file "order_items" and the data looks like:
{"order_item_id":1,"order_item_order_id":1,"order_item_product_id":957,"order_item_quantity":1,"order_item_subtotal":299.98,"order_item_product_price":299.98}
{"order_item_id":2,"order_item_order_id":2,"order_item_product_id":1073,"order_item_quantity":1,"order_item_subtotal":199.99,"order_item_product_price":199.99}
{"order_item_id":3,"order_item_order_id":2,"order_item_product_id":502,"order_item_quantity":5,"order_item_subtotal":250.0,"order_item_product_price":50.0}
{"order_item_id":4,"order_item_order_id":2,"order_item_product_id":403,"order_item_quantity":1,"order_item_subtotal":129.99,"order_item_product_price":129.99}
orders = spark.read.json("/user/data/retail_db_json/order_items")
I am getting an error while running the following command:
orders.where("order_item_order_id in (2,4,5,6,7,8,9,10)").groupby("order_item_order_id").agg(sum("order_item_subtotal"),count()).orderBy("order_item_order_id").show()
TypeError: unsupported operand type(s) for +: 'int' and 'str'
I am not sure why I am getting this. All column values are strings. Any suggestions?
Cast the column to a numeric type first. You can't apply aggregation methods to string types.
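For reference, a minimal sketch of the casting approach (assuming an active spark session; the cast to double and the output aliases are illustrative). Referencing the aggregates through the functions module also avoids accidentally calling Python's built-in sum on a column name:

from pyspark.sql import functions as F

orders = spark.read.json("/user/data/retail_db_json/order_items")

(orders
    .where("order_item_order_id in (2,4,5,6,7,8,9,10)")
    .groupBy("order_item_order_id")
    .agg(
        F.sum(F.col("order_item_subtotal").cast("double")).alias("order_revenue"),
        F.count("*").alias("item_count"),
    )
    .orderBy("order_item_order_id")
    .show())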

Spark dataframe date_add function with case when not working

I have a Spark DataFrame in which I want to add a number of days to the existing date column based on some condition.
My code is something like this:
F.date_add(df.transDate,
           F.when(F.col('txn_dt') == '2016-01-11', 9999).otherwise(10))
Since the date_add() function expects an int as its second argument but my code passes a Column, it throws an error.
How can I get the value out of the case when condition?
pyspark.sql.functions.when() returns a Column, which is why your code produces the TypeError: 'Column' object is not callable.
You can get the desired result by moving the when to the outside, like this:
F.when(
    F.col('txn_dt') == '2016-01-11',
    F.date_add(df.transDate, 9999)
).otherwise(F.date_add(df.transDate, 10))
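For reference, a minimal sketch of this pattern applied with withColumn on toy data (the sample rows and the new_date column name are made up; the transDate and txn_dt column names follow the question):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("2016-01-11", "2016-01-11"), ("2016-02-01", "2016-02-01")],
    ["transDate", "txn_dt"],
).withColumn("transDate", F.col("transDate").cast("date"))

# Branch on the condition first, so date_add always receives a plain int.
df = df.withColumn(
    "new_date",
    F.when(
        F.col("txn_dt") == "2016-01-11",
        F.date_add(F.col("transDate"), 9999),
    ).otherwise(F.date_add(F.col("transDate"), 10)),
)
df.show()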