How can I refer to a column by its index? - pyspark

I can use the col("mycolumnname") function to get a column object.
According to the documentation, its only parameter is the name of the column.
Is there any way to get the column object by its index instead?

Try this:
Let n be the index variable (integer).
df.select(df.columns[n]).show()
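If you need the Column object itself rather than a select, one option (a minimal sketch; df and n are assumed to exist as above) is to look the name up in df.columns and wrap it:
import pyspark.sql.functions as F

# df.columns is a plain Python list of names, so it can be indexed;
# col() (or DataFrame bracket access) then turns the name into a Column.
c = F.col(df.columns[n])
c = df[df.columns[n]]  # equivalent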

Is this the expected result?
import pyspark.sql.functions as F
...
data = [
(1, 'AC Milan'),
(2, 'Real Madrid'),
(3, 'Bayern Munich')
]
df = spark.createDataFrame(data, ['id', 'club'])
df.select(F.col('club')).show()
df.select(df['club']).show()
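Both selects return the same single-column result, something like:
+-------------+
|         club|
+-------------+
|     AC Milan|
|  Real Madrid|
|Bayern Munich|
+-------------+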

Related

How to append 'explode'd columns to a dataframe keeping all existing columns?

I'm trying to add exploded columns to a dataframe:
from pyspark.sql.functions import *
from pyspark.sql.types import *

# Convenience function for turning JSON strings into DataFrames.
def jsonToDataFrame(json, schema=None):
    # SparkSessions are available with Spark 2.0+
    reader = spark.read
    if schema:
        reader.schema(schema)
    return reader.json(sc.parallelize([json]))

schema = StructType().add("a", MapType(StringType(), IntegerType()))

events = jsonToDataFrame("""
{
  "a": {
    "b": 1,
    "c": 2
  }
}
""", schema)

display(
  events.withColumn("a", explode("a").alias("x", "y"))
)
However, I'm hitting the following error:
AnalysisException: The number of aliases supplied in the AS clause does not match the number of columns output by the UDTF expected 2 aliases but got a
Any ideas?
In the end, I used the following:
display(
  events.select(explode("a").alias("x", "y"), *[c for c in events.columns])
)
This approach uses select to specify the columns to return.
The first argument explodes the data:
explode("a").alias("x", "y")
The second argument specifies that all existing columns should be included in the select:
*[c for c in events.columns]
Note that I'm prefixing the list with * - this sends each column name as a separate parameter.
Simpler Method
The API docs specify:
Parameters
cols : str, Column, or list
    column names (string) or expressions (Column). If one of the column names is ‘*’, that column is expanded to include all columns in the current DataFrame.
We can simplify the first approach by passing in "*" to select all the columns:
display(
  events.select("*", explode("a").alias("x", "y"))
)
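For reference, with .show() instead of display() the final select should produce something like this on the sample input (map rendering differs slightly across Spark versions):
events.select("*", explode("a").alias("x", "y")).show()
# +----------------+---+---+
# |               a|  x|  y|
# +----------------+---+---+
# |{b -> 1, c -> 2}|  b|  1|
# |{b -> 1, c -> 2}|  c|  2|
# +----------------+---+---+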

How do you select the 'maximum' struct from each group

I have a dataframe that contains an id column and a struct of two values, order_value:
example_input = spark.createDataFrame([(1, (1,2)), (1, (2,1)), (2, (1,2))], ["id", "order_value"])
I would like to keep one record per id: the one with the maximum order_value. Specifically, the maximum of the order (first part of order_value), with ties broken by the maximum of the value (second part of order_value).
How can this be done?
example_input.groupby('id').max() doesn't seem to work as it complains that order_value is not numeric.
my desired output is given by:
example_output = spark.createDataFrame([(1, (2,1)), (2, (1,2))], ["id", "order_value"])
Try the array_max function in Spark.
Example:
from pyspark.sql.functions import array_max, collect_list, col

# groupBy on id, then collect_list builds one array per group
# and array_max picks its maximum (structs compare field by field)
example_input.groupBy("id") \
    .agg(array_max(collect_list(col("order_value"))).alias("order_value")) \
    .show(10, False)
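An alternative worth trying, assuming a Spark version where the max aggregate accepts orderable struct types: structs compare field by field (first order, then value), which is exactly the tie-breaking described, so you can aggregate the struct directly without collecting it into an array:
from pyspark.sql.functions import max as max_

# max over the struct picks the highest order, breaking ties on value
example_input.groupBy("id").agg(max_("order_value").alias("order_value")).show()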

Pyspark dynamic column name

I have a dataframe which contains months and will change quite frequently. I am saving this dataframe's values as a list, e.g. months = ['202111', '202112', '202201']. I am using a for loop to iterate through all the list elements and trying to build dynamic column names with the following code:
for i in months:
    df = (
        adjustment_1_prepared_df.select("product", "mnth", "col1", "col2")
        .groupBy("product")
        .agg(
            f.min(f.when(condition, f.col("col1")).otherwise(9999999)).alias(
                concat("col3_"), f.lit(i.col)
            )
        )
    )
So basically, in the alias I am trying to give the column name as a combination of a constant prefix (e.g. "minInv_") and a variable (e.g. 202111), but I am getting an error. How can I give a column name that combines a fixed string and a variable?
Thanks in advance!
.alias("col3_"+str(i.col))

Adding empty columns to dataframe with empty values (by type) pyspark

I have the following list:
columns = [('url','string'),('count','bigint'),('isindex','boolean')]
I want to add these columns to my df with empty values:
for column in columns:
    df = df.withColumn(column[0], f.lit(?).cast(?))
I am not sure what to put in the lit function and in the cast in order to get a suitable empty value for each type.
Thank you!
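One common pattern for this (a sketch, not an accepted answer from this thread): pass None to lit and cast the resulting null to the declared type string, which yields a properly typed empty column:
import pyspark.sql.functions as f

columns = [('url', 'string'), ('count', 'bigint'), ('isindex', 'boolean')]

for name, dtype in columns:
    # lit(None) is an untyped null; cast() stamps it with the target type
    df = df.withColumn(name, f.lit(None).cast(dtype))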

Dataframe column substring based on the value during join

I have a dataframe with a column whose values look like "COR//xxxxxx-xx-xxxx" or "xxxxxx-xx-xxxx".
I need to compare this column with a column in a different dataframe, based on the column value.
If the column value has the form "COR//xxxxx-xx-xxxx", I need to use substring($"column", 4, length($"column")).
If the column value has the form "xxxxx-xx-xxxx", I can compare directly without using substring.
For example:
val DF1 = DF2.join(DF3, upper(trim($"column1".substr(4, length($"column1")))) === upper(trim(DF3("column1"))))
I am not sure how to add this condition while joining. Could anyone please let me know how we can achieve this with Spark dataframes?
You can try adding a new column based on the conditions and join on the new column. Something like this.
val data = List("COR//xxxxx-xx-xxxx", "xxxxx-xx-xxxx")
val DF2 = spark.sparkContext.parallelize(data).toDF("column1")
val DF4 = DF2.withColumn("joinCol",
  when(col("column1").like("%COR%"),
    expr("substring(column1, 6, length(column1)-1)"))
  .otherwise(col("column1")))
DF4.show(false)
The new column will have values like this.
+------------------+-------------+
|column1 |joinCol |
+------------------+-------------+
|COR//xxxxx-xx-xxxx|xxxxx-xx-xxxx|
|xxxxx-xx-xxxx |xxxxx-xx-xxxx|
+------------------+-------------+
You can now join based on the new column added.
val DF1 = DF4.join(DF3, upper(trim(DF4("joinCol"))) === upper(trim(DF3("column1"))))
Hope this helps.
Simply create a new column to use in the join:
DF2.withColumn("column2",
when($"column1" rlike "COR//.*",
$"column1".substr(lit(4), length($"column1")).
otherwise($"column1"))
Then use column2 in the join. It is also possible to add the whole when clause directly in the join, but it would look very messy.
Note that to use a constant value in substr you need to use lit. And if you want to remove the whole "COR//" part, use 6 instead of 4.
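Since the thread is tagged pyspark, here is a rough Python equivalent of the same idea (a sketch; df2/df3 and the column names are assumed from the examples above, and 6 is used to drop the whole "COR//" prefix):
from pyspark.sql import functions as F

# Normalize the join key: strip the "COR//" prefix when present.
df4 = df2.withColumn(
    "joinCol",
    F.when(
        F.col("column1").rlike("COR//.*"),
        F.col("column1").substr(F.lit(6), F.length("column1")),
    ).otherwise(F.col("column1")),
)

df1 = df4.join(df3, F.upper(F.trim(df4["joinCol"])) == F.upper(F.trim(df3["column1"])))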