How to select all structures from a dataframe in pyspark?

I have a JSON database loaded using pyspark.
I'm trying to access the "x" component of each struct in it.
This is the output of df.select("level_instance_json.player").printSchema()
root
|-- player: struct (nullable = true)
| |-- 0: struct (nullable = true)
| | |-- head_pitch: long (nullable = true)
| | |-- head_roll: long (nullable = true)
| | |-- head_yaw: long (nullable = true)
| | |-- r: long (nullable = true)
| | |-- x: long (nullable = true)
| | |-- y: long (nullable = true)
| |-- 1: struct (nullable = true)
| | |-- head_pitch: long (nullable = true)
| | |-- head_roll: long (nullable = true)
| | |-- head_yaw: long (nullable = true)
| | |-- r: long (nullable = true)
| | |-- x: long (nullable = true)
| | |-- y: long (nullable = true)
...
I've tried selecting them all with the '*' wildcard, but it doesn't work.
df.select("level_instance_json.player.*.x").show(10) gives this error:
'No such struct field * in 0, 1, 10, 100, 1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 101, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 102,...

You can do this:
# list the struct field names (the player numbers) under level_instance_json.player
list_player_numbers = [el.name for el in df.select("level_instance_json.player").schema['player'].dataType]
# build one dotted column path per player, e.g. level_instance_json.player.0.x
list_fields = ['.'.join(['level_instance_json', 'player', player_number, 'x']) for player_number in list_player_numbers]
output = df.select(list_fields)
It should work.
Xavier
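A follow-up note: selecting the fields this way gives you one column per player, but every resulting column is named x. If you want distinct column names, you can alias each field. A minimal sketch, assuming the same df and the level_instance_json.player schema shown above:
from pyspark.sql import functions as F

# struct holding one sub-struct per player number ("0", "1", ...)
player_type = df.select("level_instance_json.player").schema["player"].dataType
# one aliased column per player, e.g. x_0, x_1, ...
x_cols = [
    F.col(f"level_instance_json.player.{fld.name}.x").alias(f"x_{fld.name}")
    for fld in player_type
]
df.select(x_cols).show(10)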

Related

Add a field that already exists in the df into a struct field in PySpark

I have the following df:
sku    category  price  state  infos_gerais
33344  mmmma     3.00   SP     [{5, 5656655, 5845454}]
33344  mmmma     3.00   MG     [{5, 6565767, 5854545}]
33344  mmmma     3.00   RS     [{5, 8788787, 4564646}]
The schema of the df follows:
|-- sku: string (nullable = true)
|-- category: string (nullable = true)
|-- price: double (nullable = true)
|-- state: string (nullable = true)
|-- infos_gerais: array (nullable = true)
| |-- element: struct (containsNull = false)
| | |-- service_type_id: integer (nullable = true)
| | |-- cep_ini: integer (nullable = true)
| | |-- cep_fim: integer (nullable = true)
Note that the field in the df that doesn't repeat is 'state', so I need to insert this field into the struct 'infos_gerais' and apply a groupBy. I tried the code below, but it returns an error. Can anyone help me?
df_end = df_end.withColumn(
    "infos_gerais",
    sf.collect_list(
        sf.struct(
            sf.col("infos_gerais.*"),
            sf.col('infos_gerais.state').alias('state'))
    )
)
I need the following df output:
sku    category  price  infos_gerais
33344  mmmma     3.00   [{5, 5656655, 5845454, SP}, {5, 6565767, 5854545, MG}, {5, 8788787, 4564646, RS}]
Given you have an array of structs, you can use transform to process the elements of the array and withField on the structs to add or replace a struct field.
Here's a simple example:
from pyspark.sql import functions as func

data_sdf. \
    withColumn('infos_gerais',
               func.transform('infos_gerais', lambda x: x.withField('state', func.col('state')))
               ). \
    groupBy('sku', 'category', 'price'). \
    agg(func.flatten(func.collect_list('infos_gerais')).alias('infos_gerais')). \
    show(truncate=False)
# +-----+--------+-----+---------------------------------------------------------------------------------+
# |sku |category|price|infos_gerais |
# +-----+--------+-----+---------------------------------------------------------------------------------+
# |33344|mmmma |3.0 |[{5, 5656655, 5845454, SP}, {5, 6565767, 5854545, MG}, {5, 8788787, 4564646, RS}]|
# +-----+--------+-----+---------------------------------------------------------------------------------+
# root
# |-- sku: string (nullable = true)
# |-- category: string (nullable = true)
# |-- price: double (nullable = true)
# |-- infos_gerais: array (nullable = true)
# | |-- element: struct (containsNull = true)
# | | |-- service_type_id: integer (nullable = true)
# | | |-- cep_ini: integer (nullable = true)
# | | |-- cep_fim: integer (nullable = true)
# | | |-- state: string (nullable = true)
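If you are on a Spark release older than 3.1, where Column.withField and the Python lambda form of transform are not available, a rough equivalent is to rebuild each struct element with a SQL transform expression (the SQL higher-order function exists since Spark 2.4). This is only a sketch, assuming the same data_sdf and the field names from the question's schema:
from pyspark.sql import functions as F

df_out = (
    data_sdf
    .withColumn(
        "infos_gerais",
        # rebuild each array element, appending the row-level state column
        F.expr(
            "transform(infos_gerais, e -> "
            "struct(e.service_type_id, e.cep_ini, e.cep_fim, state))"
        ),
    )
    .groupBy("sku", "category", "price")
    .agg(F.flatten(F.collect_list("infos_gerais")).alias("infos_gerais"))
)
df_out.show(truncate=False)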

How to convert a spark dataframe to a list of structs in scala

I have a Spark dataframe composed of 12 rows and a number of columns, 22 in this case.
I want to convert it into a dataframe with the following format:
root
|-- data: array (nullable = false)
| |-- element: struct (containsNull = false)
| | |-- ast: double (nullable = true)
| | |-- blk: double (nullable = true)
| | |-- dreb: double (nullable = true)
| | |-- fg3_pct: double (nullable = true)
| | |-- fg3a: double (nullable = true)
| | |-- fg3m: double (nullable = true)
| | |-- fg_pct: double (nullable = true)
| | |-- fga: double (nullable = true)
| | |-- fgm: double (nullable = true)
| | |-- ft_pct: double (nullable = true)
| | |-- fta: double (nullable = true)
| | |-- ftm: double (nullable = true)
| | |-- games_played: long (nullable = true)
| | |-- seconds: double (nullable = true)
| | |-- oreb: double (nullable = true)
| | |-- pf: double (nullable = true)
| | |-- player_id: long (nullable = true)
| | |-- pts: double (nullable = true)
| | |-- reb: double (nullable = true)
| | |-- season: long (nullable = true)
| | |-- stl: double (nullable = true)
| | |-- turnover: double (nullable = true)
Where each element of the dataframe data field corresponds to a different row of the original dataframe.
The final goal is exporting it to .json file which will have the format:
{"data": [{row1}, {row2}, ..., {row12}]}
The code I am employing at the moment is the following:
val best_12_struct = best_12.withColumn("data", array((0 to 11).map(i =>
  struct(col("ast"), col("blk"), col("dreb"), col("fg3_pct"), col("fg3a"),
         col("fg3m"), col("fg_pct"), col("fga"), col("fgm"),
         col("ft_pct"), col("fta"), col("ftm"), col("games_played"),
         col("seconds"), col("oreb"), col("pf"), col("player_id"),
         col("pts"), col("reb"), col("season"), col("stl"), col("turnover"))) : _*))
val best_12_data = best_12_struct.select("data")
But the array((0 to 11)...) copies the same element into data 12 times. Therefore, the .json I obtain has 12 {"data": ...} objects, each containing the same row repeated 12 times, instead of a single {"data": ...} with 12 elements, each corresponding to one row of the original dataframe.
You get the same row 12 times because the withColumn method only sees information from the row currently being processed.
You need to aggregate rows at the dataframe level with collect_list, which is an aggregate function, as follows:
import org.apache.spark.sql.functions._
val best_12_data = best_12
.withColumn("row", struct(col("ast"), col("blk"), col("dreb"), col("fg3_pct"), col("fg3a"), col("fg3m"), col("fg_pct"), col("fga"), col("fgm"), col("ft_pct"), col("fta"), col("ftm"), col("games_played"), col("seconds"), col("oreb"), col("pf"), col("player_id"), col("pts"), col("reb"), col("season"), col("stl"), col("turnover")))
.agg(collect_list(col("row")).as("data"))

How to update column values in an array of structs in Spark Scala

root
|-- _id: string (nullable = true)
|-- h: string (nullable = true)
|-- inc: string (nullable = true)
|-- op: string (nullable = true)
|-- ts: string (nullable = true)
|-- Animal: array (nullable = false)
| |-- element: struct (containsNull = false)
| | |-- Elephant: string (nullable = false)
| | |-- Lion: string (nullable = true)
| | |-- Zebra: string (nullable = true)
| | |-- Dog: string (nullable = true)
I just want to know: is it possible to update the array of structs to some value, given a list of columns that I don't want to update?
For example, if I have a list List[String] = List(Zebra, Dog),
is it possible to set all the other struct fields to 0, so that Elephant and Lion become 0?
+---+----+-----+------+-------+--------------------+
|_id|h |inc |op |ts |webhooks |
+---+----+-----+------+-------+--------------------+
|fa1|fa11|fa111|fa1111|fa11111|[[1, 1, 0, 1]]|
|fb1|fb11|fb111|fb1111|fb11111|[[0, 1, 1, 0]]|
+---+----+-----+------+-------+--------------------+
After operations It will be
+---+----+-----+------+-------+--------------------+
|_id|h |inc |op |ts |webhooks |
+---+----+-----+------+-------+--------------------+
|fa1|fa11|fa111|fa1111|fa11111|[[0, 0, 0, 1]]|
|fb1|fb11|fb111|fb1111|fb11111|[[0, 0, 1, 0]]|
+---+----+-----+------+-------+--------------------+
I was trying to iterate row by row,
so I made a function like
def changeValue(row: Row) = {
  // some code
}
but was not able to make it work.
Check the code below.
scala> ddf.show(false)
+---+----+-----+------+-------+--------------------+
|_id|h |inc |op |ts |webhooks |
+---+----+-----+------+-------+--------------------+
|fa1|fa11|fa111|fa1111|fa11111|[[1, 11, 111, 1111]]|
|fb1|fb11|fb111|fb1111|fb11111|[[2, 22, 222, 2222]]|
+---+----+-----+------+-------+--------------------+
scala> val columnsTobeUpdatedInWebhooks = Seq("zebra","dog") // columns in webhooks to keep; all others will be set to 0.
columnsTobeUpdatedInWebhooks: Seq[String] = List(zebra, dog)
Constructing Expression
val expr = flatten(
  array(
    ddf
      .select(explode($"webhooks").as("webhooks"))
      .select("webhooks.*")
      .columns
      .map(c => if (columnsTobeUpdatedInWebhooks.contains(c)) col(s"webhooks.${c}").as(c) else array(lit(0)).as(c)): _*
  )
)
expr: org.apache.spark.sql.Column = flatten(array(array(0) AS `elephant`, array(0) AS `lion`, webhooks.zebra AS `zebra`, webhooks.dog AS `dog`))
Applying Expression
scala> ddf.withColumn("webhooks",struct(expr)).show(false)
+---+----+-----+------+-------+--------------+
|_id|h |inc |op |ts |webhooks |
+---+----+-----+------+-------+--------------+
|fa1|fa11|fa111|fa1111|fa11111|[[0, 0, 0, 1]]|
|fb1|fb11|fb111|fb1111|fb11111|[[0, 0, 1, 0]]|
+---+----+-----+------+-------+--------------+
Final Schema
scala> ddf.withColumn("webhooks",allwebhookColumns).printSchema
root
|-- _id: string (nullable = true)
|-- h: string (nullable = true)
|-- inc: string (nullable = true)
|-- op: string (nullable = true)
|-- ts: string (nullable = true)
|-- webhooks: array (nullable = false)
| |-- element: struct (containsNull = false)
| | |-- elephant: integer (nullable = false)
| | |-- lion: integer (nullable = false)
| | |-- zebra: integer (nullable = false)
| | |-- dog: integer (nullable = false)

Merge two columns of array of structs based on a key

I have a dataframe with the schema below:
input dataframe
|-- A: string (nullable = true)
|-- B_2020: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- key: string (nullable = true)
| | |-- x: double (nullable = true)
| | |-- y: double (nullable = true)
| | |-- z: double (nullable = true)
|-- B_2019: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- key: string (nullable = true)
| | |-- x: double (nullable = true)
| | |-- y: double (nullable = true)
I want to merge the 2020 and 2019 columns into one column of array of structs, based on the matching key value.
Desired schema:
expected output dataframe
|-- A: string (nullable = true)
|-- B: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- key: string (nullable = true)
| | |-- x_this_year: double (nullable = true)
| | |-- y_this_year: double (nullable = true)
| | |-- x_last_year: double (nullable = true)
| | |-- y_last_year: double (nullable = true)
| | |-- z_this_year: double (nullable = true)
I would like to merge on the matching key in the structs. Also note that if a key is present in only one of the 2019 or 2020 arrays, then nulls need to be used to substitute the other year's values in the merged column.
scala> val df = Seq(
| ("ABC",
| Seq(
| ("a", 2, 4, 6),
| ("b", 3, 6, 9),
| ("c", 1, 2, 3)
| ),
| Seq(
| ("a", 4, 8),
| ("d", 3, 4)
| ))
| ).toDF("A", "B_2020", "B_2019").select(
| $"A",
| $"B_2020" cast "array<struct<key:string,x:double,y:double,z:double>>",
| $"B_2019" cast "array<struct<key:string,x:double,y:double>>"
| )
df: org.apache.spark.sql.DataFrame = [A: string, B_2020: array<struct<key:string,x:double,y:double,z:double>> ... 1 more field]
scala> df.printSchema
root
|-- A: string (nullable = true)
|-- B_2020: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- key: string (nullable = true)
| | |-- x: double (nullable = true)
| | |-- y: double (nullable = true)
| | |-- z: double (nullable = true)
|-- B_2019: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- key: string (nullable = true)
| | |-- x: double (nullable = true)
| | |-- y: double (nullable = true)
scala> df.show(false)
+---+------------------------------------------------------------+------------------------------+
|A |B_2020 |B_2019 |
+---+------------------------------------------------------------+------------------------------+
|ABC|[[a, 2.0, 4.0, 6.0], [b, 3.0, 6.0, 9.0], [c, 1.0, 2.0, 3.0]]|[[a, 4.0, 8.0], [d, 3.0, 4.0]]|
+---+------------------------------------------------------------+------------------------------+
scala> val df2020 = df.select($"A", explode($"B_2020") as "this_year").select($"A",
| $"this_year.key" as "key", $"this_year.x" as "x_this_year",
| $"this_year.y" as "y_this_year", $"this_year.z" as "z_this_year")
df2020: org.apache.spark.sql.DataFrame = [A: string, key: string ... 3 more fields]
scala> val df2019 = df.select($"A", explode($"B_2019") as "last_year").select($"A",
| $"last_year.key" as "key", $"last_year.x" as "x_last_year",
| $"last_year.y" as "y_last_year")
df2019: org.apache.spark.sql.DataFrame = [A: string, key: string ... 2 more fields]
scala> df2020.show(false)
+---+---+-----------+-----------+-----------+
|A |key|x_this_year|y_this_year|z_this_year|
+---+---+-----------+-----------+-----------+
|ABC|a |2.0 |4.0 |6.0 |
|ABC|b |3.0 |6.0 |9.0 |
|ABC|c |1.0 |2.0 |3.0 |
+---+---+-----------+-----------+-----------+
scala> df2019.show(false)
+---+---+-----------+-----------+
|A |key|x_last_year|y_last_year|
+---+---+-----------+-----------+
|ABC|a |4.0 |8.0 |
|ABC|d |3.0 |4.0 |
+---+---+-----------+-----------+
scala> val outputDF = df2020.join(df2019, Seq("A", "key"), "outer").select(
| $"A" as "market_name",
| struct($"key", $"x_this_year", $"y_this_year", $"x_last_year",
| $"y_last_year", $"z_this_year") as "cancellation_policy_booking")
outputDF: org.apache.spark.sql.DataFrame = [market_name: string, cancellation_policy_booking: struct<key: string, x_this_year: double ... 4 more fields>]
scala> outputDF.printSchema
root
|-- market_name: string (nullable = true)
|-- cancellation_policy_booking: struct (nullable = false)
| |-- key: string (nullable = true)
| |-- x_this_year: double (nullable = true)
| |-- y_this_year: double (nullable = true)
| |-- x_last_year: double (nullable = true)
| |-- y_last_year: double (nullable = true)
| |-- z_this_year: double (nullable = true)
scala> outputDF.show(false)
+-----------+----------------------------+
|market_name|cancellation_policy_booking |
+-----------+----------------------------+
|ABC |[b, 3.0, 6.0,,, 9.0] |
|ABC |[a, 2.0, 4.0, 4.0, 8.0, 6.0]|
|ABC |[d,,, 3.0, 4.0,] |
|ABC |[c, 1.0, 2.0,,, 3.0] |
+-----------+----------------------------+

How to convert a column that is struct<year:int,month:int,day:int> to a normalized yyyy/MM/dd date datatype format in Scala

Given I have a dataframe that includes two columns with the following struct, how can I convert the data in start_date and end_date to a yyyy/MM/dd format with a date datatype in Spark SQL (Scala)?
Also, the end_date struct can be null.
|-- start_date: struct (nullable = true)
| |-- year: integer (nullable = true)
| |-- month: integer (nullable = true)
| |-- day: integer (nullable = true)
|-- end_date: struct (nullable = true)
| |-- year: integer (nullable = true)
| |-- month: integer (nullable = true)
| |-- day: integer (nullable = true)
For Spark 2.4+ you could use the struct access (.) operator; here I am sharing a code snippet.
scala> df.show
+--------------+--------------+
| start_date| end_date|
+--------------+--------------+
|[2019, 07, 11]|[2019, 08, 12]|
|[2019, 07, 14]|[2019, 08, 13]|
+--------------+--------------+
scala> df.printSchema
root
|-- start_date: struct (nullable = false)
| |-- year: string (nullable = true)
| |-- month: string (nullable = true)
| |-- day: string (nullable = true)
|-- end_date: struct (nullable = false)
| |-- year: string (nullable = true)
| |-- month: string (nullable = true)
| |-- day: string (nullable = true)
scala> var df1 = df.
| withColumn("start_date", date_format(concat_ws("-", col("start_date.year"), col("start_date.month"), col("start_date.day")), "yyyy/MM/dd")).
| withColumn("end_date", date_format(concat_ws("-", col("end_date.year"), col("end_date.month"), col("end_date.day")), "yyyy/MM/dd"))
scala> df1.show
+----------+----------+
|start_date| end_date|
+----------+----------+
|2019/07/11|2019/08/12|
|2019/07/14|2019/08/13|
+----------+----------+
Let me know if you have any questions related to this.
You could use a combination (nesting) of to_date && format_string || concat_ws. Generally, you can achieve 90% of what you need with DataFrame functions.
I'll provide more details once I wake up. It's late where I live...
UPDATE:
data.withColumn("start_date_as_date",
to_date(
concat_ws("/", $"start_date.year", $"start_date.month", $"start_date.day"),
"yyyy/MM/dd")
).show
+-------------+-------------+------------------+
| start_date| end_date|start_date_as_date|
+-------------+-------------+------------------+
| [776, 9, 1]| [2019, 9, 2]| 0776-09-01|
|[2019, 9, 18]|[2019, 9, 19]| 2019-09-18|
|[2019, 10, 1]|[2019, 10, 2]| 2019-10-01|
+-------------+-------------+------------------+
... .printSchema
root
|-- start_date: struct (nullable = true)
| |-- year: integer (nullable = false)
| |-- month: integer (nullable = false)
| |-- day: integer (nullable = false)
|-- end_date: struct (nullable = true)
| |-- year: integer (nullable = false)
| |-- month: integer (nullable = false)
| |-- day: integer (nullable = false)
|-- start_date_as_date: date (nullable = true)
Alternatively you could also use:
format_string("%02d/%02d/%02d", // this lets you get creative if you want!
$"start_date.year", $"start_date.month", $"start_date.day")`