Filling missing value with mean by grouping multiple columns - pyspark

Description:"
How can I fill the missing value in price column with mean, grouping data by condition and model columns in Pyspark? My python code would be like this :cars['price'] = np.ceil(cars['price'].fillna(cars.groupby(['condition', 'model' ])['price'].transform('mean')))
Error:
I try different codes in Pyspark but each time I get different errors. Like this, code:cars_new=cars.fillna((cars.groupBy("condition", "model").agg(mean("price"))['avg(price)']))
Error :
ValueError: value should be a float, int, long, string, bool or dict

As an aside, the ValueError comes from fillna itself: it expects a scalar value or a dict, not another DataFrame. Not sure what your input data looks like, but let's say we have a dataframe like this:
+---------+-----+-----+
|condition|model|price|
+---------+-----+-----+
|A |A |1 |
|A |B |2 |
|A |B |2 |
|A |A |1 |
|A |A |null |
|B |A |3 |
|B |A |null |
|B |B |4 |
+---------+-----+-----+
We want to fill the nulls with the average, computed per condition and model group.
For this we can define a Window, calculate avg and then replace null.
Example:
from pyspark.sql import SparkSession, Window
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("test").getOrCreate()

data = [
    {"condition": "A", "model": "A", "price": 1},
    {"condition": "A", "model": "B", "price": 2},
    {"condition": "A", "model": "B", "price": 2},
    {"condition": "A", "model": "A", "price": 1},
    {"condition": "A", "model": "A", "price": None},
    {"condition": "B", "model": "A", "price": 3},
    {"condition": "B", "model": "A", "price": None},
    {"condition": "B", "model": "B", "price": 4},
]

window = Window.partitionBy(["condition", "model"]).orderBy("condition")
df = spark.createDataFrame(data=data)
df = (
    df.withColumn("avg", F.avg("price").over(window))
    .withColumn(
        "price", F.when(F.col("price").isNull(), F.col("avg")).otherwise(F.col("price"))
    )
    .drop("avg")
)
Which gives us:
+---------+-----+-----+
|condition|model|price|
+---------+-----+-----+
|A |A |1.0 |
|A |A |1.0 |
|A |A |1.0 |
|B |B |4.0 |
|B |A |3.0 |
|B |A |3.0 |
|A |B |2.0 |
|A |B |2.0 |
+---------+-----+-----+
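As a side note, the replacement itself can be written a bit more compactly with coalesce, and since the pandas snippet also applies np.ceil, you can add F.ceil on top. A minimal sketch, reusing the df and window defined above (note that F.ceil returns an integer type rather than a float):

import pyspark.sql.functions as F

# coalesce keeps price where it is not null and falls back to the group average;
# F.ceil mirrors the np.ceil from the pandas snippet
df = df.withColumn(
    "price",
    F.ceil(F.coalesce(F.col("price"), F.avg("price").over(window))),
)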

Another way to write it with a window function, directly on the cars dataframe:
from pyspark.sql import Window
from pyspark.sql.functions import avg, col, when

w = Window.partitionBy('condition', 'model')
cars = cars.withColumn('price', when(col('price').isNull(), avg(col('price')).over(w)).otherwise(col('price')))
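If you prefer something closer to the pandas groupby().transform('mean') pattern, a join-based sketch is also possible: compute the per-group means with groupBy/agg, join them back, and fill the nulls. This is an untested sketch, assuming cars has the columns condition, model and price:

from pyspark.sql import functions as F

# per-group means, joined back onto the original rows
means = cars.groupBy('condition', 'model').agg(F.avg('price').alias('mean_price'))
cars = (
    cars.join(means, on=['condition', 'model'], how='left')
        .withColumn('price', F.coalesce('price', 'mean_price'))
        .drop('mean_price')
)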

Related

Longest Run without being in the UK

I have the following Spark Dataframe
val inputDf = List(
("1", "1", "UK", "Spain", "2022-01-01"),
("1", "2", "Spain", "Germany", "2022-01-02"),
("1", "3", "Germany", "China", "2022-01-03"),
("1", "4", "China", "France", "2022-01-04"),
("1", "5", "France", "Spain", "2022-01-05"),
("1", "6", "Spain", "Italy", "2022-01-09"),
("1", "7", "Italy", "UK", "2022-01-14"),
("1", "8", "UK", "USA", "2022-01-15"),
("1", "9", "USA", "Canada", "2022-01-16"),
("1", "10", "Canada", "UK", "2022-01-17"),
("2", "16", "USA", "Finland", "2022-01-11"),
("2", "17", "Finland", "Russia", "2022-01-12"),
("2", "18", "Russia", "Turkey", "2022-01-13"),
("2", "19", "Turkey", "Japan", "2022-01-14"),
("2", "20", "Japan", "UK", "2022-01-15"),
).toDF("passengerId", "flightId", "from", "to", "date")
I would like to get the longest run for each passenger without being in the UK.
So for example, in the case of passenger 1 his itinerary was UK>Spain>Germany>China>France>Spain>Italy>UK>USA>Canada>UK>Finland>Russia>Turkey>Japan>Spain>Germany>China>France>Spain>Italy>UK>USA>Canada>UK. Therefore the longest run would be 10.
I first merge the columns from and to using the following code.
val passengerWithCountries = inputDf.groupBy("passengerId")
  .agg(
    // concat concatenates the two lists of strings from columns "from" and "to"
    concat(
      // collect_list gathers all values from the given column into an array
      collect_list(col("from")),
      collect_list(col("to"))
    ).name("countries")
  )
Output:
+-----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|passengerId|countries |
+-----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|1 |[UK, Spain, Germany, China, France, Spain, Italy, UK, USA, Canada, UK, Finland, Russia, Turkey, Japan, Spain, Germany, China, France, Spain, Italy, UK, USA, Canada, UK, Finland, Russia, Turkey, Japan, UK]|
|2 |[USA, Finland, Russia, Turkey, Japan, Finland, Russia, Turkey, Japan, UK] |
+-----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
The solution I have tried is the following. However, since the values of my column are Array[String] and not String, it does not work.
passengerWithCountries
  .withColumn("countries_new", explode(split(Symbol("countries"), "UK,")))
  .withColumn("journey_outside_UK", size(split(Symbol("countries"), ",")))
  .groupBy("passengerId")
  .agg(max(Symbol("journey_outside_UK")) as "longest_run").show()
I am looking to have the following output:
+-----------+-----------+
|passengerId|longest_run|
+-----------+-----------+
|1 |10 |
|2 |5 |
+-----------+-----------+
Please let me know if you have a solution.
// Added some edge cases:
// passengerId=3: just one itinerary from UK to non-UK, longest run must be 1
// passengerId=4: just one itinerary from non-UK to UK, longest run must be 1
// passengerId=5: just one itinerary from UK to UK, longest run must be 0
// passengerId=6: one itinerary from UK to UK, followed by UK to non-UK, longest run must be 1
val inputDf = List(
("1", "1", "UK", "Spain", "2022-01-01"),
("1", "2", "Spain", "Germany", "2022-01-02"),
("1", "3", "Germany", "China", "2022-01-03"),
("1", "4", "China", "France", "2022-01-04"),
("1", "5", "France", "Spain", "2022-01-05"),
("1", "6", "Spain", "Italy", "2022-01-09"),
("1", "7", "Italy", "UK", "2022-01-14"),
("1", "8", "UK", "USA", "2022-01-15"),
("1", "9", "USA", "Canada", "2022-01-16"),
("1", "10", "Canada", "UK", "2022-01-17"),
("2", "16", "USA", "Finland", "2022-01-11"),
("2", "17", "Finland", "Russia", "2022-01-12"),
("2", "18", "Russia", "Turkey", "2022-01-13"),
("2", "19", "Turkey", "Japan", "2022-01-14"),
("2", "20", "Japan", "UK", "2022-01-15"),
("3", "21", "UK", "Spain", "2022-01-01"),
("4", "22", "Spain", "UK", "2022-01-01"),
("5", "23", "UK", "UK", "2022-01-01"),
("6", "24", "UK", "UK", "2022-01-01"),
("6", "25", "UK", "Spain", "2022-01-02"),
("7", "25", "Spain", "Germany", "2022-01-02"),
).toDF("passengerId", "flightId", "from", "to", "date")
import org.apache.spark.sql.expressions.Window
// Declare window for analytic functions
val w = Window.partitionBy("passengerId").orderBy("date")
// Use an analytic function to split each passenger's rows into UK-...-UK itineraries
val ukArrivals = inputDf.withColumn("newUK", sum(expr("case when from = 'UK' then 1 else 0 end")).over(w))
+-----------+--------+-------+-------+----------+-----+
|passengerId|flightId| from| to| date|newUK|
+-----------+--------+-------+-------+----------+-----+
| 1| 1| UK| Spain|2022-01-01| 1|
| 1| 2| Spain|Germany|2022-01-02| 1|
| 1| 3|Germany| China|2022-01-03| 1|
| 1| 4| China| France|2022-01-04| 1|
| 1| 5| France| Spain|2022-01-05| 1|
| 1| 6| Spain| Italy|2022-01-09| 1|
| 1| 7| Italy| UK|2022-01-14| 1|
| 1| 8| UK| USA|2022-01-15| 2|
| 1| 9| USA| Canada|2022-01-16| 2|
| 1| 10| Canada| UK|2022-01-17| 2|
| 2| 16| USA|Finland|2022-01-11| 0|
| 2| 17|Finland| Russia|2022-01-12| 0|
| 2| 18| Russia| Turkey|2022-01-13| 0|
| 2| 19| Turkey| Japan|2022-01-14| 0|
| 2| 20| Japan| UK|2022-01-15| 0|
| 3| 21| UK| Spain|2022-01-01| 1|
| 4| 22| Spain| UK|2022-01-01| 0|
| 5| 23| UK| UK|2022-01-01| 1|
| 6| 24| UK| UK|2022-01-01| 1|
| 6| 25| UK| Spain|2022-01-02| 2|
+-----------+--------+-------+-------+----------+-----+
// Calculate longest runs outside UK
val runs = (
  ukArrivals
    .groupBy("passengerId", "newUK") // for each UK-...-UK itinerary
    .agg((
      sum(
        expr("""
          case
            when 'UK' not in (from,to) then 1 -- count all non-UK countries, except for the first one
            when from = to then -1            -- special case for UK-UK itineraries
            else 0                            -- don't count itineraries from/to UK
          end""")
      ) + 1 // count the first non-UK country
    ).as("notUK"))
    .groupBy("passengerId")
    .agg(max("notUK").as("longest_run_outside_UK"))
)
runs.orderBy("passengerId").show
+-----------+----------------------+
|passengerId|longest_run_outside_UK|
+-----------+----------------------+
| 1| 6|
| 2| 5|
| 3| 1|
| 4| 1|
| 5| 0|
| 6| 1|
+-----------+----------------------+
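For readers coming from the PySpark question above: the same idea (a running sum of a departed-from-UK flag segments each passenger's flights into UK-to-UK runs, which are then scored and maxed) translates fairly directly. A rough, untested PySpark sketch, assuming a dataframe inputDf with the same columns as the Scala example:

from pyspark.sql import Window
from pyspark.sql import functions as F

w = Window.partitionBy("passengerId").orderBy("date")

# cumulative count of departures from the UK marks the start of each UK-...-UK segment
segments = inputDf.withColumn(
    "newUK", F.sum(F.when(F.col("from") == "UK", 1).otherwise(0)).over(w)
)

longest_runs = (
    segments.groupBy("passengerId", "newUK")
    .agg(
        (
            F.sum(
                F.when((F.col("from") != "UK") & (F.col("to") != "UK"), 1)  # non-UK legs, except the first country
                .when(F.col("from") == F.col("to"), -1)                     # special case for UK-UK legs
                .otherwise(0)                                               # legs from/to UK
            ) + 1  # count the first non-UK country
        ).alias("notUK")
    )
    .groupBy("passengerId")
    .agg(F.max("notUK").alias("longest_run_outside_UK"))
)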

Pyspark: How to create a table by crossing information in two columns?

I have a dataframe like this in Pyspark:
A 1 info_A1
A 2 info_A2
B 2 info_B2
B 3 info_B3
I would like to obtain this result:
info_A1 null
info_A2 info_B2
null info_B3
Is there any function in Pyspark that does it automatically, or should I iterate over each row separately?
Try using groupBy and pivot:
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

data = [
    {"x": "A", "y": 1, "z": "info_A1"},
    {"x": "A", "y": 2, "z": "info_A2"},
    {"x": "B", "y": 2, "z": "info_B2"},
    {"x": "B", "y": 3, "z": "info_B3"},
]
df = spark.createDataFrame(data)
df = df.groupBy("y").pivot("x").agg(F.max("z")).orderBy("y").drop("y")
Result:
+-------+-------+
|A |B |
+-------+-------+
|info_A1|null |
|info_A2|info_B2|
|null |info_B3|
+-------+-------+
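One optional tweak: if you already know the distinct values of the pivot column, you can pass them to pivot explicitly, which lets Spark skip the extra job it otherwise runs to discover them. With the values from this example:

# passing the pivot values explicitly avoids an extra pass over the data
df = df.groupBy("y").pivot("x", ["A", "B"]).agg(F.max("z")).orderBy("y").drop("y")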

Convert nested json to dataframe in scala spark

I want to create a dataframe out of JSON for only a given key. Its value is a list of nested JSON objects. I tried flattening, but I think there could be a simpler workaround since I only need one key of the JSON to convert into a dataframe.
I have json like:
("""
{
"Id_columns": 2,
"metadata": [{
"id": "1234",
"type": "file",
"length": 395
}, {
"id": "1235",
"type": "file2",
"length": 396
}]
}""")
Now I want to create a DataFrame using Spark for only the key 'metadata'. I have written this code:
val json = Json.parse("""
{
"Id_columns": 2,
"metadata": [{
"id": "1234",
"type": "file",
"length": 395
}, {
"id": "1235",
"type": "file2",
"length": 396
}]
}""")
var jsonlist = Json.stringify(json("metadata"))
val rddData = spark.sparkContext.parallelize(jsonlist)
resultDF = spark.read.option("timestampFormat", "yyyy/MM/dd HH:mm:ss ZZ").json(rddData)
resultDF.show()
But it gives me this error:
overloaded method value json with alternatives:
cannot be applied to (org.apache.spark.rdd.RDD[Char])
[error] val resultDF = spark.read.option("timestampFormat", "yyyy/MM/dd HH:mm:ss ZZ").json(rddData)
^
I am expecting this result:
+----+-----+--------+
| id | type| length |
+----+-----+--------+
|1234|file1| 395 |
|1235|file2| 396 |
+----+-----+--------+
(The error in your attempt, by the way, is because parallelize is being called on a String, which Scala treats as a collection of Chars, hence the RDD[Char].) You need to explode your array like this:
import spark.implicits._
import org.apache.spark.sql.functions._
val df = spark.read.json(
spark.sparkContext.parallelize(Seq("""{"Id_columns":2,"metadata":[{"id":"1234","type":"file","length":395},{"id":"1235","type":"file2","length":396}]}"""))
)
df.select(explode($"metadata").as("metadata"))
.select("metadata.*")
.show(false)
Output:
+----+------+-----+
|id |length|type |
+----+------+-----+
|1234|395 |file |
|1235|396 |file2|
+----+------+-----+
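For a PySpark reader, here is a rough equivalent of the same read-then-explode pattern; this is an untested sketch that assumes an existing spark session and the JSON string from the question:

from pyspark.sql import functions as F

json_str = '{"Id_columns":2,"metadata":[{"id":"1234","type":"file","length":395},{"id":"1235","type":"file2","length":396}]}'
# read the JSON, then explode the metadata array and flatten the struct fields
df = spark.read.json(spark.sparkContext.parallelize([json_str]))
df.select(F.explode("metadata").alias("metadata")).select("metadata.*").show(truncate=False)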

Merge Spark dataframe rows based on key column in Scala

I have a streaming Dataframe with 2 columns: a key column represented as a String and an objects column which is an array containing one object element. I want to be able to merge records or rows in the Dataframe with the same key such that the merged records form an array of objects.
Dataframe
----------------------------------------------------------------
|key | objects |
----------------------------------------------------------------
|abc | [{"name": "file", "type": "sample", "code": "123"}] |
|abc | [{"name": "image", "type": "sample", "code": "456"}] |
|xyz | [{"name": "doc", "type": "sample", "code": "707"}] |
----------------------------------------------------------------
Merged Dataframe
------------------------------------------------------------------------------------------------------------
|key | objects                                                                                               |
------------------------------------------------------------------------------------------------------------
|abc | [{"name": "file", "type": "sample", "code": "123"}, {"name": "image", "type": "sample", "code": "456"}]|
|xyz | [{"name": "doc", "type": "sample", "code": "707"}]                                                     |
------------------------------------------------------------------------------------------------------------
One option is to convert this into a PairRDD and apply the reduceByKey function, but I'd prefer to do this with Dataframes if possible since it'd be more optimal. Is there any way to do this with Dataframes without compromising on performance?
Assuming column objects is an array of a single JSON string, here's how you can merge objects by key:
import org.apache.spark.sql.functions._
case class Obj(name: String, `type`: String, code: String)
val df = Seq(
("abc", Obj("file", "sample", "123")),
("abc", Obj("image", "sample", "456")),
("xyz", Obj("doc", "sample", "707"))
).
toDF("key", "object").
select($"key", array(to_json($"object")).as("objects"))
df.show(false)
// +---+-----------------------------------------------+
// |key|objects |
// +---+-----------------------------------------------+
// |abc|[{"name":"file","type":"sample","code":"123"}] |
// |abc|[{"name":"image","type":"sample","code":"456"}]|
// |xyz|[{"name":"doc","type":"sample","code":"707"}] |
// +---+-----------------------------------------------+
df.groupBy($"key").agg(collect_list($"objects"(0)).as("objects")).
show(false)
// +---+---------------------------------------------------------------------------------------------+
// |key|objects |
// +---+---------------------------------------------------------------------------------------------+
// |xyz|[{"name":"doc","type":"sample","code":"707"}] |
// |abc|[{"name":"file","type":"sample","code":"123"}, {"name":"image","type":"sample","code":"456"}]|
// +---+---------------------------------------------------------------------------------------------+

OrientDB ETL create edge using multiple fields in match criteria

I have some data that I'm tracking that looks something like this:
node.csv
Label1,Label2
Alpha,A
Alpha,B
Alpha,C
Bravo,A
Bravo,B
The pair Label1 and Label2 define a unique entry in this data set.
I have another table that has some values in it that I want to link to the vertices created in Table1:
data.csv
Label1,Label2,Data
Alpha,A,10
Alpha,A,20
Alpha,B,30
Bravo,A,99
I'd like to generate edges from entries in Data to Node when both Label1 and Label2 fields match in each.
In this case, I'd have:
Data(Alpha,A,10) ---> Node(Alpha,A)
Data(Alpha,A,20) ---> Node(Alpha,A)
Data(Alpha,B,30) ---> Node(Alpha,B)
Data(Bravo,A,99) ---> Node(Bravo,A)
In another question it appears that this issue gets solved by simply adding an extra "joinFieldName" entry into the json file, but I'm not getting the same result with my data.
My node.json file looks like:
{
  "config": { "log": "info" },
  "source": { "file": { "path": "./node.csv" } },
  "extractor": { "csv": {} },
  "transformers": [ { "vertex": { "class": "Node" } } ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:test.orientdb",
      "dbType": "graph",
      "batchCommit": 1000,
      "classes": [ { "name": "Node", "extends": "V" } ],
      "indexes": []
    }
  }
}
and my data.json file looks like this:
{
  "config": { "log": "info" },
  "source": { "file": { "path": "./data.csv" } },
  "extractor": { "csv": { } },
  "transformers": [
    { "vertex": { "class": "Data" } },
    { "edge": { "class": "Source",
        "joinFieldName": "Label1",
        "lookup": "Node.Label1",
        "joinFieldName": "Label2",
        "lookup": "Node.Label2",
        "direction": "in"
      }
    }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:test.orientdb",
      "dbType": "graph",
      "batchCommit": 1000,
      "classes": [ { "name": "Data", "extends": "V" },
                   { "name": "Source", "extends": "E" }
      ],
      "indexes": []
    }
  }
}
After I run these, I get this output when I query the result:
orientdb {db=test.orientdb}> SELECT FROM V
+----+-----+------+------+------+-------------------+----+-------------+
|# |#RID |#CLASS|Label1|Label2|out_Source |Data|in_Source |
+----+-----+------+------+------+-------------------+----+-------------+
|0 |#25:0|Node |Alpha |A |[#41:0,#43:0,#47:0]| | |
|1 |#26:0|Node |Alpha |B |[#45:0] | | |
|2 |#27:0|Node |Alpha |C | | | |
|3 |#28:0|Node |Bravo |A |[#42:0,#44:0,#48:0]| | |
|4 |#29:0|Node |Bravo |B |[#46:0] | | |
|5 |#33:0|Data |Alpha |A | |10 |[#41:0,#42:0]|
|6 |#34:0|Data |Alpha |A | |20 |[#43:0,#44:0]|
|7 |#35:0|Data |Alpha |B | |30 |[#45:0,#46:0]|
|8 |#36:0|Data |Bravo |A | |99 |[#47:0,#48:0]|
+----+-----+------+------+------+-------------------+----+-------------+
9 item(s) found. Query executed in 0.012 sec(s).
This is incorrect. I don't want Edges #42:0, #44:0, #46:0 and #47:0:
#42:0 connects Node(Bravo,A) and Data(Alpha,A)
#44:0 connects Node(Bravo,A) and Data(Alpha,A)
#46:0 connects Node(Bravo,B) and Data(Alpha,B)
#47:0 connects Node(Alpha,A) and Data(Bravo,A)
It looks like adding multiple joinFieldName entries in the transformer is resulting in an OR operation, but I'd like an 'AND' here.
Does anyone know how to fix this? I'm not sure what I'm doing differently than the other StackOverflow question...
After debugging the ETL code, I figured out a workaround. As you said, there is no way to make multiple joinFieldNames form one edge; each joinFieldName will create its own edge.
What you can do is generate an extra column in the CSV file by concatenating "Label1" and "Label2", and use a lookup query in the edge transformation. Assume your data.csv has one extra field, say label1_label2, whose values look like "label1====label2".
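For illustration, the modified data.csv might then look like this (label1_label2 is the hypothetical concatenated column; the ==== separator is arbitrary, it just has to match the separator used in the lookup query):

Label1,Label2,Data,label1_label2
Alpha,A,10,Alpha====A
Alpha,A,20,Alpha====A
Alpha,B,30,Alpha====B
Bravo,A,99,Bravo====A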
Your edge transformation should have the following
{ "edge": { "class": "Source",
"joinFieldName": "label1_label2",
"lookup": "select expand(n) from (match {class: Node, as: n} return n) where n.Label1+'===='+n.Label2 = ?",
"direction": "in"
}
}
Don't forget to expand the vertex; otherwise, ETL thinks that it is a Document. The trick here is to write one lookup query that concatenates multiple fields and to pass the matching concatenated value as the joinFieldName.