I have the following table:
+-----+---+----+
|type | t |code|
+-----+---+----+
|    A| 25|  11|
|    A| 55|  42|
|    B| 88|  11|
|    A|114|  11|
|    B|220|  58|
|    B|520|  11|
+-----+---+----+
And what I want:
+---+---+----+
| t1| t2|code|
+---+---+----+
| 25| 88|  11|
|114|520|  11|
+---+---+----+
There are two types of events, A and B.
Event A is the start, event B is the end.
I want to connect each start with the next end that has the same code.
It's quite easy in SQL to do this:
SELECT a.t AS t1,
       (SELECT b.t FROM events AS b
        WHERE a.code = b.code AND a.t < b.t
        ORDER BY b.t LIMIT 1) AS t2,
       a.code AS code
FROM events AS a
But I have a problem implementing this in Spark, because it looks like this kind of correlated subquery isn't supported...
I tried it with:
df.createOrReplaceTempView("events")
val sqlDF = spark.sql(/* SQL-query above */)
The error I get:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Accessing outer query column is not allowed in:
Do you have any other ideas to solve that problem?
It's quite easy in SQL to do this
And so it is in Spark SQL, luckily.
val events = ...
scala> events.show
+----+---+----+
|type| t|code|
+----+---+----+
| A| 25| 11|
| A| 55| 42|
| B| 88| 11|
| A|114| 11|
| B|220| 58|
| B|520| 11|
+----+---+----+
// assumed that t is int
scala> events.printSchema
root
|-- type: string (nullable = true)
|-- t: integer (nullable = true)
|-- code: integer (nullable = true)
val eventsA = events.
where($"type" === "A").
as("a")
val eventsB = events.
where($"type" === "B").
as("b")
val solution = eventsA.
join(eventsB, "code").
where($"a.t" < $"b.t").
select($"a.t" as "t1", $"b.t" as "t2", $"a.code").
orderBy($"t1".asc, $"t2".asc).
dropDuplicates("t1", "code").
orderBy($"t1".asc)
That should give you the requested output.
scala> solution.show
+---+---+----+
| t1| t2|code|
+---+---+----+
| 25| 88| 11|
|114|520| 11|
+---+---+----+
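If you would rather stay in SQL, here is a sketch of an equivalent query (assuming the events temp view registered in the question) that avoids the correlated subquery by joining the A rows to the B rows and taking the earliest matching end:

// Sketch: the same pairing expressed as a join plus an aggregate instead of a correlated subquery.
val sqlSolution = spark.sql("""
  SELECT a.t AS t1, MIN(b.t) AS t2, a.code
  FROM events a
  JOIN events b
    ON a.code = b.code
   AND a.t < b.t
   AND a.type = 'A'
   AND b.type = 'B'
  GROUP BY a.t, a.code
  ORDER BY t1
""")

This yields the same two rows as the join-based solution above.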
Currently, the schema for my table is:
root
|-- product_id: integer (nullable = true)
|-- product_name: string (nullable = true)
|-- aisle_id: string (nullable = true)
|-- department_id: string (nullable = true)
I want to apply the schema below to the above table and delete all rows that do not conform to it:
import org.apache.spark.sql.types._

val productsSchema = StructType(Seq(
  StructField("product_id", IntegerType, nullable = true),
  StructField("product_name", StringType, nullable = true),
  StructField("aisle_id", IntegerType, nullable = true),
  StructField("department_id", IntegerType, nullable = true)
))
Use option "DROPMALFORMED" while loading the data which ignores corrupted records.
spark.read.format("json")
.option("mode", "DROPMALFORMED")
.option("header", "true")
.schema(productsSchema)
.load("sample.json")
If the data does not match the schema, Spark puts null as the value in the affected column (in the default PERMISSIVE mode). We then just have to filter out the rows where every column is null.
Here, filter is used to drop those all-null rows.
scala> "cat /tmp/sample.json".! // JSON File Data, one row is not matching with schema.
{"product_id":1,"product_name":"sampleA","aisle_id":"AA","department_id":"AAD"}
{"product_id":2,"product_name":"sampleBB","aisle_id":"AAB","department_id":"AADB"}
{"product_id":3,"product_name":"sampleCC","aisle_id":"CC","department_id":"CCC"}
{"product_id":3,"product_name":"sampledd","aisle_id":"dd","departmentId":"ddd"}
{"name","srinivas","age":29}
res100: Int = 0
scala> schema.printTreeString
root
|-- aisle_id: string (nullable = true)
|-- department_id: string (nullable = true)
|-- product_id: long (nullable = true)
|-- product_name: string (nullable = true)
scala> val df = spark.read.schema(schema).option("badRecordsPath", "/tmp/badRecordsPath").format("json").load("/tmp/sample.json") // Load the JSON data; where a record does not match the schema, the affected columns (or the whole row) come back as null.
df: org.apache.spark.sql.DataFrame = [aisle_id: string, department_id: string ... 2 more fields]
scala> df.show(false)
+--------+-------------+----------+------------+
|aisle_id|department_id|product_id|product_name|
+--------+-------------+----------+------------+
|AA |AAD |1 |sampleA |
|AAB |AADB |2 |sampleBB |
|CC |CCC |3 |sampleCC |
|dd |null |3 |sampledd |
|null |null |null |null |
+--------+-------------+----------+------------+
scala> df.filter(df.columns.map(c => s"${c} is not null").mkString(" or ")).show(false) // Drop rows where every column is null.
+--------+-------------+----------+------------+
|aisle_id|department_id|product_id|product_name|
+--------+-------------+----------+------------+
|AA |AAD |1 |sampleA |
|AAB |AADB |2 |sampleBB |
|CC |CCC |3 |sampleCC |
|dd |null |3 |sampledd |
+--------+-------------+----------+------------+
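As a side note (not part of the original answer): the " or " above only removes rows where every column is null. If you would rather drop any row containing a null in any column, a sketch would be to flip the condition to " and ":

df.filter(df.columns.map(c => s"${c} is not null").mkString(" and ")).show(false) // keep only rows with no nulls at all

With the sample data this would also remove the sampledd row, whose department_id is null.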
Also check out the na.drop functions on DataFrame: you can drop rows containing null values, drop based on a minimum number of non-null values per row, or drop based on nulls in specific columns.
scala> sc.parallelize(Seq((1,"a","a"),(1,"a","a"),(2,"b","b"),(3,"c","c"),(4,"d","d"),(4,"d",null))).toDF
res7: org.apache.spark.sql.DataFrame = [_1: int, _2: string ... 1 more field]
scala> res7.show()
+---+---+----+
| _1| _2| _3|
+---+---+----+
| 1| a| a|
| 1| a| a|
| 2| b| b|
| 3| c| c|
| 4| d| d|
| 4| d|null|
+---+---+----+
//dropping row if a null is found
scala> res7.na.drop.show()
+---+---+---+
| _1| _2| _3|
+---+---+---+
| 1| a| a|
| 1| a| a|
| 2| b| b|
| 3| c| c|
| 4| d| d|
+---+---+---+
//drops a row only if it has fewer than 3 non-null values (minNonNulls = 3)
scala> res7.na.drop(minNonNulls = 3).show()
+---+---+---+
| _1| _2| _3|
+---+---+---+
| 1| a| a|
| 1| a| a|
| 2| b| b|
| 3| c| c|
| 4| d| d|
+---+---+---+
//drops nothing here: every row has at least 2 non-null values
scala> res7.na.drop(minNonNulls = 2).show()
+---+---+----+
| _1| _2| _3|
+---+---+----+
| 1| a| a|
| 1| a| a|
| 2| b| b|
| 3| c| c|
| 4| d| d|
| 4| d|null|
+---+---+----+
//drops row based on nulls in `_3` column
scala> res7.na.drop(Seq("_3")).show()
+---+---+---+
| _1| _2| _3|
+---+---+---+
| 1| a| a|
| 1| a| a|
| 2| b| b|
| 3| c| c|
| 4| d| d|
+---+---+---+
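Coming back to the original question, a minimal sketch (assuming the question's DataFrame is called df and still has aisle_id and department_id as strings): cast those columns to int, so values that are not numeric become null, and then use na.drop on exactly those columns:

// Cast aisle_id and department_id to int; non-numeric values become null.
val casted = df
  .withColumn("aisle_id", $"aisle_id".cast("int"))
  .withColumn("department_id", $"department_id".cast("int"))
// Drop the rows where either cast produced null.
val products = casted.na.drop(Seq("aisle_id", "department_id"))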
Dataframe input
+----+-------+
|  Id|  value|
+----+-------+
|1622| 139685|
|1622| 182118|
|1622| 127955|
|3837|3224815|
|1622| 727761|
|1622| 155875|
|3837|1504923|
|1622| 139684|
|1453| 536111|
+----+-------+
Output:
+----+-------------------------------------------+
|  Id|value                                      |
+----+-------------------------------------------+
|1622|[139685,182118,127955,727761,155875,139684]|
|1453|536111                                     |
|3837|[3224815,1504923]                          |
+----+-------------------------------------------+
When a particular id has more than one value, the values should be collected as an array; otherwise it should be kept as a single value without the brackets [].
I tried the solution from the link below but couldn't handle the if-else condition in the DataFrame.
link: Spark DataFrame aggregate column values by key into List
Use a window function:
scala> import org.apache.spark.sql.expressions.Window
scala> var df = Seq((1622, 139685),(1622, 182118),(1622, 127955),(3837,3224815),(1622, 727761),(1622, 155875),(3837,1504923),(1622, 139684),(1453, 536111)).toDF("id","value")
scala> df.show()
+----+-------+
| id| value|
+----+-------+
|1622| 139685|
|1622| 182118|
|1622| 127955|
|3837|3224815|
|1622| 727761|
|1622| 155875|
|3837|1504923|
|1622| 139684|
|1453| 536111|
+----+-------+
scala> var df1 = df.withColumn("r", count($"id").over(Window.partitionBy("id").orderBy("id")).cast("int"))
scala> df1.show()
+----+-------+---+
| id| value| r|
+----+-------+---+
|1453| 536111| 1|
|1622| 139685| 6|
|1622| 182118| 6|
|1622| 127955| 6|
|1622| 727761| 6|
|1622| 155875| 6|
|1622| 139684| 6|
|3837|3224815| 2|
|3837|1504923| 2|
+----+-------+---+
scala> var df2 = df1.selectExpr("*").filter('r === 1).drop("r").union(df1.filter('r =!= 1).groupBy("id").agg(collect_list($"value").cast("string").as("value")))
scala> df2.show(false)
+----+------------------------------------------------+
|id |value |
+----+------------------------------------------------+
|1453|536111 |
|1622|[139685, 182118, 127955, 727761, 155875, 139684]|
|3837|[3224815, 1504923] |
+----+------------------------------------------------+
scala> df2.printSchema
root
|-- id: integer (nullable = false)
|-- value: string (nullable = true)
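For completeness, a sketch of a single-pass alternative (same df and spark-shell session assumed): group once with collect_list and branch on the array size, instead of combining a window with a union:

import org.apache.spark.sql.functions.{collect_list, size, when}

// Group once, then keep either the single element or the bracketed list, both as strings.
val df3 = df.groupBy("id")
  .agg(collect_list($"value").as("values"))
  .withColumn("value",
    when(size($"values") === 1, $"values"(0).cast("string"))
      .otherwise($"values".cast("string")))
  .select("id", "value")

The value column is again a string, holding either a single number or the bracketed list.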
Let me know if you have any questions.
I am trying to construct a distinction matrix using Spark and am confused about how to do it optimally. I am new to Spark. I have given a small example of what I'm trying to do below.
Example of distinction matrix construction:
Given Dataset D:
+----+-----+------+-----+
| id | a1 | a2 | a3 |
+----+-----+------+-----+
| 1 | yes | high | on |
| 2 | no | high | off |
| 3 | yes | low | off |
+----+-----+------+-----+
and my distinction table is
+-------+----+----+----+
| id,id | a1 | a2 | a3 |
+-------+----+----+----+
| 1,2 | 1 | 0 | 1 |
| 1,3 | 0 | 1 | 1 |
| 2,3 | 1 | 1 | 0 |
+-------+----+----+----+
i.e. whenever an attribute ai helps to distinguish a pair of tuples, the distinction table has a 1, otherwise a 0.
My datasets are huge and I am trying to do this in Spark. The following approaches came to my mind:
1. using a nested for loop to iterate over all members of the RDD (of the dataset)
2. using the cartesian() transformation over the original RDD and iterating over all members of the resultant RDD to get the distinction table
My questions are:
In the 1st approach, does Spark automatically optimize the nested for loop internally for parallel processing?
In the 2nd approach, using cartesian() causes extra storage overhead to store the intermediate RDD. Is there any way to avoid this storage overhead and still get the final distinction table?
Which of these approaches is better, and is there any other approach that can be used to construct the distinction matrix efficiently (in both space and time)?
For this dataframe:
scala> val df = List((1, "yes", "high", "on" ), (2, "no", "high", "off"), (3, "yes", "low", "off") ).toDF("id", "a1", "a2", "a3")
df: org.apache.spark.sql.DataFrame = [id: int, a1: string ... 2 more fields]
scala> df.show
+---+---+----+---+
| id| a1| a2| a3|
+---+---+----+---+
| 1|yes|high| on|
| 2| no|high|off|
| 3|yes| low|off|
+---+---+----+---+
We can build a cartesian product by using crossJoin with itself. However, the column names will be ambiguous (I don't really know how to easily deal with that). To prepare for that, let's create a second dataframe:
scala> val df2 = df.toDF("id_2", "a1_2", "a2_2", "a3_2")
df2: org.apache.spark.sql.DataFrame = [id_2: int, a1_2: string ... 2 more fields]
scala> df2.show
+----+----+----+----+
|id_2|a1_2|a2_2|a3_2|
+----+----+----+----+
| 1| yes|high| on|
| 2| no|high| off|
| 3| yes| low| off|
+----+----+----+----+
In this example we can get combinations by filtering using id < id_2.
scala> val xp = df.crossJoin(df2)
xp: org.apache.spark.sql.DataFrame = [id: int, a1: string ... 6 more fields]
scala> xp.show
+---+---+----+---+----+----+----+----+
| id| a1| a2| a3|id_2|a1_2|a2_2|a3_2|
+---+---+----+---+----+----+----+----+
| 1|yes|high| on| 1| yes|high| on|
| 1|yes|high| on| 2| no|high| off|
| 1|yes|high| on| 3| yes| low| off|
| 2| no|high|off| 1| yes|high| on|
| 2| no|high|off| 2| no|high| off|
| 2| no|high|off| 3| yes| low| off|
| 3|yes| low|off| 1| yes|high| on|
| 3|yes| low|off| 2| no|high| off|
| 3|yes| low|off| 3| yes| low| off|
+---+---+----+---+----+----+----+----+
scala> val filtered = xp.filter($"id" < $"id_2")
filtered: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [id: int, a1: string ... 6 more fields]
scala> filtered.show
+---+---+----+---+----+----+----+----+
| id| a1| a2| a3|id_2|a1_2|a2_2|a3_2|
+---+---+----+---+----+----+----+----+
| 1|yes|high| on| 2| no|high| off|
| 1|yes|high| on| 3| yes| low| off|
| 2| no|high|off| 3| yes| low| off|
+---+---+----+---+----+----+----+----+
At this point the problem is basically solved. To get the final table we can use a when().otherwise() statement on each column pair, or a UDF as I have done here:
scala> val dist = udf((a:String, b: String) => if (a != b) 1 else 0)
dist: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function2>,IntegerType,Some(List(StringType, StringType)))
scala> val distinction = filtered.select($"id", $"id_2", dist($"a1", $"a1_2").as("a1"), dist($"a2", $"a2_2").as("a2"), dist($"a3", $"a3_2").as("a3"))
distinction: org.apache.spark.sql.DataFrame = [id: int, id_2: int ... 3 more fields]
scala> distinction.show
+---+----+---+---+---+
| id|id_2| a1| a2| a3|
+---+----+---+---+---+
| 1| 2| 1| 0| 1|
| 1| 3| 0| 1| 1|
| 2| 3| 1| 1| 0|
+---+----+---+---+---+
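For reference, here is a sketch of the when().otherwise() variant mentioned above; it avoids the UDF, so Catalyst can optimize the comparisons:

import org.apache.spark.sql.functions.when

// Same result without a UDF: one when/otherwise expression per attribute pair.
val distinction2 = filtered.select(
  $"id", $"id_2",
  when($"a1" =!= $"a1_2", 1).otherwise(0).as("a1"),
  when($"a2" =!= $"a2_2", 1).otherwise(0).as("a2"),
  when($"a3" =!= $"a3_2", 1).otherwise(0).as("a3"))

It produces the same distinction table.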
How can I replace empty values in a column Field1 of DataFrame df?
Field1      Field2
            AA
12          BB
This command does not produce the expected result:
df.na.fill("Field1",Seq("Anonymous"))
The expected result:
Field1 Field2
Anonymous AA
12 BB
You can also try this.
It handles blank/empty strings as well as nulls:
df.show()
+------+------+
|Field1|Field2|
+------+------+
| | AA|
| 12| BB|
| 12| null|
+------+------+
df.na.replace(Seq("Field1","Field2"),Map(""-> null)).na.fill("Anonymous", Seq("Field2","Field1")).show(false)
+---------+---------+
|Field1 |Field2 |
+---------+---------+
|Anonymous|AA |
|12 |BB |
|12 |Anonymous|
+---------+---------+
Fill: Returns a new DataFrame that replaces null or NaN values in
numeric columns with value.
Two things:
An empty string is not null or NaN, so you'll have to use a case statement for that.
fill seems not to work when given a text value for a numeric column.
Failing example: filling a null in a numeric column with a text value:
scala> a.show
+----+---+
| f1| f2|
+----+---+
|null| AA|
| 12| BB|
+----+---+
scala> a.na.fill("Anonymous", Seq("f1")).show
+----+---+
| f1| f2|
+----+---+
|null| AA|
| 12| BB|
+----+---+
Working Example - Using Null With All Numbers:
scala> a.show
+----+---+
| f1| f2|
+----+---+
|null| AA|
| 12| BB|
+----+---+
scala> a.na.fill(1, Seq("f1")).show
+---+---+
| f1| f2|
+---+---+
| 1| AA|
| 12| BB|
+---+---+
Failing Example (Empty String instead of Null):
scala> b.show
+---+---+
| f1| f2|
+---+---+
| | AA|
| 12| BB|
+---+---+
scala> b.na.fill(1, Seq("f1")).show
+---+---+
| f1| f2|
+---+---+
| | AA|
| 12| BB|
+---+---+
Case Statement Fix Example:
scala> b.show
+---+---+
| f1| f2|
+---+---+
| | AA|
| 12| BB|
+---+---+
scala> b.select(when(col("f1") === "", "Anonymous").otherwise(col("f1")).as("f1"), col("f2")).show
+---------+---+
| f1| f2|
+---------+---+
|Anonymous| AA|
| 12| BB|
+---------+---+
You can try the code below when you have any number of columns in the DataFrame.
Note: when writing data to formats like Parquet, the null (void) data type is not supported, so we have to cast it.
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.types.StringType

val df = Seq(
(1, ""),
(2, "Ram"),
(3, "Sam"),
(4,"")
).toDF("ID", "Name")
// null type column
val inputDf = df.withColumn("NulType", lit(null).cast(StringType))
//Output
+---+----+-------+
| ID|Name|NulType|
+---+----+-------+
| 1| | null|
| 2| Ram| null|
| 3| Sam| null|
| 4| | null|
+---+----+-------+
// Replace all empty strings in the DataFrame (here with the string "null"; pass null itself if you want real nulls)
val colName = inputDf.columns // array of all column names
val data = inputDf.na.replace(colName, Map("" -> "null"))
data.show()
+---+----+-------+
| ID|Name|NulType|
+---+----+-------+
| 1|null| null|
| 2| Ram| null|
| 3| Sam| null|
| 4|null| null|
+---+----+-------+
I need the value of the first UDF (GetOtherTriggers) as a parameter to the second UDF (GetTriggerType).
The following code is not working:
val df = sql.sql(
"select GetOtherTriggers(categories) as other_triggers, GetTriggerType(other_triggers) from my_table")
It returns the following exception:
org.apache.spark.sql.AnalysisException: cannot resolve 'other_triggers' given input columns: [my_table columns];
You can use a subquery:
val df = sql.sql("""select GetTriggerType(other_triggers), other_triggers
from (
select GetOtherTriggers(categories) as other_triggers, *
from my_table
) withOther """)
Test:
val df = sc.parallelize (1 to 10).map(x => (x, x*2, x*3)).toDF("nr1", "nr2", "nr3");
df.createOrReplaceTempView("nr");
spark.udf.register("x3UDF", (x: Integer) => x*3);
spark.sql("""select x3UDF(nr1x3), nr1x3, nr3
from (
select x3UDF(nr1) as nr1x3, *
from nr
) a """)
.show()
Gives:
+----------+-----+---+
|UDF(nr1x3)|nr1x3|nr3|
+----------+-----+---+
|         9|    3|  3|
|        18|    6|  6|
|        27|    9|  9|
|        36|   12| 12|
|        45|   15| 15|
|        54|   18| 18|
|        63|   21| 21|
|        72|   24| 24|
|        81|   27| 27|
|        90|   30| 30|
+----------+-----+---+
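If you prefer the DataFrame API over SQL, the same chaining can also be expressed with withColumn, feeding the output column of the first UDF into the second. A sketch with hypothetical placeholder bodies (assuming a SparkSession called spark; substitute the real GetOtherTriggers/GetTriggerType logic from your code):

import org.apache.spark.sql.functions.udf

// Hypothetical placeholders standing in for the question's UDFs.
val getOtherTriggers = udf((categories: String) => categories)
val getTriggerType = udf((otherTriggers: String) => otherTriggers.length)

// Compute the first UDF's result as a column, then feed that column to the second UDF.
val result = spark.table("my_table")
  .withColumn("other_triggers", getOtherTriggers($"categories"))
  .withColumn("trigger_type", getTriggerType($"other_triggers"))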