PySpark DataFrames: Resolved attribute(s) error with no matching column names - pyspark

I have a dataframe graphcounts with a hero ID and connection count, as below:
+------+-----------+
|heroId|connections|
+------+-----------+
| 691| 7|
| 1159| 12|
| 3959| 143|
| 1572| 36|
| 2294| 15|
| 1090| 5|
| 3606| 172|
| 3414| 8|
| 296| 18|
| 4821| 17|
| 2162| 42|
| 1436| 10|
| 1512| 12|
I have another dataframe graph_names with a hero ID and name, as below:
+---+--------------------+
| id| name|
+---+--------------------+
| 1|24-HOUR MAN/EMMANUEL|
| 2|3-D MAN/CHARLES CHAN|
| 3| 4-D MAN/MERCURIO|
| 4| 8-BALL/|
| 5| A|
| 6| A'YIN|
| 7| ABBOTT, JACK|
| 8| ABCISSA|
| 9| ABEL|
| 10|ABOMINATION/EMIL BLO|
| 11|ABOMINATION | MUTANT|
| 12| ABOMINATRIX|
| 13| ABRAXAS|
| 14| ADAM 3,031|
| 15| ABSALOM|
I am attempting to create a map column that can be used to look up each heroId in graphcounts and return the corresponding name from graph_names, but it throws the error below. I have referred to this issue https://issues.apache.org/jira/browse/SPARK-10925 from another thread, but my column names are not the same. I also don't understand the exception message well enough to know how to debug this.
>>> mapper = fn.create_map([graph_names.id, graph_names.name])
>>> mapper
Column<b'map(id, name)'>
>>>
>>> graphcounts.printSchema()
root
|-- heroId: string (nullable = true)
|-- connections: long (nullable = true)
>>>
>>> graph_names.printSchema()
root
|-- id: string (nullable = true)
|-- name: string (nullable = true)
>>>
>>>
>>> graphcounts.withColumn('name', mapper[graphcounts['heroId']]).show()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/apache-spark/3.0.1/libexec/python/pyspark/sql/dataframe.py", line 2096, in withColumn
return DataFrame(self._jdf.withColumn(colName, col._jc), self.sql_ctx)
File "/usr/local/Cellar/apache-spark/3.0.1/libexec/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
File "/usr/local/Cellar/apache-spark/3.0.1/libexec/python/pyspark/sql/utils.py", line 134, in deco
raise_from(converted)
File "<string>", line 3, in raise_from
pyspark.sql.utils.AnalysisException: Resolved attribute(s) id#242,name#243 missing from heroId#189,connections#203L in operator !Project [heroId#189, connections#203L, map(id#242, name#243)[heroId#189] AS name#286].;;
!Project [heroId#189, connections#203L, map(id#242, name#243)[heroId#189] AS name#286]
+- Project [heroId#189, sum(connections)#200L AS connections#203L]
+- Aggregate [heroId#189], [heroId#189, sum(cast(connections#192 as bigint)) AS sum(connections)#200L]
+- Project [value#7, heroId#189, (size(split(value#7, , -1), true) - 1) AS connections#192]
+- Project [value#7, split(value#7, , 2)[0] AS heroId#189]
+- Relation[value#7] text

The issue behind the error was that the file has a header row for the columns. Because I was reading it with a schema and without header=True, the header row became one of the data rows, so the column name showed up as a column value and the lookup failed because there was no name for that value.
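For reference, here is a minimal PySpark sketch of the fix: re-read the names file with header=True and attach the name via a join instead of a map built from the other DataFrame's columns. The file path and delimiter are assumptions, not the original values, and graphcounts is assumed to already exist as in the question.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical path and delimiter -- substitute the real source file.
graph_names = (
    spark.read
    .option("header", True)   # keeps the header row out of the data
    .option("sep", " ")
    .csv("marvel_names.csv")
)

# A join resolves the name against graph_names directly, so no column from
# graph_names has to be referenced inside an expression on graphcounts.
graphcounts_named = (
    graphcounts
    .join(graph_names, graphcounts["heroId"] == graph_names["id"], "left")
    .drop("id")
)
graphcounts_named.show()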

Related

Spark nested complex dataframe

I am trying to get this complex data into a normal dataframe format.
My data schema:
root
|-- column_names: array (nullable = true)
| |-- element: string (containsNull = true)
|-- values: array (nullable = true)
| |-- element: array (containsNull = true)
| | |-- element: string (containsNull = true)
|-- id: array (nullable = true)
| |-- element: string (containsNull = true)
My Data File(JSON Format):
{"column_names":["2_col_name","3_col_name"],"id":["a","b","c","d","e"],"values":[["2_col_1",1],["2_col_2",2],["2_col_3",9],["2_col_4",10],["2_col_5",11]]}
I am trying to convert the above data into this format:
+----------+----------+----------+
|1_col_name|2_col_name|3_col_name|
+----------+----------+----------+
| a| 2_col_1| 1|
| b| 2_col_2| 2|
| c| 2_col_3| 9|
| d| 2_col_4| 10|
| e| 2_col_5| 11|
+----------+----------+----------+
I tried using the explode function on id and values but got a different output, as below:
+---+-------------+
| id| values|
+---+-------------+
| a| [2_col_1, 1]|
| a| [2_col_2, 2]|
| a| [2_col_3, 9]|
| a|[2_col_4, 10]|
| a|[2_col_5, 11]|
| b| [2_col_1, 1]|
| b| [2_col_2, 2]|
| b| [2_col_3, 9]|
| b|[2_col_4, 10]|
+---+-------------+
only showing top 9 rows
Not sure where I am going wrong.
You can use the arrays_zip + inline functions to flatten the arrays and then pivot on the column names:
val df1 = df.select(
    $"column_names",
    expr("inline(arrays_zip(id, values))")
  ).select(
    $"id".as("1_col_name"),
    expr("inline(arrays_zip(column_names, values))")
  )
  .groupBy("1_col_name")
  .pivot("column_names")
  .agg(first("values"))

df1.show
//+----------+----------+----------+
//|1_col_name|2_col_name|3_col_name|
//+----------+----------+----------+
//|e |2_col_5 |11 |
//|d |2_col_4 |10 |
//|c |2_col_3 |9 |
//|b |2_col_2 |2 |
//|a |2_col_1 |1 |
//+----------+----------+----------+
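Since this thread is tagged pyspark, here is a rough PySpark equivalent of the same arrays_zip + inline + pivot approach, sketched under the assumption that df is the DataFrame loaded from the question's JSON file:

from pyspark.sql import functions as F

# df is assumed to be the DataFrame read from the question's JSON data.
df1 = (
    df.select("column_names", F.expr("inline(arrays_zip(id, values))"))
      .select(F.col("id").alias("1_col_name"),
              F.expr("inline(arrays_zip(column_names, values))"))
      .groupBy("1_col_name")
      .pivot("column_names")
      .agg(F.first("values"))
)
df1.show()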

How to perform aggregation (sum) on different columns and group the result based on another column of a spark dataframe?

Using Spark with Scala, I read a table from Postgres and formed a dataframe locationDF, which contains data related to locations in the format below.
val opts = Map("url" -> "databaseurl","dbtable" -> "locations")
val locationDF = spark.read.format("jdbc").options(opts).load()
locationDF.printSchema()
root
|-- locn_id: integer (nullable = true)
|-- start_date: string (nullable = true)
|-- work_min: double (nullable = true)
|-- coverage: double (nullable = true)
|-- speed: double (nullable = true)
Initial Data:
+-------------+----------+-------------------+----------------+------------------+
| locn_id|start_date| work_min| coverage| speed|
+-------------+----------+-------------------+----------------+------------------+
| 3|2012-02-22| 53.62948333333333| 13.644|3.9306276263070457|
| 7|2012-02-22|0.11681666666666667| 0.0| 0.0|
| 1|2012-02-21| 22.783333333333335| 2.6| 8.762820512820513|
| 1|2012-01-21| 23.033333333333335| 2.6| 8.85897435897436|
| 1|2012-01-21| 44.98533333333334| 6.99| 6.435670004768718|
| 4|2012-02-21| 130.34788333333333| 54.67| 2.384267117858667|
| 2|2012-01-21| 94.61035| 8.909|10.619637445280052|
| 1|2012-02-21| 0.0| 0.0| 0.0|
| 1|2012-02-21| 29.3377| 4.579| 6.407010264249837|
| 1|2012-01-21| 59.13276666666667| 8.096| 7.303948451910409|
| 2|2012-03-21| 166.41843333333333| 13.048|12.754325056202738|
| 1|2012-03-21| 14.853183333333334| 2.721| 5.458722283474213|
| 9|2012-03-21| 1.69895| 0.845|2.0105917159763314|
+-------------+----------+-------------------+----------------+------------------+
I am trying to compute the sum of work_min (and convert it into hours), the sum of coverage, and the average speed for each particular year and month, and form another dataframe.
To do that, I separated the month and year from the date column start_date as below, producing two extra columns, year and month.
locationDF.withColumn("year", date_format(to_date($"start_date"), "yyyy").cast("Integer"))
  .withColumn("month", date_format(to_date($"start_date"), "MM").cast("Integer"))
+-------------+----------+-------------------+----------------+------------------+----+-----+
| locn_id|start_date| work_min| coverage| speed|year|month|
+-------------+----------+-------------------+----------------+------------------+----+-----+
| 3|2012-02-22| 53.62948333333333| 13.644|3.9306276263070457|2012| 2|
| 7|2012-02-22|0.11681666666666667| 0.0| 0.0|2012| 2|
| 1|2012-02-21| 22.783333333333335| 2.6| 8.762820512820513|2012| 2|
| 1|2012-01-21| 23.033333333333335| 2.6| 8.85897435897436|2012| 1|
| 1|2012-01-21| 44.98533333333334| 6.99| 6.435670004768718|2012| 1|
| 4|2012-02-21| 130.34788333333333| 54.67| 2.384267117858667|2012| 2|
| 2|2012-01-21| 94.61035| 8.909|10.619637445280052|2012| 1|
| 1|2012-02-21| 0.0| 0.0| 0.0|2012| 2|
| 1|2012-02-21| 29.3377| 4.579| 6.407010264249837|2012| 2|
| 1|2012-01-21| 59.13276666666667| 8.096| 7.303948451910409|2012| 1|
| 2|2012-03-21| 166.41843333333333| 13.048|12.754325056202738|2012| 3|
| 1|2012-03-21| 14.853183333333334| 2.721| 5.458722283474213|2012| 3|
| 9|2012-03-21| 1.69895| 0.845|2.0105917159763314|2012| 3|
+-------------+----------+-------------------+----------------+------------------+----+-----+
But I don't understand how to perform, all at the same time, a sum aggregation on the two separate columns work_min and coverage and an average of the speed column for that particular year and month, and obtain the result as below.
+----+-----+-------------+------------+-----------------+
|year|month|sum_work_mins|sum_coverage| avg_speed|
+----+-----+-------------+------------+-----------------+
|2012| 1|221.7617833 | 26.595 |11.07274342031118|
|2012| 2|236.2152166 | 75.493 |7.161575173745354|
|2012| 3|182.9705666 | 16.614 |6.741213018551094|
+----+-----+-------------+------------+-----------------+
Could anyone let me know how I can achieve that?
I think you are looking for this:
scala> dfd.groupBy("year","month").agg(sum("work_min").as("sum_work_min"),sum("coverage").as("sum_coverage"),avg("speed").as("avg_speed")).show
+----+-----+------------------+------------------+-----------------+
|year|month| sum_work_min| sum_coverage| avg_speed|
+----+-----+------------------+------------------+-----------------+
|2012| 1|221.76178333333334|26.595000000000002|8.304557565233385|
|2012| 2| 236.2152166666667| 75.493|3.580787586872677|
|2012| 3|182.97056666666666| 16.614|6.741213018551094|
+----+-----+------------------+------------------+-----------------+
hope it helps you.
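The question also asked for the summed work_min to be converted into hours, which the answer above leaves out. A hedged PySpark sketch (matching this thread's tag) of the same aggregation with that extra step, assuming a DataFrame dfd that already has the year and month columns:

from pyspark.sql import functions as F

# dfd is assumed to be a PySpark DataFrame with the year/month columns from above.
result = (
    dfd.groupBy("year", "month")
       .agg(F.sum("work_min").alias("sum_work_min"),
            (F.sum("work_min") / 60).alias("sum_work_hours"),  # minutes to hours
            F.sum("coverage").alias("sum_coverage"),
            F.avg("speed").alias("avg_speed"))
)
result.show()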

Create a new column from one of the values available in other columns as an array of key-value pairs

I have extracted some data from Hive into a dataframe, which is in the format shown below.
+-------+----------------+----------------+----------------+----------------+
| NUM_ID|            SIG1|            SIG2|            SIG3|            SIG4|
+-------+----------------+----------------+----------------+----------------+
|XXXXX01|[{15695605310...|[{15695605310...|[{15695605310...|[{15695605310...|
|XXXXX02|[{15695604780...|[{15695604780...|[{15695604780...|[{15695604780...|
|XXXXX03|[{15695605310...|[{15695605310...|[{15695605310...|[{15695605310...|
|XXXXX04|[{15695605310...|[{15695605310...|[{15695605310...|[{15695605310...|
|XXXXX05|[{15695605310...|[{15695605310...|[{15695605310...|[{15695605310...|
|XXXXX06|[{15695605340...|[{15695605340...|[{15695605340...|[{15695605340...|
|XXXXX07|[{15695605310...|[{15695605310...|[{15695605310...|[{15695605310...|
|XXXXX08|[{15695605310...|[{15695605310...|[{15695605310...|[{15695605310...|
If we take only one signal it will be as below.
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|
[{1569560537000,3.7825},{1569560481000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|
[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560527000,34.7825}]|
[{1569560535000,34.7825},{1569560479000,34.7825},{1569560487000,34.7825}]
For each NUM_ID, each SIG column will have an array of E and V pairs.
The schema for the above data is
fromHive.printSchema
root
|-- NUM_ID: string (nullable = true)
|-- SIG1: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- E: long (nullable = true)
| | |-- V: double (nullable = true)
|-- SIG2: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- E: long (nullable = true)
| | |-- V: double (nullable = true)
|-- SIG3: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- E: long (nullable = true)
| | |-- V: double (nullable = true)
|-- SIG4: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- E: long (nullable = true)
| | |-- V: double (nullable = true)
My requirement is to get all the E values from all the columns for a particular NUM_ID and create a new column out of them, with the corresponding signal values in other columns, as shown below.
+-------+-------------+-------+-------+-------+-------+
| NUM_ID| E| SIG1_V| SIG2_V| SIG3_V| SIG4_V|
+-------+-------------+-------+-------+-------+-------+
|XXXXX01|1569560531000|33.7825|34.7825| null|96.3354|
|XXXXX01|1569560505000| null| null|35.5501| null|
|XXXXX01|1569560531001|73.7825| null| null| null|
|XXXXX02|1569560505000|34.7825| null|35.5501|96.3354|
|XXXXX02|1569560531000|33.7825|34.7825|35.5501|96.3354|
|XXXXX02|1569560505001|73.7825| null| null| null|
|XXXXX02|1569560502000| null| null|35.5501|96.3354|
|XXXXX03|1569560531000|73.7825| null| null| null|
|XXXXX03|1569560505000|34.7825| null|35.5501|96.3354|
|XXXXX03|1569560509000| null|34.7825|35.5501|96.3354|
The E values from all four signal columns, for a particular NUM_ID, should be taken as a single column without duplicates, and the V values for the corresponding E should be populated in separate columns. If a signal does not have an E-V pair for a particular E, then that column should be null, as shown above.
Thanks in advance. Any lead appreciated.
For better understanding, below is the sample structure for the input and the expected output.
INPUT:
+-------+-----------------+-----------------+-----------------+-----------------+
| NUM_ID|             SIG1|             SIG2|             SIG3|             SIG4|
+-------+-----------------+-----------------+-----------------+-----------------+
|XXXXX01|[{E1,V1},{E2,V2}]|[{E1,V3},{E3,V4}]|[{E4,V5},{E5,V6}]|[{E5,V7},{E2,V8}] |
|XXXXX02|[{E7,V1},{E8,V2}]|[{E1,V3},{E3,V4}]|[{E1,V5},{E5,V6}]|[{E9,V7},{E8,V8}]|
|XXXXX03|[{E1,V1},{E2,V2}]|[{E1,V3},{E3,V4}]|[{E4,V5},{E5,V6}]|[{E5,V7},{E2,V8}] |
OUTPUT EXPECTED:
+-------+---+--------+-------+-------+-------+
| NUM_ID| E| SIG1_V| SIG2_V| SIG3_V| SIG4_V|
+-------+---+-------+-------+-------+-------+
|XXXXX01| E1| V1| V3| null| null|
|XXXXX01| E2| V2| null| null| V8|
|XXXXX01| E3| null| V4| null| null|
|XXXXX01| E4| null| null| V5| null|
|XXXXX01| E5| null| null| V6| V7|
|XXXXX02| E1| null| V3| V5| null|
|XXXXX02| E3| null| V4| null| null|
|XXXXX02| E5| null| null| V6| null|
|XXXXX02| E7| V1| null| null| null|
|XXXXX02| E8| V2| null| null| V7|
|XXXXX02| E9| null|34.7825| null| V8|
Input CSV file is as below:
NUM_ID|SIG1|SIG2|SIG3|SIG4 XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.functions._
val df = spark.read.format("csv").option("header","true").option("delimiter", "|").load("path .csv")
df.show(false)
+-------+----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+
|NUM_ID |SIG1 |SIG2 |SIG3 |SIG4 |
+-------+----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|
+-------+----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+
//UDF to generate column E
def UDF_E: UserDefinedFunction = udf((r: Row) => {
  val SigColumn = "SIG1,SIG2,SIG3,SIG4"
  val colList = SigColumn.split(",").toList
  val rr = "[\\}],[\\{]".r
  var out = ""
  colList.foreach { x =>
    val a = (rr replaceAllIn(r.getAs(x).toString, "|")).replaceAll("\\[\\{", "").replaceAll("\\}\\]", "")
    val b = a.split("\\|").map(x => x.split(",")(0)).toSet
    out = out + "," + b.mkString(",")
  }
  val out1 = out.replaceFirst(s""",""", "").split(",").toSet.mkString(",")
  out1
})
//UDF to generate column value with Signal
def UDF_V: UserDefinedFunction = udf((E: String, SIG: String) => {
  val Signal = SIG.replaceAll("\\{", "\\(").replaceAll("\\}", "\\)").replaceAll("\\[", "").replaceAll("\\]", "")
  val SigMap = "(\\w+),([\\w 0-9 .]+)".r.findAllIn(Signal).matchData.map(i => {(i.group(1), i.group(2))}).toMap
  var out = ""
  if (SigMap.keys.toList.contains(E)) {
    out = SigMap(E).toString
  }
  out
})
//new DataFrame with Column "E"
val df1 = df.withColumn("E", UDF_E(struct(df.columns map col: _*))).withColumn("E", explode(split(col("E"), ",")))
df1.show(false)
+-------+----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+-------------+
|NUM_ID |SIG1 |SIG2 |SIG3 |SIG4 |E |
+-------+----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+-------------+
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560483000|
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560497000|
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560475000|
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560489000|
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560535000|
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560531000|
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560513000|
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560537000|
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560491000|
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560521000|
|XXXXX01|[{1569560531000,3.7825},{1569560475000,3.7812},{1569560483000,3.7812},{1569560491000,34.7875}]|[{1569560537000,3.7825},{1569560531000,34.7825},{1569560489000,34.7825},{1569560497000,34.7825}]|[{1569560505000,34.7825},{1569560513000,34.7825},{1569560521000,34.7825},{1569560531000,34.7825}]|[{1569560535000,34.7825},{1569560531000,34.7825},{1569560483000,34.7825}]|1569560505000|
+-------+----------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------+-------------+
//Final DataFrame
val df2 = df1.withColumn("SIG1_V", UDF_V(col("E"),col("SIG1"))).withColumn("SIG2_V", UDF_V(col("E"),col("SIG2"))).withColumn("SIG3_V", UDF_V(col("E"),col("SIG3"))).withColumn("SIG4_V", UDF_V(col("E"),col("SIG4"))).drop("SIG1","SIG2","SIG3","SIG4")
df2.show()
+-------+-------------+-------+-------+-------+-------+
| NUM_ID| E| SIG1_V| SIG2_V| SIG3_V| SIG4_V|
+-------+-------------+-------+-------+-------+-------+
|XXXXX01|1569560475000| 3.7812| | | |
|XXXXX01|1569560483000| 3.7812| | |34.7825|
|XXXXX01|1569560489000| |34.7825| | |
|XXXXX01|1569560491000|34.7875| | | |
|XXXXX01|1569560497000| |34.7825| | |
|XXXXX01|1569560505000| | |34.7825| |
|XXXXX01|1569560513000| | |34.7825| |
|XXXXX01|1569560521000| | |34.7825| |
|XXXXX01|1569560531000| 3.7825|34.7825|34.7825|34.7825|
|XXXXX01|1569560535000| | | |34.7825|
|XXXXX01|1569560537000| | 3.7825| | |
+-------+-------------+-------+-------+-------+-------+

How to write a nested query?

I have the following table:
+-----+---+----+
|type | t |code|
+-----+---+----+
| A| 25| 11|
| A| 55| 42|
| B| 88| 11|
| A|114| 11|
| B|220| 58|
| B|520| 11|
+-----+---+----+
And what I want:
+-----+---+----+
|t1 | t2|code|
+-----+---+----+
| 25| 88| 11|
| 114|520| 11|
+-----+---+----+
There are two types of events, A and B.
Event A is the start, event B is the end.
I want to connect each start with the next end that has the same code.
It's quite easy to do this in SQL:
SELECT a.t AS t1,
(SELECT b.t FROM events AS b WHERE a.code == b.code AND a.t < b.t LIMIT 1) AS t2, a.code AS code
FROM events AS a
But I have a problem implementing this in Spark because it looks like this kind of nested (correlated) query isn't supported...
I tried it with:
df.createOrReplaceTempView("events")
val sqlDF = spark.sql(/* SQL-query above */)
The error I get:
Exception in thread "main" org.apache.spark.sql.AnalysisException: Accessing outer query column is not allowed in:
Do you have any other ideas to solve that problem?
It's quite easy in SQL to do this
And so is in Spark SQL, luckily.
val events = ...
scala> events.show
+----+---+----+
|type| t|code|
+----+---+----+
| A| 25| 11|
| A| 55| 42|
| B| 88| 11|
| A|114| 11|
| B|220| 58|
| B|520| 11|
+----+---+----+
// assumed that t is int
scala> events.printSchema
root
|-- type: string (nullable = true)
|-- t: integer (nullable = true)
|-- code: integer (nullable = true)
val eventsA = events.
where($"type" === "A").
as("a")
val eventsB = events.
where($"type" === "B").
as("b")
val solution = eventsA.
join(eventsB, "code").
where($"a.t" < $"b.t").
select($"a.t" as "t1", $"b.t" as "t2", $"a.code").
orderBy($"t1".asc, $"t2".asc).
dropDuplicates("t1", "code").
orderBy($"t1".asc)
That should give you the requested output.
scala> solution.show
+---+---+----+
| t1| t2|code|
+---+---+----+
| 25| 88| 11|
|114|520| 11|
+---+---+----+
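For readers working in PySpark (this thread's tag), a rough translation of the same join-based solution, sketched under the assumption that events is the equivalent PySpark DataFrame:

from pyspark.sql import functions as F

# events is assumed to be a PySpark DataFrame with columns type, t, code.
events_a = events.where(F.col("type") == "A").alias("a")
events_b = events.where(F.col("type") == "B").alias("b")

solution = (
    events_a.join(events_b, "code")
            .where(F.col("a.t") < F.col("b.t"))
            .select(F.col("a.t").alias("t1"), F.col("b.t").alias("t2"), F.col("a.code"))
            .orderBy(F.col("t1").asc(), F.col("t2").asc())
            .dropDuplicates(["t1", "code"])
            .orderBy(F.col("t1").asc())
)
solution.show()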

How do I nest data from one Spark dataframe in another dataframe based on a condition

I have 2 dataframes:
val df1 = sc.parallelize(Seq((123, 2.23, 1.12), (234, 2.45, 0.12), (456, 1.112, 0.234))).toDF("objid", "ra", "dec")
val df2 = sc.parallelize(Seq((4567, 123, "name1", "val1"), (2322, 456, "name2", "val2"), (3324, 555, "name3", "val3"), (5556, 123, "name4", "val4"), (3345, 123, "name5", "val5"))).toDF("specid", "objid", "name", "value")
They look like below:
df1.show()
+-----+-----+-----+
|objid| ra| dec|
+-----+-----+-----+
| 123| 2.23| 1.12|
| 234| 2.45| 0.12|
| 456|1.112|0.234|
+-----+-----+-----+
df2.show()
+------+-----+-----+-----+
|specid|objid| name|value|
+------+-----+-----+-----+
| 4567| 123|name1| val1|
| 2322| 456|name2| val2|
| 3324| 555|name3| val3|
| 5556| 123|name4| val4|
| 3345| 123|name5| val5|
+------+-----+-----+-----+
Now I want to nest df2 inside df1 as a nested column so the schema should look like below:
val new_schema = df1.schema.add("specs", df2.schema)
new_schema: org.apache.spark.sql.types.StructType = StructType(StructField(objid,IntegerType,false), StructField(ra,DoubleType,false), StructField(dec,DoubleType,false), StructField(specs,StructType(StructField(specid,IntegerType,false), StructField(objid,IntegerType,false), StructField(name,StringType,true), StructField(value,StringType,true)),true))
The reason I wanted to do it this way is that there is a one-to-many relationship between df1 and df2, which means there is more than one spec per objid. And I am not going to join only these two tables; there are about 50 tables that I ultimately want to join together to create a mega table. Most of those tables have 1-to-n relationships, and I was just thinking about a way to avoid having a lot of duplicate rows and null cells in the ultimate join result.
The ultimate result would look something like:
+-----+-----+-----+----------------------+
| | specs |
|objid| ra| dec| specid| name | value|
+-----+-----+-----+------+----+-------+ |
| 123| 2.23| 1.12| 4567 | name1 | val1 |
| | 5556 | name4 | val4 |
| | 3345 | name5 | val5 |
+-----+-----+-----+----------------------+
| 234| 2.45| 0.12| |
+-----+-----+-----+----------------------+
| 456|1.112|0.234| 2322 | name2 | val2 |
+-----+-----+-----+----------------------+
I was trying to add the column to df1 using .withColumn but ran into errors.
What I actually wanted to do was to select all the columns from df2 with the condition df2.objid = df1.objid to match the rows and make that the new column in df1, but I am not sure if that's the best approach. Even if so, I am not sure how to do it.
Could someone please tell me how to do this?
As far as I know, you cannot have a dataframe inside another dataframe (the same is the case with RDDs).
What you need is a join between the two dataframes. You can perform different types of joins to combine the rows from the two dataframes (this is where the df2 columns effectively get nested inside df1).
You need to join both dataframes on the column objid, like below:
val join = df1.join(df2, "objid")
join.printSchema()
output:
root
|-- objid: integer (nullable = false)
|-- ra: double (nullable = false)
|-- dec: double (nullable = false)
|-- specid: integer (nullable = false)
|-- name: string (nullable = true)
|-- value: string (nullable = true)
and when we say
join.show()
the output will be
+-----+-----+-----+------+-----+-----+
|objid| ra| dec|specid| name|value|
+-----+-----+-----+------+-----+-----+
| 456|1.112|0.234| 2322|name2| val2|
| 123| 2.23| 1.12| 4567|name1| val1|
+-----+-----+-----+------+-----+-----+
for more details you can check here
Update:
I think you are looking for something like this:
df1.join(df2, df1("objid") === df2("objid"), "left_outer").show()
and the output is:
+-----+-----+-----+------+-----+-----+-----+
|objid| ra| dec|specid|objid| name|value|
+-----+-----+-----+------+-----+-----+-----+
| 456|1.112|0.234| 2322| 456|name2| val2|
| 234| 2.45| 0.12| null| null| null| null|
| 123| 2.23| 1.12| 4567| 123|name1| val1|
| 123| 2.23| 1.12| 5556| 123|name4| val4|
| 123| 2.23| 1.12| 3345| 123|name5| val5|
+-----+-----+-----+------+-----+-----+-----+
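Note that joining on a column expression keeps both objid columns in the result, as shown above. In PySpark (this thread's tag), passing the join key by name keeps only one; a minimal sketch, assuming the same df1 and df2:

# Passing the join key by name deduplicates the objid column in the output.
joined = df1.join(df2, on="objid", how="left_outer")
joined.show()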