Create new columns from values of other columns in Scala Spark

I have an input dataframe:
inputDF=
+--------------------------+-----------------------------+
| info (String) | chars (Seq[String]) |
+--------------------------+-----------------------------+
|weight=100,height=70 | [weight,height] |
+--------------------------+-----------------------------+
|weight=92,skinCol=white | [weight,skinCol] |
+--------------------------+-----------------------------+
|hairCol=gray,skinCol=white| [hairCol,skinCol] |
+--------------------------+-----------------------------+
How do I get the dataframe below as output? I do not know in advance which strings are contained in the chars column.
outputDF=
+--------------------------+-----------------------------+-------+-------+-------+-------+
| info (String) | chars (Seq[String]) | weight|height |skinCol|hairCol|
+--------------------------+-----------------------------+-------+-------+-------+-------+
|weight=100,height=70 | [weight,height] | 100 | 70 | null |null |
+--------------------------+-----------------------------+-------+-------+-------+-------+
|weight=92,skinCol=white | [weight,skinCol] | 92 |null |white |null |
+--------------------------+-----------------------------+-------+-------+-------+-------+
|hairCol=gray,skinCol=white| [hairCol,skinCol] |null |null |white |gray |
+--------------------------+-----------------------------+-------+-------+-------+-------+
I would also like to save the following Seq[String] as a variable, but without using the .collect() function on the dataframes:
val aVariable: Seq[String] = Seq("weight", "height", "skinCol", "hairCol")

You can create another dataframe by pivoting on the keys extracted from the info column, then join it back using an id column:
import spark.implicits._
import org.apache.spark.sql.functions._

val data = Seq(
  ("weight=100,height=70", Seq("weight", "height")),
  ("weight=92,skinCol=white", Seq("weight", "skinCol")),
  ("hairCol=gray,skinCol=white", Seq("hairCol", "skinCol"))
)

// Add a synthetic id so the pivoted rows can be joined back to the original ones.
val df = spark.sparkContext.parallelize(data).toDF("info", "chars")
  .withColumn("id", monotonically_increasing_id() + 1)

// Split info into key=value pairs, explode them, then pivot on the key.
val pivotDf = df
  .withColumn("tmp", split(col("info"), ","))
  .withColumn("tmp", explode(col("tmp")))
  .withColumn("val1", split(col("tmp"), "=")(0))
  .withColumn("val2", split(col("tmp"), "=")(1))
  .select("id", "val1", "val2")
  .groupBy("id").pivot("val1").agg(first(col("val2")))

df.join(pivotDf, Seq("id"), "left").drop("id").show(false)
+--------------------------+------------------+-------+------+-------+------+
|info |chars |hairCol|height|skinCol|weight|
+--------------------------+------------------+-------+------+-------+------+
|weight=100,height=70 |[weight, height] |null |70 |null |100 |
|hairCol=gray,skinCol=white|[hairCol, skinCol]|gray |null |white |null |
|weight=92,skinCol=white |[weight, skinCol] |null |null |white |92 |
+--------------------------+------------------+-------+------+-------+------+
For your second question, you can get those values in a dataframe like this:
df.withColumn("tmp", explode(split(col("info"), ",")))
.withColumn("values", split(col("tmp"), "=")(0)).select("values").distinct().show()
+-------+
| values|
+-------+
| height|
|hairCol|
|skinCol|
| weight|
+-------+
But you cannot get them into a Seq variable without using collect; that is simply impossible, because the values live in the distributed dataframe and have to be brought back to the driver.
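If collecting to the driver is acceptable after all, here is a minimal sketch of turning that column into a Seq[String] (the column names follow the snippet above):
import org.apache.spark.sql.functions.{col, explode, split}

// Collects the distinct key names to the driver; fine when the number of keys is small.
val aVariable: Seq[String] = df
  .withColumn("tmp", explode(split(col("info"), ",")))
  .select(split(col("tmp"), "=")(0).as("values"))
  .distinct()
  .collect()
  .map(_.getString(0))
  .toSeq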

Related

Filter DF using the column of another DF (same col in both DF) Spark Scala

I am trying to filter a DataFrame DF1 using a column of another DataFrame DF2; the column is country_id. I want to reduce the rows of the first DataFrame to only the countries that are present in the second DF. An example:
+--------------+------------+-------+
|Date          | country_id | value |
+--------------+------------+-------+
|2015-12-14    |ARG         |5      |
|2015-12-14    |GER         |1      |
|2015-12-14    |RUS         |1      |
|2015-12-14    |CHN         |3      |
|2015-12-14    |USA         |1      |
+--------------+------------+-------+
+--------------+------------+
|USE           | country_id |
+--------------+------------+
| F            |RUS         |
| F            |CHN         |
+--------------+------------+
Expected:
+--------------+------------+-------+
|Date          | country_id | value |
+--------------+------------+-------+
|2015-12-14    |RUS         |1      |
|2015-12-14    |CHN         |3      |
+--------------+------------+-------+
How could I do this? I am new to Spark, so I thought about maybe using intersect, or would another method be more efficient?
Thanks in advance!
You can use a left semi join:
val DF3 = DF1.join(DF2, Seq("country_id"), "left_semi")
DF3.show
//+----------+----------+-----+
//|country_id| Date|value|
//+----------+----------+-----+
//| RUS|2015-12-14| 1|
//| CHN|2015-12-14| 3|
//+----------+----------+-----+
You can also use an inner join:
val DF3 = DF1.alias("a").join(DF2.alias("b"), Seq("country_id")).select("a.*")
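If DF2 is small (just a lookup of country codes), a broadcast hint can avoid shuffling DF1; this is a minimal sketch, not part of the original answer:
import org.apache.spark.sql.functions.broadcast

// Broadcast the small lookup dataframe to every executor so the semi join avoids a shuffle of DF1.
val DF3 = DF1.join(broadcast(DF2), Seq("country_id"), "left_semi")
DF3.show()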

How to count the number of missing values in each row of a data frame - Spark Scala?

I want to count the number of missing values in each row of a dataframe in Spark Scala.
Code:
val samplesqlDF = spark.sql("SELECT * FROM sampletable")
samplesqlDF.show()
Input Dataframe:
+------+-----+--------+-----------+
| name | age | degree | Place     |
+------+-----+--------+-----------+
| Ram  |     | MCA    | Bangalore |
|      | 25  |        |           |
|      | 26  | BE     |           |
| Raju | 21  | Btech  | Chennai   |
+------+-----+--------+-----------+
The output dataframe (row-level count) should be as follows:
+------+-----+--------+-----------+----------+
| name | age | degree | Place     | rowcount |
+------+-----+--------+-----------+----------+
| Ram  |     | MCA    | Bangalore | 1        |
|      | 25  |        |           | 3        |
|      | 26  | BE     |           | 2        |
| Raju | 21  | Btech  | Chennai   | 0        |
+------+-----+--------+-----------+----------+
I am a beginner to Scala and Spark. Thanks in advance.
It looks like you want to get the null count in a dynamic way. Check this out:
val df = Seq(("Ram",null,"MCA","Bangalore"),(null,"25",null,null),(null,"26","BE",null),("Raju","21","Btech","Chennai")).toDF("name","age","degree","Place")
df.show(false)
val df2 = df.columns.foldLeft(df)( (df,c) => df.withColumn(c+"_null", when(col(c).isNull,1).otherwise(0) ) )
df2.createOrReplaceTempView("student")
val sql_str_null = df.columns.map( x => x+"_null").mkString(" ","+"," as null_count ")
val sql_str_full = df.columns.mkString( "select ", ",", " , " + sql_str_null + " from student")
spark.sql(sql_str_full).show(false)
Output:
+----+----+------+---------+----------+
|name|age |degree|Place |null_count|
+----+----+------+---------+----------+
|Ram |null|MCA |Bangalore|1 |
|null|25 |null |null |3 |
|null|26 |BE |null |2 |
|Raju|21 |Btech |Chennai |0 |
+----+----+------+---------+----------+
Another possibility, which also checks for empty strings ("") but does not use foldLeft, just to demonstrate the point:
import org.apache.spark.sql.functions._
val df = Seq(("Ram",null,"MCA","Bangalore"),(null,"25",null,""),(null,"26","BE",null),("Raju","21","Btech","Chennai")).toDF("name","age","degree","place")
// Count per row the null or "" columns!
val null_counter = Seq("name", "age", "degree", "place").map(x => when(col(x) === "" || col(x).isNull , 1).otherwise(0)).reduce(_ + _)
val df2 = df.withColumn("nulls_cnt", null_counter)
df2.show(false)
returns:
+----+----+------+---------+---------+
|name|age |degree|place |nulls_cnt|
+----+----+------+---------+---------+
|Ram |null|MCA |Bangalore|1 |
|null|25 |null | |3 |
|null|26 |BE |null |2 |
|Raju|21 |Btech |Chennai |0 |
+----+----+------+---------+---------+
A simplified version of the one suggested by @stack0114106 is:
val df = Seq(("Ram",null,"MCA","Bangalore"),(null,"25",null,null),
(null,"26","BE",null),("Raju","21","Btech","Chennai"))
.toDF("name","age","degree","Place")
.withColumn("null_count", lit(0))
val df2 = df.columns.foldLeft(df)((df,c) =>
df.withColumn("null_count",
when(col(c).isNull,$"null_count" + 1).otherwise($"null_count")
)
)
df2.show(false)
The output is:
+----+----+------+---------+----------+
|name|age |degree|Place |null_count|
+----+----+------+---------+----------+
|Ram |null|MCA |Bangalore|1 |
|null|25 |null |null |3 |
|null|26 |BE |null |2 |
|Raju|21 |Btech |Chennai |0 |
+----+----+------+---------+----------+

Scala - Spark - How can I get a new dataframe with the distinct values of a dataframe column and the first date of these distinct values?

I have a Spark Dataframe with the following schema:
+----+-----+------------+
| id | no  | date       |
+----+-----+------------+
| 1  | 123 | 2018/10/01 |
| 2  | 124 | 2018/10/01 |
| 3  | 123 | 2018/09/28 |
| 4  | 123 | 2018/09/27 |
...
What I want is to have a new DataFrame with the following data:
+-----+------------+
| no  | date       |
+-----+------------+
| 123 | 2018/09/27 |
| 124 | 2018/10/01 |
+-----+------------+
Can someone help me with this? :) Thank you!!
You can resolve it by using a rank window function (https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html) on the dataframe with Spark SQL:
Register the dataframe as a temp view (createOrReplaceTempView), e.g. df_temp_table.
Then run this query:
select dftt.*,
dense_rank() OVER ( PARTITION BY dftt.no ORDER BY dftt.date DESC) AS Rank from
df_temp_table as dftt
You will get this dataframe:
+----+-----+------------+------+
| id | no  | date       | Rank |
+----+-----+------------+------+
| 1  | 123 | 2018/10/01 | 1    |
| 2  | 124 | 2018/10/01 | 1    |
| 3  | 123 | 2018/09/28 | 2    |
| 4  | 123 | 2018/09/27 | 3    |
+----+-----+------------+------+
On this dataframe you can now filter the Rank column for the value 1.
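A minimal Scala sketch of running that query and filtering (assuming df and spark are in scope; the view and column names follow the query above, and ORDER BY ... ASC would give the earliest date, as in the expected output):
import org.apache.spark.sql.functions.col

// Register the dataframe as a temporary view and rank rows per `no` by date.
df.createOrReplaceTempView("df_temp_table")

val ranked = spark.sql(
  """SELECT dftt.*,
    |       dense_rank() OVER (PARTITION BY dftt.no ORDER BY dftt.date DESC) AS Rank
    |  FROM df_temp_table AS dftt""".stripMargin)

// Keep only the top-ranked row per `no` and drop the helper column.
ranked.filter(col("Rank") === 1).drop("Rank").show()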
Welcome! You can try the code below:
import spark.implicits._
import org.apache.spark.sql.functions.row_number
import org.apache.spark.sql.expressions.Window
// Rank rows within each `no` by ascending date and keep only the earliest one.
val w = Window.partitionBy($"no").orderBy($"date".asc)
val Resultdf = df.withColumn("rownum", row_number().over(w))
  .where($"rownum" === 1).drop("rownum", "id")
Resultdf.show()
Output:
+---+----------+
| no| date|
+---+----------+
|124|2018/10/01|
|123|2018/09/27|
+---+----------+
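For completeness, the same result can be had with a plain aggregation; a minimal sketch, assuming the earliest date per no is wanted, as in the expected output:
import org.apache.spark.sql.functions.min

// Dates in yyyy/MM/dd format compare correctly as strings, so min() gives the earliest date.
val resultDf = df.groupBy("no").agg(min("date").as("date"))
resultDf.show()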

Joining two dataframes having duplicate rows

I have the following two dataframes
df1
+----+--------+-------+
| id | amount | fee   |
+----+--------+-------+
| 1  | 10.00  | 5.0   |
| 3  | 90     | 130.0 |
+----+--------+-------+
df2
+------+----------+-------+
| exId | exAmount | exFee |
+------+----------+-------+
| 1    | 10.00    | 5.0   |
| 1    | 10.0     | 5.0   |
| 3    | 90.0     | 130.0 |
+------+----------+-------+
I am joining them using all three columns and trying to identify the rows which are common between the two dataframes and the ones which are not.
I'm looking for this output:
+------+--------+-------+------+----------+-------+
| id   | amount | fee   | exId | exAmount | exFee |
+------+--------+-------+------+----------+-------+
| 1    | 10.00  | 5.0   | 1    | 10.0     | 5.0   |
| null | null   | null  | 1    | 10.0     | 5.0   |
| 3    | 90     | 130.0 | 3    | 90.0     | 130.0 |
+------+--------+-------+------+----------+-------+
Basically, I want the duplicate row in df2 with exId 1 to be listed separately.
Any thoughts?
One possible way is to group by all three columns and generate row numbers for each dataframe, then use that additional column alongside the other three while joining. You should get what you desire.
import org.apache.spark.sql.expressions._
import org.apache.spark.sql.functions._
// Number the duplicates within each (id, amount, fee) / (exId, exAmount, exFee) group on both sides.
def windowSpec1 = Window.partitionBy("id", "amount", "fee").orderBy("fee")
def windowSpec2 = Window.partitionBy("exId", "exAmount", "exFee").orderBy("exFee")
df1.withColumn("sno", row_number().over(windowSpec1)).join(
    df2.withColumn("exSno", row_number().over(windowSpec2)),
    col("id") === col("exId") && col("amount") === col("exAmount") && col("fee") === col("exFee") && col("sno") === col("exSno"),
    "outer")
  .drop("sno", "exSno")
  .show(false)
and you should be getting
+----+------+-----+----+--------+-----+
|id |amount|fee |exId|exAmount|exFee|
+----+------+-----+----+--------+-----+
|null|null |null |1 |10.0 |5.0 |
|3 |90 |130.0|3 |90 |130.0|
|1 |10.00 |5.0 |1 |10.00 |5.0 |
+----+------+-----+----+--------+-----+
I hope the answer is helpful

Spark Dataframe - Implement Oracle NVL Function while joining

I need to implement the NVL function in Spark while joining two dataframes.
Input Dataframes :
ds1.show()
---------------
|key | Code |
---------------
|2 | DST |
|3 | CPT |
|null | DTS |
|5 | KTP |
---------------
ds2.show()
------------------
|key | PremAmt |
------------------
|2 | 300 |
|-1 | -99 |
|5 | 567 |
------------------
Need to implement "LEFT JOIN NVL(DS1.key, -1) = DS2.key" .
So I wrote it like this, but the NVL/coalesce function is missing, so it returned wrong values.
How do I incorporate NVL in Spark dataframes?
// nvl/coalesce is missing here, so this produces the wrong output
ds1.join(ds2, Seq("key"), "left_outer")
-------------------------
|key | Code |PremAmt |
-------------------------
|2 | DST |300 |
|3 | CPT |null |
|null | DTS |null |
|5 | KTP |567 |
-------------------------
Expected Result :
-------------------------
|key | Code |PremAmt |
-------------------------
|2 | DST |300 |
|3 | CPT |null |
|null | DTS |-99 |
|5 | KTP |567 |
-------------------------
I know one complex way.
val df = df1.join(df2, coalesce(df1("key"), lit(-1)) === df2("key"), "left_outer")
You should rename the "key" column of one dataframe, and drop that column after the join.
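A minimal sketch of that rename-and-drop variant, assuming the dataframes are ds1 and ds2 as in the question:
import org.apache.spark.sql.functions.{coalesce, lit}

// Rename ds2's key so the join condition is unambiguous, then drop it after the join.
val ds2r = ds2.withColumnRenamed("key", "key2")
val joined = ds1
  .join(ds2r, coalesce(ds1("key"), lit(-1)) === ds2r("key2"), "left_outer")
  .drop("key2")
joined.show()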
An implementation of nvl in Scala
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{when, lit}

def nvl(ColIn: Column, ReplaceVal: Any): Column =
  when(ColIn.isNull, lit(ReplaceVal)).otherwise(ColIn)
Now you can use nvl as you would use any other function for data frame manipulation, like
val NewDf = DF.withColumn("MyColNullsReplaced", nvl($"MyCol", "<null>"))
Obviously, ReplaceVal must be of the correct type. The example above assumes $"MyCol" is of type string.
This worked for me:
import org.apache.spark.sql.functions.{coalesce, col, lit}
intermediateDF.select(col("event_start_timestamp"),
  col("cobrand_id"),
  col("rule_name"),
  col("table_name"),
  coalesce(col("dimension_field1"), lit(0)),
  coalesce(col("dimension_field2"), lit(0)),
  coalesce(col("dimension_field3"), lit(0)),
  coalesce(col("dimension_field4"), lit(0)),
  coalesce(col("dimension_field5"), lit(0))
)
Another option is to use NVL directly in Spark SQL; this code in Python works:
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[1]").appName("CommonMethods").getOrCreate()
Note: the SparkSession is being built in a "chained" fashion, i.e. three methods are applied on the same line.
Read the CSV file:
df = spark.read.csv('C:\\tableausuperstore1_all.csv',inferSchema='true',header='true')
df.createOrReplaceTempView("ViewSuperstore")
The ViewSuperstore can now be used for SQL:
print("*trace1-nvl")
df = spark.sql("select nvl(state,'a') testString, nvl(quantity,0) testInt from ViewSuperstore where state='Florida' and OrderDate>current_date() ")
df.show()
print("*trace2-FINAL")
df.select(expr("nvl(colname,'ZZ')"))
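The Scala equivalent of that last line, using Spark SQL's built-in nvl through expr (the column name is illustrative, taken from the SQL example above):
import org.apache.spark.sql.functions.expr

// nvl is a built-in Spark SQL function, so expr() exposes it to the DataFrame API.
df.select(expr("nvl(state, 'ZZ')").as("state_filled")).show()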