Convert a row to a list in Spark Scala

Is it possible to do this? All the data in my dataframe (~1000 cols) are Doubles, and I'm wondering whether I could turn a row of data into a list of Doubles.

You can use the toSeq method on the Row and then convert the type from Seq[Any] to Seq[Double] (if you are sure the data types of all the columns are Double):
val df = Seq((1.0,2.0),(2.1,2.2)).toDF("A", "B")
// df: org.apache.spark.sql.DataFrame = [A: double, B: double]
df.show
+---+---+
|  A|  B|
+---+---+
|1.0|2.0|
|2.1|2.2|
+---+---+
df.first.toSeq.asInstanceOf[Seq[Double]]
// res1: Seq[Double] = WrappedArray(1.0, 2.0)
In case you have String-type columns, use toSeq and then map with pattern matching to convert the Strings to Doubles:
val df = Seq((1.0,"2.0"),(2.1,"2.2")).toDF("A", "B")
// df: org.apache.spark.sql.DataFrame = [A: double, B: string]
df.first.toSeq.map {
  case x: String => x.toDouble
  case x: Double => x
}
// res3: Seq[Double] = ArrayBuffer(1.0, 2.0)
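The same pattern match can be applied to every row, not just the first one, by collecting the rows first (a minimal sketch; it assumes the data fits on the driver):
val allRows: Array[Seq[Double]] = df.collect.map(_.toSeq.map {
  case x: String => x.toDouble // parse String columns
  case x: Double => x          // keep Double columns as-is
})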

If you have a dataframe of doubles which you want to convert into lists of doubles, just convert the dataframe into an RDD, which gives you an RDD[Row]; you can then convert each Row to a List:
dataframe.rdd.map(_.toSeq.toList)
Each row becomes a List. Note that the elements come back typed as Any, so cast them if you need a List[Double], as sketched below.
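A minimal sketch of that cast (assuming every column really is a Double and the result is small enough to collect to the driver; the name rowsAsDoubleLists is only for illustration):
import org.apache.spark.sql.DataFrame

def rowsAsDoubleLists(dataframe: DataFrame): List[List[Double]] =
  dataframe.rdd
    .map(_.toSeq.map(_.asInstanceOf[Double]).toList) // cast each cell from Any to Double
    .collect()                                       // bring the rows back to the driver
    .toList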

Related

Spark scala dataframe get value for each row and assign to variables

I have a dataframe like the one below:
val df = spark.sql("select * from table")
row1|row2|row3
A1  |B1  |C1
A2  |B2  |C2
A3  |B3  |C3
I want to iterate over the rows with a for loop to get values like this:
val value1="A1"
val value2="B1"
val value3="C1"
function(value1,value2,value3)
Please help me.
You have two options:
Solution 1: Your data is big, so you should stick with dataframes. To apply a function on every row, you need to define a UDF.
Solution 2: Your data is small, so you can collect it to the driver machine and then iterate with map.
Example:
val df = Seq((1,2,3), (4,5,6)).toDF("a", "b", "c")
def sum(a: Int, b: Int, c: Int) = a+b+c
// Solution 1
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.{struct, udf}
val myUDF = udf((r: Row) => sum(r.getAs[Int](0), r.getAs[Int](1), r.getAs[Int](2)))
df.select(myUDF(struct($"a", $"b", $"c")).as("sum")).show

// Solution 2
df.collect.map(r => sum(r.getAs[Int](0), r.getAs[Int](1), r.getAs[Int](2)))
Output (Solution 1 shows the DataFrame below; Solution 2 returns Array(6, 15)):
+---+
|sum|
+---+
|  6|
| 15|
+---+
EDIT: to pass the row's values to your own function inside the UDF:
val myUDF = udf((r: Row) => {
  val value1 = r.getAs[Int](0)
  val value2 = r.getAs[Int](1)
  val value3 = r.getAs[Int](2)
  myFunction(value1, value2, value3)
})
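To apply this UDF, wrap the columns in struct exactly as in Solution 1 above (a minimal sketch; it assumes myFunction returns a type Spark can handle, such as Int, and that the columns are still named a, b and c):
df.withColumn("result", myUDF(struct($"a", $"b", $"c"))).show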

Convert spark dataframe to sequence of sequences and vice versa in Scala [duplicate]

This question already has an answer here:
How to get Array[Seq[String]] from DataFrame? (1 answer)
Closed 3 years ago.
I have a DataFrame and I want to convert it into a sequence of sequences and vice versa.
Now the thing is, I want to do it dynamically, and write something which runs for DataFrame with any number/type of columns.
In summary, these are the questions:
1. How to convert Seq[Seq[String]] to a DataFrame?
2. How to convert a DataFrame to Seq[Seq[String]]?
3. How to perform 2, but also make the DataFrame infer the schema and decide column types by itself?
UPDATE 1
This is not a duplicate of that question, because the solution provided in its answer is not dynamic: it works for two columns, or however many columns are hardcoded. I am trying to find a dynamic solution.
This is how you can dynamically create a dataframe from Seq[Seq[String]]:
scala> val seqOfSeq = Seq(Seq("a","b", "c"),Seq("3","4", "5"))
seqOfSeq: Seq[Seq[String]] = List(List(a, b, c), List(3, 4, 5))
scala> val lengthOfRow = seqOfSeq(0).size
lengthOfRow: Int = 3
scala> val tempDf = sc.parallelize(seqOfSeq).toDF
tempDf: org.apache.spark.sql.DataFrame = [value: array<string>]
scala> val requiredDf = tempDf.select((0 until lengthOfRow).map(i => col("value")(i).alias(s"col$i")): _*)
requiredDf: org.apache.spark.sql.DataFrame = [col0: string, col1: string ... 1 more field]
scala> requiredDf.show
+----+----+----+
|col0|col1|col2|
+----+----+----+
|   a|   b|   c|
|   3|   4|   5|
+----+----+----+
How to convert a DataFrame to Seq[Seq[String]]:
val newSeqOfSeq = requiredDf.collect().map(row => row.toSeq.map(_.toString).toSeq).toSeq
To use custom column names:
scala> val myCols = Seq("myColA", "myColB", "myColC")
myCols: Seq[String] = List(myColA, myColB, myColC)
scala> val requiredDf = tempDf.select((0 until lengthOfRow).map(i => col("value")(i).alias( myCols(i) )): _*)
requiredDf: org.apache.spark.sql.DataFrame = [myColA: string, myColB: string ... 1 more field]
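The same steps can be wrapped into a small helper (a minimal sketch; the name seqOfSeqToDf is mine, and it assumes the spark-shell implicits are in scope and that every inner Seq has the same length):
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

def seqOfSeqToDf(data: Seq[Seq[String]], names: Seq[String]): DataFrame = {
  val tempDf = sc.parallelize(data).toDF // one array<string> column named "value"
  tempDf.select(names.indices.map(i => col("value")(i).alias(names(i))): _*)
}
// usage: seqOfSeqToDf(seqOfSeq, myCols).show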

Selecting several columns from spark dataframe with a list of columns as a start

Assuming that I have a list of spark columns and a spark dataframe df, what is the appropriate snippet of code in order to select a subdataframe containing only the columns in the list?
Something similar to maybe:
var needed_columns: List[Column] = List[Column](new Column("a"), new Column("b"))
df(needed_columns)
I wanted to get the column names and then select them using the following line of code.
Unfortunately, the column name seems to be write-only.
df.select(needed_columns.head.as(String),needed_columns.tail: _*)
Your needed_columns is of type List[Column], hence you can simply use needed_columns: _* as the arguments for select:
val df = Seq((1, "x", 10.0), (2, "y", 20.0)).toDF("a", "b", "c")
import org.apache.spark.sql.Column
val needed_columns: List[Column] = List(new Column("a"), new Column("b"))
df.select(needed_columns: _*)
// +---+---+
// |  a|  b|
// +---+---+
// |  1|  x|
// |  2|  y|
// +---+---+
Note that select takes two types of arguments:
def select(cols: Column*): DataFrame
def select(col: String, cols: String*): DataFrame
If you have a list of column names of String type, you can use the latter select:
val needed_col_names: List[String] = List("a", "b")
df.select(needed_col_names.head, needed_col_names.tail: _*)
Or, you can map the list of Strings to Columns to use the former select:
df.select(needed_col_names.map(col): _*)
I understand that you want to select only the columns given in a separate list, rather than taking them from the dataframe's columns. Below is an example where I select fname and lname using a separate list. Check this out:
scala> val df = Seq((101,"Jack", "wright" , 27, "01976", "US")).toDF("id","fname","lname","age","zip","country")
df: org.apache.spark.sql.DataFrame = [id: int, fname: string ... 4 more fields]
scala> df.columns
res20: Array[String] = Array(id, fname, lname, age, zip, country)
scala> val needed =Seq("fname","lname")
needed: Seq[String] = List(fname, lname)
scala> val needed_df = needed.map( x=> col(x) )
needed_df: Seq[org.apache.spark.sql.Column] = List(fname, lname)
scala> df.select(needed_df:_*).show(false)
+-----+------+
|fname|lname |
+-----+------+
|Jack |wright|
+-----+------+

Spark replace all NaNs to null in DataFrame API

I have a dataframe with many double (and/or float) columns, which do contain NaNs. I want to replace all NaNs (i.e. Float.NaN and Double.NaN) with null.
I can do this for a single column x, e.g.:
val newDf = df.withColumn("x", when($"x".isNaN,lit(null)).otherwise($"x"))
This works, but I'd like to do it for all columns at once. I recently discovered DataFrameNaFunctions (df.na) and its fill method, which sounds like exactly what I need. Unfortunately, I failed to make it do the above. fill should replace all NaNs and nulls with a given value, so I do:
df.na.fill(null.asInstanceOf[java.lang.Double]).show
which gives me a NullPointerException.
There is also a promising replace method, but I can't even compile the code:
df.na.replace("x", Map(java.lang.Double.NaN -> null.asInstanceOf[java.lang.Double])).show
Strangely, this gives me:
Error:(57, 34) type mismatch;
found : scala.collection.immutable.Map[scala.Double,java.lang.Double]
required: Map[Any,Any]
Note: Double <: Any, but trait Map is invariant in type A.
You may wish to investigate a wildcard type such as `_ <: Any`. (SLS 3.2.10)
df.na.replace("x", Map(java.lang.Double.NaN -> null.asInstanceOf[java.lang.Double])).show
To replace all NaNs with null in Spark, you just have to create a Map of replacement values for every column, like this:
val map = df.columns.map((_, "null")).toMap
Then you can use fill to replace NaN(s) with null values:
df.na.fill(map)
For Example:
scala> val df = List((Float.NaN, Double.NaN), (1f, 0d)).toDF("x", "y")
df: org.apache.spark.sql.DataFrame = [x: float, y: double]
scala> df.show
+---+---+
|  x|  y|
+---+---+
|NaN|NaN|
|1.0|0.0|
+---+---+
scala> val map = df.columns.map((_, "null")).toMap
map: scala.collection.immutable.Map[String,String] = Map(x -> null, y -> null)
scala> df.na.fill(map).printSchema
root
|-- x: float (nullable = true)
|-- y: double (nullable = true)
scala> df.na.fill(map).show
+----+----+
|   x|   y|
+----+----+
|null|null|
| 1.0| 0.0|
+----+----+
I hope this helps!
To replace all NaNs with any value in a Spark DataFrame using the PySpark API, you can do the following:
col_list = [column1, column2]
df = df.na.fill(replace_by_value, col_list)
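If you would rather stay with the when/isNaN approach from the question instead of na.fill, the same idea can be applied to every float and double column in a single select (a minimal sketch in Scala; non-floating-point columns are passed through untouched):
import org.apache.spark.sql.functions.{col, lit, when}
import org.apache.spark.sql.types.{DoubleType, FloatType}

val noNaNs = df.select(df.schema.fields.map { f =>
  f.dataType match {
    case DoubleType | FloatType =>
      when(col(f.name).isNaN, lit(null)).otherwise(col(f.name)).alias(f.name) // NaN -> null
    case _ =>
      col(f.name) // leave other column types untouched
  }
}: _*)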

How to perform mathematical operation on Long and BigInt in scala in spark-sql

I have two values of different types in spark-sql, as shown below:
scala> val ageSum = df.agg(sum("age"))
ageSum: org.apache.spark.sql.DataFrame = [sum(age): bigint]
scala> val totalEntries = df.count();
scala> totalEntries
res37: Long = 45211
The first value comes from an aggregate function on the data frame and the second from the count function on the data frame. They have different types: ageSum is a bigint column in a DataFrame, while totalEntries is a Long. I want to perform a mathematical operation on them: mean = ageSum / totalEntries.
scala> val mean = ageSum/totalEntries
<console>:31: error: value / is not a member of org.apache.spark.sql.DataFrame
       val mean = ageSum/totalEntries
I also tried to convert ageSum to a Long, but was not able to do so:
scala> val ageSum = ageSum.longValue
<console>:29: error: recursive value ageSum needs type
val ageSum = ageSum.longValues
ageSum is a data frame, so you need to extract the value from it. One option is to use first() to get the first Row and then extract the value from that row:
ageSum.first().getAs[Long](0)/totalEntries
// res6: Long = 2
If you need a more exact value, you can use toDouble to convert before division:
ageSum.first().getAs[Long](0).toDouble/totalEntries
// res9: Double = 2.5
Or you can make the result another column of your ageSum:
ageSum.withColumn("mean", $"sum(age)"/totalEntries).show
+--------+----+
|sum(age)|mean|
+--------+----+
|      10| 2.5|
+--------+----+
where df in this example is:
val df = Seq(1, 2, 3, 4).toDF("age")
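Putting the pieces together (a minimal sketch; sum comes from org.apache.spark.sql.functions, which the spark-shell imports by default):
val ageSum = df.agg(sum("age"))   // DataFrame with a single bigint column
val totalEntries = df.count()     // Long
val mean = ageSum.first().getAs[Long](0).toDouble / totalEntries
// mean: Double = 2.5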