convert string data in dataframe into double - scala

I have a csv file containing double values. When I load it into a dataframe I get a message telling me that java.lang.String cannot be cast to java.lang.Double, although my data are numeric. How do I get a dataframe with double columns from this csv file? How should I modify my code?
import org.apache.spark.sql._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{ArrayType, DoubleType}
import org.apache.spark.sql.functions.split
import scala.collection.mutable._
object Example extends App {
val spark = SparkSession.builder.master("local").appName("my-spark-app").getOrCreate()
val data=spark.read.csv("C://lpsa.data").toDF("col1","col2","col3","col4","col5","col6","col7","col8","col9")
val data2=data.select("col2","col3","col4","col5","col6","col7")
What should I do to transform each row in the dataframe into double type? Thanks

Use select with cast:
import org.apache.spark.sql.functions.col
data.select(Seq("col2", "col3", "col4", "col5", "col6", "col7").map(
c => col(c).cast("double")
): _*)
or pass schema to the reader:
define the schema:
import org.apache.spark.sql.types._
val cols = Seq(
"col1", "col2", "col3", "col4", "col5", "col6", "col7", "col8", "col9"
)
val doubleCols = Set("col2", "col3", "col4", "col5", "col6", "col7")
val schema = StructType(cols.map(
c => StructField(c, if (doubleCols contains c) DoubleType else StringType)
))
and use it as an argument for the schema method:
spark.read.schema(schema).csv(path)
It is also possible to use schema inference:
spark.read.option("inferSchema", "true").csv(path)
but it is much more expensive.
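Putting the schema approach together, a minimal end-to-end sketch (the column names and the C://lpsa.data path are taken from the question) could look like this:
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{DoubleType, StringType, StructField, StructType}

object CastExample extends App {
  val spark = SparkSession.builder.master("local").appName("cast-example").getOrCreate()

  // Mark col2..col7 as double, everything else as string.
  val cols = Seq("col1", "col2", "col3", "col4", "col5", "col6", "col7", "col8", "col9")
  val doubleCols = Set("col2", "col3", "col4", "col5", "col6", "col7")
  val schema = StructType(cols.map(c =>
    StructField(c, if (doubleCols contains c) DoubleType else StringType)))

  // Read with the explicit schema; no separate cast step is needed afterwards.
  val data = spark.read.schema(schema).csv("C://lpsa.data")
  data.printSchema()
}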

I believe using Spark's inferSchema option comes in handy while reading the csv file. Below is the code to automatically detect your columns as double type:
val data = spark.read
.format("csv")
.option("header", "false")
.option("inferSchema", "true")
.load("C://lpsa.data").toDF()
Note: I am using Spark version 2.2.0
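To confirm that inference actually produced double columns, you can inspect the resulting schema afterwards, for example:
data.printSchema()
// or programmatically:
data.dtypes.foreach { case (name, dtype) => println(s"$name -> $dtype") }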

Related

select the first element after sorting column and convert it to list in scala

What is the most efficient way to sort one column in a data frame, convert it to a list, and assign the first element to a variable in Scala? I tried the following
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, first, regexp_replace}
import org.apache.spark.sql.functions._
println(CONFIG.getString("spark.appName"))
val conf = new SparkConf()
.setAppName(CONFIG.getString("spark.appName"))
.setMaster(CONFIG.getString("spark.master"))
val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
val df = spark.read.format("com.databricks.spark.csv").option("delimiter", ",").load("file.csv")
val dfb=df.sort(desc("_c0"))
val list=df.select(df("_c0")).distinct
but I'm still not able to save the first element as a variable.
Use select, orderBy, map & head
Assuming column _c0 is of type string. If it is a different type, you have to change the type parameter in _.getAs[<your column datatype>] accordingly.
Check below code.
scala> import spark.implicits._
import spark.implicits._
scala> val first = df
.select($"_c0")
.orderBy($"_c0".desc)
.map(_.getAs[String](0))
.head
Or
scala> import spark.implicits._
import spark.implicits._
scala> val first = df
.select($"_c0")
.orderBy($"_c0".desc)
.head
.getAs[String](0)
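Note that ordering a string column is lexicographic; if _c0 actually holds numbers stored as strings, a hedged variant would cast before ordering so the sort is numeric (uses the same import spark.implicits._ as above):
scala> val firstNumeric = df
  .select($"_c0".cast("double").alias("c0"))
  .orderBy($"c0".desc)
  .map(_.getAs[Double](0))
  .head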

I don't know how to do the same using a parquet file

Link to (data.csv) and (output.csv)
import org.apache.spark.sql._
object Test {
def main(args: Array[String]) {
val spark = SparkSession.builder()
.appName("Test")
.master("local[*]")
.getOrCreate()
val sc = spark.sparkContext
val tempDF=spark.read.csv("data.csv")
tempDF.coalesce(1).write.parquet("Parquet")
val rdd = sc.textFile("Parquet")
I converted data.csv into an optimised parquet file and then loaded it. Now I want to do all the transformations on the parquet file, just like I did on the csv file given below, and then save it as a parquet file. Link of (data.csv) and (output.csv)
val header = rdd.first
val rdd1 = rdd.filter(_ != header)
val resultRDD = rdd1.map { r =>
val Array(country, values) = r.split(",")
country -> values
}.reduceByKey((a, b) => a.split(";").zip(b.split(";")).map { case (i1, i2) => i1.toInt + i2.toInt }.mkString(";"))
import spark.sqlContext.implicits._
val dataSet = resultRDD.map { case (country: String, values: String) => CountryAgg(country, values) }.toDS()
dataSet.coalesce(1).write.option("header","true").csv("output")
}
case class CountryAgg(country: String, values: String)
}
I reckon you are trying to add up corresponding elements from the array based on Country. I have done this using DataFrame APIs, which makes the job easier.
Code for your reference:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
val df = spark.read
.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.option("path", "/path/to/input/data.csv")
.load()
val df1 = df.select(
$"Country",
(split($"Values", ";"))(0).alias("c1"),
(split($"Values", ";"))(1).alias("c2"),
(split($"Values", ";"))(2).alias("c3"),
(split($"Values", ";"))(3).alias("c4"),
(split($"Values", ";"))(4).alias("c5")
)
.groupBy($"Country")
.agg(
sum($"c1" cast "int").alias("s1"),
sum($"c2" cast "int").alias("s2"),
sum($"c3" cast "int").alias("s3"),
sum($"c4" cast "int").alias("s4"),
sum($"c5" cast "int").alias("s5")
)
.select(
$"Country",
concat(
$"s1", lit(";"),
$"s2", lit(";"),
$"s3", lit(";"),
$"s4", lit(";"),
$"s5"
).alias("Values")
)
df1.repartition(1)
.write
.format("csv")
.option("delimiter",",")
.option("header", "true")
.option("path", "/path/to/output")
.save()
Here is the output for your reference.
scala> df1.show()
+-------+-------------------+
|Country| Values|
+-------+-------------------+
|Germany| 144;166;151;172;70|
| China| 218;239;234;209;75|
| India| 246;153;148;100;90|
| Canada| 183;258;150;263;71|
|England|178;114;175;173;153|
+-------+-------------------+
P.S.:
You can change the output format to parquet/orc or anything you wish.
I have repartitioned df1 into 1 partition just so that you could get a single output file. You can choose to repartition or not based on your use case.
Hope this helps.
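As mentioned in the P.S., switching the output format to parquet is just a matter of changing the writer; a minimal sketch (the output path here is hypothetical):
df1.repartition(1)
  .write
  .format("parquet")
  .option("path", "/path/to/output_parquet")
  .save()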
You could just read the file as parquet and perform the same operations on the resulting dataframe:
val spark = SparkSession.builder()
.appName("Test")
.master("local[*]")
.getOrCreate()
// Read in the parquet file created above
// Parquet files are self-describing so the schema is preserved
// The result of loading a Parquet file is also a DataFrame
val parquetFileDF = spark.read.parquet("data.parquet")
If you need an rdd you can then just call:
val rdd = parquetFileDF.rdd
Then you can proceed with the transformations as before and write as parquet like you have in your question.
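One caveat: parquetFileDF.rdd is an RDD[Row] rather than an RDD[String], so instead of splitting each line on "," you would pull the two columns out of each Row before aggregating. A rough sketch, assuming the first column is the country and the second the semicolon-separated values:
val rdd = parquetFileDF.rdd
// Drop the header row if it was carried into the parquet file (header value assumed to be "Country").
val dataRows = rdd.filter(row => row.getString(0) != "Country")
val resultRDD = dataRows
  .map(row => row.getString(0) -> row.getString(1))
  .reduceByKey((a, b) =>
    a.split(";").zip(b.split(";"))
      .map { case (i1, i2) => i1.toInt + i2.toInt }
      .mkString(";"))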

Using Scala, create DataFrame or RDD from Java ResultSet

I don't want to create a DataFrame or RDD directly using the spark.read method. I want to form a DataFrame or RDD from a Java ResultSet (which has 5,000,00 records). I would appreciate it if you could provide a solution.
First, we can create Rows using RowFactory. Then, all the rows can be converted into a DataFrame using the SQLContext.createDataFrame method. Hope this helps you too :).
import java.sql.Connection
import java.sql.ResultSet
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.RowFactory
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.Row
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.types.LongType
import org.apache.spark.sql.types.StringType
import org.apache.spark.sql.types.StructField
import org.apache.spark.sql.types.StructType
val rowList = new scala.collection.mutable.MutableList[Row]
var cRow: Row = null
//ResultSet is created from traditional Java JDBC.
val resultSet: ResultSet = DbConnection.createStatement().executeQuery("Sql")
//Looping resultset
while (resultSet.next()) {
//adding two columns into a "Row" object
cRow = RowFactory.create(resultSet.getObject(1), resultSet.getObject(2))
//adding each rows into "List" object.
rowList += (cRow)
}
val sConf = new SparkConf()
sConf.setAppName("")
sConf.setMaster("local[*]")
val sContext: SparkContext = new SparkContext(sConf)
val sqlContext: SQLContext = new SQLContext(sContext)
//creates a dataframe
val DF = sqlContext.createDataFrame(sContext.parallelize(rowList, 2), getSchema())
DF.show() //show the dataframe
def getSchema(): StructType = {
  val schema = StructType(
    StructField("COUNT", LongType, false) ::
    StructField("TABLE_NAME", StringType, false) :: Nil)
  //Returning the schema to define dataframe columns.
  schema
}
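On Spark 2.x you can do the same thing through SparkSession instead of creating the SparkContext and SQLContext by hand; a hedged variant of the code above (rowList and getSchema() are the values defined there):
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("resultset-to-dataframe")
  .master("local[*]")
  .getOrCreate()

// createDataFrame accepts the same RDD[Row] plus schema pair.
val df = spark.createDataFrame(spark.sparkContext.parallelize(rowList, 2), getSchema())
df.show()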

Spark read csv with commented headers

I have the following file which I need to read using Spark in Scala -
#Version: 1.0
#Fields: date time location timezone
2018-02-02 07:27:42 US LA
2018-02-02 07:27:42 UK LN
I am currently trying to extract the fields using the following -
spark.read.csv(filepath)
I am new to Spark + Scala and wanted to know if there is a better way to extract fields based on the #Fields row at the top of the file.
You should be using sparkContext's textFile api to read the text file and then filter the header line
val rdd = sc.textFile("filePath")
val header = rdd
.filter(line => line.toLowerCase.contains("#fields:"))
.map(line => line.split(" ").tail)
.first()
That should be it.
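For the sample file shown in the question, header should come out as the four field names:
println(header.mkString(", "))
// date, time, location, timezone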
Now if you want to create a dataframe, you should parse the header to form a schema, filter the data lines to form Rows, and finally use SQLContext to create a dataframe.
import org.apache.spark.sql.types._
val schema = StructType(header.map(title => StructField(title, StringType, true)))
val dataRdd = rdd.filter(line => !line.contains("#")).map(line => Row.fromSeq(line.split(" ")))
val df = sqlContext.createDataFrame(dataRdd, schema)
df.show(false)
This should give you
+----------+--------+--------+--------+
|date |time |location|timezone|
+----------+--------+--------+--------+
|2018-02-02|07:27:42|US |LA |
|2018-02-02|07:27:42|UK |LN |
+----------+--------+--------+--------+
Note: if the file is tab delimited, instead of doing
line.split(" ")
you should be using \t
line.split("\t")
Sample input file "sample.csv"
#Version: 1.0
#Fields: date time location timezone
2018-02-02 07:27:42 US LA
2018-02-02 07:27:42 UK LN
Test.scala
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession.Builder
import org.apache.spark.sql._
import scala.util.Try
object Test extends App {
// create spark session and sql context
val builder: Builder = SparkSession.builder.appName("testAvroSpark")
val sparkSession: SparkSession = builder.master("local[1]").getOrCreate()
val sc: SparkContext = sparkSession.sparkContext
val sqlContext: SQLContext = sparkSession.sqlContext
case class CsvRow(date: String, time: String, location: String, timezone: String)
// path of your csv file
val path: String =
"sample.csv"
// read csv file and skip first two lines
val csvString: Seq[String] =
sc.textFile(path).toLocalIterator.drop(2).toSeq
// try to read only valid rows
val csvRdd: RDD[(String, String, String, String)] =
sc.parallelize(csvString).flatMap(r =>
Try {
val row: Array[String] = r.split(" ")
CsvRow(row(0), row(1), row(2), row(3))
}.toOption)
.map(csvRow => (csvRow.date, csvRow.time, csvRow.location, csvRow.timezone))
import sqlContext.implicits._
// make data frame
val df: DataFrame =
csvRdd.toDF("date", "time", "location", "timezone")
// display data frame
df.show()
}

error: value show is not a member of String

In this case, if I want to show the header, why can't I write header.show() on the third line?
What do I have to do to view the content of the header variable?
val hospitalDataText = sc.textFile("/Users/bhaskar/Desktop/services.csv")
val header = hospitalDataText.first() //Remove the header
If you want a DataFrame use DataFrameReader and limit:
spark.read.text(path).limit(1).show
otherwise just println
println(header)
Unless of course you want to use cats' Show. With cats, add the package to spark.jars.packages and
import cats.syntax.show._
import cats.instances.string._
sc.textFile(path).first.show
If you use sparkContext (sc.textFile), you get an RDD, and .first() on it returns a plain String, not a dataframe. You are getting the error because header is a String, and show is applicable on a dataframe or dataset only.
You will have to read the textfile with sqlContext and not sparkContext.
What you can do is use sqlContext and show(1) as
val hospitalDataText = sqlContext.read.csv("/Users/bhaskar/Desktop/services.csv")
hospitalDataText.show(1, false)
Updated for more clarification
sparkContext would create an rdd, which can be seen in
scala> val hospitalDataText = sc.textFile("file:/test/resources/t1.csv")
hospitalDataText: org.apache.spark.rdd.RDD[String] = file:/test/resources/t1.csv MapPartitionsRDD[5] at textFile at <console>:25
And if you use .first() then the first string of the RDD[String] is extracted as
scala> val header = hospitalDataText.first()
header: String = test1,26,BigData,test1
Now answering your comment below: yes, you can create a dataframe from the header string just created.
The following will put the string in one column:
scala> val sqlContext = spark.sqlContext
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@3fc736c4
scala> import sqlContext.implicits._
import sqlContext.implicits._
scala> Seq(header).toDF.show(false)
+----------------------+
|value |
+----------------------+
|test1,26,BigData,test1|
+----------------------+
If you want each string in separate columns you can do
scala> val array = header.split(",")
array: Array[String] = Array(test1, 26, BigData, test1)
scala> Seq((array(0), array(1), array(2), array(3))).toDF().show(false)
+-----+---+-------+-----+
|_1 |_2 |_3 |_4 |
+-----+---+-------+-----+
|test1|26 |BigData|test1|
+-----+---+-------+-----+
You can even define the header names as
scala> Seq((array(0), array(1), array(2), array(3))).toDF("col1", "number", "text2", "col4").show(false)
+-----+------+-------+-----+
|col1 |number|text2 |col4 |
+-----+------+-------+-----+
|test1|26 |BigData|test1|
+-----+------+-------+-----+
A more advanced approach would be to use sqlContext.createDataFrame with a schema defined.
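A rough sketch of that schema-based route, building on the header string above (the column names match the illustrative ones used in the previous example):
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Define one string column per field.
val schema = StructType(Seq("col1", "number", "text2", "col4").map(StructField(_, StringType, true)))
val rowRDD = sc.parallelize(Seq(Row.fromSeq(header.split(","))))
sqlContext.createDataFrame(rowRDD, schema).show(false)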