Spark RDD to Dataframe - scala

Below is the data in a file
PREFIX|Description|Destination|Num_Type
1|C1|IDD|NA
7|C2|IDDD|NA
20|C3|IDDD|NA
27|C3|IDDD|NA
30|C5|IDDD|NA
I am trying to read it and convert it into a Dataframe.
val file=sc.textFile("/user/cloudera-scm/file.csv")
val list=file.collect.toList
list.toDF.show
+--------------------+
|               value|
+--------------------+
|PREFIX|Descriptio...|
|         1|C1|IDD|NA|
|        7|C2|IDDD|NA|
|       20|C3|IDDD|NA|
|       27|C3|IDDD|NA|
|       30|C5|IDDD|NA|
+--------------------+
I am not able to convert this to a dataframe with the exact table form.

Let's first consider your code.
// reading a potentially big file
val file=sc.textFile("/user/cloudera-scm/file.csv")
// collecting everything to the driver
val list=file.collect.toList
// converting a local list to a dataframe (this only yields a single string column, not the table you want)
list.toDF.show
There are ways to make your code work, but the logic itself is awkward. You are reading data with the executors, pulling all of it onto the driver, only to convert it into a dataframe (which goes back to the executors). That's a lot of network communication, and the driver will most likely run out of memory for any reasonably large dataset.
What you can do instead is read the data directly as a dataframe, like this (the driver does nothing and there is no unnecessary IO):
spark.read
  .option("sep", "|")          // specify the delimiter
  .option("header", true)      // tell Spark that there is a header
  .option("inferSchema", true) // optional: infer the types of the columns
  .csv(".../data.csv").show
+------+-----------+-----------+--------+
|PREFIX|Description|Destination|Num_Type|
+------+-----------+-----------+--------+
|     1|         C1|        IDD|      NA|
|     7|         C2|       IDDD|      NA|
|    20|         C3|       IDDD|      NA|
|    27|         C3|       IDDD|      NA|
|    30|         C5|       IDDD|      NA|
+------+-----------+-----------+--------+
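For completeness: if you did want to stay on the RDD route, a minimal sketch that avoids collect() entirely (assuming spark.implicits._ is in scope, as it is in the shell) would look roughly like this:
// split each line on the executors, drop the header row, and name the columns;
// nothing is brought back to the driver
val lines = sc.textFile("/user/cloudera-scm/file.csv")
val header = lines.first()
val df = lines
  .filter(_ != header)                 // remove the header line
  .map(_.split("\\|"))                 // "|" is a regex metacharacter, so escape it
  .map(a => (a(0), a(1), a(2), a(3)))  // fixed number of columns, as in the file
  .toDF("PREFIX", "Description", "Destination", "Num_Type")
df.show()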

Related

How to make first row as header in PySpark reading text file as Spark context

The data frame I get after reading the text file via the Spark context is:
+----+---+------+
|  _1| _2|    _3|
+----+---+------+
|name|age|salary|
| sai| 25|  1000|
| bum| 30|  1500|
| che| 40|  null|
+----+---+------+
The dataframe I require is:
+----+---+------+
|name|age|salary|
+----+---+------+
| sai| 25|  1000|
| bum| 30|  1500|
| che| 40|  null|
+----+---+------+
Here is the code:
## from spark context
df_txt=spark.sparkContext.textFile("/FileStore/tables/simple-2.txt")
df_txt1=df_txt.map(lambda x: x.split(" "))
ddf=df_txt1.toDF().show()
You can use the Spark CSV reader to read your comma-separated file.
For reading a text file, you have to take the first row as the header, create a Seq of String from it, and pass that to the toDF function. Also, remove the header row from the RDD.
Note: the code below is written in Spark Scala; you can convert it to lambda functions to make it work in PySpark.
import org.apache.spark.sql.functions._
import spark.implicits._ // needed for toDF outside the shell

val df = spark.sparkContext.textFile("/FileStore/tables/simple-2.txt")
// take the first row as the header and split it into column names
val header = df.first()
val headerCol: Seq[String] = header.split(",").toList
// drop the header row from the RDD
val filteredRDD = df.filter(x => x != header)
// split each remaining line and name the columns after the header
val finaldf = filteredRDD.map(_.split(",")).map(w => (w(0), w(1), w(2))).toDF(headerCol: _*)
finaldf.show()
w(0), w(1), w(2) - you have to define a fixed number of columns from your file.
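If you'd rather not hard-code the arity, a sketch of an alternative (assuming all columns can be kept as strings; flexDF is just an illustrative name) is to build the schema from the header and use Row objects:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// one string field per header column
val schema = StructType(headerCol.map(name => StructField(name, StringType, nullable = true)))
// turn every remaining line into a Row with as many values as the split produces
val rowRDD = filteredRDD.map(line => Row.fromSeq(line.split(",").toSeq))
val flexDF = spark.createDataFrame(rowRDD, schema)
flexDF.show()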

Spark read partition columns showing up null

I have an issue when trying to read partitioned data with Spark.
If the data in the partitioned column is in a specific format, it will show up as null in the resulting dataframe.
For example:
case class Alpha(a: String, b:Int)
val ds1 = Seq(Alpha("2020-02-11_12h32m12s", 1), Alpha("2020-05-21_10h32m52s", 2), Alpha("2020-06-21_09h32m38s", 3)).toDS
ds1.show
+--------------------+---+
|                   a|  b|
+--------------------+---+
|2020-02-11_12h32m12s|  1|
|2020-05-21_10h32m52s|  2|
|2020-06-21_09h32m38s|  3|
+--------------------+---+
ds1.write.partitionBy("a").parquet("test")
val ds2 = spark.read.parquet("test")
ds2.show
+---+----+
|  b|   a|
+---+----+
|  2|null|
|  3|null|
|  1|null|
+---+----+
Do you have any idea how I could instead make that data show up as a String (or Timestamp)?
Thanks for the help.
Just needed to set the parameter spark.sql.sources.partitionColumnTypeInference.enabled to false.
spark.conf.set("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
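Applied to the example above, the full sequence would look roughly like this (ds3 is just an illustrative name):
// disable partition column type inference before re-reading the partitioned data,
// so "a" is kept as a plain string instead of being coerced (and ending up null)
spark.conf.set("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
val ds3 = spark.read.parquet("test")
ds3.printSchema() // "a" should now come back as a string
ds3.show(false)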

Spark creates an extra column when reading a dataframe

I am reading a JSON file into a Spark Dataframe and it creates an extra column at the end.
var df: DataFrame = Seq(
  (1.0, "a"),
  (0.0, "b"),
  (0.0, "c"),
  (1.0, "d")
).toDF("col1", "col2")
df.write.mode(SaveMode.Overwrite).format("json").save("/home/neelesh/year=2018/")
val newDF = sqlContext.read.json("/home/neelesh/year=2018/*")
newDF.show
The output of newDF.show is:
+----+----+----+
|col1|col2|year|
+----+----+----+
| 1.0|   a|2018|
| 0.0|   b|2018|
| 0.0|   c|2018|
| 1.0|   d|2018|
+----+----+----+
However the JSON file is stored as:
{"col1":1.0,"col2":"a"}
{"col1":0.0,"col2":"b"}
{"col1":0.0,"col2":"c"}
{"col1":1.0,"col2":"d"}
The extra column is not added if year=2018 is removed from the path. What can be the issue here?
I am running Spark 1.6.2 with Scala 2.10.5
Could you try:
val newDF = sqlContext.read.json("/home/neelesh/year=2018")
newDF.show
+----+----+
|col1|col2|
+----+----+
| 1.0|   a|
| 0.0|   b|
| 0.0|   c|
| 1.0|   d|
+----+----+
Quoting from the Spark 1.6 documentation:
Starting from Spark 1.6.0, partition discovery only finds partitions under the given paths by default. For the above example, if users pass path/to/table/gender=male to either SQLContext.read.parquet or SQLContext.read.load, gender will not be considered as a partitioning column.
Spark uses the directory structure field=value as partition information; see https://spark.apache.org/docs/2.1.0/sql-programming-guide.html#partition-discovery
So in your case, year=2018 is treated as a year partition and thus becomes an additional column.
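As a quick sketch of the difference (assuming /home/neelesh contains only partition directories such as year=2018):
// passing the parent directory makes Spark treat year=... as a partition column
val withYear = sqlContext.read.json("/home/neelesh")
withYear.printSchema() // col1, col2, year

// passing the partition directory itself (no trailing glob) keeps only the
// columns actually stored in the JSON files
val withoutYear = sqlContext.read.json("/home/neelesh/year=2018")
withoutYear.printSchema() // col1, col2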

spark 2.0 read csv with json

I have a CSV file that looks like:
"a","b","c","{""x"":""xx"",""y"":""yy""}"
When I use the Java CSV reader (au.com.bytecode.opencsv.CSVParser), it manages to parse the string when I set defaultEscapeChar = '\u0000'.
When I tried to read it with the Spark 2.2 CSV reader, it failed and wasn't able to split it into 4 columns. This is what I tried:
val df = spark.read.format("csv")
  .option("quoteMode", "ALL")
  .option("quote", "\u0000")
  .load("s3://...")
I also tried it with option("escape", "\u0000"), but with no luck.
Which CSV options do I need to choose in order to parse this file correctly?
You actually were close; the right option is option("escape", "\"").
So, given a recent Spark version (2.2+, or maybe even earlier), the snippet below
import org.apache.spark.sql.{Dataset, SparkSession}

object CsvJsonMain {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("CsvJsonExample").master("local").getOrCreate()
    import spark.sqlContext.implicits._
    val csvData: Dataset[String] = spark.sparkContext.parallelize(List(
      """
        |"a","b","c","{""x"":""xx"",""y"":""yy""}"
      """.stripMargin)).toDS()
    val frame = spark.read.option("escape", "\"").csv(csvData)
    frame.show()
  }
}
would produce
+---+---+---+-------------------+
|_c0|_c1|_c2|                _c3|
+---+---+---+-------------------+
|  a|  b|  c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+
The reason why Spark fails to parse such CSV out of the box is that the default escape value is the '\' character, as can be seen at line 91 of CSVOptions, and that obviously won't work with the default JSON quote escaping.
The underlying reason why it used to work before Spark 2.0 with the databricks-csv library is that the underlying CSV engine was commons-csv, and its escape character defaulting to null allowed the library to detect the JSON and its way of escaping. Since 2.0, CSV functionality is part of Spark itself and uses the uniVocity CSV parser, which doesn't provide such "magic" but apparently is faster.
P.S. Don't forget to specify escaping when writing CSV files, if you want to preserve the JSON data as it is.
frame.write.option("quoteAll","true").option("escape", "\"").csv("csvFileName")
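As a quick sanity check (a sketch, reusing the same hypothetical file name), reading that output back with the same escape option should give you the JSON column unchanged:
val roundTrip = spark.read.option("escape", "\"").csv("csvFileName")
roundTrip.show(false)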
I'm on Spark 1.6 and using Spark CSV as an external JAR but this works for me:
sqlContext.read.format("com.databricks.spark.csv")
  .option("quoteMode", "ALL")
  .option("delimiter", ",")
  .load("file")
  .show
+---+---+---+-------------------+
| C0| C1| C2|                 C3|
+---+---+---+-------------------+
|  a|  b|  c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+
EDIT: Looks like Spark CSV is intelligent enough to handle this even without extra options:
sc.textFile("file").collect
res7: Array[String] = Array(a,b,c,"{""x"":""xx"",""y"":""yy""}")
scala> sqlContext.read.format("com.databricks.spark.csv").load("file").show
+---+---+---+-------------------+
| C0| C1| C2|                 C3|
+---+---+---+-------------------+
|  a|  b|  c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+
scala> sqlContext.read.format("com.databricks.spark.csv").option("quoteMode", "ALL").load("file").show
+---+---+---+-------------------+
| C0| C1| C2|                 C3|
+---+---+---+-------------------+
|  a|  b|  c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+

How to write a large RDD to local disk through the Scala spark-shell?

Through a Scala spark-shell, I have access to an Elasticsearch db using the elasticsearch-hadoop-5.5.0 connector.
I generate my RDD by passing the following command in the spark-shell:
val myRdd = sc.esRDD("myIndex/type", myESQuery)
myRdd contains 2.1 million records across 15 partitions. I have been trying to write all the data to a text file (or files) on my local disk, but when I try to run operations that convert the RDD to an array, like myRdd.collect(), I overload my Java heap.
Is there a way to export the data incrementally (e.g. 100k records at a time) so that I never overload my system memory?
When you use saveAsTextFile, you can pass your file path as "file:///path/to/output" to have it save locally.
Another option is to use rdd.toLocalIterator, which will allow you to iterate over the RDD on the driver. You can then write each line to a file. This method avoids pulling in all the records at once.
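A minimal sketch of both options from the Scala spark-shell (the paths are placeholders, and myRdd is the esRDD from the question):
// option 1: let the executors write part files straight to local disk
myRdd.saveAsTextFile("file:///path/to/output")

// option 2: stream records to the driver one partition at a time and write a single file
import java.io.PrintWriter
val writer = new PrintWriter("/path/to/output.txt")
try {
  myRdd.toLocalIterator.foreach(record => writer.println(record.toString))
} finally {
  writer.close()
}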
In case someone needs to do this in PySpark (to avoid overwhelming their driver), here's a complete example:
# ========================================================================
# Convenience functions for generating DataFrame Row()s w/ random ints.
# ========================================================================
import random
from pyspark.sql import Row

NR, NC = 100, 10  # Number of Row()s; Number of columns.
fn_row = lambda x: Row(*[random.randint(*x) for _ in range(NC)])
fn_df = (lambda x, y: spark.createDataFrame([fn_row(x) for _ in range(NR)])
                           .toDF(*[f'{y}{c}' for c in range(NC)]))
# ========================================================================
Generate a DataFrame with 100 Rows of 10 Columns, containing integer values in the range [1..100):
>>> myDF = fn_df((1,100),'c')
>>> myDF.show(5)
+---+---+---+---+---+---+---+---+---+---+
| c0| c1| c2| c3| c4| c5| c6| c7| c8| c9|
+---+---+---+---+---+---+---+---+---+---+
| 72| 88| 74| 81| 68| 80| 45| 32| 49| 29|
| 78|  6| 55|  2| 23| 84| 84| 84| 96| 95|
| 25| 77| 64| 89| 27| 51| 26|  9| 56| 30|
| 16| 16| 94| 33| 34| 86| 49| 16| 21| 86|
| 90| 69| 21| 79| 63| 43| 25| 82| 94| 61|
+---+---+---+---+---+---+---+---+---+---+
Then, using DataFrame.toLocalIterator(), "stream" the DataFrame Row by Row, applying whatever post-processing is desired. This avoids overwhelming Spark driver memory.
Here, we simply print() the Rows to show that each is the same as above:
>>> it = myDF.toLocalIterator()
>>> for _ in range(5): print(next(it)) # Analogous to myDF.show(5)
>>>
Row(c0=72, c1=88, c2=74, c3=81, c4=68, c5=80, c6=45, c7=32, c8=49, c9=29)
Row(c0=78, c1=6, c2=55, c3=2, c4=23, c5=84, c6=84, c7=84, c8=96, c9=95)
Row(c0=25, c1=77, c2=64, c3=89, c4=27, c5=51, c6=26, c7=9, c8=56, c9=30)
Row(c0=16, c1=16, c2=94, c3=33, c4=34, c5=86, c6=49, c7=16, c8=21, c9=86)
Row(c0=90, c1=69, c2=21, c3=79, c4=63, c5=43, c6=25, c7=82, c8=94, c9=61)
And if you wish to "stream" DataFrame Rows to a local file, perhaps transforming each Row along the way, you can use this template:
>>> it = myDF.toLocalIterator() # Refresh the iterator here.
>>> with open('/tmp/output.txt', mode='w') as f:
...     for row in it: print(row, file=f)