Spark 2.4 CSV load issue with option "nullValue" - Scala

We were using Spark 2.3 before; now we're on 2.4:
Spark version 2.4.0
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_212)
We had a piece of code running in production that converted CSV files to Parquet format.
One of the options we set on the CSV load is option("nullValue", null). There's something wrong with how it works in Spark 2.4.
Here's an example to show the issue.
Let's create the following /tmp/test.csv file:
C0,C1,C2,C3,C4,C5
1,"1234",0.00,"","D",0.00
2,"",0.00,"","D",0.00
Now if we load it in spark-shell
scala> val data1 = spark.read.option("header", "true").option("inferSchema", "true").option("treatEmptyValuesAsNulls","true").option("nullValue", null).csv("file:///tmp/test.csv")
the second row comes back as all nulls:
scala> data1.show
+----+----+----+----+----+----+
| C0| C1| C2| C3| C4| C5|
+----+----+----+----+----+----+
| 1|1234| 0.0| | D| 0.0|
|null|null|null|null|null|null|
+----+----+----+----+----+----+
If we additionally change the CSV a little (replacing the empty C3 value in the last row with "1"):
C0,C1,C2,C3,C4,C5
1,"1234",0.00,"","D",0.00
2,"",0.00,"1","D",0.00
the result is even worse:
scala> val data2 = spark.read.option("header", "true").option("inferSchema", "true").option("treatEmptyValuesAsNulls","true").option("nullValue", null).csv("file:///tmp/test.csv")
scala> data2.show
+----+----+----+----+----+----+
| C0| C1| C2| C3| C4| C5|
+----+----+----+----+----+----+
|null|null|null|null|null|null|
|null|null|null|null|null|null|
+----+----+----+----+----+----+
Is this a bug in the new Spark 2.4.0? Has anybody faced a similar issue?

The Spark option emptyValue solved the issue:
val data2 = spark.read.option("header", "true")
.option("inferSchema", "true")
.option("treatEmptyValuesAsNulls","true")
.option("nullValue", null)
.option("emptyValue", null)
.csv("file:///tmp/test.csv")
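As a quick sanity check after applying the fix (a minimal sketch against the sample file above; $ needs import spark.implicits._ outside the shell):
// C0 is populated on every line of the sample file, so no row should come back fully nulled
data2.filter($"C0".isNull).count()   // expected: 0 for /tmp/test.csv
data2.show()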

Related

How to make the first row the header in PySpark when reading a text file with the Spark context

The data frame I get after reading the text file via the Spark context is:
+----+---+------+
| _1| _2| _3|
+----+---+------+
|name|age|salary|
| sai| 25| 1000|
| bum| 30| 1500|
| che| 40| null|
+----+---+------+
The dataframe I need is:
+----+---+------+
|name|age|salary|
+----+---+------+
| sai| 25| 1000|
| bum| 30| 1500|
| che| 40| null|
+----+---+------+
Here is the code:
## from spark context
df_txt=spark.sparkContext.textFile("/FileStore/tables/simple-2.txt")
df_txt1=df_txt.map(lambda x: x.split(" "))
ddf=df_txt1.toDF().show()
You can use the Spark CSV reader to read your comma-separated file (see the sketch at the end of this answer).
To read it as a text file, you have to take the first row as the header, build a Seq of String from it, and pass that to the toDF function. You also have to remove the header row from the RDD.
Note: the code below is written in Spark Scala; you can convert the functions to lambdas to make it work in PySpark.
import spark.implicits._

val rdd = spark.sparkContext.textFile("/FileStore/tables/simple-2.txt")
val header = rdd.first()                         // first row holds the column names
val headerCol: Seq[String] = header.split(",").toList
val filteredRDD = rdd.filter(x => x != header)   // drop the header row from the data
val finaldf = filteredRDD.map(_.split(",")).map(w => (w(0), w(1), w(2))).toDF(headerCol: _*)
finaldf.show()
w(0), w(1), w(2) - you have to list a fixed number of columns matching your file.
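For reference, here is a minimal sketch of the CSV-reader route mentioned above (Scala; the path comes from your snippet, and the sep value is an assumption you should adjust to the file's real delimiter):
val df = spark.read
  .option("header", "true")       // first line becomes the column names
  .option("sep", " ")             // assumed delimiter; your map used a space split
  .option("inferSchema", "true")  // optional: infer numeric types for age/salary
  .csv("/FileStore/tables/simple-2.txt")
df.show()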

Spark RDD to Dataframe

Below is the data in a file
PREFIX|Description|Destination|Num_Type
1|C1|IDD|NA
7|C2|IDDD|NA
20|C3|IDDD|NA
27|C3|IDDD|NA
30|C5|IDDD|NA
I am trying to read it and convert it into a Dataframe.
val file=sc.textFile("/user/cloudera-scm/file.csv")
val list=file.collect.toList
list.toDF.show
+--------------------+
| value|
+--------------------+
|PREFIX|Descriptio...|
| 1|C1|IDD|NA|
| 7|C2|IDDD|NA|
| 20|C3|IDDD|NA|
| 27|C3|IDDD|NA|
| 30|C5|IDDD|NA|
+--------------------+
I am not able to convert this to a dataframe with the exact table form.
Let's first consider your code.
// reading a potentially big file
val file=sc.textFile("/user/cloudera-scm/file.csv")
// collecting everything to the driver
val list=file.collect.toList
// converting a local list to a dataframe (this does not work)
list.toDF.show
There are ways to make your code work, but the very logic is awkward. You are reading data with the executors, putting all of it on the driver simply to convert it to a dataframe (and send it back to the executors). That's a lot of network communication, and the driver will most likely run out of memory for any reasonably large dataset.
What you can do is read the data directly as a dataframe, like this (the driver does nothing and there is no unnecessary IO):
spark.read
.option("sep", "|") // specify the delimiter
.option("header", true) // to tell spark that there is a header
.option("inferSchema", true) // optional, infer the types of the columns
.csv(".../data.csv").show
+------+-----------+-----------+--------+
|PREFIX|Description|Destination|Num_Type|
+------+-----------+-----------+--------+
| 1| C1| IDD| NA|
| 7| C2| IDDD| NA|
| 20| C3| IDDD| NA|
| 27| C3| IDDD| NA|
| 30| C5| IDDD| NA|
+------+-----------+-----------+--------+
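If you really do need to start from an RDD (for example, to apply custom parsing first), a minimal sketch of the usual pattern is to map each line into a case class and call toDF, so everything stays on the executors (the Route class name is just for illustration):
import spark.implicits._

case class Route(prefix: String, description: String, destination: String, numType: String)

val rdd = sc.textFile("/user/cloudera-scm/file.csv")
val header = rdd.first()
val routesDF = rdd.filter(_ != header)        // drop the header line
  .map(_.split("\\|"))                        // split on the pipe delimiter
  .map(a => Route(a(0), a(1), a(2), a(3)))
  .toDF()
routesDF.show()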

Spark creates an extra column when reading a dataframe

I am reading a JSON file into a Spark Dataframe and it creates an extra column at the end.
var df : DataFrame = Seq(
(1.0, "a"),
(0.0, "b"),
(0.0, "c"),
(1.0, "d")
).toDF("col1", "col2")
df.write.mode(SaveMode.Overwrite).format("json").save("/home/neelesh/year=2018/")
val newDF = sqlContext.read.json("/home/neelesh/year=2018/*")
newDF.show
The output of newDF.show is:
+----+----+----+
|col1|col2|year|
+----+----+----+
| 1.0| a|2018|
| 0.0| b|2018|
| 0.0| c|2018|
| 1.0| d|2018|
+----+----+----+
However the JSON file is stored as:
{"col1":1.0,"col2":"a"}
{"col1":0.0,"col2":"b"}
{"col1":0.0,"col2":"c"}
{"col1":1.0,"col2":"d"}
The extra column is not added if year=2018 is removed from the path. What can be the issue here?
I am running Spark 1.6.2 with Scala 2.10.5
Could you try:
val newDF = sqlContext.read.json("/home/neelesh/year=2018")
newDF.show
+----+----+
|col1|col2|
+----+----+
| 1.0| A|
| 0.0| B|
| 0.0| C|
| 1.0| D|
+----+----+
Quoting from the Spark 1.6 documentation:
Starting from Spark 1.6.0, partition discovery only finds partitions
under the given paths by default. For the above example, if users pass
path/to/table/gender=male to either SQLContext.read.parquet or
SQLContext.read.load, gender will not be considered as a partitioning
column
Spark uses the directory structure field=value as partition information; see https://spark.apache.org/docs/2.1.0/sql-programming-guide.html#partition-discovery
So in your case year=2018 is treated as a year partition and thus adds an additional column.
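Conversely, if you do want year to show up as a column, you can keep the partitioned layout and point partition discovery at the table root via the basePath option (a minimal sketch; adjust the paths to your setup):
// with basePath set, year=2018 is picked up as a partition column on purpose
val partitionedDF = sqlContext.read
  .option("basePath", "/home/neelesh/")
  .json("/home/neelesh/year=2018")
partitionedDF.show()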

spark 2.0 read csv with json

I have a CSV file that looks like:
"a","b","c","{""x"":""xx"",""y"":""yy""}"
When I use the Java CSV reader (au.com.bytecode.opencsv.CSVParser), it manages to parse the string when I set defaultEscapeChar = '\u0000'.
When I tried to read it with the Spark 2.2 CSV reader, it failed and wasn't able to split it into 4 columns. This is what I tried:
val df = spark.read.format("csv")
.option("quoteMode","ALL")
.option("quote", "\u0000")
.load("s3://...")
I also tried it with option("escape", "\u0000"), but with no luck.
Which CSV options do I need to choose in order to parse this file correctly?
You were actually close; the right option is option("escape", "\"").
So, given a recent Spark version (2.2+, or maybe even earlier), the snippet below
import org.apache.spark.sql.{Dataset, SparkSession}

object CsvJsonMain {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("CsvJsonExample").master("local").getOrCreate()
    import spark.sqlContext.implicits._
    val csvData: Dataset[String] = spark.sparkContext.parallelize(List(
      """
        |"a","b","c","{""x"":""xx"",""y"":""yy""}"
      """.stripMargin)).toDS()
    val frame = spark.read.option("escape", "\"").csv(csvData)
    frame.show()
  }
}
would produce
+---+---+---+-------------------+
|_c0|_c1|_c2| _c3|
+---+---+---+-------------------+
| a| b| c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+
The reason why Spark fails to parse such a CSV out of the box is that the default escape value is the '\' character, as can be seen on line 91 of CSVOptions, and that obviously doesn't work with the default JSON quote escaping.
The underlying reason why it used to work before Spark 2.0 with the databricks-csv library is that the underlying CSV engine used to be commons-csv, and the escape character defaulting to null allowed the library to detect the JSON and its way of escaping. Since 2.0, CSV functionality is part of Spark itself and uses the uniVocity CSV parser, which doesn't provide such "magic" but is apparently faster.
P.S. Don't forget to specify escaping when writing CSV files if you want to preserve the JSON data as it is.
frame.write.option("quoteAll","true").option("escape", "\"").csv("csvFileName")
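To confirm the round trip works, a small check (the output path here is just a placeholder) is to read the written files back with the same escape setting:
// write with quoting and escaping so the embedded JSON survives, then read it back
frame.write.option("quoteAll", "true").option("escape", "\"").csv("/tmp/json-in-csv")
spark.read.option("escape", "\"").csv("/tmp/json-in-csv").show()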
I'm on Spark 1.6 and using Spark CSV as an external JAR but this works for me:
sqlContext.read.format("com.databricks.spark.csv")
.option("quoteMode", "ALL")
.option("delimiter", ",")
.load("file")
.show
+---+---+---+-------------------+
| C0| C1| C2| C3|
+---+---+---+-------------------+
| a| b| c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+
EDIT: Looks like Spark CSV is intelligent enough to handle this even without quoteMode:
sc.textFile("file").collect
res7: Array[String] = Array(a,b,c,"{""x"":""xx"",""y"":""yy""}")
scala> sqlContext.read.format("com.databricks.spark.csv").load("file").show
+---+---+---+-------------------+
| C0| C1| C2| C3|
+---+---+---+-------------------+
| a| b| c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+
scala> sqlContext.read.format("com.databricks.spark.csv").option("quoteMode", "ALL").load("file").show
+---+---+---+-------------------+
| C0| C1| C2| C3|
+---+---+---+-------------------+
| a| b| c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+

scala dataframe filter array of strings

Spark 1.6.2 and Scala 2.10 here.
I want to filter a Spark dataframe column with an array of strings.
val df1 = sc.parallelize(Seq((1, "L-00417"), (3, "L-00645"), (4, "L-99999"),(5, "L-00623"))).toDF("c1","c2")
+---+-------+
| c1| c2|
+---+-------+
| 1|L-00417|
| 3|L-00645|
| 4|L-99999|
| 5|L-00623|
+---+-------+
val df2 = sc.parallelize(Seq((1, "L-1"), (3, "L-2"), (4, "L-3"),(5, "L-00623"))).toDF("c3","c4")
+---+-------+
| c3| c4|
+---+-------+
| 1| L-1|
| 3| L-2|
| 4| L-3|
| 5|L-00623|
+---+-------+
val c2List = df1.select("c2").as[String].collect()
df2.filter(not($"c4").contains(c2List)).show()
I am getting the error below:
Unsupported literal type class [Ljava.lang.String; [Ljava.lang.String;#5ce1739c
Can anyone please help to fix this?
First, contains isn't suitable because you're looking for the opposite relationship - you want to check if c2List contains c4's value, and not the other way around.
You can use isin for that. It takes a "repeated argument" (similar to Java's varargs) of the values to match, so you'd want to "expand" c2List into a repeated argument, which can be done using the : _* operator:
df2.filter(not($"c4".isin(c2List: _*)))
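For the sample dataframes above, this should keep only the df2 rows whose c4 value does not appear in df1.c2, i.e. L-1, L-2 and L-3 (a quick check using the same data as in the question; not comes from org.apache.spark.sql.functions):
import org.apache.spark.sql.functions.not

val c2List = df1.select("c2").as[String].collect()
df2.filter(not($"c4".isin(c2List: _*))).show()
// +---+---+
// | c3| c4|
// +---+---+
// |  1|L-1|
// |  3|L-2|
// |  4|L-3|
// +---+---+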
Alternatively, you can use a "left anti join" to join the two dataframes and keep only the values in df2 that did NOT match values in df1:
df2.join(df1, $"c2" === $"c4", "leftanti")
Unlike the previous option, this one is not limited to the case where df1 is small enough to be collected.
Lastly, if your Spark version doesn't support the leftanti join type, you can imitate it using a left join and a filter:
df2.join(df1, $"c2" === $"c4", "left").filter($"c2".isNull).select("c3", "c4")
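If df1 is small, you can also hint Spark to broadcast it so the join avoids a full shuffle (a sketch using the standard broadcast function; the join logic is unchanged):
import org.apache.spark.sql.functions.broadcast

// broadcast df1 to every executor instead of shuffling both sides
df2.join(broadcast(df1), $"c2" === $"c4", "left").filter($"c2".isNull).select("c3", "c4")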