I have a CSV file that looks like:
"a","b","c","{""x"":""xx"",""y"":""yy""}"
When I use the Java CSV reader (au.com.bytecode.opencsv.CSVParser), it parses the string correctly when I set defaultEscapeChar = '\u0000'.
When I tried to read it with the Spark 2.2 CSV reader, it failed and wasn't able to split it into 4 columns. This is what I tried:
val df = spark.read.format("csv")
  .option("quoteMode", "ALL")
  .option("quote", "\u0000")
  .load("s3://...")
I also tried it with option("escape", "\u0000"),
but with no luck.
Which CSV options do I need to choose in order to parse this file correctly?
You were actually close; the right option is option("escape", "\"").
So, given a recent Spark version (2.2+, or maybe even earlier), the snippet below
import org.apache.spark.sql.{Dataset, SparkSession}

object CsvJsonMain {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("CsvJsonExample").master("local").getOrCreate()
    import spark.sqlContext.implicits._
    // a single CSV line with an embedded JSON column whose quotes are escaped by doubling
    val csvData: Dataset[String] = spark.sparkContext.parallelize(List(
      """
        |"a","b","c","{""x"":""xx"",""y"":""yy""}"
      """.stripMargin)).toDS()
    // setting the escape character to '"' makes the parser handle the doubled quotes
    val frame = spark.read.option("escape", "\"").csv(csvData)
    frame.show()
  }
}
would produce
+---+---+---+-------------------+
|_c0|_c1|_c2| _c3|
+---+---+---+-------------------+
| a| b| c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+
The reason Spark fails to parse such a CSV out of the box is that the default escape value is the '\' character, as can be seen on line 91 of CSVOptions, and that obviously doesn't work with the default JSON style of quote escaping (doubling the quotes).
The underlying reason it used to work before Spark 2.0 with the databricks-csv library is that the CSV engine there was commons-csv, and the escape character defaulting to null allowed the library to cope with the JSON and its way of escaping. Since 2.0, CSV functionality is part of Spark itself and uses the uniVocity CSV parser, which doesn't provide such "magic" but apparently is faster.
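For comparison, here is a minimal sketch (reusing csvData from the snippet above) that contrasts the default escape with the explicit one; the exact mangled output depends on the parser version, so it isn't reproduced here:
// default escape '\': the doubled quotes inside the JSON are not unescaped,
// and the row is not split into the four expected columns (as reported in the question)
spark.read.csv(csvData).show(false)
// escape '"': the doubled quotes are handled and the JSON column survives intact
spark.read.option("escape", "\"").csv(csvData).show(false)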
P.S. Don't forget to specify the escape character when writing CSV files as well, if you want to preserve the JSON data as it is:
frame.write.option("quoteAll","true").option("escape", "\"").csv("csvFileName")
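As a quick sanity check you can read the written files back with the matching escape option (csvFileName is the same placeholder path as above):
// round-trip check: the JSON column should come back exactly as it was written
val roundTrip = spark.read.option("escape", "\"").csv("csvFileName")
roundTrip.show()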
I'm on Spark 1.6 and using Spark CSV as an external JAR but this works for me:
sqlContext.read.format("com.databricks.spark.csv")
  .option("quoteMode", "ALL")
  .option("delimiter", ",")
  .load("file")
  .show
+---+---+---+-------------------+
| C0| C1| C2| C3|
+---+---+---+-------------------+
| a| b| c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+
EDIT: Looks like Spark CSV is intelligent enough to handle this even without the quoteMode option:
sc.textFile("file").collect
res7: Array[String] = Array(a,b,c,"{""x"":""xx"",""y"":""yy""}")
scala> sqlContext.read.format("com.databricks.spark.csv").load("file").show
+---+---+---+-------------------+
| C0| C1| C2| C3|
+---+---+---+-------------------+
| a| b| c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+
scala> sqlContext.read.format("com.databricks.spark.csv").option("quoteMode", "ALL").load("file").show
+---+---+---+-------------------+
| C0| C1| C2| C3|
+---+---+---+-------------------+
| a| b| c|{"x":"xx","y":"yy"}|
+---+---+---+-------------------+
Related
The data frame that I get after reading the text file through the Spark context is:
+----+---+------+
| _1| _2| _3|
+----+---+------+
|name|age|salary|
| sai| 25| 1000|
| bum| 30| 1500|
| che| 40| null|
+----+---+------+
The dataframe I require is:
+----+---+------+
|name|age|salary|
+----+---+------+
| sai| 25| 1000|
| bum| 30| 1500|
| che| 40| null|
+----+---+------+
Here is the code:
## from spark context
df_txt=spark.sparkContext.textFile("/FileStore/tables/simple-2.txt")
df_txt1=df_txt.map(lambda x: x.split(" "))
ddf=df_txt1.toDF().show()
You can use the Spark CSV reader to read your comma-separated file, as in the sketch below.
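A minimal sketch of that route (the separator and the presence of a header line are assumptions about your file; adjust them if needed):
// read the file directly as a DataFrame; the header row becomes the column names
val csvDF = spark.read
  .option("header", "true")
  .option("sep", ",")
  .option("inferSchema", "true")
  .csv("/FileStore/tables/simple-2.txt")
csvDF.show()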
For reading it as a plain text file, you have to take the first row as the header, build a Seq[String] of column names, and pass it to the toDF function. Also, remove the header row from the RDD.
Note: the code below is written in Scala for Spark; you can convert it to lambda functions to make it work in PySpark.
import spark.implicits._ // needed for toDF on an RDD when not in the spark-shell

// read the raw lines
val df = spark.sparkContext.textFile("/FileStore/tables/simple-2.txt")
// the first line is the header; turn it into the list of column names
val header = df.first()
val headerCol: Seq[String] = header.split(",").toList
// drop the header line from the data
val filteredRDD = df.filter(x => x != header)
// split each line into a tuple and name the columns after the header
val finaldf = filteredRDD.map(_.split(",")).map(w => (w(0), w(1), w(2))).toDF(headerCol: _*)
finaldf.show()
w(0), w(1), w(2) - you have to define a fixed number of columns from your file.
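If you don't want to hard-code the tuple arity, one alternative (just a sketch, not part of the original answer) is to build Row objects and an explicit schema from the header, treating every column as a String:
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// build a schema from the header so the number of columns is not hard-coded
val schema = StructType(headerCol.map(name => StructField(name, StringType, nullable = true)))
val rowRDD = filteredRDD.map(line => Row.fromSeq(line.split(",").toSeq))
val generalDF = spark.createDataFrame(rowRDD, schema)
generalDF.show()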
We were using Spark 2.3 before, now we're on 2.4:
Spark version 2.4.0
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_212)
We had a piece of code running in production that converted csv files to parquet format.
One of the options we had set for the CSV load is option("nullValue", null). There's something wrong with how it works in Spark 2.4.
Here's an example to show the issue.
let's create the following /tmp/test.csv file:
C0,C1,C2,C3,C4,C5
1,"1234",0.00,"","D",0.00
2,"",0.00,"","D",0.00
Now if we load it in spark-shell
scala> val data1 = spark.read.option("header", "true").option("inferSchema", "true").option("treatEmptyValuesAsNulls","true").option("nullValue", null).csv("file:///tmp/test.csv")
we get a second row that is entirely null:
scala> data1.show
+----+----+----+----+----+----+
| C0| C1| C2| C3| C4| C5|
+----+----+----+----+----+----+
| 1|1234| 0.0| | D| 0.0|
|null|null|null|null|null|null|
+----+----+----+----+----+----+
If we additionally change the CSV a little (replacing the empty string with "1" in the last row):
C0,C1,C2,C3,C4,C5
1,"1234",0.00,"","D",0.00
2,"",0.00,"1","D",0.00
the result is even worse:
scala> val data2 = spark.read.option("header", "true").option("inferSchema", "true").option("treatEmptyValuesAsNulls","true").option("nullValue", null).csv("file:///tmp/test.csv")
scala> data2.show
+----+----+----+----+----+----+
| C0| C1| C2| C3| C4| C5|
+----+----+----+----+----+----+
|null|null|null|null|null|null|
|null|null|null|null|null|null|
+----+----+----+----+----+----+
Is this a bug in the new Spark 2.4.0? Has anybody faced a similar issue?
The Spark option emptyValue solved the issue:
val data2 = spark.read.option("header", "true")
  .option("inferSchema", "true")
  .option("treatEmptyValuesAsNulls", "true")
  .option("nullValue", null)
  .option("emptyValue", null) // the newly added option
  .csv("file:///tmp/test.csv")
Below is the data in a file
PREFIX|Description|Destination|Num_Type
1|C1|IDD|NA
7|C2|IDDD|NA
20|C3|IDDD|NA
27|C3|IDDD|NA
30|C5|IDDD|NA
I am trying to read it and convert it into a DataFrame.
val file=sc.textFile("/user/cloudera-scm/file.csv")
val list=file.collect.toList
list.toDF.show
+--------------------+
| value|
+--------------------+
|PREFIX|Descriptio...|
| 1|C1|IDD|NA|
| 7|C2|IDDD|NA|
| 20|C3|IDDD|NA|
| 27|C3|IDDD|NA|
| 30|C5|IDDD|NA|
+--------------------+
I am not able to convert this to a DataFrame in the exact table form.
Let's first consider your code.
// reading a potentially big file
val file=sc.textFile("/user/cloudera-scm/file.csv")
// collecting everything to the driver
val list=file.collect.toList
// converting a local list to a dataframe (this yields a single-column dataframe, not the table you want)
list.toDF.show
There are ways to make your code work, but the very logic is awkward. You are reading the data with the executors, putting all of it on the driver simply to convert it to a dataframe (and send it back to the executors). That's a lot of network communication, and the driver will most likely run out of memory for any reasonably large dataset.
What you can do is read the data directly as a dataframe like this (the driver does nothing and there is no unnecessary IO):
spark.read
  .option("sep", "|")          // specify the delimiter
  .option("header", true)      // tell spark that there is a header
  .option("inferSchema", true) // optional, infer the types of the columns
  .csv(".../data.csv").show
+------+-----------+-----------+--------+
|PREFIX|Description|Destination|Num_Type|
+------+-----------+-----------+--------+
| 1| C1| IDD| NA|
| 7| C2| IDDD| NA|
| 20| C3| IDDD| NA|
| 27| C3| IDDD| NA|
| 30| C5| IDDD| NA|
+------+-----------+-----------+--------+
This question already has answers here:
Can I read a CSV represented as a string into Apache Spark using spark-csv
(3 answers)
Closed 3 years ago.
At the moment, I am making a dataframe from a tab separated file with a header, like this.
val df = sqlContext.read.format("csv")
.option("header", "true")
.option("delimiter", "\t")
.option("inferSchema","true").load(pathToFile)
I want to do exactly the same thing but with a String instead of a file. How can I do that?
To the best of my knowledge, there is no built-in way to build a dataframe from a string. Yet, for prototyping purposes, you can create a dataframe from a Seq of tuples.
You could use that to your advantage to create a dataframe from a string.
scala> val s ="x,y,z\n1,2,3\n4,5,6\n7,8,9"
s: String =
x,y,z
1,2,3
4,5,6
7,8,9
scala> val data = s.split('\n')
// Then we extract the first element to use it as a header.
scala> val header = data.head.split(',')
scala> import org.apache.spark.sql.functions.split // needed for the split column function used below
scala> val df = data.tail.toSeq
// converting the seq of strings to a DF with only one column
.toDF("X")
// splitting the string into an array
.select(split('X, ",") as "X")
// extracting each column from the array and renaming them
.select( header.indices.map( i => 'X.getItem(i).as(header(i))) : _*)
scala> df.show
+---+---+---+
| x| y| z|
+---+---+---+
| 1| 2| 3|
| 4| 5| 6|
| 7| 8| 9|
+---+---+---+
PS: if you are not in the Spark REPL, make sure to add import spark.implicits._ so as to use toDF().
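As a side note, on Spark 2.2+ you can also skip the manual splitting and feed the string to the CSV reader through a Dataset[String] (the same trick used in the first answer on this page); a minimal sketch, assuming a SparkSession named spark:
import spark.implicits._

// wrap the raw string in a Dataset[String] and let the CSV reader parse it, header included
val ds = s.split('\n').toSeq.toDS()
val df2 = spark.read.option("header", "true").csv(ds)
df2.show()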
I am developing with Spark using Scala, and I don't have any background in Scala. I haven't hit the ValueError yet, but I am preparing the ValueError handler for my code.
|location|arrDate|deptDate|
|JFK |1201 |1209 |
|LAX |1208 |1212 |
|NYC | |1209 |
|22 |1201 |1209 |
|SFO |1202 |1209 |
If we have data like this, I would like to store the third and fourth rows into Error.dat and then process the fifth row again. In the error log, I would like to record information about the data such as which file it came from, the row number, and the details of the error. For logging, I am using log4j at the moment.
What is the best way to implement that function? Can you guys help me?
I am assuming all three columns are of type String. In that case I would solve this using the snippet below. I have created two UDFs to check for the error records:
whether a field has only numeric characters [isNumber]
and whether the string field is empty [isEmpty]
Code snippet:
import org.apache.spark.sql.functions.udf

// rdd is assumed to be an RDD[(String, String, String)] built from the input file
val df = rdd.zipWithIndex.map { case ((x, y, z), index) => (index + 1, x, y, z) }
  .toDF("row_num", "c1", "c2", "c3")

// error checks: a purely numeric location, or an empty arrival date
val isNumber = udf((x: String) => x.replaceAll("\\d", "") == "")
val isEmpty = udf((x: String) => x.trim.length == 0)

val errDF = df.filter(isNumber($"c1") || isEmpty($"c2"))
val validDF = df.filter(!(isNumber($"c1") || isEmpty($"c2")))
scala> df.show()
+-------+---+-----+-----+
|row_num| c1| c2| c3|
+-------+---+-----+-----+
| 1|JFK| 1201| 1209|
| 2|LAX| 1208| 1212|
| 3|NYC| | 1209|
| 4| 22| 1201| 1209|
| 5|SFO| 1202| 1209|
+-------+---+-----+-----+
scala> errDF.show()
+-------+---+----+----+
|row_num| c1| c2| c3|
+-------+---+----+----+
| 3|NYC| |1209|
| 4| 22|1201|1209|
+-------+---+----+----+
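To actually persist the error records into Error.dat as the question asks, a minimal sketch (writing them as CSV is an assumption, the question doesn't fix a format; the file name and row details for log4j would be added separately):
// write the flagged rows out for later inspection; row_num preserves the original position
errDF.write.mode("overwrite").option("header", "true").csv("Error.dat")
// continue processing only the clean rows
validDF.show()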