This question already has answers here:
Write single CSV file using spark-csv
(16 answers)
Closed 1 year ago.
I have a Spark (2.4) DataFrame that I want to write as a pipe-separated file. It should be pretty straightforward, like so:
val myDF = spark.table("mySchema.myTable")
myDF.coalesce(1).write.format("csv").option("header", "true").option("delimiter", "|").save("/tmp/myDF")
I get a part-*.csv file in /tmp/myDF.
So far, so good. But I actually want the file name to be something specific, e.g. /tmp/myDF.csv
But passing that string to save just creates a directory called myDF.csv and writes the part-*.csv file inside it.
Is there a way to write the DataFrame with a specific name?
You can't do that with Spark's writer directly.
You can rename the file afterwards by accessing the file system:
import java.io.File

val directory = new File("/tmp/myDF")
if (directory.exists && directory.isDirectory) {
  // Pick the single part file Spark wrote and rename it
  val file = directory.listFiles.filter(_.getName.endsWith(".csv")).head
  file.renameTo(new File("/tmp/myDF.csv"))
}
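Note that java.io.File only works on the driver's local file system; if the output directory lives on HDFS, you would do the same rename through Hadoop's org.apache.hadoop.fs.FileSystem API instead.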
Related
I'm reading from a path say /json//myfiles_.json
I'm then flattening the JSON using explode. This causes an error since I have some empty files. How do I tell it to ignore the empty files or somehow filter them out?
I can detect an individual empty file by checking whether its head is empty, but I need to do this across the whole collection of files the wildcard path pulls into the DataFrame.
So the answer seems to be that I need to provide a schema explicitly, because Spark can't infer one from an empty file - as you would expect!
e.g.
// Infer the schema from a file that has data (or build it manually)
val schemadf = sqlContext.read.json(schemapath)
val schema = schemadf.schema
// Read the whole wildcard path using the explicit schema
val raw = sqlContext.read.schema(schema).json(monthfile)
val prep = raw.withColumn("MyArray", explode($"MyArray"))
  .select($"ID", $"name", $"CreatedAt")
display(prep)
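Note that once the explicit schema is supplied, the empty files simply contribute zero rows, so they are effectively ignored without any separate filtering step.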
This question already has answers here:
Spark - load CSV file as DataFrame?
(14 answers)
Closed 4 years ago.
How do I read data sets from HDFS using Scala? The data is a CSV file with a limited number of records.
You tagged the question with Spark, so I'm assuming you are trying to use that. I would recommend you start by reading through the Spark documentation to get an idea of how to use Spark to interact with your data:
https://spark.apache.org/docs/latest/quick-start.html
https://spark.apache.org/docs/latest/sql-programming-guide.html
But, to answer your specific question, in Spark you would read in the CSV file using code like this:
val csvDf = spark.read.format("csv")
  .option("sep", ",")
  .option("header", "true")
  .load("hdfs://some/path/to/data.csv")
The path you provide will be to a CSV file on HDFS, or to a folder containing multiple CSV files. Spark also accepts other file systems: for example, you could use "file://" to access the local file system, or "s3://" to use S3. Once you have loaded the data, you will have a Spark DataFrame object with SQL-like methods available to interact with it.
Note, I provided an option for the separator just to show you how to do it, but it defaults to "," anyway, so it is not required. Also, if your CSV files do not include a header, you will need to specify the schema yourself and set header to false instead.
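For instance, a minimal sketch of reading headerless CSV files with an explicit schema (the column names and types below are made-up placeholders):

import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

// Hypothetical columns; adjust names and types to match your data
val schema = StructType(Seq(
  StructField("id", IntegerType, nullable = true),
  StructField("name", StringType, nullable = true)
))

val csvDf = spark.read
  .schema(schema)
  .option("header", "false")
  .csv("hdfs://some/path/to/data.csv")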
You can read data from HDFS by following this approach:
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val hdfs = FileSystem.get(new URI("hdfs://hdfsUrl:port/"), new Configuration())
val path = new Path("/pathOfTheFileInHDFS/")
val stream = hdfs.open(path)
// Lazily read lines; readLine returns null at the end of the stream
def readLines = Stream.continually(stream.readLine)
// Check each line for null and print every existing line
readLines.takeWhile(_ != null).foreach(line => println(line))
stream.close()
Also, please have a look at this article: https://blog.matthewrathbone.com/2013/12/28/reading-data-from-hdfs-even-if-it-is-compressed
Please let me know if this answers your question.
This question already has answers here:
How to write to a file in Scala?
(19 answers)
Closed 5 years ago.
I have an Array[String] called samparr with some values in it, and I want to store it as an output file.
var samparr: Array[String] = new Array[String](4)
samparr +:= print1 + " BEST_MATCH " + print2
just like:
val output = samparr.saveAsTextFile(outputpath)
but samparr isn't an RDD, it's an Array[String].
You can use SparkContext.parallelize to "distribute" your Array onto the Spark cluster (in other words, to turn it into an RDD), and then call saveAsTextFile:
sc.parallelize(samparr).saveAsTextFile(outputpath)
This action will partition the data and send each partition to one of the executors; each partition is then saved into a separate "file-part".
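If you want a single part file instead, a simple variant is to force one partition (at the cost of parallelism):

// One partition => one part-00000 file in the output directory
sc.parallelize(samparr, 1).saveAsTextFile(outputpath)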
Alternatively, since the array is very small and doesn't really "justify" using Spark, you can use any non-Spark method of saving data to a file, e.g. the one linked by @avihoo-mamka: How to write to a file in Scala?
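For instance, a minimal non-Spark sketch using java.io (assuming outputpath is a local file path):

import java.io.{File, PrintWriter}

// Write one array element per line to a local file
val writer = new PrintWriter(new File(outputpath))
try samparr.foreach(line => writer.println(line))
finally writer.close()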
This question already has an answer here:
Spark dataframe save in single file on hdfs location [duplicate]
(1 answer)
Closed 5 years ago.
Say I have a Spark DataFrame that I want to save to disk as a CSV file. In Spark 2.0.0+, one can obtain a DataFrameWriter from a DataFrame (Dataset[Row]) via .write and use its .csv method to write the file.
The function is defined as
def csv(path: String): Unit
path: the location/folder name, not the file name.
Spark stores the output at the specified location by creating CSV files named part-*.csv.
Is there a way to save the CSV with a specified filename instead of part-*.csv? Or is it possible to specify a prefix to use instead of part-r?
Code :
df.coalesce(1).write.csv("sample_path")
Current Output :
sample_path
|
+-- part-r-00000.csv
Desired Output :
sample_path
|
+-- my_file.csv
Note: coalesce(1) is used to output a single file, and the executor has enough memory to collect the DataFrame without a memory error.
It's not possible to do it directly in Spark's save.
Spark uses the Hadoop file output format, which requires data to be partitioned - that's why you get part- files. You can easily change the filename after processing, just like in this question.
In Scala it will look like:
import org.apache.hadoop.fs._

val fs = FileSystem.get(sc.hadoopConfiguration)
// Locate the single part file inside the output directory
val file = fs.globStatus(new Path("csvDirectory/data.csv/part*"))(0).getPath.getName
// Move it up one level under the desired name, then drop the directory
fs.rename(new Path("csvDirectory/data.csv/" + file), new Path("csvDirectory/mydata.csv"))
fs.delete(new Path("csvDirectory/data.csv"), true)
or just:
import org.apache.hadoop.fs._

val fs = FileSystem.get(sc.hadoopConfiguration)
fs.rename(new Path("csvDirectory/data.csv/part-00000"), new Path("csvDirectory/newData.csv"))
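Keep in mind that the exact part-file name depends on the Spark version and output format (it may include a task attempt id or a UUID suffix, e.g. part-00000-<uuid>.csv), so globbing for part* as in the first snippet is more robust than hard-coding the name.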
Edit: As mentioned in the comments, you can also write your own OutputFormat; see the documentation for information on using that approach to set the file name.
This question is related to this.
I am processing an S3 folder containing csv.gz files in Spark. Each csv.gz file has a header that contains column names. This has been solved by the above SO link and the solution looks like this:
val rdd = sc.textFile("s3://.../my-s3-path").mapPartitions(_.drop(1))
The problem now is that some of the files appear to have a newline ('\n') at the end (we assume so, but are not sure which files). So when converting the RDD to a DataFrame, I'm getting an error. The question now is:
How do I get rid of the last line of each file if it is '\n'?
Why not a simple filter? Note that sc.textFile strips the line delimiters, so a trailing '\n' shows up as an empty last line rather than a literal "\n":
val rdd = sc.textFile("s3...").filter(line => !line.isEmpty).mapPartitions(_.drop(1))
Or filter out any line that is empty after trimming:
val rdd = sc.textFile("s3...").filter(line => !line.trim().isEmpty).mapPartitions(_.drop(1))
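One caveat: either filter also removes blank lines that occur in the middle of a file, not just a trailing one; that is usually fine here, but worth knowing.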