Does reading multiple files & collect bring them to driver in spark - scala
Code snippet :
val inp = sc.textFile("C:\\mk\\logdir\\foldera\\foldera1\\log.txt").collect.mkString(" ")
I know the above code reads the entire file, combines it into one string, and executes this on the driver node (a single execution, not a parallel one).
val inp = sc.textFile("C:\\mk\\logdir\\*\\*\\log.txt")
code block{ }
sc.stop
Q1) Here I am reading multiple files (which are present in the above folder structure). I believe in this case each file will become a partition, be sent to a separate node, and be processed in parallel. Am I correct in my understanding? Can someone confirm this? Or is there any way I can confirm it systematically?
val inp = sc.textFile("C:\\mk\\logdir\\*\\*\\log.txt")
val cont = inp.collect.mkString(" ")
code block{ }
sc.stop
Q2) How does Spark handle this case? Even though I am doing a collect, I assume that it will not collect all content from all files, but just one file. Am I right? Can someone help me understand this?
Thank you very much in advance for your time & help.
Q1) Here I am reading multiple files (which are present in the above folder structure). I believe in this case each file will become a partition, be sent to a separate node, and be processed in parallel. Am I correct in my understanding? Can someone confirm this? Or is there any way I can confirm it systematically?
ANSWER:
SparkContext's textFile method (sc.textFile) creates an RDD with each line as an element. If there are 10 files in your yourtextfilesfolder directory, typically 10 partitions will be created (roughly one per small file). You can verify the number of partitions with:
yourtextfilesfolder.partitions.length
However, partitioning is determined by data locality, which may result in too few partitions by default. AFAIK there is no guarantee that exactly one partition will be created per file; see the code of SparkContext.textFile.
The minPartitions argument is a suggested minimum number of partitions for the resulting RDD.
For a better understanding, see the method below.
/**
 * Read a text file from HDFS, a local file system (available on all nodes), or any
 * Hadoop-supported file system URI, and return it as an RDD of Strings.
 */
def textFile(
    path: String,
    minPartitions: Int = defaultMinPartitions): RDD[String] = withScope {
  assertNotStopped()
  hadoopFile(path, classOf[TextInputFormat], classOf[LongWritable], classOf[Text],
    minPartitions).map(pair => pair._2.toString).setName(path)
}
You can pass minPartitions explicitly, as shown in the signature above from SparkContext.scala.
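For example, here is a minimal sketch of how you could confirm the partitioning systematically for your own path (assuming this runs in spark-shell where sc is already defined; the minPartitions value of 8 is just a placeholder):

// Default partitioning: roughly one partition per small file matched by the glob.
val inp = sc.textFile("C:\\mk\\logdir\\*\\*\\log.txt")
println(s"default partitions: ${inp.getNumPartitions}")   // same as inp.partitions.length

// Suggest a minimum number of partitions explicitly.
val inpMin = sc.textFile("C:\\mk\\logdir\\*\\*\\log.txt", minPartitions = 8)
println(s"with minPartitions = 8: ${inpMin.getNumPartitions}")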
Q2) How does Spark handle this case? Even though I am doing a collect, I assume that it will not collect all content from all files, but just one file. Am I right? Can someone help me understand this?
ANSWER: Your RDD is constructed from multiple text files, so collect will gather data from all partitions to the driver, that is from all of the files, not one file at a time.
You can verify this by inspecting the result of rdd.collect.
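For instance, a quick sanity check (a sketch, again assuming spark-shell):

val inp = sc.textFile("C:\\mk\\logdir\\*\\*\\log.txt")

// count() runs on the executors; collect() brings every line to the driver.
val totalLines = inp.count()
val collected  = inp.collect()
println(s"counted: $totalLines, collected: ${collected.length}")
// The two numbers match: collect() is not limited to a single file.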
However, if you want to read multiple text files you can also use wholeTextFiles.
Please see the @note in the method below: small files are preferred; large files are also allowed, but may cause bad performance.
See spark-core-sc-textfile-vs-sc-wholetextfiles
Doc:
def wholeTextFiles(path: String, minPartitions: Int): RDD[(String, String)]
Read a directory of text files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI.
/**
 * Read a directory of text files from HDFS, a local file system (available on all nodes), or any
 * Hadoop-supported file system URI. Each file is read as a single record and returned in a
 * key-value pair, where the key is the path of each file, the value is the content of each file.
 *
 * <p> For example, if you have the following files:
 * {{{
 *   hdfs://a-hdfs-path/part-00000
 *   hdfs://a-hdfs-path/part-00001
 *   ...
 *   hdfs://a-hdfs-path/part-nnnnn
 * }}}
 *
 * Do `val rdd = sparkContext.wholeTextFiles("hdfs://a-hdfs-path")`,
 *
 * <p> then `rdd` contains
 * {{{
 *   (a-hdfs-path/part-00000, its content)
 *   (a-hdfs-path/part-00001, its content)
 *   ...
 *   (a-hdfs-path/part-nnnnn, its content)
 * }}}
 *
 * @note Small files are preferred, large file is also allowable, but may cause bad performance.
 * @note On some filesystems, `.../path/*` can be a more efficient way to read all files
 *       in a directory rather than `.../path/` or `.../path`
 * @note Partitioning is determined by data locality. This may result in too few partitions
 *       by default.
 *
 * @param path Directory to the input data files, the path can be comma separated paths as the
 *             list of inputs.
 * @param minPartitions A suggestion value of the minimal splitting number for input data.
 * @return RDD representing tuples of file path and the corresponding file content
 */
def wholeTextFiles(
    path: String,
    minPartitions: Int = defaultMinPartitions): RDD[(String, String)] = withScope {
  .....
}
Examples:
val distFile = sc.textFile("data.txt")
Collecting the resulting RDD returns the content of the file, one line per element:
scala> distFile.collect()
res16: Array[String] = Array(1,2,3, 4,5,6)
SparkContext.wholeTextFiles, on the other hand, returns (filename, content) pairs:
val distFile = sc.wholeTextFiles("/tmp/tmpdir")
scala> distFile.collect()
res17: Array[(String, String)] =
Array((maprfs:/tmp/tmpdir/data3.txt,"1,2,3
4,5,6
"), (maprfs:/tmp/tmpdir/data.txt,"1,2,3
4,5,6
"), (maprfs:/tmp/tmpdir/data2.txt,"1,2,3
4,5,6
"))
In your case I'd prefer SparkContext.wholeTextFiles, where you can get the filename and its content after collect as described above, if that's what you wanted.
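Applied to your folder layout, a minimal sketch (assuming spark-shell, and that each log.txt is small enough to be held as a single record):

// One record per file: (full path, whole file content).
val files = sc.wholeTextFiles("C:\\mk\\logdir\\*\\*\\log.txt")

// collect() brings the (path, content) pairs for all matched files to the driver.
files.collect().foreach { case (path, content) =>
  println(s"$path -> ${content.length} characters")
}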
Spark is a fast and general engine for large-scale data processing, and it processes data in parallel. So, to answer the first question: yes, in the following case:
val inp = sc.textFile("C:\\mk\\logdir\\*\\*\\log.txt")
code block{ }
sc.stop
Each file will be created as a partition, sent to a separate node, and processed in parallel. But depending on the size of a file, the number of partitions can be greater than the number of files being processed. For example, if log.txt in folder1 and folder2 are each only a few KB in size, then only 2 partitions are created, since there are 2 files, and they will be processed in parallel.
However, if log.txt in folder1 is several GB in size, then multiple partitions will be created for it and the number of partitions will be greater than the number of files.
However, we can always change the number of partitions of an RDD using the repartition() or coalesce() methods.
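For instance (a sketch; the target partition counts are arbitrary):

val inp = sc.textFile("C:\\mk\\logdir\\*\\*\\log.txt")

// repartition() can increase (or decrease) the partition count, at the cost of a full shuffle.
val wider = inp.repartition(16)

// coalesce() reduces the partition count and avoids a shuffle where possible.
val narrower = wider.coalesce(4)

println(s"${inp.getNumPartitions} -> ${wider.getNumPartitions} -> ${narrower.getNumPartitions}")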
To answer the second question, in the following case:
val inp = sc.textFile("C:\\mk\\logdir\\*\\*\\log.txt")
val cont = inp.collect.mkString(" ")
code block{ }
sc.stop
Spark will collect content from all of the files, not just from one file, since collect() returns every element stored in the RDD to the driver as a single local collection.
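Keep in mind that if the combined content is large, pulling everything back with collect() can exhaust driver memory. Where possible, aggregate on the executors and only bring small results back; a sketch of that alternative (the "ERROR" token is just an illustrative filter):

val inp = sc.textFile("C:\\mk\\logdir\\*\\*\\log.txt")

// These aggregations run distributed; only small results reach the driver.
val lineCount  = inp.count()
val errorLines = inp.filter(_.contains("ERROR")).count()

// take(n) inspects a small sample instead of materialising the whole data set.
inp.take(10).foreach(println)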
Related
Spark writing compressed CSV with custom path to S3
I'm trying to simply write a CSV to S3 using Spark written in Scala. I notice in my output bucket the following file:
...PROCESSED/montfh-04.csv/part-00000-723a3d72-56f6-4e62-b627-9a181a820f6a-c000.csv.snappy
when it should only be montfh-04.csv
Code:
val processedMetadataDf = spark.read.csv("s3://" + metadataPath + "/PROCESSED/" + "month-04" + ".csv")
val processCount = processedMetadataDf.count()
if (processCount == 0) {
  // Initial frame is 0B -> Overwrite with path
  val newDat = Seq("dummy-row-data")
  val unknown_df = newDat.toDF()
  unknown_df.write.mode("overwrite").option("header", "false").csv("s3://" + metadataPath + "/PROCESSED/" + "montfh-04" + ".csv")
}
Here I notice two strange things:
It puts it in a directory
It adds that weird part sub-path to the file with snappy compression
All I am trying to do is simply write a flat CSV file with that name to the specified path. What are my options?
This is how Spark works. The location you provide for saving a Dataset/DataFrame is the directory location where Spark writes all of its partitions. The number of part files will be equal to the number of partitions, which in your case is only 1. Now, if you want the filename to be montfh-04.csv only, then you can rename it. Note: renaming in S3 is a costly operation (copy and delete). As you are writing with Spark it will be 3 times the I/O, since 2 times will be the output commit operation and 1 time the rename. It is better to write it to HDFS and upload it from there with the required key name.
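If you do decide to rename in place, here is a sketch of that approach using the Hadoop FileSystem API (variable names such as metadataPath and unknown_df are taken from the question; on S3 the rename is still a copy-and-delete under the hood):

import org.apache.hadoop.fs.{FileSystem, Path}

// Write the single-partition CSV to a temporary directory first.
val tmpDir = "s3://" + metadataPath + "/PROCESSED/_tmp_montfh-04"
unknown_df.coalesce(1)
  .write.mode("overwrite")
  .option("header", "false")
  .option("compression", "none")   // avoid the .snappy suffix
  .csv(tmpDir)

// Locate the single part-* file and rename it to the flat key we actually want.
val fs = FileSystem.get(new java.net.URI(tmpDir), spark.sparkContext.hadoopConfiguration)
val partFile = fs.globStatus(new Path(tmpDir + "/part-*"))(0).getPath
fs.rename(partFile, new Path("s3://" + metadataPath + "/PROCESSED/montfh-04.csv"))
fs.delete(new Path(tmpDir), true)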
Spark: Empty JSON files when reading from a directory
I'm reading from a path, say /json/*/myfiles_*.json. I'm then flattening the JSON using explode. This causes an error since I have some empty files. How do I tell it to ignore empty files or somehow filter them out? I can detect individual files by checking if the head is empty, but I need to do this on the collection of files iterated in the dataframe with the use of the wildcard path.
So the answer seems to be that I need to provide a schema explicitly, because it can't infer one from an empty file, as you would expect! e.g.
// infer schema from a file with data, or build it manually
val schemadf = sqlContext.read.json(schemapath)
val schema = schemadf.schema
val raw = sqlContext.read.schema(schema).json(monthfile)
val prep = raw.withColumn("MyArray", explode($"MyArray"))
  .select($"ID", $"name", $"CreatedAt")
display(prep)
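If there is no non-empty file handy to infer from, you can also build the schema by hand with StructType; a sketch, using the column names from the question but with assumed (String) types:

import org.apache.spark.sql.types.{ArrayType, StringType, StructField, StructType}

// Field types here are assumptions; adjust them to match your real JSON.
val schema = StructType(Seq(
  StructField("ID", StringType, nullable = true),
  StructField("name", StringType, nullable = true),
  StructField("CreatedAt", StringType, nullable = true),
  StructField("MyArray", ArrayType(StringType), nullable = true)
))

val raw = sqlContext.read.schema(schema).json(monthfile)   // empty files simply contribute no rows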
Fast file writing in scala?
So I have a Scala program that iterates through a graph and writes out data line by line to a text file. It is essentially an edge-list file for use with GraphX. The biggest slowdown is actually creating this text file; we're talking maybe a million records it writes to this text file. Is there a way I can somehow parallelize this task or make it faster in any way, by somehow storing it in memory or anything? More info: I am using a Hadoop cluster to iterate through a graph, and here is my code snippet for the text file creation I'm doing now to write to HDFS:
val fileName = dbPropertiesFile + "-edgelist-" + System.currentTimeMillis()
val path = new Path("/home/user/graph/" + fileName + ".txt")
val conf = new Configuration()
conf.set("fs.defaultFS", "hdfs://host001:8020")
val fs = FileSystem.newInstance(conf)
val os = fs.create(path)
while (edges.hasNext) {
  val current = edges.next()
  os.write(current.inVertex().id().toString.getBytes())
  os.write(" ".getBytes())
  os.write(current.outVertex().id().toString.getBytes())
  os.write("\n".toString.getBytes())
}
fs.close()
Writing files to HDFS is never fast. Your tags seem to suggest that you are already using Spark anyway, so you could as well take advantage of it:
import spark.implicits._   // needed for .toDF on the tuple RDD

spark.sparkContext
  .makeRDD(edges.toStream, 20)
  .map(e => (e.inVertex().id().toString, e.outVertex().id().toString))
  .toDF("src", "dst")
  .write
  .option("sep", " ")
  .csv(path.toString)
This splits your input into 20 partitions (you can control that number with the numeric argument to makeRDD above), and writes them in parallel as 20 different chunks in HDFS that together represent your resulting file.
scala loop through multiple files in the path
I am new to Spark and Scala. I have the below requirement. I need to process all the files under a path which has sub-directories. I guess I need to write a for-loop logic to process across all the files. Below is the example of my case:
src/proj_fldr/dataset1/20170624/file1.txt
src/proj_fldr/dataset1/20170624/file2.txt
src/proj_fldr/dataset1/20170624/file3.txt
src/proj_fldr/dataset1/20170625/file1.txt
src/proj_fldr/dataset1/20170625/file2.txt
src/proj_fldr/dataset1/20170625/file3.txt
src/proj_fldr/dataset1/20170626/file1.txt
src/proj_fldr/dataset1/20170626/file2.txt
src/proj_fldr/dataset1/20170626/file3.txt
src/proj_fldr/dataset2/20170624/file1.txt
src/proj_fldr/dataset2/20170624/file2.txt
src/proj_fldr/dataset2/20170624/file3.txt
src/proj_fldr/dataset2/20170625/file1.txt
src/proj_fldr/dataset2/20170625/file2.txt
src/proj_fldr/dataset2/20170625/file3.txt
src/proj_fldr/dataset2/20170626/file1.txt
src/proj_fldr/dataset2/20170626/file2.txt
src/proj_fldr/dataset2/20170626/file3.txt
I need the code to iterate the files like:
In src loop (proj_fldr loop (dataset loop (datefolder loop (file1 then file2 ...))))
Since you have a regular file structure you can use the wildcard * when reading the files. You can do the following to read all the files into a single RDD:
val spark = SparkSession.builder.getOrCreate()
val rdd = spark.sparkContext.wholeTextFiles("src/*/*/*/*.txt")
The result will be an RDD[(String, String)] with the path and the content in a tuple for each processed file. To explicitly choose between local and HDFS files, you can prepend "hdfs://" or "file://" to the beginning of the path.
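If you also need the dataset and date-folder components that your nested loops would have given you, here is a sketch that derives them from the returned path (the index arithmetic assumes the directory depth shown in the question):

val rdd = spark.sparkContext.wholeTextFiles("src/*/*/*/*.txt")

// Each element is (path, content); recover the folder components from the path itself.
val tagged = rdd.map { case (path, content) =>
  val parts = path.split("/")
  val dataset    = parts(parts.length - 3)   // e.g. dataset1
  val dateFolder = parts(parts.length - 2)   // e.g. 20170624
  val fileName   = parts.last                // e.g. file1.txt
  (dataset, dateFolder, fileName, content)
}

tagged.take(5).foreach { case (ds, date, file, _) => println(s"$ds / $date / $file") }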
Apache Spark: multiple outputs in one map task
TL;DR: I have a large file that I iterate over three times to get three different sets of counts out. Is there a way to get three maps out in one pass over the data?
Some more detail: I'm trying to compute PMI between words and features that are listed in a large file. My pipeline looks something like this:
val wordFeatureCounts = sc.textFile(inputFile).flatMap(line => {
  val word = getWordFromLine(line)
  val features = getFeaturesFromLine(line)
  for (feature <- features) yield ((word, feature), 1)
})
And then I repeat this to get word counts and feature counts separately:
val wordCounts = sc.textFile(inputFile).flatMap(line => {
  val word = getWordFromLine(line)
  val features = getFeaturesFromLine(line)
  for (feature <- features) yield (word, 1)
})
val featureCounts = sc.textFile(inputFile).flatMap(line => {
  val word = getWordFromLine(line)
  val features = getFeaturesFromLine(line)
  for (feature <- features) yield (feature, 1)
})
(I realize I could just iterate over wordFeatureCounts to get the wordCounts and featureCounts, but that doesn't answer my question, and looking at running times in practice I'm not sure it's actually faster to do it that way. Also note that there are some reduceByKey operations and other stuff that I do with this after the counts are computed that aren't shown, as they aren't relevant to the question.)
What I would really like to do is something like this:
val (wordFeatureCounts, wordCounts, featureCounts) = sc.textFile(inputFile).flatMap(line => {
  val word = getWordFromLine(line)
  val features = getFeaturesFromLine(line)
  val wfCounts = for (feature <- features) yield ((word, feature), 1)
  val wCounts = for (feature <- features) yield (word, 1)
  val fCounts = for (feature <- features) yield (feature, 1)
  ??.setOutput1(wfCounts)
  ??.setOutput2(wCounts)
  ??.setOutput3(fCounts)
})
Is there any way to do this with Spark? In looking for how to do this, I've seen questions about multiple outputs when you're saving the results to disk (not helpful), and I've seen a bit about accumulators (which don't look like what I need), but that's it.
Also note that I can't just yield all of these results in one big list, because I need three separate maps out. If there's an efficient way to split a combined RDD after the fact, that could work, but the only way I can think of to do this would end up iterating over the data four times, instead of the three I currently do (once to create the combined map, then three times to filter it into the maps I actually want).
It is not possible to split an RDD into multiple RDDs. This is understandable if you think about how this would work under the hood. Say you split RDD x = sc.textFile("x") into a = x.filter(_.head == 'A') and b = x.filter(_.head == 'B'). Nothing happens so far, because RDDs are lazy. But now you print a.count. So Spark opens the file, and iterates through the lines. If the line starts with A it counts it. But what do we do with lines starting with B? Will there be a call to b.count in the future? Or maybe it will be b.saveAsTextFile("b") and we should be writing these lines out somewhere? We cannot know at this point. Splitting an RDD is just not possible with the Spark API. But nothing stops you from implementing something if you know what you want. If you want to get both a.count and b.count you can map lines starting with A into (1, 0) and lines with B into (0, 1) and then sum up the tuples elementwise in a reduce. If you want to save lines with B into a file while counting lines with A, you could use an aggregator in a map before filter(_.head == 'B').saveAsTextFile. The only generic solution is to store the intermediate data somewhere. One option is to just cache the input (x.cache). Another is to write the contents into separate directories in a single pass, then read them back as separate RDDs. (See Write to multiple outputs by key Spark - one Spark job.) We do this in production and it works great.
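A minimal sketch of the elementwise-sum trick described above, counting lines starting with 'A' and lines starting with 'B' in a single pass:

val x = sc.textFile("x")

// Encode each line as a pair of indicator counts, then sum the pairs elementwise.
val (aCount, bCount) = x
  .map(line => (if (line.startsWith("A")) 1L else 0L, if (line.startsWith("B")) 1L else 0L))
  .reduce((l, r) => (l._1 + r._1, l._2 + r._2))

println(s"lines starting with A: $aCount, lines starting with B: $bCount")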
This is one of the major disadvantages of Spark over traditional MapReduce programming. An RDD/DataFrame/Dataset can be transformed into another RDD/DataFrame/Dataset, but you cannot map an RDD into multiple outputs. To avoid recomputation you need to cache the results into some intermediate RDD and then run multiple map operations to generate multiple outputs. The caching solution will work if you are dealing with data of a reasonable size. But if the data is large compared to the memory available, the intermediate outputs will be spilled to disk and the advantage of caching will not be that great. Check out the discussion here: https://issues.apache.org/jira/browse/SPARK-1476. This is an old Jira, but relevant; see the comment by Mridul Muralidharan. Spark needs to provide a solution where a map operation can produce multiple outputs without the need to cache. It may not be elegant from the functional programming perspective, but I would argue it would be a good compromise to achieve better performance.
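A sketch of the caching approach in terms of the question's own pipeline (getWordFromLine and getFeaturesFromLine are the question's helpers, assumed here to be in scope and to return a word and a sequence of features):

// Parse the input once and keep it in memory for the three downstream passes.
val parsed = sc.textFile(inputFile)
  .map(line => (getWordFromLine(line), getFeaturesFromLine(line)))
  .cache()

val wordFeatureCounts = parsed
  .flatMap { case (word, features) => features.map(f => ((word, f), 1)) }
  .reduceByKey(_ + _)

val wordCounts = parsed
  .flatMap { case (word, features) => features.map(_ => (word, 1)) }
  .reduceByKey(_ + _)

val featureCounts = parsed
  .flatMap { case (word, features) => features.map(f => (f, 1)) }
  .reduceByKey(_ + _)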
I was also quite disappointed to see that this is a hard limitation of Spark over classic MapReduce. I ended up working around it by using multiple successive maps in which I filter out the data I need. Here's a schematic toy example that performs different calculations on the numbers 0 to 49 and writes both to different output files.
from functools import partial
import os
from pyspark import SparkContext

# Generate mock data
def generate_data():
    for i in range(50):
        yield 'output_square', i * i
        yield 'output_cube', i * i * i

# Map function to siphon data to a specific output
def save_partition_to_output(part_index, part, filter_key, output_dir):
    # Initialise output file handle lazily to avoid creating empty output files
    file = None
    try:
        for key, data in part:
            if key != filter_key:
                # Pass through non-matching rows and skip
                yield key, data
                continue
            if file is None:
                file = open(os.path.join(output_dir, '{}-part{:05d}.txt'.format(filter_key, part_index)), 'w')
            # Consume data
            file.write(str(data) + '\n')
        yield from []
    finally:
        if file is not None:
            file.close()

def main():
    sc = SparkContext()
    rdd = sc.parallelize(generate_data())

    # Repartition to number of outputs
    # (not strictly required, but reduces number of output files).
    #
    # To split partitions further, use repartition() instead or
    # partition by another key (not the output name).
    rdd = rdd.partitionBy(numPartitions=2)

    # Map and filter to first output.
    rdd = rdd.mapPartitionsWithIndex(partial(save_partition_to_output, filter_key='output_square', output_dir='.'))

    # Map and filter to second output.
    rdd = rdd.mapPartitionsWithIndex(partial(save_partition_to_output, filter_key='output_cube', output_dir='.'))

    # Trigger execution.
    rdd.count()

if __name__ == '__main__':
    main()
This will create two output files output_square-part00000.txt and output_cube-part00000.txt with the desired output splits.