How to handle a large text file in Spark? - scala

I have a large text file (3 GB); it is a DNA reference. I would like to slice it into parts so that I can handle it.
So I want to know how to slice the file with Spark. I currently have only one node with 4 GB of memory.

Sounds like you want to load your file as multiple partitions. If your file is splittable (text file, Snappy, sequence, etc.), you can simply pass the number of partitions it should be loaded into (strictly speaking, a minimum) as sc.textFile(inputPath, numPartitions). If your file is not splittable, it will be loaded as a single partition, but you can call .repartition(numPartitions) on the loaded RDD to split it into multiple partitions.
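For example, a minimal sketch of both cases (the paths and partition counts are placeholders):
// Splittable text file: ask Spark for a minimum number of input partitions.
val reference = sc.textFile("/data/reference.fa", 16)
println(reference.partitions.length)
// Non-splittable file (e.g. gzip): loads as one partition, so repartition after loading.
val widened = sc.textFile("/data/reference.fa.gz").repartition(16)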

If you want a specific range of lines in each chunk, you can try this:
val rdd = sc.textFile(inputPath).zipWithIndex()
val rdd2 = rdd
  .filter(x => lowest_no_of_line <= x._2 && x._2 <= highest_no_of_line)   // keep only the requested line range
  .map(x => x._1)                                                         // drop the index again
  .coalesce(1, shuffle = false)
rdd2.saveAsTextFile(outputPath)
Now your saved text file will contain the lines between lowest_no_of_line and highest_no_of_line (inclusive).

Related

Read parquet file to multiple partitions [duplicate]

So I have just 1 Parquet file I'm reading with Spark (using the SQL stuff) and I'd like it to be processed with 100 partitions. I've tried setting spark.default.parallelism to 100; we have also tried changing the compression of the Parquet file to none (from gzip). No matter what we do, the first stage of the Spark job has only a single partition (once a shuffle occurs, it gets repartitioned into 100 and thereafter things are obviously much faster).
Now, according to a few sources (like the one below), Parquet should be splittable (even when using gzip!), so I'm super confused and would love some advice.
https://www.safaribooksonline.com/library/view/hadoop-application-architectures/9781491910313/ch01.html
I'm using Spark 1.0.0, and apparently the default value for spark.sql.shuffle.partitions is 200, so it can't be that. In fact, all the defaults for parallelism are much higher than 1, so I don't understand what's going on.
You should write your Parquet files with a smaller block size. The default is 128 MB per block, but it's configurable by setting the parquet.block.size property in the writer.
The source of ParquetOutputFormat is here, if you want to dig into details.
The block size is the minimum amount of logically readable data in a Parquet file (since Parquet is columnar, you can't just split by line or anything that trivial), so you can't have more reading threads than input blocks.
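For example, one way to do this (a sketch, assuming the writer picks the value up from the Hadoop configuration, as ParquetOutputFormat does; the 32 MB value and path are placeholders):
// Write with smaller Parquet row groups so readers can create more splits.
sc.hadoopConfiguration.setInt("parquet.block.size", 32 * 1024 * 1024)
df.write.parquet("/tmp/the-big-table.parquet")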
The new way of doing it (Spark 2.x) is setting
spark.sql.files.maxPartitionBytes
Source: https://issues.apache.org/jira/browse/SPARK-17998 (the official documentation is not correct yet; it misses the .sql prefix)
In my experience, the Hadoop settings no longer have any effect.
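For example (a sketch, assuming a Spark 2.x SparkSession named spark; the 32 MB value and path are arbitrary):
// Cap how many bytes are packed into one input partition at read time.
spark.conf.set("spark.sql.files.maxPartitionBytes", 32L * 1024 * 1024)
val df = spark.read.parquet("/tmp/the-big-table.parquet")
println(df.rdd.getNumPartitions)   // more partitions than with the 128 MB default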
Maybe your Parquet file only takes up one HDFS block. Create a big Parquet file that spans many HDFS blocks and load it:
val k = sqlContext.parquetFile("the-big-table.parquet")
k.partitions.length
You'll see the same number of partitions as HDFS blocks. This worked fine for me (Spark 1.1.0).
You mentioned that you want to control the distribution while writing to Parquet. When you create Parquet files from RDDs, Parquet preserves the partitions of the RDD. So if you create an RDD with 100 partitions and then write it out as a DataFrame in Parquet format, it will write 100 separate Parquet files to the filesystem.
For reads, you can specify the spark.sql.shuffle.partitions parameter.
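A sketch of that behaviour (the paths and column name are made up; assumes a SparkSession named spark):
import spark.implicits._
// The 100 RDD partitions are preserved, so roughly 100 part-*.parquet files are written.
val df = sc.textFile("/data/input.txt").repartition(100).toDF("line")
df.write.mode("overwrite").parquet("/tmp/out")
// spark.sql.shuffle.partitions controls the partition count after the first shuffle.
spark.conf.set("spark.sql.shuffle.partitions", 50)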
To achieve that, you should use the SparkContext to set the Hadoop configuration property mapreduce.input.fileinputformat.split.maxsize (via sc.hadoopConfiguration).
By setting this property to a value lower than hdfs.blockSize, you will get as many partitions as there are splits.
For example:
When hdfs.blockSize = 134217728 (128MB),
and one file is read which contains exactly one full block,
and mapreduce.input.fileinputformat.split.maxsize = 67108864 (64MB)
then there will be two partitions into which those splits will be read.
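In code, that example looks roughly like this (the path is a placeholder):
// Split 128 MB HDFS blocks into 64 MB input splits.
sc.hadoopConfiguration.setLong("mapreduce.input.fileinputformat.split.maxsize", 64L * 1024 * 1024)
val rdd = sc.textFile("/data/one-block-file.txt")
println(rdd.partitions.length)   // 2 partitions for a single full 128 MB block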

Spark How to Specify Number of Resulting Files for DataFrame While/After Writing

I saw several Q&As about writing a single file into HDFS; it seems using coalesce(1) is sufficient.
E.g.:
df.coalesce(1).write.mode("overwrite").format(format).save(location)
But how can I specify the "exact" number of files that will be written after the save operation?
So my questions are:
If I have a DataFrame consisting of 100 partitions, will a write operation produce 100 files?
If I have a DataFrame consisting of 100 partitions and I write it after calling repartition(50)/coalesce(50), will it write 50 files?
Is there a way in Spark to specify the resulting number of files while writing a DataFrame into HDFS?
Thanks
The number of output files is in general equal to the number of writing tasks (partitions). Under normal conditions it cannot be smaller (each writer writes its own part, and multiple tasks cannot write to the same file), but it can be larger if the format has non-standard behavior or partitionBy is used.
Normally
If I have a DataFrame consisting of 100 partitions, will a write operation produce 100 files?
Yes
If I have a DataFrame consisting of 100 partitions and I write it after calling repartition(50)/coalesce(50), will it write 50 files?
And yes.
Is there a way in Spark to specify the resulting number of files while writing a DataFrame into HDFS?
No.
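A quick sketch of the answers above, using Parquet as a stand-in format and assuming df has 100 partitions (the paths are placeholders):
df.write.mode("overwrite").parquet("/tmp/out-100")                  // ~100 part-files
df.repartition(50).write.mode("overwrite").parquet("/tmp/out-50")   // ~50 part-files, via a full shuffle
df.coalesce(50).write.mode("overwrite").parquet("/tmp/out-50b")     // ~50 part-files, usually without a full shuffle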

How can I control the number of output files written from Spark DataFrame?

Using Spark Streaming to read JSON data from a Kafka topic.
I use a DataFrame to process the data, and later I wish to save the output to HDFS files. The problem is that using:
df.write.mode("append").format("text").save(path)
yields many files; some are large, and some are even 0 bytes.
Is there a way to control the number of output files? Also, to avoid the "opposite" problem, is there a way to limit the size of each file, so that a new file is started when the current one reaches a certain size/number of rows?
The number of output files is equal to the number of partitions of the Dataset. This means you can control it in a number of ways, depending on the context:
For Datasets with no wide dependencies, you can control the input using reader-specific parameters.
For Datasets with wide dependencies, you can control the number of partitions with the spark.sql.shuffle.partitions parameter.
Independent of the lineage, you can coalesce or repartition (see the sketch after this list).
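A sketch of the last two options (assumes a SparkSession named spark and a DataFrame df with a key column; the values and path are arbitrary):
// Wide dependency: the shuffle partition count decides how many files the aggregate writes.
spark.conf.set("spark.sql.shuffle.partitions", 20)
val aggregated = df.groupBy("key").count()
// Independent of lineage: shrink further before writing.
aggregated.coalesce(5).write.mode("append").parquet("/tmp/out")   // ~5 output files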
is there a way to limit the size of each file, so that a new file is started when the current one reaches a certain size/number of rows?
No. With the built-in writers it is strictly a 1:1 relationship.
You can use the size estimator:
import org.apache.spark.util.SizeEstimator
val size = SizeEstimator.estimate(df)
and next you can adapt the number of files according to the size of the DataFrame with repartition or coalesce.
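A rough sketch of that idea, continuing from the snippet above; note that SizeEstimator measures the driver-side JVM object, so treat the number as a loose heuristic, and the 128 MB target and output path are arbitrary:
// Pick a file count from a target size per file.
val targetBytesPerFile = 128L * 1024 * 1024
val numFiles = math.max(1, (size / targetBytesPerFile).toInt)
df.repartition(numFiles).write.mode("overwrite").parquet("/tmp/out")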

How can I control the number of rows and/or output file size in Spark Streaming when writing to HDFS - Hive?

Using Spark Streaming to read and process messages from Kafka and write to HDFS - Hive.
Since I wish to avoid creating many small files, which spams the filesystem, I would like to know if there's a way to ensure a minimum file size, and/or the ability to force a minimum number of output rows per file, with the exception of a timeout.
Thanks.
As far as I know, there is no way to control the number of lines in your output files. But you can control the number of output files.
Controlling that and considering your dataset size may help you with your needs, since you can calculate the size of each file in your output. You can do that with the coalesce and repartition commands:
df.coalesce(2).write(...)
df.repartition(2).write(...)
Both of them are used to set the number of partitions to the value given as a parameter. So if you pass 2, you should have 2 files in your output.
The difference is that with repartition you can both increase and decrease the number of partitions, while with coalesce you can only decrease it.
Also, keep in mind that repartition performs a full shuffle to distribute the data equally among the partitions, which may be expensive in resources and time. On the other hand, coalesce does not perform a full shuffle; it combines existing partitions instead.
You can find an awesome explanation in this other answer here
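A small sketch of that difference (assumes a DataFrame df; the counts are arbitrary):
val ten = df.repartition(10)
println(ten.rdd.getNumPartitions)                   // 10
println(ten.coalesce(20).rdd.getNumPartitions)      // still 10: coalesce cannot increase partitions
println(ten.repartition(20).rdd.getNumPartitions)   // 20: repartition can, at the cost of a full shuffle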

Spark: repartition output by key

I'm trying to output records using the following code:
spark.createDataFrame(asRow, struct)
.write
.partitionBy("foo", "bar")
.format("text")
.save("/some/output-path")
I don't have a problem when the data is small. However, when I'm processing ~600 GB of input, I am writing around 290k files, and that includes small files per partition. Is there a way we could control the number of output files per partition? Right now I am writing a lot of small files, and that's not good.
Having lots of files is the expected behavior, as each partition (resulting from whatever computation you had before the write) writes its own files into the output partitions you requested.
If you wish to avoid that, you need to repartition before the write:
import org.apache.spark.sql.functions.col

spark.createDataFrame(asRow, struct)
  .repartition(col("foo"), col("bar"))   // rows with the same ("foo", "bar") end up in the same task
  .write
  .partitionBy("foo", "bar")
  .format("text")
  .save("/some/output-path")
You have multiple files per partition because each task writes its output to its own file. That means the only way to have a single file per partition is to repartition the data before writing. Please note that this can be expensive, because repartitioning causes a shuffle of your data.
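If you also want to bound the number of write tasks, a variant of the answer above (the count of 10 is arbitrary) is to pass an explicit partition count to repartition; rows sharing the same ("foo", "bar") still land in one task, so each output directory still gets a single file:
import org.apache.spark.sql.functions.col

spark.createDataFrame(asRow, struct)
  .repartition(10, col("foo"), col("bar"))   // at most 10 write tasks in total
  .write
  .partitionBy("foo", "bar")
  .format("text")
  .save("/some/output-path")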