scala read large files - scala

Hello, I am looking for the fastest, but still reasonably high-level, way to work with a large data collection.
My task consists of two parts: read a lot of large files into memory, and then make some statistical calculations (the easiest way to work with the data in this task is a random-access array).
My first approach was to use java.io.ByteArrayOutputStream, because it can resize its internal storage.
import org.apache.commons.io.IOUtils // assuming commons-io, which has copy(InputStream, OutputStream)

def packTo(buf: java.io.ByteArrayOutputStream, f: File) = {
  try {
    val fs = new java.io.FileInputStream(f)
    try IOUtils.copy(fs, buf) finally fs.close()
  } catch {
    case e: java.io.FileNotFoundException =>
  }
}

val buf = new java.io.ByteArrayOutputStream()
files foreach { f: File => packTo(buf, f) }
println(buf.size())

for (i <- 0 until buf.size()) {
  for (j <- 0 until buf.size()) {
    for (k <- 0 until buf.size()) {
      // println("i " + i + " " + buf(i))
      // Calculate something amazing using buf(i), buf(j), buf(k)
    }
  }
}
println("amazing = " + ???)
But ByteArrayOutputStream can only give me the data as a copy of its internal byte[], not the array itself, and I cannot afford to hold two copies of the data in memory.

Have you tried scala-io? It should be as simple as Resource.fromFile(f).byteArray with it.
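A minimal sketch of that approach; the byteArray call is the one suggested above, while the scalax.io import path and the concatenation of all files into one array are my assumptions:

import java.io.File
import scalax.io.Resource // scala-io

// Read each file fully and concatenate everything into one random-access array
val data: Array[Byte] = files.toArray.flatMap { f: File => Resource.fromFile(f).byteArray }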

Scala's built-in library already provides a nice API for this:
io.Source.fromFile("/file/path").mkString.getBytes
However, it's often not a good idea to load a whole file into memory as a byte array. Do make sure the largest possible file can still fit into your JVM heap.
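If the data has to end up in a single random-access byte array without an intermediate copy, one alternative (a sketch, not part of the answer above; it assumes the total size fits in one array, i.e. stays below Int.MaxValue bytes) is to sum the file sizes first and copy each file into a preallocated Array[Byte]:

import java.io.{File, FileInputStream}

// Preallocate one array big enough for all files and copy each file into it,
// so there is never a second copy of the data in memory.
def packAll(files: Seq[File]): Array[Byte] = {
  val total = files.map(_.length).sum
  require(total <= Int.MaxValue, s"total size $total does not fit in one array")
  val buf = new Array[Byte](total.toInt)
  var offset = 0
  for (f <- files) {
    val in = new FileInputStream(f)
    try {
      var n = in.read(buf, offset, buf.length - offset)
      while (n > 0) { offset += n; n = in.read(buf, offset, buf.length - offset) }
    } finally in.close()
  }
  buf
}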

Related

file merge logic: scala

This might be a silly question for Scala experts, but as a beginner I'm having a hard time identifying the solution. Any pointers would help.
I have a set of 3 files in an HDFS location, with the names:
fileFirst.dat
fileSecond.dat
fileThird.dat
They won't necessarily be stored in that order; fileFirst.dat could be created last, so an ls shows a different ordering of the files every time.
My task is to combine all the files into a single file in this order:
fileFirst contents, then fileSecond contents, and finally fileThird contents, with a newline as the separator and no spaces.
I tried some ideas but couldn't come up with anything that works; every time, the order of the combination gets messed up.
Below is my function to merge whatever comes in:
def writeFile(): Unit = {
  val in: InputStream = fs.open(files(i).getPath)
  try {
    IOUtils.copyBytes(in, out, conf, false)
    if (addString != null) out.write(addString.getBytes("UTF-8"))
  } finally in.close()
}
files is defined like this:
val files: Array[FileStatus] = fs.listStatus(srcPath)
This is part of a bigger function where I pass in all the arguments used in this method. After everything is done, I call out.close() to close the output stream.
Any ideas are welcome, even if they go against the file-write logic I'm trying to use; just understand that I'm not that good at Scala, for now :)
If you can enumerate your Paths directly, you don't really need to use listStatus. You could try something like this (untested):
val relativePaths = Array("fileFirst.dat", "fileSecond.dat", "fileThird.dat")
val paths = relativePaths.map(new Path(srcDirectory, _))
val output = fs.create(destinationFile)
try {
  for (path <- paths) {
    val input = fs.open(path)
    try {
      IOUtils.copyBytes(input, output, conf, false)
      // If you need the newline separator between files, write it here,
      // e.g. output.write("\n".getBytes("UTF-8"))
    } catch {
      case ex: Exception => throw ex // Feel free to do some error handling here
    } finally {
      input.close()
    }
  }
} catch {
  case ex: Exception => throw ex // Feel free to do some error handling here
} finally {
  output.close()
}
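If you do need to go through listStatus (for example because the directory can contain other files as well), here is a small sketch of forcing the required order onto its result before merging; the desired-order list is taken from the question:

import org.apache.hadoop.fs.{FileStatus, Path}

// listStatus gives no ordering guarantee, so impose the order explicitly
val desiredOrder = Seq("fileFirst.dat", "fileSecond.dat", "fileThird.dat")
val listed: Array[FileStatus] = fs.listStatus(srcPath)
val ordered: Seq[Path] =
  desiredOrder.flatMap(name => listed.find(_.getPath.getName == name)).map(_.getPath)
// then feed `ordered` to the same copy loop as above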

Spark: run an external process in parallel

Is it possible with Spark to "wrap" and run an external process managing its input and output?
The process is represented by a normal C/C++ application that usually runs from the command line. It accepts a plain text file as input and generates another plain text file as output. As I need to integrate the flow of this application with something bigger (always in Spark), I was wondering if there is a way to do this.
The process can easily be run in parallel (at the moment I use GNU Parallel) just by splitting its input into (for example) 10 part files, running 10 instances of it in memory, and re-joining the final 10 part files into one output file.
The simplest thing you can do is to write a small wrapper which takes data from standard input, writes it to a file, executes the external program, and copies the results to standard output. After that, all you have to do is use the pipe method:
rdd.pipe("your_wrapper")
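For illustration, a rough sketch of such a wrapper in Scala; the external program's path and its input-file/output-file calling convention are assumptions:

import java.io.File
import java.nio.file.Files
import scala.sys.process._

// Reads records from stdin, hands them to the external program via temporary
// files, and echoes the program's output to stdout, which is what pipe expects.
object Wrapper extends App {
  val inFile  = File.createTempFile("pipe-in-", ".txt")
  val outFile = File.createTempFile("pipe-out-", ".txt")

  // stdin -> input file
  Files.write(inFile.toPath, scala.io.Source.stdin.getLines.mkString("\n").getBytes("UTF-8"))

  // run the external program (path and argument order are placeholders)
  val exit = Seq("/path/to/external_program", inFile.getPath, outFile.getPath).!
  if (exit != 0) sys.exit(exit)

  // output file -> stdout
  scala.io.Source.fromFile(outFile).getLines.foreach(println)
  inFile.delete(); outFile.delete()
}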
The only serious consideration is IO performance. If it is possible, it would be better to adjust the program you want to call so it can read and write data directly, without going through disk.
Alternatively, you can use mapPartitions combined with process and standard IO tools to write to a local file, call your program, and read the output.
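A minimal sketch of that mapPartitions variant, assuming rdd is an RDD[String] and that the external program (path is a placeholder) takes an input path and an output path as arguments:

import java.io.File
import java.nio.file.Files
import scala.collection.JavaConverters._
import scala.sys.process._

val processed = rdd.mapPartitions { lines =>
  // write this partition's records to a temporary input file
  val inFile  = File.createTempFile("part-in-", ".txt")
  val outFile = File.createTempFile("part-out-", ".txt")
  Files.write(inFile.toPath, lines.toSeq.asJava)

  // run the external program once per partition
  val exitCode = Seq("/path/to/external_program", inFile.getPath, outFile.getPath).!
  require(exitCode == 0, s"external program failed with exit code $exitCode")

  // read its output back as the records of this partition
  val out = Files.readAllLines(outFile.toPath).asScala.toList
  inFile.delete(); outFile.delete()
  out.iterator
}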
If you end up here based on the question title from a Google search, but you don't have the OP restriction that the external program needs to read from a file--i.e., if your external program can read from stdin--here is a solution. For my use case, I needed to call an external decryption program for each input file.
import org.apache.commons.io.IOUtils
import sys.process._
import scala.collection.mutable.ArrayBuffer
val showSampleRows = true
val bfRdd = sc.binaryFiles("/some/files/*,/more/files/*")
val rdd = bfRdd.flatMap { case (file, pds) => { // pds is a PortableDataStream
  val rows = new ArrayBuffer[Array[String]]()
  var errors = List[String]()
  val io = new ProcessIO(
    in => { // "in" is an OutputStream; write the encrypted contents of the
            // input file (pds) to this stream
      IOUtils.copy(pds.open(), in) // open() returns a DataInputStream
      in.close
    },
    out => { // "out" is an InputStream; read the decrypted data off this stream.
      // Even though this runs in another thread, we can write to rows, since it
      // is part of the closure for this function
      for (line <- scala.io.Source.fromInputStream(out).getLines) {
        // ...decode line here... for my data, it was pipe-delimited
        rows += line.split('|')
      }
      out.close
    },
    err => { // "err" is an InputStream; read any errors off this stream
      // errors is part of the closure for this function
      errors = scala.io.Source.fromInputStream(err).getLines.toList
      err.close
    }
  )
  val cmd = List("/my/decryption/program", "--decrypt")
  val exitValue = cmd.run(io).exitValue // blocks until subprocess finishes
  println(s"-- Results for file $file:")
  if (exitValue != 0) {
    // TBD write to string accumulator instead, so driver can output errors
    // string accumulator from #zero323: https://stackoverflow.com/a/31496694/215945
    println(s"exit code: $exitValue")
    errors.foreach(println)
  } else {
    // TBD, you'll probably want to move this code to the driver, otherwise
    // unless you're using the shell, you won't see this output
    // because it will be sent to stdout of the executor
    println(s"row count: ${rows.size}")
    if (showSampleRows) {
      println("6 sample rows:")
      rows.slice(0, 6).foreach(row => println(" " + row.mkString("|")))
    }
  }
  rows
}}
scala> :paste "test.scala"
Loading test.scala...
...
rdd: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[62] at flatMap at <console>:294
scala> rdd.count // action, causes Spark code to actually run
-- Results for file hdfs://path/to/encrypted/file1: // this file had errors
exit code: 255
ERROR: Error decrypting
my_decryption_program: Bad header data[0]
-- Results for file hdfs://path/to/encrypted/file2:
row count: 416638
sample rows:
<...first row shown here ...>
...
<...sixth row shown here ...>
...
res43: Long = 843039
References:
https://www.scala-lang.org/api/current/scala/sys/process/ProcessIO.html
https://alvinalexander.com/scala/how-to-use-closures-in-scala-fp-examples#using-closures-with-other-data-types

slow retrieval of BYTEA column

I'm supposed to be testing different methods to store PDF files in a Postgres Database using JDBC. Currently I'm trying it with BYTEA. Storing files works without problems, but the retrieval is super slow.
I am working with a couple of files around 3 MB each. Storing them takes around 3 seconds (total), so that's alright. But when I try to retrieve them, it takes around 2 minutes between the output of how many files are in the DB and the program actually starting to create the files. Once it starts, it only takes around 5 seconds to finish. Why does Postgres take so long for the "SELECT file..." query? The query takes equally long when I use pgAdmin. Not retrieving the file size doesn't change anything.
As far as I understand, the DB uses TOAST to split my files up, and when I want to retrieve them, it has to piece them back together first. But since splitting them (when uploading) only takes a couple of seconds, putting them back together shouldn't take that long, right?
Here are some code snippets:
public void saveToDB(File[] files) throws Exception {
    try (PreparedStatement s = con.prepareStatement("INSERT INTO fileTable (filename, file) VALUES (?,?)")) {
        for (File f : files) {
            System.out.println(f.getName() + " (" + f.length() / 1024 + "KB)");
            s.setString(1, f.getName());
            s.setBinaryStream(2, new FileInputStream(f), f.length());
            s.executeUpdate();
        }
        con.commit();
    }
}

public void getFromDB(File dir) throws Exception {
    dir.mkdirs();
    try (Statement s = con.createStatement(); ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM useByteA")) {
        rs.next();
        System.out.println("Files: " + rs.getInt(1));
    }
    try (Statement s = con.createStatement(); ResultSet rs = s.executeQuery("SELECT length(file), filename, file FROM fileTable")) {
        while (rs.next()) {
            String filename = rs.getString(2);
            System.out.println(filename + " (" + rs.getInt(1) / 1024 + "KB)");
            File f = new File(dir, filename);
            f.createNewFile();
            try (FileOutputStream out = new FileOutputStream(f)) {
                out.write(rs.getBytes(3));
                out.flush();
            }
        }
    }
}
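For comparison, here is a sketch (in Scala, like the rest of this collection) of a streaming variant of the retrieval loop above. The table and column names follow the snippets; the fetch-size/autocommit tuning is an assumption based on the PostgreSQL JDBC driver fetching the whole result set at once unless a fetch size is set with autocommit off:

import java.io.{File, FileOutputStream}
import java.sql.Connection

// Stream each bytea value straight to disk instead of materializing it with
// getBytes, and fetch a few rows at a time rather than the whole result set.
def getFromDBStreaming(con: Connection, dir: File): Unit = {
  dir.mkdirs()
  con.setAutoCommit(false)      // cursor-based fetching in pgjdbc needs autocommit off
  val s = con.createStatement()
  s.setFetchSize(10)            // pull 10 rows at a time
  val rs = s.executeQuery("SELECT filename, file FROM fileTable")
  while (rs.next()) {
    val out = new FileOutputStream(new File(dir, rs.getString(1)))
    try {
      val in  = rs.getBinaryStream(2)
      val buf = new Array[Byte](8192)
      var n   = in.read(buf)
      while (n != -1) { out.write(buf, 0, n); n = in.read(buf) }
    } finally out.close()
  }
  rs.close(); s.close()
}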

How to generate a big data stream on the fly

I have to generate a big file on the fly: read from the database and send it to the client.
I read some documentation and I did this:
val streamContent: Enumerator[Array[Byte]] = Enumerator.outputStream { os =>
  // new PrintWriter() read from database and for each record
  // do some logic and write to the output stream
}
Ok.stream(streamContent.andThen(Enumerator.eof)).withHeaders(
  CONTENT_DISPOSITION -> s"attachment; filename=someName.csv"
)
I'm rather new to Scala in general (only a week), so don't judge me by my reputation.
My questions are:
1) Is this the best way? I found that if I have a big file, this will load it into memory; also, I don't know what the chunk size is in this case, and sending a chunk for every write() is not very convenient.
2) I found the method Enumerator.fromStream(data: InputStream, chunkedSize: Int) a little better because it has a chunk size, but I don't have an InputStream because I'm creating the file on the fly.
There's a note in the docs for Enumerator.outputStream:
Not [sic!] that calls to write will not block, so if the iteratee that is being fed to is slow to consume the input, the OutputStream will not push back. This means it should not be used with large streams since there is a risk of running out of memory.
Whether this can happen depends on your situation. If you can and will generate gigabytes in seconds, you should probably try something different. I'm not exactly sure what, but I'd start at Enumerator.generateM(). For many cases, though, your method is perfectly fine. Have a look at this example by Gaëtan Renaudeau for serving a zip file that's generated on the fly in the same way you're using it:
import java.util.zip.{ZipEntry, ZipOutputStream}

val r = new scala.util.Random() // source of the random numbers written below
val enumerator = Enumerator.outputStream { os =>
  val zip = new ZipOutputStream(os)
  Range(0, 100).map { i =>
    zip.putNextEntry(new ZipEntry("test-zip/README-" + i + ".txt"))
    zip.write("Here are 100000 random numbers:\n".map(_.toByte).toArray)
    // Let's do 100 writes of 1'000 numbers
    Range(0, 100).map { j =>
      zip.write((Range(0, 1000).map(_ => r.nextLong).map(_.toString).mkString("\n")).map(_.toByte).toArray)
    }
    zip.closeEntry()
  }
  zip.close()
}
Ok.stream(enumerator >>> Enumerator.eof).withHeaders(
  "Content-Type" -> "application/zip",
  "Content-Disposition" -> "attachment; filename=test.zip"
)
Please keep in mind that Ok.stream has been replaced by Ok.chunked in newer versions of Play, in case you want to upgrade.
As for the chunk size, you can always use Enumeratee.grouped to gather a bunch of values and send them as one chunk.
val grouper = Enumeratee.grouped(
  Traversable.take[Array[Double]](100) &>> Iteratee.consume()
)
Then you'd do something like
Ok.stream(enumerator &> grouper >>> Enumerator.eof)

Simplest way to process a list of items in a multi-threaded manner

I've got a piece of code that opens a data reader and, for each record (which contains a URL), downloads and processes that page.
What's the simplest way to make it multi-threaded so that, let's say, there are 10 slots which can be used to download and process pages simultaneously, and as slots become available the next rows are read, etc.?
I can't use WebClient.DownloadDataAsync
Here's what I have tried to do, but it hasn't worked (i.e. the "worker" is never run):
using (IDataReader dr = q.ExecuteReader())
{
    ThreadPool.SetMaxThreads(10, 10);
    int workerThreads = 0;
    int completionPortThreads = 0;
    while (dr.Read())
    {
        do
        {
            ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
            if (workerThreads == 0)
            {
                Thread.Sleep(100);
            }
        } while (workerThreads == 0);
        Database.Log l = new Database.Log();
        l.Load(dr);
        ThreadPool.QueueUserWorkItem(delegate(object threadContext)
        {
            Database.Log log = threadContext as Database.Log;
            Scraper scraper = new Scraper();
            dc.Product p = scraper.GetProduct(log, log.Url, true);
            ManualResetEvent done = new ManualResetEvent(false);
            done.Set();
        }, l);
    }
}
You do not normally need to play with the max threads (I believe it defaults to something like 25 per processor for worker threads and 1000 for IO). You might consider setting the min threads to ensure you always have a nice number available.
You don't need to call GetAvailableThreads either. You can just start calling QueueUserWorkItem and let it do all the work. Can you repro your problem by simply calling QueueUserWorkItem?
You could also look into the Task Parallel Library, which has helper methods to make this kind of thing more manageable and easier.