Writing a functional and yet functional image processing library in Scala

We are developing a small image processing library for Scala (student project). The library is completely functional (i.e. no mutability). The raster of the image is stored as Stream[Stream[Int]] to exploit the benefits of lazy evaluation with the least effort. However, after performing a few operations on an image, the heap fills up and an OutOfMemoryError is thrown. (For example, only about 4 operations can be performed on a 500 x 400, 35 kB JPEG image before the JVM heap runs out of space.)
The approaches we have thought of are:
Twiddling with JVM options to increase the heap size. (We don't know how to do this under IDEA, the IDE we are working with.)
Choosing a data structure other than Stream[Stream[Int]], one better suited to image processing. (Again, we do not know much about functional data structures beyond the simple List and Stream.)
The last option is giving up on immutability and making it a mutable library (like the popular image processing libraries), which we don't really want to do. Please suggest some way to keep this library functional and yet functional, if you know what I mean.
Thank you,
Siddharth Raina.
ADDENDUM:
For an image sized 1024 x 768, the JVM runs out of heap space even for a single mapping operation. Some example code from our test:
val image = Image from "E:/metallica.jpg"
val redded = image.map(_ & 0xff0000)
redded.display(title = "Redded")
And the output:
"C:\Program Files (x86)\Java\jdk1.6.0_02\bin\java" -Didea.launcher.port=7533 "-Didea.launcher.bin.path=C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 10.0.2\bin" -Dfile.encoding=windows-1252 -classpath "C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\charsets.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\deploy.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\javaws.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\jce.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\jsse.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\management-agent.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\plugin.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\resources.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\rt.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\ext\dnsns.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\ext\localedata.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\ext\sunjce_provider.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\ext\sunmscapi.jar;C:\Program Files (x86)\Java\jdk1.6.0_02\jre\lib\ext\sunpkcs11.jar;C:\new Ph\Phoebe\out\production\Phoebe;E:\Inventory\Marvin.jar;C:\scala-2.8.1.final\lib\scala-library.jar;C:\scala-2.8.1.final\lib\scala-swing.jar;C:\scala-2.8.1.final\lib\scala-dbc.jar;C:\new Ph;C:\scala-2.8.1.final\lib\scala-compiler.jar;E:\Inventory\commons-math-2.2.jar;E:\Inventory\commons-math-2.2-sources.jar;E:\Inventory\commons-math-2.2-javadoc.jar;E:\Inventory\jmathplot.jar;E:\Inventory\jmathio.jar;E:\Inventory\jmatharray.jar;E:\Inventory\Javax Media.zip;E:\Inventory\jai-core-1.1.3-alpha.jar;C:\Program Files (x86)\JetBrains\IntelliJ IDEA Community Edition 10.0.2\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain phoebe.test.ImageTest
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at scala.collection.Iterator$class.toStream(Iterator.scala:1011)
at scala.collection.IndexedSeqLike$Elements.toStream(IndexedSeqLike.scala:52)
at scala.collection.Iterator$$anonfun$toStream$1.apply(Iterator.scala:1011)
at scala.collection.Iterator$$anonfun$toStream$1.apply(Iterator.scala:1011)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:565)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:557)
at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:168)
at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:168)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:565)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:557)
at scala.collection.immutable.Stream$$anonfun$flatten1$1$1.apply(Stream.scala:453)
at scala.collection.immutable.Stream$$anonfun$flatten1$1$1.apply(Stream.scala:453)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:565)
at scala.collection.immutable.Stream$Cons.tail(Stream.scala:557)
at scala.collection.immutable.Stream.length(Stream.scala:113)
at scala.collection.SeqLike$class.size(SeqLike.scala:221)
at scala.collection.immutable.Stream.size(Stream.scala:48)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:388)
at scala.collection.immutable.Stream.toArray(Stream.scala:48)
at phoebe.picasso.Image.force(Image.scala:85)
at phoebe.picasso.SimpleImageViewer.<init>(SimpleImageViewer.scala:10)
at phoebe.picasso.Image.display(Image.scala:91)
at phoebe.test.ImageTest$.main(ImageTest.scala:14)
at phoebe.test.ImageTest.main(ImageTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:115)
Process finished with exit code 1

If I understood correctly, you store each individual pixel in one Stream element, and this can be inefficient. What you can do is create a custom LazyRaster class which holds lazy references to fixed-size blocks of the image (for instance, 20x20). The first time a block is written, its backing array is initialized; from then on, changing a pixel means writing to that array.
This is more work, but may result in better performance. Furthermore, if you wish to support stacking of image operations (e.g. a map, then a take, then another map) and then evaluating the image in one go, the implementation can get tricky - the Stream implementation itself is the best evidence of this.
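A minimal sketch of what such a block-based raster could look like, assuming a 20x20 default block size and a pixel function supplied at construction (LazyRaster and all names here are illustrative, not an existing API):

class LazyRaster(val width: Int, val height: Int,
                 compute: (Int, Int) => Int, blockSize: Int = 20) {
  private val blocksX = (width + blockSize - 1) / blockSize
  private val blocksY = (height + blockSize - 1) / blockSize
  // a block's backing array is allocated and filled only on first access
  private val blocks = new Array[Array[Int]](blocksX * blocksY)

  private def block(bx: Int, by: Int): Array[Int] = {
    val i = by * blocksX + bx
    if (blocks(i) == null) {
      val a = new Array[Int](blockSize * blockSize)
      for (y <- 0 until blockSize; x <- 0 until blockSize) {
        val (px, py) = (bx * blockSize + x, by * blockSize + y)
        if (px < width && py < height) a(y * blockSize + x) = compute(px, py)
      }
      blocks(i) = a
    }
    blocks(i)
  }

  def apply(x: Int, y: Int): Int =
    block(x / blockSize, y / blockSize)((y % blockSize) * blockSize + (x % blockSize))

  // map is deferred: it only composes the pixel function, no pixels are copied here
  def map(f: Int => Int): LazyRaster =
    new LazyRaster(width, height, (x, y) => f(this(x, y)), blockSize)
}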
Another thing one can do is ensure that the old Streams are being properly garbage collected. I suspect the image object in your example is a wrapper for your streams. If you wish to stack multiple image operations (like mapping) together and be able to GC the references you no longer need, you have to make sure that you don't hold any references to a stream - note that this is not ensured if:
you have a reference to your image on the stack (image in the example)
your Image wrapper contains such a reference.
Without knowing more about the exact use cases, it's hard to say more.
Personally, I would avoid Streams altogether and simply use some immutable array-based data structure which is both space-efficient and avoids boxing. The only place where I can potentially see Streams being used is in iterative image transformations, like convolution or applying a stack of filters. You wouldn't have a Stream of pixels, but a Stream of images instead. This could be a nice way to express a sequence of transformations - in this case, the comments about GC above apply.

If you process large streams, you need to avoid holding onto a reference to the head of the stream, as that reference will prevent the already-processed prefix from being garbage collected.
It's possible that calling certain methods on Stream will internally hold onto the head. See the discussion here: Functional processing of Scala streams without OutOfMemory errors
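For example (a self-contained sketch, not taken from the library in question), binding the head of a long Stream to a val keeps every evaluated cell reachable for the whole traversal, while consuming it without a named reference lets the prefix be collected:

object HeadRetention {
  def main(args: Array[String]): Unit = {
    // Risky: `pixels` pins the head, so all cells evaluated by foreach stay reachable
    // val pixels = Stream.range(0, 10000000).map(_ & 0xff0000)
    // pixels.foreach(_ => ())

    // Safer: no named reference to the head; consumed cells can be garbage collected
    Stream.range(0, 10000000).map(_ & 0xff0000).foreach(_ => ())
  }
}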

Stream is very unlikely to be the optimum structure here. Given the nature of a JPEG it makes little sense to "stream" it into memory line-by-line.
Stream also has linear access time for reading elements. Again, probably not what you want unless you're streaming data.
I'd recommend using an IndexedSeq[IndexedSeq[Int]] in this scenario. Or (if performance is important) an Array[Array[Int]], which will allow you to avoid some boxing/unboxing costs.
Martin has written a good overview of the 2.8 collections API which should help you understand the inherent trade-offs in the various collection types available.
Even if using Arrays, there's still every reason to use them as immutable structures and maintain a functional programming style. Just because a structure is mutable doesn't mean you have to mutate it!
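As a rough illustration of that last point (a sketch, not part of any particular library), a transformation over an Array-backed raster can simply return a fresh array instead of mutating in place:

// Build a new raster; the input is never modified
def mapRaster(raster: Array[Array[Int]])(f: Int => Int): Array[Array[Int]] =
  raster.map(row => row.map(f))

// e.g. val redded = mapRaster(pixels)(_ & 0xff0000)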

I recommend also looking at continuous rather than just discrete models for imagery. Continuous models are generally more modular and composable than discrete ones, whether in time or space.
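One possible reading of the continuous view, sketched here purely as an illustration (the names are made up): an image becomes a function from coordinates to a pixel value, transformations compose functions, and sampling back to a raster happens only at display time.

type ContinuousImage = (Double, Double) => Int

// transformations just compose functions; nothing is evaluated yet
def redOnly(img: ContinuousImage): ContinuousImage =
  (x, y) => img(x, y) & 0xff0000

// discretize only when the pixels are actually needed
def rasterize(img: ContinuousImage, width: Int, height: Int): Array[Array[Int]] =
  Array.tabulate(height, width)((y, x) => img(x, y))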

As a first step you should take a memory dump and analyze it. It is very possible that you will see the problem immediately.
There is a special command-line option to force the JVM to produce a dump on OOME: -XX:+HeapDumpOnOutOfMemoryError. There are also good tools, like jhat and VisualVM, which can help you with the analysis.

Stream is more about lazy evaluation than immutability. And you're
forcing an insane amount of space and time overhead for each pixel by
doing so. Furthermore, Streams only make sense when you can defer the
determination (calculation or retrieval) of individual pixel values.
And, of course, random access is impossible. I'd have to deem the
Stream an entirely inappropriate data structure for image processing.
I'd strongly recommend that you manage your own raster memory (bonus
points for not fixing a single raster image organization into your
code) and allocate storage for whole channels or planes or bands
thereof (depending on the raster organization in play).
UPDATE: By the foregoing, I mean don't use nested Array or IndexedSeq, but allocate a single block and compute the element index from the row and column values.
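A minimal sketch of that flat layout (the names are illustrative): one array per band, with the element index computed from the row and column.

final class Band(val width: Int, val height: Int, data: Array[Int]) {
  // row-major addressing into a single flat block
  def apply(row: Int, col: Int): Int = data(row * width + col)
}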
Then take an "immutable after initialization" approach. Once a given
pixel or sample has been established in the raster, you never allow it
to be changed. This might require a one-bit raster plane to track the
established pixels. Alternatively, if you know how you'll be filling
the raster (the sequence in which pixels will be assigned) you can get
away with a much simpler and cheaper representation of how much of the
raster is established and how much remains to be filled.
Then as you perform processing on the raster images, do so in a pipeline
where no image is altered in place, but rather a new image is always
generated as various transforms are applied.
You might consider that for some image transformations (convolution,
e.g.) you must take this approach or you will not get the correct
results.
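Continuing the sketch above: each stage of the pipeline reads from the source band and produces a brand-new one, so nothing is ever altered in place (again, purely illustrative code building on the hypothetical Band class).

def transform(src: Band)(f: Int => Int): Band = {
  val out = new Array[Int](src.width * src.height)
  var row = 0
  while (row < src.height) {
    var col = 0
    while (col < src.width) {
      out(row * src.width + col) = f(src(row, col))
      col += 1
    }
    row += 1
  }
  new Band(src.width, src.height, out) // the source band is left untouched
}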

I strongly recommend Okasaki's Purely Functional Data Structures if you don't have any experience with functional data structures (as you seem to indicate).

To increase your heap size using IntelliJ, you need to add the following to the VM Parameters section of the Run/Debug Configuration:
-Xms256m -Xmx256m
This will increase the maximum heap size to 256MB and also ensure this amount is requested by the VM at startup, which generally represents a performance increase.
Additionally, you're using a relatively old JDK. If possible, I recommend you update to the latest available version, as newer builds enable escape analysis, which can in some cases have a dramatic effect on performance.
Now, in terms of algorithms, I would suggest that you follow the advice above and divide the image into blocks of, say, 9x9 (any size will do, though). I'd then have a look at Huet's Zipper and think about how it might be applied to an image represented as a tree structure, and how that might enable you to model the image as a persistent data structure.
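For reference, here is the one-dimensional version of Huet's zipper (a generic sketch, not an image-specific design): the focused element can be "replaced" without mutation, and the untouched parts are shared between the old and new versions. The same idea can be lifted to a tree of image blocks.

case class Zipper[A](left: List[A], focus: A, right: List[A]) {
  def moveRight: Option[Zipper[A]] = right match {
    case r :: rs => Some(Zipper(focus :: left, r, rs))
    case Nil     => None
  }
  // returns a new zipper; the lists on either side are shared, not copied
  def set(a: A): Zipper[A] = copy(focus = a)
}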

Increasing the heap size in IDEA can be done in the vmoptions file, which can be found in the bin directory of your IDEA installation (add -Xmx512m to set the maximum heap size to 512 megabytes, for example).
Apart from that, it is hard to say what causes the out-of-memory error without knowing exactly which operations you perform, but perhaps this question provides some useful tips.

One solution would be to put the image in an array and make filters like map return a wrapper for that array. Basically, you have a trait named Image. That trait requires abstract pixel-retrieval operations. When, for example, the map function is called, you return an implementation which delegates the calls to the old Image and applies the function to its results. The only problem with that is that the transformation could end up being executed multiple times, but since it is a functional library, that is not very important.
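A hedged sketch of that wrapper approach (the trait and class names are made up for illustration): map returns a view that delegates pixel reads to the wrapped image and applies the function on each access.

trait Image { outer =>
  def width: Int
  def height: Int
  def apply(x: Int, y: Int): Int

  // deferred: no pixels are computed until someone reads them
  def map(f: Int => Int): Image = new Image {
    def width  = outer.width
    def height = outer.height
    def apply(x: Int, y: Int): Int = f(outer(x, y))
  }
}

// a concrete backing image over a flat pixel array
class ArrayImage(val width: Int, val height: Int, pixels: Array[Int]) extends Image {
  def apply(x: Int, y: Int): Int = pixels(y * width + x)
}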

Related

What purpose does a queue serve in System Verilog?

They are not used for RTL but rather verification, correct? They would not be synthesizable.
Do they have better memory management features in turn optimizing program time? If I recall
correctly, System Verilog has an automatic garbage collector, so there is no need to deallocate memory.
The official IEEE documentation does a great job of explaining how they work. I am just wondering in what scenarios I would use one vs an array. One guess would be that they have associated methods that allow for easier data manipulation?
Thank you in advance for your knowledge and expertise.
A queue can be synthesisable if it has a bounded maximum size. Only a few synthesis tools support it, probably none of the FPGA synthesis tools.
The key advantage of a queue is the efficiency of adding or removing a single element, especially at the head or tail of the queue. A dynamic array may require reallocating and copying the entire array when its size changes. The penalty for a queue is the extra time it takes to access elements in the middle of the queue, and the extra space compared with a dynamic array holding the same number of elements.
I hope these two points answer the question.

Elastic Binary Search Tree in Haproxy

I was just looking at the source of HAProxy to learn how it is implemented, and I came across an interesting data structure called an Elastic Binary Search Tree. It seems very similar to a binary search tree, but I would like to know what the difference is and the reason behind choosing this data structure for a load balancer.
You'll find the implementation details here: http://1wt.eu/articles/ebtree/
In short, the main difference between a regular binary tree and an ebtree is that in a regular binary tree, you need to allocate intermediary nodes to attach leaves, and in some environments, having to allocate a node in the middle just to insert a leaf is not convenient. With ebtrees, each structure is both a node and a leaf, and thanks to some pointer manipulation, both of them can be used separately. This possibility comes with a number of interesting properties described in the article above, such as O(1) removal, support for duplicate keys, etc.
The benefit of using ebtrees in haproxy compared to rbtrees is the O(1) removal, which makes ebtrees much faster than rbtrees for the scheduler where entries are constantly added/removed. And compared to a BST (which was the original design leading to ebtrees), insertion is very fast (no malloc) and removal doesn't require a free().
A new version is under development to save space. It will have the same complexity as rbtrees but with smaller memory usage. This will be useful to store lots of data which are often looked up and rarely removed (eg: haproxy's stick tables, caches, ...).

Best PostgreSQL hierarchical tree for both performance and moving nodes from GUI?

Since I'm using PostgreSQL, there is a module called ltree, which satisfies at least one of my needs: performance (I don't know about scalability; some say materialized path trees do not scale well).
Since the application I'm developing is a CMS built entirely around a big tree (nodes, subtrees, etc.), performance in querying these nodes is absolutely essential. But since it's a large hierarchical tree (and it grows) that users work on and manipulate from the GUI (CRUD), I also want to make it possible for users to drag and drop to reorder nodes and subtrees while updating the tree (child records) in the database correctly.
As I understand it, moving and reordering nodes/subtrees is not really what ltree/materialized path trees are good for, so what I hope you can help me with is to either point me to the tree structure model that is best for both performance AND moving subtrees and nodes, or, if ltree is indeed not a leftover from the past but still worth using, to explain how this could be achieved with PostgreSQL's ltree module. And why/why not use ltree in this case?
Requirements:
Query performance is of course my top priority (all nodes, subtrees, leaves).
The tree should support deep nesting and sorting.
And of course the tree should support growing large and scaling well.
I can live with a little waiting time while reordering from the GUI, if a single "jack-of-all-trades" tree implementation doesn't exist or is too complex to be worth it.
I'm also considering closure tables a.k.a. bridge tables (a lot!), nested intervals (I'm not sure I understand exactly how to implement them, and no good examples or gists currently exist?) or B-tree models. I'm just not quite sure yet how these will satisfy my four requirements above. Reorganizing subtrees and nodes with nested intervals seems straightforward and performance seems good. It's quite hard to choose the right one to go with.
Since I definitely need performance (query/read performance), scalability and sorting, I kind of thought that closure tables WITH sort order could be very close, but I just can't imagine how big the closure tables and the disk-space overhead will become as my tree and nodes grow large. I'm just not too sure about closure tables and scalability. Am I wrong to worry about this, and what might the best solution for this task be?
The typical data structures used to index trees stored in SQL are designed and optimized for read performance on sets that don't change often.
As an example, if you're using the nested set model, adding or deleting a node would involve updating the entire tree (which typically means rewriting the entire table): great for reads, not so great for writes.
When write performance is important for you, you'll usually be better off working on the raw (id, parent_id) tuples with recursive queries, while setting tree indexes you know for sure are dirty to null as you go. In those areas of the app where read-performance is more important, do a sanity check by checking for null values in the tree index, and re-index the tree as needed before actually using it. That way, you'll avoid incessant rewrites of your tree, and instead re-index it only when needed for a read.
An alternative, albeit (much) more difficult, approach is to use a variation of e.g. nested sets or nested intervals, but using reals or floats instead of integers. This allows you to insert, move and delete nodes for free, at the cost of some storage and arithmetic/read overhead and the loss of some properties such as child node counts in the case of nested sets. However, it also requires that you keep an eye out for pathological edge cases. Namely, you'll need to periodically - and sometimes preemptively - "garbage collect" and re-index large enough chunks of the tree's index in order to fit new nodes when you run into the floating point type's precision limits.
(A variation of the latter is to use a numeric without any precision in order to try to dodge the problem. But that's really just kicking the can down the road, in the sense that you'll still be limited by Postgres internals to a few thousand digits of precision. And in my own tests from a few years back, the storage and arithmetic overheads become material, compared to just using a floating point type, long before you run into that limit.)
As for a "The Best" structure or approach, there really is no magic bullet... Each has pros and cons based on the use-case (frequency of reads vs writes) and the size of the set. There's plenty of literature on the web that compare and explain each of them, which I'm sure you've found already.
That being said, for a CMS I'd advise that you go with whichever method you're most comfortable with: either re-index the tree on the fly as writes occur, or mark the tree as dirty on writes and then re-index it on demand. The point here is that, if re-indexing is done right (i.e. using a plpgsql function or equivalent, rather than a gazillion queries issued by your app), re-indexing an entire tree of a few hundred thousand nodes will take a few hundred milliseconds at most. Assuming the tree isn't constantly getting updated, that's a perfectly acceptable overhead for end users.

Efficient disk access of large number of small .mat files containing objects

I'm trying to determine the best way to store large numbers of small .mat files, around 9000 objects with sizes ranging from 2k to 100k, for a total of around half a gig.
The typical use case is that I only need to pull a small number (say 10) of the files from disk at a time.
What I've tried:
Method 1: If I save each file individually, I get performance problems (very slow save times and system sluggishness for some time after) as Windows 7 has difficulty handling so many files in a folder (and I think my SSD is having a rough time of it, too). However, the end result is fine; I can load what I need very quickly. This is using '-v6' save.
Method 2: If I save all of the files in one .mat file and then load just the variables I need, access is very slow (loading takes around three quarters of the time it takes to load the whole file, with small variation depending on the ordering of the save). This is using '-v6' save, too.
I know I could split the files up into many folders, but it seems like such a nasty hack (and won't fix the SSD's dislike of writing many small files). Is there a better way?
Edit:
The objects consist mainly of a numeric matrix of double data and an accompanying vector of uint32 identifiers, plus a bunch of small identifying properties (char and numeric).
Five ideas to consider:
Try storing in an HDF5 object - take a look at http://www.mathworks.com/help/techdoc/ref/hdf5.html - you may find that this solves all of your problems. It will also be compatible with many other systems (e.g. Python, Java, R).
A variation on your method #2 is to store them in one or more files, but to turn off compression.
Different datatypes: It may also be the case that you have some objects that compress or decompress inexplicably poorly. I have had such issues with either cell arrays or struct arrays. I eventually found a way around it, but it's been a while and I can't remember how to reproduce this particular problem. The solution was to use a different data structure.
#SB proposed a database. If all else fails, try that. I don't like building external dependencies and additional interfaces, but it should work (the primary problem is that if the DB starts to groan or corrupts your data, then you're back at square 1). For this purpose consider SQLite, which doesn't require a separate server/client framework. There is an interface available on Matlab Central: http://www.mathworks.com/matlabcentral/linkexchange/links/1549-matlab-sqlite
(New) Considering that the whole set of objects is less than 1GB, it may be easier to just copy the entire set to a RAM disk and then access it through that. Just remember to copy from the RAM disk if anything is saved (or wrap save to save objects in two places).
Update: The OP has mentioned custom objects. There are two methods to consider for serializing these:
Two serialization programs from Matlab Central: http://www.mathworks.com/matlabcentral/fileexchange/29457 - which was inspired by: http://www.mathworks.com/matlabcentral/fileexchange/12063-serialize
Google's Protocol Buffers. Take a look here: http://code.google.com/p/protobuf-matlab/
Try storing them as blobs in a database.
I would also try the multiple folders method as well - it might perform better than you think. It might also help with organization of the files if that's something you need.
The solution I have come up with is to save object arrays of around 100 of the objects each. These files tend to be 5-6 meg so loading is not prohibitive and access is just a matter of loading the right array(s) and then subsetting them to the desired entry(ies). This compromise avoids writing too many small files, still allows for fast access of single objects and avoids any extra database or serialization overhead.

Immutable Map implementation for huge maps

If I have an immutable Map which I might expect (over a very short period of time - like a few seconds) to be adding/removing hundreds of thousands of items from, is the standard HashMap a bad idea? Let's say I want to pass 1GB of data through the Map in under 10 seconds in such a way that the maximum size of the Map at any one instant is only 256MB.
I get the impression that the map keeps some kind of "history" but I will always be accessing the last-updated table (i.e. I do not pass the map around) because it is a private member variable of an Actor which is updated/accessed only from within reactions.
Basically I suspect that this data structure may be (partly) at fault for issues I am seeing around JVMs going out of memory when reading in large amounts of data in a short time.
Would I be better off with a different map implementation and, if so, what is it?
Ouch. Why do you have to use an immutable map? Poor garbage collector! Immutable maps generally require O(log n) new objects per operation in addition to O(log n) time, or they really just wrap mutable hash maps and layer changesets on top (which slows things down and can increase the number of object creations).
Immutability is great, but this does not seem to me like the time to use it. If I were you, I'd stick with scala.collection.mutable.HashMap. If you need concurrent access, wrap the Java util.concurrent one instead.
You also might want to increase the size of the young generation in the JVM: -Xmn1G or more (assuming you're running with -Xmx3G). Also, use the throughput (parallel) garbage collector.
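A rough sketch of what that could look like inside a single actor (illustrative only; it assumes the old scala.actors library, since the question predates Akka, and made-up message shapes):

import scala.actors.Actor
import scala.actors.Actor.loop
import scala.collection.mutable

class TableActor extends Actor {
  // private to this actor and only touched inside react, so a plain mutable map is fine
  private val table = mutable.HashMap.empty[String, Array[Byte]]

  def act() {
    loop {
      react {
        case ("put", key: String, value: Array[Byte]) => table(key) = value
        case ("remove", key: String)                  => table -= key
        case ("get", key: String)                     => reply(table.get(key))
      }
    }
  }
}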
That would be awful. You say you always want to access the last-updated table; that means you only need an ephemeral data structure and there is no need to pay the cost of a persistent one - it's like trading time and memory to gain completely arguable "style points". You are not building your karma by blindly using persistent structures when they are not called for.
Also, a hashtable is a particularly difficult structure to make persistent. In other words, "very, very slow" (basically it is usable when reads greatly outnumber writes - and you seem to talk about many writes).
By the way, a ConcurrentHashMap wouldn't make sense in this design, given that the map is accessed from a single actor (that's what I understand from the description).
Scala's so-called(*) immutable Map is broken beyond basic usage up to Scala 2.7. Don't trust me; just look up the number of open tickets for it. And the solution is just "it will be replaced with something else in Scala 2.8" (which it was).
So, if you want an immutable map for Scala 2.7.x, I'd advise looking for it in something other than Scala. Or just use TreeHashMap instead.
(*) Scala's immutable Map isn't really immutable. It is a mutable data structure internally, which requires a lot of synchronization.