Threading vs Forking (with explanation of what I want to do) - perl

So, I've reviewed a ton of articles and forums before posting this, but I keep reading conflicting answers. First, OS is not an issue; I can use either Windows or Unix, whichever is better for my problem. I have a ton of data that I need to use for read-only purposes (not sure why this would matter, but in case it does: the data structure I have to traverse is an array of arrays of arrays of hashes whose values are also arrays). I'm essentially comparing a "query" against a ton of different "sentences" and computing their relative similarities. From these quantities (several million), I want to take the top x% and do something with them. I need to parallelize this process. There's just no good way for me to decrease the search space--I need to compare against everything to get good results, and it will simply take too long without some sort of threading/forking. Again, I've seen many conflicting answers and don't know which one to use.
Any help would be appreciated. Thanks in advance.
EDIT: I don't think memory usage will be an issue, but I don't know for sure (the machine has 8 GB of RAM)

Without more detail on your problem, there's not much help that can be given. You want to parallelize a process. Threads and forks in Perl have advantages and disadvantages.
One of the key things that makes Perl threads different from other threads is that data is not shared by default. This makes threads much easier and safer to work with: you don't have to worry about the thread safety of libraries or most of your code, just the threaded bits. However, it can be a performance drag and memory hungry, as Perl must put a copy of the interpreter and all loaded modules into each thread.
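For example, here is a minimal sketch of that difference (the variable names are mine; it assumes a perl built with thread support):

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my $private = 0;           # each thread works on its own copy of this
    my $shared :shared = 0;    # explicitly shared across all threads

    my @workers = map {
        threads->create(sub {
            $private++;                      # changes this thread's copy only
            { lock($shared); $shared++; }    # visible to every thread
        });
    } 1 .. 4;
    $_->join for @workers;

    print "private = $private\n";   # still 0 in the parent
    print "shared  = $shared\n";    # 4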
When it comes to forking, I will only be talking about Unix. Perl emulates fork on Windows using threads; it works, but it can be slow and buggy.
Forking Advantages
Very fast to create a fork
Very robust (the processes are isolated from one another)
Forking Disadvantages
Communicating between the processes can be slow and awkward
Thread Advantages
Thread coordination and data interchange is fairly easy
Threads are fairly easy to use
Thread Disadvantages
Each thread takes a lot of memory
Threads can be slow to start
Threads can be buggy (the more recent your perl, the better)
Database connections are not shared across threads
That last one is a bit of a doozy if the documentation is up to date. If you're going to be doing a lot of SQL, don't use threads.
In general, to get good performance out of Perl threads it's best to start a pool of threads and reuse them. Forks can more easily be created, used and discarded.
Really what it comes down to is what fits your way of thinking and your particular problem.
For either case, you're likely going to want something to manage your pool of workers. For forking you're going to want to use Parallel::ForkManager or Child. Child is particularly nice as it has built-in inter-process communication.
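As a rough sketch of the Parallel::ForkManager pattern applied to your similarity problem (the compare() scorer and the data here are placeholders, not your actual code):

    use strict;
    use warnings;
    use Parallel::ForkManager;

    my $query     = "example query";
    my @sentences = ("first sentence", "second sentence", "third sentence");
    sub compare { my ($q, $s) = @_; return length($q) + length($s) }  # stand-in scorer

    my %scores;
    my $pm = Parallel::ForkManager->new(4);          # at most 4 concurrent children

    # collect each child's result in the parent as the child exits
    $pm->run_on_finish(sub {
        my ($pid, $exit, $ident, $signal, $core, $data) = @_;
        $scores{$ident} = $$data if defined $data;
    });

    for my $i (0 .. $#sentences) {
        $pm->start($i) and next;                     # parent: register child, move on
        my $score = compare($query, $sentences[$i]); # child: do the work
        $pm->finish(0, \$score);                     # child: send result back and exit
    }
    $pm->wait_all_children;

With several million comparisons you would fork once per chunk of sentences rather than once per sentence, since each fork and result hand-off has a fixed cost.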
For threads you're going to want threads::shared and Thread::Queue, and to read perlthrtut.
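In the spirit of the pool-and-reuse advice above, a minimal Thread::Queue worker pool might look like this (the scoring line is a placeholder):

    use strict;
    use warnings;
    use threads;
    use Thread::Queue;

    my $work    = Thread::Queue->new;
    my $results = Thread::Queue->new;

    # start a small pool once and reuse it, rather than spawning per item
    my @pool = map {
        threads->create(sub {
            while (defined(my $item = $work->dequeue)) {
                $results->enqueue("$item => " . length $item);  # stand-in scoring
            }
        });
    } 1 .. 4;

    $work->enqueue($_) for "first sentence", "second sentence";
    $work->enqueue(undef) for @pool;   # one undef per worker signals "no more work"
    $_->join for @pool;

    while (defined(my $r = $results->dequeue_nb)) { print "$r\n" }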
When reading articles about Perl threads, keep in mind they were a bit crap when they were introduced in 5.8.0 in 2002, and only serviceable by 5.10.1. Since then they've firmed up considerably. Information and opinions about their efficiency and robustness tend to fall rapidly out of date.

Threading can be more difficult to get correct, but won't use as much memory.
Forking can be simpler to implement, but can use a significant amount of memory.
If you don't have experience with either, I would start by implementing a forking version and go from there.

Related

Do cats and scalaz create performance overhead on application?

I know this may be a nonsense question, but due to my inexperience with programming it came to my mind.
Cats and Scalaz are used so that we can write Scala in a purely functional style, similar to Haskell. But to achieve this we need to add those libraries as extra dependencies to our projects, and to use them we need to wrap our code in their objects and functions. That means extra code and extra dependencies.
I don't know whether these create larger objects in memory, and that is what makes me wonder. So my question: will I face performance issues, such as higher memory consumption, if I use cats/scalaz?
Or should I avoid them if my application needs performance?
Do cats and scalaz create performance overhead on application?
Absolutely.
The same way any line of code adds performance overhead.
So, if that is your concern, then don't write any code (well, actually the world might be simpler if we had never tried all this).
Snarky answer aside, the proper question you should be asking is: "Is the overhead of library X harmful to my software?" Remember that this applies to any library, indeed to any code you write, to any algorithm you pick, etc.
And, in order to answer that question, we need a few things first.
Define the SLAs that the software you are writing must hold. Without those, any performance question or observation you make is pointless. It doesn't matter if something is faster or slower if you don't know whether that is meaningful for you and your clients.
Once you have SLAs, you need to run stress tests to verify whether the current version of the software satisfies them. Because if your current code is performant enough, then you should worry about other things: maintainability, testing, adding more features, etc.
PS: Remember that those SLAs should not be raw numbers but should be expressed in terms of percentiles; the same goes for the results of the tests.
When you find that you are failing your SLAs, you need to do proper benchmarking and debugging to identify the bottlenecks in your project. As you saw, caring about performance on every line of code is a lot of work that usually doesn't produce any relevant output. Thus, instead of evaluating the performance of everything, we find the bottlenecks first: the small pieces of code that make the biggest contributions to the overall performance of your software (remember the Pareto principle).
Remember that at this step we have to consider the whole system; the network matters too (and you will often find it is the biggest slowdown, which is why you would usually rather look for architectural solutions, like using fibers instead of threads, than try to optimize small functions; also, sometimes the easier and cheaper solution is better infrastructure).
When you find a bottleneck, you need to formulate some alternatives, implement them, and not only benchmark them but perform statistical hypothesis testing to validate whether the proposed changes are worth it. And, of course, validate whether they were enough to satisfy the SLAs.
Thus, as you can see, performance is an art and a lot of work. So, unless you are committed to doing all of this, stop worrying about something you will not measure and optimize properly.
Rather, focus on increasing the maintainability of your code. This actually helps performance too, because when you find that you need to change something, you will be grateful that the code is as clean as possible and that the whole architecture allows for easy change.
And believe me when I say that using tools like cats, cats-effect, fs2, etc. will help in that regard. They are also pretty well optimized at their core, so you should be fine for a lot of use cases.
Now, the big exception is that if you know that the work you are doing will be very CPU and memory bound then yeah, you pretty much can be sure all those abstractions will be harmful. In those cases, you may even want to stay away from the JVM and rather write pretty low-level code in a language like Rust which will provide you with proper tools for that kind of problem and still be way safer than plain old C.

Perl "Out of memory!" when processing a large batch job

A few others and I are now the happy maintainers of a few legacy batch jobs written in Perl. About 30k lines of code, split across maybe 10-15 Perl files.
We have a lot of long-term fixes for improving how the batch process works, but in the short term, we have to keep the lights on for the various other projects that depend on the output of these batch jobs.
At the core of the main part of these batch jobs is a hash that is loaded up with a bunch of data collected from various data files in a bunch of directories. When these were first written, everything fit nicely into memory - no more than 100 MB or so. Things of course grew over the years, and the hash now grows to the limit of what the box can handle (8 GB), leaving us with a nice message from Perl:
Out of memory!
This is, of course, a poor design for a batch job, and we have a clear (long-term) roadmap to improve the process.
I have two questions however:
What kind of short-term options can we look at, short of throwing more memory at the machine? Any OS settings that can be tweaked? Perl runtime/compile flags that can be set?
I'd also like to understand WHY Perl crashes with the "Out of memory!" error, as opposed to using the swap space that is available on the machine.
For reference, this is running on a Sun SPARC M3000 running Solaris 10 with 8 cores, 8 GB RAM, 10 GB swap space.
The reason throwing more memory at the machine is not really an ideal solution is mostly because of the hardware it's running on. Buying more memory for these Sun boxes is crazy expensive compared to the x86 world, and we probably won't be keeping these around much longer than another year.
The long-term solution is of course refactoring a lot of the codebase, and moving to Linux on x86.
There aren't really any generally applicable methods of reducing a program's memory footprint; it takes someone familiar with Perl to scan the code and find something relevant to your specific situation.
You may find that storing your hash as a disk-based database helps. The more general way is to use Tie::Hash::DBD, which will allow you to use any database that DBI supports, but it won't help with hashes whose values are references, such as nested hashes. (As ThisSuitIsBlackNot has commented, DBM::Deep overcomes even this obstacle.)
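For illustration, a minimal DBM::Deep sketch (the file name and keys are made up):

    use strict;
    use warnings;
    use DBM::Deep;

    # the hash lives in a file instead of in memory;
    # nested hashes and arrays are handled transparently
    my $db = DBM::Deep->new("batch_data.db");
    $db->{record}{nested}{list} = [1, 2, 3];
    print $db->{record}{nested}{list}[2], "\n";   # reads back from disk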
I presume your Perl code is crashing at startup? If you have a memory leak then it should be simpler to find the cause. Alternatively, it may be obvious to you that the initial population of the hash is wasteful, in that it is storing data that will never be used. If you show that part of your code, I am sure someone will be able to assist.
Try using a 64-bit version of the interpreter. I had the same issue with the "Out of memory" message. In my case, 32-bit Strawberry Perl ate 2 GB of RAM before terminating. A 64-bit interpreter can use much more: it ate the rest of my 16 GB and then started to swap like hell, but I got a result.

How does logical indexing work?

In some high-level languages like Matlab, you can use "logical indexing" to select a whole set of entries in an array for operating on.
I understand what logical indexing is and how to use it.
Instead, I am asking:
How does it work ("behind the scenes")?
Does it not boil down to just a for-loop?
If so, why is it so much faster than for-looping?
Interpreted languages can be thought of as a variation on assembler running on an emulated core. They have stacks and commands that work in ways like the assembler without actually being the assembler. They are a virtual machine.
A for loop can be thought of as telling the system, set a value, run a sequence of tasks, and when you are done then come back and check on that value. If it is not at a threshold, then change it in a prescribed way, and go repeat those tasks and come back. In assembler you are running screaming fast, but in the "VM" not so much. Consider the demonstration between 13:50 and 15:30 of this link: (link)
This means that what appears to be a for loop, isn't actually a for loop. It is operating system interrupts, and virtualized memory. It is virus-scans in the background and megasloth bloatware.
If you had a virtual system, could you make a shortcut for addressing memory that didn't use the virtualized for loop and was still reasonably efficient? MATLAB tries to major on data processing, so it has to have very efficient ways of storing, sorting, and selecting data within its virtual machine.
MathWorks is not going to make the details of this accessible to the public. If it has a great idea then they don't want it implemented in Python, and R tomorrow. If it has a mediocre idea then they don't want to be beaten in execution by Python and R tomorrow. Either way, making the nuts and bolts of that particular approach accessible to the public without an NDA - it is likely a losing proposition for them.
Bottom lines:
it's not a real "for", even for a for loop, because it's running virtually
they are opening up some of the internals of their data handling to improve usability
they aren't likely to disclose actual code because of negative business consequences
It is worth noting that vectorized code can outperform for loops while doing the same thing. This suggests they are applying more of those internals to the execution of the "sequence of tasks" to get the performance improvement.
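Since MathWorks won't show us the actual code, the semantics are easy to demonstrate in any language; here is a Perl sketch of the idea: the masked selection and the explicit loop compute the same thing, and the real difference is whether the loop runs in user code or inside the runtime's compiled core.

    use strict;
    use warnings;

    my @a = (3, -1, 4, -1, 5, -9);

    # logical indexing, in spirit: build a mask, then select with it
    my @mask     = map { $_ > 0 ? 1 : 0 } @a;
    my @selected = @a[ grep { $mask[$_] } 0 .. $#a ];

    # ... which is equivalent to this explicit loop, just done in one
    # tight pass inside the runtime rather than step by step in user code
    my @loop;
    for my $x (@a) { push @loop, $x if $x > 0 }

    print "@selected\n";   # 3 4 5
    print "@loop\n";       # 3 4 5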

Is it a good idea to warm up a cache in the BEGIN block in Perl?

Is it a good idea to warm up cache in the BEGIN block, when it gets used?
You didn't really provide any information on what kind of environment you're talking about, which I think is important. In most cases the answer is probably "no", but I can think of one case where it's a definite yes: preforking servers -- web applications and the like. In that case, any work that you can do "before the fork" not only saves the cost of having the children recompute the same values individually, it also saves memory, since the pages containing the results can be shared across all of the child processes by the OS's COW mechanism.
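A bare-bones sketch of that "before the fork" pattern (the cache contents here are a stand-in):

    use strict;
    use warnings;

    # build the expensive read-only cache once, in the parent ...
    my %cache = (big => "data");   # stand-in for the real load

    # ... then fork: the children share the parent's pages copy-on-write
    for my $n (1 .. 4) {
        defined(my $pid = fork) or die "fork failed: $!";
        next if $pid;                            # parent keeps forking
        print "child $$ sees $cache{big}\n";     # no extra copy until written to
        exit 0;
    }
    1 while waitpid(-1, 0) > 0;   # reap all children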
If you're talking about a module you're writing and not an application, then I'd say no, don't lift things to compilation time without the user's permission unless they're things that have to be done for the module to work. Instead, provide a preheat_cache class method, and if your caller has a reason to need a hot cache at compile time they can put the call into a BEGIN block themselves. You could also use a :preheat_cache import tag but that's unnecessarily fancy in my book.
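A sketch of that interface (all names here are illustrative, not an existing module):

    package My::Module;
    use strict;
    use warnings;

    my %cache;
    sub preheat_cache {
        my ($class) = @_;
        %cache = (answer => 42);   # stand-in for the expensive build
        return;
    }

    package main;

    # the caller opts in to a hot cache at compile time:
    BEGIN { My::Module->preheat_cache }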
If it's a choice between preloading your cache at compile time, or preloading your cache as the first thing you do at run time, there's virtually no difference.
If your cache is large enough that loading it will trigger a lot of page swaps, that's an argument for waiting until run time. That way, all your module loading and other compile time code can be done while your system is under a lighter load.
I'm going to go with "no", even though I could be wrong. My reasoning goes like this: keep the code, and the data it uses, small, so that it takes up less space in any caches (I am presuming you mean the CPU cache, not programmatic hashes with common query results or some such thing).
Unless you see some sort of bad access pattern, trying to second-guess what needs to be prefetched is probably useless at best. In fact, such code or initialization data is likely to displace something you (or another process on the system) were actually using. Think about what you can do in the actual work part of the code to maximize locality of reference, to try to stay within smaller memory regions at any one time.
I used to use "top" to detect when processes were swapping between memory and disk. I don't know of any good tools yet to tell how often a process is getting cache misses and going to plain old slow mo'board memory. There must be such tools, I just don't know what they are yet (software tools, rather than some custom In Circuit Emulator type hardware). Perhaps some thought on this earlier in the day...
By "warm up" I assume you mean using a BEGIN block to guarantee the cache is preloaded before anything else in your script executes?
If you need the cache for your program to run properly, then yes, I think it would be a good idea.

Garbage-collectors for multi-core llvm?

I've been looking at LLVM for quite some time as a new back-end for the language I'm currently implementing. It seems to have good performance, rather high-level generation APIs, and enough low-level support to implement exotic optimizations. In addition, and although I haven't checked it myself, Apple seems to have successfully demonstrated the use of LLVM for garbage-collected multi-core programs.
So far, so good. As I'm interested in both garbage collection and multi-core, the next step would be to choose a multi-core-capable garbage collector for LLVM. Which brings me to the question: what is available? I'm aware of Jon Harrop's HLVM work, but that's about it.
Note that I need cross-platform, so Apple's GC is probably not what I'm looking for (unless there's a cross-platform version). Also note that I have nothing against stop-the-world garbage-collectors.
Thanks in advance,
Yoric
LLVM docs say that it does not support multi-threaded collectors yet.
As the matrix indicates, LLVM's garbage collection infrastructure is already suitable for a wide variety of collectors, but does not currently extend to multithreaded programs. This will be added in the future as there is interest.
The docs do say that to do multi-threaded garbage collection you need to stop the world and that this is a non-portable thing:
Threaded: Denotes a multithreaded mutator; the collector must still stop the mutator ("stop the world") before beginning reachability analysis. Stopping a multithreaded mutator is a complicated problem. It generally requires highly platform specific code in the runtime, and the production of carefully designed machine code at safe points.
However, shared state between threads is a nasty scaling issue. If your language communicates solely through message passing between 'tasks', so that there is no shared state between worker threads, then you could use a per-thread collector for each per-thread heap.
The quotes that Will gave are about LLVM's intrinsic support for GC, where you augment LLVM with C++ code telling it how to walk the stack, interpret stack frames, inject read and write barriers and so on. The primary goal of my HLVM project is to become useful with minimal effort and risk so I chose to use the shadow stack for an "uncooperative environment" in order to avoid hacking on immature internals of LLVM. Consequently, those statements about LLVM's intrinsic support for GC do not apply to HLVM's garbage collector because it does not use that infrastructure at all. My results are extremely compelling: you can achieve excellent performance with minimal effort (serial performance and parallel performance).
I believe HLVM already runs out of the box across Unixes, including Mac OS X, because it requires only POSIX threads. I strongly disagree with the claim that writing a stop-the-world GC is difficult: it took me 5 days to write a 100-line multicore garbage collector, and I barely know anything about computers. I cannot believe it would be difficult to port to Windows either.