Observable takes infinite amount of memory? - scala

Since each Observable has a cache that can be traced back to the very first emitted value, it seems the amount of memory used to store this cache is unbounded.
I've tested this assumption with the following code:
Observable.interval(1.microsecond).map(_ => System.currentTimeMillis)
  .subscribe(x => ())
And indeed, memory usage rose steadily over the whole 10-minute period the app was running.
My question is whether it's possible to instantiate a special Observable without a cache, or maybe instruct it to cap its cache at some level?

Only a specific set of Observables (ReplaySubject, replay() and GroupedObservable, for example) caches items; Observable.interval() does not.
What you are likely experiencing here is the hundreds of thousands of boxed Long values. If you have a lot of RAM, GC might not kick in immediately but simply grow the heap up to its maximum. Assuming you can really get a 1-microsecond timer, that is about 1,000,000 boxed Longs per second at roughly 24 bytes each, i.e. roughly a 24 MB/s allocation rate or 1.4 GB per minute. Left alone for 10 minutes, you'd likely see a sawtooth-like shape in memory usage.
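If you want to see that sawtooth directly, here is a minimal sketch (assuming RxScala, which the snippet in the question appears to use; the object name is just illustrative) that samples used heap once per second while the interval stream runs:

import rx.lang.scala.Observable
import scala.concurrent.duration._

object HeapSawtooth extends App {
  // Same pipeline as in the question: boxes one Long per tick.
  Observable.interval(1.microsecond).map(_ => System.currentTimeMillis)
    .subscribe(x => ())

  // Sample used heap once per second; it should climb until GC runs, then drop.
  val rt = Runtime.getRuntime
  while (true) {
    val usedMb = (rt.totalMemory - rt.freeMemory) / (1024 * 1024)
    println(s"used heap: $usedMb MB")
    Thread.sleep(1000)
  }
}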

Related

About how many fibers can I create in ZIO on one machine?

I realize the exact number depends on a whole lot of things, so I’m really looking for an order of magnitude on say, a MacBook Pro.
Is it 100s of 1000s? Millions? More?
For example, I've calculated that I can run about 1M goroutines on this machine, and I'm trying to get a sense of whether ZIO fibers would be about the same, or more…
The primary resource consumption from a fiber is going to be the heap memory it consumes, plus (arguably) the memory consumed by the closure capturing its state. JVMs (and even different GC algorithms within a JVM) differ in how many bytes a given object takes up in memory, and this can even depend on runtime settings (e.g. if the heap is 32 GiB or smaller, object references can be encoded in 32 bits, while a larger heap requires more space for each object reference), so there is no single exact figure.
On "typical" JVMs, the memory overhead of a fiber is in the low hundreds of bytes. This is also approximately the overhead of an Akka actor, which, like a goroutine, a ZIO fiber, a Cats Effect fiber, or a Scala Future, can be considered a means of modeling a process more efficiently than a thread (setting aside the substantial philosophical differences between the respective models). It's well established that substantially more than a million actors can be created per GiB of heap, so it's reasonable to expect that multiple millions of fibers can be created per GiB of heap.
It should be noted that no more fibers can be consuming CPU at any point in time than you have cores/threads, so if you have far more fibers/goroutines/actors ready to consume CPU than cores, you may well see a substantial latency effect from fibers waiting to be scheduled (so-called "thread starvation").
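As a sanity check on that order-of-magnitude estimate, here is a rough sketch (assuming ZIO 2.x; the object name is illustrative) that forks a large number of do-nothing fibers and compares used heap before and after. The exact figure depends heavily on the JVM, GC and heap settings, so treat the output as indicative only.

import zio._

object FiberFootprint extends ZIOAppDefault {

  private val usedHeap: UIO[Long] = ZIO.succeed {
    val rt = java.lang.Runtime.getRuntime
    rt.totalMemory - rt.freeMemory
  }

  def run =
    for {
      _      <- ZIO.succeed(System.gc())   // best-effort baseline before measuring
      before <- usedHeap
      n       = 1000000
      _      <- ZIO.foreachDiscard(1 to n)(_ => ZIO.never.forkDaemon)   // suspended, do-nothing fibers
      after  <- usedHeap
      _      <- Console.printLine(s"~${(after - before) / n} bytes per fiber (very rough)")
    } yield ()
}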

Postgresql auto-vacuuming taking too long

I have a db table which has around 5-6 million entries, and it takes around 20 minutes to vacuum. Since one field of this table is updated very frequently, there are a lot of dead rows to deal with.
As an estimate, with our current user base it can accumulate 2 million dead tuples on a daily basis. So, vacuuming this table requires both:
Read IO: as the whole table is not present in shared memory.
Write IO: as there are a lot of entries to update.
What is the ideal way to vacuum this table? Should I increase autovacuum_vacuum_cost_limit to allow more operations per autovacuum run? But as far as I can see, that will increase IOPS, which again might hinder performance. Currently, I have autovacuum_vacuum_scale_factor = 0.2. Should I decrease it? If I decrease it, vacuum will run more often; write IO per run will decrease, but it will lead to more periods of high read IO.
Also, as the user base increases, vacuuming will take more and more time, since the table will grow and vacuum will have to read a lot from disk. So, what should I do?
Some of the solutions I have thought of:
Separate the highly updated column and make a separate table.
Tweaking the parameters to make it run more often and decrease write IO (as discussed above). But how do I handle the extra read IO, since vacuum will now run more often?
Combine point 2 with increasing RAM, to reduce read IO as well.
In general, what approach do people take? I assume people must have very big tables (10 GB or more) that need to be vacuumed.
Separating the column is a viable strategy, but it would be a last resort for me. PostgreSQL already has a high per-row overhead, and doing this would double it (which might also remove most of the benefit). Plus, it would make your queries uglier, harder to read, harder to maintain, and easier to introduce bugs into. Splitting would be most attractive if index-only scans on a set of columns not including this one are important to you, and splitting it out lets you keep the visibility map for those remaining columns in a better state.
Why do you care that it takes 20 minutes? Is that causing something bad to happen? At that rate, you could vacuum this table 72 times a day, which seems to be way more often than it actually needs to be vacuumed. In v12, the default value for autovacuum_vacuum_cost_delay was dropped 10-fold, to 2 ms. This change in default was not driven by changes in the code in v12, but rather by the realization that the old default was simply out of date with modern hardware in most cases. I would have no trouble pushing that change into a v11 config; but I don't think doing so would address your main concern either.
Do you actually have a problem with the amount of IO you are generating, or is it just conjecture? The IO done is mostly sequential, but how important that is would depend on your storage hardware. Do you see latency spikes while the vacuum is happening? Are you charged per IO and your bill is too high? High IO is not inherently a problem, it is only a problem if it causes a problem.
Currently, I have autovacuum_vacuum_scale_factor = 0.2. Should I decrease it? If I decrease it, vacuum will run more often; write IO per run will decrease, but it will lead to more periods of high read IO.
Running more often probably won't decrease your write IO by much, if at all. Every table/index page with at least one obsolete tuple needs to get written during every vacuum. Writing one page just to remove one obsolete tuple causes more writing than waiting until there are a lot of obsolete tuples that can all be removed by one write. You might write somewhat less per vacuum, but doing more vacuums will make up for that, and probably far more than make up for it.
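For a rough illustration (assuming a typical figure of around 100 tuples per 8 kB heap page): vacuuming as soon as about 1% of tuples are dead rewrites a page to reclaim roughly one tuple, while waiting until about 20% are dead reclaims around 20 tuples per page written, i.e. far less write amplification per dead tuple removed.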
There are two approaches:
Reduce autovacuum_vacuum_cost_delay for that table so that autovacuum becomes faster. It will still consume I/O, CPU and RAM.
Set the fillfactor for the table to a value less than 100 and make sure that the column you update frequently is not indexed. Then you can get HOT updates, which don't require VACUUM (see the sketch below).
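A minimal sketch of applying both settings as per-table storage parameters, via plain JDBC from Scala. The connection URL, credentials and the table name events are placeholders, and the values shown are examples rather than recommendations tuned for your workload; it also assumes the PostgreSQL JDBC driver is on the classpath.

import java.sql.DriverManager

object TuneAutovacuum extends App {
  val conn = DriverManager.getConnection(
    "jdbc:postgresql://localhost:5432/mydb", "user", "password")
  try {
    val st = conn.createStatement()
    // Approach 1: a faster, per-table autovacuum (delay between cost-limited pauses, in ms).
    st.execute("ALTER TABLE events SET (autovacuum_vacuum_cost_delay = 2)")
    // Approach 2: leave free space in each page so frequent updates can be HOT updates.
    // Note: fillfactor only affects newly written pages; existing pages keep their
    // layout until the table is rewritten.
    st.execute("ALTER TABLE events SET (fillfactor = 70)")
  } finally conn.close()
}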

Scala concurrency performance issues

I have a data mining app.
There is one Mining Actor which receives and processes a JSON payload containing 1000 objects. I put these into a list and, for each element, I log the data by sending it to one Logger Actor, which writes the data to many files.
Processing the list sequentially, my app uses 700 MB and takes ~15 seconds at ~20% CPU to process (4-core CPU). When I parallelize the list, my app uses 2 GB and takes about the same amount of time and CPU to process.
My questions are:
Since I parallelized the list and thus the computation, shouldn't the compute-time decrease?
I think having only one Logger Actor is a bottleneck in this case. The computation may be faster but the bottleneck hides the speed increase. So if I add more Loggers to the pool, the app time should decrease?
Why does the memory usage jump to 2GB? Does the JVM have to store the entire collection in memory to parallelize it? And after the computation is done, the JVM garbage collector should deal with it?
Without more details, any answer is a guess. However, even a guess might point you to the right direction.
Parallelized execution should decrease the running time but your problem might lie elsewhere. For some reason, your CPU is idling a lot even in the single-threaded mode. You do not specify whether you read the input from disk or the network or where you write your output to. You explicitly say that you write logs to a lot of files. Disk and network reading/writing might in your case take much longer than data processing. Most probably your process is idle due to this I/O waiting. You should not expect any speedups from parallelizing a job that spends 80% of its time waiting on I/O. I therefore also suspect that loggers are not the bottleneck here.
The memory usage might jump if each of your threads allocates a lot of memory. In that case, the more threads you have, the more memory will be required. I don't know what kind of collection you are parallelizing on, but most are stored completely in memory. And yes, the garbage collector will free anything that does not require explicit cleanup on your part; resources such as open files you still have to release yourself.
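One way to check whether the work is actually CPU-bound is to time the same processing sequentially and in parallel. Below is a quick sketch (assuming Scala 2.13 with the scala-parallel-collections module; process and items are stand-ins for your own code). If the parallel run isn't noticeably faster, the time is most likely spent waiting on I/O rather than computing.

import scala.collection.parallel.CollectionConverters._

object SeqVsPar extends App {
  def process(i: Int): Int = i * i              // stand-in for the real per-object work
  val items = (1 to 1000).toList

  // Time an arbitrary block and print the elapsed milliseconds.
  def time[A](label: String)(body: => A): A = {
    val t0 = System.nanoTime()
    val result = body
    println(f"$label: ${(System.nanoTime() - t0) / 1e6}%.1f ms")
    result
  }

  time("sequential")(items.map(process))        // baseline
  time("parallel")(items.par.map(process))      // uses the default fork-join pool
}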
How many threads for reading and writing to the hard disk?
The memory increases because I send messages faster than the Logger can write, so the Mailbox balloons in size until the Logger has processed the messages and the GC kicks in.
I solved this by writing state to a protocol buffer file. Before doing any writes, I compare with the protobuf file because reads are significantly cheaper than writes. My resource usage is now 10% for 2 seconds, and less than 400MB RAM.

Spark: MEMORY_ONLY_SER_2 - strange memory behavior

I am storing an RDD using storageLevel = MEMORY_ONLY_SER_2 so that, if one executor is lost, there is another copy of the data.
Then I found something strange:
The Size in Memory of the 2x-replicated RDD seems to be the same as when I used storageLevel = MEMORY_ONLY_SER (1x-replicated).
Fraction Cached couldn't reach 100% even though I still have a lot of storage memory left.
Am I understanding storageLevel = MEMORY_ONLY_SER_2 correctly? Why doesn't the 2x-replicated RDD have twice the Size in Memory compared with the 1x-replicated one? Thanks!
I guess that maybe all of your memory available for caching is already used, so it doesn't matter how many replicas you requested.
I do not know how much memory is allocated for each executor; if you allocated a lot, you can increase the value of spark.storage.memoryFraction (the default value is 0.6).
If you just want to verify whether MEMORY_ONLY_SER_2 costs twice as much as MEMORY_ONLY_SER, you can use a small dataset.
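For example, a small sketch of that experiment, meant for spark-shell where a SparkContext named sc is in scope. Note that replication needs at least two executors; in local mode the second replica has nowhere to live, so the sizes may look identical for that reason alone.

import org.apache.spark.storage.StorageLevel

val data = sc.parallelize(1 to 1000000)

// Two separate RDDs, since a single RDD can only be persisted at one storage level.
val once  = data.map(identity).setName("ser_1x").persist(StorageLevel.MEMORY_ONLY_SER)
val twice = data.map(identity).setName("ser_2x").persist(StorageLevel.MEMORY_ONLY_SER_2)
once.count(); twice.count()                     // materialize both caches

// Compare the reported in-memory sizes (also visible on the Storage tab of the UI).
sc.getRDDStorageInfo.foreach { info =>
  println(s"${info.name}: memSize=${info.memSize} B, " +
    s"cached ${info.numCachedPartitions}/${info.numPartitions} partitions")
}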

Most efficient memory type for kdb+

I am currently configuring a server that will run a kdb+ tickerplant with several subscription processes. Is there an optimal physical memory type for realtime kdb data?
Check out the type sizes at http://code.kx.com/q/ref/card/#datatypes
The answer depends on what you mean by "efficient" - by far the largest latency hit you take is memory allocation, so the less you have to allocate the better. That means smaller types.
But of course you have to weigh that up against your use cases.
For your realtime database, always make sure the tickerplant inserts the time column so that the sorted attribute (s#) is maintained on the time column for efficient querying.
The tickerplant itself publishes on a timer - the longer the timer the less hit on cpu, but then the tp is collecting data for a while before publishing. Again, weigh up against use cases. BTW make sure your tickerplant is writing the log file to a fast local disk so as to decrease pub delay and iowait.
If you're operating under high load from multiple sources, consider OS tweaks too, like TCP quickack (http://www.techrepublic.com/article/take-advantage-of-tcp-ip-options-to-optimize-data-transmission/). There are similar tweaks for memory allocation and disk I/O.