Understanding JVM heap size (when using Eclipse)

There are a lot of questions about "how to increase heap size" here but I'd like to understand how these settings actually influence memory consumption of a Java app (Eclipse IDE in my case but I guess that doesn't matter).
My JVM starts up with these parameters:
-Xms512m
-Xmx1024m
so I would expect that when something memory-demanding executes, the heap size will go up to 1024 MB. However, I only see Eclipse allocating around 700-800 MB at most.
(If I understand the heap status bar correctly, the yellow part is the current allocation, the whole bar is the maximum allocation, and there is some "marker" whose meaning I don't know.)
When I start a compilation of a large project, I periodically see the yellow bar rising to about 90% of the whole bar and then dropping back to about 200-300 MB. That doesn't use the maximum allowed 1024 MB, does it?
It'd be great if someone could explain this behavior to me and possibly how to change it.
BTW, my Eclipse is the 64-bit version with a 64-bit JVM, if that matters (a 1024 MB limit should be fine for 32-bit too, though).

-Xmx is a hard limit. If the GC is able to clean up objects and lower memory consumption before that limit is hit, the maximum is never reached.
If you have some reason to require 1 GB of memory from the start, just set -Xms and -Xmx to the same size.
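For example, to make Eclipse grab the full heap right at start-up, you could give both flags the same value in eclipse.ini; the JVM flags must come after the -vmargs line, and the exact values here are only an illustration:

-vmargs
-Xms1024m
-Xmx1024m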

Related

Heap size usage when having two processes in a container

We have two processes (p1 and p2) in a JVM container (in Docker) managed by Kubernetes.
The resource limit (in the Helm chart) for the container is set to 1000 MiB.
We set -XX:MaxRAMPercentage to 50% (= 500 MiB). What will the heap distribution for each process look like?
Will p1 and p2 split it equally, so that each has 250 MiB that cannot be exceeded?
Or will they share the whole heap of 500 MiB that cannot be exceeded?
The heap memory is just a part of the memory consumed by the JVM - there is also stack and native memory; the runtime, the JIT and the garbage collector also need memory. So, a typical Java application run with -Xmx500m will need approximately 700-1000MB of RAM (when using the full heap). The full memory usage heavily depends on what your application is doing and how it allocates and deallocates memory - some Java apps with 1GB of heap can use 20GB of RAM.
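If you want to see this split for yourself, native memory tracking (available in JDK 8 and later) reports heap versus the other JVM areas; the flag and the jcmd subcommand below are standard, but the application jar name and PID are placeholders:

java -XX:NativeMemoryTracking=summary -Xmx500m -jar app.jar
jcmd <pid> VM.native_memory summary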
Back to your question: when you limit the container to 1000 MiB and run two same-sized, pretty standard Java web applications, I would size the JVMs with -Xmx300m (or, if you really want to use relative values, -XX:MaxRAMPercentage=30.0).
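As a sketch (the 1000 MiB limit comes from the question; the jar names are placeholders), each JVM inside the container gets its own heap cap, either absolute or relative:

java -Xmx300m -jar p1.jar
java -Xmx300m -jar p2.jar
(or, relative to the container limit, on JDK 10+ / 8u191+: java -XX:MaxRAMPercentage=30.0 -jar p1.jar)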
For more information: this answer gives a good overview of Java memory.

BsonChunkPool and memory leak

I use MongoDB 3.6 + the .NET driver (MongoDB.Driver 2.10) to manage our data. Recently, we've noticed that our (background) services consume a lot of memory. After analyzing a dump, it turned out that there's a Mongo object called BsonChunkPool that always consumes around 0.5 GB of memory. Is this normal? I can't really find any useful documentation about this type and what it actually does. Can anyone help?
The BsonChunkPool exists so that large memory buffers (chunks) can be reused, thus easing the amount of work the garbage collector has to do.
Initially the pool is empty, but as buffers are returned to the pool the pool is no longer empty. Whatever memory is held in the pool will not be garbage collected. This is by design. That memory is intended to be reused. This is not a memory leak.
The default configuration of the BsonChunkPool is such that it can hold a maximum of 8192 chunks of 64 KB each, so if the pool were to grow to its maximum size it would use 512 MB of memory (which is consistent with the roughly 0.5 GB you are observing).
If for some reason you don't want the BsonChunkPool to use that much memory, you can configure it differently by putting the following statement at the beginning of your application:
BsonChunkPool.Default = new BsonChunkPool(16, 64 * 1024); // e.g. max 16 chunks of 64KB each, for a total of 1MB
We haven't experimented with different values for chunk counts and sizes so if you do decide to change the default BsonChunkPool configuration you should do some benchmarking and verify that it doesn't have an adverse impact on your performance.
From jira: BsonChunkPool and memory leak

Common heap behavior for Wildfly or application memory leak?

We're running our application in Wildfly 14.0.1, with a -Xmx of 4096, running with OpenJDK 11.0.2. I've been using VisualVM 1.4.2 to monitor our heap since we previously were having OOM exceptions (because our -Xmx was only 512 which was incredibly bad).
While we are well within our memory allocation now (no more OOM exceptions, and even with a good number of clients and a lot of processing we're nowhere near the -Xmx4096; the servers have 16 GB, so memory isn't an issue), I'm seeing some strange heap behavior whose origin I can't figure out.
Using VisualVM, Eclipse MemoryAnalyzer, as well as heaphero.io, I get summaries like the following:
Total Bytes: 460,447,623
Total Classes: 35,708
Total Instances: 2,660,155
Classloaders: 1,087
GC Roots: 4,200
Number of Objects Pending for Finalization: 0
However, watching the Heap Monitor, I see the used heap increase by about 450 MB over a 4-minute period before the GC runs and drops it back down, only to spike again.
This is when no clients are connected and nothing is actively happening in our application. We do use Apache File IO to monitor remote directories, we have JMS topics, etc. so it's not like the application is completely idle, but there's zero logging and all that.
My largest objects are the well-known io.netty.buffer.PoolChunk instances, which account for about 60% of my memory usage in the heap dumps. The total is still around 460 MB, so I'm confused why the heap monitor goes from ~425 MB to ~900 MB repeatedly; no matter when I take my snapshots, I can't see any large increase in object counts or memory usage.
I'm just seeing a disconnect between the heap monitor and the .hprof analysis, so there doesn't seem to be a way to tell what's causing the heap to hit that 900 MB peak.
My question is whether these heap spikes are expected when running within Wildfly, or whether something in our application is spinning up a bunch of objects that then get GC'd. In a component report, objects in our application's package structure make up an extremely small portion of the dump. That doesn't clear us, though; we could easily be calling things without closing them appropriately, etc.
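For context on how such dumps relate to the monitor: a dump of the running server is often captured with a command like the one below, where the PID and file name are placeholders. The live option forces a full GC before the dump is written, so a dump taken that way shows post-collection usage rather than the momentary peak visible in the heap monitor.

jmap -dump:live,format=b,file=wildfly-heap.hprof <pid>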

Websphere Out of Memory Issue

I tried to update server.xml and deleted dumps and temporary cache files from C:\Users\username\AppData\Local\javasharedresources,
but I am still not able to start the server.
This is the error message I get:
JVMDUMP010I Java dump written to C:\WAS8\profiles\AppSrv02\bin\javacore.20150210.094417.6468.0009.txt
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
There could be multiple causes for the OutOfMemoryError exceptions. It could be that there is a memory leak in one of the applications that is loaded on startup, or the maximum heap size is not set high enough to support all of the components loaded on startup.
It is best to go through a troubleshooting exercise. I suggest you download a heap analyzer tool and analyze the javacore file to see where the potential leak, if any, could be.
If you can't find a memory leak, try increasing the JVM maximum heap size. Check that your host system has enough RAM to support the chosen maximum JVM heap size.
Earlier I hadn't updated server.xml with the correct arguments; once I updated it with genericJvmArguments="-Xms1024M -Xmx2048M" and initialHeapSize="1024" maximumHeapSize="2048", I was able to start the server.
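In a traditional WebSphere profile those values end up on the jvmEntries element of the server's server.xml; the exact surrounding attributes vary by version, so this is only a sketch of where the settings from above would go:

<jvmEntries ... initialHeapSize="1024" maximumHeapSize="2048"
    genericJvmArguments="-Xms1024M -Xmx2048M" ... />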

GC is running at the same time every day

On my server, a GC runs every day at 3:00 AM and the heap space fills up in a flash.
This is causing a site outage. Any input?
The following are my JVM settings. I am using a JBoss server.
-Dprogram.name=run.sh -server -Xms1524m -Xmx1524m -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -XX:NewSize=512m -XX:MaxNewSize=512m -Djava.net.preferIPv4Stack=true -XX:MaxPermSize=512m -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -Djavax.net.ssl.trustStorePassword=changeit -Dcom.sun.management.jmxremote.port=8888 -Djava.rmi.server.hostname=192.168.100.140 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote
Any suggestions would be really helpful.
(This turned out somewhat long; there is an actual suggestion for a fix at the end.)
Very very briefly, garbage collection when you use -XX:+UseConcMarkSweepGC works like this:
All objects are allocated in the so-called young generation. This is typically a couple of hundred megs up to a gig in size, depending on VM settings, the number of CPUs, and the total heap size. The young generation is collected in a stop-the-world pause, by a parallel (multiple-CPU), compacting (object-moving) collection. The young generation is sized so as to keep this pause reasonably short.
When objects have survived (are still reachable after) young-gen collections, they get promoted to "old-gen" (the old generation).
The old generation is where -XX:+UseConcMarkSweepGC kicks in. In the default mode (without -XX:+UseConcMarkSweepGC), when the old generation becomes full, the entire heap is collected and compacted (objects moved around, fragmentation eliminated) at once in a stop-the-world pause. That pause will typically be longer than young-gen pauses because the entire heap is involved, which is bigger.
With CMS (-XX:+UseConcMarkSweepGC) the work to compact the old generation is mostly concurrent (meaning, running in the background with the application not paused). This work is also not compacting; it works more like malloc()/free() and you are subject to fragmentation.
The main upside of CMS is that, when things work well, you avoid long pause times that are linear in the size of the heap, because the main work is done concurrently (there are some stop-the-world steps involved, but they are supposed to usually be short).
The two primary downsides are that:
You are subject to fragmentation because old-gen is not compacted.
If you don't finish a concurrent collection cycle before old-gen fills up, or if fragmentation prevents allocation, the resulting full collection of the entire heap is not parallel as it is with the default collector; i.e., only one CPU is used. That means that when/if you do hit a full garbage collection, the pause will be longer than it would have been with the default collector.
Now... your logs. "Concurrent mode failure" is intended to convey that the concurrent mark/sweep work did not complete in time for a young-gen GC that needed to promote surviving objects into the old generation. "Promotion failed" rather means that, during promotion from young-gen to old-gen, an object could not be allocated in old-gen due to fragmentation.
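If you want to see exactly which of these two events you are hitting and when, the classic (pre-Java 9) HotSpot GC-logging flags below will record them with timestamps in a log file; the file path is just an example:

-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-Xloggc:/path/to/gc.log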
Unless you are hitting a true bug in the JVM, the sudden increase in heap usage is almost certainly from your application, JBoss, or some external entity acting on your application. So I can't really help with that. However, what is likely happening is a combination of two things:
The spike in activity is causing an increase in heap usage too quick for the concurrent collection to complete in time.
Old-gen is too fragmented, causing problems especially when the old-gen is almost full.
I should also point out that the default behavior of CMS is to try to postpone concurrent collections as long as possible (yet not too long) for performance reasons. The later a collection happens, the more efficient (in terms of CPU usage) it is. However, the trade-off is that you're increasing the risk of not finishing in time (which, again, will trigger a full GC and a long pause). It should also (I have not made empirical tests here, but it stands to reason) make fragmentation a greater concern; basically, the fuller old-gen is when an object is promoted, the greater the likelihood that the object's promotion will worsen fragmentation (too long to go into the details here).
In your case, I would do two things:
Keep figuring out what is causing the activity. I would say it's fairly unlikely that it is a GC/JVM bug.
Re-configure the JVM to trigger concurrent collection cycles earlier, in order to avoid the heap ever becoming so full that fragmentation becomes a particularly big concern, and to give the collector more time to finish even during your sudden spikes of activity.
You can accomplish (2) most easily by using the JVM options
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
in order to explicitly force the JVM to kick start a CMS cycle at a certain level of heap usage (in this example 75% - you may need to change that; the lower the percentage, the earlier it will kick in).
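Applied to the settings from the question, the GC-related part of the JBoss start-up line would then look something like this (only the last two flags are new; 75 is just a starting point to tune from):

-Xms1524m -Xmx1524m -XX:NewSize=512m -XX:MaxNewSize=512m -XX:MaxPermSize=512m -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly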
Note that depending on what your live size is (the number of bytes that are in fact live and reachable) in your application, forcing an earlier CMS cycle may also require that you increase your heap size to avoid CMS running constantly (not a good use of CPU).