Average TLAB size greater than Average Object size - JBoss

I have a question about TLABs and objects.
From what I've studied about TLABs in the JVM, objects should preferentially be allocated inside TLABs when they're created, going into the young generation, right?
However, as you can see in image 1, the 'Average TLAB Size' is 20.13 kB for my application and the 'Average Object Size' (outside TLABs) is 11.25 kB.
As far as I know, objects should be allocated inside a TLAB unless they're larger than the TLAB. I'm afraid we're suffering a lot of heap fragmentation, since 'Total Memory Allocated for TLABs' is 52.90 MB while 'Total Memory Allocated for Objects' is only 1.27 MB. That looks like a waste of 51.63 MB for only 5 minutes of running Java Mission Control.
Is something very wrong with my application, the JVM or JBoss? Or is this a misunderstanding on my part?
Thanks in advance.
TLABs greater than Objects
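There is no answer recorded here, but a small sketch may help illustrate the two allocation paths the question describes. The sizes below are purely illustrative assumptions, not measurements from this application: a TLAB is meant to be shared by many small objects, while an object larger than the TLAB is allocated directly in eden, outside any TLAB, which is what Mission Control reports separately.

public class TlabAllocationDemo {
    public static void main(String[] args) {
        byte[][] small = new byte[10_000][];
        for (int i = 0; i < small.length; i++) {
            // A few dozen bytes each: these fit easily inside a ~20 kB TLAB,
            // so they use the fast bump-the-pointer path in the thread's TLAB.
            small[i] = new byte[64];
        }
        // ~1 MB: larger than the TLAB itself, so HotSpot allocates it in eden
        // outside any TLAB (reported as allocation outside TLABs in JMC).
        byte[] large = new byte[1024 * 1024];
        System.out.println(small.length + " small allocations, 1 large of " + large.length + " bytes");
    }
}

Running something like this with -Xlog:gc+tlab=trace (JDK 9+) or -XX:+PrintTLAB (JDK 8) prints per-thread TLAB statistics that can be compared against the Mission Control numbers.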

Related

BsonChunkPool and memory leak

I use MongoDB 3.6 + the .NET driver (MongoDB.Driver 2.10) to manage our data. Recently, we've noticed that our (background) services consume a lot of memory. After analyzing a dump, it turned out that there's a Mongo object called BsonChunkPool that always consumes around 0.5 GB of memory. Is this normal? I cannot really find any valuable documentation about this type and what it actually does. Can anyone help?
The BsonChunkPool exists so that large memory buffers (chunks) can be reused, reducing the amount of work the garbage collector has to do.
Initially the pool is empty, but as buffers are returned to the pool the pool is no longer empty. Whatever memory is held in the pool will not be garbage collected. This is by design. That memory is intended to be reused. This is not a memory leak.
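To illustrate the general pattern in isolation (a rough sketch in Java for readability, not the driver's actual C# implementation): chunks returned to such a pool stay strongly referenced by the pool, so the garbage collector keeps them alive on purpose, and a memory profiler will show them as retained memory even though nothing is leaking.

import java.util.concurrent.ArrayBlockingQueue;

// Minimal, generic chunk pool: memory held here is reuse by design, not a leak.
class ChunkPool {
    private final ArrayBlockingQueue<byte[]> pool;
    private final int chunkSize;

    ChunkPool(int maxChunks, int chunkSize) {
        this.pool = new ArrayBlockingQueue<>(maxChunks);
        this.chunkSize = chunkSize;
    }

    byte[] acquire() {
        byte[] chunk = pool.poll();          // reuse a pooled chunk if one is available
        return chunk != null ? chunk : new byte[chunkSize];
    }

    void release(byte[] chunk) {
        pool.offer(chunk);                   // keep up to maxChunks chunks for later reuse
    }
}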
The default configuration of the BsonChunkPool is such that it can hold a maximum of 8192 chunks of 64 KB each, so if the pool were to grow to its maximum size it would use 512 MB of memory (which is in line with the roughly 0.5 GB you are observing).
If for some reason you don't want the BsonChunkPool to use that much memory, you can configure it differently by putting the following statement at the beginning of your application:
BsonChunkPool.Default = new BsonChunkPool(16, 64 * 1024); // e.g. max 16 chunks of 64KB each, for a total of 1MB
We haven't experimented with different values for chunk counts and sizes, so if you do decide to change the default BsonChunkPool configuration, you should do some benchmarking and verify that it doesn't have an adverse impact on your performance.
From jira: BsonChunkPool and memory leak

Why does windbg> !EEHeap -gc show a much smaller managed heap than VMMAP.exe?

I have a C# application whose memory usage increases over time. I've taken periodic user-mode dumps and, after loading SOS, run !EEHeap -gc to monitor the managed heap size. In windbg/sos I've seen it start at ~14 MB, grow up to 160 MB, then shrink back to 15 MB, but the application's "Private Bytes" never decreases significantly. I have identified the activity that causes the increase in "Private Bytes", so I can control when the memory growth occurs.
I tried running Vmmap.exe and noticed it reports a managed heap of ~360 MB; I took a quick dump, and using windbg/sos !eeheap -gc I only see 15 MB.
Why am I seeing such different values?
Is the managed heap really what vmmap.exe reports?
How can I examine this area of the managed heap in windbg?
You can't break into a .NET application with WinDbg and then run VMMap at the same time. This will result in a hanging VMMap. Nor can you do it in the opposite direction: start VMMap first, then break in with WinDbg and then refresh the values in VMMap.
Therefore the values shown by VMMap are probably never equal to WinDbg's, because the numbers are from different points in time. Different points in time could also mean that the garbage collector has run. If the application is not changing much, the values should be close.
In my tests, the committed part of the managed heap in VMMap is the sum of !eeheap -gc and !eeheap -loader, which sounds reasonable.
Given the output of !eeheap -gc, we get the start of the GC heap at generation 2 (11aa0000) and a size of only 3.6 MB.
Number of GC Heaps: 1
generation 0 starts at 0x0000000011d110f8
generation 1 starts at 0x0000000011cd1130
generation 2 starts at 0x0000000011aa1000
...
GC Heap Size 0x374a00(3623424)
!address gives the details:
...
+ 0`11aa0000 0`11ef2000 0`00452000 MEM_PRIVATE MEM_COMMIT PAGE_READWRITE <unknown>
0`11ef2000 0`21aa0000 0`0fbae000 MEM_PRIVATE MEM_RESERVE <unknown>
0`21aa0000 0`21ac2000 0`00022000 MEM_PRIVATE MEM_COMMIT PAGE_READWRITE <unknown>
0`21ac2000 0`29aa0000 0`07fde000 MEM_PRIVATE MEM_RESERVE <unknown>
+ 0`29aa0000 0`6ca20000 0`42f80000 MEM_FREE PAGE_NOACCESS Free
...
Although not documented, I believe that a new segment starts at 11aa0000, indicated by the + sign. The GC segment ends at 29aa0000, which is also the starting point of the next segment. Cross check: .NET memory should be reported as <unknown> in the last column - ok.
The total GC size (reserved + committed) is
?29aa0000-11aa0000
Evaluate expression: 402653184 = 00000000`18000000
which is 402 MB or 393,216 kB, which in my case is very close to the 395,648 kB reported by VMMap.
If you have more GC heaps, the whole process needs more effort. Therefore I typically take a shortcut, which is ok if you know that you don't have anything other than .NET calling VirtualAlloc(). Type !address -summary and then look at the first <unknown> entry:
--- Usage Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
Free 144 7ff`d8a09000 ( 7.999 Tb) 99.99%
<unknown> 180 0`1a718000 ( 423.094 Mb) 67.17% 0.01%
...
Thank you very much for the detailed answer. Much appreciated.
I'm clear on the WinDbg vs VMmap access/control of the program. Since I can cause the leak by an external action, I'm pretty sure that once I quiesce that activity, memory won't grow much between samples.
I had been relying on the last line of output from !eeheap -gc:
GC Heap Size: Size: 0xed7458 (15561816) bytes.
I think this number must be the amount of managed heap in use (with un-freed objects in it). I summed all the "size" bytes reported by "!eeheap -gc" for each SOH and LOH and it matches the above value.
I ran VMmap, took a snapshot and quit VMmap. Then I attached to the process with WinDbg. Your technique of using !address was most enlightening. I'm using a 12-processor server system, so there are SOHs and LOHs for each processor, i.e. 12 to sum. Taking your lead, the output from "!eeheap -gc" has the segments for all of the heaps. I fed them all into !address and summed their sizes (plus the size reported by !eeheap -loader). The result was 335,108 K, which is within the variation I'd expect to see given the time elapsed (within 600 K).
The VMmap Managed Heap seems to be the total amount of all of the memory segments committed for use by the managed heap (I didn't check the Reserved numbers). So now I see why the total reported by "!eeheap -gc" is so much less than what VMmap shows.
Thanks!

Understanding JVM heap size (when using Eclipse)

There are a lot of questions about "how to increase heap size" here but I'd like to understand how these settings actually influence memory consumption of a Java app (Eclipse IDE in my case but I guess that doesn't matter).
My JVM starts up with these parameters:
-Xms512m
-Xmx1024m
so I would expect that when something memory-demanding executes, the heap size will go up to 1024 MB. However, I only see Eclipse allocating around 700-800 MB at most.
(If I understand the bar correctly, the yellow part is the current allocation, the whole bar is the maximum allocation, and there is some "marker" whose meaning I don't know.)
When I start a compilation of a large project, I periodically see the yellow bar rising, reaching about 90% of the whole bar and then dropping back to about 200-300 MB. That doesn't utilize the max allowed 1024MB, does it?
It'd be great if someone could explain this behavior to me and possibly how to change it.
BTW, my Eclipse is the 64-bit version with a 64-bit JVM, if that matters (the 1024 MB limit should be OK for 32-bit too, though).
Xmx is a hard limit. If the GC is able to clean up objects and lower memory consumption before hitting that limit, the max is never reached.
If you have some reason to require a full 1 GB of memory, just set Xms and Xmx to the same size.
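To see those numbers from inside a JVM, here is a small sketch; it is plain Java, not Eclipse-specific, and the reading of the Eclipse bar as used vs. currently committed heap is an assumption:

public class HeapStatus {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long used      = rt.totalMemory() - rt.freeMemory(); // roughly the yellow part of the bar
        long committed = rt.totalMemory();                   // heap the JVM has actually claimed (the whole bar)
        long max       = rt.maxMemory();                     // the -Xmx ceiling, only approached when needed
        System.out.printf("used=%d MB, committed=%d MB, max=%d MB%n",
                used >> 20, committed >> 20, max >> 20);
    }
}

With -Xms512m -Xmx1024m, the committed value typically starts near 512 MB and only grows toward 1024 MB when the GC cannot keep usage below the currently committed size.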

Inspecting memory allocations, leaks using the Instruments tool

As suggested by many, the Instruments tool is the best way to capture memory allocations and leaks. It has been easy for me to use the Instruments tool, but I am confused by the detailed results shown in the screenshot above.
I want to know the meaning of the following:
1) All Allocations
2) Live Bytes
3) Overall Bytes
4) Overall
Simple but confusing! Any answer will be greatly appreciated.
Live Bytes: The number of bytes that have been allocated but not released yet.
Living: The number of objects created and still on the heap.
Transitory: The number of objects created and since destroyed.
Overall Bytes: The total number of bytes that have been allocated over the run, whether or not they have since been released.
Overall: The total number of objects allocated over the run, whether or not they have since been released (so Overall = Living + Transitory).
All Allocations: All of the allocations made while the application is running.

How to keep 32 bit mongodb memory usage down on changing dataset

I'm using MongoDB on a 32-bit production system, which sucks, but it's out of my control right now. The challenge is to keep the memory usage under ~2.5 GB, since going over this will cause 32-bit systems to crash.
According to the MongoDB team, the best way to track the memory usage is to use your operating system's process tracking system (i.e. ps or htop on Unix systems; Process Explorer on Windows) and watch the virtual memory size.
The DB mainly consists of one table which is continually cycling data, i.e. receiving data at regular intervals from sensors, and every day a cron job wipes all data from before the last 3 days. Over a period of time, the memory usage slowly increases. I took some notes over time using db.serverStats(), db.lectura.totalSize() and ps, shown in the chart below. Note that the size of the table in question has shrunk over the last month, but the memory usage increased nonetheless.
Now, there is some scope for adjusting how many days of data I store. Today I deleted basically half of the data and then restarted mongodb, and yet mem virtual / mem mapped and, most importantly, the memory usage according to ps have hardly changed! Why do these not drop when I wipe data (and restart)? I read some other questions where people said that Mongo isn't really using all the memory that it might appear to be using, and that you can't clear the cache or limit memory use. But then how can I ensure I stay under the 2.5 GB limit?
Unless there is a way to stem this gradual, dataset-size-independent increase in memory usage, it seems to me that the 32-bit version of Mongo is unusable. Note: I don't mind losing a bit of performance if it solves the problem.
To answer why the mapped and virtual memory usage does not decrease with the deletes: the mapped number is essentially what you get when you mmap() the entire set of data files. This does not shrink when you delete records because, although the space is freed up inside the data files, the files themselves are not reduced in size; they just contain more free space afterwards.
Virtual will include journal files, and connections, and other non-data related memory usage also, but the same principle applies there. This, and more, is described here:
http://www.mongodb.org/display/DOCS/Checking+Server+Memory+Usage
So, the 2 GB storage size limitation on 32-bit will actually apply to the data files whether or not there is data in them. To reclaim deleted space, you will have to run a repair. This is a blocking operation and will require the database to be offline/unavailable while it runs. It will also need up to 2x the original size in free disk space, since it essentially rewrites the files from scratch.
This limitation, and the problems it causes, is why the 32-bit version should not be run in production; it is just not suitable. I would recommend moving to a 64-bit version as soon as possible.
By the way, neither of these figures (mapped or virtual) actually represents your resident memory usage, which is what you really want to look at. The best way to do this over time is via MMS, which is the free monitoring service provided by 10gen - it will graph virtual, mapped and resident memory for you over time as well as plenty of other stats.
If you want an immediate view, run mongostat and check out the corresponding memory columns (res, mapped, virtual).
In general, when using 64-bit builds with essentially unlimited storage, the data will usually greatly exceed the available memory. Therefore, mongod will use all of the available memory it can in terms of resident memory (which is why you should always have swap configured, so that the OOM killer does not come into play).
Once that is used, the OS does not stop allocating memory; it will just page out the oldest items (LRU) to make room for the new data. In other words, the recycling of memory will be done for you, and the resident memory level will remain fairly constant.
Your options for stretching 32-bit are limited, but you can try some things. What you run out of is address space, and because additional database files keep getting larger, you want to avoid crossing the boundary from "n" files to "n+1". It may be worth structuring your data across more or fewer databases so that you can get the maximum amount of actual data into memory and as little "dead space" as possible.
For example, if your database named "mydatabase" consists of the files mydatabase.ns (the namespace file) at 16 MB, mydatabase.0 at 64 MB, mydatabase.1 at 128 MB and mydatabase.2 at 256 MB, then the next file created for this database will be mydatabase.3 at 512 MB. If, instead of adding to mydatabase, you created an additional database "mynewdatabase", it would start life with mynewdatabase.ns at 16 MB and mynewdatabase.0 at 64 MB ... quite a bit smaller than the 512 MB that adding to the original database would cost. In fact, you could create 4 new databases for less space than would be consumed by adding a new file to the original database, and because the files are smaller they would be easier to fit into contiguous blocks of memory.
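A quick sketch of that arithmetic (the numbers are just the illustrative file sizes quoted above, assuming the usual MMAP-era doubling; the class is purely for illustration):

public class MmapFileMath {
    public static void main(String[] args) {
        int nextFileForExisting = 512;           // mydatabase.3 (MB), double the previous 256 MB file
        int costOfOneNewDatabase = 16 + 64;      // mynewdatabase.ns + mynewdatabase.0 (MB)
        System.out.println("Growing mydatabase by one file: " + nextFileForExisting + " MB");
        System.out.println("Starting one new database:      " + costOfOneNewDatabase + " MB");
        System.out.println("Starting four new databases:    " + 4 * costOfOneNewDatabase + " MB"); // 320 MB < 512 MB
    }
}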
It is a well-known message that 32-bit should not be used in production.
Use 64-bit systems.
Period.