Vertx program memory keeps growing - vert.x

I used the top command on the Linux server and found that the memory of the deployed Vert.x program keeps increasing. I printed the memory usage with Native Memory Tracking and found that the Internal memory keeps growing. I don't manually allocate off-heap memory anywhere in the code.
Native Memory Tracking:
Total: reserved=7595MB, committed=6379MB
Java Heap (reserved=4096MB, committed=4096MB)
(mmap: reserved=4096MB, committed=4096MB)
Class (reserved=1101MB, committed=86MB)
(classes #12776)
(malloc=7MB #18858)
(mmap: reserved=1094MB, committed=79MB)
Thread (reserved=122MB, committed=122MB)
(thread #122)
(stack: reserved=121MB, committed=121MB)
Code (reserved=253MB, committed=52MB)
(malloc=9MB #12566)
(mmap: reserved=244MB, committed=43MB)
GC (reserved=155MB, committed=155MB)
(malloc=6MB #302)
(mmap: reserved=150MB, committed=150MB)
Internal (reserved=1847MB, committed=1847MB)
(malloc=1847MB #35973)
Symbol (reserved=17MB, committed=17MB)
(malloc=14MB #137575)
(arena=3MB #1)
Native Memory Tracking (reserved=4MB, committed=4MB)
(tracking overhead=3MB)
Vert.x version: 3.9.8
Cluster: Hazelcast
Startup script: su web -s /bin/bash -c "/usr/bin/nohup /usr/bin/java -XX:NativeMemoryTracking=detail -javaagent:../target/showbiz-server-game-1.0-SNAPSHOT-fat.jar -javaagent:../../quasar-core-0.8.0.jar=b -Dvertx.hazelcast.config=/data/appdata/webdata/Project/config/cluster.xml -jar -Xms4G -Xmx4G -XX:-OmitStackTraceInFastThrow ../target/server-1.0-SNAPSHOT-fat.jar start-Dvertx-id=server -conf application-conf.json -Dlog4j.configurationFile=log4j2_logstash.xml -cluster >nohup.out 2>&1 &"

If your producers are much faster than your consumers and back pressure isn't handled properly, memory can keep increasing.
How pronounced this is depends on how the code is written.
This similar reported issue could be of help; consider exploring WriteStream as well (see the sketch below).
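As a rough illustration (not taken from the original code), here is a minimal sketch of handling back pressure manually between a Vert.x ReadStream and a WriteStream; the names source, sink, and relay are placeholders:

import io.vertx.core.buffer.Buffer;
import io.vertx.core.streams.ReadStream;
import io.vertx.core.streams.WriteStream;

public class BackPressureExample {
    // Relay data from source to sink without letting the write queue grow unbounded.
    static void relay(ReadStream<Buffer> source, WriteStream<Buffer> sink) {
        source.handler(buffer -> {
            sink.write(buffer);
            if (sink.writeQueueFull()) {                     // producer is outrunning the consumer
                source.pause();                              // stop reading...
                sink.drainHandler(v -> source.resume());     // ...until the queue drains
            }
        });
        // Since Vert.x 3.7 the same wiring can be done with source.pipeTo(sink),
        // which handles pause/resume/drain automatically.
    }
}

If the pause/resume wiring is missing, unconsumed buffers accumulate in the write queue, which shows up as native memory growth when Netty allocates them off-heap.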

Related

UseContainerSupport and direct memory

In a container-based environment such as Kubernetes, the UseContainerSupport JVM feature is handy as it allows configuring the heap size as a percentage of container memory via options such as -XX:MaxRAMPercentage instead of a static value via -Xmx. This way you don't have to adjust your JVM options every time the container memory limit changes, which potentially allows vertical autoscaling. The primary goal is to hit a Java OutOfMemoryError rather than run out of memory at the container level (e.g. a Kubernetes OOMKill).
That covers heap memory. For applications that use a significant amount of direct memory via NIO (e.g. gRPC/Netty), what are the options? The main one I could find is -XX:MaxDirectMemorySize, but it takes a static value, similar to -Xmx.
There's no percentage-based switch for MaxDirectMemorySize as far as I know.
But by default (if you don't specify -XX:MaxDirectMemorySize) the limit is the same as MaxHeapSize.
That means if you set -XX:MaxRAMPercentage, the resulting heap limit also applies to direct memory.
Note that you cannot verify this simply via -XX:+PrintFlagsFinal, because that prints 0:
java -XX:MaxRAMPercentage=1 -XX:+PrintFlagsFinal -version | grep 'Max.*Size'
...
uint64_t MaxDirectMemorySize = 0 {product} {default}
size_t MaxHeapSize = 343932928 {product} {ergonomic}
...
openjdk version "17.0.2" 2022-01-18
...
See also https://dzone.com/articles/default-hotspot-maximum-direct-memory-size and "Replace access to sun.misc.VM for JDK 11".
My own experiments here: https://github.com/jumarko/clojure-experiments/pull/32
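To double-check the effective direct-memory limit from inside the JVM rather than via flags, one option is a small probe that keeps allocating direct buffers until the allocation fails. This is only a sketch under that assumption; the class name is made up:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectMemoryProbe {
    public static void main(String[] args) {
        // Heap limit derived from -XX:MaxRAMPercentage (or -Xmx)
        System.out.println("maxMemory (heap) = " + Runtime.getRuntime().maxMemory());
        List<ByteBuffer> buffers = new ArrayList<>(); // hold references so nothing is freed
        long allocated = 0;
        try {
            while (true) {
                buffers.add(ByteBuffer.allocateDirect(1024 * 1024)); // 1 MiB per allocation
                allocated += 1024 * 1024;
            }
        } catch (OutOfMemoryError e) {
            // Thrown as "Direct buffer memory" once the direct limit is exhausted
            System.out.println("direct memory limit reached at ~" + allocated + " bytes");
        }
    }
}

Running it with, say, java -XX:MaxRAMPercentage=25 DirectMemoryProbe and no -XX:MaxDirectMemorySize, the reported direct-memory ceiling should roughly track the maxMemory() value.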

Unable to connect to the NetBeans Distribution because of Zero sized file

I recently reinstalled the NetBeans IDE on my Windows 10 PC in order to restore some unrelated configurations. When I tried checking for new plugins so I could download the Sakila sample database,
I got this error.
I've tested the connection with both No Proxy and Use Proxy Settings, and both connection tests seem to end successfully.
I have allowed NetBeans through my firewall, but that hasn't changed anything either.
I haven't touched my proxy configuration, so it's on the default (auto-detect). Switching auto-detect off doesn't change anything either, no matter what proxy configuration I use in NetBeans.
Here's part of my log file that might be helpful:
Compiler: HotSpot 64-Bit Tiered Compilers
Heap memory usage: initial 32,0MB maximum 910,5MB
Non heap memory usage: initial 2,4MB maximum -1b
Garbage collector: PS Scavenge (Collections=12 Total time spent=0s)
Garbage collector: PS MarkSweep (Collections=3 Total time spent=0s)
Classes: loaded=6377 total loaded=6377 unloaded 0
INFO [org.netbeans.core.ui.warmup.DiagnosticTask]: Total memory 17.130.041.344
INFO [org.netbeans.modules.autoupdate.updateprovider.DownloadListener]: Connection content length was 0 bytes (read 0bytes), expected file size can`t be that size - likely server with file at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b is temporary down
INFO [org.netbeans.modules.autoupdate.ui.Utilities]: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
java.io.IOException: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.doCopy(DownloadListener.java:155)
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.streamOpened(DownloadListener.java:78)
at org.netbeans.modules.autoupdate.updateprovider.NetworkAccess$Task$1.run(NetworkAccess.java:111)
Caused: java.io.IOException: Zero sized file reported at http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz?unique=NB_CND_EXTIDE_GFMOD_GROOVY_JAVA_JC_MOB_PHP_WEBCOMMON_WEBEE0d55337f9-fc66-4755-adec-e290169de9d5_bf88d09e-bf9f-458e-b1c9-1ea89147b12b
at org.netbeans.modules.autoupdate.updateprovider.DownloadListener.notifyException(DownloadListener.java:103)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogCache.copy(AutoupdateCatalogCache.java:246)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogCache.writeCatalogToCache(AutoupdateCatalogCache.java:99)
at org.netbeans.modules.autoupdate.updateprovider.AutoupdateCatalogProvider.refresh(AutoupdateCatalogProvider.java:154)
at org.netbeans.modules.autoupdate.services.UpdateUnitProviderImpl.refresh(UpdateUnitProviderImpl.java:180)
at org.netbeans.api.autoupdate.UpdateUnitProvider.refresh(UpdateUnitProvider.java:196)
[catch] at org.netbeans.modules.autoupdate.ui.Utilities.tryRefreshProviders(Utilities.java:433)
at org.netbeans.modules.autoupdate.ui.Utilities.doRefreshProviders(Utilities.java:411)
at org.netbeans.modules.autoupdate.ui.Utilities.presentRefreshProviders(Utilities.java:405)
at org.netbeans.modules.autoupdate.ui.UnitTab$14.run(UnitTab.java:806)
at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:1423)
at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:2033)
It might be that the update server is down right now; I haven't been able to test that either. But it might also be something wrong with my configuration. I'm going crazy!
Something that worked for me was changing "http:" to "https:" in the update URLs.
I.e., change "http://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz"
to "https://updates.netbeans.org/netbeans/updates/8.0.2/uc/final/distribution/catalog.xml.gz".
No idea why that makes it work on my end. I'm running Linux Mint 19.1.

OrientDB 2.2.2 - Is there a way to suppress OAbstractProfiler$MemoryChecker Messages?

We are running on 32-bit Windows, and since upgrading from 1.4.1 to 2.2.2 we are seeing the following memory message in stdout (numbers not exact):
INFO: Database 'BLAH' uses 770MB/912MB of DISKCACHE memory, while Heap is not completely used (usedHeap=123MB maxHeap=512MB). To improve performance set maxHeap to 124MB and DISKCACHE to 1296MB
With 32-bit, we can only set a maximum of Xmx + storage.diskCache.bufferSize ~= 1.4 GB without getting OOM or performance issues. Any combination of sizes for these two configurable variables results in a variant of the above message.
Is there a way to suppress the above profiler/memory checker messages?
You can disable the profiler with:
java ... -Dprofiler.enabled=false ...
Set that configuration in your server.sh or in the last section of the config/orientdb-server-config.xml file.

Intermittent JVM crash under load

I am currently load testing my clustered application, which is running in JBoss 5.1 with JDK 1.6.0_45, and I am experiencing intermittent JVM crashes. From the error report (further details below) it appears that the eden space is full (100%) at the time of the crash, so I suspect this is the most likely candidate.
I have therefore been running JVisualVM to look for memory leaks, specifically monitoring my own classes. I can see these classes growing in memory, but then they are periodically cleaned up by the garbage collector.
Even if there were a memory leak, I would have expected to see OutOfMemoryErrors before a complete JVM crash. Can anyone help point me in the right direction as to where the problem may lie? Any guidance would be very much appreciated.
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000000006dba43f7, pid=3980, tid=2556
#
# JRE version: 6.0_45-b06
# Java VM: Java HotSpot(TM) 64-Bit Server VM (20.45-b01 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# V [jvm.dll+0x2c43f7]
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#
snip
Heap
PSYoungGen total 670272K, used 662831K [0x00000007d5560000, 0x0000000800000000, 0x0000000800000000)
eden space 641728K, 100% used [0x00000007d5560000,0x00000007fc810000,0x00000007fc810000)
from space 28544K, 73% used [0x00000007fc810000,0x00000007fdcabf68,0x00000007fe3f0000)
to space 28352K, 12% used [0x00000007fe450000,0x00000007fe7d0e60,0x0000000800000000)
PSOldGen total 1398144K, used 1096904K [0x0000000780000000, 0x00000007d5560000, 0x00000007d5560000)
object space 1398144K, 78% used [0x0000000780000000,0x00000007c2f32250,0x00000007d5560000)
PSPermGen total 422848K, used 378606K [0x0000000760000000, 0x0000000779cf0000, 0x0000000780000000)
object space 422848K, 89% used [0x0000000760000000,0x00000007771bb800,0x0000000779cf0000)
snip
VM Arguments:
jvm_args: -Dprogram.name=run.bat -XX:MaxPermSize=512m -Xms2G -Xmx2G -Dhttp.proxyHost=testproxy -Dhttp.proxyPort=8010 -Dhttps.proxyHost=testproxy -Dhttps.proxyPort=8010 -Djavax.net.ssl.trustStore=cacerts -Djavax.net.ssl.trustStorePassword=changeit -Djavax.net.ssl.keyStore=testkeystore.jks -Djavax.net.ssl.keyStorePassword=testkeystore -Djboss.messaging.ServerPeerID=2 -Dhttp.nonProxyHosts=*.mydomain.com -Dsun.rmi.dgc.client.gcInterval=900000 -Dsun.rmi.dgc.server.gcInterval=900000 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Djava.library.path=C:\jboss-5.1.0.GA\bin\native;C:\Program Files (x86)\Windows Resource Kits\Tools\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\jboss-5.1.0.GA\bin -Djava.endorsed.dirs=C:\jboss-5.1.0.GA\lib\endorsed
java_command: org.jboss.Main -c hops-cnf -b 0.0.0.0
Launcher Type: SUN_STANDARD
This is most likely a problem with the JVM version and the eden space. Your best bet is probably to reduce GC threading. Try with:
-XX:LargePageSizeInBytes=5m -XX:ParallelGCThreads=1 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC

mongodb higher faults on Windows than on Linux

I am executing the C# code below:
for (;;)
{
    Console.WriteLine("Doc# {0}", ctr++);
    BsonDocument log = new BsonDocument();
    log["type"] = "auth";
    BsonDateTime time = new BsonDateTime(DateTime.Now);
    log["when"] = time;
    log["user"] = "staticString";
    BsonBoolean bol = BsonBoolean.False;
    log["res"] = bol;
    coll.Insert(log);
}
When I run it against a MongoDB instance (version 2.0.2) running on a virtual 64-bit Linux machine with just 512 MB of RAM, I get about 5k inserts with 1-2 faults, as reported by mongostat after a few minutes.
When the same code is run against a MongoDB instance (version 2.0.2) running on a physical Windows machine with 8 GB of RAM, I get 2.5k inserts with about 80 faults, as reported by mongostat after a few minutes.
Why are more faults occurring on Windows? I can see the following message in the logs:
[DataFileSync] FlushViewOfFile failed 33 file
Journaling is disabled on both instances.
Also, is 5k inserts on a virtual machine with 1-2 faults a good enough speed, or should I be expecting better insert rates?
Looks like this is a known issue: https://jira.mongodb.org/browse/SERVER-1163
The page fault counter on Windows is in fact the total page fault count, which includes both hard and soft page faults.
Process : Page Faults/sec. This is an indication of the number of page faults that
occurred due to requests from this particular process. Excessive page faults from a
particular process are an indication usually of bad coding practices. Either the
functions and DLLs are not organized correctly, or the data set that the application
is using is being called in a less than efficient manner.