In Eclipse, JBoss 7.1 VM arguments
My RAM is 8 GB.
The VM arguments contain statements like these:
-server -Xms64m
-Xmx512m
-XX:MaxPermSize=1024m
How do I calculate these numbers?
Caused by: java.lang.OutOfMemoryError: Java heap space
You are getting that error because your server used up all of its available memory (in your case, 512 MB). You can increase the -Xmx parameter, which sets the maximum amount of memory your server can use.
OutOfMemoryError can happen because of an insufficient memory assignment, or because of memory leaks (objects that Java's garbage collector cannot collect because they are still referenced, despite no longer being needed).
There is no magic rule for calculating those parameters; they depend on what you are deploying to JBoss, how many concurrent users you have, and so on.
You can try increasing the -Xmx parameter and checking the memory usage with jvisualvm to see how it behaves.
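For example, with 8 GB of RAM and JBoss as the main process on the machine, a more generous starting point could look like the lines below; these values are only an assumption to iterate on while watching jvisualvm, not a recommendation:
-server -Xms512m
-Xmx1024m
-XX:MaxPermSize=256m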
Related
I have a Java process running on Kubernetes.
I set -Xms and -Xmx for the process:
java -Xms512M -Xmx1G -XX:SurvivorRatio=8 -XX:NewRatio=6 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -jar automation.jar
My expectation is that the pod should consume 1.5 or 2 GB of memory, but it consumes much more, nearly 3.5 GB. That is too much.
If I run the process on a virtual machine, it consumes much less memory.
When I check the memory stats for the pod, I realise the pod allocates a lot of cache memory.
RSS of nearly 1.5 GB is OK, because -Xmx is 1 GB. But why is the cache nearly 3 GB?
Is there any way to tune or control this usage?
/app $ cat /sys/fs/cgroup/memory/memory.stat
cache 2881228800
rss 1069154304
rss_huge 446693376
mapped_file 1060864
swap 831488
pgpgin 1821674
pgpgout 966068
pgfault 467261
pgmajfault 47
inactive_anon 532504576
active_anon 536588288
inactive_file 426450944
active_file 2454777856
unevictable 0
hierarchical_memory_limit 16657932288
hierarchical_memsw_limit 9223372036854771712
total_cache 2881228800
total_rss 1069154304
total_rss_huge 446693376
total_mapped_file 1060864
total_swap 831488
total_pgpgin 1821674
total_pgpgout 966068
total_pgfault 467261
total_pgmajfault 47
total_inactive_anon 532504576
total_active_anon 536588288
total_inactive_file 426450944
total_active_file 2454777856
total_unevictable 0
A Java process may consume much more physical memory than specified in -Xmx - I explained it in this answer.
However, in your case, it's not even the memory of the Java process, but rather the OS-level page cache. Typically you don't need to care about the page cache, since it is shared, reclaimable memory: when an application wants to allocate more memory but there are not enough immediately available free pages, the OS will likely free part of the page cache automatically. In this sense, the page cache should not be counted as "used" memory; it is more like spare memory the OS puts to good use while the application does not need it.
The page cache often grows when an application does a lot of file I/O, and this is fine.
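If you want to confirm that the cache is really reclaimable, one option (assuming you have the privileges, which a container usually does not grant by default) is to ask the kernel to drop clean page cache and watch the cache counter in memory.stat fall; this is purely a diagnostic step:
sync && echo 1 > /proc/sys/vm/drop_caches
grep ^cache /sys/fs/cgroup/memory/memory.stat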
Async-profiler may help to find the exact source of growth:
run it with -e filemap:mm_filemap_add_to_page_cache
I demonstrated this approach in my presentation.
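As a sketch of that approach, assuming async-profiler's profiler.sh is available inside the container and <pid> is the Java process id, a run could look like this:
./profiler.sh -d 60 -e filemap:mm_filemap_add_to_page_cache -f pagecache.html <pid>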
I have an ETL flow through Talend in which I:
Read the zipped files from a remote server with a job.
Take these files, unzip them, and parse them into HDFS with another job. Inside the job there is a schema check, so if something is not
My problem is that the TAC server stops the execution because of this error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at org.talend.fileprocess.TOSDelimitedReader$ColumnBuffer4Joiner.saveCharInJoiner(TOSDelimitedReader.java:503)
at org.talend.fileprocess.TOSDelimitedReader.joinAndRead(TOSDelimitedReader.java:261)
at org.talend.fileprocess.TOSDelimitedReader.readRecord_SplitField(TOSDelimitedReader.java:148)
at org.talend.fileprocess.TOSDelimitedReader.readRecord(TOSDelimitedReader.java:125)
....
Is there any option to avoid and handle this error automatically?
Only a few files cause this error, but I want to find a solution for similar situations in the future.
In the TAC Job Conductor, for a selected job, you can add JVM parameters.
Add the -Xmx parameter to specify the maximum heap size. The default value depends on various factors such as the JVM release/vendor, the actual memory of the machine, etc. In your situation, the java.lang.OutOfMemoryError: Java heap space reveals that the default value is not enough for this job, so you need to override it.
For example, specify -Xmx2048m for 2048 MB (2 GB).
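For instance, a JVM parameter line for a job that parses large delimited files might look like the following; the 2048m figure is only an assumed starting point to adjust against your actual file sizes:
-Xms512m -Xmx2048m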
@DrGenius Talend has a Java-based environment, and some default JVM heap is assigned during initialization, as for any Java program. The default for Talend is min 256 MB (-Xms) and max 1024 MB (-Xmx). Depending on your job's requirements, you can set the min/max JVM range, for example a min of 512 MB and a max of 8 GB.
This can be modified from the Job Run tab, under advanced settings. It can even be parameterized and overridden using variables set in the environment. The exact value can be seen in the job build, in _run.sh.
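For illustration, the java invocation inside the generated _run.sh usually carries these flags; the classpath and job class below are placeholders, not the actual generated names:
java -Xms256M -Xmx1024M -cp <generated_classpath> <your_job_class>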
But be careful not to set it too high, or other jobs running on the same server will be starved of memory.
More details on heap error & how to debug issue:
https://dzone.com/articles/java-out-of-memory-heap-analysis
I am monitoring a WebLogic server using the JConsole tool. I found there is no memory leak in the heap, but I see the resident memory size growing very high, and it does not come down even though the heap stays under 1 GB. I have a 6 GB heap size and 12 GB of RAM. A single Java process is holding most of the memory. I am using WebLogic 9 and JDK 1.5.
Once the server is restarted the memory comes down, then it starts growing again and reaches the maximum within a short time span.
-Xms1024m -Xmx6144m
Can someone help in resolving this issue? Thanks in advance.
Memory leaks in WebLogic server.
The memory of our server becomes saturated every day, and we have to perform a daily reboot.
There is nothing special in the code that saturates the memory. However, a heap dump shows that the classes occupying the most memory are
weblogic.management.mbeanservers.internal.MBeanCICInterceptor (retained heap 5376058)
and the weblogic.cluster.replication.ReplicationManager class (retained heap 2690546).
weblogic.xml :
<session-descriptor>
<cookie-name> OURPROJECT_SESSIONID </cookie-name>
<persistent-store-type> replicated_if_clustered</persistent-store-type>
</session-descriptor>
Is it possible that this configuration in weblogic.xml causes memory leaks?
There is a known issue at Oracle with MBeanCICInterceptor which causes a memory leak in WebLogic Server 12.2.1.2.
If you are running this version, you can apply PSU 180717 and then apply patch 27469756.
If you are running another version, open an SR with Oracle Support.
We have a Windows PC with 2048 MB of RAM.
We are trying to use the following memory settings for JBoss:
-Xms256M -Xmx768M -XX:MaxPermSize=256M
But it cannot start:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
JBoss starts only if we change -Xmx768M to -Xmx512M.
What can be the problem?
Update:
Now we use the following settings:
-Xms512M -Xmx768M -XX:MaxPermSize=156M
http://javahowto.blogspot.in/2006/06/6-common-errors-in-setting-java-heap.html
The error seems to be saying that the virtual memory size of the machine is smaller than the maximum heap size we are defining via "-Xms1303m -Xmx1303m". I changed it to "-Xms256m -Xmx512m" and it started working on my local Windows box.
Interesting. What happens when you set max memory to 513M?
If that fails, it's possibly a problem I haven't seen in quite a while. An ancient COBOL compiler I used refused to work on PCs with 640K of RAM because they used a signed-number check to decide if there was enough memory.
And in that world, 640K actually had the high bit set, hence was a negative number, so the check always failed.
I find it hard to believe that this would be the case in today's world, but it may be worth looking at.
If it doesn't fail at 513M, then it may just be that you're trying to allocate too much memory. It's not necessarily physical memory that matters; address space may be the issue, but you should have at least 2G of that as well on 32-bit Windows.
With your settings shown, you use 1G just for permgen and heap. Try adjusting these until it works and post the figures you have.
There are two possible causes:
the JVM couldn't find a 768 MiB contiguous region in the address space, or
the total size of the free area in your RAM and paging file is less than 1 GiB.
(The JVM checks these using -Xmx and -XX:MaxPermSize on startup because of the GC implementation.)
Since you could start with -Xmx768m -XX:MaxPermSize=156m, the latter is doubtful.
If so, the problem might be resolved by freeing RAM (e.g., stopping unused services) or extending the paging file.
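As a quick check before tuning, you can see how much physical memory and paging-file space Windows currently reports as free (the filter string assumes an English locale):
systeminfo | find "Memory"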
Maybe you can reboot your PC and try again. You can't allocate more memory than your total physical memory.
-Xms256M -Xmx768M -XX:MaxPermSize=256M should only be trying to grab 512M at most on initialization, plus about 100M for the JVM process itself. Do you have that much free memory on the machine? We always run JBoss on machines with 4G because the DB takes up quite a bit as well.
Here's a trick you can use to find the max amount you can set. You can simply run
java -version -Xms256M -Xmx768M -XX:MaxPermSize=256M
And then increase/decrease values until you find the max the JVM will let you set.
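On a box where the request cannot be satisfied, for example with a deliberately oversized -Xmx, the output will look something like this (the exact wording varies by JVM version):
java -Xms256M -Xmx1600M -XX:MaxPermSize=256M -version
Error occurred during initialization of VM
Could not reserve enough space for object heap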
BTW, on a 4G 32-bit Windows box, we usually set -Xms768M -Xmx1300M -XX:MaxPermSize=256M.