Replication and memory leak - jpa

Memory leaks in WebLogic Server.
Our server's memory becomes saturated every day and we are forced to do a daily reboot.
There is nothing unusual in the code that would saturate the memory. However, a heap dump shows that the classes occupying the most memory are
weblogic.management.mbeanservers.internal.MBeanCICInterceptor (retained heap 5376058)
and the weblogic.cluster.replication.ReplicationManager class (retained heap 2690546).
weblogic.xml:
<session-descriptor>
    <cookie-name>OURPROJECT_SESSIONID</cookie-name>
    <persistent-store-type>replicated_if_clustered</persistent-store-type>
</session-descriptor>
Could this configuration in weblogic.xml be causing the memory leaks?

There is a known issue at Oracle with MBeanCICInterceptor that causes a memory leak in WebLogic Server 12.2.1.2.
If you are running this version, you can apply PSU 180717 and then apply patch 27469756.
If you are running another version, open an SR with Oracle Support.
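For illustration (not part of the original answer), one-off WebLogic patches like this are normally applied with OPatch after stopping the servers; the directory below is a placeholder:
# Hedged sketch: apply the patch referenced above with OPatch (assumes ORACLE_HOME is set and all servers are stopped)
cd /path/to/unzipped/27469756
$ORACLE_HOME/OPatch/opatch apply
# Verify the patch is now listed in the inventory
$ORACLE_HOME/OPatch/opatch lspatches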

Related

kubernetes pod high cache memory usage

I have a Java process which is running on k8s.
I set Xms and Xmx for the process:
java -Xms512M -Xmx1G -XX:SurvivorRatio=8 -XX:NewRatio=6 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -jar automation.jar
My expectation is that the pod should consume 1.5 or 2 GB of memory, but it consumes much more, nearly 3.5 GB. That's too much.
If I run my process on a virtual machine, it consumes much less memory.
When I check the memory stats for the pod, I realise that the pod allocates too much cache memory.
RSS of nearly 1.5 GB is OK, because Xmx is 1 GB. But why is the cache nearly 3 GB?
Is there any way to tune or control this usage?
/app $ cat /sys/fs/cgroup/memory/memory.stat
cache 2881228800
rss 1069154304
rss_huge 446693376
mapped_file 1060864
swap 831488
pgpgin 1821674
pgpgout 966068
pgfault 467261
pgmajfault 47
inactive_anon 532504576
active_anon 536588288
inactive_file 426450944
active_file 2454777856
unevictable 0
hierarchical_memory_limit 16657932288
hierarchical_memsw_limit 9223372036854771712
total_cache 2881228800
total_rss 1069154304
total_rss_huge 446693376
total_mapped_file 1060864
total_swap 831488
total_pgpgin 1821674
total_pgpgout 966068
total_pgfault 467261
total_pgmajfault 47
total_inactive_anon 532504576
total_active_anon 536588288
total_inactive_file 426450944
total_active_file 2454777856
total_unevictable 0
A Java process may consume much more physical memory than specified in -Xmx - I explained it in this answer.
However, in your case, it's not even the memory of the Java process, but rather the OS-level page cache. Typically you don't need to care about the page cache, since it's shared, reclaimable memory: when an application wants to allocate more memory but there are not enough immediately available free pages, the OS will likely free part of the page cache automatically. In this sense, the page cache should not be counted as "used" memory - it's more like spare memory the OS puts to good use while the application does not need it.
The page cache often grows when an application does a lot of file I/O, and this is fine.
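As an illustration of that reclaimability, the container's effective "working set" can be estimated from the same cgroup v1 files shown above by subtracting the reclaimable file cache from the total usage; memory.usage_in_bytes is assumed to be available next to memory.stat, and this is roughly how Kubernetes reports container memory:
/app $ usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
/app $ inactive=$(awk '/^total_inactive_file/ {print $2}' /sys/fs/cgroup/memory/memory.stat)
/app $ echo $((usage - inactive))   # approximate working set: usage minus reclaimable inactive file cache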
Async-profiler may help find the exact source of the growth:
run it with -e filemap:mm_filemap_add_to_page_cache
I demonstrated this approach in my presentation.
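A minimal sketch of that invocation, assuming async-profiler's profiler.sh is available inside the container and the JVM's PID is known (duration and output file name are illustrative):
# Record page-cache insertions for 60 seconds and write a flame graph
./profiler.sh -d 60 -e filemap:mm_filemap_add_to_page_cache -f pagecache.html <pid>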

how to reduce resident memory size in weblogic server

I am monitoring WebLogic Server using the JConsole tool. I found there is no memory leak in the heap, but I see the resident memory size growing very high and it does not come down even though the heap stays under 1 GB. I have a 6 GB heap size and 12 GB of RAM. A single Java process is holding most of the memory. I am using WebLogic 9 and JDK 1.5.
Once the server is restarted the memory comes down, but then it starts growing again and reaches the maximum within a short time.
-Xms1024m -Xmx6144m
Can someone help in resolving this issue? Thanks in advance.
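Not part of the original question, but a simple way to confirm that the growth is outside the Java heap is to watch heap occupancy and the process's resident size side by side; the commands below assume a Unix-like host and a placeholder PID (jstat ships with JDK 1.5):
# Heap occupancy (eden/survivor/old utilization), sampled every 5 seconds
jstat -gcutil <pid> 5000
# Resident set size of the same process as seen by the OS
ps -o pid,rss,vsz -p <pid>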

Wildfly 8.2 stop incoming connections suddenly

We have a WildFly 8.2 app server which is allocated 6 GB of server RAM. Sometimes, due to a heavy transaction count, WildFly stops receiving incoming connections. But when I check the server's memory (not the app server; it is our VM), it uses 4 GB of RAM. Then I checked the WildFly app server's heap memory and it did not use even 25% of the allocated heap size. Why is that? When I restart the WildFly app server, everything works normally, and when that kind of load comes again, the above scenario happens again.
Try to increase the connection-limit as suggested in this SO question (a CLI sketch follows below).
You can dump HTTP requests as shown here.
Also, are you getting any errors in your console? Please post them as well.
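Not from the original answer, but as a hedged sketch of where such a connection limit lives in WildFly's Undertow subsystem; the server and listener names below are the defaults, and the max-connections attribute may not be exposed on every 8.x version, so check the first command's output before writing anything:
# Inspect the default HTTP listener and its attributes
$JBOSS_HOME/bin/jboss-cli.sh -c --command="/subsystem=undertow/server=default-server/http-listener=default:read-resource(include-runtime=true)"
# If a max-connections attribute is exposed, raise it (value is illustrative) and reload
$JBOSS_HOME/bin/jboss-cli.sh -c --command="/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=max-connections,value=2000)"
$JBOSS_HOME/bin/jboss-cli.sh -c --command=":reload"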

CPU usage of Jboss JVM goes upto 99% and stays there

I am doing load testing on my application using JMeter, and I have a situation where the CPU usage of the application's JVM goes to 99% and stays there. The application still works; I am able to log in and do some activity, but it is understandably slower.
Details of environment:
Server: AMD Opteron, 2.20 GHz, 8 cores, 64-bit, 24 GB RAM, Windows Server 2008 R2 Standard
Application server: jboss-4.0.4.GA
JAVA: jdk1.6.0_25, Java HotSpot(TM) 64-Bit Server VM
JVM settings:
-Xms1G -Xmx10G -XX:MaxNewSize=3G -XX:MaxPermSize=12G -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseCompressedOops -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000
Database: MySql 5.6 (in a different machine)
Jmeter: 2.13
My scenario is that I have 20 users of my application log in and perform normal activity that should not create a huge load. Some minutes into the process, the CPU usage of the JBoss JVM goes up and it never comes back down. CPU usage remains like that until the JVM is killed.
To help better understand, here are a few screenshots.
I found a few posts about CPU at 100%, but nothing there matched my situation and I could not find a solution.
Any suggestion on what’s to be done will be great.
Regards,
Sreekanth.
To understand the root cause of the high CPU utilization, we need to check the CPU data and thread dumps at the same time.
Capture 5-6 thread dumps at the time of the issue. Similarly, capture CPU consumption on a thread-by-thread basis.
Generally the root cause of a CPU issue is a problem with threads, such as BLOCKED threads, long-running threads, deadlocks, long-running loops, etc. It can be identified by going through the stacks of the threads.
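As an illustration (not from the original answer), thread dumps can be captured with jstack, which ships with the JDK 1.6 install listed above; the loop below assumes a Unix-like shell, so on the Windows server described it would be run as repeated jstack invocations, with per-thread CPU taken from a tool such as Process Explorer instead of top:
# Capture 6 thread dumps, 10 seconds apart, while the CPU is pegged (PID is a placeholder)
for i in 1 2 3 4 5 6; do jstack -l <pid> > threaddump_$i.txt; sleep 10; done
# On Linux, a snapshot of per-thread CPU for the same process
top -b -H -n 1 -p <pid>
# The hex form of a hot thread id matches the nid=0x... field in the dumps
printf '%x\n' <tid>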

Stash out of memory

I set the max memory to 2 gigabytes in the setenv.bat file, but it still runs out of memory at about 800 MB of allocated Java memory.
Is this normal?
Running on Windows Server 2012.
It's not normal. Stash is usually pretty good about memory. You might want to raise a support ticket at:
https://support.atlassian.com/
They'll probably ask you to create a heap dump so they can see what might be causing problems.
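If it helps while raising the ticket, a heap dump can usually be captured with jmap from the JDK running Stash; the PID and file name below are placeholders:
# Write a binary heap dump of the Stash JVM for analysis
jmap -dump:format=b,file=stash-heap.hprof <pid>
# Alternatively, add these flags in setenv.bat so the JVM writes a dump automatically on the next OutOfMemoryError
# -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<directory>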