We have a WildFly 8.2 app server which is allocated 6GB of server RAM. Sometimes, due to a heavy transaction count, WildFly stops receiving incoming connections. But when I check the server's memory (not the app server; it is our VM), it uses only 4GB of RAM. Then I checked the WildFly app server's heap memory: it did not use even 25% of the allocated heap size. Why is that? When I restart the WildFly app server, everything works normally, and when that kind of load comes again, the above scenario happens again.
Try increasing the connection-limit as suggested in this SO question.
You can dump HTTP requests as described here (a minimal sketch of such a handler follows below).
Also, are you getting any errors in your console? Please post them as well.
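As an illustration of what such a request-dumping filter does, here is a minimal standalone Undertow sketch (Undertow is the web subsystem of WildFly 8.x); the port, the request limit of 500, and the trivial handler are assumptions for the example, not your configuration:

    import io.undertow.Undertow;
    import io.undertow.server.handlers.RequestDumpingHandler;
    import io.undertow.server.handlers.RequestLimitingHandler;

    public class DumpAndLimitServer {
        public static void main(String[] args) {
            // RequestLimitingHandler caps concurrent requests (here at 500);
            // RequestDumpingHandler logs each request and response in full.
            Undertow server = Undertow.builder()
                    .addHttpListener(8080, "0.0.0.0")
                    .setHandler(new RequestLimitingHandler(500,
                            new RequestDumpingHandler(exchange ->
                                    exchange.getResponseSender().send("ok"))))
                    .build();
            server.start();
        }
    }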
I'm facing some deployment issues on WebSphere Portal version 8.5 running in a clustered environment.
The deployment happens by calling the xmlaccess.sh script. The deployment works fine, but it takes too much time.
A few XML files are imported by XMLAccess, which connects to the URL http://serverhost:10039/wps/config and runs the following operations:
webmodule.delete
webmodule.import
portal-content.delete
portal-content.import
webmodule.import and portal-content.import take ~20 minutes each, for a total time of ~45 minutes.
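For reference, an XMLAccess deployment of this kind is typically invoked along the following lines (the host, credentials, and file names here are placeholders, not taken from this setup):

    ./xmlaccess.sh -user wpsadmin -password mypassword -url http://serverhost:10039/wps/config -in DeployPortlets.xml -out results.xml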
The server has a federated LDAP repository, but there is no issue with LDAP: the application uses the same LDAP server for login, and it is very responsive.
We have increased the JVM heap size to:
Initial Heap Memory - 4096m
Maximum Heap Memory - 8192m
The dmgr has an initial and maximum heap memory of 3072m.
Also, there are no exceptions in the SystemOut logs during the deployment. I have enabled the trace option but still have not found any issue.
I also noticed that when opening the WebSphere Portal admin console the response is very slow; the page takes ~1 minute to load.
The hardware configuration of the server is:
OS - Solaris
RAM - 16 GB
CPU - 4 cores
What I have tried so far:
Increased the heap memory on the JVMs and the dmgr.
Set java.net.preferIPv4Stack=true for the JVMs, dmgr, and node agents.
I have a serious problem: my memcached memory is overflowing and the server is going down.
So how do I handle this? If memcached memory gets full, will it just throw an error message and not store anything anymore?
memcached is a distributed cache system and can run across different servers. What is your server? Is it Couchbase? Is it AWS ElastiCache? You can use memcached on many different servers and providers, and when you create those servers you need to configure them and set the amount of memory you want memcached to use. For example, in the company I work for, the test environment uses Couchbase but production uses Amazon ElastiCache.
memcached uses an LRU (least recently used) algorithm to make room for new objects when memory is full, so a full cache by itself should not be a problem for memcached to handle. Problems can certainly arise when memory is full, but not because memcached cannot cope with it; exceptions and other failures can come from other parts of the system, which is quite normal when memory runs out. If you configure the server correctly, memcached usually does not throw exceptions.
Is the server that runs memcached the same as the server that runs your application? memcached can be put on a separate server, and in this way you can prevent its memory from filling up.
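For what it's worth, here is a minimal client-side sketch assuming the spymemcached Java client and a memcached instance started with an explicit memory cap (e.g. memcached -m 6144); the host, port, and key are made up for illustration:

    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    public class MemcachedExample {
        public static void main(String[] args) throws Exception {
            // Connect to a (possibly remote) memcached instance.
            MemcachedClient client = new MemcachedClient(
                    new InetSocketAddress("cache-host", 11211));

            // With memcached's default configuration (no -M flag), a set does
            // not fail when memory is full: the least recently used entries
            // are evicted to make room. The second argument is expiry in seconds.
            client.set("user:42", 3600, "some cached value");

            Object value = client.get("user:42"); // null if evicted or expired
            System.out.println(value);
            client.shutdown();
        }
    }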
I am doing load testing on my application using JMeter, and I have a situation where the CPU usage of the application's JVM goes to 99% and stays there. The application still works; I am able to log in and do some activity, but it is understandably slower.
Details of environment:
Server: AMD Opteron, 2.20 GHz, 8 cores, 64-bit, 24 GB RAM, Windows Server 2008 R2 Standard
Application server: jboss-4.0.4.GA
JAVA: jdk1.6.0_25, Java HotSpot(TM) 64-Bit Server VM
JVM settings:
-Xms1G -Xmx10G -XX:MaxNewSize=3G -XX:MaxPermSize=12G -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseCompressedOops -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000
Database: MySQL 5.6 (on a different machine)
JMeter: 2.13
My scenario is that I have 20 users log into the application and perform normal activity that should not create a huge load. A few minutes into the process, the CPU usage of the JBoss JVM goes up and never comes back down; it stays like that until the JVM is killed.
To help you understand better, here are a few screenshots.
I found a few posts about CPU at 100%, but nothing there matched my situation, and I could not find a solution.
Any suggestion on what to do would be great.
Regards,
Sreekanth.
To understand the root cause of the high CPU utilization, we need to look at CPU data and thread dumps at the same time.
Capture 5-6 thread dumps while the issue is occurring; likewise, capture CPU consumption on a thread-by-thread basis.
Generally, the root cause of a CPU issue is a thread problem: BLOCKED threads, long-running threads, deadlocks, long-running loops, and so on. It can be resolved by going through the stacks of those threads.
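Thread dumps can be captured with jstack <pid>. To correlate CPU time with stacks from inside the JVM, the standard ThreadMXBean API can be used; here is a minimal sketch (it assumes the JVM supports and enables per-thread CPU time measurement):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadCpuDump {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            // Print every live thread with its total CPU time and its stack,
            // so hot threads can be matched against what they are executing.
            for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
                long cpuNanos = mx.getThreadCpuTime(info.getThreadId()); // -1 if unsupported
                System.out.printf("%s cpu=%dms state=%s%n",
                        info.getThreadName(), cpuNanos / 1000000, info.getThreadState());
                for (StackTraceElement frame : info.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }

Running this (or taking several jstack snapshots a few seconds apart) shows which threads accumulate CPU time between dumps.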
I've developed a Netty-based TCP server to receive and maintain connections with GSM/GPRS-based devices and to persist their data in a MySQL database. Currently 5K connections are handled. Devices send periodic messages at intervals of 30-60 seconds, but connections are kept alive to maintain duplex communication.
The server application consumes 1-2% CPU in normal operation, with peaks up to 10%; the average load is very low. However, after 6 to 48 hours of normal operation, the server application hangs with constant 100% CPU consumption, and a thread dump indicates that the epoll selector is the reason for the high CPU usage. The application still keeps connections for a few hours, then CPU consumption increases to 200% and most of the connections are released.
At the beginning of the project we used MINA and had the same issue with 1K active connections, which is why we switched to Netty. Up to 5K connections Netty was much more stable, and the hang-up period was 1-2 weeks.
Our server configuration:
i7-2600 quad-core CPU,
8 GB RAM, CentOS 5.0,
OpenJDK 6,
Netty 3.2.4 (updated to 3.5.2 a few hours ago)
To overcome this problem we will update to JDK 7 (which has a new I/O implementation optimized for asynchronous operations) and try different operating systems, including FreeBSD and Windows Server, since each operating system has a different strategy for handling I/O.
Any help will be appreciated, thanks.
This sounds like the Epoll bug.
The app is proxying connections to backend systems. The proxy has a pool of channels that it can use to send requests to the backend systems. If the pool is low on channels, new channels are spawned and put into the pool so that requests sent to the proxy can be serviced. The pools get populated on app startup, which is why it doesn't take long at all for the CPU to spike through the roof (22 seconds into the app lifecycle). Source
Netty has a workaround built in. I'm not sure from which version, though; I will have to check later:
System.setProperty("org.jboss.netty.epollBugWorkaround", "true");
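For context, the property must be set before Netty's NIO selector classes are loaded. Here is a minimal Netty 3.x sketch of where to put it (the executor choices and bootstrap details are illustrative, not from the question):

    import java.util.concurrent.Executors;
    import org.jboss.netty.bootstrap.ServerBootstrap;
    import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

    public class EpollWorkaroundServer {
        public static void main(String[] args) {
            // Must run before any Netty NIO selector is created.
            System.setProperty("org.jboss.netty.epollBugWorkaround", "true");

            ServerBootstrap bootstrap = new ServerBootstrap(
                    new NioServerSocketChannelFactory(
                            Executors.newCachedThreadPool(),    // boss threads
                            Executors.newCachedThreadPool()));  // worker threads
            // ... set the pipeline factory and bind as usual ...
        }
    }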
I have a PHP/Apache server with 12GB of RAM. I have been running Memcached on the same machine with 6GB of allotted RAM.
I wanted to run Memcached on a separate server (same datacenter, VLAN, and subnet), just as I do for MySQL, so I set up a separate, identical server with the same memcached configuration.
I am seeing roughly 10x the page load time when using Memcached from the remote server compared with running it locally. I have primed both caches, and I still get a 10x load time from the remote server.
I'm having trouble troubleshooting this.
You're loading 500KB of data per pageload, all in small keys? How many keys per pageload is that?
Latency to a remote server is low, but making many roundtrips is still a bad idea. Memcached clients support multi-get operations, where you batch many keys into a single request/response, with much lower total latency.
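As an illustration, here is what a multi-get looks like with the spymemcached Java client (the host and key names are made up); PHP's Memcached extension exposes the same idea via getMulti():

    import java.net.InetSocketAddress;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import net.spy.memcached.MemcachedClient;

    public class MultiGetExample {
        public static void main(String[] args) throws Exception {
            MemcachedClient client = new MemcachedClient(
                    new InetSocketAddress("cache-host", 11211));

            // One network roundtrip for all keys instead of one per key.
            List<String> keys = Arrays.asList("page:header", "page:sidebar", "user:42");
            Map<String, Object> values = client.getBulk(keys);

            System.out.println(values.get("page:header"));
            client.shutdown();
        }
    }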
Just for info, DDR3-1333 is about 10,667 MB/s.
If you have, let's say, Gigabit Ethernet, that is roughly 125 MB/s at best, plus per-roundtrip latency: about two orders of magnitude slower than local memory. I guess that can explain some of the problems you are experiencing.