How to edit HTTP connection settings in WildFly 8.2.1 on a Linux machine

I have deployed a simple servlet web application on WildFly 8.2.1 on RHEL 6.9. The application just takes POST requests and responds with 200 OK.
Now, when the client (a Java client using Apache Commons HttpClient) posts data to the web application, the application accepts the request, but many requests also fail on the client side with the error "Caused by: java.net.ConnectException: Connection timed out (Connection timed out)".
My assumption is that WildFly has a default limit on the number of HTTP connections that can be open at any point in time; if further requests arrive that require opening a new connection, the web server rejects them.
Could anyone here please help me with the questions below?
How can we check live open HTTP connections in RHEL 6.9? That is, which RHEL command shows how many connections are open on port 8080?
How can we tweak the default limit on HTTP connections in WildFly?
Are the HTTP connection limit and the max thread count linked to each other? If so, please let me know how they should be updated in the WildFly configuration (standalone.xml).
How many requests can WildFly keep in its queue, and what happens to requests arriving at the server when the queue is full?
NOTE: This is a kind of load test of the web server where traffic is high; I'm not sure of the exact figure, but it is high.

You're getting into some system administration topics, but I'll answer what I can. First and foremost: WildFly 8.2.1 belongs to the very first WildFly release line, and I'd strongly encourage upgrading to a newer version.
To check the number of connections in a Unix-like environment you'll want to use the netstat command. In your case, something like:
netstat -na | grep 8080 | grep EST
This will show you all the connections that are ESTABLISHED to port 8080, giving you a snapshot of the number of connections. Pipe that to wc -l to get a count.
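To illustrate the filtering, here is the same pipeline run against a few hypothetical lines of netstat output (on the real server you would pipe `netstat -an` directly into the same greps; the addresses below are made up):

```shell
#!/bin/sh
# Hypothetical sample of `netstat -an` output, to demonstrate the filter.
sample='tcp        0      0 10.0.0.5:8080     10.0.0.9:51344    ESTABLISHED
tcp        0      0 10.0.0.5:8080     10.0.0.9:51345    ESTABLISHED
tcp        0      0 10.0.0.5:8080     10.0.0.9:51346    TIME_WAIT'

# Keep only port-8080 lines in ESTABLISHED state; grep -c counts matching
# lines, equivalent to piping through `wc -l`.
echo "$sample" | grep 8080 | grep -c EST   # prints: 2
```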
Next, finding documentation on Wildfly 8.2.1 is a bit challenging now, but Wildfly 8 uses Undertow for socket I/O, which in turn uses XNIO. I found a thread that goes into some detail about configuring the I/O subsystem. Note that Wildfly 8.2.1 uses Undertow 1.1.8, which isn't documented anywhere I could find.
As for your last two questions, I believe they're related to the second one: the XNIO configuration includes settings like
<subsystem xmlns="urn:jboss:domain:io:1.0">
    <worker name="default" io-threads="5" task-max-threads="50"/>
    <buffer-pool name="default" buffer-size="16384" buffers-per-slice="128"/>
</subsystem>
but you'll need to dig deeper into the docs for details.
In Wildfly 19.1.0.Final the configuration looks similar to the code above, except that the schema version is now 3.0.
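As a sketch of what such tuning might look like on a newer WildFly: the max-connections and tcp-backlog attributes exist on Undertow's http-listener in recent versions, but the schema versions and all values below are illustrative placeholders, so verify attribute names against your own server's schema files before using them.

```xml
<!-- Illustrative sketch only; values are placeholders, not recommendations. -->
<subsystem xmlns="urn:jboss:domain:io:3.0">
    <!-- More worker task threads allow more requests in flight at once. -->
    <worker name="default" io-threads="8" task-max-threads="128"/>
    <buffer-pool name="default"/>
</subsystem>
<subsystem xmlns="urn:jboss:domain:undertow:10.0">
    <server name="default-server">
        <!-- In recent versions the listener can cap concurrent connections
             (max-connections) and size the accept queue for requests waiting
             on a connection slot (tcp-backlog). -->
        <http-listener name="default" socket-binding="http"
                       max-connections="500" tcp-backlog="1000"/>
    </server>
</subsystem>
```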

Related

Wildfly stops when running in debug mode in Eclipse

I installed Eclipse and the Jboss Tools plugin with Wildfly.
I can run Wildfly in Eclipse in non-debug mode with no problems. But when I start Wildfly in debug mode, I can use it for a few minutes, and then it suddenly stops processing and the server ends.
I checked the log and there's nothing. What could be wrong?
Please note that JBoss Tools 4.9.0 is validated against Eclipse 2018-09 but not against 2018-12.
Do you see something in the server log when the server dies?
We had this issue, and it was because we changed our configuration to close the management port, which Eclipse had been using to detect that the server had started. Since Eclipse could no longer detect startup, it shut down the process after a set time (450 seconds).
To resolve the issue, we did the following in Eclipse's Overview panel for our JBoss Server:
Changed the Start Timeout to 30, so it would fail only if the server actually couldn't start within 30 seconds, rather than waiting 450
Changed our "Server State Detectors" to detect a Web Port for Startup Poller and Process Terminated for Shutdown Poller.
Changed the Server Ports to match our new configuration
Excerpt from the JBoss Community Archive:
The tooling was unable to verify your server started. Our tooling has several methods to see if your server is up or not. The two most-often used methods are either "Web Port Poller" or "Management Poller".
You can see which your server is using by opening the server object (In Servers view, double-click your server) and on the right side you'll see a section on polling.
If your server adapter (fancy word for the tooling's representation of your server) is using the Management Port Poller, you should make sure your server is actually exposing the management port. For local servers this shouldn't be an issue, since local servers should automatically expose the management port. You may want to verify in the Ports section (also in the server editor) that the management port is correct. To check if the server is up, we run a management command against the server. If the server responds properly, we declare the server to be started.
If you're using the web port poller, then you may want to verify your web port is correct. To verify the server is up, the Web Port Poller opens a URL connection on {serverHost}:{webPort} and sees if we get a valid connection.
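To illustrate, the web-port check can be approximated from a shell with curl. This is a rough sketch of the idea, not what the tooling literally runs, and HOST/PORT are placeholders for your server's values:

```shell
#!/bin/sh
# Approximation of the Web Port Poller: open an HTTP connection to
# {serverHost}:{webPort} and treat any valid response as "started".
HOST=localhost
PORT=8080
if curl -s -o /dev/null --max-time 5 "http://$HOST:$PORT/"; then
  STATUS=up
else
  STATUS=down
fi
echo "server on $HOST:$PORT looks $STATUS"
```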

Develop Vert.x App

I'm writing an HTTP server using Vert.x 3.0, and this server listens on a certain port, for example port 8080. But as I read in the documentation, a Vert.x thread will not be stopped even when the main thread is terminated. So the next time I debug the app, port 8080 is already in use and I must deploy the server on another port.
My question is: how can I develop a Vert.x app without changing the port every time?
I have never had problems with a Vert.x thread staying alive on a port after I kill the JVM. If you use Linux, just hit Ctrl-C in the terminal; it works for me. Something similar should also work on macOS and Windows (I suppose). This should not be a problem at all when developing in Vert.x.
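If a leftover JVM from a previous debug run does end up holding the port, you can also find and kill that process directly instead of moving the server to a new port. A minimal sketch, assuming lsof is available and 8080 is the port in question:

```shell
#!/bin/sh
# Find any process still bound to the port after a debug session.
PORT=8080
PIDS=$(lsof -t -i :"$PORT" 2>/dev/null || true)
if [ -n "$PIDS" ]; then
  MSG="port $PORT held by PID(s): $PIDS"
  # kill $PIDS    # uncomment to free the port; try without -9 first
else
  MSG="port $PORT appears free"
fi
echo "$MSG"
```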

Jboss JMS Out Of Memory

I am using JBoss 5.1. We are using JMS (a topic) for posting messages, and a JMS client takes those messages; to be specific, I am using a durable subscription.
It works on many systems, but on one system I always see this error after two days.
2012-08-30 12:59:27,045 WARNING [sun.rmi.transport.tcp] (RMI TCP Accept-1101) RMI TCP Accept-11101: accept loop for ServerSocket[addr=/0.0.0.0,port=0,localport=11101] throws
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:597)
at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize
Not sure why it occurs on only one system, and only one JMS client is connected to JBoss for listening to messages.
You should mention the details of your system, in particular the OS you are running and the Java startup parameters in your JBoss start script.
Chances are that you are running out of thread resources/file descriptors, or the configured thread stack size (-Xss) leaves room for too few native threads.
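To illustrate, these OS-level limits can be checked from a shell on the affected system. A sketch (the ps invocation is commented out because it needs the real server PID):

```shell
#!/bin/sh
# "unable to create new native thread" is usually an OS-level limit being hit,
# not Java heap exhaustion. Check the limits for the user running JBoss:
MAX_PROCS=$(ulimit -u 2>/dev/null || echo unknown)  # max user processes/threads
MAX_FDS=$(ulimit -n 2>/dev/null || echo unknown)    # max open file descriptors
echo "max user processes: $MAX_PROCS, max open files: $MAX_FDS"

# To count live threads in the running JVM (replace <pid> with the server PID):
#   ps -o nlwp= -p <pid>
# Also check -Xss in the JBoss start script: the larger the per-thread stack,
# the fewer native threads the process can create.
```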
See this thread
Hope it helps

Swing Client - EJB2 lookup over HTTP in JBoss 5.1

I have a Swing client which connects to my EJB2 application deployed on JBoss 5.1. There is a particular requirement from the customer to make it available over the internet.
The deployment architecture is as follows,
swing_client --> extranet_ip |firewall | --> iis7_machine --> jboss5.1_machine.
The JNDI properties in the client are as follows:
Context.PROVIDER_URL=http://extranet_ip:9180/invoker/JNDIFactory
Context.INITIAL_CONTEXT_FACTORY=org.jboss.naming.HttpNamingContextFactory
This configuration works fine when the client is inside the intranet, but it does not work over the internet (extranet).
When I tried initially, I got the error 'Connection refused'.
After seeing some posts in various forums, I changed the file server\deploy\http-invoker.sar\META-INF\jboss-service.xml, to reflect the extranet_ip in invokerURL.
After this I am getting the following error.
org.jboss.remoting.CannotConnectException: Can not get connection to server. Problem establishing socket connection for InvokerLocator [socket://10.200.1.193:4546/?dataType=invocation&enableTcpNoDelay=true&marshaller=org.jboss.invocation.unified.marshall.InvocationMarshaller&unmarshaller=org.jboss.invocation.unified.marshall.InvocationUnMarshaller]
Where 10.200.1.193 is the intranet IP address of JBoss Server machine.
I tried changing the transport parameter in remoting-jboss-beans.xml to http, but then the client did not work in either the intranet or the extranet.
Could anybody suggest a way forward for this issue? Or is there any other way to implement RMI over HTTP in JBoss?
Update: As a solution, I had to change my deployment architecture as follows.
swing_client --> extranet_ip |firewall | --> jboss5.1_machine
where the JBoss Application Server is directly exposed through the firewall. Then update clientConnectAddress in remoting-jboss-beans.xml to the extranet IP, and open ports 8080 and 4446 in the firewall for this address.
This way the Swing client works if I use the JNDI properties as follows.
Context.PROVIDER_URL : http://extranet_ip:8080/invoker/JNDIFactory
Context.INITIAL_CONTEXT_FACTORY : org.jboss.naming.HttpNamingContextFactory
But I am still looking for a solution that does not require opening non-standard ports or exposing the Application Server directly.
After a long struggle I found a solution to my issue: change the EJB container's invoker type to http in standardjboss.xml. When the invoker is http, it uses the settings in http-invoker.sar for remote binding.
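For illustration only, the relevant pieces of standardjboss.xml look roughly like the sketch below. The binding name, MBean name, and proxy factory class are from stock JBoss 5 configuration as best I recall, so verify all of them against your own file before relying on this:

```xml
<!-- Sketch only: verify every name against your JBoss 5.1 standardjboss.xml. -->
<invoker-proxy-binding>
    <name>stateless-http-invoker</name>
    <invoker-mbean>jboss:service=invoker,type=http</invoker-mbean>
    <proxy-factory>org.jboss.invocation.http.server.HttpProxyFactory</proxy-factory>
    <!-- proxy-factory-config omitted -->
</invoker-proxy-binding>
<container-configuration>
    <container-name>Standard Stateless SessionBean</container-name>
    <!-- Point the container at the HTTP-based invoker binding. -->
    <invoker-proxy-binding-name>stateless-http-invoker</invoker-proxy-binding-name>
    <!-- remaining elements unchanged -->
</container-configuration>
```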

What is the cause and how to fix 503 errors with this in Apache error_log: "Broken pipe: ajp_ilink_send(): send failed"

I'm having intermittent problems with a servlet running on JBoss, with Apache forwarding all requests to it via mod_proxy_ajp.so.
Sometimes, for REST requests, I get 503 errors from Apache. When this happens, the Apache error_log has this in it:
[Mon Oct 12 09:10:19 2009] [error] (32)Broken pipe: ajp_ilink_send(): send failed
[Mon Oct 12 09:10:19 2009] [error] (32)Broken pipe: proxy: send failed to 127.0.0.1:8009 (localhost)
After a few failed attempts, it starts working again.
I've googled a bit and found that I'm not the only one who has encountered this problem. The only workaround I've found is to make sure that Apache is started after JBoss (I restart Apache after restarting JBoss).
The strange thing about this problem is that other servlets running in the same JBoss instance don't have it.
The servlet is CXF JAX-RS based.
Apache is 2.2.6.
When using the AJP protocol, you have to be very careful to make sure that both sides of the communication (i.e. Apache and Tomcat) are configured with the same parameters. This is because AJP uses persistent, stateful connections, and both parties need to have the same expectations of the connection lifecycle.
I suggest giving the relevant Tomcat docs a good read. You'll probably have to modify either Apache's mod_proxy_ajp config, or Tomcat's AJP connector config, or both, so that they match. If the configs are even slightly different, AJP's performance can absolutely suck.
I've experienced the same problem but haven't found the cause either. An easy solution is to drop mod_proxy_ajp in favor of mod_proxy_http, if the slight performance penalty is acceptable. It works without problems at least for a website with up to ~100 page loads per second.
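A minimal sketch of that switch, assuming JBoss's HTTP connector listens on 127.0.0.1:8080 and the application is mounted at /myapp (both placeholders for your own setup):

```apache
# Illustrative fragment: forward /myapp over plain HTTP instead of AJP.
# Requires mod_proxy and mod_proxy_http to be loaded.
ProxyPass        /myapp http://127.0.0.1:8080/myapp
ProxyPassReverse /myapp http://127.0.0.1:8080/myapp
```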
I've found that this config generator is helpful when configuring AJP connections. Starting with the generated config and reading the relevant documentation was instructive.
(You can determine the "Apache mpm" parameter by executing apachectl -l, which lists compiled-in modules.)