Given two web apps running on the same Tomcat 6 instance: if you do an HTTP call from one app to the other, will Tomcat "short circuit" this call, or will it go all the way out on the interwebz before calling home?
@thomasz's answer shows the need for more detail. We're using Spring's RestTemplate to do the communication. Its pluggable architecture lets you provide your own ClientHttpRequestFactory.
Would it be possible to implement a ClientHttpRequest that, if the request is to localhost, persuades Tomcat to handle it internally?
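For context, this is roughly how a factory gets plugged in today - a minimal sketch using Spring's stock SimpleClientHttpRequestFactory, with made-up timeouts; the question is whether a custom implementation could bypass the network entirely:

```java
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class RestClientConfig {

    public static RestTemplate restTemplate() {
        // Any ClientHttpRequestFactory implementation could be swapped in here;
        // SimpleClientHttpRequestFactory is just the stock HTTP-based one.
        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        factory.setConnectTimeout(1000); // made-up timeouts
        factory.setReadTimeout(5000);
        return new RestTemplate(factory);
    }
}
```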
No, the request will go through all the layers, including the loopback interface. Tomcat does not treat requests to the same web container differently. After all, how could it? You are accessing some URL via URLConnection or HttpClient or a raw socket or... Tomcat would have to somehow intercept (instrument) your application's code and dynamically replace the HTTP call with some internal invocation. Possible, but very complicated.
To make matters worse, you can easily cause deadlock or starvation under high load. Imagine your Tomcat worker thread pool has 10 threads and 10 concurrent users access the same servlet at the same time. Every servlet invocation now tries to connect to the same web container, but the worker thread pool is exhausted. So all these servlets block, waiting for an idle worker thread - which will never appear, because they are occupying all of them.
Related
Right now we are running into a problem where we have a bunch of "open TCP connections" on our Windows servers that are running a Tomcat web server. The Java code is doing a SOAP call to a vendor, and we see a lot of open connections in Resource Monitor (pictured below) showing the vendor's IP address. I've tried a couple of different methods of doing the SOAP call, thinking the connection wasn't explicitly being closed somewhere behind the scenes. Nothing has worked so far, so I'm thinking that I may be misunderstanding what this page is actually showing.
What is the lifecycle for a TCP connection as it relates to the Windows Resource Monitor? Is it normal for connections that are no longer being used to stay out there for a while? If not, how do I remedy the situation?
It'll be either a connection pool or a resource leak in your code.
To make sure it's not a resource leak, check your code and confirm that whatever object is making the network call closes the connection after it's used; otherwise you'll be waiting until the garbage collector runs.
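For example, if the SOAP call goes over a plain HttpURLConnection under the hood, the pattern would look something like this (a minimal sketch; the endpoint is made up and your SOAP stack may manage the connection itself):

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SoapCallSketch {

    public static String post(String soapEnvelope) throws Exception {
        // Hypothetical endpoint; substitute the vendor's real URL.
        URL url = new URL("https://vendor.example.com/soap");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setDoOutput(true);
            OutputStream out = conn.getOutputStream();
            try {
                out.write(soapEnvelope.getBytes("UTF-8"));
            } finally {
                out.close();
            }
            // Read the response fully; an unread stream can keep the socket alive.
            InputStream in = conn.getInputStream();
            try {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[8192];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
                return buf.toString("UTF-8");
            } finally {
                in.close();
            }
        } finally {
            conn.disconnect(); // release the underlying socket when you're done with it
        }
    }
}
```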
However, if the network client supports connection pooling then closing it may only place the open connection back into a pool ready for quick re-use. You don't say which client API you're using but if it supports pooling then it should provide an API to say how long released connections remain in the pool.
There is no Windows Winsock-level pooling or persistence. If the underlying socket gets closed then that's it, it gets closed.
All
I am making REST client calls from an EJB container (IBM WebSphere v6.1) and cannot find any way to get an HTTP connection factory from WAS.
Is this possible in WAS 6.1?
I would expect to be able to access this via JNDI so that connection pool configuration, socket timeout, connection timeout, connections per URL, etc. could be centrally managed.
If not, the alternative is to use a client API such as HttpClient 4.3. But this has its own kettle of fish:
They recommend 'BasicHttpClientConnectionManager': "This connection manager implementation should be used inside an EJB container". However, this implies one connection per thread, which in an application with many threads will exhaust the resources of the OS.
The other alternative, 'PoolingHttpClientConnectionManager', seems to be a much better fit with many of the required controls, but the comments on the Basic manager say explicitly that the Pooling manager shouldn't be used in an EJB container-managed context. Scanning the code, it looks like the Pooling manager uses Future from the concurrent library but doesn't appear to use Threads directly.
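For reference, this is roughly what wiring up the Pooling manager would look like in HttpClient 4.3 - a minimal sketch; the pool limits and timeouts below are made-up values:

```java
import org.apache.http.client.config.RequestConfig;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class RestClientFactory {

    public static CloseableHttpClient create() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(50);           // total sockets across all routes (made-up value)
        cm.setDefaultMaxPerRoute(10); // sockets per target host (made-up value)

        RequestConfig config = RequestConfig.custom()
                .setConnectTimeout(2000)           // connect timeout in ms
                .setConnectionRequestTimeout(2000) // time to wait for a pooled connection
                .setSocketTimeout(5000)            // socket read timeout in ms
                .build();

        return HttpClients.custom()
                .setConnectionManager(cm)
                .setDefaultRequestConfig(config)
                .build();
    }
}
```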
Any suggestions about the best way forward would be appreciated - some options seem to be:
Test with PoolingHttpClientConnectionManager - with risk of subtle problems
Play safe with 'BasicHttpClientConnectionManager' but set short response and socket timeouts to constrain the number of concurrent sockets at the cost of lots of factory overhead. Yuk.
Some other way of getting access to the pool of HTTP connections in WAS 6.1.
Something else
Any suggestions for this rather icky problem would be ideal.
Please don't suggest upgrading WAS - although future versions (i.e. the WAS Commerce version) do seem to have a JCA HTTP adaptor, and 8.5 has a built-in REST client.
Please don't publish responses relating to MQ/JMS, JDBC connection pooling or setting up resource adaptors for EIS other than HTTP.
I have to implement software that listens for UDP packets and persists their contents to a database.
It would be handy if this could run in JBoss, as this is the infrastructure we are using now.
I have seen that Netty is ideally suited to program the listener part.
Is there a way to use Netty "embedded" in JBoss? I have searched up and down the Net and the examples I have found are all for standalone listener programs.
Of course, but you have to clarify what you mean by "embedded in JBoss". If you are writing a standard EJB application, just put the Netty bootstrap code in the @PostConstruct method of a singleton session bean and shut it down in @PreDestroy.
If it's a web application, use any servlet's init() method (the servlet must be created eagerly on startup).
Note that the EJB spec does not allow creating custom threads or listening on arbitrary ports - Netty violates both of these restrictions. But JBoss won't enforce this.
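A minimal sketch of the singleton-bean approach, assuming Netty 4 and a made-up port; the persistence call is left as a comment:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.DatagramPacket;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.util.CharsetUtil;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class UdpListenerBean {

    private EventLoopGroup group;
    private Channel channel;

    @PostConstruct
    public void start() {
        group = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioDatagramChannel.class)
                .handler(new SimpleChannelInboundHandler<DatagramPacket>() {
                    @Override
                    protected void channelRead0(ChannelHandlerContext ctx, DatagramPacket packet) {
                        String payload = packet.content().toString(CharsetUtil.UTF_8);
                        // persist the payload to the database here
                        // (e.g. via an injected DataSource or EntityManager)
                    }
                });
        channel = bootstrap.bind(9999).syncUninterruptibly().channel(); // 9999 is a made-up port
    }

    @PreDestroy
    public void stop() {
        if (channel != null) {
            channel.close().syncUninterruptibly();
        }
        if (group != null) {
            group.shutdownGracefully();
        }
    }
}
```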
Sounds like JCA might be the appropriate path.
I need to implement a TCP server with a web interface included for management.
Basically, the TCP server will be listening for new connections and keeping current ones active, while the web interface allows me to see information regarding these connections and to interact with them (e.g. send messages and see received ones).
My concerns reside in the "TCP Server" integration with the web application.
For received messages I could simply use a shared DB, but I also need to send messages to the peers connected to the TCP server.
My best bet is currently on JCA. Some research pointed me to a nice sample: http://code.google.com/p/jca-sockets.
The sample uses a Message-Driven Bean to deal with messages received over port 9000, acting as an echo server.
I am new to the Java EE 6 world and am trying to figure out why things were done one way or another in the sample (e.g. why an MDB?).
JCA has a fairly complicated spec, so at first I am trying to adapt the sample above to keep the connections active to exchange data. My next step will be to adapt it to accept a string via a servlet and forward it to a given peer.
Can someone help me out on this?
Well, first of all, using Java EE for TCP is not the best approach available. If you just need a simple TCP service with a web UI, you'd be better off using Java SE with some web container attached (Undertow works well).
On the other hand, if you need your application to integrate into the existing Java EE infrastructure your company has, JCA would be the best approach. While it's not designed for this kind of thing, JCA is the only EE subsystem liberal enough for the kind of thread management you need for TCP networking to work.
The jca-sockets sample you're referring to above is not the best example of a JCA app. It uses plain blocking Java sockets, tying up a WorkManager thread, which is not very efficient. Things have got much better since then: we have Java NIO and Netty for highly efficient raw networking to build upon. I have a JCA connector for TCP interactions which may provide you with a skeleton to build your own. Feel free to extend and contribute.
P.S. About the MDB: a message-driven bean is the only "legal" JCA approach to handling asynchronous incoming messages. Since TCP is asynchronous, you'll definitely need one in your application for things to start working. Outgoing data transfers happen through the various ConnectionFactory interfaces you'll inject into your bean. The link above will provide you with a reference ConnectionFactory implementation as well as a simple tester app utilizing both the ConnectionFactory and MDB messaging approaches.
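To illustrate the shape of the inbound side, here is a sketch only - TcpMessageListener and the "port" activation property are hypothetical; the real listener interface and activation properties are defined by whichever connector you deploy:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;

// Hypothetical listener interface that the resource adapter would define.
interface TcpMessageListener {
    void onMessage(byte[] payload);
}

// The resource adapter delivers inbound TCP payloads to this bean; outbound
// traffic would go through a ConnectionFactory injected via @Resource.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "port", propertyValue = "9000")
})
public class TcpMessageBean implements TcpMessageListener {

    @Override
    public void onMessage(byte[] payload) {
        // handle the inbound bytes: persist them, or route them to another peer
    }
}
```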
I'm stuck with JBoss and BlazeDS clustering.
What I have now is:
2 JBoss instances, running in "all" mode
One load balancer with Apache and mod_jk, as suggested by the JBoss docs
A Spring/Flex integration app
A Flex application that I do not want to throw errors when one of my JBoss instances goes down
I find the Adobe documentation really lacking, and being new to clustering, JGroups and balancing, I cannot figure out how to deploy my app in a clustered environment.
Actually, this solution is working fine for remote calls: if one of the JBoss instances goes down, the RPC gets routed to the other instance. What is not working is push messages, because if the client is connected to JBossA and JBossA goes down, the client displays an error message stating that it can't reach JBossA, when it should fail over to JBossB without the user noticing anything.
From what I understood, if configured correctly BlazeDS should tell the Flex client about failover servers upon connection. Then if the Flex client can't connect to the main server, it goes to another one. But the hard part for me is getting there.
Can someone point me to the right direction?
Thanks in advance
If you have an Apache web server sitting between the clients and the JBoss servers, with mod_jk handling communication between Apache and JBoss, then that should already meet your failover requirements.
mod_jk will detect if either of the JBoss servers fails and send requests to the other one. As far as the clients are concerned, they see a single server, which is the Apache server. They see nothing of the JBoss servers behind it.
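A minimal workers.properties sketch for that setup (hostnames and ports are made up; adjust to your two instances):

```properties
# Two AJP workers, one per JBoss instance, fronted by a load-balancer worker.
worker.list=loadbalancer

worker.jbossA.type=ajp13
worker.jbossA.host=jboss-a.example.com
worker.jbossA.port=8009

worker.jbossB.type=ajp13
worker.jbossB.host=jboss-b.example.com
worker.jbossB.port=8009

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=jbossA,jbossB
worker.loadbalancer.sticky_session=true
```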
I know nothing about BlazeDS clustering, but I'm guessing it has some form of manual failover mechanism, in which it tells clients about a list of server addresses and the clients pick one that works. This should only be necessary if you don't have a mod_jk middleman, so hopefully you can just ignore the BlazeDS clustering.
Things can, of course, get a lot more complicated, such as when you need the JBoss servers to communicate amongst themselves (e.g. session replication, clustered JMS, distributed caching, etc.), but if you don't need any of that, then you can safely ignore it.