I'm trying to increase timeouts in Bluemix. I've set all the timeout settings to 5 minutes, but about 2 minutes into a request I get this error:
500 Error: Failed to establish a backside connection
How do I solve this problem?
"This particular message probably comes from the L1 load balancer in Bluemix when it fails to get a timely response from the application it tries to route to. One of the possible cause here is because your application does not send any response back before the load balancer times out, which is 2 minutes if my memory serves me well."
https://developer.ibm.com/answers/questions/25439/bluemix-500-error-failed-to-establish-a-backside-connection-on-web-service-call.html
I would open up a support ticket if you need any additional help.
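If the work really does take longer than the front-end load balancer allows, one common workaround is to return a response right away and run the long operation asynchronously, letting the client poll for the result. Below is a minimal servlet-style sketch of that pattern, assuming a plain Java web app; the class name, URL paths, and thread-pool size are placeholders, not anything Bluemix-specific.

import java.io.IOException;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoints: /start kicks off the slow job and answers 202 immediately,
// /status lets the client poll until the job is done, so no single HTTP request
// ever has to outlive the load balancer's 2-minute limit.
@WebServlet(urlPatterns = {"/start", "/status"})
public class LongJobServlet extends HttpServlet {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        if ("/start".equals(req.getServletPath())) {
            String id = UUID.randomUUID().toString();
            Callable<String> task = this::doSlowWork;   // the multi-minute operation
            jobs.put(id, pool.submit(task));
            resp.setStatus(202);                        // respond well inside the timeout
            resp.getWriter().print(id);
        } else {
            Future<String> job = jobs.get(req.getParameter("id"));
            if (job == null) { resp.sendError(404); return; }
            resp.getWriter().print(job.isDone() ? result(job) : "PENDING");
        }
    }

    private String doSlowWork() { return "DONE"; }      // placeholder for the real work

    private String result(Future<String> job) throws IOException {
        try { return job.get(); } catch (Exception e) { throw new IOException(e); }
    }
}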
Related
I deployed my Java web application in a Bluemix Dedicated environment and use it with a Cloudant Dedicated NoSQL DB. From this DB I tried to return 60k documents, and the server returned
500 Error: Failed to establish a backside connection
to me. So I'm wondering about the connection timeout in Bluemix; there are posts where people claim that Bluemix resets a network connection after 120 seconds if no response has been received. Is it possible to change this setting, or does someone know how to solve such a problem?
P.S. When I deploy it on my computer it works fine, though of course it takes some time. This particular case could probably be solved using Cloudant pagination, but I'm developing a service for scheduling REST calls, and if Bluemix resets all connections after 2 minutes I'll have big problems with it.
Not sure which Bluemix Dedicated you are using, but the timeout is typically global. Paging would work, and I think a WebSocket-based approach would work as well.
-r
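If you go with the paging approach mentioned above, the idea is to fetch the 60k documents in fixed-size chunks so that each individual request finishes well inside the 2-minute window. A rough sketch against Cloudant's standard _all_docs endpoint is below; the account URL, credentials, database name, and page size are placeholders. For very large result sets, key-based paging (passing the last key of one page as the startkey of the next) scales better than skip, but the shape of the loop is the same.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CloudantPager {
    public static void main(String[] args) throws Exception {
        String account = "https://ACCOUNT.cloudant.com";   // placeholder account URL
        String db = "mydb";                                 // placeholder database name
        String auth = Base64.getEncoder()
                .encodeToString("USER:PASSWORD".getBytes(StandardCharsets.UTF_8));
        int pageSize = 1000;

        // Walk _all_docs one page at a time; each request is small and returns quickly.
        for (int skip = 0; ; skip += pageSize) {
            URL url = new URL(account + "/" + db
                    + "/_all_docs?include_docs=true&limit=" + pageSize + "&skip=" + skip);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Authorization", "Basic " + auth);

            StringBuilder page = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                for (String line; (line = in.readLine()) != null; ) page.append(line);
            }
            // ... hand the page off to whatever processes the documents ...
            if (page.indexOf("\"id\":") < 0) break;   // crude end-of-data check for this sketch
        }
    }
}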
I have a WCF service hosted on IIS. When 10 to 15 users access the service at the same time, it ends with an exception stating httprequesttimedoutwithoutdetails. I have increased the send, receive, open, and close timeout configuration, but I am still getting the issue. Can anyone please suggest an idea?
I am using Camel Netty for full duplex communication over TCP socket.
My application is using the following parameters in the route.
<inOut uri="netty:tcp://{{IP-Port}}?
textline=true&sync=true&decoderMaxLineLength=1000000&autoAppendDelimiter=false&disconnect=false&producerPoolMaxActive=-1&producerPoolMinEvictableIdle=120000&keepAlive=false&noReplyLogLevel=INFO&serverExceptionCaughtLogLevel=INFO&requestTimeout=2500" />
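For reference, the same endpoint written in the Java DSL would look roughly like this; the route entry point and host:port are placeholders standing in for the {{IP-Port}} property above, and the options simply mirror the XML configuration.

import org.apache.camel.ExchangePattern;
import org.apache.camel.builder.RouteBuilder;

// Rough Java DSL equivalent of the XML route above, for readability only.
public class NettyTcpRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:send")                              // placeholder entry point
            .to(ExchangePattern.InOut,                   // matches the <inOut> element
                "netty:tcp://host:port"
                + "?textline=true&sync=true"
                + "&decoderMaxLineLength=1000000&autoAppendDelimiter=false"
                + "&disconnect=false&keepAlive=false"
                + "&producerPoolMaxActive=-1&producerPoolMinEvictableIdle=120000"
                + "&noReplyLogLevel=INFO&serverExceptionCaughtLogLevel=INFO"
                + "&requestTimeout=2500");
    }
}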
The netty component above receives requests from a preceding wiretap in the flow.
During the day, after about 8-10 hours, some of the connections show as being in the ESTABLISHED state but will not be serving any requests. Even at the server end, these connections show as ESTABLISHED, but there is no activity on them for hours.
When we looked at one connection closely, we found that the last request attempted (which was not received by the server) was writing the body to the endpoint and got an exception: org.apache.camel.processor.DefaultErrorHandler - Failed delivery for (MessageId: xxxxx on ExchangeId: ID-xxxx). On delivery attempt: 0
Since netty is being called from the wiretap, after this last request the succeeding requests are not even entertained; they are blocked in the wiretap itself.
I am collecting tcpdump later tonight for more details though.
Questions:
1. Why is producerPoolMinEvictableIdle NOT kicking in to clear such stale connections?
2. How do we clear these stale connections automatically without having to bounce the application?
3. Is there a problem using wiretap?
Appreciate suggestions to resolve this issue. Please ask for any more details needed to answer and I shall be happy to share.
Note: camel-netty 2.11.2-
I have an app using Location Updates, and it can run in the background for longer than 10 minutes. This app can communicate with Web Service A in the background both before and after the 10-minute mark. The problem I'm facing is that it cannot communicate with Web Service B when my app has been in the background for more than 10 minutes; I get a 500: internal server error. I can communicate with Web Service B when my app has been in the background for less than 10 minutes.
Note: I can communicate with Web Services A & B in the foreground as well. Also note that I use the same code/libraries to communicate with Web Services A & B whether the app is in the foreground or background.
Has anyone experienced this same problem? Can you please suggest ideas for debugging? Once my server admin is available I will ask him to analyze the request being received and also check whether the socket is being closed prematurely.
I get a 500: internal server error
This hasn't really got anything to do with iOS background services. Your application is running and communicating with the server.
To debug this issue, hook your app up to a proxy like Charles, and look for the difference between requests that succeed and requests that fail.
I suspect your session might be timing out on the server. Look at your server configuration to see if your timeout parameter matches up with what you are observing.
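If the backend happens to be a Java servlet application (an assumption here, since the question does not say what the server runs), the server-side session timeout the answer refers to is typically controlled like this; the 15-minute value is only an example.

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Illustrative only: shows where an idle-session timeout lives in a Java servlet backend.
public class SessionTimeoutExample extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        HttpSession session = req.getSession(true);
        // Sessions idle for longer than 15 minutes expire; a client that only comes
        // back after being backgrounded past that point would start over (or fail,
        // depending on how the service handles an expired session).
        session.setMaxInactiveInterval(15 * 60);
    }
}

Comparing the timing and cookies of the failing background requests against that timeout (for example in the Charles capture suggested above) should make it clear whether an expired session is the culprit.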
On our Webserver, we're seeing a ton of these errors:
Application Server last connected //psoftapp.company.net_8850
bea.jolt.ServiceException: bea.jolt.JoltRemoteService(GetCertificate)call(): Timeout
bea.jolt.SessionException: Connection recv error
bea.jolt.JoltException: [3] NwHdlr.recv(): Timeout Error
and on our Appserver:
PSPUBDSP_dflt.27505 (0) 07/20/11 08:13:33 (JNIUTIL): Java exception thrown: java.net.SocketException: Connection reset
I'm reading some tuning documents from PeopleSoft, and I found a suggestion that I've seen in a couple of places: reducing the tcp_wait_time_interval to 60 seconds. I think I sort of understand what this is doing: it seems that network (or socket?) connections that are no longer being used are "recycled" or made available again? Can someone confirm this? Also, why are these connections unused/stale? Is it caused by people not properly logging out of the app (and just closing the browser)?
Thanks!
PSPUBDSP is part of the Integration Broker application messaging framework. You could look at the Tuxedo logs or the Integration Broker Monitor to see what is going on. You may be running a high number of messages and overloading the server, or possibly you have a message with errors that is somehow causing the crashes.