SAPUI5 request timeout to the Gateway

I have an OData request in my SAPUI5 application which calls the Gateway.
On the Gateway, I have a trusted RFC connection to the backend.
Now I have a complex algorithm that runs for around 2 minutes.
After 60 seconds, I get a timeout error:
HTTP request failed500,Internal Server Error,500 Connection timed out
Is there a way to increase the timeout?
I tried the parameters gw/reg_timeout and gw/conn_pending, and the keepalive timeout of the RFC connection.
None of these options solved my problem.

I guess you already tried everything from SAP Help.
Maybe this is an ICM/Web Dispatcher timeout. Check the link and try some of the settings, e.g. PROCTIMEOUT, and also consider the recommendation given there:
Recommendation
In systems where the standard timeout setting of 60 seconds for the keep-alive and processing timeouts is not sufficient due to long-running applications, SAP recommends that both the TIMEOUT and PROCTIMEOUT parameters are set for the services concerned so that they can be configured independently of each other. The TIMEOUT value should not be set unnecessarily high. We recommend you set this parameter as follows:
icm/server_port_0 = PROT=HTTP,PORT=1080,TIMEOUT=60,PROCTIMEOUT=600
in order to allow a maximum processing time of 10 minutes.
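For reference, a sketch of how that could look in the instance profile (the port numbers and the HTTPS line are placeholders; check your actual icm/server_port_<n> entries, e.g. via SMICM or RZ10, and restart the ICM for the change to take effect):
# instance profile; port numbers below are placeholders
icm/server_port_0 = PROT=HTTP,PORT=1080,TIMEOUT=60,PROCTIMEOUT=600
# if the service is also reached over HTTPS, that port needs the same PROCTIMEOUT
icm/server_port_1 = PROT=HTTPS,PORT=1443,TIMEOUT=60,PROCTIMEOUT=600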

Related

What could "reason: Layer6 timeout" possibly mean?

I have a haproxy configured with two servers in the backend. Occasionally, every 16-20h one of them gets marked by haproxy as DOWN:
haproxy.log-20190731:2019-07-30T16:16:24+00:00 <local2.alert> haproxy[2716]: Server be_kibana_elastic/kibana8 is DOWN, reason: Layer6 timeout, check duration: 2000ms. 0 active and 0 backup servers left. 8 sessions active, 0 requeued, 0 remaining in queue.
I did some reading on how haproxy runs the checks, but the Layer6 timeout does not tell me much. What could be the possible reasons for that timeout? What does it actually mean?
Here is my backend configuration:
backend be_kibana_elastic
balance roundrobin
stick on src
stick-table type ip size 100k expire 12h
server kibana8 172.24.0.1:5601 check ssl verify none
server kibana9 172.24.0.2:5601 check ssl verify none
Layer 6 refers to TLS. The backend is accepting a TCP connection but isn't negotiating TLS (SSL) on the health check connection within the allowed time.
The configuration values timeout connect, timeout check, and inter all interact to determine how much time health checks are allowed to complete. The default value of inter, if not specified, is 2000 milliseconds, which is what you're seeing here. By default, inter (the health check interval) determines both how often checks run and how long they are allowed to take.
Since you have not configured a fall count for the servers, the default value of 3 applies, which means your server is actually failing 3 consecutive health checks before being marked down.
Consider adding option log-health-checks to the backend declaration, which will create additional log entries of those initial failing checks before the final one causes the backend to be marked down.
Increasing the allowable time may avoid the failure, but is probably valid only for testing -- not a fix -- because if your backend can't reliably respond to a check within 2000 ms, then it also can't reliably respond to client connections within that time frame, which is a long time to wait for a response.
Note that intermittent packet loss will typically cause sluggish behavior in increments of 3000 ms, because TCP stacks often use a retransmission timeout (RTO) of 3 seconds. Since this is more than 2000 ms, packet loss on your network is one possible explanation for the problem.
Another possible explanation is excessive CPU load on the backend, either related to traffic or to a cron job doing something intensive, because TLS negotiation -- relatively speaking -- is an expensive process from the CPU's perspective.
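A minimal sketch of that on the backend above (the inter, fall, and timeout check values are illustrative, not tuned recommendations):
backend be_kibana_elastic
    balance roundrobin
    option log-health-checks               # log each failing check, not only the final DOWN/UP transition
    default-server inter 2s fall 3 rise 2  # the defaults, just made explicit
    timeout check 5s                       # extra time allowed once the check connection is established
    stick-table type ip size 100k expire 12h
    stick on src
    server kibana8 172.24.0.1:5601 check ssl verify none
    server kibana9 172.24.0.2:5601 check ssl verify none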

Intermittent slowness in responses from vert.x based web server

I have a vertx webserver running on a 1x8g machine. It has about 15 routes mapped, 5 of which are blocking and 10 of which are non-blocking. These are all part of the one standard verticle that my app comprises. The non-blocking handlers just open an HTTP connection to another downstream system (all of which are very fast: Elasticsearch / cached data APIs). Some of the blocking handlers do take a bit of time, anywhere between 3 and 9 seconds depending on the time of day; these also call an external system.
The API response times for my non-blocking handlers are usually in the 400ms-600ms range. Occasionally, I see the response times spiking up to over 2 seconds and sometimes all the way up to 12 seconds. I'm not sure what is causing this. Is it the combination of blocking and non-blocking handlers in the same verticle?
What is the best way to diagnose the root cause here ?
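For context, a minimal Vert.x sketch of the setup being described (paths and delays are made up): handlers registered with blockingHandler run on the worker pool instead of the event loop, so how the blocking routes are registered, and how large that pool is, directly affects the tail latency of the other routes.
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;

// Sketch only: routes and timings are hypothetical stand-ins.
public class ApiVerticle extends AbstractVerticle {
  @Override
  public void start() {
    Router router = Router.router(vertx);

    // Non-blocking route: stays on the event loop, should answer quickly.
    router.get("/fast").handler(ctx -> ctx.response().end("fast response"));

    // Blocking route: executed on the worker pool, not the event loop.
    router.get("/slow").blockingHandler(ctx -> {
      try {
        Thread.sleep(5000); // stand-in for the 3-9 s external call
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
      ctx.response().end("slow response");
    }, false); // false = unordered, so slow requests don't queue behind each other

    vertx.createHttpServer().requestHandler(router).listen(8080);
  }
}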

200 vs 403 server response - which degrades server's performance more?

Some rogue people have set up server monitoring that connects to the server every 2 minutes to check if it's down (they connect from several different accounts, so they ping the server every 20 seconds or so). It's a simple GET request.
I have two options:
Leave it as it is (i.e. allow them via a normal 200 server response).
Block them by either IP or user-agent (giving 403 response).
My question is: what is the better solution as far as server performance is concerned (i.e. what is less 'stressful' on the server), 1 (200 response) or 2 (403 response)?
I'm inclined to #1, since there would be no IP / user-agent checking, which should mean less stress on the server. Correct?
It doesn't matter.
The cost of the status code and of an if-check on the user-agent string is completely dominated by network IO, GC, and the server subsystems.
If they just query every 2 minutes, I'd very much leave it alone. If they query a few hundred times per second, it's time to act.

Prediction IO: Configure Timeouts for Engine

I have a trained model that I can deploy without trouble. However, querying the API returns the response:
The server was not able to produce a timely response to your request
A simple Google search (and past experience) tells me that this is Spray telling me that the response has taken too long. I want to increase the timeout, but I can't find out how to configure the engine.
Any idea how I can change the configuration used by an engine?
From the Spray documentation:
# The time after which an idle connection will be automatically closed.
# Set to `infinite` to completely disable idle connection timeouts.
idle-timeout = 60 s
# If a request hasn't been responded to after the time period set here
# a `spray.http.Timedout` message will be sent to the timeout handler.
# Set to `infinite` to completely disable request timeouts.
request-timeout = 20 s
There are also some other timeout related settings, that you might want to adjust.
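For example, assuming the engine picks up a standard Typesafe Config (an application.conf on the classpath, or JVM system properties), an override could look like this; the values are placeholders, and spray requires idle-timeout to stay larger than request-timeout:
# application.conf (or e.g. -Dspray.can.server.request-timeout=240s on the JVM command line)
spray.can.server {
  request-timeout = 240 s   # allow long-running queries up to 4 minutes
  idle-timeout    = 300 s   # must remain larger than request-timeout
}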

Oracle Service Bus Proxy Service Scheduler

I need to create a proxy service scheduler that receives messages from the queue after 5 minutes. The queue produces messages (either one or several), but the proxy should receive those messages only at an interval of every 5 minutes. How can I achieve this using only Oracle Service Bus?
Kindly help me with this.
OSB does not provide scheduler capabilities out of the box. You can do either of the following:
For the JMS queue, configure infinite retries by not setting a retry limit, and set the retry interval to 5 minutes.
Create a scheduler. Check this post for the same: http://blogs.oracle.com/jamesbayer/entry/weblogic_scheduling_a_polling
Answer left for reference only: messages shouldn't be subject to complex computed selections in this way, only simple value comparisons and pattern matching.
To fetch only old-enough messages from the queue, without modifying the queue or the messages, without introducing any new broker between the queue and the consumer, and without prematurely consuming messages, use the Message Selector field of the OSB proxy on the JMS Transport tab to set a boolean expression (SQL 92) that checks that the message's JMSTimestamp header is at least 5 minutes older than the current time.
... but I wasn't able to quickly produce a valid message selector from either the timestamp or the JMSMessageID (it contains the time in millis: 'ID:<465788.1372152510324.0>').
I guess somebody could still use it in some specific case.
You can use Quartz scheduler APIs to create schedulers across domains.
Regards,
Sajeev
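For illustration, a bare-bones Quartz 2.x sketch of that approach (class, job, and group names are made up; the job body is where the queue consumption would be triggered):
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

// Sketch only: QueueDrainJob is a placeholder for whatever consumes the queue.
public class QueueDrainJob implements Job {
  @Override
  public void execute(JobExecutionContext context) {
    // trigger the OSB / JMS consumption logic here
  }

  public static void main(String[] args) throws SchedulerException {
    Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

    JobDetail job = JobBuilder.newJob(QueueDrainJob.class)
        .withIdentity("drainQueue", "osb")
        .build();

    // fire every 5 minutes, forever
    Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("every5min", "osb")
        .withSchedule(SimpleScheduleBuilder.simpleSchedule()
            .withIntervalInMinutes(5)
            .repeatForever())
        .build();

    scheduler.scheduleJob(job, trigger);
    scheduler.start();
  }
}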
I don't know whether this works for you, but it's working well for me. Maybe you can use this to do what you need.
Go to Transport Details of your proxy service and, under the Advanced Options tab, set the following fields:
Polling Frequency (enter your frequency here, 300 sec (5 min))
Physical Directory (you may need to give your queue path here)