Configuring wait time of SOAP Request node and SOAP Input node in IIB

I am using IIB V10. Can the maximum client wait time of the SOAP Input node be increased beyond its default of 180 seconds? And can the request timeout of the SOAP Request node be raised from its default of 120 seconds to a higher value?

The IIB documentation describes these timeouts in detail:
maxClientWaitTime of the SOAP Input node.
requestTimeout of the SOAP Request node.
You can configure these values either directly in the flow, as properties of the nodes, or via BAR overrides before deployment.
There is also a general topic, Configuring message flows to process timeouts, which describes the timeout handling of these synchronous nodes.

Related

is RabbitMQ queueing system unnecessary in a Kubernetes cluster?

I have just been certified CKAD (Certified Kubernetes Application Developer) by The Linux Foundation.
And now I am wondering: is a RabbitMQ queueing system unnecessary in a Kubernetes cluster?
We use workers with a queueing system in order to avoid the 30-second HTTP timeout. Say, for example, we have a microservice that generates big PDF documents, taking about 50 seconds each, and we have 20 documents to generate right now; the classical scheme is a worker that queues and processes the documents one by one (this is the case at the company I have been working for lately).
But in a Kubernetes cluster there is, by default, no timeout for HTTP requests going inside the cluster. You can wait 1000 seconds without any issue (20 documents * 50 seconds = 1000 seconds).
Given this last point, is it enough to say that a RabbitMQ queueing system (via the amqplib module) is useless in a Kubernetes cluster? Moreover, Kubernetes load-balances so well across the replicas of each of your microservices...
But in a Kubernetes cluster by default there is no timeout for http request going inside the cluster.
Not sure where you got that idea. Depending on your config there might be no timeouts at the proxy level, but there are still client and server timeouts to consider. Kubernetes doesn't change what you deploy, just how you deploy it. There are certainly options other than RabbitMQ specifically, and other system architectures you could consider, but "queue workers" is still a very common pattern and likely will be forever, even as the tech around it changes.
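To make the trade-off concrete, here is a minimal, self-contained Python sketch of the queue-worker pattern itself, using the standard library's queue.Queue in place of a real broker (generate_pdf and all timings here are invented for illustration): the caller enqueues work and returns immediately, while a worker drains the queue in the background. No amount of raising HTTP timeouts gives you that decoupling.

```python
import queue
import threading
import time

def generate_pdf(doc_id):
    # Stand-in for the ~50 s PDF generation job (shortened here).
    time.sleep(0.01)
    return f"pdf-{doc_id}"

jobs = queue.Queue()   # in-process stand-in for a broker queue
results = []

def worker():
    while True:
        doc_id = jobs.get()
        if doc_id is None:      # sentinel: shut the worker down
            jobs.task_done()
            break
        results.append(generate_pdf(doc_id))
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The "HTTP handler" enqueues 20 documents and can respond immediately;
# the worker processes them in the background.
for doc_id in range(20):
    jobs.put(doc_id)
jobs.put(None)
t.join()
print(len(results))  # → 20
```

With a real broker the queue also survives pod restarts and lets you scale workers independently of the API replicas, which is orthogonal to what Kubernetes load balancing does.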

How to recover JMS inbound-gateway containers on Active MQ server failure when number of retry is changed to limited

A JMS inbound gateway is used for request processing on the worker side. A CustomMessageListenerContainer class is configured to impose a limited number of back-off attempts.
In some scenarios, when the ActiveMQ server does not respond before the max-attempts limit is reached, the container is stopped with the message below:
"Stopping container for destination 'senExtractWorkerInGateway': back-off policy does not allow for further attempts."
Is there any configuration available to recover these containers once ActiveMQ is available again?
A sample configuration is given below.
<int-jms:inbound-gateway
    id="senExtractWorkerInGateway"
    container-class="com.test.batch.worker.CustomMessageListenerContainer"
    connection-factory="jmsConnectionFactory"
    correlation-key="JMSCorrelationID"
    request-channel="senExtractProcessingWorkerRequestChannel"
    request-destination-name="senExtractRequestQueue"
    reply-channel="senExtractProcessingWorkerReplyChannel"
    default-reply-queue-name="senExtractReplyQueue"
    auto-startup="false"
    concurrent-consumers="25"
    max-concurrent-consumers="25"
    reply-timeout="1200000"
    receive-timeout="1200000"/>
You could emit an ApplicationEvent from the applyBackOffTime() of your CustomMessageListenerContainer when the super call returns false. That way you would know that something is wrong with the ActiveMQ connection. At that moment you also need to stop() your senExtractWorkerInGateway; just autowire it into some controlling service as a Lifecycle. When you are done fixing the connection problem, you simply start this senExtractWorkerInGateway back up, and the CustomMessageListenerContainer is started automatically with it.
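The recovery pattern above can be sketched language-agnostically. The following Python mock (all class and method names here are hypothetical stand-ins, not Spring APIs) shows the flow: the container notifies a controlling service when its back-off policy is exhausted, the service stops the gateway, and it restarts the gateway once the broker is reachable again.

```python
class Gateway:
    """Stands in for the inbound gateway's Lifecycle (start/stop)."""
    def __init__(self):
        self.running = False
    def start(self):
        self.running = True
    def stop(self):
        self.running = False

class Container:
    """Stands in for a listener container with a limited back-off policy."""
    def __init__(self, max_attempts, on_exhausted):
        self.max_attempts = max_attempts
        self.on_exhausted = on_exhausted  # plays the role of the ApplicationEvent
        self.failures = 0
    def apply_back_off_time(self):
        # In the real container this returns False when the back-off policy
        # does not allow further attempts; here we just count failures.
        self.failures += 1
        if self.failures >= self.max_attempts:
            self.on_exhausted()          # notify the controlling service
            return False
        return True

gateway = Gateway()
gateway.start()

events = []
container = Container(max_attempts=3,
                      on_exhausted=lambda: (events.append("exhausted"),
                                            gateway.stop()))

# Broker down: three failed attempts exhaust the back-off policy,
# and the controlling service stops the gateway in response.
while container.apply_back_off_time():
    pass

# Broker back: restart the gateway; the container starts with it.
container.failures = 0
gateway.start()
print(gateway.running)  # → True
```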

What is a service in APM?

From the APM docs: https://www.elastic.co/guide/en/apm/get-started/7.12/transactions.html
Transactions are a special kind of span that have additional
attributes associated with them. They describe an event captured by an
Elastic APM agent instrumenting a service. You can think of
transactions as the highest level of work you’re measuring within a
service. As an example, a transaction might be a: ...
How is a "service" defined?
A service is an application that connects to the APM server and sends all of its metrics there.
Almost all backend languages connect to APM using a client library.
I use it in Golang; the library provides an agent that wraps the router and captures all metrics, for both HTTP and gRPC.
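As a toy illustration of what such an agent does (this is a hand-rolled Python sketch, not the elastic-apm API; instrument, get_user, and the field names are all invented), every call through the instrumented service is recorded as a "transaction" with a name and a duration, tagged with the service name:

```python
import time

transactions = []  # stand-in for what an agent would ship to the APM server

def instrument(service_name, handler):
    """Wrap a request handler so each call is recorded as a transaction."""
    def wrapped(request_path):
        start = time.perf_counter()
        try:
            return handler(request_path)
        finally:
            transactions.append({
                "service": service_name,                 # the "service" in APM terms
                "transaction": f"GET {request_path}",    # highest-level unit of work
                "duration_s": time.perf_counter() - start,
            })
    return wrapped

def get_user(path):
    return {"user": "alice"}

handler = instrument("user-service", get_user)
handler("/users/alice")
print(transactions[0]["service"])  # → user-service
```

Real agents do this transparently at the router level, so every endpoint becomes a transaction without per-handler wrapping.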

How Istio's sampling rate works with errors?

My question is about Istio in Kubernetes. I have an Istio sampling rate of 1%, and an error occurs in a request that is not included in that 1%. Will I see a trace for this error in Jaeger?
I am fairly new to Kubernetes and Istio, which is why I can't test this on my own. I have been playing with Istio's Bookinfo example application, and I wonder whether I would see a trace for an error that is not included in the 1% sample rate.
Istio was configured at installation with:
pilot.traceSampling=1
In short: can I see an error that is not included in the sample rate, and if not, how do I configure Istio so that I can?
If the sampling rate is set to 1%, then on average only one request in every 100 is traced, so an error only becomes likely to show up in Jaeger once it has occurred on the order of 100 times; sampling is probabilistic, so a particular failing request may or may not be captured.
This is mentioned at Distributed Tracing - Jaeger:
To see trace data, you must send requests to your service. The number of requests depends on Istio’s sampling rate. You set this rate when you install Istio. The default sampling rate is 1%. You need to send at least 100 requests before the first trace is visible. To send a 100 requests to the productpage service, use the following command:
$ for i in `seq 1 100`; do curl -s -o /dev/null http://$GATEWAY_URL/productpage; done
If you are not seeing the error in the current sample, I would advise raising the sampling rate.
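A quick back-of-the-envelope check in Python (assuming each request is sampled independently with probability 1%, which is how random head-based sampling behaves) shows why "100 requests" is an average spacing between traces, not a guarantee:

```python
p = 0.01          # 1% sampling rate
n = 100           # requests carrying the error

# Expected number of traced error requests after n occurrences.
expected_traces = n * p

# Probability that at least one of the n occurrences was sampled.
p_at_least_one = 1 - (1 - p) ** n

print(expected_traces)            # → 1.0
print(round(p_at_least_one, 3))   # → 0.634
```

So even after 100 occurrences there is only about a 63% chance that a trace of the error exists in Jaeger, which is why raising the sampling rate is the reliable fix when you need to capture a specific failure.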
You can read about Tracing context propagation which is being done by Envoy.
Envoy automatically sends spans to tracing collectors
Alternatively the trace context can be manually propagated by the service:
When using the LightStep tracer, Envoy relies on the service to propagate the x-ot-span-context HTTP header while sending HTTP requests to other services.
When using the Zipkin tracer, Envoy relies on the service to propagate the B3 HTTP headers (x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, and x-b3-flags). The x-b3-sampled header can also be supplied by an external client to either enable or disable tracing for a particular request. In addition, the single b3 header propagation format is supported, which is a more compressed format.
When using the Datadog tracer, Envoy relies on the service to propagate the Datadog-specific HTTP headers (x-datadog-trace-id, x-datadog-parent-id, x-datadog-sampling-priority).

The target server failed to respond for multiple iterations in JMeter

In my JMeter script, I am getting an error on the 2nd iteration.
With multiple users and a single iteration, no errors were observed, but with multiple iterations I get an error with the message below:
Response code: Non HTTP response code: org.apache.http.NoHttpResponseException
Response message: Non HTTP response message: The target server failed to respond
Response data: The target server failed to respond
Could you please suggest what the reason behind this error could be?
Thanks in advance.
Most likely your server is becoming overloaded. As for the possible reason, my expectation is that a single iteration does not deliver the full concurrency, because JMeter works like this:
JMeter starts all the virtual users within the specified ramp-up period
Each virtual user starts executing samplers
When there are no more samplers to execute and no loops to iterate, the thread is shut down
So with 1 iteration you may run into a situation where some threads have already finished their job while others have not yet started. When you add more iterations, the "old" threads start over while "new" ones are still arriving. The situation is explained in the JMeter Test Results: Why the Actual Users Number is Lower than Expected article, and you can monitor the actual delivered load using the Active Threads Over Time chart of the HTML Reporting Dashboard, or the Active Threads Over Time listener available via JMeter Plugins.
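The effect is easy to reproduce with a rough simulation (illustrative Python with made-up numbers, not taken from the question): threads start spread across the ramp-up period, so with a single short iteration the early threads finish before the late ones start, and full concurrency is never reached.

```python
def peak_concurrency(threads, rampup_s, iteration_s, iterations):
    """Peak number of simultaneously active threads in an idealized run."""
    # Thread i starts i * (rampup / threads) seconds into the test
    # and stays busy for iterations * iteration_s seconds.
    starts = [i * rampup_s / threads for i in range(threads)]
    busy = iterations * iteration_s
    # Concurrency can only peak at a thread-start instant.
    return max(
        sum(1 for s in starts if s <= t < s + busy)
        for t in starts
    )

# 20 threads over a 60 s ramp-up, each iteration taking 2 s:
print(peak_concurrency(20, 60, 2, iterations=1))   # → 1  (threads never overlap)
print(peak_concurrency(20, 60, 2, iterations=30))  # → 20 (full load is reached)
```

In other words, adding iterations is what lets the later threads arrive while the earlier ones are still working, which is exactly when the server first sees the full load and may start dropping connections.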
To get to the bottom of the failure I would recommend checking the following:
logs of the components on the application-under-test side (application logs, application/web server logs, database logs)
baseline health metrics of the application under test (CPU, RAM, disk, etc.). You can use the JMeter PerfMon Plugin; this way you will be able to correlate increasing load with resource consumption