SoapUI error message - soap

I am sending a SOAP request using SoapUI to fetch data from an Oracle 10g DB. SoapUI successfully displays the response when the DB returns results within 30 seconds.
But the real problem is that when the DB response exceeds 30 seconds, SoapUI displays the following error message:
Fault occurred while processing.
I have tried the following three things:
1) Increased the socket timeout to 1200000
2) Increased the timeout values in the Tomcat server config file (/conf/server.xml)
3) Checked for any NullPointerException and found none.
Please help me get a success message in SoapUI. Thanks in advance.

There are many components between the SoapUI adapter and the database engine. Most of these will have a configurable timeout.
The listener on the database server will pass the query to the database engine - and the database engine itself will have some protection against long-running queries. It's quite likely that the database is killing queries that run over 30 seconds.
You can prove this by capturing a query from your application and trying the same query directly in the database administration tool. This will tell you why the query fails (if it fails).
JDBC calls a component listening on the database server - this flow itself will have a timeout, which you can set at the JDBC level somewhere in your environment.
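For the client-side part, a minimal sketch of setting those timeouts directly at the JDBC level (assuming the Oracle thin driver; the host, credentials, and table name are hypothetical placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcTimeoutCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details - replace with your own environment.
            String url = "jdbc:oracle:thin:@dbhost:1521:ORCL";

            // Fail fast if the connection itself cannot be established (seconds).
            DriverManager.setLoginTimeout(10);

            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement()) {

                // Abort the statement from the client side if it runs longer than
                // 120 seconds; a slow query then fails here with a clear
                // SQLException instead of a generic fault further up the stack.
                stmt.setQueryTimeout(120);

                try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM some_table")) {
                    while (rs.next()) {
                        System.out.println("rows: " + rs.getLong(1));
                    }
                }
            }
        }
    }

Running the slow query this way tells you which layer gives up first: if the client-side timeout fires you get a plain SQLException with the elapsed time, whereas a kill on the database side usually surfaces as a vendor-specific error code.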

Related

REST API does not return answer back after more than 3600 seconds of processing

We have spent several weeks trying to fix an issue that occurs in the customer's production environment and does not occur in our test environment.
After several analyses, we have found that this error occurs only when one condition is met: processing times greater than 3600 seconds in the API.
The situation is the following:
SAP is connected to a server with Windows Server 2016 and IIS 10.0, where we have an API that is responsible for interacting with a DB used by an external system.
The process that we execute sends data from SAP to the API, and the API, using the data it receives from SAP and the data it obtains from the external system's DB, performs some processing and a subsequent update in the DB.
This process finishes without problems when the processing time in the API is less than 3600 seconds.
On the other hand, when the processing time is greater than 3600 seconds, the API generates the response correctly, and the server tries to return the response to SAP, but it is not possible.
Below I show an example of a server log entry when it tries to return a response after more than 3600 seconds of API processing. As you can see, a 995 error occurs: (I have censored some parts)
Any idea where the error could come from?
We have compared IIS configurations in Production and Test. We have also reviewed the parameters of the SAP system in Production and Test and we have not found anything either.
I remain at your disposal to provide any type of additional information that may be useful for solving the problem.
UPDATE 1 - 02/09/2022
After enabling FRT (Failed Request Tracing) on IIS for 200 response codes and looking at the event log of the request that is causing the error, we have seen this event at the end:
ErrorCode="The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)"
Any information about what could be causing this error?
UPDATE 2 - 02/09/2022
Comparing configurations from the customer's environment and our test environment:
There is a firewall between the SAP server and the IIS server with the default idle timeout configured for TCP (3600 seconds). This does not happen in the test environment because there is no firewall.
Establishing a firewall policy that specifies a custom idle timeout for this service (7200 seconds) solves the problem.
sc-win32 status 995: the I/O operation has been aborted because of either a thread exit or an application request.
Please check the setting of the minBytesPerSecond configuration parameter in IIS. The default value of minBytesPerSecond is 240.
Specifies the minimum throughput rate, in bytes, that HTTP.sys enforces when it sends a response to the client. The minBytesPerSecond attribute prevents malicious or malfunctioning software clients from using resources by holding a connection open with minimal data. If the throughput rate is lower than the minBytesPerSecond setting, the connection is terminated.

Kafka Connect: Error detection when worker fails

I am submitting a connector to Kafka. The connector being created is an SFTP connector. When the password is wrong, the create call still returns a success response even though the connector fails; the "wrong password" error is not reported at that time. This is just one scenario; there could be multiple scenarios like this. When I then use <host>/connectors/<connector-name>/status, I do get an error saying it failed to establish a connection, but this endpoint has a little delay: if I call it immediately after creating the connector, I may not get any response (404).
What is the proper way of handling this using the status API call? Is there a delay that needs to be applied before firing this API, or can it be handled while submitting the connector to the API?
When you create the connector, it naturally needs to load the JAR(s) responsible for the tasks, then distribute the tasks to actually start the connector code (which is responsible for connecting to the SFTP server with the connection details).
Therefore, the delay is natural, and there's no way to know your connection details are incorrect unless you try to use them before launching the connector.
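One pragmatic approach is to poll the status endpoint with a short back-off, treating 404 as "not assigned yet" rather than as a failure. A minimal sketch using the Java 11+ HttpClient (the worker URL and connector name are hypothetical, and a real client would parse the JSON instead of string-matching on it):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ConnectorStatusPoller {
        public static void main(String[] args) throws Exception {
            // Hypothetical worker URL and connector name.
            String statusUrl = "http://connect-worker:8083/connectors/sftp-source/status";
            HttpClient client = HttpClient.newHttpClient();

            for (int attempt = 1; attempt <= 10; attempt++) {
                HttpResponse<String> resp = client.send(
                        HttpRequest.newBuilder(URI.create(statusUrl)).GET().build(),
                        HttpResponse.BodyHandlers.ofString());

                // 404 just means the connector/tasks have not been assigned yet,
                // so wait and retry instead of treating it as an error.
                if (resp.statusCode() == 200) {
                    String body = resp.body();
                    System.out.println("status: " + body);
                    // Crude check; a proper client would parse the JSON and inspect
                    // connector.state and tasks[].state (RUNNING vs FAILED).
                    if (body.contains("\"FAILED\"")) {
                        System.out.println("connector or task failed - check the trace field");
                    }
                    return;
                }
                Thread.sleep(2000);  // back off before polling again
            }
            System.out.println("connector status not available after retries");
        }
    }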

Could not open JDBC Connection, Unable to get managed connection for java during load test

I noticed the error below during a load test with multiple users; it does not occur for a single SOAP request.
Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:
This could be due to any of the following:
The datasource connection pool has not been tuned (e.g. max-pool-size and blocking-timeout-millis) correctly for the maximum load on the application.
The application is leaking connections because it is not closing them and thereby returning them to the pool.
Threads with connections to the database are hanging and holding on to the connections.
Make sure that the min-pool-size and max-pool-size values for the respective datasource are set according to application load testing, and that connections are closed after use inside the application code.
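For the connection-leak point, a minimal sketch of the pattern that guarantees connections go back to the pool even when an exception is thrown (the JNDI name and query are hypothetical):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class OrderDao {
        public int countOrders() throws Exception {
            // Hypothetical JNDI name; use the one configured for your datasource.
            DataSource ds = (DataSource) new InitialContext().lookup("java:/jdbc/MyDS");

            // try-with-resources returns the connection to the pool on every code
            // path, which prevents the pool from being exhausted under load
            // (a common cause of IJ000453 at high user counts).
            try (Connection conn = ds.getConnection();
                 PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM orders");
                 ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        }
    }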
Most likely you've found the bottleneck in your application; it seems it cannot handle that many virtual users. The easiest solution would be raising an issue in your bug tracking system and letting the developers investigate it.
If you need to provide the root cause of the failure, I can think of at least two reasons for it:
Your application or application server configuration is not suitable for high loads, i.e. the number of connections in your JBoss JDBC connection pool configuration is lower than what is required for the number of virtual users you're simulating. Try amending the min-pool-size and max-pool-size values to match the number of virtual users.
Your database is overloaded and hence cannot accept that many queries. In this case you can consider load testing the database separately (i.e. fire requests at the database directly via JMeter's JDBC Request sampler, without hitting the SOAP endpoint of your application). See The Real Secret to Building a Database Test Plan With JMeter article to learn more about database load testing concepts.

Trying to create a new table column in DashDB but getting a timeout error

My problem is pretty much the same as here
DB2 deadlock timeout Sqlstate: 40001, reason code 68 due to update statements called from servlet using SQL
The problem is that I am using dashDB, which runs as a service in IBM Cloud (formerly known as Bluemix), so I don't have access to the same administrative tools a DB2 DBA would have (AFAIK).
So I have a simple table, but when I try to add a column, I get this error
SQL0911N The current transaction has been rolled back because of a deadlock or timeout. Reason code "68". SQLSTATE=40001
[IBM][CLI Driver][DB2/LINUXX8664] SQL0911N The current transaction has been rolled back because of a deadlock or timeout. Reason code "68". SQLSTATE=40001
I've stopped all other DB activity, such as other SELECT statements, and I've tried using an Eclipse JDBC-based database IDE instead of the web-based dashDB administration console provided by IBM Cloud (mainly because its authentication session ends too quickly), without success.
Try this and post if it works.
http://www-01.ibm.com/support/docview.wss?uid=swg21440972
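If you can connect over JDBC, one way to see what is holding the lock that blocks the ALTER TABLE is to query DB2's lock-wait monitoring view. A minimal sketch, assuming the SYSIBMADM.MON_LOCKWAITS administrative view is accessible with your dashDB plan and privileges (the connection details are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.Statement;

    public class LockWaitCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical dashDB connection details.
            String url = "jdbc:db2://dashdb-host:50001/BLUDB:sslConnection=true;";

            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 // Administrative view listing sessions currently waiting on locks,
                 // together with the holder of each lock.
                 ResultSet rs = stmt.executeQuery("SELECT * FROM SYSIBMADM.MON_LOCKWAITS")) {

                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        row.append(md.getColumnName(i)).append('=')
                           .append(rs.getString(i)).append(' ');
                    }
                    System.out.println(row);
                }
            }
        }
    }

If the view shows another application handle holding a lock on the table, that session (often an idle transaction left by the web console or another IDE) is what the ALTER TABLE times out waiting for.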

MongoDB connection pool is not working properly in Spring application

We have created a Spring-based application which interacts with MongoDB as the back end. We are using MongoTemplate to initiate connections. While running the application, we are experiencing connection timeouts with the error below:
com.mongodb.DBPortPool$ConnectionWaitTimeOut: Connection wait timeout after 1500 ms
We have changed the parameters below, but no luck:
connections-per-host
connect-timeout
max-wait-time
Our observation is that whenever we experience the timeouts, the number of open connections remains the same all the time, both before and after the timeouts.
Can you please help me pinpoint the issue?
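For reference, the three settings mentioned above map to the following programmatic configuration. This is only a minimal sketch, assuming the legacy MongoClient/MongoClientOptions API that matches the DBPortPool error and a Spring Data MongoDB version that still ships SimpleMongoDbFactory; the host and database names are hypothetical:

    import com.mongodb.MongoClient;
    import com.mongodb.MongoClientOptions;
    import com.mongodb.ServerAddress;
    import org.springframework.data.mongodb.core.MongoTemplate;
    import org.springframework.data.mongodb.core.SimpleMongoDbFactory;

    public class MongoConfigSketch {
        public static MongoTemplate mongoTemplate() throws Exception {
            // The three pool-related settings from the question, set programmatically.
            MongoClientOptions options = MongoClientOptions.builder()
                    .connectionsPerHost(100)     // size of the connection pool per host
                    .maxWaitTime(30000)          // ms a thread waits for a free connection
                    .connectTimeout(10000)       // ms allowed to establish a new connection
                    .build();

            MongoClient client = new MongoClient(new ServerAddress("mongo-host", 27017), options);
            return new MongoTemplate(new SimpleMongoDbFactory(client, "mydb"));
        }
    }

If the open-connection count never rises even while threads wait, it is worth checking that these options are actually applied to the MongoClient the MongoTemplate uses, rather than to a different, unused client bean.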