I have a job which uses tRestClient to get a token, then uses tRest to get data from CRM based on the token. Both of these components seem to time out sometimes while connecting to CRM. How can I set them up with 10 connection retries and a longer wait before they time out?
There is a setting for that. Check the documentation here:
https://help.talend.com/reader/7NvFnkWpbH8Gy3Rm6mUXnw/ECeCwoP1aVopmhqJe_dENA.
You need to go to the Advanced settings view of the component.
Note that the Studio has some limitations here, as you can see in the documentation.
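If the Advanced settings only expose timeouts and not a retry count, one workaround is to implement the retry yourself, for example in a custom routine that the job calls around the REST request. Below is a minimal Java sketch of such a helper; the name callWithRetries and the Callable-based wrapping are my own illustration, not a documented Talend API:

public static <T> T callWithRetries(java.util.concurrent.Callable<T> call,
                                    int retries, long waitMs) throws Exception {
    Exception last = null;
    for (int attempt = 0; attempt <= retries; attempt++) { // one try plus N retries
        try {
            return call.call();
        } catch (Exception e) {
            last = e;                 // remember the last failure
            if (attempt < retries) {
                Thread.sleep(waitMs); // wait before the next attempt
            }
        }
    }
    throw last; // every attempt failed
}

With retries = 10 and waitMs tuned to your environment, this gives the 10-attempt behaviour the question asks for, independently of the component's own timeout settings.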
We have spent several weeks trying to fix an issue that occurs in the customer's production environment and does not occur in our test environment.
After several analyses, we have found that this error occurs only when one condition is met: processing times greater than 3600 seconds in the API.
The situation is the following:
SAP is connected to a server running Windows Server 2016 and IIS 10.0, where we have an API that is responsible for interacting with a DB used by an external system.
The process we execute sends data from SAP to the API, which combines the data it receives from SAP with the data it obtains from the external system's DB, performs some processing, and then updates the DB.
This process finishes without problems when the processing time in the API is less than 3600 seconds.
On the other hand, when the processing time is greater than 3600 seconds, the API generates the response correctly, and the server tries to return the response to SAP, but it is not possible.
Below I show an example of a server log entry from when it tries to return a response after more than 3600 seconds of API processing. As you can see, a 995 error occurs (I have redacted some parts):
Any idea where the error could come from?
We have compared IIS configurations in Production and Test. We have also reviewed the parameters of the SAP system in Production and Test and we have not found anything either.
I remain at your disposal to provide any type of additional information that may be useful for solving the problem.
UPDATE 1 - 02/09/2022
After enabling FRT (Failed Request Tracing) on IIS for 200 response codes and looking at the event log of the request that is causing the error, we have seen this event at the end:
ErrorCode="The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)"
Any information about what could be causing this error?
UPDATE 2 - 02/09/2022
Comparing configurations from customer's environment and our test environment:
There is a firewall between the SAP server and the IIS server, with the default idle timeout configured for TCP (3600 seconds). This is not happening in the Test environment because there is no firewall there.
Establishing a firewall policy specifying a custom idle timeout for this service (7200 seconds) should solve the problem.
sc-win32 status 995, the I/O operation has been aborted because of either a thread exit or an application request.
Please check the setting of the minBytesPerSecond configuration parameter in IIS. The default minBytesPerSecond value is 240.
Specifies the minimum throughput rate, in bytes, that HTTP.sys
enforces when it sends a response to the client. The minBytesPerSecond
attribute prevents malicious or malfunctioning software clients from
using resources by holding a connection open with minimal data. If the
throughput rate is lower than the minBytesPerSecond setting, the
connection is terminated.
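For reference, minBytesPerSecond is configured on the webLimits element in applicationHost.config (it is a server-level setting). A sketch, assuming you want to disable the enforcement entirely by setting it to 0:

<system.applicationHost>
  <webLimits minBytesPerSecond="0" />
</system.applicationHost>

Keep the quoted warning in mind: the check exists to stop slow clients from holding connections open, so raising or disabling it trades away that protection.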
I'm submitting a connector to Kafka. The connector being created is an SFTP connector. When the password is wrong, the creation call still sends back a success response even though the connector fails; no "the password is wrong" response is given at that time. This is a single scenario, and there could be multiple scenarios like this. When I use <host>/connectors/<connector-name>/status, I get the error saying it failed to establish a connection. But this endpoint has a little delay: if I try it immediately after creating the connector, I may not get any response (404).
What is the proper way of handling this using the status API call? Is there a delay that needs to be applied before firing this API, or can it be handled while submitting the connector to the API?
When you create the connector, it naturally needs to load the JAR(s) responsible for the tasks, then distribute the tasks to actually start the connector code (which is responsible for connecting to the SFTP server with the connection details).
Therefore, the delay is natural, and there's no way to know your connection details are incorrect unless you try to use them before launching the connector.
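Given that delay, a common pattern is to poll the status endpoint with a bounded number of retries, treating a 404 as "not registered yet" and a FAILED task state as the real answer (a bad SFTP password typically surfaces there). A minimal sketch using Java's built-in HTTP client; the worker URL and connector name are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConnectorStatusPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder worker URL and connector name.
        String url = "http://localhost:8083/connectors/my-sftp-connector/status";
        for (int attempt = 1; attempt <= 10; attempt++) {
            HttpResponse<String> resp = client.send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            if (resp.statusCode() == 200) {
                String body = resp.body();
                // Crude string checks; use a JSON parser in real code.
                if (body.contains("\"state\":\"FAILED\"")) {
                    System.out.println("Connector or task failed: " + body);
                    return;
                }
                if (body.contains("\"state\":\"RUNNING\"")) {
                    System.out.println("Connector is running");
                    return;
                }
            }
            // A 404 just means the connector isn't registered yet; back off and retry.
            Thread.sleep(2000);
        }
        System.out.println("Gave up waiting for a definitive status");
    }
}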
I'm working with ADF and Azure Managed Postgres. I've had a recurring issue with Lookup activities and query-sourced Copy activities timing out after about 35 seconds.
Failure happened on 'Source' side. 'Type=Npgsql.NpgsqlException,Message=Exception while reading from stream,Source=Npgsql,''Type=System.IO.IOException,Message=Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond,Source=System,'
The error says it's an Npgsql exception, so I took a look at their documentation and modified the connection string to set Timeout = 60 and CommandTimeout = 60 as well (Internal Timeout defaults to CommandTimeout).
And the queries still time out at ~35 seconds. Could this be a socket issue with the Azure managed instance causing the timeout that's just propagating down to Npgsql?
Any help would be appreciated!
I just want to add some details because I had the same problem (and thanks #DeliciousMalware and #Leon_Yue):
There is a default timeout of 30s for requests with a Postgres connection.
There is no way to change this timeout from the Lookup activity directly.
The only option that does something is to add Timeout=600;CommandTimeout=0; to the connection string in your linked service (if you use a key vault, for example), or to add the options in the linked service's additional parameters as in #DeliciousMalware's screenshot.
Timeout is for establishing the connection, and CommandTimeout is the timeout for the command itself (in seconds; 0 means infinite).
The library behind the connection is Npgsql, and the other available parameters and their details are listed here: https://www.npgsql.org/doc/connection-string-parameters.html
I had a hard time finding which connection string parameters exist and what they mean, so I was really happy to find this doc. There isn't a lot of documentation on Postgres in Azure, so I thought this list of parameters would be of some use to others.
I added the 2 parameters suggested by Leon and that resolved the issue I had.
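For reference, the resulting Npgsql connection string in the linked service would look something like this (server, database, and credentials are placeholders):

Host=<server>.postgres.database.azure.com;Database=<database>;Username=<user>;Password=<password>;Timeout=600;CommandTimeout=0;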
Here is a screenshot of the parameters being added to the linked service:
Here is a screenshot of the error and completed run:
I cannot seem to extend the 110 second timeout for requests to my Azure Web App. I have done the following in order to increase this limit, but with no success.
ASP.NET's HTTP request execution timeout (web.config):
<system.web>
  <httpRuntime executionTimeout="600" />
</system.web>
IIS connection timeout (web.config):
<system.applicationHost>
  <webLimits connectionTimeout="00:10:00" />
</system.applicationHost>
Kudu timeout before external commands are killed (site app setting):
SCM_COMMAND_IDLE_TIMEOUT = 600
What am I missing?
request timeout of 110s
It is very odd that your request times out at 110s. From the Azure official documentation, we can see that the default timeout is about 4 minutes, and it seems we are not able to increase the request timeout. The following is the snippet from the document. Please try scaling the App Service plan up and then back down. If you still have the same issue, please contact the Azure support team for more help.
Azure Load Balancer has a default idle timeout setting of four minutes. This is generally a reasonable response time limit for a web request. If your web app requires background processing, we recommend using Azure WebJobs. The Azure web app can call WebJobs and be notified when background processing is finished. You can choose from multiple methods for using WebJobs, including queues and triggers.
WebJobs is designed for background processing. You can do as much background processing as you want in a WebJob. For more information about WebJobs, see Run background tasks with WebJobs.
Note: SCM_COMMAND_IDLE_TIMEOUT = 600 applies when your build process launches commands on the server side. For requests, the timeout will cause clients to get disconnected after 230 seconds; we can get more info from Azure Kudu's Configurable settings.
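To illustrate the queue-based hand-off the document describes: the web request only enqueues a message and returns immediately, and a queue-triggered WebJob does the long-running work. A minimal sketch with the Azure Storage Queue SDK for Java; the connection string and queue name are placeholders, and the queue is assumed to already exist:

import com.azure.storage.queue.QueueClient;
import com.azure.storage.queue.QueueClientBuilder;

public class EnqueueBackgroundWork {
    public static void main(String[] args) {
        // Placeholder storage account connection string; read it from app settings in practice.
        QueueClient queue = new QueueClientBuilder()
                .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
                .queueName("background-jobs") // assumed to exist already
                .buildClient();
        // The HTTP request can now return well within the 230-second limit;
        // a queue-triggered WebJob picks up this message and runs the long task.
        queue.sendMessage("{\"jobId\":\"1234\",\"action\":\"process\"}");
    }
}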
JMeter Environment Details
I am performing JMeter testing on the Microsoft Azure cloud. I have created a VM (virtual machine) on the same cloud, and from there I am hitting the application server in the same cloud environment, so in this case there is no network latency.
Problem Statement:
I am trying to run a load test with 300 users for 30 minutes, but after 5 minutes my script starts failing because of a "Socket connection refused" error.
My analysis, based on information available on the net:
I have read somewhere that this problem is caused by a limited socket connection limit on the server, but when I run the same test from the VM my scripts run just fine, so it's definitely not the server's issue. Can somebody please help me resolve this issue? Are there any settings that need to be changed in JMeter to increase the socket connections?
Actual Screenshot of Error
Most likely:
This looks like the situation described on the "Connection Reset since JMeter 2.10?" wiki page. If you're absolutely sure that nothing is wrong with your server, you can follow these recommendations:
Switch all your HTTP Request samplers' "Implementation" to "HTTPClient4". The fastest and easiest way of doing this is via HTTP Request Defaults.
Add the following lines to the user.properties file (in JMeter's /bin folder):
httpclient4.retrycount=1
hc.parameters.file=hc.parameters
Add (or uncomment and edit) the following line in the hc.parameters file:
http.connection.stalecheck$Boolean=true
Alternative assumption:
"Good" browsers send "Connection: close" with the last request to the web server; "bad" browsers don't, and keep the connection open. You can control this behaviour via the "Use KeepAlive" checkbox in the HTTP Request sampler/Defaults. If it's unchecked, you can try ticking it.