ADF-Postgres Timeout

I'm working with ADF and Azure managed Postgres. I've had a recurring issue with Lookup activities and query-sourced Copy activities timing out after about 35 seconds.
Failure happened on 'Source' side. 'Type=Npgsql.NpgsqlException,Message=Exception while reading from stream,Source=Npgsql,''Type=System.IO.IOException,Message=Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond,Source=System,'
Since the error says it's an NpgsqlException, I took a look at the Npgsql documentation and modified the connection string to set Timeout = 60 and CommandTimeout = 60 as well (Internal Command Timeout defaults to CommandTimeout).
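For reference, this is roughly the shape of the connection string after the change (server, database, and credentials are placeholders):
Host=myserver.postgres.database.azure.com;Port=5432;Database=mydb;Username=myuser;Password=***;Timeout=60;CommandTimeout=60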
And the queries still time out at ~35 seconds. Could this be a socket issue on the Azure managed instance side that's just propagating down to Npgsql?
Any help would be appreciated!

I just want to add some details, because I had the same problem (and thanks @DeliciousMalware and @Leon_Yue):
There is a default timeout of 30 seconds for requests over a Postgres connection.
There is no way to change this timeout from the Lookup activity directly.
The only option that does something is to add Timeout=600;CommandTimeout=0; to the connection string in your linked service (if you use a key vault, for example) or to add the options as additional parameters on the linked service, as in @DeliciousMalware's screenshot.
Timeout is the time allowed to establish the connection, and CommandTimeout is the timeout for the command itself (in seconds; 0 means no timeout).
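For illustration, a minimal sketch of what a linked service definition might look like with these options appended (server, database, and credentials are placeholders, and the exact JSON shape of your linked service may differ):
{
  "name": "AzurePostgreSqlLinkedService",
  "properties": {
    "type": "AzurePostgreSql",
    "typeProperties": {
      "connectionString": "host=myserver.postgres.database.azure.com;port=5432;database=mydb;uid=myuser;password=***;Timeout=600;CommandTimeout=0"
    }
  }
}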
The library behind the connection is Npgsql, and the other usable parameters are documented here: https://www.npgsql.org/doc/connection-string-parameters.html
I had a hard time finding which connection string parameters exist and what they mean, so I was really happy to find this doc. I didn't find a lot of documentation on Postgres in Azure, so I thought this list of parameters would be of some use to others.

I added the two parameters suggested by Leon, and that resolved my issue.
Here is a screenshot of the parameters being added to the linked service:
Here is a screenshot of the error and completed run:

Related

The listener for Azure function was unable to start - Microsoft.Azure.EventHubs.Processor Encountered error

I am getting the below error whilst running my Python Azure Function on the local machine in VSCode.
For clarification the message is:
The listener for function 'Functions.IoT_Data-Handler' was unable to start. Microsoft.Azure.EventHubs.Processor: Encountered error while fetching the list of EventHub PartitionIds. System.Private.CoreLib: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
This error has never occurred before in the time I have been using VSCode for Azure Functions (since last September). The only thing that has changed recently is that I now deploy this function within an Azure Functions Premium resource, but that really should not matter in the dev environment.
For information, this function is hooked up to an Azure IoT-Hub endpoint and is simply reading and processing the uplink data before saving it to an Azure SQL database.
Can anyone offer any advice?
Check whether the findings below help fix your issue:
As @PeterBons said, check that the connection string is given correctly in local.settings.json.
Whatever property name holds the Event Hub/IoT Hub endpoint connection string in local.settings.json, that same name should be referenced in the function.json file (see the sketch after this list).
Try using the IoT Hub connection string without the consumer group name, as mentioned in GitHub Issue #5512.
I found similar issues on SO (1 & 2) which may be helpful in fixing your issue.
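As a minimal sketch of that mapping, assuming the classic function.json programming model (the setting name, hub name, and connection string values are placeholders):
local.settings.json:
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "IoTHubConnection": "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...;EntityPath=..."
  }
}
function.json:
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "type": "eventHubTrigger",
      "direction": "in",
      "name": "event",
      "eventHubName": "my-iot-hub",
      "connection": "IoTHubConnection",
      "consumerGroup": "$Default"
    }
  ]
}
The value of "connection" must match the property name under "Values", not the connection string itself.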

Challenge in data from REST API using Azure Data Factory - access issue

We are trying to reach an API hosted on our company network using the REST connector in ADF (a SHIR is used). The linked service connection test is successful, but the dataset is unable to read the data, and the copy activity fails as well with the error below. Please share your thoughts on resolving this.
Failure happened on 'Source' side. ErrorCode=UserErrorFailToReadFromRestResource,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=An error occurred while sending the request.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.Http.HttpRequestException,Message=An error occurred while sending the request.,Source=mscorlib,''Type=System.Net.WebException,Message=Unable to connect to the remote server,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond ,Source=System,'
This error is mostly seen due to firewall issues. You may want to verify that your network firewall settings allow the API request through.
Also, verify that your API call is working as expected using other API testing tools. If the issue persists, you can raise a support ticket for engineers to investigate the issue further.
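For instance, a quick connectivity check you could run from the SHIR machine itself (the URL is a placeholder for your internal API):
import requests

# Placeholder URL; substitute your internal API endpoint.
# A timeout here (rather than an HTTP error response) points at a firewall/routing issue.
resp = requests.get("https://api.internal.example.com/data", timeout=30)
print(resp.status_code, resp.headers.get("Content-Type"))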
If you are able to preview data in your source, then check your sink connection, as this issue can occur when the sink in the copy activity is behind a firewall. I was getting the same issue, and when I tried copying to a container without a firewall it worked. It's odd that the error is attributed to the Source when the issue is with the Sink.

Postgres 11.8 on AWS Losing Connection/Terminating Query After 15 Minutes with No Notification or Error

I am running into an issue where multiple different client apps (DataGrip, DBeaver, Looker) have their queries cancelled after exactly 15 minutes, but no termination message or connection error is ever sent to the app. As far as the app is concerned, the query is still running even though it has been terminated in Postgres.
For example, if I run the following query, according to the client app it just runs forever. If I check pg_stat_activity, it shows the query no longer running after 15 minutes.
SELECT pg_sleep(16 * 60);
Does anyone know of a Postgres or AWS setting that would cause this? I've checked the configuration and couldn't find any settings set to a value of 15 minutes (or 900 seconds).
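For reference, this is roughly how I check for the running query (pg_stat_activity is a standard catalog view; the query is my own):
SELECT pid, state, query_start, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle';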
There is probably an ill-configured firewall that closes your session.
Assuming that the clients you are mentioning use libpq to connect to PostgreSQL, include this in the connection string:
keepalives_idle=300
See the documentation for details.
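For example, a full libpq key/value connection string might look like this (host, database, and user are placeholders):
host=mydb.abc123.us-east-1.rds.amazonaws.com port=5432 dbname=analytics user=report_user keepalives=1 keepalives_idle=300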
You could of course also configure the TCP stack on your operating system to use that value, so the problem will never surface again.
Your DB log might be able to tell you what happened.
In addition, check your statement_timeout setting. The unit is milliseconds, so you should be looking for 900000, not 900.
If it's not that, there are firewalls that kill idle connections. Setting tcp_keepalives_idle on the server could help avoid those types of problems; you can inspect both settings with the query below.
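A quick way to check both values (pg_settings is a standard view; nothing assumed beyond the setting names):
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('statement_timeout', 'tcp_keepalives_idle');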

JMeter: java.net.SocketException: Connection reset

Once the login script is executed with a few users, I don't see the connection reset problem, whereas when the same script is run with 100 users, "java.net.SocketException: Connection reset" starts being thrown for the very first link.
What I don't understand is: if there is a connection problem, it should show the same error for a single user or a few users as well.
This means that your server is rejecting connections because it is either overloaded or misconfigured.
It is normal that you don't face it with 1 user but do face it with 100; this is exactly what load testing brings, i.e. it simulates traffic against your server.
It might be the case described in the Connection Reset since JMeter 2.10? wiki page.
If you are absolutely sure that your server is not overloaded and is configured to accept 100+ connections (the defaults are good for development, not for production; they need to be tweaked), you can try to work around it as follows:
In user.properties file add the next 2 lines:
httpclient4.retrycount=1
hc.parameters.file=hc.parameters
In hc.parameters file add the following line:
http.connection.stalecheck$Boolean=true
Both files live in JMeter's bin folder.
You need to restart JMeter to pick the properties up.
The above instructions apply to the HttpClient4 implementation, so make sure you use it. The fastest and easiest way to set the HttpClient4 implementation for all HTTP Request samplers is via HTTP Request Defaults.

Jmeter Error: Java.net.SocketException: Connection reset at java.net.SocketInputStream.read(Unknown Source) at

JMeter Environment Details
I am performing JMeter testing on the Microsoft Azure cloud. I have created a VM (virtual machine) in the same cloud, and from there I am hitting the application server in the same cloud environment, so in this case there is no network latency.
Problem Statement:
I am trying to run a load test with 300 users for 30 minutes, but after 5 minutes my script starts failing because of a socket connection refused error.
My analysis, based on information available on the net:
I have read somewhere that this problem is caused by a limited socket connection limit on the server, but when I run the same test from the VM my scripts run just fine, so it's definitely not the server's issue. Can somebody please help me resolve this issue? Are there any settings in JMeter that need to be changed to increase the number of socket connections?
Actual Screenshot of Error
Most likely:
This looks like the situation described at the Connection Reset since JMeter 2.10? wiki page. If you're absolutely sure that nothing is wrong with your server, you can follow these recommendations:
Switch the "Implementation" of all your HTTP Request samplers to "HttpClient4". The fastest and easiest way of doing this is via HTTP Request Defaults.
Add the following lines to the user.properties file (in JMeter's /bin folder):
httpclient4.retrycount=1
hc.parameters.file=hc.parameters
Add (or uncomment and edit) the following line in the hc.parameters file:
http.connection.stalecheck$Boolean=true
Alternative assumption:
"Good" browsers send "Connection: close" with the last request to the web server; "bad" browsers don't and keep the connection open. You can control this behaviour via the "Use KeepAlive" checkbox in the HTTP Request sampler/Defaults. If it's unchecked, you can try ticking it.