I am getting a lot of this error message:
Error: Redis connection to x.x.x.x:6379 failed - read ECONNRESET
I know what it means, but I don't know how it happens or how to troubleshoot it.
My case: I have a Redis server running on a Compute Engine instance for caching purposes. I have an app running on Cloud Functions that uses cachegoose to cache queries to my MongoDB, and it is configured to use that Redis server. As you can tell, that's the error message I see in the Console.
It seems that when the app sends a request to Redis, it sometimes gets through and sometimes doesn't. I say that because I can see some data going in and out of the Redis database by checking with KEYS *.
My question is: how does the ECONNRESET issue occur? Is there an issue with the connection between Cloud Functions and Compute Engine? If so, how do I troubleshoot it?
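For troubleshooting, here is a minimal sketch of the kind of client setup involved, with an error handler and reconnect backoff added so each reset gets logged with a timestamp (this assumes the node-redis v4 API; the host variable and backoff numbers are illustrative placeholders, not my actual configuration):

import { createClient } from 'redis';

const client = createClient({
    socket: {
        host: process.env.REDIS_HOST, // IP of the Compute Engine instance (placeholder)
        port: 6379,
        // Back off and reconnect instead of failing outright on ECONNRESET.
        reconnectStrategy: (retries) => Math.min(retries * 100, 3000),
    },
});

// Log resets with a timestamp so they can be correlated with idle-timeout
// drops, instance restarts, or other network events on the GCP side.
client.on('error', (err) => {
    console.error(`${new Date().toISOString()} redis error:`, err.message);
});

await client.connect();

Logging the timestamps this way makes it easier to tell whether the resets line up with idle periods, since intermediate NATs and firewalls silently dropping idle TCP connections is a common cause of ECONNRESET.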
Related
We are trying to reach an API hosted on our company network using the REST connector in ADF (a self-hosted integration runtime, SHIR, is used). The linked service connection succeeds, but the dataset is unable to read the data, and the copy activity fails as well with the error below. Please share your thoughts on resolving this.
Failure happened on 'Source' side. ErrorCode=UserErrorFailToReadFromRestResource,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=An error occurred while sending the request.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.Http.HttpRequestException,Message=An error occurred while sending the request.,Source=mscorlib,''Type=System.Net.WebException,Message=Unable to connect to the remote server,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond ,Source=System,'
This error is most often caused by firewall issues. You should verify that your network firewall settings allow the API request through.
Also verify that your API call works as expected using other API testing tools. If the issue persists, you can raise a support ticket so engineers can investigate further.
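If no dedicated testing tool is available on the SHIR machine, even a small script run from that machine can confirm whether the endpoint is reachable at all. Here is a minimal sketch in TypeScript (Node 18+ with built-in fetch; the URL is a placeholder for your internal API):

// Reachability probe to run on the SHIR host itself.
const url = 'https://internal-api.example.com/data'; // placeholder endpoint

try {
    const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
    console.log(`reachable: HTTP ${res.status}`);
} catch (err) {
    // A timeout here mirrors the SocketException in the copy activity:
    // the host is unreachable or a firewall is dropping the traffic.
    console.error('unreachable:', (err as Error).message);
}

If this probe times out from the SHIR host but succeeds from elsewhere, that points squarely at a firewall or routing rule between the SHIR machine and the API.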
If you are able to preview data in your source, then check your sink connection, as this issue can also occur when the sink in the copy activity is behind a firewall. I was getting the same issue; I tried copying to a container without a firewall and it worked. It's odd that the error refers to the source when the issue is with the sink.
We have been running a service using NestJS and TypeORM on fully managed Cloud Run without issues for several months. Yesterday afternoon we started getting Improper path /cloudsql/{SQL_CONNECTION_NAME} to connect to Postgres Cloud SQL instance "{SQL_CONNECTION_NAME}" errors in our logs.
We didn't make any server or SQL changes around that time. Currently there is no impact to the service, so we are not sure whether this is a serious issue.
This error is not from our code, and our third-party modules shouldn't know whether we use Cloud SQL, so I have no idea where these errors come from.
My assumption is that the Cloud SQL proxy, or some SQL client used by Cloud Run, is producing this error. We use the --add-cloudsql-instances flag when deploying with the "gcloud run deploy" CLI command.
Link to the issue here
This log was recently added in the Cloud Run data path to provide more context for debugging CloudSQL connectivity issues. However, the original logic was overly aggressive, emitting this message even for properly working CloudSQL connections. Your application is working correctly and should not receive this warning.
Thank you for reporting this issue. The fix is ready and should roll out soon. You should not see this message anymore after the fix is out.
I am submitting a connector to Kafka Connect. The connector being created is an SFTP connector. When the password is wrong, the create request still sends back a success response even though the connector fails; no "wrong password" response is given at that time. This is a single scenario; there could be multiple scenarios like this. When I then call <host>/connectors/<connector-name>/status, I get an error saying it failed to establish a connection. But this endpoint has a little delay: if I try it immediately after creating the connector, I may not get any response (404).
What is the proper way of handling this using the status API call? Is there some delay I need to wait before firing this API, or can it be handled while submitting the connector?
When you create the connector, it naturally needs to load the JAR(s) responsible for the tasks, then distribute the tasks to actually start the connector code (which is responsible for connecting to the SFTP server with the connection details).
Therefore, the delay is natural, and there's no way to know your connection details are incorrect unless you try to use them before launching the connector.
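A practical way to cope with that is to poll the status endpoint with a bounded retry, treating the initial 404 as "not registered yet". Here is a minimal sketch in TypeScript (Node 18+; the host and connector name are placeholders):

// Poll /connectors/<name>/status until every task is RUNNING or one has FAILED.
const base = 'http://localhost:8083'; // placeholder Connect REST host
const name = 'my-sftp-connector'; // placeholder connector name

async function waitForConnector(maxAttempts = 20, delayMs = 1000) {
    for (let i = 0; i < maxAttempts; i++) {
        const res = await fetch(`${base}/connectors/${name}/status`);
        if (res.status !== 404) { // 404 just means it isn't registered yet
            const body = await res.json();
            const states = [body.connector.state, ...body.tasks.map((t: { state: string }) => t.state)];
            if (states.includes('FAILED')) {
                throw new Error(`connector failed: ${JSON.stringify(body)}`);
            }
            if (states.length > 1 && states.every((s) => s === 'RUNNING')) {
                return body; // connector and all of its tasks are up
            }
        }
        await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
    throw new Error('connector did not reach RUNNING in time');
}

console.log(await waitForConnector());

A wrong SFTP password typically surfaces as a FAILED task whose status carries the connection error in its trace field, so a loop like this catches it instead of trusting the create request's success response.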
I am using the PHP Google Cloud client library.
// Open a read stream for the local file and upload it to the bucket.
$bucket = $this->storage->bucket($bucketName);
$object = $bucket->upload(
    fopen($localFilePath, 'r'),
    $options
);
This statement sometimes gives the following error:
production.ERROR: cURL error 56: SSL read: error:00000000:lib(0):func(0):reason(0), errno 104 (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) {"exception":"[object] (Google\Cloud\Exception\ServiceException(code: 0): cURL error 56: SSL read: error:00000000:lib(0):func(0):reason(0), errno 104 (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) at /opt/processing/vendor/google/cloud/src/RequestWrapper.php:219)
[stacktrace]
But after I re-run the code, the error is gone.
I had been running this code (a data process) for more than a year and rarely saw this error before. Now that I have moved the code to a new server, I have started to see it. (It might be that this error happened before and my old setup simply didn't catch and log these errors.)
Since the error is reported from Google Cloud (at less than a 5% error rate) and disappears when I re-run the code, I think the cause is on the Google Cloud Platform side.
Does anyone see the same errors? Is there anything we can do to prevent this error, or do we just have to code our process to retry when it pops up?
Thanks!
The error code you're getting (error 56) is defined as:
CURLE_RECV_ERROR (56)
Failure with receiving network data.
If you're getting this error, it's likely you have a network issue that's causing your connections to break. Over the Internet you can expect to see this kind of error occasionally, but rarely. If it's happening frequently, there's probably something worse going on.
These types of network issues can be caused by a huge number of things, but here are some possibilities:
Firewall or security software on your computer.
Network equipment (e.g. switches, routers, access points, firewalls, etc) or network equipment configuration.
An outage or intermittent connection between your ISP and Google (though it looks like Google wasn't detecting any outages recently).
When you're dealing with cloud storage providers (Google Cloud Storage, AWS S3, etc.), you should always build automatic retry logic into anything important. The Internet isn't always going to be perfectly reliable, and it's best to plan for that in your code instead of relying on never having a problem.
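Here is a minimal sketch of that retry logic using exponential backoff with jitter (shown in TypeScript with the Node Google Cloud client for brevity; the same wrap-and-retry pattern applies to the PHP upload call above, and all names here are illustrative):

import { Storage } from '@google-cloud/storage';

// Retry a flaky operation with exponential backoff plus jitter.
async function withRetry<T>(fn: () => Promise<T>, attempts = 5): Promise<T> {
    let lastErr: unknown;
    for (let i = 0; i < attempts; i++) {
        try {
            return await fn();
        } catch (err) {
            lastErr = err; // wait 1s, 2s, 4s, ... (plus jitter) before retrying
            await new Promise((r) => setTimeout(r, 1000 * 2 ** i + Math.random() * 250));
        }
    }
    throw lastErr;
}

const storage = new Storage();
await withRetry(() =>
    storage.bucket('my-bucket').upload('/path/to/local/file')
);

In production you would usually retry only on transient errors (network failures, 5xx responses) rather than on everything, but the shape of the wrapper is the same.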
I am trying to load some data into a table in my dashDB database but hit an error. Can I download db2diag.log from the dashDB console to find out what happened?
com.ibm.db2.jcc.am.SqlNonTransientConnectionException: [jcc][t4][10335][10366][4.18.60] Invalid operation: Connection is closed. ERRORCODE=-4470, SQLSTATE=08003 Data loading failed.
As Jeff said, you can't access the logs of this DBaaS since it is a shared resource. You will need to debug from your application's side. Are you perhaps having a concurrency problem with passing around the connection handle? Can you share your code?
This article may also help:
http://www.ibm.com/developerworks/websphere/techjournal/1205_ramachandra/1205_ramachandra.html
Details about enabling database tracing in Liberty:
http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.wlp.nd.iseries.doc/ae/twlp_dep_jdbc_trace.html?cp=SSAW57_8.5.5%2F2-3-11-0-5-3-1&lang=en
Since dashDB is a managed service, you don't have access to the logs that you normally would.