How to set timeout for cURL CRL checking? - powershell

curl --connect-timeout 5 --doh-url $dohUrl --max-time 10 --tlsv1.3 ....
I've tried --connect-timeout, --max-time, and both at the same time, as you can see above, but cURL still wastes a lot of time trying to check the CRL. I want to tell it to stop if the check takes longer than 5 seconds. Currently, cURL keeps trying the CRL for about 20 seconds and then throws this error:
curl: (35) schannel: next InitializeSecurityContext failed: Unknown error (0x80092013) - The revocation function was unable to check revocation because the revocation server was offline.
This is an intentional scenario that I want cURL to handle. I do not want to set --ssl-no-revoke, because that skips the CRL check entirely; I just don't want cURL to keep trying the CRL for more than 5 seconds, and I'd like it to throw that error after 5 seconds instead of 20+ seconds.
-m, --max-time
Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down. Since 7.32.0, this option accepts decimal values, but the actual timeout will decrease in accuracy as the specified timeout increases in decimal precision.
Quoting that from here. Why is cURL not respecting that parameter? I set it to 10 seconds, but it takes more than 20 seconds, stuck at the CRL-checking phase. Is the problem where in the command I place that parameter?
I don't want to do anything extra and don't want to check the certificate or CRL myself with other methods.
You can easily test it: just set incorrect DoH details in Windows settings so that DNS resolution won't work but you can still reach web resources by their IP addresses.
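For reference, a minimal sketch of the kind of command I am testing with; the DoH URL and target IP below are placeholders, not the real values:
curl -v --connect-timeout 5 --max-time 10 --tlsv1.3 --doh-url "https://doh.example/dns-query" "https://203.0.113.10/"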

Related

REST API does not return answer back after more than 3600 seconds of processing

We have spent several weeks trying to fix an issue that occurs in the customer's production environment and does not occur in our test environment.
After several analyses, we have found that this error occurs only when one condition is met: processing times greater than 3600 seconds in the API.
The situation is the following:
SAP is connected to a server with Windows Server 2016 and IIS 10.0, where we have an API that is responsible for interacting with a DB used by an external system.
The process we execute sends data from SAP to the API, which, with the data it receives from SAP and the data it obtains from the external system's DB, performs processing and a subsequent update in the DB.
This process finishes without problems when the processing time in the API is less than 3600 seconds.
On the other hand, when the processing time is greater than 3600 seconds, the API generates the response correctly, and the server tries to return the response to SAP, but it is not possible.
Below I show an example of a server log entry when it tries to return a response after more than 3600 seconds of API processing. As you can see, a 995 error occurs: (I have censored some parts)
Any idea where the error could come from?
We have compared IIS configurations in Production and Test. We have also reviewed the parameters of the SAP system in Production and Test and we have not found anything either.
I remain at your disposal to provide any type of additional information that may be useful for solving the problem.
UPDATE 1 - 02/09/2022
After enabling FRT (Failed Request Tracing) in IIS for 200 response codes and looking at the event log of the request that causes the error, we have seen this event at the end:
ErrorCode="The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)"
Any information about what could be causing this error?
UPDATE 2 - 02/09/2022
Comparing configurations from customer's environment and our test environment:
There is a firewall between the SAP server and the IIS server with the default idle timeout configured for TCP (3600 seconds). This does not happen in the test environment because there is no firewall.
By establishing a firewall policy that specifies a custom idle timeout for this service (7200 seconds), the problem is solved.
sc-win32-status 995: the I/O operation has been aborted because of either a thread exit or an application request.
Please check the setting of the minBytesPerSecond configuration parameter in IIS. The default minBytesPerSecond is 240.
Specifies the minimum throughput rate, in bytes, that HTTP.sys enforces when it sends a response to the client. The minBytesPerSecond attribute prevents malicious or malfunctioning software clients from using resources by holding a connection open with minimal data. If the throughput rate is lower than the minBytesPerSecond setting, the connection is terminated.
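If you do need to change it, here is a hedged sketch of the adjustment via appcmd (assuming the default IIS install path; a value of 0 disables the minimum-throughput check entirely, so use it with care):
%windir%\system32\inetsrv\appcmd.exe set config -section:system.applicationHost/webLimits /minBytesPerSecond:"0" /commit:apphost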

Is it normal to use "certbot renew" every 12 hours?

I have read the post about using Docker with certbot and I have a question: is it normal to run "certbot renew" every 12 hours?
I saw it in the post's command that checks for certificate expiration.
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
The post's article says about this command: "This will check if your certificate is up for renewal every 12 hours as recommended by Let’s Encrypt."
... but I can't understand: does it create a new certificate every 12 hours, or does it check the expiration first?
Thanks for your attention.
certbot renew will not necessarily renew any certificate. It will check certificate expiry dates, and if they are due to expire within 30 days it will actually renew them, otherwise it will do nothing. So it's safe to call it every 12 hours.
https://eff-certbot.readthedocs.io/en/stable/using.html#renewing-certificates
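If you want to see what renew would do without touching the real certificates, certbot also has a dry-run mode that simulates the renewal against the staging endpoint, and a subcommand that lists each certificate with its expiry date:
certbot renew --dry-run
certbot certificates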

ADF-Postgres Timeout

I'm working with ADF and Azure Managed Postgres. I've had a recurring issue with look-ups and query-sourced copy activities timing out after about 35 seconds.
Failure happened on 'Source' side. 'Type=Npgsql.NpgsqlException,Message=Exception while reading from stream,Source=Npgsql,''Type=System.IO.IOException,Message=Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond,Source=System,'
So the error says it's a Npgsql Exception, so I took a look at their documentation and modified the connection string to take Timeout = 60 and CommandTimeout = 60 as well (Internal Timeout will default to CommandTimeout).
And the queries still time out at ~35 seconds. Could this be a socket issue with the Azure managed instance causing the timeout that is just propagating down to npgsql?
Any help would be appreciated!
I just want to add some details because I had the same problem (and thanks #DeliciousMalware and #Leon_Yue):
There is a default timeout of 30 s for requests over a Postgres connection.
There is no way to change this timeout from the lookup activity directly.
The only option that does something is to add Timeout=600;CommandTimeout=0; to the connection string in your linked service (if you use a key vault, for example) or to add the options in the linked service's additional parameters, as in #DeliciousMalware's screenshot.
Timeout is for establishing the connection, and CommandTimeout is the timeout for the command itself (in seconds; 0 means infinite).
The library behind the connection is npgsql, and the other usable parameters and details are listed here: https://www.npgsql.org/doc/connection-string-parameters.html
I had a hard time finding out which connection string parameters exist and what they mean, so I was really happy to find this doc. I didn't find much documentation on Postgres in Azure, so I thought this list of parameters would be of some use to others.
I added the 2 parameters suggested by Leon and that resolved the issue I had.
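For illustration, a sketch of what the full connection string stored in the linked service (or in the Key Vault secret) might look like; the host, database, user and password are placeholders:
Host=myserver.postgres.database.azure.com;Port=5432;Database=mydb;Username=myuser;Password=<secret>;Timeout=600;CommandTimeout=0;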
Here is a screenshot of the parameters being added to the linked service:
Here is a screenshot of the error and completed run:

Postgres 11.8 on AWS Losing Connection/Terminating Query After 15 Minutes with No Notification or Error

I am running into an issue where multiple different client apps (DataGrip, DBeaver, Looker) have their queries cancelled after exactly 15 minutes, but no termination message or connection error is ever sent to the app. As far as the app is concerned, the query is still running even though it has been terminated in Postgres.
For example, if I run the following query, according to the client app it just runs forever. If I check pg_stat_activity, it shows the query no longer running after 15 minutes.
SELECT pg_sleep(16 * 60);
Does anyone know of a Postgres or AWS setting that would cause this? I've checked the configuration and couldn't find any settings set to a value of 15 minutes (or 900 seconds).
There is probably an ill-configured firewall that closes your session.
Assuming that the clients you are mentioning use libpq to connect to PostgreSQL, include this in the connection string:
keepalives_idle=300
See the documentation for details.
You could of course also configure the TCP stack on your operating system to use that value, so the problem will never surface again.
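For example, with psql or any other libpq-based client, the setting can be passed directly in the connection string; the hostname and database below are placeholders:
psql "host=mydb.example.us-east-1.rds.amazonaws.com dbname=mydb user=myuser keepalives_idle=300"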
Your DB log might be able to tell you what happened.
In addition, check your statement_timeout setting. The units are milliseconds so you should be looking for 900000, not 900.
If it's not that, there exist firewalls that kill idle connections. Setting tcp_keepalives_idle could help avoid those types of problems.
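To rule out a server-side limit, it is worth checking the current values directly; a quick sketch, run from the same database and role you normally query with:
SHOW statement_timeout;
SHOW idle_in_transaction_session_timeout;
SHOW tcp_keepalives_idle;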

crontab with wget - why is it running twice?

I have a PHP script which runs from a web service and inserts into the DB.
crontab -e
......other cron tasks above.......
...
..
..
# Run test script.php at 1610
10 16 * * * /usr/bin/wget -q -O /home/username/my_cronjobs/logs/cron_adhoc http://localhost/project/script.php
Apparently, at 16:10, this script is run twice!
16:10:01 and 16:25:02
Is something wrong, and does it have to do with using wget?
Or did I set the schedule on the cron job wrongly?
When I run http://localhost/project/script.php from the browser, it only runs once.
Any idea regarding this problem ?
I've tested it; there are no other users running the same job. I suspect it's the way wget works.
As my script needs at least 20 minutes to complete without sending back a response (it pulls a lot of data from web services and saves it to the DB), I suspect a default wget timeout or retry is causing this problem.
The wget docs give a default read-timeout of 900 seconds, or 15 minutes.
if, at any point in the download, no data is received for more than the specified number of seconds, reading fails and the download is restarted
This is why you were seeing the file called again 15 minutes later. You can specify a longer read-timeout by adding the parameter and an appropriate number of seconds:
--read-timeout=1800
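Applied to the crontab entry from the question, the line would look something like this (same path and URL as above; 1800 seconds is just an example value):
10 16 * * * /usr/bin/wget -q --read-timeout=1800 -O /home/username/my_cronjobs/logs/cron_adhoc http://localhost/project/script.php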
I think I solved my own question.
My PHP script takes some time to load, and I guess wget retries or times out after some default time.
I solved it by using /usr/bin/php instead.
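For example, a crontab line that calls the PHP CLI directly instead of fetching the page through wget; the script path under the web root is an assumption, so adjust it to your setup:
10 16 * * * /usr/bin/php /var/www/html/project/script.php >> /home/username/my_cronjobs/logs/cron_adhoc 2>&1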
Which user's crontab is this?
Check whether there is another user for whom you set up the cron job at a different time and forgot about it.