NOSRV errors seen in haproxy logs

We have haproxy in front of two Apache servers, and every day, for less than a minute, I see NOSRV errors in the haproxy logs. There are successful requests from the same source IP, so the failures are intermittent. There is no corresponding error in the backend logs.
Below is a snippet from the access logs:
Dec 22 20:21:25 proxy01 haproxy[3000561]: X.X.X.X:60872 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:43212 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:43206 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:26 proxy01 haproxy[3000561]: X.X.X.X:60974 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 0 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32772 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 103 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32774 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 59 0
Dec 22 20:21:27 proxy01 haproxy[3000561]: X.X.X.X:32776 Local_Server~ Local_Server/<NOSRV> -1/-1/-1/ -1 0 0 0 {} "POST /xxxxtransaction HTTP/1.1" 57 0
Below is the HAProxy config file:
defaults
    log global
    timeout connect 15000
    timeout check 5000
    timeout client 30000
    timeout server 30000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend Local_Server
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/
    mode http
    option httplog
    cookie SRVNAME insert indirect nocache maxidle 8h maxlife 8h
    #capture request header X-Forwarded-For len 15
    #capture request header Host len 32
    http-request capture req.hdrs len 512
    log-format "%ci:%cp[%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
    #log-format "%ci:%cp %ft %b/%s %Tw/%Tc/%Tr/ %ST %B %rc %bq %hr %hs %{+Q}r %Tt %Ta"
    option dontlognull
    option http-keep-alive
    # declare whitelists for URLs
    acl xx_whitelist src -f /etc/haproxy/xx_whitelist.lst
    acl is-blocked-ip src -f /etc/haproxy/badactors-list.txt
    http-request silent-drop if is-blocked-ip
    acl all src 0.0.0.0
    ######### ANTI BAD GUYS STUFF ###########################################
    # anti-DDoS stick table - silently drops clients whose request rate goes over the limit
    # frontend half of the stick table; see the "st_src_global" backend below
    # restricts the number of requests in the last 10 secs
    # TO MONITOR, RUN " watch -n 1 'echo "show table st_src_global" | socat unix:/run/haproxy/admin.sock -' " ON THE CLI.
    # ZZZ THIS MAY NEED DISABLING FOR LOAD TESTS ZZZ
    # Table definition
    http-request track-sc0 src table st_src_global # <- tracks the client IP in the stick table
    stick-table type ip size 100k expire 10s store http_req_rate(50000s) # <- rate-measurement period and how long idle IP entries are kept
    http-request silent-drop if { sc_http_req_rate(0) gt 50000 } # drops if the tracked request rate is greater than 50000
    # Allow clean known IPs to bypass the filter
    tcp-request connection accept if { src -f /etc/haproxy/xx_whitelist.lst }
    # Slowloris protection - send a 408 if the HTTP request is not completed in time
    timeout http-request 10s
    option http-buffer-request
    # Block specific requests
    #http-request deny if HTTP_1.0
    http-request deny if { req.hdr(user-agent) -i -m sub phantomjs slimerjs }
    # traffic shaping
    #xxxx.xxxx.xx.xx
    acl xxxx.xxxx.xx.xx hdr(host) -i xxxx.xxxx.xx.xx
    use_backend xxxx.xxxx.xx.xx if xxxx.xxxx.xx.xx xx_whitelist # update from proxies

# stick table for DoS protection
backend st_src_global
    stick-table type ip size 1m expire 10s store http_req_rate(50000s)

backend xxxx.xxxx.xx.xx
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    server web01-http x.x.x.x:80 check maxconn 100
    server web03-http x.x.x.x:80 check maxconn 100
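
To narrow down where the NOSRV responses come from, HAProxy's runtime socket can be queried around the time of an incident. This is a minimal sketch assuming the stats socket referenced in the config comments (/run/haproxy/admin.sock); backend, server, and table names must match your real config:
# was any backend server marked DOWN? (field 18 of the CSV output is the status column)
echo "show stat" | socat unix:/run/haproxy/admin.sock - | cut -d, -f1,2,18
# is the anti-DDoS stick table currently tracking the affected client IP?
echo "show table st_src_global" | socat unix:/run/haproxy/admin.sock -
If the NOSRV bursts line up with entries in the stick table or the badactors list, the silent-drop rules are a likely source; if a server shows DOWN, look at the health checks instead.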

Related

How does maxRequestsPerConnection of istio work?

Hello, everyone.
I have been learning Istio, and to understand how maxRequestsPerConnection works, I applied the manifest below.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      http:
        maxRequestsPerConnection: 1
httpbin is a sample service of istio.
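For anyone reproducing this, httpbin can be deployed from the samples directory shipped with an Istio release (the path below assumes you are in the unpacked release tree):
kubectl apply -f samples/httpbin/httpbin.yaml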
I thought maxRequestsPerConnection meant how many HTTP requests are allowed per TCP connection, and that in this case Istio would close the TCP connection after the pod received one HTTP request.
After applying the manifest, I sent some HTTP requests using telnet. I expected Istio to accept one request and then close the TCP connection, but it didn't.
$ telnet httpbin 8000
Trying 10.76.12.133...
Connected to httpbin.default.svc.cluster.local.
Escape character is '^]'.
GET /get HTTP/1.1
User-Agent: Telnet [ja] (Linux)
Host: httpbin
HTTP/1.1 200 OK
server: envoy
date: Sun, 07 Nov 2021 14:14:16 GMT
content-type: application/json
content-length: 579
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 9
{
  "args": {},
  "headers": {
    "Host": "httpbin",
    "User-Agent": "Telnet [ja] (Linux)",
    "X-B3-Parentspanid": "b042ad708e2a47a2",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "b6a08d45e1a1e15e",
    "X-B3-Traceid": "fc23863eafb0322db042ad708e2a47a2",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=d9bb27f31fe44200f803dbe736419b4664b5b81045bb3811711119ca5ccf6a37;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.6",
  "url": "http://httpbin/get"
}
GET /get HTTP/1.1
User-Agent: Telnet [ja] (Linux)
Host: httpbin
HTTP/1.1 200 OK
server: envoy
date: Sun, 07 Nov 2021 14:14:18 GMT
content-type: application/json
content-length: 579
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 3
{
  "args": {},
  "headers": {
    "Host": "httpbin",
    "User-Agent": "Telnet [ja] (Linux)",
    "X-B3-Parentspanid": "85722c0d777e8537",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "31d2acc5348a6fc5",
    "X-B3-Traceid": "d7ada94a092d681885722c0d777e8537",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=d9bb27f31fe44200f803dbe736419b4664b5b81045bb3811711119ca5ccf6a37;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.6",
  "url": "http://httpbin/get"
}
After this, I sent the HTTP request ten times using fortio and got the same result.
$ kubectl exec "$FORTIO_POD" -c fortio -- /usr/bin/fortio load -c 1 -qps 0 -n 10 -loglevel Warning http://httpbin:8000/get
14:22:56 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.17.1 running at 0 queries per second, 2->2 procs, for 10 calls: http://httpbin:8000/get
Starting at max qps with 1 thread(s) [gomax 2] for exactly 10 calls (10 per thread + 0)
Ended after 106.50891ms : 10 calls. qps=93.889
Aggregated Function Time : count 10 avg 0.010648204 +/- 0.01639 min 0.003757335 max 0.059256801 sum 0.106482036
# range, mid point, percentile, count
>= 0.00375734 <= 0.004 , 0.00387867 , 30.00, 3
> 0.004 <= 0.005 , 0.0045 , 70.00, 4
> 0.005 <= 0.006 , 0.0055 , 80.00, 1
> 0.012 <= 0.014 , 0.013 , 90.00, 1
> 0.05 <= 0.0592568 , 0.0546284 , 100.00, 1
# target 50% 0.0045
# target 75% 0.0055
# target 90% 0.014
# target 99% 0.0583311
# target 99.9% 0.0591642
Sockets used: 1 (for perfect keepalive, would be 1)
Jitter: false
Code 200 : 10 (100.0 %)
Response Header Sizes : count 10 avg 230.1 +/- 0.3 min 230 max 231 sum 2301
Response Body/Total Sizes : count 10 avg 824.1 +/- 0.3 min 824 max 825 sum 8241
All done 10 calls (plus 0 warmup) 10.648 ms avg, 93.9 qps
$
In my understanding, the message "Sockets used: 1 (for perfect keepalive, would be 1)" means fortio used only one TCP connection.
At first I guessed that clients open a separate TCP connection for each HTTP request, but if that were true, the telnet connection should have been closed by the foreign host, and fortio should have used ten TCP connections.
Please teach me what maxRequestsPerConnection actually does.
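
One thing worth checking, assuming maxRequestsPerConnection maps to the Envoy cluster setting of the same name and is therefore enforced by the client-side sidecar on its upstream connections to the pod rather than on the downstream client socket: compare Envoy's upstream connection counter before and after the fortio run. A sketch (the grep pattern for the stat name is an assumption):
# count upstream connections the client's sidecar has opened to httpbin
kubectl exec "$FORTIO_POD" -c istio-proxy -- pilot-agent request GET stats | grep 'httpbin.*upstream_cx_total'
If that counter grows by roughly one per request while fortio still reports a single downstream socket, the limit is being applied one hop further upstream than the telnet test can observe.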

cURL fails in GitHub Actions

I'm running a Raspberry Pi 4 as a server for a .NET Core side project of mine. Nothing too fancy or heavy. After failing to get a webhook going that uploaded files to the Pi with scp (I still don't know why at this point; the scp problem might be the same as the cURL problem), I decided to write a small API which accepts a file and deploys it to the specified path. The API works both from inside and outside the Pi; I've tested it using cURL and Postman with a 20 MB zip file. But when I run this command from inside a GitHub Action, I get a long wait and then a failure message.
Command:
curl --request POST --url https://example.com/ --header 'cache-control: no-cache' --form path=DEPLOY_PATH --form archive=@FILE_PATH --form token=TOKEN
Output:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 8776k 0 0 0 65536 0 42750 0:03:30 0:00:01 0:03:29 42722
0 8776k 0 0 0 65536 0 25862 0:05:47 0:00:02 0:05:45 25852
0 8776k 0 0 0 65536 0 18533 0:08:04 0:00:03 0:08:01 18528
0 8776k 0 0 0 65536 0 14444 0:10:22 0:00:04 0:10:18 14441
0 8776k 0 0 0 65536 0 11833 0:12:39 0:00:05 0:12:34 13091
0 8776k 0 0 0 65536 0 10022 0:14:56 0:00:06 0:14:50 0
0 8776k 0 0 0 65536 0 8691 0:17:14 0:00:07 0:17:07 0
...
0 8776k 0 0 0 65536 0 63 39:37:37 0:17:11 39:20:26 0
0 8776k 0 0 0 65536 0 63 39:37:37 0:17:11 39:20:26 0
curl: (55) SSL_write() returned SYSCALL, errno = 110
##[error]Process completed with exit code 55.
Both the scp and cURL commands seem to share a common problem. If I send a simple text file, or a tar.gz containing a text file, it works. If I do the same with a .dll file, or a tar.gz containing a .dll file, it does not. I don't know whether the problem is the file contents or the size. Note that the API currently accepts files as large as 100 MB, and I'm only trying to deploy a small package of ~10 MB.
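One way to separate file contents from file size as the cause is to upload a random binary of comparable size. A sketch using the same hypothetical endpoint and form fields as above:
# build a 10 MB file of random bytes and try to deploy it
dd if=/dev/urandom of=test.bin bs=1M count=10
curl --request POST --url https://example.com/ --header 'cache-control: no-cache' --form path=DEPLOY_PATH --form archive=@test.bin --form token=TOKEN
If this also stalls, the problem tracks size (or something on the network path), not the .dll contents.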
Output with -v arg:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying IP...
* TCP_NODELAY set
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to URL (IP) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
} [5 bytes data]
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* TLSv1.3 (IN), TLS handshake, Server hello (2):
{ [112 bytes data]
* TLSv1.2 (IN), TLS handshake, Certificate (11):
{ [2861 bytes data]
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
{ [300 bytes data]
* TLSv1.2 (IN), TLS handshake, Server finished (14):
{ [4 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
} [37 bytes data]
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
} [1 bytes data]
* TLSv1.2 (OUT), TLS handshake, Finished (20):
} [16 bytes data]
* TLSv1.2 (IN), TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=URL
* start date: Sep 18 16:51:41 2020 GMT
* expire date: Dec 17 16:51:41 2020 GMT
* subjectAltName: host "URL" matched cert's "URL"
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
} [5 bytes data]
> POST /api/Deployment/ HTTP/1.1
> Host: URL
> User-Agent: curl/7.58.0
> Accept: */*
> cache-control: no-cache
> Content-Length: 3333
> Content-Type: multipart/form-data; boundary=------------------------ca1748c91973ca89
> Expect: 100-continue
>
{ [5 bytes data]
< HTTP/1.1 100 Continue
} [5 bytes data]
100 3333 0 0 100 3333 0 2154 0:00:01 0:00:01 --:--:-- 2153
100 3333 0 0 100 3333 0 1307 0:00:02 0:00:02 --:--:-- 1307
100 3333 0 0 100 3333 0 938 0:00:03 0:00:03 --:--:-- 938
100 3333 0 0 100 3333 0 732 0:00:04 0:00:04 --:--:-- 732
100 3333 0 0 100 3333 0 600 0:00:05 0:00:05 --:--:-- 614
100 3333 0 0 100 3333 0 508 0:00:06 0:00:06 --:--:-- 0
100 3333 0 0 100 3333 0 441 0:00:07 0:00:07 --:--:-- 0
...
100 3333 0 0 100 3333 0 57 0:00:58 0:00:57 0:00:01 0
100 3333 0 0 100 3333 0 56 0:00:59 0:00:58 0:00:01 0
100 3333 0 0 100 3333 0 55 0:01:00 0:00:59 0:00:01 0
100 3333 0 0 100 3333 0 54 0:01:01 0:01:00 0:00:01 0* Empty reply from server
100 3333 0 0 100 3333 0 54 0:01:01 0:01:00 0:00:01 0
* Connection #0 to host URL left intact
curl: (52) Empty reply from server
##[error]Process completed with exit code 52.
EDIT: Switching to a Windows runner instead of Ubuntu solved the cURL problem, but I'm still open to suggestions regarding this question, as this is merely a workaround rather than a solution.

Facebook share error: Object at URL of type 'website' is invalid because a required property 'og:title' of type 'string' was not provided

✋🏽
When I paste the URL of my blog into the Facebook debugger, it doesn't pick up the title or the image. In my page's source, og:title and og:image are rendered, but the Facebook scraper isn't reading either.
Object at URL 'http://blog.la-pigiste.com/2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/' of type 'website' is invalid because a required property 'og:title' of type 'string' was not provided.
The Facebook debugger also says:
{
  "error": {
    "message": "An access token is required to request this resource.",
    "type": "OAuthException",
    "code": 104,
    "fbtrace_id": "BMdGG7oTu6k"
  }
}
but I don't know what that means ... 🤔
Any help is greatly appreciated 🙏🏻
When trying to fetch new scrape information for your URL through the Open Graph Debugger you get the error:
Curl Error : OPERATION_TIMEOUTED Operation timed out after 10000 milliseconds with 0 bytes received
In other words, your web server didn't reply in 10 seconds and the crawler timed out.
It looks like you configured your web server to behave differently when the request is coming from the Facebook Crawler.
You can verify this using curl.
Fetching your URL with curl's default User Agent works fine:
$ curl -v 'http://blog.la-pigiste.com/2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/' > /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 149.56.140.68...
* TCP_NODELAY set
* Connected to blog.la-pigiste.com (149.56.140.68) port 80 (#0)
> GET /2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/ HTTP/1.1
> Host: blog.la-pigiste.com
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx
< Date: Wed, 20 Sep 2017 10:34:37 GMT
< Content-Type: text/html; charset=UTF-8
< Content-Length: 117446
< Connection: keep-alive
< Vary: Accept-Encoding
< Last-Modified: Wed, 20 Sep 2017 07:25:20 GMT
< Accept-Ranges: bytes
< Vary: Accept-Encoding
< X-Powered-By: PleskLin
< Cache-Control: max-age=0, no-cache, no-store, must-revalidate
< Pragma: no-cache
< Expires: Mon, 29 Oct 1923 20:30:00 GMT
<
{ [956 bytes data]
100 114k 100 114k 0 0 159k 0 --:--:-- --:--:-- --:--:-- 159k
* Connection #0 to host blog.la-pigiste.com left intact
However, when the Facebook crawler User-Agent is used (facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)), the web server replies differently, and only after about 14 seconds:
$ curl -v -A "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)" 'http://blog.la-pigiste.com/2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/' > /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 149.56.140.68...
* TCP_NODELAY set
* Connected to blog.la-pigiste.com (149.56.140.68) port 80 (#0)
> GET /2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/ HTTP/1.1
> Host: blog.la-pigiste.com
> User-Agent: facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)
> Accept: */*
>
0 0 0 0 0 0 0 0 --:--:-- 0:00:14 --:--:-- 0< HTTP/1.1 200 OK
< Server: nginx
< Date: Wed, 20 Sep 2017 10:37:15 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
< X-Powered-By: PHP/5.6.30
< X-Pingback: http://blog.la-pigiste.com/xmlrpc.php
< Link: <http://blog.la-pigiste.com/wp-json/>; rel="https://api.w.org/", <...>; rel=shortlink
< Set-Cookie: wfvt_983661238=59c244cfe4c12; expires=Wed, 20-Sep-2017 11:07:03 GMT; Max-Age=1800; path=/; httponly
< Vary: Accept-Encoding
< X-Powered-By: PleskLin
<
{ [838 bytes data]
100 124k 0 124k 0 0 8507 0 --:--:-- 0:00:15 --:--:-- 36126
* Connection #0 to host blog.la-pigiste.com left intact
Ensure that your web server replies in time and with the correct HTML, and the crawler will be able to fetch your OG tags.
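For a compact check that avoids reading the full verbose output, curl's -w timing variables can compare the two cases side by side (a sketch using the URL from the question):
curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' 'http://blog.la-pigiste.com/2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/'
curl -s -o /dev/null -w '%{http_code} %{time_total}s\n' -A 'facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)' 'http://blog.la-pigiste.com/2017/09/20/diy-faire-son-terrazzo-granito-do-it-yourself-inspiration-tendance-tutoriel/'
Anything much over 10 seconds on the second command means the crawler will keep timing out.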

Doing a refund on PayPal using NVP/SOAP via PowerShell

I'm trying to perform a refund on a PayPal developer account, but I keep getting errors when running this command via PowerShell:
$certpath="E:\AAAA\cert_key.pem"
curl -v -E $certpath -F "content=C:\Users\AAA\Desktop\res.xml;type=text/xml" https://api.sandbox.paypal.com/2.0/
The content of the XML file, which I took from the PayPal developer site, is below:
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
    xmlns:xsi="http://www.w3.org/1999/XMLSchema-instance"
    xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsd="http://www.w3.org/1999/XMLSchema"
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <SOAP-ENV:Header>
    <RequesterCredentials xmlns="urn:ebay:api:PayPalAPI" SOAP-ENV:mustUnderstand="1">
      <Credentials xmlns="urn:ebay:apis:eBLBaseComponents">
        <Username>username</Username>
        <Password>password</Password>
        <Subject/>
      </Credentials>
    </RequesterCredentials>
  </SOAP-ENV:Header>
  <SOAP-ENV:Body>
    <RefundTransactionReq xmlns="urn:ebay:api:PayPalAPI">
      <RefundTransactionRequest xsi:type="ns:RefundTransactionRequestType">
        <Version xmlns="urn:ebay:apis:eBLBaseComponents" xsi:type="xsd:string">1.0</Version>
        <TransactionID xsi:type="ebl:TransactionId">3P573784GG4876055</TransactionID>
        <RefundType>Full</RefundType>
        <Memo>Shell script FULL refund example</Memo>
      </RefundTransactionRequest>
    </RefundTransactionReq>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
But I keep getting the error below:
curl : * timeout on name lookup is not supported
At line:2 char:1
+ curl -v -E $certpath -F "content=C:\Users\MICHELANGELO\Desktop\res.xml;type=text ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (* timeout on na...s not supported:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
* Trying 173.0.82.78...
* TCP_NODELAY set
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* Connected to api.sandbox.paypal.com (173.0.82.78) port 443 (#0)
* schannel: SSL/TLS connection with api.sandbox.paypal.com port 443 (step 1/3)
* schannel: checking server certificate revocation
* schannel: sending initial handshake data: sending 173 bytes...
* schannel: sent initial handshake data: sent 173 bytes
* schannel: SSL/TLS connection with api.sandbox.paypal.com port 443 (step 2/3)
* schannel: failed to receive handshake, need more data
* schannel: SSL/TLS connection with api.sandbox.paypal.com port 443 (step 2/3)
* schannel: encrypted data buffer: offset 4071 length 4096
* schannel: a client certificate has been requested
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* schannel: SSL/TLS connection with api.sandbox.paypal.com port 443 (step 2/3)
* schannel: encrypted data buffer: offset 4071 length 5095
* schannel: sending next handshake data: sending 365 bytes...
* schannel: SSL/TLS connection with api.sandbox.paypal.com port 443 (step 2/3)
* schannel: encrypted data buffer: offset 91 length 5095
* schannel: SSL/TLS handshake complete
* schannel: SSL/TLS connection with api.sandbox.paypal.com port 443 (step 3/3)
> POST /2.0/ HTTP/1.1
> Host: api.sandbox.paypal.com
> User-Agent: curl/7.51.0
> Accept: */*
> Content-Length: 203
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------f4cd70d3c58d2816
>
0 203 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
* Done waiting for 100-continue
} [203 bytes data]
* schannel: client wants to read 16384 bytes
* schannel: encdata_buffer resized 17408
* schannel: encrypted data buffer: offset 0 length 17408
* schannel: Curl_read_plain returned CURLE_RECV_ERROR
* schannel: encrypted data buffer: offset 0 length 17408
* schannel: encrypted data buffer: offset 0 length 17408
* schannel: decrypted data buffer: offset 0 length 4096
* schannel: schannel_recv cleanup
* Curl_http_done: called premature == 1
100 203 0 0 100 203 0 92 0:00:02 0:00:02 --:--:-- 92
* Closing connection 0
* schannel: shutting down SSL/TLS connection with api.sandbox.paypal.com port 443
* Send failure: Connection was reset
* schannel: failed to send close msg: Failed sending data to the peer (bytes written: -1)
* schannel: clear security context handle
curl: (56) Send failure: Connection was reset
Any help would be really appreciated.
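A side note on the command itself: with curl -F, sending a file's contents requires an @ prefix (content=@path); without it, the literal path string is sent as the form value, which matches the Content-Length: 203 in the trace above even though the XML is far larger. If the endpoint expects the raw SOAP envelope rather than a multipart form (an assumption on my part), a sketch would be:
# curl.exe bypasses PowerShell's built-in "curl" alias for Invoke-WebRequest
curl.exe -v -E $certpath -H "Content-Type: text/xml" --data-binary "@C:\Users\AAA\Desktop\res.xml" https://api.sandbox.paypal.com/2.0/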

Connecting to gtalk in irssi errors with 301

I have irssi and the xmpp plugin configured:
{
  address = "talk.google.com";
  chatnet = "Gtalk";
  autoconnect = "yes";
  port = "5223";
  #use_ssl = "yes";
  #ssl_verify = "yes";
  ssl_capath = "/etc/ssl/certs";
}
and
Gtalk = { type = "XMPP"; nick = "neilhwatson@gmail.com"; };
This error is returned:
09:09 [Gtalk] -!- HTTP/1.1 301 Moved Permanently
09:09 [Gtalk] -!- Location: http://www.google.com/hangouts/
09:09 [Gtalk] -!- Content-Type: text/html
09:09 [Gtalk] -!- Content-Length: 178
Is there some other host or port combination that will work?
Using DNS SRV:
$ dig SRV _xmpp-client._tcp.gmail.com
;; ANSWER SECTION:
_xmpp-client._tcp.gmail.com. 337 IN SRV 20 0 5222 alt2.xmpp.l.google.com.
_xmpp-client._tcp.gmail.com. 337 IN SRV 20 0 5222 alt3.xmpp.l.google.com.
_xmpp-client._tcp.gmail.com. 337 IN SRV 5 0 5222 xmpp.l.google.com.
_xmpp-client._tcp.gmail.com. 337 IN SRV 20 0 5222 alt1.xmpp.l.google.com.
_xmpp-client._tcp.gmail.com. 337 IN SRV 20 0 5222 alt4.xmpp.l.google.com.
You could try using xmpp.l.google.com. My XMPP client (Pidgin) seems to do this automatically when I tell it that the domain is "gmail.com".
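As a sketch, the irssi server block could point at one of the SRV targets on the STARTTLS port instead of talk.google.com:5223, which now answers with the HTTP redirect shown above. Whether irssi-xmpp negotiates STARTTLS with exactly these options depends on the plugin version, so treat them as assumptions:
{
  address = "xmpp.l.google.com";
  chatnet = "Gtalk";
  autoconnect = "yes";
  # the SRV records advertise 5222 (STARTTLS), not 5223
  port = "5222";
  # assumption: with STARTTLS the plugin upgrades the connection itself
  use_ssl = "no";
  ssl_verify = "yes";
  ssl_capath = "/etc/ssl/certs";
}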