Every now and then we are unable to connect to the Facebook Open Graph API.
Our .NET client throws the following error:
System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 66.220.146.100:443
   at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
   at System.Net.Sockets.Socket.InternalConnect(EndPoint remoteEP)
   at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
Please note that this is not specific to any particular API call; even pinging graph.facebook.com times out intermittently.
Here is some traceroute output:
1 <1 ms <1 ms <1 ms ip-97-74-87-252.ip.secureserver.net [97.74.87.252]
2 <1 ms <1 ms <1 ms ip-208-109-113-173.ip.secureserver.net [208.109.113.173]
3 1 ms <1 ms <1 ms ip-97-74-252-21.ip.secureserver.net [97.74.252.21]
4 1 ms <1 ms <1 ms ip-97-74-252-21.ip.secureserver.net [97.74.252.21]
5 9 ms 1 ms 1 ms 206-15-85-9.static.twtelecom.net [206.15.85.9]
6 13 ms 13 ms 44 ms lax2-pr2-xe-1-3-0-0.us.twtelecom.net [66.192.241.218]
7 26 ms 28 ms 27 ms cr1.la2ca.ip.att.net [12.122.104.14]
8 26 ms 27 ms 27 ms cr82.sj2ca.ip.att.net [12.122.1.146]
9 23 ms 23 ms 23 ms 12.122.128.201
10 22 ms 22 ms 22 ms 12.249.231.26
11 22 ms 22 ms 22 ms ae1.bb02.sjc1.tfbnw.net [204.15.21.164]
12 23 ms 23 ms 23 ms ae5.dr02.snc4.tfbnw.net [204.15.21.169]
13 24 ms 23 ms 23 ms eth-17-2.csw02b.snc4.tfbnw.net [74.119.76.105]
14 * * * Request timed out.
15 * * * Request timed out.
16 * * * Request timed out.
17 * * * Request timed out.
18 * * * Request timed out.
Steps to reproduce:
Try pinging graph.facebook.com.
The ping requests time out for certain of the resolved IPs.
I may be running into a situation that is completely normal, but I want to talk it out anyway. In my home lab I have a single-worker-node, Rancher-controlled k3s cluster. I also have an FRR VM acting as the BGP peer for MetalLB within the cluster, since a UDM Pro cannot run BGP natively. I spun up a simple one-pod nginx deployment and a backing Service with a LoadBalancer IP. Everything did its job, and the IP is accessible.
Client desktop: 192.168.0.121
UDM Router: 192.168.0.1 / 192.168.100.1
static route for 192.168.110.0/24 nexthop 192.168.100.2
FRR BGP Router VM: 192.168.100.2
k3s worker node: 192.168.100.11
MetalLB BGP-advertised service subnet: 192.168.110.0/24
nginx service LoadBalancer IP: 192.168.110.1
The FRR router VM has a single vNIC, no tunnels or subinterfaces, etc. Accessing the nginx service LoadBalancer IP over HTTP works perfectly, so I know routing works. But from a ping and traceroute perspective, it looks like I have a routing loop.
Client traceroute:
PS C:\Users\sbalm> tracert -d 192.168.110.1
Tracing route to 192.168.110.1 over a maximum of 30 hops
1 <1 ms <1 ms <1 ms 192.168.0.1
2 <1 ms <1 ms <1 ms 192.168.100.2
3 1 ms <1 ms <1 ms 192.168.100.11
4 <1 ms <1 ms <1 ms 192.168.0.1
5 <1 ms <1 ms <1 ms 192.168.100.2
6 1 ms <1 ms <1 ms 192.168.100.11
7 <1 ms <1 ms <1 ms 192.168.0.1
8 1 ms <1 ms <1 ms 192.168.100.2
9 1 ms <1 ms <1 ms 192.168.100.11
...
Something doesn't feel "normal" here. Ideas?
Please try adding the following route on your k3s node:
ip route add unreachable 192.168.110.1
Most likely the LoadBalancer IP is not bound to any interface on the node: TCP traffic to the service port is intercepted by the kube-proxy/iptables rules before routing, but ICMP and traceroute probes are not, so the node forwards them back out its default route and the upstream routers send them straight back, which is the loop you see. With the unreachable route in place, the node answers those stray packets with ICMP "destination unreachable" instead of forwarding them.
I am running Plesk v12.5.30_build1205150826.19 os_Ubuntu 12.04 and getting the following errors when trying to send emails.
Sep 20 18:35:59 lvps109-104-93-126 postfix/smtpd[14585]: connect from unknown[194.0.158.9]
Sep 20 18:35:59 lvps109-104-93-126 courier-pop3s: couriertls: read: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
Sep 20 18:36:01 lvps109-104-93-126 last message repeated 10 times
Sep 20 18:36:01 lvps109-104-93-126 postfix/smtpd[27545]: connect from unknown[194.0.158.9]
Sep 20 18:36:01 lvps109-104-93-126 courier-pop3s: couriertls: read: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
Sep 20 18:36:02 lvps109-104-93-126 last message repeated 9 times
When traceroute times out on intermediate hops, how is it able to continue on to the destination as follows?
[root@localhost network-scripts]# traceroute -I www.google.com
traceroute to www.google.com (216.58.196.228), 30 hops max, 60 byte packets
1 gateway (10.0.2.2) 0.531 ms 0.355 ms 0.448 ms
2 * * *
3 * * *
4 osk009nasgw111.IIJ.Net (202.32.116.129) 366.682 ms 366.562 ms 366.368 ms
5 osk004bb01.IIJ.Net (202.32.116.5) 366.206 ms 366.062 ms 365.879 ms
6 osk004ix50.IIJ.Net (58.138.107.166) 363.375 ms 125.516 ms 125.391 ms
7 210.130.133.86 (210.130.133.86) 125.574 ms 125.520 ms 137.085 ms
8 108.170.243.65 (108.170.243.65) 137.103 ms 137.491 ms 137.364 ms
9 108.170.238.93 (108.170.238.93) 138.227 ms 138.147 ms 101.212 ms
10 kix06s01-in-f4.1e100.net (216.58.196.228) 100.566 ms 100.791 ms 235.679 ms
Answering my own question here.
The routers in the middle may be configured not to send ICMP Time Exceeded replies (or to drop ICMP altogether), but they still forward packets whose TTL has not yet expired. Since traceroute raises the initial TTL by one for each successive probe, those probes pass straight through the silent hops and elicit replies from routers further along the path.
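The idea can be illustrated with a toy model (hypothetical hop names, no real packets are sent): a probe's TTL expires at the hop being probed, and whether that hop answers is independent of whether it forwards later probes.

```java
import java.util.List;

// Toy model of traceroute probing (hypothetical hop names, no real
// packets): the probe for hop N starts with TTL = N; each router
// decrements the TTL and, when it reaches zero, may or may not send
// an ICMP Time Exceeded reply. A silent router shows up as "* * *"
// but still forwards probes whose TTL has not yet expired.
public class TracerouteModel {
    record Router(String name, boolean sendsTimeExceeded) {}

    // Reply seen for the probe whose TTL expires at the given hop.
    static String replyAtHop(List<Router> path, int ttl) {
        Router expiredAt = path.get(ttl - 1);
        return expiredAt.sendsTimeExceeded() ? expiredAt.name() : "* * *";
    }

    public static void main(String[] args) {
        List<Router> path = List.of(
            new Router("gateway", true),
            new Router("silent-hop", false),   // filters ICMP replies
            new Router("IIJ-edge", true),
            new Router("destination", true));

        for (int ttl = 1; ttl <= path.size(); ttl++) {
            System.out.printf("%2d  %s%n", ttl, replyAtHop(path, ttl));
        }
    }
}
```

Hop 2 prints "* * *" while hops 3 and 4 still answer, matching the trace above.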
I am facing a problem with HttpClient (version 4.5.2) in a multi-threaded web application. Under normal conditions, when a request arrives, a connection is leased from the pool, used, and finally released back to the pool to serve future requests, as the following log excerpt for the connection with id 673890 shows.
15 Feb 2017 018:25:54:115 p-1-thread-121 DEBUG PoolingHttpClientConnectionManager:249 - Connection request: [route: {}->http://127.0.0.1:8080][total kept alive: 51; route allocated: 4 of 100; total allocated: 92 of 500]
15 Feb 2017 018:25:54:116 p-1-thread-121 DEBUG PoolingHttpClientConnectionManager:282 - Connection leased: [id: 673890][route: {}->http://127.0.0.1:8080][total kept alive: 51; route allocated: 4 of 100; total allocated: 92 of 500]
15 Feb 2017 018:25:54:116 p-1-thread-121 DEBUG DefaultManagedHttpClientConnection:90 - http-outgoing-673890: set socket timeout to 9000
15 Feb 2017 018:25:54:120 p-1-thread-121 DEBUG PoolingHttpClientConnectionManager:314 - Connection [id: 673890][route: {}->http://127.0.0.1:8080] can be kept alive for 10.0 seconds
15 Feb 2017 018:25:54:121 p-1-thread-121 DEBUG PoolingHttpClientConnectionManager:320 - Connection released: [id: 673890][route: {}->http://127.0.0.1:8080][total kept alive: 55; route allocated: 4 of 100; total allocated: 92 of 500]
After the connection (id 673890) has been used several times in the normal way described above, I notice the following:
15 Feb 2017 018:25:54:130 p-1-thread-126 DEBUG PoolingHttpClientConnectionManager:249 - Connection request: [route: {}->http://127.0.0.1:8080][total kept alive: 55; route allocated: 4 of 100; total allocated: 92 of 500]
15 Feb 2017 018:25:54:130 p-1-thread-126 DEBUG PoolingHttpClientConnectionManager:282 - Connection leased: [id: 673890][route: {}->http://127.0.0.1:8080][total kept alive: 54; route allocated: 4 of 100; total allocated: 92 of 500]
15 Feb 2017 018:25:54:131 p-1-thread-126 DEBUG DefaultManagedHttpClientConnection:90 - http-outgoing-673890: set socket timeout to 9000
15 Feb 2017 018:25:54:133 p-1-thread-126 DEBUG DefaultManagedHttpClientConnection:81 - http-outgoing-673890: Close connection
15 Feb 2017 018:25:54:133 p-1-thread-126 DEBUG PoolingHttpClientConnectionManager:320 - Connection released: [id: 673890][route: {}->http://127.0.0.1:8080][total kept alive: 55; route allocated: 3 of 100; total allocated: 91 of 500]
The log shows that the connection is requested, leased, used, closed, and then released back to the pool. So my question is: why is the connection closed? And why is it released to the pool after being closed?
I know the connection could be closed by the server, but that is a different situation: in that case the connection is leased from the pool, determined to be stale, and a new connection is established and used. The log above shows different behavior.
I am aware of two reasons for a connection to be closed in HttpClient: first, being idle past its keep-alive time; second, being closed by the server, which makes the pooled connection stale. Is there any other reason for connections to be closed?
Based on Oleg Kalnichevski's reply on the HttpClient mailing list, and the checks I made, it turned out that the problem was a 'Connection: close' header sent by the other end. Another cause that can lead to the same behavior is the use of non-persistent HTTP/1.0 connections.
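Why "Close connection" followed by "Connection released" is not contradictory can be sketched with a toy pool (a hypothetical class, not Apache HttpClient internals): release() always gives the lease slot back to the pool, but only a connection whose last response permitted reuse is kept alive; one that had to be closed is discarded, which is why "total allocated" drops from 92 to 91 in the log.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model (not Apache HttpClient internals) of a keep-alive pool.
// release() always returns the lease slot; whether the underlying
// connection survives depends on whether the last response allowed
// reuse (keep-alive) or demanded "Connection: close".
public class ToyConnectionPool {
    private final Deque<String> keptAlive = new ArrayDeque<>();
    private int allocated = 0;   // connections currently leased or kept alive
    private int nextId = 0;

    public String lease() {
        if (!keptAlive.isEmpty()) return keptAlive.pop();
        allocated++;                       // open a brand-new connection
        return "conn-" + (++nextId);
    }

    public void release(String conn, boolean reusable) {
        if (reusable) {
            keptAlive.push(conn);          // kept alive for future leases
        } else {
            allocated--;                   // socket closed, capacity freed
        }
    }

    public int totalAllocated() { return allocated; }
    public int totalKeptAlive() { return keptAlive.size(); }
}
```

In this model, releasing a non-reusable connection decrements the allocated count, just as the log entry after "Close connection" shows the route allocation dropping from 4 to 3 and the total from 92 to 91.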
Running this command from my MongoDB installation (downloaded from mongodb.org)
./mongo ds045907.mongolab.com:45907/database -u user -p password
(I changed the database, user, and password for anonymity.)
results in this
Error: couldn't connect to server ds045907.mongolab.com:45907 src/mongo/shell/mongo.js:93
exception: connect failed
Maybe I'm being blocked by a server firewall? I have no problem using git, brew, or pip...
Here are a few things you can try, but you can always feel free to contact us at support@mongolab.com. I'm sure we can get to the bottom of this.
Anonymous mongo shell connection
Mongo will let you connect without authenticating. You can do very little with an unauthenticated connection, but you can use it as a test to separate a connectivity problem from a credentials problem.
% mongo ds045907.mongolab.com:45907
MongoDB shell version: 2.0.7
connecting to: ds045907.mongolab.com:45907/test
> db.version()
2.2.2
> db.runCommand({ping:1})
{ "ok" : 1 }
> exit
bye
If you can connect without authenticating and run the commands as shown above, but trying to connect with authentication fails, then you have a problem with the credentials. If, however, connecting doesn't work even without supplying credentials then you have a connectivity problem.
ping
That server does allow ICMP traffic, so make sure it's reachable from wherever you are.
% ping ds045907.mongolab.com
PING ec2-107-20-85-188.compute-1.amazonaws.com (107.20.85.188): 56 data bytes
64 bytes from 107.20.85.188: icmp_seq=0 ttl=41 time=99.744 ms
64 bytes from 107.20.85.188: icmp_seq=1 ttl=41 time=99.475 ms
64 bytes from 107.20.85.188: icmp_seq=2 ttl=41 time=99.930 ms
^C
--- ec2-107-20-85-188.compute-1.amazonaws.com ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 99.475/99.716/99.930/0.187 ms
traceroute
If ping fails, use traceroute (or tracert on Windows) to try to figure out where the problem is. Once the trace reaches AWS, however, it will trail off; that's normal, since AWS prevents traces from seeing too far into its network. Make sure that the last IP on your list is owned by Amazon, using some kind of reverse IP lookup tool (there are many on the Web).
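If you'd rather not use a web tool, a reverse lookup takes only a few lines (a sketch in Java; note getCanonicalHostName() falls back to returning the raw IP text when no PTR record is found):

```java
import java.net.InetAddress;

// Reverse-resolve a hop's IP address. If the address has a PTR record
// you get a hostname back (e.g. something under amazonaws.com for an
// AWS-owned hop); otherwise getCanonicalHostName() simply returns the
// textual IP unchanged.
public class ReverseLookup {
    static String nameFor(String ip) throws Exception {
        return InetAddress.getByName(ip).getCanonicalHostName();
    }

    public static void main(String[] args) throws Exception {
        // Last responding hop from the trace above
        System.out.println(nameFor("216.182.224.55"));
    }
}
```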
% traceroute ds045907.mongolab.com
traceroute to ec2-107-20-85-188.compute-1.amazonaws.com (107.20.85.188), 64 hops max, 52 byte packets
1 192.168.1.1 (192.168.1.1) 1.092 ms 0.865 ms 1.047 ms
2 192.168.27.1 (192.168.27.1) 1.414 ms 1.330 ms 1.224 ms
... snipped to protect the innocent ...
14 72.21.220.83 (72.21.220.83) 87.777 ms
72.21.220.75 (72.21.220.75) 87.406 ms
205.251.229.55 (205.251.229.55) 99.363 ms
15 72.21.222.145 (72.21.222.145) 87.703 ms
178.236.3.24 (178.236.3.24) 98.662 ms
72.21.220.75 (72.21.220.75) 87.708 ms
16 216.182.224.55 (216.182.224.55) 87.312 ms 86.791 ms 89.005 ms
17 * 216.182.224.55 (216.182.224.55) 91.373 ms *
18 216.182.224.55 (216.182.224.55) 121.754 ms * *
19 * * *
20 * * *
21 * * *
22 * * *
23 * * *
24 * * *
It's a connection problem on your side. I tried it and got a login-failure message:
MongoDB shell version: 1.6.5
connecting to: ds045907.mongolab.com:45907/database
Mon Dec 24 01:12:31 uncaught exception: login failed
exception: login failed