I have an application running inside a gateway.
The application is a CoAP server written with the libcoap library.
The server itself runs fine; I have tested the ip:port with several tools such as nmap and telnet, and each time they show the port as open and the connection as successful.
My problem is that there is no response from the server; Wireshark shows the requests being retransmitted until they time out.
After some research I suspected that the gateway doesn't support NAT loopback, so I tried sending requests from another connection (I used my phone's 4G). I even disabled the firewall on the gateway, but no success either.
UPDATE:
After some digging, I managed to get a response from the server, but only over a TCP connection; over UDP the requests are still retransmitted until they time out.
From a logical point of view, what could the problem be here?
Note: UDP is a must for this application, so I can't just ignore it.
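The nmap and telnet checks above normally exercise TCP only, so one way to separate "UDP never reaches the port" from "the CoAP server mishandles it" would be to temporarily run a throwaway UDP echo in place of the server. A minimal C sketch of that idea (the port 5683 and running it on the gateway are my assumptions):

    /* Throwaway UDP echo server: run it on the gateway in place of the CoAP
     * server, then send a datagram from outside the NAT. If nothing comes
     * back, the UDP path itself (port-forward rule, firewall, NAT) is dropping
     * packets; if it echoes, the problem is in the CoAP setup instead.
     * Port 5683 (the default CoAP port) is an assumption. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* listen on all interfaces */
        addr.sin_port = htons(5683);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

        char buf[1500];
        for (;;) {
            struct sockaddr_in peer;
            socklen_t len = sizeof(peer);
            ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, (struct sockaddr *)&peer, &len);
            if (n < 0) { perror("recvfrom"); break; }
            printf("got %zd bytes from %s:%u\n", n, inet_ntoa(peer.sin_addr), ntohs(peer.sin_port));
            sendto(fd, buf, (size_t)n, 0, (struct sockaddr *)&peer, len);   /* echo it back */
        }
        close(fd);
        return 0;
    }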
I created a multi-threaded client/server application that can exchange messages in real time. Everything works perfectly, but I want to be able to send messages over the Internet. From what I understand, I need to set up port forwarding to make my server reachable for the clients. I configured my port forwarding options by providing a port (9991) and my MacBook Air's IP address (192.168.0.1).
I then tried to connect to my server using my public IP (let's say 197.132.20.222) and it didn't work. I checked whether the port forwarding worked using this website: https://www.yougetsignal.com/tools/open-ports/ and it reported the connection as closed. I also tried the command nc -vz 197.132.20.222 9991 while running my application, and the connection is refused.
It's a JavaFX application; on the server side I use a ServerSocket on port 9991, and on the client side a Socket whose IP address is set to my public router IP address. I also tried connecting from another PC on mobile data so that it uses a different network.
My firewall is turned off, so I really don't know what is blocking my application from accepting connections on that port. Could my ISP be blocking connections? I just don't understand why the port is blocked even with no firewall enabled.
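To rule the application itself out, I could also run a bare listener on the same port instead of the JavaFX app and probe it from outside again; if nc -vz still reports the connection refused, the blockage has to be the router or the ISP rather than my code. A minimal C sketch of such a listener (nothing here is specific to my application):

    /* Bare TCP listener on port 9991 (the forwarded port from the question):
     * accepts one connection and prints who connected. Used only to test the
     * port-forward rule independently of the real server. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        if (srv < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* all interfaces, as ServerSocket does by default */
        addr.sin_port = htons(9991);

        if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
        if (listen(srv, 1) < 0) { perror("listen"); return 1; }

        struct sockaddr_in peer;
        socklen_t len = sizeof(peer);
        int cli = accept(srv, (struct sockaddr *)&peer, &len);
        if (cli < 0) { perror("accept"); return 1; }
        printf("connection from %s:%u\n", inet_ntoa(peer.sin_addr), ntohs(peer.sin_port));

        close(cli);
        close(srv);
        return 0;
    }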
I have an application with a server written in F# that serves web files using Suave. I log in remotely with PowerShell to another machine on the network to run the application (the application also lives on one of the network drives). I do that because that machine has access to third-party APIs needed by the server. Now when I browse to [IPAddress_Of_Remote_Machine]/[html_file] or [name_of_pc]/[html_file], Chrome waits forever and never returns the web page. This wasn't happening before; I ran into the problem only recently. I opened a different port and used it instead of the default port 80. That made things work, but the problem keeps coming back after a couple of days. I don't think it's a firewall issue, but I'm clueless as to why this is happening.
When running netstat -an, this is what I get (I hid the IP address):
As you can see, all of the connections are either in CLOSE_WAIT or ESTABLISHED, but none are LISTENING. All of these TCP connections are probably there because I have PhantomJS and two other APIs running in the application as well. However, the loopback address is also open on the same port, 5959:
I'm not sure what the difference is between these two, but when using PortQryUI to query the remote server it returns a success!
I have already made an inbound rule for port 5959 on the server, so it should be allowed. The web page is stuck at "Waiting for [name_of_pc]". Also, sometimes this problem disappears and everything works fine.
What is the potential problem behind this? Why would this happen all of a sudden?
UPDATE:
I re-ran the application today and it's working correctly. Could it be that something is being set dynamically in the firewall? I'm not really sure what is going on. The machine I'm running the server on has a bunch of other applications running on it as well, so maybe an external process is affecting it?
I made a hello-world app with Suave and deployed it on the network drive to test whether it would work. I opened an inbound rule for port 6001.
Then I ran the app:
However, it's still not working, and this time it says the site cannot be reached when I go to http://[name_of_pc]:6001.
Moving this to an answer so that it can be closed:
Could you post the bindings section of your Suave config? I'm guessing you know where that is since you're using a non-standard port, but if you don't, search for HttpBinding. I suspect you will find it pointing to 127.0.0.1, which is not good enough for remote access. You could try changing it to 0.0.0.0 or to the server's actual IP address. I would try 0.0.0.0 first for the flexibility it provides.
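For context, the binding boils down to the same idea at the raw socket level. A minimal C sketch (not Suave code) of why 127.0.0.1 is only reachable from the machine itself while 0.0.0.0 accepts remote connections:

    /* Illustration only: bind the listener to 127.0.0.1 or to 0.0.0.0 depending
     * on the command-line argument. The loopback variant can only be reached
     * from the same machine; the 0.0.0.0 variant listens on every interface.
     * Port 6001 is the one used in the question. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        int loopback_only = (argc > 1 && strcmp(argv[1], "loopback") == 0);

        int srv = socket(AF_INET, SOCK_STREAM, 0);
        if (srv < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(6001);
        addr.sin_addr.s_addr = loopback_only ? htonl(INADDR_LOOPBACK)  /* 127.0.0.1 */
                                             : htonl(INADDR_ANY);      /* 0.0.0.0 */

        if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
        if (listen(srv, 5) < 0) { perror("listen"); return 1; }
        printf("listening on %s:6001\n", loopback_only ? "127.0.0.1" : "0.0.0.0");

        pause();   /* keep running so it can be probed from another host */
        close(srv);
        return 0;
    }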
This might be a bit weird to explain, but I'll try my best.
I have a Lua program that's intended to serve some data over the network - specifically, the Internet. The data the program actually transmits is just strings carried in UDP packets. Generalized, this is how the program operates:
The first client launches the program and specifies that they are the 'host' of the connection. The program opens a connection on UDP port 6000 and the main loop listens for any packets received on said port.
The second client launches the program and specifies that they are to connect to the 'host' on port 6000. The user enters the IP, and the client opens a UDP connection using a random port between 6050 and 7000.
When the client successfully connects to the server, they send a 'connection' packet, simply containing a '202 OK' string. The 'host' receives this and registers the new client
Now that the connection has been initialized, the two programs can send data to each other using the registered client information.
Now, on a local network this program works fine. The purpose of the 'host' mode is to have multiple clients connect to the host and have the host relay packets from one client to all the currently registered clients. The port selections are arbitrary, and random port selection on the client side was simply to allow debugging and testing from a single computer. This has been tested between two and more computers on a physical network and worked successfully. However, when I attempt to run it over the Internet, it's a no-go. I know that the ports are closed and that's why it's not working. But seeing as I'm going to be distributing this program (privately), I can't expect every person to open ports on their router (or to know how to). Security is not currently a concern with the program and should be disregarded in its current state. That being said, I recognise there's the potential for a lot to go wrong with the use of this program over the network, and I accept that. Onto the main question: how can I have the host and client communicate over the Internet without having to open ports?
I'll elaborate with an example: browsers. Although the technology is quite different from what I'm doing, it's easier to paint a picture - the browser requests data from a web server, and that data gets sent back to the client. But wait, if the router is in its default state (I hope), all the ports are closed? So how does the client receive this data if the port is closed?
I hope this makes some kind of sense and I don't sound like a complete fool.
I managed to find some suitable libraries and utilities for communicating over the Internet (NAT traversal is now a term I am familiar with), namely the libraries supplied with Nmap. These include an implementation of STUN in Lua, among HEAPS of other useful networking-related libraries and scripts.
To actually answer my own question (very simply): the clients and servers are behind what's known as a NAT gateway. Because IPv4 offers a limited number of addresses (a total of about 4.2 billion), NAT gateways were introduced to work around that limitation by separating the clients' internal network from the external network - in this case the Internet. The NAT gateway is supplied with a single public IP address, and it in turn supplies every device on the internal network with an address belonging to that internal network. As such, the devices cannot be reached directly without forwarding connections from the NAT gateway (generally the router) to the client.
However, when a UDP connection is used, the NAT gateway opens a port for the purposes of that connection, which gets closed after the connection dies. The port that is opened differs from the one the client specified when it opened the connection, which is where the STUN methods come in: STUN allows the host to find the port the client's traffic is actually coming from and to send data back to that port so the user can receive it. Bear in mind this is an EXTREMELY simple explanation of how the technology works, and I'd suggest reading the Wikipedia article and some of the Requests for Comments on STUN.
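To make the reply-path part concrete, the essential trick on the host side looks like this in plain sockets terms (a minimal C sketch, not my actual Lua code): always answer to the address recvfrom() reports, because that is the mapping the client's NAT opened, not the port the client bound locally.

    /* Host side of the idea described above (illustration only): listen on
     * UDP 6000 and answer every datagram at the source address it actually
     * arrived from, which is the public mapping created by the client's NAT. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in host;
        memset(&host, 0, sizeof(host));
        host.sin_family = AF_INET;
        host.sin_addr.s_addr = htonl(INADDR_ANY);
        host.sin_port = htons(6000);                /* host port from the question */
        if (bind(fd, (struct sockaddr *)&host, sizeof(host)) < 0) { perror("bind"); return 1; }

        char buf[512];
        for (;;) {
            struct sockaddr_in peer;                /* filled with the observed source */
            socklen_t len = sizeof(peer);
            ssize_t n = recvfrom(fd, buf, sizeof(buf) - 1, 0, (struct sockaddr *)&peer, &len);
            if (n < 0) { perror("recvfrom"); break; }
            buf[n] = '\0';
            printf("packet from %s:%u: %s\n", inet_ntoa(peer.sin_addr), ntohs(peer.sin_port), buf);

            /* Reply to where the packet came from, i.e. the NAT's public port. */
            const char reply[] = "202 OK";
            sendto(fd, reply, sizeof(reply) - 1, 0, (struct sockaddr *)&peer, len);
        }
        close(fd);
        return 0;
    }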
I am facing an issue with TCP connections.
I have a number of clients connected to a remote server over TCP.
Now, if for some reason I am no longer able to reach my server after the TCP connection has been successfully established, I do not receive any error on the client side.
If I run netstat on the client end, it shows the clients as connected to the remote server, even though I am not able to ping the server.
So now I am in a situation where the server shows it is not connected to any client, while the client shows it is connected to the server.
I have tested this with WebSockets and node.js as well, and the same behaviour persists there too.
I have tried to google around, but no luck.
Is there any standard solution for this?
This is by design.
If two endpoints have a successful socket (TCP) connection between each other but aren't sending any data, then the TCP state machines on both endpoints simply remain in the ESTABLISHED state.
Imagine if you had a shell connection open in a terminal window on your PC at work to a remote Unix machine across the Internet. You leave work that evening with the terminal window still logged in and at the shell prompt on the remote server.
Overnight, some router in between your PC and the remote computer goes out. Hours later, the router is fixed. You come into work the next day and start typing at the shell prompt. It's like the loss of connectivity never happened. How is this possible? Because neither socket on either endpoint had anything to send during the outage. Given that, there was no way that the TCP state machine was going to detect a connectivity failure - because no traffic was actually occurring. Now if you had tried to type something at the prompt during the outage, then the socket connection would eventually time out within a minute or two, and the terminal session would end.
One workaround is to enable the SO_KEEPALIVE option on your socket. YMMV with this socket option, as this mode of TCP does not always send keep-alive messages at a rate you control.
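On Linux you do get some control over the probing schedule through per-socket options. A minimal sketch of turning keep-alive on for an already connected socket (the 60/10/5 values are just illustrative):

    /* Enable TCP keep-alive on a connected socket. SO_KEEPALIVE is portable;
     * TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are Linux-specific knobs for the
     * probing schedule, so they are guarded with #ifdef. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int enable_keepalive(int fd) {
        int on = 1;
        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0) {
            perror("SO_KEEPALIVE");
            return -1;
        }
    #ifdef TCP_KEEPIDLE
        int idle  = 60;   /* seconds of silence before the first probe */
        int intvl = 10;   /* seconds between probes */
        int cnt   = 5;    /* unanswered probes before the connection is declared dead */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));
    #endif
        return 0;   /* call this on the client socket right after connect() */
    }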
A more common approach is to just have your socket send data periodically. Some protocols on top of TCP that I've worked with have their own notion of a "ping" message for this very purpose. That is, the client sends a "ping" message over the TCP socket every minute and the server responds back with "pong" or some equivalent. If neither side gets the expected ping/pong message within N minutes, then the connection, regardless of socket error state, is assumed to be dead. This approach of sending periodic messages also helps with NATs that tend to drop TCP connections for very quiet protocols when they don't observe traffic for a period of time.
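A client-side sketch of that ping/pong idea in C (the "ping" string and the interval values are made up for illustration; any message the peer reliably answers will do):

    /* Application-level heartbeat for an already connected TCP socket: send a
     * ping once per interval, treat any received data as proof of life, and
     * declare the connection dead after too much silence, regardless of what
     * the socket's own state says. */
    #include <poll.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <time.h>

    #define PING_INTERVAL 60    /* seconds between pings */
    #define MAX_SILENCE   180   /* give up after this much silence */

    int heartbeat_loop(int fd) {
        time_t last_heard = time(NULL);
        for (;;) {
            if (send(fd, "ping\n", 5, 0) < 0) return -1;

            /* Wait out one interval, reading anything that arrives meanwhile. */
            time_t start = time(NULL);
            while (time(NULL) - start < PING_INTERVAL) {
                struct pollfd p = { .fd = fd, .events = POLLIN };
                if (poll(&p, 1, 1000) > 0) {          /* wake up about once a second */
                    char buf[64];
                    ssize_t n = recv(fd, buf, sizeof(buf), 0);
                    if (n <= 0) return -1;            /* hard error or orderly close */
                    last_heard = time(NULL);          /* any data counts as a "pong" */
                }
            }
            if (time(NULL) - last_heard > MAX_SILENCE) {
                fprintf(stderr, "peer silent for too long, assuming the link is dead\n");
                return -1;
            }
        }
    }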
I establish a TCP connection between my server and a client that runs on the same host. We continuously gather and read data from the server, or "source" as we call it in our case.
We read data on, say, 3 different ports.
Once the source stops publishing data or gets restarted, the server/source is not able to publish data again on the same port, saying the port is already bound. The reason given is that the client still has established connections on those ports.
I wanted to know what the probable reasons for this could be. Could the issue be that the client is still listening on these ports and trying to reconnect again and again, since we have such a reconnection mechanism? I am mainly looking for the reason on the source side, because the same code works perfectly fine for us when the source and client are on different hosts rather than on the same host.
Edit:-
I found this while going through various articles.
On the question of using SO_LINGER to send a RST on close to avoid the TIME_WAIT state: I've been having some problems with router access servers (names withheld to protect the guilty) that have problems dealing with back-to-back connections on a modem dedicated to a specific channel. What they do is let go of the connection, accept another call, attempt to connect to a well-known socket on a host, and the host refuses the connection because there is a connection in TIME_WAIT state involving the well-known socket. (Stevens' book TCP Illustrated, Vol 1 discusses this problem in more detail.) In order to avoid the connection-refused problem, I've had to install an option to do reset-on-close in the server when the server initiates the disconnection.
Link to source:- http://developerweb.net/viewtopic.php?id=2941
I guess I am facing the same problem: "attempt to connect to a well-known socket on a host, and the host refuses the connection". The probable fix mentioned is "an option to do reset-on-close in the server when the server initiates the disconnection". Now how do I do that?
Set the SO_REUSEADDR option on the server socket before you bind it and call listen().
EDIT: The suggestion to fiddle around with the SO_LINGER option is worthless and dangerous to your data in flight. Just use SO_REUSEADDR.
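In case it helps, this is roughly what that looks like in C (a minimal sketch; the port number is just a placeholder for whichever port your source binds):

    /* Set SO_REUSEADDR before bind() so the restarted server can rebind its
     * well-known port even while old connections are still sitting in TIME_WAIT. */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        if (srv < 0) { perror("socket"); return 1; }

        int on = 1;
        if (setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) < 0) {
            perror("SO_REUSEADDR");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);                /* placeholder port */

        if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
        if (listen(srv, 16) < 0) { perror("listen"); return 1; }
        printf("rebound and listening\n");

        close(srv);
        return 0;
    }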
You need to close the socket bound to that port before you restart/shutdown the server!
http://www.gnu.org/software/libc/manual/html_node/Closing-a-Socket.html
Also, there's a timeout (the TIME_WAIT state), which I think is about 4 minutes, so if you create a TCP socket and close it, you may still have to wait up to 4 minutes before the port can be reused.
You can use netstat to see all the bound ports on your system. If you shut down your server, or close your server after forking on connect, you may have zombie processes that are still bound to certain ports and remain active, and thus you can't rebind to the same port. Show some code.