My understanding is that a socket corresponds to a network identifier, port and TCP identifier. [1]
Operating systems enable a process to be associated with a port (which IIUC is a way of making the process addressable on the network for inbound data).
So a WebSocket server will typically be associated with a port that is well known for accepting HTTP (for the upgrade request), like 443, and then use TCP identifiers to allow multiple network sockets to be open concurrently for a single server process and a single port.
Please can someone confirm or correct my understanding?
[1] "To provide for unique names at
each TCP, we concatenate a NETWORK identifier, and a TCP identifier
with a port name to create a SOCKET name which will be unique
throughout all networks connected together." https://www.rfc-editor.org/rfc/rfc675
When a client connects to your server on a given port, the client connection is coming from an IP address and a client-side port number. The client-side port number is automatically generated by the client and will be unique for that client. So, you end up with four items that make a connection.
Server IP address (well known to all clients)
Server port (well known to all clients)
Client IP address (unique for that client)
Client port (dynamically unique for that client and that socket)
So, it is the combination of these four items that makes a unique TCP connection. If the same client makes a second connection to the same server and port, then that second connection will have a different client port number (each connection a client makes is given a different client port number), and thus the combination of those four items will be different for that second connection, allowing its traffic to be kept completely separate from the first connection that client made.
So, a TCP socket is a unique combination of the four items above. To see how that is used, let's look at how some traffic flows.
After a client connects to the server and a TCP socket is created to represent that connection, then the client sends a packet. The packet is sent from the client IP address and from the unique client port number that that particular socket is using. When the server receives that packet on its own port number, it can see that the packet is coming from the client IP address and from that particular client port number. It can use these items to look up in its table and see which TCP socket this traffic is associated with and trigger an event for that particular socket. This separates that client's traffic from all the other currently connected sockets (whether they are other connections from that same client or connections from other clients).
Now, the server wants to send a response to that client. The packet is sent to the client's IP address and client port number. The client TCP stack does the same thing. It receives the packet from the server IP/port and addressed to the specific client port number and can then associate that packet with the appropriate TCP socket on the client so it can trigger an event on the right socket.
All traffic can uniquely be associated with the appropriate client or server TCP socket in this way, even though many clients may connect to the same server IP and port. The uniqueness of the client IP/port allows both ends to tell which socket a given packet belongs to.
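As a rough illustration of that bookkeeping, here is a minimal Python sketch (the address 0.0.0.0 and port 8080 are placeholders, not anything specified above): each accepted connection shares the server's single listening port, but reports a different client IP/port pair, which is what lets the server keep the traffic separate.

import socket

# One listening socket on one well-known local port (placeholder 8080).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen()

for _ in range(3):  # accept a few connections
    conn, (client_ip, client_port) = srv.accept()
    # Same server-side endpoint every time...
    print("server endpoint:", conn.getsockname())
    # ...but a different (client IP, client port) per connection,
    # so each connection's 4-tuple is unique.
    print("client endpoint:", (client_ip, client_port))
    conn.close()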
WebSocket connections start out as an HTTP connection (a TCP socket speaking the HTTP protocol). That initial HTTP request contains an "Upgrade" header asking the server to upgrade the protocol from HTTP to WebSocket. If the server agrees to the upgrade, it returns a response indicating that the protocol will be changed to the WebSocket protocol. The TCP socket remains the same, but both sides agree that they will now speak the WebSocket protocol instead of the HTTP protocol. So, once connected, you then have a TCP socket where both sides are speaking the WebSocket protocol. This TCP connection uses the same logic described above to remain distinct from other TCP connections to the same server.
In this manner, you can have a single server on a single port that works for both HTTP connections and WebSocket connections. All connections to that server start out as HTTP connections, but some are converted to WebSocket connections after both sides agree to change the protocol. The HTTP connections that remain HTTP connections follow the typical request/response pattern and the socket is then closed. The HTTP connections that are "upgraded" to the WebSocket protocol remain open for the duration of the WebSocket session (which can be long-lived). You can have many concurrent open WebSocket connections that are all distinct from one another while new HTTP connections are regularly serviced, all by the same server. The TCP logic above is used to keep track of which packets to/from the same server/port belong to which connection.
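For illustration, the upgrade exchange looks roughly like this on the wire (the path /chat, the Host value, and the key/accept pair are the example values from RFC 6455); both the request and the 101 response travel over the same TCP connection that then carries the WebSocket frames:

GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=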
FYI, you may have heard about NAT (Network Address Translation). This is commonly used to allow private networks (like a home or corporate network) to interface with a public network (like the internet). With NAT, a server may see multiple clients as having the same client IP address even though they are physically different computers on a private network. With NAT, multiple computers are routed through a common IP address, but NAT still guarantees that the client IP address and client port number remain a unique combination, so the above scheme still works. When using NAT, an incoming packet destined for a particular client arrives at the shared IP address. The IP/port is then translated to the actual client IP address and port number on the private network, and the packet is forwarded to that device. The server is generally unaware of this translation and packet forwarding. Because the NAT router still maintains the uniqueness of the client IP/client port combination, the server's logic works just fine even though it appears that many clients are sharing a common IP address. Note, home network routers are usually configured to use NAT, since all computers on the home network "share" the one public IP address that your router has when accessing the internet.
You will not enable multiple sockets; there is no need for it. You will have multiple connections. It's a little different, but you understand it well. For UDP there's nothing to do, because there are no connections.
In TCP, if two different machines connect to the same port on a third machine, there are two distinct connections because the source IPs differ. If the same machine (or two machines behind NAT, or otherwise sharing the same IP address) connects twice to a single remote end, the connections are differentiated by source port; the same machine cannot open two connections from the same source port to the same destination.
I am trying to setup peer to peer networking and am trying to understand how this works.
Normally, in a client-to-server connection, I will connect to the server IP and port. Behind the scenes, this creates a client socket bound to a local port at the local IP, and the packet is sent to the router. The router will then NAT the local IP and local port to the client's public IP and a different public client port, with the server IP and port as the destination.
When the server responds, the router de-NATs the public client IP and public client port back to the local IP and local port, and the packet arrives at the computer.
In peer-to-peer networking, I may have the peer's public IP, but it is shared by many machines and the router hasn't allowed a connection yet, so there isn't an open port I can send the data to.
There was then an option that both peers contact a server. That opens a port on the router. Then the peers send packets to each other's client port.
However, usually the router will only accept packets from the same IP the request was made to, so the two peers cannot reuse the server's connection.
How do the two peers talk to each other in this scenario?
Peer-to-peer networking works exactly the same way as client/server networking. Only one of the peers will become a server and the other a client.
Normally in a peer-to-peer app like bittorrent all peers are also servers but of course for any individual connection one machine must take the role of the client. However a single peer may have multiple connections. So for any single peer some of the connections to it will be server sockets and some will be client sockets.
How this works with NAT is exactly the same as a client/server architecture. You must configure your router to NAT back to your peer application in order for others to connect to it. If not then your peer can only connect to other peers but other peers cannot connect to you. For example, if your bittorrent client is generally acting slow, not managing to get a lot of connections and not managing to finish downloading some torrents this often signifies that you have not configured your router's port forwarding back to your PC for your bittorrent client.
For the use-case of non-expert users (consumers) there are several ways to get around NAT automatically without requiring your users to configure their routers. The most widely used method is UPnP (Universal Plug and Play). However a lot of more expert users who can configure their own routers often disable UPnP because it is a fairly well known DDoS target. So if you do decide to use UPnP you should make it optional for more advanced users to disable it if they don't want to use it.
For cases where you need a guaranteed connection regardless of router configuration, your app cannot be 100% peer-to-peer. You'd need a relay server that acts as a server to both peers and forwards packets from the sending peer to the receiving peer. Of course, the disadvantage of this is that you now have the fixed cost of maintaining a server to support your app, just like traditional client/server systems, but in this case you're using peer-to-peer to reduce server costs, not eliminate the server.
One example of this "hybrid" approach is cryptocurrencies like Bitcoin and Ethereum. They need a core group of servers to exist in order to work. However, for these protocols the servers run the same software as the clients - they're all just nodes. The only difference is that you don't shut down the servers, whereas most people quit their bitcoin wallet once they're done using it (unless they're mining). Another similar example is the TOR network: there is a set of core TOR nodes that act as the "server" part of the network, ensuring that the network always exists.
You said it yourself: "peers send packets to each other's client port". Therefore, the router will "accept packets from the same IP the request was made to".
Say, Alice is behind router A and Bob is behind router B.
Having learned their public endpoints from a server, Alice will send UDP packets to Bob's public IP, and Bob will send UDP packets to Alice's.
Having seen Alice talk to Bob's IP, router A will accept UDP packets from Bob.
Having seen Bob talk to Alice, router B will accept UDP packets from her as well.
That is, some initial packets might be rejected as coming out of the blue, but after both parties have initiated communication from their side, the routers will have no reason to block what follows.
In terms of Symmetric NAT Traversal using STUN 2003, by sending a packet to Bob, Alice is creating a door for Bob in A. On the other side, by sending a packet to Alice, Bob is creating a door for Alice in B.
The trick in UDP hole punching seems to be for the routers to reuse the same NAT tunnel for different IPs - so that the port discovered by a server is the same as the port reused for direct communication.
We can talk with different IPs from a normal UDP socket (by skipping connect and using sendto), so it's kind of logical that a tunneled socket would be able to do the same.
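A minimal sketch of that in Python (the local port and the peer's public endpoint are hypothetical values that would, in practice, be learned via the rendezvous server): sending to the peer's public address from the same local port opens the "door" in our own router, after which the peer's packets can come through.

import socket

# Hypothetical values; in reality both are learned through the server.
MY_LOCAL_PORT = 40000
PEER_PUBLIC_ENDPOINT = ("203.0.113.7", 40001)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", MY_LOCAL_PORT))  # same port used when contacting the server
sock.settimeout(2.0)

for _ in range(5):
    # Outgoing packet punches the hole: our NAT will now accept replies
    # from PEER_PUBLIC_ENDPOINT addressed to MY_LOCAL_PORT.
    sock.sendto(b"hello?", PEER_PUBLIC_ENDPOINT)
    try:
        data, addr = sock.recvfrom(1500)
        print("got", data, "from", addr)
        break
    except socket.timeout:
        pass  # the peer may not have punched its side yet; retry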
I read something I found contradictory with my current understanding of ports. If you google "how many ports does a server have", the first thing to come up states the following:
The server generally only ever uses one port, no matter how many clients are connected. It is the tuple of (client IP, client port, server IP, server port) that must be unique for each TCP connection - so the limit of 65535 ports is only relevant for how many connections a single client can make to a single server.
I thought each time a client establishes a connection to a server, a socket is created using a regular port for the connection between the two?
If not, does it mean that a server can have more clients connected to it than the maximum number of regular ports?
I thought each time a client establishes a connection to a server, a socket is created using a regular port for the connection between the two?
The term "port" in this context is being used to describe, essentially, an address. The port number, along with the IP address, uniquely identifies one endpoint of the network.
Not only does the server endpoint generally only use a single port number, it would be a lot more difficult to make connections to the server if it didn't, because what port number would the client endpoint use to request the connection? DNS allows a client to look up the IP address, if the IP address is not already known, but there's no such facility for port numbers. So the port number has to be known in advance.
So, no…it is not the case that each time a client makes a connection, a socket is created using a "regular port" for the connection between the two. There's no "regular port". There's just "port", all ports are the same, and they are simply a number that identifies the endpoint's address.
If not, does it mean that a server can have more clients connected to it than the maximum number of regular ports?
Yes, it can. On the server end, the port number is (generally) always the same. For example, an HTTP server will (generally) use port 80. The listening socket will have as its port number "80", as will the server-side socket for each connection.
The port number can be reused like this, because each socket has other identifying characteristics besides the IP address and port number. In particular, the server's listening socket is unique; there is only one socket on the server end that has that IP address, that port number, and which has no connections (i.e. is listening).
Once a connection is made, a new socket is created to represent that connection. And that socket can be uniquely identified because, unlike the listening socket, it does have a connection (i.e. a remote endpoint) associated with it, along with the IP address and port number. When the client endpoint sends data to the server, the network layer can tell which socket that data should be delivered to, because the data comes from a specific remote endpoint, which also has a unique IP address and port number.
The combination of the server's and client's unique IP addresses and port numbers uniquely identifies that connection, making it distinct from any other socket on the server that may have the same server-side endpoint's IP address and port number.
In the text you quoted, this part is describing exactly this distinct, unique identification of a socket:
It is the tuple of (client IP, client port, server IP, server port) that must be unique for each TCP connection
In this way, the server's IP address and port number can be used an indefinite number of times (not counting other constrained resources on the server, like memory and tables that hold the state of the network connections).
The limitation on port numbers only comes into play when trying to create additional listening sockets (for servers) or additional connections (for clients). Servers typically won't run out of port numbers unless they are implementing a protocol that requires the server to create a connection back to a client's listening socket (this is uncommon), and clients won't run out of port numbers unless they try to make a very large number of connections.
It is this latter limit that this part of the text you quoted is referring to:
the limit of 65535 ports is only relevant for how many connections a single client can make to a single server.
RELATED POST
The post here In UNIX forum describes
The server will keep on listening on a port number.
The server will accept a client's connect() request using accept(). As soon as the server accepts the client request, the kernel allocates a random port number for the server for further send() and receive(), since the same port number on the server can't be used for sending as well as listening, and the previous port is still listening for new connections.
QUESTION
I have a server application S which is constantly listening on port 18333 (this is actually the bitcoind testnet). When another client node C connects to it on, say, port 53446 (a random port), then according to the above post, S will be able to send/receive data of C only from port 53446.
But when I run a bitcoind testnet, it communicates perfectly with another node using only one socket connection on port 18333, with no need for another port for sending/receiving. Below is a snippet, and I even verified this:
bitcoin-cli -testnet -rpcport=16591 -datadir=/home/user/mytest/1/
{
"id": 1,
"addr": "178.32.61.149:18333"
}
Can anyone help me understand how TCP socket connections actually work here?
A TCP connection is identified by a socket pair, and this is uniquely identified by 4 parameters:
source ip
source port
dest ip
dest port
For every connection that is established to a server, the socket is basically cloned and the same port is used. So for every connection you have a socket using the same server port, which means you have n+1 sockets using the same port when there are n connections.
The kernel's TCP stack is able to distinguish between all these sockets and connections because a socket is either in the listening state or belongs to a socket pair where all 4 parameters are considered.
The second quoted point is therefore wrong, because the same port is being used, as I explained above.
The server will accept a client's connect() request using accept(). As soon as the server accepts the client request, the kernel allocates a random port number for the server for further send() and receive().
For normal TCP traffic this is not the case. If a web server is listening on port 80, all packets sent back to the client will be sent from server port 80 (this can be verified with Wireshark, for example) - but there will be a different socket for each connection (srcIP:port - dstIP:port). That information is carried in the headers of the network packets: the IP addresses and protocol code (TCP, UDP or other) in the IP header, and the port numbers as part of the TCP or UDP header.
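This is easy to check from the client side too; here is a small Python sketch (example.com and port 80 are just an illustrative destination): the remote endpoint of the connected socket stays at port 80 for the whole exchange, while the local endpoint is an ephemeral port chosen by the client's OS.

import socket

c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(("example.com", 80))  # illustrative server
c.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
print("remote endpoint:", c.getpeername())  # (server IP, 80) -- the server never switches ports
print("local endpoint :", c.getsockname())  # (client IP, some ephemeral port)
c.recv(4096)
c.close()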
But changing ports can happen when communicating over FTP, where there can be a control port (usually 21) and a negotiated data port.
Usually a web server listens for any incoming connection on port 80. So my question is: isn't the general concept of socket programming that port 80 is used to listen for incoming connections, and that after the server accepts a connection, it will use another port, e.g. port 12345, to communicate with the client? But when I look in Wireshark, the server is always using port 80 during the communication. I am confused here.
So what about https://www.facebook.com:443? It has hundreds of thousands of connections to it per second. Is it possible for a single port to handle such a large amount of traffic?
A particular socket is uniquely identified by a 5-tuple (i.e. a list of 5 particular properties.) Those properties are:
Source IP Address
Destination IP Address
Source Port Number
Destination Port Number
Transport Protocol (usually TCP or UDP)
These parameters must be unique for sockets that are open at the same time. Where you're probably getting confused here is what happens on the client side vs. what happens on the server side in TCP. Regardless of the application protocol in question (HTTP, FTP, SMTP, whatever,) TCP behaves the same way.
When you open a socket on the client side, it will select a random high-number port for the new outgoing connection. This is required; otherwise you would be unable to open two separate sockets on the same computer to the same server. Since it's entirely reasonable to want to do that (and it's very common in the case of web servers, such as having stackoverflow.com open in two separate tabs) and the 5-tuple for each socket must be unique, a random high-number port is used as the source port. However, each of those sockets will connect to port 80 at stackoverflow.com's webserver.
On the server side of things, stackoverflow.com can already distinguish between those two different sockets from your client, again, because they already have different client-side port numbers. When it sees an incoming request packet from your browser, it knows which of the sockets it has open with you to respond to because of the different source port number. Similarly, when it wants to send a response packet to you, it can send it to the correct endpoint on your side by setting the destination port number to the client-side port number it got the request from.
The bottom line is that it's unnecessary for each client connection to have a separate port number on the server's side because the server can already uniquely identify each client connection by its client IP address and client-side port number. This is the way TCP (and UDP) sockets work regardless of application-layer protocol.
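A quick way to see those random high-number source ports, as a Python sketch (example.com:80 is only an illustrative destination): two sockets from the same client to the same server share the destination IP and port but are given different local ports, so their tuples stay unique.

import socket

a = socket.create_connection(("example.com", 80))
b = socket.create_connection(("example.com", 80))

# Same destination (IP, 80) for both, but the OS picks a different
# ephemeral source port for each, which keeps the connections distinct.
print("socket a:", a.getsockname(), "->", a.getpeername())
print("socket b:", b.getsockname(), "->", b.getpeername())
a.close()
b.close()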
isn't the general concept of socket programming that port 80 is used to listen for incoming connections, and that after the server accepts a connection, it will use another port, e.g. port 12345, to communicate with the client?
No.
But when I look in Wireshark, the server is always using port 80 during the communication.
Yes.
I am confused here.
Only because your 'general concept' isn't correct. An accepted socket uses the same local port as the listening socket.
So what about https://www.facebook.com:443? It has hundreds of thousands of connections to it per second. Is it possible for a single port to handle such a large amount of traffic?
A port is only a number. It isn't a physical thing. It isn't handling anything. TCP is identifying connections based on the tuple {source IP, source port, target IP, target port}. There's no problem as long as the entire tuple is unique.
Ports are a virtual concept, not a hardware resource; it's no harder to handle 10,000 connections on 1 port than 1 connection each on 10,000 ports (it's probably much faster, even).
Not all servers are web servers listening on port 80, nor do all servers maintain lasting connections. Web servers in particular are stateless.
Your suggestion to open a new port for further communication is exactly what happens when using the FTP protocol, but as you have seen this is not necessary.
Ports are not a physical concept; they exist in a standardised form to allow multiple servers to be reachable on the same host without specialised multiplexing software. Such software does still exist, but for entirely different reasons (see: sshttp). What you see as a response from the server on port 80, the server sees as a reply to you on a not-so-random port the OS assigned your connection.
When a server listening socket accepts a TCP connection for the first time, the function Socket java.net.ServerSocket.accept() will return a new communication socket whose port number is the same as the port from java.net.ServerSocket.ServerSocket(int port).
If a client listens on a socket, at http://socketplaceonnet.com for example, how does it know that there is new content? I assume the server cannot send data directly to the client, as the client could be behind a router with no port forwarding, so a direct connection is not possible. The client could be a mobile phone which changes its IP address. I understand that for the client to be a listener, the server doesn't need to know the client's IP.
Thank you
A client socket does not listen for incoming connections, it initiates an outgoing connection to the server. The server socket listens for incoming connections.
A server creates a socket, binds the socket to an IP address and port number (for TCP and UDP), and then listens for incoming connections. When a client connects to the server, a new socket is created for communication with the client (TCP only). A polling mechanism is used to determine if any activity has occurred on any of the open sockets.
A client creates a socket and connects to a remote IP address and port number (for TCP and UDP). A polling mechanism can be used (select(), poll(), epoll(), etc) to monitor the socket for information from the server without blocking the thread.
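As a sketch of the polling idea from the two paragraphs above, here is a minimal Python server (port 9000 is a placeholder) using the standard selectors module, which wraps select()/poll()/epoll() depending on the platform: it waits for activity on the listening socket and on every accepted socket at the same time.

import selectors
import socket

sel = selectors.DefaultSelector()

# Server: create, bind, listen (placeholder port 9000).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

while True:
    # Wait until any registered socket has activity.
    for key, _ in sel.select():
        sock = key.fileobj
        if sock is srv:
            conn, addr = srv.accept()      # new client connected
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = sock.recv(4096)
            if data:
                sock.sendall(data)         # echo the data back
            else:                          # client closed its side
                sel.unregister(sock)
                sock.close()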
In the case that the client is behind a router which provides NAT (network address translation), the router re-writes the address of the client to match the router's public IP address. When the server responds, the router changes its public IP address back into the client's IP address. The router keeps a table of the active connections that it is translating so that it can map the server's responses to the correct client.
The TCP iterative server accepts a client's connection, processes it, completes all of the client's requests, and then disconnects. An iterative TCP server can only handle one client at a time; only when all of that client's requests have been satisfied can the server move on to the next one. If one client occupies the server, other clients cannot be served, so TCP servers seldom use the iterative server model.