TCP server - sockets

I have a question about network connections.
For instance, suppose a TCP server supports N connections simultaneously, each belonging to a different client host. The question is: how many sockets does the server need?
Thanks

I think this is a valid question and do not understand why it has been downvoted.
Before I continue, an important distinction must be made. A socket is a file descriptor, while a port is an "identifier" for a socket. File descriptors/sockets are owned by applications, so a port can be viewed as a way to route connections/packets to the correct application.
The way a web server (or any other TCP-based server) works is that you have a listen socket bound to a port (for example, 80). When a client connects to the server, a new socket is automatically created by the operating system (this is the socket returned by, for example, accept()). This socket is bound to the same local IP and port as the listen socket, but has a different remote IP/port. The operating system stores this mapping and routes packets belonging to this mapping to the new socket.
So the answer to your question is that only one listen socket is needed, but new sockets will be created as clients connect (and removed as they disconnect). The limit on the number of sockets (file descriptors) that an application can create is controlled by the OS.
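To make that concrete, here is a minimal sketch of such a server in C (the port number 8080 is an arbitrary choice for the example, and error handling is mostly omitted): one listening socket is created once, and accept() hands back a brand-new file descriptor for every client that connects.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
int main(void) {
    // One listening socket for the whole server.
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);              // arbitrary example port
    bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
    listen(listen_fd, SOMAXCONN);
    for (;;) {
        // Each accept() returns a NEW socket (file descriptor) for that
        // one client; listen_fd itself is never used for data transfer.
        int client_fd = accept(listen_fd, NULL, NULL);
        if (client_fd < 0)
            continue;
        // ... read()/write() on client_fd, then:
        close(client_fd);  // this client's socket goes away; the listening socket stays
    }
}
All the accepted sockets share the server's local IP and port; the OS tells them apart by the remote endpoint.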

Related

FTP Active Mode and multiplexing

FTP RFC 959 specifies that the data connection is opened by the server from port 20 (the default) to a random port on the client, which is made known to the server through a PORT h1,h2,h3,h4,p1,p2 command. This is called active mode transmission,
so the host is h1.h2.h3.h4 while the port is p1 * 256 + p2.
My question is: how can the server initiate multiple connections to multiple clients from the same port, which is 20 by default?
Imagine client c1 has an established connection with server data port 20 and is transferring data. How can client c2 establish a connection with the server if data port 20 is already in use by a TCP connection?
A server implementing Berkeley sockets goes through a couple of phases when accepting connections. A lot of the plumbing is generally handled by the framework or the operating system; I'll try to point that out as I explain below with some pseudo-code.
1: Binding to the listening port
The server first asks the kernel to bind to a specific port and start listening on it:
void* socket = bind(20); // pseudo-code: create a socket, bind it to port 20, and listen()
2: Accepting a connection
This is probably the point that causes some misconceptions. The server gets a connection through the bound socket, and the kernel creates a new socket to handle the communication with the new client. Note that no new local port is involved: the new socket keeps the same local port (20) and is told apart from other connections by the client's remote IP and port. This is handled by the operating system.
void* clientSocket;
// Block until a client connects. When it does,
// use 'clientSocket' (a new socket) to handle the new client.
socket->accept(clientSocket);
// We'll use 'clientSocket' to communicate with the client.
clientSocket->send(someBuffer, ...);
// 'socket' is free again to accept more connections,
// so we can do it again:
void* clientSocket2;
socket->accept(clientSocket2);
// Of course, this is typically done in a loop that processes new connections all the time.
As a summary, what's happening is that the listener socket (on port 20) is used only for accepting new connections. After a client establishes a connection, a new socket is created to handle that specific connection; it shares the listener's local port.
You can test this by examining the socket you get as a client after establishing a connection. You'll see that the remote port is still 20; what distinguishes your connection from everyone else's is your own local (client-side) IP and port.
All of this is shared by HTTP, FTP, and any other protocol that uses TCP sockets under the hood.
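A quick way to check this yourself, sketched in C under the assumption that fd is a socket returned by accept(): getsockname() reports the local endpoint (the same port the server listens on) and getpeername() reports the client's endpoint.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
/* Call this on the fd returned by accept(). */
static void print_endpoints(int fd) {
    struct sockaddr_in local, remote;
    socklen_t len = sizeof local;
    getsockname(fd, (struct sockaddr *)&local, &len);
    len = sizeof remote;
    getpeername(fd, (struct sockaddr *)&remote, &len);
    printf("local  %s:%d\n", inet_ntoa(local.sin_addr), ntohs(local.sin_port));   // same port as the listener
    printf("remote %s:%d\n", inet_ntoa(remote.sin_addr), ntohs(remote.sin_port)); // unique per client
}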

What exactly happens when a server socket accepts client connections?

I'm studying socket programming, and the server socket's accept() is confusing me. I wrote two scenarios for server socket accept(); please take a look:
When the server socket accept()s, it creates a new (client) socket that is bound to a port different from the one the server socket is bound to. Socket communication is then done via the newly bound port, and the server socket waits (for accept() only) for another client connection on the originally bound port.
I think this is not quite correct, because (1) a port maps to a single process and (2) socket accept()ing is an intra-process matter, and a single process can have multiple sockets. So I thought of a second scenario, based on some Stack Overflow answers:
When the server socket accept()s, it creates a new (client) socket that is not bound to any specific port. When a client communicates with the server, it uses the port bound to the server socket (which accept()s connections), and which client socket actually does the communication is resolved by the (sourceIP, sourcePort, destIP, destPort) tuple from the TCP header(?) at the transport level (this is also suspicious, because I thought a socket was an application-level object).
This scenario also raises some questions. If socket communication still uses the server socket's port, i.e. the client sends its messages to the server socket's port, doesn't it use the server socket's backlog queue? I mean, how can messages from a client be distinguished between connect() and read() or write()? And how can they be resolved to each client socket on the server, without any port binding?
If one of my scenarios is correct, would you answer the questions that follow it? Or perhaps both of my scenarios are wrong. I'd be very thankful if you could guide me to the correct answers, or at least towards some relevant texts to study.
When you create a socket, do a bind() on that socket, and then a listen(), what you have is called a listening socket.
When a connection is established, this socket is basically cloned to a new socket, called the servicing socket. The port to which it is bound is the same as the original port.
But there is an important distinction between this socket and the listening socket from before: it is part of a socket pair.
It is the socket pair that uniquely identifies the connection. As there are 2 sockets in the picture for a socket pair, there are 2 IP addresses and 2 ports, one for each end of the TCP communication channel. During the cloning of the servicing socket, the TCP kernel allocates what is called a TCB (Transmission Control Block) and stores those 2 IP addresses and 2 ports in it. The TCB also contains the socket number that belongs to it.
Each time a TCP segment comes in, the TCP header is checked to see whether or not it is a SYN. A SYN means connection establishment, so the kernel goes through its list of listening sockets. If it is a normal TCP packet, not a SYN, both port numbers are in the TCP header and the IP addresses are part of the IP header, so using this information the kernel is able to find the TCB that belongs to this TCP connection. (For a SYN this information is also there, but as I said, for a SYN only the listening sockets have to be processed.)
That is in a nutshell how it works.
This information can be found in UNIX Network Programming: The Sockets Networking API. There the link to sockets is described in detail, whereas other reference material usually skips it and highlights the nitty-gritty of TCP instead.
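Purely as an illustration of the TCB idea (real kernels use hash tables and far richer TCBs; every name below is invented for this sketch), the demultiplexing step could look like this:
#include <stdint.h>
/* Toy TCB: just the two endpoints that identify the connection,
   plus the socket it maps to. */
struct tcb {
    uint32_t local_ip, remote_ip;
    uint16_t local_port, remote_port;
    int      socket_fd;
};
/* Linear search over a toy table; a real kernel hashes the 4-tuple. */
int find_socket(const struct tcb *table, int n,
                uint32_t lip, uint16_t lport,
                uint32_t rip, uint16_t rport) {
    for (int i = 0; i < n; i++) {
        const struct tcb *t = &table[i];
        if (t->local_ip == lip && t->local_port == lport &&
            t->remote_ip == rip && t->remote_port == rport)
            return t->socket_fd;  // deliver the segment to this socket
    }
    return -1;  // no match; for a SYN, fall back to the listening sockets
}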
When the server socket accept()s, it creates a new (client) socket that is bound to a port different from the one the server socket is bound to. Socket communication is then done via the newly bound port, and the server socket waits (for accept() only) for another client connection on the originally bound port.
No.
I think this is not quite correct
It is a wrong answer.
because (1) a port maps to a single process
That doesn't mean anything relevant.
and (2) socket accept()ing is an intra-process matter
Nor does that. It doesn't appear to mean anything at all actually.
and a single process can have multiple sockets.
That's true, but it has no bearing on why your answer is wrong. The reason your answer is wrong is that no second port is used.
When the server socket accept()s, it creates a new (client) socket that is not bound to any specific port
No. It creates a second socket that inherits everything from the server socket: port number, buffer sizes, socket options... everything except the file descriptor and the LISTENING state, and maybe something else I've forgotten. It then sets the remote IP:port of the socket to that of the client and puts the socket into the ESTABLISHED state.
and when a client communicates with the server
The client has already communicated with the server. That's why we are creating this socket.
it uses the port bound to the server socket (which accept()s connections), and which client socket actually does the communication is resolved by the (sourceIP, sourcePort, destIP, destPort) tuple from the TCP header(?) at the transport level
This has already happened.
This is also suspicious, because I thought a socket was an application-level object
No, it isn't. A socket is a kernel-level object with an application-level file descriptor to identify it.
If socket communication still uses the server socket's port, i.e. the client sends its messages to the server socket's port, doesn't it use the server socket's backlog queue?
No. The backlog queue is for incoming connect requests, not for data. Incoming data goes into the socket receive buffer.
I mean, how can messages from a client be distinguished between connect() and read() or write()?
Because a connect() request sets special bits in the TCP header (the SYN flag). The final part of the handshake can be combined with data.
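For reference, these are the standard TCP header flag bits from RFC 793 (the macro names here are mine); the SYN bit is what marks a segment as a connection request:
/* TCP header flag bits (RFC 793). connect() triggers the three-way
   handshake (SYN, SYN|ACK, ACK); ordinary data segments carry ACK
   (often with PSH) but never SYN, which is how the kernel tells a
   connection request apart from data on an established connection. */
#define TCP_FLAG_FIN 0x01  /* no more data from sender */
#define TCP_FLAG_SYN 0x02  /* synchronize: connection request */
#define TCP_FLAG_RST 0x04  /* reset the connection */
#define TCP_FLAG_PSH 0x08  /* push data to the application promptly */
#define TCP_FLAG_ACK 0x10  /* acknowledgement field is valid */
#define TCP_FLAG_URG 0x20  /* urgent pointer is valid */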
And how can they be resolved to each client socket on the server, without any port binding?
Port binding happens the moment the socket is created in the call to accept(). You invented this difficulty yourself. It isn't real.
If one of my scenarios is correct, would you answer the questions that follow it?
Neither of them is correct.
Or perhaps both of my scenarios are wrong. I'd be very thankful if you could guide me to the correct answers, or at least towards some relevant texts to study.
Surely you already have relevant texts to study? If you don't, you should read RFC 793, or the relevant chapters of W. R. Stevens, TCP/IP Illustrated, Volume I. You have several major misunderstandings here.
From the Linux programmer's manual, as found via man 2 accept:
The accept() system call is used with connection-based socket
types (SOCK_STREAM, SOCK_SEQPACKET). It extracts the first connection
request on the queue of pending connections for the listening socket,
sockfd, creates a new connected socket, and returns a new file
descriptor referring to that socket. The newly created socket is not
in the listening state. The original socket sockfd is unaffected by
this call.
So what happens is that you have a listening TCP socket. Someone requests to connect().
You then call accept(). The old listening socket remains in listening mode, while a new socket is created in connected mode. Its port is the original listening port.
That does not interfere with the listening socket, because the new socket does not listen for incoming connections.
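In C, that man-page description corresponds to something like the following sketch (listen_fd is assumed to be already bound and listening):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
/* accept() fills in the client's address and returns a brand-new,
   connected descriptor; listen_fd is unchanged and keeps listening. */
int accept_one(int listen_fd) {
    struct sockaddr_in client;
    socklen_t len = sizeof client;
    int fd = accept(listen_fd, (struct sockaddr *)&client, &len);
    if (fd >= 0)
        printf("new socket %d for client %s:%d\n", fd,
               inet_ntoa(client.sin_addr), ntohs(client.sin_port));
    return fd;  // connected socket, not in the listening state
}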

socket programming - why is the web server still using listen port 80 to communicate with the client even after it has accepted the connection?

Usually a web server listens for incoming connections on port 80. So my question is: shouldn't the general concept of socket programming be that port 80 is for listening for incoming connections, and that after the server accepts a connection it uses another port, e.g. port 12345, to communicate with the client? But when I look into Wireshark, the server is always using port 80 during the communication. I am confused here.
So what about https://www.facebook.com:443? It gets hundreds of thousands of connections per second. Is it possible for a single port to handle such a large amount of traffic?
A particular socket is uniquely identified by a 5-tuple (i.e. a list of 5 particular properties). Those properties are:
Source IP Address
Destination IP Address
Source Port Number
Destination Port Number
Transport Protocol (usually TCP or UDP)
These parameters must be unique for sockets that are open at the same time. Where you're probably getting confused is what happens on the client side vs. the server side in TCP. Regardless of the application protocol in question (HTTP, FTP, SMTP, whatever), TCP behaves the same way.
When you open a socket on the client side, the OS selects a random high-numbered port for the new outgoing connection. This is required; otherwise you would be unable to open two separate sockets on the same computer to the same server. Since it's entirely reasonable to want to do that (and it's very common in the case of web servers, such as having stackoverflow.com open in two separate tabs), and the 5-tuple for each socket must be unique, a random high-numbered port is used as the source port. However, each of those sockets will connect to port 80 at stackoverflow.com's web server.
On the server side of things, stackoverflow.com can already distinguish between those two different sockets from your client, again, because they already have different client-side port numbers. When it sees an incoming request packet from your browser, it knows which of the sockets it has open with you to respond to because of the different source port number. Similarly, when it wants to send a response packet to you, it can send it to the correct endpoint on your side by setting the destination port number to the client-side port number it got the request from.
The bottom line is that it's unnecessary for each client connection to have a separate port number on the server's side because the server can already uniquely identify each client connection by its client IP address and client-side port number. This is the way TCP (and UDP) sockets work regardless of application-layer protocol.
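You can watch this from the client side. The sketch below opens two connections to the same destination (the address 93.184.216.34:80 is only a placeholder; substitute any reachable server) and prints the two different source ports the OS picked:
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
static void show_local_port(int fd) {
    struct sockaddr_in local;
    socklen_t len = sizeof local;
    getsockname(fd, (struct sockaddr *)&local, &len);
    printf("local port %d\n", ntohs(local.sin_port));
}
int main(void) {
    struct sockaddr_in srv;
    memset(&srv, 0, sizeof srv);
    srv.sin_family = AF_INET;
    srv.sin_port = htons(80);
    inet_pton(AF_INET, "93.184.216.34", &srv.sin_addr);  // placeholder address
    int a = socket(AF_INET, SOCK_STREAM, 0);
    int b = socket(AF_INET, SOCK_STREAM, 0);
    connect(a, (struct sockaddr *)&srv, sizeof srv);
    connect(b, (struct sockaddr *)&srv, sizeof srv);
    show_local_port(a);  // e.g. 50001
    show_local_port(b);  // e.g. 50002: different source port, same destination port 80
    return 0;
}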
shouldn't the general concept of socket programming be that port 80 is for listening for incoming connections, and that after the server accepts a connection it uses another port, e.g. port 12345, to communicate with the client?
No.
But when I look into Wireshark, the server is always using port 80 during the communication.
Yes.
I am confused here.
Only because your 'general concept' isn't correct. An accepted socket uses the same local port as the listening socket.
So what about https://www.facebook.com:443? It gets hundreds of thousands of connections per second. Is it possible for a single port to handle such a large amount of traffic?
A port is only a number. It isn't a physical thing. It isn't handling anything. TCP is identifying connections based on the tuple {source IP, source port, target IP, target port}. There's no problem as long as the entire tuple is unique.
Ports are a virtual concept, not a hardware resource; it's no harder to handle 10,000 connections on 1 port than 1 connection each on 10,000 ports (it's probably even faster).
Not all servers are web servers listening on port 80, nor do all servers maintain lasting connections. Web servers in particular are stateless.
Your suggestion to open a new port for further communication is essentially what happens in the FTP protocol, but as you have seen, it is not necessary.
Ports are not a physical concept; they exist in a standardised form to allow multiple servers to be reachable on the same host without specialised multiplexing software. Such software does still exist, but for entirely different reasons (see: sshttp). What you see as a response from the server on port 80, the server sees as a reply to you on a not-so-random port the OS assigned to your connection.
When a server listening socket accepts a TCP request for the first time, a function such as java.net.ServerSocket.accept() returns a new communication socket whose port number is the same as the port given to java.net.ServerSocket.ServerSocket(int port).

In TCP, if the server uses another port to communicate, how will it inform the client?

I'm studying socket programming in C. In TCP communication, a classical situation is that once the server accept()s a connect() request from a client, it forks a new process to handle the communication. Then the child process will use another port to communicate with the client. My question is: how does the server inform the client that it will use another port rather than the original one for the subsequent communication? Which field in the TCP header, and which phase of the handshake, reflects the port change?
For example, process PA on server A is listening on port 80. Now process PB on client B wants to connect to A's port 80. Once PA accepts PB's connection request, it forks a new process PA1 to handle the communication with PB. Am I right so far? Next, will PA1 still use port 80, or another port such as 1234, to communicate with PB? If it still uses 80, how can server A route PB's communication to PA1? If it uses another port like 1234, how will server A inform PB to use 1234 for the subsequent communication?
A TCP connection is uniquely identified by the tuple (source IP, source port, destination IP, destination port). This tuple is used by the OS to "bind" the TCP connection to a process, i.e. to know which process it should deliver the TCP packets to.
When the server socket accepts the TCP connection and forks, the child inherits from the original process, so it effectively takes over the binding of the TCP connection. The client on the remote machine does not know, and does not need to know, that this happens. The whole network keeps seeing the same thing: packets with the same tuple flowing through the network.
At the same time, the original process keeps listening for new TCP connections. When a new connection request arrives, even from the same machine as before, the source port must be different. From the OS's perspective it is a different tuple, so it can distinguish the TCP packets and deliver them to the right process.
You may ask how the client on the remote machine knows it has to use another port to initiate a new connection. This is simply because the client's OS knows (or is informed by the socket library) that the process is creating a separate new connection, and it assigns another unique port number to that process. That's how multiple processes can communicate with the same server port without messages getting mixed up.
To put it briefly, the accept-and-fork operation on the server is just a transfer of ownership of a TCP connection binding to another process. Nothing changes in the server port used for the communication.
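A sketch of that classic accept-and-fork pattern in C (listen_fd is assumed to be bound and listening already; error handling is trimmed):
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>
void serve(int listen_fd) {
    signal(SIGCHLD, SIG_IGN);      // let the kernel reap finished children
    for (;;) {
        int client_fd = accept(listen_fd, NULL, NULL);
        if (client_fd < 0)
            continue;
        if (fork() == 0) {         // child: takes over this connection
            close(listen_fd);      // the child does not accept()
            // ... read()/write() on client_fd: same server port; no port
            // change is ever communicated to the client.
            close(client_fd);
            _exit(0);
        }
        close(client_fd);          // parent: drop its copy, go back to accept()
    }
}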
In TCP communication, a classical situation is that once the server accept()s a connect() request from a client, it forks a new process to handle the communication.
Correct, or start a thread.
Then the child process will use another port to communicate with the client.
No. It will use the same port, via the accepted socket, inherited in the case of a child process.
My question is: how does the server inform the client that it will use another port rather than the original one for the subsequent communication?
It doesn't, because this isn't the 'classical situation'.
Which field in the TCP header, and which phase of the handshake, reflects the port change?
None. It doesn't happen that way. It would be a waste of a port.
For example, process PA on server A is listening on port 80. Now process PB on client B wants to connect to A's port 80. Once PA accepts PB's connection request, it forks a new process PA1 to handle the communication with PB. Am I right so far?
Yes.
Next, will PA1 still use port 80, or another port such as 1234, to communicate with PB?
Port 80.
If it still uses 80, how can server A route PB's communication to PA1?
By inheritance of the accepted socket.
If it uses another port like 1234, how will server A inform PB to use 1234 for the subsequent communication?
Doesn't happen.
The client chooses this port, not the server. The client will choose a port that's not already in use on that particular machine, and use that port to tell its connections apart (just as the server does).
For example, say the client has IP address 1.2.3.4 and the server has IP address 4.3.2.1 and listens on port 80. If the client has two connections to that server and port, how will it tell them apart? Simple: it assigns a different source port to each one. Say one gets port 50001 and the other gets port 50002; then the two connections are:
1.2.3.4:50001 -> 4.3.2.1:80
and
1.2.3.4:50002 -> 4.3.2.1:80
The server knows these ports because it gets them from the TCP SYN packets sent from the client to the server. So the client tells the server, not the other way around.

accept() function implementation in Unix

I have looked at the BSD code but got lost somewhere :(
The reason I want to check is this:
TCP RFC (http://www.ietf.org/rfc/rfc793.txt) sec 2.7 states:
"To provide for unique addresses within each TCP, we concatenate an internet address identifying the TCP with a port identifier to create a socket which will be unique throughout all networks connected together. A connection is fully specified by the pair of sockets at the ends."
Does this mean: socket = local (IP + port)?
If yes, then the accept() function of Unix returns a new socket descriptor. Does that mean a new socket is created, and in turn a new port, for responding to client requests?
PS: I am a novice in network programming.
[UPDATE] I understood what I read at How does the socket API accept() function work?.
My only doubt is: if socket = (local port + local IP), then a new socket would mean a new port for the same IP. Going by this logic, accept() returns a new socket, and thus a new port is created, so all sending should occur through this new port.
Is what I understand here correct?
You are mostly correct. When you accept(), a new socket is created and the listening socket stays open to allow more incoming connections, but the new socket uses the same local port number as the listening socket.
A connection is defined by a 5-tuple: protocol, local-addr, local-port, remote-addr, remote-port.
Therefore, each accepted connection is unique even though they all share the same local port number, because the remote IP/port is always different. The listening socket has no remote IP/port, and so is also unique.
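That last property is easy to observe: getpeername() fails with ENOTCONN on a listening socket, because there is no remote endpoint yet. A small sketch:
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
/* Pass a listening socket and getpeername() fails with ENOTCONN;
   pass an accepted socket and the same call succeeds. */
void check_peer(int fd) {
    struct sockaddr_storage peer;
    socklen_t len = sizeof peer;
    if (getpeername(fd, (struct sockaddr *)&peer, &len) < 0)
        printf("no peer: %s\n", strerror(errno));  // ENOTCONN for a listener
    else
        printf("connected socket, peer present\n");
}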