Maintaining the source IP during data redirection - sockets

I'm new to this site.
This is my problem: I have 3 different sources (IPs 192.168.1.1, 192.168.1.2 and 192.168.1.3) that send data (via UDP) to a single receiver (IP 192.168.1.4) on listening port 8900 (the same port for every source). I can redirect the traffic to 3 different listening ports on the receiver (one per source, i.e. 192.168.1.1 --> port 8910, 192.168.1.2 --> 8920 and 192.168.1.3 --> 8930); 3 processes are listening on those ports.
If I lose a packet during transmission, the process that should have received it tries to send a retransmission request to the sender, but the sender address it sees is wrongly localhost instead of the original source.
Is there a way to preserve the original source IP through the redirection?
Sorry for my bad English and thanks for your help.

getsockopt(..., SO_ORIGINAL_DST) is what you are looking for.
(Assuming that you did iptables -j REDIRECT or similar.)
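For reference, a minimal C sketch of how SO_ORIGINAL_DST is typically queried on Linux after an iptables REDIRECT. The option is defined for redirected TCP connections, so whether it covers a UDP setup like yours depends on how the redirect is actually done:

/* Sketch: recover the pre-REDIRECT destination of an accepted connection.
 * Assumes Linux with an iptables "-j REDIRECT" rule in place. */
#include <stdio.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/netfilter_ipv4.h>   /* defines SO_ORIGINAL_DST */

int print_original_dst(int connfd)
{
    struct sockaddr_in orig;
    socklen_t len = sizeof(orig);

    if (getsockopt(connfd, SOL_IP, SO_ORIGINAL_DST, &orig, &len) < 0) {
        perror("getsockopt(SO_ORIGINAL_DST)");
        return -1;
    }
    /* orig now holds the destination address/port the client originally used */
    printf("original destination: %s:%d\n",
           inet_ntoa(orig.sin_addr), ntohs(orig.sin_port));
    return 0;
}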

Is sending an ACK back in TCP/IP a software or hardware thing?

How do we know a port is listening?
Is it possible to know that a port is listening, even if it does not respond?
I.e. when I just bind a socket in some program, but I do not actually send an ACK back.
1: The packet is received and analyzed.
2: If the packet meets the criteria, an ACK is sent back.
I'd like to know whether I can program this myself, or whether it's the network card hardware that is responsible for the ACK.
How do we know a port is listening?
Your local stack knows. An application needs to register the port.
Is it possible to know that a port is listening, even if it does not respond?
If there's no response to a SYN sent out, the host may be unreachable, the destination port not listening, or the SYN filtered.
I.e. when I just bind a socket in some program, but I do not actually send an ACK back.
You don't have to worry about SYN and ACK; that's handled by the OS's IP stack. Just set up a listener on the destination and then connect the socket from the source. If the socket opens, you can start talking through the pipe.
I'd like to know whether I can program this myself, or whether it's the network card hardware that is responsible for the ACK.
ACKs are part of the TCP transport protocol handled by the OS's IP stack.
You can find out the list of sockets and connections using the netstat command. Use netstat -a and grep for the port in question; that tells you whether there is a socket listening on the port you want.
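If you want to check from code rather than with netstat, the simplest probe is just a connect() attempt: the kernel sends the SYN for you, success means something is listening, ECONNREFUSED means nothing is, and a timeout usually means the SYN was filtered or the host is unreachable. A minimal C sketch (the address and port are made-up examples):

/* Sketch: probe whether host:port has a listener by attempting a TCP connect.
 * The SYN/ACK exchange is handled entirely by the kernel's TCP stack. */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(8900);                      /* port to probe (example) */
    inet_pton(AF_INET, "192.168.1.4", &dst.sin_addr);  /* host to probe (example) */

    if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == 0)
        printf("port is listening\n");
    else if (errno == ECONNREFUSED)
        printf("nothing listening on that port\n");
    else
        printf("no answer (filtered or unreachable?): %s\n", strerror(errno));

    close(fd);
    return 0;
}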

How does a TCP server handle multiple sockets listening on the same port? [duplicate]

This might be a very basic question but it confuses me.
Can two different connected sockets share a port? I'm writing an application server that should be able to handle more than 100k concurrent connections, and we know that the number of ports available on a system is around 60k (16-bit). A connected socket is assigned to a new (dedicated) port, so that would mean the number of concurrent connections is limited by the number of ports, unless multiple sockets can share the same port. So the question.
TCP / HTTP Listening On Ports: How Can Many Users Share the Same Port
So, what happens when a server listens for incoming connections on a TCP port? For example, let's say you have a web server on port 80. Let's assume that your computer has the public IP address of 24.14.181.229 and the person that tries to connect to you has IP address 10.1.2.3. This person can connect to you by opening a TCP socket to 24.14.181.229:80. Simple enough.
Intuitively (and wrongly), most people assume that it looks something like this:
Local Computer | Remote Computer
--------------------------------
<local_ip>:80 | <foreign_ip>:80
^^ not actually what happens, but this is the conceptual model a lot of people have in mind.
This is intuitive, because from the standpoint of the client, he has an IP address and connects to a server at IP:PORT. Since the client connects to port 80, his port must be 80 too? This is a sensible thing to think, but it is actually not what happens. If it were correct, we could only serve one user per foreign IP address. Once a remote computer connected, it would hog the port-80-to-port-80 connection, and no one else could connect.
Three things must be understood:
1.) On a server, a process is listening on a port. Once it gets a connection, it hands it off to another thread. The communication never hogs the listening port.
2.) Connections are uniquely identified by the OS by the following 5-tuple: (local-IP, local-port, remote-IP, remote-port, protocol). If any element in the tuple is different, then this is a completely independent connection.
3.) When a client connects to a server, it picks a random, unused high-order source port. This way, a single client can have up to ~64k connections to the server for the same destination port.
So, this is really what gets created when a client connects to a server:
Local Computer | Remote Computer | Role
-----------------------------------------------------------
0.0.0.0:80 | <none> | LISTENING
24.14.181.229:80 | 10.1.2.3:<random_port> | ESTABLISHED
Looking at What Actually Happens
First, let's use netstat to see what is happening on this computer. We will use port 500 instead of 80 (because a whole bunch of stuff is happening on port 80 as it is a common port, but functionally it does not make a difference).
netstat -atnp | grep -i ":500 "
As expected, the output is blank. Now let's start a web server:
sudo python3 -m http.server 500
Now, here is the output of running netstat again:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
So now there is one process that is actively listening (State: LISTEN) on port 500. The local address is 0.0.0.0, which is code for "listening on all IP addresses". An easy mistake to make is to listen only on 127.0.0.1, which will only accept connections from the current computer. This is not a connection; it just means that a process requested to bind() to that IP and port, and that process is responsible for handling all connections to that port. This hints at the limitation that there can only be one process per computer listening on a given port (there are ways to get around that using multiplexing, but that is a much more complicated topic). If a web server is listening on port 80, it cannot share that port with other web servers.
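As an aside, the 0.0.0.0 versus 127.0.0.1 choice above is simply the address handed to bind(). A minimal C sketch of the listening side, using the same example port 500 (error handling omitted; ports below 1024 need root):

/* Sketch: bind a listening TCP socket to all interfaces vs. loopback only. */
#include <netinet/in.h>
#include <sys/socket.h>

int make_listener(int loopback_only)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};

    addr.sin_family = AF_INET;
    addr.sin_port   = htons(500);      /* same example port as above */
    addr.sin_addr.s_addr = loopback_only
        ? htonl(INADDR_LOOPBACK)       /* 127.0.0.1: local clients only   */
        : htonl(INADDR_ANY);           /* 0.0.0.0: all local IP addresses */

    bind(fd, (struct sockaddr *)&addr, sizeof(addr));  /* fails if the port is taken */
    listen(fd, 16);                    /* 16 = backlog of pending connections */
    return fd;
}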
So now, let's connect a user to our machine:
quicknet -m tcp -t localhost:500 -p Test payload.
This is a simple script (https://github.com/grokit/quickweb) that opens a TCP socket, sends the payload ("Test payload." in this case), waits a few seconds and disconnects. Doing netstat again while this is happening displays the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:54240 ESTABLISHED -
If you connect with another client and do netstat again, you will see the following:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:500 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.10:500 192.168.1.13:26813 ESTABLISHED -
... that is, the second client used another random source port for its connection, so the two connections are never confused.
A server socket listens on a single port. All established client connections on that server are associated with that same listening port on the server side of the connection. An established connection is uniquely identified by the combination of client-side and server-side IP/port pairs. Multiple connections on the same server can share the same server-side IP/port pair as long as they are associated with different client-side IP/port pairs, and the server would be able to handle as many clients as available system resources allow.
On the client-side, it is common practice for new outbound connections to use a random client-side port, in which case it is possible to run out of available ports if you make a lot of connections in a short amount of time.
A connected socket is assigned to a new (dedicated) port
That's a common intuition, but it's incorrect. A connected socket is not assigned to a new/dedicated port. The only actual constraint that the TCP stack must satisfy is that the tuple of (local_address, local_port, remote_address, remote_port) must be unique for each socket connection. Thus the server can have many TCP sockets using the same local port, as long as each of the sockets on the port is connected to a different remote location.
See the "Socket Pair" paragraph in the book "UNIX Network Programming: The sockets networking API" by
W. Richard Stevens, Bill Fenner, Andrew M. Rudoff at: http://books.google.com/books?id=ptSC4LpwGA0C&lpg=PA52&dq=socket%20pair%20tuple&pg=PA52#v=onepage&q=socket%20pair%20tuple&f=false
Theoretically, yes. In practice, no. Most kernels (including Linux) don't allow a second bind() to an already allocated port. It wouldn't be a very big patch to allow this.
Conceptually, we should differentiate between socket and port. Sockets are bidirectional communication endpoints, i.e. "things" where we can send and receive bytes. It is a conceptual thing; there is no field in a packet header named "socket".
A port is an identifier which is capable of identifying a socket. In the case of TCP, a port is a 16-bit integer, but there are other protocols as well (for example, with Unix sockets, a "port" is essentially a string).
The main problem is the following: when an incoming packet arrives, the kernel can identify its socket by the destination port number. This is the most common way, but it is not the only possibility:
Sockets can be identified by the destination IP of the incoming packets. This is the case, for example, if we have a server using two IPs simultaneously. Then we can run, for example, different web servers on the same port, but on different IPs (a sketch follows this answer).
Sockets can be identified by their source port and IP as well. This is the case in many load-balancing configurations.
Because you are working on an application server, it will be able to do that.
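As the parenthetical above promises, here is a minimal C sketch of two listeners sharing port 80 by binding to two different local IPs; the addresses are made up and would both need to be configured on the host:

/* Sketch: two listeners sharing port 80 by binding to different local IPs. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

static int listen_on(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_port   = htons(port);
    inet_pton(AF_INET, ip, &a.sin_addr);
    bind(fd, (struct sockaddr *)&a, sizeof(a));
    listen(fd, 16);
    return fd;
}

int main(void)
{
    int web_a = listen_on("192.168.1.10", 80);  /* first web server  (example IP) */
    int web_b = listen_on("192.168.1.11", 80);  /* second web server (example IP) */
    /* Both bind() calls succeed because the (IP, port) pairs differ. */
    (void)web_a; (void)web_b;
    for (;;) pause();                            /* placeholder main loop */
}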
I guess none of the answers tells every detail of the process, so here it goes:
Consider an HTTP server:
It asks the OS to bind port 80 to one or more IP addresses (if you choose 127.0.0.1, only local connections are accepted; you can choose 0.0.0.0 to bind to all IP addresses: localhost, local network, wide area network, both IP versions).
When a client connects to that port, it WILL lock it up for a while (that's why the socket has a backlog: it queues a number of connection attempts, because they ARE NOT instantaneous).
The OS then chooses a random port and transfers that connection to that port (think of it as a temporary port that will handle all the traffic from now on).
The port 80 is then released for the next connection (first, it will accept the first one in the backlog).
When the client or server disconnects, the random port is held open for a while (CLOSE_WAIT on the remote side, TIME_WAIT on the local side). That allows flushing some lost packets along the path. The default time for that state is 2 * MSL seconds (and it WILL consume memory while it is waiting).
After that waiting, that random port is free again to receive other connections.
So, TCP cannot even share a port amongst two IPs!
No. It is not possible to share the same port at a particular instant. But you can design your application in such a way that it accesses the port at different instants.
Absolutely not; multiple connections may share the same ports, but they'll have different IP addresses.

Demultiplexing in TCP/UDP

I know there is an older answer to this question here, though it does not seem to answer my question. If in UDP two people with different IPs and different ports send data to the same server (same IP) at the same socket (since in UDP there is only one socket per application - correct me if I am wrong), how does the server recognise which person is which?
Does it change anything if the two people use (by luck or not) the same source port but with different source IPs?
The server can receive UDP datagrams from two different IP/port pairs (IP could be same, port could be same, or both could be different) on the same port. The recvfrom() function returns the source IP/port of the datagram in addition to the data.
As mentioned in the question you referenced, a UDP socket is defined only by the local IP and local port. The remote IP and port can differ for both outgoing and incoming packets.
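The question doesn't name a language, but in C terms the distinguishing information comes straight out of recvfrom(). A minimal sketch of one UDP socket serving many peers (port 8900 is just an example):

/* Sketch: one UDP socket receiving from many peers; recvfrom() reports
 * each datagram's source IP and port, which is how the server tells
 * the senders apart. */
#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local = {0};
    local.sin_family = AF_INET;
    local.sin_port   = htons(8900);              /* example server port */
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&local, sizeof(local));

    for (;;) {
        char buf[1500];
        struct sockaddr_in peer;
        socklen_t peerlen = sizeof(peer);
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &peerlen);
        if (n < 0)
            break;
        /* peer now identifies the sender of this particular datagram */
        printf("%zd bytes from %s:%d\n", n,
               inet_ntoa(peer.sin_addr), ntohs(peer.sin_port));
    }
    return 0;
}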

socket programming - why is the web server still using listen port 80 to communicate with the client even after it accepted the connection?

Usually a web server listens for incoming connections on port 80. So, my question is: shouldn't the general concept of socket programming be that port 80 is for listening for incoming connections, and that after the server has accepted the connection it uses another port, e.g. port 12345, to communicate with the client? But when I look in Wireshark, the server is always using port 80 during the communication. I am confused here.
So what about https://www.facebook.com:443? It has hundreds of thousands of connections to it per second. Is it possible for a single port to handle such a large amount of traffic?
A particular socket is uniquely identified by a 5-tuple (i.e. a list of 5 particular properties.) Those properties are:
Source IP Address
Destination IP Address
Source Port Number
Destination Port Number
Transport Protocol (usually TCP or UDP)
These parameters must be unique for sockets that are open at the same time. Where you're probably getting confused here is what happens on the client side vs. what happens on the server side in TCP. Regardless of the application protocol in question (HTTP, FTP, SMTP, whatever,) TCP behaves the same way.
When you open a socket on the client side, it will select a random high-number port for the new outgoing connection. This is required, otherwise you would be unable to open two separate sockets on the same computer to the same server. Since it's entirely reasonable to want to do that (and it's very common in the case of web servers, such as having stackoverflow.com open in two separate tabs) and the 5-tuple for each socket must be unique, a random high-number port is used as the source port. However, each of those sockets will connect to port 80 at stackoverflow.com's webserver.
On the server side of things, stackoverflow.com can already distinguish between those two different sockets from your client, again, because they already have different client-side port numbers. When it sees an incoming request packet from your browser, it knows which of the sockets it has open with you to respond to because of the different source port number. Similarly, when it wants to send a response packet to you, it can send it to the correct endpoint on your side by setting the destination port number to the client-side port number it got the request from.
The bottom line is that it's unnecessary for each client connection to have a separate port number on the server's side because the server can already uniquely identify each client connection by its client IP address and client-side port number. This is the way TCP (and UDP) sockets work regardless of application-layer protocol.
shouldn't the general concept of socket programming be that port 80 is for listening for incoming connections, and that after the server has accepted the connection it uses another port, e.g. port 12345, to communicate with the client?
No.
But when I look in Wireshark, the server is always using port 80 during the communication.
Yes.
I am confused here.
Only because your 'general concept' isn't correct. An accepted socket uses the same local port as the listening socket.
So what about https://www.facebook.com:443? It has hundreds of thousands of connections to it per second. Is it possible for a single port to handle such a large amount of traffic?
A port is only a number. It isn't a physical thing. It isn't handling anything. TCP is identifying connections based on the tuple {source IP, source port, target IP, target port}. There's no problem as long as the entire tuple is unique.
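You can verify the point about accepted sockets directly: calling getsockname() on the accepted socket reports the same local port as the listener. A minimal C sketch (port 8080 chosen arbitrarily so it runs without root):

/* Sketch: show that an accepted TCP socket keeps the listener's local port. */
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(8080);            /* example port, no root needed */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 16);

    int cfd = accept(lfd, NULL, NULL);        /* blocks until a client connects */

    struct sockaddr_in local;
    socklen_t len = sizeof(local);
    getsockname(cfd, (struct sockaddr *)&local, &len);
    printf("accepted socket's local port: %d\n", ntohs(local.sin_port)); /* prints 8080 */
    return 0;
}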
Ports are a virtual concept, not a hardware resource; it's no harder to handle 10,000 connections on 1 port than 1 connection each on 10,000 ports (it's probably even faster).
Not all servers are web servers listening on port 80, nor do all servers maintain lasting connections. Web servers in particular are stateless.
Your suggestion to open a new port for further communication is exactly what happens when using the FTP protocol, but as you have seen this is not necessary.
Ports are not a physical concept; they exist in a standardised form to allow multiple servers to be reachable on the same host without specialised multiplexing software. Such software does still exist, but for entirely different reasons (see: sshttp). What you see as a response from the server on port 80, the server sees as a reply to you on a not-so-random port the OS assigned to your connection.
When a server's listening socket accepts a TCP request for the first time, a method such as java.net.ServerSocket.accept() returns a new communication socket whose port number is the same as the port passed to java.net.ServerSocket.ServerSocket(int port).

Communication protocols in UDP

After many hours, I have discovered that the given UDP server needs the following steps for successful communication:
1- Send "Start Message" on a given port
2- Wait to receive from server on any port
3- Then the port dedicated to you to send further data to the server equals the port you have received on it + 1
So I am asking whether this is a known protocol/handshake, or whether it is specific to this server?
PS: All of the above communication was over UDP sockets in C#.
PS: Related to a previous question: About C# UDP Sockets
Thanks
There's no special "handshake" for UDP -- each UDP service, if it needs one, specifies its own. Usually, though, a server doesn't expect the client to be able to listen on all of its ports simultaneously. If you mean that the client expects a message from any port on the server, to the port the client sent the start message from, then that makes a lot more sense -- and is very close to how TFTP works. (The only difference i'm seeing so far, is that TFTP doesn't do the "+ 1".)
The server is, effectively, listening on a 'well known port' and then switching subsequent communications to a dedicated port per client. Requiring the client to send to the port + 1 is a little strange
Client 192.168.0.1 - port 12121 ------------------------> Server 192.168.0.2 - port 5050
Client 192.168.0.1 - port 12121 <------------------------ Server 192.168.0.2 - port 23232
Client 192.168.0.1 - port 12121 ------------------------> Server 192.168.0.2 - port 23232 + 1
<------------------------ Server 192.168.0.2 - port 23232
------------------------> Server 192.168.0.2 - port 23232 + 1
The server probably does this so that it doesn't have to demultiplex the inbound client data based on the client's address/port. Doing it this way is generally a little more efficient and also has some advantages, depending on the design of the server: there's a "dedicated" socket for you on the server, which means that if they're doing overlapped I/O the socket stays the same for the whole period of communication with you. That can make it easier and more efficient to associate data with the socket (this way they can probably avoid any lookups or locking to process each datagram). Anyway, enough of that (see here, if you want to know why I do it that way).
From your point of view as a client (and I'm assuming async sockets here), you need to first Bind() your local socket (just use INADDR_ANY and 0 to let the OS pick the port for you), then issue a RecvFrom() on the socket (so there's no race between you sending data to the server on this socket and it sending you data back before you issue a recv). Then issue a SendTo() to the "well known port" of the server. The server will then send you back some data, and your RecvFrom() will return the data and the address that the server sent to you from. You can then take that address, add one to the port, store that address, and from then on issue SendTo()s to that new sending address whilst continuing to issue RecvFrom()s for reading the server's data; or you could do something clever with Connect() to bind the remote end of the socket to the server's "send to" address and simply use Write() and RecvFrom() from then on.
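Put as code, the flow looks roughly like the following. The question uses C# UDP sockets; this is a blocking C sketch with the equivalent BSD calls (with async sockets you would post the RecvFrom() before the SendTo(), as described above), and the address and ports are the example values from the diagram:

/* Sketch of the client-side flow described above: bind to an OS-chosen port,
 * send the start message to the server's well-known port, read the reply,
 * then send all further data to (reply port + 1). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in local = {0};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY); /* any local IP       */
    local.sin_port = 0;                        /* OS picks the port  */
    bind(fd, (struct sockaddr *)&local, sizeof(local));

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(5050);           /* well-known port (example) */
    inet_pton(AF_INET, "192.168.0.2", &server.sin_addr);

    const char start[] = "Start Message";
    sendto(fd, start, sizeof(start) - 1, 0,
           (struct sockaddr *)&server, sizeof(server));

    char buf[1500];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                         (struct sockaddr *)&from, &fromlen);
    if (n < 0)
        return 1;

    /* All further traffic goes to the port the reply came from, plus 1. */
    struct sockaddr_in data_addr = from;
    data_addr.sin_port = htons(ntohs(from.sin_port) + 1);

    const char payload[] = "further data";
    sendto(fd, payload, sizeof(payload) - 1, 0,
           (struct sockaddr *)&data_addr, sizeof(data_addr));
    return 0;
}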