Confusion over how UDP server sends the response back to UDP client - sockets

I am writing a UDP-based client and server and have most of the code written, but I am confused about how the UDP server sends the response back to the UDP client. This is my understanding so far:
Suppose a UDP client wants to communicate with a UDP server. It sends a request to the server using the UDP socket opened at the client's end. The request reaches the UDP module on the server machine, which identifies the UDP service by the destination port number and hands the request to that service.
Since UDP is a connection-less protocol, the UDP server will not send the response over some connection; instead, it will extract the source IP address and source port from the request and send the response back to the client.
My confusion is this: on the server side there is a socket bound to a UDP port, "continuously" listening for any UDP client request, but this is not true on the client side. The UDP client opens a socket to send the request to the UDP server, and then that's it; I think it cannot keep that port hanging around for the UDP server to respond, and if that port closes, then how will the client receive the response back?
Of course the server's response will reach the client machine, because the IP address is there. But once that response has reached the UDP module on the client, even though there is a port in it, how can the UDP module deliver it to the client that originally sent the request, given that it would have closed the socket bound to that port? Or will it not?
I am looking for an answer that clearly describes the UDP communication (I am not interested in contrasting it with TCP or in an explanation of TCP, since I already have a fair understanding of TCP), especially how the response gets back to the UDP client.

My confusion is this: on the server side there is a socket bound to a UDP port, "continuously" listening for any UDP client request, but this is not true on the client side. The UDP client opens a socket to send the request to the UDP server, and then that's it; I think it cannot keep that port hanging around for the UDP server to respond, and if that port closes, then how will the client receive the response back?
I agree. This is your confusion. Why do you think the client can't keep the socket open and do a receive on it? It can.
Of course the server's response will reach the client machine, because the IP address is there. But once that response has reached the UDP module on the client, even though there is a port in it, how can the UDP module deliver it to the client that originally sent the request, given that it would have closed the socket bound to that port?
Why?
Or will it not?
Not.
The client:
creates a socket
sends a datagram
calls recvfrom() or friends to receive the response.
Of course if the client isn't interested in the response it can close the socket, but this is not the normal case.
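For illustration, here is a minimal blocking client sketch in C along those lines; the 127.0.0.1:9999 server address and port are just assumed placeholders, and error handling is omitted:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);    /* no explicit bind() needed */

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(9999);             /* assumed server port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    const char *msg = "hello";
    sendto(s, msg, strlen(msg), 0, (struct sockaddr *)&server, sizeof(server));

    /* The socket (and its implicitly assigned source port) stays open,
       so the server's reply can be delivered right back to it. */
    char buf[1024];
    ssize_t n = recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
    if (n >= 0)
        printf("got %zd bytes back\n", n);

    close(s);                                  /* only now is the port released */
    return 0;
}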
I am looking for an answer that clearly describes the UDP communication (I am not interested in contrasting it with TCP or in an explanation of TCP, since I already have a fair understanding of TCP), especially how the response gets back to the UDP client.
So don't tag your question with the tcp tag.

Yes, the UDP server can send back to the client. Here is an example:
https://www.cs.rutgers.edu/~pxk/417/notes/sockets/udp.html and code demo https://www.cs.rutgers.edu/~pxk/417/notes/sockets/demo-udp-04.html
We now have a client sending a message to a server. What if the server wants to send a message back to that client? There is no connection, so the server cannot just write the response back. Fortunately, the recvfrom call gave us the address of the sender. It was placed in remaddr:
recvlen = recvfrom(s, buf, BUFSIZE, 0, (struct sockaddr *)&remaddr, &addrlen);
The server can use that address in sendto and send a message back to the recipient's address.
sendto(s, buf, strlen(buf), 0, (struct sockaddr *)&remaddr, addrlen);
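Putting the two calls together, a server loop along these lines might look like the following sketch (assuming IPv4 and an arbitrary port 9999, not the exact Rutgers demo; error handling omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

#define BUFSIZE 2048

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in myaddr = {0};
    myaddr.sin_family = AF_INET;
    myaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    myaddr.sin_port = htons(9999);                 /* assumed service port */
    bind(s, (struct sockaddr *)&myaddr, sizeof(myaddr));

    for (;;) {
        struct sockaddr_in remaddr;                /* filled in by recvfrom */
        socklen_t addrlen = sizeof(remaddr);
        char buf[BUFSIZE];

        ssize_t recvlen = recvfrom(s, buf, BUFSIZE, 0,
                                   (struct sockaddr *)&remaddr, &addrlen);
        if (recvlen <= 0)
            continue;

        /* remaddr now holds the client's IP and source port,
           so the reply goes straight back to the sender. */
        sendto(s, buf, recvlen, 0, (struct sockaddr *)&remaddr, addrlen);
    }
}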

The client uses some random but unique source port. The server sends the response back to this unique port. The server never receives two requests at a time from a single port, and it uses this fact to map each response to its request. Only after receiving the response does the client close this source port/socket. At any given point in time the client can have as many requests outstanding as it has ports at its disposal, and since a port is closed as soon as its response is received, it becomes available again.
Reference: https://www.slashroot.in/how-does-udp-work
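To see the random source port mentioned above, you can call getsockname() after the first sendto() on an otherwise unbound client socket. A small sketch (192.0.2.1:9999 is a placeholder destination, error handling omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(9999);                       /* assumed server port */
    inet_pton(AF_INET, "192.0.2.1", &server.sin_addr);   /* placeholder address */

    sendto(s, "ping", 4, 0, (struct sockaddr *)&server, sizeof(server));

    /* The OS assigned an ephemeral source port on the first sendto();
       it stays bound to this socket until close(s). */
    struct sockaddr_in local;
    socklen_t len = sizeof(local);
    getsockname(s, (struct sockaddr *)&local, &len);
    printf("replies will be addressed to local port %d\n", ntohs(local.sin_port));

    close(s);
    return 0;
}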

Related

How to program a UDP webserver since HTTP normally uses TCP?

Here is my understanding of how client-server HTTP communication works.
The client creates a TCP socket, connects to the server and sends data.
The server creates a TCP socket and listens for incoming requests.
So it looks like both the client and the server need to agree on the transport protocol to use (in this case TCP). But if we want a website to work over UDP/QUIC, then both the client and the server need to create UDP sockets. Yet some websites use TCP and others use UDP...
So does that mean the client would need to know beforehand which protocol a website uses, something like this?
if (URI == 'https://www.google.com') {
    // Website that works over UDP
    client.create.UDP.socket
    client.sendData
    server.create.UDP.socket
    server.receive.data
} else {
    // Website that works over TCP
    client.create.TCP.socket
    client.sendData
    server.create.TCP.socket
    server.receive.data
}
So the client needs to keep a record of which websites use TCP and which use UDP/QUIC, and create that kind of socket to communicate with them?
If a protocol supports both TCP and UDP, the server listens both on the TCP port and the UDP port. The port number is generally the same, e.g. DNS uses TCP port 53 and UDP port 53.
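For a concrete sketch of "listening on both": a server can bind one TCP socket and one UDP socket to the same port number. The port 5300 below is just an arbitrary example, and error handling is omitted:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5300);                 /* arbitrary example port */

    int tcp = socket(AF_INET, SOCK_STREAM, 0);   /* TCP listener */
    bind(tcp, (struct sockaddr *)&addr, sizeof(addr));
    listen(tcp, 16);

    int udp = socket(AF_INET, SOCK_DGRAM, 0);    /* UDP socket on the same port number */
    bind(udp, (struct sockaddr *)&addr, sizeof(addr));

    /* From here the server would typically select()/poll() on both
       descriptors and serve whichever transport the client chose. */
    return 0;
}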
Usually the client has a preference. Let's say it prefers TCP. The client will first try to connect with TCP. If the server does not respond, the client will retry with UDP. Alternatively, the server can respond over TCP but ask the client to switch to UDP. The client can then decide to continue with TCP or switch to UDP.
For QUIC, the browser will first connect using HTTP over TCP. The server will respond with a message stating that it also supports QUIC. If the browser also supports this protocol, the browser will reconnect to the server using QUIC.

On successful TCP connection between server and client

RELATED POST
The post here in a UNIX forum describes the following:
The server keeps listening on a port number.
The server accepts a client's connect() request using accept(). As soon as the server accepts the client request, the kernel allocates a random port number for the server for further send() and receive(), since the same port number on the server can't be used for both sending and listening, and the previous port is still listening for new connections.
QUESTION
I have a server application S which is constantly listening on port 18333 (this is actually bitcoind testnet). Another client node C connects to it on, say, port 53446 (a random port). According to the above post, S will be able to send/receive C's data only via port 53446.
But when I run a bitcoind testnet node, it communicates perfectly with other nodes over a single socket connection on port 18333, with no need for another port for sending/receiving. Below is a snippet, and I even verified this:
bitcoin-cli -testnet -rpcport=16591 -datadir=/home/user/mytest/1/
{
"id": 1,
"addr": "178.32.61.149:18333"
}
Can anyone help me understand what the correct behaviour of a TCP socket connection is?
A TCP connection is identified by a socket pair, which is uniquely identified by 4 parameters:
source ip
source port
dest ip
dest port
For every connection that is established to a server, the socket is basically cloned and the same port is used. So for every connection you have a socket using the same server port, which means you have n+1 sockets using the same port when there are n connections.
The TCP stack in the kernel can distinguish between all these sockets and connections because a socket is either in the listening state or belongs to a socket pair in which all 4 parameters are considered.
Your second bullet is therefore wrong, because the same port is used, as I explained above.
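A quick way to convince yourself of this is to print the local port of every accepted socket; it is always the listening port, while only the peer side of the 4-tuple changes. A sketch using an arbitrary port 8080, with error handling mostly omitted:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int lsock = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                      /* arbitrary example port */
    bind(lsock, (struct sockaddr *)&addr, sizeof(addr));
    listen(lsock, 16);

    for (;;) {
        struct sockaddr_in peer, local;
        socklen_t plen = sizeof(peer), llen = sizeof(local);

        int conn = accept(lsock, (struct sockaddr *)&peer, &plen);  /* new socket per client */
        if (conn < 0)
            continue;
        getsockname(conn, (struct sockaddr *)&local, &llen);

        /* local.sin_port is always 8080; only the (srcIP, srcPort) half
           of the (srcIP, srcPort, dstIP, dstPort) tuple differs per client. */
        printf("client %s:%d <-> server port %d\n",
               inet_ntoa(peer.sin_addr), ntohs(peer.sin_port), ntohs(local.sin_port));
        close(conn);
    }
}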
The server will accept a client's connect() request using accept(). As soon as the server accepts the client request, the kernel allocates a random port number for the server for further send() and receive().
On normal TCP traffic this is not the case. If a webserver is listening on port 80, all packets sent back to the client will be sent from server port 80 (this can be verified with Wireshark, for example), but there will be a different socket for each connection (srcIP:port - dstIP:port). That information is carried in the headers of the network packets: the IP addresses and protocol code (TCP, UDP or other) in the IP header, and the port numbers as part of the TCP or UDP header.
But changing ports can happen when communicating over FTP, where there is a control port (usually 21) and a negotiated data port.

Sending and receiving data through sockets and ports

When I create a C++ program using Winsock and send() an HTTP request packet to a hostname (i.e. www.blah.com at 223.224.245.233, running on port 80), and the HTTP response is given back to me through recv(), why does the receiver of my packet need to bind a socket to a port to talk to me, but I don't?
Is it because I initially sent a packet, and that packet contains information that enables them to send a packet (a response) back to me without me needing to bind a socket to a port?
I'm wondering why multiple computers that talk to each other don't all need sockets bound to certain ports.
I thought computer communication was like so:
(Service on port 80 at 223.224.245.233) sends packet to (Service on port 94 at 223.224.245.234)
(Service on port 94 at 223.224.245.234) receives packet from (Service on port 80 at 223.224.245.233)
why does the receiver of my packet need to bind a socket to a port to talk to me
It doesn't. It needs to bind a socket to a port to listen for incoming connections. Then you connect to it, then it accepts a connected socket, then it talks to you.
but I don't
There is an automatic bind when you connect.
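You can see the automatic bind by asking for the socket's local address right after connect(). The sketch below uses the plain BSD/POSIX calls (Winsock works the same way apart from the WSAStartup() initialization); the destination IP and port are assumed placeholders:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(80);                            /* assumed web server port */
    inet_pton(AF_INET, "93.184.216.34", &server.sin_addr);  /* assumed web server IP */

    /* No explicit bind(): the kernel picks a local IP and an ephemeral
       port as part of connect(). */
    connect(s, (struct sockaddr *)&server, sizeof(server));

    struct sockaddr_in local;
    socklen_t len = sizeof(local);
    getsockname(s, (struct sockaddr *)&local, &len);
    printf("kernel bound me to %s:%d\n",
           inet_ntoa(local.sin_addr), ntohs(local.sin_port));

    close(s);
    return 0;
}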

How can we set up a proxy server dealing with UDP packets?

Usually we can set up a proxy server with a tool such as CCProxy, which provides proxy services for HTTP, SOCKS, FTP packets, etc.
Also, Proxifier or Proxycap can be used to forward the packets of a specific application on the client PC.
However, when UDP packets are forwarded to the proxy server, these packets cannot be correctly forwarded on to the destination the application originally wanted them to go to.
When I use a network analyzer to observe the UDP traffic flow, the UDP packets are just passed from my PC to the proxy server without ever going on to the correct destination.
Besides, someone suggested that UDP relay is not enabled on the proxy server, so the UDP packets cannot be routed successfully. How can I enable the UDP relay function on the proxy server? (Assume that I can control the proxy server completely.)
Any kind of proxy, whether it is for TCP or UDP, needs to be told where to forward outgoing packets to. That also allows the proxy to know who is requesting the forwarding so it can route matching inbound packets back to that same requester.
Let's assume SOCKS, for example. SOCKS v4 does not support UDP (or IPv6), but SOCKS v5 does. However, it requires the requesting app to establish a TCP connection to the SOCKS proxy and ask it to forward UDP packets on the app's behalf until that TCP connection is closed.
Tools like CCProxy, Proxycap, Proxifier, etc. work (for TCP, anyway) by intercepting outgoing TCP connections and redirecting them to the proxy server, transparently handling any proxy handshaking to set up forwarding before allowing any application data to flow through the TCP connection. Once the TCP connection has been established, the proxifier does not need to do anything more with the connection, since the app is now talking directly to the proxy.
I do not know if such tools support UDP. It would be much harder to implement, since there is no outgoing connection to redirect. Every outbound UDP packet would have to be intercepted; the proxifier would then have to check whether it already has its own SOCKS v5 TCP connection associated with the packet's local/remote tuple and, if not, create a new one and send the necessary UDP-forwarding handshake. It would then have to encapsulate every outbound UDP packet for that tuple and send it to the proxy's outbound IP/port, and receive every matching inbound UDP packet for that tuple from the proxy so it can be de-encapsulated and forwarded to the app's local IP/port that sent the original outbound UDP packet. And because UDP is connection-less, the proxifier would also have to implement a timeout mechanism on its SOCKS v5 TCP connection to the proxy so it can eventually be closed after a period of idle UDP traffic.
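For a rough idea of what that encapsulation involves on the SOCKS v5 side (RFC 1928): after the UDP ASSOCIATE request on the TCP control connection, every datagram sent to the proxy carries a small header in front of the payload. A sketch of building that header for IPv4 only; the helper name socks5_udp_wrap is hypothetical and error handling is limited to a length check:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>

/* SOCKS5 UDP request header (RFC 1928, section 7), IPv4 only:
   +----+------+------+----------+----------+----------+
   |RSV | FRAG | ATYP | DST.ADDR | DST.PORT |   DATA   |
   +----+------+------+----------+----------+----------+
   Returns the number of header+payload bytes written into out, or 0 if it doesn't fit. */
size_t socks5_udp_wrap(uint8_t *out, size_t outlen,
                       struct in_addr dst_ip, uint16_t dst_port,
                       const uint8_t *data, size_t datalen) {
    if (outlen < 10 + datalen)
        return 0;
    out[0] = 0x00;                 /* RSV */
    out[1] = 0x00;                 /* RSV */
    out[2] = 0x00;                 /* FRAG: no fragmentation */
    out[3] = 0x01;                 /* ATYP: IPv4 */
    memcpy(out + 4, &dst_ip, 4);   /* DST.ADDR (already in network order) */
    uint16_t port_n = htons(dst_port);
    memcpy(out + 8, &port_n, 2);   /* DST.PORT (network order) */
    memcpy(out + 10, data, datalen);
    return 10 + datalen;
}

The wrapped buffer is then sent with a normal sendto() to the UDP relay address that the proxy returned in its UDP ASSOCIATE reply, and inbound datagrams from the proxy carry the same header, which has to be stripped before handing the payload to the application.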
That is a LOT more work for a UDP proxifier to do compared to TCP.
And that is just for SOCKS. HTTP/FTP proxies do not support UDP at all (since HTTP and FTP are TCP-based protocols). And there are other tunnel/proxy protocols as well, which may or may not have their own UDP capabilities.
So you have to check the capabilities of your proxifier tool to see if it supports UDP or not.

Communication protocols in UDP

After many hours, I have discovered that the given UDP server needs the following steps for successful communication:
1- Send a "Start Message" to a given port
2- Wait to receive from the server on any port
3- The port dedicated to you for sending further data to the server equals the port you received on, plus 1
So I am asking whether this is a known protocol/handshake, or whether it is specific to this server?
PS: All of the above communication is over UDP sockets in C#.
PS: Related to a previous question: About C# UDP Sockets
Thanks
There's no special "handshake" for UDP; each UDP service, if it needs one, specifies its own. Usually, though, a server doesn't expect the client to be able to listen on all of its ports simultaneously. If you mean that the client expects a message from any port on the server, sent to the port the client sent the start message from, then that makes a lot more sense, and it is very close to how TFTP works. (The only difference I'm seeing so far is that TFTP doesn't do the "+ 1".)
The server is, effectively, listening on a 'well known port' and then switching subsequent communications to a dedicated port per client. Requiring the client to send to that port + 1 is a little strange:
Client 192.168.0.1 - port 12121 ------------------------> Server 192.168.0.2 - port 5050
Client 192.168.0.1 - port 12121 <------------------------ Server 192.168.0.2 - port 23232
Client 192.168.0.1 - port 12121 ------------------------> Server 192.168.0.2 - port 23232 + 1
Client 192.168.0.1 - port 12121 <------------------------ Server 192.168.0.2 - port 23232
Client 192.168.0.1 - port 12121 ------------------------> Server 192.168.0.2 - port 23232 + 1
The server probably does this so that it doesn't have to demultiplex the inbound client data based on the client's address/port. Doing it this way is generally a little more efficient and, depending on the design of the server, has some advantages: there is a 'dedicated' socket for you on the server, which means that if they're doing overlapped I/O the socket stays the same for the whole period of communication with you. That can make it easier and more efficient to associate data with the socket (this way they can probably avoid any lookups or locking to process each datagram). Anyway, enough of that (see here if you want to know why I do it that way).
From your point of view as a client (and I'm assuming async sockets here), you need to first Bind() your local socket (just use INADDR_ANY and port 0 to let the OS pick the port for you), then issue a RecvFrom() on the socket (so there's no race between you sending data to the server on this socket and it sending you data back before you issue a receive). Then issue a SendTo() to the server's 'well known port'. The server will then send you back some data, and your RecvFrom() will return the data together with the address the server sent from. You can then take that address, add one to the port, store that address, and from then on issue SendTo()s to that new sending address whilst continuing to issue RecvFrom()s to read the server's data; or you could do something clever with Connect() to bind the remote end of the socket to the server's 'send to' address and simply use Write() and RecvFrom() from then on.
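A synchronous C sketch of that client flow (blocking calls, so the receive simply follows the send and the race mentioned above does not apply; the server IP and well-known port are placeholders taken from the diagram above, and error handling is omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    /* 1. Bind to INADDR_ANY, port 0: the OS picks our local port. */
    struct sockaddr_in local = {0};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(0);
    bind(s, (struct sockaddr *)&local, sizeof(local));

    /* 2. Send the "Start Message" to the server's well-known port
          (address and port are placeholders for illustration). */
    struct sockaddr_in wellknown = {0};
    wellknown.sin_family = AF_INET;
    wellknown.sin_port = htons(5050);
    inet_pton(AF_INET, "192.168.0.2", &wellknown.sin_addr);
    sendto(s, "START", 5, 0, (struct sockaddr *)&wellknown, sizeof(wellknown));

    /* 3. The reply tells us which server address/port answered us. */
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    char buf[1024];
    recvfrom(s, buf, sizeof(buf), 0, (struct sockaddr *)&from, &fromlen);

    /* 4. Per this particular server's convention, send further data
          to that port + 1 while continuing to receive on the same socket. */
    struct sockaddr_in dedicated = from;
    dedicated.sin_port = htons(ntohs(from.sin_port) + 1);
    sendto(s, "DATA", 4, 0, (struct sockaddr *)&dedicated, sizeof(dedicated));

    close(s);
    return 0;
}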