Is it possible for an HTTP server to respond to an HTTP client on the same connection fd each time? - sockets

I want to write an iterative HTTP server that accepts the same HTTP client on the same conn_fd (file descriptor) every time, but creates a new_fd for different clients, based on checking the client address. Is this possible?

I'm not sure I understand your question, but this is basically how sockets work: you create a master socket and set it to a listening state. Then, every time you accept a new client, a new socket is created for that client, while the master socket remains the same.
For a nice intro about Unix sockets, see http://beej.us/guide/bgnet/

Each new connection will result in a new socket. So if the same client connects multiple times it will be a new socket (and file descriptor), but if it connects one time and sends multiple requests over the same connection (HTTP keep alive) it will be the same fd.
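To make that concrete, here is a minimal iterative server sketch in C (error handling omitted; the port number is just a placeholder). The listening socket keeps its fd for the lifetime of the server, while every accepted connection gets its own fd:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);    /* master socket */

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);                        /* placeholder port */
        bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listen_fd, 16);

        for (;;) {
            /* accept() returns a NEW fd for every incoming connection;
               listen_fd itself never changes. */
            int conn_fd = accept(listen_fd, NULL, NULL);

            /* read request(s) and write response(s) on conn_fd here;
               with HTTP keep-alive, several requests reuse this same fd */

            close(conn_fd);  /* a client that reconnects later gets a fresh fd */
        }
    }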

Related

client/server socket reconnection

I developed a client/server application based on sockets.
The client side is in Delphi. The server side is on an IBM I (as400)
Sometimes, the client and the server get disconnected. I'm not really sure why, but I think it's because of a machine between them (a proxy, a router, a firewall) sending a RST packet.
Anyway, I'm trying to reconnect the client to the same process on the server (not another one, the same one; that's important).
To do that, I create a new connection from the client. So, I have two processes on the server. I'll call them the "LostProcess" and the "HelperProcess".
The LostProcess is waiting for data in a data queue.
The client tells the HelperProcess that it was connected to the LostProcess.
The HelperProcess sends data to the LostProcess (via the data queue).
The HelperProcess makes a giveDescriptor, and the LostProcess makes a takeDescriptor.
Then the HelperProcess stops and the LostProcess sends data to the client (to say “I'm back”).
So far, it works, but when the client sends data, the LostProcess (we can call it the RebornProcess now) never receives it (I tried not stopping the HelperProcess, and in that case it is the HelperProcess that receives the data).
With Wireshark, I could see that the client sends data with a different local port, so I guess that's why the RebornProcess does not receive them.
I tried to force the local port of the new client socket to be the same as the first one, but then the new client socket cannot connect for a while, and if I wait long enough, I have the same problem as before.
Does somebody have an idea how to make the reconnection work?
What you are doing is generally not possible. Once a TCP connection has been lost, it is gone forever. Both apps must close their respective sockets for the lost connection, and the client app must create a new socket connection to continue exchanging data with the server.
If the client app wants to reuse the same local port via bind() (which is generally not advisable), but does not want to wait for the OS to release the port first, then the client can enable the SO_REUSEADDR option via setsockopt() on the new socket before calling bind() and connect().
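A rough sketch of that last point in C (addresses and port numbers are placeholders, error checking omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void) {
        int sock = socket(AF_INET, SOCK_STREAM, 0);

        /* allow re-binding the local port without waiting for TIME_WAIT to expire */
        int yes = 1;
        setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));

        /* bind() the desired local port BEFORE calling connect() */
        struct sockaddr_in local = {0};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(50000);                       /* placeholder local port */
        bind(sock, (struct sockaddr *)&local, sizeof(local));

        struct sockaddr_in server = {0};
        server.sin_family = AF_INET;
        server.sin_port = htons(8080);                       /* placeholder server port */
        inet_pton(AF_INET, "192.0.2.10", &server.sin_addr);  /* placeholder server address */
        connect(sock, (struct sockaddr *)&server, sizeof(server));
        return 0;
    }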
Pretty sure the answer is you can't.
There'd be all kinds of security issues if TCP/IP allowed a new connection to reconnect to an existing process's connection.
You should have the lost process terminate and just use the new process instead.

Delphi Indy TCP Client/Server communication best approach

I have a client and a server application that are communicating just fine; there is a TIdCmdTCPServer in the server and a TIdTCPClient in the client.
The client has to authenticate with the server; it asks the server for the newest version information, downloads any updates, and performs other communications. All of this is done with TIdTCPClient.SendCmd() and TIdTCPClient.LastCmdResult.Text.Text.
As it stands, the server receives commands and replies, while the clients only receive replies, never commands, and I would like to implement a way for the client to receive commands too. But as I have heard, if the client uses SendCmd it should never be listening for data with something like ReadLn(), as that would interfere with the reply expected by SendCmd.
I thought of making a command to check for commands: for example, the client would send a command like "IsThereCommandForMe", and the server would keep a pool of commands for each client and include them in the reply when the client asks. But I don't think that would be a good approach, as there would be a big delay between a command becoming available and the client asking for it. I also thought of making a new connection with new components, for example a TIdCmdTCPClient, but then there would be two connections for each client; I don't like that idea, as I think it could easily cause problems in the communication.
The reason I want this is that I want to implement chat functionality in the client, and it should receive messages from the server without asking for them all the time; imagine all clients continually asking the server whether there is a message for them. I would also like to be able to inform the client when an update is available, instead of the client having to ask whether there is one. And with this I could send more commands to the client too.
What are your thoughts on this? How can I make the server not only receive commands from the clients, but also send commands to them?
TCP sockets are bidirectional by design. Once the connection between 'client' and 'server' has been established, they are symmetric and data can be sent at any time from any side over the same socket.
Which communication model is used depends only on the protocol (which is just a written 'contract' for the communication). HTTP, for example, uses a request/reply model. With Telnet, on the other hand, both sides can initiate data transmissions. (If you take a look at the Indy implementation of Telnet, you will see that it uses a background thread to listen for server data, but uses the same socket connection in the main thread to send data from client to server.)
A "full duplex" protocol which supports both request/response and server push, and is also firewall-friendly, is WebSockets. With WebSockets (an HTTP upgrade), the server can send data to the connected client(s) at any time. This would meet your 'chat' requirement.
If you use TIdTCPClient / TIdCmdTCPServer, corporate firewalls might block the communication.

Sending file on separate connection

I have a server program that spawns a thread for every incoming connection. This thread then handles the request by receiving it and sending a response. For some kinds of connections I have to respond first with a file and then with a text response.
The problem is that, if I send the textual response after sending the file, the response gets written inside the file, because the client has no way of knowing where the file ends and where the response begins. So I need to close the connection after sending the file and then send the response on another connection, or, alternatively, send the file on a separate connection and then send the response on the current connection. How can I accomplish this?
Use the technique that FTP uses to keep the data connection separate from the control connection. The server starts listening on an ephemeral port -- the OS will assign an unused port to it. It sends this port number to the client on the main connection. The client then connects to the ephemeral port, and the server sends the file on this new connection.
If you need to deal with multiple sockets concurrently, you can use select() or epoll() to wait for data on either of them.
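A sketch of the ephemeral-port part in C: bind the data socket to port 0 so the OS picks a free port, then use getsockname() to find out which port was assigned (error handling omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void) {
        int data_listen = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(0);                /* 0 = let the OS pick a free port */
        bind(data_listen, (struct sockaddr *)&addr, sizeof(addr));
        listen(data_listen, 1);

        /* ask the OS which port it actually assigned */
        socklen_t len = sizeof(addr);
        getsockname(data_listen, (struct sockaddr *)&addr, &len);
        unsigned short data_port = ntohs(addr.sin_port);
        (void)data_port;

        /* send data_port to the client over the existing control connection,
           then accept() on data_listen and transmit the file on that new socket */
        return 0;
    }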

Explain http keep-alive mechanism

Keep-alives were added to HTTP basically to reduce the significant overhead of rapidly creating and closing socket connections for each new request. The following is a summary of how it works within HTTP 1.0 and 1.1:
HTTP 1.0: The HTTP 1.0 specification does not really delve into how Keep-Alive should work. Basically, browsers that support Keep-Alive appended an additional header to the request, as explained below. When the server processes the request and generates a response, it also adds a header to the response:
Connection: Keep-Alive
When this is done, the socket connection is not closed as before, but kept open after sending the response. When the client sends another request, it reuses the same connection. The connection will continue to be reused until either the client or the server decides that the conversation is over, and one of them drops the connection.
The above explanation comes from here. But I don't understand one thing
When this is done, the socket connection is not closed as before, but kept open after sending the response.
As I understand it, we just send TCP packets to make requests and responses. How does this socket connection help, and how does it work? We still have to send packets, but how can it somehow establish a persistent connection? It seems so unreal.
There is overhead in establishing a new TCP connection (DNS lookups, TCP handshake, SSL/TLS handshake, etc). Without a keep-alive, every HTTP request has to establish a new TCP connection, and then close the connection once the response has been sent/received. A keep-alive allows an existing TCP connection to be re-used for multiple requests/responses, thus avoiding all of that overhead. That is what makes the connection "persistent".
In HTTP 0.9 and 1.0, by default the server closes its end of a TCP connection after sending a response to a client. The client must close its end of the TCP connection after receiving the response. In HTTP 1.0 (but not in 0.9), a client can explicitly ask the server not to close its end of the connection by including a Connection: keep-alive header in the request. If the server agrees, it includes a Connection: keep-alive header in the response, and does not close its end of the connection. The client may then re-use the same TCP connection to send its next request.
In HTTP 1.1, keep-alive is the default behavior, unless the client explicitly asks the server to close the connection by including a Connection: close header in its request, or the server decides to include a Connection: close header in its response.
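As an illustration of what goes over the wire, a keep-alive exchange on a single TCP connection might look roughly like this (the host, paths, and sizes are made-up placeholders):

    GET /index.html HTTP/1.0
    Host: example.com
    Connection: keep-alive

    HTTP/1.0 200 OK
    Content-Length: 1024
    Connection: keep-alive

    ... 1024 bytes of body ...

    GET /logo.png HTTP/1.0
    Host: example.com
    Connection: keep-alive

The second GET is sent over the very same TCP connection, without another handshake; the Connection headers are what signal that the connection should stay open.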
Let's make an analogy. HTTP consists of sending a request and getting a response. This is similar to asking someone a question and receiving an answer.
The problem is that the question and the answer need to go through the network. To communicate through the network, TCP (sockets) is used. That's similar to using the phone to ask someone a question and having that person answer.
With HTTP 1.0, loading a page containing 2 images, for example, consists of:
make a phone call
ask for the page
get the page
end the phone call
make a phone call
ask for the first image
get the first image
end the phone call
make a phone call
ask for the second image
get the second image
end the phone call
Making a phone call and ending it takes time and resources. Control data (like the phone number) must transit over the network. It would be more efficient to make a single phone call to get the page and the two images. That's what keep-alive allows. With keep-alive, the above becomes:
make a phone call
ask for the page
get the page
ask for the first image
get the first image
ask for the second image
get the second image
end the phone call
This is indeed a networking question, but it may be appropriate here after all.
The confusion arises from distinction between packet-oriented and stream-oriented connections.
The Internet is often called a "TCP/IP" network. At the low level (IP, the Internet Protocol), the Internet is packet-oriented. Hosts send packets to other hosts.
However, on top of IP we have TCP (Transmission Control Protocol). The entire purpose of this layer of the internet is to hide the packet-oriented nature of the underlying medium and to present the connection between two hosts (hosts and ports, to be more correct) as a stream of data, similar to a file or a pipe. We can then open a socket in the OS API to represent that connection, and we can treat that socket as a file descriptor (literally an fd in Unix, very similar to a file HANDLE in Windows).
Most of the rest of the Internet's client-server protocols (HTTP, Telnet, SSH, SMTP) are layered on top of TCP. Thus a client opens a connection (a socket), writes its request (which is transmitted as one or more packets in the underlying IP) to the socket, reads the response from the socket (and the response can contain data from multiple IP packets as well) and then... Then the choice is to keep the connection open for the next request or to close it. Pre-keep-alive HTTP always closed the connection. New clients and servers can keep it open.
The advantage of KeepAlive is that establishing a connection is expensive. For short requests and responses it may take more packets than the actual data exchange.
The slight disadvantage may be that the server now has to tell the client where the response ends. The server cannot simply send the response and close the connection. It has to tell the client: "read 20KB and that will be the end of my response". Thus the size of the response has to be known in advance by the server and communicated to the client as part of the higher-level protocol (e.g. Content-Length: in HTTP). Alternatively, the server may send a delimiter to mark the end of the response - it all depends on the protocol above TCP.
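For example, in HTTP the end of the response is signalled either by an explicit Content-Length or by chunked transfer encoding (chunk sizes given in hexadecimal); the toy responses below both carry the same 13-byte body:

    HTTP/1.1 200 OK
    Content-Length: 13

    Hello, world!

    HTTP/1.1 200 OK
    Transfer-Encoding: chunked

    d
    Hello, world!
    0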
You can understand it this way:
HTTP uses TCP as transport. Before sending and receiving packets via TCP,
The client needs to send a connect request
The server responds
Data transfer is done
Connection is closed.
However, if we are using the keep-alive feature, the connection is not closed after the data is received; it stays active.
This helps improve performance: for subsequent calls, connection establishment does not take place because the connection to the server is already there, which means less time is taken. Although the time taken to connect is small, it does make a lot of difference in systems where every millisecond counts.

What exactly is a socket

I don't know exactly what a socket is.
A server runs on a specific computer and has a socket that is bound to a specific port number. The server just waits, listening to the socket for a client to make a connection request.
When the server accepts the connection, it gets a new socket bound to the same local port and also has its remote endpoint set to the address and port of the client.
It needs a new socket so that it can continue to listen to the original socket for connection requests while tending to the needs of the connected client.
So, is a socket some class created in memory? And is a new instance of this class created in memory for every client connection? Written inside the socket are the local port and the port and IP address of the client which is connected. Can someone explain the definition of a socket to me in more detail?
Thanks
A socket is effectively a type of file handle, behind which can lie a network session.
You can read and write it (mostly) like any other file handle and have the data go to and come from the other end of the session.
The specific actions you're describing are for the server end of a socket. A server establishes (binds to) a socket which can be used to accept incoming connections. Upon acceptance, you get another socket for the established session so that the server can go back and listen on the original socket for more incoming connections.
How they're represented in memory varies depending on your abstraction level.
At the lowest level, in C, they're just file descriptors: small integers. However, you may have a higher-level Socket class which encapsulates the behaviour of the low-level socket.
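A tiny C sketch of that lowest level: the value returned by socket() is just another small-integer file descriptor, and once a connection is established it can be passed to the same read()/write()/close() calls used for ordinary files.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        printf("socket fd = %d\n", fd);   /* typically a small integer such as 3 */
        /* after connect() or accept(), read(fd, ...) and write(fd, ...) work on it */
        close(fd);
        return 0;
    }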
According to "TCP/IP Sockets in C: Practical Guide for Programmers" by Michael J. Donahoo & Kenneth L. Calvert (Chapter 1, Section 1.4, p. 7):
A socket is an abstraction through which an application may send and receive data, in much the same way as an open file allows an application to read and write data to stable storage. A socket allows an application to "plug in" to the network and communicate with other applications that are also plugged in to the same network. Information written to the socket by an application on one machine can be read by an application on a different machine, and vice versa.
Refer to this book to get clarity about sockets from a programmer's point of view.
A network socket is one endpoint in a communication flow between two programs running over a network.
A socket is the combination of IP address plus port number
This is the typical sequence of socket requests from a server application in the connectionless context of the Internet, in which a server handles many client requests and does not maintain a connection for longer than the serving of the immediate request:
Steps to implement
At the server side
initialize socket()
--
bind()
--
recvfrom()
--
(wait for a sendto request from some client)
--
(process the sendto request)
--
sendto (in reply to the request from the client...for example, send an HTML file)
A corresponding client sequence of socket requests would be:
socket()
--
bind()
--
sendto()
--
recvfrom()
so that you can set up this communication pipeline (a minimal C sketch of the server-side sequence is given below).
For more, see http://www.steves-internet-guide.com/tcpip-ports-sockets
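To make the server-side sequence above concrete, here is a minimal connectionless (UDP) echo server sketch in C; the port number is a placeholder and error handling is omitted:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(void) {
        /* socket() */
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        /* bind() */
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);                 /* placeholder port */
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        for (;;) {
            char buf[1024];
            struct sockaddr_in client;
            socklen_t clen = sizeof(client);

            /* recvfrom(): wait for a sendto request from some client */
            ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&client, &clen);
            if (n < 0)
                break;

            /* sendto(): reply to whichever client sent the request */
            sendto(s, buf, (size_t)n, 0, (struct sockaddr *)&client, clen);
        }
        return 0;
    }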
I found this article online.
So to put it all back together, a socket is the combination of an IP address and a port, and it acts as an endpoint for receiving or sending information over the internet, which is kept organized by TCP. These building blocks (in conjunction with various other protocols and technologies) work in the background to make every google search, facebook post, or introductory technical blog post possible.
https://medium.com/swlh/understanding-socket-connections-in-computer-networking-bac304812b5c
Socket definition
A communication between two processes running on two computer systems can be completely specified by the association {protocol, local-address, local-process, remote-address, remote-process}. We also define a half association as either {protocol, local-address, local-process} or {protocol, remote-address, remote-process}, which specifies half of a connection. This half association is also called a socket, or transport address. The term socket was popularized by the Berkeley Unix networking system, where it is "an end point of communication", which corresponds to the definition of a half association.
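As an illustration of the two half associations, on a connected TCP socket getsockname() reports the local half and getpeername() the remote half; together with the protocol they fully specify the connection. A small C sketch (IPv4 only, error handling omitted; fd is assumed to be an already-connected socket):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    void print_association(int fd) {
        struct sockaddr_in local, remote;
        socklen_t llen = sizeof(local), rlen = sizeof(remote);
        char lbuf[INET_ADDRSTRLEN], rbuf[INET_ADDRSTRLEN];

        getsockname(fd, (struct sockaddr *)&local,  &llen);  /* local half  */
        getpeername(fd, (struct sockaddr *)&remote, &rlen);  /* remote half */

        printf("{TCP, %s:%u} <-> {TCP, %s:%u}\n",
               inet_ntop(AF_INET, &local.sin_addr, lbuf, sizeof(lbuf)),
               (unsigned)ntohs(local.sin_port),
               inet_ntop(AF_INET, &remote.sin_addr, rbuf, sizeof(rbuf)),
               (unsigned)ntohs(remote.sin_port));
    }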