How many sockets does Google open for every request it receives?

The following is my recent interview experience with a reputed network software company. I was asked questions about how TCP-level details relate to web requests, and that confused me a lot. I would really like to know expert opinions on the answers. It is not just about the interview but also about a fundamental understanding of how networking works (or how the application layer and the transport layer interact, if they do at all).
Interviewer: Tell me the process that happens behind the scenes when
I open a browser and type google.com in it.
Me: The first thing that happens is that a socket is created, identified by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT, PROTOCOL}. The SRC-PORT number is a random number given by the browser. Then the TCP connection protocol (the three-way handshake) is carried out. Now both the client (my browser) and the server (Google) are ready to handle requests. (The TCP connection is established.)
Interviewer: Wait, when does the name resolution happen?
Me: Yep, I am sorry. It should have happened before the socket is even created. DNS name resolution happens first, to get the IP address of Google to connect to.
Interviewer: Is a socket created for DNS name resolution?
Me: Hmm, I actually do not know. But all I know is that DNS name resolution is connectionless; that is, it's not TCP but UDP, and only a single request-response cycle happens. (So a new socket is created for DNS name resolution.)
Interviewer: google.com is open for other requests from other clients. So does establishing your connection with Google block other users?
Me: I am not sure how Google handles this. But in a typical socket
communication, it is blocking to a minimal extent.
Interviewer: How do you think this can be handled?
Me: I guess the process forks a new thread and creates a socket to handle my
request. From now on, my socket endpoint of communication with
Google is this child socket.
Interviewer: If that is the case, is this child socket’s port number
different than the parent one?
Me: The parent socket is listening at 80 for new requests from
clients. The child must be listening at a different port number.
Interviewer: How is your TCP connection maintained if your destination port number has changed? (That is, the src-port number on Google's packets.)
Me: The dest-port that I see as a client is always 80. When a response is sent back, it also comes from port 80. I guess the OS/the parent process sets the source port back to 80 before it sends the packet back.
Interviewer: How long is your socket connection established with
Google?
Me: If I don’t make any requests for a period of time, the
main thread closes its child socket and any subsequent requests from
me will be like I am a new client.
Interviewer: No, Google will not keep a dedicated child socket for
you. It handles your request and discards/recycles the sockets right
away.
Interviewer: Although Google may have many servers to serve
requests, each server can have only one parent socket opened at port 80. The number of clients to access Google's webpage must be larger than the number of servers they have. How is this usually handled?
Me: I am not sure how this is handled. The only way I can see it working is to spawn a thread for each request it receives.
Interviewer: Do you think the way Google handles this is different from
any bank website?
Me: At the TCP-IP socket level, it should be
similar. At the request level, slightly different because a session
is maintained to keep state between requests for banks' websites.
If someone can give an explanation of each of the points, it will be very helpful for many beginners in networking.

How many sockets does Google open for every request it receives?
This question doesn't actually appear in the interview, but it's in your title so I'll answer it. Google doesn't open any sockets at all. Your browser does that. Google accepts connections, in the form of new sockets, but I wouldn't describe that as 'opening' them.
Interviewer: Tell me the process that happens behind the scenes when I open a browser and type google.com in it.
Me : The first thing that happens is a socket is created which is identified by {SRC-IP, SRC-PORT, DEST-IP, DEST-PORT, PROTOCOL}.
No. The connection is identified by the tuple. The socket is an endpoint to the connection.
The SRC-PORT number is a random number given by the browser.
No. By the operating system.
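As a concrete illustration of that correction, here is a minimal client sketch in C (the IPv4 literal 142.250.80.46 is only a placeholder for one of Google's addresses, and error handling is omitted). The program never chooses a source port; it just calls connect(), and getsockname() afterwards shows the ephemeral port the operating system picked.

    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);            /* TCP socket, no local port yet */

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(80);                          /* destination port 80 */
        inet_pton(AF_INET, "142.250.80.46", &dst.sin_addr);  /* placeholder server address */

        if (connect(fd, (struct sockaddr *)&dst, sizeof dst) == 0) {
            struct sockaddr_in local;
            socklen_t len = sizeof local;
            getsockname(fd, (struct sockaddr *)&local, &len); /* ask what the OS assigned */
            printf("OS-assigned source port: %d\n", ntohs(local.sin_port));
        }
        close(fd);
        return 0;
    }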
Usually the TCP/IP connection protocol (three-way handshake) is carried out. Now both the client (my browser) and the server (Google) are ready to handle requests. (TCP connection is established.)
Interviewer: Wait, when does the name resolution happen?
Me: Yep, I am sorry. It should have happened before the socket is even created. DNS name resolution happens first, to get the IP address of Google to connect to.
Interviewer : Is a socket created for DNS name resolution?
Me: Hmm, I actually do not know. But all I know is that DNS name resolution is connectionless; that is, it's not TCP but UDP, and only a single request-response cycle happens. (So a new socket is created for DNS name resolution.)
Any rationally implemented browser would delegate the entire thing to the operating system's Sockets library, whose internal functioning depends on the OS. It might look at an in-memory cache, a file, a database, an LDAP server, several things, before going out to a DNS server, which it can do via either TCP or UDP. It's not a great question.
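To make the "delegate it to the operating system" point concrete, this is the resolver call a client would typically make; whether the answer comes from a cache, a hosts file, or a DNS query over UDP or TCP is hidden behind getaddrinfo(). A hedged sketch, not what any particular browser does:

    #include <stdio.h>
    #include <netdb.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints = {0}, *res, *p;
        hints.ai_family   = AF_UNSPEC;       /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;     /* we intend to open a TCP connection */

        if (getaddrinfo("google.com", "80", &hints, &res) != 0)
            return 1;

        for (p = res; p != NULL; p = p->ai_next) {
            char buf[INET6_ADDRSTRLEN];
            void *addr = (p->ai_family == AF_INET)
                ? (void *)&((struct sockaddr_in  *)p->ai_addr)->sin_addr
                : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
            printf("%s\n", inet_ntop(p->ai_family, addr, buf, sizeof buf));
        }
        freeaddrinfo(res);
        return 0;
    }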
Interviewer: google.com is open for other requests from other clients. So does establishing your connection with Google block other users?
Me: I am not sure how Google handles this. But in a typical socket communication, it is blocking to a minimal extent.
Wrong again. It has very little to do with Google specifically. A TCP server has a separate socket per accepted connection, and any rationally constructed TCP server handles them completely independently, whether via multithreading, multiplexed/non-blocking I/O, or asynchronous I/O. They don't block each other.
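One plausible way a toy server achieves that independence is the classic thread-per-connection pattern sketched below (POSIX threads, error handling omitted; multiplexed or asynchronous I/O are equally valid designs, and a real front end such as Google's is far more sophisticated). Every accept() yields a fresh socket, and each one is served on its own thread, so clients do not block each other.

    #include <pthread.h>
    #include <stdint.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    static void *serve(void *arg) {
        int conn_fd = (int)(intptr_t)arg;        /* the accepted socket, not the listener */
        char buf[4096];
        ssize_t n;
        while ((n = recv(conn_fd, buf, sizeof buf, 0)) > 0)
            send(conn_fd, buf, n, 0);            /* echo back; a web server would parse HTTP here */
        close(conn_fd);
        return NULL;
    }

    int main(void) {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080);      /* 80 in the interview scenario; 8080 runs unprivileged */

        bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
        listen(listen_fd, 128);

        for (;;) {
            int conn_fd = accept(listen_fd, NULL, NULL);   /* new socket per client */
            pthread_t t;
            pthread_create(&t, NULL, serve, (void *)(intptr_t)conn_fd);
            pthread_detach(t);                   /* each connection is handled independently */
        }
    }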
Interviewer : How do you think this can be handled?
Me: I guess the process forks a new thread and creates a socket to handle my request. From now on, my socket endpoint of communication with Google is this child socket.
Threads are created, not 'forked'. Forking a process creates another process, not another thread. The socket isn't 'created' so much as accepted, and this would normally precede thread creation. It isn't a 'child' socket.
Interviewer: If that is the case, is this child socket's port number different from the parent one?
Me: The parent socket is listening at 80 for new requests from clients. The child must be listening at a different port number.
Wrong again. The accepted socket uses the same port number as the listening socket, and the accepted socket isn't 'listening' at all, it is receiving and sending.
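A small sketch (error handling omitted; port 8080 chosen so it runs unprivileged) that demonstrates this: after accept(), getsockname() on the accepted socket reports the same local port the listener is bound to, while getpeername() shows the client's address and its ephemeral port.

    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8080);
        bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
        listen(listen_fd, 16);

        int conn_fd = accept(listen_fd, NULL, NULL);

        struct sockaddr_in local, peer;
        socklen_t llen = sizeof local, plen = sizeof peer;
        getsockname(conn_fd, (struct sockaddr *)&local, &llen);   /* our side of the connection */
        getpeername(conn_fd, (struct sockaddr *)&peer,  &plen);   /* the client's side */

        char ip[INET_ADDRSTRLEN];
        printf("local port %d (same as the listener), client %s:%d\n",
               ntohs(local.sin_port),
               inet_ntop(AF_INET, &peer.sin_addr, ip, sizeof ip),
               ntohs(peer.sin_port));

        close(conn_fd);
        close(listen_fd);
        return 0;
    }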
Interviewer: How is your TCP connection maintained since your dest-port number has changed? (That is, the src-port number on Google's packets.)
Me: The dest-port that I see as a client is always 80. When a response is sent back, it also comes from port 80. I guess the OS/the parent process sets the src port back to 80 before it sends the packet back.
This question was designed to explore your previous wrong answer. Your continuation of your wrong answer is still wrong.
Interviewer: How long does your socket connection with Google stay established?
Me: If I don't make any requests for a period of time, the main thread closes its child socket, and any subsequent requests from me will be as if I am a new client.
Wrong again. You don't know anything about threads at Google, let alone which thread closes the socket. Either end can close the connection at any time. Most probably the server end will beat you to it, but it isn't set in stone, and neither is which if any thread will do it.
Interviewer: No, Google will not keep a dedicated child socket for you. It handles your request and discards/recycles the socket right away.
Here the interviewer is wrong. He doesn't seem to have heard of HTTP keep-alive, or the fact that it is the default in HTTP 1.1.
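For reference, an illustrative HTTP/1.1 exchange (hand-written headers, not a capture). Because persistent connections are the default in HTTP/1.1, neither side needs to send Connection: keep-alive; the TCP connection stays open for further requests until one side sends Connection: close or simply closes it. HTTP/1.0 had the opposite default.

    GET /search?q=sockets HTTP/1.1
    Host: www.google.com

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=UTF-8
    Content-Length: 1234

    ...1234 bytes of body; the same socket can now carry the next request...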
Interviewer: Although Google may have many servers to serve requests, each server can have only one parent socket opened at port 80. The number of clients accessing Google's webpage must be much larger than the number of servers they have. How is this usually handled?
Me: I am not sure how this is handled. The only way I can see it working is to spawn a thread for each request it receives.
Here you haven't answered the question at all. He is fishing for an answer about load balancers or round-robin DNS or something in front of all those servers. However, his sentence "the number of clients accessing Google's webpage must be much larger than the number of servers they have" has already been answered by the existence of what you are both incorrectly calling 'child sockets'. Again, not a great question, unless you've reported it inaccurately.
Interviewer: Do you think the way Google handles this is different from any bank website?
Me: At the TCP/IP socket level, it should be similar. At the request level, slightly different, because a session is maintained to keep state between requests on banks' websites.
You almost got this one right. HTTP sessions to Google exist, as well as to bank websites. It isn't much of a question. He should be asking for facts, not your opinion.
Overall, (a) you failed the interview completely, and (b) you indulged in far too much guesswork. You should have simply stated 'I don't know' and let the interview proceed to things that you do know about.

For point #6, here is how I understand it: if both ends of an end-to-end connection were the same as those of another connection, there would indeed be no way to distinguish the two, but if even a single end differs, it is easy to tell them apart. So there is no need to switch the destination port 80 (the default) back and forth, since the source ports already differ. For example, {TCP, 10.0.0.5:51000, 142.250.80.46:80} and {TCP, 10.0.0.5:51001, 142.250.80.46:80} describe two distinct connections even though the server side is identical in both.

Related

Client port changes with each request

I am trying to establish a TCP/IP connection between a controller (client) and a program on my PC (server) using C++. I used a sniffer to see how the client's requests are being sent, and I found that each connect request from the controller comes from a different port (the IP is known): it starts with a random port number and increments by 1 with each request until I restart the controller or the server receives the request. I have some questions.
1- Is that standard behaviour, and what is the idea behind it, given that the controller is a Mitsubishi controller?
2- Is there any way I can get the new port of the controller without using accept?
This is not so much the behaviour of the controller as of the network stack running on top of the controller, which may be integrated into the controller hardware (search keyword: TCP offload).
This is expected behaviour. A port is not recycled for reuse for a lengthy period after the socket using it is closed, to prevent all sorts of nasty side effects (a simple example: late packets from a previous connection trying to sneak in as legitimate packets of a later connection). Your previous port may therefore not be available for reuse. A simple solution is to do exactly what the OP's network stack does: sequentially assign the next port number.
Not with BSD-style sockets. accept accepts a connection from the client. If you do not call accept, you don't get a socket to handle the connection; and once you have the socket, you should not need to care what the port is. It's all abstracted away and hidden out of sight.
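To make that concrete, a minimal sketch (IPv4 only, error handling omitted, port 5000 is a placeholder for whatever the PC-side program listens on): the controller's current source port is simply part of the peer address that accept() hands back, and getpeername() on the accepted socket returns the same information later.

    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in srv = {0};
        srv.sin_family      = AF_INET;
        srv.sin_addr.s_addr = htonl(INADDR_ANY);
        srv.sin_port        = htons(5000);                 /* placeholder server port */
        bind(listen_fd, (struct sockaddr *)&srv, sizeof srv);
        listen(listen_fd, 8);

        struct sockaddr_in client;
        socklen_t len = sizeof client;
        int conn_fd = accept(listen_fd, (struct sockaddr *)&client, &len); /* peer address filled in here */

        char ip[INET_ADDRSTRLEN];
        printf("controller connected from %s, port %d\n",
               inet_ntop(AF_INET, &client.sin_addr, ip, sizeof ip),
               ntohs(client.sin_port));                    /* the "new" port the controller chose */

        close(conn_fd);
        close(listen_fd);
        return 0;
    }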
If this is a problem, consider using a connectionless protocol like UDP. You don't get automatic re-transmission when packet loss is detected and all of the other nice things TCP does for you, but there is no connection overhead.

Why do we have to get two file descriptors in TCP server socket programming?

I am following this tutorial on server socket programming (link). Functionally I have no problem with it; what I'm asking is more of an architecture/design question. Please take a look at the tutorial. We actually see two file descriptors: one when calling socket(), and one when calling accept(). It makes sense that we get a file descriptor when creating a socket, because we treat a socket as a file; it also makes sense that we need multiple file descriptors when accepting different connections. But why do we need both to make it work?
The first socket is called the listening socket. TCP is a connection-oriented stream protocol: each client connection operates on its own socket, just like a file. If you only had one socket, you would not be able to tell which connection the data received on it belongs to. So TCP sockets are designed so that the listening socket operates in LISTEN mode, and each time a client wants to establish a connection to the server, the accept call returns a new socket, aka the client socket, representing the new connection, which is then used to communicate with that client exclusively.
On the other hand, UDP is a connectionless, datagram-based protocol, in which just one socket is used to handle the data from all clients.
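A minimal sketch (error handling omitted, port 9000 is arbitrary) that makes both descriptors visible: the one returned by socket() only ever listens, while the one returned by accept() is what you actually read from and write to, and it keeps working even after the listener is closed.

    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        int listen_fd = socket(AF_INET, SOCK_STREAM, 0);       /* 1st descriptor: the listener */

        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(9000);
        bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
        listen(listen_fd, 16);

        int conn_fd = accept(listen_fd, NULL, NULL);           /* 2nd descriptor: this client's connection */
        printf("listening fd = %d, connection fd = %d\n", listen_fd, conn_fd);

        close(listen_fd);          /* no more clients expected; conn_fd keeps working on its own */

        char buf[1024];
        ssize_t n = recv(conn_fd, buf, sizeof buf, 0);         /* talk to the client on conn_fd only */
        if (n > 0)
            send(conn_fd, buf, n, 0);
        close(conn_fd);
        return 0;
    }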
One socket represents the listening endpoint. The other socket represents the accepted incoming connection. If you don't want to accept any more connections, you can close the listening socket after calling accept.
Imagine a quick repair shop, where customers bring their PCs to be repaired, then sit in the waiting room till the PC is fixed. Not the long repairs that take weeks; the quick repairs that take an hour.
The sane way to run this shop is to have a receptionist who listens to the customers, takes the broken PC and passes it on to whichever repairman is currently free. He accepts the job, and goes off to work. The customer goes and sits. As another customer arrives, the receptionist is free to greet them, and directs them to another repairman if one is available.
The not sane way to run this shop is to have a receptionist who repairs the PC by themselves. The next customer to come to the shop has to hold their broken PC in their arms, waiting for the receptionist to repair the PC of the previous customer before they can hand over their load. Eventually, people's hands get tired, and they leave the queue, possibly to find a saner shop. Meanwhile, the poor receptionist is stressed, looking at all the people waiting to interact with her...
Ideally, they are two different TCP endpoints: one is used as the listening endpoint (LISTENING) and the other as the accepted incoming connection (ESTABLISHED). You can close the listening endpoint once you are done accepting connections.

What exactly is a socket?

I don't know exactly what socket means.
A server runs on a specific computer and has a socket that is bound to a specific port number. The server just waits, listening to the socket for a client to make a connection request.
When the server accepts the connection, it gets a new socket bound to the same local port and also has its remote endpoint set to the address and port of the client.
It needs a new socket so that it can continue to listen to the original socket for connection requests while tending to the needs of the connected client.
So, a socket is some class created in memory? And for every client connection a new instance of this class is created in memory? Inside the socket are stored the local port, plus the port and IP address of the client which is connected. Can someone explain the definition of a socket to me in more detail?
Thanks
A socket is effectively a type of file handle, behind which can lie a network session.
You can read and write it (mostly) like any other file handle and have the data go to and come from the other end of the session.
The specific actions you're describing are for the server end of a socket. A server establishes (binds to) a socket which can be used to accept incoming connections. Upon acceptance, you get another socket for the established session so that the server can go back and listen on the original socket for more incoming connections.
How they're represented in memory varies depending on your abstraction level.
At the lowest level in C, they're just file descriptors, a small integer. However, you may have a higher-level Socket class which encapsulates the behaviour of the low-level socket.
According to "TCP/IP Sockets in C-Practical Guide for Programmers" by Michael J. Doonahoo & Kenneth L. Calvert (Chptr 1, Section 1.4, Pg 7):
A socket is an abstraction through which an application may send
and receive data, in much the same way as an open file allows an application to read and write data to stable storage.
A socket allows an application to "plug in" to the network and communicate
with other applications that are also plugged in to the same network.
Information written to the socket by an application on one machine can be
read by an application on a different machine, and vice versa.
Refer to this book to get clarity about sockets from a programmer's point of view.
A network socket is one endpoint in a communication flow between two programs running over a network.
A socket is the combination of an IP address and a port number.
This is the typical sequence of socket requests from a server application in the connectionless context of the Internet, in which a server handles many client requests and does not maintain a connection longer than the serving of the immediate request:
Steps to implement

At the server side:
socket()   (initialize the socket)
bind()
recvfrom()   (wait for a sendto request from some client)
(process the sendto request)
sendto()   (in reply to the request from the client; for example, send an HTML file)

A corresponding client sequence of socket requests would be:
socket()
bind()
sendto()
recvfrom()
so that a request-response pipeline is formed between the client and the server (a minimal UDP sketch of this sequence follows below).
For more, see http://www.steves-internet-guide.com/tcpip-ports-sockets
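A condensed sketch of that server-side sequence (UDP, so there is no listen/accept; port 9999 is a placeholder and error handling is omitted). The single socket receives datagrams from any client and replies to whatever address recvfrom() reported; the client side mirrors it with socket(), sendto() and recvfrom(), and the explicit bind() in the list above is optional for the client, since the OS assigns a port on the first sendto() anyway.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);               /* one socket serves every client */

        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(9999);
        bind(fd, (struct sockaddr *)&addr, sizeof addr);

        for (;;) {
            char buf[1500];
            struct sockaddr_in client;
            socklen_t len = sizeof client;
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                                 (struct sockaddr *)&client, &len);     /* wait for a client datagram */
            if (n > 0)
                sendto(fd, buf, n, 0, (struct sockaddr *)&client, len); /* reply to whoever sent it */
        }
    }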
I found this article online:
So to put it all back together, a socket is the combination of an IP
address and a port, and it acts as an endpoint for receiving or
sending information over the internet, which is kept organized by TCP.
These building blocks (in conjunction with various other protocols and
technologies) work in the background to make every google search,
facebook post, or introductory technical blog post possible.
https://medium.com/swlh/understanding-socket-connections-in-computer-networking-bac304812b5c
Socket definition
A communication between two processes running on two computer systems can be completely specified by the association {protocol, local-address, local-process, remote-address, remote-process}. We also define a half-association as either {protocol, local-address, local-process} or {protocol, remote-address, remote-process}, which specifies half of a connection. This half-association is also called a socket, or transport address. The term socket was popularized by the Berkeley Unix networking system, where it is "an end point of communication", which corresponds to the definition of a half-association.

How do I design a peer-to-peer app that avoids using listening sockets?

I've noticed that if you want to write an application that uses listening sockets, you need to create port-forwarding rules on your router. If I want to connect two computers without either of them messing about with router settings, is there a way I can get the two clients to connect to each other without either of them using listening sockets? There would need to be another server somewhere else telling them to connect, but is it possible?
Some clarifications, and an answer:
Routers don't care about or handle ports; that is the role of a firewall, which does port forwarding. The combined router/firewall device most of us have at home adds to the common misunderstanding.
Can you connect two computers without a ServerSocket? No. You can use UDP (a stateless, connectionless communication protocol), but the role of a ServerSocket is to "listen" for incoming connection requests and generate a Socket from those requests, which creates a communications channel between two endpoints. A Socket has both an InputStream and an OutputStream, so it can both read and write at either end. At that point (once the connection is made), the distinction between client and server is arbitrary, since a Socket is a two-way connection object, which allows both sides to send and receive.
What about proxying? Doesn't that allow connections between two computers without a ServerSocket? Well, no, because the server that's doing the proxying still has to be using a ServerSocket. Depending on what application you're trying to implement, this might be the way to go, or it might just add overhead. Even if there were "another server somewhere else telling them to connect", somebody has to listen for a connection request, which is the job of the ServerSocket.
If connections are happening over already open ports (most publicly accessible servers have ports <1024 not blocked by firewalls, but exceptions exist), then you shouldn't need to change firewall settings to get the connection to work.
So, to reiterate, the ONLY role of a ServerSocket (as far as your question is concerned) is to listen for incoming connection requests, and from those requests, create a Socket, which is a two-way communications channel between the two end points.
To answer the question, "How do I design a peer-to-peer app that avoids using listening sockets?", you don't. In the case of something like Vuze, the software acts as both client and server simultaneously, hence the term "peer", vs. "client" or "server" alone. In Vuze every client is a server, and every server (except for the tracker) is a client.
If you need a TCP connection between the two computers and both of them are behind routers (and you don't want to set up port forwarding), I think the only other possibility is to have a third server somewhere that isn't behind a firewall, running a ServerSocket, accepting connections from your two other computers, and proxying communications between the two. You can't establish a TCP connection between the two without one listening on a socket and the other connecting to it.
Q: If I want to connect two computers without either one of the computers messing about with router settings, is there a way that I can get the two clients to connect to each other?
Yes: have the server listen on an open port :)

C++ Winsock API: how to get the connecting client's IP before accepting the connection?

I am using the Winsock API (not CAsyncSocket) to make a socket that listens for incoming connections.
When somebody tries to connect, how can I get their IP address BEFORE accepting the connection? I am trying to make it only accept connections from certain IP addresses.
Thanks
Use the SO_CONDITIONAL_ACCEPT socket option.
Also, pretty sure it's available in XP and Server 2003, not just Vista.
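A rough Winsock sketch of that approach (illustrative only: the allowed address 192.168.1.50, port 5000, and the absence of error handling are all placeholders). SO_CONDITIONAL_ACCEPT defers the accept decision to a condition callback passed to WSAAccept(), which sees the caller's address before the connection is completed and returns CF_ACCEPT or CF_REJECT. One caveat: with this option the stack no longer completes the handshake on its own, so a slow decision just looks to the client like an unanswered connection attempt.

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <stdio.h>
    #pragma comment(lib, "ws2_32.lib")

    /* Condition callback: decide before the handshake is completed. */
    static int CALLBACK Condition(LPWSABUF callerId, LPWSABUF callerData,
                                  LPQOS sqos, LPQOS gqos,
                                  LPWSABUF calleeId, LPWSABUF calleeData,
                                  GROUP *g, DWORD_PTR context)
    {
        struct sockaddr_in *peer = (struct sockaddr_in *)callerId->buf;
        if (peer->sin_addr.s_addr == inet_addr("192.168.1.50"))   /* illustrative allow-list */
            return CF_ACCEPT;
        return CF_REJECT;      /* refused before accept, not accepted-then-closed */
    }

    int main(void) {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

        BOOL on = TRUE;        /* defer the accept decision to the condition function */
        setsockopt(listener, SOL_SOCKET, SO_CONDITIONAL_ACCEPT, (char *)&on, sizeof on);

        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(5000);
        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, SOMAXCONN);

        SOCKET client = WSAAccept(listener, NULL, NULL, Condition, 0);
        if (client != INVALID_SOCKET) {
            /* talk to the allowed client here */
            closesocket(client);
        }
        closesocket(listener);
        WSACleanup();
        return 0;
    }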
There are two reasons why I do not want to accept the connection in order to check the remote IP address:
1). The client would see that there is a listening socket on this port. If I decide to reject the client connection, I would not want them to know that there is a socket listening on this port.
2). This technique is not as efficient and requires more CPU, RAM, and network usage, so it is not good in the case of a denial-of-service attack.
When using ATM, the CONNECT ACK packet will come from the most recent switch, not the end client. So, you would have to call accept() on the socket, then look at the address (based on the passed addr_family), and at that point just close the socket. By the time it reaches the requester, it will probably just get a failure.
And I'm not sure how many resources you think this will take up, but accepting a connection is at a very low level, and will not really be an issue. It's pretty easy to drop them.
If you come under a DoS attack, your code CAN quit listening for a preset amount of time, so the attacker just gets failures, if you are so worried about it.
Does it really matter if the client knows there is a socket listening? Try using telnet to connect to your localhost on port 137 and see how fast the file sharing in windows drops the connection... (If you even have it enabled, and if I remembered the correct port number.. heh..)
But, at the SOCKET level, you are not going to be able to do what you want. You are talking about getting down to the TCP level, and looking at the incoming connection requests, and deal with them there.
This can be done, but you are talking about a Kernel driver to do it. I'm not sure you can do this in user-mode at all.
If you want Kernel help with this, let me know. I may be able to give you some examples, or guidance.
Just my own two cents, and IMVHO...
Accept the connection, look at the IP, and if it is not allowed, close the connection.
Edit:
I'm assuming you're talking about TCP connection. When you listen to the port and a connection comes from a client, the API will perform the TCP 3-way handshake, and the client will know that this port is being listened to.
I am not sure if there is a way to prevent sending any packets (i.e. accepting the connection) so that you can look at the IP address first and then decide.
The only way I can think of is to do packet filtering based on the source IP at the network layer (using firewall, for example).