I connect with Internet Explorer 9 to a web server hosted on an embedded system; the client side runs Windows 7.
The web page has many tabs, and I browse across them until the problem occurs, which takes about one minute.
The embedded system freezes, so it is no longer possible to browse, and it does not respond to ping. After a moment the embedded system recovers because it is designed to reboot. I have attached a Wireshark trace in which you can see 92 connections (use the filter "tcp.stream eq N" with N in [0, 91]). I have the source code, so I know the embedded system supports no more than 37 simultaneous connections. Is the cause an exhaustion of resources?
But I have a more basic question, and I would especially appreciate an answer to it. The web server is at 172.21.1.12 port 80 and the client is at
172.21.9.70 with varying port numbers (see the trace). Since the IP address and port on the server side never change, how many sockets are in use on the server side? The question matters because the more sockets are open, the more likely resources are exhausted.
If the answer is only one socket, then I must conclude there is no lack of resources, because the system can support 37.
I also suggest you use the filter ip.addr == 172.21.1.12 in Wireshark.
I thought I could upload the Wireshark file, but I don't know how to share it with you. Any suggestions?
Dropbox?
With the caveat that you haven't specified your embedded system: most TCP stacks will create a new socket for each new connection, and the mapping from socket to connection is one-to-one.
When a packet arrives at the network stack, the stack has to associate that packet with the right socket. Usually, this is accomplished by maintaining a map from the TCP 4-tuple to the socket, where the 4-tuple consists of [local-ip, local-port, remote-ip, remote-port].
A server makes its service available by listening on a fixed local port that is known to clients wanting to use the service. As you would expect, this is usually port 80 for a web server, and the software interface of most TCP implementations dedicates a socket to this service so that the API can perform operations on its network parameters. However, this socket is not fully connected (the last two parts of the 4-tuple are set to a special "not specified" value, usually all bits 0). When a new connection is accepted, a new socket is created whose 4-tuple consists of the local information of the listening socket and the remote information taken from the source address and port of the SYN packet that initiated the TCP connection.
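To make the listening socket vs. accepted socket distinction concrete, here is a minimal Python sketch using the standard socket module; port 8080 is just a stand-in for a real server port like 80:

    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", 8080))   # local half of the 4-tuple; remote half unspecified
    listener.listen(5)

    conn, peer = listener.accept()     # a brand-new socket, one per accepted connection
    print("listening socket, local:", listener.getsockname())  # ('0.0.0.0', 8080)
    print("accepted socket, local: ", conn.getsockname())      # same local port 8080
    print("accepted socket, remote:", conn.getpeername())      # client's IP and ephemeral port
    conn.close()
    listener.close()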
The limit on the number of connections a server can support is based on how the operating system is configured (you say yours limits it to 37). Using the 4-tuple, a single service (that is, a fixed local-ip and local-port) will have an absolute limit of (2^ADDR_BITS - RESERVED_ADDRS) × (2^16 - RESERVED_PORTS). For IPv4, the number of address bits is 32, while for IPv6 it is 128.
When creating a connection, the client will specify the destination address and port (which fills out the remote information for the 4-tuple), but usually leave the source information unspecified. The TCP stack will choose an appropriate source address based on routing, and select an available source port (which will become the local information to complete the 4-tuple). In theory, any source port that is not being used by the selected local interface to communicate to the same remote service can be used as the local port. Most stacks will dedicate a set of the higher numbered ports for this purpose (referred to as the ephemeral port range).
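A small client-side sketch (plain Python again, with example.com:80 as a placeholder destination) shows the stack filling in the local half of the 4-tuple on its own:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.com", 80))    # we only specify the remote half of the 4-tuple
    print("local, chosen by the stack:", s.getsockname())   # e.g. ('192.168.1.5', 49731)
    print("remote, what we asked for: ", s.getpeername())
    s.close()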
Related
I've read a lot of theory about sockets and client-server connections on these forums, but some points remain blurry and some answers don't satisfy me completely.
Also, I'd like my statements to be confirmed, completed, or corrected:
1)_ A socket is made out of IP source (IP of the client), Port source (a port automatically and randomly chosen by the OS between 1024 and 65535), IP destination (127.0.0.1? Something I don't get here), Port destination (developer-defined port for the server) and protocol type.
There may be something wrong in those lines already.
But considering it is true, how can the server differentiate two processes accessing the server from the same machine? (That is, how can the developer tell them apart if he wants to prevent multiple accesses from the same machine?)
The only difference would be the source port, which is auto-filled by the OS. In this case, it would act as if it were a totally different machine, right?
2)_ I heard there is actually a pair of sockets: one created by the client, and one by the server.
Is there really a need for the server to have a second socket? Is this socket a simple replica to keep a copy in the list of currently connected clients, or is it a different socket, with different values?
3)_ When should a client "disconnect"? After each query? At the end of some process? Something else?
Thanks for the enlightenment!
1)_ A socket is made out of IP source (IP of the client), Port source (a port automatically and randomly chosen by the OS between 1024 and 65535), IP destination (127.0.0.1? Something I don't get here), Port destination (developer-defined port for the server) and protocol type.
I wouldn't say a "socket is made" out of those data points; rather a TCP-connection can be uniquely identified using just those data points:
1. Source IP - the IP address of the client computer
2. Source Port - the port number (on the client computer) that the client is sending packets from and receiving packets on
3. Destination IP - the IP address of the server computer
4. Destination Port - the port number (on the server computer) that the server is sending packets from and receiving packets on
5. Protocol type - what communications-protocol is in use (i.e. either TCP or UDP)
But considering it is true, how can the server differentiate two processes accessing the server from the same machine?
It can differentiate them because the 5-tuple (above) will be unique for each of the two connections. In particular, in the TCP packets the server receives from process #2, field #2 (Source Port) will be different from the value it has in the packets received from process #1.
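If you want to see this for yourself, a quick sketch (Python, with example.com:80 standing in for the server) opens two connections from the same machine and prints the two different source ports:

    import socket

    a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    a.connect(("example.com", 80))
    b.connect(("example.com", 80))
    print(a.getsockname())   # e.g. ('192.168.1.5', 50001)
    print(b.getsockname())   # e.g. ('192.168.1.5', 50002)  <- only the source port differs
    a.close()
    b.close()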
The only difference would be the source port, which is auto-filled by the OS. In this case, it would act as if it were a totally different machine, right?
The server can act however it was programmed to act -- but in most cases the server will be programmed not to care whether two client connections come from the same physical machine or not. To most servers, a client is a client, and a client's physical location is not that important.
Is there really a need for the server to have a second socket? Is this socket a simple replica to keep a copy in the list of currently connected clients, or is it a different socket, with different values?
A socket is just a data structure that lives in a computer's memory to help it keep track of the current state associated with a particular network connection. Since both the client and the server need to keep track of their end of the connection, both the client and the server will have their own socket representing their endpoint. (Note the difference between a "TCP connection", which you can imagine as a virtual wire running from one computer to another, and the two "sockets", which are the virtual connectors at the ends of that wire, attaching the wire to the client program on one end and the server program on the other.)
3)_ When should a client "disconnect"? After each query? At the end of some process? Something else?
Whenever it wants to; it's up to the programmer(*). There are startup/shutdown costs to opening and connecting a new socket, but there is also some ongoing memory and CPU overhead to keeping a socket open indefinitely, so the programmer will have to make a design decision about whether he wants to keep sockets open over extended periods, or not.
(*) Note that in a modern OS, if the client program exits or crashes, the socket will be automatically closed and the connection automatically disconnected by the OS.
This might be a bit weird to explain, but I'll try my best.
I have a Lua program that's intended to serve some data through the network. Specifically, the internet. The data the program is actually transmitting are only strings stored within UDP packets. Generalized, this is how the program operates:
1. The first client launches the program and specifies that it is the 'host' of the connection. The program opens a connection on UDP port 6000 and the main loop listens for any packets received on that port.
2. The second client launches the program and specifies that it is to connect to the 'host' on port 6000. The user enters the IP, and the client opens a UDP connection using a random port between 6050 and 7000.
3. When the client successfully connects to the server, it sends a 'connection' packet, simply containing a '202 OK' string. The 'host' receives this and registers the new client.
4. Now that the connection has been initialized, the programs can send data to each other using the registered client information.
Now, on a local network this program works fine. The purpose of the 'host' mode is to have multiple clients connect to the host and have the host relay packets from one client to all the currently registered clients. The port choices are arbitrary, and random port selection on the client side was simply to allow debugging and testing from a single computer. This has been tested between two and more computers on a physical network and worked successfully. However, when I attempt to run this over the internet it's a no-go. I know that the ports are closed and that's why it's not working. But seeing as I'm going to be distributing this program (privately), I can't expect every person to open ports on their router (or know how to). Security is not currently a concern with the program and should be disregarded in its current state. That being said, I recognise there's the potential for a lot to go wrong with the use of this program over the network, and I accept that. On to the main question: how can I have the host and client communicate over the internet without having to open ports?
I'll elaborate - for example, browsers. Although the technology is quite different to what I'm doing, it's easier to paint a picture - the browser requests data from a web server, and it gets sent back to the client. But wait, if the router is in its default state (I hope) all the ports are closed? So how does the client receive this data if the port is closed?
I hope this makes some kind of sense and I don't sound like a complete fool.
I managed to find some suitable libraries and utilities for communicating over the internet (NAT traversal is now a term I am familiar with), namely the ones supplied with Nmap. These include an implementation of STUN in Lua, among HEAPS of other useful networking-related libraries and scripts.
To actually answer my own question (very simply): the clients and servers are behind what's known as a NAT gateway. Because IPv4 provides only about 4.2 billion addresses, NAT gateways were introduced to work around that limit by separating the clients' internal network from the external network - in this case the internet. The NAT gateway has a single public IP address and gives each device on the internal network its own address on that network. As a result, the devices cannot be reached directly without forwarding connections from the NAT gateway (generally the router) to the client. However, when a UDP connection is made outwards, the NAT gateway opens a port for the purposes of that connection, which gets closed after the connection dies. This externally visible port differs from the one the client specified when it opened the connection, which is where the STUN methods come in. STUN allows the host to discover the public port the client is connecting from and send data back to that port so the user can receive it. Bear in mind this is an EXTREMELY simple explanation of how the technology works, and I'd suggest reading up on the Wikipedia article and some of the Requests for Comments for STUN.
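As a very rough illustration of the hole-punching idea (this is not a STUN implementation), the Python sketch below assumes each side has already learned the other's public IP:port by some out-of-band means such as a STUN query or a rendezvous server; PEER_ADDR is a made-up placeholder:

    import socket

    PEER_ADDR = ("203.0.113.10", 6000)   # hypothetical public endpoint of the other peer

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 6000))

    # Sending first creates an outbound mapping in our NAT, so that the peer's
    # replies addressed to the mapped public port can get back in.
    sock.sendto(b"hello", PEER_ADDR)

    data, addr = sock.recvfrom(1024)     # blocks until the peer's packet arrives
    print("got", data, "from", addr)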
Usually a web server listens for incoming connections on port 80. So my question: isn't the general concept of socket programming that port 80 is only used to listen for incoming connections, and that after the server accepts a connection it uses another port, e.g. port 12345, to communicate with the client? But when I look in Wireshark, the server is always using port 80 during the communication. I am confused here.
So what about https://www.facebook.com:443? It has hundreds of thousands of connections to it per second. Is it possible for a single port to handle such a large amount of traffic?
A particular socket is uniquely identified by a 5-tuple (i.e. a list of 5 particular properties.) Those properties are:
Source IP Address
Destination IP Address
Source Port Number
Destination Port Number
Transport Protocol (usually TCP or UDP)
These parameters must be unique for sockets that are open at the same time. Where you're probably getting confused here is what happens on the client side vs. what happens on the server side in TCP. Regardless of the application protocol in question (HTTP, FTP, SMTP, whatever,) TCP behaves the same way.
When you open a socket on the client side, it will select a random high-number port for the new outgoing connection. This is required, otherwise you would be unable to open two separate sockets on the same computer to the same server. Since it's entirely reasonable to want to do that (and it's very common in the case of web servers, such as having stackoverflow.com open in two separate tabs) and the 5-tuple for each socket must be unique, a random high-number port is used as the source port. However, each of those sockets will connect to port 80 at stackoverflow.com's webserver.
On the server side of things, stackoverflow.com can already distinguish between those two different sockets from your client, again, because they already have different client-side port numbers. When it sees an incoming request packet from your browser, it knows which of the sockets it has open with you to respond to because of the different source port number. Similarly, when it wants to send a response packet to you, it can send it to the correct endpoint on your side by setting the destination port number to the client-side port number it got the request from.
The bottom line is that it's unnecessary for each client connection to have a separate port number on the server's side because the server can already uniquely identify each client connection by its client IP address and client-side port number. This is the way TCP (and UDP) sockets work regardless of application-layer protocol.
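As a concrete sketch of that bookkeeping (plain Python, arbitrary port 8080), a server can key per-connection state on the (client IP, client port) pair that accept() reports, while its own local port never changes:

    import socket

    clients = {}   # (client_ip, client_port) -> connected socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 8080))
    srv.listen(5)

    for _ in range(2):                # accept two connections for the demo
        conn, addr = srv.accept()     # addr is (client_ip, client_port)
        clients[addr] = conn
        print("new connection from", addr,
              "- server side is still port", conn.getsockname()[1])

    for conn in clients.values():
        conn.close()
    srv.close()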
isn't the general concept of socket programming that port 80 is only used to listen for incoming connections, and that after the server accepts a connection it uses another port, e.g. port 12345, to communicate with the client?
No.
But when I look in Wireshark, the server is always using port 80 during the communication.
Yes.
I am confused here.
Only because your 'general concept' isn't correct. An accepted socket uses the same local port as the listening socket.
So what about https://www.facebook.com:443? It has hundreds of thousands of connections to it per second. Is it possible for a single port to handle such a large amount of traffic?
A port is only a number. It isn't a physical thing. It isn't handling anything. TCP is identifying connections based on the tuple {source IP, source port, target IP, target port}. There's no problem as long as the entire tuple is unique.
Ports are a virtual concept, not a hardware resource; it's no harder to handle 10,000 connections on one port than one connection each on 10,000 ports (it's probably even faster).
Not all servers are web servers listening on port 80, nor do all servers maintain lasting connections. Web servers in particular are stateless.
Your suggestion to open a new port for further communication is exactly what happens when using the FTP protocol, but as you have seen this is not necessary.
Ports are not a physical concept, they exist in a standardised form to allow multiple servers to be reachable on the same host without specialised multiplexing software. Such software does still exist, but for entirely different reasons (see: sshttp). What you see as a response from the server on port 80, the server sees as a reply to you on a not-so-random port the OS assigned your connection.
When a server's listening socket accepts a TCP connection, a function such as java.net.ServerSocket.accept() returns a new communication socket whose local port number is the same as the port passed to java.net.ServerSocket(int port).
For example, when you make an ssh connection, you are connected to port 22. What happens then? On a very high level brief overview, I know that if port 22 is open on the other end and if you can authenticate to it as a certain user, then you get a shell on that machine.
But I don't understand how ports tie into this model of services and connections to different services from remote machines? Why is there a need for so many specific ports running specific services? And what exactly happens when you try to connect to a port?
I hope this question isn't too confusing due to my naive understanding. Thanks.
Imagine your server as a house with 65536 doors. If you want to visit family "HTTP", you go to door 80. If you were to visit family "SMTP", you would visit door no. 25.
Technically, a port is just one of multiple possible endpoints for outgoing/incoming connections. Many of the port numbers are assigned to certain services by convention.
Opening/establishing a connection means (when the transport protocol is TCP, which covers most of the “classical” services like HTTP, SMTP, etc.) that you are performing a TCP handshake. With UDP (used for things like streaming and VoIP), there's no handshake.
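A tiny sketch of that difference in Python (placeholder addresses): the TCP connect() call is where the kernel performs the handshake, while the UDP sendto() just fires off a datagram with no prior exchange:

    import socket

    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))            # SYN, SYN-ACK, ACK happen here
    tcp.close()

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"ping", ("203.0.113.1", 5000))  # no handshake, just one datagram
    udp.close()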
Unless you want to understand the deeper voodoo of IP networks, you could just say, that's about it. Nothing overly special.
TCP-IP ports on your machine are essentially a mechanism to get messages to the right endpoints.
Each of the 65536 possible ports (16 bits in total) falls under certain categories as designated by the Internet Assigned Numbers Authority (IANA).
But I don't understand how ports tie into this model of services and
connections to different services from remote machines? Why is there a
need for so many specific ports running specific services?
...
And what exactly happens when you try to connect to a port?
Think of it this way: How many applications on your computer communicate with other machines? Web browser, e-mail client, SSH client, online games, etc. Not to mention all of the stuff running under the hood.
Now think: how many physical ports do you have on your machine? Most desktop machines have one. Occasionally two or three. If a single application had to take complete control over your network interface nothing else would be able to use it! So TCP ports are a way of turning 1 connection into 65536 connections.
For example, when you make an ssh connection, you are connected to
port 22. What happens then?
Think of it like sending a package. Your SSH client in front of you needs to send information to a process running on the other machine. So you supply the destination address in the form of "user@[ip or hostname]" (so that it knows which machine on the network to send it to), and "port 22" (so it gets to the right application running on the machine). Your application then packs up a TCP parcel and stamps a destination and a return address and sends it to the network.
The network finds the destination computer and delivers the package. So now it's at the right machine, but it still needs to get to the right application. What do you think would happen if your SSH packet got delivered to an e-mail client? That's what the port number is for. It effectively tells your computer's local TCP mailman where to make the final delivery. Then the application does whatever it needs to with the data (such as verify authentication) and sends a response packet using your machine's return address. The back and forth continues as long as the connection is active.
Hope that helps. :)
The port is meant to allow applications on TCP/IP to exchange data. Each machine on the internet has a single address, which is its IP address; ports allow different applications on one machine to send and receive data to and from multiple servers on the network/internet. Common applications like FTP and HTTP servers communicate on default ports like 21 and 80, unless network administrators change those defaults for security reasons.
Can two applications on the same machine bind to the same port and IP address? Taking it a step further, can one app listen to requests coming from a certain IP and the other to another remote IP?
I know I can have one application that starts off two threads (or forks) to have similar behavior, but can two applications that have nothing in common do the same?
The answer differs depending on what OS is being considered. In general though:
For TCP, no. You can only have one application listening on the same port at one time. Now if you had 2 network cards, you could have one application listen on the first IP and the second one on the second IP using the same port number.
For UDP (Multicasts), multiple applications can subscribe to the same port.
Edit: Since Linux kernel 3.9, multiple applications can listen on the same port using the SO_REUSEPORT option. More information is available in this lwn.net article.
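A minimal sketch of SO_REUSEPORT in Python (arbitrary port 8080; needs Linux 3.9 or later): both sockets set the option before bind(), and the kernel then distributes incoming connections between them:

    import socket

    def reuseport_listener(port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)  # must be set before bind()
        s.bind(("0.0.0.0", port))
        s.listen(5)
        return s

    a = reuseport_listener(8080)
    b = reuseport_listener(8080)   # would fail with EADDRINUSE without SO_REUSEPORT
    print("both listening on port 8080")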
Yes (for TCP), you can have two programs listening on the same socket, if the programs are designed to do so. When the socket is created by the first program, make sure the SO_REUSEADDR option is set on the socket before you bind(). However, this may not be what you want. What it does is that an incoming TCP connection is directed to one of the programs, not both; it does not duplicate the connection, it just allows two programs to service the incoming request. For example, web servers will have multiple processes all listening on port 80, and the OS sends a new connection to the process that is ready to accept new connections.
SO_REUSEADDR
Allows other sockets to bind() to this port, unless there is an active listening socket bound to the port already. This enables you to get around those "Address already in use" error messages when you try to restart your server after a crash.
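For completeness, this is the usual way SO_REUSEADDR is set in Python (port 8080 is just an example). Consistent with the description above, it lets a restarted server rebind a port whose old connections are still in TIME_WAIT; it does not by itself give you two independent listeners at the same time:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # set before bind()
    s.bind(("0.0.0.0", 8080))
    s.listen(5)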
Yes.
1. Multiple listening TCP sockets, all bound to the same port, can co-exist, provided they are all bound to different local IP addresses. Clients can connect to whichever one they need to. This excludes 0.0.0.0 (INADDR_ANY).
2. Multiple accepted sockets can co-exist, all accepted from the same listening socket, all showing the same local port number as the listening socket.
3. Multiple UDP sockets all bound to the same port can co-exist provided either the same condition as at (1) holds or they have all had the SO_REUSEADDR option set before binding.
4. TCP ports and UDP ports occupy different namespaces, so the use of a port for TCP does not preclude its use for UDP, and vice versa.
Reference: Stevens & Wright, TCP/IP Illustrated, Volume II.
In principle, no.
It's not written in stone; but it's the way all APIs are written: the app opens a port, gets a handle to it, and the OS notifies it (via that handle) when a client connection (or a packet in UDP case) arrives.
If the OS allowed two apps to open the same port, how would it know which one to notify?
But... there are ways around it:
As Jed noted, you could write a 'master' process, which would be the only one that really listens on the port and notifies others, using any logic it wants to separate client requests.
On Linux and BSD (at least) you can set up 'remapping' rules that redirect packets from the 'visible' port to different ones (where the apps are listening), according to any network related criteria (maybe network of origin, or some simple forms of load balancing).
Yes, definitely. As far as I remember, from kernel version 3.9 (not sure of the exact version) onwards, support for SO_REUSEPORT was introduced. SO_REUSEPORT allows binding to the exact same port and address, as long as the first server sets this option before binding its socket.
It works for both TCP and UDP. Refer to the link for more details: SO_REUSEPORT
No. Only one application can bind to a port at a time, and behavior if the bind is forced is indeterminate.
With multicast sockets -- which sound like nowhere near what you want -- more than one application can bind to a port as long as SO_REUSEADDR is set in each socket's options.
You could accomplish this by writing a "master" process, which accepts and processes all connections, then hands them off to your two applications that need to listen on the same port. This is the approach that web servers and the like take, since many processes need to listen on port 80.
Beyond this, we're getting into specifics -- you tagged both TCP and UDP, which is it? Also, what platform?
You can have one application listening on one port for one network interface. Therefore you could have:
httpd listening on remotely accessible interface, e.g. 192.168.1.1:80
another daemon listening on 127.0.0.1:80
Sample use case could be to use httpd as a load balancer or a proxy.
When you create a TCP connection, you ask to connect to a specific TCP address, which is a combination of an IP address (v4 or v6, depending on the protocol you're using) and a port.
When a server listens for connections, it can inform the kernel that it would like to listen to a specific IP address and port, i.e., one TCP address, or on the same port on each of the host's IP addresses (usually specified with IP address 0.0.0.0), which is effectively listening on a lot of different "TCP addresses" (e.g., 192.168.1.10:8000, 127.0.0.1:8000, etc.)
No, you can't have two applications listening on the same "TCP address," because when a message comes in, how would the kernel know to which application to give the message?
However, in most operating systems you can set up several IP addresses on a single interface (e.g., if you have 192.168.1.10 on an interface, you could also set up 192.168.1.11, if nobody else on the network is using it), and in those cases you could have separate applications listening on port 8000 on each of those two IP addresses.
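A sketch of that setup in Python: two independent sockets listen on port 8000, each bound to a different local IP. Both addresses are hypothetical and must actually be configured on the machine for the bind() calls to succeed:

    import socket

    a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    a.bind(("192.168.1.10", 8000))   # first local IP
    a.listen(5)

    b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    b.bind(("192.168.1.11", 8000))   # same port, different local IP
    b.listen(5)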
Just to share what @jnewton mentioned:
I started an nginx and an embedded Tomcat process on my Mac. I can see both processes running on port 8080.
LT<XXXX>-MAC:~ b0<XXX>$ sudo netstat -anp tcp | grep LISTEN
tcp46 0 0 *.8080 *.* LISTEN
tcp4 0 0 *.8080 *.* LISTEN
Another way is to use a program that listens on one port, analyses the kind of traffic (ssh, https, etc.), and redirects it internally to another port on which the "real" service is listening.
For example, for Linux, sslh: https://github.com/yrutschle/sslh
If at least one of the remote IPs is already known, static, and dedicated to talking only to one of your apps, you may use an iptables rule (table nat, chain PREROUTING) to redirect incoming traffic from that address, destined to the "shared" local port, to any other port where the appropriate application actually listens.
Yes.
From this article:
https://lwn.net/Articles/542629/
The new socket option allows multiple sockets on the same host to bind to the same port
Yes and no. Only one application can actively listen on a port. But that application can bequeath its connection to another process. So you could have multiple processes working on the same port.
You can make two applications listen for the same port on the same network interface.
There can only be one listening socket for the specified network interface and port, but that socket can be shared between several applications.
If you have a listening socket in an application process and you fork that process, the socket will be inherited, so technically there will now be two processes listening on the same port.
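A minimal POSIX-only sketch of that inheritance in Python (arbitrary port 8080): after fork(), parent and child share the same listening socket, and either one can accept() an incoming connection:

    import os
    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", 8080))
    listener.listen(5)

    pid = os.fork()
    role = "child" if pid == 0 else "parent"
    conn, addr = listener.accept()   # whichever process wins gets this connection
    print(role, "accepted a connection from", addr)
    conn.close()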
I have tried the following, with socat:
socat TCP-L:8080,fork,reuseaddr -
And even though I have not made a connection to the socket, I cannot listen twice on the same port, in spite of the reuseaddr option.
I get this message (which I expected before):
2016/02/23 09:56:49 socat[2667] E bind(5, {AF=2 0.0.0.0:8080}, 16): Address already in use
If by applications you mean multiple processes, then yes, but generally NO.
For example, the Apache server runs multiple processes on the same port (generally 80). It's done by designating one of the processes to actually bind to the port, and then using that process to hand connections over to the various processes that accept them.
Short answer:
Going by the answer given here, you can have two applications listening on the same IP address and port number, as long as one of them uses UDP and the other uses TCP.
Explanation:
The concept of a port belongs to the transport layer of the TCP/IP stack, so as long as you are using different transport-layer protocols, you can have multiple processes listening on the same <ip-address>:<port> combination.
One doubt that people have: if two applications are running on the same <ip-address>:<port> combination, how will a client running on a remote machine distinguish between the two? If you look at the IP-layer packet header (https://en.wikipedia.org/wiki/IPv4#Header), you will see that bits 72 to 79 define the protocol; this is how the distinction is made.
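A short sketch of that in Python (port 9000 chosen arbitrarily): a TCP socket and a UDP socket both bind to the same port without conflict, because packets are demultiplexed by the protocol field first:

    import socket

    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.bind(("0.0.0.0", 9000))
    tcp.listen(5)

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("0.0.0.0", 9000))   # no conflict with the TCP socket above

    print("TCP and UDP both bound to port 9000")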
If, however, you want to have two applications on the same TCP <ip-address>:<port> combination, then the answer is no. (An interesting exercise is to launch two VMs, give them the same IP address but different MAC addresses, and see what happens - you will notice that sometimes VM1 gets the packets and other times VM2 does, depending on ARP cache refreshes.)
I suspect that by making two applications run on the same <ip-address>:<port> you want to achieve some kind of load balancing. For that, you can run the applications on different ports and write iptables rules to split the traffic between them.
Also see @user6169806's answer.