Are there applications where the number of network ports is not enough? - sockets

In TCP/IP, the port number is specified by a 16-bit field, yielding a total of 65,536 port numbers. However, the lower range (I don't know exactly how far it goes) is reserved for the system and cannot be used by applications. Assuming that 60,000 port numbers are available, that should be more than plenty for most network applications. However, MMORPG games often have tens of thousands of concurrently connected users at a time.
This got me wondering: Are there situations where a network application can run out of ports? How can this limitation be worked around?

You don't need one port per connection.
A connection is uniquely identified by a tuple of (host address, host port, remote address, remote port). It's likely your host IP address is the same for each connection, but you can still service 100,000 clients on a single machine with just one port. (In theory: you'll run into problems, unrelated to ports, before that.)
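To make that concrete, here is a minimal Python sketch of an echo server that handles any number of clients over a single listening port (9000 is an arbitrary choice); each connection is told apart by the remote address/port half of the tuple:

```python
# Minimal sketch: one listening port, arbitrarily many client connections.
# Each accepted connection is distinguished by the client's (address, port),
# so the server only ever uses local port 9000.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, addr = server_sock.accept()          # addr = (remote IP, remote port)
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)                     # trivial echo service
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))                 # the single local port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():
        key.data(key.fileobj)                  # dispatch to accept() or echo()
```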

The canonical starter resource for this problem is Dan Kegel's C10K page from 1999.
The lower range you refer to is probably the range below 1024 on most Unix-like systems. This range is reserved for privileged applications; an application running as a normal user cannot listen on ports below 1024.
An upper range is often used by the OS for ephemeral (return) ports and NAT when creating outgoing connections.
In short, because of how TCP works (closed connections linger in the TIME_WAIT state for a while), ports can run out if a lot of connections are made and then closed in a short time. The limitation can be mitigated to some extent by using long-lived connections, one for each client.
In HTTP, this means using HTTP 1.1 and keep-alive.
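For example, in Python the difference is just reusing one connection object instead of creating a new one per request (the host and request count below are arbitrary):

```python
# Sketch: one long-lived HTTP/1.1 connection (keep-alive) instead of a new TCP
# connection -- and a new ephemeral source port -- per request.
# Host and request count are arbitrary examples.
import http.client

conn = http.client.HTTPConnection("example.com")   # one TCP connection
for i in range(100):
    conn.request("GET", "/")                        # all requests reuse the same socket
    resp = conn.getresponse()
    resp.read()                                     # drain the body before the next request
    print(i, resp.status)
conn.close()
```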

There are 2^16 = 65,536 ports per IP address. In other words, for a computer with one IP address to run out of ports it would have to use more than 65,536 ports at once, which will never happen naturally!
You have to understand that a socket (IP + port) is the end-to-end endpoint for communication.
IPv4 addresses are 32 bits, so let's say it can address around 2^32 computers publicly (regardless of NAT).
So there are 2^16 * 2^32 = 2^48 public sockets possible (on the order of 10^14), so there will not be a conflict (again regardless of NAT).
However, IPv6 was introduced to allow more public IPs.
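As a quick back-of-the-envelope check (pure arithmetic, nothing else assumed):

```python
# Back-of-the-envelope arithmetic from the paragraph above.
ports = 2 ** 16               # 65,536 ports per IP address
ipv4_addresses = 2 ** 32      # ~4.3 billion public IPv4 addresses (ignoring NAT)
print(f"{ports * ipv4_addresses:,}")   # 281,474,976,710,656 ~ 2.8 * 10^14 (IP, port) pairs
```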

Related

Using leastconn with weights in HAProxy

I would like to be able to use leastconn in HAProxy while still having different weights on backend hosts.
Context: I have ~100 backend hosts that are being accessed by ~2000 front-end hosts. All backend hosts process requests the same way (no faster hosts), however some backend hosts can process more requests (they have more cores). The problem is that I cannot use round robin as it is, because sometimes a backend host gets stuck with long connections, and with round robin it will keep receiving more and more front-end connections, which it never recovers from. In the current situation, I use leastconn, so all backend hosts process roughly the same number of requests, but I don't optimize their CPU usage.
What I would like to achieve is to still use leastconn, but allowing more connections to certain hosts. For example, if we have only 2 hosts: host A with 1 core and host B with 2 cores. At any moment, I would like HAProxy to decide which host to pick based on:
x = num_current_connections_A, y = 0.5 * num_current_connections_B. If x <= y, go to A; otherwise go to B.
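To make the rule concrete, here is a small Python sketch of the selection I have in mind, generalized to per-host weights (host names, weights and connection counts are made up):

```python
# Sketch of the weighted-leastconn rule above: pick the backend whose current
# connection count divided by its weight (here, core count) is lowest.
# Host names, weights and connection counts are made-up examples.
backends = {
    "hostA": {"weight": 1, "connections": 40},
    "hostB": {"weight": 2, "connections": 70},
}

def pick_backend(backends):
    # hostA: 40 / 1 = 40.0, hostB: 70 / 2 = 35.0 -> hostB gets the next connection
    return min(backends, key=lambda h: backends[h]["connections"] / backends[h]["weight"])

print(pick_backend(backends))   # -> hostB
```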
I read this post which states the same issue, but no answer really solved my problem: http://haproxy.formilux.narkive.com/I6hSmq8H/balance-leastconn-does-not-honor-weight
Thank you

network redirection of 2 physical serial ports

I have a question about the best way to redirect a serial stream over a TCP/IP connection, and I have some restrictions:
The network connection might be unstable
Communication is one way only
Communication has to be as real-time as possible, to avoid buffers getting bigger and bigger
Serial speed is different: the RX side is faster than the TX side
I've played with socat but most of the examples are for pty virtual serial ports and I haven't managed to make them work with a pair of physical serial ports.
The ser2net daemon on LEDE/OpenWrt seems to be unstable.
I had a look at pyserial, but I can only find a server-to-client example, and my Python coding skills are terrible.
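For reference, what I have in mind is roughly this Python sketch using pyserial (device path, baud rate and the remote address are placeholders):

```python
# Rough one-way forwarder: read from a physical serial port and push the bytes
# over TCP to the receiving machine, reconnecting if the network drops.
# Device path, baud rate and remote address are placeholders; needs pyserial.
import socket
import time

import serial  # pip install pyserial

SERIAL_DEV = "/dev/ttyUSB0"        # physical port on the sending side (example)
BAUD = 115200
REMOTE = ("192.0.2.10", 5000)      # receiving side (example address)

ser = serial.Serial(SERIAL_DEV, BAUD, timeout=0.1)

while True:
    try:
        with socket.create_connection(REMOTE, timeout=5) as sock:
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # keep latency low
            while True:
                data = ser.read(4096)          # whatever arrived within the timeout
                if data:
                    sock.sendall(data)
    except OSError:
        time.sleep(1)                          # network dropped: wait and reconnect
```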

How to query or calculate port utilization for devices registered with CUCM

I need to query or calculate port utilization for various devices registered with Cisco CUCM, for example, H323 Gateway Port Utilization, FXS Port Utilization, BRI Channel Utilization etc.
Are these metrics available from CUCM? If yes, is it possible to query them using AXL? SNMP?
If the port utilization metrics are not available, how can I query the total number of ports configured for each device registered with CUCM using AXL? I think I can obtain the number of currently active ports using the AXL PerfmonPort service. If I find a way to query the total number of ports, I can calculate the port utilization as follows:
FXO port utilization = 100% * number of active FXO ports / total number of registered FXO ports.
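For example (the counts here are made up; in practice they would come from AXL/Perfmon):

```python
# Sketch of the calculation above; the counts are made up and would come from
# CUCM (e.g. the PerfmonPort service) in practice.
def port_utilization(active_ports, total_ports):
    if total_ports == 0:
        return 0.0
    return 100.0 * active_ports / total_ports

print(port_utilization(active_ports=3, total_ports=4))   # 75.0 (% FXO utilization)
```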
Thank you!
There are some paid products like Solarwinds that will do this for you. I personally prefer Cacti though - it will use SNMP to poll routers, switches, and CUCM itself for data. I'm able to use SNMP to show CUBE concurrent calls, PRI concurrent calls, CUBE transcoding and even CUCM itself. Generally, if it's a router component, you can monitor it with SNMP.
Here is an intro to monitoring CUCM with SNMP:
https://www.ucguru.com/monitoring-callmanager/
I will say that it takes some time to get up and running correctly. You may need different MIBs for each router model.
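As a starting point, a generic SNMP GET from Python looks something like this (a sketch assuming pysnmp 4.x; the address and community string are examples, and you would swap the standard sysDescr object used here for the Cisco MIB objects you actually care about):

```python
# Generic SNMP GET sketch (assumes pysnmp 4.x). The address and community
# string are examples; swap the standard sysDescr OID used here for the
# Cisco MIB objects you want to graph.
from pysnmp.hlapi import (CommunityData, ContextData, ObjectIdentity,
                          ObjectType, SnmpEngine, UdpTransportTarget, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public"),                           # SNMP v2c community (example)
        UdpTransportTarget(("192.0.2.1", 161)),            # router / CUCM address (example)
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),   # sysDescr from MIB-II
    )
)

if error_indication:
    print(error_indication)
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```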

Is transmitting a file over multiple sockets faster than just using one socket?

In this old project (from 2002), it says that if you split a file into multiple chunks and then transmit each chunk using a different socket, it will arrive much faster than transmitting it as a whole using one socket. I also remember reading (many years ago) that some download managers also use this technique. How accurate is this?
Given that a single TCP connection with large windows or small RTT can saturate any network link, I don't see what benefit you expect from multiple TCP sessions. Each new piece will begin with slow-start and so have a lower transfer-rate than an established connection would have.
TCP already has code for high-throughput, high-latency connections ("window scale option") and dealing with packet loss. Attempting to improve upon this with parallel connections will generally have a negative effect by having more failure cases and increased packet loss (due to congestion which TCP on a single connection can manage).
Multiple TCP sessions are only beneficial if you're doing simultaneous fetches from different peers and the network bottleneck is outside your local network (like BitTorrent), or if the server is applying bandwidth limits per connection (at which point you're optimizing around the server, not TCP or the network).
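For completeness, what download managers do in that last case is plain parallel HTTP range requests, roughly like this sketch (the URL and chunk count are placeholders, and the server must support Range requests):

```python
# Sketch of the download-manager technique: fetch byte ranges of one file over
# several parallel connections, then stitch them together. The URL and chunk
# count are placeholders, and the server must support HTTP Range requests.
import concurrent.futures
import urllib.request

URL = "http://example.com/big.iso"   # placeholder URL
CHUNKS = 4

def content_length(url):
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return int(resp.headers["Content-Length"])

def fetch_range(url, start, end):
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:    # each call opens its own TCP connection
        return resp.read()

size = content_length(URL)
bounds = [(i * size // CHUNKS, (i + 1) * size // CHUNKS - 1) for i in range(CHUNKS)]

with concurrent.futures.ThreadPoolExecutor(max_workers=CHUNKS) as pool:
    parts = list(pool.map(lambda b: fetch_range(URL, *b), bounds))

print(len(b"".join(parts)), "bytes downloaded")
```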

What are the benefits of removing fragmentation from IPv6?

I was working on a project which includes developing an application using Java sockets. While reading some fundamentals of the newly emerging IPv6 paradigm, I was motivated to ask the question below:
What are the benefits of removing fragmentation from IPv6?
It would be helpful if someone could give me an understanding of why.
I have researched on the internet but haven't found any useful description.
It is a common misunderstanding that there is no IPv6 fragmentation because the IPv6 header doesn't have the fragment-offset field that IPv4 does; however, that's not exactly accurate. IPv6 doesn't allow routers to fragment packets; however, end nodes may insert an IPv6 fragmentation header [1].
As RFC 5722 states [2], one of the problems with fragmentation is that it tends to create security holes. During the late 1990s there were several well-known attacks on Windows 95 that exploited overlapping IPv4 fragments [3]; furthermore, in-line fragmentation of packets is risky to burn into internet router silicon due to the long list of issues that must be handled. One of the biggest issues is that overlapping fragments buffered in a router (awaiting reassembly) could potentially cause a security vulnerability on that device if they are mishandled. The end result is that most router implementations push packets requiring fragmentation to software; this doesn't scale at large speeds.
The other issue is that if you reassemble fragments, you must buffer them for a period of time until the rest are received. It is possible for someone to leverage this dynamic and send very large numbers of unfinished IP fragments, forcing the device in question to spend many resources waiting for an opportunity to reassemble them. Intelligent implementations limit the number of outstanding fragments to prevent a denial of service from this; however, limiting outstanding fragments could legitimately affect the number of valid fragments that can be reassembled.
In short, there are just too many hairy issues to allow a router to handle fragmentation. If IPv6 packets require fragmentation, host implementations should be smart enough to use Path MTU Discovery. That also implies that several ICMPv6 messages need to be permitted end-to-end; interestingly, many IPv4 firewall admins block ICMP to guard against hostile network mapping (and then naively block all ICMPv6), not realizing that blocking all ICMPv6 breaks things in subtle ways [4].
**END-NOTES:**
[1] See Section 4.5 of the Internet Protocol, Version 6 (IPv6) Specification.
[2] From RFC 5722, Handling of Overlapping IPv6 Fragments: "Commonly used firewalls use the algorithm specified in [RFC1858] to weed out malicious packets that try to overwrite parts of the transport-layer header in order to bypass inbound connection checks. [RFC1858] prevents an overlapping fragment attack on an upper-layer protocol (in this case, TCP) by recommending that packets with a fragment offset of 1 be dropped. While this works well for IPv4 fragments, it will not work for IPv6 fragments. This is because the fragmentable part of the IPv6 packet can contain extension headers before the TCP header, making this check less effective."
[3] See the Teardrop attack (Wikipedia).
[4] See RFC 4890: Recommendations for Filtering ICMPv6 Messages in Firewalls.
I don't have the "official" answer for you, but just based on reading how IPv6 handles datagrams that are too large, my guess would be to reduce the load on routers. Fragmentation and reassembly incur overhead at the router. IPv6 moves this burden to the end nodes and requires that they perform MTU discovery to determine the maximum datagram size they can send. It stands to reason that the end nodes are better suited for the task because they have less data to process. Effectively, the routers have enough on their plates; it makes sense to force the nodes to deal with it and allow the routers to simply drop anything that exceeds their MTU threshold.
Ideally, the end result would be that routers can handle a larger load under IPv6 (all things being equal) than they did under IPv4 because there is no fragmentation/reassembly that they have to worry about. That processor power can be dedicated to routing traffic.
IPv4 has a guaranteed minimum MTU of 576 bytes; for IPv6 it is 1,280 bytes, with 1,500 bytes recommended. The difference is basically performance. As most end-user LAN segments are 1,500 bytes, it reduces the network infrastructure overhead of storing state for extra fragmentation coming from what are effectively legacy networks that require smaller sizes.
For UDP there is no definition in the IPv4 standards about reconstruction of fragmented packets, which means every platform can handle it differently. IPv6 asserts that fragmentation and reassembly always occur in the IP stack and that fragments will not be presented to applications.
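A small sketch of what that means in practice (addresses and sizes are arbitrary; over a real 1,500-byte link the sending stack would insert IPv6 fragment headers, but the application on either end only ever sees the whole datagram):

```python
# Sketch: the application hands the IPv6 stack a UDP datagram larger than a
# typical 1,500-byte link MTU; any fragmentation happens in the end hosts'
# stacks, and the receiver still gets one complete datagram. Addresses and
# sizes are arbitrary examples.
import socket

PAYLOAD = b"x" * 4000          # bigger than a 1,500-byte Ethernet MTU

receiver = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
receiver.bind(("::1", 5001))

sender = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sender.sendto(PAYLOAD, ("::1", 5001))

data, addr = receiver.recvfrom(65535)
print(len(data))               # 4000 -- the fragments are never visible here
```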