network redirection of 2 physical serial ports - pyserial

I have a question about the best way to redirect a serial stream over a TCP/IP connection, and I have some restrictions:
The network connection might be unstable
Communication is one way only
Communication has to be as real-time as possible to avoid buffers getting bigger and bigger
Serial speeds are different: the RX side is faster than the TX side
I've played with socat, but most of the examples are for pty virtual serial ports, and I haven't managed to make them work with a pair of physical serial ports.
The ser2net daemon on LEDE/OpenWrt seems to be unstable.
I had a look at pyserial, but I can only find a server-to-client example, and my Python coding skills are terrible.
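For illustration only, a minimal one-way pyserial-to-TCP bridge along the lines described above might look like the sketch below; the device path, baud rate, and remote host/port are placeholders, and the receiving side could open its end with pyserial's socket:// URL handler.

```python
# One-way bridge sketch: physical serial port -> TCP socket.
# Device path, baud rate, and remote host/port are placeholders.
import serial
import socket

ser = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.05)
sock = socket.create_connection(("192.0.2.10", 7000))
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # keep latency low

try:
    while True:
        data = ser.read(ser.in_waiting or 1)   # forward whatever has arrived
        if data:
            sock.sendall(data)
finally:
    sock.close()
    ser.close()
```

TCP_NODELAY keeps small writes from being delayed, which helps the real-time requirement; if the network stalls, sendall() blocks rather than growing an unbounded buffer (at the cost of possibly losing serial bytes during the stall).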

Related

DDoS for server congestion using OMNeT++

I am currently trying to develop a DDoS model to create congestion at a server (using StandardHost) in OMNeT++. Can anyone tell me how to develop one? I would like to create congestion at the server, not on the communication link.
Default models and applications in the INET framework do NOT model the CPU resource constraints of a node (i.e., they assume that the server has an infinite number of CPUs to process the requests). Resource requirements for packet processing (apart from memory buffers) are not modeled either. You have to write your own components and add them to INET in the right places. In other words, out of the box, INET is not suitable to model that kind of problem (however, you can add your own models to it).

Is transmitting a file over multiple sockets faster than just using one socket?

In this old project (from 2002), it says that if you split a file into multiple chunks and then transmit each chunk using a different socket, it will arrive much faster than transmitting it as a whole using one socket. I also remember reading (many years ago) that some download managers also use this technique. How accurate is this?
Given that a single TCP connection with large windows or small RTT can saturate any network link, I don't see what benefit you expect from multiple TCP sessions. Each new piece will begin with slow-start and so have a lower transfer-rate than an established connection would have.
TCP already has code for high-throughput, high-latency connections ("window scale option") and dealing with packet loss. Attempting to improve upon this with parallel connections will generally have a negative effect by having more failure cases and increased packet loss (due to congestion which TCP on a single connection can manage).
Multiple TCP sessions are only beneficial if you're doing simultaneous fetches from different peers and the network bottleneck is outside your local network (like BitTorrent), or if the server is applying bandwidth limits per connection (at which point you're optimizing for the server, not TCP or the network).
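For what it's worth, the download-manager technique the question describes is just parallel HTTP Range requests. A rough Python sketch, assuming a placeholder URL and a server that answers HEAD and honors Range:

```python
# Fetch byte ranges of one file over several parallel TCP connections.
import concurrent.futures
import urllib.request

URL = "http://example.com/big.iso"   # placeholder
PARTS = 4

def fetch_range(start, end):
    req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

head = urllib.request.urlopen(urllib.request.Request(URL, method="HEAD"))
total = int(head.headers["Content-Length"])
step = total // PARTS
ranges = [(i * step, total - 1 if i == PARTS - 1 else (i + 1) * step - 1)
          for i in range(PARTS)]

with concurrent.futures.ThreadPoolExecutor(PARTS) as pool:
    chunks = sorted(pool.map(lambda r: fetch_range(*r), ranges))

with open("big.iso", "wb") as out:
    for _, data in chunks:
        out.write(data)
```

As the answer points out, on a single uncongested path this usually buys nothing; it mainly helps when the server throttles each connection.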

How does performance of TCP and Unix Domain Sockets scale with number of processes and payload size?

Say I have a server on the same machine as some workers, each of which is talking to it. They could be talking over TCP or Unix domain sockets. How does the performance scale with the number of workers and the message size?
When I speak of performance, I'm looking for not only mean latencies, but also p90 and p99 latencies.
As for TCP, you can measure its performance yourself (like this). Do a few tests.
Set the length of the buffer to read or write, run iperf in server mode, and run a few iperf processes in client mode. Then change the buffer length and repeat.
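If you'd rather measure in-process than with iperf, here is a rough POSIX-only Python sketch comparing loopback TCP with a Unix domain socket; the port, socket path, payload size, and round count are arbitrary assumptions:

```python
# Fork an echo server, ping-pong a payload, and report mean/p90/p99 latency.
import os, socket, statistics, time

def bench(family, addr, payload=b"x" * 1024, rounds=5000):
    if family == socket.AF_UNIX and os.path.exists(addr):
        os.unlink(addr)                      # stale socket file from a previous run
    pid = os.fork()
    if pid == 0:                             # child: single-connection echo server
        srv = socket.socket(family, socket.SOCK_STREAM)
        srv.bind(addr)
        srv.listen(1)
        conn, _ = srv.accept()
        while (data := conn.recv(65536)):
            conn.sendall(data)
        os._exit(0)
    time.sleep(0.2)                          # crude wait for the listener to come up
    cli = socket.socket(family, socket.SOCK_STREAM)
    cli.connect(addr)
    lat = []
    for _ in range(rounds):
        t0 = time.perf_counter()
        cli.sendall(payload)
        remaining = len(payload)
        while remaining:                     # read the full echo back
            remaining -= len(cli.recv(remaining))
        lat.append(time.perf_counter() - t0)
    cli.close()
    os.waitpid(pid, 0)
    lat.sort()
    return statistics.mean(lat), lat[int(rounds * 0.90)], lat[int(rounds * 0.99)]

print("tcp:", bench(socket.AF_INET, ("127.0.0.1", 5555)))        # placeholder port
print("uds:", bench(socket.AF_UNIX, "/tmp/latency-bench.sock"))  # placeholder path
```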

What are the benefits of removing fragmentation from IPv6?

I was working on a project which involves developing an application using Java sockets. While reading some fundamentals of the newly arriving IPv6 paradigm, I was motivated to ask the question below:
What are the benefits of removing fragmentation from IPv6?
It would be helpful if someone could give me an understanding of why.
I have researched on the internet but haven't found any useful description.
It is a common misunderstanding that there is no IPv6 fragmentation because the IPv6 header doesn't have the fragment-offset field that IPv4 does; however, that's not exactly accurate. IPv6 doesn't allow routers to fragment packets; however, end nodes may insert an IPv6 fragmentation header [1].
As RFC 5722 states [2], one of the problems with fragmentation is that it tends to create security holes. During the late 1990s there were several well-known attacks on Windows 95 that exploited overlapping IPv4 fragments [3]; furthermore, in-line fragmentation of packets is risky to burn into internet router silicon due to the long list of issues that must be handled. One of the biggest issues is that overlapping fragments buffered in a router (awaiting reassembly) could potentially cause a security vulnerability on that device if they are mishandled. The end result is that most router implementations push packets requiring fragmentation to software, and this doesn't scale at large speeds.
The other issue is that if you reassemble fragments, you must buffer them for a period of time until the rest are received. It is possible for someone to leverage this dynamic and send very large numbers of unfinished IP fragments, forcing the device in question to expend significant resources waiting for an opportunity to reassemble. Intelligent implementations limit the number of outstanding fragments to prevent a denial of service from this; however, limiting outstanding fragments could legitimately affect the number of valid fragments that can be reassembled.
In short, there are just too many hairy issues to allow a router to handle fragmentation. If IPv6 packets require fragmentation, host implementations should be smart enough to use Path MTU Discovery. That also implies that several ICMPv6 messages need to be permitted end-to-end; interestingly, many IPv4 firewall admins block ICMP to guard against hostile network mapping (and then naively block all ICMPv6), not realizing that blocking all ICMPv6 breaks things in subtle ways [4].
**END-NOTES:**
1. See Section 4.5 of the Internet Protocol, Version 6 (IPv6) Specification.
2. From RFC 5722: Handling of Overlapping IPv6 Fragments: "Commonly used firewalls use the algorithm specified in [RFC1858] to weed out malicious packets that try to overwrite parts of the transport-layer header in order to bypass inbound connection checks. [RFC1858] prevents an overlapping fragment attack on an upper-layer protocol (in this case, TCP) by recommending that packets with a fragment offset of 1 be dropped. While this works well for IPv4 fragments, it will not work for IPv6 fragments. This is because the fragmentable part of the IPv6 packet can contain extension headers before the TCP header, making this check less effective."
3. See Teardrop attack (Wikipedia).
4. See RFC 4890: Recommendations for Filtering ICMPv6 Messages in Firewalls.
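To illustrate the Path MTU Discovery obligation placed on hosts, here is a hedged, Linux-only Python sketch; the destination address is a documentation placeholder, and the raw option values are fallbacks (Linux header values) for Python builds that don't export these constants:

```python
# Ask the kernel to do PMTUD (never fragment locally), then read back the
# path MTU it has learned for this destination.
import socket

IPV6_MTU_DISCOVER = getattr(socket, "IPV6_MTU_DISCOVER", 23)  # Linux value
IPV6_PMTUDISC_DO  = getattr(socket, "IPV6_PMTUDISC_DO", 2)
IPV6_MTU          = getattr(socket, "IPV6_MTU", 24)

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IPV6, IPV6_MTU_DISCOVER, IPV6_PMTUDISC_DO)
s.connect(("2001:db8::1", 9))            # placeholder destination, discard port
try:
    s.send(b"x" * 5000)                  # larger than any plausible MTU
except OSError as exc:
    print("refused instead of fragmented:", exc)   # typically EMSGSIZE
print("kernel path MTU estimate:", s.getsockopt(socket.IPPROTO_IPV6, IPV6_MTU))
```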
I don't have the "official" answer for you, but just based on reading how IPv6 handles datagrams that are too large, my guess would be to reduce the load on routers. Fragmentation and reassembly incur overhead at the router. IPv6 moves this burden to the end nodes and requires that they perform MTU discovery to determine the maximum datagram size they can send. It stands to reason that the end nodes are better suited for the task because they have less data to process. Effectively, the routers have enough on their plates; it makes sense to force the nodes to deal with it and allow the routers to simply drop anything that exceeds their MTU threshold.
Ideally, the end result would be that routers can handle a larger load under IPv6 (all things being equal) than they did under IPv4 because there is no fragmentation/reassembly that they have to worry about. That processor power can be dedicated to routing traffic.
IPv4 has a guaranteed minimum MTU of 576 bytes, while IPv6's is 1,280 bytes (with 1,500 bytes being the recommendation); the difference is basically performance. As most end-user LAN segments are 1,500 bytes, it reduces network infrastructure overhead for storing state due to additional fragmentation from what are effectively legacy networks that require smaller sizes.
For UDP there is no definition in the IPv4 standards about reconstruction of fragmented packets, which means every platform can handle it differently. IPv6 asserts that fragmentation and reassembly will always occur in the IP stack and that fragments will not be presented to applications.

Are there applications where the number of network ports is not enough?

In TCP/IP, the port number is specified by a 16-bit field, yielding a total of 65,536 port numbers. However, the lower range (I don't really know how far it goes) is reserved for the system and cannot be utilized by applications. Assuming that 60,000 port numbers are available, that should be more than plenty for most network applications. However, MMORPG games often have tens of thousands of concurrently connected users at a time.
This got me wondering: Are there situations where a network application can run out of ports? How can this limitation be worked around?
You don't need one port per connection.
A connection is uniquely identified by a tuple of (host address, host port, remote address, remote port). It's likely your host IP address is the same for each connection, but you can still service 100,000 clients on a single machine with just one port. (In theory: you'll run into problems, unrelated to ports, before that.)
The canonical starter resource for this problem is Dan Kegel's C10K page from 1999.
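A tiny sketch of the point above: one listening port, many concurrent connections, each distinguished only by the remote (address, port) half of the tuple. The bind address and port are placeholders.

```python
# Many clients, one local port: each connection is told its own peer tuple.
import socket
import threading

def handle(conn, peer):
    with conn:
        conn.sendall(f"you are {peer}\n".encode())   # peer = (remote IP, remote port)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8000))      # a single local port for every client
srv.listen(1024)
while True:
    conn, peer = srv.accept()    # identified by (local IP, 8000, peer IP, peer port)
    threading.Thread(target=handle, args=(conn, peer), daemon=True).start()
```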
The lower range you refer to is probably the range below 1024 on most Unix-like systems. This range is reserved for privileged applications; an application running as a normal user cannot start listening on ports below 1024.
An upper range is often used by the OS for ephemeral (return) ports and for NAT when creating outgoing connections.
In short, because of how TCP works, ports can run out if a lot of connections are made and then closed. The limitation can be mitigated to some extent by using long-lived connections, one for each client.
In HTTP, this means using HTTP 1.1 and keep-alive.
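As a small illustration of the keep-alive point (placeholder host and paths, not anything from the question), Python's http.client reuses one TCP connection across HTTP/1.1 requests as long as each response is fully read:

```python
# One TCP connection, several HTTP/1.1 requests: no new ephemeral port
# (and no extra TIME_WAIT entry) per request.
import http.client

conn = http.client.HTTPConnection("example.com")
for path in ("/", "/a", "/b"):
    conn.request("GET", path)       # sent over the same socket each time
    resp = conn.getresponse()
    resp.read()                     # the response must be drained before reuse
    print(path, resp.status)
conn.close()
```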
There are 2^16 = 65,536 ports per IP address. In other words, for a computer with one IP address to run out of ports, it would have to use more than 65,536 ports, which will never happen naturally!
You have to understand that a socket, which is (IP + port), is the endpoint of end-to-end communication.
IPv4 is 32-bit, so let's say it can address around 2^32 computers publicly (regardless of NATing).
So there are 2^16 * 2^32 = 2^48 public sockets possible (which is on the order of 10^14), so there will not be a conflict (again, regardless of NATing).
However, IPv6 was introduced to allow more public IPs.
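For what it's worth, the arithmetic above spelled out (NAT ignored, as in the answer):

```python
ports = 2 ** 16            # 65,536 ports per address
addresses = 2 ** 32        # IPv4 address space
print(ports * addresses)   # 2**48 = 281,474,976,710,656, roughly 2.8e14 (IP, port) pairs
```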