DDoS for server congestion using OMNeT++ - server

I am currently trying to develop a DDoS model to create congestion at a server (using standardHost) in OMNeT++. Can anyone tell me how to develop one? I would like to create congestion at the server, not on the communication link.

The default models and applications in the INET framework do NOT model the CPU resource constraints of a node (i.e. they assume that the server has an infinite number of CPUs to process requests). Resource requirements for packet processing (apart from memory buffers) are not modeled either. You have to write your own components and add them to INET in the right places. In other words, out of the box INET is not suitable for modeling this kind of problem (though you can add your own models to it).
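To make the missing piece concrete, here is a rough conceptual sketch in plain Python (it is NOT OMNeT++/INET code, and every rate and service time in it is an invented assumption): a server with a single CPU and a fixed per-request service time, so that requests arriving faster than they can be served pile up in a queue and response times grow. That queueing behaviour is exactly what a custom INET component would need to model.

```python
import heapq, random

# Conceptual sketch only (NOT OMNeT++/INET code): a server with one CPU and a
# fixed service time per request. When requests arrive faster than they can be
# served, they queue up and the response time explodes -- the server-side
# congestion the question is after. All numbers below are invented.
SERVICE_TIME = 0.010      # seconds of CPU time per request (assumption)
ARRIVAL_RATE = 150.0      # attack requests per second (assumption; > 1/SERVICE_TIME)
SIM_END = 5.0             # simulated seconds

arrivals, t = [], 0.0
while t < SIM_END:
    t += random.expovariate(ARRIVAL_RATE)   # Poisson arrivals
    heapq.heappush(arrivals, t)

cpu_free_at, delays = 0.0, []
while arrivals:
    arrival = heapq.heappop(arrivals)
    start = max(arrival, cpu_free_at)       # wait while the single CPU is busy
    cpu_free_at = start + SERVICE_TIME
    delays.append(cpu_free_at - arrival)

print(f"mean response time: {sum(delays)/len(delays):.3f} s "
      f"(pure service time is {SERVICE_TIME:.3f} s)")
```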

Related

Database interaction in an IOT network

Suppose we have several (100) nodes in an IoT network. Each node has limited resources. A PostgreSQL database server runs on one of these nodes. Every node has several (4-5) processes that need to interact with this server to perform insert and select queries, and each query response has to be as fast as possible for the process to work as it should. Some ways I can think of to do this:
1. Each process in a node creates its own database client and performs its queries directly.
2. All processes in a node send their queries to a local broker on the same host, which performs them through an optimal number of database clients. This gives some control over the clients, e.g. ordering queries via a priority queue, or running each database client in its own thread/process. In this case we control the number of clients, the number of threads/processes, and the order in which queries are executed.
3. Each node sends all of its queries over some network protocol directly to the database server, which uses a limited number of database clients to run them against its own local database and returns the responses to each node over the same channel. This increases latency but keeps the number of clients minimal, and the same optimisations (one client per thread/process, etc.) can be applied on the server. Database interaction itself can be faster here because the few clients run on the database host itself, but there is extra overhead in transferring the query responses back to the node's process.
To keep resource usage in every node as low as possible and query responses as fast as possible, what is the best strategy to solve this problem?
Without knowing the networking details, option 3 would normally be used (a rough sketch follows the list of reasons below).
Reasons:
Authentication: Typically you do not want to use database users to authenticate IoT devices.
Security: By using a specific IoT protocol, you can be sure to use TLS and certificate based server authentication.
Protocol compatibility: in upgrade scenarios you must be able to upgrade the client nodes independently of the server node, or vice versa. That may not be possible with the database's wire protocol.
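To make option 3 a bit more concrete, here is a minimal sketch of a query broker running on the database host. It is only an illustration: the TCP port, the newline-delimited JSON wire format and the connection parameters are assumptions, psycopg2 is used as the PostgreSQL client, and a real deployment would use TLS/authentication (e.g. an IoT protocol such as MQTT) rather than a bare socket.

```python
import json, socketserver
from psycopg2.pool import ThreadedConnectionPool

# Hypothetical query broker on the database host: IoT nodes connect over plain
# TCP (port and wire format are assumptions for illustration) and all queries
# are funnelled through a small pool of local PostgreSQL connections.
POOL = ThreadedConnectionPool(minconn=1, maxconn=4,
                              dsn="dbname=iot user=iot host=localhost")

class QueryHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:                      # one JSON query per line
            req = json.loads(line)
            conn = POOL.getconn()
            try:
                with conn, conn.cursor() as cur:     # commit/rollback per query
                    cur.execute(req["sql"], req.get("params") or None)
                    rows = cur.fetchall() if cur.description else []
                self.wfile.write((json.dumps(rows, default=str) + "\n").encode())
            finally:
                POOL.putconn(conn)

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("0.0.0.0", 7777), QueryHandler) as srv:
        srv.serve_forever()
```

Each IoT node then keeps a single connection to this broker and writes its queries as JSON lines, so no node needs a PostgreSQL client of its own.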

network redirection of 2 physical serial ports

I have a question about the best way to redirect a serial stream over a TCP/IP connection, with some restrictions:
The network connection might be unstable
Communication is one way only
Communication has to be as close to real time as possible, to avoid buffers growing bigger and bigger
The serial speeds are different; the RX side is faster than the TX side
I've played with socat, but most of the examples are for pty virtual serial ports and I haven't managed to make them work with a pair of physical serial ports.
The ser2net daemon on LEDE/OpenWrt seems to be unstable.
I had a look at pyserial, but I could only find a server-to-client example, and my Python coding skills are terrible.
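Since pyserial came up: here is a minimal one-way sketch of the pyserial + socket approach (the device names, baud rates, IP address and TCP port are all assumptions, and reconnection on an unstable link is only hinted at):

```python
import socket, sys
import serial   # pyserial

# Minimal one-way redirect sketch with pyserial (device names, baud rates and
# the TCP port below are assumptions; adjust to your setup). Run "rx" on the
# machine with the faster receiving serial port, then "tx" on the sending side.
TCP_PORT = 5331

def tx(device="/dev/ttyUSB0", baud=115200, host="192.168.1.10"):
    """Read from the local serial port and push the bytes over TCP."""
    ser = serial.Serial(device, baud, timeout=0.05)
    with socket.create_connection((host, TCP_PORT)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # low latency
        while True:
            data = ser.read(4096)          # returns b"" on timeout
            if data:
                sock.sendall(data)

def rx(device="/dev/ttyUSB1", baud=230400):
    """Accept one TCP connection and write everything to the local serial port."""
    ser = serial.Serial(device, baud)
    with socket.create_server(("", TCP_PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:               # peer closed; handling of the unstable
                    break                  # network (reconnect loop) would go here
                ser.write(data)

if __name__ == "__main__":
    tx() if sys.argv[1:] == ["tx"] else rx()
```

TCP_NODELAY keeps latency down, and because the RX side's serial port is faster than the TX side's, the data written out drains at least as fast as it arrives, so buffers should not grow without bound.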

What are best practices for kubernetes geo distributed cluster?

What is the best practice for a geo-distributed cluster with asynchronous network channels?
I suspect I would need some "load balancer" which redirects connections within its own DC; do you know of anything like this already in place?
Second question: should we use one HA cluster, or create a dedicated cluster for each DC?
The assumption of the Kubernetes development team is that cross-cluster federation will be the best way to handle cross-zone workloads. The tooling for this is easy to imagine, but has not emerged yet. You can (on your own) set up regional or global load balancers and direct traffic to different clusters based on things like GeoIP.
You should look into Byzantine clients. My team is currently working on a solution for erasure-coded storage in an asynchronous network that prevents some problems caused by faulty clients, but it relies on correct clients to establish a consistent state across the servers.
The network consists of a set of servers {P1, ..., Pn} and a set of clients {C1, ..., Cn}, which are all probabilistic interactive Turing machines (PITMs) with running time bounded by a polynomial in a given security parameter. Servers and clients together are the parties. There is an adversary, which is also a PITM with polynomially bounded running time, and which may control servers and clients; in that case they are called corrupted, otherwise they are called honest. An adversary that controls up to t servers is called t-limited.
If protecting innocent clients from getting inconsistent values is a priority, then you should pursue this; but from the point of view of a client, problems caused by faulty clients don't really hurt the system.

What are the benefits of removing fragmentation from IPv6?

I was working on a project that involves developing an application using Java sockets. While reading some fundamentals of the upcoming IPv6 paradigm, I was motivated to ask the question below:
What are the benefits of removing fragmentation from IPv6?
It would be helpful if someone could help me understand why. I have researched this on the internet but haven't found any useful description.
It is a common misunderstanding that there is no IPv6 fragmentation because the IPv6 header doesn't have the fragment-offset field that IPv4 does; however, that is not exactly accurate. IPv6 doesn't allow routers to fragment packets, but end nodes may insert an IPv6 fragmentation header [1].
As RFC 5722 states [2], one of the problems with fragmentation is that it tends to create security holes. During the late 1990s there were several well-known attacks on Windows 95 that exploited overlapping IPv4 fragments [3]; furthermore, in-line fragmentation of packets is risky to burn into internet router silicon due to the long list of issues that must be handled. One of the biggest is that overlapping fragments buffered in a router (awaiting reassembly) could cause a security vulnerability on that device if they are mishandled. The end result is that most router implementations push packets requiring fragmentation to software, and this doesn't scale at high speeds.
The other issue is that if you reassemble fragments, you must buffer them for a period of time until the rest are received. Someone can exploit this by sending very large numbers of unfinished IP fragments, forcing the device in question to tie up resources waiting for an opportunity to reassemble them. Intelligent implementations limit the number of outstanding fragments to prevent a denial of service from this; however, that limit can also reduce the number of valid fragments that can be reassembled.
In short, there are just too many hairy issues to allow a router to handle fragmentation. If IPv6 packets require fragmentation, host implementations should be smart enough to use TCP Path MTU Discovery. That also implies that several ICMPv6 message types need to be permitted end to end; interestingly, many IPv4 firewall admins block ICMP to guard against hostile network mapping (and then naively block all ICMPv6), not realizing that blocking all ICMPv6 breaks things in subtle ways [4].
**END-NOTES:**
1. See Section 4.5 of the Internet Protocol, Version 6 (IPv6) Specification.
2. From RFC 5722: Handling of Overlapping IPv6 Fragments:
"Commonly used firewalls use the algorithm specified in [RFC1858] to weed out malicious packets that try to overwrite parts of the transport-layer header in order to bypass inbound connection checks. [RFC1858] prevents an overlapping fragment attack on an upper-layer protocol (in this case, TCP) by recommending that packets with a fragment offset of 1 be dropped. While this works well for IPv4 fragments, it will not work for IPv6 fragments. This is because the fragmentable part of the IPv6 packet can contain extension headers before the TCP header, making this check less effective."
3. See the Teardrop attack (Wikipedia).
4. See RFC 4890: Recommendations for Filtering ICMPv6 Messages in Firewalls.
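To illustrate the Path MTU Discovery point from the answer above, here is a minimal sketch. It assumes Linux and a host with working IPv6 connectivity; the destination address is a documentation-prefix placeholder, and the numeric socket-option values are the Linux ones, used as fallbacks in case this Python build does not export them.

```python
import socket

# Minimal Path MTU Discovery illustration for IPv6, assuming Linux and a host
# with IPv6 connectivity. The fallback numbers come from <linux/in6.h>; the
# destination below is a documentation-prefix placeholder.
IPV6_MTU_DISCOVER = getattr(socket, "IPV6_MTU_DISCOVER", 23)
IPV6_PMTUDISC_DO  = getattr(socket, "IPV6_PMTUDISC_DO", 2)   # never fragment locally
IPV6_MTU          = getattr(socket, "IPV6_MTU", 24)

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IPV6, IPV6_MTU_DISCOVER, IPV6_PMTUDISC_DO)
s.connect(("2001:db8::1", 9))          # placeholder destination

try:
    # With "do PMTUD" set, an oversized datagram is rejected locally with
    # EMSGSIZE instead of being fragmented by the stack or by routers.
    s.send(b"x" * 65000)
except OSError as e:
    print("send refused, as expected:", e)

# After real traffic (and any ICMPv6 Packet Too Big feedback), the kernel
# caches the learned path MTU for this destination, readable per socket:
print("path MTU known to the kernel:", s.getsockopt(socket.IPPROTO_IPV6, IPV6_MTU))
```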
I don't have the "official" answer for you, but just based on reading how IPv6 handles datagrams that are too large, my guess would be that it reduces the load on routers. Fragmentation and reassembly incur overhead at the router. IPv6 moves this burden to the end nodes and requires them to perform MTU discovery to determine the maximum datagram size they can send. It stands to reason that the end nodes are better suited for the task because they have less data to process. The routers have enough on their plates; it makes sense to make the end nodes deal with it and let the routers simply drop anything that exceeds their MTU threshold.
Ideally, the end result is that routers can handle a larger load under IPv6 (all things being equal) than they did under IPv4, because there is no fragmentation/reassembly for them to worry about. That processing power can be dedicated to routing traffic.
IPv4 has a guaranteed minimum MTU of 576 bytes; for IPv6 it is 1,280 bytes, with 1,500 bytes recommended. The difference is basically performance: since most end-user LAN segments use 1,500 bytes, it reduces the network-infrastructure overhead of storing state for fragments coming from what are effectively legacy networks that require smaller sizes.
For UDP, the IPv4 standards say nothing about the reconstruction of fragmented packets, which means every platform can handle it differently. IPv6 asserts that fragmentation and reassembly always occur in the IP stack and that fragments are not presented to applications.

Are there applications where the number of network ports is not enough?

In TCP/IP, the port number is specified by a 16-bit field, yielding a total of 65,536 port numbers. However, the lower range (I don't really know how far it goes) is reserved for the system and cannot be used by applications. Assuming that 60,000 port numbers are available, that should be more than plenty for most network applications. However, MMORPG games often have tens of thousands of concurrently connected users at a time.
This got me wondering: Are there situations where a network application can run out of ports? How can this limitation be worked around?
You don't need one port per connection.
A connection is uniquely identified by the tuple (host address, host port, remote address, remote port). Your host IP address and listening port are likely the same for every connection, but the remote address and port differ, so you can still service 100,000 clients on a single machine with just one port. (In theory: you'll run into other problems, unrelated to ports, before that.)
The canonical starter resource for this problem is Dan Kegel's C10K page from 1999.
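As a small illustration of the "one listening port, many connections" point, here is a sketch of an echo server built on the standard selectors module (the port number is arbitrary, and a production server would also handle partial sends and errors):

```python
import selectors, socket

# Sketch of a single listening port serving many concurrent clients (an echo
# server here; the port number is an arbitrary choice). Every accepted
# connection shares local port 8080 -- it is the (local addr, local port,
# remote addr, remote port) tuple that tells them apart.
sel = selectors.DefaultSelector()
listener = socket.create_server(("", 8080))
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, data=None)

while True:
    for key, _ in sel.select():
        if key.data is None:                       # new client on the listener
            conn, _addr = key.fileobj.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, data="client")
        else:                                      # data (or EOF) from a client
            conn = key.fileobj
            chunk = conn.recv(4096)
            if chunk:
                conn.send(chunk)                   # echo back (a real server
            else:                                  # would handle partial sends)
                sel.unregister(conn)
                conn.close()
```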
The lower range you refer to is probably the range below 1024 on most Unix-like systems. This range is reserved for privileged applications; an application running as a normal user cannot start listening on ports below 1024.
An upper range is often used by the OS for ephemeral ("return") ports and for NAT when creating outgoing connections.
In short, because of how TCP works, ports can run out if a lot of connections are made and then closed: each closed connection lingers in the TIME_WAIT state for a while, keeping its local port tied up. The limitation can be mitigated to some extent by using long-lived connections, one for each client.
In HTTP, this means using HTTP/1.1 and keep-alive.
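For example, with the standard library's http.client (which speaks HTTP/1.1 by default) several requests can reuse one TCP connection, so only one local ephemeral port is consumed instead of one per request. The host and paths below are placeholders.

```python
import http.client

# Connection reuse sketch: with HTTP/1.1 keep-alive, many requests share one
# TCP connection (one local ephemeral port) instead of opening and closing a
# connection -- and leaving a TIME_WAIT entry -- per request.
conn = http.client.HTTPConnection("example.com")   # HTTP/1.1 by default
for path in ("/", "/a", "/b"):
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()                                     # drain the body so the
    print(path, resp.status)                        # connection can be reused
conn.close()
```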
There are 2^16 = 65,536 ports per IP address. In other words, for a computer with one IP address to run out of ports, it would have to use more than 65,536 ports, which will never happen naturally!
You have to understand that a socket (IP + port) is the end-to-end endpoint for communication.
IPv4 addresses are 32 bits, so say it can address around 2^32 computers publicly (regardless of NAT).
So there are 2^16 * 2^32 = 2^48 possible public sockets (on the order of 10^14), so there will not be a conflict (again, regardless of NAT).
However, IPv6 was introduced to allow more public IPs.
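For what it's worth, the arithmetic above is easy to sanity-check (a trivial sketch that, like the answer itself, ignores reserved port ranges and NAT):

```python
# Quick check of the socket-count arithmetic above.
ports = 2 ** 16            # 65,536 ports per address
ipv4_addresses = 2 ** 32
sockets = ports * ipv4_addresses
print(f"{sockets:,} possible (address, port) pairs ~ {sockets:.2e}")  # ~2.81e+14
```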