The message delivery rate decreases as the number of nodes in a v2v communication increases - simulation

In my simulation there are several nodes trying to send messages to a fixed RSU. I implemented V2V communication in which a node forwards a message only to neighbours that are closer to the RSU than itself. Sometimes there is no node available for the sender to forward to, which is fine in low-density simulations. But when I increase the number of nodes, the message delivery rate decreases instead of increasing. Has anyone run into this kind of problem?
I tried to adjust my algorithm to route fewer messages, because the problem might be caused by collisions.
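For reference, here is a minimal Python sketch (node IDs and coordinates are made up) of the kind of greedy "forward to a node closer to the RSU" rule described above; picking exactly one next hop instead of handing the message to every closer neighbour is one way to keep the per-message transmission count, and hence the collision pressure, from growing with node density.

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_next_hop(sender_pos, neighbor_positions, rsu_pos):
    """Greedy geographic forwarding: among the neighbours that are closer
    to the RSU than the sender, pick the one closest to the RSU.
    Returns None when no neighbour makes progress (a local maximum)."""
    my_dist = distance(sender_pos, rsu_pos)
    candidates = [(distance(pos, rsu_pos), node_id)
                  for node_id, pos in neighbor_positions.items()
                  if distance(pos, rsu_pos) < my_dist]
    if not candidates:
        return None            # no closer node: hold or drop the message
    return min(candidates)[1]  # forward to exactly one node, not to all of them

# Hypothetical positions: forwarding to a single best neighbour keeps the
# number of transmissions per message (and hence MAC contention) low.
neighbors = {"n1": (40.0, 10.0), "n2": (70.0, 12.0), "n3": (15.0, 5.0)}
print(pick_next_hop((30.0, 8.0), neighbors, rsu_pos=(100.0, 10.0)))  # -> n2
```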

Related

RaspberryPi MQTT broker maximum connections

I want to set up a system of 100-200 sensors that send their data (at a frequency of about once every 30 minutes) to an MQTT broker based on a Raspberry Pi. Sensor data is collected on an ESP8266, which would transmit it via WiFi to the MQTT broker (at a distance of about 2 meters).
I wanted to know whether it is possible for a broker of these characteristics to handle that many connections simultaneously.
Thank you so much!
Diego
A single broker can handle many 1000s of clients.
The limiting factor is likely to be the size and frequency of the messages, but assuming the messages are not tens of megabytes each, 200 messages spread over 30 minutes will be trivial.
Even if they all arrive at roughly the same time (allowing for clock drift), small messages will again not be a problem.
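For a rough sense of scale, here is a back-of-the-envelope calculation (the 200-sensor count is from the question; the ~100-byte payload is an assumption):

```python
# Back-of-the-envelope load estimate (assumed figures: 200 sensors,
# one ~100-byte message each per 30-minute interval).
sensors = 200
payload_bytes = 100
interval_s = 30 * 60

avg_msgs_per_s = sensors / interval_s              # ~0.11 messages/second
avg_bytes_per_s = avg_msgs_per_s * payload_bytes   # ~11 bytes/second

# Worst case: clocks line up and all sensors publish within the same second.
burst_bytes = sensors * payload_bytes              # ~20 kB in one second

print(f"average: {avg_msgs_per_s:.2f} msg/s, {avg_bytes_per_s:.0f} B/s")
print(f"worst-case burst: {sensors} msgs, {burst_bytes} bytes in one second")
```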

Beacon size vs message size in Wireless Ad-Hoc Networks

I'm working on neighbor discovery protocols in wireless ad-hoc networks. There are many protocols that rely only on beacon messages between nodes during the discovery phase. On the other hand, there are other approaches that try to transmit more information (like a node's neighbor table) during discovery in order to accelerate it. Depending on the time needed to listen to those messages, the discovery latency and power consumption vary. Suppose that the same hardware is used to transmit them and that there are no collisions.
I read that beacons can be sent extremely fast (easily under 1 ms), but I haven't found anything about how long it takes to send/receive a bigger message. Let's say a message carrying around 50-500 numbers representing all the information about your neighbors. How much extra power is needed?
Update
Can this bigger message be divided into a bunch of beacon-sized messages? If it can, then I suppose the power used to transmit/listen grows linearly.
One possible solution is to divide the transmission into N different beacon-like messages, each with a small amount of extra information so they can be put back together. In this way, the power used grows linearly with N.
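As an illustration only, here is a minimal Python sketch of that fragmentation scheme, assuming each beacon-sized fragment carries a small (index, count) header so the receiver can reassemble the table; the payload size and field widths are made up:

```python
import struct

BEACON_PAYLOAD = 20            # assumed usable bytes per beacon-sized message
HEADER = struct.Struct("!BB")  # 1-byte fragment index + 1-byte fragment count

def fragment(table_bytes: bytes):
    """Split a serialized neighbour table into beacon-sized fragments."""
    chunk = BEACON_PAYLOAD - HEADER.size
    parts = [table_bytes[i:i + chunk] for i in range(0, len(table_bytes), chunk)]
    return [HEADER.pack(i, len(parts)) + p for i, p in enumerate(parts)]

def reassemble(fragments):
    """Put the fragments back together, regardless of arrival order."""
    ordered = sorted(fragments, key=lambda f: HEADER.unpack(f[:HEADER.size])[0])
    return b"".join(f[HEADER.size:] for f in ordered)

table = bytes(range(256)) * 2          # stand-in for 50-500 neighbour entries
frags = fragment(table)
assert reassemble(frags) == table
# N fragments -> roughly N times the airtime/energy of a single beacon,
# plus the 2-byte header overhead per fragment.
print(f"{len(table)} bytes -> {len(frags)} beacon-sized fragments")
```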

Does the server's location affect players' ping?

I want to reduce latency for my players in a multiplayer game.
Would it be beneficial to set up a server on each continent to reduce latency? E.g. the players are in the US but the server is in Europe, so I set one up in the US.
How big could the difference be?
Yes, absolutely: the closer your server is to the user, the better the ping, because the travel distance / time is reduced.
Especially between Europe and America, because of the sea ;)
The difference really depends on your server, but I'd say at least 150 ms.
Cable . . . ( raw-fiber on Layer 1 [PHY] ) mileage rulez
Recent transatlantic cables ( the 2012+ generation, deploying shorter fibre meanders in tighter inner tubing ) bring transatlantic latencies down to somewhere under 60 milliseconds, according to Hibernia Atlantic.
One also has to account for lambda-signal amplification and retiming units, which add some additional Layer 1 [PMD] latency overhead along the way.
Yes, but...
ping is a trivial tool that reports RTT ( a packet's round-trip time ), derived from timing how long a packet takes to travel through the network infrastructure there and back.
Sending such a ping packet across a few metres of cable will thus typically take less time than waiting for another such packet to reach a target on the opposite side of the globe and successfully crawl back again, but... as the wisdom of Heraclitus of Ephesus has it, "You can't step twice into the same river": repeating the ping probes to the same target will yield many, in principle very different, time delays ( latencies are non-stationary ).
There are many additional cardinal issues, besides the geographical distance from A to B, that influence the end-to-end latency ( and the expected smoothness ) of application-layer services.
What bothers?
Transport network congestion ( if the actual traffic overloads the underlying network capacity, buffering delays and/or packet drops start to occur )
Packet re-transmission(s) ( if a packet gets lost, the opposite side will ask for a re-transmission; packets received in the meantime are recorded on reception, but the receiving process keeps waiting for the missing one: without packet #6 it cannot decode the full message, so packets #7, #8, #9, #10, ... simply have to wait until #6 has been requested again from the sender, re-transmitted, and, hopefully delivered successfully this time, arrives in the receiver's hands to fill the gap in the puzzle. That costs a lot more time than a smooth, error-free data flow )
Selective class-of-traffic prioritisation ( if your class of traffic is not prioritised, your packets will be policed to wait in queues until higher-priority traffic allows some more lower-priority traffic to fit in )
Packet deliveries are not guaranteed to take the same path over the same network vectors, and individual packets can, in general, be transported over multiple different trajectories ( add various prioritisation policies + various buffering drop-outs + various intermittent congestions and spurious flows ... and the resulting latency per se + timing variance, i.e. the uncertainty of the final delivery time of the next packet, only grows and grows )
Last but not least, the server system's own processing bottlenecks. Fine-tuning a server to avoid any such adverse effects ( performance bottlenecks and, even worse, any blocking-state episodes ) belongs to professional controlled-latency infrastructure design & maintenance.
The Devil comes next!
You might already have noticed that, besides the static latency scale ( demonstrated by ping ), realistic gaming is even more adversely affected by latency jitter ... the in-game context magically UFO-es forwards and back in the time domain ... which causes planes jumping unrealistically right in front of your aiming cross, "shivering" characters, deadly enemy fire that inflicts damage without the attacker's body ever becoming visible, and similar disturbing artefacts.
Server-colocation proximity per se will help with the former, but will leave you on your own to fight the latter.
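To make the static-latency-versus-jitter distinction concrete, here is a small Python sketch that summarizes a series of RTT samples (the values below are invented) into a median and two jitter figures, one of them an RFC 3550-style smoothed estimate of the difference between successive samples:

```python
import statistics

def latency_profile(rtt_ms):
    """Summarize RTT samples: the static scale (median) and the jitter
    (population stddev plus an RFC 3550-style smoothed estimator of the
    difference between successive samples)."""
    smoothed_jitter = 0.0
    for prev, cur in zip(rtt_ms, rtt_ms[1:]):
        smoothed_jitter += (abs(cur - prev) - smoothed_jitter) / 16.0
    return {
        "median_ms": statistics.median(rtt_ms),
        "stdev_ms": statistics.pstdev(rtt_ms),
        "smoothed_jitter_ms": smoothed_jitter,
    }

# Made-up samples: two paths with the same median RTT but very different jitter.
stable_path = [61, 60, 62, 61, 60, 61, 62, 61]
jittery_path = [61, 45, 90, 58, 120, 40, 61, 75]
print(latency_profile(stable_path))
print(latency_profile(jittery_path))
```

Moving the server closer mainly shrinks the median; the jitter terms are driven by the congestion, re-transmission and prioritisation effects listed above.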

UDP stream with small packets

I have a small network with a client and a server, and I'm testing the frame rate while changing the packet size. In particular, I have an image; by changing a threshold, I extract keypoints and descriptors and then send a fixed number of packets (of different sizes for different thresholds). The problem appears when the UDP packets are below the MTU size: the reception rate decreases and the frame rate tends to stay constant. I verified with Wireshark that my reception times are correct, so it isn't a problem in the server code.
This is the graph for the same image sent 30 times per threshold value, with the threshold stepping by 10 from 40 to 170.
I can't post the image, so this is the link.
Thanks for the responses.
I don't think anyone will be interested in this answer, but we came to the conclusion that the problem lies in the WiFi dongle's drivers.
The transmission window does not go below a certain time threshold, so below a certain amount of data per packet the time stays constant and the throughput decreases.
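A sender-side sketch for checking that behaviour (the address, port, packet count and payload sizes are placeholders): if the driver enforces a minimum per-packet transmission time, the measured rate should fall roughly in proportion to the payload size below some threshold, instead of staying flat.

```python
import socket, time

DEST = ("192.168.1.50", 9000)   # placeholder server address/port
PACKETS_PER_TRIAL = 1000

def send_rate(payload_size):
    """Send a burst of UDP datagrams of one size and return the achieved
    rate in Mbit/s (sender-side view only; losses are not counted here)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_size
    start = time.perf_counter()
    for _ in range(PACKETS_PER_TRIAL):
        sock.sendto(payload, DEST)
    elapsed = time.perf_counter() - start
    sock.close()
    return PACKETS_PER_TRIAL * payload_size * 8 / elapsed / 1e6

# If the per-packet cost has a fixed floor (as with the suspected driver
# behaviour), the achieved rate drops roughly in proportion to payload size
# below some threshold instead of staying flat.
for size in (200, 400, 800, 1200, 1472):
    print(f"{size:5d} B payload -> {send_rate(size):7.2f} Mbit/s")
```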

Bandwidth measurement by minimum data transfer

I intend to write an application in which I will need to calculate the network bandwidth along with the latency and packet loss rate. One of the constraints is to measure the bandwidth passively (using the application data itself).
What I have read online, and understood from a few existing applications, is that almost all of them use active probing techniques (that is, generating a flow of probe packets) and use the time difference between the arrival of the first and last packets to calculate the bandwidth.
The main problems with such a technique are that it floods the network with probe packets, takes longer to run, and is not scalable (since the application needs to run at both ends).
One of the suggestions was to calculate the RTT of a packet by echoing it back to the sender and to calculate the bandwidth using the following equation:
Bandwidth <= (Receive Buffer size)/RTT.
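As a worked example of that bound, with assumed values (a 64 KB receive buffer and a 50 ms RTT, neither given above):

```python
# Worked example of the bound above, with assumed values:
# a 64 KB receive buffer and a 50 ms round-trip time.
receive_buffer_bytes = 64 * 1024
rtt_s = 0.050

bandwidth_bps = receive_buffer_bytes * 8 / rtt_s
print(f"upper bound ~ {bandwidth_bps / 1e6:.1f} Mbit/s")   # ~ 10.5 Mbit/s
```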
I am not sure how accurate this could be, as the receiver may not always echo the packet back promptly enough to give a correct RTT. Using ICMP alone may not always work either, as many servers disable it.
My main application runs over a TCP connection, so I am interested in using that TCP connection to measure the actual bandwidth obtained over a particular period of time. I would really appreciate it if anybody could suggest a simple technique (a reliable formula) to measure the bandwidth of a TCP connection.
It is only possible to know the available bandwidth by probing the network. This is because an 80%-utilized link will still return echo packets without delay, i.e. it will appear to be 0% occupied.
If you instead just wish to measure the bandwidth your application itself is using, that is much easier: e.g. keep a record of the amount of data you have transferred in the last second, divided into 10 ms intervals.
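A minimal sketch of such a passive, application-side meter (simplified to a trailing one-second window of timestamped samples rather than fixed 10 ms buckets; the API around it is an assumption):

```python
import collections, time

class ThroughputMeter:
    """Passively track how many bytes the application has transferred in
    the last second, using the application's own send/receive calls."""

    def __init__(self, window_s=1.0):
        self.window_s = window_s
        self.samples = collections.deque()   # (timestamp, nbytes)

    def record(self, nbytes):
        """Call from the send/receive path with the size just transferred."""
        now = time.monotonic()
        self.samples.append((now, nbytes))
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def bits_per_second(self):
        """Throughput over the trailing window."""
        return sum(n for _, n in self.samples) * 8 / self.window_s

meter = ThroughputMeter()
for _ in range(600):
    meter.record(1460)      # e.g. one full TCP segment payload
    time.sleep(0.002)       # roughly 1460 B every 2 ms of simulated traffic
print(f"{meter.bits_per_second() / 1e6:.2f} Mbit/s over the last second")
```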
Active probing techniques and their variants are bandwidth estimation algorithms. You don't want to use these algorithms to measure bandwidth. Note the difference between 'measure' and 'estimate'.
If you want to use TCP to measure bandwidth, you should be aware that TCP throughput is influenced by latency.
The easiest way to measure bandwidth using TCP is to send TCP traffic and measure the transferred bandwidth, but it will flood the network. None of the non-flooding algorithms is reliable in high-speed networks. Moreover, non-flooding algorithms assume the channel is clear of other traffic; if there is other traffic on the channel, the result will be skewed. I'm not surprised if the result doesn't make sense.
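A minimal sketch of that flooding-style TCP measurement, essentially what iperf-style tools do (the receiver address is a placeholder and a discard-style receiver must be listening there; the transfer size is arbitrary):

```python
import socket, time

DEST = ("192.0.2.10", 5001)        # placeholder: a receiver that reads and discards
TOTAL_BYTES = 50 * 1024 * 1024     # amount of flooding traffic to send
CHUNK = b"\x00" * 65536

def tcp_flood_measure():
    """Bulk-transfer measurement: push TOTAL_BYTES over a TCP connection as
    fast as possible and report the achieved rate. This intentionally floods
    the path, as discussed above."""
    sent = 0
    start = time.perf_counter()
    with socket.create_connection(DEST) as sock:
        while sent < TOTAL_BYTES:
            sent += sock.send(CHUNK)
    elapsed = time.perf_counter() - start
    return sent * 8 / elapsed / 1e6   # Mbit/s

if __name__ == "__main__":
    print(f"achieved ~ {tcp_flood_measure():.1f} Mbit/s")
```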