Beacon size vs message size in Wireless Ad-Hoc Networks

I'm working on neighbor discovery protocols in wireless ad-hoc networks. Many protocols rely only on beacon messages between nodes during the discovery phase. Other approaches transmit more information (such as a node's neighbor table) during discovery in order to accelerate it. Depending on the time needed to listen to these messages, the discovery latency and power consumption vary. Suppose the same hardware is used to transmit both kinds of messages and that there are no collisions.
I have read that beacons can be sent extremely fast (easily under 1 ms), but I haven't found anything about how long it takes to send/receive a bigger message. Say a message carrying around 50-500 numbers representing all the information about your neighbors. How much extra power is needed?
Update
Can this bigger message be divided into a bunch of beacon-sized messages? If so, I suppose the power used to transmit/listen grows linearly.

One possible solution is to divide the transmission into N beacon-like messages, each carrying a little extra information (e.g., a sequence number) so they can be put back together. In this way, the power used grows linearly with N.
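For a rough sense of scale, here is a back-of-the-envelope sketch. Every figure in it is an assumption for illustration (a low-power radio at 250 kbit/s drawing 20 mA at 3 V while transmitting, ~15 bytes of per-frame overhead, 2-byte neighbor entries), not a measurement:

```python
# Back-of-the-envelope airtime/energy comparison: bare beacon vs. one
# large neighbor-table frame vs. the table split into fragments.
# All figures below are illustrative assumptions, not measurements.

PHY_RATE_BPS = 250_000        # assumed radio bit rate
TX_POWER_W   = 0.020 * 3.0    # assumed TX draw: 20 mA at 3 V
HEADER_BYTES = 15             # assumed sync + MAC header + CRC per frame

def airtime_s(payload_bytes):
    """Time on air for one frame carrying the given payload."""
    return 8 * (HEADER_BYTES + payload_bytes) / PHY_RATE_BPS

def tx_energy_j(payload_bytes):
    """Transmit energy for one frame."""
    return TX_POWER_W * airtime_s(payload_bytes)

TABLE_BYTES = 500 * 2         # 500 neighbor entries, 2 bytes each
FRAG_BYTES  = 100             # beacon-sized fragment payload
SEQ_BYTES   = 2               # sequence number added to each fragment

n_frags = (TABLE_BYTES + FRAG_BYTES - 1) // FRAG_BYTES
print(f"beacon      : {tx_energy_j(4) * 1e6:7.1f} uJ")
print(f"single frame: {tx_energy_j(TABLE_BYTES) * 1e6:7.1f} uJ")
print(f"{n_frags} fragments: {n_frags * tx_energy_j(FRAG_BYTES + SEQ_BYTES) * 1e6:7.1f} uJ")
# Fragmenting repays the fixed per-frame overhead once per fragment, so
# energy grows linearly in the number of fragments plus that fixed cost.
```

Listening cost scales the same way, since the receiver must stay awake for the whole airtime (real radios also cap frame sizes, which is itself a reason to fragment).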

Related

Data transmission using RF with Raspberry Pi

I have a project that consists of transmitting data wirelessly from 15 tractors to a station; the maximum distance between a tractor and the station is 13 miles. I use a Raspberry Pi 3 to collect data from the tractors. Through some research I found that there is no WiFi or GSM coverage, so the only solution is RF communication using VHF. Is that possible with a Raspberry Pi, or must I add a modem? If so, what are the criteria for choosing a modem? If you have any other information, please share it.
Thank you for your time.
I had a similar issue, but possibly a little more complex. I needed to cover a maximum distance of 22 kilometres, and I wanted to monitor over 100 resources, ranging from breeding stock to fences and gates. I too had no GSM access, and no direct line-of-sight access either, as the area is hilly and the breeders like the deep valleys. The solution I used was to build my own radio network out of cheap radio repeaters. Everything was battery operated and driven by the receivers powering up the transmitters. This means the units consume only 40 microamps on standby, and when the transmitters transmit they consume around 100 to 200 milliamps in my case.
In the house I have a little program that transmits a poll to the receivers every so often and waits for the units to reply. This gives me a big advantage: via the repeater trail (each repeater the signal goes through adds its code to the returning message) I can actually determine where my stock are.
Now for the big issue: how long do the batteries last? Each unit has an 18650 battery. For the fence and gate controls this is charged by a small 5 volt solar panel, and after 2 years of running time I have not changed any of them. For the cattle units, the time between charges depends solely on how often you poll the units (note each unit has its own code). With one exception (a bull who wants to roam and is a real escape artist), I only poll them once or twice a day, and I swap the battery every two weeks.
The frequency I use is 433 MHz, and the radio transmitters and receivers are very cheap (less than 10 cents a pair if you buy them in Australia), with a very small ATtiny (I think) Arduino per unit (around 30 cents each) and a length of wire as an aerial: 34.6 cm for the cattle units and 69.2 cm for the repeaters. Note these lengths are based on the frequency used, i.e. 433 MHz.
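As a quick check, those aerial lengths follow directly from the 433 MHz carrier (wavelength = c / f):

```python
# The aerial lengths quoted above follow from the 433 MHz carrier:
# the cattle units use a half-wave wire, the repeaters a full wave.
C = 299_792_458               # speed of light, m/s
f = 433e6                     # carrier frequency, Hz

wavelength_cm = 100 * C / f
print(f"full wave: {wavelength_cm:.1f} cm")        # ~69.2 cm (repeaters)
print(f"half wave: {wavelength_cm / 2:.1f} cm")    # ~34.6 cm (cattle units)
```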
As I had to install lots of repeaters, I contacted an organisation in China (sorry, they no longer exist) and they created a tiny waterproof and rugged capsule that contained everything, while also improving on the design (range-wise, while reducing power) at a cost of $220 for 100 units, not including batteries. I bought one lot as a test, and now between myself and my neighbours we have bought another 2000 units for only $2750.
In my case this paid for itself in less than three months when, during calving season, I knew exactly where they were calving and was on site to assist. The first time I used it we saved a mother who was having a real issue.
To end this long message: I am not an expert, but I had an idea and hired people who were, and the repeater approach certainly works over long distances and large areas (42 square kilometres).
Following on from the comments above: I'm not sure where you are located, but spectrum around the 400 MHz range is licensed in many countries, so it would be worth checking exactly what you can use.
If this is your target, then this is UHF rather than VHF, so if you search for 'Raspberry Pi UHF shield' or 'Raspberry Pi UHF module' you will find examples of cheap hardware you can add to your Raspberry Pi to support communication over these frequencies. Most of the results should include some software examples as well.
There are also articles on using the pins on the Pi to transmit directly by modulating the voltage on them. This is almost certainly going to interfere with other communications, so I doubt it would meet your needs.

Sending images through sockets

I have an idea for a client-server setup. The client handles only input, sending it to the server. The server handles the input and logic, then sends the image of the program to the client. The client draws the image on the user's screen. It uses UDP; slight artefacts in the image are tolerated.
How fast can those images travel through the Internet? Can they travel at least 5 times a second? I don't have 2 computers at hand to test it.
EDIT: One more question: how reliable is the UDP protocol? How many pixels would arrive corrupted? Say, 10% on average?
EDIT2: For example, I have a 320x200, 32-bit image (red, green, blue + alpha). That's ~2 million bits. How long does it take for the image to arrive from the server to the client if my ping is X, my upload speed is Y Mbps, and my download speed is Z Mbps?
The answers to your questions depend heavily on the internet connections of the machines involved. In particular, if the program is heavily graphical, the bandwidth used by the images may be fairly substantial, especially if your client is on a mobile device connecting through the cellular telephony system.
If you have plenty of bandwidth, 5 round trips per second should be achievable most of the time if both client and server are in the U.S., or both are in Europe. There are, for example, interactive computer games that depend on having 4-5 round trips per second for smooth play, and only occasionally have glitches as a result. If client and server are on different continents, and especially if they are on opposite sides of the world, this may be more difficult, as speed of light delays start using a significant proportion of the available transmission time. In the worst case, say between China and Argentina, theoretical speed of light delays alone limit the network to less than 8 round trips per second, so with real network and bandwidth limitations, 5 round trips per second could be difficult to achieve.
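As a sanity check on that figure (the one-way distance is an assumed great-circle estimate):

```python
# Lower bound on round-trip rate from speed-of-light delay alone.
C_KM_PER_S = 299_792          # speed of light in vacuum, km/s
ONE_WAY_KM = 19_000           # assumed China-Argentina great-circle distance

rtt_s = 2 * ONE_WAY_KM / C_KM_PER_S
print(f"RTT >= {rtt_s * 1e3:.0f} ms -> at most {1 / rtt_s:.1f} round trips/s")
```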
The reliability of UDP depends substantially on how congested the connection is. On an uncongested network connection, you'd probably lose 1% of the packets or less. On a very congested network connection, it might be a lot worse - I've seen situations where 80% of the packets were lost.
On an uncongested network, the time for an image to travel from the server to the client would be roughly
(ping time)/2 + (1 + packet overhead) * (image size)/(minimum bandwidth)
Packet overhead is only a few percent, so you may be able to drop that factor. Minimum bandwidth is the minimum of the server's upload bandwidth and the client's download bandwidth. Note that the image size might be reduced substantially through compression. Don't forget, though, that you also need to allow time for the input to be sent from the client to the server, which adds at least another (ping time)/2.
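To make the EDIT2 question concrete, here is the formula applied to the 320x200, 32-bit example. The ping and bandwidth figures are placeholder assumptions standing in for X, Y, and Z:

```python
# One-way delivery time for a single uncompressed frame, using the
# formula above. Ping and bandwidths are placeholder assumptions.

def frame_time_s(ping_s, image_bits, up_bps, down_bps, overhead=0.03):
    """(ping)/2 + (1 + overhead) * size / min(upload, download)."""
    return ping_s / 2 + (1 + overhead) * image_bits / min(up_bps, down_bps)

image_bits = 320 * 200 * 32            # ~2.05 Mbit uncompressed
t = frame_time_s(ping_s=0.050, image_bits=image_bits,
                 up_bps=10e6, down_bps=20e6)
print(f"{t * 1e3:.0f} ms per frame -> ~{1 / t:.1f} frames/s before compression")
# Compression shrinks image_bits substantially, and the client's input
# adds at least another ping/2 to each full round trip.
```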

Compare two matrices according to first row in each matrix (MATLAB)

I'll briefly explain my purpose in case another solution would be much easier.
I have a server and a system. The system sends a "live-stat" packet twice a second and logs the time it was sent (epoch time). The server receives the packet and logs the receiving time.
I would like to know which of the packets arrived and the latency of their arrival.
Currently I'm inserting this data into an unorganized matrix, and I would like to compare the two matrices (server and system) to calculate the latency.
Is there any MATLAB command that could help me?
Another point is that I don't know how to get a numerical value for the latency; is there any formula or tool that I could use? (I'm currently going to show the deviation of the latency on a graph.)
I'm talking about 500,000 packets per system, and I'm testing this on ~50 systems, so I'm looking for the most efficient calculation.
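One common, vectorized way to frame this: treat each log as an array keyed by the send epoch and join the two on that key. In MATLAB, intersect with its index outputs (or ismember) does the matching in a single call; the NumPy sketch below illustrates the same idea, with the 2-column layout being an assumption about the logs:

```python
# Join the system log and server log on the send epoch: packets absent
# from the server log were lost; latency = receive time - send time.
import numpy as np

sent = np.array([100.0, 100.5, 101.0, 101.5])     # send epochs (system log)
recv = np.array([[100.0, 100.12],                 # [send epoch, recv time]
                 [101.0, 101.09]])                # packet sent at 100.5 lost

keys, sent_idx, recv_idx = np.intersect1d(sent, recv[:, 0],
                                          return_indices=True)
latency = recv[recv_idx, 1] - keys                # per-packet latency, s

print("arrived:", keys)
print("latency:", latency, "mean:", latency.mean(), "std:", latency.std())
# Fully vectorized, so 500k packets x 50 systems is fine; the mean and
# std give the numerical latency figures for the graph.
```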

Soft hand off in CDMA cellular networks

Hi,
In CDMA cellular networks, when an MS (Mobile Station) needs to change BS (Base Station), i.e. exactly when hand-off is necessary, I know that soft hand-off is used (making a connection with the target BS before leaving the current BS). But since the MS remains connected to more than one BS for a time, I want to know: does the MS use the same CDMA code to communicate with all BSs, or a different code for each BS?
Thanks in advance
For the benefit of everyone, I have touched upon a few points before coming to the main point.
Soft handoff is also termed "make-before-break" handoff. This technique falls under the category of MAHO (Mobile Assisted Handover). The key theme is for the MS to maintain simultaneous communication links with two or more BSs to ensure an uninterrupted call.
In the DL (downlink) direction, this is achieved by two or more BTSs transmitting the same bit stream using different transmission codes on different physical channels on the same frequency, while the CDMA phone simultaneously receives the signals from these BTSs. The active set can contain more than one pilot, as up to three carriers can be involved in soft handoff. A rake receiver then performs maximal-ratio combining of the received signals.
In the UL (uplink) direction, the MS operates against a candidate set, which can contain more than one pilot with sufficient signal strength as reported by the MS. Each BTS tags the user's data with a frame reliability indicator that conveys the transmission quality to the BSC. So even though the same signal (the MS's code channel) is received by both base stations, the received streams are routed to the BSC along with their quality information, and the BSC examines the frame reliability indicators and chooses the best-quality stream (the best candidate).
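A toy sketch of that uplink frame selection may help. All field names here are invented for illustration; real IS-95 frame selection at the BSC is richer than this:

```python
# Toy BSC-side frame selection: each BTS forwards its copy of an MS
# frame tagged with a reliability indicator; the BSC keeps the best
# copy per frame number. Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    frame_no: int        # sequence number of the MS frame
    bts_id: str          # which base station forwarded this copy
    reliability: float   # frame reliability indicator (higher = better)

def select_frames(copies):
    """Keep the most reliable copy of each frame number."""
    best = {}
    for f in copies:
        cur = best.get(f.frame_no)
        if cur is None or f.reliability > cur.reliability:
            best[f.frame_no] = f
    return best

copies = [
    TaggedFrame(1, "BTS-A", 0.90),
    TaggedFrame(1, "BTS-B", 0.97),   # wins frame 1: higher reliability
    TaggedFrame(2, "BTS-A", 0.88),
]
for n, f in sorted(select_frames(copies).items()):
    print(n, f.bts_id, f.reliability)
```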

How to calculate bandwidth requirements based upon flows per minute (fpm)?

I want to know how one can calculate bandwidth requirements based upon flows, and vice versa.
Meaning, if I had to achieve a total of 50,000 NetFlows, what is the bandwidth requirement to produce this number? Is there a formula for this? I'm using this to size a flow analyzer appliance. If its license says it supports 50,000 flows, what does this mean? How much more bandwidth could I add before I lose license coverage?
Most applications and appliances list flow volume per second, so you are asking what the bandwidth requirement is to transport 50k NetFlow updates per second:
For NetFlow v5, each record is 48 bytes, and each packet carries 20 or so records with 24 bytes of overhead per packet. This means you'd use about 20 Mbps to carry the flow packets. If you use NetFlow v9, which uses a template-based format, it might be a bit more or less, depending on what is included in the template.
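Working that figure out explicitly from the sizes quoted above:

```python
# Export bandwidth for 50k NetFlow v5 records/s, using the sizes
# quoted above: 48-byte records, ~20 records per packet, 24 bytes of
# overhead per export packet.
RECORDS_PER_SEC = 50_000
RECORD_BYTES    = 48
RECORDS_PER_PKT = 20
PKT_OVERHEAD    = 24

packets_per_sec = RECORDS_PER_SEC / RECORDS_PER_PKT
bytes_per_sec = (RECORDS_PER_SEC * RECORD_BYTES
                 + packets_per_sec * PKT_OVERHEAD)
print(f"{bytes_per_sec * 8 / 1e6:.1f} Mbps")   # ~19.7 Mbps, i.e. about 20
```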
However, if you are asking how much bandwidth you can monitor with 50k NetFlow updates per second, the answer becomes more complex. In our experience monitoring regular user networks (Google, Facebook, etc.), an average flow update covers roughly 25 kbytes of transferred data, meaning 50,000 flow updates per second equates to monitoring about 10 Gbps of traffic. Keep in mind that these are very rough estimates.
This answer may vary, however, if you are monitoring a server-centric network in a datacenter, where each flow may be much larger or smaller depending on the content served. The bigger each individual traffic session is, the more bandwidth you can monitor with 50,000 flow updates per second.
Finally, if you enable sampling in your NetFlow exporters you will be able to monitor much more bandwidth. Sampling will only use 1 in every N packets, thus skipping many flows. This lowers the total number of flow updates per second needed to monitor high-volume networks.
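The reverse sizing, with sampling folded in as a rough first-order multiplier (the 25 KB-per-flow average is the rough estimate from above, not a universal constant):

```python
# Traffic that 50k flow updates/s can monitor, and the first-order
# effect of 1-in-N packet sampling. Per-flow average is a rough
# estimate for user traffic, as noted above.
FLOWS_PER_SEC  = 50_000
AVG_FLOW_BYTES = 25_000

monitored_bps = FLOWS_PER_SEC * AVG_FLOW_BYTES * 8
print(f"unsampled: {monitored_bps / 1e9:.0f} Gbps")   # ~10 Gbps
for n in (10, 100):
    print(f"1-in-{n} sampling: roughly {monitored_bps * n / 1e9:.0f} Gbps")
```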