How to measure traffic in an OMNeT++ simulation

I am new to OMNeT++.
In my simulation, several nodes generate packets. I want to get the aggregate traffic rate of those nodes. How can I measure the traffic in OMNeT++?
Thanks

There are two ways to get aggregate statistics:
1. Let INET collect statistics on a per-node basis and aggregate those data in post-processing.
2. Install the necessary @statistic listeners in the top-level network module's NED file. Signals that carry the statistics data propagate up the module tree to the root, so all signals emitted by anything in the network are received by the top-level (network) module, essentially providing an aggregate value.
Obviously, the second approach is less flexible, as it does not work if you are interested in the aggregate statistics of only some nodes (e.g. statistics for all switches in the system).
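For the first approach, a minimal post-processing sketch in Python, assuming you have exported the per-node scalar results to CSV (e.g. with opp_scavetool; the column names "module", "name", "value" follow its CSV export format, and the statistic name packetSent:count is just an example):

```python
# Post-processing sketch: sum a per-node scalar statistic exported from OMNeT++.
# Assumes results were exported to CSV, e.g.:
#   opp_scavetool export -f 'name =~ "packetSent:count"' -o results.csv results/*.sca
import csv

def aggregate_scalar(csv_path, scalar_name):
    """Sum a scalar statistic over all modules that recorded it."""
    total = 0.0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("name") == scalar_name and row.get("value"):
                total += float(row["value"])
    return total

# e.g. aggregate_scalar("results.csv", "packetSent:count")
```

The same idea works for any per-node scalar: filter on the statistic name, then sum (or average) over modules.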

Arrivals for different delivery locations with the same Poisson rate in AnyLogic

My question is:
Is there a simple and proper way to create arrivals for multiple delivery locations with the same Poisson rate without using a Source block for every delivery location?
An example to make it a bit more understandable:
I have 30 delivery location nodes (node1, node2, node3, node4, etc.). For all 30 delivery locations, something should be transported to those nodes with the same Poisson arrival rate (for simplicity, say 1), and each has different intermediate points it has to pass (so for delivery location node1, an agent first needs to go to intermediate point2 and then to node1; see the figure for example database values).
Of course I could create 30 Source blocks with this arrival rate and the intermediate points as parameters of the agent created in each source, but this is quite time-intensive, so is there a quick way to model this?
Since arrivals happen randomly, "arrivals according to database" cannot be used: there is no specified arrival time, arrivals just happen randomly at a Poisson rate.
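One standard trick that avoids 30 Source blocks: by the superposition property of Poisson processes, 30 independent streams of rate 1 are statistically identical to a single stream of rate 30 in which each arrival picks its destination uniformly at random. So a single Source with rate 30, plus exit code that assigns a random delivery location (and looks up its intermediate points from the database), reproduces the same arrival pattern. A language-neutral sketch of the idea in Python (names are illustrative, not AnyLogic API):

```python
# Sketch of the superposition idea behind using a single Source block:
# 30 independent Poisson(rate=1) arrival streams are statistically identical
# to one Poisson(rate=30) stream whose agents pick a destination uniformly.
import random

def merged_arrivals(n_locations, rate_per_location, horizon, seed=42):
    """Generate (time, destination) pairs from one merged Poisson process."""
    rng = random.Random(seed)
    merged_rate = n_locations * rate_per_location
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(merged_rate)   # exponential inter-arrival times
        if t > horizon:
            return arrivals
        arrivals.append((t, rng.randrange(n_locations)))  # uniform destination

events = merged_arrivals(30, 1.0, horizon=10.0)
```

Each destination still sees its own rate-1 Poisson arrival stream, because thinning a Poisson process by an independent uniform choice yields independent Poisson processes.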

Suggestions on breaking down an IoT application consisting of jobs and services hosted in Kubernetes in a way that would enable horizontal scaling

I have an IoT application which is architected in the following way:
There are plants, each with its own set of devices.
The entire pipeline is deployed in Kubernetes and consists of the following units:
A job that wakes up every x seconds, reads data from all plants, and pushes the data to an MQTT broker.
An MQTT broker.
A subscriber service that receives data from all plants and pushes it to a time-series database.
Jobs running at intervals of 5 min, 15 min, 1 hr, 4 hr, and 1 day that perform downsampling on the data of all the plants and push it to separate downsampled tables.
Jobs running every day that check whether there are holes/gaps in the data and try to fill them where possible.
This works fine for a few plants, but when the number of plants increases it becomes difficult to perform data retrieval, pushing, and downsampling with a single service/job: it becomes too memory-intensive and chokes in multiple places. As a temporary fix, vertical scaling helps to some extent, but then I need to put all the pods on a single machine that I scale vertically, and scaling multiple nodes vertically is quite expensive.
Hence I am planning to break the system down so that I can scale horizontally, and I am looking for suggestions on possible ways to achieve this.
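One common way to make the per-plant jobs horizontally scalable is to run N identical worker replicas and have each replica claim a disjoint subset of plants via a stable hash of the plant ID. A minimal sketch of that partitioning logic in Python (the function names and the worker index/count are hypothetical; in Kubernetes the index could come from a StatefulSet pod ordinal):

```python
# Sketch: partition plants across N worker replicas by a stable hash of the
# plant ID, so each replica retrieves/downsamples only its own disjoint subset.
import hashlib

def owns_plant(plant_id: str, worker_index: int, worker_count: int) -> bool:
    """True if this worker replica is responsible for the given plant."""
    digest = hashlib.sha256(plant_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % worker_count == worker_index

def my_plants(all_plants, worker_index, worker_count):
    """The subset of plants this replica should process."""
    return [p for p in all_plants if owns_plant(p, worker_index, worker_count)]
```

Because the hash is stable, every replica independently computes the same assignment with no coordination; adding capacity means raising the replica count (which reshuffles assignments) rather than growing one machine.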

Beacon size vs message size in Wireless Ad-Hoc Networks

I'm working on neighbor discovery protocols in wireless ad-hoc networks. Many protocols rely only on beacon messages between nodes during the discovery phase. Other approaches try to transmit more information (such as a node's neighbor table) during discovery in order to accelerate it. Depending on the time needed to listen to those messages, the discovery latency and power consumption vary. Suppose that the same hardware is used to transmit them and that there are no collisions.
I have read that beacons can be sent extremely fast (easily under 1 ms), but I haven't found anything about how long it takes to send/receive a bigger message. Let's say a message carrying around 50-500 numbers representing all the information about your neighbors. How much extra power is needed?
Update
Can this bigger message be divided into a bunch of beacon-sized messages? If it can, then I suppose the power used to transmit/listen grows linearly.
One possible solution is to divide the transmission into N different beacon-like messages, each with a small amount of extra information so they can be put back together. In this way, the power used grows linearly as N grows.
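A back-of-envelope way to estimate this: at fixed transmit power, energy is roughly proportional to airtime, and airtime is bytes × 8 / bitrate. A sketch under assumed numbers (250 kbit/s PHY rate as in IEEE 802.15.4, 2 bytes per neighbor entry, ~100 bytes of usable payload and ~15 bytes of overhead per beacon-sized fragment; all of these are assumptions to adjust for your radio):

```python
# Back-of-envelope airtime/energy sketch for the neighbor-table message.
# Assumptions (adjust for your radio): 250 kbit/s PHY rate, 2 bytes per
# neighbor entry, ~100-byte payload and ~15-byte overhead per fragment.
def airtime_ms(total_bytes, bitrate_bps=250_000):
    """Time on air for a given number of bytes, in milliseconds."""
    return total_bytes * 8 / bitrate_bps * 1000

def fragmented_airtime_ms(entries, bytes_per_entry=2,
                          frag_payload=100, frag_overhead=15):
    """(fragment count, total airtime) for a fragmented neighbor table."""
    total_payload = entries * bytes_per_entry
    n_frags = -(-total_payload // frag_payload)          # ceiling division
    return n_frags, airtime_ms(total_payload + n_frags * frag_overhead)

# 500 neighbor entries -> 10 beacon-sized fragments; at fixed TX power the
# energy is roughly proportional to this airtime, so it grows linearly in N.
frags, ms = fragmented_airtime_ms(500)
```

Under these assumptions a 500-entry table costs tens of milliseconds of airtime versus well under a millisecond for a bare beacon, and the per-fragment overhead is the only deviation from strictly linear growth.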

Simulink very slow when reading a large number of signals into root-level input ports

I'm trying to read structures of timeseries objects into root-level input ports in Simulink as described here; however, the model is reading the data extremely slowly.
I basically removed everything and now only have input ports going into Terminator blocks, and every time step (0.01 sampling time) takes around one second.
What is going on here? This can't possibly be correct. I should mention that I am reading around 500 signals, each a timeseries object.

How to calculate bandwidth requirements based upon flows per minute (fpm)?

I want to know how one can calculate bandwidth requirements based upon flows, and vice versa.
Meaning, if I had to achieve a total of 50,000 NetFlows, what is the bandwidth requirement to produce this number? Is there a formula for this? I'm using this to size a flow analyzer appliance. If its license says it supports 50,000 flows, what does that mean? How much more bandwidth could I add before exceeding the license coverage?
Most applications and appliances list flow volume per second, so you are asking what the bandwidth requirement is to transport 50k NetFlow updates per second:
For NetFlow v5, each record is 48 bytes, with each packet carrying 20 or so records plus about 24 bytes of overhead per packet. This means you'd use about 20 Mbps to carry the flow packets. NetFlow v9, which uses a template-based format, might be a bit more or less, depending on what is included in the template.
However, if you are asking how much bandwidth you can monitor with 50k NetFlow updates per second, the answer becomes more complex. In our experience monitoring regular user networks (Google, Facebook, etc.), an average flow update covers roughly 25 KB of transferred data, meaning 50,000 flow updates per second would equate to monitoring about 10 Gbps of traffic. Keep in mind that these are very rough estimates.
This answer may vary, however, if you are monitoring a server-centric network in a datacenter, where each flow may be much larger or smaller depending on the content served. The bigger each individual traffic session is, the more bandwidth you can monitor with 50,000 flow updates per second.
Finally, if you enable sampling in your NetFlow exporters you will be able to monitor much more bandwidth. Sampling will only use 1 in every N packets, thus skipping many flows. This lowers the total number of flow updates per second needed to monitor high-volume networks.
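The arithmetic behind the estimates above can be checked directly (the constants are the NetFlow v5 figures quoted in the answer; the 25 KB average flow size is the rough empirical value mentioned):

```python
# Worked version of the estimates above, using the NetFlow v5 figures quoted.
RECORD_BYTES = 48        # bytes per NetFlow v5 record
RECORDS_PER_PACKET = 20  # typical records per export packet
PACKET_OVERHEAD = 24     # header bytes per export packet

def export_bandwidth_bps(flows_per_second):
    """Bandwidth needed to carry the flow-export packets themselves."""
    packets = flows_per_second / RECORDS_PER_PACKET
    byte_rate = flows_per_second * RECORD_BYTES + packets * PACKET_OVERHEAD
    return byte_rate * 8

def monitored_bandwidth_bps(flows_per_second, avg_flow_bytes=25_000):
    """Rough traffic volume those flow updates represent (25 KB/flow avg)."""
    return flows_per_second * avg_flow_bytes * 8

# 50,000 flows/s -> ~19.7 Mbps of export traffic, ~10 Gbps monitored.
```

With sampling at 1-in-N, the same functions apply after dividing the observed flow-update rate accordingly.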