I am running a VEINS simulation for VANETs. Is there a way to access the total number of cars being simulated at a specific time in OMNeT++? (I am trying to count the number of packets exchanged between cars, and since I am broadcasting them, I thought that multiplying the number of packets sent by the number of vehicles being simulated would give a good estimate of how many packets should have been received.)
The return value of TraCIScenarioManager::getManagedHosts contains all active vehicles. You should get the same result as if you manually iterated over (and then filtered) all submodules of your network using cModule::SubmoduleIterator.
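For the packet-count estimate described in the question, here is a minimal sketch of the bookkeeping (plain Python rather than OMNeT++/Veins C++, just to make the arithmetic explicit; the function and counter names are hypothetical):

```python
# Generic sketch (not Veins/OMNeT++ code) of the estimate from the
# question: each broadcast can be received by at most the number of
# other vehicles that are active at the moment it is sent.
expected_receptions = 0

def on_broadcast_sent(active_vehicle_count: int) -> None:
    # Call this once per broadcast, with the size of the map returned
    # by TraCIScenarioManager::getManagedHosts() at that simulation time.
    global expected_receptions
    expected_receptions += max(active_vehicle_count - 1, 0)
```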
We're generating the data that we might get from a shop floor in order to run, test, and validate our machine learning models. We first have a discrete-event simulation model of our manufacturing system. Each production order is treated as an agent, which then goes through different processes, each with a queue (waiting time) and delays (first a production time, then a logistics time).
But sometimes we have one process, for example printing (code 5A, after the second Select5Output), with three different machines that do not have a fixed capacity. At that point we divide the order into parts and send them to those machines (fairly arbitrarily).
The data we take comes from flowchart_process_states_log in the database.
My questions here are:
How can we define the number of products in each order? E.g., we're printing cards: for one order it may be 10k, for another 8k or 33k. Can we define it as an agent parameter? And how can we vary it (stochastically, no exact number needed)?
How can we split those 10k cards across three different machines, and then get back one complete agent with 10k again? The agent ID should remain the same, since we trace and analyse the orders in our ML model. Is it reasonable to treat an order as an agent?
How can we multiply the quantity carried by an agent after a process? E.g., after cutting, 10k pieces become 20k.
We have a distribution for each delay, e.g. a triangular distribution. But we also want occasional disturbances, where a delay suddenly takes 2 days instead of the normal 3-4 hours. How can we do that? (A sketch of one way to model this follows below.)
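One way to model such disturbances is a simple mixture: with a small probability, draw a long outlier delay instead of the normal triangular sample. Below is a generic Python sketch of the idea (not AnyLogic code); the 5% probability and the 1.5-2.5 day outlier range are illustrative assumptions:

```python
# Generic sketch (not AnyLogic code) of a delay with rare disturbances:
# with a small probability, return a long outlier delay instead of the
# normal triangular sample. All numbers are assumptions for illustration.
import random

HOURS = 1.0
DAYS = 24 * HOURS

def process_delay() -> float:
    if random.random() < 0.05:                       # rare disturbance, ~5% of orders
        return random.uniform(1.5, 2.5) * DAYS       # outlier: roughly 2 days
    return random.triangular(3 * HOURS, 4 * HOURS)   # normal case: 3-4 hours
```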
Thank you in advance for your effort; any help is highly appreciated, because we're here and learning together.
I have a server that processes packets from different devices. Devices can report at different intervals.
I would like to make a chart showing the distribution of reporting intervals by device count (how many devices are reporting every 5 sec / 10 sec / 60 sec ...).
Intervals can vary from device to device.
Currently I'm sending a Set metric of deviceId with tags that represent the interval (5 sec, 10 sec, 30 sec, and so on), but I'm not sure that this is correct.
What is the best way to implement this?
Set is almost never the right custom metric type to use. It sends a count of the number of unique items per given tag. The details of the underlying items are stripped from the metric, meaning that from one time slice to the next you have no idea of the actual number of unique items over time.
For example
3:00:07-3:00:32 | 5 second bucket:[device1,device4,device7] -> 3 values
3:00:32-3:00:47 | 5 second bucket:[device1,device3] -> 2 values
Your time series to Datadog will report 3, and then 2. But because the underlying device info is stripped, you have no idea how to combine that 2 and 3 if you zoom out in time and roll the numbers up to show 1 data point per minute. It could be any number from 3 to 5, but the Datadog backend has no idea (even though we know that across those two slices there were 4 unique devices in total).
Plus, even if it were somehow accurate, you can't create a useful alert from it or notify anyone, because you won't know which device is having issues if you see a spike of devices in the 60-second bucket.
So let's go through other metric options.
In practice, the only metric types worth using are distributions, gauges, or counts.
A gauge metric is just a measurement of a value at a point in time. It's usually good for things like the CPU or memory usage of a computer, or the temperature in a room: numbers for which it's impossible to collect every data point, so you just take a measurement every 10 seconds, every minute, or however often you need to in order to get an idea of the behavior.
A count metric is more exact: it's the number of things that happened. It's usually good for the number of requests to a server or the number of files processed, or even something like the number of bytes flowing through a system, although most people treat that as a gauge.
Distributions are good for when you want something like a gauge, but need detailed measurements for every single event that happens. For example, a web server is handling hundreds of requests per second and we need latency metrics for that server. It's not possible to send the latency of every request as a gauge: gauges have a built-in limit of 1 data point per second (in Datadog), and anything more sent within a 1-second interval gets dropped. But we need stats for every request, so a distribution summarizes the data: it keeps a running count, min, max, average, and optionally several percentiles (p50, p75, p99).
I haven't seen many good use cases for metric types outside of those three. For your scenario, it seems like you would want to send a distribution metric for the device reporting interval: device 1 sends a value of 10.14, device 3 sends a value of 2.3, and so on.
Then you can use a distribution widget in a dashboard to show the number of devices for each interval bucket.
Of course, make sure you tag each metric with the device that generated it.
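For illustration, here is a minimal sketch using the Python DogStatsD client (datadogpy); the metric name iot.device.report_interval and the example device IDs and values are assumptions, not anything from your setup:

```python
# Minimal sketch using the Python DogStatsD client (datadogpy).
# The metric name, device IDs, and values are hypothetical.
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)

def report_interval(device_id: str, interval_seconds: float) -> None:
    # One distribution point per report; Datadog keeps count, min, max,
    # avg, and percentiles, so a dashboard widget can bucket devices
    # by their reporting interval.
    statsd.distribution(
        "iot.device.report_interval",
        interval_seconds,
        tags=[f"device:{device_id}"],
    )

report_interval("device1", 10.14)
report_interval("device3", 2.3)
```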
I am currently working on a simple simulation that consists of 4 manufacturing workstations with different processing times and I would like to measure the WIP inside the system. The model is PennyFab2 in case anybody knows it.
So far, I have measured throughput and cycle time, and I am calculating WIP using Little's Law; however, the results don't match the expectations. The cycle time is measured using the time measure start and time measure end blocks, and the throughput simply by counting how many pieces flow out at the end of the simulation.
Any ideas on how to directly measure WIP without using Little's law?
Thank you!
For Little's Law you count the arrivals, not the exits... but maybe it doesn't make a difference...
Otherwise, there are many ways:
You can count the number of agents inside your system using a RestrictedAreaStart block and the entitiesInside() function.
You can just have a variable that adds +1 when something enters and -1 when something exits.
Either way, you need to record that value into a dataset or a statistics object so you can get the mean number of agents in your system (see the sketch after this list).
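Here is a generic sketch of the +1/-1 counter idea (plain Python, not AnyLogic code; in AnyLogic the increments would live in the blocks' on-enter/on-exit actions and the samples would go into a Statistics object):

```python
# Generic sketch (not AnyLogic code) of the +1/-1 WIP counter idea:
# increment on entry, decrement on exit, sample at a fixed interval,
# then average the samples to estimate mean WIP.
wip = 0          # current number of agents in the system
samples = []     # periodic snapshots of WIP

def on_enter():
    global wip
    wip += 1

def on_exit():
    global wip
    wip -= 1

def sample_wip():
    # Call this once per time unit that suits your model's resolution.
    samples.append(wip)

def mean_wip():
    return sum(samples) / len(samples) if samples else 0.0
```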
Little's Law defines the relationship between:
Work in Process (WIP)
Throughput (or Flow rate)
Lead Time (or Flow Time)
Little's Law states that WIP = Throughput × Lead Time, so if you have two of the three you can calculate the third.
Since you have a simulation model you can record all three items explicitly and this would be my advice.
Little's Law should then be used to validate whether you are recording the three values correctly.
You can record them as follows.
WIP = Record the average number of items in your system
The simplest way would be to count the number of items that have entered the system and subtract the number of items that have left it. You simply do this calculation every time unit that makes sense for the resolution of your model (hourly, daily, weekly, etc.) and save the values to a DataSet or Statistics object.
Lead Time = The time a unit takes from entering the system to leaving the system
If you are using the Process Modelling Library (PML), simply use the timeMeasureStart and timeMeasureEnd blocks; see the example model in the help file.
Throughput = the number of units out of the system per time unit
If you run the model and your average WIP is 10 units, and on average a unit takes 5 days to exit the system, your throughput will be 10 units / 5 days = 2 units/day.
You can validate this by taking the total number of units that exited your system by the end of the simulation and dividing it by the number of time units your model ran.
If you run a model with the above characteristics for 10 days, you would expect 20 units to have exited the system (see the quick check below).
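A quick numeric check of the example above, using only the numbers already given (a sketch, nothing model-specific):

```python
# Quick check of the example numbers using Little's Law:
# throughput = WIP / lead_time, exits ≈ throughput * run_length.
avg_wip = 10.0        # average number of units in the system
lead_time_days = 5.0  # average time a unit spends in the system
run_days = 10.0

throughput = avg_wip / lead_time_days   # 2 units/day
expected_exits = throughput * run_days  # 20 units

print(throughput, expected_exits)
```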
I'm working on neighbor discovery protocols in wireless ad-hoc networks. There are many protocols that rely only on beacon messages between nodes while the discovery phase is going on. On the other hand, there are other approaches that try to transmit more information (like a node's neighbor table) during discovery, in order to accelerate it. Depending on the time needed to listen to those messages, the discovery latency and power consumption vary. Suppose that the same hardware is used to transmit them and that there are no collisions.
I read that beacons can be sent extremely fast (less than 1 ms easily), but I haven't found anything about how long it takes to send/receive a bigger message. Let's say a message carrying around 50-500 numbers representing all the info about your neighbors. How much extra power is needed?
Update
Can this bigger message be divided into a bunch of beacon-sized messages? If it can, then I suppose the power used to transmit/listen grows linearly.
One possible solution is to divide the transmission into N different beacon-like messages, each carrying a small amount of extra information so that they can be put back together. In this way, the power used grows roughly linearly as N grows (see the sketch below).
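To make the linear-growth argument concrete, here is a back-of-the-envelope sketch; the bit rate, per-frame header size, payload size, and transmit power are assumed values, not taken from the question:

```python
# Back-of-the-envelope airtime/energy estimate for splitting a large
# payload into N beacon-sized frames. All numbers are assumptions.
BIT_RATE_BPS = 250_000   # e.g. a 250 kbit/s low-power radio
HEADER_BYTES = 16        # per-frame overhead (sync, addressing, sequence no.)
TX_POWER_W = 0.06        # radio power draw while transmitting

def tx_energy(payload_bytes: int, n_frames: int) -> float:
    """Energy (J) to transmit payload_bytes split across n_frames frames."""
    total_bytes = payload_bytes + n_frames * HEADER_BYTES
    airtime_s = total_bytes * 8 / BIT_RATE_BPS
    return TX_POWER_W * airtime_s

# 500 neighbor entries of 4 bytes each, sent as 1 frame vs. 10 frames:
print(tx_energy(500 * 4, 1))   # single large message
print(tx_energy(500 * 4, 10))  # split into 10 beacon-like frames
```

With a fixed total payload, the extra cost is the per-frame header overhead, so the transmit (and listen) energy grows roughly linearly in N, as suggested above.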
I'll briefly explain my purpose in case another solution would be much easier.
I have a server and a system. The system sends a "live-stat" packet twice a second and logs the time it was sent (epoch time). The server receives the packet and logs the receiving time.
I would like to know which of the packets arrived and the latency of their arrival.
Currently I'm inserting this data into an unorganized matrix, and I would like to compare the two matrices (server and system) to calculate the latency.
Is there any MATLAB command that could help me?
Another point is that I don't know how to get a numerical value for the latency; is there any formula or tool that I could use? (I'm currently planning to show the deviation of the latency on a graph.)
I'm talking about 500,000 packets per system, and I'm testing this on ~50 systems, so I'm looking for the most efficient calculation.
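For reference, here is a minimal sketch of the matching-and-differencing idea, written in Python/NumPy rather than MATLAB; the assumption is that each packet carries a sequence number (or an identical send timestamp) that appears in both logs. MATLAB's intersect returns matching indices in the same way ([C, ia, ib] = intersect(A, B)), so the same approach stays fully vectorized there too.

```python
# Generic sketch (NumPy) of the matching-and-differencing idea.
# Assumption: each packet carries a sequence number that appears in
# both logs, so sent/received rows can be matched by that key.
import numpy as np

# columns: [sequence_number, epoch_time]; the values here are hypothetical
sent = np.array([[1, 1000.00], [2, 1000.50], [3, 1001.00]])
recv = np.array([[1, 1000.12], [3, 1001.09]])  # packet 2 was lost

# intersect on the sequence number to find packets that actually arrived
common, sent_idx, recv_idx = np.intersect1d(
    sent[:, 0], recv[:, 0], return_indices=True
)

latency = recv[recv_idx, 1] - sent[sent_idx, 1]  # per-packet latency (s)
lost = np.setdiff1d(sent[:, 0], recv[:, 0])      # packets that never arrived

print(latency, lost)
```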