Controlling time of agents to simulate RFID readers - NetLogo

I would like to test a Radio-Frequency Identification (RFID) system and build it with NetLogo, because each RFID reader could be represented by an agent. Readers could act independently or exchange data (for example, the number of tags read) among themselves.
Tags are passive and are activated by backscattering the signal from the reader.
I do not have much experience modeling with NetLogo and I have not found similar example models.
I would like to ask for help that will allow me to start the model.
The first issue I have is how to represent the system. I have thought of creating a network where the nodes are of both types, readers and tags, and possible communication among them could be a link.
A more difficult programming issue for me is how to code a reader so that it is active for a specific time (e.g. a maximum of 4 seconds) followed by inactive periods of 100 milliseconds. If another reader is in the vicinity and both are active at the same time, there will be a collision and they cannot read tags.
I would very much appreciate any help.
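To make the timing question more concrete, this is the tick-based logic I have in mind, sketched in Python rather than NetLogo (the class and names are only illustrative). One simulated tick stands for 100 ms, each reader keeps a countdown timer and toggles between an active phase of up to 40 ticks (4 s) and an idle phase of 1 tick (100 ms), and a read only succeeds when no linked neighbor is active in the same tick. I assume the same structure would map onto a NetLogo go procedure, with readers as a breed and proximity represented by links.

```python
# Sketch of the duty-cycle logic; one simulated tick = 100 ms.
TICK_MS = 100
ACTIVE_TICKS = 4000 // TICK_MS   # at most 4 s of reading
IDLE_TICKS = 1                   # 100 ms of inactivity

class Reader:
    def __init__(self, name):
        self.name = name
        self.neighbors = []      # readers close enough to collide with
        self.active = True
        self.timer = ACTIVE_TICKS

    def step(self):
        """Advance this reader's duty cycle by one 100 ms tick."""
        self.timer -= 1
        if self.timer <= 0:
            # End of the current phase: flip between active and idle.
            self.active = not self.active
            self.timer = ACTIVE_TICKS if self.active else IDLE_TICKS

    def can_read_tags(self):
        # Reads succeed only when this reader is active and no neighbor is.
        return self.active and not any(n.active for n in self.neighbors)

r1, r2 = Reader("R1"), Reader("R2")
r1.neighbors, r2.neighbors = [r2], [r1]
r2.timer = ACTIVE_TICKS // 2     # stagger the two duty cycles

for tick in range(200):
    for r in (r1, r2):
        r.step()
    if tick % 20 == 0:
        print(tick, r1.can_read_tags(), r2.can_read_tags())
```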

Related

Route Costing in AnyLogic

I am trying to simulate a manufacturing system that uses Automated Guided Vehicles (AGVs) to carry loads around the network to be processed. While the AGVs are travelling, it is ideal for them to pick the fastest route to the destination (not necessarily the shortest).
Here is my model
I am kind of stuck trying to implement a route costing algorithm, because I am not too familiar with the intricacies of this program yet. Can anyone kindly give me a rough idea of how it could be implemented in pseudocode for the following scenario:
The load needs to move from A to B and there are three possible paths. However, there is congestion in the red highlighted areas that will cause the load to take a longer time to reach point B.
How can I read the network to check for congestion and also calculate the various times needed to go to point B?
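In rough pseudocode terms, the idea I have is a shortest-path search where each edge's cost is its travel time inflated by a congestion factor, something like the Python sketch below. This is not AnyLogic code; the network, speeds, and congestion factors are made-up placeholders, and in the real model the congestion factor could come from counting how many AGVs currently occupy each path segment.

```python
import heapq

# Hypothetical network: edge -> (length_m, speed_mps, congestion_factor).
# A congestion_factor > 1 means that segment currently takes longer to traverse.
edges = {
    ("A", "C"): (100, 2.0, 1.0),
    ("C", "B"): (120, 2.0, 3.0),   # congested (red) segment
    ("A", "D"): (150, 2.0, 1.0),
    ("D", "B"): (150, 2.0, 1.0),
    ("A", "B"): (400, 2.0, 1.0),
}

def travel_time(length_m, speed_mps, congestion):
    return (length_m / speed_mps) * congestion

def fastest_route(start, goal):
    """Dijkstra's algorithm over travel time instead of distance."""
    graph = {}
    for (u, v), data in edges.items():
        graph.setdefault(u, []).append((v, travel_time(*data)))
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + t, nxt, path + [nxt]))
    return float("inf"), []

print(fastest_route("A", "B"))   # picks A -> D -> B, avoiding the congested leg
```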

Terminology: "live-dvr" in MPEG-DASH streaming

I'm working with live MPEG-DASH streaming, and I would like to know if there is a standard term for a given functionality.
It's the "live-dvr" functionality. That is, a mix between a live stream and VOD features: a live stream with a seek bar in the player that allows watching past stream time. This involves a series of infrastructure tweaks.
The term "live-dvr" for this setup is kind of informal, and different parties call it their own way: "live catch-up", "live-vod", "cached live", some vendors name it after their product lines, and so on. I would like to know if there's a standard term for this kind of setup, especially because interpreting the standard in order to understand the setup parameters for the manifests may be confusing or even misleading without proper terminology.
The MPEG-DASH standard only mentions a timeShiftBufferDepth, which specifies how long after the availability of a segment it is still available on the server.
From the spec:
@timeShiftBufferDepth specifies the duration of the time shifting buffer for this Representation that is guaranteed to be available for a Media Presentation with type 'dynamic'.
There is no mention at all of DVR in the spec, so time shift seems to be the term used by MPEG-DASH. HLS, however, does not mention DVR or time shift at all.
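To make concrete what the seekable window means in practice, here is how I currently read timeShiftBufferDepth, as a rough Python sketch. The attribute names availabilityStartTime and timeShiftBufferDepth are the real MPD ones, but the values are invented and the live-edge handling is simplified; a real player would also account for segment duration and suggestedPresentationDelay.

```python
from datetime import datetime, timedelta, timezone

# Example values that would normally be parsed from the MPD attributes
# availabilityStartTime and timeShiftBufferDepth (e.g. PT2H = 2 hours).
availability_start = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
time_shift_buffer_depth = timedelta(hours=2)

def dvr_window(now):
    """Return the (start, end) of the seekable range for a 'dynamic' MPD.

    The live edge is approximated as 'now'; the window reaches back at most
    timeShiftBufferDepth, but never before the stream became available.
    """
    window_end = now
    window_start = max(availability_start, now - time_shift_buffer_depth)
    return window_start, window_end

start, end = dvr_window(datetime.now(timezone.utc))
print("seekable from", start, "to", end)
```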
DVR (Digital Video Recording, also known as nDVR, network DVR) is functionality that allows recording the live stream and playing it back from any moment of the recorded period. The live stream can still run while the end-user rewinds it to any particular moment in the past.
Typically media servers (like our Nimble Streamer) also provide time-shift and time range selection - see our links for details.

Limiting distance of RFID readers

We are currently working on a warehouse management project that includes RFID tags and readers.
We're still at the beginning of it and are trying to design the different solutions.
Here's my situation:
We're going to have RFID tags (most likely UHF) on each of the devices we track.
Picture our current warehouse as a small room with rows of shelves, and each shelf will have its own RFID reader, intended to track the location of the devices.
We want our warehouse workers to scan a device on the reader before they store it.
My question is:
What are the possibilities around RFID technologies that would allow us to ensure our RFID readers won't conflict with each other and start discovering tags they shouldn't, especially since we have UHF tags?
Can we get the distance of a device we just scanned and ignore anything beyond 10-15 cm? Can we limit the discovery range of the reader?
Thanks for reading this through.
Pretty much every UHF reader can be set to a reduced output power, which results in a reduced read range. However, the results are not easy to predict and require testing, and the read range may change if the environment changes (position of shelves, metal objects in the vicinity, and so on).
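As a rough illustration of combining that with a software-side filter on the reported signal strength (RSSI), here is a hedged Python sketch. The reader object and its method names are placeholders for whatever vendor SDK or LLRP library ends up being used, not a real API, and both thresholds would have to be calibrated on site.

```python
# Illustrative only: 'reader' stands in for a vendor SDK or LLRP client;
# set_tx_power() and inventory() are hypothetical method names.
TX_POWER_DBM = 15          # reduced from a typical ~30 dBm to shrink read range
RSSI_CUTOFF_DBM = -45      # tag reads weaker than this are treated as "too far"

def read_nearby_tags(reader):
    """Return EPCs of tags that are likely within the intended short range."""
    reader.set_tx_power(TX_POWER_DBM)        # hypothetical call
    nearby = []
    for tag in reader.inventory():           # hypothetical call returning tag reads
        # Most UHF readers report an RSSI per read; filtering on it is a
        # second, software-side way to ignore tags picked up from other shelves.
        if tag.rssi >= RSSI_CUTOFF_DBM:
            nearby.append(tag.epc)
    return nearby
```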

Mobile phone app event timing synchronization

I'm trying to coordinate a triggered event across many smartphones within as small a time-frame as possible (ideally have them start within half a second or less of each other).
The problem:
From my experience, the local time value on the devices can't be relied on, and latency makes it hard to sync a value for the current time (imagine trying to get the current time from some remote source and converging on a local estimate that is within a fraction of a second of that source).
Are there any established techniques, mechanisms, or more accurate sources of some time reference point that would allow a planned event to be triggered on multiple devices within a fraction of a second of one another? The more I search, the more I realize this is not a trivial issue; however, I thought it would be worth querying the great minds of Stack Overflow.
Thanks in advance for any and all help.
I've developed a technology that achieves synchronization of smartphones down to 10 milliseconds. Each device takes the UTC time from many clocks and makes a non-trivial convolution.
I have applied this to a massive event (http://massivesymphony.org) and I'm now providing the technology for several corporate events.
In case you are interested in more details, my contact is
José I. Latorre
Dept. of Physics, U. Barcelona
j.i.latorre#gmail.com
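For reference, a much cruder do-it-yourself baseline than the technology described above is to estimate each device's clock offset against several NTP servers, take the median, and fire the event at an agreed UTC timestamp corrected by that offset. A minimal Python sketch, assuming the ntplib package is installed; the server names and the trigger delay are just examples.

```python
import statistics
import time

import ntplib  # assumption: the third-party ntplib package is installed

NTP_SERVERS = ["0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"]  # examples

def estimate_clock_offset():
    """Estimate the local clock's offset from UTC in seconds (median over servers)."""
    offsets = []
    client = ntplib.NTPClient()
    for host in NTP_SERVERS:
        try:
            response = client.request(host, version=3, timeout=2)
            offsets.append(response.offset)
        except Exception:
            continue  # skip servers that do not answer in time
    return statistics.median(offsets) if offsets else 0.0

def wait_until(trigger_epoch_utc, offset):
    """Sleep in small steps until the agreed UTC trigger time, corrected by offset."""
    while time.time() + offset < trigger_epoch_utc:
        time.sleep(0.005)

offset = estimate_clock_offset()
# In a real deployment the trigger timestamp would be distributed by a server;
# here it is just a demo value a few seconds in the future.
trigger = time.time() + offset + 5.0
wait_until(trigger, offset)
print("event fired")
```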

How should I benchmark a system to determine the overall best architecture choice?

This is a bit of an open ended question, but I'm looking for an open ended answer. I'm looking for a resource that can help explain how to benchmark different systems, but more importantly how to analyze the data and make intelligent choices based on the results.
In my specific case, I have a 4-server setup that includes mongo, which serves as the backend for an iOS game. All servers are running Ubuntu 11.10. I've read numerous articles that make suggestions like "if CPU utilization is high, make this change." As a newcomer to backend architecture, I have no concept of what "high CPU utilization" is.
I am using Mongo's monitoring service (MMS), and I am gathering some information about it, but I don't know how to make choices or identify bottlenecks. Other servers serve requests from the game client to mongo and back, but I'm not quite sure how I should be benchmarking or logging important information from them. I'm also using Amazon's EC2 to host all of my instances, which also provides some information.
So, some questions:
What statistics are important to log on a backend setup? (CPU, RAM, etc)
What is a good way to monitor those statistics?
How do I analyze the statistics? (RAM usage is high/read requests are low, etc)
What tips should I know before trying to create a stress-test or benchmarking script for my architecture?
Again, if there is a resource that answers many of these questions, I don't need an explanation here; I was just unable to find one on my own.
If more details regarding my setup are helpful, I can provide those as well.
Thanks!
I like to think of performance testing as a mini-project that is undertaken because there is a real-world need. Start with the problem to be solved: is the concern that users will have a poor gaming experience if the response time is too slow? Or is the concern that too much money will be spent on unnecessary server hardware?
In short, what is driving the need for the performance testing? This exercise is sometimes called "establishing the problem to be solved." It is about the goal to be achieved, because if there is no goal, why go through all the work of testing the performance? Establishing the problem to be solved will eventually drive what to measure and how to measure it.
After the problem is established, the next step is to write down what questions have to be answered to know when the goal is met. For example, if the goal is to ensure the response times are low enough to provide a good gaming experience, some questions that come to mind are:
What is the maximum response time before the gaming experience becomes unacceptably bad?
What is the maximum response time that is indistinguishable from zero? That is, if 200 ms response time feels the same to a user as a 1 ms response time, then the lower bound for response time is 200 ms.
What client hardware must be considered? For example, if the game only runs on iOS 5 devices, then testing an original iPhone is not necessary because the original iPhone cannot run iOS 5.
These are just a few questions I came up with as examples. A full, thoughtful list might look a lot different.
After writing down the questions, the next step is to decide what metrics will provide answers to the questions. You have probably come across a lot of metrics already: response time, transactions per second, RAM usage, CPU utilization, and so on.
After choosing some appropriate metrics, write some test scenarios. These are the plain English descriptions of the tests. For example, a test scenario might involve simulating a certain number of games simultaneously with specific devices or specific versions of iOS for a particular combination of game settings on a particular level of the game.
Once the scenarios are written, consider writing the test scripts for whatever tool is simulating the server work loads. Then run the scripts to establish a baseline for the selected metrics.
After a baseline is established, change parameters and chart the results. For example, if one of the selected metrics is CPU utilization versus the number of TCP packets entering the server per second, make a graph to find out how utilization changes as packets/second goes from 0 to 10,000.
In general, observe what happens to performance as the independent variables of the experiment are adjusted. Use this hard data to answer the questions created earlier in the process.
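As one concrete example of the "run the scripts and establish a baseline" step, a small sampler like the following can log CPU and RAM utilization while whatever load-generation tool you choose drives traffic at a fixed rate; re-running it at several load levels gives the data points for the utilization-versus-load chart. This is a Python sketch assuming the psutil package is installed; the duration, interval, and file name are arbitrary.

```python
import csv
import time

import psutil  # assumption: the third-party psutil package is installed

def sample_metrics(duration_s=60, interval_s=1, path="baseline.csv"):
    """Record CPU and RAM utilization once per interval while a load test runs."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t_seconds", "cpu_percent", "ram_percent"])
        start = time.time()
        while time.time() - start < duration_s:
            cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
            ram = psutil.virtual_memory().percent
            writer.writerow([round(time.time() - start, 1), cpu, ram])

if __name__ == "__main__":
    sample_metrics()
```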
I did a Google search on "software performance testing methodology" and found a couple of good links:
Check out this white paper Performance Testing Methodology by Johann du Plessis
Have a look at the Methodology section of this Wikipedia article.