We are currently working on a warehouse management project that includes RFID tags and readers.
We're still at the beginning of it and are trying to design the different solutions.
Here's my situation:
We're going to have RFID tags (most likely UHF) on each of our devices we track.
Picture our current warehouse as a small room with rows of shelves; each shelf will have its own RFID reader, intended to track the location of the devices.
We want our warehouse workers to scan a device on the reader before they store it on the shelf.
My question is:
What are the possibilities around RFID technology that would allow us to ensure our RFID readers won't conflict with each other and start discovering tags they shouldn't, especially since we have UHF tags?
Can we get the distance of a device we just scanned and ignore anything beyond 10-15 cm? Can we limit the discovery range of the reader?
Thanks for reading.
Pretty much every UHF reader can be set to a reduced output power, which results in a reduced read range. However, the results are not easy to predict and require testing, and the read range may change if the environment changes (position of shelves, metal objects in the vicinity, and so on).
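Beyond lowering the transmit power, many UHF readers also report an RSSI value per tag read, and closer tags generally return stronger signals. A minimal sketch of filtering reads by RSSI (the EPC values and the -50 dBm threshold are hypothetical; a usable threshold has to be calibrated per reader and per shelf, since RSSI depends heavily on the environment):

```python
def filter_close_tags(reads, rssi_threshold_dbm=-50.0):
    """Keep only reads whose RSSI is at or above the threshold.

    reads: iterable of (epc, rssi_dbm) tuples as reported by the reader.
    """
    return [epc for epc, rssi in reads if rssi >= rssi_threshold_dbm]

# Simulated inventory round: one tag held right at this shelf's reader
# and two tags on neighbouring shelves with weaker signals.
reads = [
    ("E200-0001", -42.5),  # tag held ~10 cm from the reader
    ("E200-0002", -68.0),  # neighbouring shelf
    ("E200-0003", -74.3),  # further away
]
print(filter_close_tags(reads))  # ['E200-0001']
```

Note that RSSI is a proxy for distance, not a measurement of it; reflections off metal shelving can make a far tag look close, so testing in place is unavoidable.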
I'm looking at using Cloud Firestore to sync a multiplayer web game between players. However, this game involves continuous motion, like a player dragging a piece from one place to another, which would produce a stream of writes as the piece's position changes. Given that the free plan allows 20k writes per day, and in this case a dozen players could burn through 20k writes in a few minutes, I worry that the cost would rapidly spiral out of control.
Is it impossible to do this sort of thing with firestore? I'm basically talking about a continuous websocket connection keeping the game data synced between players.
The limits of Firestore are well-documented. You haven't really said what hard limits you're concerned about exceeding. The only thing you've indicated is limits regarding the perpetual free tier, which can be easily exceeded by simply paying for the product based on your usage.
If you're not willing to pay for the service based on your needs, then you should probably look for another service. If you are willing to pay, then you need to do the math to figure out what your specific needs are, and if they can be met by the documented limits.
In the absence of more specific information about what you're trying to achieve, there's not much else that can be said.
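If the per-write cost is the concern, one standard mitigation (independent of Firestore itself) is to coalesce the stream of drag events into periodic writes, so a one-second drag costs a couple of writes instead of a hundred. A minimal sketch of that throttling pattern, with a hypothetical `write_fn` standing in for the actual document update:

```python
import time

class ThrottledWriter:
    """Coalesce a stream of position updates into at most one backend
    write per `interval` seconds. Intermediate positions are dropped,
    which is usually acceptable for drag motion. A real implementation
    would also flush the last pending value when the drag ends."""

    def __init__(self, write_fn, interval=0.5):
        self.write_fn = write_fn      # hypothetical backend write call
        self.interval = interval
        self.pending = None
        self.last_flush = float("-inf")

    def update(self, value, now=None):
        now = time.monotonic() if now is None else now
        self.pending = value
        if now - self.last_flush >= self.interval:
            self.write_fn(self.pending)
            self.pending = None
            self.last_flush = now

writes = []
w = ThrottledWriter(writes.append, interval=0.5)
for i in range(100):                  # 100 drag events over ~1 second
    w.update({"x": i}, now=i * 0.01)
print(len(writes))                    # a handful of writes, not 100
```

Interpolating the piece's motion on the receiving clients hides the lower update rate, which is the usual trade-off in this pattern.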
At the moment I'm experimenting with and researching NFC.
I use MIFARE Classic 1K tags and noticed that there are some blocks/sectors I'm not allowed to write to. I've found an application that writes data to a tag and automatically skips those forbidden sectors. If I write an application using NFC tags myself, I don't want to declare the forbidden sectors by hand, so that no specific tag type is required.
So my question is: is there some sort of storage system (file system) for NFC tags, like NTFS for an SSD/HDD?
Maybe someone knows something about it or could give me a tip on what I should search for.
The datasheets for the tags detail the storage layout: https://www.nxp.com/docs/en/data-sheet/MF1S50YYX_V1.pdf
The tags will usually also conform to one of the NFC Forum specifications (most are available at http://apps4android.org/nfc-specifications/ ).
A lot of tags will also be able to store NDEF data (the spec for that is at https://github.com/haldean/ndef/tree/master/docs ).
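To make the NDEF option concrete, here is a minimal sketch of building and parsing an NDEF Text record (short-record form only, as an illustration; a real parser must also handle long records, chunked records, and multi-record messages):

```python
# Sketch: a minimal NDEF "Text" record (NFC Forum RTD-Text layout).

def make_text_record(text, lang="en"):
    """Build a short-form NDEF Text record: status byte with the
    language-code length, then the language code, then UTF-8 text."""
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    # 0xD1 = MB|ME|SR flags set, TNF = 0x01 (NFC Forum well-known type)
    return bytes([0xD1, 0x01, len(payload)]) + b"T" + payload

def parse_text_record(record):
    """Extract the text from a short-form NDEF Text record."""
    header, type_len, payload_len = record[0], record[1], record[2]
    assert header & 0x07 == 0x01          # TNF: well-known type
    assert record[3:3 + type_len] == b"T" # record type: Text
    payload = record[3 + type_len:3 + type_len + payload_len]
    lang_len = payload[0] & 0x3F          # low 6 bits of the status byte
    return payload[1 + lang_len:].decode("utf-8")

record = make_text_record("hello tag")
print(parse_text_record(record))  # hello tag
```

In practice a library (e.g. the Android NDEF APIs) handles this framing for you; the point is that NDEF is the closest thing to a portable "file system" across tag types.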
I would like to test a Radio Frequency IDentification (RFID) system and build it with NetLogo, because each RFID reader could be represented by an agent. Readers could act independently or exchange data (for example, the number of tags read) among themselves.
Tags are passive and are activated by backscattering the signal from the reader.
I don't have much experience modeling with NetLogo and I have not found similar example models.
I would like to ask for help that will allow me to start the model.
The first issue I have is how to represent the system. I have thought of creating a network where the nodes are of both types, readers and tags. Possible communication among them could be a link.
A more difficult programming issue for me is how to code a reader to be active for a specific time (e.g. a maximum of 4 seconds) followed by inactive periods of 100 milliseconds. If another reader is in proximity and both are active at the same time, there will be a collision and they cannot read tags.
I would appreciate any help very much.
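Not a NetLogo answer, but the duty-cycle timing described above can be sketched in plain Python to check the logic before porting it to a tick-based `go` loop (one tick = 100 ms is an assumption of this sketch):

```python
ACTIVE_TICKS = 40     # 4 s of activity at one tick = 100 ms
INACTIVE_TICKS = 1    # 100 ms inactive
CYCLE = ACTIVE_TICKS + INACTIVE_TICKS

def is_active(tick, phase_offset=0):
    """A reader is active during the first 40 ticks of each 41-tick cycle."""
    return (tick + phase_offset) % CYCLE < ACTIVE_TICKS

def can_read(tick, my_offset, neighbour_offsets):
    """A reader reads tags only if it is active and no neighbouring
    reader is active in the same tick (otherwise: collision)."""
    if not is_active(tick, my_offset):
        return False
    return not any(is_active(tick, off) for off in neighbour_offsets)

# With a ~98% duty cycle, two same-phase neighbours collide in every
# active tick, which is exactly the problem described.
print(sum(can_read(t, 0, [0]) for t in range(CYCLE)))   # 0 collision-free ticks
# A phase offset barely helps at this duty cycle; some scheduling
# or coordination between reader agents is needed.
print(sum(can_read(t, 0, [20]) for t in range(CYCLE)))  # 1 collision-free tick
```

In NetLogo the same idea would live in the `go` procedure: each reader agent checks `ticks` against its own cycle position and asks neighbouring readers (e.g. via `link-neighbors`) whether they are active.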
I'm trying to coordinate a triggered event across many smartphones within as small a time-frame as possible (ideally have them start within half a second or less of each other).
The problem:
From my experience, the local time value on the devices can't be relied on, and latency adds further trouble when syncing on a shared notion of the current time (imagine fetching the time from a remote source and converging on a decently close local estimate of that source's current time, ideally within a fraction of a second of it).
Are there any established techniques, mechanisms, or more accurate sources of some time value reference point that would allow for a planned event to be triggered on multiple devices within a fraction of a second of one another? The more I search, the more I realize this is not a trivial issue, however I thought it would be worth it to query the great minds of stackoverflow.
Thanks in advance for any and all help.
I've developed a technology that achieves synchronization of smartphones down to 10 milliseconds. Each device takes the UTC time from many clocks and performs a non-trivial convolution.
I have applied this to a massive event (http://massivesymphony.org) and I'm now providing the technology for several corporate events.
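The exact method above isn't published, but a common related approach is NTP-style offset estimation combined robustly across several time sources; taking the median discards samples polluted by asymmetric latency spikes. A minimal sketch with simulated round trips:

```python
import statistics

def estimate_offset(t_send, server_time, t_recv):
    """Classic NTP-style offset estimate for one round trip: assume the
    server's timestamp was taken halfway through the trip, so the local
    clock's offset is server_time minus the trip's midpoint."""
    return server_time - (t_send + t_recv) / 2.0

def combined_offset(samples):
    """samples: list of (t_send, server_time, t_recv) tuples, all in
    seconds. The median is robust to a few bad round trips."""
    return statistics.median(estimate_offset(*s) for s in samples)

# Simulated round trips: the local clock is ~0.30 s behind the servers,
# with one latency spike polluting the third sample.
samples = [
    (10.00, 10.35, 10.10),   # offset ~ +0.30
    (11.00, 11.31, 11.04),   # offset ~ +0.29
    (12.00, 12.90, 12.20),   # spike: offset ~ +0.80
]
print(round(combined_offset(samples), 2))  # 0.3
```

Once every device knows its offset from the common source, "fire at T" can be scheduled in the shared timebase, and the residual error is bounded by the round-trip asymmetry rather than by the devices' wall clocks.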
In case you are interested in more details, my contact is
José I. Latorre
Dept. of Physics, U. Barcelona
j.i.latorre#gmail.com
This is a bit of an open ended question, but I'm looking for an open ended answer. I'm looking for a resource that can help explain how to benchmark different systems, but more importantly how to analyze the data and make intelligent choices based on the results.
In my specific case, I have a four-server setup, including MongoDB, that serves as the backend for an iOS game. All servers are running Ubuntu 11.10. I've read numerous articles that make suggestions like "if CPU utilization is high, make this change." As a newcomer to backend architecture, I have no concept of what "high CPU utilization" is.
I am using Mongo's monitoring service (MMS), and I am gathering some information about it, but I don't know how to make choices or identify bottlenecks. Other servers serve requests from the game client to mongo and back, but I'm not quite sure how I should be benchmarking or logging important information from them. I'm also using Amazon's EC2 to host all of my instances, which also provides some information.
So, some questions:
What statistics are important to log on a backend setup? (CPU, RAM, etc)
What is a good way to monitor those statistics?
How do I analyze the statistics? (RAM usage is high/read requests are low, etc)
What tips should I know before trying to create a stress-test or benchmarking script for my architecture?
Again, if there is a resource that answers many of these questions, I don't need an explanation here, I was just unable to find one on my own.
If more details regarding my setup are helpful, I can provide those as well.
Thanks!
I like to think of performance testing as a mini-project that is undertaken because there is a real-world need. Start with the problem to be solved: is the concern that users will have a poor gaming experience if the response time is too slow? Or is the concern that too much money will be spent on unnecessary server hardware?
In short, what is driving the need for the performance testing? This exercise is sometimes called "establishing the problem to be solved." It is about the goal to be achieved, because if there is no goal, why go through all the work of testing the performance? Establishing the problem to be solved will eventually drive what to measure and how to measure it.
After the problem is established, the next step is to write down what questions have to be answered to know when the goal is met. For example, if the goal is to ensure the response times are low enough to provide a good gaming experience, some questions that come to mind are:
What is the maximum response time before the gaming experience becomes unacceptably bad?
What is the maximum response time that is indistinguishable from zero? That is, if 200 ms response time feels the same to a user as a 1 ms response time, then the lower bound for response time is 200 ms.
What client hardware must be considered? For example, if the game only runs on iOS 5 devices, then testing an original iPhone is not necessary because the original iPhone cannot run iOS 5.
These are just a few questions I came up with as examples. A full, thoughtful list might look a lot different.
After writing down the questions, the next step is to decide what metrics will provide answers to the questions. You have probably come across a lot of metrics already: response time, transactions per second, RAM usage, CPU utilization, and so on.
After choosing some appropriate metrics, write some test scenarios. These are the plain English descriptions of the tests. For example, a test scenario might involve simulating a certain number of games simultaneously with specific devices or specific versions of iOS for a particular combination of game settings on a particular level of the game.
Once the scenarios are written, consider writing the test scripts for whatever tool is simulating the server work loads. Then run the scripts to establish a baseline for the selected metrics.
After a baseline is established, change parameters and chart the results. For example, if one of the selected metrics is CPU utilization versus the number of TCP packets entering the server per second, make a graph to find out how utilization changes as packets/second goes from 0 to 10,000.
In general, observe what happens to performance as the independent variables of the experiment are adjusted. Use this hard data to answer the questions created earlier in the process.
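As a small illustration of turning raw data into answers, here is a sketch that reduces a run of response-time samples (simulated here; in practice they would come from the load-testing tool's logs) to summary numbers. The 95th percentile usually matters more than the mean, because tail latency is what users notice:

```python
import statistics

def summarize(samples_ms):
    """Reduce a list of response times (in ms) to summary statistics."""
    samples = sorted(samples_ms)
    p95_index = int(0.95 * (len(samples) - 1))   # nearest-rank percentile
    return {
        "mean": statistics.mean(samples),
        "median": statistics.median(samples),
        "p95": samples[p95_index],   # tail latency: what slow requests feel like
        "max": samples[-1],
    }

# One simulated run: mostly fast responses with a single outlier request.
run = [12, 15, 14, 13, 200, 16, 15, 14, 13, 12]
stats = summarize(run)
print(stats["median"], stats["p95"], stats["max"])  # 14.0 16 200
```

Comparing these numbers across baseline and parameter-changed runs is what turns "CPU utilization is high" from a vague observation into an answer to one of the questions written down earlier.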
I did a Google search on "software performance testing methodology" and found a couple of good links:
Check out the white paper Performance Testing Methodology by Johann du Plessis.
Have a look at the Methodology section of this Wikipedia article.