"Join connected to independent data sources " in Microsoft Robotics Developer Studio VPL - robotics-studio

I am working on a custom-made robot composed of a set of entities. I am trying to evaluate its center of gravity and zero-moment point, for which I need the center of gravity of every entity so I can apply the general rule for the center of gravity:
(X-coordinate of the body's center of gravity) × (total mass of the body) =
(X-coordinate of the center of gravity of entity 1) × (mass of entity 1)
+ ... +
(X-coordinate of the center of gravity of entity n) × (mass of entity n)
(The same applies to the other coordinates of the center of gravity.)
For that I need GPS sensors notifying the position of the center of gravity of every entity, and a way to feed all of them into the above calculation performed in the "Calculate" block.
The problem is that when I try to "Join" values from different instances of the GPS sensor, the following error pops up:
"The Join is connected to independent data sources. It will never complete. Try revising your connections."
(Attached is the image of the VPL Diagram).
https://docs.google.com/file/d/0B2w3mmBOvQsIWHBiR2NvUmxHUnc/edit?usp=sharing
Someone please help me out.

The problem
Because VPL cannot know the expected data rate of the two data sources, and the join only fires when there are items on all branches (consuming those items when it does), there are two problems: the join may never fire, and the data may get out of sync (imagine the two data sources firing at 1 Hz and 2 Hz; the first elements on each branch of the join will drift apart in time).
A solution
In your diagram, use the data sources to set variables with their values (which it appears you already do).
Use one of the data sources (or some other periodic notification) to trigger the computation using the most recent values that have been set into the state. While you may not always have the very latest data, it will always be fairly recent.
Alternatively, you can put a merge where you currently have the join and use a notification from either data source to trigger the computation (again, using the most recent values that have been set into the state).
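Outside VPL, the "store the latest values, compute on a single trigger" pattern looks roughly like the sketch below (Python, with made-up entity names and masses; the update and compute functions are hypothetical stand-ins for the GPS notifications and the Calculate block):

```python
# Sketch of the "store latest values, compute on one trigger" pattern.
# Entity names, masses, and the update interface are illustrative assumptions.

latest_positions = {}  # entity name -> (x, y, z) reported by its GPS sensor
masses = {"torso": 4.0, "left_leg": 1.5, "right_leg": 1.5}  # kg, made up

def on_gps_notification(entity, position):
    """Called whenever any GPS sensor fires; just record the newest value."""
    latest_positions[entity] = position

def compute_center_of_gravity():
    """Triggered by one periodic source; uses whatever values are in state."""
    total_mass = sum(masses[e] for e in latest_positions)
    if total_mass == 0:
        return None  # no readings yet
    return tuple(
        sum(masses[e] * latest_positions[e][axis] for e in latest_positions) / total_mass
        for axis in range(3)
    )  # (x, y, z) of the combined centre of gravity
```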

Related

SEIR infection charts going up and down

I have a problem constructing my SEIR model in system dynamics.
I want to create an infection chart that goes up, down, and up again.
How do I go about creating it? The one I currently simulated only goes up and then down, meaning the virus dies out at some point.
If you want your infections to go up and down, you need a transition where a person either never becomes immune or loses immunity over time; see the stock-and-flow diagram below.
This is then technically a SIRS model, since a recovered person can become susceptible again.
I used a model from the AnyLogic Cloud as a base
https://cloud.anylogic.com/model/d465d1f5-f1fc-464f-857a-d5517edc2355?mode=SETTINGS
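For reference, here is a minimal sketch of the SIRS idea in plain Python (the parameter values are made up, not taken from the AnyLogic model): the immunity-loss flow from Recovered back to Susceptible is what allows infections to rise again.

```python
# SIRS toy model: recovered individuals lose immunity at rate xi and return
# to the Susceptible stock, which allows repeated infection waves.
# All parameter values are illustrative assumptions.

def simulate_sirs(beta=0.3, gamma=0.1, xi=0.01, days=1000, dt=0.1,
                  s0=0.99, i0=0.01, r0=0.0):
    s, i, r = s0, i0, r0
    infected = []
    for _ in range(int(days / dt)):
        ds = -beta * s * i + xi * r     # susceptible: infection out, immunity loss in
        di = beta * s * i - gamma * i   # infected: infection in, recovery out
        dr = gamma * i - xi * r         # recovered: recovery in, immunity loss out
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        infected.append(i)
    return infected

series = simulate_sirs()
print(max(series), series[-1])  # with xi > 0 the infected fraction can rebound
```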

Orientation of body

We need to build a patient monitoring system that monitors the position of a patient's body while the patient is in bed. What we want is, while the patient is sleeping, to determine which position he or she is in: facing the ceiling, facing the left side, or facing the right side.
We are using an IMU sensor, and we have the gyroscope, accelerometer, and magnetometer readings on the x, y, and z axes. Please suggest what to do next in order to detect the above positions.
With only one (simple) sensor attached to a body, it's rather difficult. With multiple sensors (and the ability to distinguish between them), you could possibly triangulate each of them to get relative distances on the three spatial axes.
But, without more information than you currently have (including perhaps some details on what the sensor actually senses), I don't think it's possible.

Object Tracking in non static environment

I am working on a drone-based video surveillance project in which I am required to implement object tracking. I have tried conventional approaches, but these seem to fail because of the non-static environment.
This is an example of what I would want to achieve, but it uses background subtraction, which is impossible with a non-static camera.
I have also tried feature-based tracking using SURF features, but it fails for smaller objects and is prone to false positives.
What would be the best way to achieve the objective in this scenario?
Edit: An object can be anything within a defined region of interest. The object will usually be a person or a vehicle. The idea is that the user will draw a bounding box to define the region of interest, and the drone then has to start tracking whatever is within this region of interest.
Tracking local features (like SURF) won't work in your case. Training a classifier (like Boosting with HAAR features) won't work either. Let me explain why.
Your object to track will be contained in a bounding box. Inside this bounding box there could be any object, not just a person, a car, or whatever else you used to train your classifier.
Also, the bounding box will contain background around the object, and that background changes as soon as your target moves, even if the appearance of the object itself doesn't change.
Moreover, the appearance of your object changes (e.g. a person turns or drops their jacket, a vehicle catches a reflection of the sun, etc.), or the object gets (partially or totally) occluded for a while. So tracking local features is very likely to lose the tracked object very soon.
So the first problem is that you must deal with potentially a lot of different objects, possibly unknown a priori, to track and you cannot train a classifier for each one of these.
The second problem is that you must follow an object whose appearance may change, so you need to update your model.
The third problem is that you need some logic that tells you that you lost the tracked object, and you need to detect it again in the scene.
So what to do? Well, you need a good long term tracker.
One of the best (to my knowledge) is Tracking-Learning-Detection (TLD) by Kalal et al. You can see a lot of example videos on the dedicated page, and you can see that it works pretty well with moving cameras, objects that change appearance, etc.
Luckily for us, OpenCV 3.0.0 has an implementation for TLD, and you can find a sample code here (there is also a Matlab + C implementation in the aforementioned site).
The main drawback is that this method can be slow. Test whether that is an issue for you; if so, you can downsample the video stream, upgrade your hardware, or switch to a faster tracking method, depending on your requirements and needs.
Good luck!
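A rough Python sketch of driving OpenCV's TLD tracker on a user-drawn bounding box follows; the answer mentions OpenCV 3.0.0, but in recent opencv-contrib-python builds the tracker lives in the legacy module, so the factory name is version-dependent, and the video filename here is a placeholder:

```python
import cv2

# Sketch: track a user-selected ROI with OpenCV's TLD implementation.
# The factory name varies by OpenCV version; both variants are tried below.
# "drone_footage.mp4" is a placeholder filename.

def create_tld_tracker():
    if hasattr(cv2, "legacy") and hasattr(cv2.legacy, "TrackerTLD_create"):
        return cv2.legacy.TrackerTLD_create()   # OpenCV 4.x + contrib
    return cv2.TrackerTLD_create()              # OpenCV 3.x + contrib

cap = cv2.VideoCapture("drone_footage.mp4")
ok, frame = cap.read()
bbox = cv2.selectROI("Select object", frame, fromCenter=False)  # user draws the ROI
tracker = create_tld_tracker()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)
    if found:
        x, y, w, h = (int(v) for v in bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    else:
        cv2.putText(frame, "Lost - re-detecting", (20, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow("TLD tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
```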
The simplest thing to try is frame differencing instead of background subtraction. Subtract the previous frame from the current frame, threshold the difference image to make it binary, and then use some morphology to clean up the noise. With this approach you typically only get the edges of the objects, but often that is enough for tracking.
You can also try to augment this approach using vision.PointTracker, which implements the KLT (Kanade-Lucas-Tomasi) point tracking algorithm.
Alternatively, you can try using dense optical flow. See opticalFlowLK, opticalFlowHS, and opticalFlowLKDoG.
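vision.PointTracker and the opticalFlow* objects above are MATLAB Computer Vision Toolbox names; purely as an illustration of the frame-differencing step, an OpenCV/Python sketch (placeholder filename, made-up threshold and kernel sizes) might look like this:

```python
import cv2

# Frame differencing: subtract the previous frame from the current one,
# threshold the difference, then clean up with morphology.
cap = cv2.VideoCapture("drone_footage.mp4")  # placeholder filename
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    cv2.imshow("motion mask", mask)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
```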

How does multitouch IR touch screen work

I am doing research on touch screens and I could not find a good source, apart from the image below, that explains how multi-touch IR systems work. Single-touch IR systems are pretty simple: on two sides of the panel, say the left and top, are the IR transmitters, and on the right and bottom are the receivers. If a user touches somewhere in the middle, the IR path is interrupted and the ray never reaches the receiving end, so the processor can pick up the coordinates. But this does not work for multi-touch systems, because this approach suffers from ghost points.
Below I have an image of PQ Labs' multi-touch IR system working, but as there is no explanation given, I am not able to understand how it works. Any help will be greatly appreciated.
I believe they have a special algorithm to reject the ghost points caused by the crossing emitter beams. But this algorithm will not work every time, so if you put your fingers very close to each other, a ghost point may still show up.
My guess:
1. The sensors are analog (there must be an analog-to-digital converter to read each opto-transistor, i.e. IR receiver).
2. LEDa and LEDb are not on at the same time.
3. The opto-transistors run in a linear range (not in saturation) when no object is present.
One object:
4. When one object is placed on the surface, less light reaches some of the opto-transistors. This shows up as a reading that is lower than the reading when no object is present. The reading of the opto-transistor array (an array holding the reading from each opto-transistor) provides information about:
4.1. how many opto-transistors are completely shaded (off);
4.2. which opto-transistors are affected.
Please note: a reading from one LED is not sufficient to determine the object's position.
To get the object's location we need two readings (one from LEDa and one from LEDb). Then we can calculate the object's position and size, since we know the geometry of the screen.
Two Objects:
Now each array may have "holes" (there will be two groups) in the shaded area. These holes will indicate that there is an additional object.
If the objects are closed to each other the holes may not be seen. However, there are many LEDs. So there will be multiple arrays (one for each LED) and based on the presented geometry these holes may be seen by some of the LEDs.
For more information please see US patent#: US7932899
Charles Bibas
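As an illustration of the "two readings give you the position" step above, here is a small geometric sketch (Python, with made-up coordinates that are not taken from the PQ Labs hardware): each LED and the centre of its shadowed receivers define a line, and the object sits near the intersection of the two lines.

```python
# Illustrative geometry only: given two LEDs at known positions and, for each,
# the centre of the group of shadowed receivers on the opposite edge, the object
# lies near the intersection of the two LED-to-shadow lines.

def intersect(p1, p2, p3, p4):
    """Intersection point of line p1-p2 with line p3-p4 (2D)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel lines
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

led_a = (0.0, 50.0)        # LED on the left edge (made-up coordinates, in mm)
shadow_a = (200.0, 80.0)   # centre of shadowed receivers on the right edge
led_b = (100.0, 0.0)       # LED on the top edge
shadow_b = (120.0, 150.0)  # centre of shadowed receivers on the bottom edge

print(intersect(led_a, shadow_a, led_b, shadow_b))  # approximate object position
```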

CoreData for Exercise App

I'm in the process of creating my first iPhone app. It is an exercise log that will allow users to use GPS to track a run, then save a map of the route as well as the time/distance and upload it to a website. A local list of runs would also be saved on the device. My question is: what is the best way to implement saving and retrieving the map? I recall reading somewhere that the way to do it is to have entities with latitude and longitude attributes, and then fetch them in reverse time order when plotting the map. This would mean that each entity is a point along the run. Is there a way to store all of the coordinates in an array in one entity, so that one entity represents a whole run?
I haven't really looked at relationships since I'm new to app development, but it seems like I could use relationships to store runs? As in, have the parent entity be the run, and have one of the destinations be all of the coordinate entities of that run. Does this sound correct?
Thanks!
Having the run as an entity makes sense. For the waypoints along the route, I suggest a relationship with one-to-many cardinality (that is, one run has many waypoints). The attributes of the run might include start date/time and end date/time. The waypoint attributes might be latitude, longitude, altitude, and date/time. You'll probably want to experiment with how you decide to log a waypoint during the run: maybe collect one every minute, or whenever the runner has moved a certain distance from the last waypoint.
The waypoint with the earliest date/time is the starting point, and the waypoint with the latest date/time is the ending point.
With the above, you can plot the route on a map and calculate the speed between waypoints, the average speed, the total distance, and maybe some sort of difficulty factor based on altitude changes.
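The distance and speed arithmetic is independent of Core Data; as a rough illustration (Python, with made-up waypoint tuples rather than fetched managed objects), the computation over time-ordered waypoints might look like this:

```python
import math

# Illustrative only: per-segment speed, average speed, and total distance
# from time-ordered waypoints. Waypoints are (unix_time, lat_deg, lon_deg)
# tuples; the values below are made up.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

waypoints = [
    (0,   37.3318, -122.0312),
    (60,  37.3325, -122.0301),
    (120, 37.3333, -122.0290),
]

total_m = 0.0
for (t0, la0, lo0), (t1, la1, lo1) in zip(waypoints, waypoints[1:]):
    d = haversine_m(la0, lo0, la1, lo1)
    total_m += d
    print(f"segment speed: {d / (t1 - t0):.2f} m/s")

elapsed = waypoints[-1][0] - waypoints[0][0]
print(f"total distance: {total_m:.0f} m, average speed: {total_m / elapsed:.2f} m/s")
```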