Similar to the Exposure to Radiation example model, I'd like to update a person's exposure level depending on whether or not they are inside a circle. I'm using an SD flow to increment a totalExposure stock. The CurrentExposureLevel flow is determined by a function exposureLevel(person.getX(), person.getY()), which returns an int. The function body is:
int l = 0;
if (myCircle.contains(x, y)) {
    l = 5;
}
return l;
The person follows a path which passes through the circle, but not for the entire length of the path. The issue I'm experiencing is that the flow never runs, and therefore the stock doesn't increment.
You need to constantly check whether the agent is inside the circle. Use a cyclic event that calls your function. Or adjust the function to return a boolean (true if inside, false if not) and adjust your value accordingly.
Obviously, ensure your event checks often enough :)
Note: If you replace your oval presentation with a shape from the material-handling library, they have actual code boxes "on enter" and "on exit". You could use those instead of a cyclic event, if applicable.
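For reference, the cyclic-check logic can be sketched in plain Java, outside AnyLogic. The circle geometry, the sampling interval, and the straight-line path below are all assumptions for illustration; in the model, the circle would be your presentation oval and the loop would be a cyclic event.

```java
import java.awt.geom.Ellipse2D;

public class ExposureCheck {
    // Hypothetical circle region; in AnyLogic this would be the presentation oval.
    static final Ellipse2D myCircle = new Ellipse2D.Double(100, 100, 50, 50);

    // Same logic as the question's exposureLevel(x, y) function.
    static int exposureLevel(double x, double y) {
        return myCircle.contains(x, y) ? 5 : 0;
    }

    public static void main(String[] args) {
        double totalExposure = 0;
        double dt = 0.1; // assumed cyclic-event interval, in model time units
        // Simulate a person walking a straight path that crosses the circle.
        for (double x = 0; x <= 300; x += 1) {
            totalExposure += exposureLevel(x, 125) * dt; // accumulate like the stock
        }
        System.out.println(totalExposure > 0); // the stock grows while inside the circle
    }
}
```

The point is that the exposure function is only useful if something samples it repeatedly; a flow expression that is never recalculated will sit at zero, which matches the symptom in the question.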
Related
In my model, there are pickers moving along a picking aisle. They pick up "Box" agents from a "picking slot", and then transfer those "Box" agents into a conveyor flowchart using the enter.take() method. The specific conveyor and entry point on that conveyor are dynamically defined according to the picker's current location.
A simple flowchart like so:
It works most of the time, but when traffic gets high, I end up with the following error.
An agent was not able to leave the port root.enter_convey.out at time
784.505 / date Mar 8, 2021, 12:12:04 AM (current model time is 785.088). Consider increasing capacities and/or throughputs of the subsequent object(s) or using PULL protocol
I suspect it is due to a presentation of the "Box" agent existing within the entry area of the ConveyorPath during the time the following agent is slated to enter. Is that correct? If not, what is the issue?
If my suspicions are correct, how would I go about finding out whether the entry zone of the conveyor I am trying to place agents on is occupied? And how would I go about writing a condition in order to only send agents into the conveying flowchart if the space is free?
EDIT - Additional details, follow up to Yashar's answer:
I have multiple conveyor/picking aisles, and within each of those are multiple pickers.
Let's say picker 1 is dropping off Box X at offset A, and there is currently no space. Box X enters and stays in the queue.
At the same moment, picker 2 is dropping off Box Y at offset B, and there is also no space. Box Y enters and stays in the queue behind Box X.
Now according to the Queue block functions, even if a space frees up at offset B for Box Y, Box Y would still have to wait for Box X to enter the conveyor before it can enter itself. That would not be the behavior I am looking for. Am I correct in my understanding of the Queue block?
Thank you.
You can add a Queue block after enter_convey. Don't forget to tick "Maximum capacity" there. If your conveyor system can only accommodate a maximum number of units, then it is natural that beyond that limit no more units can enter the system. Alternatively, keep agents at the previous station (using a Delay block with the stopDelay() option) and, whenever space frees up on the conveyor, send a signal to stop the delay and release the agent into the conveyor system.
Is there a way to check if there is a line of sight between two agents assuming some buildings and presentation markup?
(Meaning a function that would check if two agents can see each other assuming buildings and walls)
This is how I did it once in the past. The only problem is that it might be slow if you need to do that calculation thousands of times per second. And it's only in 2D. If you need 3D, the idea is not so different.
1) add all your building nodes and everything that might be an obstacle between the two agents to a collection. You may want to put a rectangular node around your buildings to keep everything in one collection (assuming you are using space markup with nodes)
2) choose a step size delta (equal to 1, for example) and find the angle of the line that passes through both agents.
3) Loop from the position of agent1 to the position of agent2. It will look something like this:
double L = delta;
while (L < LThatReachesSecondAgent) {
    double x1 = agent1.getX() + L * cos(angle);
    double y1 = agent1.getY() + L * sin(angle);
    for (Node n : yourCollectionOfNodes) {
        if (n.contains(x1, y1))
            return false;
    }
    /* This may also work, possibly faster:
    int numNodesInTheWay = count(yourCollectionOfNodes, n -> n.contains(x1, y1));
    if (numNodesInTheWay > 0) return false;
    */
    L += delta;
}
return true;
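As a self-contained illustration of the loop above, here is a plain-Java sketch that marches along the segment between two points and tests each sample against a set of obstacles. Using java.awt.geom.Rectangle2D in place of AnyLogic nodes, and the sample wall geometry, are assumptions for illustration only.

```java
import java.awt.geom.Rectangle2D;
import java.util.List;

public class LineOfSight {
    // March along the segment from (x0, y0) to (x1, y1) in steps of `delta`,
    // returning false as soon as a sample point falls inside an obstacle.
    static boolean hasLineOfSight(double x0, double y0, double x1, double y1,
                                  List<Rectangle2D> obstacles, double delta) {
        double dist = Math.hypot(x1 - x0, y1 - y0);
        double angle = Math.atan2(y1 - y0, x1 - x0);
        for (double L = delta; L < dist; L += delta) {
            double px = x0 + L * Math.cos(angle);
            double py = y0 + L * Math.sin(angle);
            for (Rectangle2D obstacle : obstacles) {
                if (obstacle.contains(px, py)) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // A hypothetical wall straddling the x-axis between x = 40 and x = 60.
        List<Rectangle2D> walls = List.of(new Rectangle2D.Double(40, -10, 20, 20));
        System.out.println(hasLineOfSight(0, 0, 100, 0, walls, 1.0));  // false: wall blocks
        System.out.println(hasLineOfSight(0, 0, 100, 50, walls, 1.0)); // true: path clears it
    }
}
```

Note the same caveat as the answer: with a small delta and many obstacles this is O(distance/delta × obstacles) per query, so it can get slow if evaluated thousands of times per second.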
Welcome to SOF.
No, there is no built-in function, afaik. You would have to code something manually, which is possible but not straightforward. If you want help with that, you need to provide more details on the specifics.
I'm fairly new to Modelica, just started a few months ago due to a project I've been working on. Mostly doing work with multibody mechanical systems using the MultiBody library included in the standard Modelica distribution.
I need to change a body position according to the coordinates calculated dynamically during the simulation, but I can't find a way to do so.
This is the vector variable that calculates the position of the center of mass of the given system:
Modelica.SIunits.Length CMG[2];
CMG[1] = ... + cos(part3rotation.angles[3]) ... + part3origin[1] ...;
CMG[2] = ...;
I would like to position a massless body (FixedShape) at the coordinates (CMG[1], CMG[2]) as a way to display the center of mass and its movement during the simulation.
Is there any way to do this?
I've tried to attach the body to a fixed translation component but it expects a parameter (PARAM) instead of a variable (VAR) and this causes an error.
Software used: Modelica 3.2.2 and Wolfram SystemModeler 5.0.
I would add a frame Modelica.Mechanics.MultiBody.Interfaces.Frame_b to your body, and then add the following equations (taken from FixedTranslation):
frame_b.r_0 = your_three_d_vector;
frame_b.R = frame_a.R; // or some other orientation
/* Force and torque balance */
zeros(3) = frame_a.f + frame_b.f; // and maybe some other forces in your system
zeros(3) = frame_a.t + frame_b.t + cross(r, frame_b.f); // and maybe some other torques and forces in your system
In order to add an additional connector in Modelica, we would have to consider not only the potential variables (in this case, the position and orientation), but also the flow variables (forces and torques).
The solution was to modify the FixedTranslation class to include a new input:
input Modelica.SIunits.Position xyz[3];
and modify the equations:
frame_b.r_0 = frame_a.r_0 + xyz;
Connecting the CMG vector to the xyz vector of the class did the trick.
I suppose you calculate the position of the center of mass of the whole system relative to the inertial frame. Then you can completely omit frame_a in your class, so it would no longer be necessary to always connect frame_a explicitly to world.frame_b, which is obviously redundant.
Be only sure to use Connections.root(frame_b.R) instead of Connections.branch(frame_a.R, frame_b.R) as originally defined in FixedTranslation.
And one more comment. It is advisable to directly work with vectors instead of position vector components, and to use functions from Modelica.Mechanics.MultiBody.Frames for vector transformations.
I am trying to get started with GTK, but I find the documentation for signals (https://developer.gnome.org/gobject/stable/signal.html) hard to understand.
It seems as there is a difference between a "signal" and an "event".
For example, the documentation for the "event"-signal for a Widget (https://developer.gnome.org/gtk3/stable/GtkWidget.html#GtkWidget-event) says
The GTK+ main loop will emit three signals for each GDK event delivered to a widget: one generic ::event signal, another, more specific, signal that matches the type of event delivered (e.g. “key-press-event”) and finally a generic “event-after” signal.
So it seems to me, that GDK uses "events", whereas GTK+ uses "signals". Maybe events are just packed into signals, or the other way around? Or are they completely different things?
My understanding of the above quote:
When a key is pressed, a GDK event is fired. This GDK event calls a callback function of the widget (which is not for the programmer to interfere with). The callback function then in turn emits the three signals ::event, key-press-event and event-after, one after the other. As a programmer I can intercept these signals by writing callback functions. If the callback for the first ::event signal returns TRUE, the second key-press-event signal is not emitted; otherwise it is. The third event-after signal is always emitted.
Is my understanding correct?
Furthermore, in the docs, sometimes signals are prepended by a double colon (::event) and sometimes they are not (key-press-event and event-after). What is the difference? What is the meaning of the double colon?
it's just nomenclature.
signals, in GObject, are just fancy ways of calling named lists of functions; each time an instance "emits" a signal, the GSignal machinery will look at all the callbacks connected to that particular signal, and call them sequentially until one of these conditions is satisfied:
the list of callbacks is exhausted
the signal accumulator used when the signal is defined will stop the signal emission chain if a defined condition is met
all signals emitted by GDK or GTK+ (as well as any other GObject-based library) work exactly in that way.
events, in GDK, are structures related to windowing system events, like a button press, a key release, a pointer crossing the window boundaries, a change in the window hierarchy, and so on and so forth. the only interaction you generally have with GDK events happens in specific signals on the GtkWidget types. as a convention (though it does not always apply) the signals that carry a GdkEvent structure have an -event suffix, like button-press-event, or key-release-event, or enter-notify-event, or window-state-event. again, those are GObject signals, and their only specialization is having a GdkEvent as an argument.
as for the double colon: the full specification of a signal is made of the type that declares it, e.g. GtkWidget, and the signal name, e.g. button-press-event, separated by a double colon, e.g. GtkWidget::button-press-event. the ::button-press-event notation is just a documentation shorthand, signifying that the speaker is referring to the button-press-event signal.
The simple way to understand it is that events are something done to an object, say a GtkButton (we choose a button as something you can see). When you click a button, the button receives an event from you (actually from GDK, a thin layer between GTK and the underlying windowing and graphics system). Upon receiving an event it has to do something; otherwise it's a dead object.
From there, something has to be done. Since the object has to do something, a signal picks up the rest. A signal is emitted from the object to tell other objects that something has happened. In short, a signal is the catcher of an event.
The most-used predefined signal for GtkButton is "clicked". Within the callback for that signal, you can do anything you want.
Now, another question: why don't we just catch the event from the mouse button directly and handle it from there? Of course you can. Here's how:
1. Get the position of the button in the window.
2. Calculate its allocated width, height and position and keep them in memory, so that when the user fires a button-press event within that area, it triggers something.
3. Write another function so that when you resize, minimize or maximize the window, you recalculate the position, width and height and keep them in memory; do the same for every other widget around it, because their sizes also change.
4. If you choose not to show the widget, recalculate every widget in the window, because their positions, widths and heights are totally different; store those in memory too.
5. If you move the window or the window is hidden, do nothing, because the coordinates where the button was are now covered by something else. You don't want to click the screen (where the button used to be) and have your application do something while another window is focused.
6. And if you lose your mouse? ...damn.
Next, GDK uses signals too. For example, GdkScreen emits three signals, each reacting to an event: compositing being turned off, another screen being hooked up, and the screen resolution changing.
Next, callbacks are not emitted signals; a signal "emits" (invokes) callbacks. It is up to you whether to connect (intercept, in your terms) or not. It is not your function; it's a predefined function that you merely wrap with your own function name. After you use a signal, you can also disconnect it, for whatever reason.
Next, yes, if the widget's "event" signal returns TRUE, the second, more specific signal is not emitted. Note: do not tamper with the event mask of a widget, since a widget has its own default event mask.
Finally, the double colon? Either the documenter likes double colons, or it simply says that the signal belongs to a class. Don't worry about it; you're probably not going to use it in C.
I want to make the Sphero move a given number of centimeters ahead, but so far I have not managed to get anything to work properly.
This is the code I have now:
EditText distanceText = (EditText) findViewById(R.id.distanceText);
int inputMSec = Integer.parseInt(distanceText.getText().toString());
int time = (inputMSec - inputMSec / 2);

// The ball is given the RollCommand at half speed.
RollCommand.sendCommand(mRobot, heading, 0.5f);

// A handler is created to delay the Stop command.
final Handler handler = new Handler();
handler.postDelayed(new Runnable() {
    @Override
    public void run() {
        // Makes the ball stop.
        RollCommand.sendStop(mRobot);
    }
// The variable 'time' defines how long the ball rolls until it is told to stop.
}, time);
Is there any other command I can send to the ball instead of RollCommand?
Or can anyone work out what to do with the input from the EditText so that the distance turns out correct?
There's no API command that directly provides the ability to drive a given distance. The only way I know how is an iterative strategy using the locator, which provides position information about the ball.
Here's an overview of the strategy. Let me know if you need more details.
Turn on data streaming and request locator position data. You should now have some callback giving you Sphero position ~20x a sec. The positions are (x, y) coordinates on the floor in centimeters.
Record the starting position of the ball (x0, y0).
At each point in time, compute the distance the ball has traveled so far using the Pythagorean theorem/distance formula.
Use this to figure out how far the ball has left to go. Call this distance D.
Give a roll command where the speed is computed from D.
The big question is: how do you decide what speed to command based on D? I had some success with a strategy like the following (undoubtedly you can tune it much better):
If the remaining distance D > 100cm, command full speed.
Between D = 10cm and D = 100cm, command power ranging from 30% to 100% (linearly).
If D < 10cm, command zero speed to make Sphero stop.
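The speed schedule above can be sketched in plain Java. The 10 cm and 100 cm thresholds and the 30%-to-100% range come from the answer; the method name and the linear-ramp formula between those thresholds are my assumptions.

```java
public class SpheroSpeed {
    // Map remaining distance D (in cm) to a commanded speed in [0.0, 1.0],
    // following the piecewise schedule described above.
    static float speedForRemainingDistance(double d) {
        if (d >= 100.0) return 1.0f; // far away: full speed
        if (d < 10.0)  return 0.0f;  // close: stop commanding and coast in
        // Between 10 cm and 100 cm: ramp linearly from 30% to 100% power.
        return (float) (0.3 + 0.7 * (d - 10.0) / 90.0);
    }

    public static void main(String[] args) {
        System.out.println(speedForRemainingDistance(150)); // 1.0
        System.out.println(speedForRemainingDistance(55));  // 0.65
        System.out.println(speedForRemainingDistance(5));   // 0.0
    }
}
```

At each locator callback you would recompute D from the streamed (x, y) position and feed speedForRemainingDistance(D) into the next roll command.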
This worked pretty well. It would drive, slow down, and coast the last few inches to a stop. Problems with this algorithm include:
You have to tune it to prevent overshoot/undershoot.
When you command the ball to move only a very short distance it doesn't work well, so you might have to tweak it to cover those cases.
Good luck!