Change state at model runtime - AnyLogic

P.S. This question has been edited to address the questions raised by #Felipe.
I have an agent-based simulation model of churn behavior. On each iteration (one per month), every user reconsiders her choice of operator (ours or the other) based on model metrics (cost, social network, ...). At runtime, even when I change parameters that should affect the agents' decision, nobody changes operator. Here is my statechart (image below):
I should note that the internal transition of the OUR_USER state has the details below:
The first two lines are only for display. Advocate() refers to the action of sending messages, which drives social influence.
Switch() is where the decision happens, based on the new parameter values. In short, d is normalized to the range -1 to 1: signum(d) predicts which provider is preferred, and abs(d) shows how strong that preference is.
// Definition of Switch()
double d = (this.Social_impact() / 20) + this.Monthly_Charge_Impact();
if (d > 0) {
    SwitchToUs();      // positive d: our operator is preferred
} else {
    SwitchToOther();   // negative (or zero) d: the other operator is preferred
}
The two functions SwitchToUs() and SwitchToOther() simply change the operator (the equivalent of arrows between the OUR_USER and OTHER_USER states).

Related

Cimplicity Screen - one object/button that is dependent on hundreds of points

So I have created a huge screen that essentially just shows the robot status for every robot in this factory (individually)… At the very end of the project, they decided they want one object on the screen that blinks if any of the 300 robots faults. I am trying to think of a way to make this work. Maybe a global script of some kind? The problem is, I do not do much scripting in Cimplicity, so any help is appreciated.
All the points currently used on this screen (to indicate a fault) have very similar names… as in, the beginning is the same… so I was thinking of a script that could recognize whether a bit is high based on PART of its name. The ending will change a little each time, but I am sure there is a way to look only at part of the string and ignore the rest. If the endings have to be hard-coded, that's fine.
You can use a Python script in Cimplicity.
I will not go into detail on the use of Python in Cimplicity; it is well described in the Cimplicity documentation.
Here's an example of what can be done. Note that I have no way to test it, and of course it will only work if your robots are declared with names following the format Robot_1, Robot_2, Robot_3 ... Robot_10 ... Robot_300. It also depends on the name and type of the fault variable; since you didn't specify it, I assume it is an integer, with zero meaning no error. If yours differs, you can easily adapt the script.
import cimplicity
(...)

OneRobotWithFault = False

# Read each robot's fault code and check for a fault
for i in range(1, 301):  # Robot_1 ... Robot_300
    pointName = f'MyFactory.Robot_{i}.FaultCode'
    robotFaultCode = cimplicity.point_get(pointName)
    if robotFaultCode > 0:
        OneRobotWithFault = True
        break

# Write the result to the point "WeHaveRobotWithFault"
cimplicity.point_set("WeHaveRobotWithFault", OneRobotWithFault)

Split model Dymola

I'm having a problem when I use the "Split model" option. What I want to do is basically hide these 10 water volumes:
I select the tanks, then I click the split button with these options:
The final result is just what I want:
When I check the entire model to verify that everything is OK, these errors come out:
I've tried several things, such as modifying the text layer of the split model, with no positive results. Here is the original, unmodified version:
Can you please explain to me what kind of error this is? How can I resolve it? Thank you.
Edit: I'm using the TIL library.
Edit after Markus' answer: in the split model it is necessary to declare the liquid type and change the portArray definition. I copied these lines of code and everything worked!
parameter TILMedia.LiquidTypes.BaseLiquid liquidType = sim.liquidType1
  "Liquid type" annotation (Dialog(tab="SIM", group="SIM"), choices(
    choice=sim.liquidType1 "Liquid 1 as defined in SIM",
    choice=sim.liquidType2 "Liquid 2 as defined in SIM",
    choice=sim.liquidType3 "Liquid 3 as defined in SIM"));

replaceable package MediaConfiguration = TIL.Utilities.MediaConfiguration
  constrainedby TIL.Utilities.Internals.PartialMediaConfiguration
  "Media and State Type Configuration"
  annotation (choicesAllMatching, Dialog(tab="SIM", group="Media Configuration"));

protected
  outer TIL.SystemInformationManager sim "System information manager";
and
public
  TIL.Connectors.LiquidPort portArray(final liquidType=liquidType);
  TIL.Connectors.LiquidPort portArray1(final liquidType=liquidType);
The issue seems to result from the vectorization of the connectors, which appears to get lost when using "Split model". It is a bit difficult to say without the actual model, but:
Have you tried modifying the last two connect statements in str3000 to:
connect(portArray, colume.portArray[1]);
connect(portArray1, colume.portArray[2]);
Additionally, at the top level of the model, it seems you have connections to vectors of str3000.portArray. Try to remove them, as they seem to be wrong given that you have two non-vector ports.
There should be something like connect(str3000.portArray[1], ...) and connect(str3000.portArray1[2], ...), which should likely be changed to connect(str3000.portArray, ...) and connect(str3000.portArray1, ...).

Sentence Indicating in Neural Machine Translation Tasks

I have seen many people working on Neural Machine Translation. Usually, they wrap their sentences in <BOS>...<EOS>, <START>...<END>, etc. tags before training the network. Of course it is a logical way to specify the start and end of sentences, but I wonder how the neural network understands that the string <END> (or another such tag) means the end of a sentence?
It doesn't.
At inference time, there's a hardcoded rule that if that token is generated, the sequence is done, and the underlying neural model will no longer be asked for the next token.
source_seq = tokenize('This is not a test.')
print(source_seq)
At this point you'd get something like:
[ '<BOS>', 'Thi###', ... , '###t', '.' , '<EOS>' ]
Now we build the target sequence with the same format:
target_seq = [ '<BOS>' ]
while True:
    token = model.generate_next_token(source_seq, target_seq)
    if token == '<EOS>':
        break
    target_seq.append(token)
The model itself only predicts the most likely next token given the current state (the input sequence and the output sequence so far).
It can't exit the loop any more than it can pull your machine's plug out of the wall.
Note that that's not the only hardcoded rule here. The other one is the decision to start from the first token and only ever append - never prepend, never delete... - like a human speaking.
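To make the point concrete, here is a minimal sketch of the idea (the toy vocabulary and the model object are made up for illustration): to the network, '<EOS>' is just another integer ID in the vocabulary; the special meaning lives entirely in the decoding loop wrapped around it, with a length cap as a safety net.

# Hypothetical toy vocabulary: the special tokens are ordinary entries.
vocab = {'<BOS>': 0, '<EOS>': 1, 'this': 2, 'is': 3, 'not': 4, 'a': 5, 'test': 6, '.': 7}

def greedy_decode(model, source_ids, max_len=50):
    # The stop condition below is our rule, not something the model "knows".
    target_ids = [vocab['<BOS>']]
    for _ in range(max_len):
        next_id = model.generate_next_token(source_ids, target_ids)
        if next_id == vocab['<EOS>']:
            break
        target_ids.append(next_id)
    return target_ids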

VSCode Extension: Get outline of function for custom outliner

I'm trying to create a custom outliner for VSCode (currently only for Python), but I can't find a way to get the information I need.
I'd like to get the information in a form like this:
Array:
  [0]
    label: "foo"
    type: "Function"
    parameters: [...]
    Range: [...]
    innerDefinitions: [0]
  [1]
    label: "myclass"
    type: "Class"
    base_class: ""
    Range: [...]
    innerDefinitions:
      [0]: [...]
      [1]: [...]
Currently I try to get the outline information via vscode.commands.executeCommand('vscode.XXX', ...).
What I've tried:
Here are the commands I've tried and the results I received.
vscode.executeImplementationProvider
Partially usable: range of the function name; other information is missing.
vscode.executeHoverProvider
Partially usable: string of the function head (including the def keyword).
vscode.executeDefinitionProvider
Partially usable: range of the complete function; individual pieces of information must be parsed out.
vscode.executeTypeDefinitionProvider
Never provided any result.
vscode.executeDeclarationProvider
Never provided any result.
vscode.executeDocumentSymbolProvider
Goes in a good direction. However:
(1) It only works on the whole document (not on a single function).
(2) It only returns first-level entities (i.e. class methods are not included in the result).
Is there any API call I've overlooked?
I wonder how the built-in outliner works, as it contains information at all levels.
You need to use vscode.commands.executeCommand<vscode.DocumentSymbol[]>("vscode.executeDocumentSymbolProvider", uri)
This will give you the full outline of one file. There is no way to receive a partial outline.
Note: innerDefinitions are called children here.
Regarding the detail of the outline:
How detailed (and correct) an outline is going to be depends on the implementation of the provider. Also, the provider's information is not necessarily consistent across languages. This is very important to keep in mind!
At the moment (2021/03), the standard SymbolProvider for...
... Python will have a child for each parameter and local variable of a function. They will not be distinguishable
... C++ will contain no children for parameters, but it will have the parameter types in its name (e.g. the name of void foo(string p) will be foo(string): void).
As you can see, both act differently with their own quirks.
You could create and register a DocumentSymbolProvider yourself that returns the level of detail you need (see VSCode Providers).
Also see: https://stackoverflow.com/a/66486297/6702598

Visualizing an AutoDiff MultibodyPlant in PyDrake

I am trying to build a simple multibody plant system in Drake using the basic DrakeVisualizer. However, for my use case I also want to be able to automatically track derivatives through the physics simulation, so I am using the AutoDiffXd version of the system:
timestep = 1e-3
builder = DiagramBuilder_[AutoDiffXd]()
plant = MultibodyPlant(timestep)
scene_graph = SceneGraph_[AutoDiffXd]()
brick_file = FindResourceOrThrow("drake/examples/manipulation_station/models/061_foam_brick.sdf")
parser = Parser(plant)
brick = parser.AddModelFromFile(brick_file, model_name="brick")
plant.Finalize()
plant_ad = plant.ToAutoDiffXd()
plant_ad.RegisterAsSourceForSceneGraph(scene_graph)
scene_graph.AddRenderer("renderer", MakeRenderEngineVtk(RenderEngineVtkParams()))
DrakeVisualizer.AddToBuilder(builder, scene_graph)
builder.AddSystem(plant_ad)
builder.AddSystem(scene_graph)
builder.Connect(plant_ad.get_geometry_poses_output_port(), scene_graph.get_source_pose_port(plant_ad.get_source_id()))
builder.Connect(scene_graph.get_query_output_port(), plant_ad.get_geometry_query_input_port())
diagram = builder.Build()
context = diagram.CreateDefaultContext()
simulator = Simulator_[AutoDiffXd](diagram, context)
simulator.AdvanceTo(2.0)
However, when I run this, I get the following error:
File "/home/craig/Repos/drake-exps/autoDiffExperiment.py", line 102, in auto_phys
DrakeVisualizer.AddToBuilder(builder, scene_graph)
TypeError: AddToBuilder(): incompatible function arguments. The following argument types are supported:
1. (builder: pydrake.systems.framework.DiagramBuilder_[float], scene_graph: drake::geometry::SceneGraph<double>, lcm: pydrake.lcm.DrakeLcmInterface = None, params: pydrake.geometry.DrakeVisualizerParams = <pydrake.geometry.DrakeVisualizerParams object at 0x7ff6274e14b0>) -> pydrake.geometry.DrakeVisualizer
2. (builder: pydrake.systems.framework.DiagramBuilder_[float], query_object_port: pydrake.systems.framework.OutputPort_[float], lcm: pydrake.lcm.DrakeLcmInterface = None, params: pydrake.geometry.DrakeVisualizerParams = <pydrake.geometry.DrakeVisualizerParams object at 0x7ff627736730>) -> pydrake.geometry.DrakeVisualizer
Invoked with: <pydrake.systems.framework.DiagramBuilder_[AutoDiffXd] object at 0x7ff65654f8f0>, <pydrake.geometry.SceneGraph_[AutoDiffXd] object at 0x7ff656562130>
From this error, it appears the DrakeVisualizer class only accepts systems that use float scalars exclusively. So I am stuck: either I go back to floats (but lose the differentiable-simulation functionality I was after in the first place), or I continue to use AutoDiffXd systems (but am completely unable to visualize what is going on in my simulation).
Is there a way to get both that I am missing?
Sorry for the pain and inconvenience. Your description and assessment are spot on. Most of the visualization mechanisms are float only and, in their current state, attempts to visualize an AutoDiff diagram will fail.
You have a couple of options (neither of which is appealing):
Go with one of the outcomes you've described above (no vis or no derivatives).
Put in a Drake feature request to be able to attach a visualizer to an AutoDiff diagram.
I can come up with some hacky workarounds (though it isn't immediately clear they would even work). So, if you're desperate for derivatives and visualization, they could be explored. But ultimately, the feature request and a formal Drake solution would be the best long-term resolution.
=====================================
Big update. As of #14569, the DrakeVisualizer class is now templated on the scalar type (item 2 in the list above). That has two implications:
You can build an AutoDiffXd-valued diagram with a visualizer in it (as in your example), or
You can create a double-valued diagram and scalar convert it (i.e., diagram.ToAutoDiffXd()) into an AutoDiffXd-valued diagram.
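For completeness, here is a rough sketch of the second option in pydrake. It assumes a Drake release that already includes #14569; the model file is the one from the question, and exact import paths or parser APIs may differ slightly between Drake versions.

from pydrake.autodiffutils import AutoDiffXd
from pydrake.common import FindResourceOrThrow
from pydrake.geometry import DrakeVisualizer
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.analysis import Simulator_
from pydrake.systems.framework import DiagramBuilder

# Build the whole diagram with double scalars first.
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
brick_file = FindResourceOrThrow(
    "drake/examples/manipulation_station/models/061_foam_brick.sdf")
Parser(plant).AddModelFromFile(brick_file, model_name="brick")
plant.Finalize()
DrakeVisualizer.AddToBuilder(builder, scene_graph)
diagram = builder.Build()

# Scalar-convert the finished diagram (visualizer included) to AutoDiffXd
# and simulate with the templated Simulator.
diagram_ad = diagram.ToAutoDiffXd()
simulator = Simulator_[AutoDiffXd](diagram_ad)
simulator.AdvanceTo(2.0)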