How to run a walker from scratch instead of its yielded state in Jaseci? - jaseci

I am building a conversational AI in Jaseci using the Jac language, and I ran into an interesting case. Say I have a walker that has yielded; when I call walker run on the same walker again, it picks up from its yielded state, continues on the next node, and retains its has variable context.
I am wondering, is there a way to force the walker to start from scratch instead, with a fresh context?

There are two ways to do this in Jaseci: first, completely reset the walker and discard all of its yielded state; second, leave that instance yielded but call a fresh instance of the same walker.
In the first case (complete walker reset):
Use /js/walker_yield_clear to clear all yielded walkers, or /js/walker_yield_delete to remove the yielded state of a specific walker by name. Of course, if you are using jsctl, those APIs map to walker yield clear and walker yield delete in the command-line interface.
In the second case (retain yielded walker but create new instance of a fresh walker to execute):
Use the /js/walker_spawn_create API to spawn a walker instance and get its uuid, then call /js/walker_execute on that uuid (not walker_run).
Note! You'll have to clean up walkers created with these APIs manually using /js/walker_spawn_delete. See the full set of /js/walker_spawn_* APIs for other useful ways to manage walkers manually.
Also keep in mind that you can access these APIs in the Jac language itself through the jaseci.* standard action library (as of version 1.3.5.* at least).
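As a rough sketch of both approaches from jsctl (the walker name my_walker is illustrative, and the exact argument spelling of each command may differ by version, so check jsctl's built-in help):
# Approach 1: discard the yielded state so the next walker run starts fresh
walker yield delete my_walker    # one walker, by name
walker yield clear               # or: every yielded walker
# Approach 2: leave the yielded instance alone and run a fresh copy
walker spawn create my_walker    # returns the new instance's uuid
walker execute <uuid>            # executes that instance from scratch
walker spawn delete <uuid>       # manual cleanup when done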

Related

Set transporter speed based on parameter of transported product?

In my model I am transporting (path-guided) several types of products (motors) by AGV through a manufacturing line with 27 cycles. It's a flowing manufacturing line, meaning the product is manufactured while the AGV keeps moving.
To model that I created an agent population called "motors" with a parameter "axelType" (String), which is loaded from column "axel_type" of the database "manufacturing_sequence" (a local Excel sheet) and placed on Main.
Each motor is placed on a transporter "agvAssembly" (in the flowchart: Transporter) and runs from node "locationCycle1" all the way to "locationCycle27".
Now I want to change the transporter's speed at each of the 27 cycle nodes depending on the currently loaded motor. For that I have another database called "speeds_axel", which contains the needed speed for every cycle, keyed by the axelType parameter value (column axel_type).
So when the transporter enters a node, I first have to check the node's name. Then I want to read the parameter "axelType" of the Motor agent that has just entered that node and look up the respective speed in the database.
In the block "transporter Fleet" - "On enter node:" I wrote as an example for cycle 1 the following:
if (node == locationCycle1) {
    unit.setMaximumSpeed(
        selectFrom(speeds_axel)
            .where(speeds_axel.axel_type.eq(motor.axelType))
            .firstResult(false, speeds_axel.cycle1) / 60.0,
        MPS);
}
When running the model I get the following error:
"motor cannot be resolved to a variable"
(location: TransporterFleet)
I think that error occurs because my approach doesn't specify which motor I mean. How can I tell AnyLogic that I always mean the motor that is currently entering the node?
I need something like this:
if (node == locationCycle1) {
    unit.setMaximumSpeed(
        selectFrom(speeds_axel)
            .where(speeds_axel.axel_type.eq("get motor which is currently in locationCycle1".axelType))
            .firstResult(false, speeds_axel.cycle1) / 60.0,
        MPS);
}
The biggest problem in your model is that you don't follow naming conventions. An agent type should be named with an uppercase first letter.
Why is this important? Because you want to be able to distinguish between Motor, motor, and motors:
Motor is the class (or Agent Type)
motor is an instance of that class (or agent type)
motors is a population.
Since you don't follow this convention, you make mistakes of this kind, because in your model motor is a class.
In your case, when you write motor.cycle1 and you apply the convention, what you are effectively writing is
Motor.cycle1 (which is obviously wrong)
Note that Motor is the Agent Type name, and what you really want is to know for a particular motor what the value of cycle1 is
The first thing you need to do with this model is get back to the conventions; that will probably solve this problem and many problems in the future.
Just because you have a population called motors (which the agents within your flow 'come from' --- though it looks like they don't actually; see later), this does not mean that you can 'magically' refer to the current agent via the singular form motor at any point in the code. (And, similarly, your agent type being Motor does not mean you can use the lower-case form to refer to 'the current one'.)
In Transporter Fleet block "On enter node" actions, you refer to the current transporter via the unit keyword; see the help page. The motor is not the transporter; it's the thing being transported.
So, just use the "On exit" (or, better, "On at exit") actions of your MoveByTransporter blocks (which trigger on arrival at the destination node), where you can refer to the current motor (the agent in the flow) via the agent keyword; again, see the block help.
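A sketch of what that looks like in the "On at exit" action of the MoveByTransporter block whose destination is locationCycle1 (this assumes unit still refers to the seized transporter in that action; if it is not in scope there, check the block help and capture the transporter in "On seize unit" instead):
// 'agent' is the Motor currently moving through this block
unit.setMaximumSpeed(
    selectFrom(speeds_axel)
        .where(speeds_axel.axel_type.eq(agent.axelType))
        .firstResult(false, speeds_axel.cycle1) / 60.0,
    MPS);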
But, from a quick glance, there appear to be a few important changes/simplifications you should also make (though hard to definitively tell without details of all your code):
You are using a Source block to add agents (motors) to the flow when you have already created the motors in your population. You should be using an Enter block instead to add the (already-existing) Motor agents to the flow at the appropriate time (triggered by code: I imagine you want one to start at model start time, requiring code in Main's "On startup" action, and then others added either at certain simulation times or when earlier motors reach a certain stage in the process...). If your population was actually just for 'templates' of data for each motor axle type (and you would then potentially create multiple instances of motors for a given axle type) then your current logic might make sense, but your screenshot of your manufacturing_sequence table makes clear that isn't the case. (If you did go that route, you would want to rename your population so that its purpose was clear.)
It looks like you have a 'looping' process (with the same process behaviour for all your in-sequence nodes), so you should look to make your process generic with an explicit loop (so you don't have near-copies of blocks for every node in your sequence). There is a fair amount of detail in terms of how you do this (see the sketch after this list), but roughly:
Track the node the motor is currently in (or its sequence number) via a variable in the Motor agent.
Use a collection (List) which contains the nodes-in-order to determine where it has to go to next.
Your process would have a MoveByTransporter --> Delay --> SelectOutput sequence (plus TimeMeasureStart/End if you're using them), where SelectOutput loops back to the MoveByTransporter if the motor is not yet at the last node.
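A minimal sketch of that bookkeeping (all names below are illustrative assumptions, not elements of your actual model):
// Variable inside the Motor agent type:
int cycleIndex = 0;  // index of the route entry the motor is heading to next

// On Main, e.g. "On startup": the route shared by all motors, in order
List<Node> cycleNodes = Arrays.asList(
    locationCycle1, locationCycle2, /* ..., */ locationCycle27);

// MoveByTransporter, "Destination node" (dynamic value):
//     cycleNodes.get(agent.cycleIndex)

// Delay, "On exit":
//     agent.cycleIndex++;

// SelectOutput, condition (true branch loops back to the MoveByTransporter):
//     agent.cycleIndex < cycleNodes.size()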

MATLAB: Does the execution of addpath/rmpath/savepath in one MATLAB instance affect other instances?

Does the execution of addpath/rmpath/savepath in one MATLAB instance affect other instances?
Motivation: Imagine that you are developing a MATLAB package, which provides a group of functions to the users. You have multiple versions of this package being developed on a single laptop. You would like to test these different versions in multiple instances of MATLAB:
You open one MATLAB window, type run_test(DIRECTORY_OF_PACKAGE_VERSION1), and hit enter;
While the first test is running, you open another MATLAB window, type run_test(DIRECTORY_OF_PACKAGE_VERSION2), and hit enter.
See the pseudo-code below for a better idea about the tests.
No code or data is shared between different tests --- except for those embedded in MATLAB, as the tests are running on the same laptop, using the same installation of MATLAB. Below is a piece of pseudo-code for such a scenario.
% MATLAB instance 1
run_test(DIRECTORY_OF_PACKAGE_VERSION1);

% MATLAB instance 2
run_test(DIRECTORY_OF_PACKAGE_VERSION2);

% Code for the tests
function run_test(package_directory)
    setup_package(package_directory);
    RUN EXPERIMENTS TO TEST THE FUNCTIONS PROVIDED BY THE PACKAGE;
    uninstall_package(package_directory);
end

% This is the setup of the package that you are developing.
% It should be called as a black box in the tests.
function setup_package(package_directory)
    addpath(PATH_TO_THE_FUNCTIONS_PROVIDED_BY_THE_PACKAGE);
    % Make the package available in subsequent MATLAB sessions
    savepath;
end

% The function that uninstalls the package: remove the paths
% added by `setup_package` and delete the files etc.
function uninstall_package(package_directory)
    rmpath(PATH_TO_THE_FUNCTIONS_PROVIDED_BY_THE_PACKAGE);
    savepath;
end
You want to make sure of the following:
The tests do not interfere with each other;
Each test calls functions from the correct version of the package.
Hence the following questions.
Questions:
1. Does the execution of addpath, rmpath, and savepath in one MATLAB instance affect the other instance, sooner or later?
2. More generally, what kinds of commands executed in one MATLAB instance can affect the other instance?
3. What if I am running only one instance of MATLAB, but invoke a parfor loop with two iterations running in parallel? Does the execution of addpath/rmpath/savepath in one iteration affect the other, sooner or later? In general, what kind of commands executed in one parallel loop can affect the other? (As pointed out by @Edric, this can be complicated; so let us not worry about it. Thank you, @Edric.)
Thank you very much for any comments and insights. It would be much appreciated if you could direct me to relevant sections in the official documentation of MATLAB --- I did some searching in the documentation, but have not found an answer to my question.
BTW, in case you find that the test described in the pseudo code is conducted in a wrong/bad manner, I will be very grateful if you could recommend a better way of doing it.
The documentation page for the MATLAB Search Path specifies at the bottom:
When you change the search path, MATLAB uses it in the current session, but does not update pathdef.m. To use the modified search path in the current and future sessions, save the changes using savepath or the Save button in the Set Path dialog box. This updates pathdef.m.
So, standard MATLAB sessions are "isolated" in terms of their MATLAB Search Path unless you use savepath. After a call to savepath, new MATLAB sessions will read the updated pathdef.m on startup.
The situation with a parallel pool is slightly more complex. There are a couple of things that affect this. First is the parameter AutoAddClientPath that you can specify for the parpool command. When true, an attempt is made to reflect the desktop MATLAB's path on the workers. (This might not work if the workers cannot access the same folders).
When a parallel pool is running, any changes to the path on the desktop MATLAB client are sent to the workers, so they can attempt to add or remove path entries. Parallel pool workers calling addpath or rmpath do so in isolation. (I'm afraid I can't find a documentation reference for this).
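Back in the scenario from the question, this suggests a safer pattern for the tests: skip savepath entirely so the path change stays local to each session, and restore the path with onCleanup. A minimal sketch of run_test under that assumption:
function run_test(package_directory)
    % Session-local path change: pathdef.m is never touched, so the
    % other MATLAB instance is unaffected.
    addpath(package_directory);
    restorePath = onCleanup(@() rmpath(package_directory));  % runs even if the test errors

    % RUN EXPERIMENTS TO TEST THE FUNCTIONS PROVIDED BY THE PACKAGE
end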

How to debug in SimPy

I have a general question about how to debug in SimPy. Normal debugging tools don't seem to work, since everything runs on the event loop and you can't step through the code line by line to inspect what exists at any point in time.
Primarily, I'm interested in finding what kinds of processes and callbacks are in existence at a particular time, and how to remove them at the appropriate point. Are there any best practices surrounding debugging in discrete event simulation generally?
I would just use a bunch of print()s.
One thing you might find useful is the state that primitives such as resources expose. For example, you can ask a resource how many users it currently has, or how long the queue to use it is.
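A minimal sketch using the documented attributes of simpy.Resource:
import simpy

env = simpy.Environment()
res = simpy.Resource(env, capacity=2)

print(res.count)       # number of processes currently holding the resource
print(res.users)       # the requests of those current users
print(len(res.queue))  # number of queued requests still waiting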
All of these attributes can be found in the documentation; here is the resource example: https://simpy.readthedocs.io/en/latest/api_reference/simpy.resources.html

What's up with CHECK and INIT blocks?

I have a circular dependency problem with Perl modules: say package X uses Y and wants to hold a static reference to an Y instance, and package Y uses X and wants to hold a static reference to an X instance.
Simply saying our $x_instance = new X will give Can't locate object method "new" in the module that was not loaded first.
I figured something like
our $x_instance;
INIT { $x_instance = new X }
would make sense, so I read everything about the specially named blocks.
Well, this works in a simple test I made, but in my real application it systematically shows Too late to run INIT block. The same happens with CHECK blocks.
The only explanation I found was from Perl Monks and I'm afraid I couldn't make much sense of it.
Does someone have an explanation of how Perl goes about executing CHECK and INIT blocks that goes beyond what is in perlmod, and that would help me understand why my blocks are sometimes executed and sometimes not?
By the way, I just want to understand this—I am not specifically asking for a solution to my original circular dependency problem, as I have a workaround that I am reasonably happy with:
our $x_instance;

sub get_x_instance {
    $x_instance //= new X;
    return $x_instance;
}
INIT blocks are executed immediately before the run time phase is started in the order the compiler encountered them during the compilation phase.
If you use require (or do) at run time to compile a Perl file that includes an INIT block, then the block won't be executed.
It is rare that there is a real reason to use require in preference to use.
Despite your confidence, there must be a place where you are attempting to load a module at run time that contains an INIT block. I suggest you install and use Carp::Always so that the Too late to run INIT block message is accompanied by a stack backtrace that will help you find the erroneous call.
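For illustration, here is a minimal sketch of how the message arises (file and package names are invented):
# Late.pm
package Late;
INIT { print "Late's INIT block ran\n" }   # queued only if compiled before run time
1;

# main.pl
use strict;
use warnings;
# use Late;    # compile time: the INIT block is queued and runs normally
require Late;  # run time: perl warns "Too late to run INIT block" and skips it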

VHDL Bus Functional Modelling - Can't put groups of procedures into a package to clean up the code

I want to organize a working bus functional model and push commonly used procedures (which look like CPU subroutines) out into a package and get them out of the main cpu model, but I'm stuck.
The procedures don't have access to the hardware bits when they're pushed out in a package.
In Verilog, I would put commonly used procedures out into an include file and link them into the CPU model as required for a given test suite.
More details:
I have a working bus functional model of a CPU, for simulation test benching.
At the "user interface" level I have a process called "main" running inside the CPU model which calls my predefined "instruction set" like this:
cpu_read(address, read_result);
cpu_write(address, write_data);
etc.
I bundle groups of those calls up into higher level procedures like
configure_communication_bus;
clear_all_packet_counters;
etc.
At the next layer these generic functions call a more hardware specific version which knows the interface timing for the design,
and those procedures then use an input record and output record to connect to the hardware module ports and waggle the cpu bus signals as required.
cpu_read calls hardware_cpu_read(cpu_input_record, cpu_output_record, address);
Something like this:
procedure cpu_read (address     : in  std_logic_vector(15 downto 0);
                    read_result : out std_logic_vector(31 downto 0)) is
begin
    hardware_cpu_read(cpu_input_record, cpu_output_record, address, read_result);
end procedure;
The cpu_input_record and cpu_output_record are declared as signals of type nnn_record in the cpu model vhdl file.
So this is all working, but every single one of these procedures is stored in the CPU VHDL module file, all in the procedure declaration section, so that they share the same scope.
If I share the model with team members, they will need to add their own testing subroutines, which also end up in the same place in the file; their simulation test code likewise has to go into the "main" process along with mine.
I'd rather link in various tests from outside the model, and keep only model-specific procedures in the model file.
Ironically, I can push the lowest-level hardware procedures out to a package and call them from within the "main" process, but the higher-level procedures can't be put in that package or any other package because they don't have access to cpu_input_record and cpu_output_record.
I feel like there must be a simple way to clean up this code and make it modular, and I'm just missing something obvious.
I don't really think making a command interpreter and loading my test code into a behavioural ROM is the right way to go, by the way. Nor is fighting with the simulator interface to connect up a C program, but I may break down and try that.
Quick sketch of an answer (to the question I think you are asking! :-) though I may be off-beam...
To move the BFM subprograms into a reusable package, they need to be independent of the execution scope, which usually means a long parameter list for each of them. So using them in a testbench quickly gets tedious compared with the parameterless (or parameter-lite) versions you have now.
The usual workaround is to implement the BFM in a package, with long parameter lists.
Then write parameter-lite local equivalents (wrappers) in the execution scope, which simply call the package versions supplying all the parameters explicitly.
This is just boilerplate - not pretty but it does allow you to move the BFM into a package. These wrappers can be local to the testbench, to a process within it, or even to a subprogram within that process.
(The parameter types can be records for tidiness: these are probably declared in a third package, shared between BFM, TB, and the synthesisable device under test...)
Thanks to overloading, there is no ambiguity between the local and BFM package versions, so the actual testbench remains as simple as possible.
Example wrapper function:
function cpu_read(address : unsigned) return slv_32 is
begin
    return BFM_pack.cpu_read (
        address     => address,
        rd_data_bus => tb_rd_data_bus,
        wait_n      => tb_wait_signal,  -- "wait" is a reserved word in VHDL,
                                        -- so the package parameter needs another name
        oe          => tb_mem_oe
        -- ditto for all the signals, constants, and variables it
        -- needs from the tb_ scope
    );
end cpu_read;
Currently your test procedures require two extra signals on them, cpu_input_record and cpu_output_record. This is not so bad. It is not uncommon to just have these on all procedures that interact with the cpu and be done with it. So use hardware_cpu_read and not cpu_read. Add cpu_input_record, cpu_output_record to your configure_communication_bus and clear_all_packet_counters procedures and be done. Perhaps choose shorter names.
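For instance, the package versions might look like this (the record type names are assumptions, since the question only calls them "type nnn_record"):
procedure configure_communication_bus (
    signal cpu_input_record  : out cpu_input_record_t;
    signal cpu_output_record : in  cpu_output_record_t );

procedure clear_all_packet_counters (
    signal cpu_input_record  : out cpu_input_record_t;
    signal cpu_output_record : in  cpu_output_record_t );
Each call in the "main" process then passes the two records explicitly, e.g. configure_communication_bus(cpu_input_record, cpu_output_record);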
I use a similar approach, except with only one record of resolved elements. To make this work, you need to initialize the record so that all elements are non-driving (ie: 'Z' for std_logic). To make this more flexible, I have created resolution functions for integer, time, and real. However, this only saves you one signal, so it is not a huge win. Perhaps halfway to where you think you want to be, and it is more work than what you are doing.
For VHDL-201X, we are working on syntax to allow parameters/ports to automatically map to an identically named signal. This will get you to where you want to be with any of the approaches (yours, mine, or Brian's, without the extra wrapper subprogram). It is posted here: http://www.eda.org/twiki/bin/view.cgi/P1076/ImplicitConnections. Given this, I would add the two records to your procedures and call it good enough for now.
Once you get past this problem, you also seem to be asking how to write separate tests using the same testbench. For this I use multiple architectures - I like to think of these as a Factory Class for concurrent code. To make this feasible, I separate the stimulus generation code from the rest of the testbench (typically: netlist connections and clock). My presentation, "VHDL Testbench Techniques that Leapfrog SystemVerilog", has an overview of this architecture along with a number of other goodies. It is available at: http://www.synthworks.com/papers/index.htm
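As a rough sketch of that structure (the entity name, record types, package name, and test name are illustrative assumptions):
use work.bfm_pack.all;  -- shared record types and BFM procedures

-- One entity for the stimulus generator...
entity TestCtrl is
    port (
        cpu_input_record  : out cpu_input_record_t;
        cpu_output_record : in  cpu_output_record_t );
end entity TestCtrl;

-- ...and one architecture per test:
architecture smoke_test of TestCtrl is
begin
    main : process
    begin
        configure_communication_bus(cpu_input_record, cpu_output_record);
        clear_all_packet_counters(cpu_input_record, cpu_output_record);
        wait;  -- end of this test
    end process main;
end architecture smoke_test;

-- A configuration (or a different top level) selects which architecture,
-- i.e. which test, gets simulated.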
You're definitely on the right track; in fact I have a variant like what you describe.
The catch is that I now build up whole subroutines out of the "parameter-light" procedures, and those are what I want to put in a package to share and reuse. The problem is that any procedure pushed out to a package can't call the parameter-light procedures in the main VHDL file.
So what happens is we have one main VHDL file with all the common CPU hardware setup routines, and every designer's test code, all in the same VHDL file.
Long story short, putting our test subroutines into separate files is really what I was hoping for.