I created a very simple network with some nodes and a few paths. A limited number of agents (people) were then supposed to just travel from A to B and back in a loop. That worked so far.
Next, I wanted to limit the number of agents that can be on a specific path at the same time, using the "limit number of transporters" option in the general section of a path. This did not work. When I tried to find out how many transporters were on the path anyway, by calling (and displaying the output of) various functions like "getNumberOfTransporters()", "getTransporters()", etc. (called as "pathname.functionname()"), each call resulted in an exception, which usually looked like this:
Exception during discrete event execution:
NullPointerException
java.lang.NullPointerException
at com.anylogic.engine.markup.Path.getNumberOfTransporters(Unknown Source)
at movetest.Main.executeActionOf(Main.java:141)
at com.anylogic.engine.EventTimeout.execute(Unknown Source)
at com.anylogic.engine.Engine.c(Unknown Source)
at com.anylogic.engine.Engine.gc(Unknown Source)
at com.anylogic.engine.Engine.a(Unknown Source)
at com.anylogic.engine.Engine$i.run(Unknown Source)
The function "getMaxNumberOfTransporters()" did work though, which simply outputted the number that was specified in the "limit number of transporters" option field.
So the question is: why is this exception being thrown? Am I doing something wrong, or is there a bug in AnyLogic regarding these transporter-related functions/functionality?
By the way, I'm using AnyLogic 8 Personal Learning Edition 8.3.2 on a 64-bit Windows 10 computer.
Since AnyLogic Paths provide these methods (getNumberOfTransporters, etc.), this is definitely a bug; these methods should not be throwing internal exceptions under any circumstances.
A quick test confirms that these methods throw this exception if there is no transporter fleet in your model (so the exceptions being thrown is a little more forgivable). The exceptions aren't thrown if you have a fleet with a home location set, even if that location is in a different network to the path you are checking; i.e., even if it is never possible for any transporters to be on that path. (If you don't set a home location for the fleet, you get a different exception relating to that.)
So it looks like you are trying to use normal moving resource agents (i.e., from the Process Modeling library) as your 'transporters' instead of the Material Handling library transporter fleet.
If you want to restrict 'transported' movement around your network, you have two options which are conceptually different:
Use Process Modeling resource pools (as you are doing) and control the movement inside the Process Modeling blocks via use of things like RestrictedAreaStart and RestrictedAreaEnd blocks (i.e., you break the movement down into the relevant segments and control flow through the blocks that control the relevant portions). See the Job Shop example model for a good (and complex) example of this. Note that, conceptually, space markup only gives you distances for use in the model (not any model behaviour). This is the norm: space markup is only there to visualise your model and provide distances. (It also controls what movements are valid since there needs to be a route through the network but it's normally a design error if a required movement is not permitted, so this isn't really model behaviour.)
Use a TransporterFleet instead. They can interoperate with normal Process Modeling blocks (see screenshot below) and they are designed precisely to support this style of 'control their flow via restrictions on numbers of transporters on paths' (plus they have built-in functionality for load/unload times, behaviour after dropping off, etc.). Notice that, conceptually, with the Material Handling library the space markup defines model behaviour (rather than just giving you distances and visualisation). This is a major conceptual departure in the Material Handling library. (Similarly, the conveyor networks you define using Material Handling space markup also define model behaviour; e.g., the Station elements therein are similar to Service blocks in the Process Modeling library.)
P.S. I meant to add that, unless you use a transporter fleet, there is no direct way of getting which agents are on which paths. The closest is that networks support the getNearestPath function (see the API reference for Network in the help), one flavour of which will give you the nearest Path to an agent. (So, by looping through all resource agents and checking this for each of them, you could obliquely determine how many are 'on' each path, though you have to be careful because this only gives the nearest Path.) But this is irrelevant for what you want to achieve.
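For illustration, a rough sketch of that loop (hedged: how you collect your moving resource units, and the exact getNearestPath flavour, depend on your model, so check the Network API reference; myResourceUnits and path are placeholder names):
int nearCount = 0;
for (Agent unit : myResourceUnits) {  // myResourceUnits: however you gather your moving resource agents
    if (network.getNearestPath(unit) == path) {  // nearest only - the agent may not actually be on the path
        nearCount++;
    }
}
traceln("Agents nearest to this path: " + nearCount);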
The documentation indicates that the cardinality() function is deprecated and should no longer be used. However, it is still used in libraries such as ThermoSysPro.
e.g.
if (cardinality(C) == 0) then
  // some code
end if;
where C is a FluidInlet or a FluidOutlet.
Could anyone give a simple example of how it could be replaced?
The usual solution is to make the connector conditional and, if it is enabled, require that it is connected (see the sketch after the references below).
For physical connectors you can see how heatports and support is handled in:
Modelica.Electrical.Analog.Interfaces.ConditionalHeatPort
Modelica.Mechanics.Rotational.Interfaces.PartialElementaryOneFlangeAndSupport2
For control signals you can see how p_in, h_in etc are handled in
Modelica.Fluid.Sources.Boundary_pT
Modelica.Fluid.Sources.Boundary_ph
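As a minimal sketch of the pattern (condensed from Modelica.Electrical.Analog.Interfaces.ConditionalHeatPort; when useHeatPort is false, the tool removes the port, and the if-equation supplies the binding that the port's modifiers would otherwise provide):
partial model ConditionalHeatPort
  parameter Boolean useHeatPort = false "= true, if heatPort is enabled"
    annotation(Evaluate=true, HideResult=true);
  parameter Modelica.SIunits.Temperature T = 293.15
    "Fixed device temperature if useHeatPort = false";
  Modelica.Thermal.HeatTransfer.Interfaces.HeatPort_a heatPort(
    T=T_heatPort, Q_flow=-LossPower) if useHeatPort;
  Modelica.SIunits.Power LossPower "Loss power leaving component via heatPort";
  Modelica.SIunits.Temperature T_heatPort "Temperature of heatPort";
equation
  if not useHeatPort then
    T_heatPort = T;
  end if;
end ConditionalHeatPort;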
However, the connectors of ThermoSysPro belong in neither of those categories and that should ideally also be cleaned up.
The only thing I know of that could be used in this regard is the connectorSizing annotation. It is described in MLS chapter 18.7.
It is used a number of times in the Modelica Standard Library, e.g. in Modelica.Blocks.Math.MinMax via the parameter nu. When using it, the tool automatically sets the modifier for nu according to the number of connections to it.
parameter Integer nu(min=0) = 0 "Number of input connections"
  annotation (Dialog(connectorSizing=true));
Modelica.Blocks.Interfaces.RealVectorInput u[nu];
In the example below, nu=2 is generated automatically by Dymola when creating a connection in the graphical layer. I have removed the graphical annotations to make the code more readable.
model ExCS
  Modelica.Blocks.Math.MinMax minMax(nu=2);
  Modelica.Blocks.Sources.Sine sine(freqHz=6.28);
  Modelica.Blocks.Sources.Constant const(k=0.5);
equation
  connect(sine.y, minMax.u[1]);
  connect(const.y, minMax.u[2]);
end ExCS;
The cardinality() operator is used in Modelica.Fluid.Sources.BaseClasses.PartialSource, and in a similar way in other fluid libraries (IBPSA, AixLib, Buildings, BuildingSystems and IDEAS), in the form
// Only one connection allowed to a port to avoid unwanted ideal mixing
for i in 1:nPorts loop
assert(cardinality(ports[i]) <= 1,"
each ports[i] of boundary shall at most be connected to one component.
If two or more connections are present, ideal mixing takes
place with these connections, which is usually not the intention
of the modeller. Increase nPorts to add an additional port.
");
end for;
I have occasionally had models from users who somehow ended up with more than one connection to a ports[i]. How this happened was not clear, but I find the use of cardinality() useful to catch such situations, which otherwise can lead to mixing in the fluid port that the user did not intend and that is hard to detect.
I'm new to SUMO, Veins and OMNeT++, and to simulation generally, with a bit of background in networks. I have successfully set up the environment and run the Veins 4.6 demo application. On Google I found that, unlike RSUs, car modules are added on the fly.
In the demo example, car nodes send AirFrame11p messages. I don't see where this message is populated, because the TraCIDemo11p.cc methods (onWSA, onWSM, handleSelfMsg, handlePositionUpdate) deal with WSM message types, and BaseWaveApplLayer::checkAndTrackPacket ensures that the message being sent is either a BSM, WSM or WSA.
The file AirFrame11p.msg exists in veins\src\veins\modules\messages, but searching the project for references to "AirFrame11p" only finds matches in AirFrame11p_m.h and AirFrame11p_m.cc. If the demo is not using these files, what purpose do they serve, and where does the simulation get AirFrame11p from?
I'm trying to simulate a car accident scenario without an RSU, using V2V communication. I have replaced the demo map with my own and generated random routes, and I am now trying to remove the RSU from the demo application and exploring how to send customized messages (including geolocation, speed, direction, time, etc.) to nearby vehicles within a specified range, e.g. 100 meters, using WiFi Direct.
If I'm confusing something, please guide me. Thanks.
The short answer: The AirFrame11p message is a lower level message that encapsulates the upper layer messages. Just use the application message type that is appropriate for your application. If you want to replace the physical layer with WiFi direct instead of 11p, and you're starting from scratch, you're probably in for quite a bit of work, since the VEINS PHY implementation is very intricate. If you have an existing implementation of WiFi direct, it may be worth investigating the integration of VEINS' TraCI implementation with that code.
Encapsulation in VEINS
You are correct that the message types at the application layer are more diverse -- these message types (BSM and WSM) are used to encapsulate "application" behavior; it's just not very well visualized in the simulation execution. You can pause the simulation and look (for example) under scheduled events, where the queued packets can be examined visually.
Unlike regular networks, where such messages would be packaged in IP, MAC and PHY encapsulations, VEINS uses the following encapsulation process: BSMs are packaged in MAC frames (80211Pkt), which in turn are encapsulated by AirFrame11p signals. So basically, you should choose the correct message type for your application.
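For example, here is a minimal sketch of sending a custom payload from a TraCIDemo11p-style application (populateWSM and sendDown are methods from Veins 4.6's BaseWaveApplLayer; MyVehicleApp and the payload string are illustrative assumptions):
void MyVehicleApp::handleSelfMsg(cMessage* msg) {
    WaveShortMessage* wsm = new WaveShortMessage();
    populateWSM(wsm);                          // fills channel, priority, sender info
    wsm->setWsmData("lat;lon;speed;heading");  // your custom payload (assumed format)
    sendDown(wsm);                             // MAC and PHY wrap this in an AirFrame11p below the app layer
}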
Footnote regarding application behavior:
Technically speaking, these messages would be more correctly placed at the Facilities layer (see e.g. ETSI's spec), since the periodic exchange of messages provides data stored in the facilities layer, which is then used by cITS/VANET applications that run on top. If you need this, look at Artery (as Ventu suggested in the comments).
I have a general question about how to debug in SimPy. Normal debugging tools don't seem to work, since everything runs on the event loop and you can't step through the code line by line to inspect what exists at any point in time.
Primarily, I'm interested in finding what kinds of processes and callbacks are in existence at a particular time, and how to remove them at the appropriate point. Are there any best practices surrounding debugging in discrete event simulation generally?
I would just use a bunch of print()s.
One thing you might find useful is the state you can query on primitives such as resources. For example, you can ask a resource how many users it currently has, or how big the queue to use the resource is.
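A minimal sketch (the attributes are SimPy's documented Resource API; 'machine' is an illustrative name):
import simpy

env = simpy.Environment()
machine = simpy.Resource(env, capacity=2)

print(machine.count)       # number of users currently holding the resource
print(len(machine.users))  # the active requests themselves
print(len(machine.queue))  # requests still waiting for the resource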
All of these can be found in the documentation; here is the resource example: https://simpy.readthedocs.io/en/latest/api_reference/simpy.resources.html
I was working on a Simulink model recently and was using Goto and From blocks to keep a very busy system from becoming a twisted mess of wires. I was informed that I was not to use Goto and From blocks as they are considered bad style (at least, according to my employer).
While I hold that wires should be kept connected whenever possible, I believe that Goto and From blocks can significantly improve the readability of a system/subsystem if the model would result in lots of crossed wires otherwise; especially if the blocks can be color-coded (e.g. purple Goto block goes to all the purple From blocks).
I'd supply an image of the subsystem I'm working with, but I'm not sure I can put it on here. The subsystem itself has about 12 subsystem blocks within it (and possibly more later), each with two bus-type outputs. The first output of each subsystem goes to one Bus Creator block, and the second output of each goes to a second Bus Creator block. Since the subsystems are aligned vertically and the Bus Creators are to the right, this results in many crossed wires. I was using Goto and From blocks to clean up the system.
I can supply an image of a smaller, but similar model that I put together for this question.
For a system with on the order of 12 subsystems, this becomes very busy. I was using Goto and From blocks to connect the subsystems and the Bus Creators without a plethora of crossed wires.
I believe my employer may be carrying over the stigma attached to goto statements in text-based languages and applying it to Goto/From blocks in Simulink. Generally speaking, is using Goto and From blocks in this way (or any way) considered bad style?
The MathWorks Automotive Advisory Board has published some modeling guidelines (PDF) that include usage of Goto/From. The rules they list are:
Do not have subsystems that are floating, i.e. with all input/output ports connected via Gotos. One of the great things about Simulink is the ability to determine signal flow with only a cursory visual inspection; do not destroy this by linking everything with Gotos. At least have one feed-forward and one feedback loop between subsystems connected by signal lines.
My personal opinion on feedback signals is that they should all be connected with signal lines, but I'm sure you can come up with cases where drawing all of them clutters the model.
The second guideline is about the scope of the Goto tag; keep the visibility local as much as possible.
I feel setting visibility to scoped is acceptable also as long as you're not using the matching From more than a couple of levels downstream from the Goto. I've yet to come across a legitimate need for a global Goto tag.
So, not all Goto usage is bad, and you're right that it can improve readability in some cases. That being said, I don't think Gotos are justified for the picture above. I realize it is just an example, but I should point out that if the buses being created are virtual, the order of the inputs at the creator doesn't matter, and rearranging Bus Creator and Mux block inputs can work wonders for readability.
The problem with the guidelines above is that there's room for bending them, and developers on your team might do just that. Even if everyone is diligent about following them at first, you may run afoul of these guidelines one day, a long time from now, when you redraw that section of the model while refining or adding functionality. Rearranging inputs and outputs can be especially irritating in the middle of implementing some cool new feature. That may be why your employer chose to impose a blanket ban: it is inconvenient in some cases, but easier to enforce.
I want to organize a working bus functional model and push commonly used procedures (which look like CPU subroutines) out into a package and get them out of the main cpu model, but I'm stuck.
The procedures don't have access to the hardware bits when they're pushed out in a package.
In Verilog, I would put commonly used procedures out into an include file and link them into the CPU model as required for a given test suite.
More details:
I have a working bus functional model of a CPU, for simulation test benching.
At the "user interface" level I have a process called "main" running inside the CPU model which calls my predefined "instruction set" like this:
cpu_read(address, read_result);
cpu_write(address, write_data);
etc.
I bundle groups of those calls up into higher level procedures like
configure_communication_bus;
clear_all_packet_counters;
etc.
At the next layer these generic functions call a more hardware specific version which knows the interface timing for the design,
and those procedures then use an input record and output record to connect to the hardware module ports and waggle the cpu bus signals as required.
cpu_read calls hardware_cpu_read(cpu_input_record, cpu_output_record, address);
Something like this:
procedure cpu_read (address     : in  std_logic_vector(15 downto 0);
                    read_result : out std_logic_vector(31 downto 0)) is
begin
  hardware_cpu_read(cpu_input_record, cpu_output_record, address, read_result);
end procedure;
The cpu_input_record and cpu_output_record are declared as signals of type nnn_record in the cpu model vhdl file.
So this is all working, but every single one of these procedures is stored in the CPU VHDL module file, all in the procedure declaration section so that they share the same scope.
If I share the model with team members, they will need to add their own testing subroutines, and those also all end up in the same location in the file; furthermore, their simulation test code has to go into the "main" process along with mine.
I'd rather link in various tests from outside the model, and only keep model-specific procedures in the model file.
Ironically, I can push the lowest-level hardware procedures out to a package and call them from within the "main" process, but the higher-level procedures can't be put into that package (or any other package) because they don't have access to cpu_input_record and cpu_output_record.
I feel like there must be a simple way to clean up this code and make it modular, and I'm just missing something obvious.
I don't really think making a command interpreter and loading my test code into a behavioral ROM is the right way to go, by the way. Nor is fighting with the simulator interface to connect up a C program, but I may break down and try that.
Quick sketch of an answer (to the question I think you are asking! :-) though I may be off-beam...
To move the BFM subprograms into a reusable package, they need to be independent of the execution scope - that usually means a long parameter list for each of them. So using them in a testbench quickly gets tedious compared with the parameterless (or parameter-lite) versions you have now.
The usual workaround is to implement the BFM in a package, with long parameter lists.
Then write parameter-lite local equivalents (wrappers) in the execution scope, which simply call the package versions supplying all the parameters explicitly.
This is just boilerplate - not pretty but it does allow you to move the BFM into a package. These wrappers can be local to the testbench, to a process within it, or even to a subprogram within that process.
(The parameter types can be records for tidiness: these are probably declared in a third package, shared between BFM, TB, and the synthesisable device under test...)
Thanks to overloading, there is no ambiguity between the local and BFM package versions, so the actual testbench remains as simple as possible.
Example wrapper function:
function cpu_read(address : unsigned) return slv_32 is
begin
  return BFM_pack.cpu_read (
    address     => address,
    rd_data_bus => tb_rd_data_bus,
    wait_n      => tb_wait_signal,  -- "wait" is a reserved word in VHDL, so the formal needs a different name
    oe          => tb_mem_oe
    -- ditto for all the signals, constants and variables it needs from the tb_ scope
  );
end cpu_read;
Currently your test procedures require two extra signal parameters, cpu_input_record and cpu_output_record. This is not so bad. It is not uncommon to just have these on all procedures that interact with the CPU and be done with it. So use hardware_cpu_read and not cpu_read. Add cpu_input_record and cpu_output_record to your configure_communication_bus and clear_all_packet_counters procedures and be done. Perhaps choose shorter names.
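A hedged sketch of what that looks like once the higher-level procedure lives in a package (the record type names, parameter modes, addresses and data here are assumptions; adapt them to your declarations):
procedure configure_communication_bus (
    signal cpu_input_record  : inout cpu_in_rec_t;
    signal cpu_output_record : in    cpu_out_rec_t ) is
begin
  -- illustrative register writes via the low-level BFM procedure
  hardware_cpu_write(cpu_input_record, cpu_output_record, X"0010", X"0000_0001");
  hardware_cpu_write(cpu_input_record, cpu_output_record, X"0014", X"0000_00FF");
end procedure;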
I take a similar approach, except I use only one record with resolved elements. To make this work, you need to initialize the record so that all elements are non-driving (i.e. 'Z' for std_logic). To make this more flexible, I have created resolution functions for integer, time, and real. However, this only saves you one signal, so it is not a real huge win. Perhaps half way to where you think you want to be, but it is more work than what you are doing.
For VHDL-201X, we are working on syntax to allow parameters/ports to automatically map to an identically named signal. This will get you to where you want to be with any of the approaches (yours, mine, or Brian's without the extra wrapper subprogram). It is posted here: http://www.eda.org/twiki/bin/view.cgi/P1076/ImplicitConnections. Given this, I would add the two records to your procedures and call it good enough for now.
Once you get past this problem, you also seem to be asking how to write separate tests using the same testbench. For this I use multiple architectures - I like to think of these as a Factory Class for concurrent code. To make this feasible, I separate the stimulus-generation code from the rest of the testbench (typically: netlist connections and clock). My presentation, "VHDL Testbench Techniques that Leapfrog SystemVerilog", has an overview of this architecture along with a number of other goodies. It is available at: http://www.synthworks.com/papers/index.htm
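To make that concrete, a hedged sketch (entity, architecture and instance names are illustrative): one TestCtrl entity with one architecture per test, and a configuration per test that selects which one runs:
configuration RunReadWriteTest of TbTop is
  for Harness                                   -- TbTop architecture holding the DUT, clocks and BFM
    for TestCtrl_1 : TestCtrl
      use entity work.TestCtrl(ReadWriteTest);  -- swap in PacketTest etc. for other tests
    end for;
  end for;
end configuration RunReadWriteTest;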
You're definitely on the right track; in fact, I have a variant like what you describe.
The catch is that I now build up whole subroutines using the "parameter-light" procedures, and those are what I want to put in a package to share and reuse. The problem is that any procedure pushed out to a package can't call the parameter-light procedures in the main VHDL file.
So what happens is that we have one main VHDL file with all the common CPU hardware setup routines, and every designer's test code, all in the same file.
Long story short, putting our test subroutines into separate files is really what I was hoping for.