I have the following questions:
How is global code executed and how are global variables initialized in Perl?
If I write use package_name; in multiple packages, does the global code execute each time?
Are global variables defined this way thread safe?
Perl makes a complete copy of all code and variables for each thread. Communication between threads is via specially marked shared variables (which in fact are not shared - there is still a copy in each thread, but all the copies get updated). This is a significantly different threading model than many other languages have, so the thread-safety concerns are different - mostly centering around what happens when objects are copied to make a new thread and those objects have some form of resource to something outside the program (e.g. a database connection).
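A minimal sketch of that model, assuming the core threads and threads::shared modules (the variable names are invented):

use threads;
use threads::shared;

my $shared_count :shared = 0;    # marked shared: all per-thread copies stay in sync
my $private_count = 0;           # copied per thread; updates stay in that thread

my @workers = map {
    threads->create(sub {
        { lock($shared_count); $shared_count++; }   # serialise updates to the shared copy
        $private_count++;                           # only this thread's copy changes
    });
} 1 .. 4;
$_->join for @workers;

print "shared:  $shared_count\n";    # 4
print "private: $private_count\n";   # still 0 in the parent's copy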
Your question about use isn't really related to threads, as far as I can tell? use does several things; one is loading the specified module and running any top-level code in it; this happens only once per module, not once per use statement.
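A quick way to see this (the module and message are invented): put top-level code in a module and use it from two packages; it runs once, because Perl records the load in %INC and skips subsequent use statements for the same module.

# MyGlobals.pm
package MyGlobals;
our $config = "initialized";                  # package global, set at first load
print "MyGlobals top-level code running\n";   # prints exactly once per process
1;

# main.pl
package A; use MyGlobals;   # loads MyGlobals.pm and runs its top-level code
package B; use MyGlobals;   # already in %INC, so nothing is executed again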
I tried to use debug_access=all on the VCS command line, but it seems I still can't dump the signals declared inside a task. Are there any args I need to use?
AFAIK, tools do not yet allow you to dump variables with automatic lifetimes. This is because they come in and out of existence. Also, because of re-entrant behavior from threads or recursion, there might be multiple instances of the same named variable.
If these signals are inside a class method, you might be able to move them outside and make them class members. Otherwise you should be able to declare them as static variables as long as there is no re-entrant behavior.
dave_59 is correct; there's no way to do this. For tasks, at least, you can drive signals that are declared elsewhere from inside the task, and you'll be able to monitor those signals. For functions, I don't believe it's possible to modify external signals; all of a function's inputs/outputs must be declared.
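To illustrate both workarounds, a hedged SystemVerilog sketch (all names invented): a signal declared at module scope can be driven from a task and dumped as usual, and a static local becomes dumpable at the cost of re-entrancy.

module tb;
  logic [7:0] bus_val;                  // module scope: dumpable, and a task may drive it

  task drive_bus(input logic [7:0] v);
    static logic [7:0] last_v;          // static lifetime: dumpable, but a single copy
                                        // is shared by every call (not re-entrant safe)
    last_v  = v;
    bus_val = v;                        // driving a signal declared outside the task
  endtask
endmodule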
I come from a computer science background and am used to traditional IT programming, with relatively little experience in structured text. In my current project I am extensively using function blocks. I am aware that this involves some memory overhead and so on. Could anyone give me the advantages and disadvantages of each approach? Should I avoid function blocks and write everything in a single program? Practical hints are welcome, as I am about to release my application.
System : Codesys
I also come from the PC programming world, and there are certain object tricks I miss when programming in Codesys. The function blocks go a long way towards object thinking, though. They're too easy to peek into from the outside, so some discipline is needed from the user to keep the functionality or objects encapsulated.
You shouldn't write a single piece of program to handle all functionality; instead, use the Codesys facilities to divide the program into objects where possible. This also means identifying which objects are alike and can be programmed as function blocks. The instance of a function block is created in memory when the program is downloaded, i.e. it is always visible for monitoring.
I normally use POUs to divide the project into larger parts, e.g. Machine1(prg), Machine2(prg) and Machine3(prg). If each machine has one or more motors of a similar type, this is where the function blocks come in: I can program one motor object called FB_Motor and reuse it for the necessary motor instances inside the 3 machine programs. Each instance then holds its own internal states, timers, inputs, outputs, or whatever a motor needs (see the sketch just after the structure below).
The structure for the above example is now:
MAIN, calls
    Machine1(prg), calls
        fbMotor1 (instance of FB_Motor, local to Machine1)
        fbMotor2 (instance of FB_Motor, local to Machine1)
    Machine2(prg), calls
        fbMotor1 (instance of FB_Motor, local to Machine2)
    Machine3(prg), calls
        fbMotor1 (instance of FB_Motor, local to Machine3)
        fbMotor2 (instance of FB_Motor, local to Machine3)
        fbMotor3 (instance of FB_Motor, local to Machine3)
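As promised above, a hedged sketch of what FB_Motor and one instance call might look like (the start-delay logic and all names are invented):

FUNCTION_BLOCK FB_Motor
VAR_INPUT
    xStart : BOOL;
    xStop  : BOOL;
END_VAR
VAR_OUTPUT
    xRunning : BOOL;
END_VAR
VAR
    tonStartDelay : TON;   (* each instance keeps its own timer state *)
END_VAR

(* a short start delay before reporting the motor as running *)
tonStartDelay(IN := xStart AND NOT xStop, PT := T#500MS);
xRunning := tonStartDelay.Q;
END_FUNCTION_BLOCK

PROGRAM Machine1
VAR
    fbMotor1 : FB_Motor;   (* instance memory is allocated at download *)
END_VAR

fbMotor1(xStart := xConveyorStart, xStop := xConveyorStop);
END_PROGRAM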
The functions are another matter. Their data exists on the stack while the function is called, and when the function has returned its value the data is released. There are lots of built-in functions, e.g. BOOL_TO_INT(), SQR(n) and so on.
I normally use functions for lookup and conversion functions. And they can be called from all around the program.
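For example, a small conversion function of the kind described (the scaling and names are invented): its local data exists only for the duration of the call, and it can be called from anywhere in the project.

FUNCTION F_RawToPercent : REAL
VAR_INPUT
    iRaw : INT;   (* e.g. 0..27648 from an analog input card *)
END_VAR

(* inputs and locals live on the stack only while the call is active *)
F_RawToPercent := INT_TO_REAL(iRaw) * 100.0 / 27648.0;
END_FUNCTION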
Clarity, robustness and maintainability are everything in the PLC world. Function blocks help you achieve that if the structure is kept relatively flat (so avoid nesting function blocks inside function blocks inside function blocks, as opposed to true objects and their inheritance).
The graphical languages also exist for a reason: they visualise complex systems in an easy-to-digest form, so that the maintenance personnel of the future have an easier time following what is wrong with the PLC program and that part of the factory.
As for ST, it helps to remember that it is based on strongly typed Wirthian languages (Ada, Pascal, etc.). Also, what is often more important than memory usage is the constant cycle time of the program (since it is a real-time system). A different cup of tea altogether is the electrical layer of the control system, plus the physical layer and all the relations on that layer, which can come back to bite you somewhere else in your program if not taken into account.
Is it possible in Haxe to have the compiler automatically insert a function call or code segment at the point where an object instance goes out of scope? I have object instances that require manual cleanup beyond what garbage collection does (for the JS target).
More Info
I'm experimenting with allocating small data structures in JavaScript code manually inside a virtual heap (an ArrayBuffer), similar to what is done with compiled asm.js programs. I am using Haxe in part because I can create abstract types as convenient aliases/abstractions for their underlying data allocated in the heap (always some manner of ArrayBufferView), while suffering no runtime overhead from the abstraction.
The only issue is that deallocation must be done manually. It's simple enough to resort to calling a destructor function manually within code, but I find this error-prone and messy. I was hoping Haxe would have some mechanism I could use to automate the insertion of these function calls whenever a variable went out of scope, in a deterministic, compile-time manner.
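For context, here is a rough sketch of the pattern (the Heap API and all names are hypothetical): the abstract compiles away to a plain Int offset, but free() must still be written by hand at every exit point, which is the step I'd like the compiler to insert for me.

// a 3-float vector stored in a virtual heap; at runtime it is just an Int offset
abstract Vec3(Int) {
    public inline function new() {
        this = Heap.alloc(3 * 8);                  // hypothetical allocator
    }
    public var x(get, set):Float;
    inline function get_x():Float return Heap.f64[this >> 3];
    inline function set_x(v:Float):Float return Heap.f64[this >> 3] = v;

    public inline function free():Void {
        Heap.release(this);                        // today: must be called manually
    }
}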
I want to organize a working bus functional model and push commonly used procedures (which look like CPU subroutines) out into a package and get them out of the main cpu model, but I'm stuck.
The procedures don't have access to the hardware bits when they're pushed out in a package.
In Verilog, I would put commonly used procedures out into an include file and link them into the CPU model as required for a given test suite.
More details:
I have a working bus functional model of a CPU, for simulation test benching.
At the "user interface" level I have a process called "main" running inside the CPU model which calls my predefined "instruction set" like this:
cpu_read(address, read_result);
cpu_write(address, write_data);
etc.
I bundle groups of those calls up into higher level procedures like
configure_communication_bus;
clear_all_packet_counters;
etc.
At the next layer these generic procedures call a more hardware-specific version which knows the interface timing for the design, and those procedures then use an input record and an output record to connect to the hardware module ports and waggle the CPU bus signals as required.
cpu_read calls hardware_cpu_read(cpu_input_record, cpu_output_record, address);
Something like this:
procedure cpu_read (address     : in  std_logic_vector(15 downto 0);
                    read_result : out std_logic_vector(31 downto 0)) is
begin
    hardware_cpu_read(cpu_input_record, cpu_output_record, address, read_result);
end procedure;
The cpu_input_record and cpu_output_record are declared as signals of type nnn_record in the cpu model vhdl file.
So this is all working, but every one of these procedures is stored in the CPU VHDL module file, all in the procedure declaration section, so that they share the same scope.
If I share the model with team members they will need to add their own testing subroutines, and those end up in the same place in the file; their simulation test code also has to go into the "main" process along with mine.
I'd rather link in various tests from outside the model, and keep only model-specific procedures in the model file.
Ironically, I can push the lowest-level hardware procedures out to a package and call them from within the "main" process, but the higher-level procedures can't be put into that package, or any other package, because they don't have access to cpu_input_record and cpu_output_record.
I feel like there must be a simple way to clean up this code and make it modular, and I'm just missing something obvious.
I don't really think making a command interpreter and loading my test code into a behavioral ROM is the right way to go, by the way. Nor is fighting with the simulator interface to connect up a C program, but I may break down and try that.
Quick sketch of an answer (to the question I think you are asking! :-) though I may be off-beam...
To move the BFM subprograms into a reusable package, they need to be independent of the execution scope - that usually means a long parameter list for each of them. So using them in a testbench quickly gets tedious compared with the parameterless (or parameter-lite) versions you have now.
The usual workaround is to implement the BFM in a package, with long parameter lists.
Then write parameter-lite local equivalents (wrappers) in the execution scope, which simply call the package versions supplying all the parameters explicitly.
This is just boilerplate - not pretty but it does allow you to move the BFM into a package. These wrappers can be local to the testbench, to a process within it, or even to a subprogram within that process.
(The parameter types can be records for tidiness: these are probably declared in a third package, shared between BFM, TB, and the synthesisable device under test...)
Thanks to overloading, there is no ambiguity between the local and BFM package versions, so the actual testbench remains as simple as possible.
Example wrapper (written as a procedure rather than a function, since the underlying BFM call must drive and wait on testbench signals, which a VHDL function cannot do):

procedure cpu_read (address     : in  unsigned;
                    read_result : out slv_32) is
begin
    BFM_pack.cpu_read (
        address     => address,
        read_result => read_result,
        rd_data_bus => tb_rd_data_bus,
        wait_n      => tb_wait_signal,  -- "wait" is a reserved word in VHDL, hence the rename
        oe          => tb_mem_oe
        -- ditto for all the signals, constants and variables it needs from the tb_ scope
    );
end procedure cpu_read;
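A call site in the "main" process then stays parameter-lite (assuming a local variable data : slv_32; the address literal is just for illustration):

cpu_read(x"1234", data);   -- the wrapper supplies tb_rd_data_bus, tb_mem_oe, etc.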
Currently your test procedures require two extra signals on them, cpu_input_record and cpu_output_record. This is not so bad. It is not uncommon to just have these on all procedures that interact with the cpu and be done with it. So use hardware_cpu_read and not cpu_read. Add cpu_input_record, cpu_output_record to your configure_communication_bus and clear_all_packet_counters procedures and be done. Perhaps choose shorter names.
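A hedged sketch of what that looks like for one of the higher-level procedures, reusing the question's names (the record types, hardware_cpu_write and the register addresses are invented here):

procedure configure_communication_bus (
    signal cpu_input_record  : inout cpu_input_record_t;
    signal cpu_output_record : in    cpu_output_record_t ) is
begin
    -- two hypothetical register writes; only the two records had to be added
    hardware_cpu_write(cpu_input_record, cpu_output_record, x"0040", x"0000_0001");
    hardware_cpu_write(cpu_input_record, cpu_output_record, x"0044", x"0000_00FF");
end procedure;

-- and from the "main" process:
-- configure_communication_bus(cpu_input_record, cpu_output_record);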
I take a similar approach, except I use only one record with resolved elements. To make this work, you need to initialize the record so that all elements are non-driving (i.e. 'Z' for std_logic). To make this more flexible, I have created resolution functions for integer, time, and real. However, this only saves you one signal - not a huge win. Perhaps halfway to where you think you want to be, and it is more work than what you are doing.
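For reference, a rough sketch of the one-resolved-record idea (type and element names are invented; the resolution function body is left out):

type cpu_rec_t is record
    addr : std_logic_vector(15 downto 0);
    data : std_logic_vector(31 downto 0);
    rd   : std_logic;
end record;

type cpu_rec_array_t is array (natural range <>) of cpu_rec_t;

-- element-wise resolution: 'Z' entries lose, an active driver wins
function resolved (s : cpu_rec_array_t) return cpu_rec_t;

subtype cpu_rec_rt is resolved cpu_rec_t;

-- each driver initialises to all-'Z' so it is non-driving by default
signal cpu_bus : cpu_rec_rt := (addr => (others => 'Z'),
                                data => (others => 'Z'),
                                rd   => 'Z');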
For VHDL-201X, we are working on syntax to allow parameters/ports to map automatically to an identically named signal. This will get you to where you want to be with any of these approaches (yours, mine, or Brian's, without the extra wrapper subprogram). It is posted here: http://www.eda.org/twiki/bin/view.cgi/P1076/ImplicitConnections. Given this, I would add the two records to your procedures and call it good enough for now.
Once you get past this problem, you seem to also be asking how to write separate tests using the same testbench. For this I use multiple architectures - I like to think of them as a factory class for concurrent code. To make this feasible, I separate the stimulus-generation code from the rest of the testbench (typically: netlist connections and clock). My presentation, "VHDL Testbench Techniques that Leapfrog SystemVerilog", has an overview of this architecture along with a number of other goodies. It is available at: http://www.synthworks.com/papers/index.htm
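A skeletal sketch of the multiple-architecture idea (the entity, instance and configuration names are all invented): the test-control entity stays fixed, each test is an architecture of it, and one small configuration per test selects which architecture runs.

entity TestCtrl is
    -- port (...) would carry the clock and the cpu record interface
end entity TestCtrl;

architecture Test_ReadWrite of TestCtrl is
begin
    main : process
    begin
        -- cpu_read / cpu_write sequences for this particular test
        wait;
    end process;
end architecture Test_ReadWrite;

architecture Test_Counters of TestCtrl is
begin
    main : process
    begin
        -- clear_all_packet_counters and friends for this test
        wait;
    end process;
end architecture Test_Counters;

configuration TB_Test_ReadWrite of TestBench is
    for TestHarness
        for TestCtrl_1 : TestCtrl
            use entity work.TestCtrl(Test_ReadWrite);
        end for;
    end for;
end configuration TB_Test_ReadWrite;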
You're definitely on the right track; in fact I have a variant of what you describe.
The catch is that I now build whole subroutines out of the "parameter-light" procedures, and those are what I want to put in a package to share and reuse. The problem is that any procedure pushed out to a package can't call the parameter-light procedures in the main VHDL file.
So what happens is we have one main VHDL file with all the common CPU hardware setup routines, and every designer's test code, all in the same file.
Long story short, putting our test subroutines into separate files is really what I was hoping for.
How are global variables set for a Drools stateless session?
Let's say two threads access the same session, but each sets the global variable customer to a new ArrayList. Does the second thread's ArrayList replace the first thread's ArrayList for the global customer?
That seems to be the case, judging from the StatelessKnowledgeSession class documentation:
StatelessKnowledgeSessions support globals, scoped in a number of ways. I'll cover the non-command way first, as commands are scoped to a specific execution call. Globals can be resolved in three ways. The StatelessKnowledgeSession supports getGlobals(), which returns a Globals instance. These globals are shared for ALL execution calls, so be especially careful of mutable globals in these cases - as often execution calls can be executing simultaneously in different threads. Globals also supports a delegate, which adds a second way of resolving globals. Calling of setGlobal(String, Object) will actually be set on an internal Collection, identifiers in this internal Collection will have priority over supplied delegate, if one is added. If an identifier cannot be found in the internal Collection, it will then check the delegate Globals, if one has been set.
http://docs.jboss.org/jbpm/v5.1/javadocs/org/drools/runtime/StatelessKnowledgeSession.html
Am I right?
Although I cannot give you a fully reliable answer (because I haven't tested this), I would say that you are right:
under the StatelessKnowledgeSession's hood, Drools uses a StatefulKnowledgeSession, and in a stateful session I would expect a call to setGlobal(...) to override the value from a previous call.
Globals are held in a "globals store". This globals store is session-specific, which means that if you manage to access the same session from different threads simultaneously, one thread will overwrite the other's entry in the globals store - whichever thread's setGlobal(...) executes last wins.
I can confirm this - globals are shared between the threads using a session.
We were using a global to store the result from each execution and found that when multiple threads were executing at the same time, we would occasionally get the wrong result because another thread jumped in and overwrote the global before the previous thread had retrieved the value.
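To make the hazard concrete, a hedged sketch against the Drools 5 API (kbase, facts, Customer and the "customer" global are assumed from the question; the CommandFactory calls are real API). The second block shows the usual fix: scope the global to a single execute() call with commands, which the quoted javadoc describes as execution-scoped.

import java.util.ArrayList;
import java.util.List;
import org.drools.command.Command;
import org.drools.command.CommandFactory;
import org.drools.runtime.StatelessKnowledgeSession;

// racy: both threads write into the session's shared globals store
final StatelessKnowledgeSession session = kbase.newStatelessKnowledgeSession();
Runnable work = new Runnable() {
    public void run() {
        List<Customer> customers = new ArrayList<Customer>();
        session.setGlobal("customer", customers);  // last writer wins, for ALL threads
        session.execute(facts);                    // may already see the other thread's list
    }
};
new Thread(work).start();
new Thread(work).start();

// safe: the global travels with one execution call only
List<Command> cmds = new ArrayList<Command>();
cmds.add(CommandFactory.newSetGlobal("customer", new ArrayList<Customer>()));
cmds.add(CommandFactory.newInsertElements(facts));
session.execute(CommandFactory.newBatchExecution(cmds));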