RSLogix 5000, program structure to handle 30 motors simultaneously - plc

The project is to have 30 linear motors execute commands simultaneously. My question is about the best way to structure the subroutines, and whether there is a better way to call them.
Screenshot of the work space and structure of the Control subroutine
As you can see, I have the Control subroutine. Each rung of this subroutine calls the other subroutines below it in order. Drive_Status_1 and _2 are called unconditionally; the other subroutines are only called when an 'examine on' element is true.
This approach requires changing all of the tags in each subroutine for every drive. Retyping that many tags and making sure not to miss any has already led to some annoying mistakes, and I can only imagine it will get worse with 30 drives. Is there a better way?

You are doing fine. Many ways to skin a cat.
Looks like you're using a 1756-L82E; this processor has plenty of power for what you're asking. I just did a bottle filler/conveyor control project that uses 35 different drives. We're controlling them all via EtherNet/IP, I'm not even using a managed switch, and we have no problems. They are all running at the same time. I rarely separate drives into subroutines; in that 35-drive example, they are all controlled within one subroutine.
I do, however, limit the setup/parameter data within the logic. I try to keep the logic as simple as possible: configure your drive, then use only the necessary parameters within the logic.
Cmd Examples: fwd/Rev, start/stop, fault reset, and speed cmd.
Feedback Examples: Active, Faulted
Below is a link to an example of a bare-bones drive control scheme.
Drive logic
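The linked logic isn't reproduced here, but a minimal sketch of that bare-bones scheme in Structured Text might look like the following (the DriveData UDT and all its members are hypothetical). Looping over an array of a user-defined type is also one common answer to the tag-retyping problem in the question, since the same routine then serves all 30 drives:

// Hypothetical UDT DriveData with the minimal members listed above:
// Cmd_Start, Cmd_Stop, Cmd_SpeedRef (commands), Out_Run, Out_SpeedRef
// (drive outputs), Sts_Active, Sts_Faulted (feedback).
// Controller tags: Drive : DriveData[30];  i : DINT;
FOR i := 0 TO 29 DO
    // Only the bare-bones commands live in the logic; every other
    // drive parameter stays in the drive configuration.
    IF Drive[i].Cmd_Start AND NOT Drive[i].Sts_Faulted THEN
        Drive[i].Out_Run := 1;
    END_IF;
    IF Drive[i].Cmd_Stop THEN
        Drive[i].Out_Run := 0;
    END_IF;
    Drive[i].Out_SpeedRef := Drive[i].Cmd_SpeedRef;
END_FOR;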

Related

MATLAB: Does the execution of addpath/rmpath/savepath in one MATLAB instance affect other instances?

Motivation: Imagine that you are developing a MATLAB package, which provides a group of functions to the users. You have multiple versions of this package being developed on a single laptop. You would like to test these different versions in multiple instances of MATLAB:
You open one MATLAB window, type run_test(DIRECTORY_OF_PACKAGE_VERSION1), and hit enter;
While the first test is running, you open another MATLAB window, type run_test(DIRECTORY_OF_PACKAGE_VERSION2), and hit enter.
See the pseudo-code below for a better idea about the tests.
No code or data is shared between different tests --- except for those embedded in MATLAB, as the tests are running on the same laptop, using the same installation of MATLAB. Below is a piece of pseudo-code for such a scenario.
% MATLAB instance 1
run_test(DIRECTORY_OF_PACKAGE_VERSION1);
% MATLAB instance 2
run_test(DIRECTORY_OF_PACKAGE_VERSION2);
% Code for the tests
function run_test(package_directory)
setup_package(package_directory);
RUN EXPERIMENTS TO TEST THE FUNCTIONS PROVIDED BY THE PACKAGE;
uninstall_package(package_directory);
end
% This is the setup of the package that you are developing.
% It should be called as a black box in the tests.
function setup_package(package_directory)
addpath(PATH_TO_THE_FUNCTIONS_PROVIDED_BY_THE_PACKAGE);
% Make the package available in subsequent MATLAB sessions
savepath;
end
% The function that uninstalls the package: remove the paths
% added by `setup_package` and delete the files etc.
function uninstall_package(package_directory)
rmpath(PATH_TO_THE_FUNCTIONS_PROVIDED_BY_THE_PACKAGE);
savepath;
end
You want to make sure the following.
The tests do not interfere with each other;
Each test is calling functions from the correct version of the package.
Questions:
Does the execution of addpath, rmpath, and savepath in one MATLAB instance affect the other instance, sooner or later?
More generally, what kind of commands executed in one MATLAB instance can affect the other instance?
What if I am running only one instance of MATLAB, but invoke a parfor loop with two loops running in parallel? Does the execution of addpath/rmpath/savepath in one loop affect the other loop, sooner or later? In general, what kind of commands executed in one parallel loop can affect the other loop? (As pointed out by @Edric, this can be complicated; so let us not worry about it. Thank you, @Edric.)
Thank you very much for any comments and insights. It would be much appreciated if you could direct me to relevant sections in the official documentation of MATLAB --- I did some searching in the documentation, but have not found an answer to my question.
BTW, in case you find that the test described in the pseudo code is conducted in a wrong/bad manner, I will be very grateful if you could recommend a better way of doing it.
The documentation page for the MATLAB Search Path specifies at the bottom:
When you change the search path, MATLAB uses it in the current session, but does not update pathdef.m. To use the modified search path in the current and future sessions, save the changes using savepath or the Save button in the Set Path dialog box. This updates pathdef.m.
So, standard MATLAB sessions are "isolated" in terms of their MATLAB Search Path unless you use savepath. After a call to savepath, new MATLAB sessions will read the updated pathdef.m on startup.
The situation with a parallel pool is slightly more complex. There are a couple of things that affect this. First is the parameter AutoAddClientPath that you can specify for the parpool command. When true, an attempt is made to reflect the desktop MATLAB's path on the workers. (This might not work if the workers cannot access the same folders).
When a parallel pool is running, any changes to the path on the desktop MATLAB client are sent to the workers, so they can attempt to add or remove path entries. Parallel pool workers calling addpath or rmpath do so in isolation. (I'm afraid I can't find a documentation reference for this).
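If the aim is simply to keep simultaneous tests from interfering, one approach (my suggestion, not something from the documentation) is to avoid savepath in the tests altogether and let onCleanup restore the session's path:

% Sketch of a run_test that never persists path changes.
function run_test(package_directory)
    oldPath = path;                           % snapshot the current search path
    restorer = onCleanup(@() path(oldPath));  % restores it even if the test errors
    addpath(package_directory);               % make this package version visible
    % ... run the experiments against this version ...
end

Since savepath is never called, pathdef.m is untouched and each instance only ever sees its own in-session changes.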

Structured Text : Function and Function block (Pros and Cons)

I come from a computer science background and am used to traditional IT programming, and I have relatively little experience with structured text. In my current project I am making extensive use of function blocks. I am aware that this involves some memory issues and so on. Could anyone give me the advantages and disadvantages of each? Should I avoid function blocks and write everything in a single program? Practical hints are especially welcome, as I am about to release my application.
System : Codesys
I also come from the PC programming world, and there are certain object tricks I miss when programming in Codesys. The function blocks go a long way towards object thinking, though. They're too easy to peek into from the outside, so some discipline from the user is necessary to encapsulate the functionality or objects.
You shouldn't write a single monolithic program to handle all the functionality; instead, use the Codesys facilities to divide the program into objects where possible. This also means identifying which objects are alike and can be programmed as function blocks. The instance of a function block is created in memory when the program is downloaded, i.e. it is always visible for monitoring.
I normally use POUs to divide the project into larger parts, e.g. Machine1(prg), Machine2(prg) and Machine3(prg). If each machine has one or more motors of a similar type, this is where the function blocks come in: I can program one motor object called FB_Motor and reuse it for the necessary motor instances inside the three machine programs. Each instance then holds its own internal states, timers, inputs and outputs, or whatever a motor needs.
The structure for the above example is now:
MAIN, calls
  Machine1(prg), calls
    fbMotor1 (implements FB_Motor, local for Machine1)
    fbMotor2 (implements FB_Motor, local for Machine1)
  Machine2(prg), calls
    fbMotor1 (implements FB_Motor, local for Machine2)
  Machine3(prg), calls
    fbMotor1 (implements FB_Motor, local for Machine3)
    fbMotor2 (implements FB_Motor, local for Machine3)
    fbMotor3 (implements FB_Motor, local for Machine3)
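A bare-bones sketch of what FB_Motor and one instance could look like in Structured Text (member and tag names are invented for illustration):

FUNCTION_BLOCK FB_Motor
VAR_INPUT
    xStart : BOOL;   // start command
    xStop  : BOOL;   // stop command
END_VAR
VAR_OUTPUT
    xRunning : BOOL; // running feedback
END_VAR
// Body: a simple start/stop latch. Each instance keeps its own copy of
// this state (plus any timers it declares) between scan cycles.
xRunning := (xStart OR xRunning) AND NOT xStop;

PROGRAM Machine1
VAR
    fbMotor1   : FB_Motor; // local instance with independent state
    xLineStart : BOOL;
    xLineStop  : BOOL;
END_VAR
fbMotor1(xStart := xLineStart, xStop := xLineStop);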
The functions are another matter. Their data exist on the stack when the function is called, and when the function has returned its value, the data is released. There are lots of built in functions, e.g. BOOL_TO_INT(), SQR(n) and so on.
I normally use functions for lookup and conversion functions. And they can be called from all around the program.
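For instance, a small conversion function might look like this (a sketch; the names are made up):

FUNCTION RawToPercent : REAL
VAR_INPUT
    iRaw : INT; // raw value from a 15-bit analog input
END_VAR
// A function keeps no state between calls - its locals live on the
// stack - which is exactly why it suits stateless lookups and conversions.
RawToPercent := INT_TO_REAL(iRaw) * 100.0 / 32767.0;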
Clarity, robustness and maintainability are everything in the PLC world. Function blocks help you achieve that if the structure is kept relatively flat (so avoid nesting function block inside function block inside function block, as opposed to true objects and their inheritance).
Also, the graphical languages are there for a reason: they visualise complex systems in an easy-to-digest form, so that the maintenance personnel of the future have an easier time following what is wrong with the PLC program and that part of the factory.
As for ST, it helps to remember that it is based on strongly typed Wirthian languages (Ada, Pascal, etc.). Also, what is often more important than memory usage is the constant cycle time of the program (since it is a real-time system). Another cup of tea entirely is the electrical layer of the control system, plus the physical layer and all the relations on that layer, which can flash back somewhere else in your program if not taken into account.

In Simulink, are Goto and From blocks generally considered bad style?

I was working on a Simulink model recently and was using Goto and From blocks to keep a very busy system from becoming a twisted mess of wires. I was informed that I was not to use Goto and From blocks as they are considered bad style (at least, according to my employer).
While I hold that wires should be kept connected whenever possible, I believe that Goto and From blocks can significantly improve the readability of a system/subsystem if the model would result in lots of crossed wires otherwise; especially if the blocks can be color-coded (e.g. purple Goto block goes to all the purple From blocks).
I'd supply an image of the subsystem I'm working with, but I'm not sure I can put it on here. The subsystem itself has about 12 subsystem blocks (and possibly more later) within it, each with two bus-type outputs. The first output of each subsystem goes to one Bus Creator block, and the second output of each goes to a second Bus Creator block. Since the subsystems are aligned vertically and the Bus Creators are to the right, this results in many crossed wires. I was using Goto and From blocks to clean up the system.
I can supply an image of a smaller, but similar model that I put together for this question.
For a system with on the order of 12 subsystems, this becomes very busy. I was using Goto and From blocks to connect the subsystems and the Bus Creators without a plethora of crossed wires.
I believe my employer may be carrying the stigma of using goto statements from text-based languages and applying it to Goto/From blocks in Simulink. Generally speaking, is using Goto and From blocks in this way (or any way) considered to be bad style?
The MathWorks Automotive Advisory Board (MAAB) has published some modeling guidelines (PDF) that include usage of Goto/From. The rules they list are:
Do not have subsystems that are floating, i.e. with all input/output ports connected via Gotos. One of the great things about Simulink is the ability to determine signal flow with only a cursory visual inspection; do not destroy this by linking everything with Gotos. At least have one feed-forward and one feedback loop between subsystems connected by signal lines.
My personal opinion on feedback signals is that they should all be connected with signal lines, but I'm sure you can come up with cases where drawing all of them clutters the model.
The second guideline is about the scope of the Goto tag; keep the visibility local as much as possible.
I feel setting visibility to scoped is acceptable too, as long as you're not using the matching From more than a couple of levels downstream from the Goto. I've yet to come across a legitimate need for a global Goto tag.
So, not all Goto usage is bad, and you're right that it can improve readability in some cases. That being said, I don't think Gotos are justified for the picture above. I realize it is just an example, but I should point out that if the buses being created are virtual, the order of the inputs at the creator doesn't matter, and rearranging Bus Creator and Mux block inputs can work wonders for readability.
The problem with the guidelines above is that there's room for bending them, and developers on your team might do just that. Even if everyone is diligent about following them at first, you may run afoul of these guidelines one day, a long time from now, when you redraw that section of the model while refining or adding functionality. Rearranging inputs and outputs can be especially irritating in the middle of implementing some cool new feature. That may be the reason your employer chose to impose a blanket ban: it is inconvenient in some cases, but it is easier to enforce.
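As a practical footnote (my sketch, not part of the MAAB guidelines): the scope rule is easy to audit programmatically, which is a gentler alternative to a blanket ban. Assuming a loaded model named 'mymodel':

% List every Goto block whose tag is not local.
gotos = find_system('mymodel', 'BlockType', 'Goto');
for k = 1:numel(gotos)
    vis = get_param(gotos{k}, 'TagVisibility');
    if ~strcmp(vis, 'local')
        fprintf('%s has %s visibility\n', gotos{k}, vis);
    end
end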

VHDL Bus Functional Modelling - Can't put groups of procedures into a package to clean up the code

I want to organize a working bus functional model and push commonly used procedures (which look like CPU subroutines) out into a package and get them out of the main cpu model, but I'm stuck.
The procedures don't have access to the hardware bits when they're pushed out in a package.
In Verilog, I would put commonly used procedures out into an include file and link them into the CPU model as required for a given test suite.
More details:
I have a working bus functional model of a CPU, for simulation test benching.
At the "user interface" level I have a process called "main" running inside the CPU model which calls my predefined "instruction set" like this:
cpu_read(address, read_result);
cpu_write(address, write_data);
etc.
I bundle groups of those calls up into higher level procedures like
configure_communication_bus;
clear_all_packet_counters;
etc.
At the next layer, these generic functions call a more hardware-specific version which knows the interface timing for the design,
and those procedures then use an input record and an output record to connect to the hardware module ports and waggle the CPU bus signals as required.
cpu_read calls hardware_cpu_read(cpu_input_record, cpu_output_record, address, read_result);
Something like this:
procedure cpu_read (address     : in  std_logic_vector(15 downto 0);
                    read_result : out std_logic_vector(31 downto 0)) is
begin
  hardware_cpu_read(cpu_input_record, cpu_output_record, address, read_result);
end procedure;
The cpu_input_record and cpu_output_record are declared as signals of type nnn_record in the cpu model vhdl file.
So this is all working, but every single one of these procedures is stored in the CPU VHDL module file, all in the procedure declaration section, so that they are all in the same scope.
If I share the model with team members they will need to add their own testing subroutines, and those also all end up in the same location in the file; moreover, their simulation test code has to go into the "main" process along with mine.
I'd rather link in various tests from outside the model, and keep only model-specific procedures in the model file.
Ironically, I can push the lowest-level hardware procedures out to a package and call them from within the "main" process, but the higher-level procedures can't be put into that package (or any other package) because they don't have access to cpu_input_record and cpu_output_record.
I feel like there must be a simple way to clean up this code and make it modular, and I'm just missing something obvious.
I don't really think making a command interpreter and loading my test code into a behavioral ROM is the right way to go, by the way. Nor is fighting with the simulator interface to connect up a C program, though I may break down and try that.
Quick sketch of an answer (to the question I think you are asking! :-) though I may be off-beam...
To move the BFM subprograms into a reusable package, they need to be independent of the execution scope - that usually means a long parameter list for each of them. So using them in a testbench quickly gets tedious compared with the parameterless (or parameter-lite) versions you have now.
The usual workaround is to implement the BFM in a package, with long parameter lists.
Then write parameter-lite local equivalents (wrappers) in the execution scope, which simply call the package versions supplying all the parameters explicitly.
This is just boilerplate - not pretty but it does allow you to move the BFM into a package. These wrappers can be local to the testbench, to a process within it, or even to a subprogram within that process.
(The parameter types can be records for tidiness: these are probably declared in a third package, shared between the BFM, the TB, and the synthesisable device under test...)
Thanks to overloading, there is no ambiguity between the local and BFM package versions, so the actual testbench remains as simple as possible.
Example wrapper (shown as a procedure rather than a function, since it has to drive testbench signals; the formal is renamed wait_sig here because wait is a reserved word in VHDL):
procedure cpu_read (address     : in  unsigned;
                    read_result : out slv_32) is
begin
  BFM_pack.cpu_read (
    address     => address,
    read_result => read_result,
    rd_data_bus => tb_rd_data_bus,
    wait_sig    => tb_wait_signal,
    oe          => tb_mem_oe
    -- ditto for all the signals, constants and variables it needs from the tb_ scope
  );
end procedure cpu_read;
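And a sketch of the matching package side, where everything arrives through the parameter list (type and signal names are illustrative; slv_32 is assumed to come from a shared package, along with the usual ieee declarations):

package BFM_pack is
  -- Full-fat version: no dependency on any particular testbench's scope,
  -- because every signal it touches comes in as a parameter.
  procedure cpu_read (
    address            : in  unsigned;
    read_result        : out slv_32;
    signal rd_data_bus : in  slv_32;
    signal wait_sig    : in  std_logic;
    signal oe          : out std_logic
  );
end package BFM_pack;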
Currently your test procedures require two extra signals, cpu_input_record and cpu_output_record. This is not so bad. It is not uncommon to just have these on all procedures that interact with the CPU and be done with it. So use hardware_cpu_read rather than cpu_read. Add cpu_input_record and cpu_output_record to your configure_communication_bus and clear_all_packet_counters procedures and be done. Perhaps choose shorter names.
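Concretely, that might look like this (a sketch: hardware_cpu_write and the inout modes are my assumptions, and nnn_record stands for the record types mentioned above):

procedure configure_communication_bus (
  signal cpu_input_record  : inout nnn_record;
  signal cpu_output_record : inout nnn_record
) is
begin
  -- Each step just forwards the records to the hardware-specific layer.
  hardware_cpu_write(cpu_input_record, cpu_output_record, x"0010", x"000000FF");
  hardware_cpu_write(cpu_input_record, cpu_output_record, x"0014", x"00000001");
end procedure;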
I take a similar approach, except I use only one record with resolved elements. To make this work, you need to initialize the record so that all elements are non-driving (i.e. 'Z' for std_logic). To make this more flexible, I have created resolution functions for integer, time, and real. However, this only saves you one signal, so it is not a huge win - perhaps halfway to where you think you want to be - and it is more work than what you are doing.
For VHDL-201X, we are working on syntax to allow parameters/ports to map automatically to an identically named signal. This will get you to where you want to be with any of the approaches (yours, mine, or Brian's) without the extra wrapper subprogram. It is posted here: http://www.eda.org/twiki/bin/view.cgi/P1076/ImplicitConnections. Given this, I would add the two records to your procedures and call it good enough for now.
Once you get by this problem, you seem to also be asking is how do I write separate tests using the same testbench. For this I use multiple architectures - I like to think of these as a Factory Class for concurrent code. To make this feasible, I separate the stimulus generation code from the rest of the testbench (typically: netlist connections and clock). My presentation, "VHDL Testbench Techniques that Leapfrog SystemVerilog", has an overview of this architecture along with a number of other goodies. It is available at: http://www.synthworks.com/papers/index.htm
You're definitely on the right track; in fact I have a variant of what you describe.
The catch is that I then build up whole subroutines out of the "parameter-light" procedures, and those subroutines are what I want to put in a package to share and reuse. The problem is that any procedure pushed out to a package can't call the parameter-light procedures back in the main VHDL file.
So what happens is we have one main VHDL file with all the common CPU hardware setup routines, and every designer's test code, all in the same VHDL file.
Long story short, putting our test subroutines into separate files is really what I was hoping for.

How to preserve data between executions of program

I am running a Perl script on an HP-UX box. The script executes every 15 minutes and needs to compare its results with the results from the previous execution.
I will need to store two variables (IsOccuring and ErrorCount) between the executions. What is the best way to do this?
Edit clarification:
It only compares the most recent execution to the current execution.
It doesn't matter if the value is lost between reboots.
And touching the filesystem is pretty much off limits.
If you can't touch the file system, try using a shared memory segment. There are helper modules for that like IPC::ShareLite, or you can use the shmget and related functions directly.
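For instance, a minimal sketch with IPC::ShareLite (the key is arbitrary, but every execution must agree on it):

use strict;
use warnings;
use IPC::ShareLite;

# Values computed by this run (placeholders for the real checks).
my $is_occurring = 1;
my $error_count  = 3;

# Attach to (or create) a shared memory segment under an agreed key.
my $share = IPC::ShareLite->new(
    -key     => 1971,   # arbitrary; must match across executions
    -create  => 'yes',
    -destroy => 'no',   # keep the segment alive between executions
) or die "Cannot attach shared memory: $!";

# Fetch the previous run's values (defaults on the very first run),
# then store this run's values for next time.
my ($prev_occurring, $prev_errors) = split /,/, ($share->fetch || '0,0');
$share->store("$is_occurring,$error_count");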
You'll have to store them in a file. This sort of file is often kept in /tmp, but any place where the user running the cron job has access would do. Make sure your script can handle the case where the file is missing.
You could create a separate process running a "remember stuff" service over your choice of IPC mechanism. This sounds like a rather tortured solution to "I don't want to touch the disk" but if it's important enough to offset a couple of days of development work (realistically, if you are new to IPC, and HP-SUX continues to live up to its name) then by all means read man perlipc for a start.
Does it have to be completely re-executed? Can you just have it running in a loop, sleeping for 15 minutes between iterations? Then you don't have to worry about saving the values externally; the program never stops.
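A sketch of that shape (trivial, but it shows where the state lives):

use strict;
use warnings;

# State survives between passes because the process never exits.
my ($is_occurring, $error_count) = (0, 0);
while (1) {
    # ... do the checks, comparing against $is_occurring / $error_count,
    #     then update them with this pass's results ...
    sleep 15 * 60;    # wait 15 minutes before the next pass
}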
I definitely think IPC is the way to go here.
I'd save off the data in a file. Then, inside the script I'd load the last results if the file exists.
Use the Storable module to serialize Perl data structures, save them anywhere you want, and deserialize them during the next script execution.
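A minimal sketch (the file path is arbitrary; note that, unlike the shared memory approach, this does touch the filesystem):

use strict;
use warnings;
use Storable qw(store retrieve);

my $state_file = '/tmp/monitor_state';   # arbitrary location

# Load the previous run's state, or defaults on the very first execution.
my $state = -e $state_file
    ? retrieve($state_file)
    : { IsOccuring => 0, ErrorCount => 0 };

# ... compare $state->{IsOccuring} and $state->{ErrorCount} with this run ...

# Persist this run's values for the next execution.
$state->{IsOccuring} = 1;
$state->{ErrorCount}++;
store($state, $state_file);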