What is the benefit of defining enumerators as a DUT? - plc

As I understand it, the main goal of defining enumerators is to give a variable a fixed set of named values, each mapped to a number.
We can define a variable a as an inline enum anywhere in the declaration section of our Program or Function Block, like this:
a : (start, stop, operate);
though I don't know why we can't see that in the tabular view. But there is a bigger question:
What is the benefit of defining enumerators as a DUT?

There are 3 main benefits for me:
You can use the same enum in multiple function blocks
You can use TO_STRING on enums declared as DUTs, after enabling it with {attribute 'to_string'} (see the Infosys documentation and the example below this list)
You can use refactoring to rename each component, which is impossible with local enums
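
For illustration, a DUT enumeration with that attribute might look like this in CODESYS-style Structured Text (the type and member names here are invented, not taken from any particular project):

{attribute 'to_string'}
TYPE E_MachineState :
(
    START := 0,
    STOP,
    OPERATE
);
END_TYPE

(* In any program or function block that uses the DUT: *)
VAR
    eState : E_MachineState;
    sState : STRING;
END_VAR

sState := TO_STRING(eState); (* 'START', 'STOP' or 'OPERATE' rather than 0, 1 or 2 *)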

When defining an enum as a DUT it is available everywhere in your code (global scope).
This is helpful in many cases, but in general it is not good programming practice to have a lot of stuff available in the global scope.
Here is a bit of elaboration on the topic.

In addition to the above, one benefit is that if you are using an enumeration for something like FB states, you will be able to see the descriptive status name when the program is running (READING, WRITING, WAITING, ERROR, etc.).
You can see it in the variable declarations section, in-line with your code, or in the watch window. You don’t have to remember what status number was defined in your state machine.
This benefit comes with local enumerations or DUT (global) enumerations.

In addition to the other good points already made, there is another big advantage to enumerations: you can use the enumeration as the type of a variable, and when you do, the compiler will refuse to assign that variable a value that is not allowed by the enumeration (provided {attribute 'strict'} is used in the enumeration declaration, which it probably should be).
In other words, you get rid of a whole class of failure modes where the variable ends up having an invalid value due to some coding mistake the compiler cannot catch.
It takes a trivial amount of time to create an enumeration, and it has benefits on many levels. I would say the real question is why not use them whenever a fixed list of meaningful values needs to be expressed.
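
As a small, hedged sketch of that last point (the type and variable names are invented): with {attribute 'strict'} the compiler rejects any assignment of a value that is not a component of the enumeration.

{attribute 'strict'}
TYPE E_Command :
(
    IDLE := 0,
    RUN,
    HALT
);
END_TYPE

(* In a POU that uses the type: *)
VAR
    eCmd : E_Command;
END_VAR

eCmd := E_Command.RUN;  (* fine *)
eCmd := 7;              (* compile error under 'strict': 7 is not a component of E_Command *)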

Related

Scala legacy code: how to access input parameters at different points in execution path?

I am working with a legacy Scala codebase, and as is always the case with legacy code, modifying it is difficult without touching many different parts.
One of my new requirements is to make several decisions based on some input parameters. The problem is that these decisions have to be made at various points along the execution path. One option is to encapsulate all those parameters in a case class instance and pass it along, but that means I would have to modify multiple method signatures, and I want to avoid that as much as possible.
Another approach can be to create a global object containing all those input parameters and accessible from different points in the execution. Is it a good approach in Scala?
No, using global mutable variables to pass “hidden” parameters is not a good idea, not in Scala and not in any other programming language. It makes the code hard to understand and modify, because a function's behaviour will now depend on which functions were invoked earlier. And it's extremely fragile, because you might forget setting one of those global parameters before invoking the function, which means that it will use whatever value was stored there before. This is the kind of thing that can appear to work for years, and then break when you modify a completely unrelated part of the program.
I can't stress this enough: do not use global mutable variables, period. The solution is to man up and change those method signatures. Depending on the details, dependency injection may or may not help in your particular case.
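
To make the "change the signatures" route concrete, here is a hedged Scala sketch (all names are invented): the input parameters are gathered into one case class and passed as a single value; where many layers merely forward it, an implicit parameter keeps the dependency visible in the signature while cutting the boilerplate.

// Hypothetical parameter object for the decision inputs
final case class RunConfig(region: String, dryRun: Boolean, maxRetries: Int)

object Pipeline {
  // Passing the config explicitly keeps the dependency obvious at the call site...
  def stepA(data: List[Int], cfg: RunConfig): List[Int] =
    if (cfg.dryRun) data else data.map(_ * 2)

  // ...or thread it as an implicit parameter so layers that only forward it
  // do not have to mention it in every call.
  def stepB(data: List[Int])(implicit cfg: RunConfig): List[Int] =
    data.take(cfg.maxRetries)

  def run(data: List[Int])(implicit cfg: RunConfig): List[Int] =
    stepB(stepA(data, cfg))
}

object Main extends App {
  implicit val cfg: RunConfig = RunConfig("eu-west-1", dryRun = false, maxRetries = 3)
  println(Pipeline.run(List(1, 2, 3, 4)))  // List(2, 4, 6)
}

Either way the dependency stays in the type signature instead of in hidden global state.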

How does one create constants in Tcl?

I have often seen the set command used in Tcl. It cannot be used to create constants. How does one create a constant in Tcl that can then be used by other procedures?
Generally speaking, most of the uses of constants fall into several categories:
enumeration values
magical numbers
looping control factors
scaling factors
In Tcl, for the first case you'd usually just use the name instead of mapping it to an integer, with integer mappings only being applied in the cases that need them. Even bit sets can be handled that way, substituting a list of names for the set of bits (and the name being present in the list is equivalent to the bit being set). Tcl's C API has relevant functions for helping with this, specifically Tcl_GetIndexFromObj().
Magical values are usually best locked away close to the code that handles them. If I was interfacing to hardware, I'd not let the magic values appear at all at the script level (since I'd have the binding code written in C).
Looping control factors are often best represented as default values for procedure arguments, as they are things that you sometimes want to override. They're often not as necessary once custom control structures are available, and those fit much more naturally into the Tcl style of working.
Scaling factors are the case where constants might be useful. I tend to simulate those by just using a global or namespace variable and plain old not assigning to it from elsewhere. I'd be quite interested in having code to allow constants (specifically variables that can't be assigned to) as a standard feature, but we don't have that right now.
Once those cases are covered, what remain tend to be unimportant constants. After all, there's almost no need to calculate the sizes of things for allocation and stuff like that, and things like positional binding in SQL statements are discouraged within TDBC in favour of binding by name (an awful lot easier to get right).
A simple way of making a constant is to put a write trace on a variable so that whenever it is written to, it is reset back to its constant value.
set CONSTANT 123
trace add variable CONSTANT write {apply {args {
    global CONSTANT
    # Reset to the constant value; write traces are after the fact
    set CONSTANT 123
    # Make the write give an error
    error "may not change constant"
}}}
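
With that trace installed, an attempted write fails and the value is put back, something like this:

# Demonstration (assumes the trace above has been installed)
if {[catch {set CONSTANT 456} msg]} {
    puts "write failed: $msg"    ;# write failed: can't set "CONSTANT": may not change constant
}
puts $CONSTANT                   ;# still prints 123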

Does Swift have an equivalent to Rust's `uninitialized`?

The Rust standard library has the std::mem::uninitialized function, which allows you to manually create an undefined (and possibly invalid) value of any type. This essentially maps to LLVM's undef. I was wondering if Swift had an equivalent, as I haven't been able to find one skimming through the standard library documentation.
Motivation
The primary use for this is to bypass the compiler's normal memory initialization checks when they prove imprecise. For instance, you might want to initialize some members of a structure using methods (or in Swift's case, property setters), which the compiler usually would not allow you to do. Using dummy values works, but that's potentially less efficient (if the optimizer cannot prove that the dummy is meaningless).
In Rust, uninitialized values are treated as initialized by the initialization checker, but as uninitialized by LLVM, leading to more reliable optimization. Since the Swift compiler is also built atop LLVM, it seems like something the Swift compiler should also be able to support.
My Specific Use Case
I have a struct which encapsulates a set of bit fields compacted into one machine word (an unsigned integer type). The struct type provides a safe and convenient interface for reading and modifying the individual fields, through computed properties.
I also have an initializer that takes the relevant field values as parameters. I could initialize the underlying machine word using the same bitwise operations that the computed properties use (in which case I would be repeating myself), or I could initialize it simply by setting the values of the computed properties.
Currently, Swift does not support using computed properties before self has been fully initialized. It's also unlikely Swift will ever support this case, since the computed property setters modify the existing machine word, which would require it to already be initialized, at least as far as Swift is concerned. Only I know that all my setters, in concert, will fully initialize that machine word.
My current solution is to simply initialize the machine word to 0 and then run the setters. Since a constant 0 is trivially absorbed into the bitwise | chain that combines the fields, there's no CPU time lost here, but that won't always be the case. This is the kind of situation where, in Rust, I would have used an uninitialized value to express my intent to the optimizer.
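
For reference, a minimal Swift sketch of the pattern described above (the type, bit layout and property names are invented for illustration):

struct StatusWord {
    private var bits: UInt   // the underlying machine word

    var isReady: Bool {
        get { (bits & 0b01) != 0 }
        set { if newValue { bits |= 0b01 } else { bits &= ~0b01 } }
    }

    var isBusy: Bool {
        get { (bits & 0b10) != 0 }
        set { if newValue { bits |= 0b10 } else { bits &= ~0b10 } }
    }

    init(isReady: Bool, isBusy: Bool) {
        // Swift insists that `bits` is initialized before any setter can run,
        // so a dummy 0 goes in first; the optimizer can usually fold the constant
        // into the later bitwise operations, but there is no supported way to say
        // "leave this uninitialized" the way Rust's mem::uninitialized did.
        bits = 0
        self.isReady = isReady
        self.isBusy = isBusy
    }
}

let w = StatusWord(isReady: true, isBusy: false)
print(w.isReady, w.isBusy)   // true false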

Where to define typecast to struct in MATLAB OOP?

In the MATLAB OOP framework, it can be useful to cast an object to a struct, i.e., define a function that takes an object and returns a struct with equivalent fields.
What is the appropriate place to do this? I can think of several options:
Build a separate converter object that takes care of conversions between various classes
Add a function struct to the class that does the conversion to struct, and make the constructor accept structs
Neither option seems very elegant: the first means that logic about the class itself is moved to another class, while the second encourages users to call the struct function on any object, which in general gives a warning (structOnObject).
Are there alternatives?
Personally I'd go with the second option, and not worry about provoking users to call struct on other classes; you can only worry about your own code, not that of a third-party, even if the third party is MathWorks. In any case, if they do start to call struct on an arbitrary class, it's only a warning; nothing actually dangerous is likely to happen, it's just not a good practice.
But if you're concerned about that, you can always call your converter method toStruct rather than struct. Or perhaps the best (although slightly more complex) way might be to overload cast for your class, accepting and handling the option 'struct', and passing any other option through to builtin('cast', ...).
PS The title of your question refers to typecasting, but what you're after here is casting. In MATLAB, typecasting is a different operation, involving taking the exact bits of one type and reinterpreting them as bits of another type (possibly an array of the output type). See doc cast and doc typecast for more information on the distinction.
The second option sounds much better to me.
A quick and dirty way to get rid of the warning would be disabling it by calling
warning('off', 'MATLAB:structOnObject')
at the start of your program.
The solutions provided in Sam Roberts' answer are however much cleaner. I personally would go for the toStruct() method.
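
For what it's worth, a hedged sketch of option 2 (the class and property names are invented): the class offers an explicit toStruct method, and its constructor accepts an equivalent struct.

classdef Point
    properties
        x = 0
        y = 0
    end
    methods
        function obj = Point(s)
            % Constructor optionally accepts an equivalent struct
            if nargin > 0 && isstruct(s)
                obj.x = s.x;
                obj.y = s.y;
            end
        end
        function s = toStruct(obj)
            % Explicit converter; calling this never triggers structOnObject
            s = struct('x', obj.x, 'y', obj.y);
        end
    end
end

A round trip is then p2 = Point(toStruct(p1)).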

Closures and dynamic scope?

I think I understand why there is a danger in allowing closures in a language using dynamic scope. That is, it seems you will be able to close over the variable OK, but when trying to read it you will only get the value at the top of the global stack. This might be dangerous if other functions use the same name in the interim.
Have I missed some other subtlety?
I realize I'm years late answering this, but I just ran across this question while doing a web search and I wanted to correct some misinformation that is posted here.
"Closure" just means a callable object that contains both code and an environment that provides bindings for free variables within that code. That environment is usually a lexical environment, but there is no technical reason why it can't be a dynamic environment.
The trick is to close the code over the environment and not the particular values. This is what Lisp 1.5 did, and also what MACLisp did for "downward funargs."
You can see how Lisp 1.5 did this by reading the Lisp 1.5 manual at http://www.softwarepreservation.org/projects/LISP/book
Pay particular attention in Appendix B to how eval handles FUNCTION and how apply handles FUNARG.
You can get the basic flavor of programming using dynamic closures from http://c2.com/cgi/wiki?DynamicClosure
You can get an in depth introduction to the implementation issues from ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-199.pdf
Modern dynamically scoped languages generally use shallow binding, where the current value of each variable is kept in one global location, and function calls save old values away on the stack. One way of implementing dynamic closures with shallow binding is described at http://www.pipeline.com/~hbaker1/ShallowBinding.html
Yes, that's the basic problem. The term "closure" is short for "lexical closure", though, which by definition captures its lexical scope. I'd call the things in a dynamically scoped language something else, like LAMBDA. Lambdas are perfectly safe in a dynamically scoped language as long as you don't try to return them.
(For an interesting thought experiment, compare the problem of returning a dynamically scoped lambda in Emacs Lisp to the problem of returning a reference to a stack-allocated variable in C, and how both are impossible in Scheme.)
A long time ago, back when languages with dynamic scope were much less rare than today, this was known as the funargs problem. The problem you mention is the upward funargs problem.
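
To make the upward funargs problem concrete, here is a minimal Emacs Lisp sketch (assuming classic dynamic binding, i.e. lexical-binding set to nil; the function names are invented):

(defun make-adder (n)
  ;; No environment is captured; `n' will be looked up dynamically
  ;; whenever the lambda is eventually called.
  (lambda (x) (+ x n)))

(funcall (make-adder 5) 1)
;; Signals "Symbol's value as variable is void: n", because the dynamic
;; binding of `n' was popped when `make-adder' returned (upward funarg).
;; Calling the lambda before `make-adder' returns (a downward funarg)
;; would still see `n', which is the case Lisp 1.5 and MACLisp handled.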