How does one create constants in Tcl?

I have often seen the use of the set keyword in Tcl. It cannot be used to create a constant. How does one create a constant in Tcl which can then be used by other procedures?

Generally speaking, most of the uses of constants fall into several categories:
enumeration values
magical numbers
looping control factors
scaling factors
In Tcl, for the first case you'd usually just use the name instead of mapping it to an integer, with integer mappings only being applied in the cases that need them. Even bit sets can be handled that way, substituting a list of names for the set of bits (and the name being present in the list is equivalent to the bit being set). Tcl's C API has relevant functions for helping with this, specifically Tcl_GetIndexFromObj().
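For instance (a sketch; the names are made up), a bit set becomes a plain list of names, with an integer mapping applied only where one is actually needed:
set flags {bold italic}                ;# the "bits" are just names in a list
if {"bold" in $flags} {
    puts "bold is set"
}
set widths {thin 1 normal 2 thick 4}   ;# name -> integer, only where required
puts [dict get $widths thick]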
Magical values are usually best locked away close to the code that handles them. If I was interfacing to hardware, I'd not let the magic values appear at all at the script level (since I'd have the binding code written in C).
Looping control factors are often best represented as default values for procedure arguments, as they are things that you sometimes want to override. But they're often not as needed once custom control structures are available, which fit much more naturally into the Tcl style of working.
Scaling factors are the case where constants might be useful. I tend to simulate those by just using a global or namespace variable and plain old not assigning to it from elsewhere. I'd be quite interested in having code to allow constants (specifically variables that can't be assigned to) as a standard feature, but we don't have that right now.
Once those cases are covered, what remain tend to be unimportant constants. After all, there's almost no need to calculate the sizes of things for allocation and stuff like that, and things like positional binding in SQL statements are discouraged within TDBC in favour of binding by name (an awful lot easier to get right).
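For instance, named binding with TDBC looks roughly like this (a sketch; the SQLite database and table are hypothetical):
package require tdbc::sqlite3
tdbc::sqlite3::connection create db example.db
set stmt [db prepare {INSERT INTO points (x, y) VALUES (:x, :y)}]
$stmt execute [dict create x 3 y 4]    ;# bound by name, not by position
$stmt close
db close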
A simple way of making a constant is to put a write trace on a variable so that whenever it is written to, it is reset back to its constant value.
set CONSTANT 123
trace add variable CONSTANT write {apply {args {
    global CONSTANT
    # Reset to the constant value; write traces fire after the fact
    set CONSTANT 123
    # Make the write give an error
    error "may not change constant"
}}}
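With that trace in place, an attempted assignment fails and the value survives; a sketch of an interactive session:
% set CONSTANT 456
can't set "CONSTANT": may not change constant
% puts $CONSTANT
123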

What is the benefit of defining enumerators as a DUT?

As I understand it, the main goal of defining enumerators is to map a variable to a set of numbers and their corresponding strings.
We can define a variable a as an enum anywhere in the declaration section of our Program or Function Block like this:
a : (start, stop, operate);
though I don't know why we can't see that in the tabular view. But there is a bigger question:
What is the benefit of defining enumerators as a DUT?
There are 3 main benefits for me:
You can use the same enum in multiple function blocks
You can use TO_STRING on enums declared as DUTs (after enabling it with {attribute 'to_string'}); see the sketch after this list
You can use refactoring on names of each component, which is impossible with local enums
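For instance, a DUT enumeration with TO_STRING enabled might be declared like this (a sketch; the type name and members are made up):
{attribute 'to_string'}
TYPE E_MachineState :
(
    START,
    STOP,
    OPERATE
);
END_TYPE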
When defining an enum as a DUT it is available everywhere in your code (global scope).
This is helpful in many cases, but in general it is not good programming practice to have a lot of stuff available in the global scope.
Here is a bit of elaboration on the topic.
In addition to the above, one benefit is that if you are using an enumeration for something like FB states, you will be able to see the descriptive status name when the program is running (READING, WRITING, WAITING, ERROR, etc.).
You can see it in the variable declarations section, in-line with your code, or in the watch window. You don’t have to remember what status number was defined in your state machine.
This benefit comes with local enumerations or DUT (global) enumerations.
In addition to other good points already made, there is another big advantage to enumerations: you can use the enumeration as the type of a variable, and when you do that, the compiler will (if {attribute 'strict'} is used in the enumeration declaration, which it probably should be) refuse an assignment to that variable of a value that is not allowed by the enumeration.
In other words, you get rid of a whole class of failure modes where the variable ends up having an invalid value due to some coding mistake the compiler cannot catch.
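For instance (a sketch with hypothetical names), the strict variant rejects raw numeric assignments at compile time:
{attribute 'strict'}
TYPE E_Mode : (IDLE, RUNNING, FAULT); END_TYPE

VAR
    eMode : E_Mode;
END_VAR

eMode := RUNNING;   (* accepted: a declared enumeration value *)
eMode := 42;        (* rejected by the compiler because of {attribute 'strict'} *)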
It takes a trivial amount of time to create an enumeration, and it has benefits on many levels. I would say the real question is why not use them whenever a fixed list of meaningful values needs to be expressed.

Scala legacy code: how to access input parameters at different points in execution path?

I am working with a legacy Scala codebase, and as is always the case, modifying the code is quite difficult without touching many different parts.
One of my new requirements is to make several decisions based on some input parameters. The problem is that these decisions are to be made at various points along the execution. One option is to encapsulate all those parameters in a case class instance and pass it along, but that means I would have to modify multiple method signatures, and I want to avoid this approach as much as possible.
Another approach can be to create a global object containing all those input parameters and accessible from different points in the execution. Is it a good approach in Scala?
No, using global mutable variables to pass “hidden” parameters is not a good idea, not in Scala and not in any other programming language. It makes the code hard to understand and modify, because a function's behaviour now depends on which functions were invoked earlier. And it's extremely fragile, because you might forget to set one of those global parameters before invoking the function, which means that it will use whatever value was stored there before. This is the kind of thing that can appear to work for years, and then break when you modify a completely unrelated part of the program.
I can't stress this enough: do not use global mutable variables, period. The solution is to man up and change those method signatures. Depending on the details, dependency injection may or may not help in your particular case.
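For what it's worth, here is a minimal sketch (Scala 2 syntax, hypothetical names) of the case-class approach, using an implicit parameter so that each signature on the call path changes by only a single clause:
case class RunConfig(threshold: Int, verbose: Boolean)

object Pipeline {
  // each stage takes the config implicitly, so callers on the
  // execution path add one implicit clause instead of N parameters
  def stageOne(xs: List[Int])(implicit cfg: RunConfig): List[Int] =
    xs.filter(_ > cfg.threshold)

  def stageTwo(xs: List[Int])(implicit cfg: RunConfig): List[Int] = {
    if (cfg.verbose) println(s"stageTwo sees ${xs.size} items")
    xs.map(_ * 2)
  }

  def run(xs: List[Int])(implicit cfg: RunConfig): List[Int] =
    stageTwo(stageOne(xs))   // cfg flows through implicitly
}

object Main extends App {
  implicit val cfg: RunConfig = RunConfig(threshold = 2, verbose = true)
  println(Pipeline.run(List(1, 2, 3, 4)))  // List(6, 8)
}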

Does Swift have an equivalent to Rust's `uninitialized`?

The Rust standard library has the std::mem::uninitialized function, which allows you to manually create an undefined (and possibly invalid) value of any type. This essentially maps to LLVM's undef. I was wondering if Swift had an equivalent, as I haven't been able to find one skimming through the standard library documentation.
Motivation
The primary use for this is to bypass the compiler's normal memory initialization checks when they prove imprecise. For instance, you might want to initialize some members of a structure using methods (or in Swift's case, property setters), which the compiler usually would not allow you to do. Using dummy values works, but that's potentially less efficient (if the optimizer cannot prove that the dummy is meaningless).
In Rust, uninitialized values are treated as initialized by the initialization checker, but as uninitialized by LLVM, leading to more reliable optimization. Since the Swift compiler is also built atop LLVM, it seems like something the Swift compiler should also be able to support.
My Specific Use Case
I have a struct which encapsulates a set of bit fields compacted into one machine word (an unsigned integer type). The struct type provides a safe and convenient interface for reading and modifying the individual fields, through computed properties.
I also have an initializer that takes the relevant field values as parameters. I could initialize the underlying machine word using the same bitwise operations that the computed properties use (in which case I would be repeating myself), or I could initialize it simply by setting the values of the computed properties.
Currently, Swift does not support using computed properties before self has been fully initialized. It's also unlikely Swift will ever support this case, since the computed property setters modify the existing machine word, which would require it to already be initialized, at least as far as Swift is concerned. Only I know that all my setters, in concert, will fully initialize that machine word.
My current solution is to simply initialize the machine word to 0, and then run the setters. Since a constant 0 is trivially absorbed into the bitwise | chain that combines the fields, there's no CPU time lost, but that's not always going to be the case. This is the kind of situation where, in Rust, I would have used an uninitialized value to express my intent to the optimizer.
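For reference, the pattern described above might look roughly like this (a sketch; the field layout and names are made up):
struct StatusWord {
    private var bits: UInt32 = 0        // the dummy value discussed above

    var opcode: UInt32 {                // bits 0..7
        get { bits & 0xFF }
        set { bits = (bits & ~UInt32(0xFF)) | (newValue & 0xFF) }
    }

    var enabled: Bool {                 // bit 31
        get { bits & 0x8000_0000 != 0 }
        set {
            if newValue { bits |= 0x8000_0000 } else { bits &= ~UInt32(0x8000_0000) }
        }
    }

    init(opcode: UInt32, enabled: Bool) {
        // bits already holds its dummy 0, so Swift considers self
        // initialized and allows the computed-property setters to run
        self.opcode = opcode
        self.enabled = enabled
    }
}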

Progress 4GL - What's the benefit of placing variable declarations at the top of the procedure?

I've been doing Progress 4GL for 8 years, though it's not my main responsibility. I do C++ and Java a lot more. When programming in other languages it's suggested to keep the declaration close to the usage. With 4GL, however, I see people place the declarations at the top of the file. It's even in the coding standard.
I think placing them at the top of the file leads to a 'vertical separation' problem. In most other languages it's even suggested to do the assignment on the same line as the declaration.
The question is: why is it suggested to do so in 4GL? What's the benefit? I know that it's possible to place a declaration anywhere in the file, as long as it comes before the variable is used.
I think the answer is to do with scoping, or the lack of it, within Progress 4GL.
If you are used to Java, say, and read a Progress 4GL program, that looks like
DO:
    DEFINE VARIABLE x AS INTEGER INITIAL 4.
    DISPLAY x.
END.
then you wouldn't expect to be able to use this value of x anywhere else in the program, and you would expect that any changes made in the block wouldn't affect anything outside the block.
As I understand it, all Progress variables declared within the body of a program are scoped to the whole program, unless they are declared within an internal procedure or function, in which case they are scoped to the procedure or function.
(Incidentally, any default buffers [i.e. undeclared] you use within an internal procedure/function are scoped to the whole program, not just the procedure or function, so you need to be very careful to explicitly declare buffers in functions you intend to use recursively.)
I therefore think the convention of declaring variables at the beginning of a program is in order to reflect the fact that Progress will treat them as having been declared there, regardless of where you put the declaration.
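A small sketch of that behaviour: a variable defined inside a DO block is still visible after the block, because it is scoped to the whole program:
DO:
    DEFINE VARIABLE x AS INTEGER NO-UNDO INITIAL 4.
END.
/* still compiles: x is scoped to the program, not the DO block */
DISPLAY x.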
There is absolutely no benefit in scoping anything to the program as a whole when it could be scoped smaller.
Smaller scopes are easier to test, give less possibility of namespace conflict, and less opportunity for error.
Tightly scoped named buffers are especially useful when writing to the database because they eliminate the possibility of there ever being some other part of your code that uses the same buffer and causes a share-lock, i.e., this fails to compile:
do for b-customer transaction:
    find b-customer where .... exclusive...
    ...
end.
...
find b-customer...
On the other hand, procedures and functions (and include files...) that share scope with the main body of code are a major source of bugs, because when you pick up your variable or whatever, you can never be entirely certain where it has been...
All of this is just basic Structured Programming, of course. It's true for every language and has been accepted since the 70's.
The "reason" that you usually see variables defined at the top is simple. Habit. That is just how things were done in the bad old days.
A lot of old code, or code written by old fossils, is written that way. No matter the language.
Some languages (COBOL springs to mind) even formalized it.
Is there any advantage to such an approach?
Not especially. I guess you could argue "they are all in one place and easy to find" but that isn't very compelling.
"Habit" is actually more compelling ;) If you are working with a team that expects a certain style or in an application where a particular style is prevalent then you should think twice before unilaterally throwing out a new way of doing things - the confusion could be a bigger problem than the advantages gained.

"relative global" variables in matlab or other languages

I do scientific image processing in MATLAB, which requires solving weird optimization problems with many parameters that are used by an "experiment" function which has many different "helper functions" that use various subsets of the parameters. Passing these parameters around is a pain and I would like an elegant extensible solution.
My preferred solution would be a "relative global" variable - common to a "main" function's workspace, as well as any subfunctions that the main function calls and that specify they want to share that variable. But the relative global variable would not exist outside the function which declares it, and would disappear once the main function returns.
The code would look like this, except there would be many more experiment_helpers, each using the parameters in different ways:
function y = experiment(p1,p2,p3,...)
    % declare relative global variables.
    % these will not change value within this experiment function,
    % but this experiment will be reused several times in the calling function,
    % each time with different parameter values.
    relative global p1, p2, p3 ...
    % load and process some data based on parameters
    ...
    % call the helper + return
    y = experiment_helper(y,z);
end

function y = experiment_helper(y,z)
    relative global p1, p3
    % do some stuff with y, z, possibly using p1 and p3, but not changing either of them.
    ...
end
I realize that you can get the desired behavior in several other ways -- you could pass the parameters to the sub-functions called by the experiment, you could put the parameters in a parameter structure and pass them to the sub-functions, and so on. The first is horrible because I have to change all the subfunctions' arguments every time I want to change the use of parameters. The second is okay, but I have to include the structure prefix every time I want to use the variables.
I suppose the "proper" solution to my problem is to use options structures much like matlab does in its own optimization code. I just wonder if there isn't a slicker way that doesn't involve me typing "paramStruct.value" every time I want to use the parameter "value".
I realize that relative global variables could cause all sorts of debugging nightmares, but I don't think they would necessarily be worse than the ones caused by existing global variables. And they would at least have more locality than unqualified globals.
So how does a real programmer handle this problem? Is there an elegant, easy to create and maintain design for this kind of problem? Can it be done in matlab? If not, how would you do it in another language? (I always feel guilty about using matlab as it doesn't exactly encourage good design. I always want to switch to python but can't justify relearning things and migrating the code base -- unless it would make me a much faster and better programmer within a few weeks, and the matlab-specific tools, such as wavelet and optimization toolboxes, could quickly+easily be found or duplicated within python.)
No, I don't think Matlab has exactly what you are looking for.
The best answer depends on whether your primary concern is the amount of typing required to pass the parameters around, or memory usage as you pass around large datasets. There are a few approaches that I have taken, and they all have pros and cons.
1. Parameters in a structure. You've already mentioned this in your question, but I would like to reinforce it as an excellent answer in most circumstances. Instead of calling my parameter structure paramStruct, I usually just call it p. It is always locally scoped to the function I'm using, and I then need to type p.value instead of value. I believe the extra two characters are well worth the ability to easily pass around the complete set of parameters.
2. inputParser. I use the inputParser class a lot when designing functions that require a lot of inputs. This actually fits in well with (1), in that a structure of parameters can be passed in, or you can use 'param'/value pairs, and you can allow your function to define defaults. Very useful. (See the sketch after this list.)
3. Nested functions can help in limited cases, but only when all of your functions can be defined hierarchically. It limits code re-use if one of your inner functions could otherwise be generic enough to support multiple parent functions.
4. Pass by reference using handle classes. If your concern is memory, the new form of classes actually allows you to define a parameter structure that is passed by reference. The code looks something like this:
classdef paramStructure < handle
    properties (SetAccess = public, GetAccess = public)
        param1 = [];
        param2 = [];
        % ...
        param100 = [];
    end
end
Then you can create a new pass-by-reference structure like this
p = paramStructure;
And set values
p.param1 = randn(100,1);
As this is passed around, all passing is by reference, with the pros (less memory use, enables some coding styles) and cons (generally easier to make some kinds of mistakes) that implies.
5. Global variables. I try really hard to avoid these, but they are an option.
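As mentioned in (2), a minimal sketch of the inputParser approach (the parameter names and defaults are made up):
function y = experiment(varargin)
    ip = inputParser;
    ip.addParameter('p1', 1.0);     % defaults live in one place
    ip.addParameter('p2', 10);
    ip.parse(varargin{:});
    p = ip.Results;                 % a plain struct: p.p1, p.p2
    y = p.p1 * p.p2;
end

% called as, e.g.: y = experiment('p2', 42);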
What you're describing is exactly available in MATLAB using nested functions. They have their own workspace, but they also have access to the workspace of the parent function in which they are nested. So the parent function can define your parameters, and the nested functions will be able to see them as they have a shared scope.
It's a pretty elegant way of programming, and causes many fewer problems with debugging than globals. Recent versions of MATLAB even highlight shared variables in the editor in light blue, to help with this.
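For instance, the question's experiment/experiment_helper pair could be rewritten with a nested helper (a sketch; the actual computations are placeholders):
function y = experiment(p1, p2, p3)
    % nested functions see this workspace, so p1..p3 need not be passed
    z = p2 * ones(4,1);        % placeholder for "load and process some data"
    y = helper(z);

    function out = helper(z)
        % shares the parent workspace: p1 and p3 are visible here,
        % and disappear when experiment returns
        out = p1 * z + p3;
    end
end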