I have some Makefiles whose behavior is conditional on the existence of certain variables, using ifdef to check for them. It is a bit annoying that I have to actually set the variable equal to something on the command line: make all DEBUG does not trigger the ifdef, but make all DEBUG=1 does. Perhaps I am just using the C preprocessor approach where it does not belong.
Q1) Is it possible to specify a variable on the command line to be empty, without even more characters?
Q2) What is the preferred approach for such boolean parameters to make?
I assume you mean make all DEBUG= here, right? Without the =, make will consider DEBUG to be a target to build, not a variable assignment.
The manual specifies that a variable with a non-empty value causes ifdef to return true, while a variable that does not exist, or exists but contains the empty string, causes ifdef to return false. Note that ifdef does not expand the variable; it just tests whether the variable has any value.
You can use the $(origin ...) function to test whether a variable is really not defined at all, or is defined but empty, like this:
ifeq ($(origin DEBUG),undefined)
$(info Variable DEBUG is not defined)
else
$(info Variable DEBUG is defined)
endif
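For illustration, here is a hypothetical session (assuming the snippet above sits in a makefile that also contains at least one ordinary target, whose own output is omitted):
$ make
Variable DEBUG is not defined
$ make DEBUG=
Variable DEBUG is defined
$ make DEBUG=1
Variable DEBUG is defined
So regarding Q1, make all DEBUG= is about as short as it gets: the variable becomes defined (its origin is "command line") yet empty, so ifdef still treats it as false while $(origin) can see it.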
As @MadScientist explained a few minutes ago,
make all DEBUG
adds a target DEBUG to your make. Luckily, there is a workaround:
ifneq (,$(filter DEBUG,$(MAKECMDGOALS)))
DEBUG:=1 # or do whatever you want
DEBUG: all; @echo -n
endif
It is essential to supply a dummy recipe (e.g. echo nothing, as above) for the dummy target. Also, either put this statement at the bottom of your makefile, or specify the prerequisite target explicitly as in the example; otherwise, make may wrongly choose the DEBUG target instead of all.
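With the workaround in place, a hypothetical session might look like this (assuming all is the first target and prints "building"):
$ make all DEBUG    # all is built with DEBUG:=1 in effect
building
$ make DEBUG        # also works: the dummy rule pulls in all as a prerequisite
building
$ make all          # DEBUG stays undefined
building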
Note that this is not the preferred approach; the convention is to use an assignment such as V=1 to turn echoing on.
Another caveat is that make processes the command-line goals sequentially; e.g. make A B will first take care of the A target, then of the B target, whether these targets are independent or depend on one another. Therefore writing make DEBUG PERFECT and make PERFECT DEBUG could produce different results. The order of variable assignments, however, is irrelevant, so make PERFECT=1 DEBUG=1 and make DEBUG=1 PERFECT=1 are equivalent.
Others have already clarified why you can't use just DEBUG, but I would like to add something.
You can use a shell script before running make that sets up all the variables you need. For example, in a Linux shell it would look like this:
$ source debug_setup.sh
$ make all
Make is starting...
Debug is enabled
...
where debug_setup.sh contains all environment variables you need to set up:
export DEBUG=1
export DEBUG_OPTION=some_option
This is nice since you can put comments there, comment out entries you don't need at the moment but would like to keep for the future, etc.
Then you can have several setup scripts that must/can be used as part of your standard routine. It all depends on how many variables you need to set up, how many sets of variables you would like to have, etc.
Note that it is a good idea to notify the user somehow which set of variables is selected.
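For instance, the setup script itself can announce which set it installs (a trivial sketch, reusing the variables above):
# debug_setup.sh
export DEBUG=1
export DEBUG_OPTION=some_option
echo "Debug set of variables selected (DEBUG=$DEBUG)"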
I am defining a variable at the beginning of my source code in MATLAB. Now I would like to know at which lines this variable affects something. In other words, I would like to see all lines in which that variable is read. This wish includes not only all accesses in the current function, but also possible accesses in sub-functions that use this variable as an input argument. That way, I can quickly see where a change to this variable has any influence.
Is there any possibility to do so in MATLAB? A graphical marking of the corresponding lines would be nice, but a command-line output might be even more practical.
You can always use "Find Files" to search for a certain keyword or expression. In my R2012a/Windows version it is in Edit > Find Files..., with the keyboard shortcut [CTRL] + [SHIFT] + [F].
The result will be a list of lines where the searched string is found, in all the files found in the specified folder. Please check out the options in the search dialog for more details and flexibility.
Later edit: thanks to @zinjaai, I noticed that @tc88 asked that this tool also track the effect of the variable inside the functions/sub-functions. I think this is:
1. Very difficult to achieve. The problem of running through all the possible values and branching on every possible conditional expression is... well, hard. I think it is halting-problem-hard.
2. In 90% of the cases, the assumption that the output of a function is influenced by its input holds. And since the input and the output are part of the same statement (assigning the result of a function), looking for where the variable is used as an argument should suffice to identify which output variables are affected.
3. There are perverse cases where functions alter arguments that are handle-type (because the argument is not copied, but referenced). This side effect breaks assumption 2, and is one of the main reasons for point 1. Outlining the cases in which these side effects take place is, again, hard, and it is better to assume that all of them are modified.
4. Some other cases are inherently undecidable, because they depend not on the computer's state, but on the state of the "outside world". Example: suppose one calls uigetfile. The function returns a char type when the user selects a file, and a double type when the user chooses not to select a file. Obviously the two cases will be treated differently. How could you know which variables are created/modified before the user decides?
In conclusion: I think that human intuition, plus the MATLAB Debugger (for run time), plus Find Files (to quickly search where a variable is used) and depfun (to quickly identify function dependencies), is way cheaper. But I would like to be wrong. :-)
I've got a rather big and verbose section of a line-based configuration file. I'd like to use this section as a template (I am going to preconfigure this section, test it, and then replace the actual values with $(make)-style $(macros)), substituting the key parameters (very few of them, really) and effectively "cloning" this "template" with a few customized parameters into the working config file. Can make do this work for me?
Please bear with me; I'm truly a make layman and not even sure it is the right tool in this case.
An example
I'm preconfiguring and testing something like:
<section0>
contains a lot of settings
which were tested and should
be exactly the same in every copy
except marked with trailing0
</section0>
I'm wondering whether, if I convert the tokens marked with the trailing zero above into macros:
<$(section)>
contains a lot of settings
which were tested and should
be exactly the same in every copy
except marked with $(trailing)
</$(section)>
... I could utilize make to produce clones of the premade configuration, slightly customized with my data in place of the macros:
<section42>
contains a lot of settings
which were tested and should
be exactly the same in every copy
except marked with trailing42
</section42>
<foo>
contains a lot of settings
which were tested and should
be exactly the same in every copy
except marked with bar
</foo>
Assuming "section42", "foo" and "trailing42", "bar" are substitutes for $(section), $(trailing) macros respectively.
You can use m4 preprocessor in your makefiles to do exactly that: expand macros in template files:
M4 can be called a “template language”, a “macro language” or a “preprocessor language”. The name “m4” also refers to the program which processes texts in this language: this “preprocessor” or “macro processor” takes as input an m4 template and sends this to the output, after acting on any embedded directives, called macros.
Create a file named section.m4:
$ cat section.m4
<section0>
contains a lot of settings
which were tested and should
be exactly the same in every copy
except marked with trailing0
</section0>
And have a rule in your makefile to expand macros in that template to produce section.cfg:
section.cfg : section.m4
    m4 -Dsection0=foo -Dtrailing0=bar $< >$@
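Running make should then produce something like this (output shown for the -D values above):
$ make section.cfg
m4 -Dsection0=foo -Dtrailing0=bar section.m4 >section.cfg
$ cat section.cfg
<foo>
contains a lot of settings
which were tested and should
be exactly the same in every copy
except marked with bar
</foo>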
I want to undefine a variable in a Makefile by passing a command to make. Is this possible? man make didn't help much.
What I want to do is the following:
I want to compile a port on FreeBSD. This port is marked as broken. Since I don't have the permissions to change the Makefile, I am looking for a possibility to undefine the BROKEN variable.
Edit:
In the Makefile is:
BROKEN= does not link
And I want to unset/undefine BROKEN, because otherwise the Makefile is not processed further. This is not related to compiler flags so far.
I don't think you can actually undefine a variable.
However, if 'empty' is good enough for you:
make -e variable=''
You can do so. Here's how, assuming that undef is the name of the target which you want to use to undefine the variable.
FOO:=foo
ifeq (undef,$(filter undef,$(MAKECMDGOALS)))
override undefine FOO
endif
.PHONY: all
all:
    echo $(FOO) '($(origin FOO))'
.PHONY: undef
undef: $(if $(filter-out undef,$(MAKECMDGOALS)),,$(.DEFAULT_GOAL))
Explanation of make elements used:
MAKECMDGOALS is the predefined read-only variable containing the goals with which make was called. We use this to check whether make was called with undef.
.DEFAULT_GOAL is the predefined variable containing the default goal. If it is not set explicitly, it is set to the first goal, which in this case is all.
ifeq ... endif allows conditional parts in a Makefile.
The $(filter pattern,list) function returns all elements of list which are matched by pattern. So, if make was called with undef as one of its goals, $(filter undef,$(MAKECMDGOALS)) will be undef; otherwise it will be the empty string.
The $(filter-out pattern,list) function is the opposite of $(filter pattern,list): it returns all elements of list which are not matched by pattern. So, if make was called with undef only, $(filter-out undef,$(MAKECMDGOALS)) will be the empty string; otherwise it will be the list of all goals given on the command line, excluding undef.
undefine undefines a variable.
override changes a variable even if it was passed on the command line. You need to decide depending on your use case whether or not using override makes sense.
The $(if cond,then[,else]) function returns then if cond is true, otherwise else (or the empty string if there is no else). A non-empty string is true; an empty string is false.
So, the Makefile undefines FOO in case undef was given on the command line.
The "magic" dependency of undef makes sure that if undef was the only goal given on the command line, it will depend on and therefore execute the default goal, which in this example would be all. This is to make sure that make undef behaves like make except that it undefines the variable.
WARNING: The behavior of such Makefiles can be confusing. Imagine a Makefile that uses this mechanism to exclude CPPFLAGS+=-DNDEBUG in case debug was given. It is unexpected that the all goal produces different output in make debug all than in plain make all; the expectation is that make debug all merely creates additional output, and, above all, that make all always behaves the same no matter what other goals are present on the command line. It is better to implement such switches using a configure-style mechanism.
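A minimal sketch of such a configure-style mechanism (the file names are invented for illustration): a tiny script records the choice once, and the Makefile merely includes the result, so make all always behaves the same until you reconfigure:
$ cat configure.sh
#!/bin/sh
# record the build flavor once; the Makefile picks it up via include
if [ "$1" = debug ]; then
    echo 'CPPFLAGS += -g'       > config.mk
else
    echo 'CPPFLAGS += -DNDEBUG' > config.mk
fi
$ head -1 Makefile
-include config.mk   # harmless if configure.sh has not been run yet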
(Source: From my unpublished book on GNU make.)
http://www.gnu.org/software/make/manual/make.html#Undefine-Directive
Supposedly you can
override undefine VARIABLE
Not sure what compiler you're using (I know recent BSD uses clang by default), but with gcc you can undefine previously defined symbols using -U.
Edit: apparently I misread your question. So what you want is an equivalent of make variable=X that will undefine rather than set a value? I don't believe that is possible.
If you can't modify the existing makefile, can you make your own copy and call make with that one instead?
There's a -DIGNORE_BROKEN or similar; please check the FreeBSD Porter's Handbook, it should hopefully have info on this.
Why do I get this error from the code below?
foo:=foo1
ifneq ($(foo),)
override undefine foo
endif
Makefile:5: invalid override directive
Most of the applications we developers write need to be externally parameterized at startup. We pass file paths, pipe names, TCP/IP addresses etc. So far I've been using the command line to pass these to the application being launched. I had to parse the command line in main and direct the arguments to where they're needed, which is of course a good design, but is hard to maintain for a large number of arguments. Recently I've decided to use the environment variables mechanism. They are global and accessible from anywhere, which is less elegant from an architectural point of view, but limits the amount of code.
These are my first (and possibly quite shallow) impressions of both strategies, but I'd like to hear the opinions of more experienced developers: what are the ups and downs of using environment variables and command line arguments to pass arguments to a process? I'd like to take into account the following matters:
design quality (flexibility/maintainability),
memory constraints,
solution portability.
Remarks:
Ad. 1. This is the main aspect I'm interested in.
Ad. 2. This is a bit pragmatic. I know of some limitations on Windows, which are currently huge (over 32 kB for both the command line and the environment block). I guess this is not an issue though, since you should just use a file to pass tons of arguments if you need to.
Ad. 3. I know almost nothing of Unix, so I'm not sure whether both strategies are similarly usable there as on Windows. Elaborate on this if you please.
1) I would recommend avoiding environmental variables as much as possible.
Pros of environmental variables
easy to use because they're visible from anywhere. If lots of independent programs need a piece of information, this approach is a whole lot more convenient.
Cons of environmental variables
hard to use correctly because they're visible (delete-able, set-able) from anywhere. If I install a new program that relies on environmental variables, are they going to stomp on my existing ones? Did I inadvertently screw up my environmental variables when I was monkeying around yesterday?
My opinion
use command-line arguments for those arguments which are most likely to be different for each individual invocation of the program (e.g. n for a program which calculates n!)
use config files for arguments which a user might reasonably want to change, but not very often (e.g. display size when the window pops up)
use environmental variables sparingly -- preferably only for arguments which are expected not to change (e.g. the location of the Python interpreter)
your point "They are global and accessible from anywhere, which is less elegant from architectural point of view, but limits the amount of code" reminds me of the justifications for the use of global variables ;)
My scars from experiencing first-hand the horrors of environmental variable overuse
two programs we need at work, which can't run on the same computer at the same time due to environmental clashes
multiple versions of programs with the same name but different bugs -- this brought an entire workshop to its knees for hours because the location of the program was pulled from the environment and was (silently, subtly) wrong.
2) Limits
If I were pushing the limits of either what the command line can hold, or what the environment can handle, I would refactor immediately.
I've used JSON in the past for a command-line application which needed a lot of parameters. It was very convenient to be able to use dictionaries and lists, along with strings and numbers. The application only took a couple of command line args, one of which was the location of the JSON file.
Advantages of this approach
didn't have to write a lot of (painful) code to interact with a CLI library -- it can be a pain to get many of the common libraries to enforce complicated constraints (by 'complicated' I mean more complex than checking for a specific key or alternation between a set of keys)
don't have to worry about the CLI library's requirements for the order of arguments -- just use a JSON object!
easy to represent complicated data (answering What won't fit into command line parameters?) such as lists
easy to use the data from other applications -- both to create and to parse programmatically
easy to accommodate future extensions
Note: I want to distinguish this from the .config-file approach -- this is not for storing user configuration. Maybe I should call this the 'command-line parameter-file' approach, because I use it for a program that needs lots of values that don't fit well on the command line.
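A minimal sketch of this parameter-file approach (the application name and the keys are made up):
$ cat params.json
{
  "inputs": ["a.dat", "b.dat"],
  "threshold": 0.75,
  "tags": {"a.dat": "control", "b.dat": "treated"}
}
$ myapp --params params.json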
3) Solution portability: I don't know a whole lot about the differences between Mac, PC, and Linux with regard to environmental variables and command line arguments, but I can tell you:
all three have support for environmental variables
they all support command line arguments
Yes, I know -- it wasn't very helpful. I'm sorry. But the key point is that you can expect a reasonable solution to be portable, although you would definitely want to verify this for your programs (for example, are command line args case sensitive on any platforms? on all platforms? I don't know).
One last point:
As Tomasz mentioned, it shouldn't matter to most of the application where the parameters came from.
You should abstract reading parameters using the Strategy pattern: create an abstraction named ConfigurationSource having a readConfig(key) -> value method (or returning some Configuration object/structure) with the following implementations:
CommandLineConfigurationSource
EnvironmentVariableConfigurationSource
WindowsFileConfigurationSource - loading from a configuration file from C:/Documents and Settings...
WindowsRegistryConfigurationSource
NetworkConfigrationSource
UnixFileConfigurationSource - loading from a configuration file from /home/user/...
DefaultConfigurationSource - defaults
...
You can also use the Chain of Responsibility pattern to chain sources in various configurations: if a command line argument is not supplied, try the environment variable, and if everything else fails, return defaults.
Ad 1. This approach not only allows you to abstract reading configuration, but also lets you easily change the underlying mechanism without any effect on client code. You can also use several sources at once, falling back or gathering configuration from different sources.
Ad 2. Just choose whichever implementation is suitable. Of course, some configuration entries won't fit into, for instance, command line arguments.
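Even a plain shell script can express such a chain in one line (a sketch; MYAPP_PORT and the default value are invented for illustration):
# first CLI argument, else environment variable, else built-in default
port="${1:-${MYAPP_PORT:-8080}}"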
Ad 3. If some implementations aren't portable, have two, one silently ignored/skipped when not suitable for a given system.
I think this question has been answered rather well already, but I feel it deserves a 2018 update. An unmentioned benefit of environment variables is that they generally require less boilerplate code to work with, which makes for cleaner, more readable code. However, a major disadvantage is that they remove a layer of isolation between different applications running on the same machine. I think this is where Docker really shines. My favorite design pattern is to exclusively use environment variables and run the application inside a Docker container. This removes the isolation issue.
I generally agree with the previous answers, but there is another important aspect: usability.
For example, in git you can create a repository with the .git directory outside of the work tree. To specify it, you can use the command line argument --git-dir or the environment variable GIT_DIR.
Of course, if you change the current directory to another repository or inherit environment variables in scripts, you will make a mistake. But if you need to type several git commands against a detached repository in one terminal session, this is extremely handy: you don't need to repeat the --git-dir argument.
Another example is GIT_AUTHOR_NAME. It seems it doesn't even have a command line counterpart (however, git commit has an --author argument). GIT_AUTHOR_NAME overrides the user.name and author.name configuration settings.
In general, using command line arguments or environment variables is equally simple on UNIX: one can use a command line argument
$ command --arg=myarg
or an environmental variable in one line:
$ ARG=myarg command
It is also easy to capture command line arguments in an alias:
alias cfg='git --git-dir=$HOME/.cfg/ --work-tree=$HOME' # for dotfiles
alias grep='grep --color=auto'
In general, most arguments are passed through the command line. I agree with the previous answers that this is more functional and direct, and that environment variables in scripts are like global variables in programs.
GNU libc says this:
The argv mechanism is typically used to pass command-line arguments specific to the particular program being invoked. The environment, on the other hand, keeps track of information that is shared by many programs, changes infrequently, and that is less frequently used.
Apart from what was said about dangers of environmental variables, there are good use cases of them. GNU make has a very flexible handling of environmental variables (and thus is very integrated with shell):
Every environment variable that make sees when it starts up is transformed into a make variable with the same name and value. However, an explicit assignment in the makefile, or with a command argument, overrides the environment. (-- and there is an option to change this behaviour) ...
Thus, by setting the variable CFLAGS in your environment, you can cause all C compilations in most makefiles to use the compiler switches you prefer. This is safe for variables with standard or conventional meanings because you know that no makefile will use them for other things.
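For example (a hypothetical one-rule makefile):
$ cat Makefile
all:
    @echo 'CFLAGS = $(CFLAGS)'
$ CFLAGS=-O2 make        # picked up from the environment
CFLAGS = -O2
$ make CFLAGS=-O3        # an explicit command argument overrides it
CFLAGS = -O3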
Finally, I would stress that what matters most for a program is not the programmer experience, but the user experience. Maybe you included that in the design aspect, but internal and external design are quite different things.
And a few words about the programming aspect. You didn't write which language you use, but let's imagine your tools allow you the best possible argument parsing. In Python I use argparse, which is very flexible and rich. To get the parsed arguments, one uses a command like
args = parser.parse_args()
args can then be split into individual parsed arguments (say args.my_option), but I can also pass them as a whole to my function. This solution is absolutely not "hard to maintain for a large number of arguments" (if your language allows it). Indeed, if you have many parameters and they are not used during argument parsing, pass them in a container to their final destination and avoid code duplication (which leads to inflexibility).
And the very final comment is that it's much easier to parse environmental variables than command line arguments. An environmental variable is simply a pair, VARIABLE=value. Command line arguments can be much more complicated: they can be positional or keyword arguments, or subcommands (like git push). They can capture zero or several values (recall the command echo and flags like -vvv). See argparse for more examples.
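A quick shell illustration of the difference (a sketch; the variable and flag names are invented):
# environment variable: the pair is already split into name and value
debug="${DEBUG:-0}"
# command line flags: even a modest set needs a parsing loop
verbose=0
while getopts "dv" opt; do
    case $opt in
        d) debug=1 ;;
        v) verbose=$((verbose + 1)) ;;   # supports -vvv style repetition
    esac
done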
And one more thing: your worry about memory is a bit surprising. Don't write overly general programs. A library should be flexible, but a good program is useful without any arguments. If you need to pass a lot, it is probably data, not arguments. How to read data into a program is a much more general question with no single solution for all cases.
I am optimizing a very time- and memory-consuming program by running it over a dataset and under multiple parameters. For each "run", I have a csv file, "setup.csv", set up with "runNumber","Command" for each run. I then import this into a Perl script to read the command for the run number I would like, extrapolate the variables, and execute it on the system via the system command. Should I be worried about the potential for this to be exploited (I am worried right now)? If so, what can I do to protect our server? My plan for now is to change the file permissions of "setup.csv" to read-only and its ownership to root, then go in as root whenever I need to append another run to the list.
Thank you very much for your time.
Run your code in taint mode with -T. That will force you to carefully launder your data. Only pass through strings that you are expecting. Do not launder with .*; rather, check against a list of known-good strings.
Ideally, there is a list of known acceptable values, and you validate against that.
Either way, you want to avoid the shell by using the multi-argument form of system or by using IPC::System::Simple's systemx.
If you can't avoid the shell, you must properly convert the text to pass to the command into shell literals.
Even then, you have to be careful with values that start with -. Lots of tools accept -- to denote the end of options, allowing subsequent values to be passed safely.
Finally, you might want to make sure the args don't contain the NUL character (\0).
use IPC::System::Simple qw(systemx);
systemx('tool', '--', @args);
Note: Passing arbitrary strings is not possible in Windows. Extra validation is required.