supervisor program:x command expansion of environment variables %(ENV_VAR)s? - supervisord

I would like to put configuration (in this case, site name) into supervisor
environment variables, for expansion in program:x command arguments. Is this supported? The documentation's wording would seem to indicate yes.
The following syntax is not working for me on supervisor-3.0 (excerpt of config file):
[supervisord]
environment = SITE="mysite"
[program:service_name]
command=/path/to/myprog/myservice /data/myprog/%(ENV_SITE)s/%(ENV_SITE)s.db %(program_name)s_%(process_num)03d
process_name=%(program_name)s_%(process_num)03d
numprocs=5
numprocs_start=1
Raises the following error:
sudo supervisord -c supervisord.conf
Error: Format string
'/path/to/myprog/myservice /data/myprog/%(ENV_SITE)s/%(ENV_SITE)s.db %(program_name)s_%(process_num)03d'
for 'command' contains names which cannot be expanded
Reading the documentation, I expected environment variables to be available for
expansion in program:x command as %(ENV_VAR)s:
http://supervisord.org/configuration.html#program-x-section-values
command:
"String expressions are evaluated against a dictionary containing the keys
group_name, host_node_name, process_num, program_name, here (the directory of
the supervisord config file), and all supervisord's environment variables
prefixed with ENV_."
Introduced: 3.0
Related:
There are open pull requests to enable expansion in additional section values:
https://github.com/Supervisor/supervisor/issues?labels=expansions&page=1&state=open
A search of Google (or SO) returns no examples of attempts to use %(ENV_VAR)s
expansion in the command section value:
https://www.google.com/search?q=supervisord+environment+expansion+in+command

I agree supervisor is not clear about this (to me at least).
I've found the easiest solution is to execute /bin/bash -c (a fuller sketch follows below).
In your case it would be:
command=/bin/bash -c "/path/to/myprog/myservice /data/myprog/${SITE}/${SITE}.db ..."
What do you think?
I've found inspiration here: http://blog.trifork.com/2014/03/11/using-supervisor-with-docker-to-manage-processes-supporting-image-inheritance/
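Putting that together with the config from the question, a rough sketch of the full program section might look like this (untested; ${SITE} is expanded by bash at run time, while the %(...)s expressions are still expanded by supervisord):
[program:service_name]
; SITE comes from the environment = SITE="mysite" line in the [supervisord] section
command=/bin/bash -c "/path/to/myprog/myservice /data/myprog/${SITE}/${SITE}.db %(program_name)s_%(process_num)03d"
process_name=%(program_name)s_%(process_num)03d
numprocs=5
numprocs_start=1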

You are doing it right; however, the environment defined in your [supervisord] section doesn't get made available, for whatever reason, during configuration loading. If you start supervisord like this:
SITE=mysite supervisord
It will run correctly and expand that variable. I don't know why supervisord has trouble adding to its own environment and making it available for the subprocesses' config expansion. I think the environment variable is available inside the subprocess, but not when expanding variables in the subprocess config declaration.
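In other words, a workaround is to make sure the variable is already in supervisord's environment before the config is parsed, for example:
# exporting SITE before launch makes %(ENV_SITE)s expandable at config-load time
export SITE=mysite
supervisord -c supervisord.conf
Note that if you launch via sudo, sudo may strip the variable from the environment unless you preserve it.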

Why is the k8s container spec "command" field an array?

According to this official kubernetes documentation page, it is possible to provide "a command" and args to a container.
The page has 13 occurrences of the string "a command" and 10 occurrences of "the command" -- note the use of singular.
There are (besides file names) 3 occurrences of the plural "commands":
One leads to the page Get a Shell to a Running Container, which I am not interested in. I am interested in the start-up command of the container.
One mention is concerned with running several piped commands in a shell environment, however the provided example uses a single string: command: ["/bin/sh"].
The third occurrence is in the introductory sentence:
This page shows how to define commands and arguments when you run a container in a Pod.
All examples, including the explanation of how command and args interact when given or omitted, only ever show a single string in the array. It even seems that the intent is to run a single command only, which would receive all specified args, since the field name is singular.
The question is: Why is this field an array?
I assume the developers of kubernetes had a good reason for this, but I cannot think of one. What is going on here? Is it legacy? If so, how come? Is it future-readiness? If so, what for? Is it for compatibility? If so, to what?
Edit:
As I have written in a comment below, the only reason I can conceive of at this moment is this: The k8s developers wanted to achieve the interaction of command and args as documented AND allow a user to specify all parts of a command in a single parameter instead of having a command span across both command and args.
So essentially a compromise between a feature and readability.
Can anyone confirm this hypothesis?
Because the execve(2) system call takes an array of words. Everything at a higher level fundamentally reduces to this. As you note, a container only runs a single command, and then exits, so the array syntax is a native-Unix way of providing the command rather than a way to try to specify multiple commands.
For the sake of argument, consider a file named "a file; with punctuation", where the spaces and the semicolon are part of the filename. Maybe this is the input to some program, so in a shell you might write
some_program 'a file; with punctuation'
In C you could write this out as an array of strings and run it directly:
#include <unistd.h>   /* for execvp */

int main(void) {
    char *const argv[] = {
        "some_program",
        "a file; with punctuation", /* no escaping or quoting, an ordinary C string */
        NULL
    };
    execvp(argv[0], argv); /* replaces this process; does not return on success */
    return 1;              /* reached only if execvp fails */
}
and similarly in Kubernetes YAML you can write this out as a YAML array of bare words
command:
- some_program
- a file; with punctuation
Neither Docker nor Kubernetes will automatically run a shell for you (except in the case of the Dockerfile shell form of ENTRYPOINT or CMD). Part of the question is "which shell"; the natural answer would be a POSIX Bourne shell in the container's /bin/sh, but a very-lightweight container might not even have that, and sometimes Linux users expect /bin/sh to be GNU Bash, and confusion results. There are also potential lifecycle issues if the main container process is a shell rather than the thing it launches. If you do need a shell, you need to run it explicitly
command:
- /bin/sh
- -c
- some_program 'a file; with punctuation'
Note that sh -c's argument is a single word (in our C example, it would be a single entry in the argv array) and so it needs to be a single item in a command: or args: list. If you have the sh -c wrapper it can do anything you could type at a shell prompt, including running multiple commands in sequence. For a very long command it's not uncommon to see YAML block-scalar syntax here.
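For illustration, a sketch of that block-scalar form (the second command here is purely hypothetical; the whole block is still one single string handed to sh -c):
command:
  - /bin/sh
  - -c
  - |
    some_program 'a file; with punctuation'
    another_program --cleanup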
I think the reason the command field is an array is that it directly overrides the container image's ENTRYPOINT (and args overrides its CMD), both of which can be arrays, and should be arrays in order for command and args to compose properly (see the documentation).

How to pass arguments to memcheck with ctest?

I want to use ctest from the command line to run my tests with memcheck and pass in arguments for the memcheck command.
I can run ctest -R my_test to run my test, and I can even run ctest -R my_test -T memcheck to run it through memcheck.
But I can't seem to find a way to pass arguments to that memcheck command, like --leak-check=full or --suppressions=/path/to/file.
After reading ctest's documentation I've tried using the -D option with CTEST_MEMCHECK_COMMAND_OPTIONS and MEMCHECK_COMMAND_OPTIONS. I also tried setting these as environment variables. None of my attempts produced any different test command. It's always:
Memory check command: /path/to/valgrind "--log-file=/path/to/build/Testing/Temporary/MemoryChecker.7.log" "-q" "--tool=memcheck" "--leak-check=yes" "--show-reachable=yes" "--num-callers=50"
How can I control the memcheck command from the ctest command line?
TL;DR
ctest --overwrite MemoryCheckCommandOptions="--leak-check=full --error-exitcode=100" \
--overwrite MemoryCheckSuppressionFile=/path/to/valgrind.suppressions \
-T memcheck
Explanation
I finally found the right way to override such variables, but unfortunately it's not easy to understand this from the documentation.
So, to help the next poor soul that needs to deal with this, here is my understanding of the various ways to set options for memcheck.
In a CTestConfig.cmake in your top-level source dir, or in a CMakeLists.txt (before calling include(CTest)), you can set MEMORYCHECK_COMMAND_OPTIONS or MEMORYCHECK_SUPPRESSIONS_FILE.
When you include(CTest), CMake will generate a DartConfiguration.tcl in your build directory and setting the aforementioned variables will populate MemoryCheckCommandOptions and MemoryCheckSuppressionFile respectively in this file.
This is the file that ctest parses in your build directory to populate its internal variables for running the memcheck step.
So, if you'd like to set your project's options for memcheck during cmake configuration, this is the way to go.
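For example, a minimal sketch in CMakeLists.txt (the option values are the ones from the TL;DR above; the suppressions path is a placeholder):
# must be set before include(CTest) so they are written into DartConfiguration.tcl
set(MEMORYCHECK_COMMAND_OPTIONS "--leak-check=full --error-exitcode=100")
set(MEMORYCHECK_SUPPRESSIONS_FILE "${CMAKE_SOURCE_DIR}/valgrind.suppressions")
include(CTest)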
If instead you'd like to modify these options after you already have a properly configured build directory, you can:
Modify the DartConfiguration.tcl in your build directory directly, but note that your edits will be overwritten, since this file is regenerated each time cmake runs.
Use the ctest --overwrite command-line option to set these memcheck options just for that run.
Notes
I've seen mentions online of a CMAKE_MEMORYCHECK_COMMAND_OPTIONS variable. I have no idea what this variable is and I don't think cmake is aware of it in any way.
Setting CTEST_MEMORYCHECK_COMMAND_OPTIONS (the variable that is actually documented in the cmake docs) in your CTestConfig.cmake or CMakeLists.txt has no effect. It seems this variable only works in "CTest Client Scripts", which I have never used.
Unfortunately, both MEMORYCHECK_COMMAND_OPTIONS and MEMORYCHECK_SUPPRESSIONS_FILE aren't documented explicitly in cmake, only indirectly, in ctest documentation and the Testing With CTest tutorial.
When ctest is run in the build, it parses the file to populate its internal variables:
https://cmake.org/cmake/help/latest/manual/ctest.1.html#dashboard-client-via-ctest-command-line
It's not clear to me how this interacts with

How to set environment variables in fish shell script

In my fish shell script hoge.fish, I have code that sets an environment variable:
#!/usr/local/bin/fish
set -x HOGE "hello"
but after I execute this script the variable is not set and echo outputs nothing:
./hoge.fish
echo $HOGE
I've tried these variants but none of them worked:
set -gx HOGE "hello"
set -gU HOGE "hello"
How can I fix this?
OS: macOS High Sierra 10.13.6
fish version: 2.7.1
iTerm2: 3.2.0
When you ran the script, it probably set the environment variable correctly, but only in the process that was created to run the script, not in the parent session you ran it from! When the script exited, that process and its environment were destroyed.
If you want to change the environment variable in your current environment, depending on what interactive shell you're using, you can use a command like source hoge.fish, which will execute the commands in your current session rather than a subprocess, so the environment variable changes will persist.
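For example, from an interactive fish session:
source ./hoge.fish    # runs the script's commands in the current session
echo $HOGE            # now prints: hello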
While sourcing, as in the original answer, is definitely the correct mechanism, a comment from the OP on that answer mentioned that they would still prefer a solution that could be executed as a script.
As long as the variables are exported (set -x) in the script, it's possible (but still not necessarily recommended) to do this by execing into another fish shell inside the script:
#!/usr/bin/env fish
set -gx HOGE hello
exec fish
Executing ./hoge.fish will then have a fish shell with HOGE set as expected.
However, be aware:
This will result in two fish shell processes running at once, one inside the other: the first (parent) is the original fish shell; it spawns a second (child) process for the shebang line, and that child is then replaced by a third fish instance by the exec line.
You can reduce the number of shells that are running simultaneously by starting the script with exec ./hoge.fish. That results in the shebang script replacing the parent process, and then being replaced by the exec line at the end of the script. However, you will still have run fish's startup twice to achieve what a simple source would have done with zero additional startups.
It's also important to realize the environment of the new shell will not necessarily be the same as that of the original shell. In particular, local variables from the original shell will not be present in the exec'd shell.
There are use cases where it is worth accepting these pitfalls to exec a new shell, but most of the time a simple source will be preferred.
Note that if you run the script from a bash shell, the -U option will not export the variables outside fish, because it exports to the "fish universe" (universal variables shared between fish sessions), not to the outside.
If you stay inside the fish shell you can still do it like this:
#!/usr/local/bin/fish
set -Ux HOGE "hello"
And this is the result:
Welcome to fish, the friendly interactive shell
Type help for instructions on how to use fish
~/trash $ ./hoge.fish
~/trash $ echo $HOGE
hello
Remember to keep the first line so fish will interpret it properly.

What is the recommended way to set JVM options for the executables created with sbt-native-packager?

Currently, I use export JAVA_OPTS ... on the command line, but there seem to be other possibilities, using the build.sbt or an external property file.
I have found several relevant github issues here, here and here but the many options are confusing. Is there a recommended approach?
The approach you take to setting JVM options depends mainly on your use case:
Inject options every time
If you want to be able to specify the options every time you run your service, the two mechanisms are environment variables, and command line parameters. Which you use is mostly a matter of taste or convenience (but command line parameters will override environment variable settings).
Environment variables
You can inject values using the JAVA_OPTS environment variable. This is specified as a sequence of parameters passed directly to the java binary, with each parameter separated by whitespace.
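For example (./bin/my-app is a stand-in for whatever start script sbt-native-packager generated for your project):
JAVA_OPTS="-Xmx1024m -XX:+UseG1GC" ./bin/my-app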
Command line parameters
You can inject values by adding command line parameters in either of two formats:
-Dkey=val
Passes a Java system property into the java binary.
-J-X
Passes any flag -X to the java binary, stripping the leading -J.
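For example, both forms can be combined on one invocation (again with the hypothetical my-app start script):
./bin/my-app -Dkey=val -J-Xmx1024m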
Inject options from a file which can be modified
If you want to end up with a file on the filesystem which can be modified after install time, you will want to use sbt-native-packager's ability to read from a .ini file to initialise a default value for Java options. The details of this can be seen at http://www.scala-sbt.org/sbt-native-packager/archetypes/cheatsheet.html#file-application-ini-or-etc-default
Following the instructions, and depending on the archetype you are using, you will end up with a file at either /etc/default, application.ini, or another custom name, which will be read by the startup script to add settings.
Each line of this file is treated as an extra startup parameter, so the same rules mentioned earlier still apply; e.g. -X flags need to be written as -J-X.
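A hypothetical application.ini (or /etc/default file) written to those rules, where each line becomes one extra startup parameter:
-J-Xmx1024m
-J-Xms512m
-Dkey=val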
Inject options & code which never need to be changed
You can hardcode changes directly into the shell script which is run to start your binary, by using the SBT setting bashScriptExtraDefines, and following the details at http://www.scala-sbt.org/sbt-native-packager/archetypes/cheatsheet.html#extra-defines
This is the most flexible option in terms of what is possible (you can write any valid bash code, and this is added to the start script). But it is also less flexible in that it is not modifiable afterwards; any optional calculations have to be described in terms of the bash scripting language.
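A rough build.sbt sketch of that approach (addJava is a helper function defined inside the generated bash start script; treat the exact line as an assumption and check the linked cheatsheet for your plugin version):
// appends a line to the generated start script; the script's addJava helper adds a JVM flag
bashScriptExtraDefines += """addJava "-Xmx1024m""""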

Which shell does a Perl system() call use?

I am using a system call to do some tasks
system('myframework mycode');
but it complains of missing environment variables.
Those environment variables are set at my bash shell (from where I run the Perl code).
What am I doing wrong?
Does the system call create a brand new shell (without environment variable settings)? How can I avoid that?
It's complicated. Perl does not necessarily invoke a shell. Perldoc says:
If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is /bin/sh -c on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to execvp, which is more efficient.
So it actually looks like you would have the arguments passed right to execvp. Furthermore, whether the shell loaded your .bashrc, .profile, or .bash_profile depends on whether the shell is interactive. Likely it isn't, but you can check like this.
If you don't want to invoke a shell, call system with a list:
system 'mycommand', 'arg1', '...';
system qw{mycommand arg1 ...};
If you want a specific shell, call it explicitly:
system "/path/to/mysh -c 'mycommand arg1 ...'";
I think it's not a question of shell choice, since environment variables are always inherited by subprocesses unless cleaned up explicitly.
Are you sure you have exported your variables?
This will work:
$ A=5 perl -e 'system(q{echo $A});'
5
$
This will work too:
$ export A=5
$ perl -e 'system(q{echo $A});'
5
$
This wouldn't:
$ A=5
$ perl -e 'system(q{echo $A});'
$
system() calls /bin/sh as the shell. If you are on a somewhat different box, like ARM, it would be good to read the man page for the exec family of calls to see the default behavior. You can source your .profile if you need to, since system() takes a command:
system(" . myhome/me/.profile && /path/to/mycommand")
I've struggled for 2 days working on this. In my case, environment variables were correctly set under linux but not cygwin.
From mkb's answer I thought to check out man perlrun and it mentions a variable called PERL5SHELL (specific to the Win32 port). The following then solved the problem:
$ENV{PERL5SHELL} = "sh";
As is often the case - all I can really say is "it works for me", although the documentation does imply that this might be a sensible solution:
May be set to an alternative shell that perl must use internally for executing "backtick" commands or system().
If the shell used by perl does not implicitly inherit the environment variables then they will not be set for you.
I wrestled with environment variables not being set for my script in this post, where I needed the env variable $DBUS_SESSION_BUS_ADDRESS to be set, but it wasn't when I called the script as root. You can read through that, but in the end you can check whether %ENV contains your needed variables and, if not, add them.
From perlvar
%ENV
$ENV{expr}
The hash %ENV contains your current environment. Setting a value in "ENV" changes
the environment for any child processes you subsequently fork() off.
My problem was that I was running the script under sudo, which didn't preserve all of my user's env variables. Are you running the script under sudo or as some other user, say www-data (apache)?
Simple test:
user#host:~$ perl -e 'print $ENV{q/MY_ENV_VARIABLE/} . "\n"'
and if that doesn't work then you will need to add it to %ENV at the top of your script.
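A minimal sketch of that check-and-set approach (the variable name and fallback value are placeholders):
# near the top of the script: supply a fallback if the variable is missing from %ENV
$ENV{MY_ENV_VARIABLE} = 'some_default_value' unless defined $ENV{MY_ENV_VARIABLE};
system('myframework mycode');    # the child process inherits the updated %ENV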
Try system("echo \$SHELL"); on your system.