Issues running commands via cloud-init

An open source project I'm working on has components that require Linux, and consequently virtualization has generally been the best solution for development and testing of new features. I'm attempting to provide a simple cloud-init file for Multipass that will configure the VM with our code by pulling our files from Git and setting them up in the VM automatically. However, even though the extra time elapsed during launch seems to indicate the process is being run, no files seem to actually be saved to the home directory, even for simpler cases, e.g.
runcmd:
- [ cd, ~ ]
- [ touch test ]
- [ echo 'test' > test ]
Am I just misconfiguring cloud-init or am I missing something crucial?

There are a couple of problems going on here.
First, your cloud config user data must begin with the line:
#cloud-config
Without that line, cloud-init doesn't know what to do with it. If you were to submit a user-data configuration like this:
#cloud-config
runcmd:
- [ cd, ~ ]
- [ touch test ]
- [ echo 'test' > test ]
You would find the following errors in /var/log/cloud-init-output.log:
runcmd.0: ['cd', None] is not valid under any of the given schemas
/var/lib/cloud/instance/scripts/runcmd: 2: cd: can't cd to None
/var/lib/cloud/instance/scripts/runcmd: 3: touch test: not found
/var/lib/cloud/instance/scripts/runcmd: 4: echo 'test' > test: not found
You'll find the solution to these problems in the documentation, which includes this note about runcmd:
# run commands
# default: none
# runcmd contains a list of either lists or a string
# each item will be executed in order at rc.local like level with
# output to the console
# - runcmd only runs during the first boot
# - if the item is a list, the items will be properly executed as if
# passed to execve(3) (with the first arg as the command).
# - if the item is a string, it will be simply written to the file and
# will be interpreted by 'sh'
You passed a list of lists, so the behavior is governed by "if the item is a list, the items will be properly executed as if passed to execve(3) (with the first arg as the command)". In this case, the ~ in [cd, ~] doesn't make any sense -- YAML parses a bare ~ as null (which is why the log shows None), and since the command isn't being executed by a shell, there is nothing that would expand ~ anyway.
The other two commands each consist of a single list item containing the whole command line, and there is no command on your system named either touch test or echo 'test' > test.
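If you did want to keep the list form, each argument must be its own list element, and anything that needs shell features (redirection, ~ expansion) has to invoke a shell explicitly. A sketch of what that might look like (paths chosen for illustration):
#cloud-config
runcmd:
- [ touch, /root/test ]
- [ sh, -c, "echo 'test' > /root/test" ]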
The simplest solution here is to pass in a list of strings instead:
#cloud-config
runcmd:
- cd /root
- touch test
- echo 'test' > test
I've replaced cd ~ here with cd /root, because it seems better to be explicit (and you know these commands are running as root anyway).
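If you want to see these errors for yourself, something along these lines should work (the instance name is just an example): launch with the user data, open a shell in the instance, and inspect what cloud-init actually ran:
multipass launch --cloud-init cloud-init.yaml --name test-vm
multipass shell test-vm
# then, inside the VM:
cloud-init status --long
sudo cat /var/lib/cloud/instance/scripts/runcmd
sudo less /var/log/cloud-init-output.log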

Related

Autocomplete directories in a subfolder with the Fish shell

I'm having trouble getting the 'complete' function in the fish shell to behave as I would like and I've been searching for an answer for days now.
Summary
Essentially I need to provide tab directory auto-completion as if I was in a different directory to the one I am currently in. It should behave exactly as 'cd' and 'ls' do, but with the starting point in another directory. It seems like such a trivial thing to be able to do but I can't find a way to make it work.
Explanation
Example folder structure below:
- root
  - foo
    - a
      - dir1
        - subdir1
      - dir2
        - subdir2
    - b
      - dir3
        - subdir3
      - dir4
        - subdir4
I am running these scripts whilst in the 'root' directory, but I need tab auto-complete to behave as if I was in the 'foo' directory.
testfunc -d a/dir2/subdir2
Instead of
testfunc -d foo/a/dir2/subdir2
There are a lot of directories inside 'foo' and a lot of sub-directories within them, and this auto-complete behaviour is necessary to speed up our process (this script is used extensively throughout the day).
Attempted Solution
I've tried using the 'complete' builtin to get this working by specifying the directory to use, but all this managed to do was auto-complete the first level of directories with a space after the argument instead of continuing to auto-complete like 'cd' would.
complete -x -c testfunc -a "(__fish_complete_directories ./foo/)"
Working bash version
I have already got this working in Bash and I am trying to port it over to fish. See below for the Bash version.
_testfunc()
{
    local cur prev words cword
    _init_completion || return

    compopt +o default

    case $prev in
        testfunc)
            COMPREPLY=( $( compgen -W '-d' -- "$cur" ) )
            compopt +o nospace
            return
            ;;
        -d)
            curdir=$(pwd)
            cd foo/ 2>/dev/null && _filedir -d
            COMPREPLY=( $( compgen -d -S / -- "$cur" ) )
            cd $curdir
            return
            ;;
    esac
} &&
complete -o nospace -F _testfunc testfunc
This is essentially stepping into the folder that I want, doing the autocompletion, then stepping back into the original folder that the script was run in. I was hoping this would be easier in Fish after getting it working in Bash (I need to support these two shells), but I'm just pulling my hair out.
Any help would be really appreciated!
I am not a bash completions expert, but it looks like the bash completions are implemented by changing directories, running completions, and then changing back. You can do the same in fish:
function complete_testfunc
    set prevdir $PWD
    cd foo
    __fish_complete_directories
    cd $prevdir
end
complete -x -c testfunc -a "(complete_testfunc)"
does that work for you?

Setting a variable via echo sometimes adds a random ' at the end

I have a bash script with the following functionality:
# usage: setOutput <name> <value>
function setOutput {
    echo "##vso[task.setvariable variable=$1]$2"
}
setOutput environment "dev"
This normally sets the variable correctly as ENVIRONMENT=dev - however, sometimes this randomly appends a ' at the end, i.e. ENVIRONMENT=dev'
I tried re-running the same commit on the pipeline multiple times, and sometimes it works, sometimes it doesn't. Any ideas?
I tested it with your sample and it displays normally.
You can try removing the double quotes around dev to see if this still happens:
function setOutput {
    echo "##vso[task.setvariable variable=$1]$2"
}
setOutput environment dev
Based on the fact that @Hugh Lin - MSFT could not reproduce the issue, I looked a little closer and found out what was actually going wrong:
In the beginning of my script, I used:
set -ex
This led to there being two echoes:
+echo '##vso[task.setvariable variable=key]value'
##vso[task.setvariable variable=key]value
Depending on the order of the two (the first is written to stderr, the second one to stdout), the value was sometimes set to the correct value and sometimes to the debug echo line. And since that one has the additional ' at the end, so does the exported output.
My suggestion: Only process stdout for these echo commands, not stderr, or add a special filter for set -x. I removed the debug output and now it works.
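If the set -x trace is still wanted for the rest of the script, another option (a sketch I have not run on the pipeline) is to suppress the trace only around the logging command, so the duplicate '+ echo ...' line never reaches the log parser:
function setOutput {
    # turn xtrace off just for this command; the redirect hides the 'set +x' trace line itself
    # (this assumes the script enabled xtrace with set -x / set -ex earlier)
    { set +x; } 2>/dev/null
    echo "##vso[task.setvariable variable=$1]$2"
    set -x
}
setOutput environment "dev"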

How can I call a taginfo batch script from cvsnt

I'd like to call a batch script during a cvs tag operation on a cvsnt server, but everything I get is "script execution failed". Where is the script I'd like to call supposed to be located, or how could I address it with variables?
If I call shell commands directly like "echo something", everything works fine and I also get the additional parameters added by cvsnt, like the actual TAG, command and directory. If I want to call a batch file, whether with a relative path, without a path, or even with ${CVSROOT}/CVSROOT/triggerbuild.cmd, all I get is 'script execution failed'.
My taginfo entries:
ALL echo
-> results in the tag, command and folder being echoed
ALL echo ${CVSROOT}/CVSROOT/trigger_release_build.bat
-> results in 'script execution failed'.
I simply want to call a batch script that triggers my Jenkins server to start a build under some conditions. The trigger script is finished and works fine when executed from a local shell. Only the integration in the cvsnt taginfo file doesn't work.
Addition: the quoted code is the overall code causing the failure. The code of the batch file is not relevant because it is not even called, due to the error.
This is the documentation from cvsnt's taginfo file:
# The "taginfo" file is used to control pre-tag checks.
# The filter on the right is invoked with the following arguments:
#
# $1 -- tagname
# $2 -- operation "add" for tag, "mov" for tag -F, and "del" for tag -d
# $3 -- repository
#
# The filter is passed a series of filename/version pairs on its standard input
#
# A non-zero exit of the filter program will cause the tag to be aborted.
#
# The first entry on a line is a regular expression which is tested
# against the directory that the change is being committed to, relative
# to the $CVSROOT. For the first match that is found, then the remainder
# of the line is the name of the filter to run.
#
# If the repository name does not match any of the regular expressions in this
# file, the "DEFAULT" line is used, if it is specified.
#
# If the name "ALL" appears as a regular expression it is always used
# in addition to the first matching regex or "DEFAULT".
When using backslashes I get different error messages about invalid characters \C
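For reference, a filter matching the interface documented above could look roughly like this (a sketch only, assuming a POSIX shell is available on the server; on a Windows cvsnt server the equivalent would be a .bat/.cmd reading %1, %2 and %3):
#!/bin/sh
# $1 -- tagname, $2 -- operation (add/mov/del), $3 -- repository
tagname=$1
operation=$2
repository=$3
# filename/version pairs arrive on standard input
while read filename version; do
    echo "tag $operation '$tagname': $filename ($version) in $repository"
done
exit 0   # a non-zero exit would abort the tag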

Get ansible to read value from mysql and/or perl script

There may be a much better way to do what I need altogether. I'll give the background first, then my current (non-working) approach.
The goal is to migrate a bunch of servers from SLES 11 to SLES 12, making use of Ansible playbooks. The problem is that the newserver and the oldserver are supposed to have the same nfs-mounted dir. This has to be done at the beginning of the playbook so that all of the other tasks can be completed. The name of the dir being created can be determined in 2 ways - on the oldserver directly, or from a mysql db query for the volume name on that oldserver. The newservers are named migrate-(oldservername). I tried to prompt for the volume name in Ansible, but that would then apply the same name to every server. Goal Recap: dir name must be determined from the oldserver, created on new server.
Approach 1: I've created a Perl script that Ansible will copy to the newserver; it will execute the mysql query and create the dir itself. There are 2 problems with this - 1) mysql-client needs to be installed on each server. This is completely unnecessary for these servers and would then have to be uninstalled after the query is run. 2) Copying files and remotely executing them seems like a bad approach in general.
Approach 2: Create a version of the above, except run it on the Ansible control machine (where mysql-client is already installed) and store the values as key:value pairs in a file. Problems - 1) I cannot figure out how to determine, in the Perl script, which hosts Ansible is running against, and I would have to enter them into the Perl script manually. 2) I cannot figure out how to get Ansible to import those values correctly from the file created.
Here's the relevant perl code I have for this -
$newserver = "migrate-old.server.com";
($mig, $oldhost) = split(/\-/, $newserver);
$volname = `mysql -u idm-ansible -p -s -N -e "select vol_name from assets.servers where hostname like '$oldhost'";`;
chomp($volname);  # strip the trailing newline from the mysql output
open(FH, ">vols.yml");
print FH "$newserver:$volname\n";
close(FH);
My Ansible code is all over the place as I've tried and commented out a ton of things. I can share that here if it is helpful.
Approach 3: Do it completely in Ansible. Basically a mysql query looped over each host. Problem - I have absolutely no idea how to do this; I'm way too unfamiliar with Ansible. I think this is what I would prefer to try and do, though.
What is the best approach here? How do I go about getting the right value into ansible to create the correct directory?
Please let me know if I can clarify anything.
Goal Recap: dir name must be determined from the oldserver, created on new server.
Will magic variables do?
Something like this:
---
- hosts: old-server
  tasks:
    - shell: "/get-my-mount-name.sh"
      register: my_mount_name

- hosts: new-server
  tasks:
    - shell: "/mount-me.sh --mount_name={{ hostvars['old-server'].my_mount_name.stdout }}"

dpkg: How to use trigger?

I wrote a little CDN server that rebuilds its registry pool when new pool-content-packages are installed into that registry pool.
Instead of having each pool-content-package call the init.d of the cdn-server, I'd like to use triggers. That way it would restart the server only once at the end of an installation run, after all packages were installed.
What do I have to do to use triggers in my packages with debhelper support?
What you are looking for is dpkg-triggers.
One solution, using debhelper to build the Debian packages, is this:
Step 1)
Create the file debian/<serverPackageName>.triggers (replace <serverPackageName> with the name of your server package).
Step 1a)
Define a trigger that watches the directory of your pool. The content of the file would be:
interest /path/to/my/pool
Step 1b)
Alternatively, you can define a named trigger, which has to be fired explicitly (see step 3).
Content of the file:
interest cdn-pool-changed
The name of the trigger cdn-pool-changed is arbitrary; you can choose whatever you want.
Step 2)
Add a handler for the trigger to the file debian/<serverPackageName>.postinst (replace <serverPackageName> with the name of your server package).
Example:
#!/bin/sh
set -e

case "$1" in
    configure)
        ;;

    triggered)
        # here is the handler
        /etc/init.d/<serverPackageName> restart
        ;;

    abort-upgrade|abort-remove|abort-deconfigure)
        ;;

    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
        ;;
esac

#DEBHELPER#

exit 0
Replace <serverPackageName> with the name of your server package.
Step 3) (only for named triggers, see step 1b)
Add to every content package the file debian/<contentPackageName>.triggers (replace <contentPackageName> with the names of your content packages).
Content of the file:
activate cdn-pool-changed
Use the same trigger name you defined in step 1.
More detailed Information
The best description of dpkg triggers I could find is "How to use dpkg triggers". You can get the corresponding git repository with examples here:
git clone git://anonscm.debian.org/users/seanius/dpkg-triggers-example.git
I had a need for this and read and re-read the docs many times. I think the process is not clearly explained, or rather, what goes where is not clearly explained. Here I hope to clarify the use of Debian package triggers.
Service with Configuration Directory
A service reading its settings in a specific directory can mark that directory as being of interest.
Say I create a new service which reads settings from /usr/share/my-service/config/...
That service gets two additions:
In its debian directory I add my-service.triggers
And here are the contents:
# my-service.triggers
interest /usr/share/my-service/config
This means if any other package installs or removes a file from that directory, the trigger enters its "needs to be run" state.
In its debian directory I also add my-service.postinst
And I have a script as follow to check whether the trigger happened and run a process as required:
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "/usr/share/my-service/config" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
That's it.
Now packages adding extensions to your service can add their own configuration file(s) under /usr/share/my-service/config (or a directory under /etc/my-service/my-service.d/... or /var/lib/my-service/..., although that last one should be reserved for dynamic files, not files installed from a package) and dpkg automatically calls your postinst script with:
postinst triggered /usr/share/my-service/config
# where /usr/share/my-service/config is your <interest-path>
This call happens only once, after all the packages were installed, hence the advantage of having a trigger in the first place. This way each package does not need to know that it has to restart my-service, and the restart does not happen more than once (restarting repeatedly could cause all sorts of side effects, e.g. the service tries to listen on a TCP port and gets an "address already in use" error).
IMPORTANT: keep in mind that the postinst should include a line with #DEBHELPER#.
So you do not have to do anything special in other packages. Only make sure to install the configuration files in the correct directory and dpkg picks up from there (i.e. in my example under /usr/share/my-service/config).
I have an extension to BIND9 called ipmgr which makes use of .ini files saved in a specific folder. It uses the files to generate DNS zones (far fewer errors that way! and it includes support for getting letsencrypt certificates and settings for dmarc/dkim). This package uses this case: a simple directory where configuration files get installed. Other packages do not need to do anything other than install files in the right place (/usr/share/ipmgr/zones, for this package).
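As an illustration (package, file and directory names are hypothetical), an extension package built with debhelper could drop its file into the watched directory with a one-line debian/my-service-extension.install containing:
config/extension.conf usr/share/my-service/config/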
Service without a Configuration Folder
In some (rare?) cases, you may need to trigger something in a service which is not driven by the installation of a new configuration file.
In this case, you can use an arbitrary name (it should include your package name to make sure it is unique since this name is global to the entire Debian/Ubuntu system).
To make this one work, you need three files, one of which is a trigger in the other packages.
State the Interest
As above, we have an interest. In this case, the interest is stated as a name on its own. The dpkg system distinguishes between a name and a path because a name cannot include a slash (/) character. Names are limited to ASCII, excluding control characters and spaces. I would suggest you stick to a-z, 0-9 and dashes (-).
# my-service.triggers
interest my-service-settings
This is useful if you cannot simply track a folder. For example, the settings could come from a network connection that a package offers once installed.
Listen for the Triggers
Again, as above, you need a postinst script in your Service Package. This captures the trigger and allows you to run a command. The script is the same, only you test for the name instead of the folder (note that you can have any number of triggers, so you could also have both: a folder as above and a special name as here).
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "my-service-settings" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
The Trigger
As mentioned above, we need a third file. An arbitrary name is not going to be triggered automatically by dpkg. It wouldn't know whether your other package needs to trigger something just like that (although it is fairly automated as it is already).
So in other packages, you create a trigger file which looks like this:
# other-package.triggers
activate my-service-settings
Now we recognize the name; it is the same as the interest stated above.
In other words, if the trigger needs to run for something other than just the installation of files in a given location, use a special name and add this triggers file with the activate keyword.
Other Features
I have not tested the other features of the dpkg-trigger(1) tool. There are other keywords supported in the triggers files:
interest
interest-await
interest-noawait
activate
activate-await
activate-noawait
The deb-triggers manual page has additional information about those. I am not too sure what await/noawait imply, other than that the trigger may happen at any time when noawait is used.
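Syntactically, the noawait variants simply replace the directive; for example, the directory interest from the first section would become (an untested sketch):
# my-service.triggers
interest-noawait /usr/share/my-service/config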
Automatic Trigger Added
The build system on Ubuntu (probably Debian too) automatically adds a triggers file with the following when your package includes a library:
$ cat triggers
# Triggers added by dh_makeshlibs/11.1.6ubuntu2
activate-noawait ldconfig
I suggest you exercise caution if your package includes libraries. If you have your own triggers file, I do not know whether this addition will still happen automatically.
This also shows us a special case where it wants to use the noawait. If I understand correctly, it has to run the ldconfig trigger ASAP so your commands will work as expected after the unpack. Otherwise ldd will not know anything about your newly installed library.