How to set up variables in multi-part cloud-init userdata scripts?

I want to set a variable in a shell script that is part of a multi-part cloud-init userdata configuration, and then use that variable in a cloud-config script that comes later in the same multi-part userdata.
Is this possible to do? An example would be helpful.

You could just use a file. Set the "variable" with echo "value" > /tmp/myvar and read it back later with $(cat /tmp/myvar).
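For illustration, a minimal sketch of two parts in such a multi-part userdata (the part ordering and the runcmd consumer are assumptions; the only mechanism used is the temp file from the answer above):

#!/bin/sh
# part 1, Content-Type: text/x-shellscript -- compute the value once and persist it
echo "value" > /tmp/myvar

#cloud-config
# part 2, Content-Type: text/cloud-config -- read the value back at runtime
runcmd:
  - echo "myvar is $(cat /tmp/myvar)"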


How do I update a datastore variable from inside a bash function?

I have a variable set in a bbclass file like:
#some-class.bbclass
PROC ??= ""
In a recipe inheriting the class, I have a bash function where I modify that variable and immediately read its value. But, the value never gets updated.
#some-bb-file.bb
inherit some-class.bbclass
some_configure() {
    PROC=$(grep -r "Processor.*${cpu_id}" ... something)
    bbnote "PROC is ${PROC}"
}
I always get "PROC is " in the logs. I have tried printing the output of "(grep -r "Processor.*${cpu_id}" ... something)" and it returns a valid string. Can someone please tell me what I am missing?
You are mixing BitBake and shell variables in your code snippet. Your bbnote line should omit the curly braces to access the shell variable, i.e.:
bbnote "PROC is $PROC"
Explanation: BitBake variables and local shell variables are different things. Inside the shell function, ${PROC} refers to the BitBake variable defined in some-class.bbclass (it is expanded by BitBake before the shell code runs), and that variable isn't redefined when you do PROC="foo". If you use $PROC, the shell variable set by PROC="foo" is used instead.
As for the question in the title: I'm not sure whether it is possible to update a datastore variable from shell. You can get and set datastore variables in Python functions (using d.getVar and d.setVar).
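For example, a minimal sketch of doing it from a Python task instead (the task name and value are invented for illustration):

python do_set_proc() {
    d.setVar('PROC', 'some-value')              # write to the datastore
    bb.note("PROC is %s" % d.getVar('PROC'))    # read it back (older BitBake needs d.getVar('PROC', True))
}
addtask set_proc after do_configure before do_compile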
Datastore variables can be read from shell using:
${@d.getVar('PROC')}
If you need other operations besides reading, switch to Python.
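For instance, a sketch of what that could look like inside a shell task (BitBake expands the ${@...} inline Python before the shell code runs):

some_configure() {
    bbnote "datastore PROC is ${@d.getVar('PROC')}"
}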
I guess you missed backticks
PROC=`grep -r "Processor.*${cpu_id}" ... something`
bbnote "PROC is ${PROC}"

Is it possible to use config fragments with Buildroot's .config?

I'm using Buildroot as a submodule, and I want to reuse existing in-tree defconfigs with a few modifications of my own.
I'd like to store just the modified options in a config fragment, just like I can do with BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES for the Linux kernel config.
Right now I'm doing something like:
cd buildroot
make BR2_EXTERNAL="$(pwd)/../mypackage" qemu_x86_64_defconfig
echo '
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
' >> .config
make
Is there a nicer way to avoid that echo with a config fragment, just like I'm using for the Linux kernel config fragment? I'd expect something like:
make BR2_CONFIG_FRAG=br_config_frag
where br_config_frag contains the lines:
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
and then I'd be able to write just:
make -C buildroot BR2_CONFIG_FRAG=br_config_frag qemu_x86_64_defconfig all
Here's the full example repo.
Edit
One slight improvement is to put the "config fragment" in a separate file buildroot_config_fragment:
BR2_LINUX_KERNEL_CONFIG_FRAGMENT_FILES="../kernel_config_fragment"
BR2_ROOTFS_OVERLAY="../rootfs_overlay"
and then cat that:
cat ../buildroot_config_fragment >> .config
First, a side note: your script should run make olddefconfig before make, so that any new options are set to their default values instead of being asked for interactively.
You could simplify the script a little by doing:
cat configs/qemu_x86_64_defconfig br_config_frag > .config
make olddefconfig
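Put together with the layout from the question (the paths are the question's own; the BR2_EXTERNAL handling is left out for brevity), the whole flow could look something like:

cd buildroot
cat configs/qemu_x86_64_defconfig ../buildroot_config_fragment > .config
make olddefconfig
make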
You can also use the script support/kconfig/merge_config.sh from the kconfig infrastructure. However, that script internally uses make alldefconfig which currently doesn't work - you need a patch for that.
If you would like to add support for BR2_CONFIG_FRAG to the Buildroot infrastructure, feel free to send a patch to the Buildroot mailing list!
I asked on IRC, and a user who appears to be Yann E. Morin, an active Buildroot developer, said it is not currently possible.
Arnout's make alldefconfig patch was merged into Buildroot on 26 Jul 2017
(https://github.com/buildroot/buildroot/commit/dab80981d15979eab3aea28a33694396635a52a1).
This means you can now do:
./support/kconfig/merge_config.sh configs/qemu_x86_64_defconfig fragment1.config fragment2.config
This will use qemu_x86_64_defconfig as the base and add modifications given in the listed fragment config files. The tool will also show nice warnings if you override items.

Current working directory for SXPG_COMMAND_EXECUTE?

Is there a way to specify the current working directory for the system command executed by the function module SXPG_COMMAND_EXECUTE?
I do not see any parameter which would allow me to do that either by defining the command in transaction SM69 or on the list of IMPORTING parameters in SE37.
It looks like by default such commands are started in DIR_HOME which can be viewed by the transaction AL11. Do I have any control over that?
There isn't a way of doing it via SM69, unfortunately. I think the only solution is to create a script and call that.
I was going to suggest wrapping the statements in an SM69 command defined as a call to sh with parameters of -c 'cd <dir> && /path/to/command', but unfortunately that doesn't work. According to note 401095, wildcards are not permitted; when I tested, && was translated into a single &, causing the command to fail.
It would be good to access this information using the function module FILE_GET_NAME_USING_PATH (export the script name for which you want to find the physical directory).
The resulting path can then be used in SXPG_COMMAND_EXECUTE.
Because the external commands I called were actually .bat files, I solved this by putting the following expression at the beginning of each and every one of them:
cd /d %~dp0
This Stackoverflow question helped a lot actually.
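A sketch of such a .bat file (the file and command names are invented for illustration):

rem mycommand.bat -- the external command registered in SM69
cd /d %~dp0
rem %~dp0 expands to the drive and directory this .bat lives in, so from here on
rem the working directory is the script's own directory rather than DIR_HOME
myrealcommand.exe %*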

Do special variables maintain their existence in function calls in other required modules?

I'm not a perl expert and I don't quite get how all of perl's scoping rules work.
I'm setting an $ENV{'whatever'} environment variable, then I'm calling a function in another source .pl file and trying to read that ENV entry, and I'm getting nothing back. Docs say everywhere that ENV persists for the current process and any forked children, but is access to the %ENV variable available in other source files?
The source file was included via a 'require' command. Is that the right way to do it, or is there something static (first time in) about how variables are made available when a source file is required?
%ENV is a global, so it is accessible from everywhere in every source file loaded into a process.
%ENV is inherited when a new process is created with a fork but the new process gets its own copy so any changes made in one will not be visible in the other.
If you're loading the other source file with do or require or use then it's being loaded into the same process and it will see the same %ENV.
However if you're loading the new script with system or exec then the new script is loading in a new process and it will get its own copy of %ENV.
From perldoc perlvar:
%ENV
The hash %ENV contains your current environment. Setting a value in
ENV changes the environment for any child processes you subsequently
fork() off.
require-ing a .pl file is not the same as forking a command.
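A minimal sketch of the require case (the file names and the helper sub are invented for illustration):

# main.pl
$ENV{'whatever'} = '/usr/bin/some_dir/';
require './helper.pl';      # loaded into the same process
show_whatever();            # prints /usr/bin/some_dir/

# helper.pl
sub show_whatever {
    print "$ENV{'whatever'}\n";   # same %ENV as the caller
}
1;   # a require'd file must end with a true value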
It would be simpler to just set the necessary environment variables through a Bash wrapper:
$ cat wrapper.sh
#!/bin/bash
export whatever="/usr/bin/some_dir/"; # Set to env
perl script.pl; # Invoke the script
$ cat script.pl
#!/usr/bin/perl
print $ENV{whatever}; # wrapper.sh : "/usr/bin/some_dir/"
# script.pl : ""

Passing parameters to Capistrano

I'm looking into the possibility of using Capistrano as a generic deploy solution. By "generic" I mean not Rails. I'm not happy with the quality of the documentation I'm finding, though, granted, I'm not looking at the docs that presume you are deploying Rails. So I'll just try to hack something up based on a few examples, but there are a couple of problems I'm facing right from the start.
My problem is that cap deploy doesn't have enough information to do anything. Importantly, it is missing the tag for the version I want to deploy, and this has to be passed on the command line.
The other problem is how I specify my git repository. Our git server is accessed by SSH on the user's account, but I don't know how to change deploy.rb to use the user's id as part of the scm URL.
So, how do I accomplish these things?
Example
I want to deploy the result of the first sprint of the second release. That's tagged in the git repository as r2s1. Also, let's say user "johndoe" gets the task of deploying the system. To access the repository, he has to use the URL johndoe@gitsrv.domain:app. So the remote URL for the repository depends on the user id.
The command lines to get the desired files would be these:
git clone johndoe@gitsrv.domain:app
cd app
git checkout r2s1
Update: For Capistrano 3, see scieslak's answer below.
As jarrad has said, capistrano-ash is a good basic set of helper modules for deploying other project types, though it's not required; at the end of the day it's just scripting, and most tasks are done with system commands and end up looking almost like shell scripts.
To pass in parameters, you can set the -s flag when running cap to give you a key value pair. First create a task like this.
desc "Parameter Testing"
task :parameter do
puts "Parameter test #{branch} #{tag}"
end
Then start your task like so.
cap test:parameter -s branch=master -s tag=1.0.0
For the last part, I would recommend setting up passwordless access to your server using SSH keys. But if you want to take it from the currently logged-in user, you can do something like this.
desc "Parameter Testing"
task :parameter do
system("whoami", user)
puts "Parameter test #{user} #{branch} #{tag}"
end
UPDATE: Edited to work with the latest versions of Capistrano. The configuration array is no longer available.
Global parameters: use set :branch, fetch(:branch, 'a-default-value') to make a parameter available globally, and pass it with -S instead of -s.
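For example, a sketch for Capistrano 2 (the variable names and defaults are illustrative):

# deploy.rb
set :branch, fetch(:branch, 'master')   # -S branch=r2s1 overrides the default
set :tag,    fetch(:tag, nil)

desc "Show the parameters the deploy would use"
task :show_params do
  puts "branch=#{branch} tag=#{tag}"
end

Then call it with:
cap show_params -S branch=r2s1 -S tag=r2s1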
Update: regarding passing parameters to Capistrano 3 tasks only.
I know this question is quite old, but it still pops up first on Google when searching for how to pass parameters to a Capistrano task. Unfortunately, the fantastic answer provided by Jamie Sutherland is no longer valid with Capistrano 3. Before you waste your time trying it out, expect the results to be like below:
cap test:parameter -s branch=master
outputs:
cap aborted!
OptionParser::AmbiguousOption: ambiguous option: -s
OptionParser::InvalidOption: invalid option: s
and
cap test:parameter -S branch=master
outputs:
invalid option: -S
The valid answers for Capistrano 3, provided by @senz and Brad Dwyer, can be found via this link:
Capistrano 3 pulling command line arguments
For completeness, see the code below to learn about the two options you have.
1st option:
You can pass parameters to a task and access them by key, just as you do with regular hashes:
desc "This task accepts optional parameters"
task :task_with_params, :first_param, :second_param do |task_name, parameter|
run_locally do
puts "Task name: #{task_name}"
puts "First parameter: #{parameter[:first_param]}"
puts "Second parameter: #{parameter[:second_param]}"
end
end
Make sure there is no space between parameters when you call cap:
cap production task_with_params[one,two]
2nd option:
When you call any task, you can assign environment variables and then read them from the code:
set :first_param, ENV['first_env'] || 'first default'
set :second_param, ENV['second_env'] || 'second default'

desc "This task accepts optional parameters"
task :task_with_env_params do
  run_locally do
    puts "First parameter: #{fetch(:first_param)}"
    puts "Second parameter: #{fetch(:second_param)}"
  end
end
To assign the environment variables, call cap as below:
cap production task_with_env_params first_env=one second_env=two
Hope that will save you some time.
I'd suggest using ENV variables.
Something like this (command):
$ GIT_REPO="johndoe@gitsrv.domain:app" GIT_BRANCH="r2s1" cap testing
Cap config:
# deploy.rb:
task :testing, :roles => :app do
  puts ENV['GIT_REPO']
  puts ENV['GIT_BRANCH']
end
And take a look at https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension; maybe this approach will be useful for you as well.
As Jamie already showed, you can pass parameters to tasks with the -s flag. I want to show you how you can additionally use a default value.
If you want to work with default values, you have to use fetch instead of ||= or checking for nil:
namespace :logs do
  task :tail do
    file = fetch(:file, 'production') # sets 'production' as the default value
    puts "I would use #{file}.log now"
  end
end
You can either run this task with (uses the default value production for file):
$ cap logs:tail
or (uses the value cron for file):
$ cap logs:tail -s file=cron
Check out capistrano-ash for a library that helps with non-rails deployment. I use it to deploy a PyroCMS app and it works great.
Here is a snippet from my Capfile for that project:
# deploy from git repo
set :repository, "git#git.mygitserver.com:mygitrepo.git"
# tells cap to use git
set :scm, :git
I'm not sure I understand the last two parts of the question. Provide some more detail and I'd be happy to help.
EDIT after example given:
set :repository, "#{scm_user}#gitsrv.domain:app"
Then each person with deploy priveledges can add the following to their local ~/.caprc file:
set :scm_user, 'someuser'
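If you also want a fallback when ~/.caprc does not define :scm_user, something like the following could work (a sketch; the fallback to the local username via ENV['USER'] is an assumption, not part of the original answer):

# deploy.rb
set :scm_user, fetch(:scm_user, ENV['USER'])   # a ~/.caprc value wins if present
set :repository, "#{scm_user}@gitsrv.domain:app"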