Passing parameters to Capistrano - deployment

I'm looking into the possibility of using Capistrano as a generic deploy solution. By "generic", I mean not-rails. I'm not happy with the quality of the documentation I'm finding, though, granted, I'm not looking at the ones that presume you are deploying rails. So I'll just try to hack up something based on a few examples, but there are a couple of problems I'm facing right from the start.
My problem is that cap deploy doesn't have enough information to do anything. Importantly, it is missing the tag for the version I want to deploy, and this has to be passed on the command line.
The other problem is how I specify my git repository. Our git server is accessed by SSH on the user's account, but I don't know how to change deploy.rb to use the user's id as part of the scm URL.
So, how do I accomplish these things?
Example
I want to deploy the result of the first sprint of the second release. That's tagged in the git repository as r2s1. Also, let's say user "johndoe" gets the task of deploying the system. To access the repository, he has to use the URL johndoe@gitsrv.domain:app. So the remote URL for the repository depends on the user id.
The command lines to get the desired files would be these:
git clone johndoe@gitsrv.domain:app
cd app
git checkout r2s1

Update: For Capistrano 3, see scieslak's answer below.
As jarrad has said, capistrano-ash is a good basic set of helper modules for deploying other project types, though it's not required; at the end of the day Capistrano is just a scripting language, and most tasks are done with system commands and end up looking almost like shell scripts.
To pass in parameters, you can use the -s flag when running cap to give it a key-value pair. First, create a task like this.
desc "Parameter Testing"
task :parameter do
puts "Parameter test #{branch} #{tag}"
end
Then start your task like so.
cap test:parameter -s branch=master -s tag=1.0.0
For the last part, I would recommend setting up passwordless access to your server using SSH keys. But if you want to take it from the currently logged-in user, you can do something like this.
desc "Parameter Testing"
task :parameter do
system("whoami", user)
puts "Parameter test #{user} #{branch} #{tag}"
end
UPDATE: Edited to work with the latest versions of Capistrano. The configuration array is no longer available.
Global parameters: see the comments. Use set :branch, fetch(:branch, 'a-default-value') to make a parameter available globally, and pass it with -S instead of -s.
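For illustration, a minimal sketch of how that might look with Capistrano 2 (the variable name branch and the default 'master' are just examples, not from the original answer):

# config/deploy.rb
set :branch, fetch(:branch, 'master')   # default used when -S branch=... is not given

desc "Show which branch would be deployed"
task :show_branch do
  puts "Would deploy branch #{branch}"
end

Invoke it as cap show_branch -S branch=r2s1, or without the flag to fall back to the default.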

Update: regarding passing parameters to Capistrano 3 tasks only.
I know this question is quite old, but it still pops up first on Google when searching for how to pass parameters to a Capistrano task. Unfortunately, the fantastic answer provided by Jamie Sutherland is no longer valid with Capistrano 3. Before you waste your time trying it out, expect the results to be like those below:
cap test:parameter -s branch=master
outputs :
cap aborted!
OptionParser::AmbiguousOption: ambiguous option: -s
OptionParser::InvalidOption: invalid option: s
and
cap test:parameter -S branch=master
outputs:
invalid option: -S
The valid answers for Capistrano 3, provided by @senz and Brad Dwyer, can be found by following this link:
Capistrano 3 pulling command line arguments
For completeness, see the code below to find out about the two options you have.
1st option:
You can pass parameters to a task and read them by key, as you would from a regular hash:
desc "This task accepts optional parameters"
task :task_with_params, :first_param, :second_param do |task_name, parameter|
run_locally do
puts "Task name: #{task_name}"
puts "First parameter: #{parameter[:first_param]}"
puts "Second parameter: #{parameter[:second_param]}"
end
end
Make sure there is no space between parameters when you call cap:
cap production task_with_params[one,two]
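If you want defaults for parameters that were not passed, Rake's argument object supports with_defaults. Here is a sketch of the same idea with illustrative defaults (the task name and default values are not part of the original answer):

desc "Same idea, with defaults for missing parameters"
task :task_with_default_params, :first_param, :second_param do |task_name, args|
  args.with_defaults(first_param: 'one', second_param: 'two')   # fill in anything not passed
  run_locally do
    puts "First parameter: #{args[:first_param]}"
    puts "Second parameter: #{args[:second_param]}"
  end
end

Calling cap production task_with_default_params without brackets would then use both defaults.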
2nd option:
When you call any task, you can assign environment variables and then read them from the code:
set :first_param, ENV['first_env'] || 'first default'
set :second_param, ENV['second_env'] || 'second default'

desc "This task accepts optional parameters"
task :task_with_env_params do
  run_locally do
    puts "First parameter: #{fetch(:first_param)}"
    puts "Second parameter: #{fetch(:second_param)}"
  end
end
To assign the environment variables, call cap like below:
cap production task_with_env_params first_env=one second_env=two
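Applied to the original question, the second option might look like this (a sketch; :repo_url and :branch are the standard Capistrano 3 settings, while the env variable names are illustrative):

# config/deploy.rb
set :branch,   ENV['TAG'] || 'r2s1'                               # git tag to deploy
set :repo_url, "#{ENV['GIT_USER'] || 'johndoe'}@gitsrv.domain:app" # per-user SSH repo URL

cap production deploy TAG=r2s1 GIT_USER=johndoe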
Hope that will save you some time.

I'd suggest using ENV variables.
Something like this (command):
$ GIT_REPO="johndoe@gitsrv.domain:app" GIT_BRANCH="r2s1" cap testing
Cap config:
#deploy.rb:
task :testing, :roles => :app do
  puts ENV['GIT_REPO']
  puts ENV['GIT_BRANCH']
end
And take a look at https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension; maybe this approach will be useful for you as well.
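For reference, a minimal multistage sketch (assuming the capistrano-ext gem described on that wiki page; the stage names and branch are illustrative):

# config/deploy.rb
set :stages, %w(staging production)
set :default_stage, "staging"
require 'capistrano/ext/multistage'

# config/deploy/production.rb
set :branch, "r2s1"

Then cap production testing runs the task with the production stage settings loaded first.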

As Jamie already showed, you can pass parameters to tasks with the -s flag. I want to show you how you can additionally use a default value.
If you want to work with default values, you have to use fetch instead of ||= or checking for nil:
namespace :logs do
  task :tail do
    file = fetch(:file, 'production') # sets 'production' as default value
    puts "I would use #{file}.log now"
  end
end
You can run this task either like this (which uses the default value production for file):
$ cap logs:tail
or like this (which uses the value cron for file):
$ cap logs:tail -s file=cron

Check out capistrano-ash for a library that helps with non-rails deployment. I use it to deploy a PyroCMS app and it works great.
Here is a snippet from my Capfile for that project:
# deploy from git repo
set :repository, "git#git.mygitserver.com:mygitrepo.git"
# tells cap to use git
set :scm, :git
I'm not sure I understand the last two parts of the question. Provide some more detail and I'd be happy to help.
EDIT after example given:
set :repository, "#{scm_user}#gitsrv.domain:app"
Then each person with deploy privileges can add the following to their local ~/.caprc file:
set :scm_user, 'someuser'
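If you want a fallback when :scm_user is not set in ~/.caprc, something like this might work (a sketch, assuming Capistrano 2's exists? helper and using the local login as the default):

# deploy.rb
set :scm_user, ENV['USER'] unless exists?(:scm_user)   # fall back to the local username
set :repository, "#{scm_user}@gitsrv.domain:app"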

Related

How to give an input parameter to a Concourse pipeline on the pipeline page and trigger the job, similar to Jenkins

I would like to know if it is possible to give inputs to a Concourse pipeline from the UI.
I know we can add input details to a git repo and read from the repo, but for every tiny input I need to do a code commit.
For this scenario, is Jenkins better than Concourse?
I tried searching the internet to find out whether it is possible to give inputs to a Concourse pipeline, but I did not find a solution.
Manual inputs via the UI are not a thing in Concourse.
FWIW: when I need frequent inputs and want to avoid git commits for that purpose, I use an S3 versioned-file resource in my pipeline as an input, for example with a send_input.sh script like this:
#!/bin/bash
echo "$1" > /tmp/input.txt
aws s3 cp /tmp/input.txt s3://my-bucket/my-concourse-resource-file.txt
and then
./send_input.sh "this is my input"
then the pipeline picks it up and uses it in my workflow.

How can I define an env variable value during the release phase in Azure?

I have an Nx monorepo project set up. I want to set some environment variables to use throughout my app. I'm looking for a solution where I can define a value for these variables during the release process to each environment (QA, Test, Dev).
Example:
NX_API_URL is my environment variable in my .env file.
If I release to QA env the variable should be NX_API_URL=api-url-qa.com.
If I release to Test env the variable should be NX_API_URL=api-url-test.com
I have found solutions for the build process, but that's not going to work; it needs to happen at the release phase. How can I accomplish this?
If the file that you'd like to modify during the deployment process is either XML or JSON, you can define the URLs as tokens and use the standard 'File Transform' task to replace them: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/file-transform-v2?view=azure-pipelines
For more details regarding file transformation, please refer to the documentation: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/transforms-variable-substitution?view=azure-devops&tabs=Classic
If none of the links above helps you with this, then I'd suggest creating a PowerShell script to do whatever modifications are required, like adding a line after a specific line: https://social.technet.microsoft.com/Forums/office/en-US/fabc9790-0ffe-43b2-9d6b-bc483cd28bb6/add-line-to-a-text-file-just-after-a-specific-line-with-powershell?forum=winserverpowershell

AEM: Issue using Command Line DAM Workflow

I'd like to execute a command line program as a DAM workflow. I tried to implement the ImageMagick example from here: Best Practices for Configuring ImageMagick:
I added a new Workflow Model,
added "command line" from the "DAM Workflow" list.
In the Arguments tab I set the mime type to "image/jpeg" (I even tried without a mime type)
and in Commands: "C:\Program Files\ImageMagick-7.0.7-Q16\magick.exe" convert ${file} -flip ${file}-flipped.jpg (instead of magick convert ..., because in another discussion using an absolute path instead of the global name helped people: Re: CommandLineProcess : ImageMagick)
I then added a launcher and uploaded an image to the DAM.
In the workflow > instances overview, I see that the workflow was started, it's running, and the command line job is set to active.
Unfortunately this state never changes and no new asset is generated via ImageMagick.
I even tried replacing the command with something simple like "ren C:\test\foo.txt bar.txt", which renames a local file. That change never happened either.
My question is: what am I doing wrong, and how can I debug / find the command output? In \crx-quickstart\logs I couldn't find any logs regarding CommandLineProcess.
Thanks

How to pass a parameter to Chef recipe from external source

I'm new to Chef and seeking help here. I'm looking into using Chef to deploy our builds to Chef node servers (Windows Server 2012 machines). I have a cookbook called copy_builds that goes out to a central repository and selects the build we want to deploy and copies it out to the node server. The recipe I have contains basic steps that perform the copy steps, and this recipe could be used for all builds we want to deploy except for one thing: the build name.
Here is an example of the recipe:
powershell_script 'Copy build files' do
  code '
    $Project = "Dev3_SomeCoolBuild"
    net use "\\\\server\\build_share\\drop\\$Project"
    $BuildNum = GC "\\\\server\\build_share\\drop\\$Project\\buildlabel.txt"
    robocopy \\\\server\\build_share\\drop\\$Project\\bin W:\\binroot\\$BuildNum'
end
As you can see, the variable $Project contains the name of the build in this recipe. If we have 100 different builds, all with different names, then what is the best way to handle this without creating 100 different recipes for my copy_builds cookbook?
BTW: this is how I'm currently calling Chef to deploy, which is in a PowerShell script that's external to Chef:
knife node run_list set $Node "recipe[copy_builds::$ProjectName],recipe[install_build]"
This command (from the external PowerShell script) contains the project/build name within its own $ProjectName variable. In this case $ProjectName contains the value 'Dev3_SomeCoolBuild', to reference the recipe Dev3_SomeCoolBuild.rb.
What I'd like is to have just one default recipe under the copy_builds cookbook and pass in the build/project name. Is this possible? And what is the best way to do it? I've read about data bags, attributes, and providers, but I'm not sure whether they would work for what I want.
Please advise.
Thanks,
Keith
The best approach for you is likely to use a single recipe that gets a list of projects to deploy from a databag or node attributes (or both). So basically take what you have now and put it in a loop, and then use either roles to set node attributes or put the project mapping into a databag item.
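A sketch of what that loop could look like, assuming a node attribute listing the builds to deploy (the attribute name copy_builds/projects is illustrative, not from the original cookbook):

# copy_builds/recipes/default.rb
# One powershell_script resource per build name found in the node attributes.
node['copy_builds']['projects'].each do |project|
  powershell_script "Copy build files for #{project}" do
    code <<-EOH
      $Project = "#{project}"
      net use "\\\\server\\build_share\\drop\\$Project"
      $BuildNum = GC "\\\\server\\build_share\\drop\\$Project\\buildlabel.txt"
      robocopy \\\\server\\build_share\\drop\\$Project\\bin W:\\binroot\\$BuildNum
    EOH
  end
end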
I ended up using attributes here to solve my problem. I updated my script to write the build name to the attributes/default.rb file for the copy_builds recipe and upload the cookbook to Chef each time a deployment is run.
My recipe now includes a call to the attributes file to get the build name, like so:
powershell_script 'Copy build files' do
  code <<-EOH
    $BuildNum = GC \\\\hqfas302002c\\build_share\\drop\\"#{node['copy_builds']['build']}"\\buildlabel.txt
    robocopy \\\\hqfas302002c\\build_share\\drop\\"#{node['copy_builds']['build']}"\\webbin W:\\binroot\\$BuildNum /E
  EOH
end
And now my call to Chef looks like this:
knife node run_list set $Node "recipe[copy_builds],recipe[install_build]"
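For reference, the generated attributes file might look like this (a sketch; the build name is the example value from the question):

# copy_builds/attributes/default.rb
default['copy_builds']['build'] = 'Dev3_SomeCoolBuild'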

dpkg: How to use trigger?

I wrote a little CDN server that rebuilds its registry pool when new pool-content-packages are installed into that registry pool.
Instead of having each pool-content-package call the init.d of the cdn-server, I'd like to use triggers. That way it would restart the server only once at the end of an installation run, after all packages were installed.
What do I have to do to use triggers in my packages with debhelper support?
What you are looking for is dpkg-triggers.
One solution, using debhelper to build the Debian packages, is this:
Step 1)
Create file debian/<serverPackageName>.triggers (replace <serverPackageName> with name of your server package).
Step 1a)
Define a trigger that watches the directory of your pool. The content of the file would be:
interest /path/to/my/pool
Step 1b)
But you can also define a named trigger, which has to be fired explicitly (see step 3).
Content of the file:
interest cdn-pool-changed
The name of the trigger cdn-pool-changed is arbitrary; you can choose whatever you want.
Step 2)
Add a handler for the trigger to the file debian/<serverPackageName>.postinst (replace <serverPackageName> with the name of your server package).
Example:
#!/bin/sh
set -e

case "$1" in
    configure)
    ;;

    triggered)
        # here is the handler
        /etc/init.d/<serverPackageName> restart
    ;;

    abort-upgrade|abort-remove|abort-deconfigure)
    ;;

    *)
        echo "postinst called with unknown argument \`$1'" >&2
        exit 1
    ;;
esac

#DEBHELPER#

exit 0
Replace <serverPackageName> with the name of your server package.
Step 3) (only for named triggers, step 1b) )
In every content package, add the file debian/<contentPackageName>.triggers (replace <contentPackageName> with the names of your content packages).
Content of the file:
activate cdn-pool-changed
Use the same trigger name you defined in step 1.
More detailed Information
The best description of dpkg triggers I could find is "How to use dpkg triggers". You can get the corresponding git repository with examples here:
git clone git://anonscm.debian.org/users/seanius/dpkg-triggers-example.git
I had a need for this and read and re-read the docs many times. I think the process, or rather what goes where, is not clearly explained. Here I hope to clarify the use of Debian package triggers.
Service with Configuration Directory
A service reading its settings in a specific directory can mark that directory as being of interest.
Say I create a new service which reads settings from /usr/share/my-service/config/...
That service gets two additions:
In its debian directory I add my-service.triggers
And here are the contents:
# my-service.triggers
interest /usr/share/my-service/config
This means if any other package installs or removes a file from that directory, the trigger enters its "needs to be run" state.
In its debian directory I also add my-service.postinst
And I have a script as follows to check whether the trigger happened and run a process as required:
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "/usr/share/my-service/config" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
That's it.
Now packages adding extensions to your service can add their own configuration file(s) under /usr/share/my-service/config (or a directory under /etc/my-service/my-service.d/... or /var/lib/my-service/..., although that last one should be reserved for dynamic files, not files installed from a package) and dpkg automatically calls your postinst script with:
postinst triggered /usr/share/my-service/config
# where /usr/share/my-service/config is your <interest-path>
This call happens only once, after all the packages were installed, hence the advantage of having a trigger in the first place. This way each package does not need to know that it has to restart my-service, and the restart does not happen more than once, which could otherwise cause all sorts of side effects (e.g. the service tries to listen on a TCP port and gets an "address already in use" error).
IMPORTANT: keep in mind that the postinst should include a line with #DEBHELPER#.
So you do not have to do anything special in other packages. Only make sure to install the configuration files in the correct directory and dpkg picks up from there (i.e. in my example under /usr/share/my-service/config).
I have an extension to BIND9 called ipmgr which makes use of .ini files saved in a specific folder. It uses the files to generate DNS zones (way less errors that way! and it includes support for getting letsencrypt certificates and settings for dmarc/dkim). This package uses this case: a simple directory where configuration files get installed. Other packages do not need to do anything other than install files in the right place (/usr/share/ipmgr/zones, for this package).
Service without a Configuration Folder
In some (rare?) cases, you may need to trigger something in a service which is not driven by the installation of a new configuration file.
In this case, you can use an arbitrary name (it should include your package name to make sure it is unique since this name is global to the entire Debian/Ubuntu system).
To make this one work, you need three files, one of which is a trigger in the other packages.
State the Interest
As above, we have an interest. In this case, the interest is stated as a name on its own. The dpkg system distinguishes between a name and a path because a name cannot include a slash (/) character. Names are limited to ASCII, excluding control characters and spaces. I would suggest you stick to a-z, 0-9 and dashes (-).
# my-service.triggers
interest my-service-settings
This is useful if you cannot simply track a folder. For example, the settings could come from a network connection that a package offers once installed.
Listen for the Triggers
Again, as above, you need a postinst script in your Service Package. This captures the trigger and allows you to run a command. The script is the same, only you test for the name instead of the folder (note that you can have any number of triggers, so you could also have both: a folder as above and a special name as here).
# my-service.postinst
if [ "$1" = "triggered" ]
then
    if [ "$2" = "my-service-settings" ]
    then
        # this may or may not be what you need to do, but this is often
        # how you handle a change in your service config files
        #
        systemctl restart my-service
    fi
    exit 0
fi
The Trigger
As mentioned above, we need a third file. An arbitrary name is not going to be triggered automatically by dpkg. It wouldn't know whether your other package needs to trigger something just like that (although it is fairly automated as it is already).
So in other packages, you create a trigger file which looks like this:
# other-package.triggers
activate my-service-settings
Now we recognize the name; it is the same as the interest stated above.
In other words, if the trigger needs to run for something other than just the installation of files in a given location, use a special name and add this triggers file with the activate keyword.
Other Features
I have not tested the other features of the dpkg-trigger(1) tool. There are other keywords supported in the triggers files:
interest
interest-await
interest-noawait
activate
activate-await
activate-noawait
The deb-triggers manual page has additional information about those. I am not too sure what await/noawait implies, other than that the trigger may happen at any time when noawait is used.
Automatic Trigger Added
The build system on Ubuntu (probably Debian too) automatically adds a triggers file with the following when your package includes a library:
$ cat triggers
# Triggers added by dh_makeshlibs/11.1.6ubuntu2
activate-noawait ldconfig
I suggest you exercise caution if your package includes libraries. If you have your own triggers file, I do not know whether this addition will still happen automatically.
This also shows us a special case where it wants to use noawait. If I understand correctly, it has to run the ldconfig trigger ASAP so your commands will work as expected after the unpack. Otherwise ldd will not know anything about your newly installed library.