I want a Rundeck job to download a file over HTTP on the Rundeck server, copy that file over to other nodes, do work on that file on the nodes, and then delete the file from the Rundeck server.
So far, I've got three jobs:
Get File: has "url" and "localfile" options
Delete File: has "localfile" option
Main Job: has "url" option.
I have Main Job doing these steps:
Workflow step: Call "Get File" job with -url ${option.url} -localfile /tmp/tempfile.${job.execid}
Node step: Copy file to node with SourcePath=/tmp/tempfile.${job.execid} and DestinationPath=/tmp/tempfile.${job.execid}
Node step: Run inline script on node
Workflow step: Call "Delete File" job with -localfile /tmp/tempfile.${job.execid}
Is there some way I can define a variable or an option for "localfile" that all my steps can reuse, rather than having to put /tmp/tempfile.${job.execid} in three or four places? If I ever want to relocate this tempfile, it would be much easier to have one place to change it. I have tried defining an option built from other options in "Main Job", but it didn't work.
You can create an environment variable for it, but you still need to pass this variable to the next job.
See Context Variable Usage.
Also make sure you have configured the remote machine for SSH (see Configuring remote machine for SSH).
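For example, if Main Job defines a single option named localfile, every step can reference ${option.localfile}, and inline scripts can read the environment variable Rundeck injects for each option (RD_OPTION_<NAME>). A minimal sketch, assuming the option name localfile; do_work is a stand-in for your processing command:

# Job reference step arguments:
-url ${option.url} -localfile ${option.localfile}

# Copy File node step:
SourcePath=${option.localfile}
DestinationPath=${option.localfile}

# Inline script on the node:
#!/bin/bash
do_work "$RD_OPTION_LOCALFILE"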
Is there an option to handle the following situation?
I have a pipeline with a Copy Files task in it, which is used to upload some static HTML files from git to blob storage. Everything works perfectly. But sometimes the file gets changed in blob storage (using hosted application tools), and in that case I want the copy task to leave it untouched. So the question is: can I "detect" whether my git file is older than the target blob file and exclude it from the copy task? My initial idea was to use Azure File Copy with its "Optional Arguments" textbox; however, I couldn't find the required option in the documentation. Does it allow such things? Or should this case be handled some other way?
I think you're looking for the ifSourceNewer value for the --overwrite option.
--overwrite string Overwrite the conflicting files and blobs at the destination if this flag is set to true. (default true) Possible values include true, false, prompt, and ifSourceNewer.
More info: azcopy copy - Options
Agree with ickvdbosch. The ifSourceNewer value for the --overwrite option could meet your requirements.
error: couldn't parse "ifSourceNewer" into a "OverwriteOption"
Based on my test, I could reproduce this issue in the Azure File Copy task.
It seems that the ifSourceNewer value can't be set for the --overwrite option in the Azure File Copy task.
Workaround: you could use a PowerShell task to run an azcopy command that uploads the files with --overwrite=ifSourceNewer.
For example:
azcopy copy "filepath" "BlobURLwithSASToken" --overwrite=ifSourceNewer --recursive
For more detailed info, you could refer to this doc.
For the issue about the Azure File copy task, I suggest that you could submit a feedback ticket in the following link: Report task issues.
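A minimal sketch of that workaround as a pipeline step, assuming azcopy is available on the agent (the source path, storage account, container, and SAS token below are placeholders you would replace with your own):

steps:
- powershell: |
    azcopy copy "$(Build.SourcesDirectory)/static/*.html" "https://<account>.blob.core.windows.net/<container>?<SAS-token>" --overwrite=ifSourceNewer
  displayName: Upload only newer files with azcopy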
I created a job that should be reusable for new files. All the activities in the job, the maps, and everything else stay the same except for the file name. I already tried reusing it once, but it seems that I need to re-load the file and remap everything again, which is inefficient. Is there any way for me to pass a different file to the job without remapping, reconfiguring, and reloading anything?
You have multiple options for allowing a DataStage parallel job to use a different filename for input on each job run:
When using either the Sequential File stage or the File Connector stage, instead of typing the actual filename, you can enter the name of a job parameter that has been defined on the Parameters tab of the job properties dialog. For example, if you define a string parameter myFile, then in the filename field of the input stage you would enter #myFile#, and at run time that is replaced by the current value of the myFile parameter. If you run the job manually from the Director/Designer clients, the job run dialog lets you specify a value for each job parameter. If you start the job via the dsjob command, there are options to pass job parameters on the command line, as shown below. You also have the option to use parameter-set files that you can modify prior to the job run.
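For example, a dsjob invocation that passes the parameter might look like this (the project, job, and file names here are placeholders):

dsjob -run -param myFile=/data/input/todays_file.txt MyProject MyParallelJob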
Another option would be to use a file location and pattern instead of a specific file name. Both the Sequential File stage and the File Connector stage let you specify a pattern, for example: /data/my_input_files/*.txt
Then, each time you run the job, it will read any files at that location matching the pattern, so it can process multiple files. However, to prevent re-processing files from prior runs, you will want to clean up any files at that location after the job completes. Then, when you have new files to process, just put them in that directory and re-run the job.
If all the files share the same data structure, you only need to implement one parallel job. And if the file names follow a similar pattern, such as 1234ab.xls, 1234vd.xls, 1234gd.xls, ..., you can pass the file name as 1234??.xls in the file name parameter of the sequence job that executes the parallel job (use this as the file name in the parallel job).
# test.ps1
function foo {
echo "bar"
}
I have a file named test.ps1 which contains some frequently called functions. And I want it to be shared between my jenkins master and slave nodes.
I've tried creating two copies of test.ps1 and putting one on the master and one on the slave node. But this is not convenient, because I'll have to maintain two copies of test.ps1.
Another way I've tried is keeping one test.ps1 on the master node and copying it to the slave with the Publish Over SSH plugin whenever I need it on the slave. This is not convenient, either.
How can I share test.ps1 between master and slave nodes?
I found that the Config File Provider Plugin solves my problem:
First, create a custom file using this plugin:
Write the name and content of this shared file. Then save it.
Inside your project, check "Provide Configuration files".
Jenkins will create test.ps1 inside the Target folder (in my case, I set it to the workspace of the project) whenever you build the project. Note that this folder must exist before you build the project.
The Variable field defines an environment variable through which you can refer to the file test.ps1.
Inside your build step, you can import the file using . $env:util. Then you will be able to call the function foo.
The above works no matter whether you are on the master or a slave node.
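For example, a PowerShell build step could look like this (assuming the variable was named util in the plugin configuration, as above):

# Dot-source the shared file the plugin placed in the workspace
. $env:util
# The shared functions are now available
foo    # prints "bar"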
I'm new to Chef and seeking help here. I'm looking into using Chef to deploy our builds to Chef node servers (Windows Server 2012 machines). I have a cookbook called copy_builds that goes out to a central repository and selects the build we want to deploy and copies it out to the node server. The recipe I have contains basic steps that perform the copy steps, and this recipe could be used for all builds we want to deploy except for one thing: the build name.
Here is an example of the recipe:
powershell_script 'Copy build files' do
  code '
    $Project = "Dev3_SomeCoolBuild"
    net use "\\\\server\\build_share\\drop\\$Project"
    $BuildNum = GC "\\\\server\\build_share\\drop\\$Project\\buildlabel.txt"
    robocopy \\\\server\\build_share\\drop\\$Project\\bin W:\\binroot\\$BuildNum'
end
As you can see, the variable $Project contains the name of the build in this recipe. If we have 100 different builds, all with different names, then what is the best way to handle this without creating 100 different recipes for my copy_builds cookbook?
BTW: this is how I'm currently calling Chef to deploy, which is in a PowerShell script that's external to Chef:
knife node run_list set $Node "recipe[copy_builds::$ProjectName],recipe[install_build]"
This command (from the external PowerShell script) contains the project/build name in its own $ProjectName variable. In this case $ProjectName contains the value 'Dev3_SomeCoolBuild', to reference the recipe Dev3_SomeCoolBuild.rb.
What I'd like is have just one default recipe under copy_builds cookbook, and pass in the build/project name. Is this possible? And what is the best way to do it? I've read about data bags, attributes, and providers, but not sure if they would work for what I want.
Please advise.
Thanks,
Keith
The best approach for you is likely to use a single recipe that gets a list of projects to deploy from a data bag or node attributes (or both). So basically take what you have now and put it in a loop, as sketched below, and then either use roles to set node attributes or put the project mapping into a data bag item.
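A minimal sketch of that loop, reusing the PowerShell from the question and assuming a node attribute copy_builds/projects holds an array of build names (the attribute name is an assumption):

# Hypothetical attribute: an array of build names to deploy
node['copy_builds']['projects'].each do |project|
  powershell_script "Copy build files for #{project}" do
    code <<-EOH
      net use "\\\\server\\build_share\\drop\\#{project}"
      $BuildNum = GC "\\\\server\\build_share\\drop\\#{project}\\buildlabel.txt"
      robocopy \\\\server\\build_share\\drop\\#{project}\\bin W:\\binroot\\$BuildNum
    EOH
  end
end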
I ended up using attributes to solve my problem. I updated my script to write the build name to the attributes/default.rb file of the copy_builds cookbook and to upload the cookbook to Chef each time a deployment is run.
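The attributes file ends up looking something like this (the build name shown is just the example from the question; the external script overwrites it on each deployment):

# attributes/default.rb
default['copy_builds']['build'] = 'Dev3_SomeCoolBuild'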
My recipe now includes a call to the attributes file to get the build name, like so:
powershell_script 'Copy build files' do
  code <<-EOH
    $BuildNum = GC \\\\hqfas302002c\\build_share\\drop\\"#{node['copy_builds']['build']}"\\buildlabel.txt
    robocopy \\\\hqfas302002c\\build_share\\drop\\"#{node['copy_builds']['build']}"\\webbin W:\\binroot\\$BuildNum /E
  EOH
end
And now my call to Chef looks like this:
knife node run_list set $Node "recipe[copy_builds],recipe[install_build]"
I'm looking into the possibility of using Capistrano as a generic deploy solution. By "generic", I mean not Rails. I'm not happy with the quality of the documentation I'm finding, though, granted, I'm not looking at the docs that presume you are deploying Rails. So I'll just try to hack up something based on a few examples, but there are a couple of problems I'm facing right from the start.
My problem is that cap deploy doesn't have enough information to do anything. Importantly, it is missing the tag for the version I want to deploy, and this has to be passed on the command line.
The other problem is how I specify my git repository. Our git server is accessed by SSH on the user's account, but I don't know how to change deploy.rb to use the user's id as part of the scm URL.
So, how do I accomplish these things?
Example
I want to deploy the result of the first sprint of the second release. That's tagged in the git repository as r2s1. Also, let's say user "johndoe" gets the task of deploying the system. To access the repository, he has to use the URL johndoe@gitsrv.domain:app. So the remote URL for the repository depends on the user id.
The command lines to get the desired files would be these:
git clone johndoe@gitsrv.domain:app
cd app
git checkout r2s1
Update: For Capistrano 3, see scieslak's answer below.
As jarrad has said, capistrano-ash is a good basic set of helper modules to deploy other project types, though it's not required; at the end of the day, Capistrano is just a scripting framework, and most tasks are done with system commands, ending up almost like a shell script.
To pass in parameters, you can use the -s flag when running cap to give it a key-value pair. First create a task like this.
desc "Parameter Testing"
task :parameter do
puts "Parameter test #{branch} #{tag}"
end
Then start your task like so.
cap test:parameter -s branch=master -s tag=1.0.0
For the last part, I would recommend setting up passwordless access to your server using SSH keys. But if you want to take the name of the currently logged-in user, you can do something like this.
desc "Parameter Testing"
task :parameter do
system("whoami", user)
puts "Parameter test #{user} #{branch} #{tag}"
end
UPDATE: Edited to work with the latest versions of Capistrano. The configuration array is no longer available.
Global parameters: see the comments. Use set :branch, fetch(:branch, 'a-default-value') to use parameters globally, and pass them with -S instead.
Update: regarding passing parameters to Capistrano 3 tasks only.
I know this question is quite old, but it still pops up first on Google when searching for passing parameters to a Capistrano task. Unfortunately, the fantastic answer provided by Jamie Sutherland is no longer valid with Capistrano 3. Before you waste your time trying it out, expect the results to be as below:
cap test:parameter -s branch=master
outputs:
cap aborted!
OptionParser::AmbiguousOption: ambiguous option: -s
OptionParser::InvalidOption: invalid option: s
and
cap test:parameter -S branch=master
outputs:
invalid option: -S
The valid answers for Capistrano 3, provided by @senz and Brad Dwyer, can be found by following this gold link:
Capistrano 3 pulling command line arguments
For completeness, see the code below to find out about the two options you have.
1st option:
You can pass parameters to a task and read them by key, as you would with a regular hash:
desc "This task accepts optional parameters"
task :task_with_params, :first_param, :second_param do |task_name, parameter|
run_locally do
puts "Task name: #{task_name}"
puts "First parameter: #{parameter[:first_param]}"
puts "Second parameter: #{parameter[:second_param]}"
end
end
Make sure there is no space between parameters when you call cap:
cap production task_with_params[one,two]
2nd option:
While calling any task, you can assign environment variables and then read them from the code:
set :first_param, ENV['first_env'] || 'first default'
set :second_param, ENV['second_env'] || 'second default'

desc "This task accepts optional parameters"
task :task_with_env_params do
  run_locally do
    puts "First parameter: #{fetch(:first_param)}"
    puts "Second parameter: #{fetch(:second_param)}"
  end
end
To assign the environment variables, call cap as below:
cap production task_with_env_params first_env=one second_env=two
Hope that will save you some time.
I'd suggest using ENV variables.
Something like this (command):
$ GIT_REPO="johndoe#gitsrv.domain:app" GIT_BRANCH="r2s1" cap testing
Cap config:
# deploy.rb:
task :testing, :roles => :app do
  puts ENV['GIT_REPO']
  puts ENV['GIT_BRANCH']
end
And take a look at the 2.x multistage extension (https://github.com/capistrano/capistrano/wiki/2.x-Multistage-Extension); maybe this approach will be useful for you as well.
As Jamie already showed, you can pass parameters to tasks with the -s flag. I want to show you how you additionally can use a default value.
If you want to work with default values, you have to use fetch instead of ||= or checking for nil:
namespace :logs do
  task :tail do
    file = fetch(:file, 'production') # sets 'production' as default value
    puts "I would use #{file}.log now"
  end
end
You can run this task either with (using the default value production for file):
$ cap logs:tail
or with (using the value cron for file):
$ cap logs:tail -s file=cron
Check out capistrano-ash for a library that helps with non-rails deployment. I use it to deploy a PyroCMS app and it works great.
Here is a snippet from my Capfile for that project:
# deploy from git repo
set :repository, "git#git.mygitserver.com:mygitrepo.git"
# tells cap to use git
set :scm, :git
I'm not sure I understand the last two parts of the question. Provide some more detail and I'd be happy to help.
EDIT after example given:
set :repository, "#{scm_user}@gitsrv.domain:app"
Then each person with deploy privileges can add the following to their local ~/.caprc file:
set :scm_user, 'someuser'
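With that in place, when johndoe runs a deploy, the repository URL resolves to johndoe@gitsrv.domain:app, matching the example above.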