Rake::Task sources missing - rake

I just started using Rake instead of Make for building my projects, and I would like to use some kind of "task template" to automate the build.
Consider the following snippets:
task :test1 => ['1', '2']
task :test2 => ['3', '4']
Rake::Task.tasks.each do |task|
  p task
  p task.sources
end
The output is:
$ rake
<Rake::Task test1 => [1, 2]>
[]
<Rake::Task test2 => [3, 4]>
[]
My question is: why is task.sources [], i.e. why are the prerequisites missing? Thanks in advance.

The prerequisites of a task are accessed with task.prerequisites.
task.sources and task.source are only used for tasks that are built from a rule, as described in the rdocs: http://ruby-doc.org/stdlib-2.1.2/libdoc/rake/rdoc/Rake/Task.html#method-i-source
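A minimal sketch to illustrate the difference (the file and task names below are made up for illustration):
# Rakefile sketch -- prerequisites vs. rule sources
task :test1 => ['1', '2']

task :show do
  t = Rake::Task[:test1]
  p t.prerequisites   # => ["1", "2"]  -- the dependency list you are after
  p t.sources         # => [] in your rake version, since :test1 was not built from a rule
end

# source/sources are populated for tasks synthesized from a rule, e.g.:
rule '.o' => '.c' do |t|
  # for a rule-generated task such as 'main.o', t.source is 'main.c'
  sh "cc -c -o #{t.name} #{t.source}"
end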

Related

Publish NUnit Test Results in Post Always Section

I'm trying to run a pipeline that does some Pester testing and publishes the NUnit results.
New tests were introduced, and for whatever reason Jenkins no longer publishes the test results and errors out immediately after the PowerShell script. Hence, it doesn't get to the NUnit publish step. I receive this:
ERROR: script returned exit code 128
Finished: FAILURE
I've been trying to include the publish in the always block of the post section of the Jenkinsfile; however, I'm running into problems making that NUnit test file available.
I've tried establishing an agent and unstashing the file (even though it probably won't be stashed if the PowerShell script cancels the whole pipeline). When I use agent I get the following exception:
java.lang.NoSuchMethodError: No such DSL method 'agent' found among steps
Here is the Jenkinsfile:
pipeline {
  agent none
  environment {
    svcpath = 'D:\\svc\\'
    unitTestFile = 'UnitTests.xml'
  }
  stages {
    stage ('Checkout and Stash') {
      agent { label 'Agent1' }
      steps {
        stash name: 'Modules', includes: 'Modules/*/**'
        stash name: 'Tests', includes: 'Tests/*/**'
      }
    }
    stage ('Unit Tests') {
      agent { label 'Agent1' }
      steps {
        dir(svcpath + 'Modules\\'){ deleteDir() }
        dir(svcpath + 'Tests\\'){ deleteDir() }
        dir(svcpath){
          unstash name: 'Modules'
          unstash name: 'Tests'
        }
        dir(svcpath + 'Tests\\'){
          powershell """
            \$requiredCoverageThreshold = 0.90
            \$modules = Get-ChildItem ../Modules/ -File -Recurse -Include *.psm1
            \$result = Invoke-Pester -CodeCoverage \$modules -PassThru -OutputFile ${unitTestFile} -OutputFormat NUnitXml
            \$codeCoverage = \$result.CodeCoverage.NumberOfCommandsExecuted / \$result.CodeCoverage.NumberOfCommandsAnalyzed
            Write-Output \$codeCoverage
            if (\$codeCoverage -lt \$requiredCoverageThreshold) {
              Write-Output "Build failed: required code coverage threshold of \$(\$requiredCoverageThreshold * 100)% not met. Current coverage: \$(\$codeCoverage * 100)%."
              exit 1
            } else {
              write-output "Required code coverage threshold of \$(\$requiredCoverageThreshold * 100)% met. Current coverage: \$(\$codeCoverage * 100)%."
            }
          """
          stash name: 'TestResults', includes: unitTestFile
          nunit testResultsPattern: unitTestFile
        }
      }
      post {
        always {
          echo 'This will always run'
          agent { label 'Agent1' }
          unstash name: 'TestResults'
          nunit testResultsPattern: unitTestFile
        }
        success {
          echo 'This will run only if successful'
        }
        failure {
          echo 'This will run only if failed'
        }
        unstable {
          echo 'This will run only if the run was marked as unstable'
        }
        changed {
          echo 'This will run only if the state of the Pipeline has changed'
          echo 'For example, if the Pipeline was previously failing but is now successful'
        }
      }
    }
  }
}
Any and all input is welcome! Thanks!
The exception you are getting is due to Jenkins' strict pipeline DSL. Documentation of the allowable uses of agent is here.
Currently agent {...} is not allowed to be used in the post section. Maybe this will change in the future. If you require the whole job to run on the node that services label 'Agent1', the only way to do that currently is to:
1) Put agent {label 'Agent1'} immediately under pipeline { to make it global.
2) Remove all instances of agent {label 'Agent1'} from each stage.
3) Remove the agent {label 'Agent1'} from the post section.
The post section acts more like the traditional scripted DSL than the declarative pipeline DSL, so you have to use node() instead of agent.
I believe I've had this same question myself, and this SO post has the answer and some good context.
This Jenkins issue isn't exactly the same thing but shows the node syntax in the post stage.

How to set up the build number as the pipeline id in fastlane?

This is what I currently have in Fastfile:
def build(target_name)
  cocoapods
  cert
  sigh

  if ENV['CI_PIPELINE_ID']
    increment_build_number(build_number: "#{ENV['CI_PIPELINE_ID']}")
  end

  build_app(
    scheme: target_name,
    workspace: WORKSPACE_FILE_PATH,
    clean: true,
    output_directory: OUTPUT_PATH,
    output_name: target_name + '.ipa',
    export_options: {
      provisioningProfiles: {
        BETA_BUNDLE_IDENTIFIER => BETA_PROVISIONING_PROFILE,
        DEMO_BUNDLE_IDENTIFIER => DEMO_PROVISIONING_PROFILE,
        DEV_BUNDLE_IDENTIFIER => DEV_PROVISIONING_PROFILE
      }
    }
  )
end
But this code ends up with an email from Fabric like this:
v3.3.21 (116)
instead of:
v3.3.21 (11741)
Why doesn't it assign the pipeline id to the build number?
It looks like it doesn't get inside the if statement. Is it possible that the CI_PIPELINE_ID variable is not visible to the runner?
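One way to narrow this down (a debugging sketch, not part of the original Fastfile; it assumes only fastlane's standard UI helper) is to print the variable right before the guard:
def build(target_name)
  # Print what the lane actually sees; nil means the runner never exported CI_PIPELINE_ID
  UI.message("CI_PIPELINE_ID=#{ENV['CI_PIPELINE_ID'].inspect}")

  if ENV['CI_PIPELINE_ID']
    increment_build_number(build_number: ENV['CI_PIPELINE_ID'])
  end
  # ... rest of the lane as above
end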

received unregistered task of type

I am trying to run tasks which are in memory.
Registered tasks on the worker:
[2012-09-13 11:10:18,928: WARNING/PoolWorker-1] [u'B.run', u'M1.run', u'M11.run', u'M22.run', u'M23.run', u'M24.run', u'M25.run', u'M26.run', u'M4.run', u'celery.backend_cleanup', u'celery.chain', u'celery.chord', u'celery.chord_unlock', u'celery.chunks', u'celery.group', u'celery.map', u'celery.starmap', u'impmod.run', u'initializerNew.run']
But it still gives errors:
[2012-09-13 11:19:59,848: ERROR/MainProcess] Received unregistered task of type 'M24.run'.
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'retries': 0, 'task': 'M24.run', 'eta': None, 'args': [{'cnt': '3', 'ids': '0001-0004,0002-0004', 'NagID': 2, 'wgt': '3', 'ModID': 'M24', 'ProfileModuleID': 64, 'mhs': '1'}, 0], 'expires': None, 'callbacks': None, 'errbacks': None, 'kwargs': {}, 'id': 'ddf5f520-803b-4dc9-ad3b-a931d90950a6', 'utc': True} (394b)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery-3.0.4-py2.7.egg/celery/worker/consumer.py", line 410, in on_task_received
strategies[name](message, body, message.ack_log_error)
KeyError: 'M24.run'
Can you attach the command which starts Celery? It looks like this application has a different sys.path; that's why the Celery app couldn't import the 'M24.run' task.
Also, you should remember that Celery requires setting the module names where your tasks are located.
Something similar to:
CELERY_INCLUDE = [
'M24',
]

dotcloud supervisord.conf file environment specification

http://docs.dotcloud.com/guides/daemons/ states:
Configuring The Environment
You can easily modify the environment of execution of your daemon with the “directory” and “environment” directives to change the directory where the command is executed and to define additional environment variable. For example:
[program:daemonname]
command = php my_daemon.php
directory = /home/dotcloud/current/
environment = QUEUE=*, VERBOSE=TRUE
However, I'm finding my PYTHONPATH environment variable is not being set:
dotcloud.yml:
www:
  type: python
db:
  type: postgresql
worker:
  type: python-worker
supervisord.conf:
[program:apnsd]
command=/home/dotcloud/current/printenv.py
environment=PYTHONPATH=/home/dotcloud/current/apnsd/
printenv.py
#! /home/dotcloud/env/bin/python
import os
print "ENVIRONMENT"
print os.environ
the logs:
ENVIRONMENT
{'SUPERVISOR_ENABLED': '1', 'SUPERVISOR_SERVER_URL': 'unix:///var/dotcloud/supervisor.sock', 'VERBOSE': 'no', 'UPSTART_INSTANCE': '', 'PYTHONPATH': '/', 'PREVLEVEL': 'N', 'UPSTART_EVENTS': 'runlevel', '/': '/', 'SUPERVISOR_PROCESS_NAME': 'apnsd', 'UPSTART_JOB': 'rc', 'PWD': '/', 'SUPERVISOR_GROUP_NAME': 'apnsd', 'RUNLEVEL': '2', 'PATH': '/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin', 'runlevel': '2', 'previous': 'N'}
They do not show a modified PYTHONPATH variable!
There is a bug in Supervisor; some variables (like those containing a /) have to be quoted.
In that case, you need:
[program:apnsd]
command=/home/dotcloud/current/printenv.py
environment= PYTHONPATH="/home/dotcloud/current/apnsd/"
(The space in = PYTHONPATH is not mandatory, it's just to make the file slightly more readable; the quotes around the value of PYTHONPATH are, however, required!)
I will update dotCloud's documentation to mention this issue.

Capistrano compile assets error - assets:precompile:nondigest?

My App seems to be deploying correctly but I'm getting this error:
* executing "cd /home/deploy/tomahawk/releases/20120208222225 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile"
servers: ["ip_address"]
[ip_address] executing command
*** [err :: ip_address] /opt/ruby/bin/ruby /opt/ruby/bin/rake assets:precompile:nondigest RAILS_ENV=production RAILS_GROUPS=assets
I've tried the solutions here for compiling assets: http://lassebunk.dk/2011/09/03/getting-your-assets-to-work-when-upgrading-to-rails-3-1/
And here: http://railsmonkey.net/2011/08/deploying-rails-3-1-applications-with-capistrano/
And here: http://dev.af83.com/2011/09/30/capistrano-rails-3-1-assets-can-be-tricky.html
Here is my deploy.rb :
require "bundler/capistrano"
load 'deploy/assets'
set :default_environment, {
  'PATH' => "/opt/ruby/bin/:$PATH"
}
set :application, "tomahawk"
set :repository, "repo_goes_here"
set :deploy_to, "/home/deploy/#{application}"
set :rails_env, 'production'
set :branch, "master"
set :scm, :git
set :user, "deploy"
set :runner, "deploy"
set :use_sudo, true
role :web, "my_ip"
role :app, "my_ip"
role :db, "my_ip", :primary => true
set :normalize_asset_timestamps, false
after "deploy", "deploy:cleanup"
namespace :deploy do
  desc "Restarting mod_rails with restart.txt"
  task :restart, :roles => :app, :except => { :no_release => true } do
    run "touch #{current_path}/tmp/restart.txt"
  end

  [:start, :stop].each do |t|
    desc "#{t} task is a no-op with mod_rails"
    task t, :roles => :domain do ; end
  end
end

task :after_update_code do
  run "ln -nfs #{deploy_to}/shared/config/database.yml #{release_path}/config/database.yml"
end
First, don't forget to add the gems below:
group :production do
  gem 'therubyracer'
  gem 'execjs'
end
Then in your cap file just add this line to your after_update_code task:
run "cd #{release_path}; rake assets:precompile RAILS_ENV=production "
this worked fine for me ;)
cheers,
Gregory HORION
I had the same problem. I added this to my deploy.rb (to add the '--trace' option):
namespace :deploy do
  namespace :assets do
    task :precompile, :roles => :web, :except => { :no_release => true } do
      run "cd #{current_path} && #{rake} RAILS_ENV=#{rails_env} RAILS_GROUPS=assets assets:precompile --trace"
    end
  end
end
And the error seems to be just a notice:
*** [err :: my-server] ** Invoke assets:precompile (first_time)
...
I later noticed that Capistrano wasn't able to delete old releases; I got an error:
*** [err :: ip_address] sudo: no tty present and no askpass program specified
I found this link regarding this error:
http://www.mail-archive.com/capistrano@googlegroups.com/msg07323.html
I had to add this line to my deploy file:
default_run_options[:pty] = true
This also solved the weird error I was getting above.
The official explanation, which I don't understand :)
No default PTY. Prior to 2.1, Capistrano would request a pseudo-tty for each command that it executed. This had the side-effect of causing the profile scripts for the user to not be loaded. Well, no more! As of 2.1, Capistrano no longer requests a pty on each command, which means your .profile (or .bashrc, or whatever) will be properly loaded on each command! Note, however, that some have reported on some systems, when a pty is not allocated, some commands will go into non-interactive mode automatically. If you’re not seeing commands prompt like they used to, like svn or passwd, you can return to the previous behavior by adding the following line to your capfile: default_run_options[:pty] = true
Here's what worked for me:
1) Add rvm-capistrano to your Gemfile
2) In config/deploy.rb, add the lines:
require 'rvm/capistrano'
set :rvm_ruby_string, '1.9.2' # Set to your version number
3) You may also need to set :rvm_type and :rvm_bin_path. See this Ninjahideout blog that goes into more detail.
4) apt-get/yum install nodejs on your server
(See my reply to this related Stack Overflow question.)
The message you see is the output of rake assets:precompile.
To avoid the default output when you run rake assets:precompile, the solution is to add -q after your command.
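For example, applied to the Capistrano override from the earlier answer, the sketch below just appends -q to the same command:
namespace :deploy do
  namespace :assets do
    task :precompile, :roles => :web, :except => { :no_release => true } do
      # -q (--quiet) keeps rake from echoing the command line it shells out to
      run "cd #{current_path} && #{rake} RAILS_ENV=#{rails_env} RAILS_GROUPS=assets assets:precompile -q"
    end
  end
end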
The analysis below explains why, if you want to see the details:
# :gem_path/actionpack/lib/sprockets/assets.rake
namespace :assets do
  # task entry point; it will call invoke_or_reboot_rake_task
  task :precompile do
    invoke_or_reboot_rake_task "assets:precompile:all"
  end

  # it will call ruby_rake_task
  def invoke_or_reboot_rake_task(task)
    ruby_rake_task task
  end

  # it will call ruby
  def ruby_rake_task(task, fork = true)
    env    = ENV['RAILS_ENV'] || 'production'
    groups = ENV['RAILS_GROUPS'] || 'assets'
    args   = [$0, task, "RAILS_ENV=#{env}", "RAILS_GROUPS=#{groups}"]
    ruby(*args)
  end
end

# :gem_path/rake/file_utils.rb
module FileUtils
  # it will call sh
  def ruby(*args, &block)
    options = (Hash === args.last) ? args.pop : {}
    sh(*([RUBY] + args + [options]), &block)
  end
  # it will call set_verbose_option;
  # the command line is echoed whenever options[:verbose] is truthy,
  # and by default options[:verbose] ends up true (see below)
  def sh(*cmd, &block)
    # ...
    set_verbose_option(options)
    # ...
    Rake.rake_output_message cmd.join(" ") if options[:verbose]
    # ...
  end

  # the default of Rake::FileUtilsExt.verbose_flag is Rake::FileUtilsExt::DEFAULT (an object),
  # so options[:verbose] becomes true unless it was set explicitly
  def set_verbose_option(options) # :nodoc:
    unless options.key? :verbose
      options[:verbose] =
        Rake::FileUtilsExt.verbose_flag == Rake::FileUtilsExt::DEFAULT ||
        Rake::FileUtilsExt.verbose_flag
    end
  end
end
# :gem_path/rake/file_utils_ext.rb
module Rake
  module FileUtilsExt
    DEFAULT = Object.new
  end
end

# :gem_path/rake/application.rb
# the only way to suppress the output of `rake assets:precompile`
# is to add the `-q` option, which sets Rake.verbose(false)
['--quiet', '-q',
  "Do not log messages to standard output.",
  lambda { |value| Rake.verbose(false) }
],
['--verbose', '-v',
  "Log message to standard output.",
  lambda { |value| Rake.verbose(true) }
],