How can I call another task from a task in a Cakefile?
I tried tasks[taskName].action options, but it didn't work because tasks is not bound in the scope of my Cakefile:
/home/omer/___/Cakefile:52
return console.log(tasks);
^
ReferenceError: tasks is not defined
...
Any ideas?
Use invoke to call another task from within the same Cakefile.
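For example, a minimal sketch (the task names here are made up):
task 'build', 'compile the source', ->
  console.log 'building...'

task 'deploy', 'build, then deploy', (options) ->
  invoke 'build'            # runs the build task defined above
  console.log 'deploying...'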
I have the following code in my plugin:
@Override
void apply(Project project) {
    project.extensions.create(EXTENSION, TestExtension)
    project.task("task1") << {
        println "Task 1"
        println(project.mmm.test)
        def extension = project.extensions.findByName(EXTENSION)
        println(extension.test)
    }
    project.task("task2", type: TestTask) {
        println "Task 2"
        def extension = project.extensions.findByName(EXTENSION)
        // conventionMapping.test = {extension.test}
        // println(project.extensions.findByName(EXTENSION).test)
        // test = "test"
    }
}
In task1, extension.test returns the correct value. However, in task2 extension.test always returns null. What am I doing wrong? Is there a better way to pass some of the extension's values as input to a task? I am using Gradle 1.12 with JDK 1.8 on a Mac. Best regards
Edit: correct version:
project.task("task2", type: TestTask) {
project.afterEvaluate {
def extension = project.extensions.findByName(EXTENSION)
println(project.extensions.findByName(EXTENSION).test)
test = project.extensions.findByName(EXTENSION).test
}
}
task1 prints the value at execution time (notice the <<); task2 prints it at configuration time (before the rest of the build script after apply plugin: ... has been evaluated). This explains why the println for task1 works as expected and the println for task2 doesn't.
However, configuring a task at execution time is too late. Instead, a plugin needs to defer reading user-provided values until the end of the configuration phase (after build scripts have been evaluated, but before any task has been executed). There are several techniques for doing so. One of the simpler ones is to wrap any such read access with project.afterEvaluate { ... }.
Updated answer, as some time has passed and Gradle has evolved its concepts and syntax in the meantime.
To get an up-to-date, optimally configured task, use the following syntax:
tasks.register("task2", TestTask) {
    doLast {
        // the extension is only read here, at execution time
        def extension = project.extensions.findByName(EXTENSION)
        println(extension.test)
        test = extension.test
    }
}
Task Action
A task has both configuration and actions. When using doLast, you are simply using a shortcut to define an action. Code defined in the configuration section of your task will get executed during the configuration phase of the build, regardless of which task was targeted.
Deprecated << Operator
The << operator was deprecated in Gradle 4.x and removed in 5.0. See Task Action (doLast) above for how to run task logic during the execution phase, when all extensions and configurations have been evaluated.
Task Configuration Avoidance
To avoid the cost of creating a task that won't be executed by the invoked Gradle command, use the TaskContainer.register(String) method.
Avoid afterEvaluate
afterEvaluate should be avoided in most cases; see @sterling's comments in the linked thread. You can now run the task action part in the execution phase, and you can also rely on task inputs/outputs in combination with Lazy Configuration.
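As a rough sketch of that lazy approach (not the plugin's actual code: it assumes TestExtension and TestTask are reworked to expose Property<String>, which requires a reasonably recent Gradle version):
abstract class TestExtension {
    abstract Property<String> getTest()
}

abstract class TestTask extends DefaultTask {
    @Input
    abstract Property<String> getTest()

    @TaskAction
    void printTest() {
        // the provider is resolved here, at execution time
        println(test.get())
    }
}

// inside the plugin's apply(Project project):
def extension = project.extensions.create('testExtension', TestExtension)
project.tasks.register('task2', TestTask) {
    // lazy wiring: no value is read until the task actually runs
    test.set(extension.test)
}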
I have a Capistrano 2 task that updates a file:
task :update_file, roles: :app do
  ...
end
Now I need to write a task that performs the same operation on all the files within a folder, so from within update_folder I'd like to call update_file, passing it the name of the file to update, but I'm having a hard time doing so.
How can I set a Capistrano task to accept an argument and call it from inside another task?
Thanks
You can do it like this:
$gkey=""
$gvalue=""
desc "generate config files"
task :gen_conf_files do
$servers.each do |key,value|
$MYSQL["mysql"]["passwd"]="#{key.to_s}++"
$gkey=key.to_s
$gvalue=value.to_s
$NODE_NAME="#{key.to_s}"
$NODE_NUM=key.to_s[9,10]
gen_mfs_conf
gen_cfs_conf
gen_client_conf
gen_config_shell
gen_cdn_reacheyes_net
gen_click_reacheyes_net
gen_log_reacheyes_net
gen_fluent_conf
gen_nagios_conf
end
end
desc "genrate fluent config file"
task :gen_fluent_conf do
file = "#{generate_conf_dir}/#{$gvalue}/fluent.conf"
filename ="#{config_file_path}/fluent.conf.sample"
erb = ERB.new(File.read(filename))
erb.filename = filename
File = File.new("#{file}", "w")
File.puts erb.result
end
First define a global variable:
$gvalue = ""
then you can use this variable across different tasks.
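Applied to the original update_file/update_folder question, the same idea could look roughly like this (a sketch: the file list, the paths, and the update_script command are placeholders):
$file_to_update = ""

task :update_file, roles: :app do
  run "update_script #{$file_to_update}"   # placeholder for the real update logic
end

task :update_folder, roles: :app do
  %w[a.conf b.conf c.conf].each do |name|
    $file_to_update = "/etc/myapp/#{name}"  # hypothetical paths
    update_file                             # Capistrano 2 tasks can be called like methods
  end
end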
Consider the following Rake tasks:
task :deploy => [:package] do
end
task :package => [:build] do
end
task :build do
end
Is there a way to invoke Rake on the command line to execute the package and deploy tasks, but not the build task?
Short answer, no.
The way I usually go about this is, instead of using the dependent-task notation like you have above:
task :deploy => [:package] do
end
I create an alias task for whatever action is to be completed:
task :all => [:build, :package, :deploy]
task :fastDeploy => [:package, :deploy]
task :deploy do
end
task :package do
end
task :build do
end
It's not very elegant, but I do find it more readable: you can see at a glance which tasks depend on which, instead of the spaghetti-like structure the dependency notation can result in. When you have a lot of tasks, it can be awkward to debug the logic and figure out what has gone wrong and where.
Hope this helps.
With the command gradle tasks one can get a report of all available tasks. Is there any way to add a parameter to this command and filter tasks by their task group?
I would like to issue a command like gradle tasks group:Demo to filter all tasks and retrieve a list of only those tasks that belong to the task group called Demo.
From v5.1, you can do this: gradle tasks --group=<group-name>
Gradle docs.
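For the group named Demo from the question, that would be:
gradle tasks --group=Demo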
You can do so by adding the following task to your build script:
task showOnlyMyTasks << {
    tasks.each { task ->
        if (task.group == 'My task group name') {
            println(task.name)
        }
    }
}
And then run: gradle showOnlyMyTasks
If you need only the list, add the quiet flag: gradle -q showOnlyMyTasks
Old answer: There is no such feature. Feel free to suggest new features at http://forums.gradle.org.
Now available since Gradle 5.1; see this answer: https://stackoverflow.com/a/54341658/4433326
Under some conditions, I want to make a celery task fail from within that task. I tried the following:
from celery.task import task
from celery import states

@task()
def run_simulation():
    if some_condition:
        run_simulation.update_state(state=states.FAILURE)
        return False
However, the task is still reported as having succeeded:
Task sim.tasks.run_simulation[9235e3a7-c6d2-4219-bbc7-acf65c816e65]
succeeded in 1.17847704887s: False
It seems that the state can only be modified while the task is running; once it is completed, Celery changes the state to whatever it deems the outcome to be (refer to this question). Is there any way, without failing the task by raising an exception, to make Celery report that the task has failed?
To mark a task as failed without raising an exception, update the task state to FAILURE and then raise an Ignore exception; returning any value would record the task as successful. An example:
from celery import Celery, states
from celery.exceptions import Ignore

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task(bind=True)
def run_simulation(self):
    if some_condition:
        # manually update the task state
        self.update_state(
            state=states.FAILURE,
            meta='REASON FOR FAILURE'
        )
        # ignore the task so no other state is recorded
        raise Ignore()
But the best way is to raise an exception from your task; you can create a custom exception to track these failures:
class TaskFailure(Exception):
    pass
And raise this exception from your task:
if some_condition:
    raise TaskFailure('Failure reason')
I'd like to further expand on Pierre's answer as I've encountered some issues using the suggested solution.
To allow custom fields when updating a task's state to states.FAILURE, it is important to also mock some attributes that a FAILURE state would have (notice exc_type and exc_message). Otherwise, while the solution will terminate the task, any attempt to query the state (for example, to fetch the 'REASON FOR FAILURE' value) will fail.
Below is a snippet for reference I took from:
https://www.distributedpython.com/2018/09/28/celery-task-states/
import traceback

from celery import states
from celery.exceptions import Ignore

@app.task(bind=True)
def task(self):
    try:
        raise ValueError('Some error')
    except Exception as ex:
        self.update_state(
            state=states.FAILURE,
            meta={
                'exc_type': type(ex).__name__,
                'exc_message': traceback.format_exc().split('\n'),
                'custom': '...'
            })
        raise Ignore()
I got an interesting reply on this question from Ask Solem, where he proposes an 'after_return' handler to solve the issue. This might be an interesting option for the future.
In the meantime I solved the issue by simply returning a string 'FAILURE' from the task when I want to make it fail and then checking for that as follows:
from celery.result import AsyncResult

result = AsyncResult(task_id)
if result.state == 'FAILURE' or (result.state == 'SUCCESS' and result.get() == 'FAILURE'):
    pass  # Failure processing task
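For completeness, the task side of this workaround might look something like the following sketch (some_condition is a placeholder):
@task()
def run_simulation():
    if some_condition:
        return 'FAILURE'   # sentinel value the caller treats as a failure
    # ... normal processing ...
    return 'SUCCESS'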