Running a Capistrano task in the background - deployment

I have a cap task that calls multiple other long-running cap tasks.
Let's say I have a task named A. From within this cap task, depending on a condition, I call another cap task, say B. Cap task B sequentially calls 4 more cap tasks: C, D, E & F.
So B looks something like this:
task :B do
  on roles(:all) do
    invoke 'tasks:C'
    invoke 'tasks:D'
    Rake::Task['db:E'].invoke("arg1", "arg2")
    Rake::Task['db:F'].invoke("arg1", "arg2")
  end
end
Each of C, D, E & F is long-running and must run sequentially, in the order specified. Basically, tasks C to F zip and upload the database and assets, which can take a long time, so they must not hold up the cap deployment process and should run independently in the background.
So I need a way to call task B from task A so that B runs asynchronously while the rest of the cap tasks in the deployment keep running.

I'd suggest making task B an actual Rake task, and then having Capistrano call and immediately background it, e.g. https://stackoverflow.com/a/5829142/3042016
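A minimal sketch of that approach, assuming the C-F work has been consolidated into a single server-side Rake task named myapp:b (a hypothetical name); nohup plus & detaches it so the deploy flow continues immediately. Depending on your SSHKit version you may need to wrap the command in an explicit shell invocation for the backgrounding to take effect:

task :B do
  on roles(:all) do
    within current_path do
      # Start the long-running rake task detached from the SSH session;
      # Capistrano returns as soon as the process is launched.
      execute :nohup, "bundle exec rake myapp:b > log/task_b.log 2>&1 &"
    end
  end
end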

Related

How to schedule the execution of a Python script in Azure DevOps (after successful Build)?

I have an Azure Pipeline Build. The *.yaml file correctly executes a Python script (PythonScript@0). This script creates (if it does not exist), executes, and publishes an Azure ML pipeline. It runs well when the Build is executed manually or is triggered by commits.
But I want to schedule the automated execution of the ML pipeline (Python script) on a daily basis.
I tried the following approach:
from azureml.pipeline.core import Schedule, ScheduleRecurrence

pipeline_id = published_pipeline.id
recurrence = ScheduleRecurrence(frequency="Day", interval=1)
recurring_schedule = Schedule.create(ws,
                                     name=<schedule_name>,
                                     description="Title",
                                     pipeline_id=pipeline_id,
                                     experiment_name=<experiment_name>,
                                     recurrence=recurrence)
In this case the pipeline runs for 3-4 seconds and terminates successfully. However, the Python script is not executed.
I also tried to schedule the execution of the Build pipeline, but I assume that is the wrong approach: it rebuilds the pipeline, whereas I need to execute a previously published pipeline.
schedules:
- cron: "0 0 * * *"
  displayName: Daily build
  always: true
How can I execute my published pipeline daily? Should I use Release (which agents, which tasks?)?
Also, I tried to schedule the execution of a pipeline using Build, but I assume that it is a wrong approach. It rebuilds a pipeline, but I need to execute a previously published pipeline.
Assuming your Python-related task runs after many other tasks, it's not recommended to simply schedule the whole build pipeline: that would rerun everything (the other tasks plus the Python script).
Only a whole pipeline can be scheduled, not individual tasks, so I suggest creating a new build pipeline that runs only the Python script. A private agent is also more suitable for this scenario.
You then have two pipelines: the original pipeline A, and a new pipeline B that runs the Python script.
Set B's build-completion trigger to A, so that when A first builds successfully, B runs after it.
Add a command-line or PowerShell task as pipeline A's last task. This task (which modifies the YAML and pushes the change) is responsible for updating B's corresponding *.yml file so that B gets scheduled.
This way, if A (the other tasks) builds successfully, B (the pipeline that runs the Python script) executes, and B runs daily after that first successful build. A sketch of B's YAML follows.
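A minimal sketch of what pipeline B's YAML might look like, under these assumptions: the original pipeline is named PipelineA, the code that submits the published ML pipeline lives in a hypothetical script run_published_pipeline.py, and the default branch is main:

# Run B automatically after pipeline A completes (the YAML equivalent
# of a build-completion trigger).
resources:
  pipelines:
  - pipeline: upstream        # local alias (hypothetical)
    source: PipelineA         # assumed name of the original pipeline
    trigger: true

# Run B daily thereafter; "always: true" runs it even without new commits.
schedules:
- cron: "0 0 * * *"
  displayName: Daily run of the published ML pipeline
  branches:
    include:
    - main                    # assumed default branch
  always: true

steps:
- task: PythonScript@0
  inputs:
    scriptSource: 'filePath'
    scriptPath: 'run_published_pipeline.py'   # hypothetical submission script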
Hope it helps, and if I misunderstood anything, feel free to correct me.

Can a PowerShell script be dependent on another script's execution?

I have a situation where I want to make the execution of my scripts smarter. I have a set of scripts that execute at given times, but because the input files are sometimes not posted at the correct times, the scripts run into errors and produce unexpected results. So one of the solutions I was thinking of is to make the execution of the scripts dependent on each other. Here is what I mean:
script 1 runs at 6 pm
validates that the file is there
if it's there, set a flag
the flag is active so execute script 2 at 9 pm
if it's NOT there, the flag is not set
the flag is not set so script 2 is not executed
Right now, script 1 and script 2 are set up in the Task Scheduler at those times. I checked the Scheduler for those types of conditions, but didn't find anything.
You can set triggers in Task Scheduler that fire when an event happens; basically anything you can see in Event Viewer can act as a trigger.
I would suggest calling Write-EventLog from the script that works on the file; depending on the result, the scheduled task then gets triggered.
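A minimal sketch of that idea, assuming a custom event source (here called FileWatcher, a hypothetical name) has been registered once with New-EventLog, and that script 2's scheduled task uses an "On an event" trigger matching event ID 1000:

# One-time setup (elevated):
#   New-EventLog -LogName Application -Source "FileWatcher"

$inputFile = "C:\inbox\input.csv"   # hypothetical path to the expected file

if (Test-Path $inputFile) {
    # Script 2's task triggers on this event (Log: Application, Source: FileWatcher, ID: 1000).
    Write-EventLog -LogName Application -Source "FileWatcher" -EventId 1000 `
        -EntryType Information -Message "Input file present; OK to run script 2."
}
else {
    # A different event ID, so a missing file never triggers script 2.
    Write-EventLog -LogName Application -Source "FileWatcher" -EventId 1001 `
        -EntryType Warning -Message "Input file missing; script 2 skipped."
}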
I suggest having a single script run every N minutes from a single scheduled task in Task Scheduler.
This master script analyzes activity and holds all the logical conditions that determine when and which external script to run. You can also use flag files; a sketch follows.
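A minimal sketch of such a master script, with hypothetical paths and a flag file marking that the input has been validated:

# master.ps1 - run every N minutes from a single scheduled task
$inputFile = "C:\inbox\input.csv"             # hypothetical input path
$flagFile  = "C:\inbox\input.validated.flag"  # hypothetical flag file

if ((Test-Path $inputFile) -and -not (Test-Path $flagFile)) {
    # The input has arrived: run script 1 and set the flag.
    & "C:\scripts\script1.ps1"
    New-Item -Path $flagFile -ItemType File | Out-Null
}
elseif ((Test-Path $flagFile) -and (Get-Date).Hour -ge 21) {
    # The flag is set and it is 9 pm or later: run script 2 and reset the flag.
    & "C:\scripts\script2.ps1"
    Remove-Item $flagFile
}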

What does deploy:initial do in a Capistrano task

I use Capistrano for deploys. My Capistrano tasks are almost all quoted from various blogs. I often find the following structure.
namespace :deploy do
  desc 'Say something before Deploy'
  task :initial do
    on roles(:app) do
      before 'deploy:hoge', 'deploy:bazz'
      invoke 'deploy'
    end
  end

  task :hoge do
    on roles(:app) do
      puts "'hello, world'"
    end
  end

  task :bazz do
    on roles(:app) do
      puts "'goodnight, world'"
    end
  end
end
What does before 'deploy:hoge', 'deploy:bazz' do inside the task statement? It doesn't display any messages. I think the before statement must be outside of the task statement.
In Capistrano 3.x at least, there is no such thing as a built-in deploy:initial task. Declaring a task with that name does not do anything special.
I think the before statement must be outside of the task statement.
You are exactly right. Any before and after declarations should be done at the top level and never within a task block.
As it stands, the example that you gave does nothing. If you want to run a certain task before deploy begins, you would hook into the deploy:starting task, like this:
before "deploy:starting", "deploy:hoge"
Furthermore, there is nothing special about the deploy namespace. I recommend declaring your own custom tasks in a different namespace, to keep them visually separated. Then you can easily distinguish between a built-in Capistrano task and your custom ones.
So I would rewrite the example like this:
namespace :myapp do
  task :hoge do
    on roles(:app) do
      puts "'hello, world'"
    end
  end

  task :bazz do
    on roles(:app) do
      puts "'goodnight, world'"
    end
  end
end
# Invoke hoge and bazz before deployment begins
before "deploy:starting", "myapp:hoge"
before "deploy:starting", "myapp:bazz"
The full list of built-in Capistrano tasks that you can use with before and after can be found here:
http://capistranorb.com/documentation/getting-started/flow/

How to run a capistrano task within a whenever task?

I have the whenever gem set up properly. How can I run that Capistrano task from my whenever schedule.rb?
my schedule.rb
every 1.minute, roles: [:app] do
  # how to run here a capistrano task
  # invoke 'my_app:test'
end
My capistrano task:
namespace :my_app do
  desc 'test'
  task :test do
    on roles(:web) do
      puts "the task runs"
    end
  end
end
Or should I move that task into a rake task, and then run that rake task from both whenever and Capistrano?
Jan, you may find the documentation very useful: https://github.com/javan/whenever. The code example below is copied from the docs and lightly edited.
schedule.rb
# run this task only on servers with the :app role in Capistrano
# see Capistrano roles section below
every :day, at: '12:20am', roles: [:web] do
  rake "app_server:task"
end
lib/tasks/test_task.rake
namespace :my_app do
  desc 'test'
  task :test do
    puts "the task runs"
  end
end
I believe it's easier to create a Rake task and run it via whenever. You can choose the role, so you basically don't need a Capistrano task (I believe you wanted one only because of the role).
I'd suggest your latter option: moving the logic to a rake task and executing it from both whenever and Capistrano. It'll be vastly easier and cleaner. A sketch of the Capistrano side follows.
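A minimal sketch of a Capistrano wrapper around that shared rake task, assuming the app runs under bundler on the servers; the namespace and task names are taken from the question:

namespace :my_app do
  desc 'Run the shared rake task on the web servers'
  task :test do
    on roles(:web) do
      within current_path do
        # Runs the same my_app:test rake task that whenever schedules.
        execute :bundle, :exec, :rake, 'my_app:test'
      end
    end
  end
end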

Jenkins - Close the Jenkins job as soon as one of the parallel scripts fails

I'm running two Perl scripts in parallel in Jenkins:
some shell commands
perl script 1 &
perl script 2 &
wait
some more shell commands
If one of the Perl scripts fails in the middle of execution, the job waits until the other script finishes (as it is executed in parallel in the background).
I want the job to stop as soon as one of the scripts fails, and not waste time completing the execution of the other script.
Please help.
You can set up a signal handler for SIGCHLD, a signal that is always delivered to the parent process when a child exits. I'm not aware of a mechanism to see which child process exited, but you can save the subprocess process identifiers and just kill both of them when you receive SIGCHLD:
some shell commands
perl script 1 &
pid1=$!
perl script 2 &
pid2=$!
trap "kill $pid1 $pid2" CHLD
wait
some more shell commands
The script above has the downside that it kills the other script regardless of the exit status of the subprocess. If you want, you could add a check for the exit status in the trap: each subprocess could, for example, create a temp file when it succeeds, and the trap could check whether that file exists.
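A minimal sketch of that temp-file variant (assuming bash, with hypothetical marker paths); the other process is killed only if neither script has recorded success yet:

# Each script touches a marker file on success.
rm -f /tmp/ok1 /tmp/ok2
( perl script 1 && touch /tmp/ok1 ) &
pid1=$!
( perl script 2 && touch /tmp/ok2 ) &
pid2=$!
# On child exit: if no success marker exists, the exiting child failed,
# so kill whatever is still running.
trap '[ -e /tmp/ok1 ] || [ -e /tmp/ok2 ] || kill $pid1 $pid2 2>/dev/null' CHLD
wait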
Typically with Jenkins you would have the parallel steps run as separate jobs (or projects, as they are sometimes known) rather than as steps within one job. This lets the steps run in parallel across different slave machines and keeps the output for each job in a separate place.
You would then have a controlling job running the other parts.
I like the Multijob plugin for this sort of thing.
There are alternatives which may suit you better, such as the Build Flow Plugin, which uses a DSL to describe the jobs you want to run; a sketch is below.
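A minimal sketch of such a Build Flow DSL, with hypothetical job names; if either parallel branch fails, the flow fails and the final job never runs:

// Build Flow plugin DSL (hypothetical job names)
parallel (
    { build("perl-script-1-job") },
    { build("perl-script-2-job") }
)
// Only reached if both parallel jobs succeeded.
build("post-processing-job")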