Inspect and retry Resque jobs via redis-cli (command line)

I am unable to run resque-web on my server due to some issues I still have to work out, but I still need to check and retry failed jobs in my Resque queues.
Does anyone have experience with peeking at the failed-jobs queue to see what the error was, and then retrying the jobs, using the redis-cli command line?

Found a solution on the following link:
http://ariejan.net/2010/08/23/resque-how-to-requeue-failed-jobs
In the rails console we can use these commands to check and retry failed jobs:
1 - Get the number of failed jobs:
Resque::Failure.count
2 - Check each failed job's exception class and backtrace:
Resque::Failure.all(0, 20).each { |job|
  puts "#{job["exception"]} #{job["backtrace"]}"
}
The job object is a hash with information about the failed job; inspect it for further details. Also note that this only lists the first 20 failed jobs, so vary the (0, 20) arguments to page through the rest of the list (or walk it straight from Redis, as in the sketch after step 4).
3 - Retry all failed jobs:
(Resque::Failure.count - 1).downto(0).each { |i| Resque::Failure.requeue(i) }
4 - Reset the failed jobs count:
Resque::Failure.clear
Retrying the jobs does not reset the counter, so we must clear it explicitly to bring it back to zero.
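If the Rails console really is not an option, the same data can be peeked at straight from Redis: with the default namespace and the standard Resque::Failure::Redis backend, each failed job is stored as a JSON blob in the resque:failed list, so redis-cli LLEN resque:failed and LRANGE resque:failed 0 19 show the raw entries. Below is a minimal redis-py sketch along the same lines that walks the whole list instead of a fixed (0, 20) window (host, port and key are assumptions about a default setup):

# Peek at Resque's failed-job list directly in Redis. Assumes the default
# "resque" namespace, so the list key is "resque:failed".
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

total = r.llen("resque:failed")   # same number as Resque::Failure.count
print(total, "failed jobs")

batch = 20
for start in range(0, total, batch):
    for raw in r.lrange("resque:failed", start, start + batch - 1):
        job = json.loads(raw)
        print(job["exception"], job["backtrace"])

Requeueing is still easier from the console as in step 3, since Resque::Failure.requeue takes care of pushing the payload back onto the right queue.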

Related

Azure DevOps - Release pipeline: when re-running failed tests, Azure DevOps shows a failure status even if the re-run succeeded

I use SpecFlow with SpecFlow+ Runner and the Default.srprofile to re-run failed tests 3 times. In Visual Studio it shows 2 passed and 1 failed, but the status of the test is a failure; the same goes for Azure DevOps: if a re-run test passes, the outcome of the run is still a failure. The failures are sometimes caused by locator timeouts or server timeouts. It does not happen often, but we have seen it a few times, which is why we decided to implement a re-run.
Could anyone help with this?
2022-02-09T12:40:13.8607507Z Test Run Failed.
2022-02-09T12:40:13.8608607Z Total tests: 37
2022-02-09T12:40:13.8609271Z Passed: 36
2022-02-09T12:40:13.8609858Z Failed: 1
2022-02-09T12:40:13.8617476Z Total time: 7.4559 Minutes
2022-02-09T12:40:13.9226929Z ##[warning]Vstest failed with error. Check logs for failures. There might be failed tests.
2022-02-09T12:40:14.0075402Z ##[error]Error: The process 'D:\Microsoft_Visual_Studio\2019\Common7\IDE\Extensions\TestPlatform\vstest.console.exe' failed with exit code 1
2022-02-09T12:40:14.8164576Z ##[error]VsTest task failed.
But then the report states that it was retried 3 times, of which 2 retries were successful, yet the Azure DevOps run still has a failure status.
The behavior of the report is correct, and sadly this cannot be changed via configuration.
What you can do is adjust how the results are reported back to Azure DevOps.
You can configure this via the VSTest element in the srprofile file.
The following example means that at least one retry has to pass:
<VSTest testRetryResults="Unified" passRateAbsolute="1"/>
Docs: https://docs.specflow.org/projects/specflow-runner/en/latest/Profile/VSTest.html
Be aware that we have stopped the development of the SpecFlow+ Runner. More details here: https://specflow.org/using-specflow/the-retirement-of-specflow-runner/

How can I find the jobId that caused node failure in slurm?

I know that the sacctmgr command can list the event history of nodes along with the reason.
sacctmgr show event Start=09/01-00:00 format=nodename,timestart,timeend,state,reason,user
This command gives the following output:
gnodeXX 2020-09-04T20:21:34 2020-09-05T01:21:38 DRAIN Kill task failed root(ZZ)
gnodeXX 2020-09-09T16:44:55 2020-09-09T17:50:21 DOWN* Not responding slurm(DDDD)
Is there any way I can get the jobId or username that caused the node failure, or any info on the task for which the kill failed? The user column gives one of only two values for all results, root(ZZ) and slurm(DDDD), and I am not sure what these imply.
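One way to narrow this down, assuming sacct's node and time filters (--nodelist, --starttime, --endtime) are available on the cluster, is to list the jobs whose allocations touched that node around the event's start time; for a "Kill task failed" drain, the job ending right at the event is usually the one whose tasks could not be killed. A rough sketch, with the node name and time window as placeholders:

# Rough sketch: list jobs that ran on the node around the DRAIN event,
# using sacct's node/time filters. Node name and window are placeholders.
import subprocess

node = "gnodeXX"
window_start = "2020-09-04T20:00:00"
window_end = "2020-09-04T20:30:00"

result = subprocess.run(
    ["sacct", "--allusers",
     "--nodelist", node,
     "--starttime", window_start,
     "--endtime", window_end,
     "--format=JobID,User,State,Start,End,ExitCode",
     "--parsable2"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)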

How to determine if a job has failed

How can I programmatically determine that a job has failed for good and will not be retried any more? I've seen the following on failed jobs:
status:
  conditions:
  - lastProbeTime: 2018-04-25T22:38:34Z
    lastTransitionTime: 2018-04-25T22:38:34Z
    message: Job has reach the specified backoff limit
    reason: BackoffLimitExceeded
    status: "True"
    type: Failed
However, the documentation doesn't explain why conditions is a list. Can there be multiple conditions? If so, which one do I rely on? Is it guaranteed that there will only be one with status: "True"?
JobConditions is similar to PodConditions. You may read about PodConditions in the official docs.
Anyway, to determine whether a Job has succeeded or failed, I follow another way. Let's look at it.
There are two relevant fields in the Job spec.
One is spec.completions (default value 1), which says:
Specifies the desired number of successfully finished pods the
job should be run with.
The other is spec.backoffLimit (default value 6), which says:
Specifies the number of retries before marking this job failed.
Now, in JobStatus
there are also two fields: Succeeded and Failed. Succeeded is the number of pods which completed successfully, and Failed is the number of pods which reached phase Failed.
Once Succeeded is equal to or greater than spec.completions, the job becomes complete.
Once Failed is equal to or greater than spec.backoffLimit, the job becomes failed.
So the logic will be:
// job is a *batchv1.Job; Completions and BackoffLimit are pointers in the spec
if job.Status.Succeeded >= *job.Spec.Completions {
    return "completed"
} else if job.Status.Failed >= *job.Spec.BackoffLimit {
    return "failed"
}
If so, which one do I rely on?
You might not have to choose, considering commit dd84bba64
When a job is complete, the controller will indefinitely update its conditions
with a Complete condition.
This change makes the controller exit the reconciliation as soon as the job is already found to be marked as complete.
As https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#jobstatus-v1-batch says:
The latest available observations of an object's current state. When a
Job fails, one of the conditions will have type "Failed" and status
true. When a Job is suspended, one of the conditions will have type
"Suspended" and status true; when the Job is resumed, the status of
this condition will become false. When a Job is completed, one of the
conditions will have type "Complete" and status true. More info:
https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/
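Putting that doc text into practice, here is a minimal sketch with the official kubernetes Python client (job name and namespace are placeholders): it treats the Job as permanently failed once a condition of type Failed has status "True", and as done once a Complete condition is true.

# Minimal sketch using the kubernetes Python client: a Job is permanently
# failed once a condition of type "Failed" has status "True".
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod

def job_state(name, namespace="default"):
    job = client.BatchV1Api().read_namespaced_job(name=name, namespace=namespace)
    for cond in job.status.conditions or []:
        if cond.status == "True" and cond.type == "Failed":
            return "failed"      # e.g. reason BackoffLimitExceeded
        if cond.status == "True" and cond.type == "Complete":
            return "completed"
    return "running"

print(job_state("my-job"))  # "my-job" is a placeholder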

boofuzz - Target connection reset, skip error

I am using boofuzz to fuzz a specific application. While creating the blocks etc. and doing some testing, I noticed that the target sometimes closes the connection. This causes procmon to terminate the target process and restart it. However, this is totally unnecessary for this target.
Can I somehow tell boofuzz not to handle this as an error (so that the target is not restarted)?
[2017-11-04 17:09:07,012] Info: Receiving...
[2017-11-04 17:09:07,093] Check Failed: Target connection reset.
[2017-11-04 17:09:07,093] Test Step: Calling post_send function:
[2017-11-04 17:09:07,093] Info: No post_send callback registered.
[2017-11-04 17:09:07,093] Test Step: Sleep between tests.
[2017-11-04 17:09:07,094] Info: sleeping for 0.100000 seconds
[2017-11-04 17:09:07,194] Test Step: Contact process monitor
[2017-11-04 17:09:07,194] Check: procmon.post_send()
[2017-11-04 17:09:07,196] Check OK: No crash detected.
Excellent question! There isn't (wasn't) any way to do this, but there really should be. A reset connection does not always mean a failure.
I just added ignore_connection_reset and ignore_connection_aborted options to the Session class to ignore ECONNRESET and ECONNABORTED errors respectively. Available in version 0.0.10.
Description of arguments available in the docs: http://boofuzz.readthedocs.io/en/latest/source/Session.html
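For example, enabling them when creating the session might look like the following minimal sketch (host and port are placeholders, and the constructor calls reflect the 0.0.10-era API):

# Minimal sketch: tell boofuzz to ignore connection resets/aborts instead of
# treating them as failures. Host/port are placeholders.
from boofuzz import Session, SocketConnection, Target

session = Session(
    target=Target(connection=SocketConnection("127.0.0.1", 9999, proto="tcp")),
    ignore_connection_reset=True,    # don't flag ECONNRESET as a failure
    ignore_connection_aborted=True,  # likewise for ECONNABORTED
)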
You may find the commit that added these arguments informative for how some of the boofuzz internals work (relevant lines 182-183, 213-214, 741-756): https://github.com/jtpereyda/boofuzz/commit/a1f08837c755578e80f36fd1d78401f21ccbf852
Thank you for the solid question.

Starting Parpool in MATLAB

I tried starting parpool in MATLAB 2015b with the following command:
parpool('local',3);
This command should allocate 3 workers, but instead I received an error stating that the parallel pool failed to start. The error message is as follows:
Error using parpool (line 94)
Failed to start a parallel pool. (For information in addition to
the causing error, validate the profile 'local' in the Cluster Profile
Manager.)
A similar query was posted at (https://nl.mathworks.com/matlabcentral/answers/196549-failed-to-start-a-parallel-pool-in-matlab2015a). I followed the same procedure and validated the local profile as suggested.
Using distcomp.feature('LocalUseMpiexec', false) or distcomp.feature('LocalUseMpiexec', true) in startup.m didn't bring any improvement. Attempting to validate the local profile still gives the error report below:
VALIDATION DETAILS
Profile: local
Scheduler Type: Local

Stage: Cluster connection test (parcluster)
Status: Passed
Description: Validation Passed
Command Line Output: (none)
Error Report: (none)
Debug Log: (none)

Stage: Job test (createJob)
Status: Failed
Description: The job errored or did not reach state finished.
Command Line Output:
Failed to determine if job 24 belongs to this cluster because: Unable to
read file 'C:\Users\varad001\AppData\Roaming\MathWorks\MATLAB\local_cluster_jobs\R2015b\Job24.in.mat'.
No such file or directory.
Error Report: (none)
Debug Log: (none)

Stage: SPMD job test (createCommunicatingJob)
Status: Failed
Description: The job errored or did not reach state finished.
Command Line Output:
Failed to determine if job 25 belongs to this cluster because: Unable to
read file 'C:\Users\varad001\AppData\Roaming\MathWorks\MATLAB\local_cluster_jobs\R2015b\Job25.in.mat'.
No such file or directory.
Error Report: (none)
Debug Log: (none)

Stage: Pool job test (createCommunicatingJob)
Status: Skipped
Description: Validation skipped due to previous failure.
Command Line Output: (none)
Error Report: (none)
Debug Log: (none)

Stage: Parallel pool test (parpool)
Status: Skipped
Description: Validation skipped due to previous failure.
Command Line Output: (none)
Error Report: (none)
Debug Log: (none)
I am receiving these errors only on my cluster machine; launching parpool on my standalone PC works perfectly. Is there a way to rectify this issue?