Maintenance plan reading SQL Server Agent job log - tsql

I am new to maintenance plans and SQL Server Agent. I have created two SQL Server Agent jobs: one named ABC with a T-SQL step (SELECT * FROM ABCDEFG), and one named DEF with a T-SQL step (SELECT * FROM ABC). When I execute the ABC job in SQL Server Agent it fails, and the DEF job should not execute if the ABC job fails.
I created a maintenance plan with a subplan that executes the ABC and DEF Agent jobs. When I execute the maintenance plan, it shows Success in the job and in the history. Can the maintenance plan job log show the SQL Server Agent log?
This is how my jobs look:
In my SQL Server Agent log, the ABC job appears as failed and the DEF job as succeeded.
Maintenance plan history log:
My expected result is that the maintenance plan captures whether each Agent job failed or succeeded, and stops if one failed. But based on the log, the maintenance plan executes both ABC and DEF without any dependency. How can I make the two Agent jobs dependent, so that DEF only executes when ABC succeeds?
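One way to get the "run DEF only if ABC succeeds" behaviour, independent of the maintenance plan, is to combine the two commands into a single Agent job with two T-SQL steps and let the first step's on-failure action stop the job. A minimal sketch; the job name ABC_then_DEF is only illustrative:

USE msdb;
GO

EXEC dbo.sp_add_job
    @job_name = N'ABC_then_DEF';

-- Step 1: the former ABC job.
EXEC dbo.sp_add_jobstep
    @job_name          = N'ABC_then_DEF',
    @step_name         = N'ABC',
    @subsystem         = N'TSQL',
    @command           = N'SELECT * FROM ABCDEFG;',
    @on_success_action = 3,  -- go to the next step
    @on_fail_action    = 2;  -- quit the job reporting failure, so DEF never runs

-- Step 2: the former DEF job.
EXEC dbo.sp_add_jobstep
    @job_name          = N'ABC_then_DEF',
    @step_name         = N'DEF',
    @subsystem         = N'TSQL',
    @command           = N'SELECT * FROM ABC;',
    @on_success_action = 1,  -- quit the job reporting success
    @on_fail_action    = 2;

EXEC dbo.sp_add_jobserver
    @job_name = N'ABC_then_DEF';

The maintenance plan subplan can then execute this single job, and the Agent job history will show a failure whenever the ABC step fails.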

Related

Why is my Azure DevOps Migration timing out after several hours?

I have a long-running Migration (don't ask) being run by an Azure DevOps Release pipeline.
Specifically, it's an "Azure SQL Database deployment" activity, running a "SQL Script File" Deployment Type.
Despite having configured maximum values for all of the timeouts in the Invoke-Sql Additional Parameters settings, my migration is still timing out.
Specifically, I get:
We stopped hearing from agent Hosted Agent. Verify the agent machine is running and has a healthy network connection. Anything that terminates an agent process, starves it for CPU, or blocks its network access can cause this error.
So far it's timed out after:
6:13:15
6:13:18
6:14:41
6:10:19
So "after 6 and a bit hours". It's ~22,400 seconds, which doesn't seem like any obvious kind of number either :)
Why? And how do I fix it?
It turns out that Azure DevOps uses hosted agents to execute each task in a pipeline, and those agents have their own lifetimes, independent of whatever task they're running.
https://learn.microsoft.com/en-us/azure/devops/pipelines/troubleshooting/troubleshooting?view=azure-devops#job-time-out
A pipeline may run for a long time and then fail due to job time-out. Job timeout closely depends on the agent being used. Free Microsoft hosted agents have a max timeout of 60 minutes per job for a private repository and 360 minutes for a public repository. To increase the max timeout for a job, you can opt for any of the following.
Buy a Microsoft hosted agent which will give you 360 minutes for all jobs, irrespective of the repository used
Use a self-hosted agent to rule out any timeout issues due to the agent
Learn more about job timeout.
So I'm hitting the "360 minute" limit (presumably they give you a little extra on top, so that no-one complains?).
The solution is to use a self-hosted agent (or to make my Migration run in under 6 hours, of course).
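For reference, if the same deployment were expressed as a YAML pipeline, the relevant job-level settings would look roughly like this; the pool name is a placeholder for whatever self-hosted pool the agent is registered in, and timeoutInMinutes: 0 requests the maximum the agent type allows (no forced limit on a self-hosted agent):

jobs:
- job: RunMigration
  timeoutInMinutes: 0        # 0 = maximum allowed; effectively unlimited on self-hosted agents
  pool:
    name: MySelfHostedPool   # placeholder: your self-hosted agent pool
  steps:
  - task: SqlAzureDacpacDeployment@1   # the "Azure SQL Database deployment" task
    # inputs (server, database, SQL script file, timeouts) omitted from this sketch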

Scheduled job fails when run via trigger, but works when run manually

I have a scheduled job to run the following command:
copy 'C:\Users\tdjeilati\Desktop\RDP.rdg' '\\fil03\Dept_1Z\Public\my'
This copies a file to a remote server.
I have a job trigger -AtLogon.
When I log in to my PC, it runs the job.
When I retrieve that job with Receive-Job, I see that the job failed with an "Access is denied" error.
But then I run the job by hand, and it works correctly! What gives?
I don't understand why it fails when running from the job trigger but works when I run it manually in PowerShell. I can only assume that the environment/permissions are different.
EDIT: One thing I noticed is that the job run from the job trigger doesn't have any child jobs, but the job that I start from the command line does. Why should there be a difference?
The scheduled task may not be running under your user account. This could explain why it works when you manually start the job.
Verify that the task is running as a user with rights to the file and the remote share.
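If the job was set up with Register-ScheduledJob, one way to rule that out is to register it with an explicit credential that is known to have rights to both the local file and the \\fil03 share. A minimal sketch; the job name and the DOMAIN\ account are placeholders:

# Placeholder account; must have rights to the local file and to \\fil03\Dept_1Z\Public\my
$cred    = Get-Credential 'DOMAIN\tdjeilati'
$trigger = New-JobTrigger -AtLogOn -User 'DOMAIN\tdjeilati'

Register-ScheduledJob -Name 'CopyRdgToFil03' -Trigger $trigger -Credential $cred -ScriptBlock {
    Copy-Item 'C:\Users\tdjeilati\Desktop\RDP.rdg' '\\fil03\Dept_1Z\Public\my'
}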

Restarting a Spring Batch job after an app server failure or Spring Batch repository DB failure?

When a Spring Batch repository DB failure happens or the server is shut down, a Spring Batch job that was running at the time is left in a STARTED state, with its real outcome unknown.
In Spring Batch Admin we will not see an option to restart the job, so we are not able to resume it.
How can we restart the job from the last successful commit?
Older discussions suggest that this has to be dealt with manually by updating the tables. I was able to manually update the end time and status in the BATCH_STEP_EXECUTION and BATCH_JOB_EXECUTION tables. Is that really the best option? It may not be practical to do this manually in a production environment.
As mentioned in the Aborting a Job section of the reference documentation, when a server failure happens, the job repository has no way to know that the process running the job died. Hence manual intervention is required.
How can we restart the job from the last successful commit?
Change the status of the job execution to FAILED and restart the job instance; it should continue from where it left off.
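The same cleanup that the question describes doing by hand against the tables can also be done through the Spring Batch APIs. A minimal sketch, assuming Spring Batch 4.x with a JobExplorer, JobRepository and JobOperator already wired up; the job name passed in is a placeholder:

import java.util.Date;

import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobInstance;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.launch.JobOperator;
import org.springframework.batch.core.repository.JobRepository;

public class StuckJobRestarter {

    public Long failAndRestart(JobExplorer explorer, JobRepository repository,
                               JobOperator operator, String jobName) throws Exception {
        // Most recent instance of the job (the one interrupted by the crash).
        JobInstance instance = explorer.getJobInstances(jobName, 0, 1).get(0);

        for (JobExecution execution : explorer.getJobExecutions(instance)) {
            if (execution.getStatus() != BatchStatus.STARTED) {
                continue; // only the execution left hanging needs fixing
            }
            // Mark any step still flagged as STARTED as FAILED, mirroring the manual table update.
            for (StepExecution step : execution.getStepExecutions()) {
                if (step.getStatus() == BatchStatus.STARTED) {
                    step.setStatus(BatchStatus.FAILED);
                    step.setEndTime(new Date());
                    repository.update(step);
                }
            }
            // Mark the orphaned job execution as FAILED so the repository allows a restart.
            execution.setStatus(BatchStatus.FAILED);
            execution.setExitStatus(ExitStatus.FAILED);
            execution.setEndTime(new Date());
            repository.update(execution);

            // Restart: chunk-oriented steps resume from the last committed chunk.
            return operator.restart(execution.getId());
        }
        return null; // nothing was left in a STARTED state
    }
}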

SSIS Transfer Objects task fails when run from Agent

I am using the SSIS Transfer Objects task to transfer a database from one server to another. This is a nightly task as the final part of ETL.
If I run the task manually during the day, there is no problem. It completes in around 60 to 90 minutes.
When I run the task from Agent, it always starts but often fails. I have the Agent steps set up to retry on failure, but most nights it takes 3 attempts, and on some nights 5 or 6.
The error message returned is twofold (both error messages show in the log for the same row):
1) An error occurred while transferring data. See the inner exception for details.
2) Timeout expired: The timeout period elapsed prior to completion of the operation or the server is not responding
I can't find any timeout limit to adjust that I haven't already adjusted.
Anyone have any ideas?

How to run a capistrano task on another stage?

My root Capistrano config has a task that dumps the database: cap production dump or cap staging dump will dump the database.
Now, I want to define a task in staging that will run this task on production.
I could do
desc 'Updates the database of acceptance with the latest production database'
task :update_db do
  run_locally do
    execute :cap, 'production', 'dump'
    # move dump-file from production, via local, to acceptance
  end

  on roles(:db) do
    execute :rake, 'db:data:load'
  end
end
But running a cap task from a cap task via a shell feels ugly and fragile.
I found Calling a multistage capistrano task from within a capistrano task, but that does not work, probably because it's a solution for an old version of Capistrano.
Is there a way to run a certain Capistrano task on a certain "stage" from within Capistrano?