Scheduled job fails when run via trigger, but works when run manually - PowerShell

I have a scheduled job to run the following command:
copy 'C:\Users\tdjeilati\Desktop\RDP.rdg' '\\fil03\Dept_1Z\Public\my'
This copies a file to a remote server.
I have a job trigger -AtLogon.
When I log in to my PC, it runs the job.
When I retrieve that job with Receive-Job, I see that the job got an "access is denied" error.
But then I run the job by hand, and it works correctly! What gives?
I don't understand why it fails when run from the job trigger but works when I run it manually in PowerShell. I can only assume the environment/permissions are different.
EDIT: One thing I noticed is that the job run by the job trigger doesn't have any child jobs, but the job I start from the command line does. Why should there be a difference?

The scheduled task may not be running under your user account, which would explain why it works when you start the job manually.
Verify that the task runs as a user with rights to both the source file and the remote share.
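If the job was registered without explicit credentials, the underlying task (under \Microsoft\Windows\PowerShell\ScheduledJobs in Task Scheduler) may run in a context that has no network credentials for the share. A minimal sketch of re-registering the scheduled job under an account that does have access; the job name 'CopyRdg' is illustrative, the paths are taken from the question:

$cred    = Get-Credential        # an account with rights to \\fil03\Dept_1Z\Public\my
$trigger = New-JobTrigger -AtLogOn
Register-ScheduledJob -Name 'CopyRdg' -Credential $cred -Trigger $trigger -ScriptBlock {
    Copy-Item 'C:\Users\tdjeilati\Desktop\RDP.rdg' '\\fil03\Dept_1Z\Public\my'
}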

Related

Rundeck failed to remove remote file in C:\WINDOWS\TEMP\ when the Job times out

I noticed that when a Rundeck job times out due to running too long, it fails to remove the temp file from my remote Windows node. As a result, the Temp folder accumulates an unnecessary number of dispatch temp files. Is there a config setting in Rundeck to remove those files when a job times out?
Rundeck is running on Community 3.4.10
There is no specific option for that; a good workaround is to run a periodic job that cleans that directory.
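A hedged sketch of such a periodic cleanup for the Windows node, assuming the leftover dispatch scripts sit directly in C:\WINDOWS\TEMP and a '*dispatch*' name filter matches them (check what you actually see in that folder and adjust the filter and age threshold):

Get-ChildItem 'C:\WINDOWS\TEMP' -Filter '*dispatch*' -File |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-1) } |
    Remove-Item -Force -ErrorAction SilentlyContinue

Schedule it as its own Rundeck job (or a Windows scheduled task) so the folder gets cleaned regularly.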

Argo workflow: execute a step when stopped forcefully

I have a 5 steps Argo-workflow:
step1: create a VM in the cloud
step2: do some work
step3: do some more work
step4: do some further work
step5: delete the VM
All of the above steps are time-consuming, and for whatever reason a running workflow might be stopped or terminated by issuing the stop/terminate command.
What I want to do is, if the stop/terminate command is issued at any stage before step4 is started, I want to directly jump to step4, so that I can clean up the VM created at step1.
Is there any way to achieve this?
I was imagining it can happen this way:
Suppose I am at step2 when the stop/terminate signal is issued.
The pods running at step2 get a signal that the workflow is going to be stopped.
The pods stop their current work and output a special string telling the next steps to skip.
So step3 sees the outputs from step2, skips its work and passes it on to step4 and so on.
step5 runs irrespective of the input and deletes the VM.
Please let me know if something like this is achievable.
It sounds like step5 needs to run regardless, which is what an exit handler is for. An exit handler is executed when you 'stop' the workflow at any step, but it is skipped if you 'terminate' the entire workflow.
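A minimal sketch of the exit-handler layout, with the VM cleanup moved into the onExit template so it runs even when the workflow is stopped partway through; template names and images are illustrative, not from the question:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: vm-pipeline-
spec:
  entrypoint: main
  onExit: delete-vm              # runs on completion or 'stop'; skipped on 'terminate'
  templates:
  - name: main
    steps:
    - - name: create-vm
        template: create-vm
    - - name: do-work
        template: do-work
  - name: create-vm
    container:
      image: alpine:3.19
      command: [sh, -c, "echo creating VM"]
  - name: do-work
    container:
      image: alpine:3.19
      command: [sh, -c, "echo doing work"]
  - name: delete-vm
    container:
      image: alpine:3.19
      command: [sh, -c, "echo deleting VM"]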

Talend Automation Job taking too much time

I developed a job in Talend, built it, and automated it to run from a Windows batch file produced by the build below.
When the batch file is executed, it invokes the dimtableinsert job and, after that finishes, invokes fact_dim_combine. The whole thing takes just minutes to run in Talend Open Studio, but when I invoke the batch file via the Task Scheduler it takes hours to finish.
Time Taken
Manual -- 5 Minutes
Automation -- 4 hours (on invoking Windows batch file)
Can someone please tell me what is wrong with this automation process?
The delay is likely a latency issue. Talend might be installed on the same server as the database instance, so whenever you execute the job from Talend it completes as expected; but the scheduler might be installed on a different server, and when you call the job through the scheduler the inserts take longer.
Make sure your scheduler and database instance are on the same server.
Execute the job directly in the Windows terminal and check whether you see the same issue (a quick timing sketch follows).
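A simple way to compare the two paths is to time the same batch file from an interactive PowerShell session and compare that against the scheduled run; the path below is a placeholder for the batch file your build produced:

Measure-Command { & 'C:\Talend\builds\job_start.bat' } |
    Select-Object -ExpandProperty TotalMinutes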
The easiest way to find out what is taking so long is to add some logging to your job.
First, add tWarn components at the start and end of each subjob (dimtableinsert and fact_dim_combine) to identify which one is the longest.
Then add more logs before/after the components inside the jobs.
This way you should have a better idea of what is responsible for the slowdown (DB access, writing of files, etc.).

Scheduled Task Does Not Exist IF It Has a Repeat Trigger

Since yesterday my task scheduler has been causing some issues here that I was hoping to find some assistance with if possible.
I have a task "TASK" set to run daily each morning and then repeat every 30 minutes, with the action of launching a batch script from a directory on the C: drive. The script works correctly when run on its own. When I create a task for the script it will run, unless it is set to have an "After triggered, repeat every X" setting. In that case it gives the error message: "An error has occurred for task TASK. Error message: The selected task "{0}" no longer exists. To see the current tasks, click Refresh."
I have tried wiping all tasks from Task Scheduler and recreating them from scratch, wiping the task entries from the registry, and exporting and reimporting tasks. The issue only occurs when a task is set to repeat after its trigger.
Got it myself. The error came up because the original start date was set to a time before my attempts to run the tasks manually to test them. Strange.
Solution: set the next start date to the future (sketch below).
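A sketch of recreating the trigger with a future start time plus the 30-minute repetition, assuming the ScheduledTasks module (Windows 8/Server 2012 and later). New-ScheduledTaskTrigger only exposes repetition on its -Once parameter set, so the once-with-repetition pattern stands in for the daily-plus-repeat setup here; the task name 'TASK' comes from the question, the script path is a placeholder:

$start   = (Get-Date).Date.AddDays(1).AddHours(6)      # tomorrow at 06:00, i.e. in the future
$trigger = New-ScheduledTaskTrigger -Once -At $start `
            -RepetitionInterval (New-TimeSpan -Minutes 30) `
            -RepetitionDuration (New-TimeSpan -Hours 18)
$action  = New-ScheduledTaskAction -Execute 'C:\Scripts\mytask.bat'
Register-ScheduledTask -TaskName 'TASK' -Action $action -Trigger $trigger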

Reacquiring DSC background job after a reboot

I have a system which executes a bunch of DSC configurations on a nightly basis to build out machines. This is initiated by a scheduled job (MultiJob) in Jenkins. The schedule job then triggers individual jobs and waits for all jobs to complete. 90% of the time this works without an issue.
However, occasionally one of the individual jobs requires a reboot. This is configured correctly with the LCM and using the xPendingReboot DSC resource. No issues here.
The problem is the PowerShell Job that is executing on the target machine ends when the reboot is initiated. This then triggers the next stage of the build process which fails because the DSC process is actually not yet complete.
Does anyone out there know how to reacquire pending DSC job on the target machine after it has been rebooted?
Update (untested): I am currently working on a scenario that invokes the GetCimSessionInstanceId method on the LCM to acquire the session handle when the job completes. This session id can then be used to reacquire the CimSession on the remote machine after the reboot via the Get-CimSession cmdlet. My assumption is that I'll be able to remotely execute Get-Job over the CIM session to determine whether the DSC process has continued. This leads to an additional question: how can we determine whether the initial DSC job ended due to a reboot or because the process actually finished?
This approach should work for what you are looking for: http://nanalakshmanan.github.io/blog/DSC-get-job-details-post-reboot/
In cases where you want to obtain the information from the node after a reboot, set DSC not to proceed after the reboot. This can be done using the following meta-configuration sample:
[DscLocalConfigurationManager()]
Configuration Settings
{
    Settings
    {
        # Do not resume the configuration run automatically after a reboot,
        # and do not let the LCM reboot the node on its own.
        ActionAfterReboot = 'StopConfiguration'
        RebootNodeIfNeeded = $false
    }
}
Then re-apply the existing configuration using the following command
Start-DscConfiguration -Wait -UseExisting -Verbose
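A hedged sketch of the post-reboot flow under those LCM settings: once the node is reachable again, poll the LCM state until it is no longer busy or pending a reboot, then resume the run explicitly. 'TargetNode' is a placeholder for the machine name:

$cim = New-CimSession -ComputerName 'TargetNode'
do {
    Start-Sleep -Seconds 30
    $lcm = Get-DscLocalConfigurationManager -CimSession $cim
    Write-Host "LCMState: $($lcm.LCMState)"
} while ($lcm.LCMState -in 'Busy','PendingReboot')
# ActionAfterReboot = 'StopConfiguration' means the run does not resume by itself,
# so kick it off again with the existing configuration:
Start-DscConfiguration -CimSession $cim -Wait -UseExisting -Verbose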