Autosys job failing because the file it writes to was open - ms-word

I have an Autosys job which logs into a Windows machine, performs some tasks, and logs all of its output to a Word file on that machine.
Yesterday's run failed because the log file had been left open by a user who logged into the machine to check the logs; the Word document (my log file, that is) always opens in edit mode. Is there a way to restrict everyone except the automated Autosys job from making changes to the log file?

No, but you can add %AUTORUN% to the job's log file name, which will then create a new log every time the job runs.
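For example, a minimal JIL sketch (job, machine, and path names are illustrative; %AUTORUN% expands to the run number, though the exact variable syntax can vary by Autosys version and agent platform):

insert_job: word_logging_job
machine: windows_host
command: C:\scripts\run_task.bat
std_out_file: C:\logs\task_log.%AUTORUN%.txt
std_err_file: C:\logs\task_err.%AUTORUN%.txt

Each run then writes to its own file, so a user holding one run's log open cannot block the next run.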

Related

Windows Services - How can I find the darktable instance in Windows services

I accidentally messed up my darktable configuration, so I reloaded it from scratch. To avoid losing the edits I have made to my pictures, I wrote a PowerShell backup script for the darktable database. I want to launch this script from the Windows Task Scheduler whenever I launch darktable. I have found the event ID in the Security log which indicates that a new process has been created, which I should be able to use to launch my backup script automatically from Task Scheduler. I want to add code to the script that checks whether darktable is actually running, and only performs the backup if it is. Does anyone know how I can identify this?
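Note that darktable normally runs as an ordinary process rather than a Windows service, so a process check is likely what you want. A minimal PowerShell sketch, assuming the process name is darktable and a hypothetical backup script path:

# Only run the backup if a darktable process is currently running.
if (Get-Process -Name "darktable" -ErrorAction SilentlyContinue) {
    & "C:\scripts\Backup-DarktableDb.ps1"   # hypothetical backup script
}
else {
    Write-Output "darktable is not running; skipping backup."
}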

Where is the standard output of scheduled jobs logged in Rundeck?

I am trying to analyse the logs of scheduled jobs in a Rundeck project. When I check the successful logs of a job in the Rundeck GUI, I can see some lines in the Log Output tab, but I would like to know where these logs live on the machine.
Here's what I have already tried:
- I have checked /var/log/rundeck after reading some documentation here.
- I have also gone through the script to see if the logs are being written elsewhere.
The logs I am looking for are standard print statements. Where can I find these logs?
Rundeck has two kinds of logs: "general logs" (located at /var/log/rundeck) and execution logs (the ones you are asking about), located at /var/lib/rundeck/logs/rundeck/your-project-name/job/your-job-id/logs.
Those paths apply to a DEB/RPM-based installation. If you are using a WAR-based installation, the "general logs" are located in $RDECK_BASE/server/logs and the execution logs at $RDECK_BASE/var/logs/rundeck/your-project-name/job/your-job-id/logs.
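For example, to confirm the files are there on a DEB/RPM install (the project name and job ID are placeholders, and execution logs are typically stored as .rdlog files):

ls /var/lib/rundeck/logs/rundeck/your-project-name/job/your-job-id/logs/
grep 'your print statement' /var/lib/rundeck/logs/rundeck/your-project-name/job/your-job-id/logs/*.rdlog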

Where does Rundeck store job logs?

Let's say I connect to the Rundeck UI at <server>:4440. I construct a job, schedule it to run every 15 minutes, then wait a few days. Then I want to do some analysis of the job run logs, gathering statistics from logging statements I added. The problem is... where are the logs? Are they somewhere on <server>? Or on some other node (if so, on which server and at what file path)?
I know I can download the logs, but they're big, so I'd rather do the statistics gathering close to where the log data live.
The execution log output is stored in the location specified in your framework.properties file. By default:
framework.logs.dir=$RDECK_BASE/var/logs
"Directory for log files written by core services and Rundeck Server’s Job executions"
ref. Configuration File reference.
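For example, on a package-based install (where $RDECK_BASE is /var/lib/rundeck) you could check the configured directory and then run your statistics pass directly on the server; the MYSTAT marker, project name, and job ID are placeholders:

grep '^framework.logs.dir' /etc/rundeck/framework.properties
grep -c 'MYSTAT' /var/lib/rundeck/logs/rundeck/your-project-name/job/your-job-id/logs/*.rdlog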
Hope it helps!

FileMaker scheduled script not running

I have a scheduled script that used to run just fine, but it no longer does. The schedule runs my open script for the database (and completes it) but never even reaches the scheduled script that is supposed to be called.
I am testing by adding a "Freeze Window" step, which creates an error in the server log (the step is incompatible with server-side execution). When I add it as the last line in the open script, it gets called and an error gets written to the log. When I add it as the first line in my scheduled script, it never gets called and there is no error in the log.
It looks like this:
Server opens database ->
Runs open script for database, to completion ->
Never runs scheduled script after open script
Any ideas or thoughts? Anyone seen anything like this before?
This is FileMaker Server 15 running on Windows Server.
I am starting to think this might be a file reference issue. I am not sure whether the server is able to open external databases; could that be causing the issue?
FMS Scheduled scripts and PSoS can only work with FM files hosted on the same machine.
Regarding the "Freeze Window" test: IIRC, FMS will abort the script at the first incompatible step UNLESS Allow User Abort is set to Off (see the sketch below).
Note also that a Halt step in a subscript will stop the main script too.
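So a common guard at the top of a server-scheduled script looks something like this (a sketch, written as FileMaker script steps, where # is the comment step):

Allow User Abort [ Off ]
Set Error Capture [ On ]
# ... rest of the scheduled script; with Allow User Abort off, a
# server-incompatible step such as Freeze Window should no longer
# abort the whole script at that point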

How to check whether a user's ".profile" exists before running crontab jobs on Solaris 10

I am using Solaris 10.
I have another user apart from root, say testuser, whose home directory is mounted on a NAS file system.
I have some scripts which need to run as testuser, so I added them to testuser's crontab.
As long as the NAS is up, all the cron jobs run properly, but when the NAS goes down, cron itself crashes with this error:
! could not obtain latest contract for PID 15621: No such process
I searched for this issue and found that it happens because the user's .profile file is not accessible. So is there any way to check whether the user-specific .profile file exists before running any scheduled job?
Any help on this will be appreciated.
I think a better solution would be to actively monitor the NAS share, and report an error (however errors are reported at your site) when it isn't available. You can use tools like nfsstat to get statistics on the NAS share (assuming it is mounted via NFS). That seems better than checking before each cron run: make sure the share is available, because if it isn't, attention is needed.
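For example, an entry in root's crontab (so the check itself does not depend on the NAS) could send an alert when the share is missing; the mount point and address are illustrative, and a hung hard mount may need a smarter probe than this:

0,15,30,45 * * * * grep '/export/home' /etc/mnttab >/dev/null || echo "NAS share is not mounted" | mailx -s "NAS alert" admin@example.com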
Cron doesn't depend on anything but time, so it will run regardless of whether the user's home directory is available. If the script that the cron job runs is stored locally, you can prepend a check that the home directory is available before doing the real work, and otherwise just exit with an error code, as sketched below.
If the script that cron is attempting to run is itself in the user's home directory, you're out of luck: an error will occur even in trying to run the script that performs the check. You would need to check the status of the NAS share before the cron job runs, but the cron job will run regardless. See where I'm going?
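Here is a minimal wrapper along those lines, assuming the wrapper and the real job script are both stored locally (all paths are illustrative):

#!/bin/sh
# Skip the job if the NAS-mounted home (and its .profile) is unreachable.
HOMEDIR=/export/home/testuser
if [ ! -r "$HOMEDIR/.profile" ]; then
    echo "`date`: NAS home unavailable, skipping job" >> /var/tmp/testuser_cron.log
    exit 1
fi
. "$HOMEDIR/.profile"            # load the usual login environment
exec /usr/local/bin/real_job.sh  # the actual work, stored locally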
Again, I would suggest monitoring the NAS and reporting when it is failing.