Where are the standard output commands for scheduled jobs logged in Rundeck?

I am trying to analyse the logs of scheduled jobs in a project in Rundeck. When I check the successful logs of a job in the Rundeck GUI, I can see some lines in the Log Output tab, however I wish to see where these logs are on the machine.
Here's what I have already tried:
I have checked /var/log/rundeck after reading some documentation here
I have also gone through the script to see if the logs are being logged elsewhere.
The logs I am looking for are standard print statements. Where can I find these logs?

Rundeck has two kinds of logs: "general logs" (located at /var/log/rundeck) and execution logs (what your question is about), located at /var/lib/rundeck/logs/rundeck/your-project-name/job/your-job-id/logs.
Those paths apply to a DEB/RPM-based installation. If you are using a WAR-based installation, the "general logs" are located in $RDECK_BASE/server/logs and the execution logs at $RDECK_BASE/var/logs/rundeck/your-project-name/job/your-job-id/logs.
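As a quick check, here is a small shell sketch for listing the newest execution logs on a DEB/RPM install. The project name and job UUID are placeholders you would substitute with your own values:

```shell
#!/bin/sh
# Placeholders: replace with your real project name and job UUID.
PROJECT="your-project-name"
JOB_ID="your-job-id"
LOG_DIR="/var/lib/rundeck/logs/rundeck/$PROJECT/job/$JOB_ID/logs"

if [ -d "$LOG_DIR" ]; then
    # List the most recent execution logs first
    ls -lt "$LOG_DIR" | head
else
    echo "log directory not found: $LOG_DIR" >&2
fi
```

Each execution writes its own log file there, so sorting by modification time puts the latest run at the top.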

Related

Group Policy completely failing on a few domain-joined client computers

I've recently run into an issue where group policy is failing to apply on a few computers. When I run GPUPDATE /FORCE, this is the output:
Updating policy...
Computer Policy update has completed successfully.
The following warnings were encountered during computer policy processing:
Windows failed to apply the Group Policy Folders settings. Group Policy Folders settings might have its own log file. Please click on the "More information" link.
Windows failed to apply the Group Policy Files settings. Group Policy Files settings might have its own log file. Please click on the "More information" link.
Windows failed to apply the Group Policy Registry settings. Group Policy Registry settings might have its own log file. Please click on the "More information" link.
User Policy update has completed successfully.
The following warnings were encountered during user policy processing:
Windows failed to apply the Group Policy Folders settings. Group Policy Folders settings might have its own log file. Please click on the "More information" link.
Windows failed to apply the Group Policy Files settings. Group Policy Files settings might have its own log file. Please click on the "More information" link.
Windows failed to apply the Deployed Printer Connections settings. Deployed Printer Connections settings might have its own log file. Please click on the "More information" link.
Windows failed to apply the Group Policy Folder Options settings. Group Policy Folder Options settings might have its own log file. Please click on the "More information" link.
Windows failed to apply the Group Policy Scheduled Tasks settings. Group Policy Scheduled Tasks settings might have its own log file. Please click on the "More information" link.
For more detailed information, review the event log or run GPRESULT /H GPReport.html from the command line to access information about Group Policy results.
I've done the following to troubleshoot, but I keep hitting walls and dead ends:
Removed and readded the faulting computers to the domain
Confirmed domain connectivity by successfully pinging domain controllers
Confirmed DNS settings haven't been changed on NICs
Used ProcMon to see if new files/folders are being added, but saw no references to these files/folders
Investigated Event Viewer, but only saw generic errors like, "Windows failed to apply the Group Policy Files settings. Group Policy Files settings might have its own log file. Please click on the "More information" link." Diving deeper into Event Viewer -> Applications and Services Logs -> Microsoft -> Group Policy -> Operational shows only informational entries like, "Completed Group Policy Shortcuts Extension Processing in 62 milliseconds."
Checked RSOP on the faulting computers, and while I see the policy I'm trying to push listed in the General tab, the Error Information tab shows that Group Policy Registry, Folders, and Files all failed. The Details section simply states, "Group Policy [Registry/Files/Folders] failed due to the error listed below", yet no error is listed.
Ran GPRESULT /H GPReport.html and examined the report, but only received generic messages like, "Additional information may have been logged. Review the Policy Events tab in the console or the application event log for events between 2/18/2021 12:29:39 PM and 2/18/2021 12:29:39 PM." The additional information referenced is reflected in point 5 above and is obviously unhelpful.
Tried GPUPDATE /SYNC, but I receive the error, "Failed to set the policy mode. Error - The system cannot find the file specified. Exiting...," and I have no idea what file this error is referencing.
Checked Event Viewer on domain controller, but found no relevant information about these failures
Checked for FS corruption via SFC /SCANNOW and DISM /ONLINE /CLEANUP-IMAGE /RESTOREHEALTH
I'm really pulling my hair out over this one. If anyone could point me in the right direction or provide a fix, that would be incredible. Thank you so much!

How to track installer script in a pipeline not executing?

I'm new to the whole Azure DevOps world and just got transferred to a new team that does just that.
One of my assignments is to fix an issue with a pipeline where one of the steps runs a shell script that installs an application. Currently, the step seems to run without any issue shown on the log, but when we connect to the container's pod, the app is not there.
If we run the script directly inside the pod, the application is installed correctly. I'm not sure how to track this down. One of the things I've tried was checking the event log for errors while the installation executes: Get-EventLog -LogName "Windows PowerShell" -Newest 20, but no luck so far. Again, I'm kind of new at this and not sure what other tools are out there to track down why the script is not installing during the pipeline execution.
To troubleshoot your pipeline run, you can configure your pipeline logs to be more verbose.
1. To configure verbose logs for a single run, start a new build by choosing Run pipeline, selecting Enable system diagnostics, then Run.
2. To configure verbose logs for all runs, add a variable named system.debug and set its value to true.
You can also try logging into your agent server and checking its event log. See this blog for viewing the event log on Windows.
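For option 2, setting the variable in the pipeline YAML can be sketched as follows. This is only a fragment, assuming a YAML-defined pipeline; the rest of the pipeline (trigger, pool, steps) is not shown:

```yaml
# azure-pipelines.yml (fragment) -- enables verbose logs for every run.
# The variable name comes from the steps above.
variables:
  system.debug: 'true'
```

With this in place, every run emits the same diagnostic detail as a one-off run started with Enable system diagnostics.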
The issue was related to how the task was awaited. Piping the process through Wait-Process solved it:
RUN powershell C:\dev\myprocess.ps1 -PassThru | Wait-Process;

Where does Rundeck store job logs?

Let's say I connect to the Rundeck UI at <server>:4440. I construct a job, schedule it to run every 15 min., then wait a few days. Then, I want to do some analysis of the Job runlogs, gathering some statistics from logging statements I added. The problem is... where are the logs? Are they somewhere on <server>? Or on some other node (if so, what server and what file path).
I know I can download the logs, but they're big, so I'd rather do the statistics gathering close to where the log data live.
The execution log output is stored in the location specified in your framework.properties file. By default:
framework.logs.dir=$RDECK_BASE/var/logs
"Directory for log files written by core services and Rundeck Server’s Job executions"
ref. Configuration File reference.
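To confirm the effective value on your server, a small shell sketch (the properties file path below assumes a DEB/RPM install; adjust it for a WAR install):

```shell
#!/bin/sh
# Print the configured log directory, if the properties file is present.
PROPS="/etc/rundeck/framework.properties"

if [ -r "$PROPS" ]; then
    # Show the framework.logs.dir setting as configured on this server
    grep '^framework\.logs\.dir' "$PROPS"
else
    echo "framework.properties not readable at $PROPS" >&2
fi
```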
Hope it helps!

Autosys job failing as the file it edits was open

I have an Autosys job which logs into a Windows machine, performs some tasks, and logs all the workings into a Word file on that machine.
Yesterday's run failed because the file it logs to was left open by a user who had logged into the machine to check the logs; the Word document (my log file, that is) always opens in edit mode. Is there a way to restrict anyone except the automated Autosys job from making changes to the log file?
No, but you can add %autorun% to the job's log file name, which will create a new log every time the job runs.

How to check whether a user's ".profile" exists before running crontab in Solaris 10

I am using Solaris 10.
I have another user apart from root, say testuser, whose home directory is mounted on a NAS file system.
I have some scripts which need to run as testuser, so I added them to testuser's crontab.
As long as the NAS is up, all the cron jobs run properly, but when the NAS goes down, cron itself crashes with this error:
! could not obtain latest contract for PID 15621: No such process
I searched for this issue and learned that it occurs because the user's .profile file is not accessible. So is there any way to check whether a user-specific .profile file exists before running any scheduled job?
Any help on this will be appreciated.
I think a better solution would be to actively monitor the NAS share, and report an error (however errors are reported at your location) if it isn't available. You can use tools like nfsstat to get statistics on the NAS share (assuming it is mounted via NFS). That seems better than checking whether it's working right before a job runs -- check that the share is available, because if it isn't, attention is needed regardless.
Cron doesn't depend on anything but time, so it will run regardless of whether the user's home directory is available. If the script that the cron job runs is local, you could prepend a check to make sure the home directory is available before running, and otherwise exit with an error code.
If the script that cron is attempting to run is itself in the user's home directory, you're out of luck: an error will occur even in trying to run the script that performs the check. You would need to check the status of the NAS share before attempting to run the cron job, but the cron job will fire regardless. See where I'm going?
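The "prepend a check" idea can be sketched as a small local wrapper script that the crontab calls instead of the real job. The home directory path and job script name below are hypothetical; substitute your own:

```shell
#!/bin/sh
# Hypothetical cron wrapper: only run the real job if the NAS-mounted
# home directory (and its .profile) is actually readable.

check_profile() {
    # Returns 0 (success) if .profile under the given directory is readable.
    [ -r "$1/.profile" ]
}

run_job_if_home_ok() {
    dir="$1"
    job="$2"
    if check_profile "$dir"; then
        "$job"
    else
        echo "NAS home unavailable, skipping job" >&2
        return 1
    fi
}

# Example invocation; in the crontab you would call this wrapper script,
# stored on local disk, instead of the job itself:
run_job_if_home_ok "/export/home/testuser" \
    "/export/home/testuser/scripts/nightly_job.sh" || true
```

The wrapper itself must live on a local filesystem, otherwise it fails the same way the original job does when the NAS is down.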
Again, I would suggest monitoring the NAS and reporting when it is failing.