I am creating a server job but am facing this error in DataStage. Please help if you have any idea about this error:

CamelServer..Sequential_File_2.DSLink3: ds_seqopen() - Win32 error in CreateFile - Access is denied.

It looks like a user rights issue, as if the user running the job does not have the necessary rights on the directory where you are reading/writing the file. A little more context would help in understanding the issue (e.g. is the job run manually from the Director or is it scheduled, is the offending directory local to the server, etc.)?

The error message indicates that the error occurs in the Sequential File stage. It is unclear whether this issue occurred on the first run of the job or on subsequent runs.
DataStage jobs can run under different user IDs unless you have all users' credentials mapped to a common ID such as dsadm.
The most likely causes of the above error are:
1. The target directory where you have chosen to create the sequential file has permissions that do not permit file creation/writing by the user ID running the job.
2. Or, the job was previously run with a different user ID, creating a file owned by that user ID, and the new run uses a user ID that does not have permission to overwrite the original file.
Check the job log to see which user ID is running the job; the user ID is on every event message. Then confirm the file name and location the Sequential File stage is trying to write to, and at the OS level, confirm whether that file already exists under a different user ID (if so, try updating the file's permissions), and also confirm that the directory permissions allow writes by the user ID now running the job.
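If you have shell access to the engine host, a quick pre-check can confirm which of the two it is. Below is a minimal sketch in Python, assuming a hypothetical target path (note that os.access is only an approximation where Windows ACLs are involved); run it as the same user ID that runs the job:

    import os

    # Hypothetical target of the Sequential File stage; substitute the real path.
    path = r"C:\ds\data\DSLink3_output.txt"

    if os.path.exists(path):
        # The file exists: a previous run, possibly under another user ID, may own it.
        print("file exists; writable by current user:", os.access(path, os.W_OK))
    else:
        # The file is absent: the job must be able to create it in the parent directory.
        parent = os.path.dirname(path)
        print("file absent; parent directory writable:", os.access(parent, os.W_OK))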

Related

Autosys job failing as the file it edits was open

I have an Autosys job which logs into a Windows machine, performs some tasks, and logs all its activity to a Word file on the machine.
Yesterday's run failed because the file it logs to had been left open by a user who logged into the machine to check the logs; the Word document (my log file, that is) always opens in edit mode. Is there a way to restrict anyone except the automated Autosys job from making changes to the log file?
No, but you can add %autorun% to the log name when the job's log is created, which will then create a new log every time the job runs.
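For example, in a hypothetical JIL definition it might look something like this (the job name, machine, command, and paths are made up, and this assumes the log in question is, or can be redirected to, the job's standard output file):

    /* hypothetical JIL fragment: a fresh log file for every run */
    insert_job: my_windows_task   job_type: c
    machine: winbox01
    command: C:\scripts\do_work.bat
    std_out_file: C:\logs\my_windows_task.%autorun%.log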

LocalDeployer: app working directory

I have an app that creates a file temporarily and does not delete it. I was hoping to see the contents of the file while it runs.
The app is deployed using the local deployer; does anybody know where it would create the file?
I tried the temp path, and also the working directory where the out and error logs are... nothing. The app does not seem to be erroring; that would show up in my normal console log.
Running on unix, temp is at /tmp.
thanks
You can control this location via the local deployer properties workingDirectoriesRoot and deleteFilesOnExit.
For more information, you can refer to this doc:
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-deployer
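For example, something like the following could go in the Data Flow server's configuration (spring.cloud.deployer.local is the local deployer's property prefix; the directory value is just an example):

    spring.cloud.deployer.local.working-directories-root=/opt/dataflow/work
    spring.cloud.deployer.local.delete-files-on-exit=false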
Actually, looking at the code of the local deployer, it seems the location defaults to the system temp path (System.getProperty("java.io.tmpdir")) plus the stream ID, the app ID, etc. It is the same folder the console and error streams write to.
thanks!

Windows Service calls Command Line Application which requires user logged in

I have an issue where my Windows service runs a third-party command line application.
This works fine; however, the command line application attempts to access HKEY_CURRENT_USER registry keys, and so it throws the following exception:
System.IO.IOException: Illegal operation attempted on a registry key that has been marked for deletion.
at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str)
at Microsoft.Win32.RegistryKey.SetValue(String name, Object value, RegistryValueKind valueKind)
at Microsoft.Win32.RegistryKey.SetValue(String name, Object value)
at ACMEApp.Settings.GetValue(String name, Object def) in SomeClass.cs:line 542
The service is configured to run under a dedicated domain user account (with admin privileges on the machine).
I have a workaround for this, which is to leave the user the service runs under logged in to the machine. This is far from ideal: other users have logged on to the machine and logged this user off, causing the issue to reappear. I can't make the third party change their code, although the application was meant to be runnable from my service. The error also hangs the console application.
I need some thoughts on how to tackle this one. Is there a way to just make it work (magic), or is it better to make error reporting more obvious?
At the moment the service just logs to a log file, whereas the failures only surface when users report that things have not worked.

Jenkins job log monitoring, parsing with error pattern in master

I am working on a Perl script which will do the following:
Trigger a script in a post-build action when the job fails.
Read the log file and try to match the errors against a consolidated error/solution file.
If an error matches the pattern file, append the error message together with its solution at the end of the log file.
I am facing the following challenges:
All jobs run on a slave, but the error log file is stored on the master. How can I run the script in a post-build action? The script path will be taken from the slave, but my script is located on the master. Is there any workaround for this?
The path of the error log is - /home/jenkins/data/jobs//builds/BUILD_NUMBER/log
We have many jobs in folders created by the Jenkins Folders plugin... how do we set the common folder for these?
/home/jenkins/data/jobs/FOLDERX//builds/BUILD_NUMBER/log
Other questions:
Do you think that publishing the Jenkins error log and displaying the solution is the right approach?
There is no information on how complex the pattern matching is, but if it is a simple line-based regex match, there is a plugin for that called Build Failure Analyzer.
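If you do end up rolling your own, the core matching loop is small. Here is a minimal sketch in Python (you mention Perl, but the logic carries over directly); the file names and the pattern-file format, one regex and its solution per line separated by a tab, are assumptions for illustration:

    import re

    # Assumed format of the consolidated error/solution file:
    # one entry per line: a regex, a tab, then the suggested solution.
    with open("error_patterns.txt") as f:
        patterns = [line.rstrip("\n").split("\t", 1) for line in f if "\t" in line]

    log_path = "log"  # the build's log file on the master
    matches = []
    with open(log_path) as log:
        for line in log:
            for regex, solution in patterns:
                if re.search(regex, line):
                    matches.append((line.strip(), solution))

    # Append each matched error together with its known solution to the log.
    with open(log_path, "a") as log:
        for error, solution in matches:
            log.write("MATCHED: %s\nSOLUTION: %s\n" % (error, solution))

Note that the log you append to must be the master-side copy, which is exactly the crux of your first challenge.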

How to check whether a user's ".profile" exists before running crontab in Solaris 10

I am using Solaris 10.
I have another user apart from root, say testuser, whose home directory is mounted on a NAS file system.
I have some scripts which need to run as testuser, so I added them to testuser's crontab.
As long as the NAS is up, all the cron jobs run properly, but when the NAS goes down, cron itself crashes with this error:
! could not obtain latest contract for PID 15621: No such process
I searched for this issue and learned that it happens because the user's .profile file is not accessible. So is there any way to check whether a user-specific .profile file exists before running any scheduled job?
Any help on this will be appreciated.
I think a better solution would be to actively monitor the NAS share, and report an error (however errors are reported at your location) if it isn't available. You can use tools like nfsstat to get statistics on the NAS share (assuming it is mounted via NFS). That seems better than checking whether it's working just before running cron -- check to make sure the share is available, because if it isn't, attention is needed.
Cron doesn't depend on anything but time, so it will run regardless of whether the user's home directory is available. If the script the cron job runs is local, you could prepend a check that the home directory is available, and otherwise just exit with an error code.
If the script that cron is attempting to run is in the user's home directory, you're out of luck, because an error will occur even in trying to run the script that would perform the check. You will need to check the status of the NAS share before attempting to run the cron job, but the cron job will fire regardless. See where I'm going?
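For the local-script case, the prepended check could be as simple as the following wrapper, sketched here in Python; the user name, paths, and job script are placeholders:

    #!/usr/bin/env python
    # Hypothetical cron wrapper: run the real job only when the NAS-backed
    # home directory (and its .profile) is actually reachable.
    import os
    import subprocess
    import sys

    profile = "/home/testuser/.profile"    # assumed NAS-mounted location
    job = "/usr/local/scripts/nightly.sh"  # hypothetical local job script

    if not os.path.isfile(profile):
        sys.stderr.write("NAS home not available; skipping job\n")
        sys.exit(1)

    sys.exit(subprocess.call([job]))

If Python isn't available on the Solaris box, the same three steps -- test for the file, bail out with an error code, otherwise run the real job -- translate directly to plain sh.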
Again, I would suggest monitoring the NAS and reporting when it is failing.