How to check whether a user's ".profile" exists before running a cron job in Solaris 10 - solaris

I am using Solaris 10.
I have another user apart from root, say testuser, whose home directory is mounted on a NAS file system.
I have some scripts which need to be run as testuser, so I added them to testuser's crontab.
As long as the NAS is up, all the cron jobs run properly, but when the NAS goes down, cron itself crashes with this error:
! could not obtain latest contract for PID 15621: No such process
I searched for this issue and learned that the error occurs because the user's .profile file is not accessible. So is there any way to check whether a user-specific .profile file exists before running any scheduled job?
Any help on this will be appreciated.

I think a better solution would be to actively monitor the NAS share and report an error (however errors are reported at your location) if it isn't available. You can use tools like nfsstat to get statistics on the NAS share (assuming it is mounted via NFS). That seems better than checking whether it's working before running cron -- check that the share is available, because if it isn't, attention is needed.
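A minimal sketch of that kind of check, assuming the share is NFS-mounted at a made-up path and that alerts go out by mail (both are assumptions, not from the question):

#!/bin/sh
# Hypothetical NAS health check run from a local crontab.
MOUNT=/export/home/testuser

# df touches the mount point; on a hard-mounted share whose server is down it may block,
# so some sites prefer parsing nfsstat -m output or pinging the filer instead.
df -k "$MOUNT" > /dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "`date`: NAS share $MOUNT is not available" | mailx -s "NAS alert" admin@example.com
fi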
Cron doesn't depend on anything but time, so it will run regardless of whether or not the user's home directory is available. If the script that the cron job is running is local, then you could prepend a check to make sure the home directory is available before running, otherwise just exit with an error code.
If the script that cron is attempting to run is itself in the user's home directory, you're out of luck, because an error will occur even in trying to run the script that would do the check. You will need to check the status of the NAS share before attempting to run the cron job, but the cron job will run regardless. See where I'm going?
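As a rough illustration, a local wrapper script along these lines could do the check (all paths are assumptions; note that the test itself can block on a hard-mounted share whose server is unreachable):

#!/bin/sh
# Hypothetical local wrapper to put in testuser's crontab in place of the real script.
# All paths are assumptions; the wrapper and the real job must live on local disk.
PROFILE=/export/home/testuser/.profile
JOB=/usr/local/scripts/realjob.sh

if [ -f "$PROFILE" ]; then
    "$JOB"
else
    echo "`date`: $PROFILE not reachable, skipping job" >> /var/tmp/testuser_cron.log
    exit 1
fi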
Again, I would suggest monitoring the NAS and reporting when it is failing.

Related

Windows Services - How can I find the darktable instance in Windows services

I accidentally screwed up my darktable configuration, so I reloaded it from scratch. To avoid losing all the recorded changes I have made to my pictures, I wrote a PowerShell backup script for the darktable database. I want to launch this script from the Windows Task Scheduler whenever I launch darktable. I have found the event ID in the security log which indicates that a new process has been created, which I should be able to use to automatically launch my backup script from Task Scheduler. I want to add code to the script to check whether darktable is actually running and only perform the backup if it is. Anyone know how I can identify this?
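A check along those lines might look roughly like this in PowerShell (the process name "darktable" and the backup script path are assumptions):

# Hypothetical check: only run the backup when a darktable process exists.
if (Get-Process -Name "darktable" -ErrorAction SilentlyContinue) {
    & "C:\scripts\Backup-DarktableDb.ps1"   # hypothetical backup script path
}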

PowerShell - Transfer Files Using BITS protocol as a scheduled task - not working when logged off

I have written a PowerShell script which uses BITS transfers in order to copy large files from a source machine to a destination machine. When I run the PowerShell script manually while logged in to my destination remote machine (where the files need to be copied), the files are copied without any issues.
But when I create a scheduled task that runs the PowerShell script daily at a particular time, it throws the following error:
The operation being requested was not performed because the user has not logged on to the network. The specified service does not exist. (Exception from HRESULT: 0x800704DD)
The script works only when I am logged in to the destination machine where the file is being copied. If I am logged off from that machine at the scheduled task time, I get the error. Please suggest what needs to be done to make the scheduled task run even if I log off from that machine.
According to the documentation about BITS found at https://msdn.microsoft.com/en-us/library/windows/desktop/aa362708(v=vs.85).aspx (and the specific PowerShell documentation found at https://msdn.microsoft.com/en-us/library/windows/desktop/ee663885(v=vs.85).aspx), the user that created the jobs has to be logged on.
PowerShell doc:
Important  
When you use *-BitsTransfer cmdlets from within a process that runs in a noninteractive context, such as a Windows service, you may not be able to add files to BITS jobs, which can result in a suspended state. For the job to proceed, the identity that was used to create a transfer job must be logged on. For example, when creating a BITS job in a PowerShell script that was executed as a Task Scheduler job, the BITS transfer will never complete unless the Task Scheduler's task setting "Run only when user is logged on" is enabled
Generic documentation:
BITS continues to transfer files after an application exits if the user who initiated the transfer remains logged on and a network connection is maintained. BITS will not force a connection.
BITS suspends the transfer if a connection is lost or if the user logs off. BITS persists transfer information while the user is logged off, across network disconnects, and during computer restarts.
In short: what you want is not possible.
(This is one of the reasons why you should always read the documentation when encountering problems like these.)
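For completeness, the kind of cmdlet call affected by this limitation looks something like the following; the module and cmdlet names are real, but the paths are placeholders, not taken from the question:

# Hypothetical sketch of a BITS copy in a script like the one described.
Import-Module BitsTransfer
Start-BitsTransfer -Source "\\sourcemachine\share\largefile.zip" -Destination "D:\incoming\"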

robocopy error with ERROR 32 (0x00000020)

I have two drives, A and B. Using a Python script I am creating some files on drive A, and I am running a PowerShell script which copies all the files from drive A to drive B at an interval of 1 second.
I am getting this error in my PowerShell:
2015/03/10 23:55:35 ERROR 32 (0x00000020) Time-Stamping Destination File \\x.x.x.x\share1\source\Dummy_100.txt
The process cannot access the file because it is being used by another process.
Waiting 30 seconds...
How can I overcome this error?
This happens because the file is locked by a running process. To fix this, download Process Explorer. Then use Find > Find Handle or DLL to find out which process has locked the file. Use 'taskkill' to kill that process from the command line. You will be fine.
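For example, once Process Explorer shows the process ID holding the handle (12345 below is a made-up PID):

taskkill /PID 12345 /F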
If you want robocopy to skip such files sooner, you can use /R:n, where n is the number of retries, and /W:n, where n is the wait time between retries.
For example, /W:3 /R:5 will retry 5 times, waiting 3 seconds between tries, and then move on.
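An invocation along those lines (the source and destination paths are placeholders):

robocopy C:\source \\x.x.x.x\share1\dest /R:5 /W:3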
How can I overcome this error?
If backup is what you have in mind and you encounter in-use files frequently, you could look into Volume Shadow Copies (VSS), which allow files to be copied even while they are 'in use'. It's not a product, but a Windows technology used by various backup tools.
Sadly, it's not built into robocopy, but can be used in conjunction with it. See
➝ https://superuser.com/a/602833/75914
and especially:
➝ https://github.com/candera/shadowspawn
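As a rough sketch of how the two combine (the paths and the Q: drive letter are placeholders, and I'm quoting the syntax from memory of the ShadowSpawn README, so verify it against the linked repository):

shadowspawn C:\source Q: robocopy Q:\ D:\backup /MIR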
There could be many reasons.
In my case, I was running a CMD script to copy a heap of SQL Server backups and transaction logs from one server to another. I had the same problem because it was trying to write into a log file that was supposedly opened by another process. It was not.
I ran so many IP checks and process ID checkers that I ran out of ideas about what was hogging the log file. Event Viewer said nothing.
I found out it was not even the log file that was being locked. I was able to delete it by logging into the server as a normal user with no admin privileges!
It was the backup files themselves that were locked by the SQL Server Agent. As #Oseack said, there may have been a need to use another tool while the backup files were still being used or locked by the SQL Server Agent.
The way I got around it was to force ROBOCOPY to wait.
/W:5
did it.

Why do I have to provide account info to run mysqldump.exe through a scheduled task in Windows 7

I created a task in my Win7 to run php.exe. The command is: C:\xampp\php\php.exe -q "C:\xampp\htdocs\creport\cleaner.php", and it works fine.
Then I created another task to run mysqldump.exe. The command is: C:\xampp\mysql\bin\mysqldump.exe -u root -pvince c_report > C:\dbfiles\backup-"%DATE:/=-%.sql", but when creating the task a window popped up asking for account information like:
[Sorry I don't have enough reputation to insert an image in my post]
Why is that? I mean, why are the two .exe files treated differently? And probably just because of that, I always fail to run mysqldump.exe through the task; it fails with the last-run result being 0x6.
Thanks a lot for any help!
Actually, I was fooled. I just found that whether the window pops up has nothing to do with which exe the task is scheduled to run. It comes up just because I had ticked the option [Run whether user is logged on or not]; therefore, in case the user account is logged out, Windows needs to store the password to log it in to run the task.
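For comparison, creating an equivalent task from the command line makes this explicit: to run whether the user is logged on or not, schtasks has to be given credentials it can store (the task name, wrapper script, and user name below are made up; /RP * prompts for the password):

schtasks /Create /TN "CReportBackup" /TR "C:\dbfiles\backup.cmd" /SC DAILY /ST 23:00 /RU MyWindowsUser /RP *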

Google Cloud Storage - GSUtil Update fails because file used by another process

We use an ETL process to pull data from Google Cloud Storage, but annoyingly it hangs every time Google releases updates to GSUtil, because it sits at a prompt asking if you want to update the library. Fine if you are doing this manually, but not cool when it's being run in an automated SSIS package, as jobs don't finish for days and you keep wasting time on the same stupid cause.
I thought I was going to be clever and add "python gsutil update -n" to the top of the bash script whose building and execution I'm automating in my SSIS package, in the hope of curbing this problem, but when I run this command from the prompt on either Windows Server 2008r2 or Windows 7 I get the following:
C:\gsutil>python gsutil update -f -n
Copying gs://pub/gsutil.tar.gz...
OSError: The process cannot access the file because it is being used by another process.
Any help?
P.S. - Also, Google engineers... can you PLEASE remove these prompts for all of us using these tools in automated processes? I have other things to work on, instead of constantly going back to things like this every few days/weeks.
What version of gsutil are you running?
Also, to be clear: Are you talking about the fact that gsutil checks for available software updates periodically, and if it finds them it then prompts you whether you want to update? Or are you talking about the fact that the gsutil update command asks if you want to perform the update?
If the former, gsutil shouldn't be performing this check/prompting if you are running gsutil from a script not connected to a TTY. If that's not working correctly we'd like to know.
And also, if that's the problem you're having, you can completely disable automated software update checks by setting software_update_check_period=0 in the [GSUtil] section of your .boto config file.
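That is, in the .boto config file:

[GSUtil]
software_update_check_period = 0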