I have two drives, A and B. A Python script creates files on drive A, and a PowerShell script copies all the files from drive A to drive B at an interval of 1 second.
I am getting this error in PowerShell:
2015/03/10 23:55:35 ERROR 32 (0x00000020) Time-Stamping Destination File \\x.x.x.x\share1\source\Dummy_100.txt
The process cannot access the file because it is being used by another process.
Waiting 30 seconds...
How can I overcome this error?
This happens because the file is locked by a running process. To fix it, download Process Explorer, then use Find > Find Handle or DLL to find out which process has locked the file. Use taskkill to kill that process from the command line, and you will be fine.
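For reference, a rough sketch of that workflow from a PowerShell or Command Prompt window (the process-name filter and the PID below are only placeholders for whatever Process Explorer reports):

# list candidate processes (filtering on "python" is only an assumption for this scenario)
tasklist | findstr /i "python"
# forcefully kill the locking process; replace 1234 with the PID Process Explorer shows
taskkill /PID 1234 /F

Bear in mind that in this particular setup the locking process may simply be the Python script that is still writing the file, in which case closing the file promptly in the script is a better fix than killing the process.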
If you want robocopy to skip such files after a few attempts, you can use /r:n, where n is the number of retries.
For example, /w:3 /r:5 will retry 5 times, waiting 3 seconds between attempts.
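As a minimal sketch, the switches slot into the copy command described in the question roughly like this (the paths are placeholders):

# retry each failed or locked file 5 times, waiting 3 seconds between attempts
robocopy D:\source \\server\share\dest /E /R:5 /W:3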
How can I overcome this error?
If backup is what you have in mind and you encounter in-use files frequently, you should look into Volume Shadow Copies (VSS), which allow files to be copied even while they are 'in use'. VSS is not a product, but a Windows technology used by various backup tools.
Sadly, it's not built into robocopy, but it can be used in conjunction with it. See
➝ https://superuser.com/a/602833/75914
and especially:
➝ https://github.com/candera/shadowspawn
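For illustration, ShadowSpawn is invoked roughly like this (the paths and drive letter are placeholders; check the project's README for the exact syntax): it snapshots the source via VSS, mounts the snapshot as a temporary drive, and runs the given command against it.

# snapshot C:\source, expose it as Q:, then let robocopy copy from the snapshot
shadowspawn C:\source Q: robocopy Q:\ \\server\share\dest /E /R:5 /W:3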
There could be many reasons.
In my case, I was running a CMD script to copy a heap of SQL Server backups and transaction logs from one server to another. I had the same problem because the script was trying to write into a log file that was supposedly opened by another process. It was not.
I ran so many IP checks and process ID checkers that I ran out of ideas about what was hogging the log file. Event Viewer said nothing.
I found out it was not even the log file that was being locked. I was able to delete it by logging into the server as a normal user with no admin privileges!
It was the backup files themselves that were locked, by the SQL Server Agent. As #Oseack said, another tool may be needed while the backup files are still being used or locked by the SQL Server Agent.
The way I got around it was to force ROBOCOPY to wait.
/W:5
did it.
I am working on setup scripts for a WebLogic Portal domain. This requires me to create the domain, delete it, and try again many times. However, I have found that when I start the server and then kill it, I cannot delete the folder where the domain is saved (C:\portal-10.3.7\user_projects\domains\myDomain). There are some .DAT files which are used as part of a persistent file store, and they keep getting created even when I kill the servers (C:\portal-10.3.7\user_projects\domains\myDomain\servers\AdminServer\data\store). The only way I can delete them is to restart my computer. I've tried killing processes from Task Manager and shutting down services, but I can't figure out what keeps generating these files. Other developers using the domain setup scripts have the same complaint.
Edit:
Using a tool called "Process Explorer", I found that there is a process holding the file.
Process Explorer reports that PID 4 is using the file.
When I run tasklist I can see that PID 4 has:
Image Name=System
Session=Services
I looked around and found that PID 4 is 'NT Kernel & System', so I can't kill it or the whole system will go down. I'm not sure whether there is a specific DLL I can kill, or how to find which DLL holds the file and shut just that down.
I went so far as to download the Sysinternals handle tool (Microsoft's own tool). I was able to find the handle identifier, but Microsoft's own tool is not able to release the handle. It is infuriating how much time I have wasted.
edit ...
One last thing to mention, and then I officially give up. When I start the server, I can see that the server's Java process owns the handle. When I shut the server down (using either the shutdown script or by killing the Java process), I can see that the System process with PID 4 takes over the handle to the file.
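For reference, the handle.exe steps I tried look roughly like this (the hexadecimal handle value and PID are placeholders taken from the tool's own listing; in my case the close step fails because the owner is the System process):

# list which processes hold handles under the domain directory
handle.exe C:\portal-10.3.7\user_projects\domains\myDomain
# attempt to force-close one handle, given its hex value and owning PID
handle.exe -c 1A4 -p 4 -y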
I have a database on "Microsoft OneDrive", and I have 4 valid licenses from Gupta for SqlBase. When I try to run from PC 1, I can access the database, but when I try the same from PC 2 I get this:
Reason: Attempting to open an existing file and a failure has occurred.
Remedy: Determine and correct the cause of the open file failure.
Verify that the specified file exists. Verify the number of
files allowed open for the operating system permits the
additional file, that is, check the FILES= configuration
parameter setting.
I assume this is related to the LOG files of the database and some settings in Sql.Ini, but I'm not able to find where or how.
The intention is to run the database on "OneDrive", buy SqlBase licenses, and run a multi-user system. The application has been built with that in mind.
Where is my thinking wrong?
What am I doing wrong?
What settings are missing?
Thanks
That won't work.
SqlBase (like every other RDBMS) is built to manage one database file plus its log files.
When multiple instances work on more-or-less replicated data files, this ends up in a clash.
There are systems which can work as a distributed cluster (e.g. the document database RavenDB), but they are built to work that way (not with OneDrive, of course, but with their own replication mechanism). SqlBase is not.
One of the drives on my server recently gave out and corrupted the OS. I was able to restore all the files, but now I have a backup drive with just the file system; it's not bootable. I'm setting up a new server now and need to set up the old cron jobs. Is there a way to look through the file structure to see all the cron jobs that were set up on the old server? The server was CentOS; I'm not sure of the version. Thanks in advance!
Crontabs belonging to individual users should be found in
/var/spool/cron/##USERNAME##
Whereas the server-wide crontab should be in
/etc/crontab
I am using Solaris 10.
I have another user apart from root, say testuser, whose home directory is mounted on a NAS file system.
I have some scripts which need to be run as testuser, so I added them to testuser's crontab.
As long as the NAS is up, all the cron jobs run properly, but when the NAS goes down, cron itself crashes with this error:
! could not obtain latest contract for PID 15621: No such process
I searched for this issue and learned that the error occurs because the user's .profile file is not accessible. So, is there any way to check whether the user-specific .profile file exists before running any scheduled job?
Any help on this will be appreciated.
I think a better solution would be to actively monitor the NAS share and report an error (however errors are reported at your site) if it isn't available. You can use tools like nfsstat to get statistics on the NAS share (assuming it is mounted via NFS). That seems better than checking whether it's working before running the cron job: monitor to make sure the share is available, because if it isn't, attention is needed anyway.
Cron doesn't depend on anything but time, so it will run regardless of whether or not the user's home directory is available. If the script that the cron job runs is local, you could prepend a check to make sure the home directory is available before running, and otherwise exit with an error code.
If the script that cron is attempting to run is itself in the user's home directory, you're out of luck, because an error will occur before the script can even run to perform the check. You would need to check the status of the NAS share before attempting to run the cron job, but the cron job will run regardless. See where I'm going?
Again, I would suggest monitoring the NAS and reporting when it is failing.
I've developed a PowerShell script to deploy updates to a suite of applications, including SQL Server database updates.
Next, I need a way to execute these scripts on 100+ servers without manually connecting to each server. "PowerShell v2 with remoting" is not an option, as it is still in CTP.
PowerShell v1 with WinRM looks the most promising, but I can't get feedback from my scripts. The scripts execute, but I need to know about exceptions. The scripts create a log file; is there a way to send the contents of the log file back to the "client" (the local computer making the remote calls)?
The quick answer is no. The long version: it is possible, but it will involve lots of hacks. I developed a very similar deployment script/system using PowerShell 2 last year. The remoting feature is the primary reason we put up with the CTP status. PowerShell 1 with WinRM is flaky at best and, as you said, gives no real feedback apart from OK or failed.
Alternatives that I considered included PsExec, which is very much non-standard and may be blocked by a firewall. The other approach involves using systems-management tools such as Microsoft's System Center, but that's a big hammer for a tiny nail. So you have to pick your poison...
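For what it's worth, a PsExec run against a single server looks roughly like this (the server name and script path are placeholders, and the script is assumed to already exist on the remote machine):

# run the deployment script remotely under the current credentials; PsExec returns its exit code
psexec \\server01 powershell.exe -Command "& 'C:\deploy\update.ps1'"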
Just a comment on this: the easiest way to capture PowerShell output is to use the Start-Transcript cmdlet to pipe console output to a file. We have a small snippet at the start of all our scripts that sends a log file with the console output from each script to a central file share, and names the log file with the script name and execution date so that we have an idea of what happened. It's not too hard to pipe all those log files into a database for further processing either. This probably won't solve all your problems, but it should definitely help with the "getting data back" part.
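A minimal sketch of that kind of snippet, assuming a central share such as \\fileserver\deploylogs (the share path and naming convention are placeholders):

# start logging console output to a transcript named after the script and the run time
$logName = "{0}_{1}.log" -f $MyInvocation.MyCommand.Name, (Get-Date -Format 'yyyyMMdd_HHmmss')
$logPath = Join-Path $env:TEMP $logName
Start-Transcript -Path $logPath

# ... the actual deployment work goes here ...

# stop logging and push the transcript to the central share for later review
Stop-Transcript
Copy-Item $logPath -Destination '\\fileserver\deploylogs'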
best regards,
Trond