VMWare Workstation VM not starting because of locked portion of file - vmware-workstation

I am receiving the message:
The process cannot access the file because another process has locked a portion of the file
Cannot open the disk 'C:\Users\t825665\VM's\VPC\Windows 10 x64.vmdk' or one of the snapshot disks it depends on.
Module 'Disk' power on failed.
Failed to start the virtual machine.
So the virtual machine is not starting anymore. How can I fix that?

I just found the solution for this issue. I made a backup, moved the lock files (*.lck) out of the VM's directory, and then just restarted the virtual machine.

To solve this error, go to the virtual machine's directory and delete everything with an ".lck" extension.
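For example, from a small batch file this could look roughly like the following (a sketch using the VM path from the question; back up first, and note that %%a becomes %a when typed directly at a Command Prompt):
REM Remove .lck files and .lck folders from the VM directory (path taken from the question)
del /q "C:\Users\t825665\VM's\VPC\*.lck"
for /d %%a in ("C:\Users\t825665\VM's\VPC\*.lck") do rd /s /q "%%a"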

Removing the folders with an .lck extension solved the issue for me.

I run the batch file below to delete all temporary files, locks, directories, and memory files in the VMware working directory (i.e. Settings/Options/Working Directory). It's gotten me out of many a jam. You will lose any unsaved work that was in VMware suspended memory, so back up before using it if you're not sure. It will reboot the image as if it had been shut down.
--------------------------Clean.bat ----------------
@echo off
REM - Delete all directories in Working Directory
set dr=%cd%
set ex=\*
set "dr=%dr%%ex%"
for /d %%a in ("%dr%") do rd "%%a" /q /s
REM - Delete files in Working Directory
del *.log
del *.vmem
del *.vmss
del *.nvram
del *.vmx~
pause

Shut down Workstation, delete any *.lck files and folders in the VM folder, then reopen Workstation, load the VM, and power it on.

Related

Copying files from Server A to network share via command line - RoboCopy

I'm trying to copy files from one directory on a server to another server without overwriting permissions at the destination, but I'm working with an "interesting" setup:
Server A
    Git Server
    Jenkins CLI
Server B
    Web Server
I have a Jenkins process that runs when my fellow web developers commit changes to our repository. Jenkins then copies the files from the repository into its workspace (located on the C: drive of the server). Once the files are downloaded, I execute a command script that uses ROBOCOPY to copy the files from a directory in the Jenkins workspace to a network share (located on another server) that points to the IIS web directory.
The ROBOCOPY script is as follows:
ROBOCOPY "C:\...\Jenkins\workspace\dev\app" "\\network-share-to-web-server\app" /mir /m /R:0 /W:1 /MT:8 /V /LOG:WhySkip.txt & if %errorlevel% leq 2 exit 0
Here's the problem: ROBOCOPY copies only the directory structure and NOT the files within the directories, which are all of our HTML/JavaScript/CSS/images, i.e. the changes we've made.
Any and all help would be greatly appreciated!
Try it without /M (which copies only files with the Archive attribute set and then resets it).
Since you already want /MIR (mirror), let Robocopy choose which files to sync.
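For example, the command from the question with the /M switch removed might look like this (a sketch; every other switch is kept exactly as in the original):
ROBOCOPY "C:\...\Jenkins\workspace\dev\app" "\\network-share-to-web-server\app" /mir /R:0 /W:1 /MT:8 /V /LOG:WhySkip.txt & if %errorlevel% leq 2 exit 0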

MongoDB script to backup replication set locally to a Windows Server

I would like to make a daily backup of my MongoDB replica set running on Windows 2012 servers.
The end goal is to get a daily backup and write it to a remote or local Windows share.
Can I batch the mongodump command?
Any help would be greatly appreciated!!
Sorry, it's a bit late but the following seems to work OK for me. The script dumps the database and compresses the output using 7-Zip.
1) Create backup script (backup.bat)
@echo off
REM move into the backups directory
CD C:\database_backups
REM Create a file name for the database output which contains the date and time. Replace any characters which might cause an issue.
set filename=database %date% %time%
set filename=%filename:/=-%
set filename=%filename: =__%
set filename=%filename:.=_%
set filename=%filename::=-%
REM Export the database
echo Running backup "%filename%"
C:\mongodb\mongodump --out %filename%
REM ZIP the backup directory
echo Compressing backup "%filename%"
"c:\Program Files\7-Zip\7z.exe" a -tzip "%filename%.zip" "%filename%"
REM Delete the backup directory (leave the ZIP file). The /q tag makes sure we don't get prompted for questions
echo Deleting original backup directory "%filename%"
rmdir "%filename%" /s /q
echo BACKUP COMPLETE
2) Schedule the backup
Open Computer Management
Go to Task Scheduler and select Create Task.
On the General tab, enter a description and select Run whether user is logged on or not if you want the task to run at night.
On the Triggers tab, select when you would like the task to run.
On the Actions tab, create a new action which points at your batch script.
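Alternatively, the same schedule can be created from an elevated Command Prompt with schtasks (a sketch; the task name, start time, and script path are assumptions to adjust for your setup):
schtasks /Create /TN "MongoDB Daily Backup" /TR "C:\database_backups\backup.bat" /SC DAILY /ST 02:00 /RU SYSTEM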
I'm running on linux, not Windows 2012, but here is what I do. On one of the servers in the replica set, this script gets run every night via a cron job.
#!/bin/bash
# config
BACKUPNAME=[backup file name]
DATAPATH=[path to mongo data folder]
DATESTAMP=$(date +"%Y-%m-%d")
FILENAME=backup.$BACKUPNAME.$DATESTAMP.tar.gz
TARPATH=$DATAPATH/$FILENAME
echo $DATESTAMP;
/etc/init.d/mongod stop
/usr/bin/mongodump --journal --dbpath $DATAPATH --out $DATAPATH/backup
tar czvf $TARPATH $DATAPATH/backup
rm -rf $DATAPATH/backup
/usr/bin/s3cmd put $TARPATH s3://[backup s3 bucket name]/$FILENAME
rm -f $TARPATH
/etc/init.d/mongod start
/scripts/prunebackups
I'm using s3cmd to send files to an S3 bucket on Amazon AWS, but you could just as easily copy the file anywhere. prunebackups is a script that deletes old backups from S3 based on how old they are.
On Windows I'd create a batch file that does similar tasks. In essence:
Stop mongod
Run mongodump to generate the data
Zip up the dumped data and move it somewhere
Clean up the files
Start mongod again
You can then use Task Scheduler to run it periodically.
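A minimal sketch of such a batch file, with the service name, data path, and tool locations as assumptions rather than values from the original answer (it mirrors the Linux script above, including the older --dbpath style of mongodump):
@echo off
REM Assumptions: mongod runs as a Windows service named "MongoDB", the data
REM files live in C:\data\db, and this mongodump still supports --dbpath
REM (as in the Linux example; the option was removed in later versions).
net stop MongoDB
C:\mongodb\bin\mongodump.exe --dbpath C:\data\db --out C:\database_backups\dump
"C:\Program Files\7-Zip\7z.exe" a -tzip C:\database_backups\dump.zip C:\database_backups\dump
rmdir /s /q C:\database_backups\dump
net start MongoDB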
If you have other mongod instances in the replica set, you shouldn't run into any issues with downtime. The backup instance in my setup is never used for reads or writes, but only for backups and in case one of the other instances goes down.
MongoDB has documentation on different backup strategies: http://docs.mongodb.org/manual/administration/backup/
We chose the mongodump approach because for us it's cheaper to store dump backups than snapshots. This is the specific strategy we used: http://docs.mongodb.org/manual/tutorial/backup-databases-with-binary-database-dumps/. Good news is, we actually had to restore data from a backup to production once and it was pretty painless.

GWT DevMode filling up tmp directory

GWT 2.5.1
Every time I run GWT DevMode, it now generates a huge new cache file under the /tmp directory, and consequently the OS warns about low disk space. This problem has never popped up in the past.
The file gwtXXXbyte-cache (XXX being a long random number) is nearly 1 GB. Is that normal?
The cache file is cleaned up automatically after the DevMode session ends. By the way, rebooting the machine doesn't help.
#EDIT
For comparison, running the GWT starter application in DevMode generates a new cache file of about 50 MB. Is that oversized, too?
#EDIT 2
I modified GWT UI-related source code and ran DevMode again. The new huge cache file gwtYYYbyte-cache (YYY being another long random number) was generated with the same size as before, down to the exact number of bytes. Any ideas?
#EDIT 3
After manually removing the ./gwt-unitCache, ./war/WEB-INF/deploy, and ./war/ZZZ directories (ZZZ being the GWT application hosted in DevMode), the next DevMode session generates a /tmp/gwtXXXbyte-cache file of only a few KB.
#EDIT 4
Launching DevMode with the option -workDir DDD (DDD being another writable directory) doesn't work. The cached files keep being written to the default /tmp directory.
1GB is too much for development purposes.
The only reason I can think of is that you have set a lot of permutations in your .gwt.xml file.
You should reduce the number of permutations during development to the minimum ( only include the specs you are using).
You can use the DevGuideCompileReport to locate the problem.
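As an illustration only (this snippet is an assumption about a typical setup, not part of the original answer), pinning browser-specific properties such as user.agent in the module's .gwt.xml is a common way to cut permutations during development:
<!-- Restrict compiles/DevMode to WebKit-based browsers only -->
<set-property name="user.agent" value="safari"/>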
Edit:
This is a known issue reported by other users. It has to do with the Eclipse plugin not deleting the temp files correctly. The issue has received a lot of attention from GWT users, but no concrete patch has been released. The workarounds are to delete the files manually or to write a script that does the work for you:
google-plugin-for-eclipse-issue74
Here's a windows batch script to clean up after GWT:
@ECHO OFF
ECHO Cleaning ImageResourceGenerator files ...
IF EXIST "%TEMP%\ImageResourceGenerator*" DEL "%TEMP%\ImageResourceGenerator*" /F /Q
ECHO Cleaning uiBinder files ...
IF EXIST "%TEMP%\uiBinder*" DEL "%TEMP%\uiBinder*" /F /Q
ECHO Cleaning gwt files ...
IF EXIST "%TEMP%\gwt*" DEL "%TEMP%\gwt*" /F /Q
ECHO Cleaning gwt directories ...
FOR /D /R %TEMP% %%x IN (gwt*) DO RMDIR /S /Q "%%x"
ECHO.
ECHO Done.
PAUSE

Batch File to Automate backup folders

So, I currently have a Drobo server that houses a Backup folder for customers who need their hard drive files backed up. I create a folder for each customer in this Backup folder. It's our policy to keep these files for our customers for 30 days, after which they need to be deleted. I'm wondering if it's possible to make a batch file that scans the entire Backup folder (each customer's folder only, by modified date, not all the individual files) and, if a folder is older than 30 days, moves the entire folder into a folder I'll label Delete for further review before deletion. I'm going to make a second batch file to delete all the folders and files once they're inside that Delete folder, but first I need the batch file that scans just the folders' modified dates to determine whether they need to be moved. Any help would be appreciated. Thanks.
Forfiles /P C:\test /D -30 /C "cmd /c move @file C:\delete"
Just an example of a script you might use.
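Since the question is about moving only the customer folders (not loose files), a variant that filters on directories might look like this (a sketch; D:\Backup and D:\Delete are placeholder paths to adjust):
forfiles /P D:\Backup /D -30 /C "cmd /c if @isdir==TRUE move @path D:\Delete"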

After Robocopy, the copied Directory and Files are not visible on the destination Drive

I've been happily using robocopy for backing up my computers to an external usb drive. It's great since it only copies the files that were changed/updated/new. I can take my external drive to any machine and look at it just as if it's another drive on the computer.
I've recently purchased a 750 GB and another 1 TB external hard drive. I ran a robocopy over the weekend that copied about 500 GB to my external drive. After the copy, My Computer shows that ~500 GB has been used on the external drive. The strange thing is that when I click on the drive in Windows Explorer, nothing shows up in the right pane (and the + goes away in the left pane). I copied a single file (drag-and-drop) to this drive and it shows up in Windows Explorer. The Command Prompt shows the same thing: 1 file.
I know the files are on the drive, because the free space has been reduced.
I read that I should make sure simple file sharing is off, which it is. I also took ownership of the files as Administrator. Still nothing. It behaves the same on my Windows XP machine and my Windows 7 Ultimate machine.
Has anyone else seen this? Or even better, does anyone know what I am doing wrong or how to solve this problem?
thanks!
Bill44077
In my case, the above didn't work.
This worked instead: attrib -h -s -a [drive:][path].
For example: attrib -h -s -a "C:\My hidden folder".
When copying from the root directory of a drive to a folder (non-root directory on a different drive), this can happen.
RoboCopy may set the new directory to hidden, as it copies the system attribute of the root folder of the drive over to the new folder.
You can prevent the new directory from becoming hidden by adding the /A-:SH option/flag/switch to your robocopy command.
See this Server Fault answer to "Why does RoboCopy create a hidden system folder?" for more information.
However, this may or may not prevent copying system attributes in other folders, according to this discussion on the Microsoft forum "ROBOCOPY hides destination Directory".
Here is an example taken from my longer, more thorough answer on Super User to the question "How to preserve file attributes when one copies files in Windows?":
Robocopy D:\ C:\D_backup /A-:SH /DCOPY:T /COPYALL /E /R:0 /ZB /ETA /TEE /V /FP /XD D:\$RECYCLE.BIN /XD "D:\System Volume Information" /LOG:C:\D_backup_robocopy.LOG /MIR
However, if you already copied the directory without the /A-:SH option, running the command Ricky mentioned above (attrib -h -s -a [drive:][path]) will fix the issue by unhiding the directory. Though, I found that -a was not needed.
So in my case, for the example above, attrib -h -s C:\D_backup (without the -a option) made D_backup visible.
Just ran into this issue myself, so it may be a late response and you may have worked it out already, but for those stumbling on this page here's my solution...
The problem is that for whatever reason, Robocopy has marked the directory with the System Attribute of hidden, making it invisible in the directory structure, unless you enable the viewing of system files.
The easiest way to resolve this is through the command line.
Open a command prompt and change the focus to the drive in question (e.g. x:)
Then use the command dir /A:S to display all directories with the System attribute set.
Locate your directory name and then enter the command ATTRIB -R -S x:\MyBackup /S /D where x:\ is the drive letter and MyBackup is your directory name.
The /S re-curses subfolders and /D processes folders as well.
This should clear the Read Only and System attributes on all directories and files, allowing you to view the directory normally.
In addition to the great answers SherylHohman and Ricky left, I wanted to add that merely adding the /A-:SH switch for robocopy did not work, and the copy still created a hidden, system folder on the destination drive.
However, using the /A-:SHA parameter did work, and my top-level destination directory was not given the system or hidden attributes. Weirdly, my drive does not have the "a" (archived) attribute set, so I am dumbfounded as to why this works at all. I do prefer simply removing these attributes from only the root destination folder after the robocopy command completes, per Ricky's suggestion, so that these attributes are respected for any sub-directories. Though the /A- switch is easier to manage, and (for my backup purposes) these attributes are not relevant to any directories I am backing up. You may want to consider not removing the system or hidden attributes if you're backing up your C:\ drive, though.
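For reference, the earlier example command would then become something like the following (a sketch; only the /A- switch differs from the command shown above):
Robocopy D:\ C:\D_backup /A-:SHA /DCOPY:T /COPYALL /E /R:0 /ZB /ETA /TEE /V /FP /XD D:\$RECYCLE.BIN /XD "D:\System Volume Information" /LOG:C:\D_backup_robocopy.LOG /MIR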
You could try this. I say "could" because the whole of Windows 10 has annoying flaws everywhere; I have lost trust in Windows 10 and Microsoft.
Well, I found that after I robocopied the whole Documents folder to the root of an external drive, I got a folder that is not named Documents; the Documents folder was renamed and translated into my native language, so it could be some language issue. (The /XD option tells robocopy to skip a folder.)
C:\users\asdf\documents> robocopy . f:\ManuBackup /XD c:\Users\Asdf\Documents\OneDrive /s
File Explorer shows the name Tiedostot (= Documents in Finnish), while the Command Prompt shows the name ManuBackup. I have also tried all the attrib.exe commands on the ManuBackup folder, so don't trust me 100%.