GWT DevMode filling up tmp directory - gwt

GWT 2.5.1
Every time I run GWT DevMode now, it generates a huge new cache file under the /tmp directory, and consequently the OS warns about low disk space. This problem has never popped up in the past.
The file gwtXXXbyte-cache (XXX being a long random number) is nearly 1 GB. Is that normal?
The cache file is cleaned up automatically after the DevMode session ends. BTW, rebooting the machine doesn't help.
#EDIT
For comparison, running the GWT starter application in DevMode generates a new cache file of about 50 MB. Is that oversized, too?
#EDIT 2
I modified GWT UI-related source code and ran DevMode again. The new huge cache file gwtYYYbyte-cache (YYY being another long random number) was generated with exactly the same size as before, down to the byte. Any ideas?
#EDIT 3
After manually removing the ./gwt-unitCache, ./war/WEB-INF/deploy and ./war/ZZZ directories (ZZZ being the GWT application hosted in DevMode), the next DevMode session generates a /tmp/gwtXXXbyte-cache file of only a few KB.
#EDIT 4
Launching DevMode with the option -workDir DDD (DDD being another writable directory) doesn't work. The cached files keep being written to the default /tmp directory.

1 GB is too much for development purposes.
The only reason I can think of is that you have set a lot of permutations in your .gwt.xml file.
You should reduce the number of permutations during development to the minimum (only include the specs you are using).
You can use the DevGuideCompileReport to locate the problem.
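For instance, a minimal sketch of restricting the browser permutations in the module file (assuming a module named MyApp.gwt.xml and that you develop in Firefox; both the file name and the value are placeholders to adjust):
<!-- MyApp.gwt.xml (hypothetical name): build only the Firefox permutation while developing -->
<set-property name="user.agent" value="gecko1_8"/>
Remember to remove or widen this setting before a production build, or other browsers won't get a matching permutation.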
Edit:
This is a common issue that has been reported by other users. It has to do with the Eclipse plugin not deleting the temp files correctly. The issue has received a lot of attention from GWT users, but no concrete patch has been released. The workarounds are to manually delete the files or to write a script to do the work for you:
google-plugin-for-eclipse-issue74

Here's a windows batch script to clean up after GWT:
@ECHO OFF
ECHO Cleaning ImageResourceGenerator files ...
IF EXIST "%TEMP%\ImageResourceGenerator*" DEL "%TEMP%\ImageResourceGenerator*" /F /Q
ECHO Cleaning uiBinder files ...
IF EXIST "%TEMP%\uiBinder*" DEL "%TEMP%\uiBinder*" /F /Q
ECHO Cleaning gwt files ...
IF EXIST "%TEMP%\gwt*" DEL "%TEMP%\gwt*" /F /Q
ECHO Cleaning gwt directories ...
FOR /D /R %TEMP% %%x IN (gwt*) DO RMDIR /S /Q "%%x"
ECHO.
ECHO Done.
PAUSE

Related

VMWare Workstation VM not starting because of locked portion of file

I am receiving the message:
The process cannot access the file because another process has locked a portion of the file
Cannot open the disk 'C:\Users\t825665\VM's\VPC\Windows 10 x64.vmdk' or one of the snapshot disks it depends on.
Module 'Disk' power on failed.
Failed to start the virtual machine.
So the virtual machine is not starting anymore; how do I fix that?
I just found the solution for this issue. I backed up and then moved the .lck files (*.lck) out of my VM's directory, and then just restarted the virtual machine.
To solve this error, go to the virtual machine's directory and delete everything with an ".lck" extension.
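If you prefer to do that from a command prompt, a minimal sketch could look like this (the VM path is taken from the error message above, so adjust it to your own setup; use %%x instead of %x if you put it in a .bat file):
REM Remove leftover VMware lock files and lock folders from the VM directory
CD /D "C:\Users\t825665\VM's\VPC"
DEL /Q *.lck
FOR /D %x IN (*.lck) DO RD /S /Q "%x"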
Removing the folders with an .lck extension solved the issue for me.
I run the batch file below to delete all temporary files, locks, directories and memory files in the VMware working directory (i.e. Settings/Options/Working Directory). It's got me out of many a jam. You will lose any unsaved work that was in VMware suspended memory, so back up before using it if you're not sure. It will reboot the image as if it had been shut down.
--------------------------Clean.bat ----------------
@echo off
REM - Delete all directories in Working Directory
set dr=%cd%
set ex=\*
set "dr=%dr%%ex%"
for /d %%a in ("%dr%") do rd "%%a" /q /s
REM - Delete files in Working Directory
del *.log
del *.vmem
del *.vmss
del *.nvram
del *.vmx~
pause
Shut down Workstation, delete any *.lck files and folders in the VM folder, then reopen Workstation, load the VM, and power it on.

How to deploy to multiple environments with webpack using msdeploy

I've got a .NET WebAPI solution and a UI built in Angular 2 RC4 (the angular-cli webpack version). I'm confused about how to deploy these to different environments, especially the configuration parameters - there seems to be a mismatch between the .NET way and the UI way of doing things, which I don't quite get.
Here's how I've currently got it in TeamCity. The WebAPI solution is built only once and is configured at deploy time. The various configuration parameters the project needs (such as connection strings, endpoints, etc.) are stored in web.config. When I deploy to my test environment using MSDeploy, I pass setParam arguments to the MSDeploy command line, which replace the connection strings and endpoints in web.config with those values. When I deploy to production, I use the same build but pass different setParam arguments on the command line.
This approach makes sense to me because I know that the exact same build is going from one environment to the next, the only difference being the parameters I specifically told it to set for each environment. Super.
With Angular 2 and webpack it looks like a different approach is needed. When I build my project (with ng build -prod) it minifies and bundles my HTML and JavaScript files into 3 or 4 files, along with gzipped versions of those files. This is great for reducing file size and increasing the speed of my website, but there is no way to "inject" configuration parameters into these gzipped files like there is with MSDeploy's setParam. Everywhere I've seen webpack mentioned shows a webpack.dev.config.js and a webpack.prod.config.js. But doesn't that mean we need to build a different bundle for each environment? And with Angular 2 the webpack part is considered "a black box" anyway, so it's not possible to supply your own webpack.config file.
The only workaround I can think of is to use TeamCity's "File Content Replacer" on the "main.1234abcd6946c6a08519.bundle.js" to replace my configuration parameters with the values for that environment, then gzip that file - overwriting the one created by webpack.
But this is horrible, so I'm looking for any better suggestions?
I don't have any experience with webpack, and I don't know if this is better than your workaround, but you can use the TextFile kind of setParam entry to alter any file in your project using a regex find/replace at deploy time.
https://technet.microsoft.com/en-us/library/dd569084(v=ws.10).aspx
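As a rough sketch of how that could look (the parameter name, file pattern and URLs below are made up for illustration), you declare a TextFile parameter when the package is created and then set its value per environment at deploy time:
REM When packaging: declare a parameter whose regex match gets replaced inside the bundle
msdeploy -verb:sync ^
-source:contentPath="%teamcity.build.workingDir%\dist" ^
-dest:package="Package.zip" ^
-declareParam:name="ApiUrl",kind=TextFile,scope=".*\.bundle\.js$",match="http://localhost:12345",defaultValue="http://localhost:12345"
REM When deploying: supply the environment-specific value
msdeploy -verb:sync ^
-source:package="Package.zip" ^
-dest:auto ^
-setParam:name="ApiUrl",value="https://api.test.example.com"
Note that this only rewrites the plain .js bundle; any pre-gzipped copies webpack produced would still contain the original value and would need to be regenerated or excluded from the package.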
I went with creating a separate package for each environment. I added a build step that replaces my localhost API URL in src\app\environment.ts with the appropriate URL for that environment, then runs npm run build-prod and then MSDeploy to create the package. I do this for every environment I want to target.
Here's the script:
REM =====CREATE TEST PACKAGE==================================================
REM backup the environment file
ren src\app\environment.ts environment.ts.bak
copy /Y src\app\environment.ts.bak src\app\environment.ts
REM replace localhost in environment file with the TEST environment URL
"%env.FART%" src\app\environment.ts http://localhost:12345 %TEST.api.url%
REM build using this environment
call npm run build-prod
REM restore backup environment file
del /Q src\app\environment.ts
ren src\app\environment.ts.bak environment.ts
REM create TEST package
"%env.MSDEPLOY%" ^
-verb:sync ^
-source:contentPath="%teamcity.build.workingDir%\dist" ^
-dest:package="%teamcity.build.checkoutDir%\Package_TEST.zip"
REM =====CREATE PROD PACKAGE==================================================
REM backup the environment file
ren src\app\environment.ts environment.ts.bak
copy /Y src\app\environment.ts.bak src\app\environment.ts
REM replace localhost in environment file with the PROD environment URL
"%env.FART%" src\app\environment.ts http://localhost:12345 %PROD.api.url%
REM build using this environment
call npm run build-prod
REM restore backup environment file
del /Q src\app\environment.ts
ren src\app\environment.ts.bak environment.ts
REM create PROD package
"%env.MSDEPLOY%" ^
-verb:sync ^
-source:contentPath="%teamcity.build.workingDir%\dist" ^
-dest:package="%teamcity.build.checkoutDir%\Package_PROD.zip"
By the way, %env.FART% is the location of fart.exe, which is a great find/replace tool that I use to replace one string in a file with another.

Created copy taking too much free space

I found a problem using robocopy in PowerShell. I used this tool to back up files from one disk (around 220 GB) using the command:
robocopy $source $destination /s /mt:8
The problem is that the resulting copy took up far more space in the destination location (I stopped the backup when it reached around 850 GB). Does anyone know why this happened?
Maybe there are some loops involved.
robocopy has the:
Ability to skip NTFS junction points which can cause copying failures because of infinite loops
Try running with the /XJ flag, or simply list/log which files are copied to check for loops.
See the robocopy help and this post about it.
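For example, a run along these lines (paths are placeholders) skips junction points and logs every copied file, which makes loops easy to spot:
REM /s copy subdirectories, /mt:8 use 8 threads, /XJ skip NTFS junction points
REM /log: write the list of copied files to a log for inspection
robocopy C:\Data D:\Backup\Data /s /mt:8 /XJ /log:C:\robocopy-backup.log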
UPDATE For those who face the same problem:
There were indeed infinite loops, which I found using WinDirStat. Mostly they were in the Application Data, Documents and Settings, and Users folders.

Importing Projects and Building workspace from batch file

I have this batch file
@ECHO OFF
ECHO Please Enter Path of the View, you want to update in double quotes.
SET /P variable=
SET ECLIPSE=C:\Users\gdeep\Desktop\TED-4.3.0.20110512190809.lnk
SET WORKSPACE=C:\Users\gdeep\DevCodebase_2
:LOOP
ECHO Press 'g' for Graphical Interface and 'c' for Command line.
SET /P answer=
IF /I "%answer%"=="g" GOTO GRAPHICAL
IF /I "%answer%"=="c" GOTO COMMANDLINE
ECHO Invalid Input. Please Try Again.
GOTO LOOP
:GRAPHICAL
cleartool update -graphical %variable%
GOTO CONTINUE
:COMMANDLINE
cleartool update %variable%
GOTO CONTINUE
:CONTINUE
FOR /D %%i IN (%WORSPACE%) DO RD /S /Q "%%i" DEL /Q "%WORSPACE%\*.*"
START %ECLIPSE% -data %WORSPACE%
D:
chdir "%variable%"\v4electronics
ECHO Please Ensure that Server is killed.
PAUSE
mvn clean install -Dmaven.test.skip=true -Dresource.minify.skip=true
For deleting all the projects I used:
FOR /D %%i IN (%WORSPACE%) DO RD /S /Q "%%i" DEL /Q "%WORSPACE%\*.*"
Can anyone explain this to me? I copied it from somewhere and don't want to use it without understanding.
The problem with using the above command is that although it seems to work, I see
The system cannot find the file specified.
The system cannot find the path specified.
as the output.
Also, is the way I am deleting equivalent to deleting them from Eclipse by selecting all projects and deleting them?
Another problem here is that when I have
mvn clean install -Dmaven.test.skip=true -Dresource.minify.skip=true
at the end, it works fine; otherwise, if there are any other commands after it, those commands don't run.
After this, I want to import all Maven projects from the ClearCase %variable%.
I want to do that from the command line only. Can you help me with that?
Thanks for your help.
Appreciate your time.
Please correct me if I'm wrong; I understand that you're in an MS Windows environment.
Regarding the question about any command after "mvn ..." being ignored: mvn is itself a batch script, so running it directly from another batch file transfers control to it and never returns to your script, which is why the following commands never run.
I use "call" as follows:
cd project1
call mvn clean install
cd project2
call mvn clean install
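Applied to the command in your script, for example:
call mvn clean install -Dmaven.test.skip=true -Dresource.minify.skip=true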
I hope this may help.
Regards,
Charlee Ch.
cleartool update -graphical %variable%
This will update a view, opening a GUI during the update (if -graphical is used) that displays the number of files unchanged, new, modified, deleted or hijacked during the update.
See cleartool update man page.
The graphical update will let you specify how you want hijacked files and timestamps handled by said update:
Click the Advanced tab and change default options for the Update Tool.
If you need to resolve hijacked files, select a method. You have these choices:
Leave hijacked files in place
Rename the hijacked files and load the selected version from the VOB
Delete hijacked files and load the selected version from the VOB
You can also select a method for handling timestamps. You have these choices:
Set file times to current time
Set file times to version creation time
You need to enter the path of the root directory of a snapshot view: see "To update snapshot views"
FOR /D %%i IN (%WORSPACE%) DO RD /S /Q "%%i" DEL /Q "%WORSPACE%\*.*"
This will completely empty the Eclipse workspace, deleting the projects and the .metadata folder and forcing Eclipse to recreate the workspace from scratch.
It seems a bit extreme, and would basically be the same as
RD /S /Q "%WORSPACE%"
(Eclipse would recreate "%WORSPACE%" when launched with -data %WORSPACE%)

After Robocopy, the copied Directory and Files are not visible on the destination Drive

I've been happily using robocopy to back up my computers to an external USB drive. It's great since it only copies files that are changed, updated, or new. I can take my external drive to any machine and look at it just as if it's another drive on the computer.
I've recently purchased a 750 GB and a 1 TB external hard drive. I ran a robocopy over the weekend that copied about 500 GB to my external drive. After the copy, My Computer shows that ~500 GB has been used on the external drive. The strange thing is that when I click on the drive in Windows Explorer, nothing shows up in the right pane (and the + goes away in the left pane). I copied a single file (drag-and-drop) to this drive and it does show up in Windows Explorer. The Command Prompt shows the same thing: 1 file.
I know the files are on the drive, as the free space shows as reduced.
I read that I should make sure simple file sharing is off, which it is. I also took ownership of the files as Administrator. Still nothing. It behaves the same on my Windows XP machine and my Windows 7 Ultimate machine.
Has anyone else seen this? Or even better, does anyone know what I am doing wrong or how to solve this problem?
thanks!
Bill44077
In my case, the above didn't work.
This worked instead: attrib -h -s -a [ Drive : ][ Path ].
For example: attrib -h -s -a "C:\My hidden folder".
When copying from the root directory of a drive to a folder (non-root directory on a different drive), this can happen.
RoboCopy may set the new directory to hidden, as it copies the system attribute of the root folder of the drive over to the new folder.
You can prevent the new directory from becoming hidden by adding the /A-:SH option/flag/switch to your robocopy command.
See this Server Fault answer to "Why does RoboCopy create a hidden system folder?" for more information.
However, this may or may not prevent copying system attributes in other folders, according to this discussion on the Microsoft forum "ROBOCOPY hides destination Directory".
Here is an example taken from my longer, more thorough, Answer on Super User to the Question "How to preserve file attributes when one copies files in Windows?":
Robocopy D:\ C:\D_backup /A-:SH /DCOPY:T /COPYALL /E /R:0 /ZB /ETA /TEE /V /FP /XD D:\$RECYCLE.BIN /XD "D:\System Volume Information" /LOG:C:\D_backup_robocopy.LOG /MIR
However, if you already copied the directory without the /A-:SH option, running the command mentioned by Ricky above (attrib -h -s -a [ Drive : ][ Path ]) will fix the issue by unhiding the directory. Though, I found that -a was not needed.
So in my case, for the example above, attrib -h -s C:\D_backup (without the -a option) made D_backup visible.
Just ran into this issue myself, so it may be a late response and you may have worked it out already, but for those stumbling on this page here's my solution...
The problem is that for whatever reason, Robocopy has marked the directory with the System Attribute of hidden, making it invisible in the directory structure, unless you enable the viewing of system files.
The easiest way to resolve this is through the command line.
Open a command prompt and change the focus to the drive in question (e.g. x:)
Then use the command dir /A:S to display all directories with the System attribute set.
Locate your directory name and then enter the command ATTRIB -R -S x:\MyBackup /S /D where x:\ is the drive letter and MyBackup is your directory name.
The /S switch recurses into subfolders and /D processes folders as well.
This should clear the Read Only and System attributes on all directories and files, allowing you to view the directory normally.
In addition to the great answers SherylHohman and Ricky left, I wanted to add that merely adding the /A-:SH switch to robocopy did not work for me; the copy still created a hidden, system folder on the destination drive.
However, using the /A-:SHA parameter did work, and my top-level destination directory was not given the system or hidden attributes. Weirdly, my drive does not have the "a" (archive) attribute set, so I am dumbfounded as to why this works at all. I actually prefer simply removing these attributes from the root destination folder after the robocopy command completes, per Ricky's suggestion, so that the attributes are respected for any sub-directories. The /A- switch is easier to manage, though, and (for my backup purposes) the attributes are not relevant to any directories I am backing up. You may want to consider leaving the system and hidden attributes in place if you're backing up your C:\ drive.
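For reference, a command along those lines (source and destination are placeholders) would look like:
REM /E copies subdirectories including empty ones; /A-:SHA strips the System, Hidden and Archive attributes from copied items
robocopy D:\ E:\D_backup /E /A-:SHA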
You could try this. I say "could" because Windows 10 has annoying flaws everywhere; I have lost trust in Windows 10 and Microsoft.
I found that after I robocopied the whole Documents folder to the root of an external drive, I got a folder that is not named Documents; the Documents folder was renamed/translated into my native language, so it could be some language issue. (The /XD option tells robocopy to skip a folder.)
C:\users\asdf\documents >robocopy . f:\ManuBackup /XD c:\Users\Asdf\Documents\OneDrive /s
File Explorer shows the name Tiedostot (= Documents in Finnish) and the Command Prompt shows the name ManuBackup. I have also tried all the attrib.exe commands on the ManuBackup folder; don't trust me 100%.