Visual Studio Code consumes a lot of disk space during execution:
3GB on start-up.
2GB when running a script (Julia, in my case).
When I kill the built-in terminal and rerun the code, the available storage first goes up by 2GB and then down again by 2GB.
When I exit VSCode, all of the disk space reappears.
I'm wondering if there is a way to have VSCode consume less disk space.
From previous questions, it seems that VSCode may take up lots of storage in the workspace folder
C:\Users\<user>\AppData\Roaming\Code\User\workspaceStorage
and possibly in a C++-related folder
C:\Users\<user>\AppData\Local\Microsoft\vscode-cpptools\ipch
In my case, both folders take up little or no space.
I'm running VSCode version 1.72.2 on Windows 10. I tried to pinpoint the directory (or directories) VSCode uses for this kind of temporary storage with WinDirStat, but to no avail.
You may need to visualise the disk usage of specific folders to pinpoint it. A common culprit is the IntelliSense cache.
To change this, go to Settings and adjust intelliSenseCacheSize and intelliSenseCachePath. Setting the cache size to 0 disables the cache completely.
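For reference, this is roughly what the entries look like in settings.json, assuming the C/C++ extension's setting identifiers (the size is in megabytes, and the path shown is just an example location on a drive with more room):
"C_Cpp.intelliSenseCacheSize": 0,
"C_Cpp.intelliSenseCachePath": "D:\\vscode-ipch"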
After installing all the latest Windows updates and freeing up space on my C drive, I can now run Visual Studio Code with virtually no disk space consumption (about 300MB). I'm not sure whether it was the Windows updates or the additional disk space that helped. Anyway, here is how I freed up about 20GB of disk space (commands for some of these steps are sketched after the list):
I identified the folders that take up the most disk space with WinDirStat.
I deleted hiberfil.sys.
I manually defragmented windows.edb.
I reduced the size of the WinSxS folder.
I reduced the size of the Windows Installer directory with PatchCleaner.
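For the hiberfil.sys, windows.edb and WinSxS steps, these are the commands I would expect to use from an elevated Command Prompt; treat them as a sketch and check each against current documentation before running it:
rem disable hibernation, which removes hiberfil.sys
powercfg /hibernate off
rem stop Windows Search, defragment its database, then restart the service
net stop wsearch
esentutl /d "%ProgramData%\Microsoft\Search\Data\Applications\Windows\Windows.edb"
net start wsearch
rem trim the WinSxS component store
Dism.exe /online /Cleanup-Image /StartComponentCleanup
PatchCleaner, for the Windows Installer directory, is a separate third-party download.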
Three weeks ago, I ditched Sublime in favour of Visual Studio Code. Everything was going great till the program started taking upwards of 30 seconds to start up (launch, show visual feedback) and another 20 or so to boot up (fill in syntax colours, load extensions, and stop stuttering). In the worst instances, it takes minutes to boot up (I used a stopwatch).
At first, I guessed that extensions cost me a lot in start-up time, so I uninstalled most of them. After that, I added 2GB of RAM to my system, moved my CPU to another laptop (smaller chassis, less PPI), swapped my HDD to an SSD, and reinstalled Windows. I didn't make these changes to help VS Code's start/boot time but for other reasons. But even after all these upgrades, VS Code's start-up time seems to increase as time goes by (even without changes to my "Workbench"). Is this normal? What makes it so?
My PC setup is: Core i5 520M @ 2.4 GHz, 6GB DDR3 RAM, 128GB Micron SSD.
My VS Code setup has five extensions installed, about thirteen lines in settings.json (including autoSave, JetBrains Mono font, colour themes for Light and Dark mode), and syncs settings to my Microsoft/GitHub account.
Since you've mentioned DDR3 RAM, I assume your system is quite old, and the i5 520M is a really old CPU (it's a 1st-gen processor). Do you have similar problems with any other applications, or is it just VSCode?
If you are confident that your system is not the problem, you can try this:
As others have noted, VSCode is based on Electron, so under the hood you have Node and Chromium. You cannot have high expectations of something built on Electron, which is notorious for its memory footprint. However, a 30-second startup time is still long. On my machine it takes roughly 5-6 seconds to load and become fully functional, with 9 extensions installed (which are quite large extensions, btw).
Another note: even when you uninstall a VSCode plugin/extension, the extension's directory never gets removed; VSCode just marks it as obsolete in a JSON file and keeps the directory for whatever reason. You could try uninstalling and reinstalling, which might help. A simple uninstall will not do much, since VSCode has cache and configuration directories that are not typically removed by an uninstall; you'd have to remove them manually. If you are on a Windows machine, check
C:\Users\<your name>\.vscode,
C:\Users\<your name>\AppData\Local\Microsoft and
C:\Program Files\Microsoft VS Code
for any leftovers related to VSCode and remove them.
The point of this is to wipe the previous install without a trace. (You'll lose all your customizations, since they are stored in these directories even after an uninstall so that, when you reinstall, VSCode can load your previous configuration, which normally makes your life easier.)
Then try reinstalling. If you are on a UNIX system, look up the equivalent directories, remove the leftovers and do a clean reinstall. Hope this helps.
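If it helps, here is a rough Command Prompt sketch of the manual cleanup after uninstalling. I'm assuming %APPDATA%\Code is where VSCode keeps its user settings and caches on your machine (worth double-checking), and since your customizations live in these folders, back up anything you want to keep first:
rem per-user extensions plus settings and caches
rmdir /s /q "%USERPROFILE%\.vscode"
rmdir /s /q "%APPDATA%\Code"
rem leftover system-wide install directory, if present
rmdir /s /q "C:\Program Files\Microsoft VS Code"
Inside %LOCALAPPDATA%\Microsoft, delete only the subfolders that are clearly VSCode-related (vscode-cpptools, for example) rather than the whole directory.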
I've just installed GitHub for Windows on my Windows 8.1 machine and it appears to work fine, except that my machine's performance drops dramatically.
Looking at task manager I see that ssh-agent.exe is using a constant 25% CPU (no doubt 100% of one of my cores) and the disk usage is at 100%.
I have had a look on the Internet but can't find any reference to what might be causing this.
Any ideas what might be causing this and how to resolve it?
UPDATE:
I can kill the process and GitHub for Windows appears to keep working, but the ssh-agent.exe process starts up again as soon as I close and restart GitHub for Windows.
Further to moggizx's comment in one of the other answers, I found this was influenced by SourceTree too.
The instance of ssh-agent.exe with the high CPU actually gets terminated when you close SourceTree. Restarting SourceTree does cause another ssh-agent process to be spawned, but the CPU is then idle.
We've seen this happen on occasion due to a race condition between ssh-agent and anti-virus software competing over resources. Do you have any anti-virus software installed? Would you be able to temporarily turn it off and see if the problem persists? We'd be very keen to dig deeper into this if you could reach out to support@github.com.
I hit the same issue; my solution was to add the file and the process C:\Program Files\Git\usr\bin\ssh-agent.exe to the exclusion list in Windows Defender on Windows 10.
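If you prefer the command line, the same exclusions can be added from an elevated PowerShell prompt; the path below assumes the default Git for Windows install location:
Add-MpPreference -ExclusionProcess "ssh-agent.exe"
Add-MpPreference -ExclusionPath "C:\Program Files\Git\usr\bin\ssh-agent.exe"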
The reason this happens is most likely that your git repository is huge. You have probably initialized it by mistake in a folder that contains an enormous number of files, so git loops over them constantly and needlessly takes up a lot of processing power. You can try deleting your .git folder(s) and this should stop.
Try to initialize your git repo in a folder that you use exclusively for your project.
I would still consider this a sort of bug, because we should be notified when this happens (we shouldn't have to find out by opening Task Manager).
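Before deleting anything, it's worth confirming where the accidental repository lives and how much git is actually tracking. A quick sketch from Git Bash, run inside the suspect folder:
# print the repository root; if this is your home folder or a drive root, that's the problem
git rev-parse --show-toplevel
# show how much object data the repository holds
git count-objects -vH
# count the files git is tracking
git ls-files | wc -l
If the root turns out to be somewhere like C:\Users\<you>, deleting that stray .git folder (not your files) should stop the churn.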
I'm developing a program for jailbroken iPhones. When there is not enough disk space, the installation still continues, so some files get copied while others do not, which leaves the disk in a dirty state.
I've written disk-space checks in the preinst and prerm scripts, which are control files of the deb package. When there is not enough disk space, the scripts exit with a nonzero code. The problem is that when upgrading a package with insufficient disk space, dpkg still removes the old files even though the prerm script exits with a nonzero status, so the upgrade turns into a removal, which is not the result I expect.
I don't know much about Cydia specifically, but if it works exactly like dpkg, then this should be solvable. See the activity diagram for package upgrades at http://people.debian.org/~srivasta/MaintainerScripts.html#sec-3.4.3 .
That shows a few different paths that could be taken in the course of running prerms and preinsts which lead the system back to a clean, old-version-still-installed state. For example, if the new-preinst fails, then the new-postrm will be run with "abort-upgrade" as the parameter. If that succeeds, then the old-postinst is also run with "abort-upgrade". And if that succeeds, you're back to a clean, installed state.
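For illustration, here is a minimal preinst sketch that refuses to proceed when space is low; the 50 MB threshold and the /var mount point are assumptions you'd adapt to your package:
#!/bin/sh
set -e
# abort the install or upgrade if the target partition has too little free space
REQUIRED_KB=51200
AVAILABLE_KB=$(df -k /var | awk 'NR==2 {print $4}')
if [ "$AVAILABLE_KB" -lt "$REQUIRED_KB" ]; then
    echo "preinst: only ${AVAILABLE_KB} KB free, ${REQUIRED_KB} KB required" >&2
    exit 1
fi
exit 0
Per the diagram, if this preinst fails during an upgrade, dpkg then runs the new postrm and the old postinst with abort-upgrade, leaving the old version installed, so make sure those scripts also succeed in the abort-upgrade case.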
I am using 32-bit WinXP with no upgrade in sight. Is there a way to limit how much memory Eclipse allocates throughout the day? I am also running a WebLogic 10 server in debug mode inside Eclipse. After a few hours I have a 700MB STS.exe (Eclipse) and a 400MB java.exe (the server). Is there at least a way to force a GC on Eclipse?
Here are the settings I currently use, which as far as I can tell are not being observed:
-vm
C:\bea\jdk160_05\bin\javaw.exe
-showsplash
--launcher.XXMaxPermSize
128M
-vmargs
-Dosgi.requiredJavaVersion=1.5
-Xms40m -Xmx512m
-Dsun.lang.ClassLoader.allowArraySyntax=true
EDIT: here's the monster of a project: Eclipse and Firefox 4.
is there a way to limit how much memory Eclipse allocates throughout the day?
The values of -Xmx and -XX:MaxPermSize place an upper bound on the memory that the JVM will use.
Is there at least a way to force a GC on eclipse?
AFAIK, no. Even if there was, it probably wouldn't help. The JVM is unlikely to give the memory back to the operating system.
Here are the settings I currently use, which as far as I can tell are not being observed.
Based on what you've said (memory usage 700Mb for eclipse.exe), I'd say the settings ARE being observed.
What can you do to get Eclipse to use less memory?
Trim the values of -Xmx and -XX:MaxPermSize; a trimmed -vmargs block is sketched after this list. However, if you do this too much you are liable to make Eclipse sluggish ('cos it has to GC more frequently) and ultimately flaky ('cos it will run out of memory and things will start failing with OOMEs).
Get rid of superfluous plugins.
Switch to a "smaller" Eclipse (e.g. the "Classic" distro) ... though you'll lose some of the J2EE support that you are probably using.
Close projects.
Close files.
Restart Eclipse more often.
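As a rough sketch, the trimmed -vmargs section of eclipse.ini might look like this, assuming a 384m heap and 96m of permgen are enough for your workspace (tune the numbers to your projects):
-vmargs
-Dosgi.requiredJavaVersion=1.5
-Xms40m
-Xmx384m
-XX:MaxPermSize=96m
-Dsun.lang.ClassLoader.allowArraySyntax=true
Note that -XX:MaxPermSize is a HotSpot flag; JRockit, which the javaw.exe path above suggests you are running, has no permanent generation and may ignore or reject it.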
But the best solution is to upgrade your platform:
Buy some more memory for your PC / laptop. You can probably max it out for a couple of hundred dollars. It is worth it.
Switch the OS to Linux. In my experience, Linux is a much better platform for doing Java software development than Windows XP. It seems to do a much better job in terms of both memory and file system management. The performance difference on identical hardware is significant.
You can always set up your machine to dual boot, so that you can still run XP for other things.
The 700MB of OS-level process usage is consistent with your memory settings: take 512m for heap, plus 128m for permgen space, plus a bit of overhead for the JVM itself.
You cannot force a GC. If the memory were releasable, the JVM would release it. Note that you cannot simply look at OS-level process usage, as that represents the point where the JVM's memory usage peaked. The JVM never releases heap or permgen space once it has expanded to a certain size; that is simply too expensive and not really necessary. The allocated space is in the virtual address space of the JVM process and doesn't represent actual physical memory usage. As physical memory gets tight, the OS will aggressively swap out to disk any memory pages that haven't been used recently.
So... you need specialized memory analysis tools to get an accurate picture of Eclipse's memory usage.
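For example, with a HotSpot JVM the following compares the JVM's own view of the heap against the OS-level figure; for JRockit, which the javaw.exe path in the question suggests, jrcmd <pid> print_memusage is roughly the equivalent:
jps -l (list running JVMs and their process ids)
jstat -gcutil <pid> 5000 (print heap and permgen occupancy plus GC counts every 5 seconds)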
If you are seeing a lot of disk activity when there shouldn't be any, the OS may indeed be out of physical memory and doing a lot of swapping. Try closing a few projects or restarting Eclipse. It only takes one plugin with a memory leak to consume all of your available memory.
I'm assuming that you are running BEA JRockit judging by the path to javaw.exe. Note that the -X... options are JVM-specific, so they may not work the same way for JRockit as in Sun's JVM. This link seems to indicate that you should be using
... -Xms:40m -Xmx:512m ...
But setting a heap size limit will only cause Eclipse to fail once it reaches the limit, so you won't solve the real problem. Forcing a GC usually doesn't help; any sane VM will GC periodically or when needed.
A 700MB Eclipse sounds like it is either processing large amounts of data or leaking memory. JRockit seems to have a memory leak detector, which may be able to give you a hint of where the problem lies.
The memory settings in the ini file won't help the virtual machine by themselves; change the VM arguments (the ones after -vmargs), as described here.
Eclipse is very thirsty. Limiting it will most likely just crash it. You really need a new computer.
I'm using NetBeans 6.7 on Windows XP. I'm not really sure what the pattern is, but lately performance has gotten so bad that it's almost unusable. Any ideas where to look for slowdowns?
Intel Core Duo 2.2 GHz, 3.5 GB of RAM according to the System Properties panel, and 90 GB of free hard disk space.
NetBeans 6.5 "leaks" temporary files. It creates temporary files in %TEMP% (typically C:\Documents and Settings\<username>\Local Settings\Temp) and does not delete them. When enough files accumulate, access to the temporary directory slows to a crawl. That in turn drags NB down to a crawl.
To clean it up:
Shut down NetBeans
Open a command prompt and type:
cd %TEMP%
del *.java
del *.form
del output*
del *vcs*
Important:
Do not try to do this with Windows Explorer. It won't work.
The deletes can take several minutes each. Be patient.
This is much better in 6.7 and I have not seen it at all in 6.8.
If you're running on Java 6, you can use the jconsole app to connect to your running NetBeans instance and see, among other things, what the threads are doing, how much memory is in use, and whether you're stuck in a race condition.
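A quick sketch, assuming a Sun JDK 6 bin directory on your PATH:
jps -l (find the NetBeans process id)
jconsole <pid> (attach and watch the Threads and Memory tabs)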