I keep having to reboot (!) my Solaris V20 because the swap fills up completely. How can I add more space to it? I have already allocated all of the disk.
See Adjusting the Sizes of Your ZFS Swap and Dump Devices in the Solaris 11 documentation on docs.oracle.com.
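In outline, the documented procedure is to resize the swap zvol and re-add it. A minimal sketch, assuming the default rpool/swap device and using 4G purely as an example size (the device must be out of use while you resize it):

swap -l                              # list the active swap devices
swap -d /dev/zvol/dsk/rpool/swap     # take the ZFS swap device out of use
zfs set volsize=4G rpool/swap        # grow the backing zvol
swap -a /dev/zvol/dsk/rpool/swap     # put it back into use

If swap is too full for swap -d to succeed, an alternative is to add a second swap volume instead (zfs create -V 1G rpool/swap2, then swap -a /dev/zvol/dsk/rpool/swap2).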
Try this tutorial. It applies to Solaris 10, but it deals with adding swap space to a ZFS disk and should be valid for Solaris 11.
Visual Studio Code consumes a lot of disk space while it is running:
3GB on start-up.
2GB when running a script (Julia, in my case).
When I kill the built-in terminal and rerun the code, the available storage first goes up by 2GB and then down again by 2GB.
When I exit VSCode all of the disk space reappears.
I'm wondering if there is a way to have VSCode consume less disk space.
From previous questions, it seems that VSCode may take up lots of storage in the workspace folder
C:\Users\<user>\AppData\Roaming\Code\User\workspaceStorage
and possibly in a C++-related folder
C:\Users\<user>\AppData\Local\Microsoft\vscode-cpptools\ipch
Both folders take up little or no space in my case.
I'm running VSCode version 1.72.2 on Windows 10. I tried to pinpoint the directory (or directories) VSCode uses for this kind of temporary storage with WinDirStat, but to no avail.
You may need to visualise your disk space folder by folder to pinpoint it. A common cause is the IntelliSense cache. To change this, go to Settings and adjust intelliSenseCacheSize and intelliSenseCachePath; setting the size to 0 disables the cache completely.
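For the C/C++ extension these appear in settings.json as C_Cpp.intelliSenseCacheSize and C_Cpp.intelliSenseCachePath. A minimal sketch, assuming the IntelliSense cache really is the culprit (the path is just an example):

{
    "C_Cpp.intelliSenseCacheSize": 0,
    "C_Cpp.intelliSenseCachePath": "D:\\ipch-cache"
}

A size of 0 disables the cache entirely; alternatively, the path setting moves it to a drive with more room.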
After installing all the latest Windows updates and freeing up space on my C drive, I can now run Visual Studio Code with virtually no disk space consumption (about 300MB). I'm not sure whether it was the Windows updates or the additional disk space that helped. Anyway, here is how I freed up about 20GB of disk space (commands for some of the steps are sketched below the list):
I identified the folders that consume the most disk space with WinDirStat.
I deleted hiberfil.sys.
I manually defragmented windows.edb.
I reduced the size of the WinSxS folder.
I reduced the size of the Windows Installer directory with PatchCleaner.
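For reference, the standard commands behind the hiberfil.sys, windows.edb, and WinSxS steps look roughly like this; run them from an elevated command prompt, and note that the Windows.edb path shown is the usual default and may differ on your system:

:: delete hiberfil.sys by turning hibernation off
powercfg /hibernate off
:: defragment the Windows Search index (windows.edb); stop the service first
net stop wsearch
esentutl /d "C:\ProgramData\Microsoft\Search\Data\Applications\Windows\Windows.edb"
net start wsearch
:: shrink the WinSxS folder
Dism.exe /Online /Cleanup-Image /StartComponentCleanup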
It worked fine before the update.
After updating to Big Sur I can't start minikube.
minikube start --kubernetes-version=v1.19.2
Exiting due to K8S_INSTALL_FAILED: updating control plane: copy: copy: sudo test -d /var/tmp/minikube && sudo scp -t /var/tmp/minikube: Process exited with status 1
output: scp: /var/tmp/minikube/kubeadm.yaml.new: Read-only file system
scp: protocol error: expected control record
Maybe I need to add some settings? 🙂
Big Sur is a bit unstable. I use a 2015 MacBook Air and couldn't upgrade to Big Sur: the 128GB of storage I have doesn't let me free up the 42GB the upgrade needs, so I am using Catalina right now.
In any case, I decided that staying on Catalina is better, because there are lots of comments about bugs and problems in Big Sur, and upgrading might make my computer slower since it is 5 years old.
Install it if there is a snapshot version or a newer version compatible with Big Sur. If there isn't, I'm afraid you have to wait for a Big Sur-compatible version.
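If you do stay on Big Sur, one thing worth trying (an assumption that the VM state went stale across the upgrade, not a confirmed fix) is to update minikube and rebuild its VM from scratch:

brew upgrade minikube                         # pick up a newer, Big Sur-compatible build, if one exists
minikube delete                               # discard the possibly corrupted VM state
minikube start --kubernetes-version=v1.19.2   # recreate the cluster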
The motherboard of a ZFS-based NAS died, and I'm now trying to access the data and move it, or revive the NAS. Debian and ZFS haven't been updated since 2015 or so, however. What I can glean from the log-files is:
ZFS 0.6.4
ZFS pool version 5000
ZFS filesystem 5
Debian Wheezy
Linux 3.2.0-4
So far so good. This Debian is rather old, though, and ZFS and some dependencies have to be compiled by hand to get it all going again - the apt repos have been largely purged of this old stuff, it seems.
So, I'm wondering if it's safe to just spin up a modern Ubuntu, say, and simply import the existing ZFS pools there.
The ZFS should get updated in any case, so it would be really neat if this just worked with Ubuntu 20, for example...
What came up after a bit of digging is that the ZFS pool version today is still 5000, according to Wikipedia, but I can't find any information about what this "ZFS filesystem 5" refers to. I'm not sure at all what the right upgrade strategy is, or what the relevant documentation might be. Any pointers would be very welcome.
Here's what I did:
Install Ubuntu 20.04, install zfsutils-linux.
Run zpool import; this lists all the pools the system can find.
Run zpool import -f <poolname> (the -f is required because ZFS will otherwise complain that the "pool was previously in use from another system").
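Put together, with tank as a placeholder pool name (the final upgrade step is optional and irreversible, so only run it once you're sure you won't need the old system again):

sudo zpool import                  # scan attached disks and list importable pools
sudo zpool import -f tank          # force-import the pool from the dead NAS
zpool status tank                  # check that all vdevs are ONLINE
zfs list -r tank                   # confirm the datasets are visible
sudo zpool upgrade tank            # optional: enable current feature flags (one-way)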
I have a project that has approximately 2000 files (not including library files) that I want to build.
In NetBeans 6.9 I was getting an "out of memory" error even when I increased the heap to 1 GB, but I got around it by building a few packages at a time.
In NetBeans 7.2, however, I am not able to do this. Even for packages containing 30 files, I sometimes get the "out of memory" error.
So, what is the maximum number of files that can be built simultaneously?
How do I get over this problem?
The heap size is 1GB.
UPDATE:
My machine is running 32-bit Windows 7 on 64-bit hardware. Currently, I can't reinstall a 64-bit Windows 7.
Other configuration: 4 GB RAM, Intel Core 2 Quad CPU at 2.66GHz.
netbeans.conf:
netbeans_default_options="-J-client -J-Xss2m -J-Xms384m -J-Xmx1024M -J-XX:PermSize=32m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true -J-Dsun.zip.disableMemoryMapping=true"
I have a Java project containing about 2400 source files and it builds fine within 50-60 seconds.
I don't expect NetBeans to have such a limitation; any limitation you are hitting is probably hardware- or setup-related.
I start NetBeans with the following options:
netbeans_default_options="-J-client -J-Xss32m -J-Xms256m -J-Xmx1g -J-XX:PermSize=64m -J-XX:+UseConcMarkSweepGC -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true -J-Dsun.zip.disableMemoryMapping=true"
Those options are located in install_dir/etc/netbeans.conf
setup:
Core i7 with 10GB RAM running jdk1.6.0_33 on 64-bit Windows 7
"out of memory" could be caused by heap size or permgen size. You could use jVisualVM in your jdk/bin to monitor the memory usage.
From your setting, "-J-XX:PermSize=32m" in addition to NetBeans behavior: "Note that default -Xmx and -XX:MaxPermSize are selected for you automatically.". I guess it could be due to PermGen size.
You can try to set "-J-XX:PermSize=128m"
(For my case, the startup of NetBeans is faster after I increased the permsize.)
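A sketch of the corresponding line in install_dir/etc/netbeans.conf; adding an explicit -J-XX:MaxPermSize is my assumption (it is the actual PermGen cap), and both values are starting points to tune rather than definitive numbers:

netbeans_default_options="-J-client -J-Xss2m -J-Xms384m -J-Xmx1024M -J-XX:PermSize=128m -J-XX:MaxPermSize=256m -J-Dapple.laf.useScreenMenuBar=true -J-Dapple.awt.graphics.UseQuartz=true -J-Dsun.java2d.noddraw=true -J-Dsun.zip.disableMemoryMapping=true"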
I'm using NetBeans 6.7 on Windows XP. I'm not really sure what the pattern is, but lately performance has gotten so bad that it's almost unusable. Any ideas for where to look for slowdowns?
Intel Core Duo at 2.2 GHz, 3.5 GB of RAM according to the System Properties panel, and 90 GB of free hard disk space.
NetBeans 6.5 "leaks" temporary files. It creates temporary files in %TEMP% (typically C:\Documents and Settings\<username>\Local Settings\Temp) and does not delete them. When enough files accumulate, access to the temporary directory slows to a crawl. That in turn drags NetBeans down to a crawl.
To clean it up:
Shut down NetBeans
Open a command prompt and type:
cd %TEMP%
del *.java
del *.form
del output*
del *vcs*
Important:
Do not try to do this with Windows Explorer. It won't work.
The deletes can take several minutes each. Be patient.
This is much better in 6.7 and I have not seen it at all in 6.8.
If you're running on Java 6, you can use the JConsole app to connect to your running NetBeans instance and see, among other things, what the threads are doing, how much memory is in use, and whether your threads are deadlocked.
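A quick sketch of attaching (jps and jconsole both ship with the JDK; the PID below is just a placeholder):

jps -l           # list running JVM processes with their PIDs
jconsole 12345   # attach JConsole to the NetBeans JVM (replace 12345 with the real PID)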