I am trying to set the GPU graphics and memory transfer rate clock offsets on my NVIDIA EVGA GT 1030 SC at boot. I am a Linux noob here, using Rocky Linux 9.
Question:
How do I check my current set values for GPUGraphicsClockOffsetAllPerformanceLevels and GPUMemoryTransferRateOffsetAllPerformanceLevels?
Currently I check by starting NVIDIA X Server Settings and looking for the values of Graphics Clock Offset and Memory Transfer Rate Offset under the PowerMizer tab. Is there a better way? I don't even know whether that display is always up to date...
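For reference, these attributes can also be queried from a terminal instead of the GUI; this assumes an X session is running, since nvidia-settings reads them from the X driver:
nvidia-settings -q [gpu:0]/GPUGraphicsClockOffsetAllPerformanceLevels
nvidia-settings -q [gpu:0]/GPUMemoryTransferRateOffsetAllPerformanceLevels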
What did I do wrong, and what do I need to change in my bootup script below to make it work?
I think my bootup script is NOT working, because the PowerMizer tab in NVIDIA X Server Settings does NOT show clock offsets of 50 MHz and 200 MHz for the graphics and memory transfer rates, respectively, when I boot with my script. It only shows 0 and 0.
However, it does show 50 MHz and 200 MHz when I enter the following three commands, one after another, directly in a bash terminal.
nvidia-smi -pm 1
nvidia-settings -a [gpu:0]/"GPUGraphicsClockOffsetAllPerformanceLevels=50"
nvidia-settings -a [gpu:0]/"GPUMemoryTransferRateOffsetAllPerformanceLevels=200"
Below is the bootup script...
i. Wrote a shell script named nVidiaStartUp.sh and placed it in /etc/rc.d/init.d. The script contains:
#!/bin/bash
nvidia-smi -pm 1
nvidia-settings -a [gpu:0]/"GPUGraphicsClockOffsetAllPerformanceLevels=50"
nvidia-settings -a [gpu:0]/"GPUMemoryTransferRateOffsetAllPerformanceLevels=200"
ii. In a terminal, executed:
chmod +x /etc/rc.d/init.d/nVidiaStartUp.sh
iii. Added a systemd unit file named nVidiaStartUp.service in /etc/systemd/system with the contents below:
[Unit]
Description=nVidia Startup Script Call with Undervolt
After=getty.target
[Service]
Type=simple
ExecStart=/etc/rc.d/init.d/nVidiaStartUp.sh
TimeoutStartSec=0
#RemainAfterExit=yes
[Install]
WantedBy=default.target
#WantedBy=graphical.target
#WantedBy=multi-user.target
iv. Ran in a terminal:
systemctl enable nVidiaStartUp.service
v. Rebooted, then checked the clock offsets under PowerMizer in NVIDIA X Server Settings. I don't see 50 MHz and 200 MHz; I only see 0 and 0. That seems to imply my bootup script isn't working? Please help!
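Two hedged debugging sketches, since the steps above look right on paper. First, the unit's own logs show whether it ran at boot at all and what the nvidia-settings calls printed:
systemctl status nVidiaStartUp.service
journalctl -b -u nVidiaStartUp.service
Second, nvidia-settings talks to the running X server, so it needs DISPLAY (and usually XAUTHORITY) in its environment, which a system unit ordered after getty.target will not have. Below is a sketch of a unit that instead waits for the graphical target and supplies an assumed X environment; the display number and the .Xauthority path are guesses for a typical single-seat setup and must be adjusted:
[Unit]
Description=nVidia Startup Script Call with Undervolt
After=graphical.target
[Service]
Type=oneshot
RemainAfterExit=yes
# Assumed values for a typical single-user X session; adjust to match yours
Environment=DISPLAY=:0
Environment=XAUTHORITY=/home/youruser/.Xauthority
ExecStart=/etc/rc.d/init.d/nVidiaStartUp.sh
[Install]
WantedBy=graphical.target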
======================================================================================================================================================
Additional background info:
I have installed the NVIDIA driver and it loads with the proper GPU information. Here is the output after running nvidia-smi:
Fri Nov 4 11:39:36 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:0B:00.0 On | N/A |
| 48% 50C P0 N/A / 30W | 373MiB / 2048MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2309 G /usr/libexec/Xorg 100MiB |
| 0 N/A N/A 2439 G /usr/bin/gnome-shell 139MiB |
| 0 N/A N/A 3270 G ...470561649073451082,131072 130MiB |
+-----------------------------------------------------------------------------+
The relevant sections of my /etc/X11/xorg.conf for Coolbits (I am using 8 for Coolbits, since this card is passively cooled) are:
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "NVIDIA GeForce GT 1030"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
Option "Coolbits" "8"
SubSection "Display"
Depth 24
EndSubSection
EndSection
Overview:
When I attempt to run VS Code following the instructions on the contributions page (download all the packages, build the source code, and then run it from the terminal), an error message pops up saying that I don't have the Electron app in the vscode directory. Shouldn't the Electron app have been installed when I ran the yarn command to install and build all the dependencies?
Steps to reproduce the bug:
$ yarn                 # install and build all dependencies
$ yarn watchd          # build vscode
$ ./scripts/code.sh    # run vscode
Error Message:
Error launching app
Unable to find Electron app at /home/juan/Desktop/Projects/vscode
Cannot find module '/home/juan/Desktop/Projects/vscode/out/main'. Please verify that the package.json has a valid "main" entry
System Details:
CPUs | Intel(R) Core(TM) i7-6600U CPU @ 2.60GHz (4 x 3200)
-- | --
GPU Status | 2d_canvas: unavailable_software; flash_3d: disabled_software; flash_stage3d: disabled_software; flash_stage3d_baseline: disabled_software; gpu_compositing: disabled_software; multiple_raster_threads: enabled_on; oop_rasterization: disabled_off; protected_video_decode: disabled_off; rasterization: disabled_software; skia_renderer: disabled_off_ok; video_decode: disabled_software; viz_display_compositor: enabled_on; viz_hit_test_surface_layer: disabled_off_ok; webgl: unavailable_software; webgl2: unavailable_software
Load (avg) | 1, 1, 1
Memory (System) | 7.63GB (0.12GB free)
Process Argv | . --no-sandbox
Screen Reader | no
VM | 0%
OS | Ubuntu 18.04 LTS
Extensions:
Extension | Author (truncated) | Version
-- | -- | --
Bookmarks | ale | 11.2.0
vscode-sqlite | ale | 0.8.2
code-gnu-global | aus | 0.2.2
npm-intellisense | chr | 1.3.0
vscode-svgviewer | css | 2.0.0
vscode-markdownlint | Dav | 0.36.0
jshint | dba | 0.10.21
vscode-eslint | dba | 2.1.5
vscode-html-css | ecm | 0.2.3
EditorConfig | Edi | 0.15.1
vscode-npm-script | eg2 | 0.3.12
vscode-firefox-debug | fir | 2.8.0
beautify | Hoo | 1.5.0
vscode-emacs-friendly | lfs | 0.9.0
rainbow-csv | mec | 1.7.0
python | ms- | 2020.5.80290
cpptools | ms- | 0.28.2
debugger-for-chrome | msj | 4.12.8
sqltools | mtx | 0.22.5
material-icon-theme | PKi | 4.1.0
rust | rus | 0.7.8
lc2k | vio | 1.1.1
Here is the bug report I filed on the VS Code GitHub page: https://github.com/microsoft/vscode/issues/99537
I got this same error myself when the code did not build correctly.
In your second step you do:
yarn watchd
I tried this command myself, but ran into the same issue that you have stated here. Although the official wiki suggests this as a tip, I would just ignore it.
Instead, do one of the following (this is what the official wiki originally suggests):
Type: Ctrl + Shift + B
Or alternatively use the Command Palette:
Type: Ctrl + Shift + P
Search for the option called: Tasks: Run Build Task and select it.
Once you start the build task you'll see a couple of things:
Firstly, at the bottom of VS Code (on your status line), VS Code will let you know the code is building.
Secondly, the build command will open two terminals:
Task - Build VS Code
Task - Build VS Code Extensions
Watch the output of both terminals and make sure:
Task - Build VS Code terminal outputs: [some time] Finished compilation ...
and
Task - Build VS Code Extensions terminal outputs: [some time] Finished compilation extensions ...
If the build fails, you'll probably get a notification from VS Code saying so (you'll probably get the error twice, once for each task):
yarn ... exited with code [some non-zero integer]
A common error that may occur is the ENOSPC error from inotify (also well documented in a Medium blog post). You'll want to issue this command:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Arch users would issue:
echo fs.inotify.max_user_watches=524288 | sudo tee /etc/sysctl.d/40-max-user-watches.conf && sudo sysctl --system
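To confirm the new limit actually took effect, you can read it back:
cat /proc/sys/fs/inotify/max_user_watches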
After fixing this, trying to build again should work. Start the build task again and make sure both tasks succeed. (You'll notice that the tasks do not end after they succeed. This is because they will watch for changes you make in the code while developing and automatically recompile for you).
If successful you may finally issue:
./scripts/code.sh
A new instance of VS Code should open called: Code - OSS dev. This is the version of VS Code you just built.
I have set up a Kubernetes node with an NVIDIA Tesla K80 and followed this tutorial to try to run a PyTorch Docker image with the NVIDIA and CUDA drivers working.
My NVIDIA and CUDA drivers are all accessible inside my pod at /usr/local:
$> ls /usr/local
bin cuda cuda-10.0 etc games include lib man nvidia sbin share src
And my GPU is also recognized by my image nvidia/cuda:10.0-runtime-ubuntu18.04:
$> /usr/local/nvidia/bin/nvidia-smi
Fri Nov 8 16:24:35 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79 Driver Version: 410.79 CUDA Version: 10.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 73C P8 35W / 149W | 0MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
But after installing PyTorch 1.3.0, I'm not able to make PyTorch recognize my CUDA installation, even with LD_LIBRARY_PATH set to /usr/local/nvidia/lib64:/usr/local/cuda/lib64:
$> python3 -c "import torch; print(torch.cuda.is_available())"
False
$> python3
Python 3.6.8 (default, Oct 7 2019, 12:59:55)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print ('\t\ttorch.cuda.current_device() =', torch.cuda.current_device())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 386, in current_device
_lazy_init()
File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 192, in _lazy_init
_check_driver()
File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 111, in _check_driver
of the CUDA driver.""".format(str(torch._C._cuda_getDriverVersion())))
AssertionError:
The NVIDIA driver on your system is too old (found version 10000).
Please update your GPU driver by downloading and installing a new
version from the URL: http://www.nvidia.com/Download/index.aspx
Alternatively, go to: https://pytorch.org to install
a PyTorch version that has been compiled with your version
of the CUDA driver.
The error above is strange, because the CUDA version of my image is 10.0, and the Google GKE documentation states:
The latest supported CUDA version is 10.0
Also, it's GKE's DaemonSet that automatically installs the NVIDIA drivers:
After adding GPU nodes to your cluster, you need to install NVIDIA's device drivers to the nodes.
Google provides a DaemonSet that automatically installs the drivers for you.
Refer to the section below for installation instructions for Container-Optimized OS (COS) and Ubuntu nodes.
To deploy the installation DaemonSet, run the following command:
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/container-engine-accelerators/master/nvidia-driver-installer/cos/daemonset-preloaded.yaml
I have tried everything I could think of, without success...
I resolved my problem by downgrading my PyTorch version, building my Docker images from pytorch/pytorch:1.2-cuda10.0-cudnn7-devel.
I still don't really know why it was not working before, other than guessing that PyTorch 1.3.0 is not compatible with CUDA 10.0.
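As a diagnostic sketch for anyone hitting the same thing: a prebuilt PyTorch wheel records the CUDA version it was compiled against, which you can print and compare with what nvidia-smi reports:
python3 -c "import torch; print(torch.__version__, torch.version.cuda)"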
How should I fix this error?
[jalal@goku bin]$ source activate deep_emotion
(deep_emotion) [jalal@goku bin]$ python
Python 3.5.4 | packaged by conda-forge | (default, Nov 4 2017, 10:11:29)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import keras
Using Theano backend.
>>> quit()
(deep_emotion) [jalal@goku bin]$ export KERAS_BACKEND=tensorflow
(deep_emotion) [jalal@goku bin]$ python
Python 3.5.4 | packaged by conda-forge | (default, Nov 4 2017, 10:11:29)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import keras
Using TensorFlow backend.
2017-11-20 17:49:18.666294: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-20 17:49:18.666337: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-20 17:49:18.666347: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-11-20 17:49:18.666354: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-20 17:49:18.666363: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-11-20 17:49:19.196610: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.6705
pciBusID 0000:05:00.0
Total memory: 10.91GiB
Free memory: 158.06MiB
2017-11-20 17:49:19.426132: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x42e9db0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2017-11-20 17:49:19.426768: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 1 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.6705
pciBusID 0000:06:00.0
Total memory: 10.91GiB
Free memory: 398.44MiB
2017-11-20 17:49:19.427277: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 1
2017-11-20 17:49:19.427309: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y Y
2017-11-20 17:49:19.427323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 1: Y Y
2017-11-20 17:49:19.427347: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0)
2017-11-20 17:49:19.427362: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:06:00.0)
2017-11-20 17:49:19.429776: E tensorflow/stream_executor/cuda/cuda_driver.cc:924] failed to allocate 158.06M (165740544 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
>>> quit()
(deep_emotion) [jalal@goku bin]$ conda list | grep keras
keras 2.0.9 py35_0 conda-forge
(deep_emotion) [jalal@goku bin]$ conda list | grep tensorflow
tensorflow-gpu 1.3.0 0
tensorflow-gpu-base 1.3.0 py35cuda8.0cudnn6.0_1
tensorflow-tensorboard 0.1.5 py35_0
Sys info is as follows:
$ uname -a
Linux goku.bu.edu 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
and
(deep_emotion) [jalal@goku bin]$ nvidia-smi
Mon Nov 20 17:51:50 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.81 Driver Version: 384.81 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:05:00.0 On | N/A |
| 0% 25C P8 19W / 250W | 10862MiB / 11172MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... Off | 00000000:06:00.0 Off | N/A |
| 0% 36C P8 19W / 250W | 10622MiB / 11172MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 2062 G /usr/bin/X 183MiB |
| 0 2779 G /usr/bin/gnome-shell 176MiB |
| 0 3298 C /cs/software/anaconda3/bin/python 10341MiB |
| 0 4350 G ...-token=2BC290A510039A38C05EF3ECBAA5E5E5 78MiB |
| 0 5212 G /usr/lib64/firefox/plugin-container 5MiB |
| 0 32257 G /proc/self/exe 64MiB |
| 1 3298 C /cs/software/anaconda3/bin/python 10611MiB |
+-----------------------------------------------------------------------------+
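Judging by the process list above, PID 3298 was holding roughly 10 GiB on each GPU. As a sketch of a lighter-weight alternative to a full reboot (assuming you own that process or have root), terminating it should let the driver reclaim the memory:
kill 3298        # or: sudo kill -9 3298 if it ignores SIGTERM
nvidia-smi       # check that the memory was released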
Thanks to Robert Crovella for the suggestions. Restarting the machine solved the problem:
[jalal@goku ~]$ source activate deep_emotion
(deep_emotion) [jalal@goku ~]$ export KERAS_BACKEND=tensorflow
(deep_emotion) [jalal@goku ~]$ python
Python 3.5.4 | packaged by conda-forge | (default, Nov 4 2017, 10:11:29)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import keras
Using TensorFlow backend.
2017-11-20 18:43:28.424658: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-20 18:43:28.424690: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-20 18:43:28.424727: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-11-20 18:43:28.424734: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-20 18:43:28.424745: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-11-20 18:43:28.951509: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.6705
pciBusID 0000:05:00.0
Total memory: 10.91GiB
Free memory: 10.44GiB
2017-11-20 18:43:29.172079: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x31d6630 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2017-11-20 18:43:29.172825: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 1 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.6705
pciBusID 0000:06:00.0
Total memory: 10.91GiB
Free memory: 10.75GiB
2017-11-20 18:43:29.173970: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 1
2017-11-20 18:43:29.174019: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y Y
2017-11-20 18:43:29.174034: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 1: Y Y
2017-11-20 18:43:29.174055: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0)
2017-11-20 18:43:29.174070: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:06:00.0)
>>> import tensorflow
>>>
Process Explorer has columns for CPU time (down to milliseconds) and CPU Cycles. For WinDbg, I am aware of the !runaway command (and !runaway 7 for more details), but it shows CPU time only.
Are the CPU cycles also available somehow in a user mode crash dump?
What I have tried:
I looked at dt nt!_KTHREAD and I see it has a CycleTime property
ntdll!_KTHREAD
+0x000 Header : _DISPATCHER_HEADER
+0x018 CycleTime : Uint8B
I tried to query that property in a !for_each_thread, but WinDbg responds that it's available in kernel mode only.
Why do I want those CPU cycles?
I am working on a training course for JetBrains dotTrace. It has an option to count CPU cycles and I'd like to explain where these cycles come from. The kernel structure above plus Process Explorer is probably enough, but it would be awesome to see it live or post mortem in a user-mode dump. I explain a lot of the basics with WinDbg.
Following the implementation of GetProcessTimes() in ReactOS, you can see that the information is copied from the process' KPROCESS. So, indeed, it's only physically present in a dump that includes kernel memory.
C:\tw>ls -l
total 0
C:\tw>cdb -c ".dump /ma .\tw.dmp;q" calc.exe | grep writ
Dump successfully written
C:\tw>cdb -c "lm;!peb;.dump /ma .\tw1.dmp;q" calc.exe | grep writ
Dump successfully written
C:\tw>cdb -c ".ttime;q" -z tw.dmp | grep -B 3 quit
Created: Wed Apr 5 20:03:55.919 2017 ()
Kernel: 0 days 0:00:00.046
User: 0 days 0:00:00.000
quit:
C:\tw>cdb -c ".ttime;q" -z tw1.dmp | grep -B 3 quit
Created: Wed Apr 5 20:04:28.682 2017 ()
Kernel: 0 days 0:00:00.031
User: 0 days 0:00:00.000
quit:
C:\tw>
On my Pi, right after startup there is no free memory, but I cannot find what is using it:
pi@node1 ~ $ cat /proc/cpuinfo
processor : 0
model name : ARMv6-compatible processor rev 7 (v6l)
BogoMIPS : 2.00
Features : half thumb fastmult vfp edsp java tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xb76
CPU revision : 7
Hardware : BCM2708
Revision : 0013
Serial : 00000000bf2e5e5c
pi@node1 ~ $ uname -a
Linux node1 4.0.7+ #801 PREEMPT Tue Jun 30 18:15:24 BST 2015 armv6l GNU/Linux
pi@node1 ~ $ head -n1 /etc/issue
Raspbian GNU/Linux 7 \n \l
pi@node1 ~ $ grep MemTotal /proc/meminfo
MemTotal: 493868 kB
pi@node1 ~ $ grep "model name" /proc/cpuinfo
model name : ARMv6-compatible processor rev 7 (v6l)
pi@node1 ~ $ ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -5
0.6 0.2 6244 2377 -bash
0.3 0.0 6748 2458 sort -k 1 -nr
0.3 0.0 4140 2457 ps -eo pmem,pcpu,vsize,pid,cmd
0.2 0.1 9484 2376 sshd: pi@pts/0
0.2 0.1 5600 2236 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 104:107
pi@node1 ~ $ free
total used free shared buffers cached
Mem: 493868 478364 15504 0 500 4956
-/+ buffers/cache: 472908 20960
Swap: 102396 116 102280
I am not a Linux expert, but if I understand it correctly, there is just 15 MB of free memory, yet no task uses more than 0.6%. Then why isn't more memory free?
Memory is not exclusively allocated by processes.
The bootloader and the initial RAM filesystem are stored in memory.
The kernel (which can be quite big) is loaded into memory.
The kernel reserves memory for its own processes; ps shows 0.0% for these system processes.
Drivers allocate buffer memory.
The graphics card needs memory (on the Pi, the GPU's share is carved out of system RAM).
If you have not configured your swap space on a hard drive or SD card, it uses memory.
The network subsystem allocates memory for Unix sockets and shared memory.
100 processes at 0.1% each add up to 10%.
And if you start a process and then stop it, not all of its memory will be released.
Try it: show the memory usage with free, start a process that needs some memory, stop the process, and run free again. I would bet that the memory usage is higher than before. See the sketch below.
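A minimal version of that experiment, assuming python is available (it ships with stock Raspbian):
free -m                          # note the "used" column
python -c 'x = [0] * 10**7'      # allocate some memory, then exit
free -m                          # compare with the first reading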
Edit
Here is an example of a Pi with lower memory usage. I have no problems running Java on it. I have a WLAN dongle and an original NoIR camera installed.
I installed Raspbian Wheezy and use a kernel that I compiled from source:
> uname -a
Linux raspberrypi 3.18.14+ #2 PREEMPT Sun May 31 20:19:04 UTC 2015 armv6l GNU/Linux
> head -n1 /etc/issue
Raspbian GNU/Linux 7 \n \l
On this Pi I can run java -version in an acceptable amount of time:
time java -version
java version "1.8.0"
Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) Client VM (build 25.0-b70, mixed mode)
real 0m1.012s
user 0m0.800s
sys 0m0.190s
Here is my memory footprint:
> free
total used free shared buffers cached
Mem: 380816 138304 242512 0 8916 96728
-/+ buffers/cache: 32660 348156
Swap: 102396 0 102396