What uses the memory on raspberry pi? - raspberry-pi

On my Pi there is almost no free memory right after boot, but I cannot find out what is using it:
pi@node1 ~ $ cat /proc/cpuinfo
processor : 0
model name : ARMv6-compatible processor rev 7 (v6l)
BogoMIPS : 2.00
Features : half thumb fastmult vfp edsp java tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xb76
CPU revision : 7
Hardware : BCM2708
Revision : 0013
Serial : 00000000bf2e5e5c
pi@node1 ~ $ uname -a
Linux node1 4.0.7+ #801 PREEMPT Tue Jun 30 18:15:24 BST 2015 armv6l GNU/Linux
pi@node1 ~ $ head -n1 /etc/issue
Raspbian GNU/Linux 7 \n \l
pi@node1 ~ $ grep MemTotal /proc/meminfo
MemTotal: 493868 kB
pi@node1 ~ $ grep "model name" /proc/cpuinfo
model name : ARMv6-compatible processor rev 7 (v6l)
pi@node1 ~ $ ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -5
0.6 0.2 6244 2377 -bash
0.3 0.0 6748 2458 sort -k 1 -nr
0.3 0.0 4140 2457 ps -eo pmem,pcpu,vsize,pid,cmd
0.2 0.1 9484 2376 sshd: pi@pts/0
0.2 0.1 5600 2236 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 104:107
pi@node1 ~ $ free
total used free shared buffers cached
Mem: 493868 478364 15504 0 500 4956
-/+ buffers/cache: 472908 20960
Swap: 102396 116 102280
I am not a Linux expert, but if I understand it correctly, there are only about 15 MB of free memory, yet no task uses more than 0.6%. So why isn't there more free memory?

Memory is not allocated exclusively by processes.
The bootloader and the initial RAM filesystem are stored in memory.
The kernel (which can be quite large) is loaded into memory.
The kernel reserves memory for its own processes; ps shows 0.0% for these system processes.
Drivers allocate buffer memory.
The graphics processor (GPU) needs memory.
If you have not configured swap space on a hard drive or SD card, swap uses memory.
The network stack allocates memory for Unix sockets and shared memory.
100 processes at 0.1% each add up to 10%.
And if you start a process and then stop it, not all of its memory is released.
Try it: show the memory usage with free, start a process that needs some memory, stop the process, and run free again. I would bet that more memory is in use than before.
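If you want to see where the memory went beyond what ps reports, the kernel's own accounting and the GPU split are good places to look. A minimal sketch (assuming a stock Raspbian install where vcgencmd and slabtop are available; the numbers these print are specific to your machine):
$ grep -E 'Slab|Buffers|Cached|Shmem' /proc/meminfo   # kernel slab caches, page cache, tmpfs
$ sudo slabtop -o | head                              # one-shot list of the biggest kernel caches
$ vcgencmd get_mem arm; vcgencmd get_mem gpu          # RAM split between the CPU and the GPU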
Edit
Here is an example of a Pi with less memory usage. I have no problems running Java on it. I have a WLAN dongle and an original NoIR camera installed.
I installed Raspbian Wheezy and use a kernel that I compiled from source:
> uname -a
Linux raspberrypi 3.18.14+ #2 PREEMPT Sun May 31 20:19:04 UTC 2015 armv6l GNU/Linux
> head -n1 /etc/issue
Raspbian GNU/Linux 7 \n \l
On this Pi I can run java -version in an acceptable amount of time:
time java -version
java version "1.8.0"
Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) Client VM (build 25.0-b70, mixed mode)
real 0m1.012s
user 0m0.800s
sys 0m0.190s
Here is my memory footprint
> free
total used free shared buffers cached
Mem: 380816 138304 242512 0 8916 96728
-/+ buffers/cache: 32660 348156
Swap: 102396 0 102396

Related

Getting CPU cycles from user mode dump

Process Explorer has columns for CPU time (down to milliseconds) and CPU Cycles. For WinDbg I am aware of the !runaway command, also !runaway 7 for more details, but it shows CPU time only.
Are the CPU cycles also available somehow in a user mode crash dump?
What I have tried:
I looked at dt nt!_KTHREAD and I see it has a CycleTime property
ntdll!_KTHREAD
+0x000 Header : _DISPATCHER_HEADER
+0x018 CycleTime : Uint8B
I tried to query that property in a !for_each_thread, but WinDbg responds that it's available in kernel mode only.
Why do I want those CPU cycles?
I am working on a training for JetBrains dotTrace. It has an option to count CPU cycles and I'd like to explain where these cycles come from. The kernel structure above plus Process Explorer is probably enough, but it would be awesome to see it live or post mortem in a user mode dump. I explain a lot of the basics with WinDbg.
Following the implementation of GetProcessTimes() in ReactOS, you can see that the information is copied from the process' KPROCESS. So, indeed, it's only physically present in a dump that includes kernel memory.
C:\tw>ls -l
total 0
C:\tw>cdb -c ".dump /ma .\tw.dmp;q" calc.exe | grep writ
Dump successfully written
C:\tw>cdb -c "lm;!peb;.dump /ma .\tw1.dmp;q" calc.exe | grep writ
Dump successfully written
C:\tw>cdb -c ".ttime;q" -z tw.dmp | grep -B 3 quit
Created: Wed Apr 5 20:03:55.919 2017 ()
Kernel: 0 days 0:00:00.046
User: 0 days 0:00:00.000
quit:
C:\tw>cdb -c ".ttime;q" -z tw1.dmp | grep -B 3 quit
Created: Wed Apr 5 20:04:28.682 2017 ()
Kernel: 0 days 0:00:00.031
User: 0 days 0:00:00.000
quit:
C:\tw>

*Excruciatingly* slow (over ten seconds for `(+ 1 1)`) with language "How To Design Programs - Beginning Student"

I just installed DrRacket, and tried out the language "How To Design Programs - Beginning Student".
Racket - A programmable programming language
Racket - Getting Started
I run (+ 1 1), and it takes over ten seconds for this to show up:
Welcome to DrRacket, version 6.5 [3m].
Language: Beginning Student; memory limit: 128 MB.
2
>
As far as I can tell, my installation is pretty much "out of the box".
What I'm wondering is whether my experience is unusual, and if so, whether there's any obvious way to troubleshoot it (I've looked around the settings and didn't find anything obvious to tweak), or if maybe the whole HTDP language was quietly abandoned or something...?
EDIT 1
I have these files:
/usr/share/racket $
find -iname "*htdp*.zo"
./pkgs/htdp-lib/lang/private/compiled/create-htdp-executable_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-reader_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-beginner-abbr-reader_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-langs-save-file-prefix_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-advanced-reader_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-intermediate-lambda-reader_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-advanced_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-beginner-abbr_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-intermediate_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-intermediate-reader_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-langs_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-beginner_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-intermediate-lambda_rkt.zo
./pkgs/htdp-lib/lang/compiled/htdp-beginner-reader_rkt.zo
./pkgs/htdp-doc/scribblings/htdp-langs/compiled/htdp-langs_scrbl.zo
./pkgs/htdp-doc/scribblings/htdp-langs/compiled/htdp-ptr_scrbl.zo
./pkgs/htdp-doc/htdp/compiled/htdp_scrbl.zo
./pkgs/htdp-doc/htdp/compiled/htdp-lib_scrbl.zo
./pkgs/htdp-doc/teachpack/htdp/scribblings/compiled/htdp_scrbl.zo
./pkgs/htdp-doc/teachpack/2htdp/scribblings/compiled/2htdp_scrbl.zo
EDIT 2 - CPU and hard drive specs
CPU
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 28
model name : Intel(R) Atom(TM) CPU N450 @ 1.66GHz
stepping : 10
microcode : 0x107
cpu MHz : 1000.000
cache size : 512 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm dtherm
bugs :
bogomips : 3325.00
clflush size : 64
cache_alignment : 64
address sizes : 32 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 28
model name : Intel(R) Atom(TM) CPU N450 @ 1.66GHz
stepping : 10
microcode : 0x107
cpu MHz : 1000.000
cache size : 512 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts nopl aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm dtherm
bugs :
bogomips : 3325.00
clflush size : 64
cache_alignment : 64
address sizes : 32 bits physical, 48 bits virtual
power management:
HD
$ sudo hdparm -I /dev/sda
/dev/sda:
ATA device, with non-removable media
Model Number: Hitachi HTS545016B9A300
Serial Number: 100324PBPB06ECC0K6XL
Firmware Revision: PBBOC60F
Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6; Revision: ATA8-AST T13 Project D1697 Revision 0b
Standards:
Used: unknown (minor revision code 0x0028)
Supported: 8 7 6 5
Likely used: 8
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 268435455
LBA48 user addressable sectors: 312581808
Logical/Physical Sector size: 512 bytes
device size with M = 1024*1024: 152627 MBytes
device size with M = 1000*1000: 160041 MBytes (160 GB)
cache/buffer size = 7208 KBytes (type=DualPortCache)
Form Factor: 2.5 inch
Nominal Media Rotation Rate: 5400
Capabilities:
LBA, IORDY(can be disabled)
Queue depth: 32
Standby timer values: spec'd by Vendor, no device specific minimum
R/W multiple sector transfer: Max = 16 Current = 16
Advanced power management level: 254
Recommended acoustic management value: 128, current value: 254
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
Security Mode feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* NOP cmd
* DOWNLOAD_MICROCODE
* Advanced Power Management feature set
Power-Up In Standby feature set
* SET_FEATURES required to spinup after power up
SET_MAX security extension
Automatic Acoustic Management feature set
* 48-bit Address feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* FLUSH_CACHE_EXT
* SMART error logging
* SMART self-test
* General Purpose Logging feature set
* WRITE_{DMA|MULTIPLE}_FUA_EXT
* 64-bit World wide name
* IDLE_IMMEDIATE with UNLOAD
* WRITE_UNCORRECTABLE_EXT command
* {READ,WRITE}_DMA_EXT_GPL commands
* Segmented DOWNLOAD_MICROCODE
* Gen1 signaling speed (1.5Gb/s)
* Gen2 signaling speed (3.0Gb/s)
* Native Command Queueing (NCQ)
* Host-initiated interface power management
* Phy event counters
* NCQ priority information
Non-Zero buffer offsets in DMA Setup FIS
* DMA Setup Auto-Activate optimization
Device-initiated interface power management
In-order data delivery
* Software settings preservation
* SMART Command Transport (SCT) feature set
* SCT Write Same (AC2)
* SCT Error Recovery Control (AC3)
* SCT Features Control (AC4)
* SCT Data Tables (AC5)
Security:
Master password revision code = 65534
supported
not enabled
not locked
frozen
not expired: security count
supported: enhanced erase
64min for SECURITY ERASE UNIT. 66min for ENHANCED SECURITY ERASE UNIT.
Logical Unit WWN Device Identifier: 5000cca5ffc040a7
NAA : 5
IEEE OUI : 000cca
Unique ID : 5ffc040a7
Checksum: correct
EDIT 3 - command-line times
Ran (+ 1 1) in HTDP-beginner 3 times -- over 5 seconds each.
$ time racket -t racket_HTDP_beginner.rkt
2
5.60user 1.04system 0:08.46elapsed 78%CPU (0avgtext+0avgdata 127968maxresident)k
5496inputs+0outputs (46major+40955minor)pagefaults 0swaps
$ time racket -t racket_HTDP_beginner.rkt
2
5.51user 0.67system 0:06.71elapsed 92%CPU (0avgtext+0avgdata 128124maxresident)k
24inputs+0outputs (0major+41790minor)pagefaults 0swaps
$ time racket -t racket_HTDP_beginner.rkt
2
5.41user 0.67system 0:06.55elapsed 92%CPU (0avgtext+0avgdata 128180maxresident)k
0inputs+0outputs (0major+36683minor)pagefaults 0swaps
Ran (+ 1 1) in #lang racket 3 times -- a bit over 2 seconds each.
$ time racket -t racket_lang_racket.rkt
2
2.13user 0.25system 0:02.71elapsed 87%CPU (0avgtext+0avgdata 64996maxresident)k
0inputs+0outputs (0major+12437minor)pagefaults 0swaps
$ time racket -t racket_lang_racket.rkt
2
2.15user 0.25system 0:02.63elapsed 91%CPU (0avgtext+0avgdata 61700maxresident)k
0inputs+0outputs (0major+15853minor)pagefaults 0swaps
$ time racket -t racket_lang_racket.rkt
2
2.28user 0.29system 0:02.89elapsed 89%CPU (0avgtext+0avgdata 61500maxresident)k
0inputs+0outputs (0major+15015minor)pagefaults 0swaps
EDIT 4
Running free -h every second while DrRacket runs (+ 1 1) in the HTDP-beginner language
(not running any other applications besides DrRacket and my basic system, i.e. window manager etc.)
(this is the fish shell, by the way):
http://pastebin.com/2RdZAuXj
At that point I had to kill DrRacket because everything was freezing up.
Anyway, yeah, it's leaking, obviously.
Every time I re-ran the code in DrRacket, memory usage went up and stayed up.
I had only run it about twenty-two-ish (?) more times (so maybe thirty-ish in total?) by the point where it started getting near the limit and I killed it to unfreeze the system.
I guess I should try this with normal #lang racket and see what happens...
EDIT 5
Yup, it leaks the same way:
http://pastebin.com/373PNnY7
I am 90% sure that the reason you had to wait was that your installation of Racket wasn't done properly.
During installation the program setup-plt needs to be run. It precompiles all Racket files (.rkt) into so-called zo-files. If this step is omitted, then
DrRacket does the compilation for you the first time a file is needed.
In your case (I am guessing) it is your first time using the Beginner language, so all files relating to it need to be compiled. And that takes a while.
The best solution is to use the official installers from http://download.racket-lang.org/ - they all include precompiled zo-files.
If you happen to have used such an installer, then try again - and if the problem persists - do file a bug report (use the bug report item in the Help menu in DrRacket).
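If the precompiled files are missing or stale (for example because Racket came from a distribution package rather than an official installer), you can also rebuild them yourself. A minimal sketch, assuming raco is on your PATH; "my-program.rkt" is just a placeholder file name:
$ raco setup                # recompile every installed collection and package (slow, but a one-off)
$ raco make my-program.rkt  # precompile a single file into its compiled/*.zo form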

Perl cannot allocate more than 1.1 GB on a Snow leopard Mac server with 32 GB RAM

I have a Mac server (Snow Leopard) with 32 GB RAM. When I try to allocate more than 1.1 GB of RAM in Perl (v5.10.0) I get an out-of-memory error. Here is the script that I used:
#!/usr/bin/env perl
# My snow leopard MAC server runs out of memory at >1.1 billion bytes. How
# can this be when I have 32 GB of memory installed? Allocating up to 4
# billion bytes works on a Dell Win7 12GB RAM machine.
# perl -v
# This is perl, v5.10.0 built for darwin-thread-multi-2level
# (with 2 registered patches, see perl -V for more detail)
use strict;
use warnings;
my $s;
print "Trying 1.1 GB...";
$s = "a" x 1100000000; # ok
print "OK\n\n";
print "Trying 1.2 GB...";
$s = '';
$s = "a" x 1200000000; # fails
print "..OK\n";
Here is the output that I get:
Trying 1.1 GB...OK
perl(96685) malloc: *** mmap(size=1200001024) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Out of memory!
Trying 1.2 GB...
Any ideas why this is happening?
UPDATE 4:42pm 11/14/13
As per Kent Fredric (see two posts below), here are my ulimits. Virtual memory defaults to unlimited:
$ ulimit -a | grep bytes
data seg size (kbytes, -d) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
virtual memory (kbytes, -v) unlimited
$ perl -E 'my $x = "a" x 1200000000; print "ok\n"'
perl(23074) malloc: *** mmap(size=1200001024) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Out of memory!
$ perl -E 'my $x = "a" x 1100000000; print "ok\n"'
ok
I tried setting virtual memory to 10 billion but to no avail.
$ ulimit -v 10000000000 # 10 billion
$ perl -E 'my $x = "a" x 1200000000; print "ok\n"'
perl(24275) malloc: *** mmap(size=1200001024) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Out of memory!
You're using a 32-bit build of Perl (as shown by perl -V:ptrsize), but you need a 64-bit build. I recommend installing a local perl using perlbrew.
This can be achieved by passing -Duse64bitall to Configure when installing Perl.
This can be achieved by passing --64all to perlbrew install when installing Perl.
(For some odd reason, perl -V:use64bitall says this was done, but it clearly wasn't.)
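A quick way to confirm which build you are actually running (a sketch; the value below is what a 32-bit build typically reports, while a 64-bit build shows ptrsize='8'):
$ perl -V:ptrsize
ptrsize='4';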
Seems like this could be related to the problem. This is only really worthy of a comment, but it's too complex to post as one without it being entirely illegible.
perlbrew exec --with=5.10.0 memusage perl -e '$x = q[a] x 1_000_000_000; print length($x)'
5.10.0
==========
1000000000
Memory usage summary: heap total: 2000150514, heap peak: 2000141265, stack peak: 4896
Yes, that's 2 GB of memory for 1 GB of text.
Now with 2 GB ...
perlbrew exec --with=5.10.0 memusage perl -e '$x = q[a] x 1_000_000_000; $y = q[a] x 1_000_000_000; print length($x)+length($y)'
5.10.0
==========
2000000000
Memory usage summary: heap total: 4000151605, heap peak: 4000142092, stack peak: 4896
Yikes. That would certainly hit the 32-bit limit if you had one.
I was spoiled and doing my testing on 5.19.5, which has a notable improvement, namely copy-on-write strings, which greatly reduces memory consumption:
perlbrew exec --with=5.19.5 memusage perl -e '$x = q[a] x 1_000_000_000; $y = q[a] x 1_000_000_000; print length($x)+length($y)'
5.19.5
==========
2000000000
Memory usage summary: heap total: 2000157713, heap peak: 2000150396, stack peak: 5392
So either way, if you're using any version of Perl other than a development one, you should expect it to eat twice the memory you need.
If there's a memory limit for some reason around the 2 GB mark for 32-bit processes, then you will hit that with a 1 GB string.
Why does Copy On Write matter?
Well, when you do
$a = $b
$a is a copy of $b
So when you do
$a = "a" x 1_000_000_000
First, it expands the right hand side, creating a variable, and then makes a copy to store in $a.
You can prove this by eliminating the copy as follows:
perlbrew exec --with=5.10.0 memusage perl -e 'print length(q[a] x 1_000_000_000)'
5.10.0
==========
1000000000
Memory usage summary: heap total: 1000150047, heap peak: 1000140886, stack peak: 4896
See, all I did was remove the intermediate variable, and the memory usage halved!
:S
Though because 5.19.5 only makes references to the original string, and copies it only when written to, it is efficient by default, so removing the intermediate variable has negligible benefit:
perlbrew exec --with=5.19.5 memusage perl -e 'print length(q[a] x 1_000_000_000)'
5.19.5
==========
1000000000
Memory usage summary: heap total: 1000154123, heap peak: 1000145146, stack peak: 5392
It could also be a Mac-imposed limitation on per-process memory to prevent processes from consuming too much system memory.
I don't know how valid this will be, but I assume a Mac, being a Unix, has Unix-like ulimits:
There are a few such memory limits, some excerpts from /etc/security/limits.conf
- core - limits the core file size (KB)
- data - max data size (KB)
- fsize - maximum filesize (KB)
- memlock - max locked-in-memory address space (KB)
- rss - max resident set size (KB)
- stack - max stack size (KB)
- as - address space limit (KB)
bash provides ways to limit and read these (somewhat); see info bash --index-search=ulimit.
For instance, ulimit -a | grep bytes emits this on my Linux machine:
data seg size (kbytes, -d) unlimited
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 8192
virtual memory (kbytes, -v) unlimited
And I can arbitrarily limit this within a scope:
$ perl -E 'my $x = "a" x 100000000;print "ok\n"'
ok
$ ulimit -v 200000
$ perl -E 'my $x = "a" x 100000000;print "ok\n"'
Out of memory!
panic: fold_constants JMPENV_PUSH returned 2 at -e line 1.
So ulimits are certainly something to look into.
I think I figured it out. I could not accept that Apple shipped a 32-bit Perl when their documentation says otherwise. From 'man perl':
64-BIT SUPPORT
Version 5.10.0 supports 64-bit execution (which is on by default). Version 5.8.8
only supports 32-bit execution.
Then I remembered that I had installed Fink on my Mac server, and that it is, er, finicky with 32-bit and 64-bit issues. So I commented out
#test -r /sw/bin/init.sh && . /sw/bin/init.sh
from my .profile. Now I can at least allocate 14 GB of RAM (yeah!) on my 32 GB RAM server:
$ perl -E 'my $x = "a" x 14000000000; print "ok\n"'
ok
I tried 16 GB but it hung for 5 minutes before I gave up. Now, a diff between perl -V for 32-bit and 64-bit tells the tale (but why still intsize=4?):
$ diff perlv.32 perlv.64
16c16
< intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
---
> intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
18c18
< ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
---
> ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
34,35c34,36
< PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP USE_ITHREADS
< USE_LARGE_FILES USE_PERLIO USE_REENTRANT_API
---
> PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP USE_64_BIT_ALL
> USE_64_BIT_INT USE_ITHREADS USE_LARGE_FILES
> USE_PERLIO USE_REENTRANT_API
Thank you all for your help,
Paul

Solaris CPU run queue

Is there a command which can tell me what's in the Solaris run queue?
I can get a count using vmstat, but I need to know what processes/threads are in there.
The run-queue is always changing, so it's almost impossible to get the set of processes in the current run-queue.
That said, you can get an approximation by looking at the STAT (state) field of the process list from ps. When running the command below:
$ ps aux
...if the STAT field begins with R, then the process is marked RUNNABLE by the kernel, which on most operating systems means that it is in the run queue. Here's what a runnable process looks like on my machine:
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
root 78179 0.0 0.0 599828 480 s003 R+ 7:51AM 0:00.00 ps aux
On Solaris, you can also use the prstat command and look at the STATE column. The value run indicates that the process is on the run queue. (Also note that the value cpuN indicates that the process is currently running on processor N.)
For example:
$ prstat -s cpu -n 5
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
13974 kincaid 888K 432K run 40 0 36:14.51 67% cpuhog/1
27354 kincaid 2216K 1928K run 31 0 314:48.51 27% server/5
14690 root 136M 46M sleep 59 0 0:00.59 2.3% Xsun/1
14797 kincaid 9192K 7496K sleep 59 0 0:00.10 0.9% dtwm/8
14851 kincaid 24M 14M sleep 48 0 0:00.03 0.3% netscape/1
Total: 97 processes, 190 lwps, load averages: 2.18, 2.15, 2.11
I was about to correct 0xfe's answer when I saw you already did it. The run queue contains threads, not processes, so the -L option is mandatory with the prstat command if you want the number of "state run" lines to more or less match the run queue. Beware that sampling artifacts will probably prevent you from getting exact matches.
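For example, something along these lines takes a single per-thread sample and roughly counts the runnable entries (a sketch; the grep pattern assumes the default STATE column formatting):
$ prstat -L -s cpu -n 20 1 1                 # one sample, one line per LWP (thread) instead of per process
$ prstat -L -n 1000 1 1 | grep -c ' run '    # rough count of threads in the run state in that sample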
In any case, if you want to know precisely which processes/threads are sitting in the run queue, you'd rather go the DTrace way, assuming you are running Solaris 10 or newer.
The whoqueue.d script, which might already be in the /usr/demo/dtrace directory on your machine, is a good start:
# dtrace -s /usr/demo/dtrace/whoqueue.d
Run queue of length 1:
24349/1 (dtrace)
Run queue of length 3:
0/0 (sched)
0/0 (sched)
0/0 (sched)
Run queue of length 4:
22468/30 (java)
22468/17 (java)
22468/23 (java)
22468/10 (java)
Have a look at this page for details.

Comprehensive methods of viewing memory usage on Solaris [closed]

On Linux, the "top" command shows a detailed but high level overview of your memory usage, showing:
Total Memory, Used Memory, Free Memory, Buffer Usage, Cache Usage, Swap size and Swap Usage.
My question is, what commands are available to show these memory usage figures in a clear and simple way? Bonus points if they're present in the "Core" install of Solaris. 'sar' doesn't count :)
Here are the basics. I'm not sure that any of these count as "clear and simple" though.
ps(1)
For process-level view:
$ ps -opid,vsz,rss,osz,args
PID VSZ RSS SZ COMMAND
1831 1776 1008 222 ps -opid,vsz,rss,osz,args
1782 3464 2504 433 -bash
$
vsz/VSZ: total virtual process size (kb)
rss/RSS: resident set size (kb, may be inaccurate(!), see man)
osz/SZ: total size in memory (pages)
To compute byte size from pages:
$ pid=1782                                     # PID of interest, e.g. taken from the ps output above
$ sz_pages=$(ps -o osz -p $pid | grep -v SZ )
$ sz_bytes=$(( $sz_pages * $(pagesize) ))
$ sz_mbytes=$(( $sz_bytes / ( 1024 * 1024 ) ))
$ echo "$pid OSZ=$sz_mbytes MB"
vmstat(1M)
$ vmstat 5 5
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr rm s3 -- -- in sy cs us sy id
0 0 0 535832 219880 1 2 0 0 0 0 0 -0 0 0 0 402 19 97 0 1 99
0 0 0 514376 203648 1 4 0 0 0 0 0 0 0 0 0 402 19 96 0 1 99
^C
prstat(1M)
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
1852 martin 4840K 3600K cpu0 59 0 0:00:00 0.3% prstat/1
1780 martin 9384K 2920K sleep 59 0 0:00:00 0.0% sshd/1
...
swap(1)
"Long listing" and "summary" modes:
$ swap -l
swapfile dev swaplo blocks free
/dev/zvol/dsk/rpool/swap 256,1 16 1048560 1048560
$ swap -s
total: 42352k bytes allocated + 20192k reserved = 62544k used, 607672k available
$
top(1)
An older version (3.51) is available on the Solaris companion CD from Sun, with the disclaimer that this is "Community (not Sun) supported".
More recent binary packages are available from sunfreeware.com or blastwave.org.
load averages: 0.02, 0.00, 0.00; up 2+12:31:38 08:53:58
31 processes: 30 sleeping, 1 on cpu
CPU states: 98.0% idle, 0.0% user, 2.0% kernel, 0.0% iowait, 0.0% swap
Memory: 1024M phys mem, 197M free mem, 512M total swap, 512M free swap
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
1898 martin 1 54 0 3336K 1808K cpu 0:00 0.96% top
7 root 11 59 0 10M 7912K sleep 0:09 0.02% svc.startd
sar(1M)
And just what's wrong with sar? :)
# echo ::memstat | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 7308 57 23%
Anon 9055 70 29%
Exec and libs 1968 15 6%
Page cache 2224 17 7%
Free (cachelist) 6470 50 20%
Free (freelist) 4641 36 15%
Total 31666 247
Physical 31256 244
"top" is usually available on Solaris.
If not, then fall back to "vmstat", which is available on most UNIX systems.
It should look something like this (from an AIX box):
vmstat
System configuration: lcpu=4 mem=12288MB ent=2.00
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
2 1 1614644 585722 0 0 1 22 104 0 808 29047 2767 12 8 77 3 0.45 22.3
The columns "avm" and "fre" tell you the active virtual memory and the free memory.
A "man vmstat" should get you the gory details.
Top can be compiled from sources or downloaded from sunfreeware.com. As previously posted, vmstat is available (I believe it's in the core install?).
The command free is nice. It takes a short while to understand the "-/+ buffers/cache" line, but the idea is that cache and buffers don't really count when evaluating how much memory is "free", since they can be dropped right away. Therefore, to see how much free (and used) memory you really have, you need to subtract the cache/buffer usage - which is conveniently done for you.
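As a worked example, the "-/+ buffers/cache" line in the Raspberry Pi question at the top is exactly this subtraction and addition:
used - buffers - cached = 478364 - 500 - 4956 = 472908 kB   (memory that is "really" used)
free + buffers + cached =  15504 + 500 + 4956 =  20960 kB   (memory available once caches are dropped)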