Does mmap allocate memory in the heap? - mmap

I was reading about mmap on Wikipedia and trying out this example: http://en.wikipedia.org/wiki/Mmap#Example_of_usage. I compiled the program with gcc and ran valgrind over it.
Here is valgrind output:
# valgrind a.out
==7018== Memcheck, a memory error detector
==7018== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==7018== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==7018== Command: a.out
==7018==
PID 7018: anonymous string 1, zero-backed string 1
PID 7019: anonymous string 1, zero-backed string 1
PID 7018: anonymous string 2, zero-backed string 2
==7018==
==7018== HEAP SUMMARY:
==7018== in use at exit: 0 bytes in 0 blocks
==7018== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==7018==
==7018== All heap blocks were freed -- no leaks are possible
==7018==
==7018== For counts of detected and suppressed errors, rerun with: -v
==7018== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)
PID 7019: anonymous string 2, zero-backed string 2
==7019==
==7019== HEAP SUMMARY:
==7019== in use at exit: 0 bytes in 0 blocks
==7019== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==7019==
==7019== All heap blocks were freed -- no leaks are possible
==7019==
==7019== For counts of detected and suppressed errors, rerun with: -v
==7019== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)
My question is:
Does mmap allocate memory on the heap? If not, what does munmap do?

On a Unix-like system, your program's address space consists of one or more virtual memory regions, each of which is mapped by the OS to physical memory, to a file, or to nothing at all.
The heap is, generally speaking, one specific memory region created by the C runtime, and managed by malloc (which in turn uses the brk and sbrk system calls to grow and shrink).
mmap is a way of creating new memory regions, independently of malloc (and so independently of the heap). munmap is simply its inverse: it releases these regions.
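To make the distinction concrete, here is a minimal sketch of my own (assuming a Linux-like system; older BSDs spell MAP_ANONYMOUS as MAP_ANON) that compares a heap allocation with an anonymous mapping and then releases the mapping with munmap:

/* regions.c: a malloc'd block lives in the heap (grown via brk/sbrk),
 * while an anonymous mmap gets its own region of the address space. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    void *heap_block = malloc(64);   /* allocated inside the heap      */
    void *brk_now    = sbrk(0);      /* current top of the heap        */

    /* An anonymous private mapping: memory backed by no file at all.  */
    size_t len = 4096;
    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(region, "hello from an mmap'd region");
    printf("malloc block : %p\n", heap_block);
    printf("program break: %p\n", brk_now);
    printf("mmap region  : %p (%s)\n", (void *)region, region);

    munmap(region, len);   /* the inverse of mmap: release the region  */
    free(heap_block);
    return 0;
}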

mmapped memory is neither heap nor stack. It is mapped into the virtual address space of the calling process, but it is not allocated on the heap.

Related

Why is a memory leak not reported when perl is compiled with `DEBUG_LEAKING_SCALARS`?

I compiled perl with DEBUG_LEAKING_SCALARS as described here.
CASE 1
I followed this DOC to test memory leak reporting:
env PERL_DESTRUCT_LEVEL=2 valgrind perl -e '@x; $x[0]=\@x'
==7216== Memcheck, a memory error detector
==7216== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==7216== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==7216== Command: perl -e @x;\ $x[0]=\\@x
==7216==
==7216==
==7216== HEAP SUMMARY:
==7216== in use at exit: 0 bytes in 0 blocks
==7216== total heap usage: 1,310 allocs, 1,310 frees, 171,397 bytes allocated
==7216==
==7216== All heap blocks were freed -- no leaks are possible
==7216==
==7216== For counts of detected and suppressed errors, rerun with: -v
==7216== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Nothing is reported.
CASE 2
I even do this in my XS sub. Exactly:
#define PERL_NO_GET_CONTEXT
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include "XSUtils.h"
#include "ppport.h"
void
call_perl() {
    SV *sv;

    sv = sv_2mortal( newSVpv( "XS::Utils::hello", 0 ) );
    newSViv( 323 ); //<<<< SHOULD LEAK
    printf( "Hi 3\n" );

    ENTERSCOPE;
    CALLPERL( sv , G_DISCARD|G_NOARGS );
    LEAVESCOPE;
}
MODULE = XS::Utils PACKAGE = XS::Utils

void
test()
    CODE:
        call_perl();
Link to the REPO
$ env PERL_DESTRUCT_LEVEL=2 valgrind perl -Iblib/arch/ -Iblib/lib -MXS::Utils -e 'XS::Utils::test()'
==7308== Memcheck, a memory error detector
==7308== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==7308== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==7308== Command: perl -Iblib/arch/ -Iblib/lib -MXS::Utils -e XS::Utils::test()
==7308==
Hi 3
Hello
==7308==
==7308== HEAP SUMMARY:
==7308== in use at exit: 1,502 bytes in 5 blocks
==7308== total heap usage: 12,876 allocs, 12,871 frees, 1,945,298 bytes allocated
==7308==
==7308== LEAK SUMMARY:
==7308== definitely lost: 0 bytes in 0 blocks
==7308== indirectly lost: 0 bytes in 0 blocks
==7308== possibly lost: 0 bytes in 0 blocks
==7308== still reachable: 1,502 bytes in 5 blocks
==7308== suppressed: 0 bytes in 0 blocks
==7308== Rerun with --leak-check=full to see details of leaked memory
==7308==
==7308== For counts of detected and suppressed errors, rerun with: -v
==7308== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Nothing is reported either.
CASE 3
I fixed the Devel::LeakTrace module (the FIX):
$ perl -MDevel::LeakTrace -Iblib/arch/ -Iblib/lib -MXS::Utils -e 'XS::Utils::test()'
Hi 3
Hello
Nothing is reported either.
CASE 4
I found that only Test::LeakTrace does its job:
$ perl -MTest::LeakTrace::Script=-verbose -Iblib/arch/ -Iblib/lib -MXS::Utils -e 'XS::Utils::test()'
Hi 3
Hello
leaked SCALAR(0x208e1c0) from -e line 1.
ALLOCATED at -e:1 by entersub (parent 0x0); serial 9642
SV = IV(0x208e1b0) at 0x208e1c0
REFCNT = 1
FLAGS = (IOK,pIOK)
IV = 323
Why does perl's built-in tool report nothing about the leak?
What did I do wrong? How do I debug leaking memory with the DEBUG_LEAKING_SCALARS tool?
Actually not an answer, but from Dave Mitchell:
The main purpose of DEBUG_LEAKING_SCALARS isn't to list leaked scalars
(!!)
It's to help in tracking down things generally related to leaked scalars
and refcount problems. Its two main features are that it turns SV
allocation from being a macro into being a function, so that you can easily
attach a breakpoint, and that it adds instrumentation to each SV showing
where it was allocated (as displayed by Devel::Peek).
But then I would not know what to debug, because I do not know that something is leaking, as in CASEs 1 to 3 described above. I was sure that:
newSViv( 323 );
did not leak.
So DEBUG_LEAKING_SCALARS should list leaked scalars.
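For what it's worth, here is a minimal sketch (based on the usual perlguts reference-counting rules, nothing specific to DEBUG_LEAKING_SCALARS) of why that line leaks, with the two conventional fixes:

/* Sketch only: newSViv() returns a fresh SV with a reference count
 * of 1; if nothing ever takes ownership or drops that count, the SV
 * can never be freed. */
SV *leaky  = newSViv(323);              /* refcount 1, no owner: leaks */

SV *mortal = sv_2mortal(newSViv(323));  /* freed at the enclosing      */
                                        /* FREETMPS/LEAVE              */

SV *manual = newSViv(323);
SvREFCNT_dec(manual);                   /* or drop the count yourself  */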
Also, I have found this comment in the perl commit history:
-[ 24088] By: davem on 2005/03/28 21:38:44
- Log: expand -DDEBUG_LEAKING_SCALARS to instrument the creation of each SV
To my mind, this will be very useful for that task.

What uses the memory on a Raspberry Pi?

On my Pi there is no free memory right after startup, but I cannot find what uses it:
pi@node1 ~ $ cat /proc/cpuinfo
processor : 0
model name : ARMv6-compatible processor rev 7 (v6l)
BogoMIPS : 2.00
Features : half thumb fastmult vfp edsp java tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xb76
CPU revision : 7
Hardware : BCM2708
Revision : 0013
Serial : 00000000bf2e5e5c
pi@node1 ~ $ uname -a
Linux node1 4.0.7+ #801 PREEMPT Tue Jun 30 18:15:24 BST 2015 armv6l GNU/Linux
pi@node1 ~ $ head -n1 /etc/issue
Raspbian GNU/Linux 7 \n \l
pi@node1 ~ $ grep MemTotal /proc/meminfo
MemTotal: 493868 kB
pi@node1 ~ $ grep "model name" /proc/cpuinfo
model name : ARMv6-compatible processor rev 7 (v6l)
pi@node1 ~ $ ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -5
0.6 0.2 6244 2377 -bash
0.3 0.0 6748 2458 sort -k 1 -nr
0.3 0.0 4140 2457 ps -eo pmem,pcpu,vsize,pid,cmd
0.2 0.1 9484 2376 sshd: pi@pts/0
0.2 0.1 5600 2236 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 104:107
pi@node1 ~ $ free
total used free shared buffers cached
Mem: 493868 478364 15504 0 500 4956
-/+ buffers/cache: 472908 20960
Swap: 102396 116 102280
I am not a Linux expert, but if I understand it right, there are just 15 MB of free memory, yet no task uses more than 0.6%. Then why isn't there more free?
Memory is not exclusively allocated by processes.
- The bootloader and the initial RAM filesystem are stored in memory.
- The kernel (which can be very big) is loaded into memory.
- The kernel reserves memory for its processes; ps shows 0.0% for these system processes.
- Drivers allocate buffer memory.
- The graphics card needs memory.
- If you have not configured your swap space on a hard drive or SD card, it uses memory.
- The network system allocates memory for unix sockets and shared memory.
- 100 processes at 0.1% each are already 10%.
And if you start a process and then stop it, not all of its memory will be released.
Try it. Show the memory usage with free. Start a process that needs some memory. Stop the process and use free again. I would bet that there is more memory in use than before.
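If you need a throwaway process for that experiment, a minimal C sketch will do (the 100 MB size and the 60-second sleep are arbitrary values of mine):

/* eat.c: allocate and touch ~100 MB, then wait so that free can be
 * run in another terminal before the memory is released. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    size_t len = 100 * 1024 * 1024;
    char *p = malloc(len);
    if (p == NULL) { perror("malloc"); return 1; }
    memset(p, 1, len);   /* touch the pages so they really count */
    puts("allocated; run free in another terminal");
    sleep(60);
    free(p);
    return 0;
}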
Edit
Here is an example of a Pi with less memory usage. I have no problems running Java on it. I have a WLAN dongle and an original NoIR camera installed.
I installed Raspbian Wheezy and used a kernel that I compiled from source:
> uname -a
Linux raspberrypi 3.18.14+ #2 PREEMPT Sun May 31 20:19:04 UTC 2015 armv6l GNU/Linux
> head -n1 /etc/issue
Raspbian GNU/Linux 7 \n \l
On this Pi I can run java -version in an acceptable amount of time:
time java -version
java version "1.8.0"
Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) Client VM (build 25.0-b70, mixed mode)
real 0m1.012s
user 0m0.800s
sys 0m0.190s
Here is my memory footprint:
> free
total used free shared buffers cached
Mem: 380816 138304 242512 0 8916 96728
-/+ buffers/cache: 32660 348156
Swap: 102396 0 102396

What do these Windbg error messages mean?

I'm trying to do a !heap -s in Windbg to get heap information. When I attempt it I get the following output:
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
00000000005d0000 08000002 512 28 512 10 3 1 0 0
Error: Heap 0000000000000000 has an invalid signature eeffeeff
Front-end heap type info is not available
Front-end heap type info is not available
Virtual block: 0000000000000000 - 0000000000000000 (size 0000000000000000)
HEAP 0000000000000000 (Seg 0000000000000000) At 0000000000000000 Error: Unable to read virtual block
0000000000000000 00000000 0 0 0 0 0 0 1 0
-----------------------------------------------------------------------------
I can't find any reference as to what the unusual error/not available lines mean.
Can someone please give me a summary as to why I'm not getting an expected list of heaps?
The only thing I execute prior to !heap -s is !wow64exts.sw, because the process dumps are from a 32-bit process but were created by a 64-bit Task Manager.
After testing with the 32- and 64-bit Task Managers, it appears that process dumps of 32-bit processes created by the 64-bit Task Manager can only be debugged successfully in some areas, by using !wow64exts.sw in Windbg to switch to 32-bit debugging.
That extension allows call stacks to be reviewed correctly, but !heap -s does not appear to work correctly under it; instead you end up with the errors in the question.
For example, the output from a process dump of the 32-bit process using the 32-bit Task Manager:
0:000> !heap -s
NtGlobalFlag enables following debugging aids for new heaps:
stack back traces
LFH Key : 0x06b058a2
Termination on corruption : DISABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
031b0000 08000002 1024 236 1024 2 13 1 0 0 LFH
001d0000 08001002 1088 188 1088 18 9 2 0 0 LFH
01e30000 08001002 1088 160 1088 4 3 2 0 0 LFH
03930000 08001002 256 4 256 2 1 1 0 0
038a0000 08001002 64 16 64 13 1 1 0 0
-----------------------------------------------------------------------------
The output from a process dump of the 32-bit process using the 64-bit Task Manager without !wow64exts.sw:
0:000> !heap -s
NtGlobalFlag enables following debugging aids for new heaps:
stack back traces
LFH Key : 0x000000b406b058a2
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
0000000001f70000 08000002 512 28 512 10 3 1 0 0
0000000000020000 08008000 64 4 64 1 1 1 0 0
-------------------------------------------------------------------------------------
The output from a process dump of the 32-bit process using the 64-bit Task Manager with !wow64exts.sw:
0:000> !wow64exts.sw
Switched to 32bit mode
0:000:x86> !heap -s
NtGlobalFlag enables following debugging aids for new heaps:
stack back traces
LFH Key : 0x000000b406b058a2
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
0000000001f70000 08000002 512 28 512 10 3 1 0 0
Error: Heap 0000000000000000 has an invalid signature eeffeeff
Front-end heap type info is not available
Front-end heap type info is not available
Virtual block: 0000000000000000 - 0000000000000000 (size 0000000000000000)
HEAP 0000000000000000 (Seg 0000000000000000) At 0000000000000000 Error: Unable to read virtual block
0000000000000000 00000000 0 0 0 0 0 0 1 0
-----------------------------------------------------------------------------
Those were all taken from the same process.

Perl cannot allocate more than 1.1 GB on a Snow Leopard Mac server with 32 GB RAM

I have a Mac server (Snow Leopard) with 32 GB of RAM. When I try to allocate more than 1.1 GB of RAM in Perl (v5.10.0), I get an out-of-memory error. Here is the script that I used:
#!/usr/bin/env perl
# My snow leopard MAC server runs out of memory at >1.1 billion bytes. How
# can this be when I have 32 GB of memory installed? Allocating up to 4
# billion bytes works on a Dell Win7 12GB RAM machine.
# perl -v
# This is perl, v5.10.0 built for darwin-thread-multi-2level
# (with 2 registered patches, see perl -V for more detail)
use strict;
use warnings;
my $s;
print "Trying 1.1 GB...";
$s = "a" x 1100000000; # ok
print "OK\n\n";
print "Trying 1.2 GB...";
$s = '';
$s = "a" x 1200000000; # fails
print "..OK\n";
Here is the output that I get:
Trying 1.1 GB...OK
perl(96685) malloc: *** mmap(size=1200001024) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Out of memory!
Trying 1.2 GB...
Any ideas why this is happening?
UPDATE 4:42pm 11/14/13
As per Kent Fredric (see two posts below), here are my ulimits. Virtual memory defaults to unlimited:
$ ulimit -a | grep bytes
data seg size (kbytes, -d) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
virtual memory (kbytes, -v) unlimited
$ perl -E 'my $x = "a" x 1200000000; print "ok\n"'
perl(23074) malloc: *** mmap(size=1200001024) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Out of memory!
$ perl -E 'my $x = "a" x 1100000000; print "ok\n"'
ok
I tried setting virtual memory to 10 billion but to no avail.
$ ulimit -v 10000000000 # 10 billion
$ perl -E 'my $x = "a" x 1200000000; print "ok\n"'
perl(24275) malloc: *** mmap(size=1200001024) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Out of memory!
You're using a 32-bit build of Perl (as shown by perl -V:ptrsize), but you need a 64-bit build. I recommend installing a local perl using perlbrew.
This can be achieved by passing -Duse64bitall to Configure when installing Perl, or by passing --64all to perlbrew install.
(For some odd reason, perl -V:use64bitall says this was done, but it clearly wasn't.)
Seems like this could be related to the problem. This is really only worthy of a comment, but it's too complex to post as one without it becoming entirely illegible.
perlbrew exec --with=5.10.0 memusage perl -e '$x = q[a] x 1_000_000_000; print length($x)'
5.10.0
==========
1000000000
Memory usage summary: heap total: 2000150514, heap peak: 2000141265, stack peak: 4896
Yes, that's 2 GB of memory for 1 GB of text.
Now with 2 GB ...
perlbrew exec --with=5.10.0 memusage perl -e '$x = q[a] x 1_000_000_000; $y = q[a] x 1_000_000_000; print length($x)+length($y)'
5.10.0
==========
2000000000
Memory usage summary: heap total: 4000151605, heap peak: 4000142092, stack peak: 4896
Yikes. That would certainly hit the 32-bit limit if you had one.
I was spoiled, doing my testing on 5.19.5, which has a notable improvement, namely copy-on-write strings, which greatly reduces memory consumption:
perlbrew exec --with=5.19.5 memusage perl -e '$x = q[a] x 1_000_000_000; $y = q[a] x 1_000_000_000; print length($x)+length($y)'
5.19.5
==========
2000000000
Memory usage summary: heap total: 2000157713, heap peak: 2000150396, stack peak: 5392
So either way, if you're using any version of Perl other than a development one, you should expect it to eat twice the memory you need.
If there is, for some reason, a memory limit around the 2 GB mark for 32-bit processes, then you will hit it with a 1 GB string.
Why does Copy On Write matter?
Well, when you do
$a = $b
$a is a copy of $b
So when you do
$a = "a" x 1_000_000_000
First, it expands the right-hand side, creating a value, and then makes a copy to store in $a.
You can prove this by eliminating the copy as follows:
perlbrew exec --with=5.10.0 memusage perl -e 'print length(q[a] x 1_000_000_000)'
5.10.0
==========
1000000000
Memory usage summary: heap total: 1000150047, heap peak: 1000140886, stack peak: 4896
See, all I did was remove the intermediate variable, and the memory usage halved!
:S
Though, because 5.19.5 only makes references to the original string and copies it when written to, it is efficient by default, so removing the intermediate variable has negligible benefit:
perlbrew exec --with=5.19.5 memusage perl -e 'print length(q[a] x 1_000_000_000)'
5.19.5
==========
1000000000
Memory usage summary: heap total: 1000154123, heap peak: 1000145146, stack peak: 5392
It could also be a Mac-imposed limitation on per-process memory, to prevent processes from consuming too much system memory.
I don't know how valid this will be, but I assume the Mac, being a Unix, has Unix-like ulimits:
There are a few such memory limits; some excerpts from /etc/security/limits.conf:
- core - limits the core file size (KB)
- data - max data size (KB)
- fsize - maximum filesize (KB)
- memlock - max locked-in-memory address space (KB)
- rss - max resident set size (KB)
- stack - max stack size (KB)
- as - address space limit (KB)
bash provides ways to limit and read these (somewhat); see info bash --index-search=ulimit.
For instance, ulimit -a | grep bytes emits this on my Linux machine:
data seg size (kbytes, -d) unlimited
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 8192
virtual memory (kbytes, -v) unlimited
And I can arbitrarily limit this within a scope:
$ perl -E 'my $x = "a" x 100000000;print "ok\n"'
ok
$ ulimit -v 200000
$ perl -E 'my $x = "a" x 100000000;print "ok\n"'
Out of memory!
panic: fold_constants JMPENV_PUSH returned 2 at -e line 1.
So ulimits are certainly something to look into.
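On the C level these are the same limits getrlimit(2) reports; here is a minimal sketch (plain POSIX, so it should behave the same on Linux and Mac OS X) for printing the ones most relevant here:

/* rlim.c: print the address-space, data and stack limits that
 * ulimit manipulates, via the POSIX getrlimit(2) interface. */
#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int resource) {
    struct rlimit rl;
    if (getrlimit(resource, &rl) != 0) { perror(name); return; }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("%-12s unlimited\n", name);
    else
        printf("%-12s %llu bytes\n", name,
               (unsigned long long)rl.rlim_cur);
}

int main(void) {
    show("RLIMIT_AS",    RLIMIT_AS);    /* virtual memory (ulimit -v) */
    show("RLIMIT_DATA",  RLIMIT_DATA);  /* data seg size  (ulimit -d) */
    show("RLIMIT_STACK", RLIMIT_STACK); /* stack size     (ulimit -s) */
    return 0;
}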
I think I figured it out. I could not accept that Apple had shipped a 32-bit Perl when their documentation says otherwise. From man perl:
64-BIT SUPPORT
Version 5.10.0 supports 64-bit execution (which is on by default). Version 5.8.8
only supports 32-bit execution.
Then I remembered that I had installed Fink on my Mac server, and that it is, er, finicky with 32- and 64-bit issues. So I commented out
#test -r /sw/bin/init.sh && . /sw/bin/init.sh
from my .profile. Now I can allocate at least 14 GB of RAM (yeah!) on my 32 GB RAM server:
$ perl -E 'my $x = "a" x 14000000000; print "ok\n"'
ok
I tried 16 GB, but it hung for 5 minutes before I gave up. Now, a diff between perl -V for 32-bit and 64-bit tells the tale (but why still intsize=4?):
$ diff perlv.32 perlv.64
16c16
< intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234
---
> intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
18c18
< ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
---
> ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
34,35c34,36
< PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP USE_ITHREADS
< USE_LARGE_FILES USE_PERLIO USE_REENTRANT_API
---
> PERL_IMPLICIT_CONTEXT PERL_MALLOC_WRAP USE_64_BIT_ALL
> USE_64_BIT_INT USE_ITHREADS USE_LARGE_FILES
> USE_PERLIO USE_REENTRANT_API
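(As an aside, the lingering intsize=4 is expected: 64-bit Unix systems, Darwin included, typically use the LP64 data model, in which int stays 32 bits while long and pointers widen to 64 bits. A quick C check:)

/* lp64.c: on an LP64 build these print 4, 8, 8 -- matching the
 * intsize=4, longsize=8, ptrsize=8 line in the perl -V diff above. */
#include <stdio.h>

int main(void) {
    printf("int    : %zu bytes\n", sizeof(int));
    printf("long   : %zu bytes\n", sizeof(long));
    printf("void * : %zu bytes\n", sizeof(void *));
    return 0;
}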
Thank you all for your help,
Paul

Comprehensive methods of viewing memory usage on Solaris [closed]

On Linux, the "top" command shows a detailed but high level overview of your memory usage, showing:
Total Memory, Used Memory, Free Memory, Buffer Usage, Cache Usage, Swap size and Swap Usage.
My question is, what commands are available to show these memory usage figures in a clear and simple way? Bonus points if they're present in the "Core" install of Solaris. 'sar' doesn't count :)
Here are the basics. I'm not sure that any of these count as "clear and simple", though.
ps(1)
For a process-level view:
$ ps -opid,vsz,rss,osz,args
PID VSZ RSS SZ COMMAND
1831 1776 1008 222 ps -opid,vsz,rss,osz,args
1782 3464 2504 433 -bash
$
vsz/VSZ: total virtual process size (kb)
rss/RSS: resident set size (kb, may be inaccurate(!), see man)
osz/SZ: total size in memory (pages)
To compute byte size from pages:
$ sz_pages=$(ps -o osz -p $pid | grep -v SZ )
$ sz_bytes=$(( $sz_pages * $(pagesize) ))
$ sz_mbytes=$(( $sz_bytes / ( 1024 * 1024 ) ))
$ echo "$pid OSZ=$sz_mbytes MB"
vmstat(1M)
$ vmstat 5 5
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr rm s3 -- -- in sy cs us sy id
0 0 0 535832 219880 1 2 0 0 0 0 0 -0 0 0 0 402 19 97 0 1 99
0 0 0 514376 203648 1 4 0 0 0 0 0 0 0 0 0 402 19 96 0 1 99
^C
prstat(1M)
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
1852 martin 4840K 3600K cpu0 59 0 0:00:00 0.3% prstat/1
1780 martin 9384K 2920K sleep 59 0 0:00:00 0.0% sshd/1
...
swap(1)
"Long listing" and "summary" modes:
$ swap -l
swapfile dev swaplo blocks free
/dev/zvol/dsk/rpool/swap 256,1 16 1048560 1048560
$ swap -s
total: 42352k bytes allocated + 20192k reserved = 62544k used, 607672k available
$
top(1)
An older version (3.51) is available on the Solaris companion CD from Sun, with the disclaimer that this is "Community (not Sun) supported".
More recent binary packages are available from sunfreeware.com or blastwave.org.
load averages: 0.02, 0.00, 0.00; up 2+12:31:38 08:53:58
31 processes: 30 sleeping, 1 on cpu
CPU states: 98.0% idle, 0.0% user, 2.0% kernel, 0.0% iowait, 0.0% swap
Memory: 1024M phys mem, 197M free mem, 512M total swap, 512M free swap
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
1898 martin 1 54 0 3336K 1808K cpu 0:00 0.96% top
7 root 11 59 0 10M 7912K sleep 0:09 0.02% svc.startd
sar(1M)
And just what's wrong with sar? :)
mdb(1)
# echo ::memstat | mdb -k
Page Summary Pages MB %Tot
------------ ---------------- ---------------- ----
Kernel 7308 57 23%
Anon 9055 70 29%
Exec and libs 1968 15 6%
Page cache 2224 17 7%
Free (cachelist) 6470 50 20%
Free (freelist) 4641 36 15%
Total 31666 247
Physical 31256 244
"top" is usually available on Solaris.
If not then revert to "vmstat" which is available on most UNIX system.
It should look something like this (from an AIX box)
vmstat
System configuration: lcpu=4 mem=12288MB ent=2.00
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
2 1 1614644 585722 0 0 1 22 104 0 808 29047 2767 12 8 77 3 0.45 22.3
the colums "avm" and "fre" tell you the total memory and free memery.
a "man vmstat" should get you the gory details.
Top can be compiled from source or downloaded from sunfreeware.com. As previously posted, vmstat is available (I believe it's in the core install?).
The command free is nice. It takes a short while to understand the "+/- buffers/cache" line, but the idea is that cache and buffers don't really count when evaluating how much memory is "free", since they can be dropped right away. Therefore, to see how much free (and used) memory you really have, you need to subtract the cache/buffer usage, which is conveniently done for you.