What exactly is "Memory Used" (versus "Memory Cached") in Ganglia?

What exactly is "Memory Used" in ganglia (versus "Memory Cached")? Does "Memory Used" refer to physical memory, virtual memory, resident memory, or shared memory?
Does it include memory used by code, data, and shared memory among tasks?
What about "Memory Cached"? Thank you.

According to Red Hat, the cached memory is memory used for the "Page Cache" in Linux. This is the result of Linux using otherwise free RAM to cache frequently used disk data in order to speed up I/O operations. This thread from the Ganglia general mailing list seems to back that up. This assumes you have Ganglia running on Linux.
In my experience, on the Ganglia graph, "Memory Used" is the chunk of physical memory used, out of the "Total In-Core Memory" available.
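As a rough illustration (assuming Linux), here is a minimal C sketch that derives a "used" and "cached" figure from /proc/meminfo, which is where these numbers ultimately come from on Linux. Whether Ganglia applies exactly this arithmetic is an assumption on my part; the field names are the real /proc/meminfo keys.

    /* Minimal Linux sketch: derive a "used" and "cached" figure from
     * /proc/meminfo. The field names are real /proc/meminfo keys; whether
     * Ganglia applies exactly this arithmetic is an assumption. */
    #include <stdio.h>
    #include <string.h>

    static long read_kb(const char *key)        /* value in kB, -1 on error */
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[128];
        long kb = -1;

        if (!f)
            return -1;
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, key, strlen(key)) == 0) {
                sscanf(line + strlen(key), "%ld", &kb);
                break;
            }
        }
        fclose(f);
        return kb;
    }

    int main(void)
    {
        long total   = read_kb("MemTotal:");
        long freemem = read_kb("MemFree:");
        long buffers = read_kb("Buffers:");
        long cached  = read_kb("Cached:");

        /* "Cached" is the page cache; "used" here is physical RAM that is
         * neither free nor held by buffers or the page cache. */
        printf("cached: %ld kB\n", cached);
        printf("used:   %ld kB\n", total - freemem - buffers - cached);
        return 0;
    }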

Can page fault occur if we don't use virtual memory?

I was reading about virtual memory and page faults. From what I understand, a page fault occurs when the page the CPU is looking for is not present in main memory. My doubt is: can a page fault occur if we don't use virtual memory?
Can page fault occur if we don't use virtual memory?
It depends on how you define "virtual memory".
In some contexts, "virtual memory" just means "using the CPU's MMU/paging". In this case, no you can't get a page fault if you don't use the CPU's MMU/paging.
In some contexts, "virtual memory" means the use of some tricks to improve things like RAM consumption (e.g. swap space, memory mapped files, "copy on write", etc). In this case it's possible to not use any tricks but still use CPU's MMU/paging (e.g. 64-bit environments on 80x86, like UEFI, where the physical memory is identity mapped so that there's no difference between virtual addresses and physical addresses). In this case it is possible to have page faults even though you're not using any tricks.

How to analyze unmanaged heap size of a .NET process

How can I analyze the unmanaged heap size of a .NET process with Windbg?
Which commands should be used in WinDbg?
!address -summary gives you an overview without focusing on individual heaps.
Usage summary contains the following:
Free: free memory which can be allocated and used
Image: memory used by EXE and DLL files
MappedFile: memory used by memory mapped files
Heap / Heap32 / Heap64: memory allocated via the heap manager
Stack / Stack32 / Stack64: memory used by the stacks of threads
TEB / TEB32 / TEB64: memory used by thread environment blocks
PEB / PEB32 / PEB64: memory used by process environment blocks (e.g. command line and environment variables)
Type summary contains:
MEM_IMAGE: should roughly correspond to Image
MEM_MAPPED: should roughly correspond to MappedFile
MEM_PRIVATE: private memory which can only be used by your application and cannot be shared
State summary:
MEM_FREE: should roughly correspond to Free
MEM_COMMIT: memory in use
MEM_RESERVE: memory which has been reserved but not yet committed, so it might be used later (see the sketch after this section)
Protect Summary should explain itself. If you're very new, it's probably not that interesting.
Largest Region by usage:
Especially important here is the free region. The largest free region determines how much memory you can get in one block. Look around for memory fragmentation to find out why this can be an issue.
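If the reserve/commit distinction is new to you, this small Windows C sketch (purely illustrative) reserves a region and commits only part of it; inspected from a debugger, the reserved range falls under MEM_RESERVE and the committed part under MEM_COMMIT.

    /* Windows sketch (illustrative): reserving address space vs. committing it.
     * In a debugger, the reserved range shows up under MEM_RESERVE and the
     * committed part under MEM_COMMIT. */
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        SIZE_T size = 16 * 1024 * 1024;                 /* 16 MiB */

        /* Reserve address space only: no RAM or pagefile backing yet. */
        char *region = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);

        /* Commit the first 64 KiB: this part now counts as memory in use. */
        char *committed = VirtualAlloc(region, 64 * 1024,
                                       MEM_COMMIT, PAGE_READWRITE);
        memset(committed, 0, 64 * 1024);                /* safe to touch */

        printf("reserved at %p, committed at %p\n", region, committed);
        getchar();                                      /* pause so you can attach WinDbg */

        VirtualFree(region, 0, MEM_RELEASE);
        return 0;
    }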
!heap -s gives you a summary of the heaps, with a focus on the individual heaps.
These are all native memory allocations done via the Windows heap manager. Direct allocations via VirtualAlloc() are not listed (e.g. MSXML and .NET).
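To illustrate that last point with a hedged sketch (not output from a real session): of the two allocations below, only the HeapAlloc() one goes through the Windows heap manager and should therefore be visible to !heap -s, while the direct VirtualAlloc() block shows up only in the !address view as private committed memory.

    /* Windows sketch (illustrative): only the HeapAlloc() block goes through
     * the heap manager and is therefore counted by !heap -s; the direct
     * VirtualAlloc() block appears only as private committed memory. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        void *from_heap = HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY,
                                    1024 * 1024);       /* via the heap manager */
        void *from_vm   = VirtualAlloc(NULL, 1024 * 1024,
                                       MEM_RESERVE | MEM_COMMIT,
                                       PAGE_READWRITE); /* straight from the VM manager */

        printf("HeapAlloc: %p, VirtualAlloc: %p\n", from_heap, from_vm);
        getchar();                                      /* pause for inspection in WinDbg */

        VirtualFree(from_vm, 0, MEM_RELEASE);
        HeapFree(GetProcessHeap(), 0, from_heap);
        return 0;
    }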
Read more about native memory management on MSDN: Managing Heap Memory and MSDN: Managing Virtual Memory

virtual memory on compute nodes in a grid

When I run a job on a node, using PBS, and I get finally in the job report:
resources_used.mem=1616024kb
resources_used.vmem=2350176kb
resources_used.walltime=00:06:32
What does the virtual memory really mean? I don't think there is a hard drive connected to each node.
Which kind of memory should I take into account when I increase the problem size, so that I don't hit the 16 GB capacity of the node's memory: the normal memory (mem) or the virtual memory (vmem)?
Thanks
The vmem indicates how much memory your job was using in total. It used all of the available physical memory (see the mem value), and more. An operating system allows programs to allocate more memory than there is physical memory available.
If you're actively using more memory than there is physical memory available, you'll start seeing swap activity (data that was swapped out to disk being brought back into memory, and other data being written out to disk). That's bad; it will basically kill your performance if it happens a lot.
So, as long as you're not actively using more than 16GB, you're fine. But the mem or vmem values won't tell you this, it depends on what the application is actually doing.
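If you want to see the difference yourself on a node, here is a minimal Linux C sketch (illustrative; whether your PBS build reads exactly these counters is an assumption): allocating address space grows the virtual size, but only the pages you actually touch become resident and occupy physical RAM.

    /* Linux sketch (illustrative): vmem vs. mem. Allocating address space
     * grows the virtual size (VmSize); only pages you actually touch become
     * resident (VmRSS) and occupy the node's physical RAM. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void show(const char *label)
    {
        FILE *f = fopen("/proc/self/status", "r");
        char line[128];

        printf("--- %s ---\n", label);
        while (f && fgets(line, sizeof line, f))
            if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);
        if (f)
            fclose(f);
    }

    int main(void)
    {
        size_t len = 512UL * 1024 * 1024;       /* claim 512 MiB of address space */
        char *buf = malloc(len);

        if (!buf)
            return 1;
        show("after malloc: VmSize grows, VmRSS barely moves");
        memset(buf, 1, len / 8);                /* touch only 64 MiB of it */
        show("after touching 1/8th: VmRSS grows by roughly 64 MiB");

        free(buf);
        return 0;
    }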

Where does memory dynamically allocated reside?

We know that malloc() and the new operator allocate memory from the heap dynamically, but where does the heap reside? Does each process have its own private heap in its address space for dynamic allocation, or does the OS have a global one shared by all processes? What's more, I read in a textbook that once a memory leak occurs, the leaked memory cannot be reused until the next time we restart the computer. Is this claim right? If so, how can we explain it?
Thanks for your reply.
Regards.
The memory is allocated from the user address space of your process's virtual memory. All of it is reclaimed by the OS when the process terminates; there is no need to restart the computer.
Typically the C runtime will use the various OS APIs to allocate memory that becomes part of its process address space. Then, within that allocated memory, it will create a heap and hand out pieces of it via calls to malloc or new.
The reason for this is that OS APIs are often coarse-grained and require you to allocate memory in large chunks (such as a page size), whereas your application typically wants to allocate small amounts of memory at any one time.
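A tiny C sketch to illustrate both points (purely illustrative): the block below comes from the process's own heap, its address is a virtual address private to that process, and even though it is never freed, the OS reclaims it the moment the process exits.

    /* Sketch: the heap lives inside the process's own virtual address space.
     * The block below is deliberately never freed ("leaked"); when the
     * process exits, the OS tears the whole address space down and the RAM
     * is available again immediately -- no reboot required. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int on_stack = 0;
        void *leaked = malloc(1024 * 1024);     /* 1 MiB from the heap, never freed */

        /* Both are virtual addresses private to this process; another process
         * could print the same numbers for completely different memory. */
        printf("heap block at      %p\n", leaked);
        printf("stack variable at  %p\n", (void *)&on_stack);
        return 0;                               /* OS reclaims everything here */
    }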
You don't mention which OS you're interested in, so there is no single direct answer.
Try looking into a book about operating systems, e.g. Tanenbaum's.

Total memory consumption of the system

Is it correct to assume that the total memory consumption (virtual + physical) of a system is the sum of the "Memory Usage" and "VM Size" columns shown by Task Manager in Windows?
Read these posts by Mark Russinovich:
http://blogs.technet.com/markrussinovich/archive/2008/07/21/3092070.aspx
http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
In modern Windows there really is no single truth about "Total Memory Consumption". It depends of course on the definition, but the real question is what you want to do with the answer.
Some processes, like SQL Server, tend to use every byte of memory they can get their hands on, if you let them. The .NET CLR garbage collector monitors memory use and acts accordingly, trying to free more memory when it gets scarce.
So for instance you can have a system with 8 GB of physical memory, of which 90% is "used". How much of that memory is actually needed is very hard to say. The same system may run on a 4 GB machine with no noticeable performance loss or any other issues.
If you want to explore some of the complexities of memory management under Windows, download "VMMap v2.0" from the former sysinternals site. It shows very detailed memory usage per process and may aid you in your quest.
To quote from VMMap's help:
VMMap categorizes memory into one of several types:
Image
The memory represents an executable file such as a .exe or .dll. The Details column shows the file's path.
Private
Private memory cannot be shared with other processes, is charged against the system commit limit, and typically contains application data.
Shareable
Shareable memory can be shared with other processes, is charged against the system commit limit and typically contains data shared between DLLs in different processes or inter-process communication messages. The Windows APIs refer to this type of memory as pagefile-backed sections.
Mapped File
The memory represents a file on disk and the Details column shows the file's path. Mapped files typically contain application data.
Heap
Heaps represent memory managed by the user-mode heap manager and, like Private memory, are charged against the system commit limit and contain application data.
Managed Heap
Managed heap represents memory that's allocated and used by the .NET garbage collector.
Stack
Stacks are memory used to store function parameters, local function variables and function invocation records for individual threads. Stacks are charged against the commit limit and typically grow on demand.
System
System memory is kernel-mode physical memory associated with the process. The vast majority of System memory consists of the process page tables.
Free
Free memory regions are spaces in the process address space that are not allocated.
Now you just need to define which types of memory you consider "used", add those up for all processes, remove the duplicates, and look at the number... There is a reason why, in Task Manager or other tools, there is no single number labeled "Total Memory Consumption" :-)
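If you want to pull a few of these numbers programmatically instead of reading them off Task Manager or VMMap, here is a hedged C sketch using the documented GlobalMemoryStatusEx() and GetProcessMemoryInfo() APIs. Treat it as one possible measurement, not as the single "total memory consumption" figure.

    /* Windows sketch (one possible measurement, not a definition of "total
     * memory consumption"): system-wide physical memory and commit charge via
     * GlobalMemoryStatusEx(), plus this process's working set and private
     * bytes via GetProcessMemoryInfo(). Link with psapi.lib. */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX sys;
        PROCESS_MEMORY_COUNTERS_EX proc;

        sys.dwLength = sizeof sys;
        proc.cb = sizeof proc;

        GlobalMemoryStatusEx(&sys);
        GetProcessMemoryInfo(GetCurrentProcess(),
                             (PROCESS_MEMORY_COUNTERS *)&proc, sizeof proc);

        printf("physical RAM in use : %llu MB\n",
               (sys.ullTotalPhys - sys.ullAvailPhys) / (1024 * 1024));
        printf("commit charge       : %llu MB\n",
               (sys.ullTotalPageFile - sys.ullAvailPageFile) / (1024 * 1024));
        printf("this process WS     : %llu MB\n",
               (unsigned long long)proc.WorkingSetSize / (1024 * 1024));
        printf("this process private: %llu MB\n",
               (unsigned long long)proc.PrivateUsage / (1024 * 1024));
        return 0;
    }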
No, physical memory and virtual memory may overlap. If a page is in virtual memory and is then paged into physical memory, the virtual memory is not necessarily freed; it may be reserved for when the page gets paged out again.