Why is a program executed in memory (RAM), not on the hard disk? - operating-system

While studying computer architecture and system programming, a question came up.
A program sits on an SSD or hard disk, but when it is executed, it is loaded into memory (RAM). Why is the program not executed directly from the hard disk? Why does it need to be loaded into RAM?
Thanks

This is done simply because your RAM is way faster than your hard disk.
When your computer executes a program, the CPU reads the instructions from memory one after another and executes them. The CPU itself cannot store the whole program while executing it, so the program has to be read from somewhere else. If the CPU had to fetch every instruction from a hard disk, it would be painfully slow.
Now that we have SSDs this is becoming somewhat less relevant, but in the old days the difference between RAM ("Random Access Memory") and an HDD ("Hard Disk Drive") was that RAM could access any memory address at any point in time, hence "Random Access". The HDD would have to rotate its platters and move the read head to wherever your data is stored before it could read from a given address. Accessing random addresses is very slow for an HDD.
When the CPU executes a program, it has to jump around all the time. It also has to store the program's working data somewhere and access it as quickly as possible whenever needed. An HDD is very bad at both of those things; RAM is very good at them.
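A rough way to see the gap yourself is a micro-benchmark like the sketch below (the file name testfile and the 64 MB size are placeholders I chose; the numbers depend entirely on your hardware, and a file already in the OS cache will measure RAM, not disk):
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define N (64 * 1024 * 1024) /* 64 MB */

    static double now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        /* Pass 1: sum a buffer that is already sitting in RAM. */
        unsigned char *buf = malloc(N);
        if (!buf) return 1;
        memset(buf, 1, N); /* fault the pages in first */
        double t0 = now();
        unsigned long sum = 0;
        for (size_t i = 0; i < N; i++) sum += buf[i];
        printf("RAM:  %.3f s (sum=%lu)\n", now() - t0, sum);

        /* Pass 2: read the same amount from disk. "testfile" must exist
           and be at least 64 MB. */
        FILE *f = fopen("testfile", "rb");
        if (!f) { perror("testfile"); return 1; }
        t0 = now();
        size_t got = fread(buf, 1, N, f);
        printf("disk: %.3f s (%zu bytes)\n", now() - t0, got);

        fclose(f);
        free(buf);
        return 0;
    }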
So why did we use HDDs at all? Because RAM
is way too expensive
does not persist data when turned off
And what about SSDs? They are a lot better at random access than HDDs, but they're still considerably slower than RAM.
Also, you have to take swap files into account. The computer can use some of your HDD or SSD storage as system memory if it needs to. This can be very useful if the data that's using up your RAM does not get accessed by the CPU very often.

Related

If an application program runs in RAM, where does the operating system run?

I have read that the operating system is loaded into main memory (RAM) when the computer boots up. Application programs are also loaded into main memory (RAM) for execution. How do both of these run simultaneously in main memory? Does the operating system stop executing when an application program is running?
I don't know of a good overview of these areas, so I'll try to help.
Memory (RAM) can be visualized as a collection of lockers. Each locker can store something independently of all other lockers. Each locker has a number, so you can find a particular locker easily. In RAM, the locker is a byte that can store a value between zero and 255, and the locker number is an address. Better than a locker: you can open the byte at address zero, then the byte at address 1000000, instantly. You don't have to walk down a long hallway. That is what the R in RAM refers to: Random, as in Random Access Memory. Essentially every location takes the same amount of time to access.
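As a tiny sketch of that property (the sizes are arbitrary; the addresses are just array indices):
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Two million one-byte "lockers". */
        unsigned char *ram = calloc(2000000, 1);
        if (!ram) return 1;
        ram[0] = 42;        /* open locker 0 */
        ram[1000000] = 7;   /* open locker 1000000: same cost, no hallway */
        printf("%d %d\n", ram[0], ram[1000000]);
        free(ram);
        return 0;
    }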
Machines have a lot of RAM, on the order of billions of bytes. Even very big operating systems do not need all of it; if an OS requires 50 million bytes, that is only 50/1000, or 5%, of a billion-byte system, which is now considered small. That leaves 950 million bytes for programs to use. If every program were as big as the operating system, you could run 950/50 = 19 of them. There are tricks to permit running even more.
One of the fundamental jobs of the operating system is to provision resources like RAM to applications, and to make sure that applications cannot snoop on or modify each other's RAM without prior arrangement. To do this, the operating system typically uses a trick where program addresses are indirectly translated to RAM addresses under control of the operating system. This way, all applications can think they have RAM at (say) address 4194304. The hardware that implements this trick is called an MMU (Memory Management Unit), and the details start to explode at this point.
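Here is a minimal sketch of that trick observable from user space: after a fork, parent and child print the same virtual address for the same variable, yet a write in the child is invisible to the parent, because the identical virtual addresses are translated to different physical pages.
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int value = 1;   /* lives at some virtual address in each process */

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            value = 2;   /* same virtual address, different physical page */
            printf("child:  &value=%p value=%d\n", (void *)&value, value);
        } else {
            wait(NULL);  /* let the child print first */
            printf("parent: &value=%p value=%d\n", (void *)&value, value);
        }
        return 0;
    }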
Review:
RAM is a collection of places to store numbers, and each storage place has a unique address.
There is lots of RAM, so we just have to divvy it up between applications.
We can keep each application's RAM separate and secret from other applications.
The operating system only uses a relatively small amount of RAM.

Mongo Server Status - "Resident" memory

After starting Mongo via mongod, I ran a Mongo query that took 300 seconds. Calling db.serverStatus() on my "admin" db showed Mongo having resident memory of 1 GB. The docs explain that "resident" memory is the amount of physical RAM that Mongo uses.
Then, I re-ran the same query, but it took 8 seconds. Looking at the resident memory this time, I saw 5 GB.
The large increase in RAM, I believe, helps to explain why the query time shrank from 300 to 8 seconds, but why did the resident memory jump so quickly?
Is there some type of "warming" step recommended to prepare Mongo so as to avoid 300 second queries?
The reason behind this is that MongoDB uses the mmap functionality of the operating system. This means, at least on Linux systems, that MongoDB's memory handling is based on an operating system feature called memory-mapped files.
Memory on Linux systems is addressed in several levels. Basically, any program sees a virtual address space: 2 GB overall on 32-bit systems, 128 TB on 64-bit systems. This virtual address space is addressed in 4 KB memory pages (a page is the individually handled unit of memory). That is why, if you start MongoDB on a 32-bit system, it raises a warning that on such a system the database can only handle 2 GB of data. The virtual address space is obviously bigger than the amount of physical memory, so there is a mapping between the virtual addresses and the physical ones. Some virtual addresses are backed by physical memory at any given moment, but the algorithm that decides which ones is on the kernel side. Programs running on Linux can deal only with virtual addresses; if one tries to access a virtual memory address that is not in physical memory, a page fault occurs (you can track this in the extra_info field of serverStatus). (You can find a short explanation of this here.)
Accessing memory whose virtual address currently resides in physical memory is as fast as the memory itself; accessing a virtual address that has no physical backing at the moment means paging from disk into memory first, and is therefore only as fast as the disk's random reads. (This is what makes the difference in your case.)
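A minimal sketch of the mechanism an mmap-based engine relies on (data.bin is a placeholder file name; if the file is already in the OS page cache, the first pass will show few or no major faults): the first pass over a freshly mapped file pays for its reads in major page faults, while the second pass runs at memory speed.
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/resource.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static long majflt(void) {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return ru.ru_majflt;  /* page faults that required disk I/O */
    }

    int main(void) {
        int fd = open("data.bin", O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) { perror("data.bin"); return 1; }

        unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        unsigned long sum = 0;
        long before = majflt();
        for (off_t i = 0; i < st.st_size; i += 4096) sum += p[i];  /* touch each page */
        printf("first pass:  %ld major faults\n", majflt() - before);

        before = majflt();
        for (off_t i = 0; i < st.st_size; i += 4096) sum += p[i];  /* pages now resident */
        printf("second pass: %ld major faults (sum=%lu)\n", majflt() - before, sum);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }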
There is a command in MongoDB with which you can force the caching of a collection or an index: this command is touch.
If you use this command to load the data into memory before the first query, you will get the 8-second result on the first try. Unfortunately you cannot really force the OS to keep this data in memory forever; if other things use up the memory, the OS will page this data out after a while.
If you have enough physical memory, MongoDB will keep everything (the data and the indexes) in memory. That is not always needed, though: there is a portion of the data which needs to be in memory to avoid an excessive number of page faults, and that portion is the working set. You can check the size of the working set with the db.runCommand( { serverStatus: 1, workingSet: 1 } ) command.
You cannot control the paging, since it happens at the OS level, but if you have enough memory the kernel usually likes to keep as much cached as it can. If the working set fits in memory, you are more or less OK. If some documents are really rarely accessed and there is not enough memory to keep everything resident, they will be paged out anyway.
When you run a query, several things can happen. An index can cover it, which means no documents will be touched at all; if your query is selective in some sense, only a part of the index will be touched. Unfortunately it is really hard to define when memory is sufficient, and the only thing you can do is monitor (the workingSet metric is an estimate). The symptoms of running out of memory can be identified: check this presentation, and use MMS.

Is this disk read speed to be expected (Amazon EBS)?

Our Amazon EBS-backed instance has slowed down considerably (maybe it shifted physical host?).
I've checked the instance using top, and CPU use is very low when the process is active (around 1%). Using iotop, I have monitored the disk read speed of postgresql. When there is only one postgresql thread running, it reports about a 5 MB/s read speed. Is this rather slow, or is this within the range of usual disk read speeds?
Thanks
5 MB/s is more or less typical for one single hard drive, for sequential access of course. If you have only one hard disk then your CPU is fine, since one hard disk is probably not enough to stress out your CPU. If you are not reaching any more speed than that, even with constant queries, then your hard disk is the bottleneck.
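If you want to sanity-check the iotop figure from inside the instance, a minimal sketch like this reports raw read throughput (bigfile is a placeholder; make it larger than RAM, or drop the page cache first, so you measure the disk rather than the cache):
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("bigfile", O_RDONLY);
        if (fd < 0) { perror("bigfile"); return 1; }

        static char buf[1 << 20];  /* read in 1 MB chunks */
        long long total = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) total += n;

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.1f MB in %.2f s = %.1f MB/s\n",
               total / 1e6, secs, total / 1e6 / secs);
        close(fd);
        return 0;
    }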

virtual memory on compute nodes in a grid

When I run a job on a node using PBS, I finally get the following in the job report:
resources_used.mem=1616024kb
resources_used.vmem=2350176kb
resources_used.walltime=00:06:32
What does the virtual memory really mean? I don't think there is a hard drive connected to each node.
Which kind of memory should I take into account when I try to increase the size of the problem, so that I don't hit the 16 GB capacity of the node's memory: the normal memory (mem) or the virtual memory (vmem)?
Thanks
The vmem indicates how much memory your job was using in total. It used all of the available physical memory (see the mem value), and more. An operating system allows programs to allocate more memory than there is physical memory available.
If you're actively using more memory than there is physical memory available, you'll start seeing swap activity (data that was swapped out to disk being brought back into memory, and other stuff being put to disk). That's bad, it will basically kill your performance if it happens a lot.
So, as long as you're not actively using more than 16 GB, you're fine. But the mem and vmem values alone won't tell you this; it depends on what the application is actually doing.
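The distinction is easy to reproduce in a sketch (Linux-specific, 64-bit assumed): allocate a lot of address space but touch only a little of it, then compare VmSize (roughly PBS's vmem) with VmRSS (roughly mem) in /proc/self/status.
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* Reserve 4 GB of virtual address space (counts toward vmem)... */
        size_t vsize = 4UL * 1024 * 1024 * 1024;
        char *p = malloc(vsize);
        if (!p) { perror("malloc"); return 1; }

        /* ...but make only 16 MB of it resident (counts toward mem). */
        memset(p, 1, 16 * 1024 * 1024);

        /* VmSize should now be ~4 GB while VmRSS is only a few tens of MB. */
        char cmd[64];
        snprintf(cmd, sizeof cmd, "grep -E 'VmSize|VmRSS' /proc/%d/status",
                 (int)getpid());
        system(cmd);

        free(p);
        return 0;
    }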

Does it make sense to cache data obtained from a memory mapped file?

Or would it be faster to re-read that data from the mapped memory once again, since the OS might implement its own cache?
The nature of data is not known in advance, it is assumed that file reads are random.
I wanted to mention a few things I've read on the subject. The answer is no: you don't want to second-guess the operating system's memory manager.
The first comes from the idea that you want your program (e.g. MongoDB, SQL Server) to limit its memory use based on a percentage of free RAM:
Don't try to allocate memory until there is only x% free
Occasionally, a customer will ask for a way to design their program so it continues consuming RAM until there is only x% free. The idea is that their program should use RAM aggressively, while still leaving enough RAM available (x%) for other use. Unless you are designing a system where you are the only program running on the computer, this is a bad idea.
(read the article for the explanation of why it's bad, including pictures)
Next come some notes from the author of Varnish, a reverse proxy:
Varnish Cache - Notes from the architect
So what happens with Squid's elaborate memory management is that it gets into fights with the kernel's elaborate memory management, and like any civil war, that never gets anything done.
What happens is this: Squid creates an HTTP object in "RAM" and it gets used some times rapidly after creation. Then after some time it gets no more hits and the kernel notices this. Then somebody tries to get memory from the kernel for something, and the kernel decides to push those unused pages of memory out to swap space and use the (cache-RAM) more sensibly for some data which is actually used by a program. This, however, is done without Squid knowing about it. Squid still thinks that these HTTP objects are in RAM, and they will be, the very second it tries to access them, but until then, the RAM is used for something productive.
Imagine you do cache something from a memory-mapped file. At some point in the future, the memory holding that "cache" will be swapped out to disk.
the OS has written to the hard-drive something which already exists on the hard drive
Next comes a time when you want to perform a lookup from your "cache" memory, rather than the "real" memory. You attempt to access the "cache", and since it has been swapped out of RAM the hardware raises a PAGE FAULT, and cache is swapped back into RAM.
your cache memory is just as slow as the "real" memory, since both are no longer in RAM
Finally, you want to free your cache (perhaps your program is shutting down). If the "cache" has been swapped out, the OS must first swap it back in so that it can be freed. If instead you just unmapped your memory-mapped file, everything is gone (nothing needs to be swapped in).
in this case your cache makes things slower
Again from Raymond Chen: If your application is closing - close already:
When DLL_PROCESS_DETACH tells you that the process is exiting, your best bet is just to return without doing anything
I regularly use a program that doesn't follow this rule. The program allocates a lot of memory during the course of its life, and when I exit the program, it just sits there for several minutes, sometimes spinning at 100% CPU, sometimes churning the hard drive (sometimes both). When I break in with the debugger to see what's going on, I discover that the program isn't doing anything productive. It's just methodically freeing every last byte of memory it had allocated during its lifetime.
If my computer wasn't under a lot of memory pressure, then most of the memory the program had allocated during its lifetime hasn't yet been paged out, so freeing every last drop of memory is a CPU-bound operation. On the other hand, if I had kicked off a build or done something else memory-intensive, then most of the memory the program had allocated during its lifetime has been paged out, which means that the program pages all that memory back in from the hard drive, just so it could call free on it. Sounds kind of spiteful, actually. "Come here so I can tell you to go away."
All this anal-retentive memory management is pointless. The process is exiting. All that memory will be freed when the address space is destroyed. Stop wasting time and just exit already.
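In code, the advice boils down to a shutdown path like the sketch below (the list type and its contents are hypothetical, just to illustrate the pattern):
    #include <stdlib.h>

    struct node { struct node *next; char payload[1000]; };

    /* Imagine millions of these were allocated over the program's lifetime
       and many of their pages are now swapped out. */
    static struct node *big_list;

    static void spiteful_shutdown(void) {
        /* Anti-pattern: walking the whole list just to free it pages every
           swapped-out node back in from disk, only to throw it away. */
        while (big_list) {
            struct node *next = big_list->next;
            free(big_list);
            big_list = next;
        }
    }

    int main(void) {
        /* ... program runs, building big_list ... */

        /* Better: skip spiteful_shutdown() entirely.  Returning from main
           destroys the whole address space at once; nothing is paged back
           in just to be freed. */
        (void)spiteful_shutdown;  /* shown, deliberately not called */
        return 0;
    }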
The reality is that programs no longer run in "RAM", they run in memory - virtual memory.
You can make use of a cache, but you have to work with the operating system's virtual memory manager:
you want to keep your cache within as few pages as possible
you want to ensure they stay in RAM, by virtue of being accessed a lot (i.e. actually being a useful cache)
Accessing:
a thousand 1-byte locations around a 400GB file
is much more expensive than accessing
a single 1000-byte location in a 400GB file
In other words: you don't really need to cache data, you need a more localized data structure.
If you keep your important data confined to a single 4k page, you will play much nicer with the VMM; Windows is your cache.
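In practice, "working with the VMM" often means hinting rather than copying. A POSIX sketch (data.bin and hot_offset are placeholders of mine): madvise tells the kernel what you will need soon, and mlock pins a genuinely hot page instead of duplicating it into a private cache.
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);   /* placeholder file name */
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) { perror("data.bin"); return 1; }

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Hint: we are about to read this region, so the kernel can start
           paging it in asynchronously -- no private copy needed. */
        madvise(p, st.st_size, MADV_WILLNEED);

        /* If one 4K page really is the hot "cache", pin it in RAM instead
           of duplicating it (subject to permissions / ulimit -l). */
        size_t hot_offset = 0;                 /* hypothetical hot page */
        if (mlock(p + hot_offset, 4096) != 0) perror("mlock");

        /* ... read from p directly; the page cache is the cache ... */
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }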
When you take 64-byte cache lines into account, there's even more incentive to adjust your data structure layout. But you don't want it too compact, or you'll start suffering the performance penalties of cache-line invalidations from false sharing.
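A classic sketch of that trade-off (assuming a 64-byte cache line; build with -pthread): two threads incrementing adjacent counters fight over a single cache line, while padding each counter onto its own line removes the contention.
    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 100000000L

    /* Each counter is padded to 64 bytes so it sits on its own cache
       line.  Remove pad[] and the two counters share one line: every
       increment in one thread invalidates the other core's copy. */
    struct counter {
        volatile long n;
        char pad[64 - sizeof(long)];
    };

    static struct counter counters[2];

    static void *bump(void *arg) {
        struct counter *c = arg;
        for (long i = 0; i < ITERS; i++)
            c->n++;
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump, &counters[0]);
        pthread_create(&t2, NULL, bump, &counters[1]);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("%ld %ld\n", counters[0].n, counters[1].n);
        return 0;
    }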
The answer is highly OS-specific. Generally speaking, there is no sense in caching this data: both the "cached" copy and the memory-mapped pages can be paged out at any time.
If there is any difference, it will be specific to an OS; unless you need that granularity, there is no sense in caching the data.