I'm running algorithms that consume a lot of memory, so I'm using memory-mapped files. The problem is that memory is allocated faster than the memory manager can write data to disk, and ultimately the system stalls because allocation and paging interfere with each other. So I need to throttle the data processing while the memory manager is doing extensive paging.
I have already found out how to detect whether the disk is currently busy writing, but I haven't found a way to tell whether that activity is due to paging.
So the question is whether there is a way to find out if the disk/memory manager is paging - or whether there is an even better way to do this.
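One hedged sketch of the throttling idea, for Linux: `/proc/vmstat` exposes cumulative swap-in/swap-out page counts (`pswpin`/`pswpout`), so sampling them twice gives a paging rate to throttle against. On Windows the rough equivalent would be the `\Memory\Pages/sec` performance counter. The threshold below is an assumption you would tune for your workload.

```python
import time

def parse_vmstat(text):
    """Parse /proc/vmstat-style 'name value' lines into a dict of ints."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1].isdigit():
            stats[parts[0]] = int(parts[1])
    return stats

def paging_rate(interval=1.0):
    """Pages swapped in+out per second over `interval` (Linux only)."""
    with open("/proc/vmstat") as f:
        before = parse_vmstat(f.read())
    time.sleep(interval)
    with open("/proc/vmstat") as f:
        after = parse_vmstat(f.read())
    swapped = (after.get("pswpin", 0) - before.get("pswpin", 0)
               + after.get("pswpout", 0) - before.get("pswpout", 0))
    return swapped / interval

PAGING_THRESHOLD = 1000  # pages/sec; an assumed value, tune for your system

def throttled_step(process_chunk):
    """Back off while the system is paging heavily, then do one unit of work."""
    while paging_rate(0.5) > PAGING_THRESHOLD:
        time.sleep(1.0)  # let the memory manager catch up
    process_chunk()
```

This only detects swap traffic, not writeback of your own memory-mapped file's dirty pages; for that you could additionally watch `pgpgout` deltas, at the cost of also counting ordinary file I/O.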
Related
How an operating system’s use of virtual memory enables the operating system to appear to support the use of more memory than is physically installed in a computer
I'm not sure how to explain this in detail, but I'm thinking that because virtual memory is based on paging, a single process can demand more memory than the amount of physical memory installed. Therefore it "appears" to use more memory than is physically available.
But I'm not sure if that explains it :(
Basically, as the name states, virtual memory doesn't "exist" as such; it is not directly tied to physical memory.
The virtual memory of a process is backed by the disk, holding all the information belonging to the process. See virtual memory for more insight. When a process is scheduled in by the processor, some parts of its memory are brought back into main memory through swapping. The pages the process needs at that moment are kept in main memory, and the pages that are least used (by any process) are swapped out, because main memory can't hold everything at the same time.
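The "demand more than physical memory" part can be demonstrated with an anonymous memory mapping: the sketch below reserves a large stretch of virtual address space, but physical pages are only committed when individual pages are first touched (demand paging). It works on POSIX and Windows, where `mmap.mmap(-1, ...)` maps anonymous memory.

```python
import mmap

# Reserve 64 MiB of virtual address space. No physical frames are
# committed yet; the mapping is just bookkeeping in the page tables.
SIZE = 64 * 1024 * 1024
m = mmap.mmap(-1, SIZE)

# Touch one page: only that page (plus overhead) becomes resident,
# even though the process "appears" to use 64 MiB.
m[0:5] = b"hello"
assert m[0:5] == b"hello"
m.close()
```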
Hope I helped :)
I am not sure if I can post a question without code in Stack Overflow. This is a question about how the computer works.
I know watching a movie isn't slow, because the data in storage is copied to memory first and used from there. However, since storage is slow, isn't that copy just as slow as the processor accessing and reading the storage directly? Or does another device handle the copying? I want to know the principle in detail.
The processor accesses addressable data: units of memory, each identified by an address.
Main memory is addressable: this allows the processor to read from "here" and write "there" and "there", exploiting the full capacity of the main memory.
Without memory, the processor could only use its internal units of storage (registers), but they are of limited size (around 2 KiB total on x86, and the general-purpose registers alone amount to only about 128 bytes).
Caches are equivalent to memory for this discussion.
Disks are not addressable the way memory is. This is due to historical reasons (a small address space) and to performance (random reads are worse than planned-ahead reads, even on an SSD; also there are more commands than just Read and Write, and some can be executed in parallel).
So disks either write data into main memory themselves (DMA), or there is a fixed location where commands are written and a fixed location that is read repeatedly to fetch all the data (PIO).
That read location is small enough to fit into a CPU internal storage unit, but once the CPU has the data it must "save" it somewhere for later processing, so the data ends up in memory anyway (and PIO is far slower than DMA).
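As a loose analogy (not a hardware simulation), the cost gap between the CPU moving every unit itself and a single bulk transfer can be sketched like this: the per-byte loop stands in for PIO, the one-shot slice copy for DMA.

```python
import time

src = bytes(1_000_000)        # 1 MB of data "on the device"
dst = bytearray(len(src))     # destination "in main memory"

# "PIO-like": the CPU moves every byte itself.
t0 = time.perf_counter()
for i in range(len(src)):
    dst[i] = src[i]
pio_time = time.perf_counter() - t0

# "DMA-like": one bulk transfer, no per-byte CPU loop.
t0 = time.perf_counter()
dst[:] = src
bulk_time = time.perf_counter() - t0

print(f"per-byte: {pio_time:.4f}s, bulk: {bulk_time:.6f}s")
```

The absolute numbers are meaningless; the point is the orders-of-magnitude difference when the CPU is taken out of the per-unit transfer loop.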
Note that non-volatile memory devices are considered a new form of storage, something in between main memory and disks (main memory won't be phased out by NV devices, because we need scratch memory, but disks may be, if NV technology can solve its longevity and density problems without sacrificing performance).
They are addressable like main memory, so the CPU can read directly from them.
Symbian OS based mobile phones worked this way: the OS executable files were stored in flash ROM (which is an NV device) and read directly by the CPU without loading them (they were, in effect, already loaded).
Also note that, FWIW, video is played for humans, who have a very slow sampling rate. About 24 frames per second is enough for a movie to look smooth, which is easy to sustain even over the network (though it depends on the resolution and format).
So the disk has all the time to serve the reads needed for the playback.
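Some back-of-the-envelope arithmetic makes this concrete (the bitrate and disk speed below are assumed but typical figures, not from the original answer):

```python
# A 1080p stream at ~8 Mbit/s needs about 1 MB/s from disk, while even a
# modest HDD sustains ~100 MB/s of sequential reads.
bitrate_mbit = 8
needed_mb_per_s = bitrate_mbit / 8   # 8 bits per byte -> 1.0 MB/s
hdd_mb_per_s = 100
headroom = hdd_mb_per_s / needed_mb_per_s
print(f"disk can deliver ~{headroom:.0f}x the required throughput")
```

So even a slow disk has roughly two orders of magnitude of headroom over what playback demands.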
On the contrary, video conversion can be affected by the speed of the disk.
Also, videos are compressed so the CPU must modify the data (meaning it needs to overwrite it or store the result somewhere in main memory) to play the video unless there is a hardware device that can play compressed stream directly and the file format is just right.
In this case, storing the video on an NV device would allow a faster reproduction, without the CPU involved or any copy in memory.
However the speedup is not dramatic, we are shaving off the time needed to read from memory not the time needed to read from the storage device (which is still the dominant factor affecting speed).
That's mostly irrelevant for the frame rates involved when playing for humans.
I came across this question recently in a telephonic interview:
What happens if the size of a program is larger than the size of virtual memory?
Will it not be allowed to run or how does the os go about dealing with it?
Yes, it is possible for a program to run even if its total size is bigger than the address space.
Programs larger than the available address space have existed for a very long time. The common way is to split the program into chunks that fit into the address space and then load the other chunks sequentially or on demand.
If you have a player that can play a file, it will play the file. I'm not sure how this is related to the OS...
Yes, you definitely can. Overlaying is the mechanism used. Only the part of the code that is currently needed for execution is brought into main memory. The rest of the code resides in secondary storage and is brought in when needed.
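The overlay idea from the answers above can be sketched in a few lines: only one chunk of a too-large "program" is ever resident, and each chunk is replaced by the next as execution proceeds. (Real overlays load machine code, of course; this toy version just streams bytes to illustrate the loading pattern.)

```python
import io

def run_overlaid(program_file, chunk_size, execute):
    """Load one chunk of a too-large 'program' at a time, overlay style.

    Only `chunk_size` bytes are ever resident; the rest stays on disk
    until the sequential loader brings it in.
    """
    while True:
        chunk = program_file.read(chunk_size)
        if not chunk:
            break
        execute(chunk)  # run/use this overlay, then let it be replaced

# Usage: a fake 10-byte "program" processed through a 4-byte window.
seen = []
run_overlaid(io.BytesIO(b"0123456789"), 4, seen.append)
# seen == [b"0123", b"4567", b"89"]
```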
Is it enough if I set the visibility to NO on my CCNode/CCSprite? Is it still in memory?
What's the best way to remove it from memory and then put it back (fast)?
What about b2Body instances? How do I do that for them?
I want to do this because I split up my level and I only want to keep the objects that are visible in memory...
Setting a node/sprite to invisible will definitely not free it from memory. If you want to remove it from memory completely and add it back quickly, I suspect a memory pool is the best way to do that.
I'm not sure I understand why you want to keep only the visible objects in memory and still be able to add the others back quickly; it's likely I just don't follow what you are trying to accomplish. You may be trying to optimize your memory usage prematurely. Certainly you should fix all memory leaks, but have you done any profiling of how much memory your project actually uses?
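For reference, the memory-pool pattern mentioned above looks roughly like this. It is a language-agnostic idea sketched in Python (cocos2d itself is Objective-C), and the `Sprite` class here is a hypothetical stand-in, not a cocos2d type.

```python
class Pool:
    """Minimal object pool: reuse released objects instead of reallocating."""

    def __init__(self, factory):
        self._factory = factory
        self._free = []

    def acquire(self):
        # Reuse a pooled object if one exists, else allocate a new one.
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        # Return the object to the pool; it stays allocated for reuse.
        self._free.append(obj)

class Sprite:  # hypothetical stand-in for a CCSprite-like object
    def __init__(self):
        self.visible = True

pool = Pool(Sprite)
s = pool.acquire()           # allocated on first use
s.visible = False
pool.release(s)              # "removed" from the scene, kept in the pool
assert pool.acquire() is s   # reused: no new allocation
```

The allocation cost is paid once; "removing" and "re-adding" an object becomes a cheap list operation plus whatever scene bookkeeping the engine needs.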
I dropped out of the CS program at my university... So, can someone who has a full understanding of Computer Science please tell me: what do Dirty and Resident mean as they relate to Virtual Memory? And, for bonus points, what the heck is Virtual Memory anyway? I am using the Allocations/VM Tracker tool in Instruments to analyze an iOS app.
*Hint - try to explain as if you were talking to an 8-year old kid or a complete imbecile.
Thanks guys.
"Dirty memory" is memory whose contents your process has changed. The OS cannot simply discard a dirty page and reload it later from its backing file the way it can with clean, file-backed pages, so it has to keep it around (on iOS by keeping it resident or compressing it, since iOS does not swap to disk). Depending on how you build your data structures, you can end up dirtying far more pages than necessary. Keeping this number low means the system has more reclaimable memory to work with and your app is less likely to be terminated under memory pressure.
"Resident memory" is memory which is currently loaded into RAM - memory which is actually being used. While your application may require that a lot of different items be tracked in memory, it may only require a small subset be accessible at any point in time. Keeping this number low means your application has lower loading times, plays well with others, and reduces the risk you'll run out of memory and crash as your application is running. This is probably the number you should be paying attention to, most of the time.
"Virtual memory" is the total amount of data that your application is keeping track of at any point in time. This number is different from what is in active use (what's being used is marked as "Resident memory") - the system will keep data that's tracked but not used by your application somewhere other than actual memory. It might, for example, save it to disk.
WWDC 2013 - 410 Fixing Memory Issues explains this nicely. Well worth a watch, since it also covers some of the practical implications of dirty, resident, and virtual memory.