How to understand what Bank Switching does and how it works - atari-2600

I have been trying to learn how games were made for the old Atari 2600, whose 6507 CPU could address at most 8 KB and which had only 128 bytes of RAM. I heard that Atari games used a technique called bank switching, which allows the 6507 (the CPU of the Atari 2600) to access more memory than its 8 KB address space permits. I read the Wikipedia article about it, but I didn't understand how this was accomplished or what it really did.
From what I can understand, you basically swap the memory the CPU is using to let it access more, but how would you keep track of which parts of memory you are using?
I also tried searching for answers here on Stack Overflow, but I got no results.
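For the curious, here is a minimal C sketch of the idea (a software simulation, not real hardware; the two-bank layout with hotspots at $FF8/$FF9 is modeled on the common Atari "F8" scheme). The key point: the bank latch lives in the cartridge, and each bank's code is assembled knowing which bank it sits in, so nothing has to be "tracked" in RAM:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BANK_SIZE 4096

    static uint8_t rom[2][BANK_SIZE];   /* the full 8K cartridge ROM    */
    static int current_bank = 0;        /* latch inside the cartridge   */

    /* Every CPU access to the 4K cartridge window goes through here.
     * The cartridge watches the address bus: touching a "hotspot"
     * address flips the latch. The CPU never sees extra memory; it
     * just finds different bytes behind the same addresses. */
    static uint8_t cart_read(uint16_t addr)
    {
        if (addr == 0x0FF8) current_bank = 0;   /* hotspot: bank 0 */
        if (addr == 0x0FF9) current_bank = 1;   /* hotspot: bank 1 */
        return rom[current_bank][addr & (BANK_SIZE - 1)];
    }

    int main(void)
    {
        memset(rom[0], 0xA0, BANK_SIZE);        /* fill banks with markers */
        memset(rom[1], 0xB1, BANK_SIZE);

        printf("%02X\n", cart_read(0x0000));    /* A0: bank 0 visible */
        cart_read(0x0FF9);                      /* touch the hotspot  */
        printf("%02X\n", cart_read(0x0000));    /* B1: bank 1 visible */
        return 0;
    }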

Related

Why isn't it slow to watch movies from storage? [closed]

I am not sure if I can post a question without code on Stack Overflow. This is a question about how the computer works.
I know playback isn't slow, because the data in storage is copied to memory and used from there. But since storage is slow, isn't that copy just as slow as the processor reading the storage directly? Or is there another device that performs the copy? I want to understand the principle in detail.
The processor accesses addressable data: units of memory, each identified by an address.
Main memory is addressable: this allows the processor to read from "here" and write "there" and "there", exploiting the full capacity of the main memory.
Without memory, the processor could only use its internal units of storage (registers), but those are of limited size (around 2 KiB in total on x86; counting only the general-purpose registers, more like 128 bytes).
Caches are equivalent to memory for this discussion.
Disks are not addressable the same way memory is, for historical reasons (a small address space) and for performance (random reads are worse than planned ones, even on an SSD; there are also more commands than just Read and Write, and some can be executed in parallel).
So disks either write their data into main memory themselves (DMA), or there is a fixed location the CPU writes to in order to send commands and a fixed location it repeatedly reads from to fetch all the data (PIO).
This read location is small enough to fit into a CPU register, but once the CPU has the data it must save it somewhere for later processing, so the data ends up in memory anyway (and PIO is way slower than DMA).
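A self-contained sketch of the PIO path just described, modeled on the classic IDE/ATA interface (data register at I/O port 0x1F0); the inw() port read is stubbed here, since the real thing is a privileged x86 instruction:

    #include <stdint.h>
    #include <stdio.h>

    #define ATA_DATA_PORT 0x1F0

    /* Stub standing in for the real x86 port-read instruction;
     * it pretends the device streams out consecutive words. */
    static uint16_t inw(uint16_t port)
    {
        static uint16_t fake = 0;
        (void)port;
        return fake++;
    }

    /* One 512-byte sector arrives as 256 back-to-back reads of the SAME
     * fixed location; the CPU must store each word into main memory right
     * away, which is why the data ends up in RAM anyway. */
    static void pio_read_sector(uint16_t *dst)
    {
        for (int i = 0; i < 256; i++)
            dst[i] = inw(ATA_DATA_PORT);
    }

    int main(void)
    {
        uint16_t sector[256];
        pio_read_sector(sector);
        printf("first word %u, last word %u\n", sector[0], sector[255]);
        return 0;
    }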
Note that non-volatile (NV) memory devices are considered a new form of storage, something in between main memory and disks (main memory won't be phased out by NV devices, because we need scratch memory, but disks may be, if NV technology can solve its longevity and density problems without sacrificing performance).
They are addressable like main memory and thus the CPU could read directly from them.
Symbian OS based mobile phones worked this way: the OS executable files were stored in the flash ROM (which is an NV device) and read directly by the CPU without loading them (they were already loaded).
Also note that, FWIW, video is played for humans, who have a very slow sampling rate: about 24 frames per second is enough for a movie to look smooth, which is easy to sustain even over the network (though it depends on the resolution and format).
So the disk has all the time to serve the reads needed for the playback.
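Rough arithmetic with assumed but typical figures (an ~8 Mbit/s compressed 1080p stream versus ~100 MB/s sequential throughput for a spinning disk) shows how much headroom the disk has:

    #include <stdio.h>

    int main(void)
    {
        double stream_mbit   = 8.0;               /* Mbit/s, assumed bitrate */
        double stream_mbytes = stream_mbit / 8.0; /* = 1 MB/s needed         */
        double disk_mbytes   = 100.0;             /* MB/s, sequential HDD    */

        printf("playback needs %.1f MB/s; the disk delivers ~%.0f MB/s "
               "(%.0fx headroom)\n",
               stream_mbytes, disk_mbytes, disk_mbytes / stream_mbytes);
        return 0;
    }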
On the contrary, video conversion can be affected by the speed of the disk.
Also, videos are compressed, so the CPU must transform the data to play them (meaning it has to store the decoded result somewhere in main memory), unless there is a hardware device that can play the compressed stream directly and the file format is exactly right.
In this case, storing the video on an NV device would allow a faster reproduction, without the CPU involved or any copy in memory.
However, the speedup is not dramatic: we would be shaving off the time needed to read from memory, not the time needed to read from the storage device (which is still the dominant factor affecting speed).
That's mostly irrelevant for the frame rates involved when playing for humans.

Using Instruments to find a stack overflow in code

As the documentation says, the Allocations instrument gives a heap analysis of memory.
However, I suspect my app is crashing because it stores a lot of data on the stack, which might be overflowing.
How do I analyze that? Please help. Thanks!
First, build your app for profiling (Cmd+I) and run it. Select the Allocations tool and play around with (use) the application.
In Allocations you will find a Live Bytes column: this is your application's current RAM utilization (the "data on stack" you mention; I suppose it's the RAM you are talking about in your question).
Releasing objects that are no longer in use will reduce Live Bytes.
Overall Bytes is all bytes ever allocated (created and destroyed, plus currently live).
For further reference, see the Instruments Programming Guide.
Creating and comparing "heapshots" is a good way to start narrowing down the code parts that show no obvious memory management errors at first glance. See my answer on this question for some further reading or check out this great article directly.
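As an aside, a minimal C sketch (sizes are illustrative) of why large data on the stack crashes while the same data on the heap is fine, and is visible to Allocations:

    #include <stdio.h>
    #include <stdlib.h>

    #define BIG (16 * 1024 * 1024)   /* 16 MB, far above typical stack limits */

    void on_the_stack(void)
    {
        char buf[BIG];               /* local array: lives on the stack      */
        buf[0] = 1;                  /* touch it so it isn't optimized away  */
        printf("%d\n", buf[0]);
    }

    void on_the_heap(void)
    {
        char *buf = malloc(BIG);     /* heap: tracked by Allocations */
        if (!buf) return;
        buf[0] = 1;
        printf("%d\n", buf[0]);
        free(buf);
    }

    int main(void)
    {
        on_the_heap();               /* fine */
        /* on_the_stack();              would very likely crash: thread
                                        stacks are often only 0.5-8 MB */
        return 0;
    }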

If malloc() is considered causing "dirty memory", is free() cleaning it up?

I've heard rumors that calling malloc() leads to so-called "dirty memory", which you can see in the VM Tracker instrument.
Now, rumors also say one must try to keep the amount of dirty memory as low as possible. But what they didn't talk much about was how to undirty it again.
Sometimes there's no other option than using malloc(). Heck, I love malloc(). For example when creating audio sources for OpenAL, one must malloc() a lot of data.
So: When my app calls malloc() and free() all over the place, I always believed that's fine. Am I having a huge problem when doing that? Or will free() always "clean it up"? I'm a bit confused because some very big guys at a very big company warned that malloc() must be avoided as much as possible, because of this dirty memory problem.
Maybe someone can un-confuse me about that.
I seriously doubt this is true. All memory allocation in Cocoa is eventually done via malloc, so sayeth Apple's Memory Usage Performance Guidelines. Quoting from that document:
"Because memory is such a fundamental resource, Mac OS X and iOS both provide several ways to allocate it. Which allocation techniques you use will depend mostly on your needs, but in the end all memory allocations eventually use the malloc library to create the memory. Even Cocoa objects are allocated using the malloc library eventually."
I don't know about your big guys at your big company, but I've known big guys at big companies that didn't know squat. Just sayin'. Documentation trumps rumors every time. :)
I don't know what they mean by "dirty" and "clean". Possibly they are referring to the problem of fragmentation: doing lots of allocs and frees can cause fragmentation problems, but it really depends on the usage patterns and block sizes you are allocating. In general, don't worry about using malloc and free. If you have a real reason to avoid the standard allocator, you can use your own allocator: just call malloc once for a huge block and use it as the basis of your custom allocator, as sketched below.
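A minimal "bump" (arena) allocator along those lines; the arena_* names are made up for this sketch:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { uint8_t *base; size_t cap, used; } arena_t;

    /* One big malloc up front... */
    int arena_init(arena_t *a, size_t cap)
    {
        a->base = malloc(cap);
        a->cap  = cap;
        a->used = 0;
        return a->base != NULL;
    }

    /* ...then cheap sub-allocations from it: just bump an offset. */
    void *arena_alloc(arena_t *a, size_t n)
    {
        n = (n + 15) & ~(size_t)15;            /* keep 16-byte alignment */
        if (a->used + n > a->cap) return NULL;
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* No per-object free(): everything is released at once. */
    void arena_release(arena_t *a)
    {
        free(a->base);
        a->base = NULL;
        a->cap = a->used = 0;
    }

    int main(void)
    {
        arena_t a;
        if (!arena_init(&a, 1 << 20)) return 1;        /* one 1 MB block  */
        float *verts = arena_alloc(&a, 300 * sizeof(float));
        int   *ids   = arena_alloc(&a, 100 * sizeof(int));
        (void)verts; (void)ids;                        /* ... use them ... */
        arena_release(&a);                             /* one free, done   */
        return 0;
    }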
If you malloc and free same-size memory blocks multiple times, the memory will be reused instead of accumulating dirty VM pages. So it's perfectly safe, as long as you know the maximum of all fragment sizes your app ever allocates at any one time, and that maximum keeps your app under the OS kill limit.
You are correct - free() will simply clean it up again.

What do "Dirty" and "Resident" mean in relation to Virtual Memory?

I dropped out of the CS program at my university... So, can someone who has a full understanding of Computer Science please tell me: what do Dirty and Resident mean, as they relate to Virtual Memory? And, for bonus points, what the heck is Virtual Memory anyway? I am using the Allocations/VM Tracker tool in Instruments to analyze an iOS app.
*Hint: try to explain as if you were talking to an 8-year-old kid or a complete imbecile.
Thanks guys.
"Dirty memory" is memory which has been changed somehow - that's memory which the garbage collector has to look at, and then decide what to do with it. Depending on how you build your data structures, you could cause the garbage collector to mark a lot of memory as dirty, having each garbage collection cycle take longer than required. Keeping this number low means your program will run faster, and will be less likely to experience noticeable garbage collection pauses. For most people, this is not really a concern.
"Resident memory" is memory which is currently loaded into RAM - memory which is actually being used. While your application may require that a lot of different items be tracked in memory, it may only require a small subset be accessible at any point in time. Keeping this number low means your application has lower loading times, plays well with others, and reduces the risk you'll run out of memory and crash as your application is running. This is probably the number you should be paying attention to, most of the time.
"Virtual memory" is the total amount of data that your application is keeping track of at any point in time. This number is different from what is in active use (what's being used is marked as "Resident memory") - the system will keep data that's tracked but not used by your application somewhere other than actual memory. It might, for example, save it to disk.
WWDC 2013 Session 410, "Fixing Memory Issues", explains this nicely. Well worth a watch, since it also explains some of the practical implications of dirty, resident, and virtual memory.
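A POSIX sketch that puts the three terms side by side (exact accounting varies by OS, and MAP_ANONYMOUS may be spelled MAP_ANON on older systems):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64 * 1024 * 1024;   /* reserve 64 MB of address space */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;

        /* Virtual: all 64 MB now counts toward virtual size,
         * but almost none of it is resident yet. */

        volatile char c = p[0];          /* read one page: it becomes resident */
        (void)c;

        memset(p, 1, 1024 * 1024);       /* write 1 MB: those pages are now
                                            resident AND dirty, so the OS
                                            cannot simply discard them */

        puts("inspect this process in a VM tracker now");
        munmap(p, len);
        return 0;
    }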

Understanding the memory consumption on iPhone

I am working on a 2D iPhone game using OpenGL ES and I keep hitting the 24 MB memory limit – my application keeps crashing with the error code 101. I tried real hard to find where the memory goes, but the numbers in Instruments are still much bigger than what I would expect.
I ran the application with the Memory Monitor, Object Alloc, Leaks, and OpenGL ES instruments. When the application gets loaded, free physical memory drops from 37 MB to 23 MB, Object Alloc settles around 7 MB, Leaks show two or three leaks a few bytes in size, the Gart Object Size is about 5 MB, and Memory Monitor says the application takes up about 14 MB of real memory. I am perplexed as to where the memory went: when I dig into the Object Allocations, most of the memory is in the textures, exactly as I would expect. But both my own texture allocation counter and the Gart Object Size agree that the textures should take up only around 5 MB.
I am not aware of allocating anything else that would be worth mentioning, and the Object Alloc agrees. Where does the memory go? (I would be glad to supply more details if this is not enough.)
Update: I really tried to find where I could allocate so much memory, but with no results. What drives me wild is the difference between the Object Allocations (~7 MB) and real memory usage as shown by Memory Monitor (~14 MB). Even if there were huge leaks or huge chunks of memory I forgot about, they should still show up in the Object Allocations, shouldn't they?
I've already tried the usual suspects, i.e. UIImage with its caching, but that did not help. Is there a way to track memory usage "debugger-style", line by line, watching each statement's impact on memory usage?
What I have found so far:
I really am using that much memory. It is not easy to measure the real memory consumption, but after a lot of counting I think the memory consumption is really that high. My fault.
I found no easy way to measure the memory used. The Memory Monitor numbers are accurate (these are the numbers that really matter), but the Memory Monitor can’t tell you where exactly the memory goes. The Object Alloc tool is almost useless for tracking the real memory usage. When I create a texture, the allocated memory counter goes up for a while (reading the texture into the memory), then drops (passing the texture data to OpenGL, freeing). This is OK, but does not always happen – sometimes the memory usage stays high even after the texture has been passed on to OpenGL and freed from “my” memory. This means that the total amount of memory allocated as shown by the Object Alloc tool is smaller than the real total memory consumption, but bigger than the real consumption minus textures (real – textures < object alloc < real). Go figure.
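The load/upload/free pattern I am describing, roughly (a sketch only: load_rgba_pixels() is a stand-in for whatever image decoder is used, stubbed here so the snippet is self-contained):

    #include <stdlib.h>
    #include <OpenGLES/ES1/gl.h>   /* iOS; use <GLES/gl.h> on other platforms */

    /* Stub standing in for a real image decoder. */
    static void *load_rgba_pixels(const char *path, int *w, int *h)
    {
        (void)path;
        *w = 256; *h = 256;
        return calloc((size_t)(*w) * (*h), 4);   /* fake 256x256 RGBA image */
    }

    GLuint make_texture(const char *path)
    {
        int w, h;
        void *pixels = load_rgba_pixels(path, &w, &h); /* allocation goes UP */

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels); /* driver keeps a copy */

        free(pixels);  /* allocation goes DOWN, yet the texture memory remains,
                          which is what Object Alloc stops showing */
        return tex;
    }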
I misread the Programming Guide. The memory limit of 24 MB applies to textures and surfaces, not the whole application. The actual red line lies a bit further, but I could not find any hard numbers. The consensus is that 25–30 MB is the ceiling.
When the system gets short on memory, it starts sending memory warnings. I have almost nothing to free, but other applications do release some memory back to the system, especially Safari (which seems to be caching websites). When the free memory shown in the Memory Monitor hits zero, the system starts killing.
I had to bite the bullet and rewrite some parts of the code to be more efficient on memory, but I am probably still pushing it. If I were to design another game, I would certainly think of some resource paging. With the current game it’s quite hard, because the thing is in motion all the time and loading the textures gets in the way, even if done in another thread. I would be much interested in how other people solve this issue.
Please note that these are just my views and may not be entirely accurate. If I find out something more to say on this topic, I will update the question. I'll keep the question open in case somebody who understands the issue cares to answer, since all of these are more workarounds and guesses than anything else.
I highly doubt this is a bug in Instruments.
First, read this blog post by Jeff Lamarche about OpenGL textures. It:
- has a simple example of how to load textures without causing leaks
- gives an understanding of how "small" images, once loaded into OpenGL, actually use "a lot" of memory
Excerpt:
"Textures, even if they're made from compressed images, use a lot of your application's memory heap because they have to be expanded in memory to be used. Every pixel takes up four bytes, so forgetting to release your texture image data can really eat up your memory quickly."
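To put the quote in numbers (a quick sketch; the 1024x1024 size is just an example):

    #include <stdio.h>

    int main(void)
    {
        int w = 1024, h = 1024;            /* an example texture size */
        size_t bytes = (size_t)w * h * 4;  /* four bytes per pixel    */
        printf("a %dx%d RGBA texture uses %zu bytes (%.1f MB) once expanded\n",
               w, h, bytes, bytes / (1024.0 * 1024.0));
        return 0;
    }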
Second, it is possible to debug texture memory with Instruments. There are two profiling configurations: OpenGL ES Analyzer and OpenGL ES Driver. You will need to run these on the device, as the simulator doesn't use OpenGL. Simply choose Product > Profile in Xcode and look for these profiles once Instruments launches.
Armed with that knowledge, here is what I would do:
Check that you're not leaking memory -- this will obviously cause this problem.
Ensure you're not accessing autoreleased memory -- a common cause of crashes.
Create a separate test app and play with loading textures individually (and in combination) to find out what texture (or combination thereof) is causing the problem.
UPDATE: After thinking about your question, I've been reading Apple's OpenGL ES Programming Guide and it has very good information. Highly recommended!
One way is to start commenting out code and checking whether the bug still happens. Yes, it is tedious and elementary, but it helps to know where the bug is; where it is crashing is a strong hint as to why it is crashing.
Hrmm, that's not many details, but if Leaks doesn't show you where the leaks are, there are two important options:
[i] Leaks missed a leak
[ii] The memory isn't actually being leaked
Fixing [i] is quite hard, but as Eric Albert said, filing a bug report with Apple will help. [ii] means that the memory you're using is still accessible somewhere, but perhaps you've forgotten about it. Are any lists growing without throwing out old entries? Are any buffers being realloc()ed a lot?
For those seeing this after the year 2012:
The memory actually loaded into the device's physical memory is the Resident Memory in the VM Tracker instrument.
The Allocations instrument only tracks memory created by malloc / [NSObject alloc] and some framework buffers; for example, a decompressed image bitmap is not included in Allocations, yet it often takes up most of your memory.
Watch WWDC 2012 Session 242, "iOS App Performance: Memory", to get this information from Apple.
This doesn't specifically help you, but if you find that the memory tools don't provide all the data you need, please file a bug at bugreport.apple.com. Attach a copy of your app and a description of how the tools are falling short of your analysis and Apple will see if they can improve the tools. Thanks!