Simple Heap Implementation - Custom Memory Manager - operating-system

I am currently taking an Operating Systems course and will have my first exam tomorrow. The professor has provided us with a list of topics to be prepared for and one of them is:
Simple Heap Implementation
Based on the course material so far, I have an idea of what this entails but was wondering if anyone can possibly elaborate on this or direct me to some further resources to continue studying the topic.
What are some things I should be aware of and how can I go about implementing them?
Thanks

You can build your own memory manager using a linked list data structure. The heap is used for dynamic memory allocation; for example, malloc in C allocates memory from the heap.
In a dynamic storage allocation model, memory is made up of a series of variable-sized blocks, some allocated and some free. So you will basically maintain linked lists (doubly linked lists, to be specific) for the free memory blocks and the allocated memory blocks.
Take a look at this and this for details. I suggest you get a good understanding of linked lists before doing anything else.
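For a concrete picture, here is a minimal sketch of that idea in C: a fixed pool carved into blocks that live on a doubly linked list, with first-fit allocation and coalescing of neighbouring free blocks on free. The names (my_malloc, my_free), the pool size, and the splitting threshold are made up for illustration; a real allocator would also have to handle thread safety and stricter alignment rules.

```c
#include <stddef.h>
#include <stdio.h>

#define POOL_SIZE 4096

/* Every block starts with a header; the headers form a doubly linked list
 * that covers the whole pool, mixing free and allocated blocks. */
typedef struct block {
    size_t size;            /* payload size in bytes           */
    int    free;            /* 1 if the block is unallocated   */
    struct block *prev;
    struct block *next;
} block_t;

static union {
    unsigned char bytes[POOL_SIZE];
    long double   align;    /* force suitable alignment for the pool */
} pool;

static block_t *head = NULL;

/* First-fit allocation: walk the list and split the first free block that fits. */
static void *my_malloc(size_t size)
{
    size = (size + 7u) & ~(size_t)7;          /* round up to keep blocks aligned */
    if (head == NULL) {                       /* lazily set up one big free block */
        head = (block_t *)pool.bytes;
        head->size = POOL_SIZE - sizeof(block_t);
        head->free = 1;
        head->prev = head->next = NULL;
    }
    for (block_t *b = head; b != NULL; b = b->next) {
        if (!b->free || b->size < size)
            continue;
        if (b->size >= size + sizeof(block_t) + 8) {   /* enough room to split */
            block_t *rest = (block_t *)((unsigned char *)(b + 1) + size);
            rest->size = b->size - size - sizeof(block_t);
            rest->free = 1;
            rest->prev = b;
            rest->next = b->next;
            if (b->next) b->next->prev = rest;
            b->next = rest;
            b->size = size;
        }
        b->free = 0;
        return b + 1;                          /* payload starts right after the header */
    }
    return NULL;                               /* out of memory */
}

/* Mark a block free and coalesce with free neighbours to limit fragmentation. */
static void my_free(void *ptr)
{
    if (ptr == NULL) return;
    block_t *b = (block_t *)ptr - 1;
    b->free = 1;
    if (b->next && b->next->free) {            /* merge with the following block */
        b->size += sizeof(block_t) + b->next->size;
        b->next = b->next->next;
        if (b->next) b->next->prev = b;
    }
    if (b->prev && b->prev->free) {            /* merge into the preceding block */
        b->prev->size += sizeof(block_t) + b->size;
        b->prev->next = b->next;
        if (b->next) b->next->prev = b->prev;
    }
}

int main(void)
{
    char *a = my_malloc(100);
    char *b = my_malloc(200);
    my_free(a);
    char *c = my_malloc(50);                   /* reuses the hole left by 'a' */
    printf("a=%p b=%p c=%p\n", (void *)a, (void *)b, (void *)c);
    return 0;
}
```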

Related

What purpose does a queue serve in System Verilog?

They are not used for RTL but rather verification, correct? They would not be synthesizable.
Do they have better memory management features that in turn optimize program time? If I recall correctly, SystemVerilog has automatic garbage collection, so there is no need to deallocate memory.
The official IEEE documentation does a great job of explaining how they work. I am just wondering in what scenarios I would use one vs an array. One guess would be that they have associated methods that allow for easier data manipulation?
Thank you in advance for your knowledge and expertise.
A queue can be synthesizable if it has a bounded maximum size. Only a few synthesis tools support this, and probably none of the FPGA synthesis tools do.
The key advantage of a queue is the efficiency of adding or removing a single element, especially at the head or tail of the queue. A dynamic array may require reallocating and copying the entire array when its size changes. The penalty for a queue is the extra time it takes to access elements in the middle, and the extra space it needs compared with a dynamic array holding the same number of elements.
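That dynamic-array penalty is easy to see outside SystemVerilog as well. Below is a small C illustration (the names are made up) of why removing the first element of a contiguous array costs time proportional to its length - exactly the operation a queue makes cheap at the head or tail.

```c
#include <string.h>
#include <stdio.h>

/* Removing the first element of a contiguous (dynamic) array: every remaining
 * element has to be shifted down one slot, so the cost grows with the length.
 * A queue backed by a linked list or ring buffer does the same pop in O(1). */
static int pop_front(int *arr, size_t *len)
{
    int first = arr[0];
    memmove(arr, arr + 1, (*len - 1) * sizeof arr[0]);  /* O(n) copy */
    (*len)--;
    return first;
}

int main(void)
{
    int data[] = {10, 20, 30, 40};
    size_t len = 4;
    int a = pop_front(data, &len);
    int b = pop_front(data, &len);
    printf("%d %d\n", a, b);    /* prints "10 20"; the array shrank twice */
    return 0;
}
```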
I hope that answers the question.

Record virtual memory use in NetLogo over time

I am running NetLogo on an HPC cluster, and I wondered if there is any way to output-print the Java heap used over time?
I am trying to optimize the heap space used for a large model with loads of GIS data, but the HPC cluster only gives limited information on how much is used at which step.
I believe tools exist for monitoring JVM heap usage; I don't know much about that, but it isn't actually a NetLogo-specific topic, so you might look into that separately.
If you want to gather the information from within NetLogo itself:
As you point out in a comment, the "About NetLogo" dialog displays heap usage numbers. The code that retrieves those numbers is here: https://github.com/NetLogo/NetLogo/blob/533131ddb63da21ac35639e61d67601a3dae7aa2/src/main/org/nlogo/util/SysInfo.scala#L28-L39
You can see that it's just calling some routines in the Java standard library (in java.lang.Runtime). You could write a little NetLogo extension that calls the same routines.

iPhone/Instruments: what are the "Malloc" entries in the object summary?

I'm performance-tuning my iPhone/iPad app, and it seems that not all of the memory that should be freed actually gets freed. In Instruments, after I simulate a memory warning in the simulator, there are lots of "Malloc" entries left. What are they, what do they mean/stand for, and can I get rid of them?
Thanks a lot,
Stefan
At any time, your app will have a (possibly huge) number of live objects, even after receiving a memory warning (and the subsequent memory recovery by the operating system). So it is pretty common to also see many of those mallocs.
They are not in themselves a sign that something is wrong with memory allocation; they may simply reflect the fact that your program is running.
Also have a look at this S.O. topic to learn more about the object allocation tool.
Furthermore, there are many advanced techniques you can use to detect memory allocation problems.
Here you can find a great tutorial that will allow you to go way beyond what the Leaks tool allows you to do.
EDIT:
About the exact meaning of those mallocs: roughly speaking, you can allocate two broad classes of objects: Objective-C objects, which are created through the Obj-C runtime, and "plain" C blocks, which are allocated through malloc.
Many objects of the second class are allocated (without you calling malloc directly) by system libraries and by the C library (think of sockets or file handles, for example). Those C blocks have no type information associated with them, so Instruments simply shows you the size of the allocated memory block; it has no further information available.
Often these malloc blocks are created on behalf of higher-level classes, so that when you release those instances, the memory allocated through malloc is freed along with them.
You should not worry specifically about them unless you see their overall size grow without bound as the program runs. In that case, first investigate the way you alloc/release your higher-level objects and work out where in your code things get stuck.
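As a tiny illustration of the second class: a plain C allocation like the one below carries no class information, so the Allocations instrument can only report it by size (for example as a "Malloc 64 Bytes" entry). The function name here is made up.

```c
#include <stdlib.h>
#include <string.h>

/* A plain C allocation: Instruments has no class name for this block,
 * so it appears in the object summary only by its size. */
char *make_buffer(const char *text)
{
    char *buf = malloc(64);
    if (buf != NULL) {
        strncpy(buf, text, 63);
        buf[63] = '\0';
    }
    return buf;   /* the caller must free(buf), or the block stays alive */
}
```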

Obj-C circular buffer object, implementing one?

I've been developing for the iPhone for quite some time, and I've been wondering whether there is any array-like object in Obj-C that uses a circular buffer, like Java's Stack or List or Queue.
I've been tinkering with NSMutableArray, testing its limits... and it seems that with 50k simple objects inside the array, the application slows down significantly.
So, is there any better solution than NSMutableArray (which becomes very slow with huge amounts of data)? If not, can anyone tell me about a way to create such an object (would that involve using chained node objects?)?
Bottom line: would populating a UITableView directly from an SQLite DB be smart? It wouldn't require keeping everything in an array, just running the queries, and SQLite is fast and not memory-grinding.
Thank you very much for your time and attention,
~ Natanavra.
From what I've been thinking, it seems that going with Quinn's class is possibly the best option.
I have another question - would it be faster or smarter to load everything straight from the SQLite DB instead of creating an object and pushing it into an array?
Thank you in advance,
~ Natanavra.
Apologies for tooting my own horn, but I implemented a C-based circular buffer in CHDataStructures. (Specifically, check out CHCircularBufferQueue and CHCircularBufferStack.) The project is open source and has benchmarks which demonstrate that a true circular buffer is quite fast when compared to NSMutableArray in the general case, but results will depend on your data and usage, as well as the fact that you're operating on a memory-constrained device (e.g. iPhone). Hope that helps!
If you're seeing performance issues, measure where your app is spending its time, don't just guess. Apple provides an excellent set of performance measurement tools.
It's trivial to have NSMutableArray act like a stack, list, queue, etc. using the various insertObject:atIndex: and removeObjectAtIndex: methods. You can write your own subclasses if you want to hardwire the behavior.
I doubt the performance problems you are seeing are caused by NSMutableArray, especially if your point of reference is the much, much slower Java. The problem is most likely the iPhone itself. As noted previously, 50,000 Objective-C objects is not a trivial amount of data in this context, and the iPhone hardware may struggle to manage that much data.
If you need some kind of high performance array for bytes, you could use one of the core foundation arrays or roll your own in plain C and then wrap them in a custom class.
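As a rough idea of what "rolling your own in plain C" might look like, here is a minimal fixed-capacity ring buffer of void * payloads. The names (ring_t, ring_push, ring_pop) are invented for illustration; an Objective-C wrapper would add retain/release of the stored objects and growth on top of this.

```c
#include <stdlib.h>
#include <stdio.h>

/* Fixed-capacity circular buffer of void * payloads. */
typedef struct {
    void   **items;
    size_t   capacity;
    size_t   head;    /* index of the oldest element       */
    size_t   count;   /* number of elements currently held */
} ring_t;

ring_t *ring_create(size_t capacity)
{
    ring_t *r = malloc(sizeof *r);
    if (!r) return NULL;
    r->items = malloc(capacity * sizeof *r->items);
    if (!r->items) { free(r); return NULL; }
    r->capacity = capacity;
    r->head = r->count = 0;
    return r;
}

/* O(1) append at the tail; fails if the buffer is full. */
int ring_push(ring_t *r, void *item)
{
    if (r->count == r->capacity) return 0;
    r->items[(r->head + r->count) % r->capacity] = item;
    r->count++;
    return 1;
}

/* O(1) removal from the head (oldest element first). */
void *ring_pop(ring_t *r)
{
    if (r->count == 0) return NULL;
    void *item = r->items[r->head];
    r->head = (r->head + 1) % r->capacity;
    r->count--;
    return item;
}

void ring_destroy(ring_t *r)
{
    if (r) { free(r->items); free(r); }
}

int main(void)
{
    ring_t *r = ring_create(4);
    int a = 1, b = 2, c = 3;
    ring_push(r, &a); ring_push(r, &b); ring_push(r, &c);
    int *x = ring_pop(r);
    int *y = ring_pop(r);
    printf("%d %d\n", *x, *y);   /* prints "1 2": the oldest element comes out first */
    ring_destroy(r);
    return 0;
}
```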
It sounds to me like you need to switch to Core Data so you don't have to keep all of this in memory. Core Data will efficiently fetch only what you need, when you need it.
You can use STL classes in "Objective-C++" - which is a fancy name for Objective-C making use of C++ classes. Just name those source files that use C++ code with a ".mm" extension and you'll get the mixed runtime.
Objective-C objects are not really "simple," so 50,000 of them is going to be pretty demanding. Write your own in straight C or C++ if you want to avoid the bottlenecks and resource demands of the Objective-C runtime.
A rather lengthy and non-theoretical discussion of the overhead associated with convenience:
http://www.cocoabuilder.com/archive/cocoa/35145-nsarray-overhead-question.html#35128
And some simple math for simple people:
All it takes to make an object as opposed to a struct is a single pointer at the beginning.
Let's say that's true, and let's say we're running on a 32-bit system with 4 byte pointers.
4 bytes x 50,000 objects = 200,000 bytes
That's nearly 200KB worth of extra memory that your data suddenly needs just because you used Objective-C. Now compound that with the fact that whatever NSArray you add those objects to is going to double that by keeping its own set of pointers to those objects, and you've just chewed up 400KB of RAM just so you could use a couple of convenience wrappers.
Refresh my memory here... Are swap files on hard drives as fast as RAM? How much RAM is there in an iPhone? How many function calls and stack frames does it take to send an object a message? Why isn't IOKit written in Objective-C? How many of Apple's flagship applications that do a lot of DSP use AppKit? Anybody got a copy of otool they can check with? I'm seeing zero here.

Optimal way to persist an object graph to flash on the iPhone

I have an object graph in Objective-C on the iPhone platform that I wish to persist to flash when closing the app. The graph has about 100k-200k objects and contains many loops (by design). I need to be able to read/write this graph as quickly as possible.
So far I have tried using NSCoder. This not only struggles with the loops but also takes an age and a significant amount of memory to persist the graph - possibly because an XML document is used under the covers. I have also used an SQLite database but stepping through that many rows also takes a significant amount of time.
I have considered using Core-Data but fear I will suffer the same issues as SQLite or NSCoder as I believe the backing stores to core-data will work in the same way.
So is there any other way I can handle the persistence of this object graph in a lightweight way? Ideally I'd like something like Java's serialization. I've been thinking of trying Tokyo Cabinet or writing the memory occupied by a bunch of C structs out to disk - but that's going to be a lot of rewrite work.
I would recommend re-writing as C structs. I know it will be a pain, but not only will it be quick to write to disk, it should also perform much better.
Before anyone gets upset, I am not saying people should always use structs, but there are some situations where this is actually better for performance - especially if you pre-allocate your memory in, say, 20k contiguous blocks at a time (with pointers into the block), rather than creating/allocating lots of little chunks within a repeated loop.
I.e., if your loop continually allocates objects, that is going to slow it down. If you have preallocated 1,000 structs and just keep an array of pointers (or a single pointer), this is orders of magnitude faster.
(I have had situations where even my desktop Mac was too slow and did not have enough memory to cope with millions of objects being created in a row.)
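To make that suggestion concrete, here is a rough C sketch of preallocating one contiguous block of structs and writing it out in bulk, assuming the graph's links can be stored as indices into the block rather than raw pointers (so cycles survive the round trip to disk). The record layout and function names are made up for illustration; a real version would also want a file-format version field and better error reporting.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* A node whose links are stored as indices into the same array, so the whole
 * block can be written to disk and read back without pointer fix-ups. */
typedef struct {
    int32_t value;
    int32_t next;      /* index of the linked node, -1 for none (cycles are fine) */
} node_rec;

enum { BLOCK = 20000 };   /* preallocate nodes in one contiguous chunk */

int save_graph(const char *path, const node_rec *nodes, size_t count)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    uint32_t n = (uint32_t)count;
    int ok = fwrite(&n, sizeof n, 1, f) == 1 &&
             fwrite(nodes, sizeof *nodes, count, f) == count;
    fclose(f);
    return ok ? 0 : -1;
}

node_rec *load_graph(const char *path, size_t *count)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    uint32_t n = 0;
    node_rec *nodes = NULL;
    if (fread(&n, sizeof n, 1, f) == 1 &&
        (nodes = malloc(n * sizeof *nodes)) != NULL &&
        fread(nodes, sizeof *nodes, n, f) == n) {
        *count = n;
        fclose(f);
        return nodes;
    }
    free(nodes);
    fclose(f);
    return NULL;
}

int main(void)
{
    node_rec *nodes = calloc(BLOCK, sizeof *nodes);   /* one allocation, not 20k */
    if (!nodes) return 1;
    nodes[0].value = 42; nodes[0].next = 1;
    nodes[1].value = 7;  nodes[1].next = 0;           /* a deliberate cycle */

    save_graph("graph.bin", nodes, BLOCK);

    size_t count = 0;
    node_rec *loaded = load_graph("graph.bin", &count);
    if (loaded)
        printf("loaded %zu nodes, first value %d\n", count, loaded[0].value);

    free(nodes);
    free(loaded);
    return 0;
}
```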
Rather than rolling your own, I'd highly recommend taking another look at Core Data. Core Data was designed from the ground up for persisting object graphs. An NSCoder-based archive, like the one you describe, requires you to have the entire object graph in memory, and all writes are atomic. Core Data brings objects in and out of memory as needed, and can write only the part of your graph that has changed to disk (via SQLite).
If you read the Core Data Programming Guide or their tutorial guide, you can see that they've put a lot of thought into performance optimizations. If you follow Apple's recommendations (which can seem counterintuitive, like their suggestion to denormalize your data structures at some points), you can squeeze a lot more performance out of your data model than you'd expect. I've seen benchmarks where Core Data handily beat hand-tuned SQLite for data access within databases of the size you're looking at.
On the iPhone, you also have some memory advantages when controlling the batch size of fetches, plus a very nice helper class in NSFetchedResultsController.
It shouldn't take that long to build up a proof-of-principle Core Data implementation of your graph to compare it to your existing data storage methods.