NSData dataWithContentsOfFile vs NSInputStream - iPhone

I have to process XML; switching to NSInputStream would break my code, as I would have to rewrite a lot of things.
Will dataWithContentsOfFile read the entire file into memory, or only the bytes requested via the getBytes: method?
I am using NSData as the input parameter to NSXMLParser; is there any documentation regarding this?
Apple's documentation says nothing about the internals or implementation of NSData's dataWithContentsOfFile.

When you allocate an NSData for NSXMLParser, you create a data buffer for that object, and every object occupies memory (RAM). Reading an entire file fills that object with data, and if the data is larger than a few memory pages (the threshold depends on the OS algorithm), the object is handled by the virtual memory system; iOS knows very well how to use virtual memory. A data object can also wrap preexisting data, regardless of how the data was allocated. The object contains no information about the data itself (such as its type); the responsibility for deciding how to use the data lies with the client. In particular, it will not handle byte-order swapping when distributed between big-endian and little-endian machines.
I recommend you read this guide again:
https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/BinaryData/BinaryData.html#//apple_ref/doc/uid/10000037i (it applies to iOS as well). One thing to keep in mind: in iOS an object has an owner, which is either your code or iOS itself. When you create an NSData you allocate a memory buffer and assign its pointer to your variable, so at that moment the entire data resides in memory. That is our assumption; in practice iOS knows how to handle this scenario and uses virtual memory techniques to manage the data pages.
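To make the whole-file-versus-incremental distinction concrete, here is a language-neutral C sketch (all names are mine, not Apple's): the first function is the rough analogue of +dataWithContentsOfFile:, which puts the entire file in one heap buffer, while the second is the rough analogue of NSInputStream, keeping only a small fixed window resident at any moment.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Whole-file read: the rough analogue of +dataWithContentsOfFile:.
   The entire file ends up in one heap buffer at once. */
char *read_whole_file(const char *path, size_t *out_len)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *buf = malloc((size_t)len);
    fread(buf, 1, (size_t)len, f);
    fclose(f);
    *out_len = (size_t)len;
    return buf;
}

/* Streaming read: the rough analogue of NSInputStream. Only a small,
   fixed-size window of the file is in memory at any moment. Here we
   just sum the bytes to show the whole file never needs to be resident. */
unsigned long stream_checksum(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    unsigned char window[1024];   /* fixed 1 KiB window */
    unsigned long sum = 0;
    size_t n;
    while ((n = fread(window, 1, sizeof window, f)) > 0)
        for (size_t i = 0; i < n; i++)
            sum += window[i];
    fclose(f);
    return sum;
}
```

Both walk the same bytes; they differ only in peak memory use, which is the trade-off the question is about.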

Related

How to receive and store binary data from server in iOS?

I am working on an application for the iPhone (iOS 5). What I have to do is create a map using binary data that I receive from a server. If the server has bytes available, I read them into a buffer (uint8_t[1024]). Then I parse through this data and create objects (e.g. a path that contains points with longitude and latitude) from it, but those objects are often larger than my buffer. On the simulator this is not a huge problem, because I have enough memory to store them in mutable arrays.
But how do I handle this to make my application safe for a device? What array size should I use for iOS devices?
I hope my issue was understandable.
You can use NSMutableArray to store the data temporarily and expand its size as needed.
Hope this helps.
Have you considered using NSData (or its mutable subclass NSMutableData) instead?
These provide an object wrapper for byte buffers, and can be grown arbitrarily using the appendData: selector.
From the documentation:
NSMutableData (and its superclass NSData) provide data objects, object-oriented wrappers for byte buffers. Data objects let simple allocated buffers (that is, data with no embedded pointers) take on the behavior of Foundation objects.
That said, if you're only allocating on the order of kilobytes you're not likely to face memory issues.
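In C terms, the growth behavior that NSMutableData's appendData:/appendBytes:length: gives you for free looks roughly like this minimal sketch (the struct and function names are mine): the backing store is reallocated geometrically whenever an append exceeds the current capacity.

```c
#include <stdlib.h>
#include <string.h>

/* A minimal growable byte buffer: a C sketch of what NSMutableData
   does for you under the hood when you append bytes. */
typedef struct {
    unsigned char *bytes;
    size_t length;
    size_t capacity;
} ByteBuffer;

void buffer_init(ByteBuffer *b)
{
    b->capacity = 16;
    b->length = 0;
    b->bytes = malloc(b->capacity);
}

void buffer_append(ByteBuffer *b, const void *data, size_t len)
{
    while (b->length + len > b->capacity) {
        b->capacity *= 2;                       /* grow geometrically */
        b->bytes = realloc(b->bytes, b->capacity);
    }
    memcpy(b->bytes + b->length, data, len);
    b->length += len;
}
```

Geometric growth keeps the amortized cost of each append constant, which is why a 1 KiB receive buffer feeding an arbitrarily large accumulation buffer is not a problem in itself.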

Reading plist many times vs. Creating object and read plist only once to access data in plist

I would like to make a data manager class that retrieves data from a plist. Should I make a class with class methods that read the plist every time a method is called and return the requested value, or write an initializer that loads the plist data into an array (instance variable), with instance methods that read from that array?
I would like to know which is more expensive: reading the plist many times (say, 50 times) or instantiating an object; or simply, which approach is better.
Thank you for your help in advance.
This is one of the classical tradeoffs in programming - speed vs. memory use. The technique of reading something once and storing it on a faster medium (in this example, in memory) is called caching. It's very popular, and for good reason. Mass storage devices are still magnitudes slower than RAM, and network access is magnitudes slower than local mass storage.
If you assume the data will be asked of the manager often, and if you assume the plist won't change (or you can detect changes), then read the plist on the first access to the getter, store it in an ivar, and return the ivar as long as the plist hasn't changed. This uses a little more memory, but is a lot faster for subsequent accesses.
NOTE: This approach would be harmful for very, very large files. If you are concerned about memory usage, implement the - (void)didReceiveMemoryWarning method in your viewControllers, and flush the cache (delete it) when you are low on memory.
A getter method could look like this:
- (NSArray *)data
{
    if (!cacheArray) {
        // What we do here is called "lazy initialization": we initialize the array
        // only when the data is first needed. This is elegant, because it ensures
        // the data will always be there when you ask for it, we don't initialize
        // for data that might never be needed, and we automatically re-fill the
        // array if it has been deleted (for instance because of low memory).
        cacheArray = ... // read the array from the plist here; be sure to retain or copy it
    }
    return cacheArray;
}
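The same lazy-initialization-plus-flush idea, as a tiny language-neutral C sketch (the names are mine, and load_count exists only to make the caching behavior visible): the expensive load runs on first access, and flushing the cache, as you would in didReceiveMemoryWarning, simply forces a reload on the next access.

```c
/* Module-level cache state; in the Objective-C version these are ivars. */
static int cache_valid = 0;
static int cached_value = 0;
int load_count = 0;

/* Stands in for the expensive plist read. */
static int expensive_load(void)
{
    load_count++;
    return 42;
}

/* Lazy getter: loads on first access, or on first access after a flush. */
int get_data(void)
{
    if (!cache_valid) {
        cached_value = expensive_load();
        cache_valid = 1;
    }
    return cached_value;
}

/* Flush the cache, e.g. in response to a memory warning. */
void flush_cache(void)
{
    cache_valid = 0;
}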

Cocoa app layout with Core Data and lots of business logic

For those who have seen my other questions: I am making progress, but I haven't yet wrapped my head around this aspect. I've been poring over stackoverflow answers and sites like Cocoa With Love, but I haven't found an app layout that fits (why such a lack of scientific or business app examples? Recipe and book examples are too simplistic).
I have a data analysis app that is laid out like this:
Communication Manager (singleton, manages the hardware)
DataController (tells Comm.mgr what to do, and checks raw data it receives)
Model (receives data from datacontroller, cleans, analyzes and stores it)
MainViewController (skeleton right now, listens to comm.mgr to present views and alerts)
Now, never will my data be directly shown on a view (like a simple table of entities and attributes), I'll probably use core plot to plot the analyzed results (once I figure that out). The raw data saved will be huge (10,000's of points), and I am using a c++ vector wrapped in an ObjC++ class to access it. The vector class also has the encodeWithCoder and initWithCoder functions which use NSData as a transport for the vector. I'm trying to follow proper design practices, but I'm lost on how to get persistent storage into my app (which will be needed to store and review old data sets).
I've read several sources that say the "business logic" should go into the model class. This is how I have it right now: I send the model the raw data, and it parses, cleans, and analyzes the results, then saves them into ivar arrays (of the vector class). However, I haven't seen a Core Data example yet whose managed object is anything but simple storage of very basic attributes (strings, dates), and they never have any business logic. So I wonder, how can I meld these two aspects? Should all of my analysis go into the data controller and have it manage the object context? If so, where is my model? It seems to break the MVC architecture if my data is stored in my controller: since these are vector arrays, I can't be constantly encoding and decoding them into NSData streams; they need a place to exist before I save them to disk with Core Data, and a place to exist after I retrieve them from storage and decode them for review.
Any suggestions would be helpful (even on the layout I've already started). I just drew some of the communication between objects to give you an idea. Also, I don't have any of the connections between the model and view/view controllers yet (using NSLog for now).
While vector<> is great for handling your data that you are sampling (because of its support for dynamically resizing underlying storage), you may find that straight C arrays are sufficient (even better) for data that is already stored. This does add a level of complexity but it avoids a copy for data arrays that are already of a known and static size.
NSData's -bytes returns a pointer to the raw data within an NSData object. Core Data supports NSData as one of its attribute types. If you know the size of each item in the data, then you can use -length to calculate the number of elements, etc.
On the sampling side, I would suggest using vector<> as you collect data and, intermittently, copy data to an NSData attribute and save. Note: I ran into a bit of problem with this approach (Truncated Core Data NSData objects) that I attribute to Core Data not recognizing changes made to NSData attribute when it is backed by an NSMutableData object and that mutable object's data is changed.
As for the MVC question: I would suggest that the data (model) is managed by the Model. Views and controllers can ask the Model for data (or subsets of data) in order to display it, but ownership of the data stays with the Model. In my case, which may be similar to yours, there were times when the Model returned abridged data sets (using the Douglas-Peucker algorithm). The views and controllers were none the wiser that points were being dropped, even though their requests to the Model may have played a role in that (graph scaling factors, etc.).
Update
Here is a snippet of code from my Data class which extends NSManagedObject. For a filesystem solution, NSFileHandle's -writeData: and methods for monitoring file offset might allow similar (better) management controls.
// Exposed interface for adding a data point to stored data
- (void)addDatum:(double_t)datum
{
    [self addToCache:datum];
}

- (void)addToCache:(double_t)datum
{
    if (cache == nil)
    {
        // This is temporary. Ideally, cache is separate from main store, but
        // is appended to main store periodically - and then cleared for reuse.
        cache = [NSMutableData dataWithData:[self dataSet]];
        [cache retain];
    }
    [cache appendBytes:&datum length:sizeof(double_t)];
    // Periodic copying of cache to dataSet could happen here...
}

// Called at end of sampling.
- (void)wrapup
{
    [self setDataSet:[NSData dataWithData:cache]]; // force a copy to alert Core Data of change
    [cache release];
    cache = nil;
}
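The cache-then-snapshot pattern above, sketched in plain C (the struct and function names are mine): samples accumulate in a growable mutable buffer, and wrapup takes a tight, independent copy, just as [NSData dataWithData:cache] does, so later mutations of the cache cannot affect what was handed to the store.

```c
#include <stdlib.h>
#include <string.h>

/* Mutable sampling cache: the C analogue of the NSMutableData ivar above. */
typedef struct {
    double *samples;
    size_t count;
    size_t capacity;
} SampleCache;

void cache_add(SampleCache *c, double datum)
{
    if (c->count == c->capacity) {
        c->capacity = c->capacity ? c->capacity * 2 : 64;
        c->samples = realloc(c->samples, c->capacity * sizeof(double));
    }
    c->samples[c->count++] = datum;
}

/* Wrapup: return an independent, exactly-sized copy of the collected
   samples, the analogue of [NSData dataWithData:cache]. */
double *cache_snapshot(const SampleCache *c, size_t *out_count)
{
    double *copy = malloc(c->count * sizeof(double));
    memcpy(copy, c->samples, c->count * sizeof(double));
    *out_count = c->count;
    return copy;
}
```

The copy is what makes the handoff safe: the sampling side can keep appending (or clear the cache for reuse) without invalidating data already committed.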
I think you may be reading into Core Data a bit too much. I'm not that experienced with it, so I speak as a non-expert, but there are basically two categories of data storage systems.
First is the basic database, like SQLite, PostgreSQL, or any number of solutions. These are meant to store data and retrieve it; there's no logic so you have to figure out how to manage the tables, etc. It's a bit tricky at times but they're very efficient. They're best at managing lots of data in raw form; if you want objects with that data you have to create and manage them yourself.
Then you have something like Core Data, which shouldn't be considered a database as much as an "object persistence" framework. With Core Data, you create objects, store them, and then retrieve them as objects. It's great for applications where you have large numbers of objects that each contain several pieces of data and relationships with other objects.
From my limited knowledge of your situation, I would venture a guess that a true database may be better suited to your needs, but you'll have to make the decision there (like I said, I don't know much about your situation).
As for MVC, my view is that the view should only contain display code, the model should only contain code for managing the data itself and its storage, and the controller should contain the logic that processes the data. In your case it sounds like you're gathering raw data and processing it before storing it, in which case you'd want to have another object to process the data before storing it in the model, then a separate controller to sort, manage, and otherwise prepare the data before the view receives it. Again, this may not be the best explanation (or the best methods for your situation) so take it with a grain of salt.
EDIT: Also, if you're looking on getting into Core Data more, I like this book. It explains the whole object-persistence-vs-database concept a lot better than I can.

How a class that wraps and provides access to a single file should be designed?

MyClass is all about providing access to a single file. It must CheckHeader(), ReadSomeData(), UpdateHeader(WithInfo), etc.
But since the file that this class represents is very complex, it requires special design considerations.
That file contains a potentially huge folder-like tree structure with various node types and is block/cell based to handle fragmentation better. Size is usually smaller than 20 MB. It is not of my design.
How would you design such a class?
Read a ~20MB stream into memory?
Put a copy on temp dir and keep its path as property?
Keep a copy of big things on memory and expose them as read-only properties?
GetThings() from the file with exception-throwing code?
This class(es) will be used only by me at first, but if it ends good enough I might open-source it.
(This is a question on design, but platform is .NET and class is about offline registry access for XP)
It depends on what you need to do with this data. If you only need to process it linearly one time, then it might be faster to just take the hit of holding the whole file in memory.
If however you need to do various things with the file beyond a single, linear parsing, I would parse the data into a lightweight database such as SQLite and then operate on that. This way all of your file's structure is preserved and all subsequent operations on the file will be faster.
Registry access is quite complex: you are basically reading a large binary tree, so the class design should rely heavily on the stored data structures. Only then can you choose an appropriate class design. To stay flexible you should model the primitives such as REG_SZ, REG_EXPAND_SZ, DWORD, SubKey, .... Don Syme's book Expert F# has a nice section about binary parsing with binary combinators. The basic idea is that your objects know by themselves how to deserialize from a binary representation. When you have a stream of bytes which is structured like this
<Header>
<Node1/>
<Node2>
<Directory1>
</Node2>
</Header>
you start with a BinaryReader to read the binary objects byte by byte. Since you know that the first thing must be the header you can pass it to the Header object
public class Header
{
    static Header Deserialize(BinaryReader reader)
    {
        Header header = new Header();
        int magic = reader.ReadByte();
        if (magic == 0xf4)       // we have a node entry
            header.Insert(Node.Read(reader));
        else if (magic == 0xf3)  // directory entry
            header.Insert(DirectoryEntry.Read(reader));
        else
            throw new NotSupportedException("Invalid data");
        return header;
    }
}
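The same dispatch-on-magic-byte idea as a runnable C sketch. The 0xf4/0xf3 tags come from the snippet above, but the trivial payload-free layout here is invented purely for illustration; a real hive parser would read a typed record after each tag.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy header: we only count the entries each magic byte announces. */
typedef struct {
    int nodes;
    int directories;
} Header;

/* Walk the buffer, dispatching on each magic byte.
   Returns 0 on success, -1 on an unrecognized tag ("invalid data"). */
int header_deserialize(Header *h, const uint8_t *buf, size_t len)
{
    h->nodes = 0;
    h->directories = 0;
    for (size_t i = 0; i < len; i++) {
        if (buf[i] == 0xf4)
            h->nodes++;            /* node entry */
        else if (buf[i] == 0xf3)
            h->directories++;      /* directory entry */
        else
            return -1;             /* invalid data */
    }
    return 0;
}
```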
To stay performant you can e.g. delay parsing the data up to a later time when specific properties of this or that instance are actually accessed.
Since the registry in Windows can get quite big, it is not possible to read it completely into memory at once; you will need to chunk it. One solution that Windows applies is that the whole file is allocated in paged pool memory, which can span several gigabytes, but only the actually accessed parts are paged in from disk. That allows Windows to deal with a very large registry file in an efficient manner. You will need something similar for your reader as well. Lazy parsing is one aspect, and the ability to jump around in the file without reading the data in between is crucial to stay performant.
More infos about paged pool and the registry can be found there:
http://blogs.technet.com/b/markrussinovich/archive/2009/03/26/3211216.aspx
Your API design will depend on how you read the data to stay efficient (e.g. use a memory-mapped file and read from different mapped regions). .NET 4 includes a quite good memory-mapped file implementation, but wrappers around the OS APIs exist as well.
Yours,
Alois Kraus
To support delayed loading from a memory-mapped file, it would make sense not to read the byte array into the object and parse it later, but to go one step further and store only the offset and length of the memory chunk from the memory-mapped file. Later, when the object is actually accessed, you can read and deserialize the data. This way you can traverse the whole file and build a tree of objects that contain only the offsets and a reference to the memory-mapped file. That should save huge amounts of memory.
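A POSIX C sketch of that delayed-loading idea (assuming mmap; .NET 4's MemoryMappedFile and Win32's MapViewOfFile play the same role): the parser records only offsets into the mapping, and the kernel pages the bytes in from disk the first time they are actually touched.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* A read-only memory mapping of a file; names are mine. */
typedef struct {
    const unsigned char *base;  /* start of the mapping */
    size_t file_size;
} MappedFile;

int mapped_open(MappedFile *m, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    struct stat st;
    fstat(fd, &st);
    m->file_size = (size_t)st.st_size;
    m->base = mmap(NULL, m->file_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                  /* the mapping keeps the file alive */
    return m->base == MAP_FAILED ? -1 : 0;
}

/* "Deserialize on access": an object that stored only (offset, length)
   resolves its bytes here, and only these pages get faulted in. */
const unsigned char *mapped_chunk(const MappedFile *m, size_t offset)
{
    return m->base + offset;
}
```

Because nothing is copied up front, a tree of (offset, length) records over a multi-gigabyte hive costs only the size of the records themselves.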

Does CGDataProviderCopyData() actually copy the bytes? Or just the pointer?

I'm running that method in quick succession as fast as I can, and the faster the better, so obviously if CGDataProviderCopyData() is actually copying the data byte-for-byte, then I think there must be a faster way to directly access that data...it's just bytes in memory. Anyone know for sure if CGDataProviderCopyData() actually copies the data? Or does it just create a new pointer to the existing data?
The bytes are copied.
Internally a CFData is created (CFDataCreate) with the content of the data provider. The CFDataCreate function always makes a copy.
Anyone know for sure if CGDataProviderCopyData() actually copies the data? Or does it just create a new pointer to the existing data?
Those are the same thing. Pointers are memory addresses; by definition, if you have the same data at two addresses, it is the same data in two places, so you must have copied it (either from one to the other or to both from a common origin).
So, let's restate the question accordingly:
Or does it just copy the existing pointer?
Quartz can't necessarily do this, because data providers do not necessarily provide an existing pointer, as they can be implemented as essentially stream-based (sequential) providers instead.
What about direct-access providers? Even those need not cough up a byte pointer; the provider may simply offer range-on-demand access instead.
But what if it does offer a byte pointer? Well, the documentation for that says:
You must not move or modify the provider data until Quartz calls your CGDataProviderReleaseBytePointerCallback function.
So, conceivably, Quartz could reuse the pointer. But what if you release the data provider (causing your ReleaseBytePointer callback to be called) before you release the data?
This could still be safe if Quartz implements a private custom subclass of CFData or NSData that either implements faulting or takes over the job of calling ReleaseBytePointer, so that if you create a direct-access provider and create a CFData from it and release the provider, you can still use the CFData object.
But that's a lot of ifs. They probably just create a plain old (bytes-copying-at-creation-time) CFData, which makes it a valid performance concern.
Profile it and see how much pain it's causing you. If it's enough to worry about, then you need some solutions:
You could just implement ReleaseBytePointer as a no-op (empty function body) and release the bytes separately, making sure to do so after releasing both the provider and the data. In theory, this prevents the bytes from going away out from under the CFData if it is using the original bytes pointer and Quartz doesn't implement a custom CFData subclass. A little hairy. Unfortunately, Apple can't really rely on you doing this, so I doubt it will actually help.
Handle an NS/CFData directly instead. Create the data provider only to pass it to Quartz, and release it and forget about it immediately thereafter (not own it yourself).
Depending on your needs, you may prefer to keep your callbacks structure in an instance variable and call them directly to copy parts of the data. Of course, if this solution works for you, then you don't have the problem described above anyway, since you aren't creating a here-you-can-have-my-bytes-pointer direct-access data provider.
The documentation for CGDataProviderCreateWithCFData doesn't say whether it returns a direct-access data provider or not, so you'll have to err on the side of caution if that's how you're creating your data provider.
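To see why the copy matters, here is a plain C sketch (the callback struct mimics, but is not, the real CGDataProviderDirectCallbacks API; all names are illustrative): once the consumer has duplicated the bytes, the provider's buffer can change or disappear without affecting the result, which is exactly the safety that byte-for-byte copying buys.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical direct-access provider callbacks, loosely modeled on
   Quartz's direct-access data provider interface. */
typedef struct {
    const void *(*getBytePointer)(void *info);
    void (*releaseBytePointer)(void *info, const void *pointer);
} DirectCallbacks;

/* "Copying the data" in the CGDataProviderCopyData() sense: even though
   the provider can hand out a pointer, the consumer duplicates the bytes
   so the result no longer depends on the provider's buffer staying alive. */
unsigned char *copy_provider_data(DirectCallbacks *cb, void *info, size_t len)
{
    const void *p = cb->getBytePointer(info);
    unsigned char *copy = malloc(len);
    memcpy(copy, p, len);
    cb->releaseBytePointer(info, p);  /* provider may now move or free its bytes */
    return copy;
}

/* A toy provider whose "info" is just a caller-owned buffer. */
static const void *toy_get(void *info) { return info; }
static void toy_release(void *info, const void *p) { (void)info; (void)p; }
```

The alternative the answer describes, sharing the pointer, would make the returned data a dangling reference the moment the provider releases its buffer, which is why Quartz most likely copies.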