Amazon S3 (AWS) NSMutableData - iPhone

I have a project that downloads big files (above 50 MB) from Amazon S3. The download stops without an error, so because of the large file size I split the file into smaller chunks and download them simultaneously. But when I append the chunk data into a single NSMutableData in the correct order,
the video won't play. Any idea what's going wrong here?
Please help me, I've been stuck on this project for a whole week T_T..

You shouldn't manage this amount of data in RAM alone.
You should use secondary storage instead (namely NSFileManager), as explained here.
When you're done downloading the file, play it normally. If you're sure the user won't need it anymore, just delete it right after playback.
[edit]
Or, you might as well just use MPMoviePlayerController pointing to that URL directly.

What you need to do is create a file of the appropriate size first. Each downloader object must know the offset in the file to put its data at, and it should write the data as it arrives rather than store it in a mutable data object. This will greatly lower the memory footprint of the operation.
There is a second component: you must set the F_NOCACHE flag on the open file (via fcntl) so iOS does not keep the file writes in its cache.
With both of these it should work fine. Also use a lot of asserts during development so you know ASAP if something fails - so you can correct whatever the problem is.
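If it helps, here is a minimal sketch of that approach using the POSIX calls available on iOS (ftruncate, pwrite and the F_NOCACHE fcntl). The function names, parameters and assert-based error handling are just illustrative, not a drop-in implementation:

    // Minimal sketch of the approach described above, using the POSIX layer
    // available on iOS. Error handling is reduced to asserts for brevity.
    #include <assert.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    int open_preallocated(const char *path, off_t total_size)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0644);
        assert(fd >= 0);

        // Grow the file to its final size up front so every chunk
        // has a fixed slot to land in.
        int rc = ftruncate(fd, total_size);
        assert(rc == 0);

        // Darwin/iOS-specific: ask the OS not to keep these writes in its
        // buffer cache, so a 50 MB+ download does not balloon memory usage.
        rc = fcntl(fd, F_NOCACHE, 1);
        assert(rc != -1);

        return fd;
    }

    // Called by each downloader as soon as a chunk arrives; nothing is
    // accumulated in an NSMutableData.
    void write_chunk(int fd, const void *bytes, size_t length, off_t offset)
    {
        ssize_t written = pwrite(fd, bytes, length, offset);
        assert(written == (ssize_t)length);
    }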

Related

Verify .mat file exists and is not Corrupt - Matlab

I have 2 independent Matlab workers, with FIRST getting/saving data and SECOND reading it (and doing some calculations etc.).
FIRST saves the data as a .mat file on the hard disk while SECOND reads it from there. It takes ~20 seconds to SAVE this data as .mat and ~8 ms to DELETE it. Before SAVING data, FIRST deletes the old file and then saves a newer version.
How can SECOND verify that the data exists and is not corrupt? I can use exist, but that doesn't tell me whether the data is corrupt or not. For example, if SECOND tries to read the data exactly when FIRST is saving it, exist passes but LOAD gives you an error saying - Data Corrupt etc.
Thanks.
You can't, without some synchronization mechanism - by the time SECOND completes its check and starts to read the file, FIRST might have started writing it again. You need some sort of lock or mutex.
Two options for base Matlab.
If this is on a local filesystem, you could use a separate lock file sitting next to the data file to manage concurrent access to the data file. Use Java's NIO FileChannel and FileLock objects from within Matlab to lock the first byte of the lock file and use that as a semaphore to control access to the data file, so the reader waits until the writer is finished and vice versa. (If this is on a network filesystem, don't try this - file locking may seem to work but usually is not officially supported and in my experience is unreliable.)
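For illustration only: the answer above does this with Java NIO from inside Matlab, but the underlying idea (lock the first byte of a separate lock file and use it as a semaphore for the data file) looks roughly like this when sketched with POSIX fcntl record locks. File names are placeholders, and the same caveat about network filesystems applies:

    // Sketch of the "lock the first byte of a lock file" idea, here with
    // POSIX record locks instead of Java NIO FileLock.
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int lock_first_byte(int fd, short type)   // F_WRLCK or F_UNLCK
    {
        struct flock fl = {0};
        fl.l_type   = type;
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;
        fl.l_len    = 1;                  // just the first byte, used as a semaphore
        return fcntl(fd, F_SETLKW, &fl);  // F_SETLKW blocks until the lock is acquired
    }

    int main(void)
    {
        int lockfd = open("data.mat.lock", O_RDWR | O_CREAT, 0644);
        if (lockfd < 0) { perror("open"); return 1; }

        lock_first_byte(lockfd, F_WRLCK);   // writer and reader both take this
        // ... write (or read) data.mat while holding the lock ...
        lock_first_byte(lockfd, F_UNLCK);

        close(lockfd);
        return 0;
    }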
Or you could just put a try/catch around your load() call and have it pause a few seconds and retry if you get a corrupt file error. The .mat file format is such that you won't get a partial read if the writer is still writing it; you'll get that corrupt file error. So you could use this as a lazy sort of collision detection and backoff. This is what I usually do.
To reduce the window of contention, consider having FIRST write to a temporary file in the same directory, and then use a rename to move it to its final destination. That way the file is only unavailable during a quick filesystem move operation, not the 20 seconds of data writing. If you have multiple writers, stick the PID and hostname in the temp file name to avoid collisions.
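As a sketch of that temp-file-plus-rename pattern (on the same filesystem, Matlab's movefile typically boils down to the same OS-level rename): rename() replaces the target atomically, so a reader only ever sees the old complete file or the new one. The paths and PID suffix below are placeholders:

    // Sketch of "write to a temp file, then move it into place".
    // rename() within a single filesystem replaces the target atomically.
    #include <stdio.h>
    #include <unistd.h>

    int publish_data(const char *final_path, const void *bytes, size_t length)
    {
        char tmp_path[256];
        snprintf(tmp_path, sizeof tmp_path, "%s.tmp.%ld", final_path, (long)getpid());

        FILE *f = fopen(tmp_path, "wb");
        if (!f) return -1;

        // The slow part (the ~20 s save) happens on the temp file,
        // where no reader is looking.
        if (fwrite(bytes, 1, length, f) != length) { fclose(f); return -1; }
        fclose(f);

        // The quick part: atomically swap it into place.
        return rename(tmp_path, final_path);
    }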
Sounds like a classic reader-writer resource-sharing problem between two workers.
In short, you should find a safe method of inter-worker communication. Check this out.
Also, try to type
showdemo('paralleldemo_communic_prof')
in Matlab

How big an XML file can we parse in an iPhone application

I will have an XML file on a server. It will store information about roughly 600 stores; the information includes name, address, opening time, and coordinates. Is it OK to parse the whole file on the iPhone and then select the nearest stores according to coordinates?
I am thinking about processing time and memory use.
Please suggest.
The way I would do this is to write a web service, pass it the coordinates, and download only the stores within a certain radius. Always try to download as little data as possible to the iPhone (especially XML data).
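As a sketch of the server-side filtering such a web service could do: the Store struct, field names and the haversine-based distance check below are illustrative assumptions, not part of any existing API:

    // Keep only the stores within radius_km of the user; in a real service
    // this subset is what gets serialized and sent to the phone.
    #include <math.h>
    #include <stdio.h>

    typedef struct { const char *name; double lat; double lon; } Store;

    #define EARTH_RADIUS_KM 6371.0

    static double haversine_km(double lat1, double lon1, double lat2, double lon2)
    {
        double rad  = M_PI / 180.0;
        double dlat = (lat2 - lat1) * rad;
        double dlon = (lon2 - lon1) * rad;
        double a = sin(dlat / 2) * sin(dlat / 2) +
                   cos(lat1 * rad) * cos(lat2 * rad) * sin(dlon / 2) * sin(dlon / 2);
        return 2.0 * EARTH_RADIUS_KM * atan2(sqrt(a), sqrt(1.0 - a));
    }

    void nearby_stores(const Store *stores, int count,
                       double user_lat, double user_lon, double radius_km)
    {
        for (int i = 0; i < count; i++) {
            if (haversine_km(user_lat, user_lon, stores[i].lat, stores[i].lon) <= radius_km)
                printf("%s\n", stores[i].name);
        }
    }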
I just put this here
http://quatermain.tumblr.com/post/93651539/aqxmlparser-big-memory-win
A simple solution would be to group them into clusters that are somehow related, probably by location. You already have an XML file on a server, so simply split it up into 3 groups of around 200 related stores each, or preferably even smaller. I'm not entirely sure why you would want to store 600 data points of that nature; I feel that if you filter/shrink on the server side you could save a lot of time/memory.
I have seen people store 300-400 data points, though it is so dependent on how large the objects in your Core Data database are that it is probably best for you to just run some tests.

Core Data vs file access

I have hundreds of files which need to be accessed to display content on the iPhone. They are all plists.
Which one is faster, Core Data or file access? Which one is more secure?
You have to consider the file size first. A nice rule of thumb found on these boards: if the file is under 100 kB you can store it as a BLOB attribute in an entity; if it is greater than that you may want to create a dedicated entity for it; and if it exceeds 1 MB in size you should access it through the filesystem.
Secondly, you should evaluate the cost of the operations too. A hundred files may sound like a lot, but if you access them only a few times, plain file access may be the way to go; on the other hand, if you need that stored information frequently, you could create dedicated Core Data entities and load the files at startup. And so on.
This is a nice book on Core Data. You can find many guidelines by reading it, but also keep in mind the general guidelines of database design.
If they are static files I would recommend pre-loading them into a Core Data SQLite file. That would yield far better performance, especially if you structure your model properly.

Reason for monolithic data files

Primarily this seems to be a technique used by games, where they have all the sounds in one file, all the textures in another, etc., with these files commonly reaching gigabyte sizes.
What is the reason for doing this rather than keeping it all as small files in subdirectories - one per texture - which many small games use, while the monolithic system is favoured by larger companies?
Is there some file system overhead with lots of small files?
Are they trying to protect their property - although most just seem to be a compressed file with a new extension?
The reasons we use an "archive" system like this where I work (a game development company):
lookup speed: We rarely need to iterate over files in a directory; we're far more often looking them up directly by name. By using a custom "file allocation table" that is essentially just a sequence of hash( normalized_filename ) -> [ offset, size ], we can look up files very quickly. We can also keep this index in RAM, potentially interleave it with other index tables, etc. (a sketch of such a table follows this list of reasons).
(When we do need to iterate, we can either easily iterate over all files in a .arc, or we can store a list of filenames, a list of hash-of-filenames, or just a list of [ offset, size ] pairs somewhere -- maybe even as a file in the archive. This is usually faster than directory-traversal on a FS.)
metadata: It's easy for us to tuck in any file metadata we want. For example, a single bit in the "size" field indicates whether the file is compressed or not (if it is, it has a header with more details about how to decompress it). We can even vary compression on pieces of a file if we know enough about the structure of the file ahead of time (we do this for sprite archives).
size: One of the devices we use has a "file size must be a multiple of X" requirement, where X is large compared to some of our files. For example, some of our lua scripts end up being just a few hundred bytes when compiled; taking extra overhead per .luc file adds up quickly.
alignment: on the other hand, sometimes we want to waste space. To take advantage of faster streaming (e.g. background DMA) from the filesystem, some of our files do want to obey certain alignment/size requirements. We can take care of that right in the tool, and the align/size we're shooting for doesn't necessarily have to line up with the underlying FS, allowing us to waste space only where we need it.
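To make the "lookup speed" point above concrete, here is a minimal sketch of such an index table. The struct layout and the FNV-1a hash are illustrative choices, not any particular engine's real format:

    // hash( normalized_filename ) -> [ offset, size ] index, kept in RAM.
    #include <ctype.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint32_t name_hash;   // hash of the normalized (lowercased, '/'-separated) path
        uint32_t offset;      // byte offset of the file inside the .arc
        uint32_t size;        // a spare bit here could double as a "compressed" flag
    } ArcEntry;

    static uint32_t hash_normalized(const char *name)
    {
        uint32_t h = 2166136261u;                  // FNV-1a
        for (; *name; name++) {
            char c = (char)tolower((unsigned char)*name);
            if (c == '\\') c = '/';                // normalize path separators
            h = (h ^ (uint8_t)c) * 16777619u;
        }
        return h;
    }

    // Linear scan for clarity; a real table would be sorted by hash and
    // binary-searched, or bucketed.
    const ArcEntry *arc_lookup(const ArcEntry *table, size_t count, const char *name)
    {
        uint32_t h = hash_normalized(name);
        for (size_t i = 0; i < count; i++)
            if (table[i].name_hash == h)
                return &table[i];
        return NULL;   // fall through to the next archive or the device FS
    }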
But those are the mundane reasons. The more fun stuff:
Each .arc registers itself in a list, and file-open attempts know to look in the arcs. We search already-in-RAM archives first, then archives on the device FS, then the actual device FS. This gives us a ton of flexibility:
dynamic additions to the filesystem: at any time we can stream a new file or archive to the machine in question (over the network or the like) and have it appear as part of the "logical" filesystem; this is great when the actual FS resides in ROM or on a CD, and allows us to iterate much more quickly than we could otherwise.
(Doom's .wad system is a sort of example of the above, which allows modders to more easily override assets and scripts built into the game.)
possibility of no underlying fs: It's possible to use bin2obj to embed an entire arc directly in the executable (.rodata) at link time, at which point you don't ever need to look at the device FS -- we do this for certain small demo builds and the like. We can also send levels across the network or savegame-sneakernet this way. =)
organization and load/unload: since we can load and unload and override virtual "pieces" of our filesystem at any time, we can do some performance tricks with having the number of files in the FS be very small at any given time. We can additionally specify that an entire archive be loaded into memory, index table and data; our file load code is smart enough to know that if the file is already in memory, it doesn't need to do anything to read it other than move a pointer around. Some of the higher level code can actually detect that the file is in ram and just ask for the probably-already-looks-like-a-struct pointer directly.
portability: we only need to figure out how to get a few files on each new device we use, and then the remainder of the FS code is more or less the same. =) We do change the tool output a bit occasionally (for alignment reasons), but most of the processing remains the same.
de-duplication: with smarter archives, such as our sprite archives, we can (and do) de-duplicate data. If "jump" animation's fifth frame and "kick"'s third frame are the same, we can pull apart the file and only store one copy of that frame. We can do the same for whole files.
We ported a PC game to a system with much slower FS access recently. We didn't change the data format, and it turns out iterating through a dir on the raw device FS to load a hundred small XML files was absolutely killing our load times. The solution we used was to take each dir, make it into its own subdir.arc, and stick it in the master game.arc compressed. When the dir was needed (something like opendir was called) we decompressed the entire subdir.arc into RAM, added it to the filesystem, then iterated through it super-quickly.
It's the ability to throw something like this together in a few hours, and to ease the pain of porting across systems, that makes stuff like this worthwhile.
File systems do have an overhead. Usually, a file takes up disk space rounded up to the filesystem's allocation unit (e.g. up to 4 KB per file), so many small files waste space. Some modern file systems try to mitigate that, but AFAIK it's not widespread yet. Additionally, file systems are often quite slow when accessing many files. E.g. it is usually considerably faster to copy one 400 MB file than 4000 100 KB files.
File systems come in handy when you have to modify files, because they handle changing file sizes much better than any simple home-grown solution. However, that's certainly not the case for constant game data.
On Apple systems, the most common way is to use, as you suggest, directories. They are called Bundles, and are represented in the Finder as just one file, but if you explore them further, they're actually directories. This makes writing code and conserving memory when loading individual items out of such a bundle very easy. :-) It also makes taking incremental backups of gigantic databases easy: for instance, your iPhoto database is just a bundle, so you only back up changed and new files.
On Windows, however, I believe this is much harder to do, it will look like a directory "no matter what" (I'm sure smart people have found a solution that will make Explorer see certain directories as a single file, but it's not common).
From a game developer's point of view, the files aren't so small that disk space overhead is a big concern, so I doubt #doublep's suggestion is the main reason, since it makes for such a hassle. A single file does make things much easier if users are to copy an entire game over somewhere, though, because then it's easy to check that the entire set is intact.
And, of course, it's harder to read for people who shouldn't have access to it. But it's also harder to modify, which means harder to patch, and harder to write extensions for. A game that relies on extensions a lot prefers the directory structure: The Sims.
Were I the games developer, I'd love to go for individual files. Then again, I'd be using bundles as I'd be writing for the Mac ;-)
Cheers
Nik
I can think of multiple reasons.
As doublep suggested, files occupy more space on the disc than they require, so an archive saves space. With 4 KB clusters, each file wastes about 2 KB on average, so 10k files (of any size) should save you around 20 MB when packed into an archive. Not exactly a large amount of space nowadays, but still.
The other reason I can think of is disc fragmentation. I suspect a heavily fragmented disc will perform worse when accessing thousands of separate files on a fragmented space. But I'm no expert in this field, so I'd appreciate if someone more experienced verified this.
Finally, I think this may also have something to do with restricting access to separate game files. You can have a bunch of Lua scripts exposed, mess with them and break something. Or you could have the outro cinematic/sound/text/whatever exposed and get spoiled by accessing it. I do that myself as well: I encrypt images with a multipass XOR key, pack text files and config variables into a monolithic file (zipped for extra security) and only leave music freely accessible. This way, the game's secrets will remain undiscovered for a bit longer :).
Or there may be another reason I never thought about :D.
As you know, games, especially from larger companies, try to squeeze out as much performance as they can. One technique is to have all the data in one large file and just DMA it to memory (think of it as a memcpy from CD to RAM). Since all the files are packed into one large one, there are no per-file disk seeks, so a large number of files (which would otherwise cause a large amount of seeking) can all be loaded quickly thanks to this technique.

How can I write a program that can recover files in FAT32

How can I write a program that can recover files in FAT32?
This is pretty complex, but FAT32 is very well documented.
I wrote a tool for direct FAT32 access once using only these resources:
http://en.wikipedia.org/wiki/File_Allocation_Table
http://support.microsoft.com/kb/154997/
http://www.microsoft.com/whdc/system/platform/firmware/fatgen.mspx
But I've never actually tried to recover files. Whether you can successfully recover a file depends on several factors:
The file must still "exist" physically on the hard disk
You must know where the file starts
You must know what you are looking for (Headers..)
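To make those last two points concrete, here is a sketch of the 32-byte FAT32 short directory entry as laid out in the fatgen document linked above. A deleted file has the first byte of its name replaced with 0xE5, but the entry usually still records the starting cluster and size, which is what a recovery tool goes looking for (assuming the clusters haven't been reused):

    // 32-byte FAT32 short directory entry, per the FAT specification.
    #include <stdint.h>

    #pragma pack(push, 1)
    typedef struct {
        uint8_t  name[11];         // 8.3 name; 0xE5 here means "deleted", 0x00 "end of directory"
        uint8_t  attr;
        uint8_t  nt_reserved;
        uint8_t  crt_time_tenth;
        uint16_t crt_time;
        uint16_t crt_date;
        uint16_t last_access_date;
        uint16_t first_cluster_hi; // high 16 bits of the starting cluster (FAT32 only)
        uint16_t write_time;
        uint16_t write_date;
        uint16_t first_cluster_lo; // low 16 bits of the starting cluster
        uint32_t file_size;        // size in bytes
    } Fat32DirEntry;
    #pragma pack(pop)

    // Where the (possibly orphaned) data starts, if the clusters were not reused.
    static uint32_t start_cluster(const Fat32DirEntry *e)
    {
        return ((uint32_t)e->first_cluster_hi << 16) | e->first_cluster_lo;
    }

    static int is_deleted_entry(const Fat32DirEntry *e)
    {
        return e->name[0] == 0xE5;
    }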
It depends on what happened to the files you're trying to recover. The data may still be on the partition, or it could be overwritten by now. There are a lot of pre-written solutions; a simple Google search should give you a plethora of software that can try to recover the data, but there's no guarantee of getting it back. If you really want to recover the files yourself, you'll need to write something that reads the raw partition and ignores the deleted-file markers.
Here is a program (written by Thomas Tempelman - this guy is great) that might help you out. You can make a copy of the partition, ignoring corrupt bits, then operate on the copy so you don't mess anything up, and you may also be able to recover the data directly with it.
I think you are referring to data carving, that is, reading the physical device and reconstructing previously unlinked files based on some knowledge (e.g. when you find the two letters PK, it's highly probable that a zip archive follows; same for JFIF for JPEG).
In this case, I suggest you study the source code of PhotoRec, a great (in my opinion, the best) open source tool for data carving.
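For a feel of what signature-based carving looks like, here is a toy sketch that scans a raw partition image for a couple of well-known magic byte sequences (PK 03 04 for zip, FF D8 FF for JPEG) and prints the offsets. It is only an illustration: it doesn't validate matches and even misses signatures that straddle buffer boundaries, all of which PhotoRec handles properly:

    // Toy signature scanner over a raw image; illustration only.
    #include <stdio.h>
    #include <string.h>

    typedef struct { const char *label; const unsigned char *magic; size_t len; } Signature;

    int main(int argc, char **argv)
    {
        static const unsigned char zip_magic[]  = { 'P', 'K', 0x03, 0x04 };
        static const unsigned char jpeg_magic[] = { 0xFF, 0xD8, 0xFF };
        const Signature sigs[] = {
            { "zip",  zip_magic,  sizeof zip_magic  },
            { "jpeg", jpeg_magic, sizeof jpeg_magic },
        };

        if (argc < 2) { fprintf(stderr, "usage: %s /path/to/raw_image\n", argv[0]); return 1; }
        FILE *dev = fopen(argv[1], "rb");
        if (!dev) { perror("fopen"); return 1; }

        unsigned char buf[4096];
        long long base = 0;
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, dev)) > 0) {
            for (size_t i = 0; i < n; i++) {
                for (size_t s = 0; s < sizeof sigs / sizeof sigs[0]; s++) {
                    if (i + sigs[s].len <= n && memcmp(buf + i, sigs[s].magic, sigs[s].len) == 0)
                        printf("possible %s at offset %lld\n", sigs[s].label, base + (long long)i);
                }
            }
            base += (long long)n;
        }
        fclose(dev);
        return 0;
    }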