I am writing an iPhone application that reads a text file using NSData.
NSFileHandle *fileRead;
// This may crash or cause memory problems if the file is about 10 MB or larger.
NSData *data = [fileRead readDataOfLength:/* entire length of the file */];
For example, if the file size is 8 KB, we could read it in 8 iterations of 1 KB each.
Before reading the file, how can we find the size of its contents, so that we can write the code to read the file efficiently?
Please give me your suggestions.
The iPhone has all of the standard POSIX APIs, so you can use stat("file.txt", &st), where st is a struct stat. The st.st_size member will give you the file size in bytes.
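For example (a minimal sketch; file.txt is a placeholder path):
#include <sys/stat.h>

struct stat st;
if (stat("file.txt", &st) == 0) {
    long long size = st.st_size; /* file size in bytes */
}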
If you really want to read the entire file, just use readDataToEndOfFile and don't worry about determining the length beforehand.
From an NSFileHandle, you can find the length by calling seekToEndOfFile and then offsetInFile. If you have the actual file name, you could instead use NSFileManager's attributesOfItemAtPath:error: to retrieve the length (as with C's stat). Or, for that matter, you could use stat itself, or fstat on the file descriptor returned by NSFileHandle's fileDescriptor method.
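Putting that together, here is one possible sketch of finding the length with NSFileHandle and then reading in fixed-size chunks; the path and the 1 KB chunk size are just for illustration:
NSFileHandle *fileRead = [NSFileHandle fileHandleForReadingAtPath:@"/path/to/file.txt"];
[fileRead seekToEndOfFile];
unsigned long long length = [fileRead offsetInFile]; // total size in bytes
[fileRead seekToFileOffset:0];                       // rewind before reading

unsigned long long offset = 0;
while (offset < length) {
    NSData *chunk = [fileRead readDataOfLength:1024]; // returns at most 1 KB
    // process the chunk here...
    offset += [chunk length];
}
[fileRead closeFile];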
Is it possible to create a PNG file with a predefined CRC? (Kind of a programming challenge...)
I have a Python script to generate hex codes with the target CRC, but I'm not sure how to make a valid PNG out of it.
By the way, it may be that I'm talking nonsense, but it sounds possible in theory (right?)
You can use spoof.c to do that, either at the level of a PNG chunk or at the level of the entire file. (Note that a PNG file does not contain a CRC of the whole thing, only CRCs of the chunks.)
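For context, each chunk's CRC is a plain CRC-32 over the chunk type and chunk data (the length field is excluded), which is the value spoof has to hit. A sketch of computing it with zlib (chunkType, chunkData, and chunkDataLen are hypothetical buffers):
#include <zlib.h>

uLong crc = crc32(0L, Z_NULL, 0);                         /* initial CRC */
crc = crc32(crc, (const Bytef *)chunkType, 4);            /* 4-byte chunk type */
crc = crc32(crc, (const Bytef *)chunkData, chunkDataLen); /* chunk data */
/* 'crc' must equal the 4-byte CRC field stored at the end of the chunk. */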
I have a data file (6.3GB) that I'm attempting to work on in MATLAB, but I'm unable to get it to load, and I think it may be a memory issue. I've tried loading in a smaller "sample" file (39MB) and that seems to work, but my actual file won't load at all. Here's my code:
filename = 'C://Users/Andrew/Documents/filename.mat';
load(filename);
??? Error using ==> load
Can't read file C://Users/Andrew/Documents/filename.mat.
EDU>> exist(filename)
ans = 2
Well, at least the file exists. When I check the memory...
memory
Maximum possible array: 2046 MB (2.146e+009 bytes) *
Memory available for all arrays: 3442 MB (3.609e+009 bytes) **
Memory used by MATLAB: 296 MB (3.103e+008 bytes)
Physical Memory (RAM): 8175 MB (8.572e+009 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
So since I have enough RAM, do I need to increase the maximum possible array size? If so, how can I do that without adding more RAM?
System specifics: I'm running 64-bit Windows, 8GB of RAM, MATLAB Version 7.10.0.499 (R2010a). I think I can't update to a newer version since I'm on a student license.
As the size might be the issue, you could try load('fileName.mat', 'var1'); load('fileName.mat', 'var2'); and so on. For this you'll have to know the variable names, though; you can list them without loading anything, as sketched below.
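For instance, whos with the '-file' option lists the variables without loading any data (a minimal sketch, reusing the filename from the question):
vars = whos('-file', filename);   % names, sizes, and classes; no data loaded
for k = 1:numel(vars)
    fprintf('%s: %s\n', vars(k).name, mat2str(vars(k).size));
end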
An option would be to use the matfile object to load from/index directly into the file instead of loading it into RAM.
doc matfile
One limitation, though, is that you cannot index directly into a struct. So you would need to find a friend (with a machine that can load the file) to convert the struct in your MAT-file into plain variables and save it with the version option
save(filename, variables, '-v7.3')
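Once the file has been re-saved as -v7.3, partial loading looks roughly like this (var1 and the index range are hypothetical):
m = matfile(filename);        % opens the file; no data is read yet
sz = size(m, 'var1');         % dimensions without loading the variable
block = m.var1(1:1000, :);    % reads only the requested rows into RAM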
Maybe you can load your data part by part, using partial loading of variables from the MAT-file. You must have MATLAB 7.3 or newer.
From your file path I can see you are using Windows. MATLAB is only 32-bit for Windows and Linux (there is no 64-bit for these OSes, at least for older releases; please see my edit), which means you are limited to <4GB of RAM total for a single application, no matter how much your system has. This is a 32-bit application issue, so there is nothing you can do to remedy it. Interestingly, the Mac version is 64-bit and can use as much RAM as you want (in my computer vision class we often used my Mac for our big video projects because the Windows machines would just say "out of memory").
As you can see from your memory output, you can only have ~3.4GB total for matrix storage, which is far less than the 6.3GB file. You'll also notice you can only use ~2GB for any one matrix (that number shrinks as you use more memory).
Typically when working with large files you can read the file line by line rather than loading it all into memory, but since this is a .mat file that likely won't work. If the file contains multiple variables, maybe separate each into its own individual file that is small enough to load.
The take-home message is that you can't read the entire file at once unless you hop onto a Mac with enough RAM, and even then the limit for a single matrix is still likely less than 6.3GB.
EDIT
Current MATLAB student versions can be purchased in 64-bit for all OSes as of 2014 (see here), so a newer release of MATLAB might allow you to read the entire file at once. I should also add that 64-bit versions existed before 2014, but not for the student license.
Does
pcap_t *pcap_open_offline(const char *fname, char *errbuf)
from libpcap read the whole pcap file into memory? If not, do I have to use tcpslice or similar tools to split the pcap file up?
Thanks.
A strange way of wording your question, but I'll try to answer what I can.
pcap_open_offline() takes a .dump file (or similarly named output from tcpdump, tcpslice, or libpcap's pcap_dump_open() + pcap_dump() functions) as an input.
This file is exactly the same in format and function as a live trace of a network device, i.e., you can use this pcap_t object with pcap_next, pcap_loop, etc.
Altering a dump file so that it no longer follows the capture-file format (e.g., stripping out information by hand) will render it unreadable by pcap_open_offline(); tools such as tcpslice or wireshark that write valid capture files are fine, though.
However, it does not load the entire file into memory at once. It streams the file, just as you would stream packets from a live trace.
To summarize: pcap_open_offline() opens an unaltered tcpdump/tcpslice dump and reads it like a live stream. It does not load the entire file into memory, as dumps can get quite large! Instead it goes through the file loading only one packet's worth at a time.
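A minimal sketch of that streaming pattern (capture.pcap is a placeholder name):
#include <pcap/pcap.h>
#include <stdio.h>

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_offline("capture.pcap", errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }

    struct pcap_pkthdr *hdr;
    const u_char *data;
    /* pcap_next_ex returns 1 per packet and -2 at the end of a savefile,
       so only one packet is in memory at a time. */
    while (pcap_next_ex(p, &hdr, &data) == 1) {
        printf("packet: %u bytes\n", hdr->caplen);
    }
    pcap_close(p);
    return 0;
}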
I am reading a .caf file with my program.
I use AudioFileReadBytes, but the OSStatus it returns is -39. What does this mean?
Thanks.
Error number -39 (negative thirty-nine) is eofErr, a Mac OS Carbon error code that dates back to the original Macintosh Toolbox of 1984; it's defined in MacErrors.r. It means you have reached the end of the file and there are no more bytes to read. You should note the number of bytes actually returned and complete whatever processing you're doing of the file at that point.
If you want to avoid the error, you can read the file length and number of samples from the various API calls, calculate how many bytes to read, and never go past the end of the file.
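A sketch of that approach with the Audio File APIs (the path is a placeholder; error checks trimmed for brevity):
#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
    CFSTR("/path/to/file.caf"), kCFURLPOSIXPathStyle, false);
AudioFileID afid;
AudioFileOpenURL(url, kAudioFileReadPermission, 0, &afid);

UInt64 byteCount = 0;
UInt32 propSize = sizeof(byteCount);
// Total size of the audio data, so we never read past the end (eofErr, -39).
AudioFileGetProperty(afid, kAudioFilePropertyAudioDataByteCount, &propSize, &byteCount);

UInt32 toRead = (UInt32)byteCount;
void *buffer = malloc(toRead);
AudioFileReadBytes(afid, false, 0, &toRead, buffer);
// toRead now holds the number of bytes actually read.
AudioFileClose(afid);
CFRelease(url);
free(buffer);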
Is it possible (on iPhone/iPod Touch) for a file written like this:
if (FILE* file = fopen(filename, "wb")) {
    fwrite(buf, buf_size, 1, file);
    fclose(file);
}
to get corrupted, e.g. when app is forced to terminate?
From what I know, fwrite should be an atomic operation, so when I write the whole file with one call, no corruption should occur. I could not find any information on the net that says otherwise.
Since when is fwrite atomic? I can't find any reference for that. Anyway, even if fwrite can be atomic, the span between fopen and fwrite is not, so if your app is forced to terminate between those calls, you'll get an empty file.
As you're writing for iPhoneOS, you can use -[NSData writeToFile:atomically:] to ensure the whole open-write-close procedure is atomic (it works by writing to a temporary file, then replacing the original one).
You could make things easier for yourself and write the data using the NSData class, which has a writeToFile:atomically: method waiting for you. Wrapping the raw buffer with NSData is not hard; there are the dataWithBytes:length: and dataWithBytesNoCopy:length:freeWhenDone: initializers.
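For example, a sketch using the question's buf, buf_size, and filename:
NSData *data = [NSData dataWithBytesNoCopy:buf length:buf_size freeWhenDone:NO];
BOOL ok = [data writeToFile:[NSString stringWithUTF8String:filename]
                 atomically:YES]; // writes to a temp file, then swaps it in
if (!ok) {
    // handle the failure; the original file, if any, is left untouched
}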
Data written with fwrite is buffered, so a sudden termination might not flush the buffers. fclose will flush the buffer, but AFAIK this does not imply that the bytes are also written to the disk (due to OS-level caches).
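If you want to stay with stdio, you can at least push the data out explicitly before closing; a sketch using the question's variables:
#include <stdio.h>
#include <unistd.h>

if (FILE* file = fopen(filename, "wb")) {
    fwrite(buf, buf_size, 1, file);
    fflush(file);          // flush stdio's user-space buffer to the kernel
    fsync(fileno(file));   // ask the kernel to commit its caches to disk
    fclose(file);
}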