Incrementally writing to disk in smaller chunks until the whole PDF is generated - iText

We are creating more than 20,000 pages using iText.
Right now the application crashes with an out-of-memory exception.
I have heard that some PDF writers write smaller chunks to disk as they go and finish the PDF without throwing an out-of-memory exception.
Is that true? What would be the best way to write the PDF in iText without running into an out-of-memory exception: FileOutputStream, ByteArrayOutputStream, a pipe?
Thanks in advance.
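For context, the "write smaller chunks to disk" pattern I have in mind looks roughly like this (just a sketch, assuming iText 5 and a plain FileOutputStream; the file name and page content are placeholders):

    import java.io.FileOutputStream;

    import com.itextpdf.text.Document;
    import com.itextpdf.text.Paragraph;
    import com.itextpdf.text.pdf.PdfWriter;

    public class BigPdf {
        public static void main(String[] args) throws Exception {
            Document document = new Document();
            // Stream straight to disk; never buffer the whole PDF in a ByteArrayOutputStream.
            PdfWriter.getInstance(document, new FileOutputStream("big.pdf"));
            document.open();

            for (int page = 1; page <= 20000; page++) {
                document.add(new Paragraph("Content of page " + page));
                document.newPage(); // finished pages can be flushed out instead of kept in RAM
            }

            document.close(); // writes the xref table and trailer, completing the PDF on disk
        }
    }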

Related

Flutter encrypt large files

I want to transform the file's bytes for encryption, but when I use the readAsBytes() method on a large file, I get an out-of-memory error. So is there any way to encrypt a large file with less memory consumption?
Thank you
Generally speaking, you need a temporary buffer to hold your data. If your RAM is not large enough (very likely on mobile devices), it has to be the disk.
So create a second file, and read the first file in batches of bytes small enough for your memory to handle. Your encryption method should be able to handle this, as it's a very common requirement. Write the resulting batches of encrypted content to the second file. Once you are done, delete/overwrite the original.
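The same batching idea, sketched in Java rather than Dart (the AES/CTR choice, the key/IV, and the file names are placeholders, not a recommendation): read a batch, encrypt it, write it out, repeat.

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import javax.crypto.Cipher;
    import javax.crypto.CipherOutputStream;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    public class ChunkedEncrypt {
        public static void main(String[] args) throws Exception {
            byte[] key = new byte[16];   // placeholder key/IV; use real key material
            byte[] iv  = new byte[16];

            Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));

            try (FileInputStream in = new FileInputStream("big.bin");
                 CipherOutputStream out = new CipherOutputStream(new FileOutputStream("big.bin.enc"), cipher)) {
                byte[] buffer = new byte[64 * 1024];   // 64 KB batches, never the whole file
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);        // encrypts and writes this batch only
                }
            }
            // afterwards, delete/overwrite the original file
        }
    }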

How does mmap() help read information at a specific offset versus regular POSIX I/O

I'm trying to understand mmap a bit better. I recently read this portion of the accepted answer to the related Stack Overflow question mmap and memory usage (quoted below):
Let's say you read a 100MB chunk of data, and according to the initial 1MB of header data, the information that you want is located at offset 75MB, so you don't need anything between 1~74.9MB! You have read it for nothing but to make your code simpler. With mmap, you will only read the data you have actually accessed (rounded 4kb, or the OS page size, which is mostly 4kb), so it would only read the first and the 75th MB.
I understand most of the benefits of mmap (no need for context switches, no need to swap contents out, etc.), but I don't quite understand this offset part. If we don't mmap and we need information at the 75 MB offset, can't we do that with standard POSIX file I/O calls without having to use mmap? How exactly does mmap help here?
Of course you could. You can always open a file and read just the portions you need.
mmap() can be convenient when you don't want to write said code or you need sparse access to the contents and don't want to have to write a bunch of caching logic.
With mmap(), you're "mapping" the entire contents of the file to offsets in memory. Most implementations of mmap() do this lazily, so each ~4K block of the file is read on demand, as you access those memory locations.
All you have to do is access the data in your file as if it were a huge array of chars (i.e. int* someInt = (int*)&map[750000000]; return *someInt;), and let the OS worry about which portions of the file have been read, when to read the file, how much, writing the dirty data blocks back to the file, and purging the memory to free up RAM.
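For illustration, here is the same idea in Java, whose FileChannel.map() is a thin wrapper around mmap() (the file name and offset are made up, and this sketch assumes the file fits in a single mapping of at most 2 GB):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MmapOffset {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile("data.bin", "r");
                 FileChannel channel = file.getChannel()) {
                // Map the whole file; nothing is actually read yet.
                MappedByteBuffer map = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
                // Touching offset 75 MB faults in only the page(s) around that offset,
                // not the ~74 MB in between.
                int value = map.getInt(75 * 1024 * 1024);
                System.out.println(value);
            }
        }
    }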

Amazon S3 (AWS) NSMutableData

I have a project that downloads big files (above 50 MB) from Amazon S3. The download stops without error, so I split the file into smaller chunks because of its large size and download them simultaneously. But when I append the chunk data into a single NSMutableData in the correct order,
the video won't play. Any idea about this?
Please help me, I've been sitting on this project for a whole week T_T..
You shouldn't manage this amount of data in RAM only.
You should instead use secondary storage (namely via NSFileManager), as explained here.
When you're done downloading the file, play it normally. If you're sure the user won't really need it anymore, just delete it right after playback.
[edit]
Or, you might as well just use MPMoviePlayerController pointing at that URL directly.
What you need to do is create a file of the appropriate size first. Each downloader object must know the offset in the file to put its data at, and it should write the data as it arrives rather than store it in a mutable data object. This will greatly lower the memory footprint of the operation.
There is a second component: you must set the F_NOCACHE flag on the open file so iOS does not keep the file writes in its cache.
With both of these it should work fine. Also use a lot of asserts during development so you know ASAP if something fails - so you can correct whatever the problem is.
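A rough sketch of the pre-sized-file-plus-offset-writes idea, shown in Java for brevity (on iOS you would use something like NSFileHandle instead; the file name, size, and offsets here are invented):

    import java.io.RandomAccessFile;

    public class ChunkWriter {
        // Write one downloaded chunk at its fixed offset in the pre-sized output file.
        static void writeChunk(String path, long offset, byte[] chunk) throws Exception {
            try (RandomAccessFile file = new RandomAccessFile(path, "rw")) {
                file.seek(offset);      // jump to this chunk's slot
                file.write(chunk);      // write it straight to disk, don't keep it in RAM
            }
        }

        public static void main(String[] args) throws Exception {
            long totalSize = 50L * 1024 * 1024;            // known size of the S3 object
            try (RandomAccessFile file = new RandomAccessFile("video.mp4", "rw")) {
                file.setLength(totalSize);                 // create the file at full size first
            }
            // each downloader then calls writeChunk(path, itsOffset, itsBytes) as data arrives
        }
    }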

Can I store a SQLite DB as a zip file in an iPhone application

My SQLite file is 7 MB and I want to reduce its size. How can I do that? When I simply compress it, it comes to only about 1.2 MB. Can I compress my mydb.sqlite into a zip file? If that is not possible, is there any other way to reduce the size of my SQLite file?
It is possible to compress it beforehand, but it is largely redundant. You will compress your binary before distribution, Apple distributes your app through the store in compressed form, and compressing an already-compressed file is fruitless. Thus, any work you do to compress the database beforehand should not have much of an effect on the resulting size of your application.
Without details of what you are storing in the DB it's hard to give specific advice. The usual database-design principles apply: normalise your database. For example:
reduce/remove repeating data: if you have text or data that is repeated, store it once and use a key to reference it
if you are storing large chunks of data, you might be able to zip and unzip them in and out of the database in your app code rather than trying to zip the whole DB
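To illustrate that last point, here is what zipping a blob in and out of app code can look like, sketched with Java's built-in GZIP streams (on iOS you would reach for zlib or a similar library; the method names are just for illustration):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class BlobZip {
        // Compress a blob before inserting it into the database.
        static byte[] zip(byte[] raw) throws Exception {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
                gzip.write(raw);
            }
            return out.toByteArray();
        }

        // Decompress it again after selecting it back out.
        static byte[] unzip(byte[] compressed) throws Exception {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = gzip.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            }
            return out.toByteArray();
        }
    }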

Is Tie::File lazily loading a file?

I'm planning on writing a simple text viewer that I'd expect to be able to deal with very large files. I was thinking of using Tie::File for this and paginating the lines. Does it load the lines lazily, or all of them at once?
It won't load the whole file. From the documentation:
The file is not loaded into memory, so this will work even for gigantic files.
As far as I can see from its source code, it keeps only the lines that have been used in memory. And yes, it loads data only when needed.
You can limit the amount of memory used with the memory parameter.
It also tracks the offsets of all lines in the file to optimize disk access.