Major speed issues with imageWithContentsOfFile - iPhone

In my application I'm creating a large image dynamically and then loading it up for display in my image explorer class. Because I can't add new images to the bundle at run time, it seems I have to use imageWithContentsOfFile - however, this gives me major speed issues further down the line.
The way my image explorer works is that it takes in an image, splits it into tiles, caches those tiles, and then loads into memory only the tiles that need to be shown on screen. Using a bunch of NSLogs, I've managed to find out where all the slowdown is. It's not in the imageWithContentsOfFile call itself; it's when I reach this line:
CGContextDrawImage(context_ref,
                   CGRectMake(0, 0, imgWidth, imgHeight),
                   tileImage);
This is when I'm writing the tile to the cache file. tileImage is a CGImageRef that is returned from CGImageCreateWithImageInRect, which is how I get subsets of my larger image to save separately.
The odd thing is that splitting up a large image this way takes about 45 seconds (!), but when I split up an image from the bundle using imageNamed rather than imageWithContentsOfFile, it takes only about 2 seconds.
Anyone have any ideas? Thanks in advance :)

I think you should split up your image.
CGContextDrawImage takes the fully loaded "tileImage": if "tileImage" is 8 MB, your app must load all 8 MB into memory. That takes a long time, and it can create memory issues and so on.
If you want to use a single big image and you can wait for the loading, one solution is to use another thread. That avoids locking the UI while the big image loads.
Note that an 8 MB JPEG will use well over 8 MB of memory once decoded; UIImage keeps an uncompressed format for fast drawing.
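As an illustration of the threading suggestion, here is a minimal sketch (not the asker's code; the path variable and useDecodedImage: method are assumed names) that decodes the big image once on a background queue, so later per-tile CGContextDrawImage calls don't pay the decode cost on the main thread:

// Sketch: decode the large image off the main thread before tiling it.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *source = [UIImage imageWithContentsOfFile:path];
    CGImageRef cgImage = [source CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8,
                                             width * 4, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    // The actual JPEG/PNG decompression happens during this draw,
    // safely off the main thread.
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGImageRef decoded = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    dispatch_async(dispatch_get_main_queue(), ^{
        [self useDecodedImage:decoded]; // hand off to the tiling code
        CGImageRelease(decoded);
    });
});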

imageNamed uses caching, and may reduce the amount of scaling performed.
UIImage is immutable, so imageNamed can take note of this and return a reference to an already-cached image rather than loading and creating a new one every time, wherever you load your image from.
If you create the images yourself, you can set up your own in-memory caching scheme (see the sketch below) and pass references around in many cases, then purge the cache when you receive a memory warning.
If you need to scale the image and the size is static, determine the size to draw and create a UIImage using imageWithCGImage:scale:orientation: -- or you can approach the problem in a similar way using the CoreGraphics APIs directly.
Beyond that, hold onto and reuse what you need, and use a profiler to balance your allocations and to measure timings.
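A minimal sketch of such an in-memory cache, assuming ARC and using NSCache (which already evicts under memory pressure; the explicit purge on the memory-warning notification is belt and braces). The class name and key scheme are made up for illustration:

// Hypothetical tile cache; NSCache and the notification are real APIs.
@interface TileCache : NSObject
@property (nonatomic, strong) NSCache *cache;
@end

@implementation TileCache
- (instancetype)init {
    if ((self = [super init])) {
        _cache = [[NSCache alloc] init];
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(purge)
                   name:UIApplicationDidReceiveMemoryWarningNotification
                 object:nil];
    }
    return self;
}

- (UIImage *)tileForKey:(NSString *)key {   // e.g. @"tile_3_7"
    return [self.cache objectForKey:key];
}

- (void)setTile:(UIImage *)tile forKey:(NSString *)key {
    [self.cache setObject:tile forKey:key];
}

- (void)purge {
    [self.cache removeAllObjects];
}
@end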

Related

Can Flutter render images from raw pixel data? [duplicate]

Setup
I am using a custom RenderBox to draw.
The canvas object in the code below comes from the PaintingContext in the paint method.
Drawing
I am trying to render pixels individually by using Canvas.drawRect.
I should point out that these are sometimes larger and sometimes smaller than the pixels on screen they actually occupy.
for (int i = 0; i < width * height; i++) {
  final int x = i % width, y = i ~/ width;
  // in this case the rect size is 1 (one rect per pixel)
  canvas.drawRect(Rect.fromLTWH(x.toDouble(), y.toDouble(), 1, 1),
      Paint()..color = colors[y][x]); // row-major indexing into the nested list
}
Storage
I am storing the pixels as a List<List<Color>> (colors in the code above). I tried differently nested lists previously, but they did not cause any noticeable difference in performance.
The memory on my Android Emulator test device increases by 282.7MB when populating the list with a 999x999 image. Note that it only temporarily increases by 282.7MB. After about half a minute, the increase drops to 153.6MB and stays there (without any user interaction).
Rendering
With a resolution of 999x999, the code above causes a GPU max of 250.1 ms/frame and a UI max of 1835.9 ms/frame, which is obviously unacceptable. The UI freezes for two seconds when trying to draw a 999x999 image, which should be a piece of cake (I would guess) considering that 4k video runs smoothly on the same device.
CPU
I am not exactly sure how to track this properly using the Android profiler, but while populating or changing the list, i.e. drawing the pixels (which is the case for the above metrics as well), CPU usage goes from 0% up to 60%.
Cause
I have no idea where to start since I am not even sure what part of my code causes the freezing. Is it the memory usage? Or the drawing itself?
How would I go about this in general? What am I doing wrong? How should I store these pixels instead?
Efforts
I have tried so much that did not help at all that I will try to only point out the most notable ones:
I tried converting the List<List<Color>> to an Image from the dart:ui library, hoping to use Canvas.drawImage. In order to do that, I tried encoding my own PNG, but I have not been able to render more than a single row. However, it did not look like that would boost performance. When trying to convert a 9999x9999 image, I ran into an out-of-memory exception. Now, I am wondering how video is rendered at all, as any 4k video will easily take up more memory than a 9999x9999 image if a few seconds of it are held in memory.
I tried implementing the image package. However, I stopped before completing it as I noticed that it is not meant to be used in Flutter but rather in HTML. I would not have gained anything using that.
This one is pretty important for the following conclusion I will draw: I tried to just draw without storing the pixels, i.e. using Random.nextInt to generate random colors. When trying to randomly generate a 999x999 image, this resulted in a GPU max of 1824.7 ms/frame and a UI max of 2362.7 ms/frame, which is even worse, especially in the GPU department.
Conclusion
This is the conclusion I reached before trying my failed attempt at rendering using Canvas.drawImage: Canvas.drawRect is not made for this task as it cannot even draw simple images.
How do you do this in Flutter?
Notes
This is basically what I tried to ask over two months ago (yes, I have been trying to resolve this issue for that long), but I think that I did not express myself properly back then and that I knew even less what the actual problem was.
The highest resolution I can properly render is around 10k pixels in total. I need at least 1M (a 999x999 image is already about a million pixels).
I am thinking that abandoning Flutter and going native might be my only option. However, I would like to believe that I am just approaching this problem completely wrong. I have spent about three months trying to figure this out and did not find anything that led me anywhere.
Solution
dart:ui has a function that converts pixels to an Image easily: decodeImageFromPixels. There is an example implementation, though note the open issue about its performance and that it does not work in the current master channel.
I was simply not aware of this function back when I created this answer, which is why I wrote the "Alternative" section below.
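For reference, a minimal sketch of wrapping decodeImageFromPixels (the helper name and the RGBA assumption are mine, not from the linked example implementation):

import 'dart:async';
import 'dart:typed_data';
import 'dart:ui' as ui;

// Wraps the callback-style API in a Future. Assumes `rgba` holds
// width * height * 4 bytes in RGBA order.
Future<ui.Image> imageFromPixels(Uint8List rgba, int width, int height) {
  final completer = Completer<ui.Image>();
  ui.decodeImageFromPixels(
      rgba, width, height, ui.PixelFormat.rgba8888, completer.complete);
  return completer.future;
}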
Alternative
Thanks to @pslink for reminding me of BMP after I wrote that I had failed to encode my own PNG.
I had looked into it previously, but I thought that it looked too complicated without sufficient documentation. Now, I found a nice article explaining the necessary BMP headers and implemented 32-bit BGRA (ARGB, but BGRA is the order of the default mask) by copying Example 2 from the "BMP file format" Wikipedia article. I went through all the sources but could not find an original source for this example; maybe the authors of the Wikipedia article wrote it themselves.
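For illustration, here is that header layout sketched as a Dart function; this is my own reconstruction of the Wikipedia example, not the answer's actual code, and the helper name is made up:

import 'dart:typed_data';

// Wraps raw BGRA bytes in a 122-byte BMP header (14-byte file header plus
// a 108-byte BITMAPV4HEADER with alpha, as in Wikipedia's "Example 2").
Uint8List bmpFromBgra(Uint8List bgra, int width, int height) {
  const headerSize = 122;
  final fileSize = headerSize + bgra.length;
  final data = ByteData(fileSize);
  data.setUint8(0, 0x42); // 'B'
  data.setUint8(1, 0x4D); // 'M'
  data.setUint32(2, fileSize, Endian.little);
  data.setUint32(10, headerSize, Endian.little); // offset to pixel data
  data.setUint32(14, 108, Endian.little);        // BITMAPV4HEADER size
  data.setInt32(18, width, Endian.little);
  data.setInt32(22, -height, Endian.little);     // negative height = top-down
  data.setUint16(26, 1, Endian.little);          // color planes
  data.setUint16(28, 32, Endian.little);         // bits per pixel
  data.setUint32(30, 3, Endian.little);          // BI_BITFIELDS compression
  data.setUint32(34, bgra.length, Endian.little);
  data.setUint32(54, 0x00FF0000, Endian.little); // red channel mask
  data.setUint32(58, 0x0000FF00, Endian.little); // green channel mask
  data.setUint32(62, 0x000000FF, Endian.little); // blue channel mask
  data.setUint32(66, 0xFF000000, Endian.little); // alpha channel mask
  final bytes = data.buffer.asUint8List();
  bytes.setRange(headerSize, fileSize, bgra);    // append pixel data
  return bytes;
}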
Results
Using Canvas.drawImage and my 999x999 pixels converted to an image from a BMP byte list, I get a GPU max of 9.9 ms/frame and a UI max of 7.1 ms/frame, which is awesome!
| ms/frame | Before (Canvas.drawRect) | After (Canvas.drawImage) |
|-----------|---------------------------|--------------------------|
| GPU max | 1824.7 | 9.9 |
| UI max | 2362.7 | 7.1 |
Conclusion
Canvas operations like Canvas.drawRect are not meant to be used like that.
Instructions
First off, this is quite straightforward. However, you need to correctly populate the byte list; otherwise, you will get an error that your data is not correctly formatted and see no results, which can be quite frustrating.
You will need to prepare your image before drawing as you cannot use async operations in the paint call.
In code, you need to use a Codec to transform your list of bytes into an image.
final list = [
  0x42, 0x4d, // 'B', 'M'
  // ... remaining header bytes and pixel data ...
];
// make sure that you either know the file size, data size, and data offset
// beforehand, or that you edit these bytes afterwards
final Uint8List bytes = Uint8List.fromList(list);
final Codec codec = await instantiateImageCodec(bytes);
final Image image = (await codec.getNextFrame()).image;
You need to pass this image to your drawing widget, e.g. using a FutureBuilder.
Now, you can just use Canvas.drawImage in your draw call.
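A minimal sketch of those two steps (the widget wiring and the painter; the names here are made up):

import 'dart:ui' as ui;
import 'package:flutter/material.dart';

// Resolve the image asynchronously, then hand it to a CustomPainter.
Widget buildPixelImage(Future<ui.Image> imageFuture) {
  return FutureBuilder<ui.Image>(
    future: imageFuture,
    builder: (context, snapshot) => snapshot.hasData
        // give CustomPaint an explicit size in real code
        ? CustomPaint(painter: _PixelPainter(snapshot.data!))
        : const SizedBox.shrink(),
  );
}

class _PixelPainter extends CustomPainter {
  _PixelPainter(this.image);
  final ui.Image image;

  @override
  void paint(Canvas canvas, Size size) =>
      canvas.drawImage(image, Offset.zero, Paint());

  @override
  bool shouldRepaint(covariant _PixelPainter old) => old.image != image;
}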

Memory issues with UIImage when we load 2000 images

I'm trying to run an animation by changing the images of my UIImageView. I need about 200 images of 24 KB each to create a 5-second animation. I am able to load all the images into memory (into an NSArray), but when I start the animation (switching the UIImage of the UIImageView), after about 60 images I get a memory warning, and if I continue displaying images the app crashes.
Just because your image files are 24 KB on disk doesn't mean that is the amount of memory they will take up.
If you have an image that is 480x960 with 1 byte per pixel, it may have a small file size due to compression (JPEG, for example), but once it is in memory in your app, it will be 450 KB. Multiply that by 60 (the point at which you get the memory warning) and you will see that is approximately 27 MB.
If your images are larger, or have a greater colour depth, then obviously they will consume more memory. I think I read once that iOS gives you a memory warning when you hit 22 MB, but that includes other memory allocated to your app for other things as well.
And just because your app "loads" the images into the array doesn't mean they are actually loaded into memory, or decompressed, until they are really needed.
So, to calculate how much memory your image is going to use, don't look at the file size; work it out from the image dimensions instead.
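As a worked example of that calculation (a sketch; width, height, and frameCount stand in for your own values, and 4 bytes per pixel assumes a standard 32-bit RGBA bitmap):

// Estimate decoded size from dimensions, not from the file size on disk.
size_t width = 480, height = 960;
size_t frameCount = 60;
size_t bytesPerPixel = 4;                               // 32-bit RGBA
size_t bytesPerImage = width * height * bytesPerPixel;  // 1,843,200 bytes, about 1.8 MB
size_t totalBytes = bytesPerImage * frameCount;         // over 100 MB for 60 frames
// At 1 byte per pixel, as in the example above, this drops to
// 460,800 bytes (450 KB) per frame, about 27 MB for 60 frames.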

iOS: How to create and draw into (and save) an image larger than the screen?

We're creating an iOS photo app. In doing this, we have to create dynamically sized images up to about 2500x1600px. Once this image has been created, we want to draw smaller images on top of the big one reasonably quickly.
The problem as we can see it is that it's impossible to get a context larger than the screen resolution. The call does not crash, but it returns a nil-context. How can such a seemingly simple task be achieved?
Secondly, once this context is created, what is the fastest way to draw a small image at a given position on top of the big one?
Edit:
We found the solution. CGBitmapContextCreate returned nil because the width and height parameters were passed as floats, not ints. Sometimes the solution is right there in front of you and you're too blind to see it. Hopefully this answer can help other people who run into the same problem.
Make sure you specify integer widths and heights as the arguments to CGBitmapContextCreate; otherwise it returns nil. Beyond that, the size of the context should not matter as long as you can malloc enough memory for it.
It should be possible to get a context for almost any bitmap for which you can allocate enough memory, in your case multiples of 2500x1600x4 bytes of ARGB pixels (a sketch follows below).
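For example, a minimal sketch (the x, y, smallWidth, smallHeight, and smallImage variables are placeholders for your own values):

// Create the large bitmap context; note the integer width and height
// (passing floats here was the bug described in the question).
size_t width = 2500;
size_t height = 1600;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                         8,           // bits per component
                                         width * 4,   // bytes per row
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Drawing a small image on top of the big one at a given position:
CGContextDrawImage(ctx, CGRectMake(x, y, smallWidth, smallHeight), smallImage);

// Snapshot the result and clean up:
CGImageRef result = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);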
You might also want to look into using a CATiledLayer, where you would only have to draw into the tiles covered by the smaller image. You may have to tile anyway to support older devices, which are limited by the maximum tile size that will fit into the GPU's texture cache.
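And a minimal sketch of backing a view with a CATiledLayer (the tile size and level count are illustrative, not prescriptive):

// In a UIView subclass; requires QuartzCore.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (void)configureTiling {
    CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
    tiledLayer.tileSize = CGSizeMake(256, 256);
    tiledLayer.levelsOfDetail = 4;
}

- (void)drawRect:(CGRect)rect {
    // Called once per tile, possibly on background threads;
    // draw only the content that intersects `rect`.
}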

Objective C (iPhone): CGContextDrawImage is too slow

I'm writing a program that does various types of image processing while getting pictures at a rate of 15 FPS. When I comment out the code that prints out the images and leave in only the processing, I find that I can process images at a maximal speed of 13/14 FPS.
However, upon calling CGContextDrawImage 6 times in a row (6 different images), my drawing rate drops down to 6/7 FPS. I was wondering if anyone knows an alternative to CGContext's CGContextDrawImage such that printing the image takes minimal time.
Scale it to the right size and/or render intermediates to an offscreen cached context (e.g. composite and merge multiple images), which can be copied cheaply. Make sure that your image uses an optimal layout, assuming you render it multiple times. Only draw when needed. Profile to see what takes the most time. Determine what actually needs to be drawn: if you have 6 images and they overlap, do not draw the portions which are not visible.
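As an illustration of the offscreen-cache idea, a sketch assuming targetCtx, canvasSize, rects, and images already exist in your code:

// Composite the six images once into an offscreen CGLayer...
CGLayerRef cache = CGLayerCreateWithContext(targetCtx, canvasSize, NULL);
CGContextRef layerCtx = CGLayerGetContext(cache);
for (int i = 0; i < 6; i++) {
    CGContextDrawImage(layerCtx, rects[i], images[i]);
}

// ...then, on each of the 15 frames per second, blit the cached result
// with a single call instead of six CGContextDrawImage calls.
CGContextDrawLayerAtPoint(targetCtx, CGPointZero, cache);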

Image editing using iPhone

I'm creating an image editing application for iPhone. I would like to enable the user to pick an image from the photo library, edit it (grayscale, sepia, etc.) and, if possible, save it back to the filesystem. I've done the image picking (the simplest part, as you know, using the image picker) and also the grayscale conversion. But I got stuck with sepia; I don't know how to implement it. Is it possible to get the value of each pixel of the image so that I can vary the pixels to get the desired effects? Or are there any other possible methods? Please help...
The Apple image picker code will most likely be holding just the file names and some lower-res renderings of the images in RAM until the last moment, when a user selects an image.
When you ask for the full frame buffer of the image, the CPU suddenly has to do a lot more work decoding the image at full resolution, but it might be as simple as this to trigger it off:
CFDataRef CopyImagePixels(CGImageRef inImage)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(inImage));
}

/* IN MAIN APPLICATION FLOW - but see EDIT 2 below */
const char* pixels = [[((NSData*)CopyImagePixels([myImage CGImage]))
    autorelease] bytes]; /* N.B. returned pixel buffer would be read-only */
This is just a guess as to how it works, really, but based on some experience with image processing in other contexts. To work out whether what I suggest makes sense and is good from a memory usage point of view, run Instruments.
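As for the sepia effect itself, which the question asks about, a common per-pixel approach is sketched below. This is my sketch, not the answerer's code, and it assumes a writable 32-bit RGBA buffer obtained from a bitmap context you created yourself (unlike the read-only CGDataProviderCopyData buffer above):

// Apply a sepia tone in place using the widely used weighting matrix.
static void ApplySepia(uint8_t *rgba, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; i++) {
        uint8_t *p = rgba + i * 4;
        float r = p[0], g = p[1], b = p[2];
        float sr = 0.393f * r + 0.769f * g + 0.189f * b;
        float sg = 0.349f * r + 0.686f * g + 0.168f * b;
        float sb = 0.272f * r + 0.534f * g + 0.131f * b;
        p[0] = sr > 255.0f ? 255 : (uint8_t)sr; // clamp to byte range
        p[1] = sg > 255.0f ? 255 : (uint8_t)sg;
        p[2] = sb > 255.0f ? 255 : (uint8_t)sb;
        // p[3] (alpha) is left unchanged
    }
}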
The Apple docs say (related, may apply to you):
You should avoid creating UIImage objects that are greater than 1024 x 1024 in size. Besides the large amount of memory such an image would consume, you may run into problems when using the image as a texture in OpenGL ES or when drawing the image to a view or layer. This size restriction does not apply if you are performing code-based manipulations, such as resizing an image larger than 1024 x 1024 pixels by drawing it to a bitmap-backed graphics context. In fact, you may need to resize an image in this manner (or break it into several smaller images) in order to draw it to one of your views.
[ http://developer.apple.com/iphone/library/documentation/UIKit/Reference/UIImage_Class/Reference/Reference.html ]
AND
Note: Prior to iPhone OS 3.0, UIView instances may have a maximum height and width of 1024 x 1024. In iPhone OS 3.0 and later, views are no longer restricted to this maximum size but are still limited by the amount of memory they consume. Therefore, it is in your best interests to keep view sizes as small as possible. Regardless of which version of iPhone OS is running, you should consider using a CATiledLayer object if you need to create views larger than 1024 x 1024 in size.
[ http://developer.apple.com/iPhone/library/documentation/UIKit/Reference/UIView_Class/UIView/UIView.html ]
Also worth noting:
(a) Official how-to
http://developer.apple.com/iphone/library/qa/qa2007/qa1509.html
(b) From http://cameras.about.com/od/cameraphonespdas/fr/apple-iphone.htm
"The image size uploaded to your computer is at 1600x1200, but if you email the photo directly from the iPhone, the size will be reduced to 640x480."
(c) Encoding large images with JPEG image compression requires large amounts of RAM, depending on the size, possibly larger amounts than are available to the application.
(d) It may be possible to use an alternate compression algorithm with (if necessary) its malloc rewired to use temporary memory mapped files. But consider the data privacy/security issues.
(e) From iPhone SDK: After a certain number of characters entered, the animation just won't load
"I thought it might be a layer size issue, as the iPhone has a 1024 x 1024 texture size limit (after which you need to use a CATiledLayer to back your UIView), but I was able to lay out text wider than 1024 pixels and still have this work."
Sometimes the 1024 pixel limit may appear to be a bit soft, but I would always suggest you program defensively and stay within the 1024 pixel limit if you can.
EDIT 1
Added extra line break in code.
EDIT 2
Oops! The code gets a read-only copy of the data (there is a difference between CFMutableDataRef and CFDataRef). Because of limitations on available RAM, you then have to either make a lower-res copy of it by smooth-scaling it down yourself, or copy it into a modifiable buffer; if the image is large, you may need to write it in bands to a temporary file, release the unmodifiable data block, and load the data back from the file. And only do this, of course, if having the data in a temporary file like this is acceptable. Painful.
EDIT 3
Here's perhaps a better idea: try using a destination bitmap context backed by a memory-mapped CFData block. Does that work? Again, only do this if you're happy with the data going via a temporary file.
EDIT 4
Oh no, it appears that memory-mapped read-write CFData is not available. Maybe try the BSD mmap APIs.
EDIT 5
Added "const char*" and "pixels read-only" comment to code.