GDI+: How to fix an FPS drop when using a PNG image?

Sorry for my bad English.
Currently I am doing it this way:
Gdiplus::Image img(L"xxx.png");
Gdiplus::Graphics g(hdc);
g.DrawImage(&img, 0, 0);
When I use a PNG and draw an area larger than about 400*200 pixels, my FPS drops to 39~45, but if I use a BMP image instead, the FPS stays at 60.
How can I fix this problem?
Edit: convert the pixel format
I tried it this way (doesn't work):
img = Image::FromFile(filename);
bmp = new Bitmap(img->GetWidth(), img->GetHeight(), PixelFormat32bppPARGB);
Graphics gra(hdc);
gra.FromImage(bmp);
gra.DrawImage(img, destX, destY, img->GetWidth(), img->GetHeight());
(Note: Graphics::FromImage is a static factory method and its return value is discarded here, so gra still targets the HDC and nothing is ever drawn into bmp; see the corrected C++ version after the answer below.)

I suppose this is due to the PixelFormat.
GDI(+) uses PixelFormat.Format32bppPArgb internally; all other formats are converted when drawing, and that per-draw conversion is likely what is causing your performance issue.
So if you want to draw a picture frequently, convert it yourself once on load instead of having GDI+ do it on every draw.
EDIT:
The PixelFormat can be "converted" like this:
// Load png from disc
Image png = Image.FromFile("x.png");
// Create a Bitmap of same size as png with the right PixelFormat
Bitmap bmp = new Bitmap(png.Width, png.Height, PixelFormat.Format32bppPArgb);
// Create a graphics object which draws to the bitmap
Graphics g = Graphics.FromImage(bmp);
// Draw the png to the bmp
g.DrawImageUnscaled(png, 0, 0);
Be aware that Image, Bitmap and Graphics objects implement IDisposable and should be properly disposed when no longer required.
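Since the question uses the C++ GDI+ API rather than .NET, here is a minimal sketch of the same conversion in C++ (untested; it assumes GDI+ has already been initialized via GdiplusStartup):
// Load the PNG once and convert it to the format GDI+ uses internally,
// so the conversion cost is paid once at load time rather than on every draw.
Gdiplus::Bitmap* LoadPngAsPArgb(const wchar_t* filename)
{
    Gdiplus::Bitmap png(filename); // decode the PNG from disk

    // Same size, but with pre-multiplied 32-bit ARGB
    Gdiplus::Bitmap* bmp = new Gdiplus::Bitmap(
        (INT)png.GetWidth(), (INT)png.GetHeight(), PixelFormat32bppPARGB);

    // Drawing the decoded PNG into the new bitmap performs the conversion
    Gdiplus::Graphics g(bmp);
    g.DrawImage(&png, 0, 0, (INT)png.GetWidth(), (INT)png.GetHeight());

    return bmp; // caller owns the bitmap and must delete it
}
Each frame then draws the converted bitmap instead of the original image, e.g. g.DrawImage(bmp, 0, 0);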
Cheers
Thomas

Related

Change the color of one pixel of an UIImage without creating a new UIImage/CGImage

All solutions I've found for doing this (like this one: Change color of certain pixels in a UIImage) suggest creating a new UIImage, but I want to modify a pixel directly in the UIImage without creating a new one. Is there a way to do this?
It seems that CGImage is not mutable, but is there a way to create an image from a pixel data buffer, so that modifying this pixel data buffer would directly modify the image?
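For reference, the closest Core Graphics comes to this is owning the pixel buffer yourself through a bitmap context; a minimal sketch (the buffer can be mutated in place, but a new CGImage snapshot still has to be taken after each change, since CGImage itself is immutable):
// Allocate a pixel buffer we own and wrap it in a bitmap context
size_t width = 64, height = 64, bytesPerRow = width * 4;
uint8_t *buffer = calloc(height, bytesPerRow);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                         space, kCGImageAlphaPremultipliedLast);

// Mutate a pixel directly in the buffer...
buffer[0] = 0xFF;

// ...but a fresh snapshot is still needed to display the change
CGImageRef snapshot = CGBitmapContextCreateImage(ctx);
UIImage *image = [UIImage imageWithCGImage:snapshot];
CGImageRelease(snapshot);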

The color of video is wrong when made from UIImage of the PNG files

I am taking a UIImage from a PNG file and feeding it to the videoWriter:
avAdaptor appendPixelBuffer:pixelBuffer
When the resulting video comes out, it seems to be lacking a color; the yellow is missing, or something like that.
I took a look at the function that makes the pixel buffer out of the UIImage:
CVPixelBufferCreateWithBytes(NULL,                            // allocator
                             myWidth,                         // width
                             myHeight,                        // height
                             kCVPixelFormatType_32BGRA,       // pixel format
                             (void*)CFDataGetBytePtr(image),  // base address
                             CGImageGetBytesPerRow(cgImage),  // bytes per row
                             NULL,                            // release callback
                             0,                               // release refCon
                             NULL,                            // buffer attributes
                             &pixelBuffer);
I also tried kCVPixelFormatType_32ARGB and others; it didn't help.
Any thoughts?
Please verify whether your PNG image has a transparency (alpha) element. If your PNG image doesn't contain transparency, then it's 24 bits per pixel, not 32.
Also, have you tried kCVPixelFormatType_32RGBA?
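A quick way to check is to ask the CGImage directly; a small sketch, assuming cgImage is the CGImageRef already used in the snippet above:
// kCGImageAlphaNone means the PNG decoded without an alpha channel (24-bit color)
NSLog(@"alpha info: %u, bits per pixel: %zu, bytes per row: %zu",
      (unsigned)CGImageGetAlphaInfo(cgImage),
      CGImageGetBitsPerPixel(cgImage),
      CGImageGetBytesPerRow(cgImage));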
Maybe the image sizes do not fit together.
Your input image should have the same width and height as the video output. If myWidth or myHeight differs from the size of the image (i.e. a different aspect ratio), bytes may be lost at the end of each line, which could lead to color shifting. kCVPixelFormatType_32BGRA seems to be the preferred (fastest) pixel format, so that part should be okay.
There is no yellow in the RGB color space; yellow is made up of only the red and green components. It seems that the blue component is missing.
I assume you are using a CFDataRef (maybe NSData) for the image. If it is an NSData object, you can print the bytes to the debug console using
NSLog(@"data: %@", image);
This will print a hex dump to the console. There you can see whether you have alpha and what byte order your PNG uses. If your image has alpha, every fourth byte should be the same number (the alpha value).

Is it possible to tile images in a UIScrollView without having to manually create all the tiles?

In the iPhone sample code "PhotoScroller" from WWDC 2010, they show how to do a pretty good mimic of the Photos app with scrolling, zooming, and paging of images. They also tile the images to show how to display high-resolution images while maintaining good performance.
Tiling is implemented in the sample code by grabbing pre-scaled and pre-cut images for different resolutions and placing them in the grid which makes up the entire image.
My question is: is there a way to tile images without having to manually go through all your photos and create "tiles"? How is the Photos app able to display large images on the fly?
Edit
Here is the code from Deepa's answer below:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // CGImageCreateWithImageInRect returns a +1 reference
    return tileImage;
}
Here is the piece of code for tiled image generation.
In the PhotoScroller source code, replace tileForScale:row:col: with the following, where inImage is the image that you want to create tiles from:
- (UIImage *)tileForScale:(float)scale row:(int)row col:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    CGRect subRect = CGRectMake(col * tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
    CGImageRef tiledImage = CGImageCreateWithImageInRect([inImage CGImage], subRect);
    UIImage *tileImage = [UIImage imageWithCGImage:tiledImage];
    CGImageRelease(tiledImage); // release the +1 reference to avoid leaking each tile
    return tileImage;
}
Regards,
Deepa
I've found this which may be of help: http://www.mikelin.ca/blog/2010/06/iphone-splitting-image-into-tiles-for-faster-loading-with-imagemagick/
You just run it in the Terminal as a shell script on your Mac.
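The heavy lifting in that script is ImageMagick's crop-based tiling; the core of it is a single command along these lines (a sketch; the 256x256 tile size and the output naming are assumptions, so check the post for the exact scheme PhotoScroller expects):
convert huge_image.png -crop 256x256 +repage tile_%d.png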
Sorry Jonah, but I think that you cannot do what you want to.
I have been implementing a comic app using the same example as a reference, and I had the same doubt. Finally, I realized that even if you could load the image and cut it into tiles the first time you use it, you shouldn't. There are two reasons for that:
You do the tiling to save time and be more responsive; loading and tiling a large image takes time.
The previous reason is particularly important the first time the user runs the app.
If these two reasons make no sense to you and you still want to do it, I would use Quartz to create the tiles. The CGImage function CGImageCreateWithImageInRect would be my starting point.
Deepa's answer above will load the entire image into memory as a UIImage (the inImage parameter of the function), defeating the purpose of tiling.
Many image formats support region-based decoding. Instead of loading the whole image into memory, decompressing the whole thing, and discarding all but the region of interest (ROI), you can load and decode only the ROI, on-demand. For the most part, this eliminates the need to pre-generate and save image tiles. I've never worked with ImageMagick but I'd be amazed if it couldn't do it. (I have done it using the Java Advanced Imaging (JAI) API, which isn't going to help you on the iPhone...)
I've played with the PhotoScroller example, and the way it works with pre-generated tiles is only there to demonstrate the idea behind CATiledLayer and to make a working, self-contained project. It's straightforward to replace the tile loading strategy: just rewrite the TilingView tileForScale:row:col: method to return a UIImage tile from some other source, be it Quartz or ImageMagick or whatever, as sketched below.
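For example, if the tiles have been pre-cut to files (by the ImageMagick script linked above, say), the replacement can be as small as this sketch; the self.imageName property and the tile naming scheme are assumptions and must match whatever your tiling step actually produces:
- (UIImage *)tileForScale:(CGFloat)scale row:(int)row col:(int)col
{
    // Hypothetical naming scheme: "<imageName>_<scale*1000>_<row>_<col>.png"
    NSString *tileName = [NSString stringWithFormat:@"%@_%d_%d_%d",
                          self.imageName, (int)(scale * 1000), row, col];
    NSString *path = [[NSBundle mainBundle] pathForResource:tileName
                                                     ofType:@"png"];
    return [UIImage imageWithContentsOfFile:path];
}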

How can I adjust the RGB pixel data of a UIImage on the iPhone?

I want to be able to take a UIImage instance, modify some of its pixel data, and get back a new UIImage instance with that pixel data. I've made some progress with this, but when I try to change the array returned by CGBitmapContextGetData, the end result is always an overlay of vertical blue lines over the image. I want to be able to change the colors and alpha values of pixels within the image.
Can someone provide an end-to-end example of how to do this, starting with a UIImage and ending with a new UIImage with modified pixels?
This might answer your question: How to get the RGB values for a pixel on an image on the iphone
I guess you'd follow those instructions to get a copy of the data, modify it, and then create a new CGDataProvider (CGDataProviderCreateWithCFData), use that to create a new CGImage (CGImageCreate) and then create a new UIImage from that.
I haven't actually tried this (no access to a Mac at the moment), though.
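Sketching that pipeline end-to-end (untested; it swaps the CGDataProvider route for a bitmap context, which avoids guessing the source image's byte layout, and assumes 32-bit premultiplied RGBA; the striping described in the question is typically a sign of modifying the data with a different layout than the context was created with):
UIImage *ModifiedImage(UIImage *source)
{
    CGImageRef original = source.CGImage;
    size_t width = CGImageGetWidth(original);
    size_t height = CGImageGetHeight(original);

    // Let CG allocate the pixel storage; draw the image into it so the layout is known
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), original);

    // Modify the pixel data in place; here, clear the red channel of every pixel
    unsigned char *pixels = CGBitmapContextGetData(ctx);
    for (size_t i = 0; i < width * height * 4; i += 4)
        pixels[i] = 0;

    // Snapshot the modified bytes into a new CGImage, then wrap it in a UIImage
    CGImageRef modified = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:modified];

    CGImageRelease(modified);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    return result;
}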

Writing a masked image to disk as a PNG file

Basically I'm downloading images off of a webserver and then caching them to the disk, but before I do so I want to mask them.
I'm using the masking code everyone seems to point at which can be found here:
http://iosdevelopertips.com/cocoa/how-to-mask-an-image.html
What happens, though, is that the image displays fine, but the version that gets written to disk with
UIImage *img = [self maskImage:[UIImage imageWithData:data] withMask:self.imageMask];
[UIImagePNGRepresentation(img) writeToFile:cachePath atomically:NO];
has its alpha channel inverted compared to the one displayed later on (using the same UIImage instance here).
Any ideas? I do need the cached version to be masked, otherwise displaying the images in a table view gets awfully slow if I have to mask them every time.
Edit: So yeah, UIImagePNGRepresentation(img) seems to invert the alpha channel; it doesn't have anything to do with the code that writes to disk, which is rather obvious, but I checked anyway.
How about drawing into a new image and then saving that? Drawing flattens the image mask into an ordinary alpha channel, so the PNG encoder no longer has to interpret the mask:
UIGraphicsBeginImageContext(img.size);
[img drawAtPoint:CGPointZero];
UIImage *newImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(newImg) writeToFile:cachePath atomically:NO];
(untested)
See the description in CGImageCreateWithMask in CGImage Reference:
The resulting image depends on whether the mask parameter is an image mask or an image. If the mask parameter is an image mask, then the source samples of the image mask act as an inverse alpha value. That is, if the value of a source sample in the image mask is S, then the corresponding region in image is blended with the destination using an alpha value of (1-S). For example, if S is 1, then the region is not painted, while if S is 0, the region is fully painted.
If the mask parameter is an image, then it serves as an alpha mask for blending the image onto the destination. The source samples of mask act as an alpha value. If the value of the source sample in mask is S, then the corresponding region in image is blended with the destination with an alpha of S. For example, if S is 0, then the region is not painted, while if S is 1, the region is fully painted.
It seems that, for some reason, the image mask is treated as a regular mask image when the PNG is saved. According to:
UIImagePNGRepresentation and masked images
http://lists.apple.com/archives/quartz-dev/2010/Sep/msg00038.html
to correctly save with UIImagePNGRepresentation, there are several choices:
Use inverse version of the image mask.
Use "mask image" instead of "image mask".
Render to a bitmap context and then save it, like epatel mentioned.
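Putting option 3 together with the question's own caching code, the flow would look like this (a sketch based on epatel's snippet; maskImage:withMask:, data, cachePath and self.imageMask are the question's own symbols):
// Mask as before
UIImage *img = [self maskImage:[UIImage imageWithData:data] withMask:self.imageMask];

// Flatten the masked image into a plain bitmap, turning the image mask
// into an ordinary alpha channel that the PNG encoder handles correctly
UIGraphicsBeginImageContext(img.size);
[img drawAtPoint:CGPointZero];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// The cached PNG now matches what is displayed
[UIImagePNGRepresentation(flattened) writeToFile:cachePath atomically:NO];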