load UIImage low quality - iPhone

I would like to know how to load a low-quality UIImage first and then load the original UIImage in high quality. Is there an algorithm to do this in Objective-C? I checked out the NYXImagesKit class, but it only works on iOS 5+ because it uses the Core Image framework, and I need this to work on iOS 4.3+. The effect is like what the Facebook app does.
Edit:
Check this out:
http://www.codinghorror.com/blog/2005/12/progressive-image-rendering.html
I want to achieve effect number 2.

From the docs:
Incrementally Loading an Image
If you have a very large image, or are loading image data over the web, you may want to create an incremental image source so that you can draw the image data as you accumulate it. You need to perform the following tasks to load an image incrementally from a CFData object:
1. Create the CFData object for accumulating the image data.
2. Create an incremental image source by calling the function CGImageSourceCreateIncremental.
3. Add image data to the CFData object.
4. Call the function CGImageSourceUpdateData, passing the CFData object and a Boolean value (bool data type) that specifies whether the data parameter contains the entire image, or just partial image data. In any case, the data parameter must contain all the image file data accumulated up to that point.
5. If you have accumulated enough image data, create an image by calling CGImageSourceCreateImageAtIndex, draw the partial image, and then release it.
6. Check to see if you have all the data for an image by calling the function CGImageSourceGetStatusAtIndex. If the image is complete, this function returns kCGImageStatusComplete. If the image is not complete, repeat steps 3 and 4 until it is.
7. Release the incremental image source.
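For illustration, here is a minimal Objective-C sketch of those steps, assuming the chunks arrive from something like an NSURLConnection delegate callback; the class and method names are made up, and memory management is manual (pre-ARC) to match the iOS 4.3 target:

#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>

@interface IncrementalImageLoader : NSObject {
    CGImageSourceRef _source;
    NSMutableData *_accumulated;
}
- (UIImage *)appendChunk:(NSData *)chunk isFinal:(BOOL)isFinal;
@end

@implementation IncrementalImageLoader

- (id)init {
    if ((self = [super init])) {
        _accumulated = [[NSMutableData alloc] init];     // step 1: data accumulator
        _source = CGImageSourceCreateIncremental(NULL);  // step 2: incremental source
    }
    return self;
}

// Call this from your connection:didReceiveData: callback. Returns the best
// partial (or final) image available so far, or nil if too little has arrived.
- (UIImage *)appendChunk:(NSData *)chunk isFinal:(BOOL)isFinal {
    [_accumulated appendData:chunk];  // step 3: add data
    // step 4: always pass ALL the data accumulated so far, not just the new chunk
    CGImageSourceUpdateData(_source, (CFDataRef)_accumulated, isFinal);

    UIImage *partial = nil;
    CGImageRef cgImage = CGImageSourceCreateImageAtIndex(_source, 0, NULL); // step 5
    if (cgImage) {
        partial = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
    }
    // step 6: CGImageSourceGetStatusAtIndex(_source, 0) == kCGImageStatusComplete
    // tells you when the image is done; until then, keep appending chunks.
    return partial;
}

- (void)dealloc {
    if (_source) CFRelease(_source);  // step 7: release the incremental source
    [_accumulated release];
    [super dealloc];
}

@end

Setting the returned image on a UIImageView each time a chunk arrives gives the progressive effect, provided the source image is encoded progressively (e.g., a progressive JPEG).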

You can also refer to the link below; it may help you:
uiimage-dsp

Related

GdkPixbuf can be created with `new_from_data` and `new_from_stream`. Why doesn't the latter require the resolution?

I am trying to understand the basics behind Pixbuf and its factory methods new_from_data and new_from_stream.
new_from_data requires a string of bytes containing the image data, plus other information such as bits per sample and the width and height of the image.
What I don't understand is why new_from_stream does not require that additional image information. How can the Pixbuf know how to render the image when new_from_stream is given nothing other than the Gio.InputStream?
new_from_stream() expects to get a stream of a supported image file, equivalent to new_from_file(). All the image formats contain metadata like height and width.
new_from_data() on the other hand expects a pixel buffer, which is essentially just an array of pixels without any metadata.
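To illustrate the difference, here are the two constructors as they look in the underlying C API, which the Python bindings mirror; the wrapper function names are made up for this sketch:

#include <gdk-pixbuf/gdk-pixbuf.h>

GdkPixbuf *from_raw_pixels(const guchar *pixels, int width, int height)
{
    /* Raw pixels carry no metadata, so the caller must describe the layout:
       colorspace, alpha, bit depth, dimensions, and bytes per row. The buffer
       must outlive the pixbuf unless a destroy callback is supplied. */
    return gdk_pixbuf_new_from_data(pixels,
                                    GDK_COLORSPACE_RGB,
                                    FALSE,        /* no alpha channel */
                                    8,            /* bits per sample */
                                    width, height,
                                    width * 3,    /* rowstride for packed RGB */
                                    NULL, NULL);  /* no destroy callback */
}

GdkPixbuf *from_image_file_stream(GInputStream *stream, GError **error)
{
    /* A PNG/JPEG/etc. stream already embeds its dimensions and layout in the
       file header, so the loader reads the metadata from the stream itself. */
    return gdk_pixbuf_new_from_stream(stream, NULL /* cancellable */, error);
}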

Does CGImageGetDataProvider depend on type of image?

I used CGImageGetDataProvider and CGDataProviderCopyData and then got a pointer to the data. The first image I tested was a BMP and this method worked great. However, I changed my image to a JPG because I had read that the data provider's contents might depend on the type of image. The length of the returned data is 4 when it should be some large number corresponding to the rows and columns of the image.
What I need is a way to ask for the data provider as a bitmap, so I can walk through the data uncompressed. Is that possible?
The data you get out of the data provider will be the data that went into creating the image. For instance, if the image was created using CGImageCreateWithJPEGDataProvider, it would be JPEG data. If you want bitmap data, you will need to make a bitmap, perhaps using CGBitmapContextCreate, and draw the image into the bitmap.
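As a rough sketch of that approach (the function name is made up): render the image into a bitmap context with a layout you choose, so the buffer holds uncompressed pixels no matter what file format the image came from:

#import <UIKit/UIKit.h>

static NSData *CopyRGBABitmapData(CGImageRef image)
{
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    size_t bytesPerRow = width * 4;  // 8-bit RGBA, 4 bytes per pixel

    void *buffer = calloc(height, bytesPerRow);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(buffer, width, height,
                                                 8, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (!context) { free(buffer); return nil; }

    // Drawing decodes the image (BMP, JPEG, PNG, ...) into the known layout.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);

    return [NSData dataWithBytesNoCopy:buffer
                                length:height * bytesPerRow
                          freeWhenDone:YES];
}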

Major speed issues with imageWithContentsOfFile

In my application I'm creating a large image dynamically and then loading it up for display in my image explorer class. Because I can't add new images to the bundle at run time, it seems I have to use imageWithContentsOfFile - however, this gives me major speed issues further down the line.
The way my image explorer works is that it takes in an image, splits it up into tiles, caches those tiles and then only loads those tiles into memory for display that need to be shown on the screen. Using a bunch of NSLogs, I've managed to find out where all the slowdown is. It's not in the imageWithContentsOfFile function itself, it's when I try to call this line:
CGContextDrawImage(context_ref, CGRectMake(0, 0, imgWidth, imgHeight), tileImage);
This is when I'm writing the tile to the cache file. tileImage is a CGImageRef that is returned from CGImageCreateWithImageInRect, which is how I get subsets of my larger image to save separately.
The odd thing is that splitting up a large image this way takes about 45 seconds (!), but when I split up an image from the bundle using imageNamed rather than imageWithContentsOfFile, it takes only about 2 seconds.
Anyone have any ideas? Thanks in advance :)
I think you should split up your image ahead of time, because CGContextDrawImage works on the fully loaded tileImage. If your tileImage is 8 MB, your app must load all 8 MB into memory, which takes a long time and may cause memory issues.
If you want to use a single big image and can wait for it to load, one solution is to load it on another thread. That avoids locking the UI while the big image loads.
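A minimal sketch of that background-loading idea using GCD (available on iOS 4+); path and imageView here are placeholders:

#import <UIKit/UIKit.h>

static void LoadBigImageAsync(NSString *path, UIImageView *imageView)
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Decode off the main thread so the UI stays responsive.
        UIImage *bigImage = [UIImage imageWithContentsOfFile:path];
        dispatch_async(dispatch_get_main_queue(), ^{
            // UIKit must only be touched on the main thread.
            imageView.image = bigImage;
        });
    });
}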
An 8 MB JPG will use more than 8 MB of memory once loaded; UIImage presumably keeps an uncompressed format for fast drawing.
imageNamed uses caching, and may reduce the amount of scaling.
UIImage is immutable. imageNamed may take advantage of this and return a reference to a cached image rather than loading and creating a new one, wherever you load your image from.
If you create images yourself, you can set up your own in-memory caching scheme and pass references around in many cases, then purge the cache when you receive a memory warning.
If you need to scale the image and the size is static, determine the size to draw and create a UIImage using imageWithCGImage:scale:orientation:, or approach the problem in a similar way using Core Graphics APIs directly.
Beyond that, hold onto and reuse what you need, and use a profiler to balance your allocations and measure timings.
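As a sketch of that in-memory caching scheme (all names are illustrative): cached UIImages are handed out by reference, since UIImage is immutable, and the cache is purged on a memory warning:

#import <UIKit/UIKit.h>

@interface TileCache : NSObject {
    NSMutableDictionary *_cache;
}
+ (TileCache *)sharedCache;  // call from the main thread
- (UIImage *)tileForKey:(NSString *)key loader:(UIImage *(^)(void))loader;
@end

@implementation TileCache

+ (TileCache *)sharedCache {
    static TileCache *shared = nil;
    if (!shared) shared = [[TileCache alloc] init];
    return shared;
}

- (id)init {
    if ((self = [super init])) {
        _cache = [[NSMutableDictionary alloc] init];
        // Purge automatically when the system signals memory pressure.
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(purge)
                   name:UIApplicationDidReceiveMemoryWarningNotification
                 object:nil];
    }
    return self;
}

- (UIImage *)tileForKey:(NSString *)key loader:(UIImage *(^)(void))loader {
    UIImage *image = [_cache objectForKey:key];
    if (!image) {
        image = loader();  // decode only on a cache miss
        if (image) [_cache setObject:image forKey:key];
    }
    return image;  // shared, immutable reference
}

- (void)purge {
    [_cache removeAllObjects];
}

@end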

Which type of data is returned by the following function?

CFDataRef CreateDatafromImage(UIImage *image)
{
    return CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
}
1. Binary image data
2. Raw pixel data
3. Compressed image data
4. ASCII image data
I guess the closest answer would be 2) Raw pixel data. Though, to be honest, I don't really see what the difference would be between binary image data and raw pixel data. As for the third choice, compressed image data, I suppose that could refer to whether the returned NS/CFData object represents the compressed JPEG data (say, 100 KB) as it exists in the file, or the data in its uncompressed form (say, 24-bit RGB, which might be 280 KB). In that case, you could say it represents the data in its "uncompressed" form.
But then, how exactly are you defining "compressed"? For example, say you have an image saved with the following layout: 16 bits per pixel RGB, kCGImageAlphaNoneSkipFirst, like the last example in the bitmap-layout figure of the Quartz 2D guide. Compared to the other layouts in that figure, you could think of this layout as being "compressed" in some sense (see Color Spaces and Bitmap Layout).
So, to sum up, by the time you've obtained a CGImageRef, the image is in a "native representation" that Quartz understands. The data returned from that method is the raw pixel data; the data isn't in "JPEG format", or "PNG format", or "TIFF format", etc. You can use the inquiry functions to gather information about what combination of image channels, alpha channels, and bit depth the image has: CGImageGetBitmapInfo(), CGImageGetBitsPerComponent(), CGImageGetBitsPerPixel(), etc.
Dealing with image formats like JPEG, PNG, TIFF, etc. is abstracted into other APIs and types such as CGImageSourceRef, CGDataProviderRef, CGImageDestinationRef, and CGDataConsumerRef. See Moving Data Into Quartz 2D and Moving Data Out Of Quartz 2D.
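For example, a small sketch using those inquiry functions (the function name is made up):

#import <UIKit/UIKit.h>

static void LogPixelLayout(UIImage *image)
{
    CGImageRef cgImage = image.CGImage;

    // Ask Quartz how the image's pixel data is laid out.
    size_t bitsPerComponent = CGImageGetBitsPerComponent(cgImage);
    size_t bitsPerPixel     = CGImageGetBitsPerPixel(cgImage);
    size_t bytesPerRow      = CGImageGetBytesPerRow(cgImage);
    CGBitmapInfo info       = CGImageGetBitmapInfo(cgImage);

    NSLog(@"%zu bits/component, %zu bits/pixel, %zu bytes/row, bitmapInfo=%u",
          bitsPerComponent, bitsPerPixel, bytesPerRow, (unsigned)info);

    // Length of whatever buffer the provider hands back.
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    NSLog(@"provider data: %ld bytes", (long)CFDataGetLength(data));
    CFRelease(data);
}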
Uh... A CFDataRef object?
The documentation is here: http://developer.apple.com/library/ios/#documentation/CoreFoundation/Reference/CFDataRef/Reference/reference.html
It's an object you can use as NSData or CFData interchangeably.
Internally a CFData is created (with CGDataProviderCopyData) from the return value of the CGImageGetDataProvider call.
Good luck :)
It is covered in the docs (which is one of the first hits in Google).
This particular technical note covers it in detail.
http://developer.apple.com/library/mac/#qa/qa2007/qa1509.html

How can I adjust the RGB pixel data of a UIImage on the iPhone?

I want to be able to take a UIImage instance, modify some of its pixel data, and get back a new UIImage instance with that pixel data. I've made some progress with this, but when I try to change the array returned by CGBitmapContextGetData, the end result is always an overlay of vertical blue lines over the image. I want to be able to change the colors and alpha values of pixels within the image.
Can someone provide an end-to-end example of how to do this, starting with a UIImage and ending with a new UIImage with modified pixels?
This might answer your question: How to get the RGB values for a pixel on an image on the iphone
I guess you'd follow those instructions to get a copy of the data, modify it, and then create a new CGDataProvider (CGDataProviderCreateWithCFData), use that to create a new CGImage (CGImageCreate) and then create a new UIImage from that.
I haven't actually tried this (no access to a Mac at the moment), though.
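Untested as well, but here is a rough end-to-end sketch of that idea. It uses the bitmap-context route the asker already started down (CGBitmapContextCreate / CGBitmapContextGetData) rather than CGDataProviderCreateWithCFData and CGImageCreate, since the context fixes the pixel layout for you; the tint effect and function name are just examples:

#import <UIKit/UIKit.h>

static UIImage *ImageByTintingRed(UIImage *source)
{
    CGImageRef cgImage = source.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bytesPerRow = width * 4;  // 8-bit RGBA

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (!context) return nil;

    // Decode the source into the known RGBA layout.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    // Mutate the pixel array in place; here, zero the green and blue channels.
    unsigned char *pixels = CGBitmapContextGetData(context);
    for (size_t i = 0; i < height * bytesPerRow; i += 4) {
        // pixels[i] is R, pixels[i + 3] is A (premultiplied)
        pixels[i + 1] = 0;  // G
        pixels[i + 2] = 0;  // B
    }

    // Wrap the modified pixels in a new CGImage, then a new UIImage.
    CGImageRef modified = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    UIImage *result = [UIImage imageWithCGImage:modified];
    CGImageRelease(modified);
    return result;
}

Specifying the context's layout explicitly like this may also explain the vertical-lines artifact: if the bytes-per-row or pixel format you assume doesn't match what CGBitmapContextGetData actually gives you, writes land on the wrong channels.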