WebP image format on iOS - iPhone

I’m currently researching the possibility of using Google’s WebP image format in our iOS software.
I found it’s not hard to decode WebP into RGBA8888 as needed using Google’s C library.
However, I’d like to create an implementation comparable to UIImage in terms of both API and performance.
Q1. Is it possible in iOS to develop an image decoder so that native imageWithData and other APIs will read the new format?
Q2. If no, what API does UIImageView (and other framework-provided controls) use to draw the UIImage? Is it public (e.g. drawInRect/drawAtPoint) or internal?
Can I inherit from UIImage, override a few methods (such as +imageWithContentsOfFile, +imageWithData, +imageNamed, -drawInRect, -drawAtPoint), and have my WPImage objects behave well with SDK-provided APIs?
Q3. If every instance of my hypothetical WPImage class subscribes to UIApplicationDidReceiveMemoryWarningNotification (to flush the RGBA image buffer, leaving the much smaller original WebP data in RAM), won’t it hurt performance much?
The software we’re developing may easily have hundreds of different images in RAM.
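For concreteness, here is the kind of pattern I have in mind for Q3. This is only a sketch; WPImage, its properties, and the lazy re-decode are all hypothetical:

#import <UIKit/UIKit.h>

// Hypothetical WPImage-style wrapper: keeps the small compressed WebP
// bytes alive and drops the big decoded image under memory pressure.
@interface WPImage : NSObject
@property (nonatomic, strong) NSData *webpData;       // small, always kept
@property (nonatomic, strong) UIImage *decodedImage;  // large, flushable
@end

@implementation WPImage
- (instancetype)initWithWebPData:(NSData *)data {
    if ((self = [super init])) {
        _webpData = data;
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(didReceiveMemoryWarning:)
                   name:UIApplicationDidReceiveMemoryWarningNotification
                 object:nil];
    }
    return self;
}

- (void)didReceiveMemoryWarning:(NSNotification *)note {
    self.decodedImage = nil; // re-decode lazily from webpData on next draw
}

- (void)dealloc {
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}
@end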

I made a full (decode/encode) UIImage wrapper for WebP.
https://github.com/shmidt/WebP-UIImage

There is an example: https://github.com/carsonmcdonald/WebP-iOS-example

WebP image support is coming in iOS 14. Update your devices.

Another example I've written uses the work that Carson McDonald started, but in a reusable fashion. Basically you add the WebP.framework to your project (included in my example, or creatable using the instructions Carson has on his website) and then you can do something like this:
#import "UIImage+WebP.h"
...
self.imageView.image = [UIImage imageFromWebP:@"path/to/image.webp"];
// or
[self.button setImage:[UIImage imageFromWebP:@"path/to/image.webp"]
             forState:UIControlStateNormal];
If you want to take a look at my example, go here: https://github.com/nyteshade/iOSWebPWithAlphaExample

First, decode the WebP pixels into a buffer. Then call CGDataProviderCreateWithData() to create a "data provider" for Core Graphics, and call CGImageCreate(), passing in the provider and all the needed arguments. Finally, invoke [UIImage imageWithCGImage:imageRef] to create a UIImage instance.
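Putting those steps together, a minimal sketch, assuming libwebp's decode.h is available; error handling is simplified and the function name is illustrative:

#import <UIKit/UIKit.h>
#import "webp/decode.h"

// Release callback for the data provider: frees the buffer that
// WebPDecodeRGBA allocated (use free() on very old libwebp versions
// that lack WebPFree).
static void ReleaseWebPBuffer(void *info, const void *data, size_t size) {
    WebPFree((void *)data);
}

// Builds a UIImage from raw WebP bytes, following the steps above.
UIImage *UIImageFromWebPData(NSData *webpData) {
    int width = 0, height = 0;
    uint8_t *rgba = WebPDecodeRGBA(webpData.bytes, webpData.length,
                                   &width, &height);
    if (rgba == NULL) return nil;

    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, rgba, (size_t)width * height * 4,
                                     ReleaseWebPBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef =
        CGImageCreate(width, height,
                      8,                  // bits per component
                      32,                 // bits per pixel
                      (size_t)width * 4,  // bytes per row
                      colorSpace,
                      kCGBitmapByteOrderDefault | kCGImageAlphaLast, // RGBA
                      provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;
}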

SDWebImage is a library that provides an async image downloader and can easily be extended to support WebP via SDWebImageWebPCoder.
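A short sketch of the wiring, following the SDWebImageWebPCoder README (the image URL is illustrative):

#import <SDWebImage/SDWebImage.h>
#import <SDWebImageWebPCoder/SDImageWebPCoder.h>

// Register the WebP coder once, e.g. at app launch.
[[SDImageCodersManager sharedManager] addCoder:[SDImageWebPCoder sharedCoder]];

// After that, WebP URLs load like any other image format.
[self.imageView sd_setImageWithURL:
    [NSURL URLWithString:@"https://example.com/image.webp"]];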

Related

Convert video to GIF

I'm building an iOS app which requires me to allow users to record a 15-second clip (with UIImagePickerController, for example) and then convert it into an animated GIF. How could I achieve this? Is there any library/framework available for such a task?
Thanks!
This gist may help you.
https://gist.github.com/mayoff/4969104
It shows how to export frames into a GIF image.
I don't believe there is any existing library that would do a straight conversion for you. There are a lot of libraries for displaying animated GIFs - far fewer native Objective-C libraries for creating them.
Fortunately, iOS does have support for saving as GIFs. There's an existing StackOverflow answer that covers how to create animated GIFs in-depth here:
Create and export an animated gif via iOS?
...there's also a library on GitHub that abstracts the lower-level stuff away, although it's not been maintained for a while (link here).
All you'll need to do is create an array of the frames you want to convert into your GIF, as sketched below. I strongly recommend you don't try to convert every single frame in your 15-second video, if only because you'll end up with a very large GIF at a frame rate that's too high. You would be better off picking every other frame, or even every third or fourth frame, from your video sample. Capturing images from video is also pretty well documented on iOS.
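For reference, a minimal sketch of the ImageIO approach those links describe. The 0.1 s frame delay, the infinite loop count, and the function name are illustrative:

#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h> // kUTTypeGIF

// Writes an array of UIImage frames out as an animated GIF at fileURL.
void WriteAnimatedGIF(NSArray *frames, NSURL *fileURL) {
    NSDictionary *fileProps =
        @{(__bridge id)kCGImagePropertyGIFDictionary:
              @{(__bridge id)kCGImagePropertyGIFLoopCount: @0}}; // 0 = loop forever
    NSDictionary *frameProps =
        @{(__bridge id)kCGImagePropertyGIFDictionary:
              @{(__bridge id)kCGImagePropertyGIFDelayTime: @0.1}}; // ~10 fps

    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)fileURL, kUTTypeGIF,
                                        frames.count, NULL);
    CGImageDestinationSetProperties(dest, (__bridge CFDictionaryRef)fileProps);
    for (UIImage *frame in frames) {
        CGImageDestinationAddImage(dest, frame.CGImage,
                                   (__bridge CFDictionaryRef)frameProps);
    }
    CGImageDestinationFinalize(dest);
    CFRelease(dest);
}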
I recently created a library called Regift for converting videos to GIFs on iOS. Hopefully it will help anyone coming to this in the future :)

Decompress images to use on a UITableView

I'm currently working on a project where I need to display large images in a UITableView. This is a very common problem for a lot of developers; reading through their threads, I arrived at the following procedure:
NOTE: The large images I refer to are all 300x300 px (600x600 px retina) JPEGs of about 200 KB.
1. Create an NSOperationQueue;
2. Download the images asynchronously (each image is 600x600 px, corresponding to the @2x image);
3. Resize and create the non-retina image (300x300 px);
4. Decompress both images;
5. Store all images in an NSCache;
After all those procedures have finished, I update my UITableView on the main thread. I'm using a UITableViewCell subclass to draw all my needed content (as seen in Apple's sample code). The main problem is that I'm confused about step 4 (decompressing images). My doubts:
NOTE: I'm currently storing my decompressed images in an NSCache.
Should I decompress the images and store them as UIImages or as NSData?
How should I store the decompressed images? (NSCache, NSMutableArray...)
What is the best way to pass the decompressed images to my UITableViewCell subclass?
NOTE: I'm using the decompression code presented here: link
You can't really store a UIImage object to disk, but you can turn it into NSData using UIImagePNGRepresentation().
Using UIImage will give you caching out of the box; I bet it's the most efficient you can get.
Just put the image into a UIImageView; Apple spent a lot of time on making image rendering fast.
That said, your images are not particularly big, especially for retina devices. I would advise looking at something like the AFNetworking library, which has a complete and tested solution for this problem.
Plus, you can look up the code of AFImageRequestOperation, which does exactly what you need: download, store, cache, reuse.
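Regarding step 4, a minimal sketch of the common "decompress by drawing" trick (run it on the background queue, then cache the returned UIImage in your NSCache; the function name is illustrative):

#import <UIKit/UIKit.h>

// Drawing the image once into an offscreen bitmap context forces JPEG
// decoding to happen now, instead of lazily on the main thread while the
// table view is scrolling.
UIImage *DecompressedImage(UIImage *image) {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    UIImage *decompressed = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return decompressed;
}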

How to develop iOS app with different sets of graphics under Xcode 4.2?

I have this app with one whole set of images, but I would like to add, for example, a personalized set of graphics. Is there an easy way to organize and develop the source code so as not to get lost while switching between these sets of images?
One way I see it is to keep two folders of graphics on my hard drive and, before releasing, replace the current images in the project with the personalized set... is there a better way?
I'm using Xcode 4.2 and iOS 5.0 for this app.
Probably the easiest approach, and the most reliable in terms of maintenance, is to write your own image access function or macro that is controlled in one place globally. Then use it in all cases where you want the flexibility of different images, for example like this. I just got used to this way of assigning an image with stretchableImageWithLeftCapWidth because it adjusts the size nicely.
Of course you may use any other way to assign your images, e.g. directly, without stretchableImageWithLeftCapWidth.
Anyway, this is my suggestion:
NSString *s = [NSString stringWithFormat:@"%@%i", @"imageName", CONST_VERSION];
UIImage *theImage = [UIImage imageNamed:s];
myImageView.image = [theImage stretchableImageWithLeftCapWidth:12 topCapHeight:0];
and you can do
#define CONST_VERSION 1 // or 2
Then name your images like:
myImage1.png
and
myImage2.png
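The same idea wrapped in a macro, so each call site stays short (the macro name is illustrative):

// Builds "myImage1"/"myImage2" style names from CONST_VERSION.
#define VersionedImageNamed(name) \
    [UIImage imageNamed:[NSString stringWithFormat:@"%@%i", (name), CONST_VERSION]]

// Usage:
myImageView.image = [VersionedImageNamed(@"myImage")
    stretchableImageWithLeftCapWidth:12 topCapHeight:0];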

Animations for iPad/iPhone

I am trying to make an app for iPad/iPhone with a lot of animations.
Is there any way to make an app in Objective-C with animations created in other tools, and integrate them?
I ask because I have a lot of skill with tools like After Effects/Illustrator/Flash/etc., so it would be better for me to make the animations with those technologies.
Thanks
If you can export your animations into an HTML5 format, you can certainly display them in a UIWebView. There are a few Flash-to-HTML5 projects available out there now, and new apps like Hype and Purple that can be used to create HTML5 animations "from scratch."
The simplest way to do this is to create the videos in whatever tools you like and then export to a series of PNG images, or use ffmpeg to convert Flash or whatever to a series of images. Then display these PNG images one at a time (and don't load them all at once, or you will use up all the app's memory). If you can deal with a more lossy format, then H.264 can save a lot of space, at the cost of a significantly reduced 4:2:0 YUV type colorspace. If you need lossless video or require an alpha channel, or you want to save a lot of space compared to the PNG method, then you should take a look at AVAnimator. Note that you should not integrate any ffmpeg-related code into your iOS app, due to license issues having to do with static linking of LGPL code.
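A minimal sketch of the one-frame-at-a-time idea, assuming frames bundled as frame0.png, frame1.png, and so on; the class, its imageView/frameIndex properties, and the 15 fps rate are all illustrative:

#import <UIKit/UIKit.h>

@interface FrameAnimationViewController : UIViewController
@property (nonatomic, strong) UIImageView *imageView;
@property (nonatomic, assign) NSInteger frameIndex;
@end

@implementation FrameAnimationViewController
- (void)startFrameAnimation {
    self.frameIndex = 0;
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 15.0 // 15 fps
                                     target:self
                                   selector:@selector(showNextFrame:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)showNextFrame:(NSTimer *)timer {
    NSString *name = [NSString stringWithFormat:@"frame%ld",
                      (long)self.frameIndex];
    NSString *path = [[NSBundle mainBundle] pathForResource:name
                                                     ofType:@"png"];
    if (path == nil) { [timer invalidate]; return; } // ran out of frames
    // imageWithContentsOfFile: does not cache (unlike imageNamed:), so
    // only the current frame stays resident in memory.
    self.imageView.image = [UIImage imageWithContentsOfFile:path];
    self.frameIndex += 1;
}
@end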

Performing iPhone optimization on externally downloaded PNGs

When a PNG is added to an Xcode iPhone project, the compiler optimizes it using pngcrush. Once on the device, the image's rendering performance is very fast.
My problem is that my application downloads its PNGs from an external source at runtime (from Picasa Web albums, using the Google Data APIs). Unfortunately, these images' performance is quite bad. When I do custom rendering on top of the image, it seems 100x slower than its internally stored counterparts. I strongly suspect this is because the downloaded images haven't been optimized.
Does anyone know how I can optimize an externally downloaded PNG at runtime on the iPhone? I'm hoping for a class that does this. I even considered adding pngcrush's source code to my app, which seems drastic. I haven't been able to find a decent answer myself. I'd be very grateful for any help.
Thanks!
Update:
Some folks have suggested that it may be due to the file's size, but it isn't. During my tests, I added a toggle button to switch between using the embedded version and the downloaded version of exactly the same PNG. The only difference is that the embedded one was optimized by 'pngcrush' during compilation. This does some byte-swapping (from RGBA to BGRA) and pre-multiplication of alpha. (http://iphonedevelopment.blogspot.com/2008/10/iphone-optimized-pngs.html)
Also, the performance I'm referring to isn't the downloading, but the rendering. I superimpose custom painting on top of the image (overriding the drawRect method of the UIView), and it's very choppy when the background is the downloaded version, and very smooth when it's the embedded (and therefore optimized) version. Again, it's exactly the same file. The only difference is the optimization, which I'm hoping I can perform on the image at runtime, on the device, after downloading it.
Thanks again for everyone's help!
That link you posted pretty much answers your question.
During the build process, Xcode pre-processes your PNGs so they're in a format that's more friendly to the graphics chip in the iPhone.
PNGs that have not been processed like this will likely use a slower rendering path, one that deals with the non-native format and the fact that the alpha must be computed separately for each color.
So you have two options:
1. Perform the same work that pngcrush does: swap the byte ordering and pre-multiply the alpha. The speed-up may be due to one or both of these.
2. After you have loaded your image, "create" a new image from it. This new image should be in the iPhone's native format and so should render faster. The downside is that it could potentially take up a bit more memory.
E.g.
// Redraw the downloaded image into a fresh context so the result is in
// the device's native format.
CGRect area = CGRectMake(0, 0, oldImage.size.width, oldImage.size.height);
UIGraphicsBeginImageContext(area.size);
[oldImage drawInRect:area];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The fact that you say it "seems" 100x slower indicates that you have not performed any experimentation, but made a guess (it must be the PNG optimization), and are now going down a path based on a hunch.
You should spend time to confirm what the problem is before you try to solve it. My gut says that PNG optimization shouldn't be the issue: that mostly affects the loading of images, but once they are in memory it doesn't matter what file format they were originally in.
Anyway, you should try an A-B comparison: either get your code to load an optimized PNG from somewhere else and see how it compares, or make a test app that just does some drawing on the two PNG types. Once you've confirmed what the problem is, you can figure out if you need to compile pngcrush into your app.
On the surface, it sounds like something else is at play here. Any additional image manipulation should only add time until it's displayed onscreen...
Would it be at all possible to get the server to gzip the images by sending the appropriate HTTP header? (If it even helps file size much, that is.)
Temporarily using the pngcrush source might be a good test as well, just to get some measurements.
Are you storing the PNG at the original downloaded size? If it's a large image, it'll take significantly longer to render.
Well, it seems that a good way to do it (since you can't run pngcrush on the iPhone and expect that to speed it up) would be to make your requests through a proxy that runs pngcrush. The proxy would have the horsepower to actually give you some gain over the 100x pain you feel.
Try pincrush to transform the normal PNG file into a crushed PNG file.
You say you are drawing on top of the image by overriding a UIView's drawRect: method. Are you trying to do some animation by repeatedly drawing the whole image with your custom stuff on top of it?
You might get better results if you put your custom stuff in a separate view or layer, and let the OS deal with compositing the result over the background. The OS will only update the parts of the screen that you actually change, and won't be repainting the entire image as often.
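A minimal sketch of that separation; OverlayView is a hypothetical UIView subclass whose -drawRect: contains your custom painting, and downloadedImage is the image you fetched:

// Put the image in its own view and draw custom content in a transparent
// overlay on top; the OS composites the two layers and only repaints the
// overlay when it changes, leaving the image layer cached.
UIImageView *background = [[UIImageView alloc] initWithImage:downloadedImage];
OverlayView *overlay = [[OverlayView alloc] initWithFrame:background.bounds];
overlay.opaque = NO;
overlay.backgroundColor = [UIColor clearColor];
[background addSubview:overlay];
[self.view addSubview:background];
// Later, invalidate only the overlay, not the whole image:
[overlay setNeedsDisplay];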