CIColorControls wrong brightness - swift

I am changing the brightness value of an image to a negative value and comparing the result with the Apple Photos editor.
Original
Edited by me
Edited by Apple Photo Editor
As you can see, my CIFilter changes the brightness of the white part of the image too, while the Apple editor changes the brightness of the person only.
My code is simple:
filter.setValue(NSNumber(value: -0.4), forKey: kCIInputBrightnessKey)
It makes no difference whether I increase the brightness or decrease it; the brightness of the entire image changes. The Apple editor changes only part of the image.

The "Brightness" slider in Photos is not mapped to the "traditional" brightness value (the one used in CIColorControls). Apple uses more sophisticated algorithms under the hood that, among other things, take the image's content into account. I'm afraid there's no single Core Image filter that can reproduce that result. But it looks like Photos also increased the contrast when reducing the brightness, so you could try that.
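If you want to experiment with that, here is a minimal Core Image sketch; the -0.1 brightness and 1.1 contrast values are arbitrary starting points to tune by eye, not what Photos actually uses:

import CoreImage

// Rough approximation only: combine a smaller brightness reduction with a
// contrast boost instead of relying on brightness alone.
func darkened(_ input: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIColorControls") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(-0.1, forKey: kCIInputBrightnessKey)   // hypothetical value
    filter.setValue(1.1, forKey: kCIInputContrastKey)      // hypothetical value
    return filter.outputImage
}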


How do you get the cropped version of an image using ALAsset?

I'm trying to get the cropped version of an image that's pulled using ALAsset. Specifically, I'm selecting items from the user's Photo Library and then uploading them. The issue is that in the library thumbnail view, iOS shows the cropped version, but when I select that thumbnail and pull the image's asset using ALAsset, I get the full-resolution version.
I did some research and couldn't find anything that helps in getting the coordinates of where the cropping happens.
To test it, you need iOS 5 to edit the image in your library. Select an image in your photo library, choose "Edit", and crop the image. When you then get the ALAsset you'll get the full image, and if you sync with iPhoto, iPhoto also pulls the full image. Also, you can re-edit the image and undo your crop.
This is how I'm getting the image:
UIImage *tmpImage = [UIImage imageWithCGImage:[[asset defaultRepresentation] fullResolutionImage]];
That gives me the full-resolution image, obviously. There's a fullScreenImage method which scales the full-resolution image to the size of the screen. That's not what I want.
The ALAssetRepresentation class has a scale property, but that's a float value, which is also not what I want.
If anyone can tell me where this cropped coordinate system can be found, I'd appreciate it.
Your Options:
Option 1 (ALAssetLibrary)
Use the - (CGImageRef)fullScreenImage method of ALAssetRepresentation.
Pros:
All the hard work is done for you; you get an image that looks just like the one in the Photos app, including cropping and other changes. Easy.
Cons:
The resolution is "screen size", only as big as the device you are using, not the full possible resolution of the cropped image. If this doesn't concern you, then this is the perfect option.
Option 2 (ALAssetLibrary)
Extract the cropping data using the AdjustmentXMP key in the image's metadata (what @tom is referring to). Apply the crop yourself.
Pros:
It is possible to get a cropped image at the best possible resolution.
Cons:
You only get the cropping edits, not any other adjustments (like red-eye).
Who knows what Apple will support in "Edit" mode in the future; you may have to apply more kinds of edits later.
It's complicated: you first have to parse the XMP data to read the crop rectangle, crop the unrotated image, and then apply the rotation.
Option 3 (Wishful Thinking)
Beg Apple to include a method like fullResolutionEditedImage which gives you the best possible quality photo, with all edits applied.
Pros:
Everything magically solved.
Cons:
Apple may never add this method.
Option 4 (UIImagePickerController)
This option only applies if you are using the image picker; you can't use it directly with the asset library.
In the NSDictionary returned by -(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info, you can extract the full-sized, adjusted image from the UIImagePickerControllerOriginalImage key. Save this image somewhere. Then, instead of retrieving the image from the asset library, load the copy you made (see the sketch after this option's pros and cons).
Pros:
You get the full size image, with adjustments
This is the only option Apple gives us for getting the full size image with all adjustments (like red-eye, etc), and not just the crop. This is particularly important in iOS 7 with the introduction of filters that can drastically alter the image.
Cons:
Can only be used with the image picker (not ALAssetRepresentation)
You must keep around a full-sized copy of the image. Depending on the number of such images, the disk usage by your app could grow substantially.
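A minimal Swift sketch of this option; the delegate class name, JPEG quality, and file destination are illustrative choices, not part of the original answer:

import UIKit

// Keep a full-sized, edit-applied copy of the picked photo on disk so it can be
// loaded later instead of re-reading the asset library.
final class PickerDelegate: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        defer { picker.dismiss(animated: true) }

        // The full-size image with the user's Photos edits already applied.
        guard let image = info[.originalImage] as? UIImage,
              let data = image.jpegData(compressionQuality: 0.9) else { return }

        // Arbitrary destination; remember this URL so the copy can be reloaded later.
        let url = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("picked-\(UUID().uuidString).jpg")
        try? data.write(to: url)
    }
}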
Update for iOS 7: you may wish to consider Option 4 or Option 1, as iOS 7 now supports many more operations, such as filters, and your users will probably notice if they are missing. Both of these options support filters (and other edits), with Option 4 giving you a higher-resolution result.
When a photo has been cropped with the iOS Photos app, the cropping coordinates can be found in the ALAssetRepresentation's metadata dictionary. fullResolutionImage will give you the uncropped photo; you have to perform the cropping yourself.
The AdjustmentXMP metadata contains not only the cropping coordinates but also indicates if auto-enhance or remove-red-eyes has been applied.
As of iOS 6.0, CIFilter provides filterArrayFromSerializedXMP:inputImageExtent:error:. You can probably feed the ALAssetRepresentation's AdjustmentXMP metadata into it and apply the resulting CIFilters to the ALAssetRepresentation's fullResolutionImage to recreate the modified image.
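A rough Swift sketch of that idea, assuming the AdjustmentXMP value arrives as an XML string and that the Swift bridging of filterArrayFromSerializedXMP:inputImageExtent:error: keeps the trailing error pointer (check your SDK; error handling and any final crop adjustments are omitted):

import AssetsLibrary
import CoreImage

// Rebuild the Photos-app edits recorded in the asset's "AdjustmentXMP" metadata
// and run them over the full-resolution image.
func applyPhotosEdits(to original: CIImage, from representation: ALAssetRepresentation) -> CIImage {
    guard let xmpString = representation.metadata()?["AdjustmentXMP"] as? String,
          let xmpData = xmpString.data(using: .utf8) else {
        return original   // no edits recorded for this asset
    }
    let filters = CIFilter.filterArray(fromSerializedXMP: xmpData,
                                       inputImageExtent: original.extent,
                                       error: nil)
    var result = original
    for filter in filters {
        filter.setValue(result, forKey: kCIInputImageKey)
        result = filter.outputImage ?? result
    }
    return result
}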
Be aware that the iOS Photos app handles JPG and RAW images differently. For JPG images, a new ALAsset with the XMP metadata is stored in the Camera Roll. For RAW images, an ALAssetRepresentation is added to the original ALAsset. I'm not sure whether this additional ALAssetRepresentation is the modified image and whether it has the AdjustmentXMP metadata. In addition to JPG and RAW images, you should also test the behaviour for RAW+JPG images.

How to remove the background of an image at runtime in the iPhone SDK

I have an image in a UIImageView. The images are generally of clothes or accessories captured with the camera against plain backgrounds. Now, I have to give users a way to remove the background from the image being shown, something like what is shown in the picture here: as the slider moves, more and more of the background is removed, similar to the 'Instant Alpha' brush in the Preview application on Mac OS X. I want to do this in a native iPhone app.
I know I'll need some image-processing algorithm to do this. Does anyone have anything helpful I can refer to or use in order to get this done? Thank you so much in advance.
You can render your image into a bitmap context, then change all the pixels you need to the color you want, get an image back from the context, and display it again.
This link should help you get the color of a pixel in a context.
Note that this method is quite slow, so I think you should remember the positions of all the pixels you need to change, to make your app a bit faster.
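As a rough illustration of that render-inspect-repaint idea (not a real instant-alpha brush: the reference background color and tolerance are arbitrary, and a real implementation would limit the work to the brushed region):

import UIKit

// Draw the image into a bitmap context, make every pixel close to a reference
// background color transparent, then rebuild a UIImage from the context.
func removeBackground(from image: UIImage,
                      similarTo target: (r: UInt8, g: UInt8, b: UInt8),
                      tolerance: Int) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height
    guard let context = CGContext(data: nil, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue),
          let buffer = context.data else { return nil }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    let pixels = buffer.bindMemory(to: UInt8.self, capacity: width * height * 4)

    for i in stride(from: 0, to: width * height * 4, by: 4) {
        let dr = abs(Int(pixels[i])     - Int(target.r))
        let dg = abs(Int(pixels[i + 1]) - Int(target.g))
        let db = abs(Int(pixels[i + 2]) - Int(target.b))
        if dr + dg + db < tolerance {   // crude color distance; replace with something smarter
            pixels[i] = 0; pixels[i + 1] = 0; pixels[i + 2] = 0; pixels[i + 3] = 0
        }
    }
    guard let output = context.makeImage() else { return nil }
    return UIImage(cgImage: output, scale: image.scale, orientation: image.imageOrientation)
}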

How to create background image for site

I have just made a background image for my website. But I'm confused: if a user has a resolution different from mine, they will see the wrong image, right? How can I fix this problem, and what resolution is normal for a browser?
They will see the correct image, but it will repeat itself.
You have two options:
Create an image that looks nice when it's repeated (tiled). Then you just have to swap in that image.
Otherwise, disable image repeating, so that the image is displayed only once.
see: http://www.w3schools.com/cssref/pr_background-repeat.asp
It usually looks nice if you define an appropriate background-color for the area where the image ends.
Otherwise there are more advanced ways for full scale background images: http://css-tricks.com/3458-perfect-full-page-background-image/
The "normal" screen resolutions for web pages range from a smartphone screen (320x480) to 30" screens (2560x1600). Those are the two (not so uncommon!) extremes. The average is between 1024x768 and 1920x1200.

How to create facebook wall posts and add retina version of picture

We're using the Facebook Graph API http://developers.facebook.com/docs/reference/api/post/ and adding the picture parameter. Our picture is a 30x30 pixel image, which is exactly the size we want for the Facebook web version. However, the image is pixelated in the FB mobile app on an iPhone 4 (Retina display).
Is there any way to serve a 60x60 high resolution image, but render it always at 30x30 for facebook wall posts?
Well, as of this moment, here is what I have found out, along with a 'solution' that has worked for me in the time I've had to test and play with this. For readers who need a quick answer: I don't have an exact solution, but essentially, your 30x30 image is being scaled to 90x90, a 60x60 image is also being scaled to 90x90, and I cannot find a way around this.
Below is what I have tried. Feel free to add input.
Take your feed image, and stroke a 2-5px black line around the frame of the image.
Load up your app and initiate a wall post on the device. With the image present, take a screenshot. Mail yourself the image. Open it up in Photoshop (or another photo-editing program). Use a marquee tool to outline the image, cut it out of the screenshot, and paste it as a new image. What size is it? 90x90, right? (And obviously 180x180 if the image is retina.)
Create a 90x90 image. Copy your original 30x30 image and paste it anywhere you want within the new 90x90 image's frame. Upload it to the URL parameter's location. Re-run your app. By re-running it, I mean you have to shut it down completely; it appears the SDK caches the image upon first launch of the feed, and you can clear that cache by closing the app completely and re-running it. When you do, you will see significant improvements in the look of the image. It may not be a retina image, but at least it won't be 'fuzzy ugly'. At this point, it comes down to how clean the illustration's lines were in the design process, which reduces the aliasing produced by the conversion to a raster graphic. I'm also not sure whether a different resampling method would produce even better results.
Some things I've tried:
I've also saved it as a PNG file with no transparency: 144 ppi at 90x90. In other words, save your 90x90 image with a higher resolution (pixels per inch). Remember not to constrain proportions as you resize. Note that if you are using Adobe products (e.g. Photoshop), don't use 'Save for Web'; just use 'Save As…', as this retains the ppi you specified. That said, I don't see much of a difference in the displayed quality going this route, and it's best to keep the file size down, as this increases the overall file size by about 500% or more.
I've tried variations of hosting the image at twice the size (180x180) in the same hosted folder and naming it image@2x.png and image-large.png (just for the heck of it). This isn't really solving the problem either.
Some other things I have not tried:
Monitoring your web server traffic for "not found" errors, to see if FB is trying to access a potential alternate resource when grabbing your image for display. The wall feed box that comes up is a web view, meaning web graphics. (It's FB's web page, meaning their rules, and I doubt the page's source is available to dabble with from within the SDK, so…)
Look at the HTML of the feed itself with the Safari browser:
Inspecting the HTML of the final resulting image that is posted on my FB wall, I can see this:
<img class="img" src="http://platform.ak.fbcdn.net/www/app_full_proxy.php?app=153675474666495&v=1&size=z&cksum=773bba91f6146b2463eed0a0bb77dc42&src=http%3A%2F%2Fwww.thumbwizards.com%2Fspeakinapps%2Fgraphics%2Fboxed%2Faussie.png" alt="">
I am wondering:
Within HTML5, isn't there some JavaScript toolkit mechanism for displaying retina graphics from a web page?
Would it be possible to have that code run when grabbing the URL of the image (meaning the image URL would act as a pointer to the code)? I haven't tried playing with this, since my logic tells me that, per the URL above, FB is essentially taking control of the image at this point. I have noticed (and not waited long enough to confirm) that the image is apparently cached: posting to the wall with a new image sometimes results in the older image still being used (and yes, I've cleared my browser cache), so perhaps it's simply cached in another location.
If there is another, unpublished parameter for the image, I have not stumbled across it yet.
Can anyone figure out, through the source of http://platform.ak.fbcdn.net/www/app_full_proxy.php, whether this PHP file is part of a publicly available image processor we can access to see what could be done?
Can anyone mention an app that uses a retina graphic in their feed post?
Just thoughts really; I've decided to not really give a crop, and if you've made it this far, thanks for tuning in. So, Sulf, your 30x30 is being scaled to 90x90, making it UGLY!
Good luck. If you figure anything else out, let me know!
Mark
Apple specifies that if you want the Retina effect in your iOS app, the images you use should follow this format, i.e.:
sampleImag.png - 57x57 (size), 163 (DPI)
sampleImag@2x.png - 114x114 (size), 326 (DPI)
When you use these specific graphic images, your app will show the Retina effect on the iPhone 4 and later generations.
Just point your code to a larger scaled image and Facebook will take care of the rest.

Should all .png files be sized in powers of two on the iPhone?

When creating a UIImage from a .png to be displayed on a button, view/cell background, etc. in a standard iPhone application, should all of them have power-of-two dimensions for optimization reasons?
As others have said, no - but you should generally use images with even dimensions. This is because when views are positioned with the center property, it'll position an odd-dimensioned image at some half-pixel position. This will cause the image to appear blurry.
As long as you're aware of this it shouldn't really cause you any problems, but it's still a good idea to use even sizes just to be on the safe side.
(This applies for UIKit, not necessarily OpenGL)
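To illustrate the half-pixel problem: if you do end up with an odd-sized image centred on a whole-point position, snapping the frame to integral values avoids the blur (the asset name and coordinates here are made up):

import UIKit

let imageView = UIImageView(image: UIImage(named: "badge-31x31")) // hypothetical 31x31 image
imageView.center = CGPoint(x: 100, y: 100)   // frame origin becomes (84.5, 84.5) -> blurry
imageView.frame = imageView.frame.integral   // snap to whole points so it renders crisply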
Apple uses odd and arbitrary dimensions for all the images it adds to the interface on your behalf, such as system toolbar items. The best optimization you can do is anything that reduces compositing, which basically means setting the opaque property of views and layers whenever possible.
If you have the choice between a transparent png that will be composited over a static background and an opaque png with the background already included, you have a chance to optimize. When the images will be sliding around or the background will change, you have to composite, otherwise choose opaque.
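A tiny illustration of that compositing hint; the asset name is made up, and this only helps when the PNG genuinely has no transparent pixels:

import UIKit

let background = UIImageView(image: UIImage(named: "cell-background")) // hypothetical opaque PNG
background.isOpaque = true              // tell UIKit there is nothing to blend underneath
background.backgroundColor = .white     // opaque views should also have a solid background color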
Here is an article on optimizing iPhone images; it basically tells you why to use PNG files. The size shouldn't matter unless you are using OpenGL ES.
No, this will have little or no benefit. I usually just do my own optimization using Photoshop's "Save for Web & Devices" option.
Please see http://iphonedevelopment.blogspot.com/2008/10/iphone-optimized-pngs.html for a detailed explanation of the iPhone's pre-optimization of PNGs.