How to achieve Instagram-style image filter effects with Core Image - Swift

I have tried a few image filter effects in Core Image with SwiftUI, such as grouping different filters and connecting each filter's intensity to a Slider.
I'm wondering whether there is a way to achieve image filter effects similar to Instagram's. A single Core Image filter doesn't look as fancy as Instagram's filters. I guess chaining several Core Image effects could reproduce some of them, but I simply don't know which ones to chain, and I have no idea how to recreate filters like "Sierra" or "Willow" from their names alone.
So is there a way to achieve those filter effects in Core Image, or is a third-party library needed? Any live example, project, framework/library, or hint is appreciated.

Most of Instagram's filters are achieved with simple lookup tables, i.e., a map (image) that replaces each color with a mapped color. You can use the CIColorCube filter to achieve something like that in Core Image. You can find example lookup tables on the internet (they usually come in the form of images), and there is probably also example code for converting them into the right format.
Another thing Instagram does is add a vignette effect. There is also a Core Image filter for that: CIVignetteEffect.
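A minimal sketch of the chain described above, assuming a programmatically built color cube (here an arbitrary "warm" tint just for illustration; a real Instagram-style look would derive the cube data from a lookup-table image):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Build a small color cube. Real filters usually load this data from a
// lookup-table image instead of computing it; the warm tint here is an
// arbitrary example.
func makeWarmCubeData(dimension: Int = 32) -> Data {
    var cube = [Float]()
    cube.reserveCapacity(dimension * dimension * dimension * 4)
    for b in 0..<dimension {
        for g in 0..<dimension {
            for r in 0..<dimension {
                let rf = Float(r) / Float(dimension - 1)
                let gf = Float(g) / Float(dimension - 1)
                let bf = Float(b) / Float(dimension - 1)
                // Nudge reds up and blues down for a warm look.
                cube.append(min(rf * 1.1, 1.0))
                cube.append(gf)
                cube.append(bf * 0.9)
                cube.append(1.0) // alpha
            }
        }
    }
    return cube.withUnsafeBufferPointer { Data(buffer: $0) }
}

// Chain CIColorCube with CIVignetteEffect, as the answer suggests.
func applyInstagramStyle(to input: CIImage) -> CIImage? {
    let cubeFilter = CIFilter.colorCube()
    cubeFilter.inputImage = input
    cubeFilter.cubeDimension = 32
    cubeFilter.cubeData = makeWarmCubeData()
    guard let mapped = cubeFilter.outputImage else { return nil }

    let vignette = CIFilter.vignetteEffect()
    vignette.inputImage = mapped
    vignette.center = CGPoint(x: input.extent.midX, y: input.extent.midY)
    vignette.radius = Float(min(input.extent.width, input.extent.height)) / 2
    vignette.intensity = 1.0
    return vignette.outputImage
}
```

The vignette center, radius, and intensity are example values you would tune per filter.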

Related

Is it possible to have a ListView generate side by side boxes?

I'm trying to achieve something similar to the image below. I'm pulling some data from Firebase to generate the content. Is it possible to achieve this with a ListView, or am I better off trying to put something together using Columns, Rows, and some functions? I already have the design skeleton done, but I'm just curious whether you can do this with a ListView or something similar.
I've tried searching and couldn't really find a good answer.

How to get brightness level from a UIImage

How can I get the brightness level from a UIImage? Actually, I am trying to get the brightness level and then set it to some other level (using the GPUImage framework), so that I can pass that image to the Tesseract OCR SDK.
To modify the brightness of an image, the solution you're searching for is applying a filter over your original image.
This can be achieved by using the filter class provided in the Core Image framework, more specifically CIFilter. Take a quick peek at the documentation; it's pretty straightforward.
If you don't wish to use Core Image, there are several examples on the web with already-written filters; however, I think the CIFilter class should be enough.
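A minimal sketch of the CIFilter approach, using the built-in CIColorControls filter to shift brightness (the function name and the brightness amount are illustrative; brightness is typically in the range -1.0...1.0):

```swift
import CoreImage
import UIKit

// Adjust the brightness of a UIImage with CIColorControls.
func adjustBrightness(of image: UIImage, by amount: Float) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIColorControls") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(amount, forKey: kCIInputBrightnessKey)

    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

Measuring the current brightness first (e.g. by averaging the luminance with CIAreaAverage) would let you pick the adjustment amount dynamically.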

Can we identify image having any overlay using quartz?

I have to work with an image coming from a server. We want to process that image to find out whether it contains any specific color region.
Is there any way the image coming from the server could be an overlaid image, so that on the device side we can process it to check whether it contains those overlays?
I have never worked with Quartz before. Can anybody suggest some other solution?
There are several ways to do this, depending on how complex you wish to get. OpenCV has image blocking, which can segregate images into distinct regions. Or look into image-processing tools, especially blob extraction, and then examine the general pixel coloration within a single blob.
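For the simple case of detecting a specific color region without OpenCV, a plain Core Graphics sketch would be to draw the image into an RGBA buffer and count pixels close to a target color. The tolerance and pixel-count thresholds below are arbitrary example values:

```swift
import CoreGraphics

// Returns true if the image contains at least `minimumPixelCount` pixels
// within `tolerance` of the target color on each channel.
func containsColor(_ cgImage: CGImage,
                   red: UInt8, green: UInt8, blue: UInt8,
                   tolerance: Int = 30,
                   minimumPixelCount: Int = 100) -> Bool {
    let width = cgImage.width, height = cgImage.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    guard let context = CGContext(
        data: &pixels,
        width: width, height: height,
        bitsPerComponent: 8,
        bytesPerRow: width * 4,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return false }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    var matches = 0
    for i in stride(from: 0, to: pixels.count, by: 4) {
        if abs(Int(pixels[i])     - Int(red))   <= tolerance,
           abs(Int(pixels[i + 1]) - Int(green)) <= tolerance,
           abs(Int(pixels[i + 2]) - Int(blue))  <= tolerance {
            matches += 1
            if matches >= minimumPixelCount { return true }
        }
    }
    return false
}
```

This only tells you the color is present somewhere; for locating connected regions (blobs), a library like OpenCV is the better fit.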

Multiple changeable areas of an image on an iPhone

I have an image with a picture of a person, and I want to let the user pick an area of the person and change its color. But how can I best create a multi-mask image?
E.g., the user should be able to change the color of a leg or a hand.
I am using Titanium Appcelerator, and right now I have a solution with buttons placed over the image, which is neither pretty nor an acceptable solution.
The KitchenSink example has only one area that can be changed.
The only solution I found for working with sections of an image is to divide the image into different views then use a vertical or horizontal view to glue them together. Sounds like you took a similar approach using buttons.
Another option might be to use one of the jQuery image libraries within the webview. This most likely will have a performance penalty though.

ColorMap in iPhone Core Graphics?

I have a monochrome image with 256 levels of grayscale. I want to map each level to a specific color and apply to the image to get a colored image as a result. How can I do it?
To be more precise here is the pair in Java 2D API that I need to find replacement for:
http://java.sun.com/javase/6/docs/api/java/awt/image/LookupOp.html
http://java.sun.com/javase/6/docs/api/java/awt/image/LookupTable.html
And here is a description of how it works in Java. I need to build the same thing on the iPhone.
Thanks!
Looks to me like the kind of thing you'd want to do with a Core Image filter:
http://developer.apple.com/mac/library/documentation/GraphicsImaging/Reference/CoreImageFilterReference/Reference/reference.html
If there isn't a suitable one already there, it wouldn't be very tricky to write.
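A rough Core Graphics equivalent of Java's LookupOp/LookupTable pair, written as a sketch: read the grayscale pixels, push each level through a 256-entry RGB table, and build a new image. The function name and the tuple-based table type are illustrative choices, and the table is assumed to have exactly 256 entries:

```swift
import CoreGraphics

// Map each grayscale level (0...255) to a color from a 256-entry table.
func applyColorMap(to grayImage: CGImage,
                   table: [(r: UInt8, g: UInt8, b: UInt8)]) -> CGImage? {
    let width = grayImage.width, height = grayImage.height

    // Draw the source into an 8-bit grayscale buffer.
    var gray = [UInt8](repeating: 0, count: width * height)
    guard let grayCtx = CGContext(
        data: &gray,
        width: width, height: height,
        bitsPerComponent: 8,
        bytesPerRow: width,
        space: CGColorSpaceCreateDeviceGray(),
        bitmapInfo: CGImageAlphaInfo.none.rawValue)
    else { return nil }
    grayCtx.draw(grayImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    // Push every pixel through the lookup table into an RGBA buffer.
    var rgba = [UInt8](repeating: 0, count: width * height * 4)
    for (i, level) in gray.enumerated() {
        let entry = table[Int(level)]
        rgba[i * 4]     = entry.r
        rgba[i * 4 + 1] = entry.g
        rgba[i * 4 + 2] = entry.b
        rgba[i * 4 + 3] = 255
    }

    guard let rgbaCtx = CGContext(
        data: &rgba,
        width: width, height: height,
        bitsPerComponent: 8,
        bytesPerRow: width * 4,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    return rgbaCtx.makeImage()
}
```

The CIColorCube filter mentioned in the first answer can achieve the same mapping on the GPU if performance matters.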