How can I get the brightness level from a UIImage? Actually, I am trying to get the brightness level and then set it to some other level (using the GPUImage framework), so that I can pass the image to the Tesseract OCR SDK.
To modify the brightness of an image, the solution you're searching for is applying a filter over your original image.
This can be achieved by using the filter class provided in the Core Image framework, more specifically CIFilter. Take a quick peek at the documentation; it's pretty straightforward.
If you don't wish to use Core Image, there are several examples on the web with already written filters; however, I think the CIFilter class should be enough.
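For concreteness, here is a minimal sketch of both halves of the question (measuring the average brightness, then shifting it), assuming a UIImage input; the function names and the Rec. 709 luma weights are my own choices, not from any particular SDK:

```swift
import UIKit
import CoreImage

// Reduce the image to a single averaged pixel with CIAreaAverage,
// then derive a 0...1 luminance value from it.
func averageBrightness(of image: UIImage) -> CGFloat? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CIAreaAverage") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(CIVector(cgRect: ciImage.extent), forKey: kCIInputExtentKey)

    guard let output = filter.outputImage else { return nil }
    var pixel = [UInt8](repeating: 0, count: 4)
    let context = CIContext()
    context.render(output,
                   toBitmap: &pixel,
                   rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
    // Rec. 709 luma weights applied to the averaged pixel.
    return (0.2126 * CGFloat(pixel[0])
          + 0.7152 * CGFloat(pixel[1])
          + 0.0722 * CGFloat(pixel[2])) / 255.0
}

// Shift brightness with the built-in CIColorControls filter.
// `amount` ranges from -1.0 (darker) to 1.0 (brighter).
func adjustBrightness(of image: UIImage, by amount: Float) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CIColorControls") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(amount, forKey: kCIInputBrightnessKey)

    let context = CIContext()
    guard let output = filter.outputImage,
          let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

The adjusted UIImage can then be handed straight to the OCR step.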
I have tried a few image filter effects in Core Image with SwiftUI, such as grouping different filters and connecting a filter's intensity to a Slider to change it.
I'm wondering whether there is a way to achieve image filter effects similar to Instagram's, because a single Core Image filter doesn't look as fancy as Instagram's. I guess chaining several CI effects may reproduce some of them, but I simply don't know which ones to chain... I have no idea how to reproduce the filters going by names like "Sierra" or "Willow".
So is there a way to achieve those filter effects in Core Image? Or is some third-party library needed? Any live example, project, framework/library, or hint is appreciated.
Most of Instagram's filters are achieved with simple lookup tables, i.e. using a map (image) to replace colors with mapped colors. You can use the CIColorCube filter to achieve something like that in Core Image. You can search the internet for example lookup tables (they usually come in the form of images); there is probably also example code on how to bring them into the right format.
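As a starting point, here is a minimal sketch of driving CIColorCube from code, using an identity cube of dimension 32. A real Instagram-style look would bake its color mapping into `cubeData` (e.g. sampled from one of those lookup-table images) instead of the identity values below:

```swift
import CoreImage

// Build a 32x32x32 identity color cube and feed it to CIColorCube.
func applyColorCube(to input: CIImage) -> CIImage? {
    let size = 32
    var cubeData = [Float](repeating: 0, count: size * size * size * 4)
    var offset = 0
    for b in 0..<size {
        for g in 0..<size {
            for r in 0..<size {
                cubeData[offset]     = Float(r) / Float(size - 1) // red
                cubeData[offset + 1] = Float(g) / Float(size - 1) // green
                cubeData[offset + 2] = Float(b) / Float(size - 1) // blue
                cubeData[offset + 3] = 1.0                        // alpha
                offset += 4
            }
        }
    }
    let data = cubeData.withUnsafeBufferPointer { Data(buffer: $0) }

    guard let filter = CIFilter(name: "CIColorCube") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(size, forKey: "inputCubeDimension")
    filter.setValue(data, forKey: "inputCubeData")
    return filter.outputImage
}
```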
Another thing Instagram does is adding a vignette effect. There is also a Core Image filter for that: CIVignetteEffect.
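And a sketch of chaining the vignette after the lookup step; the center, radius, and intensity values here are arbitrary placeholders you'd tune by eye:

```swift
import CoreImage

// Chain CIVignetteEffect onto an already-filtered CIImage.
func addVignette(to image: CIImage) -> CIImage? {
    guard let vignette = CIFilter(name: "CIVignetteEffect") else { return nil }
    vignette.setValue(image, forKey: kCIInputImageKey)
    vignette.setValue(CIVector(x: image.extent.midX, y: image.extent.midY),
                      forKey: kCIInputCenterKey)
    vignette.setValue(image.extent.width / 2, forKey: kCIInputRadiusKey)
    vignette.setValue(1.0, forKey: kCIInputIntensityKey)
    return vignette.outputImage
}
```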
I'm looking to make a native iPhone iOS application in Swift 3/4 which uses a live preview of the back-facing camera and allows users to apply filters like in the built-in Camera app. The idea is to create my own filters by adjusting hue/RGB/brightness levels, etc. Eventually I want to create a hue slider which allows users to filter for specific colours in the live preview.
All of the answers I came across for similar problems were posted more than two years ago, and I'm not even sure they provide the relevant, up-to-date solution I am looking for.
I'm not looking to take a photo and then apply a filter afterwards. I'm looking for the same functionality as the native Camera app: applying the filter live, as you watch the camera preview.
How can I create this functionality? Can this be achieved using AVFoundation? AVKit? Can this functionality be achieved with ARKit perhaps?
Yes, you can apply image filters to the camera feed by capturing video with the AVFoundation Capture system and using your own renderer to process and display video frames.
Apple has a sample code project called AVCamPhotoFilter that does just this, and shows multiple approaches to the process, using Metal or Core Image. The key points are to:
Use AVCaptureVideoDataOutput to get live video frames.
Use CVMetalTextureCache or CVPixelBufferPool to make the video pixel buffers accessible to your favorite rendering technology.
Draw the textures using Metal (or OpenGL or whatever), with a Metal shader or Core Image filter doing the pixel processing on the GPU during your render pass. (A capture-side sketch follows this list.)
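To make the capture side concrete, here is a minimal sketch (my own structure, not code from AVCamPhotoFilter), assuming display is handled separately, e.g. by rendering the resulting CIImage into an MTKView or via a CIContext:

```swift
import AVFoundation
import CoreImage

// Captures live video frames and runs a CIFilter over each one.
final class FilteredCameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "camera.frames")

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        guard session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)

        session.startRunning()
    }

    // Called once per video frame on `queue`.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Any CIFilter chain works here; CIHueAdjust matches the hue-slider idea.
        let filtered = CIImage(cvPixelBuffer: pixelBuffer)
            .applyingFilter("CIHueAdjust", parameters: [kCIInputAngleKey: 1.0])
        // Hand `filtered` to your renderer (Metal/MTKView, or a CIContext).
        _ = filtered
    }
}
```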
BTW, ARKit is overkill if all you want to do is apply image processing to the camera feed. ARKit is for when you want to know about the camera’s relationship to real-world space, primarily for purposes like drawing 3D content that appears to inhabit the real world.
I am using this image. I would like to keep only the human part of this image; I don't want the background.
How can I do this? Any logic, links, or the best and simplest way?
Here is the kind of thing I want (Link), but I want it in Objective-C.
There is no easy and fast way to satisfy your requirements. To begin, you can learn how to detect objects in OpenCV.
After this, you can check haarcascade_fullbody.xml from the OpenCV sources. This cascade is for detecting bodies; open it in a text editor, as there is additional information in the header.
That said, I'm not sure the existing cascade is accurate enough for your needs.
From iOS 5 and above you can use the face detection API. Using the face detection API you can easily find the face of a person. For reference and sample code, here is the link.
All the best!
For iOS 5.1+, face detection can be done easily using the CIDetector and CIFaceFeature classes.
For body detection, you will need to use the OpenCV library; it is not yet supported by Apple.
Have a look at this: http://niw.at/articles/2009/03/14/using-opencv-on-iphone/en
Cropping an image in iOS using OpenCV face detection
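For the CIDetector route mentioned above, here is a minimal sketch that returns face bounding boxes; cropping to the whole body would still need something like the OpenCV full-body cascade:

```swift
import UIKit
import CoreImage

// Returns the bounding rects of all faces CIDetector finds in the image.
func faceRects(in image: UIImage) -> [CGRect] {
    guard let ciImage = CIImage(image: image),
          let detector = CIDetector(ofType: CIDetectorTypeFace,
                                    context: nil,
                                    options: [CIDetectorAccuracy: CIDetectorAccuracyHigh]) else {
        return []
    }
    return detector.features(in: ciImage).compactMap { ($0 as? CIFaceFeature)?.bounds }
}
```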
I'm developing an iPhone app to recognize some well known symbols from pictures.
I'm basically following these tutorials http://aishack.in/tutorials/sudoku-grabber-with-opencv-detection/ and http://sudokugrab.blogspot.it/2009/07/how-does-it-all-work.html, using OpenCv for template matching and GPUImage for image processing.
When all images have the same luminance level, I can adjust the threshold of GPUImageLuminanceThresholdFilter and everything works smoothly, but, of course, I can't count on the luminance being consistent.
So, I need a simple adaptive threshold filter, like the one in those tutorials, which calculates the luminance of the area surrounding each pixel.
The GPUImageAdaptiveThresholdFilter doesn't fit my needs, because it detects and sharpens the edges, while I need the symbols enhanced.
How can I implement that kind of filter?
When asked, the awesome Brad Larson added a blur-size property to the box blur and modified the adaptive threshold filter, so it now works as expected!
Thanks @BradLarson!
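For reference, a minimal sketch of using that filter from GPUImage; the property is named blurRadiusInPixels in later GPUImage versions (it may have been called blurSize when this was written), and the radius value below is just an arbitrary starting point to tune:

```swift
import GPUImage
import UIKit

// Run the adaptive threshold filter with an enlarged blur radius,
// which widens the local area each pixel's luminance is compared to.
func binarize(_ input: UIImage) -> UIImage? {
    let filter = GPUImageAdaptiveThresholdFilter()
    filter.blurRadiusInPixels = 8.0
    return filter.image(byFilteringImage: input)
}
```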
I have a monochrome image with 256 levels of grayscale. I want to map each level to a specific color and apply the mapping to the image to get a colored image as a result. How can I do that?
To be more precise, here is the pair of classes in the Java 2D API that I need to find a replacement for:
http://java.sun.com/javase/6/docs/api/java/awt/image/LookupOp.html
http://java.sun.com/javase/6/docs/api/java/awt/image/LookupTable.html
And here is an explanation of how it works in Java. I need to build the same thing on the iPhone.
Thanks!
This looks to me like the kind of thing you'd want to do with a Core Image filter:
http://developer.apple.com/mac/library/documentation/GraphicsImaging/Reference/CoreImageFilterReference/Reference/reference.html
If there isn't a suitable one already there, it wouldn't be very tricky to write one.
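For example, CIColorMap is close to a direct LookupOp equivalent: it replaces each source color with a value sampled from a gradient image, so a 256×1 ramp image serves as the 256-entry lookup table. A minimal sketch, assuming you have already prepared such a gradient image:

```swift
import UIKit
import CoreImage

// Recolor a grayscale image via CIColorMap, using `gradientImage`
// (a 256x1 color ramp) as the lookup table.
func applyLookup(to grayscale: UIImage, gradientImage: UIImage) -> UIImage? {
    guard let input = CIImage(image: grayscale),
          let gradient = CIImage(image: gradientImage),
          let filter = CIFilter(name: "CIColorMap") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(gradient, forKey: "inputGradientImage")

    let context = CIContext()
    guard let output = filter.outputImage,
          let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```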