Is it possible to reproduce the "Texture" effect of Adobe Lightroom in iOS? - swift

I'm trying to implement the Texture effect in iOS but can't figure out how to do it. Can anyone share some ideas, resources, or steps? See the attached image for clarification.
I know how Adobe Lightroom's "Texture" effect works. According to Max Wendt, a Senior Computer Scientist on ACR and the lead engineer of the Texture project:
Just like you can break an image into color channels (for example, red, green, and blue), an image can also be broken up into different "frequencies." There are high-frequency details, mid-frequency features, and low-frequency areas; together, they all make up the image. If we apply "Texture", the medium-frequency features of an image are enhanced without affecting the other frequencies. (link)
I'm currently exploring CIFilters and chaining them together to build custom filters. Unfortunately, I'm stuck here for Texture.
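One way to approximate this kind of mid-frequency boost with nothing but chained built-in CIFilters is a band-pass made from two Gaussian blurs: the difference between a lightly blurred and a heavily blurred copy isolates a rough "mid-frequency" band, which you then scale and add back onto the original. This is not Adobe's actual algorithm; the two radii, the `amount` parameter, and the `textureBoost` helper below are illustrative assumptions only.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

/// Very rough sketch of a "Texture"-style mid-frequency boost using only
/// built-in Core Image filters. The radii and the amount are assumptions,
/// not Lightroom's actual parameters.
func textureBoost(_ input: CIImage, amount: CGFloat = 0.5) -> CIImage {
    // Low-pass copies at two different radii. The band between them is a
    // rough stand-in for the "mid frequencies".
    let fineBlur = input.clampedToExtent()
        .applyingGaussianBlur(sigma: 2)
        .cropped(to: input.extent)
    let coarseBlur = input.clampedToExtent()
        .applyingGaussianBlur(sigma: 10)
        .cropped(to: input.extent)

    // band = fineBlur - coarseBlur (negate the coarse blur, then add).
    let negate = CIFilter.colorMatrix()
    negate.inputImage = coarseBlur
    negate.rVector = CIVector(x: -1, y: 0, z: 0, w: 0)
    negate.gVector = CIVector(x: 0, y: -1, z: 0, w: 0)
    negate.bVector = CIVector(x: 0, y: 0, z: -1, w: 0)
    negate.aVector = CIVector(x: 0, y: 0, z: 0, w: 1)

    let bandFilter = CIFilter.additionCompositing()
    bandFilter.inputImage = fineBlur
    bandFilter.backgroundImage = negate.outputImage
    guard let band = bandFilter.outputImage else { return input }

    // Scale the band by `amount`...
    let scale = CIFilter.colorMatrix()
    scale.inputImage = band
    scale.rVector = CIVector(x: amount, y: 0, z: 0, w: 0)
    scale.gVector = CIVector(x: 0, y: amount, z: 0, w: 0)
    scale.bVector = CIVector(x: 0, y: 0, z: amount, w: 0)
    scale.aVector = CIVector(x: 0, y: 0, z: 0, w: 0)

    // ...and add it back onto the untouched original.
    let result = CIFilter.additionCompositing()
    result.inputImage = scale.outputImage
    result.backgroundImage = input
    return result.outputImage ?? input
}
```

Because CIColorMatrix and CIAdditionCompositing do not clamp, the negative values in the band survive long enough to both darken and brighten detail when added back; tuning the two sigmas controls which scale of detail counts as "texture".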

Related

Roads are looking blurry in Unity

I am trying to make some realistic-looking roads in my game, but the problem is that when I use roads they are a little bit blurry at a distance instead of looking real. Can anyone please guide me to solve this? I have uploaded my pic here.
Is there a possible solution for this?
Yes, it is a common problem with textures rendered at a certain distance/angle. You should increase the aniso level in order to apply anisotropic filtering to the blurry texture (you can also change the aniso level from the texture's settings, as in the picture below).
If the road is created via a Terrain component (which I doubt, since you already have a sandy terrain at the bottom), you should change the basemap distance.
Also check the Quality settings; Anisotropic Textures might be disabled there.

Implementing a Simple Adaptive Threshold in GPUImage

I'm developing an iPhone app to recognize some well known symbols from pictures.
I'm basically following these tutorials http://aishack.in/tutorials/sudoku-grabber-with-opencv-detection/ and http://sudokugrab.blogspot.it/2009/07/how-does-it-all-work.html, using OpenCV for template matching and GPUImage for image processing.
When all images have the same luminance level, I can adjust the threshold of GPUImageLuminanceThresholdFilter and everything works smoothly, but, of course, I can't rely on the luminance being constant.
So I need a simple adaptive threshold filter, like the one in those tutorials, which calculates the luminance of the area surrounding each pixel.
The GPUImageAdaptiveThresholdFilter doesn't fit my needs, because it detects and sharpens the edges, while I need to enhance the symbols.
How can I implement that kind of filter?
When asked, the awesome Brad Larson added a blur size property to the box blur and modified the adaptive threshold filter, so it now works as expected!
Thanks, @BradLarson!
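With that blur-size property in place, the filter reduces to a couple of lines. Here is a minimal Swift sketch assuming the Objective-C GPUImage framework; note that the property is `blurRadiusInPixels` in newer releases but was `blurSize` in older ones, so check the version you have.

```swift
import UIKit
import GPUImage

// Sketch: adaptive thresholding with a configurable neighbourhood size.
func thresholdedImage(from source: UIImage) -> UIImage? {
    let filter = GPUImageAdaptiveThresholdFilter()
    // A larger radius averages luminance over a wider neighbourhood, which
    // keeps thick symbols solid instead of only outlining their edges.
    // 12 pixels is an arbitrary starting value to tune for your symbols.
    filter.blurRadiusInPixels = 12.0
    return filter.image(byFilteringImage: source)
}
```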

Blur Effect (Wet in Wet effect) in Paint Application Using OpenGL-ES

I am developing a paint application using OpenGL ES for iPhone, and I want to implement a Gaussian blur (wet-in-wet) effect for painting. Please have a look at the image describing my requirement for the blur effect:
I tried searching for an OpenGL function for this but did not find anything. Can anyone point me in the right direction? Any kind of help or suggestion will be highly appreciated. Thanks.
You should be able to render the same brush stroke many times, a few pixels apart, to get the effect you want. If you jitter the renders with a Gaussian distribution, you will get a Gaussian blur.
This would be similar to jitter antialiasing with an accumulation buffer, but instead of using subpixel offsets you would use multi-pixel offsets as big as you want the blur effect. You would probably want to render around 16 times to make it look smooth. http://www.opengl.org/resources/code/samples/advanced/advanced97/notes/node63.html
This is also similar to (or really the same thing as) jittering to create motion blur. http://glprogramming.com/red/chapter10.html
You wouldn't even NEED a separate accumulation buffer here; just render each pass with an alpha that adds up to fully opaque. One thing to remember: you want to always jitter across the same offsets so that successive frames look the same (i.e., if you use random offsets, every frame will have a slightly different blur).
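A minimal sketch of that jitter table in Swift, assuming an arbitrary pass count and sigma; the actual OpenGL ES brush rendering is omitted. The offsets are generated once and reused every frame, which is what keeps the blur stable.

```swift
import Foundation
import CoreGraphics

/// A fixed table of Gaussian-distributed offsets plus a per-pass alpha that
/// sums to 1. Build it once and reuse it so every frame blurs identically.
struct JitteredBlur {
    let offsets: [CGPoint]
    let passAlpha: CGFloat

    init(passes: Int = 16, sigma: Double = 4.0) {
        // Box-Muller transform: two uniform random numbers -> one Gaussian one.
        func gaussian() -> Double {
            let u1 = Double.random(in: 0.0001...1)
            let u2 = Double.random(in: 0...1)
            return sqrt(-2.0 * log(u1)) * cos(2.0 * Double.pi * u2)
        }
        offsets = (0..<passes).map { _ in
            CGPoint(x: gaussian() * sigma, y: gaussian() * sigma)
        }
        passAlpha = 1.0 / CGFloat(passes)
    }
}

// Each frame: draw the brush stroke offsets.count times, translated by
// offsets[i] and blended with alpha = passAlpha, reusing the same table.
```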
I am assuming you want to apply this to an image. I have no idea how this could be done in OpenGL ES, but you could try using this awesome image-processing library. It provides other image effects besides Gaussian blur...
Happy Blurring...

iphone, Image processing

I am building a night vision application, but I can't find any useful algorithm that I can apply to dark images to make them clearer. Can anyone please suggest a good algorithm?
Thanks in advance
With the size of the iPhone lens and sensor, you are going to have a lot of noise no matter what you do. I would practice manipulating the image in Photoshop first; you'll probably find that it is useful to select a white point from a sample of the brighter pixels in the image and to use a curve. You'll probably also need to run an anti-noise filter and smoother. Edge detection or condensation may allow you to emphasize some areas of the image. As for specific algorithms to perform each of these filters, there are a lot of computer science books and lists on the subject. Here is one list:
http://www.efg2.com/Lab/Library/ImageProcessing/Algorithms.htm
Many OpenGL implementations can be found once you know the standard name of the algorithm you need.
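As a concrete starting point for the curve-plus-denoise idea on iOS, here is a rough Core Image sketch. The tone-curve points and noise-reduction values are guesses to tune by eye, not recommended settings.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

/// Lift the shadows with a tone curve, then smooth the noise that the
/// brightening amplifies. All numbers here are illustrative assumptions.
func brightened(_ input: CIImage) -> CIImage {
    // Tone curve: x = input level, y = output level (0...1).
    let curve = CIFilter.toneCurve()
    curve.inputImage = input
    curve.point0 = CGPoint(x: 0.0, y: 0.0)
    curve.point1 = CGPoint(x: 0.1, y: 0.3)
    curve.point2 = CGPoint(x: 0.3, y: 0.6)
    curve.point3 = CGPoint(x: 0.6, y: 0.85)
    curve.point4 = CGPoint(x: 1.0, y: 1.0)

    // Amplifying dark pixels amplifies sensor noise too, so smooth it a bit.
    let denoise = CIFilter.noiseReduction()
    denoise.inputImage = curve.outputImage
    denoise.noiseLevel = 0.04
    denoise.sharpness = 0.4
    return denoise.outputImage ?? input
}
```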
Real (useful) night vision typically uses an infrared light and an infrared-tuned camera. I think you're out of luck.
Of course using the iPhone 4's camera light could be considered "night vision" ...
Your real problem is the camera and not the algorithm.
You can apply algorithms to clarify images, but they won't turn a dark image into a well-lit one as if by magic ^^
But if you want to try some algorithms, you should take a look at OpenCV (http://opencv.willowgarage.com/wiki/); there are ports for the iPhone, for example http://ildan.blogspot.com/2008/07/creating-universal-static-opencv.html
I suppose there are two ways to refine a dark image: the first is active, which uses infrared; the other is passive, which manipulates the pixels of the image.
The images will be noisy, but you can always try scaling up the pixel values (all of the components in RGB, or just the luminance in HSV; either linearly or by applying some sort of curve; either globally or locally to just the darker areas) and saturating them, and/or using a contrast edge enhancement filter algorithm.
If the camera and subject matter are sufficiently motionless (tripod, etc.) you could try summing each pixel over several image captures. Or you could do what some HDR apps do, and try aligning images before pixel processing across time.
I haven't seen any documentation on whether the iPhone's camera sensor has a wider wavelength gamut than the human eye.
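Here is a sketch of the summing/averaging idea with Core Image, assuming the frames are already aligned (the HDR-style alignment step itself is not shown). Averaging N frames suppresses random sensor noise roughly in proportion to the square root of N.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

/// Average N already-aligned frames: scale each by 1/N with a color matrix,
/// then sum them with addition compositing.
func averaged(_ frames: [CIImage]) -> CIImage? {
    guard let first = frames.first else { return nil }
    let weight = CGFloat(1.0 / Double(frames.count))

    func scaled(_ image: CIImage) -> CIImage {
        let m = CIFilter.colorMatrix()
        m.inputImage = image
        m.rVector = CIVector(x: weight, y: 0, z: 0, w: 0)
        m.gVector = CIVector(x: 0, y: weight, z: 0, w: 0)
        m.bVector = CIVector(x: 0, y: 0, z: weight, w: 0)
        m.aVector = CIVector(x: 0, y: 0, z: 0, w: weight)
        return m.outputImage ?? image
    }

    // Running sum of the weighted frames.
    return frames.dropFirst().reduce(scaled(first)) { sum, frame in
        let add = CIFilter.additionCompositing()
        add.inputImage = scaled(frame)
        add.backgroundImage = sum
        return add.outputImage ?? sum
    }
}
```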
I suggest conducting a simple test before trying to actually implement this:
Save a photo made in a dark room.
Open in GIMP (or a similar application).
Apply "Stretch HSV" algorithm (or equivalent).
Check if the resulting image quality is good enough.
This should give you an idea as to whether your camera is good enough to try it.
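If the GIMP experiment looks promising, a rough on-device stand-in for "Stretch HSV" can be built from the CIAreaMinimum/CIAreaMaximum reduction filters plus a color matrix. Note the simplification: this stretches all RGB channels with one global min/max, which only approximates stretching the V channel of HSV.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

/// Find the darkest and brightest values in the image and linearly remap
/// them to 0...1. A quick experiment, not a polished enhancement pipeline.
func contrastStretched(_ input: CIImage, context: CIContext = CIContext()) -> CIImage {
    // Reduce the whole image to a single min pixel and a single max pixel.
    let extent = CIVector(cgRect: input.extent)
    let minFilter = CIFilter(name: "CIAreaMinimum",
                             parameters: [kCIInputImageKey: input,
                                          kCIInputExtentKey: extent])
    let maxFilter = CIFilter(name: "CIAreaMaximum",
                             parameters: [kCIInputImageKey: input,
                                          kCIInputExtentKey: extent])

    // Read the two 1x1 result images back to the CPU.
    func readPixel(_ image: CIImage?) -> [CGFloat] {
        guard let image = image else { return [0, 0, 0, 1] }
        var bytes = [UInt8](repeating: 0, count: 4)
        context.render(image, toBitmap: &bytes, rowBytes: 4,
                       bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                       format: .RGBA8, colorSpace: nil)
        return bytes.map { CGFloat($0) / 255.0 }
    }

    let lows = readPixel(minFilter?.outputImage)
    let highs = readPixel(maxFilter?.outputImage)
    let low = min(lows[0], lows[1], lows[2])
    let high = max(highs[0], highs[1], highs[2])
    guard high > low else { return input }

    // Linear remap: out = (in - low) / (high - low), done with a color matrix.
    let scale = 1.0 / (high - low)
    let remap = CIFilter.colorMatrix()
    remap.inputImage = input
    remap.rVector = CIVector(x: scale, y: 0, z: 0, w: 0)
    remap.gVector = CIVector(x: 0, y: scale, z: 0, w: 0)
    remap.bVector = CIVector(x: 0, y: 0, z: scale, w: 0)
    remap.aVector = CIVector(x: 0, y: 0, z: 0, w: 1)
    remap.biasVector = CIVector(x: -low * scale, y: -low * scale, z: -low * scale, w: 0)
    return remap.outputImage ?? input
}
```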

image effects iphone sdk

Are there any tutorials for creating image effects on the iPhone, like glow, paper effect, etc.?
Can anyone tell me where to start?
A glow effect is not supported by default within the iPhone SDK (specifically CoreGraphics). For the paper effect, I am not sure what you are looking for.
If you insist on effects not supported by the SDK, you should try to find less platform-specific sources and adapt them to the iPhone:
Glow and Shadow Effects (Windows GDI)
Another potentially great source of effect know-how is the ImageMagick source code.
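For the glow specifically, one common recipe is to brighten the image, blur the brightened copy, and screen it back over the original. The sketch below uses Core Image rather than the GDI or ImageMagick sources linked above, and the brightness boost and blur radius are arbitrary assumptions.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

/// Do-it-yourself glow: exaggerate the bright areas, blur them into a halo,
/// then screen the halo over the untouched original.
func glow(_ input: CIImage, radius: Double = 10) -> CIImage {
    // Exaggerate the bright areas a little.
    let bright = CIFilter.colorControls()
    bright.inputImage = input
    bright.brightness = 0.1
    bright.contrast = 1.2
    bright.saturation = 1.0

    // Blur the brightened copy so it bleeds outwards...
    let halo = (bright.outputImage ?? input)
        .clampedToExtent()
        .applyingGaussianBlur(sigma: radius)
        .cropped(to: input.extent)

    // ...and screen it over the original.
    let screen = CIFilter.screenBlendMode()
    screen.inputImage = halo
    screen.backgroundImage = input
    return screen.outputImage ?? input
}
```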
Take a look at this project: http://code.google.com/p/simple-iphone-image-processing/
It includes code that can do various image effects such as Canny edge detection, histogram equalisation, skeletonisation, thresholding, Gaussian blur, brightness normalisation, connected region extraction, and resizing.
Another, lower-level option is to take a look at ImageMagick or FreeImage, which are more general image-processing libraries.