Blur image with GPUImage framework [duplicate] - iphone

I am trying to learn shaders to implement something in my iPhone app. So far I have understood easy examples like converting a color image to grayscale, thresholding, etc. Most of the examples involve simple operations where processing the input image pixel I(x,y) results in a simple modification of the colors of that same pixel.
But what about convolutions? For example, the simplest case would be the Gaussian filter,
where the output image pixel O(x,y) depends not only on I(x,y) but also on the surrounding 8 pixels:
O(x,y) = (I(x,y) + the values of the surrounding 8 pixels) / 9
Normally this cannot be done in place with a single image buffer, or the input pixels would change as the filter is performed. How can I do this with shaders? Also, should I handle the borders myself, or is there a built-in function or something that checks for invalid pixel accesses like I(-1,-1)?
Thanks in advance
PS: I will be generous(read:give a lot of points) ;)

A highly optimized shader-based approach for performing a nine-tap Gaussian blur was presented by Daniel Rákos. His process uses the underlying interpolation provided by hardware texture filtering to perform a nine-tap filter using only five texture reads per pass. This is also split into separate horizontal and vertical passes to further reduce the number of texture reads required.
I rolled an implementation of this, tuned for OpenGL ES and the iOS GPUs, into my image processing framework (under the GPUImageFastBlurFilter class). In my tests, it can perform a single blur pass of a 640x480 frame in 2.0 ms on an iPhone 4, which is pretty fast.
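For anyone who just wants to use the filter from the framework, the wiring could look roughly like the sketch below (a hedged example: class and method names follow GPUImage as it existed when this was written, and filterView is assumed to be a GPUImageView already in your view hierarchy).
// Hedged usage sketch: run the fast blur over live camera video.
// filterView is assumed to be a GPUImageView placed in your UI.
GPUImageVideoCamera *camera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageFastBlurFilter *blurFilter = [[GPUImageFastBlurFilter alloc] init];

[camera addTarget:blurFilter];
[blurFilter addTarget:filterView]; // display the blurred result
[camera startCameraCapture];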
I used the following vertex shader:
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
uniform mediump float texelWidthOffset;
uniform mediump float texelHeightOffset;
varying mediump vec2 centerTextureCoordinate;
varying mediump vec2 oneStepLeftTextureCoordinate;
varying mediump vec2 twoStepsLeftTextureCoordinate;
varying mediump vec2 oneStepRightTextureCoordinate;
varying mediump vec2 twoStepsRightTextureCoordinate;
void main()
{
gl_Position = position;
vec2 firstOffset = vec2(1.3846153846 * texelWidthOffset, 1.3846153846 * texelHeightOffset);
vec2 secondOffset = vec2(3.2307692308 * texelWidthOffset, 3.2307692308 * texelHeightOffset);
centerTextureCoordinate = inputTextureCoordinate;
oneStepLeftTextureCoordinate = inputTextureCoordinate - firstOffset;
twoStepsLeftTextureCoordinate = inputTextureCoordinate - secondOffset;
oneStepRightTextureCoordinate = inputTextureCoordinate + firstOffset;
twoStepsRightTextureCoordinate = inputTextureCoordinate + secondOffset;
}
and the following fragment shader:
precision highp float;
uniform sampler2D inputImageTexture;
varying mediump vec2 centerTextureCoordinate;
varying mediump vec2 oneStepLeftTextureCoordinate;
varying mediump vec2 twoStepsLeftTextureCoordinate;
varying mediump vec2 oneStepRightTextureCoordinate;
varying mediump vec2 twoStepsRightTextureCoordinate;
// const float weight[3] = float[]( 0.2270270270, 0.3162162162, 0.0702702703 );
void main()
{
lowp vec3 fragmentColor = texture2D(inputImageTexture, centerTextureCoordinate).rgb * 0.2270270270;
fragmentColor += texture2D(inputImageTexture, oneStepLeftTextureCoordinate).rgb * 0.3162162162;
fragmentColor += texture2D(inputImageTexture, oneStepRightTextureCoordinate).rgb * 0.3162162162;
fragmentColor += texture2D(inputImageTexture, twoStepsLeftTextureCoordinate).rgb * 0.0702702703;
fragmentColor += texture2D(inputImageTexture, twoStepsRightTextureCoordinate).rgb * 0.0702702703;
gl_FragColor = vec4(fragmentColor, 1.0);
}
to perform this. The two passes can be achieved by sending a 0 value for the texelWidthOffset (for the vertical pass), and then feeding that result into a run where you give a 0 value for the texelHeightOffset (for the horizontal pass).
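If you wire these shaders up yourself with the framework's generic filter class rather than using GPUImageFastBlurFilter, the two-pass chain might look something like the following sketch (kBlurVertexShader and kBlurFragmentShader stand in for the shader strings above, the 640x480 dimensions are placeholders, and method names may differ across framework versions).
// Hedged sketch: chain two generic GPUImageFilter instances built from the shaders above.
GPUImageFilter *verticalBlur = [[GPUImageFilter alloc] initWithVertexShaderFromString:kBlurVertexShader fragmentShaderFromString:kBlurFragmentShader];
[verticalBlur setFloat:0.0 forUniformName:@"texelWidthOffset"];
[verticalBlur setFloat:(1.0 / 480.0) forUniformName:@"texelHeightOffset"];

GPUImageFilter *horizontalBlur = [[GPUImageFilter alloc] initWithVertexShaderFromString:kBlurVertexShader fragmentShaderFromString:kBlurFragmentShader];
[horizontalBlur setFloat:(1.0 / 640.0) forUniformName:@"texelWidthOffset"];
[horizontalBlur setFloat:0.0 forUniformName:@"texelHeightOffset"];

[verticalBlur addTarget:horizontalBlur]; // vertical pass feeds the horizontal pass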
I also have some more advanced examples of convolutions in the above-linked framework, including Sobel edge detection.

Horizontal blur taking advantage of bilinear interpolation. The vertical blur pass is analogous. Unroll the loop to optimize.
// 5 bilinear taps cover a 10-pixel footprint (offsets are in texels)
// assumes: varying vec2 textureCoordinate; uniform sampler2D inputImage;
//          uniform float texelWidth (set to 1.0 / texture width in pixels)
float offsets[5];
offsets[0] = -4.0; offsets[1] = -2.0; offsets[2] = 0.0; offsets[3] = 2.0; offsets[4] = 4.0;
// normalized binomial weights [1, 4, 6, 4, 1] / 16
float weights[5];
weights[0] = 0.0625; weights[1] = 0.25; weights[2] = 0.375; weights[3] = 0.25; weights[4] = 0.0625;
vec4 finalColor = vec4(0.0);
for (int i = 0; i < 5; i++)
    finalColor += texture2D(inputImage, textureCoordinate + vec2(offsets[i] * texelWidth, 0.0)) * weights[i];

Related

Why is my simplest shader taking up the most processing power

So I ran a frame capture to see the performance. To my surprise, it was my full-screen rendering that was to blame. Take a look:
Here are the two hogging programs. I have disabled the texture lookup on the full-screen texture to illustrate how ridiculous this is!
Program #3
Vert:
precision highp float;
attribute vec2 position;
uniform mat4 matrix;
void main()
{
gl_Position = matrix * vec4(position.xy, 0.0, 1.0);
}
Frag:
precision highp float;
uniform float alpha;
void main()
{
gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0 - alpha);
}
Context:
//**Set up data
glUseProgram(shade_black.progId)
glBindBuffer(GLenum(GL_ARRAY_BUFFER), black_buffer) //Bind the coordinates
//**Pass in coordinates
let aTexCoordLoc = GLuint(black_attribute_position)
glEnableVertexAttribArray(aTexCoordLoc);
glVertexAttribPointer(aTexCoordLoc, 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, BUFFER_OFFSET(0)) //Send to shader
//**Pass in uniforms
glUniformMatrix4fv(black_uniform_ortho, 1, GLboolean(GL_FALSE), &orthographicMatrix) //Pass matrix
glUniform1f(black_unifrom_alpha, 0.95) //Pass alpha
counter += timedo
//**Draw (instanced)
//The number 3 is actually variable but for this purpose I set it flat out
glDrawArraysInstanced(GLenum(GL_TRIANGLE_STRIP), 0, 4, 3 )// GLsizei(timedo)) //Draw it
//**Clean up
glBindBuffer(GLenum(GL_ARRAY_BUFFER), 0) //Clean up
Program #2
Vert:
precision highp float;
attribute vec4 data;
uniform mat4 matrix;
uniform float alpha;
varying vec2 v_texcoord;
varying float o_alpha;
void main()
{
gl_Position = matrix * vec4(data.xy, 0.0, 1.0);
v_texcoord = data.zw;
o_alpha = alpha;
}
Frag:
precision highp float;
uniform sampler2D s_texture;
varying float o_alpha;
varying vec2 v_texcoord;
void main()
{
//vec4 color = texture2D(s_texture, v_texcoord);
gl_FragColor = vec4(1.0);
//This line below is what it should be, but I wanted to isolate the issue, the picture results are from setting it to white.
//gl_FragColor = vec4(color.rgb, step(0.4, color.a ) * (color.a - o_alpha));
}
Context:
func drawTexture(texture: FBO, alpha: GLfloat)
{
//**Start up
//DONE EARLIER
//**Pass in vertices
glBindBuffer(GLenum(GL_ARRAY_BUFFER), textures_buffer)
let aTexCoordLoc = GLuint(textures_attribute_data)
glEnableVertexAttribArray(aTexCoordLoc);
glVertexAttribPointer(aTexCoordLoc, 4, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, BUFFER_OFFSET(0)) //Tell gpu where
//**Pass in uniforms
glUniform1i(textures_uniform_texture, 0)
glUniformMatrix4fv(textures_uniform_matrix, 1, GLboolean(GL_FALSE), &orthographicMatrix)
glUniform1f(textures_uniform_alpha, alpha)
//**Texture
glBindTexture(GLenum(GL_TEXTURE_2D), texture.texture)
//**Draw
glDrawArrays(GLenum(GL_TRIANGLE_STRIP), 0, 4)
//**Clean up
glBindTexture(GLenum(GL_TEXTURE_2D), 0)
glBindBuffer(GLenum(GL_ARRAY_BUFFER), 0)
}
For the others you can at least see their draw calls, but they aren't causing much damage.
What on earth is going on to cause the most complicated shaders to be responsible for less than 1% of the latency?
NOTE: Both of these shaders use a VBO that is created and filled at the start of the app.
It does look kind of surprising. Here's how I'd try to make sense of those figures (assuming those times are GPU timings for those render calls):
Fill-rate is everything on mobile. Even running a simple pixel shader over 3 million pixels or so (iPad retina) is going to be an expensive task, and you shouldn't be too surprised that it's more expensive than a large number of much smaller particles. Your percentages are going to add up to 100%, so if all your other stuff is just a few hundred vertices and fills a few thousand pixels, you shouldn't be surprised if the full-screen stuff is huge relative to that. It also says '5ms', which is tempting to think of as an absolute figure, but bear in mind that the CPU and GPU automatically start running slower when there's not much work to do, so even a millisecond timing can be very misleading when the device is mostly idle.
Do you have a glClear at the start of the frame? If not, then you can pay a pretty high price because the first thing the GPU must do when it processes a tile is load in the old contents. With a glClear at the start of your rendering, it knows it needn't bother loading old contents. Maybe you're seeing that price on your first full-screen pass if you don't have a glClear.
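If a clear is missing, adding one at the very start of the frame is cheap insurance. A minimal sketch with plain GL calls follows; defaultFramebuffer stands in for whatever on-screen framebuffer object you bind.
// Hedged sketch: clear at the start of the frame so the tile-based GPU can skip
// reloading the previous framebuffer contents before the full-screen passes.
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer); // placeholder for your on-screen FBO
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT); // add GL_DEPTH_BUFFER_BIT if you have a depth attachment
// ... issue the full-screen draws after this ...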

How to find edges of image with its colors in iphone?

I am working on an application which is related to changing the color effect of an image. I have done almost everything. Now the problem is that for one of the effects I have to produce something like the Glowing Edges filter in Photoshop. This filter shows the edges of the image in their colors while the rest of the image is black. By using Brad Larson's GPUImage GPUImageSobelEdgeDetectionFilter or GPUImageCannyEdgeDetectionFilter I can find the edges, but only as white edges, and I need to find the edges in color. Is there any other way to find edges in color using GPUImage or OpenCV?
Any help would be very much appreciated.
Thanks
You really owe it to yourself to play around with writing custom shaders. It's extremely approachable, and can very quickly become powerful if you invest the effort.
That said, I think you're trying for something like this result:
There are many acceptable ways you could get here, but writing a custom shader for a subclass of GPUImageTwoInputFilter then targeting it with both the original image AND the edgeDetection image is how I accomplished the picture you see here.
The subclass would look something like this:
#import "OriginalColorEdgeMixer.h"
//Assumes you have targeted this filter with the original image first, then with an edge detection filter that returns white pixels on edges
//We are setting the threshold manually here, but it could just as easily be a uniform that is fed dynamically at runtime
#if TARGET_IPHONE_SIMULATOR || TARGET_OS_IPHONE
NSString *const kOriginalColorEdgeMixer = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
lowp float threshold;
mediump float resultingRed;
mediump float resultingGreen;
mediump float resultingBlue;
void main()
{
mediump vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
mediump vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
threshold = step(0.3, textureColor2.r);
resultingRed = threshold * textureColor.r;
resultingGreen = threshold * textureColor.g;
resultingBlue = threshold *textureColor.b;
gl_FragColor = vec4(resultingRed, resultingGreen, resultingBlue, textureColor.a);
}
);
#else
NSString *const kOriginalColorEdgeMixer = SHADER_STRING
(
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
float threshold;
float resultingRed;
float resultingGreen;
float resultingBlue;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 textureColor2 = texture2D(inputImageTexture2, textureCoordinate2);
threshold = step(0.3,textureColor2.r);
resultingRed = threshold * textureColor.r;
resultingGreen = threshold * textureColor.g;
resultingBlue = threshold *textureColor.b;
gl_FragColor = vec4(resultingRed, resultingGreen, resultingBlue, textureColor.a);
}
);
#endif
@implementation OriginalColorEdgeMixer
- (id)init;
{
if (!(self = [super initWithFragmentShaderFromString:kOriginalColorEdgeMixer]))
{
return nil;
}
return self;
}
@end
As I've written this, we're expecting the edgeDetection filter's output to be the second input of this custom filter.
I arbitrarily chose a threshold value of 0.3 for intensities on the edgeDetection image to enable the original color to show through. This could easily be made dynamic by tying it to a uniform fed from a UISlider in your app (there are many examples of this in Brad's sample code).
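For instance, if the hard-coded 0.3 were replaced with a uniform (say uniform lowp float edgeThreshold;) in the fragment shader above, a slider could drive it through GPUImageFilter's setFloat:forUniformName:. This is only a sketch; the uniform name and the action method are made up.
// Hypothetical slider action: edgeThreshold is a uniform you would declare in the
// fragment shader in place of the hard-coded 0.3.
- (IBAction)thresholdSliderChanged:(UISlider *)sender
{
    [edgeMixer setFloat:sender.value forUniformName:@"edgeThreshold"];
}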
For the sake of clarity for people just starting out with GPUImage, using that custom filter you wrote is really easy. I did it like this:
[self configureCamera];
edgeDetection = [[GPUImageSobelEdgeDetectionFilter alloc] init];
edgeMixer = [[OriginalColorEdgeMixer alloc] init];
[camera addTarget:edgeDetection];
[camera addTarget:edgeMixer];
[edgeDetection addTarget:edgeMixer];
[edgeMixer addTarget:_previewLayer];
[camera startCameraCapture];
In summary, don't be scared to start writing some custom shaders! The learning curve is brief, and the errors thrown by the debugger are extremely helpful in letting you know exactly where you f**d up the syntax.
Lastly, this is a great place for documentation of the syntax and usage of OpenGL specific functions

How to flex a 3D texture with OpenGL ES?

I have tried to draw some 3D squares (with OpenGL on iPhone) and make them rotate around; together they now look like a sphere.
http://i618.photobucket.com/albums/tt265/LoyalMoral/Post/ScreenShot2013-05-15at23249PM.png
But each square is flat (the first one in the image below), and I want to flex it:
http://i618.photobucket.com/albums/tt265/LoyalMoral/Post/Untitled-1.jpg
Someone told me that I have to use GLSL, but I don't know the shading language.
These are my vertex and fragment shaders (following Ray Wenderlich's tutorial):
// Vertex.glsl
attribute vec4 Position;
attribute vec4 SourceColor;
varying vec4 DestinationColor;
uniform mat4 Projection;
uniform mat4 Modelview;
attribute vec2 TexCoordIn;
varying vec2 TexCoordOut;
void main(void) {
DestinationColor = SourceColor;
gl_Position = Projection * Modelview * Position;
TexCoordOut = TexCoordIn;
}
// Fragment.glsl
varying lowp vec4 DestinationColor;
varying lowp vec2 TexCoordOut;
uniform sampler2D Texture;
void main(void) {
gl_FragColor = DestinationColor * texture2D(Texture, TexCoordOut);
}
could somebody help me? :)
Instead of using a single quad (a pair of triangles) for each square, use a grid of vertices. You can then displace the grid's vertices individually to produce the curved shape you want.
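A rough sketch of the idea follows (plain C as used alongside the Objective-C above; the vertex layout, grid resolution, and bend function are all illustrative assumptions, not part of the tutorial). The grid is then drawn as triangles via an index buffer instead of the original two-triangle quad.
#import <OpenGLES/ES2/gl.h>

#define GRID_SIZE 10 // 10 x 10 quads, i.e. 11 x 11 vertices

typedef struct { GLfloat x, y, z, u, v; } GridVertex;

// Fill a (GRID_SIZE + 1) x (GRID_SIZE + 1) vertex grid covering -1..1 in x/y,
// lifting the centre along z so the formerly flat square bulges outward.
static void buildBentGrid(GridVertex *vertices, GLfloat bendAmount)
{
    for (int row = 0; row <= GRID_SIZE; row++) {
        for (int col = 0; col <= GRID_SIZE; col++) {
            GLfloat u = (GLfloat)col / GRID_SIZE;
            GLfloat v = (GLfloat)row / GRID_SIZE;
            GridVertex *vert = &vertices[row * (GRID_SIZE + 1) + col];
            vert->x = u * 2.0f - 1.0f;
            vert->y = v * 2.0f - 1.0f;
            vert->z = bendAmount * (1.0f - 0.5f * (vert->x * vert->x + vert->y * vert->y));
            vert->u = u; // texture coordinates stay the same as for the flat quad
            vert->v = v;
        }
    }
}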

How do I modify a GPUImageGaussianSelectiveBlurFilter to operate over a rectangle instead of a circle?

I have used the GPUImage framework for a blur effect similar to that of the Instagram application, where I have made a view for picking a picture from the photo library and then applying an effect to it.
One of the effects is a selective blur in which only a small part of the image is clear and the rest is blurred. The GPUImageGaussianSelectiveBlurFilter keeps a circular region of the image unblurred.
How can I alter this to make the sharp region rectangular instead?
Because Gill's answer isn't exactly correct, and since this seems to be getting asked over and over, I'll clarify my comment above.
The fragment shader for the selective blur by default has the following code:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;
void main()
{
lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);
highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
highp float distanceFromCenter = distance(excludeCirclePoint, textureCoordinateToUse);
gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
This fragment shader takes in a pixel color value from both the original sharp image and a Gaussian blurred version of the image. It then blends between these based on the logic of the last three lines.
The first and second of these lines calculate the distance from the center coordinate that you specify ((0.5, 0.5) in normalized coordinates by default for the dead center of the image) to the current pixel's coordinate. The last line uses the smoothstep() GLSL function to smoothly interpolate between 0 and 1 when the distance from the center point travels between two thresholds, the inner clear circle, and the outer fully blurred circle. The mix() operator then takes the output from the smoothstep() and fades between the blurred and sharp color pixel colors to produce the appropriate output.
If you just want to modify this to produce a square shape instead of the circular one, you need to adjust the two center lines in the fragment shader to base the distance on linear X or Y coordinates, not a Pythagorean distance from the center point. To do this, change the shader to read:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform lowp float excludeCircleRadius;
uniform lowp vec2 excludeCirclePoint;
uniform lowp float excludeBlurSize;
uniform highp float aspectRatio;
void main()
{
lowp vec4 sharpImageColor = texture2D(inputImageTexture, textureCoordinate);
lowp vec4 blurredImageColor = texture2D(inputImageTexture2, textureCoordinate2);
highp vec2 textureCoordinateToUse = vec2(textureCoordinate2.x, (textureCoordinate2.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
textureCoordinateToUse = abs(excludeCirclePoint - textureCoordinateToUse);
highp float distanceFromCenter = max(textureCoordinateToUse.x, textureCoordinateToUse.y);
gl_FragColor = mix(sharpImageColor, blurredImageColor, smoothstep(excludeCircleRadius - excludeBlurSize, excludeCircleRadius, distanceFromCenter));
}
The lines that Gill mentions are just input parameters for the filter, and don't control its circularity at all.
I leave modifying this further to produce a generic rectangular shape as an exercise for the reader, but this should provide a basis for how you could do this and a bit more explanation of what the lines in this shader do.
Did it ... the code for the rectangular effect is just in these 2 lines
blurFilter = [[GPUImageGaussianSelectiveBlurFilter alloc] init];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCircleRadius:80.0/320.0];
[(GPUImageGaussianSelectiveBlurFilter*)blurFilter setExcludeCirclePoint:CGPointMake(0.5f, 0.5f)];
// [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setBlurSize:0.0f]; [(GPUImageGaussianSelectiveBlurFilter*)blurFilter setAspectRatio:0.0f];

Can I convert an image into a grid of dots?

Just a quick question.
Can this:
be done with image processing on an iOS device? If yes, how?
Yes, although Core Graphics may not be the best way to do such filtering on an image. My recommendation would be to use an OpenGL ES 2.0 fragment shader. In fact, I just wrote one to do this:
This is the GPUImagePolkaDotFilter that I just added to my open source GPUImage framework. The easiest way to use it is to grab the framework and apply the filter to whatever you want (it's fast enough to run in real time on video).
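If you are working from a still image rather than video, usage might look roughly like this (a hedged sketch: the property names dotScaling and fractionalWidthOfAPixel, and the capture call at the end, are from memory and may differ between framework versions).
// Hedged usage sketch: apply the polka-dot filter to a UIImage.
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputUIImage];
GPUImagePolkaDotFilter *dotFilter = [[GPUImagePolkaDotFilter alloc] init];
dotFilter.fractionalWidthOfAPixel = 0.05; // size of each grid cell, as a fraction of the image width
dotFilter.dotScaling = 0.90;              // dot diameter relative to its cell

[source addTarget:dotFilter];
[source processImage];
UIImage *dottedImage = [dotFilter imageFromCurrentlyProcessedOutput];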
If you'd just like to use the fragment shader, the following is my code for this:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp float fractionalWidthOfPixel;
uniform highp float aspectRatio;
uniform highp float dotScaling;
void main()
{
highp vec2 sampleDivisor = vec2(fractionalWidthOfPixel, fractionalWidthOfPixel / aspectRatio);
highp vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor) + 0.5 * sampleDivisor;
highp vec2 textureCoordinateToUse = vec2(textureCoordinate.x, (textureCoordinate.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
highp vec2 adjustedSamplePos = vec2(samplePos.x, (samplePos.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
highp float distanceFromSamplePoint = distance(adjustedSamplePos, textureCoordinateToUse);
lowp float checkForPresenceWithinDot = step(distanceFromSamplePoint, (fractionalWidthOfPixel * 0.5) * dotScaling);
gl_FragColor = vec4(texture2D(inputImageTexture, samplePos ).rgb * checkForPresenceWithinDot, 1.0);
}
You should be able to do this just by looping and drawing the circles in black and white, and then using that image as a mask for your image.
Here's the first link I found on Google about CoreGraphics masking, which you can probably adapt for your needs: http://cocoawithlove.com/2009/09/creating-alpha-masks-from-text-on.html
I'd imagine drawing lots of circles is something you can figure out with some Googling of your own.
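To make that concrete, a rough Core Graphics sketch for building such a dot mask might look like this (imageSize, the spacing, and the radius are arbitrary placeholders; you would then apply the result as a mask to your photo).
// Hedged sketch: draw a grid of white dots on black into a bitmap, for use as a mask.
UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
CGContextFillRect(context, CGRectMake(0, 0, imageSize.width, imageSize.height));
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);

CGFloat spacing = 12.0, radius = 5.0;
for (CGFloat y = spacing / 2.0; y < imageSize.height; y += spacing) {
    for (CGFloat x = spacing / 2.0; x < imageSize.width; x += spacing) {
        CGContextFillEllipseInRect(context, CGRectMake(x - radius, y - radius, radius * 2.0, radius * 2.0));
    }
}
UIImage *dotMask = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();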
Short answer: yes, but I'm not sure for the moment exactly which framework to use.
Note: you only asked whether it is possible, so I gave the answer to that question, not how to go about it.