How to read the screen to see which pixels are white - iPhone

I have a view on which the user can draw lines; it was developed using this.
The lines are drawn between points using the following code:
- (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
    static GLfloat *vertexBuffer = NULL;
    static NSUInteger vertexMax = 64;
    NSUInteger vertexCount = 0, count, i;

    // Allocate vertex array buffer
    if (vertexBuffer == NULL)
        vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));

    // Add points to the buffer so there are drawing points every X pixels
    count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
    for (i = 0; i < count; ++i) {
        if (vertexCount == vertexMax) {
            vertexMax = 2 * vertexMax;
            vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
        }
        vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
        vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
        vertexCount += 1;
    }

    // Render the vertex array
    glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
    glDrawArrays(GL_POINTS, 0, vertexCount);

    // Display the buffer
    [self swapBuffers];
}
The objective is to read the drawing area of the screen, which is created by the following code:
    PictureView *scratchPaperView = [[RecordedPaintingView alloc] initWithFrame:CGRectMake(0, 45, 320, 415)];
    [self.view addSubview:scratchPaperView];
I want to find the pixels of the lines, i.e. all the pixels that are white in the drawing area. How do I proceed from here?

Assuming that you can get a UIImage.CGImage or a CGImageRef out of a PictureView, you then render this image into a CGBitmapContext. The image will tell you the number of components, whether it has alpha, and where the alpha is. Most likely you are going to get 4-byte pixels (32 bits/pixel). You then walk each row looking at each pixel. Assuming a black background (which would be 255,0,0,0 or 0,0,0,255), you will see non-black pixels when you get close to or hit a line. A pure white pixel is going to be 255,255,255,255.
I'm pretty sure you can find examples of how to render an image into a context, and also how to examine pixels, by googling around. Frankly, what always gets me are the confusing pixel layout attributes - I usually end up printing a few test cases to make sure I got it right.
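To make that concrete, here is a minimal sketch of the idea, assuming you can obtain a CGImageRef of the drawing. The pixel layout (8 bits per component, RGBA, alpha last) is forced by the context we create, but it's still worth printing a few test pixels on your device to confirm the channel order:

    #import <Foundation/Foundation.h>
    #import <CoreGraphics/CoreGraphics.h>

    // Sketch: render a CGImage into a bitmap context with a known RGBA layout
    // and collect the white pixels.
    static void FindWhitePixels(CGImageRef image)
    {
        size_t width  = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);
        size_t bytesPerRow = width * 4;

        unsigned char *pixels = calloc(height * bytesPerRow, 1);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        // Draw the image into our context so we control the pixel format.
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);

        // Walk each row, looking at each pixel.
        for (size_t y = 0; y < height; y++) {
            unsigned char *row = pixels + y * bytesPerRow;
            for (size_t x = 0; x < width; x++) {
                unsigned char r = row[4 * x + 0];
                unsigned char g = row[4 * x + 1];
                unsigned char b = row[4 * x + 2];
                if (r == 255 && g == 255 && b == 255) {
                    NSLog(@"white pixel at (%zu, %zu)", x, y);
                }
            }
        }

        CGContextRelease(ctx);
        free(pixels);
    }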

Related

Get colour of pixel from CCRenderTexture

So, I am trying to find the location of any pixels on the screen that are a specific colour.
The following code works, but is VERY slow, because I have to iterate over every single pixel co-ordinate, and there are a lot.
Is there any way to improve the following code to make it more efficient?
// Detect the position of all red points in the sprite
UInt8 data[4];
CCRenderTexture *renderTexture = [[CCRenderTexture alloc] initWithWidth:mySprite.boundingBox.size.width * CC_CONTENT_SCALE_FACTOR()
                                                                 height:mySprite.boundingBox.size.height * CC_CONTENT_SCALE_FACTOR()
                                                            pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
[renderTexture begin];
[mySprite draw];
for (int x = 0; x < 960; x++)
{
    for (int y = 0; y < 640; y++)
    {
        ccColor4B *buffer = malloc(sizeof(ccColor4B));
        glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        ccColor4B color = buffer[0];
        if (color.r == 133 && color.g == 215 && color.b == 44)
        {
            NSLog(@"Found the red point at x: %d y: %d", x, y);
        }
    }
}
[renderTexture end];
[renderTexture release];
You can (and should) read more than one pixel at a time. The way to make OpenGL fast is to pack all your work into as few operations as possible; that goes both ways (reading from and writing to the GPU).
Try reading the whole texture in one call and finding your red pixels in the resulting array, as below.
Also note that, generally speaking, it is a good idea to traverse a bitmap row by row, which means reversing the order of the for loops (y [rows] on the outside, x on the inside).
// Detect the position of all red points in the sprite
ccColor4B *buffer = new ccColor4B[960 * 640];
CCRenderTexture *renderTexture = [[CCRenderTexture alloc] initWithWidth:mySprite.boundingBox.size.width * CC_CONTENT_SCALE_FACTOR()
                                                                 height:mySprite.boundingBox.size.height * CC_CONTENT_SCALE_FACTOR()
                                                            pixelFormat:kCCTexture2DPixelFormat_RGBA8888];
[renderTexture begin];
[mySprite draw];
glReadPixels(0, 0, 960, 640, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
[renderTexture end];
[renderTexture release];

int i = 0;
for (int y = 0; y < 640; y++)
{
    for (int x = 0; x < 960; x++)
    {
        ccColor4B color = buffer[i]; // the index is equal to y * 960 + x
        ++i;
        if (color.r == 133 && color.g == 215 && color.b == 44)
        {
            NSLog(@"Found the red point at x: %d y: %d", x, y);
        }
    }
}
delete[] buffer;
Don't malloc your buffer every time, just reuse the same buffer; malloc is slow! Please take a look at Apple's Memory Usage Documentation.
I don't know of any algorithms that can do this any faster, but this might help.

Malformed OpenGL texture after calling CGBitmapContextCreateImage

I'm trying to use the contents of the UIView as an OpenGL texture. Here's how I obtain it:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
char *rawImage = malloc(4 * s.width * s.height);
CGContextRef bitmapContext = CGBitmapContextCreate(rawImage,
                                                   s.width,
                                                   s.height,
                                                   8,
                                                   4 * s.width,
                                                   colorSpace,
                                                   kCGImageAlphaNoneSkipFirst);
[v.layer renderInContext:bitmapContext];

// converting ARGB to BGRA
for (int i = 0; i < s.width * s.height; i++) {
    int p = i * 4;
    char a = rawImage[p];
    char r = rawImage[p + 1];
    char g = rawImage[p + 2];
    char b = rawImage[p + 3];
    rawImage[p] = b;
    rawImage[p + 1] = g;
    rawImage[p + 2] = r;
    rawImage[p + 3] = a;
}

CFRelease(colorSpace);
CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
return [GLKTextureLoader textureWithCGImage:image options:nil error:nil];
This is the UIView I start with (note the small triangles at the top left corner):
This is what I get on an OpenGL surface after taking a snapshot:
It's clear that the coordinates are mangled, but I can't tell in which way or what I'm doing wrong. Is it a row byte alignment issue?
UPDATE: if I don't do the colour component swizzling (omitting the ARGB-to-BGRA loop), here's the resulting picture:
Looks like an alignment issue. I suggest putting
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before
return [GLKTextureLoader textureWithCGImage:image options:nil error:nil];
Also, you might mess up the alignment when doing that component swizzling, which BTW is completely unnecessary, as modern OpenGL directly supports the BGRA format.
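To illustrate the BGRA point, here is a minimal sketch, assuming an iOS target with the GL_APPLE_texture_format_BGRA8888 extension and that you upload the bytes yourself with glTexImage2D instead of going through GLKTextureLoader (context setup and error handling omitted). With kCGBitmapByteOrder32Little the bitmap context already writes BGRA bytes, so the manual swizzle loop goes away:

    #import <UIKit/UIKit.h>
    #import <QuartzCore/QuartzCore.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // Sketch: turn a UIView's contents into a GL texture without swizzling.
    - (GLuint)textureFromView:(UIView *)v
    {
        CGSize s = v.bounds.size;
        size_t width = (size_t)s.width, height = (size_t)s.height;

        void *rawImage = calloc(width * height * 4, 1);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // Little-endian + alpha-first yields B,G,R,A byte order in memory.
        CGContextRef ctx = CGBitmapContextCreate(rawImage, width, height, 8, width * 4,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);
        [v.layer renderInContext:ctx];

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // echoes the alignment advice above
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // External format GL_BGRA_EXT matches the context's byte order.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                     GL_BGRA_EXT, GL_UNSIGNED_BYTE, rawImage);

        CGContextRelease(ctx);
        free(rawImage);
        return tex;
    }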
Solved: the picture above is what you get when you apply a texture with the same aspect ratio as the surface but swap the surface's s and t coordinates. Colour component swizzling is still needed, though.

CoreGraphics Pixel Manipulation Blue

I'm manipulating pixels to turn the image greyscale, and all appears well, except at the bottom of the image I get blue-coloured pixels. The effect is more pronounced the smaller the image's dimensions are, and it disappears past a certain size. Can anyone see what I'm doing wrong?
CGImageRef imageRef = image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CFDataRef dataref = CopyImagePixels(imageRef);
unsigned char *rawData = (unsigned char *)CFDataGetBytePtr(dataref);
int byteIndex = 0;
for (int ii = 0; ii < width * height; ++ii)
{
    int red = (int)rawData[byteIndex];
    int blue = (int)rawData[byteIndex + 1];
    int green = (int)rawData[byteIndex + 2];
    int r, g, b;
    r = (int)(red * 0.30) + (green * 0.59) + (blue * 0.11);
    g = (int)(red * 0.30) + (green * 0.59) + (blue * 0.11);
    b = (int)(red * 0.30) + (green * 0.59) + (blue * 0.11);
    rawData[byteIndex] = clamp(r, 0, 255);
    rawData[byteIndex + 1] = clamp(g, 0, 255);
    rawData[byteIndex + 2] = clamp(b, 0, 255);
    rawData[byteIndex + 3] = 255;
    byteIndex += 4;
}
CGContextRef ctx = CGBitmapContextCreate(rawData,
                                         CGImageGetWidth(imageRef),
                                         CGImageGetHeight(imageRef),
                                         CGImageGetBitsPerComponent(imageRef),
                                         CGImageGetBytesPerRow(imageRef),
                                         CGImageGetColorSpace(imageRef),
                                         kCGImageAlphaPremultipliedLast);
imageRef = CGBitmapContextCreateImage(ctx);
CFRelease(dataref);
UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(ctx);
Example of problem: http://iforce.co.nz/i/3rei1wba.utm.jpg
There's a reason that no-one has answered - the code posted in your question seems absolutely fine!
I've made a test project here : https://github.com/oneblacksock/stack_overflow_answer_6188863 and when I run it with your code in, it works perfectly!
The only bits that are different from your problem are the CopyImagePixels and clamp functions - perhaps your problem is in these?
Download my test project and see what I've done - try it with an image you know is broken and let me know how you get on!
Sam
The problem is that I was assuming CGImageGetWidth(imageRef) * 4 == CGImageGetBytesPerRow(imageRef), which isn't always the case. This was pointed out to me on the Apple developer forums and is correct. I've changed the loop to use the length of dataref and now it works as expected.
NSUInteger length = CFDataGetLength(dataref);
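For reference, a sketch of row-aware iteration that drops the tight-packing assumption entirely (clamp and the surrounding variables are the poster's own; the greyscale math just uses the standard luminance weights on an RGBA layout):

    // Sketch: index pixels via bytesPerRow instead of assuming width * 4.
    // Any padding bytes at the end of each row are simply skipped.
    size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
    for (NSUInteger y = 0; y < height; y++)
    {
        unsigned char *row = rawData + y * bytesPerRow;
        for (NSUInteger x = 0; x < width; x++)
        {
            unsigned char *px = row + x * 4;
            int grey = (int)(px[0] * 0.30 + px[1] * 0.59 + px[2] * 0.11);
            px[0] = px[1] = px[2] = clamp(grey, 0, 255);
            px[3] = 255;
        }
    }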

Placing Buttons on a Circle

I'm having a heck of a time calculating points on a circle in Objective-C. Even after reading TONS of other code samples my circle is still way off center. (And that's taking into consideration "center" vs "origin" and adjusting for the size of the UIView, in this case a UIButton.)
Here's the code I'm using. The circle is formed correctly, it's just off center. I'm not sure if this is a radians vs degrees problem or something else. This is a helper function in a ViewController that programmatically creates the UIButtons and adds them to the view:
- (CGPoint)pointOnCircle:(int)thisPoint withTotalPointCount:(int)totalPoints {
    CGPoint centerPoint = CGPointMake(self.view.frame.size.width / 2, self.view.frame.size.height / 2);
    float radius = 100.0;
    float angle = (2 * M_PI / (float)totalPoints) * (float)thisPoint;
    CGPoint newPoint;
    newPoint.x = (centerPoint.x / 2) + (radius * cosf(angle));
    newPoint.y = (centerPoint.y / 2) + (radius * sinf(angle));
    return newPoint;
}
Thanks for the help!
The center of your buttons (i.e. points on the circle) is
newPoint.x = (centerPoint.x) + (radius * cosf(angle)); // <= removed / 2
newPoint.y = (centerPoint.y) + (radius * sinf(angle)); // <= removed / 2
Please note that if you place buttons (i.e. rectangles) at these points, you have to make sure that their center lies at this point (i.e. subtract buttonWidth/2 from newPoint.x and buttonHeight/2 from newPoint.y to get the top-left corner).
Not sure why you divide the newPoint coordinates by 2 a second time...
What about this:
newPoint.x = centerPoint.x + (radius * cosf(angle));
newPoint.y = centerPoint.y + (radius * sinf(angle));
return newPoint;  
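For completeness, a sketch of how the corrected helper might be used to lay the buttons out (button count, size, and titles are arbitrary assumptions); setting each button's center avoids the top-left-corner adjustment mentioned above:

    // Sketch: place buttons around the circle using the corrected math.
    - (void)placeButtonsOnCircle {
        int totalButtons = 8;
        CGSize buttonSize = CGSizeMake(44, 44);
        for (int i = 0; i < totalButtons; i++) {
            CGPoint p = [self pointOnCircle:i withTotalPointCount:totalButtons];
            UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            [button setTitle:[NSString stringWithFormat:@"%d", i] forState:UIControlStateNormal];
            button.frame = CGRectMake(0, 0, buttonSize.width, buttonSize.height);
            button.center = p;   // center the button on the circle point
            [self.view addSubview:button];
        }
    }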

Erase using brush in GLPaint

As part of modifying GLPaint, I am trying to add erase functionality, where the user can select an eraser button and erase the painted area just as they paint.
I am trying to have a conditional statement within the renderLineFromPoint:toPoint: method so that I can check whether the stroke is for painting or erasing.
For erasing, I do not know how to make use of the "start" and "end" parameters. Is there any call in OpenGL, like glClear(), that accepts these two parameters and does the erase?
Any pointer will be very helpful. Thank you.
In the same vein as Erase using brush in GLPaint, you could reuse the
- (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end
method by having the condition:
if (isEraserBrushType) {
    glBlendFunc(GL_ONE, GL_ZERO);
    glColor4f(0, 0, 0, 0.0);
} else {
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    [self setBrushColorWithRed:brushColourRed green:brushColourGreen blue:brushColourBlue];
}
above the code:
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
Note, you'll need to implement isEraserBrushType, and store brushColourRed, brushColourGreen and brushColourBlue somehow.
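One possible shape for that bookkeeping (purely illustrative; the names are assumptions, and this assumes the original ES1 GLPaint sample where the brush colour ends up in glColor4f):

    // In the painting view's @interface (hypothetical; brushColourRed/Green/Blue are GLfloat ivars):
    @property (nonatomic, assign) BOOL isEraserBrushType;

    // In the @implementation: remember the components whenever the brush colour
    // changes, so the "else" branch above can restore them after erasing.
    - (void)setBrushColorWithRed:(CGFloat)red green:(CGFloat)green blue:(CGFloat)blue
    {
        brushColourRed   = red;
        brushColourGreen = green;
        brushColourBlue  = blue;
        glColor4f(red * kBrushOpacity, green * kBrushOpacity, blue * kBrushOpacity, kBrushOpacity);
    }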
I think I can solve this problem, although not in a very good way.
You can create a new function as a copy of renderLineFromPoint:, like this:
- (void)drawErase:(CGPoint)start toPoint:(CGPoint)end
{
    static GLfloat *eraseBuffer = NULL;
    static NSUInteger eraseMax = 64;
    NSUInteger vertexCount = 0, count, i;

    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

    // Convert locations from points to pixels
    CGFloat scale = 1.0; // self.contentScaleFactor;
    start.x *= scale;
    start.y *= scale;
    end.x *= scale;
    end.y *= scale;

    // Allocate vertex array buffer
    if (eraseBuffer == NULL)
        eraseBuffer = malloc(eraseMax * 2 * sizeof(GLfloat));

    // Add points to the buffer so there are drawing points every X pixels
    count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
    for (i = 0; i < count; ++i) {
        if (vertexCount == eraseMax) {
            eraseMax = 2 * eraseMax;
            eraseBuffer = realloc(eraseBuffer, eraseMax * 2 * sizeof(GLfloat));
        }
        eraseBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
        eraseBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
        vertexCount += 1;
    }

    //glEnable(GL_BLEND);                  // enable blending
    //glDisable(GL_DEPTH_TEST);            // disable depth testing
    //glBlendFunc(GL_SRC_ALPHA, GL_ONE);   // translucent blending based on the source pixel's alpha

    // You need to set the blend mode
    glBlendFunc(GL_ONE, GL_ZERO);
    // The erase brush colour is transparent
    glColor4f(0, 0, 0, 0.0);

    // Render the vertex array
    glVertexPointer(2, GL_FLOAT, 0, eraseBuffer);
    glDrawArrays(GL_POINTS, 0, vertexCount);

    // Display the buffer
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    // Finally, restore the blend mode
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}
My English is poor; I hope you can understand what I said and that this helps you.
I attempted to use the accepted answer; however, it would erase in a square pattern, whereas I wanted to erase using my own brush. Instead I used a different blend function.
if (self.isErasing) {
    glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
    [self setBrushColorWithRed:0 green:0 blue:0];
} else {
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}
The way this works is that the incoming (source) colour is multiplied by zero, so it disappears completely and you don't actually paint anything new. The destination colour is multiplied by (1 - source alpha), so wherever the brush has alpha, the destination loses colour.
Another idea: if your view's background is pure black, you can simply call [self setBrushColorWithRed:0.0 green:0.0 blue:0.0] and then call renderLineFromPoint:toPoint: - this will draw in black (and the user will think they are actually erasing).