Erase using brush in GLPaint - iphone

As part of modifying GLPaint, I am trying to add erase functionality, where the user can select an eraser button and erase the painted area just as they paint it.
I am trying to add a conditional statement inside the "renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end" method so that I can check whether the stroke is for painting or for erasing.
For erasing, I do not know how to make use of the start and end parameters. Is there a method in OpenGL, like glClear(), that accepts these two parameters and erases?
Any pointers would be very helpful. Thank you.

In the same vein as Erase using brush in GLPaint, you could reuse the
- (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end
method by adding the condition:
if (isEraserBrushType) {
    glBlendFunc(GL_ONE, GL_ZERO);
    glColor4f(0, 0, 0, 0.0);
} else {
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    [self setBrushColorWithRed:brushColourRed green:brushColourGreen blue:brushColourBlue];
}
above the code:
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
Note, you'll need to implement isEraserBrushType, and store brushColourRed, brushColourGreen and brushColourBlue somehow.
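A minimal sketch of that bookkeeping (the names isEraserBrushType, brushColourRed, etc. are illustrative, not part of GLPaint; only setBrushColorWithRed:green:blue: comes from the sample):

BOOL isEraserBrushType;                                     // toggled by the eraser button
CGFloat brushColourRed, brushColourGreen, brushColourBlue;  // last colour the user picked

- (IBAction)eraserButtonPressed:(id)sender
{
    isEraserBrushType = !isEraserBrushType;                 // switch between painting and erasing
}

- (void)rememberBrushColourRed:(CGFloat)red green:(CGFloat)green blue:(CGFloat)blue
{
    brushColourRed   = red;                                 // keep a copy so the else branch above
    brushColourGreen = green;                               // can restore the painting colour after
    brushColourBlue  = blue;                                // an erasing stroke
    [self setBrushColorWithRed:red green:green blue:blue];  // GLPaint's existing colour setter
}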

I think I can solve this problem, although not in a very elegant way.
You can create a new function as a copy of renderLineFromPoint:toPoint:, like this:
- (void)drawErase:(CGPoint)start toPoint:(CGPoint)end
{
    static GLfloat *eraseBuffer = NULL;
    static NSUInteger eraseMax = 64;
    NSUInteger vertexCount = 0, count, i;

    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

    // Convert locations from points to pixels
    CGFloat scale = 1.0; // self.contentScaleFactor;
    start.x *= scale;
    start.y *= scale;
    end.x   *= scale;
    end.y   *= scale;

    // Allocate the vertex array buffer
    if (eraseBuffer == NULL)
        eraseBuffer = malloc(eraseMax * 2 * sizeof(GLfloat));

    // Add points to the buffer so there are drawing points every X pixels
    count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
    for (i = 0; i < count; ++i) {
        if (vertexCount == eraseMax) {
            eraseMax = 2 * eraseMax;
            eraseBuffer = realloc(eraseBuffer, eraseMax * 2 * sizeof(GLfloat));
        }
        eraseBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
        eraseBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
        vertexCount += 1;
    }

    // glEnable(GL_BLEND);                // enable blending
    // glDisable(GL_DEPTH_TEST);          // disable depth testing
    // glBlendFunc(GL_SRC_ALPHA, GL_ONE); // semi-transparent blending based on the source pixel's alpha

    // You need to set the blend mode:
    glBlendFunc(GL_ONE, GL_ZERO);
    // the erase brush colour is transparent
    glColor4f(0, 0, 0, 0.0);

    // Render the vertex array
    glVertexPointer(2, GL_FLOAT, 0, eraseBuffer);
    glDrawArrays(GL_POINTS, 0, vertexCount);

    // Display the buffer
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    // Finally, restore the blend mode
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}
My English is not great, but I hope you can understand what I mean.
Hope this helps.

I attempted to use the accepted answer; however, it erased in a square pattern, whereas I wanted to erase using my own brush shape. Instead I used a different blending function.
if (self.isErasing) {
    glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
    [self setBrushColorWithRed:0 green:0 blue:0];
} else {
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
}
The way this works is that the incoming (source) colour is multiplied by zero, so it disappears completely, meaning you don't actually paint anything new. The destination colour is then multiplied by (1 - source alpha), so wherever the brush has colour the destination ends up with none.
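For reference, this is what the fixed-function blend stage computes with that setting (standard OpenGL ES 1.1 blending, assuming the default add equation):

// glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA) means:
//   dest' = src * 0 + dest * (1 - srcAlpha)
// A brush texel with alpha 1.0 leaves dest * 0.0 -> fully erased.
// A brush texel with alpha 0.3 leaves dest * 0.7 -> partially erased (soft brush edge).
// A brush texel with alpha 0.0 leaves dest * 1.0 -> untouched.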

Another idea: if your view's background is pure black, you can simply call [self setBrushColorWithRed:0.0 green:0.0 blue:0.0] and then call renderLineFromPoint:toPoint: - this will draw in black (and the user will think they are actually erasing).
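A sketch of that idea, assuming GLPaint's existing setBrushColorWithRed:green:blue: and renderLineFromPoint:toPoint: methods (previousLocation and location stand in for whatever points your touch handling already tracks):

// Fake an eraser by painting with the background colour (black)
[self setBrushColorWithRed:0.0 green:0.0 blue:0.0];
[self renderLineFromPoint:previousLocation toPoint:location];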


GL_LINES drawing back to first point at the end of vertices

Using GLKit (OpenGLES 2.0) on iOS.
I'm modifying some code that renders cubes in a 3d environment so that it will render waveforms in 2 dimensions in a 3d scene. I have it working great except for one problem which is:
Suppose I render n x/y coordinates using GL_LINES. It will draw a line from 0 -> 1, then 1 -> 2, and so on. But then point N will always have a line drawn back to point 0. I would expect GL_LINE_LOOP to do this, but not GL_LINES.
I have a workaround in mind if I cannot solve this, which is to use two triangles (a GL_TRIANGLE_STRIP) to draw each line segment. I'd really rather just understand why this isn't working, though.
Here are the important parts of the code:
typedef struct {
float Position[3];
} Vertex;
const NSUInteger kVerticiesSize = 320;
Vertex Vertices[kVerticiesSize] = {};
const GLubyte Indices[] = {
0,1,
1,2,
2,3,
3,4,
4,5,
5,6,
6,7,
7,8,
8,9,
9,10,
// ... keeps going up to kVerticiesSize
};
The GL setup
- (void)setupGL {
[EAGLContext setCurrentContext:self.context];
self.effect = [[GLKBaseEffect alloc] init];
glGenVertexArraysOES(1, &_vertexArray);
glBindVertexArrayOES(_vertexArray);
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
glGenBuffers(1, &_indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(Indices), Indices, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition,
3,
GL_FLOAT,
GL_FALSE,
sizeof(Vertex),
offsetof(Vertex, Position));
float aspect = fabsf(self.bounds.size.width / self.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.0f, 50.0f);
self.effect.transform.projectionMatrix = projectionMatrix;
}
And the Update and Render methods:
// Always called once before "render"
-(void)update {
static bool hasRun = NO;
for(int x = 0; x < kVerticiesSize; x++){
// normalize x variable to x coord -1 .. 1 and set Y to math function
Vertex v = Vertices[x];
v.Position[0] = [self normalizedX:x];
v.Position[1] = cos(x);
v.Position[2] = 0;
Vertices[x] = v;
if(!hasRun){
NSLog(@"x=%f y=%f z=%f", v.Position[0], v.Position[1], v.Position[2]);
}
}
hasRun = YES;
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STATIC_DRAW);
[self updateTransforms];
}
-(void)render {
[self.effect prepareToDraw];
glBindVertexArrayOES(_vertexArray);
glDrawElements(GL_LINE_STRIP,
sizeof(Indices)/sizeof(Indices[0]),
GL_UNSIGNED_BYTE,
0);
}
You have:
glVertexAttribPointer(GLKVertexAttribPosition,
3,
GL_FLOAT,
GL_FALSE,
sizeof(Vertex),
offsetof(Vertex, Position));
But the signature of glVertexAttribPointer is:
glVertexAttribPointer (GLuint indx, // GLKVertexAttribPosition
GLint size, // 3 floats per vertex
GLenum type, // Vertices are GL_FLOAT-s
GLboolean normalized, // GL_FALSE
GLsizei stride, // *** Should be 0, no?
const GLvoid* ptr); // *** Should be &Vertices[0], no?
I'm pretty sure that, if you make the commented suggestions above, it would work. Then you just render the whole thing with:
glDrawArrays(GL_LINES, 0, kVerticiesSize);
or
glDrawArrays(GL_LINES, startIndex, lineCount);
...right?
DISCLAIMER: I'm not fluent with the whole bound-buffers pipeline, but it strikes me as unnecessary complication, in this case.
ALSO NOTE: Your text mentions GL_LINES, but your sample code uses GL_LINE_STRIP. A line strip works like a triangle strip: the first 2 vertices define the first line segment but, after that, each single vertex defines the next line segment, which picks up where the previous one left off. Hence, your vertex indices are correct for GL_LINES but, for GL_LINE_STRIP, you'd just want { 0, 1, 2, 3, 4, 5... }
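A hedged sketch of what the suggested client-side-array path could look like (OpenGL ES 2.0 with GLKit; assumes Vertices and kVerticiesSize are declared as in the question and that you are willing to skip the VAO/VBO):

glBindVertexArrayOES(0);                              // leave the VAO so the pointer below is a CPU address
glBindBuffer(GL_ARRAY_BUFFER, 0);                     // no VBO bound
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition,
                      3,                              // three floats per Position
                      GL_FLOAT,
                      GL_FALSE,
                      0,                              // tightly packed, so stride 0 is fine
                      Vertices);                      // &Vertices[0]
[self.effect prepareToDraw];
glDrawArrays(GL_LINE_STRIP, 0, kVerticiesSize);       // connected waveform; GL_LINES would need paired vertices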

How to read screen to see which pixels are white

I have a view on which the user can draw some lines; it was developed using this.
Now the lines are drawn between points using the code:
- (void) renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
static GLfloat* vertexBuffer = NULL;
static NSUInteger vertexMax = 64;
NSUInteger vertexCount = 0,
count,
i;
//Allocate vertex array buffer
if(vertexBuffer == NULL)
vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));
// Add points to the buffer so there are drawing points every X pixels
count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
for(i = 0; i < count; ++i) {
if(vertexCount == vertexMax) {
vertexMax = 2 * vertexMax;
vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
}
vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
vertexCount += 1;
}
//Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_POINTS, 0, vertexCount);
// Display the buffer
[self swapBuffers];
}
The objective is to read the drawing area of the screen, which is created by the following code:
PictureView * scratchPaperView = [[RecordedPaintingView alloc] initWithFrame:CGRectMake(0, 45, 320, 415)];
[self.view addSubview:scratchPaperView];
I want to find out the pixels of the lines, i.e. all the pixels that are white in the drawing area. How should I proceed from here?
Assuming that you can get a UIImage, its CGImage, or a CGImageRef out of a PictureView, you can render this image into a CGBitmapContext. The image will tell you the number of components, whether it has alpha, and where the alpha sits. Most likely you are going to get 4-byte pixels (32 bits/pixel). You then walk each row looking at each pixel. Assuming a black background (which would be 255,0,0,0 or 0,0,0,255, depending on where the alpha byte is), you will see non-black pixels when you get close to or hit a line. A pure white pixel is going to be 255,255,255,255.
I'm pretty sure you can find examples of how to render an image into a context, and also how to examine pixels, by searching around. Frankly, what always gets me is the confusing pixel layout attributes - I usually end up printing a few test cases to make sure I got it right.
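A hedged sketch of that approach, assuming you can obtain image (a CGImageRef) from your drawing view somehow, as described above:

// Render the CGImage into an RGBA bitmap, then scan for pure white pixels.
size_t width  = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t bytesPerRow = width * 4;
unsigned char *pixels = calloc(height * bytesPerRow, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        unsigned char *p = pixels + y * bytesPerRow + x * 4;   // RGBA order in this layout
        if (p[0] == 255 && p[1] == 255 && p[2] == 255) {
            // (x, y) is a pure white pixel, i.e. part of a drawn line
        }
    }
}
CGContextRelease(ctx);
free(pixels);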

Draw continuously gradient line in cocos2d

I have searched and searched and not found anything that works for me. How can I draw a continuous line with a gradient in cocos2d-iphone? I have tried CCRibbon, but then I get gaps, and I have tried to draw multiple aligned lines in the draw method with different alpha values, but my draw method does not react to setting the alpha (it is always 100% alpha; see my draw method below). How can I do this, please?
- (void)draw {
glEnable(GL_LINE_SMOOTH);
glColor4ub(0,255,255,50);
ccDrawLine( ccp(0 - 5, 0 - 5), ccp(200 - 5, 300 - 5) );
}
Thank you
Søren
I am not sure whether this is what you need:
- (void)draw {
static GLubyte alpha = 0;
static int step = 1;
alpha += step;
if (alpha == 255 || alpha == 0) {
step = -step;
}
// You are going to need these two lines to use the alpha channel
glEnable(GL_BLEND); // disabled by default
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_LINE_SMOOTH);
glColor4ub(0, 255, 255, alpha);
ccDrawLine(ccp(0, 0), ccp(200, 300));
}
For more information about GL_BLEND and glBlendFunc you can refer to OpenGL ES 1.1 Reference.
From the question I'm thinking you want, for example, a thick line that runs from left to right and a gradient that runs from the top of that line to the bottom. If so, then this is how I did it. I created a 1 x 6 png image that had a vertical gradient with an alpha built into it. Then when I wanted to create a line, I would call my drawLine method in my layer.
- (void) drawLine: (CGPoint)origin withEnd:(CGPoint)end
{
CCSprite *wall = [CCSprite spriteWithFile:@"WallGradient.png"];
float distance = sqrt(powf(origin.x - end.x, 2) + powf(origin.y - end.y, 2));
float rotation = (180 / M_PI) * acosf((origin.x - end.x) / distance);
[wall setScaleX:distance];
[wall setRotation: rotation];
[wall setPosition: origin];
[self addChild:wall];
}
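A possible call site from the same layer (the coordinates are just an example):

[self drawLine:ccp(40, 160) withEnd:ccp(280, 240)];   // stretches and rotates the gradient sprite between the two points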

iPhone Circular Progress Indicator. CGContextRef. Draw on Demand

I want to draw an image that would effectively be a circular progress indicator on a UIButton. Because the image is supposed to represent the progress of a task, I do not think I should handle the drawing code in the view's drawRect: method.
I have a thread that is performing some tasks. After each task, it calls a method on the main thread. The called method is supposed to update the image on the button.
In the button update method, I create a CGContextRef by using CGBitmapContextCreate.
Then I use the button's frame to create a CGRect.
Then I attempt to draw into using the context I created.
Lastly I call setNeedsDisplay and clean up.
But none of this is inside the view's drawRect: method.
I would like to know if anyone has used a CGContext to draw in a view on demand while the view is being displayed.
I would like to get some ideas regarding an approach to doing this.
Here is an encapsulated version of what I am doing now:
CGContextRef xContext = nil;
CGColorSpaceRef xColorSpace;
CGRect xRect;
void* xBitmapData;
int iBMPByteCount;
int iBMPBytesPerRow;
float fBMPWidth = 20.0f;
float fBMPHeight = 20.0f;
float fPI = 3.14159;
float fRadius = 25.0f;
iBMPBytesPerRow = (fBMPWidth * 4);
iBMPByteCount = (iBMPBytesPerRow * fBMPHeight);
xColorSpace = CGColorSpaceCreateDeviceRGB();
xBitmapData = malloc(iBMPByteCount);
xContext = CGBitmapContextCreate(xBitmapData, fBMPWidth, fBMPHeight, 8, iBMPBytesPerRow, xColorSpace, kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(xColorSpace);
UIGraphicsPushContext(xContext);
xRect = CGRectMake(30.0f, 400.0f, 50.0f, 50.0f);
float fWidth = xRect.size.width;
float fHeight = xRect.size.height;
CGContextClearRect(xContext, xRect);
CGContextSetRGBStrokeColor(xContext, 0.5f, 0.6f, 0.7f, 1.0f);
CGContextSetLineWidth(xContext, 1.0f);
float fArcBegin = 45.0f * fPI / 180.0f;
float fArcEnd = 90.0f * fPI / 180.0f;
CGContextSetFillColor(xContext, CGColorGetComponents( [[UIColor greenColor] CGColor]));
CGContextMoveToPoint(xContext, fWidth, fHeight);
CGContextAddArc(xContext, fWidth, fHeight, fRadius, fArcBegin, fArcEnd, 0);
CGContextClosePath(xContext);
CGContextFillPath(xContext);
UIGraphicsPopContext();
CGContextRelease(xContext);
[self.view setNeedsDisplay];
// [self.view setNeedsDisplayInRect: xRect];
The above is a little bit wonky because I've tried different tweaks. However, I think it communicates what I am trying to do.
Alternative approach:
You could create a series of images that represent the progress updates and then replace the UIButton currentImage property with the setImage:forState: method at each step of the process. This doesn't require drawing in the existing view and this approach has worked well for me to show simple "animation" of images (buttons or other).
Would this approach work for you? If not, why not?
Bart
This was really bugging me so after dealing with a series of silly, but necessary issues regarding the project I want this functionality for, I played around with it.
The end result is that I can now arbitrarily draw an arc representing the progress of a particular background task to a button.
The goal was to draw something like the little indicator in the lower right-hand corner of the Xcode window while a project is being cleaned or compiled.
I created a function that will draw and fill an arc and return it as a UIImage.
The worker thread calls a method on the main thread (via performSelectorOnMainThread:) with the current values and a button identifier. In the called method, I call the arc image function with the percentage filled and so on.
example call:
oImg = [self ArcImageCreate:100.0f fWidth:100.0f
fPercentFilled: 0.45f fAngleStart: 0.0f xFillColor:[UIColor blueColor]];
Then set the background image of the button:
[oBtn setBackgroundImage: oImg forState: UIControlStateNormal];
Here is the function:
It is not finished, but it works well enough to illustrate how I am doing this.
/**
 ArcImageCreate
 @ingroup UngroupedFunctions
 @brief Create a filled or unfilled solid arc and return it as a UIImage.

 Allows for dynamic / arbitrary update of an object that allows a UIImage to be drawn on it. \
 This can be used for some sort of pie chart or progress indicator by image flipping.

 @param fHeight The height of the created UIImage.
 @param fWidth The width of the created UIImage.
 @param fPercentFilled A percentage of the circle to be filled by the arc. 0.0 to 1.0.
 @param fAngleStart The angle where the arc should start. 0 to 360. Clock reference.
 @param xFillColor The color of the filled area.
 @return Pointer to a UIImage.

 @todo [] Validate object creation at each step.
 @todo [] Convert PercentFilled (0.0 to 1.0) to the appropriate radian value (-3.15 to +3.15).
 @todo [] Background image support. Allow for the arc to be drawn on top of an image \
          and the whole thing returned.
 @todo [] Background image reduction. Background images will have to be resized to fit the specified size. \
          Do not want to return a 65KB object because the background is 60K or whatever.
 @todo [] UIColor RGBA components. Determine a decent method of extracting RGBA values \
          from a UIColor*. Check out arstechnica.com/apple/guides/2009/02/iphone-development-accessing-uicolor-components.ars \
          for an idea.
 */
- (UIImage*) ArcImageCreate: (float)fHeight fWidth:(float)fWidth fPercentFilled:(float)fPercentFilled fAngleStart:(float)fAngleStart xFillColor:(UIColor*)xFillColor
{
UIImage* fnRez = nil;
float fArcBegin = 0.0f;
float fArcEnd = 0.0f;
float fArcPercent = 0.0f;
UIColor* xArcColor = nil;
float fArcImageWidth = 0.0f;
float fArcImageHeight = 0.0f;
CGRect xArcImageRect;
CGContextRef xContext = nil;
CGColorSpaceRef xColorSpace;
void* xBitmapData;
int iBMPByteCount;
int iBMPBytesPerRow;
float fPI = 3.14159;
float fRadius = 25.0f;
// @todo Force default of 100x100 px if out of bounds. \
// Check max image dimensions for iPhone. \
// If negative, flip values *if* values are 'reasonable'. \
// Determine minimum useable pixel dimensions. 10x10 px is too small. Or is it?
fArcImageWidth = fHeight;
fArcImageHeight = fWidth;
// Get the passed target percentage and clip it between 0.0 and 1.0
fArcPercent = (fPercentFilled < 0.0f || fPercentFilled > 1.0f) ? 1.0f : fPercentFilled;
fArcPercent = (fArcPercent > 1.0f) ? 1.0f : fArcPercent;
// Get the passed start angle and clip it between 0.0 to 360.0
fArcBegin = (fAngleStart < 0.0f || fAngleStart > 359.0f) ? 0.0f : fAngleStart;
fArcBegin = (fArcBegin > 359.0f) ? 0.0f : fArcBegin;
fArcBegin = (fArcBegin * fPI) / 180.0f;
fArcEnd = ((360.0f * fArcPercent) * fPI) / 180.0f;
//
if (xFillColor == nil) {
// random color
} else {
xArcColor = xFillColor;
}
// Calculate memory required for image.
iBMPBytesPerRow = (fArcImageWidth * 4);
iBMPByteCount = (iBMPBytesPerRow * fArcImageHeight);
xBitmapData = malloc(iBMPByteCount);
// Create a color space. Behavior changes at OSXv10.4. Do not rely on it for consistency across devices.
xColorSpace = CGColorSpaceCreateDeviceRGB();
// Set the system to draw. Behavior changes at OSXv10.3.
// Both of these work. Not sure which is better.
// xContext = CGBitmapContextCreate(xBitmapData, fArcImageWidth, fArcImageHeight, 8, iBMPBytesPerRow, xColorSpace, kCGImageAlphaPremultipliedFirst);
xContext = CGBitmapContextCreate(NULL, fArcImageWidth, fArcImageHeight, 8, iBMPBytesPerRow, xColorSpace, kCGImageAlphaPremultipliedFirst);
// Let the system know the colorspace reference is no longer required.
CGColorSpaceRelease(xColorSpace);
// Set the created context as the current context.
// UIGraphicsPushContext(xContext);
// Define the image's box.
xArcImageRect = CGRectMake(0.0f, 0.0f, fArcImageWidth, fArcImageHeight);
// Clear the image's box.
// CGContextClearRect(xContext, xRect);
// Draw the ArcImage's background image.
// CGContextDrawImage(xContext, xArcImageRect, [oBackgroundImage CGImage]);
// Set Us Up The Transparent Drawing Area.
CGContextBeginTransparencyLayer(xContext, nil);
// Set the fill and stroke colors
// @todo [] Determine why CGContextSetFillColor does not work. Use an alternative method.
// CGContextSetFillColor(xContext, CGColorGetComponents([xArcColor CGColor]));
// CGContextSetFillColorWithColor(xContext, CGColorGetComponents([xArcColor CGColor]));
// Test Colors
CGContextSetRGBFillColor(xContext, 0.3f, 0.4f, 0.5f, 1.0f);
CGContextSetRGBStrokeColor(xContext, 0.5f, 0.6f, 0.7f, 1.0f);
CGContextSetLineWidth(xContext, 1.0f);
// Something like this to reverse drawing?
// CGContextTranslateCTM(xContext, TranslateXValue, TranslateYValue);
// CGContextScaleCTM(xContext, -1.0f, 1.0f); or CGContextScaleCTM(xContext, 1.0f, -1.0f);
// Test Vals
// fArcBegin = 45.0f * fPI / 180.0f; // 0.785397
// fArcEnd = 90.0f * fPI / 180.0f; // 1.570795
// Move to the start point and draw the arc.
CGContextMoveToPoint(xContext, fArcImageWidth/2.0f, fArcImageHeight/2.0f);
CGContextAddArc(xContext, fArcImageWidth/2.0f, fArcImageHeight/2.0f, fRadius, fArcBegin, fArcEnd, 0);
// Ask the OS to close the arc (current point to starting point).
CGContextClosePath(xContext);
// Fill 'er up. Implicit path closure.
CGContextFillPath(xContext);
// CGContextEOFillPath(context);
// Close Transparency drawing area.
CGContextEndTransparencyLayer(xContext);
// Create an ImageReference and create a UIImage from it.
CGImageRef xCGImageTemp = CGBitmapContextCreateImage(xContext);
CGContextRelease(xContext);
fnRez = [UIImage imageWithCGImage: xCGImageTemp];
CGImageRelease(xCGImageTemp);
// UIGraphicsPopContext;
return fnRez;
}

How to draw a solid circle with cocos2d for iPhone

Is it possible to draw a filled circle with cocos2d ?
An outlined circle can be done using the drawCircle() function, but is there a way to fill it in a certain color? Perhaps by using pure OpenGL?
In DrawingPrimitives.m, change this in drawCircle:
glDrawArrays(GL_LINE_STRIP, 0, segs+additionalSegment);
to:
glDrawArrays(GL_TRIANGLE_FAN, 0, segs+additionalSegment);
You can read more about opengl primitives here:
http://www.informit.com/articles/article.aspx?p=461848
Here's a slight modification of ccDrawCircle() that lets you draw any slice of a circle. Stick this in CCDrawingPrimitives.m and also add the method header information to CCDrawingPrimitives.h:
Parameters: a: starting angle in radians, d: delta or change in angle in radians (use 2*M_PI for a complete circle)
Changes are commented
void ccDrawFilledCircle( CGPoint center, float r, float a, float d, NSUInteger totalSegs)
{
int additionalSegment = 2;
const float coef = 2.0f * (float)M_PI/totalSegs;
NSUInteger segs = d / coef;
segs++; //Rather draw over than not draw enough
if (d == 0) return;
GLfloat *vertices = calloc( sizeof(GLfloat)*2*(segs+2), 1);
if( ! vertices )
return;
for(NSUInteger i=0;i<=segs;i++)
{
float rads = i*coef;
GLfloat j = r * cosf(rads + a) + center.x;
GLfloat k = r * sinf(rads + a) + center.y;
//Leave first 2 spots for origin
vertices[2 + i*2]     = j * CC_CONTENT_SCALE_FACTOR();
vertices[2 + i*2 + 1] = k * CC_CONTENT_SCALE_FACTOR();
}
//Put origin vertices into first 2 spots
vertices[0] = center.x * CC_CONTENT_SCALE_FACTOR();
vertices[1] = center.y * CC_CONTENT_SCALE_FACTOR();
// Default GL states: GL_TEXTURE_2D, GL_VERTEX_ARRAY, GL_COLOR_ARRAY, GL_TEXTURE_COORD_ARRAY
// Needed states: GL_VERTEX_ARRAY,
// Unneeded states: GL_TEXTURE_2D, GL_TEXTURE_COORD_ARRAY, GL_COLOR_ARRAY
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
//Change to fan
glDrawArrays(GL_TRIANGLE_FAN, 0, segs+additionalSegment);
// restore default state
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
free( vertices );
}
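Example calls from a node's draw method (in cocos2d 1.x the fill colour comes from the current GL colour, e.g. via glColor4ub; the coordinates here are arbitrary):

glColor4ub(255, 0, 0, 255);                                   // fill colour
ccDrawFilledCircle(ccp(160, 240), 60.0f, 0.0f, 2 * M_PI, 32); // full circle, 32 segments
ccDrawFilledCircle(ccp(160, 240), 60.0f, 0.0f, M_PI_2, 32);   // quarter slice starting at angle 0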
Look into:
CGContextAddArc
CGContextFillPath
These will allow you to fill a circle without needing OpenGL
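A minimal sketch of that route, assuming you are inside drawRect: of an ordinary UIView (inside a cocos2d node's draw there is no UIKit graphics context, as the next answer discovers):

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(ctx, 0.0, 0.5, 1.0, 1.0);            // fill colour
CGContextAddArc(ctx, 160.0, 240.0, 50.0, 0.0, 2 * M_PI, 0);   // full circle: centre (160, 240), radius 50
CGContextFillPath(ctx);                                        // fill the closed path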
I have also wondered about this, but haven't really accomplished it. I tried using the CGContext approach that Grouchal suggested above, but I can't get it to draw anything on the screen. This is what I've tried:
-(void) draw
{
[self makestuff:UIGraphicsGetCurrentContext()];
}
-(void)makestuff:(CGContextRef)context
{
// Drawing lines with a white stroke color
CGContextSetRGBStrokeColor(context, 1.0, 1.0, 1.0, 1.0);
// Draw them with a 2.0 stroke width so they are a bit more visible.
CGContextSetLineWidth(context, 2.0);
// Draw a single line from left to right
CGContextMoveToPoint(context, 10.0, 30.0);
CGContextAddLineToPoint(context, 310.0, 30.0);
CGContextStrokePath(context);
// Draw a connected sequence of line segments
CGPoint addLines[] =
{
CGPointMake(10.0, 90.0),
CGPointMake(70.0, 60.0),
CGPointMake(130.0, 90.0),
CGPointMake(190.0, 60.0),
CGPointMake(250.0, 90.0),
CGPointMake(310.0, 60.0),
};
// Bulk call to add lines to the current path.
// Equivalent to MoveToPoint(points[0]); for(i=1; i<count; ++i) AddLineToPoint(points[i]);
CGContextAddLines(context, addLines, sizeof(addLines)/sizeof(addLines[0]));
CGContextStrokePath(context);
// Draw a series of line segments. Each pair of points is a segment
CGPoint strokeSegments[] =
{
CGPointMake(10.0, 150.0),
CGPointMake(70.0, 120.0),
CGPointMake(130.0, 150.0),
CGPointMake(190.0, 120.0),
CGPointMake(250.0, 150.0),
CGPointMake(310.0, 120.0),
};
// Bulk call to stroke a sequence of line segments.
// Equivalent to for(i=0; i<count; i+=2) { MoveToPoint(point[i]); AddLineToPoint(point[i+1]); StrokePath(); }
CGContextStrokeLineSegments(context, strokeSegments, sizeof(strokeSegments)/sizeof(strokeSegments[0]));
}
These methods are defined in a cocos node class, and the makestuff method I borrowed from a code example...
NOTE:
I'm trying to draw any shape or path and fill it. I know that the code above only draws lines, but I didn't want to continue until I got that working.
EDIT:
This is probably a crappy solution, but I think this would at least work.
Each CocosNode has a texture (Texture2D *). The Texture2D class can be initialized from a UIImage. A UIImage can be initialized from a CGImageRef, and it is possible to create a Quartz bitmap context and get a CGImageRef from it.
So, what you would do is:
Create a Quartz bitmap context
Draw into this context with Quartz
Initialize a UIImage from the resulting CGImageRef
Make a Texture2D that is initialized with that image
Set the texture of a CocosNode to that Texture2D instance
The question is whether this would be fast enough. I would prefer to get a CGImageRef from the CocosNode directly and draw into it instead of going through all these steps, but I haven't found a way to do that yet (and I'm kind of a noob at this, so it's hard to actually get anywhere at all).
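A rough sketch of those five steps (API names like CCTexture2D and CCSprite are from the cocos2d 1.x era and may differ in your version; sizes and colours are placeholders):

UIGraphicsBeginImageContext(CGSizeMake(128, 128));             // 1. create a Quartz bitmap context
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(ctx, 0.0, 1.0, 0.0, 1.0);
CGContextAddArc(ctx, 64.0, 64.0, 50.0, 0.0, 2 * M_PI, 0);      // 2. draw the filled shape with Quartz
CGContextFillPath(ctx);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();  // 3. UIImage out of the context
UIGraphicsEndImageContext();
CCTexture2D *texture = [[[CCTexture2D alloc] initWithImage:image] autorelease]; // 4. texture from the image
CCSprite *circle = [CCSprite spriteWithTexture:texture];       // 5. give the texture to a node
[self addChild:circle];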
There is a new function in cocos2d CCDrawingPrimitives called ccDrawSolidCircle(CGPoint center, float r, NSUInteger segs). For those looking at this now, use this method instead, then you don't have to mess with the cocos2d code, just import CCDrawingPrimitives.h
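For example, from a node's draw method, using the signature quoted above:

ccDrawSolidCircle(ccp(240, 160), 40.0f, 24);   // centre, radius, number of segments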
I used the approach below.
glLineWidth(2);
for(int i=0;i<50;i++){
ccDrawCircle( ccp(s.width/2, s.height/2), i,0, 50, NO);
}
I drew multiple circles with a for loop, and it looks like a filled circle.