I'm trying to develop a chess game for the iPhone and have gotten stuck on an otherwise insignificant detail that I cannot seem to get past. Can anybody spot the piece that's missing?
Here's the problem: when a user taps a square, it should highlight in some faded color. What actually happens is that it draws solid black, regardless of what color I set.
Here's an image of what I get: the black circle should be red, or any color I choose.
Note that the UIView I'm drawing to sits behind the pieces view. I doubt it matters, but I want to be complete.
Finally, the code for the layer:
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        self.backgroundColor = [UIColor clearColor];
        rowSelected = 0;
        colSelected = 0;
    }
    return self;
}
-(void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, rect);
    CGContextSetRGBStrokeColor(context, 255.0, 0.0, 0.0, 1.0);
    if (rowSelected && colSelected)
    {
        int gridsize = (self.frame.size.width / 8);
        int sx = (colSelected-1) * gridsize;
        int sy = (rowSelected-1) * gridsize;
        int ex = gridsize;
        int ey = gridsize;
        CGContextFillEllipseInRect(context, CGRectMake(sx, sy, ex, ey));
    }
}
// Handles the start of a touch
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    int gridsize = (self.frame.size.width / 8);

    // Position of touch in view
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint loc = [touch locationInView:self.superview];

    int x = loc.x / gridsize;
    int y = loc.y / gridsize;

    rowSelected = y + 1;
    colSelected = x + 1;

    [self setNeedsDisplay];
}
I have tried a number of things (using a value of 1 rather than 255, playing with the alpha, etc.), but I can't get anything other than solid black.
EDIT:
Also, here's the code of this view's superview:
@implementation ChessBoard

-(id)initWithFrame:(CGRect)frame
{
    if (self != [super initWithFrame:frame])
        return self;

    int SQUARE_SIZE = frame.size.width / 8;
    currentSet = BW;

    UIImage *img = [UIImage imageNamed:@"board.png"];
    background = [[UIImageView alloc] initWithImage:img];
    [background setFrame:frame];
    [self addSubview:background];

    overlay = [[ChessBoardOverlay alloc] initWithFrame:frame];
    [self addSubview:overlay];
In the function CGContextSetRGBStrokeColor, the color component values (red, green, blue, alpha) need to be between 0.0 and 1.0. Since you are assuming that 255 is the maximum for a color component, you need to divide each of your component values by 255.0f.
CGContextSetRGBStrokeColor(context, 255.0f/255.0f, 0.0f/255.0f, 0.0f/255.0f, 1.0f);
You say you're drawing to a layer... but a CALayer doesn't have a drawRect: method; it has drawInContext:.
My guess is that by implementing drawRect: on your CALayer instance, the real context hasn't been set up correctly, or it has but you're not tying into it.
Maybe you should specify your color this way:
CGContextSetRGBStrokeColor(context, 255.0f/255.0f, 0.0f/255.0f, 0.0f/255.0f, 1.0f);
This happened to me once using:
[UIColor colorWithRed:12.0f/255.0f green:78.0f/255.0f blue:149.0f/255.0f alpha:1.0f];
until I added the /255.0f, so that may work.
Related
I am making a project where, on touch-moved, I want to change the alpha or visibility (whatever is possible) of the image at the point where it is touched. Let's say, un-hide that particular point of the image.
P.S. I already know how to erase the image; I am looking for how to un-erase.
I think this project does exactly what you're looking for: https://github.com/SebastienThiebaud/STScratchView
You have two images: one is the hidden image, and the other is the one that you scratch off.
Or even this project: http://www.cocoacontrols.com/platforms/ios/controls/scratch-n-see
I'm sorry for the poor formatting of the code; it was copied from my blog, and it takes me a while to edit.
1. To hide the part that you touch:
When your finger moves, erase some parts:
- (UIImage *)erasePartsMoved {
    UIImage *image = nil;
    UIColor *color = [UIColor whiteColor];
    CGFloat width = 5.0f;
    CGSize imageContextSize = self.imageView.frame.size;

    UIGraphicsBeginImageContext(imageContextSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, width);
    CGContextSetStrokeColorWithColor(context, [color CGColor]);

    CGContextMoveToPoint(context, self.touchStartPoint.x, self.touchStartPoint.y);
    // Do something needed.
    CGContextAddLineToPoint(context, self.touchEndPoint.x, self.touchEndPoint.y);
    CGContextStrokePath(context);

    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
2. Your image is not visible at first (for example, it is transparent).
Get the touched point, calculate a circle around it, and finally change the alpha of the pixels inside the circle:
typedef struct _singleRGBA {
    unsigned char red;
    unsigned char green;
    unsigned char blue;
    unsigned char alpha;
} RGBA;
To change the pixel's alpha:
void filterOpacity(UInt8 *pixelBuf, UInt32 offset, void *context) {
    double val = *((double *)context);
    int a = offset + 3;
    int alpha = pixelBuf[a];
    pixelBuf[a] = SAFECOLOR(alpha * val);
}
I hope the above helps.
Funny question; I would like to put together a runnable demo later.
I believe alpha is a property of the entire view, so all you can do is draw at that particular point... refer to this; maybe it is what you are trying to achieve.
- (void)viewDidLoad
{
    // load super view
    [super viewDidLoad];

    // to decrease alpha
    UISwipeGestureRecognizer *swiperight = [[UISwipeGestureRecognizer alloc]
        initWithTarget:self action:@selector(setalpha_decrease:)];
    [swiperight setDirection:UISwipeGestureRecognizerDirectionRight];
    [self.view addGestureRecognizer:swiperight];

    // to increase alpha
    UISwipeGestureRecognizer *swipeleft = [[UISwipeGestureRecognizer alloc]
        initWithTarget:self action:@selector(setalpha_increase:)];
    [swipeleft setDirection:UISwipeGestureRecognizerDirectionLeft];
    [self.view addGestureRecognizer:swipeleft];
}

-(void) setalpha_decrease:(UISwipeGestureRecognizer *)gestureRecognizer {
    imgview.alpha = imgview.alpha - (imgview.alpha / 10);
}

-(void) setalpha_increase:(UISwipeGestureRecognizer *)gestureRecognizer {
    imgview.alpha = imgview.alpha + (imgview.alpha / 10);
}
Why don't you put an empty view on top, filled with a colour or texture, and then, using @AnkitSrivastava's answer, let the touch rub away parts of the view's background, revealing the hidden image behind?
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [[touches allObjects] objectAtIndex:0];
    float alpha = [t locationInView:self.view].x / self.view.frame.size.width;
    _img.alpha = alpha;
}
I currently have the following code to let the user draw a dotted path and make a custom shape. Once they have made this shape, I want it to be filled with colour automatically. That isn't happening.
At the moment I am getting this error from the code below:
<Error>: CGContextClosePath: no current point.
Here's the code I'm using:
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint previous = [touch previousLocationInView:self];
    CGPoint current = [touch locationInView:self];

#define SQR(x) ((x)*(x))

    // Check for a minimal distance to avoid silly data
    if ((SQR(current.x - self.previousPoint2.x) + SQR(current.y - self.previousPoint2.y)) > SQR(10))
    {
        float dashPhase = 5.0;
        float dashLengths[] = {10, 10};
        CGContextSetLineDash(context, dashPhase, dashLengths, 2);
        CGContextSetFillColorWithColor(context, [[UIColor lightGrayColor] CGColor]);
        CGContextFillPath(context);
        CGContextSetLineWidth(context, 2);
        CGFloat gray[4] = {0.5f, 0.5f, 0.5f, 1.0f};
        CGContextSetStrokeColor(context, gray);

        self.brushSize = 5;
        self.brushColor = [UIColor lightGrayColor];

        self.previousPoint2 = self.previousPoint1;
        self.previousPoint1 = previous;
        self.currentPoint = current;

        // calculate mid points
        self.mid1 = [self pointBetween:self.previousPoint1 andPoint:self.previousPoint2];
        self.mid2 = [self pointBetween:self.currentPoint andPoint:self.previousPoint1];

        if (self.paths.count == 0)
        {
            UIBezierPath *newPath = [UIBezierPath bezierPath];
            CGContextBeginPath(context);
            [newPath moveToPoint:self.mid1];
            [newPath addLineToPoint:self.mid2];
            [self.paths addObject:newPath];
            CGContextClosePath(context);
        }
        else
        {
            UIBezierPath *lastPath = [self.paths lastObject];
            CGContextBeginPath(context);
            [lastPath addLineToPoint:self.mid2];
            [self.paths replaceObjectAtIndex:[self.paths indexOfObject:[self.paths lastObject]] withObject:lastPath];
            CGContextClosePath(context);
        }

        // Save
        [self.pathColors addObject:self.brushColor];
        self.needsToRedraw = YES;
        [self setNeedsDisplayInRect:[self dirtyRect]];
        //[self setNeedsDisplay];
    }
}
Why is this happening and why is the inside of the path not being filled with colour?
Your code has a few issues:
You should be doing your drawing in your view's drawRect: method, not in the touch handler.
You never set the variable context to the current context. Do that with the UIGraphicsGetCurrentContext() function, again within your drawRect: method.
You go through the trouble of creating a UIBezierPath object, but you never use it. Do that by calling CGContextAddPath( context, newPath.CGPath ), changing the variable name as needed in the two places you appear to be using a UIBezierPath.
Keep the call to setNeedsDisplayInRect: in your touch handler method. That tells the system to update your view using the drawing that your (yet to be implemented) drawRect: method produces.
My problem: the color of the filled square goes from black straight to full blue (255) with no transition colors (darkest blue, dark blue, blue...). It seems that CGContextSetRGBFillColor is additive: if the blue was 14 and on the next update I set 140, the blue becomes 154 and not 140 as I set.
Has anyone seen this problem?
In the .h:
    CGFloat angle;
    int width;
    int height;
    NSTimer *timer;
    CGPoint touch;
In the .m:
- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame]))
    {
        angle = 0;
        width = frame.size.width;
        height = frame.size.height;
        //self.backgroundColor = [UIColor colorWithRed:0.1 green:0.1 blue:1 alpha:1];
        timer = [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(update:) userInfo:nil repeats:YES];
    }
    return self;
}

-(void)update:(NSTimer *)theTimer
{
    [self setNeedsDisplay];
}

NS_INLINE CGFloat radians(CGFloat ang)
{
    return ang * (M_PI / 180.0f);
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    //CGContextSetRGBFillColor(ctx, -1, -1, -1, -1);
    CGContextSaveGState(ctx);
    CGRect ourRect = CGRectMake(40 + angle, 40 + angle, 240, 120);
    CGFloat colour = (touch.y / 768) * 255;
    NSLog(@"draw %f", colour);
    CGContextSetRGBFillColor(ctx, 0.0, 0.0, colour, 1);
    CGContextClearRect(ctx, rect);
    CGContextFillRect(ctx, ourRect);
    CGContextSetRGBStrokeColor(ctx, 0.0, 1.0, 0.0, 1.0);
    CGContextStrokeRectWithWidth(ctx, ourRect, 10);
    CGContextRestoreGState(ctx);
    angle += 1;
}

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    touch = [[touches anyObject] locationInView:self];
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    touch = [[touches anyObject] locationInView:self];
}
Just a quick answer, as I noticed the following lines:
CGFloat colour = (touch.y / 768) * 255;
CGContextSetRGBFillColor(ctx, 0.0, 0.0, colour, 1);
You should specify the color components as CGFloat values ranging from 0.0f to 1.0f. Could it be that your blue component is in "0 to 255 mode"?
Edit:
Judging by your code, I think you can leave out the multiplication by 255 in the color calculation. When y is 0 you'll get 0.0f as the blue component, and when y is 768 it'll be 1.0f.
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    currentPoint = [touch locationInView:self.view];

    rootLayer = [CALayer layer];
    rootLayer.frame = self.view.bounds;
    [self.view.layer addSublayer:rootLayer];

    starPath = CGPathCreateMutable();
    CGPathMoveToPoint(starPath, NULL, currentPoint.x, currentPoint.y + 15.0);
    for (int i = 1; i < 5; ++i)
    {
        CGFloat x = 15.0 * sinf(i * 4.0 * M_PI / 5.0);
        CGFloat y = 15.0 * cosf(i * 4.0 * M_PI / 5.0);
        CGPathAddLineToPoint(starPath, NULL, currentPoint.x + x, currentPoint.y + y);
    }
    CGPathCloseSubpath(starPath);

    shapeLayer = [CAShapeLayer layer];
    shapeLayer.path = starPath;
    UIColor *fillColor = [UIColor colorWithWhite:0.9 alpha:1.0];
    shapeLayer.fillColor = fillColor.CGColor;
    [rootLayer addSublayer:shapeLayer];
}

- (void)dealloc {
    [imageView release];
    CGPathRelease(starPath);
    [super dealloc];
}
When I run this with the Leaks performance tool, it consumes more and more memory as I move my finger.
What should I do?
I need to draw this star shape on touches-moved on the layer, so I can perform animations on it later.
You're releasing the path in dealloc, but creating it (potentially) many times inside touchesMoved:. You can safely release this resource at the end of touchesMoved:; please consider doing that.
Also, after you've made that change, you can remove the release from dealloc, since you will never have to release it there any longer. Your path doesn't exist outside the method.
I'm creating a simple drawing application and would like some help with drawing rectangles based on the user's touch events. I'm able to draw a rectangle using Core Graphics from the point in touchesBegan to the point in touchesEnded. This rectangle is then displayed once the touchesEnded event is fired.
However, I would like to be able to do this in a 'live' fashion. Meaning, the rectangle is updated and displayed as the user drags their finger. Is this possible? Any help would be greatly appreciated, thank you!
UPDATE:
I was able to get this to work using the following code. It works perfectly fine for small images, but it gets very slow for large images. I realize my solution is hugely inefficient, and I was wondering if anyone could suggest a better one. Thanks.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    _firstPoint = [touch locationInView:self.view];
    _firstPoint.y -= 40;
    _lastPoint = _firstPoint;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self.view];
    currentPoint.y -= 40;
    [self drawSquareFrom:_firstPoint to:currentPoint];
    _lastPoint = currentPoint;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self.view];
    currentPoint.y -= 40;
    [self drawSquareFrom:_firstPoint to:currentPoint];
    [_modifiedImage release];
    _modifiedImage = [[UIImage alloc] initWithCGImage:_imageView.image.CGImage];
}

- (void)drawSquareFrom:(CGPoint)firstPoint to:(CGPoint)lastPoint
{
    _imageView.image = _modifiedImage;
    CGFloat width = lastPoint.x - firstPoint.x;
    CGFloat height = lastPoint.y - firstPoint.y;
    _path = [UIBezierPath bezierPathWithRect:CGRectMake(firstPoint.x, firstPoint.y, width, height)];

    UIGraphicsBeginImageContext(_originalImage.size);
    [_imageView.image drawInRect:CGRectMake(0, 0, _originalImage.size.width, _originalImage.size.height)];
    _path.lineWidth = 10;
    [[UIColor redColor] setStroke];
    [[UIColor clearColor] setFill];
    [_path fill];
    [_path stroke];
    _imageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
What you are doing right now is creating a paired CGImage and CGBitmapContext for every call to drawSquareFrom:to: and disposing of them each time. Instead, create a single paired CGImage and CGBitmapContext that you reuse across calls.
-(void) drawSquareFrom:(CGPoint)from to:(CGPoint)to {
    CGContextRef context = [self offscreenContext];
    CGRect draw, full = [self offscreenContextBounds];

    draw.origin = from;
    draw.size.width = to.x - from.x;
    draw.size.height = to.y - from.y;
    draw = CGRectStandardize( draw );

    [[UIColor redColor] setStroke];
    [[UIColor clearColor] setFill];

    CGContextClearRect( context , full );
    CGContextFillRect( context , draw );
    CGContextStrokeRectWithWidth( context , draw , 10 );

    [_imageView setNeedsDisplay];
}
You create a paired CGImage and CGContext like this, although there are some ... to fill in. All the myX variables are intended to be class members.
-(CGContextRef) offscreenContext {
    if ( nil == myContext ) {
        size_t width = 400;
        size_t height = 300;
        CFMutableDataRef data = CFDataCreateMutable( NULL , width * height * 4 ); // 4 is bytes per pixel
        CGDataProviderRef provider = CGDataProviderCreateWithCFData( data );
        CGImageRef image = CGImageCreate( width , height , ... , provider , ... );
        CGContextRef context = CGBitmapContextCreate( CFDataGetMutableBytePtr( data ) , width , height , ... );
        CFRelease( data ); // retained by provider I think
        CGDataProviderRelease( provider ); // retained by image

        myImage = image;
        myContext = context;
        myContextFrame.origin = CGPointZero;
        myContextFrame.size.width = width;
        myContextFrame.size.height = height;
        _imageView.image = [UIImage imageWithCGImage:image];
    }
    return myContext;
}

-(CGRect) offscreenContextBounds {
    return myContextFrame;
}
This sets _imageView.image to the newly created image. In drawSquareFrom:to: I assume that setNeedsDisplay is sufficient to draw the changes made, but it may be that you need to assign a new UIImage each time that wraps the same CGImage.
Just use two contexts.
In one, you only draw a "preview" of the rectangle, and you can clear it on touchesMoved.
On touchesEnded you finally draw into your original context, and then into the image.