UIImage collision effect on iPhone

I need a way to create a collision effect without an actual collision between the two images. Let me explain in more detail: I have one fixed image and one moving image; both images are on the same x coordinate at different positions. Basically, I want the images to appear to collide. I would want to run a check on the moving image like this:
if (front of moving image is clear) {
    [move forward];
} else {
    [stop];
}
How would I implement this in code, so that right before the moving image reaches the fixed image it stops, and the two appear to have collided? This check would also be running on a 1/60-second NSTimer.
Any advice is welcome. Thank you.

Assuming the object is moving in from the left:
#define PIXELS_PER_FRAME 1

// Right-hand edge of a view (its leading edge while moving right)
- (CGFloat)getCurrentX:(UIImageView *)imageView
{
    return imageView.frame.origin.x + imageView.frame.size.width;
}

- (void)moveImageToStableImage
{
    CGFloat xTarget = stableImage.frame.origin.x;
    if ([self getCurrentX:movingImage] < xTarget)
    {
        // frame is a struct property, so modify a copy and assign it back
        CGRect frame = movingImage.frame;
        frame.origin.x += PIXELS_PER_FRAME;
        movingImage.frame = frame;

        [self performSelector:@selector(moveImageToStableImage) withObject:nil afterDelay:1.0/60];
    }
}
But truth be told, in this situation you would probably be better off just using an animation.
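A minimal sketch of the animation approach, assuming the two views are UIImageViews named movingImage and stableImage as above; the duration is arbitrary:
// Animate the moving view until its right edge meets the fixed view's left edge,
// then react in the completion block if needed.
CGRect target = movingImage.frame;
target.origin.x = stableImage.frame.origin.x - movingImage.frame.size.width;

[UIView animateWithDuration:0.5
                      delay:0
                    options:UIViewAnimationOptionCurveLinear
                 animations:^{
                     movingImage.frame = target;
                 }
                 completion:^(BOOL finished) {
                     // The views now appear to be touching; trigger any "collision" response here.
                 }];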

Related

Implementing powerUps to change entire game state in cocos2d?

I asked a question recently that sort of involved implementing power-ups; however, I have decided that I want to go about it a different way.
In my game, I have an endless scrolling background and the main character moving to the right while collecting coins.
When the player hits the blue coin (the power-up), I want (1) the character to change to a blue color (I have the frames for this), (2) the background to go blue, and (3) the platform to change to blue (I have images for this); I would like all of these effects to last for a 20-second period.
I planned to do this by having a Blue class with a public blue instance variable that I would set to YES and NO accordingly (depending on whether a blue coin has been hit) from my other classes (the Platform class and the Player class). However, this is not efficient and does not work once I incorporate a timer.
Does anyone have an idea on how to implement the power-up?
This is my code for when the blue coin is hit by the player:
// try to remove a blue coin the player has hit
- (void)tryRemoveBluecoin
{
    NSMutableArray *currentBluecoinArray = [self getcurrentBluecoinArr];
    if (currentBluecoinArray)
    {
        int playerY = ((CCLayer *)(self.player)).position.y;
        for (int x = 0; x < [currentBluecoinArray count]; x++)
        {
            CCSprite *bluecoin = [currentBluecoinArray objectAtIndex:x];
            // check horizontal then vertical proximity to the player
            if (abs(bluecoin.position.x + bluecoin.parent.position.x - [Player initX]) < 50)
            {
                if (abs(bluecoin.position.y + bluecoin.parent.position.y - playerY) < 30 && bluecoin.visible)
                {
                    [bluecoin.parent removeChild:bluecoin cleanup:YES];
                    [[SimpleAudioEngine sharedEngine] playEffect:@"jump.wav" pitch:1 pan:0 gain:1];
                    // SET BLUE VARIABLE TO YES
                    NSLog(@"BEGIN BLUE POWER UP EFFECTS FOR 20 SECONDS");
                }
            }
        }
    }
    [self hitTestOB];
}
Thanks for any ideas you have!
Now, it has been some time since I last used Cocos2d, but you are aware that you can set color information directly on nodes, right? That sounds like a much more sensible way to go. Design your sprites with this in mind and keep an array of all the elements you need to color.
Have a method that decides the new color based on your game state; the game-state enum values can be matched to an NSInteger property on your power-ups, for instance:
ccColor3B color;
// _state is an NSInteger ivar; the different states are defined in an enum
switch (_state) {
    case gameStateBlue:
        color = ccc3(0, 0, 255);
        break;
    case gameStateGreen:
        color = ccc3(0, 255, 0);
        break;
    // etc.
    default:
        break;
}
Then send this color information to a method that updates your array of sprites, like this:
for (CCSprite *sprite in _arrayOfSpritesToChangeColor) {
    sprite.color = color;
}
This demands planning ahead with your assets, but it gives you a lot more flexibility down the line if you want to experiment with different colors and effects. It is also a lot less taxing, since you won't need to swap a bunch of assets to achieve what you want. Now, my Cocos2d is rather rusty, so I might have messed up some details, but the general idea should be sound.
Edit: An alternative to holding references to the sprites in an array is to have your own sprite subclass and give that subclass a CCSprite colorSprite property. Then you could loop through the children of your scene and change only the sprites that have this property.
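As for the 20-second duration the question asks about, here is a minimal sketch of one way to time the revert. It uses plain performSelector scheduling rather than any particular Cocos2d scheduler, and the names applyColorForState: and gameStateNormal are placeholders for whatever you actually have:
// When the blue coin is hit: switch state, recolor, and schedule the revert.
- (void)activateBluePowerUp
{
    _state = gameStateBlue;
    [self applyColorForState:_state];   // hypothetical method that runs the switch/loop above

    // Cancel any pending revert so back-to-back coins simply extend the effect.
    [NSObject cancelPreviousPerformRequestsWithTarget:self
                                             selector:@selector(deactivateBluePowerUp)
                                               object:nil];
    [self performSelector:@selector(deactivateBluePowerUp) withObject:nil afterDelay:20.0];
}

- (void)deactivateBluePowerUp
{
    _state = gameStateNormal;           // hypothetical "no power-up" state
    [self applyColorForState:_state];
}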

Drawrect with CGBitmapContext is too slow

So I've got a basic drawing app in progress that allows me to draw lines. I draw to an off-screen bitmap and then present the image in drawRect. It works, but it's way too slow, updating about half a second after you've drawn with your finger. I adapted the code from this tutorial, http://www.youtube.com/watch?v=UfWeMIL-Nu8&feature=relmfu ; as you can see in the comments, people are also saying it's too slow, but the author hasn't responded.
So how can I speed it up? Or is there a better way to do it? Any pointers will be appreciated.
Here's the code in my DrawView.m:
- (id)initWithCoder:(NSCoder *)aDecoder {
    if ((self = [super initWithCoder:aDecoder])) {
        [self setUpBuffer];
    }
    return self;
}

- (void)setUpBuffer {
    CGContextRelease(offscreenBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    offscreenBuffer = CGBitmapContextCreate(NULL, self.bounds.size.width, self.bounds.size.height,
                                            8, self.bounds.size.width * 4, colorSpace,
                                            kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Flip the context so it matches UIKit's coordinate system
    CGContextTranslateCTM(offscreenBuffer, 0, self.bounds.size.height);
    CGContextScaleCTM(offscreenBuffer, 1.0, -1.0);
}

- (void)drawToBuffer:(CGPoint)coordA :(CGPoint)coordB :(UIColor *)penColor :(int)thickness {
    CGContextBeginPath(offscreenBuffer);
    CGContextMoveToPoint(offscreenBuffer, coordA.x, coordA.y);
    CGContextAddLineToPoint(offscreenBuffer, coordB.x, coordB.y);
    CGContextSetLineWidth(offscreenBuffer, thickness);
    CGContextSetLineCap(offscreenBuffer, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(offscreenBuffer, [penColor CGColor]);
    CGContextStrokePath(offscreenBuffer);
}

- (void)drawRect:(CGRect)rect {
    CGImageRef cgImage = CGBitmapContextCreateImage(offscreenBuffer);
    UIImage *image = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    [image drawInRect:self.bounds];
}
It works perfectly on the simulator but not on the device; I imagine that's something to do with processor speed.
I'm using ARC.
I tried to fix your code; however, as you only seem to have posted half of it, I couldn't get it working (copy-pasting it produces lots of errors, before even getting to performance tuning).
However there are some tips you can use to VASTLY improve performance.
The first, and probably most noticeable, is to use -setNeedsDisplayInRect: rather than -setNeedsDisplay. This means only the little rect that changed gets redrawn. For an iPad 3 with 1024*768*4 pixels, that is a lot of work; reducing it to roughly a 20*20 patch per frame will massively improve performance.
// Dirty rect covering just the newly drawn segment (plus the stroke width)
CGRect rect;
rect.origin.x    = MIN(coordA.x, coordB.x) - (thickness * 0.5);
rect.size.width  = (MAX(coordA.x, coordB.x) + (thickness * 0.5)) - rect.origin.x;
rect.origin.y    = MIN(coordA.y, coordB.y) - (thickness * 0.5);
rect.size.height = (MAX(coordA.y, coordB.y) + (thickness * 0.5)) - rect.origin.y;
[self setNeedsDisplayInRect:rect];
Another big improvement you could make is to draw only the CGPath for the current touch (which you do). However, you then draw that saved/cached image in drawRect:, so, again, it is redrawn every frame. A better approach is to make the draw view transparent and put a UIImageView behind it; UIImageView is the best way to display images on iOS.
- DrawView (the current finger stroke)
    - drawRect:
- BackgroundView (the image of the old touches)
    - self.image
The draw view itself then only ever draws the current touch, and only the part that changes each time. When the user lifts their finger, you cache the stroke into a UIImage, draw it over the background UIImageView's current image, and set imageView.image to the new image.
That final step of combining the images involves drawing two full-screen images into an off-screen CGContext, so it will cause lag if done on the main thread; instead, it should be done on a background thread and the result pushed back to the main thread.
* touch starts *
- DrawView: draw the current touch
* touch ends *
- background thread: combine backgroundView.image and DrawView's drawing
* thread finishes *
- send the resulting UIImage to the main queue and set backgroundView.image to it
- clear DrawView's current path, which is now in the cache
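A minimal sketch of that combine step, assuming a backgroundView UIImageView and a currentPath UIBezierPath holding the stroke that just ended (both names are placeholders for whatever you have):
UIImage *oldImage = backgroundView.image;
UIBezierPath *finishedPath = [currentPath copy];
CGSize size = self.bounds.size;

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Render the old image plus the new stroke into one bitmap, off the main thread
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [oldImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    [[UIColor blackColor] setStroke];
    [finishedPath stroke];
    UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    dispatch_async(dispatch_get_main_queue(), ^{
        backgroundView.image = combined;
        [currentPath removeAllPoints];   // the stroke now lives in the cached image
        [self setNeedsDisplay];
    });
});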
All of this combined can make a very smooth 60fps drawing app. However, updates don't come as quickly as we'd like, so the drawing looks jagged when the finger moves quickly. This can be improved by using UIBezierPath curves instead of straight CGPath segments.
CGPoint currentPoint = [touch locationInView:self];
CGPoint lastPoint = [touch previousLocationInView:self];
CGPoint mid = midPoint(currentPoint, lastPoint);
[path addQuadCurveToPoint:mid controlPoint:lastPoint];   // path is your UIBezierPath
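midPoint is not a built-in function; a small helper like this (the name is assumed) is all it needs:
// Average of two points, used as the quad curve's end point
static CGPoint midPoint(CGPoint a, CGPoint b)
{
    return CGPointMake((a.x + b.x) * 0.5, (a.y + b.y) * 0.5);
}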
The reason it is slow is that every frame you are creating a bitmap and trying to draw the whole thing.
You asked for better ways of doing it? Have you looked at Apple's sample code for a drawing app on iOS? If you don't like that, you can always use Cocos2d, which provides a CCRenderTexture class (and sample code).
Currently, you are using a method which you already know is not efficient.
With this approach, I suppose you should consider using a background thread for all the hard work of image rendering and the main thread for UI updates only, i.e.:
// Image buffer shared between the background render and drawRect:
static UIImage *__imageBuffer = nil;

- (UIImage *)drawSomeImage
{
    UIGraphicsBeginImageContext(self.bounds.size);
    // draw the image with Core Graphics here
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

- (void)updateUI
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // prepare the image on a background thread
        __imageBuffer = [self drawSomeImage];
        dispatch_async(dispatch_get_main_queue(), ^{
            // trigger drawRect: with the prepared image
            [self setNeedsDisplay];
        });
    });
}

- (void)drawRect:(CGRect)rect
{
    // draw the prepared image buffer into the current context
    [__imageBuffer drawInRect:self.bounds];
}
I am omitting some details to make the optimization clearer. Even better would be to switch to UIImageView: that way you could get rid of the performance-critical -drawRect: method entirely and simply update the UIImageView's image property when the image is ready.
Well, I think you need to change your logic. You may get some very good ideas from this link:
http://devmag.org.za/2011/04/05/bzier-curves-a-tutorial/
And if you don't have time to work through it, you can go directly to this code: https://github.com/levinunnink/Smooth-Line-View :) I hope this helps you a lot.
Use a CGLayer for caching your paths; read the docs, it's good for optimization.
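A minimal sketch of the CGLayer idea, assuming the same off-screen drawing setup as the question; destinationContext, pathLayer, lastPoint and currentPoint are placeholder names:
// Create the layer once, sized to the view, using the destination context as a template
CGLayerRef pathLayer = CGLayerCreateWithContext(destinationContext, self.bounds.size, NULL);

// Stroke new segments into the layer's own context as touches arrive
CGContextRef layerContext = CGLayerGetContext(pathLayer);
CGContextSetLineCap(layerContext, kCGLineCapRound);
CGContextSetLineWidth(layerContext, 4.0);
CGContextMoveToPoint(layerContext, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(layerContext, currentPoint.x, currentPoint.y);
CGContextStrokePath(layerContext);

// In drawRect:, composite the cached layer in one call instead of replaying every path
CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), self.bounds, pathLayer);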
I did something exactly like this. Check out the Pixelate app on the App Store. In order to draw, I used tiles in my code; after all, when you touch the screen and draw something, you otherwise have to redraw the entire image, which is a very heavy operation. If you like the way Pixelate performs, here's how I did it:
1) Split the image into n x m tiles, so that I could change those values and obtain bigger or smaller tiles. In the worst-case scenario (the user taps at the intersection of 4 tiles) you have to redraw those 4 tiles, not the entire image.
2) Make a three-dimensional matrix storing the pixel information of each tile. So matrix[0][0][0] was the red value (each pixel has an RGB or RGBA value, depending on whether you are using PNGs or JPGs) of the first pixel of the first tile.
3) Get the location the user pressed and calculate which tiles need to be modified (a sketch of this step follows the list).
4) Modify the values in the matrix and redraw only the tiles that need updating.
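A minimal sketch of step 3, assuming the tile grid covers the whole view; numColumns, numRows and touchPoint are made-up names for illustration:
// Given a touch point, work out which tile column/row it falls in.
CGSize tileSize = CGSizeMake(self.bounds.size.width / numColumns,
                             self.bounds.size.height / numRows);
int column = (int)(touchPoint.x / tileSize.width);
int row    = (int)(touchPoint.y / tileSize.height);

// Clamp to the grid in case the touch lands exactly on the far edge.
column = MIN(MAX(column, 0), numColumns - 1);
row    = MIN(MAX(row, 0), numRows - 1);

// Mark just that tile (and, near a boundary, its neighbours) as dirty.
CGRect dirtyTile = CGRectMake(column * tileSize.width, row * tileSize.height,
                              tileSize.width, tileSize.height);
[self setNeedsDisplayInRect:dirtyTile];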
NOTE: This most certainly isn't the best option; it's just an alternative. I mention it because I think it is close to what you have right now, and it worked for me on an iPhone 3GS. If you are targeting an iPhone 4 or later, you should be more than OK.
Regards,
George
The method you've suggested is way too inefficient, because creating the image every time you move your finger is expensive.
If it's just paths that you need to draw, then keep a CGMutablePathRef as a member variable, and in drawRect: just stroke it using the CGPath functions. More importantly, when refreshing the view, call setNeedsDisplayInRect: passing only the area that you need to redraw. Hope it works for you; a rough sketch of the idea follows.
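A minimal sketch of that approach, assuming an ivar named _path in a UIView subclass under ARC; the stroke settings and the dirty-rect padding are arbitrary:
// Accumulate points into a single mutable path as touches arrive.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (_path == NULL) {
        _path = CGPathCreateMutable();
    }
    UITouch *touch = [touches anyObject];
    CGPoint previous = [touch previousLocationInView:self];
    CGPoint current  = [touch locationInView:self];

    if (CGPathIsEmpty(_path)) {
        CGPathMoveToPoint(_path, NULL, previous.x, previous.y);
    }
    CGPathAddLineToPoint(_path, NULL, current.x, current.y);

    // Only invalidate the small rect around the new segment.
    CGRect dirty = CGRectUnion(CGRectMake(previous.x, previous.y, 1, 1),
                               CGRectMake(current.x, current.y, 1, 1));
    [self setNeedsDisplayInRect:CGRectInset(dirty, -4, -4)];
}

- (void)drawRect:(CGRect)rect
{
    if (_path == NULL) {
        return;
    }
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, _path);
    CGContextSetLineWidth(context, 4.0);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextStrokePath(context);
}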

Allowing touch to pass through transparent images

I'm using a piece of code I found to pass touch events through transparent parts of a UIImageView. I'm subclassing UIImageView and adding this:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    unsigned char pixel[1] = {0};
    // 1x1 alpha-only bitmap; drawing the image offset by -point samples the alpha at `point`
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 1, NULL, kCGImageAlphaOnly);
    UIGraphicsPushContext(context);
    [self.image drawAtPoint:CGPointMake(-point.x, -point.y)];
    UIGraphicsPopContext();
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    BOOL transparent = alpha < 0.01;
    if (transparent) {
        return NO;
    } else {
        return YES;
    }
}
So far it's working great in my iPad version. However, in my iPhone version I set the image view frames to half of the image's actual size (self.image.frame.size.width/2). This code is interfering with my pan gestures in an odd way: if I touch the top left of the image I cannot trigger my pan gesture, but I can trigger it far to the bottom right, past the actual image into a transparent zone.
Removing the code returns the pan gesture to normal. However, I would still like to keep the ability to ignore touches on transparent parts. Does anyone know what part of this code is messing with my touch points, or any other reason it's acting like this?
Thanks
I know this question is very old, but I just had to use this on a project and my problem was that I was setting the imageView frame manually. What helped was changing the code so resizing was only done through transforms.
If the imageView is initially the same size as the image inside it, you can use this code and it will work with any transform applied to the imageView afterwards.
So in your iPhone version, you could set the frame of the imageView to the size of the image inside it and then apply a scaling transform to it so it's half the size.
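A minimal sketch of that setup, with imageView and image as placeholder names and 0.5 as an example scale factor:
// Size the image view to match its image 1:1 so the alpha sampling in
// pointInside:withEvent: lines up with the image's own coordinates...
imageView.frame = CGRectMake(0, 0, image.size.width, image.size.height);
imageView.image = image;

// ...then shrink it on screen with a transform instead of changing the frame.
imageView.transform = CGAffineTransformMakeScale(0.5, 0.5);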

ApplyLinearImpulse doesn't work when called within a UIPanGestureRecognizer handler

I am using Box2d and Cocos2d to develop an iPhone game. I have a ball in the center of the screen with a simple boundary created using fixtures etc. What I want to do is add the functionality so that a user can "swipe" the ball to apply force and basically flick it across the screen.
I am using the following code to achieve this:
b2Vec2 force = b2Vec2(30, 30);
_body->ApplyLinearImpulse(force, _ballBodyDef->position);
If I apply a linear impulse during the init function, the ball fires off in the correct direction and behaves as it should. It doesn't do anything, however, if I put it in the handlePan function that is called when the user completes a gesture. Here is the full code for the function (note that the NSLog writes out the correct information):
- (void)handlePanFrom:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        CGPoint velocity = [recognizer velocityInView:recognizer.view];
        NSLog(@"Vel: %f, newPos: %f", velocity.x, velocity.y);

        b2Vec2 force = b2Vec2(30, 30);
        _body->ApplyLinearImpulse(force, _ballBodyDef->position);
    }
}
The force you are applying in handlePanFrom is always b2Vec2(30, 30). It never changes with your current code, so the ball is never going to move in the direction of your swipe. Derive the impulse from the gesture's velocity instead.
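A minimal sketch of one way to do that, assuming PTM_RATIO is your points-to-metres constant and that the scale factor needs tuning; note that UIKit's y axis points down while Box2D's points up, hence the sign flip, and that depending on your Box2D version ApplyLinearImpulse may also take a wake flag as a third argument:
- (void)handlePanFrom:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        CGPoint velocity = [recognizer velocityInView:recognizer.view];

        // Scale the swipe velocity (points/second) down to a sensible impulse.
        float scale = 0.01f;                      // tune to taste / ball mass
        b2Vec2 impulse = b2Vec2(velocity.x * scale / PTM_RATIO,
                                -velocity.y * scale / PTM_RATIO);

        // Apply at the body's centre of mass so the ball doesn't spin unexpectedly.
        _body->ApplyLinearImpulse(impulse, _body->GetWorldCenter());
        _body->SetAwake(true);                    // make sure a sleeping body responds
    }
}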

Sample code for collision detection in the iPhone SDK

Can you please suggest sample code for collision detection between two image views?
Thanks in advance.
You can detect a collision between two images by making a rect for each image view.
Consider the image views to be named img_view1 and img_view2.
Creating the rects:
// For img_view1's rect
// parameters are x, y, width, height
CGRect image_rect1 = CGRectMake(img_view1.frame.origin.x, img_view1.frame.origin.y, 100, 100);

// For img_view2's rect
// parameters are x, y, width, height
CGRect image_rect2 = CGRectMake(img_view2.frame.origin.x, img_view2.frame.origin.y, 100, 100);
Collision detection:
if (CGRectIntersectsRect(image_rect1, image_rect2))
{
    NSLog(@"Rects are intersecting");
}
Nice answer @Anish; however, you don't really need to create a new CGRect for the views, as you can simply use their respective frame properties.
If you want to put that logic in a method it would look like this:
- (BOOL)viewsDoCollide:(UIView *)view1 :(UIView *)view2 {
    if (CGRectIntersectsRect(view1.frame, view2.frame))
    {
        return YES;
    }
    return NO;
}
Simply pass the two views you want to test into this method and check the result; a usage sketch follows.
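A minimal usage sketch, assuming two UIImageView ivars named img_view1 and img_view2 and a repeating timer driving the check (names and interval are illustrative):
// e.g. in viewDidLoad: poll for a collision roughly 60 times per second
- (void)viewDidLoad {
    [super viewDidLoad];
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0
                                     target:self
                                   selector:@selector(checkCollision)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)checkCollision {
    if ([self viewsDoCollide:img_view1 :img_view2]) {
        NSLog(@"The image views are intersecting");
    }
}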