I want to add a magnifier to my cocos2d game. Here is what I found online:
http://coffeeshopped.com/2010/03/a-simpler-magnifying-glass-loupe-view-for-the-iphone
I've changed the code a bit (since I don't want the loupe to follow the touch):
- (id)initWithFrame:(CGRect)frame {
if ((self = [super initWithFrame:magnifier_rect])) {
// make the circle-shape outline with a nice border.
self.layer.borderColor = [[UIColor lightGrayColor] CGColor];
self.layer.borderWidth = 3;
self.layer.cornerRadius = 250;
self.layer.masksToBounds = YES;
touchPoint = CGPointMake(CGRectGetMidX(magnifier_rect), CGRectGetMidY(magnifier_rect));
}
return self;
}
Then I want to add it in one of my scene's init methods:
loop = [[MagnifierView alloc] init];
[loop setNeedsDisplay];
loop.viewToMagnify = [CCDirector sharedDirector].openGLView;
[[CCDirector sharedDirector].openGLView.superview addSubview:loop];
But the result is: the area inside the loupe is black.
Also, this loupe magnifies everything with the same scale. How can I change it to magnify more near the center and less near the edge (just like a real magnifier)?
Thank you !!!
Here I assume that you want to magnify the center of the screen.
Adjust the position and radius dynamically to suit your app's needs.
CGSize size = [[CCDirector sharedDirector] winSize];
id lens = [CCLens3D actionWithPosition:ccp(size.width/2,size.height/2) radius:240 grid:ccg(15,10) duration:0.0f];
[self runAction:lens];
Cocos2d draws using OpenGL, not CoreAnimation/Quartz. The CALayer you are drawing is empty, so you see nothing. You will either have to use OpenGL graphics code to perform the loupe effect or sample the pixels and alter them appropriately to achieve the magnification effect, as was done in the Christmann article referenced from the article you linked to. That code also relies on CoreAnimation/Quartz, so you will need to work out another way to get your hands on the image data you wish to magnify.
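If you go the pixel-sampling route, one hedged sketch (the method name framebufferImageWithSize: is illustrative, not part of cocos2d) is to read the framebuffer back with glReadPixels and build a CGImage you can then draw, scaled, inside the loupe:
// Sketch: read the current OpenGL framebuffer into a CGImage.
// Assumes an RGBA8888 framebuffer; call it right after the frame is drawn.
// Requires <OpenGLES/ES1/gl.h> (or ES2/gl.h) plus CoreGraphics.
static void ReleasePixels(void *info, const void *data, size_t size)
{
    free((void *)data);
}

- (CGImageRef)framebufferImageWithSize:(CGSize)size
{
    size_t width  = (size_t)size.width;
    size_t height = (size_t)size.height;
    size_t length = width * height * 4;

    GLubyte *pixels = (GLubyte *)malloc(length);
    glReadPixels(0, 0, (GLsizei)width, (GLsizei)height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, length, ReleasePixels);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef image = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                     kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                     provider, NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    // Caller releases the CGImage. Note the pixels come out flipped vertically
    // relative to UIKit, so flip the image (or the context you draw it into).
    return image;
}
From there you could draw an enlarged portion of that image in the loupe's drawRect:; a real-magnifier (fisheye-style) effect would mean resampling with a radially varying scale rather than a single uniform one.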
I have a simple rotation gesture implemented in my code, but the problem is when I rotate the image it goes off the screen/out of the view always to the right.
The center X of the image view being rotated gets offset (it keeps increasing), hence the view drifting right, off the screen and out of the view.
I would like it to rotate around the current center, but it's changing for some reason. Any ideas what is causing this?
Code Below:
- (void)viewDidLoad
{
[super viewDidLoad];
CALayer *l = [self.viewCase layer];
[l setMasksToBounds:YES];
[l setCornerRadius:30.0];
self.imgUserPhoto.userInteractionEnabled = YES;
[self.imgUserPhoto setClipsToBounds:NO];
UIRotationGestureRecognizer *rotationRecognizer = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotationDetected:)];
[self.view addGestureRecognizer:rotationRecognizer];
rotationRecognizer.delegate = self;
}
- (void)rotationDetected:(UIRotationGestureRecognizer *)rotationRecognizer
{
CGFloat angle = rotationRecognizer.rotation;
self.imageView.transform = CGAffineTransformRotate(self.imageView.transform, angle);
rotationRecognizer.rotation = 0.0;
}
You want to rotate the image around its center, but that's not what is actually happening. Rotation transforms take place around the origin. So what you have to do is apply a translate transform first to map the origin to the center of the image, and then apply the rotation transform, like so:
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, self.imageView.bounds.size.width/2, self.imageView.bounds.size.height/2);
Please note that after rotating you'll probably have to undo the translate transform in order to correctly draw the image.
Hope this helps
Edit:
To quickly answer your question: to undo the translate transform you subtract the same offset you added in the first place, for example:
// The next line will add a translate transform
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, 10, 10);
self.imageView.transform = CGAffineTransformRotate(self.imageView.transform, radians);
// The next line will undo the translate transform
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, -10, -10);
However, after creating this quick project I realized that when you apply a rotation transform using UIKit (like the way you're apparently doing it) the rotation actually takes place around the center. It is only when using CoreGraphics that the rotation happens around the origin. So now I'm not sure why your image goes off the screen. Anyway, take a look at the project and see if any code there helps you.
Let me know if you have any more questions.
The 'Firefox' image is drawn using UIKit. The blue rect is drawn using CoreGraphics
You aren't rotating the image around its centre. You'll need to correct this manually by translating it back to the correct position.
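For the CoreGraphics case, where rotation really does happen around the origin, a minimal sketch of rotating about a rect's centre (rect and angle are assumed to be defined by the caller) looks like this:
// Sketch: rotate CoreGraphics drawing about the centre of rect, not the origin.
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, CGRectGetMidX(rect), CGRectGetMidY(rect));    // origin -> centre
CGContextRotateCTM(ctx, angle);                                          // rotate about the centre
CGContextTranslateCTM(ctx, -CGRectGetMidX(rect), -CGRectGetMidY(rect));  // move origin back
CGContextStrokeRect(ctx, rect);                                          // draw as usual
CGContextRestoreGState(ctx);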
I am creating an iPhone application using both UIKit and cocos2d. Now, in one of my ViewControllers, I am adding a HelloWorldLayer as a subview. Which is successfully added.
Now this layer is being added with a black background color. I want its background color to be clearColor; precisely, I want it to be transparent, so that I can see the contents of my ViewController behind everything except the contents of the HelloWorldLayer.
I know how to change the CCLayer background color. I am using ccc4(r, g, b, a) for it.
Here is my code:
-(id) init
{
// always call "super" init
// Apple recommends to re-assign "self" with the "super's" return value
if( (self=[super initWithColor:ccc4(0, 0, 0, 128)]) ) {
CGSize windowSize = [[CCDirector sharedDirector] winSize];
CCSprite *imgRoof;
imgRoof = [CCSprite spriteWithFile:@"Tops.png"];
imgRoof.position = ccp(windowSize.width/2,windowSize.height/2);
[self addChild:imgRoof];
CCAction* action = [CCBlink actionWithDuration:20 blinks:20];
[imgRoof runAction:action];
}
return self;
}
I just want to know the color code for clearColor for ccc4(). Can anyone help me, please? I am really stuck.
Thanks a lot in advance!!!
yourView.layer.backgroundColor=[[UIColor clearColor]CGColor];
will do the job for you.
Don't forget
#import <QuartzCore/QuartzCore.h>
EDIT
CCLayerColor* colorLayer = [CCLayerColor layerWithColor:ccc4(0, 0, 0, 128)];
[self addChild:colorLayer z:0];
The first three numbers are "RGB" colors and the last number is opacity. Each can have a value in range between 0 and 255.
Try This...
CCLayer *pauseLayer;
CGSize size = [[CCDirector sharedDirector] winSize];
pauseLayer = [CCLayerColor layerWithColor: ccc4(0, 0, 0, 128) width: size.width height: size.height];
pauseLayer.position = CGPointZero;
[self addChild: pauseLayer];
Example of using ccc4:
layerWithColor:ccc4(Red, Green, Blue, Opacity)
The first three numbers are "RGB" colors and the last number is opacity. Each can have a value in range between 0 and 255.
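So, for example, an alpha of 128 gives a half-transparent fill and an alpha of 0 a fully transparent one (whether UIKit content behind the OpenGL view actually shows through also depends on how the glView itself is configured, which these snippets don't cover):
// Half-transparent black overlay
CCLayerColor *dimLayer   = [CCLayerColor layerWithColor:ccc4(0, 0, 0, 128)];
// Alpha 0 = fully transparent fill
CCLayerColor *clearLayer = [CCLayerColor layerWithColor:ccc4(0, 0, 0, 0)];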
Try this, it works for me.
UIColor *color = [UIColor clearColor];
CGColorRef layerBackgroundColor = [color CGColor];
[subLayer setBackgroundColor:layerBackgroundColor];
In the end I didn't find a proper answer to this question. After reading many docs and searching Google for days, I found no proper solution and no color code for clearColor for ccc4, so I used a workaround in my application.
I set the same image that I use as the background of the ViewController (the one on which I add the HelloWorldLayer scene) as a background CCSprite in my HelloWorldLayer. That way the black background is never seen, and the user assumes that the ViewController behind is visible apart from the HelloWorldLayer contents.
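A rough sketch of that workaround (the file name is just a placeholder for whatever image the ViewController uses):
// Sketch: cover the layer's black fill with the same image the
// ViewController shows, so the two appear continuous.
CGSize windowSize = [[CCDirector sharedDirector] winSize];
CCSprite *background = [CCSprite spriteWithFile:@"Background.png"];  // placeholder name
background.position = ccp(windowSize.width / 2, windowSize.height / 2);
[self addChild:background z:-1];  // keep it behind the layer's other children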
This is a solution particularly for my application. For other situations, I can't say. If anyone finds any solution to this, please let me know.
Thanks!!!
This is a difficult problem to explain... but I'll do my best.
First, some background on the problem. Basically, I am creating a paint-like app for iOS and wanted to add functionality that lets the user select part of the image (multi-touch shows an opaque rectangle) and delete/copy-paste/rotate that part. I have the delete and copy-paste working perfectly, but the rotation is another story. To rotate the part of the image, I first copy that part and set it as the background of the selected rectangle layer; then the user rotates it by an arbitrary angle using a slider. The problem is that sometimes the image ends up being displayed from another corner of the rectangle (meaning the copied image hangs off the wrong corner of the rectangle). I thought this could be a problem with my rectangle.frame.origin, but that value seems to be correct through various tests. It also seems to change depending on the direction the drag goes in...
These are screenshots of the problem.
In each of the screenshots, the mismatched part of the image should be inside the grey rectangle; I am at a loss as to what the problem is.
bg = [[UIImageView alloc] initWithImage:[self crop:rectangle.frame:drawImage.image]];
[rectangle addSubview:bg];
drawImage is the user's drawing, and rectangle is the selected grey area.
crop is a method that returns part of a given image from a given rect.
I am also having trouble with pasting an arbitrarily rotated image. Any ideas on how to do that?
Edit: adding more code.
-(void)drawRect:(int)x1:(int)y1:(int)x2:(int)y2{
[rectangle removeFromSuperview];
rectangle = [[UIView alloc] initWithFrame:CGRectMake(x1, y1, x2-x1, y2-y1)];
rectangle.backgroundColor = [UIColor colorWithRed:0.9 green:0.9 blue:0.9 alpha:0.6];
selectionImage = drawImage.image;
drawImage.image = selectionImage;
[drawImage addSubview:rectangle];
rectangleVisible = true;
rectangle.transform = transformation;
Could it have anything to do with how I draw my rectangle (above)? I call this method from part of a touchesMoved method (below), which may cause the problem (touch 1 being in the wrong location may make the width negative?). If so, is there an easy way to remedy this?
if([[event allTouches] count] == 2 && !drawImage.hidden){
NSSet *allTouches = [event allTouches];
UITouch *touch1 = [[allTouches allObjects] objectAtIndex:0];
UITouch *touch2 = [[allTouches allObjects] objectAtIndex:1];
[self drawRect:[touch1 locationInView:drawImage].x :[touch1 locationInView:drawImage].y:
[touch2 locationInView:drawImage].x :[touch2 locationInView:drawImage].y];
}
I'm not sure if this is your problem, but it looks like you are just assuming that touch1 represents the upper left touch. I would start out by standardizing the rectangle.
// Standardizing the rectangle before making it the frame.
CGRect frame = CGRectStandardize(CGRectMake(x1, y1, x2-x1, y2-y1));
rectangle = [[UIView alloc] initWithFrame:frame];
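For example, if the second touch ends up above or to the left of the first, the raw rect has a negative width/height, and standardizing flips it back into the equivalent positive-size rect:
CGRect raw = CGRectMake(200, 300, -120, -80);   // "reversed" touches
CGRect std = CGRectStandardize(raw);            // {{80, 220}, {120, 80}}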
The task is to draw paths at runtime on custom maps that I display in a UIScrollView, and then to keep drawing paths whenever the location coordinates (lat, long) update. I have made a class 'graphics', a subclass of UIView, in which I do the drawing in its drawRect: method. When I add the graphics view as a subview of the scroll view, over the image, the line draws, but I need to keep drawing lines as though they were paths; that is, I need to keep updating the (x, y) points passed to CGContextStrokeLineSegments at runtime. The code:
ViewController:
- (void)loadView {
[[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationNone];
CGRect fullScreenRect=[[UIScreen mainScreen] applicationFrame];
scrollView=[[UIScrollView alloc] initWithFrame:fullScreenRect];
graph = [[graphics alloc] initWithFrame:fullScreenRect];
scrollView.contentSize=CGSizeMake(320,480);
UIImageView *tempImageView2 = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"fortuneCenter.png"]];
self.view=scrollView;
[scrollView addSubview:tempImageView2];
scrollView.userInteractionEnabled = YES;
scrollView.bounces = NO;
[scrollView addSubview:graph];
}
Graphics.m:
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self) {
// Initialization code
self.backgroundColor = [UIColor clearColor];
}
return self;
}
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGPoint point [2] = { CGPointMake(160, 100), CGPointMake(160,300)};
CGContextSetRGBStrokeColor(context, 1.0f, 0.0f, 1.0f, 1.0f); // RGBA components are in the 0.0-1.0 range
CGContextStrokeLineSegments(context, point, 2);
}
So how can I draw the lines at runtime? I'm just simulating right now, so I'm not using real-time data (coordinates); I just want to simulate with dummy (x, y) coordinates. Let's say I have a button: whenever I press it, it updates the coordinates so the path extends.
The easiest way would be to add an instance variable representing the points to the UIView subclass.
Then, every time the path changes, update the ivar appropriately and call -setNeedsDisplay or setNeedsDisplayInRect on the custom UIView (or even on its superview). The runtime will then redraw the new path.
You just need to make CGPoint point[] dynamically resizable, from the looks of it.
You can use malloc, a std::vector, or even NSMutableData to store the points you add. Then you pass that array to CGContextStrokeLineSegments.
If 2 points is all you will need, move CGPoint point[2] to an ivar so you may store the positions, then (as Rich noted) invalidate rects appropriately when these values (or the array) are changed.
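A minimal sketch combining those suggestions, using NSMutableData as the backing store and redrawing after every change (the addPoint: method name is just illustrative):
// In graphics.h: an ivar to hold a growing CGPoint array.
@interface graphics : UIView {
    NSMutableData *pointData;
}
- (void)addPoint:(CGPoint)p;
@end

// In graphics.m:
- (void)addPoint:(CGPoint)p
{
    if (pointData == nil) {
        pointData = [[NSMutableData alloc] init];
    }
    [pointData appendBytes:&p length:sizeof(CGPoint)];
    [self setNeedsDisplay];   // ask UIKit to redraw with the new point
}

- (void)drawRect:(CGRect)rect
{
    NSUInteger count = [pointData length] / sizeof(CGPoint);
    if (count < 2) return;    // nothing to stroke yet

    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0f, 0.0f, 1.0f, 1.0f);
    // Note: CGContextStrokeLineSegments pairs the points up (0-1, 2-3, ...);
    // for one continuous path use CGContextAddLines + CGContextStrokePath instead.
    CGContextStrokeLineSegments(context, (const CGPoint *)[pointData bytes], count);
}
Your button's action would then just call [graph addPoint:CGPointMake(x, y)] with the next dummy coordinate.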
This subject comes up every now and then, so I created a longer blog post on the general concepts involved with one potential solution, creating and using your own graphics context, here: http://www.musingpaw.com/2012/04/drawing-in-ios-apps.html
I'm running into problems when dealing with a large amount of UIButtons in my interface. I was wondering if anyone had first hand experience with this and how they did it?
When dealing with 30-80 buttons (most of them simple, a couple complex), do you just use UIButton, or do you do something different like drawRect:, responding to touch events and working out the coordinates of the touch yourself?
The best example is a calendar, similar to Apple's Calendar app. Would you draw most of the days using drawRect: and then, when a day is tapped, replace it with an image, or would you just use UIButtons? It's not so much the memory footprint or creating the buttons; it's just that strange things are happening with them sometimes (see my previous question about it) and I'm having performance issues animating them.
Thanks for any help.
If "strange things are happening" with your buttons, you need to get to the bottom of why. Switching architectures just to avoid a problem that you don't understand (and might crop up again) doesn't sound like a good idea.
-drawRect: works by drawing to a bitmap-backed context. This happens when -displayIfNeeded is called after -setNeedsDisplay (or doing something else that implicitly sets the needsDisplay flag, like resizing a view with contentMode = UIContentModeRedraw). The bitmap-backed context is then composited to screen.
Buttons work by putting the different components (background image, foreground image, text) in different layers. The text is drawn when it changes and composited to the screen; the images are just composited directly to the screen.
The "best" way to do things is usually a combination of the two. For example, you might draw text and a background image in -drawRect: so the different layers didn't need to be composited at render time (you get an additional speedup if your view is "opaque"). You probably want to avoid full-screen animations via drawRect: (and it won't integrate so well with CoreAnimation), since drawing tends to be more expensive than compositing.
But first, I'd find out what's going wrong with UIButton. There's little point worrying about how you could make things faster until you actually find out what the slow bits are. Write code so that it is easy to maintain. UIButton is not that expensive and -drawRect: is not that bad (presumably it's even better if you use -setNeedsDisplayInRect: for a smallish rect, but then you need to calculate the rect...), but if you want a button, use UIButton.
Instead of using 30-80 UIButtons I would prefer using images (if possible a single image, or as few as possible) and comparing the touch location.
And if I must create buttons, then obviously I will not create 30-80 variables for them. I will set and get the view tag to determine which one was tapped.
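A sketch of that approach: create the buttons in a loop, tag them, and route them all to one action (the method names here are just for illustration):
// Sketch: many buttons sharing one target/action, distinguished by tag.
- (void)buildCalendarButtons
{
    for (int i = 0; i < 42; i++) {                          // e.g. a 6x7 month grid
        UIButton *day = [UIButton buttonWithType:UIButtonTypeCustom];
        day.frame = CGRectMake((i % 7) * 44.0f, (i / 7) * 44.0f, 44.0f, 44.0f);
        day.tag = i;                                        // remember which cell this is
        [day addTarget:self
                action:@selector(dayTapped:)
      forControlEvents:UIControlEventTouchUpInside];
        [self.view addSubview:day];
    }
}

- (void)dayTapped:(UIButton *)sender
{
    // One handler for every button; branch on the tag to see which was hit.
    NSLog(@"tapped day cell %ld", (long)sender.tag);
}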
If this is all stuff you are animating then you could create a bunch of CALayers with their contents set to a CGImage. You would have to compare the touch location to identify the layer. CALayers have a useful style property that is an NSDictionary you can store meta-data in.
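A rough sketch of that idea (row, col, and dayNumber are assumed to come from your own model; the style-dictionary key is arbitrary):
// Sketch: one CALayer per tile, with metadata stashed in its style dictionary.
// Requires #import <QuartzCore/QuartzCore.h>.
CALayer *tile = [CALayer layer];
tile.frame = CGRectMake(col * 44.0f, row * 44.0f, 44.0f, 44.0f);
tile.contents = (id)[[UIImage imageNamed:@"tile.png"] CGImage];
tile.style = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:dayNumber]
                                         forKey:@"dayNumber"];
[self.view.layer addSublayer:tile];

// Later, e.g. in touchesBegan:withEvent:, find which tile was hit:
CGPoint p = [[touches anyObject] locationInView:self.view];
for (CALayer *candidate in self.view.layer.sublayers) {
    if (CGRectContainsPoint(candidate.frame, p)) {
        NSNumber *day = [candidate.style objectForKey:@"dayNumber"];
        NSLog(@"hit tile for day %@", day);
        break;
    }
}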
I just use the UIButtons unless there happens to be a specific performance issue that crops up. If they have similar functionality, however, such as a keyboard, I map them all to one IBAction and differentiate the behavior based on the sender.
What specific performance and animation issues are you running into?
I recently ran across this problem myself when developing a game for the iPhone. I was using UIButtons to hold game tiles, then stylized them with transparent images, background colors and text.
It all worked well for a small number of tiles. Once we got to about 50, however, the performance dropped significantly. After scouring Google I discovered that others had experienced the same problem. It seems the iPhone struggles with lots of transparent buttons onscreen at once. Not sure if it's a bug in the UIButton code or just a limitation of the graphics hardware on the device, but either way, it's beyond your control as a programmer.
My solution was to draw the board by hand using Core Graphics. It seemed daunting at first, but in reality it was pretty easy. I just placed one big UIImageView on my ViewController in Interface Builder, made it an IBOutlet so I could alter it from Objective-C, then constructed the image with Core Graphics.
Since a UIImageView doesn't handle taps, I used the touchesBegan method of my UIViewController, and then triangulated the x/y coordinates of the touch to the precise tile on my game board.
The board now renders in less than a tenth of a second. Bingo!
If you need sample code, just let me know.
UPDATE: Here's a simplified version of the code I'm using. Should be enough for you to get the gist.
// CoreGraphicsTestViewController.h
// CoreGraphicsTest
#import <UIKit/UIKit.h>
@interface CoreGraphicsTestViewController : UIViewController {
UIImageView *testImageView;
}
@property (retain, nonatomic) IBOutlet UIImageView *testImageView;
-(void) drawTile: (CGContextRef) ctx row: (int) rowNum col: (int) colNum isPressed: (BOOL) tilePressed;
@end
... and the .m file ...
// CoreGraphicsTestViewController.m
// CoreGraphicsTest
#import "CoreGraphicsTestViewController.h"
#import <QuartzCore/QuartzCore.h>
#import <CoreGraphics/CoreGraphics.h>
@implementation CoreGraphicsTestViewController
@synthesize testImageView;
int iTileSize;
int iBoardSize;
// File-scope so touchesBegan/touchesEnded can remember the last-pressed tile.
int iRow;
int iCol;
- (void)viewDidLoad {
iTileSize = 75;
iBoardSize = 3;
[testImageView setBounds: CGRectMake(0, 0, iBoardSize * iTileSize, iBoardSize * iTileSize)];
CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
for (iRow = 0; iRow < iBoardSize; iRow++) {
for (iCol = 0; iCol < iBoardSize; iCol++) {
[self drawTile: context row: iRow col: iCol isPressed: NO];
}
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
[testImageView setImage: image];
UIGraphicsEndImageContext();
[super viewDidLoad];
}
- (void)dealloc {
[testImageView release];
[super dealloc];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInView: testImageView];
if ((location.x >= 0) && (location.y >= 0) && (location.x <= testImageView.bounds.size.width) && (location.y <= testImageView.bounds.size.height)) {
UIImage *theIMG = testImageView.image;
CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[theIMG drawInRect: rect];
iRow = location.y / iTileSize;
iCol = location.x / iTileSize;
[self drawTile: context row: iRow col: iCol isPressed: YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
[testImageView setImage: image];
UIGraphicsEndImageContext();
}
}
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
UIImage *theIMG = testImageView.image;
CGRect rect = CGRectMake(0.0f, 0.0f, testImageView.bounds.size.width, testImageView.bounds.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[theIMG drawInRect: rect];
[self drawTile: context row: iRow col: iCol isPressed: NO];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
[testImageView setImage: image];
UIGraphicsEndImageContext();
}
-(void) drawTile: (CGContextRef) ctx row: (int) rowNum col: (int) colNum isPressed: (BOOL) tilePressed {
CGRect rrect = CGRectMake((colNum * iTileSize), (rowNum * iTileSize), iTileSize, iTileSize);
CGContextClearRect(ctx, rrect);
if (tilePressed) {
CGContextSetFillColorWithColor(ctx, [[UIColor redColor] CGColor]);
} else {
CGContextSetFillColorWithColor(ctx, [[UIColor greenColor] CGColor]);
}
UIImage *theImage = [UIImage imageNamed:@"tile.png"];
[theImage drawInRect: rrect];
}