UIImage not being put into subview correctly - iPhone

This is a difficult problem to explain, but I'll do my best.
First, some background on the problem. I am creating a paint-like app for iOS and wanted to add functionality that lets the user select part of the image (multi-touch shows an opaque rectangle) and delete/copy-paste/rotate that part. I have the delete and copy-paste working perfectly, but the rotation is another story. To rotate part of the image, I first copy that part and set it as the background of the selected rectangle layer; the user then rotates it by an arbitrary angle using a slider. The problem is that sometimes the image ends up being displayed from another location of the rectangle (meaning the copied image hangs off the wrong corner of the rectangle). I thought this could be a problem with my rectangle.frame.origin, but that value seems to be correct in various tests. It also seems to change depending on the direction of the drag...
These are screenshots of the problem:
In each of the above cases the mismatched part of the image should be inside the grey rectangle; I am at a loss as to what the problem is.
bg = [[UIImageView alloc] initWithImage:[self crop:rectangle.frame:drawImage.image]];
[rectangle addSubview:bg];
drawImage is the user's drawing, and rectangle is the selected grey area.
crop is a method which returns part of a given image from a given rect.
I am also having trouble with pasting an arbitrarily rotated image. Any ideas on how to do that?
Edit: adding more code.
-(void)drawRect:(int)x1:(int)y1:(int)x2:(int)y2{
[rectangle removeFromSuperview];
rectangle = [[UIView alloc] initWithFrame:CGRectMake(x1, y1, x2-x1, y2-y1)];
rectangle.backgroundColor = [UIColor colorWithRed:0.9 green:0.9 blue:0.9 alpha:0.6];
selectionImage = drawImage.image;
drawImage.image = selectionImage;
[drawImage addSubview:rectangle];
rectangleVisible = true;
rectangle.transform = transformation;
}
Could it have anything to do with how I draw my rectangle (above)? I call this method from part of a touchesMoved method (below), which may cause the problem (touch 1 being in the wrong location may make the width negative?). If so, is there an easy way to remedy this?
if([[event allTouches] count] == 2 && !drawImage.hidden){
NSSet *allTouches = [event allTouches];
UITouch *touch1 = [[allTouches allObjects] objectAtIndex:0];
UITouch *touch2 = [[allTouches allObjects] objectAtIndex:1];
[self drawRect:[touch1 locationInView:drawImage].x :[touch1 locationInView:drawImage].y:
[touch2 locationInView:drawImage].x :[touch2 locationInView:drawImage].y];
}

I'm not sure if this is your problem, but it looks like you are just assuming that touch1 represents the upper left touch. I would start out by standardizing the rectangle.
// Standardizing the rectangle before making it the frame.
CGRect frame = CGRectStandardize(CGRectMake(x1, y1, x2-x1, y2-y1));
rectangle = [[UIView alloc] initWithFrame:frame];
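
For example, the whole helper could look roughly like this. This is only a sketch built from the code in the question (same ivars: rectangle, drawImage, transformation); the only change is standardizing the rect so the order of the two touches no longer matters:

-(void)drawRect:(int)x1:(int)y1:(int)x2:(int)y2{
    [rectangle removeFromSuperview];

    // CGRectStandardize flips a negative width/height into a positive one
    // and moves the origin accordingly, so it does not matter which touch
    // ends up as (x1, y1) and which as (x2, y2).
    CGRect frame = CGRectStandardize(CGRectMake(x1, y1, x2 - x1, y2 - y1));

    rectangle = [[UIView alloc] initWithFrame:frame];
    rectangle.backgroundColor = [UIColor colorWithRed:0.9 green:0.9 blue:0.9 alpha:0.6];
    [drawImage addSubview:rectangle];
    rectangleVisible = true;
    rectangle.transform = transformation;
}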

Related

Rotating UIImageView Moves the Image Off Screen

I have a simple rotation gesture implemented in my code, but the problem is that when I rotate the image it always goes off the screen/out of the view to the right.
The center X of the image view being rotated drifts (increases) as it rotates, which is why it goes off the screen to the right.
I would like it to rotate around the current center, but it's changing for some reason. Any ideas what is causing this?
Code Below:
- (void)viewDidLoad
{
[super viewDidLoad];
CALayer *l = [self.viewCase layer];
[l setMasksToBounds:YES];
[l setCornerRadius:30.0];
self.imgUserPhoto.userInteractionEnabled = YES;
[self.imgUserPhoto setClipsToBounds:NO];
UIRotationGestureRecognizer *rotationRecognizer = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotationDetected:)];
[self.view addGestureRecognizer:rotationRecognizer];
rotationRecognizer.delegate = self;
}
- (void)rotationDetected:(UIRotationGestureRecognizer *)rotationRecognizer
{
CGFloat angle = rotationRecognizer.rotation;
self.imageView.transform = CGAffineTransformRotate(self.imageView.transform, angle);
rotationRecognizer.rotation = 0.0;
}
You want to rotate the image around its center, but that's not what is actually happening. Rotation transforms take place around the origin. So what you have to do is apply a translate transform first to map the origin to the center of the image, and then apply the rotation transform, like so:
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, self.imageView.bounds.size.width/2, self.imageView.bounds.size.height/2);
Please note that after rotating you'll probably have to undo the translate transform in order to correctly draw the image.
Hope this helps
Edit:
To quickly answer your question, what you have to do to undo the translate transform is subtract the same offset you added in the first place, for example:
// The next line will add a translate transform
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, 10, 10);
self.imageView.transform = CGAffineTransformRotate(self.imageView.transform, radians);
// The next line will undo the translate transform
self.imageView.transform = CGAffineTransformTranslate(self.imageView.transform, -10, -10);
However, after creating this quick project I realized that when you apply a rotation transform using UIKit (like the way you're apparently doing it) the rotation actually takes place around the center. It is only when using CoreGraphics that the rotation happens around the origin. So now I'm not sure why your image goes off the screen. Anyway, take a look at the project and see if any code there helps you.
Let me know if you have any more questions.
The 'Firefox' image is drawn using UIKit. The blue rect is drawn using CoreGraphics.
You aren't rotating the image around its centre. You'll need to correct this manually by translating it back to the correct position.
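In the Core Graphics case, that translate-rotate-untranslate dance looks roughly like this. This is a sketch, not the poster's code; "image" and "radians" are placeholder names:

// Sketch: render a UIImage rotated about its own center with Core Graphics.
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Move the origin to the image center, rotate, then move it back,
// so the rotation pivots around the center instead of the top-left corner.
CGContextTranslateCTM(ctx, image.size.width / 2, image.size.height / 2);
CGContextRotateCTM(ctx, radians);
CGContextTranslateCTM(ctx, -image.size.width / 2, -image.size.height / 2);

[image drawAtPoint:CGPointZero];
UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();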

UIImage remove some pixels issue

I have an application where the user can erase an image.
So if the user touches some pixels of the image, the alpha of those pixels should decrease.
For instance, if I touch the (0,0) pixel of the image once, the opacity at (0,0) should become 0.9. If I touch that pixel 10 times, I shouldn't see the image at point (0,0) at all.
What is the best approach to implement that?
This is the code by which you can detect the touch values:
CGPoint StartPoint = [touch previousLocationInView:self];
CGPoint Endpoint = [touch locationInView:self];
NSString *str = [NSString stringWithFormat:@"%f",StartPoint.x];
NSString *strlx = [NSString stringWithFormat:@"%f",StartPoint.y];
NSString *strcx = [NSString stringWithFormat:@"%f",Endpoint.x];
NSString *strcy = [NSString stringWithFormat:@"%f",Endpoint.y];
Here, touch is the UITouch object.
I can't say anything about the opacity.
If you want it fast (as in real-time fast) you'll need to use OpenGL.
The best way to do it is to create a mask of alpha values that is applied to the original image using a custom-built shader.
The easier but slower way is to get the raw pixels from the UIImage, apply the alpha values to the raw pixel array, and then turn it back into a UIImage (here is a nice example).
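A rough sketch of that slower raw-pixel approach follows. The method and variable names are mine, it expects the point in the image's pixel coordinates, and since the bitmap is premultiplied RGBA, all four channels are scaled together:

// Sketch: lower the alpha of one pixel by redrawing the image into a bitmap
// context and scaling the touched pixel's bytes. Premultiplied alpha means
// the RGB bytes must be scaled along with A.
- (UIImage *)image:(UIImage *)image withAlphaReducedAtPoint:(CGPoint)p {
    CGImageRef cg = image.CGImage;
    size_t w = CGImageGetWidth(cg), h = CGImageGetHeight(cg);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    unsigned char *data = calloc(w * h * 4, 1);
    CGContextRef ctx = CGBitmapContextCreate(data, w, h, 8, w * 4, cs,
                           kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), cg);

    // Scale R, G, B and A of the touched pixel by 0.9 (i.e. reduce opacity by 10%).
    size_t i = ((size_t)p.y * w + (size_t)p.x) * 4;
    for (int c = 0; c < 4; c++)
        data[i + c] = (unsigned char)(data[i + c] * 0.9);

    CGImageRef out = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:out];
    CGImageRelease(out);
    CGContextRelease(ctx);
    CGColorSpaceRelease(cs);
    free(data);
    return result;
}

In practice you would clear a whole brush-sized region around the touch rather than a single pixel, but the bookkeeping is the same.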

CGContextStrokePath not working when zooming and drawing images

I'm drawing lines in the touchesMoved: method, and normally it works fine. But when I zoom into the image and draw, the previously drawn lines are both displaced and keep getting more and more blurry, ultimately vanishing. I've tried both using a UIPinchGestureRecognizer and simply increasing the frame of myImageView (for multi-touch events only), but the problem occurs either way. Here's the code for drawing:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
NSArray *allTouches = [touches allObjects];
int count = [allTouches count];
if(count==1){//single touch case for drawing line
UITouch *touch = [touches anyObject];
CGPoint currentPoint = [touch locationInView:myImageView];
UIGraphicsBeginImageContext(myImageView.frame.size);
[drawImage.image drawInRect:CGRectMake(0, 0, myImageView.frame.size.width, myImageView.frame.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 2.0);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
lastPoint = currentPoint;
}
else{//multi touch case
// handle pinch/zoom
}
}
Here is the image drawn over without zooming:
And this is the image depicting the problem after zooming in, with the red arrow showing the segment that was already drawn before zooming in (as shown in the previous image). The image is both blurred and displaced:
It can also be noticed that the part of the line drawn towards the end is unaffected; the phenomenon occurs only for lines drawn earlier. I believe the reason is that the image size attributes are being lost when I zoom in/out, which probably causes the blur and shift, but I'm not sure about that!
EDIT- I've uploaded a short video to show what's happening. It's sort of entertaining...
EDIT 2- Here's a sample single-view app focusing on the problem.
I downloaded your project and found that the problem is with autoresizing. The following steps will solve it:
Step 1. Comment out line 70:
drawImage.frame = CGRectMake(0, 0, labOrderImgView.frame.size.width, labOrderImgView.frame.size.height);
in your touchesMoved method.
Step 2. Add one line of code after drawImage is allocated (line 90) in the viewDidLoad method:
drawImage.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
Then, the bug is fixed.
You are always just drawing to an image context the size of your image view - this of course gets blurry, as you do not adapt to a higher resolution when zoomed in. It would be more sensible to instead create a UIBezierPath once and just add a line to it (with addLineToPoint:) in the touchesMoved method, then draw it in a custom drawRect method via [bezierPath stroke]. You could also just add a CAShapeLayer as a subview of the image view and set its path property to the CGPath property of the bezierPath you created earlier.
See Drawing bezier curves with my finger in iOS? for an example.
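A minimal sketch of the CAShapeLayer variant described above; "shapeLayer", "path" and "myImageView" are placeholder names and it assumes one continuous stroke:

// Keep strokes as vector data so they stay sharp when the view is zoomed.
// (Requires QuartzCore.)
UIBezierPath *path = [UIBezierPath bezierPath];
CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.strokeColor = [UIColor blackColor].CGColor;
shapeLayer.fillColor = nil;
shapeLayer.lineWidth = 2.0;
shapeLayer.lineCap = kCALineCapRound;
[myImageView.layer addSublayer:shapeLayer];

// In touchesBegan:
[path moveToPoint:[touch locationInView:myImageView]];

// In touchesMoved:
[path addLineToPoint:[touch locationInView:myImageView]];
shapeLayer.path = path.CGPath;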
I implemented behavior like this in the following way:
You should remember all coordinates of your path (the MODEL).
Draw your path into a temporary subview of the imageView.
When the user starts to pinch/zoom the image, do nothing; the path will simply be scaled by iOS.
The moment the user has finished pinch-zooming, redraw your path correctly using your model.
If you save the path as an image, you get bad-looking scaling as a result.
Also, do not draw the path straight onto the image; draw into a transparent view and merge them together at the end of editing (see the sketch below).
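A short sketch of that model idea, with placeholder names, assuming points are stored in the image view's coordinate space:

// Record the raw points as the user draws.
NSMutableArray *strokePoints = [NSMutableArray array];

// In touchesMoved:
[strokePoints addObject:[NSValue valueWithCGPoint:[touch locationInView:myImageView]]];

// When the pinch gesture ends, rebuild the whole path from the model so it is
// rendered at the new resolution instead of being scaled as a bitmap.
UIBezierPath *path = [UIBezierPath bezierPath];
for (NSUInteger i = 0; i < strokePoints.count; i++) {
    CGPoint pt = [strokePoints[i] CGPointValue];
    if (i == 0) [path moveToPoint:pt];
    else        [path addLineToPoint:pt];
}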

Add a magnifier in cocos2d games

I want to add a magnifier to a cocos2d game. Here is what I found online:
http://coffeeshopped.com/2010/03/a-simpler-magnifying-glass-loupe-view-for-the-iphone
I've changed the code a bit (since I don't want the loupe to follow the touch):
- (id)initWithFrame:(CGRect)frame {
if ((self = [super initWithFrame:magnifier_rect])) {
// make the circle-shape outline with a nice border.
self.layer.borderColor = [[UIColor lightGrayColor] CGColor];
self.layer.borderWidth = 3;
self.layer.cornerRadius = 250;
self.layer.masksToBounds = YES;
touchPoint = CGPointMake(CGRectGetMidX(magnifier_rect), CGRectGetMidY(magnifier_rect));
}
return self;
}
Then I want to add it in one of my scene's init methods:
loop = [[MagnifierView alloc] init];
[loop setNeedsDisplay];
loop.viewToMagnify = [CCDirector sharedDirector].openGLView;
[[CCDirector sharedDirector].openGLView.superview addSubview:loop];
But the result is: the area inside the loupe is black.
Also, this loupe just magnifies with a uniform scale; how can I change it to magnify more near the center and less near the edge (just like a real magnifier)?
Thank you!
Here I assume that you want to magnify the center of the screen.
You will have to change the size attribute dynamically according to your app's needs.
CGSize size = [[CCDirector sharedDirector] winSize];
id lens = [CCLens3D actionWithPosition:ccp(size.width/2,size.height/2) radius:240 grid:ccg(15,10) duration:0.0f];
[self runAction:lens];
Cocos2d draws using OpenGL, not CoreAnimation/Quartz. The CALayer you are drawing is empty, so you see nothing. You will either have to use OpenGL graphics code to perform the loupe effect or sample the pixels and alter them appropriately to achieve the magnification effect, as was done in the Christmann article referenced from the article you linked to. That code also relies on CoreAnimation/Quartz, so you will need to work out another way to get your hands on the image data you wish to magnify.
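One way to get your hands on that pixel data (so the loupe has something non-black to sample) is to read the GL framebuffer back into a UIImage. This is only a rough sketch, not cocos2d-specific; it assumes the GL context is current, and the buffer comes back upside-down relative to UIKit coordinates, so you would still need to flip it before handing it to the loupe:

// Sketch: snapshot the current OpenGL framebuffer into a UIImage.
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
GLsizei w = viewport[2], h = viewport[3];

GLubyte *pixels = malloc(w * h * 4);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, w, h, 8, w * 4, cs,
                       kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *snapshot = [UIImage imageWithCGImage:cgImage];

CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(cs);
free(pixels);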

bounds and frames: how do I display part of a UIImage

My goal is simple: I want to create a program that displays a UIImage and, when swiped from bottom to top, displays another UIImage. The images here could be a happy face/sad face. The sad face should be the starting point, the happy face the end point. When swiping your finger, the part below the finger should show the happy face.
So far I have tried solving this with the frame and bounds properties of the UIImageView I used for the happy face image.
What this piece of code does is wrong, because the transition starts in the center of the screen and not at the bottom. Notice that the origins of both frame and bounds are at (0,0)...
I have read numerous pages about frames and bounds, but I don't get it. Any help is appreciated!
loadImages is called only once.
- (void)loadImages {
sadface = [UIImage imageNamed:@"face-sad.jpg"];
happyface = [UIImage imageNamed:@"face-happy.jpg"];
UIImageView *face1view = [[UIImageView alloc]init];
face1view.image = sadface;
[self.view addSubview:face1view];
CGRect frame;
CGRect contentRect = self.view.frame;
frame = CGRectMake(0, 0, contentRect.size.width, contentRect.size.height);
face1view.frame = frame;
face2view = [[UIImageView alloc]init];
face2view.layer.masksToBounds = YES;
face2view.contentMode = UIViewContentModeScaleAspectFill;
face2view.image = happyface;
[self.view addSubview:face2view];
frame = CGRectMake(startpoint.x, 0, contentRect.size.width, contentRect.size.height);
face2view.frame = frame;
face2view.clipsToBounds = YES;
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
CGPoint movepoint = [[touches anyObject] locationInView: self.view];
NSLog(@"movepoint: %f %f", movepoint.x, movepoint.y);
face2view.bounds = CGRectMake(0, 0, 320, 480 - movepoint.y);
}
The UIImages and UIImageViews are properly disposed of in the dealloc function.
Indeed, you seem to be confused about frames and bounds. In fact, they are easy. Always remember that any view has its own coordinate system. The frame, center and transform properties are expressed in the superview's coordinates, while the bounds is expressed in the view's own coordinate system. If a view doesn't have a superview (it is not installed into a view hierarchy yet), it still has a frame. In iOS the frame property is calculated from the view's bounds, center and transform. You may ask what the hell frame and center mean when there's no superview. They are used when you add the view to another view, allowing you to position the view before it's actually visible.
The most common example when a view's bounds differ from its frame is when it is not in the upper left corner of its superview: its bounds.origin may be CGPointZero, while its frame.origin is not. Another classic example is UIScrollView, which frequently modifies its bounds.origin to make subviews scroll (in fact, modifying the origin of the coordinate system automatically moves every subview without affecting their frames), while its own frame is constant.
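A tiny illustration of the difference (the view and the numbers are just an example):

// A view placed 50pt right and 100pt down inside its superview:
UIView *v = [[UIView alloc] initWithFrame:CGRectMake(50, 100, 200, 150)];
// v.frame.origin  -> (50, 100)   expressed in the superview's coordinates
// v.bounds.origin -> (0, 0)      expressed in v's own coordinates
// v.bounds.size   -> (200, 150)  same size as the frame (no transform applied)

// Shifting the bounds origin moves every subview of v without moving v itself,
// which is exactly how UIScrollView scrolls its content:
v.bounds = CGRectMake(0, 30, 200, 150);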
Back to your code. First of all, when you already have images to display in image views, it makes sense to init the views with their images:
UIImageView *face1view = [[UIImageView alloc] initWithImage: sadface];
That helps the image view to immediately size itself properly. It is not recommended to init views with -init because that might skip some important code in their designated initializer, -initWithFrame:.
Since you add face1view to self.view, you should really use its bounds rather than its frame:
face1view.frame = self.view.bounds;
Same goes for the happier face. Then in -touchesMoved:… you should either change face2view's frame to move it inside self.view or (if self.view does not contain any other subviews besides faces) modify self.view's bounds to move both faces inside it together. Instead, you do something weird like vertically stretching the happy face inside face2view. If you want the happy face to slide from the bottom of self.view, you should initially set its frame like this (not visible initially):
face2view.frame = CGRectOffset(face2view.frame, 0, CGRectGetHeight(self.view.bounds));
If you choose to swap faces by changing image views' frames (contrasted with changing self.view's bounds), I guess you might want to change both the views' frame origins, so that the sad face slides up out and the happy face slides up in. Alternatively, if you want the happy face to cover the sad one:
face2view.frame = face1view.frame;
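Putting that together, here is a sketch of the slide-from-the-bottom variant suggested above, moving only frames. It assumes face2view starts offset below self.view as shown earlier:

// Sketch: slide the happy face up from the bottom as the finger moves.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self.view];
    CGRect frame = face2view.frame;
    // The top edge of the happy face follows the finger; everything below the
    // finger shows the happy face, everything above still shows the sad face.
    frame.origin.y = p.y;
    face2view.frame = frame;
}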
Your problem seems to have something to do with the face2view.bounds in touchesMoved.
You are setting the bounds of this view to the rect, x:0, y:0, width:320, height:480 - y
x = 0 == left on the x axis
y = 0 == top on the y axis
So you are putting this image frame at the upper left corner, and making it fill the whole view. That's not what you want. The image simply becomes centered in this imageView.