Could somebody advise which formula is used to calculate the content offset after zooming in a UIScrollView? Let's consider the following example:
I have a UIScrollView with a content view of size (1000, 1000). If I programmatically call setZoomScale with 2.0, then in the scrollViewDidEndZooming:withView:atScale: method I see the following:
contentSize before zoom = {1000, 1000}
contentOffset before zoom = {0, 0}
scale = 2.000000
contentSize after zoom = {2000, 2000}
contentOffset after zoom = {160, 230}
I need to know how the new contentOffset value {160, 230} is calculated. Is there a formula that determines the content offset in this case?
Thanks
This may or may not be relevant, but note that 160x230 is half of 320x460, the iPhone's resolution less the status bar. Try changing the UIScrollView's frame, or its superview's frame, and see how this affects the numbers.
EDIT: Come to think of it, it makes perfect sense that the offset is half the size of the scroll view, as the content expands equally in both directions. So the formula would be: contentOffset = (scrollView.frame.size.width/2 * (scaleAfter - scaleBefore), scrollView.frame.size.height/2 * (scaleAfter - scaleBefore)).
Therefore, if the scale was 4.0f, the offset would be: (320/2 * (4-1), 460/2 * (4-1)) => (480, 690). Try a scale of 4 and see if (480, 690) comes out.
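A quick way to sanity-check the formula (just a sketch; 'scrollView' is an assumed 320x460 scroll view starting at offset (0, 0)):

// Sketch: predict the contentOffset produced by a programmatic zoom
// using the formula above, then compare with the actual value.
CGFloat scaleBefore = scrollView.zoomScale; // 1.0 here
CGFloat scaleAfter = 2.0f;
CGPoint predicted = CGPointMake(scrollView.frame.size.width / 2 * (scaleAfter - scaleBefore),
                                scrollView.frame.size.height / 2 * (scaleAfter - scaleBefore));
[scrollView setZoomScale:scaleAfter animated:NO];
NSLog(@"predicted: %@, actual: %@",
      NSStringFromCGPoint(predicted),
      NSStringFromCGPoint(scrollView.contentOffset));
// For a 320x460 scroll view this should log (160, 230), matching the question.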
The positions and sizes of my Game Pieces, as set by CGPoint(..) and CGRect(..), don't make arithmetic sense to me when compared with the width and height of the surrounding container of all Game Pieces.
Let me illustrate with just one specific example –
I call the surrounding container = “room”.
One of many specific Game Pieces = “rock”.
Here’s the math
roomWidth = Double(UIScreen.main.bounds.width)
roomHeight = Double(UIScreen.main.bounds.height)
While in Portrait mode:
roomWidth = 744.0
roomHeight = 1133.0
When rotated to Landscape mode:
roomWidth = 1133.0
roomHeight = 744.0
So far so good .. here’s the problem:
When I look at my .sks file, the widths of the "rock" and its adjacent game pieces far exceed the roomWidth; for example,
Widths of rock + paddle + tree = 507 + 768 + 998, which obviously exceeds the room's width in either Portrait or Landscape mode, and this math doesn't even account for the separation between Game Pieces.
The final math "craziness" concerns the Swift xPos values for each Game Piece as specified in my .sks file:
Room: xPos = 40,
Rock: xPos = -390,
Paddle: xPos = -259,
Tree: xPos = 224
I cannot grasp the two large negative numbers .. to me, that means the Rock and the Paddle shouldn't even be visible .. seriously off-screen.
One significant addition = I did set the Autoresizing Mask to center horizontally and vertically
I need a serious infusion of “smarts” here.
The default anchorPoint of an sks file (SpriteKit Scene file) is (0.5, 0.5). So the origin (0, 0) of the scene is drawn at the center of the SKView. You can change the anchor point in the Attributes inspector when editing the sks file. The default means that negative coordinates not too far from the origin will be visible in the SKView.
The scene also has a scaleMode property which determines how the scene is scaled if its size doesn't match the SKView's size. The default is .fill, which means the view scales the scene's axes independently so the scene's size exactly fills the view.
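For illustration, a minimal sketch (assuming a scene and view already exist) of how these two properties interact:

// Sketch: with the default anchorPoint (0.5, 0.5), scene coordinate (0, 0)
// lands at the center of the SKView, so xPos = -390 is only 390 points
// left of center and can easily be on screen.
scene.anchorPoint = CGPointMake(0.5, 0.5);

// With anchorPoint (0, 0), the origin is the bottom-left corner, and
// negative coordinates really would be off-screen.
scene.anchorPoint = CGPointMake(0.0, 0.0);

// scaleMode decides how a size mismatch between scene and view is resolved;
// the default stretches each axis independently to fill the view.
scene.scaleMode = SKSceneScaleModeFill;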
I'm attempting to detect if there's a sprite node immediately to the left or right of the current sprite node.
This seems straightforward, but I'm seeing an odd behaviour.
I've created a thin rect (width = 1 point) that's the same height as the current node and has the same origin as the current node.
e.g.:
// Create a thin rect that's aligned to the left edge of 'block'
CGRect adjacentFrame;
adjacentFrame = CGRectMake(block.frame.origin.x,
                           block.frame.origin.y,
                           1,
                           block.frame.size.height);
// Shift the rect left a few points to position it to the left of 'block'
adjacentFrame.origin.x -= 10;
Then I test to see if that rect (adjacentFrame) is intersecting a node:
SKPhysicsBody* obstructingBody;
obstructingBody = [self.physicsWorld bodyInRect:adjacentFrame];
Now, the weird thing is, obstructingBody contains 'block' itself!
I've even added code to add a SpriteNode to the scene with a frame of adjacentFrame so I can check the rect's positioning. It's clearly displaying a few points left of 'block' and is clearly not touching it!
Any ideas what could be going on here?
Thanks,
Chris
bodyInRect needs scene coordinates. You provide coordinates in the coordinate space of block.parent. Unless block.parent is the scene itself, you need to convert origin with:
CGRect blockFrame = block.frame;
blockFrame.origin = [block.parent convertPoint:blockFrame.origin toNode:self.scene];
Also, block's width must be less than 10 points; otherwise your -10 offset isn't enough to move the rect outside the block's frame.
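Putting both points together, a sketch of the probe (assuming 'block' is the node being tested and is attached to a scene):

// Sketch: build the probe rect in scene coordinates, since
// bodyInRect: expects them.
CGRect adjacentFrame = block.frame;
adjacentFrame.origin = [block.parent convertPoint:adjacentFrame.origin
                                           toNode:block.scene];
adjacentFrame.origin.x -= 10; // probe 10 points to the left
adjacentFrame.size = CGSizeMake(1, block.frame.size.height);

SKPhysicsBody *body = [block.scene.physicsWorld bodyInRect:adjacentFrame];
if (body && body != block.physicsBody) {
    // a node other than 'block' occupies the space to the left
}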
I have a map app where the user can place waypoints manually. I would like them to press the waypoint button and have a waypoint placed at the center of the currently visible portion of the content view.
I'm afraid you'd have to calculate it yourself. contentSize returns the size of the scrolled content, and contentOffset gives you the origin of the visible region within that content. Then with scrollView.bounds.size you can find the center of the view.
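For example (a sketch; 'scrollView' is assumed, and dividing by zoomScale maps back into the content view's unscaled coordinates):

// Center of the visible region, in (zoomed) content coordinates.
CGPoint visibleCenter = CGPointMake(scrollView.contentOffset.x + scrollView.bounds.size.width / 2,
                                    scrollView.contentOffset.y + scrollView.bounds.size.height / 2);
// Divide by zoomScale to get the same point in the unscaled
// coordinate space of the content (map) view.
CGPoint centerInContent = CGPointMake(visibleCenter.x / scrollView.zoomScale,
                                      visibleCenter.y / scrollView.zoomScale);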
Haven't tested this, but maybe you could convert scrollView.center to your scrolled map like this:
CGPoint viewportCenterInMapCoords =
[scrollView.superview convertPoint:scrollView.center
toView:mapViewInsideScrollView];
I need to account for how zoomed in it is; then I can convert the content offset to the dimensions of the full image and add half of the visible area.
/// this is the full size of the map image
CGSize fullSize = CGSizeMake(13900, 8400);

/// determines how the full size compares to the current content size
float zoomFactor = fullSize.width / self.contentSize.width;

/// apply the zoom factor to the content offset; this basically upscales
/// the content offset (plus half the visible area) to the dimensions
/// of the full-size map
float newContentOffsetX = self.contentOffset.x * zoomFactor + (self.bounds.size.width / 2) * zoomFactor - 300;
float newContentOffsetY = self.contentOffset.y * zoomFactor + (self.bounds.size.height / 2) * zoomFactor - 300;

/// not sure why I needed to subtract the 300, but the formula wasn't putting
/// the point in the exact center; subtracting 300 put it there in all situations though
CGPoint point = CGPointMake(newContentOffsetX, newContentOffsetY);
This is basically a simple issue, which I can't get around ...
So, I have a UIImageView with a certain frame, to which I apply a CAAnimation with rotation and translation; it ends up at new coordinates (x, y), rotated by some number of degrees.
This animation works beautifully. But if I then do another rotation and movement from THAT state, I want the object to use the new frame and properties it acquired in step 1 and rotate again from there by the new angle.
As of now, when I try the rotation again, it uses its original state and frame size (from initialization) and performs the rotation on that...
And by this I mean: if I have a square with frame (100, 200, 10, 10) and I rotate it by 45 degrees, the result is a different square with a different frame and end points than the original. If I then apply a new rotation of (say) 152 degrees, it needs to rotate that newer square... but it turns out it uses the same frame as before (x, y, 10, 10).
How can I continue rotating / moving the object with its updated position and state?
Note: (if you need to see the code for animation)
This is the code for my animation, which involves simple rotation and movement ! http://pastebin.com/cR8zrKRq
You need to save the rotation step and update the object's rotation in the animationDidStop: method. So, in your case, you should apply:
- (void)animationDidStop:(CAAnimation *)anim finished:(BOOL)flag
{
    // angle is global
    CATransform3D rotationTransform = CATransform3DMakeRotation((angle % 4) * M_PI_4, 0, 0, 1);
    [object.layer setTransform:rotationTransform];
    object.center = tempFrame; // already there
}
where angle is an integer counter of animations (steps) with values 0, 1, 2, 3. My step is M_PI_4. There is probably a better solution to the problem, but this should do the trick.
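A common alternative (not from the original answer, just a sketch) is to concatenate each new rotation onto the layer's current transform, so every step starts from the updated state:

// Sketch: rotate relative to the layer's current transform instead of
// rebuilding it from scratch each time.
CATransform3D current = object.layer.transform;
object.layer.transform = CATransform3DRotate(current, M_PI_4, 0, 0, 1);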
I have a custom UIView which is drawn using its -[drawRect:] method.
The problem is that the anti-aliasing acts very weirdly: black horizontal or vertical lines are drawn very blurry.
If I disable anti-aliasing with CGContextSetAllowsAntialiasing, everything is drawn as expected.
Anti-aliasing: http://dustlab.com/stuff/antialias.png
No anti-aliasing (which looks like the expected result with AA): http://dustlab.com/stuff/no_antialias.png
The line width is exactly 1, and all coordinates are integral values.
The same happens if I draw a rectangle using CGContextStrokeRect, but not if I draw exactly the same CGRect with UIRectStroke.
Since a stroke expands an equal amount to both sides, a line of one pixel width must not be placed on an integer coordinate, but at a 0.5-pixel offset.
Calculate correct coordinates for stroked lines like this:
pos = CGPointMake(floorf(pos.x) + 0.5f, floorf(pos.y) + 0.5f);
BTW: Don't cast your values to int and back to float to get rid of the decimal part. There's a function for this in C called floor.
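A minimal drawRect: sketch of the half-pixel rule (x1, x2, and y are assumed endpoints of a horizontal line):

// Sketch: stroke a crisp 1-point horizontal line by snapping its
// endpoints to half-pixel coordinates.
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(ctx, 1.0f);
CGContextMoveToPoint(ctx, floorf(x1) + 0.5f, floorf(y) + 0.5f);
CGContextAddLineToPoint(ctx, floorf(x2) + 0.5f, floorf(y) + 0.5f);
CGContextStrokePath(ctx);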
In your view frames, you probably have float values that are not integers. While the frames are precise enough to represent fractions of a pixel (float), you will get blurriness unless you cast to an int:
CGRect frame = CGRectMake((int)self.frame.origin.x,
                          (int)self.frame.origin.y,
                          (int)self.frame.size.width,
                          (int)self.frame.size.height);