Dragging/Resizing a UIImage on the Device - iPhone

I'd like to allow a user to add a shape (which would just be a UIImage) onto some sort of canvas, then move and resize it on the screen but I'm not sure how to go about this. Ideally I'd like the basics of a drawing app which can use images from a user's device. Each shape would have an associated position, size and z-index.
The only thing I'm unsure of is how I'd create a bounding box (the one with four blue dots to allow resizing/moving). I have experience with UIKit, and would prefer to keep the majority of the app in this for the time being, but I get the feeling this type of thing might be better suited to Cocos2D or a similar framework.
If anyone has any pointers/open source code I can dig through it would be hugely appreciated.

I think you should look into CALayer, or even CAShapeLayer. I'm just starting to play with them, but I'm pretty sure you can easily get the functionality you want with either. Draw the border in the layer's drawLayer:inContext:. Check out the path-drawing section of the Quartz 2D Programming Guide for the functions you need.
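As a rough illustration (not from the original answer), here is a minimal Swift sketch of that kind of selection box: a plain UIView overlay whose border and four corner handles are drawn by two CAShapeLayers. The class name and handle size are arbitrary, and the pan/pinch gesture handling that actually moves and resizes the image is left out.

```swift
import UIKit

// Minimal selection-overlay sketch: one shape layer for the bounding box,
// one for the four corner "dots". Gesture handling is not shown.
final class SelectionOverlayView: UIView {
    private let borderLayer = CAShapeLayer()
    private let handleLayer = CAShapeLayer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .clear

        borderLayer.fillColor = nil
        borderLayer.strokeColor = UIColor.systemBlue.cgColor
        borderLayer.lineWidth = 1
        layer.addSublayer(borderLayer)

        handleLayer.fillColor = UIColor.systemBlue.cgColor
        layer.addSublayer(handleLayer)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func layoutSubviews() {
        super.layoutSubviews()
        borderLayer.path = UIBezierPath(rect: bounds).cgPath

        // One small circle at each corner of the bounding box.
        let handles = UIBezierPath()
        let radius: CGFloat = 5
        let corners = [CGPoint(x: bounds.minX, y: bounds.minY),
                       CGPoint(x: bounds.maxX, y: bounds.minY),
                       CGPoint(x: bounds.minX, y: bounds.maxY),
                       CGPoint(x: bounds.maxX, y: bounds.maxY)]
        for corner in corners {
            handles.append(UIBezierPath(arcCenter: corner, radius: radius,
                                        startAngle: 0, endAngle: .pi * 2, clockwise: true))
        }
        handleLayer.path = handles.cgPath
    }
}
```

You would size this overlay to match the selected image view's frame and update both together as the user drags.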

Related

Get bounds of PKDrawing / individual strokes in PencilKit

I need access to the coordinates of individual strokes of a PKDrawing in PencilKit. Is there any way I can get access to that? Currently, my only idea is to try and decode the opaque data representation we get from PKDrawing.
As Ben has said, there doesn't seem to be any way to access stroke-level data in PencilKit at this time. This seems like a pretty rudimentary feature so hopefully Apple will add it next WWDC. Fingers crossed.
LetsBuildThatApp's YouTube tutorial is a good starting point if you just want basic drawing capabilities and are not too concerned about drawing quality or latency. I ran into issues when I tried to add the ability to vary the stroke width with pen pressure. I was never able to get it to transition smoothly between stroke segments of different widths - it always 'jumped' jarringly from one width to the next. Maybe there's a way to fix that, but I wasn't able to.
I suspect that currently, the only way to draw low-latency, high-quality images with Apple Pencil is to write a drawing engine from scratch in Metal or OpenGL. It's strange that it has taken Apple so long since the release of Apple Pencil to get a comprehensive drawing framework out. Fingers crossed that changes at WWDC 2020.
Edit: Apple announced substantial updates to PencilKit at WWDC20, including access to stroke data and functions to programmatically draw strokes.
I was looking for the same thing, and from this comment https://stackoverflow.com/a/57565661/8891611 and my own searching of the documentation, it seems what you're asking for doesn't exist. If you do discover a way to decode the opaque data, please update this thread with how you did it.
If you aren't necessarily set on PencilKit, you could manually code your drawing functions which is easier than it sounds. In this tutorial: https://youtu.be/E2NTCmEsdSE?t=504 I have timestamped where he shows that he has access to all the points on the lines he is creating.
And another potential solution could be to use both, by having a mapping between your PencilKit drawing and the points found from the technique in this tutorial.
For the strokes, you can access the .renderBounds property, which gives you the bounding rectangle that the stroke fits into.
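To make that concrete, here is a small sketch using the iOS 14 API added after WWDC20; the function name is just for illustration:

```swift
import PencilKit

// With the iOS 14 PencilKit update, PKDrawing exposes its strokes directly;
// each PKStroke has a renderBounds rectangle and a path of stroke points.
func logStrokes(in drawing: PKDrawing) {
    for (index, stroke) in drawing.strokes.enumerated() {
        print("stroke \(index) renderBounds:", stroke.renderBounds)

        // Each PKStrokePoint carries a location, size, opacity, force, etc.
        for point in stroke.path {
            print("  point at \(point.location), force \(point.force)")
        }
    }
}
```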

iOS app view background (Image vs draw) design question

Suppose I was writing a game which involved a relatively complex geometric game board. Something like a dartboard.
I would want a view to display the game state. What is the best way to implement that view?
For example, should I draw the board off line in something like photoshop, add it as a resource, and then show it using a UIImageView? Or should I use drawing primitives and essentially draw the board programmatically?
What are the trade-offs?
If I do use an image, what format should I prefer? .png, .tiff, .gif, .jpg?
Thanks,
John
If you decide to go the image route, you should use PNG. You pay a performance hit for displaying any other format (as mentioned in the comment).
To decide between building the image in Photoshop vs. drawing it in code, you need to decide how much time you want to put into learning Quartz/Core Graphics. Apple's docs:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html
If you already know Photoshop then building the graphic there is probably much easier; if you don't, then learning Quartz is probably a less steep learning curve than Photoshop...
If it's a simple board, it's easy enough to draw it into the view, which gives you the possibility of easily manipulating it in interesting ways. Drawing in a view is done with a set of PostScript-like primitives (a short sketch follows below).
For something more fancy, photoshop might be the way to go.
PNGs are preferred.
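As a hedged illustration of the draw-it-in-code option, here is a small Swift sketch that fills concentric rings inside a UIView's draw(_:); the colors and ring count are placeholders, not a real dartboard layout:

```swift
import UIKit

// Sketch of drawing a simple board in code instead of shipping an image.
final class BoardView: UIView {
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        let center = CGPoint(x: bounds.midX, y: bounds.midY)
        let maxRadius = min(bounds.width, bounds.height) / 2
        let ringColors: [UIColor] = [.black, .white, .red, .green]

        // Concentric filled circles, largest first, alternating colors.
        for ring in 0..<8 {
            let radius = maxRadius * CGFloat(8 - ring) / 8
            context.setFillColor(ringColors[ring % ringColors.count].cgColor)
            context.fillEllipse(in: CGRect(x: center.x - radius, y: center.y - radius,
                                           width: radius * 2, height: radius * 2))
        }
    }
}
```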

How do I create a custom page curl Core Animation?

I'm trying to create a "page curl" animation of an image in my iPhone application. I have tried UIViewAnimationTransitionCurlUp and its undocumented Core Animation siblings; however, the image I need to animate is a transparent PNG with "uneven" outlines (some partially transparent pixels). When using the aforementioned pre-made transition, those alpha pixels are painted black as soon as the animation starts, which looks terribly ugly.
Therefore, I seek to create a Core Animation of my own. I have tried to research the subject, but have been unable to find a good overview of the techniques involved. The implementation would of course have to be more complex than a single property change; I get the feeling that even CATransform3D would be too limited for this purpose, as the image needs to have different 3D transformations applied to different parts of it, changing over time. How would one go about this? I'm very grateful for any thoughts or ideas!
Best,
Eli
As Corey points out, you'll probably need to go with OpenGL ES for this one. Core Animation exposes the ability to work with layers, even in 3-D, but all layers are just rectangles and they are manipulated as such. You can animate the flipping of a layer about an axis, even with a perspective distortion, but the kind of curving you want to do is more complex than you can manage using the Core Animation APIs.
You might be able to split your image up into a mesh of tiny layers and manipulate each using a CATransform3D to create this curving effect, but at that point you might as well be using OpenGL ES to create the same effect.
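To give a feel for the mesh-of-layers approach, here is a rough Swift sketch (not a real page curl) that slices an image into vertical strips, one CALayer each, and rotates the strips progressively with a perspective transform. A genuine curl needs per-vertex warping, which is where OpenGL ES comes in; the function name and constants here are purely illustrative.

```swift
import UIKit

// Rough "mesh of layers" sketch: slice an image into vertical strips, give
// each strip its own layer, and rotate the strips progressively about the
// y-axis with a perspective transform. This only hints at a curl; a true
// page curl needs per-vertex warping (OpenGL ES / Metal).
func makeCurlStrips(from image: UIImage, in containerLayer: CALayer, stripCount: Int = 20) {
    guard let cgImage = image.cgImage else { return }
    let stripWidth = containerLayer.bounds.width / CGFloat(stripCount)

    var perspective = CATransform3DIdentity
    perspective.m34 = -1.0 / 500.0   // small perspective distortion

    for i in 0..<stripCount {
        let strip = CALayer()
        strip.frame = CGRect(x: CGFloat(i) * stripWidth, y: 0,
                             width: stripWidth, height: containerLayer.bounds.height)
        // Show only this strip's slice of the source image.
        strip.contents = cgImage
        strip.contentsRect = CGRect(x: CGFloat(i) / CGFloat(stripCount), y: 0,
                                    width: 1.0 / CGFloat(stripCount), height: 1)
        // Strips nearer the trailing edge rotate further, hinting at a curl.
        let angle = CGFloat(i) / CGFloat(stripCount) * .pi / 3
        strip.transform = CATransform3DRotate(perspective, angle, 0, 1, 0)
        containerLayer.addSublayer(strip)
    }
}
```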
The book Core Animation for Mac OS X and the iPhone: Creating Compelling Dynamic User Interfaces from Pragmatic Programmer may help you write custom Core Animation animations.

Need help optimizing my 2d drawing on iPhone

I'm writing a game that displays 56 hexagon pieces filling the screen in the shape of a board. I'm currently drawing each piece using a singleton rendering class that, when called to draw a piece, creates a path from 6 points based on the coordinate passed in. This path is filled with a solid color, and then a 59x59 PNG with an alpha-to-white gradient is overlaid on the drawing to give the piece a shiny look. Note I'm currently doing this in Core Graphics.
My first thought is that creating a path every time I draw is costly, and it seems like I could do this once and then reuse it, but I'm not sure of the best approach. When I look at the bottlenecks with Shark, it looks like drawing the PNG is the most taxing part of the process. I've tried rendering just the PNG overlay or just the path without the overlay, and both give me some frame gains, although removing the PNG overlay yields the most frames.
My current thought is that at startup I should render 6 paths (one for each piece color), overlay them with the PNG, store the results as images, and then just redraw those images each time I need a piece. Is there an efficient mechanism for storing something you've drawn once and redrawing it? It kind of sounds like I'd run into the whole drawing-PNGs-too-often problem again, but maybe there's a less taxing method that does a similar thing...
Any suggestions are much appreciated.
Thanks!
You might try CGLayer or CALayer.
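One concrete way to do the "render once, reuse" idea (alongside CGLayer/CALayer) is to bake each colored piece into a cached UIImage at startup and just draw that image afterwards. A rough Swift sketch using UIGraphicsImageRenderer (a more modern API than existed when this was asked); `shineOverlay` stands in for the 59x59 gradient PNG from the question:

```swift
import UIKit

// "Render once, reuse" sketch: bake one image per piece color at startup,
// then draw the cached image for every piece instead of re-filling the path
// and re-compositing the gradient PNG on every frame.
func makePieceImage(color: UIColor, shineOverlay: UIImage,
                    size: CGSize = CGSize(width: 59, height: 59)) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        // Hexagon path inscribed in the piece's bounds.
        let path = UIBezierPath()
        let center = CGPoint(x: size.width / 2, y: size.height / 2)
        let radius = min(size.width, size.height) / 2
        for i in 0..<6 {
            let angle = Double(i) * .pi / 3
            let point = CGPoint(x: center.x + radius * CGFloat(cos(angle)),
                                y: center.y + radius * CGFloat(sin(angle)))
            if i == 0 { path.move(to: point) } else { path.addLine(to: point) }
        }
        path.close()

        color.setFill()
        path.fill()

        // Composite the shine once here, instead of on every frame.
        shineOverlay.draw(in: CGRect(origin: .zero, size: size))
    }
}
```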
General thoughts:
Game programming on iPhone usually necessitates OpenGL. Core Graphics is a bit easier to work with, but OpenGL is optimized for speed.
Prerender this "shiny look" into the textures as much as is possible (as in: do it in Photoshop before you even insert them into your project). Alpha blending is hell on performance.
Maybe try PVRTC (also this tutorial) as it's a format used by iPhone's GPU's manufacturer. Then again, this could make things worse depending on where your bottleneck is.
If you really need speed you have to go the OpenGL route. Be careful if you want to mix OpenGL and Core Animation; they can conflict.
OpenGL is a pain if you haven't done much with it. It sounds like you could use Core Animation and make each tile a layer. CA doesn't redraw a layer unless you change something, so you should be able to just move that layer around without taking a big hit. Also note that CA stores layer contents in texture memory, so it should be much faster.
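A small sketch of the tile-as-a-layer idea, assuming the tile artwork has already been rendered once and cached as a CGImage; the function names are illustrative. Moving a standalone layer's position afterwards animates implicitly, without any drawing code running again:

```swift
import UIKit

// One layer per tile: contents are set once from a cached image; later
// position changes animate implicitly (default ~0.25 s) with no redraw.
func addTile(image: CGImage, at position: CGPoint, to hostLayer: CALayer) -> CALayer {
    let tile = CALayer()
    tile.contents = image
    tile.bounds = CGRect(x: 0, y: 0, width: 59, height: 59)
    tile.position = position
    hostLayer.addSublayer(tile)
    return tile
}

func slideTile(_ tile: CALayer, to newPosition: CGPoint) {
    // Standalone (non-view-backing) layers animate this change implicitly.
    tile.position = newPosition
}
```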
Some others have mentioned that you should use OpenGL. Here's a nice introduction specifically for the iPhone: OpenGL ES from the Ground Up: Table of Contents
You might also want to look at cocos2d. It seems to be significantly faster than using CoreAnimation in my tests, and provides lots of useful stuff for games.

How do I use CALayer with the iPhone?

Currently, I have a UIView subclass that "stamps" a single 2px by 2px CGLayerRef across the screen, up to 160 x 240 times.
I currently animate this by moving the UIView "up" the screen 2 pixels (actually, a UIImageView) and then drawing the next "row".
Would using multiple CALayer layers speed up performance of rendering this animation?
Are there tutorials, sample applications or code snippets for use of CALayer with the iPhone SDK?
The reason I ask is that most of the code snippets I find that demonstrate simple examples of CALayer employ method calls that do not work with the iPhone SDK. I appreciate any advice or pointers.
Okay, well, if you want something that has some good examples of Core Animation code that draws things like that and works on the phone, I recommend the GeekGameBoard code that Jens Alfke published (it is an improved version of some Apple demo code).
Based on what you are describing, I think you are doing something way more complicated than it needs to be. My impression is that you basically want a static view that you are animating by shifting its position so that it is partially off screen. If you just need to set some static content in your drawRect, going through layers is not going to be faster than just calling CGContextFillRect() with your color. After that, you could just use implicit animations and the animator proxy on UIView to move the view. I suspect you could even get rid of the custom drawRect: implementation with a patterned UIColor, but I honestly have not benchmarked the difference between the two.
What CALayer methods are you seeing that don't work on iPhone? Aside from animation features tied to Core Image, I have not noticed much that is missing. The big things you are likely to notice are that all views are layer backed (so you do not need to do anything special to use layers; you can just grab a UIView's layer through the layer accessor method), and that the coordinate system has a top-left origin.
In any event, generally having more things is slower than having fewer things. If you are just repeating the same pattern over and over again you are likely to find the best performance is implementing a custom UIView/CALayer/UIColor that knows how to draw what you want, rather than placing visually identical layers or views next to each other.
Having said that, generally layers are lighter weight than views, so if you have a lot of separate elements that you need to keep logically separated you will find that moving to layers can be a win over using views.
You might want to look at -[UIColor initWithPatternImage:] depending on exactly what you are trying to do. If you are using this two pixel pattern as a background color you could just make a UIColor that draws it and set the background.
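For example, here is a Swift sketch of that pattern-color suggestion (the Swift equivalent of -[UIColor initWithPatternImage:]; names and colors are illustrative): render the 2x2 stamp into a tiny image once, wrap it in a pattern color, and let the view tile it as its background.

```swift
import UIKit

// Pattern-color sketch: draw the 2x2 "stamp" once, then let UIKit tile it
// as the view's background instead of stamping it hundreds of times.
func applyStampPattern(to view: UIView) {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: 2, height: 2))
    let stamp = renderer.image { context in
        UIColor.darkGray.setFill()
        context.fill(CGRect(x: 0, y: 0, width: 2, height: 2))
        UIColor.lightGray.setFill()
        context.fill(CGRect(x: 0, y: 0, width: 1, height: 1))
    }
    view.backgroundColor = UIColor(patternImage: stamp)
}
```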
What CALayer methods are you seeing that don't work on iPhone?
As one example, I tried implementing the grid demo here, without much luck. It looks like CAConstraintLayoutManager and CAConstraint are not available in QuartzCore.h.
In another attempt, I tried a very simple, small 20x20 CALayer object as a sublayer of my UIView's layer property, but that didn't show up.
Right now, I have a custom UIView of which I override the drawRect method. In drawRect I grab a context and render two types of CGLayerRefs:
At "off" cells I draw the background color across the entire 320x480 canvas.
At "on" cells, I either draw a single CGLayerRef across a grid of 320x480 pixels (initialization) or across a 320x2 row (animation).
During animation, I clip the UIImageView to 320x478 pixels and draw a single row. This "pushes" my bitmap up the screen two pixels at a time.
Basically, I'd like to test whether or not using CALayer will accomplish two things:
Make my rendering faster, if CALayer has less overhead than what I'm doing now
Make my animation smoother, by letting me transition a layer up the screen smoothly
Unfortunately, I can't seem to get a basic CALayer working at the moment, and haven't found a good chunk of sample code to look at and play with.
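For reference, a bare-bones sublayer usually fails to appear for a simple reason: it needs a non-zero frame and some visible content (a backgroundColor or contents image), or a setNeedsDisplay() call if a delegate does the drawing. A minimal sketch (class name illustrative):

```swift
import UIKit

// Minimal sublayer that actually shows up: non-zero frame plus a background
// color. Without one of those (or contents / a drawing delegate), a bare
// CALayer renders nothing.
final class LayerTestViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let box = CALayer()
        box.frame = CGRect(x: 20, y: 20, width: 20, height: 20)   // must be non-zero
        box.backgroundColor = UIColor.red.cgColor                  // gives it visible content
        view.layer.addSublayer(box)
    }
}
```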