Get bounds of PKDrawing / individual strokes in PencilKit - swift

I need access to the coordinates of individual strokes of a PKDrawing in PencilKit. Is there any way I can get access to that? Currently, my only idea is to try and decode the opaque data representation we get from PKDrawing.

As Ben has said, there doesn't seem to be any way to access stroke-level data in PencilKit at this time. This seems like a pretty rudimentary feature so hopefully Apple will add it next WWDC. Fingers crossed.
LetsBuildThatApp's YouTube tutorial is a good starting point if you just want basic drawing capabilities and are not too concerned about drawing quality or latency. I ran into issues when I tried to add the ability to vary the stroke width with pen pressure. I was never able to get it to transition smoothly between stroke segments of different widths - it always 'jumped' jarringly from one width to the next. Maybe there's a way to fix that, but I wasn't able to.
I suspect that currently, the only way to draw low-latency, high-quality images with Apple Pencil is to write a drawing engine from scratch in Metal or OpenGL. It's strange that it has taken Apple so long since the release of Apple Pencil to get a comprehensive drawing framework out. Fingers crossed that changes at WWDC 2020.
Edit: Apple announced substantial updates to PencilKit at WWDC20, including access to stroke data and functions to programmatically draw strokes.
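For anyone landing here later, a minimal sketch of that post-WWDC20 API (iOS 14+), building a stroke programmatically and appending it to a drawing. The point values are arbitrary and purely for illustration.

    import PencilKit

    // Sketch (iOS 14+): build control points along a horizontal line,
    // wrap them in a stroke, and append the stroke to a drawing.
    let points = (0..<10).map { i -> PKStrokePoint in
        PKStrokePoint(location: CGPoint(x: CGFloat(i) * 10, y: 50),
                      timeOffset: TimeInterval(i) * 0.01,
                      size: CGSize(width: 4, height: 4),
                      opacity: 1.0,
                      force: 1.0,
                      azimuth: 0,
                      altitude: .pi / 2)
    }

    let path = PKStrokePath(controlPoints: points, creationDate: Date())
    let stroke = PKStroke(ink: PKInk(.pen, color: .black), path: path)

    var drawing = PKDrawing()
    drawing.strokes.append(stroke)   // strokes are readable and writable on iOS 14+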

I was looking for the same thing, and from this comment https://stackoverflow.com/a/57565661/8891611 and my own searching of the documentation, it seems what you're asking for doesn't exist. If you do discover a way to decode the opaque data, please update this thread with how you did it.
If you aren't necessarily set on PencilKit, you could manually code your drawing functions, which is easier than it sounds. In this tutorial: https://youtu.be/E2NTCmEsdSE?t=504 I have timestamped the point where he shows that he has access to all the points on the lines he is creating.
And another potential solution could be to use both, by having a mapping between your PencilKit drawing and the points found from the technique in this tutorial.
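To give an idea of what that manual approach looks like, here is a bare-bones sketch (the class and property names are made up, not taken from the video): every point of every line is stored, so it is all directly accessible.

    import UIKit

    // Sketch of hand-rolled drawing: each stroke is just an array of CGPoints.
    class DrawingView: UIView {
        private var lines: [[CGPoint]] = []   // one array of points per stroke

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            lines.append([])                  // start a new line
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let point = touches.first?.location(in: self),
                  !lines.isEmpty else { return }
            lines[lines.count - 1].append(point)
            setNeedsDisplay()
        }

        override func draw(_ rect: CGRect) {
            guard let context = UIGraphicsGetCurrentContext() else { return }
            context.setStrokeColor(UIColor.black.cgColor)
            context.setLineWidth(3)
            context.setLineCap(.round)
            for line in lines {
                for (index, point) in line.enumerated() {
                    if index == 0 {
                        context.move(to: point)
                    } else {
                        context.addLine(to: point)
                    }
                }
                context.strokePath()
            }
        }
    }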

For the strokes, you can access the property .renderBounds, which gives you the bounding rectangle that each stroke is rendered into.
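For example, on iOS 14 and later, where PKDrawing exposes its strokes (logBounds is just an illustrative helper name):

    import PencilKit

    // Sketch (iOS 14+): inspect the bounds of a drawing and of each stroke.
    func logBounds(of drawing: PKDrawing) {
        print("Drawing bounds: \(drawing.bounds)")
        for (index, stroke) in drawing.strokes.enumerated() {
            // renderBounds is the rectangle the stroke is rendered into,
            // including the ink width, not just the bare control points.
            print("Stroke \(index): \(stroke.renderBounds)")
        }
    }

    // Usage, e.g. with a PKCanvasView named canvasView:
    // logBounds(of: canvasView.drawing)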

Related

How to use Core Graphics Layer Drawing

Can anyone tell me how to color with a pattern (butterfly) like in the following link?
In addition to reading the entirety of the CGLayer reference you posted in your original question I strongly advise you to watch the 'Optimizing 2D Graphics and Animation Performance' session from the 2012 WWDC.
As you progress I think you'll find that it isn't particularly difficult to draw content to the screen using the likes of Quartz 2D and Core Animation but the real challenge will be doing so in a way that achieves an acceptable level of performance.
In the session they optimise a drawing app similar to the one you want to create. The fundamental principles they used to optimise their drawing app were:
Only ever update as little of the screen as you need to
Every so often, create a flat composite image of what the user has drawn and reuse this image in subsequent drawing operations. This prevents having to individually redraw everything the user has ever drawn to the canvas, making the application much more performant.
In addition to this they cover a collection of tricks to squeeze out every drop of performance.
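To make that second point concrete, here is a minimal sketch of the flatten-and-reuse idea, assuming UIKit and UIGraphicsImageRenderer (the class and method names below are illustrative, not from the session):

    import UIKit

    // Sketch: periodically flatten everything drawn so far into one bitmap,
    // so later frames only draw that bitmap plus the newest strokes.
    final class CanvasFlattener {
        private(set) var composite: UIImage?

        // Merge the existing composite with the paths drawn since the last flatten.
        func flatten(newPaths: [UIBezierPath], canvasSize: CGSize) {
            let renderer = UIGraphicsImageRenderer(size: canvasSize)
            composite = renderer.image { _ in
                composite?.draw(at: .zero)      // reuse all earlier drawing work
                UIColor.black.setStroke()
                for path in newPaths {          // only the recently added strokes
                    path.lineWidth = 3
                    path.stroke()
                }
            }
        }
    }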
Your question was very broad, so beyond the rough sketch above I suggest you watch the video, continue your research, and attempt to begin implementing the application yourself. Once you run into more specific problems you can return here for answers in the event you can't find them elsewhere.
Good luck!

Dragging/Resizing a UIImage on the Device

I'd like to allow a user to add a shape (which would just be a UIImage) onto some sort of canvas, then move and resize it on the screen but I'm not sure how to go about this. Ideally I'd like the basics of a drawing app which can use images from a user's device. Each shape would have an associated position, size and z-index.
The only thing I'm unsure of is how I'd create a bounding box (the one with four blue dots to allow resizing/moving). I have experience with UIKit, and would prefer to keep the majority of the app in this for the time being, but I get the feeling this type of thing might be better suited to Cocos2D or a similar framework.
If anyone has any pointers/open source code I can dig through it would be hugely appreciated.
I think you should look into CALayer, or even CAShapeLayer. I'm just starting to play with them, but I'm pretty sure you can easily get the functionality you want with either. Draw the border in the layer's drawLayer:inContext:. Check out the path-drawing section of the Quartz 2D Programming Guide for the functions you need.
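A rough sketch of the bounding-box part, using CAShapeLayer paths for the border and corner handles (names and layout are illustrative; hit-testing and the actual resize logic are left out):

    import UIKit

    // Sketch: a view that draws a selection border plus four corner handles,
    // like the blue-dot bounding box described in the question.
    final class SelectionHandleView: UIView {
        private let borderLayer = CAShapeLayer()
        private let handlesLayer = CAShapeLayer()

        override init(frame: CGRect) {
            super.init(frame: frame)
            backgroundColor = .clear
            borderLayer.strokeColor = UIColor.systemBlue.cgColor
            borderLayer.fillColor = UIColor.clear.cgColor
            borderLayer.lineWidth = 1
            handlesLayer.fillColor = UIColor.systemBlue.cgColor
            layer.addSublayer(borderLayer)
            layer.addSublayer(handlesLayer)
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        override func layoutSubviews() {
            super.layoutSubviews()
            borderLayer.path = UIBezierPath(rect: bounds).cgPath

            // One small circle at each corner of the bounds.
            let handles = UIBezierPath()
            let corners = [CGPoint(x: 0, y: 0),
                           CGPoint(x: bounds.maxX, y: 0),
                           CGPoint(x: 0, y: bounds.maxY),
                           CGPoint(x: bounds.maxX, y: bounds.maxY)]
            for corner in corners {
                handles.append(UIBezierPath(arcCenter: corner, radius: 5,
                                            startAngle: 0, endAngle: .pi * 2, clockwise: true))
            }
            handlesLayer.path = handles.cgPath
        }
    }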

How to draw an image according to the pixels of another image?

Hi all, what I want is to map images. Suppose I have two images of people, one of a fat person and another of a thin person. Now I want to match their faces and eyes, and increase or decrease the face size and eye size of one image according to the other. As you can see, in Adobe Photoshop you can make the face fat or squeeze it. I want to implement these kinds of image manipulation operations, but I don't know where to start.
Please guide and help me. Can I perform all this with Core Graphics, and if so, how?
Any reference, tutorial link, or sample code is appreciated.
You are probably going to have to deal with some sort of edge detection and face recognition algorithms, at the very least, if this is to be accomplished automatically. Otherwise, if the user is going to be resizing one image to match the other, this only requires simple resizing operations, driven perhaps by pinch and zoom gestures.
UPDATE:
For manual resizing:
Download the source code for the great book Cool iPhone Projects. One of the projects is called 'Touching'. This project contains code that accomplishes what you need: pinch and zoom functionality.
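If you'd rather not dig through the book's source, the basic move-and-resize behaviour can also be sketched with gesture recognizers (a rough illustration, not the book's code; the asset name is made up):

    import UIKit

    // Sketch: attach pan and pinch recognizers so the user can move and resize an image view.
    class CanvasViewController: UIViewController {
        let shapeView = UIImageView(image: UIImage(named: "shape"))   // hypothetical asset

        override func viewDidLoad() {
            super.viewDidLoad()
            shapeView.isUserInteractionEnabled = true
            view.addSubview(shapeView)
            shapeView.addGestureRecognizer(
                UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
            shapeView.addGestureRecognizer(
                UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
        }

        @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
            guard let moved = gesture.view else { return }
            let translation = gesture.translation(in: view)
            moved.center = CGPoint(x: moved.center.x + translation.x,
                                   y: moved.center.y + translation.y)
            gesture.setTranslation(.zero, in: view)   // reset so movement is incremental
        }

        @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
            guard let resized = gesture.view else { return }
            resized.transform = resized.transform.scaledBy(x: gesture.scale, y: gesture.scale)
            gesture.scale = 1                         // reset so scaling is incremental
        }
    }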

Overlay "Structured Glas" Effect on iPhone Camera Feed - General Directions

I'm currently trying to write an app that would be able to show the effects of glass, as seen through the iPhone camera.
I'm not talking about simple, uniform glass but glass like this:
Now I already broke this into two problems:
1) Apply some Image Filter to the 2D-frames presented by the iPhone Camera. This has been done and seems possible, e.g. in the app: faceman
2) I need to get the individual lighting properties of a sheet of glass that my client supplies me with. Basically, there must be a way to read information about how the glass distorts and skews the image. I think it might be possible to take a high-res picture of the glass plate laid on a checkerboard image and somehow analyze this.
Now, I'm mostly searching for literature and web links on how you think I could start on 2. It doesn't need to be exact; in the end I just need something that looks approximately like the sheet of glass I want to show. And I don't even know where to search: physics, image filtering, or computational photography books.
EDIT: I'm currently thinking, that one easy solution could be bump-mapping the texture on top of the camera-feed, I asked another question on this here.
You need to start with OpenGL. You want to effectively have a texture - similar to the one you've got above - displace the texture below it (the live camera view) to give the impression of depth and distortion. This is a 'non-trivial' problem, in that whilst it's a fairly standard problem in its field if you're coming from a background with no graphics or OpenGL experience you can expect a very steep learning curve.
So in short, the only way you can achieve this realistically on iOS is to use OpenGL, and that should be your starting point. Apple have a few guides on the matter, but you'll be better off looking elsewhere. There are some useful books such as the OpenGL ES 2.0 Programming Guide that can get you off on the right track, but where you start would depend on how comfortable you are with 3D graphics and C.
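As an aside, and not the OpenGL route recommended above: Core Image ships a glass-distortion filter that displaces one image using a texture image, which can be handy for prototyping the look before committing to a shader. A sketch, with purely illustrative parameter values:

    import CoreImage
    import UIKit

    // Sketch: approximate the "structured glass" look with CIGlassDistortion,
    // which displaces the camera frame using a texture (e.g. a photo of the glass sheet).
    func distort(_ cameraFrame: CIImage, with glassTexture: CIImage) -> UIImage? {
        guard let filter = CIFilter(name: "CIGlassDistortion") else { return nil }
        filter.setValue(cameraFrame, forKey: kCIInputImageKey)
        filter.setValue(glassTexture, forKey: "inputTexture")
        filter.setValue(CIVector(x: cameraFrame.extent.midX, y: cameraFrame.extent.midY),
                        forKey: kCIInputCenterKey)
        filter.setValue(150, forKey: kCIInputScaleKey)

        guard let output = filter.outputImage else { return nil }
        let context = CIContext()
        guard let cgImage = context.createCGImage(output, from: cameraFrame.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }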
Just wanted to add that I solved this using the refraction example in the Khronos OpenGL ES SDK.
I wrote a blog entry with pictures about it:
simulating windows with refraction

iPhone Accessibility using "Red on Black" for images

Is there a way to convert an image on the fly to "Red on Black" for accessibility? I have pictures that I want to stream to the iPhone, and when viewing them at night, red on black is easier on the eyes.
Answer:
You're much better off making your own night friendly images, and swapping those out along with text color, etc.
I'm not sure how you have your current images implemented, but before they load you could check for BOOL isNightTime, and if it returns TRUE, then load the nightTime images instead. I would suggest taking your current image set, and duplicating it with the prefix nt_.
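For illustration, a minimal sketch of that swap, following the nt_ naming convention suggested above (the function and asset names are made up):

    import UIKit

    // Sketch: load the night-time variant of an image when isNightTime is set.
    // Assumes every daytime asset "foo" has a counterpart named "nt_foo".
    func image(named name: String, isNightTime: Bool) -> UIImage? {
        let effectiveName = isNightTime ? "nt_\(name)" : name
        return UIImage(named: effectiveName)
    }

    // Usage:
    // imageView.image = image(named: "harbor_map", isNightTime: true)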
Bonus:
You can take this a step further. Grab the GPS location, then use the location to get weather information from Wunderground. Part of their report includes the times of Sunrise and Sunset. You could then use those values and check them against the current time (be careful that all the time zones are playing nice), and from the result of that, enable the NightTime image set.
If you do implement this, make sure that the user can still enable or disable it to his/her preference.
I had originally said NOAA, but I can't find where that information is on their website. I know it's there somewhere. Why are .gov sites so ugly? Anyways, I changed it to mention Wunderground instead, just scroll down to the Astronomy section. They have a pretty well done iPhone website as well, worth checking out.
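Once you have the two times as Dates in the user's time zone, the actual check is trivial (a sketch; fetching sunrise/sunset from Wunderground and normalising the time zones is the part to be careful with):

    import Foundation

    // Sketch: decide whether to show the night image set, given today's
    // sunrise and sunset already converted to the user's time zone.
    func isNightTime(now: Date = Date(), sunrise: Date, sunset: Date) -> Bool {
        return now < sunrise || now > sunset
    }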
Bonus 2:
I'm unsure what your maps/images look like, but instead of having to edit them all to red on black, you could instead edit them to white on black, and put a layer on top of that which would allow the user to pick any color/intensity. Instead of using a layer, you could likely also programmatically implement it, but I think a colorizing layer would be much faster/easier.
An alternate method of doing this is to instead make your map transparent/black, and put a layer underneath that which could change colors to the user's liking. You could implement this on a finer scale (place rects of color behind objects/text/whatever else) to allow for full color customization.
Both use transparency to some extent, but I believe that the alternate method requires less overall work.
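For what it's worth, a sketch of the colorizing-layer idea: multiply-blending a solid color over a white-on-black image turns white into that color and leaves black untouched (the function and variable names are illustrative):

    import UIKit

    // Sketch: tint a white-on-black image to any color (e.g. red) by
    // multiply-blending a solid fill over it. white * red = red, black * red = black.
    func colorize(_ image: UIImage, with color: UIColor) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: image.size)
        return renderer.image { context in
            image.draw(at: .zero)
            color.setFill()
            context.fill(CGRect(origin: .zero, size: image.size), blendMode: .multiply)
        }
    }

    // Usage:
    // let nightMap = colorize(whiteOnBlackMap, with: .red)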
Bonus 3:
If you're already going through the effort to grab the GPS coordinates, it wouldn't be too much additional work to have it also check with another server, which would point out other users using the application locally on the map. Make sure this is disabled by default, as lots of users are uncomfortable with broadcasting their location to the world.
Science:
It's also worth mentioning that green is a horrible color to use if you're looking for night friendliness. Red is the color you want to be using. Red light doesn't cause the eye to release the enzymes which cause you to lose your night vision (what you get once your eyes adjust). This is the reason the insides of military vehicles usually have red interior lights, and also why every movie you've ever seen with tactical anything uses lots of red lighting.
Red light is also used to preserve night vision in low-light or night-time situations, as the rod cells in the human eye aren't sensitive to red.
-Wikipedia
I learned this when I went up to Kitt Peak National Observatory this Thanksgiving on a family trip to Arizona. They hand out little keychains with red lights on them, so you can see where you're going in the dark. It was probably one of the coolest things I've ever participated in. I learned so much. If you're in the Tucson area, or have another observatory local to you, I strongly suggest checking them out.
The keychain they gave me broke and it fell off somewhere, it's nowhere to be found :( It was my only souvenir. If anybody from KPNO happens to see this and wants to mail me another one, my email address is in my profile.
Also, here's a link that goes into far more detail than needed, but I know you're all going to google it anyways.
I did find another solution:
http://sourceforge.net/projects/photoshopframew/
Source code is available, and I can run the tiles through Photoshop as part of a chain of events for night viewing.