Recognizing patterns when drawing over the iPhone screen

I'm trying to write a game where the user can issue commands by drawing certain patterns with his fingers: for example, a circle, an 'S', a spiral, etc.
I'm already familiar with touch events and I'm able to read the coordinates. My problem is in finding algorithms and information about recognizing the patterns with some tolerance for error. For example, if I'm supposed to detect a circle, I should detect it even if the user didn't draw a perfect one.
Any resources on the matter? Thanks!

This site demos a very simple, easy-to-implement gesture recognizer, written in JavaScript. I've implemented it myself in another language and found it quite easy to work with. They've got code and a paper describing the algorithm; everything you could need.
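To give a feel for how that family of template-matching recognizers works, here is my own minimal Swift sketch of the general resample-normalize-compare idea (it is not the linked authors' code; the function names and the 64-point count are my choices, and the rotation-invariance step the paper also handles is omitted for brevity):

    import CoreGraphics

    private func dist(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
        let dx = a.x - b.x, dy = a.y - b.y
        return (dx * dx + dy * dy).squareRoot()
    }

    // Resample a stroke into n evenly spaced points, so fast and slow
    // strokes produce comparable point sequences.
    func resample(_ points: [CGPoint], to n: Int) -> [CGPoint] {
        guard points.count > 1 else { return points }
        let pathLength = zip(points, points.dropFirst())
            .reduce(CGFloat(0)) { $0 + dist($1.0, $1.1) }
        guard pathLength > 0 else { return points }
        let interval = pathLength / CGFloat(n - 1)
        var pts = points
        var resampled = [pts[0]]
        var accumulated: CGFloat = 0
        var i = 1
        while i < pts.count {
            let d = dist(pts[i - 1], pts[i])
            if accumulated + d >= interval {
                // Interpolate a new point exactly one interval along the path.
                let t = (interval - accumulated) / d
                let q = CGPoint(x: pts[i - 1].x + t * (pts[i].x - pts[i - 1].x),
                                y: pts[i - 1].y + t * (pts[i].y - pts[i - 1].y))
                resampled.append(q)
                pts.insert(q, at: i)   // continue measuring from the new point
                accumulated = 0
            } else {
                accumulated += d
            }
            i += 1
        }
        // Floating-point rounding can leave us one point short.
        if resampled.count < n { resampled.append(pts[pts.count - 1]) }
        return resampled
    }

    // Remove position and size so a big sloppy circle still matches a small
    // template: center on the centroid, scale the bounding box to a unit square.
    func normalize(_ points: [CGPoint]) -> [CGPoint] {
        let xs = points.map(\.x), ys = points.map(\.y)
        let w = max(xs.max()! - xs.min()!, 0.001)
        let h = max(ys.max()! - ys.min()!, 0.001)
        let cx = xs.reduce(0, +) / CGFloat(points.count)
        let cy = ys.reduce(0, +) / CGFloat(points.count)
        return points.map { CGPoint(x: ($0.x - cx) / w, y: ($0.y - cy) / h) }
    }

    // Average point-to-point distance between two equal-length sequences;
    // smaller means more similar.
    func score(_ a: [CGPoint], _ b: [CGPoint]) -> CGFloat {
        zip(a, b).reduce(CGFloat(0)) { $0 + dist($1.0, $1.1) } / CGFloat(a.count)
    }

    // Pick the template whose normalized shape is closest to the stroke.
    func recognize(_ stroke: [CGPoint], templates: [String: [CGPoint]]) -> String? {
        let candidate = normalize(resample(stroke, to: 64))
        let scored = templates.mapValues { score(candidate, normalize(resample($0, to: 64))) }
        return scored.min { $0.value < $1.value }?.key
    }

The tolerance for imperfect input falls out of the averaging: a wobbly circle still scores closer to the circle template than to the 'S' template.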

The patterns you're referring to are known as "gestures".
This code seems to be what you're looking for.

Related

Get bounds of PKDrawing / individual strokes in PencilKit

I need access to the coordinates of individual strokes of a PKDrawing in PencilKit. Is there any way I can get access to that? Currently, my only idea is to try and decode the opaque data representation we get from PKDrawing.
As Ben has said, there doesn't seem to be any way to access stroke-level data in PencilKit at this time. This seems like a pretty rudimentary feature so hopefully Apple will add it next WWDC. Fingers crossed.
LetsBuildThatApp's YouTube tutorial is a good starting point if you just want basic drawing capabilities and are not too concerned about drawing quality or latency. I ran into issues when I tried to add the ability to vary the stroke width with pen pressure. I was never able to get it to transition smoothly between stroke segments of different widths - it always 'jumped' jarringly from one width to the next. Maybe there's a way to fix that, but I wasn't able to.
I suspect that currently, the only way to draw low-latency, high-quality images with the Apple Pencil is to write a drawing engine from scratch in Metal or OpenGL. It's strange that it has taken Apple so long since the release of the Apple Pencil to get a comprehensive drawing framework out. Fingers crossed that changes at WWDC 2020.
Edit: Apple announced substantial updates to PencilKit at WWDC20, including access to stroke data and functions to programmatically draw strokes.
I was looking for the same thing, and from this comment https://stackoverflow.com/a/57565661/8891611 and my own searching of the documentation, it seems what you're asking for doesn't exist. If you do discover a way to decode the opaque data, please update this thread with how you did it.
If you aren't necessarily set on PencilKit, you could code your drawing functions manually, which is easier than it sounds. In this tutorial, https://youtu.be/E2NTCmEsdSE?t=504, I have timestamped where he shows that he has access to all the points on the lines he is creating.
Another potential solution could be to use both, by keeping a mapping between your PencilKit drawing and the points found using the technique in that tutorial.
For the strokes, you can access the property .renderBounds, which gives you the dimensions of the rectangle that the stroke fits into.
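To make the post-WWDC20 situation concrete, here is a minimal sketch against the iOS 14 PencilKit API (the inspect function name is mine; strokes, renderBounds, and path are the real properties):

    import PencilKit

    // Walk a PKDrawing's strokes: renderBounds per stroke, then the
    // individual control points that make up each stroke's path (iOS 14+).
    func inspect(_ drawing: PKDrawing) {
        for (index, stroke) in drawing.strokes.enumerated() {
            print("stroke \(index): renderBounds = \(stroke.renderBounds)")
            for point in stroke.path {
                print("  location \(point.location), force \(point.force)")
            }
        }
    }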

Draw a Line Connecting two Objects/Vector3s in Unity3D

I am making a game in which I have points (stations) already placed; on hitting one of those points, a line should start drawing until it reaches the next point.
I also want to avoid lines overlapping, and a line should continue to extend if I touch its end.
I know two approaches to drawing lines: I can use a LineRenderer, or I can use the GL class.
I want to know which is more suitable for my requirements; if you have another idea, please share it.
I have seen the Vectrosity demo, but it's not free, so I can't use it.
Thank you for your help and support so far, and please help me clear up my confusion.
Use the LineRenderer for a reasonably limited number of lines. It's much easier to customize than GL.Lines, and it will let you work in Unity's coordinate system without dealing with transform matrices.
And a small note: up until Unity 4.x, only Pro had GL.Lines, and it didn't work on iOS, so most people were using other approaches. I've never seen anyone use GL.Lines outside of a demo specifically comparing the two approaches. GL.Lines performs better, but is limited in customization options. Another approach I've seen is using the Graphics class and procedural meshes. It's also faster than the LineRenderer, but takes a little work to implement. This article compares all three approaches and has some code for using the Graphics class.
In the Scene view you can use GL.Lines, Debug.DrawLine, or Debug.DrawRay, but you can't see those lines in your built game. So use the LineRenderer.
This video may also be useful: https://www.youtube.com/watch?v=Bqcu94VuVOI

Copy face from Image

I'm a noob to this forum, but wanted to give it a try.
I'm currently learning Objective-C and Cocoa; trying to build my first iPhone app.
One thing I'm working on is allowing the user to cut his/her face from an image they have taken and paste it into another image. (The idea is cut from one image and paste into another image with a spot for a face to go.)
How can this be done? I am thinking I would allow the user to just touch and drag over their face, in the shape of a rectangle, and then allow them to copy.
Thanks for the help.
OK, notwithstanding the slightly arrogant style of your question, here are some guidelines on how to start: generic Obj-C/iOS development (start from hello world), the UIImage class, the camera API, image processing algorithms, and face detection algorithms. Go step by step and don't try to solve every problem at once. First write an application that simply loads an arbitrary photo and shows it to the user. Then modify it so that you can crop a specified rectangular area from the image and save it to a new file. Then write an app that turns on the camera so you can take a picture and save it to disk. Then combine what you wrote so that you save only a cropped area of the captured image.
By the time you get to this point, you will know much more about software development and image handling. AFTER THIS you can start looking into image processing algorithms. Again, start with something simple, like a trivial blur filter implemented by yourself. If you already know a bit of image processing, search for face detection algorithms on the net. You may even find a ready-made framework that includes these features, or at least you will understand the concepts. You can also come back to Stack Overflow and ask for suggestions about a good face detection algorithm, though we prefer that you have already chosen one and have a concrete issue with it.
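For the cropping step specifically, the core is a single CoreGraphics call. A minimal sketch, shown in Swift for brevity (the crop helper and its names are made up here; the rect is assumed to come from the user's touch-and-drag selection):

    import UIKit

    // Hypothetical helper: crop a user-selected rect (in points, in the
    // image's coordinate space) out of a UIImage. Orientation corner
    // cases are glossed over for brevity.
    func crop(_ image: UIImage, to rect: CGRect) -> UIImage? {
        guard let cgImage = image.cgImage else { return nil }
        // CGImage.cropping(to:) works in pixels; UIImage geometry is in points.
        let scale = image.scale
        let pixelRect = CGRect(x: rect.origin.x * scale,
                               y: rect.origin.y * scale,
                               width: rect.size.width * scale,
                               height: rect.size.height * scale)
        guard let cropped = cgImage.cropping(to: pixelRect) else { return nil }
        return UIImage(cgImage: cropped, scale: scale, orientation: image.imageOrientation)
    }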

Compare one image in MATLAB with a database of images and show the most similar

I have a database of images of one person who is using his hands to show various words and phrases in sign language. The background is white, and the only things changing are the shape of the person's hands and their locations.
In my MATLAB GUI, I want the user to be able to choose another image of the same person, taken at another time while signing and wearing the same clothes. The program then has to compare this image against the images in the database and show the most similar one.
Obviously I can't do a pixel-by-pixel comparison, since the images were taken with a handheld mobile camera and slight movement has been inevitable, so I should try to locate the hands in the images and compare their shapes. I have no idea how to go about this, and I'm new to the Image Processing Toolbox in MATLAB.
Your help is much appreciated.
I am doing a PhD in computer vision, and I can tell you that this is an unsolved problem (even in your simple setup, with a white background).
If you are interested, you might read some work on it at MIT:
http://people.csail.mit.edu/rywang/handtracking/
or at Oxford:
http://www.robots.ox.ac.uk/~vgg/research/sign_language/index.html
http://www.robots.ox.ac.uk/~vgg/research/hands/index.html
I disagree with you. Such a project can achieve results quickly; it only becomes a hard problem once it has to deal with "real life".
With a single camera and a completely known background, OpenCV provides a simple way to extract a hand's shape from an image (in about 20 lines of code). You will find plenty of sources on the web (have a look at calcBackProject).
After that, what you will have to do is play with the shape and search for characteristic points.
Begin with some simple signs (for example, a circle and a 'V'). How would you recognize one from the other? See the sketch below for one idea.
There are thousands of papers on sign language; read the older ones first to get simple ideas flowing :)
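As one illustration of that circle-vs-'V' question (a made-up heuristic of mine, written in Swift for consistency with the rest of this page; the same few lines translate directly to MATLAB), the total turning angle along the traced contour separates the two shapes:

    import Foundation
    import CoreGraphics

    // Sum the signed direction changes along a polyline. A traced circle
    // accumulates roughly ±2π (±360°); a 'V' contributes only one sharp
    // corner's worth of turn, well below that. Illustrative only.
    func totalTurning(_ points: [CGPoint]) -> Double {
        guard points.count > 2 else { return 0 }
        var total = 0.0
        for i in 1..<(points.count - 1) {
            let a = atan2(Double(points[i].y - points[i - 1].y),
                          Double(points[i].x - points[i - 1].x))
            let b = atan2(Double(points[i + 1].y - points[i].y),
                          Double(points[i + 1].x - points[i].x))
            var turn = b - a
            // Wrap into (-π, π] so each step measures the actual turn.
            if turn > .pi { turn -= 2 * .pi }
            if turn <= -.pi { turn += 2 * .pi }
            total += turn
        }
        return total
    }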

Overlay "Structured Glas" Effect on iPhone Camera Feed - General Directions

I'm currently trying to write an app that can show the effect of glass, as seen through the iPhone camera.
I'm not talking about simple, uniform glass, but glass like this:
Now, I've already broken this into two problems:
1) Apply an image filter to the 2D frames delivered by the iPhone camera. This has been done and seems possible, e.g. in the app faceman.
2) I need to capture the individual lighting properties of a sheet of glass that my client supplies me with. Basically, there must be a way to record how the glass distorts and skews the image. I think it might be possible to take a high-resolution picture of the glass plate laid on a checkerboard image and somehow analyze that.
Now, I'm mostly searching for literature and web links on how you think I could approach point 2. It doesn't need to be exact; in the end I just need something that looks approximately like the sheet of glass I want to show. And I don't even know where to search: physics, image filtering, or computational photography books.
EDIT: I'm currently thinking that one easy solution could be bump-mapping the texture on top of the camera feed; I asked another question on this here.
You need to start with OpenGL. You want to effectively have a texture - similar to the one you've got above - displace the texture below it (the live camera view) to give the impression of depth and distortion. This is a 'non-trivial' problem: whilst it's a fairly standard problem in its field, if you're coming from a background with no graphics or OpenGL experience you can expect a very steep learning curve.
So in short, the only way you can achieve this realistically on iOS is to use OpenGL, and that should be your starting point. Apple have a few guides on the matter, but you'll be better off looking elsewhere. There are some useful books such as the OpenGL ES 2.0 Programming Guide that can get you off on the right track, but where you start would depend on how comfortable you are with 3D graphics and C.
Just wanted to add that I solved this old question using the refraction example in the Khronos OpenGL ES SDK.
I wrote a blog entry with pictures about it:
simulating windows with refraction
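If a full OpenGL/Metal engine is overkill for a first prototype, one lighter-weight approximation (my suggestion, not from the answers above) is Core Image's built-in glass-distortion filter, which displaces an image using a texture, much like the bump-mapping idea in the question's edit. A minimal per-frame sketch, assuming the capture pipeline supplies the camera frame and a photo of the glass sheet:

    import CoreImage
    import CoreImage.CIFilterBuiltins

    // Approximate the structured-glass look on a single camera frame.
    // `frame` is the camera image; `texture` is a photo of the glass
    // sheet (both assumed to be provided by the caller).
    func applyGlass(to frame: CIImage, texture: CIImage) -> CIImage? {
        let filter = CIFilter.glassDistortion()
        filter.inputImage = frame
        filter.textureImage = texture   // drives the displacement, like a bump map
        filter.center = CGPoint(x: frame.extent.midX, y: frame.extent.midY)
        filter.scale = 200              // distortion strength; tune by eye
        return filter.outputImage?.cropped(to: frame.extent)
    }

This won't model the real refraction of the client's glass the way the checkerboard-measurement approach would, but it can get "approximately like the sheet of glass" on screen quickly.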