I have to make an app that takes an image of a face, allows the user to indicate the location of the main features (eyes, mouth), and then warps the image to give a particular expression (smile/frown).
My thinking is that I could use a few component CCGrids for the mouth and eyes separately, and then combine these into a live CCGrid to be applied with a CCActionGrid.
First of all is this a sensible way to approach this?
Secondly, I can't find any examples of how to work with CCGrid objects or CCActionGrids. Are there any examples out there that I'm missing, or could someone give me a crash course?
Thanks,
Martin
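For illustration of what a grid warp actually does: a CCGrid action essentially displaces a lattice of control vertices and lets the GPU resample the sprite's texture between them. Below is a framework-agnostic sketch of that idea only (Swift used for concreteness; cocos2d's actual CCGrid/CCActionGrid API is Objective-C and its method names vary by version, so treat every name here as illustrative, not as the API):

```swift
import CoreGraphics

// A sketch of the grid-warp concept behind CCGrid-style actions: a lattice
// of control vertices over the image, displaced over time by an action.
struct WarpGrid {
    let columns: Int
    let rows: Int
    var vertices: [CGPoint]   // (columns + 1) * (rows + 1) control points, row-major

    // Hypothetical "smile" warp: vertices near the left/right edges rise the
    // most, following a parabola; t (0...1) lets an action animate it.
    mutating func applySmile(strength: CGFloat, t: CGFloat) {
        for row in 0...rows {
            for col in 0...columns {
                let dx = CGFloat(col) / CGFloat(columns) - 0.5   // -0.5 ... 0.5
                let lift = strength * t * dx * dx * 4            // corners move most
                vertices[row * (columns + 1) + col].y -= lift    // y-down coordinates
            }
        }
    }
}
```

A per-feature grid for the mouth and each eye could be warped like this and then composited into one full-face grid, which matches the approach described in the question.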
I need access to the coordinates of individual strokes of a PKDrawing in PencilKit. Is there any way I can get access to that? Currently, my only idea is to try and decode the opaque data representation we get from PKDrawing.
As Ben has said, there doesn't seem to be any way to access stroke-level data in PencilKit at this time. This seems like a pretty fundamental feature, so hopefully Apple will add it at the next WWDC. Fingers crossed.
LetsBuildThatApp's YouTube tutorial is a good starting point if you just want basic drawing capabilities and are not too concerned about drawing quality or latency. I ran into issues when I tried to add the ability to vary the stroke width with pen pressure. I was never able to get it to transition smoothly between stroke segments of different widths - it always 'jumped' jarringly from one width to the next. Maybe there's a way to fix that, but I wasn't able to.
I suspect that currently, the only way to draw low-latency, high-quality images with the Apple Pencil is to write a drawing engine from scratch in Metal or OpenGL. It's strange that it has taken Apple so long since the release of the Apple Pencil to get a comprehensive drawing framework out. Fingers crossed that changes at WWDC 2020.
Edit: Apple announced substantial updates to PencilKit at WWDC20, including access to stroke data and functions to programmatically draw strokes.
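Since that update, stroke access looks roughly like the following (a minimal sketch against the iOS 14 PencilKit API; verify the exact names against the current docs):

```swift
import PencilKit

// Reading stroke-level data from an existing drawing (iOS 14+).
func logStrokePoints(of drawing: PKDrawing) {
    for stroke in drawing.strokes {
        // interpolatedPoints(by:) resamples the path; here, one point every 5 pt.
        for point in stroke.path.interpolatedPoints(by: .distance(5)) {
            print(point.location, point.force)
        }
    }
}

// Building a stroke programmatically: a short horizontal pen line.
func makeStroke() -> PKStroke {
    let points = (0..<10).map { i in
        PKStrokePoint(location: CGPoint(x: CGFloat(i) * 10, y: 100),
                      timeOffset: TimeInterval(i) * 0.01,
                      size: CGSize(width: 4, height: 4),
                      opacity: 1,
                      force: 1,
                      azimuth: 0,
                      altitude: .pi / 2)
    }
    let path = PKStrokePath(controlPoints: points, creationDate: Date())
    return PKStroke(ink: PKInk(.pen, color: .black), path: path)
}
```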
I was looking for the same thing, and from this comment https://stackoverflow.com/a/57565661/8891611 and my own searching of the documentation, it seems what you're asking for doesn't exist. If you do discover a way to decode the opaque data, please update this thread with how.
If you aren't necessarily set on PencilKit, you could manually code your drawing functions, which is easier than it sounds. In this tutorial: https://youtu.be/E2NTCmEsdSE?t=504 I have timestamped where he shows that he has access to all the points on the lines he is creating.
And another potential solution could be to use both, by having a mapping between your PencilKit drawing and the points captured with the technique in this tutorial.
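For reference, a bare-bones version of that manual approach might look like this (plain UIKit, no PencilKit; all names are illustrative). Every touch point is stored, so the stroke coordinates stay fully accessible:

```swift
import UIKit

// A minimal canvas: each line is just an array of touch points.
class CanvasView: UIView {
    private var lines: [[CGPoint]] = []

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        lines.append([])   // start a new line on touch-down
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        lines[lines.count - 1].append(point)
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.setStrokeColor(UIColor.black.cgColor)
        context.setLineWidth(3)
        context.setLineCap(.round)
        for line in lines {
            guard let first = line.first else { continue }
            context.move(to: first)
            line.dropFirst().forEach { context.addLine(to: $0) }
        }
        context.strokePath()
    }
}
```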
For the strokes, you can access the property .renderBounds, which gives you the rectangle that the stroke fits into.
Does anyone know how to make a building system like the ones in CoC / Boom Beach? I know how to make a Fortnite-style building system, but that only involves 1x1 structures, while I need 3x2, 5x3, and many more sizes. I'm going to build it in UE4 with Blueprints. I've been searching for a long time and couldn't find an answer. Hope you'll help me!
Thanks.
The question is rather vague and doesn't give anyone much to go on, but I'll try to take a stab at it, conceptually at least.
I'd assume you have a base building Blueprint or struct; in there, I would create a Vector2D variable or something similar to give each building a length and width.
Then, when you are trying to spawn the building, check the tiles within that length and width of the screen center/cursor for any existing buildings. When you do spawn the building, make sure it claims the tiles it is using, so other buildings can't overlap them later on.
So the main "meat and potatoes" of this project will be creating a grid system, plus a system that can check whether a tile is already occupied and that claims and releases tiles as needed; a rough sketch of that tile-claiming logic follows below.
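Blueprints are visual, so there's no literal code to paste, but the occupancy logic might look like this sketch (written in Swift purely for concreteness; every name here is illustrative and you'd express the same nodes in Blueprints):

```swift
// A tile grid that tracks which cells are claimed by buildings.
struct GridCoord: Hashable {
    let x: Int
    let y: Int
}

struct BuildGrid {
    private var occupied = Set<GridCoord>()

    // All tiles a building with the given footprint would cover at `origin`.
    private func tiles(origin: GridCoord, footprint: (w: Int, h: Int)) -> [GridCoord] {
        (0..<footprint.w).flatMap { dx in
            (0..<footprint.h).map { dy in GridCoord(x: origin.x + dx, y: origin.y + dy) }
        }
    }

    // Placement is legal only if no covered tile is already claimed.
    func canPlace(at origin: GridCoord, footprint: (w: Int, h: Int)) -> Bool {
        tiles(origin: origin, footprint: footprint).allSatisfy { !occupied.contains($0) }
    }

    mutating func place(at origin: GridCoord, footprint: (w: Int, h: Int)) {
        occupied.formUnion(tiles(origin: origin, footprint: footprint))
    }

    mutating func remove(at origin: GridCoord, footprint: (w: Int, h: Int)) {
        occupied.subtract(tiles(origin: origin, footprint: footprint))
    }
}
```

This is what lets a 3x2 or 5x3 building work exactly like a 1x1: the footprint just covers more tiles.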
If you want someone to give you a more concrete answer, you will need to show what you have done and tried in your question, especially for one as broad as this.
I'm a noob to this forum, but wanted to give it a try.
I'm currently learning Objective-C and Cocoa; trying to build my first iPhone app.
One thing I'm working on is allowing the user to cut his/her face from an image they have taken and paste it into another image. (The idea is to cut from one image and paste into another image that has a spot for a face to go.)
How can this be done? I am thinking I would allow the user to just touch and drag over their face, in the shape of a rectangle, and then allow them to copy.
Thanks for the help.
OK, your style of asking is a bit arrogant, but nevertheless here are some guidelines on how to start: generic Obj-C/iOS development (start from hello world); the UIImage class; the camera API; image processing algorithms; face detection algorithms. Go on gradually and don't try to solve every problem at once. First write an application that simply loads an arbitrary photo and shows it to the user. Then modify it so that you can crop a specified rectangular area from the image and save it to a new file. Then write an app that switches on the camera, so that you can take an image and save it to disk. Then unite what you wrote so that you save only a cropped area of the captured image.
When you arrive at this point, you will know much more about software development and image handling. AFTER THIS you can start looking for image processing algorithms. Here too, start with something simple, like a trivial blur filter implemented by you. If you already know a bit of image processing, search for face detection algorithms on the net. It is even possible that you will find a ready framework that includes these features, or at least you will understand the concepts. You can even come back here to Stack Overflow and ask for suggestions about a good face detection algorithm, though we still prefer that you have already chosen one and have some concrete issue with it.
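To give a taste of the cropping step, here is a minimal sketch (shown in Swift for brevity, even though the question mentions Objective-C; the Objective-C equivalent uses CGImageCreateWithImageInRect):

```swift
import UIKit

// Crop a rectangular region out of a UIImage.
func crop(_ image: UIImage, to rect: CGRect) -> UIImage? {
    // `rect` is in pixel coordinates of the underlying CGImage; a real app
    // would first convert from view points, accounting for the image scale.
    guard let cgImage = image.cgImage?.cropping(to: rect) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```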
How can I measure distances in real time (with the video camera?) on the iPhone, like this app that compares the known size of a card against the distance being measured?
Are there any other ways to measure distances? Or how would I go about doing this using the card method? What framework should I use?
Well, you do have something for reference, hence the use of the card. That said, after watching the video for the app, it doesn't seem too user-friendly.
So you either need a reference object that has some known size, or you need to deduce the size from the image. One idea I just had that might help you do it involves the iPhone 4's flash (I'm sure it's very complicated, but it might just work for some things).
Here's what I think.
When the user wants to measure something, he takes a picture of it, but you're actually taking two separate images: one with the flash on, one with the flash off. Then you can analyze the lighting differences between the images and the flash reflection to determine the scale of the image. This will only work for close, not-too-shiny objects, I guess.
But that's about the only other way I could think of to deduce scale from an image without any fixed objects.
I like Ron Srebro's idea and have thought about something similar -- please share if you get it to work!
An alternative approach would be to use the auto-focus feature of the camera. Point-and-shoot cameras often have a laser range finder that they use to auto-focus. The iPhone doesn't have this, and the f-stop is fixed. However, users can change the focus by tapping the camera screen. The phone can also switch between regular and macro focus.
If the API exposes the current focus settings, maybe there's a way to use this to determine range?
Another solution may be to use two laser pointers.
Basically you would shine two laser pointers, in parallel, at (say) a wall. The further back you go, the closer together the dots will look in the video, even though they remain the same physical distance apart. You can then come up with a formula to estimate the distance based on how far apart the dots appear in the photo.
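Under a simple pinhole-camera model this is just similar triangles: dots a real distance D apart project to a pixel separation p = f * D / Z, so Z = f * D / p. A sketch, where the focal length in pixels is an assumed calibration constant you would have to measure for your device:

```swift
// Pinhole-camera arithmetic for the two-laser idea.
func distanceToWall(laserSeparationMeters: Double,   // D, e.g. 0.10 m
                    pixelSeparation: Double,         // p, measured in the image
                    focalLengthPixels: Double)       // f, assumed calibration value
                    -> Double {
    // p = f * D / Z  =>  Z = f * D / p
    return focalLengthPixels * laserSeparationMeters / pixelSeparation
}

// Example: f = 2800 px, D = 0.10 m, p = 70 px  ->  Z = 4.0 m
```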
See this thread for more details: Possible to measure distance with an iPhone and laser pointer?.
I'm currently trying to write an app that can show the effects of glass, as seen through the iPhone camera.
I'm not talking about simple, uniform glass, but glass like this:
Now I already broke this into two problems:
1) Apply some image filter to the 2D frames presented by the iPhone camera. This has been done and seems possible, e.g. in the app faceman.
2) I need to capture the individual lighting properties of a sheet of glass that my client supplies me with. Basically, there must be a way to read the information about how the glass distorts and skews the image. I think it might be possible to take a high-res picture of the glass plate laid on a checkerboard image and somehow analyze this.
Now, I'm mostly searching for literature and web links on how you think I could start on 2). It doesn't need to be exact; in the end I just need something that looks approximately like the sheet of glass I want to show. And I don't even know where to search: physics, image filtering, or computational photography books.
EDIT: I'm currently thinking, that one easy solution could be bump-mapping the texture on top of the camera-feed, I asked another question on this here.
You need to start with OpenGL. You want to effectively have a texture - similar to the one you've got above - displace the texture below it (the live camera view) to give the impression of depth and distortion. This is a 'non-trivial' problem, in that whilst it's a fairly standard problem in its field, if you're coming from a background with no graphics or OpenGL experience you can expect a very steep learning curve.
So in short, the only way you can achieve this realistically on iOS is to use OpenGL, and that should be your starting point. Apple have a few guides on the matter, but you'll be better off looking elsewhere. There are some useful books such as the OpenGL ES 2.0 Programming Guide that can get you off on the right track, but where you start would depend on how comfortable you are with 3D graphics and C.
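To illustrate the core displacement idea: the real thing would be a GLSL fragment shader, but the per-pixel logic it performs looks roughly like this CPU-side sketch (Swift, with purely illustrative names; `normalAt` stands in for sampling the glass's normal map and `cameraAt` for sampling the live camera frame):

```swift
import simd

// One "fragment" of a displacement/refraction effect: the glass normal's
// x/y components decide how far to shift the camera-feed lookup.
func refractedColor(uv: SIMD2<Float>,                       // 0...1 texture coords
                    strength: Float,                        // distortion amount
                    normalAt: (SIMD2<Float>) -> SIMD3<Float>,
                    cameraAt: (SIMD2<Float>) -> SIMD3<Float>) -> SIMD3<Float> {
    // Flat glass (normal pointing straight at the viewer) leaves the image
    // untouched; tilted regions shift which part of the feed shows through.
    let n = normalAt(uv)
    let offset = SIMD2<Float>(n.x, n.y) * strength
    let displaced = simd_clamp(uv + offset, SIMD2<Float>(0, 0), SIMD2<Float>(1, 1))
    return cameraAt(displaced)
}
```

The checkerboard photo from the question would be one plausible way to recover such a per-pixel normal/offset map for a specific sheet of glass.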
Just wanted to add that I solved this old question using the refraction example in the Khronos OpenGL ES SDK.
Wrote a blog entry with pictures about it:
simulating windows with refraction