iPhone Photo Smile Manipulation Applications

I'm trying to create an iPhone application that allows a user to take a photo of their smile, then drag and drop new smiles from a small predefined list.
I know there are a lot of photo manipulation apps and I have seen similar concepts that allow smile manipulation, but not quite what I am looking for. The problem is knowing where to start. How can I create this effect? Would the OpenCV iPhone port be the best way to go? Or perhaps something using OpenGL? Willing to do some research, but I find that experience often goes a long way, so any advice or insight would be much appreciated.

That's a pretty cool application. I'd recommend a dense optical flow alignment method, which would strike a balance between global consistency and local consistency.
In short, you'll want some generic mouth shapes in a gallery. Then you can crop the user's mouth region and warp it to the gallery shape to show what their smile will look like in that shape.
Ce Liu's optical flow implementation might be an interesting place to start. You should be able to port it reasonably easily.
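To make that concrete, here's a minimal sketch of the crop-and-warp step, using OpenCV's Farneback dense optical flow as a stand-in for Ce Liu's implementation. The function name and the assumption that both crops are the same size are mine, not from any particular library:

```python
# Hypothetical sketch: warp a user's mouth crop toward a gallery smile
# shape using dense optical flow (Farneback as a stand-in).
import cv2
import numpy as np

def warp_to_gallery(user_mouth, gallery_mouth):
    """Both inputs are same-size grayscale crops of the mouth region."""
    # Dense flow from the gallery shape to the user's mouth.
    flow = cv2.calcOpticalFlowFarneback(
        gallery_mouth, user_mouth, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = gallery_mouth.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample the user's pixels along the flow field, producing the
    # user's mouth texture rendered in the gallery smile's shape.
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(user_mouth, map_x, map_y, cv2.INTER_LINEAR)
```

On the phone you'd run this on a small mouth crop rather than the full photo, which also keeps it fast.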

Related

iOS image comparison

I am just doing some research into image processing and would appreciate it if someone could point me in the right direction. I want to compare image 'A', which is a picture of a person's face, with images stored in a database (B, C, D, E, etc.) which are also pictures of faces. I want to compare them to see if person 'A' is already in the database.
Several questions:
1. How is face recognition comparison usually done? (Do you extract features, e.g. eyes/mouth, and compare them to other images?)
2. Are there prebuilt libraries that are able to do a comparison between images, or do I need to write my own algorithm?
3. Where can I start with this? (I would appreciate some references/reading material.)
Yes, you identify, extract and quantify various aspects of human faces, such as distance between pupils, width of mouth, percentage of head height where tip of nose is, etc.
There is a company, Luxand, which makes software to do this, and I think they license it. Last time I looked (2009?) they didn't have an Objective-C library. They do have an app that claims to merge faces from photographs, so you can see what the offspring of any two people would look like, but it is very cheesy, with lots of hard-coded faces. (If you cross a dog with a teapot, you get the same baby face as from crossing two real faces.)
AFAIK, there is nothing in the iOS SDK that does this.
I would just Google "face recognition" and start reading. Good luck.
I would go with compiling OpenCV for the iPhone ( http://computer-vision-talks.com/2011/02/building-opencv-for-iphone-in-one-click/ ) and then implementing one of the classical approaches to face recognition, like eigenfaces ( http://www.shervinemami.info/faceRecognition.html ).
But don't expect miracles: the accuracy will be low, and the app will be slow.
Also, when you say face recognition is difficult, doesn't the first link show how easy it is to detect faces in a picture?
The face detection from the first link only detects the face, i.e. it checks whether there is a face in the image, which you can then pass as input to the recognition algorithm.
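To make the eigenfaces suggestion concrete, here is a minimal sketch of the idea in Python/NumPy. (The linked tutorial uses OpenCV's C interface; everything here, including the function names, is illustrative and assumes the faces are already detected, cropped and aligned to a common size.)

```python
# Minimal eigenfaces sketch: PCA over flattened face images.
import numpy as np

def train_eigenfaces(faces, num_components=20):
    """faces: list of equal-sized grayscale images (2-D numpy arrays)."""
    X = np.stack([f.flatten().astype(np.float64) for f in faces])
    mean = X.mean(axis=0)
    # Principal components of the mean-centered faces ("eigenfaces").
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:num_components]
    return mean, basis, (X - mean) @ basis.T  # projected gallery

def identify(face, mean, basis, gallery_proj):
    """Return index and distance of the closest gallery face."""
    q = basis @ (face.flatten().astype(np.float64) - mean)
    dists = np.linalg.norm(gallery_proj - q, axis=1)
    return int(np.argmin(dists)), float(dists.min())
```

Thresholding the returned distance is how you'd decide "not in the database" rather than always accepting the nearest match.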
Face recognition is very difficult: you need to extract some kind of "features" and perform some measurements... iPhone hardware isn't very well suited to this job.
Yes, you can check here
http://maniacdev.com/2011/11/tutorial-easy-face-detection-with-core-image-in-ios-5/
for a tutorial, and here
http://maniacdev.com/2011/12/open-source-library-for-adding-easy-face-to-your-ios-app-with-the-free-face-com-api/
for a free web service.
3. I suggest Google Scholar (http://scholar.google.it/scholar?q=face+recognition&hl=it&btnG=Cerca&lr=), but I think that if you want to write your own algorithm you'll need a lot of spare time :)
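As a cross-platform illustration of the detection step (the iOS-native route is Core Image's CIDetector from the tutorial above), here's a rough sketch using OpenCV's bundled Haar cascade; the file names are placeholders:

```python
# Sketch of the detection step with OpenCV's Haar cascade
# (a stand-in here for Core Image's CIDetector on iOS 5+).
import cv2

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# Each hit is (x, y, w, h); these crops are what you'd feed to a
# recognition algorithm such as eigenfaces.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
```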

Does a free API for an augmented reality service exist?

Currently I am trying to create an app for iPhone which is capable of recognizing objects in an image, such as a car, bus, building, bridge, human, etc., and labelling them with the object name, with the help of the Internet.
Is there any free service which provides a solution to my problem, since object recognition is itself a complex task requiring digital image processing, neural networks and so on?
Can this be done via an API?
If you want to recognise planar images, the current generation of mobile AR SDKs from Metaio, Qualcomm and Layar will allow you to upload images to match against, and will perform the matching.
If you want to match freely against a set of 3D objects, e.g. a Toyota Prius or the Empire State Building, the same techniques might be applied to sets of images taken at different rotations. But you might have to settle for matching just one object, due to limits on how large an image database you can have with the service, or contact those companies for a custom solution. And it may not work very reliably, given that the state of the art is reliable matching against planar images.
If you want to recognize general classes (human, car, building), this is a very difficult problem, and I don't know of any solutions anywhere fast enough to operate online (which I assume is a requirement given you want an AR solution - is that a fair assumption?). It's been a few years since I studied CV, but at that time the most promising solution for visual classification was "bag of visual words" approaches - you might try reading up on those.
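To give a feel for what the planar matching above involves under the hood, here's a rough sketch using local features and a homography check with OpenCV. This illustrates the general technique, not how any of the named SDKs actually implement it, and the thresholds are arbitrary:

```python
# Sketch of planar-target matching: local features + homography check.
import cv2
import numpy as np

def matches_target(frame_gray, target_gray, min_inliers=15):
    orb = cv2.ORB_create(nfeatures=1000)
    k1, d1 = orb.detectAndCompute(target_gray, None)
    k2, d2 = orb.detectAndCompute(frame_gray, None)
    if d1 is None or d2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    if len(matches) < min_inliers:
        return False
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # A valid homography with enough inliers means the planar target
    # is visible in the frame.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H is not None and int(mask.sum()) >= min_inliers
```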
Take a look at Cortexica. Very useful for this sort of thing.
http://www.cortexica.com/
I haven't done work with mobile AR in a while, but the last time I was working on this stuff I was using Layar and starting to investigate Junaio. Those are oriented toward 3D graphics, not simply text labels, so for your use case you may be better served with OpenCV.
Note that Layar (and I believe Junaio too) works like a web app, where you put the content on your own server and give Layar the URL to link to.

Is there an imaging library that can make you look thinner?

Very odd question, I know, but this is a problem a potential client handed me today.
We assume we have a full length photo of a person. We want to generate a thinner image of that user. Obviously, one way would just be to compress the width of the image but that would result in various distortions that wouldn't be realistic.
I'd like to keep this open source, so if anybody knows of a library that can identify certain parts of the body and slim each one in a realistic way, I'd like to know.
This is obviously something that could be done by hand but we need a solution that works without user interaction.
You should look into seam-carving algorithms. The algorithm is quite simple to implement, and many implementations are available online. It seems ImageMagick has it too, under the name "Liquid Rescale".
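For a feel of how simple the core algorithm is, here's a minimal grayscale sketch: compute an energy map, find the cheapest vertical seam by dynamic programming, and delete it. A real implementation (or ImageMagick's Liquid Rescale) adds masks, color handling, and better energy functions:

```python
# Minimal seam-carving sketch: remove one lowest-energy vertical seam.
import numpy as np

def energy(gray):
    """Simple gradient-magnitude energy map."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.abs(gx) + np.abs(gy)

def remove_one_seam(gray):
    e = energy(gray)
    h, w = e.shape
    cost = e.copy()
    for y in range(1, h):            # dynamic program: cheapest path down
        left = np.r_[np.inf, cost[y - 1, :-1]]
        right = np.r_[cost[y - 1, 1:], np.inf]
        cost[y] += np.minimum(np.minimum(left, cost[y - 1]), right)
    # Trace the seam back up, deleting one pixel per row.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    keep = np.ones_like(gray, dtype=bool)
    keep[np.arange(h), seam] = False
    return gray[keep].reshape(h, w - 1)
```

Calling remove_one_seam repeatedly narrows the image; biasing the energy map (e.g. lowering it over regions you want to shrink) is how implementations steer the seams through the body rather than the background.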
I assume that even the detection of body parts in photos is too hard a challenge for algorithms, unless the photos are all very similar (e.g. same background, same pose, etc.).
I once played around with developing algorithms for skin smoothing. I was able to detect skin areas pretty well by converting colors to the LAB space and selecting pixels similar to skin sample colors learnt by a support vector machine from various sample images. Once you have that, you could run something like a liquify-contract algorithm for slimming.
I wouldn't expect satisfying results though unless you spend huge amounts of time on this.
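A minimal sketch of that skin-detection approach, assuming you've hand-collected labelled 8-bit BGR pixel samples from example photos (the scikit-learn SVM here is my substitution for whatever the author used):

```python
# Sketch: classify pixels as skin/non-skin in LAB space with an SVM.
import cv2
import numpy as np
from sklearn import svm

def train_skin_model(skin_pixels, other_pixels):
    """Inputs: (N, 3) uint8 arrays of BGR samples scraped by hand."""
    X = np.vstack([skin_pixels, other_pixels]).astype(np.uint8)
    X_lab = cv2.cvtColor(X.reshape(-1, 1, 3),
                         cv2.COLOR_BGR2LAB).reshape(-1, 3)
    y = np.r_[np.ones(len(skin_pixels)), np.zeros(len(other_pixels))]
    return svm.SVC(kernel="rbf").fit(X_lab, y)

def skin_mask(img_bgr, model):
    # Note: per-pixel SVM prediction is slow on full images; fine for
    # a sketch, but you'd subsample or use a lookup table in practice.
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3)
    return model.predict(lab).reshape(img_bgr.shape[:2]).astype(bool)
```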

What is the best way to work with a user interface/user experience designer on an iPhone app?

I have a friend who is a graphic designer & user experience designer who will be collaborating with me to develop an iPhone app. He does not have previous iPhone experience. What is the best way to work with him on developing the user interface, i.e. custom colors for UITableViews, UIButtons, etc? We've looked into Photoshop mock ups, but that depends on me (the developer) implementing what he drew in Photoshop, which might get tricky.
Most of the methods I've thought of have long turn around time, i.e. he uses Photoshop, sends me the image, I develop, send him a test build of the app, he doesn't like it, rinse, lather, repeat.
Do you think it's feasible to set him up with Interface Builder so he can modify XIB files? Potentially, he could build and run the app in the simulator...
Does anyone have experience doing this? Any suggestions?
Thanks much,
-dan
This goes for a developer or a designer: the best way, in my opinion, is to mock up designs in Photoshop, debate what is good and what is bad, then send the final mock-ups to the developer.
The reason you want to do it this way is that your designer can't do everything he wants to do using Interface Builder alone. You need to allow your designer to express his creative freedom without the burden of figuring out how to use a piece of software correctly.
You can find plenty of templates of iPhone and iPad components on the web. Having those components will make it very simple for your friend to put together concepts. It will also keep things consistent so you can have an easier time implementing them.
A Great Collection of iPad Resources
iPhone Materials
One suggestion is to start with the elements that do not need graphic design but that you know will be there: table views, tab bars, any UI element provided by UIKit, or even custom UI elements that you make. You will probably have most of your app built with this approach, and it will look VERY plain. Once you have that basis, you should be able to work with the graphic designer and identify where and what he needs to make. It should also be pretty easy for you to integrate his work, since it will mostly be images or textures; things like animations will have to be handled by you anyway. Just a suggestion, hope it's helpful.
Omnigraffle is your best bet for quickly mocking out UIs. It produces nearly photorealistic mockups. It's easy for a non-artist to use, but can also incorporate imported images of arbitrary complexity if he needs to do something fancy.
If you want my advice, keep the graphic designers away from the app until it is fully functional logically. They should only be brought in at the end of the process to tweak the UI.
They cause train wrecks if they come into the process early. Everybody in that field has been trained first and foremost to create visuals that attract attention. In a UI, that always translates into flashy, non-standard elements that become annoying with repeated use. A good UI is essentially invisible to the user. Ideally, they should notice it only because they notice that they don't notice it. (It's all very Zen.)
People trained to attract attention in the blizzard of competing images of a media saturated world don't make invisible interfaces. They make "in your face" and "look at me!" interfaces that get old in a hurry.
Don't get me wrong: a good graphics person can really enhance an interface by the skillful and subtle use of proportion and color. Unfortunately, finding a good UI graphics person is a challenge. Be prepared for fights over what works transparently versus what looks cool and draws attention the first time you see it.

OpenGL ES and real world development

I'm trying to learn OpenGL ES quickly (I know, I know, but these are the pressures that have been thrust upon me). I have been reading around a fair bit, with lots of success at rendering basic models, some basic lighting, and 'some' texturing success too.
But this is CONSTANTLY the point at which all OpenGL ES tutorials end; they never say more about what a real-life app may need. So I have a few questions that I'm hoping aren't too difficult.
How do people get 3D models from their favorite 3D modeling tool into an iPhone/iPad application? I have seen a couple of blog posts where people have written Python scripts for tools like Blender which create .h files that you can use. Is this what people do every time, or do the "big" tooling suites (3ds Max, Maya, etc.) have export features?
Say I have my model in a nice .h file and all the vertices, texture coordinates, etc. are lined up; how do I make my model (say, of a basic person) walk? Or, to be more general, how do you animate "part" of a model (legs only, turn head, etc.)? Do models need to be a massive mash-up of many different tiny models, or can you pre-bake animations "into" models these days (somehow)?
Truly great 3D games for the iPhone are (I'm sure) unbelievably complex, but how do people (game dev firms) manage that designer/developer workflow? Surely not all the animations, textures, etc. are done programmatically.
I hope these are not stupid questions. In actual fact, the app I'm trying to investigate how to make is really quite simple: just a basic 3D model that I want to be able to pan/tilt around using touch. Has anyone ever done/seen anything like this that I might be able to read up on?
Thanks for any help you can give, I appreciate all types of response big or small :)
Cheers,
Mark
Let me try to explain why the answer to this question will always be vague.
OpenGL ES is very low level. It's basically all about pushing triangles to the screen and filling pixels, and nothing else.
What you need to create a game is, as you've realised, a lot of code for managing assets, loading objects and worlds, managing animations, textures, sound, maybe network, physics, etc.
These parts are the "game engine".
Development firms have their own preferences. Some buy their game engine, others like to develop their own. Most use some combination of bought tech, open source, and in-house built tech and tools. There are many engines on the market, and everyone has their own opinion on which is best...
Workflow and tools used vary a lot from large firms with strict roles and big budgets to small indie teams of a couple of guys and gals that do whatever is needed to get the game done :-)
For the hobbyist and indie dev, there are several cheap and open-source engines of varying maturity and amounts of documentation/support. Again, you have to look around until you find one you like.
On top of the game engine, you write your game code, which uses the game engine (and any other libraries you might need) to create whatever game it is you want to make.
Something many people are surprised by when starting OpenGL development is that there's no such thing as an "OpenGL file format" for models, let alone animated ones. (DirectX, for example, comes with an .x file format supported right away.) This is because OpenGL acts at a somewhat lower level. Of course, as tm1rbrt mentioned, there are plenty of libraries available. You can easily create your own file format if you only need geometry; things get more complex when you want to take animation and shading into account as well. Take a look at Collada for that sort of thing.
Again, animation can be done in several ways. Characters are often animated with skeletal animation; see the sketch after this answer. Have a look at the cal3d library as a starting point.
You definitely want to spend some time creating a good pipeline for your content creation. Artists must have a set of tools to create their models and animations and to test them in the game engine. Artists must also be instructed about the limits of the engine, both in terms of polygons and of shading. Sometimes complex custom editors are coded to create levels, worlds, etc. in a way compatible with your specific needs.
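As mentioned above, characters are usually animated with skeletal animation; the per-vertex math that libraries like cal3d ultimately perform is linear blend skinning. Here's a toy sketch of that math (all names are illustrative; matrices are 4x4 NumPy arrays):

```python
# Sketch of linear blend skinning, the per-vertex math behind
# skeletal animation. Each vertex is influenced by a few bones.
import numpy as np

def skin_vertex(rest_pos, influences):
    """influences: list of (bone_matrix, inverse_bind_matrix, weight).

    Each bone contributes its current world transform applied to the
    vertex expressed in that bone's rest space; weights sum to 1.
    """
    v = np.append(rest_pos, 1.0)            # homogeneous coordinates
    out = np.zeros(4)
    for bone, inv_bind, w in influences:
        out += w * (bone @ inv_bind @ v)
    return out[:3]
```

The engine animates the bone matrices over time (from keyframes the artist baked into the model) and re-skins the vertices each frame, so you never need a separate tiny model per limb.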
Write or use a model loading library, or use an existing graphics library; that will already have routines to load models and textures.
Animating models is done with bones in the 3D model editor. The graphics library will take care of moving the vertices, etc., for you.
No, artists create art and programmers create engines.
This is a link to my favourite graphics engine.
Hope that helps
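To show what "write a model loading library" means at the simplest end, here is a toy Wavefront .obj parser that turns a text export from Blender/Maya into the flat vertex and index arrays you'd hand to glDrawElements. It handles only "v" and triangular "f" lines; real files need normals, texture coordinates, and much more:

```python
# Toy .obj loader sketch: text model export -> flat GL-style arrays.
def load_obj(path):
    vertices, indices = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":                    # vertex position
                vertices.extend(float(c) for c in parts[1:4])
            elif parts[0] == "f":                  # triangle face
                # .obj indices are 1-based; keep the position index only.
                indices.extend(int(p.split("/")[0]) - 1
                               for p in parts[1:4])
    return vertices, indices
```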