How to segment an image in iOS to remove the background and retain the foreground picture - iPhone

I need to segment an image in iOS for a fashion app, keeping only the foreground and removing all the background, much like the background-removal tools in various photo editors. Please help me.

General background subtraction is an unsolved problem, so getting perfect results is going to be a big effort. With that said, you can probably get close. Here are a few suggested avenues:

1. I am guessing that your app will place clothes on a human, or something of the sort. Instead of getting a perfect segmentation, run a person detector, remove everything in the image except the detected person, and fit a part-based human model to what remains. Then you have the pose of the person, and can do your image processing accordingly.

2. Allow the user to input some strokes on the foreground and some strokes on the background, and run a graph-cuts-based image segmentation algorithm on the frame.

3. Begin your process by having the user not be present in your video stream. From this, learn the background distribution (start with a simple histogram of background pixels; there are much more elaborate schemes, but you need a starting place). Then, when the user enters the scene, create a binary image containing the connected components that don't fit the learned background distribution. This will not be perfect, but you will start to see something close to a binary image where the white pixels are your user and the black pixels are the background. Use morphology operators to join any large connected components that are slightly separated, and threshold your image to remove small noise from things like specular objects and illumination changes. (A sketch of this method follows below.)
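To make the third method concrete, here is a minimal OpenCV/Python sketch, under some loud assumptions: the background model is just a per-pixel median of a few user-free frames, and the difference threshold and minimum blob area are untuned guesses. (The second method, for what it's worth, maps roughly onto OpenCV's cv2.grabCut.)

```python
import cv2
import numpy as np

def learn_background(frames):
    """Per-pixel median of a few user-free frames as a crude background model."""
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

def segment_user(frame, background, diff_thresh=30, min_area=500):
    # Difference against the learned background, in grayscale for simplicity.
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(background, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Morphological closing joins nearby components; opening removes specks.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Keep only connected components large enough to be part of a person.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    clean = np.zeros_like(mask)
    for i in range(1, n):  # label 0 is the background component
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            clean[labels == i] = 255
    return clean
```

You would tune the threshold and kernel size against your own footage; the point is only that each stage (difference, morphology, connected components) is a few lines.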
As I said (and as is mentioned in the comments), this is not an easy problem, but you can come up with a good approximation if you put some time into it. I suggest the third method I listed: it is achievable, and it can be broken down into small parts so you can tell when you're making progress.
Good luck!

Related

Human Detection using edge detection

I am trying to detect the exact silhouette of a human body in this dataset using background subtraction. After doing some thresholding I was getting split blobs, so I looked at this tutorial by Steve, but now I am getting blobs other than the human body, as shown below.
So here is the original:
After subtracting it from the background (the background was taken to be the first frame of the video), I get the following image:
Then I did basic thresholding and got the following image, which is split into separate regions:
And using Steve's method I get this:
But this contains a lot of area that is not part of the human body. Any suggestions for how, perhaps using edges, I can get a good blob of the human body?
EDIT
As @lennon310 asked me to upload the color image, here it is:
And as @NKN asked me to upload the edge information of the same image, here it is:
Instead of literally subtracting the background, try using the vision.ForegroundDetector object, which is part of the Computer Vision System Toolbox. It implements mixture-of-Gaussians adaptive background modeling, and it may give you a cleaner segmentation.
Having said that, it is very unlikely that you will get the "exact" silhouette. Some error is inevitable.
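For anyone prototyping outside MATLAB: OpenCV's MOG2 subtractor implements the same mixture-of-Gaussians idea as vision.ForegroundDetector. A minimal sketch, with the video path and all parameter values as placeholder assumptions:

```python
import cv2

# Adaptive mixture-of-Gaussians background model, the same idea as
# MATLAB's vision.ForegroundDetector. "video.avi" and the parameters
# below are placeholders, not tuned values.
cap = cv2.VideoCapture("video.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)      # 255 = foreground, 127 = shadow
    mask[mask == 127] = 0               # drop the shadow pixels
    cv2.imshow("foreground", mask)
    if cv2.waitKey(30) & 0xFF == 27:    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```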
In your result image, you have two types of black regions: one is moving and the other is stationary.
So when you want to fill the human body, you have to choose only the moving region. For this purpose, I suggest segmenting your image with an optical flow technique to find out where the moving regions are.
Here is an interesting tutorial that does what you need:
http://docs.opencv.org/trunk/doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.html
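That tutorial tracks sparse Lucas-Kanade points; for a per-pixel moving-region mask of the kind described above, dense flow is easier to start with. A rough OpenCV/Python sketch, with the magnitude threshold as an untuned guess:

```python
import cv2
import numpy as np

def moving_region_mask(prev_bgr, curr_bgr, mag_thresh=1.5):
    """Binary mask of pixels whose dense optical flow exceeds a threshold."""
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Farneback dense flow; the numeric parameters are the commonly used
    # defaults from the OpenCV documentation, not tuned for this data.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)  # per-pixel motion magnitude
    return (mag > mag_thresh).astype(np.uint8) * 255
```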

Copy face from Image

I'm a noob to this forum, but wanted to give it a try.
I'm currently learning Objective-C and Cocoa; trying to build my first iPhone app.
One thing I'm working on is allowing the user to cut his/her face out of an image they have taken and paste it into another image. (The idea is to cut from one image and paste into another image that has a spot for a face.)
How can this be done? I am thinking I would allow the user to touch and drag a rectangle over their face, and then let them copy it.
Thanks for the help.
OK, despite your somewhat arrogant style of asking, here are some guidelines on how to start: generic Obj-C/iOS development (start from hello world); the UIImage class; the camera API; image processing algorithms; face detection algorithms. Go on gradually and do not try to solve all the problems at once. First write an application that simply loads an arbitrary photo and shows it to the user. Then modify it so that you can crop a specified rectangular area from the image and save it to a new file. Then write an app that switches on the camera, so that you can take an image and save it to disk. Then combine what you have written so that you save only a cropped area of the captured image.
When you arrive at this point, you will know much more about software development and image handling. AFTER THIS you can start looking into image processing algorithms. Here too, start with something simple, like a trivial blur filter you implement yourself. If you already know a bit of image processing, search for face detection algorithms on the net. You may even find a ready-made framework that includes these features, or at least you will understand the concepts. You can also come back here to Stack Overflow and ask for suggestions about a good face detection algorithm, though we still prefer that you have already chosen one and have a concrete issue with it.
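The crop-and-save step in that pipeline is the same on any platform (on iOS it corresponds to CGImage's cropping(to:)); purely as an illustration of the logic, here is a minimal Python/Pillow sketch, with the file names and rectangle as placeholders:

```python
from PIL import Image

# Load a photo, crop the rectangle the user dragged out, and save it.
# The file names and rectangle coordinates are illustrative placeholders.
photo = Image.open("input.jpg")
face_rect = (120, 80, 360, 320)   # (left, upper, right, lower) in pixels
face = photo.crop(face_rect)
face.save("face.png")
```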

Polling iPhone Camera to Process Image

The scenario is that I want my app to process (in the background if possible) images being seen by the iPhone camera.
E.g. the app is running, the user places the phone down on a piece of red cardboard, and then I want to display an alert view saying "Phone placed on red surface" (this is a simplified version of what I want to do, but it keeps the question direct).
Hope this makes sense. I know there are two separate concerns here:
How to process images from the camera in the background of the app (if we can't do this, we can initiate the process with, say, a button click if needed).
Processing the image to say what solid colour it is sitting on.
Any help/guidance would be greatly appreciated.
Thanks
Generic answers to your two questions:
Background processing of the image can be triggered as a timer event. Say, for example, every 30 seconds, capture the image and do the processing behind the scenes. If the processing is not compute- or time-intensive, this should work.
It is technically possible to read the color of a single pixel programmatically. If you are sure that the entire image is just one color, you can try that approach: pick a few random points and read the pixel color at each. But if the image (in your example, the red board) contains a pattern or multiple colors, that will require more detailed image processing techniques.
Hope this helps
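As a rough sketch of the idea above (capture on a timer, then sample a few pixels), here is a desktop OpenCV/Python stand-in; on the phone you would pull frames from the camera API instead, and the camera index, interval, and sample points are placeholder assumptions:

```python
import time
import cv2

def dominant_channel(frame, points):
    """Sum a few sampled pixels and name the strongest BGR channel."""
    totals = [0, 0, 0]
    for (x, y) in points:
        for c in range(3):
            totals[c] += int(frame[y, x][c])
    return ("blue", "green", "red")[totals.index(max(totals))]

cap = cv2.VideoCapture(0)          # default camera as a stand-in
last = 0.0
while True:
    ok, frame = cap.read()         # keep draining frames
    if not ok:
        break
    if time.time() - last >= 30:   # the 30-second timer event
        last = time.time()
        h, w = frame.shape[:2]
        points = [(w // 2, h // 2), (w // 4, h // 4),
                  (3 * w // 4, 3 * h // 4)]
        print("Surface looks", dominant_channel(frame, points))
cap.release()
```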
1) Image Capture
There are two kinds of apps that continually take imagery from the camera: media capture apps (e.g. Camera, iMovie) and augmented reality apps.
Here's the iPhone SDK tutorial for media capture:
https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html#//apple_ref/doc/uid/TP40010188-CH5-SW3
Access the camera with iPhone SDK
Augmented Reality apps take continual pictures from the camera for processing/overlay. I suggest you look into some of the available AR kits and see how they get a continual stream from the camera and also analyze the pixels.
Starting an augmented reality (AR) app like the Panasonic VIERA AR Setup Simulator
http://blog.bordertownlabs.com/post/157320598/customizing-the-iphone-camera-view-with
2) Image Processing
Image processing is a really big topic that's been addressed in multiple other places:
https://photo.stackexchange.com/questions/tagged/image-processing
https://dsp.stackexchange.com/questions/tagged/image-processing
https://mathematica.stackexchange.com/questions/tagged/image-processing
But for starters, you'll need to use some heuristic analysis to determine what you're looking for. Sampling the captured pixels in a bunch of places (e.g. the corners plus the middle) may help, as would generating a histogram of colour intensities: if there's lots of red but little or no blue and green, it's a red card.
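A hedged sketch of that sampling-plus-heuristic idea in Python/NumPy, with the patch size and the "lots of red" ratio as invented placeholders:

```python
import numpy as np

def sample_patches(img, size=20):
    """Patches at the four corners and the middle, as suggested above."""
    h, w = img.shape[:2]
    spots = [(0, 0), (0, w - size), (h - size, 0), (h - size, w - size),
             ((h - size) // 2, (w - size) // 2)]
    return [img[y:y + size, x:x + size] for (y, x) in spots]

def looks_like_red_card(bgr_img, ratio=1.5):
    """Lots of red, little blue/green, measured as naive channel means."""
    patches = np.concatenate([p.reshape(-1, 3)
                              for p in sample_patches(bgr_img)])
    b, g, r = patches.mean(axis=0)   # OpenCV-style BGR channel order
    return r > ratio * b and r > ratio * g
```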

Compare one image in MATLAB with a database of images and show the most similar

I have a database of images of one person using his hands to show various words and phrases in sign language. The background is white, and the only things changing are the shape of the person's hands and their locations. Now, in my MATLAB GUI, I want the user to be able to choose another image of the same person, taken at another time while signing and wearing the same clothes, and the program then has to compare it against the images in the database and show the most similar one. Obviously I can't do pixel-by-pixel comparison, as the images were taken with a handheld mobile camera and slight movement was inevitable, so I should try to locate the hands in the images and compare their shapes. I have no idea how to go about this. I should say that I am new to the Image Processing Toolbox in MATLAB.
Your help is much appreciated
I am doing a PhD in computer vision, and I can tell you that it is an unsolved problem (even in your simple framework, with a white background).
If you are interested, you might read some work about it at MIT:
http://people.csail.mit.edu/rywang/handtracking/
or at Oxford:
http://www.robots.ox.ac.uk/~vgg/research/sign_language/index.html
http://www.robots.ox.ac.uk/~vgg/research/hands/index.html
I disagree with you: such a project can achieve results quickly.
It only becomes a hard problem once the project has to deal with "real life".
Using a single camera and a completely known background, OpenCV provides a simple way to extract the hand shape from an image (in about 20 lines of code). You will find plenty of source code on the web (have a look at calcBackProject).
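A minimal sketch of that backprojection approach with OpenCV in Python, assuming you have a small patch of known hand/skin pixels to build a hue histogram from; the histogram size and threshold are untuned guesses:

```python
import cv2

def hand_mask(frame_bgr, skin_patch_bgr):
    """Backproject a hue histogram of a known skin patch onto the frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    patch = cv2.cvtColor(skin_patch_bgr, cv2.COLOR_BGR2HSV)

    # Hue-only histogram of the skin sample, normalised for backprojection.
    hist = cv2.calcHist([patch], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    # Each pixel gets the likelihood of its hue under the skin histogram.
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, mask = cv2.threshold(back, 50, 255, cv2.THRESH_BINARY)
    return mask
```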
After that, you will have to play with the shape and search for characteristic points.
Begin with some simple signs (for example, a circle and a V). How would you recognize one from the other?
There are thousands of papers on sign language; read the older ones first to get simple ideas flowing :)

How can I process an image to remove a watermark within my iPhone application?

I want to remove a watermark from a picture within my iPhone / iPad application. Is there any kind of image processing I can perform within the application to do this?
Can't be done, sorry.
The watermarked image was originally two images (the base and the watermark), which were merged together to form the result. The problem here is that the most common image formats (such as JPG, PNG, or GIF) have no concept of layers, whereby the base would be one layer and the watermark another: the result is just one layer, onto which both were redrawn. This is somewhat similar to a physical painting: if you paint one image on paper using watercolors, and then another over the same spot, their colors mix and you cannot tell which parts belong to which, as they have become a single image.
It is the same with computer image formats: there is only one "layer", which for every pixel encodes exactly the one color that is there. Only the current color exists, and the image does not keep track of what was on that pixel before.
The information is thus irreversibly lost from the result. In other words, it is not possible to recover the base knowing just the result (or even the result and the watermark); by the way, that is exactly the point of watermarking.
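A tiny NumPy sketch of the merge itself makes the information loss visible: wherever the watermark is opaque, the base pixels are overwritten outright, so nothing computed from the result alone can restore them. The arrays here are toy placeholders:

```python
import numpy as np

# Toy one-channel "images": a base and a watermark with an alpha mask.
base = np.array([[10, 20, 30, 40]], dtype=np.float32)
mark = np.array([[255, 255, 255, 255]], dtype=np.float32)
alpha = np.array([[0.0, 1.0, 1.0, 0.0]], dtype=np.float32)  # 1 = opaque

# The merge that produces the watermarked file: one flat layer remains.
result = (1 - alpha) * base + alpha * mark
print(result)  # [[ 10. 255. 255.  40.]] -- the 20 and 30 are gone for good

# Even knowing `mark` and `alpha` does not help where alpha == 1:
# (result - alpha * mark) / (1 - alpha) divides by zero exactly there.
```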
I have borrowed the image sprites from Stack Overflow for a demonstration; the actual images used are not unique, and the technique would work just as well with any images. This was the watermark I used:
And this is the result image, after merging with the base:
Now, even though we have the exact watermark image used, there's no way to recover what was underneath that star in the original image. Through image processing operations, we could almost remove the star from the result, but there's not enough data to tell us what used to be underneath; that information was erased in the merge at the beginning.
We could guess what used to be there, but then we're no longer doing recovery; we're interpreting the image and guessing what could possibly have been there, and that's pretty hard even for a human. Computers are really bad at it. This is the original image, before I watermarked it. I bet you were expecting something slightly different, no?
The watermark is almost certainly part of the image. (The only case in which it wouldn't be is something like PDF or SVG, where it could be a separate vector element.)
Watermarks are typically present on images for purposes of managing intellectual property; if one has licensed an image for a particular use, typically one will receive access to a version of the image without a watermark. Thus wanting to "remove watermarks" is also likely to be treated as highly suspicious.
Watermarks are part of the image; there isn't going to be a magic way in any tool to remove them and recover the missing pixels.
Take a look at the source! Most of the current watermarking is done in PHP as an automated script. In most cases you will see the base picture in the page source.