Detecting particular objects in an image, i.e. image segmentation with OpenCV - iPhone

I have to select a particular object visible in my image on the iPhone.
Basically my project is to segment image objects on the basis of my touch.
The method I am following is to first detect the contours in the image and then select a particular contour based on the finger touch.
Is there any other method that would be more robust, given that I have to run it on video frames?
I am using OpenCV on the iPhone for the project.
Please suggest any other approach that has been implemented or is feasible to implement.
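For reference, a minimal sketch of the contour-plus-touch approach described above (shown in Python/OpenCV for brevity; the C++ API used on the iPhone exposes the same functions, and the file name and touch coordinates are placeholders):

```python
import cv2

# Load one frame and binarize it so contours can be found
# ("frame.png" and the touch point below are placeholder values).
frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Find all external contours in the frame.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

touch = (120.0, 200.0)  # finger touch in image coordinates (placeholder)

# Pick the contour that contains the touch point.
selected = None
for c in contours:
    # measureDist=False returns +1 inside, 0 on the edge, -1 outside.
    if cv2.pointPolygonTest(c, touch, False) >= 0:
        selected = c
        break

if selected is not None:
    cv2.drawContours(frame, [selected], -1, (0, 255, 0), 2)
    cv2.imwrite("selected_object.png", frame)
```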

Have you looked at SIFT or SURF implementations? They both track object features and are resilient (to a certain degree) to rotation, translation and scale.
Also check out FAST, a corner-detection algorithm that might help you; they have an app on the App Store showing how quick it is, too.
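As a quick illustration of what a feature detector gives you (a hedged sketch; FAST and ORB are shown here because they are unencumbered and fast enough for video, and the file name is a placeholder):

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# FAST corner detector: very cheap, good for video frame rates.
fast = cv2.FastFeatureDetector_create(threshold=25)
fast_keypoints = fast.detect(img, None)

# ORB adds descriptors, so features can be matched between frames
# to keep tracking the selected object as it moves.
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)

out = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite("keypoints.png", out)
```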


How to detect the contours of an object and describe it for comparison on a server with ARKit

I want to detect a shape and then describe it (somehow) to compare it with data on the server.
So the first question is: is it possible to detect a shape, such as a blob, with ARKit?
To be more specific, let me describe my use case generally.
I want to scan an image with the phone, extract the specific shape, send it to the server, compare the two images on the server (the server image is the real one; the scanned image would be very similar) and then send back some data. I am not asking about the server side; the only question about the server side is what I should compare - images using OpenCV, some mathematical description of both images with an attempt to find similarity, etc.
If the question is hard to understand, let's split it into two simpler questions:
1) How do I scan a 2D object with the iPhone and save it (trimming the specific shape from its background when the object is black and the background is white)?
2) How do I describe the scanned object for comparison with an almost identical object?
ARKit has no use here.
You will probably need a lot of CoreImage (for fixing perspective distortion and binarization) and OpenCV logic.
Perhaps Vision can help you a little with extracting an ROI from the entire frame, especially if the waveform image is located inside some kind of rectangle.
Perhaps you can train a custom ML model that will recognize specific waveforms or waveforms in general to use with Vision.
In any case, it is not a trivial task.
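To make questions (1) and (2) concrete, here is a hedged sketch of the non-ARKit part in Python/OpenCV (the same calls exist in the OpenCV framework you would embed in the iOS app; file names are placeholders): binarize the frame, keep the largest dark blob as the shape, and describe it with Hu moments so the server can compare shapes with `matchShapes` rather than raw pixels.

```python
import cv2

# 1) Trim the dark shape from a light background (placeholder file name).
img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
shape = max(contours, key=cv2.contourArea)  # largest blob = the object

# 2) Describe it compactly: Hu moments are scale- and rotation-invariant,
#    so they tolerate small differences between the scan and the reference.
hu = cv2.HuMoments(cv2.moments(shape)).flatten()
print("descriptor to send to the server:", hu)

# On the server, with the reference contour available, the comparison
# can be a single call (lower score = more similar):
# score = cv2.matchShapes(shape, reference_contour, cv2.CONTOURS_MATCH_I1, 0)
```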

Extracting measurements from a finger via ROI and image processing in MATLAB

I am trying to do a number of things via MATLAB, but I am getting a bit lost about which techniques to use. My ultimate goal is to extract various measurements from a user's fingerprint presentation, e.g. how far the finger over/undershoots, the coordinates of where the finger enters, and the angle of the finger.
In my current setup, I have a web camera recording footage of a top-down view of the presentation; I then break the video file down into individual frames. https://www.dropbox.com/s/zhvo1vs2615wr29/004.bmp?dl=0
What I am trying to work on at the moment is using ROI-based image processing to create a binary mask around the edges of the scanner. I'm using the im2bw function to get a binarized image and getting this as a result. https://www.dropbox.com/s/1re7a3hl90pggyl/mASK.bmp?dl=0
What I could use is some guidance on where to go from here. I want to be able to take measurements from the defined ROI to work out various metrics, e.g. how far a certain point is from the ROI, so I need some sort of border for the scanner edges. From my experience in image processing so far, this has been hard to define clearly. I would like to get a clearer image where the finger is outlined and defined and the background (i.e. the scanner light/blocks) is removed.
Any help would be appreciated.
Thanks
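One possible route, sketched here in Python/OpenCV purely to illustrate the steps (MATLAB's Image Processing Toolbox has direct equivalents such as imbinarize, bwareafilt and bwboundaries): binarize the frame, keep the largest blob as the finger, and measure distances from points of interest to a hand-defined scanner-edge ROI. The file name, the ROI polygon and the "largest bright blob = finger" assumption are all placeholders for the real setup.

```python
import cv2
import numpy as np

# Placeholder frame; in the real setup this is one extracted video frame.
frame = cv2.imread("004.bmp", cv2.IMREAD_GRAYSCALE)

# Binarize and keep only the largest bright blob (assumed to be the finger),
# which removes the scanner light/blocks from the mask.
_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
finger = max(contours, key=cv2.contourArea)

# Hypothetical scanner-edge ROI, defined once by hand as a polygon.
scanner_edge = np.array([[100, 80], [540, 80], [540, 400], [100, 400]],
                        dtype=np.int32).reshape(-1, 1, 2)

# Example metric: how far the fingertip (topmost finger point) lies from the
# scanner edge; positive = inside the ROI, negative = overshoot outside it.
fingertip = tuple(finger[finger[:, :, 1].argmin()][0])
distance = cv2.pointPolygonTest(
    scanner_edge, (float(fingertip[0]), float(fingertip[1])), True)
print("fingertip:", fingertip, "signed distance to ROI:", distance)
```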

Copy face from Image

I'm a noob to this forum, but wanted to give it a try.
I'm currently learning Objective-C and Cocoa; trying to build my first iPhone app.
One thing I'm working on is allowing the user to cut his/her face from an image they have taken and paste it into another image. (The idea is cut from one image and paste into another image with a spot for a face to go.)
How can this be done? I am thinking I would allow the user to just touch and drag over their face, in the shape of a rectangle, and then allow them to copy.
Thanks for the help.
OK, notwithstanding your somewhat arrogant style of asking, here are some guidelines on how to start: generic Obj-C/iOS development (start from hello world); the UIImage class; the camera API; image processing algorithms; face detection algorithms. Go on gradually and do not try to solve all the problems at once. First write an application that simply loads an arbitrary photo and shows it to the user. Then modify it so that you can crop a specified rectangular area from the image and save it into a new file. Then write an app that switches on the camera so that you can take an image and save it to disk. Then combine what you wrote so that you save only a cropped area of the captured image.
By the time you arrive at this point, you will know much more about software development and image handling. AFTER THIS you can start looking into image processing algorithms. Start here too with something simple, like a trivial blur filter implemented by yourself. If you already know a bit of image processing, search for face detection algorithms on the net. It is even possible that you will find a ready-made framework that includes these features, or at least you will understand the concepts. You can even come back here to Stack Overflow and ask for suggestions about good face detection algorithms; however, we would still prefer that you have already chosen one and have some concrete issue with it.
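When you do get to the face detection step, a hedged sketch of the usual OpenCV route (Haar cascades, shown in Python for brevity; the cascade path assumes the opencv-python package layout and the photo name is a placeholder) looks like this - detection gives you a rectangle, and cropping is just slicing that rectangle out of the image, which is also all a touch-and-drag rectangle selection needs to do:

```python
import cv2

# The frontal-face cascade ships with the opencv-python package.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces; each result is a bounding rectangle (x, y, w, h).
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for i, (x, y, w, h) in enumerate(faces):
    # Crop the face rectangle and save it as its own image.
    face = img[y:y + h, x:x + w]
    cv2.imwrite(f"face_{i}.png", face)
```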

How to draw an image according to the pixels of another image?

Hi all. What I want is to map images. Suppose I have two images of people, one of a fat person and another of a thin person. Now I want to match their faces and eyes: I want to increase or decrease the face size and eye size of one image according to the other, the way Adobe Photoshop lets you make a face fatter or squeeze it. These are the types of image manipulation operations I want to implement, and I don't know where to start.
Please guide and help me. Can I perform all of this with Core Graphics, and if so, how?
Any reference, tutorial address, or sample code is appreciated.
You are probably going to have to deal with some sort of edge detection and face recognition algorithms, at the very least, if this is to be accomplished automatically. Otherwise, if the user is going to be resizing one image to match the other, this will only require simple resizing operations driven, perhaps, by the user's pinch gestures.
UPDATE:
For manual resizing:
Download the source code for the great book Cool iPhone Projects. One of the projects is called 'Touching'. This project contains code that accomplishes what you need: pinch and zoom functionality.
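For the automatic direction mentioned above, a hedged sketch of one simple idea (Python/OpenCV rather than Core Graphics, just to show the concept; file names are placeholders and it assumes a face is found in each image): detect a face in both images and scale one image so its face is roughly the same size as the other's.

```python
import cv2

# Frontal-face cascade shipped with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def image_and_face_width(path):
    """Return the image and the width of its largest detected face
    (assumes at least one face is found; file names are placeholders)."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
    return img, w

img_a, width_a = image_and_face_width("person_a.jpg")
img_b, width_b = image_and_face_width("person_b.jpg")

# Scale image B so its face is roughly as wide as the face in image A.
scale = width_a / width_b
resized_b = cv2.resize(img_b, None, fx=scale, fy=scale)
cv2.imwrite("person_b_matched.png", resized_b)
```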

Shape detection using MATLAB

I am working on a car parking system project. For that, I would like to detect the presence of a car.
Can anybody tell me how I can accomplish this using MATLAB?
Also, what is the algorithm for detecting a car?
There's a whole world of methods for object detection in images, and you need to learn a little about image processing to solve this problem. I suggest you read about template matching or, more generally, about object recognition. Specifically for car detection, if you know the cars will be seen at a certain angle (head-on, for example), I'd try Viola-Jones detection, which is implemented in OpenCV as Haar-based feature cascade detection. Although OpenCV is not a MATLAB library, you can probably find something in MATLAB's image processing toolboxes that does a similar job (or an interface into OpenCV).
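To give an idea of the simplest of these, template matching in OpenCV is only a few lines (a sketch with made-up file names and an arbitrary threshold; MATLAB's normxcorr2 plays a similar role):

```python
import cv2

# Scene and template are placeholders (e.g. one parking spot and a car patch).
scene = cv2.imread("parking_lot.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("car_template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the scene and score every position.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

# A high correlation peak suggests the template (a car) is present there.
if max_val > 0.7:  # threshold chosen arbitrarily for illustration
    h, w = template.shape
    print("car-like match at", max_loc, "size", (w, h), "score", max_val)
```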
Background subtraction would be a simple place to start.
In a nutshell:
Capture an image of your empty parking lot. This is your reference image.
Compare the current image of your parking lot with the reference image. The parts that are different will be of interest (a minimal sketch of this step follows after the list of problems below).
Problems:
You need to keep updating your reference image to stay current with the conditions (e.g. day, night, cloudy, raining). Sometimes this may not be possible, because your reference image needs to have no cars in it for the approach to work.
Moving things in the background (like trees shaking in the wind) will come up as false positives.
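Here is that comparison step as a minimal Python/OpenCV sketch (file names and thresholds are placeholders; in practice one of OpenCV's adaptive background subtractors such as MOG2 helps with the reference-updating problem mentioned above):

```python
import cv2

# Reference image of the empty lot and the current frame (placeholders).
reference = cv2.imread("empty_lot.png", cv2.IMREAD_GRAYSCALE)
current = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

# Pixels that changed with respect to the reference image.
diff = cv2.absdiff(current, reference)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Clean up small speckles (noise, shaking leaves) before counting blobs.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                        cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cars = [c for c in contours if cv2.contourArea(c) > 2000]  # size threshold is arbitrary
print(len(cars), "car-sized changed regions")
```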
Have you considered using 3D/stereoscopic imaging in addition to 'normal' images? If so, this could open up a whole new world of methods and intelligent tricks for removing objects based on their distance from the camera. Any object that is at a certain, fixed distance from the camera (e.g. your background) is then easily removable, and you can process just the new parts of the image (e.g. cars).
If this interests you I can supply you with an algorithm I have developed to detect animals in a livestock pen, which is a similar concept.
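If you go the stereo route, a hedged sketch of the depth-based masking idea in Python/OpenCV (the camera setup, rectification and the disparity threshold are all assumptions here, and the file names are placeholders):

```python
import cv2

# Rectified left/right views of the parking area (placeholder file names;
# a real setup needs calibrated and rectified cameras).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo gives a disparity map: nearer objects -> larger disparity.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # disparity in pixels

# Keep only pixels noticeably closer than the background plane
# (the threshold of 20 pixels is an arbitrary example value).
_, foreground = cv2.threshold(disparity, 20.0, 255, cv2.THRESH_BINARY)
cv2.imwrite("foreground_mask.png", foreground.astype("uint8"))
```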