Extracting measurements from a finger via ROI and image processing in MATLAB

I am trying to do a number of things in MATLAB but I am getting a bit lost about which techniques to use. My ultimate goal is to extract various measurements from a user's fingerprint presentation, e.g. how far the finger over- or undershoots, the coordinates of where the finger enters, and the angle of the finger.
In my current setup, I have a web camera recording a top-down view of the presentation; I then break the video file down into individual frames. https://www.dropbox.com/s/zhvo1vs2615wr29/004.bmp?dl=0
What I am working on at the moment is using ROI-based image processing to create a binary mask around the edges of the scanner. I'm using the im2bw function to get a binarized image, and this is the result. https://www.dropbox.com/s/1re7a3hl90pggyl/mASK.bmp?dl=0
What I could use is some guidance on where to go from here. I want to take measurements relative to the defined ROI to work out various metrics, e.g. how far a certain point is from the ROI, so I need some sort of border for the scanner edges. In my image-processing attempts so far, that border has been hard to define cleanly. I would also like a clearer image in which the finger is outlined and defined and the background (i.e. the scanner light/blocks) is removed.
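For reference, this is roughly the kind of measurement pipeline I have in mind, as a rough sketch (assuming the Image Processing Toolbox; the frame filename and the ROI corner coordinates are placeholders for the real scanner edges):

    % Sketch: isolate the finger blob and measure it against a scanner-edge ROI.
    I = imread('004.bmp');
    G = rgb2gray(I);                       % assumes the frame is RGB

    % Binarize; imbinarize's adaptive mode may cope better with the
    % scanner light than a single global im2bw threshold.
    BW = imbinarize(G, 'adaptive');
    BW = imfill(BW, 'holes');
    BW = bwareafilt(BW, 1);                % keep the largest blob (the finger)

    % Centroid and orientation of the finger blob.
    stats = regionprops(BW, 'Centroid', 'Orientation');

    % Scanner edge as a polygonal ROI (placeholder coordinates).
    xEdge = [100 500 500 100];
    yEdge = [ 80  80 400 400];
    roiMask = poly2mask(xEdge, yEdge, size(BW, 1), size(BW, 2));

    % Distance from the finger centroid to the nearest ROI border pixel.
    D = bwdist(bwperim(roiMask));
    c = round(stats(1).Centroid);          % Centroid is [x y]
    fprintf('Angle: %.1f deg, distance to ROI edge: %.1f px\n', ...
            stats(1).Orientation, D(c(2), c(1)));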
Any help would be appreciated.
Thanks

Related

Area measurement of fluorescent powder from finger contacts - batch processing

I have 120 photos like the one below showing the amount of fluorescent powder deposited onto a surface when it is touched by fingers. The photo is taken under UV light. You can see five fingerprints and the reflection from the light source.
I'd like to know if there is an automated way of estimating the area of the fluorescent fingerprints in batch mode. We have been using ImageJ to manually select a particular print and estimate its area. Is it possible to automatically recognise the fingerprints in ImageJ and measure them for all 5 prints on each of the 120 photos?
Note: you can clearly see that the print on the right is quite well defined, but the one on the left is quite diffuse.
First, the data is useless without a scale, and the photos will be hard to process without a fixed set-up. I'd spend time building a photo set-up that minimizes glare and keeps the scale constant, then approach the problem using the Threshold tool to find the prints, make selections from the resulting mask, and measure the areas. I'd then create a macro to batch-process them.
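If the batch step ever outgrows an ImageJ macro, the same threshold-and-measure idea could be sketched in MATLAB (the folder name and the pixels-per-millimetre scale are placeholders; note the light-source reflection may survive thresholding as a blob and need excluding):

    % Sketch: batch-estimate fluorescent print areas by thresholding.
    files   = dir(fullfile('photos', '*.jpg'));   % placeholder folder/pattern
    pxPerMm = 10;                                 % placeholder scale: calibrate!
    for k = 1:numel(files)
        G  = rgb2gray(imread(fullfile('photos', files(k).name)));
        BW = imbinarize(G);              % Otsu threshold on the UV glow
        BW = bwareaopen(BW, 200);        % drop specks smaller than a print
        BW = bwareafilt(BW, 5);          % keep the 5 largest blobs (the prints)
        stats = regionprops(BW, 'Area');
        fprintf('%s: %s mm^2\n', files(k).name, ...
                mat2str([stats.Area] / pxPerMm^2, 4));
    end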

How to detect the contours of an object and describe it for comparison on a server, with ARKit

I want to detect a shape and then describe it (somehow) so I can compare it with server data.
So the first question is: is it possible to detect a blob-like shape with ARKit?
To be more specific, let me describe my use case generally.
I want to scan an image with the phone, extract the specific shape, send it to a server, compare the two images on the server (the server image is the real one; the scanned image would be very similar) and then send back some data. I am not asking about the server side; the only server-side question is what I should compare: images using OpenCV, some mathematical description of both images with a similarity measure, etc.
If the question is hard to understand, let me split it into two simpler questions:
1) How to scan a 2D object with an iPhone and save it (trim the specific shape from its background when the object is black and the background is white)?
2) How to describe the scanned object for comparison with an almost identical object?
ARKit has no use here.
You will probably need a lot of CoreImage (for fixing perspective distortion and binarization) and OpenCV logic.
Perhaps Vision can help you a little bit with getting ROI from the entire frame, especially if the waveform image is located in some kind of rectangle.
Perhaps you can train a custom ML model that will recognize specific waveforms or waveforms in general to use with Vision.
In any case, it is not a trivial task.
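Although the capture happens on iOS, the "mathematical description" route is easy to prototype offline. A crude MATLAB sketch (the filenames are placeholders; a production system would use stronger descriptors such as Hu moments, Fourier descriptors, or OpenCV's matchShapes on the server):

    % Sketch: describe each shape by a small vector of region properties,
    % then compare the two vectors.
    files = {'scanned.png', 'reference.png'};    % placeholder filenames
    d = zeros(2, 3);
    for k = 1:2
        G  = rgb2gray(imread(files{k}));
        BW = ~imbinarize(G);          % object is black on white, so invert
        BW = bwareafilt(BW, 1);       % keep the main blob only
        s  = regionprops(BW, 'Eccentricity', 'Solidity', 'Extent');
        d(k, :) = [s.Eccentricity, s.Solidity, s.Extent];
    end
    fprintf('descriptor distance: %.4f (smaller = more similar)\n', ...
            norm(d(1, :) - d(2, :)));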

Is it possible to create a 3D photo from a normal photo?

If I have understood correctly, 3D 360 photos are created from a panorama photo, so I guess it should be possible to create a 3D photo (non-360) from a normal photo. But how? I did not find anything on Google. Any idea what I should search for?
If nothing is available (which I suspect), I'll try duplicating the same photo for each eye, with one picture shifted a little to the right and the other a little to the left. But I assume the real distortion algorithm is much more complicated.
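The duplicate-and-shift idea is at least easy to prototype, e.g. in MATLAB (the 8-pixel shift is arbitrary; with no depth map every pixel gets the same disparity, so the result will look flat rather than truly 3D):

    % Sketch: fake a stereo pair by shifting one copy left and the other right.
    I     = imread('photo.jpg');        % placeholder filename
    left  = imtranslate(I, [-8, 0]);    % arbitrary uniform shift
    right = imtranslate(I, [ 8, 0]);
    imwrite([left, right], 'stereo_pair.jpg');   % side-by-side stereo layout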
Note: I'm also receiving answers here: https://plus.google.com/u/0/115463690952639951338/posts/4KdqFcqUTT9
I am in no way certain of this, but my intuition on how 3D 360 images are created in GoogleVR is this:
As you take a panorama image, the camera actually captures a series of images. As you turn the phone, the perspective changes slightly with each image, not only in angle but also in offset (except in the unlikely event that you spin the phone around its own axis). When the final image is stitched together, one image is created for each eye, picking suitable images from the series so that a 3D effect appears when they are viewed together. The same "area" of the image for each eye comes from a different source image.
You can't do anything similar with a single image. It's the multitude of images produced, each with a different perspective coming from the turning of the phone, that enables the algorithm to create a 3D image.
A 2D image lacks a dimension, so it cannot be converted to 3D just like that, but there are clever workarounds. For example, the Google Pixel, even though it doesn't have two cameras, can make an image seem 3D by applying a machine-learning algorithm that creates the effect of perspective and depth through selective blurring.
3D photos can't be taken with a normal camera, but 360 photos can. There are many apps for doing this, and there are also algorithms for doing it programmatically.

Human Detection using edge detection

I am trying to detect the exact silhouette of a human body in this dataset using background subtraction. After doing some thresholding I was getting split blobs, so I followed this tutorial by Steve, but now I am getting blobs other than the human body, as shown below.
Here is the original:
The background was taken to be the first frame of the video; after subtracting it from the original image I get the following:
Basic thresholding then gives the following image, which is split into separate areas:
And using Steve's method I get this:
But this contains a lot of area that is not part of the human body. Any suggestions on how, perhaps using edges, I can get a good blob of the human body?
EDIT
As @lennon310 asked me to upload the color image, here it is:
And as @NKN asked me to upload the edge information of the same image, here it is:
Instead of literally subtracting the background, try using the vision.ForegroundDetector object, which is part of the Computer Vision System Toolbox. It implements mixture-of-Gaussians adaptive background modeling, and it may give you a cleaner segmentation.
Having said that, it is very unlikely that you will get the "exact" silhouette. Some error is inevitable.
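A minimal sketch of that approach (the video filename and parameter values are placeholders to tune for your dataset):

    % Sketch: mixture-of-Gaussians background modeling, then mask clean-up.
    detector = vision.ForegroundDetector('NumGaussians', 3, ...
                                         'NumTrainingFrames', 50);
    v = VideoReader('walk.avi');                   % placeholder filename
    while hasFrame(v)
        frame  = readFrame(v);
        fgMask = detector(frame);                  % logical foreground mask
        fgMask = imopen(fgMask, strel('disk', 3)); % remove speckle
        fgMask = imfill(fgMask, 'holes');          % close gaps inside the body
        imshow(fgMask); drawnow;
    end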
In your result image, you have two types of black regions: one is moving and the other is stationary.
So when you want to fill the human body, you have to choose only the moving regions. For this purpose, I suggest segmenting your image with the help of an optical-flow technique, so you know where the moving regions are.
This is an interesting tutorial that does what you need:
http://docs.opencv.org/trunk/doc/py_tutorials/py_video/py_lucas_kanade/py_lucas_kanade.html
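Since the question already uses MATLAB, the same idea might be sketched there with opticalFlowLK instead (the filename and magnitude threshold are placeholders):

    % Sketch: Lucas-Kanade optical flow to keep only the moving regions.
    flowModel = opticalFlowLK('NoiseThreshold', 0.009);
    v = VideoReader('walk.avi');                   % placeholder filename
    while hasFrame(v)
        gray = rgb2gray(readFrame(v));
        flow = estimateFlow(flowModel, gray);
        movingMask = flow.Magnitude > 0.5;         % placeholder threshold
        movingMask = bwareaopen(movingMask, 100);  % drop small noisy patches
        imshow(movingMask); drawnow;
    end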

Detecting particular objects in an image, i.e. image segmentation, with OpenCV

I have to select a particular object visible in my image on the iPhone.
Basically, my project is to segment image objects based on where the user touches.
The method I am following is to first detect the contours in the image and then select a particular contour based on the finger touch.
Is there any other method that would be more robust, given that I have to run it on video frames?
I am using OpenCV and the iPhone for the project.
Please help if there is any other idea that has been implemented or is feasible to implement.
Have you looked at SIFT or SURF implementations? They both track object features and are resilient (to a certain degree) to rotation, translation, and scaling.
Also check out FAST, a corner-detection algorithm that might help you; they have an app on the App Store showing how quick it is, too.
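The question targets OpenCV on the iPhone, but the feature-matching idea is quick to prototype, for instance with MATLAB's Computer Vision Toolbox (the frame filenames are placeholders; OpenCV's SURF and FAST detectors are the on-device counterparts):

    % Sketch: match SURF features between two consecutive video frames.
    I1 = rgb2gray(imread('frame1.png'));   % placeholder frames
    I2 = rgb2gray(imread('frame2.png'));
    pts1 = detectSURFFeatures(I1);
    pts2 = detectSURFFeatures(I2);
    [f1, vpts1] = extractFeatures(I1, pts1);
    [f2, vpts2] = extractFeatures(I2, pts2);
    pairs = matchFeatures(f1, f2);         % indices of matching descriptors
    showMatchedFeatures(I1, I2, vpts1(pairs(:, 1)), vpts2(pairs(:, 2)), 'montage');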