Background subtracting in MATLAB - matlab

I'm looking to do background subtraction on an image. I'm new to MATLAB and new to image processing/analysis, so sorry if any of this sounds stupid. 1) Other than imsubtract(), are there other ways to do background subtraction (besides comparing one image to another)? 2) In the MathWorks explanation for imsubtract(), why do they make their structuring element a disk? This seems rather difficult so far, because every time I try something, I end up not only subtracting the noisy background but also losing the parts of the image I want to look at!

What kind of image do you work with?
Background subtraction is easy. If you want to subtract a constant value, or a background with the same size as your image, you simply write img = img - background. imsubtract simply makes sure that the output is zero wherever the background is larger than the image.
Background estimation is hard. There you need to know what kind of image you're looking at, otherwise, the background estimation will fail.
If you have, for example, spot or line features that are either all dark on a bright background or all bright on a dark background, you can estimate the background by running a local maximum filter (imdilate) or a local minimum filter (imerode), respectively, over the image, with a structuring element that is larger than your features, so that wherever you place the filter mask, some pixels always cover background. You also want the filter to have roughly the same shape as the features. In your case, if you lose part of your image, you may want to try making the filter larger (but not too large).
Instead of subtracting maximum or minimum, subtracting the median can work well, though you have to choose the filter size such that there's usually a majority of background pixels inside the filter mask. Unfortunately, median filtering is rather slow.
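As a rough illustration of this morphological approach, here is a hedged MATLAB sketch, assuming a grayscale image img with dark features on a bright background (the variable names, filter sizes, and median window are illustrative, not from the question):
% Sketch: morphological background estimation followed by subtraction.
% img, the structuring-element size, and the median window are assumptions.
se = strel('disk', 15);                 % structuring element larger than the features
background = imdilate(img, se);         % local maximum filter -> background estimate
clean = imsubtract(background, img);    % dark features become bright, background ~ 0

% Median-based alternative (slower, but robust when features are sparse):
background_med = medfilt2(img, [31 31]);
clean_med = imsubtract(background_med, img);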

To subtract the background image, you need a model of the background. The simplest model is an image captured as the background along with some allowable deviation (+/- 0-255). Then, background subtraction in MATLAB is pretty simple:
img(abs(img - background) <= threshold) = 0;   % zero out pixels close to the background model
It becomes more difficult when you use a statistical model, but essentially subtracting the background is pretty easy. imsubtract is NOT background subtraction; it is a subtraction filter like you would find in Photoshop. It does not care about background versus foreground, which defeats the point.
Since background subtraction itself is pretty easy, the question becomes more about background estimation. This is a bit more complicated, and generally requires more frames and training to build statistical models of the background (for example, looking at pixels as Gaussian distributions or mixtures of Gaussians, or looking instead at optical flow to determine what's not moving).
If you have access to technical articles (via work or school), "Pfinder: Real-Time Tracking of the Human Body" by Wren et al. gives a pretty simple approach. Or you can just search Google for single-Gaussian background subtraction. There are a number of methods implemented with OpenCV at http://dparks.wikidot.com/source-code that you might find useful.
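As a rough illustration of the single-Gaussian idea, here is a hedged MATLAB sketch that keeps a running mean and standard deviation per pixel and flags pixels that deviate too much; the frame stack, learning rate, and threshold are illustrative assumptions, not taken from the paper:
% Sketch: per-pixel single-Gaussian background model (all values are assumptions).
% 'frames' is assumed to be an H-by-W-by-N stack of grayscale frames (double).
mu  = mean(frames(:,:,1:20), 3);              % initial background mean
sig = std(frames(:,:,1:20), 0, 3) + 1e-3;     % initial spread (avoid divide-by-zero)
alpha = 0.05;                                 % learning rate
k = 3;                                        % deviation threshold in standard deviations

for t = 21:size(frames, 3)
    f = frames(:,:,t);
    foreground = abs(f - mu) > k * sig;       % pixels that don't fit the background model
    mu  = (1 - alpha) * mu + alpha * f;       % slowly adapt the model
    sig = sqrt((1 - alpha) * sig.^2 + alpha * (f - mu).^2);
end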

The Computer Vision System Toolbox has the vision.ForegroundDetector object, which implements a variant of Stauffer and Grimson's GMM background subtraction. The implementation is very fast, leveraging multiple cores. Check out this example of how to use background subtraction as a building block of a system for tracking multiple objects.
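A minimal usage sketch (the video file name and parameter values below are illustrative assumptions, and the Computer Vision System Toolbox is required):
% Sketch: GMM background subtraction with vision.ForegroundDetector.
reader   = VideoReader('myVideo.avi');        % assumed input file
detector = vision.ForegroundDetector('NumGaussians', 3, 'NumTrainingFrames', 50);

while hasFrame(reader)
    frame = readFrame(reader);
    mask  = detector(rgb2gray(frame));        % logical foreground mask
    imshow(mask); drawnow;
end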

Related

How to get the area of the bubble in the image using MATLAB?

Here are some images taken from experiments which show a bubble caused by spheres moving in liquid.
Now I want to get the area of the bubble from every image using MATLAB. The first thing that came to my mind is edge detection, so I tried the following code:
A = imread('D:\1.jpg');     % load the experiment image
BW1 = edge(A, 'sobel');     % Sobel edge detection (A must be a grayscale image)
figure, imshow(BW1)
to get the cavity edge of the picture, which was then cropped manually. As the picture shows, the result (below) doesn't satisfy the requirements. Also, I still don't know how to get the area of the bubble.
So, can someone tell me what I should do?
I think you should use background subtraction and try a simple segmentation.
You could use regionprops to get the area of the bubble:
https://www.mathworks.com/help/images/ref/regionprops.html
I feel like it should work pretty well. If you have a hard time obtaining a clean segmentation, you could probably improve the experimental setup to increase the contrast of the bubble with respect to the background: choose a background as dark as possible and use some lateral illumination to take advantage of the light diffused by the bubble.
Finally, the segmentation should be performed in a region of interest (ROI), since you know the bubble is confined within the tank.
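A hedged sketch of that pipeline, assuming the bubble is brighter than a dark background; the ROI coordinates, threshold, and minimum blob size are illustrative values to be tuned:
% Sketch: ROI + threshold segmentation + regionprops area (values are assumptions).
img = imread('D:\1.jpg');
if size(img, 3) == 3, img = rgb2gray(img); end

roi = img(100:400, 150:500);              % restrict to where the bubble can be
bw  = imbinarize(roi);                    % or: bw = roi > someThreshold;
bw  = imfill(bw, 'holes');                % fill the bubble interior
bw  = bwareaopen(bw, 50);                 % drop small spurious blobs

stats = regionprops(bw, 'Area');
bubbleAreaPixels = max([stats.Area]);     % the bubble should be the largest region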
As for the issue of getting an accurate cavity edge, the Computer Vision System Toolbox has the vision.ForegroundDetector object, which implements a variant of Stauffer and Grimson's GMM background subtraction. The implementation is very fast, leveraging multiple cores. Check out this example of how to use background subtraction.
As for the issue of finding the area of the bubble, use the bwarea command (https://www.mathworks.com/help/images/ref/bwarea.html); it will sum up all the white pixels in the image.
I believe background subtraction is the most efficient way to calculate this bubble area. Note that you may need to apply morphological opening and closing afterwards to filter out other regions (see imopen and imclose at https://uk.mathworks.com/help/images/ref/imopen.html), and then apply bwarea to calculate the area. You could also use the impixelinfo command to compare the intensity level of the bubbles and the other areas, and then threshold the image to extract the bubbles; that only works if the same threshold level holds for all images. You can also combine all of these techniques, depending entirely on your images, to achieve better results.
Other shape-based techniques can also be used to extract the bubble region's area.
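A minimal sketch of this thresholding route, with the threshold level and structuring-element sizes as illustrative assumptions:
% Sketch: fixed threshold + morphological cleanup + bwarea (values are assumptions).
img = imread('D:\1.jpg');
if size(img, 3) == 3, img = rgb2gray(img); end

bw = img > 180;                           % level chosen by inspecting impixelinfo
bw = imopen(bw, strel('disk', 3));        % remove small bright specks
bw = imclose(bw, strel('disk', 7));       % close gaps in the bubble region

areaEstimate = bwarea(bw);                % weighted count of white pixels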

Stitching overlaying images from different cameras in Matlab

Two cameras take two images of a wooden plank. The images have an overlapping region of the plank, which I need to stitch together in a way that looks natural and preferably seamless to the human eye for inspection purposes. The images are cropped to the same size and masked to remove the background and most of the non-overlapping areas, but the plank can have a slight tilt on the conveyor belt.
Currently I'm using the normxcorr2 function on the general overlap area, following ideas from the MATLAB tutorial for normxcorr2, to try to locate one image within the other and work out an overlay offset. However, this fails quite often, as normxcorr2 returns a zero offset, resulting in a bad stitch:
c = normxcorr2(plank_part1, plank_part2);
% Find the peak in the cross-correlation:
[ypeak, xpeak] = find(c == max(c(:)));
% Account for the padding that normxcorr2 adds (plank_part1 is the template):
yoffSet = ypeak - size(plank_part1, 1);
xoffSet = xpeak - size(plank_part1, 2);
[xoffSet, yoffSet]

ans =
     0     0
It would seem normxcorr2 cannot always find the correct overlay of the images, or any overlay at all(?), even though I try to make it easier by increasing the grayscale contrast with histeq. My guess is that the amount of "gray-ish" area from the sapwood overwhelms the distinct knots, which are the important parts to stitch properly.
Does anyone know of a way to increase the likelihood of this stitching succeeding, maybe with some more preprocessing, or of any other MATLAB functions that would make this work better?
P.S. I cannot use anything but freely accessible scripts, as anything else would probably cause license/copyright issues for my project.
Thank you for your time in trying to help!
You should look at the following link. The term you should be looking for is image registration. There are more advanced methods than normxcorr2.
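As one example of a more advanced registration method, here is a hedged sketch using SURF feature matching; note that it relies on the Computer Vision Toolbox, so it may not fit the licensing constraint mentioned above, and the variable names are only illustrative:
% Sketch: feature-based registration (requires the Computer Vision Toolbox).
% plank_part1 / plank_part2 are assumed to be grayscale images of the two views.
pts1 = detectSURFFeatures(plank_part1);
pts2 = detectSURFFeatures(plank_part2);
[f1, vpts1] = extractFeatures(plank_part1, pts1);
[f2, vpts2] = extractFeatures(plank_part2, pts2);

pairs    = matchFeatures(f1, f2);
matched1 = vpts1(pairs(:,1));
matched2 = vpts2(pairs(:,2));

% Estimate a transform mapping image 2 onto image 1 (robust to mismatches):
tform = estimateGeometricTransform(matched2, matched1, 'similarity');
registered = imwarp(plank_part2, tform, 'OutputView', imref2d(size(plank_part1)));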

Static image calibration

I am capturing static images of particulate biological materials on the millimeter scale, and then processing them in MATLAB. My routine is working well so far, but I am using a rudimentary calibration procedure where I include some coins in the image, automatically find them based on their size and circularity, count their pixels, and then remove them. This allows me to generate a calibration line with input "area (mm^2)" and output "area (pixels)," which I then use to convert the pixel area of the particles into physical units of millimeters squared.
My question is: is there a better calibrant object that I can use, such as a stage graticule or "phantom" as some people seem to call them? Do you know where I could purchase such a thing? I can't even seem to find a possible vendor. Is there another rigorous way to approach this problem without using calibrant objects in the field of view?
Thanks in advance.
Clay
Image calibration is always done using features of known size or distance.
You could calculate the scale based on nominal specifications, but your imaging equipment will always have some production tolerances, and your object distance is only known to a certain accuracy...
So it's always safer and simpler to actually calibrate your scale.
As a calibrant you can use anything that meets your requirements. If you know its size well enough and you are able to extract its dimensions in pixels properly, you can use it...
I don't know your requirements or your budget, but if you want something very precise and fancy, you can use glass masks.
There are temperature-stable glass slides coated with chrome, for example. Many companies produce such masks to custom specifications (IMT AG, BVM maskshop, ...). Also, most optics lab equipment suppliers have such things in stock (Edmund Optics, Newport, ...).
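For the calibration step itself, here is a hedged sketch of turning a detected calibrant of known size into a mm^2-per-pixel factor; the coin diameter, file name, and circularity threshold are illustrative assumptions:
% Sketch: derive a mm^2-per-pixel scale from a calibrant of known size.
knownDiameter_mm = 24.26;                        % e.g. a US quarter (assumed value)
bw = imbinarize(rgb2gray(imread('scene.jpg')));  % assumed file name

stats = regionprops(bw, 'Area', 'Perimeter');
circularity = 4*pi*[stats.Area] ./ [stats.Perimeter].^2;
coin = stats(circularity > 0.9 & [stats.Area] > 500);  % round, coin-sized blobs

knownArea_mm2 = pi * (knownDiameter_mm/2)^2;
mm2_per_pixel = knownArea_mm2 / coin(1).Area;    % calibration factor

% Any particle's physical area is then: particleArea_mm2 = particlePixels * mm2_per_pixel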

Detecting shape from the predefined shape and cropping the background

I have several images of a pugmark with a lot of irrelevant background region. I cannot use intensity-based algorithms to separate the background from the foreground.
I have tried several methods. One of them is detecting the object in a homogeneous-intensity image,
but this does not work with rough-texture images like
http://img803.imageshack.us/img803/4654/p1030076b.jpg
http://imageshack.us/a/img802/5982/cub1.jpg
http://imageshack.us/a/img42/6530/cub2.jpg
There could be three possible methods:
1) If I can reduce the roughness of the image and obtain a smoother texture, i.e. a flatter surface.
2) If I could detect the pugmark-like shape in these images by defining a rough pugmark shape in a database and then removing the background to obtain an image like http://i.imgur.com/W0MFYmQ.png
3) If I could detect the regions with depth and separate them from the background based on the difference in their depths.
Please tell me if any of these methods would work and, if so, how to implement them.
I have a hunch that this problem could benefit from using polynomial texture maps.
See here: http://www.hpl.hp.com/research/ptm/
You might want to consider top-down information in the process. See, for example, this work.
It looks like you're close enough to the pugmark, so I think you should be able to detect pugmarks using the Viola-Jones algorithm. Maybe a PCA-like algorithm such as Eigenfaces would work too; even if you're not trying to recognize a particular pugmark, it can still be used to tell whether or not there is a pugmark in the image.
Have you tried edge detection on your image? I guess it should be possible to fine-tune the Canny edge detector thresholds to get rid of the noise (if that's not good enough, low-pass filter your image first), then do shape recognition on what remains (you would then be in the field of geometric feature learning and structural matching). Viola-Jones and possibly a PCA-like algorithm would be my first try, though.
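A brief sketch of that preprocessing idea in MATLAB; the file name, smoothing sigma, and Canny thresholds are illustrative assumptions to be tuned per image:
% Sketch: low-pass filter first, then tune the Canny thresholds (values are assumptions).
img = imread('pugmark.jpg');
if size(img, 3) == 3, img = rgb2gray(img); end

smoothed = imgaussfilt(img, 2);                  % suppress the rough texture
edges = edge(smoothed, 'canny', [0.05 0.20]);    % [low high] thresholds to tune
imshow(edges)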

fractal microscope simulator

I've done work on software used for controlling imaging hardware, such as microscopes, that are sometimes hard to get time on. This means it is difficult to test out new/different algorithms which would require access to the instrument. I'd like to create a synthetic instrument that could be used for some of these testing purposes, and I was thinking of using some kind of fractal image generation to create the synthetic images. The key would be to be able to generate features at many different 'magnifications' and locations in some sort of deterministic manner. This is because some of the algorithms being tested may need to pan/zoom and relocate previously 'imaged' areas. Onto these base images I can then apply whatever instrument 'defects' are appropriate (focus, noise, saturation, etc.).
I'm at a bit of a loss on how to select/implement a good fractal algorithm for the base image. Any help would be appreciated. Preferably it would have the following qualities:
Be fast at rendering new image areas.
Fairly wide 'feature' coverage at as many locations and scales as possible.
Be deterministic (but initialized from random starting parameters).
Ability to tune to make images look more like 'real' images.
Item 2 is important; for example, a Mandelbrot set, with its large smooth/empty regions, might not be good, since the software controlling the synthetic scope might wander into one of those areas.
So far I've thought of using something like a Mandelbrot set, but randomly shifting/rotating/scaling and merging two or more fractal sets to get more complete 'feature' coverage.
I've also seen images of the fractal flame algorithms and they seem to generate images that might be useful (and nice to look at).
Finally, I've thought of using some sort of paused particle simulation run to generate images that are more cell-like (my current imaging target), but I'm not sure if this approach can be made to work with the other requirements.
Edit:
@Jeffrey - So it sounds like some kind of terrain generation might be the way to go, as long as I have complete control over the PRNG. Perhaps I can use some stored initial seed + x position + y position to generate my random numbers? But then I am unsure of how to consistently generate the terrain across scales, except, as you mentioned, to create the base terrain at the coarsest scale and, at certain pre-determined 'magnifications', add new deterministic pseudo-random variations to this base. I'd also have to be careful about when to generate the next level of terrain, since if I'm too aggressive I'd have to generate and integrate the results appropriately for display at the coarser level... This is why I initially was leaning toward a more 'traditional' fractal, since this integration from finer scales would be handled more implicitly (I think).
The idea behind a fractal terrain creation algorithm is to build the image at each scale separately. For a landscape it's easy: just make a small array of height values, and set them randomly. Then scale it up to a larger array, averaging the values so that the contour is smooth, and then add small random amounts to those values. Then scale it up, etc. The original small bumps have become mountains, and they are filled with complex terrain.
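A rough MATLAB sketch of that upscale-and-perturb loop; the grid sizes, number of octaves, and amplitudes are illustrative assumptions:
% Sketch: fractal-style terrain by repeated upscaling plus random detail.
rng(42);                                          % fixed seed keeps the result deterministic
terrain = rand(4);                                % coarse base heights
amplitude = 0.5;

for octave = 1:6
    terrain = imresize(terrain, 2, 'bilinear');              % scale up and smooth
    terrain = terrain + amplitude * randn(size(terrain));    % add finer detail
    amplitude = amplitude / 2;                               % smaller bumps at finer scales
end
imagesc(terrain); axis image; colormap gray;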
There are two particular difficulties with the problem posed here, though. First, you don't want to store any of these values, since it would be potentially huge. Secondly, the features at each scale are of a different kind than the features at other scales.
These problems are not insurmountable.
Basically, you would divide the image up into a grid, and using deterministic pseudorandom numbers establish the key features of each square in the grid. For example, each square could have a certain density of cell types.
At the next level of magnification, subdivide each square into another grid and apply a gradient of values across it based on the values of the containing square and its surrounding squares. Then apply pseudorandom variations to that, seeded with the containing square's grid coordinates. For the random seed, always use the coordinates of the immediately containing square of the subdivision under consideration, regardless of where the image is cropped, in order to ensure that it is recreated correctly across multiple runs.
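A small sketch of that coordinate-seeded idea, which makes each square's random values reproducible no matter where the view is cropped; the seed-mixing constants are an illustrative choice, not a required formula:
% Sketch: deterministic per-square randomness seeded by grid coordinates.
function vals = squareNoise(gx, gy, baseSeed, n)
    seed = mod(baseSeed + 73856093*gx + 19349663*gy, 2^31 - 1);
    prev = rng(seed);         % reseed deterministically for this square
    vals = rand(n);           % n-by-n pseudorandom detail for square (gx, gy)
    rng(prev);                % restore the previous RNG state
end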
At some level of magnification the random values go from being densities of particle types to particle locations. Then for each particle, there are particle features. Then features on those features.
Although arbitrary left/right and up/down scrolling will be desired, the image at all levels of magnification above the current scene will have to be calculated each time the frame is shifted, to ensure that all necessary features are included. This way the image can be scrolled from one cell to another without loss of consistency. Particle simulations can be used to ensure that cells or cell features don't overlap. This could be done in a repeatable, deterministic manner.
And don't forget to apply a smoothing gradient based on averages of surrounding squares at higher levels before adding in the random variations. Otherwise, the abrupt changes will make the squares themselves appear in the images!
This answer is somewhat rambling and probably confusing, but that is the best I can explain it right now. I hope it helps!