Image segmentation algorithm in MATLAB

I need to implement an image segmentation function in MATLAB based on the principles of the connected components algorithm, but with a few modifications. This is intended for very simple, 2D images, with a background color and some objects in different colors.
The idea is that, taking the image as a matrix, I provide a tool to select the background color (it will vary for every image). Then, once the background color value is selected, I have to segment all the objects in the image, and the result should be a labeled matrix of the same size as the image, with 0's for the background and a different number for each object.
This is a graphic example of what I mean:
I understand the idea of how to do it, but I don't know how to implement it in MATLAB. For each pixel (matrix position) I should mark it as visited, then assign 0 if its value matches the background color, and another value otherwise. The objects can be formed by different colors, so in the end I need to segment groups of adjacent pixels, whatever their color is. I also have to use 8-connectivity, so that the green object in the example image counts as one object and not four different ones. Finally, the objects should be numbered from top to bottom and from left to right.
Is there a simple way of doing this in MATLAB? I know the bwlabel function, but it works for binary images only, so I'd like to adapt it to my case.

Once you know the background color, you can easily convert your image into a binary mask of the same size:
bw = img ~= bg_color;
Once you have the binary mask you can call bwlabel with the 8-connectivity argument, as you suggested yourself.
Note: you might want to convert your color image from RGB representation to an indexed image using rgb2ind before processing.
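Putting those pieces together, a minimal sketch of the whole pipeline might look like this (the file name and the corner-pixel background pick are illustrative assumptions, since you said you select the background with a tool):

    rgb = imread('objects.png');        % hypothetical input image
    [img, map] = rgb2ind(rgb, 256);     % indexed image: one value per colour
    bg_color = img(1, 1);               % assume a corner pixel is background
    bw = img ~= bg_color;               % binary mask: true wherever not background
    labels = bwlabel(bw, 8);            % 8-connected labelling; 0 = background

Note that bwlabel numbers components in column-major scan order, so if you need strict top-to-bottom, left-to-right numbering you would have to renumber the labels afterwards (for example by sorting the components on the position of their first pixel).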

Related

MATLAB: How do I resize (connected) components in a 3D binary image sequence without changing the dimensions of the sequence?

I'd like to resize the components contained in a 3D binary image sequence without changing any of the dimensions of the sequence itself.
I'm not sure if I need to do it on a component-by-component basis; if so, how do I create a transform such that the resized components are re-positioned 'correctly' in the image sequence? By 'correctly', I mean with the same centre of mass as the original unprocessed components.
(If that last paragraph doesn't make sense then please ignore)
A 2D example: suppose I wanted to enlarge the white blobs by 10% in the following [295x445] image
How would you do this without making the image itself larger?
You could use the imdilate function to dilate the regions of interest. The examples in its documentation show how to use this function.
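For the 2D case, a minimal sketch might look like this (the file name and radius are assumptions; for the 3D sequence you would use a 3D structuring element such as strel('sphere', r) on newer releases):

    bw = imread('blobs.png') > 0;   % hypothetical binary input
    se = strel('disk', 5);          % structuring element; the radius controls the growth
    bigger = imdilate(bw, se);      % blobs grow, image dimensions stay [295x445]

Keep in mind that dilation grows every blob by a fixed number of pixels rather than by a percentage of its size; for a true 10% enlargement about each component's centre of mass you would have to process the components one by one (e.g. locate them with regionprops, imresize each cropped component, and paste it back centred on its original centroid).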

MATLAB: display RGB values of a fits image

I want to read a .fits image of a wide-field sky and display the RGB values contained in a star. Can you please suggest a method to do so?
I have used fitsread to read in the image, but I am not able to show the RGB values for specific locations (stars).
In order to do this, you'll need a proper RGB FITS file. The only .fits viewer I know of, ds9, does not support saving RGB FITS files; it saves them as three separate (R, G, B) FITS images. You can use "getpix" from wcstools (http://tdc-www.harvard.edu/wcstools/) or scisoft (http://www.eso.org/sci/software/scisoft/) on the individual frames. Note that "getpix" returns the pixel value given an image (x,y) location. ds9 does not provide the physical image location, but rather the WCS coordinates, so you may have to convert to image coordinates before calling getpix.
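If you do end up with three separate per-channel FITS files, a minimal MATLAB sketch might look like this (the file names and coordinates are made up for illustration):

    r = fitsread('star_r.fits');        % hypothetical per-channel FITS files
    g = fitsread('star_g.fits');
    b = fitsread('star_b.fits');
    rgb = cat(3, r, g, b);              % stack into an M-by-N-by-3 array
    x = 120; y = 85;                    % image coordinates of the star (example values)
    pixel = squeeze(rgb(y, x, :))       % [R; G; B] values; note MATLAB indexes (row, column)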

How to convert a black and white photo that was originally colored, back to its original color?

I've converted a colored photo to black and white and bolded the edges. Now I need to convert it back to its original color, keeping the bolded edges. Is there any function in MATLAB which allows me to do so?
Once you remove the colour from an image, there is no possible way to automatically put it back. You're basically reducing a set of 16,777,216 colours to a set of 256 - on average each shade of grey has 65,536 equivalent colours, and without the original image there's no way to guess which it could be.
Now, if you were to take the bolded lines from your black-and-white image and paint them on top of the original coloured image, that might end up producing what you're looking for.
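That overlay is straightforward; a minimal sketch, assuming the bolded lines are dark in the black-and-white version (file names are illustrative):

    rgb = imread('original.jpg');           % hypothetical original colour photo
    bolded = imread('bolded_bw.png');       % hypothetical B/W image with bolded edges
    mask = bolded < 128;                    % dark bolded lines -> logical mask
    out = rgb;
    out(repmat(mask, [1 1 3])) = 0;         % paint the lines black on the colour image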
If what you are trying to do is to apply some filter to the B/W image and then combine the result with the original color, I suggest you convert your image to a color space with a lightness channel that suits your needs (for example L*a*b*, if you need the lightness to be perceptually uniform with respect to human perception of differences) and apply your filter only to the lightness channel.
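A minimal sketch of this idea, using rgb2lab/lab2rgb (available in newer MATLAB releases; older ones use makecform/applycform) and an unsharp kernel as a stand-in for whatever edge-bolding filter you used:

    rgb = im2double(imread('photo.jpg'));             % hypothetical input
    lab = rgb2lab(rgb);                               % L in [0,100]; a,b carry the colour
    L = imfilter(lab(:, :, 1), fspecial('unsharp'));  % filter the lightness channel only
    lab(:, :, 1) = L;
    out = lab2rgb(lab);                               % colours untouched, edges emphasised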

Count black spots in an image - iPhone - Objective-C

I need to count the number of black spots in an image (not the percentage of black, but the count). Can anyone suggest a step-by-step procedure used in image manipulation to count the spots?
Objective : Count black spots in an image
What I've done till now:
1. Converted image to grayscale
2. Read the pixels for their intensity values
3. I have set a threshold to find darker areas
Other implementations:
1. Gaussian blur
2. Histogram equalisations
What I have browsed:
Flood fill algorithms, watershed algorithms
Thanks a lot.
You should first "label" the image, then count the number of labels you have found.
The label operation is the first step in a blob analysis: it groups similar adjacent pixels into a single object and assigns a value to this object. The condition for grouping is generally a background/foreground distinction: the label operation will group adjacent pixels which are part of the foreground, where the background is defined as pure black or pure white, and the foreground is any pixel whose color is not the color of the background.
The label operation is pretty easy to implement and does not require many resources.
_(See the Wikipedia article on connected-component labeling for more information. A good paper on the implementation of the label operation is "Two Strategies to Speed up Connected Component Labeling Algorithms" by Kesheng Wu, Ekow Otoo and Kenji Suzuki.)_
After labelling, count the number of labels (you can even count the labels while labelling), and you have the number of "black spots".
The next step is defining what a black spot is: converting your input image into a grayscale image (by converting it to HSL and keeping the luminance plane, for example) and then applying a threshold should do it. If the illumination of your input image is not even, you may need a better thresholding algorithm (a form of adaptive threshold).
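The whole pipeline is quick to prototype before porting it to the phone; here is a minimal sketch in MATLAB, the language used elsewhere on this page (the file name and the threshold of 50 are illustrative assumptions):

    gray = rgb2gray(imread('spots.png'));   % hypothetical input
    bw = gray < 50;                         % fixed threshold: true for dark pixels
    % with uneven illumination, an adaptive threshold works better:
    % bw = ~imbinarize(gray, adaptthresh(gray, 0.5, 'ForegroundPolarity', 'dark'));
    cc = bwconncomp(bw, 8);                 % group 8-connected dark pixels into spots
    num_spots = cc.NumObjects               % the count of black spots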
It sounds like you want to label the black spots (blobs) using a binary image labelling algorithm. That should give you a place to start.

iPhone SDK boundary checking for coloring

I'm creating an app where the user already has an image (with different objects) without colors; I have to detect the object and then color it with the respective color when the user touches that object. How should I do this? Can anyone help me?
I would say that this is non-trivial. I can only give hints, since I have not written such an app yet.
First, you need to convert the image into a CGImageRef, for example by doing [uiimage_object CGImage].
Next you need to convert the CGImageRef into an array of pixel colors. You can follow the tutorial at http://www.fiveminutes.eu/iphone-image-processing/ for sample code. But for your app you need to convert the array into two dimensions based on the image width and height.
Then, use the coordinates of the user's touch to access the exact pixel color value in the array. Next you read off the color values of the surrounding pixels and determine whether each is similar to the touched pixel's color or not (you might need to read some Wikipedia articles etc. on doing the color comparison). If the color is similar, change it to the one you want. Recurse until the surrounding color is different, i.e. you hit the boundary.
When you are finished modifying the pixel color array, you need to convert it back into a CGImageRef using the CGImageCreate function. Then you convert back to a UIImage using [UIImage imageWithCGImage:imageref].
Now you are on your own to implement these steps in code. It would be unreasonable to expect me to code all that for you, wouldn't it?
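For the neighbour-walking step in the middle, the logic is the same flood fill in any language; here is a minimal sketch in MATLAB (used elsewhere on this page), assuming a grayscale image and a simple tolerance test for "similar colour". In the Objective-C version the same walk runs over the pixel array you extracted in step two.

    function img = flood_fill(img, seed_r, seed_c, new_val, tol)
    % Fill the region of similar pixels around (seed_r, seed_c) with new_val.
    target = double(img(seed_r, seed_c));
    visited = false(size(img));
    stack = [seed_r, seed_c];            % explicit stack instead of recursion
    while ~isempty(stack)
        r = stack(end, 1); c = stack(end, 2);
        stack(end, :) = [];
        if r < 1 || r > size(img, 1) || c < 1 || c > size(img, 2) || visited(r, c)
            continue;
        end
        if abs(double(img(r, c)) - target) > tol
            continue;                    % colour too different: we hit the boundary
        end
        visited(r, c) = true;
        img(r, c) = new_val;
        stack = [stack; r-1 c; r+1 c; r c-1; r c+1];   % 4-connected neighbours
    end
    end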