Differentiating size of Lego blocks in an image - MATLAB

I wanted to ask how I could count a certain type of Lego block in an image in MATLAB.
I am able to count blocks of a certain color.
But I cannot seem to work out how to differentiate the size and type of a block. I need to count the number of 4-by-2 blue blocks in a set of images.
I need to count how many of this block
and of this block are in the images below.
This includes those that are upside down or at different orientations.
Examples of images are below.
In this image there are 3 blue and 1 red
In this image there is only 1 blue
Three blue and two red
Any help in MATLAB will be much appreciated.
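A minimal sketch of the color-counting step mentioned above, for reference. The file name, hue/saturation thresholds, and noise size are hypothetical, not the asker's actual values:
img  = imread('lego.jpg');             % hypothetical input image
hsv  = rgb2hsv(img);
blue = hsv(:,:,1) > 0.55 & hsv(:,:,1) < 0.70 & hsv(:,:,2) > 0.4;   % rough blue hue/saturation range
blue = bwareaopen(blue, 200);          % drop small noise regions
[~, numBlue] = bwlabel(blue, 8);       % numBlue = number of blue blobs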

Related

Cropping the minimum sized rectangle of a shape from an image

I am making a card recognition project in MATLAB and I am stuck at this point. There are images of cards, and for an image I want to define the smallest rectangle that encloses the card. An example is below.
Original image
Converted image
I am currently able to convert the image to black and white (which leaves me only the cards' white areas), and I want to define the rectangles from those white areas. E.g., if I have 3 non-overlapping cards in my image, I want to get 3 images like the one above (it doesn't matter if another card's edge appears in the image; the important part is that the rectangle must pass through the edges of the selected card).
I have tried edge detection methods but wasn't successful. Thanks for your help already.
I recommend you use the regionprops function from the Image Processing Toolbox, i.e.,
bb = regionprops(yourImage, 'boundingbox');
which will return the bounding box. There is a nice MathWorks video here; you can jump to about minute 26 for what you need.
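A minimal sketch of how the returned bounding boxes can be used to crop each card, assuming bwImage is the black-and-white mask described in the question and originalImage is the source image (both names are hypothetical):
stats = regionprops(bwImage, 'BoundingBox');             % one entry per connected white region
for k = 1:numel(stats)
    card = imcrop(originalImage, stats(k).BoundingBox);  % crop the k-th card from the original image
    figure, imshow(card);
end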

Biomedical Image Segmentation

In brain tumor segmentation, can I consider the images and labels as color images?
Or can the images have 3 channels while the ground truth/mask/label must have 1 channel? Or must both have 1 channel? I have used 3 channels for both (images & GT) with a UNET architecture, and the output is a blank colored image. Why is the output like this?
There is no need to use color images to perform biomedical image segmentation. The value of a CT/MR image has a specific meaning, denoting different structures such as bones or vessels.
If you use 3 channels, I don't know whether the values still have the same meaning or not. Also, I do not recommend you take the GT as a 3-channel image, because the voxel value denotes the class. In your case, perhaps 1 to n for the different kinds of tumor, and 0 for the background.
Thus, a 3-channel representation will lose some semantic information and make the problem more complex.
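As an illustration of that last point, a minimal MATLAB sketch of collapsing an RGB ground truth into a single-channel label map; the file name and the color-to-class mapping are assumptions:
gt_rgb = imread('ground_truth.png');                          % HxWx3 ground truth exported as RGB
classColors = uint8([255 0 0; 0 255 0]);                      % assumed colors for tumor classes 1 and 2
gt_label = zeros(size(gt_rgb,1), size(gt_rgb,2), 'uint8');    % 0 = background
for c = 1:size(classColors,1)
    mask = all(bsxfun(@eq, gt_rgb, reshape(classColors(c,:), 1, 1, 3)), 3);
    gt_label(mask) = c;                                       % 1..n for the tumor classes
end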

Image segmentation algorithm in MATLAB

I need to implement an image segmentation function in MATLAB based on the principles of the connected components algorithm, but with a few modifications. This is intended for very simple, 2D images, with a background color and some objects in different colors.
The idea is that, taking the image as a matrix, I provide a tool to select the background color (it will vary for every image). Then, once the background color value of the image is selected, I have to segment all the objects in the image, and the result should be a labeled matrix of the same size as the image, with 0's for the background and a different number for each object.
This is a graphic example of what I mean:
I understand the idea of how to do it, but I do not know how to implement it in MATLAB. For each pixel (matrix position) I should mark it as visited and then, if the value corresponds to that of the background, assign 0; if not, assign another value. The objects can be formed of different colors, so in the end I need to segment groups of adjacent pixels, whatever their color is. Also, I have to use 8-connectivity, in order to count the green object of the example image as a single object and not 4 different ones. And the objects should be counted from top to bottom, and from left to right.
Is there a simple way of doing this in MATLAB? I know the bwlabel function, but it works for binary images only, so I'd like to adapt it to my case.
Once you know the background color, you can easily convert your image into a binary mask of the same size:
bw = img ~= bg_color;
Once you have a binary mask you can call bwlabel with the 8-connectivity argument, as you suggested yourself.
Note: you might want to convert your color image from RGB representation to an indexed image using rgb2ind before processing.
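Putting the answer together, a minimal sketch; the variable names and the number of quantization colors are assumptions:
[idx, ~] = rgb2ind(img, 16);          % quantize the RGB image to an indexed image with 16 colors
bg_color = idx(1, 1);                 % e.g. take the background index from a known corner or from the selection tool
bw = idx ~= bg_color;                 % binary mask: true for object pixels, false for background
L  = bwlabel(bw, 8);                  % 8-connected labeling: 0 = background, 1..n = objects
numObjects = max(L(:));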

Count black spots in an image - iPhone - Objective C

I need to count the number of black spots in an image (not the percentage of black spots, but the count). Can anyone suggest a step-wise procedure used in image manipulation to count the spots?
Objective: Count black spots in an image
What I've done till now:
1. Converted image to grayscale
2. Read the pixels for their intensity values
3. I have set a threshold to find darker areas
Other implementations:
1. Gaussian blur
2. Histogram equalisation
What I have browsed:
Flood fill algorithms, watershed algorithms
Thanks a lot.
You should first "label" the image, then count the number of labels you have found.
The label operation is the first step in a blob analysis: it groups similar adjacent pixels into a single object and assigns a value to this object. The condition for grouping is generally a background/foreground distinction: the label operation will group adjacent pixels which are part of the foreground, where the background is defined as pure black or pure white, and the foreground is any pixel whose color is not the color of the background.
The label operation is pretty easy to implement and does not require many resources.
_(See the Wikipedia article, or this page for more information on labelling. A good paper on the implementation of the label operation is "Two Strategies to Speed up Connected Component Labeling Algorithms" by Kesheng Wu, Ekow Otoo and Kenji Suzuki.)_
After labelling, count the number of labels (you can even count the labels while labelling), and you have the number of "black spots".
The next step is defining what a black spot is: converting your input image to a grayscale image (for example by converting it to HSL and using the luminance plane) and then applying a threshold should do it. If the illumination of your input image is not even, you may need a better thresholding algorithm (a form of adaptive threshold).
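The same pipeline, sketched in MATLAB purely to illustrate the sequence of steps (the file name and threshold are arbitrary); the equivalent operations would have to be reproduced with whatever imaging library you use on iOS:
img  = imread('spots.png');            % input photo
gray = rgb2gray(img);                  % 1. convert to grayscale
bw   = gray < 60;                      % 2. threshold: dark pixels become foreground
bw   = bwareaopen(bw, 20);             % optional: remove tiny noise blobs
[~, numSpots] = bwlabel(bw, 8);        % 3. label connected dark regions and count them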
It sounds like you want to label the black spots (blobs) using a binary image labelling algorithm. This should give you a place to start.

Histogram of image

I have 2 images that look nearly identical. The histogram for one (256 bins) has intensities distributed pretty evenly throughout. The other has intensities concentrated in the lowest and highest bins. Why would this be? Wouldn't it then appear binary (that's not the case)?
Think about it this way: Imagine you are taking a histogram of two grayscale images with each pixel represented by a color value 0-255. One image contains pixels that all have gray levels of 128. The second image contains a "checkerboard" pattern (pixels alternate between 0 and 255). If you step back far enough that you no longer see individual pixels, they will appear identical to the naked eye. Your brain "averages" the alternating black and white pixels into a field of gray.
This is what your images are doing. The first image has colors distributed evenly throughout the range and the second image has concentrations of specific colors, but if you calculate an average color for the image (and also for sub-sections within the image) you should see similar values for both.
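A toy MATLAB demonstration of this point: a uniform mid-gray image and a 0/255 checkerboard have essentially the same average value but completely different histograms.
flat    = 128 * ones(256, 256, 'uint8');                          % uniform mid-gray image
checker = uint8(255 * mod(bsxfun(@plus, (1:256)', 1:256), 2));    % alternating 0/255 checkerboard
[mean(flat(:)), mean(checker(:))]       % averages are 128 and 127.5
imhist(flat);                           % single spike at 128
figure, imhist(checker);                % two spikes, at 0 and 255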
Never trust your eyes! They will always lie to you.
Consider this silly example, which can be illustrative here. An X-ray 'photo' is nothing more than black and white dots. But because they are small and mixed throughout the image, your eyes see different shades of gray.
The same can happen in a digital image where, although the pixels may all have the same size, they can be black and white and 'distributed' in the image in such a way that you see it as having more gray levels. This is called halftoning.
Without seeing the images it's hard to say, but it sounds like the second may be slightly clipped.
The difference could also just be a slight difference in contrast between the images that's not visible to the naked eye.