I want to use the imcrop function, or any other way, to extract the part of an image that lies outside a bounding box in MATLAB. For example, the original image size is 128*128 and I want to extract the values outside the following bounding box: [15, 32, 95, 23]. For instance, the bounding box represents the eyes, and I want the other parts of the face except the eyes.
Please take a look at the image for better clarification.
I have recognized and labeled objects in my image, which consists entirely of text. You can see the objects labeled in red in the attached image. I want to separate the objects in the second line (or further lines) from the first line and give them different colors (each line would have a different color), but I can't manage to do that. Do you have any idea? Thanks for all answers.
This is the part of my MATLAB code that does the labeling:
%% Label connected components
[L, Ne] = bwlabel(imagen);
%% Measure properties of image regions
propied = regionprops(L, 'BoundingBox');
hold on
%% Plot bounding boxes
for n = 1:size(propied, 1)
    rectangle('Position', propied(n).BoundingBox, 'EdgeColor', 'r', 'LineWidth', 2)
end
And this is the labeled image, in which all the objects on the different lines have the same label (the same color, red).
I think the following methods should work if the lines are not too curvy.
Find the centroids of the bounding boxes, or get the centroids from regionprops itself, then cluster their y coordinates using kmeans with k = 2.
The result is not perfect, but fine. Maybe you can then fit a curve to the clustered points, with outlier removal (e.g. RANSAC). A sketch of the clustering step is below.
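A minimal sketch of that clustering step, assuming the binary image imagen from the question's code and the Statistics and Machine Learning Toolbox for kmeans:
% Cluster the region centroids into two text lines by their y-coordinate.
stats = regionprops(bwlabel(imagen), 'Centroid', 'BoundingBox');
centroids = vertcat(stats.Centroid);     % N-by-2 array of [x y] centroids
lineIdx = kmeans(centroids(:, 2), 2);    % k = 2: one cluster per text line
colors = {'r', 'g'};
hold on
for n = 1:numel(stats)
    rectangle('Position', stats(n).BoundingBox, ...
        'EdgeColor', colors{lineIdx(n)}, 'LineWidth', 2)
end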
OR
Prepare a new image by filling in the bounding boxes.
Prepare a rectangular structuring element whose height is 1 and width is the width of the widest bounding box.
Perform a morphological closing of the filled image using this structuring element. This connects the regions horizontally, and you get a mask separating the two lines; a MATLAB sketch of this follows below.
The resulting images were obtained using OpenCV (I'm not posting the code because it's too untidy; hope the instructions are clear enough).
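In MATLAB, the second method could look roughly like the sketch below, assuming propied holds the bounding boxes from the question's code and imagen is the binary image:
% Fill the bounding boxes into a blank mask.
filled = false(size(imagen));
for n = 1:numel(propied)
    bb = round(propied(n).BoundingBox);   % [x y w h]
    rows = bb(2):min(bb(2) + bb(4) - 1, size(filled, 1));
    cols = bb(1):min(bb(1) + bb(3) - 1, size(filled, 2));
    filled(rows, cols) = true;
end
% Structuring element: height 1, width of the widest bounding box.
maxW = max(arrayfun(@(s) s.BoundingBox(3), propied));
se = strel('rectangle', [1 round(maxW)]);
% Closing connects the boxes horizontally, giving one mask per text line.
lineMask = imclose(filled, se);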
I want to get 2 curves as in the desired result. I tried to use an edge detection technique to get these 2 curves, but the output was not as expected. In the first step, I convert the original image to a grayscale image. In the second step, I convert the grayscale image to a binary image with a threshold calculated by the formula below:
threshold = floor(sum(sum("grayscale image here")) / (2 * high * width));
And then I use the Sobel edge detection algorithm to find the edges:
im_edge = edge("binary image here", 'sobel');
I remove unwanted edges on the left and right sides by simply filling them with black.
I got the result shown in Result, but it was not what I expected. The result also embeds the edges found by:
im_edge = edge("grayscale image here", 'sobel');
Can anyone help me get a better result?
Since I don't have 50 reputation to write a comment, I will write my comments here as an answer.
The problem you have is that there is no visible edge in your input image. The image is quite smooth as far as I can see. If you had not drawn the two lines on the image, I would not be able to tell where they are.
To get better results, you need to extract more features, for example by applying some transformation to the input image. You can try to find the edges on the gradient of the input image, or on the absolute value of the gradient, and see if you can find those two lines more easily (imgradient).
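For instance, a minimal sketch of that idea, assuming grayImg is the grayscale input:
% Run edge detection on the gradient magnitude instead of the raw intensities.
[gmag, ~] = imgradient(grayImg, 'sobel');   % gradient magnitude
gmag = mat2gray(gmag);                      % rescale to [0, 1]
im_edge = edge(gmag, 'sobel');              % look for the two curves here
imshow(im_edge)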
I'm using the MNIST digit images for a machine learning experiment, and I'm trying to center each image based on position, rather than the center of mass that they are centered on by default.
I'm using the BoundingBox property from regionprops to extract the images. I create a B&W copy of the greyscale image, use it to determine the BoundingBox properties (regionprops works only on B&W images), and then apply that to the greyscale original to extract the precise image rectangle. This works fine on ~98% of the images.
The problem I have is that the other ~2% of images have some kind of noise or an errant pixel in the upper-left corner, and I end up extracting only that pixel, with the rest of the image discarded.
How can I incorporate all elements of the image into a single rectangle?
EDIT: Further research has made me realise that I can summarise and rephrase this question as "How do I find the bounding box for all regions?". I've tried adjusting the label matrix so that all regions have the same label, to no avail.
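A minimal sketch of one answer to the rephrased question, assuming bw is the B&W copy and grayImg is the greyscale original (both names are placeholders):
% Bounding box of every foreground pixel, however many regions there are.
[r, c] = find(bw);                                                  % all foreground pixels
bbox = [min(c), min(r), max(c) - min(c) + 1, max(r) - min(r) + 1];  % [x y w h]
digit = imcrop(grayImg, bbox);                                      % crop the greyscale original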
You can use an erosion mask the same size as that noise to make it disappear completely (using imerode followed by imdilate to undo the erosion), or you can use a median filter.
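A minimal sketch of both suggestions, assuming bw is the binary copy and the speck is only a few pixels across:
% Erosion followed by dilation (a morphological opening) removes specks
% smaller than the structuring element.
se = strel('square', 3);                      % roughly the size of the noise
bwClean = imdilate(imerode(bw, se), se);
% ...or a median filter on the mask.
bwClean2 = medfilt2(double(bw), [3 3]) > 0.5;
Note that imopen(bw, se) performs the erode-then-dilate step in a single call.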
I would like to crop an image, but I want to retain the part of the image that is outside of the rectangle. How can this be done?
It seems that with imcrop only the part within the rectangle can be retained.
An image in MATLAB is represented by a matrix, just like any other matrix; you can read more about representation forms here.
It seems that what you want to do is take the area that you don't want and change the values of the corresponding cells in the matrix to the color that you want to put there instead (each cell in the matrix is a pixel in the image). That is, if you know where your unwanted data is.
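For instance, a minimal sketch for the case where the rectangle is known, using a hypothetical [x y w h] box and black as the replacement color:
% Black out a known rectangle [x y w h] and keep everything else.
bb = [15, 32, 95, 23];                                % example box
I(bb(2):bb(2)+bb(4)-1, bb(1):bb(1)+bb(3)-1, :) = 0;   % set those pixels to black
imshow(I)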
If you don't know where it is and want to use the tool given by imcrop to manually choose the "cropped" area, you can take the resulting matrix, find the part of the original image that is an exact match with the cropped part, and color it as you wish.
The code for doing this:
I = imread('img_9.tif');
I2 = imcrop(I, [60, 50, 85, 85]);
n_big = size(I);
n_small = size(I2);
% Slide a window the size of the cropped patch over the original image and
% black out the block that matches it exactly.
for j1 = 1:(n_big(1) - n_small(1) + 1)
    for j2 = 1:(n_big(2) - n_small(2) + 1)
        Itest = I(j1:j1+n_small(1)-1, j2:j2+n_small(2)-1, :);
        if isequal(Itest, I2)
            I(j1:j1+n_small(1)-1, j2:j2+n_small(2)-1, :) = zeros(n_small(1), n_small(2), 3);
        end
    end
end
figure(1);
imshow(I);
figure(2);
imshow(I2);
The results of my test were:
original:
cropped:
resulting image:
Maybe what you want to do is first create a mask with the inverse of the area that you want to crop, and save that result.
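A minimal sketch of that masking idea, assuming an image I and a rectangle rect in imcrop's [x y w h] form (both placeholders):
% Build a logical mask that is false inside the rectangle and true outside it.
rect = round([60, 50, 85, 85]);                       % example rectangle
mask = true(size(I, 1), size(I, 2));
mask(rect(2):rect(2)+rect(4)-1, rect(1):rect(1)+rect(3)-1) = false;
% Keep the outside, zero the inside; the mask can be saved for later reuse.
Iout = I .* cast(repmat(mask, [1 1 size(I, 3)]), 'like', I);
imshow(Iout)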
My image is a 2D surface of a protein, and I use the MATLAB function scatter to display it, so there are some empty white spaces in it.
I want to fill them with colors, but the problem is that the points have different colors: some are red and some are orange (a point's color is determined by its RGB value).
So I want to assign each white space a color similar to that of its corresponding neighbors.
What I did originally was to extract the edge of the polygon first, which helps me detect whether a point is inside the polygon or not, because I am not assigning colors to white spaces that lie outside the polygon.
Then I simply scan the whole image pixel by pixel to check whether a pixel is white; if so, I assign a neighbor's color to it. As I said, I have to check whether the pixel is inside the polygon every time.
But the speed is really slow and the result is not good enough. Could anybody give me some ideas?
I have the 2D scatter-point image and also the 3D structure. Each point in 2D has a counterpart in 3D; I don't know if this information would help.
After an erosion with a 7x7 disk kernel and then a bilateral filter:
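A rough MATLAB sketch of those two steps, assuming rgbImg is the rendered scatter image (imbilatfilt needs R2018a or newer):
% Erode with a disk: the white gaps shrink and the colored points grow into them.
se = strel('disk', 3);              % roughly a 7x7 disk
eroded = imerode(rgbImg, se);       % applied to each color channel
% Edge-preserving smoothing to blend the result.
smoothed = imbilatfilt(eroded);
imshow(smoothed)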
PS: if you have the 3D point structure, upload it somewhere and post a link.