What I want to do is color an irregular shape when the user touches within its path.
This is essentially flood fill, but I found that flood fill is too costly in terms of performance, speed, and memory. So I have an idea, though I don't know how to implement it: CGContextFillPath fills an irregular shape.
So my question is: can we get the bounding path/border line of that shape so that we can color that region?
It sounds like you have an image with a shape in it, where all the pixels in the shape are one color, and the boundary of the shape is a different color.
If I understand you correctly, you would have to use a flood-fill algorithm to find the boundary of the shape so you could turn that boundary into a CGPath. There's no magic way to get a path for the boundary of the shape without looking at the pixels.
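To illustrate the point above, boundary extraction has to start by visiting pixels. A minimal sketch (Python here purely for illustration; the original question is about Core Graphics): a BFS flood fill collects the touched region, and the boundary is then just the region pixels that have at least one neighbour outside the region.

```python
from collections import deque

def flood_region(pixels, start, same):
    """BFS flood fill: collect the connected pixels for which same(value) holds."""
    h, w = len(pixels), len(pixels[0])
    region, queue = {start}, deque([start])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region \
                    and same(pixels[ny][nx]):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

def boundary(region):
    """Region pixels with at least one 4-neighbour outside the region."""
    return {(y, x) for (y, x) in region
            if any(n not in region
                   for n in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)))}
```

The boundary set could then be turned into an ordered CGPath by contour tracing, but there is no way to get it without the pixel scan.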
Could someone help me find which morphological operations I should use to smooth the vertical and horizontal rectangles in this image?
More precisely, I want the white rectangles to become continuous. The final application is to detect vertical and horizontal lines in the image: the image is a map where white elements represent obstacles, and I want to detect the walls.
The result I want should look something like this:
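One common approach to making broken line segments continuous is morphological closing with a line-shaped structuring element (horizontal for horizontal walls; transpose the image for vertical ones). A pure-numpy sketch of horizontal closing is below; in practice you would use OpenCV's cv2.morphologyEx with cv2.getStructuringElement, and the element length k (here assumed odd) must exceed the largest gap to bridge.

```python
import numpy as np

def dilate_h(img, k):
    """Horizontal dilation of a binary image with a 1 x k structuring element."""
    pad = np.pad(img, ((0, 0), (k // 2, k // 2)))
    win = np.lib.stride_tricks.sliding_window_view(pad, k, axis=1)
    return win.max(axis=-1)

def erode_h(img, k):
    """Horizontal erosion (dual of dilation; pad with 1s so borders survive)."""
    pad = np.pad(img, ((0, 0), (k // 2, k // 2)), constant_values=1)
    win = np.lib.stride_tricks.sliding_window_view(pad, k, axis=1)
    return win.min(axis=-1)

def close_h(img, k):
    """Closing = dilation then erosion: bridges gaps narrower than k pixels."""
    return erode_h(dilate_h(img, k), k)
```

A one-pixel gap is bridged by k=3, while a three-pixel gap is left open, which is exactly the "make the rectangles continuous without merging distinct walls" trade-off the question is about.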
I'm learning about the statistical features of an image. A quote that I'm reading says:
For the first method, which is statistical features of texture, after the image is loaded it is converted to a grayscale image. Then the background is subtracted from the original image. This is done by subtracting any blue-intensity pixels from the image. Finally, the ROI is obtained by finding the pixels which are not zero-valued.
The implementation:
% PREPROCESSING segments the Region of Interest (ROI) for
% statistical features extraction.
% Convert RGB image to grayscale image
g=rgb2gray(I);
% Obtain blue layer from original image
b=I(:,:,3);
% Subtract blue background from grayscale image
r=g-b;
% Find the ROI by finding non-zero pixels.
x=find(r~=0);
f=g(x);
My interpretation:
Is the purpose of subtracting the blue channel here related to the fact that the ROI sits on a blue background? Like:
But what about real-world images, for example an object surrounded by more than one color? What is the best way to extract the ROI in that case?
For example (assuming the bird has only two colors, green and black, and ignoring the geometric shapes):
What would I do in that case? Also, the picture will be converted to grayscale, right? Yet part of the ROI (the bird) is itself black.
I mean, in the bird's case, how can I extract only the green and black parts and remove the remaining colors, which are considered background?
Background removal is a large and potentially complicated subject in the general case, but as I understand it, you want to take advantage of color information you already have about your background (correct me if I'm wrong).
If you know the colour to remove, you can for instance:
switch from RGB to Lab color space (Wiki link).
after converting your image, compute the Euclidean distance from the background color (say orange) to every pixel in your image
define a threshold under which the pixels are background
In other words, if coordinates of a pixel in Lab are close to orange coordinates in Lab, this pixel is background. The advantage of using Lab is that Euclidean distance between points relates to human perception of colours.
I think this should work, please give it a shot or let me know if I misunderstood the question.
I have an image like this; note that the regions are not perfectly shaped: there is a rectangle-like region and an ellipse-like region. I have segmented the ellipse-like region using some algorithm. The segmented region is the bright one; the border (red rectangle) is the dark one.
Finally, I must get the red rectangle-like region.
Can you suggest an algorithm to perform this?
I see that you have made some real progress on your segmentation. Because you already have an idea of the location of the elements you want to segment, you should use a watershed with constraints/markers:
Your actual segmentation represents the inner markers.
You dilate it with a big structuring element (bigger than the inter-disk space).
You take the contour of the dilation, and that's your outer markers.
You compute the gradient of the original image.
You apply the watershed on the gradient image, using the markers you have just computed.
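The last step above can be sketched with a toy priority-flood watershed. In practice you would use skimage.segmentation.watershed or OpenCV's cv2.watershed; this simplified stand-in assigns labels by flooding from the markers in order of increasing gradient and does not draw explicit watershed lines.

```python
import heapq
import numpy as np

def watershed(gradient, markers):
    """Marker-based watershed by priority flooding (4-connectivity).

    gradient: 2D float array (e.g. a gradient-magnitude image).
    markers:  2D int array, 0 = unlabeled, positive integers = seed labels.
    Returns a fully labeled image.
    """
    labels = markers.copy()
    h, w = gradient.shape
    heap = []
    for y, x in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (gradient[y, x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)  # always expand the lowest-gradient pixel
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]
                heapq.heappush(heap, (gradient[ny, nx], ny, nx))
    return labels
```

With the inner markers from your segmentation and the outer markers from the dilated contour, the flood meets on the gradient ridge, which is exactly the dark border you want.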
[EDIT] As the segmentation you provided does not match the original image (different dimensions), I had to roughly simulate a simple segmentation, using this image (the red lines being the segmentation you already have). And I got this result.
I have multiple simple circular objects in a grid in an image, from which I want to create a mask image of the objects. A gotcha is that the light intensity is different for each object, so simple thresholding would not create a mask.
As a solution, I want to threshold based on the gradient. Basically, I'd like to first find each circle with edge detection and make the inside of the circle white and the outside black. But this is really slow. Is there a better way to do this in MATLAB?
I would create a low-pass filtered version of the image, and use it as the threshold. The "strength" of the filter should be tuned carefully in order to make the result follow the distribution of light intensity, but this is not that hard.
(This approach worked for me when I had to extract the contour of blood vessels from brain-surface images, few years ago.)
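The idea can be sketched like this, using a separable box filter as the low-pass step (the answer above does not name a specific filter; a Gaussian via scipy.ndimage.gaussian_filter would be the usual choice, and both radius and offset need tuning per image):

```python
import numpy as np

def box_blur(img, radius):
    """Separable box filter: local mean over a (2*radius+1)^2 window."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    smooth = lambda a: np.convolve(a, k, mode="same")
    tmp = np.apply_along_axis(smooth, 1, img.astype(np.float64))
    return np.apply_along_axis(smooth, 0, tmp)

def adaptive_mask(img, radius=15, offset=0.0):
    """True where a pixel is brighter than its local low-pass neighbourhood."""
    return img > box_blur(img, radius) + offset
```

Because each pixel is compared against its own neighbourhood mean rather than one global value, circles of different brightness all end up above their local threshold.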
My image is a 2D surface of a protein, and I use the MATLAB function "scatter" to display it, so there are some white empty spaces in it.
I want to fill them with colors, but the points have different colors: some are red and some are orange (a point's color is determined by its RGB value).
So I want to assign each white space a color similar to that of its neighbors.
My original approach was to extract the edge of the polygon first, which helps me detect whether a point is inside the polygon, because I am not assigning colors to white spaces outside the polygon.
Then I simply scan the whole image pixel by pixel, check whether each pixel is white, and if so assign a neighbor's color to it; as I said, I have to check whether the pixel is inside the polygon every time.
But the speed is really slow, and the result is not good enough. Could anybody give me some ideas?
I have the 2D scatter-point image and also the 3D structure. Each point in 2D has one counterpart in 3D; I don't know if this information would help.
After an erosion with a 7x7 disk kernel and then a bilateral filter:
PS: if you have the 3D point structure, upload it somewhere and post a link.
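As an alternative to the per-pixel scanning the question describes, the "take the neighbour's colour" step can be done in one pass as a nearest-neighbour lookup. A brute-force numpy sketch is below; for real image sizes, scipy.ndimage.distance_transform_edt with return_indices=True does the same lookup efficiently. The inside-the-polygon restriction is assumed to already be encoded in the gap mask.

```python
import numpy as np

def fill_nearest(img, gaps):
    """Give each gap pixel the colour of the nearest non-gap pixel.

    img:  (H, W, 3) colour image.
    gaps: (H, W) bool, True where the pixel is an empty (white) spot that
          lies inside the region to be filled.
    Brute force, O(num_gaps * num_sources); fine for small images only.
    """
    out = img.copy()
    sources = np.argwhere(~gaps)          # coordinates of coloured pixels
    for y, x in np.argwhere(gaps):
        d = ((sources - (y, x)) ** 2).sum(axis=1)
        out[y, x] = img[tuple(sources[d.argmin()])]
    return out
```

Building the gap mask once (white pixels inside the polygon) and then filling in a single pass avoids the repeated point-in-polygon test of the original approach.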