Horizontal and vertical projection of an image - MATLAB

I am doing a project on image forgery detection in MATLAB, but I am new to both image processing and MATLAB.
Now I have to calculate the horizontal and vertical projections of an image. How do I do that in MATLAB?
I have used
ver=imfilter(edge1,[1 0 -1])
and
hor=imfilter(edge1,[1 0 -1]')
where edge1 is an edge image.
But I am not sure whether this is right. My edge detection algorithm is based on the standard deviation; I have not used the built-in edge detection function, but implemented standard-deviation-based edge detection myself. Can anybody help me with this? Thanks.

What is an image projection? I think using an edge detector is NOT correct.
If I remember correctly, an image projection is a histogram of the grayscale levels taken along the horizontal or the vertical direction.
If you need a projection of the edges, then you have already done the first step.
Then, I think, you just have to sum the grayscale values of the image over its rows or its columns:
sum(image,1)
sum(image,2)
Here is the projection of my photo (apologies for my futility :)
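Putting that together, a minimal sketch (the file name is a placeholder, and the naming of "horizontal" vs. "vertical" projection varies between sources, so swap the labels if your convention is the opposite):

% Minimal sketch: row and column projections of a grayscale image.
img = imread('photo.png');              % placeholder file name
if size(img, 3) == 3
    img = rgb2gray(img);                % make sure we have a single grayscale channel
end
img = double(img);

colProjection = sum(img, 1);            % 1 x nCols: sum of each column
rowProjection = sum(img, 2);            % nRows x 1: sum of each row

% quick visual check of the two profiles
subplot(2,1,1); plot(colProjection); title('Column sums');
subplot(2,1,2); plot(rowProjection); title('Row sums');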

Related

How can I find the boundary surface in this image

I am new to image processing. I want to find the surface that separates the black and white pixels. Here is the link to the image:
https://drive.google.com/file/d/1zUWK0Fb_n6f1JZou5mrUJq0x3h2X8mBK/view?usp=sharing
The size of the image is (21, 900, 900).
I tried to use the boundarymask command of MATLAB on one plane of the image, but I am getting noise, and it also works only for 2D images. Please suggest how I can find the 3D boundary surface here. Thank you.
This is the output image after applying boundarymask.
Your first step should be to get rid of the noise. Since you have some kind of salt-and-pepper noise, you can do that with a median filter on a 2D image, using medfilt2() in MATLAB. After that you can use an edge detector to find your edge pixels. If you want the surface, you need to loop this over the 3rd dimension of your 3D image. The code could look like this:
for ii = 1:21                                        % loop over all 21 slices of the stack (the image is 21 x 900 x 900)
    I = imread('image.tif', ii);                     % read slice ii
    I_bs = boundarymask(I);                          % rough boundary between black and white pixels
    I_filt = medfilt2(I_bs, [7 7]);                  % median filter removes the salt-and-pepper noise
    boundarysurface(:,:,ii) = edge(I_filt, 'Canny'); % keep only the edge pixels of the cleaned mask
end
The edge detector I used here is certainly overkill for this easy case, but it was the easiest thing I could think of on short notice. If performance is relevant, let me know and I will suggest another approach.

Extract shapes of curves from binary images

I am analyzing back bone formation in zebrafish embryos and in this picture:
I would like to extract the shape and position of the horizontal lines/curves. Here is a little information about the image. The image at the top has already been segmented through morphological processing and the MATLAB active contour function. The region between the two vertical lines is where the spinal cord develops, and the horizontal lines on either side of the spinal cord later develop into ribs. The image at the bottom is the result of applying a Canny edge detector. I have a time series of the development of the ribs, and I would now like to extract the shape and position of the horizontal curves. This is a follow-up to my previous question:
Identify curves in binary image
I am guessing this will involve some kind of curve fitting to obtain the shapes; a rough sketch of that idea is below. Any ideas on how to go about this are very welcome.
Thanks
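A minimal sketch of the curve-fitting idea mentioned above, assuming the segmented binary image has been saved to a file and that each horizontal rib curve ends up as its own connected component (the file name, polynomial order and variable names are only placeholders):

BW = imread('segmented.png') > 0;          % placeholder file name (convert to grayscale first if it is RGB)
CC = bwconncomp(BW);                       % ideally one connected component per curve
curves = struct('coeffs', {}, 'xRange', {});
for k = 1:CC.NumObjects
    [y, x] = ind2sub(size(BW), CC.PixelIdxList{k});
    curves(k).coeffs = polyfit(x, y, 3);   % low-order polynomial describing the curve's shape
    curves(k).xRange = [min(x) max(x)];    % horizontal extent, i.e. the curve's position
end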

3D reconstruction based on stereo rectified edge images

I have two closed-curve, stereo-rectified edge images. Is it possible to find the disparity (along the x-axis in image coordinates) between the edge images and do a 3D reconstruction, given that I know the camera matrix? I am using MATLAB for the process. I will not be able to use a window-based technique, since these are binary images and a window-based technique requires texture. The question is: how do I compute the disparity between the edge images? The images are available at the following links. Left edge image: https://www.dropbox.com/s/g5g22f6b0vge9ct/edge_left.jpg?dl=0 Right edge image: https://www.dropbox.com/s/wjmu3pugldzo2gw/edge_right.jpg?dl=0
For this type of image, you can easily map each edge pixel from the left image to its counterpart in the right image, and therefore calculate the disparity for those pixels as usual.
The mapping can be done in various ways, depending on how typical these images are, for example by using a DTW-like approach to match curvatures.
For all other pixels in the image, you just don't have any information.
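A minimal sketch of the per-row matching idea (the simple nearest-column pairing below only stands in for the DTW-like matching suggested above, and the binarisation threshold is an assumption):

L = imread('edge_left.jpg');  if ndims(L) == 3, L = rgb2gray(L); end
R = imread('edge_right.jpg'); if ndims(R) == 3, R = rgb2gray(R); end
leftEdge  = L > 128;                             % binarise the JPEG edge images
rightEdge = R > 128;

disparity = nan(size(leftEdge));                 % NaN where there is no edge pixel
for r = 1:size(leftEdge, 1)                      % rectified: matches lie on the same row
    xL = find(leftEdge(r, :));
    xR = find(rightEdge(r, :));
    if isempty(xL) || isempty(xR), continue; end
    for k = 1:numel(xL)
        [~, j] = min(abs(xR - xL(k)));           % crude nearest-column match
        disparity(r, xL(k)) = xL(k) - xR(j);     % disparity along x for this edge pixel
    end
end
% depth for the matched pixels then follows from focalLength * baseline ./ disparity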
#Photon: Thanks for the suggestion. I did what you suggested and matched each edge pixel in the left and right image in a DTW-like fashion. But there are some pixels whose y-coordinates differ by 1 or 2 pixels, even though the images are properly rectified. So I calculated the depth by averaging those differing edge pixels (up to a 2-pixel difference along the y-axis) using a least-squares method. But I ended up getting this space curve (https://www.dropbox.com/s/xbg2q009fjji0qd/false_edge.jpg?dl=0) when it actually should have looked like this (https://www.dropbox.com/s/0ib06yvzf3k9dny/true_edge.jpg?dl=0), which was obtained using the RGB images. I can't think of any other reason why this would be the case, since I compared by traversing along the 408 edge pixels.

How Do I Find The Bounding Box For All Regions?

I'm using the MNIST digit images for a machine learning experiment, and I'm trying to center each image based on position, rather than the center of mass that they are centered on by default.
I'm using the BoundingBox property of regionprops to extract the images. I create a B&W copy of the greyscale image, use it to determine the BoundingBox properties (regionprops works only on B&W images), and then apply that box to the greyscale original to extract the precise image rectangle. This works fine on ~98% of the images.
The problem I have is that the other ~2% of the images have some kind of noise or errant pixel in the upper-left corner, and I end up extracting only that pixel, with the rest of the image discarded.
How can I incorporate all elements of the image into a single rectangle?
EDIT: Further research has made me realise that I can summarise and rephrase this question as "How do I find the bounding box for all regions?". I've tried adjusting a label matrix so that all regions are the same label, to no avail.
You can use erosion with a structuring element the same size as that noise to make it disappear completely (imerode followed by imdilate to undo the erosion, i.e. a morphological opening), or you can use a median filter.
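A minimal sketch combining the noise removal suggested above with a single box around everything that remains, as asked in the edit (the variable names and the 3x3 size are assumptions; tune the size to the noise and the stroke width so the digit itself survives the opening):

BW = grayImg > 0;                               % B&W copy of the greyscale MNIST digit
se = strel('square', 3);                        % structuring element roughly the size of the noise
BWclean = imdilate(imerode(BW, se), se);        % erosion followed by dilation (morphological opening)
% BWclean = medfilt2(BW, [3 3]);                % alternative: median filter

[r, c] = find(BWclean);                         % every remaining foreground pixel, from all regions
bb = [min(c), min(r), max(c) - min(c), max(r) - min(r)];   % one bounding box [x y w h] around all regions
digit = imcrop(grayImg, bb);                    % crop the greyscale original with that box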

MATLAB: Canny edge detector

MATLAB version: 7.8.0 (R2009a)
I am getting edges from an image with the Canny edge detector, using the standard 'edge' function. But for my project I need the intermediate gradient magnitude matrix, i.e. the gradient magnitude value for each pixel.
I know we could do this using imgradientxy(), but I need exactly the result that Canny would have produced, and I don't know the implementation MATLAB uses for Canny. Is there any way to do this, or do I have to implement Canny from scratch?
Background: I am basically changing the intensity values of some pixels on the edges detected by Canny. I need to know whether, after the change, when the gradient is recalculated with the new values, those pixels will still fall under the threshold values.
To find the implementation of the Canny edge detector in MATLAB, you can simply open the file (edit edge), since the function isn't built-in. This way, you can check the filtering and gradient scheme that is used in your release of MATLAB.
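A hedged sketch of that route: look at the shipped source, and, if you want to reproduce a gradient magnitude of the same general flavour (Gaussian smoothing followed by derivatives), the approximation below is only a stand-in; the exact filters and the default sigma must be checked against edge.m in your release (R2009a):

edit edge                                   % open the toolbox source of edge.m, including the Canny branch

% Rough approximation of a Canny-style gradient magnitude (not guaranteed to
% match the edge.m internals of any particular release):
I = im2double(imread('cameraman.tif'));     % example image shipped with the toolbox
sigma = 1;                                  % assumed smoothing scale; check edge.m for the real default
G = fspecial('gaussian', 2*ceil(3*sigma) + 1, sigma);
Is = imfilter(I, G, 'replicate');           % Gaussian smoothing
[gx, gy] = gradient(Is);                    % finite-difference derivatives
gradMag = hypot(gx, gy);                    % gradient magnitude for every pixel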