How can I measure the distance from a vector coordinate to a raster pixel?

I have a file of geocoded point data (not pictured) that overlays a 30m cell size raster with the pixels of interest shown in green (image below).
For each point I want to calculate the distance to the nearest green pixel. I tried the Raster to Point tool (to convert each pixel to a point), but this process takes a long time to complete (days). Are there other viable options for me?
Is there something I can first do to the raster to preprocess it in order to make it a smaller file (dropping pixels if they are not pixels of interest) before attempting the raster to point conversion?

One way to do this is to reduce the raster to just the pixels of interest. For now, I'm using the workflow below; although it takes some time, it works. (A distance-transform alternative is sketched after the list.)
Reproject the raster and/or point data, if necessary
Reclassify the raster (apply NoData to the non-interest pixels)
Raster to Point
Near tool for the distance to the nearest point
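If the raster fits in memory, a distance transform avoids the raster-to-point conversion entirely: compute the distance from every cell to the nearest pixel of interest once, then look each point up. A minimal MATLAB sketch, assuming the pixels of interest carry a known class code (greenValue, the file name, and the point coordinate vectors x and y are all placeholders):
% Read the raster and build a mask of the pixels of interest.
[raster, R] = readgeoraster('landcover.tif');    % Mapping Toolbox
mask = (raster == greenValue);
% Euclidean distance (in cells) from every cell to its nearest masked cell.
D = bwdist(mask);                                % Image Processing Toolbox
% Convert the points' map coordinates to row/column indices and look up the
% distance, scaling by the 30 m cell size.
[rows, cols] = worldToDiscrete(R, x, y);
distMeters = 30 * D(sub2ind(size(D), rows, cols));
This replaces the days-long vector conversion with a single pass over the raster; since distances are measured between cell centers, the result is accurate to within about one cell.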

Related

transform a sector image to rectangle image in matlab

I am currently working on transforming the sector-shaped part of an image into a rectangle.
I tried a few approaches, but they don't work well.
Any ideas on how to transform?
Create a destination image whose height is the difference of the radii and whose width is the arc length at the middle radius (this ensures square pixels along the middle arc).
Scan this destination image and, for every pixel, convert its coordinates (angle, radius) from polar to Cartesian, with a shift to the ROI center. This gives you the corresponding pixel in the source image, which you copy to the destination. Make sure to scale the angle and radius so that the destination image limits map to the ROI edges.
As the source coordinates won't be integer, truncating them and merely copying the nearest source pixel achieves so-called nearest-neighbor resampling, which shows visible artifacts. You can smooth them out by considering the four neighboring pixels and interpolating bilinearly between them by means of the fractional parts of the coordinates.
You can even go for bicubic interpolation, using sixteen neighbors, but in my experience the gain in quality is not very visible.
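A minimal MATLAB sketch of this inverse mapping, assuming the ROI center (cx, cy), the radii rMin/rMax, and the angular limits thMin/thMax in radians are already known (these names and the file name are placeholders):
img = im2double(imread('sector.png'));           % grayscale source image
rMid = (rMin + rMax) / 2;
H = round(rMax - rMin);                          % height = radial extent
W = round(rMid * (thMax - thMin));               % width = arc length at mid radius
% For every destination pixel, compute the corresponding source location.
[u, v] = meshgrid(1:W, 1:H);
theta = thMin + (u - 1) / (W - 1) * (thMax - thMin);
r = rMin + (v - 1) / (H - 1) * (rMax - rMin);
xs = cx + r .* cos(theta);
ys = cy + r .* sin(theta);
% interp2 does the bilinear resampling at the fractional source coordinates.
rect = interp2(img, xs, ys, 'linear', 0);
imshow(rect);
Swapping 'linear' for 'cubic' in the interp2 call gives the bicubic variant mentioned above.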

Finding the centers of overlapping circles in a low resolution grayscale image

I am currently taking my first steps in the field of computer vision and image processing.
One of the tasks I'm working on is finding the center coordinates of (overlapping and occluded) circles.
Here is a sample image:
Here is another sample image showing two overlapping circles:
Further information about the problem:
Always a monochrome, grayscale image
Rather low resolution images
Radii of the circles are unknown
Number of circles in a given image is unknown
Center of circle is to be determined, preferably with sub-pixel accuracy
Radii do not have to be determined
Relatively low overhead is important; the processing is supposed to run on real-time camera images
For the first sample image, it is relatively easy to calculate the center of the circle by finding the center of mass. Unfortunately, this is not going to work for the second image.
Things I tried are mainly based on the Circle Hough Transform and the Distance Transform.
The Circle Hough Transform seemed computationally expensive because I have no information about the radii, so the range of possible radii to search is large. Furthermore, the low resolution of the image makes it hard to identify the (appropriate) pixels along the edge.
As for the Distance Transform, I have trouble identifying the centers of the circles from it, and the fact that the image needs to be binarized implies a certain loss of information.
Now I am looking for viable alternatives to the aforementioned algorithms.
Some more sample images (images like the two samples above are extracted from images like the following):
Just thinking aloud to try and get the ball rolling for you... I would start with a blob, or connected-component, analysis to separate out your blobs.
Then I would start looking at each blob individually. First thing is to see how square the bounding box is for each blob. If it is pretty square AND the centroid of the blob is central within the square, then you have a single circle. If it is not square, or the centroid is not central, you have more than one circle.
Now I am going to start looking at where the white areas touch the edges of the bounding box for some clues as to where the centres are...
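A rough MATLAB sketch of that screening step using regionprops (the squareness and centering tolerances here are arbitrary choices, not tuned values):
bw = imbinarize(img);                            % img: the grayscale input
stats = regionprops(bw, 'BoundingBox', 'Centroid');
for k = 1:numel(stats)
    bb = stats(k).BoundingBox;                   % [x y width height]
    c = stats(k).Centroid;
    % Centroid offset from the bounding-box center, as a fraction of box size.
    off = abs(c - (bb(1:2) + bb(3:4) / 2)) ./ bb(3:4);
    if abs(bb(3) / bb(4) - 1) < 0.1 && all(off < 0.05)
        fprintf('Blob %d: likely a single circle at (%.1f, %.1f)\n', k, c(1), c(2));
    else
        fprintf('Blob %d: likely overlapping circles, inspect further\n', k);
    end
end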

3D reconstruction based on stereo rectified edge images

I have two stereo-rectified edge images of a closed curve. Is it possible to find the disparity (along the x-axis in image coordinates) between the edge images and do a 3D reconstruction, given that I know the camera matrix? I am using MATLAB for the process. I will not be able to use a window-based technique, because the images are binary and window-based matching requires texture. The question is: how do I compute the disparity between the edge images? The images are available at the following links.
Left edge image: https://www.dropbox.com/s/g5g22f6b0vge9ct/edge_left.jpg?dl=0
Right edge image: https://www.dropbox.com/s/wjmu3pugldzo2gw/edge_right.jpg?dl=0
For this type of image, you can easily map each edge pixel in the left image to its counterpart in the right image, and therefore calculate the disparity for those pixels as usual.
The mapping can be done in various ways, depending on how typical these images are; for example, using a DTW-like approach to match curvatures.
For all other pixels in the image, you simply have no information.
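As a concrete starting point before any DTW-style refinement, a minimal MATLAB sketch that matches edge pixels scanline by scanline (file names are from the question; the images are assumed to load as grayscale, and matching in order assumes the ordering constraint holds):
EL = imread('edge_left.jpg') > 128;              % binarize the edge maps
ER = imread('edge_right.jpg') > 128;
disparity = nan(size(EL));
for row = 1:size(EL, 1)
    xl = find(EL(row, :));                       % edge x-positions, left image
    xr = find(ER(row, :));                       % edge x-positions, right image
    if ~isempty(xl) && numel(xl) == numel(xr)
        % Same number of edges on this scanline: match them in order.
        disparity(row, xl) = xl - xr;
    end
end
% Depth then follows from Z = f * B / d, with focal length f and baseline B.
Scanlines where the edge counts differ are where a DTW-like alignment of the two pixel sequences earns its keep.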
@Photon: Thanks for the suggestion. I did what you suggested and matched each edge pixel in the left and right images in a DTW-like fashion. But there are some pixels whose y-coordinates differ by 1 or 2 pixels, even though the images are properly rectified. So I calculated the depth by averaging those differing edge pixels (up to a 2-pixel difference along the y-axis) using a least-squares method. But I ended up getting this space curve (https://www.dropbox.com/s/xbg2q009fjji0qd/false_edge.jpg?dl=0) when it actually should look like this (https://www.dropbox.com/s/0ib06yvzf3k9dny/true_edge.jpg?dl=0), which was obtained using the RGB images. I couldn't think of any other reason why this would be the case, since I compared by traversing along the 408 edge pixels.

Finding the length/area of the object using 2d web cam

I have to calculate the area or length of the objects present in the frame.
As I am using a 2D camera, the distance from the camera can't be determined.
So I am planning to draw a constant line (X cm) on the background, whose length is known in cm/m.
Please see the attached sample input image (the yellow line is the constant line).
Consider that a person or an object stands in front of a wall, where the constant line is drawn.
Is there any way to calculate the distance of other objects with reference to the constant line?
First, it isn't a line; it is a parcel of pixels. A line is non-physical, while the parcel of pixels has both area and length. The natural unit of measurement in images is the pixel; physical units of length are non-physical to the image and require assumptions.
Second, you can do a thresholded 2D convolution. PIV-sleuth uses 2D convolution, which allows fast, fairly accurate measurement in images. The peak intensity will tell you something about the length or area. You can also use row sums and column sums very cheaply to get estimates of lengths. It helps if the objects are aligned to the pixel axes in your image; affine transformations let you test various rotations for suitability.
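For the reference-line idea in the question, the scale conversion itself is simple once the line's endpoints are detected; a MATLAB sketch, where every variable name and value is a placeholder and the result only holds for objects in the plane of the wall:
refLengthCm = 100;                               % known length of the drawn line (assumed)
refLengthPx = hypot(x2 - x1, y2 - y1);           % detected endpoints (x1,y1), (x2,y2)
cmPerPixel = refLengthCm / refLengthPx;
% Measurements in the same plane as the line can then be scaled directly:
objLengthCm = objLengthPx * cmPerPixel;
objAreaCm2 = objAreaPx * cmPerPixel^2;           % areas scale with the square
Anything standing nearer the camera than the wall will appear larger than this scale assumes, so its size will be overestimated.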

How to find the distance between the only two points in an image produced by a grating like substance?

I need to find the distance between the two points. I can find the distance between them manually with the pixel-to-cm converter in the Image Processing Toolbox, but I want code that detects the point positions in the image and calculates the distance.
More precisely, the image contains only three points: one in the middle and the other two approximately equidistant from it...
There might be a better way than this, but I hacked something similar together last night.
Use bwboundaries to find the objects in the image (the contiguous regions in a black/white image).
The second output, L, is the same image but with the regions numbered (a label matrix). So for the first point, you want to isolate all the pixels belonging to it:
L2 = (L == 1);
Now find the center of mass of that region (for object 1). Note that the weighted sums must be divided by the region's total pixel count, not the image dimensions:
x1 = (1:size(L2,2)) * sum(L2,1)' / sum(L2(:));
y1 = (1:size(L2,1)) * sum(L2,2) / sum(L2(:));
Repeat that for all the regions in your image. You should have the center of mass of each point. I think that should do it for you, but I haven't tested it.
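Equivalently, regionprops can return all the centroids at once, after which the distance is one line (a sketch, assuming a binary image bw in which the points show up as white blobs):
stats = regionprops(bw, 'Centroid');
C = vertcat(stats.Centroid);                     % N-by-2 matrix of (x, y) centers
% Euclidean distance between the first two points, in pixels; multiply by a
% cm-per-pixel factor if a physical scale is known.
d12 = hypot(C(1,1) - C(2,1), C(1,2) - C(2,2));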