Halcon - get area of region in world coordinates

I have following code:
image_points_to_world_plane (CamParam, Pose, intersection_points_row, intersection_points_col, 'mm', X1, Y1)
distance_pp (X1[2], Y1[2], X1[3], Y1[3], Measure1)
This code returns the values in mm.
The code then continues as shown below, and I need the area of Regions1 in mm². Instead I get it in pixels:
access_channel(Image, ImageMono, 1)
threshold(ImageMono, Region, 0, 100)
fill_up(Region, RegionFillUp)
reduce_domain(ImageMono, RegionFillUp, ImageReduced)
threshold (ImageReduced, Regions1, 230, 255)
connection (Regions1, Connection)
select_shape(Connection, Labels, 'area', 'and', 2000, 99999)
area_center(Labels, AreaLabels, RowLabels, ColumnLabels)
AreaLabels is in px², but I need it in mm². I couldn't find anything like region_to_world_plane... How can this be done?

Try looking at the operator "image_to_world_plane". It will transform the image so that the pixel size is in metric units (or whatever you prefer). It will also warp the image so that it looks as if the picture was taken directly from overhead. Then any area calculations you perform on this transformed image will be in the units you specified (mm, m, etc).
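As a rough illustration of why this helps (plain Python, and the values are assumptions, not numbers from your setup): once the image has been rectified with image_to_world_plane so that every pixel has a known metric size, converting an area from px² to mm² is just a matter of scaling by the squared pixel size:
# Minimal sketch; pixel_size_mm and area_px are assumed example values.
pixel_size_mm = 0.1                      # assumed: each rectified pixel is 0.1 mm x 0.1 mm
area_px = 2500                           # assumed: area returned by area_center on the rectified image
area_mm2 = area_px * pixel_size_mm ** 2  # 2500 px² * 0.01 mm²/px² = 25.0 mm²
print(area_mm2)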

Related

Automatically convert pixels to millimeters in Mathematica

I can get the drop contour through a GetDropProfile command.
However, I can't find the conversion factor from pixels to millimeters. The contour of the drop is obtained point by point from left to right, so the first ordered pair in the list gives the coordinates of the leftmost pixel and the last ordered pair gives the rightmost one. Since they are opposite each other, they have the same y, so the difference in x between these two points is the diameter of the drop. How can I automate this conversion from pixels to millimeters, view the graph in millimeters, and smooth the contour of the discrete curve, automatically choosing how many points to the right and left to take?
The image of the drop and the contour obtained in pixels follow below.
As posted here, assuming the axes are in millimetres, the scale can be obtained from the x-axis ticks, which can be sampled from row 33 from the bottom. As can be observed by executing the code below, the left- and rightmost ticks occupy one pixel each, coloured RGB {0.4, 0.4, 0.4}. So there are 427 pixels per 80 mm.
img = Import["https://i.stack.imgur.com/GIuYq.png"];
{wd, ht} = ImageDimensions[img];
data = ImageData[img];
(* View the left- and rightmost pixel data *)
Take[data[[-33]], 20]
Take[data[[-33]], -20]
p1 = LengthWhile[data[[-33]], # == {1., 1., 1.} &];
p2 = LengthWhile[Reverse[data[[-33]]], # == {1., 1., 1.} &];
p120 = wd - p1 - p2 - 1
427
(* Showing the sampled row in the graphic *)
data[[-33]] = ConstantArray[{1, 0, 0}, wd];
Graphics[Raster[Reverse[data]]]
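If it helps, here is a rough sketch (in Python/NumPy rather than Mathematica; the contour array and the window size are assumptions) of applying that factor of 80 mm per 427 px to the contour points and smoothing them with a simple moving average:
import numpy as np

mm_per_px = 80.0 / 427.0                                          # scale found above

contour_px = np.array([[10, 55], [11, 57], [12, 58], [13, 58]])   # assumed (x, y) pixel pairs
contour_mm = contour_px * mm_per_px                               # same points, now in mm

def moving_average(values, window=3):
    # Simple smoothing; the window size is a free choice.
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode='valid')

smoothed_y_mm = moving_average(contour_mm[:, 1], window=3)
print(contour_mm)
print(smoothed_y_mm)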
You might ask about smoothing the curve here https://mathematica.stackexchange.com

Altair: How to make scatter plot aligned with image background created by mark_image?

I'm looking for a working example to have a .png picture as the background of a scatter chart.
Currently, I use mark_image to draw the background image:
source = pd.DataFrame.from_records([
    {"x": 0, "y": 0,
     "img": "http://localhost:8888/files/BARTStreamlit/assets/BARTtracksmap.png?_xsrf=2%7Ce13c35be%7Ce013c83479b892363239e5b6a77d97dc%7C1652400559"}
])
tracksmap = alt.Chart(source).mark_image(
    width=500,
    height=500
).encode(
    x='x',
    y='y',
    url='img'
)
tracksmap
Here is the resulting image drawn:
Then I draw the scatter chart:
chart = alt.Chart(maptable).mark_circle(size=60).encode(
    x='x',
    y='y',
    tooltip=['short_name', 'ENTRY']
).interactive()
chart
I have scaled the x and y channel values for the scatter chart to be in the range [0, 500]; 500 is my guess at the width and height of the background image.
Here is the resulting scatter plot:
Then I combined the two charts with the layer mechanism:
geovizvega = alt.layer(tracksmap, chart)
geovizvega
resulting in the following:
The two charts do not align. I'd like the scatter dots to align with the tracks on the background image. How can I achieve that?
To have them aligned, I might need the background image's top-left corner to be at the coordinates (0, 0). How can I achieve that? (It seems that the x, y channel values for mark_image are the coordinates of the center of the image? With an accurate definition of the x, y channel values, it might be possible to calculate the proper x and y for the top-left corner to land at (0, 0).)
I might also need the precise dimensions of the background image. How do I get those?
My above approach may not be the right one. Please show me a working example.
Yes, if you change the values of x and y in your image plot to something like y=-200 and x=200, the image should be more centered in the scatter plot.
You can also change the anchor point of the image using align and baseline:
import altair as alt
import pandas as pd
source = pd.DataFrame.from_records([
    {"x": 2, "y": 2, "img": "https://vega.github.io/vega-datasets/data/7zip.png"}
])
imgs = alt.Chart(source).mark_image(
    width=100,
    height=100
).encode(
    x='x',
    y='y',
    url='img'
)
imgs + imgs.mark_circle(size=200, color='red', opacity=1)
imgs = alt.Chart(source).mark_image(
    width=100,
    height=100,
    align='right',
    baseline='top'
).encode(
    x='x',
    y='y',
    url='img'
)
imgs + imgs.mark_circle(size=200, color='red', opacity=1)
After this, you would still need to change the dimensions of the chart so that it has the same size as the image. The default is width=400 and height=300. You can get the dimensions of your image in most image editing software, or with the file <imagename> command (at least on Linux). But even after getting these dimensions, you would have to do some manual adjustments because the axes take up some of that space in the chart.
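For example, a sketch of that last step (assuming Pillow is installed and that the local file path is correct; tracksmap and chart are the charts built in the question) could read the image size and pass it to the layered chart:
from PIL import Image
import altair as alt

# Read the real pixel dimensions of the background image (path is an assumption).
width_px, height_px = Image.open("assets/BARTtracksmap.png").size

# Size the layered chart to match the image; the axes will still take up some space.
geovizvega = alt.layer(tracksmap, chart).properties(
    width=width_px,
    height=height_px
)
geovizvega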

Getting the coordinates of vertices of an A4 sheet with coins on it, for its further projective transformation and coin detection

I need to transform my tilted image so that I can find coins on an A4 sheet of paper. So far, I have been getting the four corner coordinates of the paper by selecting them manually with ginput.
targetImageData = imread('coin1.jpg');
imshow(targetImageData);
fprintf('Corner selection must be clockwise or anti-clockwise.\n');
[X,Y] = ginput(4);
Is there a way to automate this process, say by applying some edge detector, finding the coordinates of each vertex, and passing them as the coordinates needed for the transformation?
Manual selection:
Result:
You can try using detectHarrisFeatures on the S color channel of HSV color space:
I was looking for a color space that gives maximum contrast for the paper.
It looks like the saturation channel of HSV gives good contrast between the paper and the background.
The image is resized by a factor of 0.25 to reduce noise.
detectHarrisFeatures finds the 4 corners of the paper, but it might not be robust enough.
You may need to find more features and then pick the 4 correct ones using some logic.
Here is a code sample:
%Read input image
I = imread('im.png');
%Remove the margins, and replace them using padding (just because the image is a MATLAB figure)
I = padarray(I(11:end-10, 18:end-17, :), [10, 17], 'both', 'replicate');
HSV = rgb2hsv(I);
%H = HSV(:, :, 1);%figure;imshow(H);title('H');
S = HSV(:, :, 2);%figure;imshow(S);title('S');
%V = HSV(:, :, 3);%figure;imshow(V);title('V');
%Reduce image size by a factor of 0.25 in each axis
S = imresize(S, 0.25);
%S = imclose(S, ones(3)); %May be required
%Detect corners
corners = detectHarrisFeatures(S);
imshow(S); hold on;
plot(corners.selectStrongest(4));
Result:
Different approach you may try:
Take a photo without the coins.
Mark the corners manually, and extract features of the 4 corners.
Use image matching techniques to match the image with the coins against the image without the coins (match based on the 4 corners); see the sketch after this list.
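A rough sketch of that matching idea in Python/OpenCV (not MATLAB; the file names, ORB settings and the reference corner coordinates are assumptions) could look like this, and the same pipeline can be built in MATLAB with its feature detection and matching functions:
import cv2
import numpy as np

ref = cv2.imread('a4_empty.jpg', cv2.IMREAD_GRAYSCALE)    # assumed: photo without coins
img = cv2.imread('coin1.jpg', cv2.IMREAD_GRAYSCALE)       # photo with coins

# Detect and match local features between the two photos.
orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_img, des_img = orb.detectAndCompute(img, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_img), key=lambda m: m.distance)

# Estimate the mapping from the reference photo to the coin photo.
src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_img[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Corners marked once, manually, on the coin-free reference (assumed values).
ref_corners = np.float32([[50, 40], [950, 60], [940, 700], [60, 690]]).reshape(-1, 1, 2)
img_corners = cv2.perspectiveTransform(ref_corners, H)
print(img_corners.reshape(-1, 2))   # paper corners in the coin image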

Displaying georeferenced images using OpenLayers 5

I'm trying to make an application where the user can georeference scanned maps. You can look at an example here: https://codesandbox.io/s/2o99jvrnyy
There are two images:
assets/test.png - without rotation
assets/test_rotation.png - with rotation
The first image is loaded correctly on the map but the one with rotation is not.
I can't find information on whether OpenLayers 5 can handle images with transformation parameters stored in a world file. I'm probably missing something, but I can't figure out what.
This is how my logic works:
Transformation parameters are calculated with an affine transformation using 4 points. You can see the logic in the Affine.js file. At least 4 points are picked from the source image and the map, and the transformation parameters are calculated from them. After that I calculate the extent of the image:
width = image.width in pixels
height = image.height in pixels
width *= Math.sqrt(Math.pow(parameters.A, 2) + Math.pow(parameters.D, 2));
height *= Math.sqrt(Math.pow(parameters.B, 2) + Math.pow(parameters.E, 2));
// then the extent in projection units is
extent = [parameters.C, parameters.F - height, parameters.C + width, parameters.F];
World file parameters are calculated as defined here.
Probably the problem is that the image with rotation is not rotated when loaded as a static image in OpenLayers 5, but I can't find a way to do that.
I tried to load both images in QGIS and ArcMap with the calculated parameters, and both of them load correctly. You can see the result for the second picture:
You can see the parameters for each image here:
Image: test.png
Calculated extent: [436296.79726721847, 4666723.973240128, 439864.3389057907, 4669253.416495154]
Calculated parameters (for world file):
3.8359372067274027
-0.03146800786355865
-0.03350636818089405
-3.820764346376064
436296.79726721847
4669253.416495154
Image: test_rotation.png
Calculated extent: [437178.8291026594, 4667129.767589236, 440486.91675884253, 4669768.939256327]
Calculated parameters (for world file):
3.506332904308879
-1.2831186688536016
-1.3644002712982917
-3.7014921022625864
437178.8291026594
4669768.939256327
I realized that my approach was wrong. There is no need to calculate the extent of the image in the map projection and set it on the layer. I can simply add a transformation function responsible for transforming coordinates between the image projection and the map projection. This way the image layer always has its projection set to the image projection and its extent set to the size of the image in pixels.
The transformation function is added like this:
import { addCoordinateTransforms } from 'ol/proj.js';
addCoordinateTransforms(
  mapProjection,
  imageProjection,
  coords => {
    // forward
    return Affine.transform(coords);
  },
  coords => {
    // inverse
  }
)
Affine parameters are again calculated from at least 3 points:
// mapPoints - coordinates in map projection
// imagePoints - coordinates in image projection
Affine.calculate(mapPoints, imagePoints);
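For reference, here is a sketch of what such an affine fit can look like (Python/NumPy rather than the project's Affine.js, and the point coordinates below are made-up assumptions): a least-squares solve of X = a·x + b·y + c and Y = d·x + e·y + f from the point pairs:
import numpy as np

# Assumed example point pairs: pixel coordinates and their map coordinates.
image_points = np.array([[0, 0], [1000, 0], [1000, 800], [0, 800]], dtype=float)
map_points = np.array([[436296.8, 4669253.4], [440132.1, 4669221.9],
                       [440105.3, 4666165.7], [436270.0, 4666197.3]])

# Design matrix with rows [x, y, 1]; solve each map axis separately.
G = np.hstack([image_points, np.ones((len(image_points), 1))])
sol_x, *_ = np.linalg.lstsq(G, map_points[:, 0], rcond=None)
sol_y, *_ = np.linalg.lstsq(G, map_points[:, 1], rcond=None)
a, b, c = sol_x
d, e, f = sol_y

def forward(coord):
    # Image pixel -> map coordinate.
    x, y = coord
    return [a * x + b * y + c, d * x + e * y + f]

print(forward([500, 400]))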
You can see a complete example here - https://kw9l85y5po.codesandbox.io/

Matlab and OpenCV calculate different image moment m00 for the same image

For exactly the same image
OpenCV code:
Mat img = imread("testImg.png", 0);
Mat img_bw;
threshold(img, img_bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
Mat tmp;
img_bw.copyTo(tmp);
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(tmp, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
// Get the moments
vector<Moments> mu(contours.size());
for (int i = 0; i < contours.size(); i++)
{
    mu[i] = moments(contours[i], false);
}
// Display the area (m00)
for (int i = 0; i < contours.size(); i++)
{
    cout << mu[i].m00 << endl;
    // I also tried
    // cout << contourArea(contours.at(i)) << endl;
    // but the result is the same
}
MATLAB code:
Img = imread('testImg.png');
lvl = graythresh(Img);
bw = im2bw(Img, lvl);
stats = regionprops(bw, 'Area');
for k = 1:length(stats)
    Area = stats(k).Area; % m00
end
Does anyone have any thoughts on this? How can I make them agree? I think they use different methods to find contours.
I uploaded the test image at the link below so that anyone interested can reproduce the procedure.
It is a small 100 x 100, 8-bit grayscale image with only 0 and 255 pixel intensities. For simplicity, it has only one blob on it.
For OpenCV, the area of the contour (image moment m00) is 609.5 (a very odd value).
For MATLAB, the area of the contour (image moment m00) is 763.
Thanks
Many different definitions exist for how contours should be extracted from a binary image. For example, the contour can be the polygon that forms the perimeter of a white object in the binary image. If OpenCV used this definition, the areas of the contours would be the same as the areas of the connected components found by MATLAB. But this is not the case. The contour found by the findContours() function is the polygon that connects the centers of neighboring "edge pixels", where an edge pixel is a white pixel that has a black neighbor in its N4 neighborhood.
Example: suppose you have an image of size 100x100 pixels, where every pixel above the diagonal is black and every pixel below or on the diagonal is white (a black triangle and a white triangle). The exact separation polygon would have almost 200 vertices spaced 1 pixel apart: (0,0), (1,0), (1,1), (2,1), (2,2), ..., (100,99), (100,100), (0,100). As you can see, this definition is not very good from a practical point of view. The polygon returned by OpenCV has exactly the 3 vertices needed to define the triangle: (0,0), (99,99), (0,99). Its area is (99 x 99 / 2) pixels. It is not equal to the number of white pixels, and it is not even an integer. But this polygon is more practical than the previous one.
These are not the only possible definitions for polygon extraction; many others exist. Some of them (in my opinion) may be better than the one used by OpenCV, but this is the one that was implemented and is used by a lot of people.
Currently there is no effective workaround for your problem. If you want to get exactly the same numbers from MATLAB and OpenCV, you will have to draw the contours found by findContours() onto a black image and then call moments() on that image. I know that the upcoming OpenCV 3 has a function that finds connected components, but I haven't tried it myself.
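A hedged sketch of that workaround in Python/OpenCV (the question uses C++, but the idea is the same; the file name is taken from the question and the rest is an assumption):
import cv2
import numpy as np

img = cv2.imread('testImg.png', cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# [-2:] keeps this working on both the OpenCV 3 and OpenCV 4 return signatures.
contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2:]
for cnt in contours:
    print('polygon area (contourArea):', cv2.contourArea(cnt))

    # Rasterize the filled contour and take moments of the image instead.
    mask = np.zeros_like(bw)
    cv2.drawContours(mask, [cnt], -1, 255, thickness=-1)
    print('pixel-count area (moments on image):', cv2.moments(mask, binaryImage=True)['m00'])
With the contour rasterized and filled, m00 counts pixels the same way regionprops('Area') does, so for a simple blob without holes the two numbers should agree.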