TikZ lines from node to node, e.g. for curves in coordinate systems

TikZ's ability to draw lines from node to node is very useful, e.g. for something like this:
However, if the node should have size 0 (i.e., no description), the outgoing or incoming line stops before the actual point. This is for instance the case if the node is only used to mark a point in the coordinate system; see, e.g., the nodes at (1,1) or (0,0) in the following picture:
The code for the second picture is:
\node at(0,0) (origin) {};
\node[below = 0.0cm of origin]{$(0,0)$};
\node[above = 3cm of origin] (y) {$y$};
\node[right = 3cm of origin] (z) {$z$};
\node[above = 2cm of origin, left] (y1) {$1$};
\node[right = 2cm of origin, below] (z1) {$1$};
\node[right = 2cm of y1] (yz1) {};
\node[above = 2.5cm of z] (endgz) {};
\node[above = 3cm of z] (yz) {};
\node[above = 1cm of origin] (p0) {};
\draw[fill](p0) circle(0.06cm);
\node[left=0cm of p0]{$g(0)=p0$};
\draw[fill](yz1) circle(0.06cm);
\draw[->,thick] (0,0) -- (y);
\draw[->,thick] (0,0) -- (z);
\draw (origin) -- node [above = 1cm] {$y(z)=z$} (yz);
\draw (p0) .. controls +(0:1cm) and +(205:1cm) .. (endgz) node [below] {$g(z)$} ;
\draw (y1) -- (yz1);
\draw (z1) -- (yz1);
How can this be "corrected" without giving the nodes absolute coordinates instead of relative ones? I find it very attractive not to always give absolute coordinates, as this makes the image much more adaptable for future and generalized use.

You can change the inner sep: an empty node still takes up space because of its default inner sep, so setting it to 0pt for the nodes that only mark points lets the lines reach the actual coordinates:
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{positioning}
\begin{document}
\begin{tikzpicture}
\node[inner sep=0pt] at(0,0) (origin) {};
\node[below = 0.0cm of origin]{$(0,0)$};
\node[above = 3cm of origin] (y) {$y$};
\node[right = 3cm of origin] (z) {$z$};
\node[above = 2cm of origin, left] (y1) {$1$};
\node[right = 2cm of origin, below] (z1) {$1$};
\node[inner sep=0pt,right = 2cm of y1] (yz1) {};
\node[above = 2.5cm of z] (endgz) {};
\node[above = 3cm of z] (yz) {};
\node[inner sep=0pt,above = 1cm of origin] (p0) {};
\draw[fill](p0) circle(0.06cm);
\node[left=0cm of p0]{$g(0)=p0$};
\draw[fill](yz1) circle(0.06cm);
\draw[->,thick] (0,0) -- (y);
\draw[->,thick] (0,0) -- (z);
\draw (origin) -- node [above = 1cm] {$y(z)=z$} (yz);
\draw (p0) .. controls +(0:1cm) and +(205:1cm) .. (endgz) node [below] {$g(z)$} ;
\draw (y1) -- (yz1);
\draw (z1) -- (yz1);
\end{tikzpicture}
\end{document}
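As a side note (an alternative not used in the code above): nodes that are only needed to mark coordinates can also be declared with \coordinate (origin) at (0,0);, which produces a zero-size node by default, so no inner sep adjustment is needed for it.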

Coordinates of an object - Matlab

How can I find the top, bottom, left, and right (x,y) coordinates of this object?
Code:
clc;
clear all;
Image = rgb2gray(imread('https://upload-icon.s3.us-east-2.amazonaws.com/uploads/icons/png/1606078271536061993-512.png'));
BW = imbinarize(Image);
BW = imfill(BW,'holes');
BW = bwareaopen(BW, 100);
BW = padarray(BW,60,60,'both');
BW = imcomplement(BW);
imshow(BW)
The red pixels on the coordinate plot indicate the furthest pixels to the left, right, top and bottom of the icon/image. This method similarly uses for-loops and break statements to scan the image until a pixel of logical value 1 is found.
The scanning patterns include:
• Top → Scan image from top to bottom
• Bottom → Scan image from bottom to top
• Left → Scan image from left to right
• Right → Scan image from right to left
clc;
clear all;
Image = rgb2gray(imread('https://upload-icon.s3.us-east-2.amazonaws.com/uploads/icons/png/1606078271536061993-512.png'));
BW = imbinarize(Image);
BW = imfill(BW,'holes');
BW = bwareaopen(BW, 100);
BW = padarray(BW,60,60,'both');
BW = imcomplement(BW);
[Image_Height,Image_Width] = size(BW);
%Top (scanning from top to bottom)%
Found = false;
for Row_Scanner = 1: +1: Image_Height
    for Column_Scanner = 1: +1: Image_Width
        Pixel_Value = BW(Row_Scanner,Column_Scanner);
        if(Pixel_Value == 1)
            Top = [Row_Scanner Column_Scanner];
            Found = true;
            break; %break only leaves the inner loop ...
        end
    end
    if(Found) %... so also stop the outer loop after the first hit
        break;
    end
end
%Bottom (scanning from bottom to top)%
Found = false;
for Row_Scanner = Image_Height: -1: 1
    for Column_Scanner = 1: +1: Image_Width
        Pixel_Value = BW(Row_Scanner,Column_Scanner);
        if(Pixel_Value == 1)
            Bottom = [Row_Scanner Column_Scanner];
            Found = true;
            break;
        end
    end
    if(Found)
        break;
    end
end
%Left (scanning from left to right)%
Found = false;
for Column_Scanner = 1: +1: Image_Width
    for Row_Scanner = 1: +1: Image_Height
        Pixel_Value = BW(Row_Scanner,Column_Scanner);
        if(Pixel_Value == 1)
            Left = [Row_Scanner Column_Scanner];
            Found = true;
            break;
        end
    end
    if(Found)
        break;
    end
end
%Right (scanning from right to left)%
Found = false;
for Column_Scanner = Image_Width: -1: 1
    for Row_Scanner = 1: +1: Image_Height
        Pixel_Value = BW(Row_Scanner,Column_Scanner);
        if(Pixel_Value == 1)
            Right = [Row_Scanner Column_Scanner];
            Found = true;
            break;
        end
    end
    if(Found)
        break;
    end
end
Coordinates_Image = zeros(Image_Height,Image_Width,3);
Coordinates_Image(Top(1,1),Top(1,2),1) = 255;
Coordinates_Image(Bottom(1,1),Bottom(1,2),1) = 255;
Coordinates_Image(Left(1,1),Left(1,2),1) = 255;
Coordinates_Image(Right(1,1),Right(1,2),1) = 255;
Top_X = Top(1,2);
Top_Y = Top(1,1);
Bottom_X = Bottom(1,2);
Bottom_Y = Bottom(1,1);
Left_X = Left(1,2);
Left_Y = Left(1,1);
Right_X = Right(1,2);
Right_Y = Right(1,1);
fprintf("Top coordinate is (x,y) -> (%d,%d)\n",Top_X,Top_Y);
fprintf("Bottom coordinate is (x,y) -> (%d,%d)\n",Bottom_X,Bottom_Y);
fprintf("Left coordinate is (x,y) -> (%d,%d)\n",Left_X,Left_Y);
fprintf("Right coordinate is (x,y) -> (%d,%d)\n",Right_X,Right_Y);
subplot(1,2,1); imshow(BW);
title('Image');
subplot(1,2,2); imshow(Coordinates_Image);
title('Coordinate Plot');
xlabel('X-Axis'); ylabel('Y-Axis');
You can do it using a loop. Find the first/last rows and columns whose sum is non-zero and break the loop once the condition is met.
BW = imcomplement(BW);
[r,c] = size(BW);
% Top coordinate: find first row with non-zero sum
for i=1:r
    top = i;
    if sum(BW(i,:)) ~= 0
        break
    end
end
% Bottom coordinate: find last row with non-zero sum
for i=r:-1:1
    bottom = i;
    if sum(BW(i,:)) ~= 0
        break
    end
end
% Left coordinate: find first column with non-zero sum
for i=1:c
    left = i;
    if sum(BW(:,i)) ~= 0
        break
    end
end
% Right coordinate: find last column with non-zero sum
for i=c:-1:1
    right = i;
    if sum(BW(:,i)) ~= 0
        break
    end
end
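For reference (not part of the original answer), the loops can also be replaced entirely with a vectorized lookup, assuming the same preprocessed BW as above:
rows = find(any(BW, 2));   % rows containing at least one white pixel
cols = find(any(BW, 1));   % columns containing at least one white pixel
top    = rows(1);          % first non-empty row
bottom = rows(end);        % last non-empty row
left   = cols(1);          % first non-empty column
right  = cols(end);        % last non-empty column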

Patch on a sphere of varying size

Imagine a patch glued to a sphere. How would I manage to make the patch keep its center position and surface area as the sphere is scaled up or down? Normally, only the curvature of the patch should change, as it is « glued » to the sphere. Assume the patch is described as a set of ( latitude, longitude ) coordinates.
One possible solution would consist of converting the geographical coordinates of the patch into gnomonic coordinates (the patch viewed perpendicularly from directly above), thereby making a 2D texture which is then scaled up or down as the sphere changes its size. But I am unsure whether this is the right approach and how close to the desired effect this would be.
I am a newbie, so perhaps Unity can do this simply with the right set of options when applying a texture. In that case, which input map projection should be used for the texture? Or maybe I should use a 3D surface and « nail » it somehow to the sphere.
Thank you!!
EDIT
I’m adding an illustration to show how the patch should be deformed as the sphere is scaled up or down. On a very small sphere, the patch would eventually wrap around. Whereas on a larger sphere, the patch would be almost flat. The deformation of the patch could be thought of as being similar to gluing the same sticker to spheres of different sizes.
The geometry of the patch could be any polygonal surface, and as previously mentioned must preserve its center position and surface area when the sphere is scaled up or down.
Assume you have a sphere of radius R1 centered at the origin of the standard coordinate system O e1 e2 e3. Then the sphere is given by all points x = [x[0], x[1], x[2]] in 3D that satisfy the equation x[0]^2 + x[1]^2 + x[2]^2 = R1^2. On this sphere you have a patch and the patch has a center c = [c[0], c[1], c[2]].
First, rotate the patch so that the center c goes to the north pole, then project it onto a plane, using an area preserving map for the sphere of radius R1, then map it back using the analogous area preserving map but for radius R2 sphere and finally rotate back the north pole to the scaled position of the center.
Functions you may need to define:
Function 1: Define spherical coordinates
x = sc(u, v, R):
return
x[0] = R*sin(u)*cos(v)
x[1] = R*sin(u)*sin(v)
x[2] = R*cos(u)
where
0 <= u <= pi and 0 <= v < 2*pi
Function 2: Define inverse spherical coordinates:
[u, v] = inv_sc(x, R):
return
u = arccos( x[2] / R )
if x[1] > 0
v = arccot(x[0] / x[1])
else if x[1] < 0
v = pi + arccot(x[0] / x[1])
else if x[1] = 0 and x[0] > 0
v = 0
else if x[1] = 0 and x[0] < 0
v = pi
where x[0]^2 + x[1]^2 + x[2]^2 = R^2
Function 3: Rotation matrix that rotates the center c to the north pole:
Assume the center c is given in spherical coordinates [uc, vc]. Then apply function 1
c = [c[0], c[1], c[2]] = sc(uc, vc, R1)
Then, find the index i for which abs(c[i]) = min( abs(c[0]), abs(c[1]), abs(c[2]) ). Say the minimum is attained at the second component; then take the coordinate vector e2 = [0, 1, 0].
Calculate the cross-product vectors cross(c, e2) and cross(cross(c, e2), c), think of them as row-vectors, and form the 3 by 3 rotation matrix
A3 = c / norm(c)
A2 = cross(c, e2) / norm(cross(c, e2))
A1 = cross(A2, A3)
A = [ A1,
A2,
A3 ]
Function 4:
[w,z] = area_pres(u,v,R1,R2):
return
w = arccos( 1 - (R1/R2)^2 * (1 - cos(u)) )
z = v
Now if you re-scale the sphere from radius R1 to radius R2 then any point x from the patch on the sphere with radius R1 gets transformed to the point y on the sphere of radius R2 by the following chain of transformations:
If x is given in spherical coordinates `[ux, vx]`, first apply
x = [x[0], x[1], x[2]] = sc(ux, vx, R1)
Then rotate with the matrix A:
x = matrix_times_vector(A, x)
Then apply the chain of transformations:
[u,v] = inv_sc(x, R1)
[w,z] = area_pres(u,v,R1,R2)
y = sc(w,z,R2)
Now y is on the R2 sphere.
Finally,
y = matrix_times_vector(transpose(A), y)
As a result all of these points y fill-in the corresponding transformed patch on the sphere of radius R2 and the patch-area on R2 equals the patch-area of the original patch on sphere R1. Plus the center point c gets just scaled up or down along a ray emanating from the center of the sphere.
The general idea behind this approach is that the area element of the R1 sphere is R1^2*sin(u) du dv, and we can look for a transformation of the latitude-longitude coordinates [u,v] of the R1 sphere into latitude-longitude coordinates [w,z] of the R2 sphere, given by functions w = w(u,v) and z = z(u,v), such that
R2^2*sin(w) dw dz = R1^2*sin(u) du dv
When you expand the derivatives of [w,z] with respect to [u,v], you get
dw = dw/du(u,v) du + dw/dv(u,v) dv
dz = dz/du(u,v) du + dz/dv(u,v) dv
Plug them in the first formula, and you get
R2^2*sin(w) dw dz = R2^2*sin(w) * ( dw/du(u,v) du + dw/dv(u,v) dv ) wedge ( dz/du(u,v) du + dz/dv(u,v) dv )
= R1^2*sin(u) du dv
which simplifies to the equation
R2^2*sin(w) * ( dw/du(u,v) dz/dv(u,v) - dw/dv(u,v) dz/du(u,v) ) du dv = R1^2*sin(u) du dv
So the general differential equation that guarantees the area preserving property of the transformation between the spherical patch on R1 and its image on R2 is
R2^2*sin(w) * ( dw/du(u,v) dz/dv(u,v) - dw/dv(u,v) dz/du(u,v) ) = R1^2*sin(u)
Now, recall that the center of the patch has been rotated to the north pole of the R1 sphere, so you can think of the center of the patch as being the north pole. If you want a nice transformation of the patch that is somewhat homogeneous and isotropic around the patch's center, i.e. when standing at the center c of the patch (c = north pole) you see the patch deformed so that longitudes (great circles passing through c) are preserved (all points of a given longitude get mapped to points of the same longitude), then you get the restriction that a point [u, v] gets transformed to a new point [w, z] on the same longitude, i.e. z = v. Therefore such a longitude-preserving transformation should look like this:
w = w(u,v)
z = v
Consequently, the area-preserving equation simplifies to the following partial differential equation
R2^2*sin(w) * dw/du(u,v) = R1^2*sin(u)
because dz/dv = 1 and dz/du = 0.
To solve it, first fix the variable v, and you get the ordinary differential equation
R2^2*sin(w) * dw = R1^2*sin(u) du
whose solution is
R2^2*(1 - cos(w)) = R1^2*(1 - cos(u)) + const
Therefore, when you let v vary, the general solution for the partial differential equation
R2^2*sin(w) * dw/du(u,v) = R1^2*sin(u)
in implicit form (equation that links the variables w, u, v) should look like
R2^2*(1 - cos(w)) = R1^2*(1 - cos(u)) + f(v)
for any function f(v)
However, let us not forget that the north pole stays fixed during this transformation, i.e. we have the restriction that w= 0 whenever u = 0. Plug this condition into the equation above and you get the restriction for the function f(v)
R2^2*(1 - cos(0)) = R1^2*(1 - cos(0)) + f(v)
R2^2*(1 - 1) = R1^2*(1 - 1) + f(v)
0 = f(v)
for every longitude v
Therefore, as soon as you impose longitudes to be transformed to the same longitudes and the north pole to be preserved, the only option you are left with is the equation
R2^2*(1 - cos(w)) = R1^2*(1 - cos(u))
which means that when you solve for w you get
w = arccos( 1 - (R1/R2)^2 * (1 - cos(u)) )
and thus, the corresponding area preserving transformation between the patch on sphere R1 and the patch on sphere R2 with the same area, fixed center and a uniform deformation at the center so that longitudes are transformed to the same longitudes, is
w = arccos( 1 - (R1/R2)^2 * (1 - cos(u)) )
z = v
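As a quick sanity check (my own numerical example, not part of the original answer): take R1 = 1, R2 = 2 and a point at colatitude u = pi/3. Then w = arccos( 1 - (1/2)^2 * (1 - cos(pi/3)) ) = arccos(0.875) ≈ 0.505, and the corresponding spherical caps indeed have equal area: 2*pi*R1^2*(1 - cos(u)) = pi on the R1 sphere and 2*pi*R2^2*(1 - cos(w)) = 2*pi*4*(1 - 0.875) = pi on the R2 sphere.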
Here I implemented some of these functions in Python and ran a simple simulation:
import numpy as np
import math
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
def trig(uv):
    return np.cos(uv), np.sin(uv)

def sc_trig(cos_uv, sin_uv, R):
    n, dim = cos_uv.shape
    x = np.empty((n,3), dtype=float)
    x[:,0] = sin_uv[:,0]*cos_uv[:,1]  #sin_u*cos_v
    x[:,1] = sin_uv[:,0]*sin_uv[:,1]  #sin_u*sin_v
    x[:,2] = cos_uv[:,0]              #cos_u
    return R*x

def sc(uv,R):
    cos_uv, sin_uv = trig(uv)
    return sc_trig(cos_uv, sin_uv, R)

def inv_sc_trig(x):
    n, dim = x.shape
    cos_uv = np.empty((n,2), dtype=float)
    sin_uv = np.empty((n,2), dtype=float)
    Rad = np.sqrt(x[:,0]**2 + x[:,1]**2 + x[:,2]**2)
    r_xy = np.sqrt(x[:,0]**2 + x[:,1]**2)
    cos_uv[:,0] = x[:,2]/Rad   #cos_u = x[:,2]/R
    sin_uv[:,0] = r_xy/Rad     #sin_u = r_xy/R
    cos_uv[:,1] = x[:,0]/r_xy  #cos_v
    sin_uv[:,1] = x[:,1]/r_xy  #sin_v
    return cos_uv, sin_uv

def center_x(x,R):
    n, dim = x.shape
    c = np.sum(x, axis=0)/n
    return R*c/math.sqrt(c.dot(c))

def center_uv(uv,R):
    x = sc(uv,R)
    return center_x(x,R)

def center_trig(cos_uv, sin_uv, R):
    x = sc_trig(cos_uv, sin_uv, R)
    return center_x(x,R)

def rot_mtrx(c):
    i = np.argmin(np.abs(c))  #axis most orthogonal to c, as in the pseudocode's min(abs(...))
    e_i = np.zeros(3)
    e_i[i] = 1
    A = np.empty((3,3), dtype=float)
    A[2,:] = c/math.sqrt(c.dot(c))
    A[1,:] = np.cross(A[2,:], e_i)
    A[1,:] = A[1,:]/math.sqrt(A[1,:].dot(A[1,:]))
    A[0,:] = np.cross(A[1,:], A[2,:])
    return A.T  # ready to apply to an n x 3 matrix of points from the right

def area_pres(cos_uv, sin_uv, R1, R2):
    cos_wz = np.empty(cos_uv.shape, dtype=float)
    sin_wz = np.empty(sin_uv.shape, dtype=float)
    cos_wz[:,0] = 1 - (R1/R2)**2 * (1 - cos_uv[:,0])
    cos_wz[:,1] = cos_uv[:,1]
    sin_wz[:,0] = np.sqrt(1 - cos_wz[:,0]**2)
    sin_wz[:,1] = sin_uv[:,1]
    return cos_wz, sin_wz

def sym_patch_0(n,m):
    u = math.pi/2 + np.linspace(-math.pi/3, math.pi/3, num=n)
    v = math.pi/2 + np.linspace(-math.pi/3, math.pi/3, num=m)
    uv = np.empty((n, m, 2), dtype=float)
    uv[:,:,0] = u[:, np.newaxis]
    uv[:,:,1] = v[np.newaxis,:]
    uv = np.reshape(uv, (n*m, 2), order='F')
    return uv, u, v
uv, u, v = sym_patch_0(18,18)
r1 = 1
r2 = 2/3
r3 = 2
limits = max(r1,r2,r3)
p = math.pi
x = sc(uv,r1)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x[:,0], x[:,1], x[:,2])
ax.set_xlim(-limits, limits)
ax.set_ylim(-limits, limits)
ax.set_zlim(-limits, limits)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
B = rot_mtrx(center_x(x,r1))
x = x.dot(B)
cs, sn = inv_sc_trig(x)
cs1, sn1 = area_pres(cs, sn, r1, r2)
y = sc_trig(cs1, sn1, r2)
y = y.dot(B.T)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(y[:,0], y[:,1], y[:,2])
ax.set_xlim(-limits, limits)
ax.set_ylim(-limits, limits)
ax.set_zlim(-limits, limits)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
cs1, sn1 = area_pres(cs, sn, r1, r3)
y = sc_trig(cs1, sn1, r3)
y = y.dot(B.T)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(y[:,0], y[:,1], y[:,2])
ax.set_xlim(-limits, limits)
ax.set_ylim(-limits, limits)
ax.set_zlim(-limits, limits)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
One can see three figures of how a patch gets deformed when the radius of the sphere changes from radius 2/3, through radius 1 and finally to radius 2. The patch's area doesn't change, and the transformation of the patch is homogeneous in all directions, with no excessive deformation.
You could e.g. do something like
public class Example : MonoBehaviour
{
    public Transform sphere;
    public float latitude;
    public float longitude;

    private void Update()
    {
        transform.position = sphere.position
            + Quaternion.AngleAxis(longitude, -Vector3.up)
            * Quaternion.AngleAxis(latitude, -Vector3.right)
            * sphere.forward * sphere.lossyScale.x / 2f;
        transform.LookAt(sphere);
        transform.Rotate(90, 0, 0);
    }
}
The pin would not be a child of the sphere. It would result in a pin (in red) like:
Alternatively as said you could make the pin a child of the sphere in a structure like
Sphere
|--PinAnchor
|--Pin
So in order to change the Pin position you would rotate the PinAnchor. The Pin itself would update its own scale so that it always has a certain target scale, e.g. like
public class Example : MonoBehaviour
{
    public float targetScale;

    private void Update()
    {
        var scale = transform.parent.lossyScale;
        var invertScale = new Vector3(1 / scale.x, 1 / scale.y, 1 / scale.z);
        if (float.IsNaN(invertScale.x)) invertScale.x = 0;
        if (float.IsNaN(invertScale.y)) invertScale.y = 0;
        if (float.IsNaN(invertScale.z)) invertScale.z = 0;
        transform.localScale = invertScale * targetScale;
    }
}
I am going to add another answer, because it is possible you may decide that different properties are important for your patch transformation, more specifically having minimal (in some sense) distortion, and the area preservation of the patch is not as important.
Assume you want to create a transformation from a patch (an open subset of the sphere with relatively well-behaved boundary, e.g. piecewise smooth or even piecewise geodesic boundary) on a sphere of radius R1 to a corresponding patch on a sphere of radius R2. However, you want the transformation to not distort the original patch on R1 when mapping it to R2. Assume the patch on R1 has a distinguished point c, called the center. This could be its geometric center, i.e. its center of mass (barycenter), or a point selected in another way.
For this discussion, let us assume the center c is at the north pole of the sphere R1. If it is not, we can simply rotate it to the north pole (see my previous post for one way to rotate the center), so that the standard spherical coordinates [u, v] (latitude and longitude) naturally apply, i.e.
for sphere R1:
x[0] = R1*sin(u)*cos(v)
x[1] = R1*sin(u)*sin(v)
x[2] = R1*cos(u)
for sphere R2:
y[0] = R2*sin(w)*cos(z)
y[1] = R2*sin(w)*sin(z)
y[2] = R2*cos(w)
with the point c having coordinates [0,0] (or any [0,v] for that matter, as these coordinates have a singularity at the pole). Ideally, if you could construct an isometric transformation between the two patches (an isometry is a transformation that preserves distances, angles and consequently area), you would be done. The two spheres, however, have different radii R1 and R2, so they have different intrinsic curvature, and there can be no isometry between the patches. Nevertheless, let us see what an isometry would have done: an isometry is a transformation that transforms the metric tensor (the line element, the way we measure distance on the sphere) of the first sphere to the metric tensor of the second, i.e.
Metric tensor of R1:
R1^2 * ( du^2 + (sin(u))^2 dv^2 )
Metric tensor of R2:
R2^2 * ( dw^2 + (sin(w))^2 dz^2 )
An isometry: [u,v] --> [w,z] so that
R1^2 * ( du^2 + (sin(u))^2 dv^2 ) = R2^2 * ( dw^2 + (sin(w))^2 dz^2 )
First, an isometry would send spherical geodesics (great circles) to spherical geodesics, so in particular longitudinal circles of R1 should be mapped to longitudinal circles of R2, because we want the north pole of R1 to be mapped to the north pole of R2. Also, an isometry would preserve angles, so in particular it would preserve angles between longitudinal circles. Since the angle between the zero longitudinal circle and the longitudinal circle of longitude v is equal to v (up to a translation by a constant if a global rotation of the sphere around the north pole is added, but we don't want that), v should be preserved by an isometry (i.e. the isometry should preserve the bearing at the north pole). That implies that the desired isometric map between the patches should have the form
Map between patch on R1 and patch on R2,
which maps the north pole of R1 to the north pole of R2:
w = w(u, v)
z = v
Furthermore, since the sphere looks the same at any point and in any direction (it is homogeneous and isotropic everywhere), this is in particular true for the north pole, and therefore an isometry should transform identically in all directions when looking from the north pole (the term is "isometric transformations should commute with the group of isometric automorphisms of the surfaces"), which yields that w = w(u, v) should not depend on the variable v:
Map between patch on R1 and patch on R2,
which maps the north pole of R1 to the north pole of R2:
w = w(u)
z = v
The final step towards finding an isometric transformation between the patches on R1 and R2 is to make sure that the metric tensors before and after the transformation are equal, i.e.:
R2^2 * ( dw^2 + (sin(w))^2 dz^2 ) = R1^2 * ( du^2 + (sin(u))^2 dv^2 )
dw = (dw/du(u)) du and dz = dv
R2^2 * ( (dw/du(u))^2 du^2 + (sin( w(u) ))^2 dv^2 ) = R1^2 * ( du^2 + (sin(u))^2 dv^2 )
set K = R1/R2
( dw/du(u) )^2 du^2 + (sin( w(u) ))^2 dv^2 = K^2 du^2 + K^2*(sin(u))^2 dv^2
For the latter equation to hold, we need the function w = w(u) to satisfy the following two restrictions
dw/du(u) = K
sin(w(u)) = K * sin(u)
However, we have only one function w(u) and two equations which are satisfied only when K = 1 (i.e. R1 = R2) which is not the case. This is where the isometric conditions break and that is why there is no isometric transformation between a patch on sphere R1 and a patch on R2 when R1 != R2. One thing we can try to do is to find a transformation that in some reasonable sense minimizes the discrepancy between the metric tensors (i.e. we would like to minimize somehow the degree of non-isometricity of the transformation [w = w(u), z = v] ). To that end, we can define a Lagrangian discrepancy function (yes, exactly like in physics) and try to minimize it:
Lagrangian:
L(u, w, dw/du) = ( dw/du - K )^2 + ( sin(w) - K*sin(u) )^2
minimize the action:
S[w] = integral_0^u2 L(u, w(u), dw/du(u))du
or more explicitly, find the function `w(u)` that makes
the sum (integral) of all discrepancies:
S[w] = integral_0^u2 ( ( dw/du(u) - K )^2 + ( sin(w(u)) - K*sin(u) )^2 )du
minimal
In order to find the function w(u) that minimizes the discrepancy integral S[w] above, one needs to derive the Euler-Lagrange equations associated with the Lagrangian L(u, w, dw/du) and solve them. In this case there is a single Euler-Lagrange equation, and it is of second order:
d^2w/du^2 = sin(w)*cos(w) - K*sin(u)*cos(w)
w(0) = 0
dw/du(0) = K
or using alternative notation:
w''(u) = sin(w(u))*cos(w(u)) - K*sin(u)*cos(w(u))
w(0) = 0
w'(0) = K
The reason for the condition w'(0) = K comes from imposing the isometric identity
( dw/du(u) )^2 du^2 + (sin( w(u) ))^2 dv^2 = K^2 du^2 + K^2*(sin(u))^2 dv^2
When u = 0, we already know w(0) = 0 because we want the north pole to be mapped to the north pole and so the latter identity simplifies to
( dw/du(0) )^2 du^2 + (sin(0))^2 dv^2 = K^2 du^2 + K^2*(sin(0))^2 dv^2
( dw/du(0) )^2 du^2 = K^2 du^2
( dw/du(0) )^2 = K^2
which holds when
dw/du(0) = w'(0) = K
Now, to obtain a north-pole-respecting transformation between circular patches on two spheres of radii R1 and R2 respectively, which has as little distortion as possible (with respect to the error Lagrangian), we have to solve the non-linear initial value problem
d^2w/du^2 = sin(w)*cos(w) - K*sin(u)*cos(w)
w(0) = 0
dw/du(0) = K
or, written as a system of two first-order differential equations (Hamiltonian form):
dw/du = p
dp/du = sin(w)*cos(w) - K*sin(u)*cos(w)
w(0) = 0
p(0) = K
I seriously doubt that this is an exactly solvable (integrable) system of ordinary differential equations, but a numerical integration with a reasonably small integration step can give an excellent discrete solution, which combined with a good interpolation scheme, like cubic splines, can give you a very accurate solution.
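For illustration only (my own sketch, not part of the original derivation), such a numerical integration could look like the following MATLAB fragment, where uMax is a hypothetical placeholder for the largest colatitude u occurring in the patch:
% Illustrative sketch: integrate w'' = sin(w)*cos(w) - K*sin(u)*cos(w)
% as the first-order system y = [w; p] with w(0) = 0, p(0) = K.
K = R1/R2;                                    % ratio of the two sphere radii
odefun = @(u, y) [y(2); sin(y(1))*cos(y(1)) - K*sin(u)*cos(y(1))];
[uGrid, Y] = ode45(odefun, [0 uMax], [0; K]); % uMax: largest u in the patch (assumed known)
w_of_u = @(uq) interp1(uGrid, Y(:,1), uq, 'spline'); % smooth (cubic-spline) interpolant for w(u)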
Now, if you do not care too much about exactly equal areas between the patches, but only reasonably close areas, and would actually prefer the smallest possible (in some sense) geometric deformation, you can simply use this model and stop here. However, if you really insist on equal areas between the two patches, you can continue further by splitting your original patch (call it D1) on sphere R1 into a subpatch C1 inside D1 with the same center as D1, such that the difference D1 \ C1 is a narrow frame surrounding C1. Let the image of C1 under the map w = w(u), z = v, defined above, be denoted by C2. Then, to find a transformation (a map) from the patch D1 onto a patch D2 on the sphere R2 which has the same area as D1 and includes C2, you can piece together one map from two submaps:
w = w(u)
z = v
for [u,v] from C1 ---> [w,z] from C2
w = w_ext(u, v)
z = v
for [u,v] from D1 \ C1 ---> [w,z] from D2 \ C2
The question is how to find the extension transformation w_ext(u). For the area of D2 to be equal to the area of D1, you need to choose w_ext(u) so that
integral_(D1 \ C1) sin(w_ext(u)) dw_ext/du(u) du dv = (R1/R2)^2 Area(D1) - Area(C2) (the areas on the right are constants)
Now, pick a suitable function f(u) (you can start with a constant if you want), say a polynomial with adjustable coefficients, so that
integral_(D1 \ C1) f(u) du dv = (R1/R2)^2 Area(D1) - Area(C2)
e.g.
f(u) = L (constant) such that
integral_(D1 \ C1) L du dv = (R1/R2)^2 Area(D1) - Area(C2)
i.e.
L = ( (R1/R2)^2 Area(D1) - Area(C2) ) / integral_(D1 \ C1) du dv
Then solve the differential equation
sin(w) dw/du = f(u)
e.g.
sin(w) dw/du = L
w(u) = arccos(a - L*u)
But in this case it is important to glue this solution with the previous one, so the initial condition of w_ext(u) matters, possibly depending on the direction v, i.e.
w_ext(u, v) = arccos(a(v) - L*u)
So there exists a somewhat more laborious approach, but it has a lot of details and is more complicated.

How to find distance between two connected points through white pixels?

I want to find the two endpoints of a curve.
I used the convex hull to find 8 boundary points, and then for every point the distances to all other points are calculated. The pair of points with the maximum distance is assumed to be the endpoints. It works for most cases when the shape is simple, like below:
I'm using this code to find the two points:
% Calculate 8-boundary points using Convex Hull
% (points consists of all white pixels for the curve)
out=convhull(points);
extremes=points(out,:);
% Finding two farthest points among above boundary points
arr_size = size(extremes);
max_distance = 0;
endpoints = [extremes(1,:); extremes(2,:)];
for i=1:arr_size(1)
    p1 = extremes(i,:);
    for j=i+1:arr_size(1)
        p2 = extremes(j,:);
        dist = sqrt((p2(1)-p1(1))^2 + (p2(2)-p1(2))^2);
        if dist>max_distance
            max_distance = dist;
            endpoints = [p1; p2];
        end
    end
end
disp(endpoints);
In most cases it works, but the issue comes when it is applied to a U-shaped curve.
I want to find the number of white pixels between the two points. I followed this:
Counting white pixels between 2 points in an image, but it only helps for straight lines, not for curves. Any help appreciated!
EDIT 1:
I have updated my code based on the answer by @Rotem and it fixed my problem.
Updated Code:
for i=1:arr_size(1)
    p1 = extremes(i,:);
    distanceMap = calculate_distance_map(arr, p1);
    for j=i+1:arr_size(1)
        p2 = extremes(j,:);
        dist = distanceMap(p2(2), p2(1));
        if dist>max_distance
            max_distance = dist;
            endpoints = [p1; p2];
        end
    end
end
However, it takes significantly more time. Any suggestions on how to reduce the calculation time?
I found a solution inspired by dynamic programming algorithms.
Explanations are in the comments:
function n_points = BlueDotsDist()
    I = imread('BlueDots.png');
    %Fix the image uploaded to the site - each dot will be a pixel.
    I = imresize(imclose(I, ones(3)), 1/6, 'nearest');
    %Convert each color channel to a binary image:
    R = imbinarize(I(:,:,1));G = imbinarize(I(:,:,2));B = imbinarize(I(:,:,3));
    %figure;imshow(cat(3, double(R), double(G), double(B)));impixelinfo
    %Find blue dots:
    [blue_y, blue_x] = find((B == 1) & (R == 0));
    %Compute distance map from first blue point
    P = CalcDistMap(R, blue_y(1), blue_x(1));
    %Compute distance map from second blue point
    Q = CalcDistMap(R, blue_y(2), blue_x(2));
    %Get 3x3 pixels around second blue point.
    A = P(blue_y(2)-1:blue_y(2)+1, blue_x(2)-1:blue_x(2)+1);
    dist_p = min(A(:)); %Minimum value is the shortest distance from first point (but not the number of white points).
    T = max(P, Q); %Each element of T is maximum distance from both points.
    T(T > dist_p) = 10000; %Remove points that are farther away than dist_p.
    %Return number of white points between blue points.
    n_points = sum(T(:) < 10000);

function P = CalcDistMap(R, blue_y, blue_x)
    %Each white pixel in R gets the distance from the blue dot in coordinate (blue_x, blue_y).
    %Initialize array with values of 10000 (high value - marks infinite distance).
    P = zeros(size(R)) + 10000; %P - distance from blue dot
    P(blue_y, blue_x) = 0; %Distance from itself.
    prvPsum = 0;
    while (sum(P(:) < 10000) ~= prvPsum)
        prvPsum = sum(P(:) < 10000); %Sum of "marked" dots.
        for y = 2:size(R, 1)-1
            for x = 2:size(R, 2)-1
                p = P(y, x);
                if (p < 10000) %If P(y,x) is "marked"
                    A = P(y-1:y+1, x-1:x+1); %3x3 neighbors.
                    A = min(A, p+1); %Distance is the minimum of current distance, and distance from p pixel + 1.
                    A(R(y-1:y+1, x-1:x+1) == 0) = 10000; %Ignore black pixels.
                    A(P(y-1:y+1, x-1:x+1) == 0) = 0; %Restore 0 in blue point.
                    P(y-1:y+1, x-1:x+1) = A; %Update distance map.
                end
            end
        end
    end
Illustration of the distance maps (with 10000 replaced by zero): P (distances from the first blue point), Q (distances from the second blue point), max(P, Q), and T.
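As a side note on the timing concern raised in EDIT 1 (my own suggestion, not part of the answer above): if the Image Processing Toolbox is available, the per-pixel geodesic distances can also be obtained with the built-in bwdistgeodesic, which avoids the explicit pixel loops. A hedged sketch, assuming BW is the binary curve image and p1, p2 are [x y] endpoint coordinates as in the question's code:
D1 = bwdistgeodesic(BW, p1(1), p1(2), 'quasi-euclidean'); %distances through white pixels from p1
D2 = bwdistgeodesic(BW, p2(1), p2(2), 'quasi-euclidean'); %distances through white pixels from p2
D = D1 + D2;               %total path length passing through each pixel
D(isnan(D)) = Inf;         %pixels not reachable from both points
path_length = min(D(:));   %geodesic distance between the two points
path_pixels = imregionalmin(D); %pixels lying on the shortest white-pixel path
n_points = nnz(path_pixels);    %number of white pixels between the two points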

How to square the corners of a "rectangle" in a bw image with matlab

I have images of rectangles or deformed rectangles with rounded corners, like this:
or this:
Is there a way to make the corners squared with MATLAB?
And then how can I get the coordinates of those new corners?
Thank you
Explanation
This problem is similar to the following question. My answer will be somewhat similar to my answer there, with the relevant modifications.
We want to find the parallelogram corners which fit the given shape best.
The solution can be found by optimization, as follows:
find an initial guess for the 4 corners of the shape. This can be done by finding the boundary points with the highest curvature and using kmeans clustering to cluster them into 4 groups.
create a parallelogram given these 4 corners, by drawing a line between each pair of corresponding corners.
find the corners which optimize the Jaccard coefficient of the boundary image and the generated parallelogram map.
The optimization is done locally on each corner, in order to save time.
Results
Initial corner guess (corners are marked in blue)
final results:
Code
main script
%reads image and binarizes it
I = rgb2gray(imread('eA4ci.jpg')) > 50;
%finds boundary of largest connected component
boundries = bwboundaries(I,8);
numPixels = cellfun(@length,boundries);
[~,idx] = max(numPixels);
B = boundries{idx};
%finds best 4 corners
[ corners ] = optimizeCorners(B);
%generates line mask given these corners, fills the result
linesMask = drawLines(size(I),corners,corners([2:4,1],:));
rectMask = imfill(linesMask,'holes');
%removes biggest CC from image, adds linesMask instead
CC = bwconncomp(I,8);
numPixels = cellfun(@numel,CC.PixelIdxList);
[~,idx] = max(numPixels);
res = I;
res(CC.PixelIdxList{idx}) = 0;
res = res | rectMask;
optimize corners function:
function [ corners] = optimizeCorners(xy)
%finds the corners which fits the most for this set of points
Y = xy(:,1);
X = xy(:,2);
%initial corners guess
corners = getInitialCornersGuess(xy);
boundriesIm = zeros(max(Y)+20,max(X)+20);
boundriesIm(sub2ind(size(boundriesIm),xy(:,1),xy(:,2))) = 1;
%R represents the search radius
R = 7;
%continue optimizing as long as there is no change in the final result
unchangedIterations = 0;
while unchangedIterations<4
for ii=1:4
%optimize corner ii
currentCorner = corners(ii,:);
bestCorner = currentCorner;
bestRes = calcEnergy(boundriesIm,corners);
cornersToEvaluate = corners;
for yy=currentCorner(1)-R:currentCorner(1)+R
for xx=currentCorner(2)-R:currentCorner(2)+R
cornersToEvaluate(ii,:) = [yy,xx];
res = calcEnergy(boundriesIm,cornersToEvaluate);
if res > bestRes
bestRes = res;
bestCorner = [yy,xx];
end
end
end
if isequal(bestCorner,currentCorner)
unchangedIterations = unchangedIterations + 1;
else
unchangedIterations = 0;
corners(ii,:) = bestCorner;
end
end
end
end
function res = calcEnergy(boundriesIm,corners)
%calculates the score of the corners list, given the boundries image.
%the result is actually the jaccard index of the boundaries map and the
%lines map
linesMask = drawLines(size(boundriesIm),corners,corners([2:4,1],:));
res = sum(sum(linesMask&boundriesIm)) / sum(sum(linesMask|boundriesIm));
end
get initial corners function:
function corners = getInitialCornersGuess(boundryPnts)
%calculates an initial guess for the 4 corners
%finds corners by performing kmeans on largest curvature pixels
[curvatureArr] = calcCurvature(boundryPnts, 5);
highCurv = boundryPnts(curvatureArr>0.3,:);
[~,C] = kmeans([highCurv(:,1),highCurv(:,2)],4);
%sorts the corners from top to bottom - preprocessing stage
C = int16(C);
corners = zeros(size(C));
%top left corners
topLeftInd = find(sum(C,2)==min(sum(C,2)));
corners(1,:) = C(topLeftInd,:);
%bottom right corners
bottomRightInd = find(sum(C,2)==max(sum(C,2)));
corners(3,:) = C(bottomRightInd,:);
%top right and bottom left corners
C([topLeftInd,bottomRightInd],:) = [];
topRightInd = find(C(:,2)==max(C(:,2)));
corners(4,:) = C(topRightInd,:);
bottomLeftInd = find(C(:,2)==min(C(:,2)));
corners(2,:) = C(bottomLeftInd,:);
end
function [curvatureArr] = calcCurvature(xy, halfWinSize)
%calculate the curvature of a list of points (xy) given a window size
%curvature calculation
curvatureArr = zeros(size(xy,1),1);
for t=1:halfWinSize
y = xy(t:halfWinSize:end,1);
x = xy(t:halfWinSize:end,2);
dx = gradient(x);
ddx = gradient(dx);
dy = gradient(y);
ddy = gradient(dy);
num = abs(dx .* ddy - ddx .* dy) + 0.000001;
denom = dx .* dx + dy .* dy + 0.000001;
denom = sqrt(denom);
denom = denom .* denom .* denom;
curvature = num ./ denom;
%normalizing
if(max(curvature) > 0)
curvature = curvature / max(curvature);
end
curvatureArr(t:halfWinSize:end) = curvature;
end
end
draw lines function:
function mask = drawLines(imgSize, P1, P2)
%generates a mask with lines, determine by P1 and P2 points
mask = zeros(imgSize);
P1 = double(P1);
P2 = double(P2);
for ii=1:size(P1,1)
x1 = P1(ii,2); y1 = P1(ii,1);
x2 = P2(ii,2); y2 = P2(ii,1);
% Distance (in pixels) between the two endpoints
nPoints = ceil(sqrt((x2 - x1).^2 + (y2 - y1).^2));
% Determine x and y locations along the line
xvalues = round(linspace(x1, x2, nPoints));
yvalues = round(linspace(y1, y2, nPoints));
% Replace the relevant values within the mask
mask(sub2ind(size(mask), yvalues, xvalues)) = 1;
end

Colouring a Cylinder in MATLAB using the density of points located on it

I have a plot in MATLAB that is currently just a cylinder. I also have a large set of data points from an experiment that lie on this cylinder. I want to colour the cylinder based on the density of these points (i.e. dark red for high density, fading to blue for low density). I am unsure what the best way to do this would be. Currently I draw the points and the mesh for the cylinder separately. The points are not uniformly spaced.
rad = linspace( 0, 1, 100 ) ;
theta = linspace( 0, 2 * pi, 100 ) ;
[r, th] = meshgrid( rad, theta ) ;
x = 190 * cos( th ) ;
y = 115 * sin( th ) ;
z = 1730 * r ;
mesh( x, y, z )
hold on
x = [35.12 -44.44 24.98 -17.05 152.52 109.28 -181.85 -72.26 84.45 -89.96 55.02 70.88 172.08 -144.16 44.24 28.81 -30.14 72.79 -126.75 -37.22]
y = [-113.01 -111.80 -114.00 -114.53 -68.57 -94.07 -33.31 -106.35 -103.01 -101.28 -110.07 -106.69 -48.74 -74.90 -111.83 -113.66 -113.54 -106.22 -85.66 -112.71]
z = [1650.59 767.18 845.06 311.28 1352.75 921.70 1111.35 1572.80 1231.16 89.67 891.30 551.67 547.92 983.57 1746.61 1346.11 810.22 465.33 1564.76 1624.73]
scatter3( x, y, z )
Below is a pictorial example of what I'm trying to achieve:
It's a bit tricky here...
Let's start by estimating the density of the points on the surface of the cylinder, converting their 3D coordinates to 2D (because they are on a 2D surface in 3D space)
xx = [35.12 -44.44 24.98 -17.05 152.52 109.28 -181.85 -72.26 84.45 -89.96 55.02 70.88 172.08 -144.16 44.24 28.81 -30.14 72.79 -126.75 -37.22];
yy = [-113.01 -111.80 -114.00 -114.53 -68.57 -94.07 -33.31 -106.35 -103.01 -101.28 -110.07 -106.69 -48.74 -74.90 -111.83 -113.66 -113.54 -106.22 -85.66 -112.71];
zz = [1650.59 767.18 845.06 311.28 1352.75 921.70 1111.35 1572.80 1231.16 89.67 891.30 551.67 547.92 983.57 1746.61 1346.11 810.22 465.33 1564.76 1624.73];
tt = atan2( yy./115, xx./190 ); %// angle in range [-pi pi]
tt( tt<0 ) = tt( tt<0 ) + 2*pi; %// in range [0..2*pi] for compatibility with definition of `theta`.
%//compute density using hist3
[n, c] = hist3( [tt; zz]' ); %// you can play with the granularity here...
Extrapolate the histogram over the entire plotting surface
d = interpn( c{1}, c{2}, n, th, z, 'linear', 0 );
Now we can use the density to color the cylinder
mesh( x, y, z, d );
Resulting with