annulus with scipy Delaunay

I am trying to draw a 3D solid that represents an annulus. I have used the scipy module and Delaunay to do the calculation.
Unfortunately the plot shows a 3D cylinder rather than an annulus. Does somebody have an idea how to modify the code? Is scipy the right module? Can I use Delaunay with rectangular shapes?
Thanks in advance!
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from scipy.spatial import Delaunay
points = 50
theta = np.linspace(0,2*np.pi,points)
radius_middle = 7.5
radius_inner = 7
radius_outer = 8
x_m_cartesian = radius_middle * np.cos(theta)
y_m_cartesian = radius_middle * np.sin(theta)
z_m_cartesian = np.zeros(points)
M_m = np.c_[x_m_cartesian,y_m_cartesian,z_m_cartesian]
x_i_cartesian = radius_inner * np.cos(theta)
y_i_cartesian = radius_inner * np.sin(theta)
z_i_cartesian = np.zeros(points)
M_i = np.c_[x_i_cartesian,y_i_cartesian,z_i_cartesian]
x1_m_cartesian = radius_middle * np.cos(theta)
y1_m_cartesian = radius_middle * np.sin(theta)
z1_m_cartesian = np.ones(points)
M1_m = np.c_[x1_m_cartesian,y1_m_cartesian,z1_m_cartesian]
x2_i_cartesian = radius_inner * np.cos(theta)
y2_i_cartesian = radius_inner * np.sin(theta)
z2_i_cartesian = np.ones(points)
M2_i = np.c_[x2_i_cartesian,y2_i_cartesian,z2_i_cartesian]
M = np.vstack((M_m,M_i,M1_m,M2_i))
# Delaunay
CH = Delaunay(M).convex_hull
x,y,z = M[:,0],M[:,1],M[:,2]
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111,projection='3d')
#ax.scatter(x[:,0],y[:,1],z[:,2])
ax.plot_trisurf(x,y,z,triangles=CH, shade=False, color='lightblue',lw=1, edgecolor='k')
plt.show()

As noted in the comments the convex hull is a convex shape and therefore cannot represent an annulus. However, the concept of the concave hull (also known as the alpha-shape) is probably appropriate for your needs. Basically, the alpha-shape removes from the Delaunay triangulation the triangles (tetrahedra in your 3D case) that have a circumradius greater than some value (defined by the alpha parameter).
This answer provides an implementation of the alpha-shape surface (i.e., the outer boundary) for 3D points. Using the alpha_shape_3D function from that answer, with an alpha value of 3, resulted in the figure below.
The following two lines in the code (replacing the assignment to CH and the plot function) do the job.
vertices, edges, facets = alpha_shape_3D(pos=M, alpha=3.)
ax.plot_trisurf(x,y,z,triangles=facets, shade=False, color='lightblue',lw=1, edgecolor='k')
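If you'd rather not copy the full alpha_shape_3D function from the linked answer, below is a minimal sketch of the same idea (the function name and the boundary extraction are mine, so treat it as illustrative rather than a drop-in replacement): keep only the tetrahedra whose circumradius is below alpha, then take as the surface every triangle that belongs to exactly one kept tetrahedron. It plugs straight into the script above in place of the convex hull.
import numpy as np
from collections import defaultdict
from scipy.spatial import Delaunay

def alpha_shape_surface(points, alpha):
    """Sketch of a 3D alpha-shape: boundary triangles of the tetrahedra
    whose circumradius is smaller than `alpha`."""
    tetra = Delaunay(points)
    kept = []
    for simplex in tetra.simplices:
        a, b, c, d = points[simplex]
        # Circumcenter p solves 2*(v - a) . p = |v|^2 - |a|^2 for v in {b, c, d}
        A = 2.0 * np.array([b - a, c - a, d - a])
        rhs = np.array([b @ b - a @ a, c @ c - a @ a, d @ d - a @ a])
        try:
            centre = np.linalg.solve(A, rhs)
        except np.linalg.LinAlgError:
            continue  # degenerate (flat) tetrahedron
        if np.linalg.norm(centre - a) < alpha:
            kept.append(simplex)
    # A facet lies on the surface if it belongs to exactly one kept tetrahedron
    counts = defaultdict(int)
    for s in kept:
        for face in ((s[0], s[1], s[2]), (s[0], s[1], s[3]),
                     (s[0], s[2], s[3]), (s[1], s[2], s[3])):
            counts[tuple(sorted(face))] += 1
    return np.array([f for f, n in counts.items() if n == 1])

tri = alpha_shape_surface(M, alpha=3.0)
ax.plot_trisurf(x, y, z, triangles=tri, shade=False, color='lightblue', lw=1, edgecolor='k')
As with the linked implementation, alpha has to be tuned to the point spacing: too small and the surface falls apart, too large and you are back to the convex hull.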

Related

How to create Bezier curves from B-Splines in Sympy?

I need to draw a smooth curve through some points, which I then want to show as an SVG path. So I create a B-Spline with scipy.interpolate, and can access some arrays that I suppose fully define it. Does someone know a reasonably simple way to create Bezier curves from these arrays?
import numpy as np
from scipy import interpolate
x = np.array([-1, 0, 2])
y = np.array([ 0, 2, 0])
x = np.r_[x, x[0]]
y = np.r_[y, y[0]]
tck, u = interpolate.splprep([x, y], s=0, per=True)
cx = tck[1][0]
cy = tck[1][1]
print( 'knots: ', list(tck[0]) )
print( 'coefficients x: ', list(cx) )
print( 'coefficients y: ', list(cy) )
print( 'degree: ', tck[2] )
print( 'parameter: ', list(u) )
The red points are the 3 initial points in x and y. The green points are the 6 coefficients in cx and cy. (Their values repeat after the 3rd, so each green point has two green index numbers.)
Return values tck and u are described in the scipy.interpolate.splprep documentation:
knots: [-1.0, -0.722, -0.372, 0.0, 0.277, 0.627, 1.0, 1.277, 1.627, 2.0]
# 0 1 2 3 4 5
coefficients x: [ 3.719, -2.137, -0.053, 3.719, -2.137, -0.053]
coefficients y: [-0.752, -0.930, 3.336, -0.752, -0.930, 3.336]
degree: 3
parameter: [0.0, 0.277, 0.627, 1.0]
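As a quick sanity check (not part of the original post), evaluating the spline at the returned parameter values reproduces the input points, which shows how tck and u fit together:
xe, ye = interpolate.splev(u, tck)              # evaluate the parametric spline at the fitted parameters
print(np.allclose(xe, x), np.allclose(ye, y))   # True True: the curve passes through the points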
I'm not sure starting with a B-spline makes sense: form a Catmull-Rom curve through the points (with the virtual "before first" and "after last" points overlaid on real points) and then convert that to a Bezier curve using a relatively trivial transform. E.g. given your points p0, p1, and p2, the Catmull-Rom segment {p2,p0,p1,p2} yields p0--p1, {p0,p1,p2,p0} yields p1--p2, and {p1,p2,p0,p1} yields p2--p0. Then you trivially convert those and you have your SVG path.
As demonstrator, hit up https://editor.p5js.org/ and paste in the following code:
var points = [{x:150, y:100 },{x:50, y:300 },{x:300, y:300 }];
// add virtual points:
points = points.concat(points);
function setup() {
  createCanvas(400, 400);
  tension = createSlider(1, 200, 100);
}

function draw() {
  background(220);
  points.forEach(p => ellipse(p.x, p.y, 4));
  for (let n = 0; n < 3; n++) {
    let [c1, c2, c3, c4] = points.slice(n, n + 4);
    let t = 0.06 * tension.value();
    bezier(
      // on-curve start point
      c2.x, c2.y,
      // control point 1
      c2.x + (c3.x - c1.x)/t,
      c2.y + (c3.y - c1.y)/t,
      // control point 2
      c3.x - (c4.x - c2.x)/t,
      c3.y - (c4.y - c2.y)/t,
      // on-curve end point
      c3.x, c3.y
    );
  }
}
Which will look like this:
Converting that to Python code should be an almost effortless exercise: there is barely any code for us to write =)
And, of course, now you're left with creating the SVG path, but that's hardly an issue: you know all the Bezier points now, so just start building your <path d=...> string while you iterate.
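For completeness, here is a rough Python port of the idea above (the function names and the SVG assembly are mine, so treat it as a sketch rather than canonical code): the Catmull-Rom tuples are converted to cubic Bezier segments with the same tension construction as the p5.js demo, and the segments are then stitched into an SVG path string.
import numpy as np

def catmullrom_to_bezier(points, tension=6.0):
    """Closed Catmull-Rom curve through `points`, returned as cubic Bezier
    segments (start, control1, control2, end), mirroring the p5.js demo."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    segments = []
    for i in range(n):
        c1, c2, c3, c4 = pts[(i - 1) % n], pts[i], pts[(i + 1) % n], pts[(i + 2) % n]
        cp1 = c2 + (c3 - c1) / tension
        cp2 = c3 - (c4 - c2) / tension
        segments.append((c2, cp1, cp2, c3))
    return segments

def to_svg_path(segments):
    """Assemble the Bezier segments into an SVG path 'd' string."""
    d = ["M {:.3f},{:.3f}".format(*segments[0][0])]
    for start, cp1, cp2, end in segments:
        d.append("C {:.3f},{:.3f} {:.3f},{:.3f} {:.3f},{:.3f}".format(
            cp1[0], cp1[1], cp2[0], cp2[1], end[0], end[1]))
    return " ".join(d) + " Z"

print(to_svg_path(catmullrom_to_bezier([(-1, 0), (0, 2), (2, 0)])))
Feeding the resulting string into a <path d=...> element gives the smooth closed curve through the three points.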
A B-spline curve is just a collection of Bezier curves joined together. Therefore, it is certainly possible to convert it back to multiple Bezier curves without any loss of shape fidelity. The algorithm involved is called "knot insertion", and the two most famous algorithms are Boehm's algorithm and the Oslo algorithm. You can refer to this link for more details.
Here is an almost direct answer to your question (but for the non-periodic case):
import aggdraw
import numpy as np
import scipy.interpolate as si
from PIL import Image
# from https://stackoverflow.com/a/35007804/2849934
def scipy_bspline(cv, degree=3):
    """ cv:     Array of control vertices
        degree: Curve degree
    """
    count = cv.shape[0]
    degree = np.clip(degree, 1, count-1)
    kv = np.clip(np.arange(count+degree+1)-degree, 0, count-degree)
    max_param = count - degree  # non-periodic case (the original helper also took a `periodic` flag)
    spline = si.BSpline(kv, cv, degree)
    return spline, max_param

# based on https://math.stackexchange.com/a/421572/396192
def bspline_to_bezier(cv):
    cv_len = cv.shape[0]
    assert cv_len >= 4, "Provide at least 4 control vertices"
    spline, max_param = scipy_bspline(cv, degree=3)
    for i in range(1, max_param):
        spline = si.insert(i, spline, 2)
    return spline.c[:3 * max_param + 1]

def draw_bezier(d, bezier):
    path = aggdraw.Path()
    path.moveto(*bezier[0])
    for i in range(1, len(bezier) - 1, 3):
        v1, v2, v = bezier[i:i+3]
        path.curveto(*v1, *v2, *v)
    d.path(path, aggdraw.Pen("black", 2))
cv = np.array([[ 40., 148.], [ 40.,  48.],
               [244.,  24.], [160., 120.],
               [240., 144.], [210., 260.],
               [110., 250.]])
im = Image.fromarray(np.ones((400, 400, 3), dtype=np.uint8) * 255)
bezier = bspline_to_bezier(cv)
d = aggdraw.Draw(im)
draw_bezier(d, bezier)
d.flush()
# show/save im
I didn't look much into the periodic case, but hopefully it's not too difficult.

fill oceans for basemap in 3D

I would like to fill oceans for my basemap in 3D but
ax.add_collection3d(m.drawmapboundary(fill_color='aqua'))
doesn't seem to work, because Basemap's drawmapboundary method returns a matplotlib.collections.PatchCollection object, which add_collection3d does not support. Is there any workaround similar to the one done for land polygons here? Thank you!
Drawing a rectangle (polygon) below the map is one solution. Here is the working code that you may try.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.basemap import Basemap
from matplotlib.collections import PolyCollection
map = Basemap()
fig = plt.figure()
ax = Axes3D(fig)
ax.azim = 270
ax.elev = 50
ax.dist = 8
ax.add_collection3d(map.drawcoastlines(linewidth=0.20))
ax.add_collection3d(map.drawcountries(linewidth=0.15))
polys = []
for polygon in map.landpolygons:
    polys.append(polygon.get_coords())
# This fills polygons with colors
lc = PolyCollection(polys, edgecolor='black', linewidth=0.3,
                    facecolor='#BBAAAA', alpha=1.0, closed=False)
lcs = ax.add_collection3d(lc, zs=0)  # set zero zs
# Create underlying blue color rectangle
# Its `zs` value is -0.003, so it is plotted below the land polygons
bpgon = np.array([[-180., -90],
                  [-180,   90],
                  [ 180,   90],
                  [ 180,  -90]])
polys2 = []
polys2.append(bpgon)
lc2 = PolyCollection(polys2, edgecolor='none', linewidth=0.1,
                     facecolor='#445599', alpha=1.0, closed=False)
lcs2 = ax.add_collection3d(lc2, zs=-0.003) # set negative zs value
plt.show()
The resulting plot:

Basemap plus 3d graph

Hello Stack Overflow folks,
I'm an enthusiastic Python learner. I have been studying Python to visualize a personal project about population density, and I have gone through tutorials about matplotlib and Basemap.
I came across the idea of mapping my 3-dimensional graph on top of a basemap, which would let me use geographical coordinate information.
Can anyone let me know how I could use Basemap as a base plane for a 3-dimensional graph? Please let me know which tutorials or references I could follow to develop this.
Best, and thank you as always, Stack Overflow folks.
The basemap documentation has a small section on 3D plotting. Here's a simple script to get you started:
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection on older matplotlib
plt.close('all')
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
extent = [-127, -65, 25, 51]
# make the map and axis.
m = Basemap(llcrnrlon=extent[0], llcrnrlat=extent[2],
            urcrnrlon=extent[1], urcrnrlat=extent[3],
            projection='cyl', resolution='l', fix_aspect=False, ax=ax)
ax.add_collection3d(m.drawcoastlines(linewidth=0.25))
ax.add_collection3d(m.drawcountries(linewidth=0.25))
ax.add_collection3d(m.drawstates(linewidth=0.25))
ax.view_init(azim = 230, elev = 15)
ax.set_xlabel(u'Longitude (°E)', labelpad=10)
ax.set_ylabel(u'Latitude (°N)', labelpad=10)
ax.set_zlabel(u'Altitude (ft)', labelpad=20)
# values to plot - change as needed. Plots 2 dots, one at elevation 0 and another 100.
# also draws a line between the two.
x, y = m(-85.4808, 32.6099)
ax.plot3D([x, x], [y, y], [0, 100], color = 'green', lw = 0.5)
ax.scatter3D(x, y, 100, s = 5, c = 'k', zorder = 4)
ax.scatter3D(x, y, 0, s = 2, c = 'k', zorder = 4)
ax.set_zlim(0., 400.)
plt.show()
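If the end goal is a population-density plot, one way to continue from this script (a sketch with made-up lon/lat/value data, to be inserted before plt.show()) is to project the coordinates with the Basemap instance and draw vertical bars on top of the map:
# hypothetical data: (longitude, latitude, density) triples
lons = [-122.4, -104.9, -87.6]
lats = [37.8, 39.7, 41.9]
density = [150., 80., 300.]
xs, ys = m(lons, lats)            # project lon/lat into map coordinates
ax.bar3d(xs, ys, [0.] * len(xs),  # bars start at z = 0
         0.5, 0.5, density,       # dx, dy in map units, dz = value
         color='red', alpha=0.8)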

cartopy: map overlay on NOAA APT image

I am working on a project trying to decode NOAA APT images; so far I have reached the stage where I can get images from raw IQ recordings from RTL-SDRs. Here is one of the decoded images:
Decoded NOAA APT image. This image will be used as input for the code (referred to as m3.png from here on).
Now I am working on overlaying map boundaries on the image (note: only on the left half of the above image).
We know the time at which the image was captured and the satellite info: position, direction etc. So, I used the position of the satellite to get the center of the map projection and the direction of the satellite to rotate the image appropriately.
First I tried in Basemap, here is the code
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
import numpy as np
from scipy import ndimage
im = plt.imread('m3.png')
im = im[:,85:995] # crop only the first part of whole image
rot = 198.3913296679117 # degrees, direction of sat movement
center = (50.83550180700588, 16.430852851867176) # lat long
rotated_img = ndimage.rotate(im, rot) # rotate image
w = rotated_img.shape[1]*4000*0.81 # in meters; spec says 4 km per pixel, but I had to scale it by 0.81 to get a better match
h = rotated_img.shape[0]*4000*0.81 # in meters; spec says 4 km per pixel, but I had to scale it by 0.81 to get a better match
m = Basemap(projection='cass',lon_0 = center[1],lat_0 = center[0],width = w,height = h, resolution = "i")
m.drawcoastlines(color='yellow')
m.drawcountries(color='yellow')
im = plt.imshow(rotated_img, cmap='gray', extent=(*plt.xlim(), *plt.ylim()))
plt.show()
I got this image as a result, which seems pretty good
I wanted to move the code to Cartopy as it is easier to install and is actively being developed. I was unable to find a similar way to set the boundaries, i.e. width and height in meters, so I modified the most similar example. I found a function which adds meters to longitudes and latitudes and used that to set the boundaries.
Here is the code in Cartopy,
import matplotlib.pyplot as plt
import numpy as np
import cartopy.crs as ccrs
from scipy import ndimage
import cartopy.feature
im = plt.imread('m3.png')
im = im[:,85:995] # crop only the first part of whole image
rot = 198.3913296679117 # degrees, direction of sat movement
center = (50.83550180700588, 16.430852851867176) # lat long
def add_m(center, dx, dy):
    # source: https://stackoverflow.com/questions/7477003/calculating-new-longitude-latitude-from-old-n-meters
    new_latitude = center[0] + (dy / 6371000.0) * (180 / np.pi)
    new_longitude = center[1] + (dx / 6371000.0) * (180 / np.pi) / np.cos(center[0] * np.pi/180)
    return [new_latitude, new_longitude]
fig = plt.figure()
img = ndimage.rotate(im, rot)
dx = img.shape[0]*4000/2*0.81 # in meters
dy = img.shape[1]*4000/2*0.81 # in meters
leftbot = add_m(center, -1*dx, -1*dy)
righttop = add_m(center, dx, dy)
img_extent = (leftbot[1], righttop[1], leftbot[0], righttop[0])
ax = plt.axes(projection=ccrs.PlateCarree())
ax.imshow(img, origin='upper', cmap='gray', extent=img_extent, transform=ccrs.PlateCarree())
ax.coastlines(resolution='50m', color='yellow', linewidth=1)
ax.add_feature(cartopy.feature.BORDERS, linestyle='-', edgecolor='yellow')
plt.show()
Here is the result from Cartopy; it is not as good as the result from Basemap.
I have the following questions:
1. I found it impossible to rotate the map instead of the image, in both Basemap and Cartopy, so I resorted to rotating the image. Is there a way to rotate the map?
2. How do I improve the output of Cartopy? I think the problem lies in the way I am calculating the extent. Is there a way I can provide meters to set the boundaries of the image?
3. Is there a better way to do what I am trying to do? Are there any projections that are specific to this kind of application?
4. I am adjusting the scale (the part where I decide the number of km per pixel) manually; is there a way to do this based on the satellite's altitude?
Any sort of input would be highly appreciated. Thank you so much for your time!
If you are interested you can find the project here.
As far as I can see, there is no ability for the underlying Proj.4 to define satellite projections with rotated perspectives (happy to be shown otherwise - I'm no expert!) (note: perhaps via ob_tran?). This is the main reason you can't do this in "native" coordinates/orientation with Basemap or Cartopy.
This question really comes down to a georeferencing problem, for which I couldn't find enough information in places like https://www.cder.dz/download/Art7-1_1.pdf.
My solution is entirely a fudge, but it does get you quite close to referencing this image. I doubt the fudge factors are actually universal, which is a bit of an issue if you want to write general-purpose code.
Some of the fudges I had to make (trial-and-error):
adjust the satellite bearing by 3.2 degrees
adjust where the image centre is by moving it along the satellite trajectory by 10km
adjust where the image centre is by moving it perpendicularly along the satellite trajectory by 10km
scale the x and y pixel sizes by 0.62 and 0.65 respectively
use the "near-sided perspective" projection at an unrealistic satellite_height
The result is what appears to be a relatively well registered image, but as I say, seems unlikely to be generally applicable to all images received:
The code to produce this image (fairly involved, but complete):
import urllib.request
urllib.request.urlretrieve('https://i.stack.imgur.com/UBIuA.jpg', 'm3.jpg')
import matplotlib.pyplot as plt
import numpy as np
import cartopy.crs as ccrs
from scipy import ndimage
import cartopy.feature
im = plt.imread('m3.jpg')
im = im[:,85:995] # crop only the first part of whole image
rot = 198.3913296679117 # degrees, direction of sat movement
center = (50.83550180700588, 16.430852851867176) # lat long
import numpy as np
from cartopy.geodesic import Geodesic
import matplotlib.transforms as mtransforms
from matplotlib.axes import Axes
tweaked_rot = rot - 3.2
geod = Geodesic()
# Move the center along the trajectory of the satellite by 10 km
f = np.array(
    geod.direct([center[1], center[0]],
                180 - tweaked_rot,
                10000))
tweaked_center = f[0, 0], f[0, 1]
# Move the center perpendicular to the satellite trajectory by 10 km
f = np.array(
    geod.direct([tweaked_center[0], tweaked_center[1]],
                180 - tweaked_rot + 90,
                10000))
tweaked_center = f[0, 0], f[0, 1]
data_crs = ccrs.NearsidePerspective(
    central_latitude=tweaked_center[1],
    central_longitude=tweaked_center[0],
)
# Compute the center in data_crs coordinates.
center_lon_lat_ortho = data_crs.transform_point(
    tweaked_center[0], tweaked_center[1], ccrs.Geodetic())
# Define the affine rotation in terms of matplotlib transforms.
rotation = mtransforms.Affine2D().rotate_deg_around(
    center_lon_lat_ortho[0], center_lon_lat_ortho[1], tweaked_rot)
# Some fudge factors. Sorry - these are entirely application specific,
# perhaps some reading of https://www.cder.dz/download/Art7-1_1.pdf
# would enlighten these... :(
ff_x, ff_y = 0.62, 0.65
ff_x = ff_y = 0.81
x_extent = im.shape[1]*4000/2 * ff_x
y_extent = im.shape[0]*4000/2 * ff_y
img_extent = [-x_extent, x_extent, -y_extent, y_extent]
fig = plt.figure(figsize=(10, 10))
ax = plt.axes(projection=data_crs)
ax.margins(0.02)
with ax.hold_limits():
    ax.stock_img()
# Using matplotlib's image transforms if the projection is the
# same as the map, otherwise we need to fall back to cartopy's
# (slower) image resampling algorithm
if ax.projection == data_crs:
    transform = rotation + ax.transData
else:
    transform = rotation + data_crs._as_mpl_transform(ax)
# Use the original Axes method rather than cartopy's GeoAxes.imshow.
mimg = Axes.imshow(ax, im, origin='upper', cmap='gray',
                   extent=img_extent, transform=transform)
lower_left = rotation.frozen().transform_point([-x_extent, -y_extent])
lower_right = rotation.frozen().transform_point([x_extent, -y_extent])
upper_left = rotation.frozen().transform_point([-x_extent, y_extent])
upper_right = rotation.frozen().transform_point([x_extent, y_extent])
plt.plot(lower_left[0], lower_left[1],
         upper_left[0], upper_left[1],
         upper_right[0], upper_right[1],
         lower_right[0], lower_right[1],
         marker='x', color='black',
         transform=data_crs)
ax.coastlines(resolution='10m', color='yellow', linewidth=1)
ax.add_feature(cartopy.feature.BORDERS, linestyle='-', edgecolor='yellow')
sat_pos = np.array(geod.direct(tweaked_center, 180 - tweaked_rot,
                               np.linspace(-x_extent*2, x_extent*2, 50)))
with ax.hold_limits():
    plt.plot(sat_pos[:, 0], sat_pos[:, 1], transform=ccrs.Geodetic(),
             label='Satellite path')
plt.plot(tweaked_center[0], tweaked_center[1], 'ob', transform=ccrs.Geodetic())  # mark the tweaked centre
plt.legend()
As you can probably tell, I got a bit carried away with this question. It is a super interesting problem, but not really a cartopy/Basemap one per se.
Hope that helps!

Stretching an ellipse in an image to form a circle

I want to stretch an elliptical object in an image until it forms a circle. My program currently inputs an image with an elliptical object (e.g. a coin at an angle), thresholds and binarizes it, isolates the region of interest using edge detection/bwboundaries(), and performs regionprops() to calculate the major/minor axis lengths.
Essentially, I want to use the 'MajorAxisLength' as the diameter and stretch the object on the minor axis to form a circle. Any suggestions on how I should approach this would be greatly appreciated. I have appended some code for your perusal (unfortunately I don't have enough reputation to upload an image, the binarized image looks like a white ellipse on a black background).
EDIT: I'd also like to apply this technique to the gray-scale version of the image, to examine what the stretch looks like.
code snippet:
rgbImage = imread(fullFileName);
redChannel = rgbImage(:, :, 1);
binaryImage = redChannel < 90;
labeledImage = bwlabel(binaryImage);
area_measurements = regionprops(labeledImage,'Area');
allAreas = [area_measurements.Area];
biggestBlobIndex = find(allAreas == max(allAreas));
keeperBlobsImage = ismember(labeledImage, biggestBlobIndex);
measurements = regionprops(keeperBlobsImage,'Area','MajorAxisLength','MinorAxisLength')
You know the diameter of the circle and you know the center is the location where the major and minor axes intersect. Thus, just compute the radius r from the diameter, and for every pixel in your image, check whether that pixel's Euclidean distance from the circle's center is less than r. If so, color the pixel white. Otherwise, leave it alone.
% center_x, center_y and radius are assumed to come from the regionprops()
% measurements above (centroid and MajorAxisLength/2)
[M,N] = size(redChannel);
new_image = zeros(M,N);
for ii=1:M
    for jj=1:N
        if( sqrt((jj-center_x)^2 + (ii-center_y)^2) <= radius )
            new_image(ii,jj) = 1.0;
        end
    end
end
This can probably be optimized by using the meshgrid function combined with logical indexing to avoid the loops.
I finally managed to figure out the transform required thanks to a lot of help on the matlab forums. I thought I'd post it here, in case anyone else needed it.
stats = regionprops(keeperBlobsImage, 'MajorAxisLength','MinorAxisLength','Centroid','Orientation');
alpha = pi/180 * stats(1).Orientation;                   % ellipse orientation in radians
Q = [cos(alpha), -sin(alpha); sin(alpha), cos(alpha)];   % rotation into the ellipse's frame
x0 = stats(1).Centroid.';                                % ellipse centre
a = stats(1).MajorAxisLength;
b = stats(1).MinorAxisLength;
S = diag([1, a/b]);                                      % stretch the minor axis up to the major axis
C = Q*S*Q';                                              % the same stretch expressed in image coordinates
d = (eye(2) - C)*x0;                                     % translation that keeps the centroid fixed
tform = maketform('affine', [C d; 0 0 1]');
Im2 = imtransform(redChannel, tform);
subplot(2, 3, 5);
imshow(Im2);
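For what it's worth, my reading of why this works (not spelled out in the original post): in the ellipse's own frame (reached via Q') the stretch is simply diag([1, a/b]) along the minor axis, so in image coordinates it becomes C = Q*S*Q', and the offset d = (eye(2) - C)*x0 makes the centroid a fixed point of the transform, since C*x0 + d = x0; the coin is therefore stretched in place rather than shifted.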