Basemap and pyproj give different projection results

I am struggling to understand projections using pyproj.
For now, my question is about understanding the results of projection operations.
I have the following coordinates, which I project to (x, y):
from mpl_toolkits.basemap import Basemap
import pyproj
lon = [3.383789, 5.822754]
lat = [48.920575, 53.72185]
# with Basemap
M = Basemap(projection='merc', ellps='WGS84')
q1, q2 = M(lon, lat)
for a, b in zip(q1, q2):
    print(a, b)

# with pyproj
from pyproj import Proj
p = Proj(proj='merc', ellps='WGS84', errcheck=True)
p1 = Proj(proj='latlong', datum='WGS84', errcheck=True)
print(p(3.383789, 48.920575), p(5.822754, 53.72185))
print(p1(3.383789, 48.920575), p1(5.822754, 53.72185))
20414190.011221122 65799915.8523339
20685694.35308374 66653928.94763097
(376681.6684318804, 6229168.979819128) (648186.0102944968, 7083182.075116195)
(0.0590582592427664, 0.8538251057188251) (0.10162622883366988, 0.9376231627625158)
Why are the results different when I use the same projection parameters?
As a newbie in geospatial data processing, I apologize in advance if this question is trivial.

For the Mercator projection, Basemap places the origin of the grid coordinate system at the lower-left corner of the computable extent. With your code, those values can be recovered with
M(0, 0, inverse=True)
# output: (-180.0, -89.98999999999992)
If you compute the projection coordinates of (lon=0, lat=0) and assign them to (x0, y0), you get the coordinate shifts that make Basemap's projected coordinates differ from the standard values (which put (0, 0) at the center of the map).
lon0, lat0 = 0, 0
x0, y0 = M(lon0, lat0)
# x0=20015077.371242613, y0=59546805.8807
For a test point at (lon=3.383789, lat=48.920575),
lon1 = 3.383789
lat1 = 48.920575
x1, y1 = M(lon1, lat1)
with the coordinate shifts applied, the result is
print(x1-x0, y1-y0)
# output: (376259.9924608879, 6254386.049398325)
When compared with the values from pyproj,
p0 = Proj(proj='merc', ellps='WGS84', errcheck=True)
print(p0(lon1, lat1))
# output: (376681.6684318804, 6229168.979819128)
the two roughly agree, but they are not identical. For small-scale map plotting, you won't see the discrepancies on the maps.
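If you need Basemap output in the same frame as pyproj, here is a minimal sketch of a helper that removes the origin shift (it reuses the M object defined above; the helper name is just for illustration):

def basemap_to_standard_merc(M, lon, lat):
    """Project with Basemap, then subtract its lower-left origin shift,
    making the result comparable with a plain pyproj 'merc' setup."""
    x0, y0 = M(0, 0)    # Basemap's projected origin for (lon=0, lat=0)
    x, y = M(lon, lat)
    return x - x0, y - y0

print(basemap_to_standard_merc(M, 3.383789, 48.920575))
# roughly (376259.99, 6254386.05) vs pyproj's (376681.67, 6229168.98)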

Related

How to calculate spine RoM using two nodes

I want to calculate the range of motion (RoM) of my spine simulation in Abaqus. I created two nodes, whose (y, z) coordinates I use.
In the figure you see the initial model (green) and the deformed model (blue). I want to calculate the RoM by measuring the angle between the two lines (line AB and line CD). I know the initial coordinates (y and z, since the motion is in the zy plane) and the coordinates after the simulation. Because the lines do not share a common starting point, I used the formulas from this website: https://www.redcrab-software.com/en/Calculator/Angles-Of-Lines
I tried to calculate the RoM with the following MATLAB code:
% Flexion
% Line AB coordinates
a_y = -1.2957500E+02;
a_z = -4.6569600E+02;
b_y = -1.32482e+02;
b_z = -4.69579e+02;
% Line CD coordinates (scaled)
c_y = -1.64725e+02;
c_z = -4.72290e+02;
d_y = -1.66319e+02;
d_z = -4.77098e+02;
%% Move lines to one starting point
a = [a_y; a_z]
b = [b_y; b_z]
c = [c_y; c_z]
d = [d_y; d_z]
lineAB = a - b
lineCD = c - d
% Dot product of the two direction vectors
ABCD = (lineAB(1) * lineCD(1)) + (lineAB(2) * lineCD(2));
% Magnitudes (vector lengths)
AB = sqrt((lineAB(1)^2) + (lineAB(2)^2))
CD = sqrt((lineCD(1)^2) + (lineCD(2)^2))
Cosalpha = ABCD / (AB * CD)
Alpha_rad = acos(Cosalpha)
% Alpha_deg = Alpha_rad/pi*180
Alpha_deg = rad2deg(Alpha_rad)
But according to my code, the angle for extension is 87 degrees and for flexion 18 degrees.
When measuring by hand (see the figure), the results are very different. What am I not seeing here? Thank you.

Find the length of individual dashes as well as the gaps between them

I have the following PNG:
[image: a PNG of dashed lines; the file used in the answer below is https://i.stack.imgur.com/eNV5m.png]
In the above image I want to find the length of every dash and also the length between every two dashes, i.e. the gaps.
In parallel, I have some code which gives me the length of a single line.
Another idea I have for finding the length of every dash: if I find the medial axis of each dash (a line which passes through the centre of the object), could traversing along the direction of that medial axis somehow give me the length?
Medial axis output: the green line inside the black is the medial axis I get. I used OpenCV to find the medial axis, but I am not sure whether that would work on more complex images, such as the circle above. Also, how do I find the length of the gaps?
Any help would be appreciated!
This is totally brute-force code.
It is not optimized at all and is super slow; I hope somebody vectorizes it, totally or in part, in another answer. I invite you to patch it.
This code detects each dash as a "blob", and that is the slowest part of the code. There are many libraries for blob detection, which should be much faster; I just don't know how to use them.
The code scans the image line by line and, when it finds a black pixel, adds it to a blob; this initially results in some dashes being split across multiple blobs.
This image shows the centers of the blobs:
The second stage, which is the slowest, compares all blobs in order to join adjacent ones and discard the duplicates.
After the duplicated blobs are deleted, the distance between blob centers is calculated by finding the nearest neighbor of each blob and taking the median distance, to avoid the influence of outliers.
Here the distance between centers is drawn with yellow arrows:
The blobs are stored as instances of the class blob, which keeps the pixels of each blob and computes the center and the principal axes, so the orientation of each dash can be determined.
The class blob has a method named extremeAxisOfInertia(), which returns two vectors: the first points in the direction of the longest side of the dash, the second along the shorter side:
principalAxis, secondaryAxis = blobInstance.extremeAxisOfInertia()
Also, the maximum dimension of each blob (measured between centers of pixels) is given by the method dimensions():
blobLength, blobWidth = blobInstance.dimensions()
It is calculated as the distance between the most distant pixels along the direction of the principal axis of inertia.
Here the dots show the center of each pixel of one blob:
The separation between dashes is calculated as the median distance between blob centers minus the median length of the blobs.
As the title of the plot below says, the image posted by the OP yields a median separation of 9.17 pixels and a median dash length of 7.60 pixels. That calculation only uses the closest neighbor, so there is a lot of room for improvement.
print("Imports...")
import cmath
import numpy as np
import cv2
import matplotlib.pyplot as plt
URL = "https://i.stack.imgur.com/eNV5m.png"
def downloadImage(URL):
'''Downloads the image on the URL, and convers to cv2 BGR format'''
from io import BytesIO
from PIL import Image as PIL_Image
import requests
response = requests.get(URL)
image = PIL_Image.open(BytesIO(response.content))
return cv2.cvtColor(np.array(image), cv2.COLOR_BGR2RGB)
# index of row and column on each pixel=[pixel[_rw],pixel[_cl]]=[row,column]
_rw, _cl = 0, 1
class blob(object):
    # list of [row, col] pixel coordinates
    pixels: list
    blobNumber: int

    def __init__(self, blobNumber, row=None, col=None):
        if row is not None and col is not None:
            self.pixels = [[row, col]]
        self.blobNumber = blobNumber

    def addPixel(self, row, col):
        if [row, col] not in self.pixels:
            self.pixels.append([row, col])
        else:
            raise ValueError("Pixel already in blob")

    def centerRC(self):
        '''returns the row and column of the barycenter of the blob,
        a pair of floats which may not match a specific pixel
        returns rowY, columnX
        '''
        center = np.mean(self.pixels, axis=0)
        return center[_rw], center[_cl]

    def Ixx(self):
        '''central moment of the blob with respect to the x axis'''
        Cy, Cx = self.centerRC()
        return sum((p[_rw] - Cy)**2 for p in self.pixels)

    def Iyy(self):
        '''central moment of the blob with respect to the y axis'''
        Cy, Cx = self.centerRC()
        return sum((p[_cl] - Cx)**2 for p in self.pixels)

    def Ixy(self):
        '''central product moment of the blob with respect to the x and y axes'''
        Cy, Cx = self.centerRC()
        return sum((p[_rw] - Cy) * (p[_cl] - Cx) for p in self.pixels)

    def extremeAxisOfInertia(self):
        '''Calculates the principal axes of inertia of the blob.
        Returns unit vectors along the maximum [principal] axis of inertia
        and the minimum [principal] axis of inertia, plus the maximum and
        minimum moments along those axes:
        returns maxAxis, minAxis, maxI, minI

                   ^ minAxis
                   | max axis of inertia
            ┌──────|──────┐
         ---│------|------│--- minor axis of inertia   ──> maxAxis
            └──────|──────┘
                   |
        '''
        Ixx = self.Ixx()
        Iyy = self.Iyy()
        Ixy = self.Ixy()
        I = np.array([[Ixx, Ixy], [Ixy, Iyy]])
        eigenvalues, eigenvectors = np.linalg.eig(I)
        eigMatrix = np.array(eigenvectors)
        Imax = np.matmul(eigMatrix, np.matmul(I, eigMatrix.T))
        if Imax[0, 0] >= Imax[1, 1]:
            maxAxis = eigenvectors[0]
            minAxis = eigenvectors[1]
        else:
            maxAxis = eigenvectors[1]
            minAxis = eigenvectors[0]
        return maxAxis, minAxis, max(Imax[0, 0], Imax[1, 1]), min(Imax[0, 0], Imax[1, 1])

    def dimensions(self):
        '''returns the dimensions of the blob, measured between pixel centers,
        assuming that the blob is roughly a rectangle with short side b and long side h
        returns h, b

        ┌─────────────h─────────────┐ (measured between pixel centers)
        ┌─────────────────────────────┐
        |                             |┐
        │  pixel centers not shown    |│ b (measured between pixel centers)
        |                             |┘
        └─────────────────────────────┘
        '''
        maxAxis, minAxis, maxI, minI = self.extremeAxisOfInertia()
        # rotate all pixel coordinates so the max axis becomes horizontal,
        # then take the spread of the x and y coordinates
        rotor = complex(maxAxis[_cl], maxAxis[_rw])
        pixelsHorizontalized = np.array(
            [complex(p[_cl], p[_rw]) / rotor for p in self.pixels])
        x, y = pixelsHorizontalized.real, pixelsHorizontalized.imag
        h = max(x) - min(x)
        b = max(y) - min(y)
        return h, b

    def plotPixels(self):
        plt.scatter([p[_cl] for p in self.pixels],
                    [p[_rw] for p in self.pixels],
                    label=f" blob {self.blobNumber}")
        centerR, centerC = self.centerRC()
        maxAxis, minAxis, _, __ = self.extremeAxisOfInertia()
        length, width = self.dimensions()
        plt.plot([centerC, centerC + maxAxis[_cl] * length / 2],
                 [centerR, centerR + maxAxis[_rw] * length / 2],
                 'r', label="Max axis")
        plt.plot([centerC, centerC + minAxis[_cl] * width / 2],
                 [centerR, centerR + minAxis[_rw] * width / 2],
                 'b', label="Min axis")
        ax = plt.gca()
        ax.invert_yaxis()  # on images the y axis points down
        ax.legend()
        plt.title(
            f"Blob {self.blobNumber}; max dimension = {length:.2f}, min dimension = {width:.2f}")
        plt.show()
print("Fetching image from URL...")
image = downloadImage(URL)
cv2.imshow('Original image', image)
cv2.waitKey(10)
img_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
threshold = 128
ret, thresh = cv2.threshold(
img_gray, type=cv2.THRESH_BINARY_INV, thresh=threshold, maxval=255)
print("Classification of points in blobs...")
# all pixels classified as 0 blob
# add extra border rows and columns to avoid problems with the blob classifier
padThresh = np.pad(thresh, (1, 1), constant_values=0)
classif = padThresh*0
blobs = {} # key=blobCount, value=blob object
blobCount = 0
neighborPixelsToCheck = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
for row in range(1, padThresh.shape[0]-1): # avoided first and last row added
# avoided first and last column added
for col in range(1, padThresh.shape[1]-1):
# if pixel is black...
if padThresh[row, col] > threshold:
# if up and left pixels are also black...
if any(padThresh[row+y, col+x] > threshold for y, x in neighborPixelsToCheck):
numBlob = max(classif[row+y, col+x]
for y, x in neighborPixelsToCheck)
classif[row, col] = numBlob
blobs[numBlob].addPixel(row, col)
else:
blobCount += 1
classif[row, col] = blobCount
blobs[blobCount] = blob(blobCount, row=row, col=col)
plt.imshow(classif/max(classif.flatten()))
# Collect centers of all blobs
centers = [value.centerRC() for key, value in blobs.items()]
plt.scatter([c[_cl] for c in centers], [c[_rw]
for c in centers], label="blobs \ncenters", color="red", marker="+")
#legend on upper right corner
plt.legend(loc="center")
plt.title("Blobs and Centers of blobs detected")
plt.show()
print("Unifying blobs...")
# unify adjacent blobs
keys = list(blobs.keys())
for idx, this in enumerate(keys[:-1]):
if this in blobs.keys(): # It may had been deleted by a previous loop
print(f" Comparing blob {this} of {len(keys)}...")
thisPixels = blobs[this].pixels[::-1] # reverse to speed up comparison
for other in keys[1+idx:]:
if other in blobs.keys(): # It may had been deleted by a previous loop
otherPixels = blobs[other].pixels
# if squared euclidean distance between centers of blobs < 2
if any((p[0]-q[0])**2+(p[1]-q[1])**2 < 2.1 for p in thisPixels for q in otherPixels):
# merge blobs
blobs[this].pixels.extend(otherPixels)
# reverse to speed up comparison
thisPixels.extend(otherPixels[::-1])
# remove other blob
del blobs[other]
plt.imshow(classif/max(classif.flatten()))
# Calculating median distance between blobs
# Collect centers of all blobs
centers = np.asarray([value.centerRC() for key, value in blobs.items()])
plt.scatter([c[_cl] for c in centers], [c[_rw]
for c in centers], label="centers", color="red", marker="+")
def closest_node(node, nodes):
# nodes = np.asarray(nodes)
deltas = nodes - node
dist_2 = np.einsum('ij,ij->i', deltas, deltas)
return np.argmin(dist_2)
nearest = []
for idx, c in enumerate(centers):
Cent_withoutC = np.delete(centers, idx, axis=0)
nearest.append(Cent_withoutC[closest_node(c, Cent_withoutC)])
# plt.scatter([c[_cl] for c in nearest],[c[_rw] for c in nearest],label="nearest",color="red",marker="+")
distances = [((n-c)[0]**2+(n-c)[1]**2)**0.5 for c, n in zip(centers, nearest)]
for c, n in zip(centers, nearest):
x, y, dx, dy = c[_cl], c[_rw], (n-c)[_cl], (n-c)[_rw]
plt.arrow(x, y, dx, dy, length_includes_head=True, head_width=1 /
4*(abs(dx*dy))**.5, color="yellow", edgecolor="black")
plt.title("Nearest neighbor of each blob")
plt.show()
plt.scatter(x=range(len(distances)), y=np.sort(distances),
            label="Distances between blobs", color="red")
# the median is better than the average here,
# because it is less sensitive to outliers
medianDistance = np.median(distances)
plt.plot([1, len(distances)], [medianDistance, medianDistance],
         label="Median distance", color="red")
title = f"Median distance between blob centers = {medianDistance:.2f} pixels"

# median value of the largest dimension of the blobs
blobsLengths = []
for key, _blob in blobs.items():
    length, width = _blob.dimensions()
    blobsLengths.append(length)
medianBlobLength = np.median(blobsLengths)
plt.scatter(x=range(len(blobsLengths)), y=np.sort(blobsLengths),
            label="blobs lengths", color="blue")
plt.plot([1, len(blobsLengths)], [medianBlobLength, medianBlobLength],
         label="Median blob length", color="blue")
# add the median blob length and separation to the title
title = f"{title}\nMedian blob length = {medianBlobLength:.2f} pixels"
medianBlobSeparation = medianDistance - medianBlobLength
title = f"{title}\nMedian blob separation = {medianBlobSeparation:.2f} pixels"
plt.title(title)
plt.legend()
plt.show()
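As noted above, a dedicated blob-detection routine would be much faster than the per-pixel scan and the pairwise merging. A minimal sketch using OpenCV's connected-components API, assuming the same binarized thresh image computed above (dashes as white pixels on a black background):

num, labels, stats, centroids = cv2.connectedComponentsWithStats(thresh, connectivity=8)
# label 0 is the background; the stats columns are left, top, width, height, area
for i in range(1, num):
    left, top, w, h, area = stats[i]
    print(f"blob {i}: centroid={centroids[i]}, bbox={w}x{h}, area={area}")

Each labeled component corresponds to one dash, so this single call could replace both the scanning and the blob-unification stages.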

How can I add the slope at a specific point of a polynomial line in plotly

Let's say I have a polynomial regression line in plotly, produced by something along the lines of this code:
import plotly.express as px

fig = px.scatter(
    x=final_df.index,
    y=final_df.nr_deaths,
    trendline="lowess",  # ols
    trendline_color_override="red",
    trendline_options=dict(frac=0.1),
    opacity=.5,
    title='Deaths per year'
)
fig.show()
How would I calculate the slope (tangent) at a specific point of the polynomial regression line?
Currently this cannot be done within plotly alone, but you can achieve it by using other libraries for the calculation and adding the results to the chart.
The difficulty in this question lies in (1) calculating the slope of the polynomial at a certain point and (2) calculating the x and y values needed to plot the tangent as a line.
For calculating the slope at a certain point you can use numpy. Afterwards you can calculate the corresponding x and y values in plain Python and plot them with plotly.
import numpy as np

poly_degree = 3
y = df.col.values
x = np.arange(0, len(y))
x = x.reshape(-1, 1)
fitted_params = np.polyfit(np.arange(0, len(y)), y, poly_degree)
polynomials = np.poly1d(fitted_params)           # the fitted polynomial
derivatives = np.polyder(polynomials)            # its derivative
y_value_at_point = polynomials(x).flatten()      # fitted y at every x
slope_at_point = np.polyval(derivatives, np.arange(0, len(y)))  # slope at every x
To calculate the tangent's x and y values at a point and plot it in plotly, you can do something like this:
import plotly.graph_objects as go

def draw_slope_line_at_point(fig, ind, x, y, slope_at_point, verbose=False):
    """Plot the tangent line at index ind, given x values, y values and slopes"""
    y_low = (x[0] - x[ind]) * slope_at_point[ind] + y[ind]
    y_high = (x[-1] - x[ind]) * slope_at_point[ind] + y[ind]
    x_vals = [x[0], x[-1]]
    y_vals = [y_low, y_high]
    if verbose:
        print((x[0] - x[ind]))
        print(x[ind], x_vals, y_vals, y[ind], slope_at_point[ind])
    fig.add_trace(
        go.Scatter(
            x=x_vals,
            y=y_vals,
            name="Tangent at point",
            line=dict(color='orange', width=2, dash='dash'),
        )
    )
    return x_vals, y_vals
Calling it and adding an annotation would look like this:

for pt in [31]:
    draw_slope_line_at_point(
        fig,
        x=np.arange(0, len(y)),
        y=y_value_at_point,
        slope_at_point=slope_at_point,
        ind=pt)
    fig.add_annotation(x=pt, y=y_value_at_point[pt],
                       text=f'''Slope: {slope_at_point[pt]:.2f}\t {df.date.strftime('%Y-%m-%d')[pt]}''',
                       showarrow=True,
                       arrowhead=1)
The result then looks like this:
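If you want the tangent of the plotted LOWESS trendline itself, rather than of a separately fitted polynomial, a hedged sketch: px adds the trendline as its own trace, so its smoothed x/y values can be pulled out of the figure and differentiated numerically (that the trendline is the second trace, fig.data[1], is an assumption):

# pull the smoothed trendline out of the figure and differentiate it numerically
trend = fig.data[1]             # assumption: trace 0 = scatter, trace 1 = trendline
tx = np.asarray(trend.x, dtype=float)
ty = np.asarray(trend.y, dtype=float)
slopes = np.gradient(ty, tx)    # numerical slope of the smoothed curve at each tx

If the x axis holds dates, convert them to numbers before taking the gradient.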

Applying scipy.stats.gaussian_kde to 3D point cloud

I have a set of about 33K (x, y, z) points in a CSV file and would like to convert them to a grid of density values using scipy.stats.gaussian_kde. I have not been able to find a way to convert this point cloud into an appropriate input format for gaussian_kde (and then to take its output and turn it into a density grid). Can anyone provide sample code?
Here's an example with some comments which may be of use. gaussian_kde wants the data and the evaluation points to be row-stacked, i.e. of shape (# ndim, # num values), as per the docs. In your case you would row_stack([x, y, z]) such that the shape is (3, 33000).
from scipy.stats import gaussian_kde
import numpy as np
import matplotlib.pyplot as plt
# simulate some data
n = 33000
x = np.random.randn(n)
y = np.random.randn(n) * 2
# data must be stacked as (# ndim, # n values) as per docs.
data = np.row_stack((x, y))
# perform KDE
kernel = gaussian_kde(data)
# create grid over which to evaluate KDE
s = np.linspace(-8, 8, 128)
grid = np.meshgrid(s, s)
# again KDE needs points to be row_stacked
grid_points = np.row_stack([g.ravel() for g in grid])
# evaluate KDE and reshape result correctly
Z = kernel(grid_points)
Z = Z.reshape(grid[0].shape)
# plot KDE as image and overlay some data points
fig, ax = plt.subplots()
ax.matshow(Z, extent=(s.min(), s.max(), s.min(), s.max()))
ax.plot(x[::10], y[::10], 'w.', ms=1, alpha=0.3)
ax.set_xlim(s.min(), s.max())
ax.set_ylim(s.min(), s.max())
plt.show()
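For the 3D case in the question, the same pattern applies. A minimal sketch, assuming the CSV has columns x, y, z (the file name points.csv and the column names are placeholders):

import numpy as np
import pandas as pd
from scipy.stats import gaussian_kde

# load the ~33K points; file name and column names are assumptions
pts = pd.read_csv("points.csv")
data = np.vstack([pts["x"], pts["y"], pts["z"]])  # shape (3, N), as the docs require

kernel = gaussian_kde(data)

# evaluate on a coarse 3D grid; evaluation cost grows with N * grid size,
# so keep the grid modest
s = np.linspace(data.min(), data.max(), 32)
gx, gy, gz = np.meshgrid(s, s, s)
grid_points = np.vstack([gx.ravel(), gy.ravel(), gz.ravel()])
density = kernel(grid_points).reshape(gx.shape)   # 32 x 32 x 32 density grid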

How to use interpn?

I am trying to use interpn (from SciPy, in Python) to replicate results from MATLAB's interp3. However, I am struggling to structure the arguments. I tried the following line:
f = interpn(blur_maps, fx, fy, pyr_level)
where blur_maps is a 600 x 800 x 7 array representing a grayscale image at seven levels of blur, fx and fy are 2D arrays of indices into the maps, and pyr_level is a 2D array that contains values from 1 to 7 selecting the blur map to be interpolated.
My question is: since I arranged the arguments incorrectly, how can I arrange them in a way that works? I looked for examples but didn't see anything similar. Here is an example of the data I am trying to interpolate:
import numpy as np
import cv2, math
from scipy.interpolate import interpn

levels = 7
img_path = '/Users/alimahdi/Desktop/i4.jpg'
img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2GRAY)
row, col = img.shape
x_range = np.arange(0, col)
y_range = np.arange(0, row)
fx, fy = np.meshgrid(x_range, y_range)
e = np.exp(np.sqrt(fx ** 2 + fy ** 2))
pyr_level = 7 * (e - np.min(e)) / (np.max(e) - np.min(e))
blur_maps = np.zeros((row, col, levels))
blur_maps[:, :, 0] = img
for i in range(levels - 1):
    img = cv2.pyrDown(img)
    r, c = img.shape
    tmp = img
    for j in range(int(math.log(row / r, 2))):
        tmp = cv2.pyrUp(tmp)
    blur_maps[:, :, i + 1] = tmp

pixelGrid = [np.arange(x) for x in blur_maps.shape]
interpPoints = np.array([fx.flatten(), fy.flatten(), pyr_level.flatten()])
interpValues = interpn(pixelGrid, blur_maps, interpPoints.T)
finalValues = np.reshape(interpValues, fx.shape)
I am now getting the following error: ValueError: One of the requested xi is out of bounds in dimension 0. I do know that the problem is in interpPoints, but I am not sure how to fix it. Any suggestions?
The documentation for scipy.interpolate.interpn states that the first argument is the grid the data is defined over (here just the integer pixel indices), the second argument is the data (blur_maps), and the third argument is the set of interpolation points with shape (npoints, ndims). So you would have to do something like:
import numpy as np
import scipy.interpolate

# create the grid of pixel indices, as per the docs
pixelGrid = [np.arange(x) for x in blur_maps.shape]
interpPoints = np.array([fx.flatten(), fy.flatten(), pyr_level.flatten()])
# interpolate
interpValues = scipy.interpolate.interpn(pixelGrid, blur_maps, interpPoints.T)
# reshape the output array back into the original 2D format
finalValues = np.reshape(interpValues, fx.shape)
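The out-of-bounds error then most likely comes from the points themselves rather than the call signature: with blur_maps shaped (row, col, levels), the first grid axis indexes rows, so fy has to come first in the points array (fx runs up to col - 1 = 799, but dimension 0 only has 600 entries), and pyr_level spans [0, 7] while the last valid grid index is levels - 1 = 6. A hedged sketch of the adjustment:

# assumed fix: order the point coordinates as (row, col, level) and keep the
# level coordinate inside the grid bounds [0, levels - 1]
pyr_level_clipped = np.clip(pyr_level, 0, levels - 1)
interpPoints = np.array([fy.flatten(), fx.flatten(), pyr_level_clipped.flatten()])
interpValues = scipy.interpolate.interpn(pixelGrid, blur_maps, interpPoints.T)
finalValues = interpValues.reshape(fy.shape)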