How to set x_0, y_0 (false easting / false northing) of an LCC projection in basemap

I'm using matplotlib-basemap with the LCC projection.
After checking .proj4string, I found that x_0 and y_0 (false easting / false northing) are set to non-zero default values in the map's projection parameters.
I want to set x_0 and y_0 to 0, but I can't find any keyword argument for the false easting and false northing.
How can I set x_0 and y_0 to 0? The Basemap API documentation (https://matplotlib.org/basemap/api/basemap_api.html) lists no argument for x_0 or y_0.
>>> from mpl_toolkits.basemap import Basemap
>>> map_lcc=Basemap(width=8000000,height=8000000, rsphere=(6378206.4,6356583.8),
... resolution='l',area_thresh=1000.0,projection='lcc',
... lat_1=33,lat_2=45,lat_0=23,lon_0=-96)
>>> map_lcc.proj4string
'+a=6378206.4 +b=6356583.8 +y_0=4000000.0 +lon_0=-96.0 +proj=lcc +x_0=4000000.0 +units=m +lat_2=45.0 +lat_1=33.0 +lat_0=23.0 '
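As far as I can tell, Basemap derives x_0 and y_0 internally from the map size (here 4000000.0, i.e. half of the 8000000 m width and height), so there is no constructor keyword to override them. A minimal workaround sketch, assuming you only need unshifted projection coordinates rather than Basemap's drawing features: build a pyproj Proj with the same Lambert parameters but x_0 and y_0 forced to 0, or subtract the coordinates of the projection origin (lon_0, lat_0), which by definition sit at (x_0, y_0).

from pyproj import Proj

# Same LCC parameters as the Basemap call above, but with the false
# easting/northing explicitly zeroed (assumption: only the raw projection
# math is needed, not Basemap's map-drawing machinery).
lcc_zero = Proj(proj='lcc', lat_1=33, lat_2=45, lat_0=23, lon_0=-96,
                a=6378206.4, b=6356583.8, x_0=0, y_0=0, units='m')

x, y = lcc_zero(-96, 23)   # the projection origin now maps to roughly (0.0, 0.0)

# Alternatively, keep the Basemap instance and subtract the coordinates of
# its projection origin (lon_0, lat_0), which sit exactly at (x_0, y_0):
# x0, y0 = map_lcc(-96, 23)
# x_shifted, y_shifted = map_lcc(some_lon, some_lat)   # some_lon/some_lat: any test point
# x_unshifted, y_unshifted = x_shifted - x0, y_shifted - y0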

Related

Reference error in MATLAB (Undefined function or variable)

The call
[Rv,Rh]=Ref(theta,Er);
fails with "Unrecognized function or variable 'Ref'". Which function or definition do I need for Ref so that this call works correctly?
freq=2.0;                     % frequency in GHz
lamda=0.3/freq;               % wavelength in m (c = 0.3 m*GHz)
k=2*pi/lamda;                 % wave number
d=0:0.1:100;                  % horizontal distance between antennas (m)
Pt=0.0;                       % transmit power (dBm)
Gt=0.0;Gr=0.0;                % transmit / receive antenna gains (dB)
zt=3.0;zr=1.5;                % transmit / receive antenna heights (m)
Er=5.0-1j*60*0.0005*lamda;    % complex relative permittivity of the ground
E0=1;                         % reference field amplitude
d0=sqrt(d.^2+(zt-zr)^2);      % direct-path length
d1=sqrt(d.^2+(zt+zr)^2);      % ground-reflected-path length
theta=acos((zt+zr)./d1);      % incidence angle measured from the surface normal
[Rv,Rh]=Ref(theta,Er);        % <-- the undefined call: reflection coefficients for V- and H-polarisation
Ed=E0*(lamda/4/pi./d0);                           % direct (line-of-sight) field
Egv=E0*(lamda/4/pi./d1).*Rv.*exp(-1j*k*(d1-d0));  % ground-reflected field (V-pol.)
Erec_v=Ed+Egv;                % total received field
L_v=-20*log10(abs(Erec_v));   % path loss in dB
Prec_v=Pt+Gt+Gr-L_v;          % received power in dBm
plot(d,Prec_v,'-b')
grid
axis([0 100 -90 -30])
legend('V-pol.')
xlabel('Distance d (m)')
ylabel('Received Power (dBm)')
title('Distance Characteristics of 2 path model')

Optimizing a piecewise linear regression

I have written a function that, given parameters, can apply a piecewise linear fit, with arbitrarily many piecewise sections, to some data.
I am trying to fit the function to my data using scipy.optimize.curve_fit, but I am getting an "OptimizeWarning: Covariance of the parameters could not be estimated" warning. I believe this may be because of the nested lambda functions I am using to define the piecewise sections.
Is there an easy way to tweak my code to get round this, or a different scipy optimisation function that might be more suitable?
import numpy as np
import scipy.optimize

# The piecewise function
def piecewise_linear(x, *params):
    N = len(params) / 2
    if N.is_integer():
        N = int(N)
    else:
        raise ValueError()
    c = params[0]
    xbounds = params[1:N]
    grads = params[N:]
    # First we define our conditions, which are true if x is a member of a given bin.
    conditions = []
    # first and last bins are a special case:
    cond0 = lambda x: x < xbounds[0]
    condl = lambda x: x >= xbounds[-1]
    conditions.append(cond0(x))
    for i in range(len(xbounds) - 1):
        cond = lambda x: (x >= xbounds[i]) & (x < xbounds[i + 1])
        conditions.append(cond(x))
    conditions.append(condl(x))
    # Next we define our linear regression function for each bin. The offset
    # for each bin depends on where the previous bin ends, so we define
    # the regression functions recursively:
    functions = []
    func0 = lambda x: grads[0] * x + c
    functions.append(func0)
    for i in range(len(grads) - 1):
        func = (lambda j: lambda x: grads[j + 1] * (x - xbounds[j])
                + functions[j](xbounds[j]))(i)
        functions.append(func)
    return np.piecewise(x, conditions, functions)

# Some data
x = np.arange(100)
y = np.array([*np.arange(0, 19, 1), *np.arange(20, 59, 2),
              *np.arange(60, 20, -1), *np.arange(21, 42, 1)]) + np.random.randn(100)

# A first guess of parameters
cguess = 0
boundguess = [20, 30, 50]
gradguess = [1, 1, 1, 1]
p0 = [cguess, *boundguess, *gradguess]
fit = scipy.optimize.curve_fit(piecewise_linear, x, y, p0=p0)
Here is example code that fits two straight lines to a curved data set with a breakpoint, where the line parameters and breakpoint are all fitted. This example uses scipy's Differential Evolution genetic algorithm to determine initial parameter estimates for the regression. That module uses the Latin Hypercube algorithm to ensure a thorough search of parameter space, which requires bounds within which to search. In this example those search bounds are derived from the data itself. Note that it is much easier to find ranges for the initial parameter estimates than to give specific values.
import numpy, scipy, matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from scipy.optimize import differential_evolution
import warnings
xData = numpy.array([19.1647, 18.0189, 16.9550, 15.7683, 14.7044, 13.6269, 12.6040, 11.4309, 10.2987, 9.23465, 8.18440, 7.89789, 7.62498, 7.36571, 7.01106, 6.71094, 6.46548, 6.27436, 6.16543, 6.05569, 5.91904, 5.78247, 5.53661, 4.85425, 4.29468, 3.74888, 3.16206, 2.58882, 1.93371, 1.52426, 1.14211, 0.719035, 0.377708, 0.0226971, -0.223181, -0.537231, -0.878491, -1.27484, -1.45266, -1.57583, -1.61717])
yData = numpy.array([0.644557, 0.641059, 0.637555, 0.634059, 0.634135, 0.631825, 0.631899, 0.627209, 0.622516, 0.617818, 0.616103, 0.613736, 0.610175, 0.606613, 0.605445, 0.603676, 0.604887, 0.600127, 0.604909, 0.588207, 0.581056, 0.576292, 0.566761, 0.555472, 0.545367, 0.538842, 0.529336, 0.518635, 0.506747, 0.499018, 0.491885, 0.484754, 0.475230, 0.464514, 0.454387, 0.444861, 0.437128, 0.415076, 0.401363, 0.390034, 0.378698])
def func(xArray, breakpoint, slopeA, offsetA, slopeB, offsetB):
    returnArray = []
    for x in xArray:
        if x < breakpoint:
            returnArray.append(slopeA * x + offsetA)
        else:
            returnArray.append(slopeB * x + offsetB)
    return returnArray

# function for genetic algorithm to minimize (sum of squared error)
def sumOfSquaredError(parameterTuple):
    warnings.filterwarnings("ignore")  # do not print warnings by genetic algorithm
    val = func(xData, *parameterTuple)
    return numpy.sum((yData - val) ** 2.0)

def generate_Initial_Parameters():
    # min and max used for bounds
    maxX = max(xData)
    minX = min(xData)
    maxY = max(yData)
    minY = min(yData)
    slope = 10.0 * (maxY - minY) / (maxX - minX)  # times 10 for safety margin

    parameterBounds = []
    parameterBounds.append([minX, maxX])  # search bounds for breakpoint
    parameterBounds.append([-slope, slope])  # search bounds for slopeA
    parameterBounds.append([minY, maxY])  # search bounds for offsetA
    parameterBounds.append([-slope, slope])  # search bounds for slopeB
    parameterBounds.append([minY, maxY])  # search bounds for offsetB

    result = differential_evolution(sumOfSquaredError, parameterBounds, seed=3)
    return result.x

# by default, differential_evolution completes by calling curve_fit() using parameter bounds
geneticParameters = generate_Initial_Parameters()

# call curve_fit without passing bounds from genetic algorithm
fittedParameters, pcov = curve_fit(func, xData, yData, geneticParameters)
print('Parameters:', fittedParameters)
print()

modelPredictions = func(xData, *fittedParameters)

absError = modelPredictions - yData
SE = numpy.square(absError)  # squared errors
MSE = numpy.mean(SE)  # mean squared errors
RMSE = numpy.sqrt(MSE)  # Root Mean Squared Error, RMSE
Rsquared = 1.0 - (numpy.var(absError) / numpy.var(yData))

print()
print('RMSE:', RMSE)
print('R-squared:', Rsquared)
print()

##########################################################
# graphics output section
def ModelAndScatterPlot(graphWidth, graphHeight):
    f = plt.figure(figsize=(graphWidth / 100.0, graphHeight / 100.0), dpi=100)
    axes = f.add_subplot(111)

    # first the raw data as a scatter plot
    axes.plot(xData, yData, 'D')

    # create data for the fitted equation plot
    xModel = numpy.linspace(min(xData), max(xData))
    yModel = func(xModel, *fittedParameters)

    # now the model as a line plot
    axes.plot(xModel, yModel)

    axes.set_xlabel('X Data')  # X axis data label
    axes.set_ylabel('Y Data')  # Y axis data label

    plt.show()
    plt.close('all')  # clean up after using pyplot

graphWidth = 800
graphHeight = 600
ModelAndScatterPlot(graphWidth, graphHeight)

Basemap and proj give different projection results

I am struggling to understand projections using pyproj.
For now, my question is about understanding the results of the projection operations below.
I have the following coordinates that I project to x, y:
from mpl_toolkits.basemap import Basemap
import pyproj
lon = [3.383789, 5.822754]
lat = [48.920575, 53.72185]
# with Basemap
M = Basemap(projection='merc',ellps = 'WGS84')
q1, q2 = M(lon, lat)
for a, b in zip(q1, q2):
    print(a, b)
# with pyproj
from pyproj import Proj
p = Proj(proj='merc', ellps='WGS84',errcheck = True)
p1 = Proj(proj='latlong', datum='WGS84',errcheck = True)
print(p(3.383789, 48.920575), p(5.822754, 53.72185))
print(p1(3.383789, 48.920575), p1(5.822754, 53.72185))
20414190.011221122 65799915.8523339
20685694.35308374 66653928.94763097
(376681.6684318804, 6229168.979819128) (648186.0102944968, 7083182.075116195)
(0.0590582592427664, 0.8538251057188251) (0.10162622883366988, 0.9376231627625158)
Why are the results different even though I use the same projection parameters?
As a newbie in geospatial data processing, I apologize in advance if this question is trivial.
For the Mercator projection, Basemap places the origin of its grid coordinate system at the lower-left corner of the computable extent. With your code, that corner can be recovered as
M(0, 0, inverse=True)
# output: (-180.0, -89.98999999999992)
If you compute the projection coordinates of (lon=0, lat=0) and call them (x0, y0), you obtain the coordinate shift that makes Basemap's projection coordinates differ from the standard convention, where (0, 0) is at the center of the map:
lon0, lat0 = 0, 0
x0, y0 = M(lon0, lat0)
# x0=20015077.371242613, y0=59546805.8807
For a test point at (long=3.383789, lat=48.920575),
lon1 = 3.383789
lat1 = 48.920575
x1, y1 = M(lon1, lat1)
with the coordinate shifts applied, the result is
print(x1-x0, y1-y0)
# output: (376259.9924608879, 6254386.049398325)
When compared with the values from pyproj,
p0 = Proj(proj='merc', ellps='WGS84', errcheck = True)
print(p0(lon1, lat1))
# output (376681.6684318804, 6229168.979819128)
they agree quite well, though not exactly. For small-scale map plotting, you will not see the discrepancy on the map.
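If you need that correction for many points at once, the shift can be wrapped in a small helper; the function name below is just for illustration and is not part of Basemap or pyproj:

def basemap_to_global_merc(M, lons, lats):
    # Undo Basemap's corner-origin shift so the coordinates are comparable
    # with a plain pyproj Mercator projection (origin at lon=0, lat=0).
    x0, y0 = M(0.0, 0.0)        # Basemap coordinates of (lon=0, lat=0)
    xs, ys = M(lons, lats)      # Basemap projection, lower-left corner at (0, 0)
    return [x - x0 for x in xs], [y - y0 for y in ys]

xs, ys = basemap_to_global_merc(M, [3.383789, 5.822754], [48.920575, 53.72185])
# expected to land near pyproj's (376681.67, 6229168.98) and (648186.01, 7083182.08),
# up to the small discrepancy noted above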

passing a tuple to fill_value in scipy.interpolate.interp1d results in ValueError

The docs for scipy.interpolate.interp1d (v0.17.0) say the following about the optional fill_value argument:
fill_value : ... If a two-element tuple, then the first element is used as a fill value for x_new < x[0] and the second element is used for x_new > x[-1].
Thus I pass a two-element tuple in this code:
import numpy as np
from scipy.interpolate import interp1d

N = 100
x = np.arange(N)
y = x * x
interpolator = interp1d(x, y, kind='linear', bounds_error=False, fill_value=(x[0], x[-1]))
r = np.arange(1, 70)
interpolator(r)
But it throws ValueError:
ValueError: shape mismatch: value array of shape (2,) could not be broadcast to indexing result of shape (0,1)
Can anyone please point out what I am doing wrong here?
Thanks in advance for any help.
It's a bug which has been fixed in the current dev version:
>>> N = 100
>>> x = np.arange(N)
>>> y = x**2
>>> from scipy.interpolate import interp1d
>>> iii = interp1d(x, y, fill_value=(-10, 10), bounds_error=False)
>>> iii(-1)
array(-10.0)
>>> iii(101)
array(10.0)
>>> scipy.__version__
'0.18.0.dev0+8b07439'
That being said, if all you want is a linear interpolation with fill values for left-hand and right-hand sides, you can use np.interp directly.
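A minimal sketch of that np.interp alternative, reusing the question's data and, just for illustration, the fill values -10 and 10 from the session above:

import numpy as np

N = 100
x = np.arange(N)
y = x ** 2

# np.interp is always linear, and out-of-range points are filled through the
# `left` and `right` keyword arguments instead of a fill_value tuple
r = np.arange(-5, 105)
result = np.interp(r, x, y, left=-10.0, right=10.0)
# result[:5] is -10.0 everywhere (r < x[0]); result[-5:] is 10.0 everywhere (r > x[-1])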

How to modify the dynamic range of an image (gray scale) in matlab to be between [-3000 15000]?

How can I modify the dynamic range of a grayscale image, currently in [-30000 30000], in MATLAB so that it lies in [-3000 15000]?
You can use the second argument of imagesc to do that:
imagesc(rand(10),[-3000 15000])
colormap('gray')
Simple linear rescaling along with some vector arithmetic does the job:
x1 = img(i,j);   % value in the original range
O1 = -30000;     % min of the original range in img
O2 = 30000;      % max of the original range in img
T1 = -3000;      % min of the target range
T2 = 15000;      % max of the target range
x2 = (x1 - O1) * (T2 - T1) / (O2 - O1) + T1;   % value mapped into the target range
At x1 = O1 this gives T1, and at x1 = O2 it gives T2, as required. Using the above equation and a single vectorized pass over the matrix you can convert all the values; I leave that part to you.