I am trying to create a program that converts between multiple coordinate systems in Matlab.
I have several systems and want to convert between them. There are different topocentric, geocentric & heliocentric systems, and I have written the transformation matrices to convert between these coordinate systems.
To simplify this problem I'll use an example.
If I have 3 coordinate systems:
Cartesian , Cylindrical & Spherical Coordinates
To convert from Cylindrical coordinates to Cartesian coordinates, I can apply:
x = r ∙ cos(ø)
y = r ∙ sin(ø)
z = z
To convert from Spherical coordinates to Cartesian coordinates, I can apply:
x = R ∙ sin(θ) ∙ cos(ø)
y = R ∙ sin(θ) ∙ sin(ø)
z = R ∙ cos(θ)
Assuming we don't convert Spherical Coordinates directly to Cylindrical Coordinates, we convert:
Spherical -> Cartesian
Cartesian -> Cylindrical
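For example, chaining the two known conversions in Matlab might look like this (values chosen arbitrarily, variable names just for illustration):
R = 5; theta = pi/3; phi = pi/4;     % spherical input
x = R*sin(theta)*cos(phi);           % Spherical -> Cartesian
y = R*sin(theta)*sin(phi);
z = R*cos(theta);
r       = hypot(x, y);               % Cartesian -> Cylindrical
phi_cyl = atan2(y, x);               % equals phi, as expected
z_cyl   = z;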
In my real problem, I have 8 different coordinate systems with transformations back and forth between each of them. Each system has at most two direct paths linking it to other coordinate systems.
It looks like this:
A <-> B <-> C <-> D <-> E <-> F <-> G <-> H
I want to create a method for the user to choose a coordinate system, input coordinates and select the destination coordinate systems.
Instead of manually writing functions for A -> C, A -> D, A -> E, ... (54 different steps in total):
Is there a way I can create a system to connect the paths?
Is there a way I can use a graph or nodes and apply functions which connect the nodes (e.g. A -> C)?
What is this concept so I can read up more on it?
You could implement something complicated with object-oriented programming, but to keep things simple I propose storing all the different types of coordinates as structs that all have a member type, plus whatever other members are needed for that particular type of coordinate.
You could then define all your single-step conversion functions to have the same signature, function out_coord = A2B(in_coord), e.g.:
function cart = sphere2cart(sphere)
assert(strcmp(sphere.type, 'sphere')) % make sure input is correct type
cart.type = 'cart';
cart.x = sphere.R * sin(sphere.theta) * cos(sphere.omega);
cart.y = sphere.R * sin(sphere.theta) * sin(sphere.omega);
cart.z = sphere.R * cos(sphere.theta);
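For instance, called directly on a struct of the matching type (field names taken from the function above):
s = struct('type', 'sphere', 'R', 1, 'theta', pi/4, 'omega', pi/3);
c = sphere2cart(s)   % c.type is 'cart', with fields x, y and z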
These functions can be called from one universal convert function like this:
function output_coord = convert(input_coord, target_type)
output_coord = input_coord;
while ~strcmp(output_coord.type, target_type)
func = get_next_conversion_func(output_coord.type, target_type);
output_coord = func(output_coord);
end
which does one conversion step at a time, until output_coord has the correct type. The only missing step is then a function that determines which conversion to do next, based on the current type and the target type. In your case with a 'linear' conversion chain, this is not so hard. In more complex cases, where the types are connected in a complex graph, this might require some shortest path algorithm. Unfortunately, this is a bit cumbersome to implement in Matlab, but one hard-coded solution might be something like:
function func = get_next_conversion_func(current_type, target_type)
switch current_type
case 'A'
func = @A2B;
case 'B'
switch target_type
case 'A'
func = @B2A;
case {'C','D','E'}
func = @B2C;
end
case 'C'
switch target_type
case {'A','B'}
func = @C2B;
case {'D','E'}
func = @C2D;
end
...
end
There are surely smarter ways to implement this; it is basically a dispatch table that says in which direction to go based on the current type and the target type.
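For the purely linear chain in this question, one possible sketch that avoids the hand-written switch (assuming one-letter type names and single-step functions named exactly A2B, B2A, etc.) is to derive the next hop from the positions of the two types in the chain:
function func = get_next_conversion_func(current_type, target_type)
chain = {'A','B','C','D','E','F','G','H'};
i = find(strcmp(chain, current_type));
j = find(strcmp(chain, target_type));
if j > i
next_type = chain{i+1};   % step right along the chain
else
next_type = chain{i-1};   % step left along the chain
end
func = str2func([current_type '2' next_type]);   % e.g. 'B2C' -> @B2C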
EDIT
Following Jonas' suggestion of doing all conversions via one central type (let's say C), all of this can be simplified to
function output_coord = convert(input_coord, target_type)
output_coord = input_coord;
if strcmp(output_coord.type, target_type)
return % nothing to convert
end
if ~strcmp(output_coord.type, 'C')
switch output_coord.type
case 'A'
output_coord = A2C(output_coord);
case 'B'
output_coord = B2C(output_coord);
case 'D'
output_coord = D2C(output_coord);
case 'E'
output_coord = E2C(output_coord);
end
end
assert(strcmp(output_coord.type, 'C'))
if ~strcmp(output_coord.type, target_type)
switch target_type
case 'A'
output_coord = C2A(output_coord);
case 'B'
output_coord = C2B(output_coord);
case 'D'
output_coord = C2D(output_coord);
case 'E'
output_coord = C2E(output_coord);
end
end
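As a usage sketch (the coordinate fields besides type are placeholders; only the dispatch matters here):
coord_A = struct('type', 'A', 'x', 1, 'y', 2, 'z', 3);
coord_E = convert(coord_A, 'E');   % internally runs A2C, then C2E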
In your current logic, A <-> B <-> C <-> D <-> E <-> F <-> G <-> H, you will have to write 14 functions for converting (if A->B and B->A count as two).
I suggest that instead, you choose one reference coordinate system, say A, and write the functions A<->B, A<->C, etc.
This solution requires you to write the same number of functions as your solution, but the logic becomes trivial. Furthermore, each conversion takes a maximum of two conversion steps, which will avoid accumulating round-off errors when you perform a chain of conversions.
Since the original problem is more complicated, the idea is described using a simple example below.
For example, suppose we want to place several router antennas somewhere in a room so that a cellphone gets the strongest signal on the table (received power > Pmax) and the weakest signal on the bed (received power < Pmin). What is the best (minimum) number of antennas that should be used, and where should they be placed, in order to achieve this goal?
Mathematically,
SIGNAL_STRENGTH is dependent on the variables (x, y, z) and on their number, i.e. on the location and number of antennas.
Besides, assume
PREDICTION = f((x1, y1, z1), (x2, y2, z2), ..., (xi, yi, zi), ..., (xn, yn, zn))
where n and the (xi, yi, zi) are to be optimized. The goal is to minimize the cost function
||SIGNAL_STRENGTH - PREDICTION||
I tried to implement this in Matlab using GA with mixed-integer programming. Two optimization functions are nested: the outer one optimizes n, and the inner one optimizes (x, y, z) for a given n. This method is very slow, and so far it has not produced a single result.
Does anyone have a more efficient way to solve this problem? Any suggestion is appreciated. Thanks in advance.
Terminology | Problem Definition
An antenna transmits at position a in R^3 with constant power. Its signal strength can be modelled by some S: R^3 -> R, where S has a single maximum S_0 at a and the set {x : S(x) > const} is simply connected, e.g. S(x) = S_0 * exp(-const * ||x - a||^2).
Given a set of antennas A, the resulting signal strength is the maximum over the individual antennas,
S_A(x) = max{S_a(x) : for all a in A} ,
which means we 'lock' on the strongest antenna, which is what cell phones do.
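For concreteness, a minimal Matlab sketch of this combined field, using the Gaussian form of S given above (S0 and c are placeholder model constants, and the antenna positions are the rows of ant):
function s = S_A(x, ant, S0, c)
% signal strength at point x (1-by-3) given antenna positions ant (n-by-3)
d2 = sum((ant - x).^2, 2);    % squared distance to each antenna (implicit expansion, R2016b+)
s = max(S0 * exp(-c * d2));   % lock on the strongest antenna
end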
Let K = R^3 x R denote the space of points (position, intensity). Now consider two finite subsets POI_min and POI_max of K. We want to find the set A with the minimal number of antennas (|A| -> min.), that satisfies
for all (x,w) in POI_min : S_A(x) < w and for all (x,w) in POI_max : S_A(x) > w .
Implication
Since the set S(x) > const is simply connected, there has to be an antenna inside a sphere around the position of each element (x, w) in POI_max, with radius r = max{||xi - x|| : S(xi) = w}. In other words: if we put an antenna at the position of (x, w), then r is the furthest we could move it away from x while still providing signal strength w at x, so an actual antenna has to be positioned within that radius.
By a similar argument for POI_min, it follows that there can be no antenna within r = min{||xi - x|| : S(xi) = w} of the position of (x, w).
Solution
Instead of solving a nonlinear optimization task, we can intersect spheres to obtain the optimal solution. If k spheres around the POI_max positions intersect, we can place a single antenna in the intersection, reducing the number of antennas needed by k-1.
However, each antenna that is placed must also satisfy all constraints given by the elements of POI_min. Assuming that antennas are omnidirectional, so that the orientation of an antenna doesn't matter, we can do (pseudocode):
min_sphere = {(x_i,r_i) : from POI_min},
spheres_to_cover = {(x_i,r_i) : from POI_max}
A = {}
while not is_empty(spheres_to_cover)
power_set_score = struct // holds score, k
PS <- construct power set of spheres_to_cover
for i = 1:number_of_elements(PS)
k = PS[i]
if intersection(k) \ min_sphere is not empty
power_set_score[i].score = |k|
else
power_set_score[i].score = 0
end if
power_set_score[i].k = k
end for
sort(power_set_score) // sort by score, biggest first
A <- add arbitrary point in (intersection(power_set_score[1].k) \ min_sphere)
spheres_to_cover = spheres_to_cover \ power_set_score[1].k
end while
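As a complementary hedged sketch (not part of the pseudocode above), a candidate antenna set could be checked against both point sets, with each POI stored as a row [x y z w] and S_A as defined earlier:
function ok = satisfies(A, poi_max, poi_min, S0, c)
% true if the antenna positions A (n-by-3) meet every POI_max and POI_min constraint
ok = true;
for k = 1:size(poi_max, 1)
ok = ok && S_A(poi_max(k, 1:3), A, S0, c) > poi_max(k, 4);
end
for k = 1:size(poi_min, 1)
ok = ok && S_A(poi_min(k, 1:3), A, S0, c) < poi_min(k, 4);
end
end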
On the other hand, you have only given an example problem, so this solution may not be applicable or broad enough for your case, and I did make a few assumptions. Being more specific in the question might get you an even better answer.
I would like to use MATLAB's chi2gof to perform a chi-square goodness-of-fit test. My problem is that my assumed (i.e., theoretical) distribution is not one of the standard built-in probability distributions in MATLAB. The specific form of my desired distribution is:
p = x^a*exp(-b*x^2)
where a and b are constants. Surely there must be a way to use chi2gof with an arbitrary PDF? I have done an exhaustive Google search but have come up empty-handed.
You can pass chi2gof a handle to a function that takes a single argument, like this:
a = ...
b = ...
c = ...
F = @(x)a*exp(-b*x-c*x.^2); % Technically this is an anonymous function
[H,P,STATS] = chi2gof(data,'cdf',F)
Or in special cases:
a = ...
b = ...
c = ...
F = @(x,a,b,c)a*exp(-b*x-c*x.^2);
[H,P,STATS] = chi2gof(data,'cdf',{F,a,b,c})
the last line of which is equivalent to
[H,P,STATS] = chi2gof(data,'cdf',@(x)F(x,a,b,c))
If the parameters a, b, and c are estimated (e.g., using some fitting process), then you should specify the number of estimated parameters to chi2gof. In this case:
[H,P,STATS] = chi2gof(data,'cdf',F,'nparams',3)
Please read the documentation to learn about the other options.
I have an expression with three variables x, y and v. I want to first integrate over v, and so I use the int function in MATLAB.
The command that I use is the following:
g = int((1-fxyv)*pv, v, y, +inf)
PS: I haven't shown what the function fxyv is, but it is very complicated, so int is taking very long and I am afraid that, even after waiting, it might not solve it.
I know one option is to integrate numerically, using for example integral; however, the second part of this problem requires me to integrate exp[g(x,y)] over x and y, from 0 to infinity and from x to infinity respectively. So I think I can't use numerical values of x and y when I want to integrate over v, or maybe I can?
Thanks
Since the question does not contain sufficient detail to attempt analytic integration, this answer focuses on numeric integration.
It is possible to solve these equations numerically. However, because of complex dependencies between the three integrals, it is not possible to simply use integral3. Instead, one has to define functions that compute parts of the expressions using a simple integral, and are themselves fed into other calls of integral. Whether this approach leads to useful results in terms of computation time and precision cannot be answered generally, but depends on the concrete choice of the functions f and p. Fiddling around with precision parameters to the different calls of integral may be necessary.
I assume that the functions f(x, y, v) and p(v) are defined in the form of Matlab functions:
function val = f(x, y, v)
val = ...
end
function val = p(v)
val = ...
end
Because of the way they are used later, they have to accept multiple values for v in parallel (as an array) and return as many function values (again as an array, of the same size). x and y can be assumed to always be scalars. A simple example implementation would be val = ones(size(v)) in both cases.
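For example, the simple placeholder implementations mentioned above would be (to be replaced by the real, complicated functions):
function val = f(x, y, v)
% placeholder: must return an array of the same size as v
val = ones(size(v));
end

function val = p(v)
% placeholder: must return an array of the same size as v
val = ones(size(v));
end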
First, let's define a Matlab function g that implements the first equation:
function val = g(x, y)
val = integral(@gIntegrand, y, inf);
function val = gIntegrand(v)
% output must be of the same dimensions as parameter v
val = (1 - f(x, y, v)) .* p(v);
end
end
The nested function gIntegrand defines the integrand; the outer function performs the numeric integration that gives the value of g(x, y). Integration is over v; the parameters x and y are shared between the outer and the nested function. gIntegrand is written in such a way that it deals with multiple values of v in the form of arrays, provided f and p do so already.
Next, we define the integrand of the outer integral in the second equation. To do so, we need to compute the inner integral, and therefore also have a function for the integrand of the inner integral:
function val = TIntegrandOuter(x)
val = nan(size(x));
for i = 1 : numel(x)
val(i) = integral(@TIntegrandInner, x(i), inf);
end
function val = TIntegrandInner(y)
val = nan(size(y));
for j = 1 : numel(y)
val(j) = exp(g(x(i), y(j)));
end
end
end
Because both functions are meant to be fed as an argument into integral, they need to be able to deal with multiple values. In this case, this is implemented via an explicit for loop. TIntegrandInner computes exp(g(x, y)) for multiple values of y, but for the fixed value of x that is current in the loop in TIntegrandOuter. This value x(i) plays both the role of a parameter to g(x, y) and of an integration limit. The variables x and i are shared between the outer and the nested function.
Almost there! We have the integrand, only the outermost integration needs to be performed:
T = integral(@TIntegrandOuter, 0, inf);
This is a very convoluted implementation, which is not very elegant, and probably not very efficient. Again, whether results of this approach prove to be useful needs to be tested in practice. However, I don't see any other way to implement these numeric integrations in Matlab in a better way in general. For specific choices of f(x, y, v) and p(v), there might be possible improvements.
I am new to IPython and need to solve a specific curve-fitting problem; I have the concept, but my programming knowledge is still too limited. I have experimental data (x, y) that I want to fit to an equation (curve fitting) with four coefficients (a, b, c, d). I would like to fix one of these coefficients (e.g. a) to a specific value and refit my experimental data (non-linear least squares). The coefficients b, c and d are not independent of one another; they are related by a system of equations.
Is it more adequate to use curve_fit or lmfit?
I started this with curve_fit:
import numpy as np
from scipy.optimize import curve_fit

def fitfunc(x, a, b, c, d):
    return a + b*x + c/x + np.log10(x)*d

popt, fitcov = curve_fit(fitfunc, x, y)
or a code like this with lmfit:
import numpy as np
import scipy as sp
from lmfit import minimize, Parameters, Parameter, report_fit
def fctmin(params, x, y):
    a = params['a'].value
    b = params['b'].value
    c = params['c'].value
    d = params['d'].value
    model = a + b*x + c/x + d*np.log10(x)
    return model - y
#create parameters
params = Parameters()
params.add('a', value = -89)
params.add('b', value =b)
params.add('c', value = c)
params.add('d', value = d)
#fit leastsq model
result = minimize(fctmin, params, args=(x, y))
#calculate results
final = y + result.residual
report_fit(params)
I'll admit to being biased. curve_fit() is designed to simplify scipy.optimize.leastsq() by assuming that you are fitting y(x) data to a model for y(x, parameters), so the function you pass to curve_fit() is one that calculates the model at the values to be fit. lmfit is a bit more general and flexible in that your objective function has to return the array to be minimized in the least-squares sense, i.e. it returns "model - data" instead of just "model".
But, lmfit has features that appear to do exactly what you want: fix one of the parameters in the model without having to rewrite the objective function.
That is, you could say
params.add('a', value = -89, vary=False)
and the parameter 'a' will stay fixed. To do that with curve_fit() you have to rewrite your model function.
In addition, you say that b, c and d are related by equations, but don't give details. With lmfit, you might be able to include these equations as constraints. You have
params.add('b', value =b)
params.add('c', value = c)
params.add('d', value = d)
though I don't see a value for b. Even assuming there is a value, this creates three independent variables with the same starting value. You might mean "vary b, and force c and d to have the same value". lmfit can do this with:
params.add('b', value = 10)
params.add('c', expr = 'b')
params.add('d', expr = 'c')
That will have one independent variable, and the value for c will be forced to the value of b (and d to c). You can use (approximately) any valid Python statement as a constraint expression, for example:
params.add('b', value = 10)
params.add('c', expr = 'sqrt(b/10)')
params.add('d', expr = '1-c')
I think that might be the sort of thing you're looking for.
I’m currently a Physics student and for several weeks have been compiling data related to ‘Quantum Entanglement’. I’ve now got to the point where I have to fit my data (which should resemble a cos² curve, and does) with a sort of “best fit” cos² curve. The lab script says the following:
A more precise determination of the visibility V (this is basically how 'clean' the data is) follows from the best fit to the measured data using the function:
f(b) = A/2 * [1 - V*sin((b - b_center)/P)]
Granted this probably doesn’t mean much out of context, but essentially A is the amplitude, b is an angle and P is the periodicity. Hence this is also a “wave” like the experimental data I have found.
From this I understand, as previously mentioned, I am making a “best fit” curve. However, I have been told that this isn’t possible with Excel and that the best approach is Matlab.
I know intermediate JavaScript but do not know Matlab and was hoping for some direction.
Is there a tutorial I can read for this? Is it possible for someone to go through it with me? I really have no idea what it entails, so any feed back would be greatly appreciated.
Thanks a lot!
Initial steps
I guess we should begin by getting a representation in Matlab of the function that you're trying to model. A direct translation of your formula looks like this:
function y = targetfunction(A,V,P,bc,b)
y = (A/2) * (1 - V * sin((b-bc) / P));
end
Getting hold of the data
My next step is going to be to generate some data to work with (you'll use your own data, naturally). So here's a function that generates some noisy data. Notice that I've supplied some values for the parameters.
function [y b] = generateData(npoints,noise)
A = 2;
V = 1;
P = 0.7;
bc = 0;
b = 2 * pi * rand(npoints,1);
y = targetfunction(A,V,P,bc,b) + noise * randn(npoints,1);
end
The function rand generates random points on the interval [0,1], and I multiplied those by 2*pi to get points randomly on the interval [0, 2*pi]. I then applied the target function at those points, and added a bit of noise (the function randn generates normally distributed random variables).
Fitting parameters
The most complicated function is the one that fits a model to your data. For this I use the function fminunc, which does unconstrained minimization. The routine looks like this:
function [A V P bc] = bestfit(y,b)
x0(1) = 1; %# A
x0(2) = 1; %# V
x0(3) = 0.5; %# P
x0(4) = 0; %# bc
f = @(x) norm(y - targetfunction(x(1),x(2),x(3),x(4),b));
x = fminunc(f,x0);
A = x(1);
V = x(2);
P = x(3);
bc = x(4);
end
Let's go through it line by line. To minimize a function in Matlab, the function needs to take a single vector as its parameter, so we have to pack our four parameters into one vector; the first four lines set the initial guess x0 for that vector. I used values that are close to, but not the same as, the ones that I used to generate the data.
Then I define the function f that I want to minimize. It takes a single argument x, which it unpacks and feeds to targetfunction, along with the points b in our dataset. Hopefully the results are close to y. We measure how far they are from y by subtracting them from y and applying the function norm, which squares every component, adds them up and takes the square root (i.e. it computes the root of the sum of squared residuals).
Then I call fminunc with our function to be minimized, and the initial guess for the parameters. This uses an internal routine to find the closest match for each of the parameters, and returns them in the vector x.
Finally, I unpack the parameters from the vector x.
Putting it all together
We now have all the components we need, so we just want one final function to tie them together. Here it is:
function master
%# Generate some data (you should read in your own data here)
[f b] = generateData(1000,1);
%# Find the best fitting parameters
[A V P bc] = bestfit(f,b);
%# Print them to the screen
fprintf('A = %f\n',A)
fprintf('V = %f\n',V)
fprintf('P = %f\n',P)
fprintf('bc = %f\n',bc)
%# Make plots of the data and the function we have fitted
plot(b,f,'.');
hold on
plot(sort(b),targetfunction(A,V,P,bc,sort(b)),'r','LineWidth',2)
end
If I run this function, I see this being printed to the screen:
>> master
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
A = 1.991727
V = 0.979819
P = 0.695265
bc = 0.067431
And a plot appears showing the data points together with the fitted curve drawn in red.
That fit looks good enough to me. Let me know if you have any questions about anything I've done here.
I am a bit surprised, as you mention f(a) and your function does not contain an a. But in general, suppose you want to plot f(x) = cos(x)^2.
First determine for which values of x you want to make a plot, for example
xmin = 0;
stepsize = 1/100;
xmax = 6.5;
x = xmin:stepsize:xmax;
y = cos(x).^2;
plot(x,y)
However, note that this approach works just as well in Excel; you just have to do some work to get your x values and the function into the right cells.