Since the original problem is rather complicated, the idea is described below using a simple example.
Suppose we want to place several router antennas somewhere in a room so that a cellphone gets the strongest signal on the table (received power > Pmax) and the weakest signal on the bed (received power < Pmin). What is the best (minimum) number of antennas to use, and where should they be placed, in order to achieve this goal?
Mathematically, SIGNAL_STRENGTH depends on the variables (x, y, z) and on the number of variables, i.e. on the locations and the number of antennas. Besides, assume
PREDICTION = f((x1, y1, z1), (x2, y2, z2), ..., (xi, yi, zi), ..., (xn, yn, zn))
where both n and the (xi, yi, zi) are to be optimized. The goal is to minimize the cost function
cost = ||SIGNAL_STRENGTH - PREDICTION||
I tried to implement this in Matlab using GA with mixed-integer programming. Two optimization routines are nested: the outer one optimizes n, and the inner one optimizes the (x, y, z) locations for a given n. This method runs very slowly, and so far it has not produced a single result.
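For concreteness, a stripped-down sketch of that nested setup (all names here — prediction, SIGNAL_STRENGTH, roomMin, roomMax, tol, nMax — are illustrative placeholders, not my actual code):

for n = 1:nMax
    nvars = 3*n;                          % one (x,y,z) triple per antenna
    lb = repmat(roomMin, 1, n);           % room bounds repeated per antenna
    ub = repmat(roomMax, 1, n);
    cost = @(v) norm(SIGNAL_STRENGTH - prediction(reshape(v, 3, n)));
    [v, fval] = ga(cost, nvars, [], [], [], [], lb, ub);
    if fval < tol, break; end             % stop at the smallest n that works
end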
Does anyone have a more efficient way to solve this problem? Any suggestion is appreciated. Thanks in advance.
Terminology | Problem Definition
An antenna is transmitting at position a in R^3 with constant power. Its signal strength can be modeled by some S: R^3 -> R, where S has a single maximum S_0 at a and every superlevel set {x : S(x) > const} is simply connected, e.g. S(x) = S_0 * exp(-c * ||x - a||^2).
Given a set of antennas A, the resulting signal strength at a point is the maximum over the individual antennas,
S_A(x) = max{S_a(x) : a in A} ,
which means we 'lock' onto the strongest antenna, which is what cell phones do.
Let K = R^3 x R denote a space of points (position, intensity). Now consider two finite subsets POI_min and POI_max of K. We want to find the set A with the minimal number of antennas (|A| -> min.) that satisfies
for all (x,w) in POI_min : S_A(x) < w, and for all (x,w) in POI_max : S_A(x) > w .
Implication
As {x : S(x) > const} is simply connected, there has to be an antenna within a sphere around the position of each element (x,w) in POI_max, with radius r = max{||x' - x|| : S(x') = w}. In other words: if we were to put an antenna at the position of (x,w), then r is the furthest we could move away from x and still have signal strength w, so an actual antenna has to be positioned within that radius.
With a similar argument for POI_min, it follows that there is no antenna within r = min{||x' - x|| : S(x') = w} of each (x,w) in POI_min.
Solution
Instead of solving a nonlinear optimization task, we can intersect spheres to obtain the optimal solution. If k spheres around the POI_max positions intersect, we can place a single antenna in the intersection, reducing the number of antennas needed by k-1.
However, each antenna that is placed must also satisfy all constraints given by the elements of POI_min. Assuming the antennas are omnidirectional, so that orientation doesn't matter, we can do the following (pseudocode):
min_sphere = {(x_i,r_i) : from POI_min}
spheres_to_cover = {(x_i,r_i) : from POI_max}
A = {}
while not is_empty(spheres_to_cover)
    power_set_score = struct // holds score, k
    PS <- construct power set of spheres_to_cover
    for i = 1:number_of_elements(PS)
        k = PS[i]
        if intersection(k) \ min_sphere is not empty
            power_set_score[i].score = |k|
        else
            power_set_score[i].score = 0
        end if
        power_set_score[i].k = k
    end for
    sort(power_set_score) // sort by score, biggest first
    A <- add arbitrary point in (intersection(power_set_score[1].k) \ min_sphere)
    spheres_to_cover = spheres_to_cover \ power_set_score[1].k
end while
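If it helps, the feasibility test inside the loop can be sketched in Matlab with simple rejection sampling (entirely illustrative; the function name and sampling approach are my own choices, and a proper geometric intersection routine would be more rigorous):

% Test whether the candidate spheres (rows of centers Xmax, radii Rmax) share
% a common point lying outside all forbidden spheres (Xmin, Rmin).
function [ok, p] = findFeasiblePoint(Xmax, Rmax, Xmin, Rmin, nSamples)
    lo = min(Xmax - Rmax, [], 1);          % bounding box of the candidates
    hi = max(Xmax + Rmax, [], 1);
    for s = 1:nSamples
        p = lo + rand(1,3).*(hi - lo);     % random candidate position
        inAll  = all(sqrt(sum((Xmax - p).^2, 2)) <= Rmax);  % inside every candidate sphere
        outAll = all(sqrt(sum((Xmin - p).^2, 2)) >= Rmin);  % outside every forbidden sphere
        if inAll && outAll
            ok = true;
            return
        end
    end
    ok = false;
    p = [];
end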
That said, you have only given an example problem, so this solution may not be applicable or general enough for your actual case, and I did make a few assumptions. Being more specific in the question might get you an even better answer.
Related
I'm new to Matlab and want to write a program that chooses the value of a parameter (P) to minimize the difference between two vectors, where each vector is a variable in a dataframe. The first vector (call it A) is a predetermined vector of 1s and 0s, and each entry of the second vector (call it B) is determined by an indicator function that depends on the value of the parameter P and on other variables in the dataframe. For instance, let C be a third variable in the dataset, so
A = [1, 0, 0, 1, 0]
B = [x, y, z, u, v]
where x = 1 if (C[1]+10)^0.5 - P > (C[1])^0.5 and x = 0 otherwise; similarly, y = 1 if (C[2]+10)^0.5 - P > (C[2])^0.5 and y = 0 otherwise, and so on.
I'm not really sure where to start with the code, except that it might be useful to use the fminsearch command. Any suggestions?
Edit: I changed the above by raising to a power, which is closer to the actual example that I have. I'm also providing a complete example in response to a comment:
Let A be as above, and let C = [10, 1, 100, 1000, 1]. My goal with the Matlab code is then to choose a value of P that minimizes the differences between the coordinates of the vectors A and B, where B[1] = 1 if (10+10)^0.5 - P > (10)^0.5 and B[1] = 0 otherwise, and similarly B[2] = 1 if (1+10)^0.5 - P > (1)^0.5 and B[2] = 0 otherwise, etc. So I want to choose P to maximize the likelihood that A[1] = B[1], A[2] = B[2], etc.
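To make the example concrete, the flipping threshold of each entry can be computed directly (a quick illustrative check):

A = [1 0 0 1 0];
C = [10 1 100 1000 1];
thr = sqrt(C+10) - sqrt(C)   % B(k) = 1 exactly when P < thr(k)
% thr is approximately [1.3099  2.3166  0.4881  0.1577  2.3166]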
I have the following setup in Matlab, where ds is the name of my dataset:
ds.B = zeros(size(ds,1),1); % empty vector to fill
for i = 1:size(ds,1)
    if ((ds.C(i) + 10)^(0.5) - P > (ds.C(i))^(0.5))
        ds.B(i) = 1;
    else
        ds.B(i) = 0;
    end
end
Now I want to choose the value of P to minimize the difference between A and B. How can I do this?
EDIT: I'm also wondering how to do this when the inequality is something like (C[i]+10)^0.5 - P*D[i] > (C[i])^0.5, where D is another variable in my dataset. Now P is a scalar being multiplied rather than just added. This seems more complicated since I can't solve for P exactly. How can I solve the problem in this case?
EDIT 1: It seems fminbnd() isn't optimal, likely due to the stairstep nature of the indicator function. I've updated to test the midpoints of all the regions between indicator function flips, plus endpoints.
EDIT 2: Updated to include dataset D as a coefficient of P.
If you can package your distance calculation up in a single function based on P, you can then search for its minimum.
arraySize = 1000;
ds.A = double(rand([arraySize,1]) > 0.5);
ds.C = rand(size(ds.A));
ds.D = rand(size(ds.A));
B = @(P)double((ds.C+10).^0.5 - P.*ds.D > ds.C.^0.5);
costFcn = @(P)sqrt(sum((ds.A-B(P)).^2));
% Solving the equation (C+10)^0.5 - P*D = C^0.5 for P, and sorting the results
BCrossingPoints = sort(((ds.C+10).^0.5-ds.C.^0.5)./ds.D);
% Taking the average of each crossing point with its neighbors
BMidpoints = (BCrossingPoints(1:end-1)+BCrossingPoints(2:end))/2;
% Appending endpoints onto the midpoints
PsToTest = [BCrossingPoints(1)-0.1; BMidpoints; BCrossingPoints(end)+0.1];
% Calculate the distance from A to B at each P to test
costResult = arrayfun(costFcn,PsToTest);
% Find the minimum cost
[~,lowestCostIndex] = min(costResult);
% Find the optimum P
optimumP = PsToTest(lowestCostIndex);
ds.B = B(optimumP);
semilogx(PsToTest,costResult)
xlabel('P')
ylabel('Distance from A to B')
1.- x is assumed real and positive only, because with x < 0 complex values show up.
Since no comment is made in the question, it seems reasonable to assume x real and x > 0 only.
As requested, P 'the parameter' is a scalar. P only has 2 significant states, > 0 or < 0; let's see why:
2.- The following lines generate kind-of-random A and C.
Then a sweep of p is carried out, and distances d1 and d2 are calculated.
d1 is the Euclidean distance, and d2 is the absolute value of the difference between A and B after converting both from binary to decimal:
N=10
% A=[1 0 0 1 0]
A=randi([0 1],1,N);
% C=[10 1 1e2 1e3 1]
C=randi([0 1e3],1,N)
p=[-1e4:1:1e4]; % parameter to optimize
B=zeros(1,numel(A));
d1=zeros(1,numel(p)); % euclidean distance
d2=zeros(1,numel(p)); % difference distance
for k1=1:1:numel(p)
    B=(C+10).^.5-p(k1)>C.^.5;
    d1(k1)=(sum((B-A).^2))^.5;
    d2(k1)=abs(sum(A.*2.^[numel(A)-1:-1:0])-sum(B.*2.^[numel(A)-1:-1:0]));
end
figure;
plot(p,d1)
grid on
xlabel('p');title('d1')
figure
plot(p,d2)
grid on
xlabel('p');title('d2')
The only degree of freedom to optimize seems to be the sign of P, regardless of the value of |P|.
3.- f(p,x) has either no root or just one root, depending on p
The threshold function is
if f(x) > 0 then B(k) = 1, else B(k) = 0
that is,
f(p,x) = (x+10)^.5 - p - x^.5
Now
(x+10).^.5 - p > x.^.5 is the same as (x+10).^.5 - x.^.5 > p
There's a range of p for which f(p,x) = 0 has no (real) root.
For the particular case p = 0, the curves (x+10).^.5 and x.^.5 do not intersect (they get closer and closer, but there's no intersection for any finite x):
figure;plot(x,(x+10).^.5,x,x.^.5);grid on
y2=diff((x+10).^.5-x.^.5)
figure;plot(x(2:end),y2);
grid on;xlabel('x')
title('y2=diff((x+10).^.5-x.^.5)')
This means the condition f(x) > 0 is always true, holding all bits of B at 1. With B = 1, d(A,B) turns into d(A,1), a constant.
However, for certain values of p there is one root, and f(x) > 0 becomes always false, keeping all bits of B at 0.
In this case the cost function d(A,B) turns into d(A,0), which is just A itself.
4.- P as a vector
The optimization gains degrees of freedom if P is considered as a vector instead of a scalar.
For a given x there's a value of p that switches B(k); any value of p below such a threshold keeps B(k) = 1, and any value above it keeps B(k) = 0.
Equivalently, inverting f(x):
g(p) = (10 - p^2)^2 / (4*p^2) > x
Values of x below this threshold set the corresponding bit of B, so with one p per element each bit of B can be flipped to match the corresponding element of A.
Therefore, it's convenient to consider P as a vector, not a scalar, and:
have all, or as many as possible, elements of C meet c(k) < (10 - p^2)^2 / (4*p^2), in order to get B = A, or to
minimize d(A,B)
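A quick sketch of this per-element idea, reusing A and C from above (illustrative only):

thr = sqrt(C+10) - sqrt(C);      % switching threshold of each element
P = zeros(size(C));
P(A==1) = thr(A==1) - 0.1;       % p below the threshold -> B(k) = 1
P(A==0) = thr(A==0) + 0.1;       % p above the threshold -> B(k) = 0
B = (C+10).^.5 - P > C.^.5;      % with this vector P, B matches A exactly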
5.- roots of f(p,x)
syms t positive
p=[-1000:.1:1000];
zp=NaN*ones(1,numel(p));
sol=zeros(1,numel(p));
for k1=1:1:numel(p)
    p(k1) % display progress
    % note: p(k1) is subtracted twice, so this actually solves
    % (t+10)^.5 - t^.5 == 2*p(k1); the roots below correspond to that equation
    eq1=(t+10)^.5-p(k1)-t^.5-p(k1)==0;
    s1=solve(eq1,t);
    if ~isempty(s1)
        zp(k1)=double(s1);
    end
end
nzp=~isnan(zp);
zp(nzp)
returns
ans =
  620.0100  151.2900   64.5344   34.2225   20.2500   12.7211
    8.2451    5.4056    3.5260    2.2500    1.3753    0.7803
    0.3882    0.1488    0.0278
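As a quick cross-check, those roots also have a closed form: since the solved equation is (t+10)^.5 - t^.5 = 2*p, it follows that t = (10 - 4*p^2)^2 / (16*p^2), which reproduces the list above for p = 0.1:0.1:1.5:

p = 0.1:0.1:1.5;
t = (10 - 4*p.^2).^2 ./ (16*p.^2)   % 620.0100 151.2900 ... 0.0278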
The final goal I am trying to achieve is the generation of a ten-minute time series: to achieve this I have to perform an FFT operation, and that is the point I have been stumbling on.
Generally, the desired time series is assigned as the sum of two terms: a steady component U(t) and a fluctuating component u'(t). That is
u(t) = U(t) + u'(t);
So generally, my code follows this procedure:
1) Given data
time = 600 [s];
Nfft = 4096;
L = 340.2 [m];
U = 10 [m/s];
df = 1/600 = 0.00167 Hz;
fn = Nfft/(2*time) = 3.4133 Hz;
This means that my frequency array should be laid out as follows:
f = (-fn+df):df:fn;
But instead of using the whole f array, I am only making use of the positive half:
fpos = df:df:fn; % from 0.00167 to 3.4133 Hz
2) Spectrum Definition
I define a certain spectrum shape, applying the following relationship
Su = (6*L*U)./((1 + 6.*fpos.*(L/U)).^(5/3));
3) Random phase generation
I then have to generate a set of complex samples with a given distribution: in my case, the random phase follows a standard Gaussian distribution (mu = 0, sigma = 1).
In MATLAB I call
nn = complex(normrnd(0,1,[1,Nfft/2]), normrnd(0,1,[1,Nfft/2])); % 1-by-Nfft/2, matching fpos
4) Apply random phase
To apply the random phase, I just do this
Hu = Su.*nn; % elementwise, since Su and nn are both 1-by-Nfft/2
At this point my troubles start!
So far, I have only generated Nfft/2 = 2048 complex samples accounting for the fpos content, so the content accounting for the negative half of f is still missing. To overcome this issue, I was thinking of merging the real and imaginary parts of Hu, in order to get a signal Huu with Nfft = 4096 samples and all real values.
But with this merging process the 0-th frequency bin would not be represented, since the imaginary part of Hu is defined for fpos only.
So, how do I account for the 0-th bin while keeping a procedure like the one I have proposed so far?
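To make the idea concrete, this is the sort of construction I have in mind (the zero DC bin and the Su scaling are my own assumptions; many formulations scale by sqrt(Su*df) instead, depending on the spectrum convention):

fpos = df*(1:Nfft/2);                      % positive frequencies, DC excluded
Su = (6*L*U)./((1 + 6.*fpos.*(L/U)).^(5/3));
nn = complex(randn(1,Nfft/2), randn(1,Nfft/2));
Hu = Su.*nn;
Hu(end) = real(Hu(end));                   % Nyquist bin must be real
Hfull = [0, Hu, conj(Hu(end-1:-1:1))];     % DC bin + positive half + mirrored negative half
u_fluct = ifft(Hfull, 'symmetric');        % real-valued fluctuating component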
I have intensity points, which are marked in pink in the plot above, and these are stored in a variable given as
intensity_info =[ 35.9349
46.4465
46.4790
45.7496
44.7496
43.4790
42.5430
41.4351
40.1829
37.4114
33.2724
29.5447
26.8373
24.8171
24.2724
24.2487
23.5228
23.5228
24.2048
23.7057
22.5228
22.0000
21.5210
20.7294
20.5430
20.2504
20.2943
21.0219
22.0000
23.1096
25.2961
29.3364
33.4351
37.4991
40.8904
43.2706
44.9798
47.4553
48.9324
48.6855
48.5210
47.9781
47.2285
45.5342
34.2310 ];
I also have the information for points A, B, and C, which is calculated by:
[maxtab, mintab] = peakdet(intensity_info, 1); % maxtab has A and B information and
% mintab has C information
The peakdet.m Matlab code can be found here: http://www.billauer.co.il/peakdet.html. I want to calculate point D, where there is a slight increase in the intensity value: coming down from point A, the intensity decreases, but at point D there is a slight increase. As seen from the graph below, point C can also lie to the left of point D; in this case, coming down from point B, the intensity decreases and at D there is a slight increase. The intensity values for the graph below are given as:
intensity_info =[29.3424
39.4847
43.7934
47.4333
49.9123
51.4772
52.1189
51.6601
48.8904
45.0000
40.9561
36.5868
32.5904
31.0439
29.9982
27.9579
26.6965
26.7312
28.5631
29.3912
29.7496
29.7715
29.7294
30.2706
30.1847
29.7715
29.2943
29.5667
31.0877
33.5228
36.7496
39.7496
42.5009
45.7934
49.1847
52.2048
53.9123
54.7276
54.9781
55.0000
54.9781
54.7276
53.9342
51.4246
38.2512];
and points A, B, and C are calculated in the same manner as above.
How can I calculate point D in these cases?
I'm not MATLAB literate, but if the 'tabs' are subtables, then perhaps you can manipulate them to create other subtables... something like (I repeat, illiterate)
left_of_graph = part of graph from A to C
right_of_graph = part of graph from C to B
left_delta = some fraction of the difference between A's y-value and C's y-value
right_delta = some fraction of the difference between C's y-value and B's y-value
[left_maxtab,left_mintab] = peakdet(left_of_graph,left_delta)
[right_maxtab,right_mintab] = peakdet(right_of_graph,right_delta)
I do have some experience with peak analysis, so I will say this will help with, not answer, the problem. You can find all the peaks you want, but that doesn't mean the data is worth squinting at. Noise and imperfect resolution are real. Happy hunting!
PS: You might also scan for all points higher than both of their neighbors across the whole band. That is guaranteed to miss none of the 'true' maxima, but may give you more 'false' maxima than you can count (although your data looks pretty smooth!).
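In (rough) Matlab, that scan might look something like this — again, I'm not fluent, so treat it as a sketch, using the variable name from the question:

v = intensity_info(:);
% true where a sample is strictly higher than both of its neighbors
isLocalMax = [false; v(2:end-1) > v(1:end-2) & v(2:end-1) > v(3:end); false];
localMaxIdx = find(isLocalMax);   % candidate positions for D (and A, B)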
The solution you're looking for is an iterative numerical method; in your particular case, bisection is the one that suits best, because others, like uniform sequential search, do not take an interval as input.
This is the implementation for bisection:
function [ a, b, L ] = Bisection( f, a, b, e, d )
% f is a symbolic function; [a,b] is the interval containing your local
% maximum; e is the tolerance on the result and d is the probe step (dx).
L = b - a;
while (L > e)
    % evaluate f at two probe points straddling the midpoint
    xa = ((a + b)/2) - d/2;
    xb = ((a + b)/2) + d/2;
    ya = subs(f, xa);
    yb = subs(f, xb);
    if (ya < yb)
        a = xa;   % the maximum lies in the right part of the interval
    else
        b = xb;   % the maximum lies in the left part of the interval
    end
    L = b - a;
end
end
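A quick usage sketch (my example, assuming the Symbolic Math Toolbox since the function evaluates f with subs):

syms x
f = -(x - 2)^2 + 5;                          % toy function with a maximum at x = 2
[a, b, L] = Bisection(f, 0, 4, 1e-4, 1e-5);
xmax = (a + b)/2                             % approximately 2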
The method above is pretty efficient and straightforward to use, although there are others that perform even better, like the Fibonacci and Golden Section methods.
Cheers.
I have found an alternative solution. extrema.m helped in finding point D in the above two graphs. extrema.m can be downloaded from http://www.mathworks.com/matlabcentral/fileexchange/12275-extrema-m-extrema2-m and used in the following way to find point D:
[ymax,imax,ymin,imin] = extrema(intensity_info);
figure;plot(x,intensity_info,x(imax),ymax,'g.',x(imin),ymin,'r.');
I’m currently a Physics student and for several weeks have been compiling data related to ‘Quantum Entanglement’. I’ve now got to the point where I have to fit my data (which should resemble a cos² graph - and does) to a sort of “best fit” cos² graph. The lab script says the following:
A more precise determination of the visibility V (this is basically how 'clean' the data is) follows from the best fit to the measured data using the function:
f(b) = (A/2) * [1 - V*sin((b - b_center)/P)]
Granted this probably doesn’t mean much out of context, but essentially A is the amplitude, b is an angle and P is the periodicity. Hence this is also a “wave” like the experimental data I have found.
From this I understand, as previously mentioned, I am making a “best fit” curve. However, I have been told that this isn’t possible with Excel and that the best approach is Matlab.
I know intermediate JavaScript but do not know Matlab and was hoping for some direction.
Is there a tutorial I can read for this? Is it possible for someone to go through it with me? I really have no idea what it entails, so any feed back would be greatly appreciated.
Thanks a lot!
Initial steps
I guess we should begin by getting a representation in Matlab of the function that you're trying to model. A direct translation of your formula looks like this:
function y = targetfunction(A,V,P,bc,b)
    y = (A/2) * (1 - V * sin((b-bc) / P));
end
Getting hold of the data
My next step is going to be to generate some data to work with (you'll use your own data, naturally). So here's a function that generates some noisy data. Notice that I've supplied some values for the parameters.
function [y b] = generateData(npoints,noise)
    A = 2;
    V = 1;
    P = 0.7;
    bc = 0;
    b = 2 * pi * rand(npoints,1);
    y = targetfunction(A,V,P,bc,b) + noise * randn(npoints,1);
end
The function rand generates random points on the interval [0,1], and I multiplied those by 2*pi to get points randomly on the interval [0, 2*pi]. I then applied the target function at those points, and added a bit of noise (the function randn generates normally distributed random variables).
Fitting parameters
The most complicated function is the one that fits a model to your data. For this I use the function fminunc, which does unconstrained minimization. The routine looks like this:
function [A V P bc] = bestfit(y,b)
    x0(1) = 1; %# A
    x0(2) = 1; %# V
    x0(3) = 0.5; %# P
    x0(4) = 0; %# bc
    f = @(x) norm(y - targetfunction(x(1),x(2),x(3),x(4),b));
    x = fminunc(f,x0);
    A = x(1);
    V = x(2);
    P = x(3);
    bc = x(4);
end
Let's go through this line by line. To minimize a function in Matlab, it needs to take a single vector as a parameter. Therefore we have to pack our four parameters into a vector, which I do in the first four lines. I used starting values that are close to, but not the same as, the ones I used to generate the data.
Then I define the function I want to minimize. It takes a single argument x, which it unpacks and feeds to targetfunction, along with the points b in our dataset. Hopefully the results are close to y. We measure how far they are from y by subtracting them from y and applying the function norm, which squares every component, adds them up, and takes the square root (i.e. it computes the Euclidean length of the residual vector).
Then I call fminunc with our function to be minimized, and the initial guess for the parameters. This uses an internal routine to find the closest match for each of the parameters, and returns them in the vector x.
Finally, I unpack the parameters from the vector x.
Putting it all together
We now have all the components we need, so we just want one final function to tie them together. Here it is:
function master
    %# Generate some data (you should read in your own data here)
    [f b] = generateData(1000,1);
    %# Find the best fitting parameters
    [A V P bc] = bestfit(f,b);
    %# Print them to the screen
    fprintf('A = %f\n',A)
    fprintf('V = %f\n',V)
    fprintf('P = %f\n',P)
    fprintf('bc = %f\n',bc)
    %# Make plots of the data and the function we have fitted
    plot(b,f,'.');
    hold on
    plot(sort(b),targetfunction(A,V,P,bc,sort(b)),'r','LineWidth',2)
end
If I run this function, I see this being printed to the screen:
>> master
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
A = 1.991727
V = 0.979819
P = 0.695265
bc = 0.067431
And the following plot appears:
That fit looks good enough to me. Let me know if you have any questions about anything I've done here.
I am a bit surprised, as you mention f(a) and your function does not contain an a. But in general, suppose you want to plot f(x) = cos(x)^2:
First determine for which values of x you want to make a plot, for example
xmin = 0;
stepsize = 1/100;
xmax = 6.5;
x = xmin:stepsize:xmax;
y = cos(x).^2;
plot(x,y)
However, note that this approach works just as well in Excel; you just have to do some work to get your x values and the function into the right cells.
I have a physical measurement instrument (a force platform with load cells) which gives me three values: A, B, and C. It happens, though, that these values, which should be orthogonal, are actually somewhat coupled due to physical characteristics of the measuring device, which cause cross-talk between applied and returned values of force and torque.
Then, it is recommended that a calibration matrix be used to transform the measured values into a better estimate of the actual values, like this: actual ≈ C · measured, where C is a 3x3 calibration matrix.
The problem is that it is necessary to perform a SET of measurements, so that different measured(Fz, Mx, My) and actual(Fz, Mx, My) are least-squared to get some C matrix that works best for the system as a whole.
I can solve Ax = b problems with scipy.linalg.lstsq, or even scipy.linalg.solve (which gives an exact solution), for ONE measurement, but how should I proceed to account for a set of different measurements, each with its own equation and each potentially giving a different 3x3 matrix?
Any help is much appreciated, thanks for reading.
I posted a similar question containing just the mathematical part of this at math.stackexchange.com, and this answer solved the problem:
math.stackexchange.com/a/232124/27435
In case anyone has a similar problem in the future, here is an almost literal Scipy implementation of that answer (the first lines are initialization boilerplate code):
import numpy
import scipy.linalg
### Origin of the coordinate system: upper left corner!
"""
1----------2
| |
| |
4----------3
"""
platform_width = 600
platform_height = 400
# positions of each load cell (one per corner)
loadcell_positions = numpy.array([[0, 0],
[platform_width, 0],
[platform_width, platform_height],
[0, platform_height]])
platform_origin = numpy.array([platform_width, platform_height]) * 0.5
# applying a known force at known positions and taking the measurements
measurements_per_axis = 5
total_load = 50
results = []
for x in numpy.linspace(0, platform_width, measurements_per_axis):
    for y in numpy.linspace(0, platform_height, measurements_per_axis):
        position = numpy.array([x,y])
        for loadpos in loadcell_positions:
            moments = (platform_origin - loadpos) * total_load
            load = numpy.array([total_load])
            result = numpy.hstack([load, moments])
            results.append(result)
results = numpy.array(results)
noise = numpy.random.rand(*results.shape) - 0.5
measurements = results + noise
# expand each measurement into a 3x9 block, so that the 9 entries of the
# 3x3 calibration matrix C become the unknowns of one stacked linear system
expands = []
for n in range(measurements.shape[0]):
    k = results[n,:]
    m = measurements[n,:]
    expand = numpy.zeros((3,9))
    expand[0,0:3] = m
    expand[1,3:6] = m
    expand[2,6:9] = m
    expands.append(expand)
expands = numpy.vstack(expands)
# perform the actual regression: solve expands . vec(C) = actual values,
# so that C . measured approximates the actual load and moments
C = scipy.linalg.lstsq(expands, results.reshape((-1,1)))
C = numpy.array(C[0]).reshape((3,3))
# the result with pure noise (not actual coupling) should be
# very close to a 3x3 identity matrix (and is!)
print(C)
Hope this helps someone!