I have intensity points, marked in pink in the plot above, which are stored in a variable as follows:
intensity_info = [35.9349; 46.4465; 46.4790; 45.7496; 44.7496; 43.4790; 42.5430; 41.4351; 40.1829; ...
    37.4114; 33.2724; 29.5447; 26.8373; 24.8171; 24.2724; 24.2487; 23.5228; 23.5228; 24.2048; ...
    23.7057; 22.5228; 22.0000; 21.5210; 20.7294; 20.5430; 20.2504; 20.2943; 21.0219; 22.0000; ...
    23.1096; 25.2961; 29.3364; 33.4351; 37.4991; 40.8904; 43.2706; 44.9798; 47.4553; 48.9324; ...
    48.6855; 48.5210; 47.9781; 47.2285; 45.5342; 34.2310];
I also have the locations of points A, B and C, calculated by:
[maxtab, mintab] = peakdet(intensity_info, 1); % maxtab holds A and B, mintab holds C
The peakdet.m MATLAB code can be found here: http://www.billauer.co.il/peakdet.html. I want to calculate point D, where there is a slight increase in the intensity value: coming down from point A the intensity decreases, but at point D it increases slightly. As seen from the graph below, point C can also lie to the left of point D; in that case, coming down from point B the intensity decreases and at D it increases slightly. The intensity values for the graph below are:
intensity_info = [29.3424; 39.4847; 43.7934; 47.4333; 49.9123; 51.4772; 52.1189; 51.6601; 48.8904; ...
    45.0000; 40.9561; 36.5868; 32.5904; 31.0439; 29.9982; 27.9579; 26.6965; 26.7312; 28.5631; ...
    29.3912; 29.7496; 29.7715; 29.7294; 30.2706; 30.1847; 29.7715; 29.2943; 29.5667; 31.0877; ...
    33.5228; 36.7496; 39.7496; 42.5009; 45.7934; 49.1847; 52.2048; 53.9123; 54.7276; 54.9781; ...
    55.0000; 54.9781; 54.7276; 53.9342; 51.4246; 38.2512];
Points A, B and C are calculated in the same manner as above.
How can I calculate point D in these cases?
I'm not MATLAB literate, but if the 'tabs' are subtables, then perhaps you can manipulate them to create other subtables... something like (I repeat, illiterate)
left_of_graph = part of graph from A to C
right_of_graph = part of graph from C to B
left_delta = some fraction of the difference between A's y-value and C's y-value
right_delta = some fraction of the difference between C's y-value and B's y-value
[left_maxtab,left_mintab] = peakdet(left_of_graph,left_delta)
[right_maxtab,right_mintab] = peakdet(right_of_graph,right_delta)
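A minimal MATLAB sketch of that idea (assuming, as peakdet.m documents, that maxtab and mintab hold [index value] rows, that A and B are the two rows of maxtab, that C is the single row of mintab, and that the fraction 0.25 is an arbitrary guess):
[maxtab, mintab] = peakdet(intensity_info, 1);
iA = maxtab(1,1); iB = maxtab(2,1);        % indices of A and B
iC = mintab(1,1);                          % index of C
left_of_graph  = intensity_info(iA:iC);    % part of graph from A to C
right_of_graph = intensity_info(iC:iB);    % part of graph from C to B
left_delta  = 0.25 * (maxtab(1,2) - mintab(1,2));
right_delta = 0.25 * (maxtab(2,2) - mintab(1,2));
[left_maxtab,  left_mintab]  = peakdet(left_of_graph,  left_delta);
[right_maxtab, right_mintab] = peakdet(right_of_graph, right_delta);
% any interior maximum in left_maxtab or right_maxtab is a candidate for D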
I do have some experience with peak analysis, so I will say this will help, not answer, the problem. You can find all the peaks you want, but that doesn't mean the data is worth squinting at. Noise and imperfect resolution are real. Happy hunting!
PS You might also scan for all points higher than both neighbors in the whole band. That is guaranteed to miss none of the 'true' maxima, but will give you more 'false' maxima than you can count (although your data looks pretty smooth!).
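For instance, a one-line sketch of that scan (assuming y holds the intensity vector):
cand = find(y(2:end-1) > y(1:end-2) & y(2:end-1) > y(3:end)) + 1;  % indices of points above both neighbors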
The solution you're looking for is a numerical method based on iterations. In your particular case, bisection is the one that suits best, because others, like uniform sequential search, do not take an interval as an input.
This is the implementation for bisection:
function [a, b, L] = Bisection(f, a, b, e, d)
% [a,b] is the interval bracketing your local maximum; e is the error
% tolerance for the result and d is the step (dx).
L = b - a;
while (L > e)
    xa = ((a + b)/2) - d/2;    % two probe points straddling the midpoint
    xb = ((a + b)/2) + d/2;
    ya = subs(f, xa);
    yb = subs(f, xb);
    if (ya < yb)
        a = xa;                % the maximum lies in the right half
    else
        b = xb;                % the maximum lies in the left half
    end
    L = b - a;
end
end
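For example, a usage sketch (assuming the Symbolic Math Toolbox, since the function evaluates f with subs; the test function is made up):
syms t
f = -(t - 3)^2 + 5;                      % unimodal function with its maximum at t = 3
[a, b, L] = Bisection(f, 0, 6, 1e-3, 1e-4);
xmax = (a + b)/2                         % converges to roughly 3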
The method above is pretty efficient and straightforward to use, although there are others that perform even better, like the Fibonacci and Golden Section methods.
Cheers.
I have found an alternative solution. extrema.m helped in finding point D in the two graphs above. It can be downloaded from http://www.mathworks.com/matlabcentral/fileexchange/12275-extrema-m-extrema2-m and used in the following way to find point D:
[ymax,imax,ymin,imin] = extrema(intensity_info);
figure;plot(x,intensity_info,x(imax),ymax,'g.',x(imin),ymin,'r.');
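Here x is assumed to be the sample index (it isn't defined above), so a self-contained version would be:
x = 1:numel(intensity_info);
[ymax, imax, ymin, imin] = extrema(intensity_info);
figure; plot(x, intensity_info, x(imax), ymax, 'g.', x(imin), ymin, 'r.');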
I'm new to Matlab and want to write a program that chooses the value of a parameter (P) to minimize the difference between two vectors, where each vector is a variable in a dataframe. The first vector (call it A) is a predetermined vector of 1s and 0s, and the second vector (call it B) has each of its entries determined as an indicator function that depends on the value of the parameter P and other variables in the dataframe. For instance, let C be a third variable in the dataset, so
A = [1, 0, 0, 1, 0]
B = [x, y, z, u, v]
where x = 1 if (C[1]+10)^0.5 - P > (C[1])^0.5 and otherwise x = 0, and similarly, y = 1 if (C[2]+10)^0.5 - P > (C[2])^0.5 and otherwise y = 0, and so on.
I'm not really sure where to start with the code, except that it might be useful to use the fminsearch command. Any suggestions?
Edit: I changed the above by raising to a power, which is closer to the actual example that I have. I'm also providing a complete example in response to a comment:
Let A be as above, and let C = [10, 1, 100, 1000, 1]. Then my goal with the Matlab code would be to choose a value of P to minimize the differences between the coordinates of the vectors A and B, where B[1] = 1 if (10+10)^0.5 - P > (10)^0.5 and otherwise B[1] = 0, and similarly B[2] = 1 if (1+10)^0.5 - P > (1)^0.5 and otherwise B[2] = 0, etc. So I want to choose P to maximize the likelihood that A[1] = B[1], A[2] = B[2], etc.
I have the following setup in Matlab, where ds is the name of my dataset:
ds.B = zeros(size(ds,1),1); % empty vector to fill
for i = 1:size(ds,1)
if ((ds.C(i) + 10)^(0.5) - P > (ds.C(i))^(0.5))
ds.B(i) = 1;
else
ds.B(i) = 0;
end
end
Now I want to choose the value of P to minimize the difference between A and B. How can I do this?
EDIT: I'm also wondering how to do this when the inequality is something like (C[i]+10)^0.5 - P*D[i] > (C[i])^0.5, where D is another variable in my dataset. Now P is a scalar being multiplied rather than just added. This seems more complicated since I can't solve for P exactly. How can I solve the problem in this case?
EDIT 1: It seems fminbnd() isn't optimal, likely due to the stairstep nature of the indicator function. I've updated to test the midpoints of all the regions between indicator function flips, plus endpoints.
EDIT 2: Updated to include dataset D as a coefficient of P.
If you can package your distance calculation up in a single function based on P, you can then search for its minimum.
arraySize = 1000;
ds.A = double(rand([arraySize,1]) > 0.5);
ds.C = rand(size(ds.A));
ds.D = rand(size(ds.A));
B = @(P) double((ds.C+10).^0.5 - P.*ds.D > ds.C.^0.5);
costFcn = @(P) sqrt(sum((ds.A-B(P)).^2));
% Solving the equation (C+10)^0.5 - P*D = C^0.5 for P, and sorting the results
BCrossingPoints = sort(((ds.C+10).^0.5-ds.C.^0.5)./ds.D);
% Taking the average of each crossing point with its neighbors
BMidpoints = (BCrossingPoints(1:end-1)+BCrossingPoints(2:end))/2;
% Appending endpoints onto the midpoints
PsToTest = [BCrossingPoints(1)-0.1; BMidpoints; BCrossingPoints(end)+0.1];
% Calculate the distance from A to B at each P to test
costResult = arrayfun(costFcn,PsToTest);
% Find the minimum cost
[~,lowestCostIndex] = min(costResult);
% Find the optimum P
optimumP = PsToTest(lowestCostIndex);
ds.B = B(optimumP);
semilogx(PsToTest,costResult)
xlabel('P')
ylabel('Distance from A to B')
1.- x is assumed to be positive real only, because with x<0 complex values show up.
Since no comment is made in the question, it seems reasonable to assume x real and x>0 only.
As requested, P 'the parameter' is a scalar, and P only has 2 significant states, >0 or <0. Let's see why:
2.- The following lines generate kind-of random A and C.
Then a sweep of p is carried out and the distances d1 and d2 are calculated.
d1 is the Euclidean distance, and d2 is the absolute value of the difference between A and B after converting both from binary to decimal:
N=10
% A=[1 0 0 1 0]
A=randi([0 1],1,N);
% C=[10 1 1e2 1e3 1]
C=randi([0 1e3],1,N)
p=[-1e4:1:1e4]; % parameter to optimize
B=zeros(1,numel(A));
d1=zeros(1,numel(p)); % euclidean distance
d2=zeros(1,numel(p)); % difference distance
for k1 = 1:numel(p)
    B = (C+10).^.5 - p(k1) > C.^.5;
    d1(k1) = (sum((B-A).^2))^.5;
    d2(k1) = abs(sum(A.*2.^[numel(A)-1:-1:0]) - sum(B.*2.^[numel(A)-1:-1:0]));
end
figure;
plot(p,d1)
grid on
xlabel('p');title('d1')
figure
plot(p,d2)
grid on
xlabel('p');title('d2')
The only degree of freedom left to optimize, then, seems to be the sign of P, regardless of the value of |P|.
3.- f(p,x) has either no root or just one root, depending upon p
The threshold function is
if f(x)>0 then B(k)==1 else B(k)==0
that is,
f(p,x) = (x+10)^.5 - p - x^.5
Now
(x+10).^.5 - p > x.^.5 is the same as (x+10).^.5 - x.^.5 > p
There's a range of p for which f(p,x) has no (real) root at all.
For the particular case p=0, (x+10).^.5 and x.^.5 do not intersect for any finite x:
x = 0:.01:100;   % x grid assumed here; it is not defined in the original
figure; plot(x, (x+10).^.5, x, x.^.5); grid on
y2=diff((x+10).^.5-x.^.5)
figure;plot(x(2:end),y2);
grid on;xlabel('x')
title('y2=diff((x+10).^.5-x.^.5)')
This means the condition f(x)>0 is always true, holding all bits of B at 1. With B all ones, d(A,B) turns into d(A,1), a constant.
However, for a large enough value of p there is no crossing either, and f(x)>0 is always false, keeping all bits of B at 0.
In this case the cost function d(A,B) turns into d(A,0), which is just the magnitude of A itself.
4.- P as a vector
The optimization gains degrees of freedom if, instead of a scalar, P is considered as a vector.
For a given x there's a value of p that switches B(k) between 1 and 0.
Any value of p below such threshold keeps B(k)=1.
Equivalently, inverting f (solve (x+10)^.5 - x^.5 = p for x: square once, isolate the remaining root, square again):
g(p) = (10-p^2)^2/(4*p^2) > x
Elements with x below this threshold have B(k)=1, so choosing p element-wise lets each bit of B be flipped to the corresponding element value of A.
Therefore, it's convenient to consider P as a vector, not a scalar, and:
have all, or as many as possible, elements of C meet c(k) < (10-p^2)^2/(4*p^2) exactly where A(k)=1, in order to get B=A, or otherwise
minimize d(A,B).
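A minimal sketch of the vector-P idea (the 0.01 offsets are arbitrary):
thr = sqrt(C + 10) - sqrt(C);       % element-wise switching threshold for p
p = thr - 0.01;                     % just below the threshold -> B(k) = 1
p(A == 0) = thr(A == 0) + 0.01;     % just above it where A(k) = 0 -> B(k) = 0
B = (C + 10).^.5 - p > C.^.5;       % B now matches A exactly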
5.- roots of f(p,x)
syms t positive
p = -1000:.1:1000;
zp = NaN*ones(1, numel(p));
for k1 = 1:numel(p)
    eq1 = (t+10)^.5 - p(k1) - t^.5 - p(k1) == 0;   % note p enters twice, so each root corresponds to 2*p
    s1 = solve(eq1, t);
    if ~isempty(s1)
        zp(k1) = double(s1);
    end
end
nzp=~isnan(zp);
zp(nzp)
returns

ans =
  620.0100  151.2900   64.5344   34.2225   20.2500   12.7211
    8.2451    5.4056    3.5260    2.2500    1.3753    0.7803
    0.3882    0.1488    0.0278
Since the original problem is more complicated, the idea is described using a simple example below.
For example, suppose we want to put several router antennas somewhere in a room so that a cellphone gets the strongest signal on the table (received power > Pmax) and the weakest signal on the bed (received power < Pmin). What is the best (minimum) number of antennas that should be used, and where should they be placed, in order to achieve this goal?
Mathematically,
SIGNAL_STRENGTH depends on the variables (x, y, z) and on the number of variables, i.e. the location and number of antennas.
Besides, assume
PREDICTION = f((x1, y1, z1), (x2, y2, z2), ..., (xi, yi, zi), ..., (xn, yn, zn))
where n and the (xi, yi, zi) are to be optimized. The goal is to minimize
cost function = ||SIGNAL_STRENGTH - PREDICTION||
I tried to use GA with mixed integer programming in MATLAB to implement this. Two optimization functions are used: the outer one optimizes n, and the inner one optimizes (x, y, z) for a given n. This method is slow, and so far I haven't seen it produce a single result.
Does anyone have a more efficient way to solve this problem? Any suggestion is appreciated. Thanks in advance.
Terminology | Problem Definition
An antenna is transmitting at position a in R^3 with constant power. Its signal strength can be measured by some S: R^3 -> R, where S has a single maximum S_0 at a and the set {x : S(x) > const} is simply connected, e.g. S(x) = S_0 * exp(-const * (x-a)^2).
Given a set of antennas A, the resulting signal strength is the maximum over the single antennas:
S_A(x) = max{S_a(x) : for all a in A} ,
which means we 'lock' onto the strongest antenna, which is what cell phones do.
Let K = R^3 x R denote a space of points (position, intensity). Now consider two finite subsets POI_min and POI_max of K. We want to find the set A with the minimal number of antennas (|A| -> min.) that satisfies
for all (x,w) in POI_min : S_A(x) < w and for all (x,w) in POI_max : S_A(x) > w .
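To make the model concrete, here is a hedged MATLAB sketch of S_A for the Gaussian form above (antenna positions, S_0 and the decay constant are placeholder values):
ants = [0 0 2; 3 1 2];                                   % antenna positions, one per row
S0 = 1; c = 0.5;                                         % peak strength and decay (assumed)
S  = @(x, a) S0 * exp(-c * sum((x - a).^2));             % single-antenna strength S_a(x)
S_A = @(x) max(arrayfun(@(i) S(x, ants(i,:)), 1:size(ants,1)));  % lock onto the strongest
S_A([1 0.5 2])                                           % strength at a query point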
Implication
As {x : S(x) > const} is simply connected, there has to be an antenna in a sphere around the position of each element (x,w) in POI_max with radius r = max{||xi - x|| : for all xi with S(xi) = w}. This means that if we put an antenna at the position of (x,w), the furthest we can move away from x and still have signal strength w is the radius r, within which an actual antenna has to be positioned.
With a similar argument for POI_min, it follows that there must be no antenna within r = min{||xi - x|| : for all xi with S(xi) = w}.
Solution
Instead of solving a nonlinear optimization task, we can intersect spheres to obtain the optimal solution. If k spheres around the POI_max positions intersect, we can place a single antenna in the intersection, reducing the number of antennas needed by k-1.
However, each antenna that is placed must satisfy all constraints given by the elements of POI_min. Assuming that antennas are omnidirectional, so the orientation of an antenna doesn't matter, we can do the following (pseudocode):
min_sphere = {(x_i, r_i) : from POI_min}
spheres_to_cover = {(x_i, r_i) : from POI_max}
A = {}
while not is_empty(spheres_to_cover)
    power_set_score = struct // holds score, k
    PS <- construct power set of spheres_to_cover
    for i = 1:number_of_elements(PS)
        k = PS[i]
        if intersection(k) \ min_sphere is not empty
            power_set_score[i].score = |k|
        else
            power_set_score[i].score = 0
        end if
        power_set_score[i].k = k
    end for
    sort(power_set_score) // sort by score, biggest first
    A <- add arbitrary point in (intersection(power_set_score[1].k) \ min_sphere)
    spheres_to_cover = spheres_to_cover \ power_set_score[1].k
end while
On the other hand, you have only given an example problem, so this solution may not be applicable or broad enough for your case. I did make a few assumptions, so being more specific in the question might get you an even better answer.
I am fitting an exponential decay function with lsqcurvefit in MATLAB. To do this I first normalize my data because the values differ by several orders of magnitude. However, I'm not sure how to denormalize my fitted parameters.
My fitting model is s = O + A * exp(-t/T), where t and s are known; t is on the order of 10^-3 and s on the order of 10^5. So I subtract from each its mean and divide it by its standard deviation. My goal is to find the best A, O and T such that, at the given times t, the model comes closest to s. However, I don't know how to denormalize the resulting A, O and T.
Might somebody know how to do this? I only found this question on SO about normalisation, but it does not really address the same problem.
When you normalize, you must record the means and standard deviations for each of your features. Then you can easily use those values to denormalize.
e.g.
A = [1 4 7 2 9]';
B = [100 475 989 177 399]';
So you could just normalize right away:
An = (A - mean(A)) / std(A)
but then you can't get back to the original A. So first save the means and stds.
Am = mean(A); Bm = mean(B);
As = std(A); Bs = std(B);
An = (A - Am)/As;
Bn = (B - Bm)/Bs;
now do whatever processing you want and then to denormalize:
Ad = An*As + Am;
Bd = Bn*Bs + Bm;
I'm sure you can see that that's going to be an issue if you have a lot of features (i.e. you have to type out code for each feature, what a mission!), so let's assume your data is arranged as a matrix, data, where each sample is a row and each column is a feature. Now you can do it like this:
data = [A, B]
means = mean(data);
stds = std(data);
datanorm = bsxfun(@rdivide, bsxfun(@minus, data, means), stds);
%// Do processing on datanorm
datadenorm = bsxfun(@plus, bsxfun(@times, datanorm, stds), means);
EDIT:
After you have fit your model parameters (A, O and T) using normalized t and f, your model will expect normalized inputs and produce normalized outputs. So to use it, you should first normalize t and then denormalize f.
That is, to find a new f you run the model on a normalized new t: f(tn), where tn = (t - tm)/ts, tm is the mean of your training (or fitting) t set and ts its std. Then, to get f back at the correct magnitude, you must denormalize only f, so the full solution is
f(tn)*fs + fm
So once again, all you need to do is save the mean and std you used to normalize.
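For the specific exponential model in the question, you can also undo the normalization analytically. A sketch, assuming s was normalized as sn = (s - sm)/ss and t as tn = (t - tm)/ts, and the fit returned On, An, Tn for sn = On + An*exp(-tn/Tn): substituting the normalizations back in gives s = (sm + ss*On) + ss*An*exp(tm/(ts*Tn)) * exp(-t/(ts*Tn)), so
O = sm + ss*On;                % denormalized offset
T = ts*Tn;                     % denormalized time constant
A = ss*An*exp(tm/(ts*Tn));     % denormalized amplitude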
I've read up on fsolve and solve, and tried various methods of curve fitting/regression but I feel I need a bit of guidance here before I spend more time trying to make something work that might be the wrong approach.
I have a series of equations I am trying to fit to a data set (x) separately:
for example:
(a+b*c)*d = x
a*(1+b*c)*d = x
x = [1.9248; 3.0137; 4.0855; 5.0097; 5.7226; 6.2064; 6.4655; 6.5108; 6.3543; 6.0065];
c = [0.0200; 0.2200; 0.4200; 0.6200; 0.8200; 1.0200; 1.2200; 1.4200; 1.6200; 1.8200];
d = [1.2849; 2.2245; 3.6431; 5.6553; 8.3327; 11.6542; 15.4421; 19.2852; 22.4525; 23.8003];
I know c, d and x; they are observations. My unknowns are a and b, which should be constant.
I could do it manually for each x observation, but there must be an automatic and far superior way, or at least another approach.
Very grateful if I could receive some guidance. Thanks for the time!
Given your two example equations, let y = x./d; then
y = a+b*c
y = a+a*b*c
The first case is just a line, for which you can obtain a least squares fit (values for a and b) with polyfit(). In the second case, you can just say k = a*b (since these are both fitted anyway) and rewrite it as:
y = a+k*c
which is exactly the same line-fitting problem as the first, except that now b = k/a. In fact, b = b1/a is the solution to the second problem, where b1 is the slope fitted in the first problem. In short, to solve them both, you need one call to polyfit() and a couple of divisions.
Will that work for you?
I see two different equations to fit here. To spell out the code:
For (a+b*c)*d = x
p = polyfit(c, x./d, 1);
a = p(2);
b = p(1);
For a*(1+b*c)*d = x
p = polyfit(c, x./d, 1);
a = p(2);
b = p(1) / a;
No need for polyfit; this is just a linear least squares problem, which is best solved with MATLAB's backslash operator:
>> ab = [ones(size(c)) c] \ (x./d)
ab =
     1.411437211703194e+000   % 'a'
    -7.329687661579296e-001   % 'b'
Faster, cleaner, more educative :)
And, as Emmet already said, your second equation is nothing more than a different form of your first equation, the difference being that the b in your first equation, is equal to a*b in your second one.
I have a MATLAB routine with one rather obvious bottleneck. I've profiled the function, with the result that 2/3 of the computing time is used in the function levels:
The function levels takes a matrix of floats and splits each column into nLevels buckets, returning a matrix of the same size as the input, with each entry replaced by the number of the bucket it falls into.
To do this I use the quantile function to get the bucket limits, and a loop to assign the entries to buckets. Here's my implementation:
function [Y q] = levels(X,nLevels)
% Assign each of the elements of X to an integer-valued level
p = linspace(0, 1.0, nLevels+1);
q = quantile(X,p);
if isvector(q)
    q = transpose(q);
end
Y = zeros(size(X));
for i = 1:nLevels
    % The variables g and l indicate the entries that are respectively greater
    % than or less than the relevant bucket limits. The line Y(g & l) = i
    % assigns the value i to any element that falls in this bucket.
    if i ~= nLevels % the default; doesn't include the upper bound
        g = bsxfun(@ge, X, q(i,:));
        l = bsxfun(@lt, X, q(i+1,:));
    else % for the final level we include the upper bound
        g = bsxfun(@ge, X, q(i,:));
        l = bsxfun(@le, X, q(i+1,:));
    end
    Y(g & l) = i;
end
Is there anything I can do to speed this up? Can the code be vectorized?
If I understand correctly, you want to know how many items fell in each bucket.
Use:
n = hist(Y,nbins)
Though I am not sure that it will help with the speedup; it is just cleaner this way.
Edit: following the comment, you can use the second output parameter of histc:
[n,bin] = histc(...) also returns an index matrix bin. If x is a vector, n(k) = sum(bin==k). bin is zero for out of range values. If x is an M-by-N matrix, then...
How about this:
function [Y q] = levels(X,nLevels)
p = linspace(0, 1.0, nLevels+1);
q = quantile(X,p);
Y = zeros(size(X));
for i = 1:numel(q)-1
    Y = Y + (X >= q(i)); % parentheses matter: X >= q(i) must be evaluated before the addition
end
This results in the following:
>>X = [3 1 4 6 7 2];
>>[Y, q] = levels(X,2)
Y =
1 1 2 2 2 1
q =
1 3.5 7
You could also modify the logic line to ensure values are less than the start of the next bin. However, I don't think it is necessary.
I think you should use histc:
[~,Y] = histc(X,q)
As you can see in MATLAB's documentation:
Description
n = histc(x,edges) counts the number of values in vector x that fall
between the elements in the edges vector (which must contain
monotonically nondecreasing values). n is a length(edges) vector
containing these counts. No elements of x can be complex.
I made a couple of refinements (including one inspired by Aero Engy in another answer) that have resulted in some improvements. To test them out, I created a random matrix of a million rows and 100 columns to run the improved functions on:
>> x = randn(1000000,100);
First, I ran my unmodified code, with the following results:
Note that of the 40 seconds, around 14 of them are spent computing the quantiles - I can't expect to improve this part of the routine (I assume that Mathworks have already optimized it, though I guess that to assume makes an...)
Next, I modified the routine to the following, which should be faster and has the advantage of being fewer lines as well!
function [Y q] = levels(X,nLevels)
p = linspace(0, 1.0, nLevels+1);
q = quantile(X,p);
if isvector(q), q = transpose(q); end
Y = ones(size(X));
for i = 2:nLevels
    Y = Y + bsxfun(@ge, X, q(i,:));
end
The profiling results with this code are:
So it is 15 seconds faster, which represents a 150% speedup of the portion of the code that is mine, rather than MathWorks'.
Finally, following a suggestion of Andrey (again in another answer) I modified the code to use the second output of the histc function, which assigns entries to bins. It doesn't treat the columns independently, so I had to loop over the columns manually, but it seems to be performing really well. Here's the code:
function [Y q] = levels(X,nLevels)
p = linspace(0,1,nLevels+1);
q = quantile(X,p);
if isvector(q), q = transpose(q); end
q(end,:) = 2 * q(end,:); % stretch the top edge so the column maxima fall inside the last bucket
Y = zeros(size(X));
for k = 1:size(X,2)
    [junk Y(:,k)] = histc(X(:,k), q(:,k));
end
And the profiling results:
We now spend only 4.3 seconds in code outside the quantile function, which is around a 500% speedup over what I wrote originally. I've spent a bit of time writing this answer because I think it's turned into a nice example of how you can use the MATLAB profiler and StackExchange in combination to get much better performance from your code.
I'm happy with this result, although of course I'll continue to be pleased to hear other answers. At this stage the main performance increase will come from increasing the performance of the part of the code that currently calls quantile. I can't see how to do this immediately, but maybe someone else here can. Thanks again!
You can sort the columns and divide+round the inverse indexes:
function Y = levels(X,nLevels)
% Assign each of the elements of X to an integer-valued level
[S,IX] = sort(X);
[grid1,grid2] = ndgrid(1:size(IX,1), 1:size(IX,2));
invIX = zeros(size(X));
invIX(sub2ind(size(X), IX(:), grid2(:))) = grid1;
Y = ceil(invIX/size(X,1)*nLevels);
Or you can use tiedrank:
function Y = levels(X,nLevels)
% Assign each of the elements of X to an integer-valued level
R = tiedrank(X);
Y = ceil(R/size(X,1)*nLevels);
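A quick way to sanity-check the two variants against each other (with hypothetical renames, since both functions above are called levels):
X = randn(1000, 5);
Y1 = levels_sort(X, 10);    % the sort-based version
Y2 = levels_rank(X, 10);    % the tiedrank-based version
isequal(Y1, Y2)             % true when X has no ties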
Surprisingly, both these solutions are slightly slower than the quantile+histc solution.