Loglog plot with linear ticks on x-axis - matlab

So I have this data I'd like plotted on a loglog scale, with linear values on the y-axis and the values in dB on the x-axis.
loglog(EbN0,BER)
outputs a nice-looking curve, but the problem is the axis ticks. They are fine on the y-axis, but the x-axis has only a single tick, at 10^0, and no others. Furthermore, that tick corresponds to the absolute value, not the dB value. Is there any convenient way to accomplish this?
(Note that both EbN0 and BER contain absolute values)
EDIT: I'll add my data and explain what I want a bit more.
EbN0 =
Columns 1 through 14
0.5000 1.0000 1.5000 2.0000 2.5000 3.0000 3.5000 4.0000 4.5000 5.0000 5.5000 6.0000 6.5000 7.0000
Columns 15 through 20
7.5000 8.0000 8.5000 9.0000 9.5000 10.0000
BER_TOT_ITER =
Columns 1 through 14
0.2928 0.2024 0.1183 0.0511 0.0164 0.0046 0.0010 0.0003 0.0001 0 0.0000 0.0000 0.0000 0
Columns 15 through 20
0 0 0 0 0 0
If I do plot(10*log10(EbN0),10*log10(BER_TOT_ITER)), I actually get exactly the graph I want, with the dB values on the x-axis, but now the y ticks are displayed in dB instead of absolute values... so I just want to relabel the y ticks, NOT rescale the figure.

Relabeling the ticks is really the wrong approach here. You'd be replacing numerical values with strings, and resizing etc. would no longer work.
Also, your data would no longer match what you're actually looking at.
You should always try to transform your data first.
So besides loglog, have a look at semilogx and semilogy, which give you a single logarithmic axis.
To sum up, what you're looking for is:
semilogy(10*log10(EbN0), BER_TOT_ITER)
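If you also want the axes labelled accordingly, a minimal sketch (the label strings are just my suggestion) would be:
semilogy(10*log10(EbN0), BER_TOT_ITER)
grid on
xlabel('E_b/N_0 (dB)')
ylabel('BER')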

moving mean on a circle

Is there a way to calculate a moving mean such that the values at the beginning and at the end of the array are averaged with the ones at the opposite end?
For example, instead of this result:
A=[2 1 2 4 6 1 1];
movmean(A,2)
ans = 2.0 1.5 1.5 3.0 5 3.5 1.0
I want to obtain the vector [1.5 1.5 1.5 3 5 3.5 1.0], as the initial array element 2 would be averaged with the ending element 1.
Generalizing to an arbitrary window size N, this is how you can add circular behavior to movmean in the way you want:
movmean(A([(end-floor(N./2)+1):end 1:end 1:(ceil(N./2)-1)]), N, 'Endpoints', 'discard')
For the given A and N = 2, you get:
ans =
1.5000 1.5000 1.5000 3.0000 5.0000 3.5000 1.0000
For an arbitrary window size n, you can use circular convolution with an averaging mask defined as [1/n ... 1/n] (with n entries; in your example n = 2):
result = cconv(A, repmat(1/n, 1, n), numel(A));
Convolution offers some nice ways of doing this, though you may need to tweak your input slightly if you only want to partially wrap the ends (i.e. in your example the first element is averaged with the last, but the last is not averaged with the first).
conv([A(end),A],[0.5 0.5],'valid')
ans =
1.5000 1.5000 1.5000 3.0000 5.0000 3.5000 1.0000
The generalized case here, for a moving average of size N, is:
conv(A([end-N+2:end, 1:end]),repmat(1/N,1,N),'valid')
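If you need this repeatedly, the indexing trick above can be wrapped in a small helper (circmovmean is just a suggested name, not a built-in):
function m = circmovmean(A, N)
%CIRCMOVMEAN Moving mean of vector A with circular (wrap-around) endpoints.
% Pads A with elements from the opposite end, then discards partial windows.
    pad = A([(end-floor(N/2)+1):end, 1:end, 1:(ceil(N/2)-1)]);
    m = movmean(pad, N, 'Endpoints', 'discard');
end
For the example A and N = 2 this reproduces the result shown above.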

Summing cumulative area under curves of overlapping triangles

I have two matrices for several triangles:
x =
2.0000 5.0000 10.0000
8.0000 10.0000 12.0000
12.0000 24.0000 26.0000
22.0000 25.0000 28.0000
23.0000 26.0000 25.0000
23.5000 27.0000 27.5000
20.0000 23.0000 27.0000
21.0000 24.0000 27.0000
24.0000 25.0000 27.0000
24.0000 26.0000 27.0000
24.0000 28.0000 29.0000
19.0000 22.0000 25.0000
18.0000 21.0000 23.0000
y =
0 1.0000 0
0 0.8000 0
0 0.6000 0
0 0.8000 0
0 0.8000 0
0 0.8000 0
0 1.0000 0
0 1.0000 0
0 1.0000 0
0 1.0000 0
0 1.0000 0
0 1.0000 0
0 1.0000 0
Each row is one triangle; the columns are the x and y positions of its three vertices.
So I plot all these triangles, and I need to sum the cumulative area under their curves.
I tried to use the area function, but I couldn't find how to sum their areas.
EDIT: I need to plot the sum of the areas as a red line in the same figure. So I don't want a number like 20 cm²... I would like something like this:
I suggest that you interpolate to create all your individual triangles and then add the results. First you will need to augment your x and y matrices with the beginning (the origin) and end points like so:
m = 30; %// This is your max point, maybe set it using max(x(:))?
X = [zeros(size(x,1),1), x, ones(size(x,1),1)*m];
Y = [zeros(size(y,1),1), y, zeros(size(y,1),1)];
then perform all the interpolations (I'll sum as I go):
xi = 0:0.1:m;
A = zeros(1,size(xi,2)); %// initialization
for row = 1:size(x,1)
A = A + interp1(X(row,:), Y(row,:), xi);
end
and finally plot:
plot(x',y','k') %// transpose so each row (triangle) is drawn as its own line
hold on
plot(xi,A,'r','linewidth',2)
using your example data this gives:
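As a side note, if you ever also want the total area as a single number, you could integrate the summed curve numerically, e.g.
total_area = trapz(xi, A); %// numeric area under the summed red curve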

Assigning Different Colors to a Plot / Scatter

So I have a vector called C1_Vector that has previously been filled with different shades of one RGB color ([0 0.5 1], blue), so there are many different color vectors within C1_Vector,
ex:
C1_Vector = ([0 0.5 1], [0 0.45 0.98], [0 0.49 1.01], etc.)
I want each one of my points in s1 to correspond to a different color. This is what I've been playing around with and struggling with. Can someone help me with the syntax?
plot(s1(1,:),s1(2,:),'.', 'color', C1_Vector );
where,
s1 =
3.0000 3.0000 3.0000 1.5000 1.5000 1.5000 0 -1.5000
1.5000 0 -1.5000 1.5000 0 -1.5000 0 3.0000
Using the scatter function makes this quite easy, as long as you provide as many color vectors as there are points to plot.
Basically, for each point to display, the function assigns it the corresponding row of the color matrix provided, which is M-by-3 where M is the number of points.
Therefore, for the demo I added colors to C1_Vector so that it contains as many rows as s1 has columns.
C1_Vector = [0 0.5 1; 0 0.45 0.98; 0 0.49 1.01;1 0 1; rand(1,3); 0 1 0; 0 1 1;rand(1,3)];
s1 = [3.0000 3.0000 3.0000 1.5000 1.5000 1.5000 0 -1.5000;
1.5000 0 -1.5000 1.5000 0 -1.5000 0 3.0000];
scatter(s1(1,:),s1(2,:),[],C1_Vector,'filled')
grid on
Output:
Is that what you meant?
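If you specifically need plot rather than scatter, keep in mind that a single line object has only one Color property, so one option (a sketch, not the only way) is to loop over the points:
hold on
for p = 1:size(s1,2)
    plot(s1(1,p), s1(2,p), '.', 'Color', C1_Vector(p,:), 'MarkerSize', 20)
end
hold off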

shape context matching technique

I'm trying to implement the shape context matching algorithm, which is:
Calculate the distance of a point to all other points.
Normalize the distance by mean distance.
Create logarithmic distance scale for normalized distances.
Create distance histogram: iterate over each scale, incrementing a bin when the distance falls within that scale; bins with higher numbers describe points closer together.
Calculate angle between all points.
Bin the angles, which is slightly different than binning the distances.
Matching - Cost Matrix: Calculate cost of matching each point to every
other point.
Matching - Additional Cost Terms:
Surrounding Texture Difference,
Tangent Angle Difference.
Matching: find the pairing of points that leads to the least total cost according to equation 3.
I'm now at step 3, and I don't know how to calculate the log distance scale.
Example:
If we have the coordinates of a shape as:
0.2000 0.5000
0.4000 0.5000
0.3000 0.4000
0.1500 0.3000
0.3000 0.2000
0.4500 0.3000
step(1): Euclidean distance from each point to all others:
0 0.2000 0.1414 0.2062 0.3162 0.3202
0.2000 0 0.1414 0.3202 0.3162 0.2062
0.1414 0.1414 0 0.1803 0.2000 0.1803
0.2062 0.3202 0.1803 0 0.1803 0.3000
0.3162 0.3162 0.2000 0.1803 0 0.1803
0.3202 0.2062 0.1803 0.3000 0.1803 0
step(2): Normalized distances between each point:
0 1.0623 0.7511 1.0949 1.6796 1.7004
1.0623 0 0.7511 1.7004 1.6796 1.0949
0.7511 0.7511 0 0.9575 1.0623 0.9575
1.0949 1.7004 0.9575 0 0.9575 1.5934
1.6796 1.6796 1.0623 0.9575 0 0.9575
1.7004 1.0949 0.9575 1.5934 0.9575 0
I don't know how to create the log distance scale, where the log distance scale for normalized distances (closer = more discriminative) is:
0.1250 0.2500 0.5000 1.0000 2.0000
Can anyone help me?
I have no background in shape context, but here's my thinking:
The teacher wants to measure the distance in a way that fits the needs of shape context. To achieve this:
(step 3) First build a scale (aka a ruler). The scale has 5 bins, namely 0~0.1250, 0.1250~0.2500, 0.2500~0.5000, 0.5000~1.0000, 1.0000~2.0000. The maximum range should cover the maximum value of the normalized distances in your data (1.7004 here). This scale increases logarithmically (I mean "exponentially"). I still have no idea about "closer = more discriminate".
(step 4) Throw each data value into the different bins, using the scale. The teacher uses an iteration method, which works; but essentially you just need to find some way to categorize the values.
I think you can also achieve it in this way:
>> x = rand(1,10)*2;ceil(log2(ceil(x/.125)))+1,x
ans =
2 1 5 5 5 3 5 4 1 4
x =
0.1517 0.1079 1.0616 1.5583 1.8680 0.2598 1.1376 0.9388 0.0238 0.6742
So essentially you are binning by the base-2 logarithm of the distance relative to the smallest scale, 0.125.
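For what it's worth, here is a small self-contained sketch (my own code, not from the question) that reproduces steps 1-4 on the example points, using the scale values as histogram bin edges; the implicit expansion in the distance line needs R2016b or newer:
P = [0.20 0.50; 0.40 0.50; 0.30 0.40; 0.15 0.30; 0.30 0.20; 0.45 0.30];  % example shape
D = sqrt((P(:,1) - P(:,1)').^2 + (P(:,2) - P(:,2)').^2);  % step 1: pairwise distances
D_norm = D / mean(D(:));      % step 2: normalization that reproduces the numbers above
edges  = [0 0.1250 0.2500 0.5000 1.0000 2.0000];          % step 3: log distance scale
counts = histcounts(D_norm(D_norm > 0), edges);           % step 4: distance histogram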

Subtle MatLab function fitting (discretization by value, not argument)

So I have quite a few (over 60000) data points
f(x_k) = k, where k = 0, 1, 2, ..., N.
The function is monotonically increasing and visually looks pretty smooth. I would love to be able to find a fit F(x) such that for every x_k it holds that k <= F(x_k) < k+1.
How should I approach this problem?
Data example
x 0 1 3 5 8 10 14 16 20 23 27 29 35 37 41
f(x) 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
(This looks a bit like a lookup table. Maybe an image processing application of some sort? I did some tools in my past life where an unrounding was needed.)
Is this a one-time problem, or will you be doing it often, so that you have a need for speed?
I'd throw it into SLM. Since I don't have the data, I cannot test it out or give you any results myself, but there is certainly no problem with an assured fit of the quality you wish, as long as you use a sufficient number of knots. You would need additional knots on the right-hand side, as it appears to approach a vertical asymptote, thus a singularity. Splines in general tend not to like singularities, as they are still polynomials at heart.
Better yet, swap the x and y axes to do the fit, thus fitting x = f(y). The left end point is not an asymptote, so there is no longer a singularity. Now all you need to do is constrain the result to be monotonically increasing and concave down (thus everywhere a negative second derivative). You will require far fewer knots for the inverse fit, but use enough knots that the fit is of adequate quality for your goals.
To use the inverse fit, simply interpolate in the reverse direction, something that SLMEVAL is capable of doing. I'll see how it does on the little bit of test data you have provided (with just the default number of knots):
x = [0 1 3 5 8 10 14 16 20 23 27 29 35 37 41];
y = [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14];
slm = slmengine(y,x,'plot','on','increasing','on');
So the fit seems reasonable, but I note that your data seems a bit bumpy. It may indeed be difficult to get a solution that is smooth, yet fits entirely within your requirements.
Let's see how well it did:
[x;y;slmeval(x,slm,-1)]'
ans =
0 0 0.0190
1.0000 1.0000 0.9656
3.0000 2.0000 2.0522
5.0000 3.0000 2.9239
8.0000 4.0000 4.1096
10.0000 5.0000 4.8419
14.0000 6.0000 6.1963
16.0000 7.0000 6.8331
20.0000 8.0000 8.0638
23.0000 9.0000 8.9699
27.0000 10.0000 10.1459
29.0000 11.0000 10.7088
35.0000 12.0000 12.2942
37.0000 13.0000 12.8285
41.0000 14.0000 NaN
It misses the last point completely, refusing to extrapolate. But the remainder are not far off. They do fail your requirement though, as it is not true that
k <= F(x_k) < k+1
Of course, I did not build the spline with such a requirement in the specs. Were I to try to solve this problem in general, I might write code that estimates the values on the curve directly, with no spline intermediary. Then I could easily enforce your constraints, finding the smoothest set of points that satisfies your error-bar requirements and monotonicity, and that also lies as close to the original data as possible. Of course, that would involve a large system solve, with 60k unknowns. I don't know how lsqlin would handle that large a problem, but there are other solvers that might be able to do so if time were an issue.
Again, with your test data as a small scale example:
x = [0 1 3 5 8 10 14 16 20 23 27 29 35 37 41]';
n = numel(x);
k = (0:(n-1))';
% The "unrounding" bound constraints
LB = k;
UB = k+1;
% The best fit possible
Afit = speye(n,n);
% And as smooth as possible
ind = 1:(n-2);
% could do this with diff of course
dx1 = x(ind+1) - x(ind);
dx2 = x(ind+2) - x(ind + 1);
% central second finite difference, for unequal spacing
den = dx1.*dx2.*(dx1 + dx2)/2;
Areg = spdiags([dx2./den,-(dx1 + dx2)./den,dx1./den],[0 1 2],n-2,n);
rhs = [k;zeros(n-2,1)];
% monotonicity constraints...
Amono = spdiags(repmat([1 -1],n-1,1),[0 1],n-1,n);
bmono = zeros(n-1,1);
% choose a value for r, that allows you to control the smoothness
% larger values of r will make the curve smoother, but the bounds
% will always be enforced. I played with it, and r = 5 seemed a
% reasonable compromise here.
r = 5;
yhat = lsqlin([Afit;r*Areg],rhs,Amono,bmono,[],[],LB,UB);
lsqlin is a bit unhappy, since it does not handle sparse problems of this form at this time. So it throws a warning that it is converting the problem to a full one.
Warning: Large-scale algorithm can handle bound constraints only;
using medium-scale algorithm instead.
> In lsqlin at 270
Warning: This problem formulation not yet available for sparse matrices.
Converting to full to solve.
> In lsqlin at 320
Optimization terminated.
Of course, this conversion will be TOTALLY unacceptable for a problem with 60k unknowns. DO NOT TRY IT ON 60k data points! Your computer will go into a deep freeze.
How did it do though?
disp([x,k,yhat,k+1])
0 0 0.4356 1.0000
1.0000 1.0000 1.0000 2.0000
3.0000 2.0000 2.0504 3.0000
5.0000 3.0000 3.0000 4.0000
8.0000 4.0000 4.2026 5.0000
10.0000 5.0000 5.0000 6.0000
14.0000 6.0000 6.2739 7.0000
16.0000 7.0000 7.0000 8.0000
20.0000 8.0000 8.0916 9.0000
23.0000 9.0000 9.0000 10.0000
27.0000 10.0000 10.2497 11.0000
29.0000 11.0000 11.0000 12.0000
35.0000 12.0000 12.2994 13.0000
37.0000 13.0000 13.0000 14.0000
41.0000 14.0000 14.0594 15.0000
It worked nicely, although it would be a hog of obscene proportions for large problems such as yours. Perhaps there is another optimizer (maybe in TOMLAB or some other package) that can handle a large-scale sparse linear problem, subject to linear and bound constraints. You also might wish to force the first point through zero, but that is trivial to do.
A final option, if say 1000 points is doable, is to recreate the curve in batches of 1010 at a time using the above scheme. lsqlin should be able to handle problems of that size without trouble. Leave some overlap at the ends; 5 points in each overlap region should be sufficient. Then average the results in the overlap regions.
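A rough sketch of that batching scheme, assuming the lsqlin setup above has been wrapped into a helper solve_chunk(x, k) that returns the fitted values for one batch (solve_chunk is a placeholder name, not existing code):
chunk = 1010;                  % points per batch
ovl   = 5;                     % points shared between consecutive batches
n     = numel(x);
acc   = zeros(n,1);            % running sum of fitted values at each point
cnt   = zeros(n,1);            % how many batches covered each point
start = 1;
while true
    stop = min(start + chunk - 1, n);
    idx  = (start:stop)';
    acc(idx) = acc(idx) + solve_chunk(x(idx), k(idx)); % constrained fit for this batch (placeholder helper)
    cnt(idx) = cnt(idx) + 1;
    if stop == n, break; end
    start = stop - ovl + 1;    % step forward, keeping ovl points of overlap
end
yhat = acc ./ cnt;             % average the estimates where batches overlap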