How to mention shadowing and fading in wireless channel model? - matlab

I am trying to model a wireless channel with the following parameters in MATLAB:
Multipath Fading: Exponential distribution with unit mean
Shadowing: Log-normal distribution with standard deviation 8 dB
Path-loss exponent: 2.4
Path-loss constant: 30
How should I include shadowing and fading, in dB, in the channel model?
I tried using MATLAB's log-normal and exponential distributions to generate random numbers with the given parameters, but I am not sure whether this is correct.
Can anyone help me?
(There is a similar question by Sjaffry, but it has no answer, and because I don't have enough reputation to comment on that topic, I am asking my own question.)
More Information:
I know that:
g_i,j = 10*log10(K) - 10*log10(B) - 10*log10(T) - 10*a*log10(L_i,j);
where g_i,j is the channel gain, B is the fading gain, T is the shadowing gain, L_i,j is the distance between nodes i and j, and K is the path-loss constant.
I wrote this code in MATLAB:
k = 30;
a = 2.4;
T = 8; % dB
Distance = Dist([i_x, i_y], [j_x,j_y]);
G_dB = 10*log10(k) - 10*log10(exprnd(1)) - 10*log10(random('logn', 0 , (10^(T/10)))) -10 * a * log10(Distance);
Should the channel gain values (for distances of about 300 m) be greater than one or less than one?
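For reference, one common convention (an assumption on my part, not something stated in the question or the cited formula) is to draw the log-normal shadowing directly in dB as a zero-mean Gaussian with an 8 dB standard deviation, and to draw the exponential (unit-mean) fading as a linear power gain that is then converted to dB. A minimal sketch:
% Sketch only: shadowing drawn directly in dB, exponential fading drawn
% as a linear power gain and converted to dB.
K = 30;                               % path-loss constant
a = 2.4;                              % path-loss exponent
sigma_dB = 8;                         % shadowing standard deviation, dB
d = 300;                              % example distance in metres
fading_dB    = 10*log10(exprnd(1));   % multipath fading gain, dB
shadowing_dB = sigma_dB*randn;        % log-normal shadowing, dB
G_dB = 10*log10(K) - 10*a*log10(d) + fading_dB + shadowing_dB;
G_linear = 10^(G_dB/10);              % linear channel gain
With these numbers the linear gain comes out far below one for distances around 300 m, which is what you would normally expect for a path-loss-dominated link.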

Related

Is nearest centroid classifier really inefficient?

I am currently reading "Introduction to Machine Learning" by Ethem Alpaydin and came across nearest centroid classifiers, so I tried to implement one. I think I have implemented the classifier correctly, but I am only getting 68% accuracy. So, is the nearest centroid classifier itself inefficient, or is there some error in my implementation (below)?
The data set contains 1372 data points, each having 4 features, and there are 2 output classes.
My MATLAB implementation:
DATA = load("-ascii", "data.txt");
#DATA is 1372x5 matrix with 762 data points of class 0 and 610 data points of class 1
#there are 4 features of each data point
X = DATA(:,1:4); #matrix to store all features
X0 = DATA(1:762,1:4); #matrix to store the features of class 0
X1 = DATA(763:1372,1:4); #matrix to store the features of class 1
X0 = X0(1:610,:); #to make sure both datasets have same size for prior probability to be equal
Y = DATA(:,5); # to store outputs
mean0 = sum(X0)/610; #mean of features of class 0
mean1 = sum(X1)/610; #mean of features of class 1
count = 0;
for i = 1:1372
pre = 0;
cost1 = X(i,:)*(mean0'); #calculates the dot product of dataset with mean of features of both classes
cost2 = X(i,:)*(mean1');
if (cost1<cost2)
pre = 1;
end
if pre == Y(i)
count = count+1; #counts the number of correctly predicted values
end
end
disp("accuracy"); #calculates the accuracy
disp((count/1372)*100);
There are at least a few things here:
You are using the dot product to measure similarity in the input space, which is almost never valid. The only reason to use the dot product would be the assumption that all your data points have the same norm, or that the norm does not matter (nearly never true). Try using Euclidean distance instead (see the sketch after this answer); even though it is very naive, it should be significantly better.
Is it an inefficient classifier? That depends on the definition of efficiency. It is an extremely simple and fast one, but in terms of predictive power it is very weak; in fact, it is weaker than Naive Bayes, which is already considered a "toy model".
There is something wrong with the code, too:
X0 = DATA(1:762,1:4); #matrix to store the features of class 0
X1 = DATA(763:1372,1:4); #matrix to store the features of class 1
X0 = X0(1:610,:); #to make sure both datasets have same size for prior probability to be equal
Once you subsample X0, you have 1220 training samples, yet later during "testing" you test on both the training set and the "missing" elements of X0; this does not really make sense from a probabilistic perspective. First of all, you should never measure accuracy on the training set (it overestimates the true accuracy). Second, subsampling your training data does not equalize the priors; in a method like this one it simply degrades the quality of your centroid estimates, nothing else. Sub- and over-sampling techniques equalize priors for models that actually model priors; your method does not (it is essentially a generative model with an assumed prior of 1/2), so nothing good can come of it.
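For illustration, here is a minimal sketch of the Euclidean-distance variant suggested in the first point, assuming the same DATA layout as in the question (features in columns 1-4, labels in column 5) and implicit expansion (Octave broadcasting / MATLAB R2016b or later). Note that it still evaluates on the training data, which, as said above, overestimates the true accuracy.
% Sketch: nearest centroid with Euclidean distance, same DATA layout as above.
X = DATA(:,1:4);
Y = DATA(:,5);
mean0 = mean(X(Y==0,:));        % centroid of class 0
mean1 = mean(X(Y==1,:));        % centroid of class 1
d0 = sum((X - mean0).^2, 2);    % squared distance of every point to centroid 0
d1 = sum((X - mean1).^2, 2);    % squared distance of every point to centroid 1
pred = double(d1 < d0);         % predict class 1 when its centroid is closer
disp(100 * mean(pred == Y));    % accuracy in percent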

Efficient size choice for SciPy Discrete Sine Transform

I noticed that SciPy has an implementation of the Discrete Sine Transform, and I was comparing it to the one that's in MATLAB. The MATLAB documentation notes that for best performance, the size of the inputs should be 2^p -1, presumably for a divide and conquer strategy. Is this also true for the SciPy implementation?
Although this question is old, I happen to have just run some tests and then stumbled upon this question.
The answer is yes. Internally, SciPy seems to convert the array to size M = 2*(N+1).
Ideally, M = 2^i for some integer i. Therefore, N should follow N = 2^i - 1. The following picture shows how timings scale with FFT size. Note that the orange line is much smoother, indicating no unexpected memory overhead.
Green line: N = 2^i
Blue line: N = 2^i + 1
Orange line: N = 2^i - 1
UPDATE
After digging some more into the documentation of scipy.fftpack, I found that the above answer is only partly true. According to the documentation, "SciPy's FFTPACK has efficient functions for radix {2, 3, 4, 5}". This means that instead of only handling arrays of size M = 2^i efficiently, it can handle any M = 2^i * 3^j * 5^k (4 is not prime). The optimum for scipy.fftpack.dst (or dct) is then M - 1. Finding those numbers can be a little awkward, but luckily there's a function for that, too!
Please note that the above graph is on a log-log scale, so speedups of 40 or so are not uncommon. Thus, choosing a fast size can make your calculations orders of magnitude faster! (I found this out the hard way.)

Combination of SVD perturbation

To apply the combination of SVD perturbation:
I = imread('image.jpg');
Ibw = single(im2double(I));
[U S V] = svd(Ibw);
% calculate derived image
P = U * power(S, i) * V'; % where i is between 1 and 2
%To compute the combined image of SVD perturbations:
J = (single(I) + (alpha*P))/(1+alpha); % where alpha is between 0 and 1
I applied this method to a specific face recognition model and I noticed that the accuracy increased considerably, so it is very effective. Interestingly, I used the values i=3/4 and alpha=0.25, following a paper published in a journal in 2012 in which the authors used i=3/4 and alpha=0.25. But I didn't pay attention to the requirement that i must be between 1 and 2 (I don't know whether the authors made a typo or really did use the value 3/4). When I changed i to a value greater than 1, the accuracy decreased. So can I use the value 3/4? If so, how can I justify my approach?
The paper that I read is entitled "Enhanced SVD based face recognition". On page 3, they use the value i=3/4.
(http://www.oalib.com/paper/2050079)
I kindly ask for your help and opinions. Any help will be much appreciated!
The idea behind having the value between one and two is to magnify the singular values so as to make them invariant to illumination changes.
Refer to this paper: "A New Face Recognition Method based on SVD Perturbation for Single Example Image per Person" by Daoqiang Zhang, Songcan Chen, and Zhi-Hua Zhou.
Note that when n equals to 1, the derived image P is equivalent to the original image I. If we choose n > 1, then the singular values satisfying s_i > 1 will be magnified. Thus the reconstructed image P emphasizes the contribution of the large singular values, while restraining that of the small ones. So by integrating P into I, we get a combined image J which keeps the main information of the original image and is expected to work better against minor changes of expression, illumination and occlusions.
My take:
When you scale the singular values in the exponent, you are basically introducing a non-linearity, so it's possible that, for a specific dataset, scaling the singular values down is beneficial. It's like adjusting the gamma correction factor on a monitor.
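For concreteness, here is a rough sketch of the perturbation step with the exponent from the paper. The grayscale conversion and keeping both terms on the im2double scale are my assumptions; the question's snippet mixes single(I) with a [0,1]-scaled P.
% Sketch only: grayscale input assumed, everything kept on the [0,1] scale.
I = imread('image.jpg');
if ndims(I) == 3
    I = rgb2gray(I);                 % svd needs a 2-D matrix
end
Ibw = im2double(I);
[U, S, V] = svd(Ibw);
n = 3/4;                             % exponent used in the cited paper
alpha = 0.25;
P = U * S.^n * V';                   % derived image with re-scaled singular values
J = (Ibw + alpha*P) / (1 + alpha);   % combined image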

How to generate random matlab vector with these constraints

I'm having trouble creating a random vector V in Matlab subject to the following set of constraints: (given parameters N,D, L, and theta)
The vector V must be N units long
The elements must have an average of theta
No 2 successive elements may differ by more than +/-10
D == sum(L*cosd(V-theta))
I'm having the most problems with the last one. Any ideas?
Edit
Solutions in other languages or equation form are equally acceptable. Matlab is just a convenient prototyping tool for me, but the final algorithm will be in java.
Edit
From the comments and initial answers I want to add some clarifications and initial thoughts.
I am not seeking a 'truly random' solution from any standard distribution. I want a pseudo randomly generated sequence of values that satisfy the constraints given a parameter set.
The system I'm trying to approximate is a chain of N links of link length L where the end of the chain is D away from the other end in the direction of theta.
My initial insight here is that theta can be removed from consideration until the end, since (2) in essence adds theta to every element of a 0 mean vector V (shifting the mean to theta) and (4) simply removes that mean again. So, if you can find a solution for theta=0, the problem is solved for all theta.
As requested, here is a reasonable range of parameters (not hard constraints, but typical values):
5<N<200
3<D<150
L==1
0 < theta < 360
I would start by creating a "valid" vector. That should be possible; say, compute it so that every entry has the same value.
Once you have that vector, I would apply some transformations to "shuffle" it. "Rejection sampling" is the keyword: if a shuffle would violate one of your rules, you just don't do it.
As transformations I can come up with:
switch two entries
modify the value of one entry and modify a second one to keep the 4th condition (theoretically you could just shuffle two until the condition is fulfilled, but the chance of that happening is quite low)
But maybe you can find some more.
Do this reasonably often and you get a "valid" random vector. Theoretically you should be able to reach all valid vectors; practically, you could construct several "start" vectors so it won't take that long.
Here's a way of doing it. It is clear that not all combinations of theta, N, L and D are valid. It is also clear that you're trying to simulate random objects that are quite complex. You will probably have a hard time showing anything useful with respect to these vectors.
The series you're trying to simulate seems similar to a Wiener process, so I started with that; you can start with anything that is random yet reasonable. I then use that as a starting point for an optimization that tries to satisfy conditions 2, 3 and 4. The closer your initial value is to a valid vector (one satisfying all your conditions), the better the convergence.
function series = generate_series(D, L, N, theta)
% Start from a Wiener-process-like random walk as the initial guess.
s(1) = theta;
for i = 2:N
    s(i) = s(i-1) + randn(1,1);
end
% Minimize the constraint violations with an unconstrained optimizer.
f = @(x) objective(x, D, L, N, theta);
q = optimset('Display','iter','TolFun',1e-10,'MaxFunEvals',Inf,'MaxIter',Inf);
[sf, val] = fminunc(f, s, q);
val
series = sf;

function value = objective(s, D, L, N, theta)
% Penalize deviations from the mean, distance and step-size constraints.
a = abs(mean(s) - theta);
b = abs(D - sum(L*cos(s - theta)));
c = 0;
for i = 2:N
    u = abs(s(i) - s(i-1));
    if u > 10
        c = c + u;
    end
end
value = a^2 + b^2 + c^2;
It seems like you're trying to simulate something very complex/strange (a path of a given curvature?); see the questions from the other commenters. You will still have to use your domain knowledge to connect D and L with a reasonable mu and sigma for the Wiener process used as the initialization.
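For what it's worth, a call with parameter values picked from the question's typical ranges (the specific numbers are only an illustration; fminunc requires the Optimization Toolbox) would look like:
series = generate_series(20, 1, 50, 45);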
So based on your new requirements, it seems like what you're actually looking for is an ordered list of random angles, with a maximum change of 10 degrees between successive angles (which I first convert to radians), such that the link length, the number of links, and the distance and direction from start to end are all specified?
Simulate an initial guess. It will not satisfy the D and theta constraints (i.e. the specified D and the specified theta):
angles = zeros(N, 1);
for link = 2:N
    angles(link) = angles(link - 1) + (rand() - 0.5)*(10*pi/180);
end
Use a genetic algorithm (or another optimization) to adjust the angles based on the following cost function:
dx = sum(L*cos(angles));
dy = sum(L*sin(angles));
D = sqrt(dx^2 + dy^2);
theta = atan2(dy, dx);
The cost is now just the difference between the vector given by my D and theta above and the vector given by the specified D and theta (i.e. the inputs).
You will still have to enforce the maximum-change-of-10-degrees rule; perhaps the cost function should simply become enormous if it is violated (a rough sketch of such a cost function follows below)? There may be a cleaner way to specify sequence constraints in optimization algorithms, but I don't know of one.
I feel like if you can find the right optimization with the right parameters this should be able to simulate your problem.
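A rough sketch of such a penalized cost function, under my own naming and conventions (angles in radians as in the snippets above, theta in degrees as in the question, and an arbitrary large penalty weight for the 10-degree rule):
function cost = chain_cost(angles, D, L, theta)
% Sketch only: penalized cost for the chain-of-links interpretation.
dx = sum(L*cos(angles));                    % end point of the chain, x
dy = sum(L*sin(angles));                    % end point of the chain, y
targetX = D*cosd(theta);                    % required end point, x
targetY = D*sind(theta);                    % required end point, y
cost = (dx - targetX)^2 + (dy - targetY)^2;
violations = abs(diff(angles)) > 10*pi/180; % successive changes over 10 degrees
cost = cost + 1e6*sum(violations);          % make violations very expensive
end
This could then be handed to ga or fminsearch as the objective, e.g. via @(a) chain_cost(a, D, L, theta).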
You don't give us a lot of detail to work with, so I'll assume the following:
random numbers are to be drawn from [-127+theta +127-theta]
all random numbers will be drawn from a uniform distribution
all random numbers will be of type int8
Then, for the first 3 requirements, you can use this:
N = 1e4;
theta = 40;
diffVal = 10;
g = @() randi([intmin('int8')+theta intmax('int8')-theta], 'int8') + theta;
V = [g(); zeros(N-1, 1, 'int8')];
for ii = 2:N
    V(ii) = g();
    while abs(V(ii)-V(ii-1)) >= diffVal
        V(ii) = g();
    end
end
Inline the anonymous function for more speed.
Now, the last requirement,
D == sum(L*cos(V-theta))
is a bit of a strange one... cos(V-theta) is a specific way to re-scale the data to the [-1, +1] interval, which the multiplication by L then scales to [-L, +L]. At first sight, you'd expect the sum to average out to 0.
Indeed, the expected value of cos(x) when x is a random variable drawn uniformly from [0, 2*pi] is 0, while the expected value of |cos(x)| is 2/pi (see here for example). Ignoring for the moment the fact that our limits are different from [0, 2*pi], sum(L*cos(V-theta)) will therefore fluctuate around 0, with a spread that grows with N.
How you can force this to equal some other, specific constant D is beyond me... can you perhaps elaborate on that a bit more?
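A quick numerical check of those two expectations (my own sanity-check snippet, not part of the original answer):
x = 2*pi*rand(1e6, 1);
mean(cos(x))        % close to 0
mean(abs(cos(x)))   % close to 2/pi, i.e. about 0.6366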

Generate a random number with max, min and mean (average) in Matlab

I need to generate random numbers with the following properties:
Min must be 1
Max must be 9
Average (mean) is 6.00 (or something else)
Random numbers must be positive integers only
I have tried several syntaxes but nothing works, for example
r=1+8.*rand(100,1);
This gives me random numbers between 1 and 9, but they are not integers (for example 5.607 or 4.391), and each time I calculate the mean it varies.
You may be able to define a function that satisfies your requirements based on Matlab's randi function. But be careful, it is easy to define functions of random number generators which do not produce random numbers.
Another approach might suit -- create a probability distribution to meet your requirements. In this case you need a vector of 9 floating-point numbers which sum to 1 and which, individually, express the probability of the i-th integer occurring. For example, a distribution might be described by the following vector:
[0.1 0.1 0.1 0.1 0.2 0.1 0.1 0.1 0.1]
These split the interval [0,1] into 9 parts. Then take your favourite RNG which generates floating-point numbers in the range [0,1) and generate a number; suppose it is 0.45. Read along the interval from 0 to 1 and you find that this falls in the 5th sub-interval, so return the integer 5 (a sketch of this lookup follows below).
Obviously, I've been too lazy to give you a vector whose distribution has a mean of 6, but that shouldn't be too hard for you to figure out.
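A minimal sketch of that interval lookup (the probability vector is the example one above, which is not tuned to give a mean of 6):
% Sketch of the interval-lookup sampler described above.
p = [0.1 0.1 0.1 0.1 0.2 0.1 0.1 0.1 0.1];   % probabilities of the integers 1..9
edges = cumsum(p);                            % right edges of the 9 sub-intervals
edges(end) = 1;                               % guard against round-off in the sum
u = rand(100, 1);                             % uniform draws in (0,1)
r = arrayfun(@(y) find(y < edges, 1), u);     % the integer (1..9) for each draw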
Here is an algorithm with a loop that reaches a required mean xmean (to a required precision xeps) by regenerating a random number from one half of the range or the other, depending on the mean at the current iteration. In my tests it reached the required mean pretty quickly.
n = 100;
xmean = 6;
xmin = 1;
xmax = 9;
xeps = 0.01;
x = randi([xmin xmax],n,1);
while abs(xmean - mean(x)) >= xeps
    if xmean > mean(x)
        x(find(x < xmean,1)) = randi([xmean xmax]);
    elseif xmean < mean(x)
        x(find(x > xmean,1)) = randi([xmin xmean]);
    end
end
x is the output you need.
You can use randi to get random integers.
You could use floor to truncate your random numbers to integer values only:
r = 1 + floor(9 * rand(100,1));
Obtaining a specified mean is a little trickier; it depends what kind of distribution you're after.
If the distribution is not important and all you're interested in is the mean, then there's a particularly simple function that does that:
function x=myrand
x=6;
end
Before you can design your random number generator you need to specify the distribution it should draw from. You've only partially done that: i.e., you specified that it draws from the integers in [1,9] and that it has a mean you want to be able to specify. That still leaves an infinity of distributions to choose among. What other properties do you want your distribution to have?
Edit following comment: The mean of any finite sample from a probability distribution - the so-called sample mean - will only approximate the distribution's mean. There is no way around that.
That having been said, the simplest (in the maximum entropy sense) distribution over the integers in the domain [1,9] is the exponential distribution: i.e.,
p = @(n,x)(exp(-x*n)./sum(exp(-x*(1:9))));
The parameter x determines the distribution mean. The corresponding cumulative distribution is
c = cumsum(p(1:9,x));
To draw from the distribution p you can draw a random number from [0,1] and find what sub-interval of c it falls in: i.e.,
samp = arrayfun(@(y)find(y<c,1),rand(n,m));
will return an [n,m] array of integers drawn from p.
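As a follow-up sketch (my own addition, not part of the answer): the parameter x can be solved for numerically so that the distribution's mean hits the requested value of 6, for example with fzero (a mean above the midpoint of 5 requires a negative x):
p = @(n,x) exp(-x*n) ./ sum(exp(-x*(1:9)));          % the distribution from above
meanfun = @(x) sum((1:9) .* p(1:9, x));              % its mean as a function of x
x = fzero(@(x) meanfun(x) - 6, [-5 5]);              % solve for a mean of 6
c = cumsum(p(1:9, x));
samp = arrayfun(@(y) find(y < c, 1), rand(100, 1));  % 100 draws with mean close to 6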