I'm having trouble creating a random vector V in Matlab subject to the following set of constraints (given parameters N, D, L, and theta):
1. The vector V must be N units long
2. The elements must have an average of theta
3. No two successive elements may differ by more than +/-10
4. D == sum(L*cosd(V-theta))
I'm having the most problems with the last one. Any ideas?
Edit
Solutions in other languages or equation form are equally acceptable. Matlab is just a convenient prototyping tool for me, but the final algorithm will be in java.
Edit
From the comments and initial answers I want to add some clarifications and initial thoughts.
I am not seeking a 'truly random' solution from any standard distribution. I want a pseudo randomly generated sequence of values that satisfy the constraints given a parameter set.
The system I'm trying to approximate is a chain of N links, each of length L, where one end of the chain is a distance D away from the other end in the direction theta.
My initial insight here is that theta can be removed from consideration until the end, since (2) in essence adds theta to every element of a 0 mean vector V (shifting the mean to theta) and (4) simply removes that mean again. So, if you can find a solution for theta=0, the problem is solved for all theta.
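In code form the shift is trivial (a minimal sketch; V0 stands for a hypothetical solution of the theta=0 problem):
% V0 satisfies the theta = 0 version: mean(V0) == 0, all(abs(diff(V0)) <= 10),
% and D == sum(L*cosd(V0))
V = V0 + theta;   % shift every element by theta
% now mean(V) == theta, and cosd(V - theta) == cosd(V0), so the D constraint still holds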
As requested, here is a reasonable range of parameters (not hard constraints, but typical values):
5<N<200
3<D<150
L==1
0 < theta < 360
I would start by creating a "valid" vector. That should be possible - say calculate it for every entry to have the same value.
Once you have that vector, I would apply some transformations to "shuffle" it. "Rejection sampling" is the keyword - if a shuffle would violate one of your rules, you just don't apply it.
As transformations I come up with:
switch two entries
modify the value of one entry and adjust a second one so that the 4th condition still holds (theoretically you could just re-randomize two entries until the condition is fulfilled - but the chance of that happening is quite low)
But maybe you can find some more.
Do this reasonably often and you get a "valid" random vector. Theoretically you should be able to reach all valid vectors this way; in practice you could construct several different "start" vectors so it doesn't take that long.
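A minimal Matlab sketch of this idea (the starting vector V, the move size, and the tolerance on the D constraint are my own assumptions - exact preservation of D under random moves is impractical, so it is only enforced to a tolerance):
% Assumes a valid starting vector V already exists: mean(V) == theta,
% all(abs(diff(V)) <= 10), and sum(L*cosd(V - theta)) close to D.
tolD   = 1e-3;    % how tightly the D constraint must keep holding (assumed)
nMoves = 1e5;     % number of attempted shuffles (assumed)
for k = 1:nMoves
    Vnew = V;
    i = randi(N); j = randi(N);   % pick two entries at random
    delta = 2*randn();            % small perturbation (assumed scale)
    Vnew(i) = Vnew(i) + delta;    % equal-and-opposite change, so the
    Vnew(j) = Vnew(j) - delta;    % mean is preserved exactly
    okStep = all(abs(diff(Vnew)) <= 10);
    okD    = abs(D - sum(L*cosd(Vnew - theta))) <= tolD;
    if okStep && okD              % rejection sampling: keep only valid moves
        V = Vnew;
    end
end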
Here's a way of doing it. It is clear that not all combinations of theta, N, L and D are valid. It is also clear that you're trying to simulate random objects that are quite complex. You will probably have a hard time showing anything useful with respect to these vectors.
The series you're trying to simulate seems similar to the Wiener process. So I started with that, you can start with anything that is random yet reasonable. I then use that as a starting point for an optimization that tries to satisfy 2,3 and 4. The closer your initial value to a valid vector (satisfying all your conditions) the better the convergence.
function series = generate_series(D, L, N, theta)
% random-walk (Wiener-like) initial guess
s(1) = theta;
for i = 2:N
    s(i) = s(i-1) + randn(1,1);
end
% minimize the constraint violations starting from that guess
f = @(x) objective(x, D, L, N, theta);
q = optimset('Display','iter','TolFun',1e-10,'MaxFunEvals',Inf,'MaxIter',Inf);
[sf, val] = fminunc(f, s, q);
val
series = sf;

function value = objective(s, D, L, N, theta)
a = abs(mean(s) - theta);                % mean constraint
b = abs(D - sum(L*cosd(s - theta)));     % distance constraint (cosd, since the question works in degrees)
c = 0;
for i = 2:N
    u = abs(s(i) - s(i-1));
    if u > 10                            % step-size constraint
        c = c + u;
    end
end
value = a^2 + b^2 + c^2;
It seems like you're trying to simulate something very complex/strange (a path of a given curvature?), see the questions by other commenters. Still, you will have to use your domain knowledge to connect D and L with a reasonable mu and sigma for the Wiener process to act as an initialization.
So based on your new requirements, it seems like what you're actually looking for is an ordered list of random angles, with a maximum change in angle of 10 degrees (which I first convert to radians), such that the start-to-end distance and direction, the link length, and the number of links are all specified?
Simulate an initial guess. It will not satisfy the D and theta constraints (i.e. the specified D and the specified theta):
angles = zeros(N, 1);
for link = 2:N
    angles(link) = angles(link - 1) + (rand() - 0.5)*2*(10*pi/180);   % step of at most +/-10 degrees
end
Use genetic algorithm (or another optimization) to adjust the angles based on the following cost function:
dx = sum(L*cos(angles));
dy = sum(L*sin(angles));
D = sqrt(dx^2 + dy^2);
theta = atan2(dy, dx);
The cost is now just the difference between the vector given by my D and theta above and the vector given by the specified D and theta (i.e. the inputs).
You will still have to enforce the max-change-of-10-degrees rule; perhaps violations should just make the cost function enormous? There may be a cleaner way to specify sequence constraints in optimization algorithms (I don't know of one); see the sketch below for one way to fold the rule into the cost.
I feel like if you can find the right optimization with the right parameters this should be able to simulate your problem.
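A minimal sketch of such a penalized cost function (the function name chain_cost, the penalty weight, and the use of fminsearch instead of a genetic algorithm are my own assumptions; theta is taken in radians here):
function cost = chain_cost(angles, D, L, theta)
% end point actually reached by this set of absolute link angles
dx = sum(L*cos(angles));
dy = sum(L*sin(angles));
% squared distance to the target end point implied by the specified D and theta
cost = (dx - D*cos(theta))^2 + (dy - D*sin(theta))^2;
% penalty for successive angle changes larger than 10 degrees (assumed weight)
excess = max(abs(diff(angles)) - 10*pi/180, 0);
cost = cost + 1e6*sum(excess.^2);
It could then be minimized starting from the initial guess, e.g. best = fminsearch(@(a) chain_cost(a, D, L, theta), angles);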
You don't give us a lot of detail to work with, so I'll assume the following:
random numbers are to be drawn from [-127+theta +127-theta]
all random numbers will be drawn from a uniform distribution
all random numbers will be of type int8
Then, for the first 3 requirements, you can use this:
N = 1e4;
theta = 40;
diffVal = 10;
g = @() randi([intmin('int8')+theta intmax('int8')-theta], 'int8') + theta;
V = [g(); zeros(N-1,1, 'int8')];
for ii = 2:N
    V(ii) = g();
    while abs(V(ii)-V(ii-1)) >= diffVal
        V(ii) = g();
    end
end
Inline the anonymous function for more speed.
Now, the last requirement,
D == sum(L*cos(V-theta))
is a bit of a strange one...cos(V-theta) is a specific way to re-scale the data to the [-1 +1] interval, which the multiplication with L will then scale to [-L +L]. On first sight, you'd expect the sum to average out to 0.
However, the expected value of |cos(x)| when x is a random variable from a uniform distribution in [0 2*pi] is 2/pi (see here for example), while cos(x) itself averages to 0 over a full period. Ignoring for the moment the fact that our limits are different from [0 2*pi], the expected value of sum(L*cos(V-theta)) is therefore pinned to a constant determined entirely by the distribution of V.
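A quick Monte Carlo check of those two expectations (purely illustrative):
x = 2*pi*rand(1, 1e6);    % x uniform on [0, 2*pi]
mean(cos(x))              % close to 0
mean(abs(cos(x)))         % close to 2/pi (about 0.6366)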
How you can force this to equal some other constant D is beyond me...can you perhaps elaborate on that a bit more?
Given a matrix, it's easy to compute the value and index of the min value:
A = rand(10);
[value, index] = min(A(:));
However I would also like to recover the second min value (idem for max).
I can of course take any of this two approaches:
Converting A to a vector and sorting it.
PROS: I can then recover the second, third... n minimum value
CONS: If A is large, sorting is expensive
Once the min location of A is located, I can replace this value by a large one (eg: Inf) and then run min again.
PROS: Cheaper than sort
CONS: I must modify my matrix (and save the modified value in an aux variable). Also re-running min is costly on a large matrix.
I'm wondering if there is a better solution:
When computing min, the algorithm has to keep track of the minimum value found so far, updating it whenever a smaller value is encountered.
If instead we keep track of the n smallest values found so far, we can recover the n minimum values in a single pass.
I can implement this, but I'm wondering if it's the best approach or if it's already implemented.
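A single-pass sketch of that idea (the buffer name best and the choice n = 2 are mine; newer Matlab releases also provide mink for exactly this):
n = 2;                     % how many smallest values to keep
best = inf(1, n);          % sorted buffer of the n smallest values seen so far
for k = 1:numel(A)
    if A(k) < best(n)      % only insert if it beats the largest kept value
        best = sort([best(1:n-1) A(k)]);
    end
end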
I don't know in which cases it would be less expensive than sorting, but an easy, if not especially fast, way would be to use the following code. I may be wrong, but I don't think you can get faster with built-in functions if you just want the first and the second min.
A = rand(10);
[firstMin, firstMinIndex] = min(A(:));
secondMin = min(A(A~=firstMin));
secondMinIndex = find(A==secondMin); % slow, but use only if you need the index
Here, you go through the matrix two more times: once for the boolean operation and once for the second min.
After some testing on 2000x2000 and 4000x4000 random matrices, it seems that this code snippet is around 3.5 times faster than the sort function applied to the same matrix.
If you really need more efficiency, you'd have to write your own mex routine, with which you can theoretically get the two values in n + log(n) - 2 comparisons, as explained in the link provided by @luismendotomas.
Hope this helps!
In a single pass:
a = [53 53 49 49 97 75 4 22 4 37];
first = Inf;
second = Inf;
for i = 1:1:numel(a)
if (a(i) < first)
second = first;
first = a(i);
elseif (a(i) < second && a(i) ~= first)
second = a(i);
end
end
fprintf('First smallest %d\n', first);
fprintf('Second smallest %d\n', second);
You can remove the a(i) ~= first condition if you'd rather have 4, 4 as output instead of 4, 22.
Also, see this SO question
As already mentioned, I suppose the best (read: "most efficient") method is to implement the approach from @luismendotomas' link.
However, if you want to avoid doing too much programming yourself, then you could apply some k-nearest neighbours algorithm, given you have a lower bound on your data, e.g. if all your data points are positive, you can find the 2 nearest neighbours to 0. Though I am not sure whether this is faster than your initial suggestions or not.
For one k-nearest neighbour algorithm see e.g. this
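If the Statistics Toolbox is available, a minimal sketch of that idea (it assumes all values in A are positive, so the 2 nearest neighbours of 0 are the 2 smallest values; variable names are mine):
% indices of the 2 smallest (positive) values in A, found as the
% 2 nearest neighbours of the query point 0
idx  = knnsearch(A(:), 0, 'K', 2);
vals = A(idx);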
beesleep has already pointed out that method 2 (computing the minimum twice) is more efficient than method 1 (sorting). However, the implementation provided in that answer to compute the index of the second minimum via find is, as mentioned, very inefficient.
In fact, to get the index of the second minimum, it is ca. 10x faster to set the first minimum value to inf (as suggested in the question) and then get the index of the second minimum from the min function (as opposed to using find):
[firstMin, firstMinIndex] = min(A(:));
A(firstMinIndex) = inf;
[secondMin, secondMinIndex] = min(A(:));
Here is the code which I used to compare this implementation to the one suggested by beesleep:
for i = 1:10
    A = rand(10000);

    tic
    [firstMin, firstMinIndex] = min(A(:));
    secondMin = min(A(A~=firstMin));
    secondMinIndex = find(A==secondMin); % slow, but use only if you need the index
    t1(i) = toc;

    tic
    [firstMin, firstMinIndex] = min(A(:));
    A(firstMinIndex) = inf;
    [secondMin, secondMinIndex] = min(A(:));
    t2(i) = toc;
end
disp(mean(t1) / mean(t2))
disp(mean(t1) / mean(t2))
I am attempting to create a model whereby there is a line - represented as a 1D matrix populated with 1's - and points on the line are generated at random. Every time a point is chosen (A), it creates a 'zone of exclusion' (based on an exponential function) such that choosing another point nearby has a much lower probability of occurring.
Two main questions:
(1) What is the best way to generate an exponential such that I can multiply the numbers surrounding the chosen point to create the zone of exclusion? I know of exppdf, however I'm not sure if this allows me to create an exponential which terminates at 1, as I need the zone of exclusion to end and the probability to return to 1 eventually.
(2) How can I modify matrix values plus/minus a specific index (including that index)? I got as far as:
x(1:100) = 1; % Creates a 1D-matrix populated with 1's
p = randi([1 100],1,1);
x(p) =
But I am not sure how to go about using the randomly generated number to alter values in the matrix.
Any help would be much appreciated,
Anna
Don't worry about exppdf; pick the width you want (how far away from the selected point should the probability return to 1?) and define some simple function that makes a small vector with zero in the middle and 1 at the edges. Here I'm just modifying a section of length 11 centred on p and doing nothing to the rest of x:
x(1:100)=1;
p = randi([1 100],1,1);
% following just scaled
somedist = (abs(-5:5).^2)/25;
% note - this will fail if p is at edges of data, but see below
x(p-5:p+5)=x(p-5:p+5).*somedist;
Then, instead of using randi to pick points you can use datasample which allows for giving weights. In this case your "data" is just the numbers 1:100. However, to make edges easier I'd suggest initialising with a "weight" vector which has zero padding - these sections of x will not be sampled from but stop you from having to make edge checks.
x = zeros([1 110]);
x(6:105)=1;
somedist = (abs(-5:5).^2)/25;
nsamples = 10;
for n = 1:nsamples
p = datasample(1:110,1,'Weights',x);
% if required store chosen p somewhere
x(p-5:p+5)=x(p-5:p+5).*somedist;
end
For an exponential exclusion zone you could do something like:
somedist = exp(abs(-5:5))/exp(5)-exp(0)/exp(5);
It doesn't quite return to 1 but fairly close. Here's the central region of x (ignoring the padding) after two separate runs:
Let's say I have two vectors A and B with different lengths (length(A) is not equal to length(B)), and the values in vector A are not the same as in vector B. I want to compare each value of B with the values of A ("compare" meaning: check whether B(i) is almost the same as some value in A(1:end), for example B(i)-Tolerance < A(j) < B(i)+Tolerance).
How can I do this without using a for loop, since the data is huge?
I know ismember, intersect, repmat and find, but none of those functions can really help me here.
You may try a solution along these lines:
tol = 0.1;
N = 1000000;
a = randn(1, N)*1000;      % create a random reference vector
b = a + tol*rand(1, N);    % b is at most "tol" away from a
a_bin = floor(a/tol);
b_bin = floor(b/tol);
result = ismember(b_bin, a_bin) | ...
ismember(b_bin, a_bin-1) | ...
ismember(b_bin, a_bin+1);
find(result==0) % should be empty matrix.
The idea is to discretize the a and b variables to bins of size tol. Then, you ask whether b is found in the same bin as any element from a, or in the bin to the left of it, or in the bin to the right of it.
Advantages: I believe ismember is clever inside, first sorting the elements of a and then performing a sublinear (log(N)) search per element of b. This is unlike approaches which explicitly compute the difference of each element of b with every element of a, whose complexity is linear in the number of elements of a.
Comparison: for N=100000 this runs in 0.04 s on my machine, compared to 20 s using linear search (timed using Alan's nice and concise tf = arrayfun(@(bi) any(abs(a - bi) < tol), b); solution).
Disadvantages: the effective tolerance ends up being anywhere between tol and 2*tol (values up to just under 2*tol apart can land in adjacent bins). Whether you can live with that depends on your task (if the only concern is floating-point comparison, you can).
Note: whether this is a viable approach depends on the ranges of a and b, and value of tol. If a and b can be very big and tol is very small, the a_bin and b_bin will not be able to resolve individual bins (then you would have to work with integral types, again checking carefully that their ranges suffice). The solution with loops is a safer one, but if you really need speed, you can invest into optimizing the presented idea. Another option, of course, would be to write a mex extension.
It sounds like what you are trying to do is have an ismember function for use on real valued data.
That is, check for each value B(i) in your vector B whether B(i) is within the tolerance threshold T of at least one value in your vector A
This works out something like the following:
tf = false(1, length(b)); %//the result vector, true if that element of b is in a
t = 0.01; %// the tolerance threshold
for i = 1:length(b)
%// is the absolute difference between the
%//element of a and b less that the threshold?
matches = abs(a - b(i)) < t;
%// if b(i) matches any of the elements of a
tf(i) = any(matches);
end
Or, in short:
t = 0.01;
tf = arrayfun(@(bi) any(abs(a - bi) < t), b);
Regarding avoiding the for loop: while this might benefit from vectorization, you may also want to consider looking at parallelisation if your data is that huge. In that case having a for loop as in my first example can be handy since you can easily do a basic version of parallel processing by changing the for to parfor.
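For example, with the Parallel Computing Toolbox the loop version becomes (only the loop keyword changes; purely illustrative):
t = 0.01;
tf = false(1, length(b));
parfor i = 1:length(b)
    tf(i) = any(abs(a - b(i)) < t);
end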
Here is a fully vectorized solution. Note that I would actually recommend the solution given by @Alan, as mine is not likely to work for big datasets.
[X, Y] = meshgrid(A, B);
M = abs(X - Y) < tolerance;
Now the logical index of elements of A that are within the tolerance can be obtained with any(M), and the index for B is found with any(M,2).
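For concreteness (the variable names inA and inB are mine):
inA = any(M, 1);   % logical index into A: true where some element of B is within tolerance
inB = any(M, 2);   % logical index into B: true where some element of A is within tolerance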
bsxfun to the rescue
>> M = abs( bsxfun(@minus, A, B') ); %// difference
>> M < tolerance
Another way to do what you want is with a logical expression.
Since A and B are vectors of different sizes you can't simply subtract and look for values that are smaller than the tolerance, but you can do the following:
Lmat = sparse((abs(repmat(A,[numel(B) 1])-repmat(B',[1 numel(A)])))<tolerance);
and you will get a sparse logical matrix with as many ones in it as equal elements (within tolerance). You could then count how many of those elements you have by writing:
Nequal = sum(sum(Lmat));
You could also get the indexes of the corresponding elements by writing:
[r,c] = find(Lmat);
then the following will hold, to within the tolerance, for all j in 1:numel(r):
B(r(j)) == A(c(j))
Finally, you should note that this way you get multiple counts in case there are duplicate entries in A or in B. It may be advisable to use the unique function first. For example:
A_new = unique(A);
I have a problem. I have a matrix A with integer values between 0 and 5, for example:
x = randi(5,10,10)
Now I want to apply a 3x3 filter which gives me the most common value in each neighborhood.
I have tried 2 solutions:
fun = @(z) mode(z(:));
y1 = nlfilter(x,[3 3],fun);
which takes very long...
and
y2 = colfilt(x,[3 3],'sliding',@mode);
which also takes long.
I have some really big matrices and both solutions take a long time.
Is there any faster way?
+1 to @Floris for the excellent suggestion to use hist. It's very fast. You can do a bit better though: hist is based on histc, which can be used directly instead. histc is a compiled function, i.e., not written in Matlab, which is why the solution is much faster.
Here's a small function that attempts to generalize what @Floris did (that solution also returns a vector rather than the desired matrix) and achieve what you're doing with nlfilter and colfilt. It doesn't require the input to have particular dimensions and uses im2col to efficiently rearrange the data. In fact, the first three lines and the call to im2col are virtually identical to what colfilt does in your case.
function a = intmodefilt(a,nhood)
% zero-pad the input so the sliding window stays inside the padded array
[ma,na] = size(a);
aa(ma+nhood(1)-1,na+nhood(2)-1) = 0;
aa(floor((nhood(1)-1)/2)+(1:ma),floor((nhood(2)-1)/2)+(1:na)) = a;
% histogram each sliding window (one column of im2col) and take the bin with the largest count
[~,a(:)] = max(histc(im2col(aa,nhood,'sliding'),min(a(:))-1:max(a(:))));
a = a-1;
Usage:
x = randi(5,10,10);
y3 = intmodefilt(x,[3 3]);
For large arrays, this is over 75 times faster than colfilt on my machine. Replacing hist with histc is responsible for a factor of two speedup. There is of course no input checking so the function assumes that a is all integers, etc.
Lastly, note that randi(IMAX,N,N) returns values in the range 1:IMAX, not 0:IMAX as you seem to state.
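If you do want values in the range 0:5, one way (illustrative) is:
x = randi([0 5], 10, 10);   % values drawn from 0:5 rather than 1:5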
One suggestion would be to reshape your array so each 3x3 block becomes a column vector. If your initial array dimensions are divisible by 3, this is simple; if they aren't, you need to work a little harder. And you need to repeat this nine times, starting at different offsets into the matrix - I will leave that as an exercise.
Here is some code that shows the basic idea (using only functions available in FreeMat - I don't have Matlab on my machine at home...):
N = 100;
A = randi(0,5*ones(3*N,3*N));   % FreeMat syntax; in Matlab this would be randi([0 5], 3*N, 3*N)
B = reshape(permute(reshape(A,[3 N 3 N]),[1 3 2 4]), [ 9 N*N]);
hh = hist(B, 0:5); % histogram of each 3x3 block: bin with largest value is the mode
[mm mi] = max(hh); % mi will contain bin with largest value
figure; hist(B(:),0:5); title 'histogram of B'; % flat, as expected
figure; hist(mi-1, 0:5); title 'histogram of mi' % not flat?...
Here are the plots:
The strange thing, when you run this code, is that the distribution of mi is not flat, but skewed towards smaller values. When you inspect the histograms, you will see that is because you will frequently have more than one bin with the "max" value in it. In that case, you get the first bin with the max number. This is obviously going to skew your results badly; something to think about. A much better filter might be a median filter - the one that has equal numbers of neighboring pixels above and below. That has a unique solution (while mode can have up to four values, for nine pixels - namely, four bins with two values each).
Something to think about.
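If the median-filter route appeals, a one-liner with the Image Processing Toolbox (illustrative, not part of the original answer):
y = medfilt2(x, [3 3]);   % 3x3 median filter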
Can't show you a mex example today (wrong computer); but there are ample good examples on the Mathworks website (and all over the web) that are quite easy to follow. See for example http://www.shawnlankton.com/2008/03/getting-started-with-mex-a-short-tutorial/
Hi, can anyone show how to use modem.oqpskmod for computing the BER? Thanks!
h = modem.oqpskmod
y = modulate(h, values);
g = modem.oqpskdemod(h)
z = demodulate(g, y)
Let's assume that I have an array called values which contains only 1s and 0s.
My question is: how would I calculate the BER? That is, of course, assuming my code above is correct.
Based on this Wikipedia page, you simply have to compute the number of incorrect bits and divide by the total number of transferred bits to get the bit error rate (BER). If values is the unmodulated input signal and z is the output signal after modulation and demodulation, you can compute it like this:
BER = sum(logical(values(:)-z(:)))/numel(values);
EDIT: I modified the above code just in case you run into two situations:
If z has values other than 0 and 1.
If z is a different size than values (i.e. row vector versus column vector).
I don't know if you are ever likely to come across these two situations, but better safe than sorry. ;)