Matlab: Comparing two vectors with different lengths and different values?

Let's say I have two vectors A and B with different lengths (length(A) is not equal to length(B)), and the values in vector A are not the same as in vector B. I want to compare each value of B with the values of A, where "compare" means checking whether B(i) is almost equal to some value in A(1:end), for example B(i)-Tolerance < A(j) < B(i)+Tolerance.
How can I do this without using a for loop, since the data is huge?
I know ismember, intersect, repmat and find, but none of those functions can really help me.

You may try a solution along these lines:
tol = 0.1;
N = 1000000;
a = randn(1, N)*1000; % create random data a
b = a + tol*rand(1, N); % b is at most "tol" away from a
a_bin = floor(a/tol);
b_bin = floor(b/tol);
result = ismember(b_bin, a_bin) | ...
         ismember(b_bin, a_bin-1) | ...
         ismember(b_bin, a_bin+1);
find(result==0) % should be an empty matrix
The idea is to discretize the a and b variables to bins of size tol. Then, you ask whether b is found in the same bin as any element from a, or in the bin to the left of it, or in the bin to the right of it.
Advantages: I believe ismember is clever inside, first sorting the elements of a and then performing a sublinear (log(N)) search per element of b. This is unlike approaches that explicitly construct the differences of each element in b with the elements of a, whose complexity is linear in the number of elements in a.
Comparison: for N=100000 this runs 0.04s on my machine, compared to 20s using linear search (timed using Alan's nice and concise tf = arrayfun(@(bi) any(abs(a - bi) < tol), b); solution).
Disadvantages: the effective tolerance ends up being anywhere between tol and 2*tol, since elements in adjacent bins can be almost 2*tol apart. Whether you can live with that depends on your task (if the only concern is floating-point comparison, you can).
Note: whether this is a viable approach depends on the ranges of a and b and the value of tol. If a and b can be very big and tol is very small, a_bin and b_bin will not be able to resolve individual bins (then you would have to work with integer types, again checking carefully that their ranges suffice). The solution with loops is the safer one, but if you really need speed, you can invest in optimizing the presented idea. Another option, of course, would be to write a MEX extension.
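As a small addition of mine (not part of the answer above), a cheap sanity check for that caveat: bin indices stop being exact once a/tol approaches 2^53, the largest integer a double represents exactly, so you can refuse to bin when the data range is too wide for the chosen tol:
% Guard for the range caveat above: refuse to bin if the indices
% cannot be resolved exactly in double precision.
maxBin = max(abs([a(:); b(:)])) / tol;
assert(maxBin < 2^52, ...
    'tol is too small for this data range: bins are not resolvable');
a_bin = floor(a/tol);
b_bin = floor(b/tol);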

It sounds like what you are trying to do is have an ismember function for use on real-valued data.
That is, check for each value B(i) in your vector B whether B(i) is within the tolerance threshold T of at least one value in your vector A.
This works out something like the following:
tf = false(1, length(b)); %//the result vector, true if that element of b is in a
t = 0.01; %// the tolerance threshold
for i = 1:length(b)
%// is the absolute difference between this
%// element of b and the elements of a less than the threshold?
matches = abs(a - b(i)) < t;
%// if b(i) matches any of the elements of a
tf(i) = any(matches);
end
Or, in short:
t = 0.01;
tf = arrayfun(@(bi) any(abs(a - bi) < t), b);
Regarding avoiding the for loop: while this might benefit from vectorization, you may also want to consider parallelisation if your data really is that huge. In that case, the explicit for loop in my first example comes in handy, since you can get a basic version of parallel processing simply by changing the for to parfor.
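For instance, assuming the Parallel Computing Toolbox is available, the loop version above needs only a one-word change (a sketch, untimed):
% Same linear search as above, with independent iterations run in
% parallel; requires the Parallel Computing Toolbox.
tf = false(1, length(b));
t = 0.01;
parfor i = 1:length(b)
    tf(i) = any(abs(a - b(i)) < t);
end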

Here is a fully vectorized solution. Note that I would actually recommend the solution given by @Alan, as mine is not likely to work for big datasets.
[X, Y] = meshgrid(A, B);
M = abs(X - Y) < tolerance;
Now a logical index of the elements of A that are within the tolerance of some element of B can be obtained with any(M), and the corresponding index for B with any(M,2).
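In code (inA and inB are names I've added for illustration; M is length(B)-by-length(A) here):
inA = any(M, 1);   % 1-by-numel(A): A(j) is within tolerance of some B(i)
inB = any(M, 2);   % numel(B)-by-1: B(i) is within tolerance of some A(j)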

bsxfun to the rescue
>> M = abs( bsxfun(@minus, A, B') ); %// difference matrix
>> M < tolerance
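Assuming A and B are row vectors, M comes out numel(B)-by-numel(A), so the per-element test the question asks for is one reduction away (tf is my naming):
tf = any(M < tolerance, 2);   % tf(i) is true if B(i) is near some A(j)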

Another way to do what you want is with a logical expression.
Since A and B are vectors of different sizes you can't simply subtract and look for values that are smaller than the tolerance, but you can do the following:
Lmat = sparse((abs(repmat(A,[numel(B) 1])-repmat(B',[1 numel(A)])))<tolerance);
and you will get a sparse logical matrix with as many ones in it as there are equal elements (within tolerance). You can then count how many of those elements you have by writing:
Nequal = sum(sum(Lmat));
You could also get the indexes of the corresponding elements by writing:
[r,c] = find(Lmat);
then the following will be true (for all j in 1:numel(r)):
abs(B(r(j)) - A(c(j))) < tolerance
Finally, you should note that this way you get multiple counts in case there are duplicate entries in A or in B. It may be advisable to use the unique function first. For example:
A_new = unique(A);

Related

Finding the best threshold level without a for loop in MATLAB

Let A and B be two matrices of the same size. For a matrix M, let ht(M,t) threshold all the entries of M by t; that is, all entries whose absolute value is less than t are set to 0. Suppose I want to find the optimal threshold t such that norm(ht(A,t)-B,'fro')^2 is minimized.
The only way I can see to do this is deficient: do a for loop over the unique values of A, threshold A, set C = ht(A,t) - B, and compute sum(sum(C.*C)).
This is just too slow when A is large. I have considered sorting the elements of A and finding some efficient way to set a few entries to zero at a time, but I'm not sure this can all be done without a for loop.
Is there a way to do it?
Here's a very simple example (so simple a for loop works easily in this case):
B =
0.101508820368332 0
0 0.301996943246957
Set
A=B+.1*ones(2)
A =
0.201508820368332 0.1
0.1 0.401996943246957
Simple inspection shows that if we zero out the off-diagonal entries of A we minimize the difference between A and B. There are 3 possible threshold values, given by unique(A)=[.1,.2015,.402]. Given a potential threshold value t, we can hard threshold A by:
function A_thresholded = ht(A, t)
A_thresholded = A .* (abs(A) > t);
The form of the data as a matrix is irrelevant: you can convert it to a vector and simply compute the square-norm. In fact, you can sort the contents of A in increasing order of magnitude (permuting B to preserve the pairing). When you increase the threshold past one more value of A, the norm changes by only that one entry's contribution. Therefore, you can find your solution in O(n log n). Hope this helps.
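A sketch of that sweep (my implementation of the idea above, ignoring ties among the magnitudes of A):
% Sort by magnitude; sweeping the threshold upward zeroes one more
% entry at a time, changing the cost by a single term.
a = A(:);  b = B(:);
[absA, order] = sort(abs(a));
a = a(order);  b = b(order);
keepCost = (a - b).^2;      % contribution of an entry ht leaves alone
zeroCost = b.^2;            % contribution of an entry ht zeroes out
% costs(k+1): total cost when the k smallest-magnitude entries are
% zeroed, i.e. when t = absA(k) (ht keeps entries with abs(A) > t).
costs = sum(keepCost) + [0; cumsum(zeroCost - keepCost)];
[bestCost, k] = min(costs);
if k == 1, t = 0; else, t = absA(k-1); end
A_opt = ht(A, t);           % norm(A_opt - B, 'fro')^2 equals bestCost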

Solving for [A] to satisfy [A]*[B]=[C] when [C] is known and [B] is randomly generated with fewer rows than columns

My goal is to solve for a matrix [A] that satisfies [A]*[B]=[C] where [C] is known and [B] is randomly generated. Below is an example:
C=[1/3 1/3 1/3]'*[1/3 1/6 1/6 1/6 1/6];
B=rand(5,5);
A=C*pinv(B);
C_test=A*B;
norm(C-C_test)
ans =
4.6671e-16
Here the elements of [C_test] are within 1e-15 of the original [C], but when [B] has fewer rows than columns, the error increases dramatically (I'm not sure norm() is the best way to show the error, but I think it illustrates the problem). For example:
B=rand(4,5);
A=C*pinv(B);
C_test=A*B;
norm(C-C_test)
ans =
0.0173
Additional methods:
QR-Factorization
[Q,R,P]=qr(B);
A=((C*P)/R)*Q';
norm(C-A*B)
ans =
0.0173
/ Operator
A=C/B;
norm(C-A*B)
ans =
0.0173
Why does this happen? In both cases [B]*pinv([B])=[I] so it seems like the process should work.
If this is a numerical or algebraic fact of life associated with pinv() or the other methods, is there another way I can generate [A] to satisfy the equation? Thank you!
Since C is 3×5, the number of elements in C, and hence the number of equations, is 15. If B is 5×5, the number of unknowns (the elements of A) is 3×5 = 15 as well, and the solution will be accurate.
If on the other hand B is, for instance, 3×5, then the number of elements in A is only 3×3 = 9, and hence the system is overdetermined, which means that the resulting A will be the least-squares solution.
For general information, see Wikipedia: System of linear equations, and MATLAB's documentation on overdetermined systems.
The resulting matrix A is the best fit, and there is no way to improve it (in a least-squares sense).
In response to your second question: you are measuring the quality of A*B as an approximation of C by applying the 2-norm to A*B - C, which is equivalent to least-squares fitting. In this measure, all the approaches that you use provide the optimal answer.
If, however, you would prefer some other measure, such as the 1-norm, the infinity-norm, or any other measure (for instance, picking different weights per column, row or element), then the answers obtained from the original approach will of course not necessarily be optimal with respect to this new measure.
The most general approach would be to use some optimization routine, like this:
x = fminunc(f, zeros(3*size(B,1),1));
A = reshape(x,3,size(B,1));
where f is some (any) measure. The least-squares measure should result in the same A. So if you try this one:
f = @(x) norm(reshape(x,3,size(B,1))*B - C);
A should match the results in your approaches.
But you could use any f here. For instance, try the 1-norm:
f = @(x) norm(reshape(x,3,size(B,1))*B - C, 1);
Or something crazy like:
f = @(x) sum(abs(reshape(x,3,size(B,1))*B - C)*[1 10 100 1000 10000]');
This will give different results, which are optimal according to the new measure f. That being said, I would stick to least squares ;)
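Putting the pieces together on the example from the question (a sketch; fminunc is in the Optimization Toolbox):
C = [1/3 1/3 1/3]'*[1/3 1/6 1/6 1/6 1/6];
B = rand(4,5);                                  % fewer rows than columns
f = @(x) norm(reshape(x,3,size(B,1))*B - C);    % least-squares measure
x = fminunc(f, zeros(3*size(B,1),1));
A = reshape(x, 3, size(B,1));
norm(C - A*B)                                   % should match the residual of C/B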

How to generate random matlab vector with these constraints

I'm having trouble creating a random vector V in Matlab subject to the following set of constraints (given parameters N, D, L, and theta):
The vector V must be N units long
The elements must have an average of theta
No 2 successive elements may differ by more than +/-10
D == sum(L*cosd(V-theta))
I'm having the most problems with the last one. Any ideas?
Edit
Solutions in other languages or equation form are equally acceptable. Matlab is just a convenient prototyping tool for me, but the final algorithm will be in java.
Edit
From the comments and initial answers I want to add some clarifications and initial thoughts.
I am not seeking a 'truly random' solution drawn from any standard distribution. I want a pseudo-randomly generated sequence of values that satisfies the constraints for a given parameter set.
The system I'm trying to approximate is a chain of N links, each of length L, where one end of the chain is a distance D away from the other end in the direction theta.
My initial insight here is that theta can be removed from consideration until the end, since (2) in essence adds theta to every element of a 0 mean vector V (shifting the mean to theta) and (4) simply removes that mean again. So, if you can find a solution for theta=0, the problem is solved for all theta.
As requested, here is a reasonable range of parameters (not hard constraints, but typical values):
5<N<200
3<D<150
L==1
0 < theta < 360
I would start by creating a "valid" vector. That should be possible - say, calculate it so that every entry has the same value.
Once you have that vector, I would apply some transformations to "shuffle" it. "Rejection sampling" is the keyword - if a shuffle would violate one of your rules, you just don't do it.
As transformations I come up with:
switch two entries
modify the value of one entry and modify a second one to keep the 4th condition (theoretically you could just shuffle two until the condition is fulfilled - but the chance of that happening is quite low)
But maybe you can find some more.
Do this reasonably often and you get a "valid" random vector. Theoretically you should be able to reach all valid vectors - practically you could construct several "start" vectors so it won't take that long. A sketch of the swap-based variant follows.
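A minimal sketch of the swap-based shuffle. The alternating start vector is my construction, not from the answer above, and it assumes D <= N*L, that delta = acosd(D/(N*L)) is at most 5 (so rule 3 holds after swaps), and that N is even (so the mean comes out exactly theta):
% Alternating start: satisfies mean(V) == theta (N even) and
% D == sum(L*cosd(V-theta)) by construction.
delta = acosd(D / (N*L));
V = theta + delta * (-1).^(1:N);
for k = 1:10000                        % shuffle attempts
    ij = randi(N, 1, 2);               % two positions to swap
    W = V;  W(ij) = W(fliplr(ij));     % swapping preserves rules 1, 2 and 4
    if all(abs(diff(W)) <= 10)         % rejection step for rule 3
        V = W;
    end
end
The second transformation (modify one entry and compensate with another) would widen the set of reachable vectors, at the cost of a harder bookkeeping step for rule 4.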
Here's a way of doing it. It is clear that not all combinations of theta, N, L and D are valid. It is also clear that you're trying to simulate random objects that are quite complex. You will probably have a hard time showing anything useful with respect to these vectors.
The series you're trying to simulate seems similar to a Wiener process, so I started with that; you can start with anything that is random yet reasonable. I then use that as a starting point for an optimization that tries to satisfy constraints 2, 3 and 4. The closer your initial value is to a valid vector (satisfying all your conditions), the better the convergence.
function series = generate_series(D, L, N,theta)
s(1) = theta;
for i=2:N,
s(i) = s(i-1) + randn(1,1);
end
f = @(x) objective(x,D,L,N,theta);
q = optimset('Display','iter','TolFun',1e-10,'MaxFunEvals',Inf,'MaxIter',Inf);
[sf,val] = fminunc(f,s,q);
val
series = sf;
function value= objective(s,D,L,N,theta)
a = abs(mean(s)-theta);
b = abs(D-sum(L*cos(s-theta)));
c = 0;
for i=2:N,
u =abs(s(i)-s(i-1)) ;
if u>10,
c = c + u;
end
end
value = a^2 + b^2+ c^2;
It seems like you're trying to simulate something very complex/strange (a path of a given curvature?); see the questions from other commenters. Still, you will have to use your domain knowledge to connect D and L with a reasonable mu and sigma for the Wiener process to act as an initialization.
So based on your new requirements, it seems like what you're actually looking for is an ordered list of random angles, with a maximum change of 10 degrees between successive angles (which I first convert to radians), such that the distance and direction from start to end, the link length, and the number of links are all specified?
Simulate an initial guess. It will not satisfy the D and theta constraints (i.e. the specified D and specified theta):
angles = zeros(N, 1);
for link = 2:N
    angles(link) = angles(link - 1) + (rand() - 0.5)*(10*pi/180);
end
Use a genetic algorithm (or another optimization) to adjust the angles based on the following cost function:
dx = sum(L*cos(angles));
dy = sum(L*sin(angles));
D = sqrt(dx^2 + dy^2);
theta = atan2(dy, dx);
the cost is now just the difference between the vector given by my D and theta above and the vector given by the specified D and theta (i.e. the inputs).
You will still have to enforce the max-change-of-10-degrees rule; perhaps the cost function should just become enormous if it is violated? Perhaps there is a cleaner way to specify sequence constraints in optimization algorithms (I don't know of one).
I feel like if you can find the right optimization with the right parameters this should be able to simulate your problem.
You don't give us a lot of detail to work with, so I'll assume the following:
random numbers are to be drawn from [-127+theta +127-theta]
all random numbers will be drawn from a uniform distribution
all random numbers will be of type int8
Then, for the first 3 requirements, you can use this:
N = 1e4;
theta = 40;
diffVal = 10;
g = @() randi([intmin('int8')+theta intmax('int8')-theta], 'int8') + theta;
V = [g(); zeros(N-1,1, 'int8')];
for ii = 2:N
V(ii) = g();
while abs(V(ii)-V(ii-1)) >= diffVal
V(ii) = g();
end
end
Inline the anonymous function for more speed.
Now, the last requirement,
D == sum(L*cos(V-theta))
is a bit of a strange one... cos(V-theta) is a specific way to re-scale the data to the [-1 +1] interval, which the multiplication with L will then scale to [-L +L]. On first sight, you'd expect the sum to average out to 0.
And indeed it does: the expected value of cos(x) when x is a random variable from a uniform distribution in [0 2*pi] is 0. It is the expected value of abs(cos(x)) that equals 2/pi (see here for example). So, ignoring for the moment the fact that our limits are different from [0 2*pi], the expected value of sum(L*abs(cos(V-theta))) would reduce to the constant value of 2*N*L/pi, while sum(L*cos(V-theta)) itself just fluctuates around 0.
How you can force this to equal some other constant D is beyond me...can you perhaps elaborate on that a bit more?

Minimizing objective function by changing a variable - in Matlab?

I have a 101x82 size matrix called A. Using this variable matrix, I compute two other variables called:
1) B, a 1x1 scalar, and
2) C, a 50x6 matrix.
I compare 1) and 2) with their analogous variables 3) and 4), whose values are fixed:
3) D, a 1x1 scalar, and
4) E, a 50x6 matrix.
Now, I want to perturb/change the values of A matrix, such that:
1) ~ 3), i.e. B is nearly equal to D , and
2) ~ 4), i.e. C is nearly equal to E
Note that on perturbing A, B and C will change, but not D and E.
Any ideas how to do this? Thanks!
I can't run your code as it demands loading data (which I don't have), and it's not immediately obvious how B or C are calculated.
Fortunately I may be able to answer your problem. You're describing an optimization problem, and the solution would be to use fminsearch (or something of that variety).
What you do is define a function that combines the two mismatch terms into a single scalar (fminsearch minimizes a scalar objective):
y1 = (B - D)^weight1;
y2 = norm((C - E), weight2);
y = y1 + y2;
with the weights determining how strongly you penalize deviations (weight = 2 is usually sufficient).
Your function variable would be A.
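A minimal sketch of that setup, where computeB and computeC are hypothetical placeholders for however you derive B and C from A (they are not from the question):
% Hypothetical computeB/computeC map A to B (1x1) and C (50x6).
cost = @(A) (computeB(A) - D)^2 + norm(computeC(A) - E, 'fro')^2;
costVec = @(x) cost(reshape(x, 101, 82));  % fminsearch expects a vector
x0 = A(:);                                 % start from the current A
[x, fval] = fminsearch(costVec, x0);
A_opt = reshape(x, 101, 82);
Note that with 101*82 unknowns, a derivative-free search like fminsearch will converge very slowly; the gradient-based approach in the next answer scales better.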
From my understanding you have a few functions.
fb(A) = B
fc(A) = C
Do you know the functions listed above? That is, do you know the mappings from A to each of these?
If you want to try to optimize, so that B is close to D, you need to pick:
What close means. You can look at some vector norm for the B and D case, like minimizing ||B-D||^2. The standard sum of the squares of the elements of this difference will probably do the trick and is computationally nice.
How to optimize. This depends a lot on your functions, whether you want local or global minima, etc.
So basically, we've now boiled the problem down to minimizing
Cost = ||fb(A) - D||^2
(with an analogous term ||fc(A) - E||^2 for the C/E pair).
One thing you can certainly do is compute the gradient of this cost function with respect to the individual elements of A, and then perform minimization steps with the forward Euler method and a suitable "time step". This might not be fast, but with a small enough time step and well-behaved enough functions it will at least get you to a local minimum.
Computing the gradient of this (since B and D are 1×1, the cost is just a squared scalar difference):
grad_A(cost) = 2*(fb(A) - D)*grad_A(fb)(A)
where grad_A means the gradient with respect to A, and grad_A(fb)(A) means the gradient with respect to A of the function fb, evaluated at A.
Computing the grad_A(fb)(A) depends on the form of fb, but here are some pages have "Matrix calculus" identities and explanations.
Matrix calculus identities
Matrix calculus explanation
Then you simply perform gradient descent on A by doing forward Euler updates:
A_next = A_prev - timestep * grad_A(cost)
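As a sketch, with grad_cost standing in as a hypothetical function that implements the gradient expression above:
% grad_cost(A) is a placeholder: it must return the gradient of the
% cost with respect to A (same size as A).
timestep = 1e-3;
for iter = 1:5000
    g = grad_cost(A);
    if norm(g, 'fro') < 1e-8, break; end  % stop near a stationary point
    A = A - timestep * g;                 % one forward-Euler step downhill
end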

MATLAB/General CS: Sampling Without Replacement From Multiple Sets (+Keeping Track of Unsampled Cases)

I am currently implementing an optimization algorithm that requires me to sample without replacement from several sets. Although I am coding in MATLAB, this is essentially a CS question.
The situation is as follows:
I have a finite number of sets (A, B, C) each with a finite but possibly different number of elements (a1,a2...a8, b1,b2...b10, c1, c2...c25). I also have a vector of probabilities for each set which lists a probability for each element in that set (i.e. for set A, P_A = [p_a1 p_a2... p_a8] where sum(P_A) = 1). I normally use these to create a probability generating function for each set, which given a uniform number between 0 to 1, can spit out one of the elements from that set (i.e. a function P_A(u), which given u = 0.25, will select a2).
I am looking to sample without replacement from the sets A, B, and C. Each "full sample" is a sequence of elements drawn from each of the different sets, i.e. (a1, b3, c2). Note that the space of full samples is the set of all combinations of elements from A, B, and C (their Cartesian product). In the example above, this space is (a1,a2...a8) x (b1,b2...b10) x (c1, c2...c25) and there are 8*10*25 = 2000 unique "full samples" in my space.
The annoying part of sampling without replacement with this setup is that if my first sample is (a1, b3, c2) then that does not mean I cannot sample the element a1 again - it just means that I cannot sample the full sequence (a1, b3, c2) again. Another annoying part is that the algorithm I am working with requires me do a function evaluation for all permutations of elements that I have not sampled.
The best method at my disposal right now is to keep track the sampled cases. This is a little inefficient since my sampler is forced to reject any case that has been sampled before (since I'm sampling without replacement). I then do the function evaluations for the unsampled cases, by going through each permutation (ax, by, cz) using nested for loops and only doing the function evaluation if that combination of (ax, by, cz) is not included in the sampled cases. Again, this is a little inefficient since I have to "check" whether each permutation (ax, by, cz) has already been sampled.
I would appreciate any advice in regards to this problem. In particular, I am looking for a method to sample without replacement and keep track of unsampled cases that does not explicitly list out the full sample space (I usually work with 10 sets of 10 elements each, so listing out the full sample space would require a 10^10 x 10 matrix). I realize that this may be impossible, though finding an efficient way to do it would let me demonstrate the true limits of the algorithm.
Do you really need to keep track of all of the unsampled cases? Even if you had a 1-by-10^10 vector that stored a logical value of true or false indicating if that permutation had been sampled or not, that would still require about 10 GB of storage, and MATLAB is likely to either throw an "Out of Memory" error or bring your entire machine to a screeching halt if you try to create a variable of that size.
An alternative to consider is storing a sparse vector of indicators for the permutations you've already sampled. Let's consider your smaller example:
A = 1:8;
B = 1:10;
C = 1:25;
nA = numel(A);
nB = numel(B);
nC = numel(C);
beenSampled = sparse(1,nA*nB*nC);
The 1-by-2000 sparse matrix beenSampled is empty to start (i.e. it contains all zeroes) and we will add a one at a given index for each sampled permutation. We can get a new sample permutation using the function RANDI to give us indices into A, B, and C for the new set of values:
indexA = randi(nA);
indexB = randi(nB);
indexC = randi(nC);
We can then convert these three indices into a single unique linear index into beenSampled using the function SUB2IND:
index = sub2ind([nA nB nC],indexA,indexB,indexC);
Now we can test the indexed element in beenSampled to see if it has a value of 1 (i.e. we sampled it already) or 0 (i.e. it is a new sample). If it has been sampled already, we repeat the process of finding a new set of indices above. Once we have a permutation we haven't sampled yet, we can process it:
while beenSampled(index)
indexA = randi(nA);
indexB = randi(nB);
indexC = randi(nC);
index = sub2ind([nA nB nC],indexA,indexB,indexC);
end
beenSampled(index) = 1;
newSample = [A(indexA) B(indexB) C(indexC)];
%# ...do your subsequent processing...
The use of a sparse array will save you a lot of space if you're only going to end up sampling a small portion of all of the possible permutations. For smaller total numbers of permutations, like in the above example, I would probably just use a logical vector instead of a sparse vector.
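If the sample space is small enough to index exhaustively (as in this 2000-permutation example), another option, not from the answer above, is to skip rejection entirely: draw a random permutation of all linear indices once and walk through it:
%# Every permutation appears exactly once, in random order.
order = randperm(nA*nB*nC);
for k = 1:numel(order)
    [indexA, indexB, indexC] = ind2sub([nA nB nC], order(k));
    newSample = [A(indexA) B(indexB) C(indexC)];
    %# ...do your subsequent processing...
end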
Check the MATLAB documentation for the randi function; you'll just want to use that in conjunction with the length function to choose random entries from each vector. Keeping track of each sampled vector should be as simple as concatenating it to a matrix:
current_values = [5 89 45]; % let's say this is your current sample set
used_values = [used_values; current_values];
% wash, rinse, repeat
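As written, this doesn't yet enforce the without-replacement rule. A minimal sketch of the missing check (sampling indices rather than values, my choice), using ismember with the 'rows' option:
used_values = zeros(0, 3);       % one row of indices per sample taken
while true
    current_values = [randi(length(A)) randi(length(B)) randi(length(C))];
    if ~ismember(current_values, used_values, 'rows')
        break;                   % found a combination not sampled before
    end
end
used_values = [used_values; current_values];
% wash, rinse, repeat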