Here is the example of convolution I was given:
I have two questions here:
Why is the vector 𝑥 padded with two 0s on each side? The length of the kernel ℎ is 3, so if 𝑥 were padded with just one 0 on each side, the middle element of the kernel would always stay within the range of 𝑥. Why not pad with one 0 on each side?
Explain the following output to me:
>> x = [1, 2, 1, 3];
>> h = [2, 0, 1];
>> y = conv(x, h, 'valid')
y =
3 8
>>
What is valid doing here in the context of the previously shown mathematics on vectors 𝑥 and ℎ?
I can't speak as to the amount of zero padding that is proper .... That being said, any zero padding is making up data that is not there. This isn't necessarily wrong, but you should be aware that the values computed from it may be biased. Sometimes you care about this, sometimes you don't. Introducing 1 zero (in this case) would keep the middle kernel value always inside the data, but why should that be the stopping criterion? Importantly, adding on 2 zeros still leaves one multiplication between values that are actually present in both the data and the kernel (the x[0]*h[0] and x[3]*h[2] terms, using 0-based indexing). Adding a 3rd zero (or more) would just yield zeros in the output, since 3 is the length of the kernel. In other words, zero padding will always yield an output that is partially (but not completely) based on the actual data, for any padding from n = 1 to n = length(h)-1 (in this case either 1 or 2).
Even though zero padding with length 2 or 1 still involves multiplications over real data, some output values are summed over "fake" data (those multiplied with a padded zero). In this case Matlab gives you 3 options for how you want the data returned. First, you can get the full convolution, which includes values that are biased because they sum in padded 0 values that aren't really in the data. Alternatively, you can get same, which means the length of the output is the length of the data: y = [4 3 8 1]. This corresponds to 1 zero of padding on each side; note that for longer kernels you could technically get other lengths between full and same, Matlab just doesn't return those for you.
Finally, and probably most important to understand out of all this, you have the valid option. In your example, only 2 samples of the output are computed from summations that involve only real data (i.e. from multiplying samples of the kernel with samples of x, never with padded zeros). More specifically:
y[2] = h[2]*x[0] + h[1]*x[1] + h[0]*x[2] = 3 // 0-based indexing as in the example
y[3] = h[2]*x[1] + h[1]*x[2] + h[0]*x[3] = 8
Note that none of the other y values are computed with only h and x; they all involve a padded zero, which is not necessarily indicative of the real data. For example:
y[4] = h[2]*x[2] + h[1]*x[3] + h[0]*0 <= padded zero
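If it helps to see all three shape options side by side, here is a quick check in Matlab using the same x and h as above (the values follow from the full convolution [2 4 3 8 1 3] worked out per the formulas here):
conv(x, h, 'full')  % [2 4 3 8 1 3] -- length(x)+length(h)-1 samples
conv(x, h, 'same')  % [4 3 8 1]     -- central part, same length as x
conv(x, h, 'valid') % [3 8]         -- only the sums computed entirely from real data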
I have a problem using matrix multiplication in Matlab.
I have an 8x4 matrix called A and a 4x1 vector called B. Looking at matrix A, the fourth row has the values
A(4,:) = (-19.723654104483987, -73.609679228848705, 73.609679228848705, 19.723654104483987)
and the vector B has the values
B = (101325, 101325, 101325, 101325)'.
It appears to me that these two, when multiplied, should cancel each other out. However, when I compute ans = A*B, the fourth row of ans has the value shown below.
ans(4) = 4.656612873077393e-10
I find this strange, since I checked the elements of A(4,:) and they really should cancel out:
(A(4,1) == -A(4,4)) = 1
(A(4,2) == -A(4,3)) = 1
and
(B(1) == B(2)) = 1
(B(1) == B(3)) = 1
(B(1) == B(4)) = 1
I was thinking that it might have something to do with the machine epsilon of Matlab, but the answer is larger than 1e-16.
Another thing I find strange is that when I use A(4,:)*B it returns 0. Why does a value appear in the fourth row when using full matrix multiplication?
If anyone has an idea why this doesn't return zero, I would be grateful!
You are computing an expression
((-a+(-b))+b)+a, a,b > 0
Floating-point addition is not associative: the truncation error committed by the first addition of -a and -b is not undone by the second addition of b, so the third addition of a will not land at zero, but will reflect the truncation error of the first sum. So it is not surprising that you get a residual error of size (a+b)*mu, mu being the machine epsilon, about 2e-16. As your a+b is about 1e+7, this conforms with the actual result you got. As for why A(4,:)*B returns exactly zero while the fourth entry of A*B does not: the two products are free to accumulate their sums in different orders internally (they go through different optimised code paths), and whether the rounding errors happen to cancel depends entirely on that order.
See also the many resources provided in Is floating point math broken?
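To make the order dependence concrete, here is a minimal sketch using the magnitudes from the question (the exact residual may vary):
a = 73.609679228848705 * 101325;
b = 19.723654104483987 * 101325;
r1 = ((-a + (-b)) + b) + a  % typically a tiny nonzero residual
r2 = (-a + a) + (-b + b)    % exactly zero: each pair cancels exactly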
From a binary matrix, I want to calculate a kind of adjacency/joint probability density matrix (not quite sure how to label it, so please feel free to rename).
For example, I start with this matrix:
A = [1 1 0 1 1
1 0 0 1 1
0 0 0 1 0]
I want to produce this output:
Output = [1 4/5 2/5
4/5 1 3/5
2/5 3/5 1]
Basically, for each pair of rows, I want to calculate the proportion of positions where they agree (1 and 1, or 0 and 0). A row will always agree with itself and thus has a 1 along the diagonal. No matter how many more js are added, the result will still be 3x3, but an extra i variable will result in a 4x4.
I like to think of the rows (i) of the A matrix as the persons and the columns (j) as the questions, so the final output is a 3x3 (number of persons) matrix.
I am having some trouble with this in Matlab. If you could please help point me in the right direction, that would be fabulous.
So, you can do this in two parts.
bothOnes = A*A';
gives you a matrix showing how many 1s each pair of rows share, and
bothZeros = (1-A)*(1-A)';
gives you a matrix showing how many 0s each pair of rows share.
If you just add them up, you get how many elements they share of either type:
bothSame = A*A' + (1-A)*(1-A)';
Then just divide by the row length to get the desired fractional representation:
output = (A*A' + (1-A)*(1-A)') / size(A, 2);
That should get you there.
Note that this only works if A contains only 1's and 0's, but it can be adapted for other cases.
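As a quick check with the matrix from the question (the printed values follow from the agreement rule stated above):
A = [1 1 0 1 1;
     1 0 0 1 1;
     0 0 0 1 0];
output = (A*A' + (1-A)*(1-A)') / size(A, 2)
% output =
%     1.0000    0.8000    0.4000
%     0.8000    1.0000    0.6000
%     0.4000    0.6000    1.0000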
Here are some alternatives, assuming A can only contain 0 and 1:
If you have the Statistics Toolbox:
result = 1-squareform(pdist(A, 'hamming')); % pdist gives the fraction of positions where each pair of rows differs
Manual approach with implicit expansion:
result = mean(permute(A, [1 3 2])==permute(A, [3 1 2]), 3);
Using bitwise operations. This is a more esoteric approach, and is only valid if A has at most 53 columns, due to floating-point limitations:
t = bin2dec(char(A+'0')); % convert each row from binary to decimal
u = bitxor(t, t.'); % bitwise xor
v = mean(dec2bin(u, size(A,2))-'0', 2); % compute desired values (pad to size(A,2) digits so leading zeros count)
result = 1 - reshape(v, size(A,1), []); % reshape to obtain result
For my experiment I have 20 categories which contain 9 pictures each. I want to show these pictures in a pseudo-random sequence where the only constraint to randomness is that one image may not be followed directly by one of the same category.
So I need something similar to
r = randi([1 20],1,180);
just with an added constraint of two numbers not directly following each other. E.g.
14 8 15 15 7 16 6 4 1 8 is not legitimate, whereas
14 8 15 7 15 16 6 4 1 8 would be.
An alternative way I was thinking of was naming the categories A,B,C,...T, have them repeat 9 times and then shuffle the bunch. But there you run into the same problem I think?
I am an absolute Matlab beginner, so any guidance will be welcome.
The following uses modulo operations to make sure each value is different from the previous one:
m = 20; %// number of categories
n = 180; %// desired number of samples
x = [randi(m)-1 randi(m-1, [1 n-1])];
x = mod(cumsum(x), m) + 1;
How the code works
In the third line, the first entry of x is a random value between 0 and m-1. Each subsequent entry represents the change that, modulo m, will give the next value (this is done in the fourth line).
The key is to choose that change between 1 and m-1 (not between 0 and m-1), to assure consecutive values will be different. In other words, given a value, there are m-1 (not m) choices for the next value.
After the modulo operation, 1 is added to transform the range of resulting values from 0,...,m-1 to 1,...,m.
Test
Take all (n-1) pairs of consecutive entries in the generated x vector and count occurrences of all (m^2) possible combinations of values:
count = accumarray([x(1:end-1); x(2:end)].', 1, [m m]);
imagesc(count)
axis square
colorbar
The following image was obtained for m=20; n=1e6. It shows that all combinations are (more or less) equally likely, except for pairs with repeated values, which never occur.
You could look for the repetitions in an iterative manner and put a new set of integers from the same interval [1 20] only into those places where repetitions have occurred. We continue to do so until there are no repetitions left -
interval = [1 20]; %// interval from where the random integers are to be chosen
r = randi(interval,1,180); %// create the first batch of numbers
idx = diff(r)==0; %// logical array, where 1s denote repetitions
while nnz(idx)~=0
    rN = randi(interval,1,nnz(idx)); %// new set of random integers to be placed
                                     %// at the positions where repetitions have occurred
    r(find(idx)+1) = rN; %// place random integers at their respective positions
    idx = diff(r)==0; %// recompute the repetitions for the next pass
end
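Once the loop exits, you can confirm that no category appears twice in a row:
nnz(diff(r)==0) % returns 0: no element equals its predecessor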
I have an NxM matrix, for example named A. After some processing I want to count the zero elements.
How can I do this in one line of code? I tried A==0, which returns a 2D logical matrix.
There is a function to find the number of nonzero matrix elements: nnz. You can use this function on a logical matrix, and it will return the number of true elements.
In this case, we apply nnz to the matrix A==0; the elements of that logical matrix are true if the original element was 0, and false for any element other than 0.
A = [1, 3, 1;
0, 0, 2;
0, 2, 1];
nnz(A==0) %// returns 3, i.e. the number of zeros of A (the amount of true in A==0)
The credits for the benchmarking belong to Divakar.
Benchmarking
Using the following parameters and inputs, one can benchmark the solutions presented here with timeit.
Input sizes
Small sized datasize - 1:10:100
Medium sized datasize - 50:50:1000
Large sized datasize - 500:500:4000
Varying % of zeros
~10% of zeros case - A = round(rand(N)*5);
~50% of zeros case - A = rand(N);A(A<=0.5)=0;
~90% of zeros case - A = rand(N);A(A<=0.9)=0;
The results are shown next for the small, medium and large datasizes (benchmark plots not reproduced here).
Observations
If you look closely at the NNZ and SUM performance plots for the medium and large datasizes, you will notice that their performance curves get closer to each other for the 10% and 90% zeros cases. For the 50% zeros case, the performance gap between the SUM and NNZ methods is comparatively wider.
As a general observation across all datasizes and all three zero-fraction cases, the SUM method seems to be the undisputed winner. Interestingly, the general-case solution sum(A(:)==0) seems to perform better than sum(~A(:)).
Some basic Matlab to know: the (:) operator flattens any matrix into a column vector, and ~ is the NOT operator, flipping zeros to ones and nonzero values to zero. Then we just use sum:
sum(~A(:))
This should also be about 10 times faster than the length(find(...)) scheme, in case efficiency is important.
Edit: if A may contain NaN values, note that ~NaN throws an error, so in that case resort to:
sum(A(:)==0)
I'll add something to the mix as well. You can use histc and compute the histogram of the entire matrix. You specify the second parameter to be the bin edges at which the numbers should be collected. If we just want to count the number of zeroes, we can simply specify 0 as the second parameter. However, if you pass histc a matrix, it will operate along the columns, and we want to operate on the entire matrix. As such, simply flatten the matrix into a column vector A(:) and use histc on that. In other words, do this:
histc(A(:), 0)
This should be equivalent to counting the number of zeroes in the entire matrix A.
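For example, with the 3x3 matrix A defined in the earlier answer, this matches the nnz approach:
histc(A(:), 0) % returns 3, same as nnz(A==0)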
Well, I don't know if I'm answering the question well, but you could code it as follows:
% Random Matrix
M = [1 0 4 8 0 6;
0 0 7 4 8 0;
8 7 4 0 6 0];
n = size(M,1); % Number of rows of M
p = size(M,2); % Number of columns of M
nbrOfZeros = 0; % counter
for i=1:n
for j=1:p
if M(i,j) == 0
nbrOfZeros = nbrOfZeros + 1;
end
end
end
nbrOfZeros % displays 7 for the example matrix above
I am using 64-bit Matlab with 32 GB of RAM (just so you know).
I have a file (vector) of 1.3 million numbers (integers). I want to make another vector of the same length, where each point is a weighted average of the entire first vector, weighted by the inverse distance from that position (actually it's position^-0.1, not ^-1, but for example purposes). I can't use Matlab's 'filter' function, because it can only average things before the current point, right? To explain more clearly, here's an example with 3 elements:
data = [ 2 6 9 ]
weights = [ 1 1/2 1/3; 1/2 1 1/2; 1/3 1/2 1 ]
results=data*weights= [ 8 11.5 12.666 ]
i.e.
8 = 2*1 + 6*1/2 + 9*1/3
11.5 = 2*1/2 + 6*1 + 9*1/2
12.666 = 2*1/3 + 6*1/2 + 9*1
So each point in the new vector is the weighted average of the entire first vector, weighting by 1/(distance from that position+1).
I could just remake the weight vector for each point and then calculate the results vector element by element, but this requires 1.3 million iterations of a for loop, each of which contains 1.3 million multiplications. I would rather use straight matrix multiplication, multiplying a 1 x 1.3M vector by a 1.3M x 1.3M matrix, which works in theory, but I can't load a matrix that large.
I am then trying to make the matrix using a shell script and index it in matlab so only the relevant column of the matrix is called at a time, but that is also taking a very long time.
I don't have to do this in Matlab, so any advice about handling such large quantities of numbers and getting averages would be appreciated. Since I am using a weight of ^-0.1 and not ^-1, it does not drop off that fast: the millionth point is still weighted at 0.25 compared to the original point's weighting of 1, so I can't just cut it off as it gets big, either.
Hope this was clear enough?
Here is the code for the answer below (so it can be formatted?):
data = load('/Users/mmanary/Documents/test/insertion.txt');
data=data.';
total=length(data);
x=1:total;
datapad=[zeros(1,total) data];
weights = ([(total+1):-1:2 1:total]).^(-.4);
weights = weights/sum(weights);
Fdata = fft(datapad);
Fweights = fft(weights);
Fresults = Fdata .* Fweights;
results = ifft(Fresults);
results = results(1:total);
plot(x,results)
The only sensible way to do this is with FFT convolution, which underpins the filter function and similar tools. It is very easy to do manually:
% Simulate some data
n = 10^6;
x = randi(10,1,n);
xpad = [zeros(1,n) x];
% Setup smoothing kernel
k = 1 ./ [(n+1):-1:2 1:n];
% FFT convolution
Fx = fft(xpad);
Fk = fft(k);
Fxk = Fx .* Fk;
xk = ifft(Fxk);
xk = xk(1:n);
Takes less than half a second for n=10^6!
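One difference from the code at the top of the question: this kernel is not normalised, so xk is a weighted sum rather than a weighted mean. If you want the mean, you can normalise the kernel globally, the same way the question's code does (note this divides by the total kernel weight, not by the per-position overlap):
k = k / sum(k); % do this before computing fft(k)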
This is probably not the best way to do it, but with lots of memory you could definitely parallelize the process.
You can construct sparse matrices, each holding the entries of your weight matrix that have value 1/i (where i = 1 .. 1.3 million), multiply each of them with your original vector, and sum all the results together.
So for your example the product would be essentially:
a = rand(3,1);
b1 = [1 0 0;
0 1 0;
0 0 1];
b2 = [0 1 0;
1 0 1;
0 1 0] / 2;
b3 = [0 0 1;
0 0 0;
1 0 0] / 3;
c = sparse(b1) * a + sparse(b2) * a + sparse(b3) * a;
Of course, you wouldn't construct the sparse matrices this way. If you wanted fewer iterations of the inner loop, you could put more than one of the i's in each matrix.
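For illustration, here is how those per-distance bands could be built with spdiags instead of being written out densely (a sketch on the same 3-element example; for the real problem you would batch several distances per matrix, as suggested above):
a = rand(3,1);
n = numel(a);
c = zeros(n,1);
for d = 0:n-1 % d is the distance |i-j| covered in this pass
    if d == 0
        W = spdiags(ones(n,1), 0, n, n); % main diagonal, weight 1/(0+1)
    else
        W = spdiags(ones(n,2)/(d+1), [-d d], n, n); % both bands at distance d
    end
    c = c + W * a; % accumulate this distance's contribution
end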
Look into the parfor loop in MATLAB: http://www.mathworks.com/help/toolbox/distcomp/parfor.html
"I can't use matlab's 'filter' function, because it can only average things before the current point, right?"
That is not correct. You can always add or remove samples (i.e., pad with zeros) in your data or in the filtered data. Since filtering with filter (you can also use conv, by the way) is a linear operation, this won't change the result: padding with zeros and then filtering is the same as filtering directly, and linearity lets you reorder the steps as pad samples -> filter -> remove samples.
Anyway, in your example, you can take the averaging kernel to be:
w = 1 ./ [3 2 1 2 3]; % this kernel introduces a delay of 2 samples
and then simply:
result = filter(w, 1, [data, zeros(1,3)]); % or conv(data, w)
% removing the delay introduced by the kernel
result = result(3:end-1);
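You can sanity-check the delay compensation against conv's 'same' option, which does the centring for you (assuming data is a row vector):
result2 = conv(data, w, 'same'); % should match result up to rounding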
You considered only 2 options: multiplying the 1.3M x 1.3M matrix with a vector once, or multiplying two 1.3M vectors 1.3M times.
But you can divide your weight matrix into as many sub-matrices as you wish and multiply an n x 1.3M matrix with the vector 1.3M/n times.
I assume the fastest option will be the one with the smallest number of iterations, with n chosen so the sub-matrix is the largest that fits in your memory without making your computer swap pages to the hard drive.
With your memory size you should start with something like n = 1000 (a 1000 x 1.3M block of doubles already takes about 10 GB).
You can also make it faster by using parfor (with n divided by the number of processors).
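A sketch of what one pass of that blocking might look like, using the 1/(distance+1) weights from the question's example (variable names are hypothetical; the rows.' - (1:N) line relies on implicit expansion, so substitute bsxfun on releases before R2016b):
N = numel(data); % data is the 1.3M-sample vector
n = 1000;        % block height: about 10 GB per block in doubles
results = zeros(N, 1);
for i0 = 1:n:N
    rows = i0 : min(i0+n-1, N);           % rows of the weight matrix in this block
    W = 1 ./ (abs(rows.' - (1:N)) + 1);   % W(i,j) = 1/(|i-j|+1), this block only
    results(rows) = W * data(:);          % one block of the full matrix-vector product
end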
The brute force way will probably work for you, with one minor optimisation in the mix.
The ^-0.1 operations to create the weights will take a lot longer than the + and * operations to compute the weighted-means, but you re-use the weights across all the million weighted-mean operations. The algorithm becomes:
Create a weightings vector with all the weights any computation would need:
weights = (abs(-n:n)+1).^-0.1
For each element in the vector:
Index the relevant portion of the weights vector to consider the current element as the 'centre'.
Perform the weighted-mean with the weights portion and the entire vector. This can be done with a fast vector dot-multiply followed by a scalar division.
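Here is a sketch of that loop, using the (distance+1)^-0.1 weighting described in the question (the names data, weights and results are hypothetical, and the vector is shortened so the sketch runs quickly):
data = randn(1, 1000); % stand-in for the 1.3M-sample vector
N = numel(data);
weights = (abs(-(N-1):(N-1)) + 1) .^ -0.1; % every weight any element will need
results = zeros(1, N);
for i = 1:N
    w = weights(N+1-i : 2*N-i);           % slice so that weight 1 lands on element i
    results(i) = (data * w.') / sum(w);   % fast dot-multiply, then a scalar division
end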
The main loop does n^2 multiplications and additions. With n equal to 1.3 million that's 3.4 trillion operations. A single core of a modern 3 GHz CPU can do, say, 6 billion additions/multiplications a second, so that comes out to around 10 minutes. Add time for indexing the weights vector and overheads, and I still estimate you could come in under half an hour.