I have a sine-wave signal with an amplitude of -1 dB.
The quantizer step Δ is 1/7. The middle quantization level is 0 and the other levels are ±kΔ, k = 1, ..., 7. The output is a 4-bit unsigned integer taking values from 0 to 14, corresponding to the 15 quantization levels.
How can I calculate the 4-bit output?
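A minimal sketch of one way to compute it, assuming a mid-tread quantizer (round to the nearest level); the sampling rate and sine frequency below are placeholder values:
fs = 1000;                      % sampling rate in Hz (assumption)
f  = 10;                        % sine frequency in Hz (assumption)
t  = 0:1/fs:0.1;
a  = 10^(-1/20);                % -1 dB relative to full scale, a ~ 0.891
x  = a*sin(2*pi*f*t);
Delta = 1/7;                    % quantizer step
q = round(x/Delta);             % nearest level index, ideally in -7..7
q = max(min(q, 7), -7);         % clip to the 15 available levels
code = uint8(q + 7);            % 4-bit output: 0 (-7*Delta) .. 14 (+7*Delta)
Adding 7 shifts the level index so that level 0 maps to code 7 and the codes run from 0 to 14.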
I have two data sets (each an 80*80 matrix) with relative risk ranging from -1.5 to +1.5.
I want to plot these two data sets as one normalised frequency distribution plot.
How can I convert the actual frequency to a normalised one (range 0 to 1)?
So what I actually want is: if my frequency ranges from 0 to 200, I want 0 to map to 0, 20 to 0.1, 40 to 0.2, 60 to 0.3, ..., 200 to 1.
So if the relative risk value is -1 and the actual frequency at this risk is 60 for data set one and 80 for data set two, then after normalization I want -1 (the relative risk value) to show a frequency of 0.3 and 0.4 for data sets one and two respectively. I need both in the same graph so that I can see the difference between the two data sets.
This is what I want my graph's axes to be:
Y-axis: normalized frequency for the following groups (ranging from 0 to 1)
X-axis: RR classes - <-1.5, -1.5 to -1.25, -1.25 to -1, -1 to -0.75, -0.75 to -0.5, -0.5 to -0.25, -0.25 to 0, 0 to 0.25, 0.25 to 0.5, 0.5 to 0.75, 0.75 to 1, 1 to 1.25, 1.25 to 1.5 and >1.5
From the MATLAB documentation:
% assuming a, b hold your data (the relative-risk values)
figure; hold on;
histogram(a,'Normalization','probability');
histogram(b,'Normalization','probability');
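Note that 'Normalization','probability' scales the counts so they sum to 1. If you instead want the largest count to map to 1 (as the 0-to-200 example suggests), you can bin the data yourself and divide by the maximum bin count. A minimal sketch, assuming a and b hold the relative-risk values, using the class edges from the question:
edges = [-Inf, -1.5:0.25:1.5, Inf];  % the 14 RR classes, incl. <-1.5 and >1.5
na = histcounts(a(:), edges);
nb = histcounts(b(:), edges);
figure; hold on;
plot(na/max(na));                    % largest bin maps to 1
plot(nb/max(nb));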
I have an input signal s(n) = [1 -1 0 -1 1 -1 1].
The output signal is a delayed version of the input signal with noise:
x(n) = a*s(n-D) + w(n)
How do I delay the input signal by D samples?
You can pad the signal with zeros.
x = a * [zeros(1,D) s]
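For example, with placeholder values (the delay D, gain a, and noise level below are assumptions):
s = [1 -1 0 -1 1 -1 1];             % input signal from the question
D = 3;                              % delay in samples (assumption)
a = 0.8;                            % gain (assumption)
sd = [zeros(1,D) s];                % s(n-D): prepend D zeros
x = a*sd + 0.1*randn(size(sd));     % add the noise term w(n)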
When I try to find the eigen-decomposition of a matrix in Matlab that has a repeated eigenvalue but is NOT defective, it does not return an orthonormal matrix of eigenvectors. For example:
k = 5;
repeats = 1;
% First generate a random matrix of eigenvectors that is orthonormal
V = orth(rand(k));
% Now generate a vector of eigenvalues with the given number of repeats
D = rand(k,1);
for i = 1:repeats
% Copy one random entry of D into another (note: this can result in
% fewer than the given number of repeats if the same index is drawn
% for both source and destination)
D(ceil(k*rand())) = D(ceil(k*rand()));
end
A = V'*diag(D)*V;
% Now test the eigenvector matrix of A
[V_A, D_A] = eig(A);
disp(V_A*V_A' - eye(k))
I am finding that my matrix of eigenvectors V_A is not orthogonal, i.e. V_A*V_A' does not equal the identity matrix (taking rounding errors into account).
I was under the impression that if my matrix was real and symmetric then Matlab would return an orthogonal matrix of eigenvectors, so what is the issue here?
This seems to be a numerical precision issue.
The eigenvectors of a real symmetric matrix are orthogonal. But your input matrix A is not exactly symmetric. The differences are on the order of eps, as expected from numerical errors.
>> A-A.'
ans =
1.0e-16 *
0 -0.2082 -0.2776 0 0.1388
0.2082 0 0 -0.1388 0
0.2776 0 0 -0.2776 0
0 0.1388 0.2776 0 -0.5551
-0.1388 0 0 0.5551 0
If you force A to be exactly symmetric you'll get an orthogonal V_A, up to numerical errors on the order of eps:
>> A = (A+A.')/2;
>> A-A.'
ans =
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
>> [V_A, D_A] = eig(A);
>> disp(V_A*V_A' - eye(k))
1.0e-15 *
-0.3331 0.2220 0.0755 0.1804 0
0.2220 -0.2220 0.0572 -0.1665 0.1110
0.0755 0.0572 -0.8882 -0.0590 -0.0763
0.1804 -0.1665 -0.0590 0 -0.0555
0 0.1110 -0.0763 -0.0555 0
Still, it's surprising that such wildly different results are obtained for V_A when A is symmetric and when A is nearly symmetric. This is my bet as to what's happening: as noted by @ArturoMagidin,
(1) Eigenvectors corresponding to distinct eigenvalues of a symmetric matrix must be orthogonal to each other. Eigenvectors corresponding to the same eigenvalue need not be orthogonal to each other.
(2) However, since every subspace has an orthonormal basis, you can find orthonormal bases for each eigenspace, so you can find an orthonormal basis of eigenvectors.
Matlab is probably taking route (2) (thus forcing V_A to be orthogonal) only if A is symmetric. For A not exactly symmetric it probably takes route (1) and gives you a basis of each eigenspace, but not necessarily with orthogonal vectors.
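A minimal sketch of taking route (2) by hand, without symmetrizing A: group the numerically repeated eigenvalues and re-orthonormalize each eigenspace with orth (the tolerance is an assumption, and the eigenvalues are assumed to come out real):
[V_A, D_A] = eig(A);
d = real(diag(D_A));                 % drop round-off imaginary parts
[~, ~, g] = uniquetol(d, 1e-10);     % group nearly equal eigenvalues
for j = 1:max(g)
    idx = (g == j);
    V_A(:, idx) = orth(V_A(:, idx)); % orthonormal basis within each eigenspace
end
disp(norm(V_A*V_A' - eye(k)))        % small now, up to the asymmetry of A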
The eigenvectors of a real matrix will be orthogonal if and only if A*A' = A'*A and the eigenvalues are distinct. If the eigenvalues are not distinct, MATLAB chooses an orthogonal system of vectors. In the above example, A*A' ~= A'*A. Besides, you have to consider round-off and numerical errors.
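A quick way to test that condition for the A above (it is nonzero here, at round-off level, because A was never exactly symmetric):
disp(norm(A*A' - A'*A))   % exactly zero only if A is normal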
I have a dataset that has some values from different sources for a particular coordinate XY.
How can I transform these coordinates so I can train a neural network to predict the coordinates X and Y from similar data inputs?
I have 12 inputs in the network but how should I transform the output?
My coordinates range from 0 to 63, so should I use 6 bits for X and 6 bits for Y, so that the output has 12 bits?
[0,0,0,0,0,0,0,0,0,0,0,0]
| X | | Y |
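For what it's worth, converting a coordinate pair to and from such a 12-bit target vector is straightforward in base MATLAB (a sketch; the coordinate below is just an example):
xy = [37 12];                        % example coordinate (assumption)
bitsX = bitget(xy(1), 6:-1:1);       % 6-bit binary, MSB first
bitsY = bitget(xy(2), 6:-1:1);
target = [bitsX bitsY];              % 12-bit network target

% decode a (thresholded) network output back to coordinates
x = sum(target(1:6)  .* 2.^(5:-1:0));
y = sum(target(7:12) .* 2.^(5:-1:0));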
While using the MATLAB 2D filter function filter2(B,X) and the convolution function conv2(X,B,'same'), I see that filter2 is essentially 2D convolution but with the filter coefficient matrix rotated by 180 degrees. In terms of the outputs of filter2 and conv2, I saw that the relation below holds:
output matrix of filter2 = output of conv2 with each element negated
EDIT: I was incorrect; the above relation does not hold in general, though I saw it for a few cases. In general the two output matrices are unrelated, because two entirely different kernels are used for the convolution.
I understand how 2D convolution is performed. What I want to understand is the implication of this in image processing terms. How do I visualize what is happening here? What does it mean to rotate a filter coefficient matrix by 180 degrees?
I'll start with a very brief discussion of convolution, based on the animated illustration of 1-D convolution from Wikipedia.
As illustrated, convolving two 1-D functions involves reflecting one of them (i.e. the convolution kernel), sliding the two functions over one another, and computing the integral of their product.
When convolving 2-D matrices, the convolution kernel is reflected in both dimensions, and then the sum of the products is computed for every unique overlapping combination with the other matrix. This reflection of the kernel's dimensions is an inherent step of the convolution.
However, when performing filtering we like to think of the filtering matrix as though it were a "stencil" that is directly laid as is (i.e. with no reflections) over the matrix to be filtered. In other words, we want to perform an equivalent operation as a convolution, but without reflecting the dimensions of the filtering matrix. In order to cancel the reflection performed during the convolution, we can therefore add an additional reflection of the dimensions of the filter matrix before the convolution is performed.
Now, for any given 2-D matrix A, you can prove to yourself that flipping both dimensions is equivalent to rotating the matrix 180 degrees by using the functions FLIPDIM and ROT90 in MATLAB:
A = rand(5); %# A 5-by-5 matrix of random values
isequal(flipdim(flipdim(A,1),2),rot90(A,2)) %# Will return 1 (i.e. true)
This is why filter2(f,A) is equivalent to conv2(A,rot90(f,2),'same'). To illustrate further how there are different perceptions of filter matrices versus convolution kernels, we can look at what happens when we apply FILTER2 and CONV2 to the same set of matrices f and A, defined as follows:
>> f = [1 0 0; 0 1 0; 1 0 0] %# A 3-by-3 filter/kernel
f =
1 0 0
0 1 0
1 0 0
>> A = magic(5) %# A 5-by-5 matrix
A =
17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9
Now, when performing B = filter2(f,A); the computation of output element B(2,2) can be visualized by lining up the center element of the filter with A(2,2) and multiplying overlapping elements:
17*1 24*0 1*0 8 15
23*0 5*1 7*0 14 16
4*1 6*0 13*0 20 22
10 12 19 21 3
11 18 25 2 9
Since elements outside the filter matrix are ignored, we can see that the sum of the products will be 17*1 + 4*1 + 5*1 = 26. Notice that here we are simply laying f on top of A like a "stencil", which is how filter matrices are perceived to operate on a matrix.
When we perform B = conv2(A,f,'same');, the computation of output element B(2,2) instead looks like this:
17*0 24*0 1*1 8 15
23*0 5*1 7*0 14 16
4*0 6*0 13*1 20 22
10 12 19 21 3
11 18 25 2 9
and the sum of the products will instead be 5*1 + 1*1 + 13*1 = 19. Notice that when f is taken to be a convolution kernel, we have to flip its dimensions before laying it on top of A.
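You can verify both hand computations directly with the f and A defined above:
Bf = filter2(f, A);                          % "stencil" interpretation
Bc = conv2(A, f, 'same');                    % true convolution (kernel flipped)
[Bf(2,2) Bc(2,2)]                            % returns [26 19], as computed above
isequal(Bf, conv2(A, rot90(f,2), 'same'))    % returns 1 (true)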