Understanding deconv in Matlab, how it works - matlab

I am trying to understand how deconv works in Matlab.
Can anyone clarify that for me by explaining how this is calculated
[quotient, remainder] = deconv([1 2 8 4 4], [1 1 2 2])
quotient =
     1     1
remainder =
     0     0     5     0     2
I need to understand the step by step method of calculation.
Thank you.

Well, if you understand polynomial (long) division, you already have it. This result just says that
x^4 + 2x^3 + 8x^2 + 4x + 4
divided by
x^3 + x^2 + 2x + 2
equals
x + 1
with remainder
5x^2 + 2
The reason is that convolution is the same as polynomial multiplication, and thus deconvolution is polynomial division.
This is mentioned in deconv documentation:
If u and v are vectors of polynomial coefficients, convolving them is equivalent to multiplying the two polynomials, and deconvolution is polynomial division. The result of dividing v by u is quotient q and remainder r.
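For readers outside MATLAB, the same long division can be sketched with NumPy's polydiv, which also divides coefficient vectors (unlike deconv, it trims leading zeros from the remainder rather than padding it to the dividend's length):

```python
import numpy as np

# (x^4 + 2x^3 + 8x^2 + 4x + 4) divided by (x^3 + x^2 + 2x + 2)
q, r = np.polydiv([1, 2, 8, 4, 4], [1, 1, 2, 2])
print(q)  # [1. 1.]    -> quotient  x + 1
print(r)  # [5. 0. 2.] -> remainder 5x^2 + 2 (leading zeros trimmed)
```

MATLAB's deconv would report the remainder as [0 0 5 0 2]; the two answers describe the same polynomial.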

Related

In CF item-item recommenders, how can I calculate item similarity when the matrix is sparse?

On the way to finding item neighbors, I first need to calculate similarity. How can I calculate it when the matrix is sparse? Is my approach below correct?
In item-based collaborative filtering we calculate the similarity between items.
Here we can use cosine similarity, because no matter how sparse the vectors are, cosine similarity finds neighbors based on the cosine of the angle between the vectors, i.e. their closeness in the vector space, not on the magnitudes of the individual values.
For example:
        Per1  Per2  Per3
Item1      5     3     1
Item2      2     3     3
If we calculate the cosine similarity of two vectors:
Cos_sim_1 = (5*2 + 3*3 + 1*3) / sqrt((25+9+1)*(4+9+9))
Cos_sim_1 = 0.792
And if the matrix is sparse:
        Per1  Per2  Per3  Per4  Per5  Per6  Per7  Per8
Item1      5     3     1     0     0     0     0     0
Item2      2     3     3     0     0     0     0     0
And the cosine similarity of sparse vectors:
Cos_sim_2 = (5*2 + 3*3 + 1*3 + 0*0 + 0*0 +0*0 +0*0 +0*0) / sqrt((25+9+1+0+0+0+0+0)*(4+9+9+0+0+0+0+0))
Cos_sim_2 = 0.792
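As a sanity check, the same two computations can be sketched in NumPy; the trailing zeros of the sparse representation contribute nothing to either the dot product or the norms:

```python
import numpy as np

item1 = np.array([5, 3, 1, 0, 0, 0, 0, 0])
item2 = np.array([2, 3, 3, 0, 0, 0, 0, 0])

# Cosine similarity: dot product divided by the product of the norms
cos_sim = item1 @ item2 / (np.linalg.norm(item1) * np.linalg.norm(item2))
print(cos_sim)  # ~0.7928, identical to the dense 3-person case
```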
I hope it helps!

How to create a polynomial that accepts vectors?

I have a problem with creating a polynomial function of arbitrary degree in Matlab that works when called with a vector as its argument.
I have to write an algorithm that includes, and returns, the value of a polynomial.
Below is my code:
n = 4              % For simplicity; could be any positive integer
f = @(x) x.^[0:n]  % Coefficients are all 1 in this example; otherwise multiply by a vector of them
p = @(x) sum(f(x)) % My polynomial
>> p(5)
ans =
781
This works as planned. But because I need a plot, my polynomial must be able to receive a vector of values and return the corresponding results. When I do this, an error pops up.
Example:
>>p([1 2 3 4])
Error using .^
Matrix dimensions must agree.
Error in @(x)x.^[0:n]
Error in @(x)sum(f(x))
What I want it to return is a vector of length 4 with values of my polynomial [p(1) p(2) p(3) p(4)]
I got around this by creating a values vector with a for loop, but am just wondering, is it possible to change my code, so this would work?
The problem can be easily fixed by using a column vector instead of a row vector:
p([1 2 3 4]')
and explicitly defining the dimension along which you want to take the summation:
p = @(x) sum(f(x), 2)
Explanation
Note that .^ is an element-wise operation. p([1 2 3 4 5]) works, because both row vectors have the same size, but it doesn't return the desired result: it calculates 1^0 + 2^1 + 3^2 + 4^3 + 5^4 = 701.
Matlab automatically expands (in pseudo Matlab code)
[1
 2
 3   .^ [0 1 2 3 4]
 4]
to
[1 1 1 1 1      [0 1 2 3 4
 2 2 2 2 2       0 1 2 3 4
 3 3 3 3 3  .^   0 1 2 3 4
 4 4 4 4 4]      0 1 2 3 4]
Backward compatibility (2006-2016a)
The definition of f should be changed, because these MATLAB versions do not support automatic arithmetic expansion (implicit expansion) yet.
f = @(x) bsxfun(@power, x, 0:n);
Backward compatibility (1996-2005)
bsxfun didn't exist yet, so one has to resort to repmat to explicitly replicate both operands to a common size.
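The same column-against-exponent-row idea can be sketched in NumPy, where broadcasting plays the role of MATLAB's implicit expansion (names here are illustrative):

```python
import numpy as np

n = 4
coeffs = np.ones(n + 1)  # all coefficients are 1, as in the example

def p(x):
    x = np.asarray(x, dtype=float)
    # Column of inputs ** row of exponents broadcasts to a (len(x), n+1) matrix
    powers = x[:, None] ** np.arange(n + 1)
    return powers @ coeffs  # row-wise sum, weighted by the coefficients

print(p([5]))           # 781, as in the scalar case above
print(p([1, 2, 3, 4]))  # values 5, 31, 121, 341
```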

Linear Programming solvable by MATLAB

I want to solve a linear programming problem with MATLAB. For this purpose, I am following this link: Linear Programming.
There, a sample problem is given:
Find x that minimizes
f(x) = -5x1 - 4x2 - 6x3,
subject to
x1 – x2 + x3 ≤ 20
3x1 + 2x2 + 4x3 ≤ 42
3x1 + 2x2 ≤ 30
0 ≤ x1, 0 ≤ x2, 0 ≤ x3.
First, enter the coefficients
f = [-5; -4; -6];
A = [1 -1 1
3 2 4
3 2 0];
b = [20; 42; 30];
lb = zeros(3,1);
Next, call a linear programming routine.
[x,fval,exitflag,output,lambda] = linprog(f,A,b,[],[],lb);
My question is: what is meant by this line?
lb = zeros(3,1);
Without this line, every problem I pass to MATLAB is seen as infeasible. Can you help me with this?
This is not common to ALL linear problems. Here you are dealing with a problem that places constraints on the minimal values of the solution:
0 ≤ x1, 0 ≤ x2, 0 ≤ x3
You have to pass these constraints as parameters of your problem. The way to do so is to specify lower bounds on the solution, which is the sixth argument of linprog (after f, A, b, Aeq, and beq).
Without these bounds, the domain over which you search for a solution is unbounded, and exitflag has the value -3 after calling the function, which is precisely the exit code for unbounded problems.
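The effect of the lower bounds can be sketched outside MATLAB with SciPy's linprog (assuming SciPy is available; note SciPy defaults to x >= 0, so the unbounded case must be requested explicitly):

```python
from scipy.optimize import linprog

c = [-5, -4, -6]
A_ub = [[1, -1, 1],
        [3,  2, 4],
        [3,  2, 0]]
b_ub = [20, 42, 30]

# With lower bounds x >= 0 (MATLAB's lb = zeros(3,1)):
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)  # optimum near x = [0, 15, 3], f = -78

# With no bounds at all the problem is unbounded
# (SciPy status 3, matching MATLAB's exitflag = -3):
free = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
print(free.status)
```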

Impulse response function in matlab

The Matlab documentation has examples for summation of a vector but not for a matrix, so please help me solve the following:
How do I write an impulse response function in Matlab?
I want a Matlab program for the equation:
h_ij(t) = Σ_{k=1}^{n} φ_ik * φ_jk * e^(-x*w_k*t) * sin(w_dk*t) / (M * w_dk)
where
h is the impulse response function,
φ is the mode shape,
x is a constant,
w_k is the kth mode natural frequency,
w_dk is the kth mode damped frequency,
M is the mass matrix.
Summing on a matrix, in general, looks like this:
>> A = randi(5,[3,6]) % Creating a random [3 x 6] integer matrix
A =
3 4 4 1 2 4
3 4 4 3 3 2
4 2 1 5 2 3
>> sum(A) % Sums along dim 1, the default (down each column), so you get a [1 x 6] row vector
ans =
10 10 9 9 7 9
>> sum(A,2) % Sums along dim 2 (across each row), so you get a [3 x 1] column vector
ans =
18
19
17
And similarly, if you had a 3-D array V, you could do sum(V,3) to sum across its slices.
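For comparison, the same dimension logic in NumPy (axes are 0-based there, so MATLAB's dim 1 is axis=0):

```python
import numpy as np

A = np.array([[3, 4, 4, 1, 2, 4],
              [3, 4, 4, 3, 3, 2],
              [4, 2, 1, 5, 2, 3]])

print(A.sum(axis=0))  # down each column (MATLAB dim 1): 10 10 9 9 7 9
print(A.sum(axis=1))  # across each row  (MATLAB dim 2): 18 19 17
```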
If you want more specific help, please post the dimensions of each input (phi_i, phi_j, M, w, and wd).

Physical significance of the rotation of the filter matrix in filter2 function

While using the MATLAB 2D filter function filter2(B,X) and the convolution function conv2(X,B,'same'), I see that filter2 is essentially 2D convolution, but with a 180-degree rotation of the filter coefficient matrix. In terms of the outputs of filter2 and conv2, I saw that the relation below holds:
output matrix of filter2 = each element negated of output of conv2
EDIT: I was incorrect; the above relation does not hold true in general, but I saw it for a few cases. In general, the two output matrices are unrelated, due to the fact that 2 entirely different kernels are obtained in both which are used for convolution.
I understand how 2D convolution is performed. What I want to understand is the implication of this in image processing terms. How do I visualize what is happening here? What does it mean to rotate a filter coefficient matrix by 180 degrees?
I'll start with a very brief discussion of convolution, keeping in mind the 1-D convolution animation from Wikipedia:
As illustrated, convolving two 1-D functions involves reflecting one of them (i.e. the convolution kernel), sliding the two functions over one another, and computing the integral of their product.
When convolving 2-D matrices, the convolution kernel is reflected in both dimensions, and then the sum of the products is computed for every unique overlapping combination with the other matrix. This reflection of the kernel's dimensions is an inherent step of the convolution.
However, when performing filtering we like to think of the filtering matrix as though it were a "stencil" that is directly laid as is (i.e. with no reflections) over the matrix to be filtered. In other words, we want to perform an equivalent operation as a convolution, but without reflecting the dimensions of the filtering matrix. In order to cancel the reflection performed during the convolution, we can therefore add an additional reflection of the dimensions of the filter matrix before the convolution is performed.
Now, for any given 2-D matrix A, you can prove to yourself that flipping both dimensions is equivalent to rotating the matrix 180 degrees by using the functions FLIPDIM and ROT90 in MATLAB:
A = rand(5); %# A 5-by-5 matrix of random values
isequal(flipdim(flipdim(A,1),2),rot90(A,2)) %# Will return 1 (i.e. true)
This is why filter2(f,A) is equivalent to conv2(A,rot90(f,2),'same'). To illustrate further how there are different perceptions of filter matrices versus convolution kernels, we can look at what happens when we apply FILTER2 and CONV2 to the same set of matrices f and A, defined as follows:
>> f = [1 0 0; 0 1 0; 1 0 0] %# A 3-by-3 filter/kernel
f =
1 0 0
0 1 0
1 0 0
>> A = magic(5) %# A 5-by-5 matrix
A =
17 24 1 8 15
23 5 7 14 16
4 6 13 20 22
10 12 19 21 3
11 18 25 2 9
Now, when performing B = filter2(f,A); the computation of output element B(2,2) can be visualized by lining up the center element of the filter with A(2,2) and multiplying overlapping elements:
17*1 24*0 1*0 8 15
23*0 5*1 7*0 14 16
4*1 6*0 13*0 20 22
10 12 19 21 3
11 18 25 2 9
Since elements outside the filter matrix are ignored, we can see that the sum of the products will be 17*1 + 4*1 + 5*1 = 26. Notice that here we are simply laying f on top of A like a "stencil", which is how filter matrices are perceived to operate on a matrix.
When we perform B = conv2(A,f,'same');, the computation of output element B(2,2) instead looks like this:
17*0 24*0 1*1 8 15
23*0 5*1 7*0 14 16
4*0 6*0 13*1 20 22
10 12 19 21 3
11 18 25 2 9
and the sum of the products will instead be 5*1 + 1*1 + 13*1 = 19. Notice that when f is taken to be a convolution kernel, we have to flip its dimensions before laying it on top of A.
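Both B(2,2) computations can be reproduced with a small NumPy sketch, where np.flip plays the role of rot90(f,2):

```python
import numpy as np

f = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 0, 0]])
A = np.array([[17, 24,  1,  8, 15],
              [23,  5,  7, 14, 16],
              [ 4,  6, 13, 20, 22],
              [10, 12, 19, 21,  3],
              [11, 18, 25,  2,  9]])

patch = A[0:3, 0:3]  # the 3x3 neighbourhood centred on A(2,2)

filt_val = np.sum(patch * f)           # filter2-style "stencil" overlay: 26
conv_val = np.sum(patch * np.flip(f))  # conv2 flips the kernel first: 19
print(filt_val, conv_val)
```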