Finding the essential prime implicants

Given the table:
I tried to find the prime implicants of the function F(w, x, y, z) = sum (0, 2, 5, 7, 8, 10, 12, 13, 14, 15). I got an essential prime implicant of xz. I was wondering if there are more essential prime implicants? I am having trouble understanding how to find them.

A prime implicant is a term in a minimal DNF (disjunctive normal form; SOP = sum of products) or minimal CNF (conjunctive normal form; POS = product of sums) expression such that, after removal of any of its literals, it would no longer be an implicant of the output function.
Consider this easier example:
a\b   0   1
    +---+---+
  0 | 0 | 0 |
  1 | 0 |(1)|
    +---+---+
You could write the output function in DNF form as the minterm for the position a=1, b=1 ("circled" by the parentheses in the K-map):
y = a*b
It is a prime implicant, because if you remove any of its literals (a variable or the variable's negation) it won't be an implicant of the output function anymore!
Here you have only two variables, therefore you can remove from the output function definition y = a*b either the a (a or its negation) or the b (b or its negation).
Removing the b from y = a*b, you would circle an unwanted zero at the position a=1, b=0:
a\b   0   1
    +---+---+
  0 | 0 | 0 |
  1 |(0   1)|
    +---+---+   wrong_output_function: y1 = a
or, by removing the a (a or its negation), circle an unwanted zero at the position a=0, b=1:
a\b   0   1
    +---+---+
  0 | 0 |(0)|
  1 | 0 |(1)|
    +---+---+   wrong_output_function: y2 = b
In this example, by choosing to express the output function in DNF you want to circle all ones and NO zeros; that is why both boolean expressions y1 and y2 are wrong.
For your example:
To find the minimal DNF form you want to find the largest possible groups (terms), which contain the fewest possible variables, without threatening the validity of the output function, in this case without including a zero in any of the groups (the output function's implicants).
Here you can see that some of the groups overlap, but that is ok, because the groups have the minimal form by definition. This K-map is minimal, because you cannot make the chosen groups any bigger.
You could for example circle the red ones not captured by the green circle separately, but in that case they would not be prime implicants, because you can make their groups bigger as seen in the previous Karnaugh map. This K-map is therefore not minimal, because there are non-prime implicants.
Note: the indices in your function do not match the indices in my K-map because of a different permutation of the variables. My K-maps match the positions of the variables in your original K-map style; the output function is based on that.
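If you want to double-check a candidate implicant numerically, a minimal MATLAB sketch (assuming the bit order w x y z with w as the most significant bit, matching the minterm numbering in the question) could look like this:
minterms = [0 2 5 7 8 10 12 13 14 15];    % on-set of F(w,x,y,z)
m = 0:15;                                 % all cells of the 4-variable map
x = bitget(m,3);  z = bitget(m,1);        % extract the x and z bits of each cell index
covered = m(x==1 & z==1);                 % cells covered by x*z: 5, 7, 13, 15
all(ismember(covered, minterms))          % returns 1 (true), so x*z covers only ones and is an implicant of F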

Related

Output a matrix size n x m, 1 when the sum of the indices is even, 0 otherwise

I'm attempting the following as a hobby, not as homework. In Computer Programming with MATLAB by J. Michael Fitzpatrick and Akos Ledeczi, there is a practice problem that asks this:
Write a function called alternate that takes two positive integers, n and m, as input arguments (the function does not have to check the format of the input) and returns one matrix as an output argument. Each element of the n-by-m output matrix for which the sum of its indices is even is 1.
All other elements are zero.
A previous problem was similar, and I wrote a very simple function that does what it asks:
function A = alternate(n,m)
A(1:n,1:m)=0;
A(2:2:n,2:2:m)=1;
A(1:2:n,1:2:m)=1;
end
Now my question is, is that good enough? It outputs exactly what it asks for, but it's not checking for the sum. So far we haven't discussed nested if statements or anything of that sort, we just started going over very basic functions. I feel like giving it more functionality would allow it to be recycled better for future use.
Great to see you're learning. Step 1 in learning any programming language should be to ensure you always add relevant comments! This helps you, and anyone reading your code. So the first improvement would be this:
function A = alternate(n,m)
% Function to return an n*m matrix, which is 1 when the sum of the indices is even
A(1:n,1:m)=0; % Create the n*m array of zeros
A(2:2:n,2:2:m)=1; % All elements with even row and col indices: even+even=even
A(1:2:n,1:2:m)=1; % All elements with odd row and col indices: odd+odd=even
end
You can, however, make this more concise (discounting comments), and perhaps more clearly relate to the brief:
function A = alternate(n,m)
% Function to return an n*m matrix, which is 1 when the sum of the indices is even
% Sum of row and col indices. Uses implicit expansion (R2016b+) to form
% a matrix from a row and column array
idx = (1:n).' + (1:m);
% We want 1 when x is even, 0 when odd. mod(x,2) is the opposite, so 1-mod(x,2) works:
A = 1 - mod( idx, 2 );
end
Both functions do the same thing, and it's personal preference (and performance related for large problems) which you should use.
I'd argue that, even without comments, the alternative I've written more clearly does what it says on the tin. You don't have to know the brief to understand you're looking for the even index sums, since I've done the sum and tested if even. Your code requires interpretation.
It can also be written as a one-liner, whereas the indexing approach can't be (as you've done it).
A = 1 - mod( (1:n).' + (1:m), 2 ); % 1 when row + column index is even
Your function works fine and outputs the desired result; let me propose an alternative:
function A = alternate(n,m)
A = zeros( n , m ) ; % pre-allocate result (all elements at 0)
[x,y] = meshgrid(1:m,1:n) ; % define a grid of indices
A(mod(x+y,2)==0) = 1 ; % modify elements of "A" whose indices verify the condition
end
Which returns:
>> alternate(4,5)
ans =
1 0 1 0 1
0 1 0 1 0
1 0 1 0 1
0 1 0 1 0
initialisation:
The first line is the equivalent of your first line, but it is the canonical MATLAB way of creating a new matrix.
It uses the function zeros(n,m).
Note that MATLAB has similar functions to create and preallocate matrices of different types, for example:
ones(n,m)   Create a matrix of double, size [n,m], with all elements set to 1
nan(n,m)    Create a matrix of double, size [n,m], with all elements set to NaN
false(n,m)  Create a matrix of logical, size [n,m], with all elements set to false
There are several other predefined matrix construction functions, some more specialised (like eye), so before trying hard to generate your initial matrix, you can check the documentation to see whether a specialised function exists for your case.
indices
The second line generates two matrices x and y which will be the indices of A. It uses the function meshgrid. For example, in the case shown above, x and y look like:
| x = | y = |
| 1 2 3 4 5 | 1 1 1 1 1 |
| 1 2 3 4 5 | 2 2 2 2 2 |
| 1 2 3 4 5 | 3 3 3 3 3 |
| 1 2 3 4 5 | 4 4 4 4 4 |
odd/even indices
Calculating the sum of the indices is now trivial in MATLAB, as easy as:
>> x+y
ans =
2 3 4 5 6
3 4 5 6 7
4 5 6 7 8
5 6 7 8 9
Now we just need to know which ones are even. For this we'll use the modulo operator (mod) on this summed matrix:
>> mod(x+y,2)==0
ans =
1 0 1 0 1
0 1 0 1 0
1 0 1 0 1
0 1 0 1 0
This resulting logical matrix is the same size as A and contains 1 where the sum of the indices is even, and 0 otherwise. We can use this logical matrix to modify only the elements of A which satisfy the condition:
>> A(mod(x+y,2)==0) = 1
A =
1 0 1 0 1
0 1 0 1 0
1 0 1 0 1
0 1 0 1 0
Note that in this case the logical matrix found in the previous step would have been enough, since the value to assign to the selected indices is 1, which is the same as MATLAB's numeric representation of true. If you want to assign a different value, with the same index condition, simply replace the last assignment:
A(mod(x+y,2)==0) = your_target_value ;
I don't like spoiling the learning. So let me just give you some hints.
Matlab is very efficient if you do operations on vectors, not on individual elements. So, why not create two matrices (e.g. N, M) that hold all the indices? Have a look at the meshgrid() function.
Then you might be able to find all positions with an even sum of indices in one line.
The second hint is that the output of a logical operation, e.g. B = A==4, is a logical matrix. You can convert this to a numeric matrix of zeros and ones by using B = double(B).
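To illustrate just that conversion (with a made-up matrix A, not the solution to the exercise):
A = [1 4; 4 2];
B = (A == 4)      % logical matrix: [0 1; 1 0]
B = double(B)     % same 0/1 values, now stored as a numeric (double) matrix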
Have fun!

Convolution using 'valid' in Matlab's conv() function

Here is an example of convolution given:
I have two questions here:
Why is the vector 𝑥 padded with two 0s on each side, given that the length of the kernel ℎ is 3? If 𝑥 were padded with only one 0 on each side, the middle element of the convolution output would still be within the range of the length of 𝑥, so why not one 0 on each side?
Explain the following output to me:
>> x = [1, 2, 1, 3];
>> h = [2, 0, 1];
>> y = conv(x, h, 'valid')
y =
3 8
>>
What is valid doing here in the context of the previously shown mathematics on vectors 𝑥 and ℎ?
I can't speak as to the amount of zero padding that is proper .... That being said, any zero padding is making up data that is not there. This isn't necessarily wrong, but you should be aware that values computed from this padded information may be biased. Sometimes you care about this, sometimes you don't. Introducing 1 zero (in this case) would leave the middle kernel value always in the data, but why should that be the stopping criterion? Importantly, adding on 2 zeros still leaves one multiplication of values that are actually present in the data and the kernel (the x[0]*h[0] and x[3]*h[2] - using 0 based indexing). Adding on a 3rd zero (or more) would just yield zeros in the output since 3 is the length of the kernel. In other words, zero padding will always yield an output that is partially based on the actual data (but not completely) for any zero padding from n = 1 to n = length(h)-1 (in this case either 1 or 2).
Even though zero padding with length 2 or 1 still has multiplications based on real data, some values are summed over "fake" data (those multiplied with a padded zero). In this case Matlab gives you 3 options for how you want the data returned. First, you can get the 'full' convolution, which includes values that are biased because they include adding in 0 values that aren't really in the data. Alternatively you can get 'same', which means the length of the output is the length of the data: y = [4 3 8 1]. This corresponds to 1 zero of padding, but note that for longer kernels you could technically get other lengths between 'full' and 'same'; Matlab just doesn't return those for you.
Finally, and probably most important to understand out of all this, you have the 'valid' option. In your example only 2 samples of the output are computed from summations that involve only multiplications over real data (i.e. from multiplying samples of the kernel with samples from x and not from zeros). More specifically:
y[2] = h[2]*x[0] + h[1]*x[1] + h[0]*x[2] = 3 // 0 based indexing like the example
y[3] = h[2]*x[1] + h[1]*x[2] + h[0]*x[3] = 8
Note that none of the other y values are computed with only h and x; they all involve a padded zero, which is not necessarily indicative of the real data. For example:
y[4] = h[2]*x[2] + h[1]*x[3] + h[0]*0 <= padded zero
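If it helps to see all three modes side by side, here is a small sketch using the same x and h from the question (the commented outputs are what conv returns):
x = [1, 2, 1, 3];
h = [2, 0, 1];
conv(x, h, 'full')    % [2 4 3 8 1 3] : full zero padding, length(x)+length(h)-1 samples
conv(x, h, 'same')    % [4 3 8 1]     : central part, same length as x
conv(x, h, 'valid')   % [3 8]         : only the sums that use no padded zeros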

Matlab: Covariance Matrix from matrix of combinations using E(X) and E(X^2)

I have a set of independent binary random variables (say A, B, C) which take a positive value with some probability and zero otherwise, for which I have generated a matrix of 0s and 1s of all possible combinations of these variables with at least one 1, i.e.
A B C
1 0 0
0 1 0
0 0 1
1 1 0
etc.
I know the values and probabilities of A,B,C so I can calculate E(X) and E(X^2) for each. I want to treat each combination in the above matrix as a new random variable equal to the product of the random variables which are present in that combination (show a 1 in the matrix). For example, random variable Row4 = A*B.
I have created a matrix of the same size to the above, which shows the relevant E(X)s instead of the 1s, and 1s instead of the 0s. This allows me to easily calculate the vector of Expected values of the new random variables (one per combination) as the product of each row. I have also generated a similar matrix which shows E(X^2) instead of E(X), and another one which shows prob(X>0) instead of E(X).
I'm looking for a Matlab script that computes the Covariance matrix of these new variables i.e. taking each row as a random variable. I presume it will have to use the formula:
Cov(X,Y)=E(XY)-E(X)E(Y)
For example, for rows (1 1 0) and (1 0 1):
Cov(X,Y)=E[(AB)(AC)]-E(X)E(Y)
=E[(A^2)BC]-E(X)E(Y)
=E(A^2)E(B)E(C)-E(X)E(Y)
These values I already have from the matrices I've mentioned above. For each Covariance, I'm just unsure how to know which two variables appear in both rows, because for those I will have to select E(X^2) instead of E(X).
Alternatively, the above can be written as:
Cov(X,Y)=E(X)E(Y)*[1/prob(A>0)-1]
But the problem remains as the probabilities in the denominator will only be the ones of the variables which are shared between two combinations.
Any advice on how automate the computation of the Covariance matrix in Matlab would be greatly appreciated.
I'm pretty sure this is not the most efficient way to do that but that's a start:
Assume r1 ... rn are the combinations of the random variables, and R is the matrix:
A B C
r1 1 0 0
r2 0 1 0
r3 0 0 1
r4 1 1 0
If you have the vectors E1, E2 and ER as:
E1 = [E(A) E(B) E(C) ...]
E2 = [E(A²) E(B²) E(C²) ...]
ER = [E(r1) E(r2) E(r3) ...]
If you want to compute E(r1*r2) you can:
1) Extract the r1 and r2 rows from R
v1 = R(1,:)
v2 = R(2,:)
2) Sum both vectors in vs
vs = v1 + v2
3) Loop over vs: if you see a 2, that means the value from E2 has to be used; if you see a 1, use the value from E1; if it is 0, do not use that variable at all.
4) Using the loop, compute your E(r1*r2) as wanted.
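Putting those steps together, here is a rough sketch of how the whole covariance matrix could be computed (R, E1 and E2 are the matrices/vectors described above; the numeric values are only made-up placeholders):
R  = [1 0 0; 0 1 0; 0 0 1; 1 1 0];    % combination matrix (rows r1..r4)
E1 = [0.3 0.5 0.2];                   % placeholder values for E(A), E(B), E(C)
E2 = [0.2 0.4 0.1];                   % placeholder values for E(A^2), E(B^2), E(C^2)

nr = size(R,1);
ER = prod(E1.^R, 2);                  % E(ri): product of E(X) over the variables present in row i (implicit expansion, R2016b+)
C  = zeros(nr);
for i = 1:nr
    for j = 1:nr
        shared = R(i,:) & R(j,:);                     % variables in both rows -> take E(X^2)
        single = xor(R(i,:), R(j,:));                 % variables in exactly one row -> take E(X)
        Erirj  = prod(E2(shared)) * prod(E1(single)); % E(ri*rj), using independence
        C(i,j) = Erirj - ER(i)*ER(j);                 % Cov(ri,rj) = E(ri*rj) - E(ri)E(rj)
    end
end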

Can anyone explain to me what is going on in these lines of MATLAB code

y = rand(20,3);
aa= unidrnd(2,20,3) - 1;
val = ( aa & y<1.366e-04) | (~aa & y<8.298e-04);
aa(val) = ~aa(val);
I have this code.
Can anyone explain to me what is happening here? I have tried to understand it step by step (debugging) but I cannot understand the purpose of using the negation operator '~' in line 4, and also of using 'val' as indices.
y = rand(20,3);
Creates a matrix of uniformly distributed random numbers, y.
aa= unidrnd(2,20,3) - 1;
Creates a matrix of uniformly distributed random integers that go from 1 to 2, and then subtracts one. Thus, aa is a matrix of 0s and 1s.
val = ( aa & y<1.366e-04) | (~aa & y<8.298e-04);
This line finds all the positions where aa is 1 AND y<1.366e-04, OR aa is 0 AND y<8.298e-04. Note that this rarely happens: since y contains uniformly distributed numbers from 0 to 1, values this small are unlikely.
aa(val) = ~aa(val);
Take all those positions computed before, and flip aa from 0 to 1 or from 1 to 0 wherever the condition held.
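If the last two lines are the confusing part, here is a tiny standalone example of the same pattern (the mask val here is just made up):
aa  = [0 1 1 0];
val = logical([1 0 1 0]);   % made-up mask of positions that satisfy some condition
aa(val) = ~aa(val)          % flips only the masked positions -> aa becomes [1 1 0 0]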

What is the Haskell / hmatrix equivalent of the MATLAB pos function?

I'm translating some MATLAB code to Haskell using the hmatrix library. It's going well, but
I'm stumbling on the pos function, because I don't know what it does or what its Haskell equivalent will be.
The MATLAB code looks like this:
[U,S,V] = svd(Y,0);
diagS = diag(S);
...
A = U * diag(pos(diagS-tau)) * V';
E = sign(Y) .* pos( abs(Y) - lambda*tau );
M = D - A - E;
My Haskell translation so far:
(u,s,v) = svd y
diagS = diag s
a = u `multiply` (diagS - tau) `multiply` v
This actually type checks ok, but of course, I'm missing the "pos" call, and it throws the error:
inconsistent dimensions in matrix product (3,3) x (4,4)
So I'm guessing pos does something with matrix size? Googling "matlab pos function" didn't turn up anything useful, so any pointers are very much appreciated! (Obviously I don't know much MATLAB)
Incidentally this is for the TILT algorithm to recover low rank textures from a noisy, warped image. I'm very excited about it, even if the math is way beyond me!
Looks like the pos function is defined in a different MATLAB file:
function P = pos(A)
P = A .* double( A > 0 );
I can't quite decipher what this is doing. Assuming that boolean values cast to doubles where "True" == 1.0 and "False" == 0.0, does it turn negative values to zero and leave positive numbers unchanged?
It looks as though pos finds the positive part of a matrix. You could implement this directly with mapMatrix
pos :: (Storable a, Num a, Ord a) => Matrix a -> Matrix a
pos = mapMatrix go
  where
    go x | x > 0     = x
         | otherwise = 0
Though, unlike Haskell, Matlab makes no distinction between a Matrix and a Vector.
But it's worth analyzing that Matlab fragment more. Per http://www.mathworks.com/help/matlab/ref/svd.html the first line computes the "economy-sized" Singular Value Decomposition of Y, i.e. three matrices such that
U * S * V' = Y
where, assuming Y is m x n, then U is m x n, S is n x n and diagonal, and V is n x n. Further, both U and V should have orthonormal columns. In linear algebraic terms this separates the linear transformation Y into two "rotation" components and a central scaling component of singular values.
Since S is diagonal, we extract that diagonal as a vector using diag(S) and then subtract a term tau, which must also be a vector. This might produce a diagonal containing negative values, which cannot be properly interpreted as singular values, so pos is there to trim out the negative values, setting them to 0. We then use diag to convert the resulting vector back into a diagonal matrix and multiply the pieces back together to get A, a modified form of Y.
Note that we can skip some steps in Haskell, as svd (and its "economy-sized" partner thinSVD) returns a vector of singular values instead of a mostly-zero diagonal matrix.
(u, s, v) = thinSVD y
-- note the trans here, that was the ' in Matlab
a = u `multiply` diag (cmap (max 0) s) `multiply` trans v
Above, cmap (from Numeric.Container) maps max 0 over the Vector of singular values s, and then diag reinflates the Vector into a Matrix prior to the multiplys. With a little thought it's easy to see that max 0 is just pos applied to a single element.
(A>0) returns the positions of the elements of A which are larger than zero, so for example, if you have
A = [ -1 2 -3 4
5 6 -7 -8 ]
then B = (A > 0) returns
B = [ 0 1 0 1
1 1 0 0]
Note that we have a 1 corresponding to each element of A which is larger than zero, and a 0 otherwise.
Now if you multiply this elementwise with A using the .* notation, then you are multiplying each element of A that is larger than zero by 1, and by zero otherwise. That is, A .* B means
[ -1*0 2*1 -3*0 4*1
5*1 6*1 -7*0 -8*0 ]
giving finally,
[ 0 2 0 4
5 6 0 0 ]
So you need to write your own function that will return positive values intact, and negative values set to zero.
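For instance, in MATLAB (and assuming all you want is the positive part), the expression from the question and the simpler built-in max give the same result:
A  = [ -1 2 -3 4 ; 5 6 -7 -8 ];
P1 = A .* double(A > 0)    % the pos from the question: [0 2 0 4; 5 6 0 0]
P2 = max(A, 0)             % equivalent result using the built-in max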
And also, u and v do not match in dimension for a general SVD decomposition, so you actually need to RE-DIAGONALIZE pos(diagS - tau), so that u * diag(pos(diagS - tau)) conforms with v.