Randomizing values in Specman

Given 3 numbers:
a: uint (bits: 2);
b: uint (bits: 2);
c: uint (bits: 2);
What is the way to define constraints for these numbers that satisfy the following:
- at least one of them has to be different from 0
- the product of all the non-zero numbers has to be in a certain range (e.g. [3..20])

The most elegant way to write it would be:
keep ((a>0?a:1) * (b>0?b:1) * (c>0?c:1)) in [3..20];
Note that it also enforces that at least one of the items is different from 0, since if all of them are 0 the product would be 1 and the constraint would not be satisfied.
But to make it clearer for future readers you can add:
keep a > 0 or b > 0 or c > 0;

You can write all of these out explicitly:
struct some_struct {
    a: uint (bits: 2);
    b: uint (bits: 2);
    c: uint (bits: 2);
    keep a > 0 or b > 0 or c > 0;
    keep a > 0 and b > 0 and c > 0 => a * b * c in [3..20];
    keep a > 0 and b > 0 and c == 0 => a * b in [3..20];
    keep a > 0 and b == 0 and c > 0 => a * c in [3..20];
    keep a == 0 and b > 0 and c > 0 => b * c in [3..20];
    keep a == 0 and b == 0 and c > 0 => c in [3..20];
    keep a == 0 and b > 0 and c == 0 => b in [3..20];
    keep a > 0 and b == 0 and c == 0 => a in [3..20];
};
This is quite a lot of code, and if you need to change the range ([3..20]) you have to do it in several places.
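If you want to convince yourself that the compact constraint and the explicit case split describe the same solution space, a quick brute-force check helps. The sketch below is plain Python (not Specman e, added only as an illustration) and enumerates the 2-bit domain 0..3 with the range [3..20] from the question:
from itertools import product

LO, HI = 3, 20

def compact_ok(a, b, c):
    # keep ((a>0?a:1) * (b>0?b:1) * (c>0?c:1)) in [3..20];
    p = (a if a > 0 else 1) * (b if b > 0 else 1) * (c if c > 0 else 1)
    return LO <= p <= HI

def explicit_ok(a, b, c):
    # at least one non-zero, and the product of the non-zero values in [3..20]
    if a == b == c == 0:
        return False
    p = 1
    for x in (a, b, c):
        if x > 0:
            p *= x
    return LO <= p <= HI

for a, b, c in product(range(4), repeat=3):  # uint (bits: 2) -> 0..3
    assert compact_ok(a, b, c) == explicit_ok(a, b, c)
print("both formulations accept exactly the same (a, b, c) tuples")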

Is there a way to display every line of code in Julia when it is not suppressed with ";", like in MATLAB?

Say that I am running a Julia script. I would like every line of code to be displayed in the terminal like it is in MATLAB. Is there any way in which I can do that? It is clunky for me to write display(...) for every variable that I want to see in the terminal, especially when I want to check my work quickly.
For instance, say that I have this code:
a = [1; 0; 0]
b = [0; 1; 0]
c = [0; 0; 1]
a * transpose(a)
b * transpose(b)
c * transpose(c)
I would like all six of these lines to be automatically displayed in the terminal, instead of having to write, say:
a = [1; 0; 0]
b = [0; 1; 0]
c = [0; 0; 1]
display(a)
display(b)
display(c)
display(a * transpose(a))
display(b * transpose(b))
display(c * transpose(c))
Thank you in advance.
One way to handle that is to write your own macro:
macro displayall(code)
    for i in eachindex(code.args)
        # skip the LineNumberNode entries the parser inserts into the block
        typeof(code.args[i]) == LineNumberNode && continue
        # wrap every remaining expression in a call to display
        code.args[i] = quote
            display($(esc(code.args[i])));
        end
    end
    return code
end
Now you can use it like this:
julia> @displayall begin
           a = [1; 0; 0]
           b = [0; 1; 0]
           c = [0; 0; 1]
           a * transpose(a)
           b * transpose(b)
           c * transpose(c)
       end
3-element Vector{Int64}:
 1
 0
 0
3-element Vector{Int64}:
 0
 1
 0
3-element Vector{Int64}:
 0
 0
 1
3×3 Matrix{Int64}:
 1  0  0
 0  0  0
 0  0  0
3×3 Matrix{Int64}:
 0  0  0
 0  1  0
 0  0  0
3×3 Matrix{Int64}:
 0  0  0
 0  0  0
 0  0  1

Get dynamic rows of matrix based on the ones of another matrix (Matlab)

I am new to Matlab and I need some help.
I want to compute the parity check matrix and then encode a codeword using the generator matrix.
My matrix is the following:
1 0 0 0 1 1 1
0 1 0 0 1 1 0
0 0 1 0 1 0 1
0 0 0 1 0 1 1
The codeword is 1 0 1 1.
My code in Matlab is as follows:
printf('Generator Matrix\n');
G = [
1 0 0 0 1 1 1;
0 1 0 0 1 1 0;
0 0 1 0 1 0 1;
0 0 0 1 0 1 1
]
[k,n] = size(G)
P = G(1:k,k+1:n)
PT = P'
printf('Parity Check Matrix\n');
H = cat(2,PT,eye( n-k ))
printf('Encode the following word : \n');
D = [1 0 1 1]
C = xor( G(1,:), G(3,:) , G(4,:) )
My problem is that I want to dynamically select the rows of the G matrix in order to apply the xor operation.
Could you help me please?
Thanks a lot
You only need matrix multiplication modulo 2:
C = mod(D*G, 2);
Alternatively, compute the sum of the rows of G indicated by D, modulo 2:
C = mod(sum(G(D==1,:), 1), 2);
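For a quick sanity check, here is the same computation in Python/NumPy (not part of the MATLAB answer above, just a way to see the expected result for the D and G from the question):
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 1]])
D = np.array([1, 0, 1, 1])

C = (D @ G) % 2                    # matrix product modulo 2
C_alt = G[D == 1].sum(axis=0) % 2  # sum of the rows of G selected by D, modulo 2
print(C)      # [1 0 1 1 0 0 1]
print(C_alt)  # [1 0 1 1 0 0 1]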

Boolean Simplification - Q=A.B.(~B+C)+B.C+B

I've been struggling with boolean simplification in class, so I took some questions home to practice. I found a list of questions, but they don't have any answers or workings. This one I'm stuck on; if you could answer clearly showing each step I'd much appreciate it:
Q=A.B.(~B+C)+B.C+B
I tried looking for a calculator to give me the answer and then to work out how to get to that, but I'm lost
(I'm new to this)
Edit: ~B = NOT B
I've never done this, so I'm using this site to help me.
A.B.(B' + C) = A.(B.B' + B.C) = A.(0 + B.C) = A.(B.C)
So the expression is now A.(B.C) + B.C + B.
Not sure about this, but I'm guessing A.(B.C) + (B.C) = (A + 1).(B.C).
Since A + 1 = 1, this equals B.C.
So the expression is now B.C + B.
By absorption, B.C + B = (C + 1).B = 1.B = B.
So Q = B.
Let's be lazy and use sympy, a Python library for symbolic computation.
>>> from sympy import *
>>> from sympy.logic import simplify_logic
>>> a, b, c = symbols('a, b, c')
>>> expr = a & b & (~b | c) | b & c | b # A.B.(~B+C)+B.C+B
>>> simplify_logic(expr)
b
There are two ways to go about such a formula:
- Applying simplifications
- Brute force
Let's look at brute force first. The following is a truth table (for a better-looking table, look at Wα), enumerating all possible values for a, b and c, alongside the values of the expression.
a b c | a & b & (~b | c) | b & c | b | Q
0 0 0 |        0         |   0   | 0 | 0
0 0 1 |        0         |   0   | 0 | 0
0 1 0 |        0         |   0   | 1 | 1
0 1 1 |        0         |   1   | 1 | 1
1 0 0 |        0         |   0   | 0 | 0
1 0 1 |        0         |   0   | 0 | 0
1 1 0 |        0         |   0   | 1 | 1
1 1 1 |        1         |   1   | 1 | 1
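If you prefer, the same table can be reproduced with a short brute-force loop (plain Python, added here only to illustrate the brute-force approach):
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            # Q = a & b & (~b | c) | b & c | b, evaluated as a boolean
            q = (a and b and ((not b) or c)) or (b and c) or b
            print(a, b, c, "->", int(bool(q)))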
You can also think of the expression as a tree, which will depend on the precedence rules (e.g. usually AND binds stronger than OR, see also this question on math.se).
So the expression:
a & b & (~b | c) | b & c | b
is a disjunction of three terms:
a & b & (~b | c)
b & c
b
You can try to reason about the individual terms, knowing that only one has to be true (as this is a disjunction).
The last two can only be true if b is true. For the first term this is a bit harder to see, but look closely: it is a conjunction (terms joined by AND), so all of its parts must be true for the term to be true; in particular a and b must be true, so especially b must be true.
In summary: whenever the whole expression is true, b must be true (and the expression is false if b is false). Conversely, if b is true, the third term alone already makes the expression true. So it simplifies to just b.
Explore more on Wolfram Alpha:
https://www.wolframalpha.com/input/?i=a+%26+b+%26+(~b+%7C+c)+%7C+b+%26+c+%7C+b
A.B.(~B+C) + B.C + B
= A.B.~B + A.B.C + B.C + B    ; distribution
= A.B.C + B.C + B             ; because B.~B = 0
= B.C + B                     ; because A.B.C <= B.C (absorption)
= B                           ; because B.C <= B (absorption)

Minizinc - assign job to specific machine

Say I have n jobs and m machines, and a jobtype array with the type of each job. If a job is of a specific type in the jobtype array, I have to assign it to an even-numbered machine among those available, otherwise to an odd-numbered machine. Is this possible with MiniZinc?
Snippet I tried given below:
forall(w in 1..num_workers) (
    if jobtype[job] == "NC" then assignment[job, (w mod 2 == 0)] = 1
    else assignment[job, (w mod 2 != 0)] = 1 endif
)
which is giving the following warning
WARNING: undefined result becomes false in Boolean context
(array access out of bounds)
TIA
Here is one model that might be what you want, i.e. assigning even-numbered machines to the jobs marked as "NC". The important constraint is the following: we use a temporary decision variable w in the range 1..num_workers, and then ensure that for the "NC" jobs the machine number is even:
forall(job in 1..num_jobs) (
    let { var 1..num_workers: w; } in
    % Jobs with NC must be assigned to even-numbered workers (machines)
    if jobtype[job] == "NC" then w mod 2 == 0 else w mod 2 == 1 endif
    /\ assignment[job, w] = 1
)
Here is the full model - as I imagined it - with 7 jobs and 7 workers. I assume that a worker/machine can only be assigned to at most one job. Well, it's a lot of guesswork...
int: num_workers = 7;
int: num_jobs = 7;
array[1..num_jobs] of string: jobtype = ["NC", "X", "NC", "X", "X", "NC", "X"];

% decision variables
array[1..num_jobs, 1..num_workers] of var 0..1: assignment;

solve satisfy;

constraint
    forall(job in 1..num_jobs) (
        let { var 1..num_workers: w; } in
        % Jobs with NC must be assigned to even-numbered workers (machines)
        if jobtype[job] == "NC" then w mod 2 == 0 else w mod 2 == 1 endif
        /\
        assignment[job, w] = 1
    )
    /\ % at most one worker for each job (or perhaps not)
    forall(job in 1..num_jobs) (
        sum([assignment[job, worker] | worker in 1..num_workers]) <= 1
    )
    /\ % and at most one job per worker (or perhaps not)
    forall(worker in 1..num_workers) (
        sum([assignment[job, worker] | job in 1..num_jobs]) <= 1
    )
;

output [
    if w == 1 then "\n" else " " endif ++
    show(assignment[j, w]) ++
    if w == num_workers then " (\(jobtype[j]))" else "" endif
    | j in 1..num_jobs, w in 1..num_workers
];
The model yields 144 different solutions. Here's the first:
0 0 0 0 0 1 0 ("NC")
0 0 0 0 0 0 1 ("X")
0 0 0 1 0 0 0 ("NC")
0 0 0 0 1 0 0 ("X")
0 0 1 0 0 0 0 ("X")
0 1 0 0 0 0 0 ("NC")
1 0 0 0 0 0 0 ("X")

assigning rows and columns

I am trying to plot data to a grid that is made up of hexagons. Because of this, the row lengths alternate between two different values.
i.e. a grid would look like this with row lengths 4 and 5:
0 0 0 0 0
0 0 0 0
0 0 0 0 0
0 0 0 0
Does anyone know a clever way to approach this? I thought about using flags to tell which row you are in, but I feel like there could be a more elegant solution.
If all you're trying to do is figure out the row length, it's quite simple: Just use modulus of 2.
In your example, assume the top row has an index of 0, and the index increases as you go down.
rowLength = rowIndex % 2 == 0 ? 5 : 4;
0 % 2 == 0 --> 5
1 % 2 == 1 --> 4
2 % 2 == 0 --> 5
3 % 2 == 1 --> 4
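To make the idea concrete, here is a small sketch (in Python, since the answer above is language-agnostic; the variable names are just illustrative) that builds the alternating 5/4 grid from the row-index parity:
num_rows = 4
rows = []
for row_index in range(num_rows):
    row_length = 5 if row_index % 2 == 0 else 4   # even rows get 5 cells, odd rows get 4
    rows.append(" ".join("0" * row_length))
print("\n".join(rows))
# 0 0 0 0 0
# 0 0 0 0
# 0 0 0 0 0
# 0 0 0 0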
Alternatively, you can have the rows always have a length of five, and on every other row store a value (such as null) that indicates the value should be skipped:
0 0 0 0 0
0 0 0 0 (null)
0 0 0 0 0
0 0 0 0 (null)
One solution is to write a "generic" pattern generator:
Takes a pattern, in your case "0 ". This specific pattern uses 2 characters.
Given the number of rows, say nR, and the number of characters per row, say nC (in your case nC = 5*2 - 1 = 9): repeat the pattern into one long string, initially without any line breaks.
Using the number of characters per row nC, insert a line break after every nC characters throughout the string.
This can be done in a somewhat "swifty functional" manner:
func plotGrid(numRows: Int, numCharsPerRow: Int, myPattern: String) {
    let totalNumChars = numRows*numCharsPerRow
    let numCharsInPattern = myPattern.characters.count
    // repeat the pattern enough times to cover the whole grid (no line breaks yet)
    let gridTemplate = [String](count: totalNumChars/numCharsInPattern + totalNumChars%numCharsInPattern, repeatedValue: myPattern).reduce("", combine: +)
    // slice the template into rows of numCharsPerRow characters, each followed by a line break
    let grid = 0.stride(to: totalNumChars, by: numCharsPerRow)
        .map { i -> String in
            let a = gridTemplate.startIndex.advancedBy(i)
            let b = a.advancedBy(numCharsPerRow, limit: gridTemplate.endIndex)
            return gridTemplate[a..<b] + "\n"
        }.reduce("", combine: +)
    print(grid)
}
With the results (for your example and another 4-character pattern example, respectively)
plotGrid(5, numCharsPerRow: 9, myPattern: "0 ")
//0 0 0 0 0
// 0 0 0 0
//0 0 0 0 0
// 0 0 0 0
//0 0 0 0 0
plotGrid(4, numCharsPerRow: 10, myPattern: "X O ")
//X O X O X
//O X O X O
//X O X O X
//O X O X O
Note, however, that in the let grid assignment you create an array that is immediately reduced, so there is some unnecessary overhead here, especially for large grids. The imperative approach below should be faster, but the above is perhaps neater.
One possible (among many) classic imperative approach:
let numRows = 5
let numCharsPerRow = 9
let myPattern = "0 "
let totalNumChars = numRows*numCharsPerRow
var grid = "0"
var addZero = false
for i in 2...totalNumChars {
    grid += addZero ? "0" : " "
    addZero = !addZero
    if i%numCharsPerRow == 0 {
        grid += "\n"
    }
}
print(grid)
//0 0 0 0 0
// 0 0 0 0
//0 0 0 0 0
// 0 0 0 0
//0 0 0 0 0