SystemVerilog throughout usage

I'm trying to code the following:
Let's say that X is a boolean expression and Y is a sequence. I would like to have a property saying that (X throughout Y) |-> term_to_check,
meaning: if X holds for the entire Y sequence, then at the end of the sequence check some term. The problem is that I'm not sure whether the above writing will check the term at the end of the sequence or during it.
Any ideas how to code the described behavior?

By using the overlapping implication operator |->, the term_to_check is checked in the same clock cycle that sequence Y ends. If you want the check to happen in the cycle following the end of the sequence, you could use the non-overlapping implication |=>, but my preference is always to use the overlapping operator with |-> ##1.
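A minimal sketch of both forms, using the X, Y and term_to_check names from the question and an assumed sampling clock clk (this fragment would sit inside a module, interface or checker):

// Sketch: X, Y and term_to_check are the names from the question; clk is assumed.
property p_check_at_end;
  @(posedge clk) (X throughout Y) |-> term_to_check;     // checked in the cycle Y ends
endproperty

property p_check_after_end;
  @(posedge clk) (X throughout Y) |-> ##1 term_to_check; // checked one cycle after Y ends
endproperty

assert property (p_check_at_end);
assert property (p_check_after_end);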


Clingo: How to do "If p causes UNSAT then q."

In the code below
% Facts
a.
% Rules
-a :- a, not not p.
Adding the fact p. to the above would cause it to be UNSAT. Is there a way in clingo to add a rule to show this? Something like
q :- Assuming p causes UNSAT.
Solutions like adding the rule
{p; q} = 1.
Wouldn't work. It would give q. in the answer set if p. causes UNSAT, as I'd want. But, it would give p. and q. as answer sets when p. doesn't cause UNSAT. In the case of p. not causing UNSAT I wouldn't want q. in the answer set.
I'd like to be able to check whether certain facts cause a certain complex condition to not hold. For example, suppose one part of a problem requires you to check that a graph does NOT contain a Hamiltonian cycle. A program that finds Hamiltonian cycles would return UNSAT if the graph satisfies the condition, but I wouldn't want the program to end, since there would be other calculations to do.
You can probably treat it as an optimisation problem. So something that would normally be a constraint is turned into a special predicate which you then seek to minimise. Basically you have something of the form:
err(err1) :- some_bad_condition.
err(err2) :- some_other_bad_condition.
#minimize{ 1,XXX : err(XXX) }.
Here the XXX is a unique identifier for each error condition. The optimisation statement finds a model that minimises the number of errs.
The only caveat is that this increases the complexity of the problem: you're now solving an optimisation problem instead of a decision problem. It is a useful trick when debugging ASP programs, but for large/difficult problems it may be too slow.
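For a concrete toy illustration of the pattern (the predicate names a, b, err/1 and the two "requirements" are made up for this sketch): two requirements that cannot both hold are recorded as err atoms instead of hard constraints, and an atom such as q is then derived from the fact that some violation was unavoidable.

% Toy sketch: exactly one of a, b may hold, but we would like both;
% record each missing one as an err/1 atom instead of failing.
{ a ; b } = 1.
err(need_a) :- not a.
err(need_b) :- not b.
% prefer models with as few violations as possible
#minimize { 1,E : err(E) }.
% q signals that at least one requirement had to be violated
q :- err(_).

In every optimal answer set exactly one err/1 atom remains and q is derived; if both requirements could be satisfied, the optimum would contain no err atoms and no q.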

How to define custom functions in Maple?

I'm new to Maple and I'm looking for a simple way to automate some tasks. In particular, I'm looking for a way to define custom "action" that perform some steps automatically.
As an example, I would like to define a quick way to compute the determinant of the Hessian of a polynomial. Currently the way I do this is to open Maple, create a new worksheet and then perform the following commands:
p := (x, y) -> x^2*y + 3*x^3 + y^3
with(VectorCalculus):
h := Hessian(p(x, y), [x, y])
Determinant(h)
What I would like to do is to compute the hessian determinant directly with something like
HessDet(p)
where HessDet would be a custom command that performs the operations above. How does one achieve something like this in Maple?
First things first: The value assigned to your p is a procedure which can return a polynomial expression, but is not itself a polynomial. It's important not to muddle expressions and procedures. Doing so is a common cause of problems for new users.
Being able to throw around p(x,y) may be visually pleasing to your eye, but it serves little programmatic purpose here. The fact that the formal parameters of procedure p happen to be called x and y, along with the fact that you called procedure p with arguments x and y, is actually just another common source of confusion. Don't create procedures merely to call them in this way.
Also, your call p(x,y) makes it look like magic that your code snippet "knows" how many arguments would be required by procedure p. So it's already a muddle to have your candidate HessDet accept p as a procedure.
So instead let's keep it straightforward, by writing HessDet to accept a polynomial rather than a procedure. We can programmatically ascertain the names in which this expression is of type polynom.
restart;
HessDet := proc(p::algebraic)
  local H, vars;
  vars := indets(p,
                 And(name, Non(constant),
                     satisfies(u -> type(p, polynom(anything, u)))));
  H := VectorCalculus:-Hessian(p, [vars[]]);
  LinearAlgebra:-Determinant(H);
end proc:
Now some examples of using it,
P := x^2*y + 3*x^3 + y^3;
HessDet(P);
p := (x, y) -> x^2*y + 3*x^3 + y^3;
HessDet(p(x,y));
HessDet(x^3-x^2+4*x);
HessDet(s^2*t + 3*s^3 + t^3);
HessDet(s[r]^2*t[r] + 3*s[r]^3 + t[r]^3);
You might also wonder how you could re-use this custom procedure across sessions, without having to type it in each time. Two reasonable ways are:
1) Put the (above) defining plaintext definition of HessDet inside a personal initialization file.
2) Create a (.mla) Maple Library Archive file, then Save your HessDet to that, and then augment the library search path in your initialization file.
It might look like 2) is more effort, but only the Save step is needed for repeats, and you can store many custom procedures to the same archive. Your choice...
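A rough sketch of option 2), assuming a writable archive at a placeholder path such as "C:/Users/me/maple/mylib.mla" (adjust to your own setup):

# One-time setup: create the archive and save the procedure into it.
LibraryTools:-Create("C:/Users/me/maple/mylib.mla");
LibraryTools:-Save(HessDet, "C:/Users/me/maple/mylib.mla");

# In your personal initialization file, prepend the archive to libname
# so HessDet is found automatically in every new session.
libname := "C:/Users/me/maple/mylib.mla", libname: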
[edit] The OP has asked for clarification of the first part of the above procedure HessDet, which I suspect means the call to indets.
If P is assigned an expression then the call indets(P, name) will return a set of all the names present in that expression. Basically, it returns the set of all indeterminate subexpressions of the expression which are of type name in Maple's technical sense.
For example,
P := x*y + sin(a*Pi)*x;
x y + sin(a Pi) x
indets( P,
name );
{Pi, a, x, y}
Perhaps the name of the constant Pi is not wanted here. Ie,
indets( P,
And( name,
Non(constant) ) );
{a, x, y}
Perhaps we want only the non-constant names in which the expression is a polynomial? Ie,
indets( P,
And( name,
Non(constant),
satisfies(u->type(P,polynom(anything,u))) ) );
{x, y}
That last result is an advanced way of using the following tests:
type(P, polynom(anything, x));
true
type(P, polynom(anything, y));
true
type(P, polynom(anything, a));
false
A central issue here is that the OP made no mention of what kind of polynomials are to be handled by the custom procedure. So I guessed with some defensive coding, in the hope of fewer surprises later on. The original Question states that the input could be a "polynomial", but we weren't told what kind of coefficients there might be.
Perhaps the coefficients will always be real and exact or numeric. Perhaps the custom procedure should throw an error when not supplied with such input. These details weren't mentioned in the Question.

Distinguishing cryptographic properties: hiding and collision resistance

I saw the following definitions in another question, which clarify things somewhat:
Collision-resistance:
Given: x and h(x)
Hard to find: y that is distinct from x and such that h(y)=h(x).
Hiding:
Given: h(r|x), where r|x is the concatenation of r and x
Secret: x and a highly-unlikely-and-randomly-chosen r
Hard to find: y such that h(y)=h(r|x), where r|x is the concatenation of r and x.
This is different from collision-resistance in that it doesn’t matter whether or not y=r|x.
My question:
Does this mean that any hash function h(x) is non-hiding if there is no secret r, that is, if the hash is h(x) rather than h(r|x)?
Example:
Say I make a simple hash function h(x) = g^x mod(n), where g is the generator for the group. The hash should be Collision resistant with p(x_1 != x_2, h(x_1) = h(x_2)) = 1/(2^(n/2)), but I would think it is hiding as well?
Hash functions can (kind of) offer collision resistance.
Commitments have to be hiding.
Contrary to popular opinion, these primitives are not the same!
Very strictly speaking, the thing that you think of as a hash function cannot offer collision resistance: there always ARE collisions. The input space is infinite in theory, yet the function always produces a fixed-length output. The terminology should actually be "H is randomly drawn from a family of collision-resistant functions". In practice, however, we will just call that function collision-resistant and ignore that it technically isn't.
A commitment has to offer two properties: hiding and binding. Binding means that you can only open it to one value (this is where the relation to collision resistance comes in). Hiding means that it is impossible to learn anything about the element that is contained in it. This is why a secure commitment MUST use randomness (or nonces, but after all is said and done, those boil down to the same thing). Imagine any hash function, no matter how perfect you want it to be: you can use a random oracle if you want. If I give you a hash H(m) of a value m, you can compute H(0), compare the result and learn whether m = 0, meaning it is not hiding.
This is also why g^x is not a hiding commitment scheme. Whether it is binding depends on what you allow as the message space: if you allow all integers, then a simple attack works: y = x + phi(n) produces H(y) = H(x). If you define the message space as ℤ_p, where p is the group order, then it is perfectly binding, as it is an information-theoretically collision-resistant one-way function. (Since message space and target space are of the same size, this time a single function actually CAN be truly collision-resistant!)
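As a quick sanity check of that collision (a sketch, assuming gcd(g, n) = 1 so that Euler's theorem g^(phi(n)) ≡ 1 (mod n) applies):

H(x + phi(n)) = g^(x + phi(n)) = g^x * g^(phi(n)) ≡ g^x * 1 = H(x)  (mod n)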

Why does this statement introduce memory?

I'm learning SystemVerilog and today my lecturer warned us against accidentally introducing memory into combinational systems. He used the following code as an example of this:
module gate(output logic y, input logic a);
  always_comb
    if (a)
      y = '1;
endmodule
However, I don't understand why this presents a problem. As far as I can see, this is just a simple buffer. In what way does this code introduce memory into the system?
At the beginning of the simulation, if a == 0, y is never assigned, so it keeps its initial value. If later a == 1'b1 then y will become '1. What value do you expect y to have when, later on, a returns to 0?
The answer to this question is that it will retain its previous value: '1. That is not the behaviour of combinational logic, whose output by definition depends only on the current state of the inputs, not on their previous states. In order to implement the behaviour you have described, the synthesiser will need a component with state, with storage, with memory. It will fulfil this by inferring a latch.
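One way to avoid the inferred latch is to make sure y is assigned on every path through the block. A sketch (the default assignment below is just one option; an explicit else branch works equally well):

module gate(output logic y, input logic a);
  always_comb begin
    y = '0;     // default assignment: y gets a value on every evaluation
    if (a)
      y = '1;
  end
endmodule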

Create a loop switcher between different operations in matlab?

I have three looped operations O1, O2, O3, each triggered by an IF statement, and the operation with the largest value in flag = [F1 F2 F3] has the highest priority to run.
How can I switch between operations depending on the value of that flag? The flag value for each operation varies with time.
For simplicity, operation 1 is going to run first, and by the end of its loop its flag value will be the lowest, hence operation 2 or 3 should run next. So for this example, at t=0: F1=5, F2=3 and F3=1.
The over-simplified pseudocode for what I'm trying to achieve:
while 1
find largest flag value using [v index]=max(flag)
Run operation with highest flag value
..loop back..
end
I am not sure how the value of flag will be compared in between operations, which is why I'm hoping someone can shed some light on the issue here.
EDIT
Currently, all operations are written in one MATLAB file, and each is triggered with an IF statement. The operations run one after the other (in whichever order they are written in the file). I want to avoid that and trigger them depending on the flag value instead.
If your operations are functions (a little hard to tell from the question), then make a cell array of function handles, where fun1 is the name of one of your actual functions.
handles = {@fun1, @fun2, @fun3};
Then you can just use the index returned from your max term to get the correct function from the array. You can pass any arguments to the function using the following syntax.
handles{index}(args)
Using the style above makes the solution scalable, so you don't need a stack of if statements that require maintenance when the number of operations expands. If the functions are really simple you can always use lambdas (or anonymous functions in MATLAB speak).
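A minimal sketch of that approach, assuming hypothetical functions op1, op2, op3 that each do their work and return an updated flag vector:

handles = {@op1, @op2, @op3};
flag = [5 3 1];                      % initial priorities, e.g. F1 F2 F3
while true
    [~, index] = max(flag);          % pick the operation with the largest flag
    flag = handles{index}(flag);     % run it and let it return the updated flags
end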
However, if you have a limited number of simple operations that are not likely to expand, you may choose to use a switch statement in your while loop instead. It conveys your intention better than a stack of if statements.
while 1
    [~, index] = max(flag);
    switch index
        case 1
            operation1
            flag = [x y z];
        case 2
            operation2
            flag = [x y z];
        otherwise
            operation3
            flag = [x y z];
    end
end