How to run e test files one by one, not in parallel? - specman

I am new to Specman. I am writing a testbench to which I want to give many specific test cases to debug a calculator.
For example,
I have two files, the first one called "test1" and the second called "test2".
Here is my code for "test1":
extend instruction_s {
    keep cmd_in_1 == ADD;
    keep din1_1 < 10;
    keep din2_1 < 10;
};

extend driver_u {
    keep instructions_to_drive.size() == 10;
};
And here is my code for "test2":
extend instruction_s {
    keep cmd_in_1 == SUB;
    keep din1_1 < 10;
    keep din2_1 < 10;
};

extend driver_u {
    keep instructions_to_drive.size() == 10;
};
However, when I tried to run my code, Specman reported an error; it seems I can't do it like that.
Is there any way to make Specman execute the "test1" file first and then run the "test2" file?
Or is there some other way to achieve my goal?
Thanks for your help.

Do you really want one test that executes 10 ADD instructions, and another test that executes 10 SUB instructions?
If so, note that loading both files into the same run cannot work: the two extensions of instruction_s impose contradictory constraints (cmd_in_1 cannot be both ADD and SUB), which is why Specman complains. The common way to do this is to compile your testbench once and run it multiple times, each time loading a different test file.
For a start, try this:
xrun my_device.v my_testbench.e test1.e
xrun my_device.v my_testbench.e test2.e

Related

Pedersen circom/circomlibjs inconsistency?

As a unit test for a larger use case, I am checking that the Pedersen hash I compute in the frontend with circomlibjs matches the hash computed in a circom circuit. The circuit contains a simple assert: I feed it both the hashed and unhashed values, recreate the hash inside the circuit, and generate a witness to make sure the assert goes through.
The circuit I am using:
include "../node_modules/circomlib/circuits/bitify.circom";
include "../node_modules/circomlib/circuits/pedersen.circom";
template check() {
signal input unhashed;
signal input hashed;
signal output createdHash[2];
component hasher = Pedersen(256);
component unhashedBits = Num2Bits(256);
unhashedBits.in <== unhashed;
for (var i = 0; i < 256; i++){
hasher.in[i] <== unhashedBits.out[i];
}
createdHash[0] <== hasher.out[0];
createdHash[1] <== hasher.out[1];
hashed === createdHash[1];
}
component main = check();
In the frontend, I am running the following:
import { buildPedersenHash } from 'circomlibjs';

export function buff2hex(buff) {
    function i2hex(i) {
        return ('0' + i.toString(16)).slice(-2);
    }
    return '0x' + Array.from(buff).map(i2hex).join('');
}

const secret = (new TextEncoder(32)).encode("Hello");
const pedersen = await buildPedersenHash();
const h = pedersen.hash(secret);
console.log(buff2hex(secret));
console.log(buff2hex(h));
The values that are printed are:
0x48656c6c6f
0x0e90d7d613ab8b5ea7f4f8bc537db6bb0fa2e5e97bbac1c1f609ef9e6a35fd8b
These values are consistent with the test done here.
So I then create an input.json file which looks as follows,
{
    "unhashed": "0x48656c6c6f",
    "hashed": "0x0e90d7d613ab8b5ea7f4f8bc537db6bb0fa2e5e97bbac1c1f609ef9e6a35fd8b"
}
And lastly I run the following script to create a witness, in the hopes that the assert will go through.
# Compile the circuit
circom ${CIRCUIT}.circom --r1cs --wasm --sym --c
# Generate the witness.wtns
node ${CIRCUIT}_js/generate_witness.js ${CIRCUIT}_js/${CIRCUIT}.wasm input.json ${CIRCUIT}_js/witness.wtns
However, I keep getting an assert error,
Error: Error: Assert Failed.
Error in template check_11 line: 26
This refers to the assert in the circuit, so I assume there is an inconsistency between the hashes.
I am new to circom so any insights would be greatly appreciated!
For anyone who stumbles across this: it turns out the cause of the issue is endianness. The problem was fixed by converting the unhashed value to little-endian in the input. I am not sure exactly where the mismatch lies, but it seems the hasher on the frontend reads the bytes as big-endian while the circuit expects the input in little-endian (or vice versa).
As I have managed to patch up a fix for the moment, I will stop investigating, but I implore anyone who understands this further to give a better explanation.
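A minimal sketch of that workaround, assuming the fix is simply to present the secret's bytes to the circuit as a little-endian integer (the helper name bufferToLEInt is made up for illustration):

// Interpret the byte buffer as a little-endian integer, so that the bits
// Num2Bits produces inside the circuit match the bit order in which
// circomlibjs consumed the buffer.
function bufferToLEInt(buff) {
    let result = 0n;
    for (let i = buff.length - 1; i >= 0; i--) {
        result = (result << 8n) | BigInt(buff[i]);
    }
    return result;
}

const secret = (new TextEncoder()).encode("Hello");
// Use this decimal string as "unhashed" in input.json
// instead of the big-endian hex string.
console.log(bufferToLEInt(secret).toString());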

Cleaner way of declaring many classes in perl

I'm working on some code which follows a number of steps, and each of these steps is done in one class. Right now my code looks like this:
use step_1_class;
use step_2_class;
use step_3_class;
use step_4_class;
use step_5_class;
use step_6_class;
use step_7_class;
...
use step_n_class;
my $o_step_1 = step_1_class->new(@args1);
my $o_step_2 = step_2_class->new(@args2);
my $o_step_3 = step_3_class->new(@args3);
my $o_step_4 = step_4_class->new(@args4, $args_4_1);
my $o_step_5 = step_5_class->new(@args5);
my $o_step_6 = step_6_class->new(@args6, $args_6_1);
my $o_step_7 = step_7_class->new(@args7);
...
my $o_step_n = step_n_class->new(@argsn);
Is there a cleaner way of declaring these somewhat similar classes without using hundreds of lines?
Your use statements, as written, are equivalent to
BEGIN {
    require step_1_class;
    step_1_class->import() if step_1_class->can('import');
    require step_2_class;
    step_2_class->import() if step_2_class->can('import');
    ...
}
This can be rewritten as
BEGIN {
    foreach my $i ( 1 .. $max_class ) {
        eval "require step_${i}_class";
        "step_${i}_class"->import() if "step_${i}_class"->can('import');
    }
}
The new statements are a little more complex since you have separate variables and differing parameters; however, this can be worked around by storing all the objects in an array and preprocessing the parameters, like so:
my @steps;
my @parameters = ( undef, \@args1, \@args2, \@args3, [ @args4, $args_4_1 ], ... );
for my $i (1 .. $max_class) {
    push @steps, "step_${i}_class"->new(@{ $parameters[$i] });
}
You can generate the use clauses in a Makefile. Generating the construction of the objects will be trickier since the arguments aren't uniform, but you can e.g. save the exceptions to a hash. This will make deployment more complex and searching the code tricky.
It might be wiser to rename each step by its purpose, and group the steps together logically to form a hierarchy instead of a plain sequence of steps.
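To illustrate the hash-of-exceptions idea: a minimal sketch, assuming @parameters holds only the common argument lists (so \@args4 rather than [ @args4, $args_4_1 ]) and %extra_args is a made-up name for the hash of per-step extras:

my %extra_args = (
    4 => [ $args_4_1 ],   # step 4 takes one extra argument
    6 => [ $args_6_1 ],   # so does step 6
);

my @steps;
for my $i (1 .. $max_class) {
    push @steps, "step_${i}_class"->new(
        @{ $parameters[$i] },
        @{ $extra_args{$i} // [] },   # empty list for steps with no extras
    );
}

This keeps the irregular constructors visible in one place instead of scattering them through the code.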

How can I tell Specman to terminate the test after 5 dut_errors

I need to specify to Specman a maximal number of dut_errors in a test; once that limit is reached, the test should be terminated.
Currently I only have the option to terminate the test when an error occurs, or never.
You can also change the check_effect to ERROR, so it will cause the run to stop. For example (I am taking Thorsten's example here and modifying it):
extend sn_util {
    !count_errors: uint;
};

extend dut_error_struct {
    pre_error() is also {
        util.count_errors += 1;
        if util.count_errors > 5 {
            set_check_effect(ERROR);
        };
    };
};
AFAIK this does not work out of the box. You could count the errors by using a global variable and extending the error struct, something along the lines of:
extend sn_util {
    !count_errors: uint;

    count() is {
        count_errors += 1;
        if count_errors > 5 { stop_run() };
    };
};

extend dut_error_struct {
    write() is also { util.count() };
};
There might even be an object in global that does the counting already, but it is probably not documented.

simple parallel processing in perl

I have a few blocks of code, inside a function of some object, that can run in parallel and speed things up for me.
I tried using subs::parallel in the following way (all of this is in the body of a function):
my $is_a_done = parallelize {
    # block a, do some work
    return 1;
};

my $is_b_done = parallelize {
    # block b, do some work
    return 1;
};

my $is_c_done = parallelize {
    # block c depends on a, so let's wait (block)
    if ($is_a_done) {
        # do some work
    }
    return 1;
};

my $is_d_done = parallelize {
    # block d, do some work
    return 1;
};

if ($is_a_done && $is_b_done && $is_c_done && $is_d_done) {
    # just wait for all to finish before the function returns
}
First, notice that I use if to block and wait for a previous thread to finish when needed (is there a better idea? the if is quite ugly...).
Second, I get an error:
Thread already joined at /usr/local/share/perl/5.10.1/subs/parallel.pm line 259.
Perl exited with active threads:
1 running and unjoined
-1 finished and unjoined
3 running and detached
I haven't seen subs::parallel before, but given that it does all of the thread handling for you and, judging by the error message, seems to be doing it wrong, I think it's a bit suspect.
Normally I wouldn't just suggest throwing it out like that, but what you're doing really isn't any harder with the plain threads interface, so why not give that a shot, and simplify the problem a bit? At the same time, I'll give you an answer to the other part of your question.
use threads;

my @jobs;

push @jobs, threads->create(sub {
    # do some work
});

push @jobs, threads->create(sub {
    # do some other work
});

# Repeat as necessary :)

$_->join for @jobs;   # Wait for everything to finish.
You need something a little more intricate if you're using the return values from those subs (simply switching to a hash would help a good deal), but in the code sample you provided you're ignoring them, which makes things easy.
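For instance, a minimal sketch of the hash variant (the job names and bodies here are invented):

use threads;

# Keep the thread handles in a hash keyed by job name ...
my %jobs = (
    a => threads->create(sub { return "result of a" }),
    b => threads->create(sub { return "result of b" }),
);

# ... then collect each job's return value as it is joined.
my %results = map { $_ => $jobs{$_}->join } keys %jobs;
print "$_ => $results{$_}\n" for sort keys %results;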

Simplest way to process a list of items in a multi-threaded manner

I've got a piece of code that opens a data reader and for each record (which contains a url) downloads & processes that page.
What's the simplest way to make it multi-threaded so that, let's say, there are 10 slots which can be used to download and process pages simultaneously, and as slots become available the next rows are read, etc.?
I can't use WebClient.DownloadDataAsync
Here's what I have tried to do, but it hasn't worked (i.e. the "worker" is never run):
using (IDataReader dr = q.ExecuteReader())
{
    ThreadPool.SetMaxThreads(10, 10);
    int workerThreads = 0;
    int completionPortThreads = 0;

    while (dr.Read())
    {
        do
        {
            ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
            if (workerThreads == 0)
            {
                Thread.Sleep(100);
            }
        } while (workerThreads == 0);

        Database.Log l = new Database.Log();
        l.Load(dr);

        ThreadPool.QueueUserWorkItem(delegate(object threadContext)
        {
            Database.Log log = threadContext as Database.Log;
            Scraper scraper = new Scraper();
            dc.Product p = scraper.GetProduct(log, log.Url, true);
            ManualResetEvent done = new ManualResetEvent(false);
            done.Set();
        }, l);
    }
}
You do not normally need to play with the Max threads (I believe it defaults to something like 25 per proc for worker, 1000 for IO). You might consider setting the Min threads to ensure you have a nice number always available.
You don't need to call GetAvailableThreads either. You can just start calling QueueUserWorkItem and let it do all the work. Can you repro your problem by simply calling QueueUserWorkItem?
You could also look into the Parallel Task Library, which has helper methods to make this kind of stuff more manageable and easier.
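For example, a minimal sketch of the queue-and-throttle approach, reusing q, Database.Log and Scraper from the question; the Semaphore standing in for the "10 slots" is my addition, not something the poster's code had:

// Cap concurrency at 10 with a semaphore instead of polling GetAvailableThreads.
Semaphore slots = new Semaphore(10, 10);

using (IDataReader dr = q.ExecuteReader())
{
    while (dr.Read())
    {
        Database.Log l = new Database.Log();
        l.Load(dr);

        slots.WaitOne();   // block until one of the 10 slots frees up

        ThreadPool.QueueUserWorkItem(state =>
        {
            try
            {
                Database.Log log = (Database.Log)state;
                Scraper scraper = new Scraper();
                scraper.GetProduct(log, log.Url, true);
            }
            finally
            {
                slots.Release();   // free the slot even if processing throws
            }
        }, l);
    }
}

Note that the loop itself doesn't wait for the last work items to finish; before tearing down you would still need to drain the slots (e.g. call WaitOne ten more times).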