Trying to match to a typedef value in a receive statement causes "bad node type 44" error message - promela

When I try to match a message in a receive statement I get a "bad node type 44" error message. This happens when the message's type is a typedef. The error message is rather cryptic and doesn't give much insight.
typedef t {
    int i
}

init {
    chan c = [1] of { t };
    t x;
    !(c ?? [eval(x)])   /* <--- ERROR */
}

Note: This may, or may not, be a bug in Spin: apparently, the grammar allows using a structure variable as an argument for eval(), but it does not look like this situation is handled correctly (within the extent of my limited understanding). I would encourage you to contact the maintainers of Promela/Spin and submit your model.
Nevertheless, there is a work-around for the issue you reported (see below).
Contrary to what is reported here:
NAME
eval - predefined unary function to turn an expression into a constant.
SYNTAX
eval( any_expr )
The actual promela grammar for EVAL looks a bit different:
receive : varref '?' recv_args /* normal receive */
| varref '?' '?' recv_args /* random receive */
| varref '?' '<' recv_args '>' /* poll with side-effect */
| varref '?' '?' '<' recv_args '>' /* ditto */
recv_args: recv_arg [ ',' recv_arg ] * | recv_arg '(' recv_args ')'
recv_arg : varref | EVAL '(' varref ')' | [ '-' ] const
varref : name [ '[' any_expr ']' ] [ '.' varref ]
Take-aways:
apparently, eval is allowed to take a structure as its argument (because name may be the identifier of a typedef structure [?])
eval can also take a structure field as its argument
when one aims to apply message filtering to an entire structure, one can instead expand the relevant fields of the structure
Example:
typedef Message {
    int _filter;
    int _value;
}

chan inout = [10] of { Message };

active proctype Producer()
{
    Message msg;
    byte cc = 0;

    for (cc: 1 .. 10) {
        int id;
        select(id: 0..1);
        msg._filter = id;
        msg._value = cc;
        atomic {
            printf("Sending: [%d|%d]\n", msg._filter, msg._value);
            inout!msg;
        }
    }
    printf("Sender Stops.\n");
}

active proctype Consumer()
{
    Message msg;
    msg._filter = 0;
    bool ignored;

    do
    :: atomic {
           inout??[eval(msg._filter)] ->
               inout??eval(msg._filter), msg._value;
           printf("Received: [%d|%d]\n", msg._filter, msg._value);
       }
    :: timeout -> break;
    od;
    printf("Consumer Stops.\n");
}
simulation output:
~$ spin test.pml
Sending: [1|1]
Sending: [0|2]
Received: [0|2]
Sending: [0|3]
Received: [0|3]
Sending: [0|4]
Received: [0|4]
Sending: [0|5]
Received: [0|5]
Sending: [1|6]
Sending: [0|7]
Received: [0|7]
Sending: [0|8]
Received: [0|8]
Sending: [1|9]
Sending: [1|10]
Sender Stops.
timeout
Consumer Stops.
2 processes created
Generating a verifier does not result in a compilation error:
~$ spin -a test.pml
~$ gcc -o run pan.c
Note: when using both message filtering and message polling (as in your model sample), the fields of the structure that are subject to message filtering should be placed at the beginning of the structure.

Apparently it is a bug; link to the github issue: https://github.com/nimble-code/Spin/issues/17
Update: the bug is now fixed.
Update 2: the bug was actually only partially fixed; there are still some edge cases where it behaves oddly.
Update 3: as far as I can tell, the bug now looks fixed. The only downside is that there now seems to be a strict restriction on what you put in the receive args: they have to match exactly the types declared in the channel. No more partial matches or unrolling of struct fields.

My guess is that this error is related to the restrictions on structured types. One such restriction is that they cannot be handled as a unit: to assign or compare them, one must do it one field at a time.
For example, if one writes x == y, where x and y are variables of a typedef type, the following error is shown: Error: incomplete structure ref 'x' saw 'operator: =='
Under the hood, when Spin tries to compare the channel's queue contents against the eval argument, something signals that the comparison cannot be done, and the "bad node type" message is raised.
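For illustration, here is a minimal sketch (mine, not from the original post) of the field-by-field comparison that Promela does accept:

typedef t {
    int i;
    int j
}

init {
    t x;
    t y;

    /* x == y would trigger "incomplete structure ref 'x'";
       structured variables must be compared one field at a time */
    if
    :: (x.i == y.i && x.j == y.j) -> printf("equal\n")
    :: else -> printf("not equal\n")
    fi
}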

Related

Pre-compile textual replacement macro with arguments

I am trying to create some kind of a dut_error wrapper. Something that will take some arguments and construct them in a specific way to a dut_error.
I can't use a method to replace the calls to dut_error because, to my understanding, only a dut_error (or dut_errorf) can come after check that ... then ... else. And indeed, if I try to do something like:
my_dut_error(arg1: string, arg2: string) is {
    dut_error("first argument is ", arg1, " and second argument is ", arg2);
};
check that FALSE else my_dut_error("check1", "check2");
I get an error:
*** Error: Unrecognized exp
[Unrecognized expression 'FALSE else my_dut_error("check1", "check2")']
at line x in main.e
check that FALSE else my_dut_error("check1", "check2");
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
So I thought about defining a macro to simply do a textual replace from my wrapper to an actual dut_error:
define <my_dut_error'exp> "my_dut_error(<arg1'name>, <arg2'name>)" as {
dut_error("first argument is ", <arg1'name>, " and second argument is ", <arg2'name>)
};
But got the same error.
Then I read about the preprocessor directive #define so tried:
#define my_dut_error(arg1, arg2) dut_error("first argument is ", arg1, " and second argument is ", arg2)
But that just gave a syntax error.
How can I define a pre-compiled textual replacement macro that takes arguments, similar to C?
The reason I want to do that is to achieve some sort of an "interface" to the dut_error so all errors have a consistent structure. This way, different people writing different errors will only pass the arguments necessary by that interface and internally an appropriate message will be created.
Not sure I understood what you want to do in the wrapper, but perhaps you can achieve what you want by using the dut_error_struct.
It has a set of APIs which you can use as hooks (do something when the error is caught) and to query about the specific error.
for example:
extend dut_error_struct {
    pre_error() is also {
        if source_method_name() == "post_generate" and
           source_struct() is a BLUE packet {
            out("\nProblem in generation? ", source_location());
            // do something for error during generation
        };
    };
    write() is first {
        if get_message() ~ "AHB Error..." {
            ahb_monitor::increase_errors();
        };
    };
};
dut_error accepts one parameter, one string. But you can decide on a "separator" that will define two parts of the message.
E.g., instruct people to write "XXX" in the message, between the "first arg" and the "second arg":
check that legal else dut_error("ONE thing", "XXX", "another thing");
check that x < 7 else dut_error("failure ", "XXX", "of x not 7 but is ", x);
extend dut_error_struct {
    write() is first {
        var message_parts := str_split(get_message(), "XXX");
        if message_parts.size() == 2 {
            out("First part of message is ", message_parts[0],
                "\nand second part of message is ", message_parts[1]
            );
        };
    };
};
I could get pretty close to what I want using the dut_errorf method combined with a preprocessor directive defining the format string:
#define DUT_FORMAT "first argument is %s and second argument is %s"
check that FALSE else dut_errorf(DUT_FORMAT, "check1", "check2");
but I would still prefer a way that doesn't require this DUT_FORMAT directive and instead uses dut_error_struct or something similar.

How to parse new line in Scala grammar with flex/bison?

I want to parse the Scala grammar with flex and bison, but I don't know how to handle the newline token.
If I parse the newline as a token T_NL, here's Toy.l for example:
...
[a-zA-Z_][a-zA-Z0-9_]*   { yylval->literal = strdup(yytext); return T_ID; }
\n                       { yylval->token = T_NL; return T_NL; }
[ \t\v\f\r]              { /* skip whitespace */ }
...
And here's the Toy.y for example:
function_def: 'def' T_ID '(' argument_list ')' return_expression '=' expression T_NL
;
argument_list: argument
| argument ',' argument_list
;
expression: ...
;
return_expression: ...
;
You can see that I would have to skip T_NL in all the other statements and definitions in Toy.y, which is really tedious.
Please educate me with a source-code example!
This is a clear case where bison push-parsers are useful. The basic idea is that the decision to send an NL token (or tokens) can only be made when the following token has been identified (and, in one corner case, the second following token).
The advantage of push parsers is that they let us implement strategies like this, where there is not necessarily a one-to-one relationship between input lexemes and tokens sent to the parser. I'm not going to deal with all the particularities of setting up a push parser (though it's not difficult); you should refer to the bison manual for details. [Note 1]
First, it's important to read the Scala language description with care. Newline processing is described in section 2.13:
A newline in a Scala source text is treated as the special token “nl” if the three following criteria are satisfied:
The token immediately preceding the newline can terminate a statement.
The token immediately following the newline can begin a statement.
The token appears in a region where newlines are enabled.
Rules 1 and 2 are simple lookup tables, which are precisely defined in the following two paragraphs. Rule 2 has just one minor exception, described below:
A case token can begin a statement only if followed by a class or object token.
One hackish possibility to deal with that exception would be to add case[[:space:]]+class and case[[:space:]]+object as lexemes, on the assumption that no-one will put a comment between case and class. (Or you could use a more sophisticated pattern, which allows comments as well as whitespace.) If one of these lexemes is recognised, it could either be sent to the parser as a single (fused) token, or it could be sent as two tokens using two invocations of SEND in the lexer action. (Personally, I'd go with the fused token, since once the pair of tokens has been recognised, there is no advantage to splitting them up; afaik, there's no valid program in which case class can be parsed as anything other than case class. But I could be wrong.)
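For instance, the fused-lexeme rules might look like this (a sketch; the token names T_CASE_CLASS and T_CASE_OBJECT are hypothetical, and SEND is the macro defined below):

case[[:space:]]+class    { SEND(T_CASE_CLASS); }
case[[:space:]]+object   { SEND(T_CASE_OBJECT); }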
To apply rules one and two, then, we need two lookup tables indexed by token number: token_can_end_stmt and token_cannot_start_stmt. (The second one has its meaning reversed because most tokens can start statements; doing it this way simplifies initialisation.)
/* YYNTOKENS is defined by bison if you specify %token-table */
static bool token_can_end_stmt[YYNTOKENS] = {
    [T_THIS] = true, [T_NULL] = true, /* ... */
};
static bool token_cannot_start_stmt[YYNTOKENS] = {
    [T_CATCH] = true, [T_ELSE] = true, /* ... */
};
We're going to need a little bit of persistent state, but fortunately when we're using a push parser, the scanner does not need to return to its caller every time it recognises a token, so we can keep the persistent state as local variables in the scan loop. (That's another advantage of the push-parser architecture.)
From the above description, we can see that what we're going to need to maintain in the scanner state are:
some indication that a newline has been encountered. This needs to be a count, not a boolean, because we might need to send two newlines:
if two tokens are separated by at least one completely blank line (i.e a line which contains no printable characters), then two nl tokens are inserted.
A simple way to handle this is to simply compare the current line number with the line number at the previous token. If they are the same, then there was no newline. If they differ by only one, then there was no blank line. If they differ by more than one, then there was either a blank line or a comment (or both). (It seems odd to me that a comment would not trigger the blank line rule, so I'm assuming that it does. But I could be wrong, which would require some adjustment to this scanner.) [Note 2]
the previous token (for rule 1). It's only necessary to record the token number, which is a simple small integer.
some way of telling whether we're in a "region where newlines are enabled" (for rule 3). I'm pretty sure that this will require assistance from the parser, so I've written it that way here.
By centralising the decision about sending a newline into a single function, we can avoid a lot of code duplication. My typical push-parser architecture uses a SEND macro anyway, to deal with the boilerplate of saving the semantic value, calling the parser, and checking for errors; it's easy to add the newline logic there:
// yylloc handling mostly omitted for simplicity
#define SEND_VALUE(token, tag, value) do { \
    yylval.tag = value; \
    SEND(token); \
} while(0)
#define SEND(token) do { \
    int status = YYPUSH_MORE; \
    if (yylineno != prev_lineno \
        && token_can_end_stmt[prev_token] \
        && !token_cannot_start_stmt[token] \
        && nl_enabled) { \
        status = yypush_parse(ps, T_NL, NULL, &yylloc, &nl_enabled); \
        if (status == YYPUSH_MORE \
            && yylineno - prev_lineno > 1) \
            status = yypush_parse(ps, T_NL, NULL, &yylloc, &nl_enabled); \
    } \
    if (status == YYPUSH_MORE) \
        status = yypush_parse(ps, token, &yylval, &yylloc, &nl_enabled); \
    if (status != YYPUSH_MORE) return status; \
    prev_token = token; \
    prev_lineno = yylineno; \
} while (0)
Specifying the local state in the scanner is extremely simple; just place the declarations and initialisations at the top of your scanner rules, indented. Any indented code prior to the first rule is inserted directly into yylex, almost at the top of the function (so it is executed once per call, not once per lexeme):
%%
    yypstate* ps = yypstate_new();  /* push-parser state */
    int prev_token = 0;
    int prev_lineno = 1;
    bool nl_enabled = true;
    YYSTYPE yylval;
    YYLTYPE yylloc = {0};
Now, the individual rules are pretty simple (except for case). For example, we might have rules like:
"while" { SEND(T_WHILE); }
[[:lower:]][[:alnum:]_]* { SEND_VALUE(T_VARID, str, strdup(yytext)); }
That still leaves the question of how to determine if we are in a region where newlines are enabled.
Most of the rules could be handled in the lexer by just keeping a stack of different kinds of open parentheses, and checking the top of the stack: If the parenthesis on the top of the stack is a {, then newlines are enabled; otherwise, they are not. So we could use rules like:
[{([] { paren_stack_push(yytext[0]); SEND(yytext[0]); }
[])}] { paren_stack_pop(); SEND(yytext[0]); }
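paren_stack_push and paren_stack_pop are left undefined above; a minimal sketch (my names, placed in the lexer prologue with <stdbool.h> included, and without overflow checks) could be:

static char paren_stack[256];
static int  paren_sp = 0;

static void paren_stack_push(char c) { paren_stack[paren_sp++] = c; }
static void paren_stack_pop(void)    { if (paren_sp > 0) --paren_sp; }

/* newlines are enabled at top level or when the innermost open bracket is '{' */
static bool paren_newlines_enabled(void) {
    return paren_sp == 0 || paren_stack[paren_sp - 1] == '{';
}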
However, that doesn't deal with the requirement that newlines be disabled between a case and its corresponding =>. I don't think it's possible to handle that as another type of parenthesis, because there are lots of uses of => which do not correspond to a case, and I believe some of them can come between a case and its corresponding =>.
So a better approach would be to put this logic into the parser, using lexical feedback to communicate the state of the newline-region stack, which is what is assumed in the calls to yypush_parse above. Specifically, the scanner and the parser share one boolean variable (the scanner passes a pointer to it into the parser). [Note 3] The parser then maintains the value of this variable in mid-rule actions (MRAs) in each rule which matches a region of potentially different newlinedness, using the parse stack itself as a stack. Here's a small excerpt of a (theoretical) parser:
%define api.pure full
%define api.push-pull push
%locations
%parse-param { bool* nl_enabled; }
/* More prologue */
%%
// ...
/* Some example productions which modify nl_enabled: */
/* The actions always need to be before the token, because they need to take
* effect before the next lookahead token is requested.
* Note how the first MRA's semantic value is used to keep the old value
* of the variable, so that it can be restored in the second MRA.
*/
TypeArgs   : <boolean>{ $$ = *nl_enabled; *nl_enabled = false; }
             '[' Types
             { *nl_enabled = $1; } ']'

CaseClause : <boolean>{ $$ = *nl_enabled; *nl_enabled = false; }
             "case" Pattern opt_Guard
             { *nl_enabled = $1; } "=>" Block

FunDef     : FunSig opt_nl
             <boolean>{ $$ = *nl_enabled; *nl_enabled = true; }
             '{' Block
             { *nl_enabled = $3; } '}'
Notes:
Note 1: Push parsers have many other advantages; IMHO they are the solution of choice. In particular, using push parsers avoids the circular header dependency which plagues attempts to build pure parser/scanner combinations.
Note 2: There is still the question of multiline comments with preceding and trailing text:
return /* Is this comment
a newline? */ 42
I'm not going to try to answer that question.
Note 3: It would be possible to keep this flag in the YYLTYPE structure, since only one instance of yylloc is ever used in this example. That might be a reasonable optimisation, since it cuts down on the number of parameters sent to yypush_parse. But it seemed a bit fragile, so I opted for a more general solution here.

how to compare expected value to be in the list [duplicate]

One of my tests expects an error message text to be one of multiple values. Since getText() returns a promise, I cannot use the toContain() jasmine matcher. The following would not work, since protractor (jasminewd under the hood) would not resolve a promise in the second part of the matcher, toContain() in this case:
expect(["Unknown Error", "Connection Error"]).toContain(page.errorMessage.getText());
Question: Is there a way to check if an element is in an array with jasmine+protractor where an element is a promise?
In other words, I'm looking for the inverse of toContain(), so that expect() would implicitly resolve the promise passed in.
As a workaround, I can explicitly resolve the promise with then():
page.errorMessage.getText().then(function (text) {
    expect(["Unknown Error", "Connection Error"]).toContain(text);
});
I'm not sure if this is the best option. I would also be okay with a solution based on third-party libraries like jasmine-matchers.
As an example, this kind of assertion exists in Python:
self.assertIn(1, [1, 2, 3, 4])
Looks like you need a custom matcher. Depending on the version of Jasmine you are using:
With Jasmine 1:
this.addMatchers({
    toBeIn: function(expected) {
        var possibilities = Array.isArray(expected) ? expected : [expected];
        return possibilities.indexOf(this.actual) > -1;
    }
});
With Jasmine 2:
jasmine.addMatchers({
    toBeIn: function(util, customEqualityTesters) {
        return {
            compare: function(actual, expected) {
                var possibilities = Array.isArray(expected) ? expected : [expected];
                var passed = possibilities.indexOf(actual) > -1;
                return {
                    pass: passed,
                    message: 'Expected [' + possibilities.join(', ') + ']' + (passed ? ' not' : '') + ' to contain ' + actual
                };
            }
        };
    }
});
You'll have to execute this in a beforeEach in each describe block where the matcher is used.
Your expect would look like:
expect(page.errorMessage.getText()).toBeIn(["Unknown Error", "Connection Error"]);
The alternative solution is to use the .toMatch() matcher with a regular expression, specifically with the special character | ("or"), which lets the match succeed on any one of the alternatives:
expect(page.errorMessage.getText()).toMatch(/Unknown Error|Connection Error/);
To me, the work-around that you identified is the best solution. However, we should not forget that this is an asynchronous execution and you might want to consider Jasmine's asynchronous support.
Then, your test will look like the following one:
it('should check item is in array', function(done) { // note the callback argument
    // do your stuff/prerequisites for the spec here
    page.errorMessage.getText().then(function (text) {
        expect(["Unknown Error", "Connection Error"]).toContain(text);
        done(); // spec is done!
    });
});
Note: if you don't pass this done argument to the spec callback, the spec will run to completion without failures, but no assertions will be reported in the execution results for that spec (in other words, that spec will have 0 assertions), which might lead to confusion.

[zetapush] Error in macro: code=SUB_ASSIGN - Message = Assignment failed

When calling a macro I have the following error in response:
code=SUB_ASSIGN
Message = Assignment failed
Location= Reduce at line 65 of file mymacro.zms
Line 65 is:
/** Reduce */
var b = gda(GDA).reduce {
    table: 'myTable',
    start: __parameters.key + '##',
    stop: __parameters.key + '##a',
    page: {
        pageNumber: 0,
        pageSize: 100000000
    },
    initialValue: {
        tR: tR,
        count: 0
    },
    'columns': ['col1', 'col2'],
    'function': usr:myfunc
};
Seen on Android SDK (not on JS SDK) and not 100% reproducible.
What's the meaning of this error and how can I correct it?
Thanks.
The (poorly documented) meaning of SUB_ASSIGN is that an assignment with a syntax like a.b.c = d; has failed.
More precisely, the left-hand operand is made of several hierarchical sub-parts (in my example a, b, and c).
Note that SUB_ASSIGN reports a programming error, which you should guard against:
when assigning a.b.c, at least a.b must exist and must not be null.
A code pattern such as this one should do the trick:
// the ?? operator returns true when an expression is defined and not null
if (! a.b ??) {
    a.b = {};
}
...
a.b.c = 0;
or, shorter, if applicable:
if (! a.b ??) {
    a.b = {c: 0};
}
The relevant documentation for '??' can be found in the official API reference documentation.
As your code does not include such an assignment anyway, I suppose that the actual error does not lie in your reduce call, but inside your callback function (usr:myfunc).
Moreover, to ease further debugging: the error report you gave does not seem to contain the full stack trace up to the callback of the 'reduce' call. This might very well be a missing/not-yet-implemented feature of the zetapush API (which you could request...).

How to further improve error messages in Scala parser-combinator based parsers?

I've coded a parser based on Scala parser combinators:
class SxmlParser extends RegexParsers with ImplicitConversions with PackratParsers {
    [...]
    lazy val document: PackratParser[AstNodeDocument] =
        ((procinst | element | comment | cdata | whitespace | text)*) ^^ {
            AstNodeDocument(_)
        }
    [...]
}

object SxmlParser {
    def parse(text: String): AstNodeDocument = {
        var ast = AstNodeDocument()
        val parser = new SxmlParser()
        val result = parser.parseAll(parser.document, new CharArrayReader(text.toArray))
        result match {
            case parser.Success(x, _) => ast = x
            case parser.NoSuccess(err, next) => {
                tool.die("failed to parse SXML input " +
                    "(line " + next.pos.line + ", column " + next.pos.column + "):\n" +
                    err + "\n" +
                    next.pos.longString)
            }
        }
        ast
    }
}
Usually the resulting parse error messages are rather nice. But sometimes the message degenerates to just:
sxml: ERROR: failed to parse SXML input (line 32, column 1):
`"' expected but `' found
^
This happens if a quote character is not closed and the parser reaches the EOT. What I would like to see here is (1) which production the parser was in when it expected the '"' (I have multiple ones) and (2) where in the input this production started parsing (an indicator of where the opening quote is in the input). Does anybody know how I can improve the error messages and include more information about the actual internal parsing state when the error happens (perhaps something like a production-rule stacktrace or whatever can reasonably be given here to better identify the error location)? BTW, the above "line 32, column 1" is actually the EOT position and hence of no use here, of course.
I don't know yet how to deal with (1), but I was also looking for (2) when I found this webpage:
https://wiki.scala-lang.org/plugins/viewsource/viewpagesrc.action?pageId=917624
I'm just copying the information:
A useful enhancement is to record the input position (line number and column number) of the significant tokens. To do this, you must do three things:
Make each output type extend scala.util.parsing.input.Positional
invoke the Parsers.positioned() combinator
Use a text source that records line and column positions
and
Finally, ensure that the source tracks positions. For streams, you can simply use scala.util.parsing.input.StreamReader; for Strings, use scala.util.parsing.input.CharArrayReader.
I'm currently playing with it so I'll try to add a simple example later
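In the meantime, here is a minimal, self-contained sketch of those three steps (my own example, not from the wiki; the Ident node and PositionDemo object are illustrative):

import scala.util.parsing.combinator.RegexParsers
import scala.util.parsing.input.Positional

// The AST node extends Positional, and the production is wrapped in
// positioned() so the node records where it started in the input.
case class Ident(name: String) extends Positional

object PositionDemo extends RegexParsers {
    def ident: Parser[Ident] = positioned("""[a-zA-Z]+""".r ^^ { Ident(_) })

    def main(args: Array[String]): Unit = {
        parseAll(ident, "hello") match {
            case Success(node, _)   => println(node.name + " starts at " + node.pos) // [1.1]
            case failure: NoSuccess => println(failure.msg)
        }
    }
}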
In such cases you may use err, failure, and ~! (the non-backtracking sequencing combinator) together with production rules designed specifically to match the error.
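For illustration, a sketch of that technique (my example, not from the original answer): after the opening quote, ~! commits the parser to this production, and a dedicated err() alternative reports an unterminated literal instead of the generic message:

import scala.util.parsing.combinator.RegexParsers

object QuotedParser extends RegexParsers {
    // After the opening quote, ~! forbids backtracking, and err() replaces
    // the generic `'"' expected but `' found` with a targeted message.
    def quoted: Parser[String] =
        "\"" ~! """[^"]*""".r ~! ("\"" | err("unterminated string literal")) ^^ {
            case _ ~ body ~ _ => body
        }

    def main(args: Array[String]): Unit = {
        println(parseAll(quoted, "\"hello\"")) // parses successfully
        println(parseAll(quoted, "\"oops"))    // error: unterminated string literal
    }
}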