Specman e: "keep type .. is a" fails to refine the type of a field - specman

I have the following code in my verification environment:
// seq_file.e
extend SPECIFIC_TYPE sequence {
   keep type driver is a SPECIFIC_TYPE sequence_driver;
   event some_event is @driver.as_a(SPECIFIC_TYPE sequence_driver).some_event;
};

extend SPECIFIC_TYPE SEQ_NAME sequence {
   body() @driver.clock is only {
      var foo := driver.specific_type_field;
   };
};
Note that, thanks to the "keep type driver is a ..." constraint, there is no need to cast driver in the line starting with "var foo ...".
BUT the "keep type driver is a ..." constraint does not affect some_event: if the as_a() cast is removed from that line, there is a compilation error saying that 'driver' does not have 'some_event' (though its subtype does) and suggesting to use driver.as_a(SPECIFIC_TYPE sequence_driver).
Why does "keep type driver is a ..." fail to cast driver in the "some_event ..." line?
Thank you for your help

Your observation is correct: the type constraint is taken into account for field access (and for method calls), but not for event sampling. This is a limitation, and it may be removed in upcoming versions. I suggest contacting official Specman support for exact information on the plans.
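For what it's worth, here is a hedged restatement of where the cast is and isn't needed (the TCM name do_something is illustrative, not from the original code):

extend SPECIFIC_TYPE sequence {
   keep type driver is a SPECIFIC_TYPE sequence_driver;

   // event sampling: the explicit as_a() cast must stay
   event some_event is @driver.as_a(SPECIFIC_TYPE sequence_driver).some_event;

   // field access and method calls: the type constraint is enough
   do_something() @driver.clock is {
      var foo := driver.specific_type_field;
   };
};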

Related

"The trait bound `Bson: From<u128>` is not satisfied" error when using the doc! macro with u128 attribute

The following function leads to the error mentioned in the title:
pub async fn read_foo(client: mongodb::Client, key: u128) -> Option<mongodb::bson::Document> {
    client
        .database("db")
        .collection("col")
        .find_one(mongodb::bson::doc! {"key": key}, None)
        .await
        .expect("Could not connect to database")
}
This function works with u32 and i64, but not with u128. I need u128 here; what do I do?
I think the doc! macro calls Bson::from to convert the u128 value into one of the variants of the enum Bson, which defines all the supported BSON types. However, because no variant can hold a u128, there is no From<u128> implementation, and the compiler rejects the call with the error in the title.
I can think of two workarounds, though I don't know whether either works for you. One is to use a string instead of u128 as the value type, if you just want to store the value in MongoDB without doing calculations on it; the other is to use Decimal128, if 34 decimal digits of precision is acceptable.
If you go the latter way, note that Decimal128 does not seem to be fully supported yet. See the doc.
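For illustration, here is a minimal sketch of the first (string) workaround, mirroring the function from the question; it assumes the stored documents also keep "key" as a string:

pub async fn read_foo(client: mongodb::Client, key: u128) -> Option<mongodb::bson::Document> {
    client
        .database("db")
        .collection("col")
        // u128 has no Bson conversion, but String does
        .find_one(mongodb::bson::doc! {"key": key.to_string()}, None)
        .await
        .expect("Could not connect to database")
}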

Dart/Flutter linter rule: the type used to index a map should be the key type of the map?

For example, I have Map<int, int> m;. Then I can write m['hello'] without any compile-time error, but of course no element is found at runtime. I would like this to produce an error (or warning) at compile time or lint time.
This is a big problem in many cases. For example, when I refactor Map<A, int> m into Map<B, int> m, I want compile-time errors for all accesses like m[some_var_of_type_A], instead of no compile-time errors and a sudden explosion at runtime. As another example, deserialized JSON has type Map<String, ...> even when the key is conceptually an int, so it is tempting to write var userId = 42; deserializedJson[userId], only to hit errors; you actually need deserializedJson[userId.toString()].
You know, Dart's type system is so strong (even null-safe!), and I really enjoy it since it catches a LOT of bugs at compile time. So I hope this problem can also be addressed at compile time.
Thanks for any suggestions!
There currently is no lint to warn about doing lookups on a Map with arguments of the wrong type. This has been requested in https://github.com/dart-lang/linter/issues/1307.
Also see https://github.com/dart-lang/sdk/issues/37392, which requests a type-checked alternative to Map.operator []. In the meantime, Dart's extension mechanism allows anyone to easily add such an alternative themselves. For example, package:basics provides a type-checked Map.get extension.
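As a sketch of what such an extension can look like (the name getTyped is illustrative, not the package:basics API):

// A type-checked alternative to operator [], in the spirit of
// package:basics' Map.get: the key parameter has type K instead of Object?.
extension TypedLookup<K, V> on Map<K, V> {
  V? getTyped(K key) => this[key];
}

With Map<int, int> m, the call m.getTyped('hello') is now a compile-time error, while m['hello'] still silently returns null.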
NOTE:
The original answer was wrong and has been edited to:
point out the right/better answer
explain why the original answer was wrong
Thank you @jamesdlin for pointing this out.
Better answer
As pointed out by @jamesdlin in his answer, the lint rule mentioned in the question has been requested in the Dart linter's GitHub issue tracker but is not in production yet.
Original answer (wrong, but somewhat related to the question)
Why it is wrong:
The question asked about a lint rule for indexing into a Map. This answer instead covers the error for initializing a map with a key of the wrong type (by "wrong" I mean a different data type).
Below is the answer:
There is a lint rule for this.
For example, if you define a Map like this ->
final Map<String, String> m = {
  1: 'some random value',
};
It shows an error right away, and this won't compile. This is the error ->
Error: A value of type 'int' can't be assigned to a variable of type 'String'.
  1: 'error because index is of type String but assigned value is of type int',
  ^
Error: Compilation failed.
See the official docs where this diagnostic, map_key_type_not_assignable, is defined.
I have tested this in DartPad and VS Code; both show this error.
There could be an issue in your IDE configuration if you're not seeing it.
As for your question, there is already a lint rule for this, as explained above.

Replacing class keyword with actor causes an error

Here's my code:
class Eapproximator
  var step: F64

  new create(step': F64) =>
    step = step'

  fun evaluate(): F64 =>
    var total = F64(0)
    var value = F64(1)
    while total < 1 do
      total = total + step
      value = value + (value * step)
    end
    value

actor Main
  new create(env: Env) =>
    var e_approx = Eapproximator(0.00001)
    var e_val = e_approx.evaluate()
    env.out.print(e_val.string())
It works well and prints 2.7183, as expected. However, if I replace class with actor in the Eapproximator definition, I get a bunch of errors:
Error:
/src/main/main.pony:18:34: receiver type is not a subtype of target type
    var e_val = e_approx.evaluate()
                         ^
Info:
/src/main/main.pony:18:17: receiver type: Eapproximator tag
    var e_val = e_approx.evaluate()
                ^
/src/main/main.pony:6:3: target type: Eapproximator box
  fun evaluate(): F64 =>
  ^
/src/main/main.pony:3:3: Eapproximator tag is not a subtype of Eapproximator box: tag is not a subcap of box
  new create(step': F64) =>
  ^
Error:
/src/main/main.pony:19:19: cannot infer type of e_val
    env.out.print(e_val.string())
What can I do to fix this?
The actor is the unit of concurrency in Pony. This means that many different actors in the same program can run at the same time, including your Main and Eapproximator actors. Now what would happen if the fields of an actor were modified by multiple actors at the same time? You'd most likely get some garbage value in the end because of the way concurrent programs work on modern hardware. This is called a data race and it is the source of many, many bugs in concurrent programming. One of the goals of Pony is to detect data races at compile time, and this error message is the compiler telling you that what you're trying to do is potentially unsafe.
Let's walk through that error message.
receiver type is not a subtype of target type
The receiver type is the type of the called object, e_approx here. The target type is the type of this inside of the method, Eapproximator.evaluate here. Subtyping means that an object of the subtype can be used as if it was an object of the supertype. So that part is telling you that evaluate cannot be called on e_approx because of a type mismatch.
receiver type: Eapproximator tag
e_approx is an Eapproximator tag. A tag object can neither be read nor written. I'll detail why e_approx is tag in a minute.
target type: Eapproximator box
this inside of evaluate is an Eapproximator box. A box object can be read, but not written. this is box because evaluate is declared as fun evaluate, which implicitly means fun box evaluate (which means that by default, methods cannot modify their receiver.)
Eapproximator tag is not a subtype of Eapproximator box: tag is not a subcap of box
According to this error message, a tag object isn't a subtype of a box object, which means that a tag cannot be used as if it were a box. This is logical if we look at what tag and box allow. box allows more things than tag: it can be read, while tag cannot. A type can only be a subtype of another type if it allows no more than the supertype does.
So why does replacing class with actor make the object tag? This has to do with the data race problems I talked about earlier. An actor has free rein over its own fields. It can read from them and write to them. Since actors can run concurrently, they must be denied access to each other's fields in order to avoid data races with the fields' owner. And there is something in the type system that does exactly that: tag. An actor can only see other actors as tag, because it would be unsafe to read from or write to them. The main useful thing it can do with those tag references is send asynchronous messages (by calling the be methods, or behaviours), because that's neither reading nor writing.
Of course, since you're not doing any mutation of Eapproximator in your program, your specific case would be safe. But it is much easier to forbid every unsafe program than to allow every safe program in addition to that.
To sum it up, there isn't really a fix for your program, except keeping Eapproximator as a class. Not everything needs to be an actor in a Pony program. The actor is the unit of concurrency, but that also makes it the unit of sequentiality: computations that need to be sequential and synchronous must live in a single actor. You can then break those computations down into various classes for good code hygiene.
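That said, if you really wanted an actor here, the result would have to come back asynchronously. Here is a hedged sketch of what that could look like (EapproximatorActor and the receive behaviour are illustrative names, not from the original program): the caller passes itself in, and the result is delivered through a behaviour instead of a return value.

actor EapproximatorActor
  let step: F64

  new create(step': F64) =>
    step = step'

  be evaluate(receiver: Main) =>
    var total = F64(0)
    var value = F64(1)
    while total < 1 do
      total = total + step
      value = value + (value * step)
    end
    // behaviours cannot return values; send the result back instead
    receiver.receive(value)

actor Main
  let _env: Env

  new create(env: Env) =>
    _env = env
    EapproximatorActor(0.00001).evaluate(this)

  be receive(value: F64) =>
    _env.out.print(value.string())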

DataStage job gets stuck with warnings

I am trying to stage a dataset from a source to my server. When I run my job in DataStage, it gets stuck with no errors.
All I see is a warning which says:
When checking operator: When binding output interface field "DRIVERS" to field "DRIVERS": Implicit conversion from source type "dfloat" to result type "sfloat": Possible range/precision limitation.
Try to reset the job and see if you get any other information. Otherwise, one straightforward thing you can do is convert the field explicitly: if the source is a database, use a cast function in the query; if it is a file, read it as-is and change the type in a Transformer stage. Hope this helps.
When checking operator: When binding output interface field "DRIVERS" to field "DRIVERS": Implicit conversion from source type "dfloat" to result type "sfloat": Possible range/precision limitation.
You have to learn to read APT/Torrent error messages (Torrent Systems is the company that originally created the DataStage PX engine). It is saying:
When checking operator ===> the compiler is pre-checking a stage
When binding ... "DRIVERS" ===> I'm looking at a stage in which you are assigning the input field "DRIVERS" to the output field "DRIVERS"
Implicit conversion from source type "dfloat" to result type "sfloat" ===> you've got a type mismatch
I believe you can tell DataStage to compile even if you get warnings, but the real answer is to go back into your job and figure out why you're sticking a dfloat (double precision) into an sfloat (single precision). You likely need to specify how to get from dfloat to sfloat using a Transformer stage and a user-specified rule for precision truncation.

Constrain a Scala type such that I would never need to check for null?

def myMethod(dog: Dog) = {
  require(dog != null) // is it possible to already constrain this in the Dog type?
}
Is there a way to define Dog such that it would be a type that can never hold null, thus eliminating any null check? (I don't want an Option here; otherwise all my code would turn Option-based. I want to constrain the Dog type itself so that null is never possible. This is what a type system is for: letting me specify constraints in my program.)
There was an attempt to provide such functionality (example run in 2.10.4):

class A extends NotNull
// defined class A

val x: A = null
// <console>:8: error: type mismatch;
//  found   : Null(null)
//  required: A
//        val x: A = null
//                   ^
Though it was never completed and eventually got deprecated. As of this writing, I don't think it's possible to construct one's hierarchy in a way that protects you from nulls without additional nullability-checking analysis.
Check out the comments in the relevant ticket for insight.
I don't think this is possible, generally, because Java ruins everything. If you have a Java method that returns Dog, that could give you a null no matter what language/type features you add to Scala. That null could then be passed around, even in Scala code, and end up being passed to myMethod.
So you can't have non-null types in Scala without losing the interoperability property that Scala objects are Java objects (at least for the type in question).
Unfortunately, inheritance makes it very difficult for the computer to know in the general case whether a method could be passed an object that originated from Java: unless everything is final/sealed, you can always subclass a class that handled the object at some point and override the Dog-returning method. So it requires hairy whole-program analysis to figure out the concrete types of everything (and remember, which concrete types are used can depend on runtime input!) just to establish that a given case could not involve Java code.
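A common mitigation, sketched below under the assumption that nulls can only enter from Java code (JavaKennel.fetchDog is a hypothetical Java API that may return null, and Dog is the question's type): wrap possibly-null Java results in Option once, at the boundary, so the rest of the Scala code never sees null.

// Option(x) evaluates to None when x is null, Some(x) otherwise
def fetchDogSafely(): Option[Dog] =
  Option(JavaKennel.fetchDog())

def myMethod(dog: Dog): Unit =
  // by convention, only called with values unwrapped from the Option
  // above, so no null check is needed here
  println(dog)

This doesn't make the type system enforce non-nullability, but it confines the null checks to the Java boundary instead of spreading them through the whole codebase.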