DataStage job gets stuck with warnings

I am trying to stage a dataset from the source to my server. When I run my job in DataStage, it gets stuck with no errors.
All I see is a warning which says:
When checking operator: When binding output interface field "DRIVERS" to field "DRIVERS": Implicit conversion from source type "dfloat" to result type "sfloat": Possible range/precision limitation.

Try resetting the job and see if you get any other information! Otherwise, one straightforward thing you can do: if the source is a DB, use a Cast function to convert the field to the expected type and process it; if it is a file, read it as-is and change the type in a Transformer. Hope this helps.

When checking operator: When binding output interface field "DRIVERS" to field "DRIVERS": Implicit conversion from source type "dfloat" to result type "sfloat": Possible range/precision limitation.
You have to learn to read APT/Torrent error messages (Torrent being the company that originally created DataStage PX). It is saying:
When checking operator ===> the compiler is pre-checking a stage
When binding ... "DRIVERS" ===> I'm looking at a stage in which you are assigning the input field "DRIVERS" to the output field "DRIVERS"
Implicit conversion from source type "dfloat" to result type "sfloat" ===> you've got a type mismatch
I believe you can tell DataStage to compile even if you get warnings, but the real answer is to go back inside your job and figure out why you're putting a dfloat (double precision) into an sfloat (single precision). Most likely you need to specify how to get from dfloat to sfloat in a Transformer, with a user-specified rule for precision truncation.

Related

Specify type of a TaggedOutput to pass through GroupByKey (as a part of CombinePerKey)

When I tried to migrate my project, which is based on Apache Beam pipelines, from Python 3.7 to 3.8, the type hint check started to fail at this place:
pcoll = (
    wrong_pcoll,
    some_pcoll_1,
    some_pcoll_2,
    some_pcoll_3,
) | beam.Flatten(pipeline=pipeline)
pcoll | beam.CombinePerKey(MyCombineFn())  # << here
with this error:
apache_beam.typehints.decorators.TypeCheckError: Input type hint violation at GroupByKey: expected Tuple[TypeVariable[K], TypeVariable[V]], got Union[TaggedOutput, Tuple[Any, Any], Tuple[Any, _MyType1], Tuple[Any, _MyType2]]
The wrong_pcoll is actually a TaggedOutput because it's received as a tagged output from one of the previous ptransforms.
The type hint check fails because the type of wrong_pcoll, which is a TaggedOutput, becomes part of the type of pcoll (which, according to the exception, is Union[TaggedOutput, Tuple[Any, Any], Tuple[Any, _MyType1], Tuple[Any, _MyType2]]), and pcoll is then passed to the GroupByKey used inside CombinePerKey.
So I have two questions:
Why does it work in Python 3.7 but not in 3.8?
How do I specify the type of a tagged output? I tried specifying the type of the process() method of the PTransform that produced it as a union of all the output types it yields, but for some reason the type hint check chose the wrong one. Then I specified strictly the type I need, Tuple[Any, Any], and it worked. But that is not a solution, since process() also yields other types, like plain str.
As a workaround, I can pass wrong_pcoll through a simple beam.Map with lambda x: x and .with_output_types(Tuple[Any, Any]), but that does not seem like a clean way to fix it, as shown below.
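In code, that workaround would look roughly like this:
from typing import Any, Tuple
# Re-declare the element type so downstream type checks see Tuple[Any, Any].
fixed_pcoll = wrong_pcoll | beam.Map(lambda x: x).with_output_types(Tuple[Any, Any])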
I investigated similar failures recently.
Beam has some type-inference capabilities which rely on opcode analysis of the pipeline code. Inference is somewhat limited and conservative. For example, when Beam attempts to infer a function's return type and encounters an opcode that it does not know, it infers the return type as Any. Inference is also sensitive to the Python minor version.
Python 3.8 removed some opcodes, such as SETUP_LOOP, that Beam didn't handle previously. Therefore, type inference kicked in for portions of the code where it didn't apply before. I've seen pipelines where the increased type-inference coverage on Python 3.8 exposed incorrectly specified hints.
You are running into a bug/limitation in Beam's type inference for multi-output DoFns, tracked in https://issues.apache.org/jira/browse/BEAM-4132. There has been some progress, but it's not completely addressed. As a workaround, you could manually specify the hints. I think beam.Flatten().with_output_types(Tuple[str, Union[_MyType1, _MyType2]]) should work for your case.
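A minimal, self-contained sketch of that suggestion; the element classes and keys here are stand-ins for your _MyType1/_MyType2 and real data:
import apache_beam as beam
from typing import Tuple, Union

class _MyType1:
    pass

class _MyType2:
    pass

with beam.Pipeline() as pipeline:
    pcoll_1 = pipeline | "Create1" >> beam.Create([("k", _MyType1())])
    pcoll_2 = pipeline | "Create2" >> beam.Create([("k", _MyType2())])
    # Declaring the output type explicitly overrides the inferred
    # Union[TaggedOutput, ...] element type, so the GroupByKey inside
    # CombinePerKey sees the Tuple[K, V] shape it expects.
    merged = (pcoll_1, pcoll_2) | beam.Flatten().with_output_types(
        Tuple[str, Union[_MyType1, _MyType2]]
    )
    merged | beam.CombinePerKey(beam.combiners.ToListCombineFn())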

Eiffel: are the convert methods working in case of agent call arguments?

I'm calling a procedure with an argument which is an INTEGER_64. I implemented a WATT class which can be created from an INTEGER_64, but execution seems to stop when it reaches this point. Where am I wrong?
Catcall detected for argument #1 'args': expected TUPLE [!WATT] but got TUPLE [INTEGER_64]
Attached case (Update)
Actually, when checking with the syntax
attached {INTEGER_64} my_watt_object as l_int
it doesn't pass either... is that the expected behaviour?
It seems to me that both cases are semantically the same and have to pass the same conformance check, but the language definition separates conformance from convertibility (p. 87):
Conformance and convertibility are exclusive of each other.
Is the conformance rule valid for a type that defines a conversion to another type, which in my case is from WATT to INTEGER_64?
In Eiffel, the conversion specified by the language works only at compile time. It applies when the source of a reattachment does not conform to the target of the reattachment at compile time and there is a corresponding conversion feature.
No automatic conversion is performed at run time. If you need this functionality, you have to implement it yourself. In your example, if the argument type is WATT, you need to call the conversion from INTEGER_64 to WATT explicitly, and pass an object of type WATT, not an INTEGER_64.
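A rough sketch of what that looks like; the class layout, feature names, my_agent, and my_integer_64 are invented for illustration:
-- A conversion feature makes WATT creatable from INTEGER_64 at compile time:
class WATT
create
    make_from_integer_64
convert
    make_from_integer_64 ({INTEGER_64})
feature
    value: INTEGER_64
    make_from_integer_64 (v: INTEGER_64)
            -- Initialize from a raw INTEGER_64 value.
        do
            value := v
        end
end

-- At the agent call site, build the WATT object explicitly, because
-- agent arguments are checked at run time and no conversion is applied:
my_agent.call ([create {WATT}.make_from_integer_64 (my_integer_64)])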

Specman e: "keep type .. is a" fails to refine the type of a field

I have the following code in my verification environment:
// seq_file.e
extend SPECIFIC_TYPE sequence {
keep type driver is a SPECIFIC_TYPE sequence_driver;
event some_event is #driver.as_a(SPECIFIC_TYPE sequence_driver).some_event;
};
extend SPECIFIC_TYPE SEQ_NAME sequence {
body()#driver.clock is only {
var foo := driver.specific_type_field;
};
};
Note that thanks to keep type driver is a... there is no need to cast driver in the line starting with var foo...
BUT the keep type driver is a... constraint does not affect the some_event line, i.e. if the as_a() cast is removed from that line, there is a compilation error saying 'driver' does not have 'some_event' though its subtype does; use driver.as_a(SPECIFIC_TYPE sequence_driver).
Why does keep type driver is a... fail to cast driver in the some_event... line?
Thank you for your help.
Your observation is correct: the type constraint is taken into account for field access (and for method calls), but not for event sampling. This is a limitation, and it might be removed in upcoming versions. I suggest contacting official Specman support for exact information on the plans.

Expected parameter scala.Option<Timestamp> vs. Actual argument Timestamp

This is probably a very silly question, but I have a case class which takes an Option[Timestamp] as a parameter. This is necessary because sometimes the timestamp isn't included. However, for testing purposes I'm creating an object where I pass in
Timestamp.valueOf("2016-01-27 22:27:32.596150")
But, it seems I can't do this as this is an actual Timestamp, and it's expecting an Option.
How do I convert a Timestamp to an Option[Timestamp]? Further, why does this cause a problem to begin with? Isn't the whole benefit of Option that the value may or may not be there?
Thanks in advance,
Option indicates the possibility of a missing value, but you still need to construct an Option[Timestamp] value. Option has two subtypes: None, used when there is no value, and Some[T], which contains a value of type T.
You can create one directly using Some:
Some(Timestamp.valueOf("2016-01-27 22:27:32.596150"))
or Option.apply:
Option(Timestamp.valueOf("2016-01-27 22:27:32.596150"))
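The practical difference between the two: Option.apply turns null into None, while Some will happily wrap a null. A small sketch (the Event case class is hypothetical, standing in for your own):
import java.sql.Timestamp

case class Event(createdAt: Option[Timestamp]) // hypothetical case class

val ts = Timestamp.valueOf("2016-01-27 22:27:32.596150")

Event(Some(ts))     // Some(...) wraps the present value
Event(Option(ts))   // same result here, since ts is non-null
Event(Option(null)) // None -- Option.apply guards against null
Event(None)         // timestamp explicitly absent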

Int extension not applied to raw negative values

My extensions to the Int type do not work for raw, negative values. I can work around it, but the failure seems to be a type inference problem. Why is this not working as expected?
I first encountered this within the application development environment, but I have recreated a simple form of it here in the Playground. I am using the latest version of Xcode, Version 6.2 (6C107a).
That's because - is interpreted as the minus operator applied to the integer 2, and not as part of the -2 numeric literal.
To prove that, just try this:
-(1.foo())
which generates the same error
Could not find member 'foo'
The message is probably misleading, because the error is about trying to apply the minus operator to the return value of the foo method.
I don't know if that is an intentional behavior or not, but it's how it works :)
This is likely a compiler bug (report it on Radar if you like). Use parentheses:
println((-2).foo())
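For reference, a self-contained Playground sketch of the parsing difference (foo here is a hypothetical extension method):
extension Int {
    func foo() -> Int {
        return self + 1
    }
}

(-2).foo()   // -1: parentheses make -2 the receiver of foo()
-(2.foo())   // -3: this is how the compiler reads -2.foo()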