Conditional typing in ABAP [duplicate] - type-conversion

This question already has an answer here: Strange behaviour using string templates and new COND syntax (1 answer). Closed 4 years ago.
We all know the fancy new pseudo-ternary operator COND:
COND #( WHEN 1 = 1 THEN something ELSE everything ).
However, recently during practice I noticed a weird thing with the typing of the returned value: it always gets the type of the first THEN operand, and this is confirmed by the ABAP help:
If the operand type is not fully identifiable, an operand with a statically identifiable type must be specified after the first THEN (except when passing the constructor parameter to an actual parameter with generically typed formal parameter). This type is then used.
DATA(val) = COND #( WHEN quantity NE '0.00' THEN CONV wrbtr( quantity ) ELSE '' ).
In this example the val variable will always have the packed type, regardless of the value of quantity.
How can we achieve conditional typing here, i.e. type WRBTR for a non-empty quantity and type string for an empty one?
This is a frequent requirement when passing internal data to external systems, external methods/FMs, and external formats (Excel, CSV).
Is there some syntax I am missing with the COND and CONV operators? Can we achieve this with their help? Or is there some fancier syntax than
IF quantity NE '0.00'.
val = VALUE wrbtr( ).
ELSE.
val = VALUE string( ).
ENDIF.

There is no conditional typing in ABAP. It is a statically typed language, so every variable needs a definite type at compile time.
The example you provided doesn't work, by the way:
DATA quantity TYPE wrbtr.
DATA val TYPE wrbtr.
IF quantity NE '0.00'.
val = '3.12'.
ELSE.
val = VALUE string( ).
ENDIF.
val will still have type wrbtr, even if the ELSE branch is executed. ABAP first converts the value to string, then converts that string on to the target type wrbtr.
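The same constraint can be seen in any statically typed language. As a point of comparison (my own illustration, in Scala rather than ABAP): a conditional expression gets a single compile-time type covering both branches, never a type chosen at runtime.
val quantity: Double = 3.12
// v is inferred as a common supertype of Double and String,
// not as "Double when non-zero, String otherwise":
val v = if (quantity != 0.0) quantity else ""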

Related

Is it possible to type a string template?

So that it would only be possible to assign a string value composed of typed template values.
tStr = "2000:80" -- "Int:Int"
tStr = "2000:AAA" -- "Int:String" - that would be an error
For example, TypeScript supports this:
type TemplateS = `${number}:${number}`
const tStr: TemplateS = "2000:80"
I think it is possible, but it would not be practical and may significantly increase compilation time. You would have to make use of Symbol, RowList, Append, and type classes.
TypeScript wants to retrofit types onto stringly-typed JavaScript APIs because it is a superset of JavaScript. PureScript is different: it prefers the creation of new APIs with a strong type design from conception.
If you want a pair of types, use Tuple. For example: Tuple 1 2 :: Tuple Int Int or Tuple 1 "2" :: Tuple Int String.

How can I do dynamic casting of a variable from one type to another in Scala?

I would like to do dynamic casting of a Scala variable; the target type is stored in another variable, in the database, or provided by the user.
I am new to Scala and have mainly coded in Python. We are trying to take an input of type Any, look up its intended type as saved in the database (e.g. "String"/"Int" or a user-defined class), and cast it before any further processing.
In Python:
eval("str('123')")
In Scala I have tried:
var a = 123.05
a.asInstanceOf['Int']
which gives me the error:
error: identifier expected but string literal found.
And I want the code to be something like the following:
var a = 123.05
var b = "Int"
a.asInstanceOf[b]
which gives me the error:
error: not found: type b
OK, I have to be fairly thorough here, because what you're trying to do is tricky due to the static vs. dynamic typing difference between Scala and Python.
First, what you're doing in Python is not type casting but rather a conversion.
str(12)
in Python takes the integer value 12 and converts it to a string (a Unicode string in Python 3). The same thing in Scala would be
12.toString
Type casting, meanwhile, is basically a pinky promise to the compiler's typechecker that you know more than it does and it should just believe you. It should also be avoided like the plague, because you drop all the safety that static typechecking gives you. However, there are certain cases where it is unavoidable.
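To make the conversion/cast distinction concrete, here is a minimal Scala sketch (my own illustration, not from the original answer):
val n: Int = 12
val s: String = n.toString // conversion: builds a new String "12"
val d: Double = n.toDouble // conversion: produces the Double 12.0
// A cast performs no conversion. The line below compiles, because
// asInstanceOf is available on every value, but it throws a
// ClassCastException at runtime since an Int is not a String:
// val bad = n.asInstanceOf[String]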
In more fundamental terms, let's say you have the following Scala snippet:
sealed trait Foo
final case class Bar(i:Int) extends Foo
final case class Baz(s:String) extends Foo
val f:Foo = Bar(2)
//won't work because the compiler thinks f is of type Foo due to the type ascription above
println(f.i)
//will work
println(f.asInstanceOf[Bar].i)
The type Foo can be either Bar or Baz (because of the sealed and final, this is called an ADT), but we specifically told the compiler to treat f as a Foo, which means we forgot which specific type it was. This is the case where you could use type casting; note that we don't convert, but rather tell the compiler that f is actually a Bar.
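(An aside of mine, not part of the original answer: pattern matching achieves the same thing without an unchecked cast, and the compiler can verify that all cases of the sealed trait are covered.)
f match {
  case Bar(i) => println(i) // the compiler knows i is an Int here
  case Baz(s) => println(s) // and that s is a String here
}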
Now, this all happens at compile time, which means you can't cast according to a runtime value like this:
var a = 123.05
var b = "Int"
a.asInstanceOf[b]
Now, as I understand it, you have some sort of stringly-typed input and need to convert it according to some schema. The scalafiddle below has an example of how you could do this: https://scalafiddle.io/sf/i97WZlA/0
However, note that this makes use of some fairly advanced Scala concepts to ensure the types line up.
This will also work: https://scalafiddle.io/sf/i97WZlA/1. But note that we lose all the type information, requiring a typecast if we want to do anything meaningful with the result.
EDIT/ADDENDUM: I thought I should also give an example of how to consume such values and make the schema dynamic.
https://scalafiddle.io/sf/i97WZlA/2
As a final note, be warned that this is getting close to the limits of what the compiler can do, and it necessitates a lot of boxing and unboxing to carry along the type information (in SchemaValue). It is also not stack-safe and has lackluster error handling. This solution would require some serious engineering to make viable, but it should get the idea across.
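Since the fiddle contents are not reproduced here, the following is a much simpler sketch of the general idea (the Schema type and convert function are my own hypothetical names, not the fiddle's code): a schema value known only at runtime drives how a raw string is converted.
sealed trait Schema
case object IntSchema extends Schema
case object StringSchema extends Schema
// Convert a raw string according to a schema known only at runtime.
// The price is the Any return type: callers must pattern match on the result.
def convert(raw: String, schema: Schema): Any = schema match {
  case IntSchema => raw.toInt
  case StringSchema => raw
}
val parsed = convert("123", IntSchema) // parsed has static type Any, holding an Int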
If you have a String value, then you can use toInt or toDouble to parse that string:
val s = "123.05"
val i = s.toInt
val d = s.toDouble
If you have an Any value then it is best to use match to convert it:
val a: Any = ...
a match {
case i: Int => // It is an Int
case d: Double => // It is a Double
case _ => // It is a different type
}
If all else fails you can explicitly pick a type, but this is not good practice:
val i = a.asInstanceOf[Int]
Any decent database framework will give you the ability to read and write values of specific types, so this should not be a problem with the right library.

What is the use of MiniZinc's fix function?

I see that the fix documentation says:
http://www.minizinc.org/doc-lib/doc-builtins-reflect.html#Ifunction-dd-T-cl-fix-po-var-opt-dd-T-cl-x-pc
function array [$U] of $T: fix(array [$U] of var opt $T: x)
Check if the value of every element of the array x is fixed at this point in evaluation. If all are fixed, return an array of their values, otherwise abort.
I am thinking it can be used to coerce a var to a par.
Here is the code.
array [1..num] of var int: value ;
%% generate random numbers from 0..num-1; this should fix the value of the var "value", or so I think
constraint forall(i in index_set(value))(let {int:temp_value=discrete_distribution([1|i in index_set(value)]); } in value[i]=trace(show(temp_value)++"\n", temp_value));
%%% I was expecting this to work, as the elements of "value" are fixed above
array [1..num] of int:value__ =[ trace(show(fix(value[i])), fix(value[i])) | i in index_set(value)] ;
But i get:
MiniZinc: evaluation error:
with i = 1
in call 'trace'
in call 'fix'
expression is not fixed
My questions are:
1) Should I expect this error, given that MiniZinc is not a sequential execution language?
2) The examples of fix in the user guide all appear in output statements. Is that the only place where fix can be used?
3) How would I coerce a var to a par?
By the way, I am trying this var-to-par conversion because I am having a problem with an array generator expression. Here is the code:
int:num__=200;
int:seed=134;
int: two_m=2097184;
%% prepare weights for generating numbers form 1..(two_m div 64), basically same weight
array [1..(two_m div 64)] of int: value_6_wt= [seed+1 | i in 1..(two_m div 64)] ;
%% generate numbers. This does not work; it gives:
%% in variable declaration for 'value6'
%% parameter value out of range
array [1..num__] of int : value6 = [ discrete_distribution(value_6_wt) | j in 1..num__];
In the MiniZinc language the difference between a parameter and a variable is only the fact that a parameter must have a value at compile time. Within the compiler we turn as many variables into parameters as we can. This saves the solver from having to do some work. When we know that a variable has been turned into a parameter, then we can use the fix function to convince the type system that we really can use this variable as a parameter and see its value.
The problem here, however, is that fix is defined to abort when the variable is not fixed to a single value. Using fix without testing first requires some (magic/)knowledge about the compilation process. In your case it seems that the second array is evaluated before the optimisation stage, in which all aliasing is resolved. This is why it does not work. (This is indeed one of the consequences of a declarative language.)
Although fix might only be used in output statements in the examples (where it is guaranteed to work), it is used in many places in the MiniZinc libraries. If we look, for example, at the library used for MIP solvers, there are many constraints that can be encoded more efficiently if one of the arguments is a parameter. Therefore, you will often see that a constraint in this library first tests its arguments with is_fixed, and then uses a better encoding if that returns true.
The output statement, and the case where is_fixed returns true, both guarantee that a variable is fixed and ensure that the compilation doesn't abort. There is no other way to coerce a variable to a parameter, but unless you are dealing with dependent predicate definitions, you can just trust the MiniZinc compiler to ensure that the resulting FlatZinc will contain a parameter instead of a variable.

What's the meaning of "$" in Dataset's operators (like select or filter)?

I am a bit confused about using $ to reference columns in DataFrame operators like select or filter.
The following statements work:
df.select("app", "renders").show
df.select($"app", $"renders").show
But, only the first statement in the following works:
df.filter("renders = 265").show // <-- this works
df.filter($"renders" = 265).show // <-- this does not work (!) Why?!
However, this again works:
df.filter($"renders" > 265).show
Basically, what is this $ in DataFrame's operators and when/how should I use it?
Implicits are a major feature of the Scala language that take a lot of different forms, like the implicit classes we will see shortly. They have different purposes, and they all come with varying levels of debate regarding how useful or dangerous they are. Ultimately, though, implicits generally come down to having the compiler convert one class to another when you bring them into scope.
Why does this matter? Because in Spark there is an implicit class called StringToColumn that endows a StringContext with additional functionality. As you can see, StringToColumn adds the $ method to the Scala class StringContext. This method produces a ColumnName, which extends Column.
The end result of all this is that the $ method allows you to treat the name of a column, represented as a String, as if it were the Column itself. Implicits, when used wisely, can produce convenient conversions like this to make development easier.
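For reference, the definition in org.apache.spark.sql.SQLImplicits looks roughly like this (a simplified sketch; check your Spark version's source for the exact form):
import org.apache.spark.sql.ColumnName
implicit class StringToColumn(val sc: StringContext) {
  // Invoked when you write $"colName": the string interpolator assembles
  // the column name and wraps it in a ColumnName, which extends Column.
  def $(args: Any*): ColumnName = new ColumnName(sc.s(args: _*))
}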
So let's use this to understand what you found:
df.select("app","renders").show -- succeeds because select takes multiple Strings
df.select($"app",$"renders").show -- succeeds because select takes multiple Columnss that result after the implicit conversions are applied
df.filter("renders = 265").show -- succeeds because Spark supports SQL-like filters
df.filter($"renders" = 265).show -- fails because $"renders" is of type Column after implicit conversion, and Columns use the custom === operator for equality (unlike the case in SQL).
df.filter($"renders" > 265).show -- succeeds because you're using a Column after implicit conversion and > is a function on Column.
$ is a way to convert a string into the Column with that name.
Both versions of select work because select can receive either columns or strings.
When you do the filter, $"renders" = 265 is an attempt at assigning a number to the column. >, on the other hand, is a comparison method. You should be using === instead of =.
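Concretely, the corrected comparisons look like this (assuming the same df with a numeric renders column):
df.filter($"renders" === 265).show // === builds an equality predicate Column
df.filter($"renders" > 265).show   // > builds an ordering predicate Column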

Dataframe: Adding prefix to all columns in Scala

val prefix = "ABC"
val renamedColumns = df.columns.map(c=> df(c).as(s"$prefix$c"))
val dfNew = df.select(renamedColumns: _*)
Hi,
I am fairly new to Scala, and the code above works perfectly to add a prefix to all columns. Can someone please explain how it works?
The second line above returns the columns renamed as col1 as ABCcol1, col2 as ABCcol2, etc.
I have trouble understanding what the third line is doing, especially the : _* at the end.
Thanks for your help in advance.
The third line is an example of Scala's syntactic sugar. Essentially, Scala has ways to shorten exactly what you are typing, and you have discovered the dreaded : _*.
There are two portions to this small bit: the : and the _* serve two different purposes. The : is type ascription, which tells the compiler "this is the type I need you to use for this expression". The _* is the ascribed type: it tells the compiler to pass the sequence as varargs, i.e. as an arbitrary number of arguments. It allows you to pass a method a list whose number of elements you do not know.
In your example, you are creating a variable called renamedColumns from the columns of your original dataframe, with the new string prefix. Although you may know exactly how many columns are in your df, Scala does not. When you create dfNew, you are running a select statement on df and passing in your new column names, of which there could be an arbitrary number.
Essentially, you do not know how many columns you may have, so you pass them in as varargs, letting the number stay arbitrary. A small demonstration follows.
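Here is a minimal, self-contained sketch of the same : _* mechanism, using a hypothetical greet method instead of Spark's select:
// A method that accepts a variable number of String arguments.
def greet(names: String*): Unit =
  names.foreach(n => println(s"Hello, $n"))
val people = Seq("Ada", "Grace", "Alan")
greet(people: _*) // : _* expands the Seq into individual varargs arguments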