def indexOf[T](seq: Seq[T], value: T, from: Int): Int = {
  for (i <- from until seq.length) {
    if (seq(i) == value) return i
  }
  -1
}
Can anyone explain the meaning of indexOf[T] to me? And what does (seq: Seq[T], value: T) do?
def indexOf - This is a method. We'll call it indexOf.
[T] - This method will make reference to an unspecified type. We'll call it T.
(seq:Seq[T], value:T, from:Int) - This method will take 3 passed parameters:
variable seq which is a Seq of elements of type T
variable value which is a single value of type T
variable from which is a single value of type Int
:Int - This method returns a value of type Int.
= { - Method code begins here.
This is related to Scala generics.
https://docs.scala-lang.org/tour/generic-classes.html
In simple terms, here, T acts as a placeholder for any data type.
The indexOf function takes a generic type T, which at runtime can be an Integer, a String or a custom Employee object.
For example, you can pass a Seq of Employee or a Seq of String, together with a value of the same data type.
By using generics, you don't have to write a separate indexOf function for every data type.
How to call indexOf? As below:
val index = indexOf[String](stringSeq, "searchThis", 0)
or
val index = indexOf[Employee](employeeSeq, empObj, 0)
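For a runnable sketch, assuming a minimal hypothetical Employee case class just to make the second call concrete:
case class Employee(name: String)

val stringSeq   = Seq("foo", "searchThis", "bar")
val employeeSeq = Seq(Employee("alice"), Employee("bob"))
val empObj      = Employee("bob")

val i1 = indexOf[String](stringSeq, "searchThis", 0) // returns 1
val i2 = indexOf[Employee](employeeSeq, empObj, 0)   // returns 1 (case class equality)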
This method is what we call a parametric method in Scala.
Parametric methods in Scala can be parameterized by type as well as value. The syntax is similar to that of generic classes. Type parameters are enclosed in square brackets, while value parameters are enclosed in parentheses.
Since T is a generic type, the indexOf method can be called with a variety of types.
Your method indexOf[T] takes a type parameter T and value parameters seq, value and from.
When calling your method, you can either explicitly set the type you will be manipulating by replacing T with your concrete type (see example 1), or let the compiler work for you (type inference) based on the types of your seq and value parameters (see example 2).
Example 1
val index = indexOf[Int](Seq(3, 5, 4), 4, 0)
Example 2
val index = indexOf(Seq("alice", "bob", "yo"), "bob", 1)
Related
I need to call a generic function n times with n different types. Is it possible to loop through a list of types and call the function with each type inside the loop instead of writing n function call statements? If not, is there any other succinct way?
Sample code:
// Now
function[Type1]()
function[Type2]()
…
function[Typen]()
// Want something like this
val types = List(Type1, Type2, …, Typen)
for (type <- types) {
function[type]()
}
Case 1: if the function doesn't accept an implicit argument depending on the type parameter, something like def function[A]()(implicit foo: Foo[A]), then the calls can't actually do anything different and you can write it as
for (_ <- 1 to n) {
function[SomeArbitraryType]()
}
Case 2: if it does, then make a list of the implicit parameters and call the function on them:
val foos = List(implicitly[Foo[Type1]], ...)
for (foo <- foos) {
function()(foo)
}
If the above fails to compile because it can't figure out the type parameter, you can cheat:
val foos = List(implicitly[Foo[Type1]].asInstanceOf[Foo[Any]], ...)
This way the type parameter of function()(foo) will be inferred as Any but again it can't actually matter for the execution of the function; only the value of the implicit parameter does.
There may be a convenience function to get the implicit value such as classTag for ClassTags but calling implicitly will work for any Foo.
In conclusion: type erasure is your best friend here.
EDIT: of course, case 2 ends up being more code than you had initially but it may be more reasonable if your function actually has non-implicit arguments which are the same for each call.
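A minimal sketch of Case 2, using a hypothetical type class Show and a hypothetical generic function that requires an implicit Show instance (the names are illustrative, not from the question):
trait Show[A] { def show(a: A): String }

implicit val intShow: Show[Int]       = (a: Int)    => s"Int($a)"
implicit val stringShow: Show[String] = (a: String) => s"String($a)"

def function[A]()(implicit s: Show[A]): Unit =
  println(s"called with instance ${s.getClass.getName}")

// Collect the implicit instances up front, erasing the type parameter so
// they fit into a single List; only the instance value matters at runtime.
val foos: List[Show[Any]] = List(
  implicitly[Show[Int]].asInstanceOf[Show[Any]],
  implicitly[Show[String]].asInstanceOf[Show[Any]]
)

for (foo <- foos) {
  function()(foo) // the type parameter is inferred as Any; the instance drives behaviour
}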
I am currently developing a static analysis of Java code using the OPAL framework.
I want to analyze the following Java method:
private void indirectCaller2b(double d, Object o1, Object o2) {
  indirectCaller1(d, o1, o2);
}
I know that indirectCaller2b is only called with the parameters (double, ArrayList, LinkedList).
With this in mind, I constructed an IndexedSeq of DomainValues, which I pass to the perform method of BaseAI.
It looks like this:
Vector({ai.native_methods_parameter_type_approximation.PublicClass, null}[#0;t=101], ADoubleValue, {_ <: java.util.ArrayList, null}[#-4;t=102], {_ <: java.util.LinkedList, null}[#-5;t=103])
The this-parameter ({ai.native_methods_parameter_type_approximation.PublicClass, null}[#0;t=101]) was created with the following code:
domain.TypedValue(0, project.classFile(caller).thisType)
The other domain values were created using the parameterToValueIndex method:
domain.TypedValue(org.opalj.ai.parameterToValueIndex(caller.isStatic, caller.descriptor, index), t)
Here, caller stands for the method indirectCaller2b and t is the known runtime type of the parameter (ArrayList for parameter index 1 and LinkedList for parameter index 2).
When I now perform the abstract interpretation of the method with
BaseAI.perform(classFile, caller, domain)(Some(parameters))
and print the stack index at the program counter, where the call of indirectCaller1 happens with the following code,
for (i <- 0 to analysisResult.operandsArray(pc).size - 1) {
  println(s"stack index $i: ${analysisResult.operandsArray(pc)(i)}")
}
I get the following output:
stack index 0: null
stack index 1: {_ <: java.util.LinkedList, null}[#-5;t=103]
stack index 2: ADoubleValue
stack index 3: {ai.native_methods_parameter_type_approximation.PublicClass, null}[#0;t=101]
This is a bit confusing, since I just pass the arguments of indirectCaller2b on to indirectCaller1. Therefore, the output should match the IndexedSeq passed to the perform method.
But in the output, the parameter after the double parameter is a LinkedList instead of an ArrayList. The ArrayList parameter has somehow disappeared, and the last parameter on the operand stack is "null".
Can anyone explain to me how this can happen?
Representation of "this"
To get the correct representation for the "this" reference you should use the method
InitializedObjectValue(
origin: ValueOrigin,
objectType: ObjectType ): DomainReferenceValue
to create a representation of the this value. The difference is that in this case the AI will try to use the information that (a) the value is guaranteed to be non-null and (b) it is also guaranteed to be initialized. In particular, the former property is often interesting and generally leads to more precise results.
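For example, a minimal sketch that reuses the origin and type expression from the question (adjust the origin if your domain expects a different value for this):
val thisValue = domain.InitializedObjectValue(
  0, // same origin that was used with TypedValue above
  project.classFile(caller).thisType
)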
Initializing Locals
The function: org.opalj.ai.parameterToValueIndex only calculates the logical origin information (the "pc" that is associated with the value to make it possible to identify the respective values as parameters later on).
To correctly map operands to locals you can either use the method mapOperandsToParameters, or you can just add all values to an IndexedSeq yourself, adding an extra null entry after each computational type category 2 value (doubles and longs occupy two local variable slots).
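A sketch of the second option, following the parameter order shown in the question; the value names are placeholders for the DomainValues created as described above:
val parameters: IndexedSeq[domain.DomainValue] = IndexedSeq(
  thisValue,       // the "this" reference
  doubleValue,     // the double parameter ...
  null,            // ... plus an extra entry, since a double is a category 2 value
  arrayListValue,  // the ArrayList parameter
  linkedListValue  // the LinkedList parameter
)

BaseAI.perform(classFile, caller, domain)(Some(parameters))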
I am writing a static analysis using the OPAL framework.
To that end, I invoke an abstract interpretation of a method for which I have upper type bounds for the passed parameters as FieldTypes.
It looks like this:
BaseAI.perform(classFile, caller, domain)(parameters)
Where parameters is an IndexedSeq[FieldType].
This results in the following type error:
type mismatch; found : scala.collection.immutable.IndexedSeq[org.opalj.br.FieldType] required: Option[scala.collection.IndexedSeq[domain.DomainValue]] (which expands to) Option[scala.collection.IndexedSeq[domain.Value]]
Is there any possibility to convert my FieldTypes to DomainValues?
Can I use
domain.ClassValue(origin, identifiedFieldType)
to convert it, even if the type is e.g. an int? (since int is not a class)
If yes, is there a method, which computes the origin index for method parameters?
Part one of the question:
You can use:
domain.TypedValue(origin, parameterType)
In this case parameterType can be "any type".
Part two of the question:
The following function, defined by the package object org.opalj.ai, can be used to compute the correct value index.
def parameterToValueIndex(
isStaticMethod: Boolean,
descriptor: MethodDescriptor,
parameterIndex: Int
): Int = {
I have a Scala class that reads formatting information from a JSON template file and data from a different file. The goal is to format the data as a JSON object as specified by the template file. I have the layout working, but now I want to set the type of my output to the type given in my template (i.e. if a field's value is a String in the template, it should be a string in the output, even if it's an integer in the raw data).
Basically, I'm looking for a quick and easy way of doing something like:
output = dataValue.asInstanceOf[templateValue.getClass]
That line gives me an error that type getClass is not a member of Any. But I haven't been able to find any other member or method that gives me a variable's type at runtime. Is this possible, and if so, how?
Clarification
I should add, by this point in my code, I know I'm dealing with just a key/value pair. What I'd like is the value's type.
Specifically, given the JSON template below, I want name to be cast to a String, age to be cast to an integer, and salary to be cast to a decimal on output, regardless of how they appear in the raw data file (they could all be strings, age and salary could both be ints, etc.). What I was hoping for is a simple cast that didn't require me to do pattern matching to handle each data type specifically.
Example template:
people: [{
  name: "value",
  age: 0,
  salary: 0.00
}]
Type parameters must be known at compile time (type symbols), and templateValue.getClass is just a plain value (of type Class), so it cannot be used as type parameter.
What to do instead - this depends on your goal, which isn't yet clear to me... but it may look like
output = someMethod(dataValue, templateValue.getClass),
and inside that method you may do different computations depending on second argument of type Class.
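A sketch of what such a method might look like, dispatching on the template value's runtime class; the chosen target types and conversions are only examples:
def someMethod(dataValue: Any, templateClass: Class[_]): Any =
  templateClass match {
    case c if c == classOf[String]            => dataValue.toString
    case c if c == classOf[java.lang.Integer] => dataValue.toString.trim.toInt
    case c if c == classOf[java.lang.Double]  => dataValue.toString.trim.toDouble
    case _                                    => dataValue
  }

// e.g. someMethod("42", classOf[java.lang.Integer]) yields 42: Int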
How do I get an object's type and pass it along to asInstanceOf in Scala?
The method scala.reflect.api.JavaUniverse.typeOf[T] requires its type argument to be hard-coded by the caller or type-inferred. To type-infer, create a utility method like the following (it works for all types, even generics - it counteracts Java's runtime erasure of type arguments by augmenting T at compile time with type-tag metadata):
// http://www.scala-lang.org/api/current/index.html#scala.reflect.runtime.package
import scala.reflect.runtime.universe._
def getType[T: TypeTag](a: T): Type = typeOf[T]
3 requirements here:
the type argument implements TypeTag (the previous implementation via Manifest is still available...)
one or more input args are typed T
the return type is Type (if you want the result to be used externally to the method)
You can invoke without specifying T (it's type-inferred):
import scala.reflect.runtime.universe._
def getType[T: TypeTag](a: T): Type = typeOf[T]
val ls = List[Int](1,2,3)
println(getType(ls)) // prints List[Int]
However, asInstanceOf will only cast the value to a (binary-compatible) type in the hierarchy, with no conversion of data or format; i.e. the data must already be in the correct binary format - so that won't solve your problem.
Data Conversion
A few methods convert between Integers and Strings:
// defined in scala.Any:
123.toString // gives "123"
// implicitly defined for Int via scala.runtime.RichInt:
123.toHexString // gives "7b"
123.toOctalString // gives "173"
// implicitly defined for java.lang.String via scala.collection.immutable.StringOps:
"%d".format(123) // also gives "123"
"%5d".format(123) // gives " 123"
"%05d".format(123) // gives "00123"
"%01.2f".format(123.456789) // gives "123.46"
"%01.2f".format(0.456789) // gives "0.46"
// implicitly defined for java.lang.String via scala.collection.immutable.StringOps:
" 123".toInt // gives 123
"00123".toInt // gives 123
"00123.4600".toDouble // gives 123.46
".46".toDouble // gives 0.46
Parsing directly from file to target type (no cast or convert):
Unfortunately, Scala doesn't have a method to read the next token in a stream as an integer/float/short/boolean/etc. But you can do this by obtaining a Java FileInputStream, wrapping it in a DataInputStream and then calling readInt, readFloat, readShort, readBoolean, etc.
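A sketch, assuming the file was written in binary form by a matching DataOutputStream (the file name is hypothetical):
import java.io.{DataInputStream, FileInputStream}

val in = new DataInputStream(new FileInputStream("data.bin"))
try {
  val i = in.readInt()     // next 4 bytes as an Int
  val f = in.readFloat()   // next 4 bytes as a Float
  val b = in.readBoolean() // next byte as a Boolean
  println(s"$i $f $b")
} finally in.close()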
In a type-level context, value-level terms still have a few accessors. The first one, and the one you asked for, is the type of the value itself (its singleton type, .type):
output = dataValue.asInstanceOf[templateValue.type]
if the type of the value has inner members, those become available as well:
class A {
class B {}
}
val a: A = new A
val b: a.B = new a.B
Notice b: a.B.
I must also mention how to access such members without a value-level term:
val b: A#B = new a.B
The Scala compiler can often infer return types for methods, but there are some circumstances where it's required to specify the return type. Recursive methods, for example, require a return type to be specified.
I notice that sometimes I get the error message "overloaded method (methodname) requires return type", but it's not a general rule that return types must always be specified for overloaded methods (I have examples where I don't get this error).
When exactly is it required to specify a return type, for methods in general and specifically for overloaded methods?
Chapter 2, Type Less, Do More, of the Programming Scala book mentions:
When Explicit Type Annotations Are Required.
In practical terms, you have to provide explicit type annotations for the following situations:
Method return values in the following cases:
When you explicitly call return in a method (even at the end).
When a method is recursive.
When a method is overloaded and one of the methods calls another. The calling method needs a return type annotation.
When the inferred return type would be more general than you intended, e.g., Any.
Example:
// code-examples/TypeLessDoMore/method-nested-return-script.scala
// ERROR: Won't compile until you put a String return type on upCase.
def upCase(s: String) = {
  if (s.length == 0)
    return s // ERROR - forces return type of upCase to be declared.
  else
    s.toUpperCase()
}
Overloaded methods can sometimes require an explicit return type. When one such method calls another, we have to add a return type to the one doing the calling, as in this example.
// code-examples/TypeLessDoMore/method-overloaded-return-script.scala
// Version 1 of "StringUtil" (with a compilation error).
// ERROR: Won't compile: needs a String return type on the second "joiner".
object StringUtil {
  def joiner(strings: List[String], separator: String): String =
    strings.mkString(separator)
  def joiner(strings: List[String]) = joiner(strings, " ") // ERROR
}
import StringUtil._ // Import the joiner methods.
println( joiner(List("Programming", "Scala")) )
The two joiner methods concatenate a List of strings together.
The first method also takes an argument for the separator string.
The second method calls the first with a “default” separator of a single space.
If you run this script, you get the following error.
... 9: error: overloaded method joiner needs result type
def joiner(strings: List[String]) = joiner(strings, "")
Since the second joiner method calls the first, it requires an explicit String return type. It should look like this:
def joiner(strings: List[String]): String = joiner(strings, " ")
Basically, specifying the return type can be a good practice even though Scala can infer it.
Randall Schulz comments:
As a matter of (my personal) style, I give explicit return types for all but the most simple methods (basically, one-liners with no conditional logic).
Keep in mind that if you let the compiler infer a method's result type, it may well be more specific than you want. (E.g., HashMap instead of Map.)
And since you may want to expose the minimal interface in your return type (see for instance this SO question), this kind of inference might get in the way.
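A small sketch of that pitfall (assuming you would rather expose Map than HashMap in the signature):
import scala.collection.immutable.HashMap

// Without an annotation the inferred result type is HashMap[String, Int],
// leaking the concrete implementation:
def countsInferred() = HashMap("a" -> 1)

// An explicit annotation keeps the public interface at Map:
def counts(): Map[String, Int] = HashMap("a" -> 1)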
And about the last scenario ("When the inferred return type would be more general than you intended"), Ken Bloom adds:
specify the return type when you want the compiler to verify that code in the function returns the type you expected
(The faulty code which triggers a "more general than expected" return type was:
// code-examples/TypeLessDoMore/method-broad-inference-return-script.scala
// ERROR: Won't compile. Method actually returns List[Any], which is too "broad".
def makeList(strings: String*) = {
  if (strings.length == 0)
    List(0) // #1
  else
    strings.toList
}
val list: List[String] = makeList() // ERROR
, which I incorrectly interpreted as List[Any] being inferred because an empty List is returned, but Ken called it out:
List(0) doesn't create a list with 0 elements.
It creates a List[Int] containing one element (the value 0).
Thus a List[Int] on one conditional branch and a List[String] on the other conditional branch generalize to List[Any].
In this case, the typer isn't being overly-general -- it's a bug in the code.
)