Meaning of 'Ignore potential matches' - eclipse

Under Window > Preferences > General > Search, there is the option Ignore potential matches
What does it do? Whether I activate it or not, I never see a difference.
Is it an option that only makes sense for Java development (which I never do, but I do develop in C, Python and PHP using Eclipse)?

See bug 127442 for examples: depending on what you are searching for (a class, a method, ...), the search engine can find instances which could match (but it cannot say for certain).
Those instances are marked "POTENTIAL_MATCH":
A method with a different number of parameters is not a potential match (see bug 97322).
A potential match is a match where the resolution failed (e.g. the method binding is null).
If the user searches for "foo(String)" (without qualifying String), then "foo(java.lang.String)" and "foo(p.String)" are both exact matches.
For the .class file case, I think we can only have potential matches in the missing-type case (see bug 196200), i.e. if the .class file was compiled and some types it references were missing.
A current example of potential match misbehavior is found in bug 382778:
I have a public static void method printIt(String name).
When I open its call hierarchy, some callers are missing.
I am guessing the callers are missing because java search marks them as potential instead of exact matches for the printIt(String) reference.
The following code is sometimes marked as potential, and sometimes exact:
// Listing 1
PublicInterface2 impl2 = new Impl2("Name Broken");
Static.printIt(impl2.getName());
When the search result is marked potential, the caller is missing from the printIt() call hierarchy.
PublicInterface2 is an empty public interface which extends PackageInterface2Getters.
PackageInterface2Getters is an empty default-scoped interface which extends PackageInterface1Getters.
PackageInterface1Getters is a default-scoped interface which declares String getName().
So impl2.getName() above returns a String.
There are some problems reported which I guess cause the matches to be marked as potential:
...
Filename : \D:\workspace\eclipse\_runtimes\jdt\call-hierarchy-bug\src\main\PublicInterface2.java
COMPILED type(s)
2 PROBLEM(s) detected
- Pb(2) PackageInterface1Getters cannot be resolved to a type
- Pb(327) The hierarchy of the type PublicInterface2 is inconsistent
Turns out that:
The compiler asks the "NameEnvironment" to get the type information of any dependent type.
Search has its own NameEnvironment implementation in JavaSearchNameEnvironment, and it is not looking for secondary types.
This is bad and it is surprising that we haven't run into this problem until now.

In Eclipse Luna (Service Release 1 (4.4.1)) I searched just for references to this Java method:
merge(DashboardConfigurationModel template, DashboardModel custom)
It returns two references. One of these calls to the merge() method passes in a DashboardConfigurationModel and a DashboardModel, as befits the method signature. This is a match all right!
The other reference to a merge() method passes in a String and a Map. It is marked in Eclipse as a "potential match" but in my mind, since the argument types don't match, this has zero potential to be a match.
I then checked Ignore potential matches, did the search again, and this noise went away.

Scala and JMM: number types performance

I'm new in Scala and don't understand some base things.
Scala does not contain primitives. Hence int, short and other "simple" number types are objects. So, according to the JMM, they are not located on the stack and are subject to cleaning by the GC. Cleaning by the GC may be too expensive in some cases.
So I don't clearly understand why Scala is considered faster than Java (in which primitives are located on the stack).
Scala does not contain primitives. Hence int, short and other "simple" number types are objects.
That is correct.
So, according to the JMM,
The Java Memory Model is for Java. It is completely irrelevant to Scala.
they are not located on the stack and are subject to cleaning by the GC. Cleaning by the GC may be too expensive in some cases.
There is no such thing as a "stack" in Scala. The Scala Language Specification only mentions the term "stack" in very few places, and none of them have anything to do with Ints:
In section 1 Lexical Syntax, subsection 1.6 XML mode, it is said that because XML literals and Scala code can be arbitrarily nested, the parser has to use a stack data structure to keep track of the context.
In section 7 Implicits, subsection 7.2 Implicit parameters, it is said that to prevent an infinite recursion when searching for an implicit, the compiler keeps a stack of "open types", which are types that it is currently searching an implicit for.
In section 6 Expressions, subsection 6.6 Function Applications, there is the following statement, specifying Proper Direct Tail Recursion:
A function application usually allocates a new frame on the program's run-time stack. However, if a local method or a final method calls itself as its last action, the call is executed using the stack-frame of the caller.
In section 6 Expressions, subsection 6.20 Return Expressions, there is the following statement about one possible implementation strategy for non-local returns from nested functions:
Returning from the method from within a nested function may be implemented by throwing and catching a scala.runtime.NonLocalReturnControl. Any exception catches between the point of return and the enclosing methods might see and catch that exception. A key comparison makes sure that this exception is only caught by the method instance which is terminated by the return.
If the return expression is itself part of an anonymous function, it is possible that the enclosing method m has already returned before the return expression is executed. In that case, the thrown scala.runtime.NonLocalReturnControl will not be caught, and will propagate up the call stack.
Of these 4 instances, the first 2 clearly do not refer to the concept of a call stack but rather to the generic computer science data structure. The 4th one is only an example of a possible implementation strategy ("Returning from the method from within a nested function may be implemented by […]"). Only the 3rd one is actually relevant, as it indeed talks about a call stack. However, it does not say anything about allocating Ints, and it explicitly leaves the door open to alternative implementations as well, by stating that "usually" function application leads to allocation of a stack frame, but doesn't have to.
So I don't clearly understand why Scala is considered faster than Java (in which primitives are located on the stack).
Actually, there is nothing in the Java Language Specification either that says that primitives are located on the stack. In fact, the Java Language Specification does not mandate the existence of a stack at all. It would be perfectly legal to implement Java without a stack.
There are exactly zero occurrences of the term "stack" in the JLS. There are a couple of mentions of the term "heap", but only in the compound term "heap pollution", which is simply a word describing a certain flaw in the type system, but does not necessarily require a heap, and does not mandate a heap.
And none of these mentions of "heap pollution" have anything to do with primitives.
Note that, when I say that the Scala Language Specification says nothing about stacks or heaps or how Ints are allocated, that is actually really important. Because the SLS doesn't say anything, implementors are allowed to do whatever they want, including making Ints primitive and allocating them on the stack.
And that is exactly what most Scala implementations do. The (now-defunct) Scala.NET implemented scala.Int as a .NET System.Int32. Scala-native implements scala.Int as a C int32_t. Scala.js implements scala.Int as an ECMAScript number. And Scala-JVM implements scala.Int as a JVM int.
If you check out the source code of scala.Int in the Scala-JVM repository (src/library/scala/Int.scala), you will find that it is actually empty! More precisely, it only contains documentation and declarations, but no definitions or implementations. Also, the class is marked final (meaning it can't be inherited from) and abstract (meaning it must be inherited from in order to provide overrides for the missing implementations), which is a contradiction.
How does this work? Well, the compiler knows what an Int is and how it works, and it simply generates the correct code for dealing with a JVM int. So, when it sees a call to scala.Int.+, it knows that instead it must generate an iadd bytecode instruction. Likewise, Scala-native will just generate the native integer addition instructions, and so on.
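As a small illustration (the class name below is made up), compiling the following Scala and inspecting the bytecode with javap -c shows the addition turning into an iadd instruction on primitive JVM ints rather than a method call on an object:
class Adder {
  // Semantically a call to scala.Int.+, but the compiler emits an iadd
  // bytecode instruction operating on primitive ints.
  def add(a: Int, b: Int): Int = a + b
}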
In other words, Ints are semantically defined as objects, but they are actually pragmatically implemented as primitives.
This is a general rule of how language specifications work: typically, they only describe what the result is that the programmer sees, but they leave it open to the implementor how to actually achieve that result. So, the SLS specifies that an Int must look as if it actually were an object, but there is nothing that says it actually has to be one.
They are handled the same way that Java handles those types: they're only boxed when strictly necessary. The details of how and when they are boxed may differ, but the compiler uses a primitive representation if it can do so. Here's what the docs say (this is just for Int, but it applies to the other "primitive" types too):
Int, a 32-bit signed integer (equivalent to Java's int primitive type) is a subtype of scala.AnyVal. Instances of Int are not represented by an object in the underlying runtime system.
There is an implicit conversion from scala.Int => scala.runtime.RichInt which provides useful non-primitive operations.
https://www.scala-lang.org/api/2.13.6/scala/Int.html
The main difference, really, is that there aren't two separate types, like in Java, to represent the boxed and unboxed representations — both get the same Int type, whereas Java has int and Integer.
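A rough sketch of where this shows up in practice on Scala-JVM (the variable names are made up): generic collections store their Int elements boxed because of erasure, while an Array[Int] compiles down to a primitive JVM int[]:
val xs: List[Int]  = List(1, 2, 3)   // elements end up boxed as java.lang.Integer (generic erasure)
val ys: Array[Int] = Array(1, 2, 3)  // backed by a primitive int[], no boxing
val n: Int = xs.head + ys(0)         // the arithmetic itself runs on primitive ints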

ItextSharp - 'Lowercase' is ambiguous because multiple kinds of members with this name exist in class 'List'

I am using VB.NET, and when I try to set the lettered list to lowercase
mylist.lowercase = list.lowercase
I get an error
'Lowercase' is ambiguous because multiple kinds of members with this name exist in class 'List'
lowercase is a protected field on the List class, so I'm pretty sure you mean the class constant LOWERCASE.
For historical and backward-compatibility reasons the VB.NET language is case-insensitive, but the rest of the CLR is case-sensitive, so you have to be conscious of this.
Anyway, when using that specific field you'll run into a conflict, so your safest bet is to just use the field's value of True in its place. If this bugs you, you can also waste a bunch of extra CPU cycles and jump into reflection, but I wouldn't recommend it:
''Bad code but works
mylist.lowercase = GetType(iTextSharp.text.List).GetField("LOWERCASE").GetValue(Nothing)
EDIT
From the comments I now see that it is the left side that is causing you problems. Just use the IsLowercase property instead:
mylist.IsLowercase = True

How do purely functional compilers annotate the AST with type info?

In the syntax analysis phase, an imperative compiler can build an AST out of nodes that already contain a type field that is set to null during construction, and then later, in the semantic analysis phase, fill in the types by assigning the declared/inferred types into the type fields.
How do purely functional languages handle this, where you do not have the luxury of assignment? Is the type-less AST mapped to a different kind of type-enriched AST? Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?
Are there purely functional programming tricks that help the compiler writer with this problem?
I usually rewrite a source AST (or one already lowered by several steps) into a new form, replacing each expression node with a pair (tag, expression).
Tags are unique numbers or symbols which are then used by the next pass which derives type equations from the AST. E.g., a + b will yield something like { numeric(Tag_a). numeric(Tag_b). equals(Tag_a, Tag_b). equals(Tag_e, Tag_a).}.
Then the type equations are solved (e.g., by simply running them as a Prolog program) and, if successful, all the tags (which are variables in this program) are now bound to concrete types; if not, they're left as type parameters.
In the next step, our previous AST is rewritten again, this time replacing tags with all the inferred type information.
The whole process is a sequence of pure rewrites; there is no need to replace anything in your AST destructively. A typical compilation pipeline may take a couple of dozen rewrites, some of them changing the AST datatype.
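A minimal Scala sketch of this tag-then-solve idea (all names are invented, and the constraint solver and the final rewrite are only hinted at in the closing comment):
sealed trait Expr
case class Lit(n: Int)               extends Expr
case class Ref(name: String)         extends Expr
case class Add(lhs: Expr, rhs: Expr) extends Expr

sealed trait Constraint
case class Numeric(tag: Int)    extends Constraint
case class Same(a: Int, b: Int) extends Constraint

// First rewrite: hand out a fresh tag per node and emit type equations for it.
def constrain(e: Expr, fresh: () => Int): (Int, List[Constraint]) = e match {
  case Lit(_) =>
    val t = fresh(); (t, List(Numeric(t)))
  case Ref(_) =>
    val t = fresh(); (t, Nil)
  case Add(l, r) =>
    val (tl, cl) = constrain(l, fresh)
    val (tr, cr) = constrain(r, fresh)
    val t = fresh()
    (t, Numeric(tl) :: Numeric(tr) :: Same(tl, tr) :: Same(t, tl) :: cl ::: cr)
}

// A later pass would solve the constraints into a Map[Int, Type] and rewrite the
// tagged tree once more, replacing each tag with its inferred type.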
There are several options to model this. You may use the same kind of nullable data fields as in your imperative case:
data Exp = Var Name (Maybe Type) | ...
parse :: String -> Maybe Exp -- types are Nothings here
typeCheck :: Exp -> Maybe Exp -- turns Nothings into Justs
or even, using a more precise type
data Exp ty = Var Name ty | ...
parse :: String -> Maybe (Exp ())
typeCheck :: Exp () -> Maybe (Exp Type)
I can't speak for how it is supposed to be done, but I did do this in F# for a C# compiler here.
The approach was basically: build an AST from the source, leaving things like type information unconstrained. So AST.fs is basically the AST, which uses strings for the type names, function names, etc.
As the AST starts to be compiled to (in this case) .NET IL, we end up with more type information (we create the types in the source - let's call these type stubs). This then gives us the information needed to create method stubs (the code may have signatures that include type stubs as well as built-in types). From here we now have enough type information to resolve any of the type names or method signatures in the code.
I store that in the file TypedAST.fs. I do this in a single pass, however the approach may be naive.
Now we have a fully typed AST you could then do things like compile it, fully analyze it, or whatever you like with it.
So in answer to the question "Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?", I can't say definitively that this is the case, but it is certainly what I did, and it appears to be what MS have done with Roslyn (although they have essentially decorated the original tree with type info, IIRC).
"Are there purely functional programming tricks that help the compiler writer with this problem?"
Given the ASTs are essentially mirrored in my case, it would be possible to make it generic and transform the tree, but the code may end up (more) horrendous.
i.e.
type AST<'ty> =
    | MethodInvoke of 'ty * Name * 'ty list
    | ...
Like in the case when dealing with relational databases, in functional programming it is often a good idea not to put everything in a single data structure.
In particular, there may not be a data structure that is "the AST".
Most probably, there will be data structures that represent parsed expressions. One possible way to deal with type information is to assign a unique identifier (like an integer) to each node of the tree already during parsing and have some suitable data structure (like a hash map) that associates those node-ids with types. The job of the type inference pass, then, would be just to create this map.
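A minimal Scala sketch of this id-plus-map approach (all the type and constructor names here are invented):
sealed trait Type
case object TInt extends Type
case object TStr extends Type

sealed trait Expr { def id: Int }                      // ids are handed out during parsing
case class IntLit(id: Int, value: Int)           extends Expr
case class StrLit(id: Int, value: String)        extends Expr
case class Concat(id: Int, lhs: Expr, rhs: Expr) extends Expr

// Type inference only builds a map from node id to type; the tree itself is untouched.
def infer(e: Expr, acc: Map[Int, Type] = Map.empty): Map[Int, Type] = e match {
  case IntLit(id, _)    => acc + (id -> TInt)
  case StrLit(id, _)    => acc + (id -> TStr)
  case Concat(id, l, r) => infer(r, infer(l, acc)) + (id -> TStr)
}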

Definition of statically typed and dynamically typed

Which of these two definitions is correct?
Statically typed - Type matching is checked at compile time (and therefore can only be applied to compiled languages)
Dynamically typed - Type matching is checked at run time, or not at all. (this term can be applied to compiled or interpreted languages)
Statically typed - Types are assigned to variables, so that I would say 'x is of type int'.
Dynamically typed - types are assigned to values (if at all), so that I would say 'x is holding an int'
By this definition, static or dynamic typing is not tied to compiled or interpreted languages.
Which is correct, or is neither one quite right?
Which is correct, or is neither one quite right?
The first pair of definitions is closer but not quite right.
Statically typed - Type matching is checked at compile time (and therefore can only be applied to compiled languages)
This is tricky. I think if a language were interpreted but did type checking before execution began then it would still be statically typed. The OCaml REPL is almost an example of this except it technically compiles (and type checks) source code into its own byte code and then interprets the byte code.
Dynamically typed - Type matching is checked at run time, or not at all.
Rather:
Dynamically typed - Type checking is done at run time.
Untyped - Type checking is not done.
Statically typed - Types are assigned to variables, so that I would say 'x is of type int'.
Dynamically typed - types are assigned to values (if at all), so that I would say 'x is holding an int'
Variables are irrelevant. Although in the source code of many statically typed languages you only see types written explicitly at variable and function definitions, all of the subexpressions also have static types. For example, "foo" + 3 is usually a static type error because you cannot add a string to an int, but there is no variable involved.
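For instance, in Scala syntax (a toy example), an ill-typed subexpression is rejected at compile time even when no variable is involved:
object StaticTypesDemo {
  val ok: Boolean = true && false   // every subexpression here has a static type
  // val bad = true && "foo"        // rejected at compile time: String where a Boolean is required
}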
One helpful way to look at the word static is this: static properties are those that hold for all possible executions of the program on all possible inputs. Then you can look at any given language or type system and consider which static properties can it verify, for example:
JavaScript: no segfaults/memory errors
Java/C#/F#: if a program compiled and a variable had a type T, then the variable only holds values of this type - in all executions. But, sadly, reference types also admit null as a value - the billion dollar mistake.
ML has no null, making the above guarantee stronger
Haskell can verify statements about side effects, for example a property such as "this program does not print anything on stdout"
Coq also verifies termination - "this program terminates on all inputs"
How much you want to verify depends on taste and the problem at hand. All magic (verification) comes at a price.
If you have never ever seen ML before, do give it a try. At least give 5 minutes of attention to Yaron Minsky's talk. It can change your life as a programmer.
The second is a better definition in my eyes, assuming you're not looking for an explanation as to why or how things work.
Better again would be to say that
Static typing gives variables an EXPLICIT type that CANNOT change
Dynamic typing gives variables an IMPLICIT type that CAN change
I like the latter definition. Consider the type checking that happens when casting from a base class to a derived class in object-oriented languages like Java or C++: it fits the second definition and not the first. These are compiled languages with (optional) dynamic type checking.
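A Scala analogue of the same point (a toy example): a downcast passes the static rules, but the actual check only happens at run time:
val any: Any = "hello"
// Compiles fine, but throws a ClassCastException when executed:
val n: Int = any.asInstanceOf[Int]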

scala - is it possible to force immutability on an object?

I mean, is there some declarative way to prevent an object from changing any of its members?
In the following example
class student(var name:String)
val s = new student("John")
"s" has been declared as a val, so it will always point to the same student.
But is there some way to prevent s.name from being changed, just by declaring it as immutable?
Or the only solution is to declare everything as val, and manually force immutability?
No, it's not possible to declare something immutable. You have to enforce immutability yourself, by not allowing anyone to change it, that is, by removing all ways of modifying the class.
Someone can still modify it using reflection, but that's another story.
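In practice that means declaring every field with val; here is a minimal sketch based on the class from the question:
class Student(val name: String)   // val: only a getter is generated, there is no setter
val s = new Student("John")
// s.name = "Jane"                // does not compile: "reassignment to val"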
Scala doesn't enforce that, so there is no way to know. There is, however, an interesting compiler-plugin project named pusca (I guess it stands for Pure-Scala). Pure is defined there as not mutating a non-local variable and being side-effect free (e.g. not printing to the console)—so that calling a pure method repeatedly will always yield the same result (what is called referentially transparent).
I haven't tried out that plug-in myself, so I can't say if it's any stable or usable already.
There is no way that Scala could do this generally.
Consider the following hypothetical example:
class Student(var name: String, var course: Course)
def stuff(course: Course) {
  magically_pure_val s = new Student("Fredzilla", course)
  someFunctionOfStudent(s)
  genericHigherOrderFunction(s, someFunctionOfStudent)
  course.someMethod()
}
The pitfalls for any attempt to actually implement that magically_pure_val keyword are:
someFunctionOfStudent takes an arbitrary student, and isn't implemented in this compilation unit. It was written/compiled knowing that Student consists of two mutable fields. How do we know it doesn't actually mutate them?
genericHigherOrderFunction is even worse; it's going to take our Student and a function of Student, but it's written polymorphically. Whether or not it actually mutates s depends on what its other arguments are; determining that at compile time with full generality requires solving the Halting Problem.
Let's assume we could get around that (maybe we could set some secret flags that mean exceptions get raised if the s object is actually mutated, though personally I wouldn't find that good enough). What about that course field? Does course.someMethod() mutate it? That method call isn't invoked from s directly.
Worse than that, we only know that we'll have passed in an instance of Course or some subclass of Course. So even if we are able to analyze a particular implementation of Course and Course.someMethod and conclude that this is safe, someone can always add a new subclass of Course whose implementation of someMethod mutates the Course.
There's simply no way for the compiler to check that a given object cannot be mutated. The pusca plugin mentioned by 0__ appears to detect purity the same way Mercury does: by ensuring that every method is known from its signature to be either pure or impure, and by raising a compiler error if the implementation of anything declared to be pure does anything that could cause impurity (unless the programmer promises that the method is pure anyway).[1]
This is quite different from simply declaring a value to be completely (and deeply) immutable and expecting the compiler to notice if any of the code that could touch it could mutate it. It's also not a perfect inference, just a conservative one.
[1]The pusca README claims that it can infer impurity of methods whose last expression is a call to an impure method. I'm not quite sure how it can do this, as checking if that last expression is an impure call requires checking if it's calling a not-declared-impure method that should be declared impure by this rule, and the implementation might not be available to the compiler at that point (and indeed could be changed later even if it is). But all I've done is look at the README and think about it for a few minutes, so I might be missing something.