In Java, package-protected access was very handy because it allowed us to write modular code. This does not seem possible in Kotlin unless you stick all those classes into one file and make them all private, or mark them internal in a separate module. But I don't like these solutions: putting a lot of code in one file is not readable, and another problem is that you cannot test any method/class that is not public. Is there another solution?
No, package-protected access is not supported.
You should use internal in Kotlin. It restricts access to the same module, i.e. a logical unit of files that are compiled together into one artifact.
The motivation for not providing a package-protected visibility specifier is as follows, from a Kotlin developer:
The motivation for not having package protected access is very simple: it does not provide any real encapsulation. Any other module in the system can define classes in the same package as your complex independent component and get full access to its internals. On the other hand, classes with internal visibility cannot be accessed from any module other than the one where they are defined.
And you definitely can test methods/classes that have internal access: the tests of a module have full access to internal declarations of that module.
I work with multiple grammars in the REPL. The grammars use the same names for some of their rules.
One of the documentation recipes mentions full qualification to disambiguate type annotations in function pattern matching (it is in a note of the load function, but not in the code on that page - the .jar has it correct). But that might become tedious, so maybe there is aliasing for imports (like Python's import regex as r)? And using full qualification in the first argument of the parse function does not seem to help disambiguate all parse rules that are invoked recursively, as in parse(#lang::java::\syntax::Java18::CompilationUnit, src). At least it produces weird errors if I also import lang::java::\syntax::Java15.
In general, what is a safe way to handle symbols from different modules with the same names?
Alternatively, is there a way to "unload" a module in the repl?
Some background information:
Rascal modules are open for reasons of extensibility: in particular, data, syntax definitions and overloaded functions can be extended by importing another module. In this way you can extend a language and its processing functions by importing another module and adding rules and function alternatives at leisure.
There is a semantic difference between importing and extending a module. In particular import is not transitive and fuses only the uses of a name inside the importing module, while extend is transitive and also fuses recursive uses of a name in the module that is extended. So for extending a language, you'd default to using extend, while for using a library of functions you'd use import.
We are planning to remove the fusing behavior from import completely in one of the releases of 2020. After this all conflictingly imported non-terminal names must be disambiguated by prefixing with the module name, and prefixing will not have a side-effect of fusing recursively used non-terminals from different modules anymore. Not so for extend, which will still fuse the non-terminal and functions all the way.
All the definitions in a REPL instance simulate the semantics of the members of a single anonymous module.
So to answer your questions:
It is not particularly safe to handle symbols from different imported modules with the same name - until we fix the semantics of import, that is.
The module prefix trick only works at the top level; below that the types are fused anyway, because the code which reifies a non-terminal as a grammar does not propagate the prefix. It wouldn't know how.
Unimporting a module:
rascal>import IO;
ok
rascal>println("x");
x
ok
rascal>:un
undeclare unimport
rascal>:unimport IO
ok
rascal>println("x");
|prompt:///|(0,7,<1,0>,<1,7>): Undeclared variable: println
This is probably one of the least used features in the environment; caveat emptor!
To work around these issues, one way is to write the functions for every separate language/language version in its own module, and to create a top module which imports these if you want to bundle the functionality behind a single interface. Because import is not transitive, the namespaces stay separate and clean this way. Of course this does not solve the REPL issue; the only thing I can offer there is to start a fresh REPL for each language version you are playing with.
This issue is purely about design modeling.
I have two packages, and different classes with the same name need to be placed in these packages.
What is a good design solution if I have classes with the same name in different packages?
I have read about different code-level solutions, such as:
1. use an "import" dependency between packages to avoid redundant classes
2. create instances of the classes in the other package, thus allowing classes with the same name in different packages
3. fully qualify one of the class names
Would you suggest which of these is the best solution, or could you tell me other good solutions, please?
You are allowed to use the same name for classes when they are in different packages. A package is a namespace, so the fully qualified names of such classes will be different. How you access a class then depends on which package you are in at the moment: from outside the package containing the class you reach it through an import or its fully qualified name, and whenever both same-named classes are in scope you have to use fully qualified names to avoid ambiguity.
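To make that concrete, here is a small sketch (the package and class names are made up): two classes share the simple name Order, and a client that needs both imports one and fully qualifies the other.

// File com/example/sales/Order.java
package com.example.sales;

public class Order {
    // sales-specific members
}

// File com/example/shipping/Order.java
package com.example.shipping;

public class Order {
    // shipping-specific members
}

// File com/example/billing/Billing.java
package com.example.billing;

// Only one Order can be imported by its simple name;
// the other has to be written out fully to avoid ambiguity.
import com.example.sales.Order;

public class Billing {
    Order salesOrder = new Order();
    com.example.shipping.Order shipment = new com.example.shipping.Order();
}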
If the classes are actually the same, you may:
- put it in the one package where it fits best and simply access it from the other package (the standard approach, possible for all public classes)
- put it in one of the packages (if it suits that package better for some reason) and import it into the other package (through an element or package import)
- put it in an additional package (e.g. Utils) and import it into both packages.
The choice will depend on the specific situation.
If it is the same class, you should define it in one package and "reuse" it in the other.
A complete UML modeling tool should let you drag and drop an existing class into another package.
The tool should also be able to indicate that you are using a class from another package.
I am currently reading "Scala Programming, 2nd ed." (OReilly 2015) by Wampler/Payne, and they mention Package Objects as a means to expose abstractions.
On p. 504, however, they mention:
Package objects
"An alternative to fine-grained visibility controls is putting all implementation constructs in a protected package, then using a top-level package object to expose only the appropriate public abstractions. For example, type members can alias types that would otherwise be hidden [...]"
Now my question is: is there a way to declare said protected package as protected once, without having to declare it for every class/object down the hierarchy? And if so, how?
Or did I simply misunderstand the authors?
As clarification: I am currently working on a library which is supposed to expose a minimal API, so that $colleagues have to actually touch the internals to make fundamental changes, or have to do configuration via config files.
Second question: is this the right path to go? Should I maybe go another route?
Doing a little research of my own, it seems that it is not possible in Scala to declare a package to be entirely private. (You can check the Scala Language Specification, where there is no mention of being able to qualify a package declaration with 'private' or similar.)
I think what the authors are suggesting is the following, roughly translated:
Instead of using fine-grained controls, such as declaring some of your class members private or protected and leaving the rest public, you could make all of the important classes in your package completely private (as in private class/trait/object P {...}). Then you can put the important public details of the API into the package object, exposing these private innards. For example, if you need to expose a protected type, you can use type aliasing for that...
Let's say I have a class...
com.mycom.app.AbstractMessage
There is another class in
com.mycom.model.QueryResponse
QueryResponse extends AbstractMessage, and notice they are in different packages.
com.mycom.model is a GWT Module and in the module XML
When I compile model there are errors. However, when I try to use QueryResponse in another GWT module, I get runtime errors:
"No source code is available for type com.mycom.app.AbstractMessage; did you forget to inherit a required module"
This leads me to believe that AbstractMessage was not compiled, or not compiled correctly, to begin with - understandably, because I DO NOT WANT the "app" package to be a GWT module.
In other words, I only want to compile the classes in "model" and not any super classes outside it. How can I tell the GWT compiler/RPC/linker/serializer etc. not to do so?
I.e. is there a way to tell GWT not to walk beyond certain classes when it is serializing/compiling?
I am doing this in a source environment where we have a lot of packages; most of them depend on MODEL only, and I DO NOT want to make a GWT module out of every package just so it compiles.
Thoughts anyone?
I did a little bit of research on this one. You are right: GWT will look for all implementations of an abstract class, if and only if the AbstractClass is referenced in an RPC GWT async interface, even though some of them are in non-GWT packages.
Let's say an object of type AbstractClass comes in over the network, and the GWT deserializer is now tasked with converting the network data into a specific instance. It needs to know about all implementations of AbstractClass, to find which one is coming over the network right now! So to accomplish this it generates, at compile time, a .rpc file for each GWT service interface, listing all possible concrete types that the service methods can return.
Ray Ryan (Google employee) once mentioned that it is a bad idea to use interfaces as arguments or return types in any RPC interface, because it makes it difficult for the deserializer to know the exact type.
You can hand-edit the generated .rpc file and remove the offending types, or mark the other implementations as non-serializable by not implementing Serializable in those implementations in other packages.
A better way could be this: I suspect you wrote "implements java.io.Serializable" at the top level (on the AbstractClass itself); maybe it is now time to move it to each implementation.
Now the GWT RPC deserializer's task is clear and straightforward: it knows that only certain (serializable) implementations of the AbstractClass will come over the network, and it will reach and compile only those. It will not compile the other non-serializable subclasses of your AbstractMessage, as it knows they aren't serializable.
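A rough sketch of that suggestion, reusing the class names from the question but with otherwise made-up details: the base class stops declaring Serializable, and only the concrete response that should travel over RPC opts in. (As recommended further down, AbstractMessage still has to be visible to the GWT compiler, for instance by moving it into the model package.)

// com.mycom.app.AbstractMessage - no longer implements Serializable,
// so RPC serialization analysis has no reason to chase its other subclasses.
package com.mycom.app;

public abstract class AbstractMessage {
    // common behaviour only
}

// com.mycom.model.QueryResponse - opts in to GWT RPC serialization explicitly.
package com.mycom.model;

import com.mycom.app.AbstractMessage;
import java.io.Serializable;

public class QueryResponse extends AbstractMessage implements Serializable {
    private String payload;

    // GWT RPC requires a default constructor on serializable types.
    public QueryResponse() {
    }

    public String getPayload() {
        return payload;
    }
}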
There is one more option. If, as I suspect, you are using the command pattern: I have seen all the abstract interfaces and super classes for Command, Response, etc. always go in the client-side packages, i.e. those that are GWT compiled. They can still be referenced, used and instantiated from the server end of the application, so these source files are compiled twice: once by GWT into JavaScript for browser usage, and once by javac into bytecode to allow references from the server side. Thus in all GWT modules, including gwt-user.jar, if you open them with 7-Zip or WinZip you will see source and class files JARed together.
I recommend moving AbstractMessage into the model package, as it is the super class of the model QueryResponse.
Also, inheritance in models is only a good idea if you have zero fields and only methods (behaviour) in the super class.
Lastly, if GWT is to turn your QueryResponse into JavaScript, it needs ALL types mentioned in the source file in order to compile properly. So do not mention any server-only classes in a source file that is meant to become JavaScript.
Have one region with all the server-side Java classes that will run in a JVM on the server, and another region full of source files that will be compiled into JavaScript by the GWT compiler. The server-side region code/classes CAN refer to client-region code/classes, but definitely NOT vice versa. Make sure that no code that is going to become JavaScript refers (even via an unused import statement) to a server-side class.
The GWT compiler works with source files only; however, you need to compile the client code into .class files so your server-side classes can refer to them.
I have GWT project that uses Generators to create light dynamic reflection objects.
I was wondering if anybody knows of a way to determine whether or not a particular class is referenced in the dependency tree beginning at all EntryPoints. If I could do this, I could avoid generating reflection data for classes that will never be used anyway.
My understanding is that when GWT does its compiling, it performs a similar check so that it can reduce the total size of the compiled code, but I haven't been able to find any related methods in TypeOracle or anything like that.
This is an indirect method of accomplishing what you are getting at. I believe each GWT module is fully packaged into a regular Java package. You can use
TypeOracle.findPackage(String pkgName)
to get the JPackage instance, and on that instance you use findType(String typeName) to see if a type is present in that package. If it is present, it is likely that it is referenced in some file and GWT will compile it.
There is also the method getPackages(), which returns an array of all packages known to this type oracle, and therefore reachable for the GWT compiler:
JPackage[] getPackages()
You can iteratively call findType() on each package to find out whether the type is going to be compiled or not.
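A minimal sketch of that lookup, assuming the TypeOracle and JPackage methods named above; the package and type names are only placeholders.

import com.google.gwt.core.ext.typeinfo.JClassType;
import com.google.gwt.core.ext.typeinfo.JPackage;
import com.google.gwt.core.ext.typeinfo.TypeOracle;

public final class TypeLookup {

    // True if a type with the given simple name exists in the given package,
    // i.e. it is visible to the GWT compiler via the TypeOracle.
    static boolean isKnownInPackage(TypeOracle oracle, String pkgName, String simpleName) {
        JPackage pkg = oracle.findPackage(pkgName);
        return pkg != null && pkg.findType(simpleName) != null;
    }

    // Scan every package known to the TypeOracle for a type with the given simple name.
    static JClassType findAnywhere(TypeOracle oracle, String simpleName) {
        for (JPackage pkg : oracle.getPackages()) {
            JClassType type = pkg.findType(simpleName);
            if (type != null) {
                return type; // reachable for the GWT compiler
            }
        }
        return null;
    }
}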
The BEST method is to define a custom annotation and whitelist all the classes that you do want to generate reflection code for. You annotate the required classes with it, and check for the presence of that annotation before generating code for a class.
My favorite is to follow a naming convention rather than an annotation (I did both together), and thus maintain a whitelist, making the convention (usually a regex) a "setting" that can be changed however the team wants.
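As a sketch of the whitelist idea (all names here are made up): a marker annotation plus a configurable naming pattern, with the generator only emitting reflection code for types that pass the check.

// Marker annotation for types that should get reflection code generated.
package com.example.client.reflect;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Reflectable {
}

// Used inside the generator to decide whether to emit code for a type.
package com.example.rebind;

import com.example.client.reflect.Reflectable;
import com.google.gwt.core.ext.typeinfo.JClassType;
import java.util.regex.Pattern;

public final class ReflectionWhitelist {

    // The naming convention is a plain setting (a regex) the team can change.
    private final Pattern namePattern;

    public ReflectionWhitelist(String nameRegex) {
        this.namePattern = Pattern.compile(nameRegex);
    }

    // Whitelisted if the type carries the annotation or matches the convention.
    public boolean shouldGenerateFor(JClassType type) {
        return type.getAnnotation(Reflectable.class) != null
                || namePattern.matcher(type.getQualifiedSourceName()).matches();
    }
}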