I have several internal logic dependencies in my source code. For example
Class A accepts an object, and for that object to be valid in Class A it needs to implement particular interfaces, such as InterfaceOne and InterfaceTwo.
I would like a way to represent the interface dependencies of Class A visually in Enterprise Architect. Right now I'm generating the base class by importing the source code, then manually creating the dependencies between the classes and interfaces.
In my source code these dependencies are all stored in a variable of the class:
$requiredDependencies = array('InterfaceOne', 'InterfaceTwo')...
Is there a way to programmatically parse this code, or maybe Enterprise Architect has a way to read comments (like Doxygen) so that I could specify such a relation in comments?
The Grammar Framework lets you generate in-EA parsers for custom languages, allowing you to reverse-engineer code in whatever language you choose. This is a pretty complex beast, but have a look in the help file under Extending UML Models -- MDG Technology SDK -- Grammar Framework.
If the language is already supported by EA, then that reverse-engineering process cannot be modified (other than what's available in the options), although you can of course write your own parser from scratch using the grammar framework.
If you want to do additional processing for a reverse-engineered class based on what's in its source file, then you can find the source file in Element.GenFile. You would then have to parse it yourself, of course.
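As a sketch of the "parse it yourself" route, the following standalone Java snippet pulls the interface names out of a $requiredDependencies declaration; the file name is a placeholder, and wiring the result back into EA (for example creating Dependency connectors via the automation interface) is left to your setup:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class RequiredDependencyScanner {

        // Matches e.g. $requiredDependencies = array('InterfaceOne', 'InterfaceTwo');
        private static final Pattern DECLARATION =
                Pattern.compile("\\$requiredDependencies\\s*=\\s*array\\(([^)]*)\\)");
        private static final Pattern NAME = Pattern.compile("'([^']+)'");

        public static List<String> scan(String sourcePath) throws Exception {
            String source = new String(Files.readAllBytes(Paths.get(sourcePath)));
            List<String> interfaces = new ArrayList<>();
            Matcher declaration = DECLARATION.matcher(source);
            if (declaration.find()) {
                Matcher name = NAME.matcher(declaration.group(1));
                while (name.find()) {
                    interfaces.add(name.group(1));
                }
            }
            return interfaces;
        }

        public static void main(String[] args) throws Exception {
            // "ClassA.php" is a placeholder; in EA you would read Element.GenFile instead.
            for (String iface : scan("ClassA.php")) {
                System.out.println("ClassA requires " + iface);
                // Here you would create a Dependency connector to the matching
                // interface element via EA's automation interface.
            }
        }
    }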
I've noticed that all of the examples in OptaPlanner use the line
import org.optaplanner.examples.common.domain.AbstractPersistable;
to import an abstract class that most of the domain classes extend. To persist my solution to an XML file, should I also use this class? My Maven project would need to have org.optaplanner.examples as a dependency, which I am hesitant to add.
I've read the source for AbstractPersistable, and I can't tell what benefits it provides to the example projects.
Don't depend on optaplanner-examples.
You probably don't need that class; just leave it out.
If you want a superclass that provides a database id for every domain class, simply copy-paste that class into your own sources.
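If you do copy such a superclass into your own code base, a minimal sketch could look like this (it only provides the database id; the examples' real class may do more):

    // Minimal sketch of a superclass that only carries a database id.
    // The name mirrors the examples' class, but this is your own copy.
    public abstract class AbstractPersistable {

        protected Long id;

        protected AbstractPersistable() {
        }

        protected AbstractPersistable(Long id) {
            this.id = id;
        }

        public Long getId() {
            return id;
        }

        public void setId(Long id) {
            this.id = id;
        }

        @Override
        public String toString() {
            return getClass().getSimpleName() + "-" + id;
        }
    }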
I am developing a general-purpose CRUD code generator application. The idea is that the code/files (model, controller, view) for common insert, update, list, delete etc. operations will be automatically generated from a model definition (like the definition used in Grails). But the generated code can be for any framework, e.g. Play (Scala or Java version), Django, Grails, or whatever framework the user wants to target, even AngularJS. That is, the same model definition can be used to generate code for any framework.
My question is: what can I use for this task - Scala, Groovy, or a DSL-specialized tool like Xtext?
This seems like a good case for a DSL. A DSL can be summed up as the following 3 elements:
Abstract Syntax: the concepts of your DSL. Here you want to specify CRUD applications.
Concrete Syntax(es): a way to materialize your Abstract Syntax. As a programmer, the first thought is often text-based syntaxes, but you could also use a graphical or tree-like syntax, or even simply a GUI with text fields and checkboxes.
Semantics: the meanings of your DSL. Here you want to generate code.
I'll now suggest some solutions which are based on Java and come from the Eclipse Modeling ecosystem.
Eclipse EMF implements standards for the definition of so-called "metamodels" (basically abstract syntaxes). In the Eclipse world, EMF is the base for a lot of tooling.
Assuming you have an EMF metamodel, textual syntaxes can be specified using Eclipse Xtext, and graphical syntaxes using Eclipse Sirius. Note that you can also develop your own GUI in Java and create your model using the EMF Java APIs. Also note that Xtext can create your metamodel for you based on the grammar you want for your text-based syntax, which is nice if you don't want to dive too deep into EMF itself (thus steps 1 and 2 are one and the same).
Eclipse Acceleo provides a template language specifically designed to generate text, including code. Once again, you can also write your code generator using plain Java, or any JVM-based language, thanks to the EMF Java APIs. If you use Xtext, there is also a facility for including an Xtend-based code generator alongside your syntax.
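To make the "plain Java" route concrete, here is a rough sketch that builds a tiny model with the EMF Java APIs and feeds it to a trivial string-template generator; the Book/title/pages names and the generated output are purely illustrative:

    import org.eclipse.emf.ecore.EAttribute;
    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EcoreFactory;
    import org.eclipse.emf.ecore.EcorePackage;

    public class CrudGeneratorSketch {

        public static void main(String[] args) {
            EcoreFactory factory = EcoreFactory.eINSTANCE;

            // Build a tiny model: an "Entity" called Book with two fields.
            EPackage modelPackage = factory.createEPackage();
            modelPackage.setName("library");

            EClass book = factory.createEClass();
            book.setName("Book");
            modelPackage.getEClassifiers().add(book);

            EAttribute title = factory.createEAttribute();
            title.setName("title");
            title.setEType(EcorePackage.Literals.ESTRING);
            book.getEStructuralFeatures().add(title);

            EAttribute pages = factory.createEAttribute();
            pages.setName("pages");
            pages.setEType(EcorePackage.Literals.EINT);
            book.getEStructuralFeatures().add(pages);

            // Plain-Java "generator": emit a skeleton class for one target framework.
            System.out.println(generateJavaEntity(book));
        }

        // A trivial template; a real generator (Acceleo, Xtend, ...) would replace this.
        static String generateJavaEntity(EClass entity) {
            StringBuilder out = new StringBuilder("public class " + entity.getName() + " {\n");
            for (EAttribute attribute : entity.getEAttributes()) {
                out.append("    private ")
                   .append(attribute.getEType().getInstanceClassName())
                   .append(" ").append(attribute.getName()).append(";\n");
            }
            out.append("}\n");
            return out.toString();
        }
    }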
Is it possible to create a class (or add attributes to a class) dynamically, e.g. load field names and types from an external file, in Scala?
This is a follow-up to Representing nested structures in scala.
It is possible to achieve this with macros, and there are two techniques for that with a different set of trade-offs. Refer to our joint talk with Travis Brown for more information and a link to an example implementation: https://github.com/travisbrown/type-provider-examples/blob/master/docs/scalar-2014-slides.pdf?raw=true.
You can declare and compile a Scala class from a data structure description. It requires that you a) construct a syntactically correct class description and save it to a file (or equivalent) and then b) compile that class description to object code (i.e., a .class file). You can then load the class and use it.
This is not for the faint of heart. You need to understand the process of translation, compilation, class-loading and dynamic class binding. Even more important, you have to answer how you would actually use this in a program.
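For illustration, here is the same compile-and-load cycle using the JDK compiler API (a Scala version would drive the Scala compiler instead); the class and path names are made up, and a JDK (not just a JRE) is assumed:

    import javax.tools.JavaCompiler;
    import javax.tools.ToolProvider;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class DynamicClassSketch {

        public static void main(String[] args) throws Exception {
            // a) Construct a syntactically correct class description
            //    (here hard-coded; in practice built from your field names/types file).
            String source = "public class GeneratedRecord {\n"
                          + "    public String name;\n"
                          + "    public int count;\n"
                          + "}\n";
            Path dir = Files.createTempDirectory("dyn");
            Path sourceFile = dir.resolve("GeneratedRecord.java");
            Files.write(sourceFile, source.getBytes());

            // b) Compile the description to a .class file.
            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
            if (compiler == null || compiler.run(null, null, null, sourceFile.toString()) != 0) {
                throw new IllegalStateException("Compilation failed");
            }

            // c) Load the class and use it reflectively.
            URLClassLoader loader = URLClassLoader.newInstance(new URL[]{dir.toUri().toURL()});
            Class<?> generated = loader.loadClass("GeneratedRecord");
            Object instance = generated.getDeclaredConstructor().newInstance();
            System.out.println("Loaded " + instance.getClass().getName());
        }
    }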
An example of dynamic class creation occurs in the Scala Play framework, where presentation template files are translated into Scala and compiled into class files that can then be referenced from other Scala source code.
Let's say I have a class...
com.mycom.app.AbstractMessage
There is another class in
com.mycom.model.QueryResponse
QueryResponse extends AbstractMessage, and notice they are in different packages.
com.mycom.model is a GWT module, and the model package is declared as a source path in the module XML.
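A typical descriptor for that layout might look like the following; this is an assumed illustration, since the original file isn't shown:

    <!-- com/mycom/Model.gwt.xml (assumed layout, not the original file) -->
    <module>
        <inherits name="com.google.gwt.user.User"/>
        <!-- makes com.mycom.model translatable; com.mycom.app is deliberately not included -->
        <source path="model"/>
    </module>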
When I compile the model module there are no errors. However, when I try to use QueryResponse in another GWT module, I get runtime errors:
"No source code is available for type com.mycom.app.AbstractMessage; did you forget to inherit a required module"
This leads me to believe that AbstractMessage was not compiled, or not compiled correctly, to begin with - understandably, because I DO NOT WANT the "app" package to be a GWT module.
In other words, I only want to compile the classes in "model" and not any superclasses. How can I tell the GWT compiler/RPC/linker/serializer etc. not to do so?
i.e., is there a way to tell GWT not to walk beyond certain classes when it is serializing/compiling them?
I am doing this in a source environment where we have a lot of packages, most of which depend on MODEL only, and I DO NOT want to make a GWT module out of every package just so it compiles.
Thoughts anyone?
I did a little bit of research on this one. You are right: GWT will look for all implementations of an abstract class, if and only if the abstract class is referenced in a GWT RPC (Async) interface, even though some of them are in non-GWT packages.
Let's say an object of type AbstractClass comes in over the network, and the GWT deserializer is now tasked with converting the network data into a specific instance. It needs to know about all implementations of AbstractClass to find out which one is coming over the network right now. To accomplish this, it generates, at compile time, a .rpc file for each GWT service interface, listing all possible concrete types that the service methods can return.
Ray Ryan (a Google employee) once mentioned that it is a bad idea to use interfaces as arguments or return types in any RPC interface, because it makes it difficult for the deserializer to know the exact type.
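To illustrate with the question's class names (the service and method names here are made up):

    import com.google.gwt.user.client.rpc.RemoteService;

    // Hypothetical RPC service interface, for illustration only.
    public interface QueryService extends RemoteService {
        // Problematic: an abstract return type forces the generated .rpc file to list
        // every serializable subtype of AbstractMessage the compiler can find.
        AbstractMessage query(String request);

        // Easier on the serializer: a concrete return type keeps that set small.
        QueryResponse findResponse(String request);
    }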
You can hand-edit the generated .rpc file and remove the offending types, or mark the other implementations as non-serializable by not implementing Serializable in those implementations in other packages.
A better way could be this: I suspect you wrote "implements java.io.Serializable" at the top level (for the abstract class itself); maybe it's now time to move it to each implementation.
Now the GWT RPC deserializer's task is clear and straightforward - it knows that only certain implementations of the abstract class (those that are serializable) will come over the network, and it will reach and compile only them. So it will not compile the other, non-serializable subclasses of your abstract class, as it knows they aren't serializable.
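In code, the suggestion looks roughly like this (the bodies are illustrative, using the class names from the question):

    // Before: the superclass itself declares Serializable, so every subclass,
    // in every package, becomes a candidate for GWT RPC serialization.
    // public abstract class AbstractMessage implements java.io.Serializable { ... }

    // After: the superclass is plain, and only the GWT-facing subclass opts in.
    public abstract class AbstractMessage {
        // common fields/behaviour
    }

    // In QueryResponse.java:
    public class QueryResponse extends AbstractMessage implements java.io.Serializable {
        // GWT RPC also needs a no-arg constructor
        public QueryResponse() {
        }
    }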
There is one more option. If, as I suspect, you are using the command pattern: I have seen that the abstract interfaces and superclasses for Command, Response etc. always go in the client-side packages, i.e., those that are GWT compiled. They can be referenced, used and instantiated from the server end of the application, so these source files are compiled twice: once by GWT into JavaScript for browser usage, and once by javac into bytecode so they can be referenced from the server side. Thus in all GWT modules, including gwt-user.jar, if you open them with 7-Zip or WinZip you will see source and class files JARed together.
I recommend moving AbstractMessage into the model package, as it is the superclass of the model class QueryResponse.
Also, inheritance in models is only a good idea if you have zero fields and only methods (behaviour) in the superclass.
Lastly, if GWT is to turn your QueryResponse into JavaScript, it needs ALL types mentioned in the source file in order to compile properly. So do not mention any server-only classes in a source file meant to become JavaScript.
Have one region with all the server-side Java classes that will run in a JVM on the server, and another region full of source files that will be compiled into JavaScript by the GWT compiler. The server-side code/classes CAN refer to client-region code/classes, but definitely not vice versa. Make sure that no code that is going to become JavaScript refers (even via an unused import statement) to a server-side class.
The GWT compiler works with source files only; however, you need to compile the client code into .class files so your server-side classes can refer to them.
Both of these frameworks deal with meta-models:
Xtext (Eclipse)
MPS (JetBrains)
Do you have examples of practical applications based on meta-model transformation with these tools?
We created a whole bug tracker using MPS. Code generation is not the goal but a means to get executable code. The goal is to give developers a tool that allows creating DSLs with minimum effort.
A cool thing about MPS is that it also provides you with an IDE for your language. And the different DSLs you create are compatible, i.e. you can create a DSL that extends Java with closures and another DSL that enables external methods, and these extensions will work together.
They differ in how the document storing the model is represented: Xtext parses plain text files, while MPS stores the syntax tree directly and projects it in its editor.
Regarding Xtext, this article illustrates one usage, when it comes to creating your own programming languages and domain-specific languages (DSLs).
Once you have a language, you want to process it, and this usually means transforming your model into another representation.
The facility responsible for this transformation is called a generator and consists of a bunch of transformation templates (e.g. Xpand) and some code executing them. On some event, the model is read in and the transformations are applied to produce code.
An example of such a model transformation is dot3zest, which comes with a DOT to Zest interpreter (which now uses the Xtext switch API generated for the DOT grammar) and support for ad-hoc DOT edge definitions.
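Such a generator can also be written in plain Java against Xtext's IGenerator hook; the sketch below is a minimal, assumed example, and the traversal and output file name are made up:

    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.resource.Resource;
    import org.eclipse.xtext.generator.IFileSystemAccess;
    import org.eclipse.xtext.generator.IGenerator;

    // Minimal sketch of an Xtext generator written in plain Java instead of Xpand/Xtend.
    public class SketchGenerator implements IGenerator {

        @Override
        public void doGenerate(Resource input, IFileSystemAccess fsa) {
            StringBuilder out = new StringBuilder();
            // Walk the parsed model and emit something for each root element.
            for (EObject element : input.getContents()) {
                out.append("// generated from ").append(element.eClass().getName()).append("\n");
            }
            fsa.generateFile("Generated.java", out.toString());
        }
    }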
Regarding MPS, you have here a series of practical examples, like code generation to GPLs such as Java, C#, C++ or XML (shown in a figure sourced from googlecode.com).
I think the main usage of Xtext is, firstly, to create a DSL from the grammar you define, with an Eclipse workbench auto-generated for you. Secondly, it can transform scripts written in your DSL into Java. The built-in expressions from Xtext 2 are a plus.
The framework gives you a free IDE to support writing in the DSL you created, and the DSL is the ultimate product to provide. It can be used to abstract the rules and logic of the real world. For example, in our project, the product configuration rules: only specialists know them, so they write some of them in the DSL you create.