I want to write a code generator that generates a class based on the metamodel of another Ceylon class, and I want the generator to run at compile time. What is the best way to do this? I could probably accomplish it by writing a plugin for Gradle or the Ceylon build system, but I'm hoping for a simpler solution. Unfortunately, I don't see any support for code generators in Ceylon. Are there any plans to add code generators to Ceylon?
I want to write this code generator because I'm thinking about writing a simple web framework for Ceylon that looks at a class like the following using the metamodel:
controller
shared class Controller() {
    shared void doSomething() => print("did it!");
}
I plan for it to be like Spring MVC. This framework would make a RESTful API from the Controller class, allowing someone to write an AJAX call like this:
$http.get("/Controller/doSomething");
I want to make things more convenient, high-level, and simple by doing something like GWT does. I want to create a code generator that automatically generates a class like this:
shared class RemoteController() {
    shared void doSomething() {
        $http.get("/Controller/doSomething");
    }
}
The RemoteController would run in the user's browser as JavaScript and allow client-side Ceylon code to make an AJAX call like this:
RemoteController().doSomething();
That would end up calling Controller().doSomething() on the server, so "did it!" would be printed.
AST Transformers have been proposed, but are still in the early design phase. For now, to do compile-time code generation, you’ll have to rig up something of your own.
To actually generate the code, I would recommend using ceylon.ast and ceylon.formatter. The workflow would roughly be:
1. Analyze the source code:
   either parse it with ceylon.ast (ceylon.ast.redhat::compileAnyCompilationUnit) and analyze it without typechecking,
   or parse it using the compiler, run the typechecker, then convert it to ceylon.ast (ceylon.ast.redhat::anyCompilationUnitToCeylon), keeping the typechecker information by using the new update hooks in the upcoming 1.2.0 release.
2. Edit the source code AST to add your new code (using a custom ceylon.ast.core::Editor that injects new class definitions into the CompilationUnits), or perhaps create entirely new compilation units if the RemoteController lives in a different module.
3. Convert the ceylon.ast AST to a compiler AST and feed it into ceylon.formatter to turn the AST into source code again (see here for an example of that).
Alternatively, if you integrate this into your build step, you could skip the ceylon.formatter part of step 3 and instead feed the converted compiler AST into the typechecker and the rest of the compiler directly.
Related
Besides REST API calls, I need to call my own Java class, which does something that I want to confirm later in the test via REST API calls.
When calling my Java class, there is an expected behavior: it may fail or not fail, depending on the actual test case.
Is there any way to code this expectation into my test class:
java("com.org.xyz.App").method("run").methodArgs(args).build();
As this is the main class, which will be executed later in an automated fashion, I would prefer to validate the return code.
However, I'm looking for any possible way (exception assertion, stdout check, ...) to verify the status of the program.
As you are using the Java DSL to write test cases, I would suggest going with a custom test action implementation and/or initializing your custom class directly in the test method and calling the method as you would with any other API.
You can wrap the custom code in a custom AbstractTestAction implementation, as you then have access to the TestContext and your custom code is integrated into the test action sequence (see the sketch below).
The java("com.org.xyz.App").method("run").methodArgs(args) API exists only for parity with the XML DSL, where you do not have the opportunity to initialize your own Java class instances. It is way too complicated for your requirement, in my opinion.
As far as I know, the current usage of a compiler plugin is to define an attribute for the compiler to recognize; the compiler then invokes the code defined and registered in the plugin.
I am wondering if it is possible to build a compiler plugin that has a post-processor. I could somehow first register the structs encountered by proc_macro_derive in a data structure within the plugin, and then the post-processor could generate code according to the data structure populated earlier.
My intention is to generate a symbol table from the derived structs, so I can experiment with dynamic typing in Rust. I am not sure whether this can be achieved at compile time without manually registering them one by one at runtime.
I want to create JS files using a template engine with Scala. Is this possible with the popular templating engines for Scala, namely Play and Scalate? If so, what are the pros and cons of using either of them?
Just create a view with a .js extension, e.g. app/views/myScript.scala.js, with dummy content:
@(message: String)
alert("@message");
Then add an action to your controller:
def myScript = Action {
  // use views.js... NOT views.html... !
  Ok(views.js.myScript.render("Whoha! I'm dynamic JS in Scala :)"))
}
or in the Java version:
public Result myScript() {
    // use views.js... NOT views.html... !
    return ok(views.js.myScript.render("Yey! I'm dynamic JS in Java :)"));
}
Add the route to this action:
GET /my-script controllers.Application.myScript()
So you can use this route directly:
<script src="/my-script"></script>
Note that Play should return a valid Content-Type: text/javascript; charset=utf-8 header in the response; however, depending on the version you are using, you may need to enforce this manually within your action (use your browser's inspection tool to check the response type).
It really depends on what you want to achieve, i.e. how sophisticated your JavaScript code will be, but unless it's something really small and simple, I'd suggest using Scala.js. This way you write Scala code that compiles to JavaScript, and you can then include the compiled JavaScript in your Play application.
The advantages of writing Scala over JavaScript should be pretty obvious (type safety, access to lots of existing Scala libraries). The disadvantages are some delay from the Scala -> JavaScript compilation, and the lack of the seamless integration that Play has with its own templating engine. It's up to you to decide whether the extra work to make the two play together is worth it.
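To give a feel for it, a minimal Scala.js sketch (assuming the sbt-scalajs plugin is already set up in the build; the exported name greet is just an example):

import scala.scalajs.js.annotation.JSExportTopLevel

object ClientCode {
  // After linking, this is callable from plain JavaScript as greet("...").
  @JSExportTopLevel("greet")
  def greet(name: String): Unit =
    println(s"Hello, $name") // println maps to console.log in Scala.js
}

The linked .js output can then be served by Play like any other static asset.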
In Cucumber (the Ruby version) you can easily call steps from other steps, and thus build hierarchical libraries of steps, making it easy to write Gherkin feature specifications in the most generic terms.
However, it is not readily apparent how to do this in Cucumber-JVM, and I have been unable to find documentation for it.
Let me be clear: I am not interested in calling the step implementation function directly, because I don't want to have to know its signature, nor to change the call every time the implementation changes.
Rather, I want to pass an arbitrary string that will go through the regex matcher, automatically find the matching step, and execute it, just as the engine runs all steps.
A simple example of what I would expect the syntax to look like to define the synonym "logout":
When("user logs out") { () =>
d.executeScript("logout();")
}
When("logout") { () =>
Step("user logs out")
}
This functionality is not supported in Cucumber-JVM. (Note that the Cucumber Backgrounder document you link in your question describes using Steps within Steps as "an anti-pattern".)
Essentially, we believe that Cucumber is a collaboration tool and that Gherkin is not a programming language.
You can see a longer discussion of how we arrived at this decision here.
To call steps within step definitions, inherit from cuke4duke.Steps in Java:
import cuke4duke.StepMother;
import cuke4duke.Steps;
import cuke4duke.annotation.I18n.EN.When;

public class CallingSteps extends Steps {
    public CallingSteps(StepMother stepMother) {
        super(stepMother);
    }

    @When("^I call another step$")
    public void iCallAnotherStep() {
        Given("it is magic"); // This will call a step defined somewhere else.
    }
}
Example:
https://github.com/cucumber-attic/cuke4duke/blob/master/examples/java/src/test/java/simple/CallingSteps.java
Note: cuke4duke supports Scala as well.
Calling steps within steps is a terrible anti-pattern that can easily be replaced by something much simpler.
Instead of one step calling another step, have both steps call the same helper method.
If you apply this pattern with rigour, you end up with:
step definitions that are all just single calls to helper methods
a suite of helper methods that collectively provide a test API
The art of elegantly implementing your Cucumber scenarios now becomes a known programming problem, as all your functionality lives directly in code in your programming language rather than in some restrictive construct specific to Cucumber.
You can now
refactor your helper methods to provide cleaner interfaces
use parameters to give methods greater power
use naming to give all your calls greater clarity
use a helper method as an entry point to a suite of extra functionality
use delegation to move functionality out of helper methods and into test service objects
...
Providing this separation can be challenging initially if you are not a programmer or are not experienced in the particular programming language in use. However, once you get past this initial hurdle, the code you can and should produce will be much easier to work with than the tangled mess that inevitably results from step nesting.
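A minimal sketch of the pattern, using the Scala DSL syntax from the question (this assumes the cucumber-scala and Selenium dependencies; older Cucumber releases use cucumber.api.scala instead of io.cucumber.scala, and SessionSteps together with the driver setup are placeholders):

import io.cucumber.scala.{EN, ScalaDsl}
import org.openqa.selenium.JavascriptExecutor
import org.openqa.selenium.firefox.FirefoxDriver

class SessionSteps extends ScalaDsl with EN {

  // Hypothetical driver setup; in a real suite this would be shared or injected.
  private val d: JavascriptExecutor = new FirefoxDriver()

  // The single helper method that owns the actual behaviour.
  private def logout(): Unit = d.executeScript("logout();")

  // Both phrasings are thin wrappers around the same helper; no step calls another step.
  When("user logs out") { () => logout() }
  When("logout") { () => logout() }
}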
In Cucumber, each step is a method. That way, you can call other methods from any step that you want:
#When("^click on \"([^\"]*)\"$")
public void clickOn(String arg1) throws Throwable {
driver.findElement(By.linkText(arg1)).click();
}
#Then("^should see the static elements changing$")
public void shouldSeeTheStaticElementsChanging() throws Throwable {
clickOn();
}
I have a use case where I need to create a class based on user input.
For example, the user input could be: "(Int,fieldname1) : (String,fieldname2) : .. etc"
Then a class has to be created at runtime, as follows:
class Some {
    Int fieldname1
    String fieldname2
    ... and so on
}
Is this something that Scala supports? Any help is really appreciated.
Your scenario doesn't seem to make sense. It's not so much an issue of runtime instantiation (the JVM can certainly do this with reflection). Really, what you're asking is to dynamically generate a class, which is only useful if your code makes use of it later on. But how can your code make use of it later on if you don't know what it looks like? For example, how would your later code know which fields it could reference?
No, not really.
The idea of a class is to define a type that can be checked at compile time. You see, creating it at runtime would somewhat contradict that.
You might want to store the user input in a different way, e.g. a map.
What are you trying to achieve by creating a class at runtime?
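A minimal sketch of that map-based alternative (the field names come from the question; the sample values stand in for whatever the user supplies at runtime):

// Instead of generating a class, keep the user-described record as plain data.
val record: Map[String, Any] = Map(
  "fieldname1" -> 42,       // was: Int fieldname1
  "fieldname2" -> "hello"   // was: String fieldname2
)

println(record("fieldname1")) // 42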
I think this makes sense, as long as you are using your "data model" in a generic manner.
Will this approach work here? Depends.
If your data comes from a file that is read at runtime but is available at compile time, then you're in luck and type safety will be maintained. In fact, you will have two options:
1. Split your project into two:
   In the first run, read the file and write the new source programmatically (as Strings, or better, with Treehugger); see the sketch at the end of this answer.
   In the second run, compile your generated class with the rest of your project and use it normally.
2. If #1 is too "manual", then use Macro Annotations. The idea here is that the main sub-project's compile time follows the macro sub-project's runtime. Therefore, if we provide the main sub-project with an "empty" class, members can be added to it dynamically at compile time using data that the macro sees at runtime. To get started, modify the macro to read from a file in this example.
Else, if your data is truly only knowable at runtime, then @Rob Starling's suggestion may work for you as it did for me. I'll share my attempt if you want to be a guinea pig. For debugging, I've got an App.scala in there that shows how to pass strings to a runtime class generator and access it at runtime with Java reflection, and even define a Scala type alias with it. So the question is: will your new dynamic class serve as a type parameter in Slick, or fail to, as it sometimes does with other libraries?
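For option 1 above, a rough sketch of the first pass under the question's input format; it emits plain source text (Treehugger would give you a typed AST instead), and the class name GeneratedRecord and the output path are placeholders:

import java.nio.file.{Files, Paths}

object GenerateModel {
  def main(args: Array[String]): Unit = {
    // In a real build this spec would be read from the file that is available at compile time.
    val spec = "(Int,fieldname1) : (String,fieldname2)"

    // "(Int,fieldname1)" becomes the parameter "fieldname1: Int".
    val params = spec.split(":").map(_.trim.stripPrefix("(").stripSuffix(")")).map { s =>
      val parts = s.split(",").map(_.trim)
      s"${parts(1)}: ${parts(0)}"
    }

    val source = s"case class GeneratedRecord(${params.mkString(", ")})\n"

    // Placeholder path inside the main sub-project's source tree.
    Files.write(Paths.get("main/src/main/scala/GeneratedRecord.scala"), source.getBytes("UTF-8"))
  }
}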