I use Eclipse with a target platform. The target definition is an XML file listing the P2 repositories with their corresponding features. I know how to inspect and modify the target definition via the Eclipse IDE.
Is it possible to load the target definition via Java code? In other words, is there a way to write a program that loads the (active) target definition?
Once I have access to the target definition, I would like to extract the P2 repositories and run some P2QL queries on them. Unfortunately, I have no idea how to even access the target definition in code...
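For reference, in a running Eclipse instance the active target definition can be reached through the PDE target platform API. The following is a minimal sketch only, assuming a running OSGi context with org.eclipse.pde.core available and that ITargetPlatformService can be obtained from the service registry; the repository URIs of an "InstallableUnit" location sit behind internal API (IUBundleContainer), so the sketch only prints each location's type and serialized XML form.

```java
// Sketch only: assumes a running Eclipse/OSGi context with org.eclipse.pde.core
// available, and that ITargetPlatformService is registered as an OSGi service.
import org.eclipse.core.runtime.CoreException;
import org.eclipse.pde.core.target.ITargetDefinition;
import org.eclipse.pde.core.target.ITargetHandle;
import org.eclipse.pde.core.target.ITargetLocation;
import org.eclipse.pde.core.target.ITargetPlatformService;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.ServiceReference;

public class TargetDefinitionReader {

    public static void printTargetLocations() throws CoreException {
        BundleContext ctx =
                FrameworkUtil.getBundle(TargetDefinitionReader.class).getBundleContext();
        ServiceReference<ITargetPlatformService> ref =
                ctx.getServiceReference(ITargetPlatformService.class);
        ITargetPlatformService service = ctx.getService(ref);

        // Handle of the target definition currently active in the workspace.
        ITargetHandle handle = service.getWorkspaceTargetHandle();
        ITargetDefinition target = handle.getTargetDefinition();

        ITargetLocation[] locations = target.getTargetLocations(); // may be null
        if (locations == null) {
            return;
        }
        for (ITargetLocation location : locations) {
            // "InstallableUnit" locations are the p2/software-site entries. Their
            // repository URIs are only exposed via internal API, so this sketch
            // prints each location's type and serialized XML instead.
            System.out.println(location.getType() + ": " + location.serialize());
        }
    }
}
```

The serialized XML of an "InstallableUnit" location contains the repository location entries; once you have those URIs, loading them with p2's IMetadataRepositoryManager.loadRepository(...) should give you something to run QueryUtil-based queries against. Treat the service-lookup details above as assumptions; depending on the PDE version, the target platform API may only be reachable through internal helpers.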
I am using the mybatis-generator in a Maven project to generate the Java files for a few tables. At the end of the generation, I would like to generate a few non-Java files, like properties files and resources. However, the default generator allows me to generate only XML and Java files. Is there any way to also get the generator to create SQL files, SPI definitions, and property files, for example?
Looking inside the generator, it seems that the generated Java and XML files go through some further processing (formatting et al.). Even if I write a custom plugin, I can only generate an XML or a Java file, not a properties or SQL file. Even if I could, I could not get the process to finish, because the subsequent steps would fail.
Currently, I am working around this by creating my own files and writing them through a custom plugin. However, during the plugin execution, the folder target/generated-sources/mybatis-generator has not been created yet, so I cannot assume that location already exists. On the other hand, if I go ahead and create the folder and its internal META-INF/services folder, I am not sure whether it will be overwritten at a later stage. In addition, my plugin does not (by virtue of the way the generator instantiates plugins) have access to the project root folder, so that is not an option either.
I don't have access to the ShellCallback either, which means that postponing the file creation to a well-defined point in the build process is also not possible.
So how do I go about creating the service definitions and the additional resource files?
My last resort is to hard-code the project folder or to pass it in through a property, and that is what is rescuing me now. But the generated files are, of course, detected by my Git client, and I have to clean them up as well, even though they are generated artifacts.
Hints please?
Thanks in advance.
Rahul
The generator currently supports Java, Kotlin, and XML file generation. There is an open feature request to support other file types in plugins. You can follow it here: https://github.com/mybatis/generator/issues/752
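In the meantime, the manual-writing workaround described in the question can live inside a plugin: do the file I/O yourself in one of the plugin callbacks and return nothing, so the generator's post-processing never sees the file. A rough sketch follows; the property name resourceDir (one way to "pump the project folder through a property") and the file contents are made up.

```java
// Illustrative sketch only: the generator does not support this natively, so the
// plugin performs its own file I/O inside a callback and returns nothing for the
// generator to post-process. The "resourceDir" property name is hypothetical.
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import org.mybatis.generator.api.GeneratedJavaFile;
import org.mybatis.generator.api.IntrospectedTable;
import org.mybatis.generator.api.PluginAdapter;

public class ResourceFilePlugin extends PluginAdapter {

    @Override
    public boolean validate(List<String> warnings) {
        // Fail fast if the output directory was not configured.
        if (properties.getProperty("resourceDir") == null) {
            warnings.add("ResourceFilePlugin requires the 'resourceDir' property");
            return false;
        }
        return true;
    }

    @Override
    public List<GeneratedJavaFile> contextGenerateAdditionalJavaFiles(IntrospectedTable table) {
        Path dir = Paths.get(properties.getProperty("resourceDir"));
        String name = table.getFullyQualifiedTable().getDomainObjectName();
        try {
            // Create the folder ourselves: target/generated-sources/mybatis-generator
            // may not exist yet at this point in the generation run.
            Files.createDirectories(dir);
            Files.write(dir.resolve(name + ".properties"),
                    ("table=" + table.getFullyQualifiedTable()).getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return null; // nothing for the generator's formatting/merging steps to touch
    }
}
```

The plugin would be registered in generatorConfig.xml with a plugin element carrying a property named resourceDir, which keeps the project folder out of the plugin code itself.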
I am interested in understanding whether Bazel can handle "two-stage builds", where dependencies are discovered based on file contents and dependencies must be compiled before the code that depends on them (unlike C/C++, where dependencies are mostly header files that are not separately compiled). Concretely, I am building the Coq language, which is similar to OCaml.
My intuition for creating a build plan would use an (existing) tool (called coqdep) that reads a .v file and returns a list of all of its direct dependencies. Here's the algorithm that I have in mind:
invoke coqdep on the target file and (transitively) on each of its dependent files,
once the transitive dependencies for a target are computed, add a rule that builds the .vo from the .v and includes the transitive dependencies.
Ideally, the calls to coqdep (in step 1) would be cached between builds, so they would only need to be re-run when a file changes, and the transitive closure of the dependency information would also be cached (a rough sketch of both steps appears below).
Is it possible to implement this in Bazel? Are there any pointers for setting up builds for languages like this? Naively, it seems to be a two-stage build, and I'm not sure how this fits into Bazel's compilation model. When I looked at the rules for OCaml, it seemed like they were relying on ocamlbuild to satisfy the build-order and dependency requirements rather than doing it "natively" in Bazel.
Thanks for any pointers or insights.
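To make the intended plan concrete, here is a rough, Bazel-agnostic sketch of steps 1 and 2: a scanner that shells out to coqdep for direct dependencies, memoizes the per-file results (the part worth caching between builds, e.g. keyed by file hash), and takes the transitive closure. The coqdep output parsing is approximate and should be treated as an assumption.

```java
// Illustrative only: shells out to coqdep and memoizes per-file results.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CoqDepScanner {

    private final Map<Path, Set<Path>> directDepsCache = new HashMap<>();

    /** Step 1: direct dependencies of one .v file, as reported by coqdep. */
    public Set<Path> directDeps(Path vFile) throws IOException, InterruptedException {
        Set<Path> cached = directDepsCache.get(vFile);
        if (cached != null) {
            return cached;
        }
        Process p = new ProcessBuilder("coqdep", vFile.toString()).start();
        String out = new String(p.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
        p.waitFor();
        // coqdep prints (roughly) "foo.vo ...: foo.v bar.vo baz.vo"; the .vo
        // entries on the right-hand side are the direct dependencies.
        Set<Path> deps = Stream.of(out.substring(out.indexOf(':') + 1).trim().split("\\s+"))
                .filter(s -> s.endsWith(".vo"))
                .map(s -> Path.of(s.replaceAll("\\.vo$", ".v")))
                .collect(Collectors.toCollection(LinkedHashSet::new));
        deps.remove(vFile);
        directDepsCache.put(vFile, deps);
        return deps;
    }

    /** Step 2: transitive closure over the (memoized) direct dependencies. */
    public Set<Path> transitiveDeps(Path vFile) throws IOException, InterruptedException {
        Set<Path> closure = new LinkedHashSet<>();
        Deque<Path> work = new ArrayDeque<>(directDeps(vFile));
        while (!work.isEmpty()) {
            Path next = work.pop();
            if (closure.add(next)) {
                work.addAll(directDeps(next));
            }
        }
        return closure;
    }
}
```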
(don't have enough rep to comment yet, so this is an answer)
Option 2 of Toraxis' answer is probably the most canonical approach.
gazelle is an example of this for Golang, which is in the same boat: dependencies for Golang files are determined outside a Bazel context by reading the import statements of source files. gazelle is a tool that writes/rewrites Golang rules in BUILD files according to the imports in source files of the Bazel workspace. Similar tools could be created for other languages that follow this pattern.
Toraxis' answer notes that the generated BUILD file will be in the output folder, not in the source folder, so you also have to provide an executable that copies the files back into the source folder.
Note that binaries run via bazel run have the environment variable BUILD_WORKSPACE_DIRECTORY set to the root of the Bazel workspace (see the docs) so if your tool uses this environment variable, it could edit the BUILD files in-place rather than generating and copying back.
(In fact, the generating-and-copying-back strategy would likely not be feasible, because purely-generated files would contain only Coq rules, and not any other types of rules. To generate a BUILD file with Coq rules from one with other types of rules, one would have to add the BUILD files themselves as dependencies - which would create quite the mess!)
I'm looking into similar questions because I want to build ReasonML with Bazel.
Bazel computes the dependencies between Bazel targets based on the BUILD files in your repository, without accessing your source files. The only interaction with the file system you can perform during this analysis phase is listing directory contents with glob in your rule invocations.
Currently, I see four options for getting fine-grained incremental builds with Bazel:
Spell out the fine-grained dependencies in hand-written BUILD files.
Use a tool for generating the BUILD files. You cannot directly wrap that tool in a Bazel rule to have it run during bazel build because the generated BUILD file would be in the output folder, not in the source folder. But you can run rules that call coqdep during the build, and provide an executable that edits the BUILD file in the source folder based on the (cacheable) result of the coqdep calls. Since you can read both the source and the output folder during the build, you could even print a message to the user if they have to run the executable again. Anyway, the full build process would be bazel run //tools/update-coq-build-files && bazel build to reach a fixed point.
Have coarse-grained dependencies in the BUILD files, but use persistent workers to incrementally rebuild individual targets.
Have coarse-grained dependencies in the BUILD files, but generate a separate action for each target file and use the unused_inputs_list argument of ctx.actions.run to communicate to Bazel which dependencies were actually unused.
I'm not really sure whether 3 and 4 would actually work or how much effort would be involved, though.
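For option 2, such an update executable might look roughly like the sketch below. It relies on BUILD_WORKSPACE_DIRECTORY (set by bazel run, as noted in the other answer) to edit a BUILD file in the source tree in place. The rule name coq_library is invented, and the dependency scan reuses the CoqDepScanner sketched after the question above.

```java
// Hypothetical sketch of `bazel run //tools:update-coq-build-files`: locates the
// workspace root via BUILD_WORKSPACE_DIRECTORY and rewrites a BUILD file in place.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class UpdateCoqBuildFiles {

    public static void main(String[] args) throws IOException, InterruptedException {
        // Set by `bazel run`, so the tool can edit source files rather than outputs.
        Path root = Paths.get(System.getenv("BUILD_WORKSPACE_DIRECTORY"));
        CoqDepScanner scanner = new CoqDepScanner();

        List<Path> vFiles;
        try (Stream<Path> walk = Files.walk(root)) {
            vFiles = walk.filter(p -> p.toString().endsWith(".v")).collect(Collectors.toList());
        }

        StringBuilder build =
                new StringBuilder("# generated; rerun update-coq-build-files to refresh\n");
        for (Path v : vFiles) {
            String name = root.relativize(v).toString().replaceAll("\\.v$", "");
            String deps = scanner.directDeps(v).stream()
                    .map(d -> "\":" + root.relativize(d).toString().replaceAll("\\.v$", "") + "\"")
                    .collect(Collectors.joining(", "));
            build.append(String.format(
                    "coq_library(name = \"%s\", src = \"%s\", deps = [%s])%n",
                    name, root.relativize(v), deps));
        }
        Files.write(root.resolve("BUILD"), build.toString().getBytes(StandardCharsets.UTF_8));
    }
}
```

Listing only the direct dependencies in deps should suffice, since Bazel itself computes the transitive closure across targets during analysis.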
I'm working on a project developing an Eclipse-based application. Running a JUnit plug-in test requires its run configuration to have a bunch of parameters set. This means that if I want to run a single test class or method, as far as I can tell, I have to create a new configuration or edit one that I reuse. More annoyingly, I can't use the convenience of Alt+Shift+X, P.
Is there a way to tell Eclipse that a bunch of parameters are the defaults for an implicitly created run configuration of a given type, so that it uses them when automagically creating one?
If you are using a custom target platform (which you should use anyway), you can specify Program arguments and VM arguments on the Environment tab of the target platform editor.
They will be used as default values for PDE launch configurations.
I'm trying to import a C project into Eclipse (CDT) that is managed by waf. There is a list of predefines generated by waf (when running ./waf configure). That list has to be imported into Project->Properties->C/C++ General/Paths and Symbols/Symbols/GNU C so that the indexer knows about them and does not report errors. That list (when using the GUI) is stored in the .cproject file. I created a Build Target that runs ./waf configure and stores the list in a file named DEFINES.txt. How do I automatically update the list in .cproject with the values from DEFINES.txt after running the Build Target?
I thought about the following solutions and their follow-up problems:
Solution: Writing a plug-in.
Problem: What is the appropriate extension point?
Solution: Writing an external program that calls ./waf configure, reads DEFINES.txt, and writes the list to .cproject. That program replaces the old Build Target.
Problem: How safe is this? Can I change the .cproject file with an external program without causing any problems?
Solution: Implementing the .cproject-updating algorithm in the wscript file.
Problem: This is not a solution for me, because the project is also used by others who do not use Eclipse as their IDE, so the modified wscript would cause errors when the other developers build the project.
Does anybody have better ideas or some advice?
Here is how to go about it:
Writing a plug-in: What I recommend is writing an extension for the LanguageSettingsProvider extension point. The CDT FAQ has some more info, but in summary, the extension point description says:
This extension point is used to contribute a new Language Settings Provider. A Language Settings Provider is used to get additions to compiler options such as include paths (-I) or preprocessor defines (-D) and others into the project model.
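A minimal provider might look like the following sketch. The class name, and the assumption that DEFINES.txt holds one NAME=VALUE pair per line, are illustrative; the provider would be registered against the org.eclipse.cdt.core.LanguageSettingsProvider extension point.

```java
// Sketch of a Language Settings Provider that feeds the waf-generated defines
// to the CDT indexer. DEFINES.txt format (NAME=VALUE per line) is an assumption.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import org.eclipse.cdt.core.language.settings.providers.LanguageSettingsBaseProvider;
import org.eclipse.cdt.core.settings.model.ICConfigurationDescription;
import org.eclipse.cdt.core.settings.model.ICLanguageSettingEntry;
import org.eclipse.cdt.core.settings.model.ICSettingEntry;
import org.eclipse.cdt.core.settings.model.util.CDataUtil;
import org.eclipse.core.resources.IResource;

public class WafDefinesProvider extends LanguageSettingsBaseProvider {

    @Override
    public List<ICLanguageSettingEntry> getSettingEntries(
            ICConfigurationDescription cfgDescription, IResource rc, String languageId) {
        List<ICLanguageSettingEntry> entries = new ArrayList<>();
        if (rc == null || rc.getProject() == null || rc.getProject().getLocation() == null) {
            return entries;
        }
        // DEFINES.txt is what the waf-based Build Target wrote into the project root.
        Path defines = rc.getProject().getLocation().append("DEFINES.txt").toFile().toPath();
        try {
            for (String line : Files.readAllLines(defines)) {
                if (line.isBlank()) {
                    continue;
                }
                String[] kv = line.split("=", 2);
                entries.add(CDataUtil.createCMacroEntry(
                        kv[0].trim(), kv.length > 1 ? kv[1].trim() : null,
                        ICSettingEntry.READONLY));
            }
        } catch (IOException e) {
            // No DEFINES.txt yet (./waf configure not run): contribute nothing.
        }
        return entries;
    }
}
```

This keeps .cproject itself untouched: the defines are contributed to the project model at query time instead of being written into the file, which sidesteps the question of whether it is safe to edit .cproject from an external program.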
CMake has an option to generate .cproject as part of its configure stage, so you could do something similar. See the CMake Wiki for inspiration, but the summary is that you don't store .cproject/.project in source control and instead have CMake (or waf in your case) generate the IDE-specific files.
You could also just pick up the build settings using the build output parser and ignore DEFINES.txt altogether. That requires running the build once from within Eclipse so that CDT sees all the commands, and it requires the commands to be parseable in the build output.
I am trying to compile a (Scala) macro in Eclipse 3.7.2 with the Scala IDE Plugin available for Scala 2.10.0-M3, but I am experiencing the following error:
"macro implementation not found: XXXXX (the most common reason for that is that you cannot use macro implementations in the same compilation run that defines them) if you do need to define macro implementations along with the rest of your program, consider two-phase compilation with -Xmacro-fallback-classpath in the second phase pointing to the output of the first phase"
I already know how to avoid it with a simple editor and a terminal (just following the error message), but is it possible to achieve the two-phase compilation in Eclipse?
How to create a macro project to link to an existing project:
Create a Scala project named, for example, ProjectMacros, and put a file containing macros in it, named for example Macros.scala. This project should compile without problems, because it contains only macros.
Right-click on the existing scala project, then "Properties". The Properties window opens.
In the Java Build Path section:
Under the tab Projects, add ProjectMacros.
Under the tab Libraries, click Add Class Folder, and select the ProjectMacros/bin directory.
In the Project References section, check ProjectMacros.
Now, after adding an import like import Macros._ in the existing project, you can use the macro functions and annotations.
Well, separating macro implementation and macro invocation into two different projects (and playing with project references) seems to solve the issue. Anyway, the Scala IDE plugin's macro support has notably improved in its version for Scala 2.10-M4, so I recommend updating to it.
You could probably use Ant for building, but since you say that you already achieved this with a terminal, I think it would be easier to create a script and run it using a custom builder (go to project properties, click Builders -> New... -> Program, and then set it up to run your script).