I have some data resources that I would like Eclipse not to copy on every build. I put them on the build path so they get copied, but I don't want that to happen every time, as it's time-consuming.
Any idea on a better strategy?
I don't even want them to be deleted when clean is invoked. In Visual Studio one can mark a resource file as "copy once"; is there such a thing in Eclipse?
Thank you.
Having Eclipse not copy a file that you have modified, even after a clean, would be a nightmare: you'd have to remember to copy it manually each time it's modified.
And Eclipse only copies files which have been modified when building incrementally.
If it's so slow, it probably means you have too many such files, and they should perhaps be packaged in a JAR on the build path (a sketch follows below).
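For instance, a minimal sketch, assuming the rarely-changing data files live under a res/ directory (the directory and JAR names are assumptions):

    # Package the rarely-changing resources once; then add resources.jar to the build path.
    jar cf resources.jar -C res .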
As you ask for a strategy, then ...
Eclipse is not a build tool; it's an IDE. So you had better not try to set up build logic based on it. Use Eclipse for coding, and for performing specific tasks during the build, use a build tool like Maven or Ant.
I have a collection of C++ projects that use managed makefiles inside Eclipse. I build them all, and run an existing deploy script to place all the compiled results into a target directory on the system.
I'd like a way to automate the build process without having to open the IDE, but due to external constraints from the project owner, I have to use managed make.
Once I kick off a build for the first time inside the Eclipse IDE, the Debug / Release directories are created and the makefiles are autogenerated. Once they are all present, I can call make from the command line across all the projects, run an overall deploy script, and so on. This works up until the point where code changes cause the makefiles to differ (adding/removing source files, or changing build parameters). The big problem is that this requires me to open Eclipse and manually trigger regeneration of the makefiles.
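For reference, the command-line half of that workflow might look like the following once the makefiles exist (the project names and the deploy script name are placeholders, not from the actual setup):

    #!/bin/sh
    # Build each project's generated Debug makefile, then run the deploy script.
    for proj in projectA projectB projectC; do
        make -C "$proj/Debug" all || exit 1
    done
    ./deploy.sh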
I see two options:
Check the generated makefiles into revision control (even though they're generated, I know), and commit them whenever they differ from the previously committed content after regeneration.
Find a way either to open Eclipse in a scriptable way and trigger makefile generation, or to generate the makefiles from the .project files by some alternate means.
Option 1 is obviously subject to problems if I or other developers don't remember to commit the files, or if we update the Debug makefile but not the Release makefile, etc. It also makes me feel dirty to commit a generated file.
Option 2 seems much better, if it's feasible.
Does a method to do option 2 exist? Are there other options I haven't considered?
I recognize the similarity with a continuous integration server, but this is not the same use case. Essentially I'm building a package that gets installed on a development system as a testing release of a system library. So I'm operating as a developer; I just want to reduce variance in the deploy process itself.
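One avenue for option 2, offered as an assumption to verify against your CDT version rather than a confirmed solution: recent CDT releases ship a headless build application that can build managed-make projects (regenerating their makefiles) without opening the IDE. A sketch of the invocation, with the workspace path as a placeholder and flags that vary by CDT version:

    eclipse -nosplash \
            -application org.eclipse.cdt.managedbuilder.core.headlessbuild \
            -data /path/to/workspace \
            -build all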
From Gradle's documentation:
The scripts generated by this task are intended to be committed to
your version control system. This task also generates a small
gradle-wrapper.jar bootstrap JAR file and properties file which should
also be committed to your VCS. The scripts delegate to this JAR.
From: What should NOT be under source control?
I think generated files should not be in the VCS.
When are gradlew and gradle/wrapper/gradle-wrapper.jar needed?
Why not store a gradle version in the build.gradle file?
Because the whole point of the Gradle wrapper is to be able, without having ever installed Gradle, and without even knowing how it works, where to download it from, which version, to clone the project from the VCS, to execute the gradlew script it contains, and to build the project without any additional step.
If all you had was a Gradle version number in a build.gradle file, you would need a README explaining to everyone that Gradle version X must be downloaded from URL Y and installed, and you would have to do it every time the version is incremented.
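For illustration, the zero-setup flow this answer describes is just (the repository URL is hypothetical):

    git clone https://example.com/project.git
    cd project
    ./gradlew build    # downloads the pinned Gradle version on first run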
Because the whole point of the Gradle wrapper is to be able, without having ever installed Gradle
The same argument goes for the JDK; do you want to commit that as well? Do you also commit all your dependency libraries?
Dependencies should be upgraded continuously as new versions are released, to get security and other bug fixes, and because if you get too far behind, catching up again can be very time-consuming.
If the Gradle wrapper is incremented for every new release, and it is committed, the repo will grow very large. The problem is obvious when working with a distributed VCS, where a clone downloads every version of everything.
and without even knowing how it works
Create a build script that downloads the wrapper and uses it to build; a sketch follows below. Not everyone needs to know how the script works; they only need to agree that the project is built by executing it.
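A minimal sketch of such a script, assuming it is named build.sh and that some Gradle installation is on the PATH for the first step (both assumptions, not taken from this answer):

    #!/bin/sh
    # build.sh -- (re)generate the wrapper, then build with it.
    gradle wrapper --gradle-version 4.10   # the version number is only an example
    ./gradlew build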
where to download it from, which version
task wrapper(type: Wrapper) {
gradleVersion = 'X.X'
}
for Gradle version >= 5:
wrapper {
gradleVersion = 'X.X'
}
and then
gradle wrapper
to download the correct version.
to clone the project from the VCS, to execute the gradlew script it contains, and to build the project without any additional step.
Solved by the steps above. Downloading the Gradle wrapper is no different from downloading any other dependency. The script could be smart enough to check the current Gradle wrapper and only download it if there is a new version.
If a developer has never used Gradle before, and perhaps doesn't know the project is built with Gradle, then running a build.sh is more obvious than running gradlew build.
If all you had was a Gradle version number in a build.gradle file, you would need a README explaining to everyone that Gradle version X must be downloaded from URL Y and installed,
No, you would not need a README. You could have one, but we are developers and we should automate as much as possible. Creating a script is better.
and you would have to do it every time the version is incremented.
If the developers agree that the correct process is to:
Clone repo
Run build script
Then upgrading to the latest Gradle wrapper is no problem. If the version has been incremented since the last run, the script can download the new version; a sketch follows below.
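One way that version check could look, with the wanted version and the properties file path as assumptions for illustration:

    #!/bin/sh
    # Regenerate the wrapper only when the pinned version has changed.
    WANTED=4.10
    if ! grep -q "gradle-$WANTED-" gradle/wrapper/gradle-wrapper.properties 2>/dev/null; then
        gradle wrapper --gradle-version "$WANTED"
    fi
    ./gradlew build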
I would like to recommend a simple approach.
In your project's README, document that an installation step is required, namely:
gradle wrapper --gradle-version 3.3
This works with Gradle 2.4 or higher, and it creates a wrapper without requiring a dedicated task to be added to build.gradle.
With this option, ignore (do not check in) these files/folders for version control:
./gradle
gradlew
gradlew.bat
The key benefit is that you don't have to check a downloaded file into source control. It costs one extra step on installation, and I think it is worth it.
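Assuming the VCS is Git, the ignore list above would look like this in .gitignore:

    # generated Gradle wrapper files -- recreated by `gradle wrapper`
    gradle/
    gradlew
    gradlew.bat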
According to the Gradle docs, adding gradle-wrapper.jar to the VCS is expected, as making the Gradle Wrapper available to developers is part of the Gradle approach:
To make the Wrapper files available to other developers and execution environments you’ll need to check them into version control. All Wrapper files including the JAR file are very small in size. Adding the JAR file to version control is expected. Some organizations do not allow projects to submit binary files to version control. At the moment there are no alternative options to the approach.
What is the "project"?
Maybe there is a technical definition of this idiom that excludes build scripts. But if we accept that definition, then we must say your "project" is not all of the things that you need to version!
But if we say "your project" is everything you have done, then we can say you must include it, and only it, in the VCS.
This is very theoretical and maybe not practical for our development work. So let's change it to: "your project is every file (or folder) you need to edit directly".
"Directly" means "not indirectly", and "indirectly" means editing another file whose effect is then reflected in this file.
So we arrive at the same thing the OP said (and that is said here):
I think generated files should not be in the VCS.
Yes. Because you haven't created them. So they are not part of "your project" according to the second definition.
So what is the verdict for these files?
build.gradle: Yes. We need to edit it. Our work should be versioned.
Note: it makes no difference where you edit it, whether in your text editor or in the Project Structure GUI; either way, you are doing it directly!
gradle-wrapper.properties: Yes. We need to at least pin the Gradle version in this file.
gradle-wrapper.jar and gradlew[.bat]: I have never created or edited them in any of my development work, to this day! So the answer is "No". If you have done so, the answer is "Yes" for you on that project (and for the file you edited).
The important note about the last case is that a user who clones your repo needs to execute this command in the repo's <root-directory> to regenerate the wrapper files:
> gradle wrapper --gradle-version=$v --distribution-type=$distType
$v and $distType are determined from gradle-wrapper.properties:
distributionUrl=https\://services.gradle.org/distributions/gradle-{$v}-{$distType}.zip
See https://gradle.org/install/ for more information.
The gradle executable is bin/gradle[.bat] in a local distribution. The local distribution does not have to be the same as the one pinned in the repo: once the wrapper files are created, gradlew[.bat] can download the pinned Gradle distribution automatically (if it doesn't exist locally). The user should then probably regenerate the wrapper files using the new gradle executable (in the downloaded distribution), following the instructions above.
Note: the instructions above assume the user already has at least one Gradle distribution locally (e.g. ~/.gradle/wrapper/dists/gradle-4.10-bin/bg6py687nqv2mbe6e1hdtk57h/gradle-4.10), which covers almost all real cases. But what happens if the user doesn't have any distribution yet?
They can download it manually using the URL in the .properties file, but if they don't place it in the path the wrapper expects, the wrapper will download it again! The expected path is completely predictable but is beyond the scope of this answer (see here for the most complex part).
There are also some easier (but dirty) ways. For example, they can copy the wrapper files (except the .properties file) from any other local/remote repository into their own and then run gradlew there. It will automatically download the suitable distribution.
Old question, fresh answer: if you don't upgrade Gradle often (most of us don't), it's better to commit the wrapper to the VCS. The main reason, for me, is to increase build speed on the CI server. Nowadays most projects are built and installed by CI servers, on a different server instance every time.
If you don't commit it, the CI server will download the wrapper JAR for every build, which significantly increases build time. There are other ways to handle this problem, but I find this one the easiest to maintain.
I am looking to make our deployments here not suck, and I need some help. If you can help me with these few things, I owe you a beer.
Right now, whenever I make a change that's not to the JSPs, I need to do a clean (including Tomcat), otherwise my change doesn't take effect. This is really annoying.
Any clues as to what I can change to make it work?
My current build is really simple: just the regular old javac, war, deploy.
One thing that isn't done: there is no build dir. The project itself contains a WEB-INF, the javac is done in place, and then the war step excludes all the .java sources and wars up the project.
edit:
I am looking to fix this problem with the least amount of effort; switching to Maven and learning how to use it might solve this problem, but it would create another one ;)
You've already identified some of the weaknesses in your current build.
The easiest way that I can suggest to clean it up would be to start with the directory structure.
I highly recommend using the Maven directory structure; I would go further and suggest using Maven as a build tool instead of Ant, though for some folks that remains open to debate.
The Maven directory structure has been well thought out. I really like working on projects that use it, because they follow a convention that saves me a lot of time: from previous experience I know where to find the application components:
java source
unit test source
resources etc.
Also, by following the convention, the Maven plugins work with less configuration required.
Another useful advantage I get from working on Maven-based projects is good code metrics to measure the health of the application. Various reports are available as Maven plugins, which will give you new insight into your codebase, including:
Checkstyle
PMD
FindBugs
and more.
Created a build directory where everything gets copied before the build
Added some flags to avoid copying things that rarely change, like images (and to avoid removing them on clean)
Started using the Ant reload task after deploying code (sketch below)
Now I don't need to restart Tomcat on every build, and the build takes much less time.
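For reference, a sketch of what that reload step can look like using Tomcat's Catalina Ant tasks; the manager URL, credentials, and context path are assumptions, and the manager URL differs across Tomcat versions:

    <project name="hot-reload" default="reload">
      <!-- Tomcat ships these manager tasks in catalina-ant.jar. -->
      <taskdef name="reload" classname="org.apache.catalina.ant.ReloadTask"
               classpath="${tomcat.home}/lib/catalina-ant.jar"/>
      <target name="reload">
        <!-- Ask the Tomcat manager to reload the deployed context in place. -->
        <reload url="http://localhost:8080/manager/text"
                username="admin" password="secret" path="/myapp"/>
      </target>
    </project>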
I have a large codebase added to an Eclipse project, and I have set up one External Tool providing the path where the class for a given Java file is to be kept, plus the classpath. The build folder is somewhere else.
Now, when I need to compile only one file, Eclipse starts building the whole codebase (>100 MB of Java files); it grinds my system down and I have to wait for the whole compilation to finish.
Can a single Java file be compiled without building the whole codebase?
Any pointers would be helpful.
I think you would benefit greatly from a build tool such as Ant or Maven. You can have a target that compiles only one class without building the whole codebase (a sketch follows below), or performs any other related task.
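For illustration, such a target might look like the following in an Ant build.xml; the file, directory, and property names are assumptions:

    <!-- Compile a single source file against the existing classpath. -->
    <target name="compile-one">
      <javac srcdir="src" destdir="build/classes"
             includes="com/example/MyClass.java"
             classpath="${project.classpath}"
             includeantruntime="false"/>
    </target>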
Have you built the project at least once?
Eclipse builds incrementally using the saved state from the previous build. So if you build your project once, the subsequent builds would only pick up the changes made "after" the previous build.
I've set up an external tool (SableCC) in Eclipse (3.4) that generates a bunch of classes in the current project. I need to run this tool and regenerate these classes fairly frequently. This means that every time I want to run SableCC, I have to manually delete the packages/classes that it creates in order to avoid conflicts between the old and new generated classes. Is there some easy way to automate this, from within Eclipse or otherwise?
I'm not sure I understand your point correctly; I suppose you need to delete the old classes before running SableCC because some of them might not be created again in the new run.
It is probably best to write a short Ant build.xml with a target that first removes the classes (the Ant delete task) and then runs SableCC (the Ant exec task); a sketch follows below. It is also possible to configure Eclipse so that it refreshes the workspace after Ant finishes.
Put the build.xml anywhere in the project, right-click it, and choose Run As / Ant Build.
Just for the sake of clean style, you could then call SableCC through its Ant task (implemented by org.sablecc.ant.taskdef) instead of running it externally in a new process.
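A sketch of such a build.xml, with the generated-source directory, the SableCC JAR location, and the grammar file name all as assumptions:

    <project name="regen" default="sablecc">
      <!-- Remove everything SableCC generated last time. -->
      <target name="clean-generated">
        <delete dir="src/generated"/>
        <mkdir dir="src/generated"/>
      </target>
      <!-- Regenerate the classes from the grammar. -->
      <target name="sablecc" depends="clean-generated">
        <exec executable="java" failonerror="true">
          <arg line="-jar sablecc.jar -d src/generated grammar.sablecc"/>
        </exec>
      </target>
    </project>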
You can tell Eclipse to refresh the workspace (or parts of it) after an external tool has been run. This should force Eclipse to detect any new/deleted classes.
JesperE is referring to the option Refresh -> Refresh resources on completion in your External Tools configuration for running SableCC.