Phing build.xml in SVN and local build

I am developing in a Windows environment and my production server runs Linux. My goal was to update the production version from my local machine, but I ran into problems with Windows and SSH.
So now my backup plan is to just do the build locally on the server, and that is OK. It also lets me trigger the build easily via SSH from different workstations.
What I am now wondering about is my build.xml file, which is kept with my project in SVN. Do I really need to do an initial export just to get the build.xml that then performs the actual build? Or should I separate build.xml from the app? How should this kind of local build be done?

I would keep the build file with the app for portability, future-proofing, and versioning reasons. I would also keep the svn export step; it ensures a clean and consistent state to run your build from.
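A minimal sketch of that flow (the repository URL, paths, and target name are placeholders, not from the question):

# export a clean copy of the project, then run its own build file
svn export http://svn.example.com/myapp/trunk /tmp/myapp-build
phing -f /tmp/myapp-build/build.xml deploy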

Scala IDE and classpath

I have a Scala project I share via Git between two (Windows) machines. I have them set up using SBT and sbt-eclipse so I can edit and test within Eclipse, or build and test from the command line.
Unfortunately, my user name (and therefore the user profile directory) is different on the two machines. This means that when SBT fetches dependencies, it puts them in different base directories on the two platforms. This wouldn't be a problem, except that the full pathname is hardcoded into the Eclipse .classpath file by sbt-eclipse. As a result, I have to rerun the 'eclipse' task whenever I do a pull on my 'current' machine.
This must be even worse for others who do this kind of thing as a team. How is this normally handled? I would prefer to do a pull on whatever machine, even from within Eclipse, and get started right away.
I would strongly recommend removing the sbt-eclipse-generated files (and all other generated files) from Git. Each machine will have its own .classpath file, generated on that machine for that machine, and it can be regenerated whenever you want or need. Your build.sbt and project files should be in Git, so when you pull onto each machine, it will have the updated config, and you only need to run sbt eclipse when a dependency changes.
Really, you should always avoid having generated files in source control. Keep only the important stuff in your Git project, and generate the rest as necessary.
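For instance, a .gitignore along these lines (entries are typical for sbt-eclipse projects, not taken from the answer; adjust to your layout):

.classpath
.project
.settings/
target/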

Why should the Gradle Wrapper be committed to VCS?

From Gradle's documentation:
The scripts generated by this task are intended to be committed to
your version control system. This task also generates a small
gradle-wrapper.jar bootstrap JAR file and properties file which should
also be committed to your VCS. The scripts delegate to this JAR.
From: What should NOT be under source control?
I think generated files should not be in the VCS.
When are gradlew and gradle/gradle-wrapper.jar needed?
Why not store a gradle version in the build.gradle file?
Because the whole point of the Gradle wrapper is to be able, without having ever installed Gradle, and without even knowing how it works, where to download it from, which version, to clone the project from the VCS, to execute the gradlew script it contains, and to build the project without any additional step.
If all you had was a Gradle version number in a build.gradle file, you would need a README explaining to everyone that Gradle version X must be downloaded from URL Y and installed, and you would have to do it every time the version is incremented.
Because the whole point of the Gradle wrapper is to be able, without having ever installed Gradle
The same argument goes for the JDK; do you want to commit that as well? Do you also commit all your dependency libraries?
Dependencies should be upgraded continuously as new versions are released, to get security and other bug fixes, and because if you fall too far behind, catching up again can be a very time-consuming task.
If the Gradle wrapper is incremented for every new release and committed, the repo will grow very large. The problem is obvious when working with a distributed VCS, where a clone downloads every version of everything.
and without even knowing how it works
Create a build script that downloads the wrapper and uses it to build. Not everyone needs to know how the script works; they only need to agree that the project is built by executing it.
where to download it from, which version
task wrapper(type: Wrapper) {
    gradleVersion = 'X.X'
}
for Gradle version >= 5:
wrapper {
    gradleVersion = 'X.X'
}
and then
gradle wrapper
to download the correct version.
to clone the project from the VCS, to execute the gradlew script it contains, and to build the project without any additional step.
Solved by the steps above. Downloading the Gradle wrapper is no different from downloading any other dependency. The script could be smart enough to check for a current Gradle wrapper and only download it when there is a new version.
If the developer has never used Gradle before, and maybe doesn't even know the project is built with Gradle, then it is more obvious to run a build.sh than to run gradlew build.
If all you had was a Gradle version number in a build.gradle file, you would need a README explaining to everyone that Gradle version X must be downloaded from URL Y and installed,
No, you would not need a README. You could have one, but we are developers and we should automate as much as possible. Creating a script is better.
and you would have to do it every time the version is incremented.
If the developers agree that the correct process is to:
Clone repo
Run build script
Then upgrading to the latest Gradle wrapper is no problem: if the version has been incremented since the last run, the script downloads the new version, as sketched below.
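A minimal sketch of such a build.sh (the pinned version number is hypothetical, and it assumes a locally installed gradle is available for the first bootstrap only):

#!/bin/sh
# build.sh: bootstrap the Gradle wrapper if it is missing, then build
WRAPPER_VERSION=5.6   # hypothetical pinned version
if [ ! -x ./gradlew ]; then
    # one-time bootstrap; requires a system-wide gradle installation
    gradle wrapper --gradle-version "$WRAPPER_VERSION"
fi
./gradlew build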
I would like to recommend a simple approach.
In your project's README, document that an installation step is required, namely:
gradle wrapper --gradle-version 3.3
This works with Gradle 2.4 or higher, and it creates the wrapper without requiring a dedicated task to be added to build.gradle.
With this option, ignore (do not check in) these files/folders for version control:
./gradle
gradlew
gradlew.bat
The key benefit is that you don't have to check a downloaded file into source control. It costs one extra step on installation, which I think is worth it.
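In .gitignore form (a sketch mirroring the list above):

gradle/
gradlew
gradlew.bat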
According to the Gradle docs, adding gradle-wrapper.jar to the VCS is expected, as making the Gradle Wrapper available to developers is part of the Gradle approach:
To make the Wrapper files available to other developers and execution environments you’ll need to check them into version control. All Wrapper files including the JAR file are very small in size. Adding the JAR file to version control is expected. Some organizations do not allow projects to submit binary files to version control. At the moment there are no alternative options to the approach.
What is the "project"?
Maybe there is a technical definition of this term that excludes build scripts. But if we accept that definition, then we must say your "project" is not all the things you need to version!
But if we say "your project" is everything you have produced, then we can say you must include that, and only that, in the VCS.
This is very theoretical, and maybe not practical for our day-to-day development work. So let's change it to: "your project is every file (or folder) you need to edit directly".
"Directly" means "not indirectly", and "indirectly" means by editing another file whose effect is then reflected in this file.
So we reach the same conclusion the OP stated (and that is said here):
I think generated files should not be in the VCS.
Yes, because you haven't created them, so they are not part of "your project" according to the second definition.
So what is the verdict for these files?
build.gradle: Yes. We need to edit it; our work should be versioned.
Note: it makes no difference where you edit it, whether in your text editor or in the Project Structure GUI; either way, you are editing it directly!
gradle-wrapper.properties: Yes. We need to determine at least the Gradle version in this file.
gradle-wrapper.jar and gradlew[.bat]: I have not created or edited them in any of my development work to this day! So the answer is "No". If you have done so, the answer is "Yes" for you, on that project (and only for the file you edited).
The important note about the last case is that a user who clones your repo needs to execute this command in the repo's <root-directory> to auto-generate the wrapper files:
> gradle wrapper --gradle-version=$v --distribution-type=$distType
$v and $distType are determined from gradle-wrapper.properties:
distributionUrl=https\://services.gradle.org/distributions/gradle-{$v}-{$distType}.zip
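For example (hypothetical values), the line distributionUrl=https\://services.gradle.org/distributions/gradle-4.10-bin.zip gives $v = 4.10 and $distType = bin, so the command becomes gradle wrapper --gradle-version=4.10 --distribution-type=bin.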
See https://gradle.org/install/ for more information.
The gradle executable is bin/gradle[.bat] inside a local distribution. The local distribution is not required to be the same as the one determined in the repo. Once the wrapper files are created, gradlew[.bat] can download the determined Gradle distribution automatically (if it does not already exist locally). The user should then probably regenerate the wrapper files using the new gradle executable (from the downloaded distribution), following the instructions above.
Note: the instructions above assume the user already has at least one Gradle distribution locally (e.g. ~/.gradle/wrapper/dists/gradle-4.10-bin/bg6py687nqv2mbe6e1hdtk57h/gradle-4.10). That covers almost all real cases. But what happens if the user has no distribution at all?
They can download it manually using the URL in the .properties file. But if they don't place it in the path the wrapper expects, the wrapper will download it again! The expected path is completely predictable, but out of scope here (see here for the most complex part).
There are also some easier (but dirty) ways. For example, they can copy the wrapper files (except the .properties file) from any other local or remote repository into their own and then run gradlew there. It will automatically download the suitable distribution.
Old question, fresh answer: if you don't upgrade Gradle often (most of us don't), it is better to commit it to the VCS. The main reason, for me, is to increase build speed on the CI server. Nowadays, most projects are built and installed by CI servers, on a different server instance every time.
If you don't commit it, the CI server will download the JAR for every build, which significantly increases build time. There are other ways to handle this problem, but I find this one the easiest to maintain.

Coldfusion deployment process

I am trying to figure out the best build and deployment process for a ColdFusion project.
I am much more familiar with the regular Java stack: some back-end framework (Spring, Struts, etc.), a bunch of JSP files, then Maven to compile and bundle everything into a .war file that I simply deploy (copy) to a Tomcat webapps directory.
Are .cfm files practically the same as JSPs? What are the similarities and differences between the Java and ColdFusion build/deploy processes?
The resources I have found so far make it sound like you just copy the physical files over, which doesn't sound quite right.
The thread here, Best Practices for Code/Web Application Deployment?, covers the generic deployment process, which we have already implemented. We have a code repository and Maven to manage our build and deployment process; can ColdFusion work straight out of the box with the same setup as regular Java/WAR projects?
A thread in the Adobe forum does not give much insight either: Deploying ColdFusion 8 project via EAR/WAR file; plus, it talks about EAR rather than WAR.
This is an old link from 2007, build tools: maven and coldfusion, which seems to indicate Maven is not an out-of-the-box solution. It also seems ColdFusion has no need for the dependency management that Maven is so useful for?
Can someone help point me in the right direction for build and deployment of ColdFusion projects with the following stack:
Code repository (doesn't matter much): Git or SVN
Maven build
Deploy project as a WAR into Tomcat 7 (not the built-in server)
MySQL DB connector
And lastly: how would the solution differ between CF8 and CF10? It looks to me like CF8 may be worse, as it doesn't officially support Tomcat, whereas CF10 runs on a modified version of Tomcat?
Thanks!
When it comes to deploying CFML out of the box, you really do just copy the files onto your web server. In your case, if you are using Git, just pull from your repository; you don't have to do anything other than that. However, in some cases you may need to clear your CFML cache if you don't see the changes immediately. This is my personal process:
Make changes on the local machine, running a CFML development environment.
Commit and push the changes to the Git repository.
Pull the changes to the production server.
Clear the cache if needed.
It really is as simple as that, as long as your code keeps it that simple.
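As a sketch (the host and path are placeholders, not from the answer), the production step can be as small as:

ssh user@production 'cd /var/www/myapp && git pull'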
Answer 1:
I have worked on some substantial apps where the process was to zip up all the files and send them to a deployment team, who would unzip them at the appropriate location.
Answer 2:
I suspect you are looking for something like CAR files. http://help.adobe.com/en_US/ColdFusion/10.0/Admin/WSc3ff6d0ea77859461172e0811cbf364104-7fd3.html

How can I setup ANT with Subversion and ColdFusion Builder (eclipse) to check out a local build to work on?

I am not sure if there's an answer for this already -- couldn't find one for this (hopefully common) setup:
I recently converted one of my ColdFusion projects to deploy via ANT.
I have a local ANT script that instructs a remote server to check out the code and run the application's specific build file, remotely on the server.
I have a few endpoints:
Live - production (on the production server)
Staging - on the production server, different datasource, etc.
Dev - on the local box.
What I have run into, it seems, is a simple and common problem: I now need ANT to create any build, even locally. Fine; I created a local endpoint, and it configures the build for my box.
The issue? How do I get it to show up as a project (automatically, if possible) in Eclipse/ColdFusion Builder? What I envision is that instead of checking out a branch via the Subversion plugin in CFBuilder/Eclipse, I now use ANT to do that for me.
Since I use ColdFusion Builder (Eclipse + Adobe's plugin), I have all of Eclipse's tools and plugins available to solve the problem: how can I best call ANT from within Eclipse/ColdFusion Builder to set up the local build as a project that I can develop and work on?
I think when I check the code back in from the local box, I'd have to be sure not to check in any files with local config paths, etc.
I hope this is a detailed and clear enough explanation; if not, please ask.
Thanks in advance!
You won't be able to have it "automatically" show up in CFBuilder, but you can make it pretty easy.
Eclipse requires the ".project" file, which is a simple XML file that by default generally just contains the project name.
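For reference, a minimal .project file looks roughly like this (the project name is a placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
    <name>MyProject</name>
    <comment></comment>
    <projects></projects>
    <buildSpec></buildSpec>
    <natures></natures>
</projectDescription>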
Once you check out your project from SVN, do File -- New -- ColdFusion Project and point it to the directory where you've checked out your code. This will create the .project file in there. From there, you can commit that file to SVN.
Subsequent developers who check out the project from SVN can then do File -- Import -- Existing Projects into Workspace and point it to their checked-out location. Since the .project file will be in there (from when you committed it), the project will show up when they search for projects in the import wizard.
Now, that's how you'd do it if you already used ANT to check out the code. However, if you want a potentially even easier way, you can install either the Subversive or Subclipse plugin in CFBuilder and then:
File -- New -- Checkout Project from SVN
Point to your SVN URL
Select the directory you want to check out
Choose a location where you want the code to live
Click through to completion

Good Ways to Use Source Control and an IDE for Plugin Code?

What are good ways of dealing with the issues surrounding plugin code that interacts with an outside system?
To give a concrete and representative example, suppose I would like to use Subversion and Eclipse to develop plugins for WordPress. The main code body of WordPress is installed on the webserver, and the plugin code needs to be available in a subdirectory of that server.
I can see how you could simply check out a copy of your code directly under the web directory on a development machine, but how would you then integrate this with the IDE?
I am making the assumption here that all the code for the plugin is located under a single directory.
Do most people just add the plugin as a project in an IDE and then place the working folder for the project wherever the 'main' software system wants it to be? Or do people use some kind of symlinks to their home directory?
Short answer: I have my development and production servers check out the appropriate directories directly from SVN.
For your example:
Develop in the IDE as you normally would; then, when you're ready to test, check in to your local repository. Your development webserver can then have that directory checked out, and you can test easily.
Once you're ready for production, merge the change into the production branch, and do an svn update on the production webserver.
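In command form (repository URLs and paths are placeholders, not from the answer):

# development webserver: keep a working copy of trunk checked out
svn checkout http://svn.example.com/myplugin/trunk /var/www/dev/myplugin

# production webserver: after merging into the production branch
svn update /var/www/prod/myplugin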
Where I work, some folks like to use the FileSync plugin for Eclipse for this purpose, though I have seen some oddities with that plugin where files in the target directory occasionally go missing. The whole structure is (see the sketch after this list):
An Ant task to create the target directory at the desired location (via copy commands, mostly)
The FileSync plugin configured to keep files in sync between the development location and the target location as you code (syncing the Eclipse output folder to a location in the web server's classpath, etc.)
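A minimal sketch of such an Ant task (the target name, property, and paths are placeholders):

<target name="create-target-dir">
    <mkdir dir="${deploy.dir}"/>
    <!-- copy the plugin source into the location the main app expects -->
    <copy todir="${deploy.dir}">
        <fileset dir="src"/>
    </copy>
</target>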
Of course, symlinks may work better on systems that have good support for symlinks :-)
To me, adding a symlink pointing to your development folder seems like a tidy solution to the problem.
If the main project is on a different machine/webserver, you could use something like sshfs to mount your development directory into the right place on the webserver.
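For example (the WordPress paths, hostname, and user are placeholders):

# same machine: point the plugin directory at your working copy
ln -s ~/workspace/my-plugin /var/www/wordpress/wp-content/plugins/my-plugin

# separate webserver: mount the development directory over SSH (run on the webserver)
sshfs me@devbox:/home/me/workspace/my-plugin /var/www/wordpress/wp-content/plugins/my-plugin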