Play! framework: compile on the server only instead of on the client - Eclipse

Is it possible to compile my Play! framework application server-side only?
Since I mount a Samba share from the server hosting Play! on my client, the paths differ between client and server (modules, play, libs). So play eclipsify writes the server paths into the project on my client instead of using the client paths, and because of this the client gives me a build error.
A solution would be one of:
Change the eclipsify paths per client configuration.
Only compile my app on the server (preferred, since there would be no differences in environment settings).
Can anyone tell me how one of these options would be possible?

Take a look at the play-maven plugin. Using Maven for dependency management means all developers will have the same pom/config file; on running a Maven build, jars/libs are downloaded from the repository server (you can use your own repo server too).
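As a rough sketch, the shared pom.xml could declare the plugin and the common libraries like this (the plugin coordinates and versions below are illustrative placeholders, not taken from this answer; check the play-maven project for the real ones):
<build>
  <plugins>
    <plugin>
      <!-- hypothetical coordinates: look up the actual play-maven plugin -->
      <groupId>org.example.play</groupId>
      <artifactId>play-maven-plugin</artifactId>
      <version>1.0.0</version>
    </plugin>
  </plugins>
</build>
<dependencies>
  <!-- shared libs, resolved from the repository server for every developer -->
  <dependency>
    <groupId>commons-lang</groupId>
    <artifactId>commons-lang</artifactId>
    <version>2.6</version>
  </dependency>
</dependencies>
Every developer then gets identical jars from the repo, so the client/server path differences disappear.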

Why don't you install the Play framework on the client? This framework is for development tasks, so you should install it on your development machine (the client, I presume). The Play framework is freely downloadable and easy to install on your client.

I've found a temporary "solution" that lets each client define its own path (it will probably be overwritten by play eclipsify? Can I change this?).
In Eclipse I've added a variable called PLAY_HOME under Window > Preferences > Java > Build Path > Classpath Variables, pointing to "D:\play-1.2.2" in this case.
In the .classpath I've replaced all absolute paths:
<classpathentry kind="lib" path="/usr/local/bin/play-1.2.2/framework/lib/...jar" />
to:
<classpathentry kind="var" path="PLAY_HOME/framework/lib/...jar"/>
There is still no compilation on the server / continuous integration etc., but it's a working solution for now, though it could be improved (the client-server dependency differences still exist).
It would be nice to check whether the version of Play matches.
It would be nice to make the PLAY_HOME variable optional by defaulting it to '..' (the parent directory).

Perhaps an Ant script is what you need?
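For instance, a minimal build.xml along these lines could compile the app on the server against its Play install (a sketch only; the PLAY_HOME environment variable and the app/ source layout are assumptions mirroring the setup described above):
<project name="myapp" default="compile">
  <property environment="env"/>
  <!-- assumes PLAY_HOME is set, like the Eclipse classpath variable above -->
  <path id="play.classpath">
    <fileset dir="${env.PLAY_HOME}/framework/lib" includes="*.jar"/>
    <fileset dir="${env.PLAY_HOME}/framework" includes="play-*.jar"/>
  </path>
  <target name="compile">
    <mkdir dir="tmp/classes"/>
    <javac srcdir="app" destdir="tmp/classes"
           classpathref="play.classpath" includeantruntime="false"/>
  </target>
</project>
Run on the server (or from a CI job), this sidesteps the client paths entirely.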

If I understand your question correctly, you want to develop, with multiple developers, against a single instance of the application hosted on some server?
It's maybe not the answer you're looking for, but my advice: don't do it this way.
Developing directly on a server, especially with multiple developers, is one of the great anti-patterns in development. Typically, only beginners and rather non-professional developers (no offense meant) work this way.
Restarting the server, debugging code, working in the same files... it only ends in tears when doing this 'shared' development.
Make sure you can run the application completely isolated on each workstation. Use version control to check in changes. If two developers have been working on the same code, you at least have a chance to rectify the situation (and a rather good chance if you use e.g. Mercurial or Git). If you still want a global server to e.g. demo changes to non-developers, just periodically check out a snapshot from version control and deploy that to this server.

Eclipse Kepler and JBoss Wildfly hot deployment

I am trying to use Eclipse Kepler for Java EE 7. I have already installed JBoss Tools and added JBoss WildFly successfully as a server. However, my changes are not automatically deployed. Is there any way the app can be deployed automatically, just as when using GlassFish?
In Eclipse, double-click your WildFly server entry to edit the following properties:
Publishing: choose "Automatically publish after a build event". I like to change the publishing interval to 1 second too.
Application Reload Behavior: check the "Customize application reload ..." checkbox and edit the regex pattern to \.jar$|\.class$
That's it. Good luck!
Both #varantes and #Sean are essentially correct, but their answers are not complete.
Unfortunately, the only way to get full, zero-downtime hot deployment in a Java server environment is to use the paid JRebel or the free spring-loaded tool.
But for small projects there are some ways to speed up work with partial hot deployment. Essentially:
With the option Automatically publish when resources change enabled, changes inside *.html and *.xhtml files are reflected immediately, as soon as you refresh the browser.
To make hot deployment work for *.jsp files too, you should make the following change inside ${wildfly-home}/standalone/configuration/standalone.xml:
<jsp-config/>
replace with:
<jsp-config development="true"/>
Then restart the server and enjoy hot deployment of web files.
But when modifying *.java source files, only partial hot deployment is possible. As #varantes stated in his answer, enabling Application Reload Behavior with the regex pattern set to \.jar$|\.class$ is an option, but it has a serious downside: the whole module is restarted, thus:
It takes some time (depending on how big the module is).
The whole application state is lost.
So personally, I discourage this solution. The JVM supports (in debug mode) code-swapping of method bodies. So as long as you are modifying only the bodies of existing methods, you are fine (zero downtime, changes are reflected immediately). But you have to disable automatic publishing in the server settings, otherwise the application's state will still be destroyed by the republish.
But if you are heavily reworking Java code (adding classes, annotations, constructors), then unfortunately I can only recommend setting publishing to Never publish automatically (or shutting down the server), and when you finish your work in the Java files, restarting the module by hand (or turning the server back on). Up to you.
This works for small Java projects, but for bigger ones JRebel is invaluable (or just spring-loaded), because none of the approaches described above is sufficient. It is also because of such problems that solutions like Rails / Django / Play! Framework gained such huge popularity.
I am assuming you are using the latest version of WildFly (8.0 Beta 1 as of writing).
In the standalone.xml config file, look for <jsp-config/>. Add the attribute development="true" and it should hot-deploy. The resulting config will look like this:
<jsp-config development="true"/>
Add the attributes (development, check-interval, modification-test-interval, recompile-on-fail) in the configuration file at XPath //servlet-container/jsp-config:
<servlet-container name="default" default-buffer-cache="default" stack-trace-on-error="local-only">
<jsp-config development="true" check-interval="1" modification-test-interval="1" recompile-on-fail="true"/>
</servlet-container>
(It works in WildFly-8.0.0.Final)
Start the server in debug mode and it will track changes inside methods. For other changes it will ask you to restart the server.
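For reference, a sketch of what starting in debug mode can look like, so Eclipse can attach and hot-swap method bodies (8787 is WildFly's default debug port; paths depend on your install, and on Windows standalone.bat takes the same flag):
# start WildFly with the JDWP debug agent enabled
$WILDFLY_HOME/bin/standalone.sh --debug 8787
# then in Eclipse: Run > Debug Configurations... > Remote Java Application,
# connect to localhost:8787 and edit method bodies while debugging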

Creating an SVN repository server with XAMPP on Windows

For the last 3 days I have been struggling to set up my SVN server. I have tried several ways and tools, but I kept running into issues that broke the setup.
I am planning to use the following tools for this project.
For server and database - XAMPP (comes with Apache and MySQL)
Version control server - subversion-1.6.16
Version control client - TortoiseSVN
IDE is Eclipse
Following are my queries:
1. Is the above combination of tools and software right for my project?
2. Is there any open-source software which provides all the above functionality combined?
3. If anybody has already done this kind of project, could you please share which versions of the software I should use to get it working error-free?
If anybody can provide a solution for the issue below, I can also carry on with my current setup.
My error message from the server for the current configuration: I tried to set up svn-win32-1.6.16 with my XAMPP installation by copying the two modules mod_dav_svn.so and mod_authz_svn.so to my Apache modules directory, changed the httpd.conf file to load these .so files with LoadModule directives, and set up the location for them as well. But when I start the server, I get an error message like this in the error logs: "httpd.exe: Syntax error on line 136 of C:/xampp/apache/conf/httpd.conf: Cannot load C:/xampp/apache/modules/mod_dav_svn.so into server: The specified module could not be found."
Following are the pre-conditions and configuration prior to this error:
Location of SVN - C:/SVN/svn-win32-1.6.16
Location of XAMPP - C:/xampp/
Changes in the httpd.conf file:
LoadModule dav_svn_module modules/mod_dav_svn.so
LoadModule authz_svn_module modules/mod_authz_svn.so
and for location
# Enter this location in your browser to access the repository
<Location /repos>
DAV svn
SVNPath c:/SVN/svn_repos
</Location>
I have created the repository here - C:/SVN/svn_repos
Is the above combination of tools and software right for my project?
That is impossible to answer, because:
a) we don't know what your project is
b) nothing is perfect
But it is definitely an OK combination of tools. If I were you, though, I would not use XAMPP but Zend Server CE instead! You get a nice web GUI for most PHP configuration needs.
Is there any open-source software which provides all the above functionality combined?
No. These tools are maintained for various target audiences, and the combination you're asking for wouldn't make much sense as a bundle.
But of course your IDE (Eclipse in this case) integrates nicely with these tools. 'Integrates' means it plays well together; it doesn't mean it comes bundled with these things.
If anybody has already done this kind of project, could you please share which versions of the software I should use to get it working error-free?
I used to have such a combination (now I'm on Zend Studio with Zend Server CE) and there is no problem with it. The problem is that you're trying to do something unnecessary and wrong.
If you're using XAMPP, you're on a Windows machine. Note that Apache on Windows does load modules with the .so suffix (they are ordinary DLLs, just renamed), and "The specified module could not be found" usually means one of the DLLs the module itself depends on is missing, not the module file itself.
But why do you want to load those modules anyway? You don't need them in order to get it all working.
Where are your repositories? Only if you want to host your own repositories do you need to run your own server. If that is the case, look at VisualSVN Server. You just install it; no need for integration with anything.
If your repositories are at a location on the net (more likely), you don't need an SVN server; you just need the client. In that case you're ready to go, no need for PHP extensions. You can check out repos, commit, export, branch, tag, etc., from within Eclipse or in your file system with TortoiseSVN.
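For instance, the same workflow with just the command-line client (the repository URL here is a placeholder):
svn checkout http://svn.example.com/repos/myproject/trunk myproject
svn commit -m "describe your change"
svn copy ^/myproject/trunk ^/myproject/branches/feature-x -m "create branch"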
Try it and get back here, if you still experience problems.

How does the ‘Servers’ view work under the hood in Eclipse?

‘Servers’ is a built-in view in Eclipse. With it, we can integrate a Java EE server into Eclipse easily. It can start/stop the server in both normal and debug modes. Moreover, we can even set the timeout and deployment path, things like that. Various types of servers (Tomcat, JBoss, WebSphere) are supported, without being intrusive to the server.
I am just curious how these cool things happen behind the scenes. The complete mechanism is large and complex, so I just want to know the general mechanism; an article would also be fine for me. Thank you!
It's the server-specific plugin which does all the work. When integrating a server in Eclipse, you basically need to tell the plugin where to find the installation root of the server in question. The plugin in turn knows precisely where to locate the default libraries, how to deploy webapps to the server in question, and how to start/stop the server with any extra command-line arguments.
Since every server make/version needs a different approach (as different as when you do it "manually"), I'll only give a Tomcat 6.0-based example of how it roughly works. Double-click the server entry in the Servers view and check the Server Location section. The field Server Path denotes the root location of the configuration files. By default it's in the Eclipse metadata (when Use workspace metadata is selected). If you browse further into this folder, you'll find something like tmp0\conf\server.xml. It contains information about where the to-be-deployed webapps are located, which context name they should have, and so on. The plugin basically hands this information to Tomcat, which handles the rest.
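For illustration, the kind of entry the plugin maintains in that generated server.xml looks roughly like this (the context name and docBase path are examples, not literal plugin output):
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="false">
  <!-- points Tomcat at the workspace copy of the webapp -->
  <Context path="/myapp" docBase="C:\workspace\myapp\WebContent" reloadable="true"/>
</Host>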
Basically, server adapters are Eclipse plugins that extend the IDE by implementing a set of generic actions (start, debug, stop, deploy, undeploy) which are translated into server-specific orders. They also expose server-specific configuration parameters. The deployment is more or less intrusive depending on the server (it may be done outside the server folder tree or in a special Eclipse folder).

Storing third-party frameworks/middleware in source control when they need to alter your compiler/IDE

I know there are posts that ask how one stores third-party libraries in source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler/IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store header files / lib / JAR files, so that they're ready to be linked.
Typically, you simply link libraries to your app and you are good. But what about middleware/frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically these frameworks/middleware need to generate their own code before your application can link to it.
From the point of view of the developer, ideally they would just check out and everything would be ready to go. But then my IDE/compiler will not be set up properly yet, so the compilation will fail.
What do you think?
Back up everything, including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd-party libraries in source control. I have a branch for all the libraries.
2) Back up the entire tool chain which was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, so this makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect, clean developer IDE setup, then make a VMware / Virtual PC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way, because I often have to wade through Visual Studio 6 code which doesn't build properly.
I think that a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds, and it becomes the new developer's responsibility to improve the build if needed.
This does of course not work well if your software needs attended installs, but I would try to avoid any such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative option is to fail with a clear indication of what went wrong (e.g. "'CORBA_COMPILER_HOME' not set, please set and try again").
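For example, an Ant guard along these lines produces exactly that kind of clear failure (a sketch; the variable name is just the one from the example above):
<property environment="env"/>
<!-- abort early with a readable message instead of a cryptic compile error -->
<fail unless="env.CORBA_COMPILER_HOME"
      message="'CORBA_COMPILER_HOME' not set, please set and try again"/>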
All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that that is applicable in the general case; how would you feel about that kind of requirement for building a software product? It also limits people who want to adapt your software to new platforms.
What about adding one step?
A NAnt script which is started with a bat file. The developer would only have to execute one .bat file; the bat file starts NAnt, and the NAnt script can be made to do anything you need.
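Something like this, say (a sketch; the NAnt location and build-file name are assumptions):
@echo off
rem build.bat - the single file a developer has to run
tools\nant\bin\NAnt.exe -buildfile:default.build %*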
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages of this approach for us are:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd-party dependencies under source control.
Whether this strategy can be applied successfully depends on whether all developers need to be able to change the middleware code and recompile it frequently. But this issue could also be solved via a continuous integration server like TeamCity, which allows developers to create private builds.
Your build process would look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
Update: This doesn't really answer how to modify the IDE. It's just a sort-of Maven replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff; if you do, you need a different IDE or a system that generates/modifies the IDE files for you. (See CMake for a cross-platform C/C++ project file generator.)
I've written a system (first in Ant/Beanshell at two different places, then rewritten in Python at my current job) where third-party packages are compiled separately (by someone), then stored and shared via HTTP.
Somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied to a single directory for the current repo), and Visual Studio is launched with /useenv.
Each module's file checks for the things it needs; anything that requires installing and licensing, such as Visual Studio, Matlab or Maya, must already be on the local computer. If it isn't there, the cmd file fails with a nice error message. This way, you can also check that the correct version is there.
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or the source checkout, at least if doing heavy C++.
%work% <- local store for all temp files, unzips, and each working copy's temp files
%work%/_cache <- downloaded zips (2 GB)
%work%/_local <- local zips (for development, or retrieved in other manners while travelling)
%work%/_unzip <- unzips of files in _cache (10 GB)
%work%/_content <- textures/3D models and other big files (synchronized manually; this is 5 GB today, not suitable for VC either)
%work%/D_trunk/ <- store for the working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for the working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using the bat file d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while when running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
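A sketch of what such a generated Visual Studio.cmd boils down to (paths and solution name are illustrative):
rem generated per working copy; the boost version depends on this revision
set INCLUDE=%work%\_unzip\boost-1.37;%INCLUDE%
set PATH=%work%\_unzip\wget;%work%\_unzip\7z;%PATH%
rem /useenv makes Visual Studio pick up INCLUDE/PATH from this environment
devenv.exe /useenv MySolution.sln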
In the repo, only a small set of bootstrap binaries needs to be stored (i.e. wget and 7z).
We currently download about 2 GB of packed data, which is unzipped to 10 GB (pdb files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo size small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, and then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or anything. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes a while, but after that it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential for testing new versions/compilations of third-party libraries.)
The basics are in a repo on Bitbucket, but it needs more work before it's ready for the public. Apart from docs and polish, I plan to:
extend it to use CMake instead of raw vcproj files, to make it more cross-platform.
script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. It works well. I do think that CMake is one of the best answers for this, though.

Advice on creating a self-contained project, and distributing a web server with the source code

I need some advice on configuring a project so it works in development, staging and production environments:
I have a web app project, MainProject, that contains two sub-projects, ProjectA and ProjectB, as well as some common code, Common. It's in a Subversion repository. It's nearly all HTML, CSS and JavaScript.
In our current development environment we check MainProject out, then set up Apache virtual hosts to point at each of the sub-project's directories, as paths within each project are relative to their root. We also have a build process that then compiles each of the sub-projects into their own deliverable package, with the common code copied into each.
So - I'm trying to make development of this project a bit easier. At the moment there is a lot of configuration of file paths in the Apache httpd.conf files, as well as in the build.xml file and in a couple of other places too.
Ideally I'd like the project to be checked out of SVN onto a fresh computer, with a web server as part of the project, fully configured, that can then be run from the checkout directory with very little extra configuration, either on a PC or Mac. And I'd like anyone to be able to run the build to compile it too.
I'd love to hear from anyone who has done something like this, and any advice you have.
Thanks,
Paul
If you can add Python as a dependency, you can get a minimal HTTP server running in less than ten lines of code. If you need basic server-side code, there is a CGI server as well.
The following snippet is adapted from the BaseHTTPServer documentation:
import BaseHTTPServer

def run(server_class=BaseHTTPServer.HTTPServer,
        handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
    server_address = ('', 8000)  # listen on all interfaces, port 8000
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()

# to serve static files instead of bare 501 responses, pass
# SimpleHTTPServer.SimpleHTTPRequestHandler as handler_class
run()
I've done this with Jetty, from within Java. Basically you write a simple Java class that starts Jetty (which is a small web server); you can then make this run via an Ant task. (I used it with automated tests: Java code made requests to the server and checked the results in various ways.)
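A minimal sketch of such a class using the embedded Jetty API (the port, the resource base and the class name are examples, not from the original answer):
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.ResourceHandler;

public class DevServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);        // embedded Jetty on port 8080
        ResourceHandler handler = new ResourceHandler();
        handler.setResourceBase("ProjectA");     // serve the checkout's static files
        server.setHandler(handler);
        server.start();
        server.join();                           // block until the server is stopped
    }
}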
Not sure it's appropriate here because you don't mention Java at all, so apologies if it's not the kind of thing you're looking for.