What is the deployment workflow when using CoffeeScript and an IDE (e.g. Eclipse) for developing a web application?

I've just picked up CoffeeScript and I'm struggling to understand the deployment workflow. It seems you constantly have to compile the .coffee files before using them. (Yes, I'm aware that you can have it embedded in the browser, but that's not recommended for production applications).
Does one have to constantly (manually) compile the files before deploying? (For example, with Eclipse, a simple Ctrl+S saves and deploys the .war/.ear to the local machine's server.) Do we have to change the build scripts (for a central, possibly CI, server) to deploy .coffee files? Is there any way to have integrated compilation via the IDEs (Eclipse/NetBeans)?
Any ideas/pointers/examples on this? How/what have you used in the past?

I call browserify in my Cakefile to pre-compile and package my CoffeeScript for the browser. For an example of how I call browserify, as well as coffeedoc and coffeedoctest, take a look at the Cakefile for my Lumenize project.
If you are using Express or some other Node-based server, you can have your CoffeeScript compiled at request time using tools like NibJS, or, as described in The Little Book on CoffeeScript (Applications chapter), you can use Stitch. BTW, I highly recommend The Little Book. The "Compiling" chapter has information about Cake and compiling that might help you.

Yes, you should have a build script. Most CoffeeScript projects use a Cakefile for this; see, for example, 37signals' pow. With a Cakefile, you can just run
cake build
from the command line to run the build task in the Cakefile.
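A minimal Cakefile might look something like this (the src/ and lib/ directory names here are just placeholders for your own layout):
{exec} = require 'child_process'

task 'build', 'Compile CoffeeScript in src/ to JavaScript in lib/', ->
  exec 'coffee --compile --output lib/ src/', (err, stdout, stderr) ->
    throw err if err
    console.log stdout + stderr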
You can run the Cakefile on a CI server, assuming that you have Node and CoffeeScript installed on that server.

Don't deploy the .coffee files; use something like coffee -cwj to continuously watch and compile the .coffee files into JavaScript (.js) files, and deploy those.
The options are c = compile, w = watch, and j = join the files.
See the CoffeeScript web site for details of the options you can pass in.
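For example, assuming your sources live in src/ and you want the compiled output in public/js/ (the paths here are placeholders, adjust them to your project):
coffee --compile --watch --output public/js src
Add --join app.js if you also want everything concatenated into a single output file.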

Related

Flow IDE support is fighting with Webpack

I have flow integrated into a webpack / babel build using flow-babel-webpack-plugin and it works great. Webpack dev server compiles / serves assets in less than a second and if there are flow type errors it prints them out nicely. I'm very happy with that.
The problem begins when I turn on my IDE. In both VSCode and Atom, if I enable any kind of flow support, my webpack / babel build immediately begins to choke. It will take anywhere between 4 and 70 seconds to compile any change. Often it fails, gives multiple "flow is still initializing" notices, and indicates it has tried to start the server over and over.
I suspect that both webpack and the IDE are trying to spin up separate flow servers at the same time and this is causing a conflict. Or they are using the same flow server and this is, for some reason, also a problem. I just can't figure out what to do about it. I have tried pointing at separate binaries with webpack using the global flow and the IDE using the one from node_modules. No dice.
It seems like this must be an extremely common use case - flow + a webpack watcher + any IDE whatsoever.
I'd like to have both my webpack build compile flow code and have my IDE show me syntax errors etc. So far that's been impossible.
It looks like that plugin uses its own copy of Flow, from the flow-bin package:
index.js
package.json
If this version is out of sync with what your IDE is starting up, then they will fight -- starting up one version of Flow will kill any Flow server with a different version that is already running in that directory.
If you put flow-bin in your devDependencies (alongside this webpack plugin) and lock it to a specific version, and also set your IDE to use the Flow binary from flow-bin, then it looks like npm will just install the version you specify, and both the plugin and the IDE will be able to use the same Flow version.
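Concretely, that means pinning flow-bin in package.json, e.g. (the version number below is only an illustrative placeholder):
"devDependencies": {
  "flow-bin": "0.57.3"
}
and then pointing your IDE's Flow setting at node_modules/.bin/flow so both the editor and the webpack plugin resolve the same binary.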
Without knowing more specifics about your setup, it's hard to recommend a more concrete solution. You'll have to either make it so both your IDE and this webpack plugin are running the same version of Flow, or stop using either the IDE or the webpack plugin.

Start a MAMP server with Grunt.js

I want to set up my Gruntfile.js to launch my MAMP server on grunt serve.
I have been trying to use this tutorial here to do this:
https://coderwall.com/p/kwne-g
I was then planning to use this tutorial to set up grunt watch:
http://darrenhall.info/development/yeoman-and-mamp
Now I am having trouble with step one. I have successfully installed grunt-exec. However, Grunt claims not to be able to find the task 'serverup'.
Here is my code:
grunt.task.run([
  'clean:server',
  'concurrent:server',
  'autoprefixer',
  'connect:livereload',
  'serverup',
  'watch',
  'serverdown'
]);
If you followed the examples at the link, it looks like your targets should be exec:serverup and exec:serverdown for the tasks to work.
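In other words, the Gruntfile needs a grunt-exec config block along these lines (the MAMP start/stop commands below are placeholders; use whatever scripts the tutorial has you create):
exec: {
  serverup: {
    cmd: 'sh start-mamp.sh'   // placeholder: whatever command starts MAMP for you
  },
  serverdown: {
    cmd: 'sh stop-mamp.sh'    // placeholder: whatever command stops MAMP for you
  }
},
and then the grunt.task.run list should reference 'exec:serverup' and 'exec:serverdown' instead of the bare 'serverup' and 'serverdown'.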
The problem here, as with many other Grunt-related questions, is that without the full Gruntfile, "answers" are largely guesses. Configuration, task loading, and task registration are all related, and without seeing all three it is difficult to say "this is the answer".
Given that, here's a checklist I use for problems like this when using Grunt and packages from npm (see the bare-bones sketch below):
Is the code that supports the task installed? Did I forget npm install? Did I miss errors on install (check npm-debug.log if it exists)?
Have I correctly used grunt.loadNpmTasks for the plugin? Is the line in my Gruntfile, and did I get the plugin name spelled correctly? Did I or my IDE accidentally use grunt.loadTasks when I need loadNpmTasks?
Have I correctly named/used grunt.registerTask for the task? If I need to call a specific target for a task, do I have the name correctly specified?
If there are paths involved and things are broken, do I have my globbing patterns correctly specified? If a cwd is involved, do I have the other paths or files correctly specified?
Did I get the configuration right? (This is where things get too plugin-specific for a checklist.)
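For reference, a bare-bones Gruntfile that wires up a plugin-provided task looks roughly like this (grunt-exec and the echo command are only examples):
module.exports = function (grunt) {
  grunt.initConfig({
    exec: {
      serverup: { cmd: 'echo starting server' }  // example target
    }
  });

  grunt.loadNpmTasks('grunt-exec');                   // load the plugin's tasks
  grunt.registerTask('default', ['exec:serverup']);   // alias task -> plugin target
};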

Using CoffeeScript with a basic Yeoman project

I've used Yeoman to make a quick project skeleton using the yo webapp generator command. In the resulting Gruntfile I see that it's set up to compile CoffeeScript, but it seems like it's just sticking compiled files in a .tmp folder.
coffee: {
  dist: {
    files: {
      '.tmp/scripts/coffee.js': '<%= yeoman.app %>/scripts/*.coffee'
    }
  },
},
How do these get included in the project during development? I'm not using RequireJS.
The Yeoman docs are unclear on how to use CoffeeScript. They only mention that it gets automatically compiled.
Using Yeoman 1.0.0-rc1.4, I use:
$ yo angular --coffee
The resulting project has controller and app scripts in CoffeeScript.
The Grunt configuration file remains in JS (which is not really a problem).
Running
$ grunt test
runs the tests and all seems fine.
$ grunt server
is also doing what one expects (builds the app, tests it, starts the server, opens the app in a web browser, and starts watching for changes, so if I change a CoffeeScript file, the change is quickly reflected in the browser).
The documentation also states that one can use yo to add particular pieces like:
angular:controller
angular:directive
angular:filter
angular:route
angular:service
angular:decorator
angular:view
Each can be called with a --coffee switch to get the script in CoffeeScript, e.g.:
yo angular:controller user --coffee
I just found an issue in the github repo referencing this problem. https://github.com/yeoman/generator-webapp/issues/12
It offers a temporary solution: https://github.com/yeoman/generator-webapp/issues/12#issuecomment-13731929

Storing third-party frameworks/middleware that need to alter your compiler/IDE in source control

I know there are posts that ask how one stores third-party libraries in source control (such as this and this). While those are great answers, I still can't find the answer to this:
How do you store third-party middleware/framework binaries that need to alter your compiler/IDE for the library to work properly? Note: for my needs, I don't need to store the middleware source; I only store header files / libs / JARs, so that they're ready to be linked.
Typically, you simply link libraries to your app, and you are good. But what about middleware / frameworks that need more?
Specific examples:
Qt moc pre-processor.
ZeroC Ice Slice (ice) compiler (similar to CORBA IDL preprocessor).
Basically, these frameworks/middleware need to generate their own code before your application can link to it.
From the point of view of the developer, ideally he wants to just check out, and everything should be ready to go. But then my IDE/compiler will not be set up properly yet, so the compilation will fail.
What do you think?
Back up everything, including the setup of the IDE, operating system, etc. This is what I do:
1) Store all 3rd-party libraries in source control. I have a branch for all the libraries.
2) Back up the entire toolchain which was used to build. This includes every tool. Each tool is installed into the same directory on each developer's computer, so this makes it simple to set up a developer's machine remotely.
3) This is the most hardcore, but prepare one perfect developer IDE setup which is clean, then make a VMware / Virtual PC image out of it. This will be useful when you can't seem to get the installers to work in the future.
I learned this lesson the painful way because I often have to wade through Visual Studio 6 code which doesn't build properly.
I think that a better solution is to make sure that the build is self-contained and downloads all necessary software for itself unless you tell it otherwise. This is the way Maven works, and it is really handy. The downside is that it sometimes needs to download an application server or similar, which is highly impractical, but at least the build succeeds and it becomes the new developer's responsibility to improve the build if needed.
This does of course not work well if your software needs attended installs, but I would try to avoid any such dependencies in any case. You can add alternative routes (e.g. the Ant script compiles the code if Eclipse hasn't done it yet). If this is not feasible, an alternative option is to fail with a clear indication of what went wrong (e.g. "'CORBA_COMPILER_HOME' not set, please set it and try again").
All that said, the most complete solution is of course to ship everything with your app (i.e. OS, IDE, the works), but I doubt that that is applicable in the general case; how would you feel about that type of requirement to build a software product? It also limits people who want to adapt your software to new platforms.
What about adding one step?
A NAnt script which is started with a .bat file. The developer would only have to execute one .bat file; the .bat file could start NAnt, and the NAnt script could be made to do anything you need.
This is actually a pretty subtle question. You're talking about how to manage features of the environment which are necessary in order to allow your build to proceed. In this case it's the top level of your code toolchain, but the problem can be generalised to include the entire toolchain, and even key aspects of the operating system.
In my place of work, we have various requirements of the underlying operating system before our code will successfully run. This includes machine-specific configurations as well as ensuring correct versions of system libraries and language runtimes are present. We've dealt with this by maintaining a standard generic build machine image which contains the toolchain requirements we need. We can push this out to a virgin machine and get a basic environment that contains the complete toolchain and any auxiliary programs.
We then use fsvs to version control any additional configuration, which can be layered on to specific groups of machines as needed.
Finally, we use custom scripts hooked in to our CI server (we use Hudson) to perform any pre-processing steps required for specific projects.
The main advantages for us of this approach is:
We can build and deploy developer and production machines very easily (and have IT handle this side of the problem).
We can easily replace failed machines.
We have a known environment for testing (we install everything to a simulated 'production server' before going live).
We (the software team) version control critical configuration details and any explicit pre-processing steps.
I would outsource the task of building the middleware to a specialized build server and only include the binary output as regular 3rd-party dependencies under source control.
Whether this strategy can be successfully applied depends on whether all developers need to be able to change middleware code and recompile it frequently. But this issue could also be solved via a continuous integration server like TeamCity that allows you to create private builds.
Your build process would look like the following:
Middleware repo containing middleware code
Build server, building middleware
Push middleware build output to project repository as 3rd party references
Update: This doesn't really answer how to modify the IDE. It's just a sort-of Maven replacement thingy for C++/Python/Java. You shouldn't need to modify the IDE to build stuff; if so, you need a different IDE or a system that generates/modifies IDE files for you. (See CMake for a cross-platform C/C++ project file generator.)
I've written a system (first in Ant/Beanshell at two different places, then rewrote it in Python at my current job) where third-party libraries are compiled separately (by someone), stored and shared via HTTP.
Somewhat hurried description follows:
Upon start, the build system looks through all modules in the repo and executes each module's setup target, which downloads the specific version of a third-party lib or app that the current code revision uses. These are then unzipped, PATH/INCLUDE etc. are extended (or, for small libs, the files are copied to a single directory for the current repo), and then Visual Studio is launched with /useenv.
Each module's file checks for the stuff that it needs; anything that needs installing and licensing, such as Visual Studio, Matlab or Maya, must already be on the local computer. If it's not there, the cmd file will fail with a nice error message. This way, you can also check that the correct version is in there.
So there are a number of directories on the local disk involved. %work% needs to be set using a global environment variable, preferably on a different disk than the system or the source checkout, at least if doing heavy C++.
%work% <- local store for all temp files, unzips, and for each working copy's temp files
%work%/_cache <- downloaded zips (2 GB)
%work%/_local <- local zips (for development, or retrieved in other manners while travelling)
%work%/_unzip <- unzips of files in _cache (10 GB)
%work%/_content <- textures/3D models and other big files (synchronized manually; this is 5 GB today, not suitable for VC either)
%work%/D_trunk/ <- store for working copy checked out to d:/trunk
%work%/E_branches/v2 <- store for working copy checked out to e:/branches/v2
So, if trunk uses Boost 1.37 and branches/v2 uses 1.39, both boost-1.39 and boost-1.37 reside in /_cache/ (as zips) and /_unzip/ (as raw files).
When starting Visual Studio using bat files from d:/trunk/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.37, while if running e:/branches/v2/BuildSystem/Visual Studio.cmd, INCLUDE points to /_unzip/boost-1.39.
In the repo, only a small set of bootstrap binaries need to be stored (i.e. wget and 7z).
We currently download about 2 GB of packed data, which is unzipped to 10 GB (pdb files are huge!), so keeping this out of source control is essential. Having this system allows us to keep the repo size small enough to use a DVCS such as Mercurial (or Git) instead of SVN, which is very nice. (I'm thinking of using Mercurial's bigfiles extension or file sharing instead of a separately HTTP-served directory.)
It works flawlessly. Developers only need to check out, set an environment variable for their local cache, then run Visual Studio via a specific batch file in the repo. No unzipping or compiling or anything. A new developer can set up his computer in no time. (Installing Visual Studio takes an order of magnitude more time.)
The first time on a new computer takes a while, but after that it's fast, only a few seconds. Downloads/unzips are shared on the local computer, so checking out additional branches/versions does not occupy more space. Working offline is also possible; you just need to get the zip files manually if new ones have been uploaded. (This mechanism is essential for testing new versions/compilations of third-party libraries.)
The basics are in a repo on Bitbucket, but it needs more work before it's ready for the public. Apart from docs and polish, I plan to:
extend it to use CMake instead of raw vcproj files, to make it more cross-platform;
script the entire process from checkout/download of third-party packages to building and zipping them (including storing the download in a local repo) ... currently that's on my dev computer. Not good. Will fix. :)
As for moc, we use Qt's Visual Studio add-in, which stores this in the .vcproj files. Works well. I do think that CMake is one of the best answers for this, though.

Advice on creating a self-contained project, and distributing a web server with the source code

I need some advice on configuring a project so it works in development, staging and production environments:
I have a web app project, MainProject, that contains two sub-projects, ProjectA and ProjectB, as well as some common code, Common. It's in a Subversion repository. It's nearly all HTML, CSS and JavaScript.
In our current development environment we check MainProject out, then set up Apache virtual hosts to point at each of the sub-projects' directories, as paths within each project are relative to their root. We also have a build process that then compiles each of the sub-projects into its own deliverable package, with the common code copied into each.
So - I'm trying to make development of this project a bit easier. At the moment there is a lot of configuration of file paths in Apache httpd.conf files, as well as in the build.xml file, and in a couple of other places too.
Ideally I'd like the project to be checked out of SVN onto a fresh computer, with a web server as part of the project, fully configured, that can then be run from the checkout directory with very little extra configuration, either on a PC or Mac. And I'd like anyone to be able to run the build to compile it too.
I'd love to hear from anyone who has done something like this, and any advice you have.
Thanks,
Paul
If you can add Python as a dependency, you can get a minimal HTTP server running in less than ten lines of code. If you have basic server-side code, there is a CGI server as well.
The following snippet is copied directly from the BaseHTTPServer documentation:
import BaseHTTPServer

def run(server_class=BaseHTTPServer.HTTPServer,
        handler_class=BaseHTTPServer.BaseHTTPRequestHandler):
    server_address = ('', 8000)
    httpd = server_class(server_address, handler_class)
    httpd.serve_forever()

run()  # not part of the docs snippet: call it to actually start listening on port 8000
I've done this with Jetty, from within Java. Basically you write a simple Java class that starts Jetty (which is a small web server) - you can then make this run via an Ant task (I used it with automated tests - Java code made requests to the server and checked the results in various ways).
Not sure it's appropriate here because you don't mention Java at all, so apologies if it's not the kind of thing you're looking for.