What grunt files to upload to the repo vs. files to upload when deploying the site to production

So, I have a webapp I am creating using the three musketeers: yeoman, grunt and bower.
My questions are:
What is best practice when it comes to uploading my webapp into a git/mercurial repo? Do I include the entire project? What about directories like 'node_modules' or 'test', etc?
Also, when deploying to live production site: Will my 'dist' folder be what I should be uploading?
My research has yielded no results (I could be searching for the wrong things?). I'm a bit new to this process, so any feedback is greatly appreciated. Thanks!

You should always commit all of your yeoman, grunt, and bower config files.
There are two schools of thought on committing the output they produce or dependencies they download:
One is that you should commit everything needed for another user to deploy the web app after cloning the repository, without performing any additional operations. The idea is that dependencies may not exist anymore, network connections might be down, etc.
The other is to keep the repository small and not commit node_modules, etc., since the user can download them.
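If you take the second approach, a typical .gitignore might look something like this (the directory names assume the default Yeoman webapp layout):

    # downloaded dependencies - restored with `npm install` / `bower install`
    node_modules/
    bower_components/
    # generated output (optional - see the note about dist/ below)
    .tmp/
    dist/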
As far as the dist folder goes, yes you'll be uploading it to your server, as it contains all of your minified files. Whether or not you want to commit it to the repository is a separate question. You might let the user build every time, assuming they can get all the dependencies one way or another (from above choice). Or you might want to commit it to tag it with a release version along with your source code.
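For the deployment step itself, a minimal sketch might look like this, assuming your Gruntfile defines a build task that writes to dist/ (the host and path are made up):

    grunt build                                            # minify/concat into dist/
    rsync -avz --delete dist/ deploy@example.com:/var/www/myapp/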
There's some more discussion on this here: http://addyosmani.com/blog/checking-in-front-end-dependencies/

Related

How to properly deploy project on production server

I would like to ask how to properly deploy projects with multiple dependencies to a production server.
E.g. my project depends on node (npm), ruby (for sass), composer, gulp, etc. - things related to the development process.
So maybe a good idea is to avoid all those things on the production server, and instead create a separate repository that holds the project in a 'production-ready' state with all dependencies (e.g. a vendor/ directory with composer deps), and push that directly to production.
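For concreteness, that separate 'production-ready' repository idea might look roughly like this (the repository layout, remote name, and build task are assumptions):

    # build everything locally, then copy only the deployable pieces into a release repo
    composer install --no-dev          # vendor/ without dev dependencies
    npm install && gulp build          # compiled assets into build/ (assumed task name)
    cp -r build vendor ../myapp-release/
    cd ../myapp-release
    git add -A && git commit -m "release build"
    git push production master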
According to this answer it looks like I should build everything in a dev or local environment and then copy the files to production, which might be tedious, and maybe it would be better to keep everything in a separate repository.
Or, are there already some best practices regarding this? Could somebody help me with the decision?
Thanks!

Deploy build files from continuous integration

I am working on a project with multiple people, a website application that requires a webpack build which uglifies and concatenates the source into a few files, e.g. app.min.js, style.min.css, etc. As a result, in an effort to prevent merge conflicts, we recently added the build folder to .gitignore, under the assumption that we would be able to build during deployment.
When pushing to the Master branch, we automatically "deploy" through Semaphore CI (similar to Travis) which runs composer install, npm install, and finally "npm run build" which triggers the webpack build. This is all built and then tested on the CI side of things, and then Semaphore automatically deploys to Amazon's Elastic Beanstalk where our application is hosted.
The problem with this is that Semaphore doesn't seem to upload the build it has just tested, but rather the Master branch itself, which has no built JS or CSS. I'm wondering if there's a way to push these built files to deployment as well, or if running the entire build process AGAIN on Elastic Beanstalk is the only route. It seems unnecessary to have to do that process essentially 3 times: locally, on CI, and then on deployment. Every time a step like this is needed on EB, the actual re-instantiation time gets longer, which I'd like to keep as short as possible.
Obviously if building it a 3rd time on EB is the only way to go about this then I'll have to, just wondering if there are better solutions for this whole workflow.
I haven't worked with Semaphore CI, but you might be able to use an .ebignore file.
If you create one, the EB CLI will use that instead of your .gitignore file.
I find in some deployment situations you want the inverse of your .gitignore (all compiled, no src). It essentially lets you pick the files from your project directory that you want to deploy, in the same way as the .gitignore file.
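For example, an .ebignore for that 'inverse' situation might look something like this (the directory names are only an illustration):

    # .ebignore - deploy the compiled output, leave sources and dev tooling behind
    src/
    node_modules/
    tests/
    webpack.config.js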
Edit: I just noticed the AWS documentation is lacking. It only mentions file exclusion, but you can include files too.
Edit 2: I don't think Semaphore supports the use of .ebignore, so right now this solution isn't of any use. :(
I just had a great first experience with https://deploybot.com/. They can deploy directly to Elastic Beanstalk. It might be interesting for you.

Continuous delivery with capistrano/chef/puppet: where do you store your artifacts?

I've been reading up on how people do continuous delivery with some of the popular toolsets.
Lots of posts (like this one) seem to indicate that a common way of doing things is to use something like capistrano to push software from your builds to your machines, and then chef or puppet to configure anything related to it.
My question is, do people generally push their software directly into a special git repo for binary assets, or can capistrano fetch it out of a maven repo? The maven approach seems most natural to me, but I can't seem to find much information on it - which is what makes me think it's not the approach people generally take.
Basically, I'm slightly confused, as there seems to be a gap between the build output (which one would normally publish to a maven repo) and where the delivery tools expect to find the software you have asked them to deploy (which seems to be a file system, or a git repo).
When it comes to artifacts, I leverage the Jenkins plugin to upload them to S3. Here's a link to it.
Basically, right now all my CI goes through Jenkins, and when I get a complete build I upload it to a bucket and have chef pull the tarball/war/gem from there and install it.
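As a rough illustration, the Chef side ends up doing something equivalent to this (the bucket and paths are made up):

    # fetch the CI-built artifact from S3 and unpack it on the target machine
    aws s3 cp s3://my-ci-artifacts/myapp-1.2.3.tar.gz /tmp/myapp.tar.gz
    tar -xzf /tmp/myapp.tar.gz -C /var/www/myapp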

Output binary files linked to my version-control server without a build system?

I am trying to set up an internal Mercurial HgWeb server on a Windows 2003 server. The HgWeb part is working. I could just share a folder to put the released binary files for each project in. But I am wondering whether I could still somehow link the version control system to the binary build output, so that when there is a commit, the build output gets updated as well for a release.
I know I could have a build system on the server end. But for Delphi, C#, and ASP.NET projects with a few third-party libraries, it seems like much more work.
Right now, I am thinking that for each project I will have two repositories: one for development (without output binaries), the other for release, which will include everything, including the build result binaries (or would holding only the build result plus dependencies be a better idea?). But I don't know yet how to make those two synchronize automatically without committing twice manually.
Maybe simply a hook on the Dev repository that fires on every commit to the Master branch and makes another commit to the Release branch?
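If you do try the hook route, a minimal sketch in the Dev repository's .hg/hgrc could look like this (the script name is hypothetical; it would have to check the branch, run the build, and commit the binaries into the Release repository itself):

    [hooks]
    # fires after changesets are added to the repository
    changegroup = C:\scripts\build_and_commit_release.bat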
You really need a build system like CruiseControl.NET to build your binaries after pushes happen to a remote repository that CC.NET is watching. The built binaries can then just be copied to a standard web server to be served up for download. CC.NET is not complicated to configure and supports Mercurial out of the box. Using a system like this, you get extras like build stats, the ability to run unit tests before publishing a build for download, and lots more.

Best practice for updating a website

Currently my workflow is as follows:
Locally, I maintain a git repo for each website I am working on. When the time comes to publish something, I compress the folder and upload this single file to the production server via ssh, then I decompress it, test the changes, move them to the live folder, and get rid of the .git folder.
I was wondering if using a git repo on the live server is a good idea. It seems to be at first, but it can be problematic if a change doesn't look the same on the production server as it does on the local development machine... this could start a fire...
What about creating a bare repo in some folder on the production server, then cloning from there into the public folder - pushing updates from the local machine to the bare repo and pulling from the bare repo into the public folder on the production server? Could anyone please provide some feedback.
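For what it's worth, the bare-repo setup described above would look roughly like this (the paths and host are illustrative):

    # on the production server:
    git init --bare /srv/git/site.git
    git clone /srv/git/site.git /var/www/site

    # on the local machine:
    git remote add production ssh://user@example.com/srv/git/site.git
    git push production master

    # on the server, after each push (manually or from a post-receive hook):
    cd /var/www/site && git pull origin master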
Later I read about capistrano (http://capify.org) but I have no experience with this software...
In your experience what is the best practice/methodology to accomplish a website deployment/updates?
Thanks in advance and for your feedback.
I don't think that our method can be called best practice, but it has served us well.
We have several large databases for our application (20GB+), so maintaining local copies on each developer's computer has never really been an option, and even though we don't develop against the live database, we do need to develop against a database that is as close to the real thing as possible.
As a consequence we use a central web server as well, and keep a development branch of our subversion trunk on it. Generally we don't work on the same part of the system at once, but when we do need to do that, or someone is making a lot of substantial changes, we branch the trunk and create a new vhost on the dev server.
We also have a checkout of the code on the production servers, so after we're finished testing we simply do an svn update on the production servers. We've implemented a script that executes the update command on all servers over ssh. This is extremely convenient, since our code base is large and takes a lot of time to upload. Subversion will only copy the files that have actually been changed, so it's a lot faster.
This has worked really well for us, and the only thing to watch out for is making changes on the production servers directly (which of course is a no-no from the beginning) since it might cause conflicts when updating.
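The update-all-servers script mentioned above can be as simple as something like this (host names and paths are made up):

    # run `svn update` on every production server over ssh
    for host in web1.example.com web2.example.com; do
        ssh deploy@"$host" "svn update /var/www/app"
    done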
I never thought about having a copy of the repository on the server. After reading this, I thought it might be cool... However, updating the files directly in the live environment without testing is not a great idea.
You should always update a secondary environment that exactly matches the live one (web server + DB version, if any) and test there. If everything goes well, then put the live site under maintenance, update the files, and go live again.
So I wouldn't make the live site a copy of the repository, but you could do so with the test env. You'll save SSH + compressing time, plus you can check out any specific revision you'd like to test.
Capistrano is great. The default recipes are geared toward Rails and the documentation is spotty, but the mailing list is active, and getting it set up is pretty easy. Are you running Rails? It has some neat built-in stuff for Rails apps, but it is also used fairly frequently with other types of webapps.
There's also Webistrano, which is based on Capistrano but has a web front-end. Haven't used it myself. Another deployment system that seems to be gaining some traction, at least among Rails users, is Vlad the Deployer.