Files from a Yeoman web app that need to be committed to SCM/Git (version control)

When we run "yo webapp" (assuming the webapp generator is installed), it scaffolds a project containing files relevant to Bower and Grunt, plus an app folder, whose purpose we all know.
My question is: out of this structure, which files need to be maintained in SCM? Should it be only the app directory, or the whole structure? (Assume there are no additional Grunt tasks or build file changes beyond the initial scaffolding.)

The Yeoman webapp generator produces a .gitignore file listing files that should not be committed to an SCM. It includes the following directories:
node_modules
dist
.tmp
.sass-cache
bower_components
test/bower_components
It is clear that .tmp and .sass-cache have no reason to be in the repo, as both are only temporary.
There is, however, an ongoing discussion about whether Bower (and, more rarely, Node) dependencies should be checked in. For most projects I recommend against it.
Please note that in either case one should never change packages directly in the bower_components or node_modules folder, as any change will be lost at the next bower install or npm install. A fork of the original project (either as an independent repo or as a folder in the project, e.g. lib) is a better idea; a follow-up pull request would then add a lot of karma :)
The dist folder with the build of the application may be committed, depending on your deployment method. There is a very good guide on deployment on the Yeoman site.

As a start, you should put everything into SCM except app/bower_components, test/bower_components and node_modules. All files under these directories come from public registries, either the npm or the Bower registry.
In this setup, whenever another developer checks out from SCM, they need to run two commands: npm install and bower install. What I typically do is create a file called install.sh (install.bat on Windows) with these two commands inside. That way, when you find that you need to run more commands for initialization, you can easily add them to this script, and new developers can just check out and run install.sh.
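For example, install.sh could start out as simply as this, growing as more initialization steps appear:
#!/bin/sh
# One-time project initialization after a fresh checkout.
npm install
bower install
# Add further initialization commands here as the project grows.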
In some cases, I have found that I need to make a small modification to a public library. In that case, I check that library inside bower_components into SCM as well. This is not common, but it happens.

Related

npm install and build of a forked GitHub repo

I'm using a module for my angular app called angular-translate. However, I've had to make a few small modifications to the source code to get everything working the way I'd like, and now I want to persist those changes on npm install. A colleague suggested that I fork the repo of the source code and point to my forked repo as a dependency, which I've tried in these ways, e.g.
npm install https://github.com/myRepo/angular-translate
npm install https://github.com/myRepo/angular-translate/archive/master.tar.gz
The first gives me a directory like this, with no build: just a package.json, .npmignore, and some markdown files:
-angular-translate
.npmignore
.nvmrc
CHANGELOG.md
package.json
etc
The second npm install gives me the full repo, but again I don't get a build like I do when I use the command npm install angular-translate. I've seen some discussion of running the prepublish script, but I'm not sure how to do this when installing all the modules. I've also tried publishing the fork as my own module to the npm registry, but again I get no build, and I'm not sure that's the right thing to do...
I apologise for my ignorance on the topic. I don't have a huge amount of experience with npm. Would love to get some feedback on this issue. It seems like it could be a common enough issue when modifications need to be made to a package's source code? Maybe there's a better solution?
Try npm install <ghUsername>/<repoName>, where <ghUsername> is your GitHub username (without the @) and <repoName> is the name of the repository. That should correctly install it. You will most likely want to use the --save or --save-dev flag with the install command to save the dependency in your package.json.
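With the fork from the question, that would be:
npm install myRepo/angular-translate --save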
If that isn't working correctly, check the contents of your .npmignore file.
Don't panic if the install command takes a long time; installing from a git repository is slower than installing from the npm registry.
Edit:
The problem in your case is that dist/ is not committed to the repo (since it is listed in the .gitignore), and that is where the actual code lives. dist/ is built from the files in src/ before the package is published to the npm registry, but it is never committed to the repo.
It's ugly, but in this case you will have to remove dist/ from the .gitignore and then run:
npm run build
git add .
git commit
git push
(Ensure that you have run npm install first)
You should then be able to install from GitHub.
There might be another way to do this using a prepare script, but I'm not sure if that's possible; I've never tried it. Edit: Cameron Tacklind has written an excellent answer detailing how to do this: https://stackoverflow.com/a/57829251/7127751
TL;DR use a prepare script
and don't forget package.json#files or .npmignore
Code published to npmjs.com is often not what's in the repository for the package. It is common to "compile" JavaScript source files into versions meant for general consumption in libraries. That's what's usually published to npmjs.com.
It is so common that it's a feature of npm to automatically run a "build" step before publishing (npm publish). This was originally called prepublish. It seems that npm thought it would be handy to also run the prepublish script on npm install, since that was the standard way to initialize a development environment.
This ended up leading to major confusion in the community. There are very long issues about this on GitHub.
In the end, in an effort to not change old behavior, they decided to add two more automatic scripts: prepublishOnly and prepare.
prepublishOnly does what you expect. It does not run on npm install. Many package maintainers just blindly switched to this.
But there was also the problem that people wanted to distribute versions of packages without depending on npmjs.com. Git repositories were the natural choice. However, it's common practice not to commit "compiled" files to git. That's what prepare was added to handle...
prepare is the correct way
If you have a repository with source files, but a "build" step is necessary to use it, prepare does exactly what you want in all cases (as of npm 4).
prepare: Run both BEFORE the package is packed and published, on local npm install without any arguments, and when installing git dependencies.
You can even put your build dependencies into devDependencies and they will be installed before prepare is executed.
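As an illustration, a minimal package.json using this approach might look like the following (the package name, build tool, and version numbers are assumptions, not taken from the original answer):
{
  "name": "my-package",
  "version": "1.0.0",
  "scripts": {
    "build": "babel src --out-dir dist",
    "prepare": "npm run build"
  },
  "devDependencies": {
    "babel-cli": "^6.26.0"
  }
}
With this in place, both npm publish and npm install from a git URL will run the build first, as described in the quoted script documentation above.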
Here is an example of a package of mine that uses this method.
Problems with .gitignore
There is one issue with this option that gets many people.
When preparing a dependency, npm and Yarn will keep only the files that are listed in the files section of package.json.
One might see that files defaults to all files being included and think they're done.
What is easily missed is that .npmignore mostly overrides the files directive and, if .npmignore does not exist, .gitignore is used instead.
So, if you have your built files listed in .gitignore, like a sane person, and don't do anything else, prepare will seem broken.
If you fix files to only include the built files or add an empty .npmignore, you're all set.
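For example, extending the sketch above, limiting the published files to the built output could look like this (assuming the build lands in dist/):
"files": [
  "dist"
]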
My recommendation
Set files (or, by inversion, .npmignore) such that the only files actually published are those needed by users of the published package. IMHO, there is no need to include uncompiled sources in published packages.
Original answer: https://stackoverflow.com/a/57503862/4612476
Update for those using npm 5:
As of npm@5, prepublish scripts are deprecated.
Use prepare for build steps and prepublishOnly for upload-only.
I found that adding "prepare": "npm run build" to scripts fixed all my problems.
Just use the command npm install git+https://git@github.com/myRepo/angular-translate.git. Thanks.
To piggyback off of @RyanZim's excellent answer, postinstall is definitely a valid option for this.
Either do one of the following:
Update the package.json in your forked repo to add a postinstall element to scripts; in there, run whatever you need to get the compiled output (preferred; a sketch follows this list).
Update your package.json, and add a postinstall that updates the necessary directory in node_modules.
If you've forked another person's repository, then it might be worth raising an issue to illustrate that installing their package through GitHub does not work, as it does not provide the necessary means to build the script. From there, they can either accept a PR to resolve this with a postinstall, or they can reject it and you can do #2.
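A minimal sketch of the first option, assuming the fork already defines a build script in its package.json:
"scripts": {
  "postinstall": "npm run build"
}
Note that, unlike prepare, postinstall runs on every install of the package, not only when installing from git.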
If you are using Yarn like me, imagine that you want to add a package like this:
yarn add ghasemikasra39/gridfs-easy --save
Here ghasemikasra39 is the username and gridfs-easy is the name of the repo.

yii2 vendor and some files are missing from GitHub

I have set up a new repo on GitHub and pushed the yii2 advanced template. Now I realize that some folders/files are missing from GitHub, like vendor and backend/web/index.php.
Does anyone have an idea why this is happening? I also checked my local Git setup, and the files are present there.
Check out the installation guide.
Running composer install is what creates the vendor folder, while running init creates those index.php files.
I have found that it's happening because of the .gitignore file. I removed it and it's working fine for me.
There's a simple idea behind these missing files: they are called ...-local.php because their content can vary for different developers or production conditions. All you have to do before uploading yii2 to GitHub is check the /environments directory; it includes templates for the local files, so after the yii2 project is copied from GitHub, they will be generated by ./yii init.
Step-by-step, what should be done:
Configure files in /environments/dev and /environments/prod (for production). Most likely, if you don't change the yii2 project file structure or touch any of the /config files, you don't need to adapt them.
Update /environments/index.php, if you have updated /environments files
Upload the project to GitHub
Clone the project and run composer install to install dependencies
And finally, run ./yii init from the root folder. You will see that the ...-local.php files are now generated in certain directories, exactly as they are configured in the environments. The full command sequence is shown below.
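Put together, a fresh checkout then boils down to the following commands (the repository URL is illustrative):
git clone https://github.com/youruser/yii2-project.git
cd yii2-project
composer install
./yii init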
More detailed information about this topic: https://www.yiiframework.com/extension/yiisoft/yii2-app-advanced/doc/guide/2.0/en/structure-environments

Should I include configure and Makefile in a GitHub repository?

We recently moved from Subversion to Git, and then to GitHub, for several open source projects. GitHub was nice in that it provided a lot of functionality. One of the things I particularly like is the ability to download tags as zip or .tar.gz files.
Unfortunately, GitHub recently discontinued downloads. That shouldn't be a problem because of the ability to download tags. However, in the past we have not put a Makefile, configure script, or any other autoconf-generated files into the repo, because they cause lots of conflicts when people merge.
What's the proper way to handle this?
Should I put autoconf and automake-generated files in the repo so people can download tags directly?
Or should there be a bootstrap.sh file and people are told to run that?
Or should I just do a make dist and put that into the repo?
Thanks
Publish the output of make dist via GitHub Releases
Your first option—putting the Autoconf- and Automake-generated files into the repository—is not a good idea. It's almost never beneficial to store generated files in source control. In this case, it's going to pollute your history with a lot of unnecessary and potentially conflicting commits, particularly if not all your contributors are using the same version of Autotools. Your third option—checking in the output of make dist—is a bad idea for exactly the same reasons as the first option.
Your second option—adding a "bootstrap" script that calls Autoconf and Automake to generate the configure scripts—is also a bad idea. This defeats the entire purpose of Autotools, which is to make your source portable across systems—including those for which Autotools is not available! (Consider what would happen if someone wanted to build and install your software on a machine on which they don't have root access, and where the GNU Build System is not installed. A bootstrap script is not going to help them because they'd first need to make a local installation of Autotools and possibly all its dependencies.)
The proper way of releasing code that uses Autotools is to produce a tarball with make dist (or better yet, make distcheck, since this will also run tests and do other sanity checks), and then publish this tarball somewhere other than the source repository.
Your original question, from April 2013, states that GitHub discontinued download pages. However, in July 2013, GitHub added a "Releases" feature that not only pre-packages your source tags, but also allows you to attach arbitrary files to each release. So on GitHub, the Releases page is where you should publish your make dist tarballs (and preferably also the detached GnuPG signatures of them).
Basic steps
When you are ready to make a release, tag it and push the tag to GitHub:
$ git tag 1.0 # Also use -s if desired
$ git push --tags
Use your Makefile to produce a tarball:
$ make dist # Alternatively, 'make distcheck'
Visit the GitHub page for your project and follow the "releases" link.
You will be taken to the Releases page for your project. The first time you visit, all you will see is a list of tags and automatically produced tarballs from the source tree.
Press the "Draft a new release" button.
You will then be presented with a form in which you should fill in the Git tag associated with the release and an optional title and description. Below this there is also a file selector labelled "Attach binaries by dropping them here or selecting them". Use this to upload the tarball you created in Step 2 (and maybe also a detached GnuPG signature of it).
When you're done, press the "Publish release" button.
Your project's Releases page will now display the release, including prominent download links for the attached files.
If you don't want to use GitHub Releases, then as pointed out in a previous answer, you should upload the tarballs somewhere else, such as your own website or FTP site. Add a link to this repository from your project's README.md so that users can find it.
The second is better: you want any user of your repo to be up and running as fast as possible, re-generating what he/she needs in order to build your program.
Since Git is very much a version control for text (as opposed to an artifact repo like Nexus), providing a way to generate the final binary is the way to go.
When you cut a release, upload the result of make distcheck to your project's download page: it's a makefile target that builds the tarball and verifies that it installs, uninstalls, passes tests, and passes other sanity checks. GitHub being wrong-headed isn't an excuse: create a tree like this in your repo:
/
/source
/source/configure.ac
/source/Makefile.am
/source/...
/releases
/releases/foo-0.1.tar.gz
/releases/...
For developers, you should not have generated files in source control. Many modern autotooled projects bootstrap fine off an invocation of autoreconf -i.
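For a developer working from a fresh clone, the typical bootstrap then looks like this (assuming a standard Autotools layout):
$ autoreconf -i    # regenerate configure, Makefile.in, etc. from configure.ac and Makefile.am
$ ./configure
$ make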

What purpose does 'package restore' serve?

I understand that NuGet's package restore downloads and 'installs' the various required packages before building a project, but I can't work out what purpose this actually serves.
As far as I can tell, the 'installation' of a package during package restore isn't the same as a package's actual installation. For example, if you do the following:
install the jQuery package (NOTE that this adds jQuery script files to your project's 'Scripts' directory)
delete the added jQuery script files
delete the 'packages' directory (steps 2 & 3 simulate the state on a build machine or another dev's machine)
do a build (triggering a package restore)
At this point the build states:
2> Successfully installed 'jQuery 1.9.1'.
However, the jQuery package's script files are NOT added to the 'Scripts' folder, and the files are NOT added to the project.
This means that you have to check these files into source control anyway.
That also means that when you update this package, you have to manage adding/removing the new/old files yourself (since different, versioned filenames are used); otherwise your 'Scripts' folder fills up with an endless history of versioned script files.
So, if you have to check everything in anyway, and you have to manually manage adding and removing files when updating, what exactly is the benefit of restoring the package on build? What purpose does this serve?
More to the point, why doesn't this serve the obvious purpose: automatically adding the package's files to the project?
Using NuGet Without Committing Packages to Source Control discusses the reason behind package restore.
Package restore means you do not have to check the packages folder into source control. Once enabled for your project, it will download the packages and put them back into the packages folder at build time if they are missing. It will not, as you have found, add any package files to your project. In the case of jQuery, all the files from the NuGet package are content files that get added to your project, so they still have to be committed. Other NuGet packages, however, include one or more binary assemblies, which are referenced from the packages folder and are exactly what package restore brings back.
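For reference, with a sufficiently recent NuGet client (2.7+), the restore step can also be run explicitly on a build machine before compiling (the solution name here is illustrative):
nuget restore MySolution.sln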

Keeping custom build configuration files in a git repository

I've got a project in a git repository that uses some custom (and so far unversioned) setup scripts for the build environment, etc. I'd like to put these under version control (hopefully git) but keep them versioned separately from the project itself, while still living in the base directory of the project. I've considered options like local branches, but these seem to have the problem that switching back to master (or any other "real" branch) will throw away the working copies of the setup scripts.
I'm on Windows using msysgit so I've got a few tools to play with; does anyone have a recommendation or solution?
If you really need them separate from your main git repo while still living directly within it, you could try:
creating a new repo with those scripts within it
and:
adding that new repo as a submodule to your repo. Except:
a/ those scripts won't live directly in the base directory, but in a subfolder representing the submodule
b/ you would of course need to not publish (push) that new repo, so that others cloning your main repo do not get those setup files
or:
merging that new repo into your main repo (with the subtree project), but:
you would need to split your project back to get rid of those files
for a project with a large history and frequent pushes, that step (the split) can be long and cumbersome.
I would consider a simpler solution, involving some evolution to your current setup files:
a private repo (as in "not pushed") with those setup files
environment variables holding the path of your main git repo, so that your setup files (which would not be directly within the base directory of said main repo) can do their job in the right directory (for instance, by beginning with a 'cd right_main_git_repo_dir'), as sketched below.
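As a sketch, such a setup script could start like this (the variable name MAIN_REPO_DIR is illustrative, not from the answer):
#!/bin/sh
# Run all setup work inside the main repository, wherever it is checked out.
cd "$MAIN_REPO_DIR" || exit 1
# ... actual setup steps for the build environment go here ...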
I want to share an additional solution and some samples from which to start.
I've had a similar problem in attempting to build Mozilla Firefox with Buildbot: I needed to have some files in the root folder (namely the .mozconfig file and some helper scripts) and I wanted to version them separately.
My solution is as follows:
check out the Firefox code from the Mercurial repository;
check out an additional repository with the extra files I need;
before starting the build, copy these files to the folder with the Firefox code (see the sketch below).
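In shell terms, those three steps amount to something like this (the second repository URL and the file names are illustrative):
hg clone https://hg.mozilla.org/mozilla-central firefox-src
git clone https://github.com/example/buildscripts-mozilla-central.git
cp buildscripts-mozilla-central/.mozconfig firefox-src/
cp buildscripts-mozilla-central/*.sh firefox-src/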
This approach is implemented in the following repositories:
buildconfig-mozilla-central: it contains the Buildbot configuration, which
pulls both repositories
copies the files from the scripts repository
and starts the build;
buildscripts-mozilla-central: the repository with the build configuration and helper scripts.
Please note that the code might not be well factored (for example, the paths), but it should be a good starting point.
This procedure is tailored for Firefox, but it can be applied to any repository.