How to use/install a GitHub commit?

There is a package I want to use that is implemented based on fairseq toolkit. The package requirement says:
Please use an earlier commit of Apex - NVIDIA/apex#4a8c4ac
Even though I know how to install Apex, I'm not sure I understand what it means to use an earlier commit of a package, or how exactly to use one (e.g., how can I install a specific commit of a package)? Does it just mean a specific version of that package? And if so, how can I find that version from a commit?

Well, I figured out how to install a specific commit! Here is how in case anyone else is wondering:
$ git clone https://github.com/nvidia/apex
$ cd apex
$ git checkout <commit hash>
$ pip install ... # whatever install command
So for example, if there is a specific commit for a GitHub repo like the following (in my case, I was trying to use an earlier commit of Apex):
https://github.com/NVIDIA/apex/commit/4a8c4ac088b6f84a10569ee89db3a938b48922b4
After cloning the repo, you run:
git checkout 4a8c4ac088b6f84a10569ee89db3a938b48922b4
This command moves HEAD to that specific commit (you end up in a "detached HEAD" state, which is fine for installing). Then you install the package using whatever install command the project documents.
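If you just need to install (not develop against) that commit, pip can also install straight from it without a manual clone, provided the project builds with a plain pip install (Apex in particular documents extra build options, so check its README first):
$ pip install git+https://github.com/NVIDIA/apex@4a8c4ac088b6f84a10569ee89db3a938b48922b4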

Related

How to convert a github repository to local project component code?

There is a github repository that is no longer actively maintained. I want to use the code and move it into my project's components but that is tedious and not sure if that is the best approach.
I just want to bump the version of draftjs used by the repository.
Here is the repo and it uses draft js version 0.10.0
https://github.com/brijeshb42/medium-draft
My local project uses draft js version 0.11.7
This causes errors and incompatibility issues.
What is the best approach when a repository uses an outdated version of a dependency that conflicts with the version used by my local project?
Before forking and publishing your own version of that dependency to npm, you might consider using patch-package:
Patches created by patch-package are automatically and gracefully applied when you use npm(>=5) or yarn.
No more waiting around for pull requests to be merged and published. No more forking repos just to fix that one tiny thing preventing your app from working.
# fix a bug in one of your dependencies
vim node_modules/some-package/brokenFile.js
# run patch-package to create a .patch file
npx patch-package some-package
# commit the patch file to share the fix with your team
git add patches/some-package+3.14.15.patch
git commit -m "fix brokenFile.js in some-package"
In your case, you would be patching node_modules/medium-draft/package.json (the installed copy of brijeshb42/medium-draft) to bump its draft-js dependency.
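Applied to this case, the workflow might look like this, assuming medium-draft is installed as a regular npm dependency (the patch filename will reflect whatever version you have installed):
# bump the draft-js dependency in the installed copy
vim node_modules/medium-draft/package.json
npx patch-package medium-draft
# commit the generated patch; the version in the filename will match yours
git add patches/medium-draft+<version>.patch
git commit -m "bump draft-js in medium-draft"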

Where to place custom packages that are being developed in github?

I want to get started developing my own packages. I am also adding version control via Github. I mainly develop on my Mac and a Windows laptop, but there is potential for me to develop on other machines down the line. My IDE of choice is PyCharm. I need to figure out where to place my packages both on Github and on my local machines so that my packages are always in sync regardless of where I am developing. Help??
First, let's clarify that git is the version control system, and GitHub is a platform for hosting git repositories (there are many other platforms aside from GitHub). You use git commands to manage your code, and GitHub is where you store a copy of it.
By adding version control and putting a copy on GitHub, you've already taken the first step toward managing your code across different machines. All you need to do is make sure the code on GitHub is always the latest, maintained version.
Here's a sample workflow:
On machine 1 (Mac), clone a copy of the GitHub repo
Develop on machine 1
When you are satisfied with your changes, push your code from machine 1 to GitHub
On machine 2 (Windows), clone a copy of the GitHub repo
Develop on machine 2
When you are satisfied with your changes, push your code from machine 2 to GitHub
On machine 1, do a fetch to check for updates to the code
If there are updates, pull those changes to machine 1
Again, when done making changes, push them from machine 1 to GitHub
On machine 2 again, fetch and pull changes
Repeat this fetch-pull-push cycle on all machines
Basically, whichever machine you are on, always push your changes to the remote (GitHub) when you are done, so that the other machines can fetch and pull those changes and you can continue where you left off.
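In command form, one round of that cycle looks roughly like this (assuming a default branch named main):
# on the machine where you made changes
git add -A
git commit -m "describe your change"
git push origin main

# later, on another machine
git fetch origin        # check for updates
git pull origin main    # bring them in, then develop and push again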
UPDATE (based on comment):
Once you've got the workflow for your package's source code down, the next step is to package it like any regular Python package and install it into your site-packages (either directly for your system or, preferably, in a virtual environment).
I recommend taking a look at the Python docs on Packaging Python Projects, which use setuptools to make your package compatible with pip.
Here's a sample workflow:
git clone <mypackage@github.com>  # or git pull if you already cloned it before
cd mypackage
pip install -r requirements.txt
pip install -e .  # or: pip install --user -e .
That last step will install your package to your site-packages folder, like any other pip-compatible package (assuming you've set up your setup.py file properly). If you are using virtual environments, you'll have to activate the virtual env first, then install your package there.
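For the virtual environment case, a minimal sketch (the env name .venv is arbitrary):
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install -e .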
If you are not going to do any modification on the source code, and you just want to install the package on a specific machine, then you can also specify the Github URL to pip:
$ pip install -e git+https://git.repo/some_pkg.git#egg=SomeProject # from git
Lastly, if you are planning to upload this package to PyPi, check out the docs on Uploading the distribution archives. This just adds an extra step to your workflow of uploading your package to PyPi and then doing pip install from there next time.
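Per those docs, the extra step is roughly the following (it assumes the build and twine packages are installed):
python -m build                 # create the dist/ archives
python -m twine upload dist/*   # upload them to PyPI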

How to use git lfs with Visual Studio Team Services hosted build agents

I use git lfs to store the large files of my git repo. I then try to build this repo with hosted agents. My build is pretty simple. It has a single task: Execute PowerShell. In the invoked script, the first thing that I want to do is to fetch my lfs dependencies. I therefore have the following in my script:
& git lfs fetch
Unfortunately, my build fails with the following error:
2016-03-04T19:49:05.7021988Z ##[error]git: 'lfs' is not a git command. See 'git --help'.
2016-03-04T19:49:05.7031986Z ##[error]Did you mean this?
2016-03-04T19:49:05.7041987Z ##[error] flow
Since I can't install anything on hosted agents, how am I supposed to have git lfs available?
EDIT
In this issue, I am not talking about git lfs authentication problems as described here. I am strictly talking about the issue of calling git lfs.
Once you are able to call git lfs, look at this answer to solve the authentication problem.
Git LFS is now supported by default on the Hosted Build Controller, but you do need to enable it in your Get Sources step.
You get this error message because git-lfs isn't installed on the Hosted Build Agent by default.
And since you are using the Hosted Build Agent, it would be a little troublesome to install git-lfs via Chocolatey on it, as you don't have administrator permission. An alternative is to download the git-lfs binaries directly and check them into source control. Then you can invoke git-lfs.exe with an absolute path in your script.
Here are some more details around the solution provided by Eddie. git lfs is not a built-in command. It is a git custom command.
When you call git lfs, git.exe does not know about the lfs command. So it looks in the PATH environment variable and searches for a program named git-lfs.exe. Once found, it calls that program with the provided argument.
So calling git-lfs.exe pull is equivalent to calling git.exe lfs pull.
The suggested solution is therefore to download git-lfs.exe, add it to your git repo (it should obviously not be tracked by LFS itself), and call git-lfs.exe directly.
It is also possible to add the folder that contains git-lfs.exe to your path environment variable. This makes it possible to use commands like git.exe lfs pull as you usually do.
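In the PowerShell script the build runs, that could look roughly like this (the tools folder name is an assumption):
# assumes git-lfs.exe was committed to a "tools" folder at the repo root
$env:Path = "$PWD\tools;$env:Path"
& git lfs fetch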
If you are allowed to install software and have internet access during build you might be able to install git-lfs using the Chocolatey package in a cmd / PowerShell task prior to your git-lfs operation.
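For example (only viable if the agent permits installs and has internet access):
choco install git-lfs -y
# you may need to refresh the PATH (e.g. start a new shell) before git finds lfs
& git lfs fetch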

How to properly register a GitHub fork with Bower

A while back I had to use a jQuery plugin in my project. I needed some different functionality, so I rewrote the plugin and a few days back I published a fork on GitHub. I wanted to add the package to the Bower repository.
The forked repository
I added a bower.json file to the repository and registered the package with the usual "bower register" command.
The problem is, when I try to install my package, bower installs the original script and not the fork.
What I already tried:
At first I thought it's because I didn't make a release, so I fixed that part. But it didn't help.
I also tried to change the version number to the version number of the original script with no luck.
So maybe the bower.json file I wrote was not well written, right? My next attempt was using Bower to make a proper bower.json file for me using "bower init". No luck.
So what could I be doing wrong?
The GitHub help page defines a fork as a method to use someone else's project as a starting point for your own idea.
That was my intention, since I rewrote the plugin to be object oriented and added some functionality, but 80% of the code used is still from the original plugin and it didn't feel right to just make a new repository. Should I instead make a new repository, and will registering my repo with Bower work then?
What is the usual approach if you did some medium to major changes to a repository? Do you fork it or publish a new repo?
Do you still make a pull request even if the changes are bigger?
This worked for me:
Fork the repository
Clone on your disk
Increment the version number in bower.json (ex. 2.0.1)
Commit and push
Create a new version tag higher than the original repository's latest, e.g. git tag 2.0.1
Push the tag: git push --tags
bower install "https://github.com/myname/forkedrepo.git#2.0.1"
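Put together, the whole sequence looks roughly like this (repo name and version number are placeholders):
git clone https://github.com/myname/forkedrepo.git
cd forkedrepo
# edit bower.json and bump "version" to 2.0.1
git commit -am "Bump version to 2.0.1"
git tag 2.0.1
git push origin master --tags
bower install "https://github.com/myname/forkedrepo.git#2.0.1"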
You don't need to create a new repository. A fork will work fine.
But you can't take over someone else's registered package name with Bower. It does look like you've changed the name from onepage-scroll to onepage-scroll-extended, though.
If you want to figure out what Bower knows about your package:
Do: bower info onepage-scroll-extended
{
name: 'onepage-scroll-extended',
homepage: 'https://github.com/itd24/onepage-scroll-extended',
version: '1.1.1'
}
Available versions:
- 1.1.1
- 1.0.1
Here you can see that it does not have the full bower.json manifest information and the latest information that it has is for version 1.1.1 (not 1.1.3, your latest).
This is because you don't have a v1.1.3 tag in your repository's master branch. I can see a v1.1.1 and v1.2 tag, but no v1.1.3 tag. Create that tag and push it up to GitHub to enable you to bower install that new version.
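Concretely, something like:
git tag v1.1.3
git push origin v1.1.3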
You may also need to re-run the bower register command to tell it to pick up the latest manifest, although this should happen automatically (AFAIK). You don't include the bower register command that you ran; perhaps you used the wrong repo URL there. You should use something like:
bower register onepage-scroll-extended git@github.com:itd24/onepage-scroll-extended.git

Best practice for using one mercurial project in another

What are the best practices for using one mercurial project in another? I've got a django app that I'm working on, but I'm also using mercurial to version control a website that uses that app. I've looked at mercurial subrepositories, but apparently this is considered a "feature of last resort". Is there a good way of doing what I want to do, or do I just have to copy the code from my app into my website repo when I want to update to a new version of my app?
In your specific case I like to let pip handle my django application dependencies: http://guide.python-distribute.org/pip.html#installing-from-a-vcs
We have a requirements.txt in our "website" repo, and our deploy does a pip install --upgrade -r requirements.txt. That pulls the latest from the repo and installs it into the application's virtual env. This gives nice flexibility and separation while leaving the package management up to pip. With those VCS URLs in pip you can point to a specific tag or branch too, if you want different sites using different revisions of the same underlying repo.
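A requirements.txt entry along those lines might look like this (host, path, tag, and package name are placeholders):
# pin the Mercurial-hosted app to a specific tag
-e hg+https://hg.example.com/myapp@v1.2#egg=myapp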
pip also has an -e /path/to/project mode for pointing to an "editable" clone outside the website repo, which would work too, but I've not tried it.
That said, if you think subrepos fit your workflow better, by all means use them. They work just fine, but people often get hung up on the workflow constraints ("What do you mean I can't commit my parent repo without also committing in the subrepo?!").