So I'm trying to use this tool here, https://github.com/joeyh/github-backup, which you run from the command line as github-backup.
My problem is that my requests keep getting rate-limited because I haven't authenticated with the GitHub API. Is there a way I can authenticate and then call this program? As far as I can tell, the program itself doesn't let you pass in authentication credentials.
From their README: "github-backup does not log into git, so it cannot backup private repositories."
That said, you could use a different tool that does allow authentication. I forked and updated an older (Python, sorry it isn't Haskell) script to use version 3 of the GitHub API, and it supports authentication. You can clone the fork here. If you set the git config variables github.user and github.token (i.e., git config ...), the script will handily pick them up. It's not the best user experience, but it works.
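For example (the values below are placeholders; these are the two config keys the script reads):

    git config --global github.user your-github-username
    git config --global github.token your-api-token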
I have a public repository containing an Ansible role. The role uses the GitHub API to look up the most recent release of a given repository, and I then use that metadata to download the latest release binary for the project.
Unfortunately, I'm hitting GitHub's API rate-limit when running my tests in Travis and occasionally on my local machine. Since this is a public-facing project, what are my options for overcoming this rate limit?
I could use some kind of secret management system in Ansible or expose the value via Travis environment variables, but is there a standard practice for dealing with these kinds of scenarios for public code?
Unauthenticated requests only get 60/hour. Authenticated requests get 5000/hour.
To authenticate, generate a personal API access token for use by the project. Put it either in an encrypted Travis environment variable or in some other encrypted secrets store (for example, Rails has built-in encrypted credentials). Use that token to access the API.
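With curl, an authenticated request looks roughly like this (the repo path and the GITHUB_TOKEN variable name are just placeholders):

    # Authenticated requests get the 5000/hour limit instead of 60/hour
    curl -H "Authorization: token $GITHUB_TOKEN" \
         https://api.github.com/repos/OWNER/REPO/releases/latest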
Make a separate GitHub account for the project and use an API token for that account. This avoids sharing its rate limit with anyone else.
Use Git commands on a local clone where possible. For example, if you want to look up a commit, clone the repository and use normal Git commands instead of going through the API. Cache the clones and git fetch periodically to keep them up to date.
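Something along these lines (the repository and commit are placeholders):

    git clone https://github.com/OWNER/REPO.git repo-cache   # one-time, kept as a cache
    cd repo-cache
    git fetch --all            # run periodically to keep the cache fresh
    git show <commit-sha>      # inspect the commit without an API call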
Finally, make use of conditional requests. These use HTTP caching headers (ETag / If-None-Match) so you can safely reuse cached responses, and a 304 Not Modified reply does not count against your rate limit. A good GitHub API library should have an option for caching.
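Roughly, a conditional request looks like this (the ETag value is only a placeholder; use whatever the previous response returned):

    # First request: note the ETag header in the response
    curl -i https://api.github.com/repos/OWNER/REPO/releases/latest
    # Later requests: send the ETag back; a 304 Not Modified response
    # does not count against your rate limit
    curl -i -H 'If-None-Match: "644b5b0155e6404a9cc4bd9d8b1ae730"' \
         https://api.github.com/repos/OWNER/REPO/releases/latest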
My department wants to teach students who are new to coding to use Git with GitHub.
git push will by default ask for remote credentials every time. Furthermore, git config can be used without the --global option to store project-specific user.name and user.email [1]. A typical question is "How can I store my credentials so I don't have to log in each time I push?", but for us, having to log in each time is the desired behavior.
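For example, run inside the project directory (the names are placeholders):

    # Without --global, these settings apply only to this repository
    git config user.name "Student Name"
    git config user.email "student@example.edu"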
Since these students are new to coding, we would prefer to use a graphical client like GitHub Desktop. The Git GUIs make it easy to store credentials, but GitHub Desktop seems to want to store only one set of credentials, and it also notes that storing name and email "will change your global gitconfig". The problem is that we are working in a kiosk-style computer lab, so any change to software configuration persists for all users, including GitHub account credentials and global config settings.
Therefore, can we use GitHub Desktop, or can you recommend another free Git GUI that could accomplish one of the following:
Does not save GitHub credentials, but asks for them on each push or other interaction with the remote; or
Allows multiple GitHub accounts separated into distinct, password-protected profiles.
If you feel this question is on the wrong track and want to propose a different kind of solution than I am considering, please let me know.
Background
We intend for students to be using this in a kiosk environment, where students keep all their work (project files) on external media.
I've been searching the web for several hours now but couldn't get around this:
Is there an easy way to deploy a private repository from GitHub to a staging/development server on each push (or at least manually)? Ideally, only the FTP credentials of the development server would be needed for this.
I found this: How can I automatically deploy my app after a git push (GitHub and node.js)? But the "tutorial" in the top answer stops at the point of what exactly to put into build.sh. And what does the development server need installed for this? SSH, Git, Ruby? Maybe this sounds stupid to you, or my thinking is wrong, but nowhere on the net did I find an answer to this.
The problem is that, most of the time, the server to which the contents of the master branch should be deployed is a shared hosting server, where you don't always have SSH, Git, Python, Ruby, etc., which most solutions for deploying from GitHub seem to rely on... :/
http://beanstalkapp.com/ is really great at this: you just enter FTP credentials and it deploys automatically or manually for the chosen repositories and branches. So I wondered why I couldn't find a similarly easy way to deploy from GitHub?
Thank you very much in advance!
Jonas
It isn't really clear what type of project you have, but here are a couple of ideas.
If your code is written in a compiled language, then you could:
Have a Jenkins server as mentioned in the other comment
Write a simple bash script that does a git pull and a build, and run it from a cron job (see the sketch after this list).
Use an automation framework like Chef or Puppet which would automatically keep the compiled binary up to date.
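A minimal sketch of such a pull-and-build script, assuming the checkout lives in /srv/myapp and builds with make (both are assumptions; adapt them to your project):

    #!/bin/sh
    # pull-and-build.sh: fetch the latest master and rebuild
    cd /srv/myapp || exit 1
    git pull origin master
    make

    # Example crontab entry to run it every five minutes:
    # */5 * * * * /usr/local/bin/pull-and-build.sh >> /var/log/pull-and-build.log 2>&1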
If your code is an interpreted language (like HTML & JavaScript), then you could:
Use vagrant for local testing. The biggest reason is that changes are live on your local system. It only takes a git push on your machine and a git pull on the production server to make your changes live globally.
Your best bet is probably going to be #2.
I mostly do small projects as part of my research at the university. I have been using our SVN server, and have also played around with Mercurial in connection with SourceForge.
I am wondering if running Mercurial or some other kind of version control on my home server would make sense. The SVN server I use at work is behind the university firewalls, and between the building's IT department and the IT person responsible in our department, I find it too much of a hassle to start new projects on the server and to code when I am at home. I have a Drobo FS (NAS) at home which I could imagine using to run a version control server, so that I can easily reach my code wherever I happen to be, without having to put it on a third-party server.
What are the pros/cons of this approach compared to getting an account at a project hosting site with support for private projects? Is it feasible? If so, would it imply a significant maintenance workload?
The pros are that you are in full control of your server:
you can set it up any way you want it
no one else has access to your source/project
The cons are that you are the only one responsible: you have to
ensure the proper setup
do maintenance
perform upgrades
ensure protection against power outages
ensure adequate security measures
ensure regular backups
etc.
Of course you should do it as soon as you have projects you don't want to put on servers like GitHub.
Most small private teams have a source server; there's no reason not to have one. For example, gitolite is easy to install and use (I don't know about Mercurial, but I think there is an easy-to-install solution for it too, probably even easier).
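If you go the gitolite route, the install is roughly the following (run as a dedicated git user on the server; this is only a sketch, so check gitolite's own documentation for the exact steps):

    # On the server, as the dedicated "git" user
    git clone https://github.com/sitaramc/gitolite
    mkdir -p "$HOME/bin"
    gitolite/install -to "$HOME/bin"
    # yourname.pub is the SSH public key of the person administering gitolite
    "$HOME/bin/gitolite" setup -pk yourname.pub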
A side effect is that you could use something a little more modern than SVN, for example a decentralized VCS that you use at home and synchronize with your server (with Mercurial and Git you don't need to go through a server for every operation: just set up a local repository and push to your server from time to time).
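With Git, for instance, the day-to-day flow could look like this (the NAS path is only an assumption):

    # Work entirely locally...
    git init myproject && cd myproject
    # ...commit as usual, then sync with the home server from time to time:
    git remote add home ssh://user@home-nas/volumes/git/myproject.git
    git push home master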
Whenever you have distributed development (either because of a team across different geographical sites, or because you develop from different sites), a DVCS makes sense.
Don't forget that, within one site, if your team members have access to the git/mercurial repo filesystem (i.e. the shared path of the repo), you don't even need a server at all. These DVCSs support filesystem access (albeit without authentication or authorization), aka the local protocol.
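With Git, cloning over the local protocol is simply (the shared path is a placeholder):

    # No server process involved, only a shared filesystem path
    git clone /mnt/shared/repos/myproject.git
    # the same thing, spelled out explicitly:
    git clone file:///mnt/shared/repos/myproject.git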
You can also share your project across sites with an external service like BitBucket (supporting both public and private projects, for Git or Mercurial)
If you have write access to the university network (through a USB key, for instance), you don't even need access to that external service (BitBucket could be blocked; it wouldn't matter).
A git bundle allows you to export a git repo as a single file, from which you can pull as if it were a repo.
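A rough sketch of the bundle workflow (file names and branch are placeholders):

    # At the university: export the repo (here, the master branch) into one file
    git bundle create myproject.bundle master
    # Copy myproject.bundle to the USB key, then at home:
    git clone myproject.bundle -b master myproject     # first time
    git pull /path/to/myproject.bundle master          # later updates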
So you have various options for accessing and managing a repo from different sites, without having to register with a centralized server (like your SVN) that you can't reach from every site (like from home).
I have been trying a lot, but I am not able to work out how version control would fit my scenario.
I have a VPS server where I host PHP sites. Users have home directories under /home/users.
Currently, users edit files via FTP and I have no control over what they do. I want to set up a version control system on the VPS, but I don't know how to start. I mean:
I will explain what I want; I may be wrong, but please correct me.
1. How can I install a VCS on my VPS server so that all the directories in /home/users are version controlled? I don't know whether that's possible. I want the final destination (the repo's working copy) to be /home/user/public_html, so that when a user commits, my live site changes. I don't know whether a VCS works that way.
2. How will my client computers connect to that VCS server?
3. Is it possible to have version control for only one user, i.e. /home/user1/public_html, and not for the others?
4. Users will still have their FTP details; can't they change files via FTP even if I use a VCS?
Please clear up my doubts; I really want to learn about VCS systems.
Yes, it should be feasible. Expect to store some extra data, as the whole history will be kept plus a separate checked-out copy of the current version.
You have to decide which version control system you want to use. The most common options are:
Subversion
Git
Mercurial
Bazaar
If you or your users already have experience with one of them, then that's probably the best choice.
You want to:
1. Install the version control system of your choice and create a post-commit hook that checks each new version out into the target directories (see the hook sketch after this list).
2. Clients commit into the repository. All of these systems support access through restricted SSH (users log in with a public key, and the key is set up in .ssh/authorized_keys to allow only one particular command). Some also offer an HTTP(S)-based method (a special Apache module for Subversion, CGI scripts for Mercurial, Bazaar and Git).
3. Yes; the hook script checks out whatever you tell it to. You can implement it to check out for all users, listed users, users in a group, whatever you need.
4. Turn the FTP server off.
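For Git, the equivalent of that hook is post-receive in the bare repository; a minimal sketch (the paths are assumptions):

    #!/bin/sh
    # hooks/post-receive in the bare repo, e.g. /srv/git/user1.git
    # Check the latest master out into the user's public_html on every push
    GIT_WORK_TREE=/home/user1/public_html git checkout -f master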
Usually the workflow is that you have a repository holding all the revisions and changes. It uses a special format; there is no point in accessing those files directly. The repo is typically accessed through a WebDAV interface (running as an Apache module) or through a standalone server (with its own protocol).
Users commit their changes to the repo, and can then export the latest revision (or one of their choice) to their publicly accessible *public_html* directory. This involves them interacting with the VCS and knowing (and caring) about it.
A simpler setup is for *public_html* to contain a working copy that they interact with through conventional FTP. (You have to make sure that the VCS's own files, for example the .svn folders, cannot be accessed by the general public.) This way you can expose the VCS functions (basically commit and rollback) to your users through a web interface (you write a small PHP script that does the commit and update for them).
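Whatever that wrapper script is written in, under the hood it essentially runs commands like these in the user's working copy (a sketch assuming Subversion; the paths and revision number are placeholders):

    cd /home/user1/public_html
    svn add --force .                       # pick up newly uploaded files
    svn commit -m "snapshot of FTP upload"  # the "commit" button
    svn update -r 1234                      # the "rollback" button: back to revision 1234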
Incremental backups: a completely different story
As I understand it, you probably need something more like incremental backups, for example with rsync. Each time a user closes an FTP connection you can trigger an rsync backup. It has flexible options; you can keep all the changes for the last X days, or the last X FTP sessions, so the user can roll back after an accidental upload. (It can be used with remote or local storage for the backups.)
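A common pattern for this is hard-linked snapshots with --link-dest; a sketch (all paths are assumptions):

    # Snapshot user1's site; unchanged files are hard-linked to the previous
    # snapshot, so each run only stores what actually changed
    rsync -a --delete --link-dest=/backups/user1/latest \
          /home/user1/public_html/ "/backups/user1/$(date +%F_%H%M%S)/"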
A VCS (version control system) is just a class of software: you need to select one before you can implement it. In your case you probably want Subversion, or one of the DVCSs (distributed version control systems), i.e. Git or Mercurial.
It sounds like what you want is some kind of automated deployment system for your websites, which is certainly possible.
Disabling FTP is easy: simply stop the FTP server from running. FTP is insecure, and FTP servers are often dangerous in themselves.
Have a look at how Branchable works. They use a specific web framework (ikiwiki), but the underlying principle of keeping the websites in version control (git) is the same, and all the software they use is open source, including the scripts that bind it all together, so you can see how it works.