Download latest GitHub release

I'd like to have a "Download Latest Version" button on my website that links to the latest release (stored at GitHub Releases). I tried to create a release tag named "latest", but it became complicated when I tried to publish a new release (confusion with tag creation dates, tag interchanging, etc.). Updating download links on my website manually is also a time-consuming and tedious task. The only way I can see is to redirect all download buttons to some HTML page, which in turn redirects to the actual latest release.
Note that my website is hosted on GitHub Pages (static hosting), so I simply can't use server-side scripting to generate links. Any ideas?

You don't need any scripting to generate a download link for the latest release. Simply use this format:
https://github.com/:owner/:repo/zipball/:branch
Examples:
https://github.com/webix-hub/tracker/zipball/master
https://github.com/iDoRecall/selection-menu/zipball/gh-pages
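The same link also works from a script or shell. For example, a quick sketch using wget and the webix-hub/tracker repo from above:
wget https://github.com/webix-hub/tracker/zipball/master -O tracker-latest.zip
# -O gives the download a predictable local filename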
If for some reason you want a link to the latest release download, including its version number, you can get that from the "get the latest release" API:
GET /repos/:owner/:repo/releases/latest
Example:
$.get('https://api.github.com/repos/idorecall/selection-menu/releases/latest', function (data) {
    $('#result').attr('href', data.zipball_url);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<a id="result">Download latest release (.ZIP)</a>

GitHub now provides a "Latest release" button on the release page of a project, once you have created your first release.
In the example you gave, this button links to https://github.com/reactiveui/ReactiveUI/releases/latest
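If you need the concrete URL that this redirect resolves to (for instance in a script), you can let curl follow it and print the final location; a sketch, assuming curl is available:
# Follow the /releases/latest redirect and print where it lands
curl -sIL -o /dev/null -w '%{url_effective}\n' https://github.com/reactiveui/ReactiveUI/releases/latest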

You can use the following, where:
${Organization} is the GitHub user or organization
${Repository} is the repository name
curl -L https://api.github.com/repos/${Organization}/${Repository}/tarball > ${Repository}.tar.gz
The top level directory in the .tar.gz file has the sha hash of the commit in the directory name which can be a problem if you need an automated way to change into the resulting directory and do something.
The method below will strip this out, and leave the files in a folder with a predictable name.
mkdir ${Repository}
curl -L https://api.github.com/repos/${Organization}/${Repository}/tarball | tar -zxv -C ${Repository} --strip-components=1

Since February 18th, 2015, the GitHub V3 releases API has a "get the latest release" endpoint:
GET /repos/:owner/:repo/releases/latest
See also "Linking to releases".
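For example, a quick way to read just the latest tag name from a shell (a sketch, assuming jq is installed):
curl -s https://api.github.com/repos/git-for-windows/git/releases/latest | jq -r .tag_name
# prints something like v2.35.1.windows.2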
Still, the name of the asset can be tricky.
Git-for-Windows, for instance, requires a command like:
curl -IkLs -o NUL -w %{url_effective} \
https://github.com/git-for-windows/git/releases/latest|\
grep -o "[^/]*$"| sed "s/v//g"|\
xargs -I T echo \
https://github.com/git-for-windows/git/releases/download/vT/PortableGit-T-64-bit.7z.exe \
-o PortableGit-T-64-bit.7z.exe| \
sed "s/.windows.1-64/-64/g"|sed "s/.windows.\(.\)-64/.\1-64/g"|\
xargs curl -kL
The first three lines extract the latest version (2.35.1.windows.2 at the time of writing).
The rest builds the right URL:
https://github.com/git-for-windows/git/releases/download/
v2.35.1.windows.2/PortableGit-2.35.1.2-64-bit.7z.exe
^^^^^^^^^^^^^^^^^ ^^^^^^^^^

Maybe you could use some client-side scripting and dynamically generate the target of the link by invoking the GitHub API, through some jQuery magic?
The Releases API exposes a way to retrieve the list of all the releases of a repository. For instance, this link returns a JSON-formatted list of all the releases of the ReactiveUI project.
Extracting the first one would return the latest release.
Within this payload:
The html_url attribute holds the first part of the URL to build (i.e. https://github.com/{owner}/{repository}/releases/{version}).
The assets array lists the downloadable archives, and each asset bears a name attribute.
Building the target download URL is only a few string operations away:
Insert the download/ keyword between the releases/ segment of the html_url and the version number
Append the name of the asset to download
The resulting URL will have the following format: https://github.com/{owner}/{repository}/releases/download/{version}/name_of_asset
For instance, for the JSON payload from the ReactiveUI link above, we've got html_url: "https://github.com/reactiveui/ReactiveUI/releases/5.99.0" and one asset with name: "ReactiveUI.6.0.Preview.1.zip".
As such, the download URL is https://github.com/reactiveui/ReactiveUI/releases/download/5.99.0/ReactiveUI.6.0.Preview.1.zip
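Those string operations translate directly to a shell one-liner; here is a sketch using curl, jq and sed (the ReactiveUI repo is just the example from above):
latest=$(curl -s https://api.github.com/repos/reactiveui/ReactiveUI/releases/latest)
html_url=$(echo "$latest" | jq -r .html_url)
asset=$(echo "$latest" | jq -r '.assets[0].name')
# insert "download/" after the "releases/" segment, then append the asset name
echo "$(echo "$html_url" | sed 's#/releases/#/releases/download/#')/$asset"
Note that each asset in the payload also carries a browser_download_url attribute that already contains the finished link, so in practice you can skip the string assembly entirely.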

If you are using PHP, try the following code:
function getLatestTagUrl($repository, $default = 'master') {
    $file = @json_decode(@file_get_contents("https://api.github.com/repos/$repository/tags", false,
        stream_context_create(['http' => ['header' => "User-Agent: Vestibulum\r\n"]])
    ));
    return sprintf("https://github.com/$repository/archive/%s.zip", $file ? reset($file)->name : $default);
}
Function usage example
echo '<a href="' . getLatestTagUrl('owner/repo') . '">Download</a>';

As I didn't see it mentioned here, and it was quite helpful for me while running continuous integration tests: this one-liner, which only requires curl, will search the GitHub repo's releases and download the latest version
https://gist.github.com/steinwaywhw/a4cd19cda655b8249d908261a62687f8
I use it to run PHPSTan on our repository using the following script
https://gist.github.com/rvanlaak/7491f2c4f0c456a93f90e31774300b62

If you are trying to download from any Linux (even old or tiny versions), or from a bash script, then the fail-safe way is to use this command:
wget https://api.github.com/repos/$OWNER/$REPO/releases/latest -O - | awk -F \" -v RS="," '/browser_download_url/ {print $(NF-1)}' | xargs wget
Do not forget to replace $OWNER and $REPO with the right owner and repository names. The command downloads a JSON page with the data of the latest release, then awk extracts the value of the browser_download_url key.
If you are on a really old Linux or a tiny embedded system with a small wget, the download name can be a problem. In that case you can always use the ultra-reliable:
URL=$(wget https://api.github.com/repos/$OWNER/$REPO/releases/latest -O - | awk -F \" -v RS="," '/browser_download_url/ {print $(NF-1)}'); wget $URL -O $(basename "$URL")

As noted by @Dan Dascalescu in a comment to the accepted answer, some projects (roughly 30%) do not bother to publish formal releases, so neither the "Latest release" button nor the /releases/latest API call would return useful data.
To reliably fetch the latest release for a GitHub project, you can use lastversion.
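For example (a sketch; lastversion is a Python CLI, and its exact flags may differ between versions, so check its README):
pip install lastversion
lastversion apache/incubator-pagespeed-ngx
# prints the latest version it can find, whether or not a formal release was filed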

Related

How to get all the artifacts list from Azure feed

How can I get the list of all artifacts from an Azure feed?
The command below works fine, but returns only the first 1000 artifacts:
@ReposList = `curl -u username\@abc.com:key -X GET "https://feeds.dev.azure.com/abc/ProjectName/_apis/packaging/Feeds/FeedName/packages?api-version=5.1-preview.1"`;
But how can we get the rest of the artifacts as well?
Can someone help with this?
Take a look at the documentation for Artifact Details - Get Packages: https://learn.microsoft.com/en-us/rest/api/azure/devops/artifacts/artifact-details/get-packages?view=azure-devops-rest-6.0.
There are a couple query parameters you can try passing to get all the artifacts.
$top is used in order to:
Get the top N packages (or package versions where getTopPackageVersions=true)
You can try setting this to an arbitrarily high number and see if that returns the total number of artifacts you expected.
There is also includeAllVersions which works in the following way:
True to return all versions of the package in the response. Default is false (latest version only).
I suggest using one (or both) and see if that helps resolve your situation.
Something like: curl -u username\@abc.com:key -X GET 'https://feeds.dev.azure.com/abc/ProjectName/_apis/packaging/Feeds/FeedName/packages?$top=5000&includeAllVersions=true&api-version=5.1-preview.1' (note the single quotes, so the shell does not expand $top).
For PowerShell, it looks like:
$UriFeedPackageslist = "https://feeds.dev.azure.com/$orgName/$projectName/_apis/packaging/Feeds/$feed" + '/packages?$top=30000&api-version=7.0'

How to extract the list of all repositories in Stash or Bitbucket?

I need to extract the list of all repos under all projects in Bitbucket. Is there a REST API for the same? I couldn't find one.
I have both on-premise and cloud Bitbucket.
Clone ALL Projects & Repositories for a given stash url
#!/usr/bin/python
#
# @author Jason LeMonier
#
# Clone ALL Projects & Repositories for a given stash url
#
# Loop through all projects: [P1, P2, ...]
#   P1 > for each project make a directory with the key "P1"
#   Then clone every repository inside of directory P1
#   Back up a directory, create P2, ...
#
# Added ACTION_FLAG bit so the same logic can run fetch --all on every repository and/or clone.

import sys
import os
import stashy

ACTION_FLAG = 1  # Bit: +1=Clone, +2=fetch --all

url  = os.environ["STASH_URL"]   # "https://mystash.com/stash"
user = os.environ["STASH_USER"]  # "joedoe"
pwd  = os.environ["STASH_PWD"]   # "Yay123"

stash = stashy.connect(url, user, pwd)

def mkdir(xdir):
    if not os.path.exists(xdir):
        os.makedirs(xdir)

def run_cmd(cmd):
    print("Directory cwd: %s " % (os.getcwd()))
    print("Running Command: \n %s " % (cmd))
    os.system(cmd)

start_dir = os.getcwd()

for project in stash.projects:
    pk = project_key = project["key"]
    mkdir(pk)
    os.chdir(pk)

    for repo in stash.projects[project_key].repos.list():
        for url in repo["links"]["clone"]:
            href = url["href"]
            repo_dir = href.split("/")[-1].split(".")[0]

            if url["name"] == "http":
                print("  url.href: %s" % href)  # e.g. https://joedoe@mystash.com/stash/scm/app/ae.git
                print("Directory cwd: %s  Project: %s" % (os.getcwd(), pk))

                if ACTION_FLAG & 1 > 0:
                    if not os.path.exists(repo_dir):
                        run_cmd("git clone %s" % url["href"])
                    else:
                        print("Directory: %s/%s exists already.  Skipping clone." % (os.getcwd(), repo_dir))

                if ACTION_FLAG & 2 > 0:
                    # chdir into the repo directory ("ae" above), fetch, chdir back
                    cur_dir = os.getcwd()
                    os.chdir(repo_dir)
                    run_cmd("git fetch --all")
                    os.chdir(cur_dir)
                break

    os.chdir(start_dir)  # avoiding ".." in case of incorrect git directories
Once logged in: on the top right, click on your profile pic and then 'View profile'
Take note of your user (in the example below 'YourEmail@domain.com', but keep in mind it's case sensitive)
Click on profile pic > Manage account > Personal access token > Create a token (choosing 'Read' access type is enough for this functionality)
For all repos in all projects:
Open a CLI and use the command below (remember to fill in your server domain!):
curl -u "YourEmail#domain.com" -X GET https://<my_server_domain>/rest/api/1.0/projects/?limit=1000
It will ask you for your personal access token; provide it and you will get a JSON file with all the repos requested.
For all repos in a given project:
Pick the project you want to get repos from. In my case, the project URL is: <your_server_domain>/projects/TECH/ and therefore my {projectKey} is 'TECH', which you'll need for the command below.
Open a CLI and use this command (remember to fill in your server domain and projectKey!):
curl -u "YourEmail#domain.com" -X GET https://<my_server_domain>/rest/api/1.0/projects/{projectKey}/repos?limit=50
Final touches
(optional) If you want just the titles of the repos requested and you have jq installed (for Windows, downloading the exe and adding it to PATH should be enough, but you need to restart your CLI for that new addition to be detected), you can use the command below:
curl -u $BBUSER -X GET <my_server_domain>/rest/api/1.0/projects/TECH/repos?limit=50 | jq '.values|.[]|.name'
(tested with Data Center/Atlassian Bitbucket v7.9.0 and the PowerShell CLI)
For Bitbucket Cloud
You can use their REST API to access and perform queries on your server.
Specifically, you can use this documentation page, provided by Atlassian, to learn how to list your repositories.
For Bitbucket Server
Edit: As of receiving this tweet from Dan Bennett, I've learnt there is an API/plugin system for Bitbucket Server that could possibly cater for your needs. For docs: See here.
Edit2: Found this reference to listing personal repositories that may serve as a solution.
AFAIK there isn't a solution for you unless you built a little API for yourself that interacted with your Bitbucket Server instance.
Atlassian Documentation does indicate that to list all currently configured repositories you can do git remote -v. However I'm dubious of this as this isn't normally how git remote -v is used; I think it's more likely that Atlassian's documentation is being unclear rather than Atlassian building in this functionality to Bitbucket Server.
I ended up having to do this myself with an on-prem install of Bitbucket which didn't seem to have the REST APIs discussed above accessible, so I came up with a short script to scrape it out of the web page. This workaround has the advantage that there's nothing you need to install, and you don't need to worry about dependencies, certs or logins other than just logging into your Bitbucket server. You can also set this up as a bookmark if you urlencode the script and prefix it with javascript:.
To use this:
Open your bitbucket server project page, where you should see a list of repos.
Open your browser's devtools console. This is usually F12 or ctrl-shift-i.
Paste the following into the command prompt there.
JSON.stringify(Array.from(document.querySelectorAll('[data-repository-id]')).map(aTag => {
    const href = aTag.getAttribute('href');
    let projName = href.match(/\/projects\/(.+)\/repos/)[1].toLowerCase();
    let repoName = href.match(/\/repos\/(.+)\/browse/)[1];
    repoName = repoName.replace(' ', '-');
    const templ = `https://${location.host}/scm/${projName}/${repoName}.git`;
    return {
        href,
        name: aTag.innerText,
        clone: templ
    };
}));
The result is a JSON string containing an array with the repo's URL, name, and clone URL.
[{
    "href": "/projects/FOO/repos/some-repo-here/browse",
    "name": "some-repo-here",
    "clone": "https://mybitbucket.company.com/scm/foo/some-repo-here.git"
}]
This Ruby script isn't the greatest code, which makes sense, because I'm not the greatest coder. But it is clear, tested, and it works.
The script filters the output of a Bitbucket API call to create a complete report of all repos on a Bitbucket server. The report is arranged by project, and includes totals and subtotals, a link to each repo, and whether the repos are public or personal. I could have simplified it for general use, but it's pretty useful as it is.
There are no command line arguments. Just run it.
#!/usr/bin/ruby
#
# @author Bill Cernansky
#
# List and count all repos on a Bitbucket server, arranged by project, to STDOUT.
#
require 'json'

bbserver   = 'http(s)://server.domain.com'
bbuser     = 'username'
bbpassword = 'password'
bbmaxrepos = 2000   # Increase if you have more than 2000 repos

reposRaw = JSON.parse(`curl -s -u '#{bbuser}':'#{bbpassword}' -X GET #{bbserver}/rest/api/1.0/repos?limit=#{bbmaxrepos}`)

projects = {}
repoCount = reposRaw['values'].count

reposRaw['values'].each do |r|
  projID = r['project']['key']
  if projects[projID].nil?
    projects[projID] = {}
    projects[projID]['name'] = r['project']['name']
    projects[projID]['repos'] = {}
  end
  repoName = r['name']
  projects[projID]['repos'][repoName] = r['links']['clone'][0]['href']
end

privateProjCount = projects.keys.grep(/^\~/).count
publicProjCount  = projects.keys.count - privateProjCount

reportText = ''
privateRepoCount = 0

projects.keys.sort.each do |p|
  # Personal project slugs always start with a tilde
  isPrivate = p[0] == '~'
  projRepoCount = projects[p]['repos'].keys.count
  privateRepoCount += projRepoCount if isPrivate
  reportText += "\nProject: #{p} : #{projects[p]['name']}\n    #{projRepoCount} #{isPrivate ? 'PERSONAL' : 'Public'} repositories\n"
  projects[p]['repos'].keys.each do |r|
    reportText += sprintf("    %-30s : %s\n", r, projects[p]['repos'][r])
  end
end

puts "BITBUCKET REPO REPORT\n\n"
puts sprintf("  Total Projects: %5d   Public: %5d   Personal: %5d", projects.keys.count, publicProjCount, privateProjCount)
puts sprintf("  Total Repos:    %5d   Public: %5d   Personal: %5d", repoCount, repoCount - privateRepoCount, privateRepoCount)
puts reportText
The way I solved this issue was to fetch the HTML page and give it a ridiculously high limit, like this (in Python):
cmd = "curl -s -k --user " + username + " https://URL/projects/<KEY_PROJECT_NAME>/?limit\=10000"
Then I parsed it with BeautifulSoup:
import re
import subprocess
from bs4 import BeautifulSoup

make_list = str((subprocess.check_output(cmd, shell=True)).rstrip().decode("utf-8"))
html = make_list
parsed_html = BeautifulSoup(html, 'html.parser')
list1 = []
for a in parsed_html.find_all("a", href=re.compile("/projects/<KEY_PROJECT_NAME>/repos/")):
    list1.append(a.string)
print(list1)
To use this, make sure you replace URL and <KEY_PROJECT_NAME>; the latter should be the Bitbucket project you are targeting. All I am doing is parsing an HTML file.
Here's how I pulled the list of repos from Bitbucket Cloud.
Set up an OAuth consumer
Go to your workspace settings and set up an OAuth consumer; you should be able to go there directly using this link: https://bitbucket.org/{your_workspace}/workspace/settings/api
The only setting that matters is the callback URL, which can be anything, but I chose http://localhost
Once set up, this will display a key and secret pair for your OAuth consumer; I will refer to these as {oauth_key} and {oauth_secret} below
Authenticate with the API
Go to https://bitbucket.org/site/oauth2/authorize?client_id={oauth_key}&response_type=code ensuring you replace {oauth_key}
This will redirect you to something like http://localhost/?code=xxxxxxxxxxxxxxxxxx; make a note of that code, which I'll refer to as {oauth_code} below
In your terminal, run curl -X POST -u "{oauth_key}:{oauth_secret}" https://bitbucket.org/site/oauth2/access_token -d grant_type=authorization_code -d code={oauth_code}, replacing the placeholders
This should return JSON including the access_token; I'll refer to that access token as {oauth_token}
Get the list of repos
You can now run the following to get the list of repos. Bear in mind that your {oauth_token} lasts 2hrs by default.
curl --request GET \
--url 'https://api.bitbucket.org/2.0/repositories/pageant?page=1' \
--header 'Authorization: Bearer {oauth_token}' \
--header 'Accept: application/json'
This response is paginated so you'll need to page through the responses, 10 repositories at a time.
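A sketch of that paging loop, following the next link that each response carries (assumes jq and the {oauth_token} from above):
url="https://api.bitbucket.org/2.0/repositories/pageant"
while [ "$url" != "null" ]; do
    page=$(curl -s --request GET --url "$url" \
        --header 'Authorization: Bearer {oauth_token}' \
        --header 'Accept: application/json')
    echo "$page" | jq -r '.values[].full_name'   # one repo slug per line
    url=$(echo "$page" | jq -r '.next')          # absent on the last page -> "null"
done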

Can you get the number of lines of code from a GitHub repository?

In a GitHub repository you can see “language statistics”, which displays the percentage of the project that’s written in a language. It doesn’t, however, display how many lines of code the project consists of. Often, I want to quickly get an impression of the scale and complexity of a project, and the count of lines of code can give a good first impression. 500 lines of code implies a relatively simple project, 100,000 lines of code implies a very large/complicated project.
So, is it possible to get the lines of code written in the various languages from a GitHub repository, preferably without cloning it?
The question “Count number of lines in a git repository” asks how to count the lines of code in a local Git repository, but:
You have to clone the project, which could be massive. Cloning a project like Wine, for example, takes ages.
You would count lines in files that wouldn't necessarily be code, like i18n files.
If you count just (for example) Ruby files, you’d potentially miss massive amount of code in other languages, like JavaScript. You’d have to know beforehand which languages the project uses. You’d also have to repeat the count for every language the project uses.
All in all, this is potentially far too time-intensive for “quickly checking the scale of a project”.
You can run something like
git ls-files | xargs wc -l
which will give you the total count.
You can also add more instructions, like just looking at the JavaScript files:
git ls-files | grep '\.js' | xargs wc -l
Or use this handy little tool: https://line-count.herokuapp.com/
A shell script, cloc-git
You can use this shell script to count the number of lines in a remote Git repository with one command:
#!/usr/bin/env bash
git clone --depth 1 "$1" temp-linecount-repo &&
printf "('temp-linecount-repo' will be deleted automatically)\n\n\n" &&
cloc temp-linecount-repo &&
rm -rf temp-linecount-repo
Installation
This script requires CLOC (“Count Lines of Code”) to be installed. cloc can probably be installed with your package manager – for example, brew install cloc with Homebrew. There is also a docker image published under mribeiro/cloc.
You can install the script by saving its code to a file cloc-git, running chmod +x cloc-git, and then moving the file to a folder in your $PATH such as /usr/local/bin.
Usage
The script takes one argument, which is any URL that git clone will accept. Examples are https://github.com/evalEmpire/perl5i.git (HTTPS) or git@github.com:evalEmpire/perl5i.git (SSH). You can get this URL from any GitHub project page by clicking "Clone or download".
Example output:
$ cloc-git https://github.com/evalEmpire/perl5i.git
Cloning into 'temp-linecount-repo'...
remote: Counting objects: 200, done.
remote: Compressing objects: 100% (182/182), done.
remote: Total 200 (delta 13), reused 158 (delta 9), pack-reused 0
Receiving objects: 100% (200/200), 296.52 KiB | 110.00 KiB/s, done.
Resolving deltas: 100% (13/13), done.
Checking connectivity... done.
('temp-linecount-repo' will be deleted automatically)
171 text files.
166 unique files.
17 files ignored.
http://cloc.sourceforge.net v 1.62 T=1.13 s (134.1 files/s, 9764.6 lines/s)
-------------------------------------------------------------------------------
Language files blank comment code
-------------------------------------------------------------------------------
Perl 149 2795 1425 6382
JSON 1 0 0 270
YAML 2 0 0 198
-------------------------------------------------------------------------------
SUM: 152 2795 1425 6850
-------------------------------------------------------------------------------
Alternatives
Run the commands manually
If you don’t want to bother saving and installing the shell script, you can run the commands manually. An example:
$ git clone --depth 1 https://github.com/evalEmpire/perl5i.git
$ cloc perl5i
$ rm -rf perl5i
Linguist
If you want the results to match GitHub’s language percentages exactly, you can try installing Linguist instead of CLOC. According to its README, you need to gem install linguist and then run linguist. I couldn’t get it to work (issue #2223).
I created an extension for Google Chrome browser - GLOC which works for public and private repos.
Counts the number of lines of code of a project from:
project detail page
user's repositories
organization page
search results page
trending page
explore page
If you go to the graphs/contributors page, you can see a list of all the contributors to the repo and how many lines they've added and removed.
Unless I'm missing something, subtracting the aggregate number of lines deleted from the aggregate number of lines added among all contributors should yield the total number of lines of code in the repo. (EDIT: it turns out I was missing something after all. Take a look at orbitbot's comment for details.)
UPDATE:
This data is also available in GitHub's API. So I wrote a quick script to fetch the data and do the calculation:
'use strict';

async function countGithub(repo) {
    const response = await fetch(`https://api.github.com/repos/${repo}/stats/contributors`);
    const contributors = await response.json();
    const lineCounts = contributors.map(contributor => (
        contributor.weeks.reduce((lineCount, week) => lineCount + week.a - week.d, 0)
    ));
    const lines = lineCounts.reduce((lineTotal, lineCount) => lineTotal + lineCount);
    window.alert(lines);
}

countGithub('jquery/jquery'); // or count anything you like
Just paste it in a Chrome DevTools snippet, change the repo and click run.
Disclaimer (thanks to lovasoa):
Take the results of this method with a grain of salt, because for some repos (sorich87/bootstrap-tour) it results in negative values, which might indicate there's something wrong with the data returned from GitHub's API.
UPDATE:
Looks like this method to calculate total line numbers isn't entirely reliable. Take a look at orbitbot's comment for details.
You can clone just the latest commit using git clone --depth 1 <url> and then perform your own analysis using Linguist, the same software GitHub uses. That's the only way I know of that you're going to get lines of code.
Another option is to use the API to list the languages the project uses. It doesn't give them in lines but in bytes. For example...
$ curl https://api.github.com/repos/evalEmpire/perl5i/languages
{
    "Perl": 274835
}
Though take that with a grain of salt: that project includes YAML and JSON, which the website acknowledges but the API does not.
Finally, you can use code search to ask which files match a given language. This example asks which files in perl5i are Perl. https://api.github.com/search/code?q=language:perl+repo:evalEmpire/perl5i. It will not give you lines, and you have to ask for the file size separately using the returned url for each file.
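A sketch of that two-step lookup with curl and jq (the per-file url field points at the contents API, whose response includes a size in bytes):
curl -s 'https://api.github.com/search/code?q=language:perl+repo:evalEmpire/perl5i' |
  jq -r '.items[].url' |
  while read -r u; do
    curl -s "$u" | jq -r '"\(.path) \(.size) bytes"'
  done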
Not currently possible on GitHub.com or their APIs
I have talked to customer support and confirmed that this can not be done on github.com. They have passed the suggestion along to the Github team though, so hopefully it will be possible in the future. If so, I'll be sure to edit this answer.
Meanwhile, Rory O'Kane's answer is a brilliant alternative based on cloc and a shallow repo clone.
From @Tgr's comment, there is an online tool:
https://codetabs.com/count-loc/count-loc-online.html
You can use tokei:
cargo install tokei
git clone --depth 1 https://github.com/XAMPPRocky/tokei
tokei tokei/
Output:
===============================================================================
Language Files Lines Code Comments Blanks
===============================================================================
BASH 4 48 30 10 8
JSON 1 1430 1430 0 0
Shell 1 49 38 1 10
TOML 2 78 65 4 9
-------------------------------------------------------------------------------
Markdown 4 1410 0 1121 289
|- JSON 1 41 41 0 0
|- Rust 1 47 38 5 4
|- Shell 1 19 16 0 3
(Total) 1517 95 1126 296
-------------------------------------------------------------------------------
Rust 19 3750 3123 119 508
|- Markdown 12 358 5 302 51
(Total) 4108 3128 421 559
===============================================================================
Total 31 6765 4686 1255 824
===============================================================================
Tokei has support for badges:
Count Lines
[![](https://tokei.rs/b1/github/XAMPPRocky/tokei)](https://github.com/XAMPPRocky/tokei)
By default the badge will show the repo's LoC (Lines of Code); you can also specify for it to show a different category by using the ?category= query string. It can be either code, blanks, files, lines, or comments.
Count Files
[![](https://tokei.rs/b1/github/XAMPPRocky/tokei?category=files)](https://github.com/XAMPPRocky/tokei)
You can use the GitHub API to get the SLOC, like the following function does:
function getSloc(repo, tries) {
    // repo is the repo's path
    if (!repo) {
        return Promise.reject(new Error("No repo provided"));
    }

    // GitHub's API may return an empty object the first time it is accessed
    // We can try several times then stop
    if (tries === 0) {
        return Promise.reject(new Error("Too many tries"));
    }

    let url = "https://api.github.com/repos" + repo + "/stats/code_frequency";

    return fetch(url)
        .then(x => x.json())
        .then(x => x.reduce((total, changes) => total + changes[1] + changes[2], 0))
        .catch(err => getSloc(repo, tries - 1));
}
Personally, I made a Chrome extension which shows the number of SLOC on both the GitHub project list and project detail pages. You can also set your personal access token to access private repositories and bypass the API rate limit.
You can download it from here: https://chrome.google.com/webstore/detail/github-sloc/fkjjjamhihnjmihibcmdnianbcbccpnn
Source code is available here https://github.com/martianyi/github-sloc
Hey all, this is ridiculously easy...
Create a new branch from your first commit
When you want to find out your stats, create a new PR from main
The PR will show you the number of changed lines; as you're doing a PR from the first commit, all your code will be counted as new lines
And the added benefit is that if you don't approve the PR and just leave it in place, the stats (number of commits, files changed and total lines of code) will simply stay up to date as you merge changes into main. :) Enjoy.
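A sketch of the first step from the command line (the branch name "baseline" is just an example, and this assumes your history has a single root commit):
first=$(git rev-list --max-parents=0 HEAD)   # sha of the root commit
git branch baseline "$first"
git push origin baseline
# then open a PR in the GitHub UI from main into baseline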
Firefox add-on Github SLOC
I wrote a small firefox addon that prints the number of lines of code on github project pages: Github SLOC
npm install sloc -g
git clone --depth 1 https://github.com/vuejs/vue/
sloc ".\vue\src" --format cli-table
rm -rf ".\vue\"
Instructions and Explanation
Install sloc from npm, a command line tool (Node.js needs to be installed).
npm install sloc -g
Clone shallow repository (faster download than full clone).
git clone --depth 1 https://github.com/facebook/react/
Run sloc and specify the path that should be analyzed.
sloc ".\react\src" --format cli-table
sloc supports formatting the output as a cli-table, as json or csv. Regular expressions can be used to exclude files and folders (Further information on npm).
Delete repository folder (optional)
Powershell: rm -r -force ".\react\" or on Mac/Unix: rm -rf ".\react\"
It is also possible to get details for every file with the --details option:
sloc ".\react\src" --format cli-table --details
Open terminal and run the following:
curl -L "https://api.codetabs.com/v1/loc?github=username/reponame"
If the question is "can you quickly get the NUMBER OF LINES of a GitHub repo", the answer is no, as stated by the other answers.
However, if the question is "can you quickly check the SCALE of a project", I usually gauge a project by looking at its size. Of course the size will include deltas from all active commits, but it is a good metric as the order of magnitude is quite close.
E.g.
How big is the "docker" project?
In your browser, enter api.github.com/repos/ORG_NAME/PROJECT_NAME
e.g. api.github.com/repos/docker/docker
In the response hash, you can find the size attribute:
{
    ...
    size: 161432,
    ...
}
This should give you an idea of the relative scale of the project. The number seems to be in KB, but when I checked it on my computer it was actually smaller, even though the order of magnitude is consistent. (161432 KB = 161 MB, du -s -h docker = 65 MB)
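A sketch of the same check from a shell, assuming jq is installed (-L follows the API redirect for moved repos):
curl -sL https://api.github.com/repos/docker/docker | jq .size
# e.g. 161432, in KB per the caveat above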
Pipe the per-file line counts to sort to organize the files by line count:
git ls-files | xargs wc -l | sort -n
This is so easy if you are using VS Code and you clone the project first. Just install the Lines of Code (LOC) VS Code extension and then run LineCount: Count Workspace Files from the Command Palette.
The extension shows summary statistics by file type and it also outputs result files with detailed information by each folder.
There is another online tool that counts lines of code for public and private repos without having to clone/download them: https://klock.herokuapp.com/
None of the answers here satisfied my requirements. I only wanted to use existing utilities. The following script will use basic utilities:
Git
GNU or BSD awk
GNU or BSD sed
Bash
Get total lines added to a repository (subtracts lines deleted from lines added).
#!/bin/bash
git diff --shortstat 4b825dc642cb6eb9a060e54bf8d69288fbee4904 HEAD | \
sed 's/[^0-9,]*//g' | \
awk -F, '!($2 > 0) {$2="0"};!($3 > 0) {$3="0"}; {print $2-$3}'
Get lines of code filtered by specified file types of known source code (e.g. *.py files or add more extensions, etc).
#!/bin/bash
git diff --shortstat 4b825dc642cb6eb9a060e54bf8d69288fbee4904 HEAD -- *.{py,java,js} | \
sed 's/[^0-9,]*//g' | \
awk -F, '!($2 > 0) {$2="0"};!($3 > 0) {$3="0"}; {print $2-$3}'
4b825dc642cb6eb9a060e54bf8d69288fbee4904 is the id of the "empty tree" in Git and it's always available in every repository.
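You can verify that id yourself; the empty tree is derived by hashing zero-length tree content, which is why the same value exists in every repository:
git hash-object -t tree /dev/null
# 4b825dc642cb6eb9a060e54bf8d69288fbee4904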
Sources:
My own scripting
How to get Git diff of the first commit?
Is there a way of having git show lines added, lines changed and lines removed?
shields.io has a badge that can count up all the lines for you; the example badge shown in the original answer counted the Raycast extensions repo.
You can use Sourcegraph, an open-source search engine for code. It can connect to your GitHub account and index the content, and then in the admin section you will see the number of lines of code indexed.
I made an NPM package specifically for this usage, which allows you to call a CLI tool, providing the directory path and the folders/files to ignore.
It goes like this:
npm i -g @quasimodo147/countlines
to get the $ countlines command in your terminal.
Then you can do
countlines . node_modules build dist

List of available software sites lost after Eclipse update

After clicking the Check Update, installing a few updates and clicking OK to restart Eclipse, the list of available software sites in the Install window is gone.
Is it possible to get it as it was?
If not, how can I rebuild it so that my plug-ins will be updated in the future?
I'm working with Eclipse 4.3.2 in Windows 7.
I ran into the same problem on Win7 64-bit after a set of automatic updates.
All settings for "Available Software Sites" were lost. My Eclipse version after the data loss was Luna 4.4.2 (I don't know the version number before; I had installed eclipse-cpp-luna-SR1a-win32-x86_64_2014.zip).
I set this site to get the "Help/Install new software..." dialog working again:
Eclipse-Project-Repository - http://download.eclipse.org/eclipse/updates/4.4
To get the repository for another Eclipse version, look here.
Follow the link to your Eclipse version and search there for the "Eclipse p2 Repository".
For those interested in restoring the update site, here is a way to do the job which may or may not work for you:
Locate the ${ECLIPSE_HOME}\p2\org.eclipse.equinox.p2.engine\profileRegistry\<profile>\.data\.settings\org.eclipse.equinox.p2.artifact.repository.prefs file. The <profile> depends on the installed Eclipse, for me it was epp.package.rcp.profile.
Find all keys ending with /uri=: they contain the original URIs. You can use grep: grep --color -Po '/uri=.+' org.eclipse.equinox.p2.artifact.repository.prefs (you may want to filter out file:/ URIs).
Remove the /uri and unescape the property to regain a valid URI: sed is good for that, e.g.: sed -E -e 's#^/uri=##g' -e 's#\\##g'
Apply a sort --unique
Now, you would have this command line and result:
$ grep --color -Po '/uri=http.+' org.eclipse.equinox.p2.artifact.repository.prefs | sed -E -e 's#^/uri=##g' -e 's#\\##g' | sort
https://spotbugs.github.io/eclipse/
http://download.eclipse.org/e4/snapshots/org.eclipse.e4.tools/latest/
http://download.eclipse.org/eclipse/updates/4.7
http://download.eclipse.org/eclipse/updates/4.7/R-4.7-201706120950
http://download.eclipse.org/eclipse/updates/4.7/R-4.7.1-201709061700
http://download.eclipse.org/eclipse/updates/4.7/R-4.7.1a-201710090410
http://download.eclipse.org/eclipse/updates/4.7/R-4.7.2-201711300510
http://download.eclipse.org/eclipse/updates/4.7/R-4.7.3-201803010715
http://download.eclipse.org/eclipse/updates/4.7/R-4.7.3a-201803300640
You are almost there!
If you look at the example above, you can see multiple duplicate URIs for the same endpoint (/eclipse/updates/4.7), which is a P2 composite repository: you may add the following to the sed command to remove these parts: -e 's#/(R-[^/]+|)20[0-9]{10}##g'
That's better:
$ grep --color -Po '/uri=http.+' org.eclipse.equinox.p2.artifact.repository.prefs | sed -E -e 's#^/uri=##g' -e 's#\\##g' -e 's#/(R-[^/]+|)20[0-9]{10}##g' | sort --unique
http://download.eclipse.org/e4/snapshots/org.eclipse.e4.tools/latest/
http://download.eclipse.org/eclipse/updates/4.7
http://download.eclipse.org/efxclipse/updates-released/3.0.0/site
http://download.eclipse.org/releases/oxygen
http://download.eclipse.org/technology/epp/packages/oxygen/
http://eclipse.pitest.org/release/
http://netceteragroup.github.io/quickrex/updatesite
http://repo1.maven.org/maven2/.m2e/connectors/m2eclipse-tycho/0.8.0/N/0.8.0.201409231215/
http://ucdetector.sourceforge.net/update/
Now we will transform that into a XML file to be imported: in the Available Software Sites, you may export a bookmarks.xml file which contains this for one entry:
<?xml version="1.0" encoding="UTF-8"?>
<bookmarks>
<site url="http://download.eclipse.org/eclipse/updates/4.7" selected="true" name=""/>
</bookmarks>
As you probably don't care about the name or selected attributes (Eclipse may also update these using the update site metadata), you may use shell builtins or sed again:
$ grep --color -Po '/uri=http.+' org.eclipse.equinox.p2.artifact.repository.prefs.old | \
sed -E -e 's#^/uri=##g' -e 's#\\##g' -e 's#/(R-[^/]+|)20[0-9]{10}##g' | \
sort --unique | \
while read url; do echo "<site url=\"${url}\" />"; done > bookmarks.xml
You now have a bookmarks.xml to edit: simply add the <?xml version="1.0" encoding="UTF-8"?> header along with the opening <bookmarks> and closing </bookmarks> tags, and import it in Available Software Sites.
All that remains is to enable all sites by selecting them and clicking Enable. When done, try to update Eclipse as usual, and that should do the job!
You may want to:
Remove any invalid entries or at least disable them
Save your bookmarks.xml into a repository or "somewhere".
Export the bookmarks.xml again, now with proper name.
Good luck!
And ... raise this bug report: https://bugs.eclipse.org/bugs/show_bug.cgi?id=502524
I ran into the same problem on Win10 64-bit after a set of automatic updates. All settings for "Available Software Sites" were lost.
Eclipse p2 Repository
To update your Eclipse installation to this development stream, you can use the software repository at
http://download.eclipse.org/eclipse/updates/4.5 .
To update your build to use this specific build, you can use the software repository at
http://download.eclipse.org/eclipse/updates/4.5/R-4.5.2-201602121500
Fixed in Neon
The good news is that the available update sites survive an Eclipse update on Neon. But I still see this problem on Mars and older.
Fix for Mars and older releases
There is a simple change which fixed this problem for me on Mars and older: adding the "-Djava.net.preferIPv4Stack=true" JVM param to the eclipse.ini file before running an update:
-Djava.net.preferIPv4Stack=true
Please note this is a VM argument, so it must go after the "-vmargs"
So why the available software sites were deleted on update?
Eclipse update is done by a ProvisioningJob which calls LoadMetadataRepositoryJob.runModal(), which in turn calls MetadataRepositoryManager.loadRepository().
AbstractRepositoryManager.loadRepository() checks if a repository is valid by calling checkNotFound(). If it's not found, the repo is not added. preferIPv4=true fixes it.
Had the same problem while updating from 4.19 (2021-03) to 4.20 (2021-06). What worked for me was:
Re-Opening the "Help" -> "Install New Software" after the list was lost, this refreshed some of the repositories that went missing.
Manually adding the repo URL at "Window" -> "Preferences" -> "Install/Update" -> "Available Software Sites" -> "Add..."
I wanted to update to the latest version (at that time), so I added the corresponding URL. Replace that URL with the URL of the version you wish to install or your current version:
"Help" -> "Check for updates" again, it should work.
I encountered this with Eclipse 4.16 (2020-06).
Inspired by @NoDataFound's approach, I wrote a Perl script as an all-in-one solution.
How it works:
Check each line of '${ECLIPSE_HOME}/p2/org.eclipse.equinox.p2.engine/profileRegistry/<profile>/.data/.settings/org.eclipse.equinox.p2.artifact.repository.prefs' for the 'nickname', 'uri' or 'enabled' property of an http[s] repo.
Store captured property name and value in a hash associated with the repo identifier.
my $propNameCapture = qr/(nickname(?==.*)|(?:uri|enabled)(?==.+))/;
my ($repo, $propName) = m{^repositories/(http.+?)/$propNameCapture};
next if !$repo || !$propName;
my ($propValue) = m{^repositories/http.+?/$propName=(.*)};
# Remove the backslash after http[s] if we have an uri.
$propValue =~ s/\\// if $propName eq 'uri';
# $repo is used merely to collect props belonging together.
$SiteFromRepo{$repo}->{$propName} = $propValue;
Create an array of site properties
for my $site (values %SiteFromRepo)
{
    # There are many entries without a nickname.
    # These are not from "Available Software Sites", so skip them.
    next unless exists $site->{nickname};

    print("nickname:\t$site->{nickname}\nuri:\t\t$site->{uri}\nenabled:\t$site->{enabled}\n\n");
    push(@Sites, $site);
}
Write XML file
open($fh, '>:utf8', $OutFilePath) or die("Could not open file '$OutFilePath' for write, exiting.\n");
$fh->print("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<bookmarks>\n");
$fh->print("    <site url=\"$_->{uri}\" selected=\"$_->{enabled}\" name=\"$_->{nickname}\"/>\n") for @Sites;
$fh->print('</bookmarks>');
close($fh);
The complete script can be downloaded from my GitHub repo
Try to run Eclipse as administrator. That way the "check for updates" should work.

Getting the issues from a certain milestone in Github

All I'm looking for is a way to get a list of issues for a given milestone. It looks like GitHub treats milestones a bit like labels, in that you can ask for the labels for an issue, but not the issues for a given label.
I know that I can filter my issues by milestone on the GitHub website, but this traverses multiple pages and I wanted an easy way to see all of the issues for a milestone in a more printer-friendly version.
Any tips?
You could use GitHub's API for this. See here on how to get the list of issues for a repo, and notice the milestone parameter. The response you will get is a big JSON document, so you would have to create a small script to pull only the titles of the issues, or use grep, or something like jq.
Notice also that API responses are paged, but you can set the paging to 100 entries per page, which is usually enough. If not, you would again have to create a small script to fetch all the pages (or do it manually).
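For example, a sketch of such a script using curl and jq (OWNER, REPO and the milestone number 1 are placeholders):
curl -s 'https://api.github.com/repos/OWNER/REPO/issues?milestone=1&state=all&per_page=100' | jq -r '.[].title'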
You can use the GraphQL API (V4) and do something like:
{
  repository(owner: "X", name: "X") {
    milestone(number: X) {
      id
      issues(first: 100) {
        edges {
          node {
            id,
            title
          }
        }
      }
    }
  }
}
I was not able to find any easy methods. This worked a treat for me:
brew install hub (on OSX). Hub is created by GitHub
cd to the local repo you want to access the origin for.
hub issue -M 21 -f "%I,%t,%L,%b,%au,%as" > save_here.csv
profit.
Find the milestone number (21 in the example above) in the URL on GitHub when you are viewing the milestone.
Docs for hub and in particular the format (-f) flag can be found here: https://hub.github.com/hub-issue.1.html
First find the list of milestones using the milestones API (GET /repos/:owner/:repo/milestones).
Then query the issues API (GET /repos/:owner/:repo/issues?milestone=:number) for each milestone.
Given a milestone $title in $owner/$repo, we can list the issues in this milestone using curl and jq:
api_url="https://api.github.com/repos/$owner/$repo"

MS=$(curl -s "$api_url/milestones" | jq --arg title "$title" '.[] | select(.title == $title)')
MS_number=$(echo "$MS" | jq .number -r)
MS_state=$(echo "$MS" | jq .state -r)

echo "Found $title milestone with state=$MS_state"
echo ""

issues=$(curl -s "$api_url/issues?milestone=$MS_number" | jq '.[].number' -r)

echo "The following issues are in the $title milestone:"
for i in $issues; do
    issue_title=$(curl -s "$api_url/issues/$i" | jq '.title' -r)
    echo "  https://github.com/$owner/$repo/issues/$i - $issue_title"
done
echo ""