I published a repository using the Mercurial version control system. More specifically, I installed the mercurial-server package and followed the instructions given at
http://ekkescorner.wordpress.com/blog-series/git-mercurial/step-by-step-install-mercurial-server-on-ubuntu/
and
http://dev.lshift.net/paul/mercurial-server/docbook.html
. The repository works great, but the problem is that no emails are sent to the users when we do an hg push after making some changes.
I found several pages on the internet describing how to supposedly solve this problem, but none of them used mercurial-server to publish the repository, and the way to activate sending emails on an hg push seems to differ per method of publication.
How do I activate emailing on an hg push in the case of the mercurial-server package? Do I need to set up an SMTP server for Mercurial?
The hgrc file in the .hg folder of my repository directory is as follows.
[paths]
default = ssh://hg@<ip_address>/jays/project
[extensions]
hgext.notify =
[hooks]
# Enable either changegroup or incoming.
# changegroup will send one email for each push,
# whereas incoming sends one email per changeset.
# Note: Configuring both is possible, but probably not
# what you want, you'll get one email for the group
# and one for each changeset in the group.
changegroup.notify = python:hgext.notify.hook
#commit.notify = python:hgext.notify.hook
#incoming.notify = python:hgext.notify.hook
[email]
from = <from@email.address>
[smtp]
host = <smtp.server.address>
# Optional options:
username = <smtp.server.username>
password = <smtp.server.password>
port = 465
tls = true
# local_hostname = me.example.com
# presently it is necessary to specify the baseurl for the notify
# extension to work. It can be a dummy value if your repo isn't
# available via http
[web]
baseurl = file:///
[notify]
# multiple sources can be specified as a whitespace separated list
sources = serve push pull bundle
# set this to False when you're ready for mail to start sending
test = false
# While the subscription information can be included in this file,
# (in which case, set: config =)
# having it in a separate file allows for it to be version controlled
# and for the option of having subscribers maintain it themselves.
config =
# you can override the changeset template here, if you want.
# If it doesn't start with \n it may confuse the email parser.
# here's an example that makes the changeset template look more like hg log:
#template = \ndetails: {baseurl}{webroot}/rev/{node|short}\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n
template = \ndetails: "{baseurl}{webroot}"\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n
# max lines of diffs to include (0=none, -1=all)
maxdiff = 1000
[reposubs]
* = <first@recipient.address>
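For reference: if config = is pointed at a separate subscriptions file, that file uses the same ini-style sections as the inline version. A minimal sketch, with placeholder addresses:
# contents of the file referenced by "config =" above
[usersubs]
# subscriber = comma-separated glob patterns of the repos they follow
<first@recipient.address> = *
[reposubs]
# glob pattern = comma-separated subscribers
* = <first@recipient.address>, <second@recipient.address>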
I worked with Bugzilla and Eclipse, and I used Mylyn to manage issues through Eclipse.
Now I use GitLab and GitLab issues, and I wonder: is there a Mylyn connector for GitLab?
I know there is this one: gitlab connector, but it is no longer usable and I could not find another one.
Has anyone faced the same problem and found a solution?
After a while I can share my solution; maybe it will help others.
There is no Mylyn connector for GitLab that runs correctly. A solution could be to debug the buggy one, but in fact GitLab is not a powerful tool for managing issues.
I chose to use Bugzilla, for at least three reasons:
the bug workflow is easy to customize, and this is an important feature for adapting the bug workflow to the company's processes
the Mylyn connector for Bugzilla has been available for a long time and runs correctly
Bugzilla is still a reference tool
The first step is to define Bugzilla as the issue management tool; this is done through the GitLab UI, and the documentation is here.
In my view, if an external tool is used, it is best to deactivate GitLab issue tracking. On your project, go to Settings->General->Visibility, project features and deactivate Issues.
Note: if Bugzilla and GitLab are deployed on the same host, you have to accept requests to localhost. In the GitLab administration area, go to Settings->Network->Outbound requests and select the two options about the local network.
After that, you can annotate your commits with a message containing Ref #id, where id is a bug id in Bugzilla, for example as shown below. As with GitLab issues, the commit will contain a hyperlink to the issue, but the hyperlink will open the Bugzilla bug page.
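For example, a commit referencing a hypothetical bug 1234 would look like this:
git commit -m "Ref #1234 handle empty input in the parser"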
If you do not go further, you will lose one GitLab feature: a GitLab issue references all the commits related to it.
A solution to get a similar feature with Bugzilla is to add a comment to the bug with a hyperlink to the commits.
This can be achieved with a server hook, as described here.
Note: each time you change the gitlab.rb file, do not forget to execute gitlab-ctl reconfigure.
The hook has to handle both "standard" commits and merge commits.
The following Python code can be seen as a starting point for such a hook.
It assumes that development is done on branches named feature/id and that commit messages contain a string Ref #id, where id is a bug id.
It could be improved:
to handle exceptions better
to handle more push cases
to check more rules, such as:
the bug has to be in progress
the bug assignee has to be the git user who performs the push
the Bugzilla project has to be the one for the GitLab project
the bug is open on a version that is still under development or debug
....
#!/usr/bin/env python3
import sys
import os
import subprocess
import requests
#
# Constants
#
G__GIT_CMD =["git","rev-list","--pretty"]
G__SEP ="commit "
G__NL ='\n'
G__AUTHOR ='Author'
G__AUTHOR_R ='Author: '
G__DATE ='Date'
G__DATE_R ='Date: '
G__C_MSG ='message'
G__URL_S ='https://<<gitlab server url>>/<<project>>/-/commit/'
G__MERGE_S ='Merge: '
G__MERGE ='Merge'
G__URL ='URL'
G__BUGZ_URL ='http://<<bugzilla url>>/rest/bug/{}/comment'
G__HEADERS = {'Content-type': 'application/json'}
G__JSON = {"Bugzilla_login":"<<bugzilla user>>","Bugzilla_password":"<<password>>","comment": "{}"}
G__JSON_MR = {"Bugzilla_login":"<<bugzilla user>>","Bugzilla_password":"<<password>>","comment": "Merge request {}"}
G__COMMENT_ELEM = 'comment'
G__MSG_REF ="Ref #"
G__MSG_REF_MR ="feature/"
G__WHITE =" "
G__APOS ="'"
#
# Returns True if the message has at least one non-empty element
#
def filter_message_elements(message_elements):
flag=False
for message_element in message_elements:
if len(message_element)!=0:
flag=True
return flag
#
# Add a commit to the commits dictionary.
#
# If this is a merge commit, an extra Merge element is stored.
#
def add_commit_in_dict(commits_dict, temp_list, flag_merge):
url = G__URL_S+temp_list[0]
commits_dict[temp_list[0]]={}
commits_dict[temp_list[0]][G__URL]=url
if False==flag_merge:
commits_dict[temp_list[0]][G__AUTHOR]=temp_list[1].replace(G__AUTHOR_R,'')
commits_dict[temp_list[0]][G__DATE]=temp_list[2].replace(G__DATE_R,'')
commits_dict[temp_list[0]][G__C_MSG]=temp_list[3]
else:
commits_dict[temp_list[0]][G__MERGE]=temp_list[1]
commits_dict[temp_list[0]][G__AUTHOR]=temp_list[2].replace(G__AUTHOR_R,'')
commits_dict[temp_list[0]][G__DATE]=temp_list[3].replace(G__DATE_R,'')
commits_dict[temp_list[0]][G__C_MSG]=temp_list[4]
#
# Fill commits data
#
def fills_commit_data(commits_dict, fileinput_line):
params=fileinput_line[:-1].split()
try:
# Git command to get commits list
cmd=G__GIT_CMD+[params[1],"^"+params[0]]
rev_message = subprocess.run(cmd,stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
# loop on commits
messages_list=rev_message.stdout.split(G__SEP)
for message in messages_list:
if len(message)==0:
continue
message_elements = message.split(G__NL)
# filters empty message
flag=filter_message_elements(message_elements)
if not flag:
continue
# Extracts commit data and detects merge commit
temp_list=[]
flag_merge=False
for message_element in message_elements:
text = message_element.strip()
if 0!=len(text):
temp_list.append(text)
if -1!=text.find(G__MERGE):
flag_merge=True
# adds the commit in commits dictionary
add_commit_in_dict(commits_dict, temp_list, flag_merge)
except Exception as inst:
sys.exit(1)
#
# Extract the bug id from the commit message
#
def find_bug_id(message):
issue_int=-1
pos=message.find(G__MSG_REF)
if pos==-1:
sys.exit(1)
    issue_nb=message[pos+len(G__MSG_REF):]
    pos2=issue_nb.find(G__WHITE)
    # the message may end right after the bug id; find() then returns -1,
    # and slicing with -1 would chop the last digit
    if pos2!=-1:
        issue_nb=issue_nb[:pos2]
try:
issue_int=int(issue_nb)
except ValueError:
sys.exit(1)
return(issue_int)
#
# Extract the bug id from the commit message
# in case of merge request
#
def find_bug_id_mr(message):
issue_int=-1
pos=message.find(G__MSG_REF_MR)
if pos==-1:
sys.exit(1)
issue_nb=message[pos+len(G__MSG_REF_MR):]
pos2=issue_nb.find(G__APOS)
issue_nb=issue_nb[:pos2]
try:
issue_int=int(issue_nb)
except ValueError:
sys.exit(1)
return(issue_int)
#
# Checks if the commit list contains a merge request commit
#
def is_merge_request(commits_dict):
flag=False
for key in commits_dict:
if G__MERGE in commits_dict[key]:
flag=True
break
return flag
#
# Add a comment to a bug
#
def add_comment_to_bug(commit_data):
    bug_id = find_bug_id(commit_data[G__C_MSG])
    url = G__BUGZ_URL.format(str(bug_id))
    # format a copy: formatting G__JSON in place would overwrite the "{}"
    # placeholder after the first commit, so every later commit would post
    # the URL of the first one
    payload = dict(G__JSON)
    payload[G__COMMENT_ELEM] = payload[G__COMMENT_ELEM].format(commit_data[G__URL])
    response = requests.post(url, json=payload, headers=G__HEADERS)
#
# add a comment in case of merge request
#
def add_mr_comment_to_bug(commits_dict):
    commit_data=None
    for key in commits_dict:
        if G__MERGE in commits_dict[key]:
            commit_data=commits_dict[key]
            break
    bug_id = find_bug_id_mr(commit_data[G__C_MSG])
    url = G__BUGZ_URL.format(str(bug_id))
    # use a copy for the same reason as in add_comment_to_bug
    payload = dict(G__JSON_MR)
    payload[G__COMMENT_ELEM] = payload[G__COMMENT_ELEM].format(commit_data[G__URL])
    response = requests.post(url, json=payload, headers=G__HEADERS)
#
# Main program
#
def main():
# dictionary containing all commits
commits_dict={}
# loop on inputs referencing data changes
for fileinput_line in sys.stdin:
fills_commit_data(commits_dict, fileinput_line)
# find if this is merge request or not
flag_merge_request = is_merge_request(commits_dict)
if False==flag_merge_request:
# loop on commit to add comments to bugs
for key in commits_dict.keys():
add_comment_to_bug(commits_dict[key])
else:
        # in case of a merge request, only the merge commit has to be added;
        # the other commits have been processed before
add_mr_comment_to_bug(commits_dict)
if __name__ == "__main__":
main()
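A note on installation, assuming a pre-GitLab-14 Omnibus layout (the repository path and the script name bugzilla_hook.py below are assumptions, not from the GitLab docs): the script is meant to run as a post-receive server hook, which receives one "<old-sha> <new-sha> <refname>" line per updated ref on stdin, which is exactly what fills_commit_data parses.
# assumed Omnibus GitLab layout; adjust <group>/<project> to your repository
install -m 755 bugzilla_hook.py \
  /var/opt/gitlab/git-data/repositories/<group>/<project>.git/custom_hooks/post-receive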
I have a module, built with Dist::Zilla. I have Dist::Zilla set up to automatically push changes out to my GitHub repo. Works great when the repo is private.
However, as soon as I make the repo public, I start getting errors during the build process. Specifically, the problem is these lines in the dist.ini:
[Bugtracker]
web = http://github.com/myaccount/%s/issues
If I comment out these lines, it works. With these lines left in, I get an error:
Duplication of element resources.bugtracker.web at /Users/me/perl5/perlbrew/perls/perl-5.24.1/lib/site_perl/5.24.4/Dist/Zilla.pm line 595.
OK, so fine, I comment out the lines. However, another problem crops up: the version number of my builds no longer autoincrements and is stuck at the same number every time I try to release a build.
Is there some configuration setting I need to change with Dist::Zilla so it will play nice with public github repos? Here is the full dist.ini file:
name = Module-Test
author = me
license = Perl_5
copyright_holder = Me
copyright_year = 2018
[Repository]
;[Bugtracker]
;web = http://github.com/sdondley/%s/issues
[Git::NextVersion]
[GitHub::Meta]
[PodVersion]
[PkgVersion]
[NextRelease]
[Run::AfterRelease]
run = mv Changes tmp && cp %n-%v/Changes Changes
[InstallGuide]
[PodWeaver]
[ReadmeAnyFromPod]
type = markdown
location = root
phase = release
[Git::Check]
[Git::Commit]
allow_dirty = README.mkdn
allow_dirty = Changes
allow_dirty = INSTALL
[Git::Tag]
[Git::Push]
[Run::AfterRelease / MyAppAfter]
run = mv tmp/Changes Changes
[GatherDir]
[AutoPrereqs]
[PruneCruft]
[PruneFiles]
filename = weaver.ini
filename = README.mkdn
filename = dist.ini
filename = .gitignore
[ManifestSkip]
[MetaYAML]
[License]
[Readme]
[ExtraTests]
[ExecDir]
[ShareDir]
[MakeMaker]
[Manifest]
[TestRelease]
[FakeRelease]
Your [Bugtracker] entry leads to duplication because you are also setting the bugtracker through [GitHub::Meta]. Choose one or the other.
As for version number management, note that [Git::NextVersion] is based on your git tags. Make sure that these tags are present in your local repository and have the correct format. That plugin uses a command line invocation similar to this to obtain all tags:
git rev-list --simplify-by-decoration --pretty=%d HEAD | grep -oE 'tag: [^,)\s]+'
Public GitHub repos should not be a problem for Dist::Zilla – this is exactly the setup most dzil distros use anyway. But interactions between multiple plugins can lead to hard-to-track-down bugs, especially since the order of plugins is important. It can help to organize your plugins by the phase in which they run, and to test whether the problem persists after removing optional plugins. It also tends to be better to start with a simple dist.ini and add plugins as pain points in your development process become apparent.
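To see what [Git::NextVersion] is working from, you can inspect your tags directly; a quick sketch (the tag name is an example, and v0.001 assumes the plugin's default v-prefixed version format):
git tag --list 'v*'    # the tags the next version is derived from
git tag v0.001         # seed an initial tag if none exist, then push with --tags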
I have to send a mail with an attachment from a shell script.
I am trying to do it using mutt as shown here: How do I send a file as an email attachment using Linux command line?
Command:
echo "This is the message body" | mutt -a "/path/to/file.to.attach" -s "subject of message" -- recipient#domain.com
Error:
Error sending message, child exited 127 (Exec error.). Could not send
the message.
I was having the same issue on Ubuntu 18.04 and, just like @jono, I only had mutt installed. Installing sendmail fixed it:
sudo apt-get install sendmail
After that, sending mail with the test method or straight through the mutt CLI worked perfectly.
I encountered this same error today.
I found I only had mutt installed, but once I installed sendmail this error went away. However, I then got blocked locally.
So I uninstalled sendmail and installed postfix, and this worked.
I am now receiving email with the attached PDF.
This was on RHEL 7.4 in an enterprise environment. I am unsure whether results will differ on other versions or environments.
I had this error and simply had to add the lines below to my .muttrc. I'm using Gmail, if that matters. This way I'm using someone else's server to send and don't have to install extra junk.
set smtp_pass="secrets"
set smtp_url = "smtps://username@gmail.com@smtp.gmail.com:465/"
Set the app password generated from the Google help page referenced below into this file:
# file: ~/.muttrc
set from="first_name.last_name#gmail.com"
set realname="first_name last_name"
set imap_user="first_name.last_name#gmail.com"
#
# v1.0.1
# check the following google help page:
# http://support.google.com/accounts/bin/answer.py?answer=185833
# that is set here your google application password
set imap_pass="SecretPass!"
#nopeset imap_authenticators="gssapi"
set imap_authenticators="gssapi:cram-md5:login"
set certificate_file="~/.mutt/certificates"
#
# These two lines appear to be needed on some Linux distros, like Arch Linux
#
##REMOTE GMAIL FOLDERS
set folder="imaps://imap.gmail.com:993"
set record="+[Gmail]/Sent Mail"
set spoolfile="imaps://imap.gmail.com:993/INBOX"
set postponed="+[Gmail]/Drafts"
set trash="+[Google Mail]/Trash"
#
###SMTP Settings to sent email
set smtp_url="smtp://first_name.last_name#smtp.gmail.com:587"
#
# v1.0.1
# check the following google help page:
# http://support.google.com/accounts/bin/answer.py?answer=185833
# that is set here your google application password
set smtp_pass="SecretPass!"
#
###LOCAL FOLDERS FOR CACHED HEADERS AND CERTIFICATES
set header_cache="~/.mutt/cache/headers"
set message_cachedir="~/.mutt/cache/bodies"
set certificate_file =~/.mutt/certificates
#
###SECURING
set move=no #Stop asking to "move read messages to mbox"!
set imap_keepalive=900
#
###Sort by newest conversation first.
set sort=reverse-threads
set sort_aux=last-date-received
#
###Set editor to create new email
set editor='vim'
set ssl_starttls=yes
set ssl_force_tls=yes
Fix for Gmail Account Configuration
The following post worked for me: https://www.codyhiar.com/blog/getting-mutt-setup-with-gmail-using-2-factor-auth-on-ubuntu-14-04/
But it was not very clear. The contents of ~/.muttrc that worked for me are as follows (my account has 2-Step Verification enabled and I had to generate an app password as described in the post):
set imap_user = "<username>#gmail.com"
set imap_pass = "<16-character-app-password>"
set sendmail="/usr/sbin/ssmtp"
set folder="imaps://imap.gmail.com:993"
set spoolfile="imaps://imap.gmail.com/INBOX"
set record="imaps://imap.gmail.com/[Gmail]/Sent Mail"
set postponed="imaps://imap.gmail.com/[Gmail]/Drafts"
set header_cache = "~/.mutt/cache/headers"
set message_cachedir = "~/.mutt/cache/bodies"
set certificate_file = "~/.mutt/certificates"
set from = "<username>#gmail.com"
set realname = "<name-used-in-the-gmail-account>"
set smtp_url = "smtp://<username>#smtp.gmail.com:587/"
set smtp_pass="<16-character-app-password>"
set move = no
set imap_keepalive = 900
# Gmail-style keyboard shortcuts
macro index,pager ga "<change-folder>=[Gmail]/All<tab><enter>" "Go to all mail"
macro index,pager gi "<change-folder>=INBOX<enter>" "Go to inbox"
macro index,pager gs "<change-folder>=[Gmail]/Starred<enter>" "Go to starred messages"
macro index,pager gd "<change-folder>=[Gmail]/Drafts<enter>" "Go to drafts"
macro index,pager e "<enter-command>unset trash\n <delete-message>" "Gmail archive message" # different from Gmail, but wanted to keep "y" to show folders.
Replace the following:
<username>: your Gmail username
<16-character-app-password>: you have to generate this
<name-used-in-the-gmail-account>: your name as per your Gmail account
Note: don't change <change-folder>
Basically, I have subclassed ShellCommand to override evaluateCommand.
In evaluateCommand, I parse the log and find the actual maintainer of the package, in order to send a mail notification.
Everything works fine except the mail notification.
class CustomShellCommand(ShellCommand):
command = None
parser = None
haltOnFailure = True
buildername = ''
ci = None
def __init__(self,command, ci, buildername, **kwargs):
self.ci = ci
self.command = command
self.buildername = buildername
ShellCommand.__init__(self, **kwargs)
if len(self.command) > 0 and self.command[0] == 'make_isolated':
self.parser = ParseLog()
self.addLogObserver('stdio', self.parser)
self.setDefaultWorkdir("build")
def evaluateCommand(self, cmd):
if self.parser is not None:
self.parser.packages
for pkg in self.parser.packages:
emails = get_maintainer_emails()
if cmd.rc > 0:
mn = add_mail_notifiers([self.buildername], emails[-1])
self.ci.masterconfig['services'].append(mn)
return util.FAILURE
else:
return util.SUCCESS
When I add the mail notifiers in __init__ it works, but it does not work in evaluateCommand.
Any pointers would be appreciated.
I am no Buildbot expert; I just started using it two months ago in my new job. But here I think the MailNotifier is something related to the master, and more precisely to its config. For your ShellCommand, I suppose the master executes __init__ when it loads its config, but evaluateCommand, I think, is only executed at runtime by the slave, and slaves cannot change the config of the master.
Where I work, we have written an external script to send personalized mail for failed builds. A builder triggers it once a day, early in the morning, after the nightly builds have finished and before people arrive at the office. We will investigate how to do this more generally, as only one of our projects has this feature; the other projects' failures are summarized in a general mail sent to everyone. Maybe there is something to do with SetProperty, but I cannot tell for now.
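For reference, the usual pattern is to declare the notifier statically when the master loads its config; a minimal sketch for a buildbot 0.8-era master.cfg (the addresses and builder name are placeholders):
# master.cfg: notifiers belong to the master's static configuration,
# not to something a build step appends at runtime
from buildbot.status.mail import MailNotifier

mn = MailNotifier(fromaddr="buildbot@example.com",
                  mode="failing",                  # only mail about failed builds
                  builders=["my-builder"],
                  sendToInterestedUsers=False,
                  extraRecipients=["maintainer@example.com"])
c['status'].append(mn)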
I need to extract the list of all repos under all projects in Bitbucket. Is there a REST API for the same? I couldn't find one.
I have both on-premise and cloud Bitbucket.
Clone ALL Projects & Repositories for a given stash url
#!/usr/bin/python
#
# @author Jason LeMonier
#
# Clone ALL Projects & Repositories for a given stash url
#
# Loop through all projects: [P1, P2, ...]
# P1 > for each project make a directory with the key "P1"
# Then clone every repository inside of directory P1
# Go back up a directory, create P2, ...
#
# Added ACTION_FLAG bit so the same logic can run fetch --all on every repository and/or clone.
import sys
import os
import stashy
ACTION_FLAG = 1 # Bit: +1=Clone, +2=fetch --all
url = os.environ["STASH_URL"] # "https://mystash.com/stash"
user = os.environ["STASH_USER"] # "joedoe"
pwd = os.environ["STASH_PWD"] # Yay123
stash = stashy.connect(url, user, pwd)
def mkdir(xdir):
if not os.path.exists(xdir):
os.makedirs(xdir)
def run_cmd(cmd):
print ("Directory cwd: %s "%(os.getcwd() ))
print ("Running Command: \n %s " %(cmd))
os.system(cmd)
start_dir = os.getcwd()
for project in stash.projects:
pk = project_key = project["key"]
mkdir(pk)
os.chdir(pk)
for repo in stash.projects[project_key].repos.list():
for url in repo["links"]["clone"]:
href = url["href"]
repo_dir = href.split("/")[-1].split(".")[0]
if (url["name"] == "http"):
print (" url.href: %s"% href) # https://joedoe#mystash.com/stash/scm/app/ae.git
print ("Directory cwd: %s Project: %s"%(os.getcwd(), pk))
if ACTION_FLAG & 1 > 0:
if not os.path.exists(repo_dir):
run_cmd("git clone %s" % url["href"])
else:
print ("Directory: %s/%s exists already. Skipping clone. "%(os.getcwd(), repo_dir))
if ACTION_FLAG & 2 > 0:
# chdir into directory "ae" based on url of this repo, fetch, chdir back
cur_dir = os.getcwd()
os.chdir(repo_dir)
run_cmd("git fetch --all ")
os.chdir(cur_dir)
break
os.chdir(start_dir) # avoiding ".." in case of incorrect git directories
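This relies on the third-party stashy client and reads its settings from the environment; a usage sketch, assuming the script is saved as clone_all.py (the file name is hypothetical):
pip install stashy
export STASH_URL="https://mystash.com/stash"
export STASH_USER="joedoe"
export STASH_PWD="Yay123"
python clone_all.py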
Once logged in: on the top right, click on your profile picture and then 'View profile'.
Take note of your user (in the example below 'YourEmail@domain.com', but keep in mind it's case sensitive).
Click on profile pic > Manage account > Personal access token > Create a token (choosing the 'Read' access type is enough for this functionality).
For all repos in all projects:
Open a CLI and use the command below (remember to fill in your server domain!):
curl -u "YourEmail#domain.com" -X GET https://<my_server_domain>/rest/api/1.0/projects/?limit=1000
It will ask you for your personal access token; enter it and you get a JSON file with all the repos requested.
For all repos in a given project:
Pick the project you want to get repos from. In my case, the project URL is: <your_server_domain>/projects/TECH/ and therefore my {projectKey} is 'TECH', which you'll need for the command below.
Open a CLI and use this command (remember to fill in your server domain and projectKey!):
curl -u "YourEmail#domain.com" -X GET https://<my_server_domain>/rest/api/1.0/projects/{projectKey}/repos?limit=50
Final touches
(optional) If you want just the titles of the repos requested and you have jq installed (for Windows, downloading the exe and adding it to PATH should be enough, but you need to restart your CLI for that new addition to be detected), you can use the command below:
curl -u $BBUSER -X GET <my_server_domain>/rest/api/1.0/projects/TECH/repos?limit=50 | jq '.values|.[]|.name'
(tested with Data Center/Atlassian Bitbucket v7.9.0 and powershell CLI)
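Because the 1.0 API is paginated, a fixed limit can silently truncate the results. A minimal Python sketch (the server URL, user, and token are placeholders) that walks every project and repo using the isLastPage/nextPageStart fields:
import requests

BASE = "https://<my_server_domain>/rest/api/1.0"
AUTH = ("YourEmail@domain.com", "<personal-access-token>")

def get_all(url):
    # follow Bitbucket Server's isLastPage/nextPageStart pagination
    start, values = 0, []
    while True:
        page = requests.get(url, params={"start": start, "limit": 100}, auth=AUTH).json()
        values += page["values"]
        if page.get("isLastPage", True):
            return values
        start = page["nextPageStart"]

for project in get_all(BASE + "/projects"):
    for repo in get_all(BASE + "/projects/{}/repos".format(project["key"])):
        print(project["key"], repo["name"])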
For Bitbucket Cloud
You can use their REST API to access and perform queries on your server.
Specifically, you can use this documentation page, provided by Atlassian, to learn how to list your repositories.
For Bitbucket Server
Edit: As of receiving this tweet from Dan Bennett, I've learnt there is an API/plugin system for Bitbucket Server that could possibly cater for your needs. For docs: See here.
Edit2: Found this reference to listing personal repositories that may serve as a solution.
AFAIK there isn't a solution for you unless you build a little API for yourself that interacts with your Bitbucket Server instance.
The Atlassian documentation does indicate that to list all currently configured repositories you can do git remote -v. However, I'm dubious of this, as that isn't normally how git remote -v is used; I think it's more likely that Atlassian's documentation is unclear than that Atlassian built this functionality into Bitbucket Server.
I ended up having to do this myself with an on-prem install of Bitbucket which didn't seem to have the REST APIs discussed above accessible, so I came up with a short script to scrape it out of the web page. This workaround has the advantage that there's nothing you need to install, and you don't need to worry about dependencies, certs or logins other than just logging into your Bitbucket server. You can also set this up as a bookmark if you urlencode the script and prefix it with javascript:.
To use this:
Open your bitbucket server project page, where you should see a list of repos.
Open your browser's devtools console. This is usually F12 or ctrl-shift-i.
Paste the following into the command prompt there.
JSON.stringify(Array.from(document.querySelectorAll('[data-repository-id]')).map(aTag => {
const href = aTag.getAttribute('href');
let projName = href.match(/\/projects\/(.+)\/repos/)[1].toLowerCase();
let repoName = href.match(/\/repos\/(.+)\/browse/)[1];
  repoName = repoName.replace(/ /g, '-'); // a string pattern would only replace the first space
const templ = `https://${location.host}/scm/${projName}/${repoName}.git`;
return {
href,
name: aTag.innerText,
clone: templ
}
}));
The result is a JSON string containing an array with the repo's URL, name, and clone URL.
[{
"href": "/projects/FOO/repos/some-repo-here/browse",
"name": "some-repo-here",
"clone": "https://mybitbucket.company.com/scm/foo/some-repo-here.git"
}]
This ruby script isn't the greatest code, which makes sense, because I'm not the greatest coder. But it is clear, tested, and it works.
The script filters the output of a Bitbucket API call to create a complete report of all repos on a Bitbucket server. Report is arranged by project, and includes totals and subtotals, a link to each repo, and whether the repos are public or personal. I could have simplified it for general use, but it's pretty useful as it is.
There are no command line arguments. Just run it.
#!/usr/bin/ruby
#
# #author Bill Cernansky
#
# List and count all repos on a Bitbucket server, arranged by project, to STDOUT.
#
require 'json'
bbserver = 'http(s)://server.domain.com'
bbuser = 'username'
bbpassword = 'password'
bbmaxrepos = 2000 # Increase if you have more than 2000 repos
reposRaw = JSON.parse(`curl -s -u '#{bbuser}':'#{bbpassword}' -X GET #{bbserver}/rest/api/1.0/repos?limit=#{bbmaxrepos}`)
projects = {}
repoCount = reposRaw['values'].count
reposRaw['values'].each do |r|
projID = r['project']['key']
if projects[projID].nil?
projects[projID] = {}
projects[projID]['name'] = r['project']['name']
projects[projID]['repos'] = {}
end
repoName = r['name']
projects[projID]['repos'][repoName] = r['links']['clone'][0]['href']
end
privateProjCount = projects.keys.grep(/^\~/).count
publicProjCount = projects.keys.count - privateProjCount
reportText = ''
privateRepoCount = 0
projects.keys.sort.each do |p|
# Personal project slugs always start with tilde
isPrivate = p[0] == '~'
projRepoCount = projects[p]['repos'].keys.count
privateRepoCount += projRepoCount if isPrivate
reportText += "\nProject: #{p} : #{projects[p]['name']}\n #{projRepoCount} #{isPrivate ? 'PERSONAL' : 'Public'} repositories\n"
projects[p]['repos'].keys.each do |r|
reportText += sprintf(" %-30s : %s\n", r, projects[p]['repos'][r])
end
end
puts "BITBUCKET REPO REPORT\n\n"
puts sprintf(" Total Projects: %5d Public: %5d Personal: %5d", projects.keys.count, publicProjCount, privateProjCount)
puts sprintf(" Total Repos: %5d Public: %5d Personal: %5d", repoCount, repoCount - privateRepoCount, privateRepoCount)
puts reportText
The way I solved this issue was to get the HTML page and give it a ridiculously high limit, like this (in Python):
cmd = "curl -s -k --user " + username + " https://URL/projects/<KEY_PROJECT_NAME>/?limit\=10000"
Then I parsed it with BeautifulSoup:
import re
import subprocess
from bs4 import BeautifulSoup

make_list = str((subprocess.check_output(cmd, shell=True)).rstrip().decode("utf-8"))
html = make_list
parsed_html = BeautifulSoup(html, 'html.parser')
list1 = []
for a in parsed_html.find_all("a", href=re.compile("/projects/<KEY_PROJECT_NAME>/repos/")):
    list1.append(a.string)
print(list1)
To use this, make sure you change <KEY_PROJECT_NAME>; this should be the Bitbucket project you are targeting. All I am doing is parsing an HTML file.
Here's how I pulled the list of repos from Bitbucket Cloud.
Set up an OAuth Consumer
Go to your workspace settings and setup an OAuth consumer, you should be able to go here directly using this link: https://bitbucket.org/{your_workspace}/workspace/settings/api
The only setting that matters is the callback URL, which can be anything, but I chose http://localhost
Once setup, this will display a key and secret pair for your OAuth consumer, I will refer to these as {oauth_key} and {oauth_secret} below
Authenticate with the API
Go to https://bitbucket.org/site/oauth2/authorize?client_id={oauth_key}&response_type=code ensuring you replace {oauth_key}
This will redirect you to something like http://localhost/?code=xxxxxxxxxxxxxxxxxx, make a note of that code, I'll refer to that as {oauth_code} below
In your terminal, run curl -X POST -u "{oauth_key}:{oauth_secret}" https://bitbucket.org/site/oauth2/access_token -d grant_type=authorization_code -d code={oauth_code}, replacing the placeholders.
This should return JSON including the access_token; I'll refer to that access token as {oauth_token}.
Get the list of repos
You can now run the following to get the list of repos. Bear in mind that your {oauth_token} lasts 2 hours by default.
curl --request GET \
--url 'https://api.bitbucket.org/2.0/repositories/pageant?page=1' \
--header 'Authorization: Bearer {oauth_token}' \
--header 'Accept: application/json'
This response is paginated so you'll need to page through the responses, 10 repositories at a time.
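Since each page of the 2.0 API includes a next URL when more results exist, a small loop can collect every repo; a Python sketch (the workspace and token are placeholders):
import requests

url = "https://api.bitbucket.org/2.0/repositories/{your_workspace}"
headers = {"Authorization": "Bearer {oauth_token}", "Accept": "application/json"}

names = []
while url:
    page = requests.get(url, headers=headers).json()
    names += [repo["name"] for repo in page["values"]]
    url = page.get("next")  # absent on the last page
print(names)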