How can I export Jira issues to BitBucket for import?
I've just moved my project's code from java.net to BitBucket, but my Jira issue tracking is still hosted on java.net. Although BitBucket does have some options for linking to an external issue tracker, I don't think I can use them for java.net, not least because I do not have the admin privileges needed to install the DVCS connector.
So I thought an alternative would be to export the issues and then import them into the BitBucket issue tracker. Is that possible?
Progress so far
So I tried following the steps in both informative answers below (on OSX), but I hit a problem. I was rather confused about what the script would actually be called, because the answers talk about export.py but no script exists with that name, so I renamed the one I downloaded.
sudo easy_install pip (OSX)
pip install jira
pip install configparser
easy_install -U setuptools
Go to https://bitbucket.org/reece/rcore, select the Downloads tab, download the zip and unzip it, and rename the folder to reece (for some reason git clone https://bitbucket.org/reece/rcore fails with an error)
cd reece/rcore
Save the script as export.py in the rcore subfolder
Replace iteritems with items in export.py
Replace iteritems with items in rcore/types/immutabledict.py
Create a .config folder in the rcore folder
Create .config/jira-issues-move-to-bitbucket.conf containing
jira-username=paultaylor
jira-hostname=https://java.net/jira/browse/JAUDIOTAGGER
jira-password=password
Run python export.py --jira-project jaudiotagger
gives
macbook:rcore paul$ python export.py --jira-project jaudiotagger
Traceback (most recent call last):
  File "export.py", line 24, in <module>
    import configparser
ImportError: No module named configparser
I needed to run pip install as root, so I did
sudo pip install configparser
and that worked,
but now
python export.py --jira-project jaudiotagger
gives
  File "export.py", line 35, in <module>
    from jira.client import JIRA
ImportError: No module named jira.client
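Before re-running the script, it can save a round of tracebacks to check that both third-party imports resolve under the same interpreter you invoke; sudo pip and plain pip can target different Pythons. A minimal stdlib-only check (the module names are simply the ones export.py imports):

```python
import importlib.util

# Check that the two third-party modules export.py imports are visible
# to *this* interpreter before running the export itself.
for mod in ("configparser", "jira"):
    found = importlib.util.find_spec(mod) is not None
    print(mod, "is importable:", found)
```

If jira shows as missing even though pip install jira succeeded, the install likely went to a different interpreter; running python -m pip install jira pins the install to the same python you use to run the script.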
You can import issues into BitBucket; they just need to be in the appropriate format. Fortunately, Reece Hart has already written a Python script that connects to a Jira instance and exports the issues.
To get the script to run I had to install the Jira Python package as well as the latest version of rcore (if you use pip you get an incompatible previous version, so you have to get the source). I also had to replace all instances of iteritems with items in the script and in rcore/types/immutabledict.py to make it work with Python 3. You will also need to fill in the dictionaries (priority_map, person_map, etc) with the values your project uses. Finally, you need a config file to exist with the connection info (see comments at the top of the script).
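The iteritems-to-items change mentioned above is mechanical: Python 3 dropped dict.iteritems(), and dict.items() now returns a lazy view that fills the same role. A tiny illustration:

```python
d = {'Closed': 'resolved', 'Open': 'new'}

# Python 2 code iterated with d.iteritems(); under Python 3 the same
# loop uses d.items(), which returns a view rather than a copied list.
for k, v in sorted(d.items()):
    print(k, '->', v)
```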
The basic command line usage is export.py --jira-project <project>
Once you've got the data exported, see the instructions for importing issues to BitBucket.
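For reference, the importer consumes a zip whose top level contains a db-1.0.json file (plus an attachments/ directory when present) — that is exactly the layout the export script writes. A minimal sketch of producing such an archive by hand; the issue fields here mirror the ones the script emits, trimmed to a guessed-at minimum rather than the importer's full documented schema:

```python
import json
import os
import tempfile
import zipfile

# Minimal issue database in the shape the export script produces.
db = {
    "meta": {"default_kind": "bug"},
    "issues": [{"id": 1, "title": "example issue", "status": "new",
                "kind": "bug", "priority": "major", "content": "",
                "reporter": None, "assignee": None}],
    "comments": [], "attachments": [], "logs": [],
    "components": [], "milestones": [], "versions": [],
}

outdir = tempfile.mkdtemp()
json_fn = os.path.join(outdir, "db-1.0.json")
with open(json_fn, "w") as f:
    json.dump(db, f)

# The archive must contain db-1.0.json at its top level.
zip_fn = os.path.join(outdir, "issues.zip")
with zipfile.ZipFile(zip_fn, "w") as zf:
    zf.write(json_fn, arcname="db-1.0.json")

with zipfile.ZipFile(zip_fn) as zf:
    print(zf.namelist())  # ['db-1.0.json']
```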
#!/usr/bin/env python
"""extract issues from JIRA and export to a bitbucket archive

See:
https://confluence.atlassian.com/pages/viewpage.action?pageId=330796872
https://confluence.atlassian.com/display/BITBUCKET/Mark+up+comments
https://bitbucket.org/tutorials/markdowndemo/overview

2014-04-12 08:26 Reece Hart <reecehart@gmail.com>

Requires a file ~/.config/jira-issues-move-to-bitbucket.conf
with content like
[default]
jira-username=some.user
jira-hostname=somewhere.jira.com
jira-password=ur$pass
"""

import argparse
import collections
import configparser
import glob
import itertools
import json
import logging
import os
import pprint
import re
import sys
import zipfile

from jira.client import JIRA

from rcore.types.immutabledict import ImmutableDict

priority_map = {
    'Critical (P1)': 'critical',
    'Major (P2)': 'major',
    'Minor (P3)': 'minor',
    'Nice (P4)': 'trivial',
    }

person_map = {
    'reece.hart': 'reece',
    # etc
    }

issuetype_map = {
    'Improvement': 'enhancement',
    'New Feature': 'enhancement',
    'Bug': 'bug',
    'Technical task': 'task',
    'Task': 'task',
    }

status_map = {
    'Closed': 'resolved',
    'Duplicate': 'duplicate',
    'In Progress': 'open',
    'Open': 'new',
    'Reopened': 'open',
    'Resolved': 'resolved',
    }


def parse_args(argv):
    def sep_and_flatten(l):
        # split comma-sep elements and flatten list
        # e.g., ['a','b','c,d'] -> set('a','b','c','d')
        return list( itertools.chain.from_iterable(e.split(',') for e in l) )

    cf = configparser.ConfigParser()
    cf.readfp(open(os.path.expanduser('~/.config/jira-issues-move-to-bitbucket.conf'),'r'))

    ap = argparse.ArgumentParser(
        description = __doc__
        )
    ap.add_argument(
        '--jira-hostname', '-H',
        default = cf.get('default','jira-hostname',fallback=None),
        help = 'host name of Jira instances (used for url like https://hostname/, e.g., "instancename.jira.com")',
        )
    ap.add_argument(
        '--jira-username', '-u',
        default = cf.get('default','jira-username',fallback=None),
        )
    ap.add_argument(
        '--jira-password', '-p',
        default = cf.get('default','jira-password',fallback=None),
        )
    ap.add_argument(
        '--jira-project', '-j',
        required = True,
        help = 'project key (e.g., JRA)',
        )
    ap.add_argument(
        '--jira-issues', '-i',
        action = 'append',
        default = [],
        help = 'issue id (e.g., JRA-9); multiple and comma-separated okay; default = all in project',
        )
    ap.add_argument(
        '--jira-issues-file', '-I',
        help = 'file containing issue ids (e.g., JRA-9)'
        )
    ap.add_argument(
        '--jira-components', '-c',
        action = 'append',
        default = [],
        help = 'components criterion; multiple and comma-separated okay; default = all in project',
        )
    ap.add_argument(
        '--existing', '-e',
        action = 'store_true',
        default = False,
        help = 'read existing archive (from export) and merge new issues'
        )

    opts = ap.parse_args(argv)

    opts.jira_components = sep_and_flatten(opts.jira_components)
    opts.jira_issues = sep_and_flatten(opts.jira_issues)

    return opts


def link(url,text=None):
    return "[{text}]({url})".format(url=url,text=url if text is None else text)


def reformat_to_markdown(desc):
    def _indent4(mo):
        i = "\n    "
        return i + mo.group(1).replace("\n",i)

    def _repl_mention(mo):
        return "@" + person_map[mo.group(1)]

    #desc = desc.replace("\r","")
    desc = re.sub("{noformat}(.+?){noformat}",_indent4,desc,flags=re.DOTALL+re.MULTILINE)
    desc = re.sub(opts.jira_project+r"-(\d+)",r"issue #\1",desc)
    desc = re.sub(r"\[~([^]]+)\]",_repl_mention,desc)
    return desc


def fetch_issues(opts,jcl):
    jql = [ 'project = ' + opts.jira_project ]
    if opts.jira_components:
        jql += [ ' OR '.join([ 'component = '+c for c in opts.jira_components ]) ]
    if opts.jira_issues:
        jql += [ ' OR '.join([ 'issue = '+i for i in opts.jira_issues ]) ]
    jql_str = ' AND '.join(["("+q+")" for q in jql])
    logging.info('executing query ' + jql_str)
    return jcl.search_issues(jql_str,maxResults=500)


def jira_issue_to_bb_issue(opts,jcl,ji):
    """convert a jira issue to a dictionary with values appropriate for
    POSTing as a bitbucket issue"""
    logger = logging.getLogger(__name__)

    content = reformat_to_markdown(ji.fields.description) if ji.fields.description else ''

    if ji.fields.assignee is None:
        resp = None
    else:
        resp = person_map[ji.fields.assignee.name]

    reporter = person_map[ji.fields.reporter.name]

    jiw = jcl.watchers(ji.key)
    watchers = [ person_map[u.name] for u in jiw.watchers ] if jiw else []

    milestone = None
    if ji.fields.fixVersions:
        vnames = [ v.name for v in ji.fields.fixVersions ]
        milestone = vnames[0]
        if len(vnames) > 1:
            logger.warn("{ji.key}: bitbucket issues may have only 1 milestone (JIRA fixVersion); using only first ({f}) and ignoring rest ({r})".format(
                ji=ji, f=milestone, r=",".join(vnames[1:])))

    issue_id = extract_issue_number(ji.key)

    bbi = {
        'status': status_map[ji.fields.status.name],
        'priority': priority_map[ji.fields.priority.name],
        'kind': issuetype_map[ji.fields.issuetype.name],
        'content_updated_on': ji.fields.created,
        'voters': [],
        'title': ji.fields.summary,
        'reporter': reporter,
        'component': None,
        'watchers': watchers,
        'content': content,
        'assignee': resp,
        'created_on': ji.fields.created,
        'version': None,        # ?
        'edited_on': None,
        'milestone': milestone,
        'updated_on': ji.fields.updated,
        'id': issue_id,
        }

    return bbi


def jira_comment_to_bb_comment(opts,jcl,jc):
    bbc = {
        'content': reformat_to_markdown(jc.body),
        'created_on': jc.created,
        'id': int(jc.id),
        'updated_on': jc.updated,
        'user': person_map[jc.author.name],
        }
    return bbc


def extract_issue_number(jira_issue_key):
    return int(jira_issue_key.split('-')[-1])

def jira_key_to_bb_issue_tag(jira_issue_key):
    return 'issue #' + str(extract_issue_number(jira_issue_key))

def jira_link_text(jk):
    return link("https://invitae.jira.com/browse/"+jk,jk) + " (Invitae access required)"


if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    opts = parse_args(sys.argv[1:])
    dir_name = opts.jira_project
    if opts.jira_components:
        dir_name += '-' + ','.join(opts.jira_components)

    if opts.jira_issues_file:
        issues = [i.strip() for i in open(opts.jira_issues_file,'r')]
        logger.info("added {n} issues from {opts.jira_issues_file} to issues list".format(n=len(issues),opts=opts))
        opts.jira_issues += issues

    opts.dir = os.path.join('/','tmp',dir_name)
    opts.att_rel_dir = 'attachments'
    opts.att_abs_dir = os.path.join(opts.dir,opts.att_rel_dir)
    opts.json_fn = os.path.join(opts.dir,'db-1.0.json')
    if not os.path.isdir(opts.att_abs_dir):
        os.makedirs(opts.att_abs_dir)

    opts.jira_issues = list(set(opts.jira_issues))      # distinctify

    jcl = JIRA({'server': 'https://{opts.jira_hostname}/'.format(opts=opts)},
               basic_auth=(opts.jira_username,opts.jira_password))

    if opts.existing:
        issues_db = json.load(open(opts.json_fn,'r'))
        existing_ids = [ i['id'] for i in issues_db['issues'] ]
        logger.info("read {n} issues from {fn}".format(n=len(existing_ids),fn=opts.json_fn))
    else:
        issues_db = dict()
        issues_db['meta'] = {
            'default_milestone': None,
            'default_assignee': None,
            'default_kind': "bug",
            'default_component': None,
            'default_version': None,
            }
        issues_db['attachments'] = []
        issues_db['comments'] = []
        issues_db['issues'] = []
        issues_db['logs'] = []

        issues_db['components'] = [ {'name':v.name} for v in jcl.project_components(opts.jira_project) ]
        issues_db['milestones'] = [ {'name':v.name} for v in jcl.project_versions(opts.jira_project) ]
        issues_db['versions'] = issues_db['milestones']

    # bb_issue_map: bb issue # -> bitbucket issue
    bb_issue_map = ImmutableDict( (i['id'],i) for i in issues_db['issues'] )

    # jk_issue_map: jira key -> bitbucket issue
    # contains only items migrated from JIRA (i.e., not preexisting issues with --existing)
    jk_issue_map = ImmutableDict()

    # issue_links is a dict of dicts of lists, using JIRA keys
    # e.g., links['CORE-135']['depends on'] = ['CORE-137']
    issue_links = collections.defaultdict(lambda: collections.defaultdict(lambda: []))

    issues = fetch_issues(opts,jcl)
    logger.info("fetch {n} issues from JIRA".format(n=len(issues)))

    for ji in issues:
        # Pfft. Need to fetch the issue again due to bug in JIRA.
        # See https://bitbucket.org/bspeakmon/jira-python/issue/47/, comment on 2013-10-01 by ssonic
        ji = jcl.issue(ji.key,expand="attachments,comments")

        # create the issue
        bbi = jira_issue_to_bb_issue(opts,jcl,ji)
        issues_db['issues'] += [bbi]
        bb_issue_map[bbi['id']] = bbi
        jk_issue_map[ji.key] = bbi
        issue_links[ji.key]['imported from'] = [jira_link_text(ji.key)]

        # add comments
        for jc in ji.fields.comment.comments:
            bbc = jira_comment_to_bb_comment(opts,jcl,jc)
            bbc['issue'] = bbi['id']
            issues_db['comments'] += [bbc]

        # add attachments
        for ja in ji.fields.attachment:
            att_rel_path = os.path.join(opts.att_rel_dir,ja.id)
            att_abs_path = os.path.join(opts.att_abs_dir,ja.id)
            if not os.path.exists(att_abs_path):
                open(att_abs_path,'w').write(ja.get())
                logger.info("Wrote {att_abs_path}".format(att_abs_path=att_abs_path))
            bba = {
                "path": att_rel_path,
                "issue": bbi['id'],
                "user": person_map[ja.author.name],
                "filename": ja.filename,
                }
            issues_db['attachments'] += [bba]

        # parent-child is task-subtask
        if hasattr(ji.fields,'parent'):
            issue_links[ji.fields.parent.key]['subtasks'].append(jira_key_to_bb_issue_tag(ji.key))
            issue_links[ji.key]['parent task'].append(jira_key_to_bb_issue_tag(ji.fields.parent.key))

        # add links
        for il in ji.fields.issuelinks:
            if hasattr(il,'outwardIssue'):
                issue_links[ji.key][il.type.outward].append(jira_key_to_bb_issue_tag(il.outwardIssue.key))
            elif hasattr(il,'inwardIssue'):
                issue_links[ji.key][il.type.inward].append(jira_key_to_bb_issue_tag(il.inwardIssue.key))

        logger.info("migrated issue {ji.key}: {ji.fields.summary} ({components})".format(
            ji=ji,components=','.join(c.name for c in ji.fields.components)))

    # append links section to content
    # this section shows both task-subtask and "issue link" relationships
    for src,dstlinks in issue_links.iteritems():
        if src not in jk_issue_map:
            logger.warn("issue {src}, with issue_links, not in jk_issue_map; skipping".format(src=src))
            continue

        links_block = "Links\n=====\n"
        for desc,dsts in sorted(dstlinks.iteritems()):
            links_block += "* **{desc}**: {links} \n".format(desc=desc,links=", ".join(dsts))

        if jk_issue_map[src]['content']:
            jk_issue_map[src]['content'] += "\n\n" + links_block
        else:
            jk_issue_map[src]['content'] = links_block

    id_counts = collections.Counter(i['id'] for i in issues_db['issues'])
    dupes = [ k for k,cnt in id_counts.iteritems() if cnt>1 ]
    if dupes:
        raise RuntimeError("{n} issue ids appear more than once from existing {opts.json_fn}".format(
            n=len(dupes),opts=opts))

    json.dump(issues_db,open(opts.json_fn,'w'))
    logger.info("wrote {n} issues to {opts.json_fn}".format(n=len(id_counts),opts=opts))

    # write zipfile
    os.chdir(opts.dir)
    with zipfile.ZipFile(opts.dir + '.zip','w') as zf:
        for fn in ['db-1.0.json']+glob.glob('attachments/*'):
            zf.write(fn)
            logger.info("added {fn} to archive".format(fn=fn))
NOTE: I'm writing a new answer because writing this in a comment would be horrible, but most of the credit goes to @Turch's answer.
My steps (on OSX and Debian machines; both worked fine):
apt-get install python-pip (Debian) or sudo easy_install pip (OSX)
pip install jira
pip install configparser
easy_install -U setuptools (not sure if really needed)
Download or clone the source code from https://bitbucket.org/reece/rcore/ into your home folder, for example. Note: don't download it with pip; that gets version 0.0.2 and you need 0.0.3.
Download the Python script created by Reece, mentioned by @Turch, and place it inside the rcore folder.
Follow the instructions by @Turch: I also had to replace all instances of iteritems with items in the script and in rcore/types/immutabledict.py to make it work with Python 3. You will also need to fill in the dictionaries (priority_map, person_map, etc.) with the values your project uses. Finally, you need a config file to exist with the connection info (see comments at the top of the script). Note: I used a hostname like jira.domain.com (no http or https).
(This change did the trick for me.) I had to change part of line 250 from 'https://{opts.jira_hostname}/' to 'http://{opts.jira_hostname}/'.
To finish, run the script like @Turch mentioned: The basic command line usage is export.py --jira-project <project>
The file was placed in /tmp/.zip for me.
The file was perfectly accepted in the BitBucket importer today.
Hooray for Reece and Turch! Thanks guys!
Related
Using poetry on different machines
I am working on a Python project and recently started using poetry. I was originally working on the project using macOS 11.0, but as I near completion, I wanted to test it on a Linux workstation. I use GitHub for my repository, and so I am able to clone the repo easily. However, that is where the simple part ends. I have used both pyenv and conda for the virtual environment on macOS, but when I clone the repo onto the workstation, I set up an environment and then try the following commands:

poetry shell
poetry install

I receive the following error after poetry install:

ParseVersionError
Unable to parse "at20RC5+54.g5702a232fe.dirty".
at ~/.poetry/lib/poetry/_vendor/py3.8/poetry/core/semver/version.py:211 in parse
    207│ except TypeError:
    208│     match = None
    209│
    210│ if match is None:
  → 211│     raise ParseVersionError('Unable to parse "{}".'.format(text))
    212│
    213│ text = text.rstrip(".")
    214│
    215│ major = int(match.group(1))

I have tried poetry lock with the same result, and I have even deleted poetry.lock and attempted poetry lock with no success. I intend to build and publish it when all is said and done, but because I eventually want to add features that my Mac does not have (e.g., CUDA), I want to code on the workstation as well. Any help will be appreciated.

Update -- 03/27/21

pyproject.toml

[tool.poetry]
name = "qaa"
version = "0.1.0"
description = "Quasi-Anharmonic Analysis"
authors = ["Timothy H. Click <tclick@okstate.edu>"]
license = "BSD-3-Clause"
readme = "README.rst"
homepage = "https://github.com/tclick/qaa"
repository = "https://github.com/tclick/qaa"
documentation = "https://qaa.readthedocs.io"
classifiers = [
    "Programming Language :: Python :: 3.6",
    "Programming Language :: Python :: 3.7",
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
]

[tool.poetry.urls]
Changelog = "https://github.com/tclick/qaa/releases"

[tool.poetry.dependencies]
python = "^3.8"
click = "^7.0"
numpy = "^1.20.1"
scipy = "^1.6.1"
matplotlib = "^3.3.4"
seaborn = "^0.11.1"
scikit-learn = "^0.24.1"
pandas = "^1.2.3"
netCDF4 = "^1.5.6"
mdtraj = "^1.9.5"

[tool.poetry.dev-dependencies]
pytest = "^6.2.2"
pytest-cache = "^1.0"
pytest-click = "^1.0.2"
pytest-console-scripts = "^1.1.0"
pytest-cov = "^2.11.1"
pytest-flake8 = "^1.0.7"
pytest-mock = "^3.5.1"
pytest-pep8 = "^1.0.6"
pytest-randomly = "^3.5.0"
coverage = {extras = ["toml"], version = "^5.3"}
safety = "^1.9.0"
mypy = "^0.812"
typeguard = "^2.9.1"
xdoctest = {extras = ["colors"], version = "^0.15.0"}
sphinx = "^3.3.1"
sphinx-autobuild = "^2020.9.1"
pre-commit = "^2.8.2"
flake8 = "^3.8.4"
black = "^20.8b1"
flake8-bandit = "^2.1.2"
flake8-bugbear = "^21.3.2"
flake8-docstrings = "^1.5.0"
flake8-rst-docstrings = "^0.0.14"
pep8-naming = "^0.11.1"
darglint = "^1.5.5"
reorder-python-imports = "^2.3.6"
pre-commit-hooks = "^3.3.0"
sphinx-rtd-theme = "^0.5.0"
sphinx-click = "^2.5.0"
Pygments = "^2.7.2"
ipython = "^7.21.0"
isort = "^5.7.0"
towncrier = "^19.2.0"
nox = "^2020.12.31"
pytest-coverage = "^0.0"
nox-poetry = "^0.8.4"
numpydoc = "^1.1.0"
codecov = "^2.1.11"
flake8-black = "^0.2.1"
flake8-import-order = "^0.18.1"

[tool.poetry.scripts]
qaa = "qaa.__main__:main"

Update -- 03/29/21

This is the error received when running poetry lock -vvv:

Using virtualenv: /home/tclick/.cache/pypoetry/virtualenvs/qaa-VNW0yB_S-py3.8
Stack trace:

10  ~/.poetry/lib/poetry/_vendor/py3.8/clikit/console_application.py:131 in run
    129│ parsed_args = resolved_command.args
    130│
  → 131│ status_code = command.handle(parsed_args, io)
    132│ except KeyboardInterrupt:
    133│ status_code = 1

 9  ~/.poetry/lib/poetry/_vendor/py3.8/clikit/api/command/command.py:120 in handle
    118│ def handle(self, args, io):  # type: (Args, IO) -> int
    119│ try:
  → 120│ status_code = self._do_handle(args, io)
    121│ except KeyboardInterrupt:
    122│ if io.is_debug():

 8  ~/.poetry/lib/poetry/_vendor/py3.8/clikit/api/command/command.py:163 in _do_handle
    161│ if self._dispatcher and self._dispatcher.has_listeners(PRE_HANDLE):
    162│ event = PreHandleEvent(args, io, self)
  → 163│ self._dispatcher.dispatch(PRE_HANDLE, event)
    164│
    165│ if event.is_handled():

 7  ~/.poetry/lib/poetry/_vendor/py3.8/clikit/api/event/event_dispatcher.py:22 in dispatch
     20│
     21│ if listeners:
  →  22│ self._do_dispatch(listeners, event_name, event)
     23│
     24│ return event

 6  ~/.poetry/lib/poetry/_vendor/py3.8/clikit/api/event/event_dispatcher.py:89 in _do_dispatch
     87│ break
     88│
  →  89│ listener(event, event_name, self)
     90│
     91│ def _sort_listeners(self, event_name):  # type: (str) -> None

 5  ~/.poetry/lib/poetry/console/config/application_config.py:141 in set_installer
    139│
    140│ poetry = command.poetry
  → 141│ installer = Installer(
    142│     event.io,
    143│     command.env,

 4  ~/.poetry/lib/poetry/installation/installer.py:65 in __init__
     63│ self._installer = self._get_installer()
     64│ if installed is None:
  →  65│ installed = self._get_installed()
     66│
     67│ self._installed_repository = installed

 3  ~/.poetry/lib/poetry/installation/installer.py:561 in _get_installed
    559│
    560│ def _get_installed(self):  # type: () -> InstalledRepository
  → 561│ return InstalledRepository.load(self._env)
    562│

 2  ~/.poetry/lib/poetry/repositories/installed_repository.py:118 in load
    116│ path = Path(str(distribution._path))
    117│ version = distribution.metadata["version"]
  → 118│ package = Package(name, version, version)
    119│ package.description = distribution.metadata.get("summary", "")
    120│

 1  ~/.poetry/lib/poetry/_vendor/py3.8/poetry/core/packages/package.py:61 in __init__
     59│
     60│ if not isinstance(version, Version):
  →  61│ self._version = Version.parse(version)
     62│ self._pretty_version = pretty_version or version
     63│ else:

ParseVersionError
Unable to parse "at20RC5+54.g5702a232fe.dirty".
at ~/.poetry/lib/poetry/_vendor/py3.8/poetry/core/semver/version.py:206 in parse
    202│ except TypeError:
    203│     match = None
    204│
    205│ if match is None:
  → 206│     raise ParseVersionError('Unable to parse "{}".'.format(text))
    207│
    208│ text = text.rstrip(".")
    209│
    210│ major = int(match.group(1))
I think you should NOT run poetry shell first; it's used only once the poetry project and its dependencies are installed. poetry install should work fine in the project directory with pyproject.toml. Possible help: ensure that poetry is installed and the path to the poetry executable is in the PATH environment variable; check that Python 3.8+ is installed (per your dependencies). If you use pyenv, check that the path to this version is available to the terminal.
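The traceback above shows the mechanism: some distribution installed in the active environment reports the version string at20RC5+54.g5702a232fe.dirty, which poetry's vendored semver parser cannot match. A rough stdlib-only sketch of why that string fails — the real pattern inside poetry is far more elaborate, and this regex is only illustrative:

```python
import re

# Illustrative only: a version is expected to start with a numeric
# major[.minor[.patch]] core, which "at20RC5+54.g5702a232fe.dirty" lacks.
VERSION_CORE = re.compile(r"^v?\d+(\.\d+){0,2}")

def looks_parseable(text):
    return VERSION_CORE.match(text) is not None

print(looks_parseable("1.20.1"))                        # True
print(looks_parseable("at20RC5+54.g5702a232fe.dirty"))  # False
```

Locating which installed distribution reports that version (for example by iterating over importlib.metadata.distributions() and printing each name and version) and reinstalling or removing it from the environment is one way to clear the error.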
`nixos-rebuild switch` gets stuck when using `builtins.fetchGit`
I'm trying to download a package with a version that is not on nixpkgs. To do so, I'm using builtins.fetchGit. Here's a summary of the file where I use fetchGit (/etc/nixos/home/core.nix) for a better idea:

{ pkgs, username, homeDirectory }:

############################
# Custom package snapshots #
############################
let
  custom-ver-pkgs = {
    # Haskell Language Server
    hls =
      let
        pkgsSnapshot = import (builtins.fetchGit {
          name = "custom hls version";
          url = "https://github.com/nixos/nixpkgs-channels/";
          ref = "refs/heads/nixpkgs-unstable";
          rev = "2c162d49cd5b979eb66ff1653aecaeaa01690fcc";
        }) {};
      in pkgsSnapshot.haskellPackages.haskell-language-server;
  };
in
{
  # Actual config
}

And here's the point where I use the hls keyword defined above:

# Packages
home.packages = with pkgs; [
  ... # Normal packages
] ++
# Packages with custom version (See start of file)
(with custom-ver-pkgs; [
  hls
]);

As you can see, I also use home-manager. The above-mentioned .../core.nix file is imported directly into /etc/nixos/configuration.nix. As the title says, if I run sudo nixos-rebuild switch, the terminal freezes (in the sense that the command runs forever without doing anything). What could my problem be?
azure pipeline to delete the old azure git branch(not repo)
I am trying to create an Azure pipeline to delete old Azure git branches (not the repo itself), i.e. an automated pipeline which will take the parameters below:

Project Name
Repo Name
Target date

Based on the input provided, all branches created before the target date for the given repo should be deleted. Note: we will only delete child branches, not master. Rules: branches should only be deleted on the basis of a dry-run flag; if the flag is true, delete all branches in the repo older than the given target date, excluding the master branch. It's better if we can write the code in Python.
I am using the Azure REST API to fetch the branches, but I am not able to delete them as per the date parameters.
Everything works except the user input in the Azure pipeline, which I had hard-coded. For the user input (input credentials), please reference the sample below:

import requests
import base64

repo_endpoint_url = "https://dev.azure.com/<organization>/<project>/_apis/git/repositories?api-version=5.1"
username = ""  # This can be an arbitrary value or you can just leave it empty
password = "<your-password-here>"

userpass = username + ":" + password
b64 = base64.b64encode(userpass.encode()).decode()
headers = {"Authorization": "Basic %s" % b64}

response = requests.get(repo_endpoint_url, headers=headers)
print(response.status_code)  # Expect 200

You can also try using a PAT or the OAuth token $env:SYSTEM_ACCESSTOKEN directly (use the PAT or $env:SYSTEM_ACCESSTOKEN in place of the password). However, to enable your script to use the build pipeline OAuth token, we need to go to the Options tab of the build pipeline and select "Allow Scripts to Access OAuth Token".
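The credential handling in that sample boils down to standard HTTP Basic auth, which can be isolated into a small helper; a self-contained sketch (the PAT value here is a placeholder):

```python
import base64

def basic_auth_header(username, password):
    # Azure DevOps accepts a PAT as the Basic-auth password; the
    # username part can be empty or arbitrary.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": "Basic " + token}

headers = basic_auth_header("", "<your-pat-here>")
print(headers["Authorization"].startswith("Basic "))  # True
```

The same headers dict can then be passed to requests.get or requests.post instead of repeating the encoding inline.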
import requests
import sys
from datetime import datetime as dt
import json
from git import Repo
import git
import time

username = '<u name>'
auth_key = '<auth key>'

class gitRepoDeletion:
    def getRepo(self, organization_name, project_name, repo_name):
        """
        Getting the repo details from the user and filtering out the master
        branch, with date functionality (still implementing)
        """
        getting_repo_list = "https://dev.azure.com/" + organization_name + '/' + \
            project_name + "/_apis/git/repositories/" + repo_name + "/refs?api-version=5.0"
        get_reponse = requests.get(getting_repo_list, auth=(username, auth_key))
        try:
            repojson = json.loads(get_reponse.content)
        except ValueError:
            print("Error loading json file")
        output_json = [x for x in repojson['value']
                       if x['name'] != 'refs/heads/master']
        with open('/home/vsts/work/1/s/data.json', 'w', encoding='utf-8') as f:
            json.dump(output_json, f, ensure_ascii=False, indent=4)

    def filtering_branches(self, organization_name, project_name, repo_name, user_date):
        """
        Filtering branches according to the date passed by the user
        """
        git_url = "https://" + organization_name + "@dev.azure.com" + '/' + \
            organization_name + '/' + project_name + '/_git' + '/' + repo_name
        branches = Repo.clone_from(git_url, "./mylocaldir209")
        remote_branches = []
        for ref in branches.git.branch('-r').split('\n'):
            if ref != ' origin/HEAD -> origin/master':
                if ref != ' origin/master':
                    remote_branches.append(ref[9:])
                else:
                    pass
        branch_and_timing_dict = {}
        for listy in remote_branches:
            branches.git.checkout(listy)
            commit = branches.head.commit
            timing = time.strftime(
                "%d/%m/%Y", time.gmtime(commit.committed_date)).replace(' 0', ' ')
            branch_and_timing_dict[listy] = timing
        global filterlist
        filterlist = []
        for key, values in branch_and_timing_dict.items():
            d1 = dt.strptime(user_date, "%d/%m/%Y")
            # values holds the branch's last-commit date; key is the branch name
            d2 = dt.strptime(values, "%d/%m/%Y")
            if d1 > d2:
                filterlist.append(key)
            else:
                pass
        return filterlist

    def repo_delete(self, organization_name, project_name, repo_name, dry_flag):
        """
        Deleting branches as per the date input by the user, also excluding master
        """
        all_repo_to_be_deleted = []
        newObjectId = "0000000000000000000000000000000000000000"
        filteredBranchesAsPerDateWithRef = []
        for value in filterlist:
            filteredBranchesAsPerDateWithRef.append("refs/heads/" + value)
            print(value)
        print(filteredBranchesAsPerDateWithRef)

        # Reading data.json, which is written by the getRepo() method after
        # excluding the master branch
        with open('/home/vsts/work/1/s/data.json') as data_file:
            json_data = json.load(data_file)
        for item in json_data:
            name_of_branch = item['name']
            objectId = item['objectId']
            # Adding name_of_branch to the all_repo_to_be_deleted list
            all_repo_to_be_deleted.append(name_of_branch)

        passing_branch_name = "https://dev.azure.com/" + organization_name + '/' + \
            project_name + "/_apis/git/repositories/" + repo_name + "/refs?api-version=5.0"
        headers = {'Content-type': 'application/json'}
        for nameOfBranchWithref in filteredBranchesAsPerDateWithRef:
            print(nameOfBranchWithref)
            data = [
                {
                    "name": nameOfBranchWithref,
                    "newObjectId": newObjectId,
                    "oldObjectId": objectId,
                }
            ]
            # Trimming extra spaces and lowering the case of the dry-run flag
            # value passed by the user
            dry_flag = dry_flag.lower().strip()
            if dry_flag == 'true':
                repo_delete = requests.post(passing_branch_name, data=json.dumps(
                    data), headers=headers, auth=(username, auth_key))
                print(repo_delete)
            else:
                with open('delete_repo.txt', 'w') as d:
                    for item in all_repo_to_be_deleted:
                        d.write("%s\n" % item)
                print("---- This is Dry Run ----")
                print("These are the branches to be deleted: ", all_repo_to_be_deleted)

if __name__ == "__main__":
    gitRepoDeletion().getRepo('sushmasureshyadav202', 'my_delete_git', 'my_delete_git')
    gitRepoDeletion().filtering_branches(
        "<azure org name>", '<azure project>', '<azure repo>', "31/1/2020")
    gitRepoDeletion().repo_delete(
        "<azure org name>", '<azure project>', '<azure repo>', 'true')
Jupyter Importing Ipynb files Error: no module named 'mynotebook'
I need to import different ipynb files, so I tried this: https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Importing%20Notebooks.html But I get "no module named 'mynotebook'" found. (I even tried it with other notebook names, which definitely exist, but it is still not working.) Do you have any ideas about what I could do?

import io, os, sys, types
from IPython import get_ipython
from nbformat import read
from IPython.core.interactiveshell import InteractiveShell

def find_notebook(fullname, path=None):
    name = fullname.rsplit('.', 1)[-1]
    if not path:
        path = ['']
    for d in path:
        nb_path = os.path.join(d, name + ".ipynb")
        if os.path.isfile(nb_path):
            return nb_path
        # let import Notebook_Name find "Notebook Name.ipynb"
        nb_path = nb_path.replace("_", " ")
        if os.path.isfile(nb_path):
            return nb_path

class NotebookLoader(object):
    def __init__(self, path=None):
        self.shell = InteractiveShell.instance()
        self.path = path

    def load_module(self, fullname):
        """import a notebook as a module"""
        path = find_notebook(fullname, self.path)
        print("importing Jupyter notebook from %s" % path)
        # load the notebook object
        with io.open(path, 'r', encoding='utf-8') as f:
            nb = read(f, 4)
        # create the module and add it to sys.modules
        # if name in sys.modules:
        #    return sys.modules[name]
        mod = types.ModuleType(fullname)
        mod.__file__ = path
        mod.__loader__ = self
        mod.__dict__['get_ipython'] = get_ipython
        sys.modules[fullname] = mod
        # extra work to ensure that magics that would affect the user_ns
        # actually affect the notebook module's ns
        save_user_ns = self.shell.user_ns
        self.shell.user_ns = mod.__dict__
        try:
            for cell in nb.cells:
                if cell.cell_type == 'code':
                    # transform the input to executable Python
                    code = self.shell.input_transformer_manager.transform_cell(cell.source)
                    # run the code in the module
                    exec(code, mod.__dict__)
        finally:
            self.shell.user_ns = save_user_ns
        return mod

class NotebookFinder(object):
    def __init__(self):
        self.loaders = {}

    def find_module(self, fullname, path=None):
        nb_path = find_notebook(fullname, path)
        if not nb_path:
            return
        key = path
        if path:
            # lists aren't hashable
            key = os.path.sep.join(path)
        if key not in self.loaders:
            self.loaders[key] = NotebookLoader(path)
        return self.loaders[key]

sys.meta_path.append(NotebookFinder())
import mynotebook

I just want to import the code of another jupyter file.
Wow, I also faced this problem. I created a new env, and after opening Jupyter it couldn't find nbformat in my newly installed env, so just: pip install nbformat
Using input function with remote files in snakemake
I want to use a function to read input file paths from a dataframe and send them to my snakemake rule. I also have a helper function to select the remote from which to pull the files.

from snakemake.remote.GS import RemoteProvider as GSRemoteProvider
from snakemake.remote.SFTP import RemoteProvider as SFTPRemoteProvider
from os.path import join
import pandas as pd

configfile: "config.yaml"
units = pd.read_csv(config["units"]).set_index(["library", "unit"], drop=False)
TMP = join('data', 'tmp')

def access_remote(local_path):
    """ Connects to remote as defined in config file """
    provider = config['provider']
    if provider == 'GS':
        GS = GSRemoteProvider()
        remote_path = GS.remote(join("gs://" + config['bucket'], local_path))
    elif provider == 'SFTP':
        SFTP = SFTPRemoteProvider(
            username=config['user'],
            private_key=config['ssh_key']
        )
        remote_path = SFTP.remote(
            config['host'] + ":22" + join(base_path, local_path)
        )
    else:
        remote_path = local_path
    return remote_path

def get_fastqs(wc):
    """
    Get fastq files (units) of a particular library - sample
    combination from the unit sheet.
    """
    fqs = units.loc[
        (units.library == wc.library) &
        (units.libtype == wc.libtype),
        "fq1"
    ]
    return {
        "r1": list(map(access_remote, fqs.fq1.values)),
    }

# Combine all fastq files from the same sample / library type combination
rule combine_units:
    input: unpack(get_fastqs)
    output:
        r1 = join(TMP, "reads", "{library}_{libtype}.end1.fq.gz")
    threads: 12
    run:
        shell("cat {i1} > {o1}".format(i1=input['r1'], o1=output['r1']))

My config file contains the bucket name and provider, which are passed to the function. This works as expected when running simply snakemake. However, I would like to use the kubernetes integration, which requires passing the provider and bucket name on the command line.

But when I run:

snakemake -n --kubernetes --default-remote-provider GS --default-remote-prefix bucket-name

I get this error:

ERROR :: MissingInputException in line 19 of Snakefile:
Missing input files for rule combine_units:
bucket-name/['bucket-name/lib1-unit1.end1.fastq.gz', 'bucket-name/lib1-unit2.end1.fastq.gz', 'bucket-name/lib1-unit3.end1.fastq.gz']

The bucket prefix is applied twice: once mapped correctly to each element, and once before the whole list, which gets converted to a string. Did I miss something? Is there a good way to work around this?