multiprocess.Pool hangs - subprocess

I'm using multiprocess.Pool to run several git clone processes, but when one git repo is much larger than the others, the git clone process hangs.
The Pool is used as follows:
def ProcessHelper(runfunc, incl_infos, project_root, skip_dirs):
    p = Pool(processNum)
    multi_res = [p.apply_async(runfunc, args=(incl_info, project_root, skip_dirs,)) for incl_info in incl_infos]
    p.close()
    p.join()
    resMesList = []
    for res in multi_res:
        resMes = str(res.get())
        if (resMes.find("Failed") != -1):
            resMesList.append(resMes)
The worker function then checks whether the entry is a git repo and clones the data if it is:
def runfunc(incl_info, project_root, skip_dirs):
    if incl_info == 0:
        git_url = project_root["git_url"]
        dir_path = project_root["dir_path"]
        GitCloneInShell(git_url, dir_path)

def GitCloneInShell(git_url, dir_path):
    cmd = ("git clone %s %s " % (git_url, dir_path))
    LogInfo("git clone cmd : %s" % cmd)
    status = subprocess.call(cmd, shell=True)
    return status
It usually works well, but when dealing with one large repo alongside several smaller ones, it hangs on the large repo.
Can anyone give me some suggestions, please? Thanks a lot.
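One diagnostic sketch (not the original code; it assumes Python 3 and the standard multiprocessing module, and CLONE_TIMEOUT_S is a hypothetical value): put a timeout on the clone itself, so a stuck clone surfaces as a failure instead of blocking p.join() forever. A timeout on res.get() only helps if results are collected before the pool is joined.

import subprocess

CLONE_TIMEOUT_S = 1800  # hypothetical limit; tune it for your largest repository

def GitCloneInShell(git_url, dir_path):
    cmd = "git clone %s %s" % (git_url, dir_path)
    print("git clone cmd : %s" % cmd)
    try:
        # subprocess.run() raises TimeoutExpired instead of blocking forever;
        # with shell=True the git child may still need separate cleanup (e.g. killing its process group)
        completed = subprocess.run(cmd, shell=True, timeout=CLONE_TIMEOUT_S)
        return completed.returncode
    except subprocess.TimeoutExpired:
        return "Failed: git clone timed out for %s" % git_url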

Related

Yocto - git revision in the image name

By default Yocto adds the build timestamp to the output image file name, but I would like to replace it with the revision of my integration Git repository (which references all my layers and configuration files). To achieve this, I put the following code in my image recipe:
def get_image_version(d):
    import subprocess
    import os.path
    try:
        parentRepo = os.path.dirname(d.getVar("COREBASE", True))
        return subprocess.check_output(["git", "describe", "--tags", "--long", "--dirty"], cwd = parentRepo, stderr = subprocess.DEVNULL).strip().decode('UTF-8')
    except:
        return d.getVar("MACHINE", True) + "-" + d.getVar("DATETIME", True)

IMAGE_VERSION = "${@get_image_version(d)}"
IMAGE_NAME = "${IMAGE_BASENAME}-${IMAGE_VERSION}"
IMAGE_NAME[vardepsexclude] = "IMAGE_VERSION"
This code works properly until I change Git revision (e.g. by adding a new commit). Then I receive the following error:
ERROR: When reparsing /home/ubuntu/yocto/poky/../mylayer/recipes-custom/images/core-image-minimal.bb.do_image_tar, the basehash value changed from 63e1e69797d2813a4c36297517478a28 to 9788d4bf2950a23d0f758e4508b0a894. The metadata is not deterministic and this needs to be fixed.
I understand this happens because the image recipe has already been parsed with the older Git revision, but why do constant changes of the build timestamp not cause the same error? How can I fix my code to overcome this problem?
The timestamp does not have this effect since it is added to vardepsexclude:
https://docs.yoctoproject.org/bitbake/bitbake-user-manual/bitbake-user-manual-metadata.html#variable-flags
[vardepsexclude]: Specifies a space-separated list of variables that should be excluded from a variable’s dependencies for the purposes of calculating its signature.
You may need to add this in a couple of places, e.g.:
https://git.yoctoproject.org/poky/tree/meta/classes/image-artifact-names.bbclass#n7
IMAGE_VERSION_SUFFIX ?= "-${DATETIME}"
IMAGE_VERSION_SUFFIX[vardepsexclude] += "DATETIME SOURCE_DATE_EPOCH"
IMAGE_NAME ?= "${IMAGE_BASENAME}-${MACHINE}${IMAGE_VERSION_SUFFIX}"
After some research it turned out the problem was in this line
IMAGE_VERSION = "${@get_image_version(d)}"
because the function get_image_version() was called during parsing. I took inspiration from the source file in aehs29's post and moved the code into an anonymous Python function, which is called after parsing.
I also had to add the vardepsexclude attribute to the IMAGE_NAME variable. I tried adding the vardepvalue flag to the IMAGE_VERSION variable as well, and in this particular case it did the same job as vardepsexclude. The mentioned BitBake class uses both flags, but I think in my case using only one of them is enough.
The final code is below:
IMAGE_VERSION ?= "${MACHINE}-${DATETIME}"
IMAGE_NAME = "${IMAGE_BASENAME}-${IMAGE_VERSION}"
IMAGE_NAME[vardepsexclude] += "IMAGE_VERSION"
python () {
    import subprocess
    import os.path
    try:
        parentRepo = os.path.dirname(d.getVar("COREBASE", True))
        version = subprocess.check_output(["git", "describe", "--tags", "--long", "--dirty"], cwd = parentRepo, stderr = subprocess.DEVNULL).strip().decode('UTF-8')
        d.setVar("IMAGE_VERSION", version)
    except:
        bb.warn("Could not get Git revision, image will have default name.")
}
EDIT:
After some research I realized it is better to define a global variable in the layer.conf file of the layer containing the recipes that reference the variable. The variable is set by a Python function and is immediately expanded to prevent the non-deterministic metadata error:
layer.conf:
require conf/image-version.py.inc
IMAGE_VERSION := "${@get_image_version(d)}"
image-version.py.inc:
def get_image_version(d):
    import subprocess
    import os.path
    try:
        parentRepo = os.path.dirname(d.getVar("COREBASE", True))
        return subprocess.check_output(["git", "describe", "--tags", "--long", "--dirty"], cwd = parentRepo, stderr = subprocess.DEVNULL).strip().decode('UTF-8')
    except:
        bb.warn("Could not determine image version. Default naming schema will be used.")
        return d.getVar("MACHINE", True) + "-" + d.getVar("DATETIME", True)
I think this is a cleaner approach which fits the BitBake build system better.

checksum in recipe isn't being checked

I have a recipe:
BB_STRICT_CHECKSUM = "1"
SRC_URI += "file://foo1.zip;md5sum=1234;unpack=0"
SRC_URI += "file://foo2.tar.gz;md5sum=5678;unpack=0"
Both checksums are wrong, but bitbake still accepts them.
You can try to do the file check by hand with something like:
ZOO1_MD5 = "1234"

python do_package_prepend() {
    input = oe.path.join(d.getVar('B'), 'foo1.zip')
    cks = bb.utils.md5_file(input)
    xpct = d.getVar('ZOO1_MD5')
    if cks != xpct:
        raise bb.fetch2.FetchError("MD5 fails for ...")
}
Another solution could be to put foo1.zip in a separate git repository and rely on the git fetcher checks.
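If you go the manual route above, you also need the expected digest to put into ZOO1_MD5. A small standalone sketch in plain Python (the md5_file name is just for illustration; it computes the same hex MD5 digest that bb.utils.md5_file() returns):

import hashlib

def md5_file(path, block_size=1024 * 1024):
    # read the file in chunks so large archives do not need to fit in memory
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(block_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(md5_file("foo1.zip"))  # value to paste into ZOO1_MD5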

How to determine which forks on GitHub are ahead?

Sometimes, the original GitHub repository of a piece of software I'm using, such as linkchecker, is seeing little or no development, while a lot of forks have been created (in this case: 142, at the time of writing).
For each fork, I'd like to know:
which branches it has with commits ahead of the original master branch
and for each such branch:
how many commits it is ahead of the original
how many commits it is behind
GitHub has a web interface for comparing forks, but I don't want to do this manually for each fork, I just want a CSV file with the results for all forks. How can this be scripted? The GitHub API can list the forks, but I can't see how to compare forks with it. Cloning every fork in turn and doing the comparison locally seems a bit crude.
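For reference, the GitHub API pieces the scripted answers below build on: /repos/{owner}/{repo}/forks lists the forks, and the compare endpoint /repos/{owner}/{repo}/compare/{base}...{user}:{branch} reports ahead_by and behind_by across forks. A rough sketch that writes a CSV (default branch of each fork only, since comparing every branch multiplies the API calls; the forks() helper, HEADERS, and the forks.csv filename are just illustrative, and the script is untested against the rate limit):

import csv
import requests

OWNER, REPO = "wummel", "linkchecker"  # the original repository from the question
API = "https://api.github.com"
HEADERS = {"Accept": "application/vnd.github+json"}  # add an Authorization header to raise the rate limit

def forks(owner, repo):
    # page through /forks until an empty page comes back
    page = 1
    while True:
        batch = requests.get(f"{API}/repos/{owner}/{repo}/forks",
                             params={"per_page": 100, "page": page},
                             headers=HEADERS).json()
        if not batch:
            return
        for fork in batch:
            yield fork
        page += 1

with open("forks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["fork", "branch", "ahead_by", "behind_by"])
    for fork in forks(OWNER, REPO):
        login = fork["owner"]["login"]
        branch = fork["default_branch"]
        # compare the original repo's master with the fork's default branch
        cmp = requests.get(f"{API}/repos/{OWNER}/{REPO}/compare/master...{login}:{branch}",
                           headers=HEADERS).json()
        if cmp.get("ahead_by", 0) > 0:
            writer.writerow([fork["html_url"], branch, cmp["ahead_by"], cmp["behind_by"]])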
After clicking "Insights" on top and then "Forks" on the left, the following bookmarklet prints the info directly onto the web page.
The code to add as a bookmarklet (or to paste into the console):
javascript:(async () => {
  /* while on the forks page, collect all the hrefs and pop off the first one (original repo) */
  const aTags = [...document.querySelectorAll('div.repo a:last-of-type')].slice(1);
  for (const aTag of aTags) {
    /* fetch the forked repo as html, search for the "This branch is [n commits ahead,] [m commits behind]", print it directly onto the web page */
    await fetch(aTag.href)
      .then(x => x.text())
      .then(html => aTag.outerHTML += `${html.match(/This branch is.*/).pop().replace('This branch is', '').replace(/([0-9]+ commits? ahead)/, '<font color="#0c0">$1</font>').replace(/([0-9]+ commits? behind)/, '<font color="red">$1</font>')}`)
      .catch(console.error);
  }
})();
You can also paste the code into the address bar, but note that some browsers delete the leading javascript: while pasting, so you'll have to type javascript: yourself. Or copy everything except the leading j, type j, and paste the rest.
It has been modified from this answer.
Bonus
The following bookmarklet also prints the links to the ZIP files:
The code to add as a bookmarklet (or to paste into the console):
javascript:(async () => {
  /* while on the forks page, collect all the hrefs and pop off the first one (original repo) */
  const aTags = [...document.querySelectorAll('div.repo a:last-of-type')].slice(1);
  for (const aTag of aTags) {
    /* fetch the forked repo as html, search for the "This branch is [n commits ahead,] [m commits behind]", print it directly onto the web page */
    await fetch(aTag.href)
      .then(x => x.text())
      .then(html => aTag.outerHTML += `${html.match(/This branch is.*/).pop().replace('This branch is', '').replace(/([0-9]+ commits? ahead)/, '<font color="#0c0">$1</font>').replace(/([0-9]+ commits? behind)/, '<font color="red">$1</font>')}` + " <a " + `${html.match(/href="[^"]*\.zip">/).pop() + "Download ZIP</a>"}`)
      .catch(console.error);
  }
})();
useful-forks
useful-forks is an online tool which filters all the forks based on ahead criteria. I think it answers your needs quite well. :)
For the repo in your question, you could do: https://useful-forks.github.io/?repo=wummel/linkchecker
That should provide you with similar results (run on 2022-04-02).
Also available as a Chrome Extension
Download it here: https://chrome.google.com/webstore/detail/useful-forks/aflbdmaojedofngiigjpnlabhginodbf
And as a bookmarklet
Add this as the URL of a new bookmark, and click that bookmark when you're on a repo:
javascript:!function(){if(m=window.location.href.match(/github\.com\/([\w.-]+)\/([\w.-]+)/),m){window.open(`https://useful-forks.github.io/?repo=${m[1]}/${m[2]}`)}else window.alert("Not a GitHub repo")}();
Although to be honest, it's a better option to simply get the Chrome Extension, if you can.
Disclaimer
I am the maintainer of this project.
Had exactly the same itch and wrote a scraper that takes the info printed in the rendered HTML for forks: https://github.com/hbbio/forkizard
Definitely not perfect, but a temporary solution.
Late to the party - I think this is the second time I've ended up on this SO post so I'll share my js-based solution (I ended up making a bookmarklet by just fetching and searching the html pages).
You can either create a bookmarklet from this, or simply paste the whole thing into the console. It works on Chromium-based browsers and Firefox:
EDIT: if there are more than 10 or so forks on the page, you may get locked out for scraping too fast (429 Too Many Requests in the network tab). Use async / await instead:
javascript:(async () => {
  /* while on the forks page, collect all the hrefs and pop off the first one (original repo) */
  const forks = [...document.querySelectorAll('div.repo a:last-of-type')].map(x => x.href).slice(1);
  for (const fork of forks) {
    /* fetch the forked repo as html, search for the "This branch is [n commits ahead,] [m commits behind]", print it to console */
    await fetch(fork)
      .then(x => x.text())
      .then(html => console.log(`${fork}: ${html.match(/This branch is.*/).pop().replace('This branch is ', '')}`))
      .catch(console.error);
  }
})();
or you can do batches, but it's pretty easy to get locked out
javascript:(async () => {
  /* while on the forks page, collect all the hrefs and pop off the first one (original repo) */
  const forks = [...document.querySelectorAll('div.repo a:last-of-type')].map(x => x.href).slice(1);
  getfork = (fork) => {
    return fetch(fork)
      .then(x => x.text())
      .then(html => console.log(`${fork}: ${html.match(/This branch is.*/).pop().replace('This branch is ', '')}`))
      .catch(console.error);
  }
  while (forks.length) {
    await Promise.all(forks.splice(0, 2).map(getfork));
  }
})();
Original (this fires all requests at once and will possibly lock you out if it is more requests/s than github allows)
javascript:(() => {
  /* while on the forks page, collect all the hrefs and pop off the first one (original repo) */
  const forks = [...document.querySelectorAll('div.repo a:last-of-type')].map(x => x.href).slice(1);
  for (const fork of forks) {
    /* fetch the forked repo as html, search for the "This branch is [n commits ahead,] [m commits behind]", print it to console */
    fetch(fork)
      .then(x => x.text())
      .then(html => console.log(`${fork}: ${html.match(/This branch is.*/).pop().replace('This branch is ', '')}`))
      .catch(console.error);
  }
})();
Will print something like:
https://github.com/user1/repo: 289 commits behind original:master.
https://github.com/user2/repo: 489 commits behind original:master.
https://github.com/user2/repo: 1 commit ahead, 501 commits behind original:master.
...
to console.
EDIT: replaced comments with block comments for paste-ability
active-forks doesn't quite do what I want, but it comes close and is very easy to use.
Here's a Python script using the Github API. I wanted to include the date and last commit message. You'll need to include a Personal Access Token (PAT) if you need a bump to 5k requests/hr.
USAGE: python3 list-forks.py https://github.com/itinance/react-native-fs
Example Output:
https://github.com/itinance/react-native-fs root 2021-11-04 "Merge pull request #1016 from mjgallag/make-react-native-windows-peer-dependency-optional make react-native-windows peer dependency optional"
https://github.com/AnimoApps/react-native-fs diverged +2 -160 [+1m 10d] "Improved comments to align with new PNG support in copyAssetsFileIOS"
https://github.com/twinedo/react-native-fs ahead +1 [+26d] "clear warn yellow new NativeEventEmitter()"
https://github.com/synonymdev/react-native-fs ahead +2 [+23d] "Merge pull request #1 from synonymdev/event-emitter-fix Event Emitter Fix"
https://github.com/kongyes/react-native-fs ahead +2 [+10d] "aa"
https://github.com/kamiky/react-native-fs diverged +1 -2 [-6d] "add copyCurrentAssetsVideoIOS function to retrieve current modified videos"
https://github.com/nikola166/react-native-fs diverged +1 -2 [-7d] "version"
https://github.com/morph3ux/react-native-fs diverged +1 -4 [-30d] "Update package.json"
https://github.com/broganm/react-native-fs diverged +2 -4 [-1m 7d] "Update RNFSManager.m"
https://github.com/k1mmm/react-native-fs diverged +1 -4 [-1m 14d] "Invalidate upload session Prevent memory leaks"
https://github.com/TickKleiner/react-native-fs diverged +1 -4 [-1m 24d] "addListener and removeListeners methods wass added to pass warning"
https://github.com/nerdyfactory/react-native-fs diverged +1 -8 [-2m 14d] "fix: applying change from https://github.com/itinance/react-native-fs/pull/944"
import requests, re, os, sys, time, json, datetime
from dateutil.relativedelta import relativedelta
from urllib.parse import urlparse

GITHUB_PAT = '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'

def json_from_url(url):
    response = requests.get(url, headers={ 'Authorization': 'token {}'.format(GITHUB_PAT) })
    return response.json()

def date_delta_to_text(date1, date2):
    ret = []
    date_delta = relativedelta(date2, date1)
    sign = '+' if date1 < date2 else '-'
    if date_delta.years != 0:
        ret.append('{}y'.format(abs(date_delta.years)))
    if date_delta.months != 0:
        ret.append('{}m'.format(abs(date_delta.months)))
    if date_delta.days != 0:
        ret.append('{}d'.format(abs(date_delta.days)))
    return '{}{}'.format(sign, ' '.join(ret))

def iso8601_date_to_date(date):
    return datetime.datetime.strptime(date, '%Y-%m-%dT%H:%M:%SZ')

def date_to_text(date):
    return date.strftime('%Y-%m-%d')

def process_repo(repo_author, repo_name, fork_of_fork):
    page = 1
    while 1:
        forks_url = 'https://api.github.com/repos/{}/{}/forks?per_page=100&page={}'.format(repo_author, repo_name, page)
        forks_json = json_from_url(forks_url)
        if not forks_json:
            break
        for fork_info in forks_json:
            fork_author = fork_info['owner']['login']
            fork_name = fork_info['name']
            forks_count = fork_info['forks_count']
            fork_url = 'https://github.com/{}/{}'.format(fork_author, fork_name)
            compare_url = 'https://api.github.com/repos/{}/{}/compare/master...{}:master'.format(repo_author, fork_name, fork_author)
            compare_json = json_from_url(compare_url)
            if 'status' in compare_json:
                items = []
                status = compare_json['status']
                ahead_by = compare_json['ahead_by']
                behind_by = compare_json['behind_by']
                total_commits = compare_json['total_commits']
                commits = compare_json['commits']
                if fork_of_fork:
                    items.append(' ')
                items.append(fork_url)
                items.append(status)
                if ahead_by != 0:
                    items.append('+{}'.format(ahead_by))
                if behind_by != 0:
                    items.append('-{}'.format(behind_by))
                if total_commits > 0:
                    last_commit = commits[total_commits-1]
                    commit = last_commit['commit']
                    author = commit['author']
                    date = iso8601_date_to_date(author['date'])
                    items.append('[{}]'.format(date_delta_to_text(root_date, date)))
                    items.append('"{}"'.format(commit['message'].replace('\n', ' ')))
                if ahead_by > 0:
                    print(' '.join(items))
                if forks_count > 0:
                    process_repo(fork_author, fork_name, True)
        page += 1

url_parsed = urlparse(sys.argv[1].strip())
path_array = url_parsed.path.split('/')
root_author = path_array[1]
root_name = path_array[2]
root_url = 'https://github.com/{}/{}'.format(root_author, root_name)
commits_url = 'https://api.github.com/repos/{}/{}/commits/master'.format(root_author, root_name)
commits_json = json_from_url(commits_url)
commit = commits_json['commit']
author = commit['author']
root_date = iso8601_date_to_date(author['date'])
print('{} root {} "{}"'.format(root_url, date_to_text(root_date), commit['message'].replace('\n', ' ')))
process_repo(root_author, root_name, False)
Here's a Python script for listing and cloning the forks that are ahead. This script partially uses the API, so it can hit the rate limit (you can extend the rate limit, though not infinitely, by adding GitHub API authentication to the script; please edit or post that).
Initially I tried to use the API entirely, but that triggered the rate limit too fast, so now I use is_fork_ahead_HTML instead of is_fork_ahead_API. This might require adjustments if the GitHub website design changes.
Due to the rate limit, I prefer the other answers that I posted here.
import requests, json, os, re

def obj_from_json_from_url(url):
    # TODO handle internet being off and stuff
    text = requests.get(url).content
    obj = json.loads(text)
    return obj, text

def is_fork_ahead_API(fork, default_branch_of_parent):
    """ Use the GitHub API to check whether `fork` is ahead.
        This triggers the rate limit, so prefer the non-API version below instead.
    """
    # Compare default branch of original repo with default branch of fork.
    comparison, comparison_json = obj_from_json_from_url('https://api.github.com/repos/'+user+'/'+repo+'/compare/'+default_branch_of_parent+'...'+fork['owner']['login']+':'+fork['default_branch'])
    if comparison['ahead_by'] > 0:
        return comparison_json
    else:
        return False

def is_fork_ahead_HTML(fork):
    """ Use the GitHub website to check whether `fork` is ahead.
    """
    htm = requests.get(fork['html_url']).content
    match = re.search('<div class="d-flex flex-auto">[^<]*?([0-9]+ commits? ahead(, [0-9]+ commits? behind)?)', htm)
    # TODO if website design changes, fallback onto checking whether 'ahead'/'behind'/'even with' appear only once on the entire page - in that case they are not part of the username etc.
    if match:
        return match.group(1)  # for example '1 commit ahead, 114 commits behind'
    else:
        return False

def clone_ahead_forks(user, repo):
    obj, _ = obj_from_json_from_url('https://api.github.com/repos/'+user+'/'+repo)
    default_branch_of_parent = obj["default_branch"]
    page = 0
    forks = None
    while forks != [{}]:
        page += 1
        forks, _ = obj_from_json_from_url('https://api.github.com/repos/'+user+'/'+repo+'/forks?per_page=100&page='+str(page))
        for fork in forks:
            aheadness = is_fork_ahead_HTML(fork)
            if aheadness:
                #dir = fork['owner']['login']+' ('+str(comparison['ahead_by'])+' commits ahead, '+str(comparison['behind_by'])+'commits behind)'
                dir = fork['owner']['login']+' ('+aheadness+')'
                print dir
                os.mkdir(dir)
                os.chdir(dir)
                os.system('git clone '+fork['clone_url'])
                print
                # recurse into forks of forks
                if fork['forks_count'] > 0:
                    clone_ahead_forks(fork['owner']['login'], fork['name'])
                os.chdir('..')

user = 'cifkao'
repo = 'tonnetz-viz'

clone_ahead_forks(user, repo)
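Regarding the GitHub API authentication mentioned above (to raise the rate limit): a possible tweak, assuming a classic personal access token, is to send it as an Authorization header from obj_from_json_from_url, for example:

GITHUB_TOKEN = ''  # put a personal access token here to raise the API rate limit

def obj_from_json_from_url(url):
    # send the token only if one was configured
    headers = {'Authorization': 'token ' + GITHUB_TOKEN} if GITHUB_TOKEN else {}
    text = requests.get(url, headers=headers).content
    obj = json.loads(text)
    return obj, text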
Here's a Python script for listing and cloning all forks that are ahead.
It doesn't use the API. So it doesn't suffer from a rate limit and doesn't require authentication. But it might require adjustments if the GitHub website design changes.
Unlike the bookmarklet in the other answer that shows links to ZIP files, this script also saves info about the commits because it uses git clone and also creates a commits.htm file with the overview.
import requests, re, os, sys, time

def content_from_url(url):
    # TODO handle internet being off and stuff
    text = requests.get(url).content
    return text

ENCODING = "utf-8"

def clone_ahead_forks(forklist_url):
    forklist_htm = content_from_url(forklist_url).decode(ENCODING)
    with open("forklist.htm", "w", encoding=ENCODING) as text_file:
        text_file.write(forklist_htm)
    is_root = True
    # not working if there are no forks: '<a class="(Link--secondary)?" href="(/([^/"]*)/[^/"]*)">'
    for match in re.finditer('<a (class=""|data-pjax="#js-repo-pjax-container") href="(/([^/"]*)/[^/"]*)">', forklist_htm):
        fork_url = 'https://github.com'+match.group(2)
        fork_owner_login = match.group(3)
        fork_htm = content_from_url(fork_url).decode(ENCODING)
        match2 = re.search('([0-9]+ commits? ahead(, [0-9]+ commits? behind)?)', fork_htm)
        # TODO check whether 'ahead'/'behind'/'even with' appear only once on the entire page - in that case they are not part of the readme, "About" box, etc.
        sys.stdout.write('.')
        if match2 or is_root:
            if match2:
                aheadness = match2.group(1)  # for example '1 commit ahead, 2 commits behind'
            else:
                aheadness = 'root repo'
                is_root = False  # for subsequent iterations
            dir = fork_owner_login+' ('+aheadness+')'
            print(dir)
            if not os.path.exists(dir):
                os.mkdir(dir)
                os.chdir(dir)
                # save commits.htm
                commits_htm = content_from_url(fork_url+'/commits').decode(ENCODING)
                with open("commits.htm", "w", encoding=ENCODING) as text_file:
                    text_file.write(commits_htm)
                # git clone
                os.system('git clone '+fork_url+'.git')
                print()
                # no need to recurse into forks of forks because they are all listed on the initial page and being traversed already
                os.chdir('..')
            else:
                print(dir+' already exists, skipping.')

base_path = os.getcwd()
match_disk_letter = re.search(r'^([a-zA-Z]:\\)', base_path)

with open('repo_urls.txt') as url_file:
    for url in url_file:
        url = url.strip()
        url = re.sub(r'\?[^/]*$', '', url)  # remove strings like '?utm_source=...' from the end
        print(url)
        match = re.search('github.com/([^/]*)/([^/]*)$', url)
        if match:
            user_name = match.group(1)
            repo_name = match.group(2)
            print(repo_name)
            dirname_for_forks = repo_name+' ('+user_name+')'
            if not os.path.exists(dirname_for_forks):
                url += "/network/members"  # page that lists the forks
                TMP_DIR = 'tmp_'+time.strftime("%Y%m%d-%H%M%S")
                if match_disk_letter:  # if Windows, i.e. if path starts with A:\ or so, run git in A:\tmp_... instead of .\tmp_..., in order to prevent "filename too long" errors
                    TMP_DIR = match_disk_letter.group(1)+TMP_DIR
                print(TMP_DIR)
                os.mkdir(TMP_DIR)
                os.chdir(TMP_DIR)
                clone_ahead_forks(url)
                print()
                os.chdir(base_path)
                os.rename(TMP_DIR, dirname_for_forks)
            else:
                print(dirname_for_forks+' ALREADY EXISTS, SKIPPING.')

print('DONE.')
If you make the file repo_urls.txt with the following content (you can put several URLs, one URL per line):
https://github.com/cifkao/tonnetz-viz
then you'll get the following directories, each of which contains the respective cloned repo:
tonnetz-viz (cifkao)
    bakaiadam (2 commits ahead)
    chumo (2 commits ahead, 4 commits behind)
    cifkao (root repo)
    codedot (76 commits ahead, 27 commits behind)
    k-hatano (41 commits ahead)
    shimafuri (11 commits ahead, 8 commits behind)
If it doesn't work, try earlier versions.

Showing test count in buildbot

I am not particularly happy about the stats that Buildbot provides. I understand that it is for building and not testing - that's why it has a concept of Steps, but no concept of Test. Still there are many cases when you need test statistics from build results. For example when comparing skipped and failed tests on different platforms to estimate the impact of a change.
So, what is needed to make Buildbot display test count in results?
What is the simplest way, so that a person who doesn't know anything about Buildbot can do this in 15 minutes?
Depending on how you want to process the test results and how they are presented, Buildbot does provide a Test step, buildbot.steps.shell.Test.
An example of how I use it for my build environment:
import os

from buildbot.steps import shell
from buildbot.status import results  # status constants (SUCCESS, WARNINGS, FAILURE) in the old-style (0.8.x) API used here

class CustomStepResult(shell.Test):
    description = 'Analyzing results'
    descriptionDone = 'Results analyzed'

    def __init__(self, log_file = None, *args, **kwargs):
        self._log_file = log_file
        shell.Test.__init__(self, *args, **kwargs)
        self.addFactoryArguments(log_file = log_file)

    def start(self):
        if not os.path.exists(self._log_file):
            self.finished(results.FAILURE)
            self.step_status.setText('TestResult XML file not found !')
        else:
            import xml.etree.ElementTree as etree
            tree = etree.parse(self._log_file)
            root = tree.getroot()
            passing = len(root.findall('./testsuite/testcase/success'))
            skipped = len(root.findall('./testsuite/testcase/skip'))
            fails = len(root.findall('./testsuite/error')) + len(root.findall('./testsuite/testcase/error')) + len(root.findall('./testsuite/testcase/failure'))
            self.setTestResults(total = fails+passing+skipped, failed = fails, passed = passing)
            ## the final status for WARNINGS is green but the step itself will be orange
            self.finished(results.SUCCESS if fails == 0 else results.WARNINGS)
            self.step_status.setText(self.describe(True))
And in the configuration factory I create a step as below:
factory.addStep(CustomStepResult(log_file = log_file))
Basically I override the default Test shell step and pass a custom XML file which contains my test results. I then look for the pass/fail/skip result nodes and display the results accordingly in the waterfall.
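For context, a small sketch of the kind of results XML the step above would accept; the element names (testsuite, testcase, success, skip, error, failure) are inferred from the findall() calls and are not any standard report format, so adjust them to whatever your test runner emits:

import xml.etree.ElementTree as etree

# Hypothetical results file matching the XPath queries used in start()
SAMPLE = """
<testresults>
  <testsuite name="unit">
    <testcase name="test_ok"><success/></testcase>
    <testcase name="test_skipped"><skip/></testcase>
    <testcase name="test_broken"><failure message="assertion failed"/></testcase>
  </testsuite>
</testresults>
"""

root = etree.fromstring(SAMPLE)
passing = len(root.findall('./testsuite/testcase/success'))
skipped = len(root.findall('./testsuite/testcase/skip'))
fails = (len(root.findall('./testsuite/error'))
         + len(root.findall('./testsuite/testcase/error'))
         + len(root.findall('./testsuite/testcase/failure')))
print(passing, skipped, fails)  # -> 1 1 1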

How to copy the XPO files out of version control...partial code working, bizarre issue

I began upgrading our layers to Roll Up 7 while we were still developing in another environment with TFS turned on. We were at, say, version 1850, and by the time I finished, we were at 1900. So the goal is to merge the 50 different check-ins into the completed RU7 environment. Each check-in can contain many different objects, and each object is stored in TFS as an XPO somewhere.
My code is 90% of the way there, but the issue arises when copying the files out of the temp directory. When I look in the temp directory, the files aren't there, but somehow they're able to be accessed.
static void Job33(Args _args)
{
    #File
    SysVersionControlSystem sysVersionControlSystem = versioncontrol.parmSysVersionControlSystem();
    SysVersionControlTmpItem contents;
    SysVersionControlTmpChange change;
    SysVersionControlTmpChange changes;
    int i;
    SysVersionControlTmpItem contentsAddition;
    SysVersionControlTmpItem contentsItem;
    str writePath;
    Set permissionSet = new Set(Types::Class);
    str fileName;
    int n;
    ;

    change = versioncontrol.getChangesHistory();

    // BP deviation documented
    changes.setTmp();
    changes.checkRecord(false);
    changes.setTmpData(change);

    while select changes
        order by changes.ChangeNumber asc
        where changes.ChangeNumber > 1850
    {
        writePath = #'C:\TEMP\' + int2str(changes.ChangeNumber) + #'\';
        contentsAddition = versioncontrol.getChangeNumberContents(changes.ChangeNumber);
        n = 0;

        while select contentsAddition
        {
            // HOW DOES THIS LINE ACCESS THE FILE BUT MY METHOD CAN NOT??
            contentsAddition.viewFile();
            //?????????????

            // Write to appropriate directory
            if (!WinAPI::pathExists(writePath))
                WinAPI::createDirectory(writePath);

            n++;
            fileName = int2str(changes.ChangeNumber) + '_' + int2str(n) + '.xpo';

            if (WinAPI::fileExists(contentsAddition.fileName(), false))
            {
                // Write to appropriate directory
                if (!WinAPI::pathExists(writePath))
                    WinAPI::createDirectory(writePath);

                WinAPI::copyFile(contentsAddition.fileName(), writePath + fileName, true);
                info(strfmt("|%1|%2|", contentsAddition.fileName(), writePath + fileName));
            }
        }
        info(strfmt("%1", changes.ChangeNumber));
    }
}
Buried in Classes\SysVersionControlFilebasedBackEndTfs there is a .NET assembly that is used. I was able to use this to extract what I needed, mixed in with the code above. After I used this, my code from above started working, strangely enough.
Somehow there was a file lock on the folder that I copied TO that just wouldn't let me delete it until I closed AX. No big deal, but it suggests there is a tfsProxy.close() method or something I should have called.
Microsoft.Dynamics.Morphx.TeamFoundationServer.Proxy tfsProxy = new Microsoft.Dynamics.Morphx.TeamFoundationServer.Proxy();
;
tfsProxy.DownloadFile(contentsAddition.InternalFilename, changes.ChangeNumber, writePath + fileName);
So you are trying to just get the objects that were changed so you can import them into the new RU7 environment? Why not do this within Visual Studio directly? You can pull the XPOs from there based on the history of changesets since you started the RU7 upgrade.
Also, you should use branching for this. It would have been easy to just branch the new code in that way. Something you should look into for the future.