In my README.md I want to see automated test results, e.g. something like:
[TestJob: failing] [TestJob Stats: 40 passed, 1 failed, 10 skipped, took 126 sec.]
There are existing actions that can do this, e.g. https://github.com/marketplace/actions/dynamic-badges, BUT I'm on a private enterprise repo: gists are disabled, no content is publicly visible, and everything requires a token.
I figured I have two options.
Option 1 - update README.md via e.g. a bash script after test execution, as part of the GitHub Actions job.
Option 2 - find a way to customize the default badge.svg message.
Before I jump into spending time figuring out how to do option 1, I want to ask: is there a way to somehow override the result text of the default badge.svg? Currently it simply states "failing", "passing" or "no status".
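For what it's worth, option 1 could look roughly like the sketch below, run as a workflow step after the tests finish. The README markers and the TEST_SUMMARY variable are hypothetical names, and the job needs permission to push back to the branch.

#!/usr/bin/env bash
# Rough sketch of option 1 (assumptions: README.md contains the hypothetical markers
# <!-- test-results:start --> and <!-- test-results:end -->, and $TEST_SUMMARY holds a
# line like "[TestJob: failing] [TestJob Stats: 40 passed, 1 failed, 10 skipped, took 126 sec.]").
set -euo pipefail

# Replace everything between the markers with the fresh summary line.
awk -v summary="$TEST_SUMMARY" '
  /<!-- test-results:start -->/ { print; print summary; skipping = 1; next }
  /<!-- test-results:end -->/   { skipping = 0 }
  !skipping                     { print }
' README.md > README.md.tmp && mv README.md.tmp README.md

# Commit only if the summary actually changed, and skip CI on the bot commit
# so the workflow does not retrigger itself.
if ! git diff --quiet README.md; then
  git config user.name  "github-actions[bot]"
  git config user.email "github-actions[bot]@users.noreply.github.com"
  git add README.md
  git commit -m "docs: update test results in README [skip ci]"
  git push
fi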
Related
Occasionally in our codebase we need to use an //eslint-disable to bypass a styleguide rule on a line. I would like to somehow automatically add a comment on each new instance of that in PRs, requiring the developer to explain why they bypassed the styleguide.
I've found this question referencing how to create a comment programmatically, but what I'm not sure how to do is identify the new code, parse it for a certain piece of text, and then add comments on the particular lines where the text was found.
This is one approach, using scripts, to achieve what you want. The expected outcome is:
On every pull request, a pre-build validation pipeline kicks off & adds comments to the PR.
Create a script (PowerShell/Python/bash) with the following logic:
Find the files in the given branch that contain //eslint-disable
For the files found in the step above, get the line number(s) of each //eslint-disable occurrence
For each file/line-number pair (written like that just for illustration), add a comment on that line using the Pull Request Threads API; see the line parameter and the sketch after this list
Create a pipeline containing the above script and add that pipeline as build validation, or, if you have an existing build validation pipeline, add the script as a task in it.
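A minimal bash sketch of such a script, assuming it runs inside a PR build with System.AccessToken mapped into the environment as SYSTEM_ACCESSTOKEN; the predefined variable names and api-version are the usual ones, but treat the details as assumptions to verify against your organization:

#!/usr/bin/env bash
# Sketch: find every //eslint-disable in the checked-out PR branch and add a
# comment thread on that line via the Pull Request Threads REST API.
set -euo pipefail

ORG_URL="$SYSTEM_TEAMFOUNDATIONCOLLECTIONURI"   # e.g. https://dev.azure.com/myorg/
PROJECT="$SYSTEM_TEAMPROJECT"
REPO_ID="$BUILD_REPOSITORY_ID"
PR_ID="$SYSTEM_PULLREQUEST_PULLREQUESTID"
API="${ORG_URL}${PROJECT}/_apis/git/repositories/${REPO_ID}/pullRequests/${PR_ID}/threads?api-version=6.0"

# Steps 1 and 2: files and line numbers that contain //eslint-disable
# (grep exits non-zero when nothing is found, hence the || true).
{ grep -rn --include='*.js' --include='*.ts' -F '//eslint-disable' . || true; } |
while IFS=: read -r file line _; do
  path="/${file#./}"                            # the API expects a repo-rooted path
  # Step 3: add a comment thread on that line ("line" is the parameter mentioned above).
  curl -sS -X POST "$API" \
    -H "Authorization: Bearer $SYSTEM_ACCESSTOKEN" \
    -H "Content-Type: application/json" \
    -d @- <<EOF
{
  "comments": [
    { "parentCommentId": 0, "commentType": "text",
      "content": "eslint-disable found at ${path}:${line}. Please explain why the styleguide rule is bypassed." }
  ],
  "status": "active",
  "threadContext": {
    "filePath": "${path}",
    "rightFileStart": { "line": ${line}, "offset": 1 },
    "rightFileEnd":   { "line": ${line}, "offset": 1 }
  }
}
EOF
done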
Hope this helps :)
I currently have an Azure DevOps installation that I am configuring for automated build and testing. I would like to enable a Continuous Integration trigger for the build process; however, our check-in standards require different parts of our code to be checked in separately from each other.
For example: we are using nettiers auto-generated code, so whenever a ticket requires a database change, the nettiers code base gets updated. Because that is auto-generated code, it gets checked in separately from the manual modifications, with a comment indicating that it is an auto-generated check-in.
A build will fail if it does not have both the nettiers and the manual modifications checked in. However with Continuous Integration turned on, the first check-in will trigger a build to begin that will be missing the second half of the changes which are checked in a couple minutes later.
The ideal way I would like to fix this would be to implement a 5 minute delay between when the CI build first gets triggered, and when it actually begins its work. Even better would be if each successive check-in would cancel the first build and start a new timer with its own build to account for any subsequent check-ins.
An alternative way to solve the issue might be to have a gate on a work item query. However, I have been unsuccessful in figuring out how to implement either of these ideas, or in coming up with other options. Gates based on queries only seem to be available in Release pipelines, not Builds.
Has anyone out there solved a similar problem, or have thoughts on how to solve or work around this issue?
Azure Devops delayed Continuous Integration build
I am afraid there is no such out-of-the-box setting/method for this kind of delayed continuous integration build in your case.
As a workaround, we could have nettiers generate its code into, and check it in to, some specific folder, like \NettiersGenerated.
Then we could exclude that folder with the Path filters under Enable continuous integration.
In this case, the generated code will not trigger the build pipeline.
Update:
It would require that the nettiers code always gets checked in first
(which would be hard to enforce)
Yes, I agree with you. If the build fails when it does not have both the nettiers and the manual modifications checked in, my first workaround is indeed not reasonable enough.
As another workaround, we could use an Azure DevOps counter and read its value in a PowerShell script, going on with the build only when the number is odd and cancelling it otherwise, like:
Counter expression:
variables:
  internalBuildNumber: 1
  semanticBuildNumber: $[counter(variables['internalBuildNumber'], 0)]
PowerShell script:
# Read the counter value that the pipeline passes in via macro expansion.
$value = [int]"$(semanticBuildNumber)"
switch ($value)
{
    # Odd counter value: the second check-in has arrived, go on with the build.
    { ($_ % 2) -ne 0 } { "Go on build pipeline" }
    # Even counter value: only the first check-in so far, cancel this run.
    { ($_ % 2) -eq 0 }
    {
        Write-Host "##vso[task.setvariable variable=agent.jobstatus;]canceled"
        Write-Host "##vso[task.complete result=Canceled;]DONE"
    }
}
In this case, the pipeline will only actually build when it is triggered the second time.
Hope this helps.
I'm creating a new build pipeline; it will be triggered by a new check-in on a specific branch of a TFVC repository. I want to get the title of the check-in every time the pipeline triggers and store it in a variable.
Additionally, after getting the title I want to perform some checks and, depending on the result (e.g. whether the title matches a specific template), either stop the build pipeline or move on to the next steps.
I've looked in Variables and Triggers tabs, but I didn't find anything useful. I've also looked in the predefined variables, but I didn't find anything related to my issue either.
You can try the variable Build.SourceVersion to get the changeset number and Build.SourceVersionMessage to get the comment.
Build.SourceVersionMessage: The comment of the commit or changeset. Note: This variable is available in TFS 2015.4 and later.
See also: https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml
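A small bash step along those lines, as a sketch only: it assumes the comment is exposed to scripts as the environment variable BUILD_SOURCEVERSIONMESSAGE (it is not available at queue time) and uses a made-up "TICKET-123: description" template.

#!/usr/bin/env bash
# Sketch: read the check-in comment, store it in a pipeline variable, and stop
# the build if it does not match the (hypothetical) expected template.
set -euo pipefail

title="${BUILD_SOURCEVERSIONMESSAGE:-}"
echo "Check-in comment: $title"

# Make the title available to later steps as $(checkinTitle).
echo "##vso[task.setvariable variable=checkinTitle]$title"

if [[ "$title" =~ ^[A-Z]+-[0-9]+: ]]; then
  echo "Title matches the template, moving on to the next steps."
else
  echo "Title does not match the template, stopping the pipeline."
  # Alternatively, mirror the cancel trick from the answer above:
  # echo "##vso[task.complete result=Canceled;]DONE"
  exit 1
fi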
Is there a REST API endpoint to get a collection of changes that are pending for a build in TeamCity?
We have the build set to manual, it is triggered outside TeamCity, and we would like to show a bullet-point list of the commits that would be in that build.
In the user interface you can see this under the "Pending Changes (X)" tab.
I can't find any examples of doing this and the closest I've found is:
http://<server>/httpAuth/app/rest/changes/buildType:<build type id>
This seems to return the last change though.
Anyone done this before?
I just found a working solution thanks to this question. I'll show it here in case other people are looking for a full solution:
You need to know the buildTypeId of the build configuration for which you want to get the pending changes. In this case, let's say buildTypeId=bt85.
1. http://<server>/httpAuth/app/rest/buildTypes/id:bt85/builds/
// Get the last build from the XML returned.
// Let's say the last build id = 14000
2. http://<server>/httpAuth/app/rest/changes?build=id:14000
// The newest change returned is the one you need.
// Let's say the newest change id = 15000
3. http://<server>/httpAuth/app/rest/changes?buildType=id:bt85&sinceChange=15000
// You're now looking at the pending changes list of the buildType bt85.
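Scripted, the three calls could look something like this; a sketch with curl and xmllint, where the server and credentials are placeholders and the XML element names may differ slightly between TeamCity versions:

#!/usr/bin/env bash
# Sketch: resolve the last build, its newest change, and then the pending changes.
set -euo pipefail

SERVER="http://teamcity.example.com"   # placeholder server
AUTH="user:password"                   # placeholder credentials
BT="bt85"

# Step 1: latest build of the build configuration.
last_build_id=$(curl -s -u "$AUTH" "$SERVER/httpAuth/app/rest/buildTypes/id:$BT/builds/" |
  xmllint --xpath 'string(/builds/build[1]/@id)' -)

# Step 2: newest change that went into that build.
last_change_id=$(curl -s -u "$AUTH" "$SERVER/httpAuth/app/rest/changes?build=id:$last_build_id" |
  xmllint --xpath 'string(/changes/change[1]/@id)' -)

# Step 3: everything committed since then = the pending changes for the next build.
curl -s -u "$AUTH" "$SERVER/httpAuth/app/rest/changes?buildType=id:$BT&sinceChange=$last_change_id"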
My eventual solution, in a workaround kind of way, is to:
Find the latest change ID from my database of builds outside of TeamCity (I guess you could query the TeamCity API to find the last successful build and pull it from there)
Then call:
http://<server>/httpAuth/app/rest/changes?buildId=id:<build id>&sinceChange=id:<last change id>
Then fetch each individual change from that list.
A bit of a workaround, but I couldn't see any other way to get the list of pending changes.
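To fetch each individual change from that list, a rough sketch could look like the following, assuming the change detail endpoint /httpAuth/app/rest/changes/id:<change id> and placeholder server, credentials and ids:

#!/usr/bin/env bash
# Sketch: iterate over the pending change ids and request each change's details
# (user, date, comment, files) one by one.
set -euo pipefail

SERVER="http://teamcity.example.com"   # placeholder server
AUTH="user:password"                   # placeholder credentials
BT="bt85"
SINCE="15000"                          # last change id already built

curl -s -u "$AUTH" "$SERVER/httpAuth/app/rest/changes?buildType=id:$BT&sinceChange=$SINCE" |
  { xmllint --xpath '/changes/change/@id' - || true; } |
  grep -o '[0-9]\+' |
  while read -r change_id; do
    curl -s -u "$AUTH" "$SERVER/httpAuth/app/rest/changes/id:$change_id"
  done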
Generally, our fixes/patches for bugs involve changes in multiple files, and we commit all of these files in a single shot.
In SVN, each commit (which may involve multiple files) increments the revision number of the whole repository by one, so we can easily link all the files that went into a single commit.
The difficulty with the same case in CVS is that it increments the revision numbers of all the files individually. Let's say a commit involves the following files:
file1.c //revision assigned as part of this commit..1.5.10.2
file2.c //revision assigned as part of this commit..1.41.10.1
and the comment given for this commit is "First Bug Fix".
Now, the only way to find all the files checked in as part of this commit is to search through all the cvs logs for the comment "First Bug Fix" and hope that it returns only the two file revisions mentioned above.
Please share your views on whether there is any better way in CVS to keep track of all the files checked in in a single commit, instead of relying on the comment given as part of the commit.
I think CVSps might do what you are looking for.
"CVSps is a program for generating 'patchset' information from a CVS repository. A patchset in this case is defined as a set of changes made to a collection of files, and all committed at the same time (using a single 'cvs commit' command). This information is valuable to seeing the big picture of the evolution of a cvs project. While cvs tracks revision information, it is often difficult to see what changes were committed 'atomically' to the repository."
cvsps relies on the cvs client. Make sure you have a proper version of cvs which supports the rlog command (1.1.1).
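A minimal usage sketch, assuming cvsps 2.x run from inside a checked-out working copy; the exact flag names may vary between versions:

#!/usr/bin/env bash
# Sketch: generate patchset information for a CVS module with cvsps.
set -euo pipefail

cd ~/work/myproject     # hypothetical CVS working copy

# Build/refresh the patchset cache and list all patchsets.
cvsps -u

# Show a single patchset by number, e.g. the "First Bug Fix" commit.
cvsps -s 221

# Only show patchsets whose log message matches a pattern.
cvsps -l "First Bug Fix"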
CVS does not have inherent support for "transactions".
You need some additional glue to do this. Fortunately, this has all been done for you and is available in a very nice extension called "cvszilla".
The home page is here:
http://www.nyetwork.org/wiki/CVSZilla
This also ties in to CVSweb, which is a great way to browse through your CVS modules via a web-based GUI.
Perhaps the ANT CvsChangeLog Task is another choice. See http://ant.apache.org/manual/Tasks/changelog.html. It provides the date and time for each check-in message. You can produce nice reports with XSLT - try the example at the bottom of the ANT manual page.
I know it's late for an answer, but perhaps other users come across this like I did (searching) and appreciate the ANT integration.
OK, I just installed cvsps and ran it from the top level. Here's a sample of the output... this is one of the few hundred patch sets on my module. Note that indeed this does work across different directory trees.
---------------------
PatchSet 221
Date: 2009/04/22 22:09:37
Author: jlove-ext
Branch: HEAD
Tag: LCA_v1_0_0_0_v6
Log:
Bug: 45562
Check the length of strings in messages. Namely:
* Logical server IDs cannot be more than 18 characters (forcing a
TCSE protocol requirement).
* Overall 'sid' (filter) search string length cannot be more than
500 (this is actually more than the technical maximum messages are
allowed, but is close).
Alarm messages and are now not going to crash either as the alarm text
is shortened if necessary by the LCA.
Members:
catalogue/extractCmnAlarms.pl:1.2->1.3
programs/ldapControlAgent/LcaCommon.h:1.18->1.19
programs/ldapControlAgent/LcaUtils.cc:1.20->1.21
programs/ldapControlAgent/LcaUtils.h:1.6->1.7
programs/ldapControlAgent/LdapSession.cc:1.61->1.62
tests/cts-45562.txt:INITIAL->1.1
So, this may indeed do what you want. Nice one, Joakim. However, as mentioned, CVSzilla does much more than this:
Web-browsable CVS repositories (via CVSweb).
Web-browsable transactions.
Supports transactions across modules.
Generates CVS commands (using 'cvs -j') to merge patchsets onto other branches.
Integration with bugzilla (transactions are automatically registered against bugs).
If all you want is just the patchset info, go with cvsps. If you're looking to use CVS on large projects over a long period of time and are thinking about using bugzilla for your bug-tracking, then I would suggest looking into CVSzilla.
This also could be useful:
http://code.google.com/a/eclipselabs.org/p/changelog/