nx affected not working with --base and --head - nrwl-nx

I'm trying to run nx affected:build inside a GitHub push action, yet I'm getting the "No projects with build were run" message (this happens with lint/test/build alike). I know for a fact that projects are affected, because I run nx affected:build on them before making a PR and the build runs.
At first I suspected it might be because I was operating in the GitHub push action, so I tried various --base and --head parameters, but nothing worked.
Then I tried running these same commands locally, and nx affected:build still produced nothing, even though git diff clearly showed that a project was affected. Only when I run nx affected:build --base=<branch> does the affected project get built.
E.g.:
// Doesn't work, even though there is a diff.
nx affected:build --base=197458c645479844fc235ea09b5ae12048b7fa35 --head=da8fad48d67bd8a0c59b8fbecba0bdf7c90fdf6e
// Works
nx affected:build --base=develop
What am I missing here? Why can't I supply --base and --head parameters to specify the exact commits to run nx affected against?

It seems I had base and head backwards:
--base Base of the current branch (usually master)
--head Latest commit of the current branch (usually HEAD)
When I switched the SHAs around, nx affected produced the desired outcome.
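For reference, swapping the SHAs in the failing example above gives the working form:
// Works once base and head are swapped
nx affected:build --base=da8fad48d67bd8a0c59b8fbecba0bdf7c90fdf6e --head=197458c645479844fc235ea09b5ae12048b7fa35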

Related

Git: How to include working tree status in git log --pretty=format...?

I'm using the following command in an Eclipse CDT pre-build step, to generate a header file containing my current short Git hash as a string macro:
git log --pretty=format:'#define GIT_HASH_STRING "%h"' -n 1 > ../Inc/gitcommit.h
Works great, but it doesn't indicate the status of the working tree. As with git submodule status, if there are working-tree changes I'd like it to spit out something like
a289542-dirty
Is this possible? I checked the man page for git-log formats, but didn't see anything that looked pertinent.
Context: The GIT_HASH_STRING macro is displayed when issuing a version command via the CLI of an embedded device. If I can include a -dirty flag in the string, it can serve as a warning that the device is running an unreleased version of firmware that doesn't align with a specific commit.
The git log command does not inspect the work-tree, so it cannot do this.
There are many commands that do inspect the work-tree. One simple one is git describe:
git describe --always --dirty
will print out a string that will end with -dirty if the work-tree or the index is modified with respect to the current commit (i.e., in the same situations where git status would say something is staged for commit or not staged for commit).
If you want to check submodules as well, you will need more.
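Applied to the original pre-build step, a minimal sketch (same output path as in the question) would be:
echo "#define GIT_HASH_STRING \"$(git describe --always --dirty)\"" > ../Inc/gitcommit.h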

How to increment version numbers in GitHub Protected Branches?

I am looking for a good process to manage the version number of my project when my master branch has GitHub branch protection turned on.
In an ideal world, when you merge to your 'release' branch your continuous integration server would run its tests then automatically increment the version number of your project and commit back to your SCM system.
However, with branch protection, you cannot commit to your branch without a Pull Request, so you have a catch-22 where your CI server cannot push to your protected branch when it tries to update the version number.
I can think of a few workarounds, all of which are suboptimal:
Rig up a system so your CI server makes a PR to update the version. I'm not sure you can even do that with GitHub, but even if you can, it ends up creating two pull requests for every one 'real' PR, which is awkward.
Remove branch protection - now anyone can push anything to the branches, and you have to manage your developers through a manual process.
Update the version manually before the pull request. This is slightly better than #2, but it opens the door for developers to make mistakes when choosing the new version number, or merging the right version number to the wrong branch.
I'm hoping there are other options I haven't thought of.
We are using Javascript and npm, but I believe the problem is language agnostic, certainly the same problem would exist with Java and Maven for example.
I used git tags to automate versioning with Jenkins: every time the job ran, it took the last tag and incremented it. Since tags are different from commits, adding a tag doesn't conflict with branch protection.
# get last tag
last=$(git describe --abbrev=0 --tags)
# increment and tag again
next=$((last + 1))
git tag "$next"
git push origin "$next"
This script is for a simple integer version number, but you could follow semver if you like. This was for an Android project, so I added a Gradle function that reads the latest tag and uses it as the version name while building.
/*
 * Gets the version name from the latest Git tag
 */
def getVersionName = { ->
    def stdout = new ByteArrayOutputStream()
    exec {
        commandLine 'git', 'describe', '--tags'
        standardOutput = stdout
    }
    return stdout.toString().trim()
}
I am sure you could use a similar setup with JavaScript.
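For an npm project, a rough equivalent is to read the latest tag and stamp it into package.json without creating a commit or a tag, so nothing ever needs to be pushed to the protected branch (a sketch, assuming the tags are valid semver strings such as 1.4.2):
# read the latest tag and write it into package.json; no commit, no tag
last=$(git describe --abbrev=0 --tags)
npm version "$last" --no-git-tag-version --allow-same-version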

Is there a good way to express a dependency on an unpublished NPM version?

We frequently run into this problem where our two main modules need to be kept in sync with each other. Features get developed in branches, reviewed, merged into master, and then the branch gets deleted.
Let's say module A, version 1.3.1 requires B version 2.4.0. Neither has been merged to master, and neither has been published to NPM. We don't synchronise code review, so we don't really know which will get published first.
Currently we have two choices for the dependency we express in A:
Option 1
"moduleB": "2.4.0"
Problem: you can't actually install this until module B gets published to NPM. npm link works for the main developers, but it's still bad having something in master that can't be installed.
Option 2
"moduleB": "ourOrg/moduleB#newFeature"
Problem: you can install this now, but as soon as newFeature gets merged, that branch will be deleted, and this will break.
What to do?
Is there a standard solution to this problem? I guess we could create extra branches that don't get deleted?
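To sketch that last idea (names hypothetical): point the dependency at a long-lived branch that outlives the feature branch, or pin the exact commit, which stays fetchable after the branch is deleted once it has been merged:
// hypothetical long-lived branch kept around after newFeature merges
"moduleB": "ourOrg/moduleB#hold-2.4.0"
// or pin a commit SHA, which survives branch deletion
"moduleB": "ourOrg/moduleB#<commit-sha>"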

Buildbot fails on reset --hard

Buildbot 0.8.6
Periodically buildbot fails to fetch a particular repository. It runs this command:
argv: ['/usr/bin/git', 'reset', '--hard',
'26d7a2f5af4b777b074ac47a7218fe775d039845']
and then complains:
fatal: Could not parse object
'26d7a2f5af4b777b074ac47a7218fe775d039845'.
However, the correct command is actually:
argv: ['/usr/bin/git', 'reset', '--hard', 'FETCH_HEAD']
Not only that: the SHA used in the failed command is from a different repository.
Anyone knows how to fix this?
Thanks so much.
Update:
We have two repositories. We have a GitPoller watching one of the repositories. I would like to have a build run if the watched repository had a push. However, both repositories are needed for the build. The error specified above occurs on the second, non-watched repository. The SHA number in the error is from the watched repository.
OK, first, let's make sure we have the right understanding:
You're having a problem with one builder that builds two repositories.
Each build has two git steps, which clone two different repositories.
You're polling one of these repositories to trigger builds.
There is no other scheduler triggering builds (or at least none that fail this way).
What happens when you're polling a repository to trigger builds is that each new build carries with it the changes that triggered it. The git steps refer to these changes to checkout the correct version. You probably need to use codebases to help the two steps distinguish between changes. Unfortunately, codebases were introduced in 0.8.7, so you should consider upgrading. 0.8.6 is ancient.
If upgrading is not an option, pass alwaysUseLatest=True to the Git() step of the repository that you are not polling. That will force it to always use FETCH_HEAD. Here's my shot at that setup:
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import Git

f = BuildFactory()
f.addStep(Git(repourl='git://github.com/you/polled_repo.git', mode='copy'))
# alwaysUseLatest forces FETCH_HEAD for the repo that is not being polled
f.addStep(Git(repourl='git://github.com/you/other_repo.git', alwaysUseLatest=True))
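And for reference, a sketch of the codebases route after upgrading to 0.8.7+ (the URLs and codebase names here are assumptions): a codebaseGenerator maps each repository to its own codebase, and each Git step declares which codebase it consumes, so changes from the polled repository no longer leak into the other step:
all_repositories = {
    'git://github.com/you/polled_repo.git': 'polled',
    'git://github.com/you/other_repo.git': 'other',
}

def codebaseGenerator(chdict):
    # route each incoming change to the codebase of its repository
    return all_repositories[chdict['repository']]

c['codebaseGenerator'] = codebaseGenerator

f = BuildFactory()
f.addStep(Git(repourl='git://github.com/you/polled_repo.git', mode='copy', codebase='polled'))
f.addStep(Git(repourl='git://github.com/you/other_repo.git', mode='copy', codebase='other'))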

What is the cleverest use of source repository that you have ever seen?

This actually stems from my earlier question, where one of the answers made me wonder how people are using the SCM/repository in different ways for development.
Pre-tested commits
Before (TeamCity, build manager):
The concept is simple: the build system stands as a roadblock between your commit and trunk. Only after the build system determines that your commit doesn't break things does it allow the commit to be introduced into version control, where other developers will sync and integrate that change into their local working copies.
After (using a DVCS like Git, that is a source repository):
My workflow with Hudson for pre-tested commits involves three separate Git repositories:
my local repo (local),
the canonical/central repo (origin)
and my "world-readable" (inside the firewall) repo (public).
For pre-tested commits, I utilize a constantly changing branch called "pu" (potential updates) on the world-readable repo.
Inside of Hudson I created a job that polls the world-readable repo (public) for changes in the "pu" branch and will kick off builds when updates are pushed.
My workflow for taking a change from inception to origin is:
* hack, hack, hack
* commit to local/topic
* git pup public
* Hudson polls public/pu
* Hudson runs potential-updates job
* Tests fail?
  o Yes: Rework commit, try again
  o No: Continue
* Rebase onto local/master
* Push to origin/master
Using this pre-tested commit workflow I can offload the majority of my testing requirements to the build system's cluster of machines instead of running them locally, meaning I can spend the majority of my time writing code instead of waiting for tests to complete on my own machine in between coding iterations.
(Variation) Private Build (David Gageot, Algodeal)
Same principle as above, but the build is done on the same workstation as the one used for development, on a cloned repo:
How do you do without a CI server in the long term, without suffering the increasing time lost staring at builds running locally?
With git, it’s a piece of cake.
First, we ‘git clone’ the working directory to another folder. Git does the copy very quickly.
The next times, we don’t need to clone: we just tell git to get the deltas. Net result: instant cloning. Impressive.
What about the consistency?
Doing a simple ‘git pull’ from the working directory will realize, using the deltas’ digests, that the changes were already pushed to the shared repository.
Nothing to do. Impressive again.
Of course, while the build is running in the second directory, we can keep on working on the code. No need to wait.
We now have a private build with no maintenance, no additional installation, not dependent on the IDE, run from a single command line. No more broken builds in the shared repository. We can recycle our CI server.
Yes, you read that right. We’ve just built a serverless CI. Every additional feature of a real CI server is noise to me.
#!/bin/bash
# Usage: with an argument, commit everything first using it as the message;
# then build in a hidden clone and push only if the build passes.
if [ 0 -eq `git remote -v | grep -c push` ]; then
    REMOTE_REPO=`git remote -v | sed 's/origin//'`
else
    REMOTE_REPO=`git remote -v | grep "(push)" | sed 's/origin//' | sed 's/(push)//'`
fi

if [ ! -z "$1" ]; then
    git add .
    git commit -a -m "$1"
fi

git pull

# clone the working copy into a hidden folder the first time;
# later runs only pull the deltas
if [ ! -d ".privatebuild" ]; then
    git clone . .privatebuild
fi

cd .privatebuild
git clean -df
git pull

if [ -e "pom.xml" ]; then
    mvn clean install
    status=$?
    if [ $status -eq 0 ]; then
        echo "Publishing to: $REMOTE_REPO"
        git push $REMOTE_REPO master
    else
        echo "Unable to build"
        exit $status
    fi
fi
Dmitry Tashkinov, who has an interesting question on DVCS and CI, asks:
I don't understand how "We’ve just built a serverless CI" coheres with Martin Fowler's statement:
"Once I have made my own build of a properly synchronized working copy I can then finally commit my changes into the mainline, which then updates the repository. However my commit doesn't finish my work. At this point we build again, but this time on an integration machine based on the mainline code. Only when this build succeeds can we say that my changes are done. There is always a chance that I missed something on my machine and the repository wasn't properly updated."
Do you ignore or bend it?
#Dmitry: I neither ignore nor bend the process described by Martin Fowler in his ContinuousIntegration entry.
But you have to realize that DVCS adds publication as an orthogonal dimension to branching.
The serverless CI described by David is just an implementation of the general CI process detailed by Martin: instead of having a CI server, you push to a local copy where a local CI runs, then you push "valid" code to a central repo.
#VonC, but the idea was to run CI NOT locally, precisely so as not to miss something in the transition between machines.
When you use this so-called local CI, it may pass all the tests just because it is local, but break down later on another machine.
So is it integration? I'm not criticizing here at all; the question is difficult for me and I'm trying to understand.
#Dmitry: "So is it integration?"
It is one level of integration, which can help get rid of all the basic checks (formatting issues, code style, basic static analysis, ...).
Since you have that publication mechanism, you can chain that kind of CI to another CI server if you want. That server, in turn, can automatically push (if this is still fast-forward) to the "central" repo.
David Gageot didn't need that extra level, being already at target in terms of deployment architecture (PC->PC), and needed only that basic kind of CI level.
That doesn't prevent him from setting up a more complete system integration server for more complete testing.
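A minimal sketch of that chaining, with hypothetical remote names ('public' for the intermediate repo, 'origin' for the central one):
# developer machine: publish once the private build passes
git push public master
# second-level CI server, after its own build succeeds:
git push origin master    # rejected by default unless it is a fast-forward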
My favorite? An unreleased tool which used Bazaar (a DSCM with very well-thought-out explicit rename handling) to track tree-structured data by representing the datastore as a directory structure.
This allowed an XML document to be branched and merged, with all the goodness (conflict detection and resolution, review workflow, and of course change logging and the like) made easy by modern distributed source control. Splitting components of the document and its metadata into their own files kept proximity from creating false conflicts, and otherwise allowed all the work the Bazaar team put into versioning filesystem trees to apply to tree-structured data of other kinds.
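To illustrate the idea (the layout is entirely hypothetical, since the tool was never released): each element of the document becomes a directory and each component or piece of metadata its own small file, so rename tracking and merging operate on the document's tree structure:
doc/
  chapter-1/
    title.txt
    para-001.txt
    para-002.txt
  chapter-2/
    title.txt
    para-001.txt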
Definitely Polarion Track & Wiki...
The entire bug-tracking and wiki database is stored in Subversion in order to keep a complete revision history.
http://www.polarion.com/products/trackwiki/features.php