I know this sounds obvious, but I cannot find the answer anywhere: what is the syntax for an OR condition in a glob?
I want to match main OR pre in a branch name; this is for GitHub's branch protection configuration. So far I have had no choice but to use *, which matches unwanted branches.
Tried a lot of searching in the GitHub docs and also on Stack Overflow; couldn't find a decent example showing how to filter PRs with a logical negation operator.
Let's say I want to filter all the PRs which are closed and do not have AutoDeploy in the title.
I tried the following, but it does not work:
is:pr is:closed -in:title "AutoDeploy"
From "Understanding the search syntax", you can indeed prefix any search qualifier with a - to exclude all results that are matched by that qualifier.
However, that seems to apply to repository search only, not issue and PR search, where the exclusion operator does nothing on the 'in' qualifier.
You might need to list all PRs, and then subtract the ones with AutoDeploy (or use grep -v to exclude them).
Using the GitHub CLI command gh pr list can help with post-processing the results:
gh pr list --search "is:closed" | grep -v "AutoDeploy"
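If grep -v is too coarse (it filters whole lines of gh's tabular output, not just the title), a short script over gh's JSON output can filter on the title field alone. A minimal sketch, assuming a gh version recent enough to support --json:

import json
import subprocess

# List closed PRs as JSON so we can filter on the title field only.
out = subprocess.run(
    ["gh", "pr", "list", "--state", "closed", "--json", "number,title"],
    check=True, capture_output=True, text=True,
).stdout

for pr in json.loads(out):
    if "AutoDeploy" not in pr["title"]:
        print(pr["number"], pr["title"])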
I'm trying to set rules for certain branches on my repo, but I'm having issues with the pattern to apply them only to specific branches.
i.e. a rule to apply only to the branches master, develop, release.
Issue: the patterns either match no branches or match the wrong ones.
I tried looking through the fnmatch documentation here, but I can't get it working as expected:
https://ruby-doc.org/core-2.5.1/File.html#method-c-fnmatch
I've tried the following, but each ends up matching either zero branches or everything:
{^.*(master).*$,^.*(develop).*$}
[master,develop,release]
[master;develop;release]*
PS: this one does the opposite and applies to all branches except the listed ones:
*[!master|!develop|!release]*
I have a workaround, but I'm sure there is a better, or at least more correct, solution for this.
I basically use the order of precedence of the rules so that only certain branches get the rules I want.
I target every branch except the ones I want, and set no rule.
Then I set all branches to require the rule I want to apply to my branches.
The first rule takes precedence and ensures that only the branches I want get the rule.
NB: a conflict is raised between the rules, but this seems to be more like a warning.
As johnfo says in a comment, GitHub's branch protection rule patterns are not regular expressions. In fact, the GitHub documentation mentions that they are specifically Ruby File::FNM_PATHNAME style expressions. This is highly specific (Git itself uses no Ruby code at all).
I snipped the git tag; curiously, you included fnmatch yourself, which is the right tag for someone who might like to supply the right Ruby expressions here. It looks like GitHub do not set the FNM_EXTMATCH flag, so you probably need multiple match expressions (also noted in the comment above). I wouldn't bother answering except that it seemed useful to add some links.
You can try [dm]*[pr] for branches starting with 'd' or 'm' and ending with 'p' or 'r' (for develop and master). I'm sure this can be further refined.
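To sanity-check candidate patterns locally, Python's fnmatch module is a rough stand-in for Ruby's File.fnmatch (it does not implement FNM_PATHNAME, so patterns that should treat / specially may behave differently; the branch names below are just examples):

from fnmatch import fnmatchcase

# Try the suggested pattern against some example branch names.
for branch in ["master", "develop", "release", "main", "feature/x"]:
    print(branch, fnmatchcase(branch, "[dm]*[pr]"))
# master and develop match; release, main and feature/x do not.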
The rebaseif Mercurial extension automates the process, when pulling, of rebasing only if the merge can be done automatically with no conflicts. (If there are conflicts to resolve manually, it does not rebase, leaving you ready to do a manual merge of the two branches.) This simplifies and linearizes the history when developers are working in different parts of the code, although any rebase throws away some information about the state of the world when a developer was doing the work. I tend to agree with arguments like this and this that in the general case rebasing is not a good idea, but I find the rebase-if philosophy appealing for the non-conflict case. I'm on the fence about it, even though I understand that there are still risks of logic errors when changes happen in different parts of the code (and the author of the rebaseif extension has come to feel it's a bad idea...)
I recently went through a complicated and painful bisect, and I think that having a large number of merges of short branches in our repository was the main reason the bisect did not live up to its implied O(lg n) promise. I found myself needing to run "bisect --extend" many times, to stretch the range beyond the merge, going by a couple of changesets at a time, essentially making bisect O(n). I also found it very complicated to keep track of how the bisect was going and to understand what information I'd gained so far, because I couldn't follow the branching when looking at graphs of the repository.
Are there better ways to use bisect (and to look at and understand the revision history), or am I right that the process would have been smoother if we had used rebaseif more in development? Alternatively, can you help me understand more concretely what may go wrong using rebase in the non-conflict case: is it likely enough to cause problems that it should be avoided?
I’m tagging this more generally (not just Mercurial) since I think rebaseif matches a more typical git workflow: git users may have seen the gotchas.
I think the answer is simple: you have to decide between hard bisects and risky rebasing.
Or, something in between: only rebase if it is very unlikely that the rebase silently breaks things. If a rebase involves only a few changesets which are, additionally, semantically distant from the changes they are rebased on, it's usually safe to rebase.
Here's an example, where a conflict-free merge breaks things:
Suppose two branches start from a file with this content:
def foo(a):
    # do
    # something
    # with a (an integer)
    ...

foo(4)
In branch A, this is changed to:
def foo(a):
    # now this function is 10 times faster, but only works with positive integers
    assert a > 0
    # do
    # something with
    # with a
    ...

foo(4)
In branch B, it is changed to:
def foo(a):
    # do
    # something
    # with a (an integer)
    ...

foo(4)
...
foo(-1) # now we have a use case where we need to call foo with -1
Semantically, both edits conflict with each other. However, Mercurial happily merges them without conflicts (in both cases, when rebasing or when doing a regular merge):
def foo(a):
    # now this function is 10 times faster, but only works with positive integers
    assert a > 0
    # do
    # something with
    # with a
    ...

foo(4)
...
foo(-1) # now we have a use case where we need to call foo with -1
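# the merged file only fails at runtime: foo(-1) trips the assert added in branch A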
The advantage of a merge is that it allows you to understand what went wrong at some later point, so you can fix things accordingly. A rebase might throw away information you need to understand bugs caused by automatic merges.
The main argument against git rebase seems to be a philosophical one around "losing history", but if I really cared about that I'd make the final build step a checkin (or the first build step to track all the failed builds too!).
I'm not particularly familiar with Mercurial or bisecting (except that it's a bit like git), but in my month-and-a-bit with git I exclusively stuck to rebase. I also use git rebase -i --autosquash and git add -p a lot.
IME, there's also not that much difference between a rebase and a merge when it comes to fixing conflicts — the answer you linked to suggests "rebaseif" is bad because the "if" conditions on whether the merge proceeded without conflict, whereas it should be conditioned on whether the codebase builds and tests pass.
Perhaps my thinking is skewed by an inherent weakness in git's design (it doesn't explicitly keep track of the history of a branch, i.e. the subset of commits that it's actually pointed to), or perhaps it's just how I work (check that the diff is sane and that it builds, although admittedly after a rebase I don't check that intermediate commits build).
(Aside: For personal projects I often would like to keep track of each build output and corresponding source snapshot, but I've yet to find anything which is good at doing so.)
In CVS I could put $LOG$ into a source file, and when the file was checked in, $LOG$ would be expanded into the actual log messages in the file.
But how can I implement this in Mercurial? I also mean the other keywords, such as the latest check-in date and time.
For most of the problems keyword expansion solves, it creates a whole heap more; it isn't recommended in Mercurial (see "CVS/RCS-like Keyword Substitution - Why You Don't Need It" on the Mercurial wiki), though it is documented how to do it with expansions if you really need to.
I'm not the only one to advise against keyword expansion; although there are times it can be useful, one really needs to think hard before doing it.
Use the built-in keyword extension.
A couple of important things:
ONLY add the specific files that need keyword expansion to the filename patterns in the hgrc [keyword] section, as in the sketch after this list.
The expansion is LOCAL. When your changeset is pushed to another repo, unless that repo also has the same keyword setup, the keywords are NOT expanded.
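A minimal hgrc sketch, assuming the stock keyword extension; the filename pattern is a hypothetical example and should be narrowed to the files that genuinely need expansion:

[extensions]
keyword =

[keyword]
# expand keywords ONLY in this one file, not across the whole tree
docs/version.txt =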
I agree that it should be avoided whenever possible. One case where it cannot be avoided is when you need to distribute a few selected files (for example, API headers) to other people (for example, API users), when there is no way for them to use hg to find out the version info.
At least two brilliant programmers, Linus Torvalds and Guido van Rossum, disparage the practice of putting keywords into a file that expand to show the version number, last author, etc.
I know how keyword differences clutter up diffs. One of the reasons I like SlickEdit's DiffZilla is because it can be set to skip leading comments.
However, I have vivid memories of team programming where we had four versions of a file (two different releases, a customer one-off, and the development version) all open for patching at the same time, and it was quite helpful to verify at a glance that each time we navigated to an included header we got the proper one, and each time we pasted code the source and destination were what we expected.
There is also the where-did-this-file-come-from problem that arises when a hasty developer copies a file from one place to another using the file system, rather than checking it out of the repository using the tool; or, more defensibly, when files under control in locations A, B, and C need to be marshalled (with cherry-picking) into a distribution location D.
In places where VCS keywords are banned, how do you cope?
I've never used VCS keywords in my entire career, over 30 years. From the most primitive VCS system I've used, up to the present (TFS), I've used some other structure to understand "where I am".
I am rarely in a situation where I've only got one file to work with. I've usually got all the other files necessary to build the project or set of projects. I usually use branching (or streams on one occasion), and I'm working on some slice of the given branch or stream.
If I'm working on multiple branches or streams, I'll have one directory tree for each. All I need to do to know what file I'm working on is check the file path, at the very worst.
At the very best, the version control system will tell you exactly which version of the file you're working on, what the change history is, who else is working on different versions of the file, and anything else you'd care to know.
This doesn't exactly answer your question, but I imagine Linus and Guido have reasons for disliking keywords that don't apply to small-team corporate development.
An $Id$ tag, for instance, has what you could consider to be a global version number. Linux development, and I guess Python development too, is fragmented enough that no number can be global. Lots of people have their own repositories all over the place that would fill in their own $Id$ values, and then those patches might be sent to Linus' or Guido's repositories, where they don't make any sense.
However, in your environment, you probably have one central repository which would assign these and it would be fine. Sounds like you're using git. I wonder if it's possible to configure the central git repository to do tag substitution while the local developer repositories don't. Or perhaps it's better to get the commit hash in the tag.
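If it is git, the closest built-in mechanism I know of is the ident attribute, which expands $Id$ in a file to the blob's SHA-1 on checkout. Since that is a content hash rather than a global version number, it sidesteps the fragmentation problem above. A minimal sketch (*.h is just an example pattern):

# .gitattributes -- expand $Id$ in header files to the blob hash on checkout
*.h ident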