Issue: CODEOWNERS needs a fully qualified path for a rule targeting a directory/subdirectory.
Here is a sample CODEOWNERS file that shows the problem:
* @global-owner
foo/bar/ @octocat
I am expecting that whenever a PR touches any file (even recursively) inside the foo/bar directory, @octocat should be assigned as a reviewer. However, this always falls back to the * rule.
However, when I change the file to something like this:
* @global-owner
/app/src/main/java/com/cueo/foo/bar/ @octocat
This works like a charm. But the problem with this is that I need to repeat each directory twice, like this:
/app/src/main/java/com/cueo/foo/bar/ @octocat
/app/src/test/java/com/cueo/foo/bar/ @octocat
According to the documentation:
# In this example, #octocat owns any file in an apps directory
# anywhere in your repository.
apps/ @octocat
I believe this should work for a nested directory structure also, like:
foo/bar/apps/ @octocat
It turns out we need to prefix such paths with ** as well; this is not clear from the documentation.
So if we add a rule like:
* @global-owner
**/foo/bar/ @octocat
@octocat will be assigned as a reviewer for all foo/bar directories in the project.
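To illustrate, the single ** rule covers both of the fully qualified paths from earlier, so each logical directory only needs one line (the Java file names below are hypothetical):
**/foo/bar/ @octocat
# matches app/src/main/java/com/cueo/foo/bar/Baz.java
# matches app/src/test/java/com/cueo/foo/bar/BazTest.java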
Related
I have the following structure:
.github/
    CODEOWNERS
A/
    files
B/
    files
example.py
example2.py
I want CODEOWNERS to assign specific owners to all files in the main directory.
I can do:
example.py @owner
example2.py @owner
But that means listing them manually, which is something I don't want to do, as it doesn't offer protection when new files are added!
So my question is: how can I match all files in the main directory, but not folders?
The CODEOWNERS docs have an example file, with this relevant bit:
# The `docs/*` pattern will match files like
# `docs/getting-started.md` but not further nested files like
# `docs/build-app/troubleshooting.md`.
docs/* docs@example.com
So, to assign the root directory, but not its subdirectories, you could do
/* @owner
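Putting this together for the directory structure above, a sketch (the per-directory owner handles are made up; adjust them to your own users or teams):
# Files directly in the repository root, e.g. example.py and example2.py,
# but nothing inside A/ or B/:
/* @owner
# Subdirectories can then get their own rules if needed:
/A/ @owner-a
/B/ @owner-b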
For the configuration of streams in Perforce, there are five access types (according to the documentation, ordered from more to less inclusive): share, isolate, import/import+, and exclude. They are placed in the configuration line by line, like so:
share folder1/...
isolate folder2/...
Is it possible to override the access to a subfolder? Like so:
share folder/...
isolate folder/subfolder1/...
such that folder/subfolder1/... is isolated, while folder/subfolder2/... and all the others are shared? Otherwise it seems like a lot of manual work to include every subfolder separately, especially as new ones are added while development progresses.
If this works, what are the rules? Do later lines override earlier lines?
Or do more restrictive access lines override less restrictive ones (i.e., you can share a parent folder and isolate a child folder, but not the other way around)? E.g., is something like
exclude folder/...
share folder/subfolder1/...
also possible?
Let's try it out. If I change my stream Paths to:
Paths:
    share folder/...
    isolate folder/subfolder1/...
here's what I get when I try to merge a path inside the isolated folder and one outside it:
C:\Perforce\test>p4 merge -n folder/subfolder1/...
folder/subfolder1/... - no target file(s) in branch view.
C:\Perforce\test>p4 merge -n folder/subfolder2/...
No such file(s).
which tells me that, indeed, subfolder1 is isolated correctly. The "no target files in branch view" error tells me that the path is excluded from the branch view (which is the function of isolate), whereas "no such file(s)" lets me know that the only reason there's nothing to merge is that I didn't bother to actually add any files there.
Let's try the other example and see how that works. After changing my Paths to:
Paths:
    exclude folder/...
    share folder/subfolder1/...
I can do a similar experiment with p4 sync:
C:\Perforce\test>p4 sync -n folder/subfolder1/...
folder/subfolder1/... - no such file(s).
C:\Perforce\test>p4 sync -n folder/subfolder2/...
folder/subfolder2/... - file(s) not in client view.
and that also works as I'd expect (basically the same way classic client views work) -- the later and more specific line overrides the earlier and more general line, so subfolder1 is shared while subfolder2 is excluded.
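So the rule of thumb is that a later, more specific line wins over an earlier, more general one, in either direction. A sketch of a stream spec that applies this at several levels (the folder names are made up), which should behave the same way by the later-line-wins rule demonstrated above:
Paths:
    share ...
    isolate build/...
    share build/scripts/...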
Is there any way (for on-premise GitHub) to:
For the N files in a Pull Request,
look at the history of those files,
and add any/all GitHub users (from that history) to the list of code reviewers?
I have searched around.
I found "in general" items like this:
https://www.freecodecamp.org/news/how-to-automate-code-reviews-on-github-41be46250712/
But I cannot find anything regarding the specific workflow I describe above.
We can dump the list of files changed in the PR to a text file. Then we can run the git command below to get the list of users appearing in each file's latest blame. For each file from the file list, run the blame command. This could also be a simple script:
1. Generate a txt file from the PR's list of files.
2. Traverse all the filenames in the txt file (Python, Bash, etc.).
3. Run the blame command for each and store the authors in a list.
4. Add reviewers to the PR from that list, manually or with a small JS script.
For the GitHub-specific part: list-pull-requests-files
The blame command is something like:
git blame --porcelain filename | grep "^author " | sort -u
As a note, some of the users may no longer be available on GitHub; an extra step can be added after we get the usernames to check whether they still exist (this looks achievable through the GitHub API).
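Here is a minimal Bash sketch of the whole flow, assuming curl and jq are installed, GITHUB_TOKEN is set, the GitHub Enterprise host, repo, and PR number are placeholders, and that the blame author names can be mapped to GitHub usernames (in practice they often need an email-to-username lookup first):
#!/usr/bin/env bash
# Placeholders: adjust the API base (GitHub Enterprise Server uses /api/v3),
# the repo, and the PR number to your setup.
API="https://github.example.com/api/v3"
REPO="org/repo"
PR=123

# Step 1: list the files changed in the PR (first page only; paginate for large PRs).
files=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
  "$API/repos/$REPO/pulls/$PR/files" | jq -r '.[].filename')

# Steps 2-3: blame each file and collect the unique author names
# (assumes filenames contain no spaces).
authors=$(for f in $files; do
  git blame --porcelain -- "$f" | grep "^author " | cut -d' ' -f2-
done | sort -u)
echo "Candidate reviewers:"
echo "$authors"

# Step 4: request reviews; replace the list with real usernames after
# mapping the blame authors to GitHub accounts.
# curl -s -X POST -H "Authorization: token $GITHUB_TOKEN" \
#   "$API/repos/$REPO/pulls/$PR/requested_reviewers" \
#   -d '{"reviewers":["alice","bob"]}'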
The Goal:
How to use .gitignore to exclude all folders & files except PowerShell $Profile?
The answer should help me expand the current exception list to more files in other subfolders. If possible, keep wildcards to a minimum and be as specific as possible (full/fuller file paths). Why? For instance, Book1.xlsx may exist in multiple subfolders, but I want to be able to choose only the specific desired subfolders.
Thanks in advance!
Current status:
On Windows 10 (not Linux Distros):
git init was run on the top-level directory C:\. [Please don't suggest starting from another subfolder. Just stay with C:\, as I will include more files in the exception list.]
C:\.gitignore containing the below:
# Ignore All
/*
# Exception List [eg. PowerShell's $Profile (please use FULL/FULLER FILE PATH, if possible)]
!.gitignore
!Microsoft.PowerShell_profile.ps1
!Users/John/Documents/WindowsPowerShell/Microsoft.PowerShell_profile.ps1
!Users\John\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
!C:/Users/John/Documents/WindowsPowerShell/Microsoft.PowerShell_profile.ps1
!C:\Users\John\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
With the above contents, git status successfully returned only .gitignore as 1 untracked file. Microsoft.PowerShell_profile.ps1 remained missing from the untracked listing.
I've tried alternative ways (wildcards, partial subfolder names with patterns, etc.), but all failed to match the PowerShell $Profile by its fuller file path.
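For what it's worth, git cannot re-include a file if one of its parent directories is ignored, so each directory level on the path has to be re-included before the file itself. A sketch of a C:\.gitignore that does this for the profile path above (forward slashes throughout):
# Ignore everything in the root...
/*
# ...except .gitignore itself:
!/.gitignore
# ...and walk down to the profile, level by level:
!/Users/
/Users/*
!/Users/John/
/Users/John/*
!/Users/John/Documents/
/Users/John/Documents/*
!/Users/John/Documents/WindowsPowerShell/
/Users/John/Documents/WindowsPowerShell/*
!/Users/John/Documents/WindowsPowerShell/Microsoft.PowerShell_profile.ps1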
Is it possible to list only the folders in a bucket using the gsutil tool?
I can't see anything listed here.
For example, I'd like to list only the top-level folders in a bucket.
Folders don't actually exist. gsutil and the Storage Browser do some magic under the covers to give the impression that folders exist.
You could filter your gsutil results to only show results that end with a forward slash, but this may not show all the "folders". It will only show "folders" that were manually created (i.e., not ones that exist implicitly because an object name contains slashes):
gsutil ls gs://bucket/ | grep -e "/$"
Just to add here: if you directly drag a folder tree into the Google Cloud Storage web GUI, you don't really get an object for each parent folder; in fact, each file name is a fully qualified path, e.g. "blah/foo/bar.txt", instead of a folder hierarchy blah > foo > bar.txt.
The trick here is to first use the GUI to create a folder called blah, then create another folder called foo inside it (using the button in the GUI), and finally drag the files into it.
When you now list the files, you will get a separate entry for each level:
blah/
    foo/
        bar.txt
rather than only one:
blah/foo/bar.txt
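As an illustration, with those placeholder objects in place, a flat object listing shows the "folders" as their own zero-byte entries (the bucket name here is hypothetical):
gsutil ls gs://my-bucket/**
gs://my-bucket/blah/
gs://my-bucket/blah/foo/
gs://my-bucket/blah/foo/bar.txt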