I have a file structure like so:
foo/file1.c
foo/file2.c
foo/freeforall.c
My foogroup on GitHub needs to review changes to everything in foo except freeforall.c, which is a file that anyone can touch without restriction, so changes to it should not automatically add anybody as a reviewer.
(In reality, whoever modifies freeforall.c just asks someone else on their own team to review their change.)
The Question
How can I do this with GitHub CODEOWNERS? The file looks like this right now:
foo @MyOrg/foogroup
What I Tried
GitHub explicitly says that the ! syntax is not supported, so that won't work:
foo @MyOrg/foogroup
!foo/freeforall.c
And there are too many files in the foo directory to explicitly include them individually. I could move freeforall.c into a different directory, but I really don't want GitHub's CODEOWNERS feature dictating how I organize my components. I want freeforall.c to be in the foo directory; that's where it belongs!
I also considered creating an emptygroup to assign that file to, but soon realized that I'd then be requiring that a group with zero members approve PRs to that file, which would obviously cause problems. 😂
foo @MyOrg/foogroup
foo/freeforall.c @MyOrg/emptygroup
In a CODEOWNERS file, the last matching line takes precedence. You can also write entries with no owner to mark paths/files as having no owner.
Consequently, you can use the following lines to have all files in the foo directory owned by @MyOrg/foogroup except foo/freeforall.c:
foo @MyOrg/foogroup
foo/freeforall.c
I have implemented a per-workspace backup functionality in my extension using workspaceState. Since the data can be sensitive, I'd like to clear all workspaceStates on extension deactivation/uninstall.
The ExtensionContext provides no way to clear all extension-related data across different workspaces and their workspaceStates.
So I've considered saving data in the ExtensionContext globalState, tagging each entry with a workspace id. The problem is that the workspace namespace doesn't provide a way to uniquely identify the current workspace. I thought about hashing the workspace name and path, but both of these are changeable, and any change would destroy the pointer to the data. This is exactly why I can't just write files to internal folders. The only other solution I have is to write the backup data directly to the workspace, and I'd like to avoid that.
How does VSCode maintain the knowledge of which workspaceState belongs to which workspace? How can I tie data to the workspace but have access from anywhere else in VSCode?
Side note: You should avoid saving sensitive data in general. And if necessary, try to encrypt it.
Anyway:
I don't have the full answer, but I was researching something similar (an extension I use crashes due to now-invalid settings in the workspaceState).
I found the storage for the workspace state in this folder (Windows):
%appdata%\Code\User\workspaceStorage\
In there, you find a lot of folders with hex-based names. Inside those folders, I always found two files named state.vscdb and state.vscdb.backup.
There is usually a third file called workspace.json, which helps you figure out whether you are in the correct workspace (but you'd have to iterate through all the folders; maybe there is a way to get the folder name from the extension API?).
If you open the state.vscdb file, you find something that looks to me like a set of serialized objects. It has some separator characters of unknown function, but you also find full paths and names that clearly originate from different modules of VS Code, including the extensions.
I don't need to worry about the other cached stuff; I'm just going to delete the whole folder to fix my current issue. But I'm pretty sure one can figure out how the file is built and edit out the sensitive data if one has to.
The state.vscdb.backup file is pretty much what the name suggests: they probably just make a copy of the other file every few minutes so you have a fallback.
To add to the conversation, there are two SQLite state databases:
<user-data-dir>\User\globalStorage\state.vscdb
<user-data-dir>\User\workspaceStorage\<workspace.id>\state.vscdb
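Both are plain SQLite files, so you can poke at them with the sqlite3 CLI. In the copies I've inspected, everything sits in a single key/value table named ItemTable; treat that schema as an observation, not a documented API:
# Dump some keys from the global state database (Linux path shown;
# "ItemTable" is what I observed in these files, not a documented schema).
sqlite3 ~/.config/Code/User/globalStorage/state.vscdb \
  "SELECT key FROM ItemTable LIMIT 20;"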
Depending on how VS Code was launched, you could have a Single Folder Workspace or a Multi-Folder Workspace that is global or local. Globally, the data lives here:
Linux: $HOME/.config/Code/
OS X: $HOME/Library/Application Support/Code/
Windows: %APPDATA%\Code\
Locally, the data will be in the .vscode folder of the current workspace.
In my situation:
I open a new workspace.
Set it up as I want it to start every time.
Make copies of the two SQLite databases.
Copy over the databases before launching VS Code.
This leads to a clean VS Code state.
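A rough sketch of that workflow on Linux (the baseline directory is made up, and <id> stands for the workspace's hex folder name under workspaceStorage):
# After setting the workspace up the way you want it, save a baseline.
mkdir -p ~/vscode-baseline
cp ~/.config/Code/User/globalStorage/state.vscdb ~/vscode-baseline/global.vscdb
cp ~/.config/Code/User/workspaceStorage/<id>/state.vscdb ~/vscode-baseline/workspace.vscdb
# Before each launch, restore the baseline to get the clean state back.
cp ~/vscode-baseline/global.vscdb ~/.config/Code/User/globalStorage/state.vscdb
cp ~/vscode-baseline/workspace.vscdb ~/.config/Code/User/workspaceStorage/<id>/state.vscdb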
To see how workspace.id is generated, check this link.
Let's say I'm writing a program, and I want it to follow the XDG Base Directory Specification for where it puts its files (app foo uses $XDG_CONFIG_HOME/foo as the directory for configuration files if XDG_CONFIG_HOME is set and non-blank, or ~/.config/foo, or fails with an error if the home directory can't even be resolved).
Is there a correct/specified behavior for the situation where, for example, XDG_CONFIG_HOME is set and non-blank, but that directory doesn't exist? Or if there is no such variable, and ~/.config doesn't exist? Is it expected that my program attempt to create it? Or is the non-existence of that directory considered an error on the environment's/system's part, and my program should avoid doing anything about it (just bail with an error)?
Note: I'm not asking if I should create ~/.config/foo - obviously that's a yes; I'm asking if I should create ~/.config itself, if it doesn't exist.
(To be more pedantic: obviously some program should create them - the question is whether it's solely the system's/desktop's/user's job to do so, or if any program should try creating the relevant directories if they don't exist?)
I've tried reading the XDG Base Directory Specification, which says that when attempting to write its files, the program may create the requisite directory, but it's unclear if this is referring to just the application's specific/"personal" sub-directory in the XDG base directories, or if this is meant for the XDG base directories themselves.
P.S. Usually I have a good idea of what tags to use, but here I'm really uncertain: please edit this post or suggest improvements to give it proper tags.
From the XDG Base Directory Specification:
If, when attempting to write a file, the destination directory is non-existent an attempt should be made to create it with permission 0700. If the destination directory exists already the permissions should not be changed. The application should be prepared to handle the case where the file could not be written, either because the directory was non-existent and could not be created, or for any other reason. In such case it may choose to present an error message to the user.
I would interpret this to mean that the application should try to create the XDG base directory (or any directory required for the destination) and only display an error if it is unable to do so.
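A minimal shell sketch of that interpretation, for a hypothetical app named foo (the umask makes every directory created here 0700):
# Resolve the config directory per the spec; ${VAR:-default} falls back
# when XDG_CONFIG_HOME is unset or blank.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/foo"
# Try to create the whole path, base directory included, owner-only.
if ! (umask 077 && mkdir -p "$config_dir"); then
    echo "foo: cannot create $config_dir" >&2
    exit 1
fi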
I've settled on my own answer after a few years of thinking:
never create non-standard base XDG directories, but
it may be okay to automatically create the standard XDG base directories, and
you should automatically create any of your application's subdirectories within the base directories.
I think it can be good to be automatically helpful, but it is also very important to not worsen a user's mistakes.
If I write XDG_DATA_HOME=~/.locals/hare in my environment variable configuration, I might have wanted that, but it's much more likely that I made a typo of ~/.local/share. So the most helpful, least disruptive, and least wrong thing to do in that case is to report the lack of the requested XDG base directory.
So, if the user has specified a custom XDG base directory, and that base directory does not exist, never try to create it. Don't put your user in a situation where for example next to their standard ~/.config directory they get a ~/.configs or ~/.comfig directory which also contains some of their configurations, until one day they fix the typo and suddenly their programs behave as if they were reset to the defaults. This is a situation where early detection of mistakes is the most helpful thing to do in the long run, so tell the user immediately "this doesn't exist".
But if the user has not asked for a custom base directory, and you're about to use the known, standard location, then it's fine to try to automatically create it, if you ensure you only create it with reasonable ownership, permissions, and so on.
Finally, when the base XDG directory exists, most apps should probably make their own subdirectory inside that, and you definitely should create that app-specific directory, and any other subdirectories inside that one, automatically.
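Sketched in shell for a hypothetical app foo, the three rules above might look like this:
if [ -n "${XDG_CONFIG_HOME}" ]; then
    # Custom base directory: use it, but never create it; a missing
    # directory is most likely a typo the user should hear about now.
    if [ ! -d "$XDG_CONFIG_HOME" ]; then
        echo "foo: XDG_CONFIG_HOME=$XDG_CONFIG_HOME does not exist" >&2
        exit 1
    fi
    base="$XDG_CONFIG_HOME"
else
    # Standard location: creating it automatically is fine.
    base="$HOME/.config"
fi
# The app's own subdirectory is always created (0700 via the umask).
(umask 077 && mkdir -p "$base/foo") || exit 1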
I have been using the Mercurial version control system for some time already, and several times I have seen the system warn me about divergent renames.
What I do is make two different descendants of one larger file. For example, I had one class named HttpRequest that was a wrapper for CGI.pm. Later, when I decided to move to the PSGI protocol, I made two copies of this file, namely Cgi.pm and Psgi.pm. The original class remains and becomes abstract, with the new ones inheriting from it.
I always thought that this was the preferred way to deal with such situations, because each file retains the history of the file it was based on. But when I push my changes to the remote server, it tells me this:
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 2 changesets with 2 changes to 2 files (+1 heads)
remote: warning: detected divergent renames of lib/HttpRequest.pm to:
remote: lib/HttpRequest/Cgi.pm
remote: lib/HttpRequest/Psgi.pm
Is this bad? Should I do it some other way?
It looks like the problem is not that you made versioned copies of the same source file, but that there are two changesets in which the file was renamed, to a different name each time, and these changesets are not on the same line of development. This observation is based on the fact that your push created a new head.
So in this instance, Mercurial is warning you that the push has created a new head, and as a result the repo has two heads, in each of which HttpRequest.pm has been renamed to a different file. Therefore, if you attempt to merge these heads into one, you will have to decide which rename should stick.
The simplest way to prevent this is to copy the original file instead of renaming it. More precisely, the first time you may rename it, but on all subsequent occasions you have to copy instead (this applies within one line of development, and is also necessary across divergent lines you intend to merge later).
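In your case, since HttpRequest.pm stays behind as the abstract base, you can record both descendants as copies and avoid the divergent renames entirely; a sketch:
# Each copy keeps a link to the original's history, and Mercurial never
# sees two competing renames of the same file.
hg copy lib/HttpRequest.pm lib/HttpRequest/Cgi.pm
hg copy lib/HttpRequest.pm lib/HttpRequest/Psgi.pm
hg commit -m "Split HttpRequest into Cgi and Psgi backends"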
You might also want to look at the relevant section of Mercurial: The Definitive Guide.
We use ClearCase where I work and I'm trying to figure out how to find any files that have been modified but not merged up. We have a main branch at the very top level in ClearCase, and this is where the final source code changes are merged to and where we do our formal release builds from. We then have an integration branch under main where integration issues are worked out. When we get everything working and tested in the integration branch, we merge the integration branch up to main. For individual feature implementations and bug fixes, we create a new branch (usually named after an action, feature, or bugfix) off of the integration branch and work the issue. When we are done with it, we merge that change up to the integration branch.
I was wondering if anybody knew of a command or a way to see what files are modified in the feature/bug fix branches but were not merged back up to the integration branch. I've been looking around but I can't seem to find a way to do it. I would like to be able to run the command and have it tell me all files that have been modified on all of the sub-branches but not merged up. Thanks.
Normally, you use ct findmerge to find files to merge from one branch or view into the current view (assuming ct is an alias for cleartool).
I think you would have to identify all the branches you are interested in and do a separate ct findmerge operation for each source branch, for each destination branch. That's complex. You'd also want to be able to eliminate the many branches that are known to be fully merged. You can annotate a branch to indicate that it is fully merged.
So, I don't think there is a simple, single command to do this job.
You need to decide which branches are targets that you're concerned about. These would be your integration branch(es). Presumably, you have a fairly small list of these.
For each of those target branches, you need to decide which work branches are relevant to that integration branch. This is the tricky part; there is no easy way to determine whether a particular bug fix or feature branch is relevant to that integration branch using information in the VOBs; it is really only known by the users.
You then need a script that does (in outline):
for int_branch in $(list_relevant_integration_branches)
do
    # ...create view with tag $tag for $int_branch...
    ct setcs -f $(cspec_for_integration_branch $int_branch) $tag
    ct setview -exec "find_outstanding_merges_for_integration_branch $int_branch" $tag
done
where find_outstanding_merges_for_integration_branch looks a bit like:
vob_list=$(list_relevant_vobs)
for mrg_branch in $(list_relevant_merge_branches $int_branch)
do
    echo
    echo "Merges from $mrg_branch to $int_branch"
    ct findmerge $vob_list -fversion .../$mrg_branch/LATEST -print
done
Note that this command assumes (a) the current view is appropriate for the target, and (b) the integration branch name is passed as an argument.
You can get fancy and decide how to handle automatic or graphical merges instead of -print. The hard part is still the unwritten commands such as list_relevant_integration_branches and list_relevant_vobs. These might be simple:
# list_relevant_integration_branches
cat <<EOF
integration_branch_version_3_0
integration_branch_version_3_1
integration_branch_version_4_0
EOF
# list_relevant_vobs
cat <<EOF
/vobs/mainproject
/vobs/altproject
/vobs/universal
EOF
Or they might be considerably more complex. (If you only have one VOB, then your life is much simpler; the systems we work with have 20-odd VOBs visible in the cspec.)
The other unwritten script is list_relevant_merge_branches. I don't know whether there's a simple way to write that. If you define and apply appropriate attribute types (ct mkattype, ct mkattr) when the development branches are created (perhaps a 'target integration branch' attribute type, an enumeration type), you could use that to guide you. That leaves you with a retrofit problem; how to get the correct attribute onto each existing working branch. You also need a separate attribute to identify branches that no longer need merge scrutiny, unless you decide that the absence of a 'target integration branch' attribute means 'no need to scrutinize any more'. That reduces the retrofit problem to adding the target integration branch to those branches that still need merging; by default, all existing branches will be deemed fully merged.
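For example (the attribute and branch names here are purely illustrative):
# Create the attribute type once, then attach a target to each work
# branch that still needs merge scrutiny.
ct mkattype -vtype string -c "target integration branch" TargetIntBranch
ct mkattr TargetIntBranch '"integration_branch_version_4_0"' \
    brtype:bugfix_1234@/vobs/mainproject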
If you know the source and destination branches (topic detailed in Jonathan's answer, which I have upvoted), then don't forget the query primitive merge:
merge (from-location , to-location)
In all cases, TRUE if the element to which the object belongs has a merge hyperlink (default name: Merge) connecting the from-location and to-location.
You can specify either or both locations with a branch pathname or a version selector.
Specifying a branch produces TRUE if the merge hyperlink involves any version on that branch.
The branch pathname must be complete (for example, /main/rel2_bugfix, not rel2_bugfix).
This thread illustrates that query in action:
How is it possible to find all the elements on a specific branch that are checked in and not merged away?
cleartool find \\view\administration\ProjectVOB \
-branch "brtype(HNH-372452) && \
!merge(...\HNH-372452\LATEST,...\main-372452\LATEST)" -print
\\view\administration\ProjectVOB\Com-API\Dll\COMFrontendDll\Mmi.cpp@@\main\HNH-372452
\\view\administration\ProjectVOB\geometry\geochain\geocutterloc.cpp@@\main\HNH-372452
That "merge hyperlink" is the red arrow you see in version tree:
(see article "Versioning and parallel development of requirements")
What is a good strategy for dealing with changing product and feature names in source code? Here's the situation I find myself in over and over again (most of you can relate?)...
Product name starts off as "DaBomb"
Major features are "Exploder", "Lantern" and "Flag".
Time passes, and the Feature names are changed to "Boom", "Lighthouse" and "MarkMan"
Time passes, and the product name changes to "DaChronic"
...
...
Blah, blah, blah...over and over and over
And now we have a large code base with 50 different names sprinkled around the directory tree and source files, most of which are obsolete. Only the veterans remember what each name means, the full etymological history, etc.
What is the solution to this mess?
Clarification: I don't mean the names that customers see, I mean the names of directories, source files, classes, variables, etc. that the developers see where the changing product and feature names get woven into.
Given your clarification that you "don't mean the names that customers see, [you] mean the names of directories, source files, classes, variables, etc. that the developers see", yeah, this can be an annoying problem.
The teams I've been on have coped best when we've had a policy of always using only one name for each thing in the code base. If the name changes later on, we either stay with the old name in the code or migrate all instances of the old name to the new name. The important thing is to never start using the new name in the code unless all instances of the old name have been migrated. That way you only ever have to keep two names for something in your head: the "old name" used in the code, and the name everyone else uses.
We've also often chosen a very generic/descriptive name for things when starting out if we know the "brand name" is likely to change.
I consider renaming to better naming conventions just another form of refactoring. Create a branch, perform the renames, run unit/integration tests, commit, merge, repeat. It's all about process control to keep consistency in the project.
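For example, assuming git and a make test target (the question doesn't name a VCS or test runner, so both are stand-ins):
# Isolate the rename on its own branch and verify it before merging.
git checkout -b rename-exploder-to-boom
git grep -l 'Exploder' | xargs sed -i 's/Exploder/Boom/g'
git mv src/Exploder.c src/Boom.c    # hypothetical file rename
make test                           # run the suite before committing
git commit -am "Rename Exploder to Boom"
git checkout main
git merge rename-exploder-to-boom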
The solution to the mess is to not create it in the first place. Once a code path is named, there's rarely a good reason to change it and never a good reason to use a new name alongside the old one. When "Exploder" becomes "Boom", you have two choices: Either keep using Exploder exclusively, and never mention Boom anywhere, or change all instances of Exploder to Boom and then continue on using Boom exclusively and never mention Exploder again.
If you're using both Exploder and Boom in the same code base, you're doing it wrong.
Also, I know you clarified that you're not talking about the user-visible names, but, if you start out working with your own internal names which are relevant to what the code does and completely independent of what marketing wants to call the product/feature, then this is much less likely to become an issue. If you're already referring to Exploder internally as TNT, then what difference does it make if Exploder gets changed to Boom?
How do you deal with Localization? Same thing; same method.
We use an internal and an external name. It can be as simple as a static constant definition like
public static final String EXPLODER = "Boom";
In code you'll always use the reference EXPLODER. The same goes for path names and the like; hard-coding those paths in different places is a no-go anyway. If some guy starts digging through internal stuff (like JS sources or ini files or whatever), who cares if they discover Exploder?
Just use internal names, and ignore changes to marketing/official names: https://softwareengineering.stackexchange.com/a/208578/55472.