I used Beyond Compare 3 for years as a merge tool. I'm familiar with how it works.
I recently installed Beyond Compare 4 (Version 4.2.6) and Mercurial uses it as the visual editing tool for resolving merge conflicts when this command is entered:
hg resolve --all
The problem is that both sides of the merge come up as read-only, showing "Editing disabled" in the status bar, which makes the merge impossible (see screenshot).
I have read extensively about the various configuration files for Beyond Compare and Mercurial. I've edited multiple lines in the 'mercurial.ini' and '.hgrc' files to attempt to override this, but I can't seem to fix this behavior:
merge-tools.beyondcompare4.args=$local $other $base /mergeoutput=$output /lefttitle=parent1 /centertitle=base /righttitle=parent2 /outputtitle=merged /automerge /reviewconflicts /solo
merge-tools.beyondcompare4.diff3args=$parent1 $parent2 $child /lefttitle='$plabel1' /centertitle='$clabel' /righttitle='$plabel2' /solo
merge-tools.beyondcompare4-noauto.args=$local $other $base /mergeoutput=$output /lefttitle=parent1 /centertitle=base /righttitle=parent2 /outputtitle=merged /reviewconflicts /solo
merge-tools.beyondcompare4-noauto.diff3args=$parent1 $parent2 $child /lefttitle='$plabel1' /centertitle='$clabel' /righttitle='$plabel2' /solo
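For reference, here is how those same settings look when placed under the [merge-tools] section of my mercurial.ini; the executable path is just the default install location and may differ on your machine:
[merge-tools]
# adjust the path below to your Beyond Compare 4 install location
beyondcompare4.executable = C:\Program Files\Beyond Compare 4\BComp.exe
beyondcompare4.args = $local $other $base /mergeoutput=$output /lefttitle=parent1 /centertitle=base /righttitle=parent2 /outputtitle=merged /automerge /reviewconflicts /solo
beyondcompare4.gui = True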
Does anybody have an answer for (1) WHY this behavior exists and (2) how I can fix it?
Related
Is there a way to suppress PSScriptAnalyzer from highlighting alias warnings? e.g.
'rm' is an alias of 'Remove-Item'. Aliases can introduce possible problems and make scripts hard to maintain. Please consider changing alias to its full content.
Aliases in PowerShell are extremely useful. I have a simple rule: I only ever use the rational built-in aliases in scripts (I ignore the strange ones). Why? Well, most of these particular aliases are now 13 years old and have never changed (PowerShell 1.0 release November 14, 2006). So, for example, % or ls or cd are reliable in 99.99% of cases. I consider 99.99% reliability to be "good enough". Possibly the single-most-over-repeated comment on all PowerShell StackOverflow questions is "Note: it is not recommended to use aliases in PowerShell scripts as they can change!" (not recommended by whom I often wonder? God? ;-) )
However, PSScriptAnalyzer in VSCode highlights all aliases as problems so that my current 7,000 line script has 488 such "problems". Is there a way to tell PSScriptAnalyzer that I like aliases, I intend to use aliases for the vastly more concise code, clarity, and greatly improved readability that they give me, and so I do not consider them to be problems?
Mathias' comment states to search for "Select PSScriptAnalyzer Rules", but I was not able to find that setting (VS Code 1.58.2, ms-vscode.powershell 2021.6.2).
The solution I found was to change the "Script Analysis: Settings Path" to point to a created file that contains the following code[1] to whitelist certain aliases. Below I've un-commented the relevant section.
@{
    # Only diagnostic records of the specified severity will be generated.
    # Uncomment the following line if you only want Errors and Warnings but
    # not Information diagnostic records.
    #Severity = @('Error','Warning')

    # Analyze **only** the following rules. Use IncludeRules when you want
    # to invoke only a small subset of the default rules.
    IncludeRules = @('PSAvoidDefaultValueSwitchParameter',
                     'PSMisleadingBacktick',
                     'PSMissingModuleManifestField',
                     'PSReservedCmdletChar',
                     'PSReservedParams',
                     'PSShouldProcess',
                     'PSUseApprovedVerbs',
                     'PSAvoidUsingCmdletAliases',
                     'PSUseDeclaredVarsMoreThanAssignments')

    # Do not analyze the following rules. Use ExcludeRules when you have
    # commented out the IncludeRules settings above and want to include all
    # the default rules except for those you exclude below.
    # Note: if a rule is in both IncludeRules and ExcludeRules, the rule
    # will be excluded.
    #ExcludeRules = @('PSAvoidUsingWriteHost')

    # You can use rule configuration to configure rules that support it:
    Rules = @{
        PSAvoidUsingCmdletAliases = @{
            Whitelist = @("cd")
        }
    }
}
[1] https://github.com/PowerShell/vscode-powershell/blob/master/examples/PSScriptAnalyzerSettings.psd1
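To sanity-check the settings file outside VS Code, the same file can be fed to PSScriptAnalyzer directly; the script and settings paths below are just examples:
# Run the analyzer manually against a script using the custom settings file (example paths)
Invoke-ScriptAnalyzer -Path .\MyScript.ps1 -Settings .\PSScriptAnalyzerSettings.psd1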
I am attempting to exclude certain files from my doxygen generated documentation. I am using version 1.8.14.
My files come in this naming convention:
/Path2/OtherFile.cs
/Path/DAL.Entity/Source.cs
/Path/DAL.Entity/SourceBase.generated.cs
I want to exclude all files that do NOT end in Base.generated.cs, and are located inside of /Path/.
Since it appears doxygen claims to use regex for the exclude_patterns variable, I eventually came up with this:
.*\\Path\\DAL\..{4,15}\\((?<!Base\.generated).)*
Needless to say, it did not work. Nor did multiple other variations. So far a simple wildcard * is the only regex character I have gotten to actually work.
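For instance, a plain wildcard pattern like the following does exclude files for me; it just cannot express the "everything except *Base.generated.cs" condition (the path is just an example):
EXCLUDE_PATTERNS = */Path/DAL.Entity/*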
doxygen uses QRegExp for a lot of things, so I assumed that was the library used for this variable as well, but even several variations of a pattern that the library claims to support did not work; granted, that library is apparently full of bugs, but I would expect some things to work.
Does doxygen actually use a regex library for this variable?
If so, which library is it?
In either case, is there a method of achieving my goal?
My conclusion is: no, the Doxygen Doxyfile does not support real regex, even though the documentation claims it does. Only standard wildcards work.
We ended up with a really awkward workaround.
What we did was add a macro in our CMakeLists.txt that builds a string with everything we want to include in INPUT instead, manually excluding the parts we don't want.
The sad part is that CMake's regex support is also limited, so we couldn't use advanced regex such as negative lookahead in LIST(FILTER ... EXCLUDE), similar to LIST(FILTER children EXCLUDE REGEX "^((?!autogen/public).)*$"). So even this solution is not really what we wanted.
Our CMakeLists.txt ended up looking something like this:
cmake_minimum_required(VERSION 3.9)
project(documentation_html LANGUAGES CXX)
find_package(Doxygen REQUIRED dot)
# Custom macros
## Macro for getting all relevant directories when creating HTML documentain.
## This was created cause the regex matching in Doxygen and CMake are lacking support for more
## advanced syntax.
MACRO(SUBDIRS result current_dir include_regex)
FILE(GLOB_RECURSE children ${current_dir} ${current_dir}/*)
LIST(FILTER children INCLUDE REGEX "${include_regex}")
SET(dir_list "")
FOREACH(child ${children})
get_filename_component(path ${child} DIRECTORY)
IF(${path} MATCHES ".*autogen/public.*$" OR NOT ${path} MATCHES ".*build.*$") # If we have the /source/build/autogen/public folder available we create the doxygen for those interfaces also.
LIST(APPEND dir_list ${path})
ENDIF()
ENDFOREACH()
LIST(REMOVE_DUPLICATES dir_list)
string(REPLACE ";" " " dirs "${dir_list}")
SET(${result} ${dirs})
ENDMACRO()
SUBDIRS(DOCSDIRS "${CMAKE_SOURCE_DIR}/docs" ".*.plantuml$|.*.puml$|.*.md$|.*.txt$|.*.sty$|.*.tex$")
SUBDIRS(SOURCEDIRS "${CMAKE_SOURCE_DIR}/source" ".*.cpp$|.*.hpp$|.*.h$|.*.md$")
# Common config
set(DOXYGEN_CONFIG_PATH ${CMAKE_SOURCE_DIR}/docs/doxy_config)
set(DOXYGEN_IN ${DOXYGEN_CONFIG_PATH}/Doxyfile.in)
set(DOXYGEN_IMAGE_PATH ${CMAKE_SOURCE_DIR}/docs)
set(DOXYGEN_PLANTUML_INCLUDE_PATH ${CMAKE_SOURCE_DIR}/docs)
set(DOXYGEN_OUTPUT_DIRECTORY docs)
# HTML config
set(DOXYGEN_INPUT "${DOCSDIRS} ${SOURCEDIRS}")
set(DOXYGEN_EXCLUDE_PATTERNS "*/tests/* */.*/*")
set(DOXYGEN_FILE_PATTERNS "*.cpp *.hpp *.h *.md")
set(DOXYGEN_RECURSIVE NO)
set(DOXYGEN_GENERATE_LATEX NO)
set(DOXYGEN_GENERATE_HTML YES)
set(DOXYGEN_HTML_DYNAMIC_MENUS NO)
configure_file(${DOXYGEN_IN} ${CMAKE_BINARY_DIR}/DoxyHTML @ONLY)
add_custom_target(docs
COMMAND ${DOXYGEN_EXECUTABLE} ${CMAKE_BINARY_DIR}/DoxyHTML -d Markdown
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
COMMENT "Generating documentation"
VERBATIM)
and in the Doxyfile.in we added the corresponding placeholders for those fields (substituted by configure_file):
OUTPUT_DIRECTORY = @DOXYGEN_OUTPUT_DIRECTORY@
INPUT = @DOXYGEN_INPUT@
FILE_PATTERNS = @DOXYGEN_FILE_PATTERNS@
RECURSIVE = @DOXYGEN_RECURSIVE@
EXCLUDE_PATTERNS = @DOXYGEN_EXCLUDE_PATTERNS@
IMAGE_PATH = @DOXYGEN_IMAGE_PATH@
GENERATE_HTML = @DOXYGEN_GENERATE_HTML@
HTML_DYNAMIC_MENUS = @DOXYGEN_HTML_DYNAMIC_MENUS@
GENERATE_LATEX = @DOXYGEN_GENERATE_LATEX@
PLANTUML_INCLUDE_PATH = @DOXYGEN_PLANTUML_INCLUDE_PATH@
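For clarity, once configure_file has run with @ONLY, the generated DoxyHTML file contains those placeholders replaced with the values set in CMake; abridged, it comes out roughly like this (the INPUT line depends on what the SUBDIRS macro collected):
OUTPUT_DIRECTORY = docs
INPUT            = <space-separated list of docs and source directories>
FILE_PATTERNS    = *.cpp *.hpp *.h *.md
RECURSIVE        = NO
EXCLUDE_PATTERNS = */tests/* */.*/*
GENERATE_HTML    = YES
GENERATE_LATEX   = NO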
After this we can run cd ./build && cmake ../ && make docs to create our html documentation and have it include the autogenerated interfaces in our source folder without including all the other directories in the build folder.
Quick description of what actually happens in the CMakeLists.txt
# Macro that gets all directories from current_dir recursively and returns the result to result as a space separated string
MACRO(SUBDIRS result current_dir include_regex)
# Gets all files recursively from current_dir
FILE(GLOB_RECURSE children ${current_dir} ${current_dir}/*)
# Filter the files so we only keep those that match include_regex (the regex can't be too advanced)
LIST(FILTER children INCLUDE REGEX "${include_regex}")
SET(dir_list "")
# Let us act on all files... :)
FOREACH(child ${children})
# We're only interested in the path. So we get the path part from the file
get_filename_component(path ${child} DIRECTORY)
# Since CMake's regex is also limited we can't do nice things such as LIST(FILTER children EXCLUDE REGEX "^((?!autogen/public).)*$"), which would have been preferred (CMake regex does not understand negative lookahead/lookbehind)... So we ended up with this ugly thing instead: adding all build/autogen/public paths and not adding any other paths inside build. It is probably possible to write this expression without negative lookahead, but we just needed to get this one task done... :P
IF(${path} MATCHES ".*autogen/public.*$" OR NOT ${path} MATCHES ".*build.*$")
LIST(APPEND dir_list ${path})
ENDIF()
ENDFOREACH()
# Remove all duplicates. Since we GLOBbed all files there are a lot of them, so this is important or the Doxygen INPUT will overflow... I know... I tested...
LIST(REMOVE_DUPLICATES dir_list)
# Convert the dir_list to a space separated string
string(REPLACE ";" " " dirs "${dir_list}")
# Return the result! Coffee and cinnamon buns for everyone!
SET(${result} ${dirs})
ENDMACRO()
# Get all the paths that we want to include in our documentation... this is also where the build folders for the different applications live, with the autogenerated interfaces that we want to keep.
SUBDIRS(SOURCEDIRS "${CMAKE_SOURCE_DIR}/source" ".*.cpp$|.*.hpp$|.*.h$|.*.md$")
# Add the dirs we want to the Doxygen INPUT
set(DOXYGEN_INPUT "${SOURCEDIRS}")
# Normal exclude patterns for stuff we don't want to add. This field does not support regex... even though the documentation says it should.
set(DOXYGEN_EXCLUDE_PATTERNS "*/tests/* */.*/*")
# Normal use of the file patterns that we want to keep in the documentation
set(DOXYGEN_FILE_PATTERNS "*.cpp *.hpp *.h *.md")
# IMPORTANT! Since we are creating all the INPUT paths ourselves we don't want Doxygen to do any recursion for us
set(DOXYGEN_RECURSIVE NO)
# Write the config
configure_file(${DOXYGEN_IN} ${CMAKE_BINARY_DIR}/DoxyHTML @ONLY)
# Create the target that will use that config to create the html documentation
add_custom_target(docs
COMMAND ${DOXYGEN_EXECUTABLE} ${CMAKE_BINARY_DIR}/DoxyHTML -d Markdown
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
COMMENT "Generating documentation"
VERBATIM)
I know this isn't the answer anyone who stumbles in on this question wants... unfortunately it seems to be the only reasonable solution...
... you all have my deepest condolences...
This question is not a duplicate of hg log - How to get the last 5 log entries? - it is easy to apply a limit. The problem is that the log output, when limited, does not appear to always be ordered descending by log date - the behavior changes with the addition of a revset.
For example, a plain hg log works "as expected" and displays the newest five log entries:
hg log -l5
However, when using a revset the result comes oldest nodes first (as also observed without -l); hence the following shows the oldest five entries, which is not desired:
hg log -r "user('Me')" -l5
How can hg log, with a revset, be instructed to order by the log date descending ("as expected") so that the limit has a predictable[1] and meaningful effect?
$ hg --version
Mercurial Distributed SCM (version 3.6.1)
[1] I don't consider throwing random reverse calls in a revset predictable, but if that is the "best" way...
You have a couple of options.
First, you can use reverse() in conjunction with your existing revset, e.g.:
hg log -r 'reverse(user("me"))' -l 5
As a shorthand, you can also use -f or --follow, which – when used in conjunction with -r – will wrap the revision in reverse(...). Example:
hg log -f -r 'user("me")' -l 5
Or you can encode the limit in the revset itself, e.g.:
hg log -r 'last(user("me"), 5)'
Note that revset aliases can be useful to avoid having to type out revsets over and over. So, you can put something like this in your .hgrc:
[revsetalias]
lastby($1) = last(user($1), 5)
And then do:
hg log -r 'lastby("me")'
Important addendum: do not use reverse() blindly for this task. While it will work in many cases, the better and more reliable generic solution is to use sort(), as in:
hg log -r 'sort(user("me"), "-date")' -l 5
This is because reverse() only reverses whatever order the underlying revset happens to produce; it does not guarantee that the set was ordered by date in the first place, so the final output may still not be the newest entries.
The sort() call above guarantees the behaviour: it sorts by date, descending, and then hg log's limit option selects the top five.
(Otherwise, see Reimer's answer.)
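If you prefer the sort-based form, it also fits neatly into a revset alias in your .hgrc (the alias name here is arbitrary):
[revsetalias]
newestby($1) = sort(user($1), "-date")
and then:
hg log -r 'newestby("me")' -l 5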
I have been merging the source-code files that various developers/CAD drafters have used for the past 15 or so years. It appears that everyone worked off the same code base until about 7 years ago, when everyone seems to have made a local copy of all the files and used/edited them locally.
I have successfully/painfully merged all of their files with the same names back together. However, I am finding that sometimes, files with different names contain functions with the same names and parameters. Tools that are expecting one implementation of a function may end up calling a different one depending on which files were loaded when.
Is there a simple way to search all of the files for repeated function names?
For example, a function looks like this:
(defun MyInStr (SearchIn SearchFor)
...
)
How could I search all files for (defun MyInStr (SearchIn SearchFor)
I would suggest using ctags to generate the TAGS file, then searching it for duplicate lines:
$ ctags -R
$ sort TAGS | uniq -c | grep -v '^ *1 '
The above will produce output like this:
...
3 defun MyInStr (SearchIn SearchFor)
...
which will tell you that MyInStr is re-defined 3 times in the codebase with the identical signature.
You can also extract just the function name using sed, or do more complicated processing of the TAGS file with perl, lisp, python, or any other scripting tool.
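If you would rather not go through a TAGS file at all, here is a rough grep-based sketch along the same lines (the *.lsp extension and the name pattern are assumptions about the codebase):
# list defun headers that occur more than once across all .lsp files
grep -rhoE '\(defun [A-Za-z0-9_:-]+ \([^)]*\)' --include='*.lsp' . | sort | uniq -cd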
Having a regular, size-efficient backup of only the modified checked-out elements in all views would be a great thing for us, since a great deal of the defined dynamic/snapshot views cannot be included in the daily backup policy.
The following ksh code is close to what we would need for a dynamic view, but it trivially assumes that the first line in the config spec for the view always selects the checked-out element first (element * CHECKEDOUT). It will not work well in general.
For each versioned file in the view we would like to be able to add it to the backup list only if it is different from the last corresponding versioned element in the VOB that is selected for that view. (Only if it has been developed in the view).
[The solution would have to be valid for snapshot views also]
for CHECKEDOUT_FILE_IN_THE_VIEW in $( /usr/atria/bin/cleartool lsco -cview -avobs -short )
do
VERSIONED_FILE_NAME=$( /usr/atria/bin/cleartool describe -short ${CHECKEDOUT_FILE_IN_THE_VIEW} \
| sed -e's/CHECKEDOUT/LATEST/' )
if [ -f ${VERSIONED_FILE_NAME} ]; then
if [ -f ${CHECKEDOUT_FILE_IN_THE_VIEW} ]; then
diff -b ${CHECKEDOUT_FILE_IN_THE_VIEW} ${VERSIONED_FILE_NAME} > /dev/null
if [ $? -ne 0 ]; then
##-- The checked-out file in the view is different from the corresponding
##-- versioned element in the VOB. So it has to be added to the backup list.
echo "${VERSIONED_FILE_NAME}" >> ${F_LOG}
fi
fi
fi
done
Any ideas? TIA.
Javier C.
Frankly, for dynamic views, a simpler backup strategy would be to just zip up and back up the view storage associated with said dynamic view (after a 'cleartool endview -server aDynViewTag'; see the sketch after the two notes below):
all the checked-out and private files are stored in the view storage (only for dynamic views)
but it won't take into account checked-out files with (yet) no modifications compared to their versioned counterparts.
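A minimal sketch of that approach, assuming a Unix host; the view tag and the view-storage path are placeholders:
# stop the view's server process, then archive its storage directory (example path)
cleartool endview -server my_dyn_view_tag
tar czf my_dyn_view_stg.tar.gz /var/adm/viewstore/my_dyn_view.vws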
If you need a generic solution both for dynamic and snapshot views, then you can refer to:
'How to find all checkedout files with ClearCase cleartool?' (the 'cleartool lsco' you are already using), but you don't need to compute the LATEST version to make a system-based diff.
You can simply:
cleartool diff -pred ${CHECKEDOUT_FILE_IN_THE_VIEW}
If any modification exists between the checked-out version and its previous version, it will return something (for versions in snapshot or dynamic views).
See cleartool diff.
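Adapting your loop accordingly gives something like this minimal sketch (F_LOG is your existing log-file variable; the exit status of cleartool diff is assumed to follow the usual diff convention of being non-zero when the versions differ):
for CHECKEDOUT_FILE_IN_THE_VIEW in $( /usr/atria/bin/cleartool lsco -cview -avobs -short )
do
    ##-- Compare the checked-out version with its predecessor; no LATEST lookup needed
    ##-- (a non-zero exit status is assumed to mean the versions differ).
    /usr/atria/bin/cleartool diff -pred ${CHECKEDOUT_FILE_IN_THE_VIEW} > /dev/null 2>&1
    if [ $? -ne 0 ]; then
        ##-- The element was really modified in the view, so add it to the backup list.
        echo "${CHECKEDOUT_FILE_IN_THE_VIEW}" >> ${F_LOG}
    fi
done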