How do you prevent file confusion if version-control keywords are forbidden?

At least two brilliant programmers, Linus Torvalds and Guido van Rossum, disparage the practice of putting keywords into a file that expand to show the version number, last author, etc.
I know how keyword differences clutter up diffs. One of the reasons I like SlickEdit's DiffZilla is because it can be set to skip leading comments.
However, I have vivid memories of team programming where we had four versions of a file (two different releases, a customer one-off, and the development version) all open for patching at the same time, and it was quite helpful to verify at a glance that each time we navigated to an included header we got the proper one, and that each time we pasted code the source and destination were what we expected.
There is also the where-did-this-file-come-from problem that arises when a hasty developer copies a file from one place to another using the file system, rather than checking it out of the repository using the tool; or, more defensibly, when files under control in locations A, B, and C need to be marshalled (with cherry-picking) into a distribution location D.
In places where VCS keywords are banned, how do you cope?

I've never used VCS keywords in my entire career, over 30 years. From the most primitive VCS system I've used, up to the present (TFS), I've used some other structure to understand "where I am".
I am rarely in a situation where I've only got one file to work with. I've usually got all the other files necessary to build the project or set of projects. I usually use branching (or streams on one occasion), and I'm working on some slice of the given branch or stream.
If I'm working on multiple branches or streams, I'll have one directory tree for each. All I need to do to know what file I'm working on is check the file path, at the very worst.
At the very best, the version control system will tell you exactly which version of the file you're working on, what the change history is, who else is working on different versions of the file, and anything else you'd care to know.

This doesn't exactly answer your question, but I imagine Linus and Guido have reasons for disliking keywords that don't apply to small-team corporate development.
An $Id$ tag, for instance, carries what you could consider a global version number. Linux development, and I guess Python development too, is fragmented enough that no number can be global. Lots of people have their own repositories all over the place that would fill in their own $Id$ values, and then those patches might be sent to Linus's or Guido's repositories, where they don't make any sense.
However, in your environment, you probably have one central repository which would assign these, and it would be fine. It sounds like you're using Git. I wonder if it's possible to configure the central Git repository to do tag substitution while the local developer repositories don't. Or perhaps it's better to put the commit hash in the tag.
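If the repository really is Git, one built-in mechanism worth knowing about is the ident attribute. This is only a sketch, not something the answer above spelled out: with ident enabled, Git expands $Id$ to the file's blob SHA-1 on checkout and turns it back into a bare $Id$ on commit, so the expansion never ends up in history or in merges. The file patterns below are illustrative.

    # .gitattributes (patterns are illustrative)
    *.c ident
    *.h ident

    /* committed literally in the source file: */
    /* $Id$ */

    /* what a checkout shows (the blob's SHA-1, not a commit id): */
    /* $Id: 2c97a7ee7e4c1b1c2e9d... $ */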

Related

How can I have 2 versions of Gensim for summarization in one Jupyter notebook?

I want to have 2 versions of Gensim so I can use the summarization and keyword functions from the old Gensim.
How can I set up this scenario?
In general, a single Jupyter notebook is backed by a single Python interpreter/environment, and popular packages at their 'official' installation paths can only be installed once.
There are a few hackish workarounds suggested in answers like:
Installing multiple versions of a package with pip
However, each workaround presents operational problems.
One approach is to install the older package to a non-standard path (directory) that's still found by Python's import logic (controlled by PYTHONPATH). For example, put/move the older copy of Gensim into a gensim_old package directory. But: this is only likely to work well with very simple (single-.py-file) packages.
With any significant library (like Gensim), which cross-imports a lot of things from its own utility modules using the standard paths, lots of things are likely to break unless you dig into all the individual files involved to change their import paths. That's kind of kludgey and hard to maintain. (Though, to the extent you're just using one old version, say gensim-3.8.3 for the removed summarization feature, perhaps it'd be worth fighting through this process once, then keeping the changes around.)
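Purely as an illustration of that first (fragile) approach, and assuming pip is available, the mechanics might start out roughly like this; the directory names are arbitrary:

    # install the old release into a side directory instead of site-packages
    pip install "gensim==3.8.3" --no-deps --target ./_old_tmp

    # rename the package so it can coexist with the current gensim on sys.path
    mv ./_old_tmp/gensim ./gensim_old

    # at this point gensim_old's internal "from gensim.x import y" lines still
    # resolve to the NEW gensim; fixing those is the per-file editing work
    # described above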
Another approach is to create a totally-separate Python environment with the alternate version, and only use that other environment from the notebook by a system-call – via either something in Python-code like subprocess.call(), or the notebook-cell ! or !! magic-escapes to run a shell command. That is, you give up the ability to run individual interactive lines of Python in that alt environment - but could still send it batches of data, and either capture the console output or observe its output files to continue processing in your notebook.
I'd expect this to be a better option – cleaner & more-maintainable – provided that either the old-version-functionality (summarization) or new-version-functionality (whatever else) can be condensed into one (or a few) single-step scripts.
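A minimal sketch of that separate-environment pattern, assuming you have created a second environment (the path below is a placeholder) with gensim 3.8.3 installed, plus a small summarize.py in it that reads text on stdin and prints a summary:

    import subprocess

    with open("article.txt", encoding="utf-8") as f:
        text = f.read()

    # run the old-Gensim step inside the other environment's interpreter
    result = subprocess.run(
        ["/path/to/old-gensim-env/bin/python", "summarize.py"],
        input=text,
        capture_output=True,
        text=True,
        check=True,
    )

    summary = result.stdout  # continue in the notebook's own (new-Gensim) environment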
Another option would be to try to completely copy the gensim.summarization source code files to some new location inside your own project – performing whatever (few, minor) edits are necessary to ensure it works from the alternate location.
One of the reasons that functionality was removed was that its approach to things like tokenization was not consistent/integrated with other Gensim practices – which actually means it's likely to be a little easier to keep it working (given its use of its own idiosyncratic approaches) separately.
Personally, I'd rank these three options by desirability as:
(best) Section off the summarization tasks to be run via subprocess executions in a separate Python environment, which has only the older package installed.
(maybe ok) Copy the 10 .py files that implement gensim.summarization into your own local module. Edit lightly as necessary to ensure they still work; a sketch of the typical edit follows this list. (That should mainly be updating import lines, but might require a few other adaptations to other Python 3.x/Gensim 4.x changes.)
(probably too messy) Install the whole old package to a non-standard directory and edit lots of files to ensure anything you're using still works.
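To make option 2 a little more concrete (the module names here are purely illustrative, and the exact imports inside gensim 3.8.3 may differ), the per-file edits are mostly of this shape:

    # before, as shipped inside gensim 3.8.3's summarization code (illustrative):
    # from gensim.summarization.bm25 import BM25

    # after copying the files into your own package, say my_summarization/:
    from my_summarization.bm25 import BM25

    # imports of general helpers (e.g. from gensim.utils) can often stay pointed
    # at the installed new gensim, unless that helper changed or was removed in 4.x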
Finally, note that the main reason the feature was removed is that it did not offer very impressive or adaptable results. While I've seen some people say it's worked OK for their applications, I've never seen even so much as a demo where its practices/algorithm – which can only extract some subset of important sentences, never paraphrase – gave impressive results.
So unless you already know that its approach works well for your needs, don't get your hopes up! Good luck.

All Inclusive file version merge

Typical merge and source control tools mistake added code for changed code, especially if you merge several branches/versions. Is there a solution for merging several versions/branches of code without deleting a single symbol, so that the result of the merge is an interleaving of both files?
A use case: one branch adds a field and another branch adds a similar field to the website/API. The changes are practically isomorphic, so they might confuse a source code management tool.
Adding a non-trivial field is typically a tedious, error-prone task due to the MVC pattern and multi-layering: it involves adding some files and modifying other files, lines, and symbols, yet deleting or swapping pieces of code is rarely needed. In fact, I'm thinking of automating the task by keeping a generic patch/branch and just changing names and adding custom logic each time. I am aware that one can write code in such a way that adding a typical field is a push-button task, or a few-line change, yet not everybody likes such code.
Usually I use PyCharm (the Python flavour of IntelliJ IDEA) or SmartGit for merging, but I'm open to any tools or solutions.
Beyond Compare can't automatically detect it, but in the Text Merge, you can manually take text from both sides using the right click Take Left Then Right or Take Right Then Left commands.

Tool to compare/diff HTML in bulk

I have a lot of HTML files (tens of thousands, GBs worth) scraped from a server, and I want to check that the server produces the same results after some modifications, while ignoring the kinds of differences that don't matter, e.g. whitespace, missing newlines, timestamps, small changes in certain kinds of numbers, etc.
Does anyone know of a tool for doing this? I'd really rather not do more filtering than I have to.
(Oh and it needs to run under linux)
You might consider using a clone detector such as our CloneDR. This tool parses large sets of computer program files (HTML is a special case), builds abstract syntax trees representing the essential structure of each file, and compares programs for similarity.
Because it is comparing essential program structure, it ignores inessential differences such as comments and whitespace, and determines that two code segments are either identical or that one can be obtained from the other by substituting other blocks of code. The latter allows the recognition of code that has been modified in various ways. You can see samples of clone detection runs on a variety of computer languages at the web site.
In your case, what you would be looking for are files in system A which are essentially clones (exact or near misses) of files in system B. As a general rule, if file A is a variant of file B (e.g., with a few changes), CloneDR will report it as a clone and show the exact differences.
At the scale of 20,000 files, I can see why you want a tool, and I can see why you want near-miss matches rather than exact matches.
It doesn't run under Linux, but I assume your problem is hard enough to solve that the operating system isn't what you are optimizing for.
I use WinMerge a lot on Windows, and from what I can see some people enjoy Meld on Linux, so perhaps that could do the trick for you:
http://meld.sourceforge.net/
Other examples I saw from a quick googling were Kompare, xxdiff.sourceforge.net, and kdiff3.sourceforge.net.
(I could only post one link, so I wrote the addresses of xxdiff and kdiff3 as text.)
Beyond Compare is purchased software that is actually worth the money (I never thought I'd hear myself typing that!). It is GUI based but handles thousands of files very well. It will allow you to specify unimportant changes with regular expressions as well as whitespace (beginning, middle and end of line). The feature set is very extensive, check out a trial download.
I do not work for this company, I just use Beyond Compare every day at work and enjoy it every time!

Code formatting and source control diffs

What source control products have a "diff" facility that ignores white space, braces, etc., in calculating the difference between checked-in versions? I seem to remember that Clearcase's diff did this but Visual SourceSafe (or at least the version I used) did not.
The reason I ask is probably pretty typical. Four perfectly reasonable developers on a team have four entirely different ways of formatting their code. Upon checking out the code last changed by someone else, each will immediately run some kind of program or editor macro to format things the way they like. They make actual code changes. They check-in their changes. They go on vacation. Two days later that program, which had been running fine for two years, blows up. The developer assigned to the bug does a diff between versions and finds 204 differences, only 3 of which are of any significance, because the diff algorithm is lame.
Yes, you can have coding standards. Most everyone finds them dreadful. A solution where everyone can have their cake and eat it too seems far more preferable.
=========
EDIT: Thanks to everyone for some great suggestions.
What I take away from this is:
(1) A source control system with plug-in type diffs is preferable.
(2) Find a diff with suitable options.
(3) Use a good source formatting program and settle on a check-in standard.
Sounds like a plan. Thanks again.
Git does have these options:
--ignore-space-at-eol
    Ignore changes in whitespace at EOL.
-b, --ignore-space-change
    Ignore changes in the amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent.
-w, --ignore-all-space
    Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none.
I am not sure if brace changes can be ignored using Git's diff.
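For reference, those flags go straight on the command line, and the same diff options are accepted by git show and git log -p:

    git diff -w                      # ignore all whitespace
    git diff -b                      # ignore changes in the amount of whitespace
    git diff --ignore-space-at-eol   # ignore only trailing-whitespace changes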
If it is C/C++ code, you can define Astyle rules and then convert the source code's brace style to the one that you want, using Astyle. A git diff will then produce sane output.
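For example (assuming Artistic Style is installed; the brace style and file patterns are just placeholders for whatever the team agrees on):

    # rewrite the files to one agreed brace style before diffing;
    # --suffix=none stops astyle from leaving .orig backup copies behind
    astyle --style=allman --suffix=none src/*.c src/*.h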
Choose one (dreadful) coding standard, write it down in some official coding standards document, and get on with your life; messing with whitespace is not productive work.
And remember you are a professional developer: it's your job to get the project done, and changing anything in the code because of a personal style preference hurts the project. It won't only make diffing more difficult, it can also introduce hard-to-find problems if your source formatter or compiler has bugs (and your fancy diff tool won't save you when two co-workers start fighting over casing).
And if someone just doesn't agree to work with the selected style, remind them that they are programming as a profession, not as a hobby; see http://www.ericsink.com/entries/No_Great_Hackers.html
Maybe you should choose one format and run some indentation tool before checking in so that each person can check out, reformat to his/her own preferences, do the changes, reformat back to the official standard and then check in?
A couple of extra steps, but they already use indentation tools when working. Maybe it could be a triggered check-in script? (A sketch of such a hook follows this answer.)
Edit: this would perhaps also solve the brace problem.
(I haven't tried this solution myself, hence the "perhaps" and "maybes", but I have been in projects with the same problems, and it is a pain to try to go through diffs with hundreds of irrelevant changes that are not limited to whitespace, but include the formatting itself.)
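If the team is on Git, the triggered-script idea above could be sketched as a client-side pre-commit hook. This is only a sketch: the formatter, its options, and the file pattern are placeholders, and other VCSs offer server-side triggers for the same purpose.

    #!/bin/sh
    # .git/hooks/pre-commit: normalize formatting before the commit is recorded
    files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(c|h|cpp)$')
    [ -z "$files" ] && exit 0

    astyle --style=allman --suffix=none $files   # reformat to the agreed standard
    git add $files                               # re-stage the normalized files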
As explained in Is it possible for git-merge to ignore line-ending differences?, it is more a matter of associating the right diff tool with your favorite VCS than of relying on the right VCS option (even though Git does have some options regarding whitespace, like the ones mentioned in Alan's answer, they will never be as complete as one would like).
DiffMerge is the most complete on those "ignore" options, as it can ignore not only spaces but also other "variations", based on the programming language used in a given file.
Subversion apparently supports this, either natively in the latest versions, or by using an alternate diff like GNU Diff.
Beyond Compare does this (and much, much more), and you can integrate it into either Subversion or SourceSafe as an external diff tool.
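In Subversion's case the relevant knobs look like this (these are standard svn options; how much the built-in diff ignores natively depends on the release, as noted above):

    # Subversion's built-in diff can pass whitespace options through:
    svn diff -x -w

    # or hand the comparison to GNU diff (or a wrapper around a GUI tool)
    # and forward the "ignore whitespace" switches to it:
    svn diff --diff-cmd diff -x "-u -w"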

How would you explain the risks of the $Log$ keyword?

I seem to get into an annual debate about the use of the $Log$ keyword. My point of view is this:
$Log$ is white hot death.
All it does is jam marginally relevant spam into your source files. Any information that anyone thinks they might be able to get from a $Log$ is more readily available from (and is likely to be more accurate in) your version control system.
So, here's the question: how would you explain to an "old school" coder (who thinks that $Log$ is the way to manage source code changes) that we have better tools now?
The CVSNT remarks on $Log$ are a good start, but they're just not pointed enough. To date, the closest I've come to a one-liner is: "$Log$ is a wish. You're hoping that what gets spammed into your file has any relation to what really happened to this file."
PS for clarity: when I say "old school," I mean old in attitude, not old in years. My first programming paycheck (and a remarkably modest one it was, too) was sometime in 1986 and I never thought $Log$ was a good idea.
I think the Subversion FAQ also has a good explanation.
$Log$ is a total horror the moment you start merging changes between branches. You're practically guaranteed to get conflicts there, which -- because of the nature of this keyword -- simply cannot be resolved automatically.
In addition to what the others have said, try putting a comment (/* ... */) into a commit message :->.
The amount of useful bits in a source file slowly decreases as changes are made to it with that $Log$ statement in it. We had it in some files that came from CVS, and the number of lines of $Log$ entries was on the order of 10x longer than the executable code actually in the file. And it had a few groups of duplicates caused by bad merging from some branches.
You may consider (emphasis on may) embedding immutable meta-data in your file.
(See the debate between me and an "older schooler" : Embedded Version Numbers - Good or Evil?).
Even though I have always considered that practice as evil (mixing meta-data information into data), introducing "merge hell", one could argue that it could work, with the right merge manager, for immutable meta-data with a fixed format, like:
$Revision$   ->   $Revision: 9.13 $
$Date$       ->   $Date: 2009/03/06 06:52:26 $
$RCSfile$    ->   $RCSfile: stderr.c,v $
But mutable meta-data like logs? With unknown format or content? That is bound to fail.
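For completeness, here is a hedged sketch of what fixed-format, immutable metadata can look like in Git without any per-commit churn: the export-subst attribute expands placeholders only when git archive builds a release tarball, so nothing in the working tree or in merges ever changes (the file name below is arbitrary):

    # .gitattributes
    VERSION export-subst

    # VERSION, committed literally; expanded only in "git archive" output:
    commit: $Format:%H$
    date:   $Format:%ci$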