Get last commit for every file of a file list in Mercurial - find

I have an hg repository and I would like to know the last commit date of every file in sources/php/dracca/endpoint/wiki/**/Wiki*.php
So far, I have this one liner:
find sources/php/dracca/endpoint/wiki/ -name "Wiki*.php" -exec hg log --limit 1 --template "{date|shortdate}" {} \; -exec echo {} \;
But this seems utterly slow, as (I suppose) find makes one hg call per file, leading to about 15 seconds of computation for the ~40 files I have in there...
Is there a faster way?
The output of this command looks like:
2019-09-20 sources/php/dracca/endpoint/wiki/characters/colmarr/WikiCharactersColmarrEndpoint.php
2019-09-20 sources/php/dracca/endpoint/wiki/characters/dracquints/allgroup/WikiCharactersDracquintsAllgroupEndpoint.php
...
The format might be changed a bit if needed (I wouldn't mind having, say, one date and then the list of files changed on that date, or something along those lines)

Even with find+exec you can get a shorter chain (dropping the last exec) with a modified template such as {date|shortdate}\n
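A minimal sketch of that shortened chain (my own variation, still one hg call per file; it relies on GNU find, which substitutes {} even inside a larger argument, so the file name can be baked into the template):
find sources/php/dracca/endpoint/wiki/ -name "Wiki*.php" \
  -exec hg log --limit 1 --template "{date|shortdate} {}\n" {} \;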
You can use the (accepted) Perl-based answer from this question, or ask somebody to update the mentioned lof extension for current Mercurial (the code from 2012 will not work now)
Alternatives (dirty ugly hacks)
In any case, you can (and should) call hg only once and perform some post-processing of the results.
Before trying these tricks, read hg help filesets and work out one common fileset for your files (I suppose it can be just set:sources/php/dracca/endpoint/wiki/**/Wiki*.php, but test it!)
After that, you can:
Perform hg log like this
hg log setup.* --template "{files % '{file} {rev} {date|shortdate}\n'}"
(I used a simple pattern for testing; you will have to use your own fileset)
and get output in this form:
setup.py 1163 2018-11-07
README.md 1162 2018-11-07
setup.py 1162 2018-11-07
hggit/git_handler.py 1124 2018-05-01
setup.py 1124 2018-05-01
setup.cfg 1118 2017-11-27
setup.py 1117 2017-11-27
hggit/git2hg.py 1111 2017-11-27
hggit/overlay.py 1111 2017-11-27
setup.py 1111 2017-11-27
…
(there are some unwanted, unexpected files, because the output lists every file of each revision that touches a file of interest, without any filtering). You have to grep only the needed files, sort by columns 1 and 2, and take the date of the latest revision of each file.
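A rough post-processing sketch, assuming the fileset from the question (the grep filter and the awk de-duplication are my additions, not part of the original answer):
hg log "set:sources/php/dracca/endpoint/wiki/**/Wiki*.php" \
    --template "{files % '{file} {rev} {date|shortdate}\n'}" |
  grep 'Wiki[^ ]*\.php ' |           # keep only the files of interest
  sort -k1,1 -k2,2nr |               # sort by file name, then by revision, descending
  awk '!seen[$1]++ {print $3, $1}'   # first line per file = date of its latest revision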
Use hg grep. For the above test pattern:
hg grep "." -I setup.* --files-with-matches -d -q
(match any content change, output only filename + revision, plus the short date)
you'll get something like
setup.py:1163:2018-11-07
setup.cfg:1118:2017-11-27
and the third column will be the last modification date you need for each file

Why is Mercurial matching a nonexistent local revision number?

Quick intro: In Mercurial there are two different ways to numerically refer to a changeset.
First, there's the node ID hash. It is global and functions like a git commit hash. It consists of 40 hexadecimal digits.
Second, there's the local revision number. It is a decimal number that starts at 0 and counts up. Unlike the node hash, this is local, meaning the same changeset can have different local revision numbers in two different repos. This depends on what other changesets are present in each repo and depends even on the order each repo received their changesets.
A revision can be specified to Mercurial as a local revision number, a full 40-digit hash, or a "short-form identifier". The latter is a unique prefix of a hash; that is, if only one full hash starts with the given string, then the string matches that changeset.
I found that in certain cases, Mercurial commands (such as hg log with an -r switch), given plain decimal numbers, will match some revision even though there aren't enough local revisions for the given number to match as a local revision number.
Here's an example I constructed after coming across such a case by chance:
test$ hg --version
Mercurial Distributed SCM (version 6.1)
(see https://mercurial-scm.org for more information)
Copyright (C) 2005-2022 Olivia Mackall and others
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
test$ hg init
test$ touch a
test$ hg add a
test$ hg ci -d "1970-01-01 00:00:00 +0000" -u testuser -m a
test$ touch b
test$ hg add b
test$ hg ci -d "1970-01-01 00:00:00 +0000" -u testuser -m b
test$ hg log
changeset: 1:952880b76ae5
tag: tip
user: testuser
date: Thu Jan 01 00:00:00 1970 +0000
summary: b
changeset: 0:d61f66df66f9
user: testuser
date: Thu Jan 01 00:00:00 1970 +0000
summary: a
test$ hg log -r 2
abort: unknown revision '2'
test$ hg log -r 9
changeset: 1:952880b76ae5
tag: tip
user: testuser
date: Thu Jan 01 00:00:00 1970 +0000
summary: b
test$
As is evident, hg log -r 9 matches a changeset even though there aren't that many changesets to match the 9 as a local revision number.
The question: Why is this? Additionally, how can we avoid matching a nonexistent local revision number?
This is due to how Mercurial parses revision specifiers. Here's how Olivia Mackall explains it in a mail from 2014:
Here is a hexadecimal identifier:
60912eb2667de45415eff601bfc045ae0fe8db42
See how it starts with 6? If you ask for revision 6, Mercurial will:
a) look for revision 6
b) if that fails, look for a hex identifier starting with "6"
c) if we find more than one match, complain
d) if we find no matches, complain
e) we found one match: success!
That is, if hg log -r 9 doesn't match any local revision number (because there are fewer than ten changesets in the repo), Mercurial will next try to match a node hash that happens to start with 9.
To avoid this ambiguity, she responded that one should use hg log -r 'rev(9)' to match only local revision numbers, and hg log -r 'id(9)' to match only prefixes or full hashes.
In the documentation on revsets, these predicates are listed as:
"id(string)"
Revision non-ambiguously specified by the given hex string prefix.
And:
"rev(number)"
Revision with the given numeric identifier.
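Applied to the toy repository above, the difference looks roughly like this (commands only; I have not reproduced the output):
hg log -r 'rev(9)'   # only local revision numbers: should select nothing in this two-changeset repo
hg log -r 'id(9)'    # only hash prefixes: selects 1:952880b76ae5, whose hash starts with "9"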
Unfortunately, neither this page nor the help page on revisions (as of version 6.1) explicitly points out the ambiguity between numbers that can match either as local revision numbers or as node-hash prefixes. The 2014 mailing-list thread I quoted does contain suggestions to clarify this, but it appears nothing came of it.
Additionally, here is a changeset message in which I explained the entire affair and how it came to affect the operation of a script of mine:
fix to use 'rev(x)' instead of just x to refer to local rev number
The revsets syntax to unambiguously refer to a local revision
number is to wrap the number in rev(). Without this, a number
that doesn't exist (eg -r 2) may be misinterpreted to refer to
a changeset that has a node hash starting with the requested
number.
In our case this bug happened to act up after the revision at
2022-04-02 16:10:42 Z "changed encoding from cp850 to utf8"
which the day after it was added was converted as the second
(local revision number 1) changeset from the svn repo. The
particular hg convert command was:
hg convert svn-mirror DEST \
--config hooks.pretxncommit.checkcommitmessage=true \
--config convert.svn.startrev=1152
This created a changeset known as 1:2ec9f101bc31 from that svn
revision. Lacking a local revision number 2, the -r 2 picked up
this changeset because its hash started with the digit "2".
Thus the NEWNODE variable received the changeset hash for this
changeset. Because our hg rebase command is configured to keep
empty changesets, the changeset got added atop its already
existing copy in the destination repo.
Ever since the akt.sh script would pick up the wrong revision
number from the destination repo and abort its run with the
message indicating "Revisions differ!".

How do you find the changesets between two tags in mercurial?

If I have two tags named 2.0 and 2.1, how do I find the changeset messages between the two? I'm trying to find a way to use hg to make release notes and list the different messages associated with the commits.
Example Changeset:
changeset: 263:5a4b3c2d1e
user: User Name <user.name#gmail.com>
date: Tue Nov 27 14:22:54 2018 -0500
summary: Added tag 2.0.1 for changeset 9876fghij
Desired Output:
Added tag 2.1 for changeset 67890pqrst
Change Info...
Added tag 2.0.1 for changeset 9876fghij
Change Info...
Added tag 2.0 for changeset klmno12345
Preface
"Any challenge has a simple, easy-to-understand wrong decision". And Boris's answer is a nicest illustration for this rule: "::" topo-range will produce good results only in case of pure single-branch development (which is, in common, The Bad Idea (tm) anyway)
Face
A good solution must correctly handle complex DAGs and answer the question "Which new changesets are included in NEW and missing from OLD (regardless of how they got there)?"
For me, that is the only() revset function with both parameters:
"only(set, [set])"
Changesets that are ancestors of the first set that are not ancestors
of any other head in the repo. If a second set is specified, the
result is ancestors of the first set that are not ancestors of the
second set (i.e. ::set1 - ::set2).
hg log -r "only(2.1,2.0)"
and maybe, for better presentation, powered by the predefined style "changelog":
hg log -r "only(2.1,2.0)" --style changelog
or a custom style/template.
You'll want to use a revset to select all changesets between two tags, for example: 2.0::2.1 will likely do the trick. You can validate the selected changesets by running: hg log -G -r '2.0::2.1'. (See hg help revset for more information about revsets).
Once you have the right changesets selected, you can apply a template to retrieve only the needed information. For example, hg log -r '2.0::2.1' -T '{desc}\n' prints the whole description of each changeset, and hg log -r '2.0::2.1' -T '{desc|firstline}\n' only the first line of each description.
If you want to add even more information, hg help template is your friend.
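As a rough release-notes sketch combining both answers (the tag names and the template are just an illustration):
hg log -r "only(2.1, 2.0)" -T "* {desc|firstline} ({node|short})\n"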

Can you get the number of lines of code from a GitHub repository?

In a GitHub repository you can see “language statistics”, which displays the percentage of the project that’s written in a language. It doesn’t, however, display how many lines of code the project consists of. Often, I want to quickly get an impression of the scale and complexity of a project, and the count of lines of code can give a good first impression. 500 lines of code implies a relatively simple project, 100,000 lines of code implies a very large/complicated project.
So, is it possible to get the lines of code written in the various languages from a GitHub repository, preferably without cloning it?
The question “Count number of lines in a git repository” asks how to count the lines of code in a local Git repository, but:
You have to clone the project, which could be massive. Cloning a project like Wine, for example, takes ages.
You would count lines in files that wouldn’t necessarily be code, like i13n files.
If you count just (for example) Ruby files, you’d potentially miss massive amount of code in other languages, like JavaScript. You’d have to know beforehand which languages the project uses. You’d also have to repeat the count for every language the project uses.
All in all, this is potentially far too time-intensive for “quickly checking the scale of a project”.
You can run something like
git ls-files | xargs wc -l
which will give you the total count.
You can also add more instructions. Like just looking at the JavaScript files.
git ls-files | grep '\.js' | xargs wc -l
Or use this handy little tool: https://line-count.herokuapp.com/
A shell script, cloc-git
You can use this shell script to count the number of lines in a remote Git repository with one command:
#!/usr/bin/env bash
git clone --depth 1 "$1" temp-linecount-repo &&
printf "('temp-linecount-repo' will be deleted automatically)\n\n\n" &&
cloc temp-linecount-repo &&
rm -rf temp-linecount-repo
Installation
This script requires CLOC (“Count Lines of Code”) to be installed. cloc can probably be installed with your package manager – for example, brew install cloc with Homebrew. There is also a docker image published under mribeiro/cloc.
You can install the script by saving its code to a file cloc-git, running chmod +x cloc-git, and then moving the file to a folder in your $PATH such as /usr/local/bin.
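For example (assuming the script is saved as cloc-git in the current directory):
chmod +x cloc-git
sudo mv cloc-git /usr/local/bin/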
Usage
The script takes one argument, which is any URL that git clone will accept. Examples are https://github.com/evalEmpire/perl5i.git (HTTPS) or git@github.com:evalEmpire/perl5i.git (SSH). You can get this URL from any GitHub project page by clicking "Clone or download".
Example output:
$ cloc-git https://github.com/evalEmpire/perl5i.git
Cloning into 'temp-linecount-repo'...
remote: Counting objects: 200, done.
remote: Compressing objects: 100% (182/182), done.
remote: Total 200 (delta 13), reused 158 (delta 9), pack-reused 0
Receiving objects: 100% (200/200), 296.52 KiB | 110.00 KiB/s, done.
Resolving deltas: 100% (13/13), done.
Checking connectivity... done.
('temp-linecount-repo' will be deleted automatically)
171 text files.
166 unique files.
17 files ignored.
http://cloc.sourceforge.net v 1.62 T=1.13 s (134.1 files/s, 9764.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Perl                           149           2795           1425           6382
JSON                             1              0              0            270
YAML                             2              0              0            198
-------------------------------------------------------------------------------
SUM:                           152           2795           1425           6850
-------------------------------------------------------------------------------
Alternatives
Run the commands manually
If you don’t want to bother saving and installing the shell script, you can run the commands manually. An example:
$ git clone --depth 1 https://github.com/evalEmpire/perl5i.git
$ cloc perl5i
$ rm -rf perl5i
Linguist
If you want the results to match GitHub’s language percentages exactly, you can try installing Linguist instead of CLOC. According to its README, you need to gem install linguist and then run linguist. I couldn’t get it to work (issue #2223).
I created an extension for the Google Chrome browser, GLOC, which works for public and private repos.
It counts the number of lines of code of a project from:
project detail page
user's repositories
organization page
search results page
trending page
explore page
If you go to the graphs/contributors page, you can see a list of all the contributors to the repo and how many lines they've added and removed.
Unless I'm missing something, subtracting the aggregate number of lines deleted from the aggregate number of lines added among all contributors should yield the total number of lines of code in the repo. (EDIT: it turns out I was missing something after all. Take a look at orbitbot's comment for details.)
UPDATE:
This data is also available in GitHub's API. So I wrote a quick script to fetch the data and do the calculation:
'use strict';
async function countGithub(repo) {
const response = await fetch(`https://api.github.com/repos/${repo}/stats/contributors`)
const contributors = await response.json();
const lineCounts = contributors.map(contributor => (
contributor.weeks.reduce((lineCount, week) => lineCount + week.a - week.d, 0)
));
const lines = lineCounts.reduce((lineTotal, lineCount) => lineTotal + lineCount);
window.alert(lines);
}
countGithub('jquery/jquery'); // or count anything you like
Just paste it in a Chrome DevTools snippet, change the repo and click run.
Disclaimer (thanks to lovasoa):
Take the results of this method with a grain of salt, because for some repos (sorich87/bootstrap-tour) it results in negative values, which might indicate there's something wrong with the data returned from GitHub's API.
UPDATE:
Looks like this method to calculate total line numbers isn't entirely reliable. Take a look at orbitbot's comment for details.
You can clone just the latest commit using git clone --depth 1 <url> and then perform your own analysis using Linguist, the same software Github uses. That's the only way I know you're going to get lines of code.
Another option is to use the API to list the languages the project uses. It doesn't give them in lines but in bytes. For example...
$ curl https://api.github.com/repos/evalEmpire/perl5i/languages
{
"Perl": 274835
}
Though take that with a grain of salt, that project includes YAML and JSON which the web site acknowledges but the API does not.
Finally, you can use code search to ask which files match a given language. This example asks which files in perl5i are Perl. https://api.github.com/search/code?q=language:perl+repo:evalEmpire/perl5i. It will not give you lines, and you have to ask for the file size separately using the returned url for each file.
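A rough sketch of that two-step process (my own illustration, not from the answer; it assumes jq is installed and a GITHUB_TOKEN is set, since the code-search endpoint generally requires authentication):
curl -s -H "Authorization: Bearer $GITHUB_TOKEN" \
  "https://api.github.com/search/code?q=language:perl+repo:evalEmpire/perl5i" |
  jq -r '.items[].url' |                                   # API URL of each matching file
  xargs -n1 curl -s -H "Authorization: Bearer $GITHUB_TOKEN" |
  jq '.size'                                               # size in bytes, not lines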
Not currently possible on GitHub.com or their APIs
I have talked to customer support and confirmed that this can not be done on github.com. They have passed the suggestion along to the Github team though, so hopefully it will be possible in the future. If so, I'll be sure to edit this answer.
Meanwhile, Rory O'Kane's answer is a brilliant alternative based on cloc and a shallow repo clone.
From @Tgr's comment, there is an online tool:
https://codetabs.com/count-loc/count-loc-online.html
You can use tokei:
cargo install tokei
git clone --depth 1 https://github.com/XAMPPRocky/tokei
tokei tokei/
Output:
===============================================================================
 Language             Files        Lines         Code     Comments       Blanks
===============================================================================
 BASH                     4           48           30           10            8
 JSON                     1         1430         1430            0            0
 Shell                    1           49           38            1           10
 TOML                     2           78           65            4            9
-------------------------------------------------------------------------------
 Markdown                 4         1410            0         1121          289
 |- JSON                  1           41           41            0            0
 |- Rust                  1           47           38            5            4
 |- Shell                 1           19           16            0            3
 (Total)                            1517           95         1126          296
-------------------------------------------------------------------------------
 Rust                    19         3750         3123          119          508
 |- Markdown             12          358            5          302           51
 (Total)                            4108         3128          421          559
===============================================================================
 Total                   31         6765         4686         1255          824
===============================================================================
Tokei has support for badges:
Count Lines
[![](https://tokei.rs/b1/github/XAMPPRocky/tokei)](https://github.com/XAMPPRocky/tokei)
By default the badge will show the repo's LoC (Lines of Code); you can also specify a different category by using the ?category= query string. It can be one of: code, blanks, files, lines, comments.
Count Files
[![](https://tokei.rs/b1/github/XAMPPRocky/tokei?category=files)](https://github.com/XAMPPRocky/tokei)
You can use GitHub API to get the sloc like the following function
function getSloc(repo, tries) {
  // repo is the repo's path
  if (!repo) {
    return Promise.reject(new Error("No repo provided"));
  }
  // GitHub's API may return an empty object the first time it is accessed
  // We can try several times then stop
  if (tries === 0) {
    return Promise.reject(new Error("Too many tries"));
  }
  let url = "https://api.github.com/repos" + repo + "/stats/code_frequency";
  return fetch(url)
    .then(x => x.json())
    .then(x => x.reduce((total, changes) => total + changes[1] + changes[2], 0))
    .catch(err => getSloc(repo, tries - 1));
}
Personally, I made a Chrome extension that shows the number of SLOC on both the GitHub project list and the project detail page. You can also set your personal access token to access private repositories and bypass the API rate limit.
You can download from here https://chrome.google.com/webstore/detail/github-sloc/fkjjjamhihnjmihibcmdnianbcbccpnn
Source code is available here https://github.com/martianyi/github-sloc
Hey all this is ridiculously easy...
Create a new branch from your first commit
When you want to find out your stats, create a new PR from main
The PR will show you the number of changed lines - as you're doing a PR from the first commit all your code will be counted as new lines
And the added benefit is that if you don't approve the PR and just leave it in place, the stats (No of commits, files changed and total lines of code) will simply keep up-to-date as you merge changes into main. :) Enjoy.
Firefox add-on Github SLOC
I wrote a small firefox addon that prints the number of lines of code on github project pages: Github SLOC
npm install sloc -g
git clone --depth 1 https://github.com/vuejs/vue/
sloc ".\vue\src" --format cli-table
rm -rf ".\vue\"
Instructions and Explanation
Install sloc from npm, a command line tool (Node.js needs to be installed).
npm install sloc -g
Clone shallow repository (faster download than full clone).
git clone --depth 1 https://github.com/facebook/react/
Run sloc and specify the path that should be analyzed.
sloc ".\react\src" --format cli-table
sloc supports formatting the output as a cli-table, as json or csv. Regular expressions can be used to exclude files and folders (Further information on npm).
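For instance, something like this should exclude test folders (the --exclude flag and regex here are from memory, not from the answer; check sloc --help):
sloc ".\react\src" --format cli-table --exclude "__tests__"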
Delete repository folder (optional)
Powershell: rm -r -force ".\react\" or on Mac/Unix: rm -rf ".\react\"
It is also possible to get details for every file with the --details option:
sloc ".\react\src" --format cli-table --details
Open terminal and run the following:
curl -L "https://api.codetabs.com/v1/loc?github=username/reponame"
If the question is "can you quickly get NUMBER OF LINES of a github repo", the answer is no as stated by the other answers.
However, if the question is "can you quickly check the SCALE of a project", I usually gauge a project by looking at its size. Of course the size will include deltas from all active commits, but it is a good metric as the order of magnitude is quite close.
E.g.
How big is the "docker" project?
In your browser, enter api.github.com/repos/ORG_NAME/PROJECT_NAME
e.g. api.github.com/repos/docker/docker
In the response hash, you can find the size attribute:
{
...
size: 161432,
...
}
This should give you an idea of the relative scale of the project. The number seems to be in KB, but when I checked it on my computer it's actually smaller, even though the order of magnitude is consistent. (161432KB = 161MB, du -s -h docker = 65MB)
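The same check works from a terminal (jq here is just a convenience for extracting the field):
curl -s https://api.github.com/repos/docker/docker | jq .size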
Pipe the output from the number of lines in each file to sort to organize files by line count.
git ls-files | xargs wc -l | sort -n
This is easy if you are using VS Code and you clone the project first. Just install the Lines of Code (LOC) VS Code extension and then run LineCount: Count Workspace Files from the Command Palette.
The extension shows summary statistics by file type and it also outputs result files with detailed information by each folder.
There is another online tool that counts lines of code for public and private repos without having to clone/download them - https://klock.herokuapp.com/
None of the answers here satisfied my requirements. I only wanted to use existing utilities. The following script will use basic utilities:
Git
GNU or BSD awk
GNU or BSD sed
Bash
Get total lines added to a repository (subtracts lines deleted from lines added).
#!/bin/bash
git diff --shortstat 4b825dc642cb6eb9a060e54bf8d69288fbee4904 HEAD | \
sed 's/[^0-9,]*//g' | \
awk -F, '!($2 > 0) {$2="0"};!($3 > 0) {$3="0"}; {print $2-$3}'
Get lines of code filtered by specified file types of known source code (e.g. *.py files or add more extensions, etc).
#!/bin/bash
git diff --shortstat 4b825dc642cb6eb9a060e54bf8d69288fbee4904 HEAD -- *.{py,java,js} | \
sed 's/[^0-9,]*//g' | \
awk -F, '!($2 > 0) {$2="0"};!($3 > 0) {$3="0"}; {print $2-$3}'
4b825dc642cb6eb9a060e54bf8d69288fbee4904 is the id of the "empty tree" in Git and it's always available in every repository.
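If you prefer not to hard-code it, the empty-tree id can be derived on the fly (a small aside, not part of the scripts above):
git hash-object -t tree /dev/null    # prints 4b825dc642cb6eb9a060e54bf8d69288fbee4904
git diff --shortstat "$(git hash-object -t tree /dev/null)" HEAD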
Sources:
My own scripting
How to get Git diff of the first commit?
Is there a way of having git show lines added, lines changed and lines removed?
shields.io has a badge that can count up all the lines for you.
You can use sourcegraph, an open source search engine for code. It can connect to your GitHub account, index the content, and then on the admin section you would see the number of lines of code indexed.
I made an NPM package specifically for this use case; it provides a CLI tool to which you pass the directory path and the folders/files to ignore
it goes like this:
npm i -g @quasimodo147/countlines
to get the $ countlines command in your terminal
then you can do
countlines . node_modules build dist

How to view a single Mercurial commit from the command line?

I would like to view from the command line what was changed in a given Mercurial commit, similar to what one would see from hg status or from the TortoiseHg tool. The closest I can seem to get is hg log --stat, but that prints extra symbols (i.e. pluses and minuses) and I cannot specify which specific revision I want to look at.
I need this because I have developers who have check-in comments like "." or ",". >:-(
It turns out that hg status has a --change argument to which you can pass the revision number (e.g. 109), a relative revision (i.e. -1 is the last commit, -2 the second-last, etc.), or the hash of the revision, and it will print out the changes (i.e. additions, removals, and modifications) that revision made.
--change isolates that revision and shows only its own changes, while replacing --change with --rev shows the cumulative effect from that revision to the current state.
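For example (the numeric revision is a placeholder):
hg status --change 109    # files added/modified/removed by revision 109 alone
hg status --change -1     # the same, for the most recent commit
hg status --rev 109       # cumulative changes from revision 109 to the working copy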
hg log -v -r <changeset>
changeset: 563:af4d66e2bc6e
tag: tip
user: David M. Carr <****>
date: Fri Oct 26 22:46:02 2012 -0400
files: hggit/gitrepo.py tests/test-pull.t
description:
pull: don't pull tags as bookmarks
or, using templates, something like
hg log -r tip --template "{node|short} - files: {files}\n"
with output
af4d66e2bc6e - files: hggit/gitrepo.py tests/test-pull.t

Find deleted files in Mercurial repository history, quickly?

You can use hg grep, but it searches the contents of all files.
What if I just want to search the file names of deleted files to recover one?
I tried hg grep -I <file-name-pattern> <pattern> but this seems to return no results.
Using templates, this is simple:
$ hg log --template "{rev}: {file_dels}\n"
Update for Mercurial 1.6
You can use revsets for this too:
hg log -r "removes('**')"
(Edit: Note the double * - a single one detects removals from the root of the repository only.)
Edit: As Mathieu Longtin suggests, this can be combined with the template from dfa's answer to show you which files each listed revision removes:
hg log -r "removes('**')" --template "{rev}: {file_dels}\n"
That has the virtue (for machine-readability) of listing one revision per line, but you can make the output prettier for humans by using % to format each item in the list of deletions:
hg log -r "removes('**')" --template "{rev}:\n{file_dels % '{file}\n'}\n"
If you are using TortoiseHg workbench, a convenient way is to use the revision filter. Just hit ctrl+s, and then type
removes("**/FileYouWantToFind.txt")
**/ indicates that you want to search recursively in your repository.
You can use * wildcard in the filename too. You can combine this query with other revision sets using and, or operators.
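For example, something like this (a hypothetical combination, just to illustrate the syntax):
removes("**/Wiki*.php") and date(">2019-01-01")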
There is also an Advanced Query Editor.
I have taken the other answers and improved them:
Added --no-merges. On a large project with dev teams there will be lots of merges; --no-merges filters out that log noise.
Changed removes("**") to sort(removes("**"), -rev). For a large project with over 100K changesets, this gets to the most recently removed files a lot faster, because it reverses the order to start at tip instead of rev 0.
Added {author} and {desc} to the output. This gives context as to why the files were removed, by displaying the log comment and who did it.
So for my use case, it was hg log --template "File(s) deleted in rev {rev}: {author} \n {desc}\n {file_dels % '\n {file}'}\n\n" -r 'sort(removes("**"), -rev)' --no-merges
Sample output:
File(s) deleted in rev 52363: Ansariel
STORM-2141: Fix various inventory floater related issues:
* Opening new inventory via Control-Shift-I shortcut uses legacy and potentinally dangerous code path
* Closing new inventory windows don't release memory
* During shutdown legacy and inoperable code for inventory window cleanup is called
* Remove old and unused inventory legacy code
indra/newview/llfloaterinventory.cpp
indra/newview/llfloaterinventory.h
File(s) deleted in rev 51951: Ansariel
Remove readme.md file - again...
README.md
File(s) deleted in rev 51856: Brad Payne (Vir Linden) <vir#lindenlab.com>
SL-276 WIP - removed avatar_skeleton_spine_joints.xml
indra/newview/character/avatar_skeleton_spine_joints.xml
File(s) deleted in rev 51821: Brad Payne (Vir Linden) <vir#lindenlab.com>
SL-276 WIP - removed avatar_XXX_orig.xml files.
indra/newview/character/avatar_lad_orig.xml
indra/newview/character/avatar_skeleton_orig.xml
Search for a specific file you deleted efficiently, and format the result nicely:
hg log --template "File(s) deleted in rev {rev}: {file_dels % '\n {file}'}\n\n" -r 'removes("**/FileYouWantToFind.txt")'
Sample output:
File(s) deleted in rev 33336:
class/WebEngineX/Database/RawSql.php
File(s) deleted in rev 34468:
class/PdoPlus/AccessDeniedException.php
class/PdoPlus/BulkInsert.php
class/PdoPlus/BulkInsertInfo.php
class/PdoPlus/CannotAddForeignKeyException.php
class/PdoPlus/DuplicateEntryException.php
class/PdoPlus/Escaper.php
class/PdoPlus/MsPdo.php
class/PdoPlus/MyPdo.php
class/PdoPlus/MyPdoException.php
class/PdoPlus/NoSuchTableException.php
class/PdoPlus/PdoPlus.php
class/PdoPlus/PdoPlusException.php
class/PdoPlus/PdoPlusStatement.php
class/PdoPlus/RawSql.php
From the project root:
hg status . | grep "\!" >> /tmp/filesmissinginrepo.txt