Some git mistakes happened and I lost a lot of changes for one file. I was using Eclipse as my IDE, but the git mishap included deleting the project and re-cloning the repository, so I can't do the restore from within Eclipse. I believe I have found the local history file that contains the code I want to restore, but I'm not sure how to cat this file. It kind of looks like JSON.
Anyone know how to restore or read the .metadata/.plugins/org.eclipse.core.resources/.history files?
I was able to recover my code.
I went to prj/.metadata/.plugins/org.eclipse.core.resources/.history
Then did some bashing:
fgrep -r -c "[Some function name specific to that file]" * | grep -v ":0" | cut -d : -f 1 | xargs ls -l | grep "Jul 29"
So this greps and counts the number of times some text specific to the missing code shows up in each file, removes the files where the count is 0, strips the count from the end of the file name, runs ls to get details on the files, and then optionally filters for a specific day. Since I was working with Clojure, I noticed that the files were either large or small. The large files were backups of the REPL; the small files were backups of the code.
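A slightly simpler variant of the same idea (a sketch assuming GNU tools; the search string is hypothetical) uses fgrep -l to print only the names of matching files, then sorts the candidates by modification time:
# list only the matching history files, newest first
fgrep -rl "someFunctionFromTheLostFile" . | xargs ls -lt | head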
+1 for Eclipse :)
Some IDEs support a feature usually called a "master filelist": the user provides a simple text file listing all the files of a project, and the IDE parses only the listed files.
Is this possible with a vscode workspace? Note that I am aware of the "Exclude" feature of vscode, but it is not convenient for my use case.
Thanks.
After trying many methods (all in vain), I came up with the following workaround: make symlinks to all files in the master filelist.
Suppose the files in the filelist (${ABS_INCLUDE}) are given as absolute paths, and suppose they share a root directory ${ABS_ROOT_DIR} (which can always be arranged). First create a dedicated root directory (${SYM_ROOT_DIR}) for the vscode workspace, then create a symlink for each file under the new root directory, e.g.,
mkdir -p "${SYM_ROOT_DIR}"
while IFS= read -r line
do
    OLD_DIR=$(dirname "$line")
    BASENAME=$(basename "$line")
    # mirror the original directory layout under the new root
    SYM_DIR=$(echo "${OLD_DIR}" | sed "s#${ABS_ROOT_DIR}#${SYM_ROOT_DIR}#")
    mkdir -p "${SYM_DIR}"
    ln -s "${line}" "${SYM_DIR}/${BASENAME}"
done < "${ABS_INCLUDE}"
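For example (hypothetical values), the variables could be set like this before running the loop, after which you open the mirror directory as the vscode workspace:
# filelist.txt contains absolute paths such as /home/me/project/src/main.c
ABS_INCLUDE=filelist.txt
ABS_ROOT_DIR=/home/me/project
SYM_ROOT_DIR=$HOME/vscode-ws
# ...run the loop above, then open the mirror in vscode:
code "$SYM_ROOT_DIR"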
I have this wget command:
sudo wget --user-agent='some-agent' --referer=http://some-referrer.html -N -r -nH --cut-dirs=x --timeout=xxx --directory-prefix=/directory/for/downloaded/files -i list-of-files-to-download.txt
-N will check if there is actually a newer file to download.
-r will turn the recursive retrieving on.
-nH will disable the generation of host-prefixed directories.
--cut-dirs=X will avoid the generation of the host's subdirectories.
--timeout=xxx will, well, timeout :)
--directory-prefix will store files in the desired directory.
This works nicely, no problem.
Now, to the issue:
Let's say my list-of-files-to-download.txt has these kinds of entries:
http://website/directory1/picture-same-name.jpg
http://website/directory2/picture-same-name.jpg
http://website/directory3/picture-same-name.jpg
etc...
You can see the problem: on the second download, wget will see we already have a picture-same-name.jpg, so it won't download the second or any of the following ones with the same name. I cannot mirror the directory structure because I need all the downloaded files in the same directory. I can't use the -O option because it clashes with -N, and I need that. I've tried -nd, but it doesn't seem to work for me.
So, ideally, I need to be able to:
a.- wget from a list of URLs the way I do now, keeping my parameters.
b.- get all the files in the same directory and be able to rename each one.
Does anybody have any solution to this?
Thanks in advance.
I would suggest 2 approaches -
Use the "-nc" or the "--no-clobber" option. From the man page -
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. (This is also the behavior with -nd, even if -r or -p are in effect.) When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
A combination with -O/--output-document is only accepted if the given output file does not exist.
Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.
As you can see from this man page entry, the behavior might be unpredictable/unexpected. You will need to see if it works for you.
Another approach would be to use a bash script. I am most comfortable using bash on *nix, so forgive the platform dependency. However, the logic is sound, and with a bit of modification you can get it to work on other platforms/shells as well.
Sample pseudocode bash script -
while IFS= read -r url
do
    wget <all your flags except the -i flag> "$url" -O /path/to/custom/directory/filename
done < list-of-files-to-download.txt
You can modify the script to download each file to a temporary file, parse "$url" to get the filename from the URL, check whether the file already exists on disk, and then decide whether to rename the temp file to the name that you want.
This offers much more control over your downloads.
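For instance, here is a minimal sketch of that approach (flags and paths are the placeholders from the question; note that wget refuses -N together with -O, so timestamp checking is traded for explicit names). It derives a collision-free local name from each URL's path and passes it with -O:
while IFS= read -r url
do
    # e.g. http://website/directory1/picture-same-name.jpg -> directory1_picture-same-name.jpg
    name=$(echo "$url" | sed -e 's|^[a-z]*://[^/]*/||' -e 's|/|_|g')
    wget --user-agent='some-agent' --referer=http://some-referrer.html \
         --timeout=30 -O "/directory/for/downloaded/files/$name" "$url"
done < list-of-files-to-download.txt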
I am having a problem with version control in Subversion. I checked out a working copy from the repository and got locks on all of its files. Then, without releasing the locks, I deleted the folder from disk.
I can't delete the folder from the repository, since it's got a lock.
If I try to release the locks recursively, it says there are no locks to be released.
In the Browse Repository view, I can only break the locks on particular files, not on folders recursively.
How can I break the locks residing in repository? I am using TortoiseSVN on Windows.
Is there a command to break locks recursively for a folder?
Ok I got it. Here's what worked for me.
Check out a working copy.
In Windows Explorer, go to TortoiseSVN -> Check for modifications...
Click on the "Check repository" button.
Select all the files, right-click, and choose the "Break lock" option.
Delete the working copy and the one in the repository. Voila! :)
Doing an SVN cleanup will release the lock as well:
$ svn cleanup
From the advanced locking section:
$ svn status -u
M             23   bar.c
M    O        32   raisin.jpg
        *     72   foo.h
Status against revision: 105
$ svn unlock raisin.jpg
svn: 'raisin.jpg' is not locked in this working copy
That simply means the file is not locked in your current working directory, but if it is still locked at the repository level, you can force the unlock ("breaking the lock"):
$ svn unlock http://svn.example.com/repos/project/raisin.jpg
svn: Unlock request failed: 403 Forbidden (http://svn.example.com)
$ svn unlock --force http://svn.example.com/repos/project/raisin.jpg
'raisin.jpg' unlocked.
(which is what you did through the TortoiseSVN GUI)
If somebody else has locked the files remotely, I found that the following steps in TortoiseSVN 1.7.11 successfully unlocked them in my working copy (similar to vikkun's answer):
Right click working copy > Check for modifications
Click Check repository button
Select files you wish to unlock
Right click > Get lock
Check "Steal the lock" checkbox
After the lock is stolen, select the files again
Right click > Release lock
Files in working copy should now be unlocked.
Unless you have admin access to the svn server and can use the svnadmin tool, your best option seems to be this:
Check out the problematic directory using svn checkout --ignore-externals *your_repo*
Use svn status --show-updates on the checked-out working copy to find out which files are potentially locked (if someone finds the documentation on the meaning of the status codes, please comment).
Use svn unlock --force *some_file* on the files found in step 2.
I've used the following one-liner to automate steps 2 and 3:
svn status -u | head -n -1 | awk '{ print $3 }' | xargs svn unlock --force
If you have access to the svnadmin tool in the repo server, you can use this alternative to remove all locks (based on the script posted by VonC)
svnadmin lslocks <path_to_repo> |grep -B2 Owner |grep Path |sed "s/Path: \///" | xargs svnadmin rmlocks <path_to_repo>
The repository administrator can remove the locks (recursively), operating on hundreds of files inside a problematic directory -- but only by scripting since there is not a --recursive option to svnadmin rmlocks.
$repopath=/var/svn/repos/myproject/;
$problemdirectory=trunk/bikeshed/
IFS=$'\n'; for f in $(sudo svnadmin lslocks $repopath $problemdirectory \
| grep 'Path: ' \
| sed "s/Path: \///") ; \
do sudo svnadmin rmlocks $repopath "$f" ; done
This solution works with filenames that have spaces in them.
For me, deleting the lock file inside .svn did not work, since I got a bad-checksum message when trying to update the file.
I got the following message after executing svn cleanup inside the directory:
svn: In directory ''
svn: Can't copy '.svn/tmp/text-base/file_name.svn-base' to 'filename.3.tmp': No such file or directory
So I copied my file into .svn/tmp/text-base and renamed it to file_name.svn-base. Then cleanup and update worked fine.
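In shell terms, the fix described above looks roughly like this (file names taken from the hypothetical error message):
# put a copy of the working file where svn expects the pristine text-base
cp file_name .svn/tmp/text-base/file_name.svn-base
svn cleanup
svn update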
When I tried to run the script from above as originally provided, I was getting an error when it tried to set the variables:
./scriptname: line1: =/svn/repo/path/: No such file or directory
./scriptname: line2: =directory/: No such file or directory
I removed the '$' from the first two lines and this worked perfectly after that.
repopath=/var/svn/repos/myproject/;
problemdirectory=trunk/bikeshed/
IFS=$'\n'; for f in $(sudo svnadmin lslocks $repopath $problemdirectory \
| grep 'Path: ' \
| sed "s/Path: \///") ; \
do sudo svnadmin rmlocks $repopath "$f" ; done
Is there any way to find a file by name in a CVS repository? I only have client access and no access to the server. Is there a command I've missed, or some software that I can install locally that can connect and find a file by filename?
You could grep the output of
cvs rlog -Nh .
(note the period character at the end - this effectively means: the whole repository).
That should give you info about the whole shebang including removed files and files added on branches.
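For example, to look for a file by name (the filename here is hypothetical), filter the header lines that name each RCS file:
cvs rlog -Nh . 2>/dev/null | grep -i 'RCS file:.*myfile'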
You can use
cvs rls -Rde <modulename>
which will give you all the files in the module, recursively, e.g.
foo:
/x.py/1.2/Mon Dec 1 23:33:51 2008//
/y.py/1.1/Mon Dec 1 23:33:31 2008//
D/bar////
foo/bar:
/xxx/1.1/Mon Dec 1 23:36:38 2008//
Notice that the -d option also gives you deleted files; I'm not sure whether you wanted that. Without -e, it gives you only the file names.
Anybody have a script or alias to find untracked (really: unadded) files in a Perforce tree?
EDIT: I updated the accepted answer on this one since it looks like P4V added support for this in the January 2009 release.
EDIT: Please use p4 status now. There is no need for jumping through hoops anymore. See ColonelPanic's answer.
In the Jan 2009 version of P4V, you can right-click on any folder in your workspace tree and click "reconcile offline work..."
This will do a little processing, then bring up a split-tree view of files that are not checked out but differ from the depot version, or are not checked in at all. There may even be a few other categories it brings up.
You can right-click on files in this view and check them out, add them, or even revert them.
It's a very handy tool that's saved my ass a few times.
EDIT: ah the question asked about scripts specifically, but I'll leave this answer here just in case.
On linux, or if you have gnu-tools installed on windows:
find . -type f -print0 | xargs -0 p4 fstat >/dev/null
This will print an error message for every file that Perforce doesn't know about. If you want to capture that output:
find . -type f -print0 | xargs -0 p4 fstat >/dev/null 2>mylogfile
Under Unix:
find -type f ! -name '*~' -print0| xargs -0 p4 fstat 2>&1|awk '/no such file/{print $1}'
This will print out a list of files that are not added in your client or the Perforce depot. I've used ! -name '*~' to exclude files ending with ~.
Ahh, one of the Perforce classics :) Yes, it really sucks that there is STILL no easy way for this built into the default commands.
The easiest way is to run a command to find all files under your clients root, and then attempt to add them to the depot. You'll end up with a changelist of all new files and existing files are ignored.
E.g. dir /s /b /A-D | p4 -x - add
(use 'find . -type f -print' from a *nix command line).
If you want a physical list (in the console or file) then you can pipe on the results of a diff (or add if you also want them in a changelist).
If you're running this within P4Win you can use $r to substitute the client root of the current workspace.
Is there an analogue of svn status or git status?
Yes, BUT.
As of Perforce version 2012.1, there's the command p4 status and in P4V 'reconcile offline work'. However, they're both very slow. To exclude irrelevant files you'll need to write a p4ignore.txt file per https://stackoverflow.com/a/13126496/284795
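A minimal sketch of such an ignore file (the patterns are hypothetical; point the P4IGNORE environment variable at it):
# p4ignore.txt -- one pattern per line, similar in spirit to .gitignore
*.o
*.tmp
build/

export P4IGNORE=p4ignore.txt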
2021-07-16: THIS ANSWER MAY BE OBSOLETE.
I am reasonably sure that it was accurate in 2016, for whatever version of Perforce I was using them (which was not necessarily the most current). But it seems that this problem or design limitation has been remedied in subsequent releases of Perforce. I do not know what the stack overflow etiquette for this is -- should this answer be removed?
2016 ANSWER
I feel impelled to add an answer, since the accepted answer, and some of the others, have what I think is a significant problem: they do not understand the difference between a read-only query command, and a command that makes changes.
I don't expect any credit for this answer, but I hope that it will help others avoid wasting time and making mistakes by following the accepted but IMHO incorrect answer.
---+ BRIEF
Probably the most convenient way to find all untracked files in a perforce workspace is p4 reconcile -na.
-a says "give me files that are not in the repository, i.e. that should be added".
-n says "make no changes" - i.e. a dry-run. (Although the messages may say "opened for add", mentally you must interpret that as "would be opened for add if not -n")
Probably the most convenient way to find all local changes made while offline - not just files that might need to be added, but also files that might need to be deleted, or which have been changed without being opened for editing via p4 edit, is p4 reconcile -n.
Several answers provided scripts, often involving p4 fstat. While I have not verified all of those scripts, I often use similar scripts to make up for the deficiencies of perforce commands such as p4 reconcile -n - e.g. often I find that I want local paths rather than Perforce depot paths or workspace paths.
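As one sketch of what I mean (assuming the tagged output of p4 -ztag reconcile includes a clientFile field, which it has on the server versions I've used), this prints the local workspace paths of files that would be added:
p4 -ztag reconcile -n -a ... 2>/dev/null | sed -n 's/^\.\.\. clientFile //p'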
---+ WARNING
p4 status is NOT the counterpart to the status commands on other version control systems.
p4 status is NOT a read-only query. p4 status actually finds the same sort of changes that p4 reconcile does and opens them in a pending changelist. p4 status does not seem to have a -n dry-run option like p4 reconcile does.
If you do p4 status, look at the files, and think "Oh, I don't need those", then you will have to p4 revert them if you want to continue editing in the same workspace. Otherwise, the changes that p4 status added to your changelist will be checked in the next time you submit.
There seems to be little or no reason to use p4 status rather than p4 reconcile -n, except for some details of local workspace vs. depot pathnames.
I can only imagine that whoever chose 'status' for a non-read-only command had limited command of the English language and other version control tools.
---+ P4V GUI
In the GUI p4v, the reconcile command finds local changes that may need to be added, deleted, or opened for editing. Fortunately it does not add them to a changelist by default; but you still may want to be careful to close the reconcile window after inspecting it, if you don't want to commit the changes.
Alternatively, from P4Win, use the "Local Files not in Depot" option on the left-hand view panel.
I don't use P4V much, but I think the equivalent is to select "Hide Local Workspace Files" in the filter dropdown of the Workspace view tab.
In P4V 2015.1 you'll find these options under the filter button.
I use the following in my tool that backs up any files in the workspace that differ from the repository (for Windows). It handles some odd cases that Perforce doesn't like much, like embedded blanks, stars, percents, and hashmarks:
dir /S /B /A-D | sed -e "s/%/%25/g" -e "s/@/%40/g" -e "s/#/%23/g" -e "s/\*/%2A/g" | p4 -x- have 1>NUL:
"dir /S /B /A-D" lists all files at or below this folder (/S) in "bare" format (/B) excluding directories (/A-D). The "sed" changes dangerous characters to their "%xx" form (a la HTML), and the "p4 have" command checks this list ("-x-") against the server discarding anything about files it actually locates in the repository ("1>NUL:"). The result is a bunch of lines like:
Z:\No_Backup\Workspaces\full\depot\Projects\Archerfish\Portal\Main\admin\html\images\nav\navxx_background.gif - file(s) not on client.
Et voilà!
Quick 'n Dirty: In p4v right-click on the folder in question and add all files underneath it to a new changelist. The changelist will now contain all files which are not currently part of the depot.
The following commands produce status-like output, but none is quite equivalent to svn status or git status, providing a one-line summary of the status of each file:
p4 status
p4 opened
p4 diff -ds
I don't have enough reputation points to comment, but Ross' solution also lists files that are open for add. You probably do not want to use his answer to clean your workspace.
The following uses p4 fstat (thanks Mark Harrison) instead of p4 have, and lists the files that aren't in the depot and aren't open for add.
dir /S /B /A-D | sed -e "s/%/%25/g" -e "s/@/%40/g" -e "s/#/%23/g" -e "s/\*/%2A/g" | p4 -x- fstat 2>&1 | sed -n -e "s/ - no such file[(]s[)]\.$//gp"
Fast method, but a little unorthodox: if the codebase doesn't add new files or change its view too often, you can create a local git repository out of your checkout. From a clean Perforce sync, git init, then add and commit all files locally. git status is fast and will show files not previously committed.
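A rough sketch of the idea (run from the workspace root; the commit message is arbitrary):
# after a clean p4 sync:
git init
git add -A
git commit -m "baseline from p4 sync"
# later, files not previously committed show up as untracked:
git status --short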
The p4 fstat command lets you test whether a file exists in the workspace; combine it with find to locate files to check, as in the following Perl example:
# throw the output of p4 fstat into an output file
# find:
#   -type f  :- only look at files
#   -print0  :- terminate strings with \0s to support filenames with spaces
# xargs:
#   groups its input into command lines
#   -0       :- read input strings terminated with \0s
# p4:
#   fstat    :- fetch workspace stat on files
my $status = system "(find . -type f -print0 | xargs -0 p4 fstat > /dev/null) >& $outputFile";
# read the output file
open F1, $outputFile or die "$!\n";
# iterate over all the lines in F1
while (<F1>) {
    # remove trailing whitespace
    chomp $_;
    # grep lines which have 'no such file' or 'not in client'
    if ($_ =~ m/no such file/ || $_ =~ m/not in client/) {
        # remove the content after '-'
        $_ =~ s/-\s.*//g;
        # the line below is optional; check your output file for more clarity
        $_ =~ s/^.\///g;
        print "$_\n";
    }
}
close F1;
Or you can use p4 reconcile -n -m ...
If it is 'opened for delete' then it has been removed from the workspace. Note that the above command runs in preview mode (-n).
I needed something that would work on Linux, Mac, or Windows, so I wrote a Python script for it. The basic idea is to iterate through the files and execute p4 fstat on each (of course ignoring dependencies and tmp folders).
You can find it here: https://gist.github.com/givanse/8c69f55f8243733702cf7bcb0e9290a9
This command can give you a list of files that needs to be added, edited or removed:
p4 status -aed ...
you can use them separately too
p4 status -a ...
p4 status -e ...
p4 status -d ...
In P4V, under the "View" menu item choose "Files in Folder" which brings up a new tab in the right pane.
To the far right of the tabs there is a little icon that brings up a window called "Files in Folder" with 2 icons.
Select the left icon that looks like a funnel and you will see several options. Choose "Show items not in Depot" and all the files in the folder will show up.
Then just right-click on the file you want to add and choose "Mark for Add...". You can verify it is there in the "Pending" tab.
Just submit as normal (Ctrl+S).