When performing a merge-down, Perforce thinks a file was moved when it wasn't

Using the Perforce Visual Client (P4V), I am trying to merge down new content from a parent stream to our feature stream. One of the changes is that a new directory was added, with content that is similar (but not identical) to what was already in an existing directory. After performing the merge, Perforce incorrectly thinks one of the existing files was moved to a location in the newly added directory. The new file has the same name as the existing one, and the name of the new directory is similar to that of the directory that was already there.
It should be noted that the problematic files are not owned by me. It is all code that has never been touched by me or my team.
To clarify:
The original situation:
/path/to/file.txt
New situation on parent stream:
/path/to/file.txt
/path/tofoo/file.txt
New situation on my stream after merge:
Deleted: /path/to/file.txt
/path/tofoo/file.txt
When I look at the second file with "diff against have revision", it shows the deleted file as the "previous" revision, even though in reality it is a new file, which does not have a previous revision.
Anyone have any idea what could cause such behaviour?
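One way to dig into this (a suggestion, not from the original thread; the depot path is the placeholder one from above) is to ask Perforce for the full history of the new file. If the merge recorded a move or integration, filelog will show which file it was credited from:
p4 filelog -i //depot/path/tofoo/file.txt
The -i flag follows the file across branching and integration, so the output should reveal why Perforce believes /path/to/file.txt is its predecessor.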

Recovering data from Firebird database partially-encrypted by ransomware

Our test server was hacked and ransomware (Cry36) was installed, for which there is no solution to date. We also didn't keep any snapshots up to date (lesson learned).
Since it's only a test server, I am not too worried, but we had stored a bunch of work in our Firebird DB (v2.5) which I would like to save.
Looking at the database in a hex editor, I can see that the data is encrypted up until offset 00006430.
Judging by the structure of a Firebird database, all the headers are encrypted (header page, PIP, ..., data page).
All the data is still there.
I've tried gfix and even copying the headers from an older version of the DB. But while that does fix the database, the headers are wrong and most of the newer pages are removed.
Does anyone have any idea how to restore the database or extract the tables?
Regards
I have used this method to restore ransomware-affected files on hard drives, for various ransomware strains, by renaming the file in question back to its original filename and extension. You may be able to apply the same method to revert the database file(s) back to their pre-encrypted versions.
From my testing:
the ransomed file is compressed and/or simply renamed. The encryption is either not actually applied (only implied), or a container/renamed file is encrypted while the original file is never touched. Simply rename it back to the original and you can access the file as you could before the attack. Example:
This is the Ransomed file:
Adobe Acrobat XI Pro 11.0.20.zip.id[42AF04FF-2275].[supportcrypt2019#cock.li].Adame
This is the Ransomed file, renamed and fixed:
Adobe Acrobat XI Pro 11.0.20.zip
The removed portion of the FileName is:
.id[42AF04FF-2275].[supportcrypt2019#cock.li].Adame
Upon renaming the file, you will be prompted for approval to change the file type, i.e. which application will open it (back to its original state, as determined by the file extension after the filename). The reason the file doesn't work when ransomed is the final renaming of the extension: in this case .Adame is not a real file type but a made-up one, so no program can open it. Thus, the file cannot be opened as named.
You would need to do this for each file individually. Could you post more information on the database file and the encryption? This should work for you as well, since the ransom methodology should be the same. I cannot identify the naming scheme used on your system without more information about unusual or new/unidentified portions injected throughout your files.
For renaming multiple files, you could try an application such as "Advanced Renamer" for bulk processing.
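If the files really are just renamed, a small script can also do the bulk rename. Here is a minimal C# sketch (not from the original answer; the suffix pattern is generalized from the example above, and you should run it on copies of the files first):
using System;
using System.IO;
using System.Text.RegularExpressions;

class StripRansomSuffix
{
    static void Main(string[] args)
    {
        string dir = args.Length > 0 ? args[0] : ".";
        // Matches a trailing ".id[...].[...].Adame" as in the example above.
        var suffix = new Regex(@"\.id\[[^\]]*\]\.\[[^\]]*\]\.Adame$", RegexOptions.IgnoreCase);
        foreach (string path in Directory.GetFiles(dir))
        {
            string name = Path.GetFileName(path);
            string restored = suffix.Replace(name, "");
            if (restored != name)
                File.Move(path, Path.Combine(dir, restored));
        }
    }
}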

How to exclude a directory in Stream mapping in p4v

I'm using the visual client for Perforce (P4V) and I want to exclude a directory from the workspace. Before streams, I would just navigate to my workspace, find the folder in the tree, and exclude it (and I've found this solution in a number of other related questions). However, now that I am using a stream, it won't let me do this; apparently I have to edit the stream mapping instead.
So I tried adding this line to the Remapped box when editing the stream:
-//NumberPlus/current/Library/... //nplus-mainline/current/Library/
However I just get an error:
Error in stream specification.
Error detected at line 24
Null directory (//) not allowed in '-//NumberPlus/current/Library/...'.
EDIT: I'm in Windows 8.1, for clarification.
If the folder you want to exclude is specific to your machine, setting P4IGNORE locally is the easiest way to exclude it from being added to the depot.
http://www.perforce.com/blog/120214/new-20121-p4ignore
You'd set P4IGNORE to some name like "p4ignore.txt", create a file with that name, and add "Libraries" to it -- subsequent "p4 add" commands will skip over paths found in the P4IGNORE file, so those files will never get added to the depot.
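For example (on Windows, with a hypothetical ignore file at the workspace root):
p4 set P4IGNORE=p4ignore.txt
and a p4ignore.txt containing the single line:
Library/
After this, "p4 add" will silently skip anything under Library/ (use whichever directory name actually applies in your workspace).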
If this is something that's going to be common to all workspaces of this stream (e.g. it's a build artifact that everyone is going to generate and nobody is supposed to check in), what you want to do is add an "exclude" to the stream's Paths (this will exclude it from both branch views and client views generated by that stream). E.g.:
Paths:
    share ...
    exclude Libraries/...
The "exclude Libraries/..." is basically the same thing as the exclusion line you would add to the client view, except you specify it as a relative path, you don't need to specify both sides of the mapping, and the "-" is implied by the "exclude" type. The "remap" type is if you want to keep those files but in a different depot location, which doesn't sound applicable here.
More information on defining stream views:
http://www.perforce.com/perforce/doc.current/manuals/p4v/streams_views.html
You can't just edit the mappings for your client workspace if it is switched to a particular stream. The whole point of streams is that your workspace mapping is directly generated from the stream definition. So that's a feature.
It's not totally clear whether:
you don't want the directory in the stream at all, or
it's valid to have the directory in the stream, but you don't want to sync it to your workstation, or
you want the directory synced to your workstation, but you want it to have different contents (say, from some other stream which has a different version of the library).
However, for all of these situations, I suspect the best path forward is to define a new child stream of your current stream.
You will want to define the path mappings using the "share", "exclude", "isolate", and "import" mapping types.
For example, if you just didn't want the Library/... directory at all, you'd "exclude" it from your parent.
Then that stream simply won't have that directory, and it (of course) won't be on your workstation when you sync to the stream, either.
If you wanted to have a different copy of the code in the Library/... directory, so that it became a point of intentional divergence from the parent, you'd "isolate" it from your parent to submit your own custom version, or "import" it from another stream to use that stream's Library/... directory instead.
In either case, the directory would be part of the stream, and would be sync'd to your workstation, but the contents of that directory would differ from the contents that are used in the parent stream (the exact way in which they'd differ is under your control, as you define the stream accordingly).
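For illustration (the stream and path names here are hypothetical, not from the original answer), the child stream's Paths field might look like:
Paths:
    share ...
    isolate Library/...
to submit your own diverged copy, or:
Paths:
    share ...
    import Library/... //NumberPlus/other-stream/Library/...
to pull that directory read-only from another stream.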
Documentation and some examples are here: http://www.perforce.com/perforce/doc.current/manuals/p4v/streams_views.html
and here:
http://www.perforce.com/sites/default/files/pdf/Streams-ebook.pdf
I believe I have solved this. To be clear, I wanted the folder to be completely ignored by version control. I'm using P4Connect with Unity, and it keeps wanting to include unnecessary stuff in my depot.
All I had to do was add this line to my parent stream in the Paths box:
exclude current/Library/...

SharpSvn: Why was update of subfolder from Empty Depth Checkout skipped?

I'm having some trouble cherrypicking some folders out of a repo using SharpSvn (from C#). I did this:
client.CheckOut(uri, dir, new SvnCheckOutArgs() { Depth = SvnDepth.Empty });
foreach (var folder in folders)
{
    client.Update(folder);
}
But my second call to Update didn't work. It reports that the action was SvnNotifyAction.Skip and nothing gets written to the working copy.
uri is essentially something like: svn://myserver/myrepo/mysdk and dir is something like C:\Test\mysdk. (I've changed exact names for the purposes of this question, but structurally it's identical.)
Then the 1st folder is C:\Test\mysdk\include (this works)
Then the 2nd folder is C:\Test\mysdk\bin\v100\x86 (this one doesn't update)
Why would the first one work, but the 2nd folder (with nested subfolders) not update? It only reports that it was skipped, and I don't know how to figure out why.
It turns out that updating the nested subdirectory doesn't work because the parent directories don't exist yet and so the nested subdirectory update is skipped. To fix this, I needed to add an argument to Update to indicate that it should create the parent directories.
(The equivalent svn command line option would be --parents).
client.Update( folder, new SvnUpdateArgs() { UpdateParents = true } );
I discovered this by trying to do it manually from the svn command line (where I encountered the same problem). "svn help co" offered this tiny clue:
--parents : make intermediate directories
I'm assuming that UpdateParents and --parents are equivalent. So far so good.
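Putting the pieces together, here is a minimal self-contained sketch (the server URL and local paths are the placeholder ones from the question, so adjust them to your setup):
using System;
using SharpSvn;

class SparseCheckout
{
    static void Main()
    {
        using (var client = new SvnClient())
        {
            // Depth.Empty checks out only the root directory itself, no children.
            client.CheckOut(new SvnUriTarget(new Uri("svn://myserver/myrepo/mysdk")),
                            @"C:\Test\mysdk",
                            new SvnCheckOutArgs { Depth = SvnDepth.Empty });

            var folders = new[]
            {
                @"C:\Test\mysdk\include",
                @"C:\Test\mysdk\bin\v100\x86",
            };

            foreach (var folder in folders)
            {
                // UpdateParents (like "svn update --parents") also creates the
                // missing intermediate directories such as bin and bin\v100.
                client.Update(folder, new SvnUpdateArgs { UpdateParents = true });
            }
        }
    }
}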

addToFolder(): The copy of the file is deleted if the original is deleted

I started doing development with Google Apps Script a few days ago and recently joined Stack Overflow. I have a problem with the addToFolder() function. I have the following piece of code that copies my new spreadsheet into a folder (test/sheets) in my Google Drive:
var ss = SpreadsheetApp.create("test");
var ssID = ss.getId();
DocsList.getFileById(ssID).addToFolder(DocsList.getFolder("test/sheets"));
My problem is that I now have 2 versions of the same file (one in the root of my Google Drive and the other in the test/sheets folder); whenever I try to delete either of the copies, the other copy is deleted as well. Is there a way to delete the old file and keep the new one, OR is there a way to create the file in the desired folder in the first place?
EDIT :
Thanks for your quick response. I played with this for a couple of hours but still have a problem copying the file to the destination folder. The problem is that even when I use the makeCopy method of the file, addToFolder is still the only way to specify the folder. Again, this ends up with the tagged filename in the destination folder.
I had the same problem with the copy method.
Here is my new Code:
var SetLocationFile = "icompare/sheets/stocks";
var FolderID = DocsList.getFolder(SetLocationFile);
var FileID = DocsList.getFileById(ssID);
FileID.makeCopy("test3").addToFolder(FolderID);
Folders in Google Docs / Google Drive are actually tags. When you "add" a file to the folder "test/sheets", you do not make a copy of your file; you just attach the tag "test/sheets" to it. Now the same file is shown both in the "test/sheets" folder (i.e., in the list of all files with the tag "test/sheets") and in the root. If you wish to make a copy of the file, you should use the copy method. (Please let me know if I have misunderstood your question.)
I realize this is an old question, but you can simply use .removeFromFolder(DocsList.getRootFolder()); to remove the file from the root folder.
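Combining this with the snippet from the question gives a short sketch (same DocsList API as used in this thread; note that DocsList has since been deprecated in favor of DriveApp):
var ss = SpreadsheetApp.create("test");
var file = DocsList.getFileById(ss.getId());
// Tag the file with the target folder, then drop the root tag
// so it appears only under test/sheets.
file.addToFolder(DocsList.getFolder("test/sheets"));
file.removeFromFolder(DocsList.getRootFolder());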
I would also like to know the answer to this question. It seems rather "weird" that the API does not even provide a way to create spreadsheets and place them in a certain folder. And no, I do not want a copy of the file; I want the file to be in one specific folder and in no other folder...

How do I get ClearCase to make an archive of a subdir of snapshot as it was at an earlier revision?

I'm not particularly experienced with ClearCase, so if my terminology is incorrect please let me know.
In Git I can run the command:
git archive -o /tmp/dump.zip $SHA_FROM_THE_PAST path/to/dump
I want to do something similar in ClearCase.
The ClearCase repository contains two branches: main and snapshot_foo.
snapshot_foo branches from main at some point in the past.
What I want is a dump of all the files as they looked at the time the snapshot was first created.
I understand that there is no 'global' state identifier like there is in Git; AFAIK, in ClearCase each element is versioned individually, so there will not necessarily be a one-to-one equivalent to that command.
I've thought about creating a new snapshot starting at the same point in time from main, and just copying what I need from that, but I am bewildered and confused as to how I would go about it.
The simplest case would be if you had set a label on main just before creating snapshot_foo.
But if you have no such label, you can get all the files at the time just before the creation of the snapshot_foo branch:
a/ cleartool descr -l brtype:snapshot_foo@/myVob to get the date of creation for this branch
b/ make a snapshot view with a time-based selection rule similar to the following:
element /myPath/... /main/{!created_since(01-Sep-2008.12:34:56)}
element /myPath/... /main/LATEST
(with 12:34:56 being a time just before the creation of the brtype snapshot_foo)
(see the config_spec man page)
Once the snapshot view is created with the right versions in it, you can zip its content, achieving a similar result to the git archive you mention in your question.
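For illustration (the view tag and local paths are hypothetical, not from the original answer), the whole sequence from the command line could look like:
cleartool descr -l brtype:snapshot_foo@/myVob
cleartool mkview -snapshot -tag before_foo /tmp/before_foo
cd /tmp/before_foo
cleartool edcs
cleartool update
zip -r /tmp/dump.zip myPath
where edcs is used to paste in the time-based config spec shown above, and update loads the selected versions into the snapshot view before zipping.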