Does Perforce support file streams (NTFS Alternate Data Streams) on Windows?
Sorry to resurrect such an old thread, but I found a workaround that will allow Perforce clients (P4/P4V) to create ADS data.
Chapter 2 of the Perforce User's Guide has a section titled "Mapping files to different locations in the workspace". This section covers how to remap the depot to the workspace and vice-versa.
Let's assume that you want to store some asset metadata with your files in Perforce. You create a tool that generates an ADS called asset.meta such that your filenames are of the form file.ext:asset.meta.
If you modify your Perforce Workspace to include the following:
//depot/....asset.meta //CLIENT/...:asset.meta
then the client will take asset.meta ADS streams and create depot files for them.
foo.txt with an asset.meta ADS gets stored as 2 files in the depot: foo.txt and foo.txt.asset.meta. When you sync them down, they end up joined correctly.
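For example, assuming a client named CLIENT (a placeholder), and keeping the normal depot mapping in place, the full View could look like this; the ADS line has to come after the general line, since later view lines win for overlapping paths:

    //depot/... //CLIENT/...
    //depot/....asset.meta //CLIENT/...:asset.meta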
Now there are 2 caveats to be aware of.
1.) P4V will not see the ADSs. You have to add them manually through P4, the P4API or some other explicit mechanism.
2.) If the base file (foo.txt from our example) is not marked writable, you will not be able to sync the ADS.
You will have to deal with #1 in whatever way you want. #2 is trickier IMO. You can +w the main files so they are always writable on the client (if your workflows can accommodate that), or you can write a custom sync routine that handles making files read-only or read-write as necessary.
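A minimal sketch of such a custom sync routine, assuming the p4 command-line client is configured for the workspace (the client root and depot paths are placeholders):

    import os
    import stat
    import subprocess

    CLIENT_ROOT = r"C:\workspace"  # hypothetical client root

    # Pass 1: sync everything; the ADS halves may fail against read-only bases.
    subprocess.run(["p4", "sync"], check=False)

    # Make the whole tree writable so the ADS halves can be attached.
    for root, _dirs, files in os.walk(CLIENT_ROOT):
        for name in files:
            path = os.path.join(root, name)
            os.chmod(path, os.stat(path).st_mode | stat.S_IWRITE)

    # Pass 2: force-sync just the ADS files onto the now-writable bases.
    subprocess.run(["p4", "sync", "-f", "//depot/....asset.meta"], check=True)

If your workflow needs files read-only again afterwards, a third pass can re-apply the read-only bit to everything that isn't opened for edit.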
I may update this answer if I hear any good ideas from Perforce beyond the ones mentioned above, but considering how high this page ranks in Google when searching for "Perforce Alternate Data Stream", I thought this might help someone.
I just got a response from Perforce:
Perforce does not have any special support for NTFS Alternate Data Streams.
This means that you will lose any additional data stream when you submit a file into Perforce.
Requirements:
Keep a history of web text/code source files.
I am the only user, i.e. personal usage.
Automatically save a history entry for each updated file (it doesn't have to be immediate, but at least once per week).
It must be simple to start with and to keep working with.
I have 3 work places, so I need to sync the files between them.
(Not a must, but hopefully, for a future working environment:) any non-engineer can also understand where the history files are and view them easily.
Current way:
Each day I work, I make a history folder for that day, download the files into it for editing, and copy files in whenever I edit or create one.
Advantage of the current way:
Very quick and simple; no additional work is needed to create the history.
Disadvantage of the current way:
Messy. Every day I work, I create a new history folder to keep the downloaded files, so things get messy in Finder (or Windows Explorer).
Also, I have no reliable way to sync the files with my other work places.
I tried Git before; I had thought Git would automatically save the files I edit and save with an editor, but that was not the case. Also, Git is too complicated to start using. If you recommend Git, please show me ways to deal with the problems I had, for instance a simple Git GUI with limited options and no merging/projects/branches, since this is personal usage for maintaining just one website.
Do you know any way to do version control personally and simply?
Thanks.
Suppose you entered <form ...> in your HTML, without the closing tag, and saved the file; do you really think a commit created by our imaginary VCS, triggered by that file's update event, would make any sense?
What I mean is that, as with writing programs¹, the history of source code changes is there for humans to read, and a good history graph should really read like prose: each commit should be atomic in the sense that it comprises one (small) but internally integral feature or fixes one bug, and it has to be properly annotated so that the intent of the change captured by that commit is clear.
What you want instead is just some dumb stream of changes purely for backup purposes.
Well, if you're fully aware of the repercussions (the most glaring one is that the generated history is completely useless for doing development on the project and can only be used for rollbacks in case of "oopsies"), there are two ways to go:
Some IDEs (namely, Eclipse) save a backup copy of each file they manage on each save, thus providing you with such rollback functionality without using any VCS.
Script around any VCS you like: say, on Linux, you start something like inotifywait, telling it to watch your project's root directory, recursively, for write events on files; you read whatever the tool prints to its stdout when these events happen, and for each event you call your VCS of choice to record a new commit with these changes.
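A minimal sketch of that watcher, in Python for readability, assuming inotify-tools and Git are installed and the repository is already initialized (the project path and commit message format are placeholders):

    import subprocess

    PROJECT_ROOT = "/home/me/website"  # hypothetical project root

    # Report forever (-m), recursively (-r), every file closed after a write.
    watcher = subprocess.Popen(
        ["inotifywait", "-m", "-r", "-e", "close_write",
         "--format", "%w%f", PROJECT_ROOT],
        stdout=subprocess.PIPE,
        text=True,
    )

    # For each event printed to stdout, record a new commit with the change.
    for line in watcher.stdout:
        path = line.strip()
        if "/.git/" in path:
            continue  # ignore Git's own bookkeeping writes
        subprocess.run(["git", "-C", PROJECT_ROOT, "add", "-A"], check=False)
        subprocess.run(["git", "-C", PROJECT_ROOT, "commit", "-m",
                        f"auto-save: {path}"], check=False)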
¹ «Programs must be written for people to read, and only incidentally for machines to execute.» — Abelson & Sussman, "Structure and Interpretation of Computer Programs", preface to the first edition.
I strongly suggest you take a deeper look at Git.
It may look difficult at the beginning, but you just need to spend some time learning it. All the problems above can be solved easily once you learn the basics. There is also a nice tutorial on GitHub on how to use Git, no need to install anything: https://try.github.io/levels/1/challenges/1.
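For the single-user, no-merge workflow described in the question, the daily command set is genuinely small; a minimal sketch, run inside the website's folder:

    git init                        # once: turn the folder into a repository
    git add -A                      # stage everything that changed
    git commit -m "describe edits"  # record one history entry
    git log --oneline               # browse the saved history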
I'm using P4V. I work in a subdirectory (e.g. code/jorge) and other people work in another subdirectory (e.g. art/) that I never deal with. Additionally, I have a stream where I do my personal work. Every so often I need to merge changes from the main line to my stream, and copy them back up. However, the files in art/ are large binaries, and Perforce spends a long time thinking about them even though I've not touched them. Is there any way to have Perforce merge/copy my directory (code/jorge) without it spending time trying to merge art/? Can I tell P4V to merge/copy only the code directory?
Related but not identical question: Perforce streams, exclude files from merge/copy
If you don't touch those files, it might be easier to not include them in your stream at all rather than manually exclude them every time you do a merge.
I.e. if your stream Paths currently says:
share ...
maybe it should instead be:
share code/jorge/...
or, if you need the art for builds but never need to modify it, you might consider doing something like:
import art/...
share code/...
I am not sure this is the recommended option, but you can actually merge using not the "Stream to Stream" option but the standard "Specify source and target files" option, even if you are in a stream depot.
So you can select any subdirectory as your source, like 'dev/code/jorge', and the same subdirectory as destination, like 'main/code/jorge', and it will only consider that directory. We do it routinely in my team, because we have a big monorepo and did not take the time to set up multiple depots when we migrated to Perforce.
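If you ever drop to the command line, I believe the equivalent is a merge between the two sub-paths, forced with -F since it bypasses the normal stream-to-stream flow (the depot paths follow the example above):

    p4 merge -F //depot/dev/code/jorge/... //depot/main/code/jorge/...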
I'm using Perforce version control system (http://www.perforce.com/) and would like to format source code files (mainly XML) when developers submit their files to Perforce. I know that Git and SVN allow script hooks that provide for that.
Is there a way to change files that are being submitted to Perforce using some kind of a hook?
How can I do that on Perforce?
Thanks!
When I've done these sorts of policy-enforcement tools in the past, I've done it post-commit.
That is, after the submit completes, my tool retrieves the newly-submitted files, re-formats them according to the policy that I'm enforcing, and submits the re-formatted files as a follow-on change.
I do this by writing a tool that monitors changes similarly to the way the change review daemon monitors changes, so that the tool notices new submits and reviews the new files to see if they comply to the organization policy.
I generally have the tool perform a "revert -a" prior to the submit, so that if the files were formatted according to policy by the original developer, no second submission occurs.
I actually think this is a better approach than trying to do it during the submit:
The change that is submitted is exactly as the user provided it, with the identical content.
The modifications that are due to the tool are clearly visible in a separate submission, which makes it very easy to recognize when the tool has gone astray and damaged the file during its re-formatting (such tool bugs do occur).
The net effect, overall, is the desired one: the files at the head of the branch are formatted according to company policy.
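A minimal sketch of such a follow-on re-format pass, assuming the p4 CLI is logged in and Python's stdlib pretty-printer stands in for the real formatting policy (the depot path and local glob are placeholders, and detecting new submits is left out):

    import glob
    import subprocess
    import xml.dom.minidom

    DEPOT_XML = "//depot/project/....xml"        # hypothetical depot path
    LOCAL_XML = "C:/workspace/project/**/*.xml"  # its client-side twin

    # Pick up the newly submitted files and open them for edit.
    subprocess.run(["p4", "sync", DEPOT_XML], check=True)
    subprocess.run(["p4", "edit", DEPOT_XML], check=True)

    # Re-format every XML file according to the (placeholder) policy.
    for path in glob.glob(LOCAL_XML, recursive=True):
        pretty = xml.dom.minidom.parse(path).toprettyxml(indent="  ")
        with open(path, "w", encoding="utf-8") as f:
            f.write(pretty)

    # "revert -a" drops files the tool didn't actually change, so compliant
    # submissions produce no second change; then submit whatever is left.
    subprocess.run(["p4", "revert", "-a"], check=True)
    subprocess.run(["p4", "submit", "-d", "Re-format XML to policy"], check=False)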
A Perforce trigger is what you need.
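For reference, triggers are registered via the p4 triggers table; a hypothetical entry that runs a script after each submit of XML files completes (the change-commit type fires post-submit, so the script may then open and fix the files as described in the other answer):

    xml-format change-commit //depot/....xml "python /path/to/reformat.py %change%"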
I know about templates. I know this has been asked before (here and here). Please let me explain my situation in detail and hopefully you'll understand why I'm asking this question again.
I use an IDE (and language) called PowerBuilder. PowerBuilder (PB) stores source code and binary object code together in a PBL (pibble) file. A PBL can contain source code for multiple classes. Because of this, it's not really practical to keep the PBL under version control; it's each individual class that should be revisioned independently. However, since it's the PBL file itself that the IDE uses, and because of the presence of the object code within the PBL, these files need to be pushed out when a repository is cloned. I can go into more detail on this if requested.
The PB IDE provides hooks for the MSSCCAPI interface so that it can support source code control providers. It works great with Visual Source Safe 6! But there are no usable MSSCCAPI providers for Mercurial. (I've asked before.) Yes, I'm trying to get the people that create PB to support an updated API, but there's no telling how long that will take. The IDE does, however, offer its own, basic, SCC functions. It's not a "real" solution; it's more of a "this will get you by, you cheap b*****d, until you can buy a real SCC program" type of thing. It works, though, by exporting the source for each class into individual text files and creating a corresponding "status" file (PRP file) for each class. Text files? Those can be tracked by Mercurial! FYI, this basic, "get you by" SCC option doesn't keep history or handle merges.
Let me detail a little more about these PRP files. PB's built-in SCC solution is built around exclusive locks. Check-out, check-in, all that old stuff. It manages these check-outs and check-ins via the PRP files. It also knows what is and what isn't under revision control by the presence of the corresponding PRP file.
So first with the PRP files. I need to have these pushed out (and added for new classes) so that the IDE can see that the corresponding class should be tracked. If there's no PRP file, the IDE doesn't export the syntax and nothing can get tracked in Mercurial. But if I continue to track changes to the PRP files, then that means that I'm pushing out the exclusive locks on the classes as well, and nobody wants that. So I need to add the PRP files but not track any subsequent changes to them.
And I need the same for the binary PBL files. As mentioned before, I need them to exist so that the IDE knows what PBLs make up a code base, but the complexities of the object code, compilation, and class inter-dependencies mean that it's not feasible to recreate them on the fly. So I need the PBLs added to Mercurial, but I don't really want to track the changes to those PBLs. And though I might be able to get by with templates for the PRP files, I can't do that for these binary PBL files.
Hopefully, that explains my situation fully. I apologize that this question is so long, but I wanted to make sure that you had a clear understanding of what I was up against so that I didn't get a bunch of off-the-cuff "This is a duplicate of X" responses. Thank you for your patience and for any guidance you can offer.
Even though I can't fully understand this:
if I continue to track changes to the PRP files, then that means that I'm pushing out the exclusive locks on the classes as well, and nobody wants that. So I need to add the PRP files but not track any subsequent changes to them.
namely "...nobody wants that..." and "...add the PRP files but not track any subsequent changes to them...": if you don't version-control sources that change, I can't see the reason to add files to Mercurial that will be outdated after their first change.
You can add and store files in Mercurial, and ignore them later. This answer plays the game nicely here, with one small change: because you want to .hgignore the full working copy (do you really?), you can use hg up -r N.
Alternative solutions
SourceControl integration for PB 11.5 - TortoiseSVN (SVN)
WizSource - SCM on top of RDBMS
PushOk Git or SVN SCC plug-ins - Git or SVN respectively
I need to keep some large files (a few gigs each) under version control.
I don't need to, and can't, keep every version of these files.
I want to be able to remove old versions of the large files from my VCS at some point.
The files that I want to keep under version control are big .zip files or ISO images.
These files may contain executable software or data (seismic data, SAR images, GNSS data), and they are provided by my company's software supplier.
What version control system could I use?
In CVS you can do that by removing the files from the repo. Subversion allows it by dumping the repo's contents and filtering the dump to remove the files (which is a bit cumbersome). Perforce has an obliterate command for that. Many of the newer distributed VCSs make it rather difficult through their use of hashes all over the place, and the fact that your repo may have been replicated elsewhere complicates things further. Hg has a strip command (part of the mq extension), and I think Git can do it too.
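For concreteness, the commands in question look roughly like this (paths and revision numbers are placeholders):

    svnadmin dump /repo | svndumpfilter exclude /big-files > filtered.dump
    hg strip 42    # Mercurial, with the mq extension enabled
    git filter-branch --index-filter 'git rm --cached --ignore-unmatch big.iso' HEAD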
I don't think there's any version control system that allows you to do that regularly, because it goes against everything version control systems stand for.
Perforce generally allows files to be stored in two ways: head revision only (so you'd only ever have one copy) or all revisions. Perforce also has the admin-level obliterate command that can be used to delete revisions. It's up to you to query for a list of files, possibly by date or number of revisions, and to pass those revisions to the obliterate command. As the name suggests, obliterate deletes the revisions permanently from the database, so I always generate scripts to do this and review them before running them. If the obliterate command is NOT run with the -y flag, it just generates a list of what would be obliterated, which is also very useful.
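A hypothetical run of that review-then-execute workflow:

    p4 obliterate //depot/isos/old.iso#1,#3      # preview only: lists what would go
    p4 obliterate -y //depot/isos/old.iso#1,#3   # actually purges revisions 1 through 3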
Somehow I get the impression that you should not use a version control system at all. As said before, what you're trying to do goes against everything you would need a version control system for in the first place.
I suggest you create a file system directory structure that makes sense for what you're trying to accomplish, so that you can structure your data, and just make backups of those files.
TFS has a destroy command that you can use to permanently delete files or revisions as you see fit.
There is more information at this MSDN article.
Many version control systems can be configured to store only the differences between versions of a file, saving space that way.
For example, if you have a 1 GB file committed, change a part of it, and commit it again, only the changed part will be stored in the version control system.
There won't be 2 GB used (initial plus new file) but only 1 GB plus the size of the changes.
There's just one downside: if you're storing files whose whole content changes from revision to revision, this can be counter-productive, as the changes take almost the same space as the original version. Archive files are an example of such files, where a small change in the (real) content can lead to a completely different archive file.
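A quick way to see this effect, as a sketch using only Python's stdlib: compress two texts that differ in a single byte and count how many byte positions still match; with DEFLATE the compressed streams typically diverge almost completely after the change.

    import zlib

    # Two versions of a "file" differing in a single byte near the start.
    v1 = b"A" + b"seismic sample data " * 5000
    v2 = b"B" + b"seismic sample data " * 5000

    c1, c2 = zlib.compress(v1), zlib.compress(v2)

    # Count byte positions that are still identical after compression.
    matching = sum(x == y for x, y in zip(c1, c2))
    print(f"compressed sizes: {len(c1)} and {len(c2)}; matching bytes: {matching}")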
I'd suggest testing several version control systems on your own, against your specific needs and environment, and monitoring on the server side how each system's storage requirements change.
Some distributed version control systems allow you to create "checkpoints", which act as a kind of base revision and save you from pulling all the history before the checkpoint on every checkout. So you can remove the big files, create a checkpoint, and checkout/clone the repository from that checkpoint into a new directory. You then have a new, small repository, but without the history from before the checkpoint. If you don't need that history, you can burn the old repository to CD and use the new, partial one from now on.
I've only tested it in darcs, and there it works, but YMMV depending on version control system and use cases.
It sounds to me like you need an intelligent backup system, rather than version control.
I use SyncBackSE; it allows you to keep a number of previous versions, and can also do things like "ignore all files changed more than 30 days ago".
It's one of the few bits of paid-for software I use. I think it's worth checking out.
I think you're talking about something like AlienBrain's "bucket" system, aren't you? That is, the ability to remove some revisions from version control.
If you want to destroy an item, it's normally called "obliterate" and it's supported by a number of systems out there.
Buckets, AFAIK are supported by:
AlienBrain
Accurev
PlasticSCM
I would save such files under a unique name (datestamped, perhaps), and perhaps additionally make a textual reference to the external file in the version control system.
Fossil allows you to do this via the "shun" mechanism. Fossil being a distributed SCM, however, means that this does not affect all repositories (for obvious reasons).