Create a copy of an old revision of a file using Dropbox API

I’m trying to use the Dropbox API to create a copy of an older revision of a file.
The use case is that the user browses his files, picks one and lists its revisions, picks one of the revisions and copies it to some folder in his account.
However, as far as I can tell, the copy method in the API only accepts a source path and there’s no way to specify a revision. I tried passing a revision in the path as is supported in download, but that fails.
Therefore, I could only think of the following workarounds:
Use restore to roll back the source file to the desired revision, copy it to the destination folder, and restore it back to the original revision
Download the desired revision and upload it to the destination folder
I don’t like solution #1 because if something goes wrong during the process, the user may end up with an older revision of the original file. Solution #2 does a redundant download and upload.
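For what it's worth, workaround #2 would look roughly like this with the official Dropbox Python SDK (a minimal sketch; the access token, rev value, and paths below are placeholders, and I'm assuming the v2 SDK, where a "rev:" prefixed path selects a revision for download):

import dropbox
dbx = dropbox.Dropbox("ACCESS_TOKEN")  # placeholder token
# Download the desired revision of the source file...
metadata, response = dbx.files_download("rev:a1c10ce0dd78")
# ...and re-upload its contents under the destination path.
dbx.files_upload(response.content, "/destination-folder/old-revision-copy.docx")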
Can anyone suggest a better way to implement this functionality?

Related

Latest raw version of a file on GitHub

On GitHub there is a raw button that allows you to download the raw contents of a file.
The raw URL is for that particular version of the file.
Is it possible to get a raw URL that always downloads the latest version of a file?
It's possible to specify a branch name instead of an object ID in the URL. For example, to always get the version of Git's README that's on the master branch, you can use https://raw.githubusercontent.com/git/git/master/README.md.
However, be aware that, like all accesses to raw.githubusercontent.com, there is caching involved and it can't be disabled, so while the content behind the URL will eventually be updated, it may not happen immediately. If you need the absolute latest version of a file at all times, or if you need that file for an automated process (such as another website or a program), then you need to host it yourself.
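For example, a tiny Python sketch (standard library only) that fetches whatever the branch-based URL above currently serves, subject to the caching caveat just mentioned:

from urllib.request import urlopen
url = "https://raw.githubusercontent.com/git/git/master/README.md"
with urlopen(url) as response:
    # Whatever raw.githubusercontent.com currently serves for the master branch.
    readme = response.read().decode("utf-8")
print(readme.splitlines()[0])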

How can I do a stream copy/merge of only a subdirectory in Perforce?

I'm using P4V. I work in a subdirectory (e.g. code/jorge) and other people work in another subdirectory (e.g. art/) that I never deal with. Additionally, I have a stream where I do my personal work. Every so often I need to merge changes from the main line to my stream, and copy them back up. However, the files in art/ are large binaries and Perforce spends a long time thinking about them even though I've not touched them. Is there any way to have Perforce merge/copy my directory (code/jorge) without it spending time trying to merge art/? Can I tell P4V to merge/copy only the code directory?
Related but not identical question: Perforce streams, exclude files from merge/copy
If you don't touch those files, it might be easier to not include them in your stream at all rather than manually exclude them every time you do a merge.
I.e. if your stream Paths currently says:
share ...
maybe it should instead be:
share code/jorge/...
or, if you need the art for builds but never need to modify it, you might consider doing something like:
import art/...
share code/...
I am not sure this is the recommended option, but you can actually merge without using the "Stream to Stream" option, using instead the standard "Specify source and target file" option, even if you are in a stream depot.
So you can select any subdirectory as your source, like "dev/code/jorge", and the same subdirectory as destination, like "main/code/jorge", and it will only consider that directory. We do it routinely in my team because we have a big mono repo and have not taken the time to set up multiple depots when we migrated to Perforce.
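For reference, a rough command-line equivalent of that P4V workflow, assuming (as a placeholder layout) that the streams live at //depot/main and //depot/dev:

p4 integrate //depot/main/code/jorge/... //depot/dev/code/jorge/...
p4 resolve -am
p4 submit -d "Merge code/jorge changes from main"

A similar integrate from //depot/dev back to //depot/main handles the copy-up, again touching only that subtree.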

How to check out file in Git?

I just started using github.com and my friend and I are working on a project. How can I pull parts of the project but check out certain files I'm working on so that he doesn't work on them? He could still download the files, but he wouldn't be able to open or edit them until I upload them back and give permission.
I suppose you mean lock a file when you edit it. Git won't let you do this and it's not something you need to worry about. Instead, you can both work on the same file and then merge your changes later.
If you really want to work that way (i.e: lock files, or at least control when your friend will modify your repo), you can ask your friend to fork your repo.
That way, he/she:
will have his/her own copy of said repo
will work on any file
will rebase first with branches fetched from your repo (added as a remote on his/her fork, as described in GitHub: working with remotes); the commands are sketched after this list
will make pull requests, allowing you to decide what to include and when.
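Roughly, on your friend's fork that would look like this (the upstream URL and branch name are placeholders):

git remote add upstream https://github.com/you/project.git
git fetch upstream
git rebase upstream/master

and then he/she pushes the rebased branch to his/her fork and opens the pull request on GitHub.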
Historically, version control systems provide a checkin/checkout feature. When you do a checkout, you reserve the artifact. If another person also has the same file checked out, then you get an error when trying to check in the artifact. I'm not sure creating another fork is really the equivalent of that.

Are there version control systems that allow you to permanently delete files?

I need to keep some large files (several gigabytes) under version control.
I don't need, and can't afford, to keep every version of these files.
I want to be able to remove old versions of the large files from my VCS at some point.
The files that I want to keep under version control are big .zip files or ISO images.
These files may contain executable software or data (seismic data, SAR images, GNSS data) and they are provided by my company's software supplier.
Which version control system could I use?
In CVS you can do that by removing the files from the repo. Subversion allows it by dumping the contents of the repo and filtering the dump to remove the files (which is a bit cumbersome). Perforce has an obliterate command for that. Many of the newer distributed VCSs make it rather difficult through their use of hashes all over the place, and the fact that your repo may have been replicated elsewhere also complicates things. Hg has a strip command (part of the MQ extension), and I think Git can do it as well.
I don’t think there’s any version control system that lets you do that routinely, because it goes against everything version control systems stand for.
Perforce generally allows files to be stored in two ways: as the head revision only (so you'd only ever have one copy) or with all revisions. Perforce does have the admin-level obliterate command that can be used to delete revisions. It's up to you to query for a list of files, possibly by date or number of revisions, and to specify the revisions to the obliterate command. As the name suggests, obliterate deletes the revisions permanently from the database, so I always generate scripts to do this and review them before running them. If the obliterate command is NOT run with the -y flag, it only generates a list of what would be obliterated, which is also very useful.
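For example, with a placeholder depot path and revision range, the preview/execute pair looks like this:

Preview only (no -y): report what would be removed, change nothing.
p4 obliterate //depot/isos/old-build.iso#1,#3
Actually purge those revisions from the database:
p4 obliterate -y //depot/isos/old-build.iso#1,#3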
Somehow I get the impression that you should not use a version control system at all. As said before, what you're trying to do goes against everything you would need a version control system for in the first place.
I suggest you create a file system directory structure that makes sense for what you're trying to accomplish and that lets you structure your data. And just make backups of those files.
TFS has a destroy command that you can use to permanently delete files or revisions as you see fit.
There is more information at this MSDN article.
Many version control systems can be configured so that they store only the differences between versions of a file, and they save space that way.
For example if you have a 1Gig file committed, change a part of it and commit it again, only the changed part will be stored in the version control system.
There won't be 2Gigs used (initial and new file) but only 1Gig+sizeOfChanges.
There's just one downside: if you're storing files whose whole content changes from revision to revision, this can be counter-productive, as the changes take almost the same space as the original version. Archive files are an example of such files, where only a small change in the (real) content can lead to a completely changed content of the archive file.
I'd suggest testing several version control systems yourself, with your specific needs and environment, and monitoring on the server side how the storage requirements of each system change.
Some distributed version control systems allow you to create "checkpoints" that act as a kind of base revision and save you from pulling all the history before the checkpoint on every checkout. So you can remove the big files, create a checkpoint, and checkout/clone the repository from that checkpoint into a new directory. Then you have a new, small repository there, but without the history before the checkpoint. If you don't need that history you can burn the old repository to CD and use the new, partial one from now on.
I've only tested it in darcs, and there it works, but YMMV depending on version control system and use cases.
It sounds to me like you need an intelligent backup system, rather than version control.
I use SyncBackSE; it allows you to keep a number of previous versions, and can also do things like "ignore all files changed more than 30 days ago".
It's one of the few bits of paid-for software I use. I think it's worth checking out.
I think you're talking about something like AlienBrain's "bucket" system, aren't you? The ability to remove some revisions from version control.
If you want to destroy an item, it's normally called "obliterate" and it's supported by a number of systems out there.
Buckets, AFAIK are supported by:
AlienBrain
Accurev
PlasticSCM
I would save such files under a unique name (datestamped, perhaps), and perhaps additionally make a textual reference to the external file in the version control system.
Fossil allows you to do this via the "shun" mechanism. Fossil being a distributed SCM, however, means that this does not affect all repositories (for obvious reasons).

Can I safely edit a renamed file in perforce

I have a file I need to move that's already under perforce. Once moved it needs some editing - update the package, etc - appropriate to its new location. Should I submit the move changespec and then reopen it for edit, or can I do this in one go? If so, what is the appropriate sequence of events?
I have done this before in one go, but depending on your build process, I recommend against it. What I generally do is this:
Move the file.
If the move needs a change in order to compile, open it for edit and make those changes.
Submit the changes, telling Perforce to reopen the files for editing.
Make the changes for path, etc., that don't cause compile errors but should be updated.
Submit those changes with an appropriate description.
If you want to, however, you could just do all your changes in step (2) above. Perforce might change the flag for the new file from integrate to add, but it still remembers the source path for the file.
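From the command line, that first flow looks roughly like this (paths and descriptions are placeholders; p4 move needs the file opened for edit first, and submit -r reopens the moved file afterwards):

p4 edit //depot/proj/old/pkg/Foo.java
p4 move //depot/proj/old/pkg/Foo.java //depot/proj/new/pkg/Foo.java
p4 submit -r -d "Move Foo.java to its new package"
(the file is still open for edit thanks to -r, so fix the package declaration now)
p4 submit -d "Update package declaration for the new location"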
Edit: Better method
I realized that I often use a different method, but the idea of "moving" the file distracted me. So, I would recommend these steps instead:
Integrate the file into the new path/name, leaving the previous file there. I am assuming that this won't break your build process.
Submit the new file, checking it out again for edit after submission.
Make the required changes to the new file, and to the project so that you are using the new file.
Submit the edits for the new file.
[Optional] You might need to check through branch specs to see if you need to map the old file into the new one in any branches.
Create a changelist for deleting the old file, and submit it sometime later.
This method allows the edits to be cleanly separated from the rename/move, while never leaving the project in a state that won't compile.
Also, why wait for step 6? Sometimes, especially on bigger projects, you might want to move a file that another person is editing. Perforce will helpfully tell you this. By waiting to delete the file, you allow your coworker(s) to finish the edits and submit without needing to move their work manually. After the edits are submitted, they can be integrated into the new file, and then the old one can be safely deleted.
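A rough CLI sketch of that integrate-then-delete sequence, again with placeholder paths:

p4 integrate //depot/proj/old/pkg/Foo.java //depot/proj/new/pkg/Foo.java
p4 submit -d "Branch Foo.java to its new location"
p4 edit //depot/proj/new/pkg/Foo.java
(update the package declaration and point the project at the new file, then submit)
p4 submit -d "Adapt Foo.java to its new location"
(later, once nobody still has the old file open)
p4 delete //depot/proj/old/pkg/Foo.java
p4 submit -d "Remove the old copy of Foo.java"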
Submit the move change and then reopen for edit (you could use the reopen option too).
This is much more readable to the user in the change history.
Also, recent versions of Perforce check for changes to files after resolution, so there may be complaints about files edited after some resolve operations have completed.
I would say always submit first, then edit. It is much cleaner and makes it more obvious what's happening in your repository. Then simply check out the file in the new location and make whatever changes are needed. This also makes it much more obvious that the changes were made in the new location to allow it to work after the rename.
Yes you can. Simply reopen for edit the branched file (i.e. the new one). In P4Win, there is a context menu for this ("re-open for edit").
"Safely" is probably an important point here. Once you rename or move the file it'll get a revision number of "1" which looks like a new file to your Perforce client. Of course, admins will be able to get its prior history, but if the editing/version history of the file is important to you it's a little harder to get the older revision.
Update: Thanks to Commodore Jaeger and Greg Whitfield for enlightening comments.
The One True Answer wasn't easy to track down, even from Perforce support, so I figured I'd update everyone on what we found:
Perforce stores all versions of every document in its database.
If it's saving your file as type <text> or <ktext> then it stores the diffs between one file version and the next, not the entire file.
If you check out a file, make no changes to it, and then re-submit, it will save as a new version with 0 diffs. This is configurable and P4 can be set up to ignore changelist items without any actual diffs. You can force this behavior by selecting "Revert unchanged files..." before you submit a changelist.
Use "Rename/Move..." to move files in P4 so it can track them. Don't copy them using Windows Explorer and then re-add them in P4.
If you use the "Rename/Move..." function from the context menu, the "new" file will show a revision number of "1" as though it were a new file.
However, since P4 saves every function performed on a file, you can actually get to any previous revision (and even recover "deleted" files) with the CLI command p4 filelog -i
If you want to get to the revision history of a moved or renamed file and you're not an admin, you can right-click and select its "Revision Graph" which shows every version of a file even when moved between branches.
According to Perforce support, easier tracking of revision history through branch or folder moves is an oft-requested feature and is in their current roadmap.
Perforce's answer: At the moment, there isn't a way to move/rename/integrate files and still maintain the exact file history.
However, if you were to choose "Integrate..." by right-clicking on the folder that you want to share, the versions of the files of the newly branched folder and underlying files will start from revision #1, but the integration history between the branched folder and underlying files and the original folder and underlying files will remain through which you can trace the revision history of the files.