Automatic `p4 resolve` after `p4 sync` in p4v? - triggers

I would like to set up some kind of userspace trigger for my local p4v that automatically calls p4 resolve -as (safe resolve) after it calls p4 sync when run from the "Get Latest Revision" or "Get Revision..." menu item.
Is there a way to do this? I am losing a lot of time resolving trivial changes or even checking whether there is an unmerged file, and I consider p4 resolve -as to be always safe on this project.

You could write a script that runs p4 sync followed by p4 resolve -as, and then install it as a P4V custom tool. Custom tools can be passed selected files or folders as input arguments, so your script would know which files to operate on. Custom tools are available from context menus.
Perhaps not as convenient as a client-side trigger (which Perforce does not have), but it does give you the option of choosing a regular sync or a sync+resolve.
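A minimal sketch of such a wrapper script, assuming a POSIX shell and that P4V passes the selected paths in via %F (the script name and location are up to you):

#!/bin/sh
# sync_and_resolve.sh -- hypothetical wrapper for a P4V custom tool
# P4V's %F argument expands to the selected files/folders and arrives here as "$@"
# (for folder selections you may need to append /... to each path)
p4 sync "$@"          # get the latest revisions of the selected paths
p4 resolve -as "$@"   # then accept only the "safe" automatic merges

In P4V, point the custom tool's Application field at this script, pass %F as the argument, and tick "Add to applicable context menus" so it appears on right-click next to the normal sync items.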

Related

What is the svn equivalent to winkin in clearcase?

I am new to svn; however, I am currently migrating some Perl scripts from ClearCase. I know that ClearCase has dynamic views, so it can access one or more derived objects (DOs) from a dynamic view, or convert a nonshareable derived object into a shareable (promoted) derived object via the cleartool command winkin. How do I replace this with an equivalent svn command, given that svn working copies are static?
You don't: the notion of a derived object is very specific to ClearCase dynamic views. cleartool winkin accesses the data of an existing DO, and DOs simply do not exist in Subversion.
That is similar to "Is there a git equivalent to cleartool catcr": the build tool that compiles the sources (gcc, for instance) might have some of that information, but the source control tool itself (Subversion) won't have any.
1) I had to get rid of the omake way of calling the makefile (I have a makefile.bat for that instead)
This has nothing to do with SVN or git.
A third-party build tool (OpenMake, Gradle, or Bazel) would be needed here.
2) The cleartool commands like winkin, endview should be changed so that this becomes svn compatible.
There is no notion of dynamic view in any other tool but ClearCase.
3) These changes will affect the *.LOG file that gets generated when the *.BAT runs successfully. I need a list of files (URL and revision number) to be stored in that *.LOG file, so I need to change the cleartool describe command.
That depends on the language and build mechanism you choose, not on SVN.
With a modern language like Go (golang), you would no longer be concerned with the list of files built: only the ones with changes would be recompiled.

How do I delete files marked for add from my local disk?

I'm currently using Perforce with P4V (Rev. Perforce Visual Client/MACOSX105X86_64/2012.1/490402) in Unity 3.5.5.
When you revert files marked for add, they are only removed from the changelist. Sometimes files are auto-generated, or I created files that I no longer want to add; I want to remove them from the changelist and delete the local copies.
This also occurs when shelving files marked for add. I'm currently manually reverting and deleting each file. Is there a way to easily delete the local copies?
We haven't implemented it in P4V yet, but if you are on at least a 2013.2 server, you can use 'p4 revert -w' to do this in one step. If you follow the custom tool route described in the other answer, the command is very simple since it's a one-step process. You can just have the application be p4.exe (or the p4 binary if you're on Linux or Mac) and the arguments be 'revert -w %F'.
There may not be an ability built into p4v, but you can call your own script from p4v, in which the script calls a revert and delete on the file.
Go to Tools->Manage Custom Tools..., and select New->Tool.
You'll get a dialog for configuring the new tool.
You would put your user script in the Application. For the Arguments, you can click the Select... button to choose the different variables. Here, %F refers to the file you right-clicked on (can be multiple).
I haven't actually written an entire script to test this, but I verified that you can at least right-click a file and this script is available (provided you check Add to applicable context menus).
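For the script itself, a minimal sketch (assuming a POSIX shell, and that the selected files arrive as arguments via %F; this is only safe for files opened for add, since reverting an add leaves the local file behind):

#!/bin/sh
# revert_and_delete.sh -- hypothetical helper for a P4V custom tool
for f in "$@"; do
    p4 revert "$f"    # take the file out of the changelist
    rm -f -- "$f"     # then delete the leftover local copy
done

On a 2013.2 or newer server, 'p4 revert -w' does both steps at once, as noted in the other answer.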
You can do this from the "Reconcile offline work" tool.
Right-click on the pending-add, "Delete local file":
(p4v 2015 February 18 1007540)
This is a little dangerous, as you'll need to ensure you don't have anything on your client that you want to keep that isn't also in the P4 repository. Once you verify that you want to delete all non-p4-versioned files...
Right-click the containing directory or depot, and choose "reconcile offline work".
This opens a dialog box. You'll see a section called "Local files not in depot", which are the files that were presumably auto-generated that you want to delete.
Select all the files in "Local files not in depot" (ctrl+A), right-click on them, and choose "Delete Local File".
Answer yes, you're sure you want to delete the file(s).
This should remove any files from your p4 client (under the directory you choose) that aren't under p4 version control.

How to force a directory to stay in exact sync with subversion server

I have a directory structure containing a bunch of config files for an application. The structure is maintained in Subversion, and a few systems have that directory structure checked out. Developers make changes to the structure in the repository, and a script on the servers just runs an "svn update" periodically.
However, sometimes we have people who will inadvertently remove a .svn directory under one of the directories, or stick a file in that doesn't belong. I do what I can to cut off the hands of the procedural unfaithful, but I'd still prefer for my update script to be able to gracefully (well, automatically) handle these changes.
So, what I need is a way to delete files which are not in subversion, and a way to go ahead and stomp on a local directory which is in the way of something in the repository. So, warnings like
Fetching external item into '/path/to/a/dir'
svn: warning: '/path/to/a/dir' is not a working copy
and
Fetching external item into '/path/to/another/dir'
svn: warning: Failed to add directory '/path/to/another/dir': an unversioned directory of the same name already exists
should be automatically resolved.
I'm concerned that I'll have to either parse the svn status output in a script, or use the svn C API and write my own "cleanup" program to make this work (and yes, it has to work this way; rsync / tar+scp, and whatever else aren't options for a variety of reasons). But if anyone has a solution (or partial solution) which takes care of the issue, I'd appreciate hearing about it. :)
How about
rm -rf $project
svn checkout svn+ssh://server/usr/local/svn/repos/$project
I wrote a Perl script that first runs svn cleanup to handle any locks, and then parses the --xml output of svn status, removing anything which has a bad status (except for externals, which are a little more complicated).
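For simple layouts, a rough shell equivalent of that approach looks like this (plain-text parsing of svn status rather than --xml, so treat it as a sketch: it ignores externals and breaks on paths containing spaces):

svn cleanup
svn status --no-ignore | awk '/^[?I]/ {print $2}' | xargs -r rm -rf
svn update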
Then I found this:
http://svn.apache.org/repos/asf/subversion/trunk/contrib/client-side/svn-clean
Even though this doesn't do everything I want, I'll probably discard the bulk of my code and just enhance this a little. My XML parsing is not as pretty as it could be, and I'm sure this is somewhat faster than launching a system command (which matters on a very large repository and a command which is run every five minutes).
I ultimately found that script in the answer to this question - Automatically remove Subversion unversioned files - hidden among all the suggestions to use Tortoise SVN.

How to clear TFS server knowledge of my local version?

Our build person was having issues compiling some source code that is checked into our TFS instance.
I was working on some changes that I was not ready to check in, so I made a manual backup of my local folder and deleted its contents. Then I did a "Get Latest - Specific Version, with overwrite" to ensure I got the latest, and made sure it compiled (it did; the issue was a setup problem on the build machine).
So now, if I manually rename folders locally to go back to my version, I have the problem that TFS thinks I have all the latest source ... which I don't. Files were changed by another developer, but since I did a "Get Latest - Specific Version, with overwrite", it considers my code to be completely up to date.
Questions:
Can I somehow 'tell' TFS that my local versions are not the latest?
(I'm thinking I might need to do this with a TFS command-line utility, but I'm not sure which one.)
Was there a different way I should have done this?
Thanks.
You could delete/remove your local workspace.
Source Control Explorer -> Workspace dropdown -> Workspaces -> Remove
If you do a Get Specific Version to changeset 1 of your source code, TFS will delete the local files and record that you no longer have the latest code in your workspace. Then, when you do a Get Latest, it will actually get the latest.
In future, instead of making a manual copy, create a shelveset. In the "pending changes" window, click "Shelve" and follow the dialogue (in this case you'd not want to keep your pending changes locally). This puts your work on the server in a secure, recoverable place, but without checking it in.
Alternatively, in the workspace dropdown, you can create a second workspace. That gives you two separate copies of the code locally, but also two separate sets of checkouts. This is really useful if you often find yourself interrupting one piece of work to look at something else.
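If you prefer the command line, roughly equivalent commands exist in tf.exe; the shelveset and workspace names below are invented, and the exact option spellings should be checked against your TFS version:

tf shelve MyBackupShelveset /comment:"work in progress before get-latest"
tf undo /recursive .
tf workspace /new MySecondWorkspace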
If you do another "get specific" with overwrite, this should still fix your problem.
Do you know which files are changed? Are we talking a lot of files? Or just a few?
If it is just a few, then you should just copy your changed versions back in and then re-check out the files. TFS will then register that you have changed those files.
If you have a lot of changed files then I recommend you give the Team Foundation Power Tools (tfpt) Online "Command Line" command a try.
The Command Line Help can be seen here.
Here some more info from Buck Hodges:
Online
With Team Foundation, a server connection is necessary to check files in or out, to delete files, to rename files, etc. The TFPT online tool makes it easier to work without a server connection for a period of time by providing functionality that informs the server about changes made in the local workspace.
Non-checked-out files in the local workspace are by default read-only; the user is expected to check out the file with the tf checkout command before editing it.
When working offline with the intent to sync up later by using the TFPT online tool, users must adhere to a strict workflow:
* Users without a server connection manually remove the read-only flag from files they want to edit. Non-checked-out files in the local workspace are by default read-only, and when a server connection is available the user must check out the file with the tf checkout command before editing the file. When working offline, the DOS command "attrib -r" should be used.
* Users without a server connection add and delete files they want to add and delete. If not checked out, files selected for deletion will be read-only and must be marked as writable with "attrib -r" before deleting. Files which are added are new and will not be read-only.
* Users must not rename files while offline, as the TFPT online tool cannot distinguish a rename from a deletion at the old name paired with an add at the new name.
* When connectivity is re-acquired, users run the TFPT online tool, which scans the directory structure and detects which files have been added, edited, and deleted. The TFPT online tool pends changes on these files to inform the server what has happened.
To invoke the TFPT online tool, execute
tfpt online
at the command line. The online tool will begin to scan your workspace for writable files and will determine what changes should be pended on the server.
By default, the TFPT online tool does not detect deleted files in your local workspace, because to detect deleted files the tool must transfer significantly more data from the server. To enable the detection of deleted files, pass the /deletes command line option.
When the online tool has determined what changes to pend, the Online window is displayed.
Individual changes may be deselected here if they are not desired. When the Pend Changes button is pressed, the changes are actually pended in the workspace.
Important Note: If a file is edited while offline (by marking the file writable and editing it), and the TFPT online tool pends an edit change on it, a subsequent undo will result in the changes to the file being lost. It is therefore not a good idea to try pending a set of changes to go online, decide to discard them (by doing an undo), and then try again, as the changes will be lost in the undo. Instead, make liberal use of the /preview command line option (see below), and pend changes only once.
Preview Mode
The Online window displayed above is a graphical preview of the changes that will be pended to bring the workspace online, but a command-line version of this functionality is also available. By passing the /preview and /noprompt options on the command line, a textual representation of the changes that the TFPT online tool thinks should be pended can be displayed.
tfpt online /noprompt /preview
Inclusions
The TFPT online tool by default operates on every file in the workspace. Its focus can be more directed (and its speed improved) by including only certain files and folders in the set of items to inspect for changes. Filespecs (such as *.c, or folder/subfolder) may be passed on the command line to limit the scope of the operation, as in the following example:
tfpt online *.c folder\subfolder
This command instructs the online tool to process all files with the .c extension in the current folder, as well as all files in the folder\subfolder folder. No recursion is specified. With the /r (or /recursive) option, all files matching *.c in the current folder and below, as well as all files in the folder\subfolder folder and below will be checked. To process only the current folder and below, use
tfpt online . /r
Exclusions
Many build systems create log files and/or object files in the same directory as source code which is checked in. It may become necessary to filter out these files to prevent changes from being pended on them. This can be achieved through the /exclude:filespec1,filespec2,… option.
With the /exclude option, certain filemasks may be filtered out, and any directory name specified will not be entered by the TFPT online tool. For example, there may be a need to filter out log files and any files in object directories named “obj”.
tfpt online /exclude:*.log,obj
This will skip any file matching *.log, and any file or directory named obj.
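Putting the pieces above together, an offline session might look roughly like this (paths and masks are just examples, and the option combinations should be double-checked against your tfpt version):

REM make the files you want to edit writable while offline
attrib -r src\module\*.cs
REM ... edit the files ...
REM once reconnected, preview what tfpt thinks changed
tfpt online src /r /deletes /exclude:*.log,obj /preview /noprompt
REM then pend the detected adds, edits and deletes for real
tfpt online src /r /deletes /exclude:*.log,obj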
I'm using a hack: open the solution without a network connection (unplug the cable, turn off Wi-Fi) and the solution will open in offline mode.
There is also a plugin called "Go Offline" for that.
Then you click on "Go Online", which is shown automatically when the solution is offline.
After this, VS will check all your local files against TFS and automatically check out the files that were changed.
But for your case, I would also suggest using shelvesets.
In TFS 2013+ and VS 2015+ you have the Cloak option, which deletes local files and prevents those branches from being downloaded to your local workspace (basically, it unmaps specific branches).

How to actually use a source control system?

So I get that most of you are frowning at me for not currently using any source control. I want to, I really do, now that I've spent some time reading the questions / answers here. I am a hobby programmer and really don't do much more than tinker, but I've been bitten a couple of times now not having the 'time machine' handy...
I still have to decide which product I'll go with, but that's not relevant to this question.
I'm really struggling with the flow of files under source control, so much so I'm not even sure how to pose the question sensibly.
Currently I have a directory hierarchy where all my PHP files live in a Linux Environment. I edit them there and can hit refresh on my browser to see what happens.
As I understand it, my files now live in a different place. When I want to edit, I check it out and edit away. But what is my substitute for F5? How do I test it? Do I have to check it back in, then hit F5? I admit to a good bit of trial and error in my work. I suspect I'm going to get tired of checking in and out real quick for the frequent small changes I tend to make. I have to be missing something, right?
Can anyone step me through where everything lives and how I test along the way, while keeping true to the goal of having a 'time machine' handy?
Eric Sink has a great series of posts on source control basics. His company (Sourcegear) makes a source control tool called Vault, but the how-to is generally pretty system agnostic.
Don't edit your code on production.
Create a development environment, with the appropriate services (apache w/mod_php).
The application directory within your dev environment is where you do your work.
Put your current production app in there.
Commit this directory to the source control tool. (now you have populated source control with your application)
Make changes in your new development environment, hitting F5 when you want to see/test what you've changed.
Merge/Commit your changes to source control.
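With Subversion, for example, the import-and-check-out part of those steps might look roughly like this (the repository URL and paths are placeholders):

# one-time: put the current production app into the repository
svn import /var/www/dev/myapp http://svn.example.com/repos/myapp/trunk -m "initial import"
# import does not turn the directory into a working copy, so check one out
mv /var/www/dev/myapp /var/www/dev/myapp.orig
svn checkout http://svn.example.com/repos/myapp/trunk /var/www/dev/myapp
# day to day: edit, hit F5 against the dev server, then record the changes
svn commit -m "describe what changed" /var/www/dev/myapp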
Actually, your files, while stored in a source repository (a big word for another place on your hard drive, or a hard drive somewhere else), can also exist on your local machine, right where they are now.
So, all files that aren't checked out would be marked as "read only", if you are using VSS (not sure about SVN, CVS, etc). So, you could still run your website by hitting "F5" and it will reload the files where they currently are. If you check one out and are editing it, it becomes NOT read only, and you can change it.
Regardless, the web server that you are running will load readonly/writable files with the same effect.
You still have all the files on your hard drive, ready for F5!
The difference is that you can "checkpoint" your files into the repository. Your daily life doesn't have to change at all.
You can do a "checkout" to the same directory where you currently work so that doesn't have to change. Basically your working directory doesn't need to change.
This is a wildly open ended question because how you use a SCM depends heavily on which SCM you choose. A distributed SCM like git works very differently from a centralized one like Subversion.
svn is way easier to digest for the "new user", but git can be a little more powerful and improve your workflow. Subversion also has really great docs and tool support (like trac), and an online book that you should read:
http://svnbook.red-bean.com/
It will cover the basics of source control management which will help you in some way no matter which SCM you ultimately choose, so I recommend skimming the first few chapters.
edit: Let me point out why people are frowning at you, by the way: an SCM is more than simply a "backup of your code". Having a "time machine" is nothing like an SCM. With an SCM you can go back through your change history and see what you actually changed and when, which is something you'll never get with blobs of code. I'm sure you've asked yourself on more than one occasion: "How did this code get here?" or "I thought I fixed that bug" -- if you have, that's why you need an SCM.
You don't "have" to change your workflow in a drastic way. You could, and in some cases you should, but that's not something version control dictates.
You just use the files as you would normally. Only under version control, once you reach a certain state of "finished" or at least "working" (solved an issue in your issue tracker, finished a certain method, tweaked something, etc), you check it in.
If you have more than one developer working on your codebase, be sure to update regularly, so you're always working against a recent (merged) version of the code.
Here is the general workflow that you'd use with a centralized source control system like CVS or Subversion: At first you import your current project into the so-called repository, a versioned storage of all your files. Take care to import only hand-generated files (source, data files, makefiles, project files). Generated files (object files, executables, generated documentation) should not be put into the repository.
Then you have to check out your working copy. As the name implies, this is where you will do all your local edits, where you will compile and where you will point your test server at. It's basically the replacement to where you worked at before. You only need to do these steps once per project (although you could check out multiple working copies, of course.)
This is the basic work cycle: At first you check out all changes made in the repository into your local working copy. When working in a team, this would bring in any changes other team members made since your last check out. Then you do your work. When you've finished with a set of work, you should check out the current version again and resolve possible conflicts due to changes by other team members. (In a disciplined team, this is usually not a problem.) Test, and when everything works as expected you commit (check in) your changes. Then you can continue working, and once you've finished again, check out, resolve conflicts, and check in again. Please note that you should only commit changes that were tested and work. How often you check in is a matter of taste, but a general rule says that you should commit your changes at least once at the end of your day. Personally, I commit my changes much more often than that, basically whenever I made a set of related changes that pass all tests.
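In Subversion terms, that cycle boils down to something like this (run from inside your working copy):

svn update                              # pull in everyone else's changes before you start
# ... edit, build, test ...
svn update                              # merge in anything committed meanwhile; resolve conflicts
svn commit -m "what changed and why"    # check your tested changes in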
Great question. With source control you can still do your "F5" refresh process. But after each edit (or a few minor edits) you want to check your code in so you have a copy backed up.
Depending on the source control system, you don't have to explicitly check out the file each time. Just editing the file will check it out. I've written a visual guide to source control that many people have found useful when grokking the basics.
I would recommend a distributed version control system (mercurial, git, bazaar, darcs) rather than a centralized version control system (cvs, svn). They're much easier to setup and work with.
Try mercurial (which is the VCS that I used to understand how version control works) and then if you like you can even move to git.
There's a really nice introductory tutorial on Mercurial's homepage: Understanding Mercurial. That will introduce you to the basic concepts on VCS and how things work. It's really great. After that I suggest you move on to the Mercurial tutorials: Mercurial tutorial page, which will teach you how to actually use Mercurial. Finally, you have a free ebook that is a really great reference on how to use Mercurial: Distributed Revision Control with Mercurial
If you're feeling more adventurous and want to start off with Git straight away, then this free ebook is a great place to start: Git Magic (Very easy read)
In the end, no matter what VCS tool you choose, what you'll end up doing is the following:
Have a repository that you don't manually edit; it is only for the VCS.
Have a working directory, where you make your changes as usual.
Change what you like, press F5 as many times as you wish. When you like what you've done and think you would like to save the project the way it is at that very moment (much like you would do when you're, for example, writing something in Word) you can then commit your changes to the repository.
If you ever need to go back to a certain state in your project you now have the power to do so.
And that's pretty much it.
If you are using Subversion, you check out your files once. Then, whenever you have made big changes (or are going to lunch, or whatever), you commit them to the server. That way you can keep your old workflow of pressing F5, but every time you commit you save a copy of all the files in their current state in your SVN repository.
Depends on the source control system you use. For example, for Subversion and CVS your files can reside in a remote location, but you always check out your own copy of them locally. This local copy (often referred to as the working copy) is just a set of regular files on the filesystem, with some metadata that lets you upload your changes back to the server.
If you are using Subversion here's a good tutorial.
Depending on the source control system, 'checkout' may mean different things. In the SVN world, it just means retrieving (could be an update, could be a new file) the latest copy from the repository. In the source-safe world, that generally means updating the existing file and locking it. The text below uses the SVN meaning:
Using PHP, what you want to do is check out your entire project/site to a working folder served by a test Apache site. You should have the repository set up so this can happen with a single checkout, including any necessary subfolders. You check out your project like this one time to set it up.
Now you can make your changes and hit F5 to refresh as normal. When you're happy with a set of changes to support a particular fix or feature, you can commit in as a unit (with appropriate comments, of course). This puts the latest version in the repository.
Checking out/committing one file at a time would be a hassle.
A source control system is generally a storage place for your files and their history and usually separate from the files you're currently working on. It depends a bit on the type of version control system but suppose you're using something CVS-like (like subversion), then all your files will live in two (or more) places. You have the files in your local directory, the so called "working copy" and one in the repository, which can be located in another local folder, or on another machine, usually accessed over the network. Usually, after the first import of your files into the repository you check them out under a working folder where you continue working on them. I assume that would be the folder where your PHP files now live.
Now what happens when you've checked out a copy and you made some non-trivial changes that you want to "save"? You simply commit those changes in your working copy to the version control system. Now you have a history of your changes. Should you at any point wish to go back to the version at which you committed those changes, then you can simply revert your working copy to an older revision (the name given to the set of changes that you commit at once).
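With Subversion, for instance, "saving" and later going back might look like this (the revision number is made up):

svn commit -m "non-trivial change I want to keep"   # record the change in the repository
svn update -r 120                                   # wind the working copy back to revision 120
svn update                                          # bring it forward to the latest revision again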
Note that this is all very CVS/SVN-specific, as GIT would work slightly different. I'd recommend starting with subversion and reading the first few chapters of the very excellent SVN Book to get you started.
This is all very subjective, depending on the source control solution that you decide to use. One that you will definitely want to look into is Subversion.
You mentioned that you're doing PHP, but are you doing it in a Linux environment or Windows? It's not really important, but what I typically did when I worked in a PHP environment was to have a production branch and a development branch. This allowed me to configure a cron job (a scheduled task in Windows) for automatically pulling from the production-ready branch for the production server, while pulling from the development branch for my dev server.
Once you decide on a tool, you should really spend some time learning how it works. The concepts of checking in and checking out don't apply to all source control solutions, for example. Either way, I'd highly recommend that you pick one that permits branching. This article goes over a great (in my opinion) source control model to follow in a production environment.
Of course, I state all this having not "tinkered" in years. I've been doing professional development for some time and my techniques might be overkill for somebody in your position. Not to say that there's anything wrong with that, however.
I just want to add that the system that I think was easiest to set up and work with was Mercurial. If you work alone and not in a team, you just initialize it in your normal work folder and then go on from there. The normal flow is to edit any file using your favourite editor and then do a check-in (commit).
I haven't tried Git, but I assume it is very similar. Monotone was a little bit harder to get started with. These are all distributed source control systems.
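A minimal Mercurial session along those lines (the file names are invented):

hg init                        # turn the current work folder into a repository
hg add index.php lib/          # tell Mercurial which files to track
hg commit -m "first commit"
# ... edit as usual, hit F5 ...
hg commit -m "describe the change"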
It sounds like you're asking about how to use source control to manage releases.
Here's some general guidance that's not specific to websites:
Use a local copy for developing changes
Compile (if applicable) and test your changes before checking in
Run automated builds and tests as often as possible (at least daily)
Version your daily builds (have some way of specifying the exact bits of code corresponding to a particular build and test run)
If possible, use separate branches for major releases (or have a development and a release branch)
When necessary, stabilize your code base (define a set of tests such that passing all of those tests means you are confident enough in the quality of your product to release it, then drive toward 0 test failures, i.e. ban any checkins to the release branch other than fixes for the outstanding issues)
When you have a build which has the features you want and has passed all of the necessary tests, deploy it.
If you have a small team, a stable product, a fast build, and efficient, high-quality tests then this entire process might be 100% automated and could take place in minutes.
I recommend Subversion. Setting up a repository and using it is actually fairly trivial, even from the command line. Here's how it would go:
if you haven't set up your repo (repository)
1) Make sure you've got Subversion installed on your server
$ which svn
/usr/bin/svn
which is a tool that tells you the path to another tool. If it returns nothing, that tool is not installed on your system.
1b) If not, get it
$ apt-get install subversion
apt-get is a tool that installs other tools onto your system
If that's not the right name for subversion in apt, try this
$ apt-cache search subversion
or this
$ apt-cache search svn
Find the right package name and install it using apt-get install packagename
2) Create a new repository on your server
$ cd /path/to/directory/of/repositories
$ svnadmin create my_repository
svnadmin create reponame creates a new repository in the present working directory (pwd) with the name reponame
You are officially done creating your repository
if you have an existing repo, or have finished setting it up
1) Make sure you've got Subversion installed on your local machine per the instructions above
2) Check out the repository to your local machine
$ cd /repos/on/your/local/machine
$ svn co svn+ssh://www.myserver.com/path/to/directory/of/repositories/my_repository
svn co is the command you use to check out a repository
3) Create your initial directory structure (optional)
$ cd /repos/on/your/local/machine
$ cd my_repository
$ svn mkdir branches
$ svn mkdir tags
$ svn mkdir trunk
$ svn commit -m "Initial structure"
svn mkdir runs a regular mkdir and creates a directory in the present working directory with the name you supply after typing svn mkdir and then adds it to the repository.
svn commit -m "" sends your changes to the repository and updates it. Whatever you place in the quotes after -m is the comment for this commit (make it count!).
The "working copy" of your code would go in the trunk directory. branches is used for working on individual projects outside of trunk; each directory in branches is a copy of trunk for a different sub project. tags is used more releases. I suggest just focusing on trunk for a while and getting used to Subversion.
working with your repo
1) Add code to your repository
$ cd /repos/on/your/local/machine
$ svn add my_new_file.ext
$ svn add some/new/directory
$ svn add some/directory/*
$ svn add some/directory/*.ext
The second to last line adds every file in that directory. The last line adds every file with the extension .ext.
2) Check the status of your repository
$ cd /repos/on/your/local/machine
$ svn status
That will tell you if there are any new files, updated files, or files with conflicts (differences between your local version and the version on the server), etc.
3) Update your local copy of your repository
$ cd /repos/on/your/local/machine
$ svn up
Updating pulls any new changes from the server you don't already have
svn up does care what directory you're in. If you want to update your entire repository, make sure you're in the root directory of the repository (above trunk)
That's all you really need to know to get started. For more information I recommend you check out the Subversion Book.