Automatically triggering merge activity after remote on-site (custom) development? - version-control

In our office, the software we create is sent to the client's office along with an engineer and a laptop. The engineer modifies the code at the customer site, based on the customer's requests, and deploys the executable.
When the engineer returns to the office, the changed/latest code is not committed back to the server, which causes us all sorts of problems with the source code on the development boxes and laptops.
I tried using a version control system (SVN), but sometimes the engineer forgets to commit the latest code to the SVN server. Is there an automatic way so that, when the laptop connects to the domain, the version control system checks for changes and either prompts the user to commit the code to the server or commits it automatically?

I think that the key to this is to require the on-site engineers to use a VCS at the customer site, and to make it a condition of their continued employment that the code at the customer site is in fact reloaded into the VCS on return to the office. You could say that the engineers sent on-site need to be trained in their duties, and they should be held accountable for not doing the complete job - the job isn't finished until the paperwork is done (where 'paperwork' in this context includes updating the source repositories with the customer's custom adaptations of the software).
It seems to me that it might be better to use a DVCS such as Git or Mercurial rather than SVN in this context. However, you should be able to work with SVN if the laptop dispatched to the customer site has a suitable working copy created for the customization work.
That said, the question is "can we make this easier and more nearly automatic?". In part, that might depend on your infrastructure; it also might depend on Windows capabilities about which I'm clueless. There might be a way to get a particular program to run when the laptop connects to a new domain. An alternative (Unix-ish) approach would be a regularly scheduled job that runs, say, every hour, checks whether the machine is on the home domain, and looks for changes that should be submitted to the main repository - something like the sketch below.
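For what it's worth, here is a minimal sketch of such a scheduled check in Python, assuming an SVN working copy and that being on the office network can be detected from the machine's DNS suffix; the path and domain below are placeholders:

    # check_onsite_changes.py - run from cron / Windows Task Scheduler every hour or so
    import socket
    import subprocess

    WC_PATH = r"C:\work\customer-project"   # hypothetical working-copy location
    HOME_DOMAIN = "corp.example.com"        # hypothetical office DNS suffix

    def on_home_domain():
        return socket.getfqdn().lower().endswith(HOME_DOMAIN)

    def has_local_changes():
        # 'svn status' prints one line per modified/added/unversioned file
        result = subprocess.run(["svn", "status", WC_PATH],
                                capture_output=True, text=True, check=True)
        return bool(result.stdout.strip())

    if on_home_domain() and has_local_changes():
        # A real version would pop up a prompt or send mail; auto-committing is
        # also possible, but riskier than nagging the engineer.
        print("Uncommitted customer-site changes found - please run:")
        print('  svn commit %s -m "customer on-site changes"' % WC_PATH)

Scheduled this way, you get the "prompt on reconnection" behaviour without needing anything to fire at the exact moment the laptop joins the domain.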

Related

Is it safe to cloud sync TFS workspaces?

Please excuse a newbie question, but I've always used SVN and, more recently, Git. I'm now touching TFS for the first time.
If I have two different machines that I work on regularly, can I safely keep the project files in sync using something like Dropbox/Sugarsync/Skydrive?
Are there any pros/cons to be aware of?
(I know some of you might ask why I don't just check out on the other machine. I'm just trying to save a step: I want to pick up the other machine and do what I need to do without having to check anything out.)
TFS workspaces contain information about the machine name and the user that created them. However, if you're using local workspaces and you're not putting any server-side locks on files, then I suppose you could sync them via Dropbox and it would probably work just fine.
That said, I'd never recommend it.
You're not only going to sync all your code but also all the binaries you produce every time you compile. You also won't have any change history between machines, and you'll need to keep monitoring the Dropbox app to make sure everything has synced fully before switching machines.
If you want to move changes between two machines, I'd recommend using shelvesets (see the sketch below). It only takes a few seconds, and you get a more explicit update process between machines. You can be sure of what is happening in your code on each machine, and you have an implicit rollback point if you realise you put something in the shelveset that you didn't want.
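As a rough sketch of that workflow, assuming tf.exe is on the PATH and using a made-up shelveset name, parking everything before you switch machines can be as small as:

    # shelve_wip.py - park all pending changes under a dated shelveset name
    import datetime
    import subprocess

    name = "wip-" + datetime.date.today().isoformat()   # e.g. "wip-2013-05-04"

    # On the machine you are leaving: shelve every pending change in the workspace.
    subprocess.run(["tf", "shelve", name, "/replace", "/noprompt"], check=True)

    # On the machine you are moving to, from its own workspace, you would run:
    #   tf unshelve wip-2013-05-04 /noprompt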

Can you share a client spec in Perforce?

It seems rather pointless to have everybody create the same client for a project in Perforce, so could someone create a "public" client in Perforce from which everybody could sync?
Edit: I meant clients like the ones you create in Perforce from a client spec
It's easier to understand the architecture, I believe, if you use the term 'workspace' rather than 'client'. Perforce applications manage files in a designated area of your local disk, called your workspace. As the name implies, your workspace is where you do most of your work. You can have more than one client workspace, even on the same workstation.
Since two different users are generally working independently, on separate workstations or laptops, they each need their own copy of the code, and they each need their own workspace so that they can control when they sync up with the changes in the server.
If you and I try to share a single copy of the code, on a single workstation, we'll find ourselves quickly confused about whose changes are whose; it's much easier for us to work independently, and to merge our changes as separate submissions to the server.
If the issue in your case is that client definitions are complex, with very intricate view definitions, then you may wish to investigate the 'template client' feature: set up a single master client with the view and options that you prefer, and then your other users can use 'client -t' to create workspace definitions that copy the view and options details from the template client.
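A sketch of what creating a workspace from a template looks like in practice, assuming a template client named proj-template already exists (both names below are placeholders):

    # new_workspace_from_template.py - create a personal workspace from a template
    import getpass
    import subprocess

    template = "proj-template"                      # hypothetical template client
    new_client = "proj-" + getpass.getuser()        # e.g. "proj-alice"

    # 'p4 client -o -t TEMPLATE NAME' prints a spec seeded from the template;
    # feeding it back through 'p4 client -i' saves it without opening an editor.
    spec = subprocess.run(["p4", "client", "-o", "-t", template, new_client],
                          capture_output=True, text=True, check=True).stdout
    subprocess.run(["p4", "client", "-i"], input=spec, text=True, check=True)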
It's possible to do this, but not advisable. Since Perforce keeps a server-side record of what files are synced to each client, you could run into a situation where:
User Fred syncs using the shared client and gets a fresh set of files.
Before any changes are committed, user Jim syncs using the shared client and gets nothing, because the Perforce server thinks that the client already has an up-to-date set of files.
Jim could get around this using "p4 sync -f" which will force all the latest files to be synced to his workspace, but that's a kludge around the way Perforce is designed to be used.
Perforce clients are very lightweight in terms of the resources they take up on the server, so it's better not to have shared clients.
I tried to find a more complete explanation of why clients should not be shared in the online Perforce documentation, but it's not very helpful. The book "Practical Perforce" has the best overview I've seen if you happen to have a copy around.
Use a template workspace as Bryan mentioned, or consider using streams. In the streams framework you define the stream view (composition) once, and workspaces are generated automatically.
p4 sync -f is too slow, because it first deletes all the files in your workspace and then reloads them from the central depot. There is a trick you can use when you want the effect of sync -f: reset the client's have list and do a plain sync instead. The steps are: 1) get the client spec, 2) save it locally, 3) delete the client, 4) create the same client again from the saved spec, then sync. This saves the time spent deleting the local files.
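In commands, the procedure described above looks roughly like this (a sketch only, with "myclient" as a placeholder workspace name; deleting the client throws away its have list, so the plain sync afterwards re-fetches everything):

    # refresh_client.py - save the spec, delete the client, recreate it, resync
    import subprocess

    client = "myclient"                              # placeholder workspace name

    spec = subprocess.run(["p4", "client", "-o", client],
                          capture_output=True, text=True, check=True).stdout
    subprocess.run(["p4", "client", "-d", client], check=True)   # drops the have list
    subprocess.run(["p4", "client", "-i"], input=spec, text=True, check=True)
    subprocess.run(["p4", "-c", client, "sync"], check=True)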

Version Control advice

We've decided on a version control system - using Mercurial clients and Bitbucket for repositories. But it's just occurred to me we have a problem I didn't consider.
We have an internal development LAMP server (Ubuntu) and all the developers work on websites stored on it, which means all developers share a single file source and we are all working from it. It's rare that two different developers will work on the same site at the same time, but it does happen occasionally. This means that two developers can easily overwrite each other's work if they are working on the same file at the same time.
So my question is: what is the best solution to this problem? Bear in mind that we like the convenience of a single internal server so that we can demo sites internally, and it also has a cron job running to back up the files and databases.
I am guessing each developer would have to run their own LAMP (or WAMP) server on their individual workstation, commit, and push to the Bitbucket repository, and of course, whenever working on a different site, do a pull and resolve any differences as usual. This of course takes away the convenience of other team members (non-developers) being able to browse to 192.168.0.100 (the LAMP server's IP address) to look at the progress of websites, not to mention that some clients can also access the same server externally (I've set up a port forward, limited to their IP addresses) to see the progress of their websites too.
Any advice will be greatly appreciated.
Thanks in advance.
I think you have to seriously rethink the workflow you use, because LAMP-per-developer is only slightly better than editing sites in place.
I can't see a place for Bitbucket in serious corporate development - in-house resources are at least more manageable.
I can't see a reason not to use a staging Mercurial server (pseudo-central) together with the staging internal LAMP server which you already have and use.
I can imagine at least two possible choices (fast, dirty draft ideas, not ready-to-use solutions), both hook-based.
Solution 1: less manageable, faster to implement
Every developer has, in their own local repo, a hook which after each commit exports the tip and copies the export to the related site space (see the sketch after this list). Workflow: commit - test the results on the internal site.
Advantages: easy and fast to implement.
Disadvantages: due to the distributed nature, it can't prevent tested code being overwritten by code from another developer.
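A very rough sketch of the per-developer hook in solution 1, assuming Python 3.8+ and that the site space is reachable as a mounted path (otherwise swap the copy for rsync or scp); the deploy target below is a placeholder:

    # deploy_tip.py - export the tip of the local repo and copy it to the site space
    import os
    import shutil
    import subprocess
    import sys
    import tempfile

    SITE_DIR = "/var/www/sites/site1"        # hypothetical site space on the LAMP box

    def main():
        repo = sys.argv[1] if len(sys.argv) > 1 else "."   # hooks run in the repo root
        tmp = tempfile.mkdtemp(prefix="hgtip-")
        export = os.path.join(tmp, "tip")
        # 'hg archive' writes a clean snapshot of the tip (no .hg directory)
        subprocess.run(["hg", "-R", repo, "archive", "-r", "tip", export], check=True)
        shutil.copytree(export, SITE_DIR, dirs_exist_ok=True)   # or rsync it across
        shutil.rmtree(tmp)

    if __name__ == "__main__":
        main()

    # Wire it up in each developer's .hg/hgrc (adjust the path):
    # [hooks]
    # commit = python /path/to/deploy_tip.py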
Solution 2: manageable deployment, harder to implement and manage
The LAMP server also becomes the Mercurial server, hosting "central" clones of all the site repos, updated only by pushes from the developers' local repos. Each repo on this server gets two hooks:
a "before-push" hook, which checks whether pushing is allowed right now or the site is "locked" by a previous developer
a "post-push" hook, which exports and copies the received data and also performs the control function for hook 1: based on conditions (subject of discussion), it locks/unlocks pushes to the repo
Workflow: commit - push - test the results - tag the working copy with a special (movable) tag - commit the tag - push the unlocking changeset to the repo.
Advantages: manageable, single-point testing.
Disadvantages: possible delays due to the push workflow and the blocking of pushes; the need to install, configure and support an additional server; the complexity of the changegroup and pretxnchangegroup hooks (a skeleton follows below).
Final notes and hints for solution 2: I think (not tested) that a special tag (applied with -f so it can be moved across changesets) can be used as the unlock sign (a bookmark would not satisfy the "moved by hand" condition). That is: the developer commits (and pushes) an untagged changeset, while a tag such as "Passed" marks some older changeset. When testing the results on the staging server is done, the developer tags the working copy with that tag, commits the tag, and pushes the unlocking changeset to the central repo. The changegroup hook must detect the push of .hgtags and (in some way) allow future data pushes (control pushes must always be allowed).
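To make solution 2 a little more concrete, here is a rough skeleton of the two server-side hooks; the lock-file path, site path and hgrc lines are placeholders, and the tag-based unlock logic is only indicated in a comment:

    # staging_hooks.py - in-process Mercurial hooks for the central repos
    import os
    import subprocess
    import time

    LOCK = "/srv/hg/site1.lock"                      # hypothetical lock file

    def block_if_locked(ui, repo, **kwargs):
        """pretxnchangegroup: refuse new pushes while the site is locked."""
        if os.path.exists(LOCK):
            ui.warn(b"site is locked by an earlier, still-untested push\n")
            return True                              # truthy return rejects the push
        return False

    def deploy_and_lock(ui, repo, **kwargs):
        """changegroup: export the pushed tip to a fresh directory and take the lock.
        Detecting the push of .hgtags carrying the "Passed" tag, and removing the
        lock again, would also live here but is omitted from this sketch."""
        dest = "/var/www/sites/site1-%d" % int(time.time())   # placeholder scheme
        subprocess.run(["hg", "-R", os.fsdecode(repo.root),
                        "archive", "-r", "tip", dest], check=True)
        # a real version would now point a symlink such as /var/www/sites/site1 at dest
        open(LOCK, "w").close()

    # Central repo's .hg/hgrc (assumption, adjust paths):
    # [hooks]
    # pretxnchangegroup.lock = python:/srv/hg/staging_hooks.py:block_if_locked
    # changegroup.deploy     = python:/srv/hg/staging_hooks.py:deploy_and_lock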
Yes, the better solution is probably to set each developer up with a local server. It may seem inconvenient to you because you're apparently used to sharing a server, but consider:
If you're really interested in using a single server as a demo server, it's probably better that people aren't actively developing on it at the time. They could break stuff that way! And developers shouldn't have to worry about breaking stuff when they're developing. Developing often means experimenting.
Having each developer run their own server gives them the flexibility to, say, work disconnected. You've got a decentralized version control system (Mercurial), but your development process is highly centralized. Even if you don't want people to work remotely, realize that, as things stand, when your single server goes down, everybody goes down.
Any time a developer commits and pushes those commits, you can automate deployment directly to your demo site. That way, you still have a quite up-to-date source on your demo server.
TL;DR: Keep the demo server, but let your devs work on their own servers.

What's the best way to update code remotely?

For example, I have a website with various types of information. If that goes down, the users have a copy of the same website on a local web server on the client machine, such as Apache or IIS. They use this local version until the Internet version comes back. In other words, they can have no downtime.
The problem is that over time the Internet version will change while the client versions will remain the same unless I touch each client's machine to make the updates. I don't want to do that.
Is there a good way to keep my client up to date so that when I make a change on the server the client gets a copy so they can run it locally if needs be?
Thank you.
EDIT: do you think that using SVN, with the clients running the update regularly, would work?
EDIT: they'll never, ever submit anything. It's just so I don't have to update each client by hand by going to the machine manually. They're web pages that run in case the main server is down.
I would go for Git over SVN because of its distributed nature. It gives you multiple copies of the code; use it along with the solution from this comment to auto-commit:
Making git auto-commit
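On the client side this boils down to a tiny scheduled job; a sketch, assuming the local fallback copy is a git clone at a placeholder path and the clients only ever pull:

    # refresh_fallback.py - run from cron / Task Scheduler on each client machine
    import subprocess

    LOCAL_COPY = "/var/www/fallback-site"    # hypothetical local web root (a git clone)

    # fast-forward only: the clients never commit, so this can never conflict
    subprocess.run(["git", "-C", LOCAL_COPY, "pull", "--ff-only"], check=True)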
Why not use something like HTTrack to make local copies of your actual Internet site on each machine, rather than trying to do a separate deployment? That way you'll automatically stay in sync.
This has the advantage that if, at some point, part of your website is updated dynamically from a database, the user will still be able to have a static copy of the resulting site that is up-to-date.
There are tools like rsync which you can use periodically to sync the changes.
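For instance (the host and paths here are placeholders), a scheduled job as small as this keeps the client copy in step with the server:

    # mirror_site.py - pull the latest site content down to the local copy
    import subprocess

    subprocess.run(["rsync", "-az", "--delete",
                    "deploy@www.example.com:/var/www/site/",
                    "/var/www/fallback-site/"], check=True)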

Can Microsoft Windows Workflow route to specific workstations?

I want to write a workflow application that routes a link to a document. The routing is based upon machines not users because I don't know who will ever be at a given post. For example, I have a form. It is initially filled out in location A. I now want it to go to location B and have them fill out the rest. Finally, it goes to location C where a supervisor will approve it.
None of these locations has a known user. That is, I don't know who it will be. I only know that whoever it is will be authorized (they are assigned to the workstation and are approved to be there).
Will Microsoft Windows Workflow do this or do I need to build my own workflow based on SQL Server, IP Addresses, and so forth?
Also, How would the user at a workstation be notified a document had been sent to their machine?
Thanks for any help.
I think that if I were approaching this problem, Windows Workflow would work for it. What you want is a state machine that has three states:
A Start
B Completing
C Approving
However, workflow needs to run in one central place (trust me on this: you only want to have one workflow runtime running at once, otherwise the same bit of work can be done multiple times - see our questions on the MSDN forum). So a central server running the workflow is the answer.
How you present this to the users can be done in multiple ways. Dave suggested using an ASP.NET site to identify the machines that are doing the work, which is probably how I would do it. However, you could also write a Windows Forms client that would do the same thing. This would require using something like SOAP / WCF to facilitate communication between the client form applications and the central workflow service. It would have the advantage that you could use a system tray icon to alert the user.
You might also want to look at human workflow engines, as they are designed to do things such as this (and more). I'm most familiar with PNMsoft's Sequence.
You can design a generic "routing" workflow that will cause data to go to a workstation. The easiest way to do this would be to embed the workflow in an ASP.NET application. Each workstation should visit the application with a workstation ID in the querystring:
http://myapp/default.aspx?wid=01
When the form is filled out at workstation A, the workflow running in the web app can enter it into the "work bin" of the next workstation. Anyone sitting at the computer for which the form is destined will see it appear in their list of forms to review. You can use AJAX to make it slick and auto-updating.
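The answer above is about ASP.NET and Windows Workflow, but the "work bin keyed by a workstation ID in the querystring" part is framework-independent. Purely as an illustration of that routing idea, here is a toy sketch in Python/Flask; the routes, IDs and the A-to-B-to-C map are all made up:

    # workbin_demo.py - toy illustration of routing by workstation ID (not ASP.NET)
    from collections import defaultdict
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    work_bins = defaultdict(list)               # workstation id -> queued documents
    NEXT_STATION = {"01": "02", "02": "03"}     # hypothetical A -> B -> C routing

    @app.route("/bin")
    def show_bin():
        wid = request.args.get("wid", "")       # e.g. GET /bin?wid=01
        return jsonify(forms=work_bins[wid])

    @app.route("/submit", methods=["POST"])
    def submit_form():
        wid = request.args.get("wid", "")
        doc = request.get_json(force=True)      # the filled-out form
        nxt = NEXT_STATION.get(wid)
        if nxt:
            work_bins[nxt].append(doc)          # drop it into the next work bin
        return "", 204

    if __name__ == "__main__":
        app.run()

A page at each workstation would then poll its own /bin URL, which is the AJAX auto-update mentioned above.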