Can you share a client spec in Perforce? - version-control

It seems rather pointless to have everybody creating the same client for a project in Perforce, so is there any way one could create a "public" client in Perforce that everybody could sync from?
Edit: I meant clients like the ones you create in Perforce from a client spec

It's easier to understand the architecture, I believe, if you use the term 'workspace' rather than 'client'. Perforce applications manage files in a designated area of your local disk, called your workspace. As the name implies, your workspace is where you do most of your work. You can have more than one client workspace, even on the same workstation.
Since two different users are generally working independently, on separate workstations or laptops, they each need their own copy of the code, and they each need their own workspace so that they can control when they sync up with the changes in the server.
If you and I try to share a single copy of the code, on a single workstation, we'll find ourselves quickly confused about whose changes are whose; it's much easier for us to work independently, and to merge our changes as separate submissions to the server.
If the issue in your case is that client definitions are complex, with very intricate view definitions, then you may wish to investigate the 'template client' feature: set up a single master client with the view and options that you prefer, and then your other users can use 'client -t' to create workspace definitions that copy the view and options details from the template client.
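A minimal sketch of that template approach (the client and user names here are just placeholders):

    # Admin sets up the master/template client once, with the desired View and Options:
    p4 client projX-template

    # Each user then creates their own workspace, copying the View and Options
    # from the template instead of editing them by hand:
    p4 client -t projX-template alice-projX-ws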

It's possible to do this, but not advisable. Since Perforce keeps a server-side record of what files are synced to each client, you could run into a situation where:
User Fred syncs using the shared client and gets a fresh set of files.
Before any changes are committed, user Jim syncs using the shared client and gets nothing, because the Perforce server thinks that the client already has an up-to-date set of files.
Jim could get around this using "p4 sync -f" which will force all the latest files to be synced to his workspace, but that's a kludge around the way Perforce is designed to be used.
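Roughly what that scenario looks like on the command line (the client name and depot path are made up):

    # Both users point P4CLIENT at the same shared workspace:
    export P4CLIENT=shared-client

    # Fred syncs and receives the files:
    p4 sync //depot/projX/...

    # Jim syncs from his machine and gets nothing, because the server's
    # have list says this client is already up to date:
    p4 sync //depot/projX/...

    # Jim's workaround, forcing a re-transfer of everything:
    p4 sync -f //depot/projX/...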
Perforce clients are very lightweight in terms of the resources they take up on the server, so it's better not to have shared clients.
I tried to find a more complete explanation of why clients should not be shared in the online Perforce documentation, but it's not very helpful. The book "Practical Perforce" has the best overview I've seen if you happen to have a copy around.

Use a template workspace as Bryan mentioned, or consider using streams. In the streams framework you define the stream view (composition) once, and workspaces are generated automatically.
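If you go the streams route, the rough shape of the setup is something like this (the depot and stream names are assumptions):

    # One-time setup: create a stream depot (set Type: stream in the spec form)
    # and a mainline stream:
    p4 depot Streams
    p4 stream -t mainline //Streams/main

    # Each user binds a workspace to the stream; the client view is generated
    # from the stream definition, so nobody hand-edits views:
    p4 client -S //Streams/main alice-main-ws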

p4 sync -f is too slow, because it first deletes all the files in your local workspace and then re-downloads them from the central depot. There is a trick to avoid that when you would otherwise reach for sync -f: reset the have list instead. The steps are: 1. get the client spec; 2. save it locally; 3. delete the client; 4. recreate the same client from the saved spec. That way you save the time spent deleting the local files.
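A hedged sketch of that sequence (the client name is a placeholder); deleting and recreating the spec clears the server's have list without touching the files on disk:

    # 1-2. Dump the client spec to a local file:
    p4 client -o my-client > my-client.spec

    # 3. Delete the client on the server (this clears its have list):
    p4 client -d my-client

    # 4. Recreate the same client from the saved spec:
    p4 client -i < my-client.spec

    # A plain sync now re-downloads everything, since the have list is empty:
    p4 sync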

Related

Perforce: Sync master data but directly download derived data as well

This must have been solved somewhere but I can't find a straightforward answer.
The Perforce depot has code, master data, and derived data. But only the code and master data need to be source-controlled. The derived data can be generated during nightly-build on the build machine.
Here is the problem: the users want to sync with the depot as usual and get all three of the above. So the derived data must be downloaded after a user clicks "Get Latest Version" in P4V. They don't want to run extra scripts on their local machines either.
Is there anything I can do on the server side to make this happen?
EDIT
The reason why the derived data had better not be submitted:
The derived data is owned by a particular team as their product. It is constantly edited by only that team, on multiple machines, and could have constant conflicts. So, for their convenience, the data should not need to be checked out and checked in on every single edit; conflicts should be handled from the master-data end, so the derived data is better excluded from SCM.
The rest of the project teams simply consume this derived data in their work and make no changes to it at all. They should just get one healthy batch of the derived data from the depot, say, once a day.
Submit the derived data from the build machine each night after it's rebuilt.
Have the team that needs to rebuild it themselves exclude it from their client views. This is easy to automate in various ways, e.g. via virtual streams or client spec triggers, but even if it's done "manually," it's only done once per workspace, so there's no maintenance cost.
For everyone else the derived data just syncs down normally, and you can use protections to make it read-only to everyone but the build machine if you want to make sure that nobody is checking it in when they shouldn't be.
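A hedged sketch of both pieces (the depot paths, workspace name, and build account are placeholders). The rebuilding team excludes the derived data with a minus-mapped line in their client view:

    View:
        //depot/projX/...           //team-ws/projX/...
        -//depot/projX/derived/...  //team-ws/projX/derived/...

And the protections table can leave everyone else read-only on that path while the build account keeps write access (in p4 protect, the last matching line wins):

    write user *        *  //depot/...
    read  user *        *  //depot/projX/derived/...
    write user buildbot *  //depot/projX/derived/...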

Is it safe to cloud sync TFS workspaces?

Please excuse a newbie question, but I've always used SVN and more recently, Git. Just now am touching TFS for the first time.
If I have two different machines that I work on regularly, can I safely keep the project files in sync using something like Dropbox/Sugarsync/Skydrive?
Are there any pros/cons to be aware of?
(I know that some of you might ask something like why not just checkout on the other machine. Just trying to save a step. I want to just pick up the other machine and do what I need to do without having to check out anything.)
TFS workspaces contain information about the machine name and the user that created them. However, if you're using local workspaces and you're not putting any server-side locks on files, then I suppose you could sync them via Dropbox and it would probably work just fine.
That said, I'd never recommend it.
You're not only going to sync all your code but also all the binaries that you produce each and every time you compile. Plus, you won't have any change history between machines, and you'll need to keep monitoring the Dropbox app to make sure things have synced fully before switching machines.
If you want to move changes between two machines I'd recommend using shelvesets. It only takes a few seconds to do and you'll have a more explicit update process between machines. You can be sure of what is happening in your code on each machine and you have an implicit rollback point if you realise you put something in the shelveset you didn't want.
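A rough sketch of the shelveset round trip with the tf command-line client (the shelveset name and comment are made up):

    rem On machine A: park the pending changes on the server as a shelveset
    tf shelve WIP-FeatureX /comment:"Half-finished work, switching machines"

    rem On machine B: pull that shelveset down into the local workspace
    tf unshelve WIP-FeatureX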

Avoiding stale metadata in Perforce Server

My question might be simple, and the solution as well; still, I want to know: suppose a user syncs a branch and later manually deletes the physical files from his local machine. The metadata about those files will still exist on the server...
In the long run I'm afraid this could slow down the server.
I haven't found much about this issue, which is why I'm asking here: how do companies usually manage their Perforce metadata? A trigger that verifies the existing metadata? A program that, from time to time, runs sync #none for client directories that no longer exist?
As I said, there might be many simple ways to solve this, but I'm looking for the best one.
Any help is appreciated.
In practice I don't think you'll have too much to worry about.
That being said, if you want to keep the workspace metadata size to a minimum, there are two things you'll need to do:
You'll need to write the sync #none script you referenced above, and also make sure to delete any workspaces that are no longer in use.
Create a checkpoint, and recreate the metadata from that checkpoint. When the metadata is recreated, that should remove any data from deleted clients. My understanding of the Perforce metadata is that it won't shrink unless it's being recreated from a checkpoint.
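A hedged sketch of both steps (the workspace name, server root, and checkpoint number are placeholders, and the p4d commands run on the server host during a maintenance window):

    # Clear the have list for a dead workspace, then delete its spec:
    p4 -c stale-workspace sync //depot/...#none
    p4 client -d stale-workspace

    # On the server: take a checkpoint, move the old db.* files aside,
    # and rebuild the metadata from the checkpoint:
    p4d -r /perforce/root -jc
    mkdir /perforce/root/old-db
    mv /perforce/root/db.* /perforce/root/old-db/
    p4d -r /perforce/root -jr /perforce/root/checkpoint.1234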

Source code backup strategy

I have a "Projects" folder which contains dozens of Visual Studio projects. I want to create a backup for them. First I thought I should copy them all to my SkyDrive or DropBox folders and let them be synced to the cloud whenever there is a change.
The other strategy would be using source control, but I don't want the backup to take place whenever a change is made, and it should be efficient. By that I mean only the changed files, and only the changed parts, should be uploaded to the server, to save my bandwidth. I don't have a very good connection (512 Kbps).
Also, my code is very valuable to me, so security is very important.
Is there a way to achieve the automatic backup to the cloud (ideally free) and take advantage of the source control options (such as revisions, etc.)?
I'm sure a lot of people have solutions for this and a lot of people have the same problem so please let the question be answered instead of just clicking "close"!
Use GitHub or Bitbucket. You get all the benefits of version control plus cloud storage for your repositories.
You can commit changes as often as you like, and you only need traffic when you push or pull changes to or from the server. The version control systems are smart enough to send only the modified files.
You could even have a team working on a local network, without needing a cloud solution at all, and only push to the cloud server periodically, just for backup. To do that, you can create a script that pulls from your local repository and pushes to the server, and run that script from a scheduler.
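A minimal version of such a scheduled backup script might look like this (the mirror path, local repository URL, and remote name are all assumptions):

    #!/bin/sh
    # backup-push.sh: mirror the local-network repository to a cloud remote.
    # One-time setup (not part of the scheduled run):
    #   git clone --mirror git://lan-server/myproject.git /backup/mirrors/myproject.git
    #   cd /backup/mirrors/myproject.git && git remote add cloud git@github.com:me/myproject.git
    cd /backup/mirrors/myproject.git || exit 1
    git fetch --all --prune      # pull the latest changes from the local-network repo
    git push --mirror cloud      # push every branch and tag to the cloud remote

    # Example crontab entry, nightly at 02:00:
    # 0 2 * * * /usr/local/bin/backup-push.sh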
Apart from the service used to back up your files, I think you should use version control anyway. As a programmer, I don't think you can live without it.
This might be of interest to you.
The idea is that you create just the Source Control repository in Dropbox, and check out an actual copy onto your machine.
You could then commit only the files you've modified (which would trigger the sync), and that would also preserve all of your history for those projects.
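One way to realize that idea with Git, just as an illustration (the paths below are made up):

    # One-time setup: a bare repo inside Dropbox plays the role of the server
    git init --bare ~/Dropbox/repos/myproject.git

    # Your actual working copy lives outside Dropbox
    git clone ~/Dropbox/repos/myproject.git ~/Projects/myproject

    # Day to day: committing and pushing is what triggers the cloud sync
    cd ~/Projects/myproject
    git add -A
    git commit -m "describe the change"
    git push origin master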

What's the best way to update code remotely?

For example, I have a website with various types of information. If it goes down, the users have a copy of the same website on a local webserver on the client, like Apache or IIS. They use this local version until the Internet version returns. In other words, they can have no downtime.
The problem is that over time the Internet version will change while the client versions will remain the same unless I touch each client's machine to make the updates. I don't want to do that.
Is there a good way to keep my client up to date so that when I make a change on the server the client gets a copy so they can run it locally if needs be?
Thank you.
EDIT: Do you think using SVN and having the clients run the update regularly would work?
EDIT: They'll never ever submit anything. It's just so I don't have to update each client by hand, manually going to each machine. They're webpages that run in case the main server is down.
I would go for Git over SVN because of its distributed nature: it gives you multiple copies of the code. Use it along with the solution from this comment to auto-commit:
Making git auto-commit
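The linked idea boils down to a small script run by a scheduler on the server, which the clients then pull from; a sketch, with paths, remote, and schedule all assumed:

    #!/bin/sh
    # autocommit.sh: snapshot the site directory and publish it for clients to pull
    cd /var/www/site || exit 1
    git add -A
    git commit -m "auto-commit $(date -u +%Y-%m-%dT%H:%M:%SZ)" || exit 0   # nothing to commit
    git push origin master

    # crontab entry on the server, every 15 minutes:
    # */15 * * * * /usr/local/bin/autocommit.sh
    # Each client machine then runs "git pull" on its own schedule.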
Why not use something like HTTrack to make local copies of your actual Internet site on each machine, rather than trying to do a separate deployment? That way you'll automatically stay in sync.
This has the advantage that if, at some point, part of your website is updated dynamically from a database, the user will still be able to have a static copy of the resulting site that is up-to-date.
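For example, a scheduled HTTrack run on each client could refresh its mirror (the URL and output path are placeholders; re-running with the same output folder updates the existing mirror):

    httrack "http://www.example.com/" -O /var/www/localcopy "+www.example.com/*"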
There are tools like rsync which you can use periodically to sync the changes.
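A typical example, pulling only the changed parts of the deployed site down to each client on a schedule (hostnames and paths are placeholders):

    # --delete keeps the local copy an exact mirror by removing files
    # that were removed on the server:
    rsync -avz --delete deploy@www.example.com:/var/www/site/ /var/www/localcopy/

    # crontab entry on each client machine, hourly:
    # 0 * * * * rsync -avz --delete deploy@www.example.com:/var/www/site/ /var/www/localcopy/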