Initial Perforce structure - version-control

I'm trying to set up an initial Perforce structure for a small game.
I'm familiar with SVN and SCMs in a "classic" (read: source code only) usage.
After some research, it seems Perforce is the way to go for managing source and binary content alike: models, textures, sounds, and so on.
But, to be honest, I'm struggling a bit to find a good structure.
I want to put everything in source control, so that means the artists' source files plus the game source code.
Stream depots seem nice for the devs / source code, but a bit cumbersome for the artists.
I don't think the artists will be happy having to manage streams and copy/integrate between branches.
So my idea is to put everything into a plain depot and then have another stream depot to add the "icing" on top.
The problem is I don't know if that's possible, or how to do it.
I think I recall a forum post from someone who had set up a stream that would mirror a "standard branch", but I could not find it.
Unfortunately my google-fu failed me, so I'll ask here:
Is there some kind of "standard" or "recommended" Perforce setup for game-related development? I could not even find one full example :/
Can I make a stream that will "mirror" a standard branch in a plain depot? If so, how?
Thank you.

I'd lean towards using streams for both groups and putting them each in their own depot, but there's not necessarily one right way to do this. Here are some data points about streams to consider:
Streams don't necessarily imply branching; you can have one "//art/main" stream that the artists work in exclusively. There's no inherent reason they'd need to create and maintain branches (except for the reasons you'd normally create branches, but those might not apply to your art assets the same way they do to your code).
The benefit of using streams beyond managing codelines is centralized management of client specs -- suppose you notice that all the artists are submitting giant generated ".foo" files that don't need to be in the depot. Rather than trying to get all your artists to add "-//....foo" lines to their client specs, or messing around with triggers, you add one Ignored line to the "//art/main" stream spec and it affects everyone using that stream.
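For illustration, that one-line fix is just an entry in the Ignored field of the stream spec (opened with "p4 stream //art/main"); a minimal sketch, with ".foo" being the hypothetical extension from above:

Ignored:
	.foo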
You can pretty easily share content between the "art" and "code" sections of your repository whether both teams use streams, one team does, or neither.
Suppose you set up an "//art" stream depot (with an "//art/main" stream) and a "//code" stream depot (with "//code/main" plus whatever other streams it makes sense to create -- dev streams for different coders who want to work on different features in isolation before merging them together, etc.). Within your "//code/main" stream you add this Path:
import art/... //art/main/...
Now everything from "//art/main" shows up in an "art" directory under clients of "//code/main", as well as clients of child streams like "//code/sadral". Note that this exact same syntax works if the art lives in a "local" depot rather than a "stream" depot -- depot files are depot files.
If you end up needing to isolate certain versions of the art assets, keep in mind that you can create branches in the //art depot without the artists needing to be involved; they just keep working in //art/main and someone else can take care of branching/copying things around as needed during the development process.
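If it's useful, here is a minimal command-line sketch of that layout; the share line is an assumption, and the "-t" flag on "p4 depot" needs a reasonably recent server:

p4 depot -t stream art            # create the two stream depots
p4 depot -t stream code
p4 stream -t mainline //art/main  # create the mainline streams
p4 stream -t mainline //code/main
# then edit the Paths field via "p4 stream //code/main":
#   Paths:
#       share ...
#       import art/... //art/main/...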

You can integrate content from a non-streams depot to a streams depot.
Most customers use that process for moving their content from non-streams to streams depots.
Streams are designed to provide a bit more structure to your workflow and follow the merge-down, copy-up methodology.
Not using streams also has its advantages, but it is easier to do wacky things and go off the rails a bit. I am therefore not sure whether having the artists work in a non-streams environment, while everyone else works in a streams environment, will be the best solution for you.
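For the one-off move itself, on servers that support "p4 populate" (2012.1 and later), something like this seeds a stream from a classic branch entirely server-side; a minimal sketch with hypothetical paths:

p4 populate -d "Seed //code/main from the classic depot" //depot/game/main/... //code/main/...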
You might want to check out the Streams Adoption Guide, which gives tips for moving to and working with streams:
http://www.perforce.com/sites/default/files/pdf/streams-adoption-guide.pdf
You may also find this Merge 2013 presentation useful, as it discusses how a UK games company develops and works with streams:
http://www.perforce.com/resources/presentations/merge-2013/tips-tricks/streamlining-game-development-streams
Also, you may find that Perforce's Helix Cloud solution meets your requirements:
http://www.perforce.com/helix-cloud
Hope this helps,
Jen.

Related

Best practice for project with multiple related components

Background: I'm using Jira for bug tracking, and Git for source control. I've got a complete end-to-end system comprising an iOS front end and a Java/Tomcat back end that provides web services and a GUI. Right now I've got a single Git repository holding all the software and a single Jira project tracking issues for the whole system.
Now that the software is live, I'm finding that changes are being made to either the iOS application or the server, but generally not both. The version numbers of the two components have diverged somewhat.
It's probably too late for this project, but in future:
Should I pursue the path of having all related components in a single source repository, tracked using a single bug-tracking project; or
Should each component be in a separate repository and be managed by a separate bug-tracking project?
I can see pros and cons for both approaches, and I can also see that the answer could easily be "it depends".
Which way would you lean, and why?
I'd go with distinct source repositories, for a few reasons:
The developers working on the two are likely to have distinct skill sets. Also, you may have management reasons for wanting to segregate who sees what.
The two components should not be tightly tied at a protocol level - different versions need to be able to interact.
The first point becomes even more important when you add another front end.
The second reason is my main one.
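To make the protocol point concrete, one common pattern is to have the two sides negotiate versions explicitly instead of assuming they ship in lockstep; a minimal sketch, where the endpoint and fields are entirely hypothetical:

curl "https://api.example.com/v1/handshake?client=ios-2.3.0"
# -> {"serverVersion": "1.8.2", "minClientVersion": "2.1.0"}
# The iOS app can then prompt for an upgrade instead of failing obscurely.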
However, I'd go with a common bug database. Defects/features may need changes on both ends. Also, it is extremely likely you will have bugs that are believed to be in one component but actually end up fixed in the other. If you try to migrate issues across databases, information will get lost. I've seen that happen too many times.

Source control system to branch by user instead of version

Once again, I'm a bit stumped about the best Stack Exchange site on which to post this question. But I think developers are best suited to answer questions about source control, so here it is.
I am considering a crowd-sourced, user-rated game development project and am wondering what, if any, source control and merging systems might best be capable of hosting the kinds of source control I'm interested in. By user-rated, I mean that there will be some kind of rating/voting system like that found here on StackOverflow. For some details on the project idea, you can read my posting about it at http://gamedev.enigmadream.com/index.php?topic=1589.0. What I think I need is:
Ability to branch by user and maximize merging capabilities. I know source control systems are mainly focused on branching by version, and we could maybe think of each user maintaining their own version. But I guess we need some really robust merging capabilities to maximize the abilities of one user to merge changes from another user into their own branch, for example. So I think I would like the ability for "cross-branch" merging without having to merge into the common root branch first. (I'm most familiar with Team Foundation Server (TFS), which doesn't easily support this.)
Massive branching and merging. If there are hundreds or thousands of people wanting to incorporate their own changes into the project, there could be a lot of branches, and the system would need to be able to handle that without a meltdown. A single user might want to create multiple branches deriving from multiple other users' branches under their own name too, ideally, with the ability to merge among them to some extent.
Permission control by branch. I see SourceForge supports Subversion and Mercurial, but does not currently support permission controls by path/branch on these (as far as I can tell), although that does appear to be a feature under consideration. Users should be limited from pushing their code into other branches. I suspect the normal operations for a user would be pulling edits from other branches into their own branch, and checking in additional changes in their own branch.
A voting system. I know I shouldn't expect a source control system to support voting natively, but anything that could contribute to making this possible would be helpful. For example, maybe a voting system would involve or rely on the ability to label the best edits from various branches and pull them into a single file based on a label or a set of labels. And anything that would assist in merging the results of a selected set of labels from various branches (perhaps applying a new label to the set) could help too.
Very few files and possibly no directories. I would be willing to give up the ability to manage a large number of files or directories in exchange to gain any of the above because the format for the game file I'm considering is generally contained in a single text (XML or HTML5 -- haven't decided yet) file. But this does mean that the system should be pretty good at merging edits to relatively large text files efficiently. I know Team Foundation Server does a pretty good job of maintaining just changes to a file. I hope other source control systems do at least as well.
Or is source control not the proper paradigm to be talking about here? Is there some other technology ideal for merging code like this, one that doesn't involve source control and/or branching the way I'm thinking about it?
Any VCS, because "...source control systems are mainly focused on branching by version..." is just wrong; a VCS supports divergent changes to code over time, nothing more and nothing less.
Any DVCS, because they have reasonably good branch-merge capabilities from the ground up.
Mercurial, which has branch-level ACLs; SVN has path-based ACLs, and because a Subversion repository is (to some degree) a physical tree, ACLs can be applied to any part of the subtree, i.e. to branches as well.
Any code review tool, integrated with the VCS and modified for your specific requirements.
Fossil SCM is a single-file portable EXE and its repo is one file; any DVCS likewise adds only one repo directory to the existing tree and handles big files without headache.
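For the Mercurial option, the branch-level ACLs come from the bundled acl extension, configured on the repository you push to; a minimal sketch, with hypothetical branch and user names:

# in the served repository's .hg/hgrc
[extensions]
acl =
[hooks]
pretxnchangegroup.acl = python:hgext.acl.hook
[acl]
sources = serve
[acl.allow.branches]
# branch name = users allowed to push to it
alice-work = alice
bob-work = bob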

How to bring several branches of a tool back onto a common platform

I'm currently working with a tool that over the years has evolved naturally from a number of Perl scripts accessed through an Apache web server into a huge collection of tools using a common database and web site (still running Apache, but using Catalyst instead of CGI).
The problem we're having is that different departments have done local branches from the common main branch for implementing their own new functionality and adaptations.
We're now charged with the task of deciding how a common platform can be made available, where certain base functionality is maintained on a single track instead of in all these different branches.
These kinds of problems must spring up all the time, so I'm hoping someone has a good strategy to offer as to how we should continue from here. Any thoughts would be appreciated.
In general, make sure you have buy-in from everybody involved. Trying to do this kind of project without having people on board will just make your life more difficult.
Look for the quick wins. What functionality, if it changed, would have the fastest and clearest beneficial effect across all departments? If it takes you three months to get some good out of it, people won't rate the results very highly.
Break functionality down as far as you can. One of the biggest problems in forked legacy systems is that a seemingly innocuous change in one place can have huge ramifications elsewhere because of the assumptions made about state. Isolating state in different features should help you out there.

ClearCase UCM - Working with streams and components, how?

My co-workers and I are relatively new to the stream idea in ClearCase UCM. Currently management has created streams for each functional software package, each of which has defined interfaces and lives within a layered architecture. Developers create child streams depending on the package they are working in and attempt to develop their code independently; however, they normally have dependencies on other packages during initial development. This has caused our integration group to create system builds that developers then use to set up an adequate environment to develop their software and manually pull in dependencies (i.e. zip files, patches, etc.).
My contention is that this is wrong and not how UCM was intended to be used, but I needed someone more familiar with UCM to confirm my beliefs.
I believe that streams should be created from a functional point of view (while each package does some function, multiple architectural packages contribute to achieving some customer function; call it "ABC"). Then the component for each architectural package performing initial development for function "ABC" is added to the stream. All developers for function "ABC" now work in that stream (or in some set of child streams) to complete the function. Once complete, you have a baseline for each UCM component, and no "binding" between components exists from UCM's point of view (someone claimed this could somehow happen within UCM due to Activity Records).
NOTE: I agree that maybe you don't work this way FOREVER, but during initial development, where interfaces commonly change and you haven't implemented all the interfaces for all functions, having multiple components working together in a stream makes the most sense. Later you can transition to an "architectural-package-centric" way of working where each package is independent of changes in another.
Thoughts? Sorry for the long post, I felt the detail was necessary.
created streams for each functional software package
All developers for function "ABC" now work in the stream (or in some set of child streams) to complete that function
Yes, those are pretty much the two normal UCM usages of streams
(the only very bad usage is the one involving one stream per developer, just for isolation purposes; that would be madness, as explained before).
Those two modes are the system approach and the component approach, detailed in this answer.
Basically, you want to avoid too many merges or rebases during the initial phase of development and keep one coherent system (with all components writable) at the beginning.
Then, when the API has stabilized, you can go to one stream per writable component.
Note: that does not prevent you from establishing "system integration" streams, where you have a set of well-defined baselines referencing a stable state for all your components (read-only), and where you can deploy and test your system.
Those streams are maintained in one or several separate "integration" UCM projects.
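As a minimal sketch of the functional approach (all project, baseline, stream, and view names here are hypothetical), the function-level stream and a developer's view on it could be created like this:

cleartool mkstream -in ABC_project@/vobs/pvob -baseline comp1_BL1,comp2_BL1 ABC_dev@/vobs/pvob
cleartool mkview -stream ABC_dev@/vobs/pvob -tag alice_ABC_dev -stgloc -auto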
I do agree with VonC; I'd prefer the functional approach.
There is a ClearCase plug-in that can help you establish environments for your users (stream, view, and project strategy) whatever approach you take. Just google "clearEnv".

Is there any form of Version Control for LSL?

Is there any form of version control for Linden Scripting Language?
I can't see it being worth putting all the effort into programming something in Second Life if, when a database goes down over there, I lose all of my hard work.
Unfortunately there is no source control in-world. I would agree with giggy. I am currently moving my projects over to a Subversion (SVN) system to get them under control. I really should have done this a while ago.
There are many free & paid SVN services available on the net.
Just two free examples:
http://www.sourceforge.net
http://code.google.com
You also have the option to set one up locally so you have more control over it.
Do a search on here for 'subversion' or 'svn' to learn more about how to set one up.
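Once you have a repository, the day-to-day flow is small; a minimal sketch, where the URL and file names are hypothetical:

svn checkout https://svn.example.com/slprojects/trunk sl-scripts
cd sl-scripts
cp ~/exports/my_hud.lsl .        # script text pasted out of the in-world editor
svn add my_hud.lsl
svn commit -m "Initial import of my_hud.lsl"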
[edit 5/18/09]
You added in a comment you want to backup entire objects. There are various programs to do that. One I came across in a quick Google search was: Second Inventory
I cannot recommend this or any other program as I have not used them. But that should give you a start.
[/edit]
-cb
You can use the Meerkat viewer to back up complete objects, or use some of the test programs from libopenmetaverse to do backups in a text environment. I think you can back up scripts from the inventory with them.
Jon Brouchoud, an architect working in SL, developed an in-world collaborative versioning system called Wikitree. It's a visual SVN without the delta-differencing that occurs in typical source code control systems. He announced that it was being open sourced at http://archvirtual.com/2009/10/28/wiki-tree-goes-open-source/
Check out the video in the blog post to see how it's used.
Can you save it to a file? If so, then you can use just about anything: SVN, Git, VSS...
There is no good source control in-game. I keep meticulous version information in the names of my scripts, and I have a pile of old versions of things in folders.
I keep my source out of game for the most part and use SVN. LSLEditor is a decent app for working with the scripts, and if you create a solution with objects, it can emulate a lot of the in-game environment (giving objects, reading notecards, etc.).
I personally keep any code snippets that I feel are worth keeping around on github.com (http://github.com/cylence/slscripts).
Git is a very good source code manager for LSL since its commits work line by line, unlike other SCMs such as Subversion or CVS. The reason this is so crucial is that most Second Life scripts live in ONE FILE (since they can't call each other... grrr), so having the comparison done at the file level is not nearly as effective. Comparing line by line is perfect for LSL. With that said, it also (like SourceForge and Google Code) allows you to make your code publicly viewable (if you so choose) and available for download in a compressed file for easier distribution.
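As a minimal sketch of that workflow (the file names are hypothetical):

git init sl-scripts && cd sl-scripts
cp ~/exports/my_hud.lsl .        # script text pasted out of the in-world editor
git add my_hud.lsl
git commit -m "Snapshot of my_hud.lsl, in-world version 1.4"
git log -p -- my_hud.lsl         # line-by-line history of that one script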
Late reply, I know, but some things have changed in Second Life, and some things, well, have not. Since the Third Party Viewer policy still keeps a hard wall up against saving and loading objects between viewer and system, I was thinking about another possibility that has so far been completely overlooked: bots!
Scripted agents, a.k.a. bots, have all the usual avatar actions available to them. Although I have never seen one used as an object repository, there is no reason you couldn't create one. Logged in as a separate account, the agent can be wherever you want, automatically or by command; it can then collect any or all objects you are working on at set intervals or by command, and anything it has collected may be given to you or your collaborators.
I won't say it's easy to script an agent, and I couldn't even speak to writing an extension for a scripted agent myself, but if you don't want to start from scratch there is an extensive open source framework to build on: Corrade. Other bot services don't seem to list 'object repository' among their abilities either, but any that support CasperVend must already provide the ability to receive items on request.
Of course the lo-fi route, just regularly taking a copy and sending the objects to a backup avatar, may still be a simple backup solution for one user. It does necessitate logging in as the other account, either in parallel or once every 20 or so items, to be sure they are being received and not capped by the server. This process cannot rename the items or sort them automatically like a bot could. Identically named items are listed in inventory with the most recent at the top, but this is a mess when working with multiples of various items.
Finally, there is a Coalesce feature for managing several items as one in inventory. This is currently not supported for sending or receiving objects, but in the absence of a bot it can make it easier to keep track of projects you don't wish to actually link as one item. (Caveat: don't rez 'no-copy' coalesced items near 'no-build' land parcels; any that cannot be rezzed are completely lost.)