I am about to try to automate a daily build, which will involve database changes, code generation, and of course a build, commit, and later on a deployment. At the moment, each developer on the team puts their structure and data changes for the DB in two files respectively, e.g. 6.029_Brady_Data.sql. Each structure and data file includes all changes for a version, but all changes are repeatable, i.e. guarded with EXISTS checks etc., so they can be run every day if needed.
What can I do to bring more order to this process, which is currently basically: concatenate all structure change files, run them repeatedly until all dependencies are resolved, then repeat with the data change files?
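For reference, here is roughly what the current process looks like when scripted. This is only a sketch: the server name, database, file-name patterns, and the use of sqlcmd are all stand-ins for our actual setup.

```powershell
# Run all structure scripts repeatedly until a pass completes with no
# errors (i.e. all cross-script dependencies have been resolved).
$structureScripts = Get-ChildItem "C:\build\db\*_Structure.sql" | Sort-Object Name
$maxPasses = 10   # guard against scripts that can never succeed

for ($pass = 1; $pass -le $maxPasses; $pass++) {
    $failures = 0
    foreach ($script in $structureScripts) {
        # -b makes sqlcmd return a non-zero exit code on any SQL error
        sqlcmd -S myserver -d mydb -b -i $script.FullName | Out-Null
        if ($LASTEXITCODE -ne 0) { $failures++ }
    }
    if ($failures -eq 0) { break }   # everything applied cleanly
}
if ($failures -ne 0) { throw "Structure scripts did not converge after $maxPasses passes" }
# ...then repeat the same loop for the *_Data.sql files.
```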
Create a database project using Visual Studio Database Edition, put it into source control, and let the developers check in their code. I have done this and it works well with daily builds, and it offers a lot of support for structuring your database code. See this blog post for the features:
http://www.vitalygorn.com/blog/post/2008/01/Handling-Database-easily-with-Visual-Studio-2008.aspx
Background: We use a feature branching strategy to isolate change. Here's a quick diagram of what I mean. Nodes are branches, edges are parent/child relationships in TFS:
Most of the time we create feature branches on a per issue basis, so there are a lot of feature branches.
Also, for my purposes, I only care about migrating source control and changeset history. We don't use TFS work items, test cases, or test results.
Attempt 1: When I first ran the migration tool, it ran for about an entire day before filling up my hard drive and failing.
Attempt 2: Thinking that the 150-ish feature branches were the culprit for the slowness/storage needs, I wanted a way to migrate only the "dev" branch of my "ecomm" team project collection in the diagram above. I did not see any way in the OpsHub tool to migrate a single branch.
I accomplished this by creating a new team project collection "ecomm-migration", then branched $/ecomm/dev to $/ecomm-migration/dev. Then I migrated the ecomm-migration team project collection (which only contains a single branch).
Everything seemed to work: I could see all my source files on Visual Studio Online. However, when I browsed the history of the ecomm-migration project that was migrated to Visual Studio Online, the history was lost: everything appeared to be committed as a single changeset, and annotating files reflected this as well.
Why didn't the changeset history get migrated?
Am I doing it wrong?
Does my approach of creating a separate team project collection to reduce the size of the team project collection being migrated interfere with the tool's ability to migrate changeset history?
Are there better tools/options for my scenario?
One thing I had previously considered was pruning dead feature branches with tf destroy, but it would be nice to avoid such drastic, irreversible, history destroying measures if possible.
I am using version 1.1.0.005 of the utility.
Could you please also share which version of the utility you are using? You can find it via the About button in the utility's left menu pane.
Addressing your concerns point by point:
The disk space requirement of the utility is equivalent to the size of your project. A good reference point would be to get the latest version of the project onto your hard drive through Visual Studio and check its size. That SIZE + 20% is a good approximation of the space the utility will need on your C: drive during the migration process.
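If it helps, a quick way to get that number is something like the following PowerShell, where the workspace path is just an example:

```powershell
# Estimate the space the utility will need: size of a local 'get latest'
# of the project, plus the 20% buffer mentioned above.
$workspace = "C:\workspaces\ecomm"   # path to your local copy of the project
$bytes = (Get-ChildItem $workspace -Recurse -File |
          Measure-Object -Property Length -Sum).Sum
"{0:N1} GB needed on C: (size + 20%)" -f (($bytes * 1.2) / 1GB)
```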
You did nothing wrong by branching your dev branch into a new project, and the utility did nothing wrong either. When you migrated to VSO, you presumably selected only the newly created project. The trunk of your data is therefore not present in the new project on VSO, so the utility converts Branch -> Add, which is the only sane way to migrate when the source is not selected/available.
To get your desired result (the actual history), you will have to select for migration all the projects across which branching and merging was done, which of course is exactly what you did in Attempt 1.
Hence, we suggest you migrate your original project if you want to retain full history. We can work together to overcome the memory/space crunch you are running into.
I am working on a ModX website (mainly templates, but also system settings, user management, etc.) while a development website is already online and the customer has started to input content.
I haven't found a practical solution for pushing my work online (layouts are stored in the database) without overwriting the content input by the customer (which is in the database as well).
My present workflow consists of first replacing my local modx_site_content table with the one extracted from the online database, then pushing this hybrid database online. Not practical, and I am not sure user changes are confined to modx_site_content only.
There must be a better workflow though! How do you handle this?
I don't think it gets any easier than selecting the tables you need and exporting only those into the live environment. Assuming you only work on templating, the template, snippet & chunk tables are all you need to export.
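For example, assuming the default modx_ table prefix (the exact names, and chunks living in modx_site_htmlsnippets, are worth verifying against your install), a selective dump could look like this:

```powershell
# Export only the templating tables from the development database.
# User, database and table names here are assumptions.
mysqldump -u devuser -p --result-file=templating_tables.sql modx_dev `
    modx_site_templates modx_site_snippets modx_site_htmlsnippets
# Then load the dump into the live database, e.g.:
# mysql -u liveuser -p modx_live -e "source templating_tables.sql"
```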
We usually take a copy, develop, and merge only once, when the new features are supposed to go live; this minimizes the trouble. The client can also continue working normally until D-day.
If you're doing a lot of snippet work, you could always include an actual PHP file instead, work directly on those files in your editor, and connect them to Git and whatnot.
If your project is not very big, you can store your chunks, resources, etc. in separate files (there is an option called "Static Resource") and then manage your changes with Git. Otherwise, you need to store user data in a separate table and deploy the whole database with a tool like Fabric.
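As a rough sketch of the Git side of that approach (the paths and remote URL here are made up; point them at wherever your static element files actually live):

```powershell
# Put the static element files under version control so template changes
# travel through Git instead of through the database.
Set-Location C:\sites\mysite\core\elements
git init
git add .
git commit -m "Track templates/chunks/snippets as static files"
git remote add origin https://example.com/mysite-elements.git
git push -u origin master
# On the live server, deploying template changes is then just: git pull
```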
My development team is in the process of implementing branching and merging in TFS. Our software is contained in one solution with many projects embedded within it. One of those projects contains all the SQL scripts that keep our databases versioned.
We are having trouble figuring out how to handle the Database project during merging. For example, whenever a developer needs to make any changes to the database (add/remove a stored proc, create table, add indexes, etc) we create a script file. Each file gets named with a version number. For instance, if the last script file that was checked in is named 4.1.0.1, I would name my new script 4.1.0.2. We use these file names to match a software version with the corresponding database version it needs to run.
Say I create a branch off our main code branch to do a new feature. I do all the coding and put all my SQL changes in one script file, which I add to the DB project. Obviously I could just merge from the main code branch into my new branch to make sure I have the latest list of SQL scripts, so I could name mine correctly. The problem is that new script files are added several times throughout the day, and we think we're going to have a ton of merge conflicts over the script naming.
I'd like to somehow automate this, so a developer can just add a script with an arbitrary name, and during the merge a hook or event figures out which files are new to the main code base and how to name them correctly, so the developers don't have to worry about it. For example, I could just create a new file called "new_script.sql", and when I merge it back to the main code branch it would get renamed to 4.1.0.2.
Previously I've used tools such as RoundHouse to take care of all the SQL versioning, however at my current job we use Sybase SQL Anywhere 10, and I haven't been able to track down something similar to RoundHouse that will work with Sybase.
Can anyone point me in the right direction on how to automate a task during a TFS merge? I'm assuming this can be done using PowerShell, but my concern is that most of the development team is unfamiliar with PowerShell and I was hoping to be able to automate this task without the developer needing to leave the team explorer window.
Any help is greatly appreciated!
Thanks!
I think you are absolutely right that you will get naming conflicts if you keep this strategy and introduce branches.
I think you have two options:
When adding these files on a branch, add them prefixed with the branch name, e.g. branch_0.1.sql and branch_0.2.sql. Then when you merge your branch back into the trunk, you would rename them to the correct names. You could write a program to do this for you (see the sketch after these two options), and it would be fairly simple.
To make this simpler, you could also create just one SQL file per branch and keep adding to it. Then even the merge is a very simple process.
Ditch the self-versioned files. TFS is version control; there isn't much need to version the files yourself. We use Visual Studio database projects for our database versioning and this works absolutely fine. I guess you would need to change something in your deployment strategy if you went this way.
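For option 1, here is a rough PowerShell sketch of the rename step. The folder layout, the branch_ prefix, and the hard-coded 4.1.0 version prefix are all assumptions to adapt to your setup:

```powershell
# Find the highest existing 4.1.0.x script, then rename each
# branch-prefixed script to the next free number.
$scriptDir = "C:\tfs\main\Database\Scripts"
$latest = Get-ChildItem "$scriptDir\4.1.0.*.sql" |
    ForEach-Object { [int]($_.BaseName -replace '^4\.1\.0\.', '') } |
    Sort-Object | Select-Object -Last 1

Get-ChildItem "$scriptDir\branch_*.sql" | Sort-Object Name | ForEach-Object {
    $latest++
    Rename-Item $_.FullName "4.1.0.$latest.sql"
}
```

You would still need to check the renames in to TFS afterwards, but running something like this as a post-merge step keeps the developers out of the numbering business.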
I hope this helps.
I am beset by my lot using VSS 2005 (8.0.50727.42) as source control, which I really struggle to get on with. I am proposing moving to SVN (http://www.visualsvn.com/server/) and have found a tool which appears to do the migration, pulling all the history across in order to keep my fellows happy: http://vsstosvn.codeplex.com/
(If anyone has had any success or experience with this tool, I would be interested to hear your thoughts.)
However, in order to make sure this works I would like to do a trial run, but I have no idea how to take a backup of the existing VSS database to do so. This tool also appears to change all the source control bindings in the solution, so if it goes tits up I would probably be beaten.
Can I simply make a copy of the folder structure in which srcsafe.ini resides?
It's just that it seems to have all kinds of crap in its data folder: folders called a, b, c, etc.
Any help much appreciated, thanks!
I've used the VSS2SVN command line client in the past and it worked OK. I think it was hindered somewhat by how VSS had been abused (poor commit messages, sporadic commits to single files), so the commit history was only loosely useful.
I can't remember how I worked it but it was probably just following the documentation for both VSS2SVN and VSS.
The documentation for Visual SourceSafe (appears to be the 2005 version going by the "What's new" pages) has instructions on how to backup and restore a VSS database with history. You can do it all from the administrator interface and restore to a new location, or there are command line clients to do it.
Notice the warning that users cannot be using the database while you make the backup, and that the Analyze utility cannot be running. This implies that it's probably just a simple file copy over the network with no protection or locking in the database. You'll probably need to schedule the backup around your users (it was OK when I did it, as there were only three of us).
Edit: I've found an article which summarises the options for doing a VSS backup; it seems familiar, so I might have referenced it when I performed our migration. The outcome is that yes, you can just copy the directory with all the VSS information, but again you need to be sure that the database cannot be modified while it is being copied.
https://support.microsoft.com/en-us/kb/244016
Make sure that no one is using the database and that Analyze will not begin to run while you are backing up the database.
Copy the following folders:
\DATA
\Temp
\USERS
Copy the Users.txt and Srcsafe.ini files.
When you follow this procedure, you can do a full restore of the database by replacing the existing Users, Temp and Data folders as well as the Users.txt and Srcsafe.ini files with the copied versions.
You can also use this procedure to move the database to another location by placing the copied files into a new folder. To open the database, on the File menu in the Visual SourceSafe Explorer, click Open SourceSafe Database to browse to the new location.
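Scripted, that procedure is just a handful of copies. A sketch, assuming srcsafe.ini lives in C:\VSS (adjust both paths), which is enough for the trial run you have in mind:

```powershell
# Copy the folders and files the KB article lists to a backup location.
# Make sure nobody is using the VSS database while this runs.
$vssRoot = "C:\VSS"
$backup  = "D:\VSS-backup-$(Get-Date -Format yyyyMMdd)"
New-Item -ItemType Directory -Path $backup | Out-Null

foreach ($folder in "DATA", "Temp", "USERS") {
    Copy-Item (Join-Path $vssRoot $folder) $backup -Recurse
}
Copy-Item (Join-Path $vssRoot "Users.txt")   $backup
Copy-Item (Join-Path $vssRoot "Srcsafe.ini") $backup
```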
I am looking to quickly move changes between Salesforce Production and Sandbox. Is there any way to see the differences between the two environments, i.e. how many workflows, objects, and email templates have been modified or added, in a compared view?
I know we can use an outbound change set, but it's a tedious job to move the changes that way, and it's not feasible when Production is continuously being updated.
After speaking with the experts at two Dreamforce conferences, I find the only way to get a description of an instance is to use the Force.com IDE, as suggested by LaceySnr. I've learned a couple of techniques that help.
First, I no longer even attempt to use change sets. They are time consuming to build, offer no clarity as to what is really inside, and sometimes just won't work.
Second, I keep at least two Force.com IDE projects for each instance (test, production). The first project has everything (check every metadata component). The second project is tiny and only has the components I want to work on.
The first project is checked into some change control system: CVS, SVN, Git, Mercurial, etc., your choice. Using the differencing tools on this project lets you compare change sets.
But it is nearly impossible to develop using the first project, because it takes too long for Force.com to process even the smallest change; it processes the entire project whenever any change is made. So make all the code changes in the smaller project.
Then look at the ANT build tools (http://www.salesforce.com/us/developer/docs/apexcode/Content/apex_deploying_ant.htm) to automate the migration of changes from the smaller project to the larger one.
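As a rough idea of what the automated step can look like (the paths are made up, and the target name is taken from the migration tool's sample build.xml, so adjust it to match your own build file):

```powershell
# Push changes from the small project with the Force.com Migration Tool.
# Assumes Ant and the tool are installed, and that build.properties holds
# the credentials for the target org.
Set-Location C:\work\force-small-project
ant -buildfile build.xml deployUnpackaged
if ($LASTEXITCODE -ne 0) { throw "Deployment failed; check the Ant output" }
```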
I've not done this with workflows, but it is a method I use for code, layouts, and objects: use the Force.com IDE from http://developer.force.com to set up projects for both Sandbox and Production, being sure to select all of the metadata components that you want (you'll want to include workflows, for instance).
This will leave you with the contents of your projects stored inside a project directory in the IDE's workspace directory; then you can easily use a diff tool (I use the free DiffMerge on Mac) to compare the directories, and of course drill down into files to see what changes exist.
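If you want a quick command-line pass before reaching for a GUI diff, something along these lines (the workspace paths are assumptions; Get-FileHash needs PowerShell 4 or later) lists every file that differs between the two project folders:

```powershell
# Build a list of (relative path, content hash) pairs for each project
# folder, then report everything that differs between the two.
function Get-ProjectState($root) {
    Get-ChildItem $root -Recurse -File | ForEach-Object {
        [pscustomobject]@{
            Path = $_.FullName.Substring($root.Length)
            Hash = (Get-FileHash $_.FullName).Hash
        }
    }
}

Compare-Object (Get-ProjectState "C:\workspace\sandbox") `
               (Get-ProjectState "C:\workspace\production") `
               -Property Path, Hash
```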