How to set up a DotNetNuke development environment with source control? - version-control

My team is developing a new DotNetNuke web application and would like to know what is recommended for setting up a development environment with source control and automated builds. We would like to keep the DNN source code separate from the source code of our custom modules and extensions.
The DotNetNuke Compiled Module template for Visual Studio wants us to store the source code in the DesktopModules directory of the DNN source code and output to the DNN source code bin directory. Is this the recommended structure? I would rather keep the files in different locations, but then it becomes more difficult to run and debug locally as it would require an install of the module for each change. Also, how should an automated build deploy any changes?
How have others set this up? Is there a recommended best practice?

For my source control, I develop modules in their own project. This contains the module code, test code, data provider code (if applicable) and anything else. This is checked into source control like any other project. Note that the module project contains no links to a specific DNN website, and DNN references are made in the project to a common "bin" directory that references your target build. For example, in my projects folder, I have \bin460, \bin480, \bin510, \bin520 etc. Each of these folders holds a set of binaries for a specific DNN version. That way you can build against a particular version but test against any version you like.
The problems with source-controlling a module in place in a DNN install are:
- sometimes not all of the module code is easily isolated under a single parent directory
- it doesn't lend itself well to a PA (Private Assembly) module approach
- it's not easy to shift the project to a different DNN version for development or testing
- it's easy to inadvertently source control parts of the DNN solution, particularly with integrated VS source control solutions.
This approach compiles quickly because you're not trying to compile the entire DNN solution. For test deployment I have a build script that copies the various parts of the module into a target website. This can be hooked into the compile (as a post-build step that calls the build script) or just run after a successful compile in a cmd window. My build script has a 'target' environment switch, so that I can say 'dnn520' to deploy the build to my test dnn520 install. Note that you need to manually create the module configuration first before this will work, but this is a one-time effort, and you can use the export feature to create your .dnn module manifest.
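To give a rough idea of what such a deploy script can look like, here is a minimal Python sketch; the site paths, folder names and module name below are illustrative assumptions, not part of the setup described above:

import argparse
import shutil
from pathlib import Path

# Hypothetical layout: the module source sits next to this script, and each
# test site lives under C:\websites\<target>. Adjust to your own structure.
SOURCE_ROOT = Path(__file__).resolve().parent
SITES = {
    "dnn480": Path(r"C:\websites\dnn480"),
    "dnn520": Path(r"C:\websites\dnn520"),
}
MODULE_NAME = "MyModule"  # hypothetical module name

def deploy(target):
    site = SITES[target]
    # copy the ascx/resx/js parts into DesktopModules\<module>...
    shutil.copytree(SOURCE_ROOT / "WebControls",
                    site / "DesktopModules" / MODULE_NAME,
                    dirs_exist_ok=True)
    # ...and the compiled assembly into the website's bin folder
    shutil.copy2(SOURCE_ROOT / "bin" / (MODULE_NAME + ".dll"), site / "bin")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="copy module parts into a target DNN site")
    parser.add_argument("target", choices=sorted(SITES))  # e.g. dnn520
    deploy(parser.parse_args().target)

You would run it as something like python deploy.py dnn520 after a successful compile (the script name is, of course, arbitrary).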
To build your module package, invest the time in a comprehensive script that takes the various parts from your source directory and zips them into an install package. Keep all of the parts in your source control folder, copy them into a temp directory, then run a command-line zip utility (I use an ancient version of pkzip) to pack it into an installable file.
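As a sketch of that packaging step, the following Python script stages the parts in a temp directory and zips them into an install package; Python's zipfile module stands in for the command-line zip utility, and the file list and names are placeholders for your own module layout:

import shutil
import tempfile
import zipfile
from pathlib import Path

# Hypothetical list of parts -- the real manifest, controls and assemblies
# come from your own module layout.
PARTS = ["MyModule.dnn", "License.txt", "WebControls", "bin/MyModule.dll"]

def build_package(source_dir, output_zip):
    with tempfile.TemporaryDirectory() as tmp:
        staging = Path(tmp)
        # stage the parts in a temp directory...
        for part in PARTS:
            src = Path(source_dir) / part
            if src.is_dir():
                shutil.copytree(src, staging / src.name)
            else:
                shutil.copy2(src, staging / src.name)
        # ...then pack them into a single installable zip
        with zipfile.ZipFile(output_zip, "w", zipfile.ZIP_DEFLATED) as zf:
            for f in sorted(staging.rglob("*")):
                zf.write(f, f.relative_to(staging))

build_package(".", "MyModule_01.00.00_Install.zip")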
The benefits of this approach are:
- separation of module code from installed code
- simple way of keeping only the module code in source control (don't have to exclude all the website code)
- ability to quickly test out modules in different DNN versions
- packaging script allows you to quickly and easily build a new version of a module for install testing/deployment
The drawbacks are:
- can't use the magic green 'go' button in VS (have to manually attach debugger)
- more setup time than developing in-place

We typically stick to keeping the module code in a folder under DesktopModules and building to the website's bin directory.
In source control, we just map the individual modules, rather than the entire website. Depending on what we're working on, a module may be an entire project in source control, or we may have multiple related modules in the same project, living next to each other.
Automatically deploying changes is somewhat difficult in DNN. It's highly recommended to have a build script that packages your module into an installable form. You can then copy installable packages into the website's Install/Module folder and request the URL /Install/Install.aspx?mode=InstallResources, which will install any packages in that folder.
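A minimal sketch of that last step in Python, assuming a local site at C:\websites\dnn-dev reachable at http://localhost/dnn-dev (both made up for illustration):

import shutil
import urllib.request
from pathlib import Path

# Assumed locations -- substitute your own site path and URL.
SITE_ROOT = Path(r"C:\websites\dnn-dev")
SITE_URL = "http://localhost/dnn-dev"

def install_packages(package_dir):
    # drop every install package into the website's Install/Module folder...
    target = SITE_ROOT / "Install" / "Module"
    for package in Path(package_dir).glob("*_Install.zip"):
        shutil.copy2(package, target)
    # ...then request the bulk-install URL so DNN installs everything it finds there
    urllib.request.urlopen(SITE_URL + "/Install/Install.aspx?mode=InstallResources")

install_packages("dist")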

In response to bduke's answer: you shouldn't, and don't want to, build projects in the DesktopModules folder.
- That's where all of the out-of-the-box source code for the site goes.
- That's where your modules will be "installed", and thus if someone "updates" or re-installs one, it will be overwritten.
- It can make upgrading your application far more difficult. Many developers don't understand the idea of not touching the original source code files to modify their behavior, because those changes will just be overwritten when you perform an upgrade.
If you want to build modules, create a solution folder called Modules and place your separate module projects there.
If you want to debug them, point the debug build output to the web\bin folder.
If you want to install/deploy them, build them in release mode and install them through the Module/Extension filter.

TFS Source Control Uncompressed Build

Is there a way to create a "build" but not actually compile the output of the site? Basically, I want to push the files live from source control in TFS to the final IIS folder destination.
I have used CopyDirectory on my other project builds, but that requires a BuildDetail.DropLocation (compiled Build). Maybe there is another option for the CopyDirectory Source I could use that wouldn't require a build DropLocation.
To simplify: I want to copy files directly from TFS source control to a folder, using a build template, but without compiling the files. Is that possible?
While the default .xaml build workflow for Team Foundation Build is indeed a compilation build, it does not have to be. I usually recommend that teams have at least one compile and one deploy .xaml workflow.
1) The CompileMyStuff.xaml (DefaultBuildTemplate.xaml) should take my stuff from source control and do whatever is needed to create a Build Drop folder with my output. I may or may not need to actually compile before creating the drop, and it looks like you just want to copy to the drop location.
2) The DeployMyStuff.xaml should take a build number and deploy my code to an environment of my choice.
It looks like you want to skip the "Drop" and go straight to deploy, and while I would never recommend this, you do have a "BuildDetail.BuildLocation" for the local workspace where the build server has done a get of your code. You can just "CopyDirectory" from there to your server/host for the website.
If you are having a little trouble you could use the Community Build Extensions and fire up PowerShell to do your copy/deploy.
I figured out a solution to this problem. I created a new .xaml file, and the only item I put in the sequence was "DownloadFiles". Then I filled out the properties of the task, ran a "build", and it worked.

Keeping SSIS packages under source control

I store all SSIS packages in a Subversion repository, along with their configuration files. A configuration file is almost always stored in the same folder as its package.
The problem is that SSIS always seems to store the path to the configuration file (the one saved in the package itself) as an absolute path.
When someone else checks out the folder with the package to a location different from the one on my development PC, the configuration file is not detected (because my absolute path is stored and doesn't exist on the other developer's PC). So the other developer has to remove this configuration and add it again from where it now lives on his local hard drive. The changed package is then saved, which causes a new version to be committed. When I get that version from SVN, it no longer matches the local path on my PC.
On a related note: another developer may want to change values in the configuration file as well. If I later get the latest version of everything from SVN, the package will no longer work on my PC.
How do you work around these inconveniences?
Another solution is to save your configuration in a database, with an environment variable as the first configuration entry to tell the package which database to look in; that's what we do. We keep scripts in our source control to populate ssisconfig for each server, but the package uses the actual table data from the database named in the environment variable we are using.
Anyone who has heard my SQL Saturday presentations knows I don't much care for XML, and this is one of the reasons. A trick to using XML configuration with varying locations is to use an environment variable (indirect configuration) to tell SSIS where it can look for that resource. The big, big downside to this approach is that you'd generally need to create an environment variable for each set of configuration files, or have a massive, honking .dtsconfig file, which becomes painful for versioning.
If XML configuration is a must, the option I prefer is to remove the "variableness": developers and admins get together and everyone agrees "there will be a folder, everywhere SSIS is run, to hold configuration files, and that location is X", and then it's just a matter of solving for X. At a previous job, we used D:\ssisdata\configs.
@HLGEM's approach of a table for configurations is hands down my favorite approach to SSIS configuration (until you get to 2012 and its project deployment model, where configuration is an entirely different animal).
I add a folder called "config" under my project's folder, add it to source control, and maintain the config file in this folder. You can also add it to the SSIS project if you like.
I think it's a good solution because everybody can have this folder and download the config file.
When the package is deployed, it will read the config file from the location you specify in the deployment manifest, so this solution won't impact your development.

Deploying a version control system for the company: how to use it with binary files

I am tasked with setting up a Mercurial version control system for our small team of developers (2-3 people). There was no version control system before, just shared folders and multiple copies. I don't have much experience in setting up version control systems except for personal projects; I just happen to be the most experienced person on our team in terms of version control. The code repository is in a shared folder on a central server; the top-level directory is the client name, and one level down is the project name for that client.
The problem is I haven't figured out how to deal with binary files in our code repository. From what I've read, binary files shouldn't be version tracked. But as the code repository is centralized on the server, shouldn't the binaries be in there as well? Otherwise, for things like image files and third-party DLLs, the project wouldn't build or run properly when cloned from the central server. Also, there is a nice feature of the Mercurial web interface where you can download the whole source package as a ZIP or BZ2 compressed file; without the necessary binary files, the downloaded project wouldn't run or compile.
I guess the solution is to include everything in version control except temporary files and files used for debugging; other than that, most binary files should be included? Due to the limitations of version control systems, I don't think there is a way to track changesets for binary files, so I guess we just have to live with that.
Edit: After more research on how to set up a version control repository, the recommended way of using version control is to "store everything which is created manually, and nothing else" (quoting Eric Sink).
You want to version control anything that you can't generate from other stuff in version control. That would be your source files, and your instances of third-party libraries, tools, etc. that your package relies on.
The binaries built from your project are something else entirely, and should be treated as different sorts of artifacts. If you want an easy-to-test downloadable archive, adapt your build process to provide that as a target: it should build the code, and then compress the source and built binary into the desired single file.
Binary files that are related to or required by the project must be included in version control; they can be tracked. The only things version control can't do with binary files are compare and merge.

Developing with Qooxdoo and multiple developers

I'm interested in Qooxdoo as a possible web development framework. I have downloaded the SDK and installed it in a central location on my PC as I expect to use it on multiple projects. I used the create-application.py script to make a new test application and added all the generated files to my version control system.
I would like to be able to collaborate on this with other developers on other PCs. They are likely to have the SDK installed in a different location. The auto-generated files in Qooxdoo seem to include the SDK path in both config.json and generator.py: if the SDK path moves, the generator.py script stops working. generator.py doesn't seem to be too much of a problem as it looks in config.json for an updated path, but I'm not sure how best to handle config.json.
The only options I've thought of so far are:
1) Exclude it from the VCS, but there doesn't seem to be a script to regenerate it automatically, so this could be dangerous.
2) Add it to the VCS but have each developer modify the path line, and accept that it might need to be adjusted whenever changes are merged.
3) Change config.json to be a path and a single 'include' line that points to a second file that contains all the non-SDK-path related information.
4) Use a relative path to the SDK and keep a separate, closely located copy of the SDK for every project that uses it.
Approach 1 would be ideal if the generation script existed; approach 2 is really nasty; I couldn't get approach 3 to work and approach 4 is a bit messy as it means multiple copies of the SDK littered about the place.
The Android SDK seems to deal with this very well (using approach 1), with the SDK path in its own file with a script that automatically generates that file. As far as I can tell, Qooxdoo puts lots of other important information in config.json and the only way to automatically generate that file is to create a new project.
Is there a better/recommended way to deal with this?
As an alternative to using symlinks, you can override the QOOXDOO_PATH macro on the command line:
./generate.py source -m QOOXDOO_PATH:<local_path_to_qooxdoo>
(Depending on the shell you are using you might have to apply some proper quoting of the -m argument). This way, every programmer can use his locally installed qooxdoo SDK. You can even drop the QOOXDOO_PATH entry from config.json to enforce this.
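Building on that, one way to approximate option 1 from the question is a tiny wrapper that each developer runs instead of generate.py directly. The file name local.sdkpath (kept out of the VCS, analogous to Android's local.properties) and the wrapper itself are assumptions for illustration, not part of qooxdoo:

#!/usr/bin/env python
import subprocess
import sys
from pathlib import Path

# 'local.sdkpath' is a made-up per-developer file, excluded from version
# control, containing nothing but the local qooxdoo SDK path.
sdk_path = Path("local.sdkpath").read_text().strip()

# forward all arguments to the stock generator, overriding QOOXDOO_PATH
subprocess.check_call(
    [sys.executable, "generate.py"] + sys.argv[1:] + ["-m", "QOOXDOO_PATH:" + sdk_path]
)

Each developer would then keep their own local.sdkpath and run, for example, python generate-local.py source.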
We work with a symbolic link pointing to the SDK; config.json contains just the path of the link.

How to manage external dependencies which are constantly being modified

Our development uses lots of open-source code, and I'm trying to figure out the best way to manage these external dependencies.
Our current configuration:
- We are developing for both Linux and Windows.
- We use svn for our own code.
- External dependencies (boost, log4cpp, etc.) are not stored in svn. Instead I put them under ./extern (or c:\extern on Windows). I don't want to put them in our repository because I will not be able to update them that way. Some of these are constantly being updated.
My questions
What to do if I need to modify external code?
Currently I have created a folder in my svn repository called extern_hacks, and that is where I put the modified external code. I then link (or copy, on Windows) the files into the external directory structure. This solution is problematic since it is hard to keep track of copying the files, and very hard to update from svn when files are sitting in two repositories (mine for the modified files, and the original repository, say SourceForge, for the originals).
How to manage versions of external dependencies?
I'm interested to hear how others deal with these issues. Thanks.
I keep them in svn and manage them as vendor branches. Keeping them loose externally makes it very hard to go back to a previous build, or to fix bugs in a previous build (especially if the bug comes from a change to the external dependency).
Keeping them in svn has saved me lots of headache, and also allows you to get a new workstation able to work on your codebase quickly.
I do not understand why you say
I don't want to put them in our repository because I will not be able to update them that way. Some of these are constantly being updated.
You really need to:
1) include external dependencies in your source control, periodically update them, and then test, test, test.
2) coordinate your build process with the updates for the external dependencies.
If your code depends upon something, then you really need to have control over when it gets updated/modified. Coding in a space where these dependencies can get updated at any time is too painful as you're no doubt finding out. I personally prefer option 1.
When I had to do something like this, I added the external source as an svn external, and then applied a patch to it. The patch contains my modifications to the external source. So I actually only version control my patches. Most of the time this works, if there are no "dramatic" changes in the external code.
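A rough sketch of that workflow in Python, in case it helps; the folder names, library name, repository URL and revision are placeholders, and the script simply shells out to the standard svn and patch command-line tools:

import subprocess
from pathlib import Path

# Assumed layout: pristine upstream code goes under ./extern/<lib>, and our
# local modifications live only as patch files under ./extern_patches (which
# is the part that is actually kept in svn).
EXTERN = Path("extern")
PATCHES = Path("extern_patches")

def refresh(lib, url, revision):
    # get a clean copy of the upstream code at a known revision...
    subprocess.check_call(["svn", "export", "--force", "-r", revision,
                           url, str(EXTERN / lib)])
    # ...then re-apply our versioned patch on top of it
    patch_file = (PATCHES / (lib + ".patch")).resolve()
    subprocess.check_call(["patch", "-p0", "-d", str(EXTERN / lib),
                           "-i", str(patch_file)])

refresh("log4cpp", "http://svn.example.org/log4cpp/tags/1.0.x", "1234")  # placeholder URL/revision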
Have you considered Maven? It's a build system that has excellent support for managing dependencies. For each project you can specify the required dependencies in an XML file as part of that project. The external libraries are held in a dependency repository (in our case Artifactory); this is separate from your version control system and can just be a network drive. It also allows managing different versions of projects.
I would be careful considering Maven because:
- it is another repository in a system where you already have a repository with your current version control system;
- it (Maven) is based on the only "common version control" every developer has, the file system (which means no metadata or properties attached to the files, and no proper history in terms of who modified what and when).
Now, when dealing with third-party libraries, you can consider keeping them in your version control system, but in a packaged way: that is, in a very compact form, with sources and documentation zipped, in order to have the smallest possible number of files.
That way, you will manage the deployment of those (many) third-party libraries easily, since the number of files to deploy is low.
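For illustration, packing such a library before check-in can be as simple as the following Python sketch (paths and version strings are placeholders):

import zipfile
from pathlib import Path

def pack_third_party(lib_dir, version):
    # zip a third-party library (sources, docs, binaries) into one compact
    # archive so only a handful of files need to be checked in
    lib_dir = Path(lib_dir)
    archive = lib_dir.parent / (lib_dir.name + "-" + version + ".zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(lib_dir.rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(lib_dir.parent))
    return archive

pack_third_party("extern/log4cpp", "1.0")  # placeholder library and version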
Plus, having them under source control allows you to make a branch (say, a 'hack' branch) in which you will store the packaged (or zipped) version of the hacked library.
What you can store externally is the unzipped, complete set of files representing those libraries, since there is no real development on them, only the occasional hack: normally, your job is not to develop existing libraries, but to use them (even slightly modified) to implement features of your project faster.
If at some point you need to compare a hacked version with an official version, you just pull the appropriate 'hacked' version out of svn, unzip it, and compare it with the official (and externally stored) version (with WinMerge, for instance).