Azure website GIT deployment: server-compiled .DLL different from locally built .DLL - entity-framework

I have an MVC4 + EF5.0 .NET 4.5 project (say, MyProject). I'm able to run the project locally just fine, and when I FTP-deploy it to Azure Websites (not a cloud service) it runs fine too. However, if I do a GIT deploy, the site 'runs' for the most part, until it does some EF5.0 database operations, at which point I get the exception "Unable to load the specified metadata resource."
Upon debugging I noticed that if I:
GIT deploy the entire MVC4 project (as before)
FTP in and then replace bin\MyProject.dll with the bin\MyProject.dll file that I just built locally (Windows 8 x64, VS2012, Oct'12 Azure tools) after the GIT push (i.e. same source)
then the Azure-hosted website runs just fine (even the EF5.0 database functionality).
The locally built .dll is about 5 KB larger than the one built by the Azure GIT publish, and both are 'Release' builds. Clearly the project is being built differently after the GIT push (inside Azure) than on my own PC. I checked the portal and it's set to .NET 4.5. I'm also GIT-pushing the entire solution folder (with just one project), not just small bits and pieces.
When I load the locally built and the remotely built MyProject.dll files, I notice the following difference (FrameworkDisplayName):
local: System.Runtime.Versioning.TargetFrameworkAttribute(".NETFramework,Version=v4.5", FrameworkDisplayName = ".NET Framework 4.5"),
remote: System.Runtime.Versioning.TargetFrameworkAttribute(".NETFramework,Version=v4.5", FrameworkDisplayName = ""),
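(For anyone wanting to reproduce the comparison: the attribute can be dumped from each copy of the DLL with a bit of PowerShell reflection; the path below is a placeholder for whichever copy you're inspecting.)
[System.Reflection.Assembly]::ReflectionOnlyLoadFrom("C:\temp\MyProject.dll").GetCustomAttributesData() | Where-Object { $_.ToString() -match "TargetFrameworkAttribute" }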
Does anyone know why this is happening and what the fix might be?

Yes, this is a bug that will be fixed in the next release. The good news is that it's possible to work around it today:
First, you need to use a custom deployment script, per this post.
Then you need to change the MSBuild command line in the custom script per this issue.

Credit goes to David above for the pointers and hints. I voted him up, but I'll also post the exact solution to the issue here. I've edited my original post because I found a major bug that I didn't notice until I started from scratch (moved GIT servers). So here is the entire process that worked for me:
Download Node.js (it's needed even for .NET projects because the GIT deploy tools use it)
Install the azure-cli tool (open a regular command prompt and run npm install azure-cli -g)
In the command prompt, cd to the root of your repository (cd \projects\MyRepoRoot)
In there, type azure site deploymentscript --aspWAP PathToMyProject\MyProject.csproj -s PathToMySolution.sln (obviously adjust the paths as needed)
This will create the .deployment and deploy.cmd files (see the sketch after this list)
Now edit the deploy.cmd file and find the line starting with %MSBUILD_PATH% (there will be just one)
Insert the /t:Build parameter. For example:
[Before] %MSBUILD_PATH% <blah blah> /verbosity:m /t:pipelinePreDeployCopyAllFilesToOneFolder
[After] %MSBUILD_PATH% <blah blah> /verbosity:m /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder
Push to GIT (check the GIT output if everything went ok)
Browse to your website and confirm it works!
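For reference, the generated .deployment file simply points Kudu at the custom script:
[config]
command = deploy.cmd
And after the edit, the MSBuild line in deploy.cmd looks roughly like this (a sketch; the exact properties Kudu generates vary by project and tool version):
%MSBUILD_PATH% "%DEPLOYMENT_SOURCE%\PathToMyProject\MyProject.csproj" /nologo /verbosity:m /t:Build /t:pipelinePreDeployCopyAllFilesToOneFolder /p:_PackageTempDir="%DEPLOYMENT_TEMP%";Configuration=Release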
I'll be glad when it's fixed in the next revision so we won't have to maintain the build script.

Related

Copying files and deploying to Azure without building using Visual Studio Team Services

I'm attempting to deploy a web site to Azure using VSTS. Basically, I commit code to the GIT repo and have it set up to run CI, so it begins building as soon as I commit. However, once it hits the release section, it never copies the code to the Azure web app; rather, it gives me this line:
Info: Updating file ({projectname}\error.txt).
It doesn't copy the files I changed; it always copies just this one file. I checked, and there is indeed an error.txt file in my website directory in Azure, but it is always blank.
This build/deploy process isn't "standard": the build step only downloads from source control and doesn't actually compile anything, because the project isn't a "web application" but just a "web site", meaning it doesn't need to be built.
So my build step is as follows:
Get Sources
Run on Agent - this step is empty
So the idea is that it just downloads everything from source control; that's it.
Then, my release step is as follows:
Artifacts come from the build step above
deploy to environment 1 (dev)
Azure app service deploy, using "package or folder" as $(System.DefaultWorkingDirectory)/
Any idea what I might be doing wrong here?
So I actually figured this out and will leave this here in case anyone else needs it.
I admit I'm pretty new to the Azure/VSTS world, so maybe someone else is making my mistake as well.
If you don't need to "build" your project, then don't. I resolved it by simply skipping the build step altogether. What I was really after was to just download the files from source control and deploy them as-is.
In your release editor, you can specify which "artifact" you want to use to release, and one of the options is source control, which is what I did.
This would be useful for websites like mine where you don't need to build them (mine is DNN/DotNetNuke, so you don't build it before deploying).

mkdocs site doesn't exist after build on codeship

I'm trying to use codeship to automate building docs from a repository.
After executing the command mkdocs build --clean, I get a path to where my site folder is supposed to be:
INFO - Cleaning site directory
INFO - Building documentation to directory: /home/rof/src/bitbucket.org/josephkobti/test/site
The thing is, I can't find that folder using the SSH console for debugging.
The reason for the folder not existing was a misunderstanding of Codeship's SSH Debug Build feature, documented here: https://documentation.codeship.com/basic/builds-and-configuration/ssh-access/
The VMs started for the debug feature are not the actual VMs that run the automated builds. They are new VMs running the same initialization steps as the automated builds (i.e. cloning the repository, configuring project specific environment variables, ...) but none of the actual setup or test commands.
Because of this, the mkdocs build --clean command wasn't run either when Joseph connected via SSH to the debug VM, and as such the generated site wasn't available.
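If you do want to inspect the generated site from a debug session, you can re-run the build step yourself after connecting; a minimal sketch, using the repository path from the log above:
cd ~/src/bitbucket.org/josephkobti/test
mkdocs build --clean
ls site/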

Capistrano: Deploy Laravel app on ubuntu server

I have an Ubuntu staging server with Apache, PHP, MySQL, Git, and Composer installed. I have a private Git repository set up on Bitbucket; the project is already cloned to the staging server and to my local development machine. The Laravel setup works perfectly fine on both machines.
Currently, whenever there is an update to the Git repository, I log in to the staging server, pull the latest code, and run composer install, npm install, and bower install.
I want to automate this process with Capistrano. I checked the tutorials online, but all of them clone the repository whenever I issue a deploy command, creating a fresh installation every time. Can't Capistrano work with the existing folder that is already set up?
The basic premise of Capistrano is the idea that a new installation is created each time, such that there is not much to be done initially in terms of setup. If you'd rather use a different mechanism, a different tool would work better for you! For such cases, you could try to write a script using SSHKit directly (fairly advanced), or write a makefile or some other tool to automate your process.
If you do want to try to make Capistrano work on its terms, look into how linked_dirs and linked_files work. They allow you to keep some files (e.g. config files, log dirs) outside of the deployment directory so they are shared between deploys.
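For illustration, a minimal sketch of how that is declared in config/deploy.rb (which files and directories you link are app-specific; these names are just examples):
# config/deploy.rb
set :linked_files, %w{.env}
set :linked_dirs, %w{storage node_modules bower_components}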

Automated Building and Release Management VS2012

Trying to make my life easier: currently we have four developers working in Visual Studio 2012, and we are using TFS 2012 for source control. The project we work on is a multi-tenant web application (a single source directory with multiple DBs) that is a mixture of legacy ASP and VB6 COM components coupled with new C# code. We use TFS for source control and for managing user stories and bugs. Because of the way our site works, it cannot be run or debugged locally, only on the server.
Source control is currently set up with a separate branch for each developer, whose working directory is mapped to a shared network path on the dev server with a website pointed at it in IIS (Dev01-Dev05, etc.). The developers work on projects in their branch, test them using their dev website, then check in changes to their own branch and merge those into the trunk. The trunk's workspace is mapped to the main dev website so that the developers can test their changes against the other customers' dev domains, checking against customizations and variances in functionality based on the specific DBs they are connected to.
Very long explanation, but basically each dev has a branch and a site, and these are merged into the trunk, which has its own site.
In order to deploy to our staging server:
Compile the trunk's website via a bat file on the server
Run a Windows app I built to query TFS for changesets associated with specific WorkItems in a certain status, and copy all the files for those changesets from the publish folder to a deployment folder
Run another bat file on the server that uses Red Gate's Deployment Manager to create a package from those new files
Go to the DM site on our network to create and deploy that release (I haven't been able to get the command-line tools to work for this, so I have to do it manually)
Run any SQL scripts that have been saved off in folders matching ticket numbers on each database (10 or so customer DBs) to support the release
I have tried using TFS's automated build features and never really got them to build the website correctly. I played around with CruiseControl too, with little success. Using a mishmash of skunkworks projects to do this is very time-consuming and unreliable at best.
My perfect scenario would be:
Gated check-in: attempt a build/publish every time a developer merges into the trunk, and reject and notify the developer if the build fails
At the end of the day, collect the TFS items of a certain status and deploy the files associated with them to the staging site
Deploy SQL scripts for those TFS items across all the customer DBs in staging
Eventually, run automated regression UI tests, and create new WorkItems or email the devs on failure
Update TFS WorkItems to a new state so QA/customers know their items are ready to test in our staging environment
Send a report of what items were deployed successfully
How can I get there so that I am not spending hours preparing and deploying releases to staging and, eventually, production? I'm pretty open to potential solutions; the one thing that would be hard to change is our source control. We can't really switch to Subversion or something else, so we are pretty stuck with TFS.
Thanks
Went back in and started trying to get TFS to build/publish my web solution, and I was able to get a build to complete successfully. Adding the MSBuild argument /p:DeployOnBuild=True and setting the MSBuild platform to x86 seemed to do the trick.
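For anyone wanting to reproduce that outside the build definition, the rough command-line equivalent is below (the 32-bit Framework path stands in for the "MSBuild platform: x86" setting; the solution name and configuration are placeholders):
%WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe MySolution.sln /p:DeployOnBuild=True /p:Configuration=Release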
Then I found https://github.com/red-gate/deployment-manager-tfs which gives you a build process template to do the package and deployment using the redgate tools. After playing with that for a bit I finally got it to create, package and deploy my build to our staging environment.
Next up will be to modify the template to run some custom scripts to collect only the correct items to deploy, deploy all the SQL files, and then set the WorkItems to the appropriate statuses after completion.
Really detailed description of your process. Thanks for sharing!
I believe you can set up TFS to have a gated check-in on a single branch; if you can set that up on the trunk, it would make sure that the merges build successfully. That could trigger MSBuild, if you can get that working, or a custom build job.
If you can get that working then you'd be able to use that trunk code as the artifact to send to Deployment Manager. That avoids having to assemble the files for deployment through the TFS change sets, as you'd be confident that the trunk could always build.
Are you using Deployment Manager to deploy the database from source control as well as the application?
That could be a way to further automate the process. SQL Source Control and SQL CI allow you to source control the structure of a database, keep a database up to date on each check-in, and run database unit tests. They also produce database packages for Deployment Manager, so you can deploy a release that contains both the application and the database.
If you want to send me the command you're using in step 4 to deploy the release using Deployment Manager, I can help out with that. The commands I use are:
DeploymentManager.exe --create-release --server=http://localhost:81 --project="Project Name" --apiKey=XXXXXXXXXXX --version=1.1
DeploymentManager.exe --deploy-release --server=http://localhost:81 --project="Project Name" --apiKey=XXXXXXXXXXX --version=1.1 --deployto=CI-Environment-Name
That will create release version 1.1 using the latest available packages for that project. You can optionally specify the package to be used when creating the release with:
--packageversion=<package name>=<version>
For example:
--packageversion="application=1.5"

Output binary files linked to my version-control server without a build system?

I am trying to set up an internal Mercurial HgWeb server on a Windows 2003 server. The HgWeb part is working. I could just share a folder to hold the released binary files for each project, but I am wondering whether I could somehow link the version control system to the binary build output, so that when there is a commit, the build output gets updated automatically for a release.
I know I could have a build system on the server end, but for Delphi, C#, and ASP.NET projects with a few third-party libraries, that seems like much more work.
Right now, I am thinking that for each project I will have two repositories: one for development (no output binaries), the other for release, which will include everything, including the build-result binaries (or would only the build result plus dependencies be a better idea?). But I don't know yet how to make the two synchronize automatically without committing twice manually.
Maybe simply a hook on the Dev repository that fires on every commit to the master branch and makes another commit to the Release branch?
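That hook idea would look something like this in the server-side Dev repository's .hg/hgrc (the batch script is hypothetical; you would still have to write the build-and-commit logic yourself):
[hooks]
# fires after each push that adds changesets; the script would build
# and commit the resulting binaries into the Release branch/repository
changegroup = C:\scripts\build_and_commit_release.bat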
You really need a build system like CruiseControl.NET to build your binaries after pushes happen to a remote repository that CC.NET is watching. The built binaries can then just be copied to a standard web server to be served up for download. CC.NET is not complicated to configure and supports Mercurial out of the box. Using a system like this, you get extras like build stats, the ability to run unit tests before publishing a build for download, and lots more.
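As a rough sketch, a minimal ccnet.config for this setup might look like the following (the paths, URLs, and the xcopy publish step are assumptions chosen to illustrate the shape, not a drop-in config):
<cruisecontrol>
  <project name="MyProject">
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <sourcecontrol type="hg">
      <repo>http://hg-server/hg/myproject</repo>
      <workingDirectory>C:\ccnet\myproject</workingDirectory>
    </sourcecontrol>
    <tasks>
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
        <workingDirectory>C:\ccnet\myproject</workingDirectory>
        <projectFile>MyProject.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
      </msbuild>
      <!-- copy the built binaries to the folder the web server serves for download -->
      <exec>
        <executable>xcopy</executable>
        <buildArgs>/Y C:\ccnet\myproject\bin\Release\*.* \\webserver\downloads\myproject\</buildArgs>
      </exec>
    </tasks>
  </project>
</cruisecontrol>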