MDM model update deployment

I've recently started an internship in an IT company and have been given a project to work on that involves making changes to a model on their MDM server. I've made these changes on the dev server and am now ready to deploy to production.
I've done some research on deployment and found out that I can only perform a model update deployment if the initial deployment type was clone, but unfortunately the initial deployment type was new.
Would anyone have any advice as to what would be the best and safest way to perform the update?
Is it possible to create a model deployment package from the production server, delete the model and deploy the package as a clone?

Use deploy clone, not deploy new, when you first move a model between servers. Deploy clone keeps the identifiers (MUIDs) from the source model, so a later model update deployment can match the package against the existing model. Deploy new generates new MUIDs, which is why your production model can no longer accept an update package from dev.
So yes: delete the model on production and redeploy your dev package with the deploy clone option.
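If it helps, the steps with the MDSModelDeploy command-line tool that ships with MDS would look roughly like this (the service, model and package names are placeholders; run MDSModelDeploy listservices to find your service name):

    :: On the dev server: create a package, with data if you need it
    MDSModelDeploy createpackage -service MDS1 -model YourModel -version VERSION_1 -package YourModel.pkg -includedata

    :: On production: after deleting the existing model, deploy the package as a clone
    MDSModelDeploy deployclone -package YourModel.pkg -service MDS1

    :: Future releases from dev can then be applied as update deployments
    MDSModelDeploy deployupdate -package YourModel.pkg -version VERSION_1 -service MDS1

Take a backup of the production database (or at least create a package from the production model, as you suggested) before deleting anything, since deleting the model drops its master data.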

Related

Deployment scenario of git-integrated Azure Data Factory via ARM template

What happens if you have multiple features being tested in the test environment of an ADF V2 data factory and only one or a few of them are ready for production deployment? How do we handle this type of deployment scenario in the Microsoft-recommended CI/CD model of git/VSTS-integrated ADF V2 through ARM templates?
Consider dev, test and prod environments of ADF V2, where the dev environment is git integrated. The developers have debugged their changes and merged them into the collaboration branch after a pull request. The changes are published and deployed to the test environment first. Many features are being tested there, but only a few are ready for prod. How do we move the ones that are ready, since the ARM template takes the entire factory?
This is somewhat of a strange question, but you can apply the same logic to any application: how do you release a single feature when the application is only deployed as a single entity? The answer would be: use git flow or something akin to it, with feature branches and promotions.
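For example (branch names here are illustrative), each feature lives on its own branch and is only merged into the collaboration branch, and hence into the published ARM template, once it has passed testing:

    git checkout -b feature/new-pipeline   # develop and debug the feature in the ADF UI
    git checkout master                    # the collaboration branch
    git merge feature/new-pipeline         # merge only the features that are ready
    # then Publish from the collaboration branch to regenerate the ARM template (adf_publish)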

Entity Framework Core Migrations Separate CI/CD Pipeline

My company is moving to microservices, and as part of this shift DevOps is standing up CI/CD build and release pipelines in Azure with VSTS and Git.
The current model for managing migrations gives a developer two projects in two separate Git repositories:
Project 1 - API Services project - .NET Framework / .Net Core
Project 2 - Database project based on EF6 using the migration API
These projects have completely independent release pipelines based on their repositories: when you create a pull request into master, the pipeline builds and releases the project.
This new architecture also supports blue green deployments and our app services run on multiple nodes.
The issue we have is that with this set up we have to basically hand code our migrations and can't use any of the tooling provided in EF Core.
Most of the articles and documentation I have read shows running the migrations from app start up, but if you have multiple app service nodes how do you prevent 2 nodes from running the migrations?
Other articles I have looked at show moving migrations into a separate project, but that project needs a reference to the project with the DbContext in it. In my company's setup this is not possible. Neither can we do the reverse, since moving the DbContext into the database project prevents us from referencing it in the API services project.
Is there any way to support this model with EF Core?
What is the preferred way to implement blue green deployments with EF Core Migrations on a multi node app service?
I will claim that there isn't, and not because EF Core doesn't support it in some way, but because this sounds impossible from what I understood of your question.
Your company wants the ability to do blue/green deployments, but that is only really possible at the service layer, not at the database layer. The idea sounds really cool: fast rollback, almost no downtime. But in reality databases complicate things a lot.
So imagine your Project 1 is running on machines A and B (representing the blue and green deployments). A is currently the production environment and B is identical but not serving any requests at the moment. Both of them point to the exact same database (if not, it's not blue/green, it's just a separate environment). When you want to deploy your DB changes, you migrate your database, but now both machines A and B will be pointing at the updated database. You can keep switching from A to B, but both of them may have stopped working if your database migration broke anything.
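The usual way to keep both machines working across a schema change is to make every migration backward compatible (the expand/contract pattern). As a minimal sketch, here is the "expand" half as an EF Core migration; the table and column names are hypothetical:

    using Microsoft.EntityFrameworkCore.Migrations;

    // Purely additive change: the old (blue) code ignores the new column,
    // the new (green) code can start using it, so both can share the database.
    public partial class AddCustomerEmail : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            // Nullable, so existing INSERTs from the old code keep working.
            migrationBuilder.AddColumn<string>(
                name: "Email",
                table: "Customers",
                nullable: true);
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.DropColumn(name: "Email", table: "Customers");
        }
    }

The destructive "contract" half (dropping the old column or constraint) only ships after both colours are running the new code.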
Coming back to the pipelines: I don't really understand what you achieve by having DB migrations in a separate repository with a separate pipeline. It complicates coordination of the releases, as they are clearly dependent, but I don't see what problem it solves. And as you noted, you can't delegate creation of the migration scripts to EF Core, at least not without some manual work.
I would be happy to hear of any advantages of such a design.
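As an aside, on the question of two nodes racing to run migrations at start-up: one common way to serialize them is a database application lock, so whichever node acquires the lock applies the migrations and the others then find nothing left to do. A rough sketch for SQL Server; the lock name and timeout are arbitrary choices, not a drop-in implementation:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using Microsoft.EntityFrameworkCore;

    public static class StartupMigrator
    {
        // Call once during application start-up, before serving requests.
        public static void MigrateWithLock(DbContext context)
        {
            var connection = context.Database.GetDbConnection();
            connection.Open();

            using (var command = connection.CreateCommand())
            {
                // Session-scoped exclusive lock; held until the connection closes.
                command.CommandText = "sp_getapplock";
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.Add(new SqlParameter("@Resource", "efcore-migrations"));
                command.Parameters.Add(new SqlParameter("@LockMode", "Exclusive"));
                command.Parameters.Add(new SqlParameter("@LockOwner", "Session"));
                command.Parameters.Add(new SqlParameter("@LockTimeout", 60000)); // ms

                var result = new SqlParameter("@Result", SqlDbType.Int)
                    { Direction = ParameterDirection.ReturnValue };
                command.Parameters.Add(result);

                command.ExecuteNonQuery(); // blocks until granted or timed out
                if ((int)result.Value < 0)
                    throw new InvalidOperationException("Could not acquire the migration lock.");

                context.Database.Migrate();
            }

            connection.Close(); // releases the application lock
        }
    }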

Octopus Deploy: I need to deploy all packages up to the latest on promotion to QA

Here is the story: I am using Redgate SQL Compare to generate update scripts for my dev environment. Each package contains only the changes from the current dev version to the latest in source control.
Here is an example:
I create a table (package-0.1) -> Deploy to DevDB
I add Columns (package-0.2) -> Deploy to DevDB
I rename a column (package-0.3) -> Deploy to DevDB
But when I want to promote to QA, this causes a problem: only the latest package (0.3) is promoted, and it contains only part of the changes (the column rename).
So I am looking for a way to deploy all the packages prior to the current one on promotion, if that is possible.
For now I have solved this by creating a custom package that contains all the change scripts, but is it possible to solve this with Octopus?
Thanks
Ihor
Each package contains only the changes from the current dev version to the latest
The way you are doing it is going to be painful, because SQL Compare takes a state-based approach, while what you want is a migrations-based approach. You can see Alex's post on the difference between the two approaches.
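To make that concrete: a migrations-based runner keeps a version table in each target database and applies, in order, every script newer than the database's current version, so promoting to QA replays 0.1, 0.2 and 0.3 instead of just the latest package. A bare-bones sketch (the SchemaVersion table and the script naming convention are assumptions, and real tools also handle GO batches and transactions):

    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Linq;

    class MigrationRunner
    {
        // Usage: MigrationRunner <scripts-folder> <connection-string>
        // Scripts are named like 0001_create-table.sql, 0002_add-columns.sql, ...
        static void Main(string[] args)
        {
            using (var connection = new SqlConnection(args[1]))
            {
                connection.Open();

                // Which version is this database currently at?
                int current;
                using (var read = new SqlCommand(
                    "SELECT ISNULL(MAX(Version), 0) FROM SchemaVersion", connection))
                    current = (int)read.ExecuteScalar();

                // Apply every newer script, oldest first, and record each one.
                var pending = Directory.GetFiles(args[0], "*.sql")
                    .Select(f => new { Path = f, Version = int.Parse(Path.GetFileName(f).Split('_')[0]) })
                    .Where(s => s.Version > current)
                    .OrderBy(s => s.Version);

                foreach (var script in pending)
                {
                    using (var apply = new SqlCommand(File.ReadAllText(script.Path), connection))
                        apply.ExecuteNonQuery();
                    using (var record = new SqlCommand(
                        "INSERT INTO SchemaVersion (Version) VALUES (@v)", connection))
                    {
                        record.Parameters.AddWithValue("@v", script.Version);
                        record.ExecuteNonQuery();
                    }
                }
            }
        }
    }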
SQL Source Control 5 will come with a better migrations approach, which will work with the SQL Compare command-line tool and the DLM Automation tools. Unfortunately the beta is closed right now, but I suggest you contact the team through the e-mail address provided there.
The other option you have is ReadyRoll, which takes a pure migrations-based approach. You can see this post on its Octopus Deploy integration.

How to manage database context changes in production / CI

I've spent the past few months developing a Web API solution that I'm ready to push up to Azure and hook into an Azure SQL Database. It was built with EF Code First.
I'm wondering what standard approaches there are to making changes to the database while in production. I've been using database initializers up to this point but they all blow away data and re-seed.
I have a feeling this question is too broad for a concise answer, so I'd like to ask: what terminology / processes / resources should a developer look into when designing a continuous integration workflow for a solution built with EF Code First and ASP.NET WebAPI, hosted as an Azure Service and hooked up to Azure SQL?
On the subject of database migrations, there is an interesting article on ASP.NET: Strategies for Database Development and Deployment.
Also, since you are using EF Code First, you will be able to use Code First Migrations for database changes, which will let you manage changes to the database much better.
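To sketch what that looks like (the context and entity names below are placeholders, not from your project): after running Enable-Migrations in the Package Manager Console you get a configuration class, each change becomes an Add-Migration step, and a MigrateDatabaseToLatestVersion initializer replaces the drop-and-reseed initializers you've been using:

    using System.Data.Entity;
    using System.Data.Entity.Migrations;

    // Generated by Enable-Migrations; controls how migrations are applied.
    internal sealed class Configuration : DbMigrationsConfiguration<MyAppContext>
    {
        public Configuration()
        {
            // Every schema change is an explicit, source-controlled migration
            // created with Add-Migration, instead of a database rebuild.
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(MyAppContext context)
        {
            // Runs after each Update-Database, so keep seeds idempotent, e.g.:
            // context.Statuses.AddOrUpdate(s => s.Name, new Status { Name = "Active" });
        }
    }

    // At application start-up, instead of DropCreateDatabaseIfModelChanges etc.:
    // Database.SetInitializer(new MigrateDatabaseToLatestVersion<MyAppContext, Configuration>());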
I'm not sure how far you want to go with continuous integration, but since you are using Azure it might be worth having a look at Continuous delivery to Windows Azure by using Team Foundation Service. Although that relies on TFS in the cloud, it's of course also possible to configure something similar with, for example, Jenkins; that does require a bit more work, though.
I use this technique:
1- Create a clone database for your development environment if it doesn't exist.
2- Make the necessary changes in your dev environment and dev database.
3- Deploy to your staging environment.
4- If you added static data that should also exist in your prod database, use a tool like SQLDataExaminer to find the data differences and run the corresponding inserts, updates and deletes. Use Schema Compare in VS2012 to find the schema differences between your dev and prod environments (dev as source, prod as target), and execute the resulting script against prod.
5- Swap the environments.

What are the Team City best practices for multistage deployment?

We have 3 environments:
Development: Team City deploys here for Subversion commits on trunk.
Staging: User acceptance is done here, on builds that are release candidates.
Production: When UAT passed, the passing code set is deployed here.
We're using Team City and only have Continuous Integration setup with our development environment. I don't want to save artifacts for every development deployment that Team City does. I want an assigned person to be able to fire a build configuration that will deploy a certain successful development deployment to our staging server.
Then, I want each staging deployment to save artifacts. When a staging deployment passes UAT, I want to deploy that package to Production.
I'm not sure how to set this up in Team City. I'm using version 6.5.4, and I'm aware there's a "Promote..." action/trigger, but I think it depends on saved artifacts. I don't want to save development deployments each time as artifacts, but I do want the person running the staging deployment to be able to specify which successful development deployment to deploy to staging.
I'm aware there may be multiple ways to do this, is there a best practice? What is your setup and why do you recommend it?
Update:
I have one answer so far, and it's an idea we had considered internally. I'd really like to know if anyone has a somewhat automated way of deploying to a staging/production environment via TeamCity itself, where only people with a certain role/permission can run a deploy script to production, rather than having to manually deal with any kind of artifact package. Anyone?
Update 2
I still have 1 day to award bounty, and I thought the answer below didn't answer my question, but after rereading it I see that my question wasn't what I thought it was.
Are there any ways to use Team City for some kind of automated deployment to Staging/Production environments?
I think you're actually asking two different questions here; one is about controlling access rights to TeamCity builds and another is about the logistics of artifact management.
Regarding permissions, I assume that by "only people with certain role/permission can run a deploy script to production" (and from your response to Julien) you mean you probably don't want devs deploying directly to production, but you do want them to be able to see the other builds in the project. This is possibly also similar to Julien's scenario, where IT then takes the process "offline" from TeamCity (either that, or it's just IT doing what IT does and insisting on a separate, entirely inefficient process because "that's just the way we do it" - don't get me started on that!).
The problem is simply that all permissions in TeamCity are applied at the project level and never per build, so if you've got one project with all your builds, there's no way to apply permissions granularly to dev versus production builds. I've previously dealt with this in two ways:
Handle it socially. Everyone knows what their responsibilities are and you don't run what you're not meant to run. If you do, it's audited and traceable back to YOU. This works fine when there's maturity, a clear idea of responsibilities, and no compliance requirement that prohibits it.
Create separate projects. I don't like having to do this, but it does fix the problem. You can still use artifacts from another project, and it means you simply end up with one project containing builds that deploy to environments you're happy for all the devs to access, and another project for the sensitive environments. The downside is that if the production build fails, the very people you probably want support from won't be able to access it!
Regarding artifact management, there's no problem with retaining artifacts in the development build; just define a clean-up policy that only keeps artifacts from the last X builds if you're worried about capacity. A lot of people want certainty that they're deploying the same compiled output to every environment, which means that once you build it, you want to keep it around for later use.
Once you have these artifacts from your dev deployment, you can re-deploy them to your other environments through separate builds. You'll have an issue with config transforms (assuming you're using them), but have a read of this two-part series for some ideas on how to address that (I've yet to absorb it in detail, but I believe he's on the right track).
Does that answer your question? Is there anything still missing?
We also use TeamCity as our build server, so let me explain our setup.
We have 4 environments:
Development: used by dev to verify commits in a server environment
QA: for testing purposes
Staging: for deployment checks and some UAT
Production
We only use TeamCity to deploy to Development (Nightly builds) and to QA (on-demand).
The Dev build uses the trunk branch and the QA build uses a different branch for the RC.
Deployment to the Staging and Production are managed by the IT team, and are therefore not automated.
What we do instead is that we use TeamCity to produce artifacts from the QA build. The artifacts are the deployment kits sent for Staging/Production deployments.
That said, I am not sure whether TeamCity gives you complete control over which build can be promoted to which environment. We basically control this on the SVN side with branches and have different builds for those branches. You should be able to manage this the same way, and thereby ensure what gets deployed.
I understand that your needs may be slightly different from ours, but I hope this helps you find the best setup.
I think you might want to check out something like Octopus Deploy or BuildMaster. They provide a nice structure for the deployment practices you're trying to automate. Both tools integrate with TeamCity nicely.
Basically, you'd continue to use TeamCity for CI, and you could also continue to deploy to your development environment with TeamCity too, but you'd use one of the deployment tools to promote an (existing) build to staging and production.
Edit 2014-02-05 – Update
The makers of BuildMaster have a new deployment feature – ProGet Deploy – for their NuGet server tool, ProGet. It's very similar to Octopus Deploy, though I haven't played with it myself yet, so Octopus may have a better visualization of which versions have been deployed to which environments; I still use BuildMaster because of that important feature.
Also, I'm currently using TeamCity, BuildMaster, and ProGet together, and I never want to go back to not having automated builds. Currently, all of my apps are built and deployed via BuildMaster, while all of my library projects are built in TeamCity and deployed to ProGet. Being able to manage my internal dependencies via the NuGet infrastructure is nice.