I have written a step to deploy a database build to its respective database. But I can only deploy to one database at a time, i.e. one step for one database. Is it possible to deploy the same database build to multiple databases in a single step of a release?
AFAIK, there is no built-in task to do it. But you could create a PowerShell/batch script that loops over sqlpackage.exe to deploy the DACPAC to multiple databases in a single release step:
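A minimal sketch of such a loop, assuming sqlpackage.exe is on the agent's PATH; the server name, credential variables, and database list are placeholders to replace with your own:

```powershell
# Deploy one DACPAC to several databases by looping sqlpackage.exe.
# Server, credentials, and database names below are placeholders.
$dacpac    = "C:\drop\MyDatabase.dacpac"
$server    = "myserver.database.windows.net"
$databases = @("Tenant1Db", "Tenant2Db", "Tenant3Db")

foreach ($db in $databases) {
    & sqlpackage.exe /Action:Publish `
        /SourceFile:$dacpac `
        /TargetServerName:$server `
        /TargetDatabaseName:$db `
        /TargetUser:$env:SQL_USER `
        /TargetPassword:$env:SQL_PASSWORD

    # Fail the release step if any single deployment fails.
    if ($LASTEXITCODE -ne 0) { throw "Deployment to $db failed" }
}
```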
You could check the similar thread Deployment to several databases using SQL Server Data Tools and Team Foundation Server for some details.
Besides, there are many extensions in the Marketplace that can do this, so choose one that meets your requirements:
https://marketplace.visualstudio.com/search?term=sql&target=AzureDevOps&category=All%20categories&visibilityQuery=all&sortBy=Relevance
Hope this helps.
Forgive me for asking a stupid question. I am from an IT Infrastructure background and have been asked to create CI/CD pipelines based on my recent learnings on DevOps.
We have a couple of applications whose source code is currently in TFS 2013, and those apps are written in ASP.NET C#. Now, the requirement is to migrate the source code from TFS to Azure Repos (Azure DevOps Services) and then create a CI/CD pipeline.
For demo purposes, the customer is asking us to do the deployment (i.e. the release pipeline) for both of these applications onto a test server that is a plain Windows 2012 OS without any SQL or IIS. Is that possible, and how could we achieve the results to confirm the release pipeline is functioning properly?
In my opinion, it won't work, as there is no application infrastructure/configuration done for those applications on that plain test server. I guess we actually need a ready dev/stage environment that is a replica of production to do the testing of the release pipeline for those applications. Am I correct?
I just need expert advice for confirmation so I can communicate the same to the customer.
Azure DevOps Pipelines use an agent to perform the deployments. You can run the agent entirely in the cloud when deploying to Azure resources. You can also install an agent locally. Follow this link and scroll down to read about self-hosted agents. This is how you can deploy to your test instance from the pipeline.
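As a hedged sketch, registering a self-hosted agent on the test server looks roughly like this, run from the unpacked agent folder; the organization URL, PAT variable, pool, and agent name are all placeholders:

```powershell
# Register and start a self-hosted agent on the test server.
# URL, token, pool, and agent name are placeholders.
cd C:\agent
.\config.cmd --unattended `
    --url https://dev.azure.com/YourOrganization `
    --auth pat --token $env:AZP_TOKEN `
    --pool Default --agent TestServer01
.\run.cmd
```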
Now, what you deploy there may require additional software be installed. You say it's an application in C#. Cool. Now, what's it do? Is it a windows program? Then just having the server there, with an agent installed, is all you need. Is it a web program? Then, yeah, it's going to need an IIS (or whatever) instance available somewhere to deploy to. Is it a database program? Then, yeah, it's going to need a database instance to deploy to. There's nothing magical about having a VM or a machine somewhere. All the same rules have to apply. There has to be an OS, drives, memory, and yes, supporting services depending on the needs of the application.
However, using a local machine instead of a hosted one works fine. Just follow the instructions in the link above.
Asking for an opinion or direction on the current problem.
We are using Bitbucket Pipelines to deploy web applications to Azure with CI/CD. What remains is the database, which is also hosted on Azure.
From my research, everything on SQL Database Project deployments usually utilizes Azure DevOps pipelines (connects to a GitHub repo, allows multiple environments, and has a built-in SqlAgent that deploys the SQL database to the target server via a DACPAC file). It allows CI with every check-in, every time you push changes. Nice!
But what if I cannot (for some reason) use Azure DevOps and have to utilize Bitbucket Pipelines instead? Is that possible? How? Via scripting? A tool to call on the command line? Any help is highly appreciated.
It's true that in Azure DevOps it is easier to deploy an (Azure) SQL database, as Azure DevOps offers many tasks (including 3rd-party custom tasks you can find in the Microsoft Marketplace).
However, no matter what tool you use, you should be able to do the same once you understand how deployment of the specific service works.
I don't know Bitbucket very well, but I bet the product has the capability to execute commands, including PowerShell commands. If so, you must do 2 steps in your pipeline to publish an Azure SQL database:
1) Create the server and an (empty) database. Perhaps Bitbucket offers some task for creating services in Azure (from an ARM template or another way). If not, you can always use the CLI or PowerShell to do so. More info: az cli server
2) Deploy the database or the changes to it. This step always compares the DACPAC file (which is the compiled version of a SQL Server database project) to the target database on the server. The result is a differential T-SQL script that is executed against the target database. There is only one way to do so: sqlpackage.exe, a tool provided by Microsoft. You can find the whole documentation here and plenty of examples of how to use it on the Internet. A sketch of both steps is below.
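A minimal sketch of both steps as a script, assuming the pipeline image has the Azure CLI and sqlpackage.exe available; all resource names, credentials, and paths are placeholders:

```powershell
# Step 1: create the logical server and an empty database with the Azure CLI.
az sql server create --name my-sql-server --resource-group my-rg `
    --location westeurope --admin-user sqladmin --admin-password $env:SQL_PASSWORD
az sql db create --name MyAppDb --server my-sql-server --resource-group my-rg

# Step 2: publish the DACPAC; sqlpackage diffs it against the target
# database and applies the resulting differential T-SQL script.
& sqlpackage.exe /Action:Publish `
    /SourceFile:.\bin\Release\MyAppDb.dacpac `
    /TargetServerName:my-sql-server.database.windows.net `
    /TargetDatabaseName:MyAppDb `
    /TargetUser:sqladmin /TargetPassword:$env:SQL_PASSWORD
```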
Let me know if that helps.
I'm working on a web project (built with the .NET Framework) on a remote Windows server, and this project is connected to a database via SQL Server Management Studio. The same web project, linked to the same database, exists on multiple other remote Windows servers. When I change a page's code in my project, or add/remove a table or stored procedure in my database, is there a way (or already existing software) that will let me deploy the changes I made to all the others (or to choose multiple servers if I don't want to deploy the changes to all of them)?
If it were me, I would stand up a Git server somewhere (cloud or a local VM), make a branch called something like Prod or Stable, and create a script (PowerShell if the servers are Windows, bash on anything else) on a nightly or hourly job to pull from that branch. Only push to that branch after testing thoroughly. If your code requires compilation, you can either compile once before committing (in which case you're probably going to commit the build output to releases) or compile on each endpoint after the pull. I would have the script that does the pull also compile and restart the service (only if the pull brought something new). A sketch of such a pull job is below.
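A minimal sketch of that pull job for a Windows endpoint, assuming Git is installed; the repo path, branch, and service name are placeholders:

```powershell
# Scheduled job: pull the Prod branch and restart the service only
# when the pull actually brought new commits. Paths are placeholders.
$repo    = "C:\apps\mysite"
$branch  = "Prod"
$service = "MySiteService"

Set-Location $repo
git fetch origin
$local  = git rev-parse HEAD
$remote = git rev-parse "origin/$branch"

if ($local -ne $remote) {
    git checkout $branch
    git pull origin $branch
    # Compile here if the code needs it (e.g. msbuild), then restart.
    Restart-Service -Name $service
}
```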
You can probably achieve this by doing two things:
Create a separate publishing profile for each server.
Use git/vsts branches to keep the code separate. (as suggested by #memtha).
Let's say you have 6 servers in total and two branches, A and B. So you'll have to create 6 publishing profiles. Then you can choose which branch to deploy where; e.g., you can deploy branch B to servers 1, 3 and 4. A hedged example of one such deployment is below.
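For illustration only: publishing a checked-out branch to one server through its publishing profile might look like this (the project path, profile name, and password variable are assumptions):

```powershell
# Publish branch B to the server covered by the "Server3" profile.
# Project path, profile name, and credentials are placeholders.
git checkout B
msbuild .\MyWebApp\MyWebApp.csproj /p:Configuration=Release `
    /p:DeployOnBuild=true /p:PublishProfile=Server3 `
    /p:Password=$env:DEPLOY_PASSWORD
```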
For the codebase you could use Git Hooks.
https://gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa
And for the database, maybe you could use migrations or something similar. You would need to provide more info about your database: do you store it across multiple servers, etc.?
If the same web project is connecting to the same database and the database changes, I suspect you would need to update all the web apps to ensure the database changes don't break any of the apps and to keep all the apps updated to prevent any being left behind.
You should look at using Azure DevOps to build and deploy your apps and update the database.
If you use Entity Framework, you can run the migrations on startup and have the application update the database when deployed, either manually or automatically using DevOps. Migrations can also be applied from the pipeline itself, as sketched below.
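As a hedged alternative to running migrations at startup, EF6 ships a migrate.exe in the EntityFramework NuGet package's tools folder that can apply pending migrations from a deployment step; the assembly name, config path, and package version below are placeholders:

```powershell
# Apply pending EF Code First migrations from the deployment pipeline.
# Assembly name, config path, and package version are placeholders.
& .\packages\EntityFramework.6.4.4\tools\migrate.exe `
    MyApp.Data.dll `
    /startupConfigurationFile="Web.config" `
    /startupDirectory=".\MyApp.Web\bin"
```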
To keep the software updated on multiple servers you could use Git with hooks; a post-receive hook is what you need.
The idea is to use one server as your remote repository and configure the post-receive hook there to update the codebase on that server and the others. A sketch of a script such a hook could invoke is below.
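As a rough sketch only: the hook itself is a small shell stub that would call a script like this one; the branch, local path, and share names are assumptions:

```powershell
# Invoked by the post-receive hook on the Git server: check out the
# pushed branch into each server's work tree. Paths are placeholders.
$branch  = "master"
$targets = @("C:\apps\mysite", "\\web02\apps\mysite")

foreach ($workTree in $targets) {
    git --work-tree=$workTree checkout -f $branch
}
```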
I want to create a one-click deployment on Azure Pipelines to move PostgreSQL changes from the dev to the QA environment, similar to what we implement using a SQL Server Database project, where a PowerShell script deploys the changes to the remote server.
I have tried the pg_dump and psql commands, which create a dump file and restore it on the remote server. This does not perform diffing, i.e. comparing the source and destination databases and replicating only the missing changes.
You've stumbled upon one of the features lacking in the Postgres ecosystem. One of the more elegant ways to solve migrations using Postgres' own tooling is to package up your migrations as a Postgres Extension. This requires you to generate the deployment scripts yourself, but it is a neat way of applying and packaging up the deployments.
There are a number of commercial tools that will assist in this process, such as Datical, Liquibase, and Flyway. Note that some of these still require you to generate the change statements yourself, while some attempt to create them for you.
Generating change statements is a whole different animal, and I recommend you look at schema-diffing tools for Postgres to find what best suits your needs. For the deployment half, a migration-runner step might look like the sketch below.
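For illustration, a pipeline step applying hand-written (or diff-generated) migration scripts with the Flyway CLI; the JDBC URL, credentials, and script folder are placeholders, and note that Flyway runs the scripts you give it rather than diffing the schemas itself:

```powershell
# Apply versioned migration scripts to the QA database with Flyway.
# URL, credentials, and script location are placeholders.
flyway migrate `
    -url="jdbc:postgresql://qa-host:5432/mydb" `
    -user=$env:PG_USER `
    -password=$env:PG_PASSWORD `
    -locations="filesystem:./sql"
```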
I've spent the past few months developing a webApi solution that I'm ready to push up to Azure and hook into an Azure SQL Database. It was built with EF Code First.
I'm wondering what standard approaches there are to making changes to the database while in production. I've been using database initializers up to this point but they all blow away data and re-seed.
I have a feeling this question is too broad for a concise answer, so I'd like to ask: what terminology / processes / resources should a developer look into when designing a continuous integration workflow for a solution built with EF Code First and ASP.NET WebAPI, hosted as an Azure Service and hooked up to Azure SQL?
On the subject of database migration, there was an interesting article on ASP.NET about this subject: Strategies for Database Development and Deployment.
Also, since you are using EF Code First, you will be able to use Code First Migrations here for database changes. This will allow you to better manage the changes you make to the database; the basic workflow is sketched below.
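A minimal sketch of that workflow in the Package Manager Console (the migration name is a placeholder):

```powershell
Enable-Migrations                     # one-time setup in the project
Add-Migration AddCustomerEmailColumn  # scaffold a migration from model changes
Update-Database                       # apply pending migrations to the database
Update-Database -Script               # or emit the SQL script instead of running it
```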
I'm not sure how far you want to go with continuous integration, but since you are using Azure it might be worth having a look at Continuous Delivery to Windows Azure by using Team Foundation Service. Although it relies on TFS in the cloud, it's of course also possible to configure this with, for example, Jenkins. However, that does require a bit more work.
I use this technique:
1- Create a clone database for your development environment if it doesn't exist.
2- Make the necessary changes in your dev environment and dev database.
3- Deploy to your staging environment.
4- If you added some static data that should also exist in your prod database, use a tool like SQLDataExaminer to find the data differences and execute the inserts, updates, and deletes for the corresponding rows. Use Schema Compare in VS2012 to find schema differences between your dev and prod environments by selecting dev as the source and prod as the target, and execute the resulting script against prod (a command-line alternative is sketched below).
5- Swap the environments.
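For reference, the same dev-versus-prod schema diff can be generated without the Visual Studio UI using sqlpackage.exe; the server and database names and the output path below are placeholders, and the generated script should be reviewed before it touches prod:

```powershell
# Generate (but do not run) the differential upgrade script for prod.
# Server/database names and output path are placeholders.
& sqlpackage.exe /Action:Script `
    /SourceServerName:dev-server /SourceDatabaseName:MyAppDb `
    /TargetServerName:prod-server /TargetDatabaseName:MyAppDb `
    /OutputPath:.\upgrade-prod.sql
```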