Analogue of Capistrano/Fabric for Common Lisp + Restas + Hunchentoot - deployment

Does there exist any deployment tool for CL like Capistrano (Ruby) or Fabric (Python)?
Requirements:
git support;
simple one-command deploy to the server (like cap deploy) that pulls updates from the repository;
support for database (MySQL/PostgreSQL) migrations (schema changes, loading data, etc.).
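If no dedicated tool turns up, the workflow above can be approximated with a short shell script run over SSH. A minimal sketch, where the paths, the migrations.sql file and the restas-app systemd unit are all hypothetical names, not an existing tool:

    #!/bin/bash
    # deploy.sh -- hypothetical one-command deploy (all names/paths are assumptions)
    set -e
    ssh deploy@example.com 'bash -s' <<'EOF'
    cd /srv/myapp
    git pull origin master              # pull updates from the repository
    psql -d myapp -f migrations.sql     # apply schema changes kept in the repo (assumed file)
    sudo systemctl restart restas-app   # restart the Hunchentoot/RESTAS service (assumed unit)
    EOF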

Related

Is there any way of deploying Heroku PostgreSQL configuration to a new app via a git repo?

We are making a series of apps whose backends and frontends are forked from a common git codebase, and which share a database schema but have different data within each.
At the moment we are running the PostgreSQL database setup manually, but we would like to be able to track these commands via git and, ideally, be able to provision databases in new Heroku apps in an automated way.
Is this possible? If not, are there any patterns for this that are effective?
Note: we don't want to actually back up the data itself, just the process of creating the tables, data schema, etc.
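A minimal sketch of one way to do this, assuming the schema-creation commands are kept in a git-tracked schema.sql file (the file and app names are placeholders):

    # provision.sh -- replay the tracked schema against a new Heroku app's database
    DATABASE_URL=$(heroku config:get DATABASE_URL --app my-new-app)
    psql "$DATABASE_URL" -f schema.sql   # creates tables and schema only, no data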

Sending a file to multiple servers

I'm working on a web project (built with the .NET Framework) on a remote Windows server, and this project is connected to a database through SQL Server Management Studio. The same web project, linked to the same database, also exists on multiple other remote Windows servers. When I change a page's code in my project, or add/remove a table or stored procedure in my database, is there a way (or existing software) that will allow me to deploy the changes I made to all the others (or to choose multiple servers if I don't want to deploy the changes to all of them)?
If it were me, I would stand up a Git server somewhere (cloud or local VM), make a branch called something like Prod or Stable, and create a script (PowerShell if the servers are Windows, bash on anything else) that runs on a nightly or hourly job to pull from that branch. Only push to that branch after testing thoroughly. If your code requires compilation, you have the choice to compile once before committing (in which case you're probably going to commit to releases), or on each endpoint after the pull. I would have the script that does the pull also compile and restart the service (only if there was something new in the pull).
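A minimal bash version of such a job, assuming a Linux endpoint with a systemd service named mywebapp and an assumed build script (a PowerShell variant for Windows servers would follow the same shape):

    #!/bin/bash
    # pull-prod.sh -- run from cron nightly/hourly; redeploy only when Prod has moved
    cd /srv/mywebapp
    git fetch origin
    if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/Prod)" ]; then
        git merge --ff-only origin/Prod    # take the tested changes
        ./build.sh                         # compile here if binaries aren't committed (assumed script)
        sudo systemctl restart mywebapp    # restart the service only after an update
    fi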
You can probably achieve this by doing two things:
Create a separate publishing profile for each server.
Use git/VSTS branches to keep the code separate (as suggested by #memtha).
Let's say you have a total of 6 servers and two branches, A and B. You'll have to create 6 publishing profiles. Then you can choose which branch to deploy where, e.g. you can deploy branch B to servers 1, 3 and 4.
For the codebase you could use Git Hooks.
https://gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa
And for the database, maybe you could use migrations or something similar. You will need to provide more info about your database, e.g. whether it is stored across multiple servers.
If the same web project is connecting to the same database and the database changes, I suspect you would need to update all the web apps to ensure the database changes don't break any of the apps and to keep all the apps updated to prevent any being left behind.
You should look at using Azure DevOps to build and deploy your apps and update the database.
If you use Entity Framework, you can run the migrations on startup and have the application update the database when deployed, either manually or automatically using DevOps.
To keep the software updated on multiple servers you could use Git with hooks; the post-receive hook is what you need.
The idea is to use one server as your remote repository and to configure the post-receive hook there to update the codebase on that same server and on the others.
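A minimal sketch of such a hook, assuming a bare central repository, a checkout directory on the same server, and two other hosts reachable over SSH (all paths and hostnames are placeholders):

    #!/bin/bash
    # hooks/post-receive -- runs on the central repository after each push
    # 1. update the working copy on this server
    GIT_WORK_TREE=/var/www/myapp git checkout -f master
    # 2. copy the updated codebase out to the other servers
    for host in web2.example.com web3.example.com; do
        rsync -az --delete /var/www/myapp/ "deploy@$host:/var/www/myapp/"
    done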

Is using dependency management a sane strategy for Database Backups?

So, while I agree in principle with the often referenced article "Get your database under version control", nobody seems to address the problem of big databases. I'm not talking just about the schema, but the data too.
Additionally, while I'm a big supporter of DVCSs like git and Mercurial, they fall short when handling big files (not just binary).
It just hit me that this is a configuration management problem rather than a version control one. So, I could treat a SQL dump as a build artifact, store the backup as a revision of the artifact, and pair it with the project through a manifest; I could do the same for each environment (staging, development, production, etc.). The one disadvantage I find is that build artifact repositories (such as Artifactory and Nexus) don't seem to handle artifact revisions in a storage-efficient way (e.g. differential backups).
My question is broken in two:
A) Is this (taking a full database backup) a sane strategy robust enough for production environments? In other words, is this (or something close to it) actually done in the real world?
B) What is the best practice for managing (and using!) database backups in a way that a particular backup has traceability to a given revision of the production application?
Database schema changes are a deployment problem. Each version of a project should have code that updates the database from the previous version. Your staging server should be backed up, and you can trial the changes there.
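A minimal sketch of that idea, assuming numbered SQL migration files committed alongside the code and a schema_version table that records the last script applied (all names are assumptions):

    #!/bin/bash
    # migrate.sh -- apply any migration scripts the database has not seen yet
    current=$(psql -tA -d myapp -c "SELECT COALESCE(MAX(version), 0) FROM schema_version")
    for f in migrations/*.sql; do
        v=$(basename "$f" .sql)            # files are named 001.sql, 002.sql, ...
        if [ "$v" -gt "$current" ]; then
            psql -d myapp -f "$f"
            psql -d myapp -c "INSERT INTO schema_version(version) VALUES ($v)"
        fi
    done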

Class library referenced by multiple websites + version control branching

Consider the following -
I have a solution that consists of multiple projects:
DAL (Class Library)
BusinessLogic (Class Library)
Website1 (Web Application)
Website2 (Web Application)
Both Website1 and Website2 share a reference to BusinessLogic, which in turn references the DAL.
Since these are just websites, I don't need to keep track of multiple versions, as such, but I do like to have the following branches:
Trunk
Production
Trunk is where I do all my development work, and after everything is tested and ready to go, I merge from Trunk to Production when a website is actually deployed to production servers. This allows me to shelve my current work, check out the Production branch and address any major bugs that were found after deployment and immediately deploy the fix.
My problem is that, using this approach, what lives in the Production branch isn't always correct. Let's say I make an update to BusinessLogic which is utilised by Website1. It passes testing and is deployed. If I merge all the projects to the Production branch, then it's wrong because Website2 wasn't deployed to production at that time.
Or, I could merge only the relevant projects to Production. So, in this case, I would merge Website1, BusinessLogic and DAL. This is still wrong, however. If I were to check out the Production branch to do work on Website2, it would have a newer version of BusinessLogic and DAL than actually exist on our production servers.
What is the correct approach here?
You should not use a code sharing or code promotion model; it reduces quality and forces rework. Instead, look to create a release pipeline where you create a package for your Business and DAL layers and consume those packages in the web apps.
The best approach for this is to use a build server and create a NuGet package for your DAL that is consumed by the Business Layer. This in turn is packaged as a NuGet package that your websites can consume.
Your workflow for getting a change to the business layer into your website is then:
Open Business Layer solution and make fix
Check in and trigger CI build
CI build creates and publishes NuGet package
Open Website solution and update NuGet package
Clean and simple. No branching is good branching.
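A hedged sketch of what the CI step could run, assuming a nuget.exe workflow with a .nuspec for the library and an internal feed (the names, version and URL are placeholders):

    # on the build server, after compiling BusinessLogic
    nuget pack BusinessLogic/BusinessLogic.nuspec -Version 1.2.0
    nuget push BusinessLogic.1.2.0.nupkg -Source https://nuget.internal.example.com/feed
    # on the website solution, pull in the new version
    nuget update packages.config -Id BusinessLogic

The same shape works with a private feed hosted on the build server or a shared folder; only the -Source value changes.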
There can be a lot of correct ways, and it always depends on what is correct for you. Each way would have pros and cons, of course.
If you'd like source-level dependencies, create a separate production branch for each site, each of which includes an external:
/Site1/Production
    site content
    BusinessLogic [external] -> /BusinessLogic/v1Branch
/Site2/Production
    site content
    BusinessLogic [external] -> /BusinessLogic/v1Branch
/BusinessLogic/v1Branch (from DevBranch)
/BusinessLogic/DevBranch
Here is how you perform a version upgrade:
Make a change to BusinessLogic/DevBranch and test it.
Branch it as BusinessLogic/v2Branch.
Update Site2/Production's external to point to BusinessLogic/v2Branch.
Build Site2, test and deploy.
So you'll have:
/Site1/Production
    site content
    BusinessLogic [external] -> /BusinessLogic/v1Branch
/Site2/Production
    site content
    BusinessLogic [external] -> /BusinessLogic/v2Branch
/BusinessLogic/v1Branch (from DevBranch)
/BusinessLogic/v2Branch (from DevBranch)
/BusinessLogic/DevBranch
This requires a certain level of development culture and some amount of svn management.
You can also put binaries into such svn branches, which is pretty much the same scheme. In general, this approach is known as vendor branches.
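Wiring up such an external is a single property on the site's production branch; a sketch, assuming the repository layout above and a placeholder server URL:

    # point Site2's production branch at the v2 branch of BusinessLogic
    svn propset svn:externals \
        "https://svn.example.com/repo/BusinessLogic/v2Branch BusinessLogic" Site2/Production
    svn commit -m "Upgrade Site2 to BusinessLogic v2" Site2/Production
    svn update Site2/Production   # pulls the new external into the working copy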
If you prefer binary dependencies outside of source control, you could use a local NuGet repository. It works the same as the official one: you create a new version, publish it to NuGet, then reference it from the site, build and deploy. This requires additional setup and maintenance effort and is more appropriate for larger projects.

How to manage database context changes in production / CI

I've spent the past few months developing a webApi solution that I'm ready to push up to Azure and hook into an Azure SQL Database. It was built with EF Code First.
I'm wondering what standard approaches there are to making changes to the database while in production. I've been using database initializers up to this point but they all blow away data and re-seed.
I have a feeling this question is too broad for a concise answer, so I'd like to ask: what terminology / processes / resources should a developer look into when designing a continuous integration workflow for a solution built with EF Code First and ASP.NET WebAPI, hosted as an Azure Service and hooked up to Azure SQL?
On the subject of database migration, there was an interesting article on ASP.NET about this subject: Strategies for Database Development and Deployment.
Also since you are using EF Code First you will be able to use Code First Migrations here for database changes. This will allow you to better manage the changes you make to the database.
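A sketch of the usual Code First Migrations commands in the Package Manager Console (the migration name here is a placeholder):

    # run once to add a Migrations folder and Configuration class to the project
    Enable-Migrations
    # scaffold a migration from the pending model changes
    Add-Migration AddCustomerEmailColumn
    # apply outstanding migrations to the database in the connection string
    Update-Database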
I'm not sure how far you want to go with continuous integration, but since you are using Azure it might be worth having a look at Continuous delivery to Windows Azure by using Team Foundation Service. Although it relies on TFS in the cloud, it's of course also possible to configure it with, for example, Jenkins; however, this does require a bit more work.
I use this technique:
1- Create a clone database for your development environment if it doesn't exist.
2- Make the necessary changes in your dev environment and dev database.
3- Deploy to your staging environment.
4- If you added static data that should also exist in your prod database, use a tool like SQLDataExaminer to find the data differences and execute the inserts, updates and deletes for the corresponding rows. Use Schema Compare in VS2012 to find schema differences between your dev and prod environments, selecting dev as the source and prod as the target, and execute the generated script against prod.
5- Swap the environments.