Is it possible to have ansible use a "remote" playbook for git-based continuous deployment? - deployment

I need to manage a few servers that run code which is currently deployed to them as a couple of git repositories. I would like to store in the project's repository the parts of the playbook (if not all of it) that are relevant to that repository: for example, the list of package dependencies, the virtualenv requirements, and the configuration templates. This would also allow those things to change per branch/commit, so that when I deploy a specific branch or commit, the playbook that is correct for that commit is used, for example if the configuration template has changed.
It seems like the only solution is to check out the git repository locally. Is it possible to tell Ansible to run a remote playbook, i.e. one taken from the git repository that is checked out on the server? I was thinking of having Ansible run ansible with a local connection on the remote host, but I haven't tried it to see whether that would actually work.
How do people manage to use Ansible for git-based continuous deployment without some mechanism for running a remote playbook?
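For illustration, the kind of playbook described above, kept inside the project repository itself, might look roughly like this; all file names and paths here are placeholders:

- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Install the OS package dependencies listed in the repo
      apt:
        name: "{{ lookup('file', 'packages.txt').splitlines() }}"
        state: present

    - name: Install the Python requirements that ship with this commit
      pip:
        requirements: "{{ playbook_dir }}/requirements.txt"
        virtualenv: /srv/app/venv

    - name: Render the configuration template for this commit
      template:
        src: templates/app.conf.j2
        dest: /etc/app/app.conf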

Take a look at ansible-pull.
It pulls the repository on the managed host and executes the playbook there.
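A minimal invocation, with the repository URL, branch and paths as placeholders, looks like this:

ansible-pull -U https://git.example.com/myproject.git -C master -d /srv/checkout -i localhost, deploy/local.yml

By default ansible-pull looks for a playbook named after the host's FQDN and falls back to local.yml at the top of the checkout, and it is commonly run from cron so that each host regularly re-applies the playbook that matches the commit it checked out.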

Related

Skaffold config dependencies with profiles

I have a microservice application in one repo that communicates with another service that's managed by another repo.
This is not an issue when deploying to the cloud; however, when developing locally the other service needs to be deployed too.
I've read this documentation: https://skaffold.dev/docs/design/config/#remote-config-dependency and this seems like a clean solution, but I only want it to depend on the git skaffold config if deploying locally (i.e. current context is "minikube").
Is there a way to do this?
Profiles can be automatically activated based on criteria such as environment variables, kube-context names, and the Skaffold command being run.
Profiles are processed after resolving the config dependencies though. But you could have your remote config include a profile that is contingent on a kubeContext: minikube.
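A rough sketch of that arrangement, with the repository URL, profile name and manifest paths as placeholders (and the schema version just one that supports these fields): the consuming skaffold.yaml always declares the remote dependency, and the other service's skaffold.yaml carries a profile that only activates on the minikube context.

# skaffold.yaml in the consuming repo
apiVersion: skaffold/v2beta29
kind: Config
requires:
  - git:
      repo: https://github.com/example/other-service
      path: skaffold.yaml
      ref: main

# skaffold.yaml in the other service's repo
apiVersion: skaffold/v2beta29
kind: Config
profiles:
  - name: local-dev
    activation:
      - kubeContext: minikube
    deploy:
      kubectl:
        manifests:
          - k8s/*.yaml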
Another alternative is to have several skaffold.yamls: one for prod, one for dev.

Capistrano: Deploy Laravel app on ubuntu server

I have an Ubuntu staging server with apache, php, mysql, git and composer installed. I have a private git repository set up on Bitbucket, and the project is already cloned to the staging server and to my local development machine. The Laravel setup is working perfectly fine on both machines.
What I currently do is: whenever there is an update to the git repository, I log in to the staging server, pull the latest code from the git repository and run composer install, npm install and bower install.
I want to automate this process with Capistrano. I checked the tutorials online, but all of them clone the repository whenever I issue a deploy command and create a fresh installation every time. Can't Capistrano help me work with the existing folder that is already set up?
The basic premise of Capistrano is the idea that a new installation is created each time, such that there is not much to be done initially in terms of setup. If you'd rather use a different mechanism, a different tool would work better for you! For such cases, you could try to write a script using SSHKit directly (fairly advanced), or write a makefile or some other tool to automate your process.
If you do want to try to make Capistrano work on its terms, look into how linked_dirs and linked_files work in it. They allow you to have some files (e.g. config files, log dirs, etc) which are outside of the deployment directory and as such are shared between deploys.
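For a Laravel app that could look something like the following in config/deploy.rb (Capistrano 3); the exact files and directories are just an example:

# keep environment config and generated/runtime data outside the release dirs
append :linked_files, ".env"
append :linked_dirs, "storage", "node_modules", "bower_components"

You create those once under shared/ on the server, and every new release gets symlinks to them, so environment configuration and installed dependencies survive across deploys.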

How to configure a Thoughtworks:GO task to deploy a repo?

I'm trying to figure out how to create a task (custom-command, not ant/rake, etc) to perform a deployment of a git-repo to some server/target (in this case Heroku). If I were to do this manually, it's just git push heroku master.
I've created a basic pipeline/stage/job/task (custom-command, in this case a Python script), and single agent. The pipeline has a material (git repo, with a name). Inside the script, I'm printing out os.environ.items() - it has several variables, including the SHA of the latest commit - but no URL for the actual repository.
So how is the agent (or task) supposed to know what repository to deploy?
The pipeline knows the material name, and I've tried passing in an Environment Variable such as ${materialName} (which didn't work). I could hard-code a URL in the task, but that's not a real solution.
Thoughtworks:GO's documentation is shiny, but a bit sparse in the details. I'd have thought something this basic would be well documented, but if so, I haven't found it so far.
When a task runs on an agent, it clones the repository specified in the material config. That clone's .git/config won't contain a Heroku remote, so git push heroku master won't work.
You would need to add the Heroku remote URL before you can deploy your git repo to Heroku:
git remote add heroku git@heroku.com:project.git
where project is the name of your Heroku project. This only needs to be done once, unless you clean the working directory on every run (the Stage Settings option that removes all files/directories in the working directory on the agent; you can see it in the UI under Admin -> Pipelines -> Stages -> Stage Settings tab), in which case you may have to add the remote URL via a task before you run the task that deploys.
Once you've done so, you should be able to use the heroku xxxx commands (assuming the Heroku Toolbelt is installed on the agent machine you use for deploying) and push to Heroku as usual with git push heroku master.
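If you do clean the working directory every run, the custom-command task could be a small script along these lines (the project name is a placeholder):

# add the Heroku remote if it is missing, then push the cloned material
git remote add heroku git@heroku.com:project.git 2>/dev/null || true
git push heroku master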

Building and deploying from a remote server with Capistrano

I'm new to Capistrano and struggling a little to get started. A brief description of what I need to do:
git pull the latest code from our git repo, on a central build server. This build server's environment matches the deployment environment exactly. I need the code to be built here. I don't want to deploy a binary that was built on a Mac laptop, for example.
compile the binary on this machine.
deploy it from this machine to all the target machines.
There is a shared user we can all SSH into on the build machine to do the builds.
The build machine is behind a gateway machine, not directly accessible.
All of the deployment target machines also have this shared user and are also behind the gateway.
The deployed binary is a single executable, and there is an init script on the target machines. After deploying the binary and changing the symlink to it, restart the service via the init script.
Everyone has appropriate SSH keys and agent forwarding for all necessary tasks.
So in principle it seems rather simple, but Capistrano seems opinionated and a bit magical, and as a result I'm not sure how to accomplish all of this. For example, it seems like it wants to check out my code and copy it to the remote machines without building it first.
I think I need to ignore all of Capistrano's default smarts and just make it run some shell commands on the appropriate servers. In pseudo-code:
ssh buildmachine via gateway "cd repo && git pull && make"
ssh targetmachine(s) via gateway "scp buildmachine:repo/binary .; <mv && symlink>; service foo restart"
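With plain OpenSSH and a jump host, the first step could literally be something like this (user and host names are invented):

ssh -J shareduser@gateway shareduser@buildmachine "cd repo && git pull && make"

and the second step would go through the gateway the same way, with agent forwarding (-A) so the target machine can scp the binary from the build machine.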
Am I even using the right tool for the job? It seems a lot like a round peg in a square hole.
Can someone explain to me what the contents of the Capistrano configuration files should be, and what cap commands I'd run to accomplish this?
BTW, I've searched around and looked at questions like "deploying with capistrano with remote git repo but without git running on production server" and "From manual pull on server to Capistrano".
The question is rather old, but you never know when someone stumbles onto it in need of information...
First and foremost, consider that Capistrano might simply not be the right tool for the job you want to do.
That said, it is not impossible to accomplish what you want. In projects that deploy and modify a large number of files (CSS/JS minification, JS builds, etc.) I would avoid it, but in your case you can consider running a "deployment repository" and configuring it in Capistrano as the source. Your process would look like this:
run the local build with whatever tools you need
upload the resulting binary to a deployment repository
run Capistrano, which will connect to the application servers, fetch the fresh binary from the repository, perform any server-side tasks required and symlink the release to "current"
As a side effect you end up with a full history of deployed binaries.
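A very rough sketch of what that could look like in Capistrano 3; the repository URL, service name, role and host names are placeholders, and the gateway is handled with an SSH proxy command:

# config/deploy.rb
set :application, "foo"
set :repo_url, "git@git.example.com:ops/foo-deploy.git"  # the deployment repository holding built binaries
set :deploy_to, "/opt/foo"

namespace :deploy do
  desc "Restart the service through its init script"
  task :restart do
    on roles(:app) do
      execute :sudo, "service", "foo", "restart"
    end
  end
end
after "deploy:publishing", "deploy:restart"

# config/deploy/production.rb
require "net/ssh/proxy/command"
server "target1.internal", user: "shareduser", roles: %w[app],
       ssh_options: { proxy: Net::SSH::Proxy::Command.new("ssh gateway -W %h:%p") }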

Simplest way to use mercurial to manage differences between web development and deployment?

I am using mercurial for website development. I "think" I'm using it correctly.
I develop on my development machine, commit fairly regularly. I will somewhat regularly push my commits to my hosted site-dev repository.
If things are set up how I want them for the live site, I push from my dev machine to the hosted site-live repository. Then I pull down from that repository onto the live server.
However, there are some changes that need to be made (changing directories from localhost to www.example.com, changing the DB connection stuff, etc.).
What I did was make these changes on my live machine, then push them back up to the site-live repository. I don't know why I did that, really, but at least there's a changeset sitting there with the necessary config changes.
What I don't know how to do is manage this process. I'm a little lost beyond committing, pushing and pulling with hg. I'm a single developer and haven't even done a merge yet.
Is there some way to keep that particular changeset identified, and just apply it, hopefully even BEFORE I pull from the repo down to the live server?
I think you can tell from my question that I'm in a little over my head with hg and workflow at the moment ;)
This is my understanding:
Essentially what you are trying to do is have development, staging and deployment environments. You do your development in the 'development' repository, test it in a staging environment and then, once satisfied, pull those changes into the deployment repository.
And when you pull from staging to deployment, you need to change your environment/configuration data.
My take is you should not be changing the configuration at all.
You should have configuration files structured like this:
- a basic configuration file: basic.conf
- environment-specific overrides: basic.dev.conf, basic.staging.conf and basic.deployment.conf
Use an environment variable: the overrides to the basic configuration data should be selected via an environment-specific variable, APP_ENV, set to dev, staging or deployment.
This way you should be able to override the configuration based on the environment without changing the configuration information.
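As a small illustration of the idea (the assembly step and the effective.conf name are invented; only the file naming follows the scheme above), the application start-up could pick its configuration like this:

# choose the override matching the current environment; default to dev
APP_ENV="${APP_ENV:-dev}"
cat basic.conf "basic.${APP_ENV}.conf" > effective.conf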
It is not a good idea to rely on making changes to config files each time you pull your code from development to staging to deployment.
I would keep the live server outside of version control, meaning I would have a small "install" script that pulls updates from the repository, removes any unnecessary development files, and applies the correct configuration files. Both the development and production configuration files should be in version control.
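A rough sketch of such an install script, where the repository path, the files removed and the config layout are all invented:

#!/bin/sh
# update the live checkout, drop development-only files, apply the production config
set -e
cd /var/www/site
hg pull
hg update --clean default
rm -rf tests docs
cp config/production.php config/config.php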