Sharing a fabfile across multiple projects - deployment

Fabric has become my deployment tool of choice both for deploying Django projects and for initially configuring Ubuntu slices. However, my current workflow with Fabric isn't very DRY, as I find myself:
copying the fabfile.py from one Django project to another and
modifying the fabfile.py as needed for each project (e.g., changing the webserver_restart task from Apache to Nginx, configuring the host and SSH port, etc.).
One advantage of this workflow is that the fabfile.py becomes part of my Git repository, so between the fabfile.py and the pip requirements.txt I have a reproducible virtualenv and deployment process. I want to keep this advantage while becoming more DRY. It seems that I could improve my workflow by:
being able to pip install the common tasks defined in the fabfile.py and
having a fab_config file containing the host configuration information for each project and overriding any tasks as needed
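Roughly, this sketch is what I have in mind (fabric_common and fab_config are placeholder names I've made up; the shared package would be pip-installed and the config module would live in each project's repository):
# fabfile.py -- hypothetical sketch; fabric_common and fab_config are placeholders
from fabric.api import env

# shared tasks (deploy, webserver_restart, ...) installed via pip
from fabric_common.tasks import *

# per-project host configuration, kept in the project's repo
import fab_config

env.hosts = fab_config.HOSTS    # e.g. ['example.com:2222']
env.user = fab_config.SSH_USER

# a project can shadow any shared task simply by defining it in fab_config
if hasattr(fab_config, 'webserver_restart'):
    webserver_restart = fab_config.webserver_restart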
Any recommendations on how to increase the DRYness of my Fabric workflow?

I've done some work in this direction with class-based "server definitions" that include connection info and can override methods to do specific tasks in a different way. Then my stock fabfile.py (which never changes) just calls the right method on the server definition object.
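Stripped down, the idea looks roughly like this (all names below are made up for illustration, not my actual code):
# servers.py -- hypothetical sketch of class-based server definitions
from fabric.api import env, sudo

class ServerDefinition(object):
    host = None
    port = 22
    user = 'deploy'

    def setup_env(self):
        # push connection info into Fabric's env
        env.hosts = ['%s:%d' % (self.host, self.port)]
        env.user = self.user

    def restart_webserver(self):
        sudo('service apache2 restart')

class NginxProjectServer(ServerDefinition):
    host = 'project.example.com'
    port = 2222

    def restart_webserver(self):
        # this project runs nginx, so override just this one task
        sudo('service nginx restart')

# fabfile.py (never changes): pick the server definition and delegate to it
from servers import NginxProjectServer
server = NginxProjectServer()
server.setup_env()

def restart():
    server.restart_webserver()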

Related

Should I bundle the source code, build script and deployment script together?

In my previous company they were always bundled together, but there was a recurring problem: when the company added a new server, they needed to change the deployment script and create a new build version, even though there was no change to the source code. I would like to see what your company's practice is for source control, build, and deployment.
The best practice for deployment is to have a standard system for that purpose. Usually that system will have a standard way to enumerate which hosts are available and which versions of software are on each host, so any scripts needed for deployment become agnostic to the machines in use.
Similarly, in many environments deployment uses a set of standard techniques. For example, it is common to use CI to run tests and then build one or more deployment artifacts, such as a tarball or a container, and then every deploy uses the same method (e.g. unpack the tarball into a directory named after the repository), so a per-project deployment script may not even be necessary. If you use a standard method and a script is necessary, then you should include it in your artifact (which means it's included in the source code) or in the configuration for the deployment system (which should be maintained in a repository as well).
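As a rough illustration of that tarball approach (all paths, hostnames, and the inventory format below are invented, not any particular system's conventions):
#!/usr/bin/env python3
# Hypothetical sketch: copy a CI-built tarball to every host in an inventory
# file and unpack it into a directory named after the repository.
import os
import subprocess
import sys

def deploy(tarball, repo_name, inventory_file):
    with open(inventory_file) as f:
        hosts = [line.strip() for line in f if line.strip()]
    name = os.path.basename(tarball)
    dest = '/srv/%s' % repo_name          # assumed install location
    for host in hosts:
        subprocess.check_call(['scp', tarball, '%s:/tmp/%s' % (host, name)])
        subprocess.check_call(['ssh', host,
            'mkdir -p %s && tar -xzf /tmp/%s -C %s' % (dest, name, dest)])

if __name__ == '__main__':
    deploy(*sys.argv[1:4])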
Whether you should include the source code depends on whether it's needed. If you're deploying a project in a language like Python or Ruby, then obviously it will be. However, if you're deploying a project in a compiled language like Go or Rust, then it probably isn't, and your build artifacts will be smaller and easier to work with if you leave it out and just build a binary artifact during CI.

Big cluster config management: Kustomize or Jsonnet?

Currently I'm working with Kubeflow. It is quite a large setup, with about 30 different deployments. The Kubeflow team's default manifests use the standard Kustomize tool to provide patches for different environments like on-prem, cloud, testing, etc.
However, I still feel quite overwhelmed and limited by all those configurations. The only way I can quickly navigate and manipulate variables for the whole project is to use the search and/or replace function of my IDE (yes, I know Kustomize can do variables in each environment, but I want to do it for all deployments at once). This sucks, as it is not reversible once replaced. Another problem is navigation: so many folders are simply named base that it is quite hard to find out where some original fields are defined. I also struggled when I wanted to combine just a few parts of different environments. For example, they provide three different environments for dex auth: one using email, one using GitHub, and one using Google. I want my setup to have all three of them, but I cannot easily reuse the config from those environments. I had to create my own environment and dig through each of them to see what changes they made in their patches.
I have never tried setting up Jsonnet configurations (maybe with tools like Tanka) on a similar cluster. I'm seeing a few big companies using Jsonnet; the two that I know of are Grafana (they even created Tanka) and Databricks (they created their own Jsonnet compiler). What are the pros and cons of doing Kubernetes configuration management in Jsonnet compared to Kustomize, currently the most popular choice? Is it worth learning and managing k8s config using Jsonnet (maybe with Tanka)?
What about ArgoCD?
Argo CD is a GitOps tool for managing projects like the one you describe.
It supports Kustomize, Jsonnet, and more.
It can manage your resources, show you what is deployed and whether it is in sync, let you edit your YAML files, and more.

How to apply a patch in a remote repository?

Summary:
I created a patch on my local machine, but I need to apply this patch on a remote machine.
I'm using Tortoise on both machines. How can I do this the proper way?
Context:
I have a development environment for a project that is not very common, I guess. I could develop it in a remote VM, but Eclipse and the entire machine are so slow that I think it is unproductive. Also, I have to use a VPN to connect to the VM, which makes my connection even slower. Because of that, I want to develop on my local machine, but for the build I need to apply this patch on the remote VM to test whether the changes were made correctly. I noticed that a patch can't be applied to an unversioned file; because of that, I have to clean my entire remote repository with Tortoise and apply the patch again. But I wonder if this is the best approach.
If you are working with two build environments, you have to version ALL of the source files. The only things that will not be under version control are the build directory and the machine-specific configuration files.
So if a file is not under version control somewhere, it is likely because your project setup is not correct. Take the time to put everything in a single folder that can be under version control, start tracking it, and then have the two machines communicate with the same repository.
Side note: it is quite common to develop on one machine and build on another; you should be able to get a simple and efficient work environment quite easily.
I hope I got your question right. If not, please provide more specific information, such as your project's tree, the reason why you cannot test on your development machine, why this specific file is not under version control, and anything else relevant.

Puppet - recognize new build versions and deploy

I have a Puppet master that sources my application builds from a folder on the master, e.g. xxxxx_v1.0.0.zip and yyyyy_v1.0.8.zip [xxxxx gets deployed to one set of servers and yyyyy to another set of servers].
What is the best way to handle sourcing new versions of my application builds on the Puppet master, without editing the .pp files on the master to reference the new build number in the filename? Preferably this would be automatic.
Thanks
A good way is to build a suitable package for your operating system instead. Puppet can use those with
package { 'application-x': ensure => latest }
Failing that, you can solve this:
on the agent side, by fetching your application metadata from somewhere, e.g. with an exec of wget, then having it run a script to perform the deployment if necessary
on the master side using an ENC like the Puppet Dashboard, or better yet, Hiera, to hold your latest version information
If you really want to do this through Puppet's fileserver without touching any metadata and just dropping the files into your modules, you can try the generate function.
$latest_zip_application_x = generate("/usr/local/bin/find_latest", "application_x")
file { 'application_x.zip':
  ...
  source => "puppet:///modules/application_x/path/to/${latest_zip_application_x}",
}
where /usr/local/bin/find_latest is a script that will find the most recent version of your package and write it to stdout.
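For example, find_latest could be a small script along these lines (only a sketch; the module file layout is assumed to match the source path above):
#!/usr/bin/env python
# Hypothetical sketch of /usr/local/bin/find_latest: print the newest build
# of the given application so the generate() call above can capture it.
import glob
import os
import sys

app = sys.argv[1]                                    # e.g. "application_x"
# assumed layout: the zips sit in the module's files directory on the master
builds_dir = '/etc/puppet/modules/%s/files/path/to' % app
candidates = glob.glob(os.path.join(builds_dir, '%s_v*.zip' % app))
if not candidates:
    sys.exit('no builds found for %s' % app)
latest = max(candidates, key=os.path.getmtime)       # newest by mtime
sys.stdout.write(os.path.basename(latest))           # no trailing newline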
This is pretty horrible practice though - you are really not catering to Puppet's strengths with constructs like these.

Automated deployment of Check Script for Nagios

We currently use Ant to automate our deployment process. One of the tasks that needs carrying out when setting up a new service is to implement monitoring for it.
This involves adding the service to one of the hosts in the Nagios configuration directory.
Has anyone attempted to automate such a thing? It seems that the Nagios configuration is laid out so that the files are split up per host, as opposed to per application.
For example:
localhost.cfg
This may cause an issue with implementing an automated solution, since I'm setting up the monitoring as I'm deploying the application to the environment (i.e. the host). It's like a jigsaw puzzle where two pieces don't quite fit together. Any suggestions?
OK, you could say that the monitoring really only needs to be set up once, but I want the developers to have the power to update the check script when the testing criteria change, without too much involvement from Operations.
Anyone have any comments on this?
Kind Regards,
Steve
The splitting of Nagios configuration files is optional: you can have it all in one file if you want, or split it up into several files as you see fit. The cfg_dir configuration directive can be used to have Nagios pick up any .cfg files found in a directory.
When configuration files have changed, you'll have to reload the configuration in Nagios. This can be done via the external commands pipe.
Nagios provides a configuration validation tool, so that you can verify that your new configuration is ok before loading it into the live environment.
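Putting those three pieces together, an automated step (run from your Ant deploy, for instance) could look roughly like this sketch; the conf.d directory, nagios.cfg path, and command file location are assumptions that vary per installation:
#!/usr/bin/env python
# Hypothetical sketch: write a service definition into a cfg_dir directory,
# validate the configuration, then tell Nagios to restart via the external
# command pipe.
import subprocess
import time

NAGIOS_CFG = '/etc/nagios/nagios.cfg'            # assumed main config path
CONF_D = '/etc/nagios/conf.d'                    # a directory listed via cfg_dir
COMMAND_FILE = '/var/lib/nagios/rw/nagios.cmd'   # assumed external command pipe

SERVICE_TEMPLATE = """\
define service {
    use                 generic-service
    host_name           %(host)s
    service_description %(desc)s
    check_command       %(check)s
}
"""

def deploy_check(host, desc, check):
    cfg_path = '%s/%s_%s.cfg' % (CONF_D, host, desc.replace(' ', '_'))
    with open(cfg_path, 'w') as f:
        f.write(SERVICE_TEMPLATE % {'host': host, 'desc': desc, 'check': check})

    # verify the whole configuration before touching the running daemon
    subprocess.check_call(['nagios', '-v', NAGIOS_CFG])

    # reload by asking the daemon to restart itself via the command pipe
    with open(COMMAND_FILE, 'w') as pipe:
        pipe.write('[%d] RESTART_PROGRAM\n' % int(time.time()))

if __name__ == '__main__':
    # example values only
    deploy_check('appserver01', 'myapp health', 'check_http!-u /healthz')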