Maintaining DJANGO_SETTINGS_MODULE between local and production environments - docker-compose

Ran into an issue when we deployed to production: we had to update manage.py, changing os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local") to "config.settings.production". Of course this broke local settings when we pulled back to our dev branch.
We're running our containers via the docker-compose -f local.yml commands recommended in the documentation.
Am I missing something? Is this by design?

This environment variable should be set via a .env file; the production one is located under .envs/.production/.django and is not in source control (for security reasons). So yes, it is by design.
Depending on how you start your server, this file might be missing and the environment variable will end up unset.
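As a minimal sketch of that layout (the file contents and compose excerpt below are assumptions, not your actual project), the env file carries the setting and the compose file injects it, so manage.py can keep its local default:

    # .envs/.production/.django -- kept out of source control
    DJANGO_SETTINGS_MODULE=config.settings.production

    # production.yml (docker-compose) -- excerpt
    services:
      django:
        env_file:
          - ./.envs/.production/.django  # injects DJANGO_SETTINGS_MODULE into the container

Since os.environ.setdefault() in manage.py only applies when the variable is not already set, the env file wins in production and the config.settings.local default applies everywhere else.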

Related

Skaffold config dependencies with profiles

I have a microservice application in one repo that communicates with another service that's managed by another repo.
This is not an issue when deploying to the cloud; when developing locally, however, the other service needs to be deployed too.
I've read this documentation: https://skaffold.dev/docs/design/config/#remote-config-dependency and this seems like a clean solution, but I only want it to depend on the git skaffold config if deploying locally (i.e. current context is "minikube").
Is there a way to do this?
Profiles can be automatically activated based on criteria such as environment variables, kube-context names, and the Skaffold command being run.
Profiles are processed after the config dependencies are resolved, though. But you could have your remote config include a profile that is contingent on kubeContext: minikube.
Another alternative is to have several skaffold.yamls: one for prod, one for dev.
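A minimal sketch of the profile approach mentioned above (the repo URL, profile name, and manifest paths are hypothetical; check the apiVersion against your Skaffold release):

    # skaffold.yaml in the consuming repo
    apiVersion: skaffold/v2beta29
    kind: Config
    requires:
      - git:
          repo: https://github.com/example-org/other-service
          path: skaffold.yaml

    # skaffold.yaml in the other service's repo: its deploy is defined
    # only inside a profile, so it is skipped unless the current
    # kube-context is minikube
    apiVersion: skaffold/v2beta29
    kind: Config
    profiles:
      - name: local
        activation:
          - kubeContext: minikube
        deploy:
          kubectl:
            manifests:
              - k8s/*.yaml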

How to get Bamboo to use a different config file on deployment depending on environment

I have a different config file for each of my deployment environments: dev, QA, and live.
How do I get Bamboo to use the right config file for each environment when deploying?
So when deploying to the dev environment, Bamboo should use the dev config file, rename it, and put it in the right place.
I assume that artifacts can fix this? But how?
I solved it. I simply used artifacts and pointed directly to the file. Then in the deployment project I made a PowerShell script that deploys the right artifact to the right deployment environment.
I also made the script rewrite the file depending on the environment, so I don't need separate config files.
Just hit me up if you are having the same problem; the Atlassian documentation on this is terrible.
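For anyone who lands here, a minimal sketch of such a script (the placeholder token, file names, and target path are all hypothetical; Bamboo can pass the deployment environment name in as the parameter):

    # deploy-config.ps1
    param(
        [Parameter(Mandatory = $true)]
        [ValidateSet('dev', 'qa', 'live')]
        [string]$Environment
    )

    # Read the template shipped in the artifact and fill in the environment.
    $template = Get-Content -Raw -Path '.\app.config.template'
    $config   = $template -replace '__ENVIRONMENT__', $Environment

    # Write it where the application expects to find it.
    Set-Content -Path 'C:\inetpub\myapp\app.config' -Value $config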

Is it possible to have ansible use a "remote" playbook for git-based continuous deployment?

I need to manage a few servers that run code that is currently deployed there as a couple of git repositories. I would like to be able to store in the project's repository the parts (if not all) of the playbook that are relevant for the repository: for example, the list of package dependencies, virtualenv requirements, and configuration templates. This also allows those to change per branch/commit, meaning I can make sure that if I need to deploy a specific branch/commit, the playbook that is correct for that commit is used, if, say, the configuration template changed.
It seems like the only solution is to check out the git repository locally. Is it possible to tell Ansible to run a remote playbook (from the git repository that is being checked out on the server)? I was thinking of having Ansible run Ansible using a local connection on the remote host, but I haven't tried it to see if this will actually work.
How do people manage to use ansible for continuous deployment based on git without some mechanisms for running a remote playbook?
Take a look at ansible-pull.
It pulls the repo and executes the playbook.
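A minimal sketch of how that looks on a target host (the repo URL, branch, and playbook path are hypothetical); the playbook lives in the application repo and is applied to localhost over a local connection, typically from cron:

    # pulls the repo, then runs deploy/local.yml against localhost
    ansible-pull \
      --url https://github.com/example-org/myapp.git \
      --checkout main \
      deploy/local.yml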

Building and deploying from a remote server with Capistrano

I'm new to Capistrano and struggling a little to get started. A brief description of what I need to do:
git pull the latest code from our git repo, on a central build server. This build server's environment matches the deployment environment exactly. I need the code to be built here. I don't want to deploy a binary that was built on a Mac laptop, for example.
compile the binary on this machine.
deploy it from this machine to all the target machines.
There is a shared user we can all SSH into on the build machine to do the builds.
The build machine is behind a gateway machine, not directly accessible.
All of the deployment target machines also have this shared user and are also behind the gateway.
The deployed binary is a single executable, and there is an init script on the target machines. After deploying the binary and changing the symlink to it, restart the service via the init script.
Everyone has appropriate SSH keys and agent forwarding for all necessary tasks.
So in principle it seems rather simple, but Capistrano seems opinionated and a bit magical. As a result I'm not sure how to accomplish all of this. It seems like it wants to check out my code and copy it to the remote machines, for example without building it first.
I think I need to ignore all of Capistrano's default smarts and just make it run some shell commands on the appropriate servers. In pseudo-code:
ssh -J gateway user@buildmachine "cd repo && git pull && make"
ssh -J gateway user@targetmachine "scp buildmachine:repo/binary . && <mv && symlink> && service foo restart"
Am I even using the right tool for the job? It seems a lot like a round peg in a square hole.
Can someone explain to me what the contents of the Capistrano configuration files should be, and what cap commands I'd run to accomplish this?
BTW, I've searched around and looked at questions like "Deploying with Capistrano with remote git repo but without git running on production server" and "From manual pull on server to Capistrano".
The question is rather old, but you never know when someone stumbles onto it in need of information...
First and foremost, consider that Capistrano might just not be the right tool for the job you want to do.
That said, it is not impossible to accomplish what you expect. While I would avoid it in projects that deploy and modify large numbers of files (CSS/JS minification, JS builds, etc.), in your case you can consider running a "deployment repository" and configuring it in Capistrano as the source. Your process would look like this:
run the local build with whatever tools you need
upload resulting binary to a deployment repository
run capistrano that will connect to application servers, fetch fresh binary from repository, perform any server side tasks required and symlink to "current"
As a side effect, you end up with a full history of deployed binaries.
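A hedged sketch of the Capistrano side under that scheme (the application name, repo URL, paths, and service name are all assumptions; server definitions would live in config/deploy/production.rb):

    # config/deploy.rb
    lock "~> 3.17"  # match your installed Capistrano version

    set :application, "foo"
    # the "deployment repository" that receives the built binary
    set :repo_url,  "git@buildmachine:deployment-repo.git"
    set :deploy_to, "/opt/foo"

    # every host sits behind the gateway, so tunnel through it
    set :ssh_options, {
      proxy: Net::SSH::Proxy::Command.new("ssh gateway -W %h:%p"),
      forward_agent: true
    }

    namespace :deploy do
      desc "Restart the service once the 'current' symlink points at the new release"
      after :publishing, :restart do
        on roles(:app) do
          execute :sudo, "service foo restart"
        end
      end
    end

Running cap production deploy then fetches the fresh binary onto each app server, moves the current symlink, and restarts the service via the init script.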

Simplest way to use mercurial to manage differences between web development and deployment?

I am using mercurial for website development. I "think" I'm using it correctly.
I develop on my development machine, commit fairly regularly. I will somewhat regularly push my commits to my hosted site-dev repository.
If things are set up how I want them for the live site, I push from my dev machine to the hosted site-live repository. Then I pull down from that repository onto the live server.
However, there are some changes that need to be made (changing directories from localhost to www.example.com, changing the DB connection stuff, etc.).
What I did was make these changes on my live machine, then push them back up to the site-live repository. I don't know why I did that, really, but at least there's a changeset sitting there with the necessary config changes.
What I don't know how to do is manage this process. I'm a little lost beyond committing, pushing and pulling with hg. I'm a single developer and haven't even done a merge yet.
Is there some way to keep that particular changeset identified, and just apply it, hopefully even BEFORE I pull from the repo down to the live server?
I think you can tell from my question that I'm in a little over my head with hg and workflow at the moment ;)
This is my understanding:
Essentially, what you are trying to do is have development, staging, and deployment environments. You do your development in the 'development' repository, test it in a staging environment, and then, once satisfied, pull those changes into the deployment repository.
And when you pull from staging to deployment, you need to change your environment/configuration data.
My take is that you should not be changing the configuration at all.
You should have configuration files structured as:
a basic configuration file: basic.conf
environment-specific overrides: basic.dev.conf, basic.staging.conf, and basic.deployment.conf
The override to apply should be selected via an environment-specific variable, e.g. APP_ENV set to dev, staging, or deployment.
This way you can override the configuration based on the environment without changing the configuration information.
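A minimal sketch of that scheme in Python (the INI/configparser format and the default value are assumptions, following the basic.conf naming above):

    # load_config.py
    import configparser
    import os

    def load_config():
        env = os.environ.get("APP_ENV", "dev")  # dev, staging, or deployment
        config = configparser.ConfigParser()
        # Files are read in order, so values in the environment-specific
        # file override the shared defaults in basic.conf.
        config.read(["basic.conf", f"basic.{env}.conf"])
        return config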
It is not a good idea to rely on making changes to config files each time you pull your code from development to staging to deployment.
I would keep the live server outside version control, meaning I would have a small "install" script that pulls updates from the repository, removes any unnecessary development files, and applies the correct configuration files. Both development and production configuration files should be in version control.
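A sketch of what such an install script might look like (the paths, repo URL, and development-only files are hypothetical):

    #!/bin/sh
    # install.sh -- run on the live server after pushing to site-live
    set -eu
    cd /var/www/example.com
    hg pull https://hg.example.com/site-live    # fetch updates from the live repo
    hg update --clean default                   # apply them to the working copy
    rm -rf tests/ docs/                         # strip development-only files
    cp config/basic.deployment.conf basic.conf  # apply the live configuration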