Version control of docker-compose.yml

My application has 4 docker containers that talk to each other, and it is specified in a docker-compose.yml file, so I can just run docker-compose up -d from the directory where that file is stored and everything starts.
I am virtually at the end of setting up my CI service, which goes from a commit to the git repository, through testing, to building the docker images that I need for my deploy. I now need to sort out how to deploy.
I already have the current version running, and my docker-compose.yml file is configured via environment variables held in a .env file. The docker-compose.yml file is unlikely to change between versions, but it might. What will change is the .env file, as it specifies the image names and tags that the CI system has just built and which the docker-compose.yml file will use to start the new version of the running system. .env is created on the fly by scripts in the repository, which the CI system runs in its workspace. My deploy step is really just about copying .env and docker-compose.yml into place and then stopping the old set of services and starting the new one.
My question is: if I change the .env file or docker-compose.yml under a running version, will docker-compose down properly stop the old running containers, so that when I immediately follow it with docker-compose up -d I swap over to the new images? Is there a better way of handling this situation?
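In other words, the deploy step amounts to something like this sketch (the /srv/app path and the WEB_IMAGE variable are simplified placeholders, not my real names):

# deploy.sh - sketch of the deploy step; paths and variable names are placeholders
set -e

# .env is generated by the CI scripts and names the freshly built images, e.g.:
#   WEB_IMAGE=registry.example.com/myapp/web:42
# docker-compose.yml then consumes it as: image: "${WEB_IMAGE}"
cp .env docker-compose.yml /srv/app/

cd /srv/app
docker-compose down    # stop and remove the old containers
docker-compose up -d   # start containers from the images now named in .env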

Related

Maintaining DJANGO_SETTINGS_MODULE between local and production environments

Running into an issue when we deployed to production: we had to update manage.py to change os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.local") to point at config.settings.production. Of course this broke local settings when we pulled back to our dev branch.
We're running our containers via the docker-compose local.yml commands recommended in the documentation.
Am I missing something? Is this by design?
This environment variable should be set via a .env file; the production one is located under .envs/.production/.django and is not in source control (for security reasons). So yes, it is by design.
Depending on how you start your server, this file might be missing and the environment variable will end up unset.
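For illustration, the relevant line in that file looks like this (assuming, as in cookiecutter-django's layout, that the production compose file loads it via an env_file: entry):

# .envs/.production/.django - kept out of source control
DJANGO_SETTINGS_MODULE=config.settings.production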

Is there a way to "pull" and "up" with docker-compose without creating build folders in the testing environment?

So, I have a docker-compose file that has a build command in each service. In development, docker-compose up works ok. In the test environment, I want to docker-compose pull and then docker-compose up with those images, and it works ok, except that it needs the folders referenced by the build commands to exist on the testing server.
Is it really necessary, or is there a way to pull and up the containers without creating the build folders on the testing server?
There is docker-compose up --no-build
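Assuming CI has already pushed the images to a registry, the test-environment sequence becomes:

docker-compose pull               # fetch the prebuilt images from the registry
docker-compose up -d --no-build   # start them without requiring the build folders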

How do I speed up an EB deploy using ebignore?

I'm deploying my app to ElasticBeanstalk. I'm using an .ebignore file because there are files that I do not want to check into git, but that I do want deployed with the app (like application secrets, config vars, etc.). The issue I'm facing is that when using an .ebignore, the deploy takes FOREVER. I've used the --verbose flag, and I can see that it is recursing my entire node_modules directory and skipping each file individually. When I deploy using .gitignore instead, it is very fast.
Has anyone else experienced this? How do I speed up this process?
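For reference, .ebignore uses the same pattern syntax as .gitignore, so a minimal one looks like this sketch (the patterns are illustrative, not from the original post):

# .ebignore - same pattern syntax as .gitignore
node_modules/
.git/
*.log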

How to deploy heroku app with secret yaml configuration file without committing the file?

In other rails projects, I'd have a local database.yml and commit only a database.sample file to the source code repository. When deploying, a capistrano script would symlink a shared version of database.yml into each release.
When deploying to heroku, git is used and they seem to override database.yml altogether and do something internal.
That's all fine and good for database.yml, but what if I have s3 configuration in config/s3.yml? I'm putting my project on github, so I don't want to commit s3.yml where everyone can see my credentials. I'd rather commit a sample s3.sample, which people will override with their own settings, and keep a local s3.yml uncommitted in my working directory.
What is the best way to handle this?
Heroku has some guidance on this:
http://devcenter.heroku.com/articles/config-vars
An alternative solution is to create a new local branch where you modify .gitignore so that the secret file can be pushed to heroku.
DON'T push this branch to your Github repo.
To push a non-master branch to heroku, use:
git push heroku secret-branch:master
More info can be found on:
https://devcenter.heroku.com/articles/multiple-environments#advanced-linking-local-branches-to-remote-apps
Use heroku run bash and then ls to check whether your secret file has been pushed to heroku or not.
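Put together, the workflow looks roughly like this (the file name is a placeholder):

git checkout -b secret-branch
# edit .gitignore so the secret file is no longer ignored, then:
git add .gitignore config/s3.yml
git commit -m "add secret config for heroku deploys only"
git push heroku secret-branch:master
heroku run bash   # then ls to confirm the file was pushed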
Store the s3 credentials in environment variables.
$ cd myapp
$ heroku config:add S3_KEY=8N029N81 S3_SECRET=9s83109d3+583493190
Adding config vars:
S3_KEY => 8N029N81
S3_SECRET => 9s83109d3+583493190
Restarting app...done.
In your app:
AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV['S3_KEY'],
  :secret_access_key => ENV['S3_SECRET']
)
See the Heroku Config Vars documentation, which also explains the development setup.
If using Rails 4.1 beta, try the heroku_secrets gem, from https://github.com/alexpeattie/heroku_secrets:
gem 'heroku_secrets', github: 'alexpeattie/heroku_secrets'
This lets you store secret keys in Rails 4.1's config/secrets.yml (which is not checked in to source control) and then just run
rake heroku:secrets RAILS_ENV=production
to make its contents available to heroku (it parses your secrets.yml file and pushes everything in it to heroku as environment variables, per the heroku best practice docs).
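For illustration, config/secrets.yml might hold something like this (the key names reuse the S3 example above and are placeholders):

production:
  s3_key: 8N029N81
  s3_secret: 9s83109d3+583493190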
You can also check out the Figaro gem.
I solved this by building the credentials from environment variables at build time and writing them to where I need them before the slug is created.
Some use-case-specific info that you can probably translate to your situation:
I'm deploying a Node project, and in package.json's postinstall script I call "bash create-secret.sh". Since postinstall is performed before the slug is created, the file will be added to the slug.
I had to use a bash script because I had trouble getting strings that contained newlines to print correctly, and I wasn't able to get it done with Node. Probably just me not being skilled enough, but maybe you'll run into a similar problem.
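As an illustration, the script can be as small as this sketch (the variable name and target path are made up for the example):

#!/bin/bash
# create-secret.sh - write a multi-line secret held in a config var to a file
# before the slug is created; printf preserves the embedded newlines
printf '%s' "$SECRET_FILE_CONTENTS" > config/secret-file.json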
Looking into this with Heroku's build- and deploy-time secrets, it seems like it's not something Heroku supports. This means that for a Rails app, there is no way to pull gems from a private repo other than, for example, committing BUNDLE_GITHUB__COM to the repository.
I'll try to see if there is a way to have CI bundle the private dependencies before shipping to heroku.

How to deploy local files without committing to git?

I'm working in a local branch and want to try my changes on the staging server, but I don't want to commit these changes. Can I deploy them without committing?
I know about the deploy:upload recipe, but I need a way to deploy several files or the whole working directory.
Thanks.
The most important thing about capistrano is that it lets you execute code on a remote server; what we call a deploy is a set of default scripts that perform the many small tasks required to set up a new version of the application on the server.
So it is possible to write your own script that executes something like the following (untested, so it probably needs tweaking):
# pack the sources locally
system "tar -czf /tmp/package.tgz *"
# upload the package to the server
upload "/tmp/package.tgz", "/tmp/package.tgz"
# remove the old files and unpack the sources on the server
run "cd /app_path/; rm -rf *; tar -xzf /tmp/package.tgz"
# override files (force recursive symlinks) with server configs, like database.yml
run "cp -flrs /app_shared_path/* /app_path/"
# restart the application - this is for passenger; use your own server's restart command
run "cd /app_path/; touch tmp/restart.txt"
I did a similar setup once for deployment, before I got access to git.
I deploy some cached (minified, etc.) javascript files from a rails app. The simplest way is just to do this in a capistrano task:
top.upload("public/javascripts/cache", "#{current_path}/public/javascripts/cache")
This will use scp to upload the entire 'cache' directory.