Local testing of a Perl repository using Travis CI (with Docker)

I'd like to fix a bug in a Perl repository (now owned by me; I had just submitted some pull requests), but at the moment it is failing its Travis CI tests, and it was already failing before my pull requests.
My goal is to be able to run Travis CI tests locally starting from the repository's .travis.yml.
Note that I'm totally new to Travis CI.
Following others' solutions that pointed to this FAQ (http://web.archive.org/web/20180929150027/https://docs.travis-ci.com/user/common-build-problems/#troubleshooting-locally-in-a-docker-image), which, as you can see, is no longer officially available on travis-ci.com, I tried:
sudo docker pull travisci/ci-amethyst:packer-1512508255-986baf0
sudo docker run --name travis-debug -dit travisci/ci-amethyst:packer-1512508255-986baf0 /sbin/init
sudo docker exec -it travis-debug bash -l
From the container:
su - travis
git clone https://github.com/{user}/{repo}.git
Now I don't know how to build the bash script to run the tests, as the last two steps (manually install dependencies / run your Travis CI build) look cryptic: I don't know how to run the build, and they possibly lead to a lack of reproducibility (if I install dependencies manually, how do I know I'll get the same results as the cloud test?).
I also tried starting from the procedure described here (https://github.com/travis-ci/travis-build); one error is 'Could not locate Gemfile or .bundle/ directory', but I'm probably missing some steps.

For what it's worth, I think you are going at it from the wrong angle.
Travis is just running your stuff remotely. Instead of bringing Travis to your machine, you need to make your tests pass locally first - cryptic or not - there is no way around it, especially if you are going to own this repo.
Another reason I am recommending this is that, as you have already witnessed, the develop-debug-fix cycle is much longer when you rely on something remote to test your code.
It has been my experience that your .travis.yml should be super simple, since it just runs one or two scripts or commands that can comfortably run locally.
If you are comfortable with Docker, I would consider building a local Dockerfile with all the dependencies and bringing your tests to work in your Docker environment. Once you have succeeded with that, asking Travis to do the same (run the tests in Docker) is trivial.
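For example, here is a minimal sketch of that idea, assuming the project lists its dependencies in a cpanfile or Makefile.PL and keeps its tests under t/; the image tag and commands are illustrative, not taken from the repository:
# Run the test suite in a plain Perl container instead of the Travis image.
# Assumes the repository is checked out in the current directory.
docker run --rm -it -v "$PWD":/work -w /work perl:5.36 bash -c '
  cpanm --installdeps --notest . &&
  prove -lr t/
'
If that passes, reproducing it on Travis is mostly a matter of putting the same two commands in the install/script sections of .travis.yml.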
Not sure if it is the answer you were looking for, but it was too long for a comment.

Related

Has there been a fix for this issue related to celery? https://github.com/celery/celery/issues/3519

Celery seems to be picking up both my old code and my new code. I have tried clearing the cache, clearing the broker queue (Redis), restarting Celery, etc., but none of them seem to fix the issue.
For context, we have new releases going out to various servers periodically. The web application uses Django REST framework in the backend and Celery for scheduling asynchronous tasks. Recently, when we deployed a new version of the code, the application behaved very strangely: it had artifacts from the old code that was being run as well as parts of the new code. This was very weird behaviour until we found a GitHub thread (https://github.com/celery/celery/issues/3519) which outlines exactly the issue we faced. There seem to be no good answers in that thread, so I'm posting it here in case anyone with Celery knowledge knows of a workaround to stop Celery from picking up the old artifacts.
The deployment is done through Jenkins build scripts. The script is below; for obvious reasons, I have replaced our application name with "proj".
sudo /bin/systemctl stop httpd
sudo /bin/systemctl stop celery
/bin/redis-cli flushall
/srv/proj/bin/pip install --no-cache-dir --upgrade -r /srv/proj/requirements/staging.txt
/usr/bin/git fetch
/usr/bin/git fetch --tags
/usr/bin/git checkout $TAG
/srv/proj/bin/python manage.py migrate
sudo /bin/systemctl restart httpd
sudo /bin/systemctl restart proj
sudo /bin/systemctl start celery
OP, the problem is almost invariably that your servers still have old code on them. There are two possible issues (see the sketch after these two points for quick checks):
The checkout command fails. checkout can fail to switch branches if there are local changes in the directory. In addition, the deployment script doesn't pull the latest changes to the server. The more common approach we use is to keep the venv in another directory and, on deploy, clone the repository into a new (versioned) directory, then switch the "main" app directory symlink to point to the latest version. The latter, for example, is the approach used by AWS's Elastic Beanstalk.
There are stray Celery processes still running. Depending on how you started/stopped Celery, there could be stray worker processes still hanging around and running your old code.
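A quick way to check for both problems on a server; the path name below is an assumption, not taken from your setup:
# 1) Look for stray Celery workers that survived the restart.
pgrep -af celery || echo "no celery processes found"
# If old workers are still running, stop them before restarting the service:
# sudo pkill -f 'celery worker'
# 2) Confirm the working copy is actually on the deployed tag and has no
#    local modifications that could have blocked the checkout.
cd /srv/proj/app        # hypothetical application directory
git status --short      # should print nothing
git describe --tags     # should print the deployed $TAG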
Ultimately, you won't be able to diagnose this problem without confirming that all of your servers are 100% identical. If you have devs or admins who SSH into these boxes, chances are that one or more of them made some change that affected (1) or (2).

Is it possible to have ansible use a "remote" playbook for git-based continuous deployment?

I need to manage a few servers that run code that is currently deployed there as a couple of git repositories. I would like to be able to store in the project's repository the parts (if not all) of the playbook that are relevant to that repository: for example, the list of package dependencies, virtualenv requirements, and configuration templates. This would also allow those to change on a per-branch/per-commit basis, meaning I can make sure that if I need to deploy a specific branch or commit, the playbook that is correct for that commit is used (if, say, the configuration template changed).
It seems like the only solution is to check out the git repository locally. Is it possible to tell Ansible to run a remote playbook (from the git repository that is being checked out on the server)? I was thinking of having Ansible run ansible with a local connection on the remote host, but I haven't tried it to see whether this will actually work out.
How do people manage to use ansible for continuous deployment based on git without some mechanisms for running a remote playbook?
Take a look at ansible-pull.
It pulls the repo and executes the playbook.
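For example (the repository URL, branch, and playbook name are placeholders), a cron-friendly invocation on each server might look like this:
# Pull the repo onto the host and run the playbook it contains against localhost.
# -U: repository URL; -C: branch/tag/commit to check out; -d: where to clone;
# -i: inventory (just this host); site.yml: playbook path inside the repository.
ansible-pull \
  -U https://github.com/{user}/{repo}.git \
  -C main \
  -d /var/lib/ansible-pull \
  -i localhost, \
  site.yml
Because the playbook travels with the commit being checked out, the per-branch/per-commit requirement comes for free.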

Building and deploying from a remote server with Capistrano

I'm new to Capistrano and struggling a little to get started. A brief description of what I need to do:
git pull the latest code from our git repo, on a central build server. This build server's environment matches the deployment environment exactly. I need the code to be built here. I don't want to deploy a binary that was built on a Mac laptop, for example.
compile the binary on this machine.
deploy it from this machine to all the target machines.
There is a shared user we can all SSH into on the build machine to do the builds.
The build machine is behind a gateway machine, not directly accessible.
All of the deployment target machines also have this shared user and are also behind the gateway.
The deployed binary is a single executable, and there is an init script on the target machines. After deploying the binary and changing the symlink to it, restart the service via the init script.
Everyone has appropriate SSH keys and agent forwarding for all necessary tasks.
So in principle it seems rather simple, but Capistrano seems opinionated and a bit magical, and as a result I'm not sure how to accomplish all of this. For example, it seems like it wants to check out my code and copy it to the remote machines without building it first.
I think I need to ignore all of Capistrano's default smarts and just make it run some shell commands on the appropriate servers. In pseudo-code:
ssh buildmachine via gateway "cd repo && git pull && make"
ssh targetmachine(s) via gateway "scp buildmachine:repo/binary .; <mv && symlink>; service foo restart"
Am I even using the right tool for the job? It seems a lot like a round peg in a square hole.
Can someone explain to me what the contents of the Capistrano configuration files should be, and what cap commands I'd run to accomplish this?
BTW, I've searched around and looked at questions like "Deploying with Capistrano with remote git repo but without git running on production server" and "From manual pull on server to Capistrano".
The question is rather old, but you never know when someone stumbles onto it in need of information...
First and foremost, consider that Capistrano might just not be the right tool for the job you want to do.
That said, it is not impossible to accomplish what you expect. While I would avoid it in projects that deploy a large number of files and modify them (CSS/JS minification, JS builds, etc.), in your case you can consider running a "deployment repository" and configuring it in Capistrano as the source. Your process would look like this:
run the local build with whatever tools you need
upload the resulting binary to a deployment repository
run Capistrano, which will connect to the application servers, fetch the fresh binary from the repository, perform any server-side tasks required, and symlink to "current"
As a side effect, you end up with a full history of deployed binaries.
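A rough sketch of the first two steps, using the build machine and gateway from the question; the user, host, path, and binary names are assumptions, and the Capistrano side would then simply treat the deployment repository as its :repo_url:
# Build on the build machine, reached through the gateway.
ssh -J gateway builduser@buildmachine 'cd ~/repo && git pull && make'
# Copy the resulting binary into a local checkout of the deployment repository,
# commit, tag, and push; Capistrano on the app servers deploys from this repo.
scp -o ProxyJump=gateway builduser@buildmachine:repo/build/myapp deploy-repo/myapp
cd deploy-repo
STAMP=$(date +%Y%m%d%H%M%S)
git add myapp
git commit -m "Build $STAMP"
git tag "build-$STAMP"
git push origin main --tags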

How does capistrano "run_locally" work with branches?

I have a capistrano task that uses "run_locally" to compass compile/compress my css files and then upload them to the server.
Is it going to be smart and run that on the git branch that's getting deployed, or will it just run on the branch that I currently have in my working copy?
I'd want it to run on the branch that's getting deployed regardless of what I have checked locally. If it's not smart about this would I instead need to run_locally a git checkout on the branch that's getting deployed before running the compile command?
It runs on your current local code, so it matters what code is checked out there. As you mentioned, you can try to ensure that you run the version you are going to deploy.
Better would be to do the compilation work on the server.
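If you do want to keep the compilation local, one hedged workaround (the paths and branch name are illustrative, not Capistrano specifics) is to compile from a throwaway clone of the revision being deployed rather than from your working copy:
# Compile assets from the revision being deployed, not from the working copy.
DEPLOY_BRANCH=production            # hypothetical; match whatever cap deploys
BUILD_DIR=$(mktemp -d)
git clone --branch "$DEPLOY_BRANCH" . "$BUILD_DIR"
(cd "$BUILD_DIR" && compass compile)
rsync -a "$BUILD_DIR/css/" css-compiled/   # then upload the result as before
rm -rf "$BUILD_DIR"
The run_locally task would call something like this instead of compiling whatever happens to be checked out.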

How to deploy heroku app with secret yaml configuration file without committing the file?

In other Rails projects, I'd have a local database.yml and commit only a database.sample file to the source code repository. When deploying, a Capistrano script would symlink a shared version of database.yml into all the releases.
When deploying to heroku, git is used and they seem to override database.yml altogether and do something internal.
That's all fine and good for database.yml, but what if I have S3 configuration in config/s3.yml? I'm putting my project on GitHub, so I don't want to commit the s3.yml where everyone can see my credentials. I'd rather commit a sample s3.sample, which people can override with their own settings, and keep a local s3.yml file uncommitted in my working directory.
What is the best way to handle this?
Heroku has some guidance on this:
http://devcenter.heroku.com/articles/config-vars
An alternative solution is to create a new local branch where you modify .gitignore so that the secret file can be pushed to Heroku.
DON'T push this branch to your GitHub repo.
To push non-master branch to heroku, use:
git push heroku secret-branch:master
More info can be found on:
https://devcenter.heroku.com/articles/multiple-environments#advanced-linking-local-branches-to-remote-apps
Use heroku run bash and then ls to check whether your secret file has been pushed to Heroku or not.
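In terms of commands, the branch-based approach looks roughly like this (the file name config/s3.yml is taken from the question; the branch name is an example):
# Create a local-only branch whose .gitignore no longer excludes the secret file.
git checkout -b heroku-secrets
# edit .gitignore and remove the line ignoring config/s3.yml
git add .gitignore config/s3.yml
git commit -m "Add secret config for Heroku only"
# Push this branch as Heroku's master, but never to GitHub.
git push heroku heroku-secrets:master
# Verify the file made it into the slug, as suggested above:
heroku run bash
# then, inside the one-off dyno:
ls config/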
Store the S3 credentials in environment variables.
$ cd myapp
$ heroku config:add S3_KEY=8N029N81 S3_SECRET=9s83109d3+583493190
Adding config vars:
S3_KEY => 8N029N81
S3_SECRET => 9s83109d3+583493190
Restarting app...done.
In your app:
AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV['S3_KEY'],
  :secret_access_key => ENV['S3_SECRET']
)
See the Heroku Config Vars documentation, which explains the development setup, etc.
If using Rails 4.1 beta, try the heroku_secrets gem, from https://github.com/alexpeattie/heroku_secrets:
gem 'heroku_secrets', github: 'alexpeattie/heroku_secrets'
This lets you store secret keys in Rails 4.1's config/secrets.yml (which is not checked into source control) and then just run
rake heroku:secrets RAILS_ENV=production
to make its contents available to Heroku (it parses your secrets.yml file and pushes everything in it to Heroku as environment variables, per the Heroku best-practice docs).
You can also check out the Figaro gem.
I solved this by building the credentials file from environment variables at build time and writing it to where I need it to be before the slug is created.
Some use-case-specific info that you can probably translate to your situation:
I'm deploying a Node project, and in package.json's postinstall script I call "bash create-secret.sh". Since postinstall runs before the slug is created, the file is added to the slug.
I had to use a bash script because I had trouble getting strings that contained newlines to print correctly, and I wasn't able to get it done with Node. Probably just me not being skilled enough, but maybe you'll run into a similar problem.
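A minimal sketch of what such a script could look like; the config var and output file names are hypothetical, and the real contents depend on what the credentials file must hold:
#!/usr/bin/env bash
# create-secret.sh - write a credentials file from a Heroku config var
# before the slug is built. SECRET_FILE_CONTENTS is a hypothetical config var
# holding the full (possibly multi-line) contents of the file.
set -euo pipefail
mkdir -p config
# printf preserves the embedded newlines that were awkward to emit from Node.
printf '%s\n' "$SECRET_FILE_CONTENTS" > config/credentials.yml
Wired up with "postinstall": "bash create-secret.sh" in package.json, as described above, the file ends up inside the slug.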
Looking into this with "Heroku + Build & Deploy-time Secrets", it seems like it's not something Heroku supports. This means that for a Rails app there is no way other than committing BUNDLE_GITHUB__COM, for example, to pull gems from a private repo.
I'll try to see if there is a way to have CI bundle the private dependencies before shipping them to Heroku.