I deploy via Bitbucket Pipelines, which uses Capistrano to release to the server. The problem: Capistrano pulls the current tip of the branch, but I need to deploy the revision the pipeline was started from (so I can redeploy a previous stable version). Since the pipeline already contains the needed version of the files, I would like to disable the pull step, if that is possible.
Capistrano is run via Bundler.
Ruby: 2.5.5
Capistrano 3.8.0
Part of deploy.rb configuration:
set :repo_url, 'git@bitbucket.org:user/repo.git'
set :deploy_via, :remote_cache
set :copy_exclude, [ '.git' ]
set :pty, true
Thanks.
I was tried to replace git command for disable git pull with command:
replace_git_pull() {
  if [ "$1" = "pull" ]; then
    echo "git pull is disabled, exiting"
    return 0
  fi
  "$(which git)" "$@"
}
alias git='replace_git_pull'
Locally this works, but it has no effect on the Capistrano deploy; Capistrano apparently does not run git through an interactive shell, so the alias is never expanded.
I also tried replacing the git:update task, but got an error.
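For reference, one workaround (a sketch, untested in this setup): rather than disabling the pull, pin Capistrano to the exact revision the pipeline was started from. The :branch setting also accepts a commit SHA, and Bitbucket Pipelines exposes the current commit as BITBUCKET_COMMIT:

```ruby
# deploy.rb — sketch: deploy the exact commit the pipeline was triggered for.
# BITBUCKET_COMMIT is set by Bitbucket Pipelines; the 'master' fallback is
# only for manual deploys from a workstation.
set :branch, ENV.fetch('BITBUCKET_COMMIT', 'master')
```

Redeploying a previous stable version then amounts to re-running the old pipeline, which passes the old SHA.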
The deployment tool was eventually switched to Deployer (PHP):
https://github.com/deployphp/deployer
CI Runner Context
GitLab version: 13.12.2 (private server)
GitLab Runner version: 14.9.1
Executor: shell executor (PowerShell)
Operating system: Windows 10
Project in Python (may be unrelated)
(using Poetry for dependency management)
The Problem
I am setting up an automated integration system for a project that has several internal dependencies hosted on the same server as the project being integrated. If I run the CI with a poetry update in the yml file, the job exits with error code 128 when poetry calls git clone on my internal dependency.
To isolate the problem, I tried simply calling a git clone on that same repo. The response is that the runner cannot authenticate itself to the Gitlab server.
What I Have Tried
Reading through the GitLab docs, I found that runners need authorization to pull any private dependencies; for that, GitLab provides deploy keys.
So I followed the instructions to create the deploy key for the dependency and added it to the sub-project's deploy key list. I then ran into the exact same permissions problem.
What am I missing?
(For anyone looking at this case on Windows PowerShell: the user the runner runs as is NT AUTHORITY\SYSTEM, a system-only account that I have not found a way to log in to as a human, so I had to make the CI runner itself perform the SSH key creation steps.)
Example .gitlab-ci.yml file:
#Commands in PowerShell
but_first:
  #The initial stage, always happens first
  stage: .pre
  script:
    # Start ssh agent for deploy keys
    - Start-Service ssh-agent
    # Check if ssh-agent is running
    - Get-Service ssh-agent
    - git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
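In case it helps, a variant of this job that loads a deploy key from a masked CI/CD variable might look like this (a sketch; the SSH_DEPLOY_KEY variable name and the file path are made up, not from my actual config):

```yaml
but_first:
  stage: .pre
  script:
    - Start-Service ssh-agent
    # SSH_DEPLOY_KEY would be a masked CI/CD variable holding the private key
    - $env:SSH_DEPLOY_KEY | Out-File -FilePath deploy_key -Encoding ascii
    - ssh-add deploy_key
    - git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
```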
I solved my problem of pulling internal dependencies by completely bypassing the SSH pull of the source code and by switching from Poetry to Hatch for dependency management (I'll explain why further down).
Hosting the compiled dependencies
For this, I compiled my dependency project's source code into a distribution-ready package (in this context it was a python wheel).
Then used Gitlab's Packages and Registries offering to host my package. Instead of having packages in each source code project, I pushed the packages of all my dependencies to a project I created for this single purpose.
My .gitlab-ci.yaml file looks like this when publishing to that project:
deploy:
  # Could be used to build the code into an installer
  stage: Deploy
  script:
    - echo "deploying"
    - hatch version micro
    # only wheel is built (without target, both wheel and sdist are built)
    - hatch build -t wheel
    - echo "Build done ..."
    - hatch publish --repo http://<private gitlab repo>/api/v4/projects/<project number>/packages/pypi --user gitlab-ci-token --auth $CI_JOB_TOKEN
    - echo "Publishing done!"
Pulling those hosted dependencies (& why I ditched poetry)
My first problem was having pip find the extra pypi repository with all my packages. But pip already has a solution for that!
In its pip.ini file (to find where it lives, you can run pip config -v list), two entries need to be added:
[global]
extra-index-url = http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple
[install]
trusted-host = <private gitlab repo>
This is functionally the same as passing the --extra-index-url and --trusted-host flags when calling pip install.
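For a one-off install (e.g. inside a CI job) the same two settings can be given on the command line, using the same placeholders as above:

```shell
pip install dependency-project \
    --extra-index-url http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple \
    --trusted-host <private gitlab repo>
```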
Since I was using a dependency manager, I was not calling pip directly but through the manager's wrapper around it. And here comes the main reason I decided to change dependency managers: Poetry does not read or recognize pip.ini, so any changes made there are ignored.
With the configuration of the pip.ini file, any dependencies I have in the private package repo will also be searched for the installation of projects. So the line:
- git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
changes to a simple line:
- pip install dependency-project
Or a line in pyproject.toml:
dependencies = [
"dependency-project",
"second_project",
]
I've been trying to get this pipeline to trigger with all sorts of different formats and this appears to be the correct one, but I must be missing something because it's still not working.
I'm just doing yarn version --prerelease to create the git tag and keep parity with the app's package.json version.
# This is a sample build configuration for JavaScript.
# Check our guides at https://confluence.atlassian.com/x/14UWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: node:10.15.3
pipelines:
  tags:
    'v*-*':
      - step:
          script:
            - echo "I FEEL LIKE I'M TAKING CRAZY PILLS"
I had assumed that my tags were being pushed to Bitbucket with everything else when I did git push (it works that way in GitHub Actions). I had to explicitly push tags with git push --tags, and then it started working.
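In other words, the flow that finally triggered the pipeline looked roughly like this (a sketch; the tag name is illustrative):

```shell
yarn version --prerelease   # bumps package.json and creates a tag like v1.2.3-1
git push                    # pushes the commit only - tags are NOT included
git push --tags             # pushes the tag, which matches the 'v*-*' pattern
```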
I have generated documentation for my project with cargo doc, and it is made in the target/doc directory. I want to allow users to view this documentation without a local copy, but I cannot figure out how to push this documentation to the gh-pages branch of the repository. Travis CI would help me automatically do this, but I cannot get it to work either. I followed this guide, and set up a .travis.yml file and a deploy.sh script. According to the build logs, everything goes fine but the gh-pages branch never gets updated. My operating system is Windows 7.
It is better to use travis-cargo, which is intended to simplify deploying docs and which also has other features. Its readme provides an example of .travis.yml file, although in the simplest form it could look like this:
language: rust
sudo: false
rust:
  - nightly
  - beta
  - stable
before_script:
  - pip install 'travis-cargo<0.2' --user && export PATH=$HOME/.local/bin:$PATH
script:
  - |
    travis-cargo build &&
    travis-cargo test &&
    travis-cargo --only beta doc
after_success:
  - travis-cargo --only beta doc-upload
# needed to forbid travis-cargo to pass `--feature nightly` when building with nightly compiler
env:
  global:
    - TRAVIS_CARGO_NIGHTLY_FEATURE=""
It is very self-descriptive, so it is obvious, for example, what to do if you want to use another Rust release train for building docs.
In order for the above .travis.yml to work, you need to set your GH_TOKEN somehow. There are basically two ways to do it: inside .travis.yml via an encrypted string, or by configuring it in Travis itself, in the project options. I prefer the latter, so I don't need to install the travis command line tool or pollute my .travis.yml (which is why the config above contains no secure option), but you may choose otherwise.
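If you do prefer the encrypted-string route, the travis command line tool can add the token to .travis.yml for you (a sketch; substitute your real GitHub token):

```shell
gem install travis
travis encrypt GH_TOKEN=<your github token> --add env.global
```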
I'm trying to figure out how to create a task (custom-command, not ant/rake, etc) to perform a deployment of a git-repo to some server/target (in this case Heroku). If I were to do this manually, it's just git push heroku master.
I've created a basic pipeline/stage/job/task (custom-command, in this case a Python script), and single agent. The pipeline has a material (git repo, with a name). Inside the script, I'm printing out os.environ.items() - it has several variables, including the SHA of the latest commit - but no URL for the actual repository.
So how is the agent (or task) supposed to know what repository to deploy?
The pipeline knows the material name, and I've tried passing in an Environment Variable such as ${materialName} (which didn't work). I could hard-code a URL in the task, but that's not a real solution.
ThoughtWorks Go's documentation is shiny, but a bit sparse on details. I'd have thought something this basic would be well documented, but if so, I haven't found it so far.
When a task runs on an agent, it clones the repository specified in the material (config). The .git/config wouldn't have remote Heroku url and as such git push heroku master wouldn't work.
You would need to add Heroku remote url before you can perform a deployment of your git-repo to Heroku.
git remote add heroku git@heroku.com:project.git
where project is the name of your Heroku project. This needs to be done only once, unless you clean the working directory every time (the Stage Settings option that removes all files/directories in the working directory on the agent; you can see this option in the UI under Admin -> Pipelines -> Stages -> Stage Settings tab), in which case you may have to add the remote URL via a task before you run the deploy task.
Once you've done so, you should be able to use the heroku xxxx commands (assuming you have the Heroku Toolbelt installed on the agent machine you are using for deploying), and should be able to push to Heroku as usual via git push heroku master.
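A task sketch that works whether or not the remote already exists (the heroku.com URL and project name are the placeholders from above):

```shell
# Add the remote if missing, otherwise point it at the right URL, then deploy.
git remote add heroku git@heroku.com:project.git 2>/dev/null \
  || git remote set-url heroku git@heroku.com:project.git
git push heroku master
```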
In other Rails projects, I'd keep a local database.yml and commit only a database.sample file to the source repository. When deploying, a Capistrano script would symlink a shared copy of database.yml into each release.
When deploying to heroku, git is used and they seem to override database.yml altogether and do something internal.
That's all fine and good for database.yml, but what if I have S3 configuration in config/s3.yml? I'm putting my project on GitHub, so I don't want to commit s3.yml where everyone can see my credentials. I'd rather commit a sample s3.sample, which people can override with their own settings, and keep a local s3.yml uncommitted in my working directory.
What is the best way to handle this?
Heroku has some guidance on this:
http://devcenter.heroku.com/articles/config-vars
An alternative solution is to create a new local branch where you modify .gitignore so the secret file can be pushed to Heroku.
DON'T push this branch to your GitHub repo.
To push non-master branch to heroku, use:
git push heroku secret-branch:master
More info can be found on:
https://devcenter.heroku.com/articles/multiple-environments#advanced-linking-local-branches-to-remote-apps
Use heroku run bash and then ls to check whether your secret file has been pushed to Heroku or not.
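The branch setup described above might look like this (a sketch; the config/s3.yml path is taken from the question, the un-ignore rule is one of several ways to edit .gitignore):

```shell
git checkout -b secret-branch
# On this branch only, stop ignoring the credentials file and commit it.
echo '!config/s3.yml' >> .gitignore
git add .gitignore config/s3.yml
git commit -m "Include S3 credentials for Heroku deploys only"
git push heroku secret-branch:master   # never push this branch to GitHub
```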
Store the s3 credentials in environment variables.
$ cd myapp
$ heroku config:add S3_KEY=8N029N81 S3_SECRET=9s83109d3+583493190
Adding config vars:
S3_KEY => 8N029N81
S3_SECRET => 9s83109d3+583493190
Restarting app...done.
In your app:
AWS::S3::Base.establish_connection!(
:access_key_id => ENV['S3_KEY'],
:secret_access_key => ENV['S3_SECRET']
)
See the Heroku Config Vars documentation, which explains development setup etc.
If using Rails 4.1 beta, try the heroku_secrets gem, from https://github.com/alexpeattie/heroku_secrets:
gem 'heroku_secrets', github: 'alexpeattie/heroku_secrets'
This lets you store secret keys in Rails 4.1's config/secrets.yml (which is not checked in to source control) and then just run
rake heroku:secrets RAILS_ENV=production
to make its contents available to heroku (it parses your secrets.yml file and pushes everything in it to heroku as environment variables, per the heroku best practice docs).
You can also check out the Figaro gem.
I solved this by building the credentials file from environment variables at build time, writing it to where I need it before the slug is created.
Some usecase specific info that you can probably translate to your situation:
I'm deploying a Node project, and in package.json's postinstall script I call "bash create-secret.sh". Since postinstall runs before the slug is created, the file is added to the slug.
I had to use a bash script because I had trouble printing strings containing newlines correctly, and I wasn't able to get it done with Node. That's probably just me not being skilled enough, but you may run into a similar problem.
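A minimal sketch of such a script (the SECRET_JSON variable name is made up; on Heroku it would come from a config var): printf writes the value byte-for-byte, so embedded newlines survive, whereas echo can be unreliable across shells:

```shell
# Sketch: write a multi-line secret from an environment variable to a file.
# SECRET_JSON is an assumed name; here it is set inline for demonstration.
SECRET_JSON='{"key": "value",
"nested": "second line"}'
printf '%s\n' "$SECRET_JSON" > secret.json
```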
Looking into this with Heroku and build/deploy-time secrets: it seems it's not something Heroku supports. This means that for a Rails app there is no way other than committing, for example, BUNDLE_GITHUB__COM to fetch gems from a private repo.
I'll try to see if there is a way to have CI bundle the private dependencies before pushing to Heroku.