Rustdoc on gh-pages with Travis - github

I have generated documentation for my project with cargo doc, and it is placed in the target/doc directory. I want to allow users to view this documentation without a local copy, but I cannot figure out how to push this documentation to the gh-pages branch of the repository. Travis CI would help me do this automatically, but I cannot get it to work either. I followed this guide, and set up a .travis.yml file and a deploy.sh script. According to the build logs, everything goes fine, but the gh-pages branch never gets updated. My operating system is Windows 7.

It is better to use travis-cargo, which is intended to simplify deploying docs and which also has other features. Its readme provides an example .travis.yml file, though in its simplest form it could look like this:
language: rust
sudo: false
rust:
  - nightly
  - beta
  - stable
before_script:
  - pip install 'travis-cargo<0.2' --user && export PATH=$HOME/.local/bin:$PATH
script:
  - |
    travis-cargo build &&
    travis-cargo test &&
    travis-cargo --only beta doc
after_success:
  - travis-cargo --only beta doc-upload
# needed to forbid travis-cargo to pass `--feature nightly` when building with the nightly compiler
env:
  global:
    - TRAVIS_CARGO_NIGHTLY_FEATURE=""
It is fairly self-descriptive, so it is obvious, for example, what to do if you want to use another Rust release train for building docs; see the tweak below.
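For instance, to build and upload docs from stable instead of beta, you could change the --only filters (a hedged tweak of the example above, not a separate recipe):

script:
  - |
    travis-cargo build &&
    travis-cargo test &&
    travis-cargo --only stable doc
after_success:
  - travis-cargo --only stable doc-upload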
In order for the above .travis.yml to work, you need to set your GH_TOKEN somehow. There are basically two ways to do it: inside .travis.yml via an encrypted string, or by configuring it in Travis itself, in the project options. I prefer the latter way, so I don't need to install the travis command line tool or pollute my .travis.yml (which is why the above config file does not contain a secure option), but you may choose otherwise.
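For the encrypted-string route, a minimal sketch, assuming you have the travis command line tool installed and run it from the repository root (the blob below is a placeholder):

# running `travis encrypt GH_TOKEN=<your token> --add env.global`
# appends an entry like this to .travis.yml:
env:
  global:
    - secure: "<long base64 blob generated by the travis CLI>"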

Related

GitHub PR doesn't trigger GitLab pipeline

I'm trying to use GitHub to trigger a GitLab pipeline on PRs.
Practically, when a developer creates a PR in GitHub, their code gets tested against a GitLab pipeline.
I'm trying to follow this user guide: https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/github_integration.html
and we have a Silver account, but it won't work: when creating the PR, the GitLab pipeline is not triggered.
Anyone with this kind of experience who can help?
Thanks
Joe
I've found the cause of the issue.
In order for GitHub to trigger GitLab CI/CD, notably on PRs, you need to have a Silver/Premium account AND, very importantly, you must be the root owner.
In any other case, you won't be able to see GitHub in the integration list on GitLab. The GitLab folks had the brilliant idea of hiding it instead of showing it disabled (which would have been a hint that you needed an upgraded license).
This is not explained in the video above.
Firstly, you need to give us the content of your .gitlab-ci.yml file. Your question asks about GitHub, but you're following GitLab documentation, which is a completely different product. Both use git commands to commit and push repos, but GitHub and GitLab are different platforms.
For GitHub pipelines, you create a repository and then go to Actions. GitHub will propose configuring a .github/workflows directory containing a YAML file in which you define your pipeline. Depending on your project, GitHub will propose several Linux machines with the adequate configuration to run your files (a Java project gets Maven machines, Python gets Python machines, React/Angular get machines with npm installed, plus Docker and Kubernetes for deployments), and as far as I know you're limited to 4 private projects (double-check this last point); a minimal workflow is sketched below.
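As an illustration (not from the original answer; the job name and commands are placeholders), a minimal .github/workflows/ci.yml might look like:

name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # fetch the repository contents onto the runner
      - uses: actions/checkout@v2
      - name: Build and test
        run: echo "your build and test commands go here"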
For GitLab you have two options. You can use preconfigured machines as on GitHub, calling them by adding, for example, a tag npm in your .gitlab-ci.yml file to get a machine with npm installed, but you need to pay for this. Or you can configure your own runners by following the GitLab documentation (which is the best option), but you'll need good machines and servers to run npm, mvn, python3, and similar commands.
Finally, to answer your question, here is an example .gitlab-ci.yml file with two simple stages, build and test; the only statement specifies that these pipelines will run on merge requests (I use the preconfigured machines of GitLab as a sample here). More details are in my Python project at https://github.com/mehdimaaref7/Scrapping-Sentiment-Analysis and in the runner docs at https://docs.gitlab.com/runner/
stages:
  - build
  - test

build:
  tags:
    - shell
    - linux
  stage: build
  script:
    - echo "Building"
    - mkdir build
    - touch build/info.txt
  artifacts:
    paths:
      - build/
  only:
    - merge_requests

test:
  tags:
    - shell
    - linux
  stage: test
  script:
    - echo "Testing"
    - test -f "build/info.txt"
  only:
    - merge_requests

How to install an old version of the DirectX API in GitHub Actions

I'm working on an implementation of continuous integration in this project, which requires an old version of the DirectX SDK from June 2010. Is it possible to install this as a part of a GitHub Actions workflow at all? It may build with any version of the SDK as long as it's compatible with Windows 7.
Here's the workflow I've written so far, and here's the general building for Windows guide I'm following...
I have a working setup for a project using DX2010; however, I am not running the installer (which always failed for me during the beta, though maybe it's fixed nowadays) but extracting only the parts required for the build. Looking at the link you provided, this is exactly what the guide recommends :)
First, the DXSDK_DIR variable is set using the ::set-env workflow command. The variable most likely should point to a directory outside the default location, which can be overwritten if the repository is checked out after preparing the DX files.
- name: Config
  run: echo ::set-env name=DXSDK_DIR::$HOME/cache/
  shell: bash
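Note that GitHub has since disabled the ::set-env workflow command for security reasons; on current runners the equivalent step (my adaptation, not part of the original answer) appends to the $GITHUB_ENV file instead:

- name: Config
  # write the variable to $GITHUB_ENV so later steps see DXSDK_DIR
  run: echo "DXSDK_DIR=$HOME/cache/" >> $GITHUB_ENV
  shell: bash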
I didn't want to include the DX files in the repository, so they have to be downloaded while the workflow is running. To avoid doing that over and over again, the cache action is used to keep the files between builds.
- name: Cache
  id: cache
  uses: actions/cache@v1
  with:
    path: ~/cache
    key: cache
And finally, downloading and extracting DX2010. This step will run only if the cache wasn't created previously or the current workflow cannot create/restore caches (as with on: schedule or on: repository_dispatch).
- name: Cache create
  if: steps.cache.outputs.cache-hit != 'true'
  run: |
    curl -L https://download.microsoft.com/download/a/e/7/ae743f1f-632b-4809-87a9-aa1bb3458e31/DXSDK_Jun10.exe -o _DX2010_.exe
    7z x _DX2010_.exe DXSDK/Include -o_DX2010_
    7z x _DX2010_.exe DXSDK/Lib/x86 -o_DX2010_
    mv _DX2010_/DXSDK $HOME/cache
    rm -fR _DX*_ _DX*_.exe
  shell: bash
Aaand that's it, the project is ready for compilation.

Bitbucket Pipeline Tags Trigger Not Working

I've been trying to get this pipeline to trigger with all sorts of different formats and this appears to be the correct one, but I must be missing something because it's still not working.
I'm just doing yarn version --prerelease to git-tag and keep parity with the app's package.json version.
# This is a sample build configuration for JavaScript.
# Check our guides at https://confluence.atlassian.com/x/14UWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: node:10.15.3

pipelines:
  tags:
    'v*-*':
      - step:
          script:
            - echo "I FEEL LIKE I'M TAKING CRAZY PILLS"
I had assumed that my tags were being pushed to Bitbucket with everything else when I did git push (it works that way in GH Actions). I had to explicitly push tags with git push --tags, and then it started working.
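For reference, a hedged sketch of the release commands (illustrative, not from the original post):

# bump the prerelease version; yarn creates a commit and a tag
yarn version --prerelease
# push the commit together with its annotated tags
git push --follow-tags
# or push all tags explicitly
git push --tags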

Gitlab Runner - New folder for each build

I'm using Gitlab CI for my project. When I push on develop branch, it runs tests and update the code on my test environment (a remote server).
But the GitLab runner is always using the same build folder: builds/a3ac64e9/0/myproject/myproject
But I would like it to create a new folder every time:
builds/a3ac64e9/1/myproject/myproject
builds/a3ac64e9/2/myproject/myproject
builds/a3ac64e9/3/myproject/myproject
and so on
Using this, I could just update my website by changing a symbolic link pointing to the last runner directory.
Is there a way to configure GitLab Runner this way?
While it doesn't make sense to use your build directory as your deployment directory, you can set up a custom build directory.
Open config.toml in a text editor (more info on where to find it here).
Set enabled = true under [runners.custom_build_dir] (more info here):
[runners.custom_build_dir]
enabled = true
In your .gitlab-ci.yml file, under variables, set GIT_CLONE_PATH. It must start with $CI_BUILDS_DIR/, e.g. $CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME, which will probably give you what you're looking for, although if you have multiple stages, they will have different job IDs. Alternatively, you could try $CI_BUILDS_DIR/$CI_COMMIT_SHA, which would give you a unique folder for each commit. (More info here)
variables:
  GIT_CLONE_PATH: '$CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME'
Unfortunately, there is currently an issue with using CI_BUILDS_DIR in GIT_CLONE_PATH if you're using Windows and PowerShell, so you may have to use a workaround like this, if all your runners have the same build directory: GIT_CLONE_PATH: 'C:\GitLab-Runner/builds/$CI_JOB_ID/$CI_PROJECT_NAME'
You may want to take a look at the variables available to you (predefined variables) to find the most suitable variables for your path.
You might want to read the following answer: Changing the build intermediate paths for gitlab-runner
I'll repost my answer here:
Conceptually, this approach is not the way to go; the build directory is not a deployment directory, it's a temporary directory to build or to deploy from, even though on a shell executor it happens to stay fixed.
So what you need is to deploy from that directory to the correct deployment directory with a script, as in the .gitlab-ci.yml below.
stages:
  - deploy

variables:
  TARGET_DIR: /home/ab12/public_html/$CI_PROJECT_NAME

deploy:
  stage: deploy
  script:
    - mkdir -pv $TARGET_DIR
    - rsync -r --delete ./ $TARGET_DIR
  tags:
    - myrunner
This will put your project files in /home/ab12/public_html/.
By naming your projects project1 .. projectn, all your projects can use this same .gitlab-ci.yml file.
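If you still want the asker's symlink scheme, a hedged sketch (the release path layout is an assumption) would deploy each job to a fresh directory and atomically repoint a current link:

deploy:
  stage: deploy
  script:
    # each job gets its own release directory (hypothetical layout)
    - RELEASE_DIR=/home/ab12/releases/$CI_JOB_ID
    - mkdir -pv $RELEASE_DIR
    - rsync -r --delete ./ $RELEASE_DIR
    # -n replaces the symlink itself rather than descending into it
    - ln -sfn $RELEASE_DIR /home/ab12/public_html/current
  tags:
    - myrunner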
You cannot achieve this with GitLab CI runner configuration alone, but you can create two runners and assign them exclusively to each branch by using a combination of the only and tags keywords.
Assuming your two branches are named master and develop, and two runners have been tagged with master_runner and develop_runner, your .gitlab-ci.yml can look like this:
master_job:
  <<: *your_job
  only:
    - master
  tags:
    - master_runner

develop_job:
  <<: *your_job
  only:
    - develop
  tags:
    - develop_runner
(<<: *your_job is your actual job, which you can factor out; see the sketch below)
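The snippet assumes the your_job anchor is defined elsewhere in the same file; a minimal sketch of such a definition (the script body is a placeholder) could be:

# hidden job (leading dot) holding the shared definition, exposed as a YAML anchor
.job_template: &your_job
  script:
    - echo "build and test here"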

How can I connect Coveralls and Travis in GitHub?

I currently have TravisCI building on PRs in a public GitHub repo.
The instructions for Coveralls say to put this in a .coveralls.yml file:
service_name: travis-pro
repo_token: <my_token>
That doesn't work for me because the .coveralls.yml file would be public, checked into GitHub. My Travis CI is integrated into my GitHub repo, wired to a branch, and fires on PRs.
So I tried this:
On Travis CI's site I set an environment var:
COVERALLS_REPO_TOKEN to my token's value.
Then modded my .travis.yml to look like this:
language: scala
scala:
  - 2.11.7
notifications:
  email:
    recipients:
      - me@my_email.com
jdk:
  - oraclejdk8
script: "sbt clean coverage test"
after_success: "sbt coverageReport coveralls"
script:
  - sbt clean coverage test coverageReport &&
    sbt coverageAggregate
after_success:
  - sbt coveralls
Now when I create a PR on the branch, this runs OK: no errors, and I see output in Travis' console showing that the coverage test ran and generated files. But when I go to Coveralls I see nothing: "There have been no builds for this repo."
How can I set this up?
EDIT: I also tried creating a .coveralls.yml with just service_name: travis-ci
No dice, sadly.
How can I set this up?
Step 1 - Enable Coveralls
The first thing to do is to enable Coveralls for your repository.
You can do that on their website http://coveralls.io:
go to http://coveralls.io
sign in with your GitHub credentials
click on "Repositories", then "Add Repo"
if the repo isn't listed yet, then "Sync GitHub Repos"
finally, flip the "enable coveralls" switch to "On"
Step 2 - Setup Travis-CI to push the coverage infos to Coveralls
Your .travis.yml file contains multiple entries for the script and after_success sections. So, let's clean that up a bit:
language: scala
scala: 2.11.7
jdk: oraclejdk8
script: "sbt clean coverage test"
after_success: "sbt coveralls"
notifications:
  email:
    recipients:
      - me@my_email.com
Now, when you push, the commands in the script section are executed.
This is where your coverage data is generated.
When the commands finish successfully, the after_success section is executed.
This is where the coverage data is pushed to Coveralls.
The .coveralls.yml config file
The .coveralls.yml file is only needed in specific cases:
public Travis-CI repos do not need this config file, since Coveralls can get the information via their API (via access token exchange)
the repo_token (found on the repo page on Coveralls) is only needed for private repos and should be kept secret. If you publish it, then anyone could submit coverage data for your repo.
It boils down to this: you need the file in only two cases:
to specify a custom location for the files containing the coverage data
or when you are using Travis-Pro and private repositories. Then you have to configure "travis-pro" and add the token:
service_name: travis-pro
repo_token: ...
I thought it might be helpful to explain how to set this up for PHP, given that the question applies essentially to any language that Coveralls supports (and not just Scala).
The process is particularly elusive for PHP because the PHP link on Travis-CI's website points to a password-protected page on Coveralls' site that provides no means by which to login using GitHub, unlike the main Coveralls site.
Equally confusing is that the primary PHP page on Coveralls' site seems to contain overly-complicated instructions that require yet another library called atoum/atoum (which looks to be defunct) and are anything but complete.
What ended up working perfectly for me is https://github.com/php-coveralls/php-coveralls/. The documentation is very thorough, but it boils down to this:
Enable Coveralls for your repository (see Step 1 in the Accepted Answer).
Ensure that xdebug is installed and enabled in PHP within your Travis-CI build environment (it should be by default), which is required for code-coverage support in PHPUnit.
Add phpunit and the php-coveralls libraries to the project with Composer:
composer require phpunit/phpunit php-coveralls/php-coveralls
Update .travis.yml at the root of the project to include the following directives:
script:
  - mkdir -p build/logs
  - vendor/bin/phpunit tests --coverage-clover build/logs/clover.xml
after_success:
  - travis_retry php vendor/bin/php-coveralls
Create .coveralls.yml at the root of the project and populate it with:
service_name: travis-ci
I'm not positive that this step is necessary for public repositories (the Accepted Answer implies that it's not), but the php-coveralls documentation says of this directive (emphasis mine):
service_name: Allows you to specify where Coveralls should look to find additional information about your builds. This can be any string, but using travis-ci or travis-pro will allow Coveralls to fetch branch data, comment on pull requests, and more.
Push the above changes to the remote repository on GitHub and trigger a Travis-CI build (if you don't already have hooks to make it happen automatically).
Slap a Coveralls code-coverage badge in your README (or wherever else you'd like). The required markup may be found on the Coveralls page for the repository in question, in the Badge column.
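As an illustration, the badge markup for a README usually looks like the sketch below (<user> and <repo> are placeholders; copy the exact snippet from the Coveralls badge column rather than trusting this):

[![Coverage Status](https://coveralls.io/repos/github/<user>/<repo>/badge.svg?branch=master)](https://coveralls.io/github/<user>/<repo>?branch=master)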