How to install an old version of the DirectX API in GitHub Actions

I'm working on an implementation of continuous integration in this project, which requires an old version of the DirectX SDK from June 2010. Is it possible to install this as part of a GitHub Actions workflow at all? It may build with any version of the SDK as long as it's compatible with Windows 7.
Here's the workflow I've written so far, and here's the general building for Windows guide I'm following...

I have a working setup for a project using DX2010; however, I am not running the installer (which always failed for me during the beta, maybe it's fixed nowadays) but extracting only the parts required for the build. Looking at the link you provided, this is exactly what the guide recommends :)
First, the DXSDK_DIR variable is set using the ::set-env workflow command. The variable should point to a directory outside the default working directory, because that location can be overwritten if the repository is checked out after the DX files have been prepared.
- name: Config
  run: echo ::set-env name=DXSDK_DIR::$HOME/cache/
  shell: bash
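Note that GitHub has since deprecated the ::set-env workflow command. On current runners the same step would instead append to the $GITHUB_ENV file; this substitution is mine, not part of the original answer:
- name: Config
  # Equivalent of the deprecated ::set-env command on current runners
  run: echo "DXSDK_DIR=$HOME/cache/" >> $GITHUB_ENV
  shell: bash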
I didn't want to include the DX files in the repository, so they have to be downloaded while the workflow is running. To avoid doing that over and over again, the cache action is used to keep the files between builds.
- name: Cache
  id: cache
  uses: actions/cache@v1
  with:
    path: ~/cache
    key: cache
And finally, downloading and extracting DX2010. This step runs only if the cache wasn't created previously or if the current workflow cannot create/restore caches (as with on: schedule or on: repository_dispatch).
- name: Cache create
  if: steps.cache.outputs.cache-hit != 'true'
  run: |
    curl -L https://download.microsoft.com/download/a/e/7/ae743f1f-632b-4809-87a9-aa1bb3458e31/DXSDK_Jun10.exe -o _DX2010_.exe
    7z x _DX2010_.exe DXSDK/Include -o_DX2010_
    7z x _DX2010_.exe DXSDK/Lib/x86 -o_DX2010_
    mv _DX2010_/DXSDK $HOME/cache
    rm -fR _DX*_ _DX*_.exe
  shell: bash
And that's it; the project is ready for compilation.
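If you want to double-check that the cached SDK landed where the build expects it, a small sanity-check step could be added; this is my own sketch, not part of the original setup:
- name: Verify DXSDK
  # Fails the job early if the cache/extract steps went wrong
  run: ls "$DXSDK_DIR/Include" | head
  shell: bash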

Related

Gitlab runner and flutter_dotenv in CI/CD

I want to build a desktop application for linux with flutter that will be used by clients in a controlled environment; I mean, it is not a public app that anyone can download.
I've thought about building the app from the Gitlab runners and storing it in the gitlab packages. Then the clients, managed by me, download the new versions through the gitlab api.
I'm using the flutter_dotenv package to load environment variables in my flutter app. The documentation recommends adding the .env file to the assets and also adding it to the .gitignore.
The truth is that when I build the production flutter app in CI/CD, it gives an error because the .env file does not exist even though it is referenced in the assets. This is the error it shows.
The only thing that occurs to me is to determine, from the runner itself, whether I am building for test or for prod and create the .env accordingly. Obviously there is no other option for testing, but I'm not sure if it's good practice or if it's the gitlab runners' responsibility to create the prod env.
Any ideas?
You don't say what your CI/CD environment is (edit: I see it, GitLab). If it were Azure, you could upload your .env to the secure files storage, and in a step of your CI/CD deployment copy it from secure files into the working directory where your app is being built. That is how I have dealt with a missing local.properties file for Android deployments from Azure.
A sample for Azure, which can maybe be adapted to the GitLab format:
- task: DownloadSecureFile@1
  name: local_properties
  displayName: 'Download local properties'
  inputs:
    secureFile: 'local.properties'
- script: |
    echo Installing $(local_properties.secureFilePath) to the android build folder $(Build.SourcesDirectory)/android ...
    sudo cp $(local_properties.secureFilePath) $(Build.SourcesDirectory)/android
    sudo chown -R $(whoami) /home/vsts/work/1/s/android/local.properties
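On GitLab, the closest analogue I know of is a File-type CI/CD variable: paste the contents of .env into a variable of type File in the project's CI/CD settings, and GitLab exposes the path of a temporary file holding that value under the variable's name. A minimal sketch, assuming a File variable named DOTENV_FILE (the variable name and build command are placeholders):
build:
  stage: build
  script:
    # $DOTENV_FILE expands to the path of a temp file with the variable's contents
    - cp "$DOTENV_FILE" .env
    - flutter build linux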

Gitlab Runner cannot retrieve dependency repo in a Powershell executor

CI Runner Context
GitLab version: 13.12.2 (private server)
GitLab Runner version: 14.9.1
Executor: shell executor (PowerShell)
Operating system: Windows 10
Project in Python (may be unrelated)
(using Poetry for dependency management)
The Problem
I am setting up an automated integration system for a project that has several internal dependencies hosted on the same server as the project being integrated. If I run the CI with a poetry update in the yml file, the job console exits with error code 128 upon calling git clone on my internal dependency.
To isolate the problem, I tried simply calling a git clone on that same repo. The response is that the runner cannot authenticate itself to the Gitlab server.
What I Have Tried
Reading through the Gitlab docs, I found that the runners need authorization to pull any private dependencies. For that, Gitlab has created deploy keys.
So I followed the instructions to create the deploy key for the dependency and added it to the sub-project's deploy key list. I then ran into the exact same permissions problem.
What am I missing?
(For anyone looking at this case on Windows PowerShell: the user the runner runs as is NT AUTHORITY\SYSTEM, a system-only user that I have not found a way to log in as interactively. I had to make the CI runner do the ssh key creation steps itself.)
Example .gitlab-ci.yml file:
# Commands in PowerShell
but_first:
  # The initial stage, always happens first
  stage: .pre
  script:
    # Start ssh agent for deploy keys
    - Start-Service ssh-agent
    # Check if ssh-agent is running
    - Get-Service ssh-agent
    - git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
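For reference, the one-time key creation mentioned above could be scripted in the same job, along these lines (a sketch only; the key type, path, and empty-passphrase quoting are my assumptions):
but_first:
  stage: .pre
  script:
    # Generate a key under the runner's own account (NT AUTHORITY\SYSTEM)
    - ssh-keygen -t ed25519 -f "$env:USERPROFILE\.ssh\id_ed25519" -N '""'
    # Print the public key so it can be registered as a deploy key in GitLab
    - Get-Content "$env:USERPROFILE\.ssh\id_ed25519.pub"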
I solved my problem of pulling internal dependencies by completely bypassing the ssh pull of the source code and by switching from poetry to hatch for dependency management (I'll explain why further down).
Hosting the compiled dependencies
For this, I compiled my dependency project's source code into a distribution-ready package (in this context, a python wheel).
Then used Gitlab's Packages and Registries offering to host my package. Instead of having packages in each source code project, I pushed the packages of all my dependencies to a project I created for this single purpose.
My .gitlab-ci.yaml file looks like this when publishing to that project:
deploy:
  # Could be used to build the code into an installer
  stage: Deploy
  script:
    - echo "deploying"
    - hatch version micro
    # only wheel is built (without a target, both wheel and sdist are built)
    - hatch build -t wheel
    - echo "Build done ..."
    - hatch publish --repo http://<private gitlab repo>/api/v4/projects/<project number>/packages/pypi --user gitlab-ci-token --auth $CI_JOB_TOKEN
    - echo "Publishing done!"
Pulling those hosted dependencies (& why I ditched poetry)
My first problem was having pip find the extra pypi repository with all my packages. But pip already has a solution for that!
In its pip.ini file (to find where it is, you can run pip config -v list), two entries need to be added:
[global]
extra-index-url = http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple

[install]
trusted-host = <private gitlab repo>
This makes it functionally the same as adding the --extra-index-url and --trusted-host flags when calling pip install.
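For comparison, the same install done with explicit flags (reusing the placeholders from the pip.ini above) would be:
pip install dependency-project --extra-index-url http://__token__:<your api token>@<private gitlab repo>/api/v4/projects/<project number>/packages/pypi/simple --trusted-host <private gitlab repo>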
Since I was using a dependency manager, I was not using pip directly but the manager's wrapper around it. And here comes the main reason why I decided to change dependency managers: poetry does not read or recognize pip.ini, so any changes made in that file are ignored by it.
With the pip.ini file configured, any dependencies I host in the private package repo will also be found when installing projects. So the line:
- git clone ssh://git@PRIVATE_REPO/software/dependency-project.git
changes to a simple line:
- pip install dependency-project
Or a line in pyproject.toml:
dependencies = [
    "dependency-project",
    "second_project",
]

How do I cache steps in GitHub actions?

Say I have a GitHub Actions workflow with 2 steps:
1. Download and compile my application's dependencies.
2. Compile and test my application.
My dependencies rarely change and the compiled dependencies can be safely cached until I next change the lock-file that specifies their versions.
Is there a way to save the result of the first step so that in future workflows it can skip over that step?
Most use-cases are covered by existing actions, for example:
actions/setup-node for JS
docker/build-push-action for Docker
Custom caching is supported via the cache action. It works across both jobs and workflows within a repository. See also: GitHub docs and Examples.
Consider the following example:
name: GitHub Actions Workflow with NPM cache
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Cache NPM dependencies
        uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.OS }}-npm-cache-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.OS }}-npm-cache-
      - name: Install NPM dependencies
        run: npm install
How caching works step-by-step:
At the Cache NPM dependencies step, the action will check if there's an existing cache for the current key
If no cache is found, it will check for partial matches using restore-keys. In this case, if package-lock.json changes, it will fall back to a previous cache. It is useful to prefix keys and restore keys with the OS and the name of the cache, so that it doesn't load files for a different type of cache or OS.
If any cache is found, it will load the files to path
The CI continues to the next step and can use the files loaded from the cache. In this case, npm install will use the files in ~/.npm to save downloading them over the network (note that for NPM, caching node_modules directly is not recommended).
At the end of the CI run a post-action is executed to save the updated cache in case the key changes. This is not explicitly defined in the workflow, rather it is built into the cache action to take care of both loading and saving the cache.
You can also build your own reusable caching logic with @actions/cache such as:
1-liner NPM cache
1-liner Yarn cache
Old answer:
Native caching is not currently possible, expected to be implemented by mid-November 2019.
You can use artifacts (1, 2) to move directories between jobs (within 1 workflow) as proposed on the GH Community board. This, however, doesn't work across workflows.
The cache action can only cache the contents of a folder. So if there is such a folder, you may win some time by caching it.
For instance, if you use some imaginary package-installer (like Python's pip or virtualenv, or NodeJS' npm, or anything else that puts its files into a folder), you can win some time by doing it like this:
- uses: actions/cache@v2
  id: cache-packages  # give it a name for checking the cache hit-or-not
  with:
    path: ./packages/  # what we cache: the folder
    key: ${{ runner.os }}-packages-${{ hashFiles('**/packages*.txt') }}
    restore-keys: |
      ${{ runner.os }}-packages-
- run: package-installer packages.txt
  if: steps.cache-packages.outputs.cache-hit != 'true'
So what's important here:
We give this step a name, cache-packages
Later, we use this name for conditional execution: if: steps.cache-packages.outputs.cache-hit != 'true'
Give the cache action a path to the folder you want to cache: ./packages/
Cache key: something that depends on the hash of your input files. That is, if any packages.txt file changes, the cache will be rebuilt.
The second step, package installer, will only be run if there was no cache
For users of virtualenv: if you need to activate some shell environment, you have to do it in every step. Like this:
- run: . ./environment/activate && command
My dependencies rarely change and the compiled dependencies can be safely cached until I next change the lock-file that specifies their versions. Is there a way to save the result of the first step so that in future workflows it can skip over that step?
The first step being:
Download and compile my application's dependencies.
GitHub Actions themselves will not do this for you. The only advice I can give you is that you adhere to Docker best practices in order to ensure that if Actions do make use of docker caching, your image could be re-used instead of rebuilt. See: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache
When building an image, Docker steps through the instructions in your Dockerfile, executing each in the order specified. As each instruction is examined, Docker looks for an existing image in its cache that it can reuse, rather than creating a new (duplicate) image.
This also implies that the underlying system of GitHub Actions can/will leverage the Docker caching.
However, for things like compilation, Docker won't be able to use its cache mechanism, so I suggest you think carefully about whether this is something you desperately need. The alternative is to download the compiled/processed files from an artifact store (Nexus, NPM, MavenCentral) to skip that step. You do have to weigh the benefits against the complexity you are adding to your build.
This is now natively supported using: https://help.github.com/en/actions/automating-your-workflow-with-github-actions/caching-dependencies-to-speed-up-workflows.
This is achieved by using the new cache action: https://github.com/actions/cache
If you are using Docker in your workflows, as @peterevans answered, GitHub now supports caching through the cache action, but it has its limitations.
For that reason, you might find this action useful for bypassing those limitations.
Disclaimer: I created the action to support cache before GitHub did it officially, and I still use it because of its simplicity and flexibility.
I'll summarize the two options:
Caching
Docker
Caching
You can add a command in your workflow to cache directories. When that step is reached, it'll check if the directory that you specified was previously saved. If so, it'll grab it. If not, it won't. Then in further steps you write checks to see if the cached data is present. For example, say you are compiling some dependency that is large and doesn't change much. You could add a cache step at the beginning of your workflow, then a step to build the contents of the directory if they aren't there. The first time that you run it won't find the files but subsequently it will and your workflow will run faster.
Behind the scenes, GitHub is uploading a zip of your directory to github's own AWS storage. They purge anything older than a week or if you hit a 2GB limit.
Some drawbacks of this technique are that it only saves directories. So if you installed into /usr/bin, you'll have to cache that! That would be awkward. You should instead install into $HOME/.local and use the echo ::add-path workflow command to add that to your PATH.
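As a concrete sketch of that last point (using the current runner syntax, since the ::set-env and ::add-path commands have been deprecated in favor of the $GITHUB_ENV and $GITHUB_PATH files):
- name: Add ~/.local/bin to PATH
  # Makes user-local installs visible to all subsequent steps
  run: echo "$HOME/.local/bin" >> $GITHUB_PATH
  shell: bash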
Docker
Docker is a little more complex, and it means you have to have a Docker Hub account and manage two things now. But it's way more powerful. Instead of saving just a directory, you'll save an entire computer! What you'll do is make a Dockerfile that has in it all your dependencies, like apt-get and python pip lines or even long compilations. Then you'll build that docker image and publish it on Docker Hub. Finally, you'll have your tests run on that new docker image, instead of on, e.g., ubuntu-latest. And from then on, instead of installing dependencies, it'll just download the image.
You can automate this further by storing that Dockerfile in the same GitHub repo as the project and then write a job with steps that will download the latest docker image, rebuild if necessary just the changed steps, and then upload to dockerhub. And then a job which "needs" that one and uses the image. That way your workflow will both update the docker image if needed and also use it.
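A minimal sketch of the "run on your own image" part (the image name here is hypothetical):
jobs:
  test:
    runs-on: ubuntu-latest
    # All steps run inside this prebuilt image instead of the bare VM
    container: yourname/project-deps:latest
    steps:
      - uses: actions/checkout@v3
      - run: make test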
The downside is that your deps will be in one file, the Dockerfile, and the tests in the workflow, so it's not all together. Also, if the time to download the image is more than the time to build the dependencies, this is a poor choice.
I think that each one has upsides and downsides. Caching is only good for really simple stuff, like compiling into .local. If you need something more extensive, Docker is the most powerful.

Gitlab Runner - New folder for each build

I'm using Gitlab CI for my project. When I push on develop branch, it runs tests and update the code on my test environment (a remote server).
But the GitLab runner always uses the same build folder: builds/a3ac64e9/0/myproject/myproject
I would like it to create a new folder every time:
builds/a3ac64e9/1/myproject/myproject
builds/a3ac64e9/2/myproject/myproject
builds/a3ac64e9/3/myproject/myproject
and so on
Using this, I could just update my website by changing a symbolic link pointing to the last runner directory.
Is there a way to configure Gitlab Runner this way ?
While it doesn't make sense to use your build directory as your deployment directory, you can set up a custom build directory:
Open config.toml in a text editor (more info on where to find it here).
Set enabled = true under [runners.custom_build_dir] (more info here)
[runners.custom_build_dir]
enabled = true
In your .gitlab-ci.yml file, under variables set GIT_CLONE_PATH. It must start with $CI_BUILDS_DIR/, e.g. $CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME, which will probably give you what you're looking for, although if you have multiple stages, they will have different job IDs. Alternatively, you could try $CI_BUILDS_DIR/$CI_COMMIT_SHA, which would give you a unique folder for each commit. (More info here)
variables:
  GIT_CLONE_PATH: '$CI_BUILDS_DIR/$CI_JOB_ID/$CI_PROJECT_NAME'
Unfortunately there is currently an issue with using $CI_BUILDS_DIR in GIT_CLONE_PATH if you're using Windows and PowerShell, so you may have to do something like this as a work-around, if all your runners have the same build directory: GIT_CLONE_PATH: 'C:\GitLab-Runner/builds/$CI_JOB_ID/$CI_PROJECT_NAME'
You may want to take a look at the variables available to you (predefined variables) to find the most suitable variables for your path.
You might want to read the following answer Changing the build intermediate paths for gitlab-runner
I'll repost my answer here:
Conceptually, this approach is not the way to go; the build directory is not a deployment directory, it's a temporary directory to build in or to deploy from, even though with a shell executor this could be made to work.
So what you need is to deploy from that directory, with a script as in the .gitlab-ci.yml below, to the correct deployment directory.
stages:
  - deploy

variables:
  TARGET_DIR: /home/ab12/public_html/$CI_PROJECT_NAME

deploy:
  stage: deploy
  script:
    - mkdir -pv $TARGET_DIR
    - rsync -r --delete ./ $TARGET_DIR
  tags:
    - myrunner
This will sync your project files into /home/ab12/public_html/$CI_PROJECT_NAME.
Naming your projects project1 .. projectn, all of them could use this same .gitlab-ci.yml file.
You cannot achieve this with GitLab CI runner configuration alone, but you can create two runners and assign each exclusively to one branch by using a combination of the only and tags keywords.
Assuming your two branches are named master and develop, and your two runners have been tagged with master_runner and develop_runner, your .gitlab-ci.yml can look like this:
master_job:
  <<: *your_job
  only:
    - master
  tags:
    - master_runner

develop_job:
  <<: *your_job
  only:
    - develop
  tags:
    - develop_runner
(<<: *your_job merges in your actual job definition, which you can factor out as a YAML anchor.)
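For completeness, the anchor itself could be declared once in a hidden job at the top of the file, e.g. (the script body is a placeholder):
.job_template: &your_job
  script:
    - ./build_and_deploy.sh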

Rustdoc on gh-pages with Travis

I have generated documentation for my project with cargo doc, and it is made in the target/doc directory. I want to allow users to view this documentation without a local copy, but I cannot figure out how to push this documentation to the gh-pages branch of the repository. Travis CI would help me automatically do this, but I cannot get it to work either. I followed this guide, and set up a .travis.yml file and a deploy.sh script. According to the build logs, everything goes fine but the gh-pages branch never gets updated. My operating system is Windows 7.
It is better to use travis-cargo, which is intended to simplify deploying docs and also has other features. Its readme provides an example .travis.yml file, although in the simplest form it could look like this:
language: rust
sudo: false
rust:
  - nightly
  - beta
  - stable
before_script:
  - pip install 'travis-cargo<0.2' --user && export PATH=$HOME/.local/bin:$PATH
script:
  - |
    travis-cargo build &&
    travis-cargo test &&
    travis-cargo --only beta doc
after_success:
  - travis-cargo --only beta doc-upload
# needed to forbid travis-cargo to pass `--feature nightly` when building with the nightly compiler
env:
  global:
    - TRAVIS_CARGO_NIGHTLY_FEATURE=""
It is very self-descriptive, so it is obvious, for example, what to do if you want to use another Rust release train for building docs.
In order for the above .travis.yml to work, you need to set your GH_TOKEN somehow. There are basically two ways to do it: inside .travis.yml via an encrypted string, or by configuring it in Travis itself, in the project options. I prefer the latter way, so I don't need to install the travis command-line tool or pollute my .travis.yml (which is why the above config file does not contain a secure option), but you may choose otherwise.
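If you do choose the encrypted-string route, the Travis command-line tool can generate and append the encrypted value for you (the token value is a placeholder):
travis encrypt GH_TOKEN=<your token> --add env.global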