Compiling with multiple Scala versions

I wanted to run my Travis build against two Scala versions (2.12 and 2.13), i.e. cross-compilation. Because the logs are huge and Travis has a 4 MB log limit, I split the build into two jobs. I am not very experienced with Travis CI, so I am struggling to run two jobs with different Scala versions. Here is my travis.yml file:
language: scala
jdk:
  - openjdk11
if: tag IS blank
services:
  - mysql
addons:
  apt:
    sources:
      - mysql-5.7-xenial
    packages:
      - mysql-server
dist: bionic
sudo: required
before_install:
  - echo -e "machine github.com\n login $GITHUB_AUTH_TOKEN" > ~/.netrc
  - mysql -e "CREATE DATABASE IF NOT EXISTS $ZZ_API_TEST_DB_NAME;"
  - sudo mysql -e "use mysql; update user set authentication_string=PASSWORD('') where user='$ZZ_API_DB_USERNAME'; update user set plugin='mysql_native_password'; FLUSH PRIVILEGES;"
  - sudo mysql_upgrade -u $ZZ_API_DB_USERNAME
  - sudo service mysql restart
git:
  depth: false
env:
  global:
    - ZZ_API_DB_HOST="localhost:3306"
    - ZZ_API_TEST_DB_NAME=issue_management_test
    - ZZ_API_DB_USERNAME=root
    - ZZ_API_DB_PASSWORD=""
    - SCALA_2_12="2.12.8"
    - SCALA_2_13="2.13.3"
before_cache:
  - find $HOME/.ivy2 -name "ivydata-*.properties" -delete
  - find $HOME/.sbt -name "*.lock" -delete
cache:
  directories:
    - $HOME/.sbt/boot/scala*
    - $HOME/.sbt/cache
    - $HOME/.sbt/launchers
    - $HOME/.ivy2/cache
    - $HOME/.coursier
stages:
  - version_2.12
  - version_2.13
jobs:
  include:
    - stage: version_2.12
      name: "2.12.8"
      script:
        - if [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then sbt coverage $SCALA_2_12 test; else sbt $SCALA_2_12 test; fi
      after_success:
        - sbt coverageReport coverageAggregate
      deploy:
        - provider: script
          skip_cleanup: true
          script: sbt publish
          on:
            all_branches: true
            condition: $TRAVIS_BRANCH != master || $TRAVIS_BRANCH != develop
        - provider: script
          skip_cleanup: true
          before_deploy:
            - travis/before_deploy.sh
          script: sbt publish
          on:
            branch: develop
        - provider: script
          skip_cleanup: true
          script: travis/release.sh
          on:
            branch: master
    - stage: version_2.13
      name: "2.13.3"
      script:
        - if [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then sbt coverage $SCALA_2_13 test; else sbt $SCALA_2_13 test; fi
      after_success:
        - sbt coverageReport coverageAggregate
      deploy:
        - provider: script
          skip_cleanup: true
          script: sbt publish
          on:
            all_branches: true
            condition: $TRAVIS_BRANCH != master || $TRAVIS_BRANCH != develop
        - provider: script
          skip_cleanup: true
          before_deploy:
            - travis/before_deploy.sh
          script: sbt publish
          on:
            branch: develop
        - provider: script
          skip_cleanup: true
          script: travis/release.sh
          on:
            branch: master
I am not very familiar with Travis; somehow it is not picking up
- SCALA_2_12="2.12.8"
- SCALA_2_13="2.13.3"
and this command:
- if [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then sbt coverage $SCALA_2_12 test; else sbt $SCALA_2_12 test; fi
is failing in the Travis build.
How do I specify two different Scala versions for these two different tasks? Could someone please help with this?

It finally worked. The change I made:
changed $SCALA_2_13 to ++$SCALA_2_13 (and likewise $SCALA_2_12 to ++$SCALA_2_12)
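For reference, the fixed script line looks like this. In sbt, ++<version> switches the Scala version for the commands that follow it (cross-building from the command line), whereas a bare version string is parsed as a command name, which is presumably why the original command failed. For ++ to work with both versions, the project's build.sbt typically lists them in crossScalaVersions:

script:
  # ++ tells sbt to switch to the given Scala version before running test
  - if [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then sbt coverage ++$SCALA_2_12 test; else sbt ++$SCALA_2_12 test; fi

The 2.13 job uses ++$SCALA_2_13 in the same way.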

Related

Why can't GitHub Actions time out a single job?

I have a workflow in which a run step runs infinitely. I want to stop that run after 5 minutes.
My workflow file:
name: MSBuild
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
env:
  # Path to the solution file relative to the root of the project.
  SOLUTION_FILE_PATH: ./genshincheat.sln
  # Configuration type to build.
  # You can convert this to a build matrix if you need coverage of multiple configuration types.
  # https://docs.github.com/actions/learn-github-actions/managing-complex-workflows#using-a-build-matrix
  BUILD_CONFIGURATION: Release
permissions:
  contents: read
jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v1
        with:
          submodules: recursive
      - name: Add MSBuild to PATH
        uses: microsoft/setup-msbuild@v1.0.2
      - name: Restore NuGet packages
        working-directory: ${{env.GITHUB_WORKSPACE}}
        run: nuget restore ${{env.SOLUTION_FILE_PATH}}
      - name: Build
        working-directory: ${{env.GITHUB_WORKSPACE}}
        # Add additional options to the MSBuild command line here (like platform or verbosity level).
        # See https://learn.microsoft.com/visualstudio/msbuild/msbuild-command-line-reference
        run: msbuild /m /p:Configuration=${{env.BUILD_CONFIGURATION}} ${{env.SOLUTION_FILE_PATH}}
      - uses: montudor/action-zip@v1
        with:
          args: zip -qq -r bin.zip dir
      - uses: actions/checkout@v2
      - run: mkdir -p path/to/artifact
      - run: echo hello > path/to/artifact/world.txt
      - uses: actions/upload-artifact@v3
        with:
          name: bin.zip
          path: ./bin.zip
the "build" runs infinitely any way to stop it after 5 mins so it can carry out next jobs? it runs infinitely becauseafter build it runs the built program so i cant exit that ;-;. any help is appreciated
There are different fields that can help you achieve what you want.
At the job level: jobs.<job_id>.timeout-minutes (defining a job timeout)
At the step level: jobs.<job_id>.steps[*].timeout-minutes (defining a step timeout)
Which would look like this in your case:
At the job level:
build:
  runs-on: windows-latest
  timeout-minutes: 5
  steps:
    [...]
At the step which never ends (example):
- name: Build
  timeout-minutes: 5
  working-directory: ${{env.GITHUB_WORKSPACE}}
  # Add additional options to the MSBuild command line here (like platform or verbosity level).
  # See https://learn.microsoft.com/visualstudio/msbuild/msbuild-command-line-reference
  run: msbuild /m /p:Configuration=${{env.BUILD_CONFIGURATION}} ${{env.SOLUTION_FILE_PATH}}
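One caveat beyond the original answer: a step that hits its timeout is marked as failed, and later steps in the job are skipped by default. Since the timeout is expected here, adding continue-on-error lets the rest of the job proceed; a minimal sketch:

- name: Build
  timeout-minutes: 5
  continue-on-error: true # the timeout fails the step; this lets later steps still run
  working-directory: ${{env.GITHUB_WORKSPACE}}
  run: msbuild /m /p:Configuration=${{env.BUILD_CONFIGURATION}} ${{env.SOLUTION_FILE_PATH}}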
Another reference is available on the GitHub Community forum.

Process 'command 'git'' finished with non-zero exit value 128

I need some help. I have a Gradle project in IntelliJ IDEA, and I'm trying to automate the Gradle build with GitHub Actions. My .yml file for the GitHub action contains:
name: CI - build and test
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 11
        uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'adopt'
      - name: Grant execute permission for gradlew
        working-directory: ./project
        run: chmod +x ./gradlew
      - name: Build
        working-directory: ./project
        run: ./gradlew build
      - name: Test
        working-directory: ./project
        run: ./gradlew test
      - name: Update Website
        working-directory: ./project
        run: ./gradlew deployReports
The error comes from the final step (name: Update Website, run: ./gradlew deployReports).
Here is the deployReports task from my build.gradle file:
task deployReports(dependsOn: 'copyWebsite') {
    group = "Reporting"
    description 'Copies reports to the website repo and pushes to github'
    doLast {
        def pagesDir = "$buildDir/gh-pages"
        exec {
            workingDir = 'build/gh-pages'
            commandLine = ['git', 'add', '.']
        }
        exec {
            workingDir = 'build/gh-pages'
            commandLine = ['git', 'commit', '-m', 'Updating-webpages']
        }
        exec {
            workingDir = 'build/gh-pages'
            commandLine = ['git', 'push']
        }
    }
}
The error comes from this line: commandLine = ['git', 'commit', '-m', 'Updating-webpages'].
I'm unsure how to fix this because git is installed correctly, and I can still commit and push myself from the terminal. Any insight would be great!
The simplest solution is to clone the branch through GitHub (using a terminal); otherwise, set up git in that directory so the commands can run.
This happens because the runner cannot find a git repository (or git configuration) in that directory, so the git commands fail there even though they work on your local machine.
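As a hedged sketch of that idea: before ./gradlew deployReports runs, the runner needs build/gh-pages to be an actual git clone with a committer identity configured; git exits with status 128 when either is missing. Something like the following workflow step could do it (the gh-pages branch name and the clone path are assumptions based on the task above):

- name: Prepare gh-pages clone
  working-directory: ./project
  run: |
    # clone the pages branch into the directory the Gradle task expects
    git clone --branch gh-pages https://x-access-token:${{ secrets.GITHUB_TOKEN }}@github.com/${{ github.repository }}.git build/gh-pages
    # give the runner a committer identity; git commit fails without one
    git -C build/gh-pages config user.name "github-actions"
    git -C build/gh-pages config user.email "github-actions@users.noreply.github.com"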

GitLab CI - flutter test running very slowly

I've been managing a project for the last couple of months using Flutter and TDD on GitLab, with GitLab CI monitoring code quality and tests. With the end of the project in sight, we now have over 700 tests, and our CI has slowed to glacial speeds. At first the whole pipeline took 5-10 min to run; now it can take as long as 58 min, with the only noticeable difference being the number of tests.
After removing --machine from our scripts, I noticed that flutter test --coverage takes 11 times longer than when run locally.
LOCAL
PS C:\Users\lr\Documents\GitHub\hive-manager> flutter test --coverage
04:04 +707: All tests passed!
GITLAB CI
$ flutter test --coverage
45:30 +707: All tests passed!
It is definitely the test phase of the pipeline that is causing the issue; when I look at each job's time, there is a noticeable difference:
code_quality - 00:05:24
test - 00:48:47
coverage - 00:02:06 (82.2%)
semantic-version - 00:01:20
I'm a little lost about what to do now: I still have more tests to write, but the CI is starting to cost quite a bit in minutes, and at the same time I don't want to rebuild the CI somewhere else. Is this just a limit of GitLab CI, or is something going wrong here? I've attached the .yaml below.
stages: # List of stages for jobs, and their order of execution
  - analyze
  - test
  - coverage
  - semantic-version
default:
  image: cirrusci/flutter:latest
cache:
  paths:
    - /flutter/bin/cache/dart-sdk
code_quality:
  stage: analyze
  before_script:
    - pub global activate dart_code_metrics
    - export PATH="$PATH":"$HOME/.pub-cache/bin"
  script:
    - flutter --version
    - flutter analyze
    - metrics lib -r codeclimate > gl-code-quality-report.json
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "push"
    - if: $CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "merge_request_event"
  allow_failure: true
  artifacts:
    reports:
      codequality: gl-code-quality-report.json
test: # This job runs in the test stage.
  stage: test # It only starts when the job in the build stage completes successfully.
  script:
    - flutter test --coverage
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "push"
    - if: $CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "merge_request_event"
coverage: # This job runs in the test stage.
  stage: coverage # It only starts when the job in the build stage completes successfully.
  script:
    - lcov --summary coverage/lcov.info
    - lcov --remove coverage/lcov.info
      "lib/config/*"
      "lib/application/l10n/l10n.dart"
      "lib/application/l10n/**/*"
      "lib/domain/repositories/*"
      "lib/injection.config.dart"
      "lib/presentation/routes/*"
      "lib/infrastructure/repositories/firebase_injectable_module.dart"
      "**/mock_*.dart"
      "**/*.g.dart"
      "**/*.gr.dart"
      "**/*.freezed.dart"
      "**/*.mocks.dart"
      "**/*.config.dart"
      -o coverage/clean_lcov.info
    - genhtml coverage/clean_lcov.info --output=coverage
    - curl -Os https://uploader.codecov.io/latest/linux/codecov
    - chmod +x codecov
    - ./codecov -t $CODECOV_TOKEN
    - mv coverage/ public/
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "push"
    - if: $CI_COMMIT_BRANCH == "develop" && $CI_PIPELINE_SOURCE == "merge_request_event"
  coverage: '/lines\.*: \d+\.\d+\%/'
  artifacts:
    paths:
      - public
semantic-version:
  image: node:16
  stage: semantic-version
  only:
    refs:
      - main
      - develop
  script:
    - touch CHANGELOG.md
    - npm install @semantic-release/gitlab @semantic-release/changelog
    - npx semantic-release
  artifacts:
    paths:
      - CHANGELOG.md
ADDITIONAL INFO
Looking into the report.xml from GitLab, I can also see that most tests take ~2 sec to complete, and the tests total only 135 sec, while the test job takes ~44 min to complete.
I've also tried removing --coverage to reduce the time, which resulted in 8 min 30 sec, but that is still far more than the 1 min 30 sec it takes locally.

Pass build directory (/dist) from one job to the next job in Concourse

I know this is not quite simple to do; I have tried many approaches, but either I couldn't understand them properly or they didn't work for me.
I have a Concourse job that runs an Angular build (ng build) and creates the /dist folder. This works well.
jobs:
  - name: cache
    plan:
      - get: source
        trigger: true
      - get: npm-cache
  - name: build
    plan:
      - get: source
        trigger: true
        passed: [cache]
      - get: npm-cache
        passed: [cache]
      - task: run build
        file: source/ci/build.yml
build.yml
---
platform: linux
image_resource:
  type: docker-image
  source: { repository: alexsuch/angular-cli, tag: '7.3' }
inputs:
  - name: source
  - name: npm-cache
    path: /cache
outputs:
  - name: artifact
run:
  path: source/ci/build.sh
build.sh
#!/bin/sh
mv cache/node_modules source
cd source
npm rebuild node-saas # temporary fix
npm run build_prod
cp -R dist ../artifact/
I have declared an output named artifact, where I store the dist content.
But when I try to use it in the next job, it doesn't work; it fails with a missing-input error.
Here is the next job that is supposed to consume the dist folder:
jobs:
  ...
  - name: list
    plan:
      - get: npm-cache
        passed: [cache, test, build]
        trigger: true
      - task: list-files
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: busybox }
          inputs:
            - name: artifact
          run:
            path: ls
            args: ['-la', 'artifact/']
Can anyone please help me with this? How can I use the dist folder in the job above?
I'm not quite sure why you would want different plan definitions for each task, but here is the simplest way of doing what you want:
jobs:
  - name: deploying-my-app
    plan:
      - get: source
        trigger: true
        passed: []
      - get: npm-cache
        passed: []
      - task: run build
        file: source/ci/build.yml
      - task: list-files
        file: source/ci/list-files.yml
build.yml
---
platform: linux
image_resource:
  type: docker-image
  source: { repository: alexsuch/angular-cli, tag: '7.3' }
inputs:
  - name: source
  - name: npm-cache
    path: /cache
outputs:
  - name: artifact
run:
  path: source/ci/build.sh
list-files.yml
---
platform: linux
image_resource:
  type: registry-image
  source: { repository: busybox }
inputs:
  - name: artifact
run:
  path: ls
  args: ['-la', 'artifact/']
build.sh
#!/bin/sh
mv cache/node_modules source
cd source
npm rebuild node-saas # temporary fix
npm run build_prod
cp -R dist ../artifact/
Typically you would pass folders as inputs and outputs between TASKS instead of JOBS (although there are some alternatives).
Concourse is stateless, and that is the idea behind it. If you want to pass something between jobs, the only way is to use a Concourse resource; depending on the nature of the project, that could be anything from a git repo to an S3 bucket, a Docker image, etc. You can also create your own custom resources.
Using something like the s3 Concourse resource, for example, you can push your artifact to external storage and then fetch it again with a get step in the next job (see the sketch below). But that may just add unnecessary complexity, given that what you want to do is pretty straightforward.
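A minimal sketch of that resource-based approach using the s3 resource type (the bucket name, credential variables, and tarball naming are illustrative assumptions; the build task would have to archive dist into a file matching the regexp):

resources:
  - name: dist-artifact
    type: s3
    source:
      bucket: my-artifacts-bucket
      regexp: dist/dist-(.*).tar.gz
      access_key_id: ((aws_access_key))
      secret_access_key: ((aws_secret_key))

jobs:
  - name: build
    plan:
      - get: source
        trigger: true
      - task: run build
        file: source/ci/build.yml
      - put: dist-artifact # upload the built folder as a versioned tarball
        params:
          file: artifact/dist-*.tar.gz
  - name: list
    plan:
      - get: dist-artifact # fetch what the previous job pushed
        trigger: true
        passed: [build]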
In my experience, the visual aspect of a job plan in the Concourse dashboard sometimes gives the impression that a job plan should be task-atomic, which is not always needed.
Hope that helps.

Is there a way to configure Travis GitHub Pages Deployment within stages?

Expected
To be able to deploy Pages to GitHub within Travis using stages.
Result
Failure; I must not have the proper syntax, or it is not possible.
Reproduction
Source:
https://docs.travis-ci.com/user/deployment/pages/
https://docs.travis-ci.com/user/build-stages/
Project:
https://travis-ci.org/kopax/deleteme
https://github.com/kopax/deleteme
This works for building the page without stages (see the passing Travis job travis-ci.org/kopax/deleteme/builds/380660202):
dist: trusty
# Blocklist
branches:
  except:
    - gh-pages # will be deployed to, no need to build it
cache:
  directories:
    - node_modules
node_js:
  - "10"
before_install:
  - npm install -g npm
  # const
  - export PACKAGE_NAME=$(node -p "require('./package.json').name")
  - export PACKAGE_VERSION=$(node -p "require('./package.json').version")
  - export NODE_VERSION=$(node --version)
  - export NPM_VERSION=$(npm --version)
  # logging
  - npm --version || echo npm not installed
  - node --version || echo node not installed
  - npx rollup-umd-scripts --version || echo npx not installed
  - echo "package version $PACKAGE_VERSION"
language: node_js
sudo: required
script:
  # execute all of the commands which need to be executed
  # before running actual tests
  - npm run styleguide:build
deploy:
  provider: pages
  skip_cleanup: true
  github_token: $GH_TOKEN # Set in the settings page of your repository, as a secure variable
  keep_history: true
  local_dir: public/
  on:
    branch: master
But it fails when the Pages deployment is added as a stage (see this failed Travis job: travis-ci.org/kopax/deleteme/jobs/380983577):
language: node_js
sudo: required
#env:
#  global:
#    - DISPLAY=:99.0
#    - NODE_ENV=test
dist: trusty
# Blocklist
branches:
  except:
    - gh-pages # will be deployed to, no need to build it
cache:
  directories:
    - node_modules
node_js:
  - "10"
before_install:
  - npm install -g npm
  # const
  - export PACKAGE_NAME=$(node -p "require('./package.json').name")
  - export PACKAGE_VERSION=$(node -p "require('./package.json').version")
  - export NODE_VERSION=$(node --version)
  - export NPM_VERSION=$(npm --version)
  # logging
  - npm --version || echo npm not installed
  - node --version || echo node not installed
  - npx rollup-umd-scripts --version || echo npx not installed
  - echo "package version $PACKAGE_VERSION"
stages:
  - build
  - test
  - release
  - deploy
script:
  # execute all of the commands which need to be executed
  # before running actual tests
  - npm run styleguide:build
jobs:
  include:
    # Job: Build
    - stage: build
      node_js:
        - lts/*
        # - 10
        # - 8
      script:
        - npm run build
      branches:
        only:
          - release
          - dev
          - master
    # Job: Test
    - stage: test
      node_js:
        - lts/*
        # - 10
        # - 8
      script:
        - npm run test
      branches:
        only:
          - release
          - dev
          - master
    # Job: Release
    - stage: release
      node_js:
        - lts/*
      skip_cleanup: true
      script:
        - npx semantic-release
      branches:
        only:
          - master
    # Job: Page
    - stage: deploy
      provider: pages
      skip_cleanup: true
      github_token: $GH_TOKEN # Set in the settings page of your repository, as a secure variable
      keep_history: true
      local_dir: public/
      on:
        branch: master
Does anybody know how I can get Pages deployment working with stages in Travis?
Your deploy stage had no install/script defined, so it ran the default one.
You need to define in the stage what you want to do; you forgot the deploy level.
To have a dedicated stage for deployment only, configure it like this:
- stage: deploy
  if: type = push AND branch = master # or whenever you want to deploy
  script: skip # to not run Travis' default script
  deploy: # <-- that was missing !!!
    - provider: pages
      skip_cleanup: true
      github_token: $GH_TOKEN # Set in the settings page of your repository, as a secure variable
      keep_history: true
      local_dir: public/