I have recently built a Selenium test project. Now I want to host it on CircleCI.
Can someone please provide step-by-step details along with the configuration file?
I already have GitHub and CircleCI accounts; I'm new to this integration aspect.
#
# Check https://circleci.com/docs/2.0/language-java/ for more details
#
version: 2
jobs:
  build:
    docker:
      - image: circleci/openjdk:8-jdk-stretch-browsers
    environment:
      # Customize the JVM maximum heap limit
      MAVEN_OPTS: -Xmx3200m
    steps:
      - checkout
      - run: mkdir test-reports
      - run:
          name: Download Selenium
          command: |
            curl -O https://selenium-release.storage.googleapis.com/3.5/selenium-server-standalone-3.5.3.jar
      - run:
          name: Start Selenium
          command: |
            java -jar selenium-server-standalone-3.5.3.jar -log test-reports/selenium.log
          background: true
      - run: mvn dependency:go-offline
      - save_cache:
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}
      # run tests!
      - run: mvn clean integration-test
      - run:
          name: Save test results
          command: |
            mkdir -p ~/testng/results/
            find . -type f -regex "./test-output/testng-results.xml" -exec cp {} ~/testng/results/ \;
          when: always
      - store_test_results:
          path: ~/testng/results/
      - store_artifacts:
          path: ~/testng/results/
After a lot of digging around the internet I was finally able to create my own config file. I searched like crazy but couldn't find anything for TestNG, so I started building from scratch. Hopefully this helps someone in the future.
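One note on the config above: it saves the Maven cache with save_cache but never restores it, so later builds won't actually reuse it. A matching restore_cache step, placed before the mvn dependency:go-offline step, would look like this (a sketch assuming the same cache key):

      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}
            # fall back to the most recently saved cache when no exact match exists
            - v1-dependencies-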
I am trying to integrate SonarQube into my project's CI/CD pipeline on GitLab. I have followed the documentation on GitLab and SonarQube to the best of my understanding to get the job included in my yml file.
I am currently experiencing a "command not found" error.
This is my yml file script:
build_project:
  stage: build
  script:
    - xcodebuild clean -workspace TinggIOS/TinggIOS.xcworkspace -scheme TinggIOS | xcpretty
    - xcodebuild test -workspace TinggIOS/TinggIOS.xcworkspace -scheme TinggIOS -destination 'platform=iOS Simulator,name=iPhone 11 Pro Max,OS=15' | xcpretty -s
  tags:
    - stage
  image: macos-11-xcode-12

sonarqube-check:
  stage: analyze
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar" # Defines the location of the analysis task cache
    GIT_DEPTH: "0" # Tells git to fetch all the branches of the project, required by the analysis task
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - sonar-scanner -Dsonar.qualitygate.wait=true
  allow_failure: true
  only:
    - merge_requests
    - feature/unit-test # or the name of your main branch
    - develop
  tags:
    - stage
It looks like the
- sonar-scanner -Dsonar.qualitygate.wait=true
command is not found. Try running that command on the machine your pipeline runs on (for example, SSH into the runner and invoke it directly). The issue might be that the scanner isn't installed there.
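As a quick sanity check, you could add a version probe before the analysis step so the job fails fast with a clearer message. A minimal sketch, assuming your runners can actually run the sonar-scanner-cli Docker image (the docker tag below is hypothetical; a macOS shell runner behind your stage tag may not honor the image: keyword at all):

sonarqube-check:
  stage: analyze
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  script:
    - sonar-scanner --version # fails fast if the scanner binary is missing
    - sonar-scanner -Dsonar.qualitygate.wait=true
  tags:
    - docker # hypothetical tag pointing at a Docker-capable runner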
I have the following template in my .gitlab-ci.yml file:
x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/
I want to extend the script part to make actual builds. For example:
android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
It works well, but it has a problem: GitLab CI launches both jobs instead of ignoring the template.
Is there a way to run only the "android-stage-build" job, which pulls in the template only when it is needed?
In order to make GitLab CI ignore the first entry, you need to add a dot (.) in front of the definition.
.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/

android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
Besides that, based on what I read here, I don't think you want to use after_script.
I think you want to use before_script in the template, and the script key on the build job itself.
The main difference is that after_script also runs when the script fails, and it does not look like you would want that to happen here.
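A sketch of that restructuring, with the environment setup moved to before_script in the template and the actual Gradle invocation under the script key of the build job:

.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  before_script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/

android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  script:
    - ./gradlew :app:assembleDebug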
Yes. You're simply missing a . :)
See https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html#anchors
It should work if you write it like this:
.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/

android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
I am using CircleCI with a GameCI docker image in order to build a Unity project. The build works, but I am trying to make use of the h-matsuo/github-release orb in order to create a release on GitHub for the build. I have created a new separate job for this, so I needed to share data between the jobs. I am using persist_to_workspace in order to do that, as specified in the documentation, but the solution doesn't seem to work. I get the following error:
Could not ensure that workspace directory /root/project/Zipped exists
For the workspace persist logic, I've added the following lines of code in my config.yml file:
working_directory: /root/project - Inside the executor of the main job
persist_to_workspace - As a last command inside my main job's steps
attach_workspace - As a beginning command inside my second job's steps
Here's my full config.yml file:
version: 2.1
orbs:
  github-release: h-matsuo/github-release@0.1.3
executors:
  unity_exec:
    docker:
      - image: unityci/editor:ubuntu-2019.4.19f1-windows-mono-0.9.0
    environment:
      BUILD_NAME: speedrun-circleci-build
    working_directory: /root/project

.build: &build
  executor: unity_exec
  steps:
    - checkout
    - run: mkdir -p /root/project/Zipped
    - run:
        name: Git submodule recursive
        command: git submodule update --init --recursive
    - run:
        name: Remove editor folder in shared project
        command: rm -rf ./Assets/Shared/Movement/Generic/Attributes/Editor/
    - run:
        name: Converting Unity license
        command: chmod +x ./ci/unity_license.sh && ./ci/unity_license.sh
    - run:
        name: Building game binaries
        command: chmod +x ./ci/build.sh && ./ci/build.sh
    - run:
        name: Zipping build
        command: apt update && apt -y install zip && zip -r "/root/project/Zipped/build.zip" ./Builds/
    - store_artifacts:
        path: /root/project/Zipped/build.zip
    - run:
        name: Show all files
        command: find "$(pwd)"
    - persist_to_workspace:
        root: Zipped
        paths:
          - build.zip

jobs:
  build_windows:
    <<: *build
    environment:
      BUILD_TARGET: StandaloneWindows64
  release:
    description: Build project and publish a new release tagged `v1.1.1`.
    executor: github-release/default
    steps:
      - attach_workspace:
          at: /root/project/Zipped
      - run:
          name: Show all files
          command: sudo find "/root/project"
      - github-release/create:
          tag: v1.1.1
          title: Version v1.1.1
          description: This release is version v1.1.1.
          file-path: ./build.zip

workflows:
  version: 2
  build:
    jobs:
      - build_windows
      - release:
          requires:
            - build_windows
Can somebody help me with this please?
If somebody ever encounters the same issue: try to avoid making use of the /root path. I stored the artifacts somewhere inside /tmp/ instead, and before storing them I created the folder with chmod 777 permissions by passing the -m flag to mkdir.
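For reference, a minimal sketch of that layout (the /tmp/workspace path and step placement are illustrative, not the exact config I used):

# in the build job's steps
- run: mkdir -m 777 -p /tmp/workspace/Zipped # -m sets the permissions up front, avoiding /root ownership issues
- run: zip -r /tmp/workspace/Zipped/build.zip ./Builds/
- persist_to_workspace:
    root: /tmp/workspace
    paths:
      - Zipped/build.zip

# in the release job's steps
- attach_workspace:
    at: /tmp/workspace
# build.zip is then available at /tmp/workspace/Zipped/build.zip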
I wanted to run my Travis build against two Scala versions (2.12 and 2.13), i.e. cross-compilation. Because the logs were huge and Travis has a log limit of 4 MB, I split the build into two jobs, one per Scala version. I am not very good with Travis CI, so I am struggling to run the two jobs with different Scala versions. Here is my travis.yml file:
language: scala
jdk:
  - openjdk11
if: tag IS blank
services:
  - mysql
addons:
  apt:
    sources:
      - mysql-5.7-xenial
    packages:
      - mysql-server
dist: bionic
sudo: required
before_install:
  - echo -e "machine github.com\n login $GITHUB_AUTH_TOKEN" > ~/.netrc
  - mysql -e 'CREATE DATABASE IF NOT EXISTS $ZZ_API_TEST_DB_NAME;'
  - sudo mysql -e "use mysql; update user set authentication_string=PASSWORD('') where user='$ZZ_API_DB_USERNAME'; update user set plugin='mysql_native_password';FLUSH PRIVILEGES;"
  - sudo mysql_upgrade -u $ZZ_API_DB_USERNAME
  - sudo service mysql restart
git:
  depth: false
env:
  global:
    - ZZ_API_DB_HOST="localhost:3306"
    - ZZ_API_TEST_DB_NAME=issue_management_test
    - ZZ_API_DB_USERNAME=root
    - ZZ_API_DB_PASSWORD=""
    - SCALA_2_12="2.12.8"
    - SCALA_2_13="2.13.3"
before_cache:
  - find $HOME/.ivy2 -name "ivydata-*.properties" -delete
  - find $HOME/.sbt -name "*.lock" -delete
cache:
  directories:
    - $HOME/.sbt/boot/scala*
    - $HOME/.sbt/cache
    - $HOME/.sbt/launchers
    - $HOME/.ivy2/cache
    - $HOME/.coursier
stages:
  - version_2.12
  - version_2.13
jobs:
  include:
    - stage: version_2.12
      name: "2.12.8"
      script:
        - if [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then sbt coverage $SCALA_2_12 test ; else sbt $SCALA_2_12 test; fi
      after_success:
        - sbt coverageReport coverageAggregate
      deploy:
        - provider: script
          skip_cleanup: true
          script: sbt publish
          on:
            all_branches: true
            condition: $TRAVIS_BRANCH != master || $TRAVIS_BRANCH != develop
        - provider: script
          skip_cleanup: true
          before_deploy:
            - travis/before_deploy.sh
          script: sbt publish
          on:
            branch: develop
        - provider: script
          skip_cleanup: true
          script: travis/release.sh
          on:
            branch: master
    - stage: version_2.13
      name: "2.13.3"
      script:
        - if [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then sbt coverage $SCALA_2_13 test ; else sbt $SCALA_2_13 test; fi
      after_success:
        - sbt coverageReport coverageAggregate
      deploy:
        - provider: script
          skip_cleanup: true
          script: sbt publish
          on:
            all_branches: true
            condition: $TRAVIS_BRANCH != master || $TRAVIS_BRANCH != develop
        - provider: script
          skip_cleanup: true
          before_deploy:
            - travis/before_deploy.sh
          script: sbt publish
          on:
            branch: develop
        - provider: script
          skip_cleanup: true
          script: travis/release.sh
          on:
            branch: master
I am not very familiar with Travis; somehow it's not picking up
- SCALA_2_12="2.12.8"
- SCALA_2_13="2.13.3"
and this command:
- if [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then sbt coverage $SCALA_2_12 test ; else sbt $SCALA_2_12 test; fi
is failing in the Travis build.
How do I specify two different Scala versions for these two different tasks? Can someone please help with this?
It finally worked. This is the change I made: I replaced $SCALA_2_13 with ++$SCALA_2_13.
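The ++ prefix is sbt's cross-version switch: ++2.13.3 makes sbt run the commands that follow against that Scala version. The fixed script line looks like this (the same change applies to the 2.12 stage):

script:
  - if [ "$TRAVIS_EVENT_TYPE" == "cron" ]; then sbt coverage ++$SCALA_2_13 test; else sbt ++$SCALA_2_13 test; fi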
I know it's not quite simple to do this; I tried to explore many approaches, but either I couldn't understand them properly or they didn't work for me.
I have a Concourse job which runs an Angular build (ng build) and creates the /dist folder. This works well.
jobs:
  - name: cache
    plan:
      - get: source
        trigger: true
      - get: npm-cache
  - name: build
    plan:
      - get: source
        trigger: true
        passed: [cache]
      - get: npm-cache
        passed: [cache]
      - task: run build
        file: source/ci/build.yml
build.yml
---
platform: linux
image_resource:
  type: docker-image
  source: { repository: alexsuch/angular-cli, tag: '7.3' }
inputs:
  - name: source
  - name: npm-cache
    path: /cache
outputs:
  - name: artifact
run:
  path: source/ci/build.sh
build.sh
#!/bin/sh
mv cache/node_modules source
cd source
npm rebuild node-sass # temporary fix
npm run build_prod
cp -R dist ../artifact/
I have declared the artifact output, and that is where I store the dist content.
But when I try to use it in the next job, it doesn't work; it fails with a missing input error.
Here is the next job that is supposed to consume this dist folder:
jobs:
  ...
  ...
  - name: list
    plan:
      - get: npm-cache
        passed: [cache, test, build]
        trigger: true
      - task: list-files
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: busybox }
          inputs:
            - name: artifact
          run:
            path: ls
            args: ['-la', 'artifact/']
Can anyone please help me with this? How can I use the dist folder in the job above?
I'm not quite sure why you would want to have different plan definitions for each task, but here is the simplest way of doing what you want to do:
jobs:
  - name: deploying-my-app
    plan:
      - get: source
        trigger: true
        passed: []
      - get: npm-cache
        passed: []
      - task: run build
        file: source/ci/build.yml
      - task: list-files
        file: source/ci/list-files.yml
build.yml
---
platform: linux
image_resource:
  type: docker-image
  source: { repository: alexsuch/angular-cli, tag: '7.3' }
inputs:
  - name: source
  - name: npm-cache
    path: /cache
outputs:
  - name: artifact
run:
  path: source/ci/build.sh
list-files.yml
---
platform: linux
image_resource:
  type: registry-image
  source: { repository: busybox }
inputs:
  - name: artifact
run:
  path: ls
  args: ['-la', 'artifact/']
build.sh
#!/bin/sh
mv cache/node_modules source
cd source
npm rebuild node-sass # temporary fix
npm run build_prod
cp -R dist ../artifact/
Typically you would pass folders as inputs and outputs between TASKS instead of JOBS (although there are some alternatives).
Concourse is stateless, and that is the idea behind it. But if you want to pass something between jobs, the only way to do it is to use a Concourse resource; depending on the nature of the project, that could be anything from a git repo to an S3 bucket, a Docker image, etc. You can also create your own custom resources.
Using something like the s3 Concourse resource, for example, you can push your artifact to external storage and then fetch it again in the next job's get step as a resource. But that may just add unnecessary complexity, given that what you want to do is pretty straightforward.
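For completeness, a sketch of what that could look like with the s3 resource (bucket, credentials and file pattern are placeholders):

resources:
  - name: dist-artifact
    type: s3
    source:
      bucket: my-artifacts-bucket # placeholder
      regexp: builds/build-(.*).zip # the s3 resource expects a versioned file name
      access_key_id: ((aws_access_key))
      secret_access_key: ((aws_secret_key))

jobs:
  - name: build
    plan:
      - get: source
        trigger: true
      - task: run build
        file: source/ci/build.yml
      - put: dist-artifact # uploads the task output to S3
        params:
          file: artifact/build-*.zip
  - name: list
    plan:
      - get: dist-artifact # fetched again here as a regular input
        trigger: true
        passed: [build]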
In my experience, the visual layout of a job plan in the Concourse dashboard sometimes gives the impression that a job plan should be task-atomic (one job per task), which is not always needed.
Hope that helps.