How can Cypress be made to work with Aurelia with GitHub Actions and locally?

OK, so I added Cypress to Aurelia during my configuration and it worked fine. When I went to set up Cypress on GitHub as just a command, I could not get it to recognize Puppeteer as a browser. So instead I went with the official GitHub Action for Cypress, and that works:
- name: test
  uses: cypress-io/github-action@v1
  with:
    start: yarn start
    browser: ${{matrix.browser}}
    record: true
  env:
    # pass the Dashboard record key as an environment variable
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
However, I had to set my cypress.json as follows:
{
  "baseUrl": "http://localhost:8080",
  "fixturesFolder": "test/e2e/fixtures",
  "integrationFolder": "test/e2e/integration",
  "pluginsFile": "test/e2e/plugins/index.js",
  "screenshotsFolder": "test/e2e/screenshots",
  "supportFile": "test/e2e/support/index.js",
  "videosFolder": "test/e2e/videos",
  "projectId": "..."
}
and now running yarn e2e doesn't work, because no server gets stood up; it's no longer doing that itself via cypress.config.js:
const CLIOptions = require('aurelia-cli').CLIOptions;
const aureliaConfig = require('./aurelia_project/aurelia.json');

const PORT = CLIOptions.getFlagValue('port') || aureliaConfig.platform.port;
const HOST = CLIOptions.getFlagValue('host') || aureliaConfig.platform.host;

module.exports = {
  config: {
    baseUrl: `http://${HOST}:${PORT}`,
    fixturesFolder: 'test/e2e/fixtures',
    integrationFolder: 'test/e2e/integration',
    pluginsFile: 'test/e2e/plugins/index.js',
    screenshotsFolder: 'test/e2e/screenshots',
    supportFile: 'test/e2e/support/index.js',
    videosFolder: 'test/e2e/videos'
  }
};
How can I make it so that yarn e2e works as it previously did, and have it working on GitHub? (I don't care which side of the equation is changed.)
Here's yarn e2e; I'm not sure what au is doing under the hood:
"e2e": "au cypress",

The easiest way to achieve this is to create a test/e2e/cypress-config.json:
{
  "baseUrl": "http://localhost:8080",
  "fixturesFolder": "test/e2e/fixtures",
  "integrationFolder": "test/e2e/integration",
  "pluginsFile": "test/e2e/plugins/index.js",
  "screenshotsFolder": "test/e2e/screenshots",
  "supportFile": "test/e2e/support/index.js",
  "videosFolder": "test/e2e/videos",
  "projectId": "1234"
}
and then set up the GitHub Action like this:
- name: test
  uses: cypress-io/github-action@v1
  with:
    config-file: test/e2e/cypress-config.json
    start: yarn start
    browser: ${{matrix.browser}}
    record: true
  env:
    # pass the Dashboard record key as an environment variable
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
The path doesn't matter, as long as you configure the same one in both places. Just make sure it doesn't overlap with what Aurelia wants.
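For the local side, one option (not from the original answer, so treat it as a sketch) is to keep yarn e2e but run Cypress directly against the same shared config file and stand the dev server up first, e.g. with the start-server-and-test package; the cy:run script name and the use of that package are assumptions:
"scripts": {
  "cy:run": "cypress run --config-file test/e2e/cypress-config.json",
  "e2e": "start-server-and-test start http://localhost:8080 cy:run"
}
That way the GitHub Action and the local run read the same cypress-config.json, and only how the server is started differs.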

Related

Bitbucket pipeline fails with mongo service

I'm trying to set up tests for my backend in Bitbucket Pipelines, but when I set up jest.config with jest-mongodb the tests don't even start and exit with this error.
The tests work perfectly fine locally.
Here's the part of my pipeline configuration that doesn't work:
image: node:18.12.0

definitions:
  services:
    mongo:
      image: mongo
  caches:
    nodeall: ./node_modules
    yarn: /usr/local/share/.cache/yarn
  steps:
    - step: &Quality-Check
        name: Code Quality Checks 🎀
        script:
          - echo Fixing code quality and format 🔎
          - yarn install
          - yarn run lint:fix
          - yarn format:fix
    - step: &Testing
        name: Testing 🧪
        caches:
          - nodeall
        script:
          - yarn install
          # - yarn run test
          - echo Checking test coverage and generating report 📜
          - yarn run test:coverage
        artifacts:
          - coverage/**
        services:
          - mongo

pipelines:
  branches:
    main:
      - step:
          name: Install dependencies
          caches:
            - nodeall
          script:
            - yarn install
      - step: *Quality-Check
      - step: *Testing
When I search for this error I'm pointed to mongodb-memory-server, but I don't use that package in the code, and I couldn't find anything else.
I've tried changing anchors, calling the mongo service earlier, and changing the mongo Docker image, but no success.
I'm expecting the tests and the pipeline to pass.
EDIT
I tried 3 different Jest configs and realised that the one that was on the project actually uses memory-server.
Here are the 3 configs I tried:
const { defaults: tsjPreset } = require('ts-jest/presets')

// Custom config with files
// module.exports = {
//   preset: 'ts-jest',
//   globalSetup: './mongo-memory-server/globalSetup.ts',
//   globalTeardown: './mongo-memory-server/globalTeardown.ts',
//   setupFilesAfterEnv: ['./mongo-memory-server/setupFile.ts'],
// }

// Config for mongo-memory-db
module.exports = {
  preset: '@shelf/jest-mongodb',
  transform: tsjPreset.transform,
}

// Basic config
// module.exports = {
//   preset: 'ts-jest',
//   testEnvironment: 'node',
//   setupFiles: ['dotenv/config'],
// }
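For what it's worth, the @shelf/jest-mongodb preset spins up an in-memory MongoDB (mongodb-memory-server) rather than talking to the pipeline's mongo service, which may be why the service container isn't helping here. A hedged sketch, if you instead wanted the tests to hit the Bitbucket service with the basic ts-jest config (service containers are reachable on localhost; the MONGO_URI variable name and the testdb database are assumptions, not from the original post):
- step: &Testing
    name: Testing 🧪
    services:
      - mongo
    script:
      - yarn install
      # MONGO_URI is a hypothetical variable the app/test config would read
      - MONGO_URI=mongodb://localhost:27017/testdb yarn run test:coverage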

GitHub actions to trigger build on new Pull Requests

I have the following workflow to trigger CMake builds on my GitHub project:
name: C/C++ CI

on:
  push:
    branches: [ master, develop ]
  pull_request:
    types: [ opened, edited, reopened, review_requested ]
    branches: [ master, develop ]

jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - name: Install deps
        run: sudo apt-get update; sudo apt-get install python3-distutils libfastjson-dev libcurl4-gnutls-dev libssl-dev -y
      - uses: actions/checkout@v2
      - name: Run CMake
        run: mkdir build; cd build; cmake .. -DCMAKE_INSTALL_PREFIX=/home/runner/work/access/build/ext_install;
      - name: Run make
        run: cd build; make -j8
I expected it to trigger builds on new pull requests and to use the build status as a condition for approving the merge.
However, I'm finding it a bit challenging to achieve that. I'm sort of a newbie when it comes to GitHub Actions.
I'm able to accomplish your scenario with a combination of GitHub Actions and GitHub protected branch settings.
You've got your GitHub Actions workflow set up correctly to run on a pull request with a destination branch of master or develop.
Now you have to configure your repo to prevent merging a PR if the CI fails:
On your GitHub repo, go to Settings => Branches => Add a rule => set the branch name pattern to master => enable 'Require status checks to pass before merging' => under 'Status checks found in the last week for this repository', pick the CI build you want to enforce.
As of writing this response there is no way to do that using only built-in GitHub Actions features, but you can do it by writing an action, using JavaScript or another of the languages supported by GitHub Actions.
import * as core from '@actions/core'
import * as github from '@actions/github'
import {getRequiredEnvironmentVariable} from "./utils";

type GitHubStatus = { context: string, description?: string, state: "error" | "failure" | "pending" | "success", target_url?: string }

function commitStatusFromConclusion(conclusion: CheckConclusion): GitHubStatus {
  let status: GitHubStatus = {
    context: "branch-guard",
    description: "Checks are running...",
    state: "pending",
  };
  if (conclusion.allCompleted) {
    if (conclusion.failedCheck) {
      status.state = "failure";
      status.description = `${conclusion.failedCheck.appName} ${conclusion.failedCheck.conclusion}`;
      status.target_url = conclusion.failedCheck.url
    } else {
      status.state = "success";
      status.description = "All checks are passing!";
    }
  }
  return status;
}

export async function setStatus(repositoryOwner: string, repositoryName: string, sha: string, status: GitHubStatus): Promise<number> {
  let api = new github.GitHub(getRequiredEnvironmentVariable('GITHUB_TOKEN'));
  let params = {
    owner: repositoryOwner,
    repo: repositoryName,
    sha: sha,
  };
  let response = await api.repos.createStatus({...params, ...status});
  return response.status
}
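The getRequiredEnvironmentVariable helper is imported from ./utils but isn't shown; a minimal sketch of what it presumably does (an assumption, not the original code) is:
// Hypothetical ./utils helper: read an environment variable and fail fast if it is missing.
export function getRequiredEnvironmentVariable(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Required environment variable ${name} is not set`);
  }
  return value;
}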
and after you create the action you only have to call it as a step inside your workflow:
on:
  pull_request: # to update newly opened PRs or when a PR is synced
  check_suite: # to update all PRs upon a Check Suite completion
    types: ['completed']

name: Branch Guard

jobs:
  branch-guard:
    name: Branch Guard
    if: github.event.check_suite.head_branch == 'master' || github.event.pull_request.base.ref == 'master'
    runs-on: ubuntu-latest
    steps:
      - uses: YOUR-REP/YOUR-ACTION@v0.1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
There is more documentation available on creating JavaScript actions if you need it.
I got this example from:
Block PR merges when Checks for target branches are failing
Hope it helps you.

Do I have to make a logout when using build and push command in docker task in azure pipeline

steps:
  - task: Docker@2
    displayName: Build and Push
    inputs:
      command: buildAndPush
      containerRegistry: dockerRegistryServiceConnection1
      repository: contosoRepository
      tags: |
        tag1
A convenience command called buildAndPush allows building and pushing images to a container registry in a single command; see the above snippet.
Question:
Do I need to log out from the container registry by adding the following task?
- task: Docker@2
  displayName: Logout of ACR
  inputs:
    command: logout
    containerRegistry: dockerRegistryServiceConnection1
In my opinion it is not necessary to login or logout.
You may even find an example in the documentation without login or logout:
- stage: Build
  displayName: Build and push stage
  jobs:
    - job: Build
      displayName: Build job
      pool:
        vmImage: $(vmImageName)
      steps:
        - task: Docker@2
          displayName: Build and push an image to container registry
          inputs:
            command: buildAndPush
            repository: $(imageRepository)
            dockerfile: $(dockerfilePath)
            containerRegistry: $(dockerRegistryServiceConnection)
            tags: |
              $(tag)
So you may wonder what login actually does. If you check the source code you will find that it actually sets up DOCKER_CONFIG (the location of your client configuration files):
export function run(connection: ContainerConnection): any {
    var defer = Q.defer<any>();
    connection.setDockerConfigEnvVariable();
    defer.resolve(null);
    return defer.promise;
}
and what logout does ;)
export function run(connection: ContainerConnection): any {
    // logging out is being handled in connection.close() method, called after the command execution.
    var defer = Q.defer<any>();
    defer.resolve(null);
    return <Q.Promise<any>>defer.promise;
}
So how does it work?
// Connect to any specified container registry
let connection = new ContainerConnection();
connection.open(null, registryAuthenticationToken, true, isLogout);

let dockerCommandMap = {
    "buildandpush": "./dockerbuildandpush",
    "build": "./dockerbuild",
    "push": "./dockerpush",
    "login": "./dockerlogin",
    "logout": "./dockerlogout"
}

let telemetry = {
    command: command,
    jobId: tl.getVariable('SYSTEM_JOBID')
};

console.log("##vso[telemetry.publish area=%s;feature=%s]%s",
    "TaskEndpointId",
    "DockerV2",
    JSON.stringify(telemetry));

/* tslint:disable:no-var-requires */
let commandImplementation = require("./dockercommand");
if (command in dockerCommandMap) {
    commandImplementation = require(dockerCommandMap[command]);
}

let resultPaths = "";
commandImplementation.run(connection, (pathToResult) => {
    resultPaths += pathToResult;
})
/* tslint:enable:no-var-requires */
.fin(function cleanup() {
    if (command !== "login") {
        connection.close(true, command);
    }
})
So when you run the build command, the task will:
connect to the container registry
run the command
close the connection (if this is not a login command)
And this is what closing the connection does:
If registry info is present, it removes auth for only that registry. (This can happen for any command - build, push, logout etc.)
Else, it removes all auth data. (This happens only in the case of the logout command; for other commands, logout is not called.)
Answering your question: you can live without the login and logout commands.
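For completeness, if you did want an explicit session anyway, a hedged sketch of the split flow reusing the same Docker@2 inputs shown above (optional, per the reasoning above) might look like this:
# Optional sketch: explicit login/logout around buildAndPush (not required, as explained above)
steps:
  - task: Docker@2
    displayName: Login to registry
    inputs:
      command: login
      containerRegistry: dockerRegistryServiceConnection1
  - task: Docker@2
    displayName: Build and Push
    inputs:
      command: buildAndPush
      containerRegistry: dockerRegistryServiceConnection1
      repository: contosoRepository
      tags: |
        tag1
  - task: Docker@2
    displayName: Logout of ACR
    inputs:
      command: logout
      containerRegistry: dockerRegistryServiceConnection1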

Using a separate testing mongo database on mongo and Heroku

In my project I am using https://www.npmjs.com/package/dotenv-safe in order to declare environment variables needed for configuration. For example:
NODE_ENV=development
JWT_SECRET=xxxxxxx
JWT_EXPIRATION_MINUTES=15
MONGO_URI=mongodb://mongodb:27017/proddb
BASE_URI=http://localhost:3000/
MONGO_URI_TESTS=mongodb://mongodb:27017/testdb
PORT=3000
Then I use those variables in a config file:
module.exports = {
  env: process.env.NODE_ENV,
  port: process.env.PORT,
  jwtSecret: process.env.JWT_SECRET,
  jwtExpirationInterval: process.env.JWT_EXPIRATION_MINUTES,
  mongo: {
    uri: process.env.NODE_ENV === 'test'
      ? process.env.MONGO_URI_TESTS
      : process.env.MONGO_URI,
  },
  logs: process.env.NODE_ENV === 'production' ? 'combined' : 'dev',
};
and in my package.json file, I've got:
"scripts": {
"start": "NODE_ENV=production node ./src/index.js",
"dev": "LOG_LEVEL=debug nodemon --inspect=0.0.0.0 ./src/index.js",
"test": "NODE_ENV=test nyc --reporter=html --reporter=text mocha --timeout 20000 --recursive src/tests"
}
The problem? Everything works fine, but when the tests are run on Heroku (prod), they run against the main database and not against testdb...

How to pass jenkins build environment into pod using kubernetes plugin?

Env: Jenkins 2.73.1 & Kubernetes plugin 1.0
Inside the container, I'd like to get the normal Jenkins build environment variables like BUILD_NUMBER.
podTemplate(label: 'mypod', containers: [
    containerTemplate(name: 'python', image: 'python:2.7.8', ttyEnabled: true)
]) {
    node("mypod") {
        echo sh(returnStdout: true, script: 'env')
        container('python') {
            stage('Checkout') {
                sh "env"
            }
        }
    }
}
So far with the code above, inside the python container, the traditional build variables aren't available.
Any solution to get those variables inside the container?
You can use env.BUILD_NUMBER
i.e.
node {
    echo env.BUILD_NUMBER
}
Also if you want a list of all the env vars that are available you can run
node {
    echo "${env.getEnvironment()}"
}
These are the default Jenkins pipeline env vars, but you can also set env vars for your Kubernetes plugin build pods in the pod template, for example:
envVars: [
    envVar(key: 'GOPATH', value: '/home/jenkins/go')
]),
FWIW here's that code being used https://github.com/fabric8io/fabric8-pipeline-library/blob/3834f0f/vars/goTemplate.groovy#L27
More details here
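Putting the question's pod template together with the two snippets above, a rough sketch (the GOPATH value is just the example from this answer, and the stage contents are illustrative) could look like this:
// Sketch: the question's pod template plus a container-level env var from the answer,
// reading the standard Jenkins build variable inside the container.
podTemplate(label: 'mypod', containers: [
    containerTemplate(name: 'python', image: 'python:2.7.8', ttyEnabled: true,
        envVars: [
            envVar(key: 'GOPATH', value: '/home/jenkins/go')
        ])
]) {
    node('mypod') {
        container('python') {
            stage('Checkout') {
                // BUILD_NUMBER is available from the pipeline context via env
                echo "Build number: ${env.BUILD_NUMBER}"
                sh 'env'
            }
        }
    }
}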