Bitbucket pipeline fails with mongo service - mongodb

I'm trying to set up tests for my backend in Bitbucket Pipelines, but when I configure Jest with jest-mongodb the tests don't even start and exit with an error.
The tests work perfectly fine locally.
Here's the part of my pipeline configuration that doesn't work:
image: node:18.12.0
definitions:
  services:
    mongo:
      image: mongo
  caches:
    nodeall: ./node_modules
    yarn: /usr/local/share/.cache/yarn
  steps:
    - step: &Quality-Check
        name: Code Quality Checks 🎀
        script:
          - echo Fixing code quality and format 🔎
          - yarn install
          - yarn run lint:fix
          - yarn format:fix
    - step: &Testing
        name: Testing 🧪
        caches:
          - nodeall
        script:
          - yarn install
          # - yarn run test
          - echo Checking test coverage and generating report 📜
          - yarn run test:coverage
        artifacts:
          - coverage/**
        services:
          - mongo
pipelines:
  branches:
    main:
      - step:
          name: Install dependencies
          caches:
            - nodeall
          script:
            - yarn install
      - step: *Quality-Check
      - step: *Testing
When I search for the error I'm pointed to mongodb-memory-server, but I don't use that package in my code, and I couldn't find anything else.
I've tried changing the anchors, calling the mongo service earlier, and changing the mongo Docker image, but with no success.
I'm expecting the tests and the pipeline to pass.
EDIT
I tried 3 different Jest configs and realised that the one that was in the project actually uses memory-server.
Here are the 3 configs I tried:
const { defaults: tsjPreset } = require('ts-jest/presets')

// Custom config with files
// module.exports = {
//   preset: 'ts-jest',
//   globalSetup: './mongo-memory-server/globalSetup.ts',
//   globalTeardown: './mongo-memory-server/globalTeardown.ts',
//   setupFilesAfterEnv: ['./mongo-memory-server/setupFile.ts'],
// }

// Config for mongo-memory-db
module.exports = {
  preset: '@shelf/jest-mongodb',
  transform: tsjPreset.transform,
}

// Basic config
// module.exports = {
//   preset: 'ts-jest',
//   testEnvironment: 'node',
//   setupFiles: ['dotenv/config'],
// }
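For context on why the preset can die before any test runs: @shelf/jest-mongodb boots mongodb-memory-server, which downloads a MongoDB binary on first use, and that download is a frequent failure point inside CI containers. A hedged sketch of a Testing step that pins the binary version and caches the download directory; the mongoms cache name, the .mongodb-binaries path, and the 6.0.4 version are illustrative assumptions, not values from the pipeline above:

definitions:
  caches:
    mongoms: ./.mongodb-binaries
  steps:
    - step: &Testing
        name: Testing 🧪
        caches:
          - nodeall
          - mongoms
        script:
          # MONGOMS_* are mongodb-memory-server's documented env overrides
          - export MONGOMS_DOWNLOAD_DIR="$BITBUCKET_CLONE_DIR/.mongodb-binaries"
          - export MONGOMS_VERSION=6.0.4
          - yarn install
          - yarn run test:coverage

Note that the mongo service container is not what the preset talks to; jest-mongodb starts its own in-memory server and exposes its URI to the tests as process.env.MONGO_URL.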

Related

mongodb memory server Wrongfully downloading binaries on CircleCi even tho they are cached

I'm using mongodb-memory-server.
Locally everything passes, but on CircleCI it tries to download the binary even though it's already there.
By running node node_modules/mongodb-memory-server/postinstall.js on CircleCI I see this output:
However, further down the line, when I run my tests, it tries to download the binary again:
In addition, the SYSTEM_BINARY env variable is set to the path where the binary is.
Versions
NodeJS: v17.6.0
mongodb-memory-server-*: 8.8.0
mongoose: 6.4.7
system: ubuntu 20.04
Code Example
# .circleci/config.yml
version: 2.1
orbs:
  node: circleci/node@5.0.2
jobs:
  main:
    docker:
      - image: cimg/node:lts-browsers
    steps:
      - checkout
      - node/install-packages:
          pkg-manager: yarn
      - run: echo $SYSTEM_BINARY
      - run: node node_modules/mongodb-memory-server/postinstall.js
      - run: yarn test:api
workflows:
  build:
    jobs:
      - main
// mock-db-test-util.js
import { MongoMemoryServer } from "mongodb-memory-server"
import mockMongoose from "mongoose"

const mockMongoMemoryServer = MongoMemoryServer.create()

// @ts-ignore
jest.mock("../../src/db", () => ({
  // @ts-ignore
  ...(jest.requireActual("../../src/db").default as any),
  // @ts-ignore
  connect: jest.fn().mockImplementation(async () => {
    const mongo = await mockMongoMemoryServer
    const uri = mongo.getUri()
    await mockMongoose.connect(uri)
  }),
}))
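One hedged way to make the cache line up with what mongodb-memory-server checks at runtime is to pin the version and point the download directory at the cached path explicitly, so the postinstall step and the test run resolve the same binary. The cache key, the version, and the path below are illustrative assumptions, not taken from the original config:

version: 2.1
jobs:
  main:
    docker:
      - image: cimg/node:lts-browsers
    environment:
      # pin so the runtime lookup matches what was cached
      MONGOMS_VERSION: "5.0.8"
      MONGOMS_DOWNLOAD_DIR: /home/circleci/.cache/mongodb-binaries
    steps:
      - checkout
      - restore_cache:
          keys:
            - mongodb-binary-v1
      - run: yarn install --frozen-lockfile
      - run: yarn test:api
      - save_cache:
          key: mongodb-binary-v1
          paths:
            - /home/circleci/.cache/mongodb-binaries

Also, if the variable set was literally SYSTEM_BINARY, note that the environment form mongodb-memory-server reads is prefixed, i.e. MONGOMS_SYSTEM_BINARY, which could explain why it appears to be ignored.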

terraform plan recreates resources on every run with terraform cloud backend

I am running into an issue where terraform plan recreates resources on every run that don't need to be recreated. This is a problem because some of the steps depend on those resources being available, and since they are recreated on each run, the script fails to complete.
My setup is GitHub Actions, Linode LKE, and Terraform Cloud.
My main.tf file looks like this:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}

provider "linode" {
}

provider "helm" {
  debug = true
  kubernetes {
    config_path = "${local_file.kubeconfig.filename}"
  }
}

resource "linode_lke_cluster" "lke_cluster" {
  label       = "MY-LABEL-HERE"
  k8s_version = "1.21"
  region      = "us-central"
  pool {
    type  = "g6-standard-2"
    count = 3
  }
}
and my outputs.tf file:
resource "local_file" "kubeconfig" {
  depends_on = [linode_lke_cluster.lke_cluster]
  filename   = "kube-config"
  # filename = "${path.cwd}/kubeconfig"
  content    = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}

resource "helm_release" "ingress-nginx" {
  # depends_on = [local_file.kubeconfig]
  depends_on = [linode_lke_cluster.lke_cluster, local_file.kubeconfig]
  name       = "ingress"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
}

resource "null_resource" "custom" {
  depends_on = [helm_release.ingress-nginx]
  # change trigger to run every time
  triggers = {
    build_number = "${timestamp()}"
  }
  # download kubectl
  provisioner "local-exec" {
    command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
  }
  # apply changes
  provisioner "local-exec" {
    command = "./kubectl apply -f ./k8s/ --kubeconfig ${local_file.kubeconfig.filename}"
  }
}
In GitHub Actions, I'm running these steps:
jobs:
  init-terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./terraform
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: 'privatebeta-kubes'
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
      - name: Terraform Init
        run: terraform init
      - name: Terraform Format Check
        run: terraform fmt -check -v
      - name: List terraform state
        run: terraform state list
      - name: Terraform Plan
        run: terraform plan
        id: plan
        env:
          LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
When I look at the results of terraform state list I can see my resources:
Run terraform state list
terraform state list
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin state list
helm_release.ingress-nginx
linode_lke_cluster.lke_cluster
local_file.kubeconfig
null_resource.custom
But my terraform plan fails and the issue seems to stem from the fact that those resources try to get recreated.
Run terraform plan
terraform plan
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
LINODE_TOKEN: ***
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
Waiting for the plan to start...
Terraform v1.0.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
linode_lke_cluster.lke_cluster: Refreshing state... [id=31946]
local_file.kubeconfig: Refreshing state... [id=fbb5520298c7c824a8069397ef179e1bc971adde]
helm_release.ingress-nginx: Refreshing state... [id=ingress]
╷
│ Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory
│
│   with helm_release.ingress-nginx,
│   on outputs.tf line 8, in resource "helm_release" "ingress-nginx":
│    8: resource "helm_release" "ingress-nginx" {
Is there a way to tell terraform it doesn't need to recreate those resources?
Regarding the actual error shown, Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory, which is referencing your outputs file... I found this issue which could help with your specific error: https://github.com/hashicorp/terraform-provider-helm/issues/418
One other thing looks strange to me: why does your outputs.tf define resources and not outputs? Shouldn't your outputs.tf look like this?
output "local_file_kubeconfig" {
value = "reference.to.resource"
}
Also, your state file / backend config looks like it's properly configured.
I recommend logging into your Terraform Cloud account to verify that the workspace is indeed there, as expected. It's the state file that tells Terraform not to re-create the resources it manages.
If the resources are already there and Terraform is trying to re-create them, that could indicate that those resources were created prior to using Terraform, or possibly within another Terraform Cloud workspace or plan.
Did you end up renaming your backend workspace at any point with this plan? I'm referring to your main.tf file, the part where it says MY-WORKSPACE-HERE:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}
Unfortunately I am not a Kubernetes expert, so possibly more help can be found there.
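If it would help to have the pipeline itself confirm which remote workspace the backend resolved before planning, a small hedged addition to the workflow above (terraform workspace show is a standard CLI command; the step name is an assumption):

      # prints the workspace the remote backend actually selected
      - name: Show backend workspace
        run: terraform workspace show

If the name it prints is not the workspace that holds the state for these resources, that would explain the attempted re-creation.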

Flutter pipeline with Bitbucket

I want to create a pipeline for building a Flutter (Android/iOS) application on Bitbucket easily. Unfortunately, I have not found any article or guide. Is it possible?
Web
image: cirrusci/flutter
pipelines:
  branches:
    dev:
      - step:
          name: Run unit tests
          script:
            - flutter test
      - step:
          name: Build web
          script:
            - flutter build web
      - step:
          name: Deploy to Firebase
          deployment: production
          script:
            - pipe: atlassian/firebase-deploy:1.1.0
              variables:
                PROJECT_ID: $PROJECT_ID
                KEY_FILE: $FIREBASE_KEY_FILE
                DEBUG: 'true'
                BUILD_DIR: $BUILD_DIR/web/
    main:
      - step:
          name: Run unit tests
          script:
            - flutter pub get
            - flutter test
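The steps above re-download packages on every run; Bitbucket also supports a custom cache for pub. A hedged sketch, assuming ~/.pub-cache is where the cirrusci/flutter image keeps pub packages:

definitions:
  caches:
    # assumed default pub cache location on the image
    pub: ~/.pub-cache

Each step that runs flutter pub get can then list pub under its caches.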
Android
workflows:
  internal-release:
    name: Internal Release
    environment:
      vars:
        databaseURL: https://my-app.firebaseio.com
        apiKey: asdf
        googleAppID: 1:123:android:456
        AD_MOB_AD_ID: ca-app-pub-123/456
        AD_MOB_APP_ID: ca-app-pub-123~456
        FCI_KEYSTORE_PATH: $FCI_BUILD_DIR/android/app/key.jks
        FCI_KEYSTORE: Encrypted(xxx)
        FCI_KEYSTORE_PASSWORD: Encrypted(xxx==)
        FCI_KEY_PASSWORD: Encrypted(xxx==)
        FCI_KEY_ALIAS: Encrypted(xxx==)
        GCLOUD_SERVICE_ACCOUNT_CREDENTIALS: Encrypted(xxx=)
      flutter: stable
      xcode: latest
      cocoapods: default
    triggering:
      events:
        - push
      branch_patterns:
        - pattern: '*'
          include: true
          source: true
      cancel_previous_builds: true
      when:
        changeset:
          includes:
            - '.'
          excludes:
            - '**/*.yaml'
            - '**/*.yml'
            - '**/*.json'
            - '**/*.md'
            - '**/*.db'
    scripts:
      - |
        # set up key.properties
        echo $FCI_KEYSTORE | base64 --decode > $FCI_KEYSTORE_PATH
        cat >> "$FCI_BUILD_DIR/android/key.properties" <<EOF
        storePassword=$FCI_KEYSTORE_PASSWORD
        keyPassword=$FCI_KEY_PASSWORD
        keyAlias=$FCI_KEY_ALIAS
        storeFile=$FCI_KEYSTORE_PATH
        EOF
      - |
        # set up local properties
        echo "flutter.sdk=$HOME/programs/flutter" > "$FCI_BUILD_DIR/android/local.properties"
      - cd . && flutter packages pub get
      # - cd . && flutter analyze
      # - cd . && flutter test
      - |
        #!/usr/bin/env sh
        if ! command -v yq &> /dev/null
        then
          HOMEBREW_NO_AUTO_UPDATE=1 brew install yq
          exit
        fi
      - cd . && flutter build appbundle --release --build-name=$(yq e .version pubspec.yaml|awk -F+ '{print $1}') --build-number=$PROJECT_BUILD_NUMBER
      - |
        # generate signed universal apk with user specified keys
        android-app-bundle build-universal-apk \
          --bundle build/**/outputs/**/*.aab \
          --ks $FCI_KEYSTORE_PATH \
          --ks-pass $FCI_KEYSTORE_PASSWORD \
          --ks-key-alias $FCI_KEY_ALIAS \
          --key-pass $FCI_KEY_PASSWORD
    artifacts:
      - build/**/outputs/**/*.apk
      - build/**/outputs/**/*.aab
      - build/**/outputs/**/mapping.txt
      - flutter_drive.log
    publishing:
      google_play:
        credentials: Encrypted(xxx=)
        track: internal
        in_app_update_priority: 0
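Note that the Android snippet above is Codemagic-style YAML (the FCI_* variables are Codemagic's), not a Bitbucket Pipelines file. For an Android build that stays on Bitbucket like the web example, a minimal hedged sketch, assuming the same cirrusci/flutter image and leaving signing aside:

image: cirrusci/flutter
pipelines:
  branches:
    dev:
      - step:
          name: Build Android app bundle
          script:
            - flutter pub get
            - flutter test
            - flutter build appbundle --release
          artifacts:
            - build/app/outputs/**

Release signing would still need a key.properties setup along the lines of the keystore script above.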

Do I have to make a logout when using build and push command in docker task in azure pipeline

steps:
- task: Docker@2
  displayName: Build and Push
  inputs:
    command: buildAndPush
    containerRegistry: dockerRegistryServiceConnection1
    repository: contosoRepository
    tags: |
      tag1
A convenience command called buildAndPush allows building and pushing images to a container registry in a single command; see the snippet above.
Question:
Do I need to log out from the container registry by adding the following task?
- task: Docker@2
  displayName: Logout of ACR
  inputs:
    command: logout
    containerRegistry: dockerRegistryServiceConnection1
In my opinion it is not necessary to log in or log out.
You may even find an example in the documentation without login or logout:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build job
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
So you may wonder what login actually does. If you check the source code you will find that it actually sets up DOCKER_CONFIG (the location of your client configuration files):
export function run(connection: ContainerConnection): any {
    var defer = Q.defer<any>();
    connection.setDockerConfigEnvVariable();
    defer.resolve(null);
    return defer.promise;
}
and what logout does ;)
export function run(connection: ContainerConnection): any {
    // logging out is being handled in connection.close() method, called after the command execution.
    var defer = Q.defer<any>();
    defer.resolve(null);
    return <Q.Promise<any>>defer.promise;
}
So how does it work?
// Connect to any specified container registry
let connection = new ContainerConnection();
connection.open(null, registryAuthenticationToken, true, isLogout);

let dockerCommandMap = {
    "buildandpush": "./dockerbuildandpush",
    "build": "./dockerbuild",
    "push": "./dockerpush",
    "login": "./dockerlogin",
    "logout": "./dockerlogout"
}

let telemetry = {
    command: command,
    jobId: tl.getVariable('SYSTEM_JOBID')
};

console.log("##vso[telemetry.publish area=%s;feature=%s]%s",
    "TaskEndpointId",
    "DockerV2",
    JSON.stringify(telemetry));

/* tslint:disable:no-var-requires */
let commandImplementation = require("./dockercommand");
if (command in dockerCommandMap) {
    commandImplementation = require(dockerCommandMap[command]);
}

let resultPaths = "";
commandImplementation.run(connection, (pathToResult) => {
    resultPaths += pathToResult;
})
/* tslint:enable:no-var-requires */
.fin(function cleanup() {
    if (command !== "login") {
        connection.close(true, command);
    }
})
Starting the build command you will:
- connect to the container registry
- run the command
- close the connection (if this is not a login command)
And this is what close connection does:
- If registry info is present, remove auth for only that registry. (This can happen for any command: build, push, logout, etc.)
- Otherwise, remove all auth data. (This would happen only in the case of the logout command. For other commands, logout is not called.)
Answering your question: you can live without the login and logout commands.
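The one case where an explicit login does matter is when a later step calls the docker CLI directly instead of going through the Docker@2 task, because only the task wires up DOCKER_CONFIG for you. A hedged sketch; the registry host and image path in the pull are illustrative, the service connection name is carried over from the question:

steps:
- task: Docker@2
  displayName: Login to registry
  inputs:
    command: login
    containerRegistry: dockerRegistryServiceConnection1
# subsequent script steps can now use the docker CLI with those credentials
- script: docker pull myregistry.azurecr.io/contosoRepository:tag1
  displayName: Use the docker CLI directly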

How can cypress be made to work with aurelia with github actions and locally?

OK, so I added Cypress to Aurelia during my configuration and it worked fine. When I went to set up Cypress on GitHub as just a command, I could not get it to recognize Puppeteer as a browser. So instead I used the official GitHub Action for Cypress, and that works:
- name: test
  uses: cypress-io/github-action@v1
  with:
    start: yarn start
    browser: ${{ matrix.browser }}
    record: true
  env:
    # pass the Dashboard record key as an environment variable
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
However, I had to set my cypress.json as follows:
{
  "baseUrl": "http://localhost:8080",
  "fixturesFolder": "test/e2e/fixtures",
  "integrationFolder": "test/e2e/integration",
  "pluginsFile": "test/e2e/plugins/index.js",
  "screenshotsFolder": "test/e2e/screenshots",
  "supportFile": "test/e2e/support/index.js",
  "videosFolder": "test/e2e/videos",
  "projectId": "..."
}
and now running yarn e2e doesn't work, because there's no server stood up; it's no longer standing one up itself via cypress.config.js:
const CLIOptions = require('aurelia-cli').CLIOptions;
const aureliaConfig = require('./aurelia_project/aurelia.json');

const PORT = CLIOptions.getFlagValue('port') || aureliaConfig.platform.port;
const HOST = CLIOptions.getFlagValue('host') || aureliaConfig.platform.host;

module.exports = {
  config: {
    baseUrl: `http://${HOST}:${PORT}`,
    fixturesFolder: 'test/e2e/fixtures',
    integrationFolder: 'test/e2e/integration',
    pluginsFile: 'test/e2e/plugins/index.js',
    screenshotsFolder: 'test/e2e/screenshots',
    supportFile: 'test/e2e/support/index.js',
    videosFolder: 'test/e2e/videos'
  }
};
How can I make it so that yarn e2e works as it previously did, and have it working on GitHub? (I don't care which side of the equation is changed.)
Here's yarn e2e; I'm not sure what au is doing under the hood:
"e2e": "au cypress",
The easiest way to achieve this is to create a test/e2e/cypress-config.json:
{
  "baseUrl": "http://localhost:8080",
  "fixturesFolder": "test/e2e/fixtures",
  "integrationFolder": "test/e2e/integration",
  "pluginsFile": "test/e2e/plugins/index.js",
  "screenshotsFolder": "test/e2e/screenshots",
  "supportFile": "test/e2e/support/index.js",
  "videosFolder": "test/e2e/videos",
  "projectId": "1234"
}
and then set up the GitHub Action like this:
- name: test
  uses: cypress-io/github-action@v1
  with:
    config-file: test/e2e/cypress-config.json
    start: yarn start
    browser: ${{ matrix.browser }}
    record: true
  env:
    # pass the Dashboard record key as an environment variable
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
The path doesn't matter, as long as you configure the same one in both places. Just make sure it doesn't overlap with what Aurelia wants.
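One more hedged detail on top of that config: with start:, the action can launch the specs before yarn start is actually serving, so if runs are flaky on CI it may be worth adding the action's wait-on input (assuming the action version in use supports it, and port 8080 as configured above):

- name: test
  uses: cypress-io/github-action@v1
  with:
    config-file: test/e2e/cypress-config.json
    start: yarn start
    # block until the dev server answers before running specs
    wait-on: 'http://localhost:8080'
    browser: ${{ matrix.browser }}
    record: true
  env:
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}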