GitLab pipeline with embedded MongoDB

I'm trying to build a pipeline for a Gradle Java app which uses an embedded Mongo instance. I have built a container that has Java and Mongo. However, I keep getting the following error for all my tests that require the embedded Mongo instance:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'embeddedMongoServer'
defined in class path resource [org/springframework/boot/autoconfigure/mongo/embedded/EmbeddedMongoAutoConfiguration.class]:
Invocation of init method failed; nested exception is java.io.IOException:
Cannot run program "/tmp/extract-f816c11c-614b-46d7-ad29-68923ca9d624extractmongod": error=2, No such file or directory
My gitlab-ci.yml looks like this:
image: java:latest

services:
  - docker:dind

variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - test

build:
  stage: build
  script: ./gradlew --build-cache assemble
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: push
    paths:
      - build
      - .gradle

test:
  stage: test
  image: registry.gitlab.com/path/to/explorer-ci:1.0.0
  script: ./gradlew check --debug
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: pull
    paths:
      - build
      - .gradle
The build job works fine; the test job fails. My explorer-ci container is built using the following Dockerfile:
FROM openjdk:8-jdk-alpine

RUN apk update && \
    apk add --no-cache \
    mongodb \
    bash

VOLUME /data/db
VOLUME log

RUN ["mongod", "--smallfiles", "--fork", "--logpath", "log/mongodb.log"]
I've spent a week on a bunch of different configs but can't seem to crack it. Just to note, builds/tests run fine on my local machine. Any ideas what I'm doing wrong?

On reflection, as I am using an embedded Mongo instance, I do not have a dependency on MongoDB to build or test. I am now using the following .gitlab-ci.yml and it works fine.
image: openjdk:8-jdk

variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false"
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - test

build:
  stage: build
  script: ./gradlew --build-cache assemble
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: push
    paths:
      - build
      - .gradle

test:
  stage: test
  script: ./gradlew check
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: pull
    paths:
      - build
      - .gradle

The issue is the base Docker image used by your project's runner: you need the full JDK image, not the Alpine variant. Alpine is based on musl libc, while the mongod binary that the embedded Mongo support extracts is linked against glibc, which is why it fails with "No such file or directory" even though the file exists.
Try changing the base image to openjdk:8-jdk and it will work fine.
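If you want to confirm that this is what is happening (a diagnostic sketch, not part of the original answer; the extract path is the one from the error message), inspect the extracted binary inside the Alpine container:

apk add --no-cache file
file /tmp/extract-*extractmongod
# prints "... interpreter /lib64/ld-linux-x86-64.so.2" for the glibc-linked mongod;
# that loader does not exist on musl-based Alpine, hence "No such file or directory"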

In our case, the solution described in this Flapdoodle embedded MongoDB issue helped:
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/issues/281
In short, we added the following to our GitLab script (or do this in the container setup itself if possible) before running our tests:
apk --no-cache add ca-certificates
wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.29-r0/glibc-2.29-r0.apk
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.29-r0/glibc-bin-2.29-r0.apk
apk add glibc-2.29-r0.apk glibc-bin-2.29-r0.apk
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.29-r0/glibc-i18n-2.29-r0.apk
apk add glibc-bin-2.29-r0.apk glibc-i18n-2.29-r0.apk
/usr/glibc-compat/bin/localedef -i en_US -f UTF-8 en_US.UTF-8
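In .gitlab-ci.yml terms, that could look like the following for the test job (a sketch, assuming the Alpine-based explorer-ci image from the question; the commands are the ones listed above):

test:
  stage: test
  image: registry.gitlab.com/path/to/explorer-ci:1.0.0
  before_script:
    # install a glibc compatibility layer so the extracted mongod can run on musl-based Alpine
    - apk --no-cache add ca-certificates
    - wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
    - wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.29-r0/glibc-2.29-r0.apk
    - wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.29-r0/glibc-bin-2.29-r0.apk
    - apk add glibc-2.29-r0.apk glibc-bin-2.29-r0.apk
  script: ./gradlew check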

Just updating the dependency should resolve it:
<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <version>2.2.0</version>
    <scope>test</scope>
</dependency>

Related

Sonar scanner command not found on Gitlab CI/CD

I am trying to integrate SonarQube into my project's CI/CD pipeline on GitLab. I have followed the documentation on GitLab and SonarQube to the best of my understanding to get the job included in my yml file.
I am currently experiencing an error in which the sonar-scanner command is not found [screenshot of the failing job omitted].
This is my yml file:
build_project:
  stage: build
  script:
    - xcodebuild clean -workspace TinggIOS/TinggIOS.xcworkspace -scheme TinggIOS | xcpretty
    - xcodebuild test -workspace TinggIOS/TinggIOS.xcworkspace -scheme TinggIOS -destination 'platform=iOS Simulator,name=iPhone 11 Pro Max,OS=15' | xcpretty -s
  tags:
    - stage
  image: macos-11-xcode-12

sonarqube-check:
  stage: analyze
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
    GIT_DEPTH: "0"  # Tells git to fetch all the branches of the project, required by the analysis task
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - sonar-scanner -Dsonar.qualitygate.wait=true
  allow_failure: true
  only:
    - merge_requests
    - feature/unit-test # or the name of your main branch
    - develop
  tags:
    - stage
It looks like the
sonar-scanner -Dsonar.qualitygate.wait=true
command is not found. Try to run that command on the machine your pipeline runs on (e.g. SSH into that machine and run it there). The issue might be that sonar-scanner isn't installed on it.
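One more thing worth checking (an assumption on my part, since the runner configuration isn't shown): the job pins tags: - stage, so it can only be picked up by runners with that tag. If that runner uses the shell executor, the image: sonarsource/sonar-scanner-cli line is ignored entirely, and sonar-scanner has to be installed on the host itself. A minimal sketch that relies on the image instead, assuming a Docker-executor runner with the (hypothetical) tag docker is available:

sonarqube-check:
  stage: analyze
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  script:
    - sonar-scanner -Dsonar.qualitygate.wait=true
  tags:
    - docker # hypothetical tag; must match a Docker-executor runner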

How to extend template script?

I have the following template in my .gitlab-ci.yml file:
x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/
I want to extend the script part to make actual builds. For example:
android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
It works well, but it has a problem: GitLab CI launches both jobs instead of ignoring the template.
Is there a way to run only the "android-stage-build" job, which pulls in the template when needed?
In order to make GitLab CI ignore the first entry, you need to add a dot (.) in front of the definition, making it a hidden job.
.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/

android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
Besides that, based on what I read here, I don't think you want to use after_script.
I think you want before_script in the template, and the script key specified on the build job itself.
The main difference is that after_script also runs if the script fails, and by what I read here, it does not look like you would want that to happen.
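A minimal sketch of that layout, reusing the template from the question (the only change is before_script in the template and script in the job, so a failing Gradle build fails the job):

.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  before_script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/

android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  script:
    - ./gradlew :app:assembleDebug
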
Yes. You're simply missing a . :)
See https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html#anchors
It should work if you write it like this:
.x-android-build-tools: &android_build_tools
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/

android-stage-build:
  <<: *android_build_tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug
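As an aside, the yaml_optimization page linked above also documents the extends keyword, which avoids YAML anchors altogether; a sketch of the same setup using it (keeping this answer's after_script as-is):

.android-build-tools:
  image: jangrewe/gitlab-ci-android
  stage: build
  script:
    - export GRADLE_USER_HOME=$(pwd)/.gradle
    - chmod +x ./gradlew
  artifacts:
    expire_in: 1 hours
    paths:
      - app/build/

android-stage-build:
  extends: .android-build-tools
  environment: stage
  only:
    - dev
  after_script:
    - ./gradlew :app:assembleDebug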

CircleCI "Could not ensure that workspace directory exists"

I am using CircleCI with a GameCI docker image in order to build a Unity project. The build works, but I am trying to make use of the h-matsuo/github-release orb in order to create a release on GitHub for the build. I have created a new separate job for this, so I needed to share data between the jobs. I am using persist_to_workspace in order to do that, as specified in the documentation, but the solution doesn't seem to work. I get the following error:
Could not ensure that workspace directory /root/project/Zipped exists
For the workspace persist logic, I've added the following lines of code in my config.yml file:
- working_directory: /root/project - inside the executor of the main job
- persist_to_workspace - as the last command in the main job's steps
- attach_workspace - as the first command in the second job's steps
Here's my full config.yml file:
version: 2.1

orbs:
  github-release: h-matsuo/github-release@0.1.3

executors:
  unity_exec:
    docker:
      - image: unityci/editor:ubuntu-2019.4.19f1-windows-mono-0.9.0
    environment:
      BUILD_NAME: speedrun-circleci-build
    working_directory: /root/project

.build: &build
  executor: unity_exec
  steps:
    - checkout
    - run: mkdir -p /root/project/Zipped
    - run:
        name: Git submodule recursive
        command: git submodule update --init --recursive
    - run:
        name: Remove editor folder in shared project
        command: rm -rf ./Assets/Shared/Movement/Generic/Attributes/Editor/
    - run:
        name: Converting Unity license
        command: chmod +x ./ci/unity_license.sh && ./ci/unity_license.sh
    - run:
        name: Building game binaries
        command: chmod +x ./ci/build.sh && ./ci/build.sh
    - run:
        name: Zipping build
        command: apt update && apt -y install zip && zip -r "/root/project/Zipped/build.zip" ./Builds/
    - store_artifacts:
        path: /root/project/Zipped/build.zip
    - run:
        name: Show all files
        command: find "$(pwd)"
    - persist_to_workspace:
        root: Zipped
        paths:
          - build.zip

jobs:
  build_windows:
    <<: *build
    environment:
      BUILD_TARGET: StandaloneWindows64

  release:
    description: Build project and publish a new release tagged `v1.1.1`.
    executor: github-release/default
    steps:
      - attach_workspace:
          at: /root/project/Zipped
      - run:
          name: Show all files
          command: sudo find "/root/project"
      - github-release/create:
          tag: v1.1.1
          title: Version v1.1.1
          description: This release is version v1.1.1.
          file-path: ./build.zip

workflows:
  version: 2
  build:
    jobs:
      - build_windows
      - release:
          requires:
            - build_windows
Can somebody help me with this please?
If somebody ever encounters the same issue: try to avoid using paths under /root. I stored the artifacts somewhere inside /tmp/ instead, and before storing them I created the folder manually with chmod 777 permissions, using mkdir with the -m flag.
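A minimal sketch of that change, assuming /tmp/workspace as the new location (the directory name is illustrative, not from the original answer):

# in the build job's steps:
- run: mkdir -m 777 -p /tmp/workspace/Zipped
# ... zip the build into /tmp/workspace/Zipped/build.zip instead of /root/project/Zipped ...
- persist_to_workspace:
    root: /tmp/workspace
    paths:
      - Zipped/build.zip

# in the release job's steps:
- attach_workspace:
    at: /tmp/workspace
# build.zip is then available at /tmp/workspace/Zipped/build.zip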

Azure pipeline docker fails copy with multiple projects

Copying Data.csproj to Data/ fails when building my app in Azure DevOps, though the first copy command, Api.csproj to Api/, works fine. Note that I did not specify the buildContext in my azure-pipeline.yml file; when I did add it, as buildContext: '$(Build.Repository.LocalPath)', it failed even on the first copy.
Any input or suggestions on how to fix this? I've tried searching, and neither adding the buildContext nor adding the folder to the csproj path, e.g. COPY ["/Data/Data.csproj", "Data/"], seems to work.
This is my folder structure (my azure-pipeline.yml file is outside the App folder):
App
- Api/
  - Api.csproj
  - Dockerfile
- Data/
  - Data.csproj
- Domain/
  - Domain.csproj
- App.sln
My Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Api.csproj", "Api/"]
COPY ["Data.csproj", "Data/"]
COPY ["Domain.csproj", "Domain/"]
RUN dotnet restore "Api/Api.csproj"
COPY . .
WORKDIR "/src/Api"
RUN dotnet build "Api.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Api.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Api.dll"]
Parts of my azure-pipeline.yml:
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: 'App'
        dockerfile: '**/Dockerfile'
        tags: |
          $(tag)
Here's the error:
Step 6/28 : WORKDIR /src
---> Running in 266a78d293ee
Removing intermediate container 266a78d293ee
---> 2d899fafdf05
Step 7/28 : COPY ["Api.csproj", "Api/"]
---> 92c8c1450c3c
Step 8/28 : COPY ["Data.csproj", "Data/"]
COPY failed: stat /var/lib/docker/tmp/docker-builder764823890/Data.csproj: no such file or directory
##[error]COPY failed: stat /var/lib/docker/tmp/docker-builder764823890/Data.csproj: no such file or directory
##[error]The process '/usr/bin/docker' failed with exit code 1
Okay, after trying many times, I was able to fix this by changing the Dockerfile and azure-pipelines.yml.
I think what fixed the issue is explicitly setting the buildContext to 'App/' instead of the variable '$(Build.Repository.LocalPath)', whose exact value I'm not sure of. Since COPY sources are resolved relative to the build context, with the context at App/ each project file must be referenced through its folder, e.g. Api/Api.csproj.
I'll just post the parts I changed.
Dockerfile
COPY ["Api/Api.csproj", "Api/"]
COPY ["Data/Data.csproj", "Data/"]
COPY ["Domain/Domain.csproj", "Domain/"]
azure-pipelines.yml
inputs:
  command: buildAndPush
  repository: $(imageRepository)
  dockerfile: $(dockerfilePath)
  buildContext: 'App/'

How can I remove the Dockerfile and use only the CI file with a Kubernetes runner?

Right now I have a Dockerfile, a .gitlab-ci.yml, and a shell runner:
FROM node:latest
RUN cd /
RUN mkdir Brain
COPY . /Brain/
WORKDIR /Brain/
RUN npm install
ENV CASSANDRA_HOST_5="10.1.1.58:9042"
ENV IP="0.0.0.0"
ENV PORT=6282
EXPOSE 6282
CMD npm start
and this CI file:
before_script:
  - export newver="0.1.0.117"

build:
  image: node:latest
  stage: build
  script:
    - docker build -t Brain .
    - docker tag pro 10.1.1.134:5000/Brain:$newver
    - docker push 10.1.1.134:5000/Brain:$newver

deploy:
  stage: deploy
  script:
    - kubectl create -f brain-dep.yml
    - kubectl create -f brain-service.yml
I don't want to create an image for every small change; I only want to keep stable images in the local registry. Right now I have multiple versions of the Brain image. Also, how can I have other services besides Brain (Elasticsearch and ...)?
Any suggestions?
Kubernetes has to be able to pull the image from somewhere. You can use an alternate repo for non-release builds or use some kind of naming scheme, and then clear out non-release builds more frequently.
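For the first concern (keeping only stable images), one common option is to restrict the image-building job to tags, so day-to-day commits never produce an image; a minimal sketch, assuming Git tags mark your stable versions:

build:
  stage: build
  script:
    # image names must be lowercase; $CI_COMMIT_TAG is set only in tag pipelines
    - docker build -t 10.1.1.134:5000/brain:$CI_COMMIT_TAG .
    - docker push 10.1.1.134:5000/brain:$CI_COMMIT_TAG
  only:
    - tags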