I have this Jenkinsfile which I want to use to build a pipeline:
pipeline {
agent any
environment {
NEXUS_VERSION = "nexus3"
NEXUS_PROTOCOL = "http"
NEXUS_URL = "you-ip-addr-here:8081"
NEXUS_REPOSITORY = "maven-nexus-repo"
NEXUS_CREDENTIAL_ID = "nexus-user-credentials"
}
stages {
stage('Download Helm Charts') {
steps {
sh "echo 'Downloading Helm Charts from Bitbucket repository...'"
// configure credentials under http://192.168.1.28:8080/user/test/credentials/ and put credentials ID
// not sure whether I need to point to the root folder of the Helm repository or only to a single chart
checkout scmGit(
branches: [[name: 'master']],
userRemoteConfigs: [[credentialsId: 'c2672602-dfd5-4158-977c-5009065c867e',
url: 'http://192.168.1.30:7990/scm/jen/helm.git']])
}
}
stage('Test Kubernetes version') {
steps {
sh "echo 'Checking Kubernetes version..'"
// how to remotely test the Kubernetes version?
}
}
stage('Push Helm Charts to Kubernetes') {
steps {
sh "echo 'building..'"
// push here helm chart from Jenkins server to Kubernetes cluster
}
}
stage('Build Image') {
steps {
sh "echo 'building..'"
// configure credentials under http://192.168.1.28:8080/user/test/credentials/ and put credentials ID
git credentialsId: 'bitbucket-server:50001e738fa6dafbbe7e336853ced1fcbc284fb18ea8cda8b54dbfa3a7bc87b9', url: 'http://192.168.1.30:7990/scm/jen/spring-boot-microservice.git', branch: 'master'
// execute java -jar ... and build the Docker image
sh './gradlew build && java -jar build/libs/gs-spring-boot-docker-0.1.0.jar'
sh 'docker build -t springio/gs-spring-boot-docker .'
}
}
stage('Push Image into Nexus registry') {
steps {
sh "echo 'building..'"
// push compiled docker image into Nexus repository
script {
pom = readMavenPom file: "pom.xml";
filesByGlob = findFiles(glob: "target/*.${pom.packaging}");
echo "${filesByGlob[0].name} ${filesByGlob[0].path} ${filesByGlob[0].directory} ${filesByGlob[0].length} ${filesByGlob[0].lastModified}"
artifactPath = filesByGlob[0].path;
artifactExists = fileExists artifactPath;
if(artifactExists) {
echo "*** File: ${artifactPath}, group: ${pom.groupId}, packaging: ${pom.packaging}, version ${pom.version}";
nexusArtifactUploader(
nexusVersion: NEXUS_VERSION,
protocol: NEXUS_PROTOCOL,
nexusUrl: NEXUS_URL,
groupId: pom.groupId,
version: pom.version,
repository: NEXUS_REPOSITORY,
credentialsId: NEXUS_CREDENTIAL_ID,
artifacts: [
[artifactId: pom.artifactId,
classifier: '',
file: artifactPath,
type: pom.packaging],
[artifactId: pom.artifactId,
classifier: '',
file: "pom.xml",
type: "pom"]
]
);
} else {
error "*** File: ${artifactPath}, could not be found";
}
}
}
}
stage('Deploy Image from Nexus registry into Kubernetes') {
steps {
sh "echo 'building..'"
}
}
stage('Test'){
steps {
sh "echo 'Testing...'"
// check here whether it deployed successfully
}
}
}
}
How can I deploy the Docker image that is built by the Jenkins server and pushed to the Nexus repository? If possible I want to use a service account with a token.
Instead of using 'nexusArtifactUploader', why don't you use docker push, like you do to build the image?
I guess nexusArtifactUploader uses the Nexus API and doesn't work with Docker images, but you can access the registry using docker and the exposed port (defaults to 5000):
withCredentials([usernamePassword(credentialsId: NEXUS_CREDENTIAL_ID, usernameVariable: 'REGISTRY_USER', passwordVariable: 'REGISTRY_PASS')]) {
    // docker push has no credentials flag; log in first, then push
    sh 'docker login -u $REGISTRY_USER -p $REGISTRY_PASS your-registry-url'
    sh 'docker push your-registry-url/image-name:image-tag'
}
You may also change the docker build command to build the image using your registry name (or tag it after building; see How to push a docker image to a private repository).
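To then deploy the image that was pushed to the Nexus registry into Kubernetes, one option is to call kubectl from a dedicated stage using a service-account token stored as a Jenkins string credential. A minimal sketch, assuming a hypothetical credential ID k8s-sa-token and placeholder names for the API server, deployment, container and image:
stage('Deploy Image from Nexus registry into Kubernetes') {
    steps {
        withCredentials([string(credentialsId: 'k8s-sa-token', variable: 'K8S_TOKEN')]) {
            // roll the deployment to the freshly pushed tag
            sh '''
                kubectl --server=https://your-k8s-api-server:6443 \
                        --token=$K8S_TOKEN \
                        --insecure-skip-tls-verify=true \
                        set image deployment/your-app your-container=your-registry-url/image-name:image-tag
            '''
        }
    }
}
In production you would replace --insecure-skip-tls-verify with --certificate-authority pointing at the cluster CA file.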
I am using PostgreSQL as the DB in my Spring Boot application.
My SonarQube is unable to calculate code coverage. Can someone please guide me on this?
build.gradle
plugins {
id 'org.springframework.boot' version "${springBootVersion}"
id 'io.spring.dependency-management' version '1.0.15.RELEASE'
id 'java'
id 'eclipse'
id 'jacoco'
id 'org.sonarqube' version "3.3"
id 'com.google.cloud.tools.jib' version "${jibVersion}"
}
group = 'com.vsi.postgrestoattentive'
if (!project.hasProperty('buildName')) {
throw new GradleException("Usage for CLI:"
+ System.getProperty("line.separator")
+ "gradlew <taskName> -Dorg.gradle.java.home=<java-home-dir> -PbuildName=<major>.<minor>.<buildNumber> -PgcpProject=<gcloudProject>"
+ System.getProperty("line.separator")
+ "<org.gradle.java.home> - OPTIONAL if available in PATH"
+ System.getProperty("line.separator")
+ "<buildName> - MANDATORY, example 0.1.23"
+ System.getProperty("line.separator")
+ "<gcpProject> - OPTIONAL, project name in GCP")
}
project.ext {
buildName = project.property('buildName');
}
version = "${project.ext.buildName}"
sourceCompatibility = '1.8'
apply from: 'gradle/sonar.gradle'
apply from: 'gradle/tests.gradle'
apply from: 'gradle/image-build-gcp.gradle'
repositories {
mavenCentral()
}
dependencies {
implementation("org.springframework.boot:spring-boot-starter-web:${springBootVersion}")
implementation("org.springframework.boot:spring-boot-starter-actuator:${springBootVersion}")
implementation 'org.springframework.boot:spring-boot-starter-web:2.7.0'
developmentOnly 'org.springframework.boot:spring-boot-devtools'
testImplementation 'org.springframework.integration:spring-integration-test'
testImplementation 'org.springframework.batch:spring-batch-test:4.3.0'
implementation("org.springframework.boot:spring-boot-starter-data-jpa:${springBootVersion}")
implementation 'org.postgresql:postgresql:42.2.16'
implementation 'org.springframework.batch:spring-batch-core:4.1.1.RELEASE'
implementation 'com.fasterxml.jackson.datatype:jackson-datatype-jsr310:2.14.1'
implementation group: 'io.micrometer', name: 'micrometer-registry-datadog', version: '1.7.0'
implementation 'com.google.cloud:libraries-bom:26.3.0'
implementation 'com.google.cloud:google-cloud-storage:2.16.0'
testImplementation('org.mockito:mockito-core:3.7.7')
//Below 4 dependencies should be commented in local
implementation 'org.springframework.cloud:spring-cloud-starter-kubernetes-client-all:2.0.4'
implementation 'io.kubernetes:client-java:12.0.0'
implementation("org.springframework.cloud:spring-cloud-gcp-starter-metrics:${gcpSpringCloudVersion}")
implementation 'org.springframework.cloud:spring-cloud-gcp-logging:1.2.8.RELEASE'
testImplementation('org.mockito:mockito-core:3.7.7')
testImplementation 'org.springframework.boot:spring-boot-test'
testImplementation 'org.springframework:spring-test'
testImplementation 'org.assertj:assertj-core:3.21.0'
testImplementation("org.springframework.boot:spring-boot-starter-test:${springBootVersion}") {
exclude group: "org.junit.vintage", module: "junit-vintage-engine"
}
}
bootJar {
archiveFileName = "${project.name}.${archiveExtension.get()}"
}
springBoot {
buildInfo()
}
test {
finalizedBy jacocoTestReport
}
jacoco {
toolVersion = "0.8.8"
}
jacocoTestReport {
dependsOn test
}
// Code to make the build check the code coverage ratio
project.tasks["bootJar"].dependsOn "jacocoTestReport","jacocoTestCoverageVerification"
tests.gradle
test {
finalizedBy jacocoTestReport
useJUnitPlatform()
testLogging {
exceptionFormat = 'full'
}
afterSuite { desc, result ->
if (!desc.parent) {
println "Results: (${result.testCount} tests, ${result.successfulTestCount} successes, ${result.failedTestCount} failures, ${result.skippedTestCount} skipped)"
boolean skipTests = Boolean.parseBoolean(project.findProperty('SKIP_TESTS') ?: "false")
if (result.testCount == 0 && !skipTests) {
throw new IllegalStateException("No tests were found. Failing the build")
}
}
}
}
jacocoTestCoverageVerification {
dependsOn test
violationRules {
rule {
limit {
// SMS-28: since the project is in a nascent stage, the coverage minimum is set to 50%
minimum = 0.5
}
}
}
}
sonar.gradle
apply plugin: "org.sonarqube"
apply plugin: 'jacoco'
jacoco {
toolVersion = "0.8.5"
reportsDir = file("$buildDir/jacoco")
}
jacocoTestReport {
reports {
xml.enabled true
html.enabled true
csv.enabled false
}
}
JenkinsBuildFile
pipeline {
agent any
environment {
// TODO: Remove this
GIT_BRANCH_LOCAL = sh (
script: "echo $GIT_BRANCH | sed -e 's|origin/||g'",
returnStdout: true
).trim()
CURRENT_BUILD_DISPLAY="0.1.${BUILD_NUMBER}"
PROJECT_FOLDER="."
PROJECT_NAME="xyz"
//Adding default values for env variables that sometimes get erased from GCP Jenkins
GRADLE_JAVA_HOME="/opt/java/openjdk"
GCP_SA="abc"
GCP_PROJECT="efg"
SONAR_JAVA_HOME="/usr/lib/jvm/java-17-openjdk-amd64"
SONAR_HOST="http://sonar-sonarqube:9000/sonar"
}
stages {
stage('Clean Workspace') {
steps {
echo "Setting current build to ${CURRENT_BUILD_DISPLAY}"
script {
currentBuild.displayName = "${CURRENT_BUILD_DISPLAY}"
currentBuild.description = """Branch - ${GIT_BRANCH_LOCAL}"""
}
dir("${PROJECT_FOLDER}") {
echo "Changed directory to ${PROJECT_FOLDER}"
echo 'Cleaning up Work Dir...'
// Had to add below chmod command as Jenkins build was failing stating gradlew permission denied
sh 'chmod +x gradlew'
//gradlew clean means deletion of the build directory.
sh './gradlew clean -PbuildName=${CURRENT_BUILD_DISPLAY} -Dorg.gradle.java.home=${GRADLE_JAVA_HOME}'
//mkdir -p creates subdirectories
//touch creates new empty file
sh 'mkdir -p build/libs && touch build/libs/${PROJECT_NAME}-${CURRENT_BUILD_DISPLAY}.jar'
}
}
}
stage('Tests And Code Quality') {
steps {
dir("${PROJECT_FOLDER}") {
echo 'Running Tests and SonarQube Analysis'
withCredentials([string(credentialsId: 'sonar_key', variable: 'SONAR_KEY')]) {
sh '''
./gradlew -i sonarqube -Dorg.gradle.java.home=${SONAR_JAVA_HOME} \
-Dsonar.host.url=${SONAR_HOST} \
-PbuildName=${CURRENT_BUILD_DISPLAY} \
-Dsonar.login=$SONAR_KEY \
-DprojectVersion=${CURRENT_BUILD_DISPLAY}
'''
}
echo 'Ran SonarQube Analysis successfully'
}
}
}
stage('ECRContainerRegistry') {
steps {
withCredentials([file(credentialsId: 'vsi-ops-gcr', variable: 'SECRET_JSON')]) {
echo 'Activating gcloud SDK Service Account...'
sh 'gcloud auth activate-service-account $GCP_SA --key-file $SECRET_JSON --project=$GCP_PROJECT'
sh 'gcloud auth configure-docker'
echo 'Activated gcloud SDK Service Account'
dir("${PROJECT_FOLDER}") {
echo "Pushing image to GCR with tag ${CURRENT_BUILD_DISPLAY}..."
sh './gradlew jib -PbuildName=${CURRENT_BUILD_DISPLAY} -PgcpProject=${GCP_PROJECT} -Dorg.gradle.java.home=${GRADLE_JAVA_HOME}'
echo "Pushed image to GCR with tag ${CURRENT_BUILD_DISPLAY} successfully"
}
echo 'Revoking gcloud SDK Service Account...'
sh "gcloud auth revoke ${GCP_SA}"
echo 'Revoked gcloud SDK Service Account'
}
}
}
}
post {
/*
TODO: use cleanup block
deleteDir is explicit in failure because always block is run
before success causing archive failure. Also cleanup block is not
available in this version on Jenkins ver. 2.164.2
*/
success {
dir("${PROJECT_FOLDER}") {
echo 'Archiving build artifacts...'
archiveArtifacts artifacts: "build/libs/*.jar, config/**/*", fingerprint: true, onlyIfSuccessful: true
echo 'Archived build artifacts successfully'
echo 'Publishing Jacoco Reports...'
jacoco(
execPattern: 'build/jacoco/*.exec',
classPattern: 'build/classes',
sourcePattern: 'src/main/java',
exclusionPattern: 'src/test*'
)
echo 'Published Jacoco Reports successfully'
}
echo 'Cleaning up workspace...'
deleteDir()
}
failure {
echo 'Cleaning up workspace...'
deleteDir()
}
aborted {
echo 'Cleaning up workspace...'
deleteDir()
}
}
}
Below is the error which I get in the Jenkins console:
> Task :sonarqube
JaCoCo report task detected, but XML report is not enabled or it was not produced. Coverage for this task will not be reported.
Caching disabled for task ':sonarqube' because:
Build cache is disabled
Task ':sonarqube' is not up-to-date because:
Task has not declared any outputs despite executing actions.
JaCoCo report task detected, but XML report is not enabled or it was not produced. Coverage for this task will not be reported.
User cache: /var/jenkins_home/.sonar/cache
Default locale: "en", source code encoding: "UTF-8"
Load global settings
Load global settings (done) | time=101ms
Server id: 4AE86E0C-AX63D7IJvl7jEHIL9nIz
User cache: /var/jenkins_home/.sonar/cache
Load/download plugins
Load plugins index
Load plugins index (done) | time=48ms
Load/download plugins (done) | time=91ms
Process project properties
Process project properties (done) | time=9ms
Execute project builders
Execute project builders (done) | time=1ms
Java "Test" source files AST scan (done) | time=779ms
No "Generated" source files to scan.
Sensor JavaSensor [java] (done) | time=8057ms
Sensor JaCoCo XML Report Importer [jacoco]
'sonar.coverage.jacoco.xmlReportPaths' is not defined. Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml
No report imported, no coverage information will be imported by JaCoCo XML Report Importer
Sensor JaCoCo XML Report Importer [jacoco] (done) | time=4ms
Can someone please guide me on how to resolve this issue of Sonar not calculating code coverage?
While it could be caused by a number of things, from the logs it seems likely that it couldn't find the report.
As the first comment says, verify that the report is actually being generated. If it is generated, then it's likely SQ can't find the JaCoCo XML report.
Note the error line, near the end:
'sonar.coverage.jacoco.xmlReportPaths' is not defined. Using default locations: target/site/jacoco/jacoco.xml,target/site/jacoco-it/jacoco.xml,build/reports/jacoco/test/jacocoTestReport.xml
An example of defining this location would be putting this line in sonar-project.properties:
sonar.coverage.jacoco.xmlReportPaths=build/reports/jacoco.xml
It can also be defined directly in a *.gradle file:
sonarqube {
properties {
property "sonar.coverage.jacoco.xmlReportsPaths", "build/reports/jacoco.xml"
}
}
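Note that the Gradle JaCoCo plugin writes the XML report to build/reports/jacoco/test/jacocoTestReport.xml by default (one of the default locations the scanner lists in the log above), so whatever path you configure must match where your build actually produces the file. A sketch against the Gradle defaults, assuming no custom report destination:
jacocoTestReport {
    reports {
        xml.enabled true   // xml.required = true on Gradle 7+
    }
}
sonarqube {
    properties {
        property "sonar.coverage.jacoco.xmlReportPaths", "${buildDir}/reports/jacoco/test/jacocoTestReport.xml"
    }
}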
I am running into an issue where every terraform plan run recreates resources that don't need to be recreated. This is a problem because some of the steps depend on those resources being available, and since they are recreated with each run, the script fails to complete.
My setup is Github Actions, Linode LKE, Terraform Cloud.
My main.tf file looks like this:
terraform {
required_providers {
linode = {
source = "linode/linode"
version = "=1.16.0"
}
helm = {
source = "hashicorp/helm"
version = "=2.1.0"
}
}
backend "remote" {
hostname = "app.terraform.io"
organization = "MY-ORG-HERE"
workspaces {
name = "MY-WORKSPACE-HERE"
}
}
}
provider "linode" {
}
provider "helm" {
debug = true
kubernetes {
config_path = "${local_file.kubeconfig.filename}"
}
}
resource "linode_lke_cluster" "lke_cluster" {
label = "MY-LABEL-HERE"
k8s_version = "1.21"
region = "us-central"
pool {
type = "g6-standard-2"
count = 3
}
}
and my outputs.tf file
resource "local_file" "kubeconfig" {
depends_on = [linode_lke_cluster.lke_cluster]
filename = "kube-config"
# filename = "${path.cwd}/kubeconfig"
content = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}
resource "helm_release" "ingress-nginx" {
# depends_on = [local_file.kubeconfig]
depends_on = [linode_lke_cluster.lke_cluster, local_file.kubeconfig]
name = "ingress"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
}
resource "null_resource" "custom" {
depends_on = [helm_release.ingress-nginx]
# change trigger to run every time
triggers = {
build_number = "${timestamp()}"
}
# download kubectl
provisioner "local-exec" {
command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
}
# apply changes
provisioner "local-exec" {
command = "./kubectl apply -f ./k8s/ --kubeconfig ${local_file.kubeconfig.filename}"
}
}
In Github Actions, I'm running these steps:
jobs:
init-terraform:
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./terraform
steps:
- name: Checkout code
uses: actions/checkout@v2
with:
ref: 'privatebeta-kubes'
- name: Setup Terraform
uses: hashicorp/setup-terraform@v1
with:
cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
- name: Terraform Init
run: terraform init
- name: Terraform Format Check
run: terraform fmt -check -v
- name: List terraform state
run: terraform state list
- name: Terraform Plan
run: terraform plan
id: plan
env:
LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
When I look at the results of terraform state list I can see my resources:
Run terraform state list
terraform state list
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin state list
helm_release.ingress-nginx
linode_lke_cluster.lke_cluster
local_file.kubeconfig
null_resource.custom
But my terraform plan fails and the issue seems to stem from the fact that those resources try to get recreated.
Run terraform plan
terraform plan
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
LINODE_TOKEN: ***
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
Waiting for the plan to start...
Terraform v1.0.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
linode_lke_cluster.lke_cluster: Refreshing state... [id=31946]
local_file.kubeconfig: Refreshing state... [id=fbb5520298c7c824a8069397ef179e1bc971adde]
helm_release.ingress-nginx: Refreshing state... [id=ingress]
╷
│ Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory
│
│ with helm_release.ingress-nginx,
│ on outputs.tf line 8, in resource "helm_release" "ingress-nginx":
│ 8: resource "helm_release" "ingress-nginx" {
Is there a way to tell terraform it doesn't need to recreate those resources?
Regarding the actual error shown, Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory, which references your outputs file: I found this issue which could help with your specific error: https://github.com/hashicorp/terraform-provider-helm/issues/418
One other thing looks strange to me: why does your outputs.tf define resources and not outputs? Shouldn't your outputs.tf look like this?
output "local_file_kubeconfig" {
value = "reference.to.resource"
}
Also, your state file / backend config looks like it's properly configured.
I recommend logging into your Terraform Cloud account to verify that the workspace is indeed there, as expected; it's the state file that tells Terraform not to re-create the resources it manages.
If the resources are already there and Terraform is trying to re-create them, that could indicate that those resources were created prior to using Terraform, or possibly within another Terraform Cloud workspace or plan.
Did you end up renaming your backend workspace at any point while using this plan? I'm referring to your main.tf file, the part where it says MY-WORKSPACE-HERE:
terraform {
required_providers {
linode = {
source = "linode/linode"
version = "=1.16.0"
}
helm = {
source = "hashicorp/helm"
version = "=2.1.0"
}
}
backend "remote" {
hostname = "app.terraform.io"
organization = "MY-ORG-HERE"
workspaces {
name = "MY-WORKSPACE-HERE"
}
}
}
Unfortunately I am not a Kubernetes expert, so someone with more Kubernetes knowledge may be able to help further.
I would like to run my tests against a Docker MongoDB instance using a Jenkins pipeline. I have got it partially working, but the problem is that the tests run inside the Mongo container. I just want the pipeline to start a container and have my tests connect to that Mongo container. At the moment it downloads Gradle inside the container and takes about 5 minutes to run. I hope that makes sense. Here is my Jenkinsfile:
#!/usr/bin/env groovy
pipeline {
environment {
SPRING_PROFILES_ACTIVE = "jenkins"
}
agent {
node {
label "jdk8"
}
}
parameters {
choice(choices: 'None\nBuild\nMinor\nMajor', description: '', name: 'RELEASE_TYPE')
string(defaultValue: "refs/heads/master:refs/remotes/origin/master", description: 'gerrit refspec e.g. refs/changes/45/12345/1', name: 'GERRIT_REFSPEC')
choice(choices: 'master\nFETCH_HEAD', description: 'gerrit branch', name: 'GERRIT_BRANCH')
}
stages {
stage("Test") {
stages {
stage("Initialise") {
steps {
println "Running on ${NODE_NAME}, release type: ${params.RELEASE_TYPE}"
println "gerrit refspec: ${params.GERRIT_REFSPEC}, branch: ${params.GERRIT_BRANCH}, event type: ${params.GERRIT_EVENT_TYPE}"
checkout scm
sh 'git log -n 1'
}
}
stage("Verify") {
agent {
dockerfile {
filename 'backend/Dockerfile'
args '-p 27017:27017'
label 'docker-pipeline'
dir './maintenance-notifications'
}
}
steps {
sh './gradlew :maintenance-notifications:backend:clean'
sh './gradlew :maintenance-notifications:backend:check :maintenance-notifications:backend:test'
}
post {
always {
junit 'maintenance-notifications/backend/build/test-results/**/*.xml'
}
}
}
}
}
stage("Release") {
when {
expression {
return params.RELEASE_TYPE != '' && params.RELEASE_TYPE != 'None';
}
}
steps {
script {
def gradleProps = readProperties file: "gradle.properties"
def isCurrentSnapshot = gradleProps.version.endsWith("-SNAPSHOT")
def newVersion = gradleProps.version.replace("-SNAPSHOT", "")
def cleanVersion = newVersion.tokenize(".").collect{it.toInteger()}
if (params.RELEASE_TYPE == 'Build') {
newVersion = "${cleanVersion[0]}.${cleanVersion[1]}.${isCurrentSnapshot ? cleanVersion[2] : cleanVersion[2] + 1}"
} else if (params.RELEASE_TYPE == 'Minor') {
newVersion = "${cleanVersion[0]}.${cleanVersion[1] + 1}.0"
} else if (params.RELEASE_TYPE == 'Major') {
newVersion = "${cleanVersion[0] + 1}.0.0"
}
def newVersionArray = newVersion.tokenize(".").collect{it.toInteger()}
def newSnapshot = "${newVersionArray[0]}.${newVersionArray[1]}.${newVersionArray[2] + 1}-SNAPSHOT"
println "release version: ${newVersion}, snapshot version: ${newSnapshot}"
sh "./gradlew :maintenance-notifications:backend:release -Prelease.useAutomaticVersion=true -Prelease.releaseVersion=${newVersion} -Prelease.newVersion=${newSnapshot}"
}
}
}
}
}
and here is my Dockerfile
FROM centos:centos7
ENV container=docker
RUN mkdir -p /usr/java; curl http://configuration/yum/thecloud/artifacts/java/jdk-8u151-linux-x64.tar.gz|tar zxC /usr/java && ln -s /usr/java/jdk1.8.0_151/bin/j* /usr/bin
RUN mkdir -p /usr/mongodb; curl http://configuration/yum/thecloud/artifacts/mongodb/mongodb-linux-x86_64-3.4.10.tgz|tar zxC /usr/mongodb && ln -s /usr/mongodb/mongodb-linux-x86_64-3.4.10/bin/* /usr/bin
ENV JAVA_HOME /usr/java/jdk1.8.0_151/
ENV SPRING_PROFILES_ACTIVE jenkins
RUN yum -y install git.x86_64 && yum clean all
# Set up directory requirements
RUN mkdir -p /data/db /var/log/mongodb /var/run/mongodb
VOLUME ["/data/db", "/var/log/mongodb"]
# Expose port 27017 from the container to the host
EXPOSE 27017
CMD ["--port", "27017", "--pidfilepath", "/var/run/mongodb/mongod.pid"]
# Start mongodb
ENTRYPOINT /usr/bin/mongod
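One way to keep Gradle and the tests on the agent while only MongoDB runs in a container is the Docker Pipeline plugin's withRun, which starts the container, runs the body on the host, and tears the container down afterwards. A sketch, assuming the stock mongo:3.4 image is an acceptable stand-in for the custom Dockerfile and that the agent can reach the mapped port:
script {
    docker.image('mongo:3.4').withRun('-p 27017:27017') { mongo ->
        // Gradle runs on the agent; tests connect to localhost:27017
        sh './gradlew :maintenance-notifications:backend:check :maintenance-notifications:backend:test'
    }
}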
I have a git repo with the Jenkins pipeline and the official PostgreSQL template:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "postgresql-pipeline"
spec:
strategy:
jenkinsPipelineStrategy:
jenkinsfile: |-
pipeline {
agent any
environment {
DATABASE_NAME = 'sampledb'
DATABASE_USER = 'root'
DATABASE_PASSWORD = 'root'
}
stages {
stage('Clone git') {
steps {
git 'https://bitbucket.org/businnessdata_db/postgresql-test.git'
}
}
stage('Deploy db') {
steps {
sh 'oc status'
sh 'oc delete secret/postgresql'
sh 'oc delete pvc/postgresql'
sh 'oc delete all -l "app=postgresql-persistent"'
sh 'oc new-app -f openshift/templates/postgresql-persistent.json'
}
}
stage('Execute users script') {
steps {
sh 'oc status'
}
}
stage('Execute update script') {
steps {
sh 'oc status'
}
}
}
}
type: JenkinsPipeline
What do I have to put in the last 2 steps to run a script against the newly generated database?
You can either install psql in your Jenkins container and run the script through a shell command:
sh """
export PGPASSWORD=<password>
psql -h <host> -d <database> -U <user_name> -p <port> -a -w -f <file>.sql
"""
Or, since Jenkinsfiles are written in Groovy, use Groovy to execute your statements. Here's the Groovy documentation for working with databases.
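A minimal sketch of the Groovy approach inside a script block, assuming the PostgreSQL JDBC driver is available on the Jenkins controller classpath, that an administrator has approved the required signatures in the script security sandbox, and that scripts/users.sql is a hypothetical path in the cloned repo:
script {
    def db = groovy.sql.Sql.newInstance(
        "jdbc:postgresql://postgresql:5432/${env.DATABASE_NAME}",  // 'postgresql' host is an assumption; use the template's service name
        env.DATABASE_USER, env.DATABASE_PASSWORD,
        'org.postgresql.Driver')
    try {
        db.execute(readFile('scripts/users.sql'))  // hypothetical script path
    } finally {
        db.close()
    }
}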
I'm setting up a Jenkins pipeline with Kubernetes; there is an option to set environment variables for a container in containerTemplate. Is there an option to override those values in container, i.e.:
container(
name: 'my-container',
envVars: [
envVar(key: $KEY, value: $VALUE)
]) {
...
}
because some variables are derived during build stages and cannot be set up in podTemplate. The example above unfortunately does not work.
Note that as of this writing as per the docs:
The container statement allows to execute commands directly into each container. This feature is considered ALPHA as there are still some problems with concurrent execution and pipeline resumption
I believe there is no such option. However, you can try setting the variables in the sh command. For example:
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
node(label) {
stage('Get a Maven project') {
git 'https://github.com/jenkinsci/kubernetes-plugin.git'
container('maven') {
stage('Build a Maven project') {
sh 'MYENV1=value1 MYENV2=value2 mvn -B clean install'
}
}
}
stage('Get a Golang project') {
git url: 'https://github.com/hashicorp/terraform.git'
container('golang') {
stage('Build a Go project') {
sh """
mkdir -p /go/src/github.com/hashicorp
ln -s `pwd` /go/src/github.com/hashicorp/terraform
MYENV1=value1 MYEVN2=value2 cd /go/src/github.com/hashicorp/terraform && make core-dev
"""
}
}
}
}
}
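If the variables are known by the time the steps run, another option that avoids prefixing every sh call is the built-in withEnv step, which scopes the variables to its block; a sketch:
container('maven') {
    withEnv(['MYENV1=value1', 'MYENV2=value2']) {
        // every step in this block sees MYENV1/MYENV2
        sh 'mvn -B clean install'
    }
}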