How to execute a database script after deploying a PostgreSQL image to OpenShift with Jenkins?

I have a Git repo with the Jenkins pipeline and the official PostgreSQL template:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "postgresql-pipeline"
spec:
strategy:
jenkinsPipelineStrategy:
jenkinsfile: |-
pipeline {
agent any
environment {
DATABASE_NAME = 'sampledb'
DATABASE_USER = 'root'
DATABASE_PASSWORD = 'root'
}
stages {
stage('Clone git') {
steps {
git 'https://bitbucket.org/businnessdata_db/postgresql-test.git'
}
}
stage('Deploy db') {
steps {
sh 'oc status'
sh 'oc delete secret/postgresql'
sh 'oc delete pvc/postgresql'
sh 'oc delete all -l "app=postgresql-persistent"'
sh 'oc new-app -f openshift/templates/postgresql-persistent.json'
}
}
stage('Execute users script') {
steps {
sh 'oc status'
}
}
stage('Execute update script') {
steps {
sh 'oc status'
}
}
}
}
type: JenkinsPipeline<code>
What do I have to put in the last two steps to run a script against the newly generated database?

You can either install psql in your Jenkins container and then run the script through a shell step:
sh """
export PGPASSWORD=<password>
psql -h <host> -d <database> -U <user_name> -p <port> -a -w -f <file>.sql
"""
Or, since Jenkinsfiles are written in Groovy, use Groovy to execute your statements. Here's the Groovy documentation for working with databases.
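For example, the two placeholder stages could stream the SQL files into psql inside the freshly deployed database pod, so nothing extra has to be installed on Jenkins. This is only a sketch: the pod label, the rollout wait, and the sql/users.sql and sql/update.sql paths are assumptions that need to match your repo and template.
stage('Execute users script') {
  steps {
    sh '''
      # wait until the new postgresql deployment has finished rolling out
      oc rollout status dc/postgresql
      # find the database pod (the "name=postgresql" label is an assumption)
      POD=$(oc get pods -l name=postgresql -o jsonpath='{.items[0].metadata.name}')
      # stream the SQL file into psql inside the pod; the template already
      # injects POSTGRESQL_USER and POSTGRESQL_DATABASE into the container
      oc exec -i "$POD" -- bash -c 'psql -U "$POSTGRESQL_USER" -d "$POSTGRESQL_DATABASE"' < sql/users.sql
    '''
  }
}
The 'Execute update script' stage would look the same with sql/update.sql, or you can follow the answer above and run psql from the Jenkins container against the postgresql service instead.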

Related

Deploy docker image from Nexus registry

I have this Jenkinsfile which I want to use to build a pipeline:
pipeline {
    agent any
    environment {
        NEXUS_VERSION = "nexus3"
        NEXUS_PROTOCOL = "http"
        NEXUS_URL = "you-ip-addr-here:8081"
        NEXUS_REPOSITORY = "maven-nexus-repo"
        NEXUS_CREDENTIAL_ID = "nexus-user-credentials"
    }
    stages {
        stage('Download Helm Charts') {
            steps {
                sh "echo 'Downloading Helm Charts from Bitbucket repository...'"
                // configure credentials under http://192.168.1.28:8080/user/test/credentials/ and put credentials ID
                // not sure do I need to point the root folder of the Helm repository or only the single chart
                checkout scmGit(
                    branches: [[name: 'master']],
                    userRemoteConfigs: [[credentialsId: 'c2672602-dfd5-4158-977c-5009065c867e',
                                         url: 'http://192.168.1.30:7990/scm/jen/helm.git']])
            }
        }
        stage('Test Kubernetes version') {
            steps {
                sh "echo 'Checking Kubernetes version..'"
                // How to do remote test of kubernetes version
            }
        }
        stage('Push Helm Charts to Kubernetes') {
            steps {
                sh "echo 'building..'"
                // push here helm chart from Jenkins server to Kubernetes cluster
            }
        }
        stage('Build Image') {
            steps {
                sh "echo 'building..'"
                // configure credentials under http://192.168.1.28:8080/user/test/credentials/ and put credentials ID
                git credentialsId: 'bitbucket-server:50001e738fa6dafbbe7e336853ced1fcbc284fb18ea8cda8b54dbfa3a7bc87b9', url: 'http://192.168.1.30:7990/scm/jen/spring-boot-microservice.git', branch: 'master'
                // execute Java -jar ... and build docker image
                sh './gradlew build && java -jar build/libs/gs-spring-boot-docker-0.1.0.jar'
                sh 'docker build -t springio/gs-spring-boot-docker .'
            }
        }
        stage('Push Image into Nexus registry') {
            steps {
                sh "echo 'building..'"
                // push compiled docker image into Nexus repository
                script {
                    pom = readMavenPom file: "pom.xml";
                    filesByGlob = findFiles(glob: "target/*.${pom.packaging}");
                    echo "${filesByGlob[0].name} ${filesByGlob[0].path} ${filesByGlob[0].directory} ${filesByGlob[0].length} ${filesByGlob[0].lastModified}"
                    artifactPath = filesByGlob[0].path;
                    artifactExists = fileExists artifactPath;
                    if (artifactExists) {
                        echo "*** File: ${artifactPath}, group: ${pom.groupId}, packaging: ${pom.packaging}, version ${pom.version}";
                        nexusArtifactUploader(
                            nexusVersion: NEXUS_VERSION,
                            protocol: NEXUS_PROTOCOL,
                            nexusUrl: NEXUS_URL,
                            groupId: pom.groupId,
                            version: pom.version,
                            repository: NEXUS_REPOSITORY,
                            credentialsId: NEXUS_CREDENTIAL_ID,
                            artifacts: [
                                [artifactId: pom.artifactId,
                                 classifier: '',
                                 file: artifactPath,
                                 type: pom.packaging],
                                [artifactId: pom.artifactId,
                                 classifier: '',
                                 file: "pom.xml",
                                 type: "pom"]
                            ]
                        );
                    } else {
                        error "*** File: ${artifactPath}, could not be found";
                    }
                }
            }
        }
        stage('Deploy Image from Nexus registry into Kubernetes') {
            steps {
                sh "echo 'building..'"
            }
        }
        stage('Test') {
            steps {
                sh "echo 'Testing...'"
                // implement a check here is it deployed successfully
            }
        }
    }
}
How can I deploy the Docker image built by the Jenkins server and pushed to the Nexus repository? If possible, I want to use a service account with a token.
Instead of using 'nexusArtifactUploader', why don't you use docker push, as you do to build the image?
I guess nexusArtifactUploader uses the Nexus API and doesn't work with Docker images, but you can access the registry using docker and the exposed port (defaults to 5000):
withCredentials([string(credentialsId: NEXUS_CREDENTIAL_ID, variable: 'registryToken')]) {
    sh 'docker login -u default -p "$registryToken" your-registry-url'
    sh 'docker push your-registry-url/image-name:image-tag'
}
You may also change the docker build command to build the image using your registry name (or tag it after building, see How to push a Docker image to a private repository).
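For the 'Deploy Image from Nexus registry into Kubernetes' stage, one option is to drive kubectl with the service-account token you mentioned. The sketch below is only an illustration: the credential ID, API server address, deployment/container names and image reference are all assumptions you would replace with your own values.
stage('Deploy Image from Nexus registry into Kubernetes') {
    steps {
        withCredentials([string(credentialsId: 'k8s-sa-token', variable: 'K8S_TOKEN')]) {
            sh '''
                # point kubectl at the cluster using the service-account token
                kubectl config set-cluster dev --server=https://your-k8s-api:6443 --insecure-skip-tls-verify=true
                kubectl config set-credentials jenkins-sa --token="$K8S_TOKEN"
                kubectl config set-context dev --cluster=dev --user=jenkins-sa
                kubectl config use-context dev
                # roll the deployment over to the image that was pushed to the Nexus registry
                kubectl set image deployment/spring-boot-microservice app=your-registry-url/springio/gs-spring-boot-docker:latest
            '''
        }
    }
}
The cluster side also needs an imagePullSecret for the Nexus registry so the kubelet can pull the image.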

INFO org.zaproxy.addon.network.ExtensionNetwork - ZAP is now listening on 0.0.0.0:8090

I have created a pipeline to use OWASP ZAP:
pipeline {
    agent any
    stages {
        stage('Execute Zap Jar') {
            steps {
                sh '''
                    java -jar /home/pl/tools/com.cloudbees.jenkins.plugins.customtools.CustomTool/OwaspZap/ZAP_2.12.0/zap-2.12.0.jar -dir "/home/pl/.ZAP" -host 0.0.0.0 -port 8090 -daemon -config api.disablekey=true
                '''
            }
        }
        stage('Execute Zap CLI') {
            steps {
                sh '''
                    export ZAP_URL=http://localhost && export ZAP_PORT=8090 && zap-cli status
                '''
            }
        }
        stage('Execute Zap Session and Zap Scan') {
            steps {
                sh '''
                    zap-cli session new && zap-cli spider https://portail-re7-test.XXXXXX.com/ && zap-cli ajax-spider https://portail-re7-test.XXXXXX.com/ && zap-cli active-scan https://portail-re7-test.XXXXXX.com/ && zap session save default
                '''
            }
        }
        stage('Extract Zap Report') {
            steps {
                sh '''
                    zap-cli report -o report-default.html -f html
                '''
            }
        }
    }
}
But it is getting stuck at
7127 [ZAP-daemon] INFO org.zaproxy.addon.network.ExtensionNetwork - ZAP is now listening on 0.0.0.0:8090
Can someone please tell me what I am doing wrong?
Regards,
SAM
It looks like ZAP is acting as expected - it's been started and is listening on port 8090.
It has been started in daemon mode and so will stay running until you stop it.
FYI this is not one of the recommended ways to run ZAP - these are listed on https://www.zaproxy.org/docs/automate/
I'd recommend using the Automation Framework :)
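If you do want to keep this jar-based layout instead of switching to the Automation Framework, the first stage has to return control to the pipeline; otherwise the sh step blocks on the foreground ZAP process forever. A sketch of one way to do that, reusing the path and port from the question (the 60-second polling loop is an assumption):
stage('Execute Zap Jar') {
    steps {
        sh '''
            # start ZAP in the background so this sh step can finish
            nohup java -jar /home/pl/tools/com.cloudbees.jenkins.plugins.customtools.CustomTool/OwaspZap/ZAP_2.12.0/zap-2.12.0.jar \
                -dir "/home/pl/.ZAP" -host 0.0.0.0 -port 8090 -daemon -config api.disablekey=true > zap.log 2>&1 &
            # poll the API until ZAP is actually ready before moving on
            for i in $(seq 1 12); do
                curl -s http://localhost:8090 > /dev/null && break
                sleep 5
            done
        '''
    }
}
Remember to shut the daemon down again (for example with zap-cli shutdown, or by killing the process in a post block) once the report has been extracted.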

terraform plan recreates resources on every run with terraform cloud backend

I am running into an issue where terraform plan recreates resources that don't need to be recreated every run. This is an issue because some of the steps depend on those resources being available, and since they are recreated with each run, the script fails to complete.
My setup is Github Actions, Linode LKE, Terraform Cloud.
My main.tf file looks like this:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}

provider "linode" {
}

provider "helm" {
  debug = true
  kubernetes {
    config_path = "${local_file.kubeconfig.filename}"
  }
}

resource "linode_lke_cluster" "lke_cluster" {
  label       = "MY-LABEL-HERE"
  k8s_version = "1.21"
  region      = "us-central"
  pool {
    type  = "g6-standard-2"
    count = 3
  }
}
and my outputs.tf file
resource "local_file" "kubeconfig" {
depends_on = [linode_lke_cluster.lke_cluster]
filename = "kube-config"
# filename = "${path.cwd}/kubeconfig"
content = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}
resource "helm_release" "ingress-nginx" {
# depends_on = [local_file.kubeconfig]
depends_on = [linode_lke_cluster.lke_cluster, local_file.kubeconfig]
name = "ingress"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
}
resource "null_resource" "custom" {
depends_on = [helm_release.ingress-nginx]
# change trigger to run every time
triggers = {
build_number = "${timestamp()}"
}
# download kubectl
provisioner "local-exec" {
command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
}
# apply changes
provisioner "local-exec" {
command = "./kubectl apply -f ./k8s/ --kubeconfig ${local_file.kubeconfig.filename}"
}
}
In Github Actions, I'm running these steps:
jobs:
  init-terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./terraform
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: 'privatebeta-kubes'
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
      - name: Terraform Init
        run: terraform init
      - name: Terraform Format Check
        run: terraform fmt -check -v
      - name: List terraform state
        run: terraform state list
      - name: Terraform Plan
        run: terraform plan
        id: plan
        env:
          LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
When I look at the results of terraform state list I can see my resources:
Run terraform state list
terraform state list
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin state list
helm_release.ingress-nginx
linode_lke_cluster.lke_cluster
local_file.kubeconfig
null_resource.custom
But my terraform plan fails and the issue seems to stem from the fact that those resources try to get recreated.
Run terraform plan
terraform plan
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
LINODE_TOKEN: ***
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
Waiting for the plan to start...
Terraform v1.0.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
linode_lke_cluster.lke_cluster: Refreshing state... [id=31946]
local_file.kubeconfig: Refreshing state... [id=fbb5520298c7c824a8069397ef179e1bc971adde]
helm_release.ingress-nginx: Refreshing state... [id=ingress]
╷
│ Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory
│
│ with helm_release.ingress-nginx,
│ on outputs.tf line 8, in resource "helm_release" "ingress-nginx":
│ 8: resource "helm_release" "ingress-nginx" {
Is there a way to tell terraform it doesn't need to recreate those resources?
Regarding the actual error shown, Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory, which is referencing your outputs file, I found this issue which could help with your specific error: https://github.com/hashicorp/terraform-provider-helm/issues/418
One other thing looks strange to me: why does your outputs.tf define 'resources' and not 'outputs'? Shouldn't your outputs.tf look like this?
output "local_file_kubeconfig" {
value = "reference.to.resource"
}
Also I see your state file / backend config looks like it's properly configured.
I recommend logging into your terraform cloud account to verify that the workspace is indeed there, as expected. It's the state file that tells terraform not to re-create the resources it manages.
If the resources are already there and terraform is trying to re-create them, that could indicate that those resources were created prior to using terraform or possibly within another terraform cloud workspace or plan.
Did you end up renaming your backend workspace at any point with this plan? I'm referring to your main.tf file, this part where it says MY-WORKSPACE-HERE:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}
Unfortunately I am not a Kubernetes expert, so possibly more help is needed there.

run my test in docker mongo instance using jenkins pipeline

I would like to run my tests against a Docker MongoDB instance using a Jenkins pipeline. I have got it kind of working. My problem is that the tests are running within the Mongo container. I just want it to spin up a container and have my tests connect to that Mongo container. At the moment it downloads Gradle within the container and takes about 5 minutes to run. Hope that makes sense. Here is my Jenkinsfile:
#!/usr/bin/env groovy
pipeline {
    environment {
        SPRING_PROFILES_ACTIVE = "jenkins"
    }
    agent {
        node {
            label "jdk8"
        }
    }
    parameters {
        choice(choices: 'None\nBuild\nMinor\nMajor', description: '', name: 'RELEASE_TYPE')
        string(defaultValue: "refs/heads/master:refs/remotes/origin/master", description: 'gerrit refspec e.g. refs/changes/45/12345/1', name: 'GERRIT_REFSPEC')
        choice(choices: 'master\nFETCH_HEAD', description: 'gerrit branch', name: 'GERRIT_BRANCH')
    }
    stages {
        stage("Test") {
            stages {
                stage("Initialise") {
                    steps {
                        println "Running on ${NODE_NAME}, release type: ${params.RELEASE_TYPE}"
                        println "gerrit refspec: ${params.GERRIT_REFSPEC}, branch: ${params.GERRIT_BRANCH}, event type: ${params.GERRIT_EVENT_TYPE}"
                        checkout scm
                        sh 'git log -n 1'
                    }
                }
                stage("Verify") {
                    agent {
                        dockerfile {
                            filename 'backend/Dockerfile'
                            args '-p 27017:27017'
                            label 'docker-pipeline'
                            dir './maintenance-notifications'
                        }
                    }
                    steps {
                        sh './gradlew :maintenance-notifications:backend:clean'
                        sh './gradlew :maintenance-notifications:backend:check :maintenance-notifications:backend:test'
                    }
                    post {
                        always {
                            junit 'maintenance-notifications/backend/build/test-results/**/*.xml'
                        }
                    }
                }
            }
        }
        stage("Release") {
            when {
                expression {
                    return params.RELEASE_TYPE != '' && params.RELEASE_TYPE != 'None';
                }
            }
            steps {
                script {
                    def gradleProps = readProperties file: "gradle.properties"
                    def isCurrentSnapshot = gradleProps.version.endsWith("-SNAPSHOT")
                    def newVersion = gradleProps.version.replace("-SNAPSHOT", "")
                    def cleanVersion = newVersion.tokenize(".").collect { it.toInteger() }
                    if (params.RELEASE_TYPE == 'Build') {
                        newVersion = "${cleanVersion[0]}.${cleanVersion[1]}.${isCurrentSnapshot ? cleanVersion[2] : cleanVersion[2] + 1}"
                    } else if (params.RELEASE_TYPE == 'Minor') {
                        newVersion = "${cleanVersion[0]}.${cleanVersion[1] + 1}.0"
                    } else if (params.RELEASE_TYPE == 'Major') {
                        newVersion = "${cleanVersion[0] + 1}.0.0"
                    }
                    def newVersionArray = newVersion.tokenize(".").collect { it.toInteger() }
                    def newSnapshot = "${newVersionArray[0]}.${newVersionArray[1]}.${newVersionArray[2] + 1}-SNAPSHOT"
                    println "release version: ${newVersion}, snapshot version: ${newSnapshot}"
                    sh "./gradlew :maintenance-notifications:backend:release -Prelease.useAutomaticVersion=true -Prelease.releaseVersion=${newVersion} -Prelease.newVersion=${newSnapshot}"
                }
            }
        }
    }
}
and here is my Dockerfile
FROM centos:centos7
ENV container=docker
RUN mkdir -p /usr/java; curl http://configuration/yum/thecloud/artifacts/java/jdk-8u151-linux-x64.tar.gz|tar zxC /usr/java && ln -s /usr/java/jdk1.8.0_151/bin/j* /usr/bin
RUN mkdir -p /usr/mongodb; curl http://configuration/yum/thecloud/artifacts/mongodb/mongodb-linux-x86_64-3.4.10.tgz|tar zxC /usr/mongodb && ln -s /usr/mongodb/mongodb-linux-x86_64-3.4.10/bin/* /usr/bin
ENV JAVA_HOME /usr/java/jdk1.8.0_151/
ENV SPRING_PROFILES_ACTIVE jenkins
RUN yum -y install git.x86_64 && yum clean all
# Set up directory requirements
RUN mkdir -p /data/db /var/log/mongodb /var/run/mongodb
VOLUME ["/data/db", "/var/log/mongodb"]
# Expose port 27017 from the container to the host
EXPOSE 27017
CMD ["--port", "27017", "--pidfilepath", "/var/run/mongodb/mongod.pid"]
# Start mongodb
ENTRYPOINT /usr/bin/mongod
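What the question describes - Gradle and the tests running on the Jenkins agent while only MongoDB runs in a container - is usually done with the Docker Pipeline plugin's sidecar pattern rather than a dockerfile agent. A rough sketch only, assuming a docker-capable node, a stock mongo image and that the backend reads its connection from the standard Spring environment variable (all assumptions, not taken from the question):
stage("Verify") {
    agent { label 'docker-pipeline' }
    steps {
        script {
            // start MongoDB as a throwaway sidecar container; it is stopped and removed when the block ends
            docker.image('mongo:3.4').withRun('-p 27017:27017') { c ->
                // the tests run on the agent itself and connect to the published port
                sh 'SPRING_DATA_MONGODB_URI=mongodb://localhost:27017/test ./gradlew :maintenance-notifications:backend:check :maintenance-notifications:backend:test'
            }
        }
    }
}
This keeps the custom CentOS/Mongo Dockerfile out of the test path entirely, so Gradle and the JDK come from the agent instead of being downloaded into the container on every run.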

How to set up envVars in container in Jenkins pipeline with Kubernetes plugin

I'm setting up a Jenkins pipeline with Kubernetes. There is an option to set environment variables for a container in containerTemplate. Is there some option to override those values in container, i.e.:
container(
    name: 'my-container',
    envVars: [
        envVar(key: $KEY, value: $VALUE)
    ]) {
    ...
}
because some variables are derived during build stages and cannot be set up in podTemplate. The example above unfortunately does not work.
Note that, as of this writing, as per the docs:
The container statement allows to execute commands directly into each container. This feature is considered ALPHA as there are still some problems with concurrent execution and pipeline resumption
I believe there is not an option. However, you can try setting the variables in the sh command. For example:
def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
node(label) {
stage('Get a Maven project') {
git 'https://github.com/jenkinsci/kubernetes-plugin.git'
container('maven') {
stage('Build a Maven project') {
sh 'MYENV1=value1 MYEVN2=value2 mvn -B clean install'
}
}
}
stage('Get a Golang project') {
git url: 'https://github.com/hashicorp/terraform.git'
container('golang') {
stage('Build a Go project') {
sh """
mkdir -p /go/src/github.com/hashicorp
ln -s `pwd` /go/src/github.com/hashicorp/terraform
MYENV1=value1 MYEVN2=value2 cd /go/src/github.com/hashicorp/terraform && make core-dev
"""
}
}
}
}
}
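Another option that is not specific to the Kubernetes plugin is the core withEnv step, which scopes variables to a block and generally also works inside container(...). A short sketch (the variable names and the computedValue placeholder are assumptions, mirroring the example above):
container('maven') {
    // computedValue would be whatever you derived in an earlier stage
    def computedValue = 'value1'
    withEnv(["MYENV1=${computedValue}", 'MYENV2=value2']) {
        sh 'mvn -B clean install'
    }
}
This avoids prefixing every sh call, and the variables are visible to anything the shell starts inside that container.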