Unable to authenticate a _csrf-token-based web application with OWASP ZAP's zap-full-scan.py in Docker

I am using zap-full-scan.py from the docker.io/owasp/zap2docker-weekly image to scan a target.
Below is the Jenkins pipeline script:
pipeline {
    agent { label 'master' }
    stages {
        stage ('Pull Docker Image') {
            steps {
                sh 'docker images'
            }
        }
        stage ('Scan Host') {
            steps {
                catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                    sh """sudo docker run -v /home/root/workspace/ZAP_docker/Reports:/zap/wrk/:rw -t docker.io/owasp/zap2docker-weekly zap-full-scan.py \
                        -I -j -m 30 -T 60 \
                        -t https://${params.IP}:${params.PORT}/webui.* \
                        -d -r testreport.html \
                        -g gen.conf \
                        -z "auth.loginurl=https://${params.IP}:${params.PORT}/webui/login \
                            auth.username=${params.USERNAME} \
                            auth.password=${params.PASSWORD} \
                            auth.username_field=username \
                            auth.password_field=password \
                            auth.submit_field=login-btn"
                    """
                }
            }
        }
        stage ('Publish Report') {
            steps {
                archiveArtifacts artifacts: 'logs/*, Reports/*', followSymlinks: false
            }
        }
        stage ('Send a Report') {
            steps {
                office365ConnectorSend message: 'ZAP PROXY Report Alert!!!!', webhookUrl: 'dfdfDF'
            }
        }
    }
}
Below are my questions:
1. How can I tell whether ZAP has actually logged into the application, or whether the login/authentication has failed?
2. My application uses CSRF-token-based authentication; the login POST body looks like this:
username=abc&password=abc&_csrf=4334345-435345354-435345345-2a-02c38a3d9e20
Right now, ZAP is not scanning with login auth.
I am able to perform an end-to-end scan in the OWASP ZAP desktop application; however, I want to automate this with a Jenkins pipeline.

-z is for ZAP config options, which do not include the following:
-z "auth.loginurl=https://${params.IP}:${params.PORT}/webui/login \
    auth.username=${params.USERNAME} \
    auth.password=${params.PASSWORD} \
    auth.username_field=username \
    auth.password_field=password \
    auth.submit_field=login-btn"
^ Those auth.* options appear to be ICTU-specific, and they don't include any logged-in or logged-out indicator ...
You could properly set up your context via the ZAP GUI, then export it and use it in your scan with these options:
-n context_file context file which will be loaded prior to scanning the target
-U user username to use for authenticated scans - must be defined in the given context file (post 2.9.0)
Or you could configure things via the ZAP Web API and use custom scan hooks to leverage it.
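For example, here is a minimal sketch of the context-based approach inside your existing sh step. It assumes a context file exported from the ZAP GUI as webui.context (a hypothetical name) with the user and a logged-in indicator regex defined, placed in the mounted Reports directory so ZAP sees it under /zap/wrk:
sudo docker run -v /home/root/workspace/ZAP_docker/Reports:/zap/wrk/:rw \
    -t docker.io/owasp/zap2docker-weekly zap-full-scan.py \
    -t https://${params.IP}:${params.PORT}/webui \
    -n webui.context \
    -U ${params.USERNAME} \
    -r testreport.html
The logged-in indicator in the context is what lets ZAP decide whether it is authenticated, which also gives you a way to answer the first question.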

Related

Detect if argo workflow is given unused parameters

My team runs into a recurring issue: if we misspell a parameter for our Argo workflows, that parameter gets ignored without error. For example, say I run the following submission command, where the true (optional) parameter is validation_data_config:
argo submit --from workflowtemplate/training \
-p output=$( artifacts resolve-url $BUCKET $PROJECT $TAG) \
-p tag=${TAG} \
-p training_config="$( yq . training.yaml )" \
-p training_data_config="$( yq . train-data.yaml )" \
-p validation-data-config="$( yq . validation-data.yaml )" \
-p wandb-project="cyclegan_c48_to_c384" \
-p cpu=4 \
-p memory="15Gi" \
--name "${NAME}" \
--labels "project=${PROJECT},experiment=${EXPERIMENT},trial=${TRIAL}"
The validation configuration is ignored and the job runs without validation metrics, because I used hyphens instead of underscores.
I understand the parameters should use consistent hyphen/underscore naming, but we've also had this happen with e.g. the "memory" parameter.
Is there any way to detect this automatically, to have the submission error out if a parameter is unused, or even to get a list of parameters for a given workflow template so I can write such detection myself?
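One possible starting point for the last part: a WorkflowTemplate is a Kubernetes resource, so its declared parameter names can be read with kubectl and diffed against the names you are about to pass. A rough sketch, assuming kubectl and jq are available and the template is named training (the parameter names below are copied from the command above):
#!/usr/bin/env bash
# Read the parameter names declared on the WorkflowTemplate.
declared=$(kubectl get workflowtemplate training -o json \
    | jq -r '.spec.arguments.parameters[]?.name')

# Fail fast on any parameter name the template does not declare.
for p in output tag training_config training_data_config validation-data-config wandb-project cpu memory; do
    grep -qxF "$p" <<<"$declared" || { echo "unknown parameter: $p" >&2; exit 1; }
done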

Enable multiple audience in keycloak via kcadm

I have a microservice ecosystem, and all users interacting with it need to authenticate against a Keycloak installation and receive a JWT token.
All is fine; I enabled audience support using this snippet:
/opt/jboss/keycloak/bin/kcadm.sh \
create clients/d3170ee6-7778-413b-8f41-31479bdb2166/protocol-mappers/models -r your-realm \
-s name=audience-mapping \
-s protocol=openid-connect \
-s protocolMapper=oidc-audience-mapper \
-s config.\"included.client.audience\"="your-audience" \
-s config.\"access.token.claim\"="true" \
-s config.\"id.token.claim\"="false"
as described here: Add protocol-mapper to keycloak using kcadm.sh
That works fine. My problem is: how do I enable multiple values for the audience? I would like to allow the same user to use two different services with the same token, where each service expects a different audience.
The token should look like:
{
    "aud": [
        "audience-1",
        "audience-2"
    ]
}
Where audience-1 is the audience expected by the first service and audience-2 is the one expected by the 2nd service.
Is it even possible to do that via command line?
I think I may have found the answer. Or at least it worked for me:
kcadm.sh create clients/CLIENT_ID/protocol-mappers/models -r REALM_NAME \
-s name=audience-mapping \
-s protocol=openid-connect \
-s protocolMapper=oidc-audience-mapper \
-s config.\"included.client.audience\"="audience" \
-s config.\"access.token.claim\"=\"true\" \
-s config.\"id.token.claim\"=\"false\"

Google Cloud Endpoint Error when creating service config

I am trying to configure Google Cloud Endpoints using Cloud Functions, following the instructions from: https://cloud.google.com/endpoints/docs/openapi/get-started-cloud-functions
I have followed the steps given and have come to the point of building the service config into a new ESPv2 Beta Docker image. When I run the command:
chmod +x gcloud_build_image
./gcloud_build_image -s CLOUD_RUN_HOSTNAME \
    -c CONFIG_ID -p ESP_PROJECT_ID
after replacing the hostname, config ID, and project ID, I get the following error:
> -c service-host-name-xxx -p project-id
Using base image: gcr.io/endpoints-release/endpoints-runtime-serverless:2
++ mktemp -d /tmp/docker.XXXX
+ cd /tmp/docker.5l3t
+ gcloud endpoints configs describe service-host-name-xxx.run.app --project=project-id --service=service-host-name-xxx.app --format=json
ERROR: (gcloud.endpoints.configs.describe) NOT_FOUND: Service configuration 'services/service-host-name-xxx.run.app/configs/service-host-name-xxx' not found.
+ error_exit 'Failed to download service config'
+ echo './gcloud_build_image: line 46: Failed to download service config (exit 1)'
./gcloud_build_image: line 46: Failed to download service config (exit 1)
+ exit 1
Any idea what I am doing wrong? Thanks.
My bad. I repeated the steps and got it working, so I guess I made some mistake while trying it out. The document works as it states.
I had the same error. When running the script twice, it works. This is because you must already have a service endpoint configured, which does not exist yet the first time the script tries to fetch the endpoint information with:
gcloud endpoints configs describe service-host-name-xxx.run.app
What I would do (in Cloud Build) is supply some sort of "empty" container first. I used the following example at the top of my cloudbuild.yaml:
gcloud run services list \
--platform managed \
--project ${PROJECT_ID} \
--region europe-west1 \
--filter=${PROJECT_ID}-esp-svc \
--format yaml | grep . ||
gcloud run deploy ${PROJECT_ID}-esp-svc \
--image="gcr.io/endpoints-release/endpoints-runtime-serverless:2" \
--allow-unauthenticated \
--platform managed \
--project=${PROJECT_ID} \
--region=europe-west1 \
--timeout=120

MongoDB Atlas: is it safe to whitelist all IPs, since anyone attempting to access the database still needs a password?

I have a Google App Engine app running my Express server, and my DB is in MongoDB Atlas. I currently have MongoDB Atlas whitelisting all IPs. The connection string is in the code of my Express server running on Google Cloud, so presumably any attacker trying to get into the database would still need the username and password from the connection string.
Is it safe to do this?
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
Is it safe to do this?
"Safe" is a relative term. It is safer than having an unauthed database open to the internet, but the weakest link is now your password.
A whitelist is an additional layer of security, so that if someone knows or can guess your password, they can't just connect from anywhere. They must be connecting from a set of known IP addresses. This makes the attack surface smaller, so the database is less likely to be broken into by a random person in the internet.
If it's not safe, then how do I whitelist my google app engine on Mongo Atlas?
You would need to determine the IP ranges of your application and add that range to the whitelist.
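As a starting point, Google publishes its cloud IP ranges in a machine-readable file. A hedged sketch for extracting candidate CIDRs; the scope value us-central1 is an assumption, so substitute your App Engine region:
# Fetch Google Cloud's published IP ranges and print the IPv4 CIDRs for one region.
curl -s https://www.gstatic.com/ipranges/cloud.json \
    | jq -r '.prefixes[] | select(.scope == "us-central1") | .ipv4Prefix // empty'
These ranges are broad and change over time, which is why the per-instance approach in the next answer can be more practical.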
Here is an answer I left elsewhere; hope it helps someone who comes across this.
This script will be kept up to date on my gist.
why
Mongo Atlas provides reasonably priced access to a managed Mongo DB. The CSPs where containers are hosted charge too much for their managed Mongo DBs, and they all suggest setting an insecure CIDR (0.0.0.0/0) to allow the container to access the cluster, which is obviously ridiculous.
This entrypoint script is surgical, to maintain least-privileged access: only the current hosted IP address of the service is whitelisted.
usage
set it as the entrypoint of the Dockerfile
run it in cloud-init / VM startup if not using a container (and delete the last line, exec "$@", since that is only needed for containers)
behavior
uses the Mongo Atlas project IP access list API endpoints
detects the hosted IP address of the container and whitelists it with the cluster via the Mongo Atlas API
if the service has no whitelist entry, one is created
if the service has an existing whitelist entry that matches the current IP, nothing changes
if the service IP has changed, the old entry is deleted and a new one is created
when a whitelist entry is created, the service sleeps for 60s to wait for Atlas to propagate access to the cluster
env
setup
1. create an API key for the org
2. add the API key to the project
3. copy the public key (MONGO_ATLAS_API_PK) and secret key (MONGO_ATLAS_API_SK)
4. go to the project settings page and copy the project ID (MONGO_ATLAS_API_PROJECT_ID)
provide the following values in the env of the container service:
SERVICE_NAME: unique name used for creating / updating (deleting old) the whitelist entry
MONGO_ATLAS_API_PK: step 3
MONGO_ATLAS_API_SK: step 3
MONGO_ATLAS_API_PROJECT_ID: step 4
deps
bash
curl
jq CLI JSON parser
# alpine / apk
apk update \
&& apk add --no-cache \
bash \
curl \
jq
# ubuntu / apt
export DEBIAN_FRONTEND=noninteractive \
&& apt-get update \
&& apt-get -y install \
bash \
curl \
jq
script
#!/usr/bin/env bash

# -- ENV -- #
# these must be available to the container service at runtime
#
# SERVICE_NAME
#
# MONGO_ATLAS_API_PK
# MONGO_ATLAS_API_SK
# MONGO_ATLAS_API_PROJECT_ID
#
# -- ENV -- #

set -e

mongo_api_base_url='https://cloud.mongodb.com/api/atlas/v1.0'

check_for_deps() {
    deps=(
        bash
        curl
        jq
    )
    for dep in "${deps[@]}"; do
        if [ ! "$(command -v $dep)" ]
        then
            echo "dependency [$dep] not found. exiting"
            exit 1
        fi
    done
}

make_mongo_api_request() {
    local request_method="$1"
    local request_url="$2"
    local data="$3"

    curl -s \
        --user "$MONGO_ATLAS_API_PK:$MONGO_ATLAS_API_SK" --digest \
        --header "Accept: application/json" \
        --header "Content-Type: application/json" \
        --request "$request_method" "$request_url" \
        --data "$data"
}

get_access_list_endpoint() {
    echo -n "$mongo_api_base_url/groups/$MONGO_ATLAS_API_PROJECT_ID/accessList"
}

get_service_ip() {
    echo -n "$(curl https://ipinfo.io/ip -s)"
}

get_previous_service_ip() {
    local access_list_endpoint=`get_access_list_endpoint`
    local previous_ip=`make_mongo_api_request 'GET' "$access_list_endpoint" \
        | jq --arg SERVICE_NAME "$SERVICE_NAME" -r \
            '.results[]? as $results | $results.comment | if test("\\[\($SERVICE_NAME)\\]") then $results.ipAddress else empty end'`

    echo "$previous_ip"
}

whitelist_service_ip() {
    local current_service_ip="$1"
    local comment="Hosted IP of [$SERVICE_NAME] [set#$(date +%s)]"

    if (( "${#comment}" > 80 )); then
        echo "comment field value will be above 80 char limit: \"$comment\""
        echo "comment would be too long due to length of service name [$SERVICE_NAME] [${#SERVICE_NAME}]"
        echo "change comment format or service name then retry. exiting to avoid mongo API failure"
        exit 1
    fi

    echo "whitelisting service IP [$current_service_ip] with comment value: \"$comment\""

    response=`make_mongo_api_request \
        'POST' \
        "$(get_access_list_endpoint)?pretty=true" \
        "[
            {
                \"comment\" : \"$comment\",
                \"ipAddress\": \"$current_service_ip\"
            }
        ]" \
        | jq -r 'if .error then . else empty end'`

    if [[ -n "$response" ]];
    then
        echo 'API error whitelisting service'
        echo "$response"
        exit 1
    else
        echo "whitelist request successful"
        echo "waiting 60s for whitelist to propagate to cluster"
        sleep 60s
    fi
}

delete_previous_service_ip() {
    local previous_service_ip="$1"

    echo "deleting previous service IP address of [$SERVICE_NAME]"

    make_mongo_api_request \
        'DELETE' \
        "$(get_access_list_endpoint)/$previous_service_ip"
}

set_mongo_whitelist_for_service_ip() {
    local current_service_ip=`get_service_ip`
    local previous_service_ip=`get_previous_service_ip`

    if [[ -z "$previous_service_ip" ]]; then
        echo "service [$SERVICE_NAME] has not yet been whitelisted"
        whitelist_service_ip "$current_service_ip"
    elif [[ "$current_service_ip" == "$previous_service_ip" ]]; then
        echo "service [$SERVICE_NAME] IP has not changed"
    else
        echo "service [$SERVICE_NAME] IP has changed from [$previous_service_ip] to [$current_service_ip]"
        delete_previous_service_ip "$previous_service_ip"
        whitelist_service_ip "$current_service_ip"
    fi
}

check_for_deps
set_mongo_whitelist_for_service_ip

# run CMD
exec "$@"

Add provider to User federation in RedHat SSO/keycloak using CLI

I have a custom provider created and deployed.
Now I go to User Federation, select my provider from the drop-down, and add it using the UI, which works fine.
Can someone please let me know how to do the same using the CLI, as I want to automate this manual process?
This worked for me:
kcadm.bat create user-federation/instances -r Test1 \
-s providerName=tatts-asg-authentication \
-s priority=0 \
-s config.debug=false
This is what works for Keycloak 3.4.3:
kcadm.bat create components -x -r MyRealm \
-s providerType=org.keycloak.storage.UserStorageProvider \
-s name=my-provider \
-s parentId=MyRealm \
-s providerId=my-provider \
-s 'config.path=["C:\\path\\to\\properties"]' \
-s 'config.priority=["0"]'
user-federation/instances has been replaced with components: https://issues.jboss.org/browse/KEYCLOAK-6583
The -x option is to output the stacktrace on error.
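To double-check that the component was created, here is a hedged sketch using the same CLI; the -q query against the components endpoint is an assumption (it may vary by Keycloak version), and you can always fetch all components and filter client-side instead:
kcadm.bat get components -r MyRealm -q type=org.keycloak.storage.UserStorageProvider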