When starting a scan with docker run -t owasp/zap2docker-stable zap-baseline.py -t https://10.1.2.3/zapwave, the spider follows links back to the root URL https://10.1.2.3 and continues to scan out-of-scope items, e.g. /ghost, /mono, /webgoat.
Is it possible to limit the scan scope to just the specified directory or below? The web application in this example is OWASP DVWA.
You haven't passed it a context file, which is what it would use to constrain the scan to a scope.
The baseline scan documentation is here:
https://github.com/zaproxy/zaproxy/wiki/ZAP-Baseline-Scan
In particular you probably care about:
-n context_file context file which will be loaded prior to spidering the target
You should probably also have a look at:
https://blog.mozilla.org/security/2017/01/25/setting-a-baseline-for-web-security-controls/
Also it's probably easiest to configure, test, and export the context from the GUI before trying to use it with automation.
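For example, assuming you have exported a context file named zapwave.context whose included scope is something like https://10.1.2.3/zapwave.*, you could mount the directory containing it into the container and pass it with -n (the file name is an assumption; /zap/wrk is the working directory the ZAP docker images expect for mounted files):
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t https://10.1.2.3/zapwave -n /zap/wrk/zapwave.context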
At my job we use Sun Grid Engine (qstat, qsub, etc.).
Is there a way to see the percentage of resources currently used by each user? I know there is qhost -u "*", but this is a bit more difficult to interpret because it doesn't show how many resources are being used relative to what is available.
If this is out of scope for SO then I will remove.
Are there any built-in tools that do this, or public scripts on GitHub that can achieve this functionality?
The command qstat -u "*" -nenv -j "*" outputs job details, including a line with the job's usage:
usage 1: wallclock=44:12:05:42, cpu=1:10:40:01, mem=9284973.79642 GBs, io=631.16018 GB, iow=65.130 s, ioops=22213570, vmem=284.719M, maxvmem=65.121G, rss=14.435M, ..., maxrss=61.611G, maxpss=68.641G
I am not aware of a public script that would parse it and cross reference the output of qhost to retrieve hosts resources.
I think I should be working on this over the weekend. :)
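As a rough starting point, a small shell sketch like the one below could show each user's running slots as a percentage of all slots known to qhost. The column positions (user in column 4 and slots in column 9 of qstat output, NCPU in column 3 of qhost output) and the header layouts are assumptions that may differ between Grid Engine versions:
#!/bin/sh
# Total slots across all hosts; skip qhost's header, separator,
# and "global" lines, and guard against "-" placeholders.
total=$(qhost | awk 'NR > 3 && $3 != "-" { sum += $3 } END { print sum }')
# Sum slots of running jobs per user, then print each user's share.
qstat -u "*" -s r | awk -v total="$total" '
  NR > 2 { slots[$4] += $9 }
  END { for (u in slots) printf "%-12s %6.2f%%\n", u, 100 * slots[u] / total }'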
I am working in a GCP lab (Securing Google Cloud with CFT Scorecard). All instructions for the lab are given.
First, I have to run the following two commands to set environment variables:
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
export CAI_BUCKET_NAME=cai-$GOOGLE_PROJECT
In the second command given above, I don't know what I should replace with my own credentials. Maybe that is the reason I am getting an error.
Now I have to enable the "cloudasset.googleapis.com" gcloud service. For this they gave the following command.
gcloud services enable cloudasset.googleapis.com \
--project $GOOGLE_PROJECT
The error for this is shown in the screenshot attached herewith:
Error in the service enabling command
The next step is to clone the policy library. The given command for that is:
git clone https://github.com/forseti-security/policy-library.git
After that they said: "You realize Policy Library enforces policies that are located in the policy-library/policies/constraints folder, in which case you can copy a sample policy from the samples directory into the constraints directory".
and gave this command:
cp policy-library/samples/storage_blacklist_public.yaml policy-library/policies/constraints/
On running this command I received this:
error on running the directory command
Finally they said "Create the bucket that will hold the data that Cloud Asset Inventory (CAI) will export" and gave the following command:
gsutil mb -l us-central1 -p $GOOGLE_PROJECT gs://$CAI_BUCKET_NAME
I am confused about where I should substitute my own credentials; for example, in place of the project ID I wrote my own project ID.
I also don't know why these errors are occurring. Kindly help me.
I'm unable to access the tutorial.
What happens if you run the following:
echo ${DEVSHELL_PROJECT_ID}
I suspect you'll get an empty result because I think this environment variable isn't actually set.
I think it should be:
echo ${DEVSHELL_GCLOUD_CONFIG}
Does that return a result?
If so, perhaps try using that variable instead:
export GOOGLE_PROJECT=${DEVSHELL_GCLOUD_CONFIG}
export CAI_BUCKET_NAME=cai-${GOOGLE_PROJECT}
It's not entirely clear to me why this tutorial is using this approach but, if the above works, it may get you further along.
Were you asked to create a Google Cloud Platform project?
As per the shared error, this seems to be because your env variable GOOGLE_PROJECT is not set. You can verify it by using echo $GOOGLE_PROJECT and seeing whether it returns the project ID or not. You could also use echo $DEVSHELL_PROJECT_ID. If that returns the project ID and the former doesn't, it means that you didn't export the variable as stated at the beginning.
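For instance, both checks at once, in runnable form:
echo "GOOGLE_PROJECT=$GOOGLE_PROJECT"
echo "DEVSHELL_PROJECT_ID=$DEVSHELL_PROJECT_ID"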
If the problem is that GOOGLE_PROJECT doesn't have any value, there are different approaches on how to solve it.
Set the env variable as you explained at the beginning. Obviously this will only work if the variable DEVSHELL_PROJECT_ID is also set.
export GOOGLE_PROJECT=$DEVSHELL_PROJECT_ID
Manually set the project ID into that variable. This is far from ideal, because Qwiklabs creates a new temporary project for every lab, so this would only have worked while you were still on that project. The project ID can be seen on both of your shared screenshots.
export GOOGLE_PROJECT=qwiklabs-gcp-03-c6e1787dc09e
Avoid using the argument --project. According to the documentation, this argument is optional; if it is omitted, the command uses the default project from your configuration settings. You can get the current project with:
gcloud config get-value project
If the previous command matches the project ID you want to use, you can simply issue the following command:
gcloud services enable cloudasset.googleapis.com
Notice that the project ID is not being explicitly mentioned using --project.
Regarding your issue with the GitHub file, I have checked the repository and the file storage_blacklist_public.yaml doesn't seem to be in the directory policy-library/samples. There is a trace that it was once there, but it isn't anymore; they should probably update the lab.
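If you want to check which storage-related samples your clone actually contains (the grep pattern here is just a guess at the naming), you could list the directory:
ls policy-library/samples/ | grep -i storage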
About your credentials confusion: you don't have to use your own project ID, just the one given in your lab. If I recall properly, all the needed data should be on the left side of the lab. Still, you shouldn't need to authenticate in a normal situation, as you are already logged into your temporary project if you are accessing it from the Cloud Shell, which is where you should be doing all this.
Adding this for the later versions:
In the gcloud shell, you can set a temporary variable holding the current project ID with
PROJECT_ID="$(gcloud config get-value project)"
and then use it like:
--project ${PROJECT_ID}
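For example, applied to the service-enablement step from the lab (a sketch combining the two commands above):
PROJECT_ID="$(gcloud config get-value project)"
gcloud services enable cloudasset.googleapis.com --project ${PROJECT_ID}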
When deploying a Terraform task via Azure, the resulting logs don't display the resource symbols correctly.
2020-03-16T22:19:21.4807821Z An execution plan has been generated and is shown below.
2020-03-16T22:19:21.4808173Z Resource actions are indicated with the following symbols:
2020-03-16T22:19:21.4808433Z [32m+[0m create
2020-03-16T22:19:21.4808648Z [33m~[0m update in-place
2020-03-16T22:19:21.4808823Z [0m
2020-03-16T22:19:21.4809040Z Terraform will perform the following actions:
Wondering if it's possible to get the correct symbols instead of the 32m+, etc.
To fix those weird characters, just use the -no-color option on your plan and apply steps!
And for reference, those weird values represent terminal colors; see: https://github.com/shiena/ansicolor/blob/master/README.md
No-Color option: https://www.terraform.io/docs/commands/plan.html#no-color
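Applied to the two steps, that might look like this (a sketch; depending on how your Azure task is configured, you may pass these as extra command arguments instead):
terraform plan -no-color -out=tfplan
terraform apply -no-color tfplan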
You're seeing ANSI codes; setting your Git Bash terminal to support xterm-256color is one way to get these (usually invisible) codes rendered.
I am using Aspera Connect on Mac to download files from a server. It works fine in the terminal, but I was wondering if, before downloading a file, I could read its size first and then decide whether I want to download it or not. I found the flag
'--precalculate-job-size'
but it only does that calculation right before the download, and there's no way to stop the download.
The current command I use is this:
/Applications/Aspera\ Connect.app/Contents/Resources/ascp -QT -l 200M -P33001 -i "/Applications/Aspera Connect.app/Contents/Resources/asperaweb_id_dsa.openssh" emp_ext3@fasp.ebi.ac.uk:/{asp_path} {local_path}
The resources for the flags are here:
https://download.asperasoft.com/download/docs/ascp/2.7/html/index.html
To answer your question, without going too much into the details:
If you want to display the size of elements on an Aspera server to which you have access, you can use the command line tool "Amelia"; see:
https://www.rubydoc.info/gems/asperalm
mlia server --url=ssh://fasp.ebi.ac.uk:33001 --username=emp_ext3 --ssh-keys=~/.aspera/mlia/aspera_bypass_dsa.pem br /10002/data/100_movie_gc.mrcs
There are plenty of options, like --format=csv --fields=size.
Note that this displays individual file sizes, but not recursive folder size.
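Putting the two together, a sketch like the following could check the size first and only start the transfer if the file is small enough. The 1 GB threshold, the assumption that --format=csv --fields=size prints a bare byte count, and the example path are all mine, not from the docs:
# Query the file size via mlia first (output format is an assumption).
size=$(mlia server --url=ssh://fasp.ebi.ac.uk:33001 --username=emp_ext3 --ssh-keys=~/.aspera/mlia/aspera_bypass_dsa.pem --format=csv --fields=size br /10002/data/100_movie_gc.mrcs)
# Download with ascp only if the file is smaller than ~1 GB.
if [ "$size" -lt 1000000000 ]; then
  /Applications/Aspera\ Connect.app/Contents/Resources/ascp -QT -l 200M -P33001 -i "/Applications/Aspera Connect.app/Contents/Resources/asperaweb_id_dsa.openssh" emp_ext3@fasp.ebi.ac.uk:/10002/data/100_movie_gc.mrcs .
fi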
A few other things:
You are not exactly using "Connect", but rather the "ascp" command line. Connect refers to the browser extension and lightweight app, while ascp is the implementation of the Aspera FASP transfer protocol, found in basically all Aspera products.
The latest ascp documentation can be found here: https://www.ibm.com/support/knowledgecenter/SSL85S_3.9.6/hsts_admin_linux/dita/hsts_admin_linux_ascp_usage.html
Did you know you can also use the free client?
https://downloads.asperasoft.com/en/downloads/2
It includes ascp, but also a graphical user interface.
I want to be able to access my MongoDB database from two OpenShift apps: one is an interactive database maintenance app used via the browser, the other is the principal web application, which runs on mobile devices via an OpenShift app. As I see it in OpenShift, MongoDB gets set up within a particular app's folder space, not independent of that space.
What would be the method to accomplish this multiple-app access to the database?
It's not ideal, but is merging the functionality of both OpenShift apps into one my only choice? That tastes like a bad plate of spaghetti.
2018 update: this applies to OpenShift 2. Version 3 is very different; while the general rules of Linux and scaling still apply, the details here are obsolete.
Although @MartinB's answer was timely and correct, it's just a link, so let me put the essentials here.
Assuming that setting up a non-shared DB is already done, you need to find its host and port. You can ssh to your app (the one with the DB) or use rhc:
rhc ssh -a appwithdb
env | grep MONGODB
env lists all the environment variables, and grep filters them to show only the Mongo-related ones. You should see something like:
OPENSHIFT_MONGODB_DB_HOST=xxxxx-yyyyy.apps.osecloud.com
OPENSHIFT_MONGODB_DB_PORT=zzzzz
xxxxx is the ID of the gear that Mongo sits on
yyyyy is your domain/namespace
zzzzz is MongoDB port
Now you can use these to create a connection to the DB from anywhere in your OpenShift environment. The other application has to use the xxxxx-yyyyy:zzzzz URL. You can store these values in custom variables to make maintenance easier.
$ rhc env-set \
MYOWN_DB_HOST=xxxxx-yyyyy \
MYOWN_DB_PORT=zzzzz \
MYOWN_DB_PASSWORD=****** \
MYOWN_DB_USERNAME=admin..... \
MYOWN_DB_NAME=dbname...
And then use the environment variables instead of the standard ones. Just remember they don't get updated automatically when the DB moves away.
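For example, a quick connectivity test from the second app, using the custom variables set above, might look like this (a sketch; it assumes the mongo shell client is available on the gear):
mongo --host $MYOWN_DB_HOST --port $MYOWN_DB_PORT -u $MYOWN_DB_USERNAME -p $MYOWN_DB_PASSWORD $MYOWN_DB_NAME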
Please read the following article from the OpenShift blog: https://blog.openshift.com/sharing-database-across-applications/