OWASP ZAP - Scan a list of URLs

I can use the Manual Request Editor to scan one URL; how can I use it for a list of URLs (e.g. URLs listed in a CSV or text file)?
Thanks,

Install the 'Import URLs' add-on from the ZAP Marketplace: https://github.com/zaproxy/zap-extensions/wiki/HelpAddonsImporturlsImportUrls - that will allow you to import a file of URLs (one per line).
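For reference, the file the add-on expects is just a plain text file with one URL per line; a minimal sketch (the URLs below are illustrative):
http://example.org/login
http://example.org/search?q=test
http://example.org/api/items
Save it as e.g. urls.txt and point the Import URLs dialog at it.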

Related

JMeter: importing an .xlsx file

Hello, can someone help me import an .xlsx file in JMeter? I tried this configuration, but unfortunately it doesn't work. Your response is highly appreciated. Thank you so much in advance.
Screenshots (not shown): file location, 1st payload/request, 2nd payload/request, header, response.
If you're using the "Files Upload" tab and your server expects the multipart/form-data content type, you should tick the corresponding box in JMeter's HTTP Request sampler.
Going forward, be aware that it's possible to record the file-upload request from your browser (or another application) using JMeter's HTTP(S) Test Script Recorder; just make sure to copy the file(s) you will be uploading into JMeter's "bin" folder so that JMeter can properly generate the relevant HTTP Request sampler and HTTP Header Manager.
More information: Recording File Uploads with JMeter

Passing config values to the OWASP ZAP REST API script as a file: what format?

I wanted to automate API pentesting.
I referred to this blog post:
https://zaproxy.blogspot.in/2017/06/scanning-apis-with-zap.html
Could you direct me to where I can get a sample zap-options file to pass with the -z option to the zap-api-scan.py script, or where I can find documentation on the format in which config values have to be specified in that file? I could not find this in the official ZAP docs.
See this FAQ: https://github.com/zaproxy/zaproxy/wiki/FAQconfigValues - that's the best we've got at the moment, I'm afraid.
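For illustration, the values are plain -config key=value pairs. A minimal sketch of the kind of options you might pass (the Replacer add-on keys below are just an example of this pattern; treat them as illustrative, not an official reference, and replace <your-token> with a real value):
-config replacer.full_list(0).description=auth1
-config replacer.full_list(0).enabled=true
-config replacer.full_list(0).matchtype=REQ_HEADER
-config replacer.full_list(0).matchstr=Authorization
-config replacer.full_list(0).regex=false
-config replacer.full_list(0).replacement=<your-token>
When invoking zap-api-scan.py these are normally supplied as a single quoted string after -z, e.g. -z "-config replacer.full_list(0).enabled=true ...".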

Error 403 with CKAN 2.6.2 - DataPusher

In order to process some big data, I have to set up CKAN on a local machine. I've set up the whole system following this guide: http://docs.ckan.org/en/latest/maintaining/installing/install-from-source.html
I want to display a preview of a locally uploaded file, so the user can actually see it before downloading it. This doesn't work, because previews only work for online files: for instance, it DOES work with this online file but NOT with a file I upload myself.
So I've been looking into the DataStore and DataPusher. I've followed every part of the guide, and it appears in my CKAN. However, I get an error, specifically this one:
Upload error: An Error occurred while sending the job: 403 Client Error: Forbidden for url: http://127.0.0.1:8800/job
Here are the most important parts of my production.ini file (copying the whole thing would be very long):
ckan.site_url = http://localhost
ckan.plugins = datastore datapusher stats text_view image_view recline_view recline_graph_view recline_map_view webpage_view
ckan.datapusher.formats = csv xls xlsx tsv application/csv application/vnd.ms-excel application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
ckan.datapusher.url = http://127.0.0.1:8800/
I truly have no idea what my problem could be. I tried changing the datapusher URL to 0.0.0.0 as the guide suggests, but that doesn't work either.
If the data to be added to CKAN is in a file on your computer, select the “Upload a file” option; CKAN will give you a file browser to select it. You should use the “Link to a file” option only for publicly available resources.
Have you also installed DataPusher? It's a separate process running on port 8800. CKAN uses the DataStore to provide a grid view of tabular data, and data needs to be pushed through DataPusher before the DataStore can use it.
Yes, you need to set up DataPusher. It's not activated by default.
Pull the datapusher code, install the dependencies and run it using:
python datapusher/main.py deployment/settings.py
The instructions for configuring the settings are in the repository.
Here's the datapusher manual: http://docs.ckan.org/projects/datapusher/en/latest/
Here's the repository: https://github.com/ckan/datapusher
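For reference, a minimal sketch of those steps (the settings path follows the answer above, and the requirements.txt is assumed to be the one shipped in the repository; adjust to your environment and virtualenv):
git clone https://github.com/ckan/datapusher
cd datapusher
pip install -r requirements.txt
python datapusher/main.py deployment/settings.py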
Had the exact same error message.
This post solved my issue though.
In short: insert/check the following in your virtual host config in /etc/apache2/sites-enabled/datapusher.conf:
<Directory /etc/ckan>
Options All
AllowOverride All
Require all granted
</Directory>

Import CSV on Mac Neo4j: Couldn't load the external resource

I'm running Neo4j Community Edition, hoping to load a CSV from my Downloads folder:
LOAD CSV WITH HEADERS FROM "file:/Users/santouko/Downloads/neo4j_module_datasets/test.csv" as line
RETURN count(*);
However, it returns an error message in which the path is different from the one I specified. Any possible reasons?
Couldn't load the external resource at: file:/Users/santokou/Documents/Neo4j/default.graphdb/import/Users/santokou/Downloads/neo4j_module_datasets/test.csv
By default the LOAD CSV path is relative to the import directory of the Neo4j installation.
You can configure this by specifying a value for dbms.directories.import in neo4j.conf. See this page for more info.
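For example (a sketch based on the paths in the question; adjust to your machine), you can either copy test.csv into the configured import directory and load it with a relative file:/// URL:
LOAD CSV WITH HEADERS FROM "file:///test.csv" AS line
RETURN count(*);
or point the import directory at your Downloads folder in neo4j.conf:
dbms.directories.import=/Users/santouko/Downloads/neo4j_module_datasets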

Are you able to create clean URLs with Wget?

I'm attempting to create a mirror of a WordPress site with clean URLs (i.e. http://example.org/foo not http://example.org/foo.php). When Wget mirrors the site, it gives all pages and links a ".html" extension (i.e. http://example.org/foo.html).
Is it possible to set options for Wget to create a clean URL structure, so that the mirrored file corresponding to the page "http://example.org/foo" would be "/foo/index.html" and the link to that page would be "http://example.org/foo"? If so, how?
If I understand your question correctly, you're asking for what is the default behaviour of Wget.
Wget will only add the extension to the local copy if the --adjust-extension option has been passed to it. Quoting the Wget man page:
--adjust-extension
If a file of type application/xhtml+xml or text/html is downloaded and the URL does not end with the regexp \.[Hh][Tt][Mm][Ll]?, this option will cause the suffix .html to be appended to the local filename. This is useful, for instance, when you're mirroring a remote site that uses .asp pages, but you want the mirrored pages to be viewable on your stock Apache server. Another good use for this is when you're downloading CGI-generated materials. A URL like http://example.com/article.cgi?25 will be saved as article.cgi?25.html.
However, what you seem to be asking for, namely that Wget save example.org/foo as /foo/index.html, is actually the default behaviour. If you're seeing some other output, you should post the complete output of Wget with the --debug switch.
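For reference, a typical mirroring invocation that leaves the URL structure untouched (i.e. without --adjust-extension) looks something like this, using the example host from the question:
wget --mirror --convert-links --page-requisites --no-parent http://example.org/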