OWASP ZAP: Active scanning manually explored actions

I started with an automatic scan of my site under test.
Then I explored it manually to exercise actions not found by the automatic scan.
These actions are visible in the History tab.
I stored the session.
How can I re-run those manually added actions? They did not appear to be executed when I ran an active scan again.

You can explore your app in a variety of ways using ZAP:
Manual exploring - very effective, but no good for automation
Traditional Spider - fast, but doesn't handle modern apps which use JS
Ajax Spider - slower, but handles modern apps
Proxying unit tests - good if you have them
Importing definitions such as OpenAPI, SOAP, GraphQL
Proxying a program/script which makes those requests for you (see the sketch below)
For more details see the videos tagged 'explore' on https://www.zaproxy.org/videos-list/
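For that last option, here is a minimal Node.js sketch (the action URLs are hypothetical, and the default ZAP proxy address 127.0.0.1:8080 is an assumption) that replays a list of known actions through ZAP's proxy so they land in the History tab and feed subsequent scans:

// Replay known actions through ZAP's local proxy so ZAP records them.
// Plain HTTP only; HTTPS through a proxy needs CONNECT handling.
const http = require('http');

const actions = [                                // hypothetical examples
  { method: 'GET', url: 'http://example.com/login' },
  { method: 'GET', url: 'http://example.com/account' },
];

for (const action of actions) {
  const req = http.request({
    host: '127.0.0.1',                           // ZAP proxy host
    port: 8080,                                  // ZAP proxy port (default)
    method: action.method,
    path: action.url,                            // full URL tells the proxy the real target
    headers: { Host: new URL(action.url).host },
  }, (res) => {
    console.log(action.method + ' ' + action.url + ' -> ' + res.statusCode);
    res.resume();                                // drain so the socket is released
  });
  req.end();
}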

Related

Configuring OWASP Zap Spider to output the "chain of URLs" for each request

I am new to vulnerability testing at my new job at an e-commerce site development company (we also deploy the sites and keep them running on AWS EC2).
I am wondering if there is a way to configure the Spider so that I can get output of the "URL chain" serving each of the requests listed when I run php artisan route:list.
Currently, a colleague who joined the company a few months before me is manually entering this information into a spreadsheet.
Ex. "Home->Register user info->Confirm registered user info->main shopping page->item category page->item description page->confirm adding product to cart page->etc."
I find this extremely tedious (he does as well), and because he only speaks Japanese, I don't think he is able to post questions here.
I have started looking through the ZAP documentation but have not seen anything relevant yet. Any advice is appreciated.
You can Active Scan specific sequences of operations by leveraging the Sequence add-on: https://github.com/zaproxy/zap-extensions/wiki/HelpAddonsSequenceSequence. You can get it via the ZAP Marketplace.
There's also the Call Graph addon which might be of benefit to you, though I don't know the export options it provides off the top of my head.
Another alternative that might work for you would be writing a Standalone script that goes through the Sites Tree or History table looking at URLs and Referer headers:
https://github.com/zaproxy/community-scripts/blob/master/standalone/Traverse%20sites%20tree.js
https://github.com/zaproxy/community-scripts/blob/master/standalone/Loop%20through%20history%20table.js
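As a rough sketch of that standalone-script approach (modelled on the linked community scripts, and assuming the standard objects ZAP exposes to standalone scripts), the following prints each history entry's URL together with its Referer, which you could then reassemble into chains:

// Standalone ZAP script: walk the history table, print Referer plus URL.
var extHist = org.parosproxy.paros.control.Control.getSingleton()
    .getExtensionLoader()
    .getExtension(org.parosproxy.paros.extension.history.ExtensionHistory.NAME);
if (extHist != null) {
  var last = extHist.getLastHistoryId();
  for (var i = 1; i <= last; i++) {
    var hr = extHist.getHistoryReference(i);
    if (hr) {
      var header = hr.getHttpMessage().getRequestHeader();
      var referer = header.getHeader('Referer');
      print((referer != null ? referer : '(no referer)') + ' -> ' + header.getURI().toString());
    }
  }
}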

Finding, and deleting, a rogue Application Insights Web Test

I have a quite extensive application running under Azure.
As part of the operational management of the application, I have a set of Application Insight instances to provide monitoring, tracking and logging.
The overall application consists of three ASP.NET MVC websites and a Worker Role. Additionally, I have three instances ("environments") of the application overall deployed (QA, UAT and Production).
I noticed a while back that one of the App Insights instances (for the same MVC website across all environments) was collecting quite a heavy volume of Dependency data points. Specifically, this is causing me to exceed the 5 million data points included in the monthly quota.
Noting this, I changed the Web Tests (for availability) to hit a different endpoint (one that doesn't invoke the dependencies).
However, I am still seeing the old endpoint being hit.
Digging a little further into this, I believe that I have an old rogue Web Test that is still active, and still hitting the old endpoint.
Issue is - I can't find it.
Is there a way to query the subscription, even if via the PowerShell cmdlets, in an attempt to find this? I've trawled through the portal and cannot see it anywhere.
Could this be the "Proactive Detection" feature? If so, can you change the endpoint it monitors?
You should definitely open a support ticket with us. Check out the dev support options and look at either option 3 or 4. It's preferred that you open a support ticket via Azure with a support plan (option 3) if you have one. But if you don't have a support plan, check out option 4 and you can get in contact with us that way.
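For the querying part of the question: availability web tests are ARM resources of type Microsoft.Insights/webtests, so you can also enumerate them across the whole subscription via the management REST API. Here is a hedged Node.js sketch; the subscription ID and bearer token are placeholders, and the api-version value is an assumption to check against the current ARM docs:

// List all Application Insights web tests in a subscription,
// to spot a rogue test still hitting the old endpoint.
const https = require('https');

const subscriptionId = 'YOUR_SUBSCRIPTION_ID';   // placeholder
const token = 'YOUR_ARM_BEARER_TOKEN';           // placeholder; obtain via Azure AD

https.get({
  host: 'management.azure.com',
  path: '/subscriptions/' + subscriptionId +
        '/providers/Microsoft.Insights/webtests?api-version=2015-05-01',
  headers: { Authorization: 'Bearer ' + token },
}, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    JSON.parse(body).value.forEach((test) => {
      console.log(test.name, test.id);           // inspect these for the stale test
    });
  });
});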

Google Fusion Table REST Api vs Advanced Services Fusion Table Services in app scripts

I am very confused about the correct or recommended mechanism for accessing the Google Fusion Tables APIs in Apps Script. There seem to be two methods with examples, but no discussion of which is preferred or why. Is one of these interfaces newer and preferred while the other is dying? Is one obsolete or more restricted in what it can do?
Method 1 is the REST API described here
https://developers.google.com/fusiontables/docs/v2/sql-reference#Select
Method 2 is a set of library functions sort of described here under the Apps Script/Google Advanced Services:
https://developers.google.com/apps-script/advanced/fusion-tables
For example, using the REST API to run an SQL query, we end up with something like this:
function runSQL(sql) {
  // URL-encode the SQL so spaces and quotes survive the query string;
  // getUrlFetchOptions() is the author's own helper supplying the OAuth headers.
  var getDataURL = 'https://www.googleapis.com/fusiontables/v1/query?sql=' + encodeURIComponent(sql);
  var dataResponse = UrlFetchApp.fetch(getDataURL, getUrlFetchOptions()).getContentText();
  return dataResponse;
}
And using the Advanced Service we use something like this:
var result = FusionTables.Query.sql(sql, { hdrs: false });
The REST API seems much harder to use, requiring complex OAuth and developer keys to be configured in advance and coded into the application, while the Advanced Services API handles all of this behind the scenes and makes for simple API calls like the one I show here.
I have seen numerous examples using each of the above, with no hint as to why one author chose one mechanism over the other.
Your help is greatly appreciated.
The service within Apps Script is a work in progress, so the full functionality of the API might not be fully supported at the moment. As you mentioned, though, the big advantage of the service over the REST API is that you do not have to handle the OAuth flow, as you only need to enable it on your script (as stated here).
The Apps Script "advanced service" implementation still lacks some advanced functionality (like alt=media format queries or multipart / resumable uploads); if it does have those features, they are essentially undocumented, to the point that the Apps Script editor's autocomplete is unaware of them. The tradeoff for these functionality gaps is that you don't need to handle keys, request building, etc.
So, if you're doing simple SQL select / importRows work, the Advanced Service should be able to cover almost all your needs. If you need to delete from your Fusion Tables, you might want to consider setting up the REST API, because DELETE removes one record per query; the better way to bulk-delete is to "download what you want to keep, then re-upload it back via replaceRows" (sketched below).
(This worked for me for a while, but eventually what I was keeping outgrew the Apps Script service's limitations and I began receiving Empty Response errors from the call to replaceRows. My remedy was to perform my record maintenance tasks via the REST API, where I can specify resumable uploads, timeouts, etc., while more "normal" interactions are done through the Advanced Service.)
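To make that "download what you want to keep, then replaceRows" pattern concrete, here is a hedged Apps Script sketch. The tableId and WHERE clause are placeholders, the CSV serialization is naive (no quoting or escaping), and it assumes the Fusion Tables advanced service is enabled and that the v2 media-upload endpoint is used for the replace:

function pruneTable(tableId) {
  // 1. Download only the rows you want to keep (hypothetical filter).
  var keep = FusionTables.Query.sql(
      "SELECT * FROM " + tableId + " WHERE Status = 'active'", { hdrs: false });

  // 2. Serialize the kept rows to CSV (naive: no quoting of commas/newlines).
  var csv = keep.rows.map(function (row) { return row.join(','); }).join('\n');

  // 3. Replace the table contents in one call via the REST upload endpoint,
  //    authorizing with the script's own OAuth token.
  UrlFetchApp.fetch(
      'https://www.googleapis.com/upload/fusiontables/v2/tables/' + tableId + '/replace',
      {
        method: 'post',
        contentType: 'application/octet-stream',
        payload: csv,
        headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() }
      });
}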

How to query RTC builds?

We are using RTC for version control and as our build system.
RTC's web interface allows users to create custom queries for work items - good.
How about creating custom queries for builds (or maybe other RTC items)?
Let's say I want to know in which builds a particular file was modified, or in which builds a particular team member contributed something.
There is definitely no web interface to do this.
Maybe some other tool? Something...
BTW, I didn't find it in the scm.exe tool provided with Eclipse.
Thanks
While there is no web GUI for building such a query, there is a REST API for querying Build Results:
See "Report REST API" (you need a -- free -- jazz.net account to access it), for com.ibm.team.build.BuildResult, that you can access as in this thread, for instance:
https://<host>:<port>/jazz/resource/itemOid/com.ibm.team.build.BuildResult/_uKcncTTuEeOy2d_WN7u_Bg
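As an illustration, a small Node.js sketch along these lines could pull one build result. The host and port are placeholders, the item OID is the one from the URL above, and authentication (which Jazz servers normally require) is omitted:

// Fetch one BuildResult from the Jazz reportable REST API.
const https = require('https');

https.get({
  host: 'jazz.example.com',                      // placeholder host
  port: 9443,                                    // placeholder port
  path: '/jazz/resource/itemOid/com.ibm.team.build.BuildResult/_uKcncTTuEeOy2d_WN7u_Bg',
  headers: { Accept: 'application/xml' },
}, (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => { console.log(res.statusCode, body); });
});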

Creating SOP invoices in Great Plains: eConnect or Web Service API?

We are using Integration Manager to create a batch of monthly invoices. I want to build a replacement that creates a batch in GP and imports the invoices into it. After review, the batch will be posted in GP. Is this doable with either of these APIs, and which would you choose?
Integration Manager can use eConnect as its insertion engine. If you are processing a high volume of transactions, you will notice a huge difference between Integration Manager's UI engine and eConnect. When you create a new integration, simply choose the eConnect option and whatever data source you have set up.
Concerning the non-IM APIs, both may be used, and the choice is situational. The Web Services API sits on top of eConnect, and it is much slower at integrating because you are passing information between several layers. It does provide a secure link between your SQL Server and any outside integration sources, and it is ideal if you need to allow integrations through middleware such as a billing gateway. If you are able to build an eConnect process/app that connects to your GP SQL Server, this is the fastest way to integrate SOP and receivables transactions. It maintains all the business rules, which helps ensure GP does not break as a result of a patch, and it is fast enough to push thousands of records without requiring a custom integration solution.
If you want to get done quickly and do not mind working from the Integration Manager interface, just build your integrations using eConnect within IM. If you have the time to develop a custom integration routine, go for eConnect directly. If you want to leverage WCF technology on top of eConnect, go for Web Services.
These options are listed in order of implementation time, from fastest to slowest.