What do "Out of Scope" and "Out of context:" mean in OWASP ZAP? - owasp

I have run an automatic scan of my application. Unfortunately, I have found that in the list of URLs after scanning there are some GET requests to URIs which are flagged with "Out of Context" or "Out of Scope". Should I be afraid that I have done something illegal?

No, not at all :) We have to be very careful with ZAP scans, partly for legal reasons but also so that we don't try to scan the whole internet :)
This typically comes up with the two spiders as they explore links; the active scanner only attacks what you've already found.
The scope is typically the domain you started the spider on. If you start spidering https://www.example.com then the spider won't start spidering https://www.google.com even if there is a link to it - otherwise we would end up trying to spider the internet :P
The context is something you define - you can think of it as an application, or part of an application, but it's really just two sets of regexes: one used to include URLs and one to exclude them. If you define a context then ZAP won't go outside of it.
ZAP doesn't know about the legal side - maybe you are employed by Google, in which case you might be allowed to attack google.com, but maybe not. ZAP doesn't know that.
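To make the context idea concrete, here is a minimal sketch of what the two regex sets might look like when you define a context (the URLs are hypothetical examples, not taken from the question):
Include in context: https://www\.example\.com/app/.*
Exclude from context: https://www\.example\.com/app/logout.*
Any URL the spiders find that doesn't match an include regex (or that matches an exclude regex) gets flagged "Out of Context"; URLs on a different domain than the one you started on are "Out of Scope".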

Related

Which exploit and which payload should I use?

Hi everyone and sorry for my bad English.
I'm learning penetration testing.
After reconnaissance and scanning of my target, I have enough information to move on to the next phase.
Some of the info I have is open ports with their running services, the names of the services, the service versions, the operating system of the device, the firewalls used, etc.
I launched msfconsole.
I should find the correct exploit and payload, based on the information collected, to gain access. I've read the Metasploit Unleashed guide on Offensive Security. I've learned the Metasploit fundamentals and the use of msfconsole.
But I don't understand how to start all of this. Assuming that my target has 20 ports open, I want to test for vulnerabilities using an exploit and payload that do not require user interaction. The possibilities of which exploits and payloads to use are now reduced, but there are still too many. Searching and testing every exploit and payload for each port isn't practical! So, if I don't know the vulnerabilities of the target, how do I proceed?
I would like to be aware of what I'm doing, and not just try things without understanding them.
Couple of things:
We have a stack exchange for security! Check it out at https://security.stackexchange.com/
For an answer: you want to look for "remote exploits", as those do not require user interaction. You can find a curated list of exploits here: https://www.exploit-db.com/remote/
You can search the services on this page for something that matches the same service/version as your attack vector.
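As a rough illustration of how that maps onto msfconsole, here is a hedged sketch of the workflow (the module, service, and target address are hypothetical examples; substitute whatever your scan actually found):
# suppose the scan showed vsftpd on port 21 - search for matching exploits
search type:exploit vsftpd
# inspect a candidate module, then select it
use exploit/unix/ftp/vsftpd_234_backdoor
info
# see which payloads the module supports, then configure and run it
show payloads
set RHOSTS 10.0.0.5
exploit
The point is to let the service/version information from your scan drive the search, rather than trying every module against every port.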

Shiro web filter chain order policy

I'm probably misunderstanding something, but here is the thing:
I want my <domain>/index.html publicly available, and everything else should be protected.
I'm using Shiro web + Guice:
...
bindConstant().annotatedWith(Names.named("shiro.loginUrl")).to("/index.html");
addFilterChain("/index.html", ANON);
addFilterChain("/**", AUTHC);
...
This configuration is leading me to a "TOO MANY REDIRECTS" loop. The Shiro documentation says here that it uses a FIRST MATCH WINS policy, but I think I didn't understand it well.
Any thoughts?
On the surface, your filter chain looks like it should work. I probably can't diagnose your too-many-redirects issue without a bit more information - the content of index.html, what HTTP response is actually returned from the server when you hit index.html, etc.
However, I CAN tell you that you shouldn't need to do this. The AUTHC filter has a special case for the "loginUrl" page - it will let it through. So try removing the ANON filter chain, and see how things go.
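In other words, a minimal sketch of the suggested configuration, using the same identifiers as in the question:
...
bindConstant().annotatedWith(Names.named("shiro.loginUrl")).to("/index.html");
// no ANON chain needed: AUTHC special-cases the loginUrl and lets it through
addFilterChain("/**", AUTHC);
...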

REST: Right or Wrong to Choose URLs From Use Cases

I was at a developer conference where the speaker argued that the following set of URLs is not RESTful:
/users/username/changepassword
/users/username/resetpassword
The main reason given was that the same URLs might be used in different contexts and that this didn't facilitate HATEOAS in a meaningful way.
He then continued to argue that a more viable approach is to use the following URLs:
/account/changepassword
/administration/server/users/username/resetpassword
According to the speaker, this latter approach allowed each use case to have a specifically tailored (HTML) form, which could then be posted to the same URL. No more problems with the same URL being used in different contexts.
I would spontaneously say that neither of these URL sets is RESTful, simply because they are both centered around actions (verbs), which in my eyes do not really qualify as resources except in exceptional cases (like search). This setup feels very RPC-like to me.
I would have suggested something more noun-like and granular, like:
//Change password
PUT /users/username/account/password
//Register reset
POST /users/username/account/password/resets
//Verify reset
PUT /users/username/account/password/resets/0/verification_code
What is your opinion? Is the speaker's approach RESTful or not, or is there simply not enough information here?
I agree - the whole idea of a RESTful interface (as I understand it) is to allow access to "resources". So neither of those URL schemes seems very nice to me.
Having said that, REST isn't set in stone; it is more of a guide than a set of rules. Some things don't sit that well with it, so you have to get as close as you can using just the HTTP verbs.
A password reset isn't a resource; however, a password is. So I would say something along these lines for a password reset operation:
GET /users/antonyscott/password
PUT /users/antonyscott/password
With the second call requiring authentication of some sort derived from the first call, and passing in the new password. Actually, that's more of a straight password change than a reset. If you're after a reset (i.e. following a link in an email to confirm the reset), then what you had seems okay.
Obviously designing an API is an iterative process, so I would say have a go and see how it works, then refine it.
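To ground the asker's noun-based design, here is a hedged sketch of what the reset flow might look like on the wire (the username, code, and body fields are all hypothetical examples):
Register a reset (the server might email a verification code):
POST /users/alice/account/password/resets
-> 201 Created, Location: /users/alice/account/password/resets/0
Verify the reset and supply the new password:
PUT /users/alice/account/password/resets/0/verification_code
{"code": "123456", "new_password": "s3cret"}
-> 204 No Content
Each step manipulates a noun (a reset resource and its verification code) rather than invoking a verb-style endpoint.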

SOP issue behind reverse proxy

I've spent the last 5 months developing a GWT app, and it's now time for third-party people to start using it. In preparation for this, one of them has set up my app behind a reverse proxy, and this immediately resulted in problems with the browser's same-origin policy. I guess there's a problem in the response headers, but I can't seem to rewrite them in any way that makes the problem go away. I've tried this
response.setHeader("Server", request.getRemoteAddr());
in some sort of naive attempt to mimic the behaviour I want. Didn't work (to the surprise of no-one).
Anyone who knows anything about this will most likely snicker and shake their head when reading this, and I do not blame them. I would snicker too, if it were me... I know nothing at all about this, and that naturally makes this problem awfully hard to solve. Any help at all will be greatly appreciated.
How can I get the header rewrite to work and get away from the SOP issues I'm dealing with?
Edit: The exact problem I'm getting is a pop-up saying:
"SmartClient can't directly contact URL 'https://localhost/app/resource?action='doStuffs'" due to browser same-origin policy. Remove the host and port number (even if localhost) to avoid this problem, or use XJSONDataSource protocol (which allows cross-site calls), or use the server-side HttpProxy included with SmartClient Server."
But I shouldn't need the SmartClient HttpProxy, since I have a proxy on top of the server, should I? I've gotten no indication that this could be a serialisation problem, but maybe this message is hiding the real issue...
Solution
chris_l and saret both helped to find the solution, but since I can only mark one, I marked the answer from chris_l. Readers are encouraged to bump them both up; they really came through for me here. The solution was quite simple: just remove any absolute paths to your server and use only relative ones. That did the trick for me. Thanks guys!
The SOP (for AJAX requests) applies when the URL of the HTML page and the URL of the AJAX requests differ in their "origin". The origin includes host, port and protocol.
So if the page is http://www.example.com/index.html, your AJAX requests must also point to something under http://www.example.com. For the SOP, it doesn't matter if there is a reverse proxy - just make sure that the URL, as it appears to the browser (including port and protocol), isn't different. The URL you use internally is irrelevant - but don't use that internal URL in your GWT app!
Note: the solution in the special case of SmartClient turned out to be using relative URLs (instead of absolute URLs to the same origin). Since the SOP doesn't forbid absolute URLs to the same origin, I'd say that's a bug in SmartClient.
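A hedged sketch of the before/after in the client code (the endpoint comes from the error message above; the variable names are hypothetical):
// Before: absolute URL with a baked-in host/port/protocol that may not match
// the origin the browser sees when the app sits behind a reverse proxy
String brokenUrl = "https://localhost/app/resource?action=doStuffs";
// After: relative URL, resolved against whatever origin served the page,
// so it stays same-origin no matter which host the proxy exposes
String workingUrl = "app/resource?action=doStuffs";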
What issue are you having exactly?
Having previously had to write a reverse proxy for a GWT app, I can't remember hitting any SOP issues. One thing you need to do, though, is make sure response headers and URIs are rewritten to the reverse proxy's URL - this includes AJAX callback URLs.
One issue I hit (which you might also experience) when running behind a reverse proxy was with the serialization policy of the GWT server.
Fixing this required writing an implementation of RemoteServiceServlet. While this was in early/mid 2009, it seems the issue still exists.
It seems like others have hit this as well - see this for further details (the answer by Michele Renda in particular)

Connectedness & HATEOAS

It is said that in a well-defined RESTful system, clients only need to know the root URI or a few well-known URIs, and shall discover all other links through these initial URIs. I do understand the benefit (decoupled clients) of this approach, but the downside for me is that the client needs to discover the links each time it tries to access something, e.g. given the following hierarchy of resources:
/collection1
|- sub1
|  |- sub1sub1
|  |  |- sub1sub1sub1
|  |  |  |- sub1sub1sub1sub1
|  |- sub1sub2
|- sub2
|  |- sub2sub1
|  |- sub2sub2
|- sub3
   |- sub3sub1
   |- sub3sub2
If we follow the "client only needs to know the root URI" approach, then a client shall only be aware of the root URI, i.e. /collection1 above, and the rest of the URIs should be discovered by the client through hypermedia links. I find this cumbersome because each time a client needs to do a GET on, say, sub1sub1sub1sub1, should the client first do a GET on /collection1, follow the link in the returned representation, and then do several more GETs on sub-resources to reach the desired resource? Or is my understanding of connectedness completely wrong?
Best regards,
Suresh
You will run into this mismatch when you try to build a REST API that does not match the flow of the user agent that is consuming it.
Consider that when you run a client application, the user is always presented with some initial screen. If you match the content and options on this screen with the root representation, then the available links and desired transitions will match up nicely. As the user selects options on the screen, the client can transition to other representations, and the client UI should be updated to reflect the new representation.
If you try to model your REST API as some kind of linked data repository and your client UI as an independent set of transitions, then you will find HATEOAS quite painful.
Yes, it's right that the client application should traverse the links, but once it has discovered a resource, there's nothing wrong with keeping a reference to that resource and using it for longer than one request. If your client is able to remember things permanently, it can do so.
Consider how a web browser keeps its bookmarks. You probably have tens or hundreds of bookmarks in the browser, and you probably found some of them deep in a hierarchy of pages, but the browser dutifully remembers them without requiring you to remember the path you took to find them.
A richer client application could remember the URI of sub1sub1sub1sub1 and reuse it if it still works. It's likely that it still represents the same thing (it ought to). If it no longer exists or fails for any other client reason (4xx), you could retrace your steps to see if you can find a suitable replacement.
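A hedged sketch of that bookmark idea in Java (all class and method names here are hypothetical, not from any REST library):
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Hypothetical link cache: remembers URIs discovered by traversing links,
// so later requests can skip the walk from the root.
class LinkCache {
    private final Map<String, URI> bookmarks = new HashMap<>();

    void remember(String name, URI uri) {
        bookmarks.put(name, uri);
    }

    URI lookup(String name) {
        return bookmarks.get(name); // null means: rediscover from the root
    }

    void forget(String name) {
        bookmarks.remove(name); // call this on a 4xx, then retrace your steps
    }
}
On a cache miss or a 4xx, the client falls back to the normal hypermedia traversal starting at /collection1 and refreshes the cached URI.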
And of course what Darrel Miller said :-)
I don't think that's a strict requirement. As I understand it, it is legal for a client to access resources directly and start from there. The important thing is that you do not do this for state transitions, i.e. do not automatically proceed with /foo2 after operating on /foo1 and so forth. Retrieving /products/1234 initially to edit it seems perfectly fine. The server could always return, say, a redirect to /shop/products/1234 to remain backwards compatible (which is desirable for search engines, bookmarks and external links as well).