Do web.config header size limits override http.sys limits in the registry?

I have an ASP.NET 4.0 application using Windows Integrated Authentication on IIS 7.5 on Windows 2003.
Some users are reporting errors with this message:
Bad Request - Request Too Long
HTTP Error 400. The size of the request headers is too long.
Others succeed in loading pages but have errors loading other resources and performing AJAX calls.
One of the users experiencing intermittent errors has a Kerberos Authorization header of about 5,700 characters; this user is a member of 250 AD groups. My theory is that other HTTP headers (including cookies) may take the total beyond 8,000 characters, which, encoded as UTF-16 at two bytes per character, exceeds the default 16 KB limit.
This page describes using web.config to configure limits on each HTTP header:
http://www.iis.net/configreference/system.webserver/security/requestfiltering/requestlimits/headerlimits
This page describes using registry settings to set limits on HTTP header size and total request size, by default both 16KB:
https://support.microsoft.com/en-us/kb/820129
Do the web.config settings override the HTTP.sys registry settings?
If so, is there a web.config setting for the total request size?

The registry setting takes priority, because the http.sys driver is the entry point for incoming packets. It is also a server-wide setting that applies to all incoming HTTP traffic.
The request filtering setting only takes effect once packets are forwarded to the IIS pipeline. It can be set at multiple levels (server, site, application) for fine-grained control.
As for the total request size, there does not seem to be a web.config setting for that.
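For concreteness, both knobs side by side. On the IIS side, the request filtering limit from the linked article is set per header in web.config; the 32 KB value here is purely illustrative:
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits>
        <headerLimits>
          <!-- Illustrative cap for the Kerberos Authorization header -->
          <add header="Authorization" sizeLimit="32768" />
        </headerLimits>
      </requestLimits>
    </requestFiltering>
  </security>
</system.webServer>
On the http.sys side, KB820129 describes two REG_DWORD values under HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters: MaxFieldLength (each individual header) and MaxRequestBytes (request line plus all headers), both defaulting to 16 KB. For example:
reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v MaxFieldLength /t REG_DWORD /d 32768
reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v MaxRequestBytes /t REG_DWORD /d 32768
A restart of the HTTP service (or a reboot) is needed before the new values take effect.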

Have you tried clearing your cookies in your browser?
It's possible that you have an overly large number of stored cookies that get added to every request your browser makes, which can sometimes lead to issues like this. You could also try running your application in a different browser to see whether that changes anything.

Related

Is it normal for IBM Connections OpenSocial gadgets to make 2 HTTP requests on gadgets.io.makeRequest?

Within an IBM Connections sharebox/share dialog gadget my-sharebox.xml, I make the following request:
gadgets.io.makeRequest(url, function (response) { ... });
Using tcpflow on the IBM Connections server to capture the outgoing request & response, I see 2 HTTP requests.
The first one to the url specified above, and a second request to the gadget XML file, my-sharebox.xml.
Is this second request expected behaviour?
Is it possible to somehow suppress the second request?
In a production environment the container should cache the gadget XML and fetch it only once, usually when the gadget is rendered. Do you have all OpenSocial-related debug parameters disabled?

Facebook loads HTTPS-hosted iframe apps via HTTP POST (S3 & CloudFront errors)

I have been trying to write a bucket policy that will allow X-HTTP-Method-Override, because my research shows that Facebook loads HTTPS-hosted iframe apps via HTTP POST instead of HTTP GET, which causes S3 and CloudFront errors.
Can anyone please help me with this problem?
This is what's returned from S3 if I served my Facebook app directly from S3:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>MethodNotAllowed</Code>
  <Message>The specified method is not allowed against this resource.</Message>
  <ResourceType>OBJECT</ResourceType>
  <Method>POST</Method>
  <RequestId>B21565687724CCFE</RequestId>
  <HostId>HjDgfjr4ktVxqlIBeIlvXT3UzBNuPg8b+WbhtNHOvNg3cDNpfLH5GIlyUUpJKZzA</HostId>
</Error>
This is what's returned from CloudFront if I served my Facebook app from CloudFront with S3 as the origin:
ERROR
The request could not be satisfied.
Generated by cloudfront (CloudFront)
I think the solution should be to write a bucket policy that makes use of X-HTTP-Method-Override, but I am probably wrong. A solution to this problem would be highly appreciated.
After trying many different ways to get this to work, it turns out that it simply is not possible to make a POST to static content work on S3 as things stand. Even if you allow POST through CloudFront, enable CORS, and change the bucket policy so that the CloudFront origin identity can GET/PUT, etc., it will still throw an error.
As an aside, S3 is not the only thing that balks at such a POST to static content. If you configure nginx as an origin for a Facebook iframe you will get the same 405 error, though you can work around it in a couple of ways (essentially rewriting the POST to a GET under the covers). You can also give the page (though still static) a dynamic extension (.aspx or .php) to work around the issue with nginx.
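For the nginx case, the rewrite-to-GET workaround can look roughly like this (a sketch only; the location path is a stand-in for wherever your canvas page lives):
# nginx answers a POST to static content with 405, so remap that
# error back to a 200 that serves the same URI as if it were a GET.
location /canvas/ {
    error_page 405 =200 $uri;
}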
You can host all your other content on S3, of course, and just move the page you POST to onto a different origin. With a decent cache time you should see minimal traffic, but it does mean keeping your content in two places. What I ended up doing was:
- Creating EC2 instances in an autoscaling group (just in case) to serve the content
- Using a cron job to sync the content from S3 every 5 minutes (sketched below)
- Keeping the workflow unchanged (still just upload content to S3)
It's not ideal, nor is it particularly efficient, but hopefully it will save others a lot of fruitless testing trying to get this to work on S3 alone.
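The sync job could be as simple as this sketch (assumes the AWS CLI is installed; the bucket name and docroot are stand-ins):
# /etc/cron.d/s3-sync: pull the latest content from S3 every 5 minutes
*/5 * * * * root /usr/bin/aws s3 sync s3://my-content-bucket /var/www/html --delete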
You can set your CloudFront distribution to allow POST methods.
Go into your dashboard, edit the Behavior for the distribution, and under Allowed HTTP Methods select GET, HEAD, PUT, POST, PATCH, DELETE, OPTIONS.
This allows the POST from Facebook to go through to your origin.
I was fighting with S3 and CloudFront for the last couple of days, and I can confirm that no bucket policy will let you redirect Facebook's POST calls to S3-hosted static (JS-enriched) content.
The only solution seems to be the one Adam Comerford mentioned in this thread:
having a light application which receives the Facebook calls and then fetches the content from S3 or CloudFront.
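To make that concrete, here is a minimal sketch of such a light application as an ASP.NET generic handler: it accepts Facebook's POST and returns the page fetched from CloudFront with a plain GET. The URL is a placeholder, and real code would want caching and error handling:
// CanvasProxy.ashx.cs - accepts Facebook's POST, serves the static page
using System.Net;
using System.Web;

public class CanvasProxy : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        using (var client = new WebClient())
        {
            // Placeholder URL - substitute your CloudFront/S3 address
            string html = client.DownloadString("https://dxxxxxxxx.cloudfront.net/canvas/index.html");
            context.Response.ContentType = "text/html";
            context.Response.Write(html);
        }
    }

    public bool IsReusable { get { return true; } }
}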
If anyone has any other solution or idea it will be appreciated.
You can't change POST to GET - that's how Facebook loads the app page, because it also sends data about the current user in the POST body (see signed_request for more details). I would suggest you look into fixing your app to make sure it properly responds to POST requests.

How to disable HTTP GET caching in a Metro app using IXMLHTTPRequest2

I am doing an HTTP GET to fetch data, using IXMLHTTPRequest2.
If I GET the URL "http://foo.com" (curl "http://foo.com") and then GET the same URL again after the content on the server has changed, what I get back is the cached result.
The cache seems to honor only the URL, so a request with different headers but the same URL still returns the same cached result.
I have tried "Cache-Control: no-cache", "Cache-Control: no-store", and "Pragma: no-cache". None of them are honored by the API.
Is there a way to turn the cache off, or another workaround? (The workaround I am using now is appending garbage to the end of the URL, but I don't feel good about it.)
My question was answered here by Prashant: http://social.msdn.microsoft.com/Forums/en-US/winappswithnativecode/thread/1df95d3e-68c9-4351-822a-c2cfde380248/#1df95d3e-68c9-4351-822a-c2cfde380248
You can force XHR to retrieve the latest content by setting the "If-Modified-Since" HTTP header in the request to a time in the past.
If you have control over the server response, you could instead send back an Expires HTTP response header with a value of 0 or a date in the past. That should also make XHR retrieve the latest response for you.
You only need to do one of the two; changing both the client and the server side is not necessary.
The client-side code could be changed to something like this:
xhr->Open(...);  // set up the GET as usual
// A date far in the past forces a revalidation against the server
xhr->SetRequestHeader(L"If-Modified-Since", L"Sat, 01 Jan 2000 00:00:01 GMT");
xhr->Send(...);
To change the server-side behavior, if your server code is based on ASP.NET, you could add the response header like this:
Response.Headers.Add("Expires", "0")
I think you need to use sockets. These two links should help:
C# WebClient disable cache
How do I Create an HTTP Request Manually in .Net?

Apache security

I need to use a Facebook application, but my web page returns response 206 instead of 200, so the Facebook application returns HTTP code 500.
I tested with http://developers.facebook.com/tools/debug/og/object?q=http://adserver.leadhouse.net/test/test/index.php and my page returns 206, whereas joomla.it returns 200, even though both give the same curl -I response data.
I also tested with this Perl script: http://pastebin.com/NCDv9eTh
and my page is vulnerable, whereas joomla.it is fine.
I think the answer lies somewhere between
Facebook debugger: Response 206
and Apache Webserver security and optimization tips,
but I don't understand how to change my Apache configuration.
The solution is on this page:
www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.2
with code similar to:
SetEnvIf Range (,.*?){5,} bad-range=1
RequestHeader unset Range env=bad-range
or
httpd.apache.org/docs/2.2/mod/core.html#limitrequestfieldsize
How can I make my web pages less vulnerable?
I have no idea what kind of “vulnerability” you are talking about here.
Facebook's debugger showing a response status code of 206 is normal, because the debugger requests only the first X (kilo)bytes of your URL. If your server accepts such range requests and answers them correctly, the response code will be 206.
There is no vulnerability in that.
If this causes you any other problems with your site – then please describe them in a manner that makes them comprehensible.
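You can check this yourself against the URL from your question: a ranged request should come back 206 while a plain one comes back 200.
# Should print "HTTP/1.1 206 Partial Content" if range requests are honored
curl -i -H "Range: bytes=0-1023" http://adserver.leadhouse.net/test/test/index.php
# Should print "HTTP/1.1 200 OK"
curl -i http://adserver.leadhouse.net/test/test/index.php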
Yes, this all started with debugging Facebook: the Send dialog returns HTTP code 500 while my page returns HTTP code 206.
My curiosity is focused on the DoS vulnerability associated with HTTP code 206, which I tested with the Perl script http://pastebin.com/NCDv9eTh.
I quote some significant passages from the Apache documentation:
This vulnerability concerns a 'Denial of Service' attack. This means
that a remote attacker, under the right circumstances, is able to slow
your service or server down to a crawl or exhausting memory available
to serve requests, leaving it unable to serve legitimate clients in a
timely manner.
There are no indications that this leads to a remote exploit; where a
third party can compromise your security and gain foothold of the
server itself. The result of this vulnerability is purely one of
denying service by grinding your server down to a halt and refusing
additional connections to the server.
So the LimitRequestFieldSize workaround was insufficient; you can instead modify the Range handling by consulting the Mitigation paragraph of the Apache wiki documentation: http://wiki.apache.org/httpd/CVE-2011-3192
That switches the returned HTTP code from 206 to 200 and improves your Apache configuration, but you're still exposed to the DoS vulnerability.
I added mod_headers with this line:
RequestHeader unset Range
and now my page returns HTTP code 200.
To limit the memory exhausted serving requests, I also capped per-IP connections by adding mod_limitipconn with this directive:
MaxConnPerIP 10
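Putting both pieces together, the relevant httpd.conf fragment looks roughly like this (a sketch; mod_limitipconn is a third-party module that has to be installed separately, and 10 is just the value I chose above):
# Defuse the CVE-2011-3192 byte-range DoS by dropping Range headers entirely
LoadModule headers_module modules/mod_headers.so
RequestHeader unset Range
# Cap concurrent connections per client IP (third-party mod_limitipconn)
<IfModule mod_limitipconn.c>
    <Location />
        MaxConnPerIP 10
    </Location>
</IfModule>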

Why does Fiddler break my site's redirects?

Why does using Fiddler sometimes break my site on page transitions?
After a server-side redirect, in the HTTP response (as seen in Fiddler) I get this:
Object moved
Object moved to here.
The site is an ASP.NET 1.1 / VB.NET 1.1 [sic] site.
Why doesn't Fiddler just go there for me? I don't get it.
I'm fine with this issue when developing, but I'm worried that other proxy servers might cause it for 'real customers'. I'm not even clear on exactly what is going on.
That's actually what Response.Redirect does. It sends a "302 - Object moved" response to the user-agent, which then automatically goes to the URL specified in the 302 response. If you need a real server-side redirect without a round trip to the client, try Server.Transfer.
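The two options side by side (page name hypothetical; you would use one or the other):
// Response.Redirect: sends "302 Object moved" to the browser,
// which then issues a second request for the new URL.
Response.Redirect("newpage.aspx");

// Server.Transfer: hands the current request to newpage.aspx inside
// the server; the browser never sees a 302 and the address bar
// keeps the original URL.
Server.Transfer("newpage.aspx");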
If you merely constructed the request using the request builder, you're not going to see Fiddler automatically follow the returned redirect.
In contrast, if you are using IE or another browser, it will generally check the redirect header and follow it.
For IE specifically, I believe there's a timing corner case where the browser will fail to follow the redirect in obscure situations. You can often fix this by clicking Tools / Fiddler Options, and enabling both the "Server" and "Client" socket reuse settings.
Thanks user15310, it works with Server.Transfer
Server.Transfer("newpage.aspx", true);
Firstly, transferring to another page using Server.Transfer conserves server resources. Instead of telling the browser to redirect, it simply changes the "focus" on the Web server and transfers the request. This means you don't get quite as many HTTP requests coming through, which therefore eases the pressure on your Web server and makes your applications run faster.
But watch out: because the "transfer" process can work on only those sites running on the server, you can't use Server.Transfer to send the user to an external site. Only Response.Redirect can do that.
Secondly, Server.Transfer maintains the original URL in the browser. This can really help streamline data entry techniques, although it may make for confusion when debugging.
That's not all: The Server.Transfer method also has a second parameter—"preserveForm". If you set this to True, using a statement such as Server.Transfer("WebForm2.aspx", True), the existing query string and any form variables will still be available to the page you are transferring to.
Read more here:
http://www.developer.com/net/asp/article.php/3299641/ServerTransfer-Vs-ResponseRedirect.htm