How do I configure Fiddler to show the parameters included in a URL?

I am having problems with our .NET OpenID authorisation.
I am trying to use Fiddler to view the flow of addresses and data, but Fiddler isn't showing the URL parameters in the return to localhost.
So I tested this by typing
https://localhost:8080/#id=2
into IE and seeing what showed up in Fiddler; it was:
#1281  Result: 502  URL: HTTP Tunnel to localhost:8080  Body: 512  Caching: no-cache, must-revalidate  Content-Type: text/html; charset=UTF-8  Process: iexplore:19884
I tried looking at the Fiddler help for configuration; it suggested using your machine name instead of localhost. I tried that, but it made no difference.
Thank you in advance.
This is the pre-question to get all the information I will need for the next question about the OpenID problem :)

I wanted to look at the response of the line above for the information I needed. This question is now closed, thanks.
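Two details are worth noting here. The "HTTP Tunnel to localhost:8080" entry means Fiddler is only showing the CONNECT tunnel; to inspect HTTPS requests you generally need to enable decryption (in classic Fiddler: Tools > Fiddler Options > HTTPS > Decrypt HTTPS traffic). Separately, everything after the # in a URL is a fragment, which the browser keeps to itself and never sends over the wire, so no proxy can display it. A minimal Perl sketch (assuming the CPAN URI module) shows which part of a URL a server-side tool could ever see:
use URI;
my $u = URI->new("https://localhost:8080/?id=2#id=2");
print $u->query, "\n";     # "id=2" - part of the request; visible in Fiddler once decryption is on
print $u->fragment, "\n";  # "id=2" - kept by the browser; never reaches the server or Fiddler
If the return to localhost carries its parameters in the fragment rather than the query string, that alone would explain why Fiddler shows nothing.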

Related

Sending HTTP via proxy with haproxy

We have a company proxy (ip:port) and need to send an HTTP POST with a JSON payload to a URL like "http://server1.smthng.com/foo". Locally, the name cannot be resolved, but it is resolved at the proxy. I don't understand how to configure haproxy to use the proxy "ip:port" and send the request without modifying the original URL.
I've tried curl against "http://server1.smthng.com/foo" after setting the https_proxy variable from the CLI (on Linux) and it worked for me, so now I need to replicate the same thing via haproxy.
From the curl logs I could see that it first issues a CONNECT to the proxy and, once the connection is established, POSTs the data.
I could be missing some knowledge here regarding TCP tunnels, and the answer could be simple really. Anyway, I need help.
This question is to be closed with no answer. The solution we took is via civetweb's http_proxy parameters.
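For reference, one possible shape for this in haproxy itself (an untested sketch, not the solution the asker took; PROXY_IP:PROXY_PORT stands in for the corporate proxy) is to point a backend at the proxy and rewrite the request line to the absolute URI a forward proxy expects for plain HTTP:
frontend local_in
    bind 127.0.0.1:8080
    default_backend via_corporate_proxy
backend via_corporate_proxy
    # The "server" is the corporate forward proxy, not server1 itself.
    server corp_proxy PROXY_IP:PROXY_PORT
    # A forward proxy expects "POST http://host/path HTTP/1.1" in the request line.
    http-request set-uri http://server1.smthng.com%[path]
    http-request set-header Host server1.smthng.com
This only covers plain HTTP; an https:// destination needs the CONNECT handshake seen in the curl logs, which haproxy does not originate on its own.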

DeAuthorize callback URL not working [duplicate]

I have an FB app; when I enter my development box address as the deauthorization callback URL, the box is pinged with this request after the app is removed on FB:
POST /facebook/deauthorize HTTP/1.1
Host: bashman.org
Accept: */*
Content-Length: 261
Content-Type: application/x-www-form-urlencoded
Connection: close
fb_sig_uninstall=1&fb_sig_locale=de_DE&fb_sig_in_new_facebook=1&fb_sig_time=1322732591.2685&fb_sig_added=0&fb_sig_user=1476224117&fb_sig_country=de&fb_sig_api_key=e39a74891fd234bb2575bab75e8f&fb_sig_app_id=32352348363&fb_sig=f6bbb27324aedf337e5f0059c4971
(The keys are fake here)
BUT! When I enter my production box URL as the deauthorization callback URL, the POST request is never made. I tested it with tcpdump: no request reaches my production machine. Why?
I checked with mtr the route from my production box to the IP address the request came from; all is OK, 0% packet loss.
The hostname, port, and path are correct (tested 1k times), and there is no firewall, IDS, or other system blocking my network interface.
Why is the POST callback not called, and how can I fix it?
How can I debug this to determine what the issue is?
You can try using the Facebook URL Debugger and see if Facebook's servers are able to reach your callback URL.
Viewing the information Facebook IS able to retrieve might help you debug this issue.
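One way to rule out basic reachability problems is to replay the callback yourself from a machine outside your network (a sketch: YOUR-PRODUCTION-HOST and the form fields are placeholders; the path matches the request shown above):
$ curl -ik -X POST 'https://YOUR-PRODUCTION-HOST/facebook/deauthorize' -d 'fb_sig_uninstall=1&fb_sig_user=0'
If the replay reaches the box but Facebook's POST still never arrives, the difference is usually on Facebook's side of the connection, which is exactly where the certificate-chain answer below comes in.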
I had the same issue with NGINX, and after hours of debugging I found this solution in the NGINX documentation:
Some browsers may complain about a certificate signed by a well-known
certificate authority, while other browsers may accept the certificate
without issues. This occurs because the issuing authority has signed
the server certificate using an intermediate certificate that is not
present in the certificate base of well-known trusted certificate
authorities which is distributed with a particular browser. In this
case the authority provides a bundle of chained certificates which
should be concatenated to the signed server certificate. The server
certificate must appear before the chained certificates in the
combined file:
$ cat www.example.com.crt bundle.crt > www.example.com.chained.crt
The resulting file should be used in the ssl_certificate directive:
server {
    listen              443 ssl;
    server_name         www.example.com;
    ssl_certificate     www.example.com.chained.crt;
    ssl_certificate_key www.example.com.key;
    ...
}
In short, you just need to concatenate the certificate and the bundle and use the result as your ssl_certificate.
I am receiving the POST requests from Facebook now.
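To verify what the server actually presents after the change (standard openssl usage; www.example.com is a placeholder):
$ openssl s_client -connect www.example.com:443 -servername www.example.com < /dev/null
The Certificate chain section of the output should now list the intermediate after the server certificate; if only the leaf appears, strict clients such as Facebook's crawler may refuse the connection even though browsers that already have the intermediate cached will accept it.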

Facebook gives 403 error for my website - Updated Information

I have a website, http://bowarrow.de, where I added Facebook OG tags. No matter what I try and what I change, I always get a 403 error in the debugger.
Though it can access my site somehow. I read every question about this, and in the last question I asked about it no one could really help me. So I decided to ask on Facebook and found the following:
In this case, your site is definitely returning a 403 error to at
least some of the requests from the debugger. This is something
happening in your code or hosting infrastructure
$curl -A "facebookexternalhit/1.1" -i 'http://bowarror.de/' HTTP/1.1
403 Forbidden Date: Mon, 03 Jun 2013 16:03:55 GMT Server: Apache
Content-Length: 2940 Content-Type: text/html
Host Europe GmbH – bowarrow.de [...]
I tried it myself and can confirm that I can't get any access with that Facebook user agent. I asked HostGator several times if there is a server problem on their side and they denied it. So I think it might have something to do with Host Europe, where my domain is registered?
I linked the domain to my hosting through A records because Host Europe doesn't support nameserver changes.
Any ideas, help?
Okay, I've probably found what caused it. The reason was that I use my domain from Host Europe with HostGator. Because Host Europe doesn't allow nameserver changes, I had to change the A records.
Unfortunately there were some AAAA (IPv6) records that I didn't change, because HostGator doesn't support IPv6 on my hosting.
Facebook was crawling these IPv6 addresses and reported a 403 in the debugger, because there was no IPv6 server it could reach.
Yesterday I deleted them and it started working almost immediately. See here: https://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Fwww.bowarrow.de%2F
Unfortunately it only works for the URL with www; without it I still get a 403.
See here: https://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Fbowarrow.de%2F
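A quick way to check for leftover records (standard dig usage, run against both the bare domain and the www host):
$ dig +short A bowarrow.de
$ dig +short AAAA bowarrow.de
Any address the AAAA query still returns is one Facebook's crawler may try, so the bare domain failing while www works is consistent with a stale AAAA record on the bare name only.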
For anyone using the 10Web Social Post Feed WordPress plugin, follow these steps:
Go to the Facebook Feed WD > Options page and press Uninstall.
Uninstall the plugin by following the steps in this video.
Navigate to the Plugins page and delete Facebook Feed WD.
Reinstall and activate it.
Re-authenticate your Facebook account and recreate your feeds.

Custom HTTP header fields stripped

My company sells a LAMP-based (where P = Perl, not PHP) application deployed as an appliance. A customer is attempting to integrate their SiteMinder SSO with our application, such that our appliance sits behind a proxy running a SiteMinder Apache plugin that acts as a gatekeeper. For our application to authenticate a user via SSO, we expect to see HTTP requests that include an SSO cookie (SMSESSION in this case) and a custom HTTP header variable containing the username.
However, when our Apache server receives HTTP requests from the SSO proxy, all custom HTTP headers appear to have been stripped, although the cookie is present. I have instrumented the Perl code to write the headers to a log file with the following code:
use CGI;  # the snippet assumes only the CGI module

my $q = CGI->new;
...
# With no arguments, $q->http() lists the available HTTP_* header names;
# with a name, it returns that header's value.
my %headers = map { $_ => $q->http($_) } $q->http();
my $headerDump = "Got the following headers:\n";
for my $header ( keys %headers ) {
    $headerDump .= "$header: $headers{$header}\n";
}
kLogApacheError("info", $headerDump);  # in-house logging helper
...and this is the output I get (slightly edited for confidentiality):
[Wed Mar 16 23:47:31 UTC 2011] [info] Got the following headers:
HTTP_COOKIE: s_vi=[CS]v1|26AE2FFD851D091F-4000012E400035C5[CE]; s_nr=1297899843493; [snip]
HTTP_ACCEPT_LANGUAGE: en-US,en;q=0.8
HTTP_ACCEPT_ENCODING: gzip,deflate,sdch
HTTP_CONNECTION: keep-alive
HTTP_ACCEPT: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
HTTP_ACCEPT_CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.3
HTTP_USER_AGENT: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.107 Safari/534.13
HTTP_HOST: [redacted].com
In other words, the custom HTTP headers I'm expecting are missing. When we redirect traffic from the proxy to a different Apache server (i.e. not our appliance), all 20+ custom headers show up as expected. This strongly suggests that it's our Apache server that is stripping the headers.
We have never run into a problem like this with other deployments, even with this particular SSO solution. I realize this is similar to another question on this site ( Server removes custom HTTP header fields ) but the suggestions there (such as a problem caused by running mod_security) don't apply.
Is there any other reason why our server might be stripping out the HTTP headers? Or is there possibly something else going on?
Thanks for any help!
Matt
Have you sniffed the raw HTTP traffic between the proxy and your Apache instance? If the necessary headers are already missing there, the problem is on the proxy side.
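For example, on the appliance itself (a generic capture; adjust the interface and the proxy address to your setup):
$ tcpdump -A -s 0 -i eth0 'tcp port 80 and host PROXY_IP'
The -A flag prints the packet payload as ASCII, so the request headers arriving from the proxy are visible verbatim, before Apache has had a chance to touch them.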
I finally figured this out, and it was pretty obscure...
Using HttpFox, it really looked like traffic was being redirected to the appliance rather than being forwarded. With redirects, cookies persisted but HTTP request headers did not. However, the SSO proxy rules were all "forwards", so we were completely stumped as to why redirects were showing up.
We knew that our application's logic redirects to /signin/ if the user isn't already authenticated, but we expected this would still be routed through the proxy. However, what we didn't realize is that there was a SiteMinder SSO option, enableredirectrewrite, that by default would handle "any redirects initiated by destination servers [by passing them] back to the requesting user". Once we set this flag to "yes", and redirectrewritablehostnames to "all", everything worked like magic.
(For reference, see a version of the SiteMinder manual here: http://www.scribd.com/doc/48749285/h002921e).
I recently had a problem where I could not get any custom HTTP headers passed to my PHP script.
It seems that Apache 2 running PHP 7 with FCGID was not allowing custom HTTP headers, removing or stripping them all.
Here is my fix:
http://kiteplans.info/2017/06/13/solved-apache-2-php-7-fcgid-not-allowing-removing-stripping-custom-http-headers/
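In case that link goes stale: the usual knob in this area is mod_fcgid's FcgidPassHeader directive, which forwards a named request header into the FastCGI environment (a guess at the shape of the fix rather than a summary of the article; X_CUSTOM_USER is a placeholder header name):
<IfModule mod_fcgid.c>
    # Explicitly pass this request header through to the FastCGI/PHP process.
    FcgidPassHeader X_CUSTOM_USER
</IfModule>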