My company sells a LAMP-based (where P = Perl, not PHP) application deployed as an appliance. A customer is attempting to integrate their SiteMinder SSO with our application, such that our appliance sits behind a proxy running a SiteMinder Apache plugin that acts as a gatekeeper. For our application to authenticate a user via SSO, we expect to see HTTP requests that include an SSO cookie (SMSESSION in this case) and a custom HTTP header variable containing the username.
However, when our Apache server receives HTTP requests from the SSO proxy, all custom HTTP headers appear to have been stripped, although the cookie is present. I have instrumented the Perl code to write the headers to a log file, using the following code:
use CGI;

my $q = CGI->new;
...
# Collect every HTTP_* request header that CGI.pm exposes
my %headers = map { $_ => $q->http($_) } $q->http();
my $headerDump = "Got the following headers:\n";
for my $header ( keys %headers ) {
    $headerDump .= "$header: $headers{$header}\n";
}
kLogApacheError("info", $headerDump);
...and this is the output I get (slightly edited for confidentiality):
[Wed Mar 16 23:47:31 UTC 2011] [info] Got the following headers:
HTTP_COOKIE: s_vi=[CS]v1|26AE2FFD851D091F-4000012E400035C5[CE]; s_nr=1297899843493; [snip]
HTTP_ACCEPT_LANGUAGE: en-US,en;q=0.8
HTTP_ACCEPT_ENCODING: gzip,deflate,sdch
HTTP_CONNECTION: keep-alive
HTTP_ACCEPT: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
HTTP_ACCEPT_CHARSET: ISO-8859-1,utf-8;q=0.7,*;q=0.3
HTTP_USER_AGENT: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.107 Safari/534.13
HTTP_HOST: [redacted].com
In other words, the custom HTTP headers I'm expecting are missing. When we redirect traffic from the proxy to a different Apache server (i.e., not our appliance), all 20+ custom headers show up as expected. This strongly suggests that it's our Apache server that is stripping the headers.
We have never run into a problem like this with other deployments, even with this particular SSO solution. I realize this is similar to another question on this site (Server removes custom HTTP header fields), but the suggestions there (such as a problem caused by running mod_security) don't apply.
Is there any other reason why our server might be stripping out the HTTP headers? Or is there possibly something else going on?
Thanks for any help!
Matt
Have you sniffed the raw HTTP traffic between the proxy and your Apache instance? If the necessary headers are already missing on the wire, the problem is on the proxy side.
I finally figured this out, and it was pretty obscure...
Using HttpFox, it really looked like traffic was being redirected to the appliance rather than being forwarded. With the redirects, cookies persisted but HTTP request headers did not. However, the SSO proxy rules were all "forwards", so we were completely stumped as to why redirects were showing up.
We knew that our application's logic redirects to /signin/ if the user isn't already authenticated, but we expected this would still be routed through the proxy. What we didn't realize is that there is a SiteMinder SSO option, enableredirectrewrite, which by default handles "any redirects initiated by destination servers [by passing them] back to the requesting user". Once we set this flag to "yes" and redirectrewritablehostnames to "all", everything worked like magic.
(For reference, see a version of the SiteMinder manual here: http://www.scribd.com/doc/48749285/h002921e).
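In case it helps anyone else, the relevant Agent Configuration Object settings ended up being simply the two mentioned above (treat this as a sketch; the exact syntax depends on how your ACO is managed):

enableredirectrewrite=yes
redirectrewritablehostnames=all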
I recently had a problem where I could not get any custom HTTP Headers passed to my PHP Script.
It seems that Apache 2 running PHP 7 with FCGID was not allowing custom HTTP headers through: they were being removed or stripped.
Here is my fix:
http://kiteplans.info/2017/06/13/solved-apache-2-php-7-fcgid-not-allowing-removing-stripping-custom-http-headers/
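For anyone who can't reach the link: one commonly cited way to let a custom header through mod_fcgid is its FcgidPassHeader directive, along these lines (the header name here is just an example, and this may or may not match the exact fix described at the link):

FcgidPassHeader X-Custom-Header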
I want to capture traffic from a host using HTTP, but I do not see a response coming back. If I close Fiddler, my application runs as normal.
I see '-' in the Result column, where there should have been an HTTP response code. If I manually execute the request using Composer, I get a 200 response. Fiddler is able to capture traffic from all other web applications without issue.
I have installed the Fiddler certificate. Troubleshooting Mode returns 200. The host does not use HTTPS, but I have enabled Capture HTTPS Connects anyway.
I am using Fiddler v5.0.20182
Some applications perform certificate pinning. Web applications can also perform certificate pinning, e.g. via HTTP Public Key Pinning (HPKP). If you have ever used the web application in your browser without Fiddler, the web app's public key has been downloaded and cached by the browser.
Afterwards the Fiddler root certificate is no longer accepted for that site/app, even if it has been installed correctly. You should be able to identify such problematic connections in Fiddler if you only see a CONNECT request but no subsequent requests to the same domain.
To delete the HPKP data in your web browser, use a fresh profile or clear the complete browser cache. Afterwards, only use the browser with the Fiddler proxy and SSL decryption activated. As far as I know, Fiddler will remove HPKP data from responses, so the web application should also work with Fiddler in between.
I think you should be able to uncheck the HTTPS options, i.e. the boxes that appear checked in Fiddler's HTTPS settings. Or you might be able to skip decryption by adding the host in the box below that, where it says "Skip decryption for the following hosts".
I am using RestyGWT to communicate with a remote service on JBoss AS7, but I am getting the following error:
OPTIONS http://localhost:8080/remoteService No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://127.0.0.1:8888' is therefore not allowed access.
VM482:81
XMLHttpRequest cannot load http://localhost:8080/remoteService No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://127.0.0.1:8888' is therefore not allowed access.
I have enabled the following headers and access control via @OPTIONS on the back-end server (a simplified sketch follows the list):
"Access-Control-Allow-Origin", "*"
"Access-Control-Allow-Methods", "POST, GET, UPDATE, DELETE, OPTIONS"
"Access-Control-Allow-Headers", "content-type,x-http-method-override"
My client interface to communicate with the server is:
#Path("/remoteService")
public interface MonitorMeService extends RestService {
#Path(value="/getBooks")
#GET
#Consumes(MediaType.APPLICATION_JSON)
void getBooks(MethodCallback<List<Books>> callback);
}
Can anyone please tell me what I am missing? What CORS handling am I missing?
I was using CORS successfully with RestyGWT until I hit a wall trying to get session cookies to work properly. I use the Play framework on the server, and the browser was not cooperating with the Set-Cookie header in responses to CORS-moderated interactions.
I found out that I could completely dispense with all the CORS directives (and also no longer require the use of JSONP) by moving to a simple reverse proxy setup on the server.
This made everything simpler and the cookies work properly now.
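The proxy part is nothing fancy; with Apache in front it is essentially just mod_proxy directives along these lines (the path and port here are made up for illustration, assuming the Play app listens on 9000):

ProxyPass /api http://localhost:9000/api
ProxyPassReverse /api http://localhost:9000/api

With the GWT app and the API served from the same origin, the browser no longer treats the calls as cross-origin, so no CORS headers (or JSONP) are needed.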
If you are interested in more details, please respond to this - I'll be happy to post them. Thanks. JR
Apart from OPTIONS, you also have to set the Access-Control-Allow-Origin header for the other methods: POST, GET, etc.
[EDIT]
I've never used RestyGWT, so I don't know how to configure RestyGWT servlets to set headers, but I use a filter I wrote some time ago when I want to configure CORS in my server container. It works for any server servlet (RPC, RF, JSON, etc.). I suggest using a filter like this instead of dealing with headers in your app.
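A minimal sketch of that kind of filter (not the exact one I wrote, but the same idea):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal CORS filter sketch: echoes the caller's Origin (required when
// credentials/cookies are involved) and answers preflight (OPTIONS)
// requests before they reach the servlet.
public class CorsFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) resp;

        String origin = request.getHeader("Origin");
        if (origin != null) {
            response.setHeader("Access-Control-Allow-Origin", origin);
            response.setHeader("Access-Control-Allow-Credentials", "true");
            response.setHeader("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS");
            response.setHeader("Access-Control-Allow-Headers", "Content-Type, X-HTTP-Method-Override");
        }

        if ("OPTIONS".equalsIgnoreCase(request.getMethod())) {
            // Preflight: reply immediately, no need to hit the servlet.
            response.setStatus(HttpServletResponse.SC_OK);
            return;
        }

        chain.doFilter(req, resp);
    }

    @Override
    public void init(FilterConfig filterConfig) {}

    @Override
    public void destroy() {}
}

You would map the filter to your REST servlet's path in web.xml so it runs in front of every request.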
I have a web application running on Weblogic. The HTTPS URL to this application is https://localhost:7002/MyApp.
Whenever I change the URL in the address bar to http://localhost:7002/MyApp, it automatically redirects to the original HTTPS-based URL.
My requirement is to take the user to some kind of custom error page, if they request the HTTP URL. For example, http://localhost:7002/MyApp should redirect to https://localhost:7002/MyApp/error.jsp.
Is this redirection possible to configure in Weblogic?
You mentioned that your https URL is:
https://localhost:7002/MyApp
And assuming that your http URL is:
http://localhost:7001/MyApp
When you say you change the https URL in browser to:
http://localhost:7002/MyApp
This is incorrect. If you provide such a URL, WLS will accept the request on the secure port 7002 but will fail to identify the protocol (it expected HTTPS but you gave HTTP). Instead of a redirection, you would get some error in the browser and definitely the following error in the WLS logs:
<May XX, 2013 XX:XX:17 PM IST> <Warning> <Security> <BEA-090475> <Plaintext data for protocol HTTP was received from peer
XXXXXXXXXXXXXX - 192.169.0.100 instead of an SSL handshake.>
I assume you are changing the URL to:
http://localhost:7001/MyApp
Please correct/update your issue description.
Now, on to your requirement: it seems nearly impossible to do this via WLS configuration.
As a workaround, you can create a servlet filter and call isSecure() on the ServletRequest to determine whether the request was made over a secure protocol. If it was not, you can redirect to some custom page. You would also need to disable the automatic redirection to HTTPS that you have reported for your application.
Ref: http://docs.oracle.com/javaee/6/api/javax/servlet/ServletRequest.html#isSecure%28%29
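A rough sketch of such a filter (the class name and error page path are just examples; the 7002 port is taken from your question):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of a filter that sends non-SSL requests to a custom page
// instead of letting them be auto-redirected to the original HTTPS URL.
public class RequireHttpsFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) resp;

        if (!request.isSecure()) {
            // Request arrived over plain HTTP: show the custom error page.
            response.sendRedirect("https://" + request.getServerName()
                    + ":7002" + request.getContextPath() + "/error.jsp");
            return;
        }
        chain.doFilter(req, resp);
    }

    @Override
    public void init(FilterConfig filterConfig) {}

    @Override
    public void destroy() {}
}

Map the filter to /* in the application's web.xml so every request passes through it.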
I am developing an iPad application and using the ASIHTTPRequest library (https://github.com/pokeb/asi-http-request) to make requests to my web server, which runs CentOS 6.2 and is equipped with Apache 2.2 and mod_ssl enabled.
When I make an HTTPS request to the server, sometimes I get a null response. Absolutely nothing. As if the server were completely dead. Sometimes it works just fine, returning the expected response. There is no rhyme or reason to when the response is null and when it's fine.
The server uses a dummy security certificate
I am setting validatesSecurityCertificate to NO
I am setting SSLVerifyClient to none in httpd.conf
Note, HTTPS requests sent through a web browser work fine (after you tell the browser to proceed without a valid security certificate). But all HTTPS requests sent through the HTTP client get: "Zero-length response returned from the server."
The trick to using ASIHTTPRequest well is that you don't use it. It's deprecated by its author (allseeing-i.com/ASIHTTPRequest). I suggest using AFNetworking, RestKit, or even NSURLConnection.
As it is, we have no code of yours to see, but when you're experiencing random issues with a library that hasn't been worked on in years, I would say start by using a different library.
I am completely new to SiteMinder and SSO in general. I poked around on SO and CA's web site all afternoon for a basic example and can't find one. I don't care about setting up or programming SM or anything like that. All of that is already done by someone else. I just want to adapt my JS web app to use SM for authentication.
I get that SM will add an HTTP header with a key such as SM_USER that will tell me who the user is. What I don't get is: what prevents anyone from adding this header themselves and bypassing SM entirely? What do I have to put in my server-side code to verify that the SM_USER really came from SM? I suppose secure cookies are involved...
The SM Web Agent installed on the web server is designed to intercept all traffic and check whether the resource request is:
1. Protected by SiteMinder
2. Coming from a user with a valid SMSESSION (i.e., already authenticated)
If 1 and 2 are true, then the WA checks the Siteminder Policy Server to see if the user is Authorized to access the requested resource.
To ensure that you don't have HTTP Header injections of user info, the SiteMinder WebAgent will rewrite all the SiteMinder specific HTTP Header information. Essentially, this means you can "trust" the SM_ info the WebAgent is presenting about the user since it is created by the Web Agent on the server and not part of the incoming request.
Because all traffic should pass through the SiteMinder Web Agent, even if the user sets this header it will be overwritten/removed.
All Siteminder architectures do indeed make the assumption that the application just has to trust the "SM_" headers.
In practice, this may not be sufficient depending on the architecture of your application.
Basically, you have 3 cases:
The Web Agent is installed on the web server where your application runs (typical case for Apache/PHP applications): as stated above, you can trust the headers as no requests can reach your application without being filtered by the web agent.
The Web Agent is installed on a different web server than the one where your application runs, but on the same machine (typical case: SM Agent installed on an Apache front-end serving a JEE Application Server): you must ensure that no requests can directly reach your application server. Either you bind your application server to the loopback interface or you filter the ports on the server.
The Web Agent runs on a reverse proxy in front of your application. Same remark: the only solution here is to implement an IP filter in your application to only allow requests that come from your reverse proxy (see the sketch after this list).
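For that third case, the check can be as simple as a servlet filter like the following (a sketch; the proxy address is obviously environment-specific):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

// Sketch: only accept requests whose peer address is the reverse proxy
// (where the SiteMinder Web Agent runs); reject everything else.
public class ProxyOnlyFilter implements Filter {

    private static final String PROXY_ADDRESS = "10.0.0.5"; // placeholder

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        if (!PROXY_ADDRESS.equals(req.getRemoteAddr())) {
            ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        chain.doFilter(req, resp);
    }

    @Override
    public void init(FilterConfig filterConfig) {}

    @Override
    public void destroy() {}
}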
SiteMinder r12.52 contains a new functionality named Enhanced Session Assurance with DeviceDNA™. DeviceDNA can be used to ensure that the SiteMinder Session Cookie has not been tampered with. If the Session is replayed on a different machine, or from another brower instance on the same machine, DeviceDNA will catch this and block the request.
Click here to view a webcast discussing new features in CA SiteMinder r12.52
A typical enterprise architecture will be: web server (SiteMinder agent) + app server (applications).
Say IP filtering is not enabled, and web requests are allowed directly to the app server, bypassing the web server and the SSO agent.
If applications have to implement a solution to assert that the request headers/cookies have not been tampered with or injected, do we have any solution similar to the following?
1. Send the SM_USERID encrypted in a separate cookie, or encrypted (symmetrically/asymmetrically) along with the SMSESSION id.
2. The application uses the key to decrypt the SMSESSION or SM_USERID to retrieve the user id, session expiry status, and any other additional details and authorization details if applicable (a rough sketch is below).
3. The application now trusts the user_id and performs authentication.
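A rough sketch of what I have in mind for step 2, assuming a symmetric (AES-GCM) scheme and a made-up cookie layout of IV + ciphertext over "userId|expiry"; the class name, cookie format, and key handling are all hypothetical:

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public final class SmUserCookie {

    // Hypothetical helper: assumes the agent-side component encrypted
    // "userId|expiryEpochSeconds" with AES-GCM using a key shared
    // out-of-band with the application, and prepended the 12-byte IV.
    public static String decryptUserId(String cookieValue, byte[] sharedKey) throws Exception {
        byte[] raw = Base64.getUrlDecoder().decode(cookieValue);

        byte[] iv = new byte[12];
        byte[] ciphertext = new byte[raw.length - 12];
        System.arraycopy(raw, 0, iv, 0, 12);
        System.arraycopy(raw, 12, ciphertext, 0, ciphertext.length);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE,
                new SecretKeySpec(sharedKey, "AES"),
                new GCMParameterSpec(128, iv));

        // GCM authenticates the ciphertext, so any tampering makes doFinal() throw
        String plaintext = new String(cipher.doFinal(ciphertext), StandardCharsets.UTF_8);

        String[] parts = plaintext.split("\\|");
        long expiry = Long.parseLong(parts[1]);
        if (expiry < System.currentTimeMillis() / 1000) {
            throw new IllegalStateException("SSO assertion expired");
        }
        return parts[0]; // the user id the application can now trust
    }
}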