We're using ruby-saml to establish our app as a service provider while using Google as an identity provider, though I do not think this question is specific to Ruby or that project.
I have seen this answer from the point of view of an IdP, but I'm hoping to see one from the point of view of an SP, because I have a hard time believing Google is getting the signature on the response wrong.
On top of that, we have successfully integrated with other Google accounts, and those continue to work while this one is broken.
As the service provider, how can we figure out the source of an "Invalid Signature on SAML Response" error from the identity provider?
We had the same error but a different solution. Our problem was invalid characters in the XML response, which made both parsing and validation fail. We could have substituted the characters before parsing, but then validation would still fail because of the changed content.

The solution was to base64-decode the response and open the XML in an editor (or an online XML validator) to find the problematic data. In our case it was the attribute named "objectSid" from AD. We then changed the simplesamlphp config so that it sends only the data we need. Now the response validates and parses without problems.

By the way, in settings.idp_cert (using the ruby-saml gem) we include both the BEGIN CERTIFICATE and END CERTIFICATE headers.
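For reference, a minimal ruby-saml sketch of both points: decoding the raw response locally for inspection, and passing the IdP certificate with its PEM headers intact. The file names and the params hash are placeholders for whatever your framework provides.

require 'base64'
require 'onelogin/ruby-saml'

# Decode the SAMLResponse POST parameter so the raw XML can be inspected
# in an editor for invalid characters or unexpected attributes.
xml = Base64.decode64(params[:SAMLResponse])
File.write('decoded_response.xml', xml)

settings = OneLogin::RubySaml::Settings.new
# Keep the BEGIN/END CERTIFICATE lines in the PEM you assign here.
settings.idp_cert = File.read('idp_cert.pem')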
Also, there are browser add-ons that will intercept the SAML conversation for debugging purposes.
Also check these online tools for troubleshooting:

Validate a response: https://www.samltool.com/validate_response.php (be careful not to paste your private keys online; only the public cert is needed for response validation)

Validate XML: https://www.xmlvalidation.com

Base64 decode: https://www.samltool.com/base64.php
I ended up using the suggestion, from the answer I referenced in the question, to use XMLSec, and ran it over the base64-decoded response and the certificate(s) in the metadata file from Google.
That gave me the confidence that there was indeed something wrong with the certificates in the IdP metadata XML file that Google provided.
I then noticed that my working accounts had only one certificate in the file, while this one had two. So I removed one cert, and validation still failed; then I restored it and removed the other, and validation succeeded.
Then I found out that I could keep both certs in the file as long as the working one came first.
I am not sure why there was a difference, and I do not know why Google outputs the certs in an order that XMLSec cannot use to verify the signature.
Perhaps someone with more knowledge than myself can chime in on that, but for now, I'm happy to report that simply reversing the order in which the certs appeared in the IdP metadata file from Google allowed the signature to be verified.
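If you run into the same two-cert situation, one quick check is to validate the response against each certificate in turn. A rough ruby-saml sketch, where the PEM strings and the params hash are placeholders:

require 'onelogin/ruby-saml'

# PEM strings for the two certs found in the IdP metadata (placeholders)
[cert_one_pem, cert_two_pem].each_with_index do |pem, i|
  settings = OneLogin::RubySaml::Settings.new
  settings.idp_cert = pem
  response = OneLogin::RubySaml::Response.new(params[:SAMLResponse], settings: settings)
  puts "cert #{i + 1}: #{response.is_valid? ? 'signature verified' : response.errors.inspect}"
end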
I needed to include this setting as well. YMMV; the default algorithm seems to be SHA-1, but the fingerprint I was calculating with the openssl utility used SHA-256:
settings.idp_cert_fingerprint_algorithm = "http://www.w3.org/2000/09/xmldsig#sha256"
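For completeness, here is a sketch of computing a SHA-256 fingerprint consistent with that setting (the certificate file name is a placeholder; the same value can be cross-checked with openssl x509 -noout -fingerprint -sha256):

require 'openssl'

cert = OpenSSL::X509::Certificate.new(File.read('idp_cert.pem'))
# Colon-separated hex digest of the DER form, a common fingerprint format
digest = OpenSSL::Digest::SHA256.hexdigest(cert.to_der)
settings.idp_cert_fingerprint = digest.scan(/../).join(':').upcase
settings.idp_cert_fingerprint_algorithm = "http://www.w3.org/2000/09/xmldsig#sha256"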
We’re trying to implement a WOPI host following the protocol to integrate with OWA, as documented here, and we’re having issues with a few points:
We have implemented a simple host that is only capable of viewing files, that is, it implements the CheckFileInfo and GetFile views. In a test environment, the flow is working and we’re able to view the files in OWA. The point is, when executing the Wopi Validator (the web and the docker version), we’re having an error in the GetFile operation because the validator is trying to access the endpoint with two // at the end:
host/wopi/files/file_id//contents
Is this a known issue that is happening only in the validator? Why are the two ‘/’ being appended to the end of the WopiSrc? How can we address this issue?
We have read some posts here stating that editing support is required in order to officially validate our OWA integration with Microsoft. Is this true? Aren't CheckFileInfo and GetFile the only views necessary to implement a simple WOPI host capable only of viewing files? We're just passing the required information in the response of the CheckFileInfo operation; we're not using FileUrl or any parameter other than the required ones. As far as I can see, these two views are the only ones required for viewing files with OWA, as stated here.
Additionally, we’re having an issue in the first part of the flow, when the browser sends a request to OWA passing the token and the WopiSrc. We were only able to make the flow work by passing the token in the query string via the GET method. If we put it in a JSON body with a POST method, OWA simply ignores it and does not attempt to call the WOPI host via the WopiSrc at all. Could someone enlighten us a bit on what may be happening here?
Furthermore, we’re stuck on one point of token validation. The docs are crystal clear that the token is generated by the host and that it should be unique for a single user/file combination. We have done that. The problem is: how are we supposed to know which user is trying to access a resource when the request comes from OWA? For example, when OWA calls the host for the CheckFileInfo and GetFile views, it passes us the token, but how can we know the user information as well? Since the token is for a single file (which we have in the address of the endpoint being accessed) and for a single user, how can we validate the user at this point? We have not found any header or placeholder value that could be used to extract this information from a request coming from OWA, and we’re a bit lost here. We’ve thought about appending the user information to the token and then extracting it back, but from what I can see, doing that only ensures that the token has not been modified between requests. Does anyone have any idea?
Regarding validation with Microsoft: yes, it demands the edit functionality.
For the POST situation, the submission must be made as a form post, not as JSON.
The token validation is completely open; you must choose whatever approach you think best. JWT is a good alternative in this case.
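As a sketch of that JWT idea (the claim names and secret here are illustrative, not part of the WOPI spec): bind the user and file into the access token when you build the host page, then verify and unpack it when OWA calls back.

require 'jwt'

HMAC_SECRET = 'server-side secret' # illustrative placeholder

# When building the host page: bind user and file into the access token.
token = JWT.encode({ user: 'alice', file_id: 'file123', exp: Time.now.to_i + 600 },
                   HMAC_SECRET, 'HS256')

# When OWA calls CheckFileInfo/GetFile with that token: verify and unpack.
payload, _header = JWT.decode(token, HMAC_SECRET, true, algorithm: 'HS256')
payload['user'] # => "alice", the user behind this request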
I've gone through the Authy API documentation. The problem is that there can be trust issues with users providing me their PII, so I tried the non-PII approach, for which I need a QR code to be scanned by my users.
Even though I've followed the exact process described in the documentation, when I test-drive my application and scan the QR code (which gets generated in a PHP file I've made), the Authy app says:
Account couldn't be added. Please contact your service provider
P.S. I can provide the PHP code I've written that generates the QR code. However, I just want to know first whether this is a known issue with accounts failing to be added to a Twilio Authy application.
I'm seeing the same error. I have not been able to get a response from Authy dev support on what the error means (whether it's a config issue, an issue with my code, etc.). But https://jwt.io/ says the token is correct, so I'm assuming it's an Authy config issue.
Ensure that the expiration date on your JWT token is <= (issuing date + 15 minutes).
That was the reason I got the error 'Account couldn't be added. Please contact your service provider'.
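A minimal Ruby sketch of that timing constraint (the payload shows only the iat/exp relationship, not the full set of claims the Authy registration docs require, and api_secret is a placeholder):

require 'jwt'

now = Time.now.to_i
payload = {
  iat: now,
  exp: now + 15 * 60 # must not exceed iat + 15 minutes, or registration fails
  # ...plus whatever claims the Authy registration docs require
}
token = JWT.encode(payload, api_secret, 'HS256')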
I have a simple client socket application with which I want to access a website. In order to access the Internet, my client must go through an HTTP proxy server (I'm using Microsoft Forefront Threat Management Gateway). The proxy server requires authentication and is configured to accept Kerberos via GSSAPI.
In my client, I use Microsoft's SSPI:
First, I call AcquireCredentialsHandle, which succeeds and returns SEC_E_OK.
Next, I call InitializeSecurityContext, which also succeeds and returns SEC_E_OK.
So far so good. But now, I need to submit the token to the proxy server for authorization and this is the part that is giving me problems.
If I connect to my proxy server using Internet Explorer, I can watch the packet exchanges via Wireshark. IE negotiates a ticket with Kerberos and appears to submit it via a Proxy-Authorization header. The header contents appear to be base64 encoded.
If I simply take the token that is returned from InitializeSecurityContext, base64 encode it and send the result to the proxy server via a header like Proxy-Authorization: Negotiate <base64Data>, the authentication fails.
I feel like I'm close but still missing something. One site discussed using EncryptMessage on a token before sending it. Another discussed using mutual authentication (I don't think IE is using mutual authentication, because the client only seems to send authorization once and there is no feedback data from the server with which to call InitializeSecurityContext a second time). Another site outlined sandwiching the token between different SEC_BUFFER types (padding, data, etc.) and encrypting. I suspect this is what I need to do, but I am not finding much documentation on how to do it.
Any insights or suggestions you may have would be appreciated.
UPDATE 7/19/2014: To be clear, I am asking how to use SSPI to calculate the "base64Data" field (as referenced above). Base64-encoding the contents of the SECBUFFER_TOKEN buffer had been my initial guess, but the server does not accept the result, so it is clearly invalid.
Further research suggests that the token must be "wrapped" (a.k.a. "EncryptMessage" via SSPI) and that, to encrypt in a manner compatible with GSSAPI, three buffers must be used (in the order SECBUFFER_TOKEN, SECBUFFER_DATA, SECBUFFER_PADDING). I tried this yesterday but did not find success.
http://msdn.microsoft.com/en-us/library/ms995352.aspx
http://msdn.microsoft.com/en-us/magazine/cc301890.aspx
Do you really want to code the proxy interaction yourself? I would rather recommend using libcurl on Windows. That works fine with TMG here at work.
The reason your copy-and-paste fails is that the acceptor detects your resend as a replay. Kerberos is replay-proof; you cannot steal a ticket and reuse it.
Consider that you have called InitializeSecurityContext and, on a successful SEC_E_OK return, received a pointer to a SecBufferDesc. You must access the included array, read the PSecBuffer struct, take its pvBuffer element, and pass that void*, cast to unsigned char*, to a function that base64-encodes the raw bytes into a char*. Do not forget to pass in the cbBuffer length. The result is the base64-encoded ticket for your HTTP proxy.
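In other words, whatever language does the extraction, the header is just the raw token bytes base64-encoded. A short sketch in Ruby, assuming token_bytes already holds the cbBuffer bytes copied out of pvBuffer:

require 'base64'

# token_bytes: the raw ticket copied from the SECBUFFER_TOKEN buffer (placeholder)
header = "Proxy-Authorization: Negotiate #{Base64.strict_encode64(token_bytes)}"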
You can use EncryptMessage, but not with HTTP; HTTP uses TLS. If you want to use EncryptMessage, use plain sockets. EncryptMessage will transparently encrypt the entire traffic between your client and the server.
By the way: the proxy will return a ticket to IE because IE always requests mutual auth. You should do the same. Therefore, you must loop the init context until you receive SEC_E_OK and not SEC_I_CONTINUE_NEEDED.
I am having a very strange issue using Eclipse with the Amazon integration.
If I right-click on an item in my bucket and generate a presigned URL, then use this URL in a browser, I get the following error:
SignatureDoesNotMatch
The request signature we calculated does not match the signature you
provided. Check your key and signing method.
I have tried changing the keys in my Amazon configuration; I have also tried generating the key for the item from my server, but access is also denied.
The credentials I'm using are the ones with the highest authority.
Is this a bug, or is some special configuration needed, like enabling the generation of presigned URLs?
If anyone has any experience with this, it will be greatly appreciated.
This is quite a normal situation for Eclipse. I would suggest you use the SDK for that.
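For illustration, this is what generating a presigned GET URL looks like with the Ruby SDK (bucket, key, and region are placeholders; the Java SDK that the Eclipse toolkit wraps has an equivalent GeneratePresignedUrlRequest):

require 'aws-sdk-s3'

s3  = Aws::S3::Resource.new(region: 'us-east-1')     # placeholder region
obj = s3.bucket('my-bucket').object('path/to/item')  # placeholder bucket/key
url = obj.presigned_url(:get, expires_in: 900)       # URL valid for 15 minutes
puts url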
I'm working on a REST web service, and in particular on authentication methods for browser-based requests (using JSONP or cross-domain XHR requests/XDomainRequest).
I've done some research into OAuth and also Amazon's AWS. The big drawback of both is that I need to do either of the following:
Store secret tokens in the browser
Let a server-side script handle the signing. Basically I'd first make a request to a server of mine to get a specific pre-signed JavaScript request, which I'd then use to connect to the real REST server.
What are some other options or suggestions?
Well, the only true answer here is proxying through a server, using sessions/cookies to authenticate, and of course using SSL. Sorry for answering my own question.
Yes, JSONP call authentication is tough, because the browser client needs to know the shared secret.
An option would be to make the endpoint anonymous (no authentication necessary). This comes with other security holes (the server is open to attacks; anyone can call it), but you could handle it by exposing only a very limited resource and/or using rate limiting. With rate limiting, only a certain number of calls are allowed from one client in a certain window of time. It works by identifying the client (e.g., by source IPs or other client fingerprints).
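A toy sketch of that rate-limiting idea (in-memory and single-process; the window and limit are arbitrary):

WINDOW = 60 # seconds
LIMIT  = 30 # calls allowed per client per window

# client_ip => timestamps of recent calls
calls = Hash.new { |h, k| h[k] = [] }

def allowed?(calls, client_ip)
  now = Time.now.to_f
  calls[client_ip].reject! { |t| now - t > WINDOW } # drop calls outside the window
  calls[client_ip] << now
  calls[client_ip].size <= LIMIT
end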
I once experimented with one-time tokens, but they all somewhat failed, because you have the problem of retrieving the token itself and of protecting against repeated retrievals of the token by bots (which comes back to the need for rate limiting).
I haven't tried this myself, but you can try the following (I am pretty sure I will get some feedback):
On the server side, generate a timestamp. Using HMAC-SHA256, generate a key for that timestamp using a password, and send the generated key and timestamp in the HTML.
When you make the AJAX call to the web service (assuming it is a different server), send the key and the timestamp along with the request. Check that the timestamp is within 5-15 minutes.
If it is, do the HMAC-SHA256 with the same password and check whether the generated key is the same.
Also, on the client side, you will have to check that your timestamp is still valid before making the call.
You can generate the key using the following URL:
http://buchananweb.co.uk/security01.aspx
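A rough Ruby sketch of that scheme (the secret and the 15-minute window are placeholders):

require 'openssl'

SECRET = 'server-side password' # placeholder; never shipped to the browser

# Server side: issue a timestamp plus its HMAC key, embedded in the HTML.
timestamp = Time.now.utc.to_i.to_s
key = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA256'), SECRET, timestamp)

# Web service side: recompute and compare on each AJAX request.
def valid?(timestamp, key, window = 15 * 60)
  return false if (Time.now.utc.to_i - timestamp.to_i).abs > window # stale timestamp
  expected = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new('SHA256'), SECRET, timestamp)
  expected == key # use a constant-time comparison in production
end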