Two Factor Issuer - django-two-factor-auth

I have the django-two-factor-auth package working on my development server. I noticed, however, that the Google Authenticator app shows the ip:port as the issuer after scanning the QR code. Is there a way to set the issuer to something else?

It seems that django-two-factor-auth uses the function django.contrib.sites.shortcuts.get_current_site to determine the issuer.
In order to change the issuer, you need to change the name field of the existing Site object to your desired value; this can be done through the Django admin site.
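If you prefer to do it in code rather than through the admin, a minimal sketch (assuming the default sites framework setup; the issuer name and domain below are placeholders) would be:

```python
# Run inside `python manage.py shell`; assumes the default SITE_ID configuration.
from django.contrib.sites.models import Site

site = Site.objects.get_current()   # the Site that get_current_site() will return
site.name = "My Company"            # placeholder issuer name shown in the authenticator app
site.domain = "example.com"         # optional: tidy up the domain as well
site.save()
```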

Related

Can I whitelist all domains for Keycloak in the development environment?

Let's say we have a lot of projects: Project1, Project2, etc., and their local development domains are example1.local, example2.local, and so on.
Now we have set up a Keycloak instance on our development machine, with a Development realm inside it and an AdminPanel client in that realm, and we want to use it for all of our projects.
We can manually add https://example1.local/* and https://example2.local/* etc. to valid redirect URLs and web origins.
But this means that we need to add each and every project we have and we do many many projects per year.
We tried https://* but it did not let us login complaining about invalid redirect_uri.
Is it possible to whitelist every domain for Keycloak?
You should be able to do that. I suggest checking your configuration again. Something like this works perfectly for my scenario, which is the same as yours. The only difference is that I created a dedicated client for my applications, but it's still a single client for many dev environments:
Valid Redirect URIs: https://* or https://*.local
Web Origin: *
Don't put anything extra for Web Origin, just the *. It is only needed if, for example, you want to use a swagger-ui hosted somewhere else; it allows Swagger from any domain to ask Keycloak for a token. If you don't put the *, the swagger-ui (or any tool like that) would not be able to fetch a token because of a CORS error.
It's a minor thing, but worth mentioning that since you put https:// in the config, the client app should also be accessed over https. If someone types http by mistake, the same error is returned.
We tried https://* but it did not let us login complaining about invalid redirect_uri.
Unless you are working in a testing environment, or you want to get hacked, DO NOT DO THIS in a production environment. In the OAuth 2.0 Security Best Current Practice you can read an explanation of an exploit based on this misconfiguration.
Therefore, you should make your registered redirect URIs as specific as feasible; simply using a wildcard is a big no-no.
But this means that we need to add each and every project we have and we do many many projects per year.
Wouldn't it be possible to automate this with a script? Get the project names and then call the Keycloak Admin API to add those redirect URIs to the client.
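As a rough illustration of that idea, a Python sketch against the Keycloak Admin REST API might look like the following; the server URL, realm name, client ID, admin credentials and project list are all placeholders, so adjust them to your setup:

```python
# Rough sketch: append redirect URIs for new projects to an existing Keycloak client.
# All names, credentials and URLs below are placeholders.
import requests

KEYCLOAK = "https://keycloak.dev.local"
REALM = "Development"
CLIENT_ID = "AdminPanel"
PROJECTS = ["example1.local", "example2.local"]   # e.g. read these from an inventory file

# 1. Obtain an admin access token
token = requests.post(
    f"{KEYCLOAK}/realms/master/protocol/openid-connect/token",
    data={"grant_type": "password", "client_id": "admin-cli",
          "username": "admin", "password": "admin-password"},
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Look up the client's internal id
client = requests.get(
    f"{KEYCLOAK}/admin/realms/{REALM}/clients",
    params={"clientId": CLIENT_ID}, headers=headers,
).json()[0]

# 3. Merge in the new redirect URIs and update the client
uris = set(client.get("redirectUris", []))
uris.update(f"https://{domain}/*" for domain in PROJECTS)
client["redirectUris"] = sorted(uris)

requests.put(
    f"{KEYCLOAK}/admin/realms/{REALM}/clients/{client['id']}",
    json=client, headers=headers,
).raise_for_status()
```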

How to debug an Invalid Signature on SAML Response

We're using ruby-saml to establish our app as a service provider while using Google as an identity provider, though I do not think this question is specific to Ruby or that project.
I have seen this answer from the point of view of an IdP, but I'm hoping to see one from the point of view of an SP, because I have a hard time believing Google is getting the signature on the response wrong.
On top of that, we have successfully integrated with other Google accounts, and they work at the same time this one is broken.
As the service providers, how can we figure out the source of an Invalid Signature on SAML Response from the identity provider?
We had the same error but a different solution. Our problem was invalid characters in the XML response; both parsing and validation failed. We could substitute the characters before parsing, but then the validation would still fail because of the changed content. The solution was to base64-decode the response and open the XML in an editor (or an online XML validator) to find the problematic data. In our case it was the attribute named "objectSid" from AD. We then changed the simplesamlphp config so that it sent only the data we needed. Now the response validates and parses without problems. Btw, in settings.idp_cert (using the ruby-saml gem) we include both the "BEGIN CERTIFICATE" and "END CERTIFICATE" headers.
There are also browser add-ons that will intercept the SAML conversations for debugging purposes.
Also check these for online troubleshooting:
validate response:
https://www.samltool.com/validate_response.php
(be careful not to paste your private keys online; only the public cert is needed for response validation)
validate xml:
https://www.xmlvalidation.com
online base64 decode:
https://www.samltool.com/base64.php
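If you'd rather not paste a response into an online tool at all, the base64 decoding is easy to do locally. A small sketch, assuming saml_response.b64 is a file you created beforehand containing the captured SAMLResponse POST parameter:

```python
# Decode and pretty-print a captured SAMLResponse locally.
# Assumes the raw base64 value was saved to saml_response.b64 beforehand.
import base64
import xml.dom.minidom

with open("saml_response.b64") as f:
    captured = f.read().strip()

xml_bytes = base64.b64decode(captured)
print(xml.dom.minidom.parseString(xml_bytes).toprettyxml(indent="  "))
```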
I ended up using the suggestion to use XMLSec from the answer I referenced in the question, and ran it against the decoded base64 response and the certificate(s) in the metadata file from Google.
That gave me the confidence that there was indeed something wrong with the certificates in the IdP metadata XML file that Google provided.
I then noticed that my working accounts only had 1 certificate in the file, while this one had two. So I removed one, and it did not work. Then I replaced it and removed the other, and it worked.
Then I found out that I could place both certs in the file as long as the working one was first.
I am not sure why there was a difference, and I do not know why Google outputs the certs in an order that XMLSec cannot use to verify the signature.
Perhaps someone with more knowledge than myself can chime in on that, but for now, I'm happy to report that simply reversing the order in which the certs appeared in the IdP metadata file from Google allowed the signature to be verified.
I needed to include this setting as well. YMMV; it seems like the default algorithm is SHA-1, but the key and output I was calculating with the openssl utility used SHA-256:
settings.idp_cert_fingerprint_algorithm = "http://www.w3.org/2000/09/xmldsig#sha256"
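For comparison, the SHA-256 fingerprint that pairs with this setting can be computed locally; a sketch using the cryptography package, where the certificate path is a placeholder:

```python
# Compute the SHA-256 fingerprint of the IdP certificate (path is a placeholder).
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("idp_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

fingerprint = cert.fingerprint(hashes.SHA256()).hex(":").upper()
print(fingerprint)   # same value as `openssl x509 -noout -fingerprint -sha256 -in idp_cert.pem`
```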

How to verify email confirmation token generated by web service in web site

I'm using .NET 4.5 with a MVC5 5.2.2 web site and a Web API 2.2 service. The web site is using Identity 2.0, and I'm using MachineKey as a data protection provider. In the web site, I'm able to create new users, generate an email confirmation token and then verify that token when it comes back.
In the web service, I need to follow the same process - create a new user, generate email confirmation token and email that token to the new user. The user should then be able to visit the site, confirm the email address and finish creating the account. The problem I'm having is the email confirmation tokens generated by the web service can't be verified by the web site.
Both the service and the site are on the same machine. I can also duplicate this on my local machine in Visual Studio. My first guess was the machine keys weren't the same, but changing both sites to use the same hasn't worked. I've tried and confirmed:
Both sites have <httpRuntime targetFramework="4.5"/> in the <system.web> section.
I've tried <machineKey compatibilityMode="Framework45"/> in both sites.
I've tried generating machine keys - using decryption=AES and validation=SHA1 - with and without setting compatibilityMode.
Per https://aspnetidentity.codeplex.com/workitem/2439, I tried capturing the data protection provider and using that instead of MachineKey.
What am I missing?
So my first lesson from yesterday is that it is best to play Russian roulette with as few bullets in the chamber as possible. Otherwise you end up with a sore foot...as well as a sore forehead.
My problem ended up being that, while I knew the confirmation tokens were being URL-encoded correctly by the site since I was using UrlHelper, I was forgetting that the service was not using UrlHelper, which meant those tokens were not being encoded correctly. After fixing that, I was able to figure out the machineKey settings.
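The underlying issue is generic: the confirmation tokens are base64-style strings that usually contain '+', '/' and '=', and those characters get mangled if they go into a URL or form post without encoding. A quick illustration in Python (the token value here is made up):

```python
# Illustration only: why an unencoded token fails to round-trip through a query string.
from urllib.parse import quote, parse_qs

token = "AQAAANCMnd8B+Fc/rcQ=="          # made-up token containing '+', '/' and '='

naive = f"code={token}"                   # not URL-encoded
print(parse_qs(naive)["code"][0])         # 'AQAAANCMnd8B Fc/rcQ==' -- '+' became a space

safe = f"code={quote(token, safe='')}"    # properly URL-encoded
print(parse_qs(safe)["code"][0])          # 'AQAAANCMnd8B+Fc/rcQ==' -- token survives intact
```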
For anyone who finds this: if you need to share Identity 2.0 tokens between different sites, I can confirm that you need a common machineKey set in the web.config of each site. I wasn't able to figure out whether a common machineKey can be configured in IIS Express, so I ended up putting the keys in the web.config in source control and then using config transforms to remove them, to make sure they aren't included when the site is published. In production, I'm going to use IIS to set these keys for the default web site so they are shared across both sites.

Name Key in SCEP Payload for OTA Enrollment

In the CA I'm working on, we have certificate templates that are used to configure CSRs on various devices. We need to keep track of which template we push to a device, so that we can validate the CSR against the template used. For iOS devices, we're thinking of including the template name in the "Name" field for the SCEP Payload. However, I'm not sure how this field is packaged into the CSR that the iOS device creates. According to the OTA Configuration Guide,
The service can provide different certificate issuing services parameterized on the Name value that becomes part of the final URL. In the case of Windows, this value needs to be set, although any value will do.
This is the only indication of how this Name key/field is used. Does anyone know what becomes of this key? Is it made into an attribute in the CSR? This quote says it "becomes part of the final URL." Does this mean it's injected into the SCEP URL? There doesn't seem to be much documentation on this.

How to ensure/determine that a post is coming from a specific application running on an iPhone/iTouch?

I'm building an iPhone OS application that will allow users to anonymously post information to a web application (in my particular case it will be a Rails-based site) ... and I want to ensure that I only accept posts that originate from a specific application running on an iPhone/iTouch.
How is this best accomplished?
(btw, if your answer applies to Android please feel free to post it here as well as I'm curious to know if the techniques are the same or vary).
Thanks
The best way would be to implement a known call-and-response pattern. Send a value of some sort (an integer, a string, a hash of a timestamp) to the iPhone/iTouch application. Have the application modify this information in a known way and send it back for verification. Then all you have to do is use a different modification algorithm per platform, and that will verify what type of device is being used.
VERY simple example:
Server sends 100 with the response to an iPhone.
iPhone adds 10 to this value and sends back with request.
Server detects the value was increased by 10 and now knows it was from an iPhone.
Then on your Android clients add 20 and on another platform add 30 and so on...
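A slightly sturdier variant of the same idea is to derive the response from the challenge with a per-platform shared secret rather than a fixed offset. A rough server-side sketch (the secrets and names are placeholders; the client would compute the same HMAC over the challenge it received):

```python
# Rough sketch of an HMAC-based challenge/response check on the server side.
# Per-platform secrets are placeholders; a real deployment would still need to
# protect them inside the client app (e.g. in the keychain).
import hashlib
import hmac
import secrets

PLATFORM_SECRETS = {"ios": b"ios-shared-secret", "android": b"android-shared-secret"}

def new_challenge() -> str:
    # Issue this to the client and remember it server-side (e.g. in the session).
    return secrets.token_hex(16)

def identify_platform(challenge: str, client_response: str):
    # Return the platform whose secret produces the submitted response, else None.
    for platform, secret in PLATFORM_SECRETS.items():
        expected = hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, client_response):
            return platform
    return None
```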
You could also add a hidden field in the form, or in the data being passed up if it is XML or another format.
Encrypt something with the server's public key and decrypt it on the server with the corresponding private key, or sign it in the app and verify the signature on the server. Ultimately, anything that can be sent can be duplicated, be it a spoofed HTML header or an encrypted block. The app has to know the secret handshake, and anyone with access to the app (and sufficient technical skill) can figure out the secret handshake.
I would suggest the following approach.
Build SSL-enabled access to your Rails app.
Now create a user account for every platform you want to use and enable your applications to log in with the correct key. If you use SSL correctly, there shouldn't be a way to sniff the password, and you can use standard components on both the Rails and the phone side of your app.
You then need to secure the login credentials on the phone with the appropriate techniques, e.g. put them in the keychain on the iPhone.