I've been scouring the internet for information about security configurations that combine OAuth and basic authentication.
I'm not sure it's really what I want, but I decided to do some research to figure out whether it was a good idea or not.
The question is really simple: can you combine OAuth authentication and basic authentication in a Spring Boot application, so that some endpoints use one type of authentication and other endpoints use another?
And does it make sense to do so?
The idea behind it is that I want heavy (OAuth) authentication on my endpoints if another party is calling my application. However, if I'm calling my endpoints through a frontend application that I control, should that still use OAuth, or would basic authentication be alright?
To sum up: is it possible to have "/getCustomers" secured by OAuth, and "/ping" completely open or secured with another authentication type?
I hope this makes sense; I'm kind of trying to figure out what I want with this and whether it even makes sense.
To sum up, yes you can.
You can configure multiple entry points within the same http element, you can configure different http elements, and you can even configure several WebSecurityConfigurerAdapters, according to the Spring Security reference documentation:
https://docs.spring.io/spring-security/site/docs/current/reference/htmlsingle/#multiple-httpsecurity
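The layout from the reference documentation can be sketched with two ordered configurations, one per path pattern. This is only a sketch against the Spring Security 5.x WebSecurityConfigurerAdapter API; the endpoint paths come from the question, but the choice of a JWT-based resource server is an assumption:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@Order(1)
class OAuthApiConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Only requests matching /getCustomers go through this filter chain.
        http.antMatcher("/getCustomers")
            .authorizeRequests(a -> a.anyRequest().authenticated())
            .oauth2ResourceServer(o -> o.jwt()); // assumes a JWT-issuing provider
    }
}

@Configuration
@Order(2)
class PingConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // A separate chain for /ping: leave it open, or swap
        // permitAll() for authenticated() plus http.httpBasic()
        // to require basic auth instead.
        http.antMatcher("/ping")
            .authorizeRequests(a -> a.anyRequest().permitAll());
    }
}
```

The @Order annotations matter: Spring Security tries the chains in order and uses the first one whose antMatcher matches the request.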
I've got a public Java web app created with Spring Boot. I'm ready to release it to production and wanted to get some tips on making sure that it is as secure as possible.
It is basically a search engine that uses Apache Lucene in the backend and a lot of JavaScript in the frontend. I am planning on deploying it to Amazon Web Services, while using my local machine as a backup/test environment.
Users can search and browse data. The frontend calls REST endpoints using JavaScript's XMLHttpRequest to query the backend for content and then displays it to the user.
The app is completely public and there is no user authentication as of yet.
The app also persists user requests to a database for tracking purposes.
Here's what I've done so far to secure it:
Make sure that the REST endpoints fully verify that the parameters given to them in the requests are valid.
What I plan on doing:
Using HTTPS.
Verifying that any PUT requests or URLs have a reasonable size.
What I am considering adding:
Limiting the number of requests a user can make in a given time period. (Not sure if Spring Boot already has a facility to do this, or whether I should implement it myself.)
Using some kind of API key scheme to make sure that my endpoints are only accessed by my frontend. (Not sure if this is effective, and I don't yet know how to do it.)
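On the rate-limiting point: Spring Boot has no built-in request throttle (libraries such as Bucket4j or gateway-level filters are common choices), but the core idea is small. A minimal fixed-window limiter sketch, with illustrative names and limits, not taken from the post:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Fixed-window rate limiter: each client gets at most maxRequests
// per windowMillis-long window. The timestamp is passed in explicitly
// so the logic is easy to test; production code would pass
// System.currentTimeMillis().
public class RateLimiter {
    private static final class Window {
        long start;
        int count;
    }

    private final int maxRequests;
    private final long windowMillis;
    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    public RateLimiter(int maxRequests, long windowMillis) {
        this.maxRequests = maxRequests;
        this.windowMillis = windowMillis;
    }

    // Returns true if the client may proceed, false if it is over the limit.
    public synchronized boolean allow(String clientId, long nowMillis) {
        Window w = windows.computeIfAbsent(clientId, k -> new Window());
        if (nowMillis - w.start >= windowMillis) { // window expired: reset
            w.start = nowMillis;
            w.count = 0;
        }
        if (w.count < maxRequests) {
            w.count++;
            return true;
        }
        return false;
    }
}
```

In a Spring Boot app this would typically sit inside a servlet filter keyed on the client IP or API key, returning HTTP 429 when allow() is false.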
Is there anything else that I should consider doing? Do the things I listed above make sense?
Would greatly appreciate any tips on this.
I am in the middle of setting up SSO in our infrastructure, and I am wondering if people with more experience could share their learnings.
I already have a reverse proxy in front of our system.
We have several legacy Java apps running on Tomcat.
We have SPAs written in JS as well.
We have a few APIs that will also need to be protected.
I see two ways to set up SSO for us:
Set up SSO on the reverse proxy using mod_auth_openidc, so our gatekeeper ensures that anyone hitting our services has already been validated.
Add the Keycloak libraries to each individual service.
My preference is to set this up on the reverse proxy.
Are there any disadvantages / best practices when it comes to this?
For legacy apps I would just use the HTTP headers added by the reverse proxy to get user details.
For the new apps I would like to use the Keycloak libraries to get user details.
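For the legacy apps, reading the proxy-added headers can be a small, framework-free helper. A sketch, assuming mod_auth_openidc is configured to pass claims as OIDC_CLAIM_* HTTP headers (via OIDCPassClaimsAs headers); the actual header names depend on your proxy configuration, so treat them as placeholders:

```java
import java.util.Map;
import java.util.Optional;

// Pulls user details out of headers added by the authenticating
// reverse proxy. OIDC_CLAIM_sub / OIDC_CLAIM_email are the claim
// header names mod_auth_openidc can be configured to set; adjust
// to whatever your proxy actually emits.
public class ProxyUser {
    public final String subject;
    public final String email;

    private ProxyUser(String subject, String email) {
        this.subject = subject;
        this.email = email;
    }

    // Empty result means the expected headers are missing, i.e. the
    // request did not come through the authenticating proxy.
    public static Optional<ProxyUser> fromHeaders(Map<String, String> headers) {
        String subject = headers.get("OIDC_CLAIM_sub");
        if (subject == null || subject.isEmpty()) {
            return Optional.empty();
        }
        String email = headers.getOrDefault("OIDC_CLAIM_email", "");
        return Optional.of(new ProxyUser(subject, email));
    }
}
```

One caveat with this pattern: the services must only be reachable through the proxy, otherwise anyone can forge these headers by calling a service directly.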
I do not want to go down a route that is obviously problematic, so any tips that save me some time are very welcome.
So far I have come up with the following list:
Pros of using a proxy server and mod_auth_openidc:
Single place to handle all auth-specific configuration.
Abstracts out the implementation details of the SSO, e.g. we do not need to integrate Keycloak into each service. In my opinion this could cause issues if we later decided to move to a different SSO. (I know this does not happen often.)
Cons of using a proxy server and mod_auth_openidc:
An additional piece of software to maintain (bugs, etc.).
Possible duplicate credential checks if the app also integrates with Keycloak. (Not required; this is only needed if the app requires Keycloak-specific features that are not available in the headers.)
I would be interested in others' opinions on these pros and cons.
Thanks
This might be a shot in the dark, but I am trying to implement an OpenID Provider in Perl using the Net::OpenID::Server module. The documentation for the entire process is confusing and sparse.
If anyone has successfully implemented a provider in Perl, could you please paste some code snippets?
So I finally jiggered the OpenID installation into place and it's working pretty well. I figure I will detail some of the gotchas I ran into.
There are more than three states/steps to the OpenID sign-in process. This is confusing, because the documentation and sample code would lead you to believe that there are three. There are, in some cases, up to seven. Watch your server logs and see how many times a SERVER and USER (the ones requesting the authentication) hit the PROVIDER (what you are presumably setting up). It's difficult to debug something when you're only looking at half of the interactions.
Many providers are using the unfinalized OpenID 2.0 spec. (It's a little better.) The 2.0 spec performs differently from the 1.0 spec; the SERVER (them) establishes trust with the PROVIDER (you). Net::OpenID::Server handles this gracefully, but doesn't tell you what spec it's using. The 2.0 spec adds a step to the handshaking process.
Set up your own OpenID SERVER for easy testing. I used a simple Rails server with a gem called ruby-openid. It took about 10 minutes to set up to mimic behavior of a real in-the-wild server.
It should go without saying, but make sure your login process is stateless. We had a global variable that handled how the user was verified. Because use of that variable made certain assumptions that were incompatible with the OpenID sign-in process, users would have been allowed to log in to accounts other than their own. This is obviously bad. A few closures and we have some stateless and more secure code.
All in all, OpenID is pretty cool once you get it working.
FYI, development on the Net-OpenID Perl modules is starting up, so you can expect a big pile of bugfixes and better docs to hit real soon now. Check CPAN and the openid-perl group for details.
I am working on an experimental website (accessible through a web browser) that will act as a front-end to a RESTful interface (a sub-system). The website will serve as an interface between a user and the RESTful interface, as it will make HTTP requests to the RESTful interface for almost all database operations. Authentication will probably be done using OpenID, and authorization for the database operations will be done via OAuth.
Just out of curiosity, is this a feasible solution, or should I develop two systems that access the database in parallel (i.e. the website has its own data-access logic, and the RESTful interface has another)? And what are the pros/cons if I insist on doing it this way (it is just an experiment project for me to learn how things like OpenID and OAuth work in real life anyway), besides the extra database queries and HTTP requests generated for each transaction?
Your concept sounds quite feasible. I'd say that you'll get some fairly good wins out of this approach. For starters you'll get a large degree of code reuse since you'll be able to put other front ends on top of the RESTful service. Additionally, you'll be able to unit test this architecture with relative ease. Finally, you'll be able to give 3rd party developers access to the same API that you use (subject possibly to some restrictions) which will be a huge win when it comes to attracting customers and developers to your platform.
On the down side, depending on how you structure your back end you could run into the standard problem of granularity. Too much granularity and you'll end up making lots of connections for very little amounts of data. Too little and you'll get more data than you need in some cases. As for security, you should be able to lock down the back end so that requests can only be made under certain conditions: requests contain an authorization token, api key, etc.
Sounds good, but I'd recommend that you do this only if you plan to open up the RESTful API for other UIs to use, or simply to learn something cool. Support HTML, XML, and JSON for the interface.
Otherwise, use a great MVC framework instead (ASP.NET MVC, Rails, CakePHP). You'll end up with the same basic result, but you'll be more strongly typed to the database.
With a modern JavaScript library, your approach is quite straightforward.
ExtJS has always had Ajax support, and it is now able to do this via a REST interface.
So, your ExtJS user interface components receive a URL. They populate themselves via a GET to the URL, and store updates via a POST to the URL.
This has worked really well on a project I'm currently working on. By applying RESTful principles there's an almost clinical separation between the front and back ends, meaning it would be a trivial undertaking to replace either one. Plus, the API barely needs documenting, since it's an implementation of an existing mature standard.
Good luck,
Ian
Wow, a question from 2009! And it's funny to read the answers. Many people seem to disagree with the web-services-plus-JS-front-end approach, which has nowadays become kind of standard, known as Single Page Applications.
I think the general approach you outline is quite feasible -- the main pro is flexibility, the main con is that it won't protect clueless users against their own ((expletive deleted)) abuses. As most users are likely to be clueless, this isn't feasible for mass consumption... but, it's fine for really leet users!-)
So to clarify, you want to have your web UI call into your web service, which in turn calls into the database?
This is exactly the path I took for a recent project and I think it was a mistake because you end up creating a lot of extra work. Here's why:
When you are coding your web service, you will create a library to wrap database calls, which is typical. No problem there.
But then when you code your web UI, you will end up creating another library to wrap calls into the REST interface... because otherwise it will get cumbersome making all the raw HTTP calls.
So you essentially create two data-access libraries, one to wrap the DB and the other to wrap the web service calls. This basically doubles the amount of work you do, because every operation on a resource ends up implemented in both libraries. This gets tiring real fast.
The simpler alternative is to create a single library that wraps access to the database, as before, then use that library from BOTH the web UI and web service.
This is assuming that your web UI and web service reside on the same network and both have direct access to the backend database server (which was the case for me). In this setup, having both go directly to the database is also a lot more efficient than having the UI go through the web service.
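The shared-library layout described above can be sketched in a few lines; all the names here (CustomerStore and the two layer classes) are illustrative, not from the answer:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// One data-access class, written once. A real version would wrap a
// database; an in-memory map stands in for it here.
class CustomerStore {
    private final Map<Integer, String> customers = new ConcurrentHashMap<>();
    public void save(int id, String name) { customers.put(id, name); }
    public String find(int id) { return customers.get(id); }
}

// Both front ends depend directly on the same store, so no second
// client library is needed to wrap the web service's REST API.
class WebServiceLayer {
    private final CustomerStore store;
    WebServiceLayer(CustomerStore store) { this.store = store; }
    String getCustomer(int id) { return store.find(id); }
}

class WebUiLayer {
    private final CustomerStore store;
    WebUiLayer(CustomerStore store) { this.store = store; }
    String renderCustomer(int id) { return "<p>" + store.find(id) + "</p>"; }
}
```

Each resource operation is implemented once in CustomerStore, instead of once in a DB wrapper and again in an HTTP wrapper.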