How to detect browser with Ambassador gateway routing or Cloudflare? - kubernetes

I have an Angular application running behind Cloudflare and Ambassador gateway routing, deployed with Kubernetes. What I want to do is check which browser the application is receiving traffic from and match it against the list of supported browsers I have. If the browser does not match, I would like to serve a static HTML page which says, sorry, we don't support this browser, please follow these steps to upgrade it, etc.
Now, there are many solutions that achieve this exact thing, but I have a hard requirement to block my website completely for unsupported browsers.
I can easily do it within my application, but why should the whole Angular application be loaded just to deny access to my website? It would be really great to block such users at the edge itself. The problem is that I don't have nginx, HAProxy, etc., in which case it would have been fairly simple and straightforward to implement this. Instead, I have Cloudflare and Ambassador, with which I have the least experience.
Could someone please guide me on how I can achieve browser detection and redirection based on some conditions with Cloudflare or Ambassador?

Not sure if this is what you want, but you can configure Cloudflare to block or challenge specific user agents using User-Agent Blocking rules.
Additionally, Firewall Rules can be created to match incoming requests against specific user agents and block, challenge, or allow them, and you can combine the user-agent match with other expressions.
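For example, here is a hedged sketch of a firewall rule expression, paired with the Block action, that rejects anything outside a supported-browser allow-list. The http.user_agent field and the contains/not/or operators are part of Cloudflare's rules language, but the browser tokens below are illustrative assumptions only; real user-agent matching needs care (Chrome's UA string also contains "Safari/", for instance):

    not (
      http.user_agent contains "Chrome/"
      or http.user_agent contains "Firefox/"
      or http.user_agent contains "Safari/"
    )

Note that a plain Block action responds with Cloudflare's own error page; serving your custom "please upgrade your browser" HTML from the edge would take something extra, such as a custom block page or a Worker.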

Related

Best practices for securing a public Java Spring-Boot web app

I've got a public Java web app created with Spring Boot. I'm ready to release it to production and wanted to get some tips on making sure that it is as secure as possible.
It is basically a search engine that uses Apache Lucene in the backend and a lot of JavaScript in the front end. I am planning on deploying it to Amazon Web Services, while using my local machine as a backup/test environment.
Users can search and browse data. The frontend calls REST endpoints using JavaScript's XMLHttpRequest to query the backend for content and then displays it to the user.
The app is completely public and there is no user authentication as of yet.
The app also persists user requests to a database for tracking purposes.
Here's what I've done so far to secure it:
Make sure that the REST endpoints fully verify that the parameters given to them in the requests are valid.
What I plan on doing:
Using HTTPS.
Verifying that any PUT request bodies or request URLs have a reasonable size.
What I am considering adding:
Limiting the number of requests a user can make in a given time period (not sure if Spring Boot already has a facility for this, or whether I should implement it myself; see the filter sketch below).
Using some kind of API-key scheme to make sure that my endpoints are only accessed by my front end (not sure if this is effective, and I don't yet know how to do this).
Is there anything else that I should consider doing? Do the things I listed above make sense?
Would greatly appreciate any tips on this.
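Since the rate-limiting bullet above is a common sticking point: here is a minimal sketch of a fixed-window limiter as a Spring Boot filter. It assumes a Spring Boot 2.x / javax.servlet stack, keys on the raw client IP (naive behind proxies unless X-Forwarded-For is handled), and uses an arbitrary placeholder limit; a library such as Bucket4j would be a more robust choice:

    import org.springframework.stereotype.Component;
    import org.springframework.web.filter.OncePerRequestFilter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    @Component
    public class RateLimitFilter extends OncePerRequestFilter {
        private static final int MAX_PER_MINUTE = 60; // illustrative limit
        private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();
        private volatile long windowStart = System.currentTimeMillis();

        @Override
        protected void doFilterInternal(HttpServletRequest req, HttpServletResponse res,
                                        FilterChain chain) throws ServletException, IOException {
            long now = System.currentTimeMillis();
            if (now - windowStart > 60_000) {   // reset the fixed one-minute window
                counts.clear();
                windowStart = now;
            }
            String client = req.getRemoteAddr(); // naive key; handle X-Forwarded-For behind a proxy
            int seen = counts.computeIfAbsent(client, k -> new AtomicInteger()).incrementAndGet();
            if (seen > MAX_PER_MINUTE) {
                res.sendError(429, "Too many requests");
                return;
            }
            chain.doFilter(req, res);
        }
    }

On the API-key idea: any key embedded in public front-end JavaScript is visible to every user who opens the developer tools, so it raises the bar only slightly; it does not truly restrict your endpoints to your own front end.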

Microservices end points accessible on Internet

We have a microservice-based architecture where each service has a REST endpoint. These services talk to each other via REST.
However, I noticed that a lot of developers have started calling these services directly in the JavaScript code of our web application. I want to know whether it is recommended to access these microservices over the Internet, or whether they should be hidden behind a facade layer. Of course all the endpoints are authenticated, but any web application user can find these endpoints once they open the browser's developer tools (F12).
thanks,
Abhi
I would not do that, for the following reasons:
Security. You are exposing your endpoints as-is, which lets other people learn far more about them than you would want them to know. Authentication is fine, but your individual services are still open to DDoS, out-of-turn calls, unexpected load, etc.
Service discovery. By allowing access to the endpoints directly, you are forcing developers to bind themselves to a given URL. This may work, but it restricts your ability to change your URLs in the future, so it is better avoided. With a layer in between, you only ever have to change one URL.
Code duplication. There are quite a few cross-cutting concerns in URL handling: request logging, HTTPS termination, authentication, DDoS prevention, request limiting, etc. With one common layer in front of your services you can manage all of these in one place rather than re-implementing each of them for every service.
If you think any of these are or could become major concerns, you should add an additional facade layer in between and route your Internet-facing API through it; a minimal sketch follows.
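To make the facade idea concrete, here is a hedged sketch of a pass-through facade as a plain servlet. The CatalogFacadeServlet name, the catalog.internal host, and the GET-only handling are all illustrative assumptions; a real gateway would also forward headers and other verbs, set timeouts, and add the authentication and rate limiting discussed above:

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CatalogFacadeServlet extends HttpServlet {
        // Hypothetical internal address: never exposed to the Internet.
        private static final String INTERNAL_BASE = "http://catalog.internal:8080";
        private final HttpClient client = HttpClient.newHttpClient();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // Cross-cutting concerns (logging, auth, rate limits) live here, once.
            log("facade GET " + req.getRequestURI());
            HttpRequest upstream = HttpRequest.newBuilder(
                    URI.create(INTERNAL_BASE + req.getRequestURI())).GET().build();
            try {
                HttpResponse<String> r = client.send(upstream, HttpResponse.BodyHandlers.ofString());
                resp.setStatus(r.statusCode());
                resp.getWriter().write(r.body());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                resp.sendError(502, "Upstream unavailable");
            }
        }
    }

The point is placement, not the proxy code itself: because every external request funnels through this one class, logging, authentication, and request limiting are implemented once instead of once per service.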

Why do APIs have different URLs?

Why do APIs use different URLs? Are there two different interfaces on the web server, one processing API requests and the other web HTTP requests? For example, there might be a site called www.joecoffee.com, but then they use the URL www.api.joecoffee.com for their API requests. Why are different URLs being used here?
We separate ours for a couple of reasons, and they won't always apply.
Separation of concerns.
We write API code in one project and deploy it as one unit. When we work on the API we only worry about that, and we don't worry about page layout. When we do web work, that's completely separate.
Different authentication mechanisms.
The way you tell a user to log in is quite different to how you tell an API client it's not authenticated.
Different scalability requirements
It might be that the API does a lot of complex operations, while the web-server serves more or less static content. So you might want to add hundreds of API servers around the world, but only have 10 web servers.
Different Clients
You might have an API for the web client and a separate API for a mobile client. Or perhaps a public one and a private / authenticated one. This might not apply to your example.
Different Technologies
Kind of an extension of separation of concerns, but it allows you to have a Linux server for one and use something like an AWS Lambda for the other.
SSL Wrangling
This one is more of an anti-reason (particularly for the specific example you give). Many sites use SSL for both web and API; most sites are going to use SSL for the API at least. SSL certificates are matched to your URL, so there might be a reason there. That said, if you had a *.joecoffee.com certificate you would use api.joecoffee.com, not www.api.joecoffee.com, since a wildcard only covers a single label: *.joecoffee.com matches api.joecoffee.com but not www.api.joecoffee.com.
As @james suggested, there's no single right answer here, and some room for debate.

keycloak with mod_auth_openidc advantages

I am in the middle of setting up SSO in our infrastructure, and I am wondering if people with more experience could share what they have learned.
I already have a reverse proxy in front of our system.
We have several legacy Java apps running on Tomcat.
We have SPA apps as well, written in JS.
We have a few APIs that will also need to be protected.
I have two ways to set SSO up for us.
Set up SSO on the reverse proxy using mod_auth_openidc, so our gatekeeper makes sure that anyone hitting our services has already been validated.
Add the Keycloak libraries to each individual service.
My preference is to set this up on the reverse proxy.
Are there any disadvantages / best practices when it comes to this?
For legacy apps I would just use the HTTP headers added by the reverse proxy to find user details.
For the new apps I would like to use the Keycloak libraries to get user details.
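As a sketch of the header-based approach for the legacy apps: assuming mod_auth_openidc is configured to pass claims to the backend as OIDC_CLAIM_* request headers (the OIDCPassClaimsAs directive controls this, and the exact header names should be verified against your deployment), a Tomcat servlet could read user details like this:

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;

    public class WhoAmIServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // Identity headers injected by the mod_auth_openidc reverse proxy.
            String user = req.getHeader("OIDC_CLAIM_preferred_username");
            String email = req.getHeader("OIDC_CLAIM_email");
            if (user == null) {
                // The proxy should never let unauthenticated traffic through,
                // but fail closed in case the app is reached directly.
                resp.sendError(401, "Missing identity headers");
                return;
            }
            resp.setContentType("text/plain");
            resp.getWriter().printf("Hello %s <%s>%n", user, email);
        }
    }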
I do not want to go down a route that is obviously problematic, so any tips that can save me some time are very welcome.
So far I have come up with the following list.
Pros of using a proxy server and mod_auth_openidc:
A single place to handle all auth-specific configuration.
Abstracts away the implementation details of the SSO, e.g. we do not need to integrate Keycloak into each service, which in my opinion could cause issues if we later decided to move to a different SSO provider (I know this does not happen often).
Cons of using a proxy server and mod_auth_openidc:
An additional piece of software to maintain (bugs, etc.).
Possibly duplicate credential checks if an app also integrates with Keycloak directly (not required, but possible, and only needed if the app requires Keycloak-specific features that are not available in the headers).
I would be interested in others' opinions on these pros and cons.
Thanks

Setting up multiple domains with Play Framework

How does one get started with multiple domains using the Play Framework? In other words, the same server will serve content for both somedomain.com and anotherdomain.com, and both of these domains' content will be served by the Play Framework.
Do you set up Play behind Apache, for example, or can you configure virtual hosts in Play itself? I'm starting with a blank Linux server and just want to know how to get started, i.e. should I mess about with things like Apache, or will I come right with the Play Framework alone?
As a follow-up to biesior's answer, using a front-end server appears to remain the best option as of 2.5.x (updated docs at https://www.playframework.com/documentation/2.5.x/HTTPServer).
That said, you could serve both domains with the same web application, detecting the intended host by pattern matching on request.headers.get("Host"). I've found it works reasonably well when "anotherdomain.com" is static and doesn't require any meaningful routing, but tread carefully.
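Here is a minimal sketch of that host-matching idea using Play's Java API (the request-parameter style of Play 2.6+; the domain names and response bodies are placeholders):

    import play.mvc.Controller;
    import play.mvc.Http;
    import play.mvc.Result;

    public class HomeController extends Controller {
        // Dispatch on the requested host; host() returns the host (and port)
        // the client asked for, e.g. "anotherdomain.com:9000".
        public Result index(Http.Request request) {
            if (request.host().startsWith("anotherdomain.com")) {
                return ok("static landing page for anotherdomain.com");
            }
            return ok("main application for somedomain.com");
        }
    }

The same check could live in a filter or a custom router if more than one action needs it.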
I'll also note that recent versions of the Play Framework support https in a painless way once you have the necessary certs in your keystore (https://www.playframework.com/documentation/2.5.x/ConfiguringHttps). However, I can't see how one would make that play nicely with multiple domains.
Using a front-end HTTP server is the typical solution; otherwise you would need to access each application on a separate port and/or IP address.
Additionally, an HTTP server allows you to work with SSL (which Play 2.x didn't support at the time of writing!), so if you plan to create secure connections you will need to use the scenario described in the docs.
Finally, using a server will allow you to incorporate other useful things like load balancing, and serving static (really static) content in a CDN-like mode with very precise cache settings, etc.
Just one tip: if the only job of the HTTP server will be proxying the Play apps, consider using a lighter option than Apache, for example nginx or lighttpd; you'll find sample configurations for all of them in Play's documentation.