Best practices for authorizing local scripts via OAuth to access REST web services

I couldn't find information on how other people solve this, so maybe you can help me out.
What I have
Multiple services with REST APIs that are secured using OpenID Connect. Connections between the services work fine.
Now I have multiple developers who sometimes need to write and execute local scripts (Python, R, Bash, etc.) for quick analysis and testing.
What I want
I want to make it as easy as possible for the developers to use the services, while still respecting security concerns.
What I tried
I defined the script itself as a client. To that end, I created a public client in my OIDC product, named something like 'developer-scripts'. Using a library that handles the OAuth dance, I can then run the script, connecting as that client. The first time, a browser window pops up and asks the user to authenticate, thereby authorizing the client to use the REST API on the user's behalf. After that, the tokens are cached and I can easily continue working on the script.
This simplified drawing tries to summarize what I just described:
That works perfectly fine, and security-wise I'm glad that credentials are no longer saved on the local computers, as they were before with e.g. Basic Authentication. Furthermore, I'm able to control access to the different services on a per-user level.
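For concreteness, here is a minimal sketch of that flow in Python, assuming a generic OIDC provider that supports the authorization code flow with PKCE (the recommended flow for public clients, since no client_secret is involved). The endpoint URLs, client ID, port and cache path are placeholders, and refresh-token and error handling are omitted:

```python
# Minimal sketch: authorization code flow with PKCE from a local script.
# All URLs, the client ID and the cache path are placeholders.
import base64, hashlib, http.server, json, os, secrets, urllib.parse, webbrowser

import requests  # pip install requests

AUTH_URL = "https://idp.example.com/authorize"
TOKEN_URL = "https://idp.example.com/token"
CLIENT_ID = "developer-scripts"                 # the shared public client
REDIRECT_URI = "http://localhost:8912/callback"
TOKEN_CACHE = os.path.expanduser("~/.developer-scripts-token.json")


def get_token():
    # Reuse a cached token if one exists (refresh handling omitted).
    if os.path.exists(TOKEN_CACHE):
        with open(TOKEN_CACHE) as f:
            return json.load(f)

    # PKCE: the verifier never leaves this machine; only its hash is sent,
    # which is why a public client needs no client_secret.
    verifier = secrets.token_urlsafe(64)
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()).rstrip(b"=").decode()

    webbrowser.open(AUTH_URL + "?" + urllib.parse.urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile",
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    }))

    # Catch the redirect on localhost to pick up the authorization code.
    result = {}

    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            query = urllib.parse.urlparse(self.path).query
            result["code"] = urllib.parse.parse_qs(query)["code"][0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Authenticated. You can close this tab.")

        def log_message(self, *args):  # silence request logging
            pass

    with http.server.HTTPServer(("localhost", 8912), Handler) as server:
        server.handle_request()  # serve exactly one request

    # Exchange the code (plus the verifier) for tokens and cache them.
    token = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": result["code"],
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "code_verifier": verifier,
    }).json()
    with open(TOKEN_CACHE, "w") as f:
        json.dump(token, f)
    return token
```

Libraries like requests-oauthlib wrap most of this up for you; the sketch just shows what the "OAuth dance" amounts to.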
Other ideas that didn't convince me:
every web service also has a public client, which the scripts then use directly (so the scripts are no longer defined as clients)
token generation is done somewhere else and the developer just adds the generated access/refresh token to the script
My problem
What concerns me about my current solution is the definition of that client. In the case described, it would be either one generic client used by all developers for all scripts, or a new client for every developer who wants to write a local script. The latter seems like a lot of overhead; the former may be a security problem.
So finally I'm asking the question: Are there any known best practices for my described use case?
EDIT:
I found a short article by [Martin Fowler](https://martinfowler.com/articles/command-line-google.html) in which he explains how he obtains a token to use in a local script. In his case, however, he uses it for one specific use case rather than as a general public client, so unfortunately it doesn't really answer my question.


How to structure API service app architecture

Background:
I'm building an API service app. The app is just like any other: you send an HTTP request and receive a response. This seems simple, up until I start thinking about user registration, payments, authentication, logging and so on.
Application:
tl;dr simple app diagram
Endpoints listening for HTTP requests and doing all the request-related work. This is the core of the service: what a user would actually use the app for. Not directly accessible to the end user (unless they somehow know the URL). A Python Flask server, deployed on Google Cloud Run.
API gateway acting as a proxy and a single access point, forwarding requests to the endpoints. This is the service access point for end users. This part will also be responsible for authentication, rate limiting, logging and tracking the use of the API endpoints (a rough sketch of this layer follows this list). A Python Flask server, deployed on Google Cloud Run.
Website including documentation, a demo and a showcase of API calls through the API gateway, registration, payment (thinking of Stripe), etc. A VueJS app on a NodeJS server, running on a Google Compute Engine VM.
Database storing credentials of registered users, payment information and auth keys. Not implemented yet.
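To make the gateway component concrete, here is a minimal sketch of that forwarding layer as a Flask app. The backend URL and the in-memory key set are placeholders (real keys would come from the database), and rate limiting and logging are left out:

```python
# Minimal sketch of the gateway: authenticate the caller, then forward the
# request to the internal endpoint service. URLs and keys are placeholders.
import requests
from flask import Flask, Response, request

app = Flask(__name__)
BACKEND = "https://endpoints.internal.example.com"  # internal Cloud Run URL
API_KEYS = {"demo-key"}  # placeholder; real keys would live in the database


@app.route("/api/<path:path>", methods=["GET", "POST"])
def proxy(path):
    # Reject callers without a known API key.
    if request.headers.get("X-Api-Key") not in API_KEYS:
        return {"error": "unauthorized"}, 401

    # Forward the request more or less verbatim to the backend.
    resp = requests.request(
        request.method,
        f"{BACKEND}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={"Content-Type": request.content_type or "application/json"},
        timeout=10,
    )
    return Response(resp.content, status=resp.status_code,
                    content_type=resp.headers.get("Content-Type"))
```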
Problems:
Is this architecture sound? What could be done differently or improved? How could I further simplify the interactions between the separate parts of the app? Am I missing any essential parts?
I haven't implemented the database part yet and I'm not sure what I should use. There are plenty of options on Google Cloud. I could also go with something simple and just install a DB with an HTTP/JSON interface on a Compute Engine VM. How do I choose the DB? Given such an app, what would be the best choice?
Please recommend literature/blogs/other sources of info on similar app architectures, suitable for new developers not familiar with them.
This is pretty open ended, but here are some general comments:
Think about how your UI will work. Are you serving a static app directly from cloud storage, or do you need something rendered on the server? Personally I prefer separating the UI from the API when I can, but you need to be aware of things like search engine optimization. Even if you need to render some content dynamically, your site can still be static. Take a look at static site generators like Gatsby. I haven't had to implement a server-rendered UI in years, and that makes me happy.
An API gateway might be fine, but you don't really need it for anything. It might be simpler to start without it and concentrate on what actually matters. If your APIs are being called by an external client, you can't trust the calls anyway, and any API key you might be using will be exposed. I'd say don't worry about it for a single app. That being said, if you definitely want to use a gateway then use one; just be aware that it is mostly a glorified proxy, not some core part of your architecture.
Make sure your API implementations don't store any local state, so you can rely on Cloud Run scaling your services up and down. Definitely don't ever store state directly inside your containers. If you need state on the server, it needs to live in an external data store.
Use JWTs, or an external IDM that will generate JWTs, for authentication. Keep session data on the client side as much as possible and pass the JWT in every API call to authenticate the caller. If you are implementing login on your own, the only APIs you need to expose without tokens are auth and password recovery, which you can separate into their own service.
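As a minimal sketch of that last point, assuming Flask and the PyJWT library, with a placeholder HS256 shared secret (an external IDM would normally publish RS256 public keys instead):

```python
# Minimal sketch: authenticate every API call from the JWT it carries.
# Session data lives in the token, not on the server.
from functools import wraps

import jwt  # pip install pyjwt
from flask import Flask, g, jsonify, request

app = Flask(__name__)
SECRET = "replace-me"  # placeholder shared secret


def require_jwt(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            return jsonify(error="missing token"), 401
        try:
            # Verifies the signature and standard claims such as exp.
            g.claims = jwt.decode(auth[len("Bearer "):], SECRET,
                                  algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return jsonify(error="invalid token"), 401
        return view(*args, **kwargs)
    return wrapper


@app.route("/me")
@require_jwt
def me():
    return jsonify(user=g.claims.get("sub"))
```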
Database selection depends on how well you understand your processes, how transactional your services are, and your existing skill set. Overall I would use what you are comfortable with; you can probably succeed with a lot of options. Certain NoSQL flavors can seem simple on the surface, but if you don't have a clear understanding of the types of queries you need to run, they can get tedious to work with. Generally you should stick to relational databases for OLAP-style implementations and consider NoSQL for OLTP. Personally I like MongoDB, and it is very popular, probably because it sits somewhere in the middle of the pack, which makes it fit a lot of applications. Using MongoDB also keeps you cloud-agnostic, since it is available on every platform. Using platform-specific database flavors can lock you in to a specific vendor.
Whatever you do, don't start installing things on VMs. You can be almost 100% sure you are doing it wrong if that comes up. Remember, the services you consume don't all have to be managed by Google or even run on GCP. You can get MongoDB capacity directly from MongoDB, who manage it on your behalf on all of the big three cloud vendors.
At least think about the long term, even if it doesn't need to impact your architecture right now. If you expect your app to be up for years, try to make it more platform-agnostic rather than less. This might mean staying away from some very platform-specific serverless features that would otherwise force you to jump through a couple of extra hoops. If you are using Cloud Run you are using containers, which already makes your app pretty portable; don't lock it to one platform by using a lot of platform-specific features. That being said, don't avoid them entirely either: you should always go for the low-hanging fruit, so don't shy away from things like a secrets manager. If your app has a short lifespan and you need a really fast time to market, then don't worry about any of this.
Just my 2c; what you are doing is very generic and can be done in a lot of different ways.

Integrating Moodle and ASP.NET Identity 2.1

TL;DR: I'd like to make a Moodle installation and an ASP.NET Identity-based site share authentication. If they have a single login page, so much the better, but logging in to one should automatically log into the other; logout should also be shared.
I have a Moodle installation (M) at www.example.com/moodle, and another website (O) at www.example.com.
O is written using .NET 4.5.2 and has areas that require authentication to access, managed using ASP.NET Identity 2.1 with a custom user model. This model is not particularly sophisticated. It is essentially the out-of-the-box model, but with integer IDs rather than GUIDs.
M is version 2.6, with intentions to upgrade to the latest version (currently 3.1) in the near future.
Both are accessible via the public Internet; there is no requirement to be on a private network to access them.
I know of no plans to move either M or O onto a different domain. However, if one or both was to move, I imagine they would move to a subdomain of example.com.
I would like to create a single-sign-on system, allowing a login for M to also log the user into O. As it stands, I am using the external database authentication plugin for Moodle, with M referring to the database for O. While this works, it does require the user to log in twice. I would like to set it up so that logging in to either M or O will also log the user in to the other site.
I am able to create matching hashes from PHP and .NET code. Unless it is particularly relevant to the solution, please consider the creation of hashes out of scope.
Some users of M are using Moodle's built-in authentication. However, unless it is particularly relevant to the solution, please consider the migration of users out of scope.
I'd prefer O to manage users, if possible. M, by nature of being Moodle, will have to have its own records for the users, but I'd like it if they were similar to the records used by the external DB plugin: just saying that the user exists and can be found elsewhere.
Things I have tried, investigated, or considered:
Moodle's external database plugin. This is how it works at the moment. It sort of works, but requires multiple logins.
Automating the multiple logins. I've experimented with taking the posted credentials, making an HTTP request from the server to the sites' respective login forms when logging in, lifting the cookie out of the response, and then sending that cookie back to the client. This also works, but it's clunky at best and relies on the cookies not getting out of sync.
Using PHP's DOTNET library and doing...something. All of the documentation that I can find says that DOTNET does not work with anything other than .NET 2.0, 3.0 or 3.5. I'm using 4.5.2, so this seems like a no-go. I don't know what I'd do even if I could get it to work with more recent versions of .NET.
Somehow getting Moodle to accept the ASP.NET Identity cookie in place of its own. This seems like the most fruitful course, given that it is a single cookie to manage.
To wrap up: I'd like to make M and O share authentication. If they have a single login page, so much the better, but logging in to one should automatically log into the other; logout should also be shared. Is this possible, and does anyone know how I should go about it?
Maybe take a look at SAML.
I believe that .NET 4.5 supports SAML?
https://msdn.microsoft.com/en-us/library/ms733083%28v=vs.110%29.aspx
On the server, install simplesamlphp.
https://simplesamlphp.org/docs/stable/simplesamlphp-sp
It can be used both as a service provider and as an identity provider.
Then install this SAML plugin in Moodle:
https://moodle.org/plugins/auth_saml

How do open source/free software applications handle the client_secret in oauth? (Without a web server)

I am making a tiny desktop application for my personal use. (Also for a few of my friends.) As part of this application, I am using OAuth 2 to access some of Google's APIs.
I want to eventually upload this to a server where potentially anyone could download it. So I can't just bundle the client_secret with the application.
So, I would like to know: how do open source applications that share their entire source code deal with this?
I could just require users to get their own client_id from Google. But that's a bit of a cumbersome process, and I would ultimately end up writing a scraper to do it automatically, defeating the purpose; it would also be very brittle.
Alternatively, there is this question where the answer is to run your own server to act as a middle agent. However, because this is only a small app I'm doing for me and a few friends, I don't really want to manage a server just for this, and even if I did, it certainly would be fairly unstable.
In short, are there any solutions here that
allow me to put my source code on the internet,
don't require me to run my own server, and
don't require my users to go get their own 'client_id' after they've already downloaded my desktop application, or require me to make a web scraper that does it for them?

subdivide web api authorization in machine to machine scenario

I need advice on coming up with the proper configuration for a scenario using IdentityServer.
The scenario is machine-to-machine communication. A single web API is divided into two parts. One part allows notifications to be posted into it (write). The second allows information to be queried from it (read).
I envision protecting endpoints with something like [Authorize("Write")] and [Authorize("Read")]. From what I can tell, scopes are API-wide... if they can be used to control access in this way, I haven't figured out how... or it's too simple for my brain.
Suggestions?
Scopes can be used at a finer-grained level than app-wide. Just do a normal claims check in the API for the scope you require for that endpoint.
Perhaps even something like this would work: https://github.com/IdentityModel/Thinktecture.IdentityModel/blob/master/source/WebApi.ScopeAuthorization/ScopeAuthorizeAttribute.cs
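That attribute targets .NET Web API, but the underlying check is tiny in any stack. Here is a sketch in Python terms, assuming the token has already been validated and that scopes are carried as the usual space-delimited scope claim:

```python
# Minimal sketch: per-endpoint scope check on an already-validated token.
def has_scope(claims: dict, required: str) -> bool:
    # Access tokens typically carry scopes as a space-delimited string.
    return required in claims.get("scope", "").split()

# A write endpoint would then demand the "write" scope:
claims = {"sub": "machine-client", "scope": "read write"}
assert has_scope(claims, "write")
assert not has_scope({"scope": "read"}, "write")
```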

Minimum overhead for ASP.NET MVC authentication

I want to keep things as simple as possible, and I don't want a complicated security mechanism. Basically, all I need for a user is an ID and an e-mail address, and I really don't want to bother with other things. Also, I want minimum overhead in terms of security (if there is another provider who can do it for me, that's even better).
What is the simplest way to do this? I was thinking about incorporating Live ID or OpenID, but I don't know what the advantages/disadvantages are.
I am working with the Azure SDK.
If you use the Windows Azure Access Control Service, you can basically outsource all identity management. Take a look at the Windows Azure Platform Training Kit - there's a lab called "Introduction to the AppFabric Access Control Service 2.0" that will get you up and running quickly. Currently, you can choose any combination of the following identity providers:
WS-Federation
Facebook
Windows Live ID
Google
Yahoo!
"Simple" for whom?
The simplest strategy for you would probably be to use ASP.NET's standard SQL-based authentication provider. You just run a script against your database to set up all the tables, and then use ASP.NET's built-in utility methods to authenticate. Give your user-specific tables a foreign key reference to that user's ID, and you're good to go. We've done this and never had any trouble with it. It's a tried and well-used system, so you know you won't be introducing any security vulnerabilities by hacking together your own solution. (See SqlMembershipProvider vs a custom solution.)
If you want something simple for the user, then an OpenId solution would be my pick. Set up something like StackOverflow has, where you can let users choose an account from a number of trusted providers to allow them to log in. From the user's perspective, it's really nice not to have to remember one more username and password for one more site.