subdivide web api authorization in machine to machine scenario - identityserver3

I need advice on coming up with the proper configuration for a scenario using IdentityServer.
The scenario is machine to machine communication. A single web api is divided into two parts. One part allows notifications to be posted into it (write). The second allows information to be queried from it (read).
I envision protecting endpoints with something like [Authorize("Write")] and [Authorize("Read")]. From what I can tell, scopes are API wide... if they can be used to clarify access in this way, I haven't figured it out... or it's too simple for my brain.
Suggestions?

Scopes can be used at a finer-grained level than app-wide. Just do a normal claims check in the API for the scope you require for that part of the API.
Perhaps even something like this would work: https://github.com/IdentityModel/Thinktecture.IdentityModel/blob/master/source/WebApi.ScopeAuthorization/ScopeAuthorizeAttribute.cs
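To make that concrete, here is a minimal sketch of such a claims check (the question is about C#/Web API; Python is used here purely for illustration, and the space-delimited "scope" claim layout is an assumption about your token format):

# Sketch: check a required scope against the claims of an already-validated
# access token. OAuth "scope" claims are commonly a space-delimited string.
def require_scope(required, claims):
    scopes = claims.get("scope", "").split()
    if required not in scopes:
        raise PermissionError("missing scope: " + required)

# A write endpoint would call require_scope("write", claims) before doing any
# work, while a read endpoint would check require_scope("read", claims).

The linked ScopeAuthorizeAttribute does essentially the same check inside a Web API authorization filter.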


Best Practices for authorizing local scripts via OAuth to access Web Services

I couldn't find information on how other people solve this, so maybe you can help me out.
What I have
Multiple Services with REST APIs, that are secured using OpenID Connect. Connections between the Services work fine.
Now I have multiple developers who sometimes need to write and execute local scripts (Python, R, Bash, etc.) for quick analysis and testing.
What I want
I want to enable the developers to use the services as easily as possible, while still respecting security concerns.
What I tried
I defined the script itself as a client. So I created a public client in my OIDC product, called something like 'developer-scripts'. Using a library which handles the OAuth dance, I can then execute the script, connecting as that client. The first time, a browser window pops up and asks the user to authenticate, thereby authorizing the client to use the REST API on behalf of the user. After that, the tokens are cached and I can easily continue working on that script.
This simplified drawing tries to summarize what I just described.
That works perfectly fine, and security-wise I'm glad that credentials are no longer saved on the local computers as they were before with e.g. Basic Authentication. Furthermore, I'm able to control access to the different services on a per-user level.
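For reference, a minimal sketch of that flow in Python, using only the standard library plus requests; the endpoint URLs, port and client id are placeholders for whatever your OIDC product exposes, and a real script should add PKCE to the code exchange:

import http.server, urllib.parse, webbrowser
import requests  # pip install requests

AUTH_URL = "https://idp.example.com/authorize"  # placeholder IdP endpoints
TOKEN_URL = "https://idp.example.com/token"
CLIENT_ID = "developer-scripts"                 # the shared public client
REDIRECT_URI = "http://localhost:8912/callback"

def get_access_token():
    result = {}

    class Callback(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            query = urllib.parse.urlparse(self.path).query
            result["code"] = urllib.parse.parse_qs(query)["code"][0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"You can close this tab now.")

    server = http.server.HTTPServer(("localhost", 8912), Callback)
    webbrowser.open(AUTH_URL + "?response_type=code&client_id=" + CLIENT_ID
                    + "&redirect_uri=" + urllib.parse.quote(REDIRECT_URI))
    server.handle_request()  # block until the IdP redirects back once

    # Public client: no client secret is sent with the code exchange.
    response = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": result["code"],
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
    })
    return response.json()["access_token"]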
Other ideas which didn't convince me:
every web service also has a public client which is then used by the scripts (so the scripts aren't defined as clients anymore)
token generation is done somewhere else and the developer just adds the generated access/refresh token to the script
My problem
What concerns me about my current solution is the definition of that client. In the described case it would be either a generic client used by all developers for all scripts, or a new client for every developer who wants to write a local script. The latter seems to be a lot of overhead; the former may be a security problem.
So finally I'm asking the question: Are there any known best practices for my described use case?
EDIT:
I found a small article by [Martin Fowler](https://martinfowler.com/articles/command-line-google.html) in which he explains how he obtains a token to use in a local script. But in his case he's using it for one specific use case, not as a general public client, so unfortunately it doesn't really answer my question.

How to structure API service app architecture

Background:
I'm building an API service app. The app is just like any other, you send an HTTP request and receive a response. This seems simple up until I start thinking about user registration, payments, authentication, logging and so on.
Application:
tl;dr simple app diagram
Endpoints listening for HTTP requests and doing all the request-related work. This is the core of the service and what a user would use the service for. Not directly accessible to the end user (unless they somehow know the URL). Python Flask server, deployed on Google Cloud Run.
API gateway acting as a proxy and a single access point, forwarding requests to the endpoints. This is the service access point for the end users. This part will also be responsible for authentication, rate limiting, logging and tracking use of the API endpoints. Python Flask server, deployed on Google Cloud Run.
Website including documentation, a demo and a showcase of API calls through the API gateway, registration, payment (thinking of Stripe) etc. VueJS app on a NodeJS server on a Google Cloud Compute Engine VM.
Database storing credentials of registered users, payment information and auth keys. Not implemented yet.
Problems:
Is this architecture proper? What could be done differently or improved? How could I further simplify the interactions between the separate parts of the app? Am I missing any essential parts?
I haven't yet implemented the database part and I'm not sure what I should use. There are plenty of options on Google Cloud, and I could also go with something simple and just install a DB with an HTTP/JSON interface on a Compute Engine VM. How do I choose the DB? Given such an app, what would be the best choice?
Can you recommend literature/blogs/other sources of info on similar app architecture for new developers not familiar with it?
This is pretty open ended, but here are some general comments:
Think about how your UI will work. Are you serving a static app directly from cloud storage, or do you need something rendered on the server? Personally I prefer separating UI from API when I can, but you need to be aware of things like search engine optimization. Even if you need to render some content dynamically, your site can still be static; take a look at static site generators like Gatsby. I haven't had to implement a server-rendered UI in years, and that makes me happy.
An API gateway might be fine, but you don't really need it for anything. It might be simpler to start without it and concentrate on what actually matters. If your APIs are being called by an external client you can't trust the calls anyway, and any API key you might be using will be exposed. I'd say don't worry about it for a single app. That being said, if you definitely want to use a GW then use one; just be aware that it is mostly a glorified proxy and not some core part of your architecture.
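To illustrate the "glorified proxy" point, here is a sketch of about the smallest gateway you could write in Flask; the backend host name is a placeholder, and this is not meant as a production setup:

import requests
from flask import Flask, Response, request

app = Flask(__name__)
BACKEND = "http://internal-endpoints-service"  # placeholder internal URL

@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def forward(path):
    # This is where auth checks, rate limiting and logging would hook in.
    resp = requests.request(request.method, BACKEND + "/" + path,
                            headers={k: v for k, v in request.headers
                                     if k != "Host"},
                            data=request.get_data())
    return Response(resp.content, status=resp.status_code)

Everything the gateway adds (auth, limits, logging) is a cross-cutting concern layered on top of this forwarding loop.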
Make sure your API implementations don't store any local state so you can rely on Cloud Run scaling your services up and down. Definitely don't ever store state directly inside your containers. If you need state on the server it needs to be in some external data store.
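A sketch of the difference, with Redis standing in for "some external data store" (the host and key names are placeholders): instead of a module-level counter that would diverge between container instances, every instance reads and writes the shared store.

from flask import Flask
import redis  # pip install redis

app = Flask(__name__)
store = redis.Redis(host="my-redis-host", port=6379)  # placeholder external store

@app.route("/visits")
def visits():
    # INCR is atomic on the Redis server, so concurrent Cloud Run
    # instances stay consistent without any local state.
    return {"visits": store.incr("visit_count")}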
Use JWTs or an external IDM (that will generate JWTs) for authentication. Keep session data on the client side as much as possible and pass the JWT in every API call to authenticate the caller. If you are implementing login on your own the only APIs you need to expose without tokens are for auth and password recovery, which you can separate into their own service.
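A minimal sketch of that per-call JWT check in Flask, assuming RS256 tokens issued by an external IDM; the key file, claim names and route are placeholders:

from functools import wraps
import jwt  # pip install PyJWT
from flask import Flask, abort, g, request

app = Flask(__name__)
PUBLIC_KEY = open("idm_public_key.pem").read()  # placeholder: the IDM's signing key

def require_jwt(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer "):
            abort(401)
        try:
            # All the session data we need travels inside the token itself.
            g.claims = jwt.decode(auth[7:], PUBLIC_KEY, algorithms=["RS256"])
        except jwt.InvalidTokenError:
            abort(401)
        return f(*args, **kwargs)
    return wrapper

@app.route("/orders")
@require_jwt
def orders():
    return {"user": g.claims["sub"], "orders": []}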
Database selection depends on how well you understand your processes, how transactional your services are and your existing skillset. Overall I would use what you are comfortable with; you can probably succeed with a lot of things. Certain NoSQL flavors can seem simple on the surface, but if you don't have a clear understanding of the types of queries you need to run they can get tedious to work with. Generally you should stick to relational databases for transactional (OLTP) workloads and consider NoSQL for analytics-style (OLAP) work. Personally I like MongoDB, and it is very popular, probably because it sits in the middle of the pack, which makes it fit a lot of applications. Using MongoDB also keeps you cloud agnostic, since it is available on every platform, whereas platform-specific database flavors can lock you in to a specific vendor.
Whatever you do, don't start installing things on VMs; you can be almost 100% sure you are doing it wrong if this comes up. Remember, the services you consume don't all have to be managed by Google or even run on GCP. You can get MongoDB capacity directly from MongoDB, who manage it on your behalf on all of the big three cloud vendors.
At least think about the long term, even if it doesn't need to impact your architecture right now. If you expect your app to be up for years, try to make it more platform agnostic rather than less. This might mean staying away from some really platform-specific serverless features, even if that forces you to jump through a couple of extra hoops. If you are using Cloud Run you are using containers, which already makes your app pretty portable; don't lock it to one platform by using a lot of platform-specific features. That being said, don't avoid them entirely either: you should always go for the low-hanging fruit, so don't try to avoid using things like a secrets manager. And if your app has a short lifespan and you need a really fast time to market, don't worry about any of this.
Just my 2c, what you are doing is very generic and can be done in a lot of different ways.

Stress testing of a flow (REST) with JMeter

I have a REST service which receives a customer's ID and an input; according to these, the service responds with something different. It's basically a menu, but the response can differ depending on the customer's input. I need to stress test this service to see how many requests the server can handle and try to determine the max TPS, but since it's a flow I don't know how to simulate this. Any idea or page that could be useful?
Thanks in advance for your help
What you mostly need is an understanding of how to handle dynamic parameters. Imagine you are simulating a user which goes on a blog, views a random blog article and then posts a comment about it.
It's all about designing a user with dynamic behavior, which changes depending on variables or server output. JMeter supports this kind of simulation very well by providing dozens of useful components like:
CSV Datasets,
Regexp Variable extractors,
and more.
We have written an article which explains how to simulate users with dynamic behavior. It's very similar to what you would do in JMeter, since OctoPerf is based on JMeter with a Web UI built on top.
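Outside JMeter, the same "dynamic user" idea can be sketched in a few lines of Python with Locust (a different load tool, shown only to make the concept concrete; the paths and payloads are placeholders for the menu service described in the question):

import random
from locust import HttpUser, between, task

class MenuUser(HttpUser):
    wait_time = between(1, 3)  # think time between actions

    @task
    def walk_menu(self):
        # The next step depends on what the server returned, exactly like
        # a Regexp/JSON extractor feeding a later sampler in JMeter.
        menu = self.client.get("/menu?customerId=42").json()
        choice = random.choice(menu["options"])
        self.client.post("/menu/select",
                         json={"customerId": 42, "option": choice})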

Fully scalable website with micro-applications

I'm in the process of designing a cloud-deployed website for a new solution my company is looking to provide. I have been attempting to answer a few questions and haven't had any luck, so here I am.
First, I don't want the website to be stuck to any one particular framework. I know there is no way to completely future-proof a website, but I would rather not put all of our eggs in one basket.
Secondly, I want a complete separation between the front end and back end. I have a list of reasons why I'm looking to do this and don't particularly want to get into what they are. Server-side rendering is, for the most part, out of the question.
So where does that leave me?
My initial thoughts on the design are to have a REST API that can be accessed for any API calls (this may be turned to GraphQL in the future).
The design decisions I'm mostly wrestling with are for the front end. The website will be a dashboard-type system, where tenants can log in and see their own screens.
I was thinking that I would have a sort of shell that hooks onto the index.html. This would have its own routing, and would render micro-applications that are completely separate from the shell logic.
So for example, say I load index.html at path "/".
It has some routes that it's responsible for, let's say:
"/todos"
"/account"
If I accessed the /todos route, my shell application would then render that micro-app. This application would be completely separate from the shell, except for some data that might be passed via the window object once the application is rendered by the shell.
So my todos route, for example, could be an independent Redux application. It could have its own routing, etc.
Is this a common architecture? Are there any examples of it? Is there a better way of going about this?
Thanks for any insight!
Sounds like you're well and truly over-engineering this beast.
You might take on such an architecture for a HUGE build with many dev teams all working separately. For a small agile team, the above would create an enormous amount of boilerplate overhead and brain ache from context switching between each "app".
Micro-service architecture is seriously great. Just don't break it up too small; read your use case well and break your services up accordingly.
For example: we are a team of 3. We have a pretty large-ish app divided into:
PHP API
Backend management interface (Redux)
Frontend website (HTML, React, PHP)
Search service (Elasticsearch)
Cache (Redis)
Data store (MySQL)
All running in multiple Docker containers across multiple hosts. Pull down the backend... fine, the frontend website is still up and running!

Google Fusion Tables REST API vs Advanced Services Fusion Tables Service in Apps Script

I am very confused about the correct or recommended mechanism to use for accessing the Google Fusion Tables APIs in Apps Script. There seem to be two methods, each with examples but no discussion of which is preferred or why. Is one of these interfaces newer and preferred while the other is dying? Is one obsolete or more restricted in what it can do?
Method 1 is the REST API described here
https://developers.google.com/fusiontables/docs/v2/sql-reference#Select
Method 2 is a set of library functions sort of described here under the Apps Script/Google Advanced Services:
https://developers.google.com/apps-script/advanced/fusion-tables
For example, using the REST API to do an SQL query, we end up with something like this:
function runSQL(sql){
  // Encode the SQL so spaces and special characters survive in the URL.
  var getDataURL = 'https://www.googleapis.com/fusiontables/v1/query?sql=' + encodeURIComponent(sql);
  // getUrlFetchOptions() is a user-defined helper that supplies the OAuth headers.
  var dataResponse = UrlFetchApp.fetch(getDataURL, getUrlFetchOptions()).getContentText();
  return dataResponse;
}
And using the Advanced Service, we use something like this:
result = FusionTables.Query.sql(sql, { hdrs: false });
The REST API seems much harder to use, requiring complex OAuth and developer keys to be configured in advance and coded into the application, while the Advanced Service handles all this behind the scenes and makes for simple API calls like the one I show here.
I have seen numerous examples using each of the above, with no hint as to why the author chose one mechanism over the other.
Your help is greatly appreciated.
The service within Apps Script is a work in progress, so the full functionality of the API might not be fully supported at the moment. As you mentioned, though, the big advantage of the service over the REST API is that you do not have to handle the OAuth flow, as you only need to enable the service on your script (as stated here).
The Apps Script "advanced service" implementation still lacks some advanced functionality (like alt=media format queries or multipart / resumable uploads) -- and if it actually has those features, it lacks even basic documentation of them, to the point that the Apps Script editor autocomplete is unaware of them. The tradeoff for these functionality gaps is that you don't need to handle keys, request building, etc.
So, if you're doing simple sql select / importRows work, the Advanced Service should be able to cover almost all your needs. If you need to delete from your FusionTables, you might want to consider setting up the REST API - because deleting is 1 record per query, the better way to delete is to instead "download what you want to keep, then re-upload it back via replaceRows."
(This worked for me for a while, but eventually what I was keeping outgrew the Apps Script service's limitations and I began receiving Empty Response errors from the call to replaceRows. My remedy was to perform my record maintenance tasks via the REST API, where I can specify resumable uploads, timeouts, etc., while more "normal" interactions are done through the Advanced Service.)