I'm building an SOA architecture that consists of a simple NGINX-based API gateway which forwards calls from browser clients to the appropriate backend API based on the path prefix, for example:
/auth/login will route the call to the login endpoint on the Authentication service
/users/update/widget-1 will route the call to the update endpoint on the Users service
etc.
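In NGINX terms, the gateway is roughly this (the upstream hostnames are placeholders, just to show the shape):

server {
    listen 80;

    # /auth/... is forwarded to the Authentication service
    location /auth/ {
        proxy_pass http://auth-service:8080/;
    }

    # /users/... is forwarded to the Users service
    location /users/ {
        proxy_pass http://users-service:8080/;
    }
}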
Each service has its own datastore and follows SOLID design principles. I use events on a queue to keep services informed about changes to data that more than one of them knows about. For example, both the Users service and the Authentication service need to store the user's email address, as it's used for authentication and emailing. So when a user's email is changed, I queue a 'user email change' event onto a User Events queue. The Authentication service subscribes to this queue and uses the event to keep itself up to date.
For simple events, I can include enough details in the event to avoid needing more information. But, thinking ahead, what if a lot of changes have happened to the user and I have a datawarehouse that subscribes to every event type? I don't want to start having huge events; I would rather include just enough information for the interested service to use the event to trigger a call asking for more details.
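For example, a 'thin' event might carry no more than something like this (field names are purely illustrative):

{
  "eventType": "user.updated",
  "userId": "12345",
  "occurredAt": "2021-06-01T12:00:00Z"
}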
So the sequence in this example would be:
Client synchronously calls user update with JWT bearer token
User update service validates JWT and uses it to carry out the update
User update service publishes a 'user updated' event to the queue, containing the User ID
Datawarehouse picks up the event and calls a 'get user details' endpoint on the User service to get full details of the update.
How do I authenticate the 'internal service call'? I can't use the original JWT, as the internal request happens asynchronously and the calling service doesn't have the JWT; it might not even be valid any more by the time the Datawarehouse requests the user details. It feels like I need some 'internal' JWT. For example, in this case, would the answer be for the Datawarehouse service to generate its own JWT with its own private key, and the User service to check the signature using the Datawarehouse service's public key? In which case, doesn't this mean each service would have to know about all the other services that could call it?
If it helps, my current implementation uses Lumen for the services with the jwt-auth package to check the JWT at the API level.
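To make the idea concrete, this is roughly the shape I have in mind, sketched with the Java auth0 java-jwt library purely for illustration (my real services are Lumen/PHP, so treat the names and key handling here as assumptions):

// Datawarehouse side: sign a short-lived 'internal' JWT with its own private key.
// User service side: verify the signature with the Datawarehouse's public key.
import com.auth0.jwt.JWT;
import com.auth0.jwt.JWTVerifier;
import com.auth0.jwt.algorithms.Algorithm;
import java.security.interfaces.RSAPrivateKey;
import java.security.interfaces.RSAPublicKey;
import java.util.Date;

class InternalServiceToken {

    // Called by the Datawarehouse before it hits the 'get user details' endpoint
    static String issue(RSAPrivateKey datawarehousePrivateKey) {
        return JWT.create()
                .withIssuer("datawarehouse")                                   // identifies the calling service
                .withExpiresAt(new Date(System.currentTimeMillis() + 60_000))  // short-lived
                .sign(Algorithm.RSA256(null, datawarehousePrivateKey));
    }

    // Called by the User service on the incoming internal request
    static void verify(String token, RSAPublicKey datawarehousePublicKey) {
        JWTVerifier verifier = JWT.require(Algorithm.RSA256(datawarehousePublicKey, null))
                .withIssuer("datawarehouse")
                .build();
        verifier.verify(token); // throws JWTVerificationException if the signature or expiry is bad
    }
}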
Any advice is appreciated, thanks.
We are developing a portal application where an already existing customer can register an account to see the details of their account (something like: you have a credit card and then register on the bank's portal to see the transaction details - here you are already a customer of the bank). So when a user comes for registration the very first time, this whole registration flow is not authenticated (the user doesn't yet have a username and password - they will only have these after registration).
We also want that a customer cannot have concurrent registrations, i.e. if a customer opens multiple tabs (or uses Postman to call our registration API) then only one request should be allowed to register and all others should be rejected. For this we use a registration_session.
So when the first request comes, we find the customer in our master record, generate a GUID/UUID and save it as a registration_session value against CustomerId as the key in Redis (with a set expiration). If any other registration request comes for the same customer, we first check Redis to see if there is a registration_session value against that CustomerId, and if it exists we reject the request, saying that a registration is already in progress.
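Roughly, the first-request handling looks like this (sketched with the Jedis client purely for illustration; a single SET with NX and EX does the 'check, then claim' in one atomic step, and the key name and TTL are placeholders):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;
import java.util.UUID;

class RegistrationSessionGuard {

    // Returns the new registration_session, or null if one already exists for this customer
    static String tryStartRegistration(Jedis redis, String customerId) {
        String sessionId = UUID.randomUUID().toString();
        // NX = only set if the key does not exist yet, EX = expire after 10 minutes
        String result = redis.set("registration_session:" + customerId, sessionId,
                SetParams.setParams().nx().ex(600));
        if (!"OK".equals(result)) {
            return null; // another registration is already in progress, so reject this request
        }
        return sessionId;
    }
}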
Now my first question is this: is this behavior stateless or not for RESTful APIs, given that I am kind of maintaining a request context via registration_session on the server? One may argue that I am not maintaining application state, true; but seen another way, every other registration request has to check the status of any previous registration request, which means we no longer have statelessness as per REST principles, as two registration requests are no longer independent of each other.
The next requirement is OTP generation and verification. During registration, we ask the user to identify themselves via an OTP sent to their mobile number (we already have their mobile number from their customer record). A user may request to resend the token multiple times, but if the user provides the wrong input 3 times, we put the user's account in locked status. We also want that once the user has passed the OTP check successfully, no further OTP generation request for the same registration session is allowed - once the OTP is verified successfully, generating it again is a futile operation (though a malicious user may still try to do this via Postman/curl).
Now the server has to maintain the following information for the OTP (a rough sketch follows the list):
the retry count for OTP verification - the moment it reaches 3, the account is to be locked.
the verification status of the OTP - once it is verified, further requests to generate an OTP for the same registration session are not allowed.
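For illustration only (Jedis again, with made-up key names), that OTP state could be tracked against the registration session like this:

import redis.clients.jedis.Jedis;

class OtpState {

    // Returns true when the account should be locked (3 failed attempts reached)
    static boolean recordFailedAttempt(Jedis redis, String registrationSession) {
        long attempts = redis.incr("otp_retries:" + registrationSession);
        return attempts >= 3;
    }

    // Remember that the OTP check succeeded for this registration session
    static void markVerified(Jedis redis, String registrationSession) {
        redis.set("otp_verified:" + registrationSession, "1");
    }

    // Once verified, further OTP generation requests for this session are refused
    static boolean isGenerationAllowed(Jedis redis, String registrationSession) {
        return !redis.exists("otp_verified:" + registrationSession);
    }
}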
And my second question is this: does this again violate the REST statelessness principle, as it seems we are maintaining context for requests and every request depends on the context of the previous request?
Or is there a gap in my understanding of application state versus stored context, and the above scenarios do not break the RESTfulness of an API? Or can we simply not design RESTful APIs for the above requirements?
Note: I have read enough questions on REST and context state on SO, but none offered a solution that clears up my confusion for the specific scenarios I have described.
I'm in the process of implementing a user management Microservice (MS) and wanted to find out whether what I'm doing is OK. Users are created from the UI, which interacts with an API. The API makes an RPC call to the user management MS and publishes a CreateUserCommand to an in-memory bus. The consumer then handles the command by creating a user in the DB, but I also need this user registered with Auth0. Would the way to go about this be to send a different command to a persistent queue, for a subscriber to pick it up and register that user with Auth0 (the persistent queue being there in case Auth0 can't be reached)? Once that completes successfully, I could then publish a UserCreatedEvent?
Any help with this would be much appreciated.
You have two Bounded Contexts: User management and Authentication.
The User management BC deals with the life-cycle of a user (creation, mutation and deletion).
The Authentication BC deals with how users identify themselves in the system.
So, it is a valid assumption that a user can exist even if they do not (yet) have a way to identify themselves in the system.
That being said, you should emit the AUserWasCreatedEvent immediately after the User management BC processes the CreateUserCommand, because at that moment the user is born. It has an ID (let's name it UserID), so it exists.
Then this user needs a means to identify themselves: a Saga (or Process Manager, or whatever you want to call it) catches the event and creates a CreateAuth0UserCommand that is sent to the Authentication BC, which calls the Auth0 API. The API responds with some data, possibly including a token; that token is handled by the Authentication BC and associated with the UserID.
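A hand-wavy sketch of that process manager, with made-up type names (the only point is the ordering: the event is published first, and the Auth0 registration is driven from it afterwards):

record AUserWasCreatedEvent(String userId, String email) {}
record CreateAuth0UserCommand(String userId, String email) {}

// Assumed to dispatch commands over a persistent queue, so the Auth0 call can be retried
interface CommandBus { void send(Object command); }

class UserRegistrationSaga {

    private final CommandBus commandBus;

    UserRegistrationSaga(CommandBus commandBus) {
        this.commandBus = commandBus;
    }

    // Subscribed to the User management BC's events
    void on(AUserWasCreatedEvent event) {
        commandBus.send(new CreateAuth0UserCommand(event.userId(), event.email()));
    }
}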
Are there any best practices for minimizing the data exchanged between (internal) microservices when calling the API of a service (aka Need-to-Know)?
How do I achieve something like this:
There are three services:
User
Notification (let's assume just email)
Shipping
When the notification service needs the email address of a user it queries the API of the user service and should get the email (and NOT the full data set).
When the shipping service needs the shipping address of a user it queries the API of the user service and should get the shipping address (and NOT the full data set).
Question:
Should this be handled inside the user service with some kind of ACL (what service "XYZ" is allowed to see)?
Since JWT is used for authentication, keys need to be exchanged anyway, so these ACLs could be agreed between the teams during the setup phase.
Should this be handled inside the user service with some kind of ACL
I think this is the best option. You could delegate the actual authorization to a separate service which the User service can call with the identity of the caller and the "claim" the caller is making (e.g. "I am allowed to see Email Address for User"). The claims can be evaluated on a per-call basis.
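A rough sketch of that shape, with invented names for the authorization client and the claim strings:

// The User service asks a central authorization service whether the caller
// may see a given field before returning it. All names here are hypothetical.
interface AuthorizationClient {
    boolean isAllowed(String callerService, String claim); // e.g. ("notification", "user:read-email")
}

interface UserRepository {
    String findEmail(String userId);
}

class UserApi {

    private final AuthorizationClient authz;
    private final UserRepository users;

    UserApi(AuthorizationClient authz, UserRepository users) {
        this.authz = authz;
        this.users = users;
    }

    String getEmail(String callerService, String userId) {
        if (!authz.isAllowed(callerService, "user:read-email")) {
            throw new SecurityException("caller may not read email addresses");
        }
        return users.findEmail(userId);
    }
}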
However, it is arguable whether you actually need to query the user service at all. It would mean a change to your design, but imagine for a minute that the Notifications service already knew about the user, for example the user ID and email address; then the notifications service would not need to query anything to be able to do its job.
In order for the notifications service to already have the user data, that data must have been sent to the notifications service at some point in the past. A good time to do this is when the user is first created, or any time the user's details change. The best way to distribute this kind of information is in the form of an event message, although you could also base the distribution on an HTTP POST to the notifications service.
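For example, a UserCreated event message might carry just what the notifications service needs to store locally (field names are illustrative):

{
  "eventType": "user.created",
  "userId": "12345",
  "email": "jane@example.com"
}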
I have a COTS application (a PLM application) which provides a few SOAP APIs. Since this SOAP API is highly complex, we are developing an easy-to-use REST wrapper service. Before invoking any API in my COTS application, an authentication API needs to be invoked. In my REST wrapper web service, I have a login resource which invokes the COTS SOAP login API. To keep things simple for my API users, I store the logged-in user details in the user session. In every other REST resource, I retrieve the session and check whether it has user details. If yes, I proceed and invoke the SOAP API; if not, I return a proper HTTP status code. I use Apache CXF for the service and client. I require my API users to maintain the session in the client like this:
WebClient.getConfig(client).getRequestContext().put(Message.MAINTAIN_SESSION, Boolean.TRUE);
Every REST tutorial says that REST is stateless. I am doubtful whether what I am doing is correct as per REST standards. Please suggest. Thanks
Basically, the idea of REST is a stateless interface. However, it is common practice to use some kind of authentication for API calls, since most of the time not all resources should be public (e.g. the timeline of a Twitter user over the Twitter API).
Therefore it is ok if you do some kind of authentication and validate a session on further requests (or maybe authenticate with every single request, e.g. with HTTP Basic Access Authentication) to check if access should be granted.
What is not part of this, and not the idea of a RESTful API, is storing complex session information that would really make the whole thing stateful. That includes, for example, storing information from an earlier request so it can be processed together with one that follows later.
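As an illustration of the per-request style mentioned above (placeholder URL and credentials), authenticating every call with HTTP Basic instead of keeping a server-side session could look like this:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

class BasicAuthPerRequest {
    public static void main(String[] args) throws Exception {
        // Credentials travel with every request, so the server keeps no session state
        String credentials = Base64.getEncoder().encodeToString("apiuser:secret".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/items"))
                .header("Authorization", "Basic " + credentials)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}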
client.getRequestContext().put(Message.MAINTAIN_SESSION, Boolean.TRUE)
This code causes cookies to be maintained in that specific client only.
If you want those cookies to be available in another client, it needs to be programmed.
And if the second client receives additional cookies and you want those cookies available in the first client too, how is that possible?
I need something like a root client that maintains the cookies of all sub-clients, so that all cookies are shared among all clients - a shared cookie repository for all clients. Does anyone know how to achieve this?
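Something like the following is what I'm after, sketched here with the JDK HttpClient and a shared CookieManager rather than CXF, purely to show the idea of one cookie repository behind several clients:

import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.http.HttpClient;

class SharedCookieClients {
    public static void main(String[] args) {
        CookieManager sharedCookies = new CookieManager();
        sharedCookies.setCookiePolicy(CookiePolicy.ACCEPT_ALL);

        // Both clients read and write the same cookie store, so a Set-Cookie
        // received by one is sent back by the other on later requests.
        HttpClient clientA = HttpClient.newBuilder().cookieHandler(sharedCookies).build();
        HttpClient clientB = HttpClient.newBuilder().cookieHandler(sharedCookies).build();
    }
}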
I have a high-level/conceptual question about Shibboleth.
I'm working on the front-end (running Drupal) of a data-driven web app. End-users interact with the front-end to construct data queries, which triggers background requests to a caching/archiving data proxy (the "data retrieval service"), which in turn either delivers data from its cache or goes out and queries still more services ("out there") that hold the desired data. So far so good... it is ornate, but only as ornate as the problem we're trying to solve.
Here's the wrinkle: some of the services queried by the data retrieval service want to implement user-level authentication, so that some users may access their data but others cannot. For organizational reasons, our identity and authentication mechanism is likely to be Shibboleth.
So, here's my scenario: a user logs in to the front-end using Shibboleth. Now, can my front-end, and in turn the data retrieval service, authenticate against external services as that user? And if so, how does that work in practice (what authentication data gets passed from server to server)?
Yes it can - your service has to exist in the identity provider (how it is set up is up to you).