Should a service call another service or should it fetch its own data - swift

I have a message service that is responsible for pushing the correct messages to a UITableView. Some of these messages are system messages, and while their content is generic they should, for example, include a user's name.
This data is currently requested and available via my profile service.
I have been trying to write a service per API, but now I am wondering: should I inject my profile service into my message service? I feel this violates SOLID if my service starts doing more than just talking to messages, but then, as I understand it, a service should not depend on another?
Apologies for the broad question; I am still learning every day.

Your Message Service can call your Profile Service; that does not violate any principles. However, your Message Service should not break if the underlying code in your Profile Service changes.
I would not have your Message Service talk to your Profile API.
Imagine the contract or implementation of your Profile API changed: now both your Message Service AND your Profile Service are potentially broken.
By having your Message Service talk to your Profile Service instead, you can be sure (through unit / integration tests) that any changes to your API / Service do not break your other services and views.
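To make the injection idea concrete, here is a minimal sketch (in Python rather than Swift, purely for illustration; all names are invented) of the Message Service depending on a narrow profile abstraction instead of the Profile API itself:

```python
from typing import Protocol

class ProfileProvider(Protocol):
    """Narrow abstraction: the message service only needs a display name."""
    def display_name(self, user_id: str) -> str: ...

class ProfileService:
    """Owns all profile-API details; callers never see them."""
    def display_name(self, user_id: str) -> str:
        # In reality this would hit the profile API or a cache.
        return "Alice"

class MessageService:
    def __init__(self, profiles: ProfileProvider) -> None:
        self.profiles = profiles  # injected, trivially faked in tests

    def system_message(self, user_id: str) -> str:
        return f"{self.profiles.display_name(user_id)} joined the chat"

messages = MessageService(ProfileService())
print(messages.system_message("u1"))  # -> Alice joined the chat
```

Because MessageService only knows the small `ProfileProvider` interface, a change inside ProfileService (or the profile API) cannot break it as long as that interface is honored, which is exactly the guarantee the answer describes.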

Related

How to handle internal service-to-service authentication in an SOA environment

I'm building an SOA architecture which consists of a simple NGINX-based API gateway which forwards calls from browser clients to an appropriate backend API based on their prefix, for example:
/auth/login will route the call to the login endpoint on the Authentication service
/users/update/widget-1 will route the call to the update endpoint on the Users service
etc.
Each service has its own datastore and follows SOLID design principles. I use events on a queue to keep services informed about interesting things that happen to data that they both know about. For example, both the Users service and the Authentication service need to store the user's email address as it's used for authentication and emailing. So when a user's email is changed I queue a 'user email change' event onto a User Events queue. The Authentication service subscribes to this queue and uses the event to keep itself up to date.
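The 'user email change' flow just described can be sketched with an in-memory stand-in for the User Events queue (a real deployment would use RabbitMQ, SQS, or similar; event names and payloads here are illustrative):

```python
import json
from collections import defaultdict

# In-memory stand-in for the User Events queue.
subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    # Serialize/deserialize to mimic a message crossing the wire.
    message = json.dumps({"type": event_type, **payload})
    for handler in subscribers[event_type]:
        handler(json.loads(message))

# The Authentication service keeps its own copy of the email up to date.
auth_store = {}

def on_email_changed(event):
    auth_store[event["user_id"]] = event["email"]

subscribe("user.email_changed", on_email_changed)

# The Users service changes the email and publishes the event.
publish("user.email_changed", {"user_id": "u1", "email": "new@example.com"})
print(auth_store["u1"])  # -> new@example.com
```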
For simple events, I can include enough details in the event to avoid needing more information. But, thinking ahead, what if a lot of changes have happened to the user and I have a data warehouse that subscribes to every event type? I don't want to start having huge events - I would rather include just enough information for the interested service to use the event to trigger a call asking for more details.
So the sequence in this example would be:
Client synchronously calls user update with JWT bearer token
User update service validates JWT and uses it to carry out the update
User update service generates a 'user updated' event to the queue, containing the User ID
Datawarehouse picks up the event and calls a 'get user details' endpoint on the User service to get full details of the update.
How do I authenticate the 'internal service call'? I can't use the original JWT as the internal request is happening asynchronously and the calling service doesn't have the JWT. It might not even be valid any more by the time the Datawarehouse requests the user details. It feels like I need some 'internal' JWT - for example, in this case, would the answer be for the Datawarehouse service to have the ability to generate its own JWT with its own private key then the User service checks the signature using the Datawarehouse service's public key? In which case, doesn't this mean each service would have to know about all the other services that could call it?
If it helps, my current implementation uses Lumen for the services with the jwt-auth package to check the JWT at the API level.
Any advice is appreciated, thanks.
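The shape of a self-issued internal token can be sketched with the standard library. This is illustrative only: it uses per-service shared secrets and HMAC for brevity, whereas a real setup would more likely use RS256 JWTs (private key per caller, public keys published via a JWKS endpoint) or a central token service, precisely so that callees do not need to know every caller individually. All service names and secrets below are invented:

```python
import hmac, hashlib, json, time, base64

# Illustrative only: secrets the User service knows for each internal caller.
SERVICE_KEYS = {"datawarehouse": b"dw-secret"}

def issue_token(service, key, ttl=60):
    # Short-lived claims: who is calling, and until when the token is valid.
    claims = {"iss": service, "exp": time.time() + ttl}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify_token(token):
    body, sig = token.rsplit(".", 1)
    claims = json.loads(base64.urlsafe_b64decode(body))
    key = SERVICE_KEYS.get(claims.get("iss"))
    if key is None:
        return None  # unknown caller
    expected = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected) or claims["exp"] < time.time():
        return None  # tampered or expired
    return claims  # caller identity, independent of any end-user JWT

token = issue_token("datawarehouse", SERVICE_KEYS["datawarehouse"])
print(verify_token(token)["iss"])  # -> datawarehouse
```

The key point the sketch shows is that the internal token carries the *service's* identity and its own lifetime, so it does not matter that the original user JWT has expired by the time the Datawarehouse makes its call.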

How to protect an API endpoint for reporting client-side JS errors against spam (if even necessary)?

I am developing a web application with Spring Boot and a React.js SPA, but my question is not specific to those libraries/frameworks, as I assume reporting client-side JS errors to the server (for logging and analyzing) must be a common operation for many modern web applications.
So, suppose we have a JS client application that catches an error and a REST endpoint /errors that takes a JSON object holding the relevant information about what happened. The client app sends the data to the server, it gets stored in a database (or whatever) and everyone's happy, right?
Now I am not, really. Because now I have an open (as in allowing unauthenticated create/write operations) API endpoint everyone with just a little knowledge could easily spam.
I might validate the structure of JSON data the endpoint accepts, but that doesn't really solve the problem.
In questions like "Open REST API attached to a database- what stops a bad actor spamming my db?" or "Secure Rest-Service before user authentification", there are suggestions such as:
access quotas (but I don't want to save IPs or anything to identify clients)
Captchas (useless for error reporting, obviously)
e-mail verification (same, just imagine that)
So my questions are:
Is there an elegant, commonly used strategy to secure such an endpoint?
Would a lightweight solution like validating the structure of the data be enough in practice?
Is all this even necessary? After all I won't advertise my error handling API endpoint with a banner in the app...
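For reference, the "validate the structure" option can be quite strict without identifying clients: accept only a small, known JSON shape and cap the payload size, rejecting everything else. A minimal sketch (field names and limits are arbitrary choices, not a recommendation):

```python
import json

# Illustrative server-side check for the /errors endpoint.
MAX_BYTES = 4096
REQUIRED = {"message": str, "url": str, "line": int}

def accept_error_report(raw: bytes) -> bool:
    if len(raw) > MAX_BYTES:
        return False  # refuse oversized payloads outright
    try:
        data = json.loads(raw)
    except ValueError:
        return False
    # Exactly the expected keys, each with the expected type.
    if not isinstance(data, dict) or set(data) != set(REQUIRED):
        return False
    return all(isinstance(data[k], t) for k, t in REQUIRED.items())

print(accept_error_report(b'{"message": "x is undefined", "url": "/app", "line": 12}'))  # -> True
print(accept_error_report(b'{"spam": "lots"}'))  # -> False
```

This does not stop a determined attacker who crafts well-formed reports, but it does make the endpoint useless as a free-form write sink, which is often the practical concern.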
I’ve seen it done three different ways, assuming you are using OAuth 2 to secure your API:
Stand up two error endpoints. For a logged-in user, if an error occurs you would hit the /error endpoint and authenticate using the existing user auth token. For a visitor, you can expose a /clientError endpoint (or one named in a way that makes sense to you) that takes the client_credentials token for the client app.
Secure the /error endpoint using an API key scoped for access to the error endpoint only. This key would be specific to the client and would be passed in a header.
Use a 3rd-party tool such as Raygun.io, or any APM tool, such as New Relic.

Can I replace a microservice inside of AKS k8s with smarter nginx config?

Question
Can I get nginx to call another microservice inside of AKS k8s prior to routing to the requested API? The goal is to speed up requests (fewer hops) and simplify build and deployment (fewer services).
Explanation
In our currently deployed Azure AKS (Kubernetes) cluster, we have an additional service I was hoping to replace with nginx. It's a routing microservice that calls out to an identity API prior to doing the routing.
The reason is a common one, I'd imagine: we receive some kind of authentication token via some pre-defined header(s) (the standard Authorization header, or sometimes some bespoke ones used for debug tokens and impersonation); we call from the routing API into the identity API with those pre-defined headers and get a user identity object in return.
We then pass on this basic user identity object into the microservices so they have quick and easy access to the user and roles.
A brief explanation would be:
Nginx receives a request, off-loads SSL and routes to the requested service.
Routing API takes the authorization headers and makes a call to the Identity API.
Identity API validates the authorization information and returns either an authorization error (when auth fails) or a serialized user identity object.
Router API either returns there and then, on failure, or routes to the requested microservice (by cracking the request path) and attaches the user identity object as a header.
Requested microservice can then turn that user identity object into a Claims Principal in the case of .NET Core for example.
There are obviously options for merging the Router.API and the UserIdentity.API, but keeping the separation of concerns seems like a better move. I'd just like to remove the Router.API, in order to maintain that separation, but get nginx to do that work for me.
ProxyKit (https://github.com/damianh/ProxyKit) could be a good alternative to nginx - it allows you to easily add custom logic to certain requests (for example I lookup API keys based on a tenant in URL) and you can cache the responses using CacheCow (see a recipe in ProxyKit source)
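For the nginx route specifically, the `auth_request` module does essentially what the Router.API does today: before proxying, nginx makes an internal subrequest to an auth endpoint, rejects the request on 401/403, and can copy headers from the auth response onto the upstream call. A sketch, assuming the Identity API exposes an endpoint that returns 2xx with an `X-User-Identity` response header on success (upstream names and the header name are illustrative):

```nginx
# Internal-only location that proxies the auth subrequest to the Identity API.
location = /_identity {
    internal;
    proxy_pass              http://identity-api/validate;
    proxy_pass_request_body off;          # auth_request subrequests carry no body
    proxy_set_header        Content-Length "";
    # The original Authorization / bespoke headers are forwarded by default.
}

location /api/ {
    auth_request            /_identity;   # 401/403 from here stops the request
    # Copy the serialized identity from the subrequest onto the upstream call.
    auth_request_set        $user_identity $upstream_http_x_user_identity;
    proxy_set_header        X-User-Identity $user_identity;
    proxy_pass              http://backend-services;
}
```

Two caveats: nginx must be built with `ngx_http_auth_request_module` (in AKS, the ingress-nginx controller exposes the same mechanism via its auth-url/auth-snippet style annotations), and the subrequest is header-only, which fits this flow since the Identity API only inspects headers anyway.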

Two channels for one API

We have a SaaS. It consists of a Single Page Application (client), a Gateway, Data Service 1, Data Service 2 and a Notification Service.
The client talks to the Gateway (using REST), and that service routes the request to the appropriate Data Service (1 or 2) or does its own calculations.
One request from the client can be split into multiple requests at the Gateway service. The result is an aggregation of the responses from the sub-services.
The Notification Service is a service which pushes information about changes made by other users to the client, using MQ and a WebSocket connection. Notifications can be published by any service.
We had a discussion with the engineers about how the process could be optimized.
Currently, the problem is that the Gateway spends a lot of time just waiting for responses from the Data Services.
One of the proposals is letting the Gateway service respond 200 OK as soon as the message is pushed to the Data Service, and letting the client wait for operation progress through the Notification channel (WebSocket connection).
It means that the client always sends an HTTP request for an operation and gets confirmation that the operation was executed via WebSocket from a different endpoint.
This schema can be hidden by providing a JS client library which hides all this internal complexity.
I think something is wrong with this approach. I have never seen such a design. But I don't have strong arguments against it, except the complexity and two points of failure (instead of one).
What do you think about this design approach?
Do you see any potential problems with it?
Do you know any public solutions with such an approach?
Since your service is slow, it might make sense to treat it more like a batch job.
Client sends a job request to Gateway.
Gateway returns a job ID immediately after accepting it from the Client.
Client periodically polls the Gateway for results for that job ID.
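The three steps above amount to the classic job-ID pattern (often surfaced as HTTP 202 Accepted plus a status URL). A minimal in-memory sketch, with the persistence and the asynchronous worker left out for brevity:

```python
import uuid

# In-memory stand-in for persisted job state; in production the Gateway
# would store this and the Data Services would update it asynchronously.
jobs = {}

def submit_job(payload):
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}
    return job_id  # returned to the client immediately

def complete_job(job_id, result):
    # Called by whatever actually performed the work.
    jobs[job_id] = {"status": "done", "result": result}

def poll_job(job_id):
    return jobs.get(job_id, {"status": "unknown", "result": None})

job_id = submit_job({"op": "update"})
print(poll_job(job_id)["status"])  # -> pending
complete_job(job_id, {"ok": True})
print(poll_job(job_id)["status"])  # -> done
```

Compared with the WebSocket proposal, polling keeps everything on the single HTTP channel (one point of failure, simple retries) at the cost of some latency between completion and the client noticing it.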

Microservices Service-to-Service-Communication Need-to-Know principle

Are there any best practices to minimize the exchanged data between (internal) microservices when calling the API of a service (aka Need-to-Know)?
How to achieve something like this:
There are three services:
User
Notification (let's assume just email)
Shipping
When the notification service needs the email address of a user it queries the API of the user service and should get the email (and NOT the full data set).
When the shipping service needs the shipping address of a user it queries the API of the user service and should get the shipping address (and NOT the full data set).
Question:
Should this be handled inside the user service with some kind of ACL (what service "XYZ" is allowed to see)?
Since we use JWT for authentication, keys have to be exchanged anyway, so during the setup phase these ACLs could be agreed between the teams.
Should this be handled inside the user service with some kind of ACL
I think this is the best option. You could delegate the actual authorization to a separate service which the User service can call with the identity of the caller and the "claim" the caller is making (e.g. "I am allowed to see Email Address for User"). The claims can be evaluated on a per-call basis.
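The ACL option reduces to field-level filtering keyed on the caller's identity. A sketch (service names, field names and the sample record are all invented for illustration):

```python
# Which fields each calling service may read from a user record.
FIELD_ACL = {
    "notification": {"email"},
    "shipping": {"shipping_address"},
}

USER = {
    "id": "u1",
    "email": "a@example.com",
    "shipping_address": "1 Main St",
    "password_hash": "***",  # never exposed: absent from every ACL entry
}

def user_view(caller: str) -> dict:
    """Return only the fields the authenticated caller is entitled to."""
    allowed = FIELD_ACL.get(caller, set())
    return {k: v for k, v in USER.items() if k in allowed}

print(user_view("notification"))  # -> {'email': 'a@example.com'}
print(user_view("shipping"))      # -> {'shipping_address': '1 Main St'}
```

In the delegated variant described above, `FIELD_ACL` would live in (or be fetched from) the separate authorization service rather than being hard-coded in the user service.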
However, it is arguable whether you actually need to query the user service at all. It would mean a change to your design, but imagine for a minute that the Notifications service already knew about the user (for example, the user ID and email address); then the notifications service would not need to query anything to be able to do its job.
In order for the notifications service to have the user data already, that data must have been sent to the notifications service at some point in the past. A good time to do this would be when the user is first created, or any time the user's details are changed. The best way to distribute this kind of information is in the form of an event message, although you could base the distribution on an HTTP POST to the notifications service.
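The event-carried approach can be sketched as follows: the notifications service maintains its own minimal copy of user data, populated from user events, so at send time it makes no call at all (event and field names are illustrative):

```python
# Local, minimal copy of user data owned by the Notifications service.
local_users = {}

def handle_user_event(event):
    # Subscribed to user-created / user-updated events; the event carries
    # only what this service needs to know.
    local_users[event["user_id"]] = event["email"]

def send_notification(user_id, text):
    email = local_users.get(user_id)
    if email is None:
        return None  # user unknown to this service; nothing to send
    return f"to={email}: {text}"

handle_user_event({"user_id": "u1", "email": "a@example.com"})
print(send_notification("u1", "Your order shipped"))
# -> to=a@example.com: Your order shipped
```

The trade-off is eventual consistency (the local copy lags the user service by however long events take to arrive) in exchange for removing the runtime coupling and the need-to-know question entirely.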