Dapr input bindings with multiple apps - Kubernetes

After reading through the Dapr docs, I'm left with a few questions regarding the behavior of input bindings. From what I understand, it's not possible to tell Dapr, declaratively, that a specific input binding should only trigger a specific endpoint of one particular app. Rather, you create an input binding and define its endpoint (e.g. 'checkout'), and then Dapr will test all apps for that endpoint. Correct?
If so, then honestly I don't understand this design decision. For example, if the input binding is fed from a queue (e.g. SQS), then each item should only be processed once. But if multiple apps are automatically configured to process items from the queue simply because they expose the same endpoint, how would you guarantee that the correct one does the job? Does this behavior change if the apps are in the same vs. different namespaces?
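To make this concrete, here is roughly the kind of app-side handler I mean - a minimal sketch in Python/Flask, assuming a binding component named 'checkout' and the behavior described above (Dapr probes the route and then POSTs events to it; all names are made up):

```python
# Minimal sketch of an app endpoint matching an input binding named "checkout".
# Any app exposing this route would receive the binding's events.
from flask import Flask, request

app = Flask(__name__)

@app.route("/checkout", methods=["OPTIONS", "POST"])
def checkout():
    if request.method == "OPTIONS":
        # Dapr probes the route to decide whether this app subscribes to the binding.
        return "", 200
    event = request.get_json(silent=True) or {}
    print("received binding event:", event)
    return "", 200  # a 2xx response acknowledges the event

if __name__ == "__main__":
    app.run(port=3000)
```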
In this use-case, this set-up is a big bummer since it means you cannot develop your apps independently (or else you risk running into naming collisions).
Hopefully I've missed a few details, so please correct me if I'm wrong. Thank you!

Related

Best way to keep data in sync between two different applications

I have two closed-source applications that must share the same data at some point. Both expose REST APIs.
A concrete example is helpdesk tickets: they can be created in both applications, and I need to update the data in one application when a user adds or closes a ticket in the other, and vice versa.
Since they are closed source, I can't really modify the code.
I was thinking I could create a third application that, every 5 minutes or so, lists both applications' tickets, compares them with the results of the previous call, and, if anything changed, updates the other application accordingly.
Is there a better way of doing this?
With closed-source applications it's nearly impossible to get something out of them, unless they have some plugin-based setup that you can hook into.
The most cost-efficient way would be to have the first application publish a message on a queue, or call a webhook that you configure, whenever the event is triggered. But as I mentioned, the application needs to support that.
So yeah, your solution is pretty much everything you can do for now, but keep in mind the challenges that you may encounter over time:
What if the results of both APIs are too large to be compared directly? Maybe you need to think about paging the results.
What if your app crashes and you lose the previous state? You need to somehow back it up in an external store.
How often should you poll the APIs to make sure you're getting the updates you need, while keeping good performance for the existing traffic?
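To make those trade-offs concrete, here is a rough sketch of the polling approach in Python. The fetch_tickets/update_ticket helpers and the local state file are placeholders for your two REST APIs and an external state store; it only illustrates the diff-and-sync loop and why the previous state needs to be persisted:

```python
import json
import time

POLL_INTERVAL_SECONDS = 300       # "every 5 minutes or so"
STATE_FILE = "last_seen.json"     # stand-in for an external state store

def fetch_tickets(app_name):
    # Placeholder: call the application's REST API and return a list of
    # ticket dicts, paging through the results if they are large.
    raise NotImplementedError

def update_ticket(app_name, ticket):
    # Placeholder: create/update the ticket in the other application.
    raise NotImplementedError

def load_previous_state():
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {"a": {}, "b": {}}

def sync_once(state):
    current_a = {str(t["id"]): t for t in fetch_tickets("app_a")}
    current_b = {str(t["id"]): t for t in fetch_tickets("app_b")}

    # Push anything new or changed since the previous call to the other side.
    for ticket_id, ticket in current_a.items():
        if state["a"].get(ticket_id) != ticket:
            update_ticket("app_b", ticket)
    for ticket_id, ticket in current_b.items():
        if state["b"].get(ticket_id) != ticket:
            update_ticket("app_a", ticket)

    return {"a": current_a, "b": current_b}

if __name__ == "__main__":
    state = load_previous_state()
    while True:
        state = sync_once(state)
        with open(STATE_FILE, "w") as f:
            json.dump(state, f)   # survive a crash without losing the previous state
        time.sleep(POLL_INTERVAL_SECONDS)
```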

Fetching potentially needed data from repository - DDD

We have (roughly) the following architecture:
1. The application service does the infrastructure job - it fetches data from repositories, which are hidden behind interfaces.
2. An object graph is created and passed to the appropriate domain service.
3. The domain service does its thing and raises the appropriate events.
4. The events are handled in different application services, which perform persistence operations (altering repositories, sending e-mails, etc.).
However, domain service (3) has become so complex that it requires data from various external APIs, but only if particular conditions are satisfied. For example, if product X is of type Car, we need to know the price of that car model from some external CatalogService (invented example) hidden behind ICatalogService. This is a potentially expensive operation (a REST call).
How do we go about this?
A. Do we pre-fetch all the data in the application service listed as (1), even though we might not need it? B. Do we inject the ICatalogService interface into the given domain service and fetch the data only when needed? The latter solution might create performance issues if some other client of the domain service calls it repeatedly without knowing there is a REST call hidden inside it.
Or did we simply get the domain model wrong?
This question is related to Domain Driven Design.
How do we go about this?
There are two common patterns.
One is to pass the capability to make the query into the domain model, allowing the model to fetch the information itself when it is needed. What this will usually look like is defining an interface / a contract that will be consumed by the domain model, but implemented in the application/infrastructure layers.
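A rough sketch of that first pattern (Python, with invented names; ICatalogService stands in for the contract from the question):

```python
from abc import ABC, abstractmethod

class ICatalogService(ABC):
    """Contract consumed by the domain model, implemented in infrastructure."""
    @abstractmethod
    def price_for_model(self, model: str) -> float:
        ...

class RestCatalogService(ICatalogService):
    """Application/infrastructure side: the actual (expensive) REST call."""
    def price_for_model(self, model: str) -> float:
        raise NotImplementedError  # placeholder for the real HTTP client

class PricingService:
    """Domain service: only touches the catalog when the condition holds."""
    def __init__(self, catalog: ICatalogService):
        self._catalog = catalog

    def appraise(self, product) -> float:
        if product.type == "Car":
            # The expensive call happens only on this branch.
            return self._catalog.price_for_model(product.model)
        return product.base_price
```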
The other is to extend the protocol between the domain model and the application, so that we can signal to the application layer what information is needed, and then the application code can decide how to provide it. You end up with something like a state machine for the processes, with the application code coordinating the exchange of information between the external api and the domain model.
If you use a bit of imagination, you've already got a state machine something like this, since your application code is already coordinating the movement of inputs between the repository and the domain model. The difference, of course, is that the existing "state machine" is simple and linear enough that it may not be obvious that there is a state machine present at all.
How exactly would you signal the application layer?
Simple queries; which is to say, the application code pulls the information it needs out of the domain model and uses that information to compute the next action. When the action is completed, the application code pushes information to the domain model.
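Sketched in code, that pull/push exchange might look something like this (Python; Decision and the method names are invented, purely to show the shape of the protocol):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    """What the domain model hands back: either a result, or a request for data."""
    result: Optional[float] = None
    needs_catalog_price: bool = False
    model: Optional[str] = None

def process_order(order, domain_model, catalog_client):
    # Pull: ask the domain model what it can decide with the data it has.
    decision = domain_model.evaluate(order)

    if decision.needs_catalog_price:
        # The application layer decides how to satisfy the need (REST call,
        # cache, batch lookup, ...); the domain model never sees the transport.
        price = catalog_client.price_for_model(decision.model)
        # Push: hand the information back so the domain model can finish.
        decision = domain_model.apply_catalog_price(order, price)

    return decision.result
```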
There isn't enough information to give you targeted advice. I suspect you need to refactor your domain into further subdomains. It sounds like your domain service has far more than one responsibility. Keep the service simple.
In addition, if you have a long-running task, such as a service call that takes a long time, you need to architect it away. The most supple design will not keep the consumer waiting: it will return immediately with some sort of result to the user, even if it's simply a periodic status update.

Why should I make my service RESTful when the business needs and workflows are complicated?

My service requirements and business workflows are a bit complicated. First, please consider the two different options below.
In my case, the problems with going the RESTful route are:
In the RESTful option, to distinguish the intended operation, I basically need to inspect the input payload, so my controller logic is going to get a bit ugly (see the sketch after this list).
For each of these operations, I need to check for specific roles and permissions. Based on the input payload, I need to check whether the user has the required permission first, rather than declaring it at the controller-method level as we do now in the RPC style.
For some operations, I need to check the current status of the order. For example, approving or rejecting an order which is currently in draft status doesn't make sense; before approving or rejecting, the order should be in 'pending approval' status. So I need to call the DB to check the current status, and this will impact performance.
Monitoring and performance profiling are going to be very complex with the RESTful option in my case.
Troubleshooting production issues is going to be complex, because the input payload needs to be logged and inspected, and the HTTP verbs need to be inspected too.
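Here is the kind of controller I'm worried about ending up with - a rough sketch in Python/Flask, with made-up field names, permission checks and operations, just to show the dispatch-on-payload shape:

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Placeholders standing in for real auth, DB status checks and business operations.
def require_permission(name): pass
def require_status(order_id, expected): pass
def approve_order(order_id): return {"id": order_id, "status": "approved"}
def reject_order(order_id): return {"id": order_id, "status": "rejected"}
def submit_order(order_id): return {"id": order_id, "status": "pending_approval"}

@app.route("/orders/<order_id>", methods=["PUT"])
def update_order(order_id):
    payload = request.get_json(force=True)
    action = payload.get("action")   # the intent is hidden inside the body

    # Permissions and status can only be checked after inspecting the payload,
    # instead of being declared per controller method as in the RPC style.
    if action == "approve":
        require_permission("orders.approve")
        require_status(order_id, "pending_approval")   # extra DB round trip
        return jsonify(approve_order(order_id))
    if action == "reject":
        require_permission("orders.reject")
        require_status(order_id, "pending_approval")
        return jsonify(reject_order(order_id))
    if action == "submit":
        require_permission("orders.submit")
        require_status(order_id, "draft")
        return jsonify(submit_order(order_id))

    abort(400, description=f"unknown action {action!r}")
```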
I don't think the RESTful way makes things simpler just because it exposes fewer endpoints. Now the clients of this service have to be given clear documentation explaining what input they have to send in order to perform a specific intended action.
My service is not a simple content-delivery application with only a few operations. In the future, I may need to support more operations than I have today.
Please don't tell me I can pass the operation to be performed in a request header; I don't think it solves all of the above problems.
So now, why should I bother making my service RESTful?

REST Business Logic and error handling

I am trying to build a REST application where I try to hide business logic from the requests and responses.
I have two examples which I don't know how to handle.
First example: I have a shopping cart, and product X can't be ordered together with product Y. The client, however, decided to order them both. How can I give a proper error message, or guide the client that this isn't allowed? Giving an error saying "X and Y are not allowed together" seems like exposing business logic to me.
The structure is in place because of the different services that we have. The products can be re-used, but the order intake is different. For example, we can offer an order intake for vehicles, which needs a different configuration than ordering clothes. In both cases you will have products, which have a name and a price and can therefore be re-used. That is why vehicles and clothes can't, and shouldn't, be ordered together. To make this more user-friendly, there will be a service which presents the available options for the specific order intake. But there should also be a part which validates the order and gives a proper error.
Second example: a client has one pending order and can't create a new order until the pending order is completed. This seems/feels stateful to me and should probably be avoided. How should this be handled?
UPDATE: So resolving the issue for my first example could be to create an endpoint, something like /products?type=vehicle or /products/combinations?type=vehicle, for displaying the allowed products/combinations, and to have an endpoint /order to put the products in, where the validation happens. These endpoints can stand on their own, but the context may come from somewhere else. Do I understand correctly?
As the other answer already pointed out, this is only marginally related to REST, it has I think more to do with the meaning of "exposing business logic" and "statelessness".
The first point of not wanting to expose business logic: It is only exposing business logic if some system really needs to interpret the specific error. If it is "only" supplying a localized message to the user, it is not exposing any logic to the systems in between. The frontend system does not need to know what is going on, it only needs to display the message from the backend system.
There are cases when the frontend system needs to know, to be able to guide the user. It is not fundamentally wrong to expose business logic, as long as it is not implicitly exposed, but explicitly part of the interface description.
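For instance, a minimal sketch (Python/Flask; the error code, field names and rule are invented) of returning a machine-readable error code alongside a human-readable message, so the frontend can either just display the message or react to the documented code:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical rule from the question: vehicles and clothing cannot share an order.
INCOMPATIBLE_TYPES = {("vehicle", "clothing")}

@app.route("/orders", methods=["POST"])
def create_order():
    items = request.get_json(force=True).get("items", [])
    types = {item["type"] for item in items}
    for a, b in INCOMPATIBLE_TYPES:
        if a in types and b in types:
            # The code is part of the documented interface; the message is for humans.
            return jsonify({
                "error": "INCOMPATIBLE_PRODUCT_TYPES",
                "message": "Vehicles and clothing cannot be ordered together.",
            }), 422
    return jsonify({"status": "created"}), 201
```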
On the second point about statelessness: REST requires that the communication be stateless. That means any arbitrary request from the client should be meaningful without the context of any previous messages (this includes previous logins, sessions, whatever). Each request stands on its own. This does not mean that specific resources cannot keep a state of their own. A shopping cart on the backend does in fact have a state; this is OK.
Or said differently: Can the next request hit a different server and still be successful? And I mean without session replication, distributed cache or other magic. If yes, the communication is stateless. If you need "sticky" sessions or such things, then no, you are not stateless.
I think your questions are not entirely related to REST itself but I will try to answer them anyways. Maybe, you can give more details about what bothers you reading my answers.
I am not completely sure how the first question is related to REST, because I feel it is about wording. The question to me would be: why is it not allowed to order both of these products together? So you cannot put them into the same shopping basket? This would not be really user-friendly, so the best idea would be to allow it. If you cannot change the fact that both are not allowed at the same time, I would just "grey out" all the items that are not allowed together with product X once it is already in the shopping basket.
However, this is more of a user-experience question. Maybe you could go into detail here about exactly in what case a user could insert both of the products at the same time, while it is not allowed.
Towards your second question: In most online shops you usually have a state that is either mapped to the account, a session or via cookies. If you truly want to have a stateless API here with REST, you could work with order IDs. These could be passed to each request. Of course, the order itself has a state. But the requests do not have one.
Notice: REST does not mean much. You basically have no state for each request, and all the necessary information is in the URL.
Maybe, this helps a bit already.

Proper way to distinguish between multiple services using zeroconf

I'm writing a piece of software that will run on computers as well as phones.
The service uses an HTTP API for communication and will be published over the local network using Zeroconf.
Initially I published my service using _http._tcp. as the service type but I quickly discovered that both my NAS and my music receiver(!) also broadcasts themselves with that exact service type.
So the question now arises how to differentiate between my service and other services that are using HTTP.
Alternatives
Using a different service type
This is certainly the easiest way and (almost) guarantees that no other services will be picked up.
However, according to Apple [1], new service types should be registered with IANA. This is obviously not required, but seeing as they recommend it, it feels like it would be the wrong way to do it.
Using the TXT record
Apple [2] describes the TXT record like this:
When a service is registered, three related DNS records are created: a service (SRV) record, a pointer (PTR) record, and a text (TXT) record. The TXT record contains additional data needed to resolve or use the service, although it is also often empty.
This certainly feels like it could be the right way to do it, but I'm still not sure, and it's hard to find a description of what the field should contain.
My first thought would be to put in something like <service_name>-<version>, which would then be parsed to see which service it actually is.
My NAS seems to use this for identifying model and version numbers.
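For what it's worth, here's roughly what I'd be doing - a minimal sketch with the python-zeroconf library (the service name, keys and values are made up, and I'm assuming a reasonably recent version of the library):

```python
import socket
from zeroconf import ServiceInfo, Zeroconf

# Invented identity; the product/version pair goes into the TXT record so that
# browsers of _http._tcp can tell this service apart from the NAS or receiver.
info = ServiceInfo(
    "_http._tcp.local.",
    "MyApp._http._tcp.local.",
    addresses=[socket.inet_aton("192.168.1.50")],
    port=8080,
    properties={"product": "myapp", "version": "1.2.0"},
)

zc = Zeroconf()
zc.register_service(info)
try:
    input("Service registered, press Enter to exit...\n")
finally:
    zc.unregister_service(info)
    zc.close()
```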
Try talking to the service
After finding a service one could always perform a HEAD request on a known endpoint and look for a known header set by the service.
This feels like a fairly slow approach and who knows what making a HEAD request to my receiver will do.
And just to be clear, this question has nothing to do with a specific language or framework, it's about the concepts of zeroconf.
I could show some code but I don't see how that would help.
First, does the service you're advertising actually meet the qualifications for _http as defined by RFC 2782? Specifically, is it not just using HTTP as a transport, but also:
can be displayed by "typical" web browser client software, and
is intended primarily to be viewed by a human user.
If no, register your own service type (there are a couple of other services that use HTTP as a transport but don't meet those qualifications, so they have -http as a suffix on the service name; see pgpkey-http, senteo-http, xul-http).
If yes, there are a couple of ways to go, depending on how strict one's interpretation of the RFC is. The least strict is just adding a TXT record, as you've already noted in your question. iTunes registers itself with a TXT record in the format iTSh Version=196618.
If you're feeling a little more strict, the RFC only explicitly states that the u=, p= and path= TXT records exist for HTTP. Perhaps someone can chime in on this, but I haven't seen much discussion on whether adding TXT records to already existing entries is frowned upon or not. So with that, the other way is to just use an algorithmic instance name. For example, adding the suffix "-NicklasAService" to the device name. Hopefully that gives it a name that is unique on the local network while still making it so that the service can easily be picked out from the PTR records by just looking for the suffix.
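On the browsing side that could look something like this (again a sketch with python-zeroconf; the suffix is the made-up one from above):

```python
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

SUFFIX = "-NicklasAService"   # algorithmic suffix appended to the instance name

class SuffixListener(ServiceListener):
    def add_service(self, zc, type_, name):
        # The instance name comes straight from the PTR record, so filtering
        # needs no extra request to the device.
        instance = name.split("._")[0]
        if instance.endswith(SUFFIX):
            print("found one of ours:", name)

    def remove_service(self, zc, type_, name): pass
    def update_service(self, zc, type_, name): pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_http._tcp.local.", SuffixListener())
input("Browsing, press Enter to exit...\n")
zc.close()
```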