Promotion engine for POS self-checkout - rule engine

I wish to make a promotion engine for retail systems connected to a central system, so that when items are added to a cart, a promotion offer is applied automatically based on the items added.
For example: buy 2 items from group A and 1 item from group B, and the offer "Buy 1 Get 2 Free" is applied.
Item A and item B are each groups of a few items.
One way I could go is to hit an API every time the cart is updated and then check which rule is satisfied, but I don't want to go that way. I want it to work in real time without hitting an API.
Please suggest the best possible way to do this.
Thanks
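For illustration, a minimal sketch of one common direction, assuming the rule definitions are synced from the central system and cached on the POS so they can be evaluated in memory on every cart change, with no API call per update. All class and property names below are made up:

using System.Collections.Generic;
using System.Linq;

public class PromotionRule
{
    public string OfferName { get; set; }                                // e.g. "Buy 1 Get 2 Free"
    public Dictionary<string, int> RequiredGroupQuantities { get; set; } // group id -> minimum quantity
}

public class CartItem
{
    public string ItemId { get; set; }
    public string GroupId { get; set; }   // the group (item A, item B, ...) this item belongs to
    public int Quantity { get; set; }
}

public static class PromotionEvaluator
{
    // Returns every offer whose group quantities are satisfied by the current cart.
    public static IEnumerable<PromotionRule> Evaluate(IEnumerable<CartItem> cart,
                                                      IEnumerable<PromotionRule> rules)
    {
        var quantityPerGroup = cart.GroupBy(i => i.GroupId)
                                   .ToDictionary(g => g.Key, g => g.Sum(i => i.Quantity));

        return rules.Where(rule => rule.RequiredGroupQuantities.All(
            req => quantityPerGroup.TryGetValue(req.Key, out var qty) && qty >= req.Value));
    }
}

Evaluate can be called locally on every cart change; only the rule definitions themselves need to be refreshed from the central system periodically.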

Related

Best way of passing object information through pages

This question probably already exists, but it's too specific and hard to search for.
So, imagine that we have an e-commerce application.
On page 1 we have a list of products. When a product is tapped, we go to page 2, which holds more information about the product that was just tapped. Pretty much like any other e-commerce app out there.
Which of these two approaches is better?
1) When a product is tapped, we pass all of its information to page 2 via arguments. Then no request to the database is necessary.
2) When a product is tapped, we pass only its ID, and then we need to make a request to get the product's information from the database.
You might think it's obvious that option 1 is better, but with option 2 we pretty much guarantee that the product information reflects the latest state of the database, because the owner might change the product's price milliseconds after you tap it.
[image describing user interaction]
I would go for the second option most of the time.
As you already said, the information will always be up to date. Also, requesting all products with all of their information up front creates quite a bit of overhead, depending on the size of the product page and how much information each product carries. Another thing is updating the information live: maybe you'll decide later to add a Stream that updates the information while the user is on the product page. Querying each product individually will make that easier as well.
If you can afford the resources of requesting the product every time and the request isn't too expensive, it is in my opinion the better option.
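A rough sketch of option 2 (IProductApi, GetProductByIdAsync and the other names are placeholders, not a specific framework API): the list page passes only the ID, and the detail page fetches the current record when it opens.

using System.Threading.Tasks;

public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }   // always fetched fresh when the detail page opens
}

// Placeholder for whatever database or backend access layer you already have.
public interface IProductApi
{
    Task<Product> GetProductByIdAsync(string id);
}

public class ProductDetailPage
{
    private readonly IProductApi _api;

    public ProductDetailPage(IProductApi api) => _api = api;

    // Only the ID travels between pages; the up-to-date data is requested here,
    // so a price change made a moment earlier is already reflected.
    public Task<Product> LoadAsync(string productId) => _api.GetProductByIdAsync(productId);
}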

Test fails in IFTTT with "returns at least three items"

I'm creating my own service, and the endpoint test fails with this error:
"returns at least three items"
The error comes from the trigger part.
Can somebody share a sample output value with three items in it? Please help.
IFTTT expects you to send at least 3 result items; to get past the test you can simply clone the same object twice with different ids (see the sample payload after the FAQ excerpt below).
From the FAQ section:
My service fails the "returns at least three items" endpoint test. Why does IFTTT require three items? We require three items during the testing phase to make sure your API behaves like a timeline of events, not a state engine.
This requirement might seem strange when you think of your integration with IFTTT as something that is entirely realtime in nature, like “IF Button Pressed, THEN Turn On Lights”: what good would come from anything but the current state of the button?
But what about the Applet “IF Button Pressed, THEN Log to Spreadsheet”? In this case it would be important to store and return multiple event items because there is no guarantee that we’ll call your API (even with the Realtime API) at the moment the event occurs. By keeping and returning a list of events, IFTTT users are more assured they won’t miss a thing.
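As an illustration only (the created_at and message fields are made-up example ingredients; your trigger's own fields go there), a trigger response that satisfies the test returns a data array with at least three entries, each carrying its own unique meta.id and a unix timestamp, typically ordered newest first:

{
  "data": [
    {
      "created_at": "2024-05-01T10:00:00Z",
      "message": "sample event 1",
      "meta": { "id": "event-1", "timestamp": 1714557600 }
    },
    {
      "created_at": "2024-05-01T09:00:00Z",
      "message": "sample event 2",
      "meta": { "id": "event-2", "timestamp": 1714554000 }
    },
    {
      "created_at": "2024-05-01T08:00:00Z",
      "message": "sample event 3",
      "meta": { "id": "event-3", "timestamp": 1714550400 }
    }
  ]
}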

Is it correct to perform GET requests and checks inside a POST handler?

I'm designing a ticket booking API. Right now booking a ticket resolves into POST /users/{id}/tickets, but each /events/{id} has a maximum number of available tickets. How do I properly design the check?
I've come up with two ways:
1) Having an availableTickets field in /events/{id} that gets checked and possibly updated each time I POST a new ticket.
2) Having a maxTickets field in /events/{id}, then checking the length of the GET /events/{id}/tickets array and comparing it to maxTickets.
Either way I have to perform a GET request inside the POST handler, but that doesn't look right to me. Do you have any suggestions?
How would you design a ticketing system for a Web page? The same steps you apply to a Web page also apply to REST, as REST is just a generalization of the same interaction flow used on the Web.
Usually, on the Web you have a page where you can see an event you can order tickets for. On this page you have a link to order tickets for that particular show. Depending on the system in use, you might see a layout of the event venue in the form of buttons or images to click; if there is a fixed seat order, available seats are marked green and seats that are already booked red (or whatever color scheme you use). A click on a seat triggers some reservation logic on the server that returns almost the same page as before, but this time with that seat marked orange to indicate a reservation. Next you click the available seat next to it to reserve a further seat. This continues until you either have enough seats marked as reserved, or no available seats remain and your only options are to cancel the reservation, proceed to the order step, or unreserve seats you marked earlier. Once you are satisfied with your choice, you find an order or submit button or link that turns your reservation into a booking. This might involve some further steps, such as entering your contact and/or billing information. That, in principle, is how I'd design such a system for the Web.
As you might see, this turns into a kind of state machine where the server tells you all of the options you have at the current state of the process. This is exactly what Asbjørn Ulsberg mentions when talking about affordance and state machines. From the blueprint of the venue and the respective seats on that blueprint, which are actually buttons or images you can click, you know what these widgets are for and roughly what will happen when you click on one of the seats. That is what affordance is all about: by seeing it, you know what you can do with it.
The interaction concept outlined above should be taken and translated to REST. As a client you don't need to know the structure of the URI; all you need to know is which seats are available and what happens when you follow certain links. In REST this is usually done through link relation names, which give those links semantic context within the current state of the resource the client just fetched. Such link relations may look like a priori knowledge required by the client, which is a bit anti-REST, since REST tries to decouple clients from servers so the latter can evolve freely without risking broken clients; however, link relations should either be standardized or based on extensions such as Dublin Core or other microformats. Building on standards leads either to broad acceptance and support by different clients or to mechanisms for plugging such knowledge into a client later on. In general this avoids so-called out-of-band information, or process flows that force you to look up a manual on how to use the system.
The approach outlined above would use a dedicated reservation resource that is created when you "enter" the reservation and kept until the order step is invoked. This reservation resource keeps track of the seats the user has reserved so far. Whether the system considers seats reserved by other users as taken is an implementation detail. It is fine to use either a first-come-first-served approach or a more polite one that guarantees the reserver their seats until a grace period passes without an order. This should give you a good impression that such resources can be volatile and exist only as part of a certain process.
As to whether to use GET, POST, or other HTTP methods: a Web page that sends you to a reservation page will show you a form containing all of the seats of the venue, and as HTML only supports GET and POST, the latter is the most appropriate choice there. In a REST or HTTP API you might use PUT instead. The server may already have assigned you a certain unique "reservation" link that you can simply invoke with PUT: if the reservation resource does not exist yet, it will be created for you; if it does, its whole content is replaced. Especially when you are dealing with reservations and money flows, you want to use idempotent methods such as PUT (a sketch follows below).
I hope I could give you some ideas on how you might design your reservation system by letting the server teach the client everything it needs to know to proceed through its task.
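To make the idempotent PUT idea above a bit more concrete, here is a minimal, framework-free sketch; the Reservation and ReservationStore names, and the PUT /reservations/{id} route shape, are assumptions for illustration:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

public class Reservation
{
    public Guid EventId { get; set; }
    public List<string> SeatIds { get; set; } = new List<string>();
}

// Sketch of the server-side state behind PUT /reservations/{id}.
public class ReservationStore
{
    private readonly ConcurrentDictionary<Guid, Reservation> _reservations =
        new ConcurrentDictionary<Guid, Reservation>();

    // PUT semantics: create the resource if it doesn't exist yet, otherwise
    // replace its whole state. Sending the same request twice leaves the same
    // state behind, which is what makes the operation idempotent and safe to retry.
    public void Put(Guid reservationId, Reservation reservation)
    {
        _reservations[reservationId] = reservation;
    }

    public bool TryGet(Guid reservationId, out Reservation reservation)
        => _reservations.TryGetValue(reservationId, out reservation);
}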
It's inside the POST method (server-side) that you must check whether tickets are still available before booking the event.
You can create a specific route that reports how many tickets are available, if needed; the client could call it before booking an event. Or expose availableTickets in GET /events/{id}.
Imagine 10 clients trying to buy the last ticket at the same time: if the check is not done in the POST method, you'll book 9 imaginary tickets.
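A minimal sketch of that server-side check (in-memory and simplified; in a real system the lock would be a database transaction or a conditional update, and all names here are placeholders):

using System;

public class EventSeating
{
    public Guid EventId { get; set; }
    public int MaxTickets { get; set; }
    public int TicketsSold { get; set; }
}

public class BookingService
{
    private readonly object _gate = new object();

    // Called from the POST /users/{id}/tickets handler. The availability check
    // and the increment happen atomically, so ten concurrent requests for the
    // last ticket yield one booking and nine rejections rather than overselling.
    public bool TryBook(EventSeating ev)
    {
        lock (_gate)
        {
            if (ev.TicketsSold >= ev.MaxTickets)
                return false;      // sold out: the handler can answer 409 Conflict

            ev.TicketsSold++;      // persist the new ticket in the same transaction
            return true;
        }
    }
}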

Event sourcing: how to change business rules

My application uses CQRS and event sourcing. It's already in production.
Now I must add a business rule. My business rules are in my aggregate root, UserAggregate.
My commands:
public class CallUserForMarketingPlanCommand
{
    public Guid UserId { get; set; }
    public DateTime CallDate { get; set; }
    public Guid PlanId { get; set; }
}

public class AcceptMarketingPlanCommand
{
    public Guid UserId { get; set; }
    public DateTime AnswerDate { get; set; }
    public Guid PlanId { get; set; }
}
... and the same thing for RefuseMarketingPlanCommand.
These commands are applied to my aggregate root, which generates events stored in the event store.
Now, if the user has not given an answer 50 days after the call, the user must be recalled by an operator. To do this, I think I should generate a UserDoNotRepliedInDelayEvent and use it to project to a read model with the recall information.
My solution is to create a deferred command (from the UserCalledForMarketingPlanEvent handler), CheckUserAnswerCommand, which checks the call date and generates a UserDoNotRepliedInDelayEvent through the aggregate if necessary. OK.
My problem is: how do I defer this command for users already in my event store (from before this change)?
EDIT:
Without considering deferred messages: how do I change a business rule (or a business rule parameter) that affects the state of an aggregate? Simple example:
Disable an account if two payments are not performed.
This rule came with the first deployment. OK, now there are 1000 disabled accounts. The boss changes the rule because the business is impacted, and now wants an account disabled only if 5 payments are not performed.
How do I re-enable the accounts that have fewer than 5 missed payments?
Thanks for your help.
Now, if the user has not given an answer 50 days after the call, the user must be recalled by an operator. To do this, I think I should generate a UserDoNotRepliedInDelayEvent and use it to project to a read model with the recall information.
If I understood your question correctly, the main point here is that the user "not replying" in time is not an action (command) of your domain; quite the contrary, it is the absence of an action. So in this scenario, I don't think you need an event at all.
You simply need a read model that registers all sent invitations and their statuses (whether they were replied to, their reply dates, and how long they have stood unanswered). Then you can check this read model for unanswered invitations that exceed your deadline of 50 days (which should be simple enough at this point).
So, up to this point, no new events are generated in your "invitations" event store. You're simply interpreting the store into a specific read model that answers a question you have (which invitations were not answered).
From this point on, it depends on your architecture.
You might want a recurring process that checks this read model for invitations exceeding your deadline and has those specific invitations trigger an "InvitationExpiredEvent" or something similar to notify the interested parties (those who will resend them, for instance).
Or you might want a more passive approach that doesn't need an extra event: simply read this read model when appropriate (in the GUI, maybe) and list the expired invitations.
This will then fix itself, since you can generate the read model retroactively (finding users from any given point in time who never answered their invitations) and put them through the re-invitation pipeline.
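A small sketch of that read model check, assuming a projection that keeps one row per invitation (InvitationRow and the other names are illustrative only):

using System;
using System.Collections.Generic;
using System.Linq;

// One row per sent invitation, maintained by projecting the existing events
// (the call event and any accept/refuse events).
public class InvitationRow
{
    public Guid UserId { get; set; }
    public Guid PlanId { get; set; }
    public DateTime CallDate { get; set; }
    public DateTime? AnswerDate { get; set; }   // stays null while unanswered
}

public static class RecallReadModel
{
    // Invitations with no answer 50 days after the call: the list an operator
    // (or a recurring job) can work from, without adding any new event.
    public static IEnumerable<InvitationRow> NeedingRecall(
        IEnumerable<InvitationRow> invitations, DateTime now)
    {
        return invitations.Where(i => i.AnswerDate == null
                                   && (now - i.CallDate).TotalDays >= 50);
    }
}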
Without considering deferred messages: how do I change a business rule (or a business rule parameter) that affects the state of an aggregate? Simple example:
Disable an account if two payments are not performed.
This rule came with the first deployment. OK, now there are 1000 disabled accounts. The boss changes the rule because the business is impacted, and now wants an account disabled only if 5 payments are not performed.
How do I re-enable the accounts that have fewer than 5 missed payments?
This part of your question is more confusing. From what I understood, you once had a rule that stated "Accounts with two or more expired payments should be deactivated" and you want to change this rule to "Accounts with five or more expired payments should be deactivated". If that's the case, you have to deal with this on multiple levels...
First, you must implement the new rule in your command model, the same way it has always been done, but with the updated parameter.
Second, you cannot retroactively reactivate accounts with 2, 3, or 4 expired payments by ignoring their "deactivation events". From your event store's point of view, this happened, and you must abide by the rule that an event store is an append-only storage. So you must use compensating events to reactivate them after the rule change.
So, if you took care of the first point (and your domain is up and running with the new rule), and since you can't take a shortcut because of the second point, one of your easier options is to develop a one-shot operation that finds accounts with 2, 3, or 4 expired payments that are currently disabled and appends a reactivation event to their event streams. At this point you will have to regenerate any affected read models if your architecture doesn't do this automatically.
That way, the next time commands are executed against these accounts, their event streams will reflect the fact that they have been reactivated and are thus currently active.
From an event store point of view, each of these accounts will have something like this in its event stream:
... > Payment Expired > Account Disabled > (maybe other stuff happened) > Account Re-Enabled
So your event store will be a pretty accurate representation of your business scenario: at one point you chose to disable accounts with only 2 expired payments, so a certain account was disabled because of that; later you changed your mind, and even without paying their debts, these accounts were re-enabled.
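A rough sketch of what that one-shot operation could look like, assuming an event store that can append to a per-account stream; every type and method name here is a placeholder for your own infrastructure:

using System;
using System.Collections.Generic;

public class AccountReEnabledEvent
{
    public Guid AccountId { get; set; }
    public DateTime OccurredAt { get; set; }
    public string Reason { get; set; }
}

// Minimal view of what the one-shot migration needs; both interfaces stand in
// for whatever your read model and event store actually expose.
public interface IAccountReadModel
{
    IEnumerable<Guid> DisabledAccountsWithExpiredPaymentsBelow(int threshold);
}

public interface IEventStore
{
    void Append(Guid streamId, object @event);
}

public class ReEnableAccountsMigration
{
    private readonly IAccountReadModel _readModel;
    private readonly IEventStore _store;

    public ReEnableAccountsMigration(IAccountReadModel readModel, IEventStore store)
    {
        _readModel = readModel;
        _store = store;
    }

    // One-shot compensating operation: nothing in the past is rewritten; a new
    // event is appended so each affected stream ends in the corrected state.
    public void Run()
    {
        foreach (var accountId in _readModel.DisabledAccountsWithExpiredPaymentsBelow(5))
        {
            _store.Append(accountId, new AccountReEnabledEvent
            {
                AccountId = accountId,
                OccurredAt = DateTime.UtcNow,
                Reason = "Rule change: accounts are now disabled only after 5 expired payments"
            });
        }
    }
}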
EDIT:
In fact, I think the problem can be summarized as "how to integrate retroactive rules into an event-sourced system".
If that's the case, then the answer will be more along the lines of "there shouldn't be retroactive actions in an event-sourced domain".
As I said in my original answer, an event stream should be an append-only storage, mainly because only the exact order of events, as they happened, can guarantee the integrity of your rules as they were when those events happened. In that sense, an event store is less flexible than a traditional one, as it is much more sensitive to external interference, and that will sometimes be a pain (we are used to meddling with data sources directly to fix stuff).
However, we should really try to keep to that rule and acknowledge that whatever happened, happened and can't be changed. What you can do is add "compensation events" to the end of the event stream, that is, new events that register a change of state at a given time to reflect your rule change. You will then need a one-shot process that goes through your entities and decides which of them are eligible for such a compensating event.
Now, of course, rules are meant to be broken when needed, and with enough consideration you can go wild in the event store. Just know the risks. If you choose to go "full time machine mode" on the event store, the main risks you will face (and should guard against) are:
Entities going into invalid states during their lifetime. It doesn't matter that the entity "ends" the event stream in a valid state; you must validate that it never enters an invalid state, as that is a prerequisite of event streams. So, for each entity affected by your editing, you will need to evaluate its validity step by step through the new event stream.
Mismatches between source code and event stream. This is a little trickier, but one of the maneuvers you can pull off with an event-sourced system is rolling back your source code repository to a given date and "discarding" events from that date forward, so that you can re-execute actions as they would have happened in the past. If you edit past events, though, you might face situations where the recorded events don't match what would have happened based on the source code of that time. That could be critical and extremely misleading in the future. You should monitor for that.
If your architecture integrates different contexts/domains/microservices, that may also need further evaluation. Say context-A issued a cross-boundary message to context-B because of a given state of an entity. Later, you change the entity's state by meddling with the event stream. Now these contexts might be left inconsistent with each other, as context-B believes the entity has a state it no longer has. This might be very relevant in your scenario.
You could also use a saga that keeps track of the process and then creates a command like RecallNeeded when the time is up. It also keeps track of the events that tell the saga to complete if there was an answer within the 50 days. (Keep in mind that a saga is part of your domain logic and acts like an AR if you're doing DDD.)
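As a sketch of that saga idea (framework-free; the timeout callback, MarketingPlanAnsweredEvent, and RecallNeededCommand are assumptions standing in for your messaging framework's own timeout and message types):

using System;

public class UserCalledForMarketingPlanEvent { public Guid UserId { get; set; } public Guid PlanId { get; set; } public DateTime CallDate { get; set; } }
public class MarketingPlanAnsweredEvent { public Guid UserId { get; set; } public Guid PlanId { get; set; } }
public class RecallNeededCommand { public Guid UserId { get; set; } public Guid PlanId { get; set; } }

public class MarketingPlanRecallSaga
{
    public Guid UserId { get; private set; }
    public Guid PlanId { get; private set; }
    public bool Completed { get; private set; }

    // Started by the call event: remember the data and ask the messaging
    // infrastructure to wake this saga up 50 days later.
    public void Handle(UserCalledForMarketingPlanEvent e, Action<DateTime> requestTimeout)
    {
        UserId = e.UserId;
        PlanId = e.PlanId;
        requestTimeout(e.CallDate.AddDays(50));
    }

    // An accept or refuse within the 50 days completes the saga.
    public void Handle(MarketingPlanAnsweredEvent e) => Completed = true;

    // Timeout fired: if no answer arrived in time, dispatch the recall command.
    public void HandleTimeout(Action<RecallNeededCommand> dispatch)
    {
        if (!Completed)
            dispatch(new RecallNeededCommand { UserId = UserId, PlanId = PlanId });
    }
}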

In-app purchase: subscription vs consumable

I'm making an app that will allow the user to purchase either a subscription or a consumable which gives them access to data on a monthly basis. Once the new data for the next month is available, they will download it, and the previous data becomes invalid (it is actually illegal to use), so it will be removed. So I'm not sure which to choose: a subscription model or a consumable model? From what I can see, either one would work. Any reason to choose one over the other?
I'd go with a consumable in this case. Although it sounds like a monthly subscription model, it is not, as access to the previous item essentially runs out or is consumed. The consumable model makes more sense.
Plus, from a developer's point of view, not having to make the content available across multiple devices (a requirement of the subscription model) can make things a little easier.
With a consumable item, if the user performs a restore, the item will not be restored, whereas a subscription can be restored if it is still valid.