I have an events module. My users favorite some random events from a listing, and I store them in a favorites table in my database:
id module
1 events
5 events
9 events
2 business
Now, my question: how can I make a query that fetches ids 1, 5 and 9 in a single request for the events module? Is there any way to do that?
Yes, you can filter by id to get multiple events in one request; look at the selected answer to this question to learn how to do that. It should work out of the box.
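For example, if the favorites live in a MongoDB-style store (an assumption; the question doesn't name the database, and in SQL the equivalent would be an IN (...) clause), the whole lookup is one query:

// Hypothetical collection and field names: fetch favorites 1, 5 and 9
// for the events module in a single query.
db.favorite.find({ module: "events", id: { $in: [1, 5, 9] } });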
I'm using the REST API to update a SharePoint list item (a counter). In my code, on form load I fetch the count from the list and increment it by 1, and on the submit button click I update the count value in the list. This much works.
The problem arises when two users submit the form at the same time: the counter value in the list is incremented by 1 instead of 2.
I checked ETags, but found that if the ETag sent with the POST doesn't match the ETag from the GET request, an error is thrown.
Is there a way to achieve this with the REST API while incrementing the counter properly, even if 3-4 or more users submit the form?
SharePoint doesn't expose a transaction manager, so you can't group multiple REST operations into one atomic transaction. However, the ETag mismatch you ran into is the tool to use: treat that error as a signal to re-read the item and retry the update (optimistic concurrency).
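A minimal sketch of that retry loop (the list title, item id and field name are assumptions, and a valid X-RequestDigest header is additionally required for writes, omitted here for brevity):

async function incrementCounter(maxRetries) {
  var url = "/_api/web/lists/getbytitle('Counter')/items(1)";
  for (var attempt = 0; attempt < maxRetries; attempt++) {
    // Read the current value together with its ETag.
    var res = await fetch(url, { headers: { Accept: "application/json;odata=verbose" } });
    var item = (await res.json()).d;
    // Try to write back; IF-MATCH makes SharePoint reject the update
    // if someone else changed the item since our read.
    var update = await fetch(url, {
      method: "POST",
      headers: {
        "Content-Type": "application/json;odata=verbose",
        "X-HTTP-Method": "MERGE",
        "IF-MATCH": item.__metadata.etag
      },
      body: JSON.stringify({
        __metadata: { type: item.__metadata.type },
        Count: item.Count + 1
      })
    });
    if (update.status !== 412) return; // done; 412 means someone beat us, so retry
  }
  throw new Error("Counter update kept conflicting after " + maxRetries + " attempts");
}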
This is still a theory in my mind.
I'm rebuilding my backend by splitting things into microservices. The microservices I'm imagining for starting off are:
- Order (stores order details and status of each order)
- Customer (stores customer details, addresses, orders booked)
- Service Provider (stores service provider details, status & location of each service provider, order(s) currently being processed by the service provider, etc.)
- Payment (stores payment info for each order)
- Channel (communicates with customers via email / SMS / mobile push)
I hope to be able to use PUB/SUB to create a message with corresponding data, which can be used by any other microservice subscribing to that message.
First off, I understand the concept that each microservice should have complete code & data isolation (thus, on different instances / VMs); and that all microservices should communicate strictly using HTTP REST API contracts.
My doubts are as follows:
To show a list of orders, I'll be using the Order DB to get all orders. Each Order document (I'll be using MongoDB for storage) will have a customer_id foreign key. Now there's the issue of resolving customer_name from customer_id.
If I need to show 100 orders on the page, and assuming each order has a unique customer_id associated with it, will I need to make 100 REST API calls to get the names for all 100 customer_ids?
Or, is data replication a good solution for this problem?
I am envisioning something like this w.r.t. PUB/SUB: The business center personnel mark an order as assigned & select the service provider to allot to that order. This creates a message on the cross-server PUB/SUB channel.
Then, the Channel microservice (which is on a totally different instance / VM) captures this message & sends a Push message & SMS to the service provider's device using the data within the message's contents.
Is this possible at all?
UPDATE TO QUESTION 2: I want the Order microservice to be completely independent of any other microservices that will be built on top of or alongside it. The Channel microservice is an example of a microservice that depends on events taking place within the Order microservice.
Also, please guide me on which technologies / libraries to use.
What I'll be developing on:
Java
MongoDB
Amazon AWS instances for each microservice.
Would appreciate anyone's help on this.
Thanks!
#1
If I need to show 100 orders and each order has a unique customer_id, will I need to make 100 REST API calls?
No. Just make one batch request carrying the 100 customer_ids and have the Customer service return a dictionary of customer_id <=> customer_name.
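A hypothetical sketch of such a batch endpoint on the Customer service (Express and the MongoDB Node driver are assumptions here, as are all the names):

// GET /customers/names?ids=c1,c2,c3  ->  { "c1": "Alice", "c2": "Bob", ... }
app.get("/customers/names", async (req, res) => {
  const ids = req.query.ids.split(",");
  const docs = await db.collection("customers")
    .find({ _id: { $in: ids } })
    .project({ name: 1 })
    .toArray();
  const namesById = {};
  docs.forEach(d => { namesById[d._id] = d.name; });
  res.json(namesById);
});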
#2
It's a single request
POST /orders/new
{
  "selected_service_provider_id": "123",
  ...
}
This can return an order_id, which you can show to the customer, use to track progress, and so on.
On the server side, you receive the order and process it. Processing can include sending an SMS at some stage. That functionality can be implemented inside the original service that received the request, or as a separate call to another dedicated service.
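As a sketch of that event-driven variant (the broker object and the sendPush/sendSms helpers are placeholders for whatever pub/sub mechanism and notification gateway you pick, e.g. Redis, RabbitMQ or Amazon SNS/SQS):

// Order service: after persisting the assignment, publish an event.
function assignOrder(orderId, serviceProviderId) {
  // ... update the order document in the Order DB first ...
  broker.publish("order.assigned", JSON.stringify({ orderId, serviceProviderId }));
}

// Channel service (separate instance/VM): react to the event.
broker.subscribe("order.assigned", function (message) {
  var evt = JSON.parse(message);
  sendPush(evt.serviceProviderId, "Order " + evt.orderId + " assigned to you");
  sendSms(evt.serviceProviderId, "Order " + evt.orderId + " assigned to you");
});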
To your first question: you don't need to do 100 queries, just one with the array of your 100 ids, like the following:
db.collection.find( { _id : { $in : [1,2,3,4] } } );
https://stackoverflow.com/a/7713461/1384539
I know this question is 1 year old, but I would like to add my answer to the first point.
One option would be to use some form of CQRS and store some of the user's details in the Order DB as well, when creating an order. That way, when you have to show the list of orders, you already have all the details you need. The order document then also represents a snapshot of the user's state at the moment of order creation.
Of course, if you don't have the user details when storing the order, you just need to make a GET call to the User Service, but that would be 1 call, not 100.
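For illustration, a denormalized order document might look like this (all field names are made up):

{
  _id: "order-1042",
  status: "assigned",
  customer: {                 // snapshot taken at order-creation time
    customer_id: "cust-77",
    name: "Jane Doe"
  },
  service_provider_id: "sp-123",
  created_at: ISODate("2015-06-01T10:00:00Z")
}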
I've created a test event whose address looks like this:
www.facebook.com/events/1485237735028137
What do the digits at the end mean?
How are they generated when an event is created?
For example, on the vk.com social network, events are numbered in order: 1, 2, 3, 4, ...
What is the ordering on Facebook?
The number at the end is the event ID. It uniquely identifies that event, but there is no other information you can infer from the ID directly; you need to retrieve the event's details via the API to get more data.
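For example, via the Graph API (assuming you have a valid access token with permission to read the event):

// ACCESS_TOKEN is a placeholder for your own token.
fetch("https://graph.facebook.com/1485237735028137?fields=name,start_time&access_token=" + ACCESS_TOKEN)
  .then(function (r) { return r.json(); })
  .then(function (event) { console.log(event.name, event.start_time); });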
I'm implementing a notification system in MongoDB. Basically, I create notification items and subscribe users to them. After that, if any action happens on a notification item, I log it in an action document, so I can query a user's notifications from the action collection.
So far it's all good.
Now I want to group actions by their type (say follow, comment, like) and by notification item, and sort the result by latest action time, so I can build summaries like "John, Lisa and 3 others like your bla bla".
To achieve this I implemented map-reduce in MongoDB, and it works. But it doesn't look right, because map-reduce generates a collection as output.
In this case: User1 logs in, map-reduce runs and generates the notification collection; then User2 logs in, and map-reduce runs and regenerates a notification collection with the same name (which was already created for the other user).
I'm curious what happens if multiple users trigger the map-reduce at the same time (which is quite possible; it's a social network).
I thought that maybe I could create a unique notification collection per user (notification_$userid), but for 1M users, 1M collections doesn't sound right.
By the way, I started with MongoDB's group() but changed to map-reduce because I want to sort the result set.
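For reference, a simplified sketch of my map-reduce (the field names are illustrative):

db.actions.mapReduce(
  function () {
    // Group key: one bucket per notification item and action type.
    emit({ notification: this.notification_id, type: this.type },
         { actors: [this.actor_name], count: 1, latest: this.created_at });
  },
  function (key, values) {
    var merged = { actors: [], count: 0, latest: null };
    values.forEach(function (v) {
      merged.actors = merged.actors.concat(v.actors);
      merged.count += v.count;
      if (merged.latest === null || v.latest > merged.latest) merged.latest = v.latest;
    });
    return merged;
  },
  { out: "notifications" } // <-- the shared output collection that worries me
);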
Thank you.
The repository in the CommonDomain only exposes GetById(). So what should I do if my handler needs a list of Customers, for example?
On the face of it, if you needed to perform operations on multiple aggregates, you would just provide the IDs of each aggregate in your command (which the client would obtain from the query side), then get each aggregate from the repository.
However, looking at one of your comments in response to another answer, I see that what you are actually referring to is set-based validation.
This very question has raised quite a lot of debate about how to do this, and Greg Young has written a blog post on it.
The classic question is: how do I check that a username hasn't already been used when processing my CreateUserCommand? I believe the suggested approach is to assume that the client has already done this check by asking the query side before issuing the command. When the user aggregate is created, the UserCreatedEvent will be raised and handled by the query side. There, the insert query will fail (because of a check or a unique constraint in the DB), and a compensating command would be issued, which would delete the newly created aggregate and perhaps email the user telling them the username is already taken.
The main point is, you assume that the client has done the check. I know this approach is difficult to grasp at first, but it's the nature of eventual consistency.
Also you might want to read this other question which is similar, and contains some wise words from Udi Dahan.
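As a concrete sketch of that client-side pre-check (the endpoint and the showError/sendCommand helpers are hypothetical):

// Ask the query side whether the username is taken before issuing the command.
var response = await fetch("/query/usernames/" + encodeURIComponent(username));
if (response.ok) {
  showError("Username already taken");   // hypothetical UI helper
} else {
  await sendCommand({ type: "CreateUser", username: username });
  // The query side can still hit the unique constraint, in which case a
  // compensating command deletes the aggregate and notifies the user.
}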
In the classic event-sourcing model, queries like "get all customers" would be carried out by a separate query handler, which listens to all events in the domain and builds a query model to satisfy the relevant questions.
If you need to query customers by last name, for instance, you could listen to all customer created and customer name change events and just update one table of last-name to customer-id pairs. You could hold other information relevant to the UI that is showing the data, or you could simply hold IDs and go to the repository for the relevant customers in order to work further with them.
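A sketch of such a projection (the event and collection names are illustrative):

// Query-side handler keeping a last-name -> customer-id lookup table current.
function onEvent(event) {
  if (event.type === "CustomerCreated" || event.type === "CustomerNameChanged") {
    readDb.collection("customers_by_last_name").updateOne(
      { customerId: event.customerId },
      { $set: { lastName: event.lastName } },
      { upsert: true }
    );
  }
}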
You don't need a list of customers in your handler. Each aggregate MUST be processed in its own transaction. If you want to show this list to the user, just build an appropriate view.
Your command needs to contain the id of the aggregate root it should operate on.
That id will be looked up by the client sending the command, using a view in your read model. The view will be populated with data from the events that your AR emits.