AnyLogic Chat Call Center Model

I am trying to model a call center with chat communication and need your thoughts on this scenario. The real-world scenario is that Customer Service Representatives (CSRs) in a chat call center can service multiple customer chats at the same time, based on their capacity (an integer value: 1, 2, ...).
"Chat" Agent [source]
"ChatAgent" resource unit with int parameters totalCapacity[default=3]
Using a service, incoming "Chat" from source seizes a "ChatAgent" from a resourcePool[with resourceUnit "ChatAgent"]. In this model, a "ChatAgent" accepts only 1 "Chat" inside the service block.
ResourcePool
On seize: unit.totalCapacity--;
On release: unit.totalCapacity++;
But I couldn't model a scenario where one "ChatAgent" can service multiple customer "Chats" at a time based on its totalCapacity, like in a real chat call center.
Please advise on how I can configure this multiple-agents-to-one-resource seize/delay.
Updated Model
Updated ChatAgent Resource Structure
Thanks,
Shiva

There are many ways of doing this, but the first thing that comes to mind is NOT to use ChatAgent as a resource (at least not the kind you use in a Service block), because chats can arrive at any given time, and a resource can't take many different agents that arrive at different times through the Service block...
Instead, you can use the following structure inside the chatAgent:
The capacity of the resource will define how many agents can enter the restrictedArea block... This structure will live inside your chatAgent resource.
Your main agent will have the following structure:
While a chat waits for an available chatAgent, you can check whether one is free with:
chatAgent.beginService.entitiesInside() < chatAgent.capacity
These are the most important details to make it work... now you have to build the model properly.
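For illustration, a small helper function in Main could pick a free agent using that check. This is only a sketch: the population name chatAgents is an assumption, while beginService and capacity are the elements described above (AnyLogic functions are plain Java):

ChatAgent findAvailableAgent() {
    for (ChatAgent a : chatAgents) { // assumed population of ChatAgent in Main
        if (a.beginService.entitiesInside() < a.capacity)
            return a; // this agent still has spare chat capacity
    }
    return null; // nobody is free: keep the chat waiting, e.g. in a Wait block
}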

Related

How to effectively use Worker, WorkflowClient

Product use case: our product has a typical use case where we will have n users. Each user will have n workflows, and each workflow can be run at any time (n times).
I believe this is a typical use case for any workflow product.
Can I use a domain to differentiate users (I mean to say, creating a domain per user)?
Can I create one WorkflowClient per user to serve all of his workflow executions, or should I create one WorkflowClient for each request? Which one is the recommended approach?
What is the recommended approach for creating Worker objects to poll a task list?
Please excuse me if I have asked anything meaningless.
Can I use a domain to differentiate users (I mean to say, creating a domain per user)?
Yes, especially when these users are working in different teams or products. Using different domains will keep workflow names/IDs from conflicting with each other, and also lets you assign an independent quota to each domain for managing traffic.
Can I create one WorkflowClient per user to serve all of his workflow executions, or should I create one WorkflowClient for each request? Which one is the recommended approach?
Use one WorkflowClient for each domain, but let all WorkflowClients on the same instance share the same TChannel service to save TCP connections.
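A minimal sketch of that setup with the Cadence Java client (treat the exact class and method names as assumptions, as they may vary between client versions):

import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowClientOptions;
import com.uber.cadence.serviceclient.ClientOptions;
import com.uber.cadence.serviceclient.IWorkflowService;
import com.uber.cadence.serviceclient.WorkflowServiceTChannel;

public class Clients {
    public static void main(String[] args) {
        // One TChannel service, i.e. one TCP connection, shared by all clients on this instance.
        IWorkflowService service = new WorkflowServiceTChannel(ClientOptions.defaultInstance());

        // One WorkflowClient per domain, all sharing the connection above.
        WorkflowClient teamA = WorkflowClient.newInstance(service,
                WorkflowClientOptions.newBuilder().setDomain("team-a").build());
        WorkflowClient teamB = WorkflowClient.newInstance(service,
                WorkflowClientOptions.newBuilder().setDomain("team-b").build());
    }
}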
I would start with a single namespace (domain) for all users. Unless your users directly operate their workflow implementations, it doesn't buy you much to use multiple namespaces.

Fiware-Orion: Geolocation

Hi, I'm a student and I'm working with the broker for the first time. I understand how the creation of entities works and how they are updated through "update" queries. My question is: can you create an entity that contains variables (e.g. geolocation) defined with a "null" or "zero" value, and then initialize them later with the values that interest me? So as to have dynamic rather than static variables (i.e. ones that would otherwise require a manual user update)?
Or do we need interaction with the CEP to do this?
From what I have read in the fiware-orion guide, when I create an entity (e.g. a car having the attributes velocity and coordinates: geopoint), the values of these two attributes must be set statically (e.g. speed 100 and position coordinates 40.257, 2.187). If I understand correctly, I can only change the values of these attributes by making an update query. So my question is:
Is it possible to update the value of the attributes that contain the position or the speed of the car dynamically, i.e. without having to type the values in from the keyboard? Or does this require the use of Orion's CEP?
In case I have not explained myself well: more generally, I would like to know whether it is possible to follow the progress of a moving car without me having to enter the values from the keyboard.
Thanks.
Orion Context Broker exposes a REST-based API that (among other things) allows you to create, update, and query entities. From the point of view of Orion, it doesn't matter who invokes the API: it can be done manually (for instance, using Postman or curl) or by an automated system developed by you or a third party (for instance, software running on a sensor in the car that measures the speed and periodically sends an update over a wireless communication network).
From a client-server point of view (in case you are familiar with these concepts), Orion plays the role of API server, and the one updating the speed (either manually or automatically) plays the role of API client.
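To make that concrete, here is a sketch of such an automated client in Java, sending an NGSIv2 attribute update to a local Orion instance. The entity id Car1, the attribute name speed, and the broker URL are assumptions for illustration:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SpeedUpdater {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        // NGSIv2: PATCH /v2/entities/{id}/attrs updates attribute values.
        String body = "{ \"speed\": { \"value\": 100, \"type\": \"Number\" } }";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:1026/v2/entities/Car1/attrs"))
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // Orion replies 204 No Content on success
    }
}

Run periodically (for instance from a device in the car), something like this keeps the entity's attributes changing without anyone typing values in.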

How to merge/consolidate responses from multiple RESTful microservices?

Let's say there are two (or more) RESTful microservices serving JSON. Service (A) stores user information (name, login, password, etc) and service (B) stores messages to/from that user (e.g. sender_id, subject, body, rcpt_ids).
Service (A) on /profile/{user_id} may respond with:
{id: 1, name:'Bob'}
{id: 2, name:'Alice'}
{id: 3, name:'Sue'}
and so on
Service (B) responding at /user/{user_id}/messages returns a list of messages destined for that {user_id} like so:
{id: 1, subj:'Hey', body:'Lorem ipsum', sender_id: 2, rcpt_ids: [1,3]},
{id: 2, subj:'Test', body:'blah blah', sender_id: 3, rcpt_ids: [1]}
How does the client application consuming these services handle putting the message listing together such that names are shown instead of sender/rcpt ids?
Method 1: Pull the list of messages, then pull profile info for each id listed in sender_id and rcpt_ids? That may require hundreds of requests and could take a while. Rather naive and inefficient, and it may not scale with complex apps.
Method 2: Pull the list of messages, extract all user ids, and make a bulk request for all the relevant users separately... this assumes such a service endpoint exists. There is still a delay between getting the message listing, extracting the user ids, sending the request for bulk user info, and awaiting the bulk user info response.
Ideally I want to serve out a complete response set in one go (messages and user info). My research brings me to merging responses at the service layer... a.k.a. Method 3: the API Gateway technique.
But how does one even implement this?
I can obtain the list of messages, extract the user ids, make a call behind the scenes to obtain the users' data, merge the result sets, then serve this final result up... This works OK with 2 services behind the scenes... But what if the message listing depends on more services? What if I needed to query multiple services behind the scenes, further parse their responses, query more services based on the secondary (tertiary?) results, and then finally merge... where does this madness stop? And how does it affect response times?
And I've now effectively created another "client" that combines all the microservice responses into one mega-response... which is no different than Method 1 above, except at the server level.
Is that how it's done in the "real world"? Any insights? Are there any open source projects that are built on such API Gateway architecture I could examine?
The solution we used for this problem was denormalization of data, with events for keeping it updated.
Basically, a microservice keeps, ahead of time, a subset of the data it requires from other microservices, so that it doesn't have to call them at run time. This data is managed through events: when another microservice is updated, it fires an event with an id as context, which can be consumed by any microservice that has an interest in it. This way the data stays in sync (of course, it requires some form of failure handling for events). It seems like a lot of work, but it helps with any future decisions regarding consolidation of data from different microservices. Our microservice always has all the data available locally, so it can process any request without a synchronous dependency on other services.
In your case, i.e. for showing names with a message, you can keep an extra property for names in Service (B). Whenever a name is updated in Service (A), it fires an update event carrying the id of the updated name. Service (B) then consumes the event, fetches the relevant data from Service (A), and updates its own database. This way, even if Service (A) is down, Service (B) keeps functioning, albeit with some stale data that will eventually become consistent when Service (A) comes back up, and you will always have a name to show in the UI.
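A sketch of that event handler in Service (B); the event bus wiring is omitted, and ProfileClient and MessageRepository are hypothetical names standing in for a REST client to Service (A) and Service (B)'s local store:

// Consume event -> fetch fresh data from Service (A) -> update local denormalized copy.
class UserUpdatedHandler {
    private final ProfileClient serviceA;
    private final MessageRepository localStore;

    UserUpdatedHandler(ProfileClient serviceA, MessageRepository localStore) {
        this.serviceA = serviceA;
        this.localStore = localStore;
    }

    // Invoked by the messaging infrastructure when Service (A)
    // fires a "user updated" event carrying only the user id.
    void onUserUpdated(int userId) {
        String name = serviceA.fetchName(userId); // pull the fresh name
        localStore.updateUserName(userId, name);  // refresh the local copy
    }
}

interface ProfileClient { String fetchName(int userId); }
interface MessageRepository { void updateUserName(int userId, String name); }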
https://enterprisecraftsmanship.com/2017/07/05/how-to-request-information-from-multiple-microservices/
You might want to perform response aggregation strategies on your API gateway. I've written an article on how to do this with ASP.NET Core and Ocelot, but there should be a counterpart for other API gateway technologies:
https://www.pogsdotnet.com/2018/09/api-gateway-response-aggregation-with.html
You need to write another service, called an Aggregator, which will internally call both services, get the responses, merge/filter them, and return the desired result. This can easily be achieved in a non-blocking way using Mono/Flux in Spring Reactive.
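A rough sketch of such an Aggregator using Spring WebFlux, resolving all referenced user ids with one bulk call (essentially Method 2, hidden behind the gateway). The service URLs, the bulk /profiles?ids= endpoint, and the DTO shapes are assumptions for illustration:

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class MessageAggregator {
    record Message(int id, String subj, String body, int sender_id, List<Integer> rcpt_ids) {}
    record Profile(int id, String name) {}
    record MessageView(String subj, String body, String senderName) {}

    private final WebClient web = WebClient.create();

    // Fetch messages from Service (B), then resolve every referenced user
    // id with a single bulk call to Service (A), and merge without blocking.
    Mono<List<MessageView>> messagesFor(int userId) {
        return web.get()
                .uri("http://service-b/user/{id}/messages", userId)
                .retrieve()
                .bodyToFlux(Message.class)
                .collectList()
                .flatMap(msgs -> {
                    String ids = msgs.stream()
                            .flatMap(m -> Stream.concat(Stream.of(m.sender_id()),
                                                        m.rcpt_ids().stream()))
                            .distinct()
                            .map(String::valueOf)
                            .collect(Collectors.joining(","));
                    return web.get()
                            .uri("http://service-a/profiles?ids={ids}", ids)
                            .retrieve()
                            .bodyToFlux(Profile.class)
                            .collectMap(Profile::id, Profile::name)
                            .map(names -> msgs.stream()
                                    .map(m -> new MessageView(m.subj(), m.body(),
                                                              names.get(m.sender_id())))
                                    .collect(Collectors.toList()));
                });
    }
}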
An API Gateway often does API composition.
But this is a typical engineering problem when you have microservices implementing the database-per-service pattern.
The API Composition and Command Query Responsibility Segregation (CQRS) patterns are useful ways to implement such queries.
Ideally I want to serve out a complete response set in one go (messages and user info).
The problem you've described is what Facebook recognized years ago, and they decided to tackle it by creating an open-source specification called GraphQL.
But how does one even implement this?
It has already been implemented in various popular programming languages, so you can give it a try in the programming language of your choice.
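For a taste of what that looks like, here is a hypothetical GraphQL query against an assumed schema (all field names are illustrative); a single round trip returns the messages with sender and recipient names already resolved:

{
  messages(userId: 1) {
    subj
    body
    sender { name }
    recipients { name }
  }
}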

Inter-microservices Communication using REST & PUB/SUB

This is still a theory in my mind.
I'm rebuilding my backend by splitting things into microservices. The microservices I'm imagining to start off with are:
- Order (stores order details and status of each order)
- Customer (stores customer details, addresses, orders booked)
- Service Provider (stores service provider details, status & location of each service provider, order(s) currently being processed by the service provider, etc.)
- Payment (stores payment info for each order)
- Channel (communicates with customers via email / SMS / mobile push)
I hope to be able to use PUB/SUB to create a message with corresponding data, which can be used by any other microservice subscribing to that message.
First off, I understand the concept that each microservice should have complete code & data isolation (thus, on different instances / VMs); and that all microservices should communicate strictly using HTTP REST API contracts.
My doubts are as follows:
To show a list of orders, I'll use the Order DB to get all orders. Each Order document (I'll be using MongoDB for storage) will have a customer_id foreign key. Now comes the issue of resolving customer_name from customer_id.
If I need to show 100 orders on the page, and going with the assumption that each order has a unique customer_id associated with it, will I need to make 100 REST API calls to get the names for all 100 customer_ids?
Or, is data replication a good solution for this problem?
I am envisioning something like this w.r.t. PUB/SUB: the business center personnel mark an order as assigned and select the service provider to allot to that order. This creates a message on the cross-server PUB/SUB channel.
Then the Channel microservice (which is on a totally different instance/VM) picks up this message and sends a push message and SMS to the service provider's device using the data within the message's contents.
Is this possible at all?
UPDATE TO QUESTION 2: I want the Order microservice to be completely independent of any other microservices that will be built on top of or alongside it. The Channel microservice is an example of a microservice that depends on events taking place within the Order microservice.
Also, please guide me as to which technologies/libraries to use.
What I'll be developing on:
Java
MongoDB
Amazon AWS instances for each microservice.
Would appreciate anyone's help on this.
Thanks!
#1
If I need to show 100 orders and each order has a unique customer_id, will I need to do 100 REST API calls?
No, just make one request with the 100 order_id(s) and return a dictionary of order_id <=> customer_id.
#2
It's a single request
POST
/orders/new
{
"selected_service_provider_id" : "123"
...
}
This can return you an order_id, which you can print locally for the customer, use to track progress, or what have you.
On the server side, you receive an order and process it. Processing can include sending an SMS at some stage. This functionality can be implemented inside the original service that received the request, or as a separate call to another dedicated service.
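To the PUB/SUB part of the question: yes, this is possible. A minimal sketch of the decoupling in Java follows; the in-process Broker below is a hypothetical stand-in for a real message broker (Amazon SNS/SQS, RabbitMQ, Kafka, and so on), and the topic and field names are illustrative:

import java.util.*;
import java.util.function.Consumer;

// Hypothetical in-process broker standing in for a real PUB/SUB system.
class Broker {
    private final Map<String, List<Consumer<Map<String, String>>>> subs = new HashMap<>();

    void subscribe(String topic, Consumer<Map<String, String>> handler) {
        subs.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    void publish(String topic, Map<String, String> event) {
        subs.getOrDefault(topic, List.of()).forEach(h -> h.accept(event));
    }
}

public class OrderAssignedDemo {
    public static void main(String[] args) {
        Broker broker = new Broker();

        // Channel microservice: subscribes and sends the push/SMS.
        broker.subscribe("order.assigned", e ->
                System.out.println("SMS to provider " + e.get("provider_id")
                        + " for order " + e.get("order_id")));

        // Order microservice: publishes without knowing who is listening.
        broker.publish("order.assigned",
                Map.of("order_id", "42", "provider_id", "123"));
    }
}

The Order microservice stays independent: it only emits events, and the Channel microservice (or any future subscriber) reacts to them.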
To your first question: you don't need to do 100 queries, just one with the array of your 100 ids, like the following:
db.collection.find( { _id : { $in : [1,2,3,4] } } );
https://stackoverflow.com/a/7713461/1384539
I know this question is a year old, but I would like to add my answer to the first point.
One option would be to use some form of CQRS and store some of the user details in the Order DB as well when creating an order. That way, when you have to show the list of orders, you already have all the details you need. The order document would then represent a snapshot of the user's state at the moment of the order's creation.
Of course, in case you don't have the user details when storing the order, you just need to make a GET call to the User Service, but that would be 1 call, not 100.
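For instance, the order document might embed the customer fields needed for display, copied at creation time (a sketch; the type and field names are illustrative):

// The order carries a denormalized snapshot of the customer,
// taken when the order was created.
record OrderDocument(String orderId,
                     String customerId,
                     String customerName, // snapshot copied from the User Service
                     String status) {}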

Looking for a RESTful approach to update multiple resources with the same field set

The task: I have multiple resources that need to be updated in one HTTP call.
The resource type, field and value to update are the same for all resources.
Example: I have a set of cars identified by their IDs and need to update the "status" of all of them to "sold".
Classic RESTful approach: use a request URL something like
PUT /cars
with JSON body like
[{id:1,status:sold},{id:2,status:sold},...]
However, this seems to be overkill: status:sold is repeated too many times.
I'm looking for a RESTful way (I mean a way that is as close to the "standard" REST protocol as possible) to send status:sold just once for all the cars, along with the list of car IDs to update. This is what I would do:
PUT /cars
With JSON
{ids:[1,2,...],status:sold}, but I am not sure if this is a truly RESTful approach.
Any ideas?
Also, as an added benefit, I would like to be able to avoid JSON for a small number of cars by simply setting up a URL with parameters, something like this:
PUT /cars?ids=1,2,3&status=sold
Is this RESTful enough?
An even simpler way would just be:
{sold:[1,2,...]}
There's no need to have multiple methods for larger or smaller numbers of requests - it wastes development time and has no notable impact on performance or bandwidth.
As far as being RESTful goes, as long as it's easily decipherable by the recipient and contains all the information you need, there's no problem with it.
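As an illustration, a handler for the id-list-plus-shared-status payload might look like the following Spring MVC sketch; the endpoint shape, the DTO, and the setStatus persistence call are all assumptions:

import java.util.List;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CarsController {
    // Matches the proposed body: {"ids": [1, 2, ...], "status": "sold"}
    record BulkStatusUpdate(List<Long> ids, String status) {}

    @PutMapping("/cars")
    public void updateStatus(@RequestBody BulkStatusUpdate update) {
        // Apply the same status once to every car in the list.
        update.ids().forEach(id -> setStatus(id, update.status()));
    }

    private void setStatus(long id, String status) {
        // Persistence is omitted in this sketch.
    }
}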
As I understand it, PUT is meant to replace a whole resource, not to write a single property of it. One idea is to simply expose the property as a resource itself.
Therefore: PUT /car/carId/status with body content 'Sold'.
Updating more than one car would then result in multiple PUTs, since a request should only target a single resource.
Another idea is to expose a certain protocol where you build a 'batch' resource:
POST /daily-deals-report/ body content {"sold" : [1, 2, 3, 4, 5]}
Then the system can simply acknowledge the deals being made and update the cars' status itself. This way you create a whole new point of view and enable a more REST-like API than you originally intended.
You should also think about exposing a resource listing all the cars that are available and therefore ready to be sold (i.e. not sold, not needing repairs, and not rented out).
GET /cars/pricelist?city=* -> a list of all cars ready to be sold, including car id and price.
This way a car does not have a status regarding who owns it. A resource is usually independent of its owner (an owner aggregates cars; it is not a composite of them).
Whenever you have difficulty mapping an operation to a resource, your model tends to be flawed by object-oriented thinking. With resources, many relations (parent properties) and status properties tend to be misplaced, since designing resources is even more abstract than thinking in services.
If you need to manipulate many similar objects, you need to identify the business process that triggers those changes and expose that process as a single resource describing its input (just like the daily-deals report).