(Socket.io) How should I optimize the number of Namespaces?

I am planning on using socket.io to create an online app that allows users to chat via video, and saves each video call/session in a database. (Audio for each of these sessions will be separately available to download as well.)
Because there could be multiple video calls/sessions happening at once, I want to separate the data for each session. Socket.io offers both Namespaces and Rooms for this, but I am unsure which is the better fit for my problem.
Namespaces, according to the documentation, are "a useful feature to minimize the number of resources (TCP connections) and at the same time separate concerns within your application by introducing separation between communication channels."
Is it best to have 1 namespace per room/call/session?
Or would it be best to limit TCP connections by creating a rule that creates a new namespace if/when:
Let MAX_CLIENTS_PER_NAMESPACE be the maximum number of sockets/clients we want in a namespace.
Say the last (smallest) namespace currently has N clients (i.e. adding up the clients from all the rooms in that namespace gives N), and the new room/call/session to be created has M clients. Create a new namespace if M + N >= MAX_CLIENTS_PER_NAMESPACE.
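For illustration only, here is a minimal sketch of that rule on a socket.io (v4) server. The namespace naming scheme, the MAX_CLIENTS_PER_NAMESPACE value, and the helper functions are hypothetical; io.of(), namespace.sockets, rooms, and socket.join() are the real socket.io features being compared:

import { createServer } from "http";
import { Server } from "socket.io";

// Hypothetical threshold from the question: the max sockets we want per namespace.
const MAX_CLIENTS_PER_NAMESPACE = 100;

const httpServer = createServer();
const io = new Server(httpServer);

// Namespaces created so far, newest last (the "last/smallest" namespace).
const namespaceNames: string[] = ["/calls-0"];

// Each session is a room inside whichever namespace the rule picks.
function attachSessionHandlers(nsName: string): void {
  io.of(nsName).on("connection", (socket) => {
    socket.on("join-session", (sessionId: string) => {
      socket.join(sessionId); // rooms scope events to one call/session
    });
  });
}

// Decide where a new session with M expected participants should live.
// Creates a new namespace only when M + N >= MAX_CLIENTS_PER_NAMESPACE.
function namespaceForNewSession(m: number): string {
  const lastName = namespaceNames[namespaceNames.length - 1];
  const n = io.of(lastName).sockets.size; // N = clients currently in that namespace (v4 API)
  if (m + n >= MAX_CLIENTS_PER_NAMESPACE) {
    const next = `/calls-${namespaceNames.length}`;
    namespaceNames.push(next);
    attachSessionHandlers(next);
    return next;
  }
  return lastName;
}

attachSessionHandlers("/calls-0");
httpServer.listen(3000);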

How to effectively use Worker, WorkflowClient

Product Use Case - Our product has a typical use case where we will have n users. Each user will have n workflows, and each workflow can be run at any time (n times).
I hope this is a typical use case of any workflow product.
Can I use a domain to differentiate users (i.e., create a domain per user)?
Can I create one WorkflowClient per user to serve all of that user's workflow executions, or should I create one WorkflowClient per request? Which is the recommended approach?
What is the recommended approach for creating Worker objects to poll a task list?
Please bear with me if I have asked anything that doesn't make sense.
Can I use a domain to differentiate users (i.e., create a domain per user)?
Yes, especially when these users work in different teams or products. Using different domains will keep workflow names/IDs from conflicting with each other, and also assigns each domain an independent set of quotas for managing traffic.
Can I create one WorkflowClient per user to serve all of that user's workflow executions, or should I create one WorkflowClient per request? Which is the recommended approach?
Use one WorkflowClient for each domain, but let all WorkflowClients on the same instance share the same TChannelService to save TCP connections.
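The Cadence clients are Java/Go, so purely as a language-neutral illustration of the structure this answer describes (one shared low-level connection, one WorkflowClient per domain on top of it), here is a hypothetical TypeScript-style sketch. None of these type or function names are the real Cadence API; they only mirror the TChannelService/WorkflowClient split mentioned above:

// Hypothetical stand-ins for the real Cadence client classes.
interface TChannelService { host: string; port: number; } // one TCP connection to the Cadence frontend
interface WorkflowClient { domain: string; service: TChannelService; }

function newTChannelService(host: string, port: number): TChannelService {
  return { host, port }; // placeholder: open one connection and reuse it process-wide
}

function newWorkflowClient(service: TChannelService, domain: string): WorkflowClient {
  return { domain, service };
}

// One shared service (TCP connection) per process...
const sharedService = newTChannelService("cadence-frontend", 7933); // host/port are illustrative

// ...and one WorkflowClient per domain, all reusing that connection.
const clientsByDomain = new Map<string, WorkflowClient>();
function clientFor(domain: string): WorkflowClient {
  let client = clientsByDomain.get(domain);
  if (!client) {
    client = newWorkflowClient(sharedService, domain);
    clientsByDomain.set(domain, client);
  }
  return client;
}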
I would start with a single namespace (domain) for all users. Unless your users directly operate their workflow implementations, it doesn't buy you much to use multiple namespaces.

How do I seed default data into MongoDB (or any database) in a microservice architecture?

I have a use case where there are multiple microservices, and one of them deals with roles and resources (let's call this microservice A). Resources are just endpoints.
A maintains a collection (let's call it X) to store all the resources from different microservices. For each microservice other than A, I would like to store all of its resources (endpoints) in X the first time that microservice boots up.
I am thinking of having a json file with all the resources in each microservice and calling A's endpoint to add resources whenever a microservice boots up.
Is there any idiomatic way to do this?
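A minimal sketch of the boot-time seeding approach described in the question, assuming each microservice ships a resources.json file and that service A exposes a hypothetical POST /resources endpoint (the file name, endpoint, and payload shape are all assumptions, not an established convention):

import { readFileSync } from "fs";

// Assumed shape of the entries in resources.json shipped with each microservice.
interface ResourceEntry {
  service: string; // e.g. "orders"
  method: string;  // e.g. "GET"
  path: string;    // e.g. "/orders/:id"
}

// Called once on boot, before the service starts accepting traffic.
async function seedResources(serviceAUrl: string): Promise<void> {
  const resources: ResourceEntry[] = JSON.parse(
    readFileSync("resources.json", "utf-8")
  );
  // Assumes A's endpoint upserts, so re-running this on every boot stays idempotent.
  const res = await fetch(`${serviceAUrl}/resources`, { // global fetch, Node 18+
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(resources),
  });
  if (!res.ok) {
    throw new Error(`Seeding resources failed with status ${res.status}`);
  }
}

seedResources(process.env.SERVICE_A_URL ?? "http://service-a:8080").catch((err) => {
  console.error(err);
  process.exit(1);
});

One practical detail worth deciding up front: whether A's endpoint upserts (so reboots and redeploys are harmless) or rejects duplicates (so the caller must check first).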
Consider using Viper, so you can set default data from multiple different sources like YAML, JSON, remote config stores such as etcd, live watching of files, among others. You can also configure the call to an endpoint with its remote configuration feature.

AnyLogic Chat Call Center Model

I am trying to model a call center with chat communication and need your thoughts on this scenario. The real-world scenario is that Customer Service Representatives [CSRs] in a chat call center can service multiple customer chats at the same time, based on their capacity [an integer value 1, 2, ...].
"Chat" Agent [source]
"ChatAgent" resource unit with int parameters totalCapacity[default=3]
Using a service, incoming "Chat" from source seizes a "ChatAgent" from a resourcePool[with resourceUnit "ChatAgent"]. In this model, a "ChatAgent" accepts only 1 "Chat" inside the service block.
ResourcePool
On seize: unit.totalCapacity--;
On release: unit.totalCapacity++;
But I couldn't model a scenario where one "ChatAgent" can service multiple customer "Chats" at a time based on its totalCapacity, like in a real chat call center.
Please advise on how I can configure this multiple-agents-to-one-resource seize/delay.
Updated Model
Updated ChatAgent Resource Structure
Thanks,
Shiva
There are many ways of doing this, but the first thing that comes to mind is NOT to use ChatAgent as a resource (at least not the kind you use in a service block), because chats can come at any given time, and you can't have a resource taking many different agents that arrive at different times through the service block...
Instead you can use the following structure in the chatAgent:
The capacity of the resource will define how many agents can enter the restrictedArea block... This structure will exist inside your chatAgent resource.
Your main agent will have the following structure:
When the chat waits for an available chatAgent, you check whether a chatAgent is available with:
chatAgent.beginService.entitiesInside() < chatAgent.capacity
These are the most important details to make it work... now you have to build the model properly.

Dedicate a node to a stream - Security rules

Can anyone let me know how to show a stream only on a specific node?
I have a 2-node cluster, and I would like to dedicate RIM01 specifically to Stream1 and RIM02 to Stream2, meaning any request to those streams, or to apps in those streams, should go to their nodes.
So, if I go to RIM01, Stream2 should be hidden, etc...
Central node
RIM02 -- Repository + Engine
RIM03 -- Repository + Engine + Scheduler
I tried a lot of security rules, like:
Filter : ServerNodeConfiguration_,Stream_
(node.#NodeUse="dev") and (node.#NodeType=stream.#StreamType and !resource.stream.Empty())
or
Filter : ServerNodeConfiguration_,Stream_
((resource.resourcetype = "Nodes" and resource.name="RIM01")) and ((resource.name="test"))
but none of them work :/
Thanks
So, at present, load balancing in Qlik Sense applies to Apps, not Streams. Load Balancing routes apps to servers, whereas security rules govern stream visibility. And, unfortunately, there is not a clean mechanism to use node meta-data in security rules. All in all, there isn't a solution for hiding a stream on a given server.
I have the same issue. You can designate that the apps are only readable on a single node, so depending on how your users' stream rights are configured, some users may see an empty stream on the node where the app cannot be accessed.
There's some interesting stuff happening with the multi-cloud capability, where the concept of streams is now collections, which gives lots more flexibility around this type of thing. Alas, the QEFE capability has only just arrived with June 2018, and access is limited to certain use cases/customers.

What are "Included Services" in a Service useful for?

I have a custom profile for a proprietary device (my smartphone app will be the only thing communicating with my peripheral) that includes two simple services. Each service allows the client to read and write a single byte of data on the peripheral. I would like to add the ability to read and write both bytes in a single transaction.
I tried adding a third service that simply included the two existing single-byte services, but all that appears to do is assign a UUID that combines the UUIDs of the existing services, and I don't see how to use the combined UUID since it doesn't have any Characteristic Values.
The alternatives I'm considering are to make a separate service for the two bytes and combine their effects on my server, or to replace all of this with a single service that contains the two bytes along with a boolean flag for each byte indicating whether or not the associated byte should be written.
The first alternative seems overly complicated and the second would preclude individual control of notifications and indications for the separate bytes.
Is there a way to use included services to accomplish my goals?
It's quite an old question, but in case anyone else comes across it, I'll leave a comment here.
There are two parts here. One is a late answer for Lance F: you had a wrong understanding of the BLE design principles. Services are defined at the host level of the BLE stack, and you considered your problem from the application-level point of view, wanting an atomic transaction to provide you with a compound object of two distinct entities. Otherwise, why would you have defined two services?
The second part is an answer to the actual question taken as quote from "Getting Started with Bluetooth Low Energy" by Kevin Townsend et al., O'Reilly, 2014, p.58:
Included services can help avoid duplicating data in a GATT server. If a service will be referenced by other services, you can use this mechanism to save memory and simplify the layout of the GATT server. In the previous analogy with classes and objects, you could see include definitions as pointers or references to an existing object instance.
This is an update of my answer to clarify why there is no need for included services in the problem stated by Lance F.
I am mostly familiar with BLE use in medical devices, so I will briefly sketch the SIG-defined Glucose Profile as an example to draw some analogies with your problem.
Let's imagine a server device which has the Glucose Service with 2 defined characteristics: Glucose Measurement and Glucose Measurement Context. A client can subscribe to notifications of either or both of these characteristics. Some time later, the client device can change its subscriptions by simply writing to the Client Characteristic Configuration Descriptor of the corresponding characteristic.
The server also has a special mandatory characteristic, the Record Access Control Point (RACP), which is used by a client to retrieve or update the glucose measurement history.
If a client wants to get the number of stored history records, it writes to the RACP { OpCode: 4 (Report number of stored records), Operator: 1 (All records) }. Then the server sends an indication from the RACP: { OpCode: 5 (Number of stored records response), Operator: 0 (Null), Operand: 17 (some number) }.
If a client wants to get specific records, it writes to the RACP { OpCode: 1 (Report stored records), Operator: 4 (Within range of, inclusive), Operand: [13, 14] (for example, records 13 and 14) }. In response, the server sends the requested records one by one as notifications of the Glucose Measurement and Glucose Measurement Context characteristics, and then sends an indication from the RACP characteristic to report the status of the operation.
So Glucose Measurement and Glucose Measurement Context are your Mode and Rate characteristics; then you also need one more control characteristic, an analog of the RACP. Now you need to define a number of opcodes, operators, and operands. Create whichever structure suits you best, for example: OpCode: 1 - update, Operator: 1 - Mode only, Operand: the actual value. A client writes this to the control point characteristic. The server gets notified on the write, interprets it, and acts in the way defined by your custom profile.
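As a rough illustration of that last paragraph, here is a sketch of how such a custom control-point command could be encoded on the client side. The opcode and operator values are hypothetical examples, and only the RACP-like { OpCode, Operator, Operand } layout is taken from the answer above; the actual GATT write call depends on your platform stack:

// Hypothetical opcodes/operators for the custom control point described above.
enum OpCode { Update = 1 }
enum Operator { ModeOnly = 1, RateOnly = 2, Both = 3 }

// Encode a control-point command as the byte payload to write to the characteristic.
function encodeControlPointCommand(
  opCode: OpCode,
  operator: Operator,
  operand: number[] // e.g. [newMode] or [newMode, newRate]
): Uint8Array {
  return Uint8Array.from([opCode, operator, ...operand]);
}

// Example: update Mode and Rate in a single write to the control point characteristic.
const payload = encodeControlPointCommand(OpCode.Update, Operator.Both, [0x01, 0x10]);
// With Web Bluetooth this would be roughly: await controlPointCharacteristic.writeValue(payload);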