How to calculate server compute for Jitsi video calls

I am using Jitsi for audio/video calls. I can see the bandwidth per call, but I also want to calculate the compute cost per call. Zoom offers a subscription model with a limit on the number of users per call and pricing in place. I want to know how such a multi-tenant application is built and how to price it for customers. I am using AWS servers only. Thanks in advance.

Related

Real-time data update with Cloud Firestore via REST API call

I would like to create a Cloud Firestore API while keeping my business rules within Cloud Functions, consume those endpoints from my Angular application, and update the data in real time.
Is it possible to do real-time updates if I create a REST API with Cloud Firestore?
If you're building your REST API on top of Cloud Functions, this will not be possible. Cloud Functions doesn't support HTTP chunked transfer encoding, nor does it support HTTP/2 streaming that would be required to let the client keep a socket open for the duration of the listen. Instead, functions are required to send their entire response to the client at once, with a size less than 10MB, within the timeout of the function (max 9 minutes).
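For completeness, here is a minimal sketch of the usual workaround: let the Angular client listen to Firestore directly for real-time reads, and route writes that need business rules through the Cloud Functions REST API. The collection name ("orders") and the config values are placeholders, not part of the original question.

```typescript
// Sketch: real-time reads via the Firestore client SDK (firebase v9 modular API).
// "orders" and the config values below are placeholders.
import { initializeApp } from "firebase/app";
import { getFirestore, collection, onSnapshot } from "firebase/firestore";

const app = initializeApp({
  apiKey: "YOUR_API_KEY",        // placeholder Firebase web config
  projectId: "YOUR_PROJECT_ID",
});
const db = getFirestore(app);

// The client keeps this listener open; writes that must pass business rules
// can still go through an HTTPS Cloud Function, which then updates Firestore.
const unsubscribe = onSnapshot(collection(db, "orders"), (snapshot) => {
  snapshot.docChanges().forEach((change) => {
    console.log(change.type, change.doc.id, change.doc.data());
  });
});

// Call unsubscribe() when the Angular component is destroyed.
```

The function never has to stream anything: it only validates and writes, and the open listener delivers the change to every connected client.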

Is sails.io efficient enough to build a chat website for some 1000 users online at the same time?

I intend to use sails.io to build a chat website. There will be some 1000 users online at the same time. Is sails.io enough for that? And is there a way to test the performance of a chat website? For a normal website I know JMeter, but for a chat website I know nothing at all.
That will depend mostly on the server you will be using for your service.
Sockets are simply an array of connections. You can have as many as you want (within normal memory usage limits of your server machine).
You can check out this answer for more information on socket costs: What's the maximum number of rooms socket.io can handle?
I am currently using sails.io for a chat product with 2000+ simultaneous users during business hours. Sails' socket.io layer has been holding up pretty well. Nevertheless, I have it prepared for horizontal scaling for when maximum capacity starts to show symptoms. You should too.
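On the load-testing part of the question: there are WebSocket plugins for JMeter, but a small script is often easier to reason about. Below is a rough sketch that opens many socket.io connections from one machine; the URL, event name, and payload are assumptions, and a sails app is normally driven via sails.io.js, so adjust the handshake and events to your setup.

```typescript
// Rough socket load-test sketch using socket.io-client.
import { io } from "socket.io-client";

const SERVER_URL = "http://localhost:1337"; // placeholder
const CLIENTS = 1000;

for (let i = 0; i < CLIENTS; i++) {
  const socket = io(SERVER_URL, { transports: ["websocket"] });

  socket.on("connect", () => {
    // Each simulated user sends a chat message every 5-10 seconds.
    setInterval(() => {
      socket.emit("message", { user: `user-${i}`, text: "hello" });
    }, 5000 + Math.random() * 5000);
  });

  socket.on("connect_error", (err) => {
    console.error(`client ${i} failed to connect:`, err.message);
  });
}
```

Run several of these processes, or several machines, once a single client machine becomes the bottleneck.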

Jitsi server hardware requirements in a test environment

We are implementing secure videoconferencing/chat using Jitsi. We could not find any hardware requirements for a Jitsi server. Could you please share your thoughts regarding the hardware requirements for a Jitsi server in test as well as production environments?
Thanks,
Syed
I am using https://github.com/matrix-org/docker-jitsi on a free-tier EC2 instance.
With 1 active conference (8 participants), it didn't seem to spike resource consumption: CPU usage stayed around 0.0% and used RAM was about 450 MB.
The hardware requirements will depend on the number of users you have. From what I've seen, Jitsi does not require huge resources to run smoothly.
According to this Jitsi Videobridge Performance Evaluation:
"On a plain Xeon server that you can rent for about a hundred dollars, for about 20% CPU you will be able to run 1000+ video streams using an average of 550 Mbps!"
To size the server, we need an idea of how many simultaneous conferences will run and how many participants each conference has.
Another important parameter is how many users enable their video and audio streams, and what their network bandwidth is. Based on that we can decide the server requirements.
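As a back-of-the-envelope illustration of that sizing logic, here is a sketch that plugs assumed numbers into the SFU model (each participant uploads one stream and the bridge forwards it to everyone else) and scales against the reference point quoted above. Every constant is an assumption to replace with your own measurements.

```typescript
// Back-of-the-envelope sizing sketch; all inputs are assumptions.
const conferences = 10;      // simultaneous conferences
const participants = 8;      // participants per conference
const avgBitrateMbps = 0.5;  // average bitrate per forwarded stream

// An SFU forwards each participant's stream to the other (participants - 1) attendees.
const forwardedStreams = conferences * participants * (participants - 1);
const bandwidthMbps = forwardedStreams * avgBitrateMbps;

// Scale CPU linearly from the quoted 1000-streams / 20% CPU reference (rough approximation).
const cpuPercent = (forwardedStreams / 1000) * 20;

console.log(`forwarded streams: ${forwardedStreams}`);            // 560
console.log(`estimated bandwidth: ${bandwidthMbps} Mbps`);        // 280
console.log(`estimated CPU: ~${cpuPercent.toFixed(1)}% of the reference Xeon`);
```

From there, a rough per-call cost is the instance's hourly price divided by the number of conferences it can carry at once, which also speaks to the original AWS costing question.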

Is there a concurrent rate limit to MSFT graph (Outlook mail)?

Currently I am making a small app that will provide users with the ability to mark multiple email messages as "read" in one click.
Unfortunately, the MSFT Graph API does not support multiple update calls, as specified here.
So what I am doing right now is using asynchronous IO server-side to send multiple REST API requests simultaneously.
I know there is a 60 requests/min limit. But is there a simultaneous connection limit as well?
There is a simultaneous connection limit because access to the store is serialized. I would recommend going with at most 4 simultaneous requests (maybe even lower). From the Exchange Store's perspective, all access to the store is serialized irrespective of the app.
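A minimal sketch of what "at most 4 simultaneous requests" can look like with async IO: a small worker pool that drains a queue of message IDs and PATCHes each message's isRead flag. The token handling is a placeholder, and you should verify the endpoint details against the current Graph docs.

```typescript
// Sketch: cap concurrency at 4 in-flight Graph requests.
const GRAPH = "https://graph.microsoft.com/v1.0";
const MAX_CONCURRENT = 4;

async function markRead(messageIds: string[], accessToken: string): Promise<void> {
  const queue = [...messageIds];

  // Start MAX_CONCURRENT workers that each drain the shared queue.
  const workers = Array.from({ length: MAX_CONCURRENT }, async () => {
    while (queue.length > 0) {
      const id = queue.shift();
      if (!id) break;
      await fetch(`${GRAPH}/me/messages/${id}`, {
        method: "PATCH",
        headers: {
          Authorization: `Bearer ${accessToken}`, // placeholder token handling
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ isRead: true }),
      });
    }
  });

  await Promise.all(workers);
}
```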
Microsoft Graph also plans to support batching soon, so when they enable it you will be able to make one call and update the read flag for multiple messages.
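Once batching is available, a single call could look roughly like the sketch below, which assumes the JSON batching format (a $batch endpoint, a requests array, and a per-batch size limit of around 20); verify those details against the current Graph documentation.

```typescript
// Sketch of a JSON-batched "mark as read" call; limits and format are assumptions.
async function markReadBatched(messageIds: string[], accessToken: string) {
  const body = {
    requests: messageIds.slice(0, 20).map((id, i) => ({
      id: String(i + 1),
      method: "PATCH",
      url: `/me/messages/${id}`,
      headers: { "Content-Type": "application/json" },
      body: { isRead: true },
    })),
  };

  const res = await fetch("https://graph.microsoft.com/v1.0/$batch", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`, // placeholder token handling
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  return res.json(); // per-request status codes come back in a "responses" array
}
```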

How can I measure stress testing for an iPhone app?

How can I measure stress testing for an iPhone app?
I need stress testing, not performance testing; for example, 100 users accessing the app's database, which is on the server, at the same time.
Any help?
Thanks in advance
First, you need to decide if you need to test the client-side (iPhone) app, the server-side code, or both.
Testing ONLY the server side might make this much easier - especially if the app uses HTTP to communicate with the server and exchanges data via a text-based format (XML, JSON, etc.). There are many web load testing tools available which can handle this scenario. Using our Load Tester product, for example, you would configure the proxy settings on your iPhone to point to our software running on a local machine, then start a recording and use the application. Load Tester will record the messages exchanged with the server. You can then replay the scenario, en masse, to simulate many users hitting your server simultaneously. The process, at a high level, is the same with most web load testing tools.
Of course, the requests to the server can't be replayed exactly as recorded - they'll need to be customized to accurately simulate multiple users. How much customization is needed will depend on the kind of data being exchanged, the complexity of the scenario and the ability of the tool to automatically configure dynamic fields (and this is one area where the abilities of the tools vary greatly).
Hope that helps!
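If you want to see the shape of such a replay without a commercial tool, here is a bare-bones sketch that fires one recorded-style request per simulated user, all at once. The endpoint, payload, and user count are placeholders; per-user customization is exactly the part the recording/replay tools help with.

```typescript
// Minimal sketch of replaying a recorded API request for many simulated users.
const ENDPOINT = "https://api.example.com/sync"; // placeholder
const USERS = 100;

async function simulateUser(userId: number): Promise<number> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // Customize per-user fields so the server doesn't see 100 identical sessions.
    body: JSON.stringify({ user: `test-user-${userId}`, action: "fetchMessages" }),
  });
  return res.status;
}

async function run() {
  const start = Date.now();
  const statuses = await Promise.all(
    Array.from({ length: USERS }, (_, i) => simulateUser(i))
  );
  const failures = statuses.filter((s) => s >= 400).length;
  console.log(`${USERS} users in ${Date.now() - start} ms, ${failures} failures`);
}

run();
```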
A basic simulation would involve running your unit tests on OS X, using many simultaneous unit test processes (with unique simulated users, and other variables).
If you need more 'stress', add machines - you'll likely end up hitting I/O or network limits on a single machine relatively early on.
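A sketch of that multi-process approach, assuming some runnable test command ("npm test" here is just a stand-in for whatever exercises your client or API code) and a unique simulated user per process:

```typescript
// Sketch: drive many simultaneous test processes from one machine.
import { spawn } from "node:child_process";

const PROCESSES = 20; // assumption: tune to your machine's limits

for (let i = 0; i < PROCESSES; i++) {
  const child = spawn("npm", ["test"], {
    // Each process gets a unique simulated user via an environment variable.
    env: { ...process.env, SIMULATED_USER: `stress-user-${i}` },
    stdio: "inherit",
  });
  child.on("exit", (code) => console.log(`process ${i} exited with ${code}`));
}
```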