Jitsi server hardware requirements in a test environment (server)

We are implementing secure videoconferencing/chat using Jitsi. We could not find any hardware requirements for a Jitsi server. Could you please share your thoughts regarding the hardware requirements for a Jitsi server in test as well as production environments?
Thanks,
Syed

I am using https://github.com/matrix-org/docker-jitsi on a free-tier EC2 instance.
With 1 active conference (8 participants), it didn't seem to spike resource consumption: CPU usage stayed at roughly 0.0 and used RAM was about 450 MB.
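If you want to reproduce that kind of measurement on your own test box, a quick way is to sample host CPU and memory while a conference is running. Here is a minimal sketch using psutil; the 5-second interval and output format are just placeholders:

```python
import time
import psutil  # third-party: pip install psutil

# Sample host CPU and RAM every 5 seconds while a test conference is running.
while True:
    cpu = psutil.cpu_percent(interval=None)            # CPU % since the last call
    used_mib = psutil.virtual_memory().used / 1024**2  # bytes -> MiB
    print(f"cpu={cpu:.1f}% ram_used={used_mib:.0f}MiB")
    time.sleep(5)
```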

The hardware requirements will depend on the number of users you have. From what I've seen, Jitsi does not require huge resources to run smoothly.
According to this Jitsi Videobridge Performance Evaluation:
"On a plain Xeon server that you can rent for about a hundred dollars, for about 20% CPU you will be able to run 1000+ video streams using an average of 550 Mbps!"

To size the server, you need an idea of how many simultaneous conferences will run and how many participants each conference will have.
Another important parameter is how many users enable their video and audio streams, and what their network bandwidth is. Based on that, you can decide the server requirements.
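As a back-of-the-envelope illustration of that sizing exercise, here is a rough sketch. The per-stream bitrate and the conference/participant counts are assumptions, not Jitsi figures, and it ignores the simulcast/last-N optimisations the videobridge actually uses:

```python
# Rough capacity estimate for a selective-forwarding bridge (no simulcast).
conferences = 10      # simultaneous conferences (assumed)
participants = 8      # participants per conference (assumed)
bitrate_mbps = 1.5    # average video bitrate per sender (assumed)

# Each sender uploads one stream to the bridge; the bridge forwards it to
# every other participant in the conference.
upstream = conferences * participants * bitrate_mbps
downstream = conferences * participants * (participants - 1) * bitrate_mbps

print(f"~{upstream:.0f} Mbps into the bridge, ~{downstream:.0f} Mbps out")
# With these numbers: ~120 Mbps in, ~840 Mbps out.
```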

Related

What Raspberry Pi would be the best to host these Discord bots?

Because the country is in lockdown and we are learning from home, we can't go to college to use the Raspberry Pis there. I can't spare the money to get one, and I don't have anything I could use to build a project with it either, so I asked if I could host a Discord bot that I'm working on for fun on the Pi. My professor told me to do an analysis of various Pis and various versions of the bot to find out what I could get away with when choosing one to host it. So here is a hypothetical situation:
There are 3 bots: A, a fun bot; B, a moderation bot; C, a fun bot with a DB.
Bot A: Has commands like !blackjack that use reactions and embeds to portray the game (cards are represented by their number values), and various others. It can play music off YouTube using ytdl, has skip, stop, and other commands, supports queues, and can also fetch images and jokes from various sites' APIs with axios.
Bot B: Basic moderation bot, no fun or music commands.
Bot C: Has the same commands as Bot A, except it also connects to MongoDB and stores user data there, so it also has an economy system.
My questions are:
What kind of Raspberry Pi would I need to host each of these bots?
Could I get away with Raspberry Pi Zero for Bot B?
How many servers could the bots be in before they crash, and with how many people?
I understand that it all depends on the dataflow and how many interactions it has to handle, but the more input I can get on this the better.
Note: All of these hypothetical bots are written using Node.js
Note: I write bots in Python, so these estimates may be a little off.
In general, a simple Discord connection does not use very many resources (e.g. moderation commands that are used occasionally). Being in more servers does not by itself require more processing power, but one can assume that a bot in more servers will see an increase in usage.
Making more requests via HTTP and receiving more requests over the gateway will increase resource consumption. Auto-deleting messages may increase resource usage more than expected.
As for bot B(a) (no message filter), you could probably get away with a Raspberry Pi Zero/Zero W for 10-20 servers. Bot B(b) (with a message filter) will require more RAM and CPU power; I would recommend a Raspberry Pi 2 for the word filter.
Writing games using Discord results in many requests for reactions, editing messages, and possibly an AI. I am not sure how the economy works on bot C, but using MongoDB should not take too much additional CPU power. Depending on the number of servers it is in, you may want a faster SD card and more RAM.
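To give a feel for what bot C's economy commands might look like (again, I write bots in Python, so this is a discord.py/pymongo sketch with made-up command and collection names rather than the asker's Node.js code):

```python
import discord
from discord.ext import commands
import pymongo  # third-party: pip install discord.py pymongo

intents = discord.Intents.default()
intents.message_content = True  # needed for prefix commands in discord.py 2.x
bot = commands.Bot(command_prefix="!", intents=intents)

# Hypothetical database/collection names for the economy data.
users = pymongo.MongoClient("mongodb://localhost:27017")["botdb"]["users"]

@bot.command()
async def balance(ctx):
    # One small query per invocation; MongoDB adds little CPU load, but every
    # reply is another request to Discord's API.
    doc = users.find_one({"_id": ctx.author.id}) or {"coins": 0}
    await ctx.send(f"{ctx.author.display_name} has {doc['coins']} coins")

@bot.command()
async def daily(ctx):
    users.update_one({"_id": ctx.author.id}, {"$inc": {"coins": 100}}, upsert=True)
    await ctx.send("Added 100 coins!")

bot.run("YOUR_TOKEN_HERE")  # placeholder token
```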
**For bots A and C, it really depends on how much they are used. A small bot (active use in 1-2 servers) would probably only need 1 GB of RAM. For a larger bot, I would recommend investing in 2+ GB of RAM, especially for bot C. If you are planning on making one of the "fun" bots public, I would recommend at least 4 GB of RAM.**
Alternative Option:
Most small (<10 servers) bots can be run on a decent computer (e.g. dual-core 2 GHz, 8 GB RAM) with no significant performance reduction.
TL;DR:
A Pi Zero will work for bot B(a). Get more RAM and a better processor for bot B(b). For bots A/C, I recommend 2 GB of RAM if private and 4 GB if public, plus a faster processor, especially for bot C. Most Discord bots will not crash unless you are absolutely straining the hardware.
A Raspberry Pi 4 (8 GB) could probably run all three bots at once.
You could maybe get away with a Pi Zero W if you're running a basic bot, but I would recommend a Pi 3 or a Pi 4 for more advanced bots. It can also depend on how much data you are storing. You can try running on the Pi's own hardware or use a repl in Chromium on the Pi.
You would need a high-capacity SD card for DBs and unexpected growth.
If you're using a Pi 4 with more than 2 GB of RAM, you could handle around 75 servers with a very good network connection. With a Pi 3 you can maybe get 40 servers, and with a Pi Zero W around 15. A lot of this depends on the CPU and the network connection, and it assumes each server has around 100 people.
TL;DR - Pi Zero W for basic bots, Pi 3 or 4 for more advanced bots.

How to identify the network performance issue?

I am a little confused about my message server's network bottleneck. I can see that the problem is caused by the large number of network operations, but I am not sure why, or how to identify it.
Currently we are using a GCP VM with 4 cores/8 GB RAM for our message server. Redis and Cassandra run on other servers in the same location. The problem occurs in the network operations to the Redis and Cassandra servers.
I need to handle 3000+ requests at once to save data to Redis and 12000+ requests to the Cassandra server.
My task consumes all my CPU power, and CPU usage drops right after I merge the Redis and Cassandra requests into a kind of batch request. The penalty is that I have to delay saving my data.
What I want to know is how I can determine the network capability of my system. How many requests per second is a reasonable load? From my testing it seems obvious that the bottleneck is the network operations, but I can't prove it, and I don't even know how to estimate reasonable network usage for my system. Are there tools or anything else that can help me confirm the network problem? Or is this just a misconfiguration of my GCP system?
Thanks,
Eric
There is a "monitoring" label in each instance where you can check through graphs values like instance CPU, Network and RAM usage.
But to further check the performance of your instance you should use StackDriver Logging1 and Monitoring2. It stores a lot of information from the internal servers and the system performance. for that you will need to install the agent in the instance. It also stores information about your Load Balancer3, in case you are using one with your web application, which is very advisable since it scale your resources up or down with intelligent Autoscaling.
But in order to test out your network you will need to use some third party tool to overload the network. There are multiple tools to achieve this, like JMeter.
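As a side note on the batching Eric describes, grouping writes into a Redis pipeline is one way to trade a little latency for far fewer round trips. A rough redis-py sketch; the host address and key names are placeholders:

```python
import redis  # third-party: pip install redis

r = redis.Redis(host="10.0.0.5", port=6379)  # placeholder address

def save_batch(messages):
    # One network round trip for the whole batch instead of one per message.
    pipe = r.pipeline(transaction=False)
    for msg in messages:
        pipe.set(f"msg:{msg['id']}", msg["body"])
    pipe.execute()

save_batch([{"id": i, "body": f"payload-{i}"} for i in range(3000)])
```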

How do I set up a server PC to host my website?

So I am interested in server PCs and I want to buy one, and I will choose a very powerful one. But I don't know how to set it up so that the hard disk is connected to the internet and other people can see the site when they type its domain into their browser. I am just looking for advice.
I went back over your question and this thread, and this is what I recommend. From what I understand, you are looking to create a hosting environment for others. Regardless of the platform you select (Linux or Windows), having a beefy machine is going to be key. At a minimum, I would recommend a dedicated server with multiple quad-core processors, 32 GB RAM, 2 or more TB of disk, and provision for backups. If you call, say, Dell or one of the other big server providers, they can custom-create a build that will accommodate your needs. That configuration would be a start; your final build may be beefier according to your needs and budget.

How can I measure stress testing for an iPhone app?

How can I measure stress testing for an iPhone app?
I need stress testing, not performance testing; for example, 100 users accessing the app's database on the server at the same time.
Any help?
Thanks in advance
First, you need to decide if you need to test the client-side (iPhone) app, the server-side code, or both.
Testing ONLY the server side might make this much easier, especially if the app uses HTTP to communicate with the server and exchanges data via a text-based format (XML, JSON, etc.). There are many web load testing tools available which can handle this scenario. Using our Load Tester product, for example, you would configure the proxy settings on your iPhone to point to our software running on a local machine. Then start a recording and use the application. Load Tester will record the messages exchanged with the server. You can then replay the scenario, en masse, to simulate many users hitting your server simultaneously. The process, at a high level, is the same with most of the web load testing tools.
Of course, the requests to the server can't be replayed exactly as recorded - they'll need to be customized to accurately simulate multiple users. How much customization is needed will depend on the kind of data being exchanged, the complexity of the scenario and the ability of the tool to automatically configure dynamic fields (and this is one area where the abilities of the tools vary greatly).
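If you want a very bare-bones version of that record-and-replay idea without a commercial tool, something like the following Python sketch works against an HTTP API. The URL and payload are placeholders, and a real test would vary them per simulated user:

```python
import concurrent.futures
import requests  # third-party: pip install requests

URL = "https://example.com/api/login"        # placeholder endpoint
PAYLOAD = {"user": "test", "password": "x"}  # placeholder body

def one_user(i):
    resp = requests.post(URL, json=PAYLOAD, timeout=10)
    return resp.status_code, resp.elapsed.total_seconds()

# Fire 100 simulated users at the server at (roughly) the same time.
with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(one_user, range(100)))

ok = sum(1 for code, _ in results if code == 200)
print(f"{ok}/100 succeeded, slowest response {max(t for _, t in results):.2f}s")
```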
Hope that helps!
A basic simulation would involve running your unit tests on OS X, using many simultaneous unit test processes (with unique simulated users, and other variables).
If you need more 'stress', add machines; you'll likely end up hitting I/O or network limits on one machine relatively early on.
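A crude sketch of that "many simultaneous processes" approach, with the per-user work stubbed out (replace simulated_user with whatever your unit tests actually exercise):

```python
import multiprocessing
import random
import time

def simulated_user(user_id):
    # Stand-in for one client session; in practice this would call the same
    # code paths your unit tests cover, with unique per-user data.
    time.sleep(random.uniform(0.1, 0.5))
    return user_id

if __name__ == "__main__":
    # 100 concurrent simulated users, each in its own process.
    with multiprocessing.Pool(processes=100) as pool:
        finished = pool.map(simulated_user, range(100))
    print(f"{len(finished)} simulated users completed")
```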

What is the practical/hard limit on socket connections per server?

I have a number of client devices that open socket connections to a service running on a Windows 2008 R2 server. I'm wondering what the hard limit is on the number of concurrent client connections.
According to this article, one hard limit is (was) 16,777,214. The practical limit depends on your application also: for example, if you create a thread per connection, then the practical limit comes from the limitation in the number of threads more than from the network stack. There is also a limit on the number of handles any process may have, and so on.
Assuming you select a sensible architecture for your server, the limit will be memory- and CPU-related. IMHO you'll never reach the hard limit that Martin mentions :)
So, rather than worrying about a theoretical limit that you'll never hit you should, IMHO, be thinking about how you will design your application and how you will test it to determine the current maximum number of client connections that you can maintain for your application on given hardware. The important thing for me is to run your perf tests from Day 0 (see here for a blog posting where I explain this). Modern operating systems and hardware allow you to build very scalable systems but simple day to day coding and design mistakes can easily squander that scalability and so you simply MUST run perf tests all the time so that you know when you are building in road blocks to your performance. You simply cannot go back and fix these kind of mistakes at the end of the project.
As an aside, I ran some tests on Windows 2003 Server with a low spec VM and easily achieved more than 70,000 concurrent and active connections with a simple server based on an overlapped I/O (I/O completion port) based design. See this answer for more details.
My personal approach would be to get a shell of a server put together quickly using whatever technology you decide on (I favour unmanaged C++ using I/O Completion Ports and minimal threads), see this blog posting for more details. Then build a client or series of clients that can stress test the application and keep updating and running the test clients as you implement your server logic. You would expect to see a gradually declining curve of maximum concurrent clients as you add more complexity to your server; large drops in scalability should cause you to examine the latest check ins to look for unfortunate design decisions.
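As a very rough starting point for that "server shell plus test clients" loop (not the unmanaged C++/IOCP design favoured above), here is a minimal asyncio sketch in Python; the port, message size, and 10,000-connection target are arbitrary:

```python
import asyncio

# --- server shell: echo whatever each client sends ---
async def handle(reader, writer):
    while data := await reader.read(1024):
        writer.write(data)
        await writer.drain()
    writer.close()

async def serve():
    server = await asyncio.start_server(handle, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

# --- test client: ramp up connections until something breaks ---
async def ramp(host="127.0.0.1", target=10_000):
    writers = []
    for i in range(target):
        try:
            reader, writer = await asyncio.open_connection(host, 9000)
            writer.write(b"ping")
            await writer.drain()
            await reader.readexactly(4)
            writers.append(writer)  # hold the connection open
        except OSError as exc:
            print(f"failed after {i} connections: {exc}")
            break
    print(f"holding {len(writers)} concurrent connections")

# Run asyncio.run(serve()) on the server box and asyncio.run(ramp()) on a client box,
# then watch how the curve changes as you add real server logic.
```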