How does the disconnection/reconnection process affect ejabberd? - xmpp

CentOS 7.3.1611
ejabberd version 17.04
Using ElastiCache (Redis) for session store
Using Amazon RDS (MySQL) for user database
Installed from the official installer
We would like advice about the following situation.
We are carrying out a performance test to check whether we can connect 3,000,000 users to an XMPP cluster we constructed on AWS. There are 10 cluster nodes.
The result is all green: there is no problem keeping 3,000,000 users continuously connected.
But when more than half of the 3,000,000 users try to reconnect, the servers running ejabberd cannot start new sessions, even though there is plenty of CPU and memory capacity.
Does the process of disconnecting and reconnecting interfere with starting new sessions?
We would also like advice on ejabberd settings and Linux settings to improve this situation.
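One direction worth checking: reconnect storms at this scale often exhaust OS-level limits (listen backlog, half-open connection queue, file descriptors, Erlang port limit) long before CPU or memory. A hedged sketch of settings commonly raised for this kind of workload; the values are illustrative starting points, not tuned recommendations:

```
# /etc/sysctl.conf -- illustrative starting points, verify against your kernel docs
net.core.somaxconn = 32768            # accept-queue backlog for listening sockets
net.ipv4.tcp_max_syn_backlog = 32768  # half-open (SYN) backlog for the reconnect burst
net.ipv4.ip_local_port_range = 1024 65535
fs.file-max = 4000000                 # system-wide file-descriptor ceiling

# /etc/security/limits.conf -- per-process descriptor limit for the ejabberd user
ejabberd  soft  nofile  1048576
ejabberd  hard  nofile  1048576

# ejabberdctl.cfg -- Erlang VM limit (ERL_MAX_PORTS caps open sockets per node)
ERL_MAX_PORTS=1048576
```

On the ejabberd side, it may also be worth reviewing the c2s shaper and `max_fsm_queue` in ejabberd.yml: an aggressive traffic shaper can throttle session startup precisely during a mass reconnect.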

Related

PostgreSQL Multi-master Synchronisation

I have a scenario as follows:
One cloud server runs an application with PostgreSQL as the DB
Multiple local servers run the same application with PostgreSQL as the DB
Users may access the cloud server to read/write data
Users may access any of the local servers to read/write data
What I need is synchronisation between all these databases. The synchronisation can happen live while connectivity is available, or immediately after connectivity is restored.
Please guide me with some inputs on where I can start.
Rethink your requirements.
Multimaster replication is full of pitfalls, and it is easy to get your databases out of sync unless you plan carefully. You'd probably be better off with a single master node.
That said, you could look at BDR by 2ndQuadrant, which provides such functionality.

mongo atlas or aws - Internal or External connection

I am currently working on my next project, which runs 100% on Mongo.
My past projects ran on SQL + Mongo, for which I used AWS RDS + AWS EC2 and could connect the two over AWS-internal IPs, which gave me a much faster connection.
Now for Mongo there are a lot of fancy cloud services like mLab and MongoDB Atlas, which are actually cheaper than AWS.
My concern is that moving back to an external DB connection will be slower and consume more network than the internal connection to RDS.
Has anyone experienced such an issue? Maybe the difference isn't as big as I make it out to be, but I need it to be optimized.
This depends on your setup. Many of the "fancy" services also host stuff on AWS, so latency is minimal. Some even offer "private environments" or such, so you can hide your databases from public view.
The only thing left to care about is the amount of network traffic. But that will be your problem regardless of your database host. You can test this relatively easily (e.g. get a trial from one of the providers and test for throughput, or set up your own MongoDB Docker cluster to use as a test) just to get an idea of the performance range you'll be in.
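To make such a test concrete, here is a minimal Python sketch. It only times raw TCP connection setup (not real query throughput), but that is usually enough to see the gap between an in-VPC endpoint and an external one. The hostnames are placeholders you would replace with your candidate endpoints.

```python
import socket
import time

def measure_connect_ms(host, port, samples=20):
    """Time TCP connection setup to host:port and return a list of RTTs in ms.

    A crude stand-in for a driver-level benchmark, but enough to compare
    an internal endpoint against an external one.
    """
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # socket is closed on leaving the with-block
        rtts.append((time.perf_counter() - start) * 1000)
    return rtts

def summarize_rtt(samples_ms):
    """Return min, median (p50), and approximate p99 from RTT samples in ms."""
    if not samples_ms:
        raise ValueError("need at least one sample")
    s = sorted(samples_ms)
    n = len(s)
    p50 = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    p99 = s[min(n - 1, int(n * 0.99))]
    return {"min": s[0], "p50": p50, "p99": p99}

# Example (placeholder host): summarize_rtt(measure_connect_ms("db.example.com", 27017))
```

Comparing the p99, not just the minimum, matters here: the occasional slow handshake over the public internet is exactly what an internal connection avoids.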

Recently I can't use MongoDB (Amazon EC2)

I have MongoDB running on AWS. The setup has been up and running for several months, and the application server (also on AWS) has been running the same code for several months.
On January 5, 2017 I received the alert notification below from Amazon.
Since then I can't use MongoDB: mongod started failing. I tried changing the inbound rule for port 27017 in the Security Group, but the result was the same.
The notification from Amazon is very generic, so I don't know what exactly I should do.
We are aware of malware that is targeting unauthenticated MongoDB databases which are accessible to the Internet and not configured according to best practices. Based on your security group settings, it appears that your EC2 instances, listed below, may be running a MongoDB database which is accessible to the Internet. If you are not using MongoDB, you may safely ignore this message.
We suggest you utilize EC2 Security Groups to limit connections to your MongoDB databases to only addresses or other security groups that should have access.
...
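Concretely, Amazon's suggestion amounts to closing port 27017 to the world and allowing only your application hosts. A hedged sketch with the AWS CLI; the security group ID and CIDR are placeholders for your own values:

```
# sg-0123456789abcdef0 is a placeholder for your instance's security group,
# 10.0.1.0/24 a placeholder for the subnet your application servers live in.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 27017 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 27017 --cidr 10.0.1.0/24
```

Independently of the firewall, enabling access control in mongod.conf (`security.authorization: enabled` in the YAML config) closes the hole this class of malware exploits: an unauthenticated, internet-reachable MongoDB.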

Postgres Databases synchronization

I have to deploy Odoo with a Postgres database on the Amazon cloud. I can do that by simply setting up an EC2 server and setting up Odoo on it. In case the internet is down, I want to be able to access the same services and the already-saved data offline as well. For that, I plan to install Odoo with a Postgres database on my local machine (in my office) too. Then I can access Odoo services from anywhere (using the cloud) when internet is available, but if the internet is down, I must be able to use the locally installed Odoo to get the same services. For that purpose I need two things:
Both databases should be exact replicas of each other.
On recovering from an internet outage, I want the changes made in my local (office) database reflected in the database on the Amazon cloud.
I am new to this stuff, so kindly suggest the best possible approach (architecture) for this scenario.

Mongodb sharding is NOT recovered after power off accident

I'm running 4 VMs (CentOS) on a single machine (Windows 2008 R2). The 4 VMs are set up as below:
1 mongos
1 mongo config server
2 mongod instances as shard servers
OK, everything was fine before a power-off accident. When the power came back, I manually rebooted all the VMs and found that the sharding configuration was gone. I mean, the mongos can talk to the config server, but somehow the sharding data is lost and it shows the database is not sharded.
I set up sharding based on the documents on the MongoDB website (e.g. running some commands in the mongo shell to enable sharding for the DB and each collection). Do I need to run all the mongo shell commands again after rebooting? Or is it supposed to recover automatically once sharding is enabled?
Thanks.
Once you've established a sharded cluster, it is certainly supposed to stay configured, even if servers fail, and even if they all fail at the same time. Restarting the servers should bring everything up just the way it was before the outage. Based on your description, it is difficult to say what might have gone wrong; a dump of the config database and the log files of all the affected servers would be necessary to analyze what happened. This should perhaps be filed as a ticket with MongoDB support.
(It is, by the way, recommended to run three config servers rather than a single one, for availability reasons. But even so, a single config server should recover just fine after having failed; the three-server recommendation only ensures that a live config server remains even if one of them fails.)
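For reference, a hedged sketch of both steps: dumping the config database for analysis, and pointing mongos at three config servers. Hostnames, port, and the replica-set name are placeholders, and the mongos syntax depends on your MongoDB version:

```
# Dump the config database for analysis (host and port are placeholders).
mongodump --host cfg1.example.com --port 27019 --db config --out ./config-dump

# mongos with three config servers:
mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019                 # 3.0 and earlier
mongos --configdb configReplSet/cfg1:27019,cfg2:27019,cfg3:27019   # 3.2+ (config servers as a replica set)
```

Comparing the collections in the config dump (`databases`, `collections`, `chunks`) against what `sh.status()` reports would show whether the metadata was actually lost or merely not being read.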