Syncing and mirroring data between two cPanel servers automatically

I have two servers and both are working fine.
How do I sync all my data from one server to the other (or to backup/remote storage)?
I also want to know: if one of my servers goes down due to heavy load, how can I switch to the second server instantly, and what is the role of DNS in this? If we switch to the other server we also have to change the DNS for that website, so how do we overcome this?

You can check out the Cloudflare load balancer.
Architecturally you have two problems to solve:
Load balancing (how clients are routed to one of the servers). This sometimes involves DNS settings, but because Cloudflare hosts your DNS as well, you are covered.
Synchronization: files and database sync between the hosting accounts. Here there is no standard way to go, especially because you are hosted using cPanel.
DATABASE:
You can't use master-master or master-slave database replication mechanisms like Galera Cluster provides.
Your best bet is to have a cron job that exports the database from one server to the other (using mysqldump - basically exporting and then importing):
on live:
mysqldump -u userName -p yourLiveDatabaseName > live_database_export.sql
on the hot backup (your other account):
mysql -u username -p yourOtherServerDatabaseName < live_database_export.sql
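If the two accounts can reach each other over SSH, the dump and import can be combined into one scheduled pipeline. A minimal sketch, assuming key-based SSH access and credentials stored in ~/.my.cnf on both sides; all names are placeholders:
# hypothetical nightly cron entry on the live server: dump, stream over SSH, import on the backup
0 3 * * * mysqldump yourLiveDatabaseName | ssh backupuser@backup.example.com "mysql yourOtherServerDatabaseName"
Keeping the credentials in ~/.my.cnf avoids exposing passwords on the command line.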
FILES:
If you have SSH access use rsync.
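A minimal rsync sketch, assuming key-based SSH access between the accounts (paths and hostnames are placeholders):
# mirror the web root to the hot backup; -a preserves permissions and timestamps, --delete removes files that disappeared from the source
rsync -az --delete /home/liveacct/public_html/ backupuser@backup.example.com:/home/backupacct/public_html/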
Otherwise you may need to invent something.
For instance, you can check the cPanel API regarding account transfers - that will take care of the database as well: https://api.docs.cpanel.net/openapi/whm/operation/create_remote_user_transfer_session/
As a remark - you are not in the best position to do HA with two cPanel shared accounts. What I usually do is use virtual machines that are synced at the hypervisor level.


Incremental data from PostgreSQL

I have a number of identical local PostgreSQL databases (identical in structure - not data) on several laptops that have intermittent access to the internet. Records are being added to each DB daily. So branches A, B, and C each have a local PostgreSQL database, and I would like all records from A, B, and C, in each table, in a cloud database. The A, B, and C data is also separate - there is no overlap - A doesn't change B or C, etc. There are no duplicated unique keys.
NEED: I would like to collect all this data in a cloud-based database by adding the daily incremental data to a single cloud database, so I can query the whole consolidated data set using SQL and pull reports as needed.
Please can anyone point me in the right direction?
Thanks
It sounds like you want logical replication from each laptop to the cloud server. The problem there might be that contact must be made by the replica to each of the masters, so when your laptops are online they would need predictable IP addresses so that they can be reached.
Maybe the best way around this is a reverse SSH tunnel. On the central replica, you would tell it to subscribe to a publication hosted on some non-standard port on localhost, with a different port reserved for each laptop: for example, 9997, 9998, and 9999.
Then when each laptop has connectivity, it could run something like:
ssh rajb@centralserver.example.com -R 9999:localhost:5432 -f -N -T
This establishes an SSH connection to the central server (requiring a password, a private key, or however you have SSH set up) and instructs the central server that whenever someone connects to port 9999 on it, that connection should really be sent back over the SSH tunnel and hooked up to port 5432 (the default PostgreSQL server port) of the laptop.
For initially setting things up and debugging, you might want to omit the -f -N -T. That way, in addition to setting up the tunnel, you also get an interactive ssh session you can use for monitoring things.
Once the central service notices the connection is available, it will start downloading changes since the last time it could connect. When there is no connection, you will get a lot of nuisance messages to the log file as it checks each server every ~5 seconds to see if it is available.
From each laptop's perspective, the connection is coming from within, so the replication connection will use whatever authentication is set up for 127.0.0.1 or ::1, not the authentication set up for the actual remote IP.
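To make the setup concrete, here is a minimal sketch of the publication and subscription, assuming the tunnel above is up; the database, publication, subscription, and user names are all placeholders:
# on each laptop (the publisher side):
psql -d branchdb -c "CREATE PUBLICATION branch_pub FOR ALL TABLES;"
# on the central cloud server (the subscriber side), one subscription per laptop,
# each pointing at that laptop's reserved tunnel port on localhost:
psql -d clouddb -c "CREATE SUBSCRIPTION branch_a_sub CONNECTION 'host=localhost port=9999 dbname=branchdb user=replicator' PUBLICATION branch_pub;"
Note that logical replication copies data changes only: the matching table definitions must already exist on the subscriber, and the replicating role needs the REPLICATION attribute on each laptop.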

How to configure a PostgreSQL database over an SSH tunnel in JMeter

I am using JMeter to test an application which uses PostgreSQL. I can connect to the database using the SSH tunnel provided by the database application.
Can someone please tell me how to do this using JMeter? I do not see any SSH tunnel option in JMeter's database connection config element.
You could use port forwarding, as explained in this answer:
https://stackoverflow.com/a/1968446/460802
I don't think you should be load testing the database directly; your load test should simulate real-life usage of the application under test. So instead of testing the database, you should focus on the application itself and treat it like a black box - my general recommendation is to reconsider the approach.
If you have already performed normal load testing, identified that the database is the bottleneck, and would like to load test the database separately, then performance testing it over an SSH tunnel is not the best idea either: the tunnel itself might become the next bottleneck, due to the nature of the TCP protocol and the considerable CPU footprint required for encrypting and decrypting the data sent over SSH. So I would recommend talking to your network administrators and asking them either to temporarily open the PostgreSQL network port to the machine(s) you're running JMeter from, or to give you access to machine(s) where you can install JMeter with direct access to the database (preferably in the same subnet / physical location, otherwise you might suffer from high latencies).
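As an illustration of what opening the port could look like, a minimal sketch using ufw on the database host, where 10.0.0.5 stands in for the JMeter machine (the address is a placeholder); PostgreSQL's listen_addresses and pg_hba.conf would also need to allow the remote host:
# allow the JMeter load generator to reach PostgreSQL directly
ufw allow from 10.0.0.5 to any port 5432 proto tcp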
If for any reason the above instructions are not applicable for you, you can use SSH local forwarding to map the remote PostgreSQL port onto a local port; the relevant command would be:
ssh -L 2345:localhost:5432 username@your_postgresql_server
Once done, you should be able to connect to the PostgreSQL instance as if it were installed locally on port 2345, e.g.:
postgres://localhost:2345/your_database
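In JMeter's JDBC Connection Configuration element, the tunnelled endpoint is then entered as an ordinary JDBC URL. A minimal sketch of the relevant fields (the database name and credentials are placeholders):
# values for the JDBC Connection Configuration element
Database URL: jdbc:postgresql://localhost:2345/your_database
JDBC Driver class: org.postgresql.Driver
The PostgreSQL JDBC driver jar must also be present in JMeter's lib directory.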

Rocket.Chat database connection based on subdomain

We have hosted Rocket.Chat on AWS, and I have two questions; I'm not sure whether these are possible. I couldn't find anything in the docs.
Separate the database and application servers from each other
Connect to a specific database based on the subdomain in the URL
Any thoughts?
Cheers
You can definitely run your MongoDB servers separately from your Rocket.Chat servers.
To route based on domain, you would simply have to run a Rocket.Chat instance for each subdomain you wish to have Rocket.Chat on.
Then, when starting the instances for those domains, include environment variables like:
# subdomain1
PORT=3001
MONGO_URL=mongodb://ip-to-mongo-host:27017/subdomain1?replSet=rs0
MONGO_OPLOG_URL=mongodb://ip-to-mongo-host:27017/local?replSet=rs0
# subdomain2
PORT=3002
MONGO_URL=mongodb://ip-to-mongo-host:27017/subdomain2?replSet=rs0
MONGO_OPLOG_URL=mongodb://ip-to-mongo-host:27017/local?replSet=rs0
The above, of course, assumes you are running your MongoDB in replica set mode, which I would definitely recommend for Rocket.Chat, especially when you go to scale the instances out to handle additional load.
Then in your reverse proxy, simply route:
subdomain1 -> 127.0.0.1:3001
subdomain2 -> 127.0.0.1:3002
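For completeness, a minimal sketch of starting one of the instances with those variables, assuming a classic tarball deployment where the Rocket.Chat bundle is launched with node (the path, domain, and ROOT_URL value are placeholders):
# hypothetical launch of the subdomain1 instance
cd /opt/rocketchat-subdomain1/bundle && \
  PORT=3001 ROOT_URL=https://subdomain1.example.com \
  MONGO_URL=mongodb://ip-to-mongo-host:27017/subdomain1?replSet=rs0 \
  MONGO_OPLOG_URL=mongodb://ip-to-mongo-host:27017/local?replSet=rs0 \
  node main.js
In practice you would wrap each instance in a systemd unit or a process manager rather than starting it by hand.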

Should I secure my MongoDB Database?

I am setting up two computers to run a web application: web-host hosts a MongoDB database and a Node.js web server, while worker runs some more demanding processes and populates the database. Using an SSH tunnel from worker, web-host:27017 is accessible as localhost:9999 on worker. web-host:80 has been set up to be accessible at http://our.corporate.site/my_site/.
At the moment MongoDB has no authentication on it - anything that can contact web-host:27017 can read or write anything to the database.
With this setup, how paranoid should I be about authenticating requests to MongoDB? The answers to this question seemed to suggest not very. Considering access is only possible from localhost, it seems about as secure as the local file system. In MySQL I usually have a special 'web' user with limited privileges to limit the damage of an injection attack in case I make a mistake sanitizing input; however, MongoDB seems less vulnerable to injection (or at least easier to sanitize) than MySQL.
Here's the issue: if you do set up Mongo authentication, you are going to need to store the keys on the machine that accesses it.
So assuming that web-host:80 is compromised, the keys are also vulnerable.
There are some mitigation processes you can use to secure your environment, but there is no silver bullet if an attacker gains root access to your environment.
First, I would consider putting MongoDB on a separate machine on a private internal network that can only be accessed by machines in a DMZ (the part of the network where machines can communicate with both your internal network and the outside world).
Next, assuming you are running a Linux-based system, you should be able to use AppArmor or SELinux to limit which processes are allowed to make outbound network requests. In this case only your webapp process should be able to initiate network requests such as connecting to your Mongo database.
If an attacker was able to get non-root access on your machine, the SELinux/AppArmor system policy would prevent them from initiating a connection to your database from their own script.
Using this architecture, you should be more secure than simply augmenting your current architecture with authentication. In a choice between SELinux and AppArmor, I would use SELinux, since it was much more mature and had much more granular control the last time I checked.

Can I create a remote server with MongoDB? How?

My question, to be more clear, is how to create a server with MongoDB on cloud hosting (for example) and access it from another server.
Example:
I have a mobile app.
I host my MongoDB on cloud hosting (Ubuntu).
I want to connect my app to the DB on the cloud server.
Is it possible? How?
I'm just getting into this, and my question is exactly about setting up a MongoDB server in such a way that I can access it remotely.
Outside of "localhost"? Different from all the tutorials I've seen.
From what you are describing, I think you want to implement a 2-Tier-Architecture. For practically all use cases, don't do it!
It's definitely possible, yes. You can open up the MongoDB port in your firewall. Let's say your computer has a fixed IP or a fixed name like mymongo.example.com. You can then connect to mongodb://mymongo.example.com:27017 (if you use the default port). But beware:
1) Security: You need to make sure that clients can only perform the operations you want to allow, e.g. using MongoDB's integrated authentication; otherwise some random script kiddie will steal your database, delete it, or fill it with random data. Many servers, even if they don't host a well-known service, get attacked thousands of times per day. Also, you probably want to encrypt the connection so people can't spy on it. And to make it all worse, you will have to store the database credentials in your client app, which is practically impossible to do in a truly secure way.
2) Software architecture: There is a ton of arguments against this architecture, but 1) alone should be enough. You never want to couple your client to the database, be it because of data migrations, software updates, security considerations, etc.
3-Tier
So what should you do instead? Use a 3-Tier-Architecture: host a server of some kind on mymongo.example.com that then connects to the database. That server could be implemented with nginx/Node.js, IIS/ASP.NET, Apache/PHP, or whatever. It could even be a plain old C application (like many game servers).
The MongoDB instance can still reside on yet another machine, but when you use a server, the database credentials are only known to the server, not to all the clients.
Yes, it is possible. You would connect to MongoDB using the IP address of your host, or preferably its fully qualified hostname, rather than "localhost". If you do that, you should secure your MongoDB installation, as otherwise anyone would be able to connect to your MongoDB instance. At an absolute minimum, enable MongoDB authentication. You should read up on MongoDB Security.
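As a sketch of that absolute minimum, assuming the classic mongo shell and a config file at /etc/mongod.conf (the user, password, and database names are placeholders):
# create an application user limited to one database
mongo admin --eval 'db.createUser({user: "webapp", pwd: "use-a-strong-password", roles: [{role: "readWrite", db: "myappdb"}]})'
# then enable authorization in /etc/mongod.conf (security: authorization: enabled) and restart
sudo systemctl restart mongod
You would also typically set net.bindIp so mongod only listens on the interfaces that actually need to reach it.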
For a mobile application, you would probably have some sort of application server in front of MongoDB, e.g. your mobile application would not be connecting to MongoDB directly. In that case only your application server would be connecting to MongoDB, and you would secure MongoDB accordingly.