How do I set my VPS Webmin/Virtualmin server to show data from MongoDB in the hosted website? - mongodb

This is my first question, I hope I do it right. So:
I developed a MERN website which connects perfectly to a MongoDB database as well as to Amazon S3.
I am currently trying to host it on a Hostinger VPS with Virtualmin and Webmin. The files are in place thanks to FTP working, so the website design shows, but the MongoDB data is missing.
So far:
DNS set properly,
SSH all good,
mongo shell installed on the server through the console; I can see my db and the data,
new user created successfully with the mongo method db.createUser(), attached to my db.
So my question is: what are the next steps to get the data, through the server, to the website?
I'm new to this and I've searched everywhere for several days now without any success, and the hosting support is lost on the matter...
Thanks!

By default, Virtualmin installs the LAMP/LEMP stack. There is no support for MERN/MEAN or Node.js-based applications; you have to configure your server manually through a terminal over SSH.
Follow these instructions:
Apache NodeJS installation
There is no GUI support for Node-based apps, but you can still manage other services for your app, like mail, DNS, firewall and SSL, through Virtualmin and Webmin.
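As a sketch of what that manual configuration can look like: once Node.js is installed and your app is listening on a local port (3000 here is an assumption), an Apache virtual host can reverse-proxy to it. This assumes mod_proxy and mod_proxy_http are enabled (a2enmod proxy proxy_http):

```apache
<VirtualHost *:80>
    ServerName example.com

    # Forward all requests to the Node app on localhost:3000
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
```

After enabling the site, reload Apache for the change to take effect.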

In case it helps anyone, I did succeed in setting up the server; it's quite a bit of work. Here's my config:
I set Nginx to listen for the HTTPS requests from the frontend and pass them to the backend. The config file is called "default", in the sites-available folder, and has the following content:
server {
    listen 80;
    listen 443 ssl;
    root /the/frontend/root/folder;
    server_name _;
    ssl_certificate /the/ssl/.crt/file;
    ssl_certificate_key /the/ssl/.key/file;

    # React app & front-end files
    location / {
        try_files $uri /index.html;
    }

    # Node API reverse proxy
    location /api/ {
        proxy_pass http://localhost:portlistenedbybackend/api/;
    }
}
The React frontend comes with a .env file that is baked into the build. In it I set the URL the frontend sends its requests to (these are then caught by Nginx). Be careful to set this URL to the domain of your website when deploying, so in my case: https://example.com/api
The production process manager pm2 is useful to keep the backend alive at all times, so I installed it and used it for the Node backend. The command to add the backend's main server file (in my case server.js) to pm2 from the console: sudo pm2 start your/serverfile/address
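As an alternative to passing the file path directly, pm2 can read a small config file. A minimal sketch, where the app name, script path and port are placeholders for your own values:

```javascript
// ecosystem.config.js -- minimal pm2 config sketch; name, script
// path and port are placeholders, adjust them to your own setup
module.exports = {
  apps: [{
    name: "mern-backend",
    script: "/var/www/yourapp/server.js",
    env: {
      NODE_ENV: "production",
      PORT: 5000  // the port Nginx proxies /api/ to
    }
  }]
};
```

Start it with pm2 start ecosystem.config.js, then pm2 save and pm2 startup so the backend comes back after a reboot.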
Here's a couple of links that proved very useful to understand how to configure the server :
Applicable to Amazon server but a lot applicable here too : https://jasonwatmore.com/post/2019/11/18/react-nodejs-on-aws-how-to-deploy-a-mern-stack-app-to-amazon-ec2
A guide to log journal in console for debugging :
https://www.thegeekdiary.com/beginners-guide-to-journalctl-how-to-use-journalctl-to-view-and-manipulate-systemd-logs/
For setting up Webmin server : https://www.serverpronto.com/kb/cpage.php?id=Webmin
At first I discarded Webmin and Virtualmin, since all I could find (including from support) was tutorials to set up the server via the console. So bit by bit I set it up that way. Then, finally, I got a tutorial from support to set up the server from Webmin. To this day I can't say whether that made a difference in the structure, but at least it's clean.
Last but not least, here are a couple of console commands I found very useful:
systemctl status theserviceyouwanttosee
systemctl start theserviceyouwanttostart
systemctl stop theserviceyouwanttostop
systemctl restart theserviceyouwanttorestart
Examples of services: nginx, mongod...
Test whether Nginx is set up properly: sudo nginx -t
Reload the Nginx config file after any modification: sudo nginx -s reload
See the last errors logged by Nginx: sudo tail -f /var/log/nginx/error.log
Save the current pm2 process list: pm2 save
Backend execution logs: sudo pm2 logs
Identify the processes still running with mongo (useful if mongod won't restart properly): pgrep mongo
Kill a mongo process to be able to start fresh: kill <process>
List all units known to systemd: sudo systemctl
See all running processes and CPU stats, amongst other things: top
I'm still new to all of this despite my couple of weeks on the subject, so this description can most probably be improved. Don't hesitate to suggest improvements, point out mistakes, or ask questions; I'll do my best to answer.
Cheers!

Related

How do I upload my MEAN app (website application) to DigitalOcean?

I have a question, please can somebody help me out with this:
How do I upload my MEAN app (website application) to DigitalOcean?
You can host your MEAN stack application on DigitalOcean by following the steps listed in this article:
How to Host a MEAN Stack App on Digital Ocean
Creating an SSH Key:
ssh-keygen -t rsa
Registering your Account:
Before we get started, you should have a fully functional Digital Ocean account.
Registering an account is pretty simple.
Here is a walkthrough of the registration process.
Adding your personal details along with your email address.
Verifying your account using the email sent at your provided email address in step one.
Adding your credit card details.
Creating your Droplet:
Once you have registered your account, move to the Droplets section at the dashboard and click the Create Droplet button.
When you create it, choose your OS and click Next.
We are going to use the MEAN 0.5.0 on 20.04 stack.
After that, complete the droplet creation by selecting the region and adding the SSH key generated before.
Next Steps:
Login to your droplet:
Execute the following command to create a remote terminal session to your droplet.
$ ssh -i <path-to-your-key-file> root@<your-droplet-ip>
With your code uploaded to the droplet, check that npm and node are installed on your droplet using:
npm --version
node --version
Install and configure Nginx:
apt install nginx
Next, configure Nginx using the /etc/nginx/sites-available/default file.
server {
    listen 80;
    server_name <your-droplet-ip>;

    location / {
        proxy_pass http://<your-droplet-ip>:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Restart Nginx service by executing the command sudo service nginx restart
Point your browser to http://<your-droplet-ip> and you will see the sample application's welcome page again.
Specifying the IP address only without the need to add port 3000 shows Nginx was setup correctly.
In other words, web browsers forward requests to port 80 by default and since we have Nginx running on port 80 ready to forward requests to the Node application, everything works as expected.
Keeping your Application Alive:
PM2 is a Node process manager that allows you to run your applications as a service in the background which is restarted whenever it goes down.
Deploy your app:
1- Clone the repository into /opt/mean:
$ git clone your-repo-url "mean"
2- Move to the project root.
$ cd /opt/mean
Install NPM modules.
$ npm install
Finally, execute the following command to start your application using PM2.
$ pm2 start server.js
Point your browser to http://<your-droplet-ip> and you should see the application's home page.
I think that can help you to resolve your issue.

Stuck with Configuring Varnish

I successfully installed Varnish etc. via the WHM terminal, then got to the step of configuring Varnish to listen on port 80 for incoming HTTP requests in:
/etc/sysconfig/varnish
I do not know how to do this. I got help from the get-go with the repo, the command lines etc., and all went off without a hitch until I reached the "configure Varnish" section. I got stuck on step 3.
https://support.qualityunit.com/496090-How-to-install-Varnish-with-CPanel-and-CentOS-to-cache-static-content-on-server
From the terminal, type:
sudo nano /etc/sysconfig/varnish
The file opens; look for VARNISH_LISTEN_PORT and change it to 80. Save and close the file and you are done.
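For reference, a sketch of the relevant line in /etc/sysconfig/varnish (6081 is the usual out-of-the-box value, but your file may differ):

```
# /etc/sysconfig/varnish (excerpt)
# Default port is usually 6081; change it to 80 so Varnish
# answers incoming HTTP requests directly
VARNISH_LISTEN_PORT=80
```

After saving, restart Varnish (e.g. sudo service varnish restart) so the new port takes effect.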

Having issues with meteorjs app when running with mongodb on localhost

I am having some issues with the MeteorJs app that I am working on. I am working for a client and we are using his dedicated server for our app's deployment. The server has php installed and is already running apache server (a php app is live on server). The server itself is running a version of CentOS.
I bundled my meteor app and uploaded it on server using my cPanel access (it is not root level access). I also created an ssh key and logged into the server using that ssh access.
I used the export command to set my MONGO_URL to mongodb://localhost:27017/<db-name> (version 2.6.3 of MongoDB is installed on the server) and PORT to 3000. From there I ran the app using the node package "pm2".
Now the issue is that when the app runs it accesses the database for data.
The request is made from client side.
The server receives the request (seen in the live log)
The server fetches data from db and logs it in the terminal.
But then it takes somewhere around 10-15 seconds to send that data back to the client.
There are no extra commands or computation between logging the data fetched from the db and returning it to the client.
But if I change the mongo URI to my instance of MongoLab, everything works fine and there are no delays. My client prefers that the mongo runs on his dedicated server.
As a programmer I know it would be difficult to answer this question with limited information and no hands-on debugging. But I was hoping someone else experienced this issue and was able to resolve. I just installed mongodb on the server without any further configurations. Is it that I need to install any further packages or do any configurations?
You need to set MONGO_OPLOG_URL to enable the oplog tailing feature. When oplog tailing is disabled, it takes around 10-15 seconds to send the data to the client.
Export MONGO_OPLOG_URL like this:
MONGO_OPLOG_URL=mongodb://localhost/local
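Putting this together with the variables already mentioned in the question, the environment setup before starting the app might look like this (the db name and the app's entry file are placeholders; note also that the oplog lives in the "local" database and only exists when mongod runs as a replica set, even a single-member one):

```shell
# Meteor environment for a local MongoDB; <db-name> is a placeholder
export MONGO_URL="mongodb://localhost:27017/<db-name>"
# oplog tailing needs mongod running as a replica set
export MONGO_OPLOG_URL="mongodb://localhost/local"
export PORT=3000
# then start the bundled app as before, e.g.:
# pm2 start main.js
```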

How to find, properly configure MS Team Foundation server?

I have a Win7 virtual machine that has PostgreSQL installed. There is an (Apache) EnterpriseDB server on my localhost:8080.
I have installed MS Team Foundation Server successfully, and I can see from the management console that my "DefaultCollection" is online.
Browsing to localhost:8080/tfs or localhost:8080/tfs/DefaultCollection returns a 404 Not Found error. I had no say in which port I would like to use.
Can you help me find the proper address for this Team Foundation Server? Or tell me how to configure it properly. (I am unfamiliar with this server-configuring world, so please provide detailed commands or material.)
It sounds like you must've installed PostgreSQL using the one-click installer for Windows, then ran StackBuilder and installed Apache through it.
If so, it's just an ordinary Apache install that you can configure as normal. You need to stop and disable any running Apache service in the Services control panel (services.msc).
Alternately, if you wish to continue using it but on a different port, edit the Apache configuration to set the Listen directive to something other than 8080 and change any NameVirtualHost and VirtualHost directives to use the new port, eg:
Listen 8080
NameVirtualHost *:8080
<VirtualHost *:8080>
... blah blah ...
</VirtualHost>
would become:
Listen 8181
NameVirtualHost *:8181
<VirtualHost *:8181>
... blah blah ...
</VirtualHost>
See:
Apache - Virtual Hosts
Apache - Listen
You can find the location of the Apache config file by examining the command that's being used to run Apache. That might be a batch file used to start and stop it, or a service command in the Services control panel. It'll probably be called httpd.conf or apache2.conf.
They are 'proper' addresses, but unless the person trying to open the webpage has a valid TFS account then you will not be able to access TFS through the website.
Can you access: http://localhost:8080/tfs/web?
Is your Windows login allowed to access TFS server?
As Craig mentioned, you don't give any information that could help diagnose what you're trying to achieve. Why are you trying to access TFS through its web endpoints? Did you make sure MSSQL and IIS are installed on the machine? Why have you got Apache and PostgreSQL installed on an ALM server that doesn't require them?
TFS is a very complex product, and even though the development team has made huge strides in making it easy to install, it's no small task to get a server working.

supervisord with haproxy, paster, and node js

I have to run paster serve for my app and Node.js for my real-time requirements; both are configured through HAProxy. But here I need to run HAProxy with sudo to bind port 80 and the other processes as a normal user. How do I do that? I tried different ways, but to no avail. I tried this command:
command=sudo haproxy
I think this is not the way we should do this. Any ideas?
You'll need to run supervisord as root, and configure it to run your various services under non-privileged users instead.
[program:paster]
# other configuration
user = wwwdaemon
In order for this to work, you cannot set the user option in the [supervisord] section (otherwise the daemon cannot restart your haproxy server). You therefore do want to make sure your supervisord configuration is only writeable by root so no new programs can be added to a running supervisord daemon, and you want to make sure the XML-RPC server options are well protected.
The latter means you need to review any [unix_http_server], [inet_http_server] and [rpcinterface:x] sections you have configured to be properly locked down. For example, use the chown and chmod options for the [unix_http_server] section to limit access to the socket file to privileged users only.
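A sketch of what that locked-down configuration could look like (the paths, program commands and the wwwdaemon user are assumptions):

```ini
; /etc/supervisord.conf (excerpt, sketch)
[unix_http_server]
file = /var/run/supervisor.sock
chmod = 0700              ; socket usable by its owner (root) only
chown = root:root

[program:haproxy]
command = /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -db
; no "user" option here, so haproxy runs as root and can bind port 80

[program:paster]
command = /usr/bin/paster serve /path/to/production.ini
user = wwwdaemon          ; dropped to a non-privileged user
```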
Alternatively, you can run a lightweight front-end server with minimal configuration to proxy port 80 to a non-privileged port instead, and keep this minimal server out of your supervisord setup. nginx is an excellent server for this, installed via the native packaging system of your server (e.g. apt-get on Debian or Ubuntu).
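A minimal sketch of that nginx front-end, where the backend port 8080 and the server name are assumptions:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # forward to the app running as a normal user on a high port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```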