supervisord with haproxy, paster, and Node.js

I have to run paster serve for my app and Node.js for my real-time requirements. Both sit behind haproxy, but haproxy has to run as root to bind port 80 while the other processes run as a normal user. How do I do that? I tried different ways, with no luck, including this command:
command=sudo haproxy
I don't think this is the way it should be done. Any ideas?

You'll need to run supervisord as root, and configure it to run your various services under non-privileged users instead.
[program:paster]
# other configuration
user = wwwdaemon
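For example, a fuller sketch of that layout (the program names, paths, and the wwwdaemon user are illustrative, not prescribed):
[program:haproxy]
command = /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -db
; no user option here, so haproxy stays root and can bind port 80

[program:paster]
command = /usr/local/bin/paster serve /srv/app/production.ini
user = wwwdaemon

[program:node]
command = /usr/bin/node /srv/app/realtime.js
user = wwwdaemon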
For this to work, you cannot set the user option in the [supervisord] section (otherwise the daemon cannot restart your haproxy server). You therefore want to make sure your supervisord configuration is writable only by root, so no new programs can be added to a running supervisord daemon, and you want to make sure the XML-RPC server options are well protected.
The latter means you need to review any [unix_http_server], [inet_http_server] and [rpcinterface:x] sections you have configured to be properly locked down. For example, use the chown and chmod options for the [unix_http_server] section to limit access to the socket file to privileged users only.
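For instance, a locked-down socket section could look like this (the socket path is an assumption):
[unix_http_server]
file = /var/run/supervisor.sock
chmod = 0700          ; owner-only access to the control socket
chown = root:root     ; adjust to whichever privileged user you use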
Alternatively, you can run a lightweight front-end server with minimal configuration to proxy port 80 to a non-privileged port, and keep this minimal server out of your supervisord setup. nginx is an excellent server for this, installed via your server's native packaging system (e.g. apt-get on Debian or Ubuntu).
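A minimal sketch of that approach, assuming haproxy is rebound to an unprivileged port such as 8080:
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;   # unprivileged haproxy port (assumed)
    }
}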

Related

connecting wget to a VPN

I'm trying to download some files using wget, but the problem is that the files will only download from specific servers. How can I use wget over a VPN?
P.S.: I tried use_proxy=yes -e http_proxy=[server]:[port] but it didn't work; I need to connect to a VPN server, not a proxy.
Install and connect a VPN on your machine first, then run the command.
Proxies and VPNs are entirely different things. The proxy functionality won't be of any use to you here.
To use a VPN you have to set up a connection at the OS level (I assume Linux? But I could be wrong). The wget tool itself won't be involved; you'll just run it after your default route has been replaced with the VPN connection (no need for any special flags).
As for how you set up the VPN connection, that differs a lot based on the particular details of your situation. It could involve running openvpn yourinfo.ovpn or something like that, or your VPN provider may offer a separate application that sets up the tunnel connection and then adjusts your OS's routing table so traffic flows through the tunnel instead of the normal gateway.
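A rough sketch with OpenVPN (the .ovpn file name and download URL are examples):
# bring up the tunnel; watch the log for "Initialization Sequence Completed"
sudo openvpn --config yourinfo.ovpn &

# once the tunnel is up, no special flags are needed;
# traffic is routed through the VPN automatically
wget https://files.example.com/archive.tar.gz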

How do I get my VPS Webmin/Virtualmin server to show data from MongoDB in the hosted website?

This is my first question, I hope I do it right. So:
I developed a MERN website, which connects perfectly well to a MongoDB database as well as an Amazon S3 bucket.
I am currently trying to host it on a Hostinger VPS with Virtualmin and Webmin. The files are uploaded thanks to FTP working, so the website design shows, but the MongoDB data is missing.
So far:
DNS is set properly,
SSH is all good,
the mongo shell is installed on the server (through the console I can see my db and the data),
a new user was created successfully with the mongo method db.createUser() and attached to my db.
So my question is: what are the next steps to get the data from the server to the website?
I'm new to this and I've searched everywhere for several days now without any success, and the hosting support is lost on the matter...
Thanks!
By default, Virtualmin installs the LAMP/LEMP stack. There is no support for MERN/MEAN or Node.js-based applications; you have to configure your server manually through the terminal over SSH.
Follow these instructions:
Apache NodeJS installation
There is no GUI support for Node-based apps, but you can manage other services like mail, DNS, firewall, and SSL for your app through Virtualmin and Webmin.
In case it helps anyone, I did succeed in setting up the server; it's quite a bit of work. Here's my config:
I set Nginx to listen for the HTTPS requests from the front end and send the API calls to the back end. The config file is called "default", sits in the sites-available folder, and has the following content:
server {
    listen 80;
    listen 443 ssl;
    root /the/frontend/root/folder;
    server_name _;
    ssl_certificate /the/ssl/.crt/file;
    ssl_certificate_key /the/ssl/.key/file;

    # react app & front-end files
    location / {
        try_files $uri /index.html;
    }

    # node api reverse proxy
    location /api/ {
        proxy_pass http://localhost:portlistenedbybackend/api/;
    }
}
The React frontend comes with a .env file that is integrated into the build. In it I set the URL the frontend sends requests to (these are then caught by Nginx). Be careful to set this URL to the domain of your website when deploying, so in my case: https://example.com/api
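Assuming a Create React App frontend, that variable might look like this (the exact variable name depends on your code):
# .env, read at build time; the REACT_APP_ prefix is the CRA convention
REACT_APP_API_URL=https://example.com/api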
The production process manager pm2 is useful for keeping the backend alive at all times, so I installed it and used it for the Node backend. The command to add the backend's main server file (in my case server.js) to pm2 from the console: sudo pm2 start your/serverfile/address
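For reference, the basic pm2 commands look like this (the path and process name are examples):
sudo pm2 start /path/to/server.js --name backend   # register the backend
sudo pm2 save                                      # persist the process list
sudo pm2 startup                                   # generate a boot-time startup script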
Here's a couple of links that proved very useful to understand how to configure the server :
Applicable to Amazon server but a lot applicable here too : https://jasonwatmore.com/post/2019/11/18/react-nodejs-on-aws-how-to-deploy-a-mern-stack-app-to-amazon-ec2
A guide to journalctl for viewing systemd logs in the console, for debugging:
https://www.thegeekdiary.com/beginners-guide-to-journalctl-how-to-use-journalctl-to-view-and-manipulate-systemd-logs/
For setting up Webmin server : https://www.serverpronto.com/kb/cpage.php?id=Webmin
At first I set Webmin and Virtualmin aside, since all I could find (from support included) were tutorials to set up the server via the console. So bit by bit I set it up that way. Then, finally, I got a tutorial from support to set up the server from Webmin, but to this day I can't say whether that made a difference in the structure. At least it's clean.
Last but not least, a couple of console commands I found very useful:
systemctl status theserviceyouwanttosee
systemctl start theserviceyouwanttostart
systemctl stop theserviceyouwanttostop
systemctl restart theserviceyouwanttorestart
Examples of services: nginx, mongod...
Test if Nginx is set up properly: sudo nginx -t
Reload the Nginx config file after any modification: sudo nginx -s reload
See the last errors logged by Nginx: sudo tail -f /var/log/nginx/error.log
Save the current pm2 settings: pm2 save
Backend execution logs: sudo pm2 logs
Identify the processes still running with mongo (useful if mongod won't restart properly): pgrep mongo
Kill the mongo process to be able to start fresh: kill <process>
Show all services used by the server: sudo systemctl
See all processes in execution and CPU stats, amongst other things: top
I'm still new to all of this despite my couple of weeks on the subject, so this description is most probably improvable. Don't hesitate to suggest improvements, point out mistakes, or ask questions; I'll do my best to answer them.
Cheers!

remotely pulling configuration information from a BIND9 nameserver

How do I remotely pull configuration information from a running BIND name server without logging in as root on the server where it is running?
I searched a lot and read many materials about BIND9 but still found no answers.
I know there are some commands to conduct zone transfer or update zone resource data, but I didn't find any way to pull configuration info from a name server.
In short: you cannot. There is no provision in the DNS protocol for sending server configuration, so whatever technology you use, it will NOT be DNS. And since BIND9 is designed to serve DNS requests and send DNS replies only, BIND9 cannot be coerced into sending its configuration the way you'd expect.
You have to install and configure some other piece of software to be able to access the configuration. SSH is one of the most widespread technologies used for managing server configurations.
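For example, once SSH access is in place, pulling the configuration is a one-liner (the host, user, and path are assumptions):
ssh admin@dns-server 'sudo cat /etc/bind/named.conf'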
You could use "rndc -s dns-server dumpdb".
In named's configuration, point dump-file at a shared folder that is accessible from the system that runs rndc.
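A sketch of the relevant named.conf snippet (the path is an assumption); note that dumpdb writes zone and cache data rather than named.conf itself, and remote rndc must be allowed in the server's controls section:
options {
    dump-file "/var/named/shared/named_dump.db";   // written when "rndc dumpdb" runs
};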

Is there any IRC server / daemon with an integrated "bouncer"?

I want to offer IRC service to other users on my local network.
I'd like to have persistent logs of all (or at least certain) channels and private messages that can be replayed by the client. The log capacity could be limited. I know this is usually handled by a bouncer.
I want this setup to work locally, even if the server uplink goes down, so I probably want to run my own IRC server.
Are there any IRC servers that already support this?
Having a common chat and pastebin on the local network is very useful.
I've been attempting this today, and after some tribulations I have success.
I've been running ircd-hybrid without any problems for a while, but conversation histories, as you know, are not saved.
You could use any bouncer, but I'll demonstrate ZNC:
If you're running a Debian-based Linux, run...
sudo apt-get install znc
Once it's installed, run...
znc --makeconf
This generates a config file. When asked for a port number, specify a free port. This is the one you will connect to from your client, and it should NOT be the same as your IRC daemon's port.
Later on you will be asked to specify the server you want to connect to; this should be 127.0.0.1:<your IRC daemon's port>.
Make sure your firewall allows the new port, and restart ircd:
sudo service <your irc daemon> restart
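For example, with ufw (the port number is whatever you chose during --makeconf; 6697 here is hypothetical):
sudo ufw allow 6697/tcp   # open the ZNC listener port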
That's it. Unless you've set its modes to +i, your bouncer should now be visible on the channels you've asked it to join.
For more info on ZNC:
http://wiki.znc.in/FAQ
It might help to talk to the IRC crew at #ircd-coders on irc.ircd-hybrid.org,
and for ZNC people... #znc on chat.freenode.net

How do you run multiple instances of JBoss 4.0 (running under Eclipse) on the same machine?

At my office we run JBoss 4.0 and use Eclipse to debug and run the JBoss server. We're deploying simple WARs, nothing terribly complex. However, I haven't yet figured out how to get this version of JBoss either to run separate instances of the WAR (HEAD and the branch, for example) or to run separate servers controlled by two different projects in Eclipse. Does anyone know how to do this? I've searched and found nothing that addresses this specifically.
The three things you have to think about are:
Making sure that instances do not overwrite each other’s files
Making sure that the instances don’t open the same TCP ports
Determining how to shut down each instance
Create a copy of your configuration so you don't have file collisions (like when temp files are created). Then, I would recommend just binding the two configurations to different IPs on the same machine, which will avoid port conflicts. You can do something like this:
run -b 192.168.0.100 -c myconfig
run -b 192.168.0.101 -c myconfig2
If you have two network cards, this is easy. If you don't, you can set up virtual IP addresses with a loopback adapter on Windows. On Linux, you can use ifconfig.
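On Linux that could look like this (the interface name and netmask are examples):
sudo ifconfig eth0:1 192.168.0.101 netmask 255.255.255.0 up   # alias address for the second instance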
To shut down, just make sure you specify the IP/port to shut down, like this:
shutdown -s 192.168.0.100:1099 -S
shutdown -s 192.168.0.101:1099 -S
I'm not sure how to get you going on Eclipse, but you should be able to specify those flags to the run and shutdown scripts through the configuration somehow.
We cover this topic in depth in JBoss in Action, section 15.2, "Collocating multiple application server instances".
I think you can register various instances of JBoss with your Eclipse installation; see this normal installation example.
Hope it helps you.