What is NGINX Expecting To See Here? - nginx-config

I've been working on integrating onlyoffice with my working nextcloud app. I've attempted this many times and in many different ways. I believe I understand "most" of the mistakes I've made previously...but there always seems to be one more.
Nextcloud is running on an Ubuntu 22.04 VM. I have another VM running an nginx reverse proxy for this and the other apps I want to expose to the outside. On this attempt I've decided to also run onlyoffice on the nextcloud server, but using different ports from that app.
Using the modified template suggested for SSL-enabled nginx, when I attempt to start the service, I get:
bonzo@cloud:/etc/nginx/conf.d$ systemctl status nginx.service
nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2022-08-05 15:12:02 CDT; 8s ago
Docs: man:nginx(8)
Process: 7990 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
CPU: 26ms
Aug 05 15:12:02 cloud systemd[1]: Starting A high performance web server and a reverse proxy server...
Aug 05 15:12:02 cloud nginx[7990]: nginx: [emerg] host not found in upstream "docservice" in /etc/nginx/includes/ds-docservice.conf:74
Aug 05 15:12:02 cloud nginx[7990]: nginx: configuration file /etc/nginx/nginx.conf test failed
Aug 05 15:12:02 cloud systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
Aug 05 15:12:02 cloud systemd[1]: nginx.service: Failed with result 'exit-code'.
Aug 05 15:12:02 cloud systemd[1]: Failed to start A high performance web server and a reverse proxy server.
When I looked at line 74 in ds-docservice.conf, it shows this:
location / {
proxy_pass http://docservice;
}
And I'm not exactly sure what it is expecting to see there. I haven't included it in this first post, but I'd be happy to share my ds.conf or any other logs/configs that would help with this. I also realize this is probably going to be something silly that I've missed or messed up; I'm still new to nginx, and integrating onlyoffice has been a lot more difficult than you'd think from the literature and YouTube videos I've seen!
Thanks for any help
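For what it's worth, nginx resolves the host name in a proxy_pass directive when the configuration is loaded, so "docservice" must either be defined as an upstream block somewhere in the included config (the onlyoffice templates typically define it in one of the included files) or be resolvable as a real host. A minimal sketch of the kind of upstream definition nginx is looking for — the address and port here are assumptions, not the actual onlyoffice defaults:
upstream docservice {
  # assumption: the document server's docservice component listens locally on port 8000
  server 127.0.0.1:8000;
}
If that block is missing, or lives in a file that is no longer included, the config test fails with exactly the "host not found in upstream" error shown above.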

Related

Getting error "Failed to start Artifactory service" after Artifactory installation

Getting this error after installing Artifactory
[root@mbiazelkdynatrace run]# systemctl status artifactory.service
artifactory.service - Artifactory service
Loaded: loaded (/usr/lib/systemd/system/artifactory.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: protocol) since Mon 2023-02-13 14:14:06 UTC; 43s ago
Process: 1469 ExecStart=/opt/jfrog/artifactory/app/bin/artifactoryManage.sh start (code=exited, status=0/SUCCESS)
Main PID: 26760 (code=exited, status=143)
Feb 13 14:14:06 mbiazelkdynatrace systemd[1]: Failed to start Artifactory service.
Feb 13 14:14:06 mbiazelkdynatrace systemd[1]: Unit artifactory.service entered failed state.
Feb 13 14:14:06 mbiazelkdynatrace systemd[1]: artifactory.service failed.
Please help me figure out the issue; I've been trying for the last two weeks.
Since you are trying to start Artifactory for the first time, there may be a port conflict: the ports Artifactory requires might already be in use.
The snippet above does not reveal the cause of the issue.
Can you try the steps below? (A consolidated command sketch follows the list.)
1. Stop the Artifactory service.
2. Navigate to $JFROG_HOME/artifactory/app/bin (ideally /opt/jfrog/artifactory/app/bin if you have not changed the default location).
3. Run ./artifactoryctl start
4. Navigate to the $JFROG_HOME/artifactory/var/log location.
5. Tail console.log, which should reveal the issue, or check artifactory-service.log.
If nothing is found, share the error snippet from artifactory-service.log.
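A consolidated sketch of those steps, assuming the default install location (adjust the paths if you changed JFROG_HOME):
systemctl stop artifactory.service
cd /opt/jfrog/artifactory/app/bin
./artifactoryctl start
cd /opt/jfrog/artifactory/var/log
tail -f console.log      # or: tail -f artifactory-service.log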

mongodb service fail after reinstalling it on ubuntu

I was using MongoDB and it was fine.
Then I wanted to convert it to a replica set, ran into some problems, and uninstalled it.
After reinstalling (10 times, trying everything I found on the internet), when I check the status with systemctl status it says failed with exit-code (I know my conf file doesn't have a problem).
What can I do? I even installed the 3.3 version, and even that doesn't start anymore.
I tried everything that came to mind (purging config files and a lot more...).
I really don't want to reinstall my OS (I really can't).
This is my sudo systemctl status mongod:
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2021-02-18 20:05:20 +0330; 8s ago
Docs: https://docs.mongodb.org/manual
Process: 147513 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=1/FAILURE)
Main PID: 147513 (code=exited, status=1/FAILURE)
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST systemd[1]: Started MongoDB Database Server.
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST mongod[147513]: about to fork child process, waiting until server is ready for connections.
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST mongod[147527]: forked process: 147527
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST mongod[147513]: ERROR: child process failed, exited with 1
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST mongod[147513]: To see additional information in this output, start without the "--fork" option.
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST systemd[1]: mongod.service: Main process exited, code=exited, status=1/FAILURE
Feb 18 20:05:20 nima-Lenovo-ideapad-320-15AST systemd[1]: mongod.service: Failed with result 'exit-code'.
I solved the problem by changing the default MongoDB port from 27017 to 27018 in /etc/mongod.conf.
I'm sure this will come in handy for a lot of people.
For the last part: after uninstalling MongoDB I removed the mongod.service files (every file) in the system and systemd directories under root and installed MongoDB again.
(So I think the uninstall wasn't complete the first time, and the two instances interfered with each other. Now everything works fine in MongoDB on port 27018.)
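For reference, the port change mentioned above is a one-line edit under the net section of /etc/mongod.conf (YAML format); a minimal sketch, with the rest of the file left as shipped:
net:
  port: 27018        # changed from the default 27017
  bindIp: 127.0.0.1
Followed by sudo systemctl restart mongod to pick up the change.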

systemd service activation for Python script fails

I want to register a Python script as a daemon service, executed at system startup and running continuously in the background. The script opens network sockets and a local log file, and runs a number of threads. The script is well-formed and runs without any compilation or runtime issues.
I used the service file below for registration:
[Unit]
Description=ModBus2KNX Gateway Daemon
After=multi-user.target
[Service]
Type=simple
ExecStart=/usr/bin/python3 /usr/bin/ModBusDaemon.py
[Install]
WantedBy=multi-user.target
Starting the service results in the error below:
● ModBusDaemon.service - ModBus2KNX Gateway Daemon
Loaded: loaded (/lib/systemd/system/ModBusDaemon.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2021-01-04 21:46:29 CET; 6min ago
Process: 1390 ExecStart=/usr/bin/python3 /usr/bin/ModBusDaemon.py (code=exited, status=1/FAILURE)
Main PID: 1390 (code=exited, status=1/FAILURE)
Jan 04 21:46:29 raspberrypi systemd[1]: Started ModBus2KNX Gateway Daemon.
Jan 04 21:46:29 raspberrypi systemd[1]: ModBusDaemon.service: Main process exited, code=exited, status=1/FAILURE
Jan 04 21:46:29 raspberrypi systemd[1]: ModBusDaemon.service: Failed with result 'exit-code'.
Appreciate your support!
Related posts brought me to the resolution of my issue. Ubuntu systemd custom service failing with python script refers to the same problem. The proposed solution of adding WorkingDirectory to the [Service] section resolved the issue for me, though I could not find adequate systemd documentation outlining this implicit dependency.
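A minimal sketch of the corrected unit file — the WorkingDirectory value is an assumption; point it at whatever directory ModBusDaemon.py expects to find its log file and other relative paths in:
[Unit]
Description=ModBus2KNX Gateway Daemon
After=multi-user.target
[Service]
Type=simple
# assumed working directory; adjust to the script's data/log directory
WorkingDirectory=/usr/bin
ExecStart=/usr/bin/python3 /usr/bin/ModBusDaemon.py
[Install]
WantedBy=multi-user.target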
As MBizm said, you must also add WorkingDirectory.
After that you must also run these commands:
sudo systemctl daemon-reload
sudo systemctl enable your_service.service
sudo systemctl start your_service.service

Trying to get HaProxy to log specific HTTP headers to custom file

I'm using HAProxy as a reverse proxy and load balancer for three servers. Each of these servers uses Basic Authentication to control access, and HAProxy shares the load across them via a round-robin method.
I'd like to log the IP address, user agent, request URL and Basic Authentication username for each request that HAProxy handles, and to log this to a custom file so that another script can periodically check that credentials are not being shared by my users.
It looks like this is possible, but I cannot work out how to do it.
Here's what I've added to my haproxy.cfg file in my frontend section:
# Log name of server
capture request header Host len 500
# Capture request user agent
capture request header User-Agent len 64
# Capture authorization details
capture request header Authorization len 64
log-format "%ci:%cp [%t] %H %HP %hr %hrl"
When I include this in my haproxy.cfg file and restart the service, HAProxy fails to start. Looking at 'systemctl status haproxy.service' shows:
haproxy.service - HAProxy Load Balancer
Loaded: loaded (/lib/systemd/system/haproxy.service; enabled)
Active: failed (Result: start-limit) since Mon 2018-06-18 13:09:04 BST; 18s ago
Docs: man:haproxy(1)
file:/usr/share/doc/haproxy/configuration.txt.gz
Process: 25529 ExecReload=/bin/kill -USR2 $MAINPID (code=exited, status=0/SUCCESS)
Process: 25527 ExecReload=/usr/sbin/haproxy -c -f ${CONFIG} (code=exited, status=0/SUCCESS)
Process: 9883 ExecStart=/usr/sbin/haproxy-systemd-wrapper -f ${CONFIG} -p /run/haproxy.pid $EXTRAOPTS (code=exited, status=0/SUCCESS)
Process: 10644 ExecStartPre=/usr/sbin/haproxy -f ${CONFIG} -c -q (code=exited, status=1/FAILURE)
Main PID: 9883 (code=exited, status=0/SUCCESS)
Jun 18 13:09:04 host systemd[1]: Failed to start HAProxy Load Balancer.
Jun 18 13:09:04 host systemd[1]: Unit haproxy.service entered failed state.
Jun 18 13:09:04 host systemd[1]: haproxy.service holdoff time over, scheduling restart.
Jun 18 13:09:04 host systemd[1]: Stopping HAProxy Load Balancer...
Jun 18 13:09:04 host systemd[1]: Starting HAProxy Load Balancer...
Jun 18 13:09:04 host systemd[1]: haproxy.service start request repeated too quickly, refusing to start.
Jun 18 13:09:04 host systemd[1]: Failed to start HAProxy Load Balancer.
Jun 18 13:09:04 host systemd[1]: Unit haproxy.service entered failed state.
What am I doing wrong?
I'm new to HAProxy. I have turned on basic auth on the backend server and I wanted to see in the HAProxy log who is making requests.
I have done it like this: on the backend side, I set the variable txn.user:
http-request set-var(txn.user) http_auth_group(myuserlist)
Of course you need to have myuserlist defined, but this is easy (a sketch of one follows below).
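A minimal sketch of such a userlist — the user names and passwords are placeholders, not anything you must use:
userlist myuserlist
  group users
  user alice insecure-password changeme1 groups users
  user bob insecure-password changeme2 groups users
The backend then enforces authentication against it with something like:
http-request auth realm Protected unless { http_auth(myuserlist) }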
In the frontend I added this variable (%[var(txn.user)]) to my custom log format:
log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r [%[var(txn.user)]]"
I can now see IP address, user agent, request URL and HAProxy basic authentication username.
Just for fun (since I'm using the tiny busybox HTTP server, 20k in size), on the backend I have added:
http-response set-header Set-Cookie user=%[var(txn.user)];path=/;SameSite=strict;Secure
and inside the HTML I read this cookie with JavaScript and show on the page who is logged in :)
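The reading side is a couple of lines of JavaScript; a rough sketch (the element id is made up):
// read the "user" cookie set by HAProxy and display it
var match = document.cookie.match(/(?:^|;\s*)user=([^;]*)/);
document.getElementById('whoami').textContent = match ? match[1] : 'anonymous';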
HAProxy only knows how to log to a syslog socket / daemon.
You will need to tag your logs (with the log-tag directive, probably on a backend) and configure your syslog daemon to write entries matching the previously defined tag to a file.
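A sketch of that approach, assuming rsyslog — the tag and file names are assumptions:
# haproxy.cfg, in the frontend or backend whose traffic you want separated
log-tag haproxy-auth
# /etc/rsyslog.d/49-haproxy-auth.conf
if $programname == 'haproxy-auth' then /var/log/haproxy-auth.log
& stop
Then restart rsyslog and HAProxy so both pick up the change.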

fail2ban fails to start ubuntu 16.04

I have used this tutorial to install fail2ban on my Ubuntu 16.04 server.
After going through it I tried to start it with: /etc/init.d/fail2ban start
Here was the response:
[....] Starting fail2ban (via systemctl): fail2ban.serviceJob for fail2ban.service failed because the control process exited with error code. See "systemctl status fail2ban.service" and "journalctl -xe" for details.
failed!
When I then run: systemctl status fail2ban.service
I get this:
fail2ban.service - Fail2Ban Service
Loaded: loaded (/lib/systemd/system/fail2ban.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Tue 2018-05-15 14:01:38 UTC; 1min 40s ago
Docs: man:fail2ban(1)
Process: 4468 ExecStart=/usr/bin/fail2ban-client -x start (code=exited, status=255)
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Control process exited, code=exited status=255
May 15 14:01:38 tastycoders-prod1 systemd[1]: Failed to start Fail2Ban Service.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Unit entered failed state.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Failed with result 'exit-code'.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Service hold-off time over, scheduling restart.
May 15 14:01:38 tastycoders-prod1 systemd[1]: Stopped Fail2Ban Service.
May 15 14:01:38 tastycoders-prod1 systemd[1]: fail2ban.service: Start request repeated too quickly.
May 15 14:01:38 tastycoders-prod1 systemd[1]: Failed to start Fail2Ban Service.
Some tutorials at DigitalOcean contain errors. Check your /etc/fail2ban/jail.local and try to keep it as simple as you can, i.e. keep only those options you want to change in it (a minimal sketch follows at the end of this answer).
Otherwise, if you have copied jail.conf to jail.local (as the DO guide suggests), then delete or comment out the pam section in jail.local if you do not use it.
Go to line 146 of /etc/fail2ban/jail.local
# [pam-generic]
# enabled = false
# pam-generic filter can be customized to monitor specific subset of 'tty's
# filter = pam-generic
# port actually must be irrelevant but lets leave it all for some possible uses
# port = all
# banaction = iptables-allports
# port = anyport
# logpath = /var/log/auth.log
# maxretry = 6
More details are here: https://github.com/fail2ban/fail2ban/issues/1396
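As an illustration of the "keep it as simple as you can" advice above, a minimal jail.local sketch that only enables the sshd jail — the values shown are assumptions, not required settings:
[DEFAULT]
bantime = 3600
maxretry = 5

[sshd]
enabled = true
port = ssh
logpath = /var/log/auth.log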