ProFTPD Version 1.3.5d ExtendedLog is not working? - ubuntu-16.04

I am a beginner in Linux. I am using Plesk Onyx 17.5.3 on Ubuntu 16.04 (build 1705170317.16). The ExtendedLog I configured in /etc/proftpd.conf is not working.
Following is my /etc/proftpd.conf:
#
# To have more informations about Proftpd configuration
# look at : http://www.proftpd.org/
#
# This is a basic ProFTPD configuration file (rename it to
# 'proftpd.conf' for actual use. It establishes a single server
# and a single anonymous login. It assumes that you have a user/group
# "nobody" and "ftp" for normal operation and anon.
ServerName "ProFTPD"
#ServerType standalone
ServerType inetd
DefaultServer on
LogFormat nijin "%t %h %u %D %f \"%r\" %s %b"
ExtendedLog /var/log/ftp.log ALL nijin
<Global>
DefaultRoot ~ psacln
AllowOverwrite on
<IfModule mod_tls.c>
# common settings for all virtual hosts
TLSEngine on
TLSRequired off
TLSLog /var/log/plesk/ftp_tls.log
TLSRSACertificateFile /usr/local/psa/admin/conf/httpsd.pem
TLSRSACertificateKeyFile /usr/local/psa/admin/conf/httpsd.pem
# Authenticate clients that want to use FTP over TLS?
TLSVerifyClient off
# Allow SSL/TLS renegotiations when the client requests them, but
# do not force the renegotations. Some clients do not support
# SSL/TLS renegotiations; when mod_tls forces a renegotiation, these
# clients will close the data connection, or there will be a timeout
# on an idle data connection.
TLSRenegotiate none
# As of ProFTPD 1.3.3rc1, mod_tls only accepts SSL/TLS data connections
# that reuse the SSL session of the control connection, as a security measure.
# Unfortunately, there are some clients (e.g. curl) which do not reuse SSL sessions.
TLSOptions NoSessionReuseRequired
</IfModule>
PassivePorts 50001 50100
</Global>
DefaultTransferMode binary
UseFtpUsers on
TimesGMT off
SetEnv TZ :/etc/localtime
# Port 21 is the standard FTP port.
Port 21
# Umask 022 is a good standard umask to prevent new dirs and files
# from being group and world writable.
Umask 022
# To prevent DoS attacks, set the maximum number of child processes
# to 30. If you need to allow more than 30 concurrent connections
# at once, simply increase this value. Note that this ONLY works
# in standalone mode, in inetd mode you should use an inetd server
# that allows you to limit maximum number of processes per service
# (such as xinetd)
MaxInstances 30
#Following part of this config file were generate by PSA automatically
#Any changes in this part will be overwritten by next manipulation
#with Anonymous FTP feature in PSA control panel.
#Include directive should point to place where FTP Virtual Hosts configurations
#preserved
ScoreboardFile /var/run/proftpd_scoreboard
# Primary log file must be outside of system logrotate province
TransferLog /var/log/plesk/xferlog
#Change default group for new files and directories in vhosts dir to psacln
<Directory /var/www/vhosts>
GroupOwner psacln
</Directory>
# Enable PAM authentication
AuthPAM on
AuthPAMConfig proftpd
IdentLookups off
UseReverseDNS off
AuthGroupFile /etc/group
Include /etc/proftpd.d/*.conf
This is my customized configuration file. I added ExtendedLog /var/log/ftp.log to it, but /var/log/ftp.log is never created. I touched the file manually as well, but no use: logs are not populated.
Any answers will be appreciated.

Update: the permissions were set to 664

Try setting non-world-writable permissions on /var/log/ftp.log, as recommended at http://www.proftpd.org/docs/directives/linked/config_ref_ExtendedLog.html
I have tested it by the following steps and it worked:
echo "ExtendedLog /var/log/ftp.log read,write" >> /etc/proftpd.conf
touch /var/log/ftp.log && chmod 644 /var/log/ftp.log
upload a test file
check the log:
root@server:/# cat /var/log/ftp.log
192.168.34.219 UNKNOWN mario [14/Jun/2017:11:38:20 +0700] "STOR Google Chrome.lnk" 226 2356
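For reference, a minimal sketch of the directive pair from the question, keeping the custom format name nijin (untested against the Plesk-packaged build, so treat it as an assumption):
# define the custom format, then attach it to the log for all command classes
LogFormat nijin "%t %h %u %D %f \"%r\" %s %b"
ExtendedLog /var/log/ftp.log ALL nijin
Create the file with touch /var/log/ftp.log && chmod 644 /var/log/ftp.log, then reconnect (with ServerType inetd a full restart is not needed); the ExtendedLog documentation linked above notes that ProFTPD will not use a world-writable log file.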

Related

Permission denied (502 Bad Gateway) for NGINX to access uWSGI socket with 'user www-data' but not 'user root' | NGINX uWSGI FLASK Ubuntu 22.04

I am setting up a VM with NGINX, uWSGI FLASK, and Ubuntu 22.04.
I know this question has been asked but I couldn't find a clear answer or a good solution.
I get a 502 - Bad Gateway error from NGINX when its user in nginx.conf is 'www-data' but not when I set it as root or my sudo user.
I created a project folder in my sudo user's home directory and put the virtual environment inside, along with the application files, etc. The website works with:
uwsgi --socket 0.0.0.0:5000 --protocol=http -w wsgi:app
But NGINX can't access my socket. No matter how I change my socket's permissions or ownership, I get a 502.
But if the user in nginx.conf is root or my sudo-user name, then it works. I tried changing modes and ownerships but nothing works.
Is it safe to use my sudo-user in nginx.conf?
Does anybody have an idea how to solve this and grant access to the socket to www-data user?
Current permissions for the files:
-rw-rw-r-- 1 anthony anthony 151 Aug 29 04:16 main.ini
-rw-rw-r-- 1 anthony anthony 340 Aug 28 13:35 main.py
srw-rw---- 1 anthony www-data 0 Aug 29 05:05 main.sock
drwxrwxr-x 4 anthony anthony 4096 Aug 28 06:27 venv
-rw-rw-r-- 1 anthony anthony 60 Aug 28 08:11 wsgi.py
Nginx sites-available file:
server {
listen 80;
server_name xyz.com www.xyz.com;
location / {
include uwsgi_params;
uwsgi_pass unix:/home/anthony/website/main.sock;
}
}
Systemd service file:
[Unit]
Description=uWSGI instance to serve xyz
After=network.target
[Service]
User=anthony
Group=www-data
WorkingDirectory=/home/anthony/website
Environment="PATH=/home/anthony/website/venv/bin"
ExecStart=/home/anthony/website/venv/bin/uwsgi --ini main.ini
[Install]
WantedBy=multi-user.target
ini file for UWSGI:
[uwsgi]
module = wsgi:app
master = true
processes = 5
socket = /home/anthony/website/main.sock
chmod-socket = 660
vacuum = true
die-on-term = true
Beginning of NGINX's conf file (nothing helpful here I think):
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
Thank you for your help.
I found a solution that I find clean.
On the website for the mod_wsgi they say:
On some Linux distributions, the home directory of a user account is not accessible to other users. Rather than change the permissions on your home directory, it might be better to consider locating your WSGI application code and any Python virtual environment outside of your home directory.
So I deleted the whole project in my home directory, and instead created a virtual environment in /usr/local/venvs/. I ran sudo chmod 777 /usr/local/venvs so that I could create my website's virtualenv inside it (otherwise write access is denied to the virtualenv command).
I created the application folder in /var/www/mywebsite to be consistent with NGINX's default configurations, and updated the links to python, uwsgi, the socket, set user www-data in nginx.conf, etc... And now it works.
Either I did something wrong when trying to give permission to my home directory earlier, or Ubuntu 22.04 just doesn't allow other users at all. In any case I find this solution much cleaner than giving access to my home directory or using my sudo user for nginx.
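For anyone following the same route, a rough sketch of that layout (the paths and the anthony user are the ones from this thread; the group ownership is my own suggestion and a gentler alternative to chmod 777):
# virtual environment outside the home directory
sudo mkdir -p /usr/local/venvs
sudo python3 -m venv /usr/local/venvs/website-venv
# application code under /var/www, owned by the deploy user, group www-data
sudo mkdir -p /var/www/mywebsite
sudo chown -R anthony:www-data /var/www/mywebsite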
Hope it can help someone in future.
Thanks
EDIT:
I found the reason why it didn't work in the home folder.
By default (at least on my Ubuntu 22.04 version), a user's home directory has permissions 750.
Setting it to 751 solves the Bad Gateway issue. I think the reason is the uWSGI .ini file (mine is placed in my application folder), which is called from the .service file in the /etc/systemd/system/ folder:
[Unit]
Description=uWSGI instance to serve website
After=network.target
[Service]
User=anthony
Group=www-data
WorkingDirectory=/home/anthony/website
Environment="PATH=/usr/local/venvs/website-venv/bin"
ExecStart=/usr/local/venvs/website-venv/bin/uwsgi --ini main.ini
[Install]
WantedBy=multi-user.target
But I am not sure at all; I am a new user of Unix-like systems.
Apparently many distributions use permissions 755 for users' home directories by default, so switching from 750 to 751 seems fine security-wise, and it makes accessing the app folder more convenient.
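A quick way to confirm this kind of path-permission problem (a sketch, assuming the socket path from the question) is to walk the path as the nginx user:
# show the permissions of every directory component leading to the socket
namei -l /home/anthony/website/main.sock
# check whether www-data can traverse (execute/search) the home directory
sudo -u www-data test -x /home/anthony && echo traversable || echo blocked
# grant search permission only, as described in the edit above
sudo chmod 751 /home/anthony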

MongoDB Compass error creating SSH Tunnel: connect EADDRINUSE, after setting username / password on database, AWS Linux 2 (EC2)

I have set up MongoDB on an AWS Linux 2 EC2 instance.
I have associated an inbound rule (SSH | TCP | 22) with the instance.
I was able to SSH into it through MongoDB Compass by using the following settings:
However, as soon as I added a username and password to my database using the following method:
use my_database
db.createUser(
{
user: "some_user",
pwd: "some_password",
roles: [{ role: "readWrite", db: "my_database" }]
}
)
And tried to access it using the following parameters:
I got the following error:
Error creating SSH Tunnel: connect EADDRINUSE some_ip:22 - Local (0.0.0.0:29353)
Here is my /etc/ssh/sshd_config file content:
#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::
HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
# Ciphers and keying
#RekeyLimit default none
# Logging
#SyslogFacility AUTH
SyslogFacility AUTHPRIV
#LogLevel INFO
# Authentication:
#LoginGraceTime 2m
#PermitRootLogin yes
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10
#PubkeyAuthentication yes
# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2
# but this is overridden so installations will only check .ssh/authorized_keys
AuthorizedKeysFile .ssh/authorized_keys
#AuthorizedPrincipalsFile none
# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes
# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
PermitEmptyPasswords no
#PasswordAuthentication no
# Change to no to disable s/key passwords
ChallengeResponseAuthentication yes
#ChallengeResponseAuthentication no
# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
#KerberosUseKuserok yes
# GSSAPI options
GSSAPIAuthentication yes
GSSAPICleanupCredentials no
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no
#GSSAPIEnablek5users no
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
# WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several
# problems.
UsePAM yes
#AllowAgentForwarding yes
AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
#PrintMotd yes
#PrintLastLog yes
#TCPKeepAlive yes
#UseLogin no
#UsePrivilegeSeparation sandbox
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#ShowPatchLevel no
#UseDNS yes
#PidFile /var/run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
# no default banner path
#Banner none
# Accept locale-related environment variables
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
AcceptEnv XMODIFIERS
# override default of no subsystems
Subsystem sftp /usr/libexec/openssh/sftp-server
# Example of overriding settings on a per-user basis
#Match User anoncvs
# X11Forwarding no
# AllowTcpForwarding no
# PermitTTY no
# ForceCommand cvs server
AuthorizedKeysCommand /opt/aws/bin/eic_run_authorized_keys %u %f
AuthorizedKeysCommandUser ec2-instance-connect
Am I missing anything over here?
I was running into the exact same issue when trying to connect through an SSH tunnel. I found a quirky solution for it.
I solved it by installing Studio 3T. Once it opens, create a new connection by clicking Connect -> New Connection.
Set up your connection, save it, and you should be able to connect successfully.
When this is complete do the following:
Click on Connect once again.
Right-click the saved connection and select Edit....
At the bottom left there is an option named To URI... to export the Connection String. 
Finally, select the option Include Passwords and copy the Connection String.
That's it! You can now paste it in MongoDB Compass and you should be good to go.
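If you would rather not install another client, a manual tunnel is another way around Compass's built-in SSH tunnel. This is only a sketch; the key file, user and hostname are placeholders:
# forward a local port to mongod running on the EC2 instance
ssh -i ~/.ssh/my-key.pem -N -L 27018:127.0.0.1:27017 ec2-user@<ec2-public-dns>
# then point Compass (with its SSH tunnel option disabled) at:
#   mongodb://some_user:some_password@localhost:27018/my_database?authSource=my_database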

configure proftpd to serve ftp and sftp simultaneously

Using Ubuntu 18.04 LTS and ProFTPD 1.3.5e.
I have ProFTPD serving FTP on ports 20, 21 and running just fine.
When I add in /etc/proftpd/conf.d/sftp.conf, FTP quits working. When I delete the sftp.conf and restart proftpd, FTP starts working again. I conclude that there is something wrong with this conf file.
Also, I want sftp to accept just a login id and password for authentication. How do I do that? I have looked at the SFTPAuthMethods directive and it looks like if I leave it out then it will allow all authentication methods and that is okay with me.
Here is the sftp.conf file:
<IfModule mod_sftp.c>
SFTPEngine on
Port 2222
SFTPLog /var/log/proftpd/sftp.log
# Configure both the RSA and DSA host keys, using the same host key
# files that OpenSSH uses.
SFTPHostKey /etc/ssh/ssh_host_rsa_key
SFTPHostKey /etc/ssh/ssh_host_dsa_key
SFTPAuthorizedUserKeys file:/etc/proftpd/authorized_keys/%u
# Enable compression
SFTPCompression delayed
</IfModule>
What should I change to get SFTP running on port 2222 and continue to have FTP running on ports 20 & 21?
Thanks in advance!
Update:
Based on the excellent feedback I have received in the notes, instead of using the sftp.conf file I have above, I added a wrapper and some other configuration parameters and have put that config into the proftpd.conf file. It reads as follows:
<snip>
<IfModule mod_sftp.c>
<VirtualHost 0.0.0.0>
# The SFTP configuration
SFTPEngine on
Port 2222
SFTPLog /var/log/proftpd/sftp.log
Include /etc/proftpd/sql.conf
SFTPAuthMethods password keyboard-interactive hostbased publickey
# Configure both the RSA and DSA host keys, using the same host key
# files that OpenSSH uses.
SFTPHostKey /etc/ssh/ssh_host_rsa_key
SFTPHostKey /etc/ssh/ssh_host_dsa_key
SFTPAuthorizedUserKeys file:/etc/proftpd/authorized_keys/%u
# Enable compression
SFTPCompression delayed
</VirtualHost>
</IfModule>
So now the server is answering on FTP ports normally and on port 2222. When I attempt to connect to port 2222 using WinSCP, it fails authentication. Here is the sftp.log snippet that is generated each time I try to connect.
2020-04-21 21:03:50,340 mod_sftp/0.9.9[13017]: sent server version 'SSH-2.0-mod_sftp/0.9.9'
2020-04-21 21:03:50,355 mod_sftp/0.9.9[13017]: received client version 'SSH-2.0-WinSCP_release_5.17.3'
2020-04-21 21:03:50,355 mod_sftp/0.9.9[13017]: handling connection from SSH2 client 'WinSCP_release_5.17.3'
2020-04-21 21:03:51,284 mod_sftp/0.9.9[13017]: + Session key exchange: ecdh-sha2-nistp256
2020-04-21 21:03:51,284 mod_sftp/0.9.9[13017]: + Session server hostkey: ssh-rsa
2020-04-21 21:03:51,284 mod_sftp/0.9.9[13017]: + Session client-to-server encryption: aes256-ctr
2020-04-21 21:03:51,284 mod_sftp/0.9.9[13017]: + Session server-to-client encryption: aes256-ctr
2020-04-21 21:03:51,284 mod_sftp/0.9.9[13017]: + Session client-to-server MAC: hmac-sha2-256
2020-04-21 21:03:51,284 mod_sftp/0.9.9[13017]: + Session server-to-client MAC: hmac-sha2-256
2020-04-21 21:03:51,285 mod_sftp/0.9.9[13017]: + Session client-to-server compression: none
2020-04-21 21:03:51,285 mod_sftp/0.9.9[13017]: + Session server-to-client compression: none
2020-04-21 21:03:51,957 mod_sftp/0.9.9[13017]: sending acceptable userauth methods: password,keyboard-interactive,hostbased,publickey
2020-04-21 21:03:52,302 mod_sftp/0.9.9[13017]: expecting USER_AUTH_INFO_RESP message, received SSH_MSG_IGNORE (2)
2020-04-21 21:03:52,322 mod_sftp_pam/0.3[13017]: PAM authentication error (7) for user 'test': Authentication failure
For FTP, I am authenticating successfully from a MySQL database. But the last line of the sftp.log file says that PAM authentication failed for my SFTP attempt. I am just trying to authenticate in the WinSCP client with a login and password that come from MySQL. Does that involve PAM authentication?
I think I am getting close!
Thanks in advance!
Here is the full /etc/proftpd/proftpd.conf that accomplishes my goals as stated above.
Note that I am also using mod_sql to provide for authentication via MySQL, so there are other configuration files referenced by this config file that are not listed in this posting.
# cat /etc/proftpd/proftpd.conf
# /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file.
# To really apply changes, reload proftpd after modifications, if
# it runs in daemon mode. It is not required in inetd/xinetd mode.
#
# Includes DSO modules
Include /etc/proftpd/modules.conf
# Set off to disable IPv6 support which is annoying on IPv4 only boxes.
UseIPv6 on
# If set on you can experience a longer connection delay in many cases.
IdentLookups off
ServerName "hostname"
# Set to inetd only if you would run proftpd by inetd/xinetd.
# Read README.Debian for more information on proper configuration.
ServerType standalone
DeferWelcome off
MultilineRFC2228 on
DefaultServer on
ShowSymlinks on
TimeoutNoTransfer 600
TimeoutStalled 600
TimeoutIdle 1200
DisplayLogin welcome.msg
DisplayChdir .message true
ListOptions "-l"
DenyFilter \*.*/
# Use this to jail all users in their homes
DefaultRoot ~
# This line will create the user directories of an FTP user if they successfully authenticate but do not have a user directory.
# See http://www.proftpd.org/docs/howto/CreateHome.html
# CreateHome off|on [<mode>] [skel <path>] [dirmode <mode>] [uid <uid>] [gid <gid>] [homegid <gid>] [NoRootPrivs]
CreateHome on dirmode 750
# Users require a valid shell listed in /etc/shells to login.
# Use this directive to release that constrain.
RequireValidShell off
# Port 21 is the standard FTP port.
Port 21
# In some cases you have to specify passive ports range to by-pass
# firewall limitations. Ephemeral ports can be used for that, but
# feel free to use a more narrow range.
# PassivePorts 49152 65534
# If your host was NATted, this option is useful in order to
# allow passive transfers to work. You have to use your public
# address and opening the passive ports used on your firewall as well.
# MasqueradeAddress 1.2.3.4
# This is useful for masquerading address with dynamic IPs:
# refresh any configured MasqueradeAddress directives every 8 hours
<IfModule mod_dynmasq.c>
# DynMasqRefresh 28800
</IfModule>
# To prevent DoS attacks, set the maximum number of child processes
# to 30. If you need to allow more than 30 concurrent connections
# at once, simply increase this value. Note that this ONLY works
# in standalone mode, in inetd mode you should use an inetd server
# that allows you to limit maximum number of processes per service
# (such as xinetd)
MaxInstances 50
# Set the user and group that the server normally runs at.
User proftpd
Group nogroup
# Umask 022 is a good standard umask to prevent new files and dirs
# (second parm) from being group and world writable.
Umask 022 022
# Normally, we want files to be overwriteable.
AllowOverwrite on
# Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords:
# PersistentPasswd off
# This is required to use both PAM-based authentication and local passwords
# AuthOrder mod_auth_pam.c* mod_auth_unix.c
# Be warned: use of this directive impacts CPU average load!
# Uncomment this if you like to see progress and transfer rate with ftpwho
# in downloads. That is not needed for uploads rates.
#
# UseSendFile off
TransferLog /var/log/proftpd/xferlog
SystemLog /var/log/proftpd/proftpd.log
# Logging onto /var/log/lastlog is enabled but set to off by default
#UseLastlog on
# In order to keep log file dates consistent after chroot, use timezone info
# from /etc/localtime. If this is not set, and proftpd is configured to
# chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight
# savings timezone regardless of whether DST is in effect.
#SetEnv TZ :/etc/localtime
<IfModule mod_quotatab.c>
QuotaEngine off
</IfModule>
<IfModule mod_ratio.c>
Ratios off
</IfModule>
# Delay engine reduces impact of the so-called Timing Attack described in
# http://www.securityfocus.com/bid/11430/discuss
# It is on by default.
<IfModule mod_delay.c>
DelayEngine on
</IfModule>
<IfModule mod_ctrls.c>
ControlsEngine off
ControlsMaxClients 2
ControlsLog /var/log/proftpd/controls.log
ControlsInterval 5
ControlsSocket /var/run/proftpd/proftpd.sock
</IfModule>
<IfModule mod_ctrls_admin.c>
AdminControlsEngine off
</IfModule>
#
# Alternative authentication frameworks
#
#Include /etc/proftpd/ldap.conf
Include /etc/proftpd/sql.conf
#
# This is used for FTPS connections
#
#Include /etc/proftpd/tls.conf
#
# Useful to keep VirtualHost/VirtualRoot directives separated
#
#Include /etc/proftpd/virtuals.conf
# A basic anonymous configuration, no upload directories.
# <Anonymous ~ftp>
# User ftp
# Group nogroup
# # We want clients to be able to login with "anonymous" as well as "ftp"
# UserAlias anonymous ftp
# # Cosmetic changes, all files belongs to ftp user
# DirFakeUser on ftp
# DirFakeGroup on ftp
#
# RequireValidShell off
#
# # Limit the maximum number of anonymous logins
# MaxClients 10
#
# # We want 'welcome.msg' displayed at login, and '.message' displayed
# # in each newly chdired directory.
# DisplayLogin welcome.msg
# DisplayChdir .message
#
# # Limit WRITE everywhere in the anonymous chroot
# <Directory *>
# <Limit WRITE>
# DenyAll
# </Limit>
# </Directory>
#
# # Uncomment this if you're brave.
# # <Directory incoming>
# # # Umask 022 is a good standard umask to prevent new files and dirs
# # # (second parm) from being group and world writable.
# # Umask 022 022
# # <Limit READ WRITE>
# # DenyAll
# # </Limit>
# # <Limit STOR>
# # AllowAll
# # </Limit>
# # </Directory>
#
# </Anonymous>
# Include other custom configuration files
Include /etc/proftpd/conf.d/
<IfModule mod_sftp.c>
<VirtualHost 0.0.0.0>
# The SFTP configuration
SFTPEngine on
Port 2222
SFTPAuthMethods password
RequireValidShell off
SFTPLog /var/log/proftpd/sftp.log
Include /etc/proftpd/sql.conf
# Configure both the RSA and DSA host keys, using the same host key
# files that OpenSSH uses.
SFTPHostKey /etc/ssh/ssh_host_rsa_key
SFTPHostKey /etc/ssh/ssh_host_dsa_key
#SFTPAuthorizedUserKeys file:/etc/proftpd/authorized_keys/%u
# Enable compression
SFTPCompression delayed
DefaultRoot ~
</VirtualHost>
</IfModule>
# Time stamp - IP Address - Protocol - User Name - UID - Filename - File Size - Response Time in Milliseconds - Transfer Time in Seconds - Transfer Status - Reason for failure if applicable
# http://www.proftpd.org/docs/modules/mod_log.html#LogFormat
LogFormat custom "%{iso8601} %a %{protocol} %u %{uid} %f %{file-size} %R %T %{transfer-status} %{transfer-failure}"
ExtendedLog /var/log/proftpd/custom.log READ,WRITE custom
The essence of the solution to make it listen on both FTP port 21 and SFTP port 2222 is to add a <VirtualHost 0.0.0.0> section inside <IfModule mod_sftp.c>:
...
Port 21
...
<IfModule mod_sftp.c>
<VirtualHost 0.0.0.0> # << *** this part makes it listen on both 21 above and 2222 below ***
...
Port 2222
...
</VirtualHost> # << closing tag
</IfModule>
(thanks to the original question author and his response; by eyeballing the two configs I was able to boil it down to this)
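A quick way to verify both listeners after a restart (a sketch; substitute your own host and user):
# restart and confirm both ports are listening
sudo systemctl restart proftpd
ss -tlnp | grep -E ':(21|2222)\b'
# plain FTP on 21, SFTP on 2222
ftp ftp.example.com
sftp -P 2222 someuser@ftp.example.com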

Kibana is not running on FreeBSD

I've been fighting with Kibana for a few days and I cannot get it to start on my FreeBSD server.
This is my environment:
FreeBSD 11.1-STABLE
ElasticSearch 5.3.0
Kibana 5.3.0
Logstash 5..
ElasticSearch and Logstash work fine, but I cannot get the kibana service to start.
These are the files related to Kibana:
kibana.yml file:
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are
# both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
server.basePath: "/qual/kibana"
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Specifies the path where Kibana creates the process ID file.
pid.file: /var/run/kibana.pid
# Enables you specify a file where Kibana stores log output.
logging.dest: /var/log/kibana.log
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
/usr/local/etc/rc.d/kibana:
#!/bin/sh
#
# $FreeBSD: head/textproc/kibana5/files/kibana.in 462830 2018-02-24 14:17:41Z feld $
#
# PROVIDE: kibana
# REQUIRE: DAEMON
# KEYWORD: shutdown
. /etc/rc.subr
name=kibana
rcvar=kibana_enable
load_rc_config $name
: ${kibana_enable:="NO"}
: ${kibana_config:="/usr/local/etc/kibana.yml"}
: ${kibana_user:="www"}
: ${kibana_group:="www"}
: ${kibana_log:="/var/log/kibana.log"}
required_files="${kibana_config}"
pidfile="/var/run/${name}/${name}.pid"
start_precmd="kibana_precmd"
procname="/usr/local/bin/node"
command="/usr/sbin/daemon"
command_args="-f -p ${pidfile} env BABEL_DISABLE_CACHE=1 ${procname} /usr/local/www/kibana5/src/cli serve --config ${kibana_config} --log-file ${kibana_log}"
kibana_precmd()
{
if [ ! -d $(dirname ${pidfile}) ]; then
install -d -o ${kibana_user} -g ${kibana_group} $(dirname ${pidfile})
fi
if [ ! -f ${kibana_log} ]; then
install -o ${kibana_user} -g ${kibana_group} -m 640 /dev/null ${kibana_log}
fi
if [ ! -d /usr/local/www/kibana5/optimize ]; then
install -d -o ${kibana_user} -g ${kibana_group} /usr/local/www/kibana5/optimize
fi
}
run_rc_command "$1"
/etc/rc.conf:
kibana_enable="YES"
But when I execute: service kibana start
I get:
root@server:/var/log # service kibana start
Starting kibana.
root@server:/var/log # service kibana status
kibana is not running.
I don't know why.
Start the service in debug mode:
sh -x /usr/local/etc/rc.d/kibana start
and find which command is used to start the kibana service. For kibana, the command should be something like /usr/local/bin/node /usr/local/www/kibana6/src/cli serve --config /usr/local/etc/kibana/kibana.yml
Start that process in the foreground.
It is possible that node is not properly installed, or that there is some permission issue.
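A minimal sketch of that foreground check, using the paths from the rc script above (running it as root is only for diagnosis; the service itself runs as the www user):
# run the same command the rc script would run, but in the foreground
env BABEL_DISABLE_CACHE=1 /usr/local/bin/node /usr/local/www/kibana5/src/cli serve --config /usr/local/etc/kibana.yml
# in another terminal, watch the log file the rc script points at
tail -f /var/log/kibana.log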

Trying to set up Catch-All Email Address with Sendmail

I'm trying to create a catchall email address with Sendmail (it will be used to catch email bounces for Oceth's OEMPro).
First I started by creating a new user:
# useradd -s /bin/false bounces
# passwd bounces
Then I created & opened a virtusertable file with vim virtusertable and added:
bounces@sub.example.com bounces
@sub.example.com bounces@sub.example.com
Then I added the below line to sendmail.mc, near the end but before the MAILER_DEFINITIONS section:
FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
Finally, I ran
# make
Updating databases ...
Reading configuration from /etc/mail/sendmail.conf.
Validating configuration.
Creating /etc/mail/databases...
Updating auth ...
sasl2-bin not installed, not configuring sendmail support.
To enable sendmail SASL2 support at a later date, invoke "/usr/share/sendmail/update_auth"
Creating /etc/mail/relay-domains
# Optional file...
Updating Makefile ...
Reading configuration from /etc/mail/sendmail.conf.
Validating configuration.
Creating /etc/mail/Makefile...
Updating sendmail.cf ...
The following file(s) have changed:
/etc/mail/sendmail.cf
** ** You should issue `/etc/init.d/sendmail reload` ** **
# service sendmail reload
* Reloading Mail Transport Agent (MTA) sendmail [ OK ]
# service sendmail restart
* Restarting Mail Transport Agent (MTA) sendmail [ OK ]
After all this it does not seem to be working; how can I test this properly? I've tried sending an email to bounces@sub.example.com, but when I look in /var/mail/ I don't see the bounces user.
# ls /var/mail/
root www-data other-user
I created an MX DNS record for this too, e.g. sub.example.com.
The other indication it is not working correctly is that we are getting a 504 error when we try to use this email address as our POP3 Monitoring method in Oceth's OEMPro.
UPDATE
I tried running the below commands as root in an attempt to debug the issue, but I'm not clear on what they are telling me.
root:/# sendmail -d60.5 -bv no-such-user@sub.example.com
map_lookup(dequote, other-user, %0=other-user) => NOT FOUND (0)
map_lookup(host, sub.example.com, %0=sub.example.com) => sub.example.com. (0)
no-such-user@sub.example.com... deliverable: mailer esmtp, host sub.example.com., user no-such-user@sub.example.com
root:/# sendmail -d60.5 -bv bounces@sub.example.com
map_lookup(dequote, other-user, %0=other-user) => NOT FOUND (0)
map_lookup(host, sub.example.com, %0=sub.example.com) => sub.example.com. (0)
bounces@sub.example.com... deliverable: mailer esmtp, host sub.example.com., user bounces@sub.example.com
I'm not sure why it first tries to look up another user on our system called other-user
UPDATE 2
After running # echo '$=w' | sendmail -bt I get the following result.
# echo '$=w' | sendmail -bt
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> localhost
ip-1??-??-??-??5
[1??.??.??.??5]
ip-1??-??-??-??5.ec2.internal
[127.0.0.1]
ip-172-31-31-167.eu-west-1.compute.internal
In sendmail.mc I've changed FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl to FEATURE(`virtusertable', `hash /etc/mail/virtusertable.db')dnl, i.e. I just removed the -o flag.
Then I updated /etc/mail/local-host-names to include sub.example.com, so now it reads:
localhost
ip-17?-??-??-?67.eu-west-1.compute.internal
sub.example.com
Then I ran:
# service sendmail restart
* Restarting Mail Transport Agent (MTA) sendmail
# echo '$=w' | sendmail -bt
ADDRESS TEST MODE (ruleset 3 NOT automatically invoked)
Enter <ruleset> <address>
> localhost
ip-1??-??-??-??5
[1??.??.??.??5]
ip-1??-??-??-??5.ec2.internal
[127.0.0.1]
sub.example.com
ip-17?-??-??-?67.eu-west-1.compute.internal
After sending an email to bounces@sub.example.com I still don't see the mailbox in /var/mail/
# ls /var/mail/
root www-data other-user
I also still get the 504 error in the OEMPro app when I try to configure it with these settings.
Sendmail consults virtusertable only for deliveries to local email domains (listed in $=w) and virtual domains (listed in $={VirtHost}). It seems that sub.example.com is not listed in either of them.
You can add sub.example.com to the list of local email domains by listing it in the file /etc/mail/local-host-names (one domain/name per line). After modifying the file, restart the sendmail daemon or send it a HUP signal.
You can check the content of $=w by executing the following command as root:
echo '$=w' | sendmail -bt
Sendmail by default automagically adds some "guesswork" to $=w.
Extra hint:
Do not use the -o (optional) flag in FEATURE(virtusertable). Without the flag, sendmail refuses to start when the compiled version of virtusertable is unavailable.
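For completeness, a sketch of the usual rebuild-and-verify cycle after editing these files (paths are the Debian/Ubuntu defaults assumed from the question):
# add the domain to local host names and rebuild the virtual user table
echo 'sub.example.com' >> /etc/mail/local-host-names
makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
# regenerate sendmail.cf from sendmail.mc and restart
cd /etc/mail && make && service sendmail restart
# confirm the catch-all now resolves to the local bounces user
sendmail -bv somebody@sub.example.com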