How can I change settings in pg_hba.conf and postgresql.conf, either from the command line or programmatically (especially from fabric or fabtools)?
I already found set_config, but that does not seem to work for parameters that require a server restart. The parameters to change are listen_addresses in postgresql.conf and a new line in pg_hba.conf, so that connections from our sub-network will be accepted.
This is needed to write deployment scripts using fabric. It is not an option to copy template files that overwrite the existing *.conf files, because the database server might be shared with other applications that bring their own configuration parameters. Thus, the existing configuration must be altered, not replaced.
Here is the currently working solution, incorporating the hint from a_horse_with_no_name. It is a snippet from our fabfile.py (it uses require from fabtools and runs against Ubuntu):
import tempfile

from fabric.api import env, get, put, sudo
from fabtools import require

db_name = env.variables['DB_NAME']
db_user = env.variables['DB_USER']
db_pass = env.variables['DB_PASSWORD']

# Require a PostgreSQL server.
require.postgres.server(version="9.4")
require.postgres.user(db_user, db_pass)
require.postgres.database(db_name, db_user)

# Listen on all addresses - use the firewall to block unwanted access.
sudo(''' psql -c "ALTER SYSTEM SET listen_addresses='*';" ''', user='postgres')

# Download the remote pg_hba.conf to a temporary file.
tmp = tempfile.NamedTemporaryFile()
with open(tmp.name, "wb") as f:
    get("/etc/postgresql/9.4/main/pg_hba.conf", f, use_sudo=True)

# Define the necessary line in pg_hba.conf.
hba_line = "host all all {DB_ACCEPT_IP}/0 md5".format(**env.variables)

# Search for hba_line in the existing pg_hba.conf.
with open(tmp.name, "r") as f:
    for line in f:
        if hba_line in line:
            found = True
            break
    else:
        found = False

# If it does not exist, append it (with a trailing newline) and upload
# the modified pg_hba.conf to the remote machine.
if not found:
    with open(tmp.name, "a") as f:
        f.write(hba_line + "\n")
    put(tmp.name, "/etc/postgresql/9.4/main/pg_hba.conf", use_sudo=True)

# Restart the postgresql service so the changes take effect.
sudo("service postgresql restart")
The aspect I don't like about this solution is that if I change DB_ACCEPT_IP, it will just append a new line and not remove the old one. I am sure a cleaner solution is possible.
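One way to avoid that accumulation, sketched below, is to let Fabric's own fabric.contrib.files helpers edit the file idempotently: sed rewrites any previously written host line in place, and append only adds the line when it is missing. This is a rough sketch against the Fabric 1.x contrib API, reusing env.variables and the paths from the snippet above; treat it as an idea, not a tested drop-in:

from fabric.contrib.files import append, sed

HBA_CONF = "/etc/postgresql/9.4/main/pg_hba.conf"
hba_line = "host all all {DB_ACCEPT_IP}/0 md5".format(**env.variables)

# Rewrite any previously appended "host all all ..." entry in place, so a
# changed DB_ACCEPT_IP replaces the old line instead of piling up.
sed(HBA_CONF, before=r"^host all all .* md5$", after=hba_line, use_sudo=True)

# If no such entry existed at all, append it; append() is a no-op when the
# exact line is already present.
append(HBA_CONF, hba_line, use_sudo=True)

# Restart so the change takes effect.
sudo("service postgresql restart")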
Related
I'm trying to switch from using the local network to a UNIX socket with mpd. To do so, I have this config file:
# Recommended location for database
db_file "~/.config/mpd/database"
# If running mpd using systemd, delete this line to log directly to systemd.
#log_file "syslog"
# The music directory is by default the XDG directory, uncomment to amend and choose a different directory
music_directory "~/Music"
# Uncomment to refresh the database whenever files in the music_directory are changed
auto_update "yes"
auto_update_depth "5"
# Uncomment to enable the functionalities
playlist_directory "~/.config/mpd/playlists"
pid_file "~/.config/mpd/pid"
state_file "~/.config/mpd/state"
#sticker_file "~/.config/mpd/sticker.sql"
bind_to_address "~/.config/mpd/socket"
restore_paused "yes"
audio_output {
    type "pipewire"
    name "PipeWire Sound Server"
}
I created a socket file at ~/.config/mpd/socket.
I also export MPD_HOST=~/.config/mpd/socket so that it becomes the default host. Nevertheless, if I run the command
mpc play, I get the error MPD error: Failed to resolve host name.
But if I run MPD_HOST=~/.config/mpd/socket mpc play, it works. What am I missing?
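For reference, here is a small diagnostic I can run to check whether MPD_HOST is actually visible to the shell that runs mpc (a Python sketch, not part of mpd or mpc):

import os

# Hypothetical diagnostic: is MPD_HOST set in this environment, and does
# the socket path it points to exist?
host = os.environ.get("MPD_HOST")
if host is None:
    print("MPD_HOST is not set in this environment")
else:
    path = os.path.expanduser(host)
    print("MPD_HOST={!r} -> {!r}".format(host, path))
    print("socket exists:", os.path.exists(path))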
I have log files on my server as follows:
vpn_20191007.log
vpn_20191008.log
vpn_20191009.log
vpn_20191010.log
vpn_20191011.log
vpn_20191012.log
vpn_20191013.log
vpn_20191014.log
vpn_20191015.log
vpn_20191016.log
Is it possible to add a log file pattern in the fail2ban jail config?
[application]
enabled = false
filter = example
action = iptables
logpath = /var/log/vpn_%D.log
maxretry = 1
Well, conditionally it is possible...
Wildcards are basically allowed at the moment, so:
logpath = /var/log/vpn_*.log
will do the job, but it is a bit ugly in your case:
fail2ban builds the list of matching files only at service start, so the list stays as it was obtained then (unless the service gets reloaded) - this means you would have to notify fail2ban whenever the log file name changes (see https://github.com/fail2ban/fail2ban/issues/1379; the work is in progress).
since only one file will get new messages, monitoring the other files is unneeded, especially if a polling backend is used.
So it is better to create logrotate rules for that:
rename/compress all previous log files (to avoid matching obsolete files);
either create a hard link or symlink to the last/active file with a fixed name (so fail2ban can always find it under the same name, and you would not need a wildcard at all);
or notify fail2ban to reload the jail when the logfile name changes (fail2ban-client reload vpn).
Here is an example for logrotate amendment:
postrotate
    nfn="/var/log/vpn_$(date +%Y%m%d).log"
    touch "$nfn"
    ln -fs "$nfn" /var/log/vpn.log
endscript
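The same post-rotate bookkeeping could also be driven from a small script; here is a rough Python sketch of the idea (it assumes the jail is named vpn and that fail2ban-client is on PATH, per the reload command mentioned above):

import datetime
import os
import subprocess

# Touch today's log file and point a fixed-name symlink at it, so the
# jail can keep watching /var/log/vpn.log.
log = "/var/log/vpn_{}.log".format(datetime.date.today().strftime("%Y%m%d"))
link = "/var/log/vpn.log"
open(log, "a").close()
tmp_link = link + ".tmp"
if os.path.lexists(tmp_link):
    os.remove(tmp_link)
os.symlink(log, tmp_link)
os.replace(tmp_link, link)   # replace the old symlink in one rename

# Alternatively/additionally, tell fail2ban to re-read the jail.
subprocess.run(["fail2ban-client", "reload", "vpn"], check=True)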
You can add a wildcard:
logpath = /var/log/vpn_*.log
and/or you can use multiple lines:
logpath = /var/log/vpn_20191007.log
          /var/log/vpn_20191008.log
          /var/log/vpn_20191009.log
          /var/log/vpn_20191010.log
          /var/log/vpn_20191011.log
          /var/log/vpn_20191012.log
          /var/log/vpn_20191013.log
          /var/log/vpn_20191014.log
          /var/log/vpn_20191015.log
          /var/log/vpn_20191016.log
(You can combine the two.)
I added users to the mgmt-users.properties file. The users are added to the file, but when I run localhost it says it's running; then, if I try to view the admin console, it redirects to http://localhost:9990/error/index_win.html. That tells me the server is running, but I cannot open the admin console.
#
# Properties declaration of users for the realm 'ManagementRealm' which is the default realm
# for new installations. Further authentication mechanism can be configured
# as part of the <management /> in standalone.xml.
#
# Users can be added to this properties file at any time, updates after the server has started
# will be automatically detected.
#
# By default the properties realm expects the entries to be in the format: -
# username=HEX( MD5( username ':' realm ':' password))
#
# A utility script is provided which can be executed from the bin folder to add the users: -
# - Linux
# bin/add-user.sh
#
# - Windows
# bin\add-user.bat
#
#$REALM_NAME=ManagementRealm$ This line is used by the add-user utility to identify the realm name already used in this file.
#
# On start-up the server will also automatically add a user $local - this user is specifically
# for local tools running against this AS installation.
#
# The following illustrates how an admin user could be defined, this
# is for illustration only and does not correspond to a usable password.
#
#admin=2a0923285184943425d1f53ddd58ec7a
tejaswini=25ab658c2861b2e64783aaa9ba95c2e5
aswini#19=388ced81791ddb1760b83dc4ec8b7a61
saisana=ff39d778414ab12d84fc4fa7fdacb634
alekya=d72e9c90345ce4d9290c3a2728b3cd60
prasad=c6c7c67cf343f6862d3b77bae9f61d17
teju=28b9e55b314fd60855a7843b4455dbed
(Screenshot of the added users.)
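As an aside, the entry format described in the file's comments above, username=HEX( MD5( username ':' realm ':' password ) ), can be reproduced with a few lines of Python. This is only a sketch to illustrate the format; the add-user utility remains the supported way to manage these entries:

import hashlib

def mgmt_hash(username, password, realm="ManagementRealm"):
    # HEX( MD5( username ':' realm ':' password ) ), as described above.
    data = "{}:{}:{}".format(username, realm, password).encode("utf-8")
    return hashlib.md5(data).hexdigest()

# Hypothetical credentials, for illustration only:
print("{}={}".format("admin", mgmt_hash("admin", "secret")))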
Maybe you created an application user rather than a management user when you ran the add-user utility. Please refer to the link below for the steps to register a user:
https://bgasparotto.com/add-user-wildfly
I am trying to read a file in the Spark Shell that comes with the CentOS distribution of Cloudera on my local machine. The following are the commands I have entered in the Spark Shell.
spark-shell
val fileData = sc.textFile("hdfs://user/home/cloudera/cm_api.py");
fileData.count
I also tried this statement for reading the file:
val fileData = sc.textFile("user/home/cloudera/cm_api.py");
However, I am getting:
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://quickstart.cloudera:8020/user/cloudera/user/cloudera/cm_api.py
I haven't changed any settings or configurations. What am I doing wrong?
You are missing the leading slash in your URL, so the path is relative. To make it absolute, use
val fileData = sc.textFile("hdfs:///user/home/cloudera/cm_api.py")
or
val fileData = sc.textFile("/user/home/cloudera/cm_api.py")
I think you need to put the file into HDFS first (hadoop fs -put), then check that it is there (hadoop fs -ls), then start spark-shell and run val fileData = sc.textFile("cm_api.py").
In "hdfs://user/home/cloudera/cm_api.py", you are missing the hostname of the URI. You should have pass something like "hdfs://<host>:<port>/user/home/cloudera/cm_api.py", where <host> is Hadoop NameNode host and the <port> is, well, port number of Hadoop NameNode, which is 50070 by default.
The error message says hdfs://quickstart.cloudera:8020/user/cloudera/user/cloudera/cm_api.py does not exist. The path looks suspicious! The file you mean is probably at hdfs://quickstart.cloudera:8020/user/cloudera/cm_api.py.
If it is, you can access it using that full path. Or, if the default file system is configured as hdfs://quickstart.cloudera:8020/user/cloudera/, you can simply use cm_api.py.
You may be confused between HDFS file paths and local file paths. By specifying
hdfs://quickstart.cloudera:8020/user/home/cloudera/cm_api.py
you are saying two things:
1) there is a computer named "quickstart.cloudera" reachable via the network (try ping to ensure that is the case), and it is running HDFS.
2) the HDFS file system contains a file at /user/home/cloudera/cm_api.py (try hdfs dfs -ls /user/home/cloudera/ to verify this).
If you are trying to access a file on the local file system you have to use a different URI:
file:///user/home/cloudera/cm_api.py
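For reference, here is the same distinction sketched in PySpark (assuming the quickstart VM defaults, where HDFS is the default file system; the local path below is a guess at where the file might live on the VM and may need adjusting):

from pyspark import SparkContext

# Minimal sketch: the same file name, resolved two different ways.
sc = SparkContext(appName="path-demo")

# Leading slash only: an absolute path on the *default* file system
# (HDFS on the quickstart VM).
on_hdfs = sc.textFile("/user/cloudera/cm_api.py")

# Explicit file:// scheme: read from the local file system instead
# (hypothetical local location).
on_local = sc.textFile("file:///home/cloudera/cm_api.py")

print(on_hdfs.count())
print(on_local.count())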
I want to copy a directory from one host to another host using SCP.
I tried the following syntax:
my $src_path="/abc/xyz/123/";
my $BASE_PATH="/a/b/c/d/";
my $scpe = Net::SCP::Expect->new(host=> $host, user=>$username, password=>$password);
$scpe->scp -r($host.":".$src_path, $dst_path);
I am getting an error like "no such file or directory". Can you help in this regard?
According to the example given in the man page, you don't need to repeat the host in the call if you already passed it as an option.
From http://search.cpan.org/~djberg/Net-SCP-Expect-0.12/Expect.pm:
Example 2 - uses constructor, shorthand scp:
my $scpe = Net::SCP::Expect->new(host=>'host', user=>'user', password=>'xxxx');
$scpe->scp('file','/some/dir'); # 'file' copied to 'host' at '/some/dir'
Besides, is this "-r" a typo? If you want to copy recursively, you need to set recursive => "yes" in the options hash.