Why do "hostname" and "hostname -f" return each other's value? - centos

I'm using a CentOS 8 VM to learn Puppet. At first, I only set a short hostname for my VM: puppet-mst. After some failures, someone told me Puppet needs an FQDN to proceed. So I set a long hostname for my VM:
hostnamectl set-hostname puppet-mst.eisen
Then I found something weird:
[root@puppet-mst yum.repos.d]# hostname -f
puppet-mst
[root@puppet-mst yum.repos.d]# hostname
puppet-mst.eisen
"hostname" and "hostname -f" just return each other's values-- "hostname" return the long name while "hostname -f" returns the short one...
So now I can't install Foreman on this CentOS VM, as it returns this error:
[root@puppet-mst yum.repos.d]# foreman-installer -i
2021-11-01 00:22:04 [NOTICE] [root] Loading installer configuration. This will take some time.
2021-11-01 00:22:08 [NOTICE] [root] Running installer with log based terminal output at level NOTICE.
2021-11-01 00:22:08 [NOTICE] [root] Use -l to set the terminal output log level to ERROR, WARN, NOTICE, INFO, or DEBUG. See --full-help for definitions.
Output of 'facter fqdn' is different from 'hostname -f'
Please kindly help: how do I set the hostname so that both "hostname" and "hostname -f" return the correct value, and especially so that "hostname -f" returns the long domain name? Thanks in advance.

After some research, I found it's due to /etc/hosts.
When it's:
192.168.160.131 puppet-mst puppet-mst.eisen
"hostname -f" returns puppet-mst,
and when it's:
192.168.160.131 puppet-mst.eisen puppet-mst
"hostname -f" returns puppet-mst.eisen.

Related

SNMPD to configure as Proxy agent

I am configuring SNMPD to act as a proxy agent between an SNMP manager (the snmpb browser) and network devices on CentOS, following https://net-snmp.sourceforge.io/wiki/index.php/Snmpd_proxy. After adding <proxy -Cn testname -v 3 -u testUser -a MD5 -A "PasswordA" -x DES -X "PasswordX" -l authPriv ipaddress .1.3> to the snmpd.conf file, I get the error below on that line.
Error: failed to parse proxy args.
Kindly help me to resolve this.

How to get the base package install location on Linux?

I am on CentOS Linux. I understand that using "rpm -qa" gives a lot of install paths for the corresponding package. However, I need just the base package install location for the package. Is there any way/command/option in Linux to retrieve this? My code snippet to retrieve the list of running services and the corresponding installed package is below:
for i in $(service --status-all | grep -v "not running" | grep -E running\|stopped | awk '{print $1}');
do
    packagename=$(rpm -qf /etc/init.d/$i)
    servicestatus=$(service --status-all | grep $i | awk '{print $NF}' | sed 's/...//g' | sed 's/.//g');
    echo $tdydate, $(ip route get 8.8.8.8 | awk 'NR==1 {print $NF}'), $i, $packagename, $servicestatus >> "$HOME/MyLog/running_services.csv"
done
Now I also need to get the install location of the package hosting each running service. Is there a way to retrieve this along with the package names? Please confirm.
Thanks in advance for your help.
Regards.
Okay, with your answer to my question in the comments, which is much clearer to me than your initial question...
Hi, basically what I need is: I get a list of all installed services on my CentOS machine using service --status-all. Now, for each service, I need to know the corresponding application package location on Linux.
...I'll propose this (tested here on CentOS 6.6):
#!/bin/bash
for i in $(chkconfig --list | awk '{ print $1 }'); do
    service "$i" status >/dev/null 2>&1
    if [ $? -eq 0 ]; then
        rpm -qf "/etc/init.d/$i"
    fi
done | sort | uniq
That spits out all rpm names of the services which are currently running.
A bit more detail as to why your current approach is not going to work:
service --status-all is not going to return information which can be parsed reliably. For example, the output on a VM here:
acpid (pid 872) is running...
auditd (pid 789) is running...
Stopped
cgred is stopped
Checking for service cloud-init:Checking for service cloud-init:Checking for service cloud-init:Checking for service cloud-init:crond (pid 1088) is running...
ip6tables: Firewall is not running.
iptables: Firewall is not running.
Kdump is not operational
mdmonitor is stopped
netconsole module not loaded
Configured devices:
lo eth0
Currently active devices:
lo eth0
ntpd (pid 997) is running...
master (pid 1076) is running...
rdisc is stopped
restorecond is stopped
rsyslogd (pid 809) is running...
sandbox is stopped
saslauthd is stopped
openssh-daemon (pid 988) is running...
Some services don't even return their name (third line). Some say stopped, others not running. If you parse the first column of chkconfig --list you know all the service names, which correspond to files in /etc/init.d. Then you can query their status individually and read the return code ($?), which is 0 for running services (or generally for success in the Unix/Linux world), 1 or higher for not running or not installed or incomplete/malfunctioning services.
Armed with names in /etc/init.d/ you can then query the owning package with rpm -qf /etc/init.d/<servicename> and get exactly what I think you were looking for.
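For a single service, the same check can be written as a quick sketch (crond is taken from the sample output above as an arbitrary example):
# exit status 0 (service reports itself as running) gates the rpm query
service crond status >/dev/null 2>&1 && rpm -qf /etc/init.d/crond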
Edit: added | sort | uniq after the loop, because some packages contain multiple services, like for example cloud-init, which creates four different services on CentOS. So you sort the list, then make sure you only get distinct (uniq) names back.
Works for me:
acpid-1.0.10-2.1.el6.x86_64
audit-2.3.7-5.el6.x86_64
cloud-init-0.7.5-10.el6.centos.2.x86_64
cronie-1.4.4-12.el6.x86_64
cyrus-sasl-2.1.23-15.el6_6.1.x86_64
initscripts-9.03.46-1.el6.centos.1.x86_64
iptables-1.4.7-14.el6.x86_64
iptables-ipv6-1.4.7-14.el6.x86_64
iputils-20071127-17.el6_4.2.x86_64
kexec-tools-2.0.0-280.el6.x86_64
libcgroup-0.40.rc1-15.el6_6.x86_64
mdadm-3.3-6.el6.x86_64
ntp-4.2.6p5-1.el6.centos.x86_64
ntpdate-4.2.6p5-1.el6.centos.x86_64
openssh-server-5.3p1-104.el6_6.1.x86_64
policycoreutils-2.0.83-19.47.el6_6.1.x86_64
postfix-2.6.6-6.el6_5.x86_64
rsyslog-5.8.10-9.el6_6.x86_64
udev-147-2.57.el6.x86_64
You are looking for --whatprovides instead of -qf (which is easy to confuse with --qf, short for --queryformat).
Tweaking your example...
for i in $(chkconfig --list | awk '{ print $1}'); do service $i status >/dev/null 2>&1; if [ $? -eq 0 ]; then echo -n "$i: "; rpm -q --whatprovides /etc/init.d/$i; fi; done | sort
FYI - this doesn't work on more modern systemd-based systems (CentOS 7).
Example on my Fedora 21 box:
Note: This output shows SysV services only and does not include native
systemd services. SysV configuration data might be overridden by native
systemd configuration.
If you want to list systemd services use 'systemctl list-unit-files'.
To see services enabled on particular target use
'systemctl list-dependencies [target]'.
netconsole: initscripts-9.56.1-5.fc21.x86_64
network: initscripts-9.56.1-5.fc21.x86_64
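On systemd-based systems, a rough equivalent, sketched here and not verified on every release, is to walk the running service units and ask rpm which package owns each unit file:
#!/bin/bash
# For every running .service unit, look up its unit file (FragmentPath)
# and query the owning package.
for unit in $(systemctl list-units --type=service --state=running --no-legend | awk '{print $1}'); do
    unitfile=$(systemctl show -p FragmentPath "$unit" | cut -d= -f2)
    [ -n "$unitfile" ] && rpm -qf "$unitfile"
done | sort -u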

Unable to run "gearman" command line tool with gearman 1.1.6

I am trying to run the example on "http://gearman.org/getting_started" on Ubuntu in VirtualBox environment.
At first I tried to download an old version 0.16 by using apt-get install gearman-job-server, apt-get install gearman-tools and everything worked well. The server ran in the background, I was able to create 2 workers and verify that I can call them by creating a client.
I decided to download and compile the latest version, 1.1.6. Now, I am trying to do the same thing with the new version and I am having errors.
I run the server as admin:
sudo gearmand
The statement
gearadmin --getpid
seems to work - it returns me the process ID of the server. Thus, the server is running, and this answer is not relevant.
Now, I am adding a worker:
gearman -w -f wc -- wc -l
It seems to run.
Nevertheless,
gearadmin --workers
results in something that probably represents an empty list:
33 127.0.0.1 - :
.
(In version 0.16, I was able to see 2 lines, the second showing the registered function name.)
Attempting to run the client
gearman -f wc < /etc/passwd
results in
gearman: gearman_client_run_tasks : flush(GEARMAN_COULD_NOT_CONNECT) localhost:0 -> libgearman/connection.cc:671"
This might be the very same problem described here: the port is not specified, but I have no idea how to specify it through the command-line tool.
Any idea?
OK, it looks like the answer here was the key to success. Probably the "getting started" section has not been updated for a while. Indeed, one must specify a port explicitly for gearmand and gearman.
Server:
sudo gearmand -p 5000
Worker:
gearman -p 5000 -w -f wc -- wc -l
Client:
gearman -p 5000 -f wc < /etc/passwd
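With everything on the same explicit port, the worker should show up in the admin listing again (this assumes your gearadmin build accepts a --port option; check gearadmin --help):
gearadmin --port 5000 --workers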

How to suppress FATAL output from ssh in Perl using the "system" call?

I'm trying to write a simple script that uses ssh, but I want to test that ssh is working first; if it doesn't work, I want to know why.
Here is what I'm trying:
my $output = system("ssh -q -o ConnectionTimeout=3 -o BatchMode=yes $host \"echo $host\"");
my $real_output = $output/256;
if($real_output == 2){print "RESPONSE: $host is not working, No address associated to the name \n";}
elsif($real_output == 255){print "RESPONSE: $host is not working (connection timed out after 3 secs, it exists but does not respond) \n";}
This works: it collects the error and tests the connection, but when I run it, it shows the following when the name of the host does not exist:
./testMachines --n invalid_host
warning: Connecting to invalid_host failed: No address associated to the name
RESPONSE: invalid_host is not working, No address associated to the name
or when the host name does exist but it's timing out:
./testMachines --n timeout_host
ssh: FATAL: Connection timed out. No protocol greeting received in 10 seconds.
RESPONSE: timeout_host is not working (connection timed out after 3 secs, it exists but does not respond)
How do I suppress the "ssh: FATAL" and "warning" messages? I was under the impression that the '-q' option for ssh will do the trick, but it didnt.
I've also tried the '-q -q' option on ssh (as suggested on the man pages) but no luck.
I've also tried this:
my $output = system("ssh -q -o ConnectionTimeout=3 -o BatchMode=yes $host \"echo 2>&1\"");
without any luck... any ideas?
Basically what I want is the output to be like this (without the FATAL and warning ssh messages):
# ./testMachines -n host
RESPONSE: host is not working (connection timed out after 3 secs, it exists but does not respond)
Thanks a lot!!!
Dan
Not sure how you got the idea to add all that escaping, but this should work:
my $output = system("ssh -q -o ConnectionTimeout=3 -o BatchMode=yes $host \"echo $host\" 2>/dev/null");

How can I tail a remote file?

I am trying to find a good way to tail a file on a remote host. This is on an internal network of Linux machines. The requirements are:
Must be well behaved (no extra process laying around, or continuing output)
Cannot require someone's pet Perl module.
Can be invoked through Perl.
If possible, doesn't require a custom-built script or utility on the remote machine (regular Linux utilities are fine)
The solutions I have tried are generally of this sort
ssh remotemachine -f <some command>
"some command" has been:
tail -f logfile
Basic tail doesn't work because the remote process continues to write output to the terminal after the local ssh process dies.
$socket = IO::Socket::INET->new(...);
$pid = fork();
if(!$pid)
{
    exec("ssh $host -f '<script which connects to socket and writes>'");
    exit;
}
$client = $socket->accept;
while(<$client>)
{
    print $_;
}
This works better because there is no output to the screen after the local process exits, but the remote process doesn't figure out that its socket is down, so it lives on indefinitely.
Have you tried
ssh -t remotemachine <some command>
-t option from the ssh man page:
-t Force pseudo-tty allocation. This can be used to execute
arbitrary screen-based programs on a remote machine, which
can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
instead of
-f Requests ssh to go to background just before command execution.
This is useful if ssh is going to ask for passwords or passphrases,
but the user wants it in the background.
This implies -n. The recommended way to start X11 programs at a remote
site is with something like ssh -f host xterm.
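For example, a minimal sketch (host name and log path are placeholders): because a pseudo-tty is allocated, interrupting the local ssh also terminates the remote tail.
# Ctrl-C locally ends both the ssh session and the remote tail -f
ssh -t remotemachine tail -f /var/log/messages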
Some ideas:
You could mount it over NFS or CIFS, and then use File::Tail.
You could use one of Perl's SSH modules (there are a number of them), combined with tail -f.
You could try Survlog. It's OS X only, though.
netcat should do it for you.
You can tail files remotely using bash and rsync. The following script is taken from this tutorial: Tail files remotely using bash and rsync
#!/bin/bash
#Code Snippet from and copyright by sshadmincontrol.com
#You may use this code freely as long as you keep this notice.
PIDHOME=/a_place/to/store/flag/file
FILE=`echo ${0} | sed 's:.*/::'`
RUNFILEFLAG=${PIDHOME}/${FILE}.running

if [ -e $RUNFILEFLAG ]; then
    echo "Already running ${RUNFILEFLAG}"
    exit 1
else
    touch ${RUNFILEFLAG}
fi

hostname=$1   # host name to remotely access
log_dir=$2    # log directory on the remote host
log_file=$3   # remote log file name
username=$4   # username to use to access the remote host
log_base=$5   # where to save the log locally

ORIGLOG="$log_base/$hostname/${log_file}.orig"
INTERLOG="$log_base/$hostname/${log_file}.inter"
FINALLOG="$log_base/$hostname/${log_file}.log"

rsync -q -e ssh $username@$hostname:$log_dir/$log_file ${ORIGLOG}
grep -Ev ".ico|.jpg|.gif|.png|.css" ${ORIGLOG} > ${INTERLOG}

if [ ! -e $FINALLOG ]; then
    cp ${INTERLOG} ${FINALLOG}
else
    LINE=`tail -1 ${FINALLOG}`
    grep -F "$LINE" -A 999999999 ${INTERLOG} \
        | grep -Fv "$LINE" >> ${FINALLOG}
fi

rm ${RUNFILEFLAG}
exit 0
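A hypothetical invocation, matching the positional parameters above (remote host, remote log directory, remote log file, ssh user, local base directory); the script name and arguments here are made up, and the script is typically run from cron:
./remote_tail.sh web01 /var/log/httpd access_log loguser /var/log/remote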
rsync://[USER@]HOST[:PORT]/SRC... [DEST] | tail [DEST] ?
Someone suggested using nc (netcat). This solution does work but is less ideal than just using ssh -t. The biggest problem is that you have to use nc on both sides of the connection and need to do some port discovery on the local machine to find a suitable port over which to connect. Here is the adaptation of the above code to use netcat:
$pid = fork();
if(!$pid)
{
    exec("ssh $host -f 'tail -f $filename | nc $localhost $port'");
    exit;
}
exec("nc -l -p $port");
There is File::Tail. Don't know if it helps?