How do I extract device info and mountpoints from fstab using Perl?

I'm new to Perl and I really need help with a specific issue.
I need to extract info from my fstab, but there's a lot of information in there and I only want the information about the devices and their mount points.
The closest I got to finding an answer was:
http://www.freebsd.org/doc/en/articles/vinum/perl.html
But since I'm new to Perl, I have a hard time tweaking the code to fit my problem.
This is my fstab, but I only want the three "/dev" lines, including their mount points. Is there a smart way to do this?
/dev/disk/by-id/usb-ST925041_0AS_FF9250410A0000000000005FF87FF7-part2 / ext3 noatime,nodiratime,acl,user_xattr 1 1
/dev/disk/by-id/usb-ST925041_0AS_FF9250410A0000000000005FF87FF7-part3 /var/log ext3 noatime,nodiratime,acl,user_xattr 1 2
/dev/disk/by-id/usb-ST925041_0AS_FF9250410A0000000000005FF87FF7-part1 swap swap defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
devpts /dev/pts devpts mode=0620,gid=5 0 0
Any help is much appreciated; thanks in advance!

If that is your output, and you just want to grab the lines that start with /dev, you can simply pipe it to grep, without altering your Perl script.
perlscript.pl | grep "^/dev"
The -e flag isn't needed here; it only matters when the pattern could be mistaken for an option (for example, when it starts with a dash). If all else fails, use Perl:
perlscript.pl | perl -nwe 'print if m#^/dev#'

Something like this should be just fine, then:
#!/usr/bin/perl
use strict;
use warnings;

open(my $fstab, "<", "/etc/fstab") or die "Cannot open /etc/fstab: $!";
while (<$fstab>)
{
    my @list = split;
    # skip blank lines and anything that is not a /dev device
    next unless defined $list[0] and $list[0] =~ m{^/dev};
    print "Device : $list[0]\nMountpoint : $list[1]\n";
}
close($fstab);
exit 0;
Keep in mind that this will not work if your fstab has UUID= entries or any kind of file systems that aren't devices listed in /dev.
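If you do need UUID= or LABEL= entries as well, a minimal sketch (assuming you still only want the first two fields of each entry) is to widen the match inside the same loop:
    my @list = split;
    next unless defined $list[0]
        and $list[0] =~ m{^(?:/dev|UUID=|LABEL=)};
    print "Device : $list[0]\nMountpoint : $list[1]\n";
The mount point is still $list[1] in all cases, since fstab's second field is the mount point (swap entries show swap or none there).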

Related

CentOS EPEL fail2ban not processing systemd journal for tomcat

I've installed fail2ban 0.10.5-2.el7 from EPEL on CentOS 7.8. I'm trying to get it to work with systemd for processing a Tomcat log (also systemd).
In jail.local I added:
[guacamole]
enabled = true
port = http,https
backend = systemd
In filter.d/guacamole.conf:
[Definition]
failregex = Authentication attempt from <HOST> for user "[^"]*" failed\.$
ignoreregex =
journalmatch = _SYSTEMD_UNIT=tomcat.service + _COMM=java
If I run journalctl -u tomcat.service I see all the log lines. The ones I am interested in look like this:
May 18 13:58:26 myhost catalina.sh[42065]: 13:58:26.485 [http-nio-8080-exec-6] WARN o.a.g.r.auth.AuthenticationService - Authentication attempt from 1.2.3.4 for user "test" failed.
If I redirect journalctl -u tomcat.service to a log file, and process it with fail2ban-regex then it works exactly the way I want it to work, finding all the lines it needs.
% fail2ban-regex /tmp/j9 /etc/fail2ban/filter.d/guacamole.conf
Running tests
=============
Use failregex filter file : guacamole, basedir: /etc/fail2ban
Use log file : /tmp/j9
Use encoding : UTF-8
Results
=======
Failregex: 47 total
|- #) [# of hits] regular expression
| 1) [47] Authentication attempt from <HOST> for user "[^"]*" failed\.$
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [1] ExYear(?P<_sep>[-/.])Month(?P=_sep)Day(?:T| ?)24hour:Minute:Second(?:[.,]Microseconds)?(?:\s*Zone offset)?
| [570] {^LN-BEG}(?:DAY )?MON Day %k:Minute:Second(?:\.Microseconds)?(?: ExYear)?
`-
Lines: 571 lines, 0 ignored, 47 matched, 524 missed
[processed in 0.12 sec]
However, if fail2ban reads the journal directly then it does not work:
fail2ban-regex systemd-journal /etc/fail2ban/filter.d/guacamole.conf
It comes back right away, and processes 0 lines!
Running tests
=============
Use failregex filter file : guacamole, basedir: /etc/fail2ban
Use systemd journal
Use encoding : UTF-8
Use journal match : _SYSTEMD_UNIT=tomcat.service + _COMM=java
Results
=======
Failregex: 0 total
Ignoreregex: 0 total
Lines: 0 lines, 0 ignored, 0 matched, 0 missed
[processed in 0.00 sec]
I've tried to remove _COMM=java. It doesn't make a difference.
If I leave out the journal match line altogether, it at least processes all the lines from the journal, but does not find any matches (even though, as I mentioned, it processes a dump of the log file fine):
Running tests
=============
Use failregex filter file : guacamole, basedir: /etc/fail2ban
Use systemd journal
Use encoding : UTF-8
Results
=======
Failregex: 0 total
Ignoreregex: 0 total
Lines: 202271 lines, 0 ignored, 0 matched, 202271 missed
[processed in 34.54 sec]
Missed line(s): too many to print. Use --print-all-missed to print all 202271 lines
Either this is a bug, or I'm missing a small detail.
Thanks for any help you can provide.
To make sure the filter definition is properly initialised, it would be good to include the common definition. Your filter definition (/etc/fail2ban/filter.d/guacamole.conf) would therefore look like:
[INCLUDES]
before = common.conf
[Definition]
journalmatch = _SYSTEMD_UNIT=tomcat.service
failregex = Authentication attempt from <HOST> for user "[^"]*" failed\.$
ignoreregex =
A small note: since your issue only occurs with the systemd backend and not with flat files, could you try the same pattern without the $ anchor at the end? There may be an issue with how the end of line appears when the message is read from the journal.
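For reference, the unanchored variant would simply be:
failregex = Authentication attempt from <HOST> for user "[^"]*" failed\.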
In your jail definition (/etc/fail2ban/jail.d/guacamole.conf), remember to define the ban time/find time/retries if they haven't already been defined in the default configuration:
[guacamole]
enabled = true
port = http,https
maxretry = 3
findtime = 1h
bantime = 1d
# "backend" specifies the backend used to get files modification.
# systemd: uses systemd python library to access the systemd journal.
# Specifying "logpath" is not valid for this backend.
# See "journalmatch" in the jails associated filter config
backend = systemd
Remember to restart the fail2ban service after making these changes.
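On CentOS 7 that would typically be (assuming the jail is named guacamole, as above):
sudo systemctl restart fail2ban
sudo fail2ban-client status guacamole
The second command is a quick way to confirm that the jail actually loaded and to watch its failure/ban counters.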

Does osquery install inotify watchers on directories or on files?

I am using osquery to monitor files and folders to get events on any operation on those files. There is a specific syntax for osquery configuration:
"/etc/": watches the entire directory at a depth of 1.
"/etc/%": watches the entire directory at a depth of 1.
"/etc/%%": watches the entire tree recursively with /etc/ as the root.
I am trying to evaluate the memory usage in case of watching a lot of directories. In this process I found the following statistics:
"/etc", "/etc/%", "/etc/%.conf": only 1 inotify handle is found registered in the name of osquery.
"/etc/%%: a few more than 289 inotify handles found which are registered in the name of osquery, given that there are a total of 285 directories under the tree. When checking the entries in /proc/$PID/fdinfo, all the inodes listed in the file points to just folders.
eg: for "/etc/%.conf"
$ grep -r "^inotify" /proc/$PID/fdinfo/
18:inotify wd:1 ino:120001 sdev:800001 mask:3ce ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:01001200bc0f1cab
$ printf "%d\n" 0x120001
1179649
$ sudo debugfs -R "ncheck 1179649" /dev/sda1
debugfs 1.43.4 (31-Jan-2017)
Inode Pathname
1179649 //etc
The inotify watch is established on the whole directory here, but the events are only reported for the matching files /etc/*.conf. My assumption is that osquery filters the events based on the file_paths supplied, but I am not sure.
Another experiment I performed to support the above claim was to take the sample source from inotify(7) and run a watcher on a particular file. When I check the list of inotify watches, it shows just:
$ ./a.out /tmp/inotify.cc &
$ cat /proc/$PID/fdinfo/3
...
inotify wd:1 ino:1a1 sdev:800001 mask:38 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:a1010000aae325d7
$ sudo debugfs -R "ncheck 417" /dev/sda1
debugfs 1.43.4 (31-Jan-2017)
Inode Pathname
417 /tmp/inotify.cc
So, according to this experiment, establishing a watcher on a single file is possible (which is clear from the inotify man page). This supports the claim that osquery is doing some sort of filtering based on the file patterns supplied.
Could someone verify the claim or present otherwise?
My osquery config:
{
  "options": {
    "host_identifier": "hostname",
    "schedule_splay_percent": 10
  },
  "schedule": {
    "file_events": {
      "query": "SELECT * FROM file_events;",
      "interval": 5
    }
  },
  "file_paths": {
    "sys": ["/etc/%.conf"]
  }
}
$ osqueryd --version
osqueryd version 3.3.2
$ uname -a
Linux lab 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux
It sounds like some great sleuthing!
I think the comments in the source code support that; they're worth skimming. The relevant files:
https://github.com/osquery/osquery/blob/master/osquery/tables/events/linux/file_events.cpp
https://github.com/osquery/osquery/blob/master/osquery/events/linux/inotify.cpp
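One way to see the filtering in action (a sketch, assuming osqueryd is running with the config above and writes to the default results log; the test file names are made up) is to create one matching and one non-matching file:
sudo touch /etc/zz-test.conf /etc/zz-test.txt
sudo grep zz-test /var/log/osquery/osqueryd.results.log
After the next scheduled run (interval 5 above), only the .conf file should appear in the results log, even though the inotify watch covers the whole /etc directory.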

Listing the volumes on Solaris OS

I am new to the Solaris OS, and am trying to write a script which collects volume data from a Solaris box.
We did a similar script for Linux, where we used the "df -P" command to list the volumes and selected the entries that start with "/dev".
By default, on Linux, I could see a volume such as "/dev/sda1".
When I run the df command on the Solaris box (df -k), I could not see any entry similar to /dev/* in my output.
When I mounted a CD, I could see an entry in the df output as below:
/dev/dsk/c1t1d0s2 57632 57632 0 100% /media/VBOXADDITIONS_5.0.14_105127
So, on Solaris, what is the pattern I should look for to pick out the volumes?
And why am I not seeing at least one volume matching the pattern /dev/*?
Is it "/dev" or something else?
I am using a Solaris 11 image on Oracle VirtualBox.
When I try the "format" command, I can see 3 disks:
AVAILABLE DISK SELECTIONS:
       0. c1d0 <VBOX HAR-8ea18e8b-2b2a0a5-0001-31.25GB> testvolu
          /pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
       1. c2d0 <VBOX HAR-b4343b55-dbed77c-0001 cyl 1020 alt 2 hd 64 sec 32>
          /pci@0,0/pci-ide@1,1/ide@1/cmdk@0,0
       2. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 1009 alt 2 hd 64 sec 32>
          /pci@0,0/pci8086,2829@d/disk@0,0
But I don't see any partitions in "df -k".
Also, I read here (https://docs.oracle.com/cd/E19455-01/805-6331/6j5vgg680/index.html) that disk names should be in the "/dev/dsk/*" format.
Solaris 11 uses ZFS, which has no one-to-one relationship between volumes (partitions) and file systems.
You can look at zpool status output to get the underlying devices.
$ zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:
        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       0     0     0
          c1t0d0  ONLINE       0     0     0
Here, the whole c1t0d0 disk is used, hence no sx or px suffix.
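If what you are really after is file systems and their mount points (as with df -P on Linux) rather than raw devices, a possible approach on a ZFS system (a sketch, assuming a default Solaris 11 install) is to list the datasets directly:
zfs list -o name,mountpoint
Each line gives a dataset name (e.g. rpool/ROOT/solaris) and its mount point, which is a closer analogue of your Linux /dev pattern than anything under /dev/dsk.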

python formatting return value of subprocess

I am attempting, on Python 2.6.6, to get the routing table of a system (for an interface) into a Python list to parse, but I cannot work out why the entire result ends up in one variable.
The loop seems to iterate over one character at a time, while the behavior I wanted was one line at a time.
What I get is one character per iteration; short example below...
1
0
.
2
4
3
What I'd like line to return, so I can run other commands against each line:
10.243.186.1 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
10.243.188.1 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
10.243.184.0 10.243.186.1 255.255.255.128 UG 0 0 0 eth0
Here is the code below...
import subprocess

def getnet(int):
    int = 'eth0'  # for testing only
    cmd = ('route -n | grep ' + int)
    routes = subprocess.Popen([cmd], shell=True, stdout=subprocess.PIPE)
    routes, err = routes.communicate()
    for line in routes:
        print line
routes in your case is a bytestring that contains the entire output from the shell command. A "for character in astring" statement produces one character at a time, as the example in your question demonstrates. To get lines as a list of strings instead, call lines = all_output.splitlines():
from subprocess import check_output
lines = check_output("a | b", shell=True, universal_newlines=True).splitlines()
Here's a workaround if your Python version has no check_output(). If you want to read one line at a time while the process is still running, see Python: read streaming input from subprocess.communicate().
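Since check_output() only appeared in Python 2.7, here is a minimal sketch for 2.6 (assuming the same route -n | grep pipeline from your question):
import subprocess

# Python 2.6: no check_output(), so use Popen + communicate()
proc = subprocess.Popen('route -n | grep eth0', shell=True,
                        stdout=subprocess.PIPE)
output, err = proc.communicate()
for line in output.splitlines():  # one full line per iteration
    print line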
You could also try using the os or ctypes modules to get the info directly, instead of grepping the output of an external command.
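For example, on Linux the kernel exposes the routing table as a text file, so a sketch along those lines (the interface name eth0 is assumed, as in your test code) could read it directly:
# read the routing table straight from /proc instead of shelling out
with open('/proc/net/route') as f:
    header = f.readline().split()
    for line in f:
        fields = dict(zip(header, line.split()))
        if fields['Iface'] == 'eth0':
            # note: these values are little-endian hex, not dotted quads
            print fields['Destination'], fields['Gateway'], fields['Mask']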

'No such file or directory' even though I own the file and it has read permissions for everyone

I have a perl script on CentOS and am trying to read a file using File::Slurp:
my $local_filelist = '~/filelist.log';
use File::Slurp;
my @files = read_file($local_filelist);
But I get the following error:
Carp::croak('read_file \'~/filelist.log\' - sysopen: No such file or directory') called at /usr/local/share/perl5/File/Slurp.pm line 802
This is despite the fact that I am running the script as myuser and:
(2013-07-26 06:55:16) [myuser@mybox ~]$ ls -l ~/filelist.log
-rw-r--r--. 1 myuser myuser 63629044 Jul 24 22:18 /home/myuser/filelist.log
This is on perl 5.10.1 x86_64 on CentOS 6.4.
What could be causing this?
I've not used File::Slurp, but I'll hazard a guess that it doesn't understand the ~ for home directory. Does it work if you specify the full path - e.g., use:
my $local_filelist = "$ENV{HOME}/filelist.log";
Using double quotes means that Perl will interpolate $ENV{HOME}.
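Putting that together, a minimal sketch of the corrected read (assuming the file really is in your home directory, as your ls output shows):
use strict;
use warnings;
use File::Slurp;

# File::Slurp does no tilde expansion, so build the path explicitly
my $local_filelist = "$ENV{HOME}/filelist.log";
my @files = read_file($local_filelist);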
Just use the glob function. That's what it is for.
my $local_filelist = glob '~/filelist.log';
I think that when you are running a script, the ~ is not expanded (tilde expansion is done by the shell, not by Perl). Try replacing:
'~/filelist.log'
with:
'/home/myuser/filelist.log'
in your script.