Snort custom log file from rule (logto) not working - snort

I am trying to create a custom Snort rule which outputs a log file whenever it is triggered. My rule is as follows:
log tcp $HOME_NET any -> any any (msg:"Facebook Accessed"; content:"www.facebook.com"; logto:"facebook.log"; threshold: type limit, track by_dst, count 1, seconds 60;)
And I call Snort with the following command:
snort -A console -u snort -g snort -c /etc/snort/snort.conf -l /var/log/snort
However, no facebook.log file is being created. I can see all the snort.log files, and an alert version of the same rule works just fine. I have given the snort user permission to write to /var/log/snort.
What am I missing?

Related

tcpdump session issue in moving dump files out of current folder

I would like to have a tcpdump script which dumps into files let's say every hour.
This I can achieve quite simply like this:
tcpdump -i eth0 -G 3600 -w /tmp/files/<some-name>-%F-%H-%M-%S.pcap -Z root -z gzip
I want to MOVE the "finished" files to S3, for which I'm using the rclone tool:
rclone move /tmp/files remote:<s3 bucket name> --filter "- *.pcap"
All runs fine apart from the fact that whenever I move any of the *.pcap.gz files, the currently written *.pcap file is enlarged with all of the transfer's own session data, which makes the file pretty big.
Does this mean that I can't move any of the files out of the directory and have to restart the tcpdump command on a regular basis?
Thanks
Modify your tcpdump command to add a capture filter that excludes the rclone traffic. For example, assuming the remote IP address and TCP port number are 192.0.2.1 and 1234, respectively, apply a capture filter of "not (host 192.0.2.1 and tcp port 1234)" to exclude that traffic.
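For example, reusing the rotation command from the question with that filter appended as the capture expression (the address and port are the placeholder values from above):
tcpdump -i eth0 -G 3600 -w /tmp/files/<some-name>-%F-%H-%M-%S.pcap -Z root -z gzip 'not (host 192.0.2.1 and tcp port 1234)'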

How to set up cron using curl command?

After Apache was rebuilt, my cron jobs stopped working.
I used the following command:
wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
Now my DC support suggests that I change from wget to curl. What would be the correct equivalent in this case?
-O - is equivalent to curl's default behavior (write to stdout), so that's easy.
-q is curl's -s (or --silent).
--retry N substitutes for wget's -t N, with the caveat that wget's -t counts total tries while curl's --retry counts retries after the first attempt.
All in all:
curl -s --retry 1 http://example.com/cgi-bin/loki/autobonus.pl
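For example, the resulting crontab entry might look like this (the schedule shown is just a placeholder):
*/15 * * * * curl -s --retry 1 http://example.com/cgi-bin/loki/autobonus.pl > /dev/null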
Try running the command with the full path to wget:
/usr/bin/wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
you can find the full path with:
which wget
Also, check whether you can reach the destination domain with ping or other methods:
ping example.com
Update:
Based on the comments, this seems to be caused by the following line in /etc/hosts:
127.0.0.1 example.com #change example.com to the real domain
It seems your options are limited: on the server where the cron job should run, the domain is pinned to 127.0.0.1, but the virtual host configuration does not work with that address.
What you can do is let wget connect by IP but send the Host header so that the virtual host matching still works:
wget -O - -q -t 1 --header 'Host: example.com' http://xx.xx.35.162/cgi-bin/loki/autobonus.pl
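Since the goal is to switch to curl, the equivalent call (reusing the redacted IP from above) would be along these lines:
curl -s --retry 1 -H 'Host: example.com' http://xx.xx.35.162/cgi-bin/loki/autobonus.pl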
Update
Also, you probably don't need to go through the web server at all, so why not just run:
perl /path/to/your/script/autobonus.pl

How to perform logging with gsutil rsync

What's the proper way to log any errors or warnings when performing a quiet rsync?
This is what I currently run from my crontab:
gsutil -m -q rsync -r -C /mount1/share/folder gs://my-bucket-1/folder/ > /mount2/share/folder/gsutil.log
Since the log file is always completely empty and I'm uploading terabytes of data, I'm starting to think that maybe even errors and warnings are being suppressed.
After realizing that this is related to how you pipe stdout and/or stderr to files in general, the answer really lies within this existing thread: How to redirect both stdout and stderr to a file
So a simple solution to log as much as possible into one single log file could be something like:
gsutil -d rsync [src] [dst] &> [logfile]
...where -d enables debug output. I found this to be the only way to show files which were affected by an error such as CommandException: 3 files/objects could not be copied. Please note that -d exposes authentication credentials.
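If the -d output is too verbose, a lighter-weight option (a sketch reusing the question's paths) is to keep the quiet run but capture stderr, which is where gsutil writes its errors and warnings:
gsutil -m -q rsync -r -C /mount1/share/folder gs://my-bucket-1/folder/ 2> /mount2/share/folder/gsutil-errors.log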

Jmeter - Run .jmx file through command line and get the summary report in a excel

I am new to JMeter. I have a .jmx file containing all the required HTTP samplers. I can run it through the JMeter UI using "Run -> Start" and view the results in the "Summary Report". I can then save the results to a .csv using the "Save Table Data" button in the "Summary Report".
The question is: how can I achieve the same from the command line?
JMeter can be launched in non-GUI mode as follows:
jmeter -n -t /path/to/your/test.jmx -l /path/to/results/file.jtl
You can control what ends up in the resulting .jtl file by playing with JMeter properties.
See the jmeter.properties file under the /bin folder of your JMeter installation and look for the properties starting with
jmeter.save.saveservice.
Defaults are listed below:
#jmeter.save.saveservice.output_format=csv
#jmeter.save.saveservice.assertion_results_failure_message=false
#jmeter.save.saveservice.assertion_results=none
#jmeter.save.saveservice.data_type=true
#jmeter.save.saveservice.label=true
#jmeter.save.saveservice.response_code=true
#jmeter.save.saveservice.response_data=false
#jmeter.save.saveservice.response_data.on_error=false
#jmeter.save.saveservice.response_message=true
#jmeter.save.saveservice.successful=true
#jmeter.save.saveservice.thread_name=true
#jmeter.save.saveservice.time=true
#jmeter.save.saveservice.subresults=true
#jmeter.save.saveservice.assertions=true
#jmeter.save.saveservice.latency=true
#jmeter.save.saveservice.samplerData=false
#jmeter.save.saveservice.responseHeaders=false
#jmeter.save.saveservice.requestHeaders=false
#jmeter.save.saveservice.encoding=false
#jmeter.save.saveservice.bytes=true
#jmeter.save.saveservice.url=false
#jmeter.save.saveservice.filename=false
#jmeter.save.saveservice.hostname=false
#jmeter.save.saveservice.thread_counts=false
#jmeter.save.saveservice.sample_count=false
#jmeter.save.saveservice.idle_time=false
#jmeter.save.saveservice.timestamp_format=ms
#jmeter.save.saveservice.timestamp_format=yyyy/MM/dd HH:mm:ss.SSS
#jmeter.save.saveservice.default_delimiter=,
#jmeter.save.saveservice.default_delimiter=\t
#jmeter.save.saveservice.print_field_names=false
#jmeter.save.saveservice.xml_pi=<?xml-stylesheet type="text/xsl" href="../extras/jmeter-results-detail-report_21.xsl"?>
#jmeter.save.saveservice.base_prefix=~/
#jmeter.save.saveservice.autoflush=false
Uncomment the one you are interested in and set its value to change the default. Another option is to override the property in the user.properties file, or to provide it as a command-line argument using the -J key as follows:
jmeter -Jjmeter.save.saveservice.print_field_names=true -n -t /path/to/your/test.jmx -l /path/to/results/file.jtl
See Apache JMeter Properties Customization Guide for more details on what can be done using JMeter Properties.
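For example, a user.properties file (in JMeter's bin directory) overriding a few of the defaults listed above might look like this; the particular properties chosen here are just an illustration:
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.print_field_names=true
jmeter.save.saveservice.response_data.on_error=true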
You can use this command:
jmeter -n -t /path/to/script.jmx -l /path/to/results/file.jtl
But if you are running a load test on a remote machine, the test should keep running even after you close your session, so use nohup to ignore the HUP (hangup) signal:
nohup sh jmeter.sh -n -t /path/to/script.jmx -l /path/to/results/file.jtl &
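Since nohup sends stdout to nohup.out in the working directory by default, you can follow the run afterwards with:
tail -f nohup.out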
You can run JMeter from the command line using the -n parameter for 'Non-GUI' and the -t parameter for the test plan file.
jmeter -n -t "PATHTOJMXFILE"
If you want to further customize the command line experience, I would direct you to the 'Getting Started' section of their documentation.
This worked for me on macOS High Sierra 10.13.6, Java 8 (64-bit), JMeter 4.0:
$ jmeter -n --testfile /path/to/Test_Plan.jmx
Sample output:
Creating summariser <summary>
Created the tree successfully using ./src/test/jmeter/Test_Plan.jmx
Starting the test # Fri Aug 24 17:18:18 PDT 2018 (1535156298333)
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary = 10 in 00:00:09 = 1.1/s Avg: 6666 Min: 1000 Max: 8950 Err: 0 (0.00%)
Tidying up ... # Fri Aug 24 17:18:28 PDT 2018 (1535156308049)
... end of run
To get the results in an Excel-friendly file, one option is CSV output.
Use the command below:
jmeter -n -t /path-to-jmeter-test/file.jmx -l TestResults.csv
-n runs JMeter in non-GUI mode
-t specifies the test JMX file
-l logs the results to the given file
You can also pass any results-related parameters dynamically as command-line arguments using -Jprop.name=value; the available properties are the ones defined in jmeter.properties in the bin folder.
This would be the command-line statement on Windows:
"%JMETER_HOME%\bin\jmeter.bat" -n -t <jmx test file path> -l <csv result file path> -Jjmeter.save.saveservice.output_format=csv
In command-line mode (I set this up on Linux):
Download the latest JMeter version (Apache JMeter 3.2, which requires Java 8 or later, as of this writing).
Extract it to a directory of your choice, for example /tmp/.
The default output file format is CSV, so there is nothing to change or specify on the command line.
For example:
./jmeter -n -t examples/test.jmx -l examples/output.csv
To change the default format, set the following property in jmeter.properties: jmeter.save.saveservice.output_format=xml
Now if you run the command ./jmeter -n -t examples/test.jmx -l examples/output.jtl, the output is stored in XML format.
To run the test against a different server (additional info), you can pass the host and port as properties:
./jmeter -n -t examples/test.jmx -l examples/output.jtl -JHOST=<HOST> -JPORT=<PORT>
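For this to work, the test plan must actually read those properties, which in JMeter is typically done with the __P function, e.g. in the HTTP Request sampler fields (the defaults shown here are illustrative):
Server Name or IP: ${__P(HOST,localhost)}
Port Number: ${__P(PORT,8080)}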
Here is my PowerShell version:
$Date = Get-Date -Format ddMMyyyyhhmmss
jmeter -n -t jmetter\dev.jmx -l jmetter\TestResult-$Date.csv -o jmetter\Results-$Date\ -X
# To see all parameters (like -n, -t, ...), use this command:
jmeter --?
Running JMeter in command-line mode:
1. Navigate to JMeter's bin directory.
2. Enter the following command:
jmeter -n -t test.jmx
-n: specifies that JMeter is to run in non-GUI mode
-t: specifies the name of the JMX file that contains the Test Plan

Error with LSF Platform: lsb_init: Failed in an LSF library call: Unable to open file lsf.conf

I have an issue with Platform LSF that I cannot wrap my head around.
For scripting reasons, I need to check the running/pending jobs with bjobs (and the other b* commands) from a Perl script.
For some reason it did not work, and I was able to view the following error message:
lsb_init: Failed in an LSF library call: Unable to open file lsf.conf
Some research on Google and in the manual turned up nothing useful, so I did a little test.
My account (max) is an LSF administrator. Root is an LSF admin as well.
So I switched to root and tried to launch bjobs as max with sudo -u max. Please have a look at these commands:
hn[~]=> whoami
max
hn[~]=> bjobs
No unfinished job found
hn[~]=> su
Password:
[root@hn max]# whoami
root
[root@hn max]# sudo -u max whoami
max
[root@hn max]# bjobs
No unfinished job found
[root@hn max]# sudo -u max bjobs
lsb_init: Failed in an LSF library call: Unable to open file lsf.conf
How can I correct this?
By default LSF looks for lsf.conf in /etc. If it's not there, it looks in the directory named by the LSF_ENVDIR environment variable.
sudo is probably resetting your environment. Try sudo -i or put
Defaults !env_reset
in your sudoers file.
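Alternatively, instead of disabling env_reset entirely, you could whitelist just the LSF variables in sudoers (a sketch; adapt it to your sudo policy):
Defaults env_keep += "LSF_ENVDIR LSF_SERVERDIR"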
You could also try something like this
sudo -u max LSF_ENVDIR=$LSF_ENVDIR LSF_SERVERDIR=$LSF_SERVERDIR bjobs
For anybody scripting around SSH, the two variables above must be set explicitly, either on the command line, as in:
ssh foo@bar.net 'export LSF_ENVDIR=/path/to/lsf/envdir; export LSF_SERVERDIR=/path/to/lsf/serverdir; bsub ...'
or in the ~/.ssh/environment file (provided that sshd is configured with PermitUserEnvironment yes).
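The ~/.ssh/environment variant would then contain something like (paths reused from above):
LSF_ENVDIR=/path/to/lsf/envdir
LSF_SERVERDIR=/path/to/lsf/serverdir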