Why does Snort's sfportscan log output contain event_ref with a value of 0 instead of event_id? - snort

My config is as follows:
preprocessor sfportscan: proto { all } \
scan_type { all } \
sense_level { high } \
logfile { alert }
When I run Snort and scan with nmap, the log file output is as follows:
Time: 02/23-12:54:21.183932
event_ref: 0
[Source ip address] -> [Destination ip address] (portscan) TCP Portscan
Priority Count: 9
Connection Count: 10
IP Count: 1
Scanner IP Range: [Destination ip address]:[Destination ip address]
Port/Proto Count: 10
Port/Proto Range: 981:12174
But the Snort documentation shows this:
Time: 09/08-15:07:31.603880
event_id: 2
192.168.169.3 -> 192.168.169.5 (portscan) TCP Filtered Portscan
Priority Count: 0
Connection Count: 200
IP Count: 2
Scanner IP Range: 192.168.169.3:192.168.169.4
Port/Proto Count: 200
Port/Proto Range: 20:47557
If there are open ports on the target, one or more additional tagged packet(s) will be appended:
Time: 09/08-15:07:31.603881
event_ref: 2
192.168.169.3 -> 192.168.169.5 (portscan) Open Port
Open Port: 38458
My output has no event_id; instead there is event_ref, and its value is 0.

Related

Bind nsupdate command getting REFUSED error

I am using the nsupdate command to update a zone, but I receive the error message "update failed: REFUSED". I created the key using "rndc-confgen -a -c /etc/remote_rndc_key".
My named.conf is as follows:
options {
listen-on port 53 { 9.82.159.110; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
secroots-file "/var/named/data/named.secroots";
recursing-file "/var/named/data/named.recursing";
allow-query { any; };
allow-update {key remote_rndc_key; };
recursion yes;
dnssec-enable no;
dnssec-validation no;
pid-file "/run/named/named.pid";
};
logging {
channel default_debug {
file "data/named.run";
severity debug 3;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/remote_rndc_key";
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
zone "test.com" IN {
type master;
file "test.com.zone";
};
zone "82.9.in-addr.arpa" IN {
type master;
file "test.com.local";
};
key "remote_rndc_key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
controls {
inet 9.82.159.110 port 953
allow { 9.82.224.110; } keys { "remote_rndc_key"; };
};
/etc/remote_rndc_key:
key "rndc-key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
/var/named/test.com.zone:
$TTL 1D
@ IN SOA ns1 rname.invalid. (
2019062901 ; serial
5M ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
NS ns1
ns1 IN A 9.82.159.110
www IN A 9.82.100.100
Using nsupdate:
[root@localhost tmp]# nsupdate -v -d -k ./remote_rndc_key
Creating key...
Creating key...
namefromtext
keycreate
> server 9.82.159.110
> update add ftps.test.com 600 A 1.1.1.2
> send
Reply from SOA query:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 40666
;; flags: qr aa ra; QUESTION: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;ftps.test.com. IN SOA
;; AUTHORITY SECTION:
test.com. 0 IN SOA ns1.test.com. rname.invalid. 2019062901 300 3600 604800 10800
;; TSIG PSEUDOSECTION:
rndc-key. 0 ANY TSIG hmac-md5.sig-alg.reg.int. 1649854961 300 16 MFdWnAJcNEQ17QovaBmzTw== 40666 NOERROR 0
Found zone name: test.com
The master is: ns1.test.com
Sending update to 9.82.159.110#53
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 59745
;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 1, ADDITIONAL: 1
;; UPDATE SECTION:
ftps.test.com. 600 IN A 1.1.1.2
;; TSIG PSEUDOSECTION:
rndc-key. 0 ANY TSIG hmac-md5.sig-alg.reg.int. 1649854961 300 16 vJjzs0bT4QxHW40mL/MT7g== 59745 NOERROR 0
Reply from update query:
;; ->>HEADER<<- opcode: UPDATE, status: REFUSED, id: 59745
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
;; ZONE SECTION:
;test.com. IN SOA
;; TSIG PSEUDOSECTION:
rndc-key. 0 ANY TSIG hmac-md5.sig-alg.reg.int. 1649854961 300 16 FAcO+t5JUdOJdC1mRuHNeA== 59745 NOERROR 0
named server log as below:
[root@localhost named]# systemctl status named
● named.service - Berkeley Internet Name Domain (DNS)
Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-04-13 20:36:14 CST; 29min ago
Process: 3371415 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z "$NAMEDCONF"; else echo "Checking of zone files is disabled"; fi (code=exited, >
Process: 3371418 ExecStart=/usr/sbin/named -u named -c ${NAMEDCONF} $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 3371421 (named)
Tasks: 35
Memory: 88.8M
CGroup: /system.slice/named.service
└─3371421 /usr/sbin/named -u named -c /etc/named.conf
Apr 13 20:36:32 localhost.localdomain named[3371421]: client @0x7ff1f0108770 9.82.224.110#59471/key rndc-key: signer "rndc-key" denied
What can be the reason?
I confused the key name with the key file name:
/etc/remote_rndc_key:
key "rndc-key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
should be changed to:
key "remote_rndc_key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
I got this error today on my "hidden primary" BIND DNS server and wasted a couple of hours finding the reason for the failure.
In the end, I got tired and tried again, and then it worked.
So my advice is: try again; it may be a bug.

Error core: failed to lookup token: error=failed to read entry, dial tcp [::1]:8500: getsockopt: connection refused in Vault log

We are performing a load test on our application using JMeter; our application uses Consul and Vault as backend services for reading/storing application configuration data. During the load test, our application queries Vault for authentication data, and this happens for each incoming request. Initially it runs fine for some duration (10 to 15 minutes) and I can see success responses in JMeter, but after some time the responses start failing for all requests. I see the following error in the Vault log for each request, but no error/exception in the Consul log.
Error in Vault log
[ERROR] core: failed to lookup token: error=failed to read entry: Get http://localhost:8500/v1/kv//vault/sys/token/id/87f7b82131cb8fa1ef71aa52579f155d4cf9f095: dial tcp [::1]:8500: getsockopt: connection refused
As of now the load is 100 requests (users) every 10 milliseconds with a ramp-up period of 60 seconds, and this executes in a loop. What could be the cause of this error? Is it due to limited connections to port 8500?
Below are my Vault and Consul configurations.
Vault
backend "consul" {
address = "localhost:8500"
path = "app/vault/"
}
listener "tcp" {
address = "10.88.97.216:8200"
cluster_address = "10.88.97.216:8201"
tls_disable = 0
tls_min_version = "tls12"
tls_cert_file = "/var/certs/vault.crt"
tls_key_file = "/var/certs/vault.key"
}
Consul
{
"data_dir": "/var/consul",
"log_level": "info",
"server": true,
"leave_on_terminate": true,
"ui": true,
"client_addr": "127.0.0.1",
"ports": {
"dns": 53,
"serf_lan": 8301,
"serf_wan" : 8302
},
"disable_update_check": true,
"enable_script_checks": true,
"disable_remote_exec": false,
"domain": "primehome",
"limits": {
"http_max_conns_per_client": 1000,
"rpc_max_conns_per_client": 1000
},
"service": {
"name": "nginx-consul-https",
"port": 443,
"checks": [{
"http": "https://localhost/nginx_status",
"tls_skip_verify": true,
"interval": "10s",
"timeout": "5s",
"status": "passing"
}]
}
}
I have also configured http_max_conns_per_client and rpc_max_conns_per_client, thinking it might be due to a connection limit per client, but I am still seeing this error in the Vault log.
After taking another look at this, the issue appears to be that Vault is attempting to contact Consul over the IPv6 loopback address (likely because both the IPv4 and IPv6 loopback entries for localhost are present in /etc/hosts), while Consul is only listening on the IPv4 loopback address.
You can likely resolve this through one of the following methods.
Use 127.0.0.1 instead of localhost for Consul's address in the Vault config.
backend "consul" {
address = "127.0.0.1:8500"
path = "app/vault/"
}
Configure Consul to listen on both the IPv4 and IPv6 loopback addresses.
{
"client_addr": "127.0.0.1 [::1]"
}
(Rest of the config omitted for brevity.)
Remove the localhost hostname from the IPv6 loopback in /etc/hosts
127.0.0.1 localhost
# Old hosts entry for ::1
#::1 localhost ip6-localhost ip6-loopback
# New entry
::1 ip6-localhost ip6-loopback
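One way to see why Vault ends up dialing [::1]:8500 is to check what "localhost" resolves to on the host. A minimal Python sketch (the port number is only illustrative):

```python
import socket

def resolved_addresses(host, port):
    """Return the addresses a hostname resolves to, in resolver order.

    Clients that iterate over getaddrinfo() results may try ::1 before
    127.0.0.1 when the IPv6 loopback is mapped to "localhost" in /etc/hosts.
    """
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return [info[4][0] for info in infos]

print(resolved_addresses("localhost", 8500))
```

If ::1 appears in (or at the top of) that list while Consul binds only 127.0.0.1, you get exactly the "dial tcp [::1]:8500 ... connection refused" symptom above.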

Connect two machines in AKKA remotely ,connection refused

I'm new to Akka and wanted to connect two PCs using Akka remoting, just to run some code on both (as 2 actors). I tried the example in the Akka docs. All I really do is add the 2 IP addresses to the config file, but I always get this error.
The first machine gives me this error:
[info] [ERROR] [11/20/2018 13:58:48.833]
[ClusterSystem-akka.remote.default-remote-dispatcher-6]
[akka.remote.artery.Association(akka://ClusterSystem)] Outbound
control stream to [akka://ClusterSystem@192.168.1.2:2552] failed.
Restarting it. Handshake with [akka://ClusterSystem@192.168.1.2:2552]
did not complete within 20000 ms
(akka.remote.artery.OutboundHandshake$HandshakeTimeoutException:
Handshake with [akka://ClusterSystem@192.168.1.2:2552] did not
complete within 20000 ms)
And the second machine:
Exception in thread "main"
akka.remote.RemoteTransportException: Failed to bind TCP to
[192.168.1.3:2552] due to: Bind failed because of
java.net.BindException: Cannot assign requested address: bind
Config file content :
akka {
actor {
provider = cluster
}
remote {
artery {
enabled = on
transport = tcp
canonical.hostname = "192.168.1.3"
canonical.port = 0
}
}
cluster {
seed-nodes = [
"akka://ClusterSystem@192.168.1.3:2552",
"akka://ClusterSystem@192.168.1.2:2552"]
# auto downing is NOT safe for production deployments.
# you may want to use it during development, read more about it in the docs.
auto-down-unreachable-after = 120s
}
}
# Enable metrics extension in akka-cluster-metrics.
akka.extensions=["akka.cluster.metrics.ClusterMetricsExtension"]
# Sigar native library extract location during tests.
# Note: use per-jvm-instance folder when running multiple jvm on one host.
akka.cluster.metrics.native-library-extract-folder=${user.dir}/target/native
First of all, you don't need cluster configuration for Akka remoting. Both PCs (nodes) should enable remoting with a concrete port instead of "0"; that way you know which port to connect to.
Use the configurations below:
PC1
akka {
actor {
provider = remote
}
remote {
artery {
enabled = on
transport = tcp
canonical.hostname = "192.168.1.3"
canonical.port = 19000
}
}
}
PC2
akka {
actor {
provider = remote
}
remote {
artery {
enabled = on
transport = tcp
canonical.hostname = "192.168.1.4"
canonical.port = 18000
}
}
}
Use the actor path below to connect to any remote actor from PC1 to PC2:
akka://<PC2-ActorSystem>@192.168.1.4:18000/user/<actor deployed in PC2>
Use the path below to connect from PC2 to PC1:
akka://<PC1-ActorSystem>@192.168.1.3:19000/user/<actor deployed in PC1>
The port numbers and IP addresses are samples.

Keepalived vrrp_script does not failover

I have 2 nodes running keepalived and HAProxy (CentOS 7).
If I shut down one node, everything works fine. But I want to fail over the VIPs if HAProxy goes down.
This is 1st node config:
vrrp_script ha_check {
script "/etc/keepalived/haproxy_check"
interval 2
weight 21
}
vrrp_instance VI_1 {
state MASTER
interface eno16777984
virtual_router_id 151
priority 101
advert_int 1
authentication {
auth_type PASS
auth_pass 11111
}
virtual_ipaddress {
10.0.100.233
}
smtp_alert
track_script {
ha_check
}
}
2nd node:
vrrp_script ha_check {
script "/etc/keepalived/haproxy_check"
interval 2
fall 2
rise 2
timeout 1
weight 2
}
vrrp_instance VI_1 {
state BACKUP
interface eno16777984
virtual_router_id 151
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 11111
}
virtual_ipaddress {
10.0.100.233
}
smtp_alert
track_script {
ha_check
}
}
cat /etc/keepalived/haproxy_check
systemctl status haproxy | grep "inactive"
When I stop HAProxy it still does not fail over the VIPs to the other host.
[root@cks-hatest1 keepalived]# tail /var/log/messages
Nov 30 10:35:24 cks-hatest1 Keepalived_vrrp[5891]: VRRP_Script(ha_check) failed
Nov 30 10:35:33 cks-hatest1 systemd: Started HAProxy Load Balancer.
Nov 30 10:35:45 cks-hatest1 systemd: Stopping HAProxy Load Balancer...
Nov 30 10:35:45 cks-hatest1 systemd: Stopped HAProxy Load Balancer.
Nov 30 10:35:46 cks-hatest1 Keepalived_vrrp[5891]: VRRP_Script(ha_check) succeeded
What I am doing wrong? Thank you in advance!
In your script you are checking whether the output of
systemctl status haproxy
contains the keyword "inactive". Is that the value you get when you stop the haproxy service manually? Note that grep exits 0 (success) exactly when "inactive" is found, so the check "succeeds" when HAProxy is down and "fails" while it runs, which is the opposite of what keepalived expects; your log even shows the script succeeding right after HAProxy is stopped.
And as soon as the haproxy service is stopped, your log shows it being started again. Can you verify that?
Also, try replacing the script with:
script "killall -0 haproxy"
It's easy. Try this for example:
vrrp_script check_haproxy {
script "pidof haproxy"
interval 2
weight 2
}
At the end of the config you should add the following part too:
track_script {
check_haproxy
}
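keepalived decides pass/fail purely from the exit status of whatever vrrp_script runs: 0 counts as success (HAProxy healthy), non-zero as failure. As an illustration only (not from the answers above; the process name and the /proc scan are assumptions, and /proc is Linux-specific), a check script with those semantics could look like:

```python
#!/usr/bin/env python3
"""Hypothetical keepalived check: succeed only while a named process runs."""
import os
import sys

def process_running(name):
    # Compare each process's comm name under /proc (Linux-specific).
    # Note /proc/<pid>/comm is truncated to 15 characters by the kernel.
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                if f.read().strip() == name:
                    return True
        except OSError:
            continue  # process vanished mid-scan
    return False

# keepalived treats exit status 0 as "check passed", non-zero as "failed".
status = 0 if process_running("haproxy") else 1
print("would exit with status", status)
# In the real check script you would finish with: sys.exit(status)
```

This is the same idea as `killall -0 haproxy` or `pidof haproxy`: the exit status directly reflects whether the process exists, with no string matching to get inverted.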

Perl: how to grab a specific string from paragraph-style logs

I'm new to Perl and still learning by working through cases. I have a case that involves parsing a log with Perl.
Here is the data log:
Physical interface: ge-1/0/2, Unit: 101, Vlan-id: 101, Address: 10.187.132.3/27
Index: 353, SNMP ifIndex: 577, VRRP-Traps: enabled
Interface state: up, Group: 1, State: backup, VRRP Mode: Active
Priority: 190, Advertisement interval: 1, Authentication type: none
Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.1
Dead timer: 2.715s, Master priority: 200, Master router: 10.187.132.2
Virtual router uptime: 5w5d 12:54
Tracking: disabled
Physical interface: ge-1/0/2, Unit: 102, Vlan-id: 102, Address: 10.187.132.35/27
Index: 354, SNMP ifIndex: 580, VRRP-Traps: enabled
Interface state: up, Group: 2, State: master, VRRP Mode: Active
Priority: 200, Advertisement interval: 1, Authentication type: none
Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.33
Advertisement Timer: 0.816s, Master router: 10.187.132.35
Virtual router uptime: 5w5d 12:54, Master router uptime: 5w5d 12:54
Virtual Mac: 00:00:5e:00:01:02
Tracking: disabled
Physical interface: ge-1/0/2, Unit: 103, Vlan-id: 103, Address: 10.187.132.67/27
Index: 355, SNMP ifIndex: 581, VRRP-Traps: enabled
Interface state: up, Group: 3, State: backup, VRRP Mode: Active
Priority: 190, Advertisement interval: 1, Authentication type: none
Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.65
Dead timer: 2.624s, Master priority: 200, Master router: 10.187.132.66
Virtual router uptime: 5w5d 12:54
Tracking: disabled
I'm curious how we can retrieve certain values and store them in an array. I've tried grepping, but I'm confused about how to capture a specific value.
Expected value (an array of hashes):
$VAR1 = {
'interface' => 'ge-1/0/2.101',
'address' => '10.187.132.3/27',
'State' => 'backup',
'Master-router' => '10.187.132.2'
};
$VAR2 = {
'interface' => 'ge-1/0/2.102',
'address' => '10.187.132.35/27',
'State' => 'master',
'Master-router' => '10.187.132.35'
};
$VAR3 = {
'interface' => 'ge-1/0/2.103',
'address' => '10.187.132.67/27',
'State' => 'backup',
'Master-router' => '10.187.132.66'
};
You could use regex to split each paragraph up. Something like this might work:
/((\w|\s|-)+):\s([^,]+)/m
The matching groups would then look something like:
Match 1
1. Physical interface
2. e
3. ge-1/0/2
Match 2
1. Unit
2. t
3. 101
Match 3
1. Vlan-id
2. d
3. 101
As you can see, 1. corresponds to a key whereas 3. is the corresponding value. You can store the set of pairs any way you like.
For this to work, each attribute in the log would need to be comma-separated, which the example you have listed isn't. Assuming the example you have listed is correct, you would have to adjust the regex a little to make it work. You can test it online at Rubular until it works. If it is comma-separated, you might just want to split each paragraph by "," and then split each result on ":".
EDIT:
It seems to me that each line is comma-separated, so the methods mentioned above might work perfectly well if you use them on a single line at a time.
To parse the data:
Obtain a section from the file by reading lines into the section while the lines start with whitespace.
Split the section at the item separators, which are either commas or line breaks.
Split each item at the first colon into a key and a value.
Case-fold the key and store the pair in a hash.
A sketch of an implementation:
my @hashes;
while (<>) {
    push @hashes, {} if /\A\S/;
    for my $item (split /,/) {
        my ($k, $v) = split /:/, $item, 2;
        $hashes[-1]{fc $k} = $v;
    }
}
Then you can extract those pieces of information from the hash which you are interested in.
Since each record is a paragraph, you can have Perl read the file in those chunks by local $/ = ''; (paragraph mode). Then, use a regex to capture each value that you want within that paragraph, pair that value with a hash key, and then push a reference to that hash onto an array to form an array of hashes (AoH):
use strict;
use warnings;
use Data::Dumper;
my @arr;
local $/ = '';
while (<DATA>) {
    my %hash;
    ( $hash{'interface'} ) = /interface:\s+([^,]+)/;
    ( $hash{'address'} ) = /Address:\s+(\S+)/;
    ( $hash{'State'} ) = /State:\s+([^,]+)/;
    ( $hash{'Master-router'} ) = /Master router:\s+(\S+)/;
    push @arr, \%hash;
}
print Dumper \@arr;
__DATA__
Physical interface: ge-1/0/2, Unit: 101, Vlan-id: 101, Address: 10.187.132.3/27
Index: 353, SNMP ifIndex: 577, VRRP-Traps: enabled
Interface state: up, Group: 1, State: backup, VRRP Mode: Active
Priority: 190, Advertisement interval: 1, Authentication type: none
Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.1
Dead timer: 2.715s, Master priority: 200, Master router: 10.187.132.2
Virtual router uptime: 5w5d 12:54
Tracking: disabled
Physical interface: ge-1/0/2, Unit: 102, Vlan-id: 102, Address: 10.187.132.35/27
Index: 354, SNMP ifIndex: 580, VRRP-Traps: enabled
Interface state: up, Group: 2, State: master, VRRP Mode: Active
Priority: 200, Advertisement interval: 1, Authentication type: none
Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.33
Advertisement Timer: 0.816s, Master router: 10.187.132.35
Virtual router uptime: 5w5d 12:54, Master router uptime: 5w5d 12:54
Virtual Mac: 00:00:5e:00:01:02
Tracking: disabled
Physical interface: ge-1/0/2, Unit: 103, Vlan-id: 103, Address: 10.187.132.67/27
Index: 355, SNMP ifIndex: 581, VRRP-Traps: enabled
Interface state: up, Group: 3, State: backup, VRRP Mode: Active
Priority: 190, Advertisement interval: 1, Authentication type: none
Advertisement threshold: 3, Delay threshold: 100, Computed send rate: 0
Preempt: yes, Accept-data mode: yes, VIP count: 1, VIP: 10.187.132.65
Dead timer: 2.624s, Master priority: 200, Master router: 10.187.132.66
Virtual router uptime: 5w5d 12:54
Tracking: disabled
Output:
$VAR1 = [
{
'Master-router' => '10.187.132.2',
'interface' => 'ge-1/0/2',
'address' => '10.187.132.3/27',
'State' => 'backup'
},
{
'Master-router' => '10.187.132.35',
'interface' => 'ge-1/0/2',
'address' => '10.187.132.35/27',
'State' => 'master'
},
{
'Master-router' => '10.187.132.66',
'interface' => 'ge-1/0/2',
'address' => '10.187.132.67/27',
'State' => 'backup'
}
];
Hope this helps!
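For comparison, the same paragraph-per-record idea can be sketched in Python (a hedged translation, not part of the original answer; like the Perl version, it captures the bare interface name rather than the interface.unit form shown in the question):

```python
import re

# A shortened sample in the same format as the log above.
SAMPLE = """\
Physical interface: ge-1/0/2, Unit: 101, Vlan-id: 101, Address: 10.187.132.3/27
  Index: 353, SNMP ifIndex: 577, VRRP-Traps: enabled
  Interface state: up, Group: 1, State: backup, VRRP Mode: Active
  Dead timer: 2.715s, Master priority: 200, Master router: 10.187.132.2

Physical interface: ge-1/0/2, Unit: 102, Vlan-id: 102, Address: 10.187.132.35/27
  Interface state: up, Group: 2, State: master, VRRP Mode: Active
  Advertisement Timer: 0.816s, Master router: 10.187.132.35
"""

# One regex per wanted field, mirroring the Perl captures.
FIELDS = {
    "interface": r"interface:\s+([^,]+)",
    "address": r"Address:\s+(\S+)",
    "State": r"State:\s+([^,]+)",
    "Master-router": r"Master router:\s+(\S+)",
}

def parse(text):
    records = []
    # Blank-line-separated paragraphs play the role of Perl's $/ = ''.
    for para in re.split(r"\n\s*\n", text.strip()):
        rec = {}
        for key, pattern in FIELDS.items():
            m = re.search(pattern, para)
            if m:
                rec[key] = m.group(1)
        records.append(rec)
    return records

for rec in parse(SAMPLE):
    print(rec)
```

Note that `State:` (capital S) does not match `Interface state:`, so each pattern lands on the intended field even though several lines mention "state".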