MxToolbox: Loop detected! We were referred back to IP - bind9

I followed the tutorial for DNSSEC at https://www.digitalocean.com/community/tutorials/how-to-setup-dnssec-on-an-authoritative-bind-dns-server--2
Here is my zone file:
$ORIGIN .
$TTL 86400 ; 1 day
example.net  IN SOA  ns1.example.net. root.mailserver.net. (
                2016091915 ; serial
                43200      ; refresh (12 hours)
                300        ; retry (5 minutes)
                1209600    ; expire (2 weeks)
                86400      ; minimum (1 day)
                )
             NS      ns1.example.net.
             NS      ns2.example.net.
$TTL 60 ; 1 minute
             A       ...
$TTL 86400 ; 1 day
             TXT     ...
             DNSKEY  256 3 7 ...
             DNSKEY  257 3 7 ...
$ORIGIN example.net.
ns1          A       VPS_IP
ns2          A       VPS_IP
In GoDaddy, I created two hosts (ns1.example.net and ns2.example.net), both pointing to the same IP, VPS_IP. The zone is configured on a VPS with IP VPS_IP.
Almost everything works: I can successfully query the A and other records of my zone, and they correctly point to the desired IP.
I checked with mxtoolbox.com, using 'dns:example.net', and everything is fine except for a warning saying the nameservers are part of the same subnet (expected, since they are the same VPS_IP).
However, when I use mxtoolbox.com to check for the DNSKEY (dnskey:example.net) I get this message: Loop detected! We were referred back to 'VPS_IP'. All other queries using mxtoolbox.com are fine.
Also, when I try to add the DS records in GoDaddy, I get this error:
We are unable to validate your data at this time. Please try again later. If the problem persists, contact customer support.
Are both errors related? What could be wrong in my zone file to cause that error from MxToolbox?

It turns out that the problem is using the same host for both nameservers. They must be different.

I actually fixed this by adding ns1 and ns2 A records to the domain name:
ns1.janglehost.com. [142.112.212.219] [TTL=172800]
ns2.janglehost.com. [142.112.212.219] [TTL=172800]
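To verify the delegation and key lookup end to end after a change like this, a few dig queries help; this is just a sketch, with example.net and VPS_IP standing in for your real domain and server:
# Delegation (NS records) as seen through a recursive resolver:
dig +short NS example.net
# DNSKEY records straight from your authoritative server:
dig DNSKEY example.net @VPS_IP +dnssec
# Trace the full resolution path to spot referral loops:
dig +trace DNSKEY example.net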


fail2ban - how to ban an IP permanently after it was banned 3 times temporarily

I have set up the fail2ban service on CentOS 8 following this tutorial: https://www.cyberciti.biz/faq/how-to-protect-ssh-with-fail2ban-on-centos-8/.
I configured the settings similarly to the tutorial above, like this:
[DEFAULT]
# Ban IP/hosts for 24 hours (24h * 3600s = 86400s):
bantime = 86400
# An ip address/host is banned if it has generated "maxretry" during the last "findtime" seconds.
findtime = 1200
maxretry = 3
# "ignoreip" can be a list of IP addresses, CIDR masks or DNS hosts. Fail2ban
# will not ban a host which matches an address in this list. Several addresses
# can be defined using space (and/or comma) separator. For example, add your
# static IP address that you always use for login such as 103.1.2.3
#ignoreip = 127.0.0.1/8 ::1 103.1.2.3
# Call iptables to ban IP address
banaction = iptables-multiport
# Enable sshd protection
[sshd]
enabled = true
I would like an IP to be banned permanently after it was banned 3 times temporarily. How do I do that?
Persistent banning is not advisable: it simply overloads your netfilter subsystem unnecessarily (as well as fail2ban)... It is enough to have a long ban.
If you use v0.11, you can use the bantime increment feature; your config may look like the one in this answer - https://github.com/fail2ban/fail2ban/discussions/2952#discussioncomment-414693
[sshd]
# initial ban time:
bantime = 1h
# incremental banning:
bantime.increment = true
# default factor (causes increment - 1h -> 1d 2d 4d 8d 16d 32d ...):
bantime.factor = 24
# max banning time = 5 week:
bantime.maxtime = 5w
But note that if this feature is enabled, it also affects maxretry, so the 2nd and subsequent bans of IPs already known as bad occur much earlier than after 3 attempts (maxretry is effectively halved each time).
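With v0.11 you can also inspect what is actually in effect per jail via fail2ban-client; a quick sketch (jail name sshd as in the config above):
# Current bans and failure counters for the sshd jail:
fail2ban-client status sshd
# Effective settings after the incremental logic is applied:
fail2ban-client get sshd bantime
fail2ban-client get sshd maxretry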
You can use the [recidive] jail with bantime = -1 for a permanent ban. Example jail.local:
# Jail for more extended banning of persistent abusers
# !!! WARNINGS !!!
# 1. Make sure that your loglevel specified in fail2ban.conf/.local
# is not at DEBUG level -- which might then cause fail2ban to fall into
# an infinite loop constantly feeding itself with non-informative lines
# 2. Increase dbpurgeage defined in fail2ban.conf to e.g. 648000 (7.5 days)
# to maintain entries for failed logins for sufficient amount of time
[recidive]
enabled = true
logpath = /var/log/fail2ban.log
banaction = %(banaction_allports)s
bantime = -1 ; permanent
findtime = 86400 ; 1 day
maxretry = 6
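After adding the jail, a quick way to confirm it is loaded and watching fail2ban's own log (standard fail2ban-client commands):
fail2ban-client reload
fail2ban-client status recidive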
General note:
Use SSH key auth and set "AllowGroups" or "AllowUsers" in sshd_config. Most SSH login attempts will stop after a few tries. I also notice on my servers that the number of attempts keeps decreasing over months and years. A hardening sketch follows below.
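As a sketch of that hardening (the group name sshadmins is an assumption; adapt it to your setup and make sure your own user is in the group first):
# Restrict logins to one admin group and disable password auth:
printf 'AllowGroups sshadmins\nPasswordAuthentication no\n' | sudo tee -a /etc/ssh/sshd_config
# Validate the config before reloading, so a typo cannot lock you out:
sudo sshd -t && sudo systemctl reload sshd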

Nagios event handler ignoring check interval

I have recently created an event handler for a service check which will restart Tomcat on 3 different boxes.
The check settings are:
5 checks
2 Minute checks when Ok
5 Minute checks otherwise
In the event handler script I have:
# What state is the iOS PN in?
case "$1" in
    OK)
        # The service is ok, so don't do anything...
        ;;
    WARNING)
        # Is this a "soft" or a "hard" state?
        case "$2" in
            SOFT)
                # Check number
                case "$3" in
                    2)
                        echo "`date` Restarting Tomcat on Node 1 for iOS PN (2nd soft warning state)..." >> /tmp/iOSPN.log
                        ;;
                    3)
                        echo "`date` Restarting Tomcat on Node 2 for iOS PN (3rd soft warning state)..." >> /tmp/iOSPN.log
                        ;;
                    4)
                        echo "`date` Restarting Tomcat on Node 3 for iOS PN (4th soft warning state)..." >> /tmp/iOSPN.log
                        ;;
                esac
                ;;
            HARD)
                # Do nothing, let Nagios send the alert
                ;;
        esac
        ;;
    CRITICAL)
        # In theory nothing should reach this point...
        ;;
esac
exit 0
So the event handler should restart Tomcat on node 1 after the 2nd warning check, wait 5 minutes before checking again, restart node 2 if there is still an issue, then wait another 5 minutes, check again, and restart node 3 if the issue persists.
However, when I check the log file I see the following:
Thu Apr 18 15:09:13 2019 Restarting Tomcat on Node 1 for iOS PN (2nd soft warning state)...
Thu Apr 18 15:09:23 2019 Restarting Tomcat on Node 2 for iOS PN (3rd soft warning state)...
Thu Apr 18 15:09:33 2019 Restarting Tomcat on Node 3 for iOS PN (4th soft warning state)...
As you can see, it would have restarted each box after 10 seconds, not 5 minutes. (I have removed the lines which actually restart Tomcat, since a restart cannot complete in such a short amount of time.)
I cannot see anything in the Nagios logs explaining why it ran the next check so soon afterwards, so any help would be appreciated.
Additional:
This is the service definition:
define service{
    use                    5check-service
    host_name              ACTIVEMQ1
    contact_groups         tyrell-admins-non-critical
    service_description    ActiveMQ - iOS PushNotification Queue Pending Items
    event_handler          restartRemote_Tomcat!$SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$
    check_command          check_activemq_queue_item2!http://activemq1:8161/admin/xml/queues.jsp!IosPushNotificationQueue!100!300
}
define service{
    name                            5check-service  ; The 'name' of this service template
    active_checks_enabled           1       ; Active service checks are enabled
    passive_checks_enabled          1       ; Passive service checks are enabled/accepted
    parallelize_check               1       ; Active service checks should be parallelized (disabling this can lead to major performance problems)
    obsess_over_service             1       ; We should obsess over this service (if necessary)
    check_freshness                 0       ; Default is to NOT check service 'freshness'
    notifications_enabled           1       ; Service notifications are enabled
    event_handler_enabled           1       ; Service event handler is enabled
    flap_detection_enabled          1       ; Flap detection is enabled
    failure_prediction_enabled      1       ; Failure prediction is enabled
    process_perf_data               1       ; Process performance data
    retain_status_information       1       ; Retain status information across program restarts
    retain_nonstatus_information    1       ; Retain non-status information across program restarts
    is_volatile                     0       ; The service is not volatile
    check_period                    24x7    ; The service can be checked at any time of the day
    max_check_attempts              5       ; Re-check the service up to 5 times in order to determine its final (hard) state
    normal_check_interval           2       ; Check the service every 2 minutes under normal conditions
    retry_check_interval            5       ; Re-check the service every 5 minutes until a hard state can be determined
    contact_groups                  support ; Notifications get sent out to everyone in the 'support' group
    notification_options            w,u,c,r ; Send notifications about warning, unknown, critical, and recovery events
    notification_interval           5       ; Re-notify about service problems every 5 mins
    notification_period             24x7    ; Notifications can be sent out at any time
    register                        0       ; DON'T REGISTER THIS DEFINITION - IT'S NOT A REAL SERVICE, JUST A TEMPLATE!
}

Fail2Ban not working on Ubuntu 16.04 (Date issues)

I have a problem with Fail2Ban:
2018-02-23 18:23:48,727 fail2ban.datedetector [4859]: DEBUG Matched time template (?:DAY )?MON Day 24hour:Minute:Second(?:\.Microseconds)?(?: Year)?
2018-02-23 18:23:48,727 fail2ban.datedetector [4859]: DEBUG Got time 1519352628.000000 for "'Feb 23 10:23:48'" using template (?:DAY )?MON Day 24hour:Minute:Second(?:\.Microseconds)?(?: Year)?
2018-02-23 18:23:48,727 fail2ban.filter [4859]: DEBUG Processing line with time:1519352628.0 and ip:158.140.140.217
2018-02-23 18:23:48,727 fail2ban.filter [4859]: DEBUG Ignore line since time 1519352628.0 < 1519381428.727771 - 600
It says "ignoring Line" because the time skew is greater than the inspection period. However, this is not the case.
If indeed 1519352628.0 is derived from Feb 23, 10:23:48, then the other date: 1519381428.727771 must be wrong.
I have run tests for 'invalid user', hitting this repeatedly, but Fail2ban always ignores the line.
I am positive I am getting Filter Matches within 1 second.
This is Ubuntu 16.04 and Fail2ban 0.9.3
Thanks for any help you might have!
Looks like there is a time zone issue on your machine that might be causing the confusion. Try setting the correct time zone, then restart both rsyslogd and fail2ban.
Regarding your debug log:
1519352628.0 = Feb 23 02:23:48
-> timestamp parsed from line in log file with time Feb 23 10:23:48 - 08:00 time zone offset!
1519381428.727771 = Feb 23 10:23:48
-> timestamp of current time when fail2ban processed the log.
Coincidentally, this is the same time as the time in the log file. That's what makes it so confusing in this case.
1519381428.727771 - 600 = Feb 23 10:13:48
-> limit for how long to look backwards in time in the log file since you've set findtime = 10m in jail.conf.
Fail2ban 'correctly' ignores the log entry that appears to be older than 10 minutes, because of the set time zone -08:00.
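You can check this interpretation yourself; a sketch using GNU date and systemd tools (the zone name below is only an example):
date -u -d @1519352628   # -> Fri Feb 23 02:23:48 UTC 2018
date -u -d @1519381428   # -> Fri Feb 23 10:23:48 UTC 2018
timedatectl              # shows the currently configured time zone
sudo timedatectl set-timezone Etc/UTC   # example: set the zone your logs should use
sudo systemctl restart rsyslog fail2ban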
btw:
If you need IPv6 support for banning, consider upgrading fail2ban to v0.10.x.
And there is also a brand new fail2ban version v0.11 (not yet marked stable, but running without issue for 1+ month on my machines) that has this wonderful new auto-increment bantime feature.

How to configure an Openfire server with HttpUploadComponent for offline file transfers?

I use Openfire with Conversations and would like to implement offline file transfers with HttpUploadComponent. I copied the httpupload folder inside the openfire folder, as in the screenshot below:
Then I did the configuration below in Openfire:
I also installed Python and configured the config.yml file in the httpupload folder like below:
component_jid: upload.192.168.105.164
component_secret: 1234
component_port: 5275
storage_path : ./var/lib/httpupload/
max_file_size: 20971520 #20MiB
http_address: 0.0.0.0 #use 0.0.0.0 if you don't want to use a proxy
http_port: 8080
get_url : http://192.168.105.164:8080/
put_url : http://192.168.105.164:8080/
expire_interval: 82800 #time in secs between expiry runs (82800 secs = 23 hours). set to '0' to disable
expire_maxage: 2592000 #files older than this (in secs) get deleted by expiry runs (2592000 = 30 days)
user_quota_hard: 104857600 #100MiB. set to '0' to disable rejection on uploads over hard quota
user_quota_soft: 78643200 #75MiB. set to '0' to disable deletion of old uploads over soft quota an expiry runs
allow_web_clients: true #answer OPTIONS requests to allow web clients to upload files
I ran the HttpUpload server as well:
After starting the Python server, if you go to Openfire > Server Settings > External Components and view the external components, you'll see in the first line whether the session is created or not:
After all of this, when I try to send a file from the Android client, it fails and gives me this error:
Where is my problem? Thanks.
In the attached error screenshot, the last word is 403, which indicates that the problem is related to authorization on the HttpUploadComponent end.
I started checking the code of this component: on line 83 of https://github.com/siacs/HttpUploadComponent/blob/master/httpupload/server.py it picks the variable "storage_path" from the configuration to decide the directory in which to place the file.
As mentioned in your question, you have set storage_path : ./var/lib/httpupload/
But you are on a Windows machine, and this path is invalid there.
Try giving a valid Windows path.
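For example (the exact directory is hypothetical; any path the component can write to works), the line in config.yml could become:
storage_path : C:/httpupload/storage/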

Zookeeper - three nodes and nothing but errors

I have three ZooKeeper nodes. All ports are open. The IP addresses are correct. Below is my config file. All nodes were booted by Chef, and all have the same install and config file.
# The number of milliseconds of each tick
tickTime=3000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# Place the dataLogDir to a separate physical disc for better performance
# dataLogDir=/disk2/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=111.111.111:2888:3888
server.2=111.111.112:2888:3888
server.3=111.111.113:2888:3888
Here is the error for one of the nodes. So... I am rather confused about how I could get an error, since the config is rather vanilla. All three nodes are doing the same thing.
2012-07-16 05:16:57,558 - INFO [main:QuorumPeerConfig#90] - Reading configuration from: /etc/zookeeper/conf/zoo.cfg
2012-07-16 05:16:57,567 - INFO [main:QuorumPeerConfig#310] - Defaulting to majority quorums
2012-07-16 05:16:57,572 - FATAL [main:QuorumPeerMain#83] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /etc/zookeeper/conf/zoo.cfg
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:110)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:99)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:76)
Caused by: java.lang.IllegalArgumentException: serverid replace this text with the cluster-unique zookeeper's instance id (1-255) is not a number
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:333)
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:106)
... 2 more
You need to create a file named myid and put it into the ZooKeeper data directory, one on each server. It consists of a single line containing only the text of that machine's id, so the myid of server 1 would contain the text "1" and nothing else. The id must be unique within the ensemble and should have a value between 1 and 255.
See more at http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup
These lines in your config are your servers and IPs:
server.1=111.111.111:2888:3888
server.2=111.111.112:2888:3888
server.3=111.111.113:2888:3888
Then create the myid file on each of the nodes, with the value 1 on 111.111.111, 2 on 111.111.112, and 3 on 111.111.113, under the directory set in dataDir (/var/lib/zookeeper).
If you place the value "1" (with quotes) in the myid file you will get a NumberFormatException, and "Invalid config, exiting abnormally" if the myid file is created with any extension.
Therefore, create the myid file without any extension and place the integer values 1, 2, 3 on the corresponding servers, without double quotes.
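A minimal sketch of creating them, assuming dataDir=/var/lib/zookeeper as in your config (run the matching line on each server):
echo 1 | sudo tee /var/lib/zookeeper/myid   # on server.1 (111.111.111)
echo 2 | sudo tee /var/lib/zookeeper/myid   # on server.2 (111.111.112)
echo 3 | sudo tee /var/lib/zookeeper/myid   # on server.3 (111.111.113)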