Bind nsupdate command getting REFUSED error - bind9

I am using the nsupdate command to update a zone, but I receive the error message "update failed: REFUSED". I created the key with "rndc-confgen -a -c /etc/remote_rndc_key".
My named.conf is as follows:
options {
    listen-on port 53 { 9.82.159.110; };
    listen-on-v6 port 53 { ::1; };
    directory "/var/named";
    dump-file "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    secroots-file "/var/named/data/named.secroots";
    recursing-file "/var/named/data/named.recursing";
    allow-query { any; };
    allow-update { key remote_rndc_key; };
    recursion yes;
    dnssec-enable no;
    dnssec-validation no;
    pid-file "/run/named/named.pid";
};
logging {
    channel default_debug {
        file "data/named.run";
        severity debug 3;
    };
};
zone "." IN {
    type hint;
    file "named.ca";
};
include "/etc/remote_rndc_key";
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
zone "test.com" IN {
    type master;
    file "test.com.zone";
};
zone "82.9.in-addr.arpa" IN {
    type master;
    file "test.com.local";
};
key "remote_rndc_key" {
    algorithm hmac-md5;
    secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
controls {
    inet 9.82.159.110 port 953
        allow { 9.82.224.110; } keys { "remote_rndc_key"; };
};
/etc/remote_rndc_key:
key "rndc-key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
/var/named/test.com.zone:
$TTL 1D
@       IN SOA  ns1 rname.invalid. (
                2019062901  ; serial
                5M          ; refresh
                1H          ; retry
                1W          ; expire
                3H )        ; minimum
        NS      ns1
ns1     IN A    9.82.159.110
www     IN A    9.82.100.100
Using nsupdate:
[root@localhost tmp]# nsupdate -v -d -k ./remote_rndc_key
Creating key...
Creating key...
namefromtext
keycreate
> server 9.82.159.110
> update add ftps.test.com 600 A 1.1.1.2
> send
Reply from SOA query:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 40666
;; flags: qr aa ra; QUESTION: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;ftps.test.com. IN SOA
;; AUTHORITY SECTION:
test.com. 0 IN SOA ns1.test.com. rname.invalid. 2019062901 300 3600 604800 10800
;; TSIG PSEUDOSECTION:
rndc-key. 0 ANY TSIG hmac-md5.sig-alg.reg.int. 1649854961 300 16 MFdWnAJcNEQ17QovaBmzTw== 40666 NOERROR 0
Found zone name: test.com
The master is: ns1.test.com
Sending update to 9.82.159.110#53
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 59745
;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 1, ADDITIONAL: 1
;; UPDATE SECTION:
ftps.test.com. 600 IN A 1.1.1.2
;; TSIG PSEUDOSECTION:
rndc-key. 0 ANY TSIG hmac-md5.sig-alg.reg.int. 1649854961 300 16 vJjzs0bT4QxHW40mL/MT7g== 59745 NOERROR 0
Reply from update query:
;; ->>HEADER<<- opcode: UPDATE, status: REFUSED, id: 59745
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
;; ZONE SECTION:
;test.com. IN SOA
;; TSIG PSEUDOSECTION:
rndc-key. 0 ANY TSIG hmac-md5.sig-alg.reg.int. 1649854961 300 16 FAcO+t5JUdOJdC1mRuHNeA== 59745 NOERROR 0
The named server log is shown below:
[root@localhost named]# systemctl status named
● named.service - Berkeley Internet Name Domain (DNS)
Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-04-13 20:36:14 CST; 29min ago
Process: 3371415 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z "$NAMEDCONF"; else echo "Checking of zone files is disabled"; fi (code=exited, >
Process: 3371418 ExecStart=/usr/sbin/named -u named -c ${NAMEDCONF} $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 3371421 (named)
Tasks: 35
Memory: 88.8M
CGroup: /system.slice/named.service
└─3371421 /usr/sbin/named -u named -c /etc/named.conf
Apr 13 20:36:32 localhost.localdomain named[3371421]: client #0x7ff1f0108770 9.82.224.110#59471/key rndc-key: signer "rndc-key" denied
What could be the reason?

I confused the key name with the key file name:
/etc/remote_rndc_key:
key "rndc-key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
should be changed to:
key "remote_rndc_key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
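As a side note, the file ends up containing a key called "rndc-key" because that is the default name rndc-confgen uses when no key name is given. After renaming the key, a quick way to verify the fix (a sketch; adjust addresses and paths to your setup):
# the key name inside the file must now match the name in allow-update
nsupdate -d -k /etc/remote_rndc_key <<'EOF'
server 9.82.159.110
update add ftps.test.com 600 A 1.1.1.2
send
EOF
# the debug output should show the TSIG signer as "remote_rndc_key."
# and the update reply status should be NOERROR instead of REFUSED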

I got this error today on my "hidden primary" BIND DNS server and wasted a couple of hours trying to find the reason for the failure.
In the end I got tired, tried again, and then it worked.
So my advice is: try again, it may be a bug.

Related

How to get the whole HTTP request content with bpftrace

I want to use bpftrace to capture the whole HTTP request content received by my program.
cat /etc/redhat-release
CentOS Linux release 8.0.1905 (Core)
uname -a
Linux infra-test 4.18.0-305.12.1.el8_4.x86_64 #1 SMP Wed Aug 11
01:59:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
The bpftrace script (http.bt):
BEGIN
{
printf("Welcome to Offensive BPF... Use Ctrl-C to exit.\n");
}
tracepoint:syscalls:sys_enter_accept*
{
@sk[tid] = args->upeer_sockaddr;
}
tracepoint:syscalls:sys_exit_accept*
/ @sk[tid] /
{
@sys_accepted[tid] = @sk[tid];
}
tracepoint:syscalls:sys_enter_read
/ @sys_accepted[tid] /
{
printf("->sys_enter_read for allowed thread (fd: %d)\n", args->fd);
@sys_read[tid] = args->buf;
}
tracepoint:syscalls:sys_exit_read
{
if (@sys_read[tid] != 0)
{
$len = args->ret;
$cmd = str(@sys_read[tid], $len);
printf("*** Command: %s\n", $cmd);
}
}
END
{
clear(@sk);
clear(@sys_read);
clear(@sys_accepted);
printf("Exiting. Bye.\n");
}
Then I start my server on port 8080 and run bpftrace:
Attaching 8 probes...
Welcome to Offensive BPF... Use Ctrl-C to exit.
Then I run curl:
curl -H "traceparent: 00-123-456-01" 127.0.0.1:8080/misc/ping -lv
But bpftrace only outputs the following:
bpftrace --unsafe http.bt
Attaching 8 probes...
Welcome to Offensive BPF... Use Ctrl-C to exit.
->sys_enter_read for allowed thread (fd: 15)
*** Command: GET /misc/ping HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: curl
->sys_enter_read for allowed thread (fd: 15)
*** Command: GET /misc/ping HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: curl
The output is not the whole request that curl sent; I don't know why. Can anyone help?
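PS: one thing I suspect is bpftrace's string limit: as far as I know, str() is capped by the BPFTRACE_STRLEN environment variable (64 bytes by default), and the server may also consume the request over several read() calls, so each printf only shows one chunk. A minimal sketch of raising the limit, assuming that is the cause:
# raise the per-string limit before running the script
# (bpftrace still enforces an upper bound, so very large requests
#  will remain truncated)
BPFTRACE_STRLEN=200 bpftrace --unsafe http.bt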

How to remove an offline host node from OpenStack with kolla-ansible

I have an offline host node which serves as a compute node, control node and storage node. This host was shut down by accident and cannot be brought back online. All services on that node are down but still enabled, and I can't set them to disabled.
So I can't remove the host by running:
kolla-ansible -i multinode stop --yes-i-really-really-mean-it --limit node-17
I get this error:
TASK [Gather facts] ********************************************************************************************************************************************************************************************************************
fatal: [node-17]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host node-17 port 22: Connection timed out", "unreachable": true}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
node-17 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
How can I remove that offline host node? Thanks.
PS: Why do I want to remove that offline host?
node-14 (online): **management node where kolla-ansible is installed**; compute node, control node and storage node
node-15(online) : compute node, control node and storage node
node-17(offline) : compute node, control node and storage node
osc99 (adding) : compute node, control node and storage node
Because when I deploy a new host (osc99) with the node-17 line commented out in the multinode inventory file:
kolla-ansible -i multinode deploy --limit osc99
kolla-ansible reports this error:
TASK [keystone : include_tasks] ********************************************************************************************************************************************************************************************************
included: .../share/kolla-ansible/ansible/roles/keystone/tasks/init_fernet.yml for osc99
TASK [keystone : Waiting for Keystone SSH port to be UP] *******************************************************************************************************************************************************************************
ok: [osc99]
TASK [keystone : Initialise fernet key authentication] *********************************************************************************************************************************************************************************
ok: [osc99 -> node-14]
TASK [keystone : Run key distribution] *************************************************************************************************************************************************************************************************
fatal: [osc99 -> node-14]: FAILED! => {"changed": true, "cmd": ["docker", "exec", "-t", "keystone_fernet", "/usr/bin/fernet-push.sh"], "delta": "0:00:04.006900", "end": "2021-07-12 10:14:05.217609", "msg": "non-zero return code", "rc": 255, "start": "2021-07-12 10:14:01.210709", "stderr": "", "stderr_lines": [], "stdout": "Warning: Permanently added '[node.15]:8023' (ECDSA) to the list of known hosts.\r\r\nssh: connect to host node.17 port 8023: No route to host\r\r\nrsync: connection unexpectedly closed (0 bytes received so far) [sender]\r\nrsync error: unexplained error (code 255) at io.c(235) [sender=3.1.2]", "stdout_lines": ["Warning: Permanently added '[node.15]:8023' (ECDSA) to the list of known hosts.", "", "ssh: connect to host node.17 port 8023: No route to host", "", "rsync: connection unexpectedly closed (0 bytes received so far) [sender]", "rsync error: unexplained error (code 255) at io.c(235) [sender=3.1.2]"]}
NO MORE HOSTS LEFT *********************************************************************************************************************************************************************************************************************
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
osc99 : ok=120 changed=55 unreachable=0 failed=1 skipped=31 rescued=0 ignored=1
How can I fix this error? This is the main point of whether or not I can remove the offline host.
Maybe I could fix it by changing the init_fernet.yml file:
node-14:~$ cat .../share/kolla-ansible/ansible/roles/keystone/tasks/init_fernet.yml
---
- name: Waiting for Keystone SSH port to be UP
  wait_for:
    host: "{{ api_interface_address }}"
    port: "{{ keystone_ssh_port }}"
    connect_timeout: 1
  register: check_keystone_ssh_port
  until: check_keystone_ssh_port is success
  retries: 10
  delay: 5

- name: Initialise fernet key authentication
  become: true
  command: "docker exec -t keystone_fernet kolla_keystone_bootstrap {{ keystone_username }} {{ keystone_groupname }}"
  register: fernet_create
  changed_when: fernet_create.stdout.find('localhost | SUCCESS => ') != -1 and (fernet_create.stdout.split('localhost | SUCCESS => ')[1]|from_json).changed
  until: fernet_create.stdout.split()[2] == 'SUCCESS' or fernet_create.stdout.find('Key repository is already initialized') != -1
  retries: 10
  delay: 5
  run_once: True
  delegate_to: "{{ groups['keystone'][0] }}"

- name: Run key distribution
  become: true
  command: docker exec -t keystone_fernet /usr/bin/fernet-push.sh
  run_once: True
  delegate_to: "{{ groups['keystone'][0] }}"
by changing the delegate_to: "{{ groups['keystone'][0] }}" line? But I can't work out how to implement that.
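For reference, groups['keystone'][0] resolves to the first host of the keystone group in the multinode inventory, and in the stock kolla-ansible inventory that group is usually defined as a child of the control group. So the relevant part of my inventory with node-17 commented out looks roughly like this (a sketch, not the full file):
# multinode (excerpt)
[control]
node-14
node-15
#node-17
osc99

# defined further down in the stock file; this is why
# groups['keystone'][0] ends up being node-14 here
[keystone:children]
control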

Why does the Snort sfportscan log file output have event_ref with a value of 0 instead of event_id?

My config is as follows:
preprocessor sfportscan: proto { all } \
scan_type { all } \
sense_level { high } \
logfile { alert }
When I run Snort and use nmap to scan, the log file output is as follows:
Time: 02/23-12:54:21.183932
event_ref: 0
[Source ip address] -> [Destination ip address] (portscan) TCP Portscan
Priority Count: 9
Connection Count: 10
IP Count: 1
Scanner IP Range: [Destination ip address]:[Destination ip address]
Port/Proto Count: 10
Port/Proto Range: 981:12174
But the Snort documentation shows this:
Time: 09/08-15:07:31.603880
event_id: 2
192.168.169.3 -> 192.168.169.5 (portscan) TCP Filtered Portscan
Priority Count: 0
Connection Count: 200
IP Count: 2
Scanner IP Range: 192.168.169.3:192.168.169.4
Port/Proto Count: 200
Port/Proto Range: 20:47557
If there are open ports on the target, one or more additional tagged packet(s) will be appended:
Time: 09/08-15:07:31.603881
event_ref: 2
192.168.169.3 -> 192.168.169.5 (portscan) Open Port
Open Port: 38458
I do not get an event_id; instead there is event_ref and its value is 0.

Need an example of a kube-proxy config file

When installing Kubernetes 1.7.2, a warning about kube-proxy appears:
WARNING: all flags other than --config, --write-config-to, and --cleanup-iptables are deprecated. Please begin using a config file ASAP.
So I tried to make my own config file, like this:
{
    "bind-address": "10.110.200.42",
    "hostname-override": "10.110.200.42",
    "cluster-cidr": "172.30.0.0/16",
    "logtostderr": true,
    "v": 0,
    "allow-privileged": true,
    "master": "http://10.110.200.42:8080",
    "etcd-servers": "http://10.110.200.42:2379"
}
but I still get an error:
error: Object 'apiVersion' is missing in '{
I think I need an example of the config file, but I googled without any result; I even searched the source code in git and found nothing useful. Please help!
PS: I found a way to generate an example file: just use the --write-config-to command-line flag. The example is below:
apiVersion: componentconfig/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: ""
  qps: 5
clusterCIDR: ""
configSyncPeriod: 15m0s
conntrack:
  max: 0
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
featureGates: ""
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: ""
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpTimeoutMilliseconds: 250ms
I am using k8s version 1.10.3, and just for simplicity and testing, I disabled service accounts in the apiserver by adding the item:
--disable-admission-plugins=ServiceAccount
And for kube-proxy, just add the --master flag, e.g.:
./kube-proxy --master 127.0.0.1:8080 --v=3
and the kube-proxy turns out to be working.
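Alternatively, if you want to keep using --config instead of command-line flags, a minimal file based on the --write-config-to output above might look like this (a sketch; the path is hypothetical and, as far as I know, the apiserver address goes into the referenced kubeconfig rather than this file):
# /var/lib/kube-proxy/config.yaml (hypothetical path)
apiVersion: componentconfig/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 10.110.200.42
hostnameOverride: 10.110.200.42
clusterCIDR: 172.30.0.0/16
clientConnection:
  kubeconfig: /var/lib/kube-proxy/kubeconfig
and then start the proxy with:
kube-proxy --config=/var/lib/kube-proxy/config.yaml --v=3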

Keepalived vrrp_script does not failover

I have 2 nodes with keepalived and haproxy services (CentOS 7).
If I shut down one node, everything works fine. But I also want to fail over the VIPs if haproxy goes down.
This is the 1st node's config:
vrrp_script ha_check {
    script "/etc/keepalived/haproxy_check"
    interval 2
    weight 21
}
vrrp_instance VI_1 {
    state MASTER
    interface eno16777984
    virtual_router_id 151
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111
    }
    virtual_ipaddress {
        10.0.100.233
    }
    smtp_alert
    track_script {
        ha_check
    }
}
2nd node:
vrrp_script ha_check {
    script "/etc/keepalived/haproxy_check"
    interval 2
    fall 2
    rise 2
    timeout 1
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eno16777984
    virtual_router_id 151
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111
    }
    virtual_ipaddress {
        10.0.100.233
    }
    smtp_alert
    track_script {
        ha_check
    }
}
cat /etc/keepalived/haproxy_check
systemctl status haproxy | grep "inactive"
When I stop haproxy, it still does not fail over the VIPs to the other host.
[root@cks-hatest1 keepalived]# tail /var/log/messages
Nov 30 10:35:24 cks-hatest1 Keepalived_vrrp[5891]: VRRP_Script(ha_check) failed
Nov 30 10:35:33 cks-hatest1 systemd: Started HAProxy Load Balancer.
Nov 30 10:35:45 cks-hatest1 systemd: Stopping HAProxy Load Balancer...
Nov 30 10:35:45 cks-hatest1 systemd: Stopped HAProxy Load Balancer.
Nov 30 10:35:46 cks-hatest1 Keepalived_vrrp[5891]: VRRP_Script(ha_check) succeeded
What am I doing wrong? Thank you in advance!
In your script you are checking whether the output of
systemctl status haproxy
contains the keyword "inactive". Is that the value you actually get when you stop the haproxy service manually?
Also, as soon as the haproxy service is stopped, your log shows it being started again. Can you verify that?
Also, try replacing the script with:
script "killall -0 haproxy"
It's easy. Try this for example:
vrrp_script check_haproxy {
    script "pidof haproxy"
    interval 2
    weight 2
}
At the end of the config you should add the following part too:
track_script {
    check_haproxy
}
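As I understand keepalived's weight handling, a positive weight is added to a node's priority only while its check script succeeds, so with the priorities from the question the election works out roughly like this (a sketch, not taken from the logs):
# MASTER: priority 101, weight 2 -> 103 while the check passes, 101 when it fails
# BACKUP: priority 100, weight 2 -> 102 while the check passes, 100 when it fails
# When haproxy dies only on the MASTER, 101 < 102, so the BACKUP takes
# over the VIP; once haproxy is back, the MASTER preempts and reclaims it.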