Before I begin, kindly note that I am a newbie and still learning.
About 10 hours ago, I had to move all my hosted websites to a new server (more specifically, from one droplet to a new droplet). Since the websites moved to a new server, their IP addresses changed too, so I updated the DNS configuration for all the websites to point to the new IP address. However, I was unaware that the previous DNS configuration had set the TTL to 86400 (1 day). I learned about this concept after searching on Google for why my websites would still resolve to the old server.
That basically means the old DNS records are cached for up to 1 day, and I have to wait that long for the domain names to resolve to the new server.
So I tried the nslookup and dig commands on the domains to check the remaining TTL, and this is where I got confused.
The nslookup command with the -debug parameter gave the following result:
Please note: in the actual nslookup output I have replaced my website's domain name with (mywebsite.com) and my new server's IP address with (new.server.ip.address).
nslookup -debug mywebsite.com new.server.ip.address
------------
Got answer:
HEADER:
opcode = QUERY, id = 1, rcode = REFUSED
header flags: response, want recursion
questions = 1, answers = 0, authority records = 0, additional = 0
QUESTIONS:
address.ip.server.new.in-addr.arpa, type = PTR, class = IN
------------
Server: UnKnown
Address: new.server.ip.address
------------
Got answer:
HEADER:
opcode = QUERY, id = 2, rcode = NOERROR
header flags: response, auth. answer, want recursion
questions = 1, answers = 1, authority records = 2, additional = 2
QUESTIONS:
mywebsite.com, type = A, class = IN
ANSWERS:
-> mywebsite.com
internet address = new.server.ip.address
ttl = 14400 (4 hours)
AUTHORITY RECORDS:
-> mywebsite.com
nameserver = ns2.centos-webpanel.com
ttl = 86400 (1 day)
-> mywebsite.com
nameserver = ns1.centos-webpanel.com
ttl = 86400 (1 day)
ADDITIONAL RECORDS:
-> ns1.centos-webpanel.com
internet address = 127.0.0.1
ttl = 14400 (4 hours)
-> ns2.centos-webpanel.com
internet address = 127.0.0.1
ttl = 14400 (4 hours)
------------
------------
Got answer:
HEADER:
opcode = QUERY, id = 3, rcode = NOERROR
header flags: response, auth. answer, want recursion
questions = 1, answers = 0, authority records = 1, additional = 0
QUESTIONS:
mywebsite.com, type = AAAA, class = IN
AUTHORITY RECORDS:
-> mywebsite.com
ttl = 86400 (1 day)
primary name server = ns1.centos-webpanel.com
responsible mail addr = myemail#gmail.com
serial = 2013071601
refresh = 86400 (1 day)
retry = 7200 (2 hours)
expire = 3600000 (41 days 16 hours)
default TTL = 86400 (1 day)
------------
Name: mywebsite.com
Address: new.server.ip.address
Now, here's what confused me. In the result above, the TTL still shows 86400 even 10 hours after I changed the DNS configuration. I expected it to show the remaining TTL, but it stays constant at 86400. Does that mean the DNS will never update for my websites? The TTL just does not decrease.
To verify further, I tried Linux's dig command, and here's the result I got.
Please note: in the actual dig output I have replaced my website's domain name with (mywebsite.com) and my old server's IP address with (old.server.ip.address).
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.62.rc1.el6_9.5 <<>> mywebsite.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15423
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;mywebsite.com. IN A
;; ANSWER SECTION:
mywebsite.com. 83221 IN A old.server.ip.address
;; Query time: 0 msec
;; SERVER: 67.207.67.2#53(67.207.67.2)
;; WHEN: Mon Feb 5 01:55:05 2018
;; MSG SIZE rcvd: 44
Here, the dig command resolves the domain to my old server's IP address and shows the TTL as 83221. As I said, it has been more than 10 hours since I updated the DNS configuration to point to my new server's IP address, yet the TTL still says 83221.
Running the dig command again does show the TTL decreasing, though, unlike the nslookup command.
So, what do you think is going on here? Or am I misunderstanding something? If so, please correct me. It would really help if someone could explain what is happening and whether something is wrong with my new server.
In case it helps, I have kept the websites' files on both the old server and the new server.
Thanks.
Edit (Solved):
Here's what fixed all the issues I was facing. I use CentOS Web Panel on my server, which comes bundled with the FreeDNS manager. A bug in FreeDNS kept my nameservers and my domains' DNS records from updating. I switched to Cloudflare DNS and that fixed everything.
Your domain is not correctly configured; please use online diagnostic tools such as dnsviz.net. See the report: http://dnsviz.net/d/mkinfra.in/dnssec/
You are in a lame delegation situation.
If we query .IN authoritative nameservers for your domain, they reply:
mkinfra.in. 86400 IN NS ns1.centos-webpanel.com.
mkinfra.in. 86400 IN NS ns2.centos-webpanel.com.
mkinfra.in. 86400 IN NS ns3.centos-webpanel.com.
mkinfra.in. 86400 IN NS ns4.centos-webpanel.com.
mkinfra.in. 86400 IN NS ns5.centos-webpanel.com.
If we query any of these 5 nameservers for your domain, they reply:
mkinfra.in. 86400 IN NS ns1.centos-webpanel.com.
mkinfra.in. 86400 IN NS ns2.centos-webpanel.com.
Which is not the same set of records. You will first need to resolve this discrepancy.
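You can reproduce this check yourself with dig; a sketch (substitute one of the .IN servers returned by the first command):
# 1. Find the .IN TLD (parent) nameservers:
dig +short NS in.
# 2. Ask one of them for the delegation of your zone:
dig @<one-of-the-.in-servers> NS mkinfra.in +norecurse
# 3. Ask your own nameserver and compare the two answers:
dig @ns1.centos-webpanel.com NS mkinfra.in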
For your website they all reply the same:
www.mkinfra.in. 86400 IN CNAME mkinfra.in.
mkinfra.in. 86400 IN A 139.59.63.210
So they all reply with your old IP and not the new one. Your problem has nothing to do with TTLs: the authoritative nameservers for your domain are still not serving the new IP address you want, so you have to configure them properly. If you do it yourself, remember to update the serial of the zone for any change.
The serial is in fact 2018012401, which follows the pattern YYYYMMDDXX, so we can infer that the zone was changed on January 24th but not since then (or it was changed but the serial was not updated, so the new content is not taken into account at all).
And to reply to your other question: if you query an authoritative nameserver you will always get the same TTL, which is by design. It is only when you query a resolving, caching nameserver that you will see the TTL decreasing from one query to another, because the cache is slowly forgetting the data it resolved in the past.
Never use nslookup; always use dig, and always specify the exact command you used when asking people to check what you are doing (it is very important to specify the nameserver you query with dig's @ parameter, since the results will be vastly different from an authoritative versus a recursive nameserver).
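For example, roughly (using the nameservers and resolver already seen above; adjust to your own setup):
# Authoritative server: the answer always carries the full configured TTL (86400 here):
dig @ns1.centos-webpanel.com mkinfra.in A
# Caching resolver: the TTL counts down between repeated queries:
dig @67.207.67.2 mkinfra.in A
# Check the zone serial on the authoritative server:
dig @ns1.centos-webpanel.com mkinfra.in SOA +short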
The domain name has more than one address:
a.test.com: 10.10.10.10
a.test.com: 10.10.10.11
I used nsupdate to add them,
but how can I update just one of the records?
a.test.com: 10.10.10.10 -> 10.10.10.12
I tried deleting the reverse record (10.10.10.10.in-addr.arpa) and it worked.
But when I delete a.test.com, the other record is deleted too.
So when I run nslookup a.test.com, none of the IP addresses can be found.
I want to know how I can delete just the specific record.
Finally, I got the solution:
nsupdate
>server 127.0.0.1
>update delete a.test.com 3600 IN A 10.10.10.10
>send
For the reverse IP:
>update delete 10.10.10.10.in-addr.arpa 3600 IN PTR a.test.com
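Putting it together for the original question (changing 10.10.10.10 to 10.10.10.12 while leaving 10.10.10.11 alone), a sketch of a single nsupdate session, assuming the zone on 127.0.0.1 accepts dynamic updates:
nsupdate
>server 127.0.0.1
>update delete a.test.com. A 10.10.10.10
>update add a.test.com. 3600 A 10.10.10.12
>send
>quit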
I need some help creating a custom filter for a custom app, a WebSocket server written in Node.js. As I understand it from other articles, the custom Node.js app needs to write a log recording any failed authentication attempts, which Fail2ban will then read to block the IP in question. I need an example of the log line my app should produce so that it can be scanned by Fail2ban, and also an example of a custom Fail2ban filter that reads that log to block IPs for brute-force attempts.
It's a really old question, but I found it on Google, so I will write an answer.
The most important thing is that the line you log needs to have the right timestamp, because fail2ban uses it to ban and unban. If the time in the log file differs from the system time, fail2ban will not find the line, so set the right timezone and time on the host system. In the example below I used UTC time plus the timezone offset, and everything works. Fail2ban recognizes different types of timestamps, but I didn't find a description of them; the fail2ban manual has two examples. There is also a command to check whether your line is matched by the regular expression you wrote; I really recommend using it. I also recommend using a regular expression tester.
The rest of the log line is not really important. You just need to include the user's IP.
Those are the most important points, but I will also walk through an example. I'm just learning, so I did this for educational purposes and I'm not sure the example makes complete sense, but it works. I used nginx, fail2ban, pm2, and Node.js with Express on Debian 10 to ban empty/bad POST requests based on Google reCAPTCHA. First, set the right time on your system.
On Debian 10 this worked:
timedatectl list-timezones
sudo timedatectl set-timezone your_time_zone
timedatectl    # to check the time
First of all, you need to pass the real user IP through nginx by adding a line to your server block:
sudo nano /etc/nginx/sites-available/example.com
Find the location block and add this line:
location / {
...
proxy_set_header X-Forwarded-For $remote_addr;
...
}
More about reverse proxy.
Now, in the Node.js app, just add:
app.set('trust proxy', true)
and you can now get the user IP using:
req.ip
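Put together, a minimal sketch (the route path and port are placeholders, not from the original setup):
const express = require('express');
const app = express();

// Trust nginx's X-Forwarded-For header so req.ip is the real client address.
app.set('trust proxy', true);

app.post('/verify', (req, res) => {
  console.log(`request from ${req.ip}`); // real client IP, not 127.0.0.1
  res.sendStatus(200);
});

app.listen(3000);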
Making it work with recaptcha:
All about recaptcha is here: Google Developers
When you get the user's response token, you need to send a POST request to Google to verify it. I did it using axios. This is how to send the POST request; secret is your reCAPTCHA secret and response is the user's response token.
const axios = require('axios');

// Inside an Express route handler, where req/res, secret and response are in scope.
axios
  .post(`https://www.google.com/recaptcha/api/siteverify?secret=${secret}&response=${response}`, {}, {
    headers: {
      "Content-Type": "application/x-www-form-urlencoded; charset=utf-8"
    },
  })
  .then(async function (tokenres) {
    const {
      success, // gives a true or false value
      challenge_ts,
      hostname
    } = tokenres.data;
    if (success) {
      // Do something
    } else {
      // For fail2ban you need to build a correct timestamp.
      // There may be an easier way, but with my level of experience
      // I did it like this:
      const now = new Date();
      const tZOffset = now.getTimezoneOffset() / 60;
      const month = now.toLocaleString('en-US', { month: 'short' });
      const day = now.getUTCDate();
      const hours = now.getUTCHours() - tZOffset;
      const minutes = now.getUTCMinutes();
      const seconds = now.getUTCSeconds();
      // This is the line fail2ban will match on:
      console.log(`${month} ${day} ${hours}:${minutes}:${seconds} Captcha verification failed [${req.ip}]`);
      res.send(/* something */);
    }
  });
The timezone offset is used to set the right time. pm2 saves the console.log output in a log file at /home/youruserdir/.pm2/logs/yourappname-out.log
Now make an empty POST request. An example line for a bad request will look like this:
Oct 14 19:5:3 Captcha verification failed [IP ADDRESS]
Notice that the minutes and seconds have no leading zero, but fail2ban still recognizes them, so it's not a problem. BUT CHECK THAT THE DATE AND TIME MATCH YOUR SYSTEM TIME.
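If you prefer leading zeros anyway, here is a small sketch (not part of the original setup) that builds the same syslog-style line from local time directly, so it matches the host timezone without the manual UTC offset math:
// Hypothetical helper: zero-padded "Oct 14 19:05:03" style timestamp in local time.
function fail2banTimestamp(date = new Date()) {
  const pad = (n) => String(n).padStart(2, '0');
  const month = date.toLocaleString('en-US', { month: 'short' });
  return `${month} ${pad(date.getDate())} ${pad(date.getHours())}:${pad(date.getMinutes())}:${pad(date.getSeconds())}`;
}

console.log(`${fail2banTimestamp()} Captcha verification failed [203.0.113.7]`);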
Now make a filter file for fail2ban:
sudo nano /etc/fail2ban/filter.d/your-filter.conf
paste:
[Definition]
failregex = Captcha verification failed \[<HOST>\]
ignoreregex =
Now press Ctrl+O, Ctrl+X to save, and check whether fail2ban recognizes the error lines using the fail2ban-regex command:
fail2ban-regex /home/youruserdir/.pm2/logs/yourappname-out.log /etc/fail2ban/filter.d/your-filter.conf
Result will be:
Failregex: 38 total
|- #) [# of hits] regular expression
| 1) [38] Captcha verification failed \[<HOST>\]
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [38] {^LN-BEG}(?:DAY )?MON Day %k:Minute:Second(?:\.Microseconds)?(?: ExYear)?
`-
Lines: 42 lines, 0 ignored, 38 matched, 4 missed
[processed in 0.04 sec]
As you can see, 38 lines matched; you will have one. If you have no matches, check the pm2 log file. When I was testing on localhost, my app reported the IP address as ::127.0.0.1, which is IPv6-related and could cause a problem.
Next:
sudo nano /etc/fail2ban/jail.local
Add following block:
[Your-Jail-Name]
enabled = true
filter = your-filter
logpath = /home/YOURUSERDIR/.pm2/logs/YOUR-APP-NAME-out.log
maxretry = 5
findtime = 10m
bantime = 10m
Be sure that you wrote the filter name without the .conf extension.
In logpath, be sure to write the right user directory and log name. If a user makes 5 (maxretry) bad POST requests within 10 minutes (findtime), they will be banned for 10 minutes (bantime). You can change these values.
Now just restart nginx and fail2ban:
sudo systemctl restart nginx
sudo systemctl restart fail2ban
Afterwards, you can check whether your jail is working using:
sudo fail2ban-client status YOUR-JAIL-NAME
It will show how many matches were found and how many IPs are banned. You can find more information in the fail2ban log file:
cat /var/log/fail2ban.log
Found IPADDR - 2021-10-13 13:12:57
NOTICE [YOUR-JAIL-NAME] Ban IPADDRES
I wrote this step by step because probably only people with little experience will be looking for this. If you see mistakes or have suggestions, just comment.
I am developing our application and can successfully get the service name through the function: ServiceFound(DNSSDService sref, DNSSDFlags flags, uint ifIndex, String serviceName, String regType, String domain)
I checked Wireshark and the log; the serviceName is right.
My question:
Why can I not ping it via "serviceName.domain", e.g. ping serviceName.local? (I want to use ping to test that the network is available before running my application; right now the failed ping blocks it.)
But I can ping it via the real IP, e.g. ping 1.2.3.4 (this means the network is OK).
ServiceFound and ServiceResolved only provide serviceName.
So how can I solve this problem:
1) a simple way to get the IP, or
2) a way to make "ping serviceName.local" work?
Thanks a lot for your support in advance!
Update:
I retested it on another PC:
I used dns-sd.exe to debug the network.
The following command gets the service name:
$ dns-sd.exe -B _http._tcp
Browsing for _http._tcp
Timestamp A/R Flags if Domain Service Type Instance Name
4:33:52.663 Add 3 3 local. _http._tcp. test
The following command gets the zone file:
$ dns-sd.exe -Z _http._tcp
Browsing for _http._tcp
_http._tcp PTR Officejet\032Pro\032L7500\032[FEDCE8]._http._tcp
Officejet\032Pro\032L7500\032[FEDCE8]._http._tcp SRV 0 0 80 HPFEDCE8.local. ; Replace with unicast FQDN of target host
Officejet\032Pro\032L7500\032[FEDCE8]._http._tcp TXT ""
The following command gets the IP (based on HPFEDCE8.local. in the above output):
$ dns-sd.exe -G v4 HPFEDCE8.local.
Timestamp A/R Flags if Hostname Address TTL
4:43:38.965 Add 2 3 bej1301Dell2360.local. 10.61.20.99 240
So I can ping it through HPFEDCE8.local.
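(For reference, the instance name can also be resolved to its target host and port in one step with the -L option of the same dns-sd.exe tool; a sketch, using the printer instance from the zone output above:)
$ dns-sd.exe -L "Officejet Pro L7500 [FEDCE8]" _http._tcp local.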
But on my test PC, "$ dns-sd.exe -B _http._tcp" works, while the other commands fail.
So I think this is the root cause.
So my questions are:
As far as I know, we can use the instance name to form a hostname: test.local.
Why is it different from "HPFEDCE8.local." in the zone file?
Why does "ping HPFEDCE8.local." work while "ping test.local." fails?
Do you have any other ideas for my test PC?
Thanks a lot!!
How do I programmatically determine the WHOIS server for a given TLD?
For name servers, I just query a.root-servers.net
Is there an equivalent procedure for WHOIS?
I know "host -t ns xxx." yields the DNS for a TLD: can the WHOIS
server be derived from that result?
It's in the SRV record _nicname._tcp.tld. For example:
# dig +short SRV _nicname._tcp.no
0 0 43 whois.norid.no.
More information can be found in the Wikipedia article on WHOIS.
That works for some TLDs at least, but not for .com.
tld.whois-servers.net is a commonly used alias that should point to a valid WHOIS server, for example com.whois-servers.net.
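For example (a sketch; the exact target host depends on the registry, so check the output yourself):
# Resolve the alias for .com to find the registry's WHOIS host:
dig +short com.whois-servers.net
# Then query it directly:
whois -h com.whois-servers.net example.com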